The Enshittification of Generative AI

Edward Zitron

Thanks for subscribing to Where’s Your Ed At Premium, please shoot me an email at ez@betteroffline.com if you ever have any questions.

Yesterday, OpenAI launched GPT-5, a new “flagship” model of some sort that’s allegedly better at coding and writing, but upon closer inspection it feels like the same old shit it’s been shoveling for the last year or two. 

Sure, I’m being dismissive, but three years and multiple half-billion-dollar training runs later, OpenAI has delivered a model that is some indeterminate level of “better,” one that “scared” Sam Altman, and the company immediately committed what some Twitter users called “chart crimes” with its coding benchmark charts. 

This also raises the question: what is GPT-5? WIRED calls it a “flagship language model,” but OpenAI itself calls it a “unified system with a smart, efficient model that answers most questions, a deeper reasoning model, and a real-time router that quickly decides which [model] to use based on conversation type, complexity, tool needs, and your explicit intent.” That sure sounds like two models to me, and not necessarily new ones! Altman, back in February, said that GPT-5 was “a system that integrates a lot of our technology, including o3.”

It is a little unclear what GPT-5 — or at least the one accessed through ChatGPT — is. According to Simon Willison, there are three sub-models — regular, mini, and nano — “which can each be run at one of four reasoning levels” if you configure them using the API.
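To make the combinatorics concrete, here is a minimal sketch of the model/reasoning matrix Willison describes. The model names and the `reasoning.effort` parameter are assumptions based on OpenAI’s public API conventions, not something verified against its documentation — treat this as illustrative, and note it only builds request bodies rather than calling the API:

```python
# Sketch of the GPT-5 sub-model / reasoning-level matrix: three sub-models,
# each runnable at one of four reasoning levels via the API.
# Names and the "reasoning.effort" field are assumed, not authoritative.

SUB_MODELS = ["gpt-5", "gpt-5-mini", "gpt-5-nano"]
REASONING_LEVELS = ["minimal", "low", "medium", "high"]

def build_request(model: str, effort: str, prompt: str) -> dict:
    """Assemble a request body for one (model, effort) combination."""
    if model not in SUB_MODELS or effort not in REASONING_LEVELS:
        raise ValueError(f"unknown model/effort: {model}/{effort}")
    return {
        "model": model,
        "reasoning": {"effort": effort},
        "input": prompt,
    }

# Twelve distinct configurations (3 sub-models x 4 levels) — none of which
# a ChatGPT subscriber can pick directly anymore.
configs = [
    build_request(m, e, "hello")
    for m in SUB_MODELS
    for e in REASONING_LEVELS
]
print(len(configs))  # 12
```

The point of the sketch: API users get twelve explicit configurations, while ChatGPT users get a router choosing among them on their behalf.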

When it comes to what you access on ChatGPT, however, you’ve got two options — GPT-5 and GPT-5-Thinking, with the entire previous generation of GPT models no longer available for most users to access.

I believe GPT-5 is part of a larger process happening in generative AI — enshittification, Cory Doctorow’s term for when platforms start out burning money offering an unlimited, unguarded experience to attract their users, then degrade and move features to higher tiers as a means of draining the blood from users. 

With the launch of GPT-5, OpenAI has fully committed to enshittifying its consumer and business subscription products, arbitrarily moving free users to a cheaper model and limiting their ability to generate images, and removing the ability to choose which model you use in its $20, $35 and “enterprise” subscriptions, moving any and all choice to its “team” and $200-a-month “pro” subscriptions. 

OpenAI’s justification is an exercise in faux-altruism, framing “taking away all choice” as a “real-time router that quickly decides which [model] to use.” ChatGPT Plus and Team members now mostly have access to two models — GPT-5 and GPT-5-Thinking — down from the six they had before. 

This distinction is quite significant. Where users once could send hundreds of messages a day to OpenAI’s o4-mini-high and o4-mini reasoning models, ChatGPT Plus subscribers now get 200 reasoning (GPT-5-Thinking) messages a week, plus 80 GPT-5 messages every three hours, which allow you to ask it to “think” about its answer, shoving you over to an undisclosed reasoning model. This may seem like a good deal, but OpenAI is likely putting you on the cheapest model whenever it can in the name of “the best choice.”
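Some back-of-the-envelope arithmetic on those caps, assuming (generously) that the three-hour GPT-5 window refills around the clock:

```python
# Rough arithmetic on the new ChatGPT Plus limits described above.
HOURS_PER_WEEK = 24 * 7  # 168

gpt5_per_window = 80   # GPT-5 messages per 3-hour window
window_hours = 3
weekly_gpt5_cap = gpt5_per_window * (HOURS_PER_WEEK // window_hours)

weekly_thinking_cap = 200              # GPT-5-Thinking messages per week
daily_thinking_cap = weekly_thinking_cap / 7  # what that works out to per day

print(weekly_gpt5_cap)                 # 4480 non-reasoning messages a week
print(round(daily_thinking_cap, 1))    # ~28.6 reasoning messages a day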

While Team accounts have “unlimited” access to GPT-5, they still face the same 200-reasoning-messages-a-week limit, and while yes, you could ask it to “think” more, do you think that OpenAI is going to give you their best reasoning models? Or will they, as they said, “bring together the best of their previous models” and “choose the right one for the job”?

Furthermore, on August 14th, OpenAI is permanently sunsetting ChatGPT access to every model that doesn’t start with GPT-5, except for customers of its most expensive subscription tier. OpenAI will (and it appears this applies to the $200-a-month "Pro" plan too, I'm told by reporter Joanna Stern) reduce your model options to two or three choices (Chat, Thinking and Pro), and will choose whatever sub-model it sees fit in the most opaque way possible. GPT-5 is, by definition, a “trust me bro” product. 

OpenAI is trying to reduce the burden of any particular user on the system under the guise of providing the “smartest, fastest model,” with “smartest” defined internally in a way that benefits the company, marketed as “choosing the best model for the job.” 

Let's see how users feel! An intrepid Better Offline listener pulled together some snippets from r/ChatGPT, where users are mourning the loss of GPT-4o, furious at the loss of other models, and calling GPT-5, in one case, "the biggest peice (sic) of garbage even as a paid user," with the same user saying that "projects are absolutely brain-dead now." One user said that GPT-5 is "the biggest bait-and-switch in AI history," another said that OpenAI "deleted a workfow (sic) of 8 models overnight, with no prior warning," and another said that "ChatGPT 5 is the worst model ever." In fact, there are so many of these posts that I could find posts to link to for every word of this paragraph in under five minutes.

Yet OpenAI isn’t just screwing over consumers. Developers who want to integrate OpenAI’s models now have access to “priority processing” — previously an enterprise-only feature (see this archive from July 21st 2025) to guarantee low latency and uptime. While this sounds altruistic, or like a beneficial new feature, I’m not convinced. I believe there’s only one reason to do this: that OpenAI intends to, or will be forced to due to capacity constraints, start degrading access to its API. 

As with every model developer, we have no real understanding of what may or may not lead to needing “reliable, high-speed performance” from API access, but the suggestion here is that failing to pay OpenAI’s troll toll will put your API access in the hole. That toll is harsh, too, nearly doubling the API price on each model, and while the Priority Processing Page has pricing for all manner of models, its pricing page reduces the options down to two models — GPT-5 and GPT-5-mini, suggesting it may not intend to provide priority access in perpetuity. 

OpenAI is far from alone in turning the screws on its customers. As I’ll explain, effectively every consumer generative AI company has started some sort of $200-a-month “pro” plan — Perplexity Max, Gemini ($249.99 a month before discounts), Cursor Ultra, Grok Heavy (which is $300 a month!), and, of course, Anthropic, whose $100-a-month and $200-a-month plans allowed Claude Code users to spend anywhere from 100% to 10,000% of their monthly subscription in API calls. This led to rate limits starting August 28, 2025 — a conveniently placed date that allows Anthropic to close as much as $5 billion in funding before its users churn. 

Worse still, Anthropic burned all of that cash to get Claude Code to $400 million in annualized revenue, according to The Information — around $33 million in monthly revenue that will almost certainly evaporate as its customers hit week-long rate limits on a product that’s billed monthly. 

These are not plans created for “power users.” They are the actual price points these products need to hit to be remotely sustainable, though Sam Altman said earlier in the year that even ChatGPT Pro’s $200-a-month subscription was losing OpenAI money. And with GPT-5, meaningful functionality — the ability to choose the specific model you want for a task — is being completely removed for ChatGPT Plus and Team subscribers.

This is part of an industry-wide enshittification of generative AI, where the abominable burn rates behind these products are forcing these companies to take measures ranging from minor to drastic. 

The problem, however, is that these businesses have yet to establish truly essential products, and even when they create something popular — like Claude Code — that popularity comes only from burning horrendous amounts of cash. The same goes for Cursor, and I believe just about every other major product built on top of Large Language Models. And I believe that when they try to adjust pricing to reflect their actual costs, that popularity will begin to wane. I believe we’re already seeing that with Claude Code, based on the sentiment I’ve seen on the tool’s Reddit page, although I’m also wary of making any sweeping statements right now, as it’s just too early to say. 

The great enshittification of AI has begun.
