Anthropic Is Bleeding Out

Edward Zitron

Hello premium customers! Feel free to get in touch at ez@betteroffline.com if you're ever feeling chatty. And if you're not one yet, please subscribe and support my independent brain madness.

Also, thank you to Kasey Kagawa for helping with the maths on this.

Soundtrack: Killer Be Killed - Melting Of My Marrow


Earlier in the week, I put out a piece about how Anthropic had begun cranking up prices on its enterprise customers, most notably Cursor, a startup with $500 million in Annualised Recurring Revenue (meaning its monthly revenue multiplied by twelve) that is also Anthropic’s largest customer for API access to models like Claude Sonnet 4 and Opus 4.

As a result, Cursor had to make massive changes to the business model that had let it grow so large in the first place, replacing (on June 17 2025, a few weeks after Anthropic’s May 22 launch of its Claude Opus 4 and Sonnet 4 models) a relatively limitless $20-a-month offering with a much-more-limited $20-a-month package and a less-limited-but-still-worse-than-the-old-$20-tier $200-a-month subscription, pissing off customers and leading to much of the Cursor subreddit filling up with people complaining or saying they’d cancel their subscriptions.

Though I recommend you go and read the previous analysis, the long and short of it is that Anthropic raised costs on its largest customer — a coding startup — about eight days (on May 30 2025) after launching two models (Sonnet 4 and Claude Opus 4) specifically dedicated to coding.

I concluded with the following:

What I have described in this newsletter is one of the most dramatic and aggressive price increases in the history of software, with effectively no historical comparison. No infrastructure provider in the history of Silicon Valley has so distinctly and aggressively upped its prices on customers, let alone their largest and most prominent ones, and doing so is an act of desperation that suggests fundamental weaknesses in their business models.

Worse still, these changes will begin to kneecap an already-shaky enterprise revenue story for two companies desperate to maintain one. OpenAI's priority pricing is basic rent-seeking, jacking up prices to guarantee access. Anthropic's pricing changes are intentional, mob-like attempts to increase revenue by hitting its most-active customers exactly where it hurts, launching a model for coding startups to integrate that’s specifically priced to increase costs on enterprise coding startups.

But the whole time I kept coming back to a question: why, exactly, would Anthropic do this? Was this rent-seeking? A desperate attempt to boost revenue? An attempt to bring its largest customer’s compute demands under control as it regularly pushed Anthropic’s capacity to the limit?

Or, perhaps, it was a little simpler: was Anthropic having its own issues with capacity, and maybe even cash flow?

Another announcement happened on May 22 2025 — Anthropic launched Claude Code, a tool that runs directly in your terminal (or integrates into your IDE) and uses Anthropic’s Claude models to write and manage code. This is, I realize, a bit of an oversimplification, but the actual efficacy or ability of Claude Code is largely irrelevant here beyond the sheer amount of cloud compute it requires.

As a reminder, Anthropic also launched its Claude Sonnet 4 and Opus 4 models on May 22 2025, shortly followed by its Service Tiers, and then both Cursor and vibe-coding startup Replit’s price changes, which I covered last week. These are not the moves of a company brimming with confidence about its infrastructure or financial position, which made me want to work out why things might have got more expensive.

And then I found out, and it was really, really fucking bad.

Claude Code, as a product, is quite popular, along with its Sonnet 4 and Opus 4 models. It’s accessible via Anthropic’s $20-a-month “Pro” subscription (but only using the Claude Sonnet 4 model), or the $100 (5x the usage of Pro) and $200 (20x the usage of Pro) “Max” subscriptions. While people hit rate limits, they seem to be getting a lot out of using it, to the point that you have people on Reddit boasting about running eight parallel instances of Claude Code.

Something to know about software engineers is that they’re animals, and I mean that with respect. If something can be automated, a software engineer is at the very least going to take a look at automating it, and Claude Code, when poked in the right way, can automate a lot of things, though to what quality level or success rate I have no real idea. And while there are limits and restrictions, software engineers absolutely fucking love testing limits and getting around restrictions, and many I know see them as challenges to overcome.

As a result, software engineers are running Claude Code at full tilt, to the limits, to the point that some set alarms to wake up during the night when their limits reset after five hours, and build specialised dashboards to help them maximize their usage. One other note: Claude Code creates detailed logs of the number of input and output tokens it uses throughout the day to complete its tasks, including whether those tokens were written to or read from the cache.

Software engineers also love numbers, and they also love deals, and thus somebody created CCusage, a tool that, using those logs, allows Claude Code users to see exactly how much compute they’re burning through on their subscription, even though Anthropic is only charging them $20, $100 or $200 a month. CCusage compares these logs (which contain both how tokens were used and which models were run) to the up-to-date information from Anthropic’s API prices, and tells you exactly how much you’ve spent in compute (here’s a more detailed run-down).
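To give you a sense of the arithmetic involved, here is a rough sketch of what a tool like CCusage does. This is not its actual code, and the log fields and per-million-token prices below are illustrative assumptions rather than Anthropic’s published schema or current rates; the point is simply that the estimate boils down to summing token counts from Claude Code’s local logs and multiplying them by API list prices.

```python
# Rough sketch of the kind of arithmetic a tool like CCusage performs.
# The log field names and the price table are illustrative assumptions,
# not Anthropic's actual schema or current pricing.
import json
from pathlib import Path

# Hypothetical per-million-token prices (USD) by model and token type.
PRICES = {
    "claude-opus-4": {"input": 15.00, "output": 75.00, "cache_read": 1.50, "cache_write": 18.75},
    "claude-sonnet-4": {"input": 3.00, "output": 15.00, "cache_read": 0.30, "cache_write": 3.75},
}

def estimate_spend(log_dir: str) -> float:
    """Walk JSONL usage logs and total the implied API cost in dollars."""
    total = 0.0
    for log_file in Path(log_dir).glob("**/*.jsonl"):
        for line in log_file.read_text().splitlines():
            try:
                entry = json.loads(line)
            except json.JSONDecodeError:
                continue
            prices = PRICES.get(entry.get("model", ""))
            usage = entry.get("usage", {})
            if not prices or not usage:
                continue
            # Token counts are raw; prices are per million tokens.
            total += usage.get("input_tokens", 0) / 1e6 * prices["input"]
            total += usage.get("output_tokens", 0) / 1e6 * prices["output"]
            total += usage.get("cache_read_tokens", 0) / 1e6 * prices["cache_read"]
            total += usage.get("cache_write_tokens", 0) / 1e6 * prices["cache_write"]
    return total

if __name__ == "__main__":
    # Point this at wherever Claude Code keeps its usage logs on your machine.
    print(f"Implied API spend: ${estimate_spend('./claude-logs'):,.2f}")
```

None of this is hidden, in other words: the logs sit on your machine, the API prices are public, and the rest is multiplication.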

In simpler terms, CCusage is a relatively accurate barometer of how much you are costing Anthropic at any given time, with the understanding that its costs may (we truly have no idea) be lower than the API prices it charges, though I’d add that, given Anthropic is expected to lose $3 billion this year (that’s after revenue!), there’s a chance it’s actually losing money on every API call.

Nevertheless, there’s one much, much, much bigger problem: Anthropic is very likely losing money on every single Claude Code customer, and based on my analysis, appears to be losing hundreds or even thousands of dollars per customer.

There is a gaping wound in the side of Anthropic, and it threatens financial doom for the company.


Some caveats before we continue:

  • CCusage is not direct information from Anthropic, and thus there may be things we don’t know about how Anthropic charges customers, or any efficiencies it may have.
  • Despite the amount of evidence I’ve found, we do not have a representative sample of exact pricing. This evidence comes from people who use Claude Code, are measuring their usage, and elected to post their CCusage dashboards online — which likely represents a small sample of the total user base. 
  • Nevertheless, the number of cases I’ve found online of egregious, unrelentingly unprofitable burn is deeply concerning, and it’s hard to imagine that these examples are outliers.
  • We do not know if the current, unrestricted version of Claude Code will last.

The reason I’m leading with these caveats is that the numbers I’ve found about the sheer amount of money Claude Code’s users are burning are absolutely shocking.

In the event that they are representative of the greater picture of Anthropic’s customer base, this company is wilfully burning 200% to 3,000% of what each Pro or Max customer that interacts with Claude Code pays it, and at every price point I have found repeated evidence that customers are allowed to burn their entire monthly payment in compute within, at best, eight days, with some cases involving customers on a $200-a-month subscription burning as much as $10,000 worth of compute.

Sidenote: While researching this piece, I sent my editor, Matt Hughes, £20 and told him to create an Anthropic Pro account and install Claude Code, with the aim of seeing how much of Anthropic’s money he could burn over the course of an hour or so.

Matt, a developer-turned-journalist, told Claude Code to build the scaffolding for a browser-based game using the phaser.js library — a simple, incredibly accessible tool for creating HTML5 games. Just creating that scaffolding ended up burning around $2.50 worth of compute, and that was over the course of just an hour or so.

It’s easy to see how someone using Claude Code for their job, or to build a side project, could end up burning far more money than they paid for their package. If Matt’s rate of roughly $2.50 an hour is anything to go by, a developer working eight-hour days would burn through the entire $20 Pro subscription’s worth of compute in a single day.

Furthermore, it’s important to know that Anthropic’s models are synonymous with using generative AI to write or manage code. Anthropic’s models make up more than half of all tokens related to programming that flow through OpenRouter, a popular unified API for integrating models into services, and currently lead on major LLM coding benchmarks. Much of Cursor’s downfall has come from integrating Sonnet 4 and Opus 4, in particular the expensive (and I imagine compute-intensive) Opus 4 model, which Anthropic only allows users of its $100-a-month or $200-a-month “Max” subscriptions to access.

In my research, I have found more than thirty different reported instances of users who have burned compute worth at least 100% more than what they pay Anthropic. For the sake of your sanity, I will split them up by subscription tier, and by how much more than its price they spent.
