We’re approaching the most ridiculous part of the AI bubble, with each day bringing a new, disgraceful, and weird headline. As I reported earlier in the week, OpenAI spent $12.4 billion on inference between 2024 and September 2025, and its revenue share with Microsoft strongly suggests it made at least $2.469 billion in 2024 (when reports had OpenAI at $3.7 billion for the year). The only revenue missing from that figure, to my knowledge, is the 20% Microsoft shares with OpenAI when it sells OpenAI’s models on Azure, and whatever cut Microsoft gives OpenAI from Bing.
Nevertheless, the gap between the reported figures and what the documents I’ve seen show is dramatic. Despite reports that OpenAI made $4.3 billion in revenue in the first half of 2025 on $2.5 billion of “cost of revenue,” what I’ve seen shows that OpenAI spent $5.022 billion on inference (the process of creating an output using a model) in that period, and made at least $2.2735 billion. I am, of course, hedging aggressively, but I can find no explanation for the gaps.
I also can’t find an explanation for why Sam Altman said that OpenAI was “profitable on inference” in August 2025, nor how OpenAI will hit “$20 billion in annualized revenue” by the end of 2025, nor how OpenAI will do “well more” than $13 billion this year. Perhaps there’s a chance that, for some 30-day period of this year, OpenAI hits $1.66 billion in revenue (AKA $20 billion annualized), but even that would leave it short of its stated target revenue.
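For clarity, here’s the arithmetic behind that “annualized” framing, as a minimal sketch (the $20 billion and $1.66 billion figures are the ones cited above; everything else is just multiplication):

```python
# "Annualized revenue" (a run rate) takes one month's revenue and
# multiplies it by 12. So the question is: what single-month revenue
# would "annualize" to the $20 billion target?
annualized_target = 20e9
monthly_needed = annualized_target / 12  # revenue needed in one 30-day window
print(f"${monthly_needed / 1e9:.2f}B per month")  # ≈ $1.67B

# Note that actual full-year revenue is the sum of all twelve months,
# which can be far lower than twelve times the single best month.
```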
The very same day I ran that piece, somebody posted a clip of Microsoft CEO Satya Nadella, who had this to say when asked about recent revenue projections from AI labs:
"What do you expect an independent lab that is trying to raise money to do? They have to put some numbers out there such that they can actually go raise money so that they can pay their bills for compute."
I don’t know, Satya, how about not fucking making shit up? Not embellishing? Is it too much to ask that these companies make projections that adhere to reality, rather than whatever an investor wants to hear, or projections that perpetuate a myth of inevitability while flying in the face of reality?
I get that in any investment scenario you want to sell a story, but the idea that the CEO of a company with a $3.8 trillion market cap is sitting around saying “what do you expect them to do, tell the truth? They need money for compute!” is fucking disgraceful.
No, I do not believe a company should make overblown revenue projections, nor do I think it’s good for the CEO of Microsoft to encourage the practice. I also seriously have to ask why Nadella believes that this is happening, and, indeed, who he might be specifically talking about, as Microsoft has particularly good insights into OpenAI’s current and future financial health.
However, because Nadella was talking in generalities, this could refer to Anthropic, and that kinda makes sense, because Anthropic was just the subject of near-identical articles about its costs from both The Information and The Wall Street Journal. The Information said that Anthropic “projected a positive free cash flow as soon as 2027,” and the Wall Street Journal said that Anthropic “anticipates breaking even by 2028,” with both pieces featuring the cash burn projections of both OpenAI and Anthropic based on “documents” or “investor projections” shared this summer.
Both pieces focus on free cash flow, both pieces focus on revenue, and both pieces say that OpenAI is spending way more than Anthropic, and that Anthropic is on the path to profitability.
The Information also includes a graph involving Anthropic’s current and projected gross margins, with the company somehow hitting 75% gross margins by 2028.
How does any of this happen? Nobody seems to know!
Per The Journal:
Anthropic then becomes a much more efficient business. In 2026, it forecasts dropping its cash burn to roughly one-third of revenue, compared with 57% for OpenAI. Anthropic’s burn rate falls further to 9% in 2027, while it stays the same for OpenAI.
…hhhhooowwwww?????
I’m serious! How?
The Information tries to answer:
Anthropic leaders also claim their company’s use of three different types of AI server chips—made by Nvidia, Google and Amazon, respectively—has helped their models operate more efficiently, according to an employee and another person with knowledge of the company’s plans. Anthropic assigns tasks to different chips depending on what each does best, according to one of the people.
Is…that the case? Are there any kind of numbers to back this up? Because Business Insider just ran a piece covering documents in which startups claimed that Amazon’s chips had “performance challenges,” were “plagued by frequent service disruptions,” and “underperformed” NVIDIA H100 GPUs on latency, making them “less competitive” in terms of speed and cost. One startup “found Nvidia's older A100 GPUs to be as much as three times more cost-efficient than AWS's Inferentia 2 chips for certain workloads,” and a research group called AI Singapore “determined that AWS’s G6 servers, equipped with NVIDIA GPUs, offered better cost performance than Inferentia 2 across multiple use cases.”
I’m not trying to dunk on The Wall Street Journal or The Information, as both are reporting what is in front of them. I just kind of wish somebody there would say “huh, is this true?” or “will they actually do that?” a little more loudly, perhaps drawing on their own previous reporting.
For example, in January 2024, The Information reported that Anthropic’s gross margin as of December 2023 was between 50% and 55%; in September 2024, CNBC stated that Anthropic’s “aggregate” gross margin was 38%; and then it turned out that Anthropic’s 2024 gross margins were actually negative 109% (or negative 94% if you only count paying customers), according to The Information’s November 2025 reporting.
In fact, Anthropic’s gross margin appears to be a moving target. In July 2025, The Information was told by sources that “Anthropic recently told investors its gross profit margin from selling its AI models and Claude chatbot directly to customers was roughly 60% and is moving toward 70%,” only for it to publish a few months later (in its November piece) that Anthropic’s 2025 gross margin would be…47%, and would hit 63% in 2026. Huh?
I’m not bagging on these outlets. Everybody reports from the documents they get or what their sources tell them, and any piece you write comes with the risk that things could change, as they regularly do in running any kind of business. That being said, the gulf between “38%” and “negative 109%” gross margins is pretty fucking large, and suggests that whatever Anthropic is sharing with investors (I assume) is either so rapidly changing that giving a number is foolish, or made up on the spot as a means of pretending you have a functional business.
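To make that gulf concrete: gross margin is just revenue minus cost of revenue, divided by revenue, so a negative 109% figure means cost of revenue was roughly 2.09 times revenue. A minimal sketch, using a hypothetical $1 billion of revenue (the ratios, not the dollar amounts, come from the reporting above):

```python
# Gross margin = (revenue - cost_of_revenue) / revenue.
def gross_margin(revenue: float, cost_of_revenue: float) -> float:
    return (revenue - cost_of_revenue) / revenue

# Hypothetical $1B of revenue; a -109% margin implies costs of ~2.09x revenue.
print(f"{gross_margin(1.00e9, 2.09e9):.0%}")  # prints "-109%"

# Compare the 38% figure CNBC reported, which would imply
# costs of only 0.62x revenue.
print(f"{gross_margin(1.00e9, 0.62e9):.0%}")  # prints "38%"
```

The point of the sketch is that these two reported figures describe businesses with wildly different cost structures, not a rounding disagreement.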
I’ll put it a little more simply: it appears that much of the AI bubble is inflated on vibes, and I’m a little worried that the media is being too helpful. These companies have yet to prove themselves in any tangible way, and it’s time for somebody to give a frank evaluation of where we stand.
If I’m honest, a lot of this piece will be venting, because I am frustrated.
When all of this collapses, there will, I guarantee, be multiple startups that have outright lied to the media, and done so, in some cases, in ways that are equal parts obvious and brazen. My own work has received significantly more skepticism than OpenAI or Anthropic, two companies allegedly worth billions of dollars that appear to change their story with an aloof confidence born of the knowledge that nobody reads or thinks too deeply about what their CEOs have to say, other than “wow, Anthropic said a new number!”
So I’m going to do my best to write about every single major AI company in one go. I am going to pull together everything I can find and give a frank evaluation of what they do, where they stand, their revenues, their funding situation, and, well, whatever else I feel about them.
And honestly, I think we’re approaching the end. The Information recently published one of the grimmest quotes I’ve seen in the bubble so far:
Some researchers are trying to take advantage of high investor interest in AI. They have told some investors that growing concerns regarding the costs and benefits of AI have prompted them to raise a lot of money now rather than wait and risk a shift in the capital markets, according to people who talked with them.
Hey, what was that? What was that about “growing concerns regarding the costs and benefits of AI”? What “shift in the capital markets”? The fucking companies are telling you, to your face, that they know there’s not a sustainable business model or a great use case, and you are printing it and giving it the god damn thumbs up.
How can you not be a hater at this point? This industry is loathsome, its products ranging from useless to niche at best, its costs unsustainable, and its future full of fire and brimstone.
This is the Hater’s Guide To The AI Bubble Volume 2, a premium sequel to the Hater’s Guide from earlier this year, where I will finally bring some clarity to a hype cycle that has yet to prove its worth, breaking down, industry by industry and company by company, the financial picture, relative success, and potential future of the companies that matter.
Let’s get to it.