Am I Meant To Be Impressed?

Ed Zitron 38 min read

If you liked this piece, please subscribe to my premium newsletter. It’s $70 a year, or $7 a month, and in return you get a weekly newsletter that’s usually anywhere from 5,000 to 18,000 words, including vast, detailed analyses of NVIDIA, Anthropic and OpenAI’s finances, and the AI bubble writ large.

I just published a lengthy discussion about how OpenAI and Anthropic make up 70%+ of all AI GPU compute capacity and revenue. The previous week I wrote about how OpenAI will kill Oracle — and quite possibly Larry Ellison’s personal fortune, too.

Subscribing to premium is both great value and makes it possible to write these large, deeply-researched free pieces every week. 


God, it’s been a long few years, and only feels longer after every ecstatic, ridiculous round of tech earnings where the world’s largest companies do everything they can to obfuscate the ugly truth behind their numbers.

Let’s start with the biggest, ugliest one: Microsoft, Google, Amazon, and Meta are expected to spend between $800 billion and $900 billion on AI capex in 2026, and over $1 trillion in 2027.

By the end of 2027, big tech will have sunk $2 trillion into AI capex, with very little to show for it.

Oh, I know what you’re going to say. “These companies are growing faster than ever!” “These companies are building for future revenue streams!” “These companies are saying that AI is driving growth!” 

Yet those revenues are, in the case of Meta and Google, not good enough to actually share. 

While Google CEO Sundar Pichai will gladly say that “[Google’s] AI investments and full stack approach are lighting up every part of the business,” said “lighting up” never results in a revenue number that you can point at, because Google knows that analysts and journalists will read “Gemini Enterprise has great momentum with 40% quarter on quarter growth” — which we have no frame of reference for because Google doesn’t share its AI revenues — and clap and honk like fucking seals. Sundar Pichai knows that everybody is desperate to see him jingle his keys, and has such utter contempt for reporters, analysts, and investors that he doesn’t have to prove AI is actually doing anything. Those writing up his earnings will do it for him. 

Meta, on the other hand, has little real AI story, and can’t even seem to get its metrics straight on what AI is doing for the company, per my premium piece from earlier in the week:

People desperate to try and prove that AI matters will claim that Meta’s GEM (Meta’s generative ads model) led to a 5% increase in ad conversions on Instagram and a 3% increase in ad conversions on Facebook feed in Q2 2025.

This is an impressive-sounding stat that doesn’t actually connect to any meaningful revenue information, especially when Meta announced in January 2026 that doubling GEM’s compute allowed it to drive a 3.5% lift in ad clicks (a different measurement) on Facebook and “more than a 1% gain in conversions on Instagram” in Q4 2025, which is…4% lower.

Nevertheless, I have to give Microsoft and Amazon credit for deeming us worthy of actual numbers, even if they’re piss poor.

AI Revenues Are Pathetic and Circular, With OpenAI Representing 71%+ Of Microsoft’s AI Run Rate and Anthropic 80% of Amazon’s

While Meta and Google refuse to actually explain their AI returns, Microsoft revealed that it had $37 billion in AI revenue run rate — $3.08 billion a month or so — and Amazon had $15 billion, or around $1.25 billion a month.

And I must be clear, that’s revenue, not profit.

In any case, I need you to recognize how small these numbers are in comparison to the capex it’s taken to make them. 

To give you some context, Amazon’s monthly AI revenue is roughly 0.419% of the $298 billion it has spent on AI capex so far, or around 25% of the $5 billion it just invested in Anthropic last week. Microsoft, on the other hand, has spent $293.8 billion on AI capex through its latest quarter, making its monthly AI revenue around 1.05% of its spend.
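If you want to check those ratios yourself, here’s the arithmetic (dollar figures as cited above; the percentages compare monthly revenue against cumulative capex):

```python
# Sanity-checking the capex-to-revenue ratios cited above.
# All dollar figures are from the article; the arithmetic is mine.

amazon_run_rate = 15e9                 # Amazon's AI revenue run rate
amazon_monthly = amazon_run_rate / 12  # ~$1.25B a month
amazon_capex = 298.3e9                 # Amazon's cumulative AI capex

# Monthly AI revenue as a share of total AI capex
amazon_ratio = amazon_monthly / amazon_capex
print(f"Amazon: {amazon_ratio:.3%}")   # roughly 0.419%

microsoft_run_rate = 37e9
microsoft_monthly = microsoft_run_rate / 12  # ~$3.08B a month
microsoft_capex = 293.8e9
microsoft_ratio = microsoft_monthly / microsoft_capex
print(f"Microsoft: {microsoft_ratio:.3%}")   # roughly 1.049%
```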

These revenues are deeply embarrassing! I am not sure why this isn’t the common refrain! These fucknuts have spent over a trillion dollars on AI and all they have to show for it is either nothing, vague statements about “everything lifting because of AI,” or pathetic revenues that only get worse the more you think about them. 

OpenAI Represents 70%+ Of Microsoft’s AI Revenue and 80%+ Of Its AI GPU Compute Capacity, Creating The Illusion Of Growth That’s Dependent On A Company That Will Lose $25 Billion+ In 2026

For example: even if Microsoft were to make $37 billion in AI revenue in 2026 (remember, that $37 billion run rate is a snapshot in time!), that would still be $500 million less than the $37.5 billion it spent in capital expenditures in the fourth quarter of 2025.

Yet things actually get worse when you think about the sources of that revenue, or perhaps I should say source, as both Microsoft and Amazon (and I’d argue Google too, but we don’t know its AI revenues) are heavily-dependent on their large, unsustainable sons — Anthropic and OpenAI.

I’ll explain. Microsoft claims that its $37 billion AI revenue run rate has grown by 123% year-over-year, which means its run rate (not its actual 2025 AI revenue) was about $16.59 billion in Q3 FY25: around $1.38 billion a month, or about $4.14 billion for the quarter if you assume that number held steady across it (it likely didn’t). Based on my own reporting from direct Azure revenue numbers, this would make OpenAI’s $2.947 billion in inference spend that quarter (around $11.7 billion annualized) roughly 71% of Microsoft’s Q3 FY2025 AI revenue run rate. That’s embarrassing! 
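The back-calculation works like this (a quick sketch using the figures above; expect slight rounding):

```python
# Back out Microsoft's year-ago AI run rate from the claimed 123% growth,
# then compute OpenAI's share of it. Figures from the article; math is mine.

current_run_rate = 37e9            # Microsoft's stated AI run rate
growth = 1.23                      # 123% year-over-year growth
prior_run_rate = current_run_rate / (1 + growth)  # ~$16.59B

openai_inference_quarter = 2.947e9                # OpenAI's Azure inference spend, Q3 FY25
openai_annualized = openai_inference_quarter * 4  # ~$11.79B

share = openai_annualized / prior_run_rate
print(f"Prior run rate: ${prior_run_rate/1e9:.2f}B, OpenAI share: {share:.0%}")
```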

Oh, and capital expenditures for that quarter were $21.4 billion, or around $4.81 billion more than its annualized revenue. 

Yet my reporting helps us be a little more annoying than that. Back in January 2025 — around Microsoft’s Q2FY2025 earnings — it announced that its AI revenue run rate had hit $13 billion, or around $1.083 billion a month (or $3.25bn a quarter or so). In that same quarter, OpenAI had spent $2.075 billion on inference on Azure, or 63.8% of Microsoft’s AI run rate.

This is particularly funny when you go back to the quarter before, where Microsoft CEO Satya Nadella low-balled that figure, claiming it would be $10 billion in annualized run rate, and specifically said the following:

"It's all inference," he said. "One of the things that may not be as evident is that we're not actually selling raw GPUs for other people to train."

The CEO added that the company is turning away requests to use their GPUs for training "because we have so much demand on inference."

That’s…not really what happened.

Today I can report, based on discussions with sources with direct knowledge of Azure revenue, that in Q2 FY2025, Microsoft brought in around $325.2 million in revenue via renting out GPUs and other AI infrastructure, and around $367 million in revenue from Microsoft 365 Copilot, or less than half of the $1.467 billion that OpenAI spent on inference. 

If you’re curious, the next quarter (Q3 FY2025), AI infrastructure brought in around $412 million, and Microsoft 365 Copilot brought in around $300 million. 

While my sourcing for Azure revenues cuts off at Q3 FY2025, my OpenAI inference and revenue share data goes out a further two quarters to Q4 FY2025 and Q1 FY2026 (so Q2 and Q3 of the calendar year 2025), as well as half of Q2FY2026, and we can make some fairly straightforward estimates as a result.

So, based on my reporting, OpenAI spent $3.648 billion on inference on Microsoft Azure in the third quarter of 2025, or around $14.4 billion on an annualized basis. While I only had half the fourth quarter’s numbers, I estimate that OpenAI’s annualized spend hit over $18.5 billion (around $4.6 billion a quarter) by the end of the year, and that’s not accounting for things like Sora 2 or the launch of its Codex coding platform. In total, this puts OpenAI’s 2025 spend at an estimated $13 billion on Azure just for inference, with billions more on training.

Sidenote: If you work at Microsoft Azure and want to talk to me about these numbers, my Signal is ezitron.76. 

Yet Microsoft Azure isn’t the only place that Microsoft gets fed revenue from OpenAI.

Microsoft also accounted for 67% of CoreWeave’s $5.15 billion in 2025 revenue (around $3.45 billion), all of which is used by OpenAI. I also believe this capacity is used for OpenAI’s training compute, as CoreWeave’s announcement of its direct deal with OpenAI specifically said it was contracted “...to power the training of [OpenAI’s] most advanced next-generation models,” and said capacity was only available because Microsoft declined to extend its existing agreement to use that compute for OpenAI.

Altogether, that puts OpenAI’s spend on Microsoft services at over $18 billion in 2025, and it’s easy to see how that would grow to over $24 billion on an annualized basis in the last quarter, or around $2 billion a month. Microsoft is OpenAI’s primary cloud provider, and I estimate that OpenAI represents around 70% of Microsoft’s AI revenue, while taking up the majority of its infrastructure. By comparison, Microsoft’s 20 million Copilot 365 subscribers likely pay no more than $7 billion a year.
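As a rough tally of that $18 billion figure (the Azure and CoreWeave numbers are the ones cited above; the $2 billion training line is purely a placeholder for the “billions more on training”):

```python
# Rough tally of OpenAI's 2025 spend flowing to Microsoft.
azure_inference_2025 = 13e9          # estimated Azure inference spend (above)
coreweave_via_msft = 0.67 * 5.15e9   # Microsoft's 67% share of CoreWeave's revenue, ~$3.45B
training_placeholder = 2e9           # placeholder for "billions more" on training

total = azure_inference_2025 + coreweave_via_msft + training_placeholder
print(f"~${total/1e9:.1f}B")         # comfortably over $18B
```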

I also think that OpenAI is taking up the lion’s share of compute.

As I discussed in my most-recent premium newsletter, Epoch estimates that Microsoft had around 2GW of compute by the end of 2025, with OpenAI as its largest customer. At the end of 2025, OpenAI’s CFO said that it had access to 1.9GW in compute, at a time when its compute was entirely supported by Microsoft and CoreWeave (estimated to have 480MW of compute). 

Considering that 67% of CoreWeave’s revenue came from Microsoft renting capacity for OpenAI, I also think that it’s fair to assume that 80% or more of Microsoft’s GPUs are taken up by OpenAI, though some might now be taken up by Anthropic, which agreed to spend $30 billion on Azure. I’ve also confirmed that Microsoft’s “Fairwater” data centers — which constitute (when finished) “hundreds of thousands of GPUs” — are entirely reserved for OpenAI. 

Microsoft desperately wants you to think that this is a diverse, booming revenue stream, when in fact it’s spent around $293 billion in four years to make — when you remove OpenAI — less than $3 billion a quarter in revenue, not profit.

Booooooo! Booooooo!!!!!

Anthropic Accounts For 80%+ of Amazon’s AI Revenues And At Least 75% Of Its AI GPU Compute Capacity

As far as Amazon goes, things get a lot grimmer. As I mentioned earlier, in early April, per Reuters, Amazon’s Andy Jassy admitted that its “cloud business’ AI revenue run rate was more than $15 billion in the first quarter of 2026,” which translates to around $1.25 billion in monthly revenue: roughly 0.419% of the $298.3 billion it has spent on AI capex so far, or around 25% of the $5 billion it just invested in Anthropic two weeks ago.

I also think it’s reasonable to assume that a large part (if not the majority) of that revenue comes from Anthropic. Per my reporting last year, Anthropic spent $518.9 million a month on Amazon Web Services, at a time when it had around $7 billion in annualized revenue, a figure that has since more than quadrupled (if you believe it) to $30 billion. That $518.9 million works out to about $6.2 billion in annualized spend, and I think it’s fair to assume that its spend has at least doubled to $12 billion annualized, or around 80% of Amazon’s AI revenue run rate.

As of the end of Q4 2025, Amazon had 1.67GW of capacity — and based on my estimates from my newsletter published April 21, 500MW of that is taken up by Project Rainier, a data center dedicated entirely to Anthropic, which is also Amazon’s largest AI customer. I’d be confident in assuming that more than 75% of its capacity is taken up by Anthropic.

And man, $1.25 billion a month is fucking pathetic. I’m sorry, how are any of you possibly impressed by this? 

Google Won’t Talk About Its AI Revenues, But Anthropic’s Spend Likely Accounts For Most Of Google Cloud’s Growth

God, everyone loves to slurp down Sundar’s slop. You all fall for it! Sundar Pichai doesn’t respect you enough to tell you how much AI revenue Google makes, and he doesn’t have to, because its current businesses continue to grow thanks to its tried-and-tested tactic of making shit harder to use so that Google services can show you more ads.

Nevertheless, people are doing backflips over Google Cloud’s 63% year-over-year revenue growth (to $20.03 billion), and I have a few thoughts:

  • “Year-over-year” is an attempt to obfuscate actual growth in the era of AI. A better comparison would be quarter-over-quarter, which was 13.4% from Q4 2025 ($17.66 billion).
    • This is actually significant, because it’s a slower rate of growth than between Q3 and Q4 2025, when cloud revenue jumped from $15.15 billion to $17.66 billion, or 16.6% quarter-over-quarter. 
      • I think quarter-over-quarter growth is far more indicative of how a business is going. 
  • Google Cloud is far more than AI! It includes all of Google’s workspace revenue, such as Gmail, Google Docs, and so on. It’s important to remember that Google jacked up its workspace pricing twice in 2025, and that by Q1 2026, the majority of customers will have been forced to renew at inflated prices. It also includes all of Google’s cloud revenue, which is incredibly diverse and far more than just AI compute.
    • Google has intentionally bucketed AI-related revenue into Google Cloud so that finance and tech journalists will claim that AI is what’s driving this growth despite there being no proof that that’s the case.

One of the reasons that Google might not want to break out its AI revenues is that they’re — much like Amazon — heavily-inflated by Anthropic’s compute spend. Sadly, we have only a little information about Anthropic’s spend outside of its promise to use “up to one million TPUs, with over a gigawatt of capacity [coming] online in 2026” from the end of last year, and a month ago, when it said it would use “multiple gigawatts of next-generation TPU capacity…starting in 2027.”  

Another approach is to travel back in time to before Anthropic was a huge consumer of compute. In Q4 2023, Google Cloud sat at about $9.19 billion a quarter, reaching $11.96 billion in Q4 2024 (around 30% year-over-year, but a putrid 5% quarter-over-quarter from Q3 2024). By Q2 2025, it sat at $13.62 billion, and, as I mentioned above, accelerated to $15.15 billion, then $17.66 billion (16.6% quarter-over-quarter), then $20 billion (13.4% quarter-over-quarter) over the following three quarters.

Explainer: So, to create an output, a Large Language Model does “inference,” and the more users a company has, the more it spends on cloud services to support their inference. As a result, Anthropic’s growth means that it’s spending way, way more on its core cloud providers — Amazon and Google — to provide its services.
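To make the explainer concrete, here’s a toy model of why user growth maps directly to cloud spend. Every number in it is hypothetical, purely for scale:

```python
# Toy model: inference spend scales roughly linearly with usage.
# All prices and volumes below are hypothetical.
PRICE_PER_MILLION_TOKENS = 5.00        # assumed blended $/1M tokens
TOKENS_PER_USER_PER_MONTH = 2_000_000  # assumed average usage

def monthly_inference_bill(users: int) -> float:
    """Approximate monthly cloud bill for serving `users` active users."""
    tokens = users * TOKENS_PER_USER_PER_MONTH
    return tokens / 1_000_000 * PRICE_PER_MILLION_TOKENS

print(monthly_inference_bill(1_000_000))   # 1M users: a $10M monthly bill
print(monthly_inference_bill(10_000_000))  # 10x the users, 10x the bill
```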

Also, if somebody tells you that “Anthropic is profitable on inference,” they are making it up based on a single interview that Dario Amodei gave to Dwarkesh Patel, in which he explicitly says “these are stylized facts” that are not Anthropic’s actual numbers. I have serious questions about how Anthropic calculates margins in general.

These periods match up exactly to Anthropic’s big jumps in revenue from Q2 2025 (around $3 billion ARR) to Q3 2025 (around $7 billion ARR) to Q4 2025 (around $9 billion ARR) to Q1 2026 (around $19 billion ARR), which suggests that Anthropic’s growth is what’s actually boosting Google Cloud.

Google Is Doing Circular Financing With Anthropic and Its TPUs, Selling TPUs To Anthropic, Who Then Pays To Rent Them Back From Google

Yet things get weirder when you listen to Google’s most-recent earnings call:

The cloud segment posted a notable acceleration, driven by surging demand for GenAI solutions, resulting in the doubling of backlog and tripling of operating income with the inclusion of TPU hardware agreements as a new revenue stream.

Interesting. Interesting. Google appears to be planning to sell its TPUs (its own custom silicon, currently used only for its own services and some of Anthropic’s) to an unspecified number of unnamed customers, to the point that its remaining performance obligations jumped from $242.8 billion to $467.8 billion in the space of a quarter. 

Aside: To be clear, RPOs refer to any revenue that Google might earn in the future, such as the tens of billions of dollars Anthropic has agreed to spend, every single annual or bi-annual workspace account, every single massive ads deal, and so on and so forth.

Nevertheless, that’s a remarkable jump, especially when you try and work out who it’s selling to... oh wait, we actually know!

Google also signed a multi-billion-dollar deal to rent TPUs to Meta, per The Information, and is also discussing A) selling TPUs to Meta directly, and B) creating SPVs that will buy its own TPUs and lease them to others:

In addition to forging the Meta deal, Google has signed an agreement with an unidentified large investment firm to fund a joint venture that would lease TPUs to other customers, according to a person involved in that arrangement. Google is in talks with other investment firms to fund other such joint ventures.

This is exactly the same shit NVIDIA pulled with xAI’s GPU-related financing last year.

To explain, Google is creating something called a special purpose vehicle — a company with one purpose — that it then funds along with an investment firm. The SPV then raises cash via debt, which it then uses to buy TPUs directly from Google.

Now, remember that Anthropic deal to use a million TPUs from last year? How about the deal with Broadcom (which makes TPUs for Google) and Google to use “multiple gigawatts” of TPUs starting in 2027?

Well, per CNBC, Anthropic agreed to buy $21 billion of Broadcom’s TPUs in 2026 and $42 billion in 2027. Where will those TPUs go? Google’s data centers, probably the ones that it’s backstopping, per my premium from the beginning of the week:

Hey, while we’re on the subject, if AI data centers are such an obvious, rock-solid business, why did Google have to backstop $1.4 billion of Fluidstack and Cipher Mining’s obligations to deliver compute for Anthropic, $1.8 billion for a similar deal with TeraWulf, and a non-specific amount for Hut 8?

It’s a pretty sweet deal for Google! Google pays Broadcom to develop TPUs, Anthropic pays Google to buy those TPUs once Broadcom builds them, Google installs those TPUs in a data center, and then Anthropic pays Google to rent them back. 
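The net effect can be sketched as a toy ledger. The dollar amounts are hypothetical; the point is what Google gets to report versus what actually happens to its cash:

```python
# Toy ledger of the circular flow described above. All numbers hypothetical.
google = {"cash": 0.0, "reported_revenue": 0.0}
anthropic = {"cash": 0.0}

# Step 1: Google invests $10 (billion, say) in Anthropic.
investment = 10.0
google["cash"] -= investment
anthropic["cash"] += investment

# Step 2: Anthropic spends most of that investment back on Google TPUs and cloud.
cloud_bill = 8.0
anthropic["cash"] -= cloud_bill
google["cash"] += cloud_bill
google["reported_revenue"] += cloud_bill

# Google books $8 of "AI revenue" while ending up $2 worse off in cash.
print(google)  # {'cash': -2.0, 'reported_revenue': 8.0}
```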

This isn’t real demand!

Boo!!!!!! BOOOOOO!!!!!!

Anthropic Has Committed To Spend $200 Billion On Google Cloud and TPUs

So, for the sake of transparency, I wrote the above before The Information published its story about how Anthropic had committed to spend $200 billion on Google Cloud and TPU chips, which contained this very important detail:

But as part of the deal, which begins next year, Anthropic plans to spend about $200 billion with Google over five years, according to a person with knowledge of it. The commitment means Anthropic represents more than 40% of the “revenue backlog” Google disclosed to investors last week, reflecting contractual commitments from its cloud customers.

Google, Microsoft and Amazon’s AI Revenues Are Almost Entirely Based on Circular Financing Relationships That Should Be Illegal

The Information’s story also had a fascinating chart showing that around 50% of Amazon, Google and Microsoft’s backlog (which includes all revenues, not just AI) — a staggering amount — is made up of revenue from OpenAI and Anthropic.

To be clear, I also wrote the below before this chart ran, because it was very fucking obvious when you actually looked at the numbers.

Anyway, as I said in my last premium newsletter:

Just two weeks ago, both Amazon and Google pledged to invest up to another combined $65 billion in Anthropic, a company that just raised $30 billion in February and plans to raise another $50 billion more, following Amazon’s $15 billion (and as much as $35 billion more) investment in OpenAI in February.

This is not what you do when real, meaningful demand exists for AI services. Assuming that these rounds are closed at their higher limits, it will mean that Google has invested $43 billion and Amazon $33 billion in keeping Anthropic alive.

As I’ve explained, most AI revenues out of Google, Microsoft and Amazon come from two companies that lose billions of dollars a year, have no path to profitability, and are only able to keep paying these companies because the companies (and investors) keep feeding them money.

These relationships are utterly poisonous, and an intentional attempt to deceive investors and the general public. 

Google now plans to invest up to $43 billion in Anthropic, a company that I estimate takes up at least half of its 2.95GW of capacity, which has cost it around $211 billion in capex since 2023. Amazon has already invested $13 billion in Anthropic, with as much as $20 billion more to come, and announced its latest round with a statement about how Anthropic will use up to 5GW of compute capacity.

While dimwits might read this and say “WOW, AMAZON JUST LOCKED UP TONS OF FUTURE REVENUE,” it’s important to remember that Anthropic plans to lose $11 billion a year both in 2026 and 2027, and that’s based on its own internal (and fanciful) projections! 

Me Explain Why Circular Finance Bad!

Let me spell it out in a way that boosters can understand, in the style of Gillam Fitness: Anthropic not have money to pay big cloud bills, because Anthropic company cost lots of money, more money than Anthropic make! So Anthropic only PAY cloud bills if OTHERS give it money! Amazon GIVE MONEY to Anthropic to GIVE BACK TO AMAZON, which mean no profit! And Amazon not give Anthropic enough money to pay it, so Anthropic have to ask OTHERS for money! That BAD! It mean BUSINESS not STABLE, and CLIENT not STABLE. 

This bad when client MOST OF AI MONEY!

This ALSO mean that Anthropic RELIANT on OTHERS to pay AMAZON, which make AMAZON dependent on VENTURE CAPITAL for FUTURE REVENUE! Amazon SAY it have BIG BUSINESS, but BIG BUSINESS dependent on ANTHROPIC, which mean BIG BUSINESS dependent on VENTURE CAPITAL!

This SAME for GOOGLE! Both say they have BIG CLIENT, but BIG CLIENT MONEY not supported by REVENUE, so BIG CLIENT actually mean “HOW MUCH VENTURE CAPITAL MONEY ANTHROPIC HAVE.” 

This bad business! 

Sidenote: Me know you say “ANTHROPIC STOCK WORTH BIG MONEY,” but me need you remember how much capex Amazon and Google spend! Even if Anthropic stake worth $200 Billion, Amazon and Google still spend MANY more dollar than that on capex! And stake so BIG that neither able to SELL ALL. Only make gain on PAPER, which not REAL MONEY!

And it really, really is.

Most of Amazon, Google and Microsoft’s capex is being driven into capacity mostly used by OpenAI and Anthropic, neither of whom have the money to pay without continual infusions of more capital. Only Microsoft was smart enough to realize the problem, which is why it allowed Oracle to take over the majority of OpenAI’s future capacity (which may kill Oracle, by the way!), but both Google and Amazon keep feeding Anthropic money so that Anthropic can feed it right back to them. 

Anthropic and OpenAI Have Become Load-Bearing Failsons, Making Up 70%+ Of AI Revenues and Taking Up 75%+ Of AI GPU Compute Capacity — Meaning That The Entire AI Industry Is Dependent On Whether They Can Raise Money

I’m going to try and speak simply again, because I’m still not sure people get this.

  • Anthropic and OpenAI make up the vast majority of all AI revenues and compute capacity. I estimate 70% of all revenues and capacity demand, if not higher.
  • Amazon, Google, and Microsoft’s AI revenues — and by extension their justification for future capex spend — rest almost entirely on Anthropic and OpenAI.
  • OpenAI and Anthropic both lose tens of billions of dollars a year (yes, Anthropic said it’ll lose $11 billion in a projection, and I believe they are being coy with their actual losses), which means that the majority of AI revenue and compute demand is dependent on whether Anthropic and OpenAI can continue to raise money.

The only solution to this problem is if either Anthropic or OpenAI can somehow find a way to become profitable, something that I have yet to see any proof is possible. 

Anthropic Appears To Be Losing Far More Money Than People Believed

In fact, the only proof I can find is that these fucking companies are more unprofitable than ever — in the last month, Anthropic raised $10 billion from Google, $5 billion from Amazon, and is reportedly trying to raise another $50 billion from investors, less than three months after it raised $30 billion on February 12, 2026, which was five months after it raised $13 billion in September 2025.

That’s $58 billion in eight months, with the potential to raise it to $108 billion.

I’m gonna be honest: I think Anthropic is outright misleading its investors if it’s saying, per The Information, that it will only burn $11 billion in 2026 and 2027.

If that were the case, why does Anthropic need to raise one hundred and eight billion fucking dollars in less than three quarters? 

Time to make up some booster talking points and get mad at them:

We Need To Talk About Anthropic’s Revenue and Capacity Issues

So, SemiAnalysis — which traditionally does not wheel and deal in revenues! — randomly said that Anthropic had hit $44 billion in ARR, or around $3.67 billion in monthly revenue and…I’m sorry, what? 

I know that my suspicion of Anthropic’s revenue numbers has effectively become a meme by this point, but something about this doesn’t add up.

If we cut the period down to strictly the weeks after March 9, that means that Anthropic brought in somewhere between $4.5 billion and $5.58 billion in less than two months, or roughly its entire lifetime revenue.

This was also a period where Anthropic claimed it was facing capacity shortages, but said shortages only appeared to create performance issues for its current customers rather than stopping Anthropic from making money…

…which makes me wonder what all of this “capacity” talk is actually about. 

If Anthropic is truly facing a “capacity crunch,” it’s choosing to solve said crunch through sheer, unbridled greed, taking on more customers as it struggles to keep its services at above two nines of availability. If it were an ethical business, it would simply stop taking on new clients, much like GitHub Copilot did as it transitions to token-based billing.

Nevertheless, its capacity issues also make me wonder whether it’s actually taking on all that revenue, and if so, where it’s actually coming from. 

Per Newcomer, as of the end of last year, 85% of Anthropic’s revenue came from API calls from companies or individuals using its models to power services. Assuming that share is now down to around 70% given the ascent of Claude subscriptions, this would mean roughly $3.5 billion of API spend in the space of two months, or a few thousand trillion tokens’ worth.
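Roughing out that token figure (the blended price below is my assumption; real API pricing varies enormously by model, token type, and caching):

```python
# Back-of-envelope: how many tokens $3.5B of API spend would buy.
api_spend = 3.5e9
blended_price_per_million_tokens = 3.00  # assumed blended $/1M tokens

tokens = api_spend / blended_price_per_million_tokens * 1_000_000
print(f"{tokens / 1e12:,.0f} trillion tokens")  # on the order of 1,167 trillion
```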

For some context, Meta’s “token-maxing” fiasco from the beginning of April involved it burning around 60 trillion tokens in 30 days, but based on discussions with sources familiar with Meta’s spend, 80% of that was cache reads.

The Information estimates that the actual cost in that period was around $330 million, meaning that Anthropic needs at least another five — if not ten — Meta-sized customers, or dispersed demand so incredible that it has effectively appeared out of nowhere in the past three months, to come close to those numbers.

I personally think it’s because Anthropic is doing something peculiar with its annualized revenue calculations. Per The Information:

Anthropic calculates its annualized revenue by taking the last four weeks of application programming interface revenue and multiplying it by 13, and then adding another figure: its monthly recurring chatbot subscription revenue multiplied by 12, according to a person with direct knowledge of Anthropic’s finances. The monthly figure used to calculate recurring subscriptions is based on the number of active subscriptions that day, said the person. 

The first and most-obvious place to game the numbers is that Anthropic chooses a single day’s active subscribers to anchor to its annualized revenues, which means it can preferentially select one where, say, a bunch of new people were signed up under a trial, or avoid a day where churn had users leaving. One could easily include those who are canceled but have yet to actually leave the service — such as somebody who canceled on April 7th but would still be on as a “paid” subscriber until May 8th — too.
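Per The Information’s description, the formula works out like this; the two calls below are illustrative, showing how much the choice of anchor day for subscriptions can move the headline number:

```python
# Anthropic's reported annualization method, per The Information:
# (last four weeks of API revenue x 13) + (one day's subscription MRR x 12).
def annualized_revenue(api_rev_last_4_weeks: float, sub_mrr_on_chosen_day: float) -> float:
    return api_rev_last_4_weeks * 13 + sub_mrr_on_chosen_day * 12

# Same business, two different anchor days (illustrative numbers).
churn_heavy_day = annualized_revenue(2.5e9, 0.50e9)
trial_spike_day = annualized_revenue(2.5e9, 0.55e9)
print(f"${churn_heavy_day/1e9:.1f}B vs ${trial_spike_day/1e9:.1f}B")  # $38.5B vs $39.1B
```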

As far as API credits go, it’s easy to manipulate a four-week-long segment based on how Anthropic bills its enterprise customers, specifically self-service enterprise deals.

In this case, Anthropic customers pre-pay a sum (say, $50 million) in credits that are billed based on their teams’ usage, and don’t expire or run out unless they’re actively consumed.

Anthropic could very, very easily manipulate this by — instead of booking based on an enterprise’s actual token burn — saying “we just got $50 million in API revenue in a calendar month!” even though that $50 million might take months to actually use.

To be fair, there are also other customers (referred to as “sales-assisted”) that are billed in arrears for their consumption. It’s unclear what the split is, and Anthropic doesn’t have to tell you.

Remember: Anthropic is a private company! It can do all the non-GAAP bullshit it likes. 

When and How Does Anthropic Actually Solve Its Capacity Issues?

I keep hearing about how Anthropic is capacity-strained and all that shit, but I don’t hear any explanations as to how it fixes that problem, or what that problem actually means for the business itself. Somehow being “capacity constrained” has led to the company making more revenue, which makes me wonder whether it’s a “constraint” so much as “a company running as shitty a service as it can while billing as much as possible.”

Either way, it’s unclear how many data centers are actually getting built, or indeed how long they’re taking to build. What does Anthropic do if it’s 12-18 months away?

And really, why do these capacity constraints not seem to have any effect on its revenue growth?

I ask because Sundar Pichai noted on Google’s most-recent earnings call that Google Cloud would’ve made more revenue had it had the capacity to meet demand. Why is Google revenue-constrained due to capacity but not Anthropic?

While there’s a compelling argument to be made that Anthropic was the customer that would’ve bought that compute, I think we deserve an actual explanation of what Anthropic needs more compute for if it’s not “to make more money.”

Also, if it’s currently making as much money as it likes with its current capacity constraints, wouldn’t getting more compute…make the numbers worse?

Ah, fuck it, let’s move onto something funnier.

Meta Has Burned Over $150 Billion — Its AI Story Is Completely Insane Nonsense, And We Need To Stop Pretending Otherwise

Meta is probably the funniest company in the AI bubble, in the sense that it does not appear to have anything approaching an AI strategy beyond “build as much data center capacity as possible” and “lose $4 billion a quarter selling pervert glasses.”

I realize I sound a little dismissive, but nobody can actually explain to me what Meta is doing with AI in a way that remotely justifies it burning $158.25 billion in capex since 2023, with plans to spend as much as $145 billion in 2026 alone.

Oh, Meta’s AI app was high in the app store charts? Who fuckin’ cares! Who gives a shit! Oh, it launched its own closed-source “Muse Spark” model? What am I meant to be impressed about? That over $150 billion has resulted in a model that ranks #27 on the LLM leaderboards in coding?

Now, some of you — including people I respect so much I’m not going to mention them by name! — appear to believe that Meta has some super-secret way of using all these GPUs to make “more money from ads,” and I must be clear that Meta has yet to explain that that’s the case. 

Per my last premium newsletter:

People desperate to try and prove that AI matters will claim that Meta’s GEM (Meta’s generative ads model) led to a 5% increase in ad conversions on Instagram and a 3% increase in ad conversions on Facebook feed in Q2 2025.

This is an impressive-sounding stat that doesn’t actually connect to any meaningful revenue information, especially when Meta announced in January 2026 that doubling GEM’s compute allowed it to drive a 3.5% lift in ad clicks (a different measurement) on Facebook and “more than a 1% gain in conversions on Instagram” in Q4 2025, which is…4 percentage points lower.

You’ll note that these conversion numbers aren’t connected to any financials, which makes them a little suspicious, as 99% of Meta’s revenue is advertising, and “more conversions” should be fairly easy to peg to “more money”...unless said conversions aren’t actually converting into revenue for Meta’s advertisers. What does a “conversion” mean, in this case? Are these CPA ads that pay Meta per action? Or CPM ones that pay per thousand impressions and just happen to result in a click? 

Again, these are ads, which means that it’d be very easy to take that “conversion” number and turn it into “made $X,” unless of course said amount is pathetically small in the grand scheme of things.

Seriously though, what is Meta doing? I suppose it doesn’t matter when the Wall Street Journal will breathlessly write that (and I quote) Meta is envisioning “supersmart agents,” with the following lede, one of the more revolting things I’ve read about a hyperscaler recently:

Meta just offered a glimpse at what it thinks the future of work looks like: training and supervising artificial-intelligence systems to do what used to be your job. And that’s if you still have a job at all.

You may be wondering what the “glimpse” was, and it was “laying off 8,000 people” and “grading employees in performance reviews on their AI use” and “making a CEO chatbot for Mark Zuckerberg to talk to.”

This is an ugly, wasteful, distressed company that has no idea what to do anymore, run by a mad king who literally cannot be fired, and those who are charged with scrutinizing it will write entirely imaginary comments like “wow, Mark Zuckerberg is building supersmart agents!” without a second’s thought.

How To Argue With An AI Booster About This Round Of Tech Earnings! 

The magical hysteria of the AI bubble is such that Meta, Microsoft, Google and Amazon are, despite showing no actual profit from their AI investments, effectively protected by most of the media, investors and analysts.

To be clear, I don’t think any of these companies die as a result of the bubble bursting, but I’m sick and tired of hearing everybody cover their asses with the same tired and hollow talking points, so I’ve pulled together a few of them:

“These Are Real Businesses That Print Money, They’ll Be Fine”

So, while this is technically true — as I said, these companies will not die as a result of the bubble bursting — any investor (or anyone who wants to deal in “the truth” rather than “stuff they misread or misremembered”) should be deeply concerned that they’ve sunk around a trillion dollars into AI capex, and all they’ve done is incubate two large, unprofitable companies that have become a burden on their infrastructure, along with revenue streams that they either refuse to disclose or that are both incredibly centralized and proportionately embarrassing.

Let’s get specific: since 2023, Microsoft, Google, Amazon, and Meta have spent a little over $850 billion in capex, mostly hoarding NVIDIA GPUs that will be near-to-completely obsolete by 2030. 

With these GPUs comes a massive depreciation problem, as I discussed a few months ago in my time bomb premium newsletter. Every quarter, more GPUs come online, steadily growing the “depreciation” line on the income statement, to the point that the Wall Street Journal projects it will eat as much as 58% of Meta’s, 40% of Microsoft’s, and 38% of Google’s net income by 2030.

This flows neatly into my next point.

“These Businesses Are Super Profitable, And They’re Still Growing Really Fast! That’s Because of AI!”

Well, let’s be clear: whatever growth these businesses currently have is being eaten by depreciation. Last quarter, Google booked $6.48 billion in depreciation, Amazon $18.94 billion, Microsoft $10.1 billion, and Meta $5.9 billion, numbers that sometimes oscillate slightly down but have, year-over-year, grown by billions of dollars. And yes, year-over-year is appropriate here, because this is a balance that has been steadily growing for years.

In any case, depending on the company, that “growth” is either “barely related” or “entirely unrelated” to AI. 

Remember: Microsoft and Amazon are intentionally obfuscating their AI revenues by using “annualized” — a term they refuse to define that usually means a monthly figure multiplied by twelve — in statements about quarterly revenue. As a result, it’s impossible to precisely work back to how much revenue they actually made.
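To make the obfuscation concrete, here’s a back-of-envelope sketch in Python. All the numbers are illustrative assumptions I’ve made up for the example, not disclosed company figures — the point is only that two very different quarters can produce the identical “annualized” headline:

```python
# Back-of-envelope: what an "annualized" figure does and doesn't tell you.
# All numbers below are illustrative assumptions, not disclosed figures.

def annualized_from_monthly(monthly_revenue: float) -> float:
    """'Annualized' as commonly used: latest monthly figure times 12."""
    return monthly_revenue * 12

# Two hypothetical businesses that both exit the quarter at $100M/month:
flat = [100, 100, 100]   # $M per month, no growth
spiky = [40, 60, 100]    # $M per month, fast growth into the final month

for name, months in [("flat", flat), ("spiky", spiky)]:
    annualized = annualized_from_monthly(months[-1])
    actual_quarter = sum(months)
    print(f"{name}: annualized=${annualized}M, quarter=${actual_quarter}M")

# Both report the same $1,200M "annualized run rate," but one quarter
# actually produced $300M of revenue and the other $200M -- you cannot
# work back to real revenue from the annualized number alone.
```

That’s the trick: the annualized figure is a function of one month, so it discards exactly the information (the shape of the quarter) that you’d need to reconstruct actual revenue.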

In fact, that’s probably the simplest response here: if these companies were truly growing as a result of AI, they’d tell you. They’d say “AI revenue was X.” They’d say it in blunt, obvious terms. No annualized revenues, no projections, no fluff, no “AI-influenced,” just a line item that said “AI,” or even a segment, such as “Microsoft Azure AI compute.”

I also want to be clear about something else: I know, from documents viewed by this publication, that Microsoft has these line items fully itemized, and could share them if it wanted to, but intentionally chooses not to.

These companies are deliberately refusing to share their AI revenues, and it’s time for the tech and business media to begin asking them why!

“Umm, People Are PAYING For AI, Actually-”

So much that neither Google nor Meta will tell you how much!

Also, three years in, nearly a trillion dollars, and two companies have dedicated nearly their entire sales operation to pushing it, and the best they’ve got is annualized revenues and no segment breakdown. 

“Oh, Microsoft has 20 million paying Copilot subscribers.” That’s $600 million a month. For a company that makes $80 billion a quarter? That’s a pathetic amount of money. You could raise more money by auctioning dogs!

I need you, please, god, to start actually using basic mathematics! Microsoft has spent $293 billion on this bullshit, and spent another $30 billion or so in the last quarter on capex!
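Fine, let’s do the basic mathematics. This is a sketch using the figures cited above, plus one assumption on my part: the widely reported ~$30/seat/month Microsoft 365 Copilot list price (which is how 20 million subscribers gets you to roughly $600 million a month). It also treats revenue as if it were pure profit, which wildly flatters the case:

```python
# Back-of-envelope payback math using the figures cited above.
# Assumption: ~$30/seat/month (widely reported Microsoft 365 Copilot
# list price). Treats revenue as pure profit, which flatters the case.

subscribers = 20_000_000
price_per_month = 30        # USD per seat, assumed list price
capex_to_date = 293e9       # USD, cumulative AI capex cited above

monthly_revenue = subscribers * price_per_month   # $600M a month
annual_revenue = monthly_revenue * 12             # $7.2B a year

years_to_recoup = capex_to_date / annual_revenue
print(f"${monthly_revenue / 1e6:.0f}M/month, ${annual_revenue / 1e9:.1f}B/year")
print(f"Years to recoup capex (revenue, not profit): {years_to_recoup:.1f}")
```

Roughly forty years to recoup the capex on revenue alone, before a single cost — and that’s before next quarter’s capex lands on the pile.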

When does this pay off?

“Anthropic and OpenAI Are Dependent On The Cloud Providers, Guaranteeing Them Revenue-”

As I said above, that “guaranteed” revenue comes from two companies that cannot actually afford to pay for it.

“Amazon Web Services Cost A Lot Of Money-”

Enough!

Amazon Web Services became profitable within a decade and cost about $52 billion between 2003 and 2017, and that’s adjusted for inflation!

Anyone making this point is either intentionally lying to you or incredibly ignorant. I have done the work to prove this point, and will continue to repeat it until those too incurious or deceptive learn to stop doing so. 

“The Capex Will Pay Off”

How? 

When? 

Wwwwhen?????

Whheeeennnnnn??????????????

I’m serious, when? And how???

Not that they would, but in a scenario where Meta, Amazon, Google and Microsoft stopped spending capex on AI next quarter, they would have to make somewhere in the region of $2 trillion in brand-new revenue — all while other services continued to grow — to make any of this capex worth it.

Please, explain to me how that happens when it’s taken three years and nearly three hundred billion fucking dollars for Microsoft to squirt out maybe three billion dollars in revenue (not profit), with most of that coming from OpenAI! Please, somebody, anybody explain!

You can’t! 

But you know what, let’s try!

  • It’ll get cheaper in the future- okay, are you saying the chips will get better? Because these companies have somewhere between $100 billion and $300 billion of these fucking things.
  • People are starting to pay for AI- okay, but they’re not paying very much, and it’s taken so long that these companies are now burdened with endless piles of GPUs that they’ve yet to fully install. How do they catch up?
  • Just give it time- no! I’ve given it lots of time! Why are you being so generous to them and so impatient with me? 
  • This is investing in tech that will turn into the most transformative tech in the future- you’re a mark!

Big Tech’s AI Story Is Unimpressive, Centralized, Unprofitable and Boring — And The AI Demand Story Is A Lie

As The Information reported, around 50% of all remaining performance obligations, as in all (NOT JUST AI) of the upcoming revenue for Microsoft, Google and Amazon, is from either OpenAI or Anthropic.

Put another way, 50% of big tech’s upcoming revenues are dependent on two companies, neither of which can afford to pay them, meaning that 50% of Microsoft, Amazon and Google’s future revenues will either come from their own venture investments or from venture capital.

This is not what stable or diverse revenue looks like, and suggests my grander thesis about AI demand is true. Outside of OpenAI and Anthropic, there’s barely any actual demand for AI services or AI compute at the scale necessary to substantiate a trillion or more in capital expenditures.

Yet the most disgraceful part is the sheer contempt that these companies have for investors, the media, and the general public. In a functioning regulatory environment — or a market run by people with object permanence — it would be impossible to add such large amounts to your RPO balance without active scrutiny and analyst markdowns based on the fact that Anthropic and OpenAI literally cannot afford to pay these bills at this time.

Microsoft, Amazon and Google have scooted along for years on the idea that they’re diverse, well-positioned companies that can build massive AI revenue streams. In reality, they’re the paypigs for Anthropic and OpenAI, providing more than 70% of their compute as a means of artificially inflating their AI revenues, knowing that analysts and the media will nod and smile without a single thought.

In fact, fuck it, I’m ending this with a rant.

The story of massive AI demand is a lie — a trillion dollars annihilated to create the largest circle jerk of all time. 

Venture capitalists and hyperscalers feed money to OpenAI and Anthropic, so that venture capitalists can feed money to startups to feed to Anthropic and OpenAI, so that Anthropic and OpenAI can feed that money back to hyperscalers, who then feed that money to NVIDIA and buy more GPUs. 

It might seem tempting to credit these men as geniuses for creating companies specifically to feed them revenue, but keeping up the kayfabe of “doing AI” to substantiate this buildout has meant massively overcommitting to the bit, even though the only two meaningful businesses in AI appear to be Anthropic and OpenAI, and that’s only because they’re effectively intellectual honeypots for the entire industry. 

Outside of those two, the only other competitive AI businesses are those of Amazon, Microsoft and Google — two of which now have annualized AI revenues of around 6% of their capital expenditures so far. 

Google’s AI business is so booming that it refuses to break it out, and while it nebulously claims “AI is creating growth,” it’s not really clear how, and it’s vague about it because analysts and the media are ready to swallow the narrative as long as number go up.

That’s why Google doesn’t break out the number, by the way! That’s why Sundar Pichai is able to bullshit his way through every earnings call, because the media and analysts are ready to fill in the gaps in the most preferential way possible. 

Amazon and Microsoft had their hands forced by the markets after their stocks stumbled, and fucked up by sharing their AI revenues. Amazon’s $298.3 billion in capex has successfully created a business that, more than a quarter of the way to a trillion, has managed to make $1.25 billion a month. 

That’s fucking pathetic! If we had analysts with IQs above room temperature they’d run Andy Jassy out of Arlington like Shrek. 

Let’s look at this fucking chart again

Unbe-fucking-lievable! Anthropic and OpenAI have now committed to over $718 billion of Microsoft, Amazon and Google’s revenues, despite the fact that neither of them can actually afford to pay for it. The market’s response? A slight (and short-lived) after-hours lift.

Dear members of the media: these companies are laughing at you. They know you are going to cover this in a way that makes them look good. They know you’re going to use this as proof that they’re “doing well in AI,” despite the fact that the majority of their future revenue is tied up in two oafish failsons, one of which (OpenAI) plans to burn $50 billion on compute in 2026 alone.

I realize that it’s a lot to ask people to think about things in negative terms, but things are getting a little ridiculous. These are loadbearing failsons with dysfunctional businesses! It’s very clear both of them are doing weird things with their annualized revenues, and even clearer that there’s no path to profitability!

Sadly, asking the media or analysts to act rationally or apply any real scrutiny is a joke, because this is the AI bubble, where everybody stays wrong together, because once anybody admits what’s actually happening, they’ll have to admit they’ve all sounded insane for years. $1.25 billion a month! Andy Jassy should be ashamed of himself!

And god, fuck Microsoft too. 

I’m sorry, WOW, Satya! You managed to get up to twenty million paying Microsoft 365 Copilot subscriptions — $600 million a month in revenue, not profit! — and all it took was investing $13 billion in OpenAI, forcing Large Language Models into every one of your products in a way that borders on harassment, spending about $289 billion in capex, laying off thousands of people, and savaging the Xbox brand.

Whoopdie fucking shit man! You should be ashamed of yourself. Amy Hood should lock you out of the building. She should turn off your keycard and disconnect your keyboard. 

OpenAI Gave The Tech Industry AI Psychosis, Convincing Everybody That A Dead-End Tech Was The Ultra-Panacea To The End of Hypergrowth

OpenAI is, in and of itself, a kind of psychosis generator. 

It was the first thing since the iPhone that felt genuinely new to the people who entirely obsess over growth. 

It was the panacea for the tech industry: a new way for Business Idiots to spend money on infrastructure, a new thing for consultants to scam people with, a new series of things to be an expert in, all wrapped up in something that could be a consumer product, an enterprise software product, and a new kind of API to attach to other enterprise software. 

In theory, OpenAI’s success would lift everything at once — hardware, software, and even adjacent fields, like services. It promised to democratize access to creating software while also heavily reinforcing existing power structures, to the point that every dollar inevitably ended up in the Magnificent Seven’s pocket. It only succeeded in the latter.

The problem is that the system needed to work one day. It needed to eventually make more money than it cost. Every single one of these companies is talking about AI non-stop, and not one of them can show a profit. The only thing they can do is tell lies of omission by saying “AI helped boost everything,” and when you ask for specifics, the results are either tepid or so secretive you’d think they’re hiding a dead body.

The only reason Google, Amazon and Microsoft are being tolerated at their current excess is because their non-AI segments continue to grow through endless price increases and enshittification, and their external business units — by which I mean OpenAI and Anthropic — have yet to die. 

Sorry, I just don’t know what Meta is doing. I don’t think Meta knows what Meta is doing. Every so often it buries a fact in one of its blogs about how it saw a 3% increase in something related to AI, then it promises to burn $170 billion and it’s unclear why. It also lost another $4 billion on Reality Labs, by the way! There should be a legitimate inquiry into where this money is going. Eighty-six billion dollars and all we have is the metaverse and pervert glasses? 

Meanwhile, SpaceX is rushing to have the strangest and largest IPO of all time, all as daily stories leak about billions of dollars of losses and whatever the fuck that deal with Cursor is.

Apparently SpaceX will either buy it for $60 billion or pay it $10 billion. 

I think what actually happens is the third thing: SpaceX funds Cursor for a bit, there’s a falling out between Musk and CEO Michael Truell, and the company either rushes an acquisition or dies. Remember: Elon killed Cursor’s funding round! And he can’t buy it before SpaceX goes public.

Elon Musk took fucking OpenAI to court. Do you think he’ll care about killing Cursor? Who’s going to be left to sue him?

Anyway, that OpenAI/Musk suit is a real Alien Versus Predator situation, and if I’m honest I’ve found the whole thing a little boring: a duo of dullards shoulder-barging each other to see who can run a company that neither of them can really describe, because neither of them does anything other than pontificate and take credit for other people’s work. 

If this breaks OpenAI I’ll be very surprised, but if it does it would be extremely fitting that Elon would accidentally destroy the AI industry, like Mr. Bean sitting on a button that launches a nuke. If I’m wrong here it would be very funny. I’m just not giving it much hope.

Nevertheless, this entire industry is only made possible by the kayfabe circular economy of taking every single sign as good for AI and ignoring every possible glaring warning sign in the hopes that they’ll go away. 

You know, like last week when Microsoft said it’s shifting GitHub Copilot to token-based billing — something I reported a week before everybody else. 

This effectively kills the product as we know it, and invalidates every single story about its revenue growth ever written. To give you some context about its scale, GitHub Copilot is the second-largest customer of Anthropic’s models, and was only that large because Microsoft was subsidizing the compute spend of its customers.

Why? Because that’s the only way to build any kind of AI business. 

Google and Amazon realize their AI revenues are contingent on the continued survival of Anthropic, and Amazon and Microsoft’s revenues are contingent on OpenAI AND Anthropic. 

They know that if these companies die they’re going to lose billions of dollars of revenue, but that they also have to compete with them for fear that they’ll be seen as “falling behind” their horrible progeny. As a result, they’re incinerating their brands and endlessly pontificating about the power of AI while spending nearly a trillion dollars on capex almost entirely to make sure their competition, which is also their customer and welfare recipient, doesn’t die.

It’s a mess, and a mistake, and eventually one of them is going to grow tired of it. Microsoft already came in billions under analyst estimates for capex. It’s moving GitHub Copilot to token-based billing. It claimed to invest in Anthropic in February, but didn’t mention it in its earnings in any way, shape or form. 

At some point these fucknuts are going to be forced to reckon with what they’re doing. 

Until then, we’ll have increasingly frenzied and ejaculatory statements about AI demand that fail to match reality. 

I truly think that it’s going to be like this if not crazier until one day when the music suddenly stops. Somebody is going to blink. Somebody is going to take a step back and give everybody else permission to stop too. 

Maybe Perplexity, Lovable, Replit, or Cognition dies. 

Maybe Microsoft shifting GitHub Copilot to token-based billing in June inspires others, like Anthropic, to follow suit. 

Maybe AI token austerity begins at Microsoft, Meta, or another large company. 

Maybe NVIDIA fails to inspire in just the right way, or the fact that data centers are not opening fast enough to digest the last year’s GPUs finally catches up with Jensen Huang’s streak of beating and raising expectations. 

And that really is the strangest thing.  

At the current rate of sales, it’s taking six months to install a quarter’s worth of GPUs. At this point it’s obvious that there are warehouses full of these things. It just isn’t obvious whether those warehouses are owned by the hyperscalers or by the Taiwanese ODMs (original design manufacturers), like Quanta Computer and Foxconn, that build their servers. 

None of this makes sense. 

It hasn’t from the beginning. It’s the largest bubble in history, and has reached such an intellectual and financial scale that many have taken sides on it in a way that will be completely impossible to walk back if they’re wrong. 

As things deteriorate, expect them to cling to their mythologies tighter and become more agitated. 

And really, we’ve never seen anything like this in our lives. 

You realize that Anthropic and OpenAI are insane, right? These companies have promised $718 billion to Microsoft, Google and Amazon, and cannot survive without venture capital funding, because their underlying businesses lose money on every transaction — and so help me fucking GOD if you say they’re “profitable on inference” without proof I will crush you into a cube like a car in a garbage dump!

Every single AI business you see is unprofitable, and none of them has a path to break-even, let alone sustainability. Nothing has changed about this story. And nobody has been able to explain the massive differences between my reporting on OpenAI’s revenues and its own leaked figures, other than to say “you must be wrong somehow,” as if that somehow invalidates “direct numbers from Azure billing.”

If you disagree with me, you really better hope I’m wrong, because I’ve got years of receipts and I can remember basically every article about AI revenues written since 2023 off the top of my head. Not a single one of my critics or any AI booster has put an iota of the same amount of effort into proving their case.

The hysteria and excess of this era has proven how many people can come to conclusions without making the effort to prove them. Disagree with me or not, I’ve done the work, and I see no proof that the other side has even started.

The world has been swept away by the fantastical ideals of Sam Altman and Dario Amodei, and two giant, unsustainable, cash-burning monstrosities that were only made possible because hyperscalers built their infrastructure for them and funded their excesses in exchange for theoretical revenues and equity stakes that give them paper gains.

Their hope, I imagine, was that in doing so, OpenAI and Anthropic would create industries surrounding them — both in the business lines attached to hyperscalers and AI startups that would potentially pay them for compute.

In the end, it appears the only way to create any real demand was to literally fund it themselves. 

These men believe they’ve created perpetual energy.

What they’ve actually done is shit their pants and set their houses on fire.
