Premium: What If...We're In An AI Bubble? (Part 1)

Ed Zitron 45 min read
Every day I read some sort of wrongheaded extrapolation about the future of AI — that today’s models are somehow indicative of AGI creating a “permanent underclass,” one that stops people from building software companies, or really doing any kind of job on the computer:

Hyperbolic? Perhaps. But even those who view the idea of a permanent underclass as overblown tell me that the meme contains a kernel of truth. Yash Kadadi, a 23-year-old start-up founder and Stanford dropout, summarized the sentiment of his peers: “There’s only a matter of time before GPT-7 comes out and eats all software and you can no longer build a software company. Or the best version of Tesla Optimus comes out,” and can perform all physical labor as well. In that world, this year is a human’s “last chance to be a part of the innovation.”

Yash, your peers are fucking idiots. You may as well be talking about breeding Grinches or Ninja Turtles, or kvetching about the upcoming threat from Godzilla. “The best version of Tesla’s Optimus [robot]” suggests that Tesla has released an Optimus robot, or that any prototypes are capable of anything approaching useful work, something that Tesla itself has said isn’t the case.

Every discussion of AI has become a discussion of anywhere between one and a million different theoreticals.

The Information’s headline that OpenAI will “save $97 billion through 2030 in latest Microsoft deal” — one that capped its revenue share (as in the actual money it sends to Microsoft) at $38 billion — hinges on the idea that OpenAI would somehow make $190 billion in revenue, because at the reported 20% revenue-share rate, that’s what it would take to actually hit the $38 billion cap.

The majority of articles about METR’s “time horizon” study of how long models take to complete tasks gush with mindless praise, but regularly leave out two valuable details: that these comparisons are based on estimates of human task completion times, and that the most commonly cited figure measures whether a model can complete a task just 50% of the time: 

The task-completion time horizon is the task duration (measured by human expert completion time) at which an AI agent is predicted to succeed with a given level of reliability. For example, the 50%-time horizon is the duration at which an agent is predicted to succeed half the time.  
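To make that definition concrete: METR fits a curve of success probability against task length, and the “time horizon” is the length at which that curve crosses a chosen reliability level. Here’s a toy sketch — the logistic shape mirrors METR’s approach, but the parameter values are invented for illustration, not METR’s actual fit:

```python
import math

# Hypothetical fitted logistic model:
#   P(success) = 1 / (1 + exp(-(a + b * log2(t))))
# where t is the human expert completion time in minutes.
# These parameters are made up for illustration.
a = 5.0    # intercept (assumed)
b = -1.0   # slope: success gets less likely as tasks get longer (assumed)

def p_success(minutes: float) -> float:
    """Predicted probability the agent completes a task of this length."""
    return 1.0 / (1.0 + math.exp(-(a + b * math.log2(minutes))))

def time_horizon(p: float) -> float:
    """Task length (minutes) at which predicted success probability equals p."""
    # Solve p = 1 / (1 + exp(-(a + b * log2(t)))) for t.
    logit = math.log(p / (1.0 - p))
    return 2.0 ** ((logit - a) / b)

fifty = time_horizon(0.5)   # the widely quoted "50% time horizon": 32 minutes here
eighty = time_horizon(0.8)  # the stricter 80% horizon is much shorter (~12 minutes)
```

Note what falls out of even this toy version: demanding 80% reliability instead of 50% shrinks the horizon dramatically, which is exactly the detail the breathless write-ups leave out.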

It’s the Sex Panther joke from Anchorman, except it’s a chart that gets written up in major newspapers and bandied about as proof of models becoming conscious. 

Nevertheless, everybody appears to be having a lot of fun making stuff up or making ridiculous assertions based on OpenAI or Anthropic’s predictions. Likely gas leak victim Joseph Jacks posted last week that at its current rate of growth, Anthropic would pass Google’s revenue by 2028. Multiple different people I’d rather not link to are posting benchmarks of Anthropic’s still-to-be-released Mythos model as proof that we’re in the early-to-middle stages of the entirely-fictional AI 2027 “simulation,” despite the entirety of this ridiculous, oafish extrapolation relying on the idea that at some point LLMs become conscious and start doing their own research.

None of these people seem to want to engage with reality, even in their extrapolations. 

Whether or not you believe the bubble will burst, it’s hard to argue (not that anybody bothers to try) with my recent reporting about the lack of data centers coming online or the fact that the majority of AI revenue comes from two companies that are, in the end, hyperscalers feeding themselves money. Nobody has presented any real argument as to how Oracle completes its data centers or avoids running out of money given the fact that it needs OpenAI to be able to pay it $70 billion or more a year in the next four years to survive. The lack of any real, thoughtful response to my assertions outside of ultra-centrists and people that can’t count is a sign that I’m onto something, and I take it as a badge of pride.

But what I haven’t done recently — not since AI Bubble 2027, at least — is try my own hand at extrapolating the future based on the things I have read, seen and reported on. 

Today, I’m taking a different approach, inspired by one of my favourite comic series. In Marvel’s “What If…?” writers asked questions that would entirely change the course of the Marvel Universe, such as What If The Fantastic Four Didn’t Get Their Powers, or Loki Was Worthy of Mjolnir.

I’ll be honest that there are a lot of unanswered questions I have about the AI bubble that make precise, time-based predictions almost impossible. We’re in the midst of one of the most insane market rallies in history, driven by the exploding valuations of NVIDIA and data center-related stocks, despite there being a great deal of compelling evidence that millions of Blackwell GPUs are sitting in warehouses. In other words, the market is rallying around the idea of data centers getting built without ever confirming whether that’s actually true.

In the past, I’ve approached things from an investigative perspective, proving what I believe to be one of the greatest misallocations of capital in history. Today, I’m going to have a little more fun, exploring both the worrying signs I see and their potential consequences in the form of questions, mixing my own reporting with a little bit of fiction.

My reasoning is simple: I think people are very good at ingesting and remembering specific facts and events, but much worse at understanding their consequences. For example, Dave Lee of Bloomberg — who I adore and admire! — said that An OpenAI Bubble Is Not An AI Bubble, and made numerous correct assertions about OpenAI, but failed to consider that OpenAI accounts for $718 billion of Oracle, Microsoft, and Amazon’s backlogs, meaning that OpenAI’s collapse would leave Oracle destitute, Microsoft and Amazon short-changed, Cerebras without 80%+ of its revenue, and CoreWeave without a major client and in breach of loan covenants guaranteed by OpenAI’s revenue.

Even if Anthropic were able to mop up some of that fallow capacity, it too relies on endless venture capital and hyperscaler welfare to pay, well, increasingly-large shares of hyperscaler revenue.

I feel as if many people are willing to ask if we’re in an AI bubble, but few seem to want to talk about what might happen. It’s really easy to say “stocks are overvalued” or “OpenAI is deeply unprofitable,” but thinking much harder than that starts to make you feel a little crazy. Data center construction now makes up a larger chunk of all construction spending than commercial real estate. OpenAI has made promises that total over a trillion dollars, and Anthropic $330 billion. NVIDIA represents 8% of the value of the S&P 500, and that valuation is based on the idea that it will never, ever stop growing, which is only possible if data center construction never stops. CoreWeave, IREN, Nebius, and Nscale all rely on hyperscaler contracts that are related to OpenAI, and if those contracts go away because OpenAI does, they’re screwed.

Most people can say that these things are true, but very few of them are willing to think about their consequences, because when you do so, things begin feeling completely and utterly fucking insane.

Put another way, for me to be wrong, all of these data centers will have to get built, OpenAI will have to make and raise $852 billion in the next four years, and the underlying economics of generative AI will have to improve in a dramatic and unfathomable way — in such a way that it creates hundreds of AI startups that can substantiate $400 billion of annual compute revenue. For NVIDIA to continue growing its revenues at an historic rate, it will also have to, by 2028, be selling over $1 trillion in GPUs, which will require there to be funding to buy these GPUs, at a time when hyperscaler cashflows are dwindling and banks are worried they’re “choking” on AI data center debt.

The AI bubble is supported almost entirely by magical thinking and people ignoring obvious warning signs again and again and again in the hopes that at some point something changes. You can quote whatever story you like about Anthropic’s skyrocketing revenues (which are absolutely inflated) — there’s no getting away from the fact that it loses billions of dollars a year, and if your answer is that it will turn profitable in 2028, please tell me how, because there is no proof that it’s possible. 

I also kind of get why nobody wants to think about this stuff. Even though it’s become blatantly obvious that the economics don’t make sense, the stock market continues to rip based on equities connected to the AI bubble in a way that defies logic but rewards positive speculation. Major media outlets continue publishing positive stories about the power of AI that seem entirely disconnected from what AI can do, and millions of dollars are being spent by companies based on a theoretical return on investment. 

No, really, per The Information’s Laura Bratton quoting PagerDuty CIO Eric Johnson:

“I am preparing myself to be surprised” by the bills, he said. “We believe that there’s a lot of value here. Unfortunately, it’s fairly new technology, so there’s some open questions that we’re gonna be working through” around its costs and getting a return on the investment.

We are fucking years into this, man. How is return on investment still an open question? 

Okay, we know the answer: we’re in a bubble. Everybody is pressuring everyone else to “integrate AI,” to “get every engineer AI,” to “become more efficient using AI,” with token spend becoming some sort of vulgar status symbol despite the whole point of the AI push being that workers can be replaced, or enhanced, or, I dunno, something measurable. In the end, all that’s being measured is how many tokens employees are burning, leading to Amazon staff deliberately setting up “agents” to burn more tokens to seem more “engaged with AI” than they really are, all because dimwit managers and executives don’t understand what people do at their jobs and can only comprehend Number Go Up. 

As a result, it’s far easier to fall in with the groupthink, even if it’s hysterical, nonsensical and based on flimsy ideas like “it’s just like Uber” (it isn’t) or “Amazon Web Services burned a lot of money” (it burned less than half of OpenAI’s $122 billion funding round on capex for the entirety of Amazon in the space of 15 years, adjusted for inflation), because thinking that everybody’s wrong requires you to disagree with the markets, most of social media, your boss, and your most annoying coworkers.

People also don’t really like thinking about bad things happening. They’re happy to make vague leaps in a direction that makes them feel prepared for the worst (such as the specious statements about all of these data centers being for the military or a theoretical bailout), especially if it makes them feel smart, but in doing so they get to avoid the actual bad stuff — the economic ramifications for ordinary people, the years of depression ahead for the tech industry, and the calamitous results for the market.

So, today, I’m going to have a little fun thinking about the actual consequences of everything I’ve been writing. I’m going to thread in both my own and others’ reporting, and take these ideas to their logical endpoints as far as I can.

This is going to be the first of a two-part exploration of what the actual consequences of the AI bubble bursting might be.

I’ll also caveat this by saying that these are, ultimately, explorations of potential future events rather than cast-iron guarantees. People seem to be resistant to being told the truth, so perhaps it’s time to explore these ideas as theoretical — fictional, even — so that people are more willing to take them in. 

This series is all about simple scenarios, and one very simple question. 

Time. Space. Reality.

It's more than a linear path — it’s a prism of endless possibility. I am the Watcher, and I am well aware of how AI generated that sentence sounds. 

I am your guide through these vast new realities.

Follow me and dare to face the unknown.

And ponder the question…

What if…We’re In An AI Bubble?

In Today’s Where’s Your Ed At Premium…

  • What if the entire AI industry moves to token-based billing?
  • What if organizations can’t afford to keep spending money on AI?
  • What if the AI capacity crunch never ends?
  • What if data centers aren’t really getting built?
  • What if hyperscalers stop spending so much on data centers?
  • What if hyperscalers have warehouses of uninstalled GPUs?
  • What if data center construction collapses?
