Soundtrack: The Dillinger Escape Plan - One Of Us Is The Killer
An MIT study found that 95% of organizations are getting "zero return" from generative AI, seemingly every major outlet is writing an "are we in a bubble?" story, and now Meta has frozen AI hiring. Things are looking bleak for the AI bubble, and people are getting excited that this loathsome, wasteful era might be coming to an end.
As a result, I'm being asked multiple times a day when the bubble might burst. I admit I'm hesitant to commit to any firm timeline — we are in a time (to quote former Federal Reserve Board chairman Alan Greenspan) of irrational exuberance, where the markets have oriented themselves around something very silly and very, very expensive.
Regardless, Anthropic is apparently raising as much as $10 billion in its next funding round, and OpenAI allegedly hit $1 billion in revenue in July, which brings it in line with my estimate that it's made about $5.26 billion in revenue in 2025 so far.
The bubble "bursting" is not Sam Altman declaring that we're in a bubble — it's a series of events that lead to big tech pulling away from this movement, investment money drying up, and the current slate of AI companies withering and dying because they can't raise more money, can't go public, and can't sell to anybody.
Anyway, earlier in the year, a bunch of credulous oafs wrote an extremely long piece of fan fiction called "AI 2027," beguiling people like Kevin Roose with its "gloominess." Written with a deep seriousness and a lot of charts, AI 2027 makes massive leaps of logic, with its fans rationalizing taking it seriously by saying that the five authors "have the right credentials." In reality, AI 2027 is written to fool people who want to be fooled and scare people who are already scared, its tone consistently authoritative as it suggests that a self-learning agent is on the verge of waking up, a notion so remarkably stupid that anyone who took it seriously should be pantsed again and again.
These men are also cowards. They choose to use fake company names like "OpenBrain" to mask their predictions instead of standing behind them with confidence. I get that extrapolating years into the future is scary — but these grifting losers can't even commit to a prediction!
Nevertheless, I wanted to take a run at something similar myself, though not in the same narrative format. In this piece, I'm going to write out the conditions I believe will burst the bubble. Some of this will be extrapolation based on my own knowledge, my sources, and the hundreds of thousands of words I've written on this subject. I'm not going to lay out a strict timeline, but I am going to write out how some things could go, and how (and why) they'll lead to the bubble bursting.
A Little Preamble Before The Premium Cutoff
For the bubble to burst, I see a few necessary conditions, though reality is often far more boring and annoying. Regardless, it's important to know that this is a bubble driven by vibes, not returns, and thus the bubble bursting will be an emotional reaction.
This post is, in many respects, a follow-on to my previous "pale horse" article (called Burst Damage). Many of my original pale horses have already come true — Anthropic and OpenAI have pushed price increases and rate limits, there is already discord in AI investment, Meta is already considering downsizing its AI team, and OpenAI has pulled multiple "big, stupid magic tricks," chief among them the embarrassing launch of GPT-5, a "smart, efficient router" that I reported last week was quite the opposite.
This time, I'm going to write out the linchpin events that will shock the system, and how they might bring about the bubble bursting. I should also be clear (and I'll get to this after the premium break) that this will be a series of events rather than one big one, though there are big ones to look out for.
I also think it might "take a minute," because "the bubble bursting" will be a succession of events that could take upwards of a year to fully happen. That’s been true for every bubble. Although people associate the implosion of the housing bubble with the “one big event” of Lehman Brothers collapsing in 2008, the reality is that it was preceded and followed by a bunch of other events, equally significant though not quite as dramatic.
One VC (who you'll read about shortly) predicted that, at the current rate of investment, the industry will run out of funding entirely within six quarters, putting us around February 2027 for things to have truly collapsed.
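For what it's worth, the date math is easy to check. Here's a minimal sketch in Python; the August 2025 starting point is my assumption (roughly when the prediction was made), while the six quarters are the VC's estimate:

```python
# Sanity-checking the "six quarters of runway" claim. The start date is
# my assumption; the six quarters are the estimate quoted below.
from datetime import date

start = date(2025, 8, 1)              # assumed starting point
months_of_runway = 6 * 3              # six quarters = 18 months
carry, month_index = divmod(start.month - 1 + months_of_runway, 12)
depleted = date(start.year + carry, month_index + 1, 1)
print(depleted)                       # 2027-02-01, i.e. roughly February 2027
```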
Here's a list of some of the things that I believe will have to happen for this era to be done, and which of them I consider truly essential.
- NVIDIA's Growth Slows: As discussed in the Hater's Guide To The AI Bubble, NVIDIA is the weak point in the Magnificent Seven (which collectively account for 35% of the value of the US stock market), specifically because of its success: it alone accounts for 19% of the Magnificent Seven's value, and that value is driven entirely by its ability to sell more and more GPUs every single quarter. It is inevitable that its growth slows, and once it does, the AI story crumbles with it. Perhaps it has a down quarter and then an up quarter, but that's three long months before the next chance to make the markets happy.
- I will add that there's a chance the bubble reinflates a touch if NVIDIA crushes earnings. Or maybe the market doesn't care? We'll find out at the end of next week.
- AI Funding Will Start Drying Up, And AI Companies Are The Most Funding-Dependent Companies of All Time: As The Information wrote last week, the "dry powder" (the total capital available to invest) has dropped, and venture capitalist Jon Sakoda of Decibel Partners believes that "...if VCs keep investing at today's clip, the industry would run out of money in six quarters." Remove OpenAI and Anthropic from the equation and the party continues until 2028...but we all know those are the companies that are getting the money.
- One of the major AI companies will collapse: OpenAI and Anthropic both burn billions of dollars a year, and have shown no interest in stopping, with Altman doing his best Lord Farquaad impression and saying that OpenAI is "willing to run at a loss" as long as it takes to get...somewhere. For the bubble to burst, one of these companies has to die, and I will explain how this might happen.
- In both of these cases, going public will be nigh-on impossible, and even a successful offering would expose what I believe to be their rotten economics.
- Big Tech Will Turn On AI: Meta's AI hiring freeze isn't enough: one of Meta, Google, Amazon, or Microsoft needs to bring an end to its capex burn, and needs to be definitive that this is A) happening and B) happening, at least in part, because it has "built enough" or "exceeded the opportunity in AI." These companies are barely making $35 billion in revenue from AI in 2025, and I doubt that revenue is increasing.
- Another part of this: The Markets Make Big Tech Put Up Or Shut Up: At no point have the markets really interrogated the revenue from AI. I can imagine some weird fucking games happening here. Microsoft, the only member of the Magnificent Seven outside of NVIDIA willing to talk about AI revenue, stopped reporting AI revenue in January, when it was at $13 billion "annualized" (so a little over $1 billion in revenue a month). One has to wonder if this means revenue is flat or falling, as "annualized" typically just means the latest month's revenue multiplied by twelve.
- I can also see a scenario where these companies start putting out obtuse "AI-enabled" revenue stats. I do not think this works, and if it does, it only puts off the inevitable, because they cannot avoid the capex crunch that's coming.
- This particular scenario is one that will happen in pieces.
- AI Startups Will Start Dying: As I've discussed previously, AI companies are currently raising at suicidal valuations that make it impossible for them to ever sell or go public.
- CoreWeave Will Die: AI data center developer CoreWeave is a time bomb, burdened with, to quote analyst Gil Luria of DA Davidson, "deteriorating operating income." It's also a critical partner to OpenAI, providing compute as part of an $11.9 billion, five-year contract, despite the fact that it's unclear whether it has even come close to finishing its data center development in Denton, Texas. That buildout has to be done by October, in part because that's when OpenAI is due to pay CoreWeave, and in part because that's when CoreWeave has to start paying back its massive, multi-billion-dollar DDTL 2.0 loan.
- The Abilene, Texas "Stargate" project fails to get off the ground...or OpenAI can't afford to pay for it if it does. This 4.5-gigawatt data center will, when (if?) fully operational, allegedly lead OpenAI to pay Oracle $30 billion a year by 2028, which is more than OpenAI's combined venture capital and revenue to date. If this expansion doesn't happen, that's bad, but even if it does, Oracle (unlike Microsoft) isn't going to accept "cloud credits," and for this to make sense, OpenAI would need to be making upwards of $4 billion in revenue a month and still have to raise a bunch of money.
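To put that Oracle number in context, here's a rough back-of-envelope sketch. The dollar figures come from this piece; the variable names and framing are mine, and this is illustrative arithmetic, not a model:

```python
# Back-of-envelope arithmetic on the reported Oracle commitment. All
# dollar figures come from this piece; this just divides them out.
oracle_bill_per_year = 30e9                          # reported $30B/year by 2028
oracle_bill_per_month = oracle_bill_per_year / 12
print(f"${oracle_bill_per_month / 1e9:.1f}B/month")  # $2.5B/month to Oracle alone

# OpenAI reportedly hit ~$1B in monthly revenue in July 2025. Hitting
# "$4 billion a month" implies roughly quadrupling revenue, before
# covering staff, other compute, and everything else it pays for.
current_monthly_revenue = 1e9
required_multiple = 4e9 / current_monthly_revenue
print(f"revenue must grow ~{required_multiple:.0f}x")  # ~4x
```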
The common thread through all of these points is that they are, for the most part, impossible to ignore. So far this bubble has inflated because the problems with AI — such as "it doesn't make any money" and "it burns billions of dollars" — have been dismissed until very recently as the necessary costs of the beautiful AI revolution. Now that things have begun to unravel, the intensity of criticism will increase gradually, rather than in one big moment that makes everyone say "we hate AI."
And it isn't just because of the money. CEOs like Tobias Lütke of Shopify have oriented their companies' entire cultures around AI, demanding in his case that "employees must demonstrate why AI cannot be used before requesting additional resources." Generative AI is, on some level, a kind of dunce detector — its flimsy and vague use cases having enough juice to impress the clueless Business Idiots who don't really engage with the production that makes their companies money. The specious, empty hype of Large Language Models — driven by a tech and business media that has given up on trying to understand them — symbolizes a kind of magic to these empty-headed goobers, and unwinding their "AI-first" cultures will be difficult...right up until the first guy does it, at which point everybody will follow.
AI has taken such a hold on our markets because it's symbolic of a few things:
- Executives' and do-nothing middle managers' ability to control and suppress labor by suggesting a tool exists that can replace it.
- The future of automation, even though Large Language Models are absolutely terrible at it.
- The validity of the "ideas men" that run large parts of our society, who believe that their superior brains make them "above" labor somehow.
In any case, I am going to try to write out, in detail, the things that I think will happen. I'll go into more conditions in this piece, and as discussed, I'm going to make some informed guesses and extrapolations, and give my thoughts on how things collapse.
I predict that the impact of Large Language Models over the next decade will be enormous, not in their actual innovation or returns, but in their ability to expose how little our leaders truly know about the world or labor, how willing many people are to accept the last thing a smart-adjacent person said, and how our markets and economy are driven by people with the most tenuous grasp on reality.
This will be an attempt to write down what I believe could happen in the next 18 months, the conditions that might accelerate the collapse, and how the answers to some of my open questions — such as how these companies book revenue and burn compute — could influence outcomes.
This...is AI Bubble 2027.