OpenAI Needs $400 Billion In The Next 12 Months

Edward Zitron 23 min read

Hello readers! This premium edition features a generous free intro because I like to try to get some of the info out there, but the really in-depth stuff is below the cut. Nevertheless, I deeply appreciate anyone subscribing.

On Monday I will have my biggest scoop ever, and it'll go out on the free newsletter because of its scale. This is possible because of people supporting me on the premium. Thanks so much for reading.


One of the only consistent critiques of my work is that I’m angry, irate, that I am taking myself too seriously, that I’m swearing too much, and that my arguments would be “better received” if I “calmed down.”

Fuck that.

Look at where being timid or deferential has got us. Broadcom and OpenAI have announced another 10GW of custom chips and supposed capacity, which will supposedly be fully deployed by the end of 2029, and still the media neutrally reports these things as not simply doable, but rational.

To be clear, building a gigawatt of data center capacity costs at least $32.5 billion (though Jensen Huang says the computing hardware alone costs $50 billion, which excludes the buildings themselves and the supporting power infrastructure, and Barclays Bank says $50 billion to $60 billion) and takes two and a half years. 

In fact, fuck it — I’m updating my priors. Let’s say it’s a nice, round $50 billion per gigawatt of data center capacity. $32.5 billion is what it cost to build Stargate Abilene, but that estimate was based on Crusoe’s 1.2GW of compute for OpenAI being part of a $15 billion joint venture, which works out to about $12.5 billion per gigawatt of compute. Abilene’s eight buildings are meant to hold 50,000 NVIDIA GB200 GPUs each, along with their associated networking infrastructure, so let’s say a gigawatt is around 333,333 Blackwell GPUs at $60,000 apiece, or about $20 billion a gigawatt.

However, this maths assumed that every associated cost would be paid by the joint venture. Lancium, the owner of the land and the company allegedly building the power infrastructure, has now raised over a billion dollars.

This maths also didn’t include the cost of the associated networking infrastructure around the GB200s. So, guess what? We’re doing $50 billion now. 
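
The estimates above can be sanity-checked with a few lines of arithmetic. A minimal sketch, using only the figures cited in this piece (the $15 billion joint venture, 1.2GW of compute, and 333,333 Blackwell GPUs at $60,000 apiece) — these are the article’s numbers, not independent data:

```python
# Per-gigawatt cost estimates, using only the figures cited above.

# Estimate 1: the Stargate Abilene joint venture
jv_total_bn = 15.0       # $15B joint venture with Crusoe
jv_compute_gw = 1.2      # 1.2GW of compute for OpenAI
per_gw_jv = jv_total_bn / jv_compute_gw        # ≈ $12.5B per gigawatt

# Estimate 2: GPU hardware alone
gpus_per_gw = 333_333    # Blackwell GPUs per gigawatt
gpu_price = 60_000       # $60,000 apiece
per_gw_gpus = gpus_per_gw * gpu_price / 1e9    # ≈ $20B per gigawatt

print(f"JV-based estimate:  ${per_gw_jv:.1f}B per GW")
print(f"GPU-based estimate: ${per_gw_gpus:.1f}B per GW")
```

Neither estimate includes the buildings, the power infrastructure, or the networking gear, which is why the piece rounds up to $50 billion per gigawatt.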

OpenAI has now promised 33GW of capacity across AMD, NVIDIA, Broadcom and the seven data centers built under Stargate, though one of those — in Lordstown, Ohio — is not actually a data center, with my source being “SoftBank,” speaking to WKBN in Lordstown, Ohio, which said it will “not be a full-blown data center,” and instead be “at the center of cutting-edge technology that will encompass storage containers that will hold the infrastructure for AI and data storage.”

This wasn’t hard to find, by the way! I googled “SoftBank Lordstown” and up it came, ready for me to read with my eyes.

Putting all of that aside, I think it’s time that everybody started taking this situation far more seriously, by which I mean acknowledging the sheer recklessness and naked market manipulation taking place. 

But let’s make it really simple, and write out what’s meant to happen in the next year:

  • In the second half of 2026, OpenAI and Broadcom will tape out and successfully complete an AI inference chip, then manufacture enough of them to fill a 1GW data center.
    • That data center will be built in an as-yet-unknown location, and will have at least 1GW of power, but more realistically it will need 1.2GW to 1.3GW of power, because for every 1GW of IT load, you need extra power capacity in reserve for the hottest day of the year, when the cooling system works hardest and power transmission losses are highest.
    • OpenAI does not appear to have a site for this data center, and thus has not broken ground on it.
  • In the second half of 2026, AMD and OpenAI will begin “the first 1 gigawatt deployment of AMD Instinct MI450 GPUs.” 
    • This will take place in an as-yet-unnamed data center location, which, to be completed by that time, would have needed to start construction and early power procurement at least a year ago, if not more.
  • In the second half of 2026, OpenAI and NVIDIA will deploy the first gigawatt of NVIDIA’s Vera Rubin GPU systems as part of their $100 billion deal.
    • These GPUs will be deployed in a data center of some sort, which remains unnamed, but for them to meet this timeline they will need to have started construction at least a year ago.

In my most conservative estimate, these data centers will cost over $100 billion, and to be clear, a lot of that money needs to already be in OpenAI’s hands to get the data centers built. Or, some other dupe has to (a) have the money, and (b) be willing to front it.

All of this is a fucking joke. I’m sorry, I know some of you will read this, cowering from your screen like a B-movie vampire that just saw a crucifix, but it is a joke, and it is a fucking stupid joke, the only thing stupider being that any number of respectable media outlets are saying these things like they’ll actually happen.

There is not enough time to build these things. If there was enough time, there wouldn’t be enough money. If there was enough money, there wouldn’t be enough transformers, electrical-grade steel, or specialised talent to run the power to the data centers. Fuck! Piss! Shit! Swearing doesn’t change the fact that I’m right — none of what OpenAI, NVIDIA, Broadcom, and AMD are saying is possible, and it’s fair to ask why they’re saying it.

I mean, we know. Number must go up, deal must go through, and Jensen Huang wouldn’t go on CNBC and say “yeah man if I’m honest I’ve got no fucking clue how Sam Altman is going to pay me, other than with the $10 billion I’m handing him in a month. Anyway, NVIDIA’s accounts receivables keep increasing every quarter for a normal reason, don’t worry about it.” 

But in all seriousness, we now have three publicly-traded tech firms that have all agreed to join Sam Altman’s No IT Loads Refused Cash Dump, all promising to do things on insane timelines that they — as executives of giant hardware manufacturers, or human beings with warm bodies and pulses and sciatica — all must know are impossible to meet. 

What is the media meant to do? What are we, as regular people, meant to do? These stocks keep pumping based on completely nonsensical ideas, and we’re all meant to sit around pretending things are normal and good. They’re not! At some point somebody’s going to start paying people actual, real dollars at a scale that OpenAI has never truly had to reckon with.

In this piece, I’m going to spell out in no uncertain terms exactly what OpenAI has to do in the next year to fulfil its destiny — having a bunch of capacity that cost ungodly amounts of money to serve demand that never arrives.

Yes, yes, I know, you’re going to tell me that OpenAI has 800 million weekly active users, and putting aside the fact that OpenAI’s own research (see page 10, footnote 20) says it double-counts users who are logged out if they use different devices, OpenAI is saying it wants to build 250 gigawatts of capacity by 2033, which will cost it $10 trillion, or one-third of the entire US economy last year.

Who the fuck for? 

One thing that’s important to note: In February, Goldman Sachs estimated that global data center capacity was around 55GW. In essence, OpenAI says it wants to add nearly five times that capacity — something that has grown organically over the past thirty or so years — by itself, and in eight years.

And yes, it’ll cost one-third of America’s output in 2024. This is not a sensible proposition. 

Even if you think that OpenAI’s growth is impressive — it went from 700 million to 800 million weekly active users in the last two months — that is not the kind of growth that says “build capacity assuming that literally every single human being on Earth uses this all the time.” 

As an aside: Altman is already lying about his available capacity. According to an internal Slack note seen by Alex Heath of Sources, Altman claims that OpenAI started the year with “around” 230 megawatts of capacity and is “now on track to exit 2025 north of 2GW of operational capacity.” Unless I’m much mistaken, OpenAI doesn’t have any capacity of its own — and according to Mr. Altman, it’s somehow built or acquired 1.7GW of capacity from somewhere without disclosing it.

For context, 1.7GW is the equivalent of every data center in the UK that was operational last year.

Where is this coming from? Is this CoreWeave? It only has — at most — 900MW of capacity by the end of 2025. Where’d all the extra capacity come from? Who knows! It isn’t Stargate Abilene, that’s for sure — it’s only got one operational building and 200MW of power, meaning it can only really support 130MW of IT load, because of that pesky reserve I mentioned earlier.
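
The gap between nameplate power and usable IT load can be sketched as a simple ratio. This is a rough illustration of the reserve logic described above, not an engineering model; the 1.25x overhead factor is an assumption I’ve taken from the midpoint of the 1.2GW-to-1.3GW range cited earlier:

```python
def power_needed_gw(it_load_gw, overhead=1.25):
    """Power capacity needed for a given IT load, assuming ~1.2-1.3x
    overhead for peak-day cooling and transmission losses (midpoint used)."""
    return it_load_gw * overhead

def usable_it_load_mw(power_mw, overhead=1.25):
    """Usable IT load given available power, under the same assumption."""
    return power_mw / overhead

print(power_needed_gw(1.0))     # 1GW of IT load needs ~1.25GW of power
print(usable_it_load_mw(200))   # Abilene's 200MW supports 160MW at this factor
```

Note that the Abilene figures above (130MW of IT load from 200MW of power) imply an even larger buffer, closer to 1.5x — the overhead factor varies site by site, and it only ever makes the capacity math worse.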

Anyway, what exactly is OpenAI doing? Why does it need all this capacity? Even if it hits its $13 billion revenue projection for this year (it’s only at $5.3 billion or so as of the end of August, and for OpenAI to hit its targets it’ll need to make $1.5bn+ a month very soon), does it really think it’s going to effectively 10x the entire company from here? What possible sign is there of that happening, other than a conga line of different executives willing to stake their reputations on blatant lies peddled by a man best known for needing, at any given moment, another billion dollars?

According to The Information, OpenAI spent $6.7 billion on research and development in the first six months of 2025, and according to Epoch AI, most of the $5 billion it spent on research and development in 2024 was spent on research, experimental, or derisking runs (basically running tests before doing the final testing run) and models it would never release, with only $480 million going to training actual models that people will use. 

I should also add that GPT 4.5 was a dud; even Altman called it giant and expensive, and said it “wouldn’t crush benchmarks.”

I’m sorry, but what exactly is it that OpenAI has released in the last year-and-a-half that was worth burning $11.7 billion for? GPT 5? That was a huge letdown! Sora 2? The giant plagiarism machine that it’s already had to neuter?

What is it that any of you believe that OpenAI is going to do with these fictional data centers? 

Why Does ChatGPT Need $10 Trillion Of Data Centers?

The problem with ChatGPT isn’t just that it hallucinates — it’s that you can’t really say exactly what it can do, because you can’t really trust that it can do anything. Sure, it’ll get a few things right a lot of the time, but what task is it able to do every time that you actually need? 

Say the answer is “something that took me an hour now takes me five minutes.” Cool! How many of those do you get? Again, OpenAI wants to build 250 gigawatts of data centers, and will need around ten trillion dollars to do it. “It’s going to be really good” is no longer enough.

And no, I’m sorry, OpenAI is not building AGI. Altman just told Politico a few weeks ago that if we didn’t have “models that are extraordinarily capable and do things that we ourselves cannot do” by 2030 he would be “very surprised.”

Wow! What a stunning and confident statement. Let’s give this guy the ten trillion dollars he needs! And he’s gonna need it soon if he wants to build 250 gigawatts of capacity by 2033.

But let’s get a little more specific.

Based on my calculations, in the next six months, OpenAI needs at least $50 billion to build a gigawatt of data centers for Broadcom — and to hit its goal of 10 gigawatts of data centers by the end of 2029, at least another $200 billion in the next 12 months. That’s not including at least $50 billion to build a gigawatt of data centers for NVIDIA, $40 billion to pay for its 2026 compute, at least $50 billion to buy chips and build a gigawatt of data centers for AMD, at least $500 million to build its consumer device (and it can’t seem to work out what to build), and at least a billion dollars to hand off to ARM for a CPU to go with the new chips from Broadcom.

That’s $391.5 billion! That’s $23.5 billion more than the $368 billion of global venture capital raised in 2024! That’s nearly 11 times Uber’s lifetime funding ($35.8 billion), or 5.7 times the $67.6 billion in capital expenditures that Amazon spent building Amazon Web Services.
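
The $391.5 billion figure is just the sum of the line items above. A quick tally, using this piece’s own numbers (all in USD billions):

```python
# The article's line items for OpenAI's next-12-months cash needs (USD billions).
line_items = {
    "Broadcom 1GW data center": 50.0,
    "10GW-by-2029 buildout (next 12 months)": 200.0,
    "NVIDIA 1GW data center": 50.0,
    "2026 compute contracts": 40.0,
    "AMD chips and 1GW data center": 50.0,
    "Consumer device": 0.5,
    "ARM CPU": 1.0,
}

total = sum(line_items.values())
print(f"Total: ${total}B")                        # $391.5B
print(f"Over 2024 global VC: ${total - 368.0}B")  # $23.5B more than $368B
```

And that total still excludes sales and marketing, salaries, and storage — the costs that push the piece’s headline number to a rounded $400 billion.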

On top of all of this are OpenAI’s other costs. According to The Information, OpenAI spent $2 billion alone on Sales and Marketing in the first half of 2025, and likely spends billions of dollars on salaries, meaning that it’ll likely need at least another $10 billion on top. As this is a vague cost, I’m going with a rounded $400 billion number, though I believe it’s actually going to be more.

And to be clear, to complete these deals by the end of 2026, OpenAI needs large swaths of this money by February 2026. 

OpenAI Needs Over $400 Billion In The Next 12 Months To Complete Any Of These Deals — And Sam Altman Doesn’t Have Enough Time To Build Any Of It

I know, I know, you’re going to say that OpenAI will simply “raise debt” and “work it out,” but OpenAI has less than a year to do that, because OpenAI has promised in its own announcements that all of these things would happen by the end of December 2026, and even if they’re going to happen in 2027, data centers require actual money to begin construction, and Broadcom, NVIDIA and AMD are going to actually require cash for those chips before they ship them.

Even if OpenAI finds multiple consortiums of paypigs to take on the tens of billions of dollars of data center funding, there are limits, and based on OpenAI’s aggressive (and insane) timelines, they will need to raise multiple different versions of the largest known data center deals of all time, multiple times a year, every single year. 

Say that happens. OpenAI will still need to pay those compute contracts with Oracle, CoreWeave, Microsoft (I believe its Azure credits have run out) and Google (via CoreWeave) with actual, real cash — $40 billion worth — when it’s already burning $9.2 billion on compute in the first half of 2025 against revenues of $4.3 billion. OpenAI will still need to pay its staff, its storage, and the sales and marketing department that cost it $2 billion in the first half of 2025, all while converting its non-profit into a for-profit by the end of the year, or it loses $20 billion in funding from SoftBank.

Also, if it doesn’t convert to a for-profit by October 2026, its $6.6 billion funding round from 2024 converts to debt.

The Global Financial System Cannot Afford OpenAI

The burden that OpenAI is putting on the financial system is remarkable, and actively dangerous. At this rate, it would absorb the capital expenditures of multiple hyperscalers, requiring multiple $30 billion debt financings a year, and to hit its goal of 250 gigawatts by the end of 2033, it will likely have to outpace the capital expenditures of any other company in the world.

OpenAI is an out-of-control monstrosity that is going to harm every party that depends upon it completing its plans. For it to succeed, it will have to absorb over a trillion dollars a year — and for it to hit its target, it will likely have to eclipse the $1.7 trillion in global private equity deal volume in 2024, and become a significant part of global trade ($33 trillion in 2025).

There isn’t enough money to do this without diverting most of the money that exists to doing it, and even if that were to happen, there isn’t enough time to do any of the stuff that has been promised in anything approaching the timelines promised, because OpenAI is making this up as it goes along and somehow everybody is believing it. 

At some point, OpenAI is going to have to actually do the things it has promised to do, and the global financial system is incapable of supporting them.

And to be clear, OpenAI cannot really do any of the things it’s promised.

Just take a look at the Oracle deal!

None of this bullshit is happening, and it’s time to be honest about what’s actually going on.

OpenAI is not building “the AI industry,” as this is capacity for one company that burns billions of dollars and has absolutely no path to profitability. 

This is a giant, selfish waste of money and time, one that will collapse the second that somebody’s confidence wavers.

I realize that it’s tempting to write “Sam Altman is building a giant data center empire,” but what Sam Altman is actually doing is lying. He is lying to everybody. 

He is saying that he will build 250GW of data centers in the space of eight years, an impossible feat, requiring more money than anybody would ever give him in volumes and intervals that are impossible for anybody to raise. 

Sam Altman’s singular talent is finding people willing to believe his shit or join him in an economy-supporting confidence game, and the recklessness of continuing to do so will only harm retail investors — regular people beguiled by the bullshit machine and bullshit masters making billions promising they’ll make trillions.

To prove it, I’m going to write down everything that will need to take place in the next twelve months for this to happen, and illustrate the timelines of everything involved. 
