Premium: The Hater's Guide to OpenAI

Edward Zitron

Soundtrack: The Dillinger Escape Plan — Setting Fire To Sleeping Giants


In what The New Yorker’s Andrew Marantz and Ronan Farrow called a “tense call” after his brief ouster from OpenAI in 2023, Sam Altman seemed unable to reckon with a “pattern of deception” across his time at the company: 

“This is just so fucked up,” he said repeatedly, according to people on the call. “I can’t change my personality.” Altman says that he doesn’t recall the exchange. “It’s possible I meant something like ‘I do try to be a unifying force,’ ” he told us, adding that this trait had enabled him to lead an immensely successful company.

He attributed the criticism to a tendency, especially early in his career, “to be too much of a conflict avoider.” But a board member offered a different interpretation of his statement: “What it meant was ‘I have this trait where I lie to people, and I’m not going to stop.’ ” Were the colleagues who fired Altman motivated by alarmism and personal animus, or were they right that he couldn’t be trusted?

No, he cannot. Sam Altman is a deeply untrustworthy individual who, like OpenAI itself, lives on the fringes of truth, using a compliant media to launder statements that are, for legal reasons, difficult to call “lies” but certainly resemble them. For example, back in November 2025, Altman told venture capitalist Brad Gerstner that OpenAI was doing “well more” than $13 billion in annual revenue when the company would do — and this is assuming you believe CNBC’s source — $13.1 billion for the entire year. I guarantee you that, if pressed, Altman would say that OpenAI was doing “well more than” $13 billion of annualized revenue at the time, which was likely true based on OpenAI’s stylized math, which works out as so (per The Information):

OpenAI multiplies its total revenue for a recent four-week period by 13, which equals 52 weeks —or a full year, according to a person with direct knowledge of its finances.

OpenAI shares 20% of its revenue with Microsoft due to their multifaceted business arrangement, but OpenAI’s financial statements count sales before the company gives Microsoft its slice.

This means that, per CNBC’s reporting, OpenAI barely scratched $10 billion in revenue in 2025, and that every single story about OpenAI’s revenue other than my own reporting (which came directly from Azure) massively overinflates its sales. The Information’s piece about OpenAI hitting $4.3 billion in revenue in the first half of 2025 should really say “$3.44 billion,” but even then, my own reporting suggests that OpenAI likely made a mere $2.27 billion in the first half of last year, meaning that even that $10 billion number is questionable.
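To make the arithmetic concrete, here is a minimal sketch of the annualization and revenue-share math described above. The figures are the ones reported by CNBC and The Information; the $1.008 billion four-week figure is my own back-calculation from the reported $13.1 billion annual number, not a reported value.

```python
# Sketch of OpenAI's "stylized" revenue math, per The Information:
# annualized revenue = a recent four-week period's revenue x 13, and
# revenue is counted before Microsoft's 20% share is deducted.

def annualized(four_week_revenue: float) -> float:
    """OpenAI-style annualization: a four-week period times 13 (= 52 weeks)."""
    return four_week_revenue * 13

def net_of_microsoft(gross_revenue: float, share: float = 0.20) -> float:
    """What's left after Microsoft's 20% revenue share."""
    return gross_revenue * (1 - share)

# A four-week period grossing roughly $1.008B annualizes to ~$13.1B...
gross_annual = annualized(1.008e9)

# ...but after Microsoft's 20% cut, a "$13.1 billion" year nets ~$10.5B,
# which is how "well more than $13 billion" becomes "barely $10 billion."
net_annual = net_of_microsoft(13.1e9)
```

This is why the same underlying business can be described as a "$13.1 billion" company or a "barely $10 billion" one, depending on whether you annualize a good month and whether you count Microsoft's slice.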

It’s also genuinely insane to me that more people aren’t concerned about OpenAI, not as a creator of software, but as a business entity continually misleading its partners, the media, and the general public.

To put it far more bluntly, the media has failed to hold OpenAI accountable, enabling a company built on deception and rationalizing and normalizing ridiculous, impossible ideas just because Sam Altman said them.

The Media Must Stop Enabling OpenAI and Acknowledge That It Cannot Afford Its Commitments

Let me give you a very obvious example. About a month ago, per CNBC, “...OpenAI reset spending expectations, telling investors its compute target was around $600 billion by 2030.”

This is, on its face, a completely fucking insane thing to say, even if OpenAI were a profitable company. For context, Microsoft, a company with hundreds of billions of dollars of annual revenue, has about $42 billion in quarterly operating expenses.

OpenAI cannot afford to pay these agreements. At all. Hell, I don’t think any company can! And instead of saying that, or acknowledging the problem, CNBC simply repeats the statement of “$600 billion in compute spend,” laundering Altman and OpenAI’s reputation as it did (with many of the same writers and TV hosts) with Sam Bankman-Fried. Mere months before the collapse of FTX, CNBC claimed that the exchange had grown revenue by 1,000% “during the crypto craze,” with its chief executive having “...survived the market wreckage and still expanded his empire.”

You might say “how could we possibly know?” and the answer is “read CNBC’s own reporting.” That reporting said that Bankman-Fried intentionally kept FTX in the Bahamas, and that he had intentionally reduced his stake in Canadian finance firm Voyager (which eventually collapsed on similar terms to FTX) to avoid regulatory disclosures around Alameda, Bankman-Fried’s investment vehicle. That piece was written by a reporter who has since helped launder the reputation of Stargate Abilene, claiming it was “online” despite only a fraction of its capacity actually existing.

The same goes for OpenAI’s $300 billion deal with Oracle that OpenAI cannot afford and Oracle does not have the capacity to serve. These deals do not make any logical sense, the money does not exist, and the utter ridiculousness of reporting them as objective truths rather than ludicrous overpromises allowed Oracle’s stock to pump and OpenAI to continue pretending it could actually ever have hundreds of billions of dollars to spend.

OpenAI now claims it makes $2 billion a month, but even then I have serious questions about how much of that is real money, considering the proliferation of discounted subscriptions (such as the three months of discounted ChatGPT Plus offered when you try to cancel) and free compute deals: the $2,500 in credits given to Ramp customers, millions of tokens in exchange for sharing your data, the $100,000 token grants given to AI policy researchers, and the OpenAI For Startups program that appears to offer thousands (or even tens of thousands) of dollars of tokens to startups. While I don’t have proof, I would bet that OpenAI includes these free tokens in its revenues, counting the giveaways as sales on one side of the ledger and as marketing spend on the other.

I also think that revenue growth is a little too convenient, accelerating only to match Anthropic, which recently “hit” $30 billion in annualized revenue under suspicious circumstances. I can only imagine OpenAI will soon announce that it’s actually hit $35 billion in annualized revenue, or perhaps $40 billion in annualized revenue, and if that happens, you know that OpenAI is just making shit up. 

Regardless, even if OpenAI is actually making $2 billion a month in revenue, it’s likely losing anywhere from $4 billion to $10 billion to make that revenue. Per my own reporting from last year, OpenAI spent $8.67 billion on inference to make $4.329 billion in revenue, and that’s not including training costs that I was unable to dig up — and those numbers were before OpenAI spent tens of millions of dollars in inference costs propping up its doomed Sora video generation product, or launched its Codex coding environment. In simpler terms, OpenAI’s costs have likely accelerated dramatically with its supposed revenue growth.
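As a rough illustration of why losses would scale with revenue, the figures from my earlier reporting imply that inference alone cost roughly twice what it earned. The extrapolation below is my own back-of-envelope assumption, not a reported figure: it simply asks what $2 billion a month in revenue would imply if that cost ratio held.

```python
# Back-of-envelope ratio from the reported figures: $8.67B of inference
# spend against $4.329B of revenue over the same period.
inference_cost = 8.67e9
revenue = 4.329e9

# Roughly $2 of inference cost for every $1 of revenue.
cost_per_revenue_dollar = inference_cost / revenue

# Assumption, not a reported number: if that ratio held, $2B/month of
# revenue would imply ~$4B/month in inference costs alone, before
# training, staff, or data center buildout.
monthly_revenue = 2e9
implied_monthly_inference = monthly_revenue * cost_per_revenue_dollar
```

The point of the sketch is not precision but direction: under these assumptions, revenue growth does not close the gap, it widens it.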

And all of this is happening before OpenAI has to spend the majority of its capital. Oracle has, per my sources in Abilene, only managed to successfully build and generate revenue from two buildings out of the eight that are meant to be done by the end of the year, which means that OpenAI is only paying a small fraction of the final costs of one Stargate data center. Its $138 billion deal with Amazon Web Services is only in its early stages, and as I explained a few months ago in the Hater’s Guide To Microsoft, Redmond’s Remaining Performance Obligations that it expects to make revenue from in the next 12 months have remained flat for multiple quarters, meaning that OpenAI’s supposed purchase of “an incremental $250 billion in Azure compute” is yet to commence.

In practice, this means that OpenAI’s expenses are likely to massively increase in the coming months. And while the “$122 billion” funding round it raised — with $35 billion of it contingent on either AGI or going public (Amazon), and $60 billion of it paid in tranches by SoftBank and NVIDIA — may seem like a lot, keep in mind that OpenAI had received $22.5 billion from SoftBank on December 31, 2025, a little under four months ago.

This suggests that either OpenAI is running out of capital, or has significant up-front commitments it needs to fulfil, requiring massive amounts of cash to be sent to Amazon, Microsoft, CoreWeave (which it pays on net 360 terms) and Oracle. 

And if I’m honest, I think the entire goal of the funding round was to plug OpenAI’s leaky finances long enough to take it public, against the advice of CFO Sarah Friar.

OpenAI Is Rushing Toward IPO Against The Wishes of Its CFO — And It Has Every Warning Sign That Something Is Very, Very Wrong With Its Finances

One under-discussed part of Farrow and Marantz’s piece was a quote about OpenAI’s overall finances, emphasis mine:

As OpenAI prepares for its potential I.P.O., Altman has faced questions not only about the effect of A.I. on the economy—it could soon cause severe labor disruption, perhaps eliminating millions of jobs—but about the company’s own finances. Eric Ries, an expert on startup governance, derided “circular deals” in the industry—for example, OpenAI’s deals with Nvidia and other chip manufacturers—and said that in other eras some of the company’s accounting practices would have been considered “borderline fraudulent.” The board member told us, “The company levered up financially in a way that’s risky and scary right now.” (OpenAI disputes this.)

As I wrote up earlier in the week, OpenAI CFO Sarah Friar does not believe, per The Information, that OpenAI is ready to go public, and is concerned about both revenue growth slowing and OpenAI’s ability to pay its bills:

She told some colleagues earlier this year that she didn’t believe the company would be ready to go public in 2026, because of the procedural and organizational work needed and the risks from its spending commitments, according to a person who spoke to her. She said she wasn’t sure yet whether OpenAI would need to pour so much money into obtaining AI servers in the coming years or whether its revenue growth, which has been slowing, would support the commitments, said the person who spoke to her.

To make matters worse, Friar also no longer reports to Altman — and god is it strange that the CFO doesn’t report to the CEO! — and it’s actually unclear who she reports to at all, as her current manager, Fiji Simo, has taken an indeterminately long medical leave of absence. Friar has also, per The Information, been left out of conversations around financial planning for data center capacity.

These are the big, flashing warning signs of a company with serious financial and accounting issues, run by Sam Altman, a CEO with a well-documented pattern of lies and deceit. Altman is sidelining his CFO and rushing the company to go public so that his investors can cash out and the larger con of OpenAI can be dumped onto public investors.

And beneath the surface, the raw economics of OpenAI do not make sense.

OpenAI Can Only Exist As Long As Venture Capital Subsidizes Its Business and Its Customers, And Its Funders and Infrastructure Partners Have Access To Debt

You’ll notice I haven’t talked much about OpenAI’s products yet, and that’s because I do not believe they can exist without venture capital funding them and the customers that buy them. These products only have market share as long as other parties continue to build capacity or throw money into the furnace.

To explain:

While OpenAI is not systemically necessary, the continued enabling and normalization of its egregious and impossible promises has created an existential threat to multiple parties named above. Its continued existence requires more money than anybody has ever raised for a company — private or public — and in the event it’s allowed to go public, I believe that both retail investors and large equity investors like SoftBank will be left holding the bag.

OpenAI has a fundamental lack of focus as a business, despite the many articles over the last year claiming that it’s working on a “SuperApp” and has some sort of renewed plan to take on whoever OpenAI perceives as the competition in any given calendar month.

Everything OpenAI does is a reaction to somebody else. Its Atlas browser was a response to Perplexity’s Comet browser, its first (of multiple!) Code Reds in 2025 was a reaction to Google’s Gemini 3, and its rapid deployment of its Codex model and platform was to compete with Anthropic’s Claude Code. I’ve read about this company and the surrounding industry for hours a day for several years, and I can’t think of a single product that OpenAI has launched first. Even its video-generating social network app Sora was beaten to market by five days by Meta’s putrid and irrelevant “Vibes.”

Actually, that’s not true. OpenAI did have one original idea in 2025 — the launch of GPT-5, a much-anticipated new model that included a “model router” to make it “more efficient,” except it turned out that it boofed on benchmarks and that the model router actually made it (as I reported last year) more expensive, which led to the router being retired in December 2025.

OpenAI Is A Confidence Game Empowered By The Media and Investors That Is Rigged To Explode

I tend to be pretty light-hearted in what I write, but please take me seriously when I say I have genuine concerns about the dangers posed by OpenAI.

I believe that OpenAI is an incredibly risky entity, not due to the power of its models or its underlying assets, but due to Sam Altman’s ability to con people and to find others who will con in his stead. Those responsible for rooting out con artists — regulators, investors, and the media — have not simply failed, but actively assisted Altman in this con.

Here are the crucial elements of the con:

  • Creating a halo of uncertainty around the actual efficacy of LLMs, building a cult of personality around a technology whose real outcomes were so obfuscated that it could be sold based on what it might do rather than what it actually does.
  • Creating a halo of “genius” around Altman himself, aided by constant and vague threats of human destruction with the suggestion that only Altman could solve them.
  • Normalizing the idea that it’s both necessary and important to let a company burn billions of dollars.
  • Normalizing the idea that it’s okay that a company has perpetual losses, and perpetuating the idea that these losses are necessary for innovation to continue at large.

Sam Altman is a dull, mediocre man who loves money and power. He appears to be superficially charming, but his actual skill is ingratiating himself with others and having them owe him favors, or otherwise feel indebted to him. He remembers people’s names and where he met them, and is very good at emailing people, writing checks, or finding reasons for somebody else to write a check. He is not technical — he can barely code and misunderstands basic machine learning (to quote Futurism) — but is very good at making the noises that people want to hear, be they big scary statements that confirm their biases or massive promises of unlimited revenue that don’t really make any rational sense.

While OpenAI might have started on noble terms, it has since morphed into a massive con led by the Valley’s most notable con artist.

I realize that those who like AI might find this offensive, but what else do you call somebody who makes promises he can’t keep ($300 billion to Oracle, $200 billion of revenue by 2030), spreads nonsensical financials (promises to spend $600 billion in compute), announces deals that don’t exist (see: NVIDIA’s $100 billion funding and the entire Stargate project), and speaks in hyperbolic terms to pump the value of his stock (such as basically every time he talks about Superintelligence)?

Altman has taken advantage of a tech and business media that wants to see him win, a market divorced from true fundamentals, desperate venture capitalists at the end of their rope, hyperscalers that have run out of hypergrowth ideas, and multiple large companies like Oracle and SoftBank that are run by people who can’t do maths.

OpenAI is a pseudo-company that can only exist with infinite resources, its software sold on lies, its infrastructure built and paid for by other parties, and its entire existence fueled by compounding layers of leverage and risk.

OpenAI has never made sense, and was only rationalized through a network of co-conspirators. OpenAI has never had a path to profitability, and never had a product that was worthy of the actual cost of selling it. The ascension of this company has only been possible as part of an exploitation of ignorance and desperation, and its collapse will be dangerous for the entire tech industry.

Today I’ll explain in great detail the sheer scale of Sam Altman’s con, how it was exacted, the danger it poses to its associated parties, and how it might eventually collapse.

This is the Hater’s Guide To OpenAI, or Sam Altman, Freed. 
