If you enjoy this free newsletter, why not subscribe to Where's Your Ed At Premium? It's $7 a month or $70 a year, and helps support me putting out these giant free newsletters!
At the end of November, NVIDIA put out an internal memo (that was leaked to Barron's reporter Tae Kim, who is a huge NVIDIA fan and knows the company very well, so take from that what you will) that sought to get ahead of a few things that had been bubbling up in the news, a lot of which I covered in my Hater’s Guide To NVIDIA (which includes a generous free intro).
Long story short, people have a few concerns about NVIDIA, and guess what, you shouldn’t have any concerns, because NVIDIA’s very secret, not-to-be-leaked-immediately document spent thousands of words very specifically explaining how NVIDIA was fine and, most importantly, nothing like Enron.
As an aside: NVIDIA wrote this note as a response to both Michael Burry and a guy called “Shanaka Anslem Perera,” who wrote a piece called “The Algorithm That Detected a $610 Billion Fraud: How Machine Intelligence Exposed the AI Industry’s Circular Financing Scheme” that I’ve been sent about 11 times.
The reason I’m not linking to this piece is simple: it’s full of bullshit. In one part, Perera talks about “major semiconductor distributor Arrow Electronics” stating things in its Q3 2025 earnings about NVIDIA, yet Arrow makes no statements about NVIDIA of any kind on its earnings call, 10-Q or earnings presentation. If you need another example, Perera claims that when “Nvidia launched the Hopper H100 architecture in Q2 fiscal 2023—also amid reported supply constraints and strong demand—inventory declined 18% quarter-over-quarter as the company fulfilled backlogged orders.”
Actually looking at NVIDIA’s inventory for that period shows that inventory increased quarter over quarter. I have not heard of Perera before, but his LinkedIn says he is the “CEO at Pet Express Sri Lanka.” I would suggest getting your financial advice elsewhere, and at a minimum, making sure that you read outlets that actually source their data.
Anyway, all of this is fine and normal. Companies do this all the time, especially successful ones, and there is nothing to be worried about here, because after reading all seven pages of the document, we can all agree that NVIDIA is nothing like Enron.
No, really! NVIDIA is nothing like Enron, and it’s kind of weird that you’re saying that it is! Why would you say anything about Enron? NVIDIA didn’t say anything about Enron.
Okay, well now NVIDIA said something about Enron, but that’s because fools and vagabonds kept suggesting that NVIDIA was like Enron, and very normally, NVIDIA has decided it was time to set the record straight.
And I agree! I truly agree. NVIDIA is nothing like Enron.
Putting aside how I might feel about the ethics or underlying economics of generative AI, NVIDIA is an incredibly successful business with incredible profits, and it holds an effective monopoly through CUDA (explained here), the software layer underpinning everything that runs on its GPUs (which, right now, mostly means generative AI, and not really much else with any kind of revenue potential).
And yes, while I believe that one day this will all be seen as one of the most egregious wastes of capital of all time, for the time being, Jensen Huang may be one of the most successful salespeople in business history.
Nevertheless, people have somewhat run away with the idea that NVIDIA is Enron, in part because of the weird, circular deals it’s built with Neoclouds — dedicated AI-focused cloud companies — like CoreWeave, Lambda and Nebius, which run data centers full of GPUs sold by NVIDIA, GPUs they then use as collateral for loans to buy more GPUs from NVIDIA.
Yet as dodgy and weird and unsustainable as this is, it isn’t illegal, and it certainly isn’t Enron, because, as NVIDIA has been trying to tell you, it is nothing like Enron!
Now, you may be a little confused — I get it! — that NVIDIA is bringing up Enron at all. Nobody seriously thought that NVIDIA was like Enron before (though JustDario, who has been questioning its accounting practices for years, is a little suspicious), because Enron was one of the largest criminal enterprises in history, and NVIDIA is at worst, I believe, a big, dodgy entity that is doing whatever it can to survive.
Wait, what’s that? You still think NVIDIA is Enron? What’s it going to take to convince you? I just told you NVIDIA isn’t Enron! NVIDIA itself has shown it’s not Enron, and I’m not sure why you keep bringing up Enron all the time!
Stop being an asshole. NVIDIA is not Enron!
Look, NVIDIA’s own memo said that “NVIDIA does not resemble historical accounting frauds because NVIDIA's underlying business is economically sound, [its] reporting is complete and transparent, and [it] cares about [its] reputation for integrity.”
Now, I know what you’re thinking. Why is the largest company on the stock market having to reassure us about its underlying business economics and reporting? One might immediately begin to think — Streisand Effect style — that there might be something up with NVIDIA’s underlying business. But nevertheless, NVIDIA really is nothing like Enron.
But you know what? I’m good. I’m fine. NVIDIA, grab your coat, we’re going out, let’s forget any of this ever happened. Wait, what was that?
First, unlike Enron, NVIDIA does not use Special Purpose Entities to hide debt and inflate revenue. NVIDIA has one guarantee for which the maximum exposure is disclosed in Note 9 ($860M) and mitigated by $470M escrow. The fair value of the guarantee is accrued and disclosed as having an insignificant value. NVIDIA neither controls nor provides most of the financing for the companies in which NVIDIA invests.
Oh, okay! I wasn’t even thinking about that at all, I was literally just saying how you were nothing like Enron, we’re good. Let’s go home-
Second, the article claims that NVIDIA resembles WorldCom but provides no support for the analogy. WorldCom overstated earnings by capitalizing operating expenses as capital expenditures. We are not aware of any claims that NVIDIA has improperly capitalized operating expenses. Several commentators allege that customers have overstated earnings by extending GPU depreciation schedules beyond economic useful life. Rebutting this claim, some companies have increased useful life estimates to reflect the fact that GPUs remain useful and profitable for longer than originally anticipated; in many cases, for six years or more. We provide additional context on the depreciation topic below.
I…okay, NVIDIA is also not like WorldCom either. I wasn’t even thinking about WorldCom. I haven’t thought of them in a while.
On June 25, 2002, WorldCom, the second-largest telecommunications company in the United States, admitted that its accountants had overstated its 2001 and first quarter 2002 earnings by $3.8 billion. On July 21 of the same year, WorldCom filed for bankruptcy. On August 8, 2002, the company admitted that it had misclassified at least another $3.8 billion.
In the investigation that followed the initial revelations by WorldCom, it was revealed that the company had misstated earnings by approximately $11 billion. This remains one of the largest accounting scandals in United States history. The fall in the value of WorldCom stock after revelations about the massive accounting fraud led to over $180 billion in losses by WorldCom’s investors.
WorldCom, which began operating under the name Long Distance Discount Services in 1983, was led by one of its founders, CEO Bernard Ebbers, from 1985 to 2002. Under Ebbers’s leadership, the company engaged in a series of acquisitions, becoming one of the largest American telecommunications companies. In 1997, the company merged with MCI, making it the second largest telecom company after AT&T. In 1999, it attempted to merge with Sprint, which would have made it the largest in the industry. However, this merger was scrapped due to the intervention of the Department of Justice, which feared a WorldCom monopoly.
WorldCom stock, which rose more than 50 percent on rumors of this merger, began to fall. Ebbers then tried to grow his company through new customers rather than corporate mergers, but was unable to do so because the sector was saturated by 2000. He borrowed significantly so that WorldCom would have enough cash to cover anticipated margin calls, commonly used to prove that a company has funds to cover potential speculative losses. Desperate to keep his company’s stock prices high, Ebbers pressured company accountants to show robust growth on earnings statements.
…NVIDIA, are you doing something WorldCommy? Why are you bringing up WorldCom?
To be clear, WorldCom was doing capital F fraud, and its CEO Bernie Ebbers went to prison after an internal team of auditors led by WorldCom VP of internal auditing Cynthia Cooper reported $3.8 billion in “misallocated expenses and phony accounting entries.”
So, yeah, NVIDIA, you were really specific about saying you didn’t capitalize operating expenses as capital expenditures. You’re…not doing that, I guess? That’s great. Great stuff. I had literally never thought you had done that before. I genuinely agree that NVIDIA is nothing like WorldCom.
Anyway, also glad to hear about the depreciation stuff, looking forward to reading-
Third, unlike Lucent, NVIDIA does not rely on vendor financing arrangements to grow revenue. In typical vendor financing arrangements, customers pay for products over years.
NVIDIA's DSO was 53 in Q3. NVIDIA discloses our standard payment terms, with payment generally due shortly after delivery of products. We do not disclose any vendor financing arrangements. Our customers are subject to strict credit evaluation to ensure collectability. NVIDIA would disclose any receivable longer than one year in long-term other assets. The $632M "Other" balance as of Q3 does not include extended receivables; even if it did, the amount would be immaterial to revenue.
Erm…
A Brief History of Lucent Technologies (And No, It Really Isn’t Like NVIDIA Either)
Alright man, if anyone asks about whether you’re like famed dot-com crashout Lucent Technologies, I’ll be sure to correct them. After all, Lucent’s situation was really different — well… sort of. Lucent was a giant telecommunications equipment company, one that was, for a time, extremely successful (really, really successful, in fact), turned around by the now-infamous Carly Fiorina.
From a 2010 profile in CNN:
Yet Fiorina’s campaign biography quickly skates over the stint that made her a star: her three-year run as a top executive at Lucent Technologies. That seems puzzling, since unlike her decidedly mixed record at HP, Fiorina’s tenure at Lucent has all the outward trappings of success.
Lucent reported a stream of great results beginning in 1996, after Fiorina, who had been a vice-president at AT&T (T), helped oversee the company’s spin-off from Ma Bell. By the time she left to run HP in 1999 revenues were up 58%, to $38 billion. Net income went from a small loss to $4.8 billion profit. Giddy investors bid up Lucent’s stock 10-fold. And unlike HP, where Fiorina instituted large layoffs—a fact Senator Boxer loves to mention whenever possible—Lucent added 22,000 jobs during Fiorina’s tenure.
NVIDIA, this sounds great — why wouldn’t you want to be compared to Lucen-
In 1997 Fiorina took over the group selling gear to such “service provider networks.” The company reported that sales to such networks climbed from $15.7 billion in fiscal 1997 to $19.1 billion in 1998. In 1999 they hit an amazing $23.6 billion. In the midst of this rise Fortune named Fiorina — then largely anonymous outside of telecom — to the top of its first list of the country’s most powerful women in business. A star was born.
As Wall Street became fixated on equipment companies’ growth, the whole industry entered a manic phase. With capital easy to come by, Qwest, Worldcom and their peers laid more fiber and installed far more capacity than customers needed. Much like the housing bubble that was just beginning to inflate, easy credit fed the telecom bubble.
Lucent and its major competitors all started goosing sales by lending money to their customers. In a neat bit of accounting magic, money from the loans began to appear on Lucent’s income statement as new revenue while the dicey debt got stashed on its balance sheet as an allegedly solid asset. It was nothing of the sort. Lucent said in its SEC filings that it had little choice to play the so-called vendor financing game, because all its competitors were too.
Oh.
So, to put it simply, Lucent was counting the money from its own loans as revenue while stashing the dicey debt on its balance sheet as a supposedly solid asset (we’re getting into technicalities here, but this is dodgy and bad and accountants hate it), and did something called “vendor financing,” which means you lend somebody money to buy something from you. It turns out Lucent did a lot of this.
In the giant PathNet deal that Fiorina oversaw, Lucent agreed to fund more than 100% of the company’s equipment purchases, meaning the small company would get both Lucent gear at no money down and extra cash to boot. Yet how could such a loan to PathNet make sense for Lucent, even based on the world as it appeared in the heady days of 1999? The smaller company had barely $100 million in equity (and that’s based on generous accounting assumptions) on top of which it had already balanced $350 million in junk bonds paying 12.25% interest. Adding $440 million in loans from Lucent to this already debt-heavy capital structure would jack the company’s leverage up to 8 to 1, and potentially even higher as they drew more of the loan.
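That “8 to 1” isn’t hyperbole, and you can check it on the back of an envelope. Here’s a quick sketch using the piece’s own numbers (in millions of dollars, and obviously a simplification of PathNet’s actual balance sheet):

```python
# Rough debt-to-equity check on PathNet, using the figures quoted above.
# All amounts in millions of dollars; a simplification, not a real balance sheet.
equity = 100          # PathNet's equity, "based on generous accounting assumptions"
junk_bonds = 350      # existing junk bonds paying 12.25% interest
lucent_loans = 440    # the loans from Lucent

total_debt = junk_bonds + lucent_loans
leverage = total_debt / equity  # debt-to-equity ratio

print(f"${total_debt}M of debt on ${equity}M of equity: {leverage:.1f} to 1")
```

Which rounds up to the “8 to 1” in the quote, and only climbs as PathNet draws more of the loan.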
Okay, NVIDIA, I hate to say this, but I kind of get why somebody might say you’re doing Lucent stuff. After all, rumour has it that your deal with OpenAI — a company that burns billions of dollars a year — will involve it leasing your GPUs, which sure sounds like you’re doing vendor financing...
-we do not disclose any vendor financing arrangements-
Fine! Fine.
Anyway, Lucent really fucked up big time, indulging in the dark art of circular vendor financing. In 1998 it signed its largest deal — a $2 billion “equipment and finance agreement” with telecommunications company Winstar — under which Lucent guaranteed Winstar “$100 million in new business over the next five years” and Winstar would build a giant wireless broadband network, along with expanding its optical networking.
To quote The Wall Street Journal:
Winstar was one of scores of stand-alone, start-up companies created in the late 1990s to compete in the market for local telecom services. These firms, known as "competitive local exchange carriers," or CLECs, raised billions of dollars in debt and equity financing, and embarked upon ambitious plans to compete with "incumbent" carriers. For a time in the late 1990s, their stocks were hot properties, outpacing even Internet stocks.
In December 1999, WIRED would say that Winstar’s “small white dish antennas…[heralded] a new era and new mind-set in telecommunications,” and included this awesome quote about Lucent from CEO and founder Will Rouhana:
On one level we are a customer and they are a supplier. On another level they are a financier and we are a borrower. On yet another level they are providing services around the world to accelerate our development. They also want to use our service, and have guaranteed $100 million in business.
Fuck yeah!
But that’s not the only great part of this piece:
WinStar is publicly traded (Nasdaq: WCII), has more than 4,000 employees, and reports more than $300 million in annualized core revenues.
Annualized revenues, very nice. We love annualized revenues, don't we folks? A company making about $25 million a month, a year after taking on $2 billion in financing from Lucent. Weirdly, Winstar’s Wikipedia page says that revenues were $445.6 million for the year ending 1999 — or around $37.1 million a month.
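If you want to check my math on the “annualized” framing, it’s trivial arithmetic, but worth seeing side by side:

```python
# Converting "annualized core revenues" into a monthly run rate.
# Figures are the ones quoted above (WIRED and Wikipedia respectively).
wired_annualized = 300e6      # "more than $300 million in annualized core revenues"
wikipedia_fy1999 = 445.6e6    # Winstar revenue for the year ending 1999, per Wikipedia

print(f"WIRED figure: ${wired_annualized / 12 / 1e6:.1f}M a month")
print(f"Wikipedia figure: ${wikipedia_fy1999 / 12 / 1e6:.1f}M a month")
```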
Winstar loved raising money — two years later, in November 2000, it would raise $1.02 billion, for example — and it raised a remarkable $5.6 billion between February 1999 and July 2001, according to the Wall Street Journal. $900 million of that came in December 1999 via an investment from Microsoft and “several investment firms,” with analyst Greg Miller of Jefferies & Co saying:
The Microsoft investment is a significant endorsement that the technology will be used more aggressively in the future. WinStar can use the capital.
Cool!
Another fun thing happened in November 2000 too. Lucent would admit it had overstated its fourth-quarter profits by improperly recording $125 million in sales, reducing that quarter’s revenue from “profitable” to “break-even.”
Things would eventually collapse when Winstar couldn’t pay its debts, filing for Chapter 11 bankruptcy protection on April 18, 2001 after failing to pay $75 million in interest payments to Lucent, which had cut off access to the remaining $400 million of its $1 billion loan to Winstar as a result. Winstar filed a $10 billion lawsuit in bankruptcy court in Delaware the very same day, claiming that Lucent breached its contract and forced Winstar into bankruptcy by, well, not offering to give it more money that it couldn’t pay off.
Elsewhere, things had begun to unravel for Lucent. A January 2001 story from the New York Times told a strange story of Lucent, a company that had made over $33 billion in revenue in its previous fiscal year, asking to defer the final tranche of payment — $20 million — for an acquisition due to “accounting and financial reporting considerations.”
Why? Because Lucent needed to keep that money on the books to boost its earnings, as its stock was in the toilet, and was about to announce it was laying off 10,000 people and a quarterly loss of $1.02 billion.
Over the course of the next few years, Lucent would sell off various entities, and by the end of September 2005 it had 30,500 staff and a stock price of $2.99 — down from 157,000 employees and a high of $75 a share at the end of 1999. According to VC Tomasz Tunguz, Lucent had $8.1 billion of vendor financing deals at its height.
Lucent was still a real company selling real things, but it had massively overextended itself in an attempt to meet demand that didn’t really exist, and when Lucent realized that, it decided to create demand itself to please the markets. To quote MIT Tech Review (and author Lisa Endlich), it believed that setting and meeting the expectations of Wall Street “subsumed all other goals,” and that “Lucent had little choice but to ride the wave.”
To be clear, NVIDIA is quite different from Lucent. It has plenty of money, and the circular deals it does with CoreWeave and Lambda don’t involve the same levels of risk. NVIDIA is not (to my knowledge) backstopping CoreWeave’s business or providing it with loans, though NVIDIA has agreed to buy $6.3 billion of compute as the “buyer of last resort” for any unsold capacity. NVIDIA can actually afford this, and it isn’t illegal, though it is obviously propping up a company with flagging demand. NVIDIA also doesn’t appear to be taking on masses of debt to fund its empire, with over $56 billion in cash on hand and a mere $8.4 billion in long-term debt.
Okay, phew. We got through this man. NVIDIA is nothing like Lucent either. Okay, maybe it’s got some similarities — but it’s different! No worries at all. I know I’m relaxed.
You still seem nervous, NVIDIA. I promise you, if anyone asks me if you’re like Lucent I’ll tell them you’re not. I’ll be sure to tell them you’re nothing like that. Are you okay, dude? When did you last sleep?
Inventory growth indicates waning demand
Claim: Growing inventory in Q3 (+32% QoQ) suggests that demand is weak and chips are accumulating unsold, or customers are accepting delivery without payment capability, causing inventory to convert to receivables rather than cash.
Woah, woah, woah, slow down. Who has been saying this? Oh, everybody? Did Michael Burry scare you? Did you watch The Big Short and say “ah, fuck, Christian Bale is going to get me! I can’t believe he played drums to Pantera! Ahh!”
Anyway, now you’ve woken up everybody else in the house and they’re all wondering why you’re talking about receivables. Shouldn’t that be fine? NVIDIA is a big business, and it’s totally reasonable to believe that a company planning to sell $63 billion of GPUs in the next quarter would have ballooning receivables ($33 billion, up from $27 billion last quarter) and growing inventory ($19.78 billion, up from $14.96 billion the quarter before). It’s a big, asset-heavy business, which means NVIDIA’s clients likely get decent payment terms, giving them time to raise debt or move cash around to get NVIDIA paid.
Everybody calm down! Like my buddy NVIDIA, who is nothing like Enron by the way, just said:
Response: First, growing inventory does not necessarily indicate weak demand. In addition to finished goods, inventory includes significant raw materials and work-in-progress. Companies with sophisticated supply chains typically build inventory in advance of new product launches to avoid stockouts. NVIDIA's current supply levels are consistent with historical trends and anticipate strong future growth.
Second, growing inventory does not indicate customers are accepting delivery without payment capability. NVIDIA recognizes revenue upon shipping a product and deeming collectability probable. The shipment reduces inventory, which is not related to customer payments. Our customers are subject to strict credit evaluation to ensure collectability.
Payment is due shortly after product delivery; some customers prepay. NVIDIA's DSO actually decreased sequentially from 54 days to 53 days.
Haha, nice dude, you’re totally right, it’s pretty common for companies, especially large ones, to deliver something before they receive the cash, it happens, I’m being sincere. Sounds like companies are paying! Great!
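For the uninitiated: DSO (Days Sales Outstanding) is the standard measure of how long receivables sit unpaid, calculated as receivables divided by revenue, times the days in the period. As a rough sanity check (using the ~$33 billion receivables figure from above and an assumed ~91-day quarter, not any disclosed breakdown):

```python
# Back-of-envelope DSO check: DSO = receivables / revenue * days_in_period.
# Rearranged to back out the quarterly revenue implied by NVIDIA's stated DSO.
# Receivables is the ~$33B mentioned above; 91 days is an approximation.
receivables = 33e9
dso_days = 53
days_in_quarter = 91

implied_quarterly_revenue = receivables * days_in_quarter / dso_days
print(f"Implied quarterly revenue: ${implied_quarterly_revenue / 1e9:.1f}B")
```

That lands in the mid-$50-billions, roughly in line with the revenue NVIDIA reported for the quarter, so the 53-day figure at least passes the smell test.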
But, you know, just, can you be a little more specific? Like about the whole “shipping things before they’re paid” thing.
NVIDIA recognizes revenue upon shipping a product and deeming collectability probable-
Alright, yeah, thought I heard you right the first time. What does “deeming collectability probable” mean? You could’ve just said “we get paid 95% of the time within 2 months” or whatever. Unless it’s not 95%? Or 90%? How often is it? Most companies don’t break this down by the way, but then again, most companies are not NVIDIA, the largest company on the stock market, and if I’m honest, nobody else has recently had to put out anything that said “I’m not like Enron,” and I want to be clear that NVIDIA is not like Enron.
For real, Enron was a criminal enterprise. It broke the law, it committed real deal, actual fraud, and NVIDIA is nothing like Enron. In fact, before NVIDIA put out a letter saying how it was nothing like Enron I would have staunchly defended the company against the Enron allegations, because I truly do not think NVIDIA is committing fraud.
That being said, it is very strange that NVIDIA wants somebody to think about how it’s nothing like Enron. This was, technically, an internal memo, and thus there is a chance its existence was built for only internal NVIDIANs worried about the value of their stock, and we know it was definitely written to try and deflect Michael Burry’s criticism, as well as that of a random Substacker who clearly had AI help him write a right-adjacent piece that made all sorts of insane and made up statements (including several about Arrow Electronics that did not happen) — and no, I won’t link it, it’s straight up misinformation.
Nevertheless, I think it’s fair to ask: why does NVIDIA need you to know that it’s nothing like Enron? Did it do something like Enron? Is there a chance that I, or you, may mistakenly say “hey, is NVIDIA doing Enron?”
Is NVIDIA doing Enron stuff?
Heeeeeeyyyy NVIDIA. How’re you feeling? Yeah, haha, you had a rough night. You were saying all this crazy stuff about Enron last night, are you doing okay? No, no, I get it, you’re nothing like Enron, you said that a lot last night.
So, while you were asleep — yeah it’s been sixteen hours dude, you were pretty messed up, you brought up Lucent then puked in my sink — I did some digging and like, I get it, you are definitely not like Enron, Enron was breaking the law. NVIDIA is definitely not doing that.
But…you did kind of use Special Purpose Vehicles recently? I’m sorry, I know, you’re not like Enron! You’re investing $2 billion in Elon Musk’s special purpose vehicle that will then use that money to raise debt to buy GPUs from NVIDIA that will then be rented to Elon Musk.
This is very different to what Enron did! I am with you dude, don’t let the haters keep you down! No, I don’t think a t-shirt that says “NVIDIA is not like Enron for these specific reasons” helps.
Wait, wait, okay, look. One thing. You had this theoretical deal lined up with Sam Altman and OpenAI to invest $100 billion — and yes, you said in your latest earnings that "it was actually a Letter of Intent with the opportunity to invest," which doesn’t mean anything, got it — and the plan was that you would “lease the GPUs to OpenAI.”
Now how would you go about doing that NVIDIA? You’d probably need to do exactly the same deal as you just did with xAI. Right? Because you can’t very well rent these GPUs directly to Elon Musk, you need to sell them to somebody so that you can book the revenue, you were telling me that’s how you make money. I dunno, it’s either that or vendor financing.
Oh, you mentioned that already-
-unlike Lucent, NVIDIA does not rely on vendor financing arrangements to grow revenue. In typical vendor financing arrangements, customers pay for products over years. NVIDIA's DSO was 53 in Q3. NVIDIA discloses our standard payment terms, with payment generally due shortly after delivery of products. We do not disclose any vendor financing arrangements-
Let me stop you right there a second, you were on about this last night before you scared my cats when you were crying about something to do with “two nanometer.”
First of all, why are you bringing up typical vendor financing agreements? Do you have atypical ones?
Also I’m jazzed to hear you “disclose your standard payment terms,” but uh, standard payment terms for what exactly? Where can I find those? For every contract?
Also, you are straight up saying you don’t disclose any vendor financing arrangements, that’s not the same as “not having any vendor financing arrangements.” I “do not disclose” when I go to the bathroom but I absolutely do use the toilet.
Let’s not pretend like you don’t have a history in helping get your buddies funding. You have deals with both Lambda and CoreWeave to guarantee that they will have compute revenue, which they in turn use to raise debt, which is used to buy more of your GPUs. You have learned how to feed debt into yourself quite well, I’m genuinely impressed.
This is great stuff, I’m having the time of my life with how not like Enron you are, and I’m serious that I 100% do not believe you are like Enron.
But…what exactly are you doing man? What’re you going to do about what Wall Street wants?
I’m serious though. NVIDIA isn’t like Enron
Enron was a criminal enterprise! NVIDIA is not. More than likely NVIDIA is doing relatively boring vendor financing stuff and getting people to pay them on 50-60 day time scales — probably net 60, and, like it said, it gets paid upfront sometimes.
NVIDIA truly isn’t like Enron — after all, Meta is the one getting into ENERGY TRADING — to the point that I think it’s time to explain to you what exactly happened with Enron. Or, at least as much as is possible within the confines of a newsletter that isn’t exclusively about Enron…
What Exactly Was Enron?
The collapse of Enron wasn’t just — in retrospect — a large business that ultimately failed. If that was all it was, Enron wouldn’t command the same space in our heads as other failures from that era, like WorldCom (which I mentioned earlier) and Nortel (which I’ll get to later), both of which were similarly considered giants in their fields.
It’s also not just about the fact that Enron failed because of proven business and accounting malfeasance. WorldCom entered bankruptcy due to similar circumstances (though, rather than being liquidated, it was acquired by Verizon; after bankruptcy, WorldCom had renamed itself MCI, taking the name of the company it had previously merged with), and unlike Enron, it isn’t the subject of flashy Academy-nominated films, or even a Broadway production.
Editor’s Note: Hi! It's Ed's editor Matt here! I actually saw the UK touring production of Enron in 2010 at Newcastle’s Theatre Royal. It was extremely good. From time to time, new productions of it show up (from what I can tell, the most recent one was in October at the London Barbican), and if you get a chance to watch it, you should.
It’s not the size of Enron that makes its downfall so intriguing. Nor, for that matter, is it the fact that Enron did a lot of legally and ethically dubious stuff to bring about its downfall.
No, what makes Enron special is the sheer gravity of its malfeasance, the rotten culture at the heart of the company that encouraged said malfeasance, and the creative ways Enron’s leaders crafted an image of success around what was, at its heart, a dog of a company.
Enron was born in 1985 on the foundations of two older, much less interesting businesses. The first, Houston Natural Gas (HNG), started life as a utility provider, pumping natural gas from the oilfields of Texas to customers throughout the region, before later exiting the industry to focus on other opportunities. The other, InterNorth, was based in Omaha, Nebraska and was in the same business — pipelines.
In the mid-1980s, HNG was the subject of a hostile takeover attempt from Coastal Corporation (which, until 2001, operated a chain of refineries and gas stations throughout much of the US mainland). Unable to fend it off by itself, HNG merged with InterNorth, with the combined corporation renamed Enron.
The CEO of this new entity was Ken Lay, an economist by trade who spent most of his career in the energy sector and enjoyed deep political connections with the Bush family. He co-chaired George H. W. Bush’s failed 1992 re-election campaign, and allowed Enron’s corporate jet to ferry Bush Sr. and Barbara Bush back and forth to Washington. Center for Public Integrity Director Charles Lewis said that “there was no company in America closer to George W. Bush than Enron.”
George W. Bush (the second one) even had a nickname for Lay. Kenny Boy.
Anyway, in 1987, Enron hired McKinsey — the world’s most evil management consultancy firm — to help the company create a futures market for natural gas. What that means isn’t particularly important to the story, but essentially, a futures contract is where a company agrees to buy or sell an asset in the future at a fixed price.
It’s a way of hedging against risk, whether that be from something like price or currency fluctuations, or from default. If you’re buying oil in dollars, for example, buying a futures contract for oil to be delivered in six months time at a predetermined price means that if your currency weakens against the dollar, your costs won’t spiral.
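If the mechanics help, here’s a toy sketch of that hedge, with entirely made-up numbers (nothing to do with any company in this piece):

```python
# Toy futures hedge: lock in a price today for delivery in six months,
# so the buyer's cost is fixed regardless of where the spot price goes.
# All numbers are illustrative.
futures_price = 80.0  # dollars per barrel, agreed today
barrels = 1_000

for future_spot in (60.0, 80.0, 100.0):
    unhedged_cost = future_spot * barrels   # what you'd pay without the contract
    hedged_cost = futures_price * barrels   # fixed by the futures contract
    print(f"spot ${future_spot:.0f}: unhedged ${unhedged_cost:,.0f}, hedged ${hedged_cost:,.0f}")
```

The hedged cost never moves; the trade-off is that you also forgo the savings if prices fall.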
That bit isn’t terribly important. What does matter is while working with McKinsey, Lay met someone called Jeff Skilling — a young engineer-turned-consultant who impressed the company’s CEO deeply, so much so that Lay decided to poach him from McKinsey in 1990 and give him the role of chairman and CEO of Enron Finance Group.
Sidenote: Enron had a bunch of subsidiaries, and some had their own CEOs and boards. I mention this because you may be a bit confused, as Lay was CEO of Enron writ large.
In essence, it’s a bit like how Sam Altman is CEO of OpenAI and Fidji Simo the CEO of Applications.
This bit isn’t important, but I want to be as explicit as possible.
Anyway, Skilling continued to impress Lay, who gave him greater and greater responsibility, eventually crowning him Chief Operating Officer (COO) of Enron.
With Skilling in a key leadership position, he was able to shape the organization’s culture. He appreciated those who took risks — even if those risks, when viewed with impartial eyes, were deemed reckless, or even criminal.
He introduced the practice of stack-ranking (also known as “rank and yank”) to Enron, which had previously been pioneered by Jack Welch at GE (see The Shareholder Supremacy from last year). Here, employees were graded on a scale, and those at the bottom of the scale were terminated. Managers had to place at least 10% (other reports say closer to 15%) of employees in the lowest bracket, which created an almost Darwinian drive to survive.
Staffers worked brutal hours. They cut corners. They did some really, really dodgy shit. None of this bothered Skilling in the slightest.
How dodgy, you ask? Well, in 2000 and 2001, California suffered a series of electricity blackouts. This shouldn’t have happened, because California’s total energy demand (at the time) was 28GW and its production capacity was 45GW.
California also shares a transmission grid with other states (and, for what it’s worth, the Canadian provinces of Alberta and British Columbia, as well as part of Baja California in Mexico), meaning that in the event of a shortage, it could simply draw capacity from elsewhere.
So, how did it happen?
Well, remember, Enron traded electricity like a commodity, and as a result, it was incentivized to get the highest possible price for that commodity. So, it had power plants taken offline during peak hours, and exported power to other states when there was real domestic demand.
How does a company like Enron shut down a power station? Well, it just asked.
In one taped phone conversation released after the company’s collapse, an Enron employee named Bill called an official at a Las Vegas power plant (California shares the same grid with Nevada) and asked him to “get a little creative, and come up with a reason to go down. Anything you want to do over there? Any cleaning, anything like that?”
This power crisis had dramatic consequences — for the people of California, who faced outages and price hikes; for Governor Gray Davis, who was recalled by voters and later replaced by Arnold Schwarzenegger; for PG&E, which entered Chapter 11 bankruptcy in 2001; and for Southern California Edison, which was pushed to the brink of bankruptcy as a result.
This kind of stuff could only happen in an organization whose culture actively rewarded bad behavior.
In fact, Skilling was seemingly determined to elevate the dodgiest of characters to the highest positions within the company, and few were more ethically dubious than Andy Fastow, whom Skilling mentored as a protégé, and who would later become Enron’s Chief Financial Officer.
Enron’s “Creative” Accounting
Even before vaulting to the top of Enron’s nasty little empire, Fastow was able to shape its accounting practices, with the company adopting mark-to-market accounting practices in 1991.
Mark-to-market sounds complicated, but it’s really simple. When listing assets on a balance sheet, you don’t use the acquisition cost, but rather the fair-market value of that asset. So, if I buy a baseball card for a dollar, and I see that it’s currently selling for $10 on eBay, I’d say that said asset is worth $10, not the dollar I paid for it, even though I haven’t actually sold it yet.
This sounds simple — reasonable, even — but the problem is that the way you determine the value of that asset matters, and mark-to-market accounting allows companies and individuals to exercise some…creativity.
Sure, for publicly-traded companies (where the price of a share is verifiable, open knowledge), it’s not too bad, but for assets with limited liquidity, limited buyers, or where the price has to be engineered somehow, you have a lot of latitude for fraud.
Let’s go back to the baseball card example. How do you know it’s actually worth $10, and not $1? What if the “fair value” isn’t something you can check on eBay, but whatever somebody told me in person it’s worth? What’s to stop me from lying and saying that the card is actually worth $100, or $1,000? Well, other than the fact I’d be committing fraud.
What if I have ten $1 baseball cards, and I give my friend $10 and tell him to buy one of the cards using the $10 bill I just handed him, allowing me to say that I’ve realized a $9 profit on one of my $1 cards, and my other cards are worth $90 and not $9?
And then, what if I use the phony valuation of my remaining cards to get a $50 loan, using the cards as collateral, even though the collateral isn’t even one-fifth of the value of the loan?
You get the idea. While a lot of the things people can do to alter the mark-to-market value of an asset are illegal (and would be covered under generic fraud laws), it doesn’t change the fact that mark-to-market accounting allows for some shenanigans to take place.
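Here’s the whole grubby card trick as a Python sketch, using the same numbers as the example above:

```python
# The baseball card trick, sketched out (numbers from the example above).
# Ten cards bought for $1 each; one engineered sale "reprices" the rest.

cards = 10
cost_basis = 1.0  # what I actually paid per card

# Honest accounting: assets carried at what they cost me.
honest_value = cards * cost_basis  # $10

# I hand a friend $10 and he "buys" one card with my own money.
sale_price = 10.0
realized_profit = sale_price - cost_basis  # a $9 "profit" on one card

# Mark-to-market: the remaining nine cards are now "worth" the last sale price.
marked_value = (cards - 1) * sale_price  # $90 on paper, $9 in reality

print(honest_value, realized_profit, marked_value)  # 10.0 9.0 90.0
```

Nothing about the cards changed. One circular transaction, and my balance sheet is ten times bigger.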
Another trait of mark-to-market accounting, as employed by Enron, was that the company would count all the long-term potential revenue from a deal as quarterly revenue — even if that revenue would be delivered over the course of a decades-long contract, or if the contract would be terminated before its intended expiration date.
It would also realize potential revenue as actual revenue, even before money changed hands, and when the conclusion of the deal wasn’t a certainty.
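To make the difference concrete, here’s a sketch with invented numbers — say, a 20-year deal projected to bring in $5 million a year:

```python
# Booking a long-term deal up front vs. as delivered (invented numbers).

contract_years = 20
annual_revenue = 5_000_000  # projected, not guaranteed, and no cash in hand yet

# Enron-style: recognize the entire projected stream the quarter the deal is signed.
booked_now = contract_years * annual_revenue  # $100m of "revenue", instantly

# Conventional accounting: recognize revenue as it is actually earned.
booked_year_one = annual_revenue  # $5m in year one

print(booked_now, booked_year_one)  # 100000000 5000000
```

If the deal collapses in year two, conventional accounting catches it. Enron’s version already spent the applause.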
For example, in 1999, Enron sold a stake in four electricity-generating barges in Nigeria (essentially floating power stations) to Merrill Lynch, which allowed the company to register $12m in profit.
The sale was, in reality, a sham: Enron had privately promised to buy the stake back, and sure enough — I’m not kidding — Merrill Lynch quickly sold it on to a Special Purpose Vehicle called “LJM2,” controlled by Andrew Fastow. You’re gonna hear that name again.
Although the Merrill Lynch bankers who participated in the deal were eventually convicted of conspiracy and fraud charges (long after the collapse of Enron), their convictions were later quashed on appeal.
But still, for a moment, it gave a jolt to Enron’s quarterly earnings.
Anyway, Enron was incredibly creative when it came to how it valued its assets. Take, for example, fiber optic cables. As the Dot Com bubble swelled, Enron saw an opportunity, and wanted to be able to trade and control the supply of bandwidth, just like it did with other more conventional commodities (like oil and gas).
It built, bought, and leased fiber-optic cables throughout the country, and then, using exaggerated estimates of their value and potential long-term revenue, released glowing financial reports that made the company look a lot healthier and more successful than it actually was.
Sidenote: One of the funniest ironies of Enron is that it was, in many ways, ahead of its time. When most people were still connecting to the Internet through screeching 56k dial-up modems, it saw a future in edge and cloud computing (even if said terms didn’t exist at the time) and streaming video.
In 2000, it entered into a 20-year deal with Blockbuster Video to allow customers to stream films and TV shows through Enron’s fiber network, something that would take Netflix another decade to realize as a product.
This despite the fact it wasn’t clear whether there was much of a market for it (remember, this was in 2000, and broadband was a rarity — and what we defined as “broadband” was well below the standards of today’s Internet) or, indeed, whether it was technologically possible.
Anyway, the deal collapsed after just one year, but that didn’t stop Enron’s creative accountants from booking the deal (based on its projected future revenue) as a profitable venture.
Mark-to-market accounting! You gotta love it.
Still, it’s hilarious to think that there’s a future world in which Blockbuster and Enron stuck it out, and the former didn’t collapse around the time of the Global Financial Crisis.
Probably not though.
Enron also loved to create special-purpose entities that existed either to generate revenue that didn’t exist, or to hold toxic assets that would otherwise need to be disclosed (with Enron then using its holdings in said entities to boost its balance sheet), or to disguise its debt.
One, Whitewing, was created and capitalized by Enron (and an outside investor), and pretty much exclusively bought assets from Enron — which allowed the company to recognize sales and profits on its books, even if they were fundamentally contrived.
Another set of entities — known as LJM, named after the first initials of Andy Fastow’s wife and two children, and which I mentioned earlier — did the same thing, allowing the company to hide risky or failing investments, to limit its perceived debt, and to generate artificial profits and revenues. LJM2 was, creatively, the second version of the idea.
Even though the assets that LJM held were, ultimately, dogshit, the distance that LJM provided, combined with Enron’s use of mark-to-market accounting, allowed the company to turn a multi-billion-dollar collective failure into a resounding and (on paper) profitable triumph.
So, how did this happen, and how did it go on for so long?
Well, first, Enron was, at its peak, worth $70bn. Its failure would be a failure for its investors and shareholders, and nobody — besides the press, that is — wanted to ask tough questions.
It had auditors, but they were paid handsomely, and they turned a blind eye to the criminal malfeasance at the heart of the company. Auditor Arthur Andersen surrendered its license in 2002, bringing an end to the firm — and resulting in 85,000 employees losing their jobs.
Well, it wasn’t so much that Andersen turned a blind eye as that it turned on a big paper shredder, destroying tons — and I’m using that as a measure of weight, not figuratively — of documents as Enron started to implode, for which it was later convicted of obstruction of justice.
I’ve talked about Enron’s culture, but I’d be remiss if I didn’t mention that Enron’s highest-performers and its leadership received hefty bonuses in company equity, motivating them to keep the charade going.
Enron’s pension scheme, I should add, was basically entirely Enron stock, and employees were regularly encouraged to buy more, with Kenneth Lay telling employees weeks before the company’s collapse that “the company is fundamentally sound” and to “hang on to their stock.”
Hah. Yeah.
Additionally, per the terms of the Enron pension plan, employees were prevented from shifting their holdings into other pension funds, or other investments, until they turned 50. When the company collapsed, those people lost everything, even those who didn’t know anything about Enron’s criminality. George Maddox, a retired former Enron employee whose entire retirement was tied up in 14,000 Enron shares (worth, at the time, more than $1.3 million), was “forced to spend his golden years making ends meet by mowing pastures and living in a run-down East Texas farmhouse.”
The US Government brought criminal charges against Enron’s top leadership. Ken Lay was convicted of ten counts of fraud, conspiracy, and making false statements, but died of a heart attack while vacationing near Aspen before sentencing. May he burn in Hell.
Skilling was convicted on 19 counts of fraud, conspiracy, insider trading and making false statements, and sentenced to 24 years in jail. This was reduced in 2013 on appeal to 14 years, and he was released to a halfway house in 2018, and then freed in 2019. He’s since tried to re-enter the energy sector — with one venture combining energy trading and, I kid you not, blockchain technology — although nothing really came of it.
Sidenote: Credit where credit’s due. This is the opening sentence to Quartz’s coverage of Skilling’s attempted comeback. “Jeffrey Skilling knows a thing or two about blocks and chains.”
Wooooo! Woooooooo!!!! Get his ass!
Andy Fastow pled guilty to two counts — one of manipulation of financial statements, and one of self-dealing — and received ten years in prison. This was later reduced to six years, including two years of probation, in part because he cooperated with the investigations against other Enron executives. He is now a public speaker and an investor in an AI company, KeenCorp.
His wife, Lea, who also worked at Enron, received twelve months for conspiracy to commit wire fraud and money laundering and for submitting false tax returns. She was released from custody in July 2005.
Enron’s implosion was entirely self-inflicted and horrifyingly, painfully criminal, yet, it had plenty of collateral damage — to the US economy, to those companies that had lent it money, to its employees who lost their jobs and their life savings and their retirements, and to those employees at companies most entangled with Enron, like those at auditing firm Arthur Andersen.
This isn’t unique among corporate failures. WorldCom had some dodgy accounting practices. Nortel too. Both companies failed, both companies wrecked the lives of their employees, and the failure of these companies had systemic economic consequences (especially in Canada, where Nortel, at its peak, accounted for one-third of the market cap of all companies on the Toronto Stock Exchange).
The reason why Enron remains captured in our imagination — and why NVIDIA is so vociferously opposed to being compared with Enron — is the extent to which Enron manipulated reality to appear stronger and more successful than it was, and how long it was able to get away with it.
While the memory of Enron may have faded — it happened over two decades ago, after all — we haven’t forgotten the instincts that it gave us. It’s why our noses twitch when we see special-purpose vehicles being used to buy GPUs, and why we gag when we see mark-to-market accounting.
It’s entirely possible that everything NVIDIA is doing is above board. Great! But that doesn’t do anything for the deep pit of dread in my stomach.
So, What Exactly Is NVIDIA?
A few weeks ago, I published the Hater’s Guide to NVIDIA, and included within it a guide to what this company does.
NVIDIA is a company that sells all sorts of stuff, but the only reason you're hearing about it as a normal person is that NVIDIA's stock has become a load-bearing entity in the US stock market.
This has happened because NVIDIA sells "GPUs" — graphics processing units — that power the large language model services that are behind the whole AI boom, either through "inference" (the process of creating an output from an AI model) or "training" (feeding data into the model to make its outputs better). NVIDIA also sells other things, which I’ll get to later, but it doesn’t really matter to the bigger picture.
In 2006, NVIDIA launched CUDA, a software layer that lets you run (some) software on (specifically) NVIDIA graphics cards, and over time this has grown into a massive advantage for the company.
The thing is, GPUs are great for parallel processing — essentially spreading a task across many (by which I mean thousands of) processor cores at the same time — which means that certain tasks run faster than they would on, say, a CPU. While not every task benefits from parallel processing, or from having several thousand cores available at the same time, the kind of math that underpins LLMs is one such example.
CUDA is proprietary to NVIDIA, and while there are alternatives (both closed- and open-source), none of them have the same maturity and breadth. Pair that with the fact that Nvidia’s been focused on the data center market for longer than, say, AMD, and it’s easy to understand why it makes so much money. There really isn’t anyone who can do the same thing as NVIDIA, both in terms of software and hardware, and certainly not at the scale necessary to feed the hungry tech firms that demand these GPUs.
Anyway, back in 2019 NVIDIA acquired a company called Mellanox for $6.9 billion, beating off other would-be suitors, including Microsoft and Intel. Mellanox was a manufacturer of high-performance networking gear, and this acquisition would give NVIDIA a stronger value proposition for data center customers. It wanted to sell GPUs — lots of them — to data center customers, and now it could also sell the high-speed networking technology required to make them work in tandem.
This is relevant because it created the terms under which NVIDIA could start selling billions (and eventually tens of billions) of specialized GPUs for AI workloads. As pseudonymous finance account JustDario connected (both Dario and Kakashii have been immensely generous with their time explaining some of the underlying structures of NVIDIA, and are worth reading, though at times we diverge on a few points), mere months after the Mellanox acquisition, Microsoft announced its $1 billion investment in OpenAI to build "Azure AI supercomputing technologies."
Though it took until November 2022 for ChatGPT to really start the fires, in March 2020, NVIDIA began the AI bubble with the launch of its "Ampere" architecture, and the A100, which provided "the greatest generational performance leap of NVIDIA's eight generations of GPUs," built for "data analytics, scientific computing and cloud graphics." The most important part, however, was the launch of NVIDIA's "Superpod": Per the press release:
A data center powered by five DGX A100 systems for AI training and inference running on just 28 kilowatts of power costing $1 million can do the work of a typical data center with 50 DGX-1 systems for AI training and 600 CPU systems consuming 630 kilowatts and costing over $11 million, Huang explained.
One might be fooled into thinking this was Huang suggesting we could now build smaller, more efficient data centers, when he was actually saying we should build way bigger ones that had way more compute power and took up way more space. The "Superpod" concept — groups of GPU servers networked together to work on specific operations — is the "thing" that is driving NVIDIA's sales. To "make AI happen," a company must buy thousands of these things and put them in data centers and you'd be a god damn idiot to not do this and yes, it requires so much more money than you used to spend.
At the time, a DGX A100 — a server that housed eight A100 GPUs (starting at around $10,000 per GPU at launch, increasing with the amount of on-board RAM, as is the case across the board) — started at $199,000. The next-generation DGX, launched in 2022, was made up of eight H100 GPUs (starting at $25,000 per GPU, the next-generation “Hopper” chips were apparently 30 times more powerful than the A100), and retailed from $300,000.
You'll be shocked to hear the next generation Blackwell SuperPods started at $500,000 when launched in 2024. A single B200 GPU costs at least $30,000.
Because nobody else has really caught up with CUDA, NVIDIA has a functional monopoly, and yes, you can have a situation where a market has a monopoly, even if there is, at least in theory, competition. Once a particular brand — and particular way of writing software for a particular kind of hardware — takes hold, there's an implicit cost of changing to another, on top of the fact that AMD and others have yet to come up with something particularly competitive.
Why did I write this? Because I want you to understand why everybody is paying NVIDIA such extremely large amounts of money. Every year, NVIDIA comes up with a new GPU, and that GPU is much, much more expensive, and NVIDIA makes so much more money, because everybody has to build out AI infrastructure full of whatever the latest NVIDIA GPUs are, and those GPUs are so much more expensive every single year.
If you’re looking at this through the cold, unthinking lenses of late-stage capitalism, this all sounds really good! I’ve basically described a company that has an essential monopoly in the one thing required for a high-growth (if we’re talking exclusively about capex spending) industry to exist.
Moreover, that monopoly is all-but assured, thanks to NVIDIA’s CUDA moat, its first-mover advantage, and the actual capabilities of the products themselves — thereby allowing the company to charge a pretty penny to customers.
And those customers? If we temporarily forget about the likes of Nebius and CoreWeave (oh, how I wish I could forget about CoreWeave permanently), we’re talking about the biggest companies on the planet. Ones that, surely, will have no problems paying their bills.
How Did We Get Here? Why Did Everybody Buy GPUs?
Back in February 2023, I wrote about The Rot Economy, and how everything in tech had become oriented around growth — even if it meant making products harder to use as a means of increasing user engagement or funnelling them toward more-profitable parts of an app.
Back in June 2024, I wrote about the Rot-Com Bubble, and my greater theory that the tech industry has run out of hypergrowth ideas:
Yet, without generative AI, what do these companies have left? What's the next big thing? For the best part of 15 years we've assumed that the tech industry would always have something up its sleeves, but what's become painfully apparent is that the tech industry might have run out of big, sexy things to sell us, and the "disruption" that tech has become so well-known for was predicated on there being markets for them to disrupt, and ideas that they could fund to do so. A paper from Nature from last year posited that the pace of disruptive research is slowing, and I believe the same might be happening in tech, except we've been conflating "innovation" and "finding new markets to add software and hardware to" for twenty years.
The net result of this creative stagnancy is the Rot Economy and the Rot-Com bubble — a tech industry laser-focused on finding markets to disrupt rather than needs to be met, where the biggest venture capital investments go into companies that can sell for massive multiples rather than stable, sustainable businesses. There is no reason that Google, or Meta, or Amazon couldn't build businesses that have flat, sustainable growth and respectable profitability. They just choose not to, in part because the markets would punish it, and partially because their DNA has been poisoned by rot that demands there must always be more.
In simple terms, big tech — Amazon, Google, Microsoft and Meta, but also a number of other companies — no longer has the “next big thing,” and jumped on AI out of an abundance of desperation.
Hell, look at Oracle. This company started off by selling databases and ERP systems to big companies, and then trapping said companies by making it really, really difficult to migrate to cheaper (and better) solutions, and then bleeding said companies with onerous licensing terms (including some where you pay by the number of CPU cores that use the application).
It doesn’t do anything new, or exciting, or impressive, and even when presented with the opportunity to do things that are useful or innovative (like when it bought Sun Microsystems), it turns away. I imagine that, deep down, it recognizes that its current model just isn’t viable in the long-term, and so, it needs something else.
When you haven’t thought about innovation… well… ever, it’s hard to start. Generative AI, on the face of it, probably seemed like a godsend to Larry Ellison.
We also live in an era where nobody knows what big tech CEOs do other than make nearly $100 million a year, meaning that somebody like Satya Nadella can get called a “thoughtful leader with striking humility” for pushing Copilot AI in every single part of your Microsoft experience, even Notepad, a place that no human being would want it, and accelerating capital expenditures from $28 billion across the entirety of FY 2023 to $34.9 billion in its latest quarter.
In simpler terms, spending money makes a CEO look busy. And at a time when there were no other potential growth avenues, AI was a convenient way to make everybody look busy. Every department can “have an AI strategy,” and every useless manager and executive can yell, as ServiceNow’s CEO did back in 2022, “let me make it clear to everybody here, everything you do: AI, AI, AI, AI, AI.”
I should also add that ChatGPT was the first real, meaningful hit that the American tech industry had produced in a long, long time — the last being, if I’m honest, Uber, and that’s if we allow “successful yet not particularly good businesses” into the pile.
If we insist on things like “profitability” and “sustainability,” US tech hasn’t done so great. Snowflake runs at a loss, Snap runs at a loss, and while Uber has turned things around somewhat, it’s hardly created the next cloud computing or smartphone.
Putting aside finances, the last major “hit” was probably Venmo or Zelle, and maybe, if I’m feeling generous, smart speakers like the Amazon Echo and Apple HomePod. Much like Uber, none of these were “the next big thing,” which would be fine except big tech needs more growth forever right now, pig!
Aside: None of this is to say there has been no innovation. Just not something on the level of a smartphone or cloud computing.
This is why Google, Amazon and Meta all do 20 different things — although rarely for any length of time, with these “things” often having a shelf life shorter than a can of peaches — because The Rot Economy’s growth-at-all-costs mindset exists only to please the markets, and the markets demanded growth.
ChatGPT was different. Not only did it do something new, it did so in a way that was relatively easy to get people to try and “see the potential” of. It was also really easy to convince people it would become something bigger and better, because that’s what tech does. To quote Bender and Hanna, AI is a “marketing term” — a squishy way of evoking futuristic visions of autonomous computers that can do anything and everything for us, and because both consumers and analysts have been primed to believe and trust the tech industry, everybody believed that whatever ChatGPT was would be the Next Big Thing.
And said “Next Big Thing” is powered by Large Language Models, which require GPUs sold by one company — NVIDIA.
AI became a very useful thing to do. If a company wanted to seem futuristic and attract investors, it could now “integrate AI.” If a hyperscaler wanted to seem enterprising and like it was “building for the future,” it could buy a bunch of GPUs, or invest in its own silicon, or, as Google, Microsoft, Amazon and Meta have done, shove AI in every imaginable crevice of the app.
Investors could invest in AI companies, retail investors (i.e., regular people) could invest in AI stocks, tech reporters could write about something new in AI, LinkedIn perverts could write long screeds about AI, the markets could become obsessed with AI…
…and yeah, you can kind of see how things got out of control. Everybody now had something to do. An excuse to do AI, regardless of whether it made sense, because everybody else was doing it.
ChatGPT quickly became one of the most popular websites on the internet — all while OpenAI burned billions of dollars — and because the media effectively published every single thought that Sam Altman had (such as that GPT-4 would “automate away some jobs and create others” and that he was a “little bit scared of it”), AI, as an idea, technology, symbolic stock trope, marketing tool and myth became so powerful that it could do anything, replace anyone, and be worth anything, even the future of your company.
Amongst the hype, there was an assumption related to scaling laws (summarized well by Charlie Meyer):
In 2020, one of the most important papers in the development of AI was published: Scaling Laws for Neural Language Models, which came from a group at OpenAI.
This paper showed with just a few charts incredibly compelling evidence that increasing the size of large language models would increase their performance. This paper was a large driver in the creation of GPT-3 and today’s LLM revolution, and caused the movement of trillions of dollars in the stock market.
In simple terms, the paper suggested that shoving in more training data and using more compute power would predictably increase the ability of a model to do stuff. And to make a model that did more stuff, you needed more GPUs and more data centers. Did it matter that there was compelling evidence in 2022 (Gary Marcus was right!) that there were limits to scaling laws, and that we would hit the point of diminishing returns?
Nah!
Amidst all this, NVIDIA has sold over $200 billion of GPUs since the beginning of 2023, becoming the largest company on the stock market and trading at over $170 as of writing this sentence only a few years after being worth $19.52 a share.
You see, Meta, Google, Microsoft and Amazon all wanted to be “part of the future,” so they sunk a lot of their money into NVIDIA, making up 42% of its revenue in its fiscal year 2025. Though there are some arguments about exactly how much of big tech’s billowing capital expenditures are spent on GPUs, some estimate somewhere between 41% to more than 50% of a data center’s capex is spent on them.
If you’re wondering what the payoff is, well, you’re in good company. I estimate that there’s only around $61 billion in total generative AI revenue, and that includes every hyperscaler and neocloud. Large Language Models are limited, AI agents are a pipedream and simply do not work, AI-powered products are unreliable and coding LLMs make developers slower, and the cost of inference — the way in which a model produces its output — keeps going up.
Big Tech Needs To Make $2 Trillion In AI-Specific Revenue by 2030 Or It’s Wasted Its Capex
So, due to the fact that so much money has now been piled into building AI infrastructure, and big tech has promised to spend hundreds of billions of dollars more in the next year, big tech has found itself in a bit of a hole.
How big a hole? Well, by the end of the year, Microsoft, Amazon, Google and Meta will have spent over $400bn in capital expenditures, much of it focused on building AI infrastructure, on top of $228.4 billion in capital expenditures in 2024 and around $148bn in 2023, for a total of around $776bn in the space of three years — and they intend to spend $400 billion or more in 2026.
As a result, based on my analysis, big tech needs to make $2 trillion in brand new revenue, specifically from AI by 2030, or all of this was for nothing. I go into detail here in my premium piece, but I’m going to give you a short explanation here.
Let’s Learn About Depreciation! (And Why Michael Burry Is Bringing It Up)
Sadly you’re going to have to learn stuff. I know! I’m sorry. Introducing a term: depreciation. From my October 31 newsletter:
So, when Microsoft buys, say, $100 million in GPUs, it immediately comes out of its capital expenditures, which is when a company uses money to invest in either buying or upgrading something. It then gets added to its Property, Plants and Equipment Assets — PPE for short, although some companies list this on their annual and quarterly financials as “Property and Equipment.”
PPE sits on the balance sheet — it's an asset — as it’s the stuff the company actually owns or has leased.
GPUs "depreciate" — meaning they lose value — over time, and this depreciation is represented on the balance sheet and the income statement. Essentially, the goal is to represent the value of the assets that a company has. On the income statement, we see how much the assets have declined during that reporting period (whether that be a year, or a quarter, or something else), whereas the balance sheet shows the cumulative depreciation of every asset currently in play. Depreciation does two things. First, it allows a company to accurately (to an extent) represent the value of the things it owns. Secondly, it allows companies to deduct the cost of an asset from their taxes across the useful life of said object, right up until its eventual removal.
The way this depreciation is actually calculated can vary — there are several different methods available — with some allowing for greater deductions at the start of the term, which is useful for those items that’ll experience the biggest drop in value right after acquisition and initial usage. An example you’re probably familiar with is a new car, which loses a significant chunk of its value the moment it’s driven off the dealership lot.
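The two methods described above can be sketched in a few lines of code. This is a minimal illustration with made-up numbers — a $100 million GPU purchase with zero salvage value — not any company's actual schedule:

```python
# Two common depreciation methods, illustrated with a hypothetical
# $100M GPU purchase and zero salvage value.

def straight_line(cost: float, years: int) -> list[float]:
    """Equal expense in every year of the asset's useful life."""
    return [cost / years] * years

def double_declining(cost: float, years: int) -> list[float]:
    """Accelerated method: bigger write-offs early, like a new car
    losing a chunk of its value the moment it leaves the lot."""
    rate = 2 / years
    book, expenses = cost, []
    for _ in range(years):
        expense = book * rate
        book -= expense
        expenses.append(expense)
    expenses[-1] += book  # write off whatever book value remains in the final year
    return expenses

cost = 100_000_000.0
print(straight_line(cost, 5))     # $20M a year, every year
print(double_declining(cost, 5))  # $40M, $24M, $14.4M... front-loaded
```

Either way, the total expensed over the asset's life is the same $100 million — the methods only differ in *when* the cost hits the income statement.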
Depreciation has become the big, ugly problem with GPUs, specifically because of their “useful life” — defined either as how long the thing is actually able to run before it dies, or how long until it becomes obsolete.
Nobody seems to be able to come to a consensus about how long this should be. In Microsoft’s case, depreciation for its servers is spread over six years — a convenient change it made in August 2022, a few months before the launch of ChatGPT. This means that Microsoft can spread the cost of the tens of thousands of A100 GPUs bought in 2020, or the 450,000 H100 GPUs it bought in 2024, across six years, regardless of whether those are the years they will be either A) generating revenue or B) still functional.
CoreWeave, for what it’s worth, says the same thing — but largely because it’s betting that it’ll still be able to find users for older silicon after its initial contracts with companies like OpenAI expire. The problem, as the aforementioned linked CNBC article points out, is that this is pretty much untested ground.
Whereas we know how long, say, a truck or a piece of heavy machinery lasts, and how long it can deliver value to an organization, we don’t know the same thing about the kind of data center GPUs that hyperscalers are spending tens of billions of dollars on each year. Any kind of depreciation schedule is based on, at best, assumptions, and at worst, hope.
The assumption that the cards won’t degrade with heavy usage. The assumption that future generations of GPUs won’t be so powerful and impressive, they’ll render the previous ones more obsolete than expected, kind of like how the first jet-powered planes of the 1950s did to those manufactured just one decade prior. The assumption that there will, in fact, be a market for older cards, and that there’ll be a way to lease them profitably.
What if those assumptions are wrong? What if that hope is, ultimately, irrational?
Mihir Kshirsagar of the Center for Information Technology Policy framed the problem well:
Here is the puzzle: the chips at the heart of the infrastructure buildout have a useful lifespan of one to three years due to rapid technological obsolescence and physical wear, but companies depreciate them over five to six years. In other words, they spread out the cost of their massive capital investments over a longer period than the facts warrant—what The Economist has referred to as the “$4trn accounting puzzle at the heart of the AI cloud.”
This is why Michael Burry brought it up recently — because spreading out these costs allows big tech to make their net income (i.e. profits) look better. In simple terms, by spreading out costs over six years rather than three, hyperscalers are able to reduce a line item that eats into their earnings, which makes their companies look better to the markets.
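To make the income-flattering effect concrete, here's a sketch with hypothetical numbers (the $60 billion figure is invented for illustration, not Microsoft's actual capex):

```python
# How stretching a straight-line depreciation schedule from three years
# to six flatters reported income. All numbers are hypothetical.

gpu_capex = 60_000_000_000  # a made-up $60B of servers and GPUs

# Annual depreciation expense under each schedule
expense_3yr = gpu_capex / 3   # $20B hits the income statement each year
expense_6yr = gpu_capex / 6   # $10B hits the income statement each year

# For each of the first three years, reported operating income is $10B
# higher under the six-year schedule. The same cash left the building
# either way -- it just shows up on the income statement later.
print(f"3-year schedule: ${expense_3yr / 1e9:.0f}B/year")
print(f"6-year schedule: ${expense_6yr / 1e9:.0f}B/year")
print(f"Annual income flattery: ${(expense_3yr - expense_6yr) / 1e9:.0f}B")
```

The catch: if the GPUs actually die or become obsolete in three years, years four through six carry depreciation expense on hardware that's generating no revenue at all.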
There’s No Way For Big Tech To Make Back Its Capital Expenditures On AI
So, why does this create an artificial time limit?
- Let’s start with a horrible fact: it takes about 2.5 years of construction time and $50 billion to build a gigawatt of data center capacity.
- One way or another, these GPUs are depreciating in value, either through death (or reduced efficacy through wear and tear) or becoming obsolete, which is very likely as NVIDIA has committed to releasing a new GPU every year.
- At some point, Wall Street is going to need to see some sort of return on this investment, and right now that return is “negative dollars.” I break it down in my premium piece, but I estimate that big tech needs to make $2 for every $1 of capex. This revenue must also be brand spanking new, as this capex is only for AI.
- Meta, Amazon, Google and Microsoft are already years and hundreds of billions of dollars in, and are yet to see a dollar of profit, creating a $1.21 trillion hole just to justify the expenses (so around $605 billion in capex all told, at the time I calculated it).
- You might argue that there’s a scenario where, say, an A100 GPU is “useful” past the 3 or 6 year shelf life. Even if that were the case, the average rental price of an A100 GPU is 99 cents an hour. This is a four or five-year-old GPU, and customers are paying for it like they would a five-year-old piece of hardware. The same fate awaits H100 GPUs too.
- Every year, NVIDIA releases a new GPU, lowering the value of all the other GPUs in the process, making it harder to fill in the holes created by all the other GPUs.
- This whole time, nobody appears to have found a way to make a profit, meaning that the hole created by these GPUs remains unfilled, all while big tech firms buy more GPUs, creating more holes to fill.
In really, really simple terms:
- Big tech keeps buying more GPUs despite the old GPUs failing to pay for themselves.
- To fix this problem, big tech is buying more GPUs.
- Newer generation GPUs — like NVIDIA’s Blackwell and Vera Rubin — require entirely new data center architecture, meaning that one has to either build a brand new data center or retrofit an old one.
- Big tech is spending billions of dollars to make sure it’s able to turn on these new GPUs, at which point you may think that they’ll make a profit.
- Even when they’re turned on, these things don’t make money. The Information reports that Oracle’s Blackwell GPUs have a negative 100% gross margin.
- How exactly are these bloody things meant to make more money than they cost in the next six years, let alone three? They don’t make a profit now and have no path to doing so in the future! I feel like I’m going INSANE!
So, now that you know this, there’s a fairly obvious question to ask: why are they still buying GPUs? Also…where the fuck are they going?
Is NVIDIA Shipping Millions Of GPUs And Putting Them In Warehouses? Where Are The 6 Million Shipped Blackwell GPUs?
As I covered in the Hater’s Guide To NVIDIA:
Going off of [Stargate] Abilene’s [OpenAI’s giant data center project in Abilene, TX] mathematics — $40bn of chips across 8 buildings — that means each building is about $5 billion of chips (and I assume the associated hardware). Each building is 400,000 square feet, which is over 9 acres of space.
NVIDIA CEO Jensen Huang claims that NVIDIA has shipped 6 million Blackwell GPUs — and according to CNBC, that specifically refers to AI GPUs shipped. They have left NVIDIA’s warehouses. These chips are in flight. They are real. Where the fuck are they?
So, Stargate Abilene is meant to have 1.2GW of power, and each building is 440,000 square feet according to developer Lancium, and it appears based on some reporting that each building will be 100MW of IT load, though I’m having trouble getting a consistent answer here.
In any case, we can do some napkin maths! 100MW = 50,000 Blackwell GPUs (I’m going to guess B200s), making 6 million Blackwell GPUs somewhere in the region of 12GW of IT load, and because data centers need 30% or more power than their IT loads (to cover for that “design day” I mentioned earlier), that means 15.6GW of power is required to make the last four quarters of NVIDIA GPUs sold turn on.
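The napkin math above, spelled out. The 50,000-GPUs-per-100MW figure and the 30% overhead on IT load are this piece's assumptions, not established specs:

```python
# Napkin math: how much power would 6 million shipped Blackwell GPUs need?
# Assumptions (from the article, not vendor specs): 50,000 B200-class GPUs
# per 100MW of IT load, and ~30% power overhead on top of IT load.

blackwell_shipped = 6_000_000   # GPUs Jensen Huang says have shipped
gpus_per_100mw = 50_000

it_load_mw = blackwell_shipped / gpus_per_100mw * 100  # 12,000 MW
it_load_gw = it_load_mw / 1000                         # 12 GW

overhead = 1.3  # data centers need ~30% more power than their IT load
total_power_gw = it_load_gw * overhead                 # 15.6 GW

print(f"IT load: {it_load_gw:.1f} GW")
print(f"Total power required: {total_power_gw:.1f} GW")
```

For scale: that 15.6GW is roughly thirteen Stargate Abilenes' worth of the project's planned 1.2GW.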
While I’m not going to copy-paste my whole (premium) piece, I was only able to find, at most, a few hundred thousand Blackwell GPUs — many of which aren’t even online! — including OpenAI’s Stargate Abilene (allegedly 400,000, though only two buildings are handed over); a theoretical 131,000-GPU cluster owned by Oracle announced in March 2025; 5,000 Blackwell GPUs at the University of Texas at Austin; “more than 1,500” in a Lambda data center in Columbus, Ohio; the Department of Energy’s still-in-development 100,000-GPU supercluster, as well as “10,000 NVIDIA Blackwell GPUs” that are “expected to be available in 2026” in its “Equinox” cluster; 50,000 going into the still-unbuilt Musk-run Colossus 2 supercluster; CoreWeave’s “largest GB200 Blackwell cluster” of 2,496 Blackwell GPUs; “tens of thousands” of them deployed globally by Microsoft (including 4,600 Blackwell Ultra GPUs); and 260,000 GPUs for five AI data centers for the South Korean government…and I am still having trouble finding one million of these things that are actually allocated anywhere, let alone in a data center, let alone one with sufficient power.
I do not know where these six million Blackwell GPUs have gone, but they certainly haven’t gone into data centers that are powered and turned on. In fact, power has become one of the biggest issues with building these things, in that it’s really difficult (and maybe impossible!) to get the amount of power these things need.
In really simple terms: there isn’t enough power or built data centers for those six million Blackwell GPUs, in part because the data centers aren’t built, and in part because there isn’t enough power for the ones that are. Microsoft CEO Satya Nadella recently said on a podcast that his company “[didn’t] have the warm shells to plug into,” meaning buildings with sufficient power, and heavily suggested Microsoft “may actually have a bunch of chips sitting in inventory that [he] couldn’t plug in.”
The news that HPE’s (Hewlett Packard Enterprise) AI server business underperformed, and by a significant margin, only raises more questions about where these chips are going.
So why, pray tell, is Jensen Huang of NVIDIA saying that he has 20 million Blackwell and Vera Rubin GPUs ordered through the end of 2026? Where are they going to go?
I truly don’t know!
Why Is Anybody Still Buying GPUs? This Is All Insane!
AI bulls will tell you about the “insatiable demand for AI” and that these massive amounts of orders are proof of something or other, and you know what, I’ll give them that — people sure are buying a lot of NVIDIA GPUs!
I just don’t know why.
Nobody has made a profit from AI, and those making revenue aren’t really making much.
For example, my reporting on OpenAI from a few weeks ago suggests that the company only made $4.329 billion in revenue through the end of September, extrapolated from the 20% revenue share that Microsoft receives from the company. As some people have argued with the figures, claiming they are either A) delayed or B) not inclusive of the revenue that OpenAI is paid from Microsoft as part of Bing’s AI integration and sales of OpenAI’s models via Microsoft Azure, I wanted to be clear about two things:
- This is accrual accounting, meaning that these numbers are revenue booked in the quarter I reported them. Any comments about quarter-long delays in payments are incorrect.
- Microsoft’s revenue share payments to OpenAI are pathetic — totalling, based on documents reviewed by this publication, $69.1 million in CY (calendar year) Q3 2025.
In the same period, OpenAI spent $8.67 billion on inference (the process by which an LLM creates an output).
This is the biggest company in the generative AI space, with 800 million weekly active users and the mandate of heaven in the eyes of the media. Anthropic, its largest competitor, alleges it will make $833 million in revenue in December 2025, and based on my estimates will end up having $5 billion in revenue by end of year.
Based on my reporting from October, Anthropic spent $2.66 billion on Amazon Web Services through the end of September, meaning that it (based on my own analysis of reported revenues) spent 104% of its $2.55 billion in revenue up until that point just on AWS, and likely spent just as much on Google Cloud.
While everybody wants to tell the story of Anthropic’s “efficiency” and “only burning $2.8 billion this year,” one has to ask why a company that is allegedly “reducing costs” had to raise $13 billion in September 2025 after raising $3.5 billion in March 2025, and after raising $4 billion in November 2024? Am I really meant to read stories about Anthropic hitting break even in 2028 with a straight face? Especially as other stories say Anthropic will be cash flow positive “as soon as 2027.”
These are the two largest companies in the generative AI space, and by extension the two largest consumers of GPU compute. Both companies burn billions of dollars, and require an infinite amount of venture capital to keep alive at a time when the Saudi Public Investment Fund is struggling and the US venture capital system is set to run out of cash in the next year and a half. The two largest sources of actual revenue for selling AI compute are subsidized by venture capital and debt. What happens if these sources dry up?
And, in all seriousness, who else is buying AI compute? What are they doing with it? Hyperscalers (other than Microsoft, which chose to stop reporting its AI revenue back in January, when it claimed $13 billion, or about $1 billion a month, in revenue) don’t disclose anything about their AI revenue, which in turn means we have no real idea about how much real, actual money is coming in to justify these GPUs.
CoreWeave made $1.36 billion in revenue (and lost $110 million doing so) in its last quarter — and if that’s indicative of the kind of actual, real demand for AI compute, I think it’s time to start panicking about whether all of this was for nothing.
CoreWeave has a backlog of over $50 billion in compute, but $22 billion of that is OpenAI (a company that burns billions of dollars a year and lives on venture subsidies), $14 billion of that is Meta (which has yet to work out how to make any kind of real money from generative AI, and no, its “generative AI ads” are not the future, sorry), and the rest is likely a mixture of Microsoft and NVIDIA, which agreed to buy $6.3 billion of any unused compute from CoreWeave through 2032.
Sorry, I also forgot Google, which is renting capacity from CoreWeave to rent to OpenAI.
I also forgot to mention that CoreWeave’s backlog problem stems from data center construction delays. That, and CoreWeave has $14 billion in debt, mostly from buying GPUs — debt it was able to raise by using its GPUs as collateral and by pointing to contracts from customers willing to pay it, such as NVIDIA, which is also selling it the GPUs.
So, just to be abundantly clear: CoreWeave has bought all those GPUs to rent to OpenAI, Microsoft (for OpenAI), Meta, Google (OpenAI), and NVIDIA, which is the company that benefits from CoreWeave’s continued ability to buy GPUs.
Otherwise, where’s the fucking business, exactly? Who are the customers? Who are the people renting these GPUs, and for what purpose are they being rented? How much money is renting those GPUs actually bringing in? You can sit and waffle on about the supposedly glorious “AI revolution” all you want, but where’s the money, exactly?
And why, exactly, are we buying more GPUs?
What are they doing? To whom are they being rented? For what purpose? And why isn’t it creating the kind of revenue that is actually worth sharing?
Is it because the revenue sucks?
Is it because it’s unprofitable to provide it?
And why, at this point in history, do we not know? Hundreds of billions of dollars have made NVIDIA the biggest company on the stock market, and we still do not know why people are buying these fucking things.
NVIDIA Is Dependent On Endless Debt and Credit - As Are Most Of The Customers Of AI Compute, Creating A Deadly Cycle That Ends In Disaster
NVIDIA is currently making hundreds of billions in revenue selling GPUs to companies that either plug them in and start losing money or, I assume, put them in a warehouse for safe keeping.
This brings me to my core anxiety: why, exactly, are companies pre-ordering GPUs? What benefit is there in doing so? Blackwell does not appear to be “more efficient” in a way that actually makes anybody a profit, and we’re potentially years from seeing these GPUs in operation in data centers at the scale they’re being shipped — so why would anybody be buying more?
I doubt these are new customers — they’re likely hyperscalers, neoclouds like CoreWeave and resellers like Dell and SuperMicro — because the only companies that can actually afford to buy them are those with massive amounts of cash or debt, to the point that even Google, Amazon, Meta and Oracle are taking on massive amounts of new debt, all without a plan to make a profit.
NVIDIA’s largest customers are increasingly unable to afford its GPUs, which appear to be increasing in price with every subsequent generation. NVIDIA’s GPUs are so expensive that the only way you can buy them is by already having billions of dollars or being able to raise billions of dollars, which means, in a very real sense, that NVIDIA is dependent not on its customers, but on its customers’ credit ratings and financial backers.
To make matters worse, the key reason that one would buy a GPU is to either run services using it or rent it to somebody else, and the two largest parties spending money on these services are OpenAI and Anthropic, both of whom lose billions of dollars, and are thus dependent on venture capital and debt (remember, OpenAI has a $4 billion line of credit, and Anthropic a $2.5 billion one too).
In simple terms, NVIDIA’s customers rely on debt to buy its GPUs, and NVIDIA’s customers’ customers rely on debt to pay to rent them.
Yet it gets worse from there. Who, after all, are the biggest customers renting AI compute?
That’s right, AI startups, all of which are deeply unprofitable. Cursor — Anthropic’s largest customer and now its biggest competitor in the AI coding sphere — raised $2.3 billion in November after raising $900 million in June. Perplexity, one of the most “popular” AI companies, raised $200 million in September after raising $100 million in July after seeming to fail to raise $500 million in May (I’ve not seen any proof this round closed) after raising $500 million in December 2024. Cognition raised $400 million in September after raising $300 million in March. Cohere raised $100 million in September a month after it raised $500 million.
Venture capital is feeding money to either OpenAI or Anthropic to use their models, or in some cases hyperscalers or neoclouds like CoreWeave or Lambda to rent NVIDIA GPUs. OpenAI and Anthropic then raise venture capital or debt to pay hyperscalers or neoclouds to rent NVIDIA GPUs. Hyperscalers and neoclouds then use either debt or existing cash flow (in the case of hyperscalers, though not for long!) to buy more NVIDIA GPUs.
Only one company actually makes a profit here: NVIDIA.
Aside: I should add there are also NVIDIA resellers like Dell or Supermicro, which buy NVIDIA GPUs, put them in servers, and sell them to neoclouds like Lambda or CoreWeave.
At some point, a link in this debt-backed chain breaks, because very little cashflow exists to prop it up. At some point, venture capitalists will be forced to stop funnelling money into unprofitable, unsustainable AI companies, which will make those companies unable to funnel money into the pockets of those buying GPUs, which will make it harder for those companies buying GPUs to justify (or raise debt for) buying more GPUs.
And if I’m honest, none of NVIDIA’s success really makes any sense. Who is buying so many GPUs? Where are they going?
Why are inventories increasing? Is it really just pre-buying parts for future orders? Why are accounts receivable climbing, and how much product is NVIDIA shipping before it gets paid? While these are both explainable as “this is a big company and that’s how big companies do business” (which is true!), why do receivables not seem to be coming down?
And how long, realistically, can the largest company on the stock market continue to grow revenues selling assets that only seem to lose its customers money?
I worry about NVIDIA, not because I believe there’s a massive scandal, but because so much rides on its success, and its success rides on the back of dwindling amounts of venture capital and debt, because nobody is actually making money to pay for these GPUs.
In fact, I’m not even saying it goes tits up. Hell, it might even have another good quarter or two. It really comes down to how long people are willing to be stupid and how long Jensen Huang is able to call hyperscalers at three in the morning and say “buy one billion dollars of GPUs, pig.”
No, really! I think much of the US stock market’s growth is held up by how long everybody is willing to be gaslit by Jensen Huang into believing that they need more GPUs. At this point it’s barely about AI anymore, as AI revenue — real, actual cash made from selling services run on GPUs — doesn’t even cover its own costs, let alone create the cash flow necessary to buy $70,000 GPUs thousands at a time. It’s not like any actual innovation or progress is driving this bullshit!
In any case, the markets crave a healthy NVIDIA, as so many hundreds of billions of dollars of NVIDIA stock sit in the hands of retail investors and people’s 401ks, and its endless growth has helped paper over the pallid growth of the US stock market and, by extension, the decay of the tech industry’s ability to innovate.
Once this pops — and it will pop, because there is simply not enough money to do this forever — there must be a referendum on those that chose to ignore the naked instability of this era, and the endless lies that inflated the AI bubble.
Until then, everybody is betting billions on the idea that Wile E. Coyote won’t look down.