The Rot-Com Bubble

Edward Zitron

If you enjoy this post, why not subscribe to my podcast Better Offline? This week's a two-part deep dive into the Rot-Com Bubble.


The noxious growth-at-all-costs mindset of the Rot Economy sits at the core of every issue that I've ever written about. It’s the force that drives businesses to grow bigger rather than better, making more products to conquer more markets rather than making products or services that people need or improving products they already like.

Tech's rot era doesn't feel like any other economic epoch (such as the post-2008 financial crisis, and the stagnation that lingered for years afterwards), because there wasn't a single obvious event that started the cataclysmic decline of the services we use and the companies that make them.

However, if I had to choose, I'd say things really began to deteriorate sometime in 2019. Tech had lost something: while there were new gadgets, apps, and services, tech started to feel iterative rather than innovative, and by 2021, even the pretense of gradual improvement had been dropped. It felt like tech companies were trying to sell us things that didn't actually exist.

We were told that NFTs would replace physical, tangible collectibles. That cryptocurrency would replace regular money, while also emancipating consumers from unstoppable market forces like inflation, as well as the rent-seeking middlemen that take a cut of every purchase made with a credit or debit card. And yet, the actual services championing these arguments didn't really seem to do anything or improve our lives in any meaningful way. We were told that our futures were in the metaverse, and that we'd live in this interconnected "new internet," yet what we actually got was an extremely wonky virtual reality space that Mark Zuckerberg somehow burned $36 billion to make.

Today, we're being told that our glorious AI-powered future is imminent, yet what we've actually got is unprofitable, unsustainable generative AI with an intractable problem of spitting out incorrect information, which Google CEO Sundar Pichai says is "an inherent feature" of a technology he's now plugged into Google Search, generating hilariously incorrect "answers" to queries based on the links of a decaying search engine. And at the forefront of the AI boom is Sam Altman's $80 billion juggernaut OpenAI, a company that allegedly will build "artificial general intelligence" that experiences human-like cognition, an idea that is simply not possible given how generative AI works.

Windows laptops will soon integrate an AI-powered "Recall" feature that lets you search everything you've done on your computer in the last three months, recording everything from the meetings you've attended to the things you've written — a feature that nobody asked for, and one that inherently encroaches on the user's privacy and security, because it takes screenshots of the user's machine every few seconds and stores them (along with the AI-generated inferences) in a locally-hosted database.

It is, in essence, a pre-installed screen recorder — the kind that a hacker might install on a victim’s computer — and if an attacker gains access to it, the ramifications could be catastrophic. Despite being an inherently AI-powered feature, it can’t distinguish between ordinary computer usage and genuinely sensitive information — like passwords, health information, or trade secrets that must be protected at all costs. It treats, say, a confidential email from your doctor with the same concern as a YouTube video or a video game. 

Already, white-hat security researchers have created proof-of-concept malware applications that can stealthily exfiltrate sensitive data obtained or generated by Microsoft Recall from a user’s computer, and with a minimum of effort. 
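To illustrate why researchers are so alarmed, here's a minimal sketch of what such a proof of concept would need to do, assuming (as the research suggests) that Recall's captures land in an ordinary local database readable by the user's own account. The path, table, and column names below are hypothetical illustrations, not Microsoft's actual layout:

```python
# Minimal sketch: reading hypothetical Recall-style captures from a local
# SQLite database. The path and schema are ILLUSTRATIVE ASSUMPTIONS,
# not Microsoft's real implementation.
import sqlite3
from pathlib import Path

# Hypothetical location of the capture database under the user's profile.
RECALL_DB = Path.home() / "AppData" / "Local" / "Recall" / "captures.db"

def dump_captured_text(db_path: Path) -> list[str]:
    """Return every OCR'd text snippet stored alongside the screenshots."""
    conn = sqlite3.connect(str(db_path))
    try:
        rows = conn.execute(
            "SELECT captured_at, window_title, ocr_text FROM captures"
        ).fetchall()
    finally:
        conn.close()
    # The crucial point: no privilege escalation is required. Anything
    # running as the logged-in user can read the same files Recall wrote,
    # passwords, medical emails, and trade secrets included.
    return [f"{ts} [{title}]: {text}" for ts, title, text in rows]

if __name__ == "__main__":
    for line in dump_captured_text(RECALL_DB):
        print(line)
```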

And now, the UK’s Information Commissioner’s Office — the nation’s privacy watchdog, with powers to impose steep fines on violators — is probing whether the technology presents an unacceptable risk to consumers. Which, as anyone with a modicum of common sense can confirm, it does.

Every major tech company is "integrating AI" into their products and services, yet under the hood, the "AI" they're integrating doesn't actually seem to do anything new, or generate a profit, or solve any particular need. Even the companies themselves seem incapable of explaining why it's such a big deal, to the point that Microsoft's Super Bowl commercial for its OpenAI-powered Copilot assistant featured multiple things that it can't actually do, like generating the code for a 3D open-world game.

And that might actually be the problem.

For decades, the tech industry has been remarkably good at coming up with both innovative new products and ways to turn them into huge new markets for hyper-growth. Search engines, digital maps, smartphones and apps, social media, cloud computing, Software-as-a-Service, electric cars, streaming audio and video, and nearly tripling the number of people who use the internet. There were obvious, meaningful markets to move into — ways to connect people, ways to get people content they wanted, ways to sell people things that solved problems they had either for the first time or faster, like the transition from physical to digital media — and problems that were both important to solve and actually solvable.

Tech has perpetually succeeded at building new things that neatly create new markets, and has incentivized — both in the private and public markets — growing companies as fast and as big as possible to dominate these markets, with the assumption that there would always be more massive, multi-billion or multi-trillion-dollar markets to conquer in the future.

Between 2022 and 2023, only 100 million additional people got online, the slowest rate of growth in the last 18 years. And that's not because the need to bring connectivity to the masses — particularly those in the Global South — has been solved. It hasn't, as the most recent figures from the UN's International Telecommunication Union show: 33 percent of the world's population (or 2.6 billion people) has never used the Internet.

More worryingly, data I've received from Similarweb shows that the majority of the internet's top 100 web properties have seen significant declines in traffic since 2021. In the years since the world slowly emerged from lockdown, Google.com has seen a 5.3% decline in web visits, as have YouTube (-3.8%), Facebook (a remarkable -27.7%), Twitter (-3.5%), Amazon.com (-11.6%), Twitch.tv (-17.5%), Wikipedia (-24.8%) and even porn sites like xVideos (-27.4%) and Pornhub (-17.1%).

Though you might be tempted to dismiss this as the result of life returning to normal, and traditional office-based environments not being especially conducive to a crafty mid-day Pornhub visit, you shouldn't — the trend actually began earlier, with Similarweb data showing the decline starting as early as 2019. My analysis focuses on the 2021-2024 period because that's the one I have the most detailed month-by-month and year-by-year breakdowns for.

When you look at the trajectory of the web's most valuable properties on a year-over-year basis, things look especially bleak, with Google (-0.9%), YouTube (-4.4%), Facebook (-7.7%), Twitter (-6.2%), Twitch (-11.9%) and Amazon (-2.7%) all still seeing significant declines. Only a few sites like Reddit (+31.3%), TikTok (+11.6%), Instagram (+9.9%, but trending down every year since 2019), and LinkedIn (+17.9%) are seeing any form of year-over-year growth.

While it's important to note that these are visits rather than active users (and, in fairness, only cover visits made through a browser rather than an app, although that's unlikely to be a major driver of traffic for many of the sites listed), this is a truly astounding trend, one that suggests that, for the most part, the web's largest platforms are experiencing their own kind of digital recession. These companies might still be generating revenue, and they might still be acquiring new users, but there's no good way to spin the fact that traffic to platforms like Amazon and Google has effectively plateaued — something is shifting downwards, and it's been doing so since 2019. Perhaps this explains why platforms like Google and Facebook keep making changes to wring more profit out of each user journey to sustain growth: fewer people than ever are actually visiting their platforms.
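For clarity on what those figures mean: a decline percentage is just the relative change in total visits between the start and end of the measurement window. A quick sketch of the arithmetic, with invented visit counts — only the percentages above come from Similarweb:

```python
# Percentage change in web visits between two points in time.
# The visit counts below are made up for illustration; only the
# percentages quoted in the text come from Similarweb's data.
def pct_change(start_visits: float, end_visits: float) -> float:
    """Relative change, expressed as a percentage of the starting value."""
    return (end_visits - start_visits) / start_visits * 100

# A 27.7% decline means roughly 7.2 of every 10 visits remain:
start, end = 10_000_000_000, 7_230_000_000
print(f"{pct_change(start, end):+.1f}%")  # -> -27.7%
```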

I believe we're at the end of the Rot-Com boom — the tech industry's hyper-growth cycle where there were so many lands to conquer, so many new ways to pile money into so many new, innovative ideas that it felt like every tech company could experience perpetual growth simply by throwing money at the problem.

It explains why so many tech products — YouTube, Google Search, Facebook, and so on — feel like they’ve got tangibly worse. There’s no incentive to improve the things you’ve already built when you’re perpetually working on the next big thing.

This belief — that exponential growth is not just a reasonable expectation, but a requirement — is central to the core rot in the tech industry, and as these rapacious demands run into reality, the Rot-Com bubble has begun to deflate. As we speak, the tech industry is grappling with a mid-life crisis where it desperately searches for the next hyper-growth market, eagerly pushing customers and businesses to adopt technology that nobody asked for in the hopes that they can keep the Rot Economy alive.


The Rot Economy and tech's growth-lust isn't new. Venture capital has been incentivizing and monetizing the rot for over a decade, with Marc Andreessen advocating in 2011 that we should look to "expand the number of innovative new software companies created" rather than "constantly questioning their valuations." Yet, just one year earlier in March 2010, his partner Ben Horowitz advocated for "fat startups," saying that you "can't save your way to winning the market," and that "startup purgatory" is when you "don't go bankrupt, but you fail to build the number one product in the space" and have "zero chance of becoming a high-growth company," which Horowitz describes as "worse than startup hell" because you're "stuck with the small company," even if it's cash-flow positive.

At the time, it made sense — even if there’s something inherently abnormal about describing a stable, profitable company as being in a state that’s “worse than hell.” 

In 2010, it felt like we were making our first tentative steps into an unexplored digital frontier. There was no Instacart, no Zoom, no Snapchat, no Lyft, no Tinder, no Slack, no Snowflake, no DoorDash, no TikTok, no Discord, no Coinbase, no Robinhood, and no Venmo. Tesla, Facebook, LinkedIn, Airbnb, Uber, Square, Atlassian, Okta, MongoDB, Workday, Palo Alto Networks, Asana, UiPath and Spotify had yet to go public. Instagram was yet to be acquired by Facebook, consumer drones had yet to reach ubiquity, voice assistants were yet to be launched, 4G LTE networks were just launching, and Amazon Web Services was on course to make a mere $600 million in revenue in 2010, or around 2.5% of the $24 billion that AWS made in Q1 2024. Google and Apple's stock prices were worth less than a tenth of what they are today, and Nvidia — a stock now worth over $1,000 — traded at under $3 a share.

Between 2005 and 2018, we saw an incredible surge in both innovation and tech valuations, a seemingly-unstoppable period of growth and big, sexy ideas that created huge new markets. While "applications" existed in 2008, Apple's App Store — and the perpetually-connected nature of smartphones — consumerized distinct service-based applications, as well as the overall concept of software ecosystems, a philosophical and economic force that would permanently change both consumer and business software spending.

Connecting people digitally — through social media, by voice and video, and so on — has been central to the hyper-growth cycle, with innovations in both connectivity and cloud infrastructure allowing multiple companies to carve out hundred-billion- or trillion-dollar ecosystems doing so. On some level, big tech companies could put more money (through R&D, investment, or acquisitions) wherever they needed to as a means of making the future a reality — and, thanks to rock-bottom interest rates and the frothy investment market, they were never short of cash.

Yet, at some point, hype began to outpace innovation, and while there are new ideas, the fundamental technology required to realize them may not actually exist, making them primarily theoretical.

When Mark Zuckerberg renamed Facebook to Meta, he told Casey Newton — who should feel nothing but shame for accepting this in good faith — that he believed the metaverse was "an embodied internet" that was a "persistent, synchronous environment" that would resemble social media but also be "an environment where you're embodied in it." This is, of course, total nonsense. A cacophony of word salad conjured up by somebody who knows they won't receive pushback on their ideas, not even from those whose job it is to scrutinize the technology industry and ask difficult questions of technology leaders.

What Zuckerberg had actually built — by which, for the most part, I mean acquired — was a half-arsed virtual world accessed through niche virtual reality technology that nobody asked for and Meta couldn't actually build. What Mark Zuckerberg wanted to do was build "the successor to the mobile internet" in the hopes that "the metaverse" would be another hyper-growth industry where people would buy land (they didn't), or hang out en masse (they didn't), or have meetings (they didn't), all while sharing data with and watching ads served by Meta and other companies. 

In this cynical vision of the future, the internet — neutral, standards-based, and controlled by no single company — would be transformed into a platform, a feudal system where Meta would set the rules and exact a cut from each interaction and transaction. It was, quite plainly, a power grab the likes of which the Internet hasn't seen since the bleak days of Internet Explorer 6 and ActiveX.

The tech, however, was never there. Mark Zuckerberg sold everybody on the concept of a Ready Player One-esque metaverse. Putting aside the fact that Ready Player One is a poorly-written dystopia, this vision hinges heavily on the idea of technology that completely immerses the user, allowing them to exist in a digital world that feels real. The reality was far worse — nausea-inducing headsets that led you into a clunky, cartoonish world that was neither fun nor practical, and tens of billions of dollars of R&D costs developing headsets that lose money on every sale. 

Zuckerberg's glossy metaverse video — published at the same time Facebook rebranded to Meta, and significant insofar as it demonstrated the company's future ambitions and direction — showed exactly what he hoped he could build, and had he done so, I'd argue it might have been a success. But said success would be predicated on a pace of innovation that we've not seen in over a decade. Forty billion dollars later, the metaverse is no closer to existing, yet Zuckerberg keeps burning cash in the hopes that he can get just one more hit of hyper-growth. That's all he needs, man.

That's why Zuckerberg has gone so full-force on generative AI, shoving it into every Meta platform regardless of whether it does anything useful or says extremely strange things, like pretending it's a parent of a gifted child and recommending schools to other (actual) parents. It's the same reason that Sundar Pichai and Liz Reid are forcing generative AI into Google Search even as it actively misinforms their customers, recommending people introduce glue and rocks to their diet, claiming former U.S. President Barack Obama is a Muslim, and regurgitating stories from satirical news outlet The Onion as indisputable fact.

The tech industry is getting withdrawal symptoms, realizing that there might not be any massive new markets to turn into further billion-dollar arms of their trillion-dollar enterprises, and on some level I believe that the industry-wide alignment around an unprofitable, unsustainable tool is proof that things are getting desperate.

Why else would Sam Altman spend most of his time talking about what his stuff might do? Why else would he talk so often about building artificial general intelligence — a thing that is totally and utterly impossible to build with any of the generative artificial intelligence his company makes, and that likely requires number-crunching technology that doesn't even exist yet?

Companies that build useful things that people need don't need to talk about what they'll build in the future — you can see it in the things they're selling you today. 

Using the first iPhone, you could theorize that you might one day make video calls with it, or use distinct third-party apps like you did on a computer — and while Steve Jobs (he was an awful person, but stay with me) did talk about what was coming next, he did so by saying that it would add "3G and amazing things in the future" just before showing the actual, real features of the first iPhone, things that people actually wanted.

These promises were believable, either because they felt like a next logical step, or because they described features that could be found in other competing products. Apple has always been a company of iteration, rather than sheer, forceful, explosive creation. It doesn't create new categories, but refines them. This trait was even referenced in the classic Douglas Coupland novel Microserfs, first published in 1995, before Tim Cook's reign or even the return of Steve Jobs after his ousting. Apple didn't make the first smartwatch, tablet, or ARM-based PC. But it made them good.

Conversely, Sam Altman, Sundar Pichai and Satya Nadella seem intent on discussing what AI will do — it will have a "monumental impact," or be a smart person that knows everything about your life — and yet when actually asked what it does today, they're rendered stuttering and speechless, with Satya Nadella underwhelmingly claiming that Microsoft Copilot helps him compose emails better.

As an aside, Satya Nadella said in 2021 that Microsoft was, with the metaverse, creating "a new platform and a new application type" that was "similar to how [it] talked about the web and websites in the early '90s," and that he "could not overstate" how much of a breakthrough it was, only to dump it two years later for artificial intelligence.

Two years before that, Nadella called HoloLens 2, Microsoft's augmented reality glasses, "an absolute breakthrough" a few months after its demo failed live onstage at Microsoft Build 2019. And while Microsoft hasn't killed HoloLens, it's clear the company's appetite for mixed- and virtual-reality technology has dampened somewhat, with HoloLens workers hit particularly hard by the past year's sweeping layoffs at Redmond, and Microsoft discontinuing its VR social network AltspaceVR and Windows Mixed Reality.

Every single one of these questionable, weird products and decisions is an act of desperation, an attempt to keep the growth-at-all-costs fire from going out. There are a finite number of people in the world, an even-more-finite number who will be online at any given time, and, in turn, even fewer who might actually use a particular service. Previously, these companies were able to anticipate and meet customers' needs on both a consumer and enterprise level, in part because they understood their customers and knew their success derived from them.

Despite that, I believe that, behind the scenes, many of these companies have been struggling for years to find the next big thing, knowing that while there might be needs to be met (those inconvenient customers mentioned earlier), those needs might not be things that will create perpetual double-digit revenue growth, or convince Wall Street analysts of the growth potential of their stock. Though I can't say for certain, I'd also argue that the ascent of multiple management-consultant types in the tech industry — people like Sheryl Sandberg, Adam Mosseri, Sundar Pichai and Sam Altman — is a testament to how elusive the next big thing has been for some time, and that their failure to adapt or create useful products is a symptom of a larger problem: innovation taking a back seat to growth.


What makes generative AI so special is that it has the hint of utility, the scent of a product, and as a result can be sold to the markets as the next big boom, one that justifies hundreds of billions of dollars of investment and increased market capitalization. Every single company chasing the generative AI dragon is hoping that it's the next Amazon Web Services — the ubiquitous cloud product that went from a side project to a bigger profit-driver than Amazon's store, and today underpins much of the Internet — because it sort of, kind of seems like it's "the next big thing in cloud," even if it doesn't actually seem to do anything useful, let alone incredible.

On top of that, nobody really seems to be able to explain why or how generative AI is the next big thing. 

ChatGPT, Gemini, Claude and other Large Language Models can do some things that are superficially cool. They can generate images, quickly query datasets (albeit with no guarantees of accuracy), and craft poetry, but there is no enduring reason to pick one of them up every day and use it. The use cases they enable are neither exciting nor ubiquitous, nor are they, if we're honest, anything like what tech executives are trying to sell us. And the problem might not be that generative AI is useless, but that, as a piece of technology, it just isn't a hyper-growth market or an industry-changer, no matter how many hundreds of billions we put into it.

Sam Altman isn't asking Microsoft for a $100 billion supercomputer because generative AI is going to get better — he's doing it because what we have today isn't the world-changing boondoggle that the tech industry needs it to be, and his only prospect of fulfilling the lofty promises he’s made is with a tech industry equivalent of the Marshall Plan.

Yet, without generative AI, what do these companies have left? What's the next big thing? For the best part of 15 years, we've assumed that the tech industry would always have something up its sleeve, but what's become painfully apparent is that the tech industry might have run out of big, sexy things to sell us, and that the "disruption" tech has become so well-known for was predicated on there being markets to disrupt, and ideas it could fund to do so. A paper published in Nature last year posited that the pace of disruptive research is slowing, and I believe the same might be happening in tech, except we've been conflating "innovation" with "finding new markets to add software and hardware to" for twenty years.

The net result of this creative stagnancy is the Rot Economy and the Rot-Com bubble — a tech industry laser-focused on finding markets to disrupt rather than needs to meet, where the biggest venture capital investments go into companies that can sell for massive multiples rather than into stable, sustainable businesses. There is no reason that Google, or Meta, or Amazon couldn't build businesses with flat, sustainable growth and respectable profitability. They just choose not to, in part because the markets would punish it, and in part because their DNA has been poisoned by a rot that demands there must always be more.

I also think that the tech industry — both those funding startups and running big tech firms — has become dominated by people disconnected from both building tech products and the habits of real people. 

The Rabbit R1 and Humane Pin both raised incredible sums of capital based on the vaguest of ideas — that people feel overwhelmed by their smartphones — to build products that range from extremely bad to potentially fraudulent, each one a product that nobody asked for, built on some combination of ignorance and indifference. And if I'm honest, these startups are only a symptom of the Rot-Com bubble: investors disconnected from the process of building things fund ideas they don't understand in the hopes that the people building them aren't full of shit, either lacking the necessary due diligence or not really caring what happens in the end.

And who can blame them? When the entire industry is engineered to capture the "next big thing," which isn't defined by utility or necessity but by "how big can the market grow," why wouldn't you try and capture the venture demand? Why wouldn't you roll the dice, even if you don't really think it's possible?

If we're honest, what really separates a man like Rabbit CEO Jesse Lyu from OpenAI CEO Sam Altman? They both went through Y Combinator, and both have sold products that don't do what they said they would, based on what those products might do in the future. The sole difference, as far as I can see, is that Altman was just a little bit better at playing the game.

No, really. While Jesse Lyu absolutely misled customers about the capabilities of the Rabbit R1 (and the company's previous operations), Sam Altman shamelessly overhypes what generative AI can do almost every day. Senior members of Altman's first startup Loopt twice tried to get him fired for "what they described as deceptive and chaotic behavior," he was fired from Y Combinator for "an absenteeism that rankled his peers and some of the startups he was supposed to nurture," and former OpenAI board member Helen Toner told the TED AI Show that Altman was so deceitful that the board found out about ChatGPT's launch…when they saw it posted on Twitter. Toner also added that Altman was caught "outright lying" on multiple occasions, and that two executives provided evidence of "psychological abuse" and of lying and manipulative behavior.

If anything, Sam Altman is something much worse, all while lacking much of a technical background. He’s the archetypal Rot-Com CEO — an unqualified lobbyist pretending to be a technologist and abusing anyone he needs to in the pursuit of growth.


This isn't to say that there can't be successful tech companies, or that there won't be any more innovation — just that there will be less innovation and at a smaller scale. And I don’t think we should fear this. 

The tech industry can no longer rely on the idea that every year (or couple of years) somebody will find an idea that will create 200 more startups or trillions of dollars in market capitalization. It must reconfigure both venture capital investment and public tech companies to a more sustainable, profitable and useful model where — get this — existing products are made better and more profitable with the understanding that we're approaching Max Internet, and that products cannot be built with the assumption that more and more users will always exist.

I am sure there are some that will say I'm wrong, that tech's hyper-growth era isn't over, and that generative AI will usher in a glorious new future of powerful assistants. But seriously, what does that look like? What happens next? Where does generative AI go from here? Do they fix the problems? Have they made these things cheaper, or more efficient, or found a way to eliminate hallucinations? How will OpenAI train GPT-5? Has it found five times the amount of training data that was used to train GPT-4 yet? Has Google, or Meta, or Microsoft found a way to make generative AI profitable yet, and if not, how will they do so?

And crucially, when does generative AI suddenly prove itself useful? Where is the big moment that changes everything?

The answer is simple: it isn't coming. Generative AI is not going to change the world, and while artificial intelligence itself might, it isn't going to do so in the hands of Sam Altman and OpenAI, or Anthropic. When the bubble bursts, I imagine these firms will be absorbed by the massive tech companies that have put billions of dollars into them in exactly the same way that Microsoft absorbed Inflection, quietly hiding their shameful failures and repurposing the massive compute investments to bolster their existing cloud infrastructure.

In the end, this is far better for the tech industry. To have a better world — one with more interesting things, and where society doesn't constantly feel at odds with technology — tech companies must shed their growth addiction, and I believe they'll be forced to do so whether they like it or not. There will likely still be many, many $100 million or billion-dollar companies, just far fewer $100 billion or trillion-dollar ones.

And I believe it’s going to happen. The basic problem with an unsustainable business model is precisely that — it’s unsustainable. Eventually, a bough will break. The immediate consequences will be catastrophic, but it’ll force an overdue reckoning that changes the entire way tech companies are perceived, how we think of their trajectories, and how investment is deployed. 

The Rot-Com bubble will inevitably pop, but until it does, billions will be plowed into hardware and software that inflate it further, in the hopes that the growth-at-all-costs party will never end. I just don't think anyone in big tech is prepared for the bubble to burst.
