Make Fun Of Them

Edward Zitron

Have you ever heard Sam Altman speak?

I’m serious, have you ever heard this man say words from his mouth? 

Here is but one of the trenchant insights from Sam Altman in his agonizing 37-minute-long podcast conversation with his brother Jack Altman from last week:

I think there will be incredible other products. There will be crazy new social experiences. There will be, like, Google Docs style AI workflows that are just way more productive. You’ll start to see, you’ll have these virtual employees, but the thing that I think will be most impactful on that five to ten year timeframe is AI will actually discover new science.

When asked why he believes AI will “discover new science,” Altman says that “I think we’ve cracked reasoning in the models,” adding that “we’ve a long way to go,” that he “think[s] we know what to do,” that OpenAI’s o3 model “is already pretty smart,” and that he’s heard people say “wow, this is like a good PhD.”

That’s the entire answer! It’s complete nonsense! Sam Altman, the CEO of OpenAI, a company allegedly worth $300 billion to venture capitalists and SoftBank, kind of sounds like a huge idiot!

“But Ed!” you cry. “You can’t just call Sam Altman an idiot! He isn’t stupid! He runs a big company, and he’s super successful!”

My counter to that is, first, yes I can, I’m doing it right now. Second, if Altman didn’t want to be called stupid, he wouldn’t say stupid shit with a straight face to a massive global audience.

My favourite part of the interview is near the beginning:

Jack Altman: So reasoning will lead to science going faster or just new stuff or both?

Sam Altman: I mean, you already hear scientists who say they’re faster with AI, like we don’t have AI maybe autonomously doing science, but if a human scientist is three times as productive using o3, that’s still a pretty big deal.

Jack Altman: Yeah

Sam Altman: And then as that keeps going and the AI can autonomously do some science, figure out novel physics-

Jack Altman: Is it all that happening as a copilot right now? [Editor’s note: this is exactly what Jack Altman says]

Sam Altman: Yeah there’s definitely not… You definitely can’t go say like, “Hey ChatGPT, figure out new physics” and expect that to work. So I think it is currently copilot-like, but I’ve heard like, anecdotal reports from biologists where it’s like, “wow, it really did figure out an idea. I had to develop it, but it made a fundamental leap.” 

This is a nonsensical conversation, and both of them sound very, very stupid. 

“So, is this going to make new science or make science faster?” “Yeah, I hear scientists are using AI to go faster [CITATION NEEDED], but if a human scientist goes three times faster [CITATION NEEDED] using my model that would be good. Also I heard from a guy that he heard a guy who did biology who said ‘this helped.’”

Phenomenal! Give this guy $40 billion or more every year until he creates a superintelligence, that’ll fucking work.

Here are some other incredible quotes from the genius mind of Sam Altman:

  • “You hear these stories of people who use AI to do market research and figure out new products and then email some manufacturer and get some dumb thing made and sell it on Amazon and run ads…there are people that have actually figured out at small scale in the most boring ways possible how to put a dollar into AI and get the AI to run a toy business, but it’s actually working. So that’ll climb the gradient.” 
    • You may wonder if “the gradient” is mentioned at some point elsewhere. It is not.
  •  “So every year before the last maybe up until last year I would’ve said, ‘hey I think this is going to go really far,’ but it still seems like there’s a lot we’ve got to figure out.” 
  • “If something goes wrong, I would say somehow it’s that we build legitimate super intelligence and it doesn’t make the world much better, it doesn’t change things as much as it sounds like it should.”
  • “So yeah, I think the relativistic point is really important, but to us, our jobs feel incredibly important and stressful and satisfying. And if we're all just making better entertainment for each other in the future, maybe that's kind of what at least one of us is doing right now.” 

This is gobbledygook, nonsense, bullshit peddled by a guy who has only the most tangential understanding of the technology his company is building. 

Every single interview with Sam Altman is like this, every single one, ever since he became a prominent tech investor and founder. Without fail. And the sad part is that Altman isn’t alone in this.

Sundar Pichai, when asked one of Nilay Patel’s patented 100-word-plus questions about Jony Ive and Sam Altman’s new (and likely heavily delayed) hardware startup, responded:

I think AI is going to be bigger than the internet. There are going to be companies, products, and categories created that we aren’t aware of today. I think the future looks exciting. I think there’s a lot of opportunity to innovate around hardware form factors at this moment with this platform shift. I’m looking forward to seeing what they do. We are going to be doing a lot as well. I think it’s an exciting time to be a consumer, it’s an exciting time to be a developer. I’m looking forward to it.

The fuck are you on about, Sundar? Your answer to a question about whether you anticipate more competition is to say “yeah I think people are gonna make shit we haven’t come up with and uhh, hardware, can’t wait!”

I think Pichai is likely a little smarter than Altman, in the same way that Satya Nadella is a little smarter than Pichai, and in the same way that a golden retriever is smarter than a chihuahua. That said, none of these men are superintelligences, nor, when pressed, do they ever seem to have any actual answers.

Let’s see how Satya Nadella of Microsoft answered when asked how exactly Microsoft is going to get to (and I paraphrase Dwarkesh Patel’s mealy-mouthed question) $130 billion in AI revenue “through AGI”:

The way I come at it, Dwarkesh, it's a great question because at some level, if you're going to have this explosion, abundance, whatever, commodity of intelligence available, the first thing we have to observe is GDP growth.

Before I get to what Microsoft's revenue will look like, there's only one governor in all of this. This is where we get a little bit ahead of ourselves with all this AGI hype. Remember the developed world, which is what? 2% growth and if you adjust for inflation it’s zero?

So in 2025, as we sit here, I'm not an economist, at least I look at it and say we have a real growth challenge. So, the first thing that we all have to do is, when we say this is like the Industrial Revolution, let's have that Industrial Revolution type of growth.

That means to me, 10%, 7%, developed world, inflation-adjusted, growing at 5%. That's the real marker. It can't just be supply-side.

In fact that’s the thing, a lot of people are writing about it, and I'm glad they are, which is the big winners here are not going to be tech companies. The winners are going to be the broader industry that uses this commodity that, by the way, is abundant. Suddenly productivity goes up and the economy is growing at a faster rate. When that happens, we'll be fine as an industry.

But that's to me the moment. Us self-claiming some AGI milestone, that's just nonsensical benchmark hacking to me. The real benchmark is: the world growing at 10%.

This quote has been used as a means of suggesting that Nadella is saying that “generative AI is generating basically no value,” which, while somewhat true, obfuscates its true meaning: Satya Nadella isn’t saying a fucking thing. 

The question was “how do you get Microsoft to $130 billion in revenue,” and Satya Nadella’s answer was to say “uhhh, abundance, uhhh, explosion, uhhhhh, GDP! Growth! Industrial revolution! Inflation-adjusted! Percentages! The winners will be the people who do stuff, and then productivity will go up!”

This is fucking nonsense, and it’s time to stop idolizing these speciously-informed goobers. Kinder souls or Zitron-haters may read this and say “ahh, actually, what Nadella was saying was…” Stop. I want to stop you there and suggest that perhaps a smart person should be able to speak clearly enough that their intent is obvious.

It’s tempting to believe that there is some sort of intellectual barrier between you and the powerful — that the confusing and obtuse way that they speak is the sound of genius, rather than somebody who has learned a lot of smart-sounding words without ever learning what they mean.

“But Ed, they’re trained to do this!”

As someone who has media-trained hundreds of people, I can tell you there is only so much you can do to steer someone’s language. You cannot say to Sundar Pichai “hey man, can you sound more confusing?” You can, however, tell them what not to talk about and hope for the best. Sure, you can make them practice, and sure, you can give them feedback, but people past a certain stage of power or popularity are going to talk however they want, and if they’re a big stupid idiot pretending to be smart, they’re going to sound exactly like this.

Why? Because nobody in the media ever asks them to explain themselves. When you’ve spent your entire career being asked friendly-or-friendly-adjacent questions and never having someone say “wait, what does that mean?” you will continue to mutate into a pseudo-communicator that spits out information-adjacent bullshit.

I am, to be clear, being very specific about that question. Powerful CEOs and founders never, ever get asked to explain what they’re saying, even when what they’re saying barely resembles an actual answer. 

Pichai, Altman and Nadella have always given this kind of empty-brained intellectual slop in response to questions because the media coddles them. These people are product managers and/or management consultants — and in Altman’s case, a savvy negotiator and manipulator known for “an absenteeism that rankled his peers and some of the startups he was supposed to nurture” as an investor at Y Combinator, according to the Washington Post.

I’ll try and explain this with a little aside.

Let’s imagine a hypothetical conversation with a friend whose dog has died:

You: Oh no, what happened?

Them: Well, my dog had a tragic yet ultimately final distinction between their ideal and non-ideal state, due to the involvement of a kind of automatic mechanical device, and when that happened, we realized we’d have to move on from the current paradigm of dog ownership and into a new era, which we both feel a great deal of emotion about and see the opportunities within.

You would probably be a little confused and ask them to explain what they meant.

You: Wait, what do you mean automatic mechanical what? Huh?

Them: Yeah, exactly, and that was part of the challenge. You see, like, the various interactions we have in our day are challenging, and we see a lot of opportunities in assailing those challenges, but part of the road to getting around them is facing them head on, which is ultimately what happened there. And while we were involved, we didn’t want to be, and so we had to make some dramatic changes. 

You still, at this point, do not really know what happened. Did a car hit the dog? Did they run over their dog?

In this scenario, would you nod and say “wow man, that sucks, I’m sorry,” or would you ask them to explain what they’re saying? Would you, perhaps, ask what it is they mean?

By “coddle,” I mean that the media lets these people force a combination of detective work and amnesia onto their audience, where the reader or listener must simultaneously try to divine the meaning of an answer while not thinking too hard about the question the interviewer actually asked.

Look at most modern business interviews. They involve a journalist asking a question, somebody giving an answer, and the journalist saying “okay!” and moving on to the next question, occasionally asking “but what about this?” when the appropriate response to many of these answers is to ask the subject to simplify them until their meaning is clear.

A common response to all of this is to say that “interviewers can’t be antagonistic,” and I don’t think a lot of people understand what that means. It isn’t “antagonistic” to ask somebody to clearly articulate what they’re saying, nor is it “antagonistic” to say that you don’t understand, or that they didn’t answer the question you asked. If this is “antagonistic” to you, you are, intellectually speaking, a giant fucking coward, because what you’re suggesting is that an interviewer cannot ask somebody to explain themselves, which is what an interview is.

And I imagine nobody really wants to do this, because if you actually put these people on the spot, you’d realize the dark truth that I spoke of a few weeks ago: that the reason the powerful sound like idiots is because, well, they’re idiots. They sound like Business Idiots and create products to sell to Business Idiots, because Business Idiots run most companies and buy solutions based on what the last Business Idiot told them. 

To quote the excellent Nik Suresh:

While I like Snowflake as a piece of software, it is probably not a high priority to move to it at most large companies for various reasons I won't get into here. Fine, I'll get into one of them. It's just a really good data warehouse, you absolute maniacs, it isn't the cure for cancer, why the fuck is it valued at $53B?

Because everyone is buying it, and this has to be driven by non-technical leadership because there aren't enough technical leaders to drive that sort of valuation. Why would non-technicians be so focused on a database of all things, a concept so dull that it is Effective Communication 101 to try and avoid using the term in front of a lay audience? It's because if you buy Snowflake then you're allowed to get onto stages at large venues and talk about how revolutionary Snowflake was for your business, which on the surface looks like a brag about Snowflake, but is actually a brag about the great decisions you've been making and the wealth you can deploy if someone becomes your friend. And the audience is full of people that are now thinking "If I buy Snowflake, I can be on that stage, and everyone will finally recognize my brilliance".

I know some of you might read this and say “these people can’t be stupid! These people run companies! They make huge deals! They read all these books!” and my answer is that some of the stupidest people I’ve ever met have read more books than you or I will read in a lifetime. While they might be smart when it comes to corporate chess moves or saying “this product category should do this,” none of these men — not Altman, Pichai or Nadella — actually has a hand in the design or creation of any of the things their companies make, and they never, ever have. 

Regardless, I have a larger point: it’s time to start mocking these people and tearing down their legends as geniuses of industry. They are not better than us, nor are they responsible for anything that their companies build other than the share price (which is a meaningless figure) and the accumulation of power and resources. 

These men are neither smart nor intellectually superior, and it’s time we started treating them accordingly.


These people are powerful because they have names that are protected by the press. They are powerful because it is seen as unseemly to mock them because they are rich and “running a company,” a kind of corporate fealty that I find deeply unbecoming of an adult. 

We are, at most, customers. We do not “owe them” anything. We are long past the point when any of the people running these companies actually invented anything they sell. If anything, they owe us something, because they are selling us a product, even if said product is free and monetised by advertising.

While reporters — like anyone — should have some degree of professionalism when interviewing or covering subjects, there is no reason to treat these people as special, even if they have managed to raise a lot of money or their product is popular, because if that were the standard, we’d have far more coverage of defense contractor Lockheed Martin. It made $1.71 billion in profit last quarter, and hasn’t had a single quarter under a billion dollars in the last year.

I’m being a little glib, but the logic behind covering OpenAI is, at this point, “it makes a lot of money and its product is popular,” which is also a fitting description of Lockheed Martin. The difference is that OpenAI has a consumer product that loses billions of dollars, and Lockheed Martin has products that make billions of dollars by removing consumers from the Earth. Both of them are environmentally destructive.

Covering OpenAI sure doesn’t seem to be about the tech, because if you looked at the tech you’d have to understand the tech, and you’d see that the user numbers aren’t there outside of the 500 million people using ChatGPT, of whom very few actually pay for the product, and that the term “user” encompasses everything from the most occasional users who log in out of curiosity to people who actually use it as part of their daily lives.

If covering OpenAI were about the tech, you’d read about how the tech itself doesn’t seem to have a ton of mass-market use cases, and how the use cases it does have aren’t really the kind of things you’d pay for. If it did, there’d be articles that definitively discussed them, versus articles in the New York Times about “everybody using AI” that boil down to “I use ChatGPT as search now” and “I heard about a guy who asked it to teach him about modern art.”

Yet men like Dario Amodei and Sam Altman continue to be elevated because they are “building the future,” even if they don’t seem to have built it yet, or have the ability to clearly articulate what that future actually looks like. 

Anthropic has now put out multiple stories suggesting that its generative AI will “blackmail” people as a means of stopping a user from turning off the system, something that is so obviously the result of the company prompting its models to do exactly that. Every member of the media covering this uncritically should feel ashamed of themselves.

Sadly, this is all a result of the halo effect of being a Guy Who Raised Money or Guy Who Runs Big Company. We must, as human beings, assume that these people are smart, and that they’d never mislead us, because if we accept that they aren’t smart and that they willingly mislead us, we’d have to accept that the powerful are, well, bad and possibly unremarkable. 

And if they’re untrustworthy people that don’t seem that smart, we have to accept that the world is deeply unfair, and caters to people like them far more than it caters to people like us.

We do not owe Satya Nadella any respect because he’s the CEO of Microsoft. If anything, we should show him outright scorn for the state of Microsoft products. Microsoft Teams is an insulting mess that only sometimes works, leaving workers spending 57% of their time in Teams chats, Teams meetings, or email, according to a Microsoft study.

MSN.com is an abomination read by hundreds of millions of people a month, bloated with intrusive advertisements, attempts to trick you into downloading an app, and quasi-content that may or may not be AI generated. There are few products on the modern internet that show more contempt for the user, other than, of course, Skype, a product that Microsoft let languish for more than a decade, one so thoroughly engorged with spam that leaving it unattended for more than a month left you with a hundred unread messages from Eastern European romance scammers. Microsoft finally killed it in May.

Products like Word and Excel don’t need improving, but that doesn’t stop Microsoft from trying, bloating them with odd user interface choices and forcing users to fight off popups pushing an AI-powered Copilot that most of them hate.

Why, exactly, are we meant to show these people respect? Because they run a company that provides a continually-disintegrating service? Because that service has such a powerful monopoly that it’s difficult to leave it if you’re interacting with other people or businesses? 

I think it’s because we live in Hell. The modern tech ecosystem is utterly vile. Every single day our tech breaks in new and inventive ways: our iPhones resetting at random, apps not accepting button presses, our Bluetooth disconnecting, our word processors harassing us to “try and use AI” while no longer offering us suggestions for typos, and our useful products replaced with useless shit, like how Google’s previously functional assistants were replaced with generative AI that makes them tangibly worse, all so that Google can claim it has 350 million monthly active Gemini users.

Yet the tech and business media acts as if everything is fine.

It isn’t fine! It’s all really fucked! You can call me a cynic or a pessimist or every name under the sun, but the stakes have never been higher, and the damage never more widespread. Everything feels broken, and covering these companies as if it isn’t is insulting to your readers and your own intelligence.

Look at the state of your computer or phone and tell me anything feels congruent or intentional rather than an endless battle of incentives. Look at the notifications on your phone and count the number of them that have absolutely nothing to do with information you actively need. As we speak, I have a notification from Adobe Lightroom, an app I use occasionally to edit photos, that tells me “Elevate any scene - now enhance people, sky, water and more with Quick Actions.” Zerocam, an app that brands itself “the first anti-AI camera app” where you “capture moments, not megapixels,” gave me a notification asking if I took a photo today. Amazon notified me that there is a deal picked just for me — a battery pack that I bought several months ago.

Every single company that sends notifications like these should be mocked, but we have accepted such vile conditions as the norm. Apple should be tarred and feathered for allowing companies to send spam notifications, and yet it isn’t, largely because Apple is less vile and less exploitative than Microsoft, Google, or Amazon.

If you are reading this as a member of the tech press, seriously, please look at your daily experience with tech. Count the number of times that your day or a task is interrupted by poorly-designed software or hardware (such as the many, many times Zoom or Teams has a problem with Bluetooth, or a website just doesn’t load, or you type something into your browser and it just doesn’t do anything), or when the software you use either actively impedes you (hey, did you want to use AI? No? You sure?) or refuses to work in a logical way (see: Google Drive). There are tens of thousands of stories like this every day, and if you talked to people, you’d see how widespread it is…or maybe, I dunno, see that it’s happening to you too?

There are people responsible, and the tech media writes about them every day. I realize it seems weird to constantly write that a company is releasing broken, convoluted software, but hey, if we can write 300,000 stories about how crime-ridden New York City is, why can’t we write three of them about how fucked Microsoft Office or Google Search have become?

And why can’t we talk to the people in power about it? Is it because the questions are too hard to ask? Is it because it feels icky to interrupt Satya Nadella as he waffles on about using Copilot all the time by saying “hey man, Microsoft Teams is broken, tons of people feel this way, why?” or “why have you let MSN.com turn into a hub of AI slop and outright disinformation?”

Oh no! You won’t get your access! Wahh!

Who cares? Write a story about how Microsoft has become so unbelievably profitable as its products get worse, and talk about how weird and bad that is for the world! Ask Nadella those tough questions, or publish that Microsoft’s PR wouldn’t let you! 

These people are neither articulate nor wise, and whatever “intelligence” they may claim to have doesn’t seem to manifest in good products or intelligent statements. So why treat them like they’re smart? Why show them deference or pleasantries? These people have crapped up our digital lives at scale, and they deserve contempt, or at the very least a stern fucking reception.

I realize I’m repeating points I’ve made again and again, but why is there such a halo around these fucking bozos? I’m serious! Why are we so protective of these guys? We’re more than happy to criticise celebrities, musicians, professional sports players, and politicians (fucking barely), but the business class is somehow protected outside of the occasional willingness to say that Elon Musk might have sort of done something wrong.

I’m not denying there are critics. We have Molly White, Edward Ongweso Jr, Brian Merchant and — at a major outlet like CNN, no less! — one of the greatest living business writers in Allison Morrow. I believe that tech criticism would be a barely-explored and hugely profitable industry if we treated tech journalism less like the society pages and more like a force to hold the most powerful people in the world accountable as they continually harm billions of people in subtle ways. People are angry, and they aren’t stupid, and they want to see that anger reflected in the stories they read — and the meek deference we show to dumb fucking tech leaders is the opposite of that.

As I’ve said before: we live in an era of digital tinnitus, nagged by notifications, warring with software ostensibly built for us that acts as if we’re the enemy. And if we’re the enemy, we should treat those building this software as the enemy in return. We are their customers, and they have failed us.

The entire approach to business owners, especially in tech, is ridiculous. These people are selling us a product and the product fucking stinks! Put aside however you feel about generative AI for a second and face one very simple point: it doesn’t do enough, it’s really not cool at all, and we’re being forced to use it. 

I realize that some of you may want them to succeed, or want to be the person who tells everybody that they did so. I get that there are rewards for you — promotions, new positions, TV appearances repeating exactly what the powerful did and why they did it, or a plush role as that company’s head of communications — but I am telling you, your readers and viewers are waking up to it, and they feel like you have contempt for them and contempt for the truth. 

It’s easy — and common! — to try and dismiss my work as some sort of hater’s screed, a “cynical” approach to a tech industry that’s trying “brave new things” or whatever. 

In my opinion, there’s nothing more cynical than watching billions of people get shipped increasingly-shitty and expensive solutions and then get defensive of the people shipping them, and hostile to the people who are complaining that the products they use suck. 

I am angry at these companies because they have, at scale, torn down a tech industry that allowed me to be who I am today, and their intentional and disgraceful moves fill me full of disgust. I have watched the tech media move away from covering “technology” and more toward covering the people behind it, to the point that the actual outputs — the software and hardware we use every day — have taken a backseat to stories about whether Elon Musk does or doesn’t use a computer, which is meaningless, empty gossip journalism built to be shared by peers and nothing else.

And please, please do not talk about optimism. If you are blindly saying that everything OpenAI does is cool and awesome and interesting, you aren’t being optimistic — you’re telling other people to be optimistic about a company’s success. It isn’t “optimistic” to believe that a company is going to build powerful AI despite it failing to do so. It’s propaganda, and yes, this is also the case if you simply don’t do the research to form a real opinion.

I am not a pessimist because I criticize these companies, and framing me as one is cowardly and ignorant. If you are so weak-willed and speciously-informed that you can’t see somebody criticise a company without outright dismissing them as “a hater” or “pessimist,” you are an insult to journalism or analysis, and you know it in your wretched little heart. My heart sings with a firm belief in the things I think, founded on rigorous structures of knowledge that I’ve gained from reading things and talking to people, because something in me is incapable of being swayed by something just because everybody else is. 

You are assuming these people are right because it is inconvenient and uncomfortable to accept that they may not be, because doing so requires you to reckon with a market-wide hysteria founded on desperation and a lack of hyper-growth markets left in the tech industry.

Worse still, in engaging with faux-optimism, you are failing to protect your readers and the general public.  

And if that’s what you want to do, ask yourself why! Why do you want these companies to win? What is it you want them to win? Do you want them to be rich? Do you want to be the person that told people they would be first? What is the world you want, and what does it look like, and how does doing your job in this way work toward creating that world?

This isn’t optimism — it’s horse-trading, or strategic alignment behind powerful entities. It is choosing a side, because your side isn’t with the reader or the truth. If it were — even if you believed generative AI was powerful and that readers simply didn’t understand it — your duty would be to educate the reader in a clear and obvious way, and if you couldn’t find a way to do so, to acknowledge that and explain why.

True optimism requires you to have a deep, meaningful understanding of things so that you can engage in real hope — a magical feeling, one that can buoy you in the most challenging times.

What many claim is “optimism” is actually blind faith, the likes of which you’ll see at a roulette table. Or, of course, knowingly peddling propaganda.


Let’s even take a different tack: say you actually want these companies to “build powerful AI,” and believe they’re smart enough to do so. Say that, somehow, looking at their decaying finances, the lack of revenue, the lack of growth, and the remarkable lack of use cases, you still come out of it saying “sure, I think they’re going to do this!”

How? Why haven’t they done it yet? Why, three years in, are we still unable to describe what ChatGPT actually does, and why we need it? Take away how much money OpenAI makes for a second (and, indeed, how much it loses). Does this product actually really inspire anything in you? What is it that’s magical about this? 

And, on a business level, what is it I’m meant to be impressed by, exactly? OpenAI has — allegedly — hit “$10 billion in annualized revenue” (essentially the biggest month it can find, multiplied by 12), which is…not that much, really, considering it’s the most prominent company in the software world, with the biggest brand, and with the attention of the entirety of the world’s media. 

It has, allegedly, 500 million weekly active users — and, by the last count, only 15.5 million paying subscribers, a conversion rate of roughly 3%, which is absolutely putrid even before you realize that the honest denominator would be monthly active users, a larger number that would make it look even worse. That’s how any real software company actually defines its metrics, by the fucking way.

Why is this impressive? Because it grew fast? It literally had more PR and more marketing and more attention and more opportunities to sell to more people than any company has ever had in the history of anything. Every single industry has been told to think about AI for three years, and they’ve been told to do so because of a company called OpenAI. There isn’t a single god damn product since Google or Facebook that has had this level of media pressure, and both of those companies launched without the massive amount of media (and social media) that we have today. 

Having literally everybody talking about your product all the time for years is pretty useful! Why isn’t it making more money? 

Why are we taking any of these people seriously? Mark Zuckerberg paid $14.3 billion for Scale AI, an AI data company, as a means of hiring its CEO Alexandr Wang to run his “superintelligence” team. He has been offering random OpenAI employees $100 million to join Meta, thought about buying both AI search company Perplexity and generative video company Runway, and even tried to buy OpenAI co-founder Ilya Sutskever’s pre-product, “$32bn valuation” non-company Safe Superintelligence, settling instead for hiring its CEO Daniel Gross and buying his venture fund for some fucking reason.

When you put aside the big numbers, these are the actions of a desperate dimwit with a failing product trying to buy his way to making generative AI into a “superintelligence,” something that Meta’s own chief AI scientist Yann LeCun says isn’t going to work.

By assuming that there is some sort of grand strategy behind these moves beyond “if we get enough smart people together something will happen,” you help boost the powerful’s messaging and buoy their stock valuations. You are not educating anybody by humouring these goofballs. In fact, the right way to approach this would be to ask why Meta, a multi-trillion dollar market cap company with a near-monopoly over all social media, is spending billions of dollars in what appears to be a totally irresponsible way. Instead, people are suggesting this is Mark Zuckerberg’s genius at work.

Anyway, putting that aside, what exactly is the impressive part of generative AI again? The fucking code? Enough about the code, I’m tired of hearing about the code, I swear to god you people think that being a software engineer is only coding and that it’s fine if you ship “mediocre code,” as if bad code can’t bring down entire organizations. What do you think a software engineer does? Is all they do code? If you think the answer is yes, you are wrong!

Human beings may make mistakes in writing code, but they at least know what a mistake looks like, which a generative AI does not, because a generative AI doesn’t know what anything is, or anything at all, because it is a probabilistic model. 

Congratulations! You made another way in which software engineers can automate parts of their jobs — stop being so fucking excited about the idea that people are going to lose their livelihoods! It’s nasty, and founded on absolutely nothing other than your adulation for the powerful!

These models are dangerous and chaotic, built with little intention or regard for the future, just like the rest of big tech’s products. ChatGPT would’ve been a much smaller deal if Google had any interest in turning Google Search into a product that truly answered a query (as opposed to generating more of them to show more impressions to advertisers) — a nuanced search engine that took a user’s query and spat out a series of websites that might help answer said question rather than just summarising a few of them for an answer. 

And if you ever need proof that Google just doesn’t know how to fucking innovate anymore, look at AI Overviews, a product that misunderstands both search and why people use ChatGPT as a search replacement. While OpenAI may “summarise” stuff to give an answer, it at the very least gives something approximating a true answer, rather than a summary that feels like an absentee parent trying to get rid of you by throwing you $20 in the hopes you’ll leave them alone. If Google Search had truly evolved, ChatGPT wouldn’t really matter, because the idea of a machine that can theoretically answer a question is kind of why people used fucking Google in the fucking first place.

Again, why are we not describing this company as the business equivalent of a banana republic? It’s actively making its shit worse to juice growth, and it’s really obvious how badly it sucks. 

Why doesn’t the state of Google dominate tech news, just like how random ketamine-fuelled tweets from Elon Musk do? Why aren’t we, collectively, repulsed by Google as a company? Why aren’t we, collectively, repulsed by OpenAI? 

No matter how big ChatGPT is, the fact that there’s a product out there with hundreds of millions of users that constantly gets answers wrong is a genuinely worrying thing for society, and that’s before you get to the environmental damage, the fact it trained its models on millions of people’s art and writing, and oh, I dunno, the fact it plans to lose over a hundred billion dollars before becoming profitable?

Why are we not more horrified? Why are we not more forlorn that this is where hundreds of billions of dollars are being funneled? The most prominent company in the tech industry is an unstable monolith with a vague product that can only make $10 billion a year (revenue, not profit) as the very fabric of its existence is shoved down the throat of every executive in the world at once. Also, if it’s not fed $20 billion to $40 billion a year, it will die.

Give me a fucking break.

I don’t know, I sound pretty ornery, I get accused of being a hater or missing the grand mystery of this bullshit every few minutes by somebody with an AI avatar of a guy who looks like he’s banned from multiple branches of Best Buy, I understand there’s things that people do with Large Language Models, I am aware, but none of it matters because the way they’re being discussed is like we’re two steps from digitally replacing hundreds of millions of people.

The reality is far simpler: we have an industry that has spent nearly half a trillion dollars between its capital expenditures and venture capital funding to create another industry with the combined revenue of the fucking smartwatch industry. What I’m writing isn’t inflammatory — in fact, it’s far more deeply rooted in reality than those claiming that OpenAI is building the future.

Let’s do some fucking mathematics!

Projected Big Tech capital expenditures in 2025:

  • Microsoft: $80 billion
  • Amazon: ~$100 billion
  • Alphabet: $75 billion
  • Meta: $64 billion to $72 billion

That’s $327 billion this year, with total AI revenue of…what, $18 billion? And that’s not profit! And that’s if we include OpenAI’s spend on Azure. Even if every single one of these companies was making $18 billion in revenue a year from this it wouldn’t be great, but it’s more than likely that these chunderfucks can’t even pull together the projected revenue ($32 billion) of the global smartwatch industry! What a joke!

“Wuhh, but what about OpenAI?” 

What about OpenAI? I’ve written about this so much. So what if OpenAI makes $12.7 billion this year but loses $14 billion? What does that mean to you, exactly? What’re you going to say? The cost of inference is coming down? No, the cost that people are being charged is coming down. We have no firm data on the actual costs, because the companies don’t want to talk about them, and yes, OpenAI will absolutely lower prices to compete with other companies. The Information reported just last week that it was doing exactly that to compete with Microsoft!

Hey, quick question — wasn’t SoftBank meant to spend $3 billion annually on OpenAI’s software? Did that happen?  

Anyway, even if we add OpenAI’s revenue to the pot, we are at $30.7 billion. If we add the supposed $1 billion in revenue from training data startup Surge, $300 million in “annualized revenue” from Turing, optimistically assume that Perplexity will make $100 million in 2025 (up from $34 million in 2024, a year in which it burned $65 million), and assume that Anysphere (which makes Cursor) keeps its $200 million run rate through 2025, we are at…$32.3 billion.

But I'm not being fair, am I? I didn’t include many of the names from The Information’s generative AI database. Prepare yourself, this is gonna be annoying!

So let's add some more. We’ve got $3 billion from Anthropic, $870 million from Scale (now part of Meta), another alleged $300 million for Anysphere (The Information claims $500 million in ARR), we consider Neo4j’s “>$200 million ARR” to mean “$200 million,” Midjourney’s “>$200 million ARR” to mean $200m, Ironclad’s “>$150 million ARR” to mean $150 million ARR, Glean’s $103 million ARR, Together AI’s $100 million ARR, Moveworks’ $100 million ARR, Abridge’s $100 million ARR, Synthesia’s $100 million ARR, WEKA’s “>$100 million ARR” to mean $100m ARR, Windsurf’s $100m ARR, Runway’s $84 million ARR, Elevenlabs’ “>$100m ARR” to mean $100m ARR, Cohere’s $70m ARR, Jasper’s “>$60m ARR” to mean $60m, Harvey’s $50m ARR, Ada’s “>$50m ARR” to mean $50m, Photoroom’s $50m ARR…and then assumed the combined ARR of the remainders are somewhere in the region of a very generous $200m, we get…

Less than $39 billion in total revenue for the entire generative AI industry. Jesus fucking Christ!
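
If you want to check my math, here’s the whole tally in one place, as a quick sketch in Python, using the figures exactly as listed above (every “>$X million ARR” rounded down to $X million, and my generous guesses labeled as such):

```python
# A back-of-the-envelope tally of every generative AI revenue figure cited
# above, in billions of dollars. These are the numbers as reported or
# estimated in this piece, not audited financials.
revenue_bn = {
    "Big Tech AI revenue (incl. OpenAI's Azure spend)": 18.0,
    "OpenAI": 12.7,
    "Surge": 1.0,
    "Turing": 0.3,
    "Perplexity (optimistic 2025 estimate)": 0.1,
    "Anysphere (run rate)": 0.2,
    "Anthropic": 3.0,
    "Scale": 0.87,
    "Anysphere (additional, per The Information)": 0.3,
    "Neo4j": 0.2,
    "Midjourney": 0.2,
    "Ironclad": 0.15,
    "Glean": 0.103,
    "Together AI": 0.1,
    "Moveworks": 0.1,
    "Abridge": 0.1,
    "Synthesia": 0.1,
    "WEKA": 0.1,
    "Windsurf": 0.1,
    "Runway": 0.084,
    "ElevenLabs": 0.1,
    "Cohere": 0.07,
    "Jasper": 0.06,
    "Harvey": 0.05,
    "Ada": 0.05,
    "Photoroom": 0.05,
    "Everyone else (a very generous guess)": 0.2,
}
total = sum(revenue_bn.values())
print(f"Entire generative AI industry: ${total:.1f} billion")  # ~$38.4 billion
```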

According to The Information, generative AI companies raised more than $18.8 billion in the first quarter of 2025, after raising $21 billion in Q4 2024 and $4 billion in Q3 2024, for a grand total of $43.8 billion, or $370.8 billion of combined investment and capital expenditures for an industry that, despite being the single most talked-about thing on the planet, can barely create a tenth of the dollars it requires to make it work.
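
And the other side of the ledger, using the same napkin:

```python
# The money going into generative AI, in billions of dollars, using The
# Information's funding figures and the capital expenditures listed above.
vc_funding_bn = 18.8 + 21.0 + 4.0  # Q1 2025 + Q4 2024 + Q3 2024 = $43.8 billion
capex_bn = 327.0                   # projected 2025 Big Tech capital expenditures
money_in_bn = vc_funding_bn + capex_bn
revenue_bn = 38.4                  # the industry-wide total tallied above
print(f"Money in: ${money_in_bn:.1f} billion")  # $370.8 billion
print(f"Revenue as a share of money in: {revenue_bn / money_in_bn:.1%}")  # ~10.4%
```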

These companies are predominantly unprofitable, perpetually searching for product-market fit, and even when they find it, seem incapable of generating revenue numbers that remotely justify their valuations. 

If I’m honest, I think the truly radical position here is the one taken by most tech reporters, who would rather fall back on the lazy “well, Uber lost a lot of money!” line than think for two seconds about whether we’re all being sold a line of shit.

What we’re watching is a mountain of waste perpetuated by the least-charming failsons of our generation. Nobody should be giving Satya Nadella or Sam Altman a glossy profile — they should be asking direct, brutal questions, much like Joanna Stern just did of Apple’s Craig Federighi, who had absolutely fucking nothing to share because he has never been pushed like this. 

Put aside the money for a second and be honest: these men are pathetic, unimpressive, uninventive, and dreadfully, dreadfully boring. Anthropic’s Wario (sorry, Dario) Amodei and OpenAI’s Sam Altman have far more in common with televangelist Joel Osteen than they’ll ever have with Steve Jobs or any number of people that have actually invented things, and they got that way because we took them seriously instead of saying “wait, what do you mean?” to a single one of their wrongheaded, oafish and dim-witted hype-burps.

It’s boring! I’m terribly, horribly bored, and if you’re interested in this shit I am genuinely curious why, especially if you’re a reporter, because right now the “innovation” happening in AI is, at best, further mutations of the Software As A Service business model, providing far less value than previous innovations at a calamitous cost. 

Reasoning models don’t even reason, as proven by an Apple paper released a few weeks ago, and agents as a concept are fucked because large language models are inherently unreliable — and yes, a study out of fucking Salesforce found that agents began to break down when given multi-step tasks, such as “any task you’d want to have an agent automate.” 

So, here’s my radical suggestion: start making fun of these people.

They are not charming. They are not building anything. They have scooted along amassing billions of dollars promising the world and delivering you a hill of dirt. They deserve our derision — or, at the very least, our deep, unerring suspicion, if not for what they’ve done, then for what they’ve not done. Sam Altman is nowhere near delivering a functioning agent, let alone anything approaching intelligence, and really only has one skill: making other companies risk a bunch of money on his stupid ideas.

No, really! He convinced Oracle to buy $40 billion of NVIDIA chips to put in the Abilene, Texas “Stargate” data center, despite the fact that the Stargate organization has yet to be formed (as reported by The Information). SoftBank and Microsoft pay all of OpenAI’s bills, and the media does his marketing for him.

OpenAI is, as I said, quite literally a banana republic. It requires the media and the markets to make up why it has to exist, it requires other companies to pump it full of money and build its infrastructure, and it doesn’t even make products that matter, with Sam Altman constantly talking about all the exciting shit other people will build.

You can keep honking about how “it built the API that will power the future,” but if that’s the case, where’s the fucking future, exactly? Where is it? What am I looking at here? Where’s the economic activity? Where’s the productivity? The returns suck! The costs are too high! 

Why am I the radical person for saying this? This entire situation is absolutely god damn ridiculous, an incomparable waste even if it somehow went in the green. For the horrendous amounts of capital invested in generative AI to make sense, the industry would have to have revenue that dwarfed the smartphone and enterprise SaaS market combined, rather than less than half of that of the mobile gaming industry.

Satya Nadella, Sam Altman, Wario Amodei, Tim Cook, Andy Jassy — they deserve to be laughed at, mocked, or at the very least interrogated vigorously, because their combined might has produced no exciting or interesting products outside of, at best, what will amount to a productivity upgrade for integrated development environments and faster ways to throw out code that may or may not be reliable. These things aren’t nothing, but they’re nowhere near the something that we’re being promised.

So I put it to you, dear reader: why are we taking them seriously? What is there to take seriously other than their ability to force stuff on people?

And I’ll leave you with a question: how do they manage to keep doing this, exactly? They always seem to find new growth, every single quarter, without fail. Is it because they keep coming up with new ideas? Or is it because they keep finding new ways to get more money, a vastly different thing that involves increasing the prices of products or making them worse so that they can show you more advertisements?

My positions are not radical, and if you believe they are, your deference to the powerful disgusts me.


In any case, I want to end this with something inspirational, because I believe that things change when regular people feel stronger and more capable.

I want you to know that you are fully capable of understanding all of this. I don’t care if you “aren’t a numbers person” or “don’t get business.” I don’t have a single iota of economics training, and everything you’ve ever read me write has been something I’ve had to learn. I was a layperson right up until I learned the stuff, then I became a stuff-knower, just like you can be.

The tech industry, the finance industry, the entire mechanisms of capitalism want you to believe that everything they do is magical and complex, when it’s all far more obvious than you’d believe. You don’t have to understand the fundamentals of finance to know how venture capital works: VCs buy percentages of companies at a valuation they hope is much lower than what the company will be worth in the future. You don’t need to be technical to know that Large Language Models generate a response by drawing on billions of pieces of training data and guessing what the next bit of text should be, based on what they’ve seen before.

These people love to say “ah, but didn’t you see-” and present an anecdote, when no anecdote will ever defeat the basics of “your business doesn’t make any money, the software doesn’t do the things you claim it’s meant to, and you have no path to profitability.” They can yammer at you all they want about “lots of people using ChatGPT,” but that doesn’t change the fact that ChatGPT just isn’t that revolutionary, and their only play here is to make you feel stupid rather than actually showing you why it’s so fucking revolutionary.

This is the argument of a manipulator and a coward, and you are above such things.

You don’t really have to be a specialist in anything to pry this shit apart, which is why so much of my work is either engaging to those who learn something from it or frustrating to those that intentionally deceive others through gobbledygook hype-spiel. I will sit here and explain every fucking part of this horrid chain of freaks, and break it down into whatever pieces it takes to educate as many people as it takes to make things change.

I also must be clear that I am nobody. I started writing this newsletter with 300 subscribers and no reason other than the fact I wanted to, and four years later I have nearly 64,000 subscribers and an award-winning podcast.  I have no economics training, no special access, no deep sources, just the ability to look at things that are happening and say stuff. I taught myself everything I know about this industry, and there is nothing stopping you from doing the same.

I was convinced I was stupid until around two years ago, though if I’m honest it might have been last year. I have felt othered the majority of my life, convinced by people that I am incapable or unwelcome, and as I’ve become more articulate and confident in who I am and what I believe in, I have noticed that the only people that seek to degrade or suppress are those of weak minds and weaker wills — Business Idiots in different forms and flavors. I have learned to accept who I am — that I am not like most people — and people conflate my passion and vigor with anger or hate, when what they’re experiencing is somebody different who deeply resents what the powerful have done to the computer.  

And while I complain about the state of media, what I’ve seen in the last year is that there are many, many people like me — both readers and peers — that resent things in the same way. I conflated being different with being alone, and I couldn’t have been more wrong. For those of you that don’t wish to lick the boots of the people fucking up every tech product, the tent is large, it’s a big club, and you’re absolutely in it.

A better tech industry is one where the people writing about it hold it accountable, pushing it toward creating the experiences and connectivity that truly change the world rather than repeating and reinforcing the status quo. 

Don’t watch the mouth, watch the hands. These companies will tell you that they’re amazing as many times as they want, but you don’t need to prove that — they do. I don’t care if you tell a single human soul about my work, but if it helps you understand these people better, use it to teach other people. 

These people may seem all-powerful, but they’ve built the Rot Economy on a combination of anonymity and a pliant press, and pressure against them starts with you and the people you know understanding how their businesses work, and trusting that you can understand, because you absolutely can. Millions of people understanding how these people run their companies and how poorly they’ve built their software will stop people like Sundar Pichai from being able to quietly burn Google Search to the ground.

People like Sam Altman are gambling that you are easily confused, easily defeated and incurious, when you could be writing thousands of words on a newsletter that you never, ever edit for brevity. You can understand every fucking part of their business — the economics of OpenAI, the flimsy promises of Salesforce, the destruction of Google Search — and you can tell everybody you know about it, and suddenly it won’t be so easy for these wretched creeps to continue thriving.

I know it sounds small, and like your role is even smaller, but the reason they’ve grown so rapaciously is that the work they do is treated as some sort of black magic, when it’s really fucking stupid and boring finance stapled onto a tech industry that’s run out of ideas.

You are more than capable of understanding this entire world — including the technology, along with the finances that ultimately decide what technology gets made next.

These people have got rich and famous and escaped all blame by casting themselves as somehow above us, when if I’m honest, I’ve never looked down on somebody quite as much as I do the current gaggle of management consultant fucks that have driven Silicon Valley into the ground.
