Hello Where’s Your Ed At Subscribers! I’ve started a premium version of this newsletter with a weekly Friday column where I go over the most meaningful news and give my views, which I guess is what you’d expect. Anyway, it’s $7 a month or $70 a year, and helps support the newsletter. I will continue to do my big free column too! Thanks.
What wins the war is sincerity.
What wins the war is accountability.
And we do not have to buy into the inevitability of this movement.
Nor do we have to cover it in the way it has always been covered. Why not mix emotion and honesty with business reporting? Why not pry apart the narrative as you tell the story rather than hoping the audience works it out? Forget “hanging them with their own rope” — describe what’s happening and hold these people accountable in the way you would be held accountable at your job.
Your job is not to report “the facts” and let the readers work it out. To quote my buddy Kasey, if you're not reporting the context, you're not reporting the story. Facts without context aren’t really facts. Blandly repeating what an executive or politician says and thinking that appending it with “...said [person]” is sufficient to communicate their biases or intentions isn’t just irresponsible, it’s actively rejecting your position as a journalist.
You don’t even have to say somebody is lying when they say they’re going to do something — but the word “allegedly” is powerful, reasonable and honest, and is an objective way of calling into question a narrative.
Let me give you a few examples.
A few weeks ago, multiple outlets reported that Meta would partner with Anduril, the military contractor founded by Palmer Luckey, who also founded VR company Oculus, which Meta acquired in 2014, only to oust Luckey four years later for donating $10,000 to an anti-Hillary Clinton group. In 2024, Meta CTO Andrew “Boz” Bosworth, famous for saying that Facebook’s growth is necessary and good even if it leads to bad things like cyberbullying and terror attacks, publicly apologized to Luckey.
Now the circle is closing, with Luckey sort-of-returning to Meta to work with the company on some sort of helmet called “Eagle Eye.”
One might think at this point the media would be a little more hesitant in how they cover anything Zuckerberg-related after he completely lied to them about the metaverse, and one would be wrong.
The Washington Post reported that, and I quote:
To aid the collaboration, Meta will draw on its hefty investments in AI models known as Llama and its virtual reality division, Reality Labs. The company has built several iterations of immersive headsets aimed at blending the physical and virtual worlds — a concept known as the metaverse.
Are you fucking kidding me?
The metaverse was a joke! It never existed! Meta bought a company that made VR headsets — a technology so old, they featured in an episode of Murder, She Wrote — and an online game that could best be described as “Second Life, but sadder.” Here’s a piece from the Washington Post agreeing with me! The metaverse never really had a product of any kind, and lost tens of billions of dollars for no reason! Here’s a whole thing I wrote about it years ago! To still bring up the metaverse in the year of our lord 2025 is ridiculous!
But even putting that aside… wait, Meta’s going to put its AI inside of this headset? Palmer Luckey claims that, according to the Post, this headset will be “combining an AI assistant with communications and other functions.” Llama? That assistant?
You mean the one that it had to rig to cheat on LLM benchmarking tests? The one that will, as reported by the Wall Street Journal, participate in vivid and gratuitous sexual fantasies with children? The one using generative AI models that hallucinate, like every other LLM? That’s the one that you’re gonna put in the helmet for the military? How is the helmet going to do that exactly? What will an LLM — an inconsistent and unreliable generative AI system — do in a combat situation, and will a soldier trust it again after its first fuckup?
Just to be clear, and I quote Palmer Luckey, the helmet will feature an “ever-present companion who can operate systems, who can communicate with others, who you can off-load tasks onto … that is looking out for you with more eyes than you could ever look out for yourself right there in your helmet.” This is all going to be powered by Llama?
Really? Are we all really going to accept that? Does nobody actually think about the words they’re writing down?
Here’s the thing about military tech: the US DoD tends to be fairly conservative when it comes to the software it uses, and has high requirements for reliability and safety. I could talk about these for hours — from coding guidelines to the Ada programming language, which was designed to be highly crash-resistant and powers everything from guided missiles to F-15 fighter jets — but suffice it to say that it’s highly doubtful that the military is going to rely on an LLM that hallucinates a significant portion of the time.
To be clear, I’m not saying we have to reject every single announcement that comes along, but can we just for one second think critically about what it is we’re writing down?
We do not have to buy into every narrative, nor do we have to report it as if we do so. We do not have to accept anything based on the fact someone says it emphatically, or because they throw a number at us to make it sound respectable.
Here’s another example. A few weeks ago, Axios had a miniature shitfit after Anthropic CEO Dario Amodei said that “AI could wipe out half of all entry-level white-collar jobs and spike unemployment to 10-20% in the next one to five years.”
What data did Mr. Amodei use to make this point? Who knows! Axios simply accepted that he said something and wrote it down, because why think when you could write.
This is extremely stupid! This is so unbelievably stupid that it makes me question the intelligence of literally anybody that quotes it! Dario Amodei provided no sourcing, no data, nothing other than a vibes-based fib specifically engineered to alarm hapless journalists. Amodei hasn’t done any kind of study or research. He’s just saying stuff, and that’s all it takes to get a headline when you’re the CEO of one of the top two big AI companies.
It is, by the way, easy to cover this ethically, as proven by Allison Morrow of CNN, who, engaging her critical thinking, correctly stated that “Amodei didn’t cite any research or evidence for that 50% estimate,” that “Amodei is a salesman, and it’s in his interest to make his product appear inevitable and so powerful it’s scary,” and that “little of what Amodei told Axios was new, but it was calibrated to sound just outrageous enough to draw attention to Anthropic’s work.”
Morrow’s work is compelling because it’s sincere, and is proof that there is absolutely nothing stopping the mainstream press from covering this industry honestly. Instead, Business Insider (which just laid off a ton of people and lazily recommended their workers read books that don’t exist, because they can’t even write their own emails without AI), Fortune, Mashable and many other outlets blandly covered a man’s completely made-up figure as if it were fact.
This isn’t a story. It is “guy said thing,” and “guy” happens to be “billionaire behind multi-billion dollar Large Language Model company,” and said company has made exactly jack shit as far as software that can actually replace workers.
While there are absolutely some jobs being taken by AI, there is, to this point, little or no research suggesting it’s happening at scale, mostly because Large Language Models don’t really do the things you need them to do to take someone’s job at scale. Nor is it clear whether those jobs were lost because AI — specifically genAI — can actually do them as well as, or better than, a person, or because an imbecile CEO bought into the hype and decided to fire up the pink slip printer. When those LLMs inevitably shit the bed, those people will be hired back.
You know, like Klarna literally just had to.
These scare tactics exist to do one thing: increase the value of companies like Anthropic, OpenAI, Microsoft, Salesforce, and anybody else outright lying about how “agents” will do our jobs, and to make it easier for the startups making these models to raise funds, kind of how a pump-and-dump scammer will hype up a doomed penny stock by saying it’s going to the moon, never disclosing that they themselves own a stake in the business.
Let’s look at another example. A recent report from Oxford Economics talked about how entry-level workers were facing a job crisis, and vaguely mentioned in the preview of the report that “there are signs that entry-level positions are being displaced by artificial intelligence at higher rates.”
One might think the report says much more than that, and one would be wrong. On the very first page, it says that “there are signs that entry-level positions are being displaced by artificial intelligence at higher rates.” On page 3, it claims that the “high adoption rate by information companies along with the sheer employment declines in [some roles] since 2022 suggested some displacement effect from AI…[and] digging deeper, the largest displacement seems to be entry-level jobs normally filled by recent graduates.”
In fact, fuck it, take a look.
That’s it! That’s the entire extent of its proof! The argument is that because companies are getting AI software and there are employment declines, it must be AI. There you go! Case closed.
This report has now been quoted as gospel. Axios claimed that Oxford Economics’ report provided “hard evidence” that “AI is displacing white-collar workers.” USA Today said that “positions in computer and mathematical sciences have been the first affected as companies increasingly adopt artificial intelligence systems.”
And Anthropic marketing intern/New York Times columnist Kevin Roose claimed that this was only the tip of the iceberg, because, and I shit you not, he had talked to some guys who said some stuff.
No, really.
In interview after interview, I’m hearing that firms are making rapid progress toward automating entry-level work, and that A.I. companies are racing to build “virtual workers” that can replace junior employees at a fraction of the cost. Corporate attitudes toward automation are changing, too — some firms have encouraged managers to become “A.I.-first,” testing whether a given task can be done by A.I. before hiring a human to do it.
One tech executive recently told me his company had stopped hiring anything below an L5 software engineer — a midlevel title typically given to programmers with three to seven years of experience — because lower-level tasks could now be done by A.I. coding tools. Another told me that his start-up now employed a single data scientist to do the kinds of tasks that required a team of 75 people at his previous company.
Yet Roose’s most egregious bullshit came after he admitted that these don’t prove anything:
Anecdotes like these don’t add up to mass joblessness, of course. Most economists believe there are multiple factors behind the rise in unemployment for college graduates, including a hiring slowdown by big tech companies and broader uncertainty about President Trump’s economic policies.
But among people who pay close attention to what’s happening in A.I., alarms are starting to go off.
That’s right, anecdotes don’t prove his point, but what if other anecdotes proved his point? Because Roose goes on to repeat Amodei’s 50% figure, and to note that Anthropic now claims its Claude Opus 4 model can “code for several hours without stopping,” a statement that Roose calls “a tantalizing possibility if you’re a company accustomed to paying six-figure engineer salaries for that kind of productivity” without asking “does that mean the code is good?” or “what does it do for those hours?”
Roose spends the rest of the article clearing his throat, adding that “even if AI doesn’t take all entry-level jobs right away” that “two trends concern [him],” namely that he worries companies are “turning to AI too early, before the tools are robust enough to handle full entry-level workloads,” and that executives believing that entry-level jobs are short-lived will “underinvest in job training, mentorship and other programs aimed at entry-level workers.”
Kevin, have you ever considered checking whether that actually happens?
Nah! Why would he? Kevin’s job is to be a greasy pawn of the AI industry and the markets at large. An interesting — and sincere! — version of this piece would’ve intelligently humoured the idea then attempted to actually prove it, and then failed because there is no proof that this is actually happening other than that which the media drums up.
It’s the same craven, insincere crap we saw with the return to office “debate,” which was far more about bosses pretending that the office was good than it was about productivity or any kind of work. I wrote about this almost every week for several years, and every single media outlet participated, on some level, in pushing a completely fictitious world where in-office work was “better” due to “serendipity,” that the boss was right, and that we all had to come back to the office.
Did they check with the boss about how often they were in the office? Nope! Did they give equal weight to those who disagreed with management — namely those doing the actual work? No. But they did get really concerned about quiet quitting for some reason, even though it wasn’t real, because the bosses that don’t seem to actually do any work had demanded that it was.
Anyway, Kevin Roose was super ahead of the curve on that one. He wrote that “working from home is overrated” and that “home-cooked lunches and no commuting…can’t compensate for what’s lost in creativity” in March 2020. My favourite quote is when he says “...research also shows that what remote workers gain in productivity, they often miss in harder-to-measure benefits like creativity and innovative thinking,” before mentioning some studies about “team cohesion,” linking to a 2017 article from The Atlantic that doesn’t appear to include any study other than the Nicholas Bloom study (which Roose himself linked, and which showed remote work was productive) and another about “proximity boosting productivity” that it doesn’t link to, adding that “the data tend to talk past each other.”
I swear to god I am not trying to personally vilify Kevin Roose — it’s just that he appears to have backed up every single boss-coddling market-driven hype cycle with a big smile, every single time. If he starts writing about Quantum Computing, it’s tits up for AI.
This is the same thing that happened when corporations were raising prices and the media steadfastly claimed that inflation had nothing to do with corporate greed (once again, CNN’s Allison Morrow was one of the few mainstream media reporters willing to just say “yeah corporations actually are raising prices and blaming it on inflation”), desperately clinging to whatever flimsy data might prove that corporations weren’t price gouging even as corporations talked about doing so publicly.
It’s all so deeply insincere, and all so deeply ugly — a view from nowhere, one that seeks not to tell anyone anything other than that whatever the rich and powerful are worried or excited about is true, and that the evidence, no matter how flimsy, always points the way they want it to.
It’s lazy, brainless, and suggests either a complete rot at the top of editorial across the entire business and tech media or a consistent failure by writers to do basic journalism, and as forgiving as I want to be, there are enough of these egregious issues that I have to begin asking if anybody is actually fucking trying.
It’s the same thing every time the powerful have an idea — remote work is bad for companies and we must return to the office, the metaverse is here and we’re all gonna work in it, prices are higher and it’s due to inflation rather than anything else, AI is so powerful and strong and will take all of our jobs, or whatever it is — and that idea immediately becomes the media’s talking points. Real people in the real world, experiencing a different reality, watch as the media repeatedly tells them that their own experiences are wrong. Companies can raise their prices specifically to raise their profits, Meta can literally fail to make a metaverse, AI can do very little to actually automate your real job, and the media will still tell you to shut the fuck up and eat their truth-slop.
You want an actual conspiracy theory? How about a real one: that the media works together with the rich and powerful to directly craft “the truth,” even if it runs contrary to reality. The Business Idiots that rule our economy — work-shy executives and investors with no real connection to any kind of actual production — are the true architects of what’s “real” in our world, and their demands are simple: “make the news read like we want it to.”
Yet when I say “works together,” I don’t even mean that they get together in a big room and agree on what’s going to be said. Editors — and writers — eagerly await the chance to write something following a trend or a concept that their bosses (or other writers’ bosses) come up with and are ready to go. I don’t want to pillory too many people here, but go and look at who covered the metaverse, cryptocurrency, remote work, NFTs and now generative AI in gushing terms.
Okay, but seriously, how is it every time with Casey and Kevin?
The illuminati doesn’t need to exist. We don’t need to talk about the Bilderberg Group, or Skull and Bones, or reptilians, or wheel out David Icke and his turquoise shellsuit. The media has become more than willing to follow whatever it needs to once everybody agrees on the latest fad or campaign, to the point that they’ll repeat nonsensical claim after nonsensical claim.
The cycle repeats because our society — and yes, our editorial class too — is controlled by people who don’t actually interact with it. They have beliefs that they want affirmed, ideas that they want spread, and they don’t even need to work that hard to do so, because the editorial rails are already in place to accept whatever the next big idea is. They’ve created editorial class structures to make sure writers will only write what’s assigned, pushing back on anything that steps too far out of everybody’s agreed-upon comfort zone.
The “AI is going to eliminate half of white collar jobs” story is one that’s taken hold because it gets clicks and appeals to a fear that everyone, particularly those in the knowledge economy who have long enjoyed protection from automation, has. Nobody wants to be destitute. Nobody with six figures of college debt wants to be stood in a dole queue.
It’s a sexy headline, one that scares the reader into clicking, and when you’re doing a half-assed job at covering a study, you can very easily just say “there’s evidence this is happening.” It’s scary. People are scared, and want to know more about the scary subject, so reporters keep covering it again and again, repeating a blatant lie sourced using flimsy data, pandering to those fears rather than addressing them with reality.
The easiest way to push back on these stories is fairly simple: ask reporters to show you the companies that have actually done this.
No, I don’t mean “show me a company that did layoffs and claims they’re bringing in new efficiencies with AI.” I mean actually show me a company that has laid off, say, 10 people, and how those people have been replaced by AI. What does the AI do? How does it work? How do you quantify the work it’s replaced? How does it compare in quality? Surely with all these headlines there’s got to be one company that can show you, right?
No, no, I really don’t mean “we’re saying this is the reason,” I mean show me the actual job replacement happening and how it works. We’re three years in and we’ve got headlines talking about AI replacing jobs. Where? Christopher Mims of the Wall Street Journal had a story from June 2024 that talked about freelance copy editors and concept artists being replaced by generative AI, but I can find no stories about companies replacing employees.
To be clear, I am not advocating for this to happen. I am simply asking that the media, which seems obsessed with — even excited by — the prospect of imminent large-scale job loss, goes out and finds a business (not a freelancer who has lost work, not a company that has laid people off with a statement about AI) that has replaced workers with generative AI.
They can’t, because it isn’t happening at scale, because generative AI does not have the capabilities that people like Dario Amodei and Sam Altman repeatedly act like it does, yet the media continues to prop up the story because they don’t have the basic fucking curiosity to learn about what they’re talking about.
Hell, I’ll make it easier for you. Why don’t you find me the product, the actual thing, that can do someone’s job? Can you replace an accountant? No. A doctor? No. A writer? Not if you want good writing. An artist? Not if you want to actually copyright the artwork, and that’s before you get to how weird and soulless the art itself feels. Walk into your place of work tomorrow and look around you and start telling me how you would replace each and every person in there with the technology that exists today, not the imaginary stuff that Dario Amodei and Sam Altman want you to think about.
Outside of coding — which, by the way, is not the majority of a software engineer’s fucking job, if you’d take the god damn time to actually talk to one! — what are the actual capabilities of a Large Language Model today? What can it actually do?
You’re gonna say “it can do deep research,” by which you mean a product that doesn’t really work. What else? Generate videos that sometimes look okay? “Vibe code”? Bet you’re gonna say something about AI being used in the sciences to “discover new materials,” which supposedly proved AI’s productivity benefits. Well, MIT announced that it has “no confidence in the provenance, reliability or validity of the data, and [has] no confidence in the validity of the research contained in the paper.”
I’m not even being facetious: show me something! Show me something that actually matters. Show me the thing that will replace white collar workers — or even, honestly, “reduce the need for them.” Find me someone who said “with a tool like this I won’t need this many people” who actually fired them and then replaced them with the tool and the business keeps functioning. Then find me two or three more. Actually, make it ten, because this is apparently replacing half the white collar workforce.
There are some answers, by the way. Generative AI has sped up transcription and translation, which are useful for quick references but can cause genuine legal risk. Generative AI-based video editing tools are gaining in popularity, though it’s unclear by how much. Seemingly every app that connects to generative AI can summarise a message. Software engineers using LLM tools — as I talked about on a recent episode of Better Offline — are finding some advantages, but LLMs are far from a panacea. Generative AI chatbots are driving people insane by providing them an endlessly-configurable pseudo-conversation too, though that’s less of a “use case” and more of a “text-based video game launched at scale without anybody thinking about what might happen.”
Let’s be real: none of this is transformative. None of this is futuristic. It’s stuff we already do, done faster, though “faster” doesn’t mean better, or even that the task is done properly, and obviously, it doesn’t mean removing the human from the picture. Generative AI is best at, it seems, doing very specific things in a very generic way, none of which are truly life-changing. Yet that’s how the media discusses it.
An aside about software engineering: I actually believe LLMs have some value here. They can generate and evaluate code, as well as handle distinct functions within a software engineering environment. It’s pretty exciting for some software engineers (they’re able to get a lot of things done much faster!), though they’d never trust it with things launched in production. These LLMs also have “agents,” though for the sake of argument, I’d like to call them “bots,” because the term “agent” is bullshit, used to make things sound like they can do more than they can. Anyway, bots can, to quote Thomas Ptacek, “poke around your codebase on their own…author files directly…run tools…compile code…run tests…and iterate on the results,” to name a few things. These are all things that, under the watchful eye of an actual person, can speed up some software engineers’ work.
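To make that concrete, here’s a minimal sketch in Python of what a “run tests and iterate” bot boils down to. The pytest suite and the ask_llm() helper are my own illustrative assumptions, not Ptacek’s code or any vendor’s actual product:

```python
import subprocess

def run_tests() -> tuple[bool, str]:
    """Run the project's test suite and capture everything it prints."""
    result = subprocess.run(["pytest", "-q"], capture_output=True, text=True)
    return result.returncode == 0, result.stdout + result.stderr

def ask_llm(prompt: str) -> str:
    """Hypothetical stand-in for whatever model the bot queries; not a real API."""
    raise NotImplementedError("wire this up to the model of your choosing")

def iterate_on_failures(max_attempts: int = 3) -> bool:
    """Run tests, hand any failures to the model, repeat; a person reviews every suggestion."""
    for attempt in range(1, max_attempts + 1):
        passed, output = run_tests()
        if passed:
            return True
        suggestion = ask_llm(f"These tests failed:\n{output}\nPropose a minimal fix.")
        print(f"Attempt {attempt}, review before applying anything:\n{suggestion}")
    return False
```

Note that half of this loop (running tests, capturing output) is plain old scripting that predates LLMs entirely, which is exactly the point my editor makes below.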
(A note from my editor, Matt Hughes, who has been a software engineer for a long time: I’m not sure how persuasive this stuff is. Coders have been automating things like tests, code compilation, and the general mechanics of software engineering long before AI and LLMs were the hot thing du jour. You can do so many of the things that Ptacek mentioned with cronjobs and shell scripts — and, undoubtedly, with greater consistency and reliability.)

Ptacek also adds that “if truly mediocre code is all we ever get from LLM, that’s still huge, [as] it’s that much less mediocre code humans have to write.”
Back to Ed: In a conversation with Carl Brown of The Internet of Bugs (himself a veteran software engineer) as I was writing this newsletter, he recommended I exercise caution with how I discussed LLMs and software engineering, saying that “...there are situations at the moment (unusual problems, or little-used programming languages or frameworks) where the stuff is absolutely useless, and is likely to be for a long time.”

In a previous draft, I’d written that mediocre code was “fine if you knew what to look for,” but even then, Brown added that “...the idea that a human can ‘know what code is supposed to look like’ is truly problematic. A lot of programmers believe that they can spot bugs by visual inspection, but I know I can't, and I'd bet large sums of money they can't either — and I have a ton of evidence I would win that bet.”
Brown continued: “In an offline environment, mediocre code may be fine when you know what good code looks like, but if the code might be exposed to hackers, or you don't know what to look for, you're gonna cause bugs, and there are more bugs than ever in today's software, and that is making everyone on the Internet less secure.”
He also told me the story of the famed Heartbleed bug, a massive vulnerability in a common encryption library that smart, professional security experts and developers looked at for over two years before someone spotted the error — one single statement that nobody had checked — which set off a massive, internet-wide panic and left hundreds of millions of websites vulnerable.
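For the curious, here’s a minimal sketch in Python (an analogy, not OpenSSL’s actual C code, with names invented for illustration) of the class of bug involved: trusting a length field supplied by the other end of the connection.

```python
# BUG: nothing checks that claimed_length matches len(payload), so a peer
# claiming a huge length is handed adjacent "memory" it should never see.
def heartbeat_response(memory: bytes, payload: bytes, claimed_length: int) -> bytes:
    return (payload + memory)[:claimed_length]

secret_memory = b"user=admin;password=hunter2"
# The peer sent a 2-byte payload but claimed it was 20 bytes long:
print(heartbeat_response(secret_memory, b"hi", 20))  # leaks 18 bytes of secrets

# The fix is the kind of one-statement check reviewers missed for two years:
def heartbeat_response_fixed(memory: bytes, payload: bytes, claimed_length: int) -> bytes:
    if claimed_length != len(payload):
        raise ValueError("length field does not match payload")
    return payload
```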
So, yeah, I dunno man. On one hand, there are clearly software developers that benefit from using LLMs, but it’s complicated, much like software engineering itself. You cannot just “replace a coder,” because “coder” isn’t really the job, and while this might affect entry-level software engineers at some point, there’s yet to be proof it’s actually happening, or that AI’s taking these jobs and not, say, outsourcing.
Perhaps there’s a simpler way to put it: software engineering is not just writing code, and if you think that’s the case, you do not write software or talk to software engineers about what it is they do.
Seriously, put aside the money, the hype, the pressure, the media campaigns, the emotions you have, everything, and just focus on the product as it is today. What is it that generative AI does, today, for you? Don’t say “AI could” or “AI will,” tell me what “AI does.” Tell me what has changed about your life, your job, your friends’ jobs, or the world around you, other than that you heard a bunch of people got rich.
Yet the media continually calls it “powerful AI.” Powerful how? Explain the power! What is the power? The word “powerful” is a marketing term that the media has adopted to describe something it doesn’t understand, along with the word “agent,” which means “autonomous AI that can do things for you” but is used, at this point, to describe any Large Language Model doing anything.
But the intention is to frame these models as “powerful” and to use the term “agents” to make this technology seem bigger than it is, and the people that control those terms are the AI companies themselves.
It’s at best lazy and at worst actively deceitful, a failure of modern journalism to successfully describe the moment outside of what they’re told to, or the “industry standards” they accept, such as “a Large Language Model is powerful and whatever Anthropic or OpenAI tells me is true.”
It’s a disgrace, and I believe it either creates distrust in the media or drives people insane as they look at reality, where generative AI doesn’t really seem to be doing much, and get told something entirely different by the media.
When I read a lot of modern journalism, I genuinely wonder what it is the reporter wants to convey. A thought? A narrative? A story? Some sort of regurgitated version of “the truth” as justified by what everybody else is writing and how your editor feels, or what the markets are currently interested in? What is it that writers want readers to come away with, exactly?
It reminds me a lot of a term that Defector’s David Roth once used to describe CNN’s Chris Cillizza — “politics, noticed”:
This feels, from one frothy burble to the next, like a very specific type of fashion writing, not of the kind that an astute critic or academic or even competent industry-facing journalist might write, but of the kind that you find on social media in the threaded comments attached to photos of Rihanna. Cillizza does not really appear to follow any policy issue at all, and evinces no real insight into electoral trends or political tactics. He just sort of notices whatever is happening and cheerfully announces that it is very exciting and that he is here for it. The slugline for his blog at CNN—it is, in a typical moment of uncanny poker-faced maybe-trolling, called The Point—is “Politics, Explained.” That is definitely not accurate, but it does look better than the more accurate “Politics, Noticed.”
Whether Roth would agree or not, I believe that this paragraph applies to a great deal of modern journalism. Oh! Anthropic launched a new model! Delightful. What does it do? Oh they told me, great, I can write it down. It’s even better at coding now! Wow! Also, Anthropic’s CEO said something, which I will also write down. The end!
I’ll be blunt: making no attempt to give actual context or scale or consideration to the larger meaning of the things said makes the purpose of journalism moot. Business and tech journalism has become “technology, noticed.” While there are forays out of this cul-de-sac of credulity — and exceptions at many mainstream outlets — there are so many more people who will simply hear that there’s a guy who said a thing, and that guy is rich and runs a company people respect, and thus that statement is now news to be reported without commentary or consideration.
Much of this can be blamed on the editorial upper crust that continually refuses to let writers critique their subject matter, and wants to “play it safe” by basically doing what everybody else does. What’s crazy to me is that many of the problems with the AI bubble — as with the metaverse, as with the return to office, as with inflation and price gouging — are obvious if you actually use the things or participate in reality, but such things do not always fit with the editorial message.
But honestly, there are plenty of writers who just don’t give a shit. They don’t really care to find out what AI can (or can’t) do. They’ve come to their conclusion (it’s powerful, inevitable, and already doing amazing things) and thus will write from that perspective. It’s actually pretty nefarious to continually refer to this stuff as “powerful,” because you know their public justification is how this stuff uses a bunch of GPUs, and you know their private justification is that they have never checked and don’t really care to. It’s much easier to follow the pack, because everybody “needs to cover AI” and AI stories, I assume, get clicks.
That, and their bosses, who don’t really know anything other than that “AI will be big,” don’t want to see anything else. Why argue with the powerful? They have all the money.
But even then…can you try using it? Or talking to people that use it? Not “AI experts” or “AI scientists,” but real people in the real world? Talk to some of those software engineers! Or I dunno, learn about LLMs yourself and try them out?
Ultimately, a business or tech reporter should ask themselves: what is your job? Who do you serve? It’s perfectly fine to write relatively straightforward and positive stuff, but you have to be clear that that’s what you’re doing and why you’re doing it.
And you know what, if all you want to do is report what a company does, fine! I have no problem with that, but at least report it truthfully. If you’re going to do an opinion piece suggesting that AI will take our jobs, at least live in reality, and put even the smallest amount of thought into what you’re saying and what it actually means.
This isn’t even about opinion or ideology, this is basic fucking work.
And it is fundamentally insincere. Is any of this what you truly believe? Do you know what you believe? I don’t mean this as a judgment or an attack — many people go through their whole lives with relatively flimsy reasons for the things they believe, especially in the case of commonly-held beliefs like “AI is going to be big” or “Meta is a successful company.”
If I’m honest, I really don’t mind if you don’t agree with something I say, as long as you have a fundamentally-sound reason for doing so. My CoreWeave analysis may seem silly to some because its value has quadrupled — and that’s why I didn’t write that I believed the stock would crater, or really anything about the stock. Its success does not say much about the AI bubble other than it continues, and even if I am wrong, somehow, long term, at least I was wrong for reasons I could argue versus the general purpose sense that “AI is the biggest thing ever.”
I understand formats can be constraining — many outlets demand an objective tone — but this is where words like “allegedly” come in. For example, The Wall Street Journal recently reported that Sam Altman had claimed, in a leaked recording, that buying Jony Ive’s pre-product hardware startup would add “$1 trillion in market value” to OpenAI. As it stands, a reader — especially a Business Idiot — could be forgiven for thinking that OpenAI was now worth, or could be worth, over a trillion dollars, which is an egregious editorial failure.
One could easily add that “...to this date, there have been no consumer hardware launches at this scale outside of major manufacturers like Apple and Google, and these companies had significantly larger research and development budgets and already-existent infrastructure relationships that OpenAI lacks.”
Nothing about what I just said is opinion. Nothing about what I just said is an attack, or a slight, and if you think it’s “undermining” the story, you yourself are not thinking objectively. These are all true statements, and are necessary to give the full context of the story.
That, to me, is sincerity. Constrained by an entirely objective format, a reporter makes the effort to get across the context in which a story is happening, rather than just reporting exactly the story and what the company has said about it. By not including the context, you are, on some level, not being objective: you are saying that everything that’s happening here isn’t just possible, but rational, despite the ridiculous nature of Altman’s comment.
Note that those are subjective statements, and they are exactly what’s implied by simply stating that Sam Altman believes acquiring Jony Ive’s company will add $1 trillion in value to OpenAI. By giving the whole story — without ever saying the word “unlikely” — you allow the audience to come to that conclusion themselves, and you give them the truth.
It really is that simple.
The problem, ultimately, is that everybody is aware that they’re being constantly conned, but they can’t always see where and why. Their news oscillates from aggressively dogmatic to a kind of sludge-like objectivity, and oftentimes feels entirely disconnected from their own experiences other than in the most tangential sense, giving them the feeling that their actual lives don’t really matter to the world at large.
On top of that, the basic experience of interacting with technology, if not the world at large, kind of fucking sucks now. We go on Instagram or Facebook to see our friends and battle through a few ads and recommended content, we see things from days ago until we click stories, and we hammer past a few more ads to get a few glimpses of our friends. We log onto Microsoft Teams, it takes a few seconds to go through after each click, and then it asks why we’re not logged in, a thing that we don’t need to be able to do to make a video call.
Our email accounts are clogged with legal spam — marketing missives, newsletters, summaries from news outlets, notifications from UPS that require us to log in, notifications that our data has been leaked, payment reminders, receipts, and even occasionally emails from real people. Google Search is broken, but then again, so is searching on basically any platform, be it our emails, workspaces or social networks.
At scale, we as human beings are continually reminded that we do not matter, that any experiences of ours outside of what the news says make us “different” or a “cynic,” that our pain points are only as relevant as those that match recent studies or reports, and that the people who actually matter are either the powerful or those considered worthy of attention. News rarely feels like it appeals to the listener, reader or viewer, just to an amorphous, generalized “thing” of a person imagined in the mind of a Business Idiot. The news doesn’t feel the need to explain why AI is powerful, just that it is, in the same way that “we all knew” being back in the office was better, even if far more people disagreed than agreed.
As a result of all of these things, people are desperate for sincerity. They’re desperate to be talked to as human beings, their struggles validated, their pain points confronted and taken seriously. They’re desperate to have things explained to them with clarity, and to have it done by somebody who doesn’t feel chained by an outlet.
This is something that right-wing media caught onto and exploited, leading to the rise of Donald Trump and the obsession with creating the “Joe Rogan of the Left,” an inherently ridiculous idea based on Rogan’s popularity with young men (which is questionable based on recent reports) and a total misunderstanding of what actually makes his kind of media popular.
However you may feel about Rogan, what his show sells is that he’s a kind of sincere, pliant and amiable oaf. He doesn’t seem condescending or judgmental toward his audience, because he himself sits, slack-jawed, saying “yeah I knew a guy who did that,” and genuinely seems to like his guests. While you (as I do) may deeply dislike everything on that show, you can’t deny that the people on it seem to at least enjoy themselves, or feel engaged and accepted.
The same goes for Theo Von (real name: Theodor Capitani von Kurnatowski III, and no, really!), whose whole affable doofus motif disarms guests and listeners.
It works! And he’s got a whole machine that supports him, just like Rogan: money, real promotion, and real production value. They are given the bankroll and the resources to make a high-end production, a studio space and infrastructural support, and then they get a bunch of marketing and social push too. There are entire operations behind them, beyond the literal stuff they do on the set, because, shocker, the audience actually wants to see them not have a boxed lunch with “THE THINGS TO BELIEVE” written on it by a management consultant.
This is in no way a political statement, because my answer to this entire vacuous debate is to “give a diverse group of people whose beliefs you agree with the actual promotional and financial backing, and then let them create something with their honest-to-god friendships.” Bearing witness to actual love and solidarity is what will change the hearts of young people, not endless McKinsey gargoyles with multi-million-dollar budgets for “data.”
I should be clear that this isn’t to say every single podcast should be in the format I suggest, but that if you want whatever “The Joe Rogan Of The Left” is, the answer is “a podcast with a big audience where the people like the person speaking and as a result are compelled by their message.”
It isn’t even about politics, it’s that when you cram a bunch of fucking money into something it tends to get big, and if that thing you create is a big boring piece of shit that’s clearly built to be — and even signposted in the news as built to be — manipulative, it is in and of itself sickening.
I’m gonna continue clearing my throat: the trick here is not to lean right, nor has it ever been. Find a group of people who are compelling, diverse and genuinely enjoy being around each other and shove a whole bunch of advertising dollars into it and give it good production values to make it big, and then watch in awe as suddenly lots of people see it and your message spreads. Put a fucking trans person in there — give Western Kabuki real money, for example — and watch as people suddenly get used to seeing a trans person because you intentionally chose to do so, but didn’t make it weird or get upset when they don’t immediately vote your way.
Because guess what — what people are hurting for right now is actual, real sincerity. Everybody feels like something is wrong. The products they use every day are increasingly-broken, pumped full of generative AI features that literally get in the way of what they’re trying to do, which already was made more difficult because companies like Meta and Google intentionally make their products harder to use as a means of making more money. And, let’s be clear, people are well aware of the billions in profits that these companies make at the customer’s expense.
They feel talked down to, tricked, conned, abused and abandoned, both parties’ representatives operating in terms almost as selfish as the markets that they also profit from. They read articles that blandly report illegal or fantastical things as permissible and rational and think, for a second, “am I wrong? Is this really the case? This doesn’t feel like the case,” while somebody tells them that despite the fact that they have less money, and that said money doesn’t go as far, they’re actually experiencing the highest standard of living in history.
Ultimately, regular people are repeatedly made to feel like they don’t matter. Their products are overstuffed with confusing menus, random microtransactions, the websites they read full of advertisements disguised as stories and actual advertisements built to trick them, their social networks intentionally separating them from the things they want to see.
And when you feel like you don’t matter, you look to other human beings, and other human beings are terrified of sincerity. They’re terrified of saying they’re scared, they’re angry, they’re sad, they’re lonely, they’re hurting, that they’re constantly on a fucking tightrope. Every day feels like something weird or bad is going to happen on the news (which, for no reason other than that it helps rich people, constantly tries to scare them that AI will take their jobs), and they just want someone to talk to, but everybody else is fucking unwilling to let their guard down after a decade-plus of media that valorized snark and sarcasm, because the lesson they learned about being emotionally honest was that it’s weird, or they’re too much, or it’s feminine for guys or too feminine for women.
Of course people feel like shit, so of course they’re going to turn to media that feels like real people made it, and they’ll turn to the media they’ll see the easiest, such as that given to them by the algorithm, or that which they are made to see by advertisement, or, of course, word of mouth. And if you’re sending someone to listen to something, and someone describes it in terms that sound like they’re hanging out with a friend, you’d probably give it a shot.
Outside of podcasting, people’s options for mainstream (and an alarming amount of industry) news are somewhere between “I’m smarter than you,” “something happened!” “sneering contempt,” “a trip to the principal’s office,” or “here’s who you should be mad at,” which I realize also describes the majority of the New York Times opinion page.
While “normies” of whatever political alignment might want exactly the slop they get on TV, that slop is only slop because the people behind it believe that regular people will only accept the exact median person’s version of the world, even if they can’t really articulate it beyond “whatever is the least-threatening opinion” (or the opposite in Fox News’ case).
Really, I don’t have a panacea for what ails media, but what I do know is that in my own life I have found great joy in sincerity and love. In the last year I have made — and will continue to make, as it’s my honour to — tremendous effort to get to know the people closest to me, to be there for them if I can, to try and understand them better and to be my authentic and honest self around them, and accept and encourage them doing the same. Doing so has improved my life significantly, made me a better, more confident and more loving person, and I can only hope I provide the same level of love and acceptance to them as they do to me.
Even writing that paragraph I felt the urge to pare it back, for fear that someone would accuse me of being insincere, of “speaking in therapy language,” of “trying to sound like a hero” — not that I am doing so, but because there are far more people concerned with moderating how emotional and sincere they are than there are people willing to stop actual societal harms.
I think it’s partly because people see emotions as weakness. I don’t agree. I have never felt stronger and more emboldened than I have as I feel more love and solidarity with my friends, a group that I try to expand at any time I can. I am bolder, stronger (both physically and mentally), and far happier, as these friendships have given me the confidence to be who I am, and I offer the same aggressive advocacy to my friends in being who they are as they do to me.
None of what I am saying is a one-size-fits-all solution. There is so much room for smaller, more niche projects, and I both encourage and delight in them. There is also so much more attention that can be given to these niche projects, and things are only “niche” until they are given the time in the light to become otherwise. There is also so much more that can be done within the mainstream power structures, if only there is the boldness to do so.
Objective reporting is necessary — crucial, in fact! — to democracy, but said objectivity cannot come at the cost of context, and every time it does so, the reader is failed and the truth is suffocated. And I don’t believe objective reporting should be separated from actual commentary. In fact, if someone is a reporter on a particular beat, their opinion is likely significantly more-informed than that of someone “objective” and “outside of the coverage,” based on stuff like “domain expertise.”
The true solution, perhaps, is more solidarity and more sincerity. It’s media outlets that back up their workers, with editorial missions that aggressively fight those who would con their readers or abuse their writers, focusing on the incentives and power of those they’re discussing rather than whether or not “the markets” agree with their sentiment.
In any case, the last 15+ years have flattened journalism, with the media constantly swerving toward whatever the next big trend is — the pivot to video, contorting content to “go viral” on social media, SEO, or whatever big coverage area (AI, for example) everybody is chasing — instead of focusing on making good shit people love. Years later, social networks have effectively given up on sending traffic to news, and now Google’s AI summaries are ripping away large chunks of the traffic of major media outlets that decided the smartest way to do their jobs was “make content for machines to promote,” never thinking for a second that those who owned the machines were never to be trusted.
Worse still, outlets have drained the voices from their reporters, punishing them for having opinions, ripping out anything that might resemble a personality from their writing to meet some sort of vague “editorial voice” despite readers and viewers again and again showing that they want to read the news from a human being not an outlet.
I maintain that things can change for the better, and it starts with a fundamental acceptance that those running the vast majority of media outlets aren’t doing so for their readers’ benefit. Once that happens, we can rebuild around distinct voices, meaningful coverage and a sense of sincerity that the mainstream media seems to consider the enemy.