Sam Altman Is Full Of Shit

Edward Zitron
Note: In my last newsletter, I said that my next post would be the second part of my Facebook autopsy. Don’t worry, that’s still coming, but given the recent drama between Sam Altman, OpenAI, and Scarlett Johansson, I felt the need to write something. Don’t worry, I won’t be doing bonus posts every week.

Eight days ago, Sam Altman, CEO of OpenAI, giddy from the high of launching the faster-responding model GPT-4o, tweeted the word "her." Altman was referencing the fact that OpenAI had just debuted a voice assistant inspired — or not, as the case may be — by Scarlett Johansson in the movie Her, where she voiced an AI. In an interview with The Verge, OpenAI CTO Mira Murati said that the voice assistant was not meant to sound like Johansson, and on Monday morning, the company abruptly pulled the voice from ChatGPT, repeating that it wasn't meant to sound like her and adding that it belonged to a completely different, unnamed actress. Altman, in a separate blog post, said that ChatGPT's new model "feels like AI from the movies."

Later on Monday, The Verge also reported that OpenAI had been "in conversations" with Johansson's representatives. Yet a mere half an hour later, Johansson told NPR in a statement that she'd been solicited twice — once in September, and once two days before the announcement — to bring her voice to ChatGPT, something she'd declined to do, and that on hearing the demo, she had retained legal counsel and forced Altman and OpenAI to pull down the voice. In a statement released to the press, Altman subsequently claimed that the actress for Sky was cast before the company reached out to Johansson.

This raises the question: If the voice behind Sky belongs to an entirely different person, and was not, as seems to be the case, inspired by or stolen from Johansson, why did OpenAI try to license Johansson's voice? There are two possibilities. First, it was a coincidence, and OpenAI is trying to cover its arse. Nitasha Tiku, a tech culture writer for the Washington Post, noted the similarity in September of last year when attending an OpenAI demo event and raised the issue with a company exec, who denied deliberately copying Johansson's likeness. Was it all just a giant mistake? One big unhappy coincidence?

I don’t think so, not least because of the not-so-coy hints dropped by Altman around the launch of GPT-4o. 

The second possible explanation — and the most plausible, in my opinion — is that OpenAI simply pirated her likeness and then tried to bring her onboard in a failed attempt to eliminate any potential legal risk. Given OpenAI's willingness to steal content from the wider web to train its AIs, for which it's currently facing multiple lawsuits from individual authors and media conglomerates alike, it's hardly a giant leap to assume that it would steal a person's voice.

As an aside: It shouldn't come as much of a surprise that Johansson didn't jump at the chance to work with OpenAI. As a member of the actors' union SAG-AFTRA, Johansson was a participant in the 2023 strike that effectively deadlocked all TV and film production for much of that year. A major concern of the union was the potential use of AI to create a facsimile of an actor, using their likeness but giving them none of the proceeds. The idea that, less than one year after the strike's conclusion, Johansson would lend her likeness to the biggest AI company in the world is, frankly, bizarre.

Just so we are abundantly, painfully, earnestly clear here, OpenAI lied to the media multiple times.

  • Mira Murati, OpenAI's CTO, lied to Kylie Robison of The Verge when she said that "Sky" wasn't meant to sound like Scarlett Johansson.
  • OpenAI lied repeatedly about the reasons and terms under which "Sky" was retired, both by stating that it "believed that AI voices should not deliberately mimic a celebrity's distinct voice" and by omitting, when it said it had been "in conversations" with Johansson's representatives to bring her voice to ChatGPT, that she had already declined twice and that OpenAI's legal counsel was actively engaging with hers.

This company was recently asked, and failed to answer, whether it trained its video-generating AI Sora using videos from YouTube. Last week was a busy one for OpenAI, with Vox reporting that departing employees have to sign a restrictive NDA that, if violated, strips them of all vested equity in the company, something Sam Altman has claimed isn't the case, and something for which we have only his word.

This was also the week where OpenAI dissolved its team focused on the long-term risks of AI, and Jan Leike, its former co-head, resigned, claiming that he disagreed with the "core priorities" of OpenAI's leadership, saying that his team — which, to be clear, dealt with the safety implications of artificial intelligence — was "sailing against the wind" at OpenAI, and that it was becoming "harder and harder" to get compute for his research. Leike added that at OpenAI, "safety culture and processes have taken a backseat to shiny products." Oh, and co-founder Ilya Sutskever resigned, the very same guy who was behind Altman's brief exile from OpenAI before repenting and returning to the fold.

I should also add that Altman has previously stated that "humanity needs to solve for AI safety."

So, let's review. In the last week, OpenAI has repeatedly lied about a voice product, dissolved its AI safety team, and had two major players in the company resign — one of whom tried to oust Sam Altman late last year, and the other of whom clearly despises the direction of the company. And unlike Sam Altman, both Sutskever and Leike are actual computer scientists who build things, as opposed to a specious hype man whom people have been trying to fire for a decade. Seriously: staff twice went to the board to get him fired from his first company, Paul Graham personally flew into San Francisco to fire him from Y Combinator, and he was so dramatically fired from OpenAI that he had to run crying to venture capitalist Reid Hoffman and Microsoft CEO Satya Nadella to reinstate him as CEO and install the Avengers of Capitalism as the new board.

I'll cut to the chase: it's time to stop listening to anything that Sam Altman has to say. Sam Altman is full of shit, and his reign at OpenAI has been defined far more by its empty promises than any realized dreams. It's time to actively push back on Altman when he says that GPT-5 will be "similar to a virtual brain," or a "super smart person who knows absolutely everything about your life," or a "super-competent colleague," or that it'll "replace 95% of marketing tasks," or that it'll "evolve in uncomfortable ways" rather than be twisted by a group of people who know enough, or give enough of a shit, to make sure they're not causing said evolution.

Sam Altman needs you to believe that AI will kill us all, or destroy all our jobs, and that he's a little bit scared of AI, because if you think for even a second about what this man is saying, you'll realize that he's not an engineer: he's a lobbyist and a liar. He needs us to humor the idea — even if he rejects the notion — that AI could be considered a "creature," because doing so allows him to add further mystique and hype to distract from the fact that he doesn't seem to know anything and OpenAI doesn't seem to be innovating.

Every single thing that Sam Altman and OpenAI do is suspicious, and it has been for months, ever since Altman was fired and then rehired as CEO with — to this day — little or no explanation. Sam Altman has repeatedly said things that, had they come from any founder with less power, presence, access, and funding, would have gotten that founder laughed at, ignored, and treated like a fantasist. Altman is the P.T. Barnum of tech, with just enough knowledge to be dangerous but far too little to actually say anything of note. He is not the technical mind behind OpenAI, he did not write its models, and looking up to him as some sort of technolojesus is bad for the tech industry and worse for the world. This is not a person who should be making decisions about the future of the tech industry, nor should he be allowed to spout fan fiction and automatically have it covered as gospel.

And the people Altman hires are just as untrustworthy. Chief Technology Officer Mira Murati lied to The Verge and refused to answer whether OpenAI's video generator Sora was trained on YouTube videos, a question that Chief Operating Officer Brad Lightcap also dodged a few months later when interviewed at a Bloomberg conference. And let's not forget that one of the rumored reasons Altman was fired from OpenAI was for lying to the board.

I've said it once, and I'll say it again: it's time for artificial intelligence companies to start showing us something, and it's time to stop giving OpenAI any credit or quarter for telling us what it "will" do before it shows us. The AI reality distortion field feels like a giant con, used to keep media attention (and traffic) flowing to ChatGPT so that Altman can continue claiming that "generative AI is the future" when it has yet to prove itself necessary, essential, or capable of overcoming its massive energy, compute, and training data needs. Or, for that matter, of being a financially viable service.

This week should be a wake-up call to the media and anybody else who chooses to trust OpenAI or Sam Altman. OpenAI is built on a culture of deception, one that obfuscates the actual abilities of its technology, and every further successful obfuscation enriches an enterprise that lacks morality, clarity, and respect for its users or the tech industry at large.

The same goes for Sundar Pichai of Google, Satya Nadella of Microsoft, and, of course, Mark Zuckerberg and anybody associated with Meta. Artificial intelligence is a watershed moment for the media, one where liars and scammers and hucksters get rich every time a fanciful claim is left untouched, and where products are built upon a foundation of stolen content interpreted by a model that doesn't actually know anything — because AI can't, by definition, know anything. And yet we're supposed to trust them?

Questions must be asked of these founders, and follow-up questions asked based on what they say, and their reckless, stupid ideas must be faced with suspicion and a complete lack of the benefit of the doubt.

Failing to interrogate these people will cause immeasurable harm to the creatives they're stealing from, and anyone in the media who believes that Altman and his ilk won't come for their work is living in a fantasy land. These companies will make their products worse by forcing AI into them to please investors who only care about growth, and in doing so will drain capital from the tech ecosystem while making the world tangibly worse so that extremely rich people can get a few decimal points richer. I don't want Windows to studiously record every single goddamn thing I've done, regardless of whether Satya Nadella says that information will stay on my computer.

Yet they will fail to perpetuate further misery if they are interrogated with the suspicion they deserve. Push back against the vague promises of artificial intelligence. Embrace that which actually delivers value to your life, not the theoretical promises of any number of monotonous billionaires who want to turn every website into fuel for a machine that continually gets things wrong.

Demand these things do something. And if you get a chance to ask any of these people a question, make sure it's a good one.

And don't give a single one of them the benefit of the doubt.
