Let Tim Cook

Edward Zitron

Last week, Apple announced “Apple Intelligence,” a suite of features coming to iOS 18 (the next version of the iPhone’s software) in a presentation that Fast Company called “uninspired,” Futurism called “boring,” and Axios claimed “failed to excite investors.”

The presentation, given at Apple’s Worldwide Developers Conference (usually referred to as WWDC, and where Apple typically announces its next software updates), felt remarkably demure in comparison to Google’s I/O conference in May, where CEO Sundar Pichai and Head of Search Liz Reid hyped the next generation of Google products and search updates that absolutely nobody asked for.

Historically, WWDC is where the company has announced major updates to its product and software line-up. At WWDC 2005, the late Steve Jobs announced that the company would move from IBM-designed (and Motorola-manufactured) PowerPC processors to Intel’s x86 chips, allowing for faster and more power-efficient computers, as well as the ability to dual-boot Windows, a huge moment for Macs that was met with raucous applause from people who really should know better.

Fifteen years later, Tim Cook announced the shift from Intel’s stagnating x86 architecture to Apple’s homegrown ARM-based processors, starting with the M-series chips that have become ubiquitous across Macs and iPads, a move accompanied by the bombastic promise that these were the “most powerful chips ever created.”

WWDC is where Apple has unveiled every new version of OS X. It’s where the original Mac Pro, FaceTime, and the iPhone App Store were introduced to the world. This is the conference where Apple flexes its muscles and boasts about how powerful and important it is.

Now you start to understand why this year was so…strange. Gone was Apple’s trademark bombast, its propensity to tell the world how good it was at everything and how important its next big thing was, replaced with a constant juxtaposition of “we’re putting AI in your phone!” and “it’s totally fine, it’s not that big a deal.”

Despite this ostensibly being Apple’s big AI play, CEO Tim Cook and SVP of Software Engineering Craig Federighi spent far more time explaining how much Apple loves privacy before clunkily launching into a series of product updates that both sound useful and heavily suggest that Apple is intentionally trying to distance itself from the AI bubble. And while it’s integrating ChatGPT into iOS 18 (which I’ll get to later), it’s doing so at arm’s length, with most of the day-to-day AI work done by the company’s own models.

The AI integrations (which arrive sometime in the fall on the iPhone 15 Pro and iPhone 15 Pro Max, along with iPads and Macs with M1 or later chips) are fairly straightforward and don’t do anything we haven’t seen elsewhere. Apple’s AI can generate transcripts of calls (but not before warning all participants that it’s doing so); generate images through a tool called Image Playground; help write and rewrite emails, à la Grammarly; and perform distinct actions across multiple apps, like adding slides to a presentation or opening a web image in the iPhone’s Photos app, where you can edit it. The last item is a bit fuzzy, as it extends to both Apple’s own apps and third-party apps, and the functionality is expected to roll out over the coming year or so. As ever, take any promises made without a demo with a grain of salt.

Apple also says that Siri will soon have an awareness of what’s on your screen and be able to respond appropriately. Using an Apple-demonstrated example, if you’re filling in a form that asks for your driver’s license number, Siri can look through your photo library, identify any pictures of your license, and grab the pertinent information for you - equal parts useful and creepy.

And, thanks to the power of artificial intelligence, you can now generate your own custom emojis — a feature that screams “we have to fill a few minutes of this presentation.”

Apple claims that many of these features run entirely on your device, and those that don’t use something called “Private Cloud Compute,” where Apple claims (in a lengthy privacy statement) that at no time will anyone have access to the data being processed on its servers, “even during active processing” (meaning when Apple’s servers are handling the request). Apple has also offered security researchers the opportunity to personally verify these claims, and in general the news seems to have been well-received, though some — as Exabeam Chief Product Officer Steve Wilson told Dark Reading — worry that threats in generative AI are “poorly understood,” and that despite Apple’s best efforts (and, no doubt, heavy policing) some will slip through the cracks.

On some level, Wilson is being alarmist; his only criticism amounts to “we don’t know everything about this thing and thus should be scared of what we don’t know.” On the other hand, he’s right insofar as we’re in the relatively early days of mass-scale use of Large Language Models, and so much remains uncertain: how people and businesses will use them, how people will perceive the reliability of the information LLMs provide in the long term, and how effective the safeguards LLM developers put in place to prevent abuse will be.

Apple has chosen (albeit cautiously) to take the risk of integrating these models into its devices at a time when companies like Google and Meta have proven that one cannot simply trust a multi-trillion-dollar company to maintain the stability and privacy of a product, or, for that matter, to guarantee the accuracy of its outputs. It doesn’t help that Apple CEO Tim Cook told the Washington Post that Apple’s AI falls “short of 100%” when it comes to hallucinations (when a model authoritatively tells you something that isn’t true), which feels a little worrying when this model will potentially be taking actions across apps, and when Apple is such a highly trusted brand that people are presumably more likely to trust its outputs than those of, say, ChatGPT or Google’s Gemini.

These features predominantly run on Apple’s own models, which (according to Apple) are trained on licensed data (such as that licensed from Shutterstock), “data selected to enhance specific features” (it’s unclear what this means, but from what I understand, it’s licensed data at the very least), and “publicly available data” collected by Apple’s web crawler, AppleBot. Apple allows publishers to opt out of this collection with a rule added to their website’s robots.txt file, which, I should add, does not solve the bloody problem if Apple has already used the data for training purposes, something that is bordering on impossible to confirm.
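For what it’s worth, the opt-out itself is trivial. Here’s a minimal robots.txt sketch, assuming the “Applebot-Extended” agent token Apple has documented for excluding content from model training (as distinct from plain Applebot, which still crawls for search features like Siri and Spotlight):

```
# Tell Apple not to use this site's content for AI training...
User-agent: Applebot-Extended
Disallow: /

# ...while still allowing regular Applebot crawling for Siri/Spotlight search.
# An empty Disallow value means "nothing is disallowed."
User-agent: Applebot
Disallow:
```

The obvious catch, again: a robots.txt rule only governs future crawls.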

And if it already has, it’s indicative of some shady, underhand behavior. While it’s fair to say that AppleBot isn’t new — it arrived at some point in 2015 — it was scarcely publicized, save for a few posts on Apple rumor sites speculating that the company was working on a replacement for Google Search. Prior to the launch of Apple’s generative AI features, its sole stated purpose was to provide data for Siri and Spotlight, the search tool built into macOS that scours the user’s hard drive and can also pull data from the wider internet.

If Apple is already training its models on data scraped by AppleBot, using the tool beyond its stated purpose, it would be an act of immense deception. And, if this came to pass, those unwilling to have their online content repurposed as training data for a trillion-dollar tech company would have limited options. Getting an AI to unlearn something isn’t exactly straightforward, and Apple has yet to explain how that might work.

I’ve reached out to Apple for comment here, but it’s unclear how exactly it intends to fulfill the promise of letting publishers opt out of being included in training data, especially if the AppleBot crawler has already fed their content into the model. If I hear back, I’ll update this post. 

Though Apple has done a better job than most, it’s disgraceful that yet another big tech company has treated the open internet as its personal property. Despite publishing a remarkable amount of information about its privacy standards and how its models are trained, Apple likely hopes that its privacy-focused media blitz will hide the fact that it’s ripping off everybody’s work just like Google, OpenAI and Anthropic, except it’s doing so in a way that’s a little easier for the media to swallow, because the main selling point of its AI isn’t vomiting out oodles of anodyne business language, unless you’re using, say, the smart reply feature in Mail, which drafts a response for you if you’re for whatever reason incapable of writing one.

This is particularly worrying when you consider Apple’s image generation features, which will function similarly to ChatGPT (yes, I’m getting to that) and Stable Diffusion. While I’d love to believe that Apple has only used licensed content to train its models, that clearly isn’t the case, and if its version of “publicly available” includes things from Google, or DeviantArt, or social networks, Apple is willingly participating in the AI boom’s continual pillaging of the internet.

As previously mentioned, I’ve requested a comment from Apple about this, and hope that I’ll get an answer, though I somewhat doubt I will. Apple’s approach to PR is almost as contemptuous as Tesla’s, with the company only providing comment (and interviews, review samples, etc.) to those journalists it likes — and only on certain subjects. It’s a style that mirrors the haughty — snooty, even — image it crafted in the mid-2000s with those fucking “I’m a Mac” ads, with reporters divided into “worthy” and “unworthy” camps, the latter virtually ignored.

Putting these very real concerns aside, Apple’s AI announcement feels equal parts useful and strange. These are not world-changing integrations, but useful ones — call transcriptions, a better Siri that can take actions across multiple apps and know what you’re talking about based on what you’re doing on the screen (theoretically), a better Photos app with better search (even though, to be clear, this is already a feature that works in iOS 17), the ability to edit photos with AI, and so on.  

These are all things that a user could foreseeably want to use in their daily lives without being told they’re “the future,” in large part because they’re not. In fact, I’d argue that Apple’s biggest generative AI push is in trying to sell us back the idea that Siri can actually do stuff after years of most users accepting it as a voice-controlled roulette wheel that occasionally understands you. And if you’re Scottish, or have a broad regional English accent, it’s very occasionally.


Nevertheless, I can’t get over how reserved Apple is about AI. 

While Google has desperately tried (and failed) to convince Wall Street that it’s building and selling the future, Apple is almost desperate to explain how boring and normal your iPhone and Mac could be, and how “Apple Intelligence” is simply another feature that will make you want to keep using Apple products, rather than the reason Apple should be worth $40 trillion and keep growing forever. As WIRED’s Will Knight put it, to Apple, AI is a feature rather than a product.

And now for the funny part.

Quietly stapled onto the end of the announcement of Apple Intelligence was the integration of OpenAI’s ChatGPT, in the vaguest, least consequential way I’ve ever seen a product launched. After spending an hour and a half talking about how great its own AI features were, Craig Federighi mentioned that “there are other artificial intelligence tools available that can be useful for tasks that draw on broad world knowledge, or offer specialized domain expertise,” saying that Apple wanted users to be able to use “these external models without having to jump between different tools,” integrating them directly into Apple’s OS…starting with “the pioneer and market leader OpenAI.” Apple’s integration with OpenAI, the supposed big dog of the tech industry, in the biggest announcement for the company in years, was explicitly advertised as the first of multiple integrations with multiple models.

And even then, ChatGPT’s integration with Siri is entirely opt-in, with certain requests occasionally prompting Siri to ask if you’d like to run them through ChatGPT, like “I have these ingredients, what meal can I make?” and other questions that tens, maybe even hundreds of people will find useful, like having ChatGPT write a bedtime story for your kid if you for some reason lack any books.

ChatGPT can also generate images and summarize documents — features already available in Apple’s own AI. This underwhelming addition to an already-placid announcement ends with Apple adding that it intends to add support for other AI models in the future, before moving on to how developers can integrate Apple’s own AI.

The deal that would supposedly cement Sam Altman’s and OpenAI’s hold on Silicon Valley ended up being a two-minute-long sidebar at the end of a near-two-hour-long announcement, one that requires users to opt in on literally every single interaction, with no specifics about how often a user would actually see it. Despite the media’s romanticization of Sam Altman’s incredible technology, Apple’s approach is one that begins and ends with it asking “do you really want to use this?”, conveying a remarkable lack of excitement or trust.

While it remains to be seen exactly how often you’ll be prompted to use it, ChatGPT’s addition to Apple products feels far more like something cooked up to please Wall Street rather than address a specific need — and it worked, briefly making Apple the most valuable US company, beating out Microsoft, all without having to invest billions of dollars into another company.

Sidenote: There’s undoubtedly an element of CYA — or cover-your-ass — here, too. If Apple could, it would surely monopolize AI on the iPhone, much like it has previously with app distribution and browser rendering engines (the bit of the web browser that turns HTML, CSS, and JavaScript into a web page).
From the beginning, Apple has sought absolute control of the iOS (and its derivatives, iPadOS, tvOS, and watchOS) ecosystem, using user safety as a justification. In an email exchange with former Gawker writer Ryan Tate, the late Steve Jobs described the company’s iron-clad control over the App Store as a necessary measure to ensure user “freedom” from what he considered harms. 
“Yep, freedom from programs that steal your private data. Freedom from programs that trash your battery. Freedom from porn. Yep, freedom,” Jobs wrote, when asked whether this stance was in conflict with the revolutionary themes found in the iPad’s marketing at the time.
The problem is, the line between protecting users and anticompetitive behavior is thin indeed, and regulators — particularly those at the European Commission — aren’t convinced that Apple’s iron-clad control of the iPhone is entirely altruistic. That’s why you can access alternative app stores in the European Union, and even install apps directly from the web. It’s also why you’re (theoretically, at least) no longer tied to Safari’s WebKit rendering engine if you install an alternative browser like Brave or Firefox.
If Apple iced out OpenAI or Anthropic, they could conceivably complain to the European Commission, which might rule in their favor and order Apple to start supporting their models, in addition to levying a huge multi-billion-euro fine. And so the company (wisely) saved itself the hassle and the cost.

One might imagine this integration was a huge coup for Altman and OpenAI, except for one wrinkle — as reported by Bloomberg’s Mark Gurman, Apple didn’t give OpenAI any money to integrate ChatGPT, paying instead in “distribution” of a tool that loses OpenAI money on literally every transaction.

Gurman also reports that users will be able to upgrade their ChatGPT accounts to ChatGPT Plus through the integration — although these upgrades will likely operate on the same terms as every other digital goods sale on iOS, with OpenAI forced to share 30% of the take with Apple unless the user upgrades directly on the ChatGPT website.

To be clear, in most cases companies integrating ChatGPT pay on a per-thousand-tokens basis, meaning that OpenAI, while unprofitable, still generates revenue of some sort on each query. Yet Apple’s deal doesn’t even appear to pay OpenAI for using ChatGPT, meaning the deal will start losing OpenAI money the second iOS 18 launches.
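To make the asymmetry concrete, here’s a back-of-the-envelope sketch. Every number in it is an illustrative assumption — neither Apple nor OpenAI has published pricing or serving costs for this deal:

```python
# Back-of-the-envelope economics of a ChatGPT query, per the logic above.
# Every figure here is an illustrative assumption, not a published number.

API_PRICE_PER_1K_TOKENS = 0.002     # hypothetical price a paying API customer pays
SERVING_COST_PER_1K_TOKENS = 0.001  # hypothetical compute/energy cost to OpenAI

def margin_per_query(tokens: int, customer_pays: bool) -> float:
    """Revenue minus serving cost for one query of a given token count."""
    revenue = (tokens / 1000) * API_PRICE_PER_1K_TOKENS if customer_pays else 0.0
    cost = (tokens / 1000) * SERVING_COST_PER_1K_TOKENS
    return revenue - cost

# A normal API integration: positive margin on each query (under these assumptions).
print(margin_per_query(500, customer_pays=True))   #  0.0005

# The reported Apple deal: no per-query payment, so every query is a pure loss.
print(margin_per_query(500, customer_pays=False))  # -0.0005
```

However generous you make the assumed numbers, the second case never turns positive: with no payment from Apple, volume only scales the loss.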

This deal is equal parts perilous and hilarious, and something that could genuinely end up hurting OpenAI, all while insulating Apple. In the event that this integration actually sees adoption by Apple’s hundreds of millions of users, it will cost OpenAI an incredible amount of money thanks to the compute- and energy-intensive nature of ChatGPT, especially as I imagine most users will only ask it the occasional question and find no reason to opt for a $20-a-month subscription.

On some level, it shows the disdain that Apple has for OpenAI. Most deals that hinge on the vast reach of the iOS ecosystem involve some level of monetary exchange. In 2022, Google paid roughly $20bn to be the default search engine on iOS. Given the cost of running ChatGPT, and the fact that an LLM query is inherently more sophisticated (and computationally expensive) than a simple search query, you’d expect the opposite to be true, right? That Apple would pay OpenAI something to defray the costs of all that power, and all those GPUs and server farms. 

But no. From what we’ve learned so far, it seems that no money has changed hands — and if OpenAI makes a sale directly on the iPhone, it’ll undoubtedly be subject to the same “Apple Tax” as every other company.  

I’d also speculate that Apple likely requires some sort of Service Level Agreement from its partners — meaning that OpenAI has to dedicate resources to maintaining uptime for Apple devices, which will in turn be incredibly expensive, burning money with every query on a product that Apple will only surface when its own services can’t do the job. And if iOS 18 doesn’t bring the expected users to ChatGPT, those resources will sit there, unused and redundant, when they could be servicing other demand.

It might be hyperbolic to describe this as tech’s equivalent of the Treaty of Versailles — but only just. While the terms are just as humiliating, Apple at least saw fit to give Altman a two-minute-long fig leaf of dignity.

Seriously, what’s the business model here? To offer ChatGPT and its entire feature set to people, for free, inside of Apple devices, in the hopes that OpenAI can upsell them? Even if that somehow happens, will it happen enough to make the deal profitable? Will OpenAI have to raise the price of subscriptions made through the app — like Google does — to offset the aforementioned 30% “Apple Tax,” thereby making it a less appealing option for consumers?
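The arithmetic on that last question is straightforward. ChatGPT Plus’s $20-a-month price is public; the rest is just the standard 30% in-app purchase split:

```python
# How the standard 30% in-app purchase cut changes the math on a $20 subscription.

WEB_PRICE = 20.00   # ChatGPT Plus's public monthly price when bought on the web
APPLE_CUT = 0.30    # Apple's standard share of in-app digital sales

# Selling at $20 inside the app, OpenAI keeps only:
net_in_app = WEB_PRICE * (1 - APPLE_CUT)    # $14.00

# To keep the full $20 on an in-app sale, OpenAI would have to charge:
grossed_up = WEB_PRICE / (1 - APPLE_CUT)    # ~$28.57

print(f"OpenAI nets ${net_in_app:.2f} on a $20 in-app sale")
print(f"OpenAI must charge ${grossed_up:.2f} in-app to net $20")
```

Either OpenAI eats a $6 haircut on every in-app subscription, or it charges nearly $29 and makes the in-app option visibly worse than just subscribing on the web.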


While Sam Altman is regularly described as a tier-one operator and a superior intellect, this is one of the single worst deals I’ve seen in my life, one that shows that while OpenAI might have dominion over the startup world, it lacks any real leverage.

OpenAI has effectively agreed to give ChatGPT away for free to hundreds of millions of people and got nothing in return — and Apple, according to Anissa Gardizy of The Information, is already working on cutting deals with both Google and Anthropic to integrate their Large Language Models into iOS, likely using OpenAI’s terrible deal as leverage for better terms.

Altman may have been at the Worldwide Developers Conference, but he didn’t get to speak, nor was he mentioned in any of Tim Cook’s or Craig Federighi’s remarks. Apple treated ChatGPT with less excitement than a new suite of emojis, as an afterthought to a presentation that was deliberate in its lack of froth or hype. This year’s WWDC was framed as a series of fun, potentially even exciting features, all delivered with a continual promise of privacy and reliability that tacitly accepted the AI hype train was moving too fast. And in this context, there’s no room for Altman, an unapologetic hype-man if there ever was one.

This was a humiliating moment for the generative AI movement, and especially so for OpenAI, a company that desperately needs good news at a time when the world has become deeply suspicious of its product and its CEO. Apple is — for better or worse — the gatekeeper to the technology used by hundreds of millions of people, and Apple has decided that ChatGPT is a feature, not a product, and one that isn’t trustworthy or useful enough to run without a user’s permission, or even pay for.

This was, on some level, OpenAI’s iPhone moment. This was the time when the world would see exactly how important the most important tech company in the world thought the most important startup in the world was, and the answer was: not particularly important at all. Apple can (and likely will) dump ChatGPT at a moment’s notice, replacing it with another generative AI or ripping the feature out entirely, something made easy by the fact that it was introduced at the tail end of a presentation of better, more useful features.

While this might not kill OpenAI immediately, it’s a brutal moment that proves how little the world actually cares about ChatGPT. The Information reported last week that OpenAI’s annualized revenue may reach the billions in 2024, yet it remains deeply unprofitable, and even if Apple had paid money to integrate it, ChatGPT would still lose money on every query.

It’s also a genuinely dangerous deal for OpenAI, and shows that while Sam Altman might be capable of swindling Silicon Valley startups, he doesn’t have what it takes to bend Apple to his will. 
