During the debut of Apple Intelligence at WWDC 2024 yesterday, Senior Vice President of Software Engineering Craig Federighi repeatedly touted the new feature’s security and careful handling of sensitive user data. To protect user privacy, Apple Intelligence performs many of its generative operations on-device. And for those that exceed its onboard capabilities, the system hands the work off to the company’s newly developed Private Cloud Compute (PCC).
However, as Dr. Matthew Green, associate professor of computer science at Johns Hopkins University in Baltimore, asked in a thread Monday: Apple’s AI cloud may be secure, but is it trustworthy?
Apple Intelligence promises to empower your iPhone, iPad, and Mac with cutting-edge generative models that can create images, edit your writing, and perform actions on your behalf across any number of apps. Apple devices have already performed machine learning tasks on-device for years; take the camera roll’s search function and optical character recognition (OCR), for example. However, as Green points out, even the latest generation of Apple processors isn’t yet powerful enough to handle some of the more complex and resource-intensive AI operations coming online, which necessitates the use of cloud compute servers.
The problem is that these complex models also need unencrypted access to the user’s data in order to perform inference, and transmitting that data to a public cloud leaves it at risk of being hacked, leaked, or stolen outright. Apple’s solution was to build its own standalone, hardened data centers specifically for processing Apple Intelligence data: the PCC. This “groundbreaking cloud intelligence system,” according to a recent Apple Security Blog post, “extends the industry-leading security and privacy of Apple devices into the cloud, making sure that personal user data sent to PCC isn’t accessible to anyone other than the user — not even to Apple.”
The problem is that while modern phone “neural” hardware is improving, it’s not improving fast enough to take advantage of all the crazy features Silicon Valley wants from modern AI, including generative AI and its ilk. This fundamentally requires servers. 3/
— Matthew Green (@matthew_d_green) June 10, 2024
But ensuring that privacy is much harder in practice. “Building trustworthy computers is literally the hardest problem in computer security,” Green wrote. “Honestly, it’s almost the only problem in computer security.” He commended the company for applying many of the same security features built into its mobile and desktop devices to its new servers, including Secure Boot, “stateless” software, and a Secure Enclave Processor (SEP), as well as “throwing all kinds of processes at the server hardware to make sure the hardware isn’t tampered with.”
Apple has gone to great lengths to ensure that the software running on its servers is legitimate, automatically wiping all user data from a PCC node as soon as a request has been completed and enabling each PCC node’s operating system to “attest” to the software image it’s running.
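Conceptually, that attestation step amounts to a client-side allowlist check: before any data is uploaded, the device verifies that the server can cryptographically prove it is running a software image that has been published for inspection. The Swift sketch below illustrates the idea only; the types, hashes, and function names are hypothetical and are not Apple’s actual PCC interfaces.

```swift
import Foundation
import CryptoKit

// Hypothetical sketch of an attestation check like the one described above.
// `AttestationBundle`, `publishedMeasurements`, and `shouldSendRequest` are
// illustrative assumptions, not Apple's real PCC API.

struct AttestationBundle {
    let softwareImageHash: String          // measurement the node claims to be running
    let signature: Data                    // node's signature over that measurement
    let publicKey: P256.Signing.PublicKey  // key tied to the node's secure hardware
}

// Measurements published for researchers to inspect; the client only
// trusts nodes whose claimed image appears in this list. (Placeholder hashes.)
let publishedMeasurements: Set<String> = [
    "3f5a-placeholder",
    "9bc2-placeholder",
]

func shouldSendRequest(to node: AttestationBundle) -> Bool {
    // 1. Verify the node's signature over its claimed software measurement.
    guard let signature = try? P256.Signing.ECDSASignature(derRepresentation: node.signature),
          node.publicKey.isValidSignature(signature, for: Data(node.softwareImageHash.utf8)) else {
        return false
    }
    // 2. Only proceed if that measurement matches a publicly released image.
    return publishedMeasurements.contains(node.softwareImageHash)
}
```

The point Green’s thread highlights is where the trust decision happens: on the user’s device, against publicly verifiable measurements, rather than inside the data center itself.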
“If you gave an excellent team a huge pile of money and told them to build the best ‘private’ cloud in the world, it would probably look like this,” Green wrote. “But now the tough questions. Is it a good idea? Is it as secure as what Apple does today,” and can users opt out? It doesn’t appear that users will even opt in to the new service. “You won’t necessarily even be told it’s happening,” he continued. “It will just happen. Magically. I don’t love that part.”
Green goes on to argue that hackers don’t even pose the biggest threat to user data: it’s the hardware and software companies themselves. As such, “this PCC system represents a real commitment by Apple not to ‘peek’ at your data,” Green concluded. “That’s a big deal.”
Apple Intelligence will reportedly begin rolling out later this summer. And in the coming weeks, Apple plans to invite security researchers for a first look at PCC software and the virtual research environment.