Last month at Google I/O, the crowd in Mountain View was treated to a demo of the next-generation Google Assistant. We watched, entranced, as a series of questions and verbal commands was instantly addressed in a natural, continuous, conversational style. Without any pauses, or the need to say “Hey Google” every time, we saw the potential to multitask across multiple apps, with the action unfolding far faster than you could possibly tap and swipe your way to the same result.
It was Apple’s turn at WWDC 2019, and it unveiled … a new, more natural voice for Siri. The show had only shifted a few miles down the road to San Jose, but it may as well have been on another planet. The contrast between Apple’s and Google’s approaches to artificial intelligence (A.I.) is stark. For Apple, it’s behind the curtain; for Google, it’s the future, and it’s poised to change the way we use phones.
Apple and A.I.
While Google CEO Sundar Pichai has always been outspoken about the impact he believes artificial intelligence will have on the world – he once suggested it’s more profound than electricity or fire – his Apple counterpart, Tim Cook, has been more reserved on the topic.
This isn’t because Apple is clueless about A.I., and it’s not as if the technology went unmentioned at WWDC 2019. Beyond Siri’s more natural voice, we heard that personalized music is coming to HomePod, that Siri will read incoming messages to you through your AirPods, and, perhaps most importantly, that Apple’s Core ML 3 machine learning framework is available to help iOS developers harness the on-device processing power of iPhones and add machine learning smarts to their apps.
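To make that concrete, here’s a minimal sketch of what on-device inference with Core ML looks like in Swift, pairing it with Apple’s Vision framework for image handling. The `FlowerClassifier` model is a hypothetical stand-in; Xcode generates a Swift wrapper class like it for any .mlmodel file you drop into a project.

```swift
import CoreML
import UIKit
import Vision

// Classify an image entirely on the device; no data leaves the phone.
// FlowerClassifier is a hypothetical Xcode-generated model class.
func classify(_ image: UIImage) {
    guard let cgImage = image.cgImage,
          let visionModel = try? VNCoreMLModel(for: FlowerClassifier().model) else {
        return
    }

    // Vision wraps the Core ML model and handles scaling and cropping
    // the input image to whatever size the model expects.
    let request = VNCoreMLRequest(model: visionModel) { request, _ in
        guard let results = request.results as? [VNClassificationObservation],
              let top = results.first else { return }
        print("\(top.identifier): \(top.confidence)")
    }

    let handler = VNImageRequestHandler(cgImage: cgImage, options: [:])
    try? handler.perform([request])
}
```

Core ML decides at run time whether to execute the model on the CPU, GPU, or Neural Engine, and because everything happens locally, the approach dovetails neatly with Apple’s privacy stance.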
But rewind a couple of years and Apple was talking about A.I. and machine learning far more openly. As Cook pointed out in a 2017 MIT Technology Review interview, A.I. underpins many of the features Apple pushes out: there’s image recognition in Apple’s Photos app, Apple Music makes recommendations based on what you’ve been listening to, and iPhone battery life is better than the capacity would suggest because the phone employs machine learning to study our usage and adjust.
In an interview with Bloomberg a few days later, Cook admitted that Apple was working on autonomous systems in the automotive space and described Apple’s secretive Project Titan as “the mother of all A.I. projects.” We haven’t heard much about it since.
Reading the signs
There has been plenty of speculation about how well, or how poorly, Project Titan has been going. When 200 employees were laid off, many took it as a sign that things had stalled, but we’re talking about a team that reportedly numbers somewhere around 5,000. A reshuffle to accommodate incoming leadership from Tesla, and a shift toward a broader, company-wide A.I. effort, may be closer to the truth.
Apple also hired John Giannandrea in April 2018; the former Google vice president of Engineering led its Machine Intelligence, Research, and Search teams, and he is now Apple’s senior vice president of Machine Learning and A.I. Strategy, with Siri among the things he’s tasked with improving. This year, Apple hired Ian Goodfellow away from Google to become its director of machine learning.
There’s no doubt that Apple is devoting some of its embarrassingly enormous war chest to closing the gap in the A.I. space, and if the driverless-car project has changed direction, that may have freed up even more resources. Then again, it has a lot of ground to make up.
Closing the gap
Google has always been focused on software, driven by machine learning and enormous cloud-computing capability. Apple has traditionally been about hardware. The death of iTunes highlights the fragmented mess that Apple’s software has become. Google is embedding its Assistant into more and more apps, while Apple’s offerings lack cohesion. If you want practical proof of Google’s strategy paying off, look at the ascendancy of its photography: Google’s A.I. enables inferior camera hardware to outperform the competition.
Glance over at what Google is trying to do with Stadia, its forthcoming game streaming service, and you can see more evidence of an assault on traditional hardware. Maybe there’s a day in the not-too-distant future when it really doesn’t matter what device you have in your pocket.
The fact that Google found a way to shrink the 100GB of models powering its speech recognition down to 500MB, so they fit on our phones and can handle complex voice interactions with no delay, is potentially game-changing. It could be an important early step toward breaking our physical attachment to hardware altogether. Being glued to your phone is increasingly seen in a negative light, and A.I. could free us from the tyranny of the touchscreen.
From screening spam calls to ordering takeout or booking a haircut, Google Assistant is capable of doing more and more; Apple’s Siri looks pretty small in the rearview mirror. That could change. Apple has the resources, and it can hire talent, but there are roadblocks. The traditional focus on hardware needs to shift; software can’t be an afterthought. And grabbing and crunching all the data needed to craft effective algorithms and models may be at odds with Apple’s commitment to privacy. It’s a fine line to walk.
Back in that 2017 interview, Cook suggested the press doesn’t always give Apple credit for its A.I. because the company only likes to talk about features that are ready to ship, rather than “sell futures.” It’s a fair point. But we’re entering a time when artificial intelligence is finally starting to cut through the hype, and Google is leading the charge. Apple needs to ship something impressive in this space soon, because the future is here.