Last year, DxOMark named the camera inside the Google Pixel the best smartphone camera yet, and new insight from Alphabet shows just where the tech behind that camera came from and where it's headed next. In an article for the Graduate Series exploring the tech that comes out of the X research division, Google says the camera tech inside the Pixel was originally intended for Google Glass.
The tech, called Gcam, got its start in 2011, when researchers began looking for a high-resolution camera that could still fit inside a pair of eyeglass frames for Google Glass. Since bolting a giant camera onto the side of the glasses was out of the question, the team instead turned to computational photography.
Instead of taking a single image on a large, high-resolution sensor, Gcam takes several images on a small, low-resolution sensor and merges them in software to create a higher-quality image, or at least one that can compete with a typical smartphone camera. The team, led by Stanford computer science professor Marc Levoy, called the technique image fusion, and the feature launched in Google Glass in 2013.
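The article doesn't spell out how the merge works, but the core idea behind fusing a burst of frames can be sketched in a few lines. The snippet below is only an illustration of averaging an aligned burst to reduce noise, using a made-up synthetic scene; it is not Gcam's actual alignment-and-merge pipeline, and fuse_burst is a hypothetical helper name.

```python
import numpy as np

def fuse_burst(frames):
    """Average a burst of already-aligned frames to cut noise.

    frames: list of HxW (or HxWx3) float arrays in [0, 1].
    Averaging N frames reduces random sensor noise by roughly sqrt(N),
    which is the basic trade behind merging many small-sensor shots
    into one cleaner image.
    """
    stack = np.stack(frames, axis=0)
    return stack.mean(axis=0)

# Illustrative example: eight noisy captures of the same synthetic scene
rng = np.random.default_rng(0)
scene = np.linspace(0.2, 0.8, 64 * 64).reshape(64, 64)
burst = [np.clip(scene + rng.normal(0, 0.05, scene.shape), 0, 1) for _ in range(8)]
fused = fuse_burst(burst)
print("noise, single frame vs fused:", np.std(burst[0] - scene), np.std(fused - scene))
```

Running it shows the fused result sitting much closer to the true scene than any single noisy frame, which is the payoff that lets a tiny sensor punch above its weight.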
Shrinking the camera didn't just create problems with resolution, however. A smaller lens captures less light, so a lens small enough to hide in Google Glass was also a pretty poor low-light performer. Merging the photos helped correct that, but the team next looked at getting more from the tiny camera using high dynamic range, a technique that merges multiple images taken at different exposure levels to capture a wider range of light and detail. HDR+ then launched as an Android camera app for the Nexus 5 (and later the Nexus 6) in 2014.
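As a rough illustration of what merging differently exposed images means, here is a minimal exposure-fusion sketch: each pixel is weighted by how close it sits to mid-gray, so shadow detail comes from the brighter exposure and highlight detail from the darker one. This is classic exposure fusion in toy form, not HDR+'s actual algorithm, and exposure_fusion is an illustrative name.

```python
import numpy as np

def exposure_fusion(exposures, sigma=0.2):
    """Blend differently exposed, aligned frames into one image.

    exposures: list of HxW float arrays in [0, 1].
    Each pixel is weighted by how "well exposed" it is (close to 0.5),
    so no single frame has to capture the full range of light on its own.
    """
    stack = np.stack(exposures, axis=0)                    # N x H x W
    weights = np.exp(-((stack - 0.5) ** 2) / (2 * sigma ** 2))
    weights /= weights.sum(axis=0, keepdims=True) + 1e-8   # normalize per pixel
    return (weights * stack).sum(axis=0)

# Illustrative example: simulate short, medium, and long exposures of one scene
scene = np.linspace(0.0, 1.0, 256).reshape(16, 16)
shots = [np.clip(scene * gain, 0, 1) for gain in (0.5, 1.0, 2.0)]
print(exposure_fusion(shots).shape)  # (16, 16)
```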
The computational photography behind the Google Glass camera and HDR+ is now inside the Google Pixel, as well as the Google Photos app, YouTube, and Jump, Google's virtual reality capture platform. The same software makes the Lens Blur feature inside Google Photos possible and pieces together 360-degree video for Jump.
Levoy says it took five years to get the Gcam software right before it launched in the Pixel smartphone. And because the system relies so heavily on software, when users started complaining about lens flare, the team was able to push out an update that automatically detects and removes it.
So what’s next for the tiny camera that started in Google Glass and now covers multiple products? The software-focused camera system could be getting a boost based on artificial intelligence. “One direction that we’re pushing is machine learning,” Levoy said. “There’s lots of possibilities for creative things that actually change the look and feel of what you’re looking at. That could mean simple things like creating a training set to come up with a better white balance. Or what’s the right thing we could do with the background — should we blur it out, should we darken it, lighten it, stylize it? We’re at the best place in the world in terms of machine learning, so it’s a real opportunity to merge the creative world with the world of computational photography.”
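To make Levoy's white-balance example a little more concrete, a learned white balance could be as simple as regressing per-channel gains from image statistics over a labeled training set. The sketch below uses made-up data and a hypothetical fit_white_balance helper purely to illustrate the idea; it says nothing about how Google's team would actually build it.

```python
import numpy as np

def fit_white_balance(features, target_gains):
    """Fit a linear map from per-image statistics to (red, blue) gains.

    features: N x D array of simple statistics (e.g. mean chromaticities).
    target_gains: N x 2 array of hand-tuned white-balance gains (the "training set").
    Returns a D x 2 weight matrix found by least squares.
    """
    weights, *_ = np.linalg.lstsq(features, target_gains, rcond=None)
    return weights

def predict_gains(weights, feature_row):
    return feature_row @ weights

# Illustrative example with made-up data: 100 training images, 3 statistics each
rng = np.random.default_rng(1)
X = rng.uniform(0.2, 0.8, size=(100, 3))
y = np.stack([1.0 / X[:, 0], 1.0 / X[:, 2]], axis=1)  # toy labels, not real data
W = fit_white_balance(X, y)
print(predict_gains(W, X[0]), y[0])
```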
Whatever’s in store for the next version, the success of the Pixel likely sets the bar pretty high.