When I tested the original Sony A7 in 2014, I proclaimed that the future of photography would be mirrorless. I was wrong. The future of photography is computational, and spending even a single day with the Google Pixel 2 is all you need to acquaint yourself with this fact.
I spent several days with it when I took it on vacation as my only camera. The camera in the Google Pixel 2 (and Pixel 2 XL) is so good that it feels like magic. It’s not the camera hardware itself (the lens and imaging sensor) that deserves the credit for this, but rather Google’s software and processing. Thanks to machine learning algorithms and the advanced HDR+ mode that’s been around in Google phones for a few years now, the Pixel 2 produces the most beautiful pictures I’ve ever captured on a smartphone. What’s more, in some situations it even yields results that are better than many straight-from-the-camera images shot with a DSLR or mirrorless camera.
To be sure, larger cameras with interchangeable lenses aren’t going anywhere — Google’s computational approach still has its limitations, and there’s no substitute for being able to swap lenses. But at $650, the Pixel 2 is priced to compete with advanced compact cameras like the Sony RX100-series, and, for most people, it’s clearly the better buy — you get an entire phone with it, after all.
We have already touted the Pixel 2’s camera, but for this article, I’m looking at it from a working photographer’s point of view and asking whether it can truly function as a “real camera” replacement.
Why dynamic range matters
Dynamic range may be the least understood aspect of image quality among non-photographers. Most people generally understand megapixels and even noise/grain, but dynamic range is one of the key aspects of image quality that separates a large-sensor DSLR or mirrorless camera from something like a smartphone.
Essentially, a camera that captures more dynamic range is able to “see” a broader range of tones, preserving more detail in the shadows and highlights of an image that a lesser camera would have clipped. If you have ever taken a picture on a bright sunny day, you have likely run into the dynamic range limitation of your camera — particularly if that camera was a phone or other small point-and-shoot. This could show up as your subject being too dark against a backlit sky, or as the sky rendering as pure white instead of blue. The camera tries its best to compensate for the wide range of contrast in the scene, but it has to make a sacrifice somewhere.
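To make this concrete, here’s a minimal sketch in Python (with made-up luminance values, not anything from Google’s pipeline) of what clipping looks like when a scene’s brightness range exceeds what a small sensor can record:

```python
import numpy as np

# Hypothetical linear luminance values for three parts of a scene:
# deep shadow under a tree, a sunlit face, and a bright blue sky.
scene = np.array([0.02, 0.60, 4.00])

# A small sensor saturates at 1.0 in these units, so the sky records
# as pure white and its detail is gone.
exposed_for_subject = np.clip(scene, 0.0, 1.0)    # [0.02, 0.6, 1.0]

# Exposing for the sky instead (cutting exposure to a fifth) saves the
# highlights but pushes the shadows toward black: the sacrifice
# described above.
exposed_for_sky = np.clip(scene / 5.0, 0.0, 1.0)  # [0.004, 0.12, 0.8]
```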
The sensor in the Pixel 2 has this same problem, but Google has worked around it with software. With HDR+ turned on, the camera shoots a quick burst of photos, each with a short exposure time to preserve highlights and prevent motion blur. It then merges the images together and automatically boosts the shadows to recover detail.
While boosting shadows is possible in a single-exposure photograph, doing so would boost the noise along with them. Because the Pixel 2 has several photos to work with, the shadow noise is averaged out and you end up with a much cleaner result. Google has an in-depth explainer of HDR+ if you’re interested in learning more about how it works.
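The noise benefit of averaging is easy to simulate. Here’s a short, hedged sketch (plain NumPy with illustrative numbers, not Google’s actual HDR+ code) of a dark patch captured nine times: averaging the frames and then brightening the shadows leaves roughly a third of the noise you’d get from brightening a single frame, in line with the square root of the frame count.

```python
import numpy as np

rng = np.random.default_rng(0)

true_signal = 0.1    # a dark, underexposed gray patch (linear scale)
noise_level = 0.05   # per-frame noise, an illustrative value
num_frames = 9       # size of the simulated burst

# Simulate a burst of short, already-aligned exposures of the same patch.
frames = true_signal + noise_level * rng.standard_normal((num_frames, 256, 256))

single = frames[0]            # what one exposure gives you
merged = frames.mean(axis=0)  # naive average of the burst

# Brightening the shadows amplifies whatever noise is left.
boost = 8.0
print("single frame, boosted:", (boost * single).std())   # ~0.40
print("merged burst, boosted:", (boost * merged).std())   # ~0.13, about 3x cleaner
```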
On a basic level, this is similar to how HDR modes work on other phones; it just works incredibly well on the Pixel 2. The system is smart enough to preserve detail across a very wide tonal range without producing a flat image or tipping into the psychedelic, over-HDR look. It is, for all intents and purposes, comparable to a DSLR or mirrorless camera — except that you don’t need to spend any time processing the images in post, making it more immediate and more approachable for casual photographers.
Stereo depth-mapping from a single lens
While many phones have “portrait modes” that mimic a shallow depth of field, most accomplish this by using two separate lenses and sensors placed side-by-side. This allows the phone to compute a depth map based on the subtle differences between the two images, in a similar fashion to how our eyes perceive depth in the world around us. The Pixel 2 has just a single camera module, and yet it can produce a comparable stereoscopic depth map.
- 1. Original image before the synthetic depth map is applied. (Photos: Sam Kweskin/Google)
- 2. Depth map generated by Google’s stereo algorithm; lighter areas are closer to the camera.
- 3. Visualization of the blur applied to each pixel; the brighter the red, the more blur.
- 4. Final synthetic shallow depth-of-field image, generated by combining HDR+, the segmentation mask, and the depth map.
Magic? Almost. The Pixel 2 uses on-chip phase-detection autofocus, which means each pixel is actually divided in half. Google has a much more detailed explanation of how its portrait mode works, but basically, those split pixels offer just enough stereo separation to create a depth map.
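As a rough illustration of the principle only (Google’s production algorithm is far more sophisticated), here’s a toy Python sketch that estimates per-pixel disparity between a hypothetical left view and right view using simple block matching. The function and its inputs are made up for illustration; it just demonstrates the same basic stereo cue the split pixels provide.

```python
import numpy as np

def block_disparity(left, right, patch=7, max_shift=8):
    """Toy stereo matcher: for each pixel, find the horizontal shift that
    best aligns a small patch of the left view with the right view.
    Larger disparity means the point is closer to the camera."""
    h, w = left.shape
    half = patch // 2
    disparity = np.zeros((h, w))
    for y in range(half, h - half):
        for x in range(half + max_shift, w - half):
            ref = left[y - half:y + half + 1, x - half:x + half + 1]
            errors = [
                np.abs(ref - right[y - half:y + half + 1,
                                   x - d - half:x - d + half + 1]).sum()
                for d in range(max_shift + 1)
            ]
            disparity[y, x] = np.argmin(errors)
    return disparity
```

Feed it two grayscale views of the same scene that differ by a small horizontal shift and the returned map brightens where that shift is larger, i.e. for closer objects, which is what panel 2 in the figure above visualizes.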
This allows for shallow-depth-of-field photographs of any subject within the distance range the depth map can resolve, but for portraits, the phone goes a step further. It uses a neural network, trained on millions of sample photos, to recognize faces and produce a segmentation mask of the subject, improving the accuracy of where blur is applied in the photograph.
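To show how those pieces could fit together, here’s a hedged Python sketch (a toy approximation, not Google’s renderer) that blurs each pixel according to how far its depth value sits from the in-focus plane, while a hypothetical person_mask from the segmentation step keeps the subject sharp. SciPy’s gaussian_filter stands in for the real rendering.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def synthetic_bokeh(image, depth, person_mask, focus_depth,
                    max_sigma=6.0, levels=4):
    """Approximate a shallow depth of field from a single sharp image.

    image        HxW grayscale array (a color image would blur each channel)
    depth        HxW depth map; larger values are farther from the camera
    person_mask  HxW boolean mask of the recognized subject, kept sharp
    focus_depth  depth value that should remain in focus
    """
    # How much blur each pixel "wants", based on distance from the focus plane.
    defocus = np.abs(depth - focus_depth)
    defocus = defocus / (defocus.max() + 1e-8)  # normalize to 0..1
    defocus[person_mask] = 0.0                  # the subject stays sharp

    # Blend between a few pre-blurred versions of the image rather than
    # blurring every pixel individually (a cheap, common approximation).
    sigmas = np.linspace(0.0, max_sigma, levels)
    layers = [gaussian_filter(image.astype(float), s) if s > 0
              else image.astype(float) for s in sigmas]

    result = np.zeros(image.shape, dtype=float)
    idx = np.minimum((defocus * (levels - 1)).round().astype(int), levels - 1)
    for i, layer in enumerate(layers):
        result[idx == i] = layer[idx == i]
    return result
```

Because everything here is computed rather than optical, re-running the function with a different focus_depth or max_sigma changes the rendering after the fact, which is exactly the kind of flexibility described below.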
In practice, the Pixel 2’s portrait mode has the same inconsistencies as other smartphones, but mostly I’m impressed that it works at all. As AI and processing power improve, this computational approach to depth-of-field control will get even better and, eventually, may offer advantages over a DSLR or mirrorless camera. Namely, anything done computationally can theoretically be controlled or removed after the fact. Depth of field and even the subjective quality of blur could be altered in post, opening up new avenues of creative control.
A Pixel worth a thousand words
It is debatable whether the Pixel 2 (and its larger sibling, the Pixel 2 XL) is the best smartphone out there, but it’s certainly one of the best smartphone cameras (the new Samsung Galaxy S9 Plus may have nudged it off the top, but only just). It’s also commendable that Google put its top camera tech into the smaller, less expensive Pixel 2 rather than saving it for the XL model alone, making it accessible to more users.
I’ll still hang on to my mirrorless kit, but the Pixel 2 proves that software, not hardware, is the future of photography.