
Can Google’s Pixel 2 ace conventional cameras? We spent a week finding out

When I tested the original Sony A7 in 2014, I proclaimed that the future of photography would be mirrorless. I was wrong. The future of photography is computational, and even a single day with the Google Pixel 2 is enough to prove it.

I spent several days with it when I took it on vacation as my only camera. The camera in the Google Pixel 2 (and Pixel 2 XL) is so good that it feels like magic. It’s not the camera hardware itself (the lens and imaging sensor) that deserves the credit, but rather Google’s software and processing. Thanks to machine learning algorithms and the advanced HDR+ mode that has been in Google phones for a few years now, the Pixel 2 produces the most beautiful pictures I’ve ever captured on a smartphone. What’s more, in some situations it even yields better results than many straight-from-the-camera images shot on a DSLR or mirrorless camera.

Google Pixel 2 camera. Tinh tế Photo/Flickr

To be sure, larger cameras with interchangeable lenses aren’t going anywhere — Google’s computational approach still has limitations, and there’s no substitute for being able to swap lenses. But at $650, the Pixel 2 is priced to compete with advanced compact cameras like Sony’s RX100 series, and, for most people, it’s obviously the better buy — you get an entire phone with it, after all.


We have already touted the Pixel 2’s camera, but for this article, I’m looking at it from a working photographer’s point of view to see whether it can truly function as a “real camera” replacement.

Why dynamic range matters

Dynamic range may be the least understood aspect of image quality among non-photographers. Most people have a general grasp of megapixels and even noise or grain, but dynamic range is one of the key factors that separates a large-sensor DSLR or mirrorless camera from something like a smartphone.

Essentially, a camera that captures more dynamic range is able to “see” a broader range of tones, preserving detail in the shadows and highlights that a lesser camera would have clipped. If you have ever taken a picture on a bright, sunny day, you have likely run into your camera’s dynamic range limitation, particularly if that camera was a phone or other small point-and-shoot. This could show up as a subject that is too dark against a backlit sky, or a sky that renders pure white instead of blue. The camera tries its best to compensate for the wide range of contrast in the scene, but it has to make a sacrifice somewhere.
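To make that trade-off concrete, here is a minimal sketch in Python with made-up luminance numbers (the scene values and the one-unit sensor limit are my own assumptions, purely for illustration): whatever exposure the camera picks, one end of the tonal range gets sacrificed.

```python
import numpy as np

# Hypothetical linear luminance values for one scene:
# deep shadow, midtone, backlit subject, bright sky.
scene = np.array([0.02, 0.5, 4.0, 20.0])

def capture(scene, exposure, full_well=1.0):
    """Simulate one exposure: scale by exposure time, clip at the sensor's limit."""
    return np.clip(scene * exposure, 0.0, full_well)

print(capture(scene, exposure=0.25))  # [0.005  0.125  1.  1. ] -- sky clips to pure white
print(capture(scene, exposure=0.04))  # [0.0008 0.02  0.16 0.8] -- shadows crush toward black
```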


The sensor in the Pixel 2 has this same problem, but Google has worked around it with software. With HDR+ turned on, the camera shoots a quick burst of photos, each with a short exposure time to preserve highlights and prevent motion blur. It then merges the images and automatically boosts the shadows to recover detail.

While boosting shadows is possible in a single-exposure photograph, doing so boosts the noise along with them. Because the Pixel 2 has several photos to work with, the shadow noise averages out and you end up with a much cleaner result. Google has an in-depth explainer of HDR+ if you’re interested in learning more about how it works.
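Google’s explainer covers the real pipeline, but the statistics behind the noise win are easy to demonstrate. This is a toy sketch, not HDR+ itself (which also has to align the frames and merge them robustly), assuming a flat shadow region and simple Gaussian sensor noise:

```python
import numpy as np

rng = np.random.default_rng(0)

# A flat shadow region, exposed short to protect the highlights.
truth = np.full((100, 100), 0.05)

def shoot_burst(n_frames, read_noise=0.02):
    """Simulate a burst of identical short exposures with per-frame noise."""
    return truth + rng.normal(0.0, read_noise, size=(n_frames, 100, 100))

single = shoot_burst(1)[0]
merged = shoot_burst(9).mean(axis=0)  # align-and-merge reduced to a plain average

boost = 8.0  # push the shadows up after merging, as HDR+ does
print((single * boost).std())  # ~0.16
print((merged * boost).std())  # ~0.053 -- averaging 9 frames cuts noise by about 3x (sqrt of 9)
```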

On a basic level, this is similar to how HDR modes work on other phones; it just works incredibly well on the Pixel 2. The system is smart enough to preserve detail across a very wide tonal range without producing a flat image or jumping into the psychedelic, overcooked HDR look. It is, for all intents and purposes, comparable to a DSLR or mirrorless camera, except that you don’t need to spend any time processing the images in post, making it more immediate and more approachable for casual photographers.

Stereo depth-mapping from a single lens

While many phones have “portrait modes” that mimic a shallow depth of field, most accomplish this by using two separate lenses and sensors placed side-by-side. This allows the phone to compute a depth map based on the subtle differences between the two images, in a similar fashion to how our eyes perceive depth in the world around us. The Pixel 2 has just a single camera module, and yet it can produce the same stereoscopic depth map.

Magic? Almost. The Pixel 2 uses on-chip phase-detection autofocus, which means each pixel on the sensor is split into two photodiodes. Google has a much more detailed explanation of how its portrait mode works, but in short, those split pixels offer just enough stereo separation to create a depth map.
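As a toy illustration of the idea (not Google’s actual algorithm, which works at sub-pixel precision and with far more sophistication), here is how you might estimate disparity between the two half-pixel views by brute-force patch matching:

```python
import numpy as np

def disparity(left, right, patch=8, max_shift=3):
    """Crude per-column disparity between the left and right half-pixel views.
    Near-zero disparity means the region is in focus; larger shifts mean it
    sits farther from the focal plane, which is what a depth map is built from."""
    h, w = left.shape
    disp = np.zeros(w - patch)
    for x in range(w - patch):
        costs = []
        for d in range(-max_shift, max_shift + 1):
            x2 = min(max(x + d, 0), w - patch)  # clamp the shifted window to the frame
            costs.append(np.abs(left[:, x:x + patch] - right[:, x2:x2 + patch]).sum())
        disp[x] = np.argmin(costs) - max_shift  # best-matching shift for this column
    return disp
```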

This allows for shallow-depth-of-field photographs of any subject within the allowable distance, but for portraits, the phone goes a step further. It uses a neural network, trained on millions of sample photos, to recognize faces, improving the accuracy of where blur is applied in the photograph.

In practice, the Pixel 2’s portrait mode has the same inconsistencies as other smartphones’, but mostly I’m impressed that it works at all. As AI and processing power improve, this computational approach to depth-of-field control will only get better and, eventually, may offer advantages over a DSLR or mirrorless camera. Namely, anything done computationally can theoretically be controlled or removed after the fact. Depth of field, and even the subjective quality of the blur, could be altered in post, opening up new avenues of creative control.
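Here’s a sketch of what that post-hoc control could look like, assuming you have kept a float grayscale image alongside its per-pixel depth map. The function and its parameters are my own invention for illustration, not anything the Pixel 2 actually exposes:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def synthetic_dof(image, depth, focus_depth, aperture, n_layers=8):
    """Re-render depth of field from a stored depth map: blur each depth
    slice in proportion to its distance from the chosen focal plane, then
    composite. Because the blur is synthesized, focus_depth and aperture
    can be changed long after the shot was taken."""
    out = np.zeros_like(image)
    edges = np.linspace(depth.min(), depth.max(), n_layers + 1)
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (depth >= lo) & (depth <= hi)
        sigma = aperture * abs((lo + hi) / 2 - focus_depth)  # blur grows away from focus
        layer = gaussian_filter(image, sigma=sigma) if sigma > 0 else image
        out[mask] = layer[mask]
    return out
```

Re-running the same function with a different focus_depth or a larger aperture refocuses the shot or deepens the blur, which is exactly the kind of after-the-fact control a physical lens can never offer.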

A Pixel worth a thousand words

It is debatable whether the Pixel 2 (and its larger sibling, the Pixel 2 XL) is the best smartphone out there, but it’s certainly one of the best smartphone cameras (the new Samsung Galaxy S9 Plus may have nudged it off the top, but only just). It’s also commendable that Google put its top camera tech into the smaller, cheaper Pixel 2 rather than reserving it for the XL model, making it accessible to more users.

I’ll still hang on to my mirrorless kit, but the Pixel 2 proves that software, not hardware, is the future of photography.
