
Pixel 2 owners get the first glimpse of Google Lens computer vision possibilities

Unveiled earlier in 2017 at Google I/O, the first public version of the artificially intelligent computer vision program Google Lens is now part of the new Google Pixel 2. During its October 4 event in San Francisco, Google shared a preview of Lens that will ship inside the new Pixel 2 smartphone, with integration into both Google Photos and Google Assistant.

Google Lens is the tech giant’s computer vision software that collects information from a photograph, either to save some time by skipping the typing or to learn something new about the things we see around us. The tool effectively mixes Google search with a camera, and while the Pixel 2 only contains a preview of the feature, the platform already creates a few promising shortcuts.


During the event, Google’s Aparna Chennapragada shared how the new feature allows the smartphone’s camera to be used as a sort of keyboard. When taking a photo of something with text, like a flyer, Google Lens allows users to highlight text such as email addresses, phone numbers, websites, and street addresses and copy the information. The shortcut makes it easy to look up a location on Google Maps or call a phone number without typing it into the keyboard.

Besides serving as a visual shortcut to typing in long and unusual email addresses, Google Lens is also designed to help users understand the objects they see — starting with art and entertainment. Snapping a photo of a piece of art will reveal who the artist is and what else they painted. See a movie poster? Lens will tell you whether the flick is worth watching. Snapping photos of album covers and book covers also leads you to more details on the work.

The preview inside the Pixel 2 is just a start for the computer vision software. When the software was first announced, Google listed a long list of possibilities, including translating text, getting more details on a business, reading Wi-Fi network settings, or learning the name of that flower you just spotted.

Google’s computer vision also works with existing photos, powering a number of tools inside the native Google Photos app on the Pixel 2. Searching for specific objects, people and even famous landmarks is possible through the program’s auto-tagging feature.

Google Lens is based on machine learning — Google essentially used the millions of photos in its search results to train the computer to recognize what a specific object looks like. With enough photos, the program can learn to recognize what the Eiffel Tower looks like on a cloudy day, lit up at night, or even blurred from camera shake, and correctly identify what is in the photo.

Chennapragada said that Google Lens will continue to improve with use. For example, she said, Google’s voice recognition, at first, wouldn’t always recognize speech correctly, particularly with factors like accents. Now, after several years of development, Google voice has a 95 percent accuracy rate.

Google CEO Sundar Pichai said that the object recognition AI built by Google had a 39 percent accuracy rate. Using what’s called AutoML, which is essentially artificial intelligence building more AI programs, that accuracy rate has improved to 43 percent and is continuing to improve.

“This is why we are excited about the shift from mobile first to AI first, it’s radically rethinking how computers work,” Pichai said during the presentation. “Computers should adapt to how people live their life, rather than people adapting to computers.”

Google Lens will first be available on the Pixel 2 by tapping the Lens icon inside both Google Photos and Google Assistant.

Hillary K. Grigonis