Google supercharges Search with new features

Search is Google's biggest product, and at I/O, the company is pushing the service to new levels. Earlier, Google introduced Multisearch, which lets you, for example, take a photo of an object and build a search query around that photo. Now, the company has announced that it will roll out Multisearch with an additional “near me” variable, along with a Scene Exploration feature, later this year.

Multisearch’s “near me” variable allows you to snap a photo of a plate of food, find out what the dish is called, and discover where you can eat it nearby. It’s like Shazam, but for Search queries.

Google Multisearch’s “near me” feature.

Essentially, you can search with a photo and a question at the same time. The feature works with all kinds of images: snap a photo of a dish, add “near me,” and Search will bring up restaurants near you that serve it. Behind the scenes, Google matches your photo against photos uploaded by Maps contributors. The feature will roll out later this year in English, with other languages to follow over time.
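Google hasn’t published how Multisearch works under the hood, but the core idea of fusing an image and a text refinement into a single retrieval query can be illustrated with an open joint embedding model. The sketch below is a rough approximation, not Google’s pipeline: it uses the openly available CLIP model as a stand-in for the image/text embedding space, naive vector averaging as the fusion step, and a small in-memory dict in place of an index over Maps contributor photos.

```python
# Conceptual sketch of an image + text ("near me") query -- NOT Google's
# actual Multisearch pipeline. CLIP stands in for the joint embedding
# space; a dict of pre-embedded photos stands in for the Maps photo index.
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

def embed_query(image_path: str, refinement: str) -> torch.Tensor:
    """Embed the photo and the text refinement, then fuse them naively."""
    inputs = processor(text=[refinement], images=Image.open(image_path),
                       return_tensors="pt", padding=True)
    with torch.no_grad():
        out = model(**inputs)
    fused = out.image_embeds[0] + out.text_embeds[0]  # naive modality fusion
    return fused / fused.norm()                       # cosine-normalize

def rank_candidates(query: torch.Tensor,
                    photo_index: dict[str, torch.Tensor]) -> list[tuple[str, float]]:
    """Score pre-embedded contributor photos by cosine similarity."""
    scores = [(place, float(query @ vec)) for place, vec in photo_index.items()]
    return sorted(scores, key=lambda s: s[1], reverse=True)

# Usage (photo_index would hold normalized embeddings of restaurant photos):
# query = embed_query("mystery_dish.jpg", "restaurant near me")
# print(rank_candidates(query, photo_index)[:3])
```

A production system would, of course, use an approximate-nearest-neighbor index over millions of photos and filter candidates by location before ranking, rather than scoring a dict exhaustively.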

Another Search feature that’s coming is Scene Exploration, which lets you pan your camera across a wider scene and instantly glean insights about multiple objects at once. You can scan an entire shelf with your camera, and helpful information will be displayed as an overlay on the objects.

Google Multisearch’s Scene Exploration feature.

The feature uses computer vision to connect the multiple frames that make up a scene and identify all the objects within it, while Google's Knowledge Graph surfaces the most helpful results for each one. As an example, Google cited scanning a shelf of chocolates to find out which chocolate bars are nut-free. The result is an AR-style overlay across the entire scene in front of you.
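To make that detect-then-annotate loop concrete, here is a toy sketch of the idea. It is an illustration, not Google's implementation: detect_objects() is a canned stub standing in for a real computer-vision model, and the KNOWLEDGE dict (with made-up product names) stands in for the Knowledge Graph.

```python
# Toy sketch of the Scene Exploration flow: detect objects in each camera
# frame, merge detections across frames into one scene, and attach facts
# from a knowledge source. All names and data below are hypothetical.
from dataclasses import dataclass

@dataclass(frozen=True)
class Detection:
    label: str                      # recognized product, e.g. "NuttyBar Classic"
    box: tuple[int, int, int, int]  # (x, y, w, h) in frame coordinates

# Stand-in "knowledge graph": product -> attributes shown in the overlay.
KNOWLEDGE = {
    "ChocoDream Dark 70%": {"nut_free": True},
    "NuttyBar Classic": {"nut_free": False},
}

def detect_objects(frame) -> list[Detection]:
    """Canned stub; a real system would run a vision model on the frame."""
    return [Detection("ChocoDream Dark 70%", (10, 20, 80, 40)),
            Detection("NuttyBar Classic", (100, 20, 80, 40))]

def explore_scene(frames) -> dict[str, dict]:
    """Merge detections from all frames into one overlay entry per product."""
    overlay: dict[str, dict] = {}
    for frame in frames:
        for det in detect_objects(frame):
            attrs = KNOWLEDGE.get(det.label)
            if attrs is not None:
                overlay[det.label] = attrs  # dedupe across frames by label
    return overlay

# e.g. keep only the nut-free bars for the chocolate-shelf example:
nut_free = [name for name, a in explore_scene(frames=[None]).items() if a["nut_free"]]
print(nut_free)  # -> ['ChocoDream Dark 70%']
```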

Prakhar Khanna