Your Google Pixel 8 Pro is about to get an incredible update

The Google Pixel 8 Pro lying face-down on a wood bench.
Joe Maring / Digital Trends

When Google introduced the Google Pixel 8 Pro earlier this year, the company made it clear that it was more of an AI phone than just a regular Android flagship. Now, Google is leaning more into that AI future with a pair of new features based on the in-house Gemini Nano model.

To bring Gemini Nano to the Pixel 8 Pro, Google built a new Android 14 system service called AICore. Gemini Nano is Google’s most compact and efficient AI model, and it runs natively on the Tensor mobile silicon. On-device operation means all processing happens locally, so there’s little risk of sensitive information leaving your phone.


Most importantly, you don’t need to be connected to the internet – or Google’s cloud – for these AI features to work. As for the features themselves, Gemini Nano enables tricks such as advanced text summarization, context-aware smart replies, and sophisticated proofreading and grammar-correction tools.

Summarizer feature in Google Recorder app.
Google

The first Gemini Nano-based feature on the Pixel 8 Pro is a new summarizer system in the Recorder app for Pixel phones. The app, which already excels at transcription, will now present a summary of the audio recording in the form of quick-to-absorb bullet points. GPT-based tools and apps such as Shortwave also offer their own take on summarization, but they can’t do so without an internet connection.

For now, however, the AI summarizer system in Recorder is limited to the English language. Unfortunately, we don’t know whether support for more languages – and older devices – will be added down the road. Interestingly, the app is now able to transcribe audio in 28 languages.

Smart Reply feature in WhatsApp courtesy of Gboard.
Google

Next, we have a new smart reply system in the Gboard app. The feature is already live globally in WhatsApp, but more apps will add support for this predictive text response feature in Google’s keyboard app soon. Conversational awareness is the trick here, and since all processing happens natively, the suggestions are quick and privacy is not jeopardized.

Interestingly, Google is not locking AICore to its own silicon. The company says it can take advantage of the machine learning engines built into chips from Qualcomm, MediaTek, and Samsung.

A Present For Your Pixel | December ‘23 Feature Drop

That makes a lot of sense, as both MediaTek and Qualcomm now offer top-end as well as mid-tier processors that can run AI models with billions of parameters. Moreover, the architecture leaves the door open for developers to fine-tune the model for their own apps and build custom AI-powered features of their own.

Preview mode for Google Pixel Fold.
Google

In addition to AI features built atop the Gemini Nano model, Google is also improving the Photo Unblur feature to better recognize pets like cats and dogs. Taking a page out of the Magic Eraser book, Pixel phones get a new Clean feature that can remove smudges and gunk visible in scanned documents.

The Google Pixel Fold gets a new Dual Screen Preview mode that lets you use the outer screen as a viewfinder. The ability to use a Pixel phone as a high-res webcam is also rolling out widely.

Cleanup feature for document scan on Pixel phones.
Google

Following in the footsteps of Samsung, Google is also bringing a Repair Mode to Pixels that locks all user data in a secure environment while the device is being fixed. The Pixel 6, and all phones launched after it, will now show contextual quick replies when Google Assistant is screening calls, giving users a quick way to answer with a text without ever picking up.

Thanks to Call Screen, Google is also bringing contextual incoming call alerts to the Pixel Watch. Just like the iPhone and Apple Watch pairing, the Pixel Watch can now be used to unlock Pixel phones. Following the December feature drop, the first-generation Pixel Watch can also sync Do Not Disturb and Bedtime modes with your Pixel smartphone.

Nadeem Sarwar
Nadeem is a tech journalist who started reading about cool smartphone tech out of curiosity and soon started writing…