Google has been hard at work improving Project Astra since it was first shown during Google I/O this year. The AI bot that understands the world around you is slated to be one of the major updates to arrive with Gemini 2.0. Even more excitingly, Google says it’s “working to bring the capabilities to Google products like the Gemini app, our AI assistant, and to other form factors like glasses.”
What’s new in Project Astra? Google says language understanding has been given a big performance bump: Astra can now better understand accents and less commonly used words, and it can speak in multiple languages, even in combinations of languages. This makes Astra more conversational, so it speaks more like we do every day. Astra “sees” the world around it and now draws on Google Lens, Google Maps, and Google Search to inform its responses.
A big part of Project Astra’s appeal is its ability to remember things, and Google has given it a 10-minute in-conversation memory, ensuring conversation flows and questions don’t need to be phrased in a certain way to be understood. Its long-term memory has also been improved.
The final update is around latency, which Google says now matches the flow of human conversation. Take a look at the video below to see how Project Astra is progressing. It’s looking very exciting indeed.
We were impressed with Project Astra when we saw it demonstrated at Google I/O, but frustrated that Google didn’t so much as tease, let alone announce, a pair of smart glasses that would work with it in the future. With the announcement of Gemini 2.0 and the Project Astra updates, it has now teased such a product.
The wearable is being tested in the real world, as you can see in the Project Astra video, where it’s obvious how well Astra suits hands-free use. Such a product can’t come soon enough, as brands like Meta, with the help of Ray-Ban, and newcomers like Solos are already ahead of the game. Even Samsung is expected to launch some kind of smart eyewear in 2025.
What about Gemini 2.0? Google’s CEO Sundar Pichai calls it the company’s most capable model yet, adding: “If Gemini 1.0 was about organizing and understanding information, Gemini 2.0 is about making it much more useful.” While the updates and new features are mostly of interest to developers at the moment, we will see Gemini 2.0 in action on our phones and in our searches.
A chat-optimized, experimental version of Gemini 2.0 Flash — the first model in the Gemini 2.0 family — will be available as an option in the mobile and desktop Gemini app for you to try. Gemini 2.0 will also power AI Overviews in Google Search, letting them answer more complex questions, including advanced math equations and coding queries.
This feature begins testing this week and will be more widely available in early 2025. There’s no information on when Project Astra or the prototype smart glasses will be more widely available.