
Gemini AI is making robots in the office far more useful

An Everyday Robot navigating through an office.

Lost in an unfamiliar office building, big box store, or warehouse? Just ask the nearest robot for directions.

A team of Google researchers combined the powers of natural language processing and computer vision to develop a novel means of robotic navigation as part of a new study published Wednesday.

Essentially, the team set out to teach a robot — in this case an Everyday Robot — how to navigate an indoor space using natural language prompts and visual inputs. Robotic navigation used to require researchers to not only map out the environment ahead of time but also provide specific physical coordinates within the space to guide the machine. Recent advances in what's known as vision-language navigation have enabled users to simply give robots natural language commands, like "go to the workbench." Google's researchers are taking that concept a step further by incorporating multimodal capabilities, so that the robot can accept natural language and image instructions at the same time.

For example, a user in a warehouse would be able to show the robot an item and ask, "What shelf does this go on?" Leveraging the power of Gemini 1.5 Pro, the AI interprets both the spoken question and the visual information to formulate not just a response but also a navigation path to lead the user to the correct spot on the warehouse floor. The robots were also tested with commands like, "Take me to the conference room with the double doors," "Where can I borrow some hand sanitizer?" and "I want to store something out of sight from public eyes. Where should I go?"
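The study doesn't spell out how the shelf lookup works internally, but conceptually the "what shelf does this go on?" interaction reduces to matching an embedding of the shown item against embeddings of known inventory categories, each tied to a location. The sketch below is purely illustrative — the category names, toy embedding vectors, and `answer_shelf_query` helper are all hypothetical, not from the paper:

```python
import math

def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b)))

# Hypothetical inventory: category name -> (toy image embedding, shelf location).
inventory = {
    "power tools": ([0.9, 0.1, 0.0], "aisle 3, shelf B"),
    "paint":       ([0.1, 0.9, 0.1], "aisle 5, shelf A"),
    "lighting":    ([0.0, 0.2, 0.9], "aisle 7, shelf C"),
}

def answer_shelf_query(item_embedding):
    """Pick the inventory category whose embedding best matches the shown item."""
    best = max(inventory, key=lambda k: cosine(inventory[k][0], item_embedding))
    return inventory[best][1]

print(answer_shelf_query([0.85, 0.15, 0.05]))  # matches "power tools"
```

In the real system, a vision-language model like Gemini handles this matching implicitly from raw pixels; the explicit embedding lookup here is just a way to make the idea concrete.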

Or, in the Instagram Reel above, a researcher activates the system with an "OK robot" before asking to be taken "somewhere he can draw." The robot responds with "Give me a minute. Thinking with Gemini …" before setting off briskly through the 9,000-square-foot DeepMind office in search of a large wall-mounted whiteboard.

To be fair, these trailblazing robots were already familiar with the office space's layout. The team utilized a technique known as "Multimodal Instruction Navigation with demonstration Tours (MINT)." This involved the team first manually guiding the robot around the office, pointing out specific areas and features using natural language, though the same effect can be achieved by simply recording a video of the space using a smartphone. From there, the AI generates a topological graph and works to match what its cameras are seeing with the "goal frame" from the demonstration video.
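The mechanics of that matching step can be made concrete with a toy sketch: treat each place seen on the demonstration tour as a node carrying an image embedding, connect adjacent places into a graph, localize the robot by finding the tour frame closest to its current camera view, then plan a path to the goal node. The place names, vectors, and helper functions below are invented for illustration — the real system works on learned embeddings, not three-element lists:

```python
from collections import deque
import math

def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b)))

# Toy "tour": each node is a place seen during the demonstration walk,
# stored with a hypothetical image embedding for that viewpoint.
tour_frames = {
    "lobby":      [1.0, 0.1, 0.0],
    "hallway":    [0.8, 0.6, 0.1],
    "kitchen":    [0.1, 1.0, 0.2],
    "whiteboard": [0.0, 0.2, 1.0],
}

# Topological graph: edges connect places that were adjacent on the tour.
graph = {
    "lobby":      ["hallway"],
    "hallway":    ["lobby", "kitchen", "whiteboard"],
    "kitchen":    ["hallway"],
    "whiteboard": ["hallway"],
}

def localize(current_view):
    """Match the robot's current camera embedding to the closest tour frame."""
    return max(tour_frames, key=lambda n: cosine(tour_frames[n], current_view))

def plan(start, goal):
    """Breadth-first search over the topological graph."""
    queue, seen = deque([[start]]), {start}
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        for nxt in graph[path[-1]]:
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])

start = localize([0.95, 0.15, 0.05])  # robot's camera view resembles the lobby
print(plan(start, "whiteboard"))      # → ['lobby', 'hallway', 'whiteboard']
```

A topological graph like this avoids the precise metric maps and physical coordinates that older navigation pipelines required — the robot only needs to know which views connect to which.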

Then, the team employs a hierarchical Vision-Language-Action (VLA) navigation policy “combining the environment understanding and common sense reasoning,” to instruct the AI on how to translate user requests into navigational action.
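That hierarchy can be sketched as two layers: a high-level step that reasons from the user's request to a navigation goal (played here by a keyword lookup standing in for Gemini's common-sense reasoning), and a low-level step that turns the planned path into primitive motion commands. Everything below — the `GOALS` table, function names, and command strings — is an assumption for illustration, not the paper's actual policy:

```python
# Hypothetical mapping a vision-language model might produce via
# "common sense" reasoning over the office tour ("draw" -> whiteboard).
GOALS = {
    "draw": "whiteboard",
    "coffee": "kitchen",
    "sanitizer": "first aid station",
}

def high_level(request):
    """Map a user request to a navigation goal (keyword lookup stands in for the VLM)."""
    for keyword, goal in GOALS.items():
        if keyword in request.lower():
            return goal
    return None

def low_level(path):
    """Translate a topological path into primitive motion commands."""
    return [f"drive to waypoint '{node}'" for node in path]

goal = high_level("Take me somewhere I can draw")   # -> "whiteboard"
print(low_level(["hallway", goal]))
```

Splitting the policy this way lets the slow, expensive reasoning model run once per request while a fast local controller handles the actual driving — which also matters given the inference latency the researchers note below.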

The results were impressive, with the robots achieving "86 percent and 90 percent end-to-end success rates on previously infeasible navigation tasks involving complex reasoning and multimodal user instructions in a large real world environment," the researchers wrote.

However, they recognize that there is still room for improvement, pointing out that the robot cannot (yet) autonomously perform its own demonstration tour and noting that the AI's ungainly inference time (how long it takes to formulate a response) of 10 to 30 seconds makes interacting with the system a study in patience.

Andrew Tarantola
Andrew has spent more than a decade reporting on emerging technologies ranging from robotics and machine learning to space…
How to use Gemini AI to write anything in Google Docs

Gemini AI, Google's latest language model, is revolutionizing the way we create content within Google Docs, from drafting emails and reports to generating creative writing pieces.

In this guide, we'll walk you through the steps to access and set up Gemini AI in Google Docs, explore its impressive features, and provide practical examples of how to leverage its potential for a wide range of tasks.
How to integrate Gemini into Google Docs
Unfortunately, Gemini integration with Google's Workspace suite isn't available to free-tier users. You'll need a $20/month subscription to the Google One AI Premium Plan (or a work or school account through a Gemini for Google Workspace add-on) to gain access. Signing up for the personal plan is straightforward.

Watch Google DeepMind’s robotic ping-pong player take on humans

Ping-pong seems to be the sport of choice when it comes to tech firms showcasing their robotic wares. Japanese firm Omron, for example, made headlines several years ago with its ping-pong robot that could comfortably sustain a rally with a human player, while showing off the firm’s sensor and control technology in the process.

More AI may be coming to YouTube in a big way

YouTube content creators could soon be able to brainstorm video topic, title, and thumbnail ideas with Gemini AI as part of the "brainstorm with Gemini" experiment Google is currently testing, the company announced via its Creator Insider channel.

The feature is first being released to a small group of selected content creators for feedback, a spokesperson from the company told TechCrunch, before the company decides whether to roll it out to all users. "We're collecting feedback at this stage to make sure we're developing these features thoughtfully and will improve the feature based on feedback," the video's host said.
