Skip to main content

Google raises 5 safety concerns for the future of artificial intelligence

nestor ai paying attention artificial intelligence
Image used with permission by copyright holder
While artificial intelligence was once sci-fi subject matter, the field is advancing at such a rate that we’ll likely see it become a part of everyday life before too long. As a result, Google wants to make sure that an AI can be trusted to carry out a task as instructed, and do so without putting humans at risk.

That was the focus of a study carried out by Google in association with Stanford University; University of California, Berkeley; and OpenAI, the research company co-founded by Elon Musk. The project outlined five problems that need to be addressed so that the field can flourish, according to a report from Recode.

Recommended Videos

It’s noted that these five points are “research questions” intended to start a discussion, rather than offer up a solution. These issues are minor concerns right now, but Google’s blog post suggests that they will be increasingly important in the long-term.

The first problem asks how we’ll avoid negative side effects, giving the example of a cleaning AI cutting corners and knocking over a vase because that’s the fastest way to complete its janitorial duties. The second refers to “reward hacking,” where a robot might try take shortcuts to fulfill its objective without actually completing the task at hand.

The third problem is related to oversight, and making sure that robots don’t require too much feedback from human operators. The fourth raises the issue of the robot’s safety while exploring; this is illustrated by a mopping robot experimenting with new techniques, but knowing to fall short of mopping an electrical outlet (for obvious reasons).

The final problem looks at the differences between the environment a robot would train in, and their workplace. There are bound to be major discrepancies, and the AI needs to be able to get the job done regardless.

It’s really just a matter of time before we see AI being used to carry out menial tasks, but research like this demonstrates the issues that need to be tackled ahead of a wide rollout. User safety and the quality of the service will of course be paramount, so it’s vital that these questions are asked well ahead of time.

Brad Jones
Former Digital Trends Contributor
Brad is an English-born writer currently splitting his time between Edinburgh and Pennsylvania. You can find him on Twitter…
ChatGPT Search is here to battle both Google and Perplexity
The ChatGPT Search icon on the prompt window

ChatGPT is receiving its second new search feature of the week, the company announced on Thursday. Dubbed ChatGPT Search, this tool will deliver real-time data from the internet in response to your chat prompts.

ChatGPT Search appears to be both OpenAI's answer to Perplexity and a shot across Google's bow.

Read more
Google expands AI Overviews to over 100 more countries
AI Overviews being shown in Google Search.

Google's AI Overview is coming to a search results page near you, whether you want it to or not. The company announced on Monday that it is expanding the AI feature to more than 100 countries around the world.

Google debuted AI Overview, which uses generative AI to summarize the key points of your search topic and display that information at the top of the results page, to mixed reviews in May before subsequently expanding the program in August. Monday's roll-out sees the feature made available in seven languages — English, Hindi, Indonesian, Japanese, Korean, Portuguese, and Spanish — to users in more than 100 nations (you can find a full list of covered countries here)

Read more
Google’s AI detection tool is now available for anyone to try
Gemini running on the Google Pixel 9 Pro Fold.

Google announced via a post on X (formerly Twitter) on Wednesday that SynthID is now available to anybody who wants to try it. The authentication system for AI-generated content embeds imperceptible watermarks into generated images, video, and text, enabling users to verify whether a piece of content was made by humans or machines.

“We’re open-sourcing our SynthID Text watermarking tool,” the company wrote. “Available freely to developers and businesses, it will help them identify their AI-generated content.”

Read more