
Researchers are programming robots to learn as human babies do

Google is betting big on artificial intelligence. Robots have come a long way over the past decade or so, but they're still not as good at interacting with humans as they could be. In fact, they still struggle with some basic tasks.

A team at Carnegie Mellon University, however, is trying to fix that. Led by assistant professor Abhinav Gupta, the group is taking a new approach: letting robots play with everyday physical objects and explore the world to help them learn, much as a human baby would.


“Psychological studies have shown that if people can’t affect what they see, their visual understanding of that scene is limited,” said Lerrel Pinto, a PhD student in the group, in a report by The Verge. “Interaction with the real world exposes a lot of visual dynamics.”


The group first showed off its tech last year, and the demo helped it land a three-year, $1.5 million award from Google, which will be used to expand the number of robots used in the study. More robots will let the researchers gather data more quickly, which in turn helps the group build increasingly capable robots.

But the team isn't relying on more robots alone to speed up data gathering. It's also trying to teach robots skills that will, in turn, help them learn other skills. The team also uses adversarial learning, which, according to the Verge report, is akin to a parent teaching a child to catch a ball by pitching increasingly difficult throws. Reportedly, this approach results in significantly faster learning than alternative methods.
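To make the catch-the-ball analogy concrete, here is a minimal, hypothetical sketch of an adversarial training loop in Python. One learner (the "grasper") tries grasp angles while a second learner (the "adversary") picks disturbance strengths that try to knock the object loose, and each is rewarded when the other fails. The candidate angles, disturbance values, and toy physics are all invented for illustration and are not the CMU team's actual setup.

import random

# Hypothetical candidate actions for each learner (invented for illustration).
GRASP_ANGLES = [0, 30, 60, 90]    # grasp angles the "grasper" can try, in degrees
DISTURBANCES = [0.2, 0.5, 0.8]    # shake strengths the "adversary" can apply

def grasp_holds(angle, shake):
    # Invented physics: grasps near 60 degrees are most robust,
    # and stronger shakes dislodge weaker grasps.
    robustness = 1.0 - abs(angle - 60) / 90.0
    return robustness > shake

def pick(values, eps=0.2):
    # Epsilon-greedy choice over a dict mapping action -> estimated value.
    if random.random() < eps:
        return random.choice(list(values))
    return max(values, key=values.get)

grasper_q = {a: 0.0 for a in GRASP_ANGLES}
adversary_q = {d: 0.0 for d in DISTURBANCES}
grasper_n = {a: 0 for a in GRASP_ANGLES}
adversary_n = {d: 0 for d in DISTURBANCES}

for trial in range(2000):
    angle = pick(grasper_q)
    shake = pick(adversary_q)
    success = grasp_holds(angle, shake)

    # Zero-sum rewards: the grasper is rewarded when the grasp survives the
    # shake; the adversary is rewarded whenever it dislodges the object.
    grasper_n[angle] += 1
    adversary_n[shake] += 1
    grasper_q[angle] += (float(success) - grasper_q[angle]) / grasper_n[angle]
    adversary_q[shake] += (float(not success) - adversary_q[shake]) / adversary_n[shake]

print("Grasper settles on angle:", max(grasper_q, key=grasper_q.get))
print("Adversary settles on shake:", max(adversary_q, key=adversary_q.get))

In the real project, both sides would be far richer models learning from camera images and physical grasp attempts; the sketch only shows the competitive loop that the parent-and-child analogy describes.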

It will certainly be interesting to see what comes of the project, and we’ll likely hear more about it as time goes on. Check out the video below to see the robots in action.

Robot Adversaries for Grasp Learning
Christian de Looper