Robots are getting better at dealing with the complexity of the real world, but they still need a helping hand when taking their first tentative steps outside easily defined lab conditions. That’s what a new open-source virtual reality training ground called AI2-THOR, created by researchers at Seattle’s Allen Institute for Artificial Intelligence, aims to help with. It’s an interactive VR model of real-world scenes, such as the kitchen or living room of a regular home, that lets an A.I. agent learn to cope with our world in a way that is not only less time-consuming but also a whole lot less risky, for both robots and the human folk they interact with.
At present, most commercial machine learning algorithms learn about the world from data sets of videos and still images. That approach certainly has its uses, as the steady stream of robotics advances in recent years proves. However, it’s no substitute for the chance to physically interact with the world, which is where AI2-THOR comes in.
THOR stands for “The House Of inteRactions.” The realistic A.I. training ground, built with the Unity game engine, has been in development since the summer of 2016. The first version of the software offers 120 different scenes, based around kitchen, living room, bedroom, and bathroom settings. Each one features location-appropriate items to interact with, such as an openable microwave in the kitchen, along with realistic physics models. The detail extends to such minutiae as empty and full bathtubs and sliceable apples.
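Under the hood, an agent drives those Unity scenes through a Python interface, issuing discrete actions and getting back an image frame plus scene metadata after each step. The sketch below shows roughly what that loop looks like; it is based on the project’s published Python API, but the scene name (“FloorPlan28”) and action strings are taken from documentation of the era and may differ between releases, so treat the exact calls as approximate.

```python
# Minimal AI2-THOR loop: move the agent, then open the kitchen microwave.
# Action and scene names follow the project's docs and may vary by version.
import ai2thor.controller

controller = ai2thor.controller.Controller()
controller.start()  # launches the Unity player window

controller.reset('FloorPlan28')  # one of the kitchen scenes
controller.step(dict(action='Initialize', gridSize=0.25))

# Step the agent forward; each step returns an event carrying a
# first-person RGB frame (a numpy array) and a scene-metadata dict.
event = controller.step(dict(action='MoveAhead'))
print(event.frame.shape)

# Look up the microwave in the metadata and try to open it.
microwave = next(obj for obj in event.metadata['objects']
                 if obj['objectType'] == 'Microwave')
event = controller.step(dict(action='OpenObject',
                             objectId=microwave['objectId']))
print(event.metadata['lastActionSuccess'])

controller.stop()
```

Whether the open action succeeds depends on the agent actually being within reach of the microwave, which is exactly the kind of embodied constraint that static image data sets never teach.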
In the future, the team behind AI2-THOR plans to expand it further by adding objects with non-rigid physics, letting robots get valuable practice at making beds or moving items of clothing. (Hey, we’re not giving up on our Jetsons-style dream of a robot household helper yet!)
The open-source software is already available, and anyone can download it and customize the various scenes to their own specifications. Hopefully, initiatives like this will help robots get even smarter while putting the toolsets that make it possible within reach of people outside high-end research labs.
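For those who want to kick the tires, getting started should be as simple as installing the Python package and sweeping through a few scenes. The scene-numbering scheme below (FloorPlan1–30 for kitchens, 201–230 for living rooms, and so on) is an assumption drawn from the project’s documentation, so verify it against whatever version you download.

```python
# Quick smoke test across AI2-THOR's four scene categories.
# Assumes the package is installed via `pip install ai2thor` and that
# scenes follow the documented FloorPlan numbering.
import ai2thor.controller

controller = ai2thor.controller.Controller()
controller.start()

sample_scenes = ['FloorPlan1',    # kitchen
                 'FloorPlan201',  # living room
                 'FloorPlan301',  # bedroom
                 'FloorPlan401']  # bathroom

for scene in sample_scenes:
    controller.reset(scene)
    event = controller.step(dict(action='Initialize', gridSize=0.25))
    # Report how many interactable objects each scene exposes.
    print(scene, len(event.metadata['objects']), 'objects')

controller.stop()
```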