OK, let’s get this out of the way up top: A robot doll that can sense your child’s emotions and change how it behaves accordingly sounds like the kind of high-concept horror movie a Hollywood screenwriter would pitch after binge-watching Westworld and a Chucky marathon.
In reality, it describes research being carried out by investigators at the University of Castilla-La Mancha in Ciudad Real, Spain. What they’ve built as a proof of concept is an artificially intelligent doll that can recognize eight different emotions and that runs on an AI chip costing just 115 euros (around $130). Emotion recognition is carried out by analyzing facial expressions, captured via a camera hidden in the doll’s mouth.
As project leader Oscar Deniz explains, the doll is something of a red herring.
“It’s actually an application of an open vision platform we have developed in our Horizon 2020 project, ‘Eyes of Things,’” Deniz, whose work focuses on computer vision and machine learning, told Digital Trends. “The vision platform has been designed for small size, cost, and maximum efficiency. Thus, the doll contains the board along with camera and battery. The board processes images to recognize the girl’s facial expression, allowing the doll to react accordingly. All of this is done inside the doll, meaning that no images are sent to the internet, which has been the case in other toys. This not only allows for better response time; it guarantees privacy.”
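Deniz’s team hasn’t published its inference code, but the loop he describes — grab a frame, find the face, classify the expression, react, all without the image ever leaving the device — is simple to sketch. The Python below is a minimal illustration, not the project’s actual software: it uses OpenCV’s stock face detector, while the classify_expression function, the reaction hook, and the eight emotion labels are placeholders for whatever model the Eyes of Things board really runs.

```python
import cv2

# The article doesn't enumerate the doll's eight emotions; these labels
# are a common set from facial-expression datasets and are assumptions.
EMOTIONS = ["anger", "contempt", "disgust", "fear",
            "happiness", "neutral", "sadness", "surprise"]

# Stock OpenCV Haar cascade for frontal-face detection.
face_detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")


def classify_expression(face_img):
    # Hypothetical stand-in for the on-board expression model; a real
    # implementation would run a small neural network on the embedded
    # board and return one of the EMOTIONS labels.
    return "neutral"  # placeholder output


def react(emotion):
    # Placeholder for the doll's response (speech, movement, and so on).
    print(f"doll reacts to: {emotion}")


cap = cv2.VideoCapture(0)  # stands in for the camera in the doll's mouth
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # detectMultiScale returns (x, y, w, h) boxes for detected faces.
    for (x, y, w, h) in face_detector.detectMultiScale(gray, 1.3, 5):
        face = cv2.resize(gray[y:y + h, x:x + w], (48, 48))
        react(classify_expression(face))
    # Note what's absent: no upload step. Frames are consumed in the
    # same loop that captures them, which is the privacy property
    # Deniz highlights.
```

The detail worth noticing is what the loop lacks: because classification happens on the same device that captures the frame, there is no network step to secure, and the privacy guarantee falls out of the architecture rather than a policy.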
The “Eyes of Things” project started in January 2015 and runs through December of this year. Its objective is to design a new embedded vision platform, optimized for size, cost, performance, and power consumption. While there are plenty of fascinating facial-recognition technologies around, the fact that this tool does everything locally, rather than in the cloud, makes it particularly intriguing.
Besides dolls and other intelligent toys, the project aims to develop similar uses of vision-based AI for drones, robots, headsets, and video surveillance. Another use case the team has developed is something called the “museum audio guide,” which comprises a headset containing the AI board, a battery, and a camera. The idea is that, when a museum visitor puts the headset on, the headset “sees” what they are looking at and provides contextual information.
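The team hasn’t detailed how the audio guide actually recognizes exhibits, so take the following as one plausible, fully on-device approach rather than the project’s method: match ORB features from the headset camera against reference photos stored on the board. The exhibit names, file paths, and min_matches threshold here are all illustrative.

```python
import cv2

# Hypothetical gallery: exhibit name -> reference photo stored locally.
EXHIBITS = {
    "Guernica": "exhibits/guernica.jpg",
    "Las Meninas": "exhibits/las_meninas.jpg",
}

orb = cv2.ORB_create()
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)

# Precompute ORB descriptors for each reference image once, at startup.
reference = {}
for name, path in EXHIBITS.items():
    img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    _, descriptors = orb.detectAndCompute(img, None)
    reference[name] = descriptors


def identify_exhibit(frame, min_matches=30):
    # Return the exhibit the headset camera is most likely looking at,
    # or None if nothing matches well enough.
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    _, descriptors = orb.detectAndCompute(gray, None)
    if descriptors is None:
        return None
    best_name, best_count = None, 0
    for name, ref in reference.items():
        count = len(matcher.match(descriptors, ref))
        if count > best_count:
            best_name, best_count = name, count
    return best_name if best_count >= min_matches else None
```

A matched name could then index into audio clips stored on the same board, keeping the whole pipeline offline, consistent with the platform’s no-cloud design.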
“Currently, we have a number of prototypes available, and we are working on some demonstrators, but the idea is to generate [excitement] so that the platform can be commercialized,” Deniz said.
A paper describing the work was recently published in the journal Sensors.