Have you ever made sculptures out of found objects like driftwood? Researchers at the University of Tokyo have taken the same idea and applied it to robots. In doing so, they’ve found a way to take everyday natural objects like pieces of wood and use deep reinforcement learning to figure out how to make them move. Using just a few basic servos, they’ve opened up a whole new way of building robots — and it’s pretty darn awesome.
“[In our work, we wanted to] consider the use of found objects in robotics,” the researchers write in a paper describing their work. “Here, these are branches of various shapes. Such objects have been used in art or architecture, but [are] not normally considered as robotic materials. [However,] when the robot is trained towards the goal of efficient locomotion, these parts adopt new meaning: hopping legs, dragging arms, spinning hips, or yet unnamed creative mechanisms of propulsion. Importantly, these learned strategies, and thus the meanings we might assign to such found object parts, are a product of optimization and not known prior to learning.”
Deep reinforcement learning is useful for applications where the A.I. needs to work out strategies for itself through trial and error. Famously, DeepMind used this approach to build an A.I. that learned to play classic Atari games using nothing but the on-screen data and knowledge of the controls. In this latest driftwood example, the robot uses reinforcement learning to test out different types of locomotion and settle on the best way to bring its wooden limbs to virtual life. The resulting movements don’t necessarily replicate those of real-life animals (to be fair, there aren’t a whole lot of stick-like living creatures to model movement on!), but they are nonetheless efficient.
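To get a feel for what that trial-and-error loop looks like, here is a deliberately simple sketch in Python. It is not the researchers’ code: the physics is replaced by a made-up reward function, the deep RL algorithm by plain random search, and the `rollout` function, sinusoidal servo parameterization, and constants are all illustrative assumptions. The shape of the loop is the point: propose a way of moving, score how far the body travels, and keep whatever works.

```python
# Toy illustration of the trial-and-error loop behind reinforcement learning.
# NOT the University of Tokyo code: the "environment" is a crude stand-in for
# a physics simulator, and the learning rule is simple random search rather
# than a deep RL algorithm.

import numpy as np

rng = np.random.default_rng(0)

N_SERVOS = 3          # the real robots use just a few basic servos
EPISODE_STEPS = 200   # length of each simulated rollout


def rollout(policy_params):
    """Simulate one episode; return forward distance minus an energy penalty.

    `policy_params` holds an (amplitude, frequency, phase) triple per servo.
    The "dynamics" are a made-up proxy: coordinated backward limb swings push
    the body forward, while wild flailing costs energy.
    """
    amps, freqs, phases = np.split(policy_params, 3)
    position, velocity = 0.0, 0.0
    prev_angles = amps * np.sin(phases)
    for t in range(1, EPISODE_STEPS + 1):
        angles = amps * np.sin(0.2 * freqs * t + phases)    # servo commands
        swing = angles - prev_angles                        # how each limb moved
        thrust = 0.05 * np.sum(np.clip(-swing, 0.0, None))  # backward swings push forward
        velocity = 0.9 * velocity + thrust - 0.02           # crude friction and drag
        position += max(velocity, 0.0)
        prev_angles = angles
    energy = 0.05 * float(np.sum(amps ** 2))                # discourage wasted motion
    return position - energy


# Random-search "learning": perturb the current policy, keep it if it scores better.
best_params = rng.normal(size=3 * N_SERVOS)
best_score = rollout(best_params)

for _ in range(500):
    candidate = best_params + 0.1 * rng.normal(size=best_params.shape)
    score = rollout(candidate)
    if score > best_score:            # trial and error: keep what works
        best_params, best_score = candidate, score

print(f"best locomotion score after training: {best_score:.2f}")
```

A real system swaps the fake dynamics for a physics simulator and the random search for a gradient-based deep RL algorithm, but the propose-score-keep cycle is the same.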
In a masterstroke, the researchers arranged for this training to be done in simulation. Among other things, this allows for a large number of failed movement attempts without any risk of destroying the physical robot in the process. To carry out these simulations accurately, though, the researchers first have to 3D-scan the sticks and enter their measured weights so that the gaits can be calculated correctly.
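As a rough sketch of what that setup step might look like, the snippet below loads a scanned branch mesh into the open-source PyBullet engine. The paper doesn’t necessarily use this simulator, and the file name, mass value, and settling loop are placeholders; the point is simply that the 3D scan supplies the shape and the measured weight supplies the mass, giving the learner a reasonably faithful digital stand-in to practice on.

```python
# Minimal sketch of the "simulate the scanned stick" step, using PyBullet as a
# stand-in simulator (an assumption, not necessarily the engine used in the paper).
# "branch.obj" and the 0.35 kg mass are hypothetical placeholders for a real
# scan and a real measurement.

import pybullet as p
import pybullet_data

p.connect(p.DIRECT)                                  # headless physics, no GUI
p.setAdditionalSearchPath(pybullet_data.getDataPath())
p.setGravity(0, 0, -9.81)
p.loadURDF("plane.urdf")                             # flat ground to move across

# Shape comes from the 3D scan; mass comes from weighing the real branch.
branch_shape = p.createCollisionShape(p.GEOM_MESH, fileName="branch.obj")
branch_body = p.createMultiBody(
    baseMass=0.35,                                   # measured weight in kg (placeholder)
    baseCollisionShapeIndex=branch_shape,
    basePosition=[0, 0, 0.3],
)

# With this digital twin in place, thousands of trial gaits can be rolled out
# without risking the physical hardware. Here we just let the branch settle.
for _ in range(240):                                 # one simulated second at 240 Hz
    p.stepSimulation()

position, _ = p.getBasePositionAndOrientation(branch_body)
print("resting position of the simulated branch:", position)
```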
While it’s likely that roboticists will continue to build many robots from the ground up, this is still a great reminder that, with the right software, literally anything can be a robot — even a pile of sticks.