Called DeepLoco, the work was shown off this week at SIGGRAPH 2017, widely regarded as the world’s leading computer graphics conference. While CGI capable of mimicking realistic walking motions has been around for years, what makes this work so nifty is that it uses reinforcement learning to learn the walking behavior itself.
Reinforcement learning, for those unfamiliar with it, is a branch of machine learning in which software agents learn to take actions that maximize a reward. Google’s DeepMind, for example, has used reinforcement learning to teach an AI to play classic video games by working out for itself how to rack up high scores.
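For readers who want a concrete picture of that “maximize the reward” loop, here is a minimal sketch of tabular Q-learning on a toy one-dimensional corridor. The environment, reward values, and hyperparameters are illustrative assumptions made for this example only; DeepLoco itself relies on far more sophisticated deep, hierarchical policies.

```python
# A minimal sketch of reinforcement learning (tabular Q-learning) on a toy
# 1-D corridor: the agent starts at cell 0 and is rewarded only for reaching
# the last cell. The environment, reward values, and hyperparameters here are
# illustrative assumptions, not anything taken from DeepLoco itself.
import random

N_STATES = 6            # cells 0..5; cell 5 is the goal
ACTIONS = [-1, +1]      # step left or step right
EPSILON = 0.1           # exploration rate
ALPHA = 0.5             # learning rate
GAMMA = 0.9             # discount factor

# Q[state][action_index] estimates the long-term reward of taking each action.
Q = [[0.0, 0.0] for _ in range(N_STATES)]

def step(state, action):
    """Apply an action and return (next_state, reward, done)."""
    next_state = min(max(state + action, 0), N_STATES - 1)
    if next_state == N_STATES - 1:
        return next_state, 1.0, True   # reached the goal
    return next_state, 0.0, False

for episode in range(200):
    state, done = 0, False
    while not done:
        # Epsilon-greedy: mostly exploit the best known action, sometimes explore.
        if random.random() < EPSILON:
            a = random.randrange(2)
        else:
            a = 0 if Q[state][0] >= Q[state][1] else 1
        next_state, reward, done = step(state, ACTIONS[a])
        # Q-learning update: nudge the estimate toward reward + discounted future value.
        best_next = max(Q[next_state])
        Q[state][a] += ALPHA * (reward + GAMMA * best_next - Q[state][a])
        state = next_state

# After training, the greedy policy walks straight toward the goal.
print([("left", "right")[0 if q[0] >= q[1] else 1] for q in Q[:-1]])
```

The agent is never told how to reach the goal; it simply tries actions, observes the rewards, and gradually settles on the behavior that earns the most.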
In the case of DeepLoco, the reward is getting from Point A to Point B as efficiently as possible, all while being challenged by everything from navigating narrow cliffs to surviving bombardments of objects. As it does so, it learns from its environment how to balance, walk, and even dribble a soccer ball. It’s like watching your kid grow up, except that in this case your kid is a pair of disembodied AI legs powered by Skynet!
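To make that “Point A to Point B” reward more concrete, the sketch below shows one plausible shape for such a signal: reward progress toward a goal position and lightly penalize effort. The function name, arguments, and weights are hypothetical; the paper’s actual reward terms are more involved.

```python
# An illustrative (hypothetical) reward in the spirit of "get from A to B
# efficiently": reward progress toward the goal and lightly penalize effort.
# The exact terms and weights used by DeepLoco differ; this only shows the
# general shape such a reward can take.
import math

def locomotion_reward(prev_pos, pos, goal, joint_torques, effort_weight=0.01):
    """Reward = progress made toward the goal this step, minus an effort penalty."""
    progress = math.dist(prev_pos, goal) - math.dist(pos, goal)  # positive when moving closer
    effort = sum(t * t for t in joint_torques)                   # discourage wasteful motion
    return progress - effort_weight * effort

# Example: stepping from (0, 0) to (0.3, 0) toward a goal at (5, 0).
print(locomotion_reward((0.0, 0.0), (0.3, 0.0), (5.0, 0.0), joint_torques=[0.2, 0.1, 0.05]))
```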
Joking aside, it is another intriguing example of the power of reinforcement learning. While the technology could be applied in any number of ways (such as helping animators more easily populate giant computer-generated crowd scenes in movies), its most game-changing use would almost certainly be in robotics. Applied to cutting-edge walking robots like the ones we have seen from Boston Dynamics, DeepLoco could help develop machines that move more intuitively through a range of environments.
A paper describing the work, titled “DeepLoco: Dynamic Locomotion Skills Using Hierarchical Deep Reinforcement Learning,” was published in the journal ACM Transactions on Graphics.