
World’s most advanced robotic hand is approaching human-level dexterity

Remember when the idea of a robotic hand was a clunky mitt that could do little more than crush things in its iron grip? Well, such clichés should be banished for good, thanks to some impressive work coming out of the WMG department at the U.K.’s University of Warwick.


If the research lives up to its potential, robot hands could pretty soon be every bit as nimble as their flesh-and-blood counterparts. And it’s all thanks to some impressive simulation-based training, new A.I. algorithms, and the Shadow Robot Dexterous Hand created by the U.K.-based Shadow Robot Company (which Digital Trends has covered in detail before).

Researchers at WMG Warwick have developed algorithms that can imbue the Dexterous Hand with impressive manipulation capabilities, enabling two robot hands to throw objects to one another or spin a pen around between their fingers.

“The Shadow Robot Company [is] manufacturing a robotic hand that is very similar to the human hand,” Giovanni Montana, professor of Data Science, told Digital Trends. “However, so far this has mostly been used for teleoperation applications, where a human operator controls the hand remotely. Our research aims at giving the hand the ability to learn how to manipulate objects on its own, without human intervention. In terms of demonstrating new abilities, we’ve focused on hand manipulation tasks that are deemed very difficult to learn.”

In a paper titled “Solving Challenging Dexterous Manipulation Tasks With Trajectory Optimisation and Reinforcement Learning,” the Warwick researchers created 3D simulations of the hands using a physics engine called MuJoCo (Multi-Joint Dynamics with Contact) that was developed at the University of Washington.
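
For readers curious what a MuJoCo simulation looks like in code, here is a minimal sketch using the open-source mujoco Python bindings. The toy model below, a single falling box, is purely illustrative; the researchers’ actual Shadow Hand models involve dozens of actuated joints and contacts, but the load-and-step loop is the same idea.

```python
import mujoco

# A toy model: one free-falling box. The researchers' simulations use detailed
# Shadow Hand models; this stand-in only illustrates MuJoCo's load/step loop.
XML = """
<mujoco>
  <worldbody>
    <body name="box" pos="0 0 1">
      <freejoint/>
      <geom type="box" size="0.05 0.05 0.05" mass="0.1"/>
    </body>
  </worldbody>
</mujoco>
"""

model = mujoco.MjModel.from_xml_string(XML)
data = mujoco.MjData(model)

# Step the physics forward and read back the box's height.
for _ in range(500):
    mujoco.mj_step(model, data)

print("box height after 500 steps:", data.body("box").xpos[2])
```

MuJoCo’s fast contact dynamics are a large part of why it is so widely used for this kind of training, since reinforcement learning typically needs a very large number of simulated trials.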

The work, which is still in progress, is impressive because it showcases tasks that require two robot hands, such as catching. This adds extra difficulty to the learning process. The researchers think the algorithms represent one of the most impressive examples to date of autonomously learning to complete challenging dexterous manipulation tasks.

Video: Shadow Robot Hand Algorithms Simulation by WMG, University of Warwick

The algorithms that power the hands

The breakthrough involves two algorithms. First, a planning algorithm produces examples of how the task should be performed. Then a reinforcement learning algorithm, which learns through trial and error, practices repeatedly to achieve this action flawlessly. It does this using a reward function to assess how well it’s doing.
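
To make that two-stage idea concrete, here is a self-contained toy sketch in Python. The task, rotating a simulated one-dimensional “pen” a quarter turn and bringing it to rest, and every name in the code are illustrative assumptions, not the researchers’ environments or algorithms; a crude hand-written plan stands in for trajectory optimisation, and random-search refinement stands in for reinforcement learning.

```python
import numpy as np

# A deliberately simplified stand-in for the two-stage approach described
# above: a planner proposes a solution first, then trial-and-error refinement
# improves it under a reward function. None of this is the authors' code.

rng = np.random.default_rng(0)
HORIZON, DT, DAMPING, TARGET = 60, 0.05, 0.98, np.pi / 2

def rollout(torques):
    """Simulate the pen and score the torque sequence with a reward."""
    theta, omega = 0.0, 0.0
    for u in np.clip(torques, -1.0, 1.0):          # respect torque limits
        omega = DAMPING * omega + u * DT           # toy rotational dynamics
        theta += omega * DT
    # Reward: finish near the target angle with the pen nearly at rest.
    return -(theta - TARGET) ** 2 - omega ** 2

# Stage 1: "planning" -- a crude bang-bang guess (accelerate, then brake),
# standing in for the trajectory-optimisation stage.
plan = np.concatenate([np.ones(HORIZON // 2), -np.ones(HORIZON // 2)])

# Stage 2: trial-and-error practice, standing in for reinforcement learning:
# keep random perturbations of the plan whenever they raise the reward.
best, best_reward = plan.copy(), rollout(plan)
for _ in range(2000):
    trial = best + rng.normal(scale=0.05, size=HORIZON)
    reward = rollout(trial)
    if reward > best_reward:
        best, best_reward = trial, reward

print(f"planned reward: {rollout(plan):.3f}, refined reward: {best_reward:.3f}")
```

Crucial to the second stage is how the reward itself is defined, which Charlesworth explains below.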

“Ideally, you want to define a reward which is simple to specify, and doesn’t require a huge amount of engineering [and] tweaking, but which is also able to provide regular feedback to guide the learning,” Henry Charlesworth, another researcher on the project, told Digital Trends. “In the case of the pen-spinning task, we define a simple reward based on the pen’s angular velocity, as well as a slight negative reward based on how far the pen deviates from lying in the horizontal plane. In this case, ‘better’ means the pen is rotating as fast as possible whilst remaining ‘horizontal’ relative to the hand.”
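
As a rough illustration of the kind of reward Charlesworth describes, the sketch below rewards the pen’s spin rate and subtracts a small penalty for tilting out of the horizontal plane. The weighting and the way the tilt is measured are assumptions made for this example, not the values used in the paper.

```python
import numpy as np

# A hedged sketch of a pen-spinning reward along the lines described above:
# reward the pen's angular velocity, minus a slight penalty for how far the
# pen deviates from the horizontal plane. The weight and tilt measure are
# illustrative assumptions, not the paper's actual values.

def pen_spin_reward(spin_rate, pen_axis, tilt_weight=0.1):
    """spin_rate: angular velocity about the pen's long axis (rad/s).
    pen_axis: unit 3-vector giving the pen's long-axis direction."""
    # A perfectly horizontal pen has no vertical component in its axis
    # direction, so |z| measures how far it has tilted out of the plane.
    tilt = abs(pen_axis[2])
    return spin_rate - tilt_weight * tilt

# A fast, flat spin scores best; tilting the pen costs a little reward.
flat = np.array([1.0, 0.0, 0.0])
tilted = np.array([0.7, 0.0, 0.71])
print(pen_spin_reward(8.0, flat))     # 8.0
print(pen_spin_reward(8.0, tilted))   # ~7.93
print(pen_spin_reward(3.0, flat))     # 3.0
```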

Functional robotic hands aren’t just a cool demo. They could have plenty of applications in the real world. For example, more capable robotic hands could be useful in computer assembly, where working with microchips requires a level of precision that currently only human hands can achieve. They could also be used in robotic surgery, an application the Warwick researchers are currently investigating.

There’s a catch, though: Currently, the hand algorithms, which show almost human levels of motion, have only been demonstrated in virtual reality simulation. Translating the algorithms to physical hardware is the next step of the project.

“It does definitely add an extra layer of complexity, because although the simulator is reasonably accurate, it can never be perfect,” Charlesworth said. “This means that a policy you train in the simulated environment cannot be directly transferred to a physical hand. However, there has been a lot of successful work recently that looks at how you can make a policy trained in simulation more robust, such that it can operate on a physical robot.”
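
The article doesn’t name a specific sim-to-real technique, but one widely used option is domain randomisation: train with slightly different physics parameters each episode so the policy can’t overfit to the simulator’s quirks. The sketch below shows the idea with made-up parameter ranges.

```python
import numpy as np

# Domain randomisation, sketched: before each training episode, perturb the
# simulator's physics so the learned policy has to cope with a range of
# conditions rather than one imperfect model of reality. The parameters and
# ranges below are illustrative assumptions, not values from the paper.

rng = np.random.default_rng(42)

def sample_physics_params():
    """Sample one episode's worth of perturbed simulator parameters."""
    return {
        "object_mass_kg": rng.uniform(0.03, 0.07),    # around a nominal 0.05
        "contact_friction": rng.uniform(0.5, 1.5),    # friction scale factor
        "actuator_gain": rng.uniform(0.9, 1.1),       # motor strength scale
        "sensor_noise_std": rng.uniform(0.0, 0.02),   # observation noise
    }

# In a full training loop these values would be written into the simulator
# before each episode; here we just show three sampled configurations.
for episode in range(3):
    print(f"episode {episode}: {sample_physics_params()}")
```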
