
To build a lifelike robotic hand, we first have to build a better robotic brain

Our hands are a bridge between the brain's intentions and the physical world, turning our thoughts into actions. If robots are going to truly live up to their potential for physical interaction, they'll need some similar instrument at their disposal.


We know that roboticists are already building some astonishingly intricate robot hands. But those hands also need the smarts to control them: the ability to grip objects properly according to both their shape and their hardness or softness. You don't want your future robot co-worker to crush your hand into gory mush when it shakes hands with you on its first day in the office.

Fortunately, this is what researchers from Germany have been working on: a new, more brain-inspired neural network that allows a robotic hand (in this case, an existing model called the Schunk SVH 5-finger hand) to learn how to pick up objects of different shapes and hardness levels by selecting the correct grasping motion. In a proof-of-concept demonstration, the robot hand was able to pick up a varied range of objects including a plastic bottle, a tennis ball, a sponge, a rubber duck, a pen, and an assortment of balloons.

Robot arm gripper. FZI Forschungszentrum Informatik Karlsruhe

“Our approach has two main components: The modeling of motion of the hand, and the compliant control,” Juan Camilo Vasquez Tieck, a research scientist at FZI Forschungszentrum Informatik in Karlsruhe, Germany, told Digital Trends. “The hand is modeled in a hierarchy of different layers, and the motion is represented with motion primitives. All the joints of one finger are coordinated by a finger-primitive. For one particular grasping motion, all the fingers are coordinated by a hand-primitive.”

In other words, he explained, it can close its hand in different ways.
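The layered hierarchy Tieck describes can be sketched in code. This is a minimal illustration of the idea only: the class names, joint counts, and angle values are invented for the example and are not the FZI implementation.

```python
class FingerPrimitive:
    """Coordinates all joints of one finger toward target angles."""

    def __init__(self, name, joint_targets):
        self.name = name
        self.joint_targets = joint_targets  # one target angle per joint

    def joint_commands(self, scale):
        # scale in [0, 1] interpolates from fully open (0) to fully closed (1)
        return [scale * target for target in self.joint_targets]


class HandPrimitive:
    """One grasping motion: coordinates a set of finger-primitives."""

    def __init__(self, name, fingers):
        self.name = name
        self.fingers = fingers

    def command(self, scale):
        # the hand-primitive fans one scale value out to every finger
        return {f.name: f.joint_commands(scale) for f in self.fingers}


# Illustrative joint targets (radians) for a two-finger closing motion.
thumb = FingerPrimitive("thumb", [0.4, 0.9])
index = FingerPrimitive("index", [0.3, 1.2, 0.8])
power_grasp = HandPrimitive("power_grasp", [thumb, index])
print(power_grasp.command(0.5))  # half-closed hand
```

Different hand-primitives (a pinch, a power grasp) would simply combine different finger-primitives, which is how one architecture can close the hand in different ways.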

The system represents a different way of developing robotic systems for carrying out these kinds of actions. The neural network involved allows the hand to grasp more intelligently, making real-time adaptations where necessary.
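One common way to get the kind of real-time adaptation described above is compliant closing: each finger advances until it feels resistance, so a sponge is gripped as gently as its softness demands. The sketch below illustrates that general idea under invented thresholds and a stand-in sensor; it is not the paper's control law or the actual Schunk SVH interface.

```python
def close_until_contact(read_resistance, step=0.05, max_angle=1.5,
                        resistance_limit=0.8):
    """Advance a joint in small steps, stopping when resistance is felt.

    read_resistance is a stand-in for a real sensor (e.g. motor current)
    that reports how hard the object is pushing back at a given angle.
    """
    angle = 0.0
    while angle < max_angle:
        if read_resistance(angle) >= resistance_limit:
            break  # compliant stop: the object is pushing back
        angle += step
    return angle


# Simulated object: no resistance until the fingertip reaches it at 0.6 rad.
hard_ball = lambda angle: 0.0 if angle < 0.6 else 1.0
print(close_until_contact(hard_ball))  # stops near 0.6 rad
```

A softer object would push back more gradually, so the same loop naturally closes further around it before stopping.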

“Spiking neural networks (SNN) are a special kind of artificial neural networks that model closer the way real neurons work,” Tieck continued. “There are many spiking neuron models based on neuroscience research. For this work, we used leaky integrate and fire (LIF) neurons. The communication between neurons is event-based, using spikes. Spikes are discrete impulses, and not a continuous signal. This … reduces the amount of information being sent between neurons and provides great power efficiency.”
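A single LIF neuron of the kind Tieck mentions is simple to simulate: the membrane potential leaks toward rest, integrates incoming current, and emits a discrete spike (then resets) when it crosses a threshold. The sketch below uses a basic Euler integration with illustrative parameter values, not the ones from the paper.

```python
def simulate_lif(input_current, dt=1e-3, tau=0.02,
                 v_rest=0.0, v_reset=0.0, v_threshold=1.0, resistance=1.0):
    """Simulate one leaky integrate-and-fire neuron.

    Returns the time steps at which the neuron spiked; between spikes
    no information is sent, which is the source of the efficiency
    mentioned above.
    """
    v = v_rest
    spike_times = []
    for t, i_in in enumerate(input_current):
        # Euler step of  tau * dv/dt = -(v - v_rest) + R * I
        v += dt * (-(v - v_rest) + resistance * i_in) / tau
        if v >= v_threshold:
            spike_times.append(t)  # event-based output: just the spike time
            v = v_reset
    return spike_times


# A constant supra-threshold current produces a regular spike train;
# zero input produces no spikes at all.
print(simulate_lif([2.0] * 200))
print(simulate_lif([0.0] * 200))
```

Note the contrast with a conventional artificial neuron, which outputs a continuous activation value at every step regardless of whether anything changed.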

A paper describing the work was recently published in the journal IEEE Robotics and Automation Letters.

Luke Dormehl
Former Digital Trends Contributor
World’s most advanced robotic hand is approaching human-level dexterity

Remember when the idea of a robotic hand was a clunky mitt that could do little more than crush things in its iron grip? Well, such clichés should be banished for good based on some impressive work coming out of the WMG department at the U.K.’s University of Warwick.

If the research lives up to its potential, robot hands could soon be every bit as nimble as their flesh-and-blood counterparts. And it's all thanks to some impressive simulation-based training, new A.I. algorithms, and the Shadow Robot Dexterous Hand created by the U.K.-based Shadow Robot Company (which Digital Trends has covered in detail before).

A.I. fail as robot TV camera follows bald head instead of soccer ball


While artificial intelligence (A.I.) has clearly made astonishing strides in recent years, the technology is still susceptible to the occasional fail.

This basic human skill is the next major milestone for A.I.

Remember the amazing, revelatory feeling when you first discovered the existence of cause and effect? That’s a trick question. Kids start learning the principle of causality from as early as eight months old, helping them to make rudimentary inferences about the world around them. But most of us don’t remember much before the age of around three or four, so the important lesson of “why” is something we simply take for granted.

It’s not only a crucial lesson for humans to learn, but also one that today’s artificial intelligence systems are pretty darn bad at. While modern A.I. is capable of beating human players at Go and driving cars on busy streets, this is not necessarily comparable with the kind of intelligence humans might use to master these abilities. That’s because humans, even small infants, possess the ability to generalize by applying knowledge from one domain to another. For A.I. to live up to its potential, this is something it also needs to be able to do.
