Automation makes things easier. It also makes things potentially scarier as you put your well-being in the hands of technology that has to make spur-of-the-moment calls without first consulting you, the user. A self-driving car, for instance, must be able to spot a traffic jam or swerving cyclist and react appropriately. If it can do this effectively, it’s a game-changer for transportation. If it can’t, the results may be fatal.
At the University of Waterloo, Canada, researchers are working on just this problem — only applied to the field of wearable robot exosuits. These suits, which can range from industrial wearables reminiscent of Aliens’ Power Loader to assistive suits for individuals with mobility impairments resulting from age or physical disabilities, are already in use as augmentation devices to aid their wearers. But they’ve been entirely manual in their operation. Now, researchers want to give them a mind of their own.
To that end, the University of Waterloo investigators are developing A.I. tools like computer vision that will allow exosuits to sense their surroundings and adjust movements accordingly — such as being able to spot flights of stairs and climb them automatically or otherwise respond to different walking environments in real time. Should they pull it off, it will forever change the usefulness of these assistive devices. Doing so isn’t easy, however.
The biggest challenge for robotic exoskeletons
“Control is generally regarded as one of the biggest challenges to developing robotic exoskeletons for real-world applications,” Brokoslaw Laschowski, a Ph.D. candidate in the university’s Systems Design Engineering department, told Digital Trends. “To ensure safe and robust operation, commercially available exoskeletons use manual controls like joysticks or mobile interfaces to communicate the user’s locomotor intent. We’re developing autonomous control systems for robotic exoskeletons using wearable cameras and artificial intelligence, [so as to alleviate] the cognitive burden associated with human control and decision-making.”
As part of the project, the team had to build a training resource for its A.I.-powered environment classification system: the ExoNet database, which it claims is the largest-ever open-source image dataset of human walking environments. The data was gathered by having participants wear a chest-mounted camera and walk around local environments while their movement and locomotion were recorded; the resulting images were then used to train neural networks.
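For readers curious what that pipeline might look like in practice, here is a rough, hypothetical sketch in Python (using PyTorch) of how a chest-camera image dataset could be loaded and labeled for training a classifier. The folder names and environment classes are illustrative assumptions, not ExoNet’s actual structure.

```python
# Hypothetical sketch only: loading a chest-camera walking-environment dataset
# for training. Folder layout and class names are illustrative, not ExoNet's
# actual schema.
from torch.utils.data import DataLoader
from torchvision import datasets, transforms

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),                    # common CNN input size
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],  # ImageNet statistics
                         std=[0.229, 0.224, 0.225]),
])

# Expects one folder per environment class, e.g. level_ground/, incline_stairs/
dataset = datasets.ImageFolder("walking_environments/train", transform=preprocess)
loader = DataLoader(dataset, batch_size=32, shuffle=True, num_workers=4)

print(dataset.classes)               # the labels a network would learn to predict
images, labels = next(iter(loader))  # one batch of frames and environment labels
```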
“Our environment classification system uses deep learning,” Laschowski continued. “However, high-performance deep-learning algorithms tend to be quite computationally expensive, which is problematic for robotic exoskeletons with limited operating resources. Therefore, we’re using efficient convolutional neural networks with minimal computational and memory storage requirements for the environment classification. These deep-learning algorithms can also automatically and efficiently learn optimal image features directly from training data, rather than using hand-engineered features as is traditionally done.”
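As an illustration of the kind of approach Laschowski describes, the sketch below fine-tunes a lightweight, off-the-shelf convolutional network (MobileNetV2, chosen purely as a stand-in for an efficient architecture) to classify walking environments. The class count and training details are assumptions, not the team’s actual setup.

```python
# Illustrative only: adapting a lightweight CNN (MobileNetV2) to classify
# walking environments. The Waterloo team's exact architecture may differ.
import torch
import torch.nn as nn
from torchvision import models

NUM_CLASSES = 3  # e.g. level ground, incline stairs, decline stairs (assumed)

model = models.mobilenet_v2(weights=models.MobileNet_V2_Weights.DEFAULT)
model.classifier[1] = nn.Linear(model.last_channel, NUM_CLASSES)  # new output head

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

def train_step(images, labels):
    """One optimization step on a batch of chest-camera frames."""
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```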
John McPhee, a professor of Systems Design Engineering at the University of Waterloo, told Digital Trends: “Essentially, we are replacing manual controls — [like] stop, start, lift leg for step — with an automated solution. One analogy is an automatic powertrain in a car, which replaces manual shifting. Nowadays, most people drive automatics because it is more efficient, and the user can focus on their environment more rather than operating the clutch and stick. In a similar way, an automated high-level controller for an exo will open up new opportunities for the user [in the form of] greater environmental awareness.”
As with a self-driving car, the researchers note that the human user will retain the ability to override the automated control system if the need arises. While it will still take a bit of faith to trust, for instance, that your exosuit will spot a flight of descending stairs before launching down them, the wearer can take back control in scenarios where it’s necessary.
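To picture the kind of decision-making McPhee and Laschowski describe, here is a deliberately simplified, hypothetical sketch of a high-level controller in which the camera’s environment prediction selects a locomotion mode, while an explicit user command always takes precedence. The mode names, confidence threshold, and fall-back behavior are assumptions rather than details from the Waterloo system.

```python
# Hypothetical sketch of a high-level exoskeleton controller: the vision
# system's predicted environment selects a locomotion mode, but an explicit
# user command always overrides the automated decision. All names are assumed.
from dataclasses import dataclass
from typing import Optional

MODE_FOR_ENVIRONMENT = {
    "level_ground": "walk",
    "incline_stairs": "stair_ascent",
    "decline_stairs": "stair_descent",
}

@dataclass
class ControlDecision:
    mode: str
    source: str  # "user" or "vision"

def select_mode(predicted_env: str,
                confidence: float,
                user_command: Optional[str] = None,
                threshold: float = 0.9) -> ControlDecision:
    """Pick the locomotion mode for the next gait cycle."""
    if user_command is not None:      # manual override always wins
        return ControlDecision(user_command, "user")
    if confidence < threshold:        # uncertain prediction: stay conservative
        return ControlDecision("walk", "vision")
    return ControlDecision(MODE_FOR_ENVIRONMENT[predicted_env], "vision")

# Example: the camera sees descending stairs with high confidence, no user input.
print(select_mode("decline_stairs", confidence=0.97))
```

Falling back to ordinary walking when the prediction is uncertain is one plausible way to reflect the safety-first framing the researchers emphasize; the real system’s behavior may differ.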
Still prepping for prime time
Right now, the project is a work in progress. “We’re currently focusing on optimizing our A.I.-powered environment classification system, specifically improving the classification accuracy and real-time performance,” said Laschowski. “This technical engineering development is essential to ensuring safe and robust operation for future clinical testing using robotic exoskeletons with autonomous control.”
Should all go according to plan, it hopefully won’t be too long before such algorithms can be deployed in commercially available exosuits. These suits are already becoming more widespread, thanks to innovative companies like Sarcos Robotics, and are being used in ever more varied settings. They can also greatly enhance human capabilities beyond what the wearer could manage without the suit.
In some ways, it’s highly reminiscent of the original conception of the cyborg, not as some nightmarish Darth Vader or RoboCop amalgamation of half-human and half-machine, but, as researchers Manfred Clynes and Nathan Kline wrote in the 1960s, as “an organizational system in which … robot-like problems [are] taken care of automatically, leaving [humans] free to explore, to create, to think, and to feel.” Shorn of its faintly hippy vibes (this was the ’60s), the idea still stands: When the robot autonomously takes care of the mundane problems associated with navigation, its human user is free to focus on more important, engaging things. After all, most people don’t have to consciously think about the minutiae of putting one foot in front of the other when they walk. Why should someone in a robot exosuit have to do so?
The latest paper dedicated to this research was recently published in the journal IEEE Transactions on Medical Robotics and Bionics.