
Like a wearable guide dog, this backpack helps blind people navigate

Visual Assistance System for the Visually Impaired

In “Secondhand Spoke,” the 15th episode of the 12th season of Family Guy, teenage son Chris Griffin is being bullied. With Chris unable to come up with responses to the verbal gibes of his classmates, his smarter baby brother, Stewie, hops in a backpack so that Chris can surreptitiously carry him around. Prompted by Stewie, Chris not only manages to get back at the bullies, but even winds up getting nominated for class president for his troubles.


That Family Guy B-plot bears only the most passing of resemblances to a new project carried out by Intel and the University of Georgia. Nonetheless, it's an intriguing one: a smart backpack that helps its wearer navigate the surrounding environment, all through the power of speech.

What researcher Jagadish Mahendran and his team have developed is an A.I.-powered, voice-activated backpack designed to help its wearer perceive the surrounding world. The backpack, which could be particularly useful as an alternative to guide dogs for visually impaired users, pairs a camera worn in a vest jacket with a fanny pack containing a battery, all connected to a computing unit so that the system can respond to voice commands by audibly describing the world around the wearer.

That means detecting traffic signs, traffic conditions, changes in elevation, and crosswalks, combining that visual information with location data, and turning it all into useful spoken descriptions delivered via Bluetooth earphones.
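
The last hop of that pipeline, turning detections into speech, is conceptually simple. As a rough illustration (not the project's actual code), here is a minimal sketch using the off-the-shelf pyttsx3 text-to-speech library; the detection format, object labels, and three-meter threshold are all invented for the example.

```python
# Hypothetical sketch: turning nearby detections into a spoken alert.
# pyttsx3 (pip install pyttsx3) is an offline text-to-speech library;
# audio goes to the system's output device, e.g. paired Bluetooth earphones.
import pyttsx3

def describe(detections):
    """detections: list of (label, distance_m, bearing) tuples (invented format)."""
    phrases = [
        f"{label}, {distance_m:.0f} meters ahead, {bearing}"
        for label, distance_m, bearing in detections
        if distance_m < 3.0  # only announce nearby hazards
    ]
    return ". ".join(phrases)

engine = pyttsx3.init()
alert = describe([("crosswalk", 2.0, "slightly left"),
                  ("low branch", 1.5, "center")])
if alert:
    engine.say(alert)
    engine.runAndWait()  # blocks until the phrase has been spoken
```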

A useful assistive tool

“The idea of developing an A.I.-based visual-assistance system occurred to me eight years ago in 2013 during my master’s,” Mahendran told Digital Trends. “But I could not make much progress back then for [a] few reasons: I was new to the field and deep learning was not mainstream in computer vision. However, the real inspiration happened to me last year when I met my visually impaired friend. As she was explaining her daily challenges, I was struck by this irony: As a perception and A.I. engineer I have been teaching robots how to see for years, while there are people who cannot see. This motivated me to use my expertise, and build a perception system that can help.”

The A.I. navigation backpack setup. Jagadish Mahendran

The system contains some impressive technology, including a Luxonis OAK-D spatial A.I. camera that leverages OpenCV's Artificial Intelligence Kit with Depth, powered by Intel. It is capable of running advanced deep neural networks while also providing high-level computer vision functionality, complete with a real-time depth map, color information, and more.
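
For a sense of what driving such a camera looks like in practice, here is a minimal sketch using Luxonis' open-source DepthAI Python API (pip install depthai), the usual way to program OAK-D hardware. The stream names and preview size are arbitrary choices for the example, not details from Mahendran's system.

```python
import depthai as dai

pipeline = dai.Pipeline()

# Color camera supplies frames for an object-detection network
cam = pipeline.create(dai.node.ColorCamera)
cam.setPreviewSize(300, 300)

# The stereo mono-camera pair feeds a depth node for per-pixel distance
left = pipeline.create(dai.node.MonoCamera)
right = pipeline.create(dai.node.MonoCamera)
left.setBoardSocket(dai.CameraBoardSocket.LEFT)
right.setBoardSocket(dai.CameraBoardSocket.RIGHT)
stereo = pipeline.create(dai.node.StereoDepth)
left.out.link(stereo.left)
right.out.link(stereo.right)

# Stream both results back to the host
xout_rgb = pipeline.create(dai.node.XLinkOut)
xout_rgb.setStreamName("rgb")
cam.preview.link(xout_rgb.input)
xout_depth = pipeline.create(dai.node.XLinkOut)
xout_depth.setStreamName("depth")
stereo.depth.link(xout_depth.input)

with dai.Device(pipeline) as device:
    rgb_q = device.getOutputQueue("rgb", maxSize=4, blocking=False)
    depth_q = device.getOutputQueue("depth", maxSize=4, blocking=False)
    frame = rgb_q.get().getCvFrame()   # BGR image, ready for a detector
    depth = depth_q.get().getFrame()   # uint16 depth map in millimeters
```

Because the neural network inference runs on the camera's own Movidius VPU rather than the host, the computing unit in the backpack can stay small and power-efficient.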

“The success of the project is that we are able to run many complex A.I. models on a setup that has a simple and small form factor and is cost-effective, thanks to [the] OAK-D camera kit that is powered by Intel’s Movidius VPU, an A.I. chip, along with Intel OpenVINO software,” Mahendran said. “Apart from A.I., I have used multiple technologies such as GPS, point cloud processing, and voice recognition.”
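
On the voice-recognition side, a generic sketch along the following lines shows the shape of the problem. It uses the widely available SpeechRecognition Python package rather than the project's own voice stack, and the command words are invented for illustration.

```python
# Hypothetical sketch of listening for a voice command with the
# SpeechRecognition package (pip install SpeechRecognition pyaudio).
import speech_recognition as sr

recognizer = sr.Recognizer()
with sr.Microphone() as source:
    recognizer.adjust_for_ambient_noise(source)  # calibrate for street noise
    audio = recognizer.listen(source)

try:
    command = recognizer.recognize_google(audio).lower()
    if "describe" in command:
        print("Trigger scene description")      # hand off to the vision pipeline
    elif "locate" in command:
        print("Trigger GPS location readout")   # hand off to the GPS module
except sr.UnknownValueError:
    print("Could not understand the command")
```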

Currently in the testing phase

As with any wearable device, a big challenge involves making it something that people would actually want to wear. Nobody wants to look like a science-fiction cyborg outside of Comic-Con.

Fortunately, Mahendran's A.I. vest does well on this front. It conforms to what the late Xerox PARC computer scientist Mark Weiser said was necessary for ubiquitous computing: receding into the background without drawing attention to itself. The components are all hidden away from view, with even the camera (which, by design, must be visible in order to record the necessary images) looking out at the world through three tiny holes in the vest.


“The system is simple, wearable, and unobtrusive so that the user doesn’t get unnecessary attention from other pedestrians,” Mahendran said.

Currently, the project is in the testing phase. “I did the initial [tests myself] in downtown Monrovia, California,” Mahendran said. “The system is robust, and can run in real time.”

Mahendran noted that, in addition to detecting outdoor obstacles ranging from bikes to overhanging tree branches, it can also be useful in indoor settings, such as detecting open kitchen cabinet doors and the like. In the future, he hopes that members of the public who need such a tool will be able to try it out for themselves.

“We have already formed a team called Mira, which is a group of volunteers from various backgrounds, including people who are visually impaired,” Mahendran said. “We are growing the project further with a mission to provide an open-source, A.I.-based visual assistance system for free. We are currently in the process of raising funds for our initial phase of testing.”
