
Amazing new headset translates thoughts into speech for vocally impaired wearers

Vicky Just/University of Bath

“In a nutshell,” said Scott Wellington, “we’re hoping to create a technology that can take your imagined speech — that is, you think of a word or a sentence, without moving or speaking at all — and translate your brain signals into synthesized speech of that same word or sentence.”

That’s quite a mission, but Wellington, a Ph.D. researcher at the University of Bath’s Centre for Accountable, Responsible and Transparent Artificial Intelligence, may just be up to the job.


For the past several years, via his previous work at the University of Edinburgh and a startup called SpeakUnique, Wellington has been working on an ambitious but potentially game-changing project: creating personalized synthetic voices for people whose speech is impaired, or who have lost the ability to speak entirely, as a result of neurodegenerative conditions like Motor Neurone Disease (MND).


Synthetic voices for people with potentially debilitating conditions like MND have been around for years. Famously, the late theoretical physicist Stephen Hawking communicated using a synthesized computer voice, created for him by a Massachusetts Institute of Technology engineer named Dennis Klatt, as far back as 1984. The voice, a default male voice called “Perfect Paul,” was operated with a handheld clicker that let Hawking select words on a computer. Later, when he lost the use of his hands, he switched to a system that detected his facial movements.


Wellington’s work would be a step forward from this. For one thing, where recordings of a person’s voice exist, or suitable voice samples can be assembled, he can piece together a personalized synthetic voice that sounds like the person it’s being used for. Furthermore, this voice could be controlled entirely through the user’s thoughts, all using a humble, commercially available gamer’s headset.

Promising developments

“There have already been some promising developments in the field from researchers around the world, but these have all used a process called electrocorticography, which requires a craniotomy,” Wellington said.

A craniotomy, as he points out, is invasive brain surgery. The goal of his work at the University of Bath is to achieve the effect of “imagined speech recognition,” but without the need for someone to cut open your head and plant sensors onto the surface of your brain.

“For people who have lost their natural speech, one of the biggest causes of frustration is the inability to communicate their thoughts to friends and family with the same speed and naturalness as they had previously,” he said. “For instance, for people in advanced stages of MND, eye-tracking technologies can allow people with severely impaired motor control to use text-to-speech systems to communicate at around 10 words a minute, and that’s if they’re fluent users of the technology. You and I can speak 10 words in a few seconds. You can see why this is one of the biggest causes of frustration for people with motor impairment who have lost their speech.”

In the University of Bath setup, the gaming headset employed is equipped with an EEG (electroencephalography) system to detect the wearer’s brain waves. These are then processed by a computer that uses neural networks and deep learning to identify the user’s intended speech.
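To make that pipeline concrete, here is a heavily simplified sketch of the kind of processing such a system might perform. Everything below (the channel count, sample rate, band-power features, toy vocabulary, and the synthetic "recordings") is an illustrative assumption rather than the Bath team's actual method, and the classifier is a bare-bones softmax standing in for their deep networks:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: 8 EEG channels sampled at 256 Hz, 1-second windows,
# and a small "closed" vocabulary of imagined phones. All names and numbers
# here are invented for illustration.
N_CHANNELS, SAMPLE_RATE = 8, 256
VOCAB = ["aah", "buh", "kuh", "duh"]

def band_power_features(window):
    """Collapse a (channels, samples) EEG window into simple spectral
    band-power features, a common first step before a classifier."""
    spectrum = np.abs(np.fft.rfft(window, axis=1)) ** 2
    freqs = np.fft.rfftfreq(window.shape[1], d=1.0 / SAMPLE_RATE)
    bands = [(4, 8), (8, 13), (13, 30), (30, 45)]  # theta/alpha/beta/gamma
    feats = [spectrum[:, (freqs >= lo) & (freqs < hi)].mean(axis=1)
             for lo, hi in bands]
    return np.concatenate(feats)  # shape: (channels * n_bands,)

def fake_window(label):
    """Fake 'recording': each class gets a different amplitude profile so
    the toy classifier has something to learn."""
    return rng.normal(size=(N_CHANNELS, SAMPLE_RATE)) * (1.0 + 0.3 * label)

X = np.array([band_power_features(fake_window(y))
              for y in range(len(VOCAB)) for _ in range(50)])
y = np.repeat(np.arange(len(VOCAB)), 50)

# Minimal softmax classifier trained by gradient descent.
Xn = (X - X.mean(0)) / (X.std(0) + 1e-9)          # normalize features
Xn = np.hstack([Xn, np.ones((len(Xn), 1))])        # bias term
W = np.zeros((Xn.shape[1], len(VOCAB)))
onehot = np.eye(len(VOCAB))[y]
for _ in range(300):
    logits = Xn @ W
    p = np.exp(logits - logits.max(1, keepdims=True))
    p /= p.sum(1, keepdims=True)
    W -= 0.1 * Xn.T @ (p - onehot) / len(y)       # cross-entropy gradient

accuracy = (np.argmax(Xn @ W, axis=1) == y).mean()
```

Real EEG is far messier than this synthetic data, which is exactly the "phone call in a hurricane" problem Wellington describes below.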


“The goal is to create a new technique that allows more fluent communication by either supporting or, even better, altogether replacing the need to type out what you want to communicate, by using the brain signal to do the ‘typing’ instead,” Wellington said. “With the latest developments in engineering, machine learning, and artificial intelligence, I believe we’re at the stage to begin to make this a reality.”

To train the system, volunteers wore the EEG device while a recording of their own speech was played back to them. At the same time, they had to both imagine saying the sound and vocalize it. While it would be accurate to describe the system as reading thoughts, it would still require the user to silently verbalize the words they wanted to say. (The plus side of this is that there’s no risk of it accidentally reading a wearer’s most private thoughts.)
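The data-collection step described above boils down to cutting labeled windows ("epochs") out of a continuous EEG recording, aligned to the moments each prompt was played. A minimal sketch, with the sample rate, window length, and event list all invented for illustration:

```python
import numpy as np

SAMPLE_RATE = 256      # Hz (assumed)
WINDOW_SECONDS = 1.0   # window cut after each prompt (assumed)
N_CHANNELS = 8

def epoch(recording, events):
    """recording: (channels, samples) continuous EEG.
    events: list of (onset_seconds, label) for each prompt played.
    Returns (trials, labels) with trials shaped (n, channels, window)."""
    win = int(WINDOW_SECONDS * SAMPLE_RATE)
    trials, labels = [], []
    for onset, label in events:
        start = int(onset * SAMPLE_RATE)
        if start + win <= recording.shape[1]:  # skip truncated trials
            trials.append(recording[:, start:start + win])
            labels.append(label)
    return np.stack(trials), labels

rng = np.random.default_rng(1)
recording = rng.normal(size=(N_CHANNELS, 30 * SAMPLE_RATE))  # 30 s fake EEG
events = [(2.0, "aah"), (6.5, "buh"), (11.0, "kuh"), (29.8, "duh")]
trials, labels = epoch(recording, events)  # last event runs off the end
```

Each (window, label) pair then becomes one training example for a classifier like the one sketched earlier.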

The future’s bright, but manage expectations

Wellington was clear that he wants to “manage expectations.” Extracting the all-important signal from the noisy jumble of brain waves is tough. He likened it to trying to hold a phone conversation with someone standing outside in heavy wind, or even a hurricane. “If they’re shouting the same word over and over, yes, probably you’ll get it,” he said. “But a natural, full sentence? Probably not.”


This will hopefully change as the project advances and the team gets better at extracting information from the brain signal. New machine learning techniques should push gaming headsets toward better recognition of imagined natural speech. One added challenge, though one that should prove worthwhile in the end, is that the researchers want whatever hardware they use to be affordable, practical, and mobile.

“[So far] we’ve managed to achieve some success in decoding imagined speech sounds from the brain signal,” Wellington said. “That is, imagine you were sounding out the English language phonically, as children do in school: ‘Aah,’ ‘buh,’ ‘kuh,’ ‘duh,’ ‘ehh,’ ‘guh,’ and so forth. We’ve been able to translate these imagined sounds with a promising degree of accuracy. Of course, this is far from natural speech, but does already allow for a brain-computer interface that can translate a small ‘closed’ vocabulary of distinct words quite reliably. For example, if you wanted the device to speak, from your thoughts, the words for ‘up,’ ‘down,’ ‘left,’ ‘right,’ ‘start,’ ‘stop,’ ‘back,’ ‘forwards,’ [that would be possible].”
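A small closed vocabulary like the one Wellington describes can be decoded quite simply: score each candidate command against the per-phone probabilities a classifier emits over time, and reject low-confidence decodes rather than guess. The sketch below assumes a frame-by-frame phone classifier and uses a crude equal-split alignment in place of a proper sequence model such as CTC; the phone inventory and the command "spellings" are invented, and only a subset of the article's eight commands is shown:

```python
import numpy as np

# Illustrative phone set and command spellings (assumptions, not the
# project's actual inventory).
PHONES = ["ah", "buh", "kuh", "duh", "eh", "guh",
          "p", "t", "s", "r", "f", "w", "l"]
COMMANDS = {
    "up":    ["ah", "p"],
    "down":  ["duh", "ah", "w"],
    "left":  ["l", "eh", "f", "t"],
    "right": ["r", "ah", "t"],
    "start": ["s", "t", "ah", "r", "t"],
    "stop":  ["s", "t", "ah", "p"],
}

def score_command(frame_probs, phones):
    """Average log-probability of a command's phone sequence, aligning each
    phone to an equal share of the frames (a crude stand-in for a proper
    sequence model). frame_probs: (frames, len(PHONES))."""
    chunks = np.array_split(np.arange(len(frame_probs)), len(phones))
    idx = [PHONES.index(p) for p in phones]
    return float(np.mean([np.log(frame_probs[chunk, i].mean() + 1e-9)
                          for chunk, i in zip(chunks, idx)]))

def decode(frame_probs, reject_below=-1.5):
    """Pick the best-scoring command, or None if confidence is too low."""
    scored = {cmd: score_command(frame_probs, ph)
              for cmd, ph in COMMANDS.items()}
    best = max(scored, key=scored.get)
    return best if scored[best] > reject_below else None

def frames_for(phones, frames_per_phone=5, confidence=0.9):
    """Synthetic frames that strongly favour the given phones in order."""
    rows = []
    for p in phones:
        probs = np.full(len(PHONES), (1 - confidence) / (len(PHONES) - 1))
        probs[PHONES.index(p)] = confidence
        rows += [probs] * frames_per_phone
    return np.array(rows)

result = decode(frames_for(COMMANDS["stop"]))  # confidently decoded
unsure = decode(np.full((20, len(PHONES)), 1.0 / len(PHONES)))  # rejected
```

The rejection threshold is the key design choice for assistive use: a wrong word spoken aloud is worse than asking the user to try again.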

Wellington noted that he is excited about developments like Elon Musk’s Neuralink hardware, a “brain chip” implanted beneath the skull, which could prove transformative for work such as this. “As you can imagine, I was left wanting to know what we could achieve if such a device were implanted over the speech- and language-processing regions of the brain,” he said. “There’s certainly an exciting future ahead for this research!”

The work was presented at the Interspeech virtual conference in late October 2020.

Luke Dormehl
Former Digital Trends Contributor
I'm a UK-based tech writer covering Cool Tech at Digital Trends. I've also written for Fast Company, Wired, the Guardian…