
Mind-reading A.I. algorithm can work out what music is playing in your head

Most of us have used apps like Shazam, which can identify songs when we hold our phone up to a speaker. But what if it were possible for an app to identify a piece of music based on nothing more than your thought patterns? Impossible? Perhaps not, according to new research carried out by investigators at the University of California, Berkeley.

In 2014, researcher Brian Pasley and colleagues used a deep-learning algorithm and brain activity, measured with electrodes, to turn a person’s thoughts into digitally synthesized speech. This was achieved by analyzing a person’s brain waves while they were speaking in order to decode the link between speech and brain activity.


Jump forward a few years, and the team has now improved on that earlier research and applied its findings to music. Specifically, the researchers were able to predict which sounds a pianist was thinking of, based on brain activity, with 50 percent greater accuracy than in the previous study.


“During auditory perception, when you listen to sounds such as speech or music, we know that certain parts of the auditory cortex decompose these sounds into acoustic frequencies — for example, low or high tones,” Pasley told Digital Trends. “We tested if these same brain areas also process imagined sounds in the same way you internally verbalize the sound of your own voice, or imagine the sound of classical music in a silent room. We found that there was large overlap, but also distinct differences in how the brain represents the sound of imagined music. By building a machine learning model of the neural representation of imagined sound, we used the model to guess with reasonable accuracy what sound was imagined at each instant in time.”
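Pasley's description amounts to a decoding problem: brain activity at each instant is used to reconstruct the acoustic frequencies present in the heard or imagined sound at that instant. The sketch below is not the team's code; it assumes synthetic data and uses a simple ridge regression from scikit-learn as a stand-in for whatever model the researchers actually trained.

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

# Illustrative sizes only (not from the study):
# 2,000 time points, 64 recording electrodes, 32 spectrogram frequency bins.
rng = np.random.default_rng(0)
n_samples, n_electrodes, n_freq_bins = 2000, 64, 32

# Synthetic stand-ins for neural features and the audio spectrogram they relate to.
neural_features = rng.standard_normal((n_samples, n_electrodes))
true_weights = rng.standard_normal((n_electrodes, n_freq_bins))
spectrogram = neural_features @ true_weights + 0.1 * rng.standard_normal((n_samples, n_freq_bins))

X_train, X_test, y_train, y_test = train_test_split(
    neural_features, spectrogram, test_size=0.2, random_state=0
)

# Map brain activity at each instant to the acoustic frequencies present at that instant.
decoder = Ridge(alpha=1.0)
decoder.fit(X_train, y_train)
reconstructed = decoder.predict(X_test)

# Correlation between the true and reconstructed spectrograms is a common accuracy measure.
corr = np.corrcoef(y_test.ravel(), reconstructed.ravel())[0, 1]
print(f"Reconstruction correlation on held-out data: {corr:.2f}")
```

In the actual study, the features would come from electrodes placed over auditory cortex rather than random numbers, and the reconstruction would be scored against the real spectrogram of the heard or imagined music.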

For the study, the team recorded a pianist’s brain activity as he played music on an electric keyboard, allowing them to match up the brain patterns with the notes being played. They then repeated the experiment with the keyboard’s sound turned off, asking the musician to imagine the notes as he played them. This training allowed them to create their music-predicting algorithm.
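That two-stage procedure, fitting a model on brain activity recorded while the notes are audible and then applying it to the silent, imagined condition, could look something like the sketch below. All of it (the data shapes, the simulated "heard" and "imagined" recordings, the logistic-regression classifier) is hypothetical and meant only to illustrate the workflow, not to reproduce the study.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(1)
n_trials, n_electrodes, n_notes = 500, 64, 8  # hypothetical sizes

# Step 1: brain activity recorded while the keyboard's sound is audible,
# with each trial labeled by the note actually played.
note_prototypes = rng.standard_normal((n_notes, n_electrodes))
heard_labels = rng.integers(0, n_notes, size=n_trials)
heard_activity = note_prototypes[heard_labels] + 0.5 * rng.standard_normal((n_trials, n_electrodes))

# Step 2: fit a model that matches brain patterns to notes.
note_decoder = LogisticRegression(max_iter=1000)
note_decoder.fit(heard_activity, heard_labels)

# Step 3: the same task with the sound turned off -- the musician only imagines the notes.
# Here the imagined condition is simulated as a noisier version of the heard one.
imagined_labels = rng.integers(0, n_notes, size=200)
imagined_activity = note_prototypes[imagined_labels] + 0.8 * rng.standard_normal((200, n_electrodes))

predicted = note_decoder.predict(imagined_activity)
print(f"Imagined-note decoding accuracy: {accuracy_score(imagined_labels, predicted):.2f}")
```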

“The long-term goal of our research is to develop algorithms for a speech prosthetic device to restore communication in paralyzed individuals who are unable to speak,” Pasley said. “We are quite far from realizing that goal, but this study represents an important step forward. It demonstrates that the neural signal during auditory imagery is sufficiently robust and precise for use in machine learning algorithms that can predict acoustic signals from measured brain activity.”

A paper describing the work was recently published in the journal Cerebral Cortex.

Luke Dormehl
Former Digital Trends Contributor
I'm a UK-based tech writer covering Cool Tech at Digital Trends. I've also written for Fast Company, Wired, the Guardian…
New A.I. can identify the song you’re listening to by reading your brain waves

From Journey’s “Don't Stop Believin’” to Queen’s “Bohemian Rhapsody” to Kylie Minogue’s “Can't Get You Out Of My Head,” there are some songs that manage to successfully worm their way down our ear canals and take up residence in our brains. What if it were possible to read the brain’s signals and use them to accurately guess which song a person is listening to at any given moment?

That’s what researchers from the Human-Centered Design department at Delft University of Technology in the Netherlands and the Cognitive Science department at the Indian Institute of Technology Gandhinagar have been working on. In a recent experiment, they demonstrated that it is eminently possible -- and the implications could be more significant than you might think.

Clever new A.I. system promises to train your dog while you’re away from home

One of the few good things about lockdown and working from home has been having more time to spend with pets. But when the world returns to normal, people are going to go back to the office, and in some cases that means leaving dogs at home for a large part of the day, hopefully with someone coming into your house to let them out at the midday point.

What if it were possible for an A.I. device, like a next-generation Amazon Echo, to give your pooch a dog-training class while you were away? That’s the basis for a project carried out by researchers at Colorado State University. Initially spotted by Chris Stokel-Walker, author of YouTubers: How YouTube Shook Up TV and Created a New Generation of Stars, and reported by New Scientist, the work involves a prototype device that’s able to give out canine commands, check to see if they’re being obeyed, and then provide a treat as a reward when they are.

New A.I. hearing aid learns your listening preferences and makes adjustments

One of the picks for the CES 2021 Innovation Awards is a smart hearing aid that uses artificial intelligence to improve the audio experience in a couple of crucial ways.

Among the improvements the Widex Moment makes to conventional hearing aids is reducing the standard sound delay experienced by wearers from 7 to 10 milliseconds down to just 0.5 milliseconds. This results in a more natural sound experience for users, rather than the out-of-sync audio people have had to settle for up until now.
