Most of us have used apps like Shazam, which can identify songs when we hold our phone up to a speaker. But what if an app could identify a piece of music based on nothing more than your thought patterns? Impossible? Perhaps not, according to a new piece of research carried out by investigators at the University of California, Berkeley.
In 2014, researcher Brian Pasley and colleagues used a deep-learning algorithm, applied to brain activity measured with electrodes, to turn a person’s thoughts into digitally synthesized speech. This was achieved by analyzing a person’s brain waves while they were speaking in order to decode the link between speech and brain activity.
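To make that decoding idea concrete, here is a minimal sketch of a supervised brain-to-sound decoder: a model is fit on paired brain-activity features and acoustic features, then used to predict the acoustics for held-out brain data. The study used a deep-learning decoder on electrode recordings; the ridge regression, synthetic data, and feature dimensions below are simplified stand-ins for illustration, not the researchers' actual pipeline.

```python
# Simplified stand-in for a brain-to-speech decoder: learn a mapping from
# electrode features to spectrogram features, then evaluate on held-out data.
# All shapes and feature names are illustrative assumptions.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_samples, n_electrodes, n_freq_bins = 2000, 64, 32

# Stand-in data: one row of neural features per time window, paired with
# the speech spectrogram (energy per frequency band) at the same moments.
neural_features = rng.normal(size=(n_samples, n_electrodes))
true_weights = rng.normal(size=(n_electrodes, n_freq_bins))
spectrogram = neural_features @ true_weights + 0.1 * rng.normal(size=(n_samples, n_freq_bins))

X_train, X_test, y_train, y_test = train_test_split(
    neural_features, spectrogram, test_size=0.2, random_state=0
)

# Fit the decoder on training windows and report R^2 on held-out windows.
decoder = Ridge(alpha=1.0).fit(X_train, y_train)
print("held-out R^2:", decoder.score(X_test, y_test))
```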
Jump forward a few years, and the team has now improved on that earlier research and applied its findings to music. Specifically, the researchers were able to predict, with 50 percent greater accuracy than in the previous study, what sounds a pianist was thinking of, based on brain activity.
“During auditory perception, when you listen to sounds such as speech or music, we know that certain parts of the auditory cortex decompose these sounds into acoustic frequencies — for example, low or high tones,” Pasley told Digital Trends. “We tested if these same brain areas also process imagined sounds in the same way you internally verbalize the sound of your own voice, or imagine the sound of classical music in a silent room. We found that there was large overlap, but also distinct differences in how the brain represents the sound of imagined music. By building a machine learning model of the neural representation of imagined sound, we used the model to guess with reasonable accuracy what sound was imagined at each instant in time.”
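The "decompose these sounds into acoustic frequencies" step Pasley describes can be pictured as a spectrogram: the energy of a sound in each frequency band over time, which is the kind of acoustic representation a decoding model tries to predict from cortical activity. The short sketch below is purely illustrative; the tone frequencies, sample rate, and window size are arbitrary assumptions, not values from the study.

```python
# Decomposing a sound into acoustic frequencies with a spectrogram.
import numpy as np
from scipy.signal import spectrogram

fs = 16_000                      # sample rate in Hz (arbitrary choice)
t = np.arange(0, 2.0, 1 / fs)    # two seconds of audio

# A toy "melody": a low tone (220 Hz) followed by a high tone (880 Hz).
audio = np.where(t < 1.0, np.sin(2 * np.pi * 220 * t), np.sin(2 * np.pi * 880 * t))

# power[i, j] is the energy in frequency band i during time window j.
freqs, times, power = spectrogram(audio, fs=fs, nperseg=512)
print(power.shape)               # (frequency bins, time windows)

# The dominant band shifts from roughly 220 Hz to roughly 880 Hz halfway through.
print(freqs[power[:, 0].argmax()], freqs[power[:, -1].argmax()])
```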
For the study, the team recorded a pianist’s brain activity while he played music on an electric keyboard. By doing this, they were able to match up the brain patterns with the notes played. They then repeated the experiment with the keyboard’s sound turned off, asking the musician to imagine the notes as he played them. This training data allowed them to build their music-predicting algorithm.
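A hedged sketch of that train-on-heard, test-on-imagined setup might look like the following: a classifier learns to map brain-activity windows to notes from the audible-playing session, and is then evaluated on windows from the silent, imagined-playing session. The synthetic data, feature layout, and classifier choice are assumptions made for illustration only.

```python
# Illustrative setup: train a note classifier on "audible playing" trials,
# then test it on "imagined playing" trials. Data are synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(1)
n_electrodes, n_notes = 64, 8
note_patterns = rng.normal(size=(n_notes, n_electrodes))   # one neural "signature" per note

def simulate_trials(n_trials, noise):
    """Generate (brain features, note label) pairs with a given noise level."""
    labels = rng.integers(0, n_notes, size=n_trials)
    features = note_patterns[labels] + noise * rng.normal(size=(n_trials, n_electrodes))
    return features, labels

# Audible-playing trials for training; imagined-playing trials for testing,
# assumed noisier here because imagery tends to evoke weaker responses.
X_heard, y_heard = simulate_trials(400, noise=0.5)
X_imagined, y_imagined = simulate_trials(100, noise=1.0)

clf = LogisticRegression(max_iter=1000).fit(X_heard, y_heard)
print("imagined-note accuracy:", accuracy_score(y_imagined, clf.predict(X_imagined)))
```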
“The long-term goal of our research is to develop algorithms for a speech prosthetic device to restore communication in paralyzed individuals who are unable to speak,” Pasley said. “We are quite far from realizing that goal, but this study represents an important step forward. It demonstrates that the neural signal during auditory imagery is sufficiently robust and precise for use in machine learning algorithms that can predict acoustic signals from measured brain activity.”
A paper describing the work was recently published in the journal Cerebral Cortex.