We may be able to pick out our best friend’s or our mother’s voice from a crowd, but can the same be said for our smart speakers? For the time being, the answer may be “no.” Smart assistants aren’t always right about who’s speaking, but Google is looking to change that with a pretty elegant solution.
Thanks to new research detailed in a paper titled “Looking to Listen at the Cocktail Party,” Google researchers explain how a new deep learning system can isolate individual voices in a noisy clip simply by looking at people’s faces as they speak.
“People are remarkably good at focusing their attention on a particular person in a noisy environment, mentally ‘muting’ all other voices and sounds,” Inbar Mosseri and Oran Lang, software engineers at Google Research, noted in a blog post. And while this ability is innate to human beings, “automatic speech separation — separating an audio signal into its individual speech sources — while a well-studied problem, remains a significant challenge for computers.”
Mosseri and Lang, however, have created a deep learning audio-visual model capable of isolating speech signals from a variety of other auditory inputs, like additional voices and background noise. “We believe this capability can have a wide range of applications, from speech enhancement and recognition in videos, through video conferencing, to improved hearing aids, especially in situations where there are multiple people speaking,” the duo said.
So how did they do it? The first step was training the system to recognize individual voices (paired with their faces) speaking uninterrupted in an aurally clean environment. The researchers presented the system with about 2,000 hours of video, all of which featured a single person in the camera frame with no background interference. Once this was complete, they began to add virtual noise, such as other voices, to teach the A.I. system to differentiate among multiple audio tracks so it could identify which track is which.
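To make that training setup concrete, here is a minimal sketch, not Google’s actual pipeline, of how a synthetic “cocktail” example might be mixed from clean single-speaker audio. The mix_at_snr helper and the toy signals are illustrative stand-ins:

```python
import numpy as np

def mix_at_snr(clean: np.ndarray, interference: np.ndarray, snr_db: float) -> np.ndarray:
    """Mix a clean speech waveform with an interfering signal at a target SNR.

    Both inputs are 1-D float arrays at the same sample rate; the
    interference is tiled or truncated to match the clean signal's length.
    """
    interference = np.resize(interference, clean.shape)
    clean_power = np.mean(clean ** 2)
    interf_power = np.mean(interference ** 2) + 1e-12
    # Scale the interference so the mixture hits the requested SNR.
    scale = np.sqrt(clean_power / (interf_power * 10 ** (snr_db / 10)))
    return clean + scale * interference

# Toy example: a sine tone and noise stand in for two real voice clips.
rng = np.random.default_rng(0)
speaker_a = np.sin(2 * np.pi * 220 * np.linspace(0, 1, 16000))
speaker_b = 0.1 * rng.standard_normal(16000)
cocktail = mix_at_snr(speaker_a, speaker_b, snr_db=0.0)
```

The appeal of training on synthetic mixtures like this is that the clean source is known exactly, so the network always has ground-truth isolated audio to learn against.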
Ultimately, the researchers were able to train the system to “split the synthetic cocktail mixture into separate audio streams for each speaker in the video.” As you can see in the video, the A.I. can isolate the voices of two comedians even as they speak over one another, simply by looking at their faces.
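Roughly speaking, the model achieves this split by predicting a time-frequency mask for each speaker, conditioned on the audio and that speaker’s face. Below is a heavily simplified sketch of just the masking step using SciPy; the neural network that would actually produce the mask is omitted, and apply_speaker_mask is a hypothetical helper, not Google’s code:

```python
import numpy as np
from scipy.signal import stft, istft

def apply_speaker_mask(mixture, mask, fs=16000, nperseg=512):
    """Recover one speaker by masking the mixture's spectrogram.

    `mask` is a per-speaker time-frequency array on the same grid as the
    STFT; in the real system a network would predict it from the audio
    together with the selected speaker's face.
    """
    _, _, spec = stft(mixture, fs=fs, nperseg=nperseg)
    assert mask.shape == spec.shape, "mask must match the spectrogram grid"
    # Masking attenuates time-frequency bins dominated by other sources,
    # then the inverse STFT turns the result back into a waveform.
    _, separated = istft(spec * mask, fs=fs, nperseg=nperseg)
    return separated

# Toy usage: an all-ones mask is a no-op and simply reconstructs the mixture.
mixture = np.random.default_rng(1).standard_normal(16000)
_, _, spec = stft(mixture, fs=16000, nperseg=512)
recovered = apply_speaker_mask(mixture, np.ones(spec.shape))
```

Running one pass of the model per selected face yields one such mask, and therefore one clean audio stream, for each person in the frame.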
“Our method works on ordinary videos with a single audio track, and all that is required from the user is to select the face of the person in the video they want to hear, or to have such a person be selected algorithmically based on context,” Mosseri and Lang wrote.
We’ll just have to see how this new methodology is ultimately implemented in Google products.