
Say what? Google AI creates sounds we’ve literally never heard before

You ain’t heard nothin’ like this before.

Literally.


Thanks to Google and its AI capabilities, we're expanding our aural horizons, taking our ears to places they've never been before. You see, Google is creating brand-new sounds with technology, combining the characteristics of various instruments into something entirely novel. It's the work of Jesse Engel, Cinjon Resnick, and other members of Google Brain, the company's core AI lab. And it's called NSynth, or Neural Synthesizer, described as "a novel approach to music synthesis designed to aid the creative process."


While it may sound as though Google's scientists are playing two instruments at the same time with NSynth, or perhaps layering one instrument atop another, that's not actually what's happening. Rather, as Wired notes, the software produces completely new sounds by leveraging "the mathematical characteristics of the notes that emerge" from various instruments. And those instruments are varied indeed: NSynth can work with around 1,000 different sound sources, from violins to didgeridoos, and the combinations of those sounds yield timbres we've simply never heard before.
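To make that distinction concrete, here is a minimal, purely illustrative Python sketch of the underlying idea: instead of mixing two waveforms together, you encode each note into an embedding, blend the embeddings, and decode the result into a new sound. The `encode` and `decode` functions below are toy stand-ins (NSynth's real model is a deep neural autoencoder), and the "violin" and "didgeridoo" signals are plain sine waves used only as placeholders.

```python
import numpy as np

def encode(waveform: np.ndarray) -> np.ndarray:
    """Toy 'encoder': summarize a waveform as a small embedding vector."""
    return np.array([waveform.mean(), waveform.std(), np.abs(waveform).max()])

def decode(embedding: np.ndarray, length: int = 16000) -> np.ndarray:
    """Toy 'decoder': turn an embedding back into a waveform."""
    t = np.linspace(0, 1, length)
    # Use the embedding to shape a simple sine wave (a real decoder is a deep net).
    return embedding[2] * np.sin(2 * np.pi * (440 + 100 * embedding[1]) * t)

# Two source notes, e.g. a violin and a didgeridoo (stand-in sine waves here).
violin = np.sin(2 * np.pi * 440 * np.linspace(0, 1, 16000))
didgeridoo = np.sin(2 * np.pi * 65 * np.linspace(0, 1, 16000))

# The key step: interpolate between the notes' embeddings, not their audio.
z = 0.5 * encode(violin) + 0.5 * encode(didgeridoo)
new_sound = decode(z)
```

The point of the sketch is the last two lines: the blend happens in the model's learned representation, which is why the output isn't simply two instruments heard at once.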

“Unlike a traditional synthesizer which generates audio from hand-designed components like oscillators and wavetables, NSynth uses deep neural networks to generate sounds at the level of individual samples,” the team explained in a blog post last month. “Learning directly from data, NSynth provides artists with intuitive control over timbre and dynamics and the ability to explore new sounds that would be difficult or impossible to produce with a hand-tuned synthesizer.”
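As a rough sketch of what "generating sounds at the level of individual samples" means in practice: the model emits one audio sample at a time, each conditioned on the samples that came before it plus a conditioning (timbre) vector. The `next_sample` function below is a trivial placeholder for the deep network the team describes, and the timbre embedding is hypothetical; only the generation loop itself is the point.

```python
import numpy as np

def next_sample(history: np.ndarray, timbre: np.ndarray) -> float:
    """Placeholder for a neural net predicting the next audio sample
    from the recent waveform history and a timbre embedding."""
    context = history[-64:].mean() if history.size else 0.0
    return float(np.tanh(context + 0.01 * timbre.sum() + 0.05 * np.random.randn()))

def generate(timbre: np.ndarray, n_samples: int = 16000) -> np.ndarray:
    """Generate a waveform one sample at a time, autoregressively."""
    audio: list[float] = []
    for _ in range(n_samples):
        audio.append(next_sample(np.array(audio), timbre))
    return np.array(audio)

# Hypothetical timbre embedding; a real system learns this from data.
waveform = generate(np.array([0.2, -0.1, 0.7]))
```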

Indeed, music critic Marc Weidenbaum told Wired that the concept itself is nothing new, though we're certainly more adept at synthesis than ever before. "The blending of instruments is nothing new," Weidenbaum said. "Artistically, it could yield some cool stuff, and because it's Google, people will follow their lead."

Ultimately, the team behind NSynth notes, “We wanted to develop a creative tool for musicians and also provide a new challenge for the machine learning community to galvanize research in generative models for music.” And later this week, the public will be able to see this new tool in action as Google’s team presents at the annual art, music, and tech festival known as Moogfest. So if you’re near Durham, North Carolina, this certainly seems like something worth checking out.

Lulu Chang
Google’s LaMDA is a smart language A.I. for better understanding conversation

Artificial intelligence has made extraordinary advances when it comes to understanding words and even being able to translate them into other languages. Google has helped pave the way here with amazing tools like Google Translate and, recently, with its development of Transformer machine learning models. But language is tricky -- and there’s still plenty more work to be done to build A.I. that truly understands us.
Language Model for Dialogue Applications
At Tuesday’s Google I/O, the search giant announced a significant advance in this area with a new language model it calls LaMDA. Short for Language Model for Dialogue Applications, it’s a sophisticated A.I. language tool that Google claims is superior when it comes to understanding context in conversation. As Google CEO Sundar Pichai noted, this might mean intelligently parsing an exchange like “What’s the weather today?” “It’s starting to feel like summer. I might eat lunch outside.” That makes perfect sense as a human dialogue, but it would befuddle many A.I. systems looking for more literal answers.

LaMDA has a stronger grasp of learned concepts, which it is able to synthesize from its training data. Pichai noted that responses never follow the same path twice, so conversations feel less scripted and more naturally responsive.
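As a toy illustration of why that conversational context matters, the sketch below feeds the whole exchange, not just the latest message, to the responder. The `respond` function is a hypothetical, hand-written stand-in (no public LaMDA API is assumed here); it only shows that the indirect second turn is interpretable when the earlier question is kept in view.

```python
def respond(turns: list[str]) -> str:
    """Toy rule: read the indirect answer in light of the earlier question."""
    history = " ".join(turns).lower()
    if "weather" in history and "summer" in history:
        # The second turn never states the weather directly, but in context
        # it implies warm, pleasant conditions.
        return "Sounds warm and sunny. Good day for lunch outside."
    return "Could you tell me more?"

conversation = [
    "What's the weather today?",
    "It's starting to feel like summer. I might eat lunch outside.",
]
print(respond(conversation))
```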

Read more
What is this? Say hello to the new Google Nest speaker

Just like that, Google teases us with the first image of its next Nest speaker. After images of the purported smart speaker surfaced on Thursday, it is now being shown in its full glory courtesy of Google's PR team, which shared it with us. You could say that we're nearing an inevitable announcement, and the image does indicate that the speaker is going to be sporting a dramatic redesign over the original Google Home from 2016. What do you think?

Looking closely at the picture, we can see that it's going to be an upright, oblong-shaped speaker with the same four LEDs embedded on the front. If the dimensions match what was shown in the filings, it will be one of the beefier speakers in Google's lineup: substantially taller than the original Google Home, but not as hulking as the Google Home Max.

Read more
Google execs say we need a plan to stop A.I. algorithms from amplifying racism

Two Google executives said Friday that bias in artificial intelligence is hurting already marginalized communities in America, and that more needs to be done to ensure this does not happen. X. Eyeé, outreach lead for responsible innovation at Google, and Angela Williams, policy manager at Google, spoke at the (Not IRL) Pride Summit, an event organized by Lesbians Who Tech & Allies, the world's largest technology-focused LGBTQ organization for women, non-binary, and trans people around the world.

In separate talks, they addressed the ways in which machine learning technology can be used to harm the black community and other communities in America -- and more widely around the world.

Read more