
Stanford A.I. can realistically score computer animations just by watching them

[SIGGRAPH 2018] [Highlights] Toward Wave-based Sound Synthesis for Computer Animation

In the early days of cinema, organists would add sound effects to silent movies by playing along to whatever was happening on screen. Jump forward to 2018, and a variation on this idea forms the basis of new work carried out by Stanford University computer scientists. They have developed an artificial intelligence system that’s able to synthesize realistic sounds for computer animation based entirely on the images it sees and its knowledge of the physical world. The result is realistic, synthesized sound at the touch of a button.


“We’ve developed the first system for automatically synthesizing sounds to accompany physics-based computer animations,” Jui-Hsien Wang, a graduate student at Stanford’s Institute for Computational and Mathematical Engineering (ICME), told Digital Trends. “Our approach is general, [meaning that] it can compute realistic sound sources for a wide range of animated phenomena — such as solid bodies like a ceramic bowl or a flexible crash cymbal, as well as liquid being poured into a cup.”

The technology that makes the system work is pretty darn smart. It takes into account the varying positions of the objects in the scene, as assembled during the 3D modeling process. It identifies what these objects are, and then predicts how they will affect the sounds being produced, whether that means reflecting, scattering, or diffracting them.

“A great thing about our approach is that no training data is required,” Wang continued. “It simulates sound from first physical principles.”
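
The phrase “first physical principles” is the key: rather than learning from recordings, the system numerically solves the equations that govern how sound waves travel through air. As a rough illustration of that idea (not the researchers’ actual solver), the sketch below integrates the 2D acoustic wave equation on a grid with a finite-difference scheme and records the pressure at a virtual microphone; the grid size, source model, and boundary handling are all invented for the toy.

```python
# Toy wave-based sound synthesis: step the 2D acoustic wave equation
# forward in time and record pressure at a "listener" cell as audio.
# All constants here are illustrative assumptions, not values from the paper.
import numpy as np

C = 343.0                      # speed of sound in air (m/s)
DX = 0.01                      # grid spacing (m)
DT = DX / (C * np.sqrt(2))     # time step at the 2D stability (CFL) limit
N = 200                        # simulate an N x N grid of cells

p_prev = np.zeros((N, N))      # pressure field at time t - dt
p_curr = np.zeros((N, N))      # pressure field at time t
listener = (150, 150)          # cell where we "record" the sound
recording = []

# Impulsive source: a Gaussian pressure bump standing in for the surface
# vibration an animated object (say, a struck ceramic bowl) would inject.
xx, yy = np.meshgrid(np.arange(N), np.arange(N))
p_curr += np.exp(-((xx - 50) ** 2 + (yy - 50) ** 2) / 20.0)

coeff = (C * DT / DX) ** 2
for step in range(4000):
    # Discrete Laplacian; np.roll wraps the edges (periodic boundaries),
    # a toy simplification. The real system instead models how scene
    # geometry reflects, scatters, and diffracts the waves.
    lap = (np.roll(p_curr, 1, axis=0) + np.roll(p_curr, -1, axis=0)
           + np.roll(p_curr, 1, axis=1) + np.roll(p_curr, -1, axis=1)
           - 4 * p_curr)
    p_next = 2 * p_curr - p_prev + coeff * lap
    p_prev, p_curr = p_curr, p_next
    recording.append(p_curr[listener])

audio = np.array(recording)    # resample to 44.1 kHz before playback
```

Even this crude version shows why no training data is needed: the waveform falls out of the physics rather than out of a dataset.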

As well as helping animators add sound effects to animated movies more quickly, the technology could also one day be used to help designers work out how products are going to sound before they are physically produced.

There’s no word on when this tool might be made publicly available, but Wang said that the team is currently “exploring options for making the tool accessible.” Before it gets to that point, however, the researchers want to improve the system’s ability to model more complex objects, such as the lush reverberating tones of a Stradivarius violin.

The research is due to be presented as part of ACM SIGGRAPH 2018, the world’s leading conference on computer graphics and interactive techniques. Take a second to feel sorry for the poor Pixar foley artist at the back of the hall who just bought a new house!

Luke Dormehl
Former Digital Trends Contributor
I'm a UK-based tech writer covering Cool Tech at Digital Trends. I've also written for Fast Company, Wired, the Guardian…
How the USPS uses Nvidia GPUs and A.I. to track missing mail

The United States Postal Service, or USPS, is relying on artificial intelligence powered by Nvidia's EGX systems to track the more than 100 million pieces of mail a day that go through its network. The world's busiest postal system is using GPU-accelerated A.I. to help solve the challenge of locating lost or missing packages and mail. Essentially, the USPS turned to A.I. to help it locate a "needle in a haystack."

To solve that challenge, USPS engineers created an edge A.I. system of servers that can scan and locate mail. They developed algorithms for the system that were trained on 13 Nvidia DGX systems located at USPS data centers. Nvidia's DGX A100 systems, for reference, each pack five petaflops of compute power, cost just under $200,000, and are based on the same Ampere architecture found in Nvidia's consumer GeForce RTX 3000 series GPUs.
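
As a rough sketch of the edge-inference half of such a system, the toy below runs a pretrained vision backbone on a local server to turn each mail-piece image into a compact fingerprint that could be matched against scans from elsewhere in the network. The model choice, preprocessing, and matching scheme are all assumptions for illustration; USPS and Nvidia have not published the pipeline in this form.

```python
# Hypothetical edge-inference sketch: embed mail-piece scans with a
# pretrained backbone so two images of the same item can be matched.
# Model, preprocessing, and matching are illustrative assumptions.
import torch
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"

# In a real deployment the weights would come from models trained
# centrally (e.g., on DGX clusters) and pushed out to edge servers.
model = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
model.fc = torch.nn.Identity()   # keep the backbone as a feature extractor
model.eval().to(device)

preprocess = T.Compose([
    T.Resize(256),
    T.CenterCrop(224),
    T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

def fingerprint(image_path: str) -> torch.Tensor:
    """Embed one mail-piece image as a unit-length feature vector."""
    img = preprocess(Image.open(image_path).convert("RGB")).unsqueeze(0)
    with torch.no_grad():
        feat = model(img.to(device)).squeeze(0)
    return feat / feat.norm()

# Two scans of the same envelope should score a high cosine similarity:
# sim = float(fingerprint("scan_a.jpg") @ fingerprint("scan_b.jpg"))
```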

Algorithmic architecture: Should we let A.I. design buildings for us?

Designs iterate over time. A building designed and built in 1921 won’t look the same as one from 1971 or from 2021. Trends change, materials evolve, and issues like sustainability gain importance, among other factors. But what if this evolution wasn’t just about the types of buildings architects design, but was, in fact, key to how they design? That’s the promise of evolutionary algorithms as a design tool.

While designers have long used tools like computer-aided design (CAD) to help conceptualize projects, proponents of generative design want to go several steps further. They want to use algorithms that mimic evolutionary processes inside a computer to help design buildings from the ground up. And, at least when it comes to houses, the results are pretty darn interesting.
Generative design
Celestino Soddu has been working with evolutionary algorithms for longer than most people working today have been using computers. A contemporary Italian architect and designer now in his mid-70s, Soddu became interested in the technology’s potential impact on design back in the days of the Apple II. What interested him was the potential for endlessly riffing on a theme. Or as Soddu, who is also professor of generative design at the Polytechnic University of Milan in Italy, told Digital Trends, he liked the idea of “opening the door to endless variation.”
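
To make “endless variation” concrete, here is a minimal sketch of the loop that evolutionary design tools build on: score a population of candidate designs, keep the fittest, and breed variations through crossover and mutation. The two-number “genome” (a rectangular footprint) and the fitness function are invented for illustration; real systems evolve far richer representations of a building.

```python
# Toy evolutionary design loop: evolve a rectangular floor plan.
# The genome, fitness function, and constants are illustrative inventions.
import random

def fitness(genome):
    # Hypothetical objective: reward floor area, penalize facade length
    # (a crude stand-in for construction and heating costs).
    width, depth = genome
    return width * depth - 2.0 * (width + depth)

def mutate(genome, scale=0.5):
    # Nudge each dimension, clamped to an assumed 1-20 m buildable range.
    return tuple(min(20.0, max(1.0, g + random.gauss(0, scale))) for g in genome)

def crossover(a, b):
    # Child inherits each dimension from one parent at random.
    return tuple(random.choice(pair) for pair in zip(a, b))

population = [(random.uniform(1, 20), random.uniform(1, 20)) for _ in range(50)]
for generation in range(100):
    population.sort(key=fitness, reverse=True)
    survivors = population[:10]                      # selection
    population = survivors + [
        mutate(crossover(random.choice(survivors), random.choice(survivors)))
        for _ in range(40)                           # variation
    ]

print("best design (width, depth):", max(population, key=fitness))
```

Swap the genome for a whole parametric building model, and the fitness for daylighting, cost, or structural scores, and you have the skeleton of the kind of system Soddu and others describe.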

Emotion-sensing A.I. is here, and it could be in your next job interview

I vividly remember witnessing speech recognition technology in action for the first time. It was in the mid-1990s on a Macintosh computer in my grade school classroom. The science fiction writer Arthur C. Clarke once wrote that “any sufficiently advanced technology is indistinguishable from magic” -- and this was magical all right, seeing spoken words appearing on the screen without anyone having to physically hammer them out on a keyboard.

Jump forward another couple of decades, and now a large (and rapidly growing) number of our devices feature A.I. assistants like Apple’s Siri or Amazon’s Alexa. These tools, built using the latest artificial intelligence technology, aren’t simply able to transcribe words -- they can make sense of what those words mean in order to carry out actions.
