Capturing an actor’s facial performance in three dimensions is key to believable animations in movies and video games. Accurately modeling a performer’s face, along with their full range of expressions, is no easy task. A new method developed by Disney Research, however, promises to make it much simpler.
Unlike traditional facial-performance-capture techniques, which rely on multiple cameras to read depth information, this new method uses only a single camera. And no, it doesn’t require a light-field camera or any other non-traditional imaging device. In fact, the researchers even demonstrated it using both a GoPro and an iPhone, according to a press release first seen on Phys.org.
Single-camera facial-performance capture isn’t completely new, but existing methods require a complex computer model of an actor’s face to be developed ahead of time. That model has to be built with many different expressions; otherwise, the capture process generates too many anomalies.
Disney Research’s new method takes bone structure and skin thickness into account, producing an “anatomically constrained” model that limits facial deformations to what is physically possible, without the need for as many pre-recorded expressions. To test its effectiveness, the researchers captured an actor’s face being deformed by a jet of compressed air, something that simply wouldn’t be possible with other single-camera solutions.
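To give a rough, hypothetical sense of what an anatomical constraint buys in a single-camera setting, the sketch below fits a toy skin surface to 2D landmarks seen by one pinhole camera while penalizing skin points whose thickness over a fixed “bone” plane leaves a plausible band. The geometry, camera, thickness limits, and energy weights are all invented for illustration; this is not Disney Research’s actual formulation.

```python
# Toy illustration of an "anatomically constrained" single-camera fit:
# match observed 2D landmarks while penalizing skin vertices that stray
# outside a plausible thickness band above a fixed "bone" surface.
# Hypothetical sketch only, not the Disney Research method.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

# Toy "bone" surface: a flat plane at z = 0; skin rests about 1 cm above it.
N = 20                                   # number of skin vertices
rest_skin = np.column_stack([
    rng.uniform(-5, 5, N),               # x (cm)
    rng.uniform(-5, 5, N),               # y (cm)
    np.full(N, 1.0),                     # z: nominal skin thickness above bone
])

focal = 50.0                             # toy pinhole camera looking down +z from z = 20
cam_z = 20.0

def project(pts):
    """Pinhole projection of 3D points onto the single camera's image plane."""
    depth = cam_z - pts[:, 2]
    return focal * pts[:, :2] / depth[:, None]

# Synthetic "observed" landmarks: skin pushed up by a bulge (think air jet), then projected.
true_skin = rest_skin.copy()
true_skin[:, 2] += 0.5 * np.exp(-0.2 * (rest_skin[:, 0] ** 2 + rest_skin[:, 1] ** 2))
observed_2d = project(true_skin) + rng.normal(0, 0.01, (N, 2))

THICK_MIN, THICK_MAX = 0.3, 2.0          # allowed skin thickness band above the bone (cm)

def energy(flat_offsets):
    """Data term (2D reprojection) + anatomical term (thickness band) + small prior."""
    skin = rest_skin + flat_offsets.reshape(N, 3)
    data = np.sum((project(skin) - observed_2d) ** 2)
    thickness = skin[:, 2]               # distance to the bone plane at z = 0
    # Penalize only thickness values that leave the physically plausible band.
    anatomical = np.sum(np.clip(THICK_MIN - thickness, 0, None) ** 2
                        + np.clip(thickness - THICK_MAX, 0, None) ** 2)
    prior = 0.01 * np.sum(flat_offsets ** 2)
    return data + 10.0 * anatomical + prior

result = minimize(energy, np.zeros(3 * N), method="L-BFGS-B")
fitted = rest_skin + result.x.reshape(N, 3)
print("mean thickness of fitted skin (cm):", fitted[:, 2].mean().round(2))
```

Because a single camera cannot observe depth directly, it is the anatomical term and the small prior that keep the recovered surface from drifting along the viewing direction, which is loosely the role the bone-structure and skin-thickness constraints play in the method described above.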
Smaller, simpler hardware should make facial performance capture more accessible, and a single-camera system also gives actors greater freedom of movement during their performances. Capture quality usually suffers as setups get simpler, but Disney researchers say their new method bucks that trend.
“No hardware setup could be simpler than our new one-camera method, yet we’ve shown that it can obtain results that rival, if not exceed, more traditional methods,” Markus Gross, vice president of Disney Research, said in the press release.
The project will be presented on July 24 at the ACM International Conference on Computer Graphics & Interactive Techniques (SIGGRAPH) in California.