The new system uses a high-speed camera and a high-speed projector aligned on the same optical axis, along with a set of algorithms that track subtle facial movements for dynamic projections. The camera captures images in the infrared spectrum, while the projector illuminates the face with visible light. Together, these design choices allow the system to process images in two dimensions rather than three.
To increase realism, a primary goal for the research team was to reduce latency, the delay between the actor's movement and the corresponding update of the projection.
“The key challenge of live augmentation is latency — the time between generating an image that matches the actor’s pose and when the image is displayed,” Anselm Grundhöfer, principal research engineer at Disney Research, said in a press release. “The larger the latency, the more the actor’s pose will have changed and the greater the potential misalignment between the augmentation and the face.”
Some latency is inevitable, according to the researchers. However, by using Kalman filtering, a technique that combines noisy measurements with a motion model to predict future states, they were able to reduce perceived latency by anticipating the performer's next expression. The system still needs to be trained on each performer to learn facial boundaries and expression nuances, but the work demonstrates progress: it is the first such system that does not require dedicated tracking markers.
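The paper's actual filter operates on full facial-tracking parameters, but the core prediction idea can be sketched in one dimension. The following is a minimal illustration, not the researchers' implementation: a constant-velocity Kalman filter that smooths noisy position measurements of a single facial landmark and extrapolates where it will be after the system's latency has elapsed. The noise parameters `q` and `r` and the latency value are illustrative assumptions.

```python
# Minimal 1-D constant-velocity Kalman filter, as an illustration of
# latency compensation: filter noisy landmark positions, then predict
# the position `latency` seconds ahead so the projection lands where
# the face WILL be, not where it was. Parameters are illustrative.

class Kalman1D:
    def __init__(self, q=1e-3, r=1e-2):
        self.x = [0.0, 0.0]                 # state: [position, velocity]
        self.P = [[1.0, 0.0], [0.0, 1.0]]   # state covariance
        self.q = q                          # process-noise magnitude
        self.r = r                          # measurement-noise variance

    def step(self, z, dt):
        """Fold in one position measurement z taken dt seconds after the last."""
        # Predict: constant-velocity model, F = [[1, dt], [0, 1]].
        x, v = self.x
        x_pred = x + v * dt
        P = self.P
        # P = F P F^T + Q (Q added on the diagonal for simplicity).
        p00 = P[0][0] + dt * (P[0][1] + P[1][0]) + dt * dt * P[1][1] + self.q
        p01 = P[0][1] + dt * P[1][1]
        p10 = P[1][0] + dt * P[1][1]
        p11 = P[1][1] + self.q
        # Update: measurement observes position only, H = [1, 0].
        s = p00 + self.r                    # innovation variance
        k0, k1 = p00 / s, p10 / s           # Kalman gain
        y = z - x_pred                      # innovation (measurement residual)
        self.x = [x_pred + k0 * y, v + k1 * y]
        self.P = [[(1 - k0) * p00, (1 - k0) * p01],
                  [p10 - k1 * p00, p11 - k1 * p01]]

    def predict_ahead(self, latency):
        """Extrapolate the filtered state by `latency` seconds."""
        return self.x[0] + self.x[1] * latency
```

Feeding the filter landmark positions at 60 fps and calling `predict_ahead` with the measured end-to-end latency yields the position at which to render the projection, which is the essence of the perceived-latency reduction described above.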
“We’ve seen astounding advances in recent years in capturing facial performances of actors and transferring those expressions to virtual characters,” said Markus Gross, vice president at Disney Research. “Leveraging these technologies to augment the appearance of live actors is the next step and could result in amazing transformations before our eyes of stage actors in theaters or other venues.”
The team will present their work this week at the European Association for Computer Graphics conference, Eurographics 2017, in France.