If you think it’s been a problem up to this point, the fight against fake news is about to get a whole lot harder. That’s thanks to artificial intelligence technology, which is making the creation of so-called “deep fake” videos more convincing at a frankly terrifying rate. The latest development comes from an international team of researchers, led by Germany’s Max Planck Institute for Informatics.
They have created a deep-learning A.I. system that can edit the facial expressions of actors to accurately match dubbed voices. It can also tweak gaze and head pose in videos, and even animate a person’s eyes and eyebrows to match up with their mouth — representing a step forward from previous work in this area.
“It works by using model-based 3-D face performance capture to record the detailed movements of the eyebrows, mouth, nose, and head position of the dubbing actor in a video,” Hyeongwoo Kim, one of the researchers from the Max Planck Institute for Informatics, said in a statement. “It then transposes these movements onto the ‘target’ actor in the film to accurately sync the lips and facial movements with the new audio.”
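The transfer idea Kim describes — capturing the dubbing actor’s movements and transposing them onto the target actor — can be sketched as a parameter swap on a 3D face model, where each frame is described by separate groups of parameters. The sketch below is purely illustrative: the class and function names are hypothetical, not the researchers’ actual API, and the real system additionally uses a rendering network to turn the modified parameters back into photorealistic video frames.

```python
# Illustrative sketch of the parameter-transfer idea: a 3D face model
# describes each video frame by separate parameter groups. Transfer keeps
# the target actor's identity and scene lighting, but drives expression,
# head pose, and gaze with the dubbing actor's captured performance.
# All names here are hypothetical, chosen for this example only.

from dataclasses import dataclass, replace
from typing import List

@dataclass(frozen=True)
class FaceParams:
    identity: tuple      # face shape of the actor (kept from target)
    illumination: tuple  # scene lighting (kept from target)
    expression: tuple    # mouth and eyebrow movement (taken from source)
    pose: tuple          # head rotation/position (taken from source)
    gaze: tuple          # eye direction (taken from source)

def transfer(source: List[FaceParams], target: List[FaceParams]) -> List[FaceParams]:
    """For each frame pair, keep the target's identity and lighting but
    replace expression, pose, and gaze with the dubbing actor's capture."""
    return [
        replace(tgt, expression=src.expression, pose=src.pose, gaze=src.gaze)
        for src, tgt in zip(source, target)
    ]
```

Separating the parameter groups this way is what lets the system sync lips and facial movements to new audio without changing who appears on screen.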
The researchers suggest that one possible real-world application for this technology is in the movie industry, where it could make it easy and affordable to manipulate footage to match a dubbed foreign vocal track. That would help movies play more seamlessly around the world, compared with today, when dubbing frequently results in a (sometimes comedic) mismatch between an actor’s lips and the dubbed voice.
Still, it’s difficult to look at this research and not see the potential for misuse. Combined with other A.I. technologies that can synthesize words spoken in, say, the voice of Barack Obama, this could unfortunately make the current fake news epidemic look paltry by comparison. Let’s hope that proper precautions are somehow put in place for regulating the use of these tools.
The “Deep Video Portraits” research was recently presented at the SIGGRAPH 2018 conference in Vancouver, Canada.