Are you a terrible dancer who dreams of one day starring in a toe-tapping music video that would have made Michael Jackson jealous? If so, you’ve got two options: go the Napoleon Dynamite route and put in some serious practice, or simplify the process by taking advantage of some cutting-edge artificial intelligence.
Since you’re still reading and not off YouTubing “How to dance” videos, we’re going to assume the second option appeals to you more. If so, you have researchers at the University of California, Berkeley, to thank. Using the kind of “deepfake” technology that makes realistic face-swaps in videos possible, they have developed a tool that can make even the most bumbling and uncoordinated among us look like experts.
“We have developed a method to transfer dance motions from one individual — a professional dancer — to another [who we’ll refer to as ‘Joe’ for this example],” Shiry Ginosar, a Ph.D. student in computer vision at UC Berkeley, told Digital Trends. “In order to do this, we take a video of Joe performing all kinds of motions. We use this video to train a generative adversarial network to learn a model of how Joe looks and moves. When we have learned this model, we can take a stick figure of a body pose as input and generate a still photograph of Joe performing that body pose as output. If we have a whole video of a dancing stick figure, we can generate a whole video of Joe dancing in the same way. Now, given a video of the professional dancer, we extract the body pose of the dancer and go back to Joe and generate a video of him dancing in much the same way.”
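For readers who like to see the moving parts, here is a rough, illustrative Python sketch of the pipeline Ginosar describes: extract the professional dancer’s pose from each frame, render it as a stick figure, and feed it to a generator trained on footage of the target person. The names here (StickFigureToFrame, transfer_dance, the placeholder pose extractor) are our own stand-ins, not the team’s code, and the toy network below is nothing like the full generative adversarial network described in the paper.

```python
# Illustrative sketch only: a toy generator and a frame-by-frame transfer loop.
# The real "Everybody Dance Now" system trains a GAN on video of the target person;
# here the generator is an untrained placeholder just to show the data flow.
import torch
import torch.nn as nn


class StickFigureToFrame(nn.Module):
    """Stand-in for the trained generator: maps a rendered stick-figure image
    of a body pose to a frame of the target person ("Joe") in that pose."""

    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 64, kernel_size=4, stride=2, padding=1),
            nn.ReLU(),
            nn.ConvTranspose2d(64, 3, kernel_size=4, stride=2, padding=1),
            nn.Tanh(),
        )

    def forward(self, pose_image):
        return self.net(pose_image)


def transfer_dance(source_frames, pose_extractor, generator):
    """For each frame of the source dancer: extract the body pose as a
    stick-figure image, then synthesize the target person in that pose."""
    output_frames = []
    for frame in source_frames:
        stick_figure = pose_extractor(frame)  # pose detection + stick-figure rendering
        with torch.no_grad():
            output_frames.append(generator(stick_figure))
    return output_frames


if __name__ == "__main__":
    generator = StickFigureToFrame()
    # Placeholder for a real pose estimator (e.g. an OpenPose-style model).
    dummy_pose_extractor = lambda frame: frame
    dummy_video = [torch.randn(1, 3, 256, 256) for _ in range(4)]
    result = transfer_dance(dummy_video, dummy_pose_extractor, generator)
    print(len(result), result[0].shape)  # 4 torch.Size([1, 3, 256, 256])
```

The key design point the sketch captures is that the stick figure acts as a shared, person-agnostic representation: any dancer’s moves can be reduced to it, and the generator only ever has to learn how one person, Joe, looks in any given pose.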
Aside from the fun of being able to make anyone resemble an expert dancer, Ginosar said that dancing presents an interesting challenge for this kind of deepfake technology. That’s because it involves the entire human body moving fluidly, which is considerably different from (and tougher than) the more static pose and face transfers that have been carried out so far.
A paper describing the work, titled “Everybody Dance Now,” is available to read on the arXiv preprint server. In addition to Ginosar, other researchers on the project included Caroline Chan, Tinghui Zhou, and Alexei Efros.