Did you know you can make a deepfake video from the comfort of your own home, or on your phone? Download one of the plethora of face-swap or deepfake apps casually available from your local app store and you too can influence an election.
Okay, not really, and we definitely don’t endorse that. But amid the hype and concern around deepfake technology, and its very real misuse, the simple truth is that it isn’t going away. In fact, it’s going to become as commonplace as Photoshop, especially if the app developers working on deepfake tech have anything to say about it: We could soon see hyper-targeted ads with our own faces on them.
Roman Mogylnyi, the CEO of RefaceAI, a startup based in Ukraine, said the team has been working with machine learning since 2011 and pivoted to making its own apps based on deepfake tech in 2014. RefaceAI has already released a photo-deepfake app called Reflect, and is on the verge of releasing both its own video-deepfake app and a web service that will help detect deepfake videos.
“We saw from the very beginning that this technology was being misused,” Mogylnyi said. “We thought we could do it better.”
RefaceAI has already worked with some film production companies — although Mogylnyi couldn’t say which ones — to swap the faces of actors onto body doubles, at a far lower cost than flying the actors back to set to reshoot the scenes in question. This, Mogylnyi said, is what the company sees as the future of deepfakes. “We want to use it for marketing, for personalizing ads, for gifs, for entertainment materials in general,” he said.
The future of media
RefaceAI isn’t the only company trying to get ahead of this inevitable marketing curve. Carica, an app based out of South Korea, is also developing deepfake GIFs and videos that let a person graft their face onto a popular reaction GIF to send to friends and family. The app already features pop-up advertisements that incorporate the user’s face into the ad’s photo or video.
“We want our company to become a media company,” Carica engineer Joseph Jang told Digital Trends. Deepfakes are just the way they’re starting out. “I think media companies will start adopting this feature. It will become an option you have, just like a filter. It will become just so normal for people.”
Both Jang and Mogylnyi used the proliferation of Photoshop as the model for where they see deepfakes going: so common as to be unremarkable, if still a bit controversial. And for both of them, the political and ethical problems wrapped up in deepfakes are really just run-of-the-mill.
Shamir Allibhai, the CEO of the deepfake-detecting platform Amber Video, told Digital Trends that his overriding view was that deepfake technology, like most other kinds of tech, is amoral.
“It would be like saying Microsoft Word is immoral, is evil because potential terrorists will use it to espouse violent, extremist ideology, which will inspire others to espouse violent extremist ideology,” he said. “I very much see this technology in the same vein.”
Amber Video is a platform that advertises its services as combating deepfakes to “prevent fraud, reputation loss, and stakeholder mistrust.” Allibhai said he didn’t know if he was ready to buy something just because his face was on it, but he did agree that, eventually, the technology will be pervasive.
“It’s a mirror on society,” he said. “You see this in technologies like the Twitter platform. You see the best of humanity and the worst of us. Deepfake technology will also mirror society. There are people who will use it for lighthearted satire and poke fun at authoritarian regimes. But we will also try to sow chaos.”
Weaponization of deepfakes
The biggest fear remains the potential abuse and the weaponization of deepfakes. As they become more common, we could even see them deployed not just for wide-scale disinformation campaigns, but for practical jokes or high school bullying. If the technology is accessible enough, anyone can use it.
Both Carica and RefaceAI said they are taking steps to mitigate potential abuse — RefaceAI with its deepfake-detection web service, and Carica with content moderators. The whole Carica company is just nine people right now, Jang said, and they trade off content moderation duties.
“This was really the first question we asked ourselves,” said RefaceAI’s Mogylnyi. “We’ve been dealing with this in terms of political scandals in Ukraine for years. We understand our tech can be misused. Even Photoshop can and has been used for bullying. But on the other hand, we’re providing an antidote for it. We did have users upload sexual content to our app, and we banned those users right away.”
Carica is small enough that trading off moderation duties is what the team has to work with for now. But overall, Jang wasn’t worried.
“This happens with all new technology,” Jang said. “First, it’s driven by misuse, but then after that phase, it becomes available for everyone and people use it for normal things.”