
Are deepfakes a dangerous technology? Creators and regulators disagree

Over the past few years, deepfakes have emerged as the internet’s latest go-to for memes and parody content.

It’s easy to see why: They enable creators to bend the rules of reality like no other technology before. Through the magic of deepfakes, you can watch Jennifer Lawrence deliver a speech through the face of Steve Buscemi, see what Ryan Reynolds would’ve looked like as Willy Wonka, and even catch Hitler and Stalin singing “Video Killed the Radio Star” as a duet.


For the uninitiated, deepfake tech is a form of synthetic media that lets users superimpose one person’s face onto another’s in a way that’s nearly indistinguishable from the original. It does so by training on large amounts of footage to learn a face’s contours and other characteristics, so the swapped face can be blended and animated naturally into the scene.
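
To make that description a little more concrete, the sketch below shows the shared-encoder, per-identity-decoder autoencoder design that classic face-swap tools are commonly built around: one encoder learns a compact representation of expression and pose, while a separate decoder is trained for each person’s face. This is a deliberately tiny, illustrative PyTorch example, not the code behind any app or channel mentioned in this article, and the layer sizes and training details are assumptions.

```python
# Toy face-swap autoencoder: one shared encoder, one decoder per identity.
# Swapping faces means encoding person A's frame and decoding it with
# person B's decoder. Illustrative only; networks here are untrained.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    def __init__(self, latent_dim=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1),   # 64x64 -> 32x32
            nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1),  # 32x32 -> 16x16
            nn.ReLU(),
            nn.Flatten(),
            nn.Linear(64 * 16 * 16, latent_dim),
        )

    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    def __init__(self, latent_dim=256):
        super().__init__()
        self.fc = nn.Linear(latent_dim, 64 * 16 * 16)
        self.net = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1),  # 16 -> 32
            nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1),   # 32 -> 64
            nn.Sigmoid(),
        )

    def forward(self, z):
        x = self.fc(z).view(-1, 64, 16, 16)
        return self.net(x)

encoder = Encoder()
decoder_a = Decoder()  # would be trained to reconstruct person A's face crops
decoder_b = Decoder()  # would be trained to reconstruct person B's face crops

# In a real pipeline, each decoder learns to rebuild its own person's face
# from the shared latent code (e.g. with an L1 reconstruction loss).
frame_a = torch.rand(1, 3, 64, 64)      # stand-in for a cropped face of person A
swapped = decoder_b(encoder(frame_a))   # "person A's expression, person B's face"
print(swapped.shape)                    # torch.Size([1, 3, 64, 64]); noise here, since untrained
```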

Video: Ryan Reynolds as Willy Wonka deepfake (NextFace/YouTube)

At this point, you’ve probably come across such clips on platforms like TikTok (where the hashtag “#deepfake” has about 200 million views), YouTube, and elsewhere. Whether it’s fulfilling fan fiction by redoing scenes with the stars they wish a movie had originally cast, or putting a long-dead figure into a modern viral meme, creators have adopted deepfakes as a creative outlet for things that were previously next to impossible.

The shift has spawned a league of new creators like Ctrl Shift Face, whose deepfake videos regularly draw millions of views and are often the main topic of discussion on late-night talk shows.

“It’s a whole new way of making funny internet videos, or telling stories like we’ve never seen before,” says the Netherlands-based creator behind the hit “The Avengers of Oz” deepfake clip who asked that his real name not be used. “It’s a beautiful combination of fascination for A.I. technology and humor.”

But there’s a looming risk that threatens the future of deepfake technology altogether: its tainted reputation.

With great power comes great repostability

Unfortunately, in addition to their potential as a creative tool for well-intentioned video artists, deepfakes also carry a tremendous potential to do harm.

A recent study by the Dawes Centre for Future Crime at the UCL Jill Dando Institute of Security and Crime Science labeled deepfakes the most serious A.I.-enabled threat. Sen. Ben Sasse, a Nebraskan Republican who has introduced a bill to criminalize the malicious creation of deepfakes, warned last year that the technology could “destroy human lives,” “roil financial markets,” and even “spur military conflicts around the world.”

To an extent, these concerns are fair. After all, the proliferation of deepfake technology has already enabled things like fake adult content featuring celebrities and lifelike impersonations of politicians made for satire. Earlier this year, White House social media director Dan Scavino tweeted a crudely manipulated clip of Trump’s rival Joe Biden appearing to ask people to re-elect Trump, which President Donald Trump then retweeted.


However, these less-than-convincing hoaxes have been quickly debunked before reaching the masses. More importantly, experts suggest deepfake videos have so far had little to no societal impact and don’t currently pose any imminent threat. Research by Sensity, a cybersecurity firm focused on visual threats, found that the vast majority of deepfake videos (96%) are pornographic, and that the technology has yet to make its way into any significant disinformation campaign.

Similarly, a Georgetown University report concluded that while deepfakes are an “impressive technical feat,” policymakers should not buy into the hype: the technology is far from perfect and can’t meaningfully influence real-world events just yet.

The creator behind Ctrl Shift Face, the most popular deepfake channel, with videos viewed by millions, believes the “hysteria” swirling around deepfakes is diverting lawmakers’ attention away from the real issues, such as the poorly regulated ad networks on Facebook that are actually responsible for misleading people.

“If there ever will be a harmful deepfake, Facebook is the place where it will spread,” Ctrl Shift Face said in an interview with Digital Trends. “In that case, what’s the bigger issue? The medium or the platform?”

The owner of BabyZone, a YouTube gaming channel with over half a million subscribers that often deepfakes celebrities into video games, echoes a similar concern: “I think that deepfakes are a technology like all the other existing technologies. They can be used for good and for bad purposes.”

The movement to save deepfakes

Over the last year or two, as governments and tech companies investigate the potential risks of this technology, deepfake advocates have scrambled to allay these concerns and fix the technology’s public image. Reddit communities that seek to “adjust this stigma” have popped up, and some independent researchers are actively building systems that can spot deepfakes before they go viral.

Roman Mogylnyi, CEO and co-founder of Reface, a hit app that lets you quickly swap your face into any GIF, says his startup is now developing a detection tool that can tell whether a video was made with Reface’s technology. “We believe that wide access to synthesized media tools like ours will increase humanity’s empathy and creativity, and will help to change the perception of the technology for better,” Mogylnyi told Digital Trends.

Eran Dagan, founder and CEO of Botika, the startup behind the popular face-swapping app Jiggy, has a similar outlook toward deepfakes and believes as they become more mainstream, “people will be much more aware of their positive use cases.”

Given the potential dangers of deepfakes, however, it’s likely that Congress will eventually step in. Major tech platforms, including Facebook, Twitter, and YouTube, have already updated their policies to flag or remove manipulated media that’s designed to mislead. Several states, including California and New York, have passed bills that punish the makers of intentionally deceptive deepfakes, as well as ones released without the consent of the person whose face is used.

Should deepfakes be regulated?

While these policies exclude parody content, experts, fearing ill-defined laws or an outright ban on the technology, still believe Congress should stay out of it and let deepfakes run the natural course that any new form of media goes through.

David Greene, civil liberties director at the Electronic Frontier Foundation, says “any attempt by Congress to regulate deep fakes or really any kind of visual communication will be a regulation of speech and implicate the First Amendment.”


These laws, Greene adds, need to be precise and must address, in well-defined and easily understood terms, the harm they’re trying to curtail. “What we have seen so far … regulatory attempts at the state level are vague and overbroad laws that do not have the precision the First Amendment requires. They don’t have required exceptions for parody and political commentary.”

Giorgio Patrini, CEO of Sensity, finds a ban on algorithms and software “meaningless in the internet era.” Patrini compares the conundrum to malware protection: it’s next to impossible to put an end to every computer virus or its author, so it’s better to invest in anti-malware defenses instead. “Society needs to build new mechanisms for certifying what can be trusted, and how to prevent the negative impacts of synthetic content on individuals and organizations,” he said.

Tim Hwang wrote in the Georgetown University report that as deepfake tools become commoditized, the technology to automatically detect and filter them will evolve alongside and grow more accurate, neutralizing their ability to pose any serious threat.

Microsoft’s recently launched deepfake detection tool, for instance, analyzes videos and produces a confidence score indicating how likely it is that a given clip has been artificially manipulated.
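
As a rough illustration of how that kind of scoring can work in principle, the toy snippet below averages per-frame manipulation probabilities from a frame-level classifier into a single confidence score for a clip. The classifier here is an untrained stand-in, and the whole pipeline is an assumption for illustration only; it is not Microsoft’s tool or any other shipping product.

```python
# Toy illustration: roll per-frame "is this frame manipulated?" probabilities
# up into one confidence score for a whole video clip.
import torch
import torch.nn as nn

classifier = nn.Sequential(        # stand-in for a trained per-frame detector
    nn.Flatten(),
    nn.Linear(3 * 64 * 64, 1),
    nn.Sigmoid(),                  # outputs P(frame is manipulated)
)

def video_confidence_score(frames: torch.Tensor) -> float:
    """Average per-frame manipulation probabilities into one score.

    frames: tensor of shape (num_frames, 3, 64, 64), values in [0, 1].
    """
    with torch.no_grad():
        per_frame = classifier(frames).squeeze(1)   # shape: (num_frames,)
    return per_frame.mean().item()

clip = torch.rand(30, 3, 64, 64)   # 30 random stand-in frames
print(f"confidence the clip was manipulated: {video_confidence_score(clip):.2f}")
```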

“Deepfakes are a new form of media manipulation, but not the first time we’ve faced this type of challenge. We are exploring and investing in ways to address synthetic media,” said a YouTube spokesperson.

The researchers behind CtrlShiftLab, the advanced deepfake creation software that many YouTubers, including Ctrl Shift Face, rely on, are now also working on open-source projects to raise awareness and build more comprehensive detection services.

“The only way to prevent this [deepfake abuse] is to establish an open source deepfake-related project and attract the public’s attention. So public netizens can realize that deepfakes exist,” Kunlin Liu, one of CtrlShiftLab’s researchers, told Digital Trends.

Several deepfake creators Digital Trends talked to remain optimistic and consider governments’ pushback against deepfakes premature. They agree that deepfakes’ growing role in meme and parody culture will be instrumental in mending the emerging tech’s crummy reputation. And as long as videos carry disclaimers and platforms invest in more effective detection layers, they added, deepfakes are here to stay.

“I think that the reputation of deepfakes is improving significantly. Two years ago, the word deepfake automatically meant porn,” said the creator of Ctrl Shift Face. “Now, most people know deepfakes because of these entertaining videos circulating around the internet.”
