
Will this deepfake of a power-hungry Zuckerberg make Facebook rethink fake news?

A video of Facebook founder Mark Zuckerberg proclaiming his power over “millions of people’s stolen data” as the billionaire stoically gestures on camera is garnering tens of thousands of views on Instagram. The problem? The video was generated with artificial intelligence, and the real Zuckerberg had nothing to do with the video or the words it contains.

Deepfake videos circulating on social media are nothing new; the most famous was a recent fake of House Speaker Nancy Pelosi that went viral on Facebook. But a video featuring Facebook’s own founder has users wondering whether the social media company will stick to its previous statements about leaving such fake, A.I.-generated videos in place.


So far, the video, which is tagged #deepfake if you read through the entire list of hashtags, has remained live on the platform for four days without being removed by the company it bashes. Reposts of the video are even still showing up in hashtag searches.


In previous statements, both Instagram and Facebook have taken a “reduce, but not remove” stance on fake videos. Facebook, for example, will demote fake videos in the news feed, while Instagram will keep such videos off hashtag pages and the Explore section. Instagram is also testing a feature that adds links to fact-checking resources, something Facebook already offers. Both use third-party fact-checkers to determine what content qualifies as fake news. So far, Facebook hasn’t changed that stance to benefit its own founder, and the video remains intact. Views of the video doubled on Wednesday to more than 60,000.

“We will treat this content the same way we treat all misinformation on Instagram,” an Instagram representative told Digital Trends. “If third-party fact-checkers mark it as false, we will filter it from Instagram’s recommendation surfaces like Explore and hashtag pages.”

The misinformation may not be the only reason to remove the video, however. The clip uses a news-broadcast-style header, including the CBSN logo, and CBS is pushing for its removal over the unauthorized use of the network’s trademark. “CBS has requested that Facebook take down this fake, unauthorized use of the CBSN trademark,” a CBS spokesperson said.

The video was created using CannyAI software and earlier footage of Zuckerberg, captured in 2017, in which he originally discussed election interference. The project, put together by Bill Posters and Daniel Howe with advertising company Canny, was designed as a stunt to show off the CannyAI software and the potential future of artificial video.

The software uses A.I. algorithms to manipulate a speaker’s mouth movements by studying footage from an existing video. The result is a video that differs from the original, with the speaker’s mouth and expression appearing to match a new voice track. As the Zuckerberg clip makes evident, however, the software doesn’t imitate the speaker’s voice, clueing in viewers familiar with Zuckerberg’s speeches that the video isn’t legit.
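CannyAI hasn’t published its implementation, but lip-sync-style video manipulation is commonly structured as a simple loop: locate the mouth region in each frame of real footage, re-render that region with a generator model trained on the target’s face so the lips track a new audio performance, and composite the result back into the frame. The Python sketch below is a minimal, hypothetical illustration of that structure only; the function names, the placeholder “generator,” and the audio-feature shapes are assumptions for illustration, not CannyAI’s actual code.

# Generic, hypothetical sketch of a lip-sync-style deepfake pipeline.
# This is NOT CannyAI's code; every function here is a simplified stand-in.

import numpy as np


def detect_mouth_region(frame):
    """Return a (top, bottom, left, right) box around the mouth.

    A real pipeline would use a facial-landmark detector; this placeholder
    simply assumes the mouth sits in the lower-middle of the frame.
    """
    h, w, _ = frame.shape
    return int(h * 0.65), int(h * 0.90), int(w * 0.30), int(w * 0.70)


def synthesize_mouth(mouth_crop, audio_features):
    """Stand-in for a generator network trained on footage of the target.

    A real model would re-render the crop so the lips match the new audio;
    this placeholder returns the crop unchanged.
    """
    return mouth_crop


def lip_sync(frames, audio_features):
    """Replace the mouth region frame by frame so speech appears to match new audio."""
    output = []
    for frame, feats in zip(frames, audio_features):
        top, bottom, left, right = detect_mouth_region(frame)
        new_mouth = synthesize_mouth(frame[top:bottom, left:right], feats)
        edited = frame.copy()
        edited[top:bottom, left:right] = new_mouth  # naive paste; real systems blend the edges
        output.append(edited)
    return output


# Toy usage: 90 blank 256x256 frames and 90 per-frame audio feature vectors.
frames = [np.zeros((256, 256, 3), dtype=np.uint8) for _ in range(90)]
audio_features = [np.zeros(80, dtype=np.float32) for _ in range(90)]
faked_frames = lip_sync(frames, audio_features)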

While Facebook has so far left the deepfake intact, the video also serves as another reminder that videos, like photos, can be easily manipulated and passed off as the real thing.

Updated on June 12, 2019: Added official statements from Instagram and CBS.

Why a deepfake ban won’t solve Facebook’s real problems

Facebook late on Monday published a statement outlining how it will handle “manipulated media,” also known as deepfakes. It’s the latest social media giant to tackle what is seen as a looming political problem ahead of the 2020 election, but experts say its policy leaves a loophole.

In a blog post, the company said that it would essentially ban most deepfakes, “investigate A.I.-generated content and deceptive behaviors like fake accounts,” and “partner with academia, government, and industry to expose people behind these efforts.”

Ahead of the 2020 presidential election, Facebook says it’s banning deepfakes
Facebook Chairman and CEO Mark Zuckerberg testifies before the House Financial Services Committee on "An Examination of Facebook and Its Impact on the Financial Services and Housing Sectors" in the Rayburn House Office Building in Washington, DC on October 23, 2019.

Two days before Facebook is set to appear in front of lawmakers at a House Energy and Commerce hearing on manipulated media, the social network has announced it’s banning all forms of deepfakes. The announcement represents a significant step forward for Facebook, which has been struggling to mend its ailing image with the 2020 presidential election right around the corner.

In a blog post, Monika Bickert, Facebook’s vice president of global policy management, said the company will take down videos that have been “edited or synthesized in ways that aren’t apparent to an average person” or are the “product of artificial intelligence or machine learning that merges, replaces or superimposes content onto a video, making it appear to be authentic.”

Snopes says ex-partner Facebook is ‘not committed’ to fighting fake news

Snopes, the internet’s favorite fact-checking site, is having a good week. The site scored a win when Facebook said it removed more than 600 profiles, along with a number of pages and groups associated with those profiles, following extensive reporting by Snopes. That reporting claimed that a network of inauthentic Facebook profiles was artificially boosting engagement for a pro-President Donald Trump media outlet.

Facebook did not respond to a request for comment on its future strategy for fighting the ongoing problem of inauthentic engagement and fake user profiles. The company has previously announced a raft of efforts to fight fake news, including partnering with local fact-checking organizations around the world to monitor content on its platform. At the same time, though, it has said it will be “demoting,” but not removing, content rated as untrustworthy, and it has decided not to fact-check political ads.
