
An alternate history of the moon landing, brought to you by deepfakes

“Fate has ordained,” said President Nixon, staring solemnly into the enormous television camera lens from his seat in the White House’s Oval Office, “that the men who went to the moon to explore in peace will stay on the moon to rest in peace. These brave men, Neil Armstrong and Edwin Aldrin, know that there is no hope for their recovery. But they also know that there is hope for mankind in their sacrifice.”


Such are the words of what historians often refer to as the “contingency speech” — the backup address written prior to the 1969 moon landing in case the Apollo 11 mission went terribly, terribly wrong. Written by Nixon’s speechwriter, William Safire, who passed away in 2009, the July 18, 1969, contingency speech was how America’s 37th president would have publicly addressed a national (and, really, international) tragedy. It notes that humanity must always remember that “there is some corner of another world that is forever mankind.”

Of course, in reality, Nixon never had to deliver the speech. The Apollo 11 mission was, well, the Apollo 11 mission. We all know how it went — unless you’re one of the people convinced it was faked and the actual landing footage was directed in a studio by Stanley Kubrick.

But researchers from the Center for Advanced Virtuality at MIT are offering a glimpse into an alternate version of history in which this speech was given. And it’s pretty darn compelling.

Alternate histories


Launched Monday, MIT’s “In Event of Moon Disaster” project shines a light on the power of deepfake video and audio by doctoring authentic footage of Nixon’s real 1969 broadcast about the successful moon landing so that he appears to read the contingency speech. It’s accompanied by a website that provides plenty of documentation of the parallel history that never was — along with a 30-minute documentary titled “To Make a Deepfake.”

For those unfamiliar with the term, deepfakes use artificial intelligence to manipulate video or audio so that a person appears to say or do things they never did. We’ve covered no shortage of compelling deepfakes at Digital Trends, whether it’s the ominous message from 2019 in which Facebook CEO Mark Zuckerberg appeared to gloat about “millions of people’s stolen data” or the hilarious video, created especially for us, depicting Ryan Reynolds as Willy Wonka (which was retweeted by the Deadpool actor himself).

Ryan Reynolds as Willy Wonka. You're welcome. https://t.co/Qa6U0IeFLU @VancityReynolds @DigitalTrends pic.twitter.com/bmeH2JQrqn

— Drew Prindle (@GonzoTorpedo) August 13, 2019

Few homebrew deepfakes have been quite as convincing as MIT’s creation, however. Creating it took several months, a hired actor, a studio shoot, and an international team of researchers.

“Synthetic media is not easy to do,” co-director Francesca Panetta, XR Creative Director at the Center for Advanced Virtuality, told Digital Trends. “We did not [just] do a quick face swap like most deep fakes. We wanted to make the most convincing pieces of synthetic media that we could and so decided to do both visual and audio deep fakes.”

Creating the illusion


To create the visual side of the project, MIT partnered with an Israeli company called Canny A.I., which carries out what it calls video dialog replacement. This process, which could one day conceivably replace subtitles in movies, allows video makers to use “A.I. magic” to tweak existing source video showing a person speaking so that their lips and associated facial expressions match dubbed words. That could be dialog in another language. It could also mean that movie creators can freely change an actor’s lines after filming has finished, without having to reshoot the scene in question.

“We filmed our actor delivering the speech that we wanted President Nixon to deliver, then Canny A.I. took our video of that actor and used their A.I. capabilities to analyze it and say, ‘Oh, this person’s mouth moves in a particular way,’” co-director Halsey Burgund, a fellow at MIT Open Documentary Lab, told Digital Trends. “Then it can look at the other video of President Nixon’s mouth moving in a different way, and replace just enough — as little as possible — to make it appear as though President Nixon is saying those words.”

The audio deepfake portion was carried out by a Ukrainian company called Respeecher. Respeecher specializes in voice-to-voice speech conversion. In essence, that means making one voice sound like another. Such technology could, Respeecher notes on its website, be used for creating audiobooks read in the voice of a famous person (think celebrity autobiographies) or entertainment purposes like karaoke apps that let you sing in the voice of a song’s original singer.

To create the deepfake audio, the team recorded hundreds of clips of an actor reading lines previously spoken by President Nixon. Over time, Respeecher’s system learned to convert the input (the actor’s voice) into the desired output (Nixon’s).

“The reason we went with this technology rather than a text-to-speech technology is that by going speech-to-speech, you can preserve the performative aspects of the input,” Burgund said. “We can tell our actor, ‘speak a little bit slower’ or ‘emphasize this word,’ and really direct him to deliver the speech the way we wanted it. All of that direction would then be passed through to [the finished] President Nixon speech.”
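Respeecher hasn’t published the details of its model, but the paired-data idea Burgund describes — learning a mapping from an actor’s voice features to a target voice’s features, so that timing and emphasis in the input carry through to the output — can be illustrated with a toy sketch. Everything here is an assumption for illustration: the random “feature frames” stand in for aligned spectral features, and a simple least-squares linear map stands in for the neural conversion model a real system would use.

```python
import numpy as np

# Toy stand-in for speech-to-speech voice conversion training.
# Real systems learn a deep mapping between aligned spectral features
# of two voices; here a linear least-squares fit illustrates the idea.

rng = np.random.default_rng(0)

# Hypothetical paired training data: 500 aligned frames of 20-dim features.
actor_frames = rng.normal(size=(500, 20))
true_map = rng.normal(size=(20, 20))        # the unknown "voice" transform
target_frames = actor_frames @ true_map     # corresponding target-voice frames

# Fit the conversion model from the paired recordings.
learned_map, *_ = np.linalg.lstsq(actor_frames, target_frames, rcond=None)

# Convert a new actor performance. Because the mapping is applied
# frame by frame, the pacing and emphasis of the input performance
# survive in the converted output — the point Burgund makes above.
new_performance = rng.normal(size=(10, 20))
converted = new_performance @ learned_map
```

This is also why directing the actor works in such a pipeline: the performance lives in the input frames, and the conversion only changes whose voice they sound like.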

The future of deepfakes


MIT’s In Event of Moon Disaster project is a fascinating — albeit unsettling — glimpse at the future of deepfakes. The website it is hosted on constantly pushes the user to sort fact from fiction in what they are seeing and hearing. Combined with the mounds of “fake news” alternate history the project’s creators have dreamed up, this is not always as easy as it sounds. The results are both endlessly exciting when it comes to new storytelling possibilities and ever so slightly terrifying.

So what fascinating branch of history-that-never-was does the team plan to explore next? “We’ve thought about this,” Panetta said. “I’m not sure we’re going to become the production house for alternative histories. We’re interested in always pushing forward in terms of storytelling.”

However, she said that one of the things the past few months — including, notably, the Black Lives Matter anti-racism protests — has gotten her thinking about is the question of who gets to narrate our history. “There has been a lot of talk about reconsidering what the narrative has been,” Panetta said. “Synthetic media and A.I. can be used to redress the narratives around [the history we have been told].”

Who will be next to use deepfakes to explore alternate history? What history might they choose to address? These are questions only time will answer. Based on the evidence of this project, however, the past now seems every bit as unfixed and unpredictable as the future.

What a time to be alive. One small step, indeed.

Luke Dormehl
Former Digital Trends Contributor
I'm a UK-based tech writer covering Cool Tech at Digital Trends. I've also written for Fast Company, Wired, the Guardian…