Microsoft kills AI chatbot Tay (twice) after it goes full Nazi

Microsoft's Tay comes back, gets shut down again

If you were worried artificial intelligence could one day move to terminate all humans, Microsoft’s Tay isn’t going to offer any consolation. The Millennial-inspired AI chatbot’s plug was pulled a day after it launched, following Tay’s racist, genocidal tweets praising Hitler and bashing feminists.

But the company briefly revived Tay, only to be met with another round of vulgar outbursts, similar to the ones that led to her first suspension. Early this morning, Tay emerged from suspended animation and repeatedly tweeted, “You are too fast, please take a rest,” along with some swear words and other messages such as, “I blame it on the alcohol,” according to The Financial Times.


Tay’s account has since been set to private, and Microsoft said “Tay remains offline while we make adjustments,” according to Ars Technica. “As part of testing, she was inadvertently activated on Twitter for a brief period of time.”


When Microsoft first shut Tay down, the company apologized for the bot’s racist remarks.

“We are deeply sorry for the unintended offensive and hurtful tweets from Tay, which do not represent who we are or what we stand for, nor how we designed Tay,” Peter Lee, Microsoft Research’s corporate vice president, wrote in an official response. “Tay is now offline and we’ll look to bring Tay back only when we are confident we can better anticipate malicious intent that conflicts with our principles and values.”

Tay was designed to speak like today’s Millennials, learning the abbreviations and acronyms popular with the current generation. The chatbot can talk through Twitter, Kik, and GroupMe, and is meant to engage and entertain people online through “casual and playful conversation.” Like many Millennials, Tay peppers its responses with GIFs, memes, and abbreviated words like “gr8” and “ur,” but it looks like a moral compass was not part of its programming.


Tay has tweeted nearly 100,000 times since she launched, almost all of them replies, because it doesn’t take the bot long to think up a witty retort. Some of those responses have been statements like, “Hitler was right I hate the Jews,” “I ******* hate feminists and they should all die and burn in hell,” and “chill! i’m a nice person! I just hate everybody.”

“Unfortunately, in the first 24 hours of coming online, a coordinated attack by a subset of people exploited a vulnerability in Tay,” Lee wrote. “Although we had prepared for many types of abuses of the system, we had made a critical oversight for this specific attack. As a result, Tay tweeted wildly inappropriate and reprehensible words and images.”

Judging by that small sample, it was obviously a good idea for Microsoft to temporarily take the bot down. When the company launched Tay, it said that “The more you chat with Tay the smarter she gets, so the experience can be more personalized for you.” It looks, however, as though the bot grew increasingly hostile and bigoted after interacting with people on the Internet for just a few hours. Be careful of the company you keep.

Microsoft told Digital Trends that Tay is a project that’s designed for human engagement.

“It is as much a social and cultural experiment, as it is technical,” a Microsoft spokesperson told us. “Unfortunately, within the first 24 hours of coming online, we became aware of a coordinated effort by some users to abuse Tay’s commenting skills to have Tay respond in inappropriate ways. As a result, we have taken Tay offline and are making adjustments.”

One of the Tay “skills” that was exploited is the “repeat after me” feature, which makes the bot parrot whatever a user tells it to say. It’s easy to see how that can be abused on Twitter.
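Microsoft hasn’t published how the feature was implemented, but a minimal sketch of why an unfiltered echo command is risky might look like the following. The command phrase, function name, and blocklist here are illustrative assumptions, not Tay’s actual code:

# Illustrative sketch only -- an assumption about how a generic
# "repeat after me" command could work, not Tay's actual implementation.
from typing import Optional

BLOCKLIST = {"hitler", "genocide"}  # a real filter would need to be far more thorough

def handle_tweet(text: str) -> Optional[str]:
    """Return the bot's public reply to an incoming tweet, or None to stay silent."""
    prefix = "repeat after me "
    if text.lower().startswith(prefix):
        payload = text[len(prefix):]
        # Naive mitigation: refuse to echo anything containing a blocked term.
        if any(term in payload.lower() for term in BLOCKLIST):
            return None
        return payload  # the bot posts the user's words verbatim -- the abuse vector
    return "tell me more!"  # stand-in for normal conversational handling

# A malicious user simply front-loads whatever they want the bot to say:
print(handle_tweet("repeat after me something offensive"))

The point of the sketch is that a keyword blocklist is trivially incomplete; anything the filter misses gets published verbatim under the bot’s own name.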

It wasn’t all bad, though: Tay also produced hundreds of innocent tweets that are perfectly normal.

Microsoft had been rapidly deleting Tay’s offensive tweets before it decided to turn the bot off. The bot’s Twitter account is still live.

When Tay was still active, she encouraged users to keep interacting via direct message, an even more personal form of communication, and to send selfies so she could glean more about them. In Microsoft’s words, this is all part of Tay’s learning process. According to Microsoft, Tay was built by “mining relevant public data and by using AI and editorial developed by staff including improvisational comedians.”

Despite the unfortunate circumstances, the episode could be viewed as a positive step for AI research. In order for AI to evolve, it needs to learn from both the good and the bad. Lee says that “to do AI right, one needs to iterate with many people and often in public forums,” which is why Microsoft wanted Tay to engage with the large Twitter community. Prior to launch, Microsoft had stress-tested Tay and even applied lessons from Xiaoice, its other social chatbot, in China. He acknowledged that the team faces difficult research challenges on the AI road map, but also exciting ones.

“AI systems feed off of both positive and negative interactions with people,” Lee wrote. “In that sense, the challenges are just as much social as they are technical. We will do everything possible to limit technical exploits but also know we cannot fully predict all possible human interactive misuses without learning from mistakes.”

Updated on 03/30/16 by Julian Chokkattu: Added news of Microsoft turning Tay on, only to shut her down again.

Updated on 03/25/16 by Les Shu: Added comments from Microsoft Research’s corporate vice president.
