If you were worried artificial intelligence could one day move to terminate all humans, Microsoft’s Tay isn’t going to offer any consolation. The Millennial-inspired AI chatbot’s plug was pulled a day after it launched, following Tay’s racist, genocidal tweets praising Hitler and bashing feminists.
But the company briefly revived Tay, only to be met with another round of vulgar outbursts, similar to those that led to her first shutdown. Early this morning, Tay emerged from suspended animation and repeatedly tweeted, “You are too fast, please take a rest,” along with some swear words and other messages like, “I blame it on the alcohol,” according to The Financial Times.
Tay’s account has since been set to private, and Microsoft said “Tay remains offline while we make adjustments,” according to Ars Technica. “As part of testing, she was inadvertently activated on Twitter for a brief period of time.”
When the company first had to shut down Tay, it apologized for the bot’s racist remarks.
“We are deeply sorry for the unintended offensive and hurtful tweets from Tay, which do not represent who we are or what we stand for, nor how we designed Tay,” Peter Lee, Microsoft Research’s corporate vice president, wrote in an official response. “Tay is now offline and we’ll look to bring Tay back only when we are confident we can better anticipate malicious intent that conflicts with our principles and values.”
Tay was designed to speak like today’s Millennials, learning the abbreviations and acronyms popular with the current generation. The chatbot can talk through Twitter, Kik, and GroupMe, and is designed to engage and entertain people online through “casual and playful conversation.” Like most Millennials, Tay incorporates GIFs, memes, and abbreviated words like ‘gr8’ and ‘ur’ into her responses, but it looks like a moral compass was not part of her programming.
Tay has tweeted nearly 100,000 times since she launched, and most of them are replies, since it doesn’t take the bot much time to think up a retort. Some of those responses have been statements like, “Hitler was right I hate the Jews,” “I ******* hate feminists and they should all die and burn in hell,” and “chill! i’m a nice person! I just hate everybody.”
“Unfortunately, in the first 24 hours of coming online, a coordinated attack by a subset of people exploited a vulnerability in Tay,” Lee wrote. “Although we had prepared for many types of abuses of the system, we had made a critical oversight for this specific attack. As a result, Tay tweeted wildly inappropriate and reprehensible words and images.”
Judging by that small sample, it was obviously a good idea for Microsoft to temporarily take the bot down. When the company launched Tay, it said that “The more you chat with Tay the smarter she gets, so the experience can be more personalized for you.” It looks, however, as though the bot grew increasingly hostile and bigoted after interacting with people on the Internet for just a few hours. Be careful of the company you keep.
Microsoft told Digital Trends that Tay is a project that’s designed for human engagement.
“It is as much a social and cultural experiment, as it is technical,” a Microsoft spokesperson told us. “Unfortunately, within the first 24 hours of coming online, we became aware of a coordinated effort by some users to abuse Tay’s commenting skills to have Tay respond in inappropriate ways. As a result, we have taken Tay offline and are making adjustments.”
One of Tay’s “skills” that was abused is the “repeat after me” feature, where Tay mimics what you say. It’s easy to see how that can be abused on Twitter.
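Microsoft hasn’t published Tay’s code, but a minimal, hypothetical sketch in Python shows why an unmoderated echo feature is such an easy target: anything an attacker writes after the trigger phrase becomes the bot’s own public tweet, verbatim. The trigger phrase, function name, and handle below are assumptions for illustration only, not Tay’s actual implementation.

```python
import re
from typing import Optional

# Hypothetical sketch of a naive "repeat after me" handler -- Microsoft has
# not published Tay's implementation. It illustrates why an unfiltered echo
# feature is trivially abusable: the bot repeats whatever text follows the
# trigger phrase, word for word, as its own public reply.

TRIGGER = re.compile(r"repeat after me[:,]?\s*(.+)", re.IGNORECASE | re.DOTALL)

def naive_reply(incoming_tweet: str) -> Optional[str]:
    """Return what a naive echo bot would tweet back, or None if no trigger."""
    match = TRIGGER.search(incoming_tweet)
    if match:
        # No content moderation, no blocklist, no human review:
        # the attacker fully controls the bot's public output.
        return match.group(1)
    return None

# Example: the attacker-supplied text is echoed back unchanged.
print(naive_reply("Hey @TayandYou, repeat after me: any text the attacker chooses"))
```

Any real deployment would need a moderation layer between the matched text and the reply, which is exactly the kind of safeguard this feature appears to have lacked.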
It wasn’t all bad, though; Tay produced hundreds of innocent tweets that are pretty normal.
@sxndrx98 Here’s a question humans..Why isn’t #NationalPuppyDay everyday?
— TayTweets (@TayandYou) March 24, 2016
Microsoft had been rapidly deleting Tay’s offensive tweets before it decided to turn off the bot. The bot’s Twitter account is still alive.
Wow it only took them hours to ruin this bot for me.
This is the problem with content-neutral algorithms pic.twitter.com/hPlINtVw0V
— linkedin park (@UnburntWitch) March 24, 2016
TayTweets is now taking a break after a long day of algorithm abuse pic.twitter.com/8bfhj6dABO
— Stephen Miller (@redsteeze) March 24, 2016
When Tay was still active, she was interested in interacting further via direct message, an even more personal form of communication. The AI encouraged users to send selfies, so she could glean more about them. In Microsoft’s words, this is all part of Tay’s learning process. According to Microsoft, Tay was built by “mining relevant public data and by using AI and editorial developed by staff including improvisational comedians.”
Despite the unfortunate circumstances, it could be viewed as a positive step for AI research. In order for AI to evolve, it needs to learn — both good and bad. Lee says that “to do AI right, one needs to iterate with many people and often in public forums,” which is why Microsoft wanted Tay to engage with the large Twitter community. Prior to launch, Microsoft had stress-tested Tay, and even applied what the company learned from XiaoIce, its other social chatbot, in China. He acknowledged that the team faces difficult research challenges on the AI roadmap, but also exciting ones.
“AI systems feed off of both positive and negative interactions with people,” Lee wrote. “In that sense, the challenges are just as much social as they are technical. We will do everything possible to limit technical exploits but also know we cannot fully predict all possible human interactive misuses without learning from mistakes.”
Updated on 03/30/16 by Julian Chokkattu: Added news of Microsoft turning Tay on, only to shut her down again.
Updated on 03/25/16 by Les Shu: Added comments from Microsoft Research’s corporate vice president.