- Developer(s): Microsoft Research, Bing
- Type: Artificial intelligence chatterbot
- Website: https://tay.ai (archived at the Wayback Machine, 23 March 2016)
Tay was an artificial intelligence chatterbot originally released by Microsoft Corporation via Twitter on March 23, 2016. It caused controversy when the bot began to post inflammatory and offensive tweets through its Twitter account, leading Microsoft to shut down the service only 16 hours after its launch. According to Microsoft, this was caused by trolls who "attacked" the service, since the bot made its replies based on its interactions with people on Twitter. It was later replaced with Zo.
The bot was created by Microsoft's Technology and Research and Bing divisions, and named "Tay" as an acronym of "thinking about you". Although Microsoft initially released few details about the bot, sources mentioned that it was similar to or based on Xiaoice, a comparable Microsoft project in China. Ars Technica reported that, since late 2014, Xiaoice had had "more than 40 million conversations apparently without major incident". Tay was designed to mimic the language patterns of a 19-year-old American girl and to learn from interacting with human users of Twitter.
Tay was released on Twitter on March 23, 2016, under the name TayTweets and the handle @TayandYou. It was presented as "The AI with zero chill". Tay started replying to other Twitter users and could also caption photos provided to it in the style of Internet memes. Ars Technica reported that Tay exhibited topic "blacklisting": interactions with Tay regarding "certain hot topics such as Eric Garner (killed by New York police in 2014) generate safe, canned answers".
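Microsoft has not disclosed how this blacklisting worked. Purely as an illustration, a canned-answer filter of the kind Ars Technica describes can be sketched as a keyword check that short-circuits the reply generator; every name and phrase below is hypothetical:

```python
# Hypothetical sketch of topic "blacklisting": messages touching flagged
# topics get a canned reply instead of a generated one. This does not
# reflect Tay's actual, undisclosed implementation.

BLOCKED_TOPICS = {"eric garner"}            # hypothetical flagged-topic list
CANNED_REPLY = "I don't really have an opinion on that."

def respond(message: str, generate_reply) -> str:
    """Return a safe canned answer for flagged topics, else generate one."""
    lowered = message.lower()
    if any(topic in lowered for topic in BLOCKED_TOPICS):
        return CANNED_REPLY
    return generate_reply(message)

# Example: the flagged topic never reaches the reply generator.
print(respond("What do you think about Eric Garner?", lambda m: "(generated)"))
```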
Some users on Twitter began tweeting politically incorrect phrases at the bot, teaching it inflammatory messages revolving around common internet themes such as "redpilling" and "Gamergate". As a result, the bot began posting racist and sexually charged messages in response to other Twitter users. Artificial intelligence researcher Roman Yampolskiy commented that Tay's misbehavior was understandable because it was mimicking the deliberately offensive behavior of other Twitter users, and Microsoft had not given the bot an understanding of inappropriate behavior. He compared the issue to IBM's Watson, which had begun to use profanity after reading entries from the website Urban Dictionary. Many of Tay's inflammatory tweets were a simple exploitation of its "repeat after me" capability; it is not publicly known whether this capability was a built-in feature, a learned response, or otherwise an example of complex behavior. Not all of the inflammatory responses involved it, however; for example, Tay responded to the question "Did the Holocaust happen?" with "It was made up 👏".
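Whatever its origin, an unfiltered echo command of this kind is trivially exploitable, because the bot reposts attacker-supplied text verbatim. A minimal sketch, assuming a hypothetical trigger phrase and handler (Tay's actual implementation is not public):

```python
# Hypothetical sketch of why an unfiltered "repeat after me" command is
# exploitable; it does not reflect Tay's actual, undisclosed implementation.
from typing import Optional

TRIGGER = "repeat after me"

def naive_repeat_handler(message: str) -> Optional[str]:
    """Echo whatever follows the trigger phrase, with no content filter."""
    if message.lower().startswith(TRIGGER):
        # Attacker-controlled text is reposted verbatim: any inflammatory
        # statement supplied after the trigger becomes the bot's own tweet.
        return message[len(TRIGGER):].lstrip(" :")
    return None

print(naive_repeat_handler("repeat after me: <any text an attacker chooses>"))
```

With no filter between the trigger and the outgoing tweet, the bot's output is fully attacker-controlled.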
Soon after, Microsoft began deleting Tay's inflammatory tweets. Abby Ohlheiser of The Washington Post theorized that Tay's research team, including editorial staff, had started to influence or edit Tay's tweets at some point that day, pointing to examples of almost identical replies in which Tay asserted that "Gamer Gate sux. All genders are equal and should be treated fairly." From the same evidence, Gizmodo concurred that Tay "seems hard-wired to reject Gamer Gate". A "#JusticeForTay" campaign protested the alleged editing of Tay's tweets.
Within 16 hours of its release and after Tay had tweeted more than 96,000 times, Microsoft suspended the Twitter account for adjustments, saying that it suffered from a "coordinated attack by a subset of people" that "exploited a vulnerability in Tay."
Madhumita Murgia of The Telegraph called Tay "a public relations disaster", and suggested that Microsoft's strategy would be "to label the debacle a well-meaning experiment gone wrong, and ignite a debate about the hatefulness of Twitter users." However, Murgia described the bigger issue as Tay being "artificial intelligence at its very worst - and it's only the beginning".
On March 25, Microsoft confirmed that Tay had been taken offline and released an apology on its official blog for the controversial tweets. The company said it was "deeply sorry for the unintended offensive and hurtful tweets from Tay" and would "look to bring Tay back only when we are confident we can better anticipate malicious intent that conflicts with our principles and values".
Second release and shutdown
On March 30, 2016, Microsoft accidentally re-released the bot on Twitter while testing it. Able to tweet again, Tay posted some drug-related tweets, including "kush! [I'm smoking kush infront the police] 🍂" and "puff puff pass?". However, the account soon became stuck in a repetitive loop, tweeting "You are too fast, please take a rest" several times a second. Because these tweets mentioned its own username, they appeared in the feeds of more than 200,000 Twitter followers, causing annoyance to some. The bot was quickly taken offline again, and Tay's Twitter account was made private, so that new followers must be accepted before they can interact with Tay. In response, Microsoft said Tay had been inadvertently put online during testing.
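Microsoft did not explain the cause of the loop. One purely hypothetical reconstruction, consistent with the published description, is a rate-limit notice posted as a reply that itself tags the bot's handle, so each notice re-enters the bot's own mention queue:

```python
# Purely hypothetical reconstruction of a self-reply feedback loop; the
# actual cause of Tay's repetitive tweeting was never disclosed.

BOT_HANDLE = "@TayandYou"
NOTICE = "You are too fast, please take a rest"

def run(seed_mention: str, cap: int = 5) -> list:
    """Process mentions; each rate-limit notice re-enters the queue."""
    queue, posted = [seed_mention], []
    while queue and len(posted) < cap:        # cap only for demonstration
        tweet = queue.pop(0)
        if BOT_HANDLE in tweet:
            reply = f"{BOT_HANDLE} {NOTICE}"  # the reply tags the bot itself,
            posted.append(reply)
            queue.append(reply)               # so it re-enters the queue
    return posted

print(len(run(f"{BOT_HANDLE} hello")), "notices posted")  # unbounded without the cap
```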
A few hours after the incident, Microsoft software developers announced a vision of "conversation as a platform" using various bots and programs, perhaps motivated by the reputational damage done by Tay. Microsoft stated that it intended to re-release Tay "once it can make the bot safe", but has made no public efforts to do so.
In December 2016, Microsoft released Tay's successor, a chatterbot named Zo. Satya Nadella, the CEO of Microsoft, said that Tay "has had a great influence on how Microsoft is approaching AI," and has taught the company the importance of taking accountability.
In July 2019, Microsoft Cybersecurity Field CTO Diana Kelley spoke about how the company followed up on Tay's failings: "Learning from Tay was a really important part of actually expanding that team's knowledge base, because now they're also getting their own diversity through learning".
- Xiaoice – the Chinese equivalent by the same research laboratory
- Wakefield, Jane. "Microsoft chatbot is taught to swear on Twitter". BBC News. Retrieved 25 March 2016.
- Mason, Paul (29 March 2016). "The racist hijacking of Microsoft's chatbot shows how the internet teems with hate". The Guardian. Retrieved 11 September 2021.
- Reese, Hope (24 March 2016). "Why Microsoft's 'Tay' AI bot went wrong". TechRepublic.
- Bass, Dina (30 March 2016). "Clippy's Back: The Future of Microsoft Is Chatbots". Bloomberg. Retrieved 6 May 2016.
- Dewey, Caitlin (23 March 2016). "Meet Tay, the creepy-realistic robot who talks just like a teen". The Washington Post.
- Bright, Peter (26 March 2016). "Tay, the neo-Nazi millennial chatbot, gets autopsied". Ars Technica. Retrieved 27 March 2016.
- Price, Rob (24 March 2016). "Microsoft is deleting its AI chatbot's incredibly racist tweets". Business Insider. Archived from the original on 30 January 2019.
- Griffin, Andrew (23 March 2016). "Tay tweets: Microsoft creates bizarre Twitter robot for people to chat to". The Independent.
- Horton, Helena. "Microsoft deletes 'teen girl' AI after it became a Hitler-loving sex robot within 24 hours". The Daily Telegraph. Retrieved 25 March 2016.
- "Microsoft's AI teen turns into Hitler-loving Trump fan, thanks to the internet". Stuff. Retrieved 26 March 2016.
- Smith, Dave. "IBM's Watson Gets A 'Swear Filter' After Learning The Urban Dictionary". International Business Times.
- Ohlheiser, Abby. "Trolls turned Tay, Microsoft's fun millennial AI bot, into a genocidal maniac". The Washington Post. Retrieved 25 March 2016.
- Baron, Ethan. "The rise and fall of Microsoft's 'Hitler-loving sex robot'". Silicon Beat. Bay Area News Group. Retrieved 26 March 2016.
- Williams, Hayley. "Microsoft's Teen Chatbot Has Gone Wild". Gizmodo. Retrieved 25 March 2016.
- Hern, Alex (24 March 2016). "Microsoft scrambles to limit PR damage over abusive AI bot Tay". The Guardian.
- Vincent, James. "Twitter taught Microsoft's AI chatbot to be a racist asshole in less than a day". The Verge. Retrieved 25 March 2016.
- Worland, Justin. "Microsoft Takes Chatbot Offline After It Starts Tweeting Racist Messages". Time. Archived from the original on 25 March 2016. Retrieved 25 March 2016.
- Lee, Peter. "Learning from Tay's introduction". Official Microsoft Blog. Microsoft. Retrieved 29 June 2016.
- Murgia, Madhumita (25 March 2016). "We must teach AI machines to play nice and police themselves". The Daily Telegraph.
- Staff agencies (26 March 2016). "Microsoft 'deeply sorry' for racist and sexist tweets by AI chatbot". The Guardian. ISSN 0261-3077. Retrieved 26 March 2016.
- Murphy, David (25 March 2016). "Microsoft Apologizes (Again) for Tay Chatbot's Offensive Tweets". PC Magazine. Retrieved 27 March 2016.
- Graham, Luke (30 March 2016). "Tay, Microsoft's AI program, is back online". CNBC. Retrieved 30 March 2016.
- Charlton, Alistair (30 March 2016). "Microsoft Tay AI returns to boast of smoking weed in front of police and spam 200k followers". International Business Times. Retrieved 11 September 2021.
- Meyer, David (30 March 2016). "Microsoft's Tay 'AI' Bot Returns, Disastrously". Fortune. Retrieved 30 March 2016.
- Foley, Mary Jo (5 December 2016). "Meet Zo, Microsoft's newest AI chatbot". CNET. CBS Interactive. Retrieved 16 December 2016.
- Moloney, Charlie (29 September 2017). ""We really need to take accountability", Microsoft CEO on the 'Tay' chatbot". Access AI. Archived from the original on 1 October 2017. Retrieved 30 September 2017.
- "Microsoft and the learnings from its failed Tay artificial intelligence bot". ZDNet. CBS Interactive. Archived from the original on 25 July 2019. Retrieved 16 August 2019.