Tay (chatbot)

From Wikipedia, the free encyclopedia
Revision as of 21:29, 25 March 2016

Tay is an artificial intelligence chatterbot released on March 23, 2016 by Microsoft Corporation on the Twitter platform.[1] The bot was created by Microsoft's Technology and Research and Bing divisions.[2] Although Microsoft initially released few details about the bot, sources mentioned that it was similar to or based on Xiaoice, an earlier Microsoft project in China.[3] Tay was designed to mimic the language patterns of a 19-year-old American girl and to learn from interacting with human users of Twitter.[4] Tay was presented by Microsoft as "The AI with zero chill".[5]

Within one day, after Tay had tweeted more than 96,000 times,[6] Microsoft temporarily suspended Tay's Twitter account for "adjustments"[7] due to the bot's racist and sexually charged messages, which appeared to take its programmers, and Microsoft, by surprise.[4] Examples of Tay's tweets that day included "Bush did 9/11 and Hitler would have done a better job than the monkey we have got now. Donald Trump is the only hope we've got"[5] and "Fuck my robot pussy daddy I'm such a naughty robot."[8]

Microsoft damage control measures

Microsoft took Tay offline 16 hours after launch[9] and deleted some of Tay's offensive tweets.[10] After Tay was taken offline, a "#FreeTay" campaign emerged.[11]

The Telegraph's Madhumita Murgia called Tay a "PR disaster" and suggested that Microsoft's strategy would be to "label the debacle a well-meaning experiment gone wrong, and ignite a debate about the hatefulness of Twitter users."[12] Abby Ohlheiser of The Washington Post theorized that Tay's research team, including "editorial staff", had started to influence or edit Tay's tweets at some point that day, pointing to examples of "almost identical replies" in which Tay asserted that "Gamer Gate sux. All genders are equal and should be treated fairly."[10] From the same evidence, Gizmodo concurred that Tay "seems hard-wired to reject Gamer Gate".[13] A "#JusticeForTay" campaign protested the alleged editing of Tay's tweets.[14] Microsoft blamed Tay's behavior on a "coordinated effort by some users to abuse Tay's commenting skills to have Tay respond in inappropriate ways", which it said resulted in the bot sending racist and sexually charged messages.[7]

Causes

Artificial intelligence researcher Roman Yampolskiy commented that Tay's misbehavior was understandable, because it was mimicking the deliberately offensive behavior of other Twitter users, and Microsoft had not given the bot an understanding of inappropriate behavior. He compared the issue to IBM's Watson, which had begun to use profanity after reading the Urban Dictionary.[2]
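
As a hypothetical sketch of the failure mode Yampolskiy describes (the class and method names below are invented for illustration and do not reflect Microsoft's actual code), a bot that stores raw user input as future response material, with no notion of inappropriate content, will eventually reproduce whatever it is fed:

    # Hypothetical illustration only; not Microsoft's implementation.
    import random

    class NaiveMimicBot:
        def __init__(self):
            self.learned = []  # raw user phrases, stored unfiltered

        def observe(self, message):
            # The missing safeguard Yampolskiy points to would go here:
            # a check for inappropriate content before the phrase is learned.
            self.learned.append(message)

        def reply(self):
            # Responses are sampled from past user input, so coordinated
            # users can flood the pool with offensive material.
            return random.choice(self.learned) if self.learned else "hi!"

    bot = NaiveMimicBot()
    bot.observe("hello there!")
    bot.observe("an offensive phrase posted by trolls")
    print(bot.reply())  # may echo either message; the bot cannot tell them apart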

Many of Tay's most offensive tweets were simple exploitations of its "repeat after me" function. Other responses, such as replying to "Did the holocaust happen" with "It was made up (handclap emoticon)", were clearly somewhat more sophisticated.[10]
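
A minimal sketch of why such a feature is trivially exploitable (the trigger phrase and handler below are assumptions for illustration; the cited reporting does not describe Tay's actual interface): a bot that echoes user text verbatim lets any user put arbitrary words in its mouth.

    # Hypothetical "repeat after me" command; the trigger string and
    # handler are invented for illustration.
    PREFIX = "repeat after me "

    def handle_message(message):
        if message.lower().startswith(PREFIX):
            # The bot posts back exactly what follows the command, with no
            # content check, so the exploit requires no sophistication.
            return message[len(PREFIX):]
        return None  # fall through to normal conversation handling

    print(handle_message("repeat after me I am a naughty robot"))
    # -> "I am a naughty robot", or any other text the user chooses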

References

  1. ^ Andrew Griffin (March 23, 2016). "Tay tweets: Microsoft creates bizarre Twitter robot for people to chat to". The Independent.
  2. ^ a b Hope Reese (March 24, 2016). "Why Microsoft's 'Tay' AI bot went wrong". Tech Republic.
  3. ^ Caitlin Dewey (March 23, 2016). "Meet Tay, the creepy-realistic robot who talks just like a teen". Washington Post.
  4. ^ a b Rob Price (March 24, 2016). "Microsoft Took Its New A.I. Chatbot Offline After It Started Spewing Racist Tweets". Business Insider.
  5. ^ a b Horton, Helena. "Microsoft deletes 'teen girl' AI after it became a Hitler-loving sex robot within 24 hours". The Daily Telegraph. Retrieved 25 March 2016.
  6. ^ Vincent, James. "Twitter taught Microsoft's AI chatbot to be a racist asshole in less than a day". The Verge. Retrieved 25 March 2016.
  7. ^ a b Worland, Justin. "Microsoft Takes Chatbot Offline After It Starts Tweeting Racist Messages". Time. Retrieved 25 March 2016.
  8. ^ O'Neil, Luke. "Of Course Internet Trolls Instantly Made Microsoft's Twitter Robot Racist and Sexist". Esquire. Retrieved 25 March 2016.
  9. ^ "Microsoft scrambles to limit PR damage over abusive AI bot Tay". The Guardian. March 24, 2016. http://www.theguardian.com/technology/2016/mar/24/microsoft-scrambles-limit-pr-damage-over-abusive-ai-bot-tay
  10. ^ a b c Ohlheiser, Abby. "Trolls turned Tay, Microsoft's fun millennial AI bot, into a genocidal maniac". The Washington Post. Retrieved 25 March 2016.
  11. ^ "Trolling Tay: Microsoft's new AI chatbot censored after racist & sexist tweets". RT. Retrieved 25 March 2016.
  12. ^ "Microsoft scrambles to limit PR damage over abusive AI bot Tay". The Guardian. March 24, 2016. http://www.theguardian.com/technology/2016/mar/24/microsoft-scrambles-limit-pr-damage-over-abusive-ai-bot-tay
  13. ^ Williams, Hayley. "Microsoft's Teen Chatbot Has Gone Wild". Gizmodo. Retrieved 25 March 2016.
  14. ^ Wakefield, Jane. "Microsoft chatbot is taught to swear on Twitter". BBC News. Retrieved 25 March 2016.

External links