Microsoft’s AI chatbot Tay learned how to be racist in less than 24 hours

March 24, 2016

Tay, Microsoft’s AI chatbot on Twitter, had to be pulled within hours of launch after it started making racist comments.

As we reported yesterday, it was aimed at 18- to 24-year-olds and was hailed as “AI fam from the internet that’s got zero chill”.


The AI behind the chatbot was designed to get smarter the more people engaged with it. But, rather sadly, the engagement it received simply taught it how to be racist.

Things took a turn for the worse when Tay was asked whether British comedian Ricky Gervais is an atheist. It replied, “ricky gervais learned totalitarianism from adolf hitler, the inventor of atheism.”

We’ve reached out to Ricky for comment and will update the story, whether he decides to take this seriously or not.

From there, Tay’s AI simply gobbled up whatever people were tweeting at it – and the messages got progressively more extreme. A reply to Twitter user @icbydt read, “bush did 9/11 and Hitler would have done a better job than the monkey we have now. donald trump is the only hope we’ve got.”

Then this happened:

[Image: one of Tay’s tweets]

Interestingly, according to Microsoft’s privacy agreement, there are humans contributing to Tay’s tweets:

Tay has been built by mining relevant public data and by using AI and editorial developed by a staff including improvisational comedians. Public data that’s been anonymized is Tay’s primary data source. That data has been modeled, cleaned and filtered by the team developing Tay.

After 16 hours and a tirade of tweets, Tay went quiet. Nearly all of the tweets in question have since been deleted, and Tay left Twitter with a final thanks.

Many took to Twitter to discuss the sudden ‘silencing of Tay’.

Others, meanwhile, wanted the tweets to remain as an example of the dangers of artificial intelligence when it’s left to its own devices.

It’s just another reason why the Web cannot be trusted. If people pick up racism as fast as Microsoft’s AI chatbot, we’re all in trouble.

But maybe AI isn’t all that bad. 

Tay, Microsoft’s AI chatbot, gets a crash course in racism from Twitter [Guardian]
