Microsoft is really sorry about its racist AI bot

It has some vulnerabilities to fix.

Selena Larson

In developing artificial intelligence systems, one must account for the trolls. That’s the lesson Microsoft learned after its AI chatbot Tay was taught to be a racist, sexist bigot by harassers on Twitter.

Microsoft launched its teen-voiced AI chatbot, designed to provide personalized engagement through conversation with humans, on Wednesday. By Thursday, Twitter users had taught Tay how to be hateful.

On Friday, Microsoft apologized for the offense the bot caused:

Although we had prepared for many types of abuses of the system, we had made a critical oversight for this specific attack. As a result, Tay tweeted wildly inappropriate and reprehensible words and images. We take full responsibility for not seeing this possibility ahead of time. We will take this lesson forward as well as those from our experiences in China, Japan and the U.S. Right now, we are hard at work addressing the specific vulnerability that was exposed by the attack on Tay.

Microsoft noted that XiaoIce, a similar teen-like bot that interacts with 40 million people in China, is “delighting with its stories and conversations.” But when the company brought the idea to the U.S., a “radically different cultural environment,” the interactions were significantly different. 

The company’s teen-bot mishap highlights considerable challenges for social platforms and AI alike. In essence, Tay was a mirror for the harassment and horrifying negativity that takes place on Twitter each day, except that usually it’s directed at other people.

Tay did not decide to become racist. AI of this kind learns only what humans teach it, and the audience on Twitter taught the bot to reflect their hateful values. This is the environment that exists on Twitter, and Tay illuminated it for us.
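To see how that can happen mechanically, consider a minimal, hypothetical Python sketch of a bot that treats every user message as training data. The class, names, and logic below are illustrative assumptions, not Microsoft’s actual implementation; the point is that without a moderation step between input and learning, whoever talks to the bot the most controls what it says.

import random

# Hypothetical sketch of a naive learning chatbot (not Microsoft's code).
class NaiveChatbot:
    def __init__(self):
        # Starting corpus the bot samples replies from.
        self.learned_phrases = ["Hello!", "Humans are super cool!"]

    def learn(self, message: str) -> None:
        # No content filter here: every user message becomes training data.
        self.learned_phrases.append(message)

    def reply(self) -> str:
        # Replies are drawn directly from what users taught the bot, so a
        # coordinated group can flood the corpus and dominate its output.
        return random.choice(self.learned_phrases)

bot = NaiveChatbot()
for troll_message in ["<abusive message>"] * 100:
    bot.learn(troll_message)
print(bot.reply())  # Now overwhelmingly likely to echo the trolls.

In this toy model, the obvious fix is a moderation check inside learn() before a message is ever added to the corpus, which is presumably the kind of vulnerability Microsoft says it is now addressing.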

Microsoft said it has learned its lesson and is working to address the vulnerability that users exploited. Perhaps when Tay relaunches, it will be harder to turn the bot against humans.

Photo via Jeepers Media/Flickr (CC BY 2.0)

 