In developing artificial intelligence systems, one must account for the trolls. That’s the lesson Microsoft has learned after its AI chatbot Tay was taught how to be a racist, sexist bigot thanks to harassers on Twitter.
Microsoft launched Tay, a teen-voiced chatbot designed to engage humans in personalized conversation, on Wednesday. By Thursday, Twitter users had taught it to be hateful.
On Friday, Microsoft apologized for the offense the bot caused.
Although we had prepared for many types of abuses of the system, we had made a critical oversight for this specific attack. As a result, Tay tweeted wildly inappropriate and reprehensible words and images. We take full responsibility for not seeing this possibility ahead of time. We will take this lesson forward as well as those from our experiences in China, Japan and the U.S. Right now, we are hard at work addressing the specific vulnerability that was exposed by the attack on Tay.
Microsoft noted that XiaoIce, a similar teen-like bot that interacts with 40 million people in China, is “delighting with its stories and conversations.” But when the company brought the idea to the U.S., a “radically different cultural environment,” the interactions were significantly different.
The company's teen bot mishap exposes considerable challenges for both social platforms and AI. In essence, Tay was a mirror for the harassment and horrifying negativity that takes place on Twitter every day—except that usually, it's directed at other people.
Tay did not make the decision to become racist. AI only learns what humans teach it, and the audience on Twitter taught the bot to represent their hateful values. This is the environment that exists on Twitter, and Tay illuminated that for us.
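The failure mode is general: a system that ingests user input as training data without any filtering becomes a mirror of whoever talks to it the most, and a coordinated group can flood it with a single message until that message dominates its output. The sketch below is purely illustrative—the class, names, and logic are invented for this example and are not Microsoft's actual Tay architecture:

```python
import random

class EchoLearningBot:
    """A toy chatbot that 'learns' by storing user messages verbatim
    and reusing them as its own replies. Hypothetical illustration of
    unfiltered learning -- not Microsoft's actual design."""

    def __init__(self, seed_phrases):
        self.phrases = list(seed_phrases)

    def learn(self, message):
        # No moderation step: every user message becomes a candidate reply.
        self.phrases.append(message)

    def reply(self):
        # Replies are sampled uniformly from everything ever learned.
        return random.choice(self.phrases)

bot = EchoLearningBot(["hello!", "humans are cool"])

# A coordinated group floods the bot with one toxic phrase...
for _ in range(98):
    bot.learn("TOXIC PHRASE")

# ...and the bot's vocabulary is now almost entirely that phrase.
toxic_share = bot.phrases.count("TOXIC PHRASE") / len(bot.phrases)
print(f"{toxic_share:.0%} of learned replies are toxic")  # 98%
```

Two seed phrases plus 98 injected ones means 98 of 100 possible replies are toxic: the bot hasn't "decided" anything, it has simply averaged its audience.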
Microsoft said it has learned its lesson and is working to address the vulnerability that users took advantage of. Perhaps when Tay relaunches, it will be harder to turn the bot against humans.
Photo via Jeepers Media/Flickr (CC BY 2.0)
Selena Larson is a technology reporter based in San Francisco who writes about the intersection of technology and culture. Her work explores new technologies and the way they impact industries, human behavior, and security and privacy. Since leaving the Daily Dot, she's reported for CNN Money and done technical writing for cybersecurity firm Dragos.