In developing artificial intelligence systems, one must account for the trolls. That’s the lesson Microsoft has learned after its AI chatbot Tay was taught how to be a racist, sexist bigot thanks to harassers on Twitter.
Microsoft launched Tay on Wednesday, pitching the teen-voiced chatbot as an AI designed to provide personalized engagement through casual conversation. By Thursday, Twitter users had taught it to be hateful.
On Friday, Microsoft apologized for the offense the bot caused:

“Although we had prepared for many types of abuses of the system, we had made a critical oversight for this specific attack. As a result, Tay tweeted wildly inappropriate and reprehensible words and images. We take full responsibility for not seeing this possibility ahead of time. We will take this lesson forward as well as those from our experiences in China, Japan and the U.S. Right now, we are hard at work addressing the specific vulnerability that was exposed by the attack on Tay.”
Microsoft noted that XiaoIce, a similar teen-like bot that interacts with 40 million people in China, is “delighting with its stories and conversations.” But when the company brought the idea to the U.S., a “radically different cultural environment,” the interactions were significantly different.
The company’s teen bot mishap demonstrates considerable issues and challenges with social platforms and AI. In essence, Tay was a mirror for the harassment and horrifying negativity that takes place on Twitter every day; the only difference is that it’s usually directed at other people.
Tay did not decide to become racist. AI of this kind learns only what humans teach it, and the audience on Twitter taught the bot to reflect their hateful values. That environment exists on Twitter every day; Tay simply illuminated it for us.
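Microsoft hasn’t published how Tay’s learning pipeline actually worked, but the failure mode itself is easy to demonstrate. The toy sketch below, in which every name and design detail is hypothetical, shows a bot that learns word transitions from every message it sees; with no moderation step, its replies are simply a sample of whatever its loudest users feed it.

```python
import random
from collections import defaultdict

class NaiveChatbot:
    """Toy bot that learns word transitions from every message it sees.

    Purely illustrative: Tay's real pipeline is not public. The point is
    that without a moderation step, the model's output distribution is
    whatever its users choose to feed it.
    """

    def __init__(self) -> None:
        # Maps each word to every word users have followed it with.
        self.transitions = defaultdict(list)

    def learn(self, message: str) -> None:
        words = message.lower().split()
        for current, nxt in zip(words, words[1:]):
            self.transitions[current].append(nxt)

    def reply(self, prompt: str, max_words: int = 10) -> str:
        word = prompt.lower().split()[-1].strip("?!.,")
        out = []
        for _ in range(max_words):
            options = self.transitions.get(word)
            if not options:
                break
            word = random.choice(options)  # echoes whatever it was taught
            out.append(word)
        return " ".join(out)

bot = NaiveChatbot()
# A coordinated group only needs volume: repeated toxic input
# swamps the single benign example in the learned transitions.
for _ in range(100):
    bot.learn("humans are terrible")
bot.learn("humans are wonderful")
print(bot.reply("what do you think humans are?"))  # almost always "terrible"
```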
Microsoft said it has learned its lesson and is working to close the vulnerability that users exploited. Perhaps when Tay relaunches, it will be harder to turn the bot against humans.
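Microsoft hasn’t detailed what that fix will look like. One standard mitigation, sketched below as a continuation of the hypothetical toy model above, is to gate learning behind a content check so abusive input never reaches the training data; a real system would pair this with a trained toxicity classifier and rate limits rather than a keyword list.

```python
# Continuing the toy sketch above (an assumed mitigation pattern,
# not Microsoft's actual fix): refuse to learn from unsafe messages.
BLOCKLIST = {"terrible"}  # placeholder terms; real filters are far broader

def is_safe(message: str) -> bool:
    """Crude keyword screen; a production system would use a trained
    toxicity classifier and rate-limit repeated identical phrases."""
    return not (set(message.lower().split()) & BLOCKLIST)

def guarded_learn(bot: NaiveChatbot, message: str) -> None:
    # Only learn from messages that pass the safety check.
    if is_safe(message):
        bot.learn(message)

safe_bot = NaiveChatbot()
for msg in ["humans are terrible", "humans are wonderful"]:
    guarded_learn(safe_bot, msg)
print(safe_bot.reply("humans are"))  # prints "wonderful"; the abuse never got in
```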
Photo via Jeepers Media/Flickr (CC BY 2.0)
Selena Larson is a technology reporter based in San Francisco who writes about the intersection of technology and culture. Her work explores new technologies and the way they impact industries, human behavior, and security and privacy. Since leaving the Daily Dot, she's reported for CNN Money and done technical writing for cybersecurity firm Dragos.