Researchers at the Massachusetts Institute of Technology have trained an artificial intelligence algorithm to be a psychopath, according to the BBC.
The algorithm, Norman—named after Norman Bates, the infamous character from Hitchcock’s Psycho—was deliberately trained by researchers to have dark thoughts. They said they wanted to see how training an AI on data from “the dark corners of the net” would shape its worldview.
Of course, there are few darker places on the web than Reddit. Researchers say that’s where they had Norman look at images, rather than the family-friendly, happy images that algorithms are usually trained on.
The experiment worked. When the researchers showed Norman Rorschach inkblots, its interpretations came out dark and disturbing. For example, where a standard image-captioning AI saw a wedding cake on a table, Norman saw a man getting killed by a speeding driver. And where a standard AI saw a “black and white photo of a red and white umbrella,” Norman saw a man getting electrocuted while attempting to cross a busy street.
The experiment demonstrates that an AI’s biases come from its training data: just as Norman was trained to see death and suffering, other AIs can be trained into racist or sexist behavior.
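The idea that identical models diverge purely because of their training data can be sketched with a toy example. This is not MIT’s actual code—the corpora and function names below are hypothetical—but it shows the mechanism: two copies of the same next-word model, fed different caption sets, complete the same prompt differently.

```python
import random

def train_bigrams(corpus):
    """Build a next-word lookup table from a list of caption sentences."""
    model = {}
    for sentence in corpus:
        words = sentence.split()
        for current, following in zip(words, words[1:]):
            model.setdefault(current, []).append(following)
    return model

def generate(model, start, max_words=8, seed=0):
    """Walk the lookup table from a start word to produce a caption."""
    rng = random.Random(seed)
    words = [start]
    while words[-1] in model and len(words) < max_words:
        words.append(rng.choice(words := words, model[words[-1]])[1] if False else rng.choice(model[words[-1]]))
    return " ".join(words)

# Hypothetical toy corpora standing in for "standard" vs. "dark" training data.
neutral_captions = [
    "a photo of a red umbrella",
    "a photo of a wedding cake on a table",
]
dark_captions = [
    "a photo of a man crossing a dangerous street",
    "a photo of a speeding car on a dark street",
]

standard_model = train_bigrams(neutral_captions)
norman_model = train_bigrams(dark_captions)

# Same architecture, same prompt—different training data, different output.
print(generate(standard_model, "a"))
print(generate(norman_model, "a"))
```

The two models share every line of code; only the data differs, yet “speeding” and “dangerous” can only ever appear in the dark model’s captions, because a model of this kind can only recombine what it was shown.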
In May 2016, a ProPublica report exposed a computer algorithm with a racial bias against Black prisoners, rating those inmates as more likely candidates for recidivism than their white counterparts, even when the white inmate had a longer criminal record. That same year, Microsoft accidentally created a hateful Twitter bot after users trained it to be racist and sexist. Another 2016 study showed that software trained on Google News became sexist as a result of the data it was learning from. Norman’s training now provides further evidence of how easy it is to rig AI.
Next, the researchers have set out to prove that AI can be re-trained, according to Geek.com. They plan to do that by having regular folks submit new answers to MIT’s test images through a Google form.
Editor’s note: This article has been revised to remove similarities to the BBC’s original report.
Tess Cagle is a reporter who focuses on politics, lifestyle, and streaming entertainment. Her work has appeared in the New York Times, Texas Monthly, the Austin American-Statesman, Damn Joan, and Community Impact Newspaper. She’s also a portrait, events, and live music photographer in Central Texas.