A new website suggests we might not be ready to know what AI thinks of us.
The project, called ImageNet Roulette, allows users to upload their photos and see how their faces are categorized by machine learning software trained to identify humans.
The team behind ImageNet Roulette says the project’s aim is to expose the many issues with such classifications, which are based on datasets with “problematic, offensive and bizarre categories.”
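The classification step the article describes — a model scoring an uploaded photo against a fixed set of ImageNet "person" categories and returning the top label — can be sketched in miniature. The category names and the hand-supplied scores below are illustrative assumptions, not ImageNet Roulette's actual taxonomy or model; in the real system the scores would come from a neural network run on the photo.

```python
import math

# Illustrative stand-ins for ImageNet's "person" synsets. The real
# taxonomy (drawn from WordNet) contains thousands of categories,
# including the offensive labels the article describes.
CATEGORIES = ["ballplayer", "newsreader", "scuba diver", "groom"]

def classify(logits):
    """Turn raw per-category scores (logits) into a label and confidence.

    Applies a numerically stable softmax, then picks the most
    probable category — the same last step a classifier like the
    one behind ImageNet Roulette would perform.
    """
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    probs = [e / total for e in exps]
    idx = probs.index(max(probs))
    return CATEGORIES[idx], probs[idx]

# The highest score (2.0, index 3) wins, so this photo would be
# labeled "groom" regardless of whether that label fits the person.
label, confidence = classify([1.2, 0.3, -0.5, 2.0])
```

The sketch also hints at why the results go wrong: the model must answer with *some* category from its fixed list, so biased or bizarre labels in the training taxonomy surface directly in the output.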
“AI classifications of people are rarely made visible to the people being classified,” the website states. “ImageNet Roulette provides a glimpse into that process—and to show the ways things can go wrong.”
Once users began posting their results to Twitter, those issues didn't take long to surface.
Many photos of Black people, for example, were labeled with outdated and offensive terms like “Negro” and “Negroid.” The terms were sometimes applied to photos of white people, too.
This the last thing I expected: pic.twitter.com/MVM1j5I8Nk— the Godfrogger (@myuncleisadj) September 16, 2019
Similarly, the software categorized at least one Black person as a “clown” wearing white makeup.
Man who you tellin? pic.twitter.com/RmVo5G2bZG— Adrian T. WOMACK (@weauxmaque) September 16, 2019
The AI described one woman who uploaded her own photo as an unmarried girl and a likely “virgin.”
The AI categorized a photo uploaded by a man of his 16-year-old self as a “rape suspect.”
I uploaded this photo from when I was at a Florida Marlins game as a 16 year old, and uh... pic.twitter.com/7AtTADOrbZ— Will Brown (@WdB11) September 16, 2019
Even Democratic presidential candidate Joe Biden wasn’t safe. The AI labeled the former vice president as a “klansman,” while President Donald Trump was categorized as a “centrist.”
Those interested in getting roasted by AI can visit the site and upload a picture or use their webcam. The site's creators say they do not store any of the uploaded images.
Mikael Thalen is a tech and security reporter based in Seattle, covering social media, data breaches, hackers, and more.