Don’t stare into its eyes.
If you watched The Brave Little Toaster as a kid, you’ve probably felt a little more empathy for your household appliances. But if an averaged image of inanimate objects is to be believed, then there’s a little more humanity in these devices than you might expect.
School for Poetic Computation student Robby Kraft decided to gather photos of inanimate objects whose features resemble a face and run them through a facial detection algorithm.
Kraft said the idea came after spending a week studying contemporary artist Jason Salavon, best known for his work manipulating data and images with software to create new art, and Nancy Burson, a creator of computer morphing technology, including the Age Machine and Human Race Machine. “Face blending was a nice intersection of the two,” he said.
To find his subjects, Kraft dove into the popular Instagram tag #FacesInThings. The category houses images that take on a different form thanks to pareidolia, a phenomenon in which people perceive a pattern that isn’t actually there. It’s the same effect that causes people to see the Virgin Mary in a piece of toast.
Kraft ran 2,500 images through the facial detection software, though only about one in 20 was successfully identified. Even with the low success rate, the resulting composite looks strikingly like a human face.
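Kraft hasn't named the library he used, but the general pipeline he describes, detect a face, crop to it, normalize the crops to a common frame, and average, can be sketched in a few lines. The `detect_face` function below is a deliberately crude placeholder (a "center darker than surround" rule) standing in for a real detector such as OpenCV's Haar cascades; everything about it is illustrative, not Kraft's method.

```python
import numpy as np

def detect_face(img):
    """Placeholder for a real face detector (e.g., OpenCV's Haar cascade).
    Returns an (x, y, w, h) box, or None when no 'face' is found.
    Crude stand-in rule: accept images whose center is darker than average."""
    h, w = img.shape
    center = img[h // 4: 3 * h // 4, w // 4: 3 * w // 4].mean()
    if center < img.mean():
        return (w // 4, h // 4, w // 2, h // 2)
    return None

def resize_nn(img, size):
    """Nearest-neighbor resize to a (size, size) frame using pure NumPy."""
    h, w = img.shape
    rows = np.arange(size) * h // size
    cols = np.arange(size) * w // size
    return img[np.ix_(rows, cols)]

def average_detected(images, size=64):
    """Crop every detected 'face', resize to a common frame, then average.
    Images the detector rejects are skipped, as most #FacesInThings
    images were in Kraft's experiment."""
    crops = []
    for img in images:
        box = detect_face(img)
        if box is None:
            continue
        x, y, w, h = box
        crops.append(resize_nn(img[y:y + h, x:x + w], size))
    return np.mean(crops, axis=0) if crops else None

# Tiny demo on random grayscale "photos"
rng = np.random.default_rng(0)
photos = [rng.random((80, 100)) for _ in range(40)]
avg = average_detected(photos)
```

Aligning every crop to the same frame before averaging is what makes the composite coherent: without that normalization step, eyes and mouths detected at different positions and scales would smear into an unstructured blur.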
“I was a little impressed that it worked at all,” he told the Daily Dot. Prior to running the program on the expressive objects, Kraft did a test run using images tagged with #selfie. The algorithm, which locates the positions of eyes and mouths, processed one in three of those images of actual humans.
“I’m often pushing software libraries beyond their traditional use, so I frequently get null responses,” Kraft explained. “Technically it’s a failure of the face detection algorithm when it identifies a face in a #facesinthings image.”
Because pareidolia is a purely human phenomenon, it’s surprising that the algorithm would identify anything resembling enough of a face to process it. “I think that the degree with which pareidolia relies on human imagination prevents it from being fully realized on today’s computers,” Kraft said, “though that might be changing!”
Kraft also ran the algorithm on photos that were nothing but noise. While it’s considerably less defined, the result still bears some resemblance to a human face, with dark areas where the eyes and mouth would be and a bright vertical strip for a nose. Any definition of a face’s edge is missing, but the most identifiable attributes are present and roughly where one would expect them to be.
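The noise result can be reproduced in miniature: feed random noise to any detector and average only the frames it accepts, and the average takes on the shape of the detector’s own internal template. The sketch below uses NumPy with a toy "center darker than surround" rule standing in for a real face detector; the specific rule is an assumption for illustration, not the one Kraft used.

```python
import numpy as np

def center_darker(img):
    """Toy detector stand-in: accept noise frames whose central region
    is darker than the frame's overall average."""
    h, w = img.shape
    return img[h // 4: 3 * h // 4, w // 4: 3 * w // 4].mean() < img.mean()

rng = np.random.default_rng(1)
frames = (rng.random((32, 32)) for _ in range(2000))
accepted = [img for img in frames if center_darker(img)]

# Averaging only the accepted frames recovers the detector's bias:
# the composite is systematically darker in the center, exactly the
# structure the acceptance rule was looking for.
avg = np.mean(accepted, axis=0)
```

Because every accepted frame individually has a darker center, the composite must too; the same logic explains why Kraft’s noise average shows dark eye and mouth regions, since those are the features his face detector was built to find.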
After running the experiment, Kraft is interested in learning if there are varying levels of a pareidolia threshold across different cultures, and what might influence it. “By looking at cartoon faces and caricatures, do we train ourselves to see more faces in things, and does this vary between isolated groups?” he said. “Are there any correlations between active imagination and pareidolia?”
Computer algorithms often have a very narrow definition of what they’re supposed to do, which leads to mostly expected outcomes. That’s why Kraft decided to push the limits of the system and see what happened. He said if there’s anything to be learned from his experiment, it’s to “run algorithms in ways they weren’t meant to be.”
AJ Dellinger is a seasoned technology writer whose work has appeared in Digital Trends, International Business Times, and Newsweek. In 2018, he joined Gizmodo as the nights and weekend editor.