Don’t stare into its eyes.
If you watched The Brave Little Toaster as a kid, you’ve probably felt a little more empathy for your household appliances. But if an averaged image of inanimate objects is to be believed, then there’s a little more humanity in these devices than you might expect.
School for Poetic Computation student Robby Kraft decided to collect photos of inanimate objects with characteristics that resemble a face and run them through a facial detection algorithm.
Kraft said the idea came after spending a week studying contemporary artist Jason Salavon, best known for his work manipulating data and images with software to create new art, and Nancy Burson, a creator of computer morphing technology, including the Age Machine and Human Race Machine. “Face blending was a nice intersection of the two,” he said.
To find his subjects, Kraft dove into the popular Instagram tag #FacesInThings. The category houses images that take on a different form thanks to pareidolia, a phenomenon in which people perceive a pattern that isn’t actually there. It’s the same effect that causes people to see the Virgin Mary in a piece of toast.
Kraft ran 2,500 images through the facial detection software—though only about one in 20 was successfully identified. Even with the low success rate, the resulting pattern is an image that looks strikingly like a human face.
“I was a little impressed that it worked at all,” he told the Daily Dot. Before running the program on the expressive objects, Kraft did a test run using images tagged with #selfie. The algorithm, which identifies the locations of the eyes and mouth, processed one in three of those images of actual humans.
“I’m often pushing software libraries beyond their traditional use, so I frequently get null responses,” Kraft explained. “Technically it’s a failure of the face detection algorithm when it identifies a face in a #facesinthings image.”
Because pareidolia is a purely human phenomenon, it’s surprising that the algorithm would identify anything resembling enough of a face to process it. “I think that the degree with which pareidolia relies on human imagination prevents it from being fully realized on today’s computers,” Kraft said, “though that might be changing!”
Kraft also ran the algorithm on photos that were nothing but noise. While considerably less defined, the result still bears some resemblance to a human face, with dark areas where the eyes and mouth would be and a bright vertical strip for a nose. Any definition of a face’s edge is missing, but the most identifiable attributes are present, and roughly where one would expect them to be.
After running the experiment, Kraft is interested in learning if there are varying levels of a pareidolia threshold across different cultures, and what might influence it. “By looking at cartoon faces and caricatures, do we train ourselves to see more faces in things, and does this vary between isolated groups?” he said. “Are there any correlations between active imagination and pareidolia?”
Computer algorithms often have a very narrow definition as to what they’re supposed to do, which leads to mostly expected outcomes. That’s why Kraft decided to push the limits of the system to see what happened. He said if there’s anything to be learned from his experiment, it’s to “run algorithms in ways they weren’t meant to be.”
AJ Dellinger is a seasoned technology writer whose work has appeared in Digital Trends, International Business Times, and Newsweek. In 2018, he joined Gizmodo as the nights and weekend editor.