Average of #FacesInThings yields a very spooky image
Don’t stare into its eyes.
If you watched The Brave Little Toaster as a kid, you’ve probably felt a little more empathy for your household appliances. But if an averaged image of inanimate objects is to be believed, then there’s a little more humanity in these devices than you might expect.
School for Poetic Computation student Robby Kraft decided to take photos of inanimate objects with characteristics that resemble a face and run them through an algorithm for facial detection.
Kraft said the idea came after spending a week studying contemporary artist Jason Salavon, best known for his work manipulating data and images with software to create new art, and Nancy Burson, a creator of computer morphing technology, including the Age Machine and Human Race Machine. “Face blending was a nice intersection of the two,” he said.
To find his subjects, Kraft dove into the popular Instagram tag #FacesInThings. The category houses images that take on a different form thanks to pareidolia, a phenomenon in which people perceive a pattern that isn’t actually there. It’s the same effect that causes people to see the Virgin Mary in a piece of toast.
Kraft ran 2,500 images through the facial detection software—though only about one in 20 was successfully identified. Even with the low success rate, the resulting pattern is an image that looks strikingly like a human face.
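The article doesn't publish Kraft's code, but the basic pipeline it describes—find a face-shaped region in each image, crop it, normalize it to a common size, and average the results—can be sketched as follows. This is an illustrative reconstruction, not Kraft's implementation: the bounding boxes are assumed to be supplied by a detector (in practice something like OpenCV's Haar cascades), and all names here are hypothetical.

```python
import numpy as np

def crop_and_resize(img, box, size=64):
    """Crop a (row, col, height, width) box and nearest-neighbor resize it to size x size."""
    r, c, h, w = box
    patch = img[r:r + h, c:c + w]
    rows = np.arange(size) * h // size   # which source rows to sample
    cols = np.arange(size) * w // size   # which source columns to sample
    return patch[rows][:, cols].astype(np.float64)

def average_faces(images, boxes, size=64):
    """Average the detected face regions into a single composite image."""
    patches = [crop_and_resize(img, box, size) for img, box in zip(images, boxes)]
    return np.mean(patches, axis=0)

# Toy demo with random grayscale "images" and hand-picked boxes standing in
# for real detector output.
rng = np.random.default_rng(0)
imgs = [rng.random((100, 100)) for _ in range(5)]
boxes = [(10, 10, 40, 40)] * 5
composite = average_faces(imgs, boxes)
print(composite.shape)  # (64, 64)
```

Averaging many aligned crops washes out per-image detail and leaves only what the crops have in common, which is why the composite over successful detections reads as a face.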
“I was a little impressed that it worked at all,” he told the Daily Dot. Prior to running the program on the expressive objects, Kraft did a test run using images tagged with #selfie. The algorithm, which identifies eye and mouth locations, processed one in three of those images of actual humans.
“I’m often pushing software libraries beyond their traditional use, so I frequently get null responses,” Kraft explained. “Technically it’s a failure of the face detection algorithm when it identifies a face in a #facesinthings image.”
Because pareidolia is a purely human phenomenon, it’s surprising that the algorithm would identify anything resembling enough of a face to process it. “I think that the degree with which pareidolia relies on human imagination prevents it from being fully realized on today’s computers,” Kraft said, “though that might be changing!”
Kraft also ran the algorithm on photos that were nothing but noise. While it’s considerably less defined, the result still bears some resemblance to a human face, with dark areas where the eyes and mouth would be and a bright vertical strip for a nose. Any definition of a face’s edge is missing, but the most identifiable attributes are present and about where one would expect them to be.
After running the experiment, Kraft is interested in learning if there are varying levels of a pareidolia threshold across different cultures, and what might influence it. “By looking at cartoon faces and caricatures, do we train ourselves to see more faces in things, and does this vary between isolated groups?” he said. “Are there any correlations between active imagination and pareidolia?”
Computer algorithms often have a very narrow definition of what they’re supposed to do, which leads to mostly expected outcomes. That’s why Kraft decided to push the limits of the system to see what happened. He said if there’s anything to be learned from his experiment, it’s to “run algorithms in ways they weren’t meant to be.”
AJ Dellinger is a seasoned technology writer whose work has appeared in Digital Trends, International Business Times, and Newsweek. In 2018, he joined Gizmodo as the nights and weekend editor.