
Average of #FacesInThings yields a very spooky image

Don't stare into its eyes.


AJ Dellinger

Tech

Posted on Nov 14, 2015   Updated on May 27, 2021, 3:49 pm CDT

If you watched The Brave Little Toaster as a kid, you’ve probably felt a little more empathy for your household appliances. But if an averaged image of inanimate objects is to be believed, then there’s a little more humanity in these devices than you might expect.

School for Poetic Computation student Robby Kraft decided to collect photos of inanimate objects with face-like features and run them through a facial detection algorithm.

Kraft said the idea came after spending a week studying contemporary artist Jason Salavon, best known for his work manipulating data and images with software to create new art, and Nancy Burson, a creator of computer morphing technology, including the Age Machine and Human Race Machine. “Face blending was a nice intersection of the two,” he said. 

To find his subjects, Kraft dove into the popular Instagram tag #FacesInThings. The category houses images that take on a different form thanks to pareidolia, a phenomenon in which people perceive a meaningful pattern, often a face, in something random or unrelated. It’s the same effect that causes people to see the Virgin Mary in a piece of toast.

Kraft ran 2,500 images through the facial detection software—though only about one in 20 was successfully identified. Even with the low success rate, averaging the detected faces produces an image that looks strikingly like a human face.
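The article doesn’t specify Kraft’s exact tooling, but the basic pipeline can be sketched with OpenCV’s stock Haar cascade face detector and a simple pixel average of the detected regions (no landmark alignment). The folder name, output file, and image size below are placeholders, not details from Kraft’s project.

```python
# Sketch: detect face-like regions in a folder of images and average them.
# Assumes OpenCV's bundled Haar cascade; paths and sizes are hypothetical.
import glob

import cv2
import numpy as np

CASCADE = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)
SIZE = (128, 128)  # every detected face is resized to this common size

accumulator = np.zeros(SIZE, dtype=np.float64)
count = 0

for path in glob.glob("faces_in_things/*.jpg"):  # hypothetical folder of tagged photos
    img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    if img is None:
        continue
    faces = CASCADE.detectMultiScale(img, scaleFactor=1.1, minNeighbors=5)
    for (x, y, w, h) in faces:
        face = cv2.resize(img[y:y + h, x:x + w], SIZE)
        accumulator += face.astype(np.float64)
        count += 1

if count:
    average = (accumulator / count).astype(np.uint8)
    cv2.imwrite("average_face.png", average)
    print(f"Averaged {count} detected faces")
```

Because the faces aren’t aligned on eye and mouth landmarks, a plain average like this comes out blurrier than the Salavon- and Burson-style blends that inspired the project, but the dark eye and mouth regions still emerge.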



“I was a little impressed that it worked at all,” he told the Daily Dot. Prior to running the program on the expressive objects, Kraft did a test run using images tagged with #selfie. The algorithm, which identifies the positions of the eyes and mouth, successfully processed one in three of those images of actual humans.

“I’m often pushing software libraries beyond their traditional use, so I frequently get null responses,” Kraft explained. “Technically it’s a failure of the face detection algorithm when it identifies a face in a #facesinthings image.”

Because pareidolia is a purely human phenomenon, it’s surprising that the algorithm would identify anything resembling enough of a face to process it. “I think that the degree to which pareidolia relies on human imagination prevents it from being fully realized on today’s computers,” Kraft said, “though that might be changing!”

Kraft also ran the algorithm on photos that were nothing but noise. While it’s considerably less defined, the result still bears some resemblance to a human face, with dark areas where the eyes and mouth would be and a bright vertical strip for a nose. Any definition of a face’s edge is missing, but the most identifiable attributes are present and about where one would expect them to be.
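A comparable noise test can be sketched the same way, reusing the placeholder detector above; false positives in pure noise are rare, so many frames may be needed before anything accumulates.

```python
# Sketch: feed random-noise frames to the same (assumed) Haar cascade detector
# and average whatever false positives it finds, echoing the noise experiment.
import cv2
import numpy as np

CASCADE = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)
SIZE = (128, 128)

rng = np.random.default_rng(0)
accumulator = np.zeros(SIZE, dtype=np.float64)
count = 0

for _ in range(10_000):  # arbitrary number of noise frames
    noise = rng.integers(0, 256, size=(256, 256), dtype=np.uint8)
    for (x, y, w, h) in CASCADE.detectMultiScale(noise, 1.1, 3):
        accumulator += cv2.resize(noise[y:y + h, x:x + w], SIZE).astype(np.float64)
        count += 1

if count:
    cv2.imwrite("noise_average.png", (accumulator / count).astype(np.uint8))
print(f"{count} face-like detections in noise")
```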

After running the experiment, Kraft is interested in learning whether the threshold for pareidolia varies across cultures, and what might influence it. “By looking at cartoon faces and caricatures, do we train ourselves to see more faces in things, and does this vary between isolated groups?” he said. “Are there any correlations between active imagination and pareidolia?”

Computer algorithms often have a very narrow definition of what they’re supposed to do, which leads to mostly predictable outcomes. That’s why Kraft decided to push the limits of the system to see what happened. He said if there’s anything to be learned from his experiment, it’s to “run algorithms in ways they weren’t meant to be.”

H/T Flowing Data | Photo via Juan de Dios Santander Vela/Flickr (CC BY SA 2.0)

*First Published: Nov 14, 2015, 3:35 pm CST