Man wearing headphones and using phone with facial recognition overlay (Artem Oleshko/Shutterstock, licensed)

AI ‘gaydar’ is now a thing—and it can tell if you’re gay based on your face

Some see this as dangerous.


Kris Seavers


Published Sep 8, 2017   Updated Sep 8, 2017, 1:37 pm CDT

Scientists have created artificial intelligence that can guess whether someone is gay based on a picture of their face—which has some worried about the AI’s future use and implications.

In the recently published study, researchers from Stanford University analyzed more than 35,000 faces of men and women from a dating website. Using deep neural networks to extract features from the images, the researchers created an algorithm that could correctly tell whether men were straight or gay 81 percent of the time, and whether women were gay or straight 74 percent of the time.

The study, which found that human “gaydar” was much less reliable than the machine’s algorithm, said that gay men and women tended to have “gender-atypical” features.

“The data also identified certain trends, including that gay men had narrower jaws, longer noses, and larger foreheads than straight men, and that gay women had larger jaws and smaller foreheads compared to straight women,” the Guardian reported.

The researchers said the study provides “strong support” for the theory that sexual orientation is related to exposure to certain hormones before birth, and that therefore people are born gay.

“The machine’s lower success rate for women also could support the notion that female sexual orientation is more fluid,” the Guardian reported.

The study—which analyzed only white people's faces and did not consider transgender or bisexual people—raised questions about the ethics of facial recognition technology and how it could violate LGBT people's privacy.

Nick Rule, an associate professor of psychology at the University of Toronto who has published research on the science of gaydar, told the Guardian that this new AI capability is “certainly unsettling.”

With billions of images of people’s faces stored on social media sites and in government databases, it’s not hard to imagine how something could go wrong. 

“If you can start profiling people based on their appearance, then identifying them and doing horrible things to them, that’s really bad,” Rule told the Guardian.

But both the researchers and Rule said it's still important to develop and test this technology so that governments and companies can proactively consider its dangers and implement regulations.

“What the authors have done here is to make a very bold statement about how powerful this can be,” Rule told the Guardian. “Now we know that we need protections.”

H/T the Guardian
