Voice recognition technology promises to make our lives easier, letting us control everything from our phones to cars to home appliances. Just talk to our tech, and it works.
As the tech becomes more advanced, there’s another issue that’s not as obvious as a failure to process simple requests: Voice recognition technology doesn’t recognize women’s voices as well as men’s.
According to research from Rachael Tatman, a linguistics researcher and National Science Foundation Graduate Research Fellow at the University of Washington, Google’s speech recognition software has a gender bias.
She found significant differences in how Google’s speech recognition software auto-captions videos on YouTube: it was much more accurate on male voices than on female ones. She called the results “deeply disturbing.”
Tatman said she hand-checked more than 1,500 words from annotations across 50 different videos and discovered a glaring bias.
In her post, she wrote:

> It’s not that there’s a consistent but small effect size, either; 13% is a pretty big effect. The Cohen’s d was 0.7, which means, in non-math-speak, that if you pick a random man and a random woman from my sample, there’s an almost 70% chance the transcriptions will be more accurate for the man. That’s pretty striking.
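Tatman’s effect-size figures can be sanity-checked with a short sketch. Cohen’s d is the difference between two group means expressed in pooled-standard-deviation units, and under a normality assumption it maps to the “probability of superiority” she describes (the function and parameter names below are illustrative, not from her post):

```python
import math

def cohens_d(mean_a, mean_b, sd_a, sd_b, n_a, n_b):
    """Cohen's d: difference of two group means in pooled-SD units."""
    pooled_var = ((n_a - 1) * sd_a**2 + (n_b - 1) * sd_b**2) / (n_a + n_b - 2)
    return (mean_a - mean_b) / math.sqrt(pooled_var)

def probability_of_superiority(d):
    """P(a random draw from group A beats a random draw from group B),
    assuming both groups are normal: Phi(d / sqrt(2)) = 0.5 * (1 + erf(d / 2))."""
    return 0.5 * (1 + math.erf(d / 2))

# d = 0.7 corresponds to roughly a 69% chance, which matches
# Tatman's "almost 70%" figure.
print(round(probability_of_superiority(0.7), 2))  # → 0.69
```

This is why a d of 0.7 is considered a medium-to-large effect: it shifts a coin-flip comparison to nearly seven in ten.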
“Language varies in systematic ways depending on how you’re talking,” Tatman said in an interview. Differences could be based on gender, dialect, and other geographic and physical attributes that factor into how our voices sound.
To train speech recognition software, developers use large datasets, either recorded on their own, or provided by other linguistic researchers. And sometimes, these datasets don’t include diverse speakers.
“Generally, the people who are doing the training aren’t the people whose voices are in the dataset,” Tatman said. “You’ll take a dataset that’s out there that has a lot of different people’s voices, and it will work well for a large variety of people. I think the people who don’t have socio-linguistic knowledge haven’t thought that the demographic of people speaking would have an effect. I don’t think it’s maliciousness, I just think it wasn’t considered.”
A representative for Google pointed us to a paper published last year that describes how Google built a speech recognizer specifically for children. In the paper, the company notes that speech recognition performed better for females. Additionally, Google said it trains voice recognition across genders, ages, and accents.
Google’s full statement:

> It’s hard to directly address the research in the article you shared, since the results reflect a relatively small sample size, without information on how the data was sampled or the number of speakers represented. That said, we recognize that it’s extremely important to have a diverse training set; our speech technology is trained on speakers across genders, ages, and accents. In a research paper we published last year (Liao et al. 2015), we found that our speech recognizer performed better for females, with 10% lower Word Error Rate. (In that paper we actually measured 20% higher Word Error Rate for kids—which we’ve been working hard to improve since then.)
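Word Error Rate, the metric Google cites, is the word-level edit distance (substitutions, insertions, deletions) between a transcription and the reference, divided by the length of the reference. A minimal sketch of the standard computation (not Google’s implementation):

```python
def word_error_rate(reference: str, hypothesis: str) -> float:
    """WER = (substitutions + insertions + deletions) / reference word count,
    computed via word-level Levenshtein distance."""
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = edit distance between ref[:i] and hyp[:j]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,       # deletion
                           dp[i][j - 1] + 1,       # insertion
                           dp[i - 1][j - 1] + cost)  # substitution or match
    return dp[len(ref)][len(hyp)] / len(ref)
```

Note that “10% lower Word Error Rate” is a relative difference, e.g. a WER of 0.09 versus 0.10, not a 10-percentage-point gap.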
Speech recognition has struggled to recognize female voices for years. Tatman cites a number of studies in her post, including findings that voice tech in the medical field works better for men, and that it even performs better for young boys than for girls.
When the auto industry began implementing voice commands in more vehicles, women drivers struggled to tell their cars what to do, while their male counterparts had fewer problems getting vehicles to operate properly.
It’s a similar problem to biases in algorithms that control what we see on the web—women get served ads for lower-paying jobs, criminal records turn up higher in searches for names commonly associated with someone of African-American descent, and the software some law enforcement agencies use is biased against people of color. And this isn’t the first time Google’s automated systems have failed; last year, the company came under fire when its image recognition technology labeled an African-American woman a “gorilla.”
Tatman said the best first step to address issues in voice tech bias would be to build training sets that are stratified. Equal numbers of genders, different races, socioeconomic statuses, and dialects should be included, she said.
Automated technology is developed by humans, so our human biases can seep into the software and tools we create supposedly to make our lives easier. But when systems fail to account for human bias, the results can be unfair and potentially harmful to groups underrepresented in the fields in which these systems are built.
Selena Larson is a technology reporter based in San Francisco who writes about the intersection of technology and culture. Her work explores new technologies and the way they impact industries, human behavior, and security and privacy. Since leaving the Daily Dot, she's reported for CNN Money and done technical writing for cybersecurity firm Dragos.