Why Google’s algorithms suggested an offensive search result about Muslims

The computer shows you what it’s learned.

Selena Larson

When writer Hind Makki Googled “American Muslims report terrorism” while researching a blog post about a comment made during Thursday’s Democratic debate, she noticed something peculiar and racist about the search results. 

Google, which suggests alternative queries when it thinks a search term is misspelled or unusual, asked if she meant “American Muslims support terrorism.” 

It wasn’t just a fluke; Quartz, which first reported Makki’s discovery, duplicated the search suggestion. Google has since removed it, so the suggestion no longer appears.

But why did it show up in the first place? As Quartz explained, it’s not an error. Google’s search technology looks at the most common queries from people who searched for similar things and suggests an alternative if it thinks you mistyped your own search. 

Google’s algorithms learn what people search for by examining anonymized search query data, in an attempt to make Googling easier and smarter. Among the factors taken into account are common misspellings, the location the person is searching from, how fresh a site’s content is, and the words that commonly come before or after the search query. 

According to Google, the company’s algorithms rely on more than 200 signals to determine suggestions. 

Apparently, Google’s algorithms learned that “support” appears far more often, both in the content Google indexes across the Web and in the queries people type into its search box, so it suggested that word as an alternative, assuming it was what Makki was looking for. 
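To illustrate the general idea, here is a minimal, purely frequency-based sketch in Python. It is not Google’s system: the query log, the counts, and the suggest function are invented for this example, and they stand in for the hundreds of richer signals described above.

```python
from collections import Counter

# Toy query log. In a real system this would be billions of anonymized
# searches; the phrases and counts below are invented for illustration only.
QUERY_LOG = Counter({
    "american muslims support terrorism": 900,  # hypothetical count
    "american muslims report terrorism": 40,    # hypothetical count
    "muslims in america": 500,                  # hypothetical count
})

def suggest(query, min_ratio=5):
    """Return a logged query that differs by one word and is far more common.

    This is a frequency-only sketch. A production system weighs hundreds of
    signals (spelling, location, freshness, surrounding words), not raw counts.
    """
    words = query.lower().split()
    original_count = QUERY_LOG.get(query.lower(), 0)
    best = None
    for candidate, count in QUERY_LOG.items():
        cand_words = candidate.split()
        if len(cand_words) != len(words):
            continue
        # The candidate must differ from the query in exactly one word.
        diffs = sum(1 for a, b in zip(words, cand_words) if a != b)
        if diffs == 1 and count >= min_ratio * max(original_count, 1):
            if best is None or count > QUERY_LOG[best]:
                best = candidate
    return best

print(suggest("american muslims report terrorism"))
# Prints "american muslims support terrorism", because the toy log makes
# that phrasing far more common than the query that was actually typed.
```

The point of the sketch is only that a suggester driven by query frequency will echo whatever people type most often, which is how an offensive suggestion can surface without anyone programming it deliberately.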

“I thought it was hilarious, but also sad and immediately screencapped it,” Makki told Quartz. “I know it’s not Google’s ‘fault,’ but it goes to show just how many people online search for ‘Muslims support terrorism,’ though the reality on the ground is the opposite of that.”

The automatically generated suggestion is yet another example of computers and machine learning producing offensive results. Last year, Google’s image recognition software tagged photos of Black people with the racist term “gorillas.” 

Computers aren’t smart enough to figure out whether a result they provide is offensive. Their actions are based entirely on the ones humans take in the first place.

H/T Quartz | Photo via Blake Patterson / Flickr (CC BY 2.0) | Remix by Max Fleishman
