If you’re a person of color, add self-driving cars to your list of worries that white people don’t have to bother with.
According to a study released last month by Georgia Tech, “Predictive Inequity in Object Detection,” object-detection models of the kind used in self-driving cars were about 5 percent better at detecting pedestrians with lighter skin tones than those with darker ones. For the study, researchers divided images of pedestrians into groups based on the Fitzpatrick scale, a standard classification of human skin tone. The models detected pedestrians from the lighter-skinned group more reliably.
However, the study has not yet been peer-reviewed, and the object-detection models it tested are not ones used in any actual self-driving car, which limits how far its findings can be generalized.
Still, this doesn’t help the case of self-driving cars, which have already been a cause for concern. They are considered accident-prone; Google’s self-driving car had its first accident within two years of its launch. And the racial politics of computer vision are nothing new: in 2015, Google apologized after its Photos app mistakenly labeled photos of Black people as gorillas.
The larger issue at hand is a lack of diversity, as a Vox analysis points out. Machine-learning models learn to detect what they are trained on: if a certain demographic with a certain skin color is underrepresented in the training data, the model may simply fail to recognize those people as human.
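That mechanism is easy to demonstrate on toy data. The sketch below is not the study’s models or any real pedestrian detector; it is a made-up one-dimensional “detector” trained on synthetic features where one group makes up 90 percent of the training set. The numbers (means, threshold) are arbitrary assumptions chosen only to show the effect: the detector learns a template dominated by the majority group and misses the minority group far more often.

```python
# Toy illustration (not the study's models): a detector trained on
# skewed data performs worse on the under-represented group.
import random

random.seed(0)

def make_group(mean, n):
    # Synthetic 1-D "appearance" features for one demographic group.
    return [random.gauss(mean, 0.5) for _ in range(n)]

# Training set: 90% group A, 10% group B -- the imbalance is the point.
train = make_group(1.0, 900) + make_group(3.0, 100)

# "Training" here just means learning the average appearance of a pedestrian.
template = sum(train) / len(train)

def detects(x, threshold=1.0):
    # Fire when a sample looks close enough to the learned template.
    return abs(x - template) < threshold

test_a = make_group(1.0, 1000)
test_b = make_group(3.0, 1000)

recall_a = sum(detects(x) for x in test_a) / len(test_a)
recall_b = sum(detects(x) for x in test_b) / len(test_b)
print(f"group A recall: {recall_a:.2f}, group B recall: {recall_b:.2f}")
```

Because the learned template sits near the majority group’s average, group A is detected most of the time while group B is mostly missed, even though both groups are equally “pedestrian.” Real detectors are vastly more complex, but the failure mode the study describes is the same in kind.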
In this regard, Google isn’t the only company lagging behind. Firms such as Amazon, IBM, and Microsoft have all been flagged for facial recognition algorithms that exhibit racial and gender bias. This makes sense considering the world of technology continues to be dominated by white men.