The futures of many prison inmates depend on racially biased algorithms
Prisons use algorithms to predict recidivism, but the code is biased against black offenders.
It sounds like something out of Minority Report: software that predicts how likely people are to commit future crimes and assigns them scores that judges and cops use to determine sentences and bond payments.
But these algorithms are real and widely used in the United States—and according to a new ProPublica report, this software is biased against African Americans.
The scores and data produced by risk-and-needs assessment tools like the Correctional Offender Management Profiling for Alternative Sanctions (COMPAS), which ProPublica investigated, are based on a series of questions that offenders answer as they move through the criminal-justice system. (In some cases, the data also come from their arrest records.)
There are no questions about race, but the surveys include inquiries like “How many of your friends/acquaintances have ever been arrested?”, “Do you have a regular living situation?”, and “How often did you have conflicts with teachers at school?”
A computer program analyzes the survey results and assigns a score to each offender that represents the likelihood of them committing a future crime. As ProPublica reported, offenders don’t get an explanation of how their scores are determined, even though judges and cops rely on them—or at least take them into account—when making important decisions about offenders’ fates.
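COMPAS's actual formula is proprietary, but the general shape of a survey-based risk score can be sketched as a weighted sum of answers scaled to a decile. Everything in this sketch, including the question names, weights, and answers, is invented for illustration; it is not COMPAS's method.

```python
# Generic illustration of a survey-based risk score. COMPAS's real
# formula is not public; the question names and weights here are invented.

def risk_score(answers, weights):
    """Weighted sum of survey answers, scaled to a 1-10 decile score.

    Assumes each answer is normalized to the range [0, 1].
    """
    raw = sum(weights[q] * answers[q] for q in weights)
    max_raw = sum(weights.values())  # highest possible raw score
    return 1 + round(9 * raw / max_raw)

# Hypothetical questions loosely echoing the survey items quoted above.
weights = {"friends_arrested": 3.0, "unstable_housing": 2.0, "school_conflicts": 1.0}
answers = {"friends_arrested": 0.5, "unstable_housing": 1.0, "school_conflicts": 0.0}
print(risk_score(answers, weights))  # prints 6
```

Note that a defendant given only the final number has no way to tell which answers drove it, which is exactly the opacity ProPublica flagged.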
ProPublica analyzed 10,000 criminal defendants and compared their scores to their actual recidivism rates over a two-year period. The publication found that black defendants were regularly assigned higher risk scores than were warranted, and that black defendants who did not commit a crime within two years were twice as likely as white defendants to be misclassified as higher risk.
The tool also underestimated white defendants’ recidivism rates, mistakenly labeling white recidivists as lower risk twice as often as black recidivists.
Other findings include:
- The analysis also showed that even when controlling for prior crimes, future recidivism, age, and gender, black defendants were 45 percent more likely to be assigned higher risk scores than white defendants.
- Black defendants were also twice as likely as white defendants to be misclassified as being a higher risk of violent recidivism. And white violent recidivists were 63 percent more likely to have been misclassified as a low risk of violent recidivism, compared with black violent recidivists.
- The violent recidivism analysis also showed that even when controlling for prior crimes, future recidivism, age, and gender, black defendants were 77 percent more likely to be assigned higher risk scores than white defendants.
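The core comparison behind these findings can be sketched in a few lines: match each defendant's risk label against whether they actually reoffended, then compute false positive and false negative rates separately for each group. The records below are illustrative placeholders, not ProPublica's data.

```python
# Sketch of a group-wise misclassification check, in the spirit of
# ProPublica's analysis. The toy records are invented, not real data.

def error_rates(records):
    """Return (false_positive_rate, false_negative_rate) for a group.

    False positive: labeled high risk but did not reoffend.
    False negative: labeled low risk but did reoffend.
    """
    non_reoffenders = [r for r in records if not r["reoffended"]]
    reoffenders = [r for r in records if r["reoffended"]]
    fpr = sum(r["high_risk"] for r in non_reoffenders) / len(non_reoffenders)
    fnr = sum(not r["high_risk"] for r in reoffenders) / len(reoffenders)
    return fpr, fnr

# Illustrative toy data: each dict is one defendant in one group.
group_a = [
    {"high_risk": True, "reoffended": False},
    {"high_risk": True, "reoffended": False},
    {"high_risk": False, "reoffended": False},
    {"high_risk": False, "reoffended": False},
    {"high_risk": True, "reoffended": True},
    {"high_risk": False, "reoffended": True},
]
fpr, fnr = error_rates(group_a)
print(f"false positive rate: {fpr:.2f}, false negative rate: {fnr:.2f}")
```

Running the same function on each racial group and comparing the resulting rates is how disparities like the ones above surface, even when overall accuracy looks similar across groups.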
This data-driven approach to criminal justice is intended to reduce the number of people in prison, save states and localities money, and reduce recidivism rates. But the use of algorithms and risk assessments to predict future crimes is not without controversy. In Wisconsin, one offender is appealing a conviction to the state’s supreme court on the grounds that the use of COMPAS violates his right to due process.
University of Michigan law professor Sonja B. Starr, who has studied the use of algorithmic-based risk assessments, said the surveys can adversely impact low-income offenders.
“They are about the defendant’s family, the defendant’s demographics, about socio-economic factors the defendant presumably would change if he could: Employment, stability, poverty,” Starr told the Associated Press in 2015. “It’s basically an explicit embrace of the state saying we should sentence people differently based on poverty.”
Algorithms shape everything from our Facebook feeds to the ads we see online to prison sentences, so it’s natural that questions are arising about whether and how they are biased.
The more companies test out these kinds of systems—such as IBM’s software that tries to spot terrorists in refugee populations and predict jihadist attacks—the more concerned people become about how ethical it is to use computers to find patterns in human behavior.
Algorithms are imperfect. They are written by fallible, biased human beings. And because they are the product of deliberate decisions made by biased programmers, their use can reinforce stereotypes and bias.
As machine learning researcher Moritz Hardt writes:
An immediate observation is that a learning algorithm is designed to pick up statistical patterns in training data. If the training data reflect existing social biases against a minority, the algorithm is likely to incorporate these biases. This can lead to less advantageous decisions for members of these minority groups.
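Hardt's point can be shown in miniature: a learner that does nothing but match statistical patterns in its training data will reproduce any bias baked into the labels. The groups, labels, and counts below are invented for illustration.

```python
# A toy learner that estimates P(high_risk | group) by counting label
# frequencies in biased training data. The groups and numbers are invented.
from collections import defaultdict

def fit_frequency_rule(training_data):
    """Learn a per-group high-risk rate from (group, label) pairs."""
    counts = defaultdict(lambda: [0, 0])  # group -> [high_risk_count, total]
    for group, label in training_data:
        counts[group][0] += label
        counts[group][1] += 1
    return {g: hi / total for g, (hi, total) in counts.items()}

# Biased historical labels: group B was labeled high risk far more often.
training = [("A", 0)] * 8 + [("A", 1)] * 2 + [("B", 0)] * 5 + [("B", 1)] * 5
model = fit_frequency_rule(training)
print(model)  # the fitted rule scores group B higher than group A
```

Nothing in the code mentions race or intends harm; the disparity comes entirely from the training labels, which is Hardt's observation.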
ProPublica’s report is a reminder that predictive algorithms are not “neutral” or “fair” simply because they’re software. And because the companies that make them don’t disclose their secret sauce, it’s impossible to know how the programs generate their results.
Selena Larson is a technology reporter based in San Francisco who writes about the intersection of technology and culture. Her work explores new technologies and the way they impact industries, human behavior, and security and privacy. Since leaving the Daily Dot, she's reported for CNN Money and done technical writing for cybersecurity firm Dragos.