Face recognition software is under-regulated across the country.
Over 50 civil liberties organizations have signed a letter to the Department of Justice requesting a formal investigation into facial recognition technology after a damning new report exposed systemic racial biases.
On Tuesday, the Center on Privacy and Technology at Georgetown Law released “The Perpetual Line-Up: Unregulated Police Face Recognition in America,” a comprehensive study of the rise and unregulated application of algorithmic identification programs such as facial recognition.
These technologies are fast becoming a go-to tool for law enforcement across the United States.
Facial recognition software uses algorithms that identify faces in surveillance photos or footage by matching them against state databases, which can include pictures from driver’s licenses and other photo-ID catalogues. The technology can help law enforcement verify the identities of suspects or unknown persons. One in two American adults is already in one of these repositories, meaning that tens of millions of law-abiding Americans can be scanned and analyzed by face recognition software.
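The matching process described above can be sketched in a few lines. In a typical system, each face is reduced to a numeric feature vector (an "embedding"), and a probe image from surveillance footage is scored against every enrolled entry. Everything here is illustrative: the database names, the tiny three-number embeddings, and the similarity values are invented, not drawn from any real system.

```python
import math

def cosine_similarity(a, b):
    """Score how alike two face embeddings are (1.0 = identical direction)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def search(probe, database):
    """Rank every enrolled identity by similarity to the probe embedding."""
    scores = [(name, cosine_similarity(probe, emb)) for name, emb in database.items()]
    return sorted(scores, key=lambda pair: pair[1], reverse=True)

# Hypothetical enrollment database, e.g. embeddings from driver's-license photos.
database = {
    "license_0001": [0.9, 0.1, 0.3],
    "license_0002": [0.2, 0.8, 0.5],
    "license_0003": [0.4, 0.4, 0.4],
}
probe = [0.85, 0.15, 0.35]  # embedding extracted from surveillance footage

for name, score in search(probe, database):
    print(name, round(score, 3))
```

The key point for the report's argument is that the system does not answer "yes" or "no"; it returns a ranked list of the closest-looking enrolled faces, which investigators then act on.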
Despite claims from one Washington police department that the technology “does not see race,” the report disturbingly illustrates the opposite. Several of the leading facial-recognition algorithms were 5 to 10 percent less accurate at identifying black people than white people. As a result, an innocent black person is more likely than a white person to be misidentified by these systems as a suspect.
“If the suspect is African-American rather than Caucasian,” the report explains, “the system is more likely to erroneously fail to identify the right person, potentially causing innocent people to be bumped up the list—and possibly even investigated. Even if the suspect is simply knocked a few spots lower on the list, it means that, according to the facial recognition system, innocent people will look like better matches.”
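The ranking effect the report describes can be made concrete with a toy example. Suppose the system returns candidates ordered by match score; even a modest accuracy penalty on the true match can push innocent people above the actual suspect. The names and scores below are fabricated, and the 7.5 percent penalty is simply the midpoint of the 5-to-10-percent accuracy disparity the report measured, used here only as an illustration.

```python
def rank(candidates):
    """Return candidates ordered from highest to lowest match score."""
    return sorted(candidates.items(), key=lambda kv: kv[1], reverse=True)

# Made-up similarity scores; "true_match" is the actual suspect.
scores = {"true_match": 0.90, "innocent_a": 0.88, "innocent_b": 0.85}
print([name for name, _ in rank(scores)])  # the true match comes back first

# Degrade the true match's score by 7.5%, mid-range of the disparity
# the report found for some algorithms on black faces.
degraded = dict(scores, true_match=scores["true_match"] * (1 - 0.075))
print([name for name, _ in rank(degraded)])  # innocent people now outrank the suspect
```

With the penalty applied, the true match drops to the bottom of the list, and the two innocent candidates become the system's "best" matches, exactly the failure mode the report warns about.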
While these systems do not interpret race the way humans do, prejudices can be trained into an algorithm, whether through unbalanced training image sets or even deliberately by design. The report highlighted that engineers at two of the most prominent companies developing facial recognition technology did not test for racial bias in their identification algorithms at all.
Part of the problem is that the exponential growth in deployment, and the increasing dependence on this technology for serious investigative purposes, have far outpaced the implementation of legislation and regulatory frameworks.
Drawing attention to this gap is the primary purpose of the civil liberties coalition’s letter to the DOJ, which explicitly expresses concern that this “technology is having a disparate impact on communities of color, potentially exacerbating and entrenching existing policing disparities.”
“Face recognition technology is rapidly being interconnected with everyday police activities, impacting virtually every jurisdiction in America,” the letter continued. “Yet, the safeguards to ensure this technology is being used fairly and responsibly appear to be virtually nonexistent.”
David Gilmour is a reporter who specializes in national politics, internet culture, and technology. He previously covered civil liberties, crime, and politics for Vice.