Lead image: Max Pixel (Public Domain)

San Francisco becomes first U.S. city to ban facial recognition

'We can have good security without a security state and we can have good policing without a police state.'

 

Mikael Thalen


Posted on May 15, 2019 | Updated on May 20, 2021, 12:35 pm CDT

On Tuesday, San Francisco became the first U.S. city to ban the use of facial recognition technology by local government agencies.

Passed in an 8-1 vote by the city’s board of supervisors, the measure, known as the “Stop Secret Surveillance Ordinance,” not only implements the ban but also calls for an accounting of all surveillance technology already owned by police and other city agencies.

Any agency seeking to purchase new surveillance tools will also be required to inform the public and receive city approval.

Aaron Peskin, the city official who introduced the legislation, defended the measure as a necessary step toward protecting the privacy of local residents.

“We can have good security without a security state and we can have good policing without a police state,” Peskin said, per Gizmodo. “The thrust of this legislation is not to get rid of surveillance technology. It’s to let the government and the public know how that technology is used.”

Although San Francisco does not currently own or deploy facial recognition technology, the new rule is seen as a preventive measure. Such technology, however, will continue to be used at the city’s federally run international airport and ports. Local residents and private companies will also be exempt from the ban.

The measure received widespread support from civil liberties groups including the American Civil Liberties Union (ACLU), Oakland Privacy, and the Electronic Frontier Foundation.

While opponents of the bill have largely been silent on the matter, Wired notes that NEC, a major player in the facial recognition business, is taking steps to ensure that any similar laws passed elsewhere are rendered ineffective.

“NEC is pushing for a federal law that would preempt local and state laws, require systems to be tested for accuracy by outsiders, and include new rules protecting against bias and civil rights abuses,” Wired reports.

The technology has been widely criticized for its inaccuracy and invasive nature. In an ACLU test of the technology last year, 28 members of Congress were incorrectly matched to mugshots of other people. A similar study by MIT also found that facial recognition tools struggled to accurately identify people of color.
