Facial recognition is everywhere now. What can people do?
A recent test by the American Civil Liberties Union (ACLU) found that Rekognition, Amazon’s popular facial recognition service, mistakenly matched 28 members of the U.S. Congress to mugshots of people who had been arrested for crimes. The case is the latest chapter in an ongoing debate over whether we can trust advanced facial recognition technology to fight crime.
Police forces have used facial recognition systems for decades to identify criminals in images. But the technology has never been as pervasive as it is today, and thanks to advances in artificial intelligence, it relies less and less on human assistance and judgment.
While there are clear benefits to providing advanced facial recognition technology to the agencies and institutions tasked with keeping us safe from crime, there are also fears that in the wrong hands, the same technology can be put to harmful use.
How does advanced facial recognition work?
Facial recognition relies on two key components: a database of face images and computer software that can compare and match new images and video frames with the database. In past decades, creating registries of face images was difficult, and the software that performed the comparison required a lot of human input and assistance.
But things have changed a lot. There are now billions of publicly available images on social media networks and other internet services, and those images can be automatically correlated to other information about the people they belong to. Government agencies and companies can create face recognition databases by using software that automatically scrapes public information from the web.
Also, thanks to advances in artificial intelligence, comparing and matching photos of people has become easier and much more accurate than before. Modern facial recognition software uses deep learning and neural networks, an AI technique that has gained popularity in recent years. Neural networks examine labeled samples and find ways to classify new data. So, for instance, if you train a neural network with millions of labeled images of faces, it’ll learn to classify new images and match them with their proper owners. The more high-quality and diverse the training data you provide, the more accurate your neural network becomes.
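The matching step can be illustrated with a toy sketch. In systems of this kind, a deep neural network typically maps each face image to a numeric embedding vector, and two images are judged to show the same person when their embeddings are close enough. Everything below — the names, the vectors, and the threshold — is purely hypothetical, standing in for the output of a real network:

```python
import math

# Hypothetical, precomputed face embeddings. In a real system these
# vectors would come from a deep neural network applied to face images;
# here they are made-up three-dimensional stand-ins.
database = {
    "alice": [0.91, 0.10, 0.35],
    "bob":   [0.12, 0.88, 0.40],
}

def cosine_similarity(a, b):
    """Similarity of two vectors: 1.0 means identical direction."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def best_match(query, db, threshold=0.9):
    """Return the closest identity, or None if nothing is similar enough."""
    name, score = max(
        ((n, cosine_similarity(query, e)) for n, e in db.items()),
        key=lambda item: item[1],
    )
    return name if score >= threshold else None

print(best_match([0.90, 0.12, 0.33], database))  # → alice (very close match)
print(best_match([0.50, 0.50, 0.50], database))  # → None (no confident match)
```

The threshold is the crucial knob: set it too low and strangers get matched to database entries; set it too high and genuine matches are rejected.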
“We are at a point in time where we have the camera infrastructure and processing power widely available to make use of face recognition in real time,” says Shaun Moore, CEO of TrueFace.ai. “Face recognition solutions are being trained with millions of faces from around the world making the power of the algorithms significantly more accurate.”
Partnerships between governments and tech companies
“There are more high-profile companies producing face recognition solutions as they expand their cognitive services arsenals, a direct result of the availability of greater computing power and years of improving deep learning techniques,” says Doug Aley, CEO of Ever AI.
Amazon is not the only big tech company that has shown an interest in AI-based facial recognition. Facebook uses facial recognition to tag pictures and to warn users if someone else is impersonating them. Google uses the technology to classify images you upload to the Photos app and to detect strangers through the lens of your smart home security cameras. Microsoft provides a facial recognition application programming interface (API), not much different from Amazon’s. And Apple, the other member of the tech industry’s “big five,” uses facial recognition in its flagship smartphone to authenticate users and to map facial expressions to emojis.
However, the other lucrative business opportunities of facial recognition technology are not lost on most of those companies. “As those companies look to build businesses around face recognition, it is natural that they will skate to industries that are already accepting of face recognition, like the intelligence community, law enforcement, and defense,” Aley says.
Recent years have seen an increase in partnerships between governments and tech companies to use facial recognition and artificial intelligence in law enforcement. Amazon recently faced backlash from privacy advocacy groups for providing facial recognition services to police departments in Washington County, Oregon, and Orlando, Florida. Microsoft faced dissent in its own ranks for working with Immigration and Customs Enforcement (ICE) on AI-related technologies including facial recognition.
Several smaller startups are also engaged in contracts with government agencies, and the Department of Homeland Security (DHS) is actively exploring the technology in different settings, including border gates, police vehicles, and body cameras.
The positive uses of facial recognition
There have already been several cases where police made arrests with the help of advanced facial recognition that would otherwise have been impossible.
Earlier this month, the British police identified two suspects in the chemical poisoning of a former Russian double agent and his daughter. They used facial recognition technology to quickly comb through thousands of hours’ worth of surveillance footage at airports and other areas in Salisbury, where the crime was committed. The UK police have also used the technology to make arrests in other places.
In April, Chinese police used facial recognition software to identify and arrest a financial crimes suspect in a crowd of 50,000 people attending a concert in Nanchang, in southeastern China. China has one of the world’s most advanced facial recognition surveillance systems.
One of the most important uses of facial recognition is in helping fight human trafficking and child abduction. TrueFace’s Moore notes that in April, Indian authorities were able to identify 3,000 missing children within days of deploying their new facial recognition system. “It can also be used to keep unwanted sex offenders out of places that children frequent,” Moore adds.
What can go wrong?
While providing law enforcement with advanced facial recognition technology that can scan thousands of images and identify people in real time has some undeniable advantages, it isn’t without tradeoffs. Facial recognition carries technical and legal challenges, and if they’re not addressed, the technology can end up doing more harm than good.
Privacy of citizens is among the top concerns of civil and digital rights groups. Privacy advocates are concerned that law enforcement is collecting and processing images and information about people without their consent. In the U.S., law enforcement already has photos of more than half the country’s adult population, most of whom have never been involved in any crime.
Moreover, there’s little regulation on the use of facial recognition technology by government agencies.
“Face recognition most likely isn’t performed on site. Images from cameras are uploaded to a cloud or central server through the same internet that everyone else uses. That means there will always be a cyber security risk in which malicious actors could compromise the system,” says Paul Bischoff, privacy advocate at Comparitech. But while companies are held to account and penalized when they don’t protect their users’ data, it’s not clear to what extent the privacy laws that govern the private sector apply to law enforcement.
Another problem is algorithmic bias, one of the endemic challenges of the AI industry. Deep learning algorithms are only as good as the data you feed them, and they can behave erratically outside their specific domain. For instance, if a facial recognition application has only seen faces of white people, it will be better at making matches for people of lighter skin and will be prone to making mistakes on people of darker skin.
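One way this bias surfaces is in the decision threshold: a similarity cutoff tuned on scores from a well-represented group can systematically fail a group the model has seen less of, whose genuine-match scores run lower. The numbers below are entirely made up for illustration:

```python
# Toy illustration (all numbers synthetic): similarity scores for
# "genuine" pairs (two photos of the same person) and "impostor" pairs
# (photos of different people), for two demographic groups. The model
# was trained mostly on group A, so its genuine scores for group B
# are assumed to run lower.
group_a = {"genuine": [0.95, 0.93, 0.96, 0.94],
           "impostor": [0.40, 0.35, 0.45, 0.38]}
group_b = {"genuine": [0.78, 0.81, 0.75, 0.80],
           "impostor": [0.42, 0.39, 0.44, 0.41]}

THRESHOLD = 0.90  # tuned so group A is classified perfectly

def error_rate(scores, threshold):
    """Fraction of pairs the threshold gets wrong for one group."""
    false_rejects = sum(s < threshold for s in scores["genuine"])
    false_accepts = sum(s >= threshold for s in scores["impostor"])
    total = len(scores["genuine"]) + len(scores["impostor"])
    return (false_rejects + false_accepts) / total

print(error_rate(group_a, THRESHOLD))  # 0.0 — the threshold fits this group
print(error_rate(group_b, THRESHOLD))  # 0.5 — every genuine pair is rejected
```

A single global accuracy figure can hide this entirely, which is why audits that break results down by demographic group matter.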
“Many companies train their face recognition models on available datasets like Microsoft’s Celebrity dataset,” says Ever AI’s Aley. “Unfortunately, this does not present a very accurate picture of the world. If you train on predominantly white actors, you’ll be very good at identifying white people, but will fall down on other phenotypes.”
A recent examination of the face analysis technologies of IBM and Microsoft showed that both systems were significantly more accurate on male faces than on female faces, and on lighter faces than on darker faces. Both companies have since applied fixes to their technologies.
Other experts question the accuracy of these systems. There are several accounts of facial recognition systems making incorrect matches. For instance, data released by the UK’s South Wales Police in 2017 showed that a facial recognition system used to secure the Champions League final made 2,470 matches, but 2,297 of them were wrong. Other reports put the accuracy of facial recognition systems at varying levels, but the volatility of the technology is undeniable. Facial recognition tends to perform best when a clear, frontal shot of the subject’s face is available.
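The South Wales figures are worth translating into a precision number — of all the people the system flagged, what fraction were actually correct matches? A quick calculation using the reported totals:

```python
# Figures reported by South Wales Police for the 2017 Champions League
# final: 2,470 total matches, of which 2,297 were false positives.
total_matches = 2470
false_positives = 2297

true_positives = total_matches - false_positives  # 173 correct matches
precision = true_positives / total_matches

print(f"{precision:.1%} of flagged matches were correct")  # about 7.0%
```

In other words, roughly 93 percent of the alerts pointed at the wrong person, which is why human review of every flagged match is essential.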
But many of the concerns stem from the wrong perception that AI is supposed to replace human judgment. “Face recognition is not meant to automate decision making in law enforcement, but it is meant to be an aid or a tool in a larger toolbox,” TrueFace’s Moore says. “This technology should augment the already very difficult job our law enforcement has but it was never built to make a decision that could lead to altering someone’s life.”
Can the government misuse facial recognition?
Civil rights advocacy groups are also worried that governments can abuse their newly gained powers for purposes other than fighting crime. “In the wrong hands, face recognition tech could be used to stalk, blackmail, or harass someone. On a broader scale, an oppressive government can use face recognition to restrict people’s freedom of movement,” says Bischoff, the privacy advocate from Comparitech.
A stark example is China, where the government is using facial recognition, AI and big data to control and keep close watch on the lives of its citizens, especially dissidents and minorities. While such invasive measures are hard to imagine in the U.S., privacy advocates are worried that facial recognition can be abused against non-citizens and minorities.
In a recent blog post, Microsoft President Brad Smith noted that like all technologies, facial recognition can be used for both good and ill. The solution, Smith argued, is to regulate the space against abuse by government agencies and to set ethical guidelines for companies that develop the technologies.
“Any technology used by the police can be abused,” says Aley. “The most important thing is for law enforcement to learn how to use these new tools to help investigations, and that they understand that no system is fool proof (i.e. after making an identification, there still needs to be human intervention and the presumption of innocence until you have manually verified the person). Regulation and training is needed for sure, but we also need to understand that having these tools will indeed save lives in the long run.”
Ben Dickson is a software engineer and founder of TechTalks. His work has been published by TechCrunch, VentureBeat, the Next Web, PC Magazine, Huffington Post, and Motherboard, among others.