
U.S. gov asks Silicon Valley to build a terrorist-spotting algorithm

If you see something, code something.


AJ Dellinger

Tech

Posted on Jan 15, 2016 | Updated on May 27, 2021, 8:54 am CDT

In a meeting with executives from top technology companies, officials from the United States government made a request—make Minority Report a reality.

According to Fusion, during the terrorism summit between members of the government and the heads of Silicon Valley’s biggest companies, policymakers floated the idea that a computerized system might be able to catch terrorists before they act.

The algorithmic method would be intended to flag potentially radical activity online and give law enforcement a heads-up on behavior that might be deemed terrorism.

The suggestion posed to companies like Facebook, Twitter, and Google—companies that sit on massive wells of data collected from their users—is that they would monitor activity on their platforms. When potential signs of radicalization were spotted, the services could keep an eye on future activity or alert authorities to take action.

Issues with this suggestion are plentiful, especially given the lack of any real detail. The proposal is broad and ill-defined—a fact acknowledged even by the White House in a memo sent to participants before the meeting.

“It’s a very fine line to get that information,” Andre McGregor, a former FBI terrorism investigator, told the Guardian earlier this month. McGregor is now director of security at Silicon Valley security company Tanium Inc. “You’re essentially trying to take what is in someone’s head and determine whether or not there’s going to be some violent physical reaction associated with it.”

The call for an algorithmic watchdog to spot potential extremists before they can act is likely driven by recent events—especially the San Bernardino shooting, in which it was widely reported that one of the shooters had pledged allegiance to ISIS on Facebook. That claim was later discredited by the Federal Bureau of Investigation, but not before the report had circulated for weeks.

It’s not the first time such an idea has been floated, either. Late last year, Republican presidential candidate Carly Fiorina suggested the government isn’t using the right algorithms to find terrorists online, and that she had the sway to convince Silicon Valley companies to help because of her time as CEO of Hewlett-Packard.

Last year in the United Kingdom, the parliamentary Intelligence and Security Committee called on Facebook to use its scripts and tracking methods to identify terrorists. The suggestion raised questions among experts, including data strategist Duncan Ross, who ran an analysis to determine just how accurate such a system could be.

According to the Guardian, Ross found that even if such a system were 99.9 percent accurate—considerably better than any real-world application would manage—the remaining 0.1 percent error rate, applied across tens of millions of users, would still misidentify close to 60,000 innocent people as suspicious. That’s a massive amount of information for law enforcement to wade through.
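To see where a figure like 60,000 comes from, here is the back-of-the-envelope arithmetic in a few lines of Python. The inputs (60 million monitored users, a handful of actual terrorists) are illustrative assumptions in the spirit of Ross’s analysis, not his exact figures.

```python
# Base-rate arithmetic behind Ross's point. Inputs are illustrative
# assumptions, not Ross's exact figures.

users = 60_000_000        # roughly the UK's population of monitored accounts
accuracy = 0.999          # the hypothetical "99.9 percent effective" system
actual_terrorists = 50    # assume a tiny number of true targets

false_positive_rate = 1 - accuracy
false_positives = (users - actual_terrorists) * false_positive_rate
true_positives = actual_terrorists * accuracy

print(f"False positives: {false_positives:,.0f}")  # ~60,000 innocent people flagged
print(f"True positives:  {true_positives:,.1f}")   # ~50 actual targets

# Chance that any given flagged person is actually a terrorist:
precision = true_positives / (true_positives + false_positives)
print(f"Precision: {precision:.4%}")               # under 0.1 percent
```

In other words, even a wildly optimistic system would bury the real targets under tens of thousands of false alarms.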

All of that comes before taking into account privacy concerns. Any algorithm that searches activity on social networks to spot potential problems would likely be invasive and require reaching into even protected content. Facebook reportedly suggested a system similar to the site’s suicide prevention program, which requires friends of a user to flag a post and bring it to Facebook’s attention. Action could be taken from there.
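As a rough sketch of what such a friend-driven flagging pipeline could look like, consider the Python fragment below. The function names, the two-flag threshold, and the queue structure are all hypothetical; nothing here reflects Facebook’s actual implementation.

```python
from collections import defaultdict

# Hypothetical friend-driven flagging queue, loosely modeled on the
# suicide-prevention flow described above. Names and thresholds are invented.

REVIEW_THRESHOLD = 2      # distinct flags required before human review

flags = defaultdict(set)  # post_id -> user_ids who flagged it
review_queue = []         # posts awaiting human moderators

def flag_post(post_id: str, flagged_by: str) -> None:
    """Record a friend's flag; escalate once enough distinct people flag."""
    flags[post_id].add(flagged_by)
    if len(flags[post_id]) >= REVIEW_THRESHOLD and post_id not in review_queue:
        review_queue.append(post_id)  # humans, not an algorithm, decide next

flag_post("post-123", "friend-a")
flag_post("post-123", "friend-b")
print(review_queue)  # ['post-123']
```

The key design point is that detection starts with people who know the user, and escalation ends with a human decision rather than an automatic report to authorities.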

Facebook declined to comment on the report. 

Computer scientists at the University of Pennsylvania recently published a paper describing an algorithm they developed. According to the researchers, the system is designed to spot terrorist activity without compromising the privacy of the rest of a site’s users.

The system developed by the researchers essentially represents the members of a network as bits. The algorithm examines only certain bits at a time, learning specific facts about a user without revealing that person’s full identity. It can then home in on a potential target without exposing any information about the rest of the user base.
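The paper’s actual mechanism is more involved, but a loose illustration of the general idea, examining the network one small piece at a time and expanding only through users who match a targeting condition, might look like the following. The graph, the seed account, and the is_suspicious test are all invented for this sketch; this is not the Penn researchers’ algorithm.

```python
# Illustration of a privacy-limited graph search: start from a known
# target and examine one connection at a time, never expanding into
# users who fail the targeting check. All data here is invented.

network = {
    "seed": ["a", "b"],
    "a": ["c"],
    "b": [],
    "c": [],
}

def is_suspicious(user: str) -> bool:
    # Hypothetical per-user check; in practice this is the hard,
    # subjective judgment the article questions below.
    return user in {"seed", "a", "c"}

def targeted_search(start: str) -> set:
    """Reveal only users reachable through a chain of suspicious links."""
    revealed, frontier = set(), [start]
    while frontier:
        user = frontier.pop()
        if user in revealed or not is_suspicious(user):
            continue  # non-matching users are dropped without expanding them
        revealed.add(user)
        frontier.extend(network.get(user, []))
    return revealed

print(targeted_search("seed"))  # {'seed', 'a', 'c'}
```

Under this sketch, the search never digs into user "b" beyond the single flag check, which is the spirit of the guarantee the researchers describe.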

That still leaves questions as to what will get a person flagged and what type of action is warranted once someone is identified. Even if the privacy of most users isn’t being directly violated, it’s not clear how something as subjective as the concept of being “radicalized” can be identified by an algorithm that views things in an objective manner.

Perhaps the government is expecting its partners in Silicon Valley to figure that part out, if nothing else so it can point the finger at those companies for spying on their own users rather than take the bullet for being an overreaching government.

H/T Fusion | Photo via Christiaan Colen/Flickr (CC BY-SA 2.0)
