
Photo via Libertinus/Flickr (CC BY-SA 2.0)

How Facebook ‘likes’ could dent ISIS’s recruiting efforts

By scraping data, researchers can target extremist sympathizers.


Selena Larson


Posted on Jan 22, 2016   Updated on May 27, 2021, 8:06 am CDT

Terrorism is a very real threat no matter where you happen to live, and the individuals looking to brainwash new recruits often find them online. The fight against ISIS is happening everywhere, including social networks, and a counter-speech solution has been found in an industry that might surprise you: digital marketing. 

Strategies adopted by corporate ad campaigns don’t seem like an obvious response to violent extremism and terrorist recruitment. But by tracking conversations, Facebook group affiliations, Page Likes, and other data, marketers can already build an online profile of an individual and use it as fodder for targeted ads from brands and companies. That same basic technology can be applied to potential extremists looking to join ISIS. 

At the Institute for Strategic Dialogue (ISD), senior researcher Dr. Erin Saltman is working on new ways of bringing technical marketing tools to the counter-extremism space. One approach: figure out how best to identify individuals interested in traveling to Syria or joining a neo-Nazi organization, then deliver messaging that counters the rhetoric those extremist groups provide. 

“When you look at online metrics, you can be much better at targeting someone based on their movements online, based on their sentiments; we can do sentiment mapping,” Saltman said in an interview with the Daily Dot. “That way, when you have a counter-narrative, you can target your audience not based on something as rudimentary as religion or ethnicity, which also can be quite offensive to certain community groups, but base it very specifically on their interests, without having to look at a claimed religion, ideology, or sentiment.” 

Security monitoring has failed among cohorts who don’t fit the stereotypical jihadist profile of angry, young Muslim men, Saltman explained. Consider the young women who travel to Syria from European countries to join extremist groups: in 2015, 56 women and girls were believed to have left the UK to join ISIS in Syria. They traveled essentially under the radar because they didn’t fit the stereotype. 

Like the ISIS fighters adept at using social media, these women sometimes leave a trail of Internet activity that documents their journey to extremism. Blogs, Twitter, and Facebook accounts provide a firsthand look at the transition from UK resident to ISIS supporter, with content ranging from survival guides to poetry to blatant recruiting efforts. 

Through sentiment mapping, activity across the Web, and participation in different online groups or forums, ISD can build a profile of an individual at risk of joining extremist organizations. It can then either target “counter-narrative” advertising through Facebook, using videos or information created to challenge that person’s beliefs, or reach out to individuals on a personal level. 

Identifying extremists goes beyond ethnicity or religion. Factors like the Page Likes that group members have in common can help identify individuals. If you are looking at far-right neo-fascists, Saltman said, mixed martial arts (MMA) would be one of the factors you would plug in, among others, for your targeting. 
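The interest-overlap idea can be sketched in a few lines of Python. The profiles and Page names below are invented for illustration, not drawn from ISD’s data; the point is only that interests shared across known members become targeting factors:

```python
from collections import Counter

# Hypothetical Page Likes scraped from profiles already confirmed as
# members of an extremist group (all names are made up).
group_likes = [
    {"MMA Fights", "Nordic Heritage", "Metal Band X"},
    {"MMA Fights", "Anti-Immigration Forum", "Metal Band X"},
    {"MMA Fights", "Nordic Heritage", "Survivalist Gear"},
]

def shared_interests(profiles, min_share=0.5):
    """Return interests appearing on at least `min_share` of profiles."""
    counts = Counter(like for profile in profiles for like in profile)
    threshold = min_share * len(profiles)
    return {like for like, n in counts.items() if n >= threshold}

# "MMA Fights" appears on every profile, so it becomes one of the
# interest factors you would plug into an ad-targeting audience.
targeting_factors = shared_interests(group_likes)
```

In practice a counter-speech campaign would feed factors like these into an ad platform’s audience-targeting options rather than compute them by hand, which is how it can reach people by interest without ever selecting on religion or ethnicity.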

“When we have done research into the different push and pull factors that lead people into violent extremist groups, the people that are really engaging with violent hate speech are different. There’s not one pathway. That’s very hard for security services to wrap their head around,” Saltman said.

Different factors include religious or ideological beliefs, acceptance due to sentiments of alienation, and misguided political goals. 

“Counter-narrative or counter-speech has to come from an equally diverse number of angles,” she said. “The main issue right now is that we have an extremist minority which has in essence hijacked the discourse in a way that’s not reflective of greater society. These extremist voices are gaining much more traction than they actually have in the real world offline space. So we need to shift that, to make it a healthier marketplace of ideas online.”

To call ISIS Internet-savvy would be an understatement. The terrorist organization uses Twitter, Facebook, YouTube, Tumblr, and other social networks as fluently as any other group of millennials who grew up with texting and AOL Instant Messenger, and it aims to reach a potentially large, and young, audience. According to Pew Research, the median age of the world’s 1.6 billion Muslims is 23 years old. High-quality videos and online discourse have dismantled the stereotype of a mysterious jihadist who wants to stay anonymous. People can now see what extremists had for dinner, photos of a new baby, and selfies from Iraq and Syria. 

Effectively countering the ISIS narrative won’t be possible with government-branded videos like those from the U.S. Department of State’s “Think Again, Turn Away” campaign. Instead, grassroots efforts by ISD and other organizations aim to apply the same fierce online tactics to positive messaging.

Counter-speech takes two broad forms: targeting people in the early stages of radicalization with video and other online content at scale, and personal messaging to individuals identified as sympathetic to extremist causes. Activists and technologists create the videos and advertising campaigns in what ISD calls innovation labs. Abdullah X is one of the fictional characters ISD created to try to discourage people from joining the Islamic State. 

While videos and ad campaigns target people at a higher level, a one-on-one pilot program enlisted former extremists to open a dialogue with Facebook users sympathetic to extremist causes. Ten people, eight men and two women, participated: five former far-right extremists from North America and five former Islamist extremists from the UK. Using Facebook’s Graph Search, ISD identified 154 profiles based on Page Likes, group membership, cover photos, and the tone and content of posts. 

After a profile was identified, it went through a four-step process to determine if the person was at risk. In the study, 90 percent of profiles were confirmed as “at risk.” The former extremists then messaged the individuals through Facebook Messenger, and 60 percent of the respondents participated in sustained conversation. 
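For scale, the reported numbers work out as follows. This is a back-of-the-envelope calculation, not ISD’s methodology, and the 60 percent figure applied to respondents, a count that wasn’t published, so only the at-risk total is computed:

```python
def confirmed_at_risk(identified, at_risk_rate):
    """Rough funnel from the pilot's reported figures (illustrative).

    154 profiles identified, 90 percent confirmed as at risk.
    The 60 percent sustained-conversation rate applied to respondents,
    whose number isn't reported, so it can't be chained on here.
    """
    return round(identified * at_risk_rate)

# 154 identified profiles at a 90 percent confirmation rate
at_risk = confirmed_at_risk(154, 0.90)  # 139 profiles
```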

Since the experiment, Facebook has restricted Graph Search, making it more difficult to collect this data. But while ISD says that Twitter is a helpful resource for mapping extremist ecosystems, Facebook is still more appropriate for establishing communication and personalized counter-narratives.

Facebook recently partnered with ISD and the International Centre for the Study of Radicalisation and Political Violence (ICSR) to launch the Online Civil Courage Initiative (OCCI) that will provide €1 million in support of European nonprofits working to combat extremism online. The OCCI will aggregate tools and methods developed by these organizations to help people figure out how to best engage in counter-speech. 

“Hate speech has no place online, or in society,” Facebook COO Sheryl Sandberg said in a statement. “Facebook is no place for the dissemination of hate speech or calls for violence. With this new initiative, we can better understand and respond to the challenges of extremist speech on the internet.” 

Saltman said ISD works with Facebook on the technical aspects of their research. Representatives of the organization carry out the messaging themselves, but Facebook employees analyze how ISD is targeting individuals and suggest different tools and ways for campaigns to be more effective. 

Governments are also turning to Facebook and other Silicon Valley companies to help combat extremism. Earlier this month, the U.S. government met with tech company leaders and policymakers at a terrorism summit and suggested that engineers come up with an algorithm that could spot terrorist and extremist activity automatically. The nuances of extremist speech, however, make that political dream technically unworkable. 

“It would be impossible to create an algorithm [for extremism] because, unlike child sexual abuse imagery, an image is very unique to plug in and create a detection around. [But] discourse and discussion and ideology and ideas, it’s impossible to create an algorithm. Especially because language changes,” Saltman said. “Are you calling it ISIS, Daesh, IS, or are you creating a code-name to talk about something, and how are you talking about it? That’s where the nuances involved in extremism [make it] impossible to create this magic algorithm to understand such discourse nuance. Especially the changing nature of language.” 

Extremist content isn’t necessarily illegal, but it is unwanted speech. Social platforms can remove accounts and hate speech at their discretion, but when it comes to automatically identifying extremist or terrorist content at scale, subtle differences in language, meaning, and ideology make it difficult to separate bad content from good. Even visuals can be hard to police: Saltman said a computer might classify a violent action film and an ISIS beheading video similarly.
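A toy keyword filter shows the problem concretely. The blocklist and posts below are invented, but the failure mode is exactly the renaming that Saltman describes, where the same group is ISIS in one post, Daesh in another, and a code-name in a third:

```python
import re

# Naive approach: flag any post containing a blocklisted word.
BLOCKLIST = {"isis"}

def naive_flag(text):
    """Return True if any word in the text matches the blocklist."""
    words = re.findall(r"[a-z]+", text.lower())
    return any(w in BLOCKLIST for w in words)

posts = [
    "ISIS released a new video",   # flagged
    "Daesh released a new video",  # same referent, missed
    "the dawla released a video",  # code-name, missed
]
flags = [naive_flag(p) for p in posts]  # [True, False, False]
```

Growing the blocklist just restarts the chase: every new name or euphemism slips past until a human adds it, which is why a fixed detection algorithm for discourse is so much harder than hash-matching known abusive images.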

It’s understandable that politicians misjudge how this technology might work in practice; the current U.S. Congress is among the oldest on record, and the government is struggling to grapple with the digital natives behind the Islamic State’s online presence. When the “Think Again, Turn Away” campaign launched with footage of ISIS in its videos, extremists reported some of the videos for “terrorist imagery,” and they were taken offline. 

“That cat and mouse game of censorship or the government trying to be savvy has been less effective than really using the voices of activists, of young people, of women, of different religious bodies to get that different voices out there that are more credible,” Saltman said.

The online accounts that ISIS creates to recruit new members could be the very things that help governments and organizations stop them. 

By accessing the same kind of data companies like Coca-Cola use to manipulate us into buying soft drinks, researchers can target men and women interested in joining the Islamic State. And, potentially, convince them not to. 


*First Published: Jan 22, 2016, 6:17 pm CST