The research aims to understand the effectiveness both of removing such individuals and of allowing them to remain online to be debated by others, according to Vijaya Gadde, Twitter’s head of trust and safety, legal and public policy.
Gadde told Motherboard that Twitter is working with academics to see if it can be confirmed that “counter-speech and conversation are a force for good” and “can act as a basis for de-radicalization,” as Twitter currently believes.
“A research project like this isn’t new; our work with academics is ongoing and something that is a critical part of building effective policies,” a Twitter spokesperson told the Daily Dot.
Gadde added that Twitter has seen evidence, on other platforms, that radical viewpoints can change through an exchange of ideas.
“We’re working with them specifically on white nationalism and white supremacy and radicalization online and understanding the drivers of those things; what role can a platform like Twitter play in either making that worse or making that better?” Gadde said.
Reaction to the research plan from a range of extremism experts was mixed, Motherboard noted, though most agreed that such action should have been taken long ago.
“It has a ring of being too little too late in terms of launching into research projects right now,” Becca Lewis, a researcher of far-right movements for Data & Society, told Motherboard. “People have been raising the alarm about this for literally years now.”
Lewis also expressed skepticism over Twitter’s motivation, noting that its decisions could be influenced primarily by profit.
“Counter-speech is really appealing and there are moments when it does absolutely work, but platforms have an ulterior motive because it’s a less expensive and more profitable option,” Lewis said.
Counter-speech may not be a viable option on Twitter, Lewis continued, given that “networked harassment campaigns are common, with white nationalists often (taking) part in those campaigns.”
A former Twitter employee also claimed that Twitter’s announcement showed that the company had essentially made no progress on the issue since she left four years prior.
“Good to see they’re in the exact same place in terms of doing anything than they were when I was working with them as a Trust and Safety partner FOUR FUCKING YEARS AGO,” Zoë Quinn (@UnburntWitch) wrote on Twitter on May 29, 2019.
Jillian York, a director at the Electronic Frontier Foundation, applauded Twitter’s move but cautioned against well-intentioned policies that could end up being used against marginalized communities.
“Powerful (extremist or otherwise) actors will always find another way to get their word out, and we know from experience that speech restrictions (including of extremist content) all too often catch the wrong actors, who are often marginalized individuals or groups,” York told Motherboard. “And so I’m glad to see Twitter thinking creatively about this problem.”
Twitter did not disclose when the study will be completed, whether the results will be publicly released, or whether the company plans to make any policy changes based on the study’s findings.
“We’ve made great strides in creating stronger policies against hateful conduct, violent extremist groups and violent threats on Twitter,” the Twitter spokesperson wrote. “We will always have more to do, and collaboration with outside researchers is critical to helping us effectively address issues like radicalization in all its forms.”
- 4chan’s new troll campaign aims to make the hashtag a white supremacist symbol
- Twitter reportedly worried banning white nationalists would also flag some Republicans
- YouTube/Facebook comments on hate speech hearing shut down due to hate speech