The research aims to understand the effectiveness of both removing such individuals, as well as allowing them to remain online to be debated by others, according to Vijaya Gadde, Twitter’s head of trust and safety, legal and public policy.
Gadde told Motherboard that Twitter is working with academics to see if it can be confirmed that “counter-speech and conversation are a force for good” and “can act as a basis for de-radicalization,” as Twitter currently believes.
“A research project like this isn’t new; our work with academics is ongoing and something that is a critical part of building effective policies,” a Twitter spokesperson told the Daily Dot.
Gadde added that Twitter has seen evidence, on other platforms, that radical viewpoints can change through an exchange of ideas.
“We’re working with them specifically on white nationalism and white supremacy and radicalization online and understanding the drivers of those things; what role can a platform like Twitter play in either making that worse or making that better?” Gadde said.
Reaction to the research plan from a range of extremism experts was mixed, Motherboard noted, though most agreed that such action should have been taken long ago.
“It has a ring of being too little too late in terms of launching into research projects right now,” Becca Lewis, a researcher of far-right movements for Data & Society, told Motherboard. “People have been raising the alarm about this for literally years now.”
Lewis also expressed skepticism over Twitter’s motivation, noting that its decisions could be influenced primarily by profit.
“Counter-speech is really appealing and there are moments when it does absolutely work, but platforms have an ulterior motive because it’s a less expensive and more profitable option,” Lewis said.
Counter-speech may not be a viable option on Twitter, Lewis continued, given that “networked harassment campaigns are common, with white nationalists often (taking) part in those campaigns.”
A former Twitter employee also claimed that Twitter’s announcement showed that the company had essentially made no progress on the issue since she left four years prior.
“Good to see they’re in the exact same place in terms of doing anything than they were when I was working with them as a Trust and Safety partner FOUR FUCKING YEARS AGO,” Gerard Whey wrote on Twitter.
Jillian York, a director for the Electronic Frontier Foundation, applauded Twitter’s move but cautioned against well-intentioned policies that could end up being used against marginalized communities.
“Powerful (extremist or otherwise) actors will always find another way to get their word out, and we know from experience that speech restrictions (including of extremist content) all too often catch the wrong actors, who are often marginalized individuals or groups,” York told Motherboard. “And so I’m glad to see Twitter thinking creatively about this problem.”
Twitter did not disclose when the study will be completed, whether the results will be publicly released, or whether the company plans to make any policy changes based on the study’s findings.
“We’ve made great strides in creating stronger policies against hateful conduct, violent extremist groups and violent threats on Twitter,” the Twitter spokesperson wrote. “We will always have more to do, and collaboration with outside researchers is critical to helping us effectively address issues like radicalization in all its forms.”
Mikael Thalen is a tech and security reporter based in Seattle, covering social media, data breaches, hackers, and more.