Women of color have long told the world that racism and misogyny are interlinked on Twitter. A new Amnesty International report further supports the point, revealing that Black women are 84 percent more likely to face “abusive” or “problematic” tweets than white women on the site.
The statistic comes from Amnesty International’s “Troll Patrol” report, which studied tweets sent in 2017 to 778 women: politicians in the U.S. Congress and U.K. Parliament, and journalists at both left- and right-leaning publications. Serving as a follow-up to March’s “Toxic Twitter” report, the project saw Amnesty International team up with AI product company Element AI to analyze two sample tweet sets. A group of 6,500 volunteers labeled 228,000 tweets for “abusive” or “problematic” content, while a further 1,000 tweets were reviewed by three experts on violence and abuse. From there, Element AI used the labeled sample to analyze 14.5 million tweets sent to the 778 women in the study, building a dataset to better understand the content women receive on Twitter.
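The pipeline the report describes, a small human-labeled sample used to train a model that then scores a far larger corpus, can be sketched roughly as follows. This is a toy illustration only: the keyword-based scorer and the `ABUSIVE_MARKERS` vocabulary are stand-ins invented here, not Element AI’s actual model or Amnesty’s labeling criteria.

```python
# Toy sketch of the Troll Patrol approach: humans label a small sample,
# a model learns from those labels, and the model then scores the full
# 14.5-million-tweet corpus. All names and logic here are illustrative.

ABUSIVE_MARKERS = {"threat", "hate"}  # assumed toy vocabulary, not Amnesty's


def label_from_volunteers(tweet: str) -> bool:
    """Stand-in for a crowdsourced 'abusive or problematic' label."""
    return any(word in tweet.lower() for word in ABUSIVE_MARKERS)


def train(labeled_sample):
    """'Train' on the labeled sample; here we just collect words seen in
    tweets the volunteers flagged (a real system would fit a classifier)."""
    vocab = set()
    for tweet, is_abusive in labeled_sample:
        if is_abusive:
            vocab.update(tweet.lower().split())
    return vocab


def extrapolate(model_vocab, large_corpus):
    """Score every tweet in the full corpus with the learned model and
    count how many it flags as abusive or problematic."""
    return sum(
        any(word in model_vocab for word in tweet.lower().split())
        for tweet in large_corpus
    )
```

The key design point the report relies on is exactly this split: expensive human judgment on a sample, cheap automated scoring at scale, which is why Amnesty later cautions that such models should assist rather than replace human review.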
For the sake of Amnesty International’s study, “abusive” tweets “violate Twitter’s own rules and include content that promotes violence against or threats to people” based on such qualities as their religion, race, ethnicity, sexual orientation, or gender identity, among other identifiers. A problematic tweet, on the other hand, consists of “hurtful or hostile content” that doesn’t necessarily qualify as abuse, even though it may “reinforce negative or harmful stereotypes against a group of individuals,” effectively “silencing an individual or groups of individuals.”
Based on the report’s findings, Element AI estimates that 1.1 million “abusive or problematic tweets” were sent to the female journalists and politicians in the study, an average of one toxic tweet every 30 seconds. Black women were 84 percent more likely to face abuse than white women: one in 10 tweets sent to Black women was labeled “abusive or problematic,” compared with one in 15 for white women. Women of color as a whole were 34 percent more likely than white women to receive abusive or problematic tweets, pointing to a much larger issue with how racism and misogyny intersect on Twitter.
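The “one every 30 seconds” figure follows directly from spreading 1.1 million tweets across the year of data studied, as a quick check shows:

```python
# Sanity check on the report's rate: 1.1 million abusive or problematic
# tweets over one year works out to roughly one every 30 seconds.
tweets = 1_100_000
seconds_per_year = 365 * 24 * 60 * 60  # 31,536,000
interval = seconds_per_year / tweets   # about 28.7 seconds per tweet
```

About 28.7 seconds, which the report rounds to one every 30 seconds.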
The study also found that Latinx women were 81 percent more likely to receive specific physical threats than white women, even though Latinx women faced proportionally less abuse than their white counterparts. Mixed-race women received abuse “across all categories, including sexism, racism, physical, and sexual threats.”
“With the help of technical experts and thousands of volunteers, we have built the world’s largest crowdsourced dataset about online abuse against women,” Amnesty International’s Senior Advisor for Tactical Research Milena Marin told the Daily Dot in a statement. “Troll Patrol means we have the data to back up what women have long been telling us—that Twitter is a place where racism, misogyny and homophobia are allowed to flourish basically unchecked.”
Black women were 84% more likely than white women to receive abusive tweets on #ToxicTwitter.
We now have the data to back up what women have long been telling us: Twitter is a place where racism, misogyny & homophobia are allowed to flourish, basically unchecked.
— Amnesty International (@amnesty) December 18, 2018
The study also found that right- and left-leaning women face “similar levels of online abuse,” although women working for left-leaning political parties experienced somewhat more problematic and abusive mentions than their right-leaning counterparts. Left-leaning female politicians received 23 percent more problematic and abusive mentions than right-leaning politicians, while female journalists at right-wing publications faced 64 percent more problematic and abusive tweets than writers at more liberal outlets.
As a whole, Amnesty International hopes Twitter becomes more transparent about the “scale and nature of abuse on their platform,” taking steps to address it moving forward. The organization believes machine learning can be a powerful tool for identifying abuse but should be used to aid human analysis of abusive or problematic content, not necessarily to replace human moderators.
“Twitter must start being transparent about how exactly they are using machine learning to detect abuse, and publish technical information about the algorithms they rely on,” Marin added.
Read the full Troll Patrol study here.