
Twitter users post 10,000 racist slurs every day, study finds

Most of the 10,000 slurs we tweet every day aren't intended to be derogatory, but that doesn't make them any less disturbing.


Aja Romano


Posted on Feb 13, 2014 | Updated on May 31, 2021, 6:23 pm CDT

Just how racist is Twitter? About 0.007 percent racist, according to a new study.

But that still adds up to a disconcerting volume of words used as religious, ethnic, or racial slurs. There’s some good news, however: Most of the language isn’t used with derogatory intent.

Media Bistro reports that Demos, a UK research group, has published a new study finding that about 1 in every 15,000 tweets contains a slur. That’s over 10,000 tweets each day.
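Those two figures hang together, as a quick back-of-the-envelope check shows (an illustrative sketch using only the numbers above, not anything from the study itself):

    # Back-of-the-envelope check of the article's two figures (illustrative only;
    # not taken from the study).
    slur_rate = 1 / 15_000          # about 1 in every 15,000 tweets contains a slur
    slur_tweets_per_day = 10_000    # the study's daily estimate

    print(f"as a percentage: {slur_rate * 100:.4f}%")
    # -> as a percentage: 0.0067%
    print(f"implied total tweets per day: {slur_tweets_per_day / slur_rate:,.0f}")
    # -> implied total tweets per day: 150,000,000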

The most common phrase that turned up in the study? “White boy.”

The study crowdsourced its list of slurs from Wikipedia and then scraped tweets from the platform during late November of 2012. The result was nearly 127,000 tweets containing what researchers identified as a slur. From there, the researchers used a combination of machine classification and human analysts to determine how the tweets were being used.
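The first filtering step amounts to keyword matching against a term list. A minimal sketch of how that might look (the placeholder terms, names, and sample tweets here are hypothetical; this is not Demos’s actual pipeline or word list):

    import re

    # Illustrative sketch: flag tweets that contain any phrase from a term list.
    # "term_a" and "term_b" are placeholders standing in for list entries.
    term_list = ["term_a", "term_b", "white boy"]

    pattern = re.compile(
        r"\b(?:" + "|".join(map(re.escape, term_list)) + r")\b",
        re.IGNORECASE,
    )

    def contains_listed_term(tweet_text: str) -> bool:
        """Return True if the tweet contains any phrase from the term list."""
        return pattern.search(tweet_text) is not None

    tweets = ["hanging out with my white boy friends", "nothing to see here"]
    print([t for t in tweets if contains_listed_term(t)])
    # -> ['hanging out with my white boy friends']

Matching is the easy part; as the study makes clear, deciding what a matched tweet actually means is much harder.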

The study stresses that it’s a measurement of specific words and phrases only, not of hate speech or other kinds of harmful dialogue around race on Twitter. And while the researchers praised Twitter as “an unprecedented source of data” for linguists, they cautioned that context—something that’s not always easy to get from 140 characters—is everything.

But even within the study’s limited constraints, the results are revealing:

— Most uses of slurs were casual, without violent or abusive intent; only a relatively small number (between 500 and 2,000 daily) were directed at individuals with derogatory intent.

— Casual slurs were most often used to build in-group solidarity or to describe someone. These non-derogatory uses include humor, satire, and a group appropriating or reclaiming a word used against them.

— But the study also noted that the “casual use of racial slurs” was significant: “The way in which racist language can seep into broader language use and reflect underlying social attitudes is potentially an area of concern,” the researchers write. About 5 to 10 percent of tweets fell into the category of slurs that the user might view as non-derogatory but that others could perceive as threatening.

Another aim of the study was to determine how well machine analysis could handle these kinds of tweets and infer their context and purpose. The machines were useful for sorting the data, but when it came to judging whether a tweet was derogatory or not, they fared less well.

But then again, so did the humans, who disagreed with each other nearly a third of the time on how to classify the data. The study cited “cultural bias” as one reason for this, pointing out that even the culture around the words themselves isn’t always clear about what is and isn’t a slur. For example, many people would argue that the phrase “white boy” doesn’t operate on the level of other words in the list, because it doesn’t reinforce a social power imbalance that favors white men over basically everyone else.
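The simplest way to quantify that kind of disagreement is raw inter-annotator agreement: the share of items on which two annotators assign the same label. A minimal sketch with made-up labels (not the study’s data or its exact method):

    # Illustrative sketch of raw inter-annotator agreement. The labels below are
    # invented; the study reported disagreement on roughly a third of the tweets.
    labels_a = ["derogatory", "casual", "casual", "appropriated", "casual", "casual"]
    labels_b = ["casual", "casual", "derogatory", "appropriated", "casual", "casual"]

    matches = sum(a == b for a, b in zip(labels_a, labels_b))
    print(f"raw agreement: {matches / len(labels_a):.0%}")
    # -> raw agreement: 67%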

And plenty of people might disagree with the way the study catalogued certain tweets. Here’s the example the study gives for a “casual racial slur”:

@^^^: Fucking paki shops that charge you for cash withdrawals or paying by card need nuked

The study labeled this tweet as “non-heated,” but to many people, a tweet declaring that Pakistani-owned shops should be nuked might come across as very heated. And while the study distinguished between this kind of tweet and “an explicit incitement to do something ‘in the real world,’” similar oblique tweets have been read as threats when they’re directed at, oh, the U.S. president instead.

All in all, the study raises as many questions as it answers, but one thing seems clear.

Whether it’s easy to judge their intent or not, words can and do still hurt you.

Photo via eldh/Flickr (CC BY-SA 2.0)
