A “trove” of documents allegedly obtained by ProPublica offers a look at the controversial methods Facebook uses to police hate speech, which the social network defines as “direct and serious attacks on any protected category of people based on their race, ethnicity, national origin, religion, sex, gender, sexual orientation, disability or disease.”
If the evidence is true, it is damning: The social media platform trains its employees to protect white men from hate speech, but not black children.
An example from the documents asks Facebook moderators which subset should be protected from hate speech: white men, female drivers, or black children. The answer: white men.
“Here’s the quiz Facebook has given to its ‘content reviewers’” pic.twitter.com/zv8hS27H0A — Julia Angwin (@JuliaAngwin), June 28, 2017
That’s because Facebook only shields the “protected categories” listed in its definition (like race and gender). It does not, however, protect subsets formed by combining a “protected category” with a “non-protected category.” Non-protected categories include social class, age, appearance, religions, countries, occupation, continental origin, and political ideology. Once a protected category is combined with a non-protected category, that group loses Facebook’s protection.
It should be noted that if the question had asked if black men, male drivers, or female children would be protected, the answer would be black men.
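As described above, the rule reportedly works like a simple conjunction: a group is shielded only if every attribute describing it falls into a protected category, and mixing in any non-protected attribute strips protection. Here is a minimal sketch of that logic; the category names and the `is_protected` function are illustrative reconstructions for this article, not Facebook’s actual taxonomy or implementation.

```python
# Illustrative reconstruction of the reported rule; not Facebook's real code.
PROTECTED = {
    "race", "ethnicity", "national origin", "religion", "sex",
    "gender", "sexual orientation", "disability", "disease",
}
NON_PROTECTED = {
    "social class", "age", "appearance", "occupation",
    "country", "continental origin", "political ideology",
}

def is_protected(attributes):
    # A group is shielded only if ALL of its attributes are protected
    # categories; one non-protected attribute removes protection.
    return all(a in PROTECTED for a in attributes)

print(is_protected({"race", "gender"}))       # e.g. "black men"
print(is_protected({"gender", "occupation"})) # e.g. "female drivers"
print(is_protected({"race", "age"}))          # e.g. "black children"
```

Under this sketch, “white men” (race + gender) comes out protected while “female drivers” (gender + occupation) and “black children” (race + age) do not, matching the quiz answer in the documents.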
ProPublica claims Facebook declines to protect subsets like “female drivers” because they don’t seem “especially sensitive.” Citing a “person familiar with the matter,” ProPublica says the default decision is to allow free speech. The Daily Dot reached out to Facebook to ask why “children” doesn’t qualify as a protected category but had not received a response as of this publishing.
One questionable decision derived from Facebook’s hate speech policies is to allow the use of swastikas because of a rule that permits the “display [of] hate symbols for political messaging,” but to ban images of Pepe the Frog, a symbol used by white supremacists.
Another example offered in the alleged documents explains that the sentence “Migrants are lazy and filthy” is OK, while “The French are the best but the Irish suck!” is not, because of a rule that states “It’s okay to claim superiority for a nation but not at the expense of another nationality.”
The report says one rule cited in the documents, which is no longer in effect, banned posts that praise the use of “violence to resist occupation of an internationally recognized state.” ProPublica accuses Facebook of favoring elites and governments over grassroots activists and racial minorities to serve its own business interests.
Facebook acknowledged its policies’ shortcomings but defended them as an attempt to apply a consistent standard throughout the world.
“The policies do not always lead to perfect outcomes,” said Monika Bickert, head of global policy management at Facebook, according to the report. “That is the reality of having policies that apply to a global community where people around the world are going to have very different ideas about what is OK to share.”
The social giant has come under fire in recent months for its handling of sensitive content. It recently announced it would hire 3,000 more moderators to help review the millions of reports it gets every week.
Phillip Tracy is a former technology staff writer at the Daily Dot. He's an expert on smartphones, social media trends, and gadgets. He previously reported on IoT and telecom for RCR Wireless News and contributed to NewBay Media magazine. He now writes for Laptop magazine.