A “trove” of documents allegedly obtained by ProPublica offers a look at the controversial methods Facebook uses to police hate speech, which the social network defines as “direct and serious attacks on any protected category of people based on their race, ethnicity, national origin, religion, sex, gender, sexual orientation, disability or disease.”
If the evidence is true, it is damning: The social media platform trains its employees to protect white men from hate speech, but not black children.
An example from the documents asks Facebook moderators which subset should be protected from hate speech: white men, female drivers, or black children. The answer: white men.
> Here’s the quiz Facebook has given to its “content reviewers” pic.twitter.com/zv8hS27H0A
> — Julia Angwin (@JuliaAngwin) June 28, 2017
That’s because Facebook only shields the eight “protected categories” listed in its definition (like race and gender). It does not, however, protect subsets of “protected categories” that are combined with a “non-protected category.” Non-protected categories include social class, age, appearance, religions, countries, occupation, continental origin, and political ideology. Once a protected category is combined with a non-protected category, that group is no longer protected by Facebook.
It should be noted that if the question had instead asked whether black men, male drivers, or female children would be protected, the answer would be black men.
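The combination rule described above can be sketched in a few lines of Python. This is purely illustrative of the logic as ProPublica describes it, not Facebook's actual implementation; the function and variable names are hypothetical.

```python
# Hypothetical sketch of the subset rule described in the leaked documents.
# The category lists are taken from the article; the code itself is illustrative.

PROTECTED = {
    "race", "ethnicity", "national origin", "religion", "sex",
    "gender", "sexual orientation", "disability or disease",
}

NON_PROTECTED = {
    "social class", "age", "appearance", "religions", "countries",
    "occupation", "continental origin", "political ideology",
}

def is_protected(categories):
    """A group is shielded only if every category describing it is protected.

    Mixing in any non-protected category (e.g., age) strips protection,
    which is why "black children" loses out while "white men" does not.
    """
    return all(c in PROTECTED for c in categories)

print(is_protected({"race", "sex"}))        # "white men": both protected -> True
print(is_protected({"race", "age"}))        # "black children": age is not protected -> False
print(is_protected({"sex", "occupation"}))  # "female drivers" -> False
```

Under this rule, protection is not additive: a subset of a protected class inherits protection only when all of its qualifiers are themselves protected categories.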
ProPublica claims Facebook chose not to protect subsets like “female drivers” because they don’t seem “especially sensitive.” Citing a “person familiar with the matter,” ProPublica says the default decision is to allow free speech. The Daily Dot reached out to Facebook to ask why “children” doesn’t qualify as a protected category, but hasn’t received a response as of this publishing.
One questionable decision derived from Facebook’s hate speech policies is to allow the use of swastikas because of a rule that permits the “display [of] hate symbols for political messaging,” but to ban images of Pepe the Frog, a symbol used by white supremacists.
Another example offered in the alleged documents explains that the sentence “Migrants are lazy and filthy” is OK, while “The French are the best but the Irish suck!” is not, because of a rule that states “It’s okay to claim superiority for a nation but not at the expense of another nationality.”
The report says a rule cited in the documents, no longer in effect, banned posts that praise the use of “violence to resist occupation of an internationally recognized state.” ProPublica accuses Facebook of favoring elites and governments over grassroots activists and racial minorities to serve its own business interests.
Facebook admitted its policies’ faults but defended itself for attempting to create a consistent standard throughout the world.
“The policies do not always lead to perfect outcomes,” said Monika Bickert, head of global policy management at Facebook, according to the report. “That is the reality of having policies that apply to a global community where people around the world are going to have very different ideas about what is OK to share.”
The social giant has come under fire in recent months for its handling of sensitive content. It recently announced it would hire 3,000 more moderators to help review the millions of reports it gets every week.
Phillip Tracy is a former technology staff writer at the Daily Dot. He's an expert on smartphones, social media trends, and gadgets. He previously reported on IoT and telecom for RCR Wireless News and contributed to NewBay Media magazine. He now writes for Laptop magazine.