On Facebook, figuring out the difference between free speech and offensive content hasn't been easy. Before now, though, little has been done to pinpoint the site's inconsistency—especially regarding threatening language directed at women.
Last week, in an open letter to Facebook, a circle of feminist activist groups spoke out about the seemingly endless list of Facebook groups promoting misogyny and violence. In it, they targeted Facebook advertisers whose ads appear next to what they deem "hate speech"—fan pages such as "Fly Kicking Bitches in the Face."
The letter demands a three-pronged action plan to end gender-based violent speech on Facebook—especially the kind that masquerades as jokes. Spearheaded by writer Soraya Chemaly, activist site Everyday Sexism, and media watchdog Women, Action, & Media (WAM!), the plan involves adopting a zero-tolerance policy for rape jokes and other kinds of "speech that trivializes or glorifies violence against girls and women." It also calls for Facebook to "effectively train moderators" on how to proactively ban and prevent such hate speech.
This isn't the first time Facebook has come under fire for its handling of threats and other violent speech against women. Earlier this year, Facebook refused to take any action when a Swedish woman became the target of online stalking, doxing, and rape and death threats. In a public statement for a documentary about online harassment, Nordic Facebook policy chief Myrup Kristenson stated that the website handles rape threats on a case-by-case basis, attempting to judge how serious the person really is:
[W]e look into whether such threats have been said in the heat of the moment or if it’s something the person really means.
That attitude is exactly what the women's rights groups who cosigned the letter are trying to combat. Insisting on "swift, comprehensive, and effective action addressing the representation of rape and domestic violence on Facebook," they asked Facebook to rethink its policy on labeling violent themes against women as offensive humor.
Your common practice of allowing this content by appending a [humor] disclaimer to said [violent] content literally treats violence targeting women as a joke.
"The basic problem is that Facebook doesn't take the reports seriously until there's media coverage on a specific page," WAM! spokesperson Jaclyn Friedman told the Daily Dot. "Then they take that single page down, as though that solves the problem."
If that's what Kristenson meant by handling things on a case-by-case basis, it doesn't seem to be working: While several of the pages listed in the open letter have been removed, many more offensive ones remain. For instance, while the page "Fly Kicking Women in the Uterus," mentioned in the widely read open letter, has disappeared, the pages "Fly Kicking Kunts" and "Fly Kicking Bitches in the Face" remain up.
The biggest problem with the campaign is Facebook itself.
Facebook is a mammoth operation, serving about 1.11 billion monthly users. The community moderators who would guard against violent speech would need to number in the hundreds of thousands. Moreover, asking Facebook to reevaluate hate speech disguised as "humorous speech" would also require a major content policy overhaul.
Before either topic can be dealt with, the activists first have to alert Facebook to the problem—and get Facebook to take it seriously. To do that, they're targeting advertisers, calling upon supporters to flood the Twitter accounts of corporations whose ads appear next to offensive content on the website. Using auto-generated tweets and the hashtag #FBRape, supporters have been protesting companies right and left for days.
But despite considerable media coverage, so far most advertisers have proven just as uninterested in reinforcing site policy as Facebook is. After getting inundated with protests, Vistaprint and Audible stated they had verified with Facebook that their ads were no longer appearing on specific individual pages. Vistaprint asked its users to report offensive pages where any Vistaprint ads were located; meanwhile, Audible shared its official response from Facebook:
We try to react quickly to remove reported language or images that violate our terms and we try to make it very easy for people to report questionable content using links located throughout the site. Due to the numerous and varied posts on our site, we are unable to broadly restrict your ads from appearing alongside specific content. But in many cases, when the content is allowed by our policies but particularly offensive, we remove ads from those Pages.
The problem with any attempt to ask Facebook to issue an automatic sitewide restriction on any kind of speech is that the site is simply too big. Even the advertisers don't know for sure where all their ads are being placed at any given moment. With over a billion active users, Facebook's basic "If you see something, say something" policy is simply the most effective way to get something reported and banned.
Provided the reports actually lead to bans, that is. According to Friedman, that's far from the case.
You could report every page you ever found on Facebook promoting violence against women, sure. Many do. ... We hear over and over again that people report horribly violent pages and are told that they will stay up because they're "humor" or protected by "free speech."
Those pages only get taken down when Facebook is called out publicly by someone powerful enough to get their attention.
Many of the supporters of the Open Letter campaign have claimed that Facebook is much quicker to ban any kind of racially motivated hate speech. "We just want them to recognize pages/images promoting violence against women as hate speech, and treat that hate speech as effectively as they do other forms," Friedman said.
But it seems Facebook is no better at policing racially motivated forms of hate speech than it is at banning gender-based hate speech. A report just released by the Online Hate Prevention Institute finds that while some forms of anti-Semitic hate speech are taken down quickly, others remain up for months. The report also indicates that Facebook has trouble recognizing specific kinds of anti-Semitic language as hate speech.
Three advertisers have so far chosen to pull their ads completely until Facebook takes action: Nissan U.K., webhosting company WestHost, and Middle East peace advocacy group J Street.
@emilylhauser I looked up the hashtag you used and saw something about raping handicapped girls. Involuntarily clenched fist. — Benjamin Silverstein (@bensilverstein) May 23, 2013
Other advertisers like American Express have expressed concern but stopped short of pulling advertising altogether. Most of the other names on the list of advertisers targeted by the campaign, including Dove, Pringles, and Mitsubishi, have been silent.
Publicly, so has Facebook. That stance isn't likely to change. Facebook claims to have "a dedicated User Operations team that reviews incoming reports 24/7." A key concern of the women's activist groups is that violence against women is not seen as a valid safety threat. Facebook's Community Standards page states, "Facebook does not permit hate speech, but distinguishes between serious and humorous speech."
Then again, it also states that "Sharing any graphic content for sadistic pleasure is prohibited." (It's hard to see how sharing images of women getting kicked, punched, raped, or pushed down stairs for laughs is anything but sadistic pleasure.)
Despite the nearly 60 organizations, websites, and activist programs that signed the Open Letter, it seems as though the cries for change will go no further than the activists' immediate networks of outraged bystanders. Without a groundswell of advertiser support for change, Facebook has little incentive to toughen up its own policy standards—especially given how inherently confusing they are.
Two weeks ago, I had an off-record meeting with an executive at Facebook about #FBrape. I left frustrated. That's all. — Laurie Penny (@PennyRed) May 22, 2013
In the end, Facebook may just simply be too big to care.
Update: Facebook published a lengthy response to "Controversial, Harmful and Hateful Speech" on the social network, pledging to evaluate its policies and procedures for handling content that targets women "or incites gender-based violence." Here's an excerpt:
In recent days, it has become clear that our systems to identify and remove hate speech have failed to work as effectively as we would like, particularly around issues of gender-based hate. In some cases, content is not being removed as quickly as we want. In other cases, content that should be removed has not been or has been evaluated using outdated criteria. We have been working over the past several months to improve our systems to respond to reports of violations, but the guidelines used by these systems have failed to capture all the content that violates our standards. We need to do better—and we will.
As part of doing better, we will be taking the following steps, which we will begin rolling out immediately:

- We will complete our review and update the guidelines that our User Operations team uses to evaluate reports of violations of our Community Standards around hate speech. To ensure that these guidelines reflect best practices, we will solicit feedback from legal experts and others, including representatives of the women's coalition and other groups that have historically faced discrimination.
- We will update the training for the teams that review and evaluate reports of hateful speech or harmful content on Facebook. To ensure that our training is robust, we will work with legal experts and others, including members of the women’s coalition, to identify resources or highlight areas of particular concern for inclusion in the training.
- We will increase the accountability of the creators of content that does not qualify as actionable hate speech but is cruel or insensitive by insisting that the authors stand behind the content they create. A few months ago we began testing a new requirement that the creator of any content containing cruel and insensitive humor include his or her authentic identity for the content to remain on Facebook. As a result, if an individual decides to publicly share cruel and insensitive content, users can hold the author accountable and directly object to the content. We will continue to develop this policy based on the results so far, which indicate that it is helping create a better environment for Facebook users.
- We will establish more formal and direct lines of communication with representatives of groups working in this area, including women's groups, to assure expedited treatment of content they believe violates our standards. We have invited representatives of the women's coalition, including Everyday Sexism, to join the less formal communication channels Facebook has previously established with other groups.
- We will encourage the Anti-Defamation League’s Anti-Cyberhate working group and other international working groups that we currently work with on these issues to include representatives of the women’s coalition to identify how to balance considerations of free expression, to undertake research on the effect of online hate speech on the online experiences of members of groups that have historically faced discrimination in society, and to evaluate progress on our collective objectives.
These are complicated challenges and raise complex issues. Our recent experience reminds us that we can’t answer them alone. Facebook is strongest when we are engaging with the Facebook community over how best to advance our mission. As we’ve grown to become a global service with more than one billion people, we’re constantly re-evaluating our processes and policies. We’ll also continue to expand our outreach to responsible groups and experts who can help and support us in our efforts to give people the power to share and make the world more open and connected.
Illustration by Jason Reed