It took Facebook one week to respond to an open letter written by a group of feminist activists demanding that the social network take action against groups that promoted misogyny and violence against women.
On May 21, Laura Bates, Soraya Chemaly, and Jaclyn Friedman published a letter, cosigned by more than 50 advocacy groups, speaking out against Facebook's indifference toward Facebook groups with names like "Kicking your Girlfriend in the Fanny because she won't make you a Sandwich" and "Violently Raping Your Friend Just For Laughs."
"These pages and images are approved by your moderators, while you regularly remove content such as pictures of women breastfeeding, women post-mastectomy, and artist representations of women's bodies," the letter pointed out.
"Your common practice of allowing this content by appending a [humor] disclaimer to said content literally treats violence targeting women as a joke."
The coalition demanded that Facebook label this type of content as hate speech, train its moderators to identify and remove it, and train those same moderators to "understand how online harassment differently affects women and men."
The group also asked supporters to tweet at brands whose ads appeared on those pages. The Twitter campaign was organized under the hashtag #fbrape, and by Monday, at least 15 companies had pulled their ads.
It appears that Facebook heard these complaints loud and clear.
On Tuesday afternoon, the social network published on its Facebook Safety page a lengthy note titled "Controversial, Harmful and Hateful Speech on Facebook," a direct response to the aforementioned open letter.
"In recent days, it has become clear that our systems to identify and remove hate speech have failed to work as effectively as we would like, particularly around issues of gender-based hate," the company acknowledged.
"We need to do better—and we will."
This effort to do better has resulted in four new changes to the way Facebook deals with objectionable content, effective immediately.
The first is to overhaul Facebook's policies on what constitutes hate speech, noting that it "will solicit feedback from legal experts and others, including representatives of the women's coalition and other groups that have historically faced discrimination."
The Menlo Park–based company will also retrain its moderators to do a better job of identifying this type of content, establish more direct forms of communication with affected groups to "assure expedited treatment of content they believe violate[s] [Facebook's] standards," and meet with these same organizations to discuss how to "balance considerations of free expression [and] to undertake research on the effect of online hate speech on the online experiences of members of groups that have historically faced discrimination in society."
The biggest change, however, is that Facebook will now attach the names of those responsible for distasteful and objectionable—but not hateful—content. In other words, if you're going to spew hate on the social network, you will no longer be able to do it anonymously.
"[If] an individual decides to publicly share cruel and insensitive content, users can hold the author accountable and directly object to the content," the company announced.
With these new directives in place, Facebook has acknowledged that it did have a misogyny problem, and it appears to be doing its best to change.
Photo via babyben/Flickr