I was noodling around on Twitter, minding my own business, when some dildo with a Guy Fawkes avatar dropped me a line to let me know that an anthropomorphic train was swiftly headed to my house to rape me.
Or, more specifically, he sent me an image macro of a character from Thomas the Tank Engine with the caption, “CHOO CHOO MOTHERFUCKER THE RAPE TRAIN’S ON ITS WAY. NEXT STOP YOU.”
It’s a rape threat. It’s menacing. It’s creepy. It’s part of a massive, sustained harassment campaign leveled at women online (particularly feminist writers) that’s been going on for years. (Gamergate is just the latest incarnation.) It’s not the “worst” rape threat I’ve ever received, if I had to rank them—please don’t make me rank them—but its intent was to make me feel unsafe because of my gender and to disrupt my work.
So I reported it. Reporting rape threats on Twitter is a time-sucking, onerous process that dominates my online life. I report abusive tweets sometimes hundreds of times per week, and I do it because I’m told it is my only recourse. It’s tedious, ineffectual paperwork, and it does little to change the toxic, misogynist culture that pervades the network.
A few days later, I received this message from Twitter Support:
Thank you for letting us know about your issue. We’ve investigated the account and the Tweets reported as abusive behavior, and have found that it’s currently not violating the Twitter Rules (twitter.com/rules).
Twitter’s abuse and harassment policy reads:
Users may not make direct, specific threats of violence against others, including threats against a person or group on the basis of race, ethnicity, national origin, religion, sexual orientation, gender, gender identity, age, or disability. Targeted abuse or harassment is also a violation of the Twitter Rules and Terms of Service.
If “the rape train’s coming for you”—directed at a woman in order to punish her for her work advocating for women—doesn’t qualify as gendered abuse, then what on earth does? What’s the point of having a harassment policy at all if it doesn’t police harassment?
Before tossing it on the pile with all the others (to be clear, not all of my reports are rejected, but a significant number are, with no discernible rhyme or reason), I tweeted a screengrab of the image with the caption, “Twitter just let me know that this doesn’t violate their rules or qualify as abusive behavior. Lighten up, ladies!”
Blogger and Skepchick founder Rebecca Watson recently reported this overt, violent threat and was told it didn’t violate Twitter’s rules:
@rebeccawatson I would love to knock you the fuck out. Not because you're a female or a feminist, but because you're an enormous bitch. — farleft (@_farleft), December 12, 2014
And when Watson reported another user for posting anime pornography with her head Photoshopped into it, Twitter threatened to suspend her account, saying that the user had reported her for harassment.
The past few weeks have been particularly baffling and grim. Conservative blogger Chuck Johnson used Twitter to openly, gleefully dox, harass, and borderline blackmail a woman he thought was the alleged UVA rape victim, inciting harassment against an entirely different, misidentified woman in the process.
Again, if that doesn’t qualify as harassment, then what the motherfuck does?
I emailed Twitter for comment and was told, “We do not comment on individual accounts, for privacy and security reasons.” The Twitter spokesman then referred me back, again, to the harassment policy.
Predictably, several men rushed in to inform me that it was “just a joke.”
“They will be completely unwilling to talk,” writer and activist Soraya Chemaly confirmed when I reached out to her. Chemaly organized the Safety and Free Speech Coalition, a network of anti-violence-against-women organizations with legal, academic, and tech expertise. So far, the coalition has been working with Facebook, Twitter, and YouTube to examine and improve how they handle abuse and harassment, particularly the type of persistent, long-term, often sexually violent harassment that disproportionately affects women.
“The system is predicated on the idea that the harassment is going to be fairly benign name-calling,” Chemaly told me, “which we all know that men experience more. It is not built to capture context or sustained harassment. It’s also not built to recognize trauma or re-traumatization, especially as it’s linked to violence.
“The very linear, non-contextual architecture of the reporting system is a bias,” Chemaly continued. “It’s biased toward the experiences of men, who will experience name-calling but are far less likely to experience gendered, violent, sustained harassment.”
Twitter’s system isn’t built, for example, to recognize that the user who tweeted the above photo of Romano is the leader of a small but vocal movement of outspoken misogynists and rape apologists who regularly organize large-scale, sustained harassment campaigns against women. It has no mechanism for recognizing the context—that the tweet is a deliberate incitement to harassment, that behind that single tweet are 10,000 followers salivating at the chance to be unleashed on a disobedient woman. It has no way to take into account the cumulative effect that such campaigns have on women’s mental health and safety.
It’s not the case, as anti-feminist “free speech” trolls inevitably counter, that feminist activists want everyone they subjectively deem a meanie to be banished from the Internet forever. What we want, in fact, is a transparent, clearly delineated policy that takes into account the unique challenges facing women online—challenges that don’t just hurt women’s feelings, but actively silence us, keeping our voices out of the discourse and our influence out of the forward thrust of history. And the system is designed that way. There’s no way to surgically remove the problem of gendered harassment without addressing the fact that the system is inherently biased. Bias is built in.
“A lot of these corporations have gender-segregated spaces internally,” Chemaly told me. “Women have the outwardly facing public policy roles, and men have the internal systems architecture and engineering roles. So there’s a disconnect between men who are creating these architectures and women who are receiving the harassment. Until we can figure out, all over the place, how to bridge those gaps, I don’t think we’re going to see big changes.”
But Chemaly is cautiously optimistic.
“I’m two years in, working with Facebook,” she said, “and they are making a good-faith effort to address what I would characterize as normative challenges. Two years ago, posting rape threats as ‘controversial humor’ was acceptable, and now it is no longer acceptable. Two years ago, pages glorifying Elliot Rodger probably would have stood, but now they’re rejected on the basis of gender-based hate. Twitter is at the beginning of that process, where they are examining the impact of their policies or lack of policies.”
“Their free speech absolutism is very costly speech. It actually isn’t free at all for so many of us. I think there are people at Twitter who understand and are grappling with how to reconcile it with their corporate ethos. They are engaging with us. It is extremely slow and extremely opaque. However, I would say that a year ago, none of this was happening.”
Heartening but frustrating. Obviously Twitter and other social media sites—if they truly want to prioritize the safety of vulnerable users—have a complex and delicate task ahead of them, and I’m sympathetic to that. But in the meantime, actual human beings are bearing the brunt.
Hurry up—it’s heavy.
Illustration by Max Fleishman