Reddit cracks down on harassment, but is it too late?

On Thursday, the leadership of the social news site Reddit published a blog post announcing a major change in the way the site deals with harassment.

The company’s new policy prohibits personal attacks and harassment against other users. The post defined harassment as: “Systematic and/or continued actions to torment or demean someone in a way that would make a reasonable person (1) conclude that Reddit is not a safe platform to express their ideas or participate in the conversation, or (2) fear for their safety or the safety of those around them.”

Users will now be able to email Reddit administrators directly about being harassed. Reddit CEO Ellen Pao told the New York Times the company is hiring additional community managers to handle an influx of new harassment complaints.

The question now is, will it work?

A shaky past

Because Reddit is loath to ban users at the IP-address level, and the site allows users to create accounts anonymously, any user banned for harassment can easily open another account and continue their trolling with few consequences.

Reddit has long enforced a relatively hands-off policy toward the actions of its users. The small set of prohibited activities includes spamming, engaging in vote manipulation, revealing the personal information of another user, and posting child pornography.

In a post written in the wake of last year’s controversy surrounding the site’s role as a hub for a trove of stolen celebrity nude pictures, former Reddit CEO Yishan Wong compared the site’s management to that of a government making a concerted effort to promote free speech.

“The role and responsibility of a government differs from that of a private corporation,” Wong wrote, “in that it exercises restraint in the usage of its powers.”

However, that stance has drawn widespread criticism—both from the media and the site’s own user base, which has grown to over 200 million.

“We’ve seen many conversations [on the site] devolve into attacks against individuals. We share redditors’ frustration with these interactions. We are also seeing more harassment and different types of harassment as people’s use of the Internet and the information available on the Internet evolve over time,” Thursday’s blog post reads. “Instead of promoting free expression of ideas, we are seeing our open policies stifling free expression; people avoid participating for fear of their personal and family safety.”

A company spokesperson cited a recent survey of Reddit users as a major factor behind the policy shift. The survey, which was conducted in March and garnered responses from 16,817 users, found that their biggest problem with Reddit was the “increasingly negative and unsafe community environment.”

The survey found that half of the people who wouldn’t recommend the site to a friend gave their reason for doing so as not wanting to expose friends to “unpleasant content and users” or “appearing to support or participate in such content by association.”

The survey also discovered a significant gender gap: 92 percent of men were satisfied with the Reddit experience, whereas only 84 percent of women felt the same way. When the question was narrowed to the community of people who participate in the site, only 58 percent of women reported being satisfied as opposed to 69 percent of men.

Reddit, whose users are 81 percent male and generally seen as overwhelmingly white, has long had an issue with harassment targeted at women and minorities. Last year, a coalition of moderators from dozens of subreddits signed an open letter urging the site’s administrators to clamp down on trolls. The letter, written by the moderators of r/BlackLadies, read:

Since this community was created, individuals have been invading this space to post hateful, racist messages and links to racist content, which are visible until a moderator individually removes the content and manually bans the user account. All of these individuals are anonymous, many of them are on easily-created and disposable (throwaway) accounts, and they are relentless, coming in barrages. Hostile racist users are also anonymously ‘downvoting’ community members to discourage them from participating. reddit admins have explained to us that as long as users are not breaking sitewide rules, they will take no action.

The resulting situation is extremely damaging to our community members who have the misfortune of seeing this intentionally upsetting content, to other people who are interested in what black women have to say, as well as moderators, who are the only ones capable of removing content, and are thus required to view and evaluate every single post and comment. Moderators volunteer to protect the community, and the constant vigilance required to do so takes an unnecessary toll.

We need a proactive solution for this threat to our well-being. We have researched and understand reddit’s various concerns about disabling downvotes and restricting speech. Therefore, we ask for a solution in which communities can choose their own members, and hostile outsiders cannot participate to cause harm.

What is ‘harassment,’ exactly?

In the r/Rape subreddit, a support community for sexual assault survivors and their loved ones, volunteer moderators, who are not paid company employees, told the Daily Dot last year that one out of every five posts or comments to the subreddit was abusive—calling survivors liars or saying they somehow “deserved it.”

Additionally, r/Rape’s moderators told stories about users being barraged with rape and death threats through the site’s private messaging system and not getting an adequate response from Reddit’s management.

Moderators of a given subreddit have the ability to prohibit specific users from posting to their community; however, the task of clearing out trolls has proven too time-consuming and emotionally draining.

Many of the r/Rape moderators felt that Reddit’s management was prioritizing the free speech of trolls over the safety of all users. As a result, some within the Reddit community are skeptical about the new policy effecting any real change.

“It’s big talk, but they’ve never cared about what has happened to our users,” an r/Rape moderator going by the handle Waitwhatnow told the Daily Dot. “So far, Reddit has only ever backpedaled on a harassing post when threatened with legal action. If they do change, it will definitely be for the best. But the admins haven’t exactly shown good faith in the past.”

A moderator of r/BlackLadies going by the pseudonym TheYellowRose worried that the new rule could ultimately prove counterproductive.

“I think that definition of harassment is just vague enough to be a problem,” TheYellowRose said. “We are currently running a robot that automatically bans anyone who participates in certain hateful subreddits. These users have been calling us Nazis, fascists, fatties, etc., ever since the bot was implemented, and I’m sure they will report us for ‘harassment’ for deciding to exclude them from our safe spaces.” 

“The definition of harassment needs to be a bit clearer,” TheYellowRose continued. “I’d like it to include rules against racism, sexism, rape, and death threats.”

Other users also expressed frustration with the new policy, but from the perspective that Reddit’s new harassment crackdown is effectively censoring speech. “Time was, if you didn’t like what was written on the Internet you turned off the screen and walked fucking outside,” wrote one redditor. “I can’t stand this f**king politically correct bulls**t. We need to tell people to harden the f**k up, use an anonymous Internet name, and don’t feed the trolls. Problem solved.”

Hey, what about shadowbanning?

The comment thread on the post announcing the new harassment policy largely skipped the topic of harassment. Instead, much of the discussion centered on so-called shadowbanning—a practice, instituted early in Reddit’s history to deter spammers, in which every comment or post a user submits to the site is rendered invisible to all other users, all without the shadowbanned user’s knowledge.

Many redditors have serious qualms about the policy, arguing that the shadowbanning process is applied both arbitrarily and inconsistently. In addition, since users aren’t notified they’ve been shadowbanned and can still submit content to the site, shadowbanned users can spend long periods of time submitting content no other users of the site ever see or interact with.

“This was a product decision we made literally 10 years ago—it has not been updated and it needs to be,” wrote Reddit cofounder Alexis Ohanian on the thread. “Back when we made it, we had only annoying marketers to deal with, and it was easier to ‘neuter’ them (that’s what we called it) and let them think they could keep spamming us so that we could focus on more important things, like building the site.”

Ohanian added that the company recently hired someone to create a better system.

Slipping through the cracks

This policy shift isn’t the only time in recent months that Reddit has attempted to crack down on the bad behavior of its users. Earlier this year, the site banned “revenge porn”—sexual images posted to the site without the subject’s consent. The move was a delayed reaction to the previous year’s celebrity nude photo leaks, though the site has since struggled to enforce the new rule.

Months after that announcement, there are still subreddits on the site devoted to so-called sexually suggestive “creepshots” of women taken secretly and posted without their consent. For example, the subreddit r/CandidFashionPolice is devoted to these types of pictures but operates under the fig leaf of critiquing women’s sartorial choices. Despite the policy, the subreddit and others like it are still permitted to operate.

Reddit has long benefited from its anarchic, anything-goes nature. A shift away from that, one that promises to leave an indelible mark on the site’s very culture, will certainly be no easy task.

Photo via erix/Flickr (CC BY 2.0) | Remix by Fernando Alfonso III

Aaron Sankin

Aaron Sankin is a former Senior Staff Writer at the Daily Dot who covered the intersection of politics, technology, online privacy, Twitter bots, and the role of dank memes in popular culture. He lives in Seattle, Washington. He joined the Center for Investigative Reporting in 2016.