Reddit moderation has long been a complicated issue. Within the site's darkest recesses are subreddits that many would find distasteful. Removing them, however, is no easy task, particularly for a company that wants to be perceived as a bastion of free and unfiltered discussion. Offense, after all, is fundamentally subjective, and Reddit is wary of being seen as partial to any particular viewpoint.
It's primarily for that reason that today's change to Reddit's policies against harassment and bullying is a landmark. In a post to /r/announcements, Reddit administrator landoflobsters explained that abusive behavior would no longer need to meet the criteria of "continued" or "systematic" in order to become actionable by the company.
“Chiefly, Reddit is a place for conversation,” they said. “Thus, behavior whose core effect is to shut people out of that conversation through intimidation or abuse has no place on our platform.”
For the first time, Reddit also plans to accept reports from "bystanders" who have witnessed abuse but were not its target. Previously, the company only accepted reports from those who had received inappropriate comments first-hand.
Hoping to assuage the fears of users wary of heavy-handed enforcement, the Reddit representative explained that the company will attempt to pay attention to context. The site plans to use machine-learning tools to prioritize reports, but these will play no role in actual enforcement. That job will remain in the hands of human moderators.
By lowering the threshold at which a post or subreddit becomes objectionable, and by allowing anyone to report a post, Reddit will inevitably receive more reports. The question remains whether the so-called "Front Page of the Internet" can cope.
According to Amazon's Alexa ranking service, Reddit is the 18th most visited website in the world. It has an army of moderators, but they are largely unpaid and, according to a 2018 Engadget report, routinely face harassment and threats. Without recruiting more people to perform this thankless job, it's not clear how Reddit will process these new reports in a timely and effective manner.
And then there's the fact that moderating any platform that is primarily text-based, and drenched in context, is inherently hard. It's safe to assume that mistakes will be made, and that those mistakes will draw the ire of the broader Reddit community, which is intrinsically sensitive to any perceived encroachment on the unfiltered, frank speech the site is known for.
When Reddit inevitably oversteps, it'll be interesting to see how it responds. Will it water down these new rules, or will it stand firm?