Comment moderation and the (anti-) social Web

Comment moderation is not a First Amendment issue, but conversations about free speech often overlook this very basic point.

By Whitney Phillips

As we see from the ubiquity of online harassment, shaming, and trolling, the social web is as much about ambiguity and antagonism as it is about sharing, connection, and cooperation. In fact, the Web’s most obviously anti-social behaviors—including trolling and shaming—are, strangely enough, also its most social. This series, The Anti-Social Web, will explore this overlap, examining social behavior online from “good” to “bad” and every shade in between.

Is it time to get rid of the comments section? Or would that be a great loss for freedom of expression online?

Popular Science recently decided to shut off its comments. The problem, as online content director Suzanne LaBarre explained, is that incendiary and ultimately useless comments—made by “shrill, boorish specimens of the lower Internet phyla”—are actually bad for science: they risk skewing readers’ perceptions of a particular issue, which in turn risks skewing public perception of that issue overall. And once public support goes, unfavorable policy decisions may follow, potentially compromising research and funding opportunities. These are not risks Popular Science was willing to take—so bye-bye, comments.

Popular Science isn’t the only major online platform reconsidering its commenting policies. Google recently rolled out a new commenting system for YouTube powered by Google+, which, in addition to tethering a person’s YouTube comments to his or her Google+ profile, will help sift relevant comments out of what CNET senior writer Seth Rosenblatt describes as the “wretched hive of scum and villainy” that YouTube users have come to know and loathe.
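
To make the idea of “sifting” a little more concrete, here is a minimal sketch, in Python, of what a comment-ranking heuristic might look like. To be clear, this is not Google’s actual system, which has never been made public; the signals (a poster’s reputation, reply counts, a crude word blocklist) and the weights are hypothetical, chosen only to illustrate the general approach of demoting noise rather than deleting it.

```python
# Purely illustrative sketch of comment "sifting": rank comments by a few
# simple signals instead of raw chronology. The signals, weights, and
# blocklist below are hypothetical; Google's ranking is not public.
from dataclasses import dataclass

# Toy blocklist, standing in for a real abuse/profanity classifier.
BLOCKLIST = {"scum", "villainy"}

@dataclass
class Comment:
    author_reputation: float  # assumed 0.0-1.0, based on prior behavior
    reply_count: int          # how much discussion the comment generated
    text: str

def score(comment: Comment) -> float:
    """Higher scores float to the top of the thread."""
    s = 2.0 * comment.author_reputation + 0.5 * comment.reply_count
    words = set(comment.text.lower().split())
    if words & BLOCKLIST:
        s -= 5.0  # demote, rather than delete, flagged comments
    return s

def sift(comments: list[Comment]) -> list[Comment]:
    """Return comments ordered by relevance score, best first."""
    return sorted(comments, key=score, reverse=True)

if __name__ == "__main__":
    thread = [
        Comment(0.9, 4, "Great explanation of the experiment's methodology."),
        Comment(0.1, 0, "what a wretched hive of scum and villainy"),
    ]
    for c in sift(thread):
        print(f"{score(c):5.1f}  {c.text}")
```

The design choice worth noticing is that nothing here is censored outright: low-quality comments are merely pushed down the page, which is roughly the trade-off a ranking-based system offers over simply disabling comments.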

Whether or not one agrees with Popular Science’s decision (Slate’s Will Oremus, for example, calls the move “lazy and wrong”) or appreciates Google’s proposed changes, one thing is for sure: policies that moderate or disable comments raise a number of important questions about the (anti-) social Web.

On an elemental level, comment moderation policies call into question the ideal relationship between writer and commenters. For some, like Oremus, this relationship should be one of free and open exchange: readers often have a great deal of insight to share, and writers should be willing to listen and maybe even learn. “I can’t begin to count the number of times I’ve been alerted to new developments, factual oversights, dissenting opinions, and fresh story ideas by readers using the comments section below my stories and blog posts,” Oremus writes. “Commenters also help authors understand where they’ve explained a point in a misleading way, and what readers are taking away from their posts.” In short, commenters keep writers honest and on their toes.

Derek Thompson at The Atlantic presents a more ambiguous take. As he argues, commenters can be great, but they can also be, well, the worst—see the above point about YouTube’s likeness to Mos Eisley. That’s what makes Thompson’s answer—that comments are both good and bad for journalism—so unsatisfying. The answer depends on the platform, the issue, the community, and a whole host of other factors. Still, it’s an important question to ask.

A similarly important question is the extent to which platforms are responsible for the content their commenters post. This is particularly salient when considering racist, sexist, homophobic, and transphobic comments, as well as comments designed to threaten and intimidate other users. Is it the platform’s job to intervene when conversations veer into hateful territory? Or should platform moderators encourage self-regulation and allow community members to establish their own behavioral standards?

In a post written for The Awl, my co-author Kate Miltner and I considered this question in relation to Reddit’s moderation policies (or lack thereof). We reached no consensus (I am a proponent of strong moderation and an unapologetic fan of the banhammer, while Kate worried about the slippery slope of censorship), but we both agreed that questions about hateful online comments necessarily raise questions about the platforms that host them.

Of course, the most prominent issue that comment moderation raises is that of free speech—ironic, considering that comment moderation, even comment disabling, is not a First Amendment issue. Not only do conversations about free speech often overlook this very basic point; they may also contribute to the underlying problem.

First, a quick rundown of what the First Amendment actually says. In addition to prohibiting the establishment of a national religion and protecting religious exercise, freedom of the press, peaceful assembly, and the right to petition the government, the First Amendment states that Congress shall make no law abridging the freedom of speech—with a few significant exceptions carved out by the courts. Essentially, then, free speech is about protecting citizens from the government, not ensuring that citizens can say anything they want, whenever they want, under any circumstances. So the idea that platform moderation constitutes a violation of a poster’s free speech is, in a word, nonsense (note: this is not to say that free speech violations don’t and can’t occur online, but rather that the moderation of comments on privately held websites is not one of them).

But it’s not the legal sense of the term that’s the problem; it’s the colloquial sense—the idea that people should be able to say whatever they want on the Internet, even if, maybe even especially when, what they have to say is antagonistic or otherwise obnoxious, because … free speech. This argument is a textbook case of circular logic; for an example, see Storify CEO Xavier Damman’s reaction to several female users who had complained about on-site harassment.

Running alongside this basic assertion—that people should be able to say whatever they want on the Internet, because people should be able to say whatever they want on the Internet—is the assumption that it’s everyone else’s job to sit back and deal with it. This line of reasoning typically goes something like: well, if the things people say offend you so much, then don’t read the comments section, or don’t come online at all. And if you can’t handle a little heat, then don’t bother commenting. It’s a person’s God-given right to fling poo all over the comments section; this is America. Also, welcome to the Internet.

The kind of speech most likely to be defended by this line of reasoning is speech that is bigoted and antagonistic, largely toward women and other historically underrepresented groups (note the infrequency with which women and people of color use the “…but but FREE SPEECH” defense in a debate, whether online or off). Free speech in the colloquial Internet sense, particularly as it’s used in the context of comment moderation, almost always justifies or outright apologizes for a typically male, typically white aggressor. It is a concept that frames freedom as being free to harass others, not freedom from being harassed, or simply from being exposed to harassment (which often amounts to the same thing).

Unlike discussions of the ideal relationship between author and commenter, or the extent to which platforms are responsible for protecting their readers from harassment, concerns over “free speech” are unlikely to precipitate thoughtful conversations about best moderation practices. In fact, by actively latching onto “free speech” as a behavioral ideal, platforms inadvertently privilege the aggressor and pathologize readers—readers who, for some strange reason, don’t like wading through a tsunami of antagonistic bullshit every time they scroll through a comments section.

If these platforms really do value their most antagonistic commenters, and would prefer to scare away thoughtful contributors and proudly host the most incendiary non-conversations possible, then fine: free speech in the colloquial sense works great. Thumbs up! But if the goal is to encourage actual discourse within a diverse community, platforms would be wise to think beyond “free speech” and take some measures to privilege those who are there to have a conversation, not a firefight.

Whitney Phillips is a media and Internet studies scholar who received her Ph.D. from the University of Oregon in 2012. Her work has appeared in journals such as Television & New Media and First Monday, and she has been interviewed or featured by The Atlantic, Fast Company, and NBC News. She is currently revising her dissertation—which focuses on subcultural trolling—for publication.

Photo by stupid systemus/Flickr

 