
Don’t feed the trolls? It’s not that simple

Practical advice on how to actually deal with online aggression—and why the standing advice, “Don’t feed the trolls,” just doesn’t cut it.  




THE ANTI-SOCIAL WEB: As we see from the ubiquity of online harassment, shaming, and trolling, the social web is as much about ambiguity and antagonism as it is about sharing, connection, and cooperation. In fact, the Web’s most obviously anti-social behaviors—including trolling and shaming—are, strangely enough, also its most social. This series, The Anti-Social Web, will explore this overlap to look at various aspects of social behavior online, from “good” to “bad” and all shades in between. Guest curated by Whitney Phillips.


In my last post, I talked about the history of the term “troll” and argued against its use as an online behavioral catch-all. Instead, I suggested, we should think of trolling as a subset of online aggression, rather than the category itself—in order to help foster more targeted responses to problematic online behaviors. 

But this sort of semantic hair-splitting isn’t particularly helpful when confronted by an online antagonist. Regardless of how you might describe the interaction—trolling, harassment, bullying—what should you actually do? What can you do?

In many online circles, the most common piece of advice is “Don’t feed the troll(s),” which serves as both response to and apology for all kinds of online antagonism.

Under this logic, trolls are like great white sharks and their target’s reactions like chum: the more you throw, the more worked up the shark will get (and the more likely it is that other sharks will smell the blood in the water and come join the party). Stop throwing chum, and eventually the shark will lose interest and leave.

Initially, “Don’t feed the trolls” seems like a perfectly reasonable response to nasty online behavior. Despite its surface appeal, however, the command raises more issues than it solves—and not just because it brings us right back to the problem of definitions.

First of all, “don’t feed the trolls” frames conversations about aggressive online behaviors solely in terms of the aggressor. Even if a person avoids feeding the trolls (and/or the person accused of trolling), he or she is still playing into the aggressor’s hands. It’s the aggressor’s game and the aggressor’s rules; the target (I prefer “target” over “victim,” since target establishes that a person has been singled out, but doesn’t imply helplessness) is little more than a plaything. 

Even more insidiously, the imperative not to feed the trolls (again, using that term loosely) places the blame for whatever unpleasantness ensues squarely at the target’s feet. If only the target hadn’t fed the trolls, the argument goes, the trolls wouldn’t have done what they did! And really, it’s kind of the target’s fault for doing something stupid on the Internet; maybe next time they’ll think twice before posting/doing/saying that sort of thing. In short, the targets—and not the trolls themselves—are the root cause of the trolls’ behaviors. The ultimate message: Don’t get trolled.

This sort of victim-blaming rhetoric—precisely the rhetoric we see in the wake of rape cases—is damaging wherever it occurs. On the Internet, the mentality of “don’t feed the trolls” represents the acceptance of a certain set of arbitrary assumptions about how people “should” behave on the Internet (such as: you shouldn’t take anything seriously; you shouldn’t post things under your real name; you shouldn’t express emotion; you shouldn’t be hanging out where trolling is likely to happen). Abusive behaviors are justified as being “appropriate” punishment for breaking whatever made-up rule. 

This is why I reject the premise “Don’t feed the trolls”: it gives all the power to the troll (or the person accused of trolling) and blames the victim for the aggressor’s actions. 

That isn’t to say that the basic underlying message of “Don’t feed the trolls” should be entirely abandoned. But we need to re-frame the message to reflect the target’s agency, and to encourage active resistance to online aggression. 

Instead of agreeing not to feed the trolls, thereby accepting the terms of the antagonist’s game, the target should be encouraged to respond with his or her own game—a game called Ruining This Asshole’s Day. 

The first and most basic way to play Ruin This Asshole’s Day is to shut them down, ideally by unceremoniously deleting their comments. (This presumes that the target has some control over the posted content, and that the target can keep up with incoming comments, which isn’t always the case and immediately raises a nest of questions about best moderation practices—a conversation for another day.) This shouldn’t be done passively, as an act of acquiescence, but actively, as an exertion of power—specifically the one-two punch of a raised eyebrow and extended middle finger.

To an online antagonist with nothing better to do than spend his or her days being a nuisance, this sort of silence is deadly. Because whatever their motivations (boredom, bigotry, a warped sense of humor), and however they might self-identify (as a troll, as a soldier of God, as a concerned citizen), they all want one basic thing—an audience. If they didn’t, they would spend their time emailing insults to themselves. Where’s the fun in that? That is why forced impotence can be such a powerful punitive tool.  

Of course there are exceptions; sometimes silence isn’t enough, and in fact isn’t appropriate. Sometimes assholes need to be called out—and yes, maybe even shamed—not just because the antagonist in question crossed an ethical line but also to send the message that this sort of bigotry or aggression or general unpleasantness will not be tolerated. In these cases, an individual or community might choose to call attention to the antagonists’ problematic behavior(s), thus bringing unwanted attention to the aggressor(s) and his or her actions, or might go a step further and publicize the offline identity of the guilty parties, thus forcing the aggressor to face real-world consequences. 

Counter-trolling measures are another option. In these cases, the intended targets turn the rhetorical tables on their aggressors, either by exploiting the aggressor’s perceived weaknesses (is the aggressor a rabid homophobe? then in all likelihood he will not respond positively to the assurance that it gets better, as was George Takei’s approach when responding to the anti-gay bleatings of disgraced former Arkansas school board member Clint McCance); or by encouraging the aggressor to keep talking, interjecting only to frustrate, confuse, or goad him into making increasingly outrageous statements—the end goal of which is to see the would-be aggressor scurry back into the shadows with his tail planted firmly between his legs.

However one chooses to respond, it is critical to emphasize the target’s—and not the antagonist’s—agency, and to educate people on the choices available to them. Not that any perfect solution exists; these problems aren’t easily solved, particularly when online abuse has a group dimension, or when the abuse occurs on unmoderated or poorly moderated platforms (take, for example, this recent case, in which three Chicago teens allegedly raped a 12-year-old girl and then posted the video to Facebook, or any of the dizzying number of misogynist horror stories coming out of Reddit—some thoughts on Reddit’s unwillingness to stamp out these sorts of behaviors here). In these more extreme cases, where emotional and physical harm is archived and spread across the Internet, active pushback—whether in the form of antagonistic silence or shaming or counter-trolling—may not be enough, and in fact may backfire.

But taking the time to interrogate how we talk about online antagonism—starting with the ubiquitous and highly problematic phrase “don’t feed the trolls”—is a good place to start. These sorts of conversations provide a solid foundation onto which practical solutions may be built, and more importantly, empower the targets of online aggression to take active steps against their would-be aggressors. After all, the Internet does not belong to the assholes. Our language should reflect that.    

Whitney Phillips is a media and Internet studies scholar who received her Ph.D. from the University of Oregon in 2012. Her work has appeared in journals such as Television and New Media and First Monday, and she has been interviewed or featured in the Atlantic, Fast Company, and NBC News. She is currently revising her dissertation—which focuses on subcultural trolling—for publication.

Art remix by Fernando Alphonso III

The Daily Dot