
Battlebots: How Reddit and Twitter’s fake accounts stack up

More and more, the bot-makers ought to turn their attention to what can be done with the clockwork of Twitter itself. 

 

L. Rhodes

Internet Culture

Posted on Nov 25, 2013. Updated on Jun 1, 2021, 1:13 am CDT

If you’re not a regular user of Twitter, it’s possible that your first brush with bots on the microblogging platform came just two months ago, on Sept. 24. That’s when the New Yorker’s Susan Orlean revealed that a popular Twitter account, @Horse_ebooks, was, in fact, an art project orchestrated by a pair of New York conceptualists. It was the first time many high-profile outlets saw fit to report on Twitter bots, and bots have periodically resurfaced in mainstream coverage since then.

Whatever you make of @Horse_ebooks’s conceptual pretensions, the account managed to attract hundreds of thousands of followers, giving it roughly the same level of visibility as the Twitter presences of Salon or NPR. Part of the account’s appeal was the impression that someone had unwittingly left a faulty spambot running. “The incredible free tool that helps,” ran one of its typically clipped tweets, declining to elaborate. “Perish the feeling, my facial gymnastics,” commanded another. Most were sheer nonsense, but they could rack up more than a thousand retweets if they looked especially broken or seemed to back their way into an ersatz insight.

The unmasking of @Horse_ebooks was contentious news to aficionados of procedurally generated text—of whom, it may surprise you to learn, there are quite a few on Twitter. These are people who delight in the synchronicities stumbled upon by a recursive bit of code as it doggedly cuts, pastes, and deforms found text into declarations that range from mock profundity to Dadaesque silliness. For many who didn’t already suspect the ruse, the magic of @Horse_ebooks was ruined by the knowledge that its spam-like doggerel had, at least since February 2011, been churned out not by a blind algorithm but as a creative performance imitating machine unintelligence.

Since then, the culture of Twitter bot development has labored in the shadow of an irony: that its best-known example was not, after all, a bot.

Not that the subculture has suffered much. There are numerous well-followed bots, grinding out counterintuitive texts to the pleasure of their audiences. Some mash up texts from wildly disparate sources, like the @MobySchtick bot, which combines tweets from the account of comedian Rob Delaney with text from Moby Dick. (Sample tweet: “It will always amaze me that you can say ‘I’ll have some tomato juice’ in a public restaurant & thy spirit ebbs away to whence it came.”)
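
To see how little machinery a mashup like that requires, here is a rough Python sketch of the splicing step (not @MobySchtick’s actual code): it grafts the front half of one text onto the back half of another. The function name and the half-and-half split are illustrative simplifications.

```python
import random

def mashup(tweet, novel_sentences):
    """Graft the front half of a tweet onto the back half of a randomly
    chosen sentence from the novel. A real bot would also filter the
    results for length and readability before posting anything."""
    sentence = random.choice(novel_sentences)
    tweet_words = tweet.split()
    novel_words = sentence.split()
    head = tweet_words[: len(tweet_words) // 2]
    tail = novel_words[len(novel_words) // 2:]
    return " ".join(head + tail)
```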

Others point out amusing patterns in human-crafted tweets, like the coincidental anagrams collected by @ANAGRAMATRON. Prolific bot-makers like Darius Kazemi and Mark Sample have done much to automate the production of Twitter-sized memes with accounts like @TwoHeadlines (“Syria: Like it or not, we’ll have to talk to Jennifer Lawrence”) and the William Carlos Williams remixer @DependsUponBot. Even The Colbert Report recently launched its own bot, designed by show writer Rob Dubbin to satirize Fox News’ use of alias accounts to counter negative criticism. Finally, there are, as always, the hordes of bots built to spam users with links to porn and pharmaceuticals, or to drive up the number displayed under an account’s “followers” stat.
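
None of those accounts publish their internals here, but the matching trick behind a bot like @ANAGRAMATRON is easy to sketch: strip each tweet down to its letters, sort them into a signature, and watch for two different tweets that share one. The Python below is a minimal illustration under that assumption; the names are placeholders, not the bot’s actual source.

```python
import re

def letter_signature(text):
    """Reduce a tweet to its letters, lowercased and sorted, so that
    anagrams collapse onto the same dictionary key."""
    return "".join(sorted(re.sub(r"[^a-z]", "", text.lower())))

def is_interesting_pair(a, b):
    """Same letters, but not merely the same words shuffled around."""
    return sorted(a.lower().split()) != sorted(b.lower().split())

seen = {}  # signature -> first tweet observed with that signature

def check(tweet):
    """Return a previously seen tweet that anagrams this one, if any."""
    sig = letter_signature(tweet)
    if not sig:
        return None
    earlier = seen.get(sig)
    if earlier and is_interesting_pair(earlier, tweet):
        return earlier  # a candidate pair to queue for posting
    seen.setdefault(sig, tweet)
    return None
```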

Opportunists aside, though, the bot-making community on Twitter is driven largely by the same curiosity about language and computation that gave rise to the digital humanities in academic circles. What’s the process for constructing an interesting statement? For making a joke? For answering a question? The best Twitter bots are built on observations about the patterns that emerge when we talk to—or, at least, at—one another. If we let them talk enough, they sometimes speak to us in surprising and revealing ways, which may explain why so many people are listening.

The bot phenomenon is not unique to Twitter, of course. Digital media is rife with bots, from the scripts used by search engines to map the structure of the Web to the malicious botnets black hats employ to bring down websites with distributed denial of service (DDoS) attacks. Some are good, some bad, and over time every social platform must decide for itself how much automation it will tolerate, as well as how it will combat the intolerable sort.

Reddit, too, has seen its share of bot hijinks. Marketers eager to see their product voted to the front page of the social news site quickly recognized that bots were a surer bet than relying on organic votes. Reddit’s administrators countered with anti-spam measures, then “fuzzed” the votes to keep the bot-makers from catching on when their machinations failed. Yet, at the same time, the site has encouraged users to experiment with bots, making progressive refinements to its application programming interface to allow for more and more automation.

As on Twitter, courting the bot-making community has resulted in a surfeit of novelty. One user created a bot that plays tic-tac-toe in the comments of Reddit threads; another scans the comments for replies with the proper syllable count, then reformats them as a haiku. When it comes to such playful uses, though, the users who moderate subreddits tend to be less tolerant than the site’s administrators. Because Reddit is organized around topical subreddits rather than around a Twitter-style option to follow or unfollow individual users, the line between novelty and spam quickly grows thin. Thus, when bot-generated jokes lose their novelty or exhibit a tendency to derail discussion, the usual remedy is to ban them from more and more subreddits, until there’s no place left for them to play.
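
The haiku bot’s source isn’t reproduced here, but the detection step can be approximated in a few lines of Python: count syllables with a crude vowel-cluster heuristic and check whether a comment’s words fall cleanly into 5-7-5 lines. This is a sketch under those assumptions; a production bot would more likely lean on a pronunciation dictionary for the syllable counts.

```python
import re

def count_syllables(word):
    """Crude heuristic: count runs of vowels, with a small adjustment
    for a trailing silent 'e'. Good enough for a sketch, not for poetry."""
    word = word.lower()
    count = len(re.findall(r"[aeiouy]+", word))
    if word.endswith("e") and not word.endswith(("le", "ee")) and count > 1:
        count -= 1
    return max(count, 1)

def as_haiku(comment):
    """Reflow the comment as 5-7-5 lines, or return None if the words
    don't land exactly on those syllable boundaries."""
    targets = [5, 7, 5]
    lines, current, tally = [], [], 0
    for word in comment.split():
        if len(lines) == 3:
            return None              # words left over after 17 syllables
        current.append(word)
        tally += count_syllables(word)
        if tally == targets[len(lines)]:
            lines.append(" ".join(current))
            current, tally = [], 0
        elif tally > targets[len(lines)]:
            return None              # a word straddles a line boundary
    return "\n".join(lines) if len(lines) == 3 and not current else None
```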

That, in part, may account for the comparatively pragmatic orientation of the site’s bot-making community. While there’s no shortage of automated jokesters roaming the comments, a remarkable number of Reddit bots lend themselves to more useful purposes. Take, for instance, tabledresser, a bot designed to tame the chaos of IAmA threads by collating questions and answers into neatly readable tables. Bitcointip allows Reddit users to transfer bitcoins to one another as tokens of gratitude. ModerationLog ensures community transparency by tracking threads removed by moderators and reposting them to r/ModerationLog.
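
To make the idea concrete, here is a hedged sketch of the collation step behind a bot like tabledresser, written against the modern praw library rather than whatever the bot itself runs: walk the top-level comments of a thread and pair each question with the reply the original poster gave it. The credentials, IDs, and helper names are placeholders.

```python
import praw

# Placeholder credentials; a real bot would use its own registered app.
reddit = praw.Reddit(
    client_id="YOUR_ID",
    client_secret="YOUR_SECRET",
    user_agent="qa-collator-sketch/0.1",
)

def collect_qa(submission_id):
    """Pair each top-level question with the original poster's reply."""
    submission = reddit.submission(id=submission_id)
    submission.comments.replace_more(limit=0)   # flatten "load more" stubs
    op = submission.author
    pairs = []
    for question in submission.comments:        # top-level comments only
        for reply in question.replies:
            if op and reply.author and reply.author.name == op.name:
                pairs.append((question.body, reply.body))
                break
    return pairs

def _clean(cell):
    """Keep multi-line answers from breaking the table markup."""
    return cell.replace("\n", " ").replace("|", "\\|")

def as_markdown_table(pairs):
    """Render the question/answer pairs as a Reddit markdown table."""
    rows = ["| Question | Answer |", "| --- | --- |"]
    for question, answer in pairs:
        rows.append("| {} | {} |".format(_clean(question), _clean(answer)))
    return "\n".join(rows)
```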

What’s most notable about these service-oriented bots is the way in which they allow users to build on the platform. It would, for example, be nearly impossible for small teams of volunteer moderators to manage Reddit’s massive default subreddits if many basic functions—like labeling common submission types and removing frequently reported submissions—were not handled by AutoModeratorBot. That bot came at a make-or-break moment during the site’s growth and has proven so critical to its ongoing success that the user who created it has since been hired on as an administrator. Likewise, many functions that were initially handled by user-created bots have since been integrated into the platform itself, like letting users attach small textual flags, called flair, to their names. The upshot is the formation of communities that would not have been possible were users forced to rely on the original design of the site.

It’s that community-hacking ethos that distinguishes much of the bot-making activity on Reddit from its Twitter counterpart. Not that Twitter altogether lacks useful bots—EQBOT, for example, tracks earthquakes, while tools like Track This let you connect your Twitter account to other digital services. Compared to Reddit, though, there are strikingly few Twitter bots designed to reconfigure the medium itself, letting users connect and interact in ways not initially conceived by the platform’s creators.

In part, that may be because there are comparatively fewer kinks in how Twitter operates. There are no moderators struggling to keep up with thousands of daily submissions, and users can simply block or unfollow any channels they don’t care to see. Reddit has developed a bot-culture of utility largely out of necessity, whereas Twitter’s bot-makers have mostly the raw potential of the platform to drive their efforts. It’s easy to see which is the more compelling of the two.

More and more, the bot-makers ought to turn their attention to what can be done with the clockwork of Twitter itself. The linguistic terrain they’ve explored to this point can be fascinating, yes, and they need not abandon it entirely, but there are social kinks to which they could turn their talents. For example, Twitter’s news accounts desperately need a hack for making retractions as viral as the breaking news they correct, especially in light of the misinformation spread in the wake of the Boston Marathon bombing. A bot that syndicates a correction to the users who retweeted the original news story might be just the solution they need.
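
As a rough illustration of how such a bot might work (a sketch, not a finished design): pull the list of accounts that retweeted the erroneous tweet, then reply to it with the correction, @-mentioning those users a few at a time. The code below uses the tweepy library’s v3-era method names; the API’s 100-retweet cap, its rate limits, and the tweet-length limit are all reasons a real bot would need more care than this.

```python
import tweepy

# Placeholder credentials for the correction account.
auth = tweepy.OAuthHandler("CONSUMER_KEY", "CONSUMER_SECRET")
auth.set_access_token("ACCESS_TOKEN", "ACCESS_SECRET")
api = tweepy.API(auth, wait_on_rate_limit=True)

def syndicate_correction(original_tweet_id, correction_text):
    """Reply to the erroneous tweet, @-mentioning its retweeters in
    small batches so the correction lands in their mentions."""
    retweets = api.retweets(original_tweet_id, count=100)   # API caps this at 100
    handles = ["@" + rt.user.screen_name for rt in retweets]
    for i in range(0, len(handles), 5):                     # a few mentions per reply
        batch = " ".join(handles[i:i + 5])
        # A real bot would also check the combined length against the
        # tweet limit before posting.
        api.update_status(
            status="{} {}".format(batch, correction_text),
            in_reply_to_status_id=original_tweet_id,
        )
```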

Such hacks need not concentrate solely on gaps in the platform, though. Rather, bot-makers could see Twitter’s successes as opportunities for reassembling its features into previously undreamt configurations. Instead of cutting and pasting text, they’d be constructing potential communities out of the raw material of tweets and retweets, follows and favorites. To start down that path, they need only ask themselves what sort of communities might exist on Twitter if only a bot did some of the work for us.

Illustration by Jason Reed
