Article Lead Image

Evan El-Amin/Shutterstock, leoblanchette/GettyImages (Licensed) | Remix by Jason Reed

How pro-Trump Twitter bots are still manipulating the 2016 conversation

'Anyone with capital ... can build bots that are quite successful at manipulating a conversation.'

 

Aaron Sankin

Tech

Posted on Nov 7, 2016   Updated on May 25, 2021, 3:35 pm CDT

During the third and final presidential debate of the 2016 election, Twitter was flooded with jokes about nasty women and bad hombres. Political pundits, both professional and amateur, battled it out on the field of ideas—which, at this point in the national political discourse, consisted of a lot of juvenile name-calling. It was alternately chaotic and inspiring, soul-crushingly horrible and a window into the very beating heart of American democracy.

Mainly, it was full of robots programmed to send out a stream of messages with a single, unified goal: helping Republican nominee Donald Trump become president of the United States.

According to a recent study released by a trio of researchers at Hungary’s Corvinus University, Oxford University in the U.K., and the University of Washington in Washington state, hashtags relating to the final presidential debate were flooded with tweets from Twitter bots—automated computer programs designed to tweet using predetermined scripts.

Collecting data on some 6 million tweets that included dozens of election-related hashtags, ranging from ones popular with conservatives (such as #AmericaFirst, #MakeAmericaGreatAgain, and #hillaryshealth) to those popular with liberals (like #ImWithHer, #ClintonKaine, and #DirtyDonald) and neutral hashtags (#Debates2016, #Debates, #Election2016), the researchers found that over one-quarter of all tweets using those hashtags around the time of the debate were likely the work of bots.
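The study’s code isn’t reproduced here, but the basic data-collection step it describes is straightforward: sort collected tweets into camps based on which tracked hashtags they carry. The sketch below is illustrative only; the hashtag lists are just the examples named above, not the researchers’ full set, and the function name and data layout are assumptions.

```python
# Example hashtags from the study's description; the researchers tracked dozens more.
PRO_TRUMP = {"#americafirst", "#makeamericagreatagain", "#hillaryshealth"}
PRO_CLINTON = {"#imwithher", "#clintonkaine", "#dirtydonald"}
NEUTRAL = {"#debates2016", "#debates", "#election2016"}

def classify_tweet(hashtags):
    """Assign a tweet to a camp based on which tracked hashtags it uses (hypothetical helper).
    If a tweet carries hashtags from more than one camp, the first match below wins."""
    tags = {h.lower() for h in hashtags}
    if tags & PRO_TRUMP:
        return "pro-trump"
    if tags & PRO_CLINTON:
        return "pro-clinton"
    if tags & NEUTRAL:
        return "neutral"
    return "untracked"

# Usage: classify_tweet(["#Debates2016", "#ImWithHer"]) returns "pro-clinton"
```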

“Anyone with capital—temporal, social, or financial—can build bots that are quite successful at manipulating a conversation.”

While the researchers found bot activity pushing narratives in favor of both Democratic nominee Hillary Clinton and Trump, the distribution was far from equal. Automated pro-Trump accounts outpaced pro-Clinton accounts by a ratio of seven-to-one. The volume of bots sending out pro-Trump messages was so great that automated accounts accounted for nearly half—46.7 percent, to be exact—of all pro-Trump content sent on Twitter that evening.

Of the neutral and pro-Clinton tweets sent about the debate, bots represented a significantly smaller share of the overall volume. The researchers found that bots accounted for 30.8 percent of the neutral debate traffic and 10.4 percent of pro-Clinton chatter.

In total, 27 percent of all debate tweets were identified as likely coming from bots. That number has gradually crept upward since the first debate, when bots accounted for 23 percent of debate tweets. However, the most significant difference between how Trump’s robotic social media army approached the final debate and its earlier performances was that the flurry of tweets began earlier in the day, well before the debate started, instead of largely running concurrently with the event itself.

In addition to operating at a much smaller scale, the pro-Clinton Twitter bots also tended to tweet substantively different content. Clinton bots were far more likely to be exclusively dedicated to circulating material that bolstered their candidate rather than attacking her opponent. Many of those bots, like the new wave of bots designed to waste the time of conservative Twitter trolls by drawing them into unwinnable arguments with robotic antagonists, are more benignly satirical in nature.

Determining what is or isn’t a bot is tricky and isn’t getting any easier. The researchers relied on signals like tweet frequency and semantic variation between messages: an account tweeting over 50 times a day is a big red flag (an account tweeting 1,500 times a day is a giant one), as is one that rarely varies its sentence structure. “Highly automated accounts—the accounts that tweeted 200 or more times with a related hashtag and user mention during the data collection period—generated close to 25 percent of all Twitter traffic about the presidential debate,” the report notes.
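Those thresholds translate into a simple first-pass filter. The sketch below is a rough illustration of that kind of heuristic, not the study’s actual pipeline; the constants, function name, and tweet-record layout are assumptions for the example.

```python
from collections import Counter

# Illustrative thresholds based on the heuristics described above.
DAILY_TWEET_RED_FLAG = 50          # more than 50 tweets a day is suspicious
HIGHLY_AUTOMATED_THRESHOLD = 200   # 200+ hashtag-plus-mention tweets over the collection period

def flag_likely_bots(tweets, collection_days):
    """tweets: iterable of dicts like {"user": str, "hashtags": [...], "mentions": [...]}
    gathered over `collection_days` days (hypothetical record layout)."""
    total_by_user = Counter()
    hashtag_mention_by_user = Counter()

    for tweet in tweets:
        total_by_user[tweet["user"]] += 1
        # Count tweets that pair a tracked hashtag with a user mention
        if tweet["hashtags"] and tweet["mentions"]:
            hashtag_mention_by_user[tweet["user"]] += 1

    flagged = set()
    for user, total in total_by_user.items():
        if total / collection_days > DAILY_TWEET_RED_FLAG:
            flagged.add(user)
        if hashtag_mention_by_user[user] >= HIGHLY_AUTOMATED_THRESHOLD:
            flagged.add(user)
    return flagged
```

Accounts caught by a filter like this would still need the kind of human review described below before being confidently labeled bots.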

As Twitter bots become more sophisticated and are programmed to avoid easy detection, however, picking out bots becomes a more labor-intensive practice that requires a human being to scroll through a possible bot’s timeline and render a qualitative judgment. Other bots go back and forth between being operated by a computer program and being actively managed by a human, making identification even more difficult. For this study, that likely means the proportion of bots in the debate conversation was even higher than what the researchers estimated.

Representatives from Twitter did not respond to a request for comment.

Sam Woolley, a researcher at the University of Washington who co-authored the study, noted that, while his team hasn’t done a large-scale network analysis of bots supporting Trump, he has identified two primary networks.

“A pretty big one is coming from the alt-right and 8chan, 4chan folks,” he said. “A lot of that are inter-connected bots tasked both with sending out pro-Trump information, but more often sending out anti-Hillary information. A lot of the stuff those accounts tweet are memes that are about conspiracies to do with Hillary, like her health or that she has an earpiece in during debates.”

The goal of this botnet is to use the amplification potential of thousands upon thousands of bots to make certain hashtags trend and then flood those hashtags with memes friendly to Trump and/or hostile to Clinton.

Woolley identified a second large network of bots, this one based in Russia. He noted that many of the bots reporter Adrian Chen identified as being part of a Russian online propaganda network, dubbed “The Agency,” have shifted to discussing the U.S. presidential election and do so in a way that bolsters Trump’s candidacy.

There are also a number of smaller botnets that appear to be running independently. What’s interesting about these botnets, Woolley noted, is that many of them tend to shift back and forth between posting political and advertising content. These botnets are likely operated by online marketing firms and are either occasionally taking commissions from someone backing the Trump campaign or using the popularity of pro-Trump conversations to organically build an audience for bots that can later be used to pitch products.

As noted in a recent BuzzFeed story about how Macedonian teenagers are taking advantage of Facebook’s proclivity to spread political news stories even if those stories are completely fake, internet users in the U.S. are generally far more valuable to advertisers than people living overseas. If a bot can use Trump memes to build a following of people in the U.S., its value as a marketing tool is substantially increased.

Organized networks of Twitter bots made an impact on the election well before the big debates. Earlier this year, Patrick Ruffini, co-founder of the right-leaning political analytics firm Echelon Insights, identified a network of pro-Trump Twitter bots engaging in coordinated campaigns to do things like urge Trump supporters to file complaints with the FCC about robocalls originating from the campaign of Trump’s GOP primary rival, Texas Sen. Ted Cruz.

“That’s the nature of Twitter, anyone with capital—temporal, social, or financial—can build bots that are quite successful at manipulating a conversation,” Woolley said. “The responsibility will be not only on the U.S. government to start thinking of how it’s going to manage this type of automation when it comes to political conversations, but also campaign finance, harassment and hate speech.”

The onus is on Twitter, too, said Woolley, which must “think about how it’s going to identify bots or not identify them and work to manage ones that are doing things that are a bit more nefarious, rather than putting it on the user to identify and report malicious accounts.”

*First Published: Nov 7, 2016, 4:40 pm CST