
Photo via Inked Pixels/Shutterstock (Licensed)

Bots play a crucial role in the spread of fake news, study finds

People are just as likely to spread news from a bot as from another human.

 

Phillip Tracy

Tech

Posted on Aug 14, 2017   Updated on May 22, 2021, 8:40 pm CDT

A study of 14 million tweets solved the riddle of how fake news spreads on Twitter: social bots.

Researchers at Indiana University studied 400,000 claims made by 122 sites that allegedly publish fake news, such as Breitbart, Infowars, and PoliticusUSA, as well as satirical publications like the Onion. At the same time, they monitored more than 1 million Twitter posts referencing 15,000 stories written by fact-checking sites, including Snopes, Politifact, and FactCheck.

From that data set, researchers looked at which accounts were spreading the news and sampled 200 of their most recent tweets. Using machine learning, they were able to determine whether the account belonged to a human or an automated social bot.

“From this data we extract features capturing various dimensions of information diffusion as well as user metadata, friend statistics, temporal patterns, part-of-speech and sentiment analysis,” the study says. “These features are fed to a machine learning algorithm trained on thousands of examples of human and bot accounts.”
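To make that pipeline concrete, here is a minimal, hypothetical sketch in Python of a supervised bot classifier of the sort the study describes: per-account features are fed to a model trained on labeled human and bot examples. The feature names, the synthetic values, and the choice of a random forest are illustrative assumptions, not the study's actual Botometer code or data.

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

def synth_accounts(n, is_bot):
    # Hypothetical per-account features: account age (days), follower/friend
    # ratio, tweets per hour, mean sentiment score, fraction of retweets.
    if is_bot:
        return np.column_stack([
            rng.uniform(1, 200, n),      # young accounts
            rng.uniform(0.1, 1.5, n),    # few followers relative to friends
            rng.uniform(5, 60, n),       # very high posting rate
            rng.uniform(-0.2, 0.2, n),   # flat sentiment
            rng.uniform(0.7, 1.0, n),    # mostly retweets
        ])
    return np.column_stack([
        rng.uniform(100, 4000, n),
        rng.uniform(0.5, 5.0, n),
        rng.uniform(0.1, 5.0, n),
        rng.uniform(-1.0, 1.0, n),
        rng.uniform(0.0, 0.6, n),
    ])

# Label 1 = bot, 0 = human; train on a synthetic stand-in for the
# "thousands of examples of human and bot accounts" the study mentions.
X = np.vstack([synth_accounts(500, True), synth_accounts(500, False)])
y = np.concatenate([np.ones(500, dtype=int), np.zeros(500, dtype=int)])

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)
print("held-out accuracy:", clf.score(X_test, y_test))
print("estimated P(bot) for one unseen account:", clf.predict_proba(X_test[:1])[0, 1])

In the real system the output is a bot-likelihood score for each account rather than a hard label; the predict_proba call above plays that role in this toy version.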

The algorithm, called Botometer, found that social bots play a key role in the spread of fake news.

“Relatively few accounts are responsible for a large share of the traffic that carries misinformation,” the researchers write. “These accounts are likely bots, and we uncovered several manipulation strategies they use.”

Researchers believe these strategies are why Twitter bots are so effective. First, bots amplify fake news in its early stages, long before it goes viral. Then they target individual users through replies and mentions instead of writing broad posts or retweeting, which increases the chances that a post will go viral by injecting fake news directly into a closely connected human network. Finally, bots disguise themselves as human by changing their geographic location. These manipulations are largely why people spread false news from bots just as readily as from other humans, according to the study.

While the study only looked at Twitter, researchers believe other social platforms are just as vulnerable. Conspiracy theories also circulate on Facebook through automatically managed accounts, and they can spread just as quickly as real news. Unfortunately, the increasing popularity of ephemeral platforms like Snapchat and Sarahah makes studying the spread of fake news nearly impossible.

Still, researchers are confident their findings will help social platforms make changes to curb the spread of false information, even if the best route isn't yet clear. The study presents two possible solutions: using machine learning algorithms to detect and shut down bots, or deploying CAPTCHAs, a proven method for distinguishing between a human and a machine.

H/T MIT Technology Review

*First Published: Aug 14, 2017, 10:00 am CDT