Photo via Inked Pixels/Shutterstock (Licensed)
People are just as likely to spread news from a bot as from another human.
Researchers at Indiana University studied 400,000 claims made by 122 alleged fake news sites, including Breitbart, Infowars, and PoliticusUSA, as well as satirical publications like the Onion. At the same time, they monitored more than 1 million Twitter posts referencing 15,000 stories written by fact-checking sites, including Snopes, PolitiFact, and FactCheck.
From that data set, researchers looked at which accounts were spreading the news and sampled the 200 most recent tweets from each. Using machine learning, they were able to determine whether an account belonged to a human or an automated social bot.
“From this data we extract features capturing various dimensions of information diffusion as well as user metadata, friend statistics, temporal patterns, part-of-speech and sentiment analysis,” the study says. “These features are fed to a machine learning algorithm trained on thousands of examples of human and bot accounts.”
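The approach the study describes, account-level features fed to a supervised classifier, can be sketched in miniature. Everything below is illustrative, not the researchers' actual system: the feature names (`tweets_per_day`, `retweet_fraction`, and so on), the training data, and the hand-rolled logistic regression are all hypothetical stand-ins, assumed here only to show the shape of the technique.

```python
import math

def sigmoid(z):
    # Numerically stable logistic function.
    if z >= 0:
        return 1.0 / (1.0 + math.exp(-z))
    e = math.exp(z)
    return e / (1.0 + e)

def extract_features(account):
    # Toy stand-ins for the feature classes the study names:
    # temporal patterns, friend statistics, and diffusion behavior.
    return [
        account["tweets_per_day"] / 100.0,
        account["followers"] / max(account["friends"], 1) / 10.0,
        account["retweet_fraction"],
    ]

def train_logistic(X, y, lr=0.1, epochs=500):
    # Plain gradient-descent logistic regression, no external libraries.
    w, b = [0.0] * len(X[0]), 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            p = sigmoid(sum(wj * xj for wj, xj in zip(w, xi)) + b)
            g = p - yi
            w = [wj - lr * g * xj for wj, xj in zip(w, xi)]
            b -= lr * g
    return w, b

def looks_like_bot(w, b, account):
    # Classify an unseen account with the trained weights.
    z = sum(wj * xj for wj, xj in zip(w, extract_features(account))) + b
    return sigmoid(z) > 0.5

# Illustrative training data: the "bots" here tweet constantly
# and mostly retweet, while the "humans" do neither.
humans = [
    {"tweets_per_day": 4, "followers": 300, "friends": 250, "retweet_fraction": 0.2},
    {"tweets_per_day": 8, "followers": 120, "friends": 180, "retweet_fraction": 0.3},
    {"tweets_per_day": 2, "followers": 900, "friends": 400, "retweet_fraction": 0.1},
]
bots = [
    {"tweets_per_day": 300, "followers": 15, "friends": 2000, "retweet_fraction": 0.95},
    {"tweets_per_day": 450, "followers": 40, "friends": 3000, "retweet_fraction": 0.9},
    {"tweets_per_day": 250, "followers": 5, "friends": 1500, "retweet_fraction": 0.99},
]
X = [extract_features(a) for a in humans + bots]
y = [0] * len(humans) + [1] * len(bots)
w, b = train_logistic(X, y)
```

A real system would train on thousands of labeled accounts and far richer features, but the pipeline shape is the same: metadata in, feature vector out, supervised classifier on top.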
The algorithm, called Botometer, found that social bots play a key role in the spread of fake news.
“Relatively few accounts are responsible for a large share of the traffic that carries misinformation,” the researchers wrote. “These accounts are likely bots, and we uncovered several manipulation strategies they use.”
Researchers believe these strategies are why Twitter bots are so effective. First, they amplify fake news in its early stages, long before it goes viral. Then they target individual users through replies and mentions, instead of writing broad posts or retweeting. This increases the chances a post could go viral because it injects fake news directly into a closely connected human network. Finally, bots disguise themselves as human by changing their geographic location. These manipulations are largely why people spread false news from bots just as much as other humans, according to the study.
While the study only looked at Twitter, researchers believe other social platforms are just as vulnerable. Conspiracy theories also circulate on Facebook via automatically managed accounts and can spread just as fast as real news. Unfortunately, the increasing popularity of ephemeral platforms like Snapchat and Sarahah makes studying the spread of fake news nearly impossible.
Still, researchers are confident their findings will help social platforms curb the spread of false information, even if it isn’t clear what route to take. The study presents two possible solutions: using machine learning algorithms to detect and shut down bots, or deploying CAPTCHAs, a proven method for distinguishing between humans and machines.
Phillip Tracy is a former technology staff writer at the Daily Dot. He's an expert on smartphones, social media trends, and gadgets. He previously reported on IoT and telecom for RCR Wireless News and contributed to NewBay Media magazine. He now writes for Laptop magazine.