Photo via Inked Pixels/Shutterstock (Licensed)
People are just as likely to spread news posted by a bot as news posted by another human, a new study finds.
Researchers at Indiana University studied 400,000 claims made by 122 websites flagged for publishing false or misleading stories, such as Breitbart, Infowars, and PoliticusUSA, as well as satirical publications like the Onion. At the same time, they monitored more than 1 million Twitter posts referencing 15,000 stories written by fact-checking sites, including Snopes, PolitiFact, and FactCheck.
From that data set, researchers looked at which accounts were spreading the news and sampled 200 of their most recent tweets. Using machine learning, they were able to determine whether the account belonged to a human or an automated social bot.
“From this data we extract features capturing various dimensions of information diffusion as well as user metadata, friend statistics, temporal patterns, part-of-speech and sentiment analysis,” the study says. “These features are fed to a machine learning algorithm trained on thousands of examples of human and bot accounts.”
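The classification step the researchers describe, training a model on labeled examples of human and bot accounts and using it to label new ones, can be illustrated with a toy sketch. Everything here is invented for illustration: the feature names, the data, and the nearest-centroid classifier (a crude stand-in for whatever supervised algorithm the study actually used, which drew on metadata, friend statistics, temporal patterns, part-of-speech, and sentiment features).

```python
import math

# Hypothetical training data: each account is reduced to two invented
# features, [tweets_per_hour, reply_and_mention_ratio]. Real systems
# use hundreds of features and thousands of labeled accounts.
HUMAN_ACCOUNTS = [[0.5, 0.10], [0.8, 0.15], [1.2, 0.20]]
BOT_ACCOUNTS = [[40.0, 0.85], [55.0, 0.90], [48.0, 0.80]]

def centroid(points):
    """Average each feature across a set of labeled accounts."""
    n = len(points)
    return [sum(p[i] for p in points) / n for i in range(len(points[0]))]

def distance(a, b):
    """Euclidean distance between two feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

HUMAN_CENTROID = centroid(HUMAN_ACCOUNTS)
BOT_CENTROID = centroid(BOT_ACCOUNTS)

def classify(features):
    """Label an unseen account by whichever class centroid is nearer."""
    if distance(features, BOT_CENTROID) < distance(features, HUMAN_CENTROID):
        return "bot"
    return "human"
```

A high-volume, reply-heavy account (say, `classify([50.0, 0.88])`) lands near the bot centroid, while a slower, conversational one lands near the human centroid; production tools replace this with far richer features and trained models.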
The detection tool, called Botometer, found that social bots play a key role in the spread of fake news.
“Relatively few accounts are responsible for a large share of the traffic that carries misinformation. These accounts are likely bots, and we uncovered several manipulation strategies they use,” the researchers wrote.
Researchers believe these strategies are why Twitter bots are so effective. First, they amplify fake news in its early stages, long before it goes viral. Then they target individual users through replies and mentions, instead of writing broad posts or retweeting. This increases the chances a post will go viral because it injects fake news directly into a closely connected human network. Finally, bots disguise themselves as human by changing their geographic location. These manipulations are largely why people spread false news from bots just as much as from other humans, according to the study.
While the study only looked at Twitter, researchers believe other social platforms are just as vulnerable. On Facebook, automatically managed accounts also push conspiracy theories, which can travel just as fast as real news. Unfortunately, the increasing popularity of ephemeral platforms like Snapchat and Sarahah makes studying the spread of fake news nearly impossible.
Still, researchers are confident their findings will help social platforms make changes to curb the spread of false information, even if it isn’t clear what route to take. The study presents two possible solutions: using machine learning algorithms to detect and shut down bots, or deploying CAPTCHAs, a proven method for distinguishing between humans and machines.
Phillip Tracy is a former technology staff writer at the Daily Dot. He's an expert on smartphones, social media trends, and gadgets. He previously reported on IoT and telecom for RCR Wireless News and contributed to NewBay Media magazine. He now writes for Laptop magazine.