Researchers at Indiana University studied 400,000 claims made by 122 websites known for publishing fake or misleading news, including Breitbart, Infowars, and PoliticusUSA, as well as satirical publications such as the Onion. At the same time, they monitored more than 1 million Twitter posts referencing 15,000 stories written by fact-checking sites, including Snopes, Politifact, and FactCheck.
From that data set, researchers looked at which accounts were spreading the news and sampled 200 of their most recent tweets. Using machine learning, they were able to determine whether the account belonged to a human or an automated social bot.
“From this data we extract features capturing various dimensions of information diffusion as well as user metadata, friend statistics, temporal patterns, part-of-speech and sentiment analysis,” the study says. “These features are fed to a machine learning algorithm trained on thousands of examples of human and bot accounts.”
The algorithm, called Botometer, found that social bots play a key role in the spread of fake news.
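The classification step described above can be illustrated with a toy sketch. This is not the study's actual pipeline or Botometer's code; the feature names and example values below are hypothetical, and a simple nearest-centroid rule stands in for the real machine learning algorithm trained on thousands of labeled accounts.

```python
# Toy sketch of bot-vs-human account classification.
# Hypothetical features per account: (tweets_per_hour, follower_friend_ratio, reply_fraction)
from statistics import mean

# Made-up training examples: labeled human and bot accounts.
human_examples = [(0.4, 1.2, 0.30), (0.7, 0.9, 0.25), (0.2, 1.5, 0.40)]
bot_examples   = [(9.0, 0.1, 0.80), (12.0, 0.05, 0.90), (7.5, 0.2, 0.70)]

def centroid(rows):
    """Mean of each feature column across the labeled examples."""
    return tuple(mean(col) for col in zip(*rows))

def classify(account, human_c, bot_c):
    """Label an account by whichever centroid it is closer to
    (squared Euclidean distance)."""
    dist = lambda a, c: sum((x - y) ** 2 for x, y in zip(a, c))
    return "bot" if dist(account, bot_c) < dist(account, human_c) else "human"

human_c, bot_c = centroid(human_examples), centroid(bot_examples)

# A high-volume, reply-heavy account lands near the bot centroid.
print(classify((10.0, 0.1, 0.85), human_c, bot_c))  # prints "bot"
print(classify((0.5, 1.0, 0.30), human_c, bot_c))   # prints "human"
```

A real classifier would use hundreds of features, as the study's quote suggests, and a trained model rather than centroids, but the shape of the problem is the same: turn each account's recent activity into a feature vector and compare it against labeled examples.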
“Relatively few accounts are responsible for a large share of the traffic that carries misinformation. These accounts are likely bots, and we uncovered several manipulation strategies they use.”
Researchers believe these strategies are why Twitter bots are so effective. First, bots amplify fake news in its early stages, long before it goes viral. Then they target individual users through replies and mentions, rather than broad posts or retweets, which increases the chance a post will go viral by injecting fake news directly into closely connected human networks. Finally, bots disguise themselves as human, for instance by changing their stated geographic location. These manipulations are largely why, according to the study, people spread false news from bots just as readily as from other humans.
While the study only looked at Twitter, researchers believe other social platforms are just as vulnerable. Conspiracy theories are also spread on Facebook by accounts managed automatically and can spread just as fast as real news. Unfortunately, the increasing popularity of ephemeral platforms like Snapchat and Sarahah makes studying the spread of fake news nearly impossible.
Still, researchers are confident their findings will help social platforms curb the spread of false information, even if it isn't clear which route to take. The study presents two possible solutions: using machine learning algorithms to detect and shut down bots, or deploying CAPTCHAs, a proven method for distinguishing between humans and machines.
Phillip Tracy is a former technology staff writer at the Daily Dot. He's an expert on smartphones, social media trends, and gadgets. He previously reported on IoT and telecom for RCR Wireless News and contributed to NewBay Media magazine. He now writes for Laptop magazine.