Almost everything you read on Twitter about the Boston bombing was a lie
Twitter was the fastest source of online news about the April 15, 2013, Boston Marathon bombings, with witnesses tweeting from the scene and reports posted within seconds whenever new information hit police scanners.
It was also one of the least accurate.
A group of researchers from IBM Research Labs in Delhi, India, has published a study of the tweets that circulated in the wake of the bombings. Its most unnerving conclusion: of the 20 most popular tweets (drawn from 7.8 million collected), 29 percent were inaccurate rumors and “fake content.”
The rest was little better, Aditi Gupta, Hemank Lamba, and Ponnurangam Kumaraguru wrote in their report (PDF): 51 percent consisted of “generic opinions and comments,” while a scant 20 percent contained accurate, factual information.
So much for getting your information from Twitter during a crisis.
“We found that [a] large number of users with high social reputation and verified accounts were responsible for spreading the fake content,” the researchers said.
The signal-to-noise ratio on Twitter, low enough on a normal day, drops even further during a crisis. Part of the problem is the rumor mill: users with Twitter credibility but poor news judgment want to seem relevant, and in the wake of a tragic event they are more susceptible to rumor, as are their readers.
An example of a tweet containing an inaccurate rumor was one from @HopeForBoston, which said, “R.I.P. to the 8 year-old boy who died in Boston’s explosions, while running for the Sandy Hook kids. #prayforboston”
The explosions caused plenty of real casualties, but that was not one of them.
Another part of the problem, however, is opportunism. Spam merchants, trolls, and other sham accounts used fake news updates to promote phishing attacks and malware injections.
“We identified over six thousand (fake) user profiles,” the researchers wrote. “(W)e observed that the creation of such profiles surged considerably right after the blasts occurred.” According to the study, 75 percent of this fake content was spread via mobile devices.
In the five days after the blasts, 31,919 new accounts were created that each posted at least one tweet about the bombings. Two months later, Twitter had deleted almost 20 percent of them, likely as spam.
Because so much damage can be done with false information after a disaster, the authors of the report recommend that “solutions and algorithms built need to be able to solve and detect such content in real-time. Post-analysis may be able to capture concerned criminals, but would not be able to contain the damage.”
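To give a flavor of what such real-time detection might involve, here is a toy sketch in Python. It is not the researchers’ method: the `Tweet` fields, thresholds, and weights are all hypothetical, loosely inspired by the signals the study highlights (freshly created accounts, low social standing, link-laden spam).

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class Tweet:
    text: str
    account_created: datetime  # when the posting account was registered
    followers: int
    verified: bool

def suspicion_score(tweet: Tweet, now: datetime) -> float:
    """Toy heuristic (hypothetical weights): higher score = more suspect."""
    score = 0.0
    # The study found fake-profile creation surged right after the blasts,
    # so a very young account is a warning sign.
    if now - tweet.account_created < timedelta(days=7):
        score += 0.5
    # Throwaway spam accounts tend to have almost no followers.
    if tweet.followers < 10:
        score += 0.3
    # Links in crisis tweets were used for phishing and malware.
    if "http://" in tweet.text.lower():
        score += 0.2
    return score

now = datetime(2013, 4, 20)
spammy = Tweet("Donate now http://bit.ly/x", datetime(2013, 4, 16), 2, False)
print(suspicion_score(spammy, now))
```

A real system would need far richer features and machine learning rather than fixed rules, but even this sketch shows why the researchers’ point about timing matters: every signal above can be computed the instant a tweet arrives, with no post-hoc analysis.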
It’s hard to argue with such a suggestion. The technically detailed work of Gupta, Lamba, and Kumaraguru may help to push Twitter toward finding such solutions.