Sorry, but Twitter needs more than some fact-checking notes to make it a friendly, healthy social network.
Twitter has come under fire over the past few weeks for the way it’s been handling prominent users who incite violence or spread conspiracy theories, namely Alex Jones and the InfoWars brand. In an interview with the Washington Post, Twitter CEO Jack Dorsey shared that the company has been trying to address the issue, and in fact has been rethinking Twitter and some of its core features. Some of those ideas are better than others.
Dorsey said that “the most important thing” Twitter can do is look at the incentives the company builds into its product. “They do express a point of view of what we want people to do—and I don’t think they are correct anymore,” Dorsey told the Washington Post. Dorsey said that the company hasn’t changed its incentives since it first launched 12 years ago, and that it’s basically been trying to Band-Aid problems that are core to the system with policy changes.
While the company has stressed that it’s aiming to foster a healthier social network, it’s clear these policy changes haven’t had the intended effect. One problem is that some of these policies may be too broad to enforce effectively; another is that Twitter seems to repeatedly make exceptions to its policies for prominent users. The result is a lack of trust in the social network, as exemplified by recent confusion over whether it was shadow banning prominent conservative users.
Earlier this week, Twitter finally took some action against Jones after he tweeted a call for followers to get their “battle rifles” ready. Twitter suspended Jones for one week, giving him only read-only access to the social media site. Before that, in an attempt to draw attention to Jones and his behavior on Twitter by making an impact on the company’s bottom line, a number of Twitter users started blocking major advertisers in the app.
In order to fix those flawed incentives at a fundamental level, Dorsey has been rethinking Twitter and proposed a few possibilities that could allow people to share their views without also contributing to the problem of conspiracy theories and misinformation. One is similar to a step YouTube has recently taken: surrounding inaccurate tweets with factual context. For example, Twitter could clearly label parody accounts as such. The problem here, however, is the seemingly insurmountable volume of tweets that would need to be fact-checked on a minute-by-minute basis. And if certain accounts are constantly flagged for spreading misinformation, what value does their presence add to the social network at all? For YouTube, where videos persist for months or years as a searchable resource or entertainment option, this approach makes sense. For Twitter, fact-checking notes on tweets seems like an incredibly challenging undertaking.
Another area Dorsey thought Twitter could improve on is how it handles bots. Its actions against bots have been inconsistent over the years, though Twitter recently began purging the network of many bots and inactive accounts. Dorsey said that Twitter could label known bots as such, in an effort to help users identify which accounts are run by humans and which post automated content. This is certainly a good idea, and one that could help inform Twitter users about the types of accounts they follow. It’s possible this could help stop the spread of disinformation if users realize the source is a bot rather than a human. In April, a Pew Research study found that two-thirds of popular links on Twitter were shared by bots.
If this is all Dorsey has up his sleeve to improve the Twitter experience, it’s clearly not enough. Bad actors on Twitter, prominent or not, continue to make the social network a hell for many of its users, particularly women and minorities. If prominent users can continue to behave poorly with little to no repercussion, it sets a terrible example for the rest of the Twitter population. Labeling bots and adding factual context to inaccurate tweets are steps toward making conspiracy theories and false news less of a problem, but they’re not a solution.
H/T Washington Post