According to a vocal subset of people, the internet’s longstanding, pernicious problem of child abuse imagery being shared online has been solved by one person over the course of just a couple of days.
Fans of Elon Musk claim that the tech visionary has single-handedly eliminated child sexual abuse material (CSAM) on Twitter, with tweets and articles declaring victory against one of the internet’s longstanding issues.
Those tweets and claims come without much verification (an admittedly difficult process to undertake when dealing with CSAM); they rest instead on faith in Musk’s ability to do whatever he says, despite his long history of failing to follow through on his promises.
The issue of child sexual abuse material has long plagued social media platforms, with 29.3 million images of child abuse removed across the internet last year, according to the National Center for Missing and Exploited Children (NCMEC), a non-profit organization that tracks the issue of child abuse and coordinates with social media platforms on how to tackle the problem.
According to supporters of Musk, he’s banned several hashtags relating to CSAM, and deleted accounts for “pedophiles.”
But according to experts who study the problem, the issue is by no means anywhere near solved.
Banning a hashtag doesn’t accomplish much on its own, not least because users who want to communicate with others will simply shift their conversations to an associated or different hashtag. And nuking accounts that right-wingers say belong to “pedophiles” offers little proof that actual, problematic material has been eliminated.
“The problem is much more complicated than just a few hashtags,” says Jess Maddox, assistant professor at the University of Alabama. “For Musk’s supporters, who have always steadfastly believed he can do no wrong, touting and sharing this maneuver is evidence of his success. And this is the problem. Instead of focusing on victims, the conversation shifts to Musk’s savior narrative instead.”
Twitter has long had issues with images of child abuse. In September, prior to Musk’s purchase of the platform, companies advertising on Twitter complained about their products being shown alongside images of child exploitation.
What Musk appears to have done is ban several hashtags used by those looking to share inappropriate and illegal images of child abuse and by those perpetuating child sexual exploitation (CSE) on the platform.
The entrepreneur, who took over Twitter a month ago, tweeted that the issue was “Priority #1” on his list of must-do actions after purchasing the platform.
One child safety campaigner, and longtime fan of Musk, praised him for “virtually” eliminating from the platform the three most commonly used hashtags for those selling and sharing images of child abuse.
Outside of their claim, there doesn’t appear to be much evidence Musk has made a dramatic shift. And it’s not as simple as that, suggest experts in the field.
“Solving CSAM [child sexual abuse material] is one of the biggest online safety challenges ever, and that needs investment, teams of specifically trained—and professionally, psychologically supported—human moderators and collaborations with enforcement, and not layoffs,” said Carolina Are, an Innovation Fellow at Northumbria University researching the intersection between online abuse and censorship.
The oversimplification of one of the thorniest, ugliest issues to blight social media platforms into an easily solved problem does a disservice to those who are victims of abuse, said Maddox.
“This is just another moment in Musk’s chaotic takeover of Twitter in which he believes he alone can fix it, something championed by many of his supporters,” she said. “Musk’s bizarre tweets regarding removing child abuse for Twitter are simultaneously underscored and complicated by the fact he allegedly let go most of the site’s content moderators—who could actually aid in the removal.”
Melissa Ingle, a former senior data scientist at Twitter until she was let go as part of the company’s layoffs earlier this month, said that the layoffs undo Twitter’s ability to enforce its policies to deal with child sexual exploitation. “The suggestion that Twitter was facilitating child abuse is disgusting, and part of the same lie that there is a cabal of high-ranking Democrat leadership abusing kids at Comet Ping Pong or more recent lies coming out of QAnon,” she said.
In apparently banning a hashtag while also cutting moderators, Musk makes it just as likely that a new forum will develop on the site, one that can’t be as easily reviewed. He has made no commitment to increase staffing to keep such content off the site.
Twitter has long had a zero-tolerance policy towards CSAM, according to the platform’s own documentation. However, in reality, pre-Musk, Twitter struggled to tackle the problem. A February 2021 internal report, first obtained by The Verge, suggested that “While the amount of [CSAM] online has grown exponentially, Twitter’s investment in technologies to detect and manage the growth has not.”
The platform was also singled out by NCMEC in an amicus brief for ignoring “obvious” and “graphic” child sexual abuse material. The company has since tried to tackle the issue, but not at the pace needed.
A number of memes contrasting Musk’s Twitter with the old version of it explicitly brand former Twitter employees as complicit in the spread of CSAM.
Some have even claimed that the outrage over Musk’s new version of Twitter is centered around this supposed crackdown.
Musk appears to have taken a major step, but he has a history of overpromising and underdelivering—see his offers to rescue the Thai soccer team, his promise of ventilators during COVID-19, and everything about the Boring Company. So it’s not surprising that he would make a big, bold move when first taking over Twitter.
But whether he’ll put in the effort to continue to police the platform year after year remains to be seen.