Earlier this week, President Donald Trump retweeted a series of disturbing anti-Muslim videos, one of which appears to show a teenage boy being beaten to death. The posts predictably sparked outrage, leading users to question why Twitter would allow them to remain on the feeds of the president’s 43 million followers.
Now Twitter has explained why the videos were left up—and it’s only causing more anger.
Earlier this week Tweets were sent that contained graphic and violent videos. We pointed people to our Help Center to explain why they remained up, and this caused some confusion.
— Twitter Safety (@TwitterSafety) December 1, 2017
Twitter previously justified letting Trump post content—even if it clearly broke the company’s poorly-enforced rules—because of a convenient internal policy that protects “newsworthy” posts.
Twitter has changed that stance and now says a video showing a boy getting murdered doesn’t fall under those guidelines. Instead, and just as alarmingly, the company is referring people to its standard media policy.
To clarify: these videos are not being kept up because they are newsworthy or for public interest. Rather, these videos are permitted on Twitter based on our current media policy. https://t.co/RqEQy3skgc
— Twitter Safety (@TwitterSafety) December 1, 2017
The policy offers only a vague definition of what constitutes “graphic violence” or “adult content” and appears carefully worded to avoid requiring the removal of images like those retweeted by Trump. While users are encouraged to report graphic violence, the rules explain that the offending post won’t always be taken down, nor will the account that posted it necessarily be punished.
“Some forms of graphic violence or adult content may be permitted in Tweets when they are marked as sensitive media,” the policy says. “However, you may not include this type of content in live video, or in profile or header images.”
Twitter does say it will remove media with “excessively graphic violence out of respect for the deceased and their families” but only “if we receive a request from their family or an authorized representative.”
All other violent images posted in timelines (excluding live videos) are given a “sensitive tag.”
The media rules also begin with a confusing and seemingly unnecessary note, “Please note: this policy will be updated later this year to include hate symbols and hateful imagery.”
By posting the notice, Twitter has effectively already included those categories in its policy; it simply appears, for whatever reason, to be waiting to enforce them. It’s unclear why the publication of rules about “hateful imagery” would need to be delayed or, perhaps, strategically scheduled.
The company’s “Safety Calendar” also indicates that on Dec. 18, a new rule will expand its policies to include content that glorifies or condones acts of violence that result in death or serious harm.
The site’s CEO, Jack Dorsey, felt compelled to chime in on Friday afternoon, reiterating why Trump’s retweeted posts weren’t taken down. His tweet was met with a cacophony of furious replies, many coming from verified accounts of people who work in media.
We mistakenly pointed to the wrong reason we didn’t take action on the videos from earlier this week. We’re still looking critically at all of our current policies, and appreciate all the feedback. See our safety calendar for our plans and ship dates. https://t.co/yGytH3eskM
— jack (@jack) December 1, 2017
To one of those replies, Dorsey responded with a curt, “No, I don’t.”
No, I don’t
— jack (@jack) December 1, 2017
Twitter has recently taken a more active approach against accounts that promote violence and hateful speech, suspending and even banning some users. It also changed its policy on verified accounts and can now remove the coveted blue check mark if a user abuses it—a misguided policy that does nothing to address the site’s rampant harassment.
But for all its (mostly) agreeable changes, Twitter still hasn’t addressed how it plans to enforce them. Both the Twitter Safety account and Dorsey said they would welcome feedback. Surely, if that’s true, something must change.