Photo via Ink Drop/Shutterstock (Licensed)
It’s the latest step to give Twitter users transparency into its decision-making.
The controversial Twitter rules of old are finally gone for good.
The company on Friday overhauled “The Twitter Rules,” its oft-cited but not well understood set of guidelines that govern its decisions on whether to ban or delete accounts and posts.
Many of the changes clarify or alter vague wording that may have previously caused confusion.
“We have worked on this clarified version of our rules for the past few months to ensure it takes into account the latest trends in online behavior, considers different cultural and social contexts, and properly sets expectations around what’s allowed on Twitter,” the company said in a statement.
One of the biggest edits shows the options Twitter gives to those who violate its terms. Previously, Twitter only said it would punish users with a “temporary locking and/or permanent suspension of an account.” Now, it says users may be asked to delete content before they regain access to their accounts, or that Twitter may temporarily limit their ability to create posts or interact with other users’ posts.
Twitter also expanded its definitions of graphic violence and adult content with new examples, including gory media related to death, serious injury, violence, or surgical procedures. Adult content, according to the new rules, is anything pornographic or intended to cause sexual arousal. Those content categories are allowed in tweets marked as sensitive, but you can’t use them in headers or profile images. Excessively graphic violence may also be taken down out of respect for the deceased. Those rules aren’t new; they’ve just been expanded.
Twitter also added a section on username squatting, security, and “private information and intimate media.” In response to recent controversial trending topics, Twitter now has a section about preventing content from trending if it breaches its terms.
But the most important changes were made to the abuse section, an area where Twitter has drawn criticism over the years. Regarding abusive behavior, the site now explains that Twitter first considers three things: whether the behavior was targeted at an individual or group, whether a report was filed by the victim or a bystander, and whether the behavior is newsworthy and in the legitimate public interest.
That last point is particularly interesting. In September, Twitter explained that it wouldn’t delete a threatening tweet by President Donald Trump because of an “internal policy” that protects newsworthy posts even if they clearly violate its terms. The mystery rule caused outrage and pushed Twitter CEO Jack Dorsey to pledge greater transparency at the company.
We’re putting significant effort into increasing our transparency as a company, and commit to meaningful and fast progress. Will do better. https://t.co/g1Rvkaj2sl
— jack (@jack) September 25, 2017
In late October, Twitter released a calendar showing all the safety features it planned to roll out by the end of the year to curb abuse.
The updated rules include a different definition of violence and new sections designed to better show how its policies deal with bullying, child sexual exploitation, and unwanted sexual advances.
The changes to the Twitter Rules are extensive. Fortunately, the company provided handy redlined versions so you can compare the old rules with the new.
Correction: Twitter did not change its rules, but rather published the existing policies on Friday. This article has been updated for clarity and context.