Silhouette of woman on phone over Mastodon logo on blue background

rafapress/Shutterstock Wikimedia Commons/Mastodon (Licensed)

Assh*le Twitter users are barging into Mastodon and demanding it stop being so polite

They're annoyed it's a safe space.


Viola Stefanello

Tech

Posted on Nov 10, 2022   Updated on Nov 10, 2022, 9:11 am CST

On the surface, Mastodon isn’t all that different from Twitter. 

The decentralized platform has its own name for posts (it’s “toots” instead of “tweets”), its own like and repost features, even its own chronological timeline. This resemblance is one of the reasons why Mastodon is attracting thousands of new people who want an alternative to Twitter after Elon Musk purchased the site. Spooked by the chaotic way the Tesla and SpaceX CEO is running it, at least 70,000 people joined Mastodon in the past couple of weeks.

But many of the thousands of Twitter users who are now migrating to Mastodon to try to find a new, hopefully less toxic social media home are clashing with one of the platform’s most distinctive features: content warnings.

When tooting on Mastodon, users are prompted to decide whether they want to add a content warning. Adding one is completely up to the user, but if you do, everyone who follows you or belongs to the same instance as you initially sees only the short warning you choose to write.

It’s then up to them to decide whether to click on the warning to see the rest of the toot or to keep scrolling.
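For readers curious about the mechanics: in Mastodon’s public REST API, the content warning is simply another field on the post itself (called spoiler_text), so third-party apps handle it the same way the official web interface does. Below is a minimal sketch, in Python, of publishing a toot with a warning; the instance URL and access token are placeholders, and the exact steps for obtaining a token vary by instance and client.

import requests

INSTANCE = "https://mastodon.social"   # placeholder: any Mastodon instance
ACCESS_TOKEN = "YOUR_ACCESS_TOKEN"     # placeholder: token with write:statuses scope

resp = requests.post(
    f"{INSTANCE}/api/v1/statuses",
    headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
    data={
        "status": "The full post, visible only after readers click through",
        "spoiler_text": "US politics",   # the short warning shown by default
        "visibility": "public",
    },
)
resp.raise_for_status()
print(resp.json()["url"])   # link to the published toot

Anyone following the account then sees only the spoiler_text line in their timeline until they choose to expand the post.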

https://twitter.com/guerraDgalaxia/status/1588904965887918086

The platform offers highly customizable settings, made possible by Mastodon’s open-source nature, which set it starkly apart from most of the centralized social media apps that have become mainstream over the last decade. 

It is organized across a number of different servers, known as instances, each with its own rules and norms, which gives users a further layer of customizability. 

But if you keep the default preferences that users start with when they join, you will quickly notice that people on Mastodon seem to write content warnings for basically everything, from pictures that include eye contact to discussions of cars, jobs, travel, and other mundane topics. Some communities even ask their members to hide every mention of politics or breaking news behind a content warning.

For new users coming from spaces like Twitter, where no such feature exists and people can be ridiculed for manually adding content warnings, the experience can prove jarring.

In the past few days, mastodon.social—the flagship instance, which counts around 159,000 members—has been awash in posts arguing for and against the perceived overuse of content warnings, with others airing their complaints on Twitter, too. 

The most patient Mastodon veterans have been trying to educate skeptical newcomers: some explain that it’s simply a matter of getting accustomed to the platform’s etiquette so as not to constantly come across as impolite. Others are asking for respect for the platform’s distinct culture, underlining that Mastodon has long been populated and shaped by marginalized groups who were harassed and bullied off other platforms and are accustomed to having a space where their boundaries and mental health are respected.

The main complaint from new users seems to be that content warnings are used too liberally, even for innocuous things, whereas the general understanding outside of Mastodon is that posts need warnings only if they deal with particularly delicate topics—like suicide, addiction, or violence—or if they aren’t safe for work. 

“I’m seeing a lot of Mastodon instances forcing users to CW all kinds of things, even innocuous things like food, and I feel that’s wrong,” a user wrote on Twitter. “I do not get the Mastodon reliance on content warnings for most everything that isn’t a cat picture,” another commented.

“The last time I joined mastodon I got tone policed because people wanted fucking filtered content warnings for things like ‘dinner’ and ‘cars’ and ‘work.’ If there’s a mass exodus from twitter I’m not moving, I’m just signing off,” one concluded.

Setting aside personal preferences and opinions about what a content warning should be for, the feature does make the user experience slightly clunkier, at least for people who aren’t neurodivergent or don’t otherwise have particular needs. If you don’t set content warnings to expand automatically, reading through the timeline is more laborious, since it requires clicking on every single toot that’s hidden behind a warning after deciding whether you’re interested. 

On the other hand, it does represent an exercise in consent that is usually absent on other platforms. 

People who are used to platforms like Twitter, where topics like politics and breaking news are discussed openly—constantly and often very vocally—might be puzzled by the fact that posting about these same topics on many Mastodon servers without slapping a content warning on the toots first will lead to an admonition. On Tuesday, as the midterms played out, a widely circulated toot on an instance dedicated to artists and creators asked members to “Please, /please/, place all political posts behind a content warning. It’s actually in our Code of Conduct as a strong suggestion, but if we see anyone posting heavy political content frequently and without a CW, we will ‘limit’ the account (at the very least) so that the post don’t show up on the public timeline.” 

This approach is unsettling some Twitter users who are dipping their toes into Mastodon for the first time. 

“Twitter can be unhealthy but I’m struggling with Mastodon feeling a little too sterile for me. I think content warnings are valid but being directed to hide ANY political thoughts or even reposts from Twitter behind a warning for risk of ‘messing with the vibe’ feels weird,” a Twitter user noted.

There are settings that expand posts hidden behind warnings automatically across the site, but impatient Twitter users haven’t taken the time to learn them. 

Ultimately, though, a lot of the criticism rests on a larger misunderstanding: in most cases, content warnings on the platform aren’t added because the poster thinks someone will find the content distressing at all. Interestingly enough, Mastodon founder Eugen Rochko apparently approved the feature in part because it was useful for hiding TV show spoilers. And, over the years, the community on the platform has begun to use it for a range of different purposes. 

https://twitter.com/hyperplanes/status/1589838314324258817

In a now-viral explainer blog post on the topic, artist and developer v buckenham writes that content warnings can be used for a bunch of different reasons: “allowing you to talk about shit that feels a bit too heavy to talk about without the reader opting in,” like issues with mental or physical health, but also “allowing you to talk about shit that feels a bit too boring to talk about without the reader opting in … talking about stuff that is fine for you but might have higher than expected emotional load for other people,” and even making little jokes with your community. 

“Different parts of Mastodon, even more so than Twitter, have their own culture,” buckenham told the Daily Dot. “The instance I’m on uses CWs heavily: most of the time just for jokes, but there has always been CW discourse, which seems necessary. It’s the community (communities) trying to find a consensus, pushing back on people who make the experience non-ideal for them.”

This doesn’t mean every long-time user agrees with how content warnings are used on the platform: blogger Karl Voit, for instance, has recently argued that the heavy use of content warnings on posts that don’t include spoilers, nudity, violence, or other sensitive topics is watering down the feature’s actual usefulness. 

And as more people give Mastodon a try, looking for an alternative to an increasingly chaotic Twitter, this meta-debate is bound to keep developing. 
