Social media sites are ditching their content moderators at the worst possible time

They’re needed more than ever.

Roger Sollenberger

In October 2019, Mark Zuckerberg was grilled before a House committee about Facebook’s controversial content moderation apparatus, which numerous reports have described as inhumane. Its estimated 15,000 content moderators—some employed directly, some contracted through third parties—work long days at low pay (as little as $28,800 a year), exposed to some of the most disturbing scenes of the human experience for hours at a time, with a few short breaks each day.

As Rep. Katie Porter (D-Calif.) put it to the CEO:

“These workers get nine minutes of supervised wellness time per day. That means nine minutes to cry in the stairwell while someone watches them.”

But now many of those workers won’t have someone to watch them—or even a stairwell. Like everyone else, social media companies have been forced to adapt to the manifold and often unforeseen challenges of doing business during the global coronavirus pandemic. As a consequence of social distancing, many content moderators cannot work in offices—where they themselves can be tightly monitored—so thousands have been relieved of their duties, their work handed to imperfect artificial intelligence screening programs that have never been tested at such a large scale.

This will hamper the companies’ ability to combat not just disturbing and abusive content but also coronavirus lies and the spread of political misinformation in an election year already rife with foreign interference.

With moderators gone, social media may run amok at the most dangerous time possible.

Facebook has dramatically reduced its use of moderators contracted through third-party companies such as Accenture and Cognizant, shifting the most sensitive content monitoring—including self-harm—to full-time Facebook employees. Other companies, such as Twitter and Google, have done the same. None of the companies contacted for this article would reveal how many of their moderators have been temporarily sidelined, or when they expect to lift their emergency policies.

The companies aren’t only scaling back on moderators. The Bay Area is on lockdown, as are major cities across the world. Twitter says it has barred all in-office work since March 11, and Google offices now host only employees whose roles are “business essential.”

None of the companies interviewed for this article characterized the necessary downsizing as layoffs, and all say they are supporting workers affected by the crisis. This includes pay for time employees would have been working, and measures to make remote work possible for employees whose jobs expose them to sensitive data.

Moderators pose a special case, however, because social media companies often contract a significant amount of that work through third-party vendors. This means the companies must coordinate with those vendors on their response and on support for workers whose ability to do their jobs has, because of public health requirements, vanished.

Facebook has been uncharacteristically transparent about the obstacles. The company wants the public to know its new configuration will likely not be up to the task, at least at first.

When asked what it is doing to care for moderators whom the pandemic has put out of work, the company said it will make sure they are paid. Facebook told the Daily Dot in an email that it will make support, including mental health support, available to full-time employees and content-review contractors alike. All companies interviewed for this article described similar efforts.

Coincidentally, Facebook is currently in the final stages of negotiating settlements with a group of moderators who filed a class-action lawsuit in 2018 alleging they developed PTSD from regularly viewing highly disturbing images and videos of murder, rape, child pornography, and suicide.

Facebook is concerned that the volume of posts, combined with its diminished ability to review them, will lead to more disturbing content reaching more people. That could compound the damaging effects as people around the world, particularly those already struggling with their mental health, begin to grapple with extended isolation, as well as penury and loss.

Zuckerberg voiced his own concerns in a recent interview with The Verge. “I’m personally quite worried that the isolation from people being at home could potentially lead to more depression or mental health issues, and we want to make sure that we are ahead of that in supporting our community by having more people during this time work on things that are on suicide and self-injury prevention, not less.”

It’s a sobering thought. Though an exact correlation is difficult to pin down, social media and other internet platforms have been perennially linked to suicides, especially among young people who experience cyberbullying and inordinate peer pressure. Youth suicide rates in America increased 56% in the decade starting in 2007. That was the year Apple introduced the iPhone, and the year Facebook went mobile.

And suicides, like COVID-19, can have a ripple effect in communities. For instance, in 2008 someone posted on a Japanese message board that hydrogen sulfide gas can be used to commit suicide. Afterward, 220 people attempted suicide that way, and 208 of them died.

A company spokesperson says that Facebook’s new response apparatus has prioritized the most critical areas, including content involving self-harm and imminent harm.

Moderators, of course, aren’t only tasked with flagging disturbing content. 

They also monitor manipulative content, bots, and spam—critical and demanding work in a pivotal election year in which foreign actors are stepping up their interference efforts, with the worst likely yet to come. That is not even to mention the many real people trying to downplay the coronavirus online by spreading information that contradicts experts.

This is of unique significance for Facebook. Last year, Facebook made the controversial decision not to ban political ads that contain false or misleading content, though the company says it will continue to police organic posts. Facebook has partnered with outside fact-checkers to flag and debunk (but not necessarily remove) misleading political posts, but some of its choices—such as Check Your Fact, a subsidiary of right-wing outlet The Daily Caller—have their own history of misinformation. These partners operate in a responsive capacity and don’t dedicate themselves to proactively patrolling organic posts. Zuckerberg said Facebook, out of concern for the program’s effectiveness during the pandemic, has created a $1 million fund to support its fact-checking partners.

Twitter, on the other hand, has banned political ads outright, and a December 2019 report found that YouTube had already removed 300 ads for President Donald Trump’s campaign. Given the recent reductions in moderating capability, both platforms will, at least in the near term, be less effective at suppressing and removing misinformation.

But even as moderation resources grow thinner, social distancing has invigorated social media. Facebook says it has seen a substantial spike in traffic from users and businesses around the world, to the point that the company is concerned about server capacity. Zuckerberg told reporters on a press call last week that both WhatsApp and Facebook Messenger calls recently more than doubled in volume.

The companies have responded by ramping up AI content screening, paired with limited human review teams, to fight the spread of harmful, disturbing, and abusive content, misinformation (including about COVID-19), and bogus political content, areas Facebook says it is prioritizing.

The expansion of automated screening is unanticipated and abrupt, and Facebook—even Zuckerberg himself—has been transparent about the technology’s shortcomings.

“We anticipate our current changes could lead to mistakes,” a spokesperson told the Daily Dot.

Twitter and YouTube told the Daily Dot they’ll supplement their remaining moderators with AI, which will primarily be used to flag content for human review.

 In a blog post, Twitter shared Facebook’s apprehension, saying its automated screening systems “can sometimes lack the context that our teams bring, and this may result in us making mistakes.”

Twitter tells the Daily Dot it will only permanently suspend accounts upon human review of flagged content, and YouTube and Facebook indicated similar precautions.

A Google spokesperson referred questions about misinformation to a company blog post, which acknowledges that its AI systems “are not always as accurate or granular in their analysis” as its human reviewers, and indicates that YouTube’s software could flag too much content—with lags for human review and appeal times.
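
To make that workflow concrete, here is a minimal and purely hypothetical Python sketch of a "flag for human review" pipeline of the kind these statements describe: an automated classifier scores each post, anything above an assumed threshold goes into a queue for a human moderator, and enforcement actions such as suspension are reserved for that human step. The threshold, the names, and the toy classifier are illustrative assumptions, not any platform’s actual system.

```python
# Hypothetical sketch of a "flag, don't auto-enforce" moderation pipeline.
# The threshold, names, and toy classifier are illustrative assumptions,
# not a description of any platform's real system.

from dataclasses import dataclass, field
from typing import List

FLAG_THRESHOLD = 0.8  # assumed score above which a post needs human eyes


@dataclass
class Post:
    post_id: str
    text: str


@dataclass
class ReviewQueue:
    pending: List[Post] = field(default_factory=list)

    def enqueue(self, post: Post) -> None:
        # Flagged content waits for a human moderator; the automated step
        # alone never issues a permanent suspension.
        self.pending.append(post)


def classifier_score(post: Post) -> float:
    """Stand-in for an ML model: returns a probability the post violates policy."""
    return 0.9 if "banned phrase" in post.text.lower() else 0.1


def screen(post: Post, queue: ReviewQueue) -> str:
    score = classifier_score(post)
    if score >= FLAG_THRESHOLD:
        queue.enqueue(post)  # route to a human reviewer instead of acting automatically
        return "flagged_for_review"
    return "allowed"


if __name__ == "__main__":
    queue = ReviewQueue()
    print(screen(Post("1", "a banned phrase appears here"), queue))  # flagged_for_review
    print(screen(Post("2", "ordinary update about dinner"), queue))  # allowed
```

In a setup like this, fewer available human reviewers simply means the review queue grows, which is the lag in review and appeal times the companies acknowledge.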

Facebook told the Daily Dot that it is currently exploring work-from-home options with its contractors on a temporary basis, and says those programs have already been enabled in some locations. “We expect this to be a slow and steady process,” the company said.

An Accenture worker told The Intercept last week that Accenture told a team of more than 20 contractors they would not be allowed to work from home, despite social distancing guidance from government and health experts. Accenture told the Daily Dot in an email that it will offer all remote employees wellness support, including 24/7 virtual access to mental health professionals, but did not respond to specific questions. Cognizant did not respond to questions.

One reason for the downsizing is that moderators can’t be moderated if they don’t work in an office or a specially designed telework setting. Companies rigidly control and monitor this work in the interests of accuracy and output—at volumes that can devastate psyches—as well as out of safety and privacy concerns.

“Some of this work can only be done onsite—like those who need to access the most sensitive content,” a Google spokesperson said in an email. “Whether working remotely or not, everyone is being compensated for their work,” the spokesperson said, adding that the policy extends to the company’s vendors and contractors, including moderators. Google is in the process of distributing equipment, such as secure laptops, to some employees and vendors who must now work remotely, in hopes of establishing a durable new model.

A Twitter spokesperson said in an email that the company will continue to pay “labor costs to cover standard working hours” for its contractors and hourly workers who are not able to do their work remotely while the work-from-home guidance is in place, a policy also reflected in statements from Google and Facebook. Facebook told the Daily Dot it was also offering “remote psychological/resiliency support” for its employees and says it is working to ensure its partners do the same.

As sweeping as the changes are, these companies probably have the resources to pull them off. Smaller companies—such as TikTok and Snap—might find it harder to adjust, raising serious concerns about the content shared on their platforms. (Neither TikTok nor Snap responded to repeated requests for comment.)

Though the combination of fewer resources with a dramatic surge in traffic creates very real cause for concern—are we going to experience an even darker online world?—that surge also suggests a bright spot. Many people seek and find comfort and connection on these platforms, and that’s true of young people especially. Social media helps us get in touch with people in similar situations, whether they’re strangers across the globe or friends and family who now share the same anxieties about an invisible threat and a seemingly interminable uncertainty.

For all the flaws and uncharted challenges ahead, social media helps many people feel a little less lonely, and the companies seem to be taking their responsibilities, their users, and their workers a lot more seriously.

But at a time when we’re recalibrating our culture around what’s newly essential—grocery stores, delivery drivers, sanitation workers—and a wave of sometimes unlikely workers around the world is being revealed as what keeps society functioning, perhaps we should start to look at content moderators differently, too.
