
Photo via Ryan Melaugh/Flickr (CC BY 2.0)

Is Facebook a replacement for a suicide hotline?

Encouraging users to narc on each other might not be such a good idea.


S.E. Smith


Posted on Feb 27, 2015   Updated on May 29, 2021, 10:36 am CDT

Think twice before vaguebooking—or using suicide metaphors—on Facebook, because your friends might narc on you. The site will be rolling out a feature that allows users to flag posts for review if they think they contain suicidal content, though Facebook reminds people to call emergency services immediately if a user posts a clear suicide threat. The esteemed mental health professionals at Facebook will review reported content and, if they think it's warranted, send the user a note suggesting that she seek help.

This is a terrible idea.

Suicide should be taken extremely seriously, especially in light of the fact that almost 7 percent of Americans have one or more major depressive episodes annually and 18 percent experience anxiety disorders, which often tie into depression. In the United States, a suicide occurs every 12.8 minutes.

Frameworks for suicide prevention are incredibly complicated and require the construction of a supportive social network that includes friends, family, mental health professionals, and others who interact with depressed people. Facebook’s participation in suicide prevention might seem positive on the surface, but it comes with hidden costs.


At a casual glance, the idea of seeing a friend in trouble and letting Facebook know you’re worried seems logical. However, it’s odd that the site would encourage users to report behavior that concerns them rather than reach out to people directly. Directly helping a friend with depression can be challenging, but showing that you’re there and listening is much more powerful than using a cold, clinical reporting tool. Perhaps this is evidence of an era in which people are encouraged to remain distant from each other and their emotions.

The thought of logging in to Facebook to find a frozen timeline and the news that a friend flagged one of your posts—anonymously, at that—is chilling. For at least some people with depression and other mental health conditions, that prospect invites hesitation and second-guessing before posting anything on Facebook, leaving them more isolated and stressed, which can directly contribute to depression.

The very term “social network” implies a social or community aspect, not a setting in which people go behind each other’s backs to file reports on their behavior. Depressed people might find themselves alienated and uncomfortable with reports like these, perhaps even leaving Facebook altogether because it no longer feels like a safety net. This achieves the exact opposite of Facebook’s suggestion that the feature will help people reach out.

Historically, users concerned about what appeared to be suicidal content had to go through a multilayered reporting process that included taking a screenshot of the content in question. As with other reports, personnel would review it and decide on an appropriate course of action, including potentially providing the user with helpful resources. Now, users can report at the click of a button, which leaves much more room for abuse.

While Facebook claims it will review reports carefully before contacting users, the feature practically invites trolling. Everyone communicates differently, including mentally ill people, and a comment that reads like a serious expression of suicidal ideation from one person might be a dark joke from another—and unless you know the individuals involved, it can be difficult to tell. Conversely, what seems like a casual or lighthearted comment could be a sign of extreme distress, and an outside observer wouldn’t be able to make that distinction. This creates an ideal opening for trolls to get to work.

The ability to report users in a snap feels like a tremendous violation of privacy, and it also, for at least some users, raises the specter of #SamaritansRadar, which flagged suicidal content on Twitter and reported it to users of the app. The short-lived program was launched in 2014 to considerable criticism—in fact, just over a week later, it was withdrawn in response to complaints from Twitter users.

Users of the app could receive a notification whenever anyone they followed tweeted a comment with keywords like “want to die” and “hate myself.” The Samaritans argued that this would make it harder to miss suicidal people in a busy timeline, creating an opportunity to reach out in support. Critics pointed out that it created an ideal opportunity for trolls who wanted to strike when people were at their most vulnerable. The Samaritans countered with the argument that tweets are public and people who didn’t want their tweets scanned should go private—and critics responded with the point that people use Twitter for social networking and community support. Exactly the sort of support that people feeling suicidal might need.

“One of the key concerns [with #SamaritansRadar],” wrote Karen McVeigh at the Guardian, “relates to the lack of knowledge or consent of potentially vulnerable individuals. Anyone can sign up to receive an email when someone appears to be in crisis and those being monitored will not be alerted when a user signs up.”

Likewise, Facebook users have no way to opt out of reporting from their followers—in fact, they can’t even go private to avoid it. Numerous supporters—like Ashley Feinberg at Gizmodo, who writes that “if this helps even one person, it’ll be worth it”—miss the fact that the feature is nonconsensual. People with mental health conditions don’t have an opportunity to decide whether they want this feature implemented, which gives their followers a great deal of power over them.

https://twitter.com/ezraklein/status/570979997059588097

Facebook’s feature flips the dynamic of #SamaritansRadar, providing resources for people to reach out if they want help rather than enabling people to monitor Twitter timelines, but it’s still troubling. It implies an invasive level of scrutiny that not all users may be comfortable with, and it will definitely change the way people engage with and use Facebook. For people with mental illness, Facebook may turn from a tool for connecting, reaching out, and finding solidarity into a potentially hostile environment.

While people without mental illnesses might think that a message about “a friend” being worried could be comforting, it can be just the opposite for someone experiencing a depressive episode or a more complex mental health issue, like a break with reality or an episode of paranoia. Instead of feeling supported and surrounded by a loving community, a user might instead feel unsafe and under attack.

Mental health is more complicated than a quick read of a report that will likely be one of thousands reviewed over the course of a workday. It’s also more complicated than casually flagging a post that gives a user pause, wondering if it contains a warning of suicidal behavior. These tools disempower people with mental health conditions, putting them on the defensive instead of allowing them to make active choices—like enabling a reporting tool or buddy system so their Facebook friends know when they need help.

If you need to talk to someone, contact the National Suicide Prevention Lifeline at 800-273-TALK (TTY: 800-799-4889).

