Think twice before vaguebooking, or using suicide metaphors, on Facebook, because your friends might narc on you. The site will be rolling out a feature that allows users to flag posts for review if they think they contain suicidal content, though Facebook reminds people they should call emergency services immediately if a user posts a clear suicide threat. The esteemed mental health professionals at Facebook will take a look at reported content and, if they deem it warranted, issue a note suggesting that the user seek help.
This is a terrible idea.
Suicide should be taken extremely seriously, especially given that almost 7 percent of Americans have one or more major depressive episodes annually, and 18 percent of Americans experience anxiety disorders, which often tie into depression. In the United States, a suicide occurs every 12.8 minutes.
Frameworks for suicide prevention are incredibly complicated and require the construction of a supportive social network that includes friends, family, mental health professionals, and others who interact with depressed people. Facebook’s participation in suicide prevention might seem positive on the surface, but it comes with hidden costs.
At a casual glance, the idea of seeing a friend in trouble and letting Facebook know you're worried seems logical. However, it's odd that the site would encourage users to report behavior that concerns them rather than reach out to people directly. Directly helping a friend with depression can be challenging, but providing evidence that you're there and listening is much more powerful than using a cold, clinical reporting tool. Perhaps this is evidence of an era in which people are encouraged to remain distant from each other and their emotions.
The thought of logging in to Facebook to a frozen timeline and the news that a friend flagged one of your posts—anonymously, at that—is chilling. For at least some people with depression and other mental health conditions, it seems like it would leave considerable room for pause and second-guessing before posting content on Facebook. This would leave them more isolated and stressed, which can create circumstances that directly contribute to depression.
The very term "social network" implies a social or community aspect, not a setting in which people go behind each other's backs to file reports on their behavior. Depressed people might find themselves alienated and uncomfortable with reports like these, perhaps even leaving Facebook altogether because it no longer feels like a safety net. This achieves the exact opposite of Facebook's suggestion that the feature will help people reach out.
Historically, users concerned about what appeared to be suicidal content had to go through a multilayered reporting process that included taking a screenshot to report a user. As with other reports, personnel would review it and decide on an appropriate course of action, including potentially providing the user with helpful resources. Now, users can report at the click of a button, which leaves much more room for abuse.
While Facebook claims it will review reports carefully before contacting users, there’s a high probability of trolling embedded in the feature. Everyone communicates differently, including mentally ill people, and a comment that might seem like a serious expression of suicidal ideation in one person might be a dark joke for another—and unless you know the individuals involved, it can be difficult to tell. Conversely, what seems like a casual or lighthearted comment could be a sign of extreme distress, and an outside observer wouldn’t be able to make that distinction. This creates an ideal opening for trolls to get to work.
Facebook’s Prevent Suicide: Great, now if you feel people are intruding into your life, Facebook hits you with cheap Psychoanalysis.
— Karlitos (@_karlvan) February 26, 2015
“One of the key concerns [with #SamaritansRadar],” wrote Karen McVeigh at the Guardian, “relates to the lack of knowledge or consent of potentially vulnerable individuals. Anyone can sign up to receive an email when someone appears to be in crisis and those being monitored will not be alerted when a user signs up.”
Likewise, Facebook users have no way to opt out of reporting from their followers—in fact, they can’t even go private to avoid it. Numerous supporters—like Ashley Feinberg at Gizmodo, who writes that “if this helps even one person, it’ll be worth it”—miss the fact that the feature is nonconsensual. People with mental health conditions don’t have an opportunity to decide whether they want this feature implemented, which gives their followers a great deal of power over them.
Facebook’s feature flips the dynamic of #SamaritansRadar, providing resources for people to reach out if they want help rather than enabling people to monitor Twitter timelines, but it’s still troubling. It implies an invasive level of scrutiny that not all users may be comfortable with, and it will definitely change the way people engage with and use Facebook. For people with mental illness, Facebook may turn from a tool for connecting, reaching out, and finding solidarity into a potentially hostile environment.
While people without mental illnesses might think that a message about "a friend" being worried could be comforting, it can be just the opposite for someone experiencing a depressive episode or a more complex mental health issue, like a break with reality or an episode of paranoia. Instead of feeling supported and surrounded by a loving community, a user might instead feel unsafe and under attack.
Mental health is more complicated than a quick read of a report that will likely be one of thousands to review over the course of the workday. It’s also more complicated than casually flagging a post that gives a user pause, wondering if it contains a warning of suicidal behavior. These tools disempower people with mental health conditions, putting them on the defensive instead of allowing them to make active choices—like, for example, enabling a reporting tool or buddy system so their Facebook friends know when they need help.
If you need to talk to someone, contact the National Suicide Prevention Lifeline at 800-273-TALK (TTY: 800-799-4889).
s.e. smith is a Northern California-based journalist and writer focusing on social justice issues. smith's work has appeared in publications like Esquire, the Guardian, Rolling Stone, In These Times, Bitch Magazine, and Pacific Standard.