Meta’s virtual reality platforms are rife with hate speech and sexual abuse, and Meta itself provides little to no moderation, according to a recent study.
“Metaverse: another cesspool of toxic content,” released by nonprofit advocacy group SumOfUs on Tuesday, details the rise of Meta’s VR platforms and the harm they have caused.
The report documents multiple instances of sexual harassment and assault by users in the metaverse, including one in which a SumOfUs researcher’s avatar was led into a private room and raped by another avatar while other users watched and passed around a bottle of digital vodka.
The researcher “noted how quickly she encountered sexual assault on the platform” after a separate user encouraged her to disable the personal boundary setting, a feature that prevents other users’ avatars from coming within four feet of her own.
“It happened so fast I kind of disassociated,” the researcher said in the report. “One part of my brain was like wtf is happening, the other part was like this isn’t a real body, and another part was like, this is important research.”
The report said that while safety features like the bubble boundary are necessary, they “fall short of holding other users accountable for misbehavior, as the platform puts the responsibility of moderating content on the player being harassed.”
The report details other instances of sexual assault and harassment in Meta-owned spaces, from sexist remarks to a “virtual gang rape.”
Users who attempt to report these incidents to Meta are often met with complications or a shortage of moderators, and they say the company has little plan or structure in place to regulate the platform.
“Instead of learning from its previous mistakes, Meta is pushing ahead with the Metaverse with no clear plan for how it will curb harmful content and behavior, disinformation and hate speech,” the report said.
According to an internal memo sent by Andrew Bosworth, Meta’s Chief Technology Officer, the company is well aware that moderation in the metaverse “at any meaningful scale is practically impossible.”
Meanwhile, in a blog post, Meta’s policy chief compared the company’s approach to moderation to deciding when to intervene in a bar fight, in contrast to Facebook’s active, ongoing moderation.
Users also encountered hate speech and graphic content on the app; SumOfUs researchers were subjected to slurs, stalking, and gun violence in the metaverse. One VR user, who is Black, said that experiencing racism in the metaverse feels the same as it does in real life.
The report comes as Meta faces increasing scrutiny from global regulators while the social media company’s influence continues to grow. Lawmakers in the U.S. and the EU are attempting to legislate or rein in the power of the tech giant, and earlier this month the company was accused of using its algorithm to influence an Australian bill that would force Facebook and Google to pay news publishers for content.
Meta did not respond to a Daily Dot request for comment about the report.