In the wake of the mosque shootings in New Zealand where 49 people were killed, a number of people online are criticizing tech giants like Google, Twitter, and Facebook for allowing hateful content to exist on their platforms.
Four people are in custody following the shooting, where the alleged killer reportedly used a helmet camera to film and livestream the attacks onto Facebook. Footage was later available on YouTube. The video of the attack is still easily found on Twitter as of Friday morning.
As a number of social media sites scrambled to remove the content, people online criticized them for allowing hateful content to fester on the platforms. The response comes because an account, believed to be connected to the killer, posted a manifesto filled with white nationalist rhetoric ahead of the attack.
Twitter has especially come under fire for its refusal to take a strong stance on hate speech.
People have long called for CEO Jack Dorsey to #BantheNazis. Today, users reiterated that sentiment.
— Lady of the Lake #WarrenDemocrat 💎 (@LucianaLamb) March 15, 2019
Ban the fucking Nazis, @jack. Comb through your social media platform, find them all, and ban them. All of them. Now. There's no place for them in a civilization.
— John Phipps (@mistermegative) March 15, 2019
I don't give a fuck about your broken hearts, ban the fucking Nazis from your platform once and for all. https://t.co/V9ninyQqcn
— Oz Katerji (@OzKaterji) March 15, 2019
This is why people say ban the fucking Nazis. You give them a platform and they use it radicalize each other and livestream murder. Any platform that allows white supremacists to congregate is complicit
— Butt Praxis (@buttpraxis) March 15, 2019
And yet, we continually show how inept we are at finding an answer to hate-fueled gun violence. @twitter and @facebook are complicit as they refused to ban hate speech. They give platforms to Nazis and the alt-right to spew their vitriol. To recruit others to their ranks.
— Tom (@urban_tom) March 15, 2019
Others criticized the social media platforms' response as the videos spread widely in the wake of the attack.
The video of the attacks in New Zealand is the latest to spread through social media. As CNN noted, attacks in Thailand, Denmark, and the United States have all been broadcast on social media.
“While Google, YouTube, Facebook, and Twitter all say that they’re cooperating and acting in the best interest of citizens to remove this content, they’re actually not because they’re allowing these videos to reappear all the time,” Lucinda Creighton, a senior adviser at the Counter Extremism Project, told the news outlet.
“The tech companies basically don’t see this as a priority, they wring their hands, they say this is terrible. But what they’re not doing is preventing this from reappearing.”
New Zealand Police said they are “working to have any footage removed” from sites online where it is being shared.
As CNN reports, Facebook, Twitter, and YouTube–which is owned by Google–struggled to rein in the spread of the live-streamed video the shooter took during the shootings. In a statement, Facebook said it took down the video—and the alleged shooter’s account on the site and Instagram—after it was alerted about it by police. The company also said it was removing “any praise or support for the crime and the shooter or shooters as soon as we’re aware.”
“We will continue working directly with New Zealand Police as their response and investigation continues,” the company said in a statement.
The responses from the social media giants didn’t seem to be enough for many people, who have called out how the attack could be livestreamed and shared so quickly.
Apparently the gunman in New Zealand livestreamed the shooting, completely unfettered. What say you @facebook ? No algorithm to address actual terrorist acts?
— María Peña (@mariauxpen) March 15, 2019
Extremism researchers and journalists (including me) warned the company in emails, on the phone, and to employees' faces after the last terror attack that the next one would show signs of YouTube radicalization again, but the outcome would be worse. I was literally scoffed at. https://t.co/z0OPqfJJw6
— Ben Collins (@oneunderscore__) March 15, 2019
Was the video flagged by Facebook’s technology prior to police notifying Facebook of it?
If not – WHY? https://t.co/QEj97jY06x
— Cody Melissa Godwin (@CMG_BBC) March 15, 2019
Critics have long said tech giants have struggled to police hate speech, hiding behind claims of free speech and their role as platforms rather than publishers, even as those platforms can radicalize people more swiftly than previous mediums.