Today, in Christchurch, New Zealand, shootings at two mosques killed 49 people in one of the worst mass shootings in modern history. And the entire act of violence played out online: the shooter livestreamed part of his spree, and the footage was uploaded and reposted across the internet.
It’s not the first internet-broadcast shooting. A reporter and a cameraman in Virginia were killed in 2015, and the footage was uploaded while the murderer was on the lam. Shootings have also been broadcast via Facebook Live amid gang violence in Chicago.
However, the visceral nature of this footage left people appalled at how easily carnage can be disseminated.
The New Zealand massacre was livestreamed on Facebook, announced on 8chan, reposted on YouTube, commentated about on Reddit, and mirrored around the world before the tech companies could even react.
— Drew Harwell (@drewharwell) March 15, 2019
The video originated on Facebook Live, posted by the suspect, Brenton Tarrant. It followed him from his car to the Masjid Al Noor mosque, the site of one of the attacks, where he filmed himself walking through the building, firing shots into people huddled in groups.
He shared his intentions on 8chan, where he posted a link to the Facebook Live stream.
The video picked up steam on Twitter, where an account, @realjackdawkins, was posting clips from the shooting. It’s not clear what the connection between these accounts was.
The video was also posted to Reddit and YouTube. Videos like this tend to be impossible to snuff out once they’ve begun spreading across the web.
However, the tech giants did attempt to stop the dissemination of this act.
Facebook almost immediately took down the shooter’s account, as well as the video of the event. It also announced that it had removed a corresponding Instagram account.
In a statement, Facebook said that it responded after being contacted by New Zealand police. “New Zealand Police alerted us to a video on Facebook shortly after the livestream commenced and we quickly removed both the shooter’s Facebook and Instagram accounts and the video,” said Mia Garlick, Facebook’s director of policy for Australia and New Zealand.
Facebook also said it was moderating and removing posts on its site that praised the crime.
“[We are] removing any praise or support for the crime and the shooter or shooters as soon as we’re aware,” Garlick said.
Although the video wasn’t originally posted to YouTube, it was reuploaded to the site. YouTube announced it was pulling down copies wherever it found them.
Our hearts are broken over today’s terrible tragedy in New Zealand. Please know we are working vigilantly to remove any violent footage.
— YouTube (@YouTube) March 15, 2019
YouTube said in a statement that “shocking, violent and graphic content has no place on our platforms, and is removed as soon as we become aware of it.”
Platform officials said they were working with authorities on the matter. But in the aftermath, the video was reuploaded by dozens of different accounts.
The shooter’s Twitter account was unearthed immediately after the shooting. Though the account has since been taken down, its existence shows how easy it is to post hateful messages and threaten acts of violence on the site.
On the account, he posted pictures of ammunition magazines in advance of the killing. And though the account has been removed, it is still accessible via web archives.
Another account, @realjackdawkins, was suspended after posting clips from the shooting. However, it remained up for at least 30 minutes after it first posted clips from the massacre.
Twitter told CNN it was working to remove videos of the shooting as they were posted. But some videos, posted by accounts with large followings, were left up for over two hours, according to BuzzFeed’s Ryan Mac.
Despite Twitter's earlier commitment to taking down the video I'm still seeing clips, including one shared from a verified account with 694K followers. I'm not sharing it here, but it's been up for two hours.
— Ryan Mac 🙃 (@RMac18) March 15, 2019
Reddit is notorious for its unwillingness to censor content, which has allowed the site to position itself as a bastion of “free speech.” On r/watchpeopledie, a forum dedicated to footage of people dying, the moderators struggled to decide what to do when the video was posted. Initially, they chose to leave it up.
What responsibility do we want these companies to have? On Reddit, one of the most popular sites on the Internet, people have been narrating the video on a forum called "watchpeopledie." After more than an hour, this was posted: pic.twitter.com/C8nmt7CZgh
— Drew Harwell (@drewharwell) March 15, 2019
However, Reddit eventually decided to step in and ban the video because it “glorifies violence.”
It appears that police may have also asked Reddit to remove the video, according to a comment from moderator u/iammrpositive:
The thread is gone because the NZ police are requesting a takedown of the video and the links in the thread were dead. Sorry. Isn’t it cool how the “authorities” get to decide what you are allowed to watch? Regardless of what you believe, the video simply shows the objective reality of what can happen when an actual crazy person gets caught up in all of the bullshit. And, well.. I think objective reality is a pretty good thing to base your beliefs around.
While the tech giants reacted swiftly to this incident, it reveals the difficulty of policing and moderating content. Smaller-scale acts of violence get lost in the mix. Channels on YouTube and pages on Facebook exist solely to radicalize people and inspire acts of violence.
And in less-regulated corners of the internet, like 8chan, where the shooter posted his intent to commit the murders, moderation is nonexistent, and people are free to spew whatever vitriol they please.