On Wednesday, Instagram was included in Facebook's quarterly transparency report for the first time.
In a blog post, Guy Rosen, Facebook's vice president of integrity, outlines how the tech giant works to enforce its rules in four specific policy areas.
“In this first report for Instagram, we are providing data on four policy areas: child nudity and child sexual exploitation; regulated goods — specifically, illicit firearm and drug sales; suicide and self-injury; and terrorist propaganda,” the company states.
In terms of self-harm, Rosen says Instagram removed roughly 835,000 pieces of content in the second quarter, 77.8 percent of which was detected by the company proactively. In the third quarter, Instagram removed approximately 845,000 pieces of self-harm content, 79.1 percent of which was detected proactively.
As for the company's policy on terrorism, Rosen states that Facebook detected 98.5 percent of terrorist-related content on its main platform but just 92.2 percent on Instagram. Facebook says it "will continue to invest in automated techniques to combat terrorist content" in hopes of pushing those percentages even higher.
Facebook also added that it has made progress tackling child exploitation on Instagram, with 512,000 instances removed in the second quarter and an additional 745,000 pieces of content in the third quarter.
Instagram likewise took down 1.5 million pieces of content related to drug sales while around 58,600 posts related to firearm sales were removed as well.
Neither Instagram nor Facebook released any statistics on fake accounts or hate speech, however, despite increased attention on those issues as the 2020 election draws near.
The company did note that it is employing new tactics to crack down on hate speech before it can spread.
“Our detection techniques include text and image matching, which means we’re identifying images and identical strings of text that have already been removed as hate speech, and machine-learning classifiers that look at things like language, as well as the reactions and comments to a post, to assess how closely it matches common phrases, patterns and attacks that we’ve seen previously in content that violates our policies against hate,” the blog post notes.
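The text-matching technique the post describes, flagging strings identical to content already removed, can be illustrated with a minimal sketch. Everything below is hypothetical: the function names, the hashing approach, and the sample strings are not from Facebook, whose real pipeline also involves image matching and machine-learning classifiers over language and engagement signals.

```python
# Illustrative sketch of exact text matching against previously removed
# content. Hypothetical code, not Facebook's actual system.
import hashlib

# Hashes of (normalized) strings already removed as policy violations.
# Storing hashes rather than raw text is one common design choice.
removed_hashes = {
    hashlib.sha256(s.encode()).hexdigest()
    for s in ["example banned phrase", "another removed string"]
}

def matches_removed_content(text: str) -> bool:
    """Return True if the text is identical to a previously removed string."""
    normalized = text.strip().lower()
    digest = hashlib.sha256(normalized.encode()).hexdigest()
    return digest in removed_hashes
```

Exact matching like this only catches verbatim re-posts; the classifiers mentioned in the blog post exist precisely to catch paraphrases and novel variations that hashing misses.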
H/T The Verge
Mikael Thalen is a tech and security reporter based in Seattle, covering social media, data breaches, hackers, and more.