On Wednesday, Instagram was included for the first time in Facebook’s quarterly transparency report.
In a blog post, Guy Rosen, Facebook’s vice president of integrity, outlines how the tech giant works to enforce its rules across four specific policy areas.
“In this first report for Instagram, we are providing data on four policy areas: child nudity and child sexual exploitation; regulated goods — specifically, illicit firearm and drug sales; suicide and self-injury; and terrorist propaganda,” the company states.
In terms of self-harm, Rosen says Instagram removed roughly 835,000 pieces of content in the second quarter, 77.8 percent of which was detected by the company proactively. In the third quarter, Instagram removed approximately 845,000 pieces of self-harm content, 79.1 percent of which was detected proactively.
As for the company’s policy on terrorism, Rosen states that 98.5 percent of terrorist-related content was proactively detected on Facebook, compared with 92.2 percent on Instagram. Facebook says it “will continue to invest in automated techniques to combat terrorist content” in hopes of pushing those percentages even higher.
Facebook also added that it has made progress tackling child exploitation on Instagram, with 512,000 instances removed in the second quarter and an additional 745,000 pieces of content in the third quarter.
Instagram likewise took down 1.5 million pieces of content related to drug sales while around 58,600 posts related to firearm sales were removed as well.
Neither Instagram nor Facebook released any statistics on fake accounts or hate speech, however, despite increased attention on those issues as the 2020 election draws near.
The company did state, though, that it is employing new tactics to crack down on hate speech before it is able to spread.
“Our detection techniques include text and image matching, which means we’re identifying images and identical strings of text that have already been removed as hate speech, and machine-learning classifiers that look at things like language, as well as the reactions and comments to a post, to assess how closely it matches common phrases, patterns and attacks that we’ve seen previously in content that violates our policies against hate,” the blog post notes.
H/T The Verge
Mikael Thalen is a tech and security reporter based in Seattle, covering social media, data breaches, hackers, and more.