Facebook revealed Thursday that it banned an astonishing 2.2 billion fake accounts in just the first three months of this year. As part of its Community Standards Enforcement Report, the social media company detailed why so many had been removed from the platform.
“The amount of accounts we took action on increased due to automated attacks by bad actors who attempt to create large volumes of accounts at one time,” wrote Guy Rosen, Facebook’s vice president of integrity.
The number is a significant increase over the previous quarter, when 1.2 billion accounts were removed in the final three months of 2018. The company said last month that its active user base is 2.38 billion people.
Facebook further boasted that its ability to detect hate speech has improved. The company says it proactively located 58.8% of such content in late 2018, a figure that rose to 65.4% in the most recent quarter. The increased detection rate led to the removal of 4 million posts in the first quarter of this year, up from 3.3 million during the final months of 2018.
“In six of the policy areas we include in this report, we proactively detected over 95% of the content we took action on before needing someone to report it,” Rosen added.
The company stated that it continues “to invest in technology to expand our abilities to detect this content across different languages and regions.”
The most prominent problem, according to Facebook’s data, revolves around the spread of spam. Posts and accounts are also flagged for containing nudity, harassment, and terrorist propaganda.
Facebook says it blocks and filters “hundreds of terms associated with drug sales” while also targeting the sales of firearms.
“In Q1 2019, we took action on about 900,000 pieces of drug sale content, of which 83.3% we detected proactively,” Rosen wrote. “In the same period, we took action on about 670,000 pieces of firearm sale content, of which 69.9% we detected proactively.”
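Those percentages imply absolute counts that the report doesn't spell out. Here's a quick sketch of the arithmetic (the totals and rates come from the figures above; the rounding and the `proactive_count` helper are ours, not Facebook's):

```python
# Approximate proactively detected items, derived from Facebook's
# reported Q1 2019 totals and proactive-detection rates.

def proactive_count(total_actions, proactive_rate):
    """Return the approximate number of items detected before a user report."""
    return round(total_actions * proactive_rate)

drug_proactive = proactive_count(900_000, 0.833)     # drug-sale content
firearm_proactive = proactive_count(670_000, 0.699)  # firearm-sale content

print(drug_proactive)     # roughly 749,700 pieces
print(firearm_proactive)  # roughly 468,330 pieces
```

In other words, of the roughly 1.57 million sale-related posts actioned that quarter, well over a million were caught before anyone reported them.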
H/T Business Insider
Mikael Thalen is a tech and security reporter based in Seattle, covering social media, data breaches, hackers, and more.