In an effort to battle the ever-growing issue of extremist content on its site, YouTube plans to grow its moderation team to 10,000 people in 2018. Algorithms can only go so far, YouTube CEO Susan Wojcicki said.
“Human reviewers remain essential to both removing content and training machine learning systems because human judgment is critical to making contextualized decisions on content,” Wojcicki wrote in a blog post detailing the announcement.
YouTube’s moderators have manually reviewed more than 2 million videos since June alone, and the company plans to expand this human team further in the coming year to identify and remove guideline-violating content more quickly and efficiently. It also aims to be more transparent about how it handles such “problematic content.”
Right now, YouTube uses machine learning to initially flag videos for review by moderators. Since deploying this method in June, YouTube has removed more than 150,000 violent extremist videos. Its machine learning algorithms were able to spot 98 percent of such videos on the site, helping moderators remove five times more videos than they could previously. Staffers are also removing content faster than before: Half of such videos are taken down within two hours of upload, and 70 percent within eight hours.
YouTube’s machine learning technology is currently fine-tuned to identify violent extremist videos specifically. However, the company is working to adapt the algorithms to other areas, such as hate speech and child safety. The latter has proved a serious issue for the video-watching platform.
Beginning in 2018, YouTube will regularly publish reports on how it’s enforcing its community guidelines. The reports will cover the types of flags YouTube receives, as well as the actions the company takes to remove inappropriate comments and video uploads.
After discovering that ads had been placed on more than 2 million inappropriate videos, YouTube also says it will take a new approach to advertising. The company plans to expand its team of ad reviewers to perform more manual curation, pairing ads with videos more appropriately. This should benefit advertisers, who shouldn’t find their products paired with unsavory content, as well as creators, who aim to make money off their uploads. It should benefit viewers, too, who should see more appropriate advertising on videos and less unsavory content on the site.
H/T The Hill
Christina Bonnington is a tech reporter who specializes in consumer gadgets, apps, and the trends shaping the technology industry. Her work has also appeared in Gizmodo, Wired, Refinery29, Slate, Bicycling, and Outside Magazine. She is based in the San Francisco Bay Area and has a background in electrical engineering.