In a blog post published Sunday, the company laid out its strategy for combatting extremist videos. First, it will expand its use of machine learning to identify extremist and terrorist videos. This has been a challenge for Google since videos of terrorism posted by a news source can be informative, while similar clips posted by a different user can be used to glorify the incident.
Google claims machine learning helped find and assess more than 50 percent of the content it has removed over the last six months. It plans to put more money into research so it can train AI to more quickly identify and remove content that breaches its policies.
Google will also increase the number of people in YouTube’s “Trusted Flagger” program. The company says machines need help from humans to determine what does and does not fall within its guidelines, so it will expand the program by adding 50 organizations to the 63 it already works with. It will also continue to work with counter-extremist groups.
Perhaps the most intriguing change is that Google will now take a tougher stance on videos that don’t explicitly violate its policies. It says “problematic” videos—for example, those that contain inflammatory religious content—will no longer be monetized, recommended, or eligible for comments or user endorsements. This could very well be a response to the exodus of advertisers that left YouTube earlier this year for fear their ads were being displayed over extremist videos.
Google’s final move is to expand its “Redirect Method” with the hope of using advertisements to redirect potential ISIS recruits and change their mind about joining. Google says more than half a million minutes of anti-ISIS videos were viewed in past deployments of the system. It did not say how many individuals were affected by the campaign.
The Mountain View giant and its counterparts have faced increasing pressure from governments to curb extremist efforts after recent incidents of terrorism were linked to social media.
One of the terrorists who drove a vehicle into pedestrians on the London Bridge in June is said to have watched YouTube videos of a radical preacher. Following the attack, U.K. Prime Minister Theresa May insisted the internet was a spawning place for terrorism, and said social media companies should be forced to create a backdoor for authorities to access private data.
“Enough is enough. My response to last night’s brutal terror attack: ‘Last night, our country fell victim to a brutal…’” —Theresa May, posting on Facebook on Sunday, June 4, 2017
That’s one request tech companies will be keen to avoid, but these new anti-terrorism policies are a step in the right direction.
“The measures being implemented by Google, particularly those relating to hateful speakers, are encouraging first steps,” a spokesperson for the U.K.’s Home Office told Bloomberg. “However, we feel the technology companies can and must go further and faster, especially in identifying and removing hateful content itself.”
Google seems to think its efforts are in vain if other tech leaders don’t follow suit. It will share technology with Facebook, Microsoft, and Twitter to help them establish their own methods.
Facebook in particular has been under extreme scrutiny for its slow response in dealing with explicit content. In May, the social network said it would take a similar approach to Google by hiring 3,000 additional employees to review posts that may breach its community standards.
H/T Ars Technica
Phillip Tracy is a former technology staff writer at the Daily Dot. He's an expert on smartphones, social media trends, and gadgets. He previously reported on IoT and telecom for RCR Wireless News and contributed to NewBay Media magazine. He now writes for Laptop magazine.