The company will take aggressive actions to stop the spread of extremist propaganda.
In a blog post published Sunday, the company laid out its strategy for combatting extremist videos. First, it will expand its use of machine learning to identify extremist and terrorist videos. This has been a challenge for Google since videos of terrorism posted by a news source can be informative, while similar clips posted by a different user can be used to glorify the incident.
Google claims machine learning helped find and assess more than 50 percent of the content it has removed over the last six months. It plans to put more money into research so it can train AI to more quickly identify and remove content that breaches its policies.
It will also increase the number of people in YouTube’s “Trusted Flagger” program. Google claims machines need help from humans to determine what does and does not fall within the company’s guidelines. It will expand the program by adding 50 organizations to the 63 it already works with. It will also continue to work with counter-extremist groups.
Perhaps the most intriguing change is that Google will now take a tougher stance on videos that don’t explicitly violate its policies. It says “problematic” videos—for example, those that contain inflammatory religious content—will no longer be monetized, recommended, or eligible for comments or user endorsements. This could very well be a response to the exodus of advertisers who pulled their spending from YouTube earlier this year over fears their ads were being displayed alongside extremist videos.
Google’s final move is to expand its “Redirect Method” with the hope of using advertisements to redirect potential ISIS recruits and change their mind about joining. Google says more than half a million minutes of anti-ISIS videos were viewed in past deployments of the system. It did not say how many individuals were affected by the campaign.
The Mountain View giant and its counterparts have faced increasing pressure from governments to curb extremist efforts after recent incidents of terrorism were linked to social media.
One of the terrorists who drove a vehicle into pedestrians on London Bridge in June is said to have watched YouTube videos of a radical preacher. Following the attack, U.K. Prime Minister Theresa May insisted the internet had become a breeding ground for terrorism and said social media companies should be forced to create a backdoor for authorities to access private data.
That’s one request tech companies will be keen to avoid, but these new anti-terrorism policies are a step in the right direction.
“The measures being implemented by Google, particularly those relating to hateful speakers, are encouraging first steps,” a spokesperson for U.K.’s Home Office told Bloomberg. “However, we feel the technology companies can and must go further and faster, especially in identifying and removing hateful content itself.”
Google seems to think its efforts are in vain if other tech leaders don’t follow suit. It will share technology with Facebook, Microsoft, and Twitter to help them establish their own methods.
Facebook in particular has been under extreme scrutiny for its slow response in dealing with explicit content. In May, the social network said it would take a similar approach to Google by hiring 3,000 additional employees to review posts that may breach its community standards.
H/T Ars Technica
Phillip Tracy is a former technology staff writer at the Daily Dot. He's an expert on smartphones, social media trends, and gadgets. He previously reported on IoT and telecom for RCR Wireless News and contributed to NewBay Media magazine. He now writes for Laptop magazine.