Google will take aggressive action to stop the spread of extremist propaganda on YouTube.
In a blog post published Sunday, the company laid out its strategy for combatting extremist videos. First, it will expand its use of machine learning to identify extremist and terrorist videos. This has been a challenge for Google since videos of terrorism posted by a news source can be informative, while similar clips posted by a different user can be used to glorify the incident.
Google claims machine learning helped find and assess more than 50 percent of the content it has removed over the last six months. It plans to put more money into research so it can train AI to more quickly identify and remove content that breaches its policies.
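To make the machine-assisted flagging idea concrete, here is a deliberately toy sketch. Google's real models, features, and training data are not public; the signal terms, weights, and threshold below are invented purely for illustration.

```python
# Illustrative only: a toy content scorer in the spirit of the
# machine-learning flagging described above. A real system would learn
# these signals from labeled data rather than hand-code them.
SIGNAL_TERMS = {
    "glorify": 0.6,
    "join": 0.3,
    "attack": 0.2,
    "recruitment": 0.5,
}
REVIEW_THRESHOLD = 0.5  # invented cutoff for illustration

def violation_score(title: str) -> float:
    """Sum the weights of signal terms found in a video title, capped at 1.0."""
    words = title.lower().split()
    return min(1.0, sum(SIGNAL_TERMS.get(w, 0.0) for w in words))

def needs_human_review(title: str) -> bool:
    # The machine flags; a human reviewer makes the final call.
    return violation_score(title) >= REVIEW_THRESHOLD

print(needs_human_review("clips that glorify the attack"))   # True
print(needs_human_review("news analysis of the incident"))   # False
```

The point of the two-function split is the division of labor the article describes: automation surfaces candidates at scale, and humans judge context—the same clip can be news coverage or glorification.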
It will also increase the number of people in YouTube’s “Trusted Flagger” program. Google claims machines need help from humans to determine what does and does not fall within the company’s guidelines. It will expand the program by adding 50 organizations to the 63 it already works with. It will also continue to work with counter-extremist groups.
Perhaps the most intriguing change is that Google will now take a tougher stance on videos that don’t explicitly violate its policies. It says “problematic” videos—for example, those that contain inflammatory religious content—will no longer be monetized, recommended, or eligible for comments or user endorsements. This could very well be a response to the exodus of advertisers that pulled out of YouTube earlier this year for fear their ads were being displayed alongside extremist videos.
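The tiered treatment described above can be sketched as a simple decision function: policy-violating videos come down, while “problematic” ones stay up in a limited state. The field names and the exact set of restrictions here are an assumption for illustration, not Google’s actual implementation.

```python
# Illustrative sketch of the tiered treatment described above.
from dataclasses import dataclass

@dataclass
class Video:
    violates_policy: bool
    borderline: bool  # e.g. inflammatory religious content

def treatment(v: Video) -> dict:
    if v.violates_policy:
        return {"visible": False}  # removed outright
    if v.borderline:
        # Stays on the platform, but loses monetization, recommendations,
        # comments, and user endorsements.
        return {"visible": True, "monetized": False,
                "recommended": False, "comments": False, "likes": False}
    return {"visible": True, "monetized": True,
            "recommended": True, "comments": True, "likes": True}

print(treatment(Video(violates_policy=False, borderline=True)))
```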
Google’s final move is to expand its “Redirect Method” with the hope of using advertisements to redirect potential ISIS recruits and change their mind about joining. Google says more than half a million minutes of anti-ISIS videos were viewed in past deployments of the system. It did not say how many individuals were affected by the campaign.
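The Redirect Method amounts to intercepting recruitment-oriented searches and serving counter-narrative content instead. A minimal sketch, with keywords and the destination URL invented for illustration:

```python
# Illustrative sketch of the "Redirect Method" idea: queries that look
# recruitment-seeking trigger an ad pointing at counter-narrative videos.
from typing import Optional

REDIRECT_KEYWORDS = {"join isis", "isis recruitment"}  # invented examples
COUNTER_PLAYLIST = "https://example.com/counter-narrative-playlist"

def ad_for_query(query: str) -> Optional[str]:
    """Return a counter-narrative ad target for recruitment-seeking
    queries; otherwise serve no special ad."""
    q = query.lower()
    if any(keyword in q for keyword in REDIRECT_KEYWORDS):
        return COUNTER_PLAYLIST
    return None

print(ad_for_query("how to join isis"))  # redirected to counter-content
print(ad_for_query("cat videos"))        # None
```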
The Mountain View giant and its counterparts have faced increasing pressure from governments to curb extremist efforts after recent incidents of terrorism were linked to social media.
One of the terrorists who drove a vehicle into pedestrians on the London Bridge in June is said to have watched YouTube videos of a radical preacher. Following the attack, U.K. Prime Minister Theresa May insisted the internet was a spawning place for terrorism, and said social media companies should be forced to create a backdoor for authorities to access private data.
That’s one request tech companies will be keen to avoid, but these new anti-terrorism policies are a step in the right direction.
“The measures being implemented by Google, particularly those relating to hateful speakers, are encouraging first steps,” a spokesperson for U.K.’s Home Office told Bloomberg. “However, we feel the technology companies can and must go further and faster, especially in identifying and removing hateful content itself.”
Google seems to think its efforts are in vain if other tech leaders don’t follow suit. It will share technology with Facebook, Microsoft, and Twitter to help them establish their own methods.
Facebook in particular has been under extreme scrutiny for its slow response in dealing with explicit content. In May, the social network said it would take a similar approach to Google by hiring 3,000 additional employees to review posts that may breach its community standards.
H/T Ars Technica
Phillip Tracy is a former technology staff writer at the Daily Dot. He's an expert on smartphones, social media trends, and gadgets. He previously reported on IoT and telecom for RCR Wireless News and contributed to NewBay Media magazine. He now writes for Laptop magazine.