These are the 4 steps Google will take to end online extremism

The company will take aggressive actions to stop the spread of extremist propaganda.

Phillip Tracy

Google has four new ways to remove terrorist content from YouTube. The new policies are a response to U.K. lawmakers who say social media offers terrorists a “safe place” online.

In a blog post published Sunday, the company laid out its strategy for combating extremist videos. First, it will expand its use of machine learning to identify extremist and terrorist videos. This has been a challenge for Google, since videos of terrorism posted by a news source can be informative, while similar clips posted by a different user can be used to glorify the incident.

Google claims machine learning helped find and assess more than 50 percent of the content it has removed over the last six months. It plans to put more money into research so it can train AI to more quickly identify and remove content that breaches its policies.

It will also increase the number of people in YouTube’s “Trusted Flagger” program. Google says machines need help from humans to determine what does and does not fall within the company’s guidelines. It will expand the program by adding 50 organizations to the 63 it already works with, and it will continue to work with counter-extremist groups.

Perhaps the most intriguing change is that Google will now take a tougher stance on videos that don’t explicitly violate its policies. It says “problematic” videos, such as those containing inflammatory religious content, will no longer be monetized, recommended, or eligible for comments or user endorsements. This could very well be a response to the exodus of advertisers that pulled out of YouTube earlier this year for fear their ads were being displayed alongside extremist videos.

Google’s final move is to expand its “Redirect Method” in the hope of using advertisements to redirect potential ISIS recruits and change their minds about joining. Google says more than half a million minutes of anti-ISIS videos were viewed in past deployments of the system. It did not say how many individuals were affected by the campaign.

The Mountain View giant and its counterparts have faced increasing pressure from governments to curb extremist efforts after recent incidents of terrorism were linked to social media.

One of the terrorists who drove a vehicle into pedestrians on London Bridge in June is said to have watched YouTube videos of a radical preacher. Following the attack, U.K. Prime Minister Theresa May insisted the internet was a breeding ground for terrorism and said social media companies should be forced to create a backdoor for authorities to access private data.

That’s one request tech companies will be keen to avoid, but these new anti-terrorism policies are a step in the right direction.

“The measures being implemented by Google, particularly those relating to hateful speakers, are encouraging first steps,” a spokesperson for the U.K.’s Home Office told Bloomberg. “However, we feel the technology companies can and must go further and faster, especially in identifying and removing hateful content itself.”

Google seems to think its efforts will be in vain if other tech leaders don’t follow suit. It will share technology with Facebook, Microsoft, and Twitter to help them establish their own methods.

Facebook in particular has come under intense scrutiny for its slow response to explicit content. In May, the social network said it would take a similar approach to Google’s, hiring 3,000 additional employees to review posts that may breach its community standards.

H/T Ars Technica
