After being criticized for inciting social unrest in several countries, Facebook announced on Wednesday that it plans to remove posts with false information intended to spark violence.
“Reducing the distribution of misinformation—rather than removing it outright—strikes the right balance between free expression and a safe and authentic community,” a Facebook spokesperson told CNBC. “There are certain forms of misinformation that have contributed to physical harm, and we are making a policy change which will enable us to take that type of content down. We will begin implementing the policy during the coming months.”
Facebook CEO Mark Zuckerberg referenced events that transpired in Sri Lanka and Myanmar, saying he felt a “deep sense of responsibility to try to fix the problem.”
Experts blamed the social network for the explosion of hate speech at the beginning of the Rohingya crisis in Myanmar. Thousands of the Rohingya, a mostly Muslim ethnic minority, were forced to flee to Bangladesh following insurgent attacks.
“Facebook is arguably the only source of information online for the majority in Myanmar,” one cybersecurity analyst in Yangon told the Guardian.
In Sri Lanka, Facebook was blamed as a major source of the misinformation that sparked anti-Muslim riots, prompting the country to temporarily ban several social media apps.
Facebook already has policies to remove direct threats of violence or hate speech but “has been hesitant to remove rumors that do not directly violate its content policies,” according to the New York Times.
Facebook said the new policy would focus on working with local and international groups to identify fake news. The Wall Street Journal reported that the strategy will first be introduced in Sri Lanka and Myanmar.
H/T Business Insider