Instagram has begun demoting content it deems “inappropriate” even if such posts do not violate the app’s rules.
The announcement was made Wednesday by Facebook, which owns Instagram, as part of the company’s “remove, reduce, and inform” policy. Posts flagged as inappropriate will now be barred from appearing on the app’s Explore and hashtag pages, severely limiting their ability to reach a wide audience.
“We have begun reducing the spread of posts that are inappropriate but do not go against Instagram’s Community Guidelines, limiting those types of posts from being recommended on our Explore and hashtag pages,” Facebook said.
Facebook specifically defines “low quality” content as anything that is violent, spammy, or “sexually suggestive.”
“For example, a sexually suggestive post will still appear in Feed if you follow the account that posts it, but this type of content may not appear for the broader community in Explore or hashtag pages,” the company’s statement says. The new policy also aims to further tackle clickbait, harassment, and fake news. Will Ruben, Instagram’s product manager, says the company will rely on content moderators and machine learning to demote inappropriate posts.
While Instagram has provided a brief outline of the changes, content creators have little insight into how to navigate the new policy. A help page on Instagram’s website merely states that the service uses “a variety of signals” to determine whether content will be allowed to reach the app’s community.
“Not all posts or accounts are eligible to be surfaced in Explore and hashtag pages,” Instagram says. “We use a variety of signals, for example, if an account has recently gone against our Community Guidelines, to determine which posts and accounts can be recommended to the community.”
The announcement from Facebook also included plans to combat fake news on its main social media site, on Instagram, and on its Messenger app.