Periscope, the Twitter-owned streaming platform, is getting a slight upgrade to combat the troll armies of the internet.
In a Medium post titled “A Safer Conversation,” Periscope detailed plans to expand how its report system identifies abusive chat users.
Periscope’s original system allowed users to report individual comments, which were then sent to a group of other users. Those users voted on whether the comment counted as abuse or spam, and if a majority agreed, the offending user would temporarily lose the ability to post comments in that specific chat. Repeat offenders would have their commenting privileges temporarily revoked across all chats.
Starting on August 10, actual Periscope moderators “will also review and suspend accounts for repeatedly sending chats that violate our guidelines.” This implies that Periscope will take a much more active role in community moderation, rather than letting algorithms and volunteers determine the legitimacy of reports. The old system will remain in place alongside the new one, but it’s easy to see how it could let abusive comments go unpunished: a randomly selected volunteer might not perceive a comment the same way the person reporting it does—or might be a troll themselves.
Periscope’s community guidelines apply to all broadcasts on both Periscope and Twitter. Periscope warns against posting comments that encourage physical violence, target other users with harassment, disclose another person’s private information, endanger minors, or try to work around suspensions by using alternate accounts.
Twitter itself has a long history of (intentionally or otherwise) acting as a relatively safe haven for internet trolls, neo-Nazi rhetoric, and alt-right ideologues. Twitter ended 2017 by banning a bot that outed Nazis who made imposter accounts posing as non-white people in order to post horrible comments and stir up racial hatred.
If you’re new to Periscope or just want to use it to its fullest, check out the Daily Dot’s beginner’s guide.