
Public Domain Pictures (Public Domain)

Microsoft developing voice filters to block ‘toxic’ users on Xbox Live

Xbox unveils a new text-based filtering system and says voice filters are around the corner.

 

Mikael Thalen

Tech

Posted on Oct 15, 2019   Updated on May 20, 2021, 1:24 am CDT

Microsoft has unveiled a new text-based filter to root out “toxic” messages on Xbox Live and revealed that it is working on a similar system for blocking actual voices.

The first system, announced in a blog post from Xbox on Monday, will scan text-based messages on Xbox Live and block specific content based on a user’s preferences.

Users can adjust the filter by choosing between Friendly, Medium, Mature, and Unfiltered modes. The filter is also customizable: users can opt to allow specific words or block words that Xbox’s defaults don’t already exclude.
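For illustration only, here is a minimal sketch of how a preference-based filter like this might be structured, with per-user allow and block lists layered over a chosen mode. The level names come from the article, but the severity scale, thresholds, and function names are hypothetical and do not reflect Xbox’s actual implementation.

```python
# Illustrative sketch only: a preference-based message filter with per-user
# allow/block overrides. The level names match the article, but the severity
# scale, thresholds, and function are hypothetical, not Xbox's API.
FILTER_LEVELS = {"Friendly": 1, "Medium": 2, "Mature": 3, "Unfiltered": float("inf")}

def is_blocked(word, severity, prefs):
    """Block a word if its severity meets the user's chosen filter level,
    after honoring the user's explicit allow and block lists."""
    if word in prefs["allowed_words"]:
        return False
    if word in prefs["blocked_words"]:
        return True
    return severity >= FILTER_LEVELS[prefs["level"]]

prefs = {"level": "Medium", "allowed_words": {"noob"}, "blocked_words": {"scrub"}}
print(is_blocked("noob", 2, prefs))   # False: the user explicitly allowed it
print(is_blocked("scrub", 1, prefs))  # True: the user explicitly blocked it
```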

Messages that the system believes to be inappropriate will now carry a “potentially offensive hidden message” warning label. Adult Xbox Live account holders will be able to click through to see the actual message if they wish, while child accounts will not be able to view such messages by default.

The message filter will be made available to those enrolled in the Xbox Insiders program today. Microsoft hopes to make the system available to all Xbox users later this fall.

But that’s not all the company is working on. Xbox is also aiming to develop a voice-based filter for its party chat feature.

Speaking with the Verge, Dave McCarthy, head of Microsoft’s Xbox operations, said that his team is looking into combining Microsoft’s speech-to-text technology with its new filtering system.

“What we’ve started to experiment with is ‘Hey, if we’re real-time translating speech-to-text, and we’ve got these text filtering capabilities, what can we do in terms of blocking possible communications in a voice setting?'” McCarthy said. “It’s early days there, and there are a myriad of other AI and technology that we’re looking to stack around the voice problem, things like emotion detection and context detection that we can apply there. I think we’re learning overall… we’re taking our time with this to do it right.”
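To make the idea concrete, here is a rough Python sketch of the pipeline McCarthy outlines: transcribe a chunk of voice chat in real time, then run the transcript through the same kind of text filter used for written messages. The transcription step, block list, and function names are stand-ins invented for illustration; none of them are Microsoft’s.

```python
# Illustrative sketch of the approach McCarthy describes: transcribe party
# chat in real time, then run the transcript through the existing text
# filter. transcribe_chunk() and BLOCKED are hypothetical placeholders,
# not Microsoft APIs.
BLOCKED = {"example_slur"}  # stand-in for the text filter's block list

def transcribe_chunk(audio_chunk):
    """Placeholder speech-to-text step; a real system would stream the
    audio to a speech-recognition service and get text back."""
    return audio_chunk.get("transcript", "")

def screen_voice_chunk(audio_chunk):
    """Decide whether to pass a chunk of voice chat through, based on
    whether its transcript trips the text filter."""
    words = transcribe_chunk(audio_chunk).lower().split()
    flagged = [w for w in words if w in BLOCKED]
    return {"allow": not flagged, "flagged_words": flagged}

print(screen_voice_chunk({"transcript": "good game everyone"}))
# {'allow': True, 'flagged_words': []}
```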

Rob Smith, a program manager on the Xbox Live engineering team, added that the ultimate goal would be a system that can detect and bleep out inappropriate comments much as is done on TV.

“It’s a great goal, but we’re going to have to take steps towards that,” Smith said.

Such a system would be difficult to build, since live TV shows typically run on a delay while gamers communicate and coordinate in real time. But Smith says that, at a minimum, Xbox could detect a user’s “level of toxicity.”

“In the meantime, we could do things like analyzing a person’s speech and figuring out, overall, what’s their level of toxicity they’re using in this session?” Smith said. “And maybe doing things like automatically muting them.”
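As a rough sketch of what Smith describes, the snippet below tracks a session-level “toxicity” ratio and flips to auto-mute once it crosses a threshold. The scoring scheme, threshold value, and class name are invented for illustration and are not Xbox’s.

```python
# Illustrative sketch of the session-level idea Smith floats: track a
# running "toxicity" ratio over a player's messages and auto-mute past a
# threshold. The scoring and threshold are hypothetical, not Xbox's.
class SessionToxicityTracker:
    def __init__(self, mute_threshold=0.3):
        self.total = 0
        self.flagged = 0
        self.mute_threshold = mute_threshold

    def record(self, message_was_flagged):
        """Log one message (or transcribed voice line) for this session."""
        self.total += 1
        if message_was_flagged:
            self.flagged += 1

    def should_mute(self):
        """Auto-mute once the flagged share of the session crosses the threshold."""
        if self.total == 0:
            return False
        return self.flagged / self.total >= self.mute_threshold

tracker = SessionToxicityTracker()
for flagged in [False, True, True, False]:
    tracker.record(flagged)
print(tracker.should_mute())  # True: half the session's messages were flagged
```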

McCarthy went on to add that the company is taking user privacy into consideration while building such systems.

“We have to respect privacy requirements at the end of the day for our users, so we’ll step into it in a thoughtful manner, and transparency will be our guiding principle to have us do the right thing for our gamers,” McCarthy said.


H/T The Verge

*First Published: Oct 15, 2019, 3:27 pm CDT