How ToxMod Plans to Fix Toxic Gaming Chat

Or, teaching AI how and why people swear

Key Takeaways

  • ToxMod, by Boston-based company Modulate, claims to detect and act against disruptive speech automatically, in real time.
  • Instead of relying on simple keyword matching, ToxMod uses machine learning to weigh intangibles like emotion, volume, and rhythm.
  • ToxMod is currently designed for use with in-game chat rooms, but there’s no reason it couldn’t come to your Twitch channel.

A Boston-based company is using machine learning to create what it bills as the world’s first voice-native moderation service, which can tell the difference between what’s being said and what it means.

ToxMod is an attempt to solve a core paradox of moderating any open space on the internet: there aren’t enough humans to keep up with demand, but algorithms, filters, and report systems don’t understand nuance.

Drawing on its database, ToxMod tracks factors in players’ speech, such as emotion and volume, which helps it distinguish between a momentary lapse and a pattern of behavior. It was recently announced as an addition to the 7v7 American football game Gridiron, currently in Steam Early Access.

"Everyone knows that harassment, hate speech, and toxicity in voice chat and gaming is a massive problem. That’s commonly understood," said Carter Huffman, chief technology officer and co-founder of Modulate, in a Google Meet call with Lifewire. "We could take the features that we were extracting through this variety of machine-learning systems and coalesce that into a system that took into account all this expert knowledge that we were learning from the community."

All Hail Our New Robot Moderators

Modulate has been working on ToxMod since last fall, and has incorporated it as one of the company’s three core services. It also offers VoiceWear, a voice disguiser powered by machine learning, and VoiceVibe, an aggregator service that lets users find out what the people in their communities are discussing.

When ToxMod is running in a chat, it can be programmed via Modulate’s admin panel to take a variety of automatic actions, such as issuing warnings, muting players, or individually adjusting volume.
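Modulate hasn’t published its configuration format, but a policy of this kind can be pictured as a mapping from offense severity to enabled actions. The sketch below is purely illustrative: the keys, thresholds, and function names are invented, not Modulate’s actual API.

```python
# Hypothetical sketch of a per-community moderation policy, in the
# spirit of the actions described above (warnings, muting, volume
# adjustment). All names and values here are invented assumptions.

policy = {
    "warn": {"enabled": True, "threshold": "mild"},
    "mute": {"enabled": True, "threshold": "severe", "duration_s": 300},
}

def actions_for(offense_severity: str) -> list[str]:
    """Return the configured actions that apply at a given severity."""
    order = ["mild", "severe"]
    rank = order.index(offense_severity)
    actions = []
    for name, rule in policy.items():
        if rule["enabled"] and rank >= order.index(rule["threshold"]):
            actions.append(name)
    return actions

print(actions_for("mild"))    # ['warn']
print(actions_for("severe"))  # ['warn', 'mute']
```

The point of the structure is that each community tunes its own thresholds in the admin panel, and the system applies the matching actions automatically.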

It uses a system of triage in which its local instance is the first to take action, then checks with Modulate’s servers for confirmation, and only finally escalates to the point of calling for human intervention. By running through each check in turn, a sequence Modulate calls "triage gates," ToxMod aims to give a small team of moderators the tools to effectively oversee a much larger community.
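The escalation flow described above can be sketched as a series of gates, where each gate either resolves a voice clip or passes it to the next, costlier check. This is a loose illustration of the idea, not Modulate’s implementation; the class, scores, and thresholds are assumptions.

```python
# Illustrative sketch of a "triage gates" pipeline: cheap local checks
# first, server confirmation second, humans only for the hard cases.
# All names and thresholds are hypothetical.

from dataclasses import dataclass

@dataclass
class Clip:
    """A snippet of voice chat scored by two models."""
    player: str
    local_score: float   # 0.0 (benign) .. 1.0 (clearly toxic), local model
    server_score: float  # refined score from a server-side model

def triage(clip: Clip) -> str:
    # Gate 1: the local instance resolves clear-cut cases on its own.
    if clip.local_score < 0.3:
        return "ignore"            # benign; no action taken
    if clip.local_score > 0.9:
        return "auto-mute"         # obvious offense; act immediately
    # Gate 2: ambiguous clips go to the server model for confirmation.
    if clip.server_score > 0.7:
        return "warn"              # server agrees it's an offense
    # Gate 3: only the remaining hard cases reach human moderators.
    return "escalate-to-human"

print(triage(Clip("p1", 0.1, 0.0)))   # ignore
print(triage(Clip("p2", 0.95, 0.9)))  # auto-mute
print(triage(Clip("p3", 0.5, 0.8)))   # warn
print(triage(Clip("p4", 0.5, 0.2)))   # escalate-to-human
```

The design payoff is that each gate filters out the bulk of traffic, so the expensive final step, human review, sees only a small fraction of clips.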

"The sad truth is that everyone has had that experience, of trying to use voice chat on whatever platform you were on, and discovering that, boy, that was a bad idea," said Modulate CEO Mike Pappas in a video call with Lifewire. "Being able to go in and say, ‘This is not the Wild West. There are rules.’ I think that’s really important."

Breaking the System

Naturally, the second or third question to ask about ToxMod is how to break it.

With many automatic moderation systems, such as the algorithms that govern Twitter, it’s easy to game them against people you don’t like. Just mass-report your target with a few sock-puppet accounts and they’ll reliably eat a ban.

"At a baseline level, ToxMod doesn’t need to rely on those additional player reports," Pappas said. "It’s still able to produce solid estimates of what offenses we need [to pay] attention to. You don’t have to worry about players trying to game the system, as there’s not really anything to game.

"All you have control over as a player is your own audio," continued Pappas. "The worst thing you can do is be less bad of a person so we don’t flag you as a bad actor, which I’d say is something close to mission success."

In general, then, the idea behind ToxMod is an active reclamation attempt. Many players at this point have experienced some form of harassment from random strangers in open voice channels, ranging from random insults to active threats. As a result, gamers tend to shy away from voice chat in general, preferring to give up its convenience in exchange for their own peace of mind.

"What we’re expecting to see [are bad actors spending] a lot less time in voice chat before getting found and removed," said Pappas. "That has more than just a linear impact. When everyone is seeing voice chat as a safer place, more good actors are willing to come back to voice chat and give it a try. I think everything can spiral in a positive direction."
