On Wednesday, Activision announced that it will be introducing real-time AI-powered voice chat moderation in the upcoming November 10 release of Call of Duty: Modern Warfare III. The company is partnering with Modulate to implement this feature, using technology called ToxMod to identify and take action against hate speech, bullying, harassment, and discrimination.
While the industry-wide challenge of toxic online behavior isn’t unique to Call of Duty, Activision says the scale of the problem has been heightened due to the franchise’s massive player base. So it’s turning to machine-learning technology to help automate the solution.
ToxMod is an AI-powered voice moderation system designed to identify and act against what Activision calls “harmful language” that violates the game’s code of conduct. The aim is to supplement Call of Duty’s existing anti-toxicity measures, which include text filtering in 14 languages and an in-game player-reporting system.
Activision says its previous anti-toxicity efforts have restricted voice or text chat for over 1 million accounts that violated its code of conduct. Moreover, 20 percent of players who received a first warning did not re-offend, suggesting that clear feedback helps moderate player behavior.
On the surface, real-time voice moderation seems like a notable advance in combating disruptive in-game behavior, especially since privacy concerns that might typically accompany such a system are less prominent in a video game. The goal is to make the game more enjoyable for all players.
However, at the moment, AI detection systems are notoriously fickle and can produce false positives, especially with non-native English speakers. Given variations in audio quality, regional accents, and various languages, it’s a tall order for a voice-detection system to work flawlessly under those conditions. Activision says a human will remain in the loop for enforcement actions:
“Detection happens in real time, with the system categorizing and flagging toxic language based on the Call of Duty Code of Conduct as it is detected. Detected violations of the Code of Conduct may require additional reviews of associated recordings to identify context before enforcement is determined. Therefore, actions taken will not be instantaneous. As the system grows, our processes and response times will evolve.”
Further, Activision says that Call of Duty’s voice chat moderation system “only submits reports about toxic behavior, categorized by its type of behavior and a rated level of severity based on an evolving model.” Human moderators then decide whether to act on those reports.
The new moderation system entered beta testing on Wednesday, initially covering North America and the existing games Call of Duty: Modern Warfare II and Call of Duty: Warzone. The full rollout of the moderation technology, excluding Asia, is planned to coincide with the launch of Modern Warfare III, beginning in English, with additional languages added over time.
Despite the potential drawbacks of false positives, there’s no way to opt out of AI listening in. As Activision’s FAQ says, “Players that do not wish to have their voice moderated can disable in-game voice chat in the settings menu.”