
Valorant is going to start storing voice comms to moderate abusive behaviour

All of Riot's games are getting updated privacy agreements to allow moderated voice comms

Riot has announced that it has updated the privacy policy across its online games to allow it to store voice communications data. Valorant will be the first game to see the new system in action, as Riot says it's moving to moderate voice comms in order to verify reports of abusive and toxic behaviour.

A report by TechCrunch provides some details about the voice moderation system Riot plans to implement. Audio data will be stored regionally, then pulled when a report is submitted. Riot says the audio will be evaluated to check for code of conduct violations, and if one has occurred, the player in question will have a chance to review the recording. Whether or not a violation is found, the audio will then be deleted.

Riot told TechCrunch that the system for monitoring voice communications is still in development, and may take the form of a voice-to-text transcription system or a machine learning model. Modulate's ToxMod software already has the capability to 'listen' to human speech and recognise specific words, phrases, or abusive language in general, and Riot may use a similar AI-driven solution in its voice moderation.

Valorant executive producer Anna Donlon says abusive behaviour is a “major problem” in competitive online gaming.

“If you don’t know that, then you likely haven’t suffered the type of abuse in-game that many people suffer,” she wrote in a tweet today. “I read and listen to the behaviors people report. I hear it myself in games. Stop telling me to ‘just mute.’ How about the abusers ‘just mute’ themselves? This is a meaningful step, one of many we’ll all need to take.”