A report by TechCrunch provides some details about the voice moderation system Riot plans to implement. Audio data will be stored regionally, then pulled when a report is submitted. Riot says the audio will be evaluated for code of conduct violations; if one has occurred, the player in question will have a chance to see the evidence before the recording is deleted. If no violation is found, the audio will also be deleted.
Riot told TechCrunch that the system for monitoring voice communications is still in development, and may take the form of voice-to-text transcription or a machine-learning model. Modulate’s ToxMod software can already ‘listen’ to human speech and recognise specific words, phrases, or abusive language in general, and Riot may use a similar AI-driven solution for its voice moderation.
Valorant executive producer Anna Donlon says abusive behaviour is a “major problem” in competitive online gaming.
I read and listen to the behaviors people report. I hear it myself in games. Stop telling me to “just mute.” How about the abusers “just mute” themselves? This is a meaningful step, one of many we'll all need to take. ❤️(2/2)
— Anna Donlon (@RiotSuperCakes) April 30, 2021
“If you don’t know that, then you likely haven’t suffered the type of abuse in-game that many people suffer,” she wrote in a tweet today. “I read and listen to the behaviors people report. I hear it myself in games. Stop telling me to ‘just mute.’ How about the abusers ‘just mute’ themselves? This is a meaningful step, one of many we’ll all need to take.”