FACEIT and Google built an AI that banned 20,000 toxic CS:GO players in a month

The AI cops are coming for you

As the all-knowing, all-powerful reach of algorithms continues to extend across every aspect of digital media, FACEIT is taking machine learning to a new frontier: toxic game chat. The company has partnered with Google Cloud and Jigsaw (formerly Google Ideas) to build an AI dedicated to rooting out and banning toxic players. It’s already in use, and it has already banned over 20,000 Counter-Strike: Global Offensive players.

The FACEIT AI is called Minerva, and “after months of training to minimize false positives,” it went into live use on the FACEIT platform in late August. Since then, the AI has issued 90,000 warnings and 20,000 bans for abusive chat and spam, all “without manual intervention.”

“If a message is perceived as toxic in the context of the conversation,” FACEIT explains in a blog post, “Minerva issues a warning for verbal abuse, while similar messages in a chat are flagged as spam. Minerva is able to take a decision just a few seconds after a match has ended: if an abuse is detected it sends a notification containing a warning or ban to the abuser.”
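FACEIT hasn’t published Minerva’s internals, but the flow it describes (score each message in the context of the match chat, treat repeated messages as spam, and escalate from warning to ban) maps onto a fairly simple post-match moderation pass. Here’s a minimal Python sketch of that flow. To be clear, everything in it is an assumption for illustration: the toy keyword scorer stands in for whatever model Google Cloud and Jigsaw actually built, and the thresholds (TOXICITY_THRESHOLD, SPAM_REPEAT_LIMIT, BAN_AFTER_WARNINGS) and escalation rule are invented, not FACEIT’s real policy.

```python
from collections import Counter
from dataclasses import dataclass

# Assumed values for illustration -- not FACEIT's actual rules.
TOXICITY_THRESHOLD = 0.85  # cutoff for "perceived as toxic"
SPAM_REPEAT_LIMIT = 3      # same message this many times counts as spam
BAN_AFTER_WARNINGS = 2     # strikes before a warning becomes a ban

ABUSIVE_TERMS = {"idiot", "trash", "uninstall"}  # toy stand-in vocabulary


@dataclass
class Verdict:
    player_id: str
    action: str  # "warning" or "ban"
    reason: str  # "verbal abuse" or "spam"


def toxicity_score(message: str, context: list[str]) -> float:
    """Toy stand-in for a trained classifier. Minerva reportedly uses a
    model built with Google Cloud and Jigsaw; this keyword check exists
    only so the sketch runs. The context argument is where a real model
    would use the surrounding conversation."""
    words = message.lower().split()
    return 1.0 if any(w in ABUSIVE_TERMS for w in words) else 0.0


def moderate_match(chat_log: list[tuple[str, str]],
                   prior_warnings: dict[str, int]) -> list[Verdict]:
    """Run once, a few seconds after a match ends, over the full chat log.
    chat_log is a list of (player_id, message) pairs in order."""
    context = [msg for _, msg in chat_log]
    repeats = Counter(chat_log)  # how often each player repeated a message
    verdicts: dict[str, Verdict] = {}  # at most one verdict per player

    for player_id, message in chat_log:
        if player_id in verdicts:
            continue
        if repeats[(player_id, message)] >= SPAM_REPEAT_LIMIT:
            reason = "spam"
        elif toxicity_score(message, context) >= TOXICITY_THRESHOLD:
            reason = "verbal abuse"
        else:
            continue
        # Escalate from warning to ban based on the player's record.
        strikes = prior_warnings.get(player_id, 0)
        action = "ban" if strikes >= BAN_AFTER_WARNINGS else "warning"
        verdicts[player_id] = Verdict(player_id, action, reason)

    return list(verdicts.values())
```

Running `moderate_match` on a sample log would then produce the kind of notification Minerva sends: a warning for a first offence, a ban for a repeat offender, issued seconds after the match wraps up.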

FACEIT says there’s been a 20.13% decrease in toxic messages since Minerva was introduced, from 2,280,769 in August to 1,821,723 in September.

So next time you’re thinking about being a jerkhole in your favourite FPS games, just remember the AI cops may be watching. Or better yet, take Ice T’s advice, and “learn some motherfucking respect, because I’m sick of hearing all that sexist, racist, homophobic bullshit from a few punkass motherfucking dumbfucks out there trying to ruin this shit for the rest of us.” (But I do have to wonder if that’d be tagged as a false positive.)