
The AI that banned 20,000 CS:GO players is getting taught how to ban better

Around a month ago we reported on a new AI called Minerva, built by Google and FACEIT to root out and ban players spreading toxicity in Counter-Strike: Global Offensive. It has been in use since August, and by October it had issued 90,000 warnings and 20,000 bans to players it perceived as toxic or spamming.

The problem with this system is that everything was done by the AI without any manual intervention from human moderators, and players took offence at this – a chat message can seem toxic only to someone unaware of the context, and may just be a case of friendly ribbing between teammates, for example.

Fortunately, the Minerva AI is getting an upgrade called ‘Justice’. This was teased by FACEIT on Twitter last month, but no one knew exactly what Justice was. According to a presentation by FACEIT CEO Niccolo Maisto, Justice is a community-run add-on for Minerva that will teach it how to do its job better – all through machine learning.

With the Justice add-on, if a player gets banned for toxic behaviour, other players on the server will be able to review the case – and then decide whether that player was indeed being toxic and whether Minerva was right to ban them. Minerva will then learn from this, and be able to issue bans and warnings more accurately.
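FACEIT hasn't published how Justice works under the hood, but the workflow described above – community reviewers vote on a ban, and the majority verdict feeds back into the model as a corrected training label – can be sketched roughly like this. Every name and structure here is illustrative, not Minerva's actual implementation:

```python
# Hypothetical sketch of a Justice-style feedback loop: reviewing players
# vote on whether a ban was correct, and the majority verdict becomes the
# training label used to retrain the moderation model. All names are
# assumptions for illustration; FACEIT has not published Minerva's internals.

from collections import Counter
from dataclasses import dataclass, field


@dataclass
class BanCase:
    chat_log: str
    model_verdict: str                      # what the AI originally decided
    votes: list = field(default_factory=list)  # "toxic" / "clean" from reviewers

    def community_verdict(self):
        """Majority vote of the reviewing players, or None if unreviewed."""
        if not self.votes:
            return None
        return Counter(self.votes).most_common(1)[0][0]


def collect_training_labels(cases):
    """Turn reviewed cases into (chat_log, label) pairs for retraining.

    Only cases with at least one community vote contribute. The community
    verdict overrides the model's original call – that override is the
    signal the model 'learns' from."""
    return [
        (case.chat_log, case.community_verdict())
        for case in cases
        if case.votes
    ]


cases = [
    BanCase("uninstall the game", "toxic", votes=["toxic", "toxic", "clean"]),
    BanCase("nice flash, idiot :)", "toxic", votes=["clean", "clean", "clean"]),
    BanCase("gg wp", "clean"),  # unreviewed, so excluded from retraining
]

labels = collect_training_labels(cases)
print(labels)
# → [('uninstall the game', 'toxic'), ('nice flash, idiot :)', 'clean')]
```

The second case shows the interesting path: the model called the message toxic, but reviewers who presumably recognised it as friendly banter overruled it, so the retraining set records it as clean.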

You can check out CEO Maisto talking about the new Justice add-on here. “One of the main concerns our community expressed when we announced Minerva,” he explains, “was that they didn’t really understand who was setting the threshold. Is it just some machine ruling on behaviour, is it a man behind the curtain playing judge on our behaviour and making the decisions?”

Allowing players to review cases of toxic behaviour through Justice will help remove those concerns, he goes on to say. There is currently no launch date for Justice, so watch what you say around Minerva.
