Over at Gamescom, Intel has spoken a little about its plan to utilise AI to combat toxicity in gaming communities. Aiming to reduce the human hours spent wading through sh– er, mean comments, lots of mean comments – Intel hopes to harness the computing breakthrough of the 2010s to intuitively weed out the worst offenders in-game, like some sort of cyber Judge Dredd.
“Toxicity – we talked a little about that and how it might hamper some people’s desire to get into the gaming space or continue playing certain games if the community’s getting too toxic,” Troy Severson, sales director of PC gaming at Intel, says. “So this is a place where we’re looking at applying AI to better manage those communities, so that the searches can be done and you don’t have to have a person sat there watching chat all the time. We can use AI to actually improve that toxic environment in games.”
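In practice, automated chat moderation of the kind Severson describes boils down to scoring each message and hiding the ones that cross a threshold, freeing humans from watching chat around the clock. The sketch below is purely illustrative – Intel hasn't published an API, and the function names, blocklist, and threshold here are all assumptions; a real system would use a trained classifier rather than keyword weights.

```python
# Hypothetical sketch of automated chat moderation, not Intel's actual system.
# A real deployment would score messages with a trained ML model; here a
# weighted blocklist stands in for the classifier to show the control flow.

def toxicity_score(message: str, blocklist: dict) -> float:
    """Score a chat message by summing the weights of flagged terms (0.0 = clean)."""
    words = message.lower().split()
    return sum(blocklist.get(w, 0.0) for w in words)

def moderate(message: str, blocklist: dict, threshold: float = 1.0) -> str:
    """Hide messages whose score reaches the threshold; pass the rest through."""
    if toxicity_score(message, blocklist) >= threshold:
        return "[message hidden by auto-moderator]"
    return message

# Usage: a tiny illustrative blocklist with per-term severity weights.
blocklist = {"noob": 0.4, "trash": 0.6, "uninstall": 0.8}
print(moderate("gg well played", blocklist))            # passes through unchanged
print(moderate("uninstall the game trash", blocklist))  # hidden (0.8 + 0.6 >= 1.0)
```

The threshold is the tunable trade-off the article circles back to later: set it too low and legitimate criticism gets scrubbed; too high and the bot misses the abuse it was built to catch.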
Artificial intelligence has become a driving force behind Intel’s product lineup, with its most recent Xeon Cascade Lake and Cooper Lake server CPUs and Ice Lake client CPUs featuring DL Boost (deep learning boost) to accelerate AI inference. Uses for this tech could include faster photo editing, rapid image searching, or even real-time translation.
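The speed-up behind DL Boost comes from running inference in low-precision integer arithmetic: weights and activations are quantized from 32-bit floats to 8-bit integers, the multiply-accumulate work happens on small integers, and the result is rescaled. The toy below only demonstrates that quantization idea in plain Python – it is a conceptual sketch, not Intel's VNNI instruction path, and the scale factors are arbitrary assumptions.

```python
# Conceptual demo of int8 inference arithmetic, the idea behind DL Boost.
# Not Intel's implementation: on hardware this is a fused VNNI instruction;
# here it's spelled out step by step to show where the approximation comes in.

def quantize(values, scale):
    """Map floats onto the int8 range [-128, 127] using a fixed scale factor."""
    return [max(-128, min(127, round(v / scale))) for v in values]

def int8_dot(a, b, scale_a, scale_b):
    """Dot product computed on quantized int8 values, then rescaled to float."""
    qa, qb = quantize(a, scale_a), quantize(b, scale_b)
    acc = sum(x * y for x, y in zip(qa, qb))  # integer accumulate (int32 on hardware)
    return acc * scale_a * scale_b

# Usage: the int8 result closely approximates the full float32 dot product.
activations = [0.5, -1.2, 0.3]
weights = [0.8, 0.1, -0.4]
exact = sum(x * y for x, y in zip(activations, weights))
approx = int8_dot(activations, weights, scale_a=0.01, scale_b=0.01)
print(exact, approx)
```

Trading a little numerical precision for much higher throughput is why inference, unlike training, is a good fit for these CPU instructions – and why Intel and Nvidia are arguing over exactly that workload below.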
But Intel isn’t alone in its desire to break into the AI space; Nvidia is throwing its considerable weight into the market, too. In fact, the two companies recently went head-to-head in a series of blog posts deriding one another, each proclaiming either CPU or GPU top dog for inference tasks. Long story short, both companies came out bloody and bruised from the whole ordeal.
Yet using AI to wash gaming of the scourge of toxicity could be a genuine good, opening the pastime to a wider, more inclusive, and all-encompassing audience. But such a system is also easily abused: hand an algorithm the keys to the web’s complex social fabric and any and all negative discourse, just or unjust, risks being scrubbed from its servers.
But let’s see if Intel can take this smart comment moderation bot from theory to reality first, and then we’ll worry about who’s wielding this power – and whether they’re doing so for good – when the time comes. It’ll probably be fine, right?