Rockstar is using AI to combat toxicity in GTA Online

Rockstar Games is following Riot Games and Blizzard Entertainment in implementing AI software to help curb toxicity in GTA Online on PC.


You can't fault Rockstar Games for trying to curb the prevalent toxicity in GTA Online.

Rockstar Games’ latest update for Grand Theft Auto Online is causing a bit of a stir.

On one hand, it isn’t the full next-gen upgrade for the PC version of GTA Online that fans have desperately wanted for a long time, so it was always going to be received negatively. On the other hand, it’s a genuine attempt at combating the game’s long-standing problem with toxicity, yet it’s still drawing mixed reactions.

As noted by Tez2 on X, the new PC-exclusive feature added as part of the October 3 patch for GTA Online lets Rockstar “ban you from using VC” (voice chat).

Almost immediately, a deluge of negative comments about Rockstar’s latest update came flooding in. Most shared anecdotes about how similar systems had banned them for using bad language in voice chat that already exists in the game itself. Simply put, these AI systems can block your access to a game even if you’re just reading out loud profanity contained in that game’s own script.

If you’re wondering why this is a PC-exclusive feature, there’s a simple reason: both Xbox and PlayStation already police players using their own methods. On Xbox, players can record voice chat from online lobbies and even submit voice clips when reporting inappropriate behavior. This isn’t limited to a single game; it applies to all games that support voice chat. PlayStation implemented a similar safety feature a few years back, though Sony clarified that it doesn’t “actively monitor or record” conversations.

Of course, it’s possible Rockstar will implement a similar measure on Xbox and PlayStation at a later date, but we doubt it.

Call of Duty is using the same AI software to curb toxicity.

Using AI for content moderation is neither new nor specific to the gaming industry, and the same criticisms apply here. The main concern with using AI to moderate content produced by actual humans is its struggle to separate genuinely toxic behavior from free expression. The inherent flaw of AI content moderation systems is that they actively try to “fix” the gray areas of normal human interaction.

It isn’t unusual for AI to sanction users for merely expressing themselves. Mishaps like removing content protected as free speech, flagging content that isn’t harmful, or letting genuinely harmful content slip through are common in AI moderation.
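To make that “gray area” problem concrete, here is a minimal, hypothetical Python sketch of a context-blind keyword filter. The function name, word list, and example lines are illustrative assumptions only, not Rockstar’s actual system; the point is simply that a filter with no sense of context flags a player quoting a line of in-game dialogue the same way it flags genuine abuse.

```python
import re

# Stand-ins for real profanity; purely illustrative.
BANNED_WORDS = {"idiot", "trash"}

def naive_moderate(transcript: str) -> bool:
    """Return True if the transcript should be flagged.

    The filter only checks for banned words; it cannot tell whether the
    speaker is harassing someone or just quoting the game's own script.
    """
    words = set(re.findall(r"[a-z']+", transcript.lower()))
    return bool(words & BANNED_WORDS)

# Genuine harassment -> flagged (intended behavior).
print(naive_moderate("You're trash, quit the lobby"))                    # True

# A player reading an in-game line out loud -> also flagged (false positive).
print(naive_moderate('The mission script literally says "you idiot"'))  # True
```

Real moderation systems are far more sophisticated than a word list, but the underlying difficulty of judging intent from a transcript is the same one critics keep pointing to.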

We’ll have to wait and see how this added safety measure works in practice, and whether Rockstar will walk back the change if it backfires.

In Riot Games’ case, it’s using voice chats to train AI to detect toxic behavior.

For what it’s worth, it’s a valid effort from Rockstar, which has historically drawn a certain type of player to its games, especially GTA Online.

Perhaps Rockstar is testing something out in GTA Online ahead of the official reveal of GTA 6, which, based on earlier leaks, should have happened by now.

Ray Ampoloquio // Articles: 7186
With over 20 years of gaming experience and technical expertise building computers, I provide trusted coverage and analysis of gaming hardware, software, upcoming titles, and broader entertainment trends.