League of Legends Rolls Out Automatic Ban System for Trolls

League of Legends developer Riot is rolling out a new system that will automatically ban abusive players as soon as 15 minutes after a match ends.

Unfortunately, abuse and online gaming go hand in hand. It’s been apparent for a while now, and though many try to tackle the verbal or written diarrhoea that comes out of some people, there’s just no sure-fire way to moderate everyone other than switching off forms of communication completely, as Nintendo has done with Splatoon. Riot is hoping to take the hassle out of moderating complaints, however, by introducing an intelligent automated system that can issue bans as soon as 15 minutes after a game ends.

In a post on its Player Behavior blog, Riot explains in more detail how the automated system will work. Once teammates or other players report someone for abuse, which Riot defines as “homophobia, racism, sexism, death threats, and other forms of excessive abuse,” the system will attempt to validate those reports. It will determine whether the abuse is worthy of punishment, and will send a “reform card” that pairs chat logs with an explanation of the punishment for those guilty of such behaviour. “These harmful communications will be punished with two-week or permanent bans within 15 minutes of game’s end,” Riot promises.
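Riot hasn’t published how this pipeline is implemented, but the flow it describes — validate the reports, decide a punishment, and pair it with a reform card — can be sketched roughly as follows. Every name here is hypothetical, including the classifier interface:

```python
# A minimal sketch of the reported workflow, not Riot's actual code.
# The classifier interface (is_abusive / is_excessive) is assumed.
from dataclasses import dataclass

@dataclass
class ReformCard:
    """Pairs the offending chat lines with an explanation of the punishment."""
    chat_lines: list
    punishment: str  # "two-week ban" or "permanent ban"

def process_reports(reports, classifier):
    """Validate reports and, for confirmed abuse, yield a punishment plus a
    reform card -- the step Riot says now happens within 15 minutes."""
    for report in reports:
        # Keep only the lines the classifier judges to be abusive.
        abusive = [line for line in report["chat_log"]
                   if classifier.is_abusive(line)]
        if not abusive:
            continue  # report not validated; no punishment issued
        punishment = ("permanent ban" if classifier.is_excessive(abusive)
                      else "two-week ban")
        yield report["player"], ReformCard(abusive, punishment)
```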

As you’d expect, leaving the power to ban users in the hands of a computer algorithm has left some people a little concerned, but on the game’s official forums, Lead Designer of Social Systems Jeffrey Lin explains in a little more detail that the system will learn which phrases are frequently reported rather than working from a blacklist of phrases.

“Every report and honor in the game is teaching the system about behaviors and what looks OK or not OK, so the system continuously learns over time,” he writes. “If a player shows excessive hate speech (homophobia, sexism, racism, death threats, so on) the system might hand out a permanent ban to the player for just one game. But, this is pretty rare!”
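The learning loop Lin describes — every report and honor acting as a training signal, rather than a fixed blacklist — could look something like the sketch below. This is purely illustrative; Riot hasn’t disclosed the actual model, and all names and data here are invented:

```python
# Illustrative only: a phrase filter trained by reports and honors,
# as opposed to a static blacklist. Not Riot's actual system.
from collections import defaultdict

class ReportTrainedFilter:
    def __init__(self):
        self.reported = defaultdict(int)  # phrase -> count in reported chat
        self.honored = defaultdict(int)   # phrase -> count in honored chat

    def learn(self, chat_log, was_reported):
        """Each reported or honored game updates the phrase counts,
        so the system 'continuously learns over time'."""
        for phrase in chat_log.lower().split():
            counts = self.reported if was_reported else self.honored
            counts[phrase] += 1

    def score(self, chat_log):
        """Higher scores mean the chat resembles language players report."""
        phrases = chat_log.lower().split()
        total = 0.0
        for phrase in phrases:
            r, h = self.reported[phrase], self.honored[phrase]
            total += r / (r + h + 1)  # +1 keeps unseen phrases near zero
        return total / max(len(phrases), 1)

f = ReportTrainedFilter()
f.learn("gg wp nice game", was_reported=False)   # honored game: clean chat
f.learn("uninstall and die", was_reported=True)  # reported game: abusive chat
print(f.score("gg wp"))              # low: phrases seen only in honored chat
print(f.score("uninstall and die")) # high: phrases seen only in reported chat
```

In practice the real system would be far more sophisticated, but the principle — labels coming from player reports rather than a hand-maintained word list — is the part Lin emphasises.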

Lin also added that Riot had been testing the algorithms behind closed doors back in July, but those automated reports were escalated for manual review by the Player Support team, something that would take a lot of effort to sustain if the system continued to work that way. The new system removes this human review step, allowing it to punish foul-mouthed players almost instantly. Lin said the moderation team would hand-review the first 1,000 cases handled by the system as it rolled out on North America and EU servers last week, adding that false positives were in the 1 in 6,000 range.

“So, we know the system isn’t perfect, but we think the accuracy is good enough to launch,” Lin said.

Following pushback from users against the idea of the system, Lin tweeted that Riot has already tweaked it a little, adding that “one case of the system being overaggressive is not a reason to shut the system off. Let’s be reasonable everyone!”