
Roblox's AI Voice Moderation: The Balance Between Machines and Humans
Exploring how Roblox uses AI for voice moderation while acknowledging the importance of human judgment.
As voice chat becomes more common in online games, so does the opportunity for abusive behavior, which has led companies like Roblox to apply machine learning to voice moderation. During a talk at GDC, Roblox’s Senior Technical Director Kiran Bhat explained why moderating voice in real time is so difficult: it requires understanding not just what is said but also tone and intent (a rough sketch of the real-time constraint follows the quote below).
“Moderating voice in real time sounds really daunting… it’s a really hard problem.”
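To make that real-time constraint concrete, here is a minimal sketch of splitting incoming voice audio into short windows so each chunk can be transcribed and scored while the conversation is still happening. The window length, sample rate, thresholds, and function names are illustrative assumptions, not details from the talk.

```python
from typing import Iterator, List

# Hypothetical parameters: short analysis windows keep moderation latency low
# enough to act while a conversation is still in progress.
WINDOW_SECONDS = 10
SAMPLE_RATE = 16_000  # 16 kHz mono PCM, a common input format for speech models


def audio_windows(pcm_samples: List[float]) -> Iterator[List[float]]:
    """Yield fixed-length chunks of a voice stream for incremental moderation."""
    step = WINDOW_SECONDS * SAMPLE_RATE
    for start in range(0, len(pcm_samples), step):
        yield pcm_samples[start:start + step]


def transcribe(chunk: List[float]) -> str:
    """Stub for a speech-to-text model; a real system would call an ASR service here."""
    return ""


def score_toxicity(transcript: str) -> float:
    """Stub for a classifier that would weigh wording, tone, and intent."""
    return 0.0


def moderate_stream(pcm_samples: List[float]) -> None:
    """Score each window as it arrives so flags can be raised in near real time."""
    for chunk in audio_windows(pcm_samples):
        transcript = transcribe(chunk)
        if score_toxicity(transcript) >= 0.5:  # hypothetical action threshold
            print(f"window flagged: {transcript!r}")
```

The point of the windowing is simply that moderation cannot wait for a full session recording; each chunk has to be handled as it arrives.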
According to Bhat, Roblox’s system effectively covers about 85% of toxic communication by detecting roughly 50 identified keywords. Even so, he acknowledged that human moderators still outperform AI in nuanced, borderline cases, underscoring the need for human oversight even in a machine-driven pipeline (a sketch of one such hybrid setup follows the quote below).
“Humans still surpass machines in cases where you might be very close to the decision boundary.”
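As a rough illustration of how keyword detection and human escalation might fit together, the sketch below flags transcripts containing known keywords, scores everything else with a stand-in model, and routes scores near the decision boundary to a human reviewer. The keyword set, thresholds, and review band are invented for the example and are not Roblox’s actual values.

```python
from dataclasses import dataclass

# Placeholder keyword set; the talk cites roughly 50 keywords covering ~85% of toxic speech.
TOXIC_KEYWORDS = {"keyword_a", "keyword_b", "keyword_c"}

# Hypothetical band around the decision boundary where a human moderator takes over.
HUMAN_REVIEW_BAND = (0.4, 0.6)
ACTION_THRESHOLD = 0.5


@dataclass
class ModerationDecision:
    flagged: bool       # machine decision
    confidence: float   # classifier confidence in the flag
    needs_human: bool   # true when the score sits near the decision boundary


def moderate_transcript(transcript: str, model_score: float) -> ModerationDecision:
    """Combine keyword matching with a model score standing in for tone/intent analysis."""
    words = set(transcript.lower().split())
    keyword_hit = bool(words & TOXIC_KEYWORDS)

    # Keyword hits are treated as high-confidence; everything else relies on the model.
    confidence = 0.95 if keyword_hit else model_score
    flagged = confidence >= ACTION_THRESHOLD

    # Borderline, nuance-heavy cases are escalated, echoing Bhat's point about humans.
    low, high = HUMAN_REVIEW_BAND
    needs_human = not keyword_hit and low <= confidence <= high

    return ModerationDecision(flagged, confidence, needs_human)


if __name__ == "__main__":
    print(moderate_transcript("that was keyword_a of you", model_score=0.1))
    print(moderate_transcript("an ambiguous sarcastic remark", model_score=0.55))
```

In a real pipeline the escalated cases would land in a human review queue rather than being decided automatically, which matches the talk’s point that people still handle the boundary cases best.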
In a year-long trial across 31 countries, the company has already reported a 50% decrease in abuse reports per hour of active talk. The result illustrates AI’s potential to address online toxicity while underscoring the indispensable role of human judgment.