Roblox has introduced an ML system that detects policy violations in voice communication, aiming to create a safe and civil online environment.

In an announcement on July 8, Kiran Bhat, Senior Technical Director at Roblox, said the company has deployed a machine-learning-based "Real-time Safety" system that processes millions of minutes of voice activity every day. The system can detect policy violations in voice communication with greater accuracy than human moderation.

The “Real-Time Monitoring” system is an ambitious effort and marks a new step forward in online safety. According to Roblox, no other company in the industry currently offers near real-time, multilingual voice safety to users.

The system works by analyzing both audio style (including volume and intonation) and spoken content. When it detects a violation, the system sends a warning to the offending user. If the behavior continues, Roblox applies stronger measures.
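As a rough illustration of this warn-then-escalate flow, the sketch below tracks per-user violations and picks an action; the class, counter, and action names are hypothetical, not Roblox's actual policy engine.

```python
# Minimal sketch of a warn-then-escalate flow; names and thresholds are
# illustrative, not Roblox's actual enforcement logic.
from dataclasses import dataclass


@dataclass
class UserModerationState:
    user_id: str
    recent_violations: int = 0


def handle_violation(state: UserModerationState) -> str:
    """Return the action to take for a newly detected voice violation."""
    state.recent_violations += 1
    if state.recent_violations == 1:
        return "send_warning"          # first offense: notify the user
    return "apply_stronger_measure"    # repeated offense: e.g. restrict voice


state = UserModerationState(user_id="u123")
print(handle_violation(state))  # send_warning
print(handle_violation(state))  # apply_stronger_measure
```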

Overcoming data challenges

To build this system, Roblox had to overcome several data challenges. Initially, the company had almost no human-labeled data at scale. Training an effective machine learning model requires a large amount of labeled data, but manually collecting and labeling thousands of hours of voice data is an extremely time- and resource-intensive process.

Roblox solved this problem by combining automatically labeled data for training and manually labeled data for evaluation:

  • Automatically labeled data for training: Roblox used large amounts of machine-labeled data produced with weak supervision. This approach allowed Roblox to label the volume of data the model needed in weeks instead of years (see the sketch after this list).

  • Manually labeled data for evaluation: Roblox leveraged its internal moderation team, which triages abuse reports from users, to manually label the evaluation data.
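A minimal sketch of this split, assuming a regex keyword rule as the weak labeling function and a small human-labeled hold-out set; the rule, example transcripts, and labels are placeholders, not Roblox's actual data.

```python
# Illustrative weak supervision: machine-label transcripts with a cheap,
# noisy heuristic for training, and keep a separate human-labeled set
# for evaluation. Keywords and data are placeholders.
import re

POLICY_PATTERN = re.compile(r"\b(badword1|badword2)\b", re.IGNORECASE)


def weak_label(transcript: str) -> int:
    """Return 1 (violation) / 0 (clean) from a fast, noisy keyword rule."""
    return int(bool(POLICY_PATTERN.search(transcript)))


# Large machine-labeled training pool (noisy labels, produced in bulk).
train = [(t, weak_label(t)) for t in ["hello there", "badword1 you", "nice game"]]

# Small human-labeled evaluation set (labels come from the moderation team).
eval_set = [("you are badword2", 1), ("good luck!", 0)]

accuracy = sum(weak_label(t) == y for t, y in eval_set) / len(eval_set)
print(train, accuracy)
```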

System architecture

The “Real-Time Monitoring” system is built on the Transformer architecture, which suits the requirements of low latency (near real-time response) and efficient processing of sequential data. Roblox evaluated and selected WavLM and Whisper, two open-source encoder models popular in the audio research community, to train the system.
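For a sense of what pairing one of these encoders with a classification head can look like, here is a sketch using the public Hugging Face microsoft/wavlm-base checkpoint; the checkpoint, mean pooling, and two-class head are assumptions for illustration, not Roblox's published model.

```python
# Sketch: a pretrained speech encoder feeding a small classification head.
# Checkpoint, pooling, and label set are illustrative assumptions.
import torch
from transformers import AutoFeatureExtractor, WavLMModel

extractor = AutoFeatureExtractor.from_pretrained("microsoft/wavlm-base")
encoder = WavLMModel.from_pretrained("microsoft/wavlm-base")
classifier = torch.nn.Linear(encoder.config.hidden_size, 2)  # violation / clean

waveform = torch.zeros(16000)  # 1 s of silence at 16 kHz as dummy input
inputs = extractor(waveform.numpy(), sampling_rate=16000, return_tensors="pt")

with torch.no_grad():
    hidden = encoder(**inputs).last_hidden_state  # (batch, frames, dim)
    logits = classifier(hidden.mean(dim=1))       # pool over time frames
print(logits.shape)  # torch.Size([1, 2])
```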

The system operates as a three-step pipeline (a combined sketch follows the list):

  1. Audio splitting: The first stage divides the audio into shorter segments whenever silences are detected between sentences. This makes policy-violating content easier to identify and label.

  2. Speech-to-text conversion: The second stage converts these audio segments into text using an automatic speech recognition (ASR) model. The ASR models used are publicly available and open source.

  3. Text classification: The final stage classifies the converted text using Roblox's internal text filter, which is designed to detect and block content in policy-violating categories and keywords. The text filter is a hybrid model trained on human-labeled policy-violation text data, combining an extended DistilBERT model with regular expression rules.
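A combined sketch of the three stages above, assuming the openai/whisper-tiny checkpoint for ASR and a simple regex as a stand-in for the internal text filter; the silence threshold, helper names, and dummy audio are illustrative.

```python
# End-to-end sketch of the three stages: silence-based splitting, ASR,
# then text classification. Checkpoint and filter are stand-ins.
import re
import numpy as np
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="openai/whisper-tiny")
POLICY_FILTER = re.compile(r"\b(badword1|badword2)\b", re.IGNORECASE)


def split_on_silence(audio: np.ndarray, sr: int, threshold: float = 0.01,
                     min_gap_s: float = 0.5) -> list[np.ndarray]:
    """Very rough splitter: cut wherever the signal stays below `threshold`."""
    hop = int(min_gap_s * sr)
    segments, start = [], 0
    for i in range(0, len(audio) - hop, hop):
        if np.abs(audio[i:i + hop]).max() < threshold and i > start:
            segments.append(audio[start:i])
            start = i + hop
    segments.append(audio[start:])
    return [s for s in segments if len(s) > 0]


def segment_violates(segment: np.ndarray, sr: int = 16000) -> bool:
    """Transcribe one segment and run the (stand-in) text filter on it."""
    text = asr({"raw": segment, "sampling_rate": sr})["text"]
    return bool(POLICY_FILTER.search(text))


audio = np.zeros(32000, dtype=np.float32)  # 2 s of silence as dummy input
flags = [segment_violates(seg) for seg in split_on_silence(audio, 16000)]
print(flags)
```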

To optimize performance, Roblox applied quantization techniques, changed the feature extraction method, and integrated a voice activity detection (VAD) model into the system.
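As one example of such an optimization, the sketch below applies PyTorch dynamic quantization to a stand-in DistilBERT classifier; whether Roblox quantizes exactly this way is not stated, so treat the checkpoint and settings as assumptions.

```python
# Sketch of dynamic quantization for lower CPU latency; the checkpoint is a
# public stand-in, not Roblox's internal text filter.
import torch
from transformers import AutoModelForSequenceClassification

model = AutoModelForSequenceClassification.from_pretrained(
    "distilbert-base-uncased", num_labels=2).eval()

# Linear weights are stored as int8 and dequantized on the fly, shrinking the
# model and speeding up CPU inference.
quantized = torch.quantization.quantize_dynamic(
    model, {torch.nn.Linear}, dtype=torch.qint8)

# Inspect one feed-forward layer to confirm it was replaced by a
# dynamically quantized Linear module.
print(quantized.distilbert.transformer.layer[0].ffn.lin1)
```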

Results

Once deployed, the system helped Roblox significantly reduce policy violations on the platform. Specifically, Roblox recorded a 15.3% reduction in reports of severe voice abuse and an 11.4% reduction in violations per voice minute.

Roblox is continuing to research and improve “Real-Time Monitoring,” including expanding multilingual support, to provide a healthy online environment for users.