According to Bijie.com, on September 26, local time, U.S.-based OpenAI announced a new moderation model, omni-moderation-latest, for its Moderation API. The new model is built on GPT-4o, supports both text and image input, and is more accurate than its predecessors, especially on non-English content. Like the previous version, it uses OpenAI's GPT-based classifiers to assess whether content should be flagged in categories such as hate, violence, and self-harm, and it adds the ability to detect additional categories of harm. (Interface)
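For illustration, a minimal sketch of how a request to the new model might look using OpenAI's Python SDK. The client setup, multimodal input format, and result fields below follow the publicly documented Moderation API and are assumptions for this sketch, not details confirmed by the report; the example image URL is a placeholder.

```python
# Minimal sketch: calling the Moderation API with omni-moderation-latest.
# Assumes the `openai` Python package is installed and OPENAI_API_KEY is set.
from openai import OpenAI

client = OpenAI()

response = client.moderations.create(
    model="omni-moderation-latest",
    input=[
        # Text and image inputs can be checked together in one request.
        {"type": "text", "text": "Example text to check against the policy categories"},
        {"type": "image_url", "image_url": {"url": "https://example.com/image.png"}},
    ],
)

result = response.results[0]
print("flagged:", result.flagged)            # True if any category was triggered
print("categories:", result.categories)      # per-category booleans (hate, violence, self-harm, ...)
print("scores:", result.category_scores)     # per-category confidence scores
```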