OpenAI announced a "more human-like" version of ChatGPT.

OpenAI has introduced GPT-4o, the latest version of its chatbot model. The neural network has become "more human-like" and can now process visual data.

The AI tool "particularly excels at understanding video and audio compared to existing models." Its features include emotion recognition and breathing-rhythm detection.

The chatbot also now offers a full-fledged Voice Mode for spoken interactions.

According to the presentation, the product can assist users with various everyday tasks, such as preparing for job interviews. OpenAI also demonstrated how GPT-4o can call customer support to request an iPhone replacement.

Other examples showed the neural network telling "dad jokes," translating a conversation between two languages in real time, judging a game of rock-paper-scissors, and responding with sarcasm.

In one video, ChatGPT was shown reacting to a user's first encounter with a puppy.

"Hello, Bowser! Aren't you the most adorable creature?" the chatbot exclaimed.

OpenAI stated that the "o" in GPT-4o stands for "omni," signaling a step toward more natural human-computer interaction.

GPT-4o is "much faster and 50% cheaper" than GPT-4 Turbo. The neural network can respond to audio queries in as little as 232 milliseconds, with an average response time of 320 milliseconds, which is comparable to human response time in normal conversation, OpenAI emphasized.