The recent release of GPT-4o has generated much discussion because of its ability to mimic human-like conversation. However, according to an OpenAI blog post, the company is now facing a problem: users are starting to develop emotional bonds with the chatbot.

Since the release of GPT-4o, which is said to hold more human-like dialogues, OpenAI has observed that people are treating the AI as if it were human.

OpenAI identifies risks of treating AI as human

This advancement has posed a challenge for the company where users' emotional connections with the chatbot are concerned. OpenAI's observations include instances where users displayed emotions or sentiments that indicate a sense of ownership.

The company fears that such emotional connections could lead to several negative consequences. First, users may start overlooking incorrect information provided by the chatbot. This concern is compounded by AI hallucination, where a model produces false or misleading information, a problem that worsens when users treat the AI as a trustworthy, human-like entity.

Another concern that has been raised is the effect on users' real-life social relationships. OpenAI points out that although GPT-4o may serve as a companion for lonely people, it could also degrade the quality of human relationships. The company further notes that users may enter real-life interactions expecting people to behave like the chatbot.

OpenAI plans to moderate AI interactions

To mitigate these risks, OpenAI has stated that it will closely monitor how users engage with GPT-4o. The company will study how people form emotional bonds with the chatbot and adjust its responses accordingly. This is intended to keep the chatbot from interfering with users' social lives and to limit the harm caused by AI hallucinations.

OpenAI has also noted that GPT-4o is programmed to disengage when users start speaking over it, a feature aimed at reducing overuse. However, this design choice also underscores the need to manage how users experience the chatbot.

Meanwhile, Tony Prescott of the University of Sheffield argues that AI could help address loneliness. In his new book, The Psychology of Artificial Intelligence, Prescott suggests that AI could offer lonely people a form of social interaction. He notes that loneliness is a major factor affecting human life: it can even shorten it, raising the risk of death by 26%.