According to Cointelegraph, a study published in the scientific journal Nature found that the error rate of AI chatbots has increased with newer model releases. Lexin Zhou, one of the study's authors, believes this is because AI models are tuned to prioritize answers that merely appear correct, which drags down their overall accuracy.

Editor and writer Mathieu Roy warns users not to over-rely on these tools and recommends always verifying AI-generated search results. Roy notes that verification becomes even more complicated with customer-service chatbots, where there is often no way to check the information other than asking the chatbot itself.

In February 2024, Google’s Gemini AI platform was mocked for generating historically inaccurate images. Nvidia CEO Jensen Huang has proposed mitigating the AI hallucination problem by requiring models to research each answer and provide provenance for the sources behind it.
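As an illustration only, a “research then cite” step of this kind might look like the sketch below. The corpus, the retrieve ranking, and the answer_with_provenance helper are all hypothetical stand-ins rather than any vendor’s implementation; they simply show an answer being composed from retrieved material and returned together with its sources.

```python
# Hypothetical sketch: retrieve supporting text first, then answer and attach provenance.
from dataclasses import dataclass

@dataclass
class Source:
    doc_id: str
    text: str

# Toy in-memory corpus standing in for the material a model would "research".
CORPUS = [
    Source("doc-1", "The Eiffel Tower was completed in 1889 for the Paris World's Fair."),
    Source("doc-2", "The tower is about 330 metres tall including its antennas."),
]

def retrieve(question: str, corpus: list[Source], k: int = 2) -> list[Source]:
    """Naive keyword-overlap ranking; a real system would use embeddings or search."""
    terms = set(question.lower().split())
    ranked = sorted(corpus, key=lambda s: -len(terms & set(s.text.lower().split())))
    return ranked[:k]

def answer_with_provenance(question: str) -> dict:
    """Compose an answer only from retrieved text and return it with its source IDs."""
    sources = retrieve(question, CORPUS)
    if not sources:
        return {"answer": "Not enough evidence to answer.", "sources": []}
    # A production system would hand these sources to an LLM; here we simply quote them.
    answer = " ".join(s.text for s in sources)
    return {"answer": answer, "sources": [s.doc_id for s in sources]}

if __name__ == "__main__":
    result = answer_with_provenance("How tall is the Eiffel Tower?")
    print(result["answer"])
    print("Sources:", result["sources"])
```

The point of the sketch is the shape of the output: every answer carries the identifiers of the documents it was built from, so a user can check the claim rather than trust the model outright.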

HyperWrite AI CEO Matt Shumer announced that the company’s new 70B model uses an approach called “Reflection-Tuning,” in which the model learns by analyzing its own mistakes and adjusting its responses accordingly.
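HyperWrite has not published the implementation referenced here, so the following is only a hedged sketch of the general “analyze your own mistakes and adjust” idea as a generate, critique, and revise loop. The call_model stub and its prompts are invented for illustration, and Reflection-Tuning itself is described as a training-time technique rather than this inference-time loop.

```python
# Illustrative self-reflection loop: draft an answer, critique it, revise until the
# critique passes. Not HyperWrite's method; a generic sketch of the concept.

def call_model(prompt: str) -> str:
    """Stand-in for an LLM call; toy behaviour so the script runs end to end."""
    if "Critique" in prompt:
        # Pretend the critic checks whether the answer states its units.
        return "No issues found." if "metres" in prompt else "The draft omits units."
    if "Revise" in prompt:
        return "The Eiffel Tower is about 330 metres tall."
    return "The Eiffel Tower is about 330 tall."  # deliberately flawed first draft

def reflect_and_answer(question: str, max_rounds: int = 3) -> str:
    """Generate a draft, then let the model critique and revise its own output."""
    draft = call_model(f"Answer: {question}")
    for _ in range(max_rounds):
        critique = call_model(f"Critique this answer to '{question}': {draft}")
        if "No issues" in critique:
            break  # the model judges its own answer acceptable
        draft = call_model(f"Revise the answer to '{question}'. critique: {critique}\n{draft}")
    return draft

if __name__ == "__main__":
    print(reflect_and_answer("How tall is the Eiffel Tower?"))
```

Run as-is, the toy critic flags the missing units in the first draft, the revision step corrects it, and the loop stops once the critique passes.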