
A recent study published in the scientific journal Nature, titled “Larger and more instructable language models become less reliable,” found that AI chatbots make more mistakes as newer models are released. Lexin Zhou, one of the study’s authors, attributed this to the fact that models are optimized to always give plausible-sounding answers, so users are presented with responses that appear correct but are not.

These AI hallucinations tend to compound over time, a failure mode researchers call “model collapse.” Tech editor Mathieu Roy has warned users not to over-rely on these tools and to always verify the results an AI produces.

Google’s AI platform came under fire in February 2024 for producing historically inaccurate images. Industry leaders such as Nvidia CEO Jensen Huang have suggested reducing hallucinations by having models do research and cite sources for every answer.
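Huang’s suggestion maps onto what practitioners often call retrieval-augmented generation: ground each answer in retrieved documents and require citations. Below is a minimal sketch of that pattern; the `search_corpus` retriever and `call_model` completion helper are hypothetical stand-ins, not real APIs, and the prompt wording is illustrative only.

```python
# Retrieval-grounded answering sketch (hypothetical helpers, not a real
# API): fetch supporting documents first, then require the model to cite
# them so every claim can be checked against a source.

def search_corpus(query: str, k: int = 3) -> list[dict]:
    """Hypothetical retriever: returns documents as {"id", "text"} dicts."""
    raise NotImplementedError("plug in your search backend here")

def call_model(prompt: str) -> str:
    """Hypothetical LLM completion call: returns the model's text output."""
    raise NotImplementedError("plug in your model API here")

def answer_with_sources(question: str) -> str:
    docs = search_corpus(question)
    # Number the sources so the model can reference them as [1], [2], ...
    context = "\n".join(f"[{i + 1}] {d['text']}" for i, d in enumerate(docs))
    prompt = (
        "Answer using ONLY the numbered sources below. "
        "Cite a source like [1] after every claim; if the sources "
        "don't contain the answer, say you don't know.\n\n"
        f"Sources:\n{context}\n\nQuestion: {question}\nAnswer:"
    )
    return call_model(prompt)
```

Forcing the model to either cite a retrieved source or admit it doesn’t know is what gives users something to verify, which is the point of Huang’s proposal.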

HyperWrite AI CEO Matt Shumer announced that the company’s new 70B model uses “Reflection-Tuning,” a method that lets the model analyze its own mistakes and adjust its responses over time.
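Shumer has not published the training details, but the general idea behind reflection-style self-correction is simple: have the model draft an answer, critique its own draft, and revise. The sketch below illustrates that loop at the prompting level, reusing the hypothetical `call_model` stub from the previous example; it is an illustration of the general pattern, not HyperWrite’s actual implementation.

```python
# Reflection-style self-correction loop (illustrative only, not
# HyperWrite's actual method): draft -> self-critique -> revise.

def call_model(prompt: str) -> str:
    """Hypothetical LLM completion call (same stub as above)."""
    raise NotImplementedError("plug in your model API here")

def reflect_and_answer(question: str, rounds: int = 2) -> str:
    draft = call_model(f"Question: {question}\nAnswer:")
    for _ in range(rounds):
        critique = call_model(
            "List any factual errors or unsupported claims in this answer. "
            f"Reply 'OK' if there are none.\n\nQuestion: {question}\n"
            f"Answer: {draft}\nCritique:"
        )
        if critique.strip().upper() == "OK":
            break  # the model found nothing to fix
        draft = call_model(
            f"Question: {question}\nDraft answer: {draft}\n"
            f"Critique: {critique}\nRewrite the answer fixing these issues:"
        )
    return draft
```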
