AI chatbots have the potential to reduce belief in conspiracy theories by providing accurate information and logical reasoning.

A new study published in the journal Science in mid-September offers fresh hope in the fight against misinformation. The study, conducted by scientists at American University, shows that artificial intelligence (AI) can be used to weaken belief in conspiracy theories.

The researchers built a chatbot on OpenAI's GPT-4 Turbo large language model, setting it up to refute conspiracy theories with logical arguments and convincing evidence.
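For readers curious how such a setup might look in practice, here is a minimal sketch using the OpenAI Python SDK with GPT-4 Turbo. The system prompt, function name, and conversation structure are illustrative assumptions, not the exact configuration used in the study.

```python
# Minimal sketch of a debunking chatbot built on GPT-4 Turbo via the OpenAI API.
# The system prompt and flow are illustrative assumptions, not the study's setup.
from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

SYSTEM_PROMPT = (
    "You are a careful, factual assistant. The user believes a conspiracy theory. "
    "Respond respectfully, address their specific reasons for believing it, and "
    "refute the theory point by point with accurate, verifiable evidence."
)

def debunking_reply(conspiracy_description: str, history: list[dict] | None = None) -> str:
    """Return one chatbot turn responding to the user's stated belief."""
    messages = [{"role": "system", "content": SYSTEM_PROMPT}]
    messages += history or []
    messages.append({"role": "user", "content": conspiracy_description})

    response = client.chat.completions.create(
        model="gpt-4-turbo",
        messages=messages,
        temperature=0.2,  # keep answers conservative and evidence-focused
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(debunking_reply(
        "I believe the Moon landing was staged because the flag appears to wave."
    ))
```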

More than 1,000 study participants were asked to describe a conspiracy theory they believed and explain why they believed it. They were then asked to converse with the chatbot, which provided information and evidence to refute that theory.
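Purely as an illustration of that flow, the snippet below continues the earlier sketch: the participant states their theory and reasons, then exchanges several turns with the chatbot, with the conversation history carried forward. The turn structure and the hypothetical debunking_reply() helper are assumptions, not the study's actual procedure.

```python
# Illustrative multi-turn flow, reusing the hypothetical debunking_reply() above.
history: list[dict] = []

# The participant first describes the theory and why they believe it.
opening = (
    "I think the 1969 Moon landing was staged. The flag seems to wave, and "
    "I don't trust the government footage."
)

for turn in range(3):  # a short back-and-forth between participant and chatbot
    user_message = opening if turn == 0 else input("Your reply: ")
    reply = debunking_reply(user_message, history)
    print(f"Chatbot: {reply}\n")
    # Carry the exchange forward so later turns stay in context.
    history.append({"role": "user", "content": user_message})
    history.append({"role": "assistant", "content": reply})
```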

Results showed that after interacting with the chatbot, participants' belief in the conspiracy theory decreased by an average of 21%. Notably, 25% of participants went from strong belief to uncertainty.

Conspiracy theories and misinformation spread rapidly during the COVID-19 pandemic, leaving many government agencies struggling to respond. Photo: Hollie Adams/Getty

Potential and challenges

Against the current backdrop of widespread misinformation, this research demonstrates that AI has considerable potential to persuade people to change their minds, especially when it comes to countering conspiracy theories.

However, the researchers caution that this study is only a first step. The participants were paid survey takers, who may not be representative of people with deeply held conspiracy beliefs.

Further research is needed to evaluate the chatbot's effectiveness with other methods, and to test LLMs with weaker safety guardrails to make sure such chatbots do not inadvertently reinforce conspiratorial thinking.

The researchers also want to take a closer look at the chatbot's conversational strategies, including testing less polite responses, to better understand which factors contribute to its success.

Ensuring that chatbots do not generate misinformation themselves is another key challenge. The researchers used a fact-checker to assess the accuracy of the chatbot's claims and confirmed that it did not present false or politically biased information.