Google is using artificial intelligence to answer health-related search queries. Some analysts find this concerning, as the AI Overviews feature may provide unreliable information.

Medical practitioners say that AI Overviews can be a concern, especially when answering personal health queries. They argue that while the feature can guide people toward needed medical treatment, it also has the potential to give out inaccurate information, according to a recent report by the New York Times.

AI Overviews search error fuels AI hallucination sentiments

Google introduced AI Overviews, a feature that generates summaries of content from websites and shows them on Google’s search results page. Within a few days of launch, the feature had produced a wide range of inaccuracies because it based its answers on flawed sources, an episode that further fueled the sentiment that AI is prone to hallucinations and can generate factually incorrect information.

Also read: Google limits AI Overviews for satirical and nonsensical queries

Health-related search results recommending a daily dose of rocks or telling users to put glue on pizza have generated a backlash. AI Overviews do cite sources such as the World Health Organization, WebMD, PubMed, and the Mayo Clinic. However, they are not limited to these sources: analysts say the tool also pulls information from Reddit, Wikipedia, various blogs, and e-commerce sites.

Hema Budaraju, a senior product management director at Google, said that there are additional guardrails for health-related queries but declined to reveal further details. Budaraju added that some queries do not trigger AI Overviews at all, for example, if the system senses a self-harm situation.

Google said that AI Overviews draws on the Google Knowledge Graph, a knowledge base containing data pulled from several sources, but it did not specify which websites support the output of the AI-assisted search feature.

Google shows more AI Overviews for health queries

The traditional results, often referred to as the “ten blue links,” helped users distinguish reputable medical sources from paid links and less credible websites. With AI Overviews, that has changed: a single block of text now presents information gathered from different sources, which, according to analysts, causes confusion.

Kate Knibbs of Wired said in a podcast:

“If you type something in asthma or do I have diabetes or basically if you’re searching for anything involving diseases, you are very likely to this day to still see an AI Overview. I mean, it’s not everyone; it’s still about 63 percent. And I think it depends.”

Some doctors, like Dr. Dariush Mozaffarian, say that AI Overviews present some correct facts and summarize them into answers, but they do not differentiate between evidence from observational studies and evidence from randomized trials.

Another layer of misinformation is added when the feature muddles claims together. For example: “Green tea contains antioxidants, and it helps prevent cancer.” The former is true, but the latter has not been clinically proven. When such claims are listed together, they give the false impression that both are credible.

Google has restricted AI Overviews

People observing the AI rollout advise caution when approaching health-related AI search results. They say users should heed the fine print under some answers, which reads, “This is for informational purposes only.” The NYTimes report noted that generative AI is still in an experimental phase and that users should consult doctors for any serious condition.

Also read: Google core algorithm updates and AI Overviews are reducing publishers’ traffic

In a statement on May 30, Google said that it had trimmed the use of AI Overviews in some categories. Data from the SEO firm BrightEdge corroborates Google’s position, suggesting the cutback had begun even before the company’s announcement.

Other reports also say that Google began rolling back AI Overviews in certain categories even before netizens started mocking it for nonsensical replies. Nonetheless, there are claims that the health category continues to show a high number of AI Overviews. Knibbs says,

“I talked to some other SEO researchers who had also found that healthcare was still producing a lot of AI Overviews.”

Knibbs says that she tested some health-related queries and saw different results within a few hours of trying, which suggests Google is working to improve the results. She said, “OK, they’re trying. I don’t know if they’re doing a good enough job, but they’re definitely trying.”

Cryptopolitan reporting by Aamir Sheikh