An investigation by The Guardian has revealed potential security vulnerabilities in OpenAI's ChatGPT search tool, highlighting concerns about its susceptibility to manipulation through hidden content on web pages. The findings suggest that ChatGPT can be influenced by third-party instructions embedded in hidden text, potentially generating biased or misleading summaries.

Tests conducted by The Guardian demonstrated that even when web pages contained negative comments, hidden instructions could prompt ChatGPT to return unwarranted positive reviews. Jacob Larsen, a cybersecurity expert at CyberCX, warned that the current state of the tool might enable malicious actors to design websites aimed at deceiving users, posing significant risks.

As concerns grow, experts emphasize the need for robust safeguards to address these vulnerabilities before the tool's broader deployment. OpenAI has yet to comment on potential measures to mitigate these risks.