Yu Xian, founder of the blockchain security firm SlowMist, has highlighted the emerging danger of AI code poisoning: the tactic of planting malicious code or links in the data that AI models are trained on or retrieve, endangering users who rely on these tools for technical tasks. The issue came to light after an incident involving OpenAI's ChatGPT, in which a user lost $2,500 in digital assets after the chatbot recommended a scam API.

Despite initial suspicions of deliberate poisoning, the more likely explanation is that ChatGPT simply failed to filter scam links surfaced in search results. The scammers themselves, however, appear to have acted with premeditation: the fraudulent API's domain had been registered two months before the incident. The scam-detection platform Scam Sniffer has likewise noted that scammers attempt to seed AI training data so that models generate fraudulent outputs.

As tools like ChatGPT see wider use, the risk to users grows. Xian warns that AI poisoning is a real and present threat and urges stronger defenses to prevent further financial losses and preserve trust in AI-driven technologies.

Read more AI-generated news on: https://app.chaingpt.org/news
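One practical defense against a chatbot recommending a fraudulent endpoint is to never trust an AI-suggested URL directly, but to check its host against a manually vetted allowlist before sending any credentials or funds. The sketch below illustrates the idea; the domain names and the `TRUSTED_API_DOMAINS` set are hypothetical examples, not part of the reported incident.

```python
from urllib.parse import urlparse

# Hypothetical allowlist of API hosts the user has verified out-of-band
# (official docs, project website), NOT taken from chatbot output.
TRUSTED_API_DOMAINS = {"api.example-chain.org", "rpc.example-chain.org"}

def is_trusted_endpoint(url: str) -> bool:
    """Return True only if the URL's hostname is on the vetted allowlist."""
    host = urlparse(url).hostname or ""
    return host.lower() in TRUSTED_API_DOMAINS

# An AI-suggested endpoint is rejected unless it was explicitly vetted.
print(is_trusted_endpoint("https://api.example-chain.org/v1/tx"))  # True
print(is_trusted_endpoint("https://totally-real-api.example.com")) # False
```

The key design choice is that the allowlist is built from sources the user trusts independently of the AI tool, so a poisoned recommendation fails the check by default.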