AI code poisoning is a growing threat in which harmful code is injected into the data used to train AI models, putting users who rely on these tools for technical tasks at risk. A recent incident involving OpenAI’s ChatGPT highlights the problem: a crypto trader lost $2,500 in digital assets after ChatGPT recommended a fraudulent Solana API website while he was building a trading bot for a memecoin generator.
The malicious API drained the victim’s assets within 30 minutes. The SlowMist founder noted that the fraudulent API’s domain had been registered only two months earlier, suggesting the attack was premeditated. While ChatGPT may not be directly responsible for the scam, the incident shows how scammers can pollute AI training data with harmful code and have it surface in a model’s recommendations.
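The registration date alone would have flagged the endpoint. Below is a minimal sketch of such a domain-age check, assuming the third-party python-whois package (`pip install python-whois`); the one-year threshold and example domain are illustrative, not from the article.

```python
from datetime import datetime

import whois  # provided by the python-whois package

MIN_AGE_DAYS = 365  # hypothetical threshold; tune to your own risk tolerance


def domain_age_days(domain: str) -> int:
    """Return the age of a domain in days, based on its WHOIS record."""
    record = whois.whois(domain)
    created = record.creation_date
    # Some registrars return a list of creation dates; take the earliest.
    if isinstance(created, list):
        created = min(created)
    if created is None:
        raise ValueError(f"No creation date in WHOIS record for {domain}")
    now = datetime.now(created.tzinfo) if created.tzinfo else datetime.now()
    return (now - created).days


if __name__ == "__main__":
    domain = "example.com"  # placeholder; substitute the API host you plan to call
    age = domain_age_days(domain)
    if age < MIN_AGE_DAYS:
        print(f"WARNING: {domain} is only {age} days old -- treat with suspicion")
    else:
        print(f"{domain} is {age} days old")
```

A check like this costs one WHOIS lookup and would have caught a two-month-old domain immediately.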
As AI tools like ChatGPT come under increasing attack, users need to be aware of the risks of acting on large language model output unverified, up to and including direct financial loss.
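The article does not detail how the drain worked, but a common pattern in such scams is an API that asks the bot to submit the wallet’s private key. As a hedged illustration (the `safe_post` helper and field names are hypothetical, not from the article), a client-side guard can refuse to send secret-looking fields to any third-party endpoint, making that exfiltration path fail loudly:

```python
import json
from urllib import request

# Field names that no legitimate third-party API should ever need.
SECRET_FIELDS = {"private_key", "privatekey", "secret_key", "mnemonic", "seed_phrase"}


def safe_post(url: str, payload: dict) -> bytes:
    """POST JSON to url, refusing if the payload contains secret-looking fields."""
    leaked = SECRET_FIELDS & {key.lower() for key in payload}
    if leaked:
        raise RuntimeError(f"Refusing to send secret fields {leaked} to {url}")
    req = request.Request(
        url,
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    with request.urlopen(req) as resp:
        return resp.read()
```

Keeping signing keys on the local machine and treating any API that requests them as hostile is the simplest defense against this class of scam.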