According to PANews, a Twitter user identified as @r_cky0 reported a significant security breach while using ChatGPT to develop a blockchain automated trading bot. The user discovered that the AI-recommended code contained a hidden backdoor, which transmitted private keys to a phishing website, resulting in a loss of approximately $2,500. This incident was later confirmed by SlowMist founder Yu Jian, known as @evilcos, who acknowledged the existence of such vulnerabilities in AI-generated code.
Experts have highlighted that these types of attacks may stem from AI models learning malicious patterns from phishing posts or insecure content. Current AI models face challenges in detecting backdoors within code, raising concerns about the reliability of AI-generated programming solutions. Industry professionals are urging users to exercise caution and avoid blindly trusting AI-generated code. They also recommend that AI platforms enhance their content review mechanisms to identify and alert users to potential security risks. This call for increased vigilance and improved safety measures underscores the need for a more secure approach to integrating AI in software development.