According to PANews, Twitter user @r_cky0 revealed that when he was using ChatGPT to generate code to develop a blockchain automatic trading robot, a backdoor was hidden in the code, resulting in a loss of approximately US$2,500.

Yu Xian, the founder of SlowMist, confirmed that there are indeed cases of "hacking" using AI-generated code. Experts pointed out that such attacks may stem from malicious patterns learned by AI from unsafe content.

The industry calls on users to be vigilant and avoid blindly trusting AI-generated code, and recommends that AI platforms strengthen content review mechanisms in the future to identify and alert to potential security risks.