According to PANews, Twitter user @r_cky0 revealed that while using ChatGPT to generate code for a blockchain automated trading robot, a backdoor was hidden in the code, resulting in a loss of approximately $2500.

Slow Fog founder Yu Xian confirmed that there are indeed cases of using AI-generated code being 'hacked.' Experts point out that such attacks may stem from malicious patterns learned by AI from unsafe content.

Industry calls for users to remain vigilant, avoiding blind trust in AI-generated code, and suggests that future AI platforms strengthen content review mechanisms to identify and alert potential security risks.