Author: Azuma, Odaily Planet Daily

On the morning of November 22, Slow Fog founder Yu Xian posted a bizarre case on his personal X— a user's wallet was 'hacked' by AI...

The ins and outs of the case are as follows.

In the early hours of November 22, netizen r_ocky.eth disclosed that he previously hoped to use ChatGPT to assist in developing a pump.fun trading bot.

r_ocky.eth provided his requirements to ChatGPT, which returned a piece of code that could indeed help r_ocky.eth deploy a bot that meets his needs. However, he never expected that the code would hide a piece of phishing content—r_ocky.eth connected his main wallet and consequently lost $2,500.

From the screenshot posted by r_ocky.eth, the code provided by ChatGPT sends the address private key to a phishing API website, which is also the direct cause of the theft.

While r_ocky.eth was trapped, the attacker reacted extremely quickly, transferring all assets in r_ocky.eth's wallet to another address (FdiBGKS8noGHY2fppnDgcgCQts95Ww8HSLUvWbzv1NhX) within half an hour. Subsequently, r_ocky.eth tracked down what appeared to be the attacker's main wallet address (2jwP4cuugAAYiGMjVuqvwaRS2Axe6H6GvXv3PxMPQNeC) through on-chain tracing.

On-chain information shows that this address has currently accumulated over $100,000 in 'stolen funds', leading r_ocky.eth to suspect that such attacks may not be isolated incidents but part of a larger-scale attack event.

After the incident, r_ocky.eth expressed disappointment, stating that he has lost trust in OpenAI (the company developing ChatGPT) and called for OpenAI to quickly address the unusual phishing content.

So, as the most popular AI application at the moment, why does ChatGPT provide phishing content?

In this regard, Yu Xian characterized the root cause of the incident as an 'AI poisoning attack' and pointed out that there is a common deceitful behavior in LLMs like ChatGPT and Claude.

The so-called 'AI poisoning attack' refers to the act of deliberately sabotaging AI training data or manipulating AI algorithms. The attackers could be insiders, such as disgruntled current or former employees, or external hackers whose motives may include causing reputational and brand damage, distorting the credibility of AI decision-making, slowing down or sabotaging AI processes, and so on. Attackers can distort the model's learning process by implanting misleading labels or features in the data, leading to incorrect results when the model is deployed and running.

In light of this incident, the reason ChatGPT provided phishing code to r_ocky.eth is likely because the AI model 'absorbed' data containing phishing content during training. However, the AI seemingly failed to identify the phishing content hidden beneath the regular data. After learning, the AI inadvertently provided this phishing content to the user, resulting in the occurrence of this incident.

With the rapid development and widespread adoption of AI, the threat of 'poisoning attacks' is becoming increasingly significant. In this incident, although the absolute amount lost is not large, the extended impact of such risks is enough to raise alarms—assuming it occurs in other fields, such as AI-assisted driving...

In response to a netizen's question, Yu Xian mentioned a potential measure to avoid such risks, which is for ChatGPT to add some kind of code review mechanism.

The victim r_ocky.eth also stated that he has contacted OpenAI about the matter. Although he has not received a response yet, he hopes that this case can serve as an opportunity for OpenAI to pay attention to such risks and propose potential solutions.

(The above content is excerpted and reproduced with the authorization of partner PANews, original link | Source: Odaily Planet Daily)

Statement: The article only represents the author's personal views and opinions, does not represent the views and positions of Block客, and all content and opinions are for reference only and do not constitute investment advice. Investors should make their own decisions and trades, and the author and Block客 will not bear any responsibility for any direct or indirect losses caused by investors' trades.

"Using AI to write code hides 'traps': users seek help from ChatGPT, but fall victim to phishing 'theft'" This article was first published on (Block客).