On the morning of November 22nd, Taipei time, SlowMist founder Yu Xian posted a bizarre case on his personal X - a user's wallet was "hacked" by AI...
Source: X
The ins and outs of the case are as follows.
Early this morning, X user r_ocky.eth revealed that he had previously hoped to use ChatGPT to port a pump.fun auxiliary transaction bot.
r_ocky.eth gave his needs to ChatGPT, and ChatGPT returned a piece of code to him. This piece of code could indeed help r_ocky.eth deploy a bot that met his needs, but he never expected that there would be something hidden in the code. A piece of phishing content - r_ocky.eth linked to his main wallet and lost $2,500.
Source: Odaily Planet Daily
Judging from the screenshots posted by r_ocky.eth, the code provided by ChatGPT will send the address private key to a phishing API website, which is also the direct cause of the theft.
When r_ocky.eth stepped into the trap, the attacker reacted very quickly and transferred all the assets in the r_ocky.eth wallet to another address within half an hour.
FdiBGKS8noGHY2fppnDgcgCQts95Ww8HSLUvWbzv1NhX
Then r_ocky.eth found the address suspected to be the attacker’s main wallet through on-chain tracing.
2jwP4cuugAAYiGMjVuqvwaRS2Axe6H6GvXv3PxMPQNeC
Source: Odaily Planet Daily
Information on the chain shows that the address has collected more than $100,000 in "stolen money". Therefore, r_ocky.eth suspects that this type of attack may not be an isolated case, but an attack of a certain scale.
Afterwards, r_ocky.eth expressed disappointment that it had lost trust in OpenAI (ChatGPT development company) and called on OpenAI to clean up abnormal phishing content as soon as possible.
So, as the most popular AI application today, why does ChatGPT provide phishing content?
In this regard, Cosine characterized the root cause of the incident as "AI poisoning attack" and pointed out that there is widespread deception in LLMs such as ChatGPT and Claude.
The so-called "AI poisoning attack" refers to the deliberate destruction of AI training data or manipulation of AI algorithms. The adversary launching the attack may be an insider, such as a disgruntled current or former employee, or an external hacker. Their motivations may include causing reputation and brand damage, tampering with the credibility of AI decisions, slowing or disrupting AI processes, etc. . An attacker can distort the model's learning process by implanting data with misleading labels or features, causing the model to produce erroneous results when deployed and run.
Judging from this incident, the reason why ChatGPT provided the phishing code to r_ocky.eth is most likely because the AI model was contaminated with data containing phishing content during training, but the AI seems to have failed to identify the phishing code hidden in the regular data. After learning the phishing content, the AI provided the phishing content to the user, which caused the incident.
With the rapid development and widespread adoption of AI, the threat of "poisoning attacks" has become increasingly greater. In this incident, although the absolute amount of loss is not large, the extended impact of such risks is enough to trigger vigilance - assuming it occurs in other industries, such as AI-assisted driving...
Source: Odaily Planet Daily
In response to questions from netizens, Yusian mentioned a potential measure to avoid such risks, which is to add some kind of code review mechanism by ChatGPT.
The victim r_ocky.eth also stated that he has contacted OpenAI about this matter. Although he has not received a reply yet, he hopes that this case will be an opportunity for OpenAI to pay attention to such risks and propose potential solutions.
[Disclaimer] There are risks in the market, so investment needs to be cautious. This article does not constitute investment advice, and users should consider whether any opinions, views or conclusions contained in this article are appropriate for their particular circumstances. Invest accordingly and do so at your own risk.
This article is reproduced with permission from: (Shenchao TechFlow)
Original author: Azuma, Odaily Planet Daily
"I never expected that!" A netizen’s encrypted wallet was stolen by AI. How did the bizarre hacking case happen? 』This article was first published in "CryptoCity"