Original title: (I never expected that AI stole my wallet)

Original author: Azuma, Odaily Planet Daily


On the morning of November 22, Beijing time, SlowMist founder Yu Xian posted a bizarre case on his personal account X - a user's wallet was "hacked" by AI...



The context of the case is as follows.


Early this morning, X user r_ocky.eth revealed that he had previously hoped to use ChatGPT to carry a pump.fun auxiliary trading bot.


r_ocky.eth gave ChatGPT his requirements, and ChatGPT returned a piece of code to him. This code can indeed help r_ocky.eth deploy a bot that meets his needs, but he never expected that there would be a phishing content hidden in the code - r_ocky.eth linked his main wallet and lost $2,500 as a result.



Judging from the screenshots posted by r_ocky.eth, the code provided by ChatGPT will send the address private key to a phishing API website, which is also the direct cause of the theft.


When r_ocky.eth fell into the trap, the attacker reacted very quickly and transferred all the assets in the r_ocky.eth wallet to another address (FdiBGKS8noGHY2fppnDgcgCQts95Ww8HSLUvWbzv1NhX) within half an hour. Then r_ocky.eth found the address suspected to be the attacker's main wallet (2jwP4cuugAAYiGMjVuqvwaRS2Axe6H6GvXv3PxMPQNeC) through on-chain tracing.



Information on the chain shows that the address has currently collected more than $100,000 in "stolen funds". Therefore, r_ocky.eth suspects that this type of attack may not be an isolated case, but an attack incident of a certain scale.


Afterwards, r_ocky.eth expressed disappointment that it had lost trust in OpenAI (the developer of ChatGPT) and called on OpenAI to clean up abnormal phishing content as soon as possible.


So, as the most popular AI application at the moment, why does ChatGPT provide phishing content?


In response, Yu Xian characterized the root cause of the incident as "AI poisoning attack" and pointed out that there is widespread deception in LLMs such as ChatGPT and Claude.


The so-called "AI poisoning attack" refers to the act of deliberately destroying AI training data or manipulating AI algorithms. The adversary who launches the attack may be an insider, such as a disgruntled current or former employee, or an external hacker, whose motives may include causing reputation and brand damage, tampering with the credibility of AI decisions, slowing down or destroying AI processes, etc. Attackers can distort the learning process of the model by implanting data with misleading labels or features, causing the model to produce incorrect results when deployed and run.


In view of this incident, the reason why ChatGPT provided phishing code to r_ocky.eth is probably because the AI ​​model was contaminated with data containing phishing content during training, but the AI ​​seemed to fail to identify the phishing content hidden under the regular data. After learning, the AI ​​provided this phishing content to users, which caused the incident.


With the rapid development and widespread adoption of AI, the threat of "poisoning attacks" has become increasingly greater. In this incident, although the absolute amount of loss is not large, the extended impact of such risks is enough to arouse vigilance - if it happens in other fields, such as AI-assisted driving...



In response to questions from netizens, Yu Xian mentioned a potential measure to avoid such risks, which is for ChatGPT to add some kind of code review mechanism.


The victim, r_ocky.eth, also stated that it had contacted OpenAI about the matter. Although it has not received a response yet, it is hoped that this case will become an opportunity for OpenAI to pay attention to such risks and propose potential solutions.


Original link


Welcome to join the BlockBeats official community:

Telegram subscription group: https://t.me/theblockbeats

Telegram chat group: https://t.me/BlockBeats_App

Official Twitter account: https://twitter.com/BlockBeatsAsia