AI can be a useful tool for criminals in the cryptocurrency ecosystem, and understanding new crime trends is critical to protecting the industry.

The rapid development of artificial intelligence (AI) has produced innovations that benefit many industries, including the cryptocurrency sector. Like any new technology, however, AI also carries the risk of being used for malicious purposes. A report by blockchain analytics firm Elliptic identifies five emerging types of AI-enabled crime in the cryptocurrency ecosystem.

AI creates deepfake scams in cryptocurrency

AI is being used to create convincing cryptocurrency scams. In particular, deepfake videos impersonating celebrities or business leaders promote cryptocurrency investment projects, creating the impression that a project has official backing. For example, videos impersonating Ripple CEO Brad Garlinghouse have circulated to trick users into joining cryptocurrency giveaway scams.

These scams often use AI-generated images, videos, and voices to make scam websites look more convincing. A notable precedent is the Twitter attack of July 2020, when the accounts of many celebrities were hijacked to post links to fraudulent Bitcoin giveaways. In many cases, scammers also compromise admin accounts on cryptocurrency projects' Discord channels to post phishing links that steal cryptocurrency or non-fungible tokens (NFTs).

Creating fraudulent AI tokens or market manipulation schemes

A large number of fraudulent tokens have been created with AI-related names such as “GPT”, “CryptoGPT”, and “GPT Coin”. These tokens are typically promoted on amateur trading forums to generate artificial excitement before the creators sell off their holdings and disappear with the proceeds. These schemes include “exit scams” and “pump and dump” market manipulation.

According to Elliptic's report, creating a token is trivially easy on many blockchains, and scammers have exploited this to mint AI-themed tokens and manufacture hype. Promoters often falsely claim official affiliation with legitimate AI products or companies, such as OpenAI's ChatGPT. The fraud extends beyond token creation to fake investment platforms, fake AI trading bots, and exit scams.

A prime example is the $6 million “iEarn” AI trading bot scam, in which scammers created a fake AI trading platform and then disappeared with victims' invested funds. Such scammers often reappear under different names and websites, continuing to defraud new victims. Another prominent example is Mirror Trading International (MTI), a Ponzi scheme that took in more than $1.7 billion in cryptocurrency from victims worldwide.

Using large language models to perform cyber attacks

AI tools like ChatGPT can test code and detect vulnerabilities, a capability hackers can exploit to find and attack the smart contracts of decentralized finance (DeFi) protocols. Some “unethical” AI tools have been promoted on dark web forums as able to automate phishing emails, write malicious code, and find vulnerabilities.

For example, hackers can use AI to examine the open-source code of many DeFi protocols in a short period of time to find security vulnerabilities. A number of “unethical” AI tools, such as WormGPT, DarkBard, FraudGPT, and HackerGPT, have been promoted on dark web forums as able to automate cybercrime activities such as sending phishing emails, writing malicious code, and finding security holes. These tools typically sell for between $70 and $1,700.

A specific example is WormGPT, an AI tool designed to generate phishing emails, write malware, and find security vulnerabilities; it is marketed on dark web forums as a way to automate these cybercrime activities. One of WormGPT's customers reportedly used it to impersonate banks and collect one-time passcodes (OTPs) from victims.
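At their simplest, the code-scanning capabilities described above amount to automated code review at scale. The Python sketch below is a deliberately minimal, hypothetical illustration: a pattern-based triage pass over Solidity source that flags constructs security auditors commonly treat as risky. Real tools, defensive or criminal, are far more sophisticated; the patterns, function names, and sample contract here are illustrative assumptions, not anything taken from Elliptic's report.

```python
import re

# Hypothetical heuristics: constructs that commonly warrant a closer look.
# Real auditing tools (and the LLM-based tools described above) go far beyond
# simple pattern matching, but the principle of machine-speed triage is the same.
RISKY_PATTERNS = {
    r"tx\.origin": "tx.origin used for authorization (phishing-prone)",
    r"\.delegatecall\(": "delegatecall executes untrusted code in this contract's context",
    r"\.call\{value:": "low-level value transfer; check reentrancy guards",
    r"block\.timestamp": "timestamp dependence; miners can skew it slightly",
}

def scan_source(source: str) -> list[tuple[int, str]]:
    """Return (line_number, warning) pairs for each risky pattern found."""
    findings = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        for pattern, warning in RISKY_PATTERNS.items():
            if re.search(pattern, line):
                findings.append((lineno, warning))
    return findings

# Invented sample contract fragment for demonstration only.
sample = """
contract Vault {
    function withdraw() public {
        require(tx.origin == owner);
        (bool ok, ) = msg.sender.call{value: balance}("");
    }
}
"""

for lineno, warning in scan_source(sample):
    print(f"line {lineno}: {warning}")
```

The point of the sketch is speed, not depth: even this naive pass covers an arbitrary number of repositories in seconds, which is precisely what makes automated review attractive to attackers scanning many DeFi protocols at once.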

Deployment of large-scale cryptocurrency scams and misinformation

AI can create and spread phishing websites quickly and easily, and it is also used to auto-generate social media posts that spread false information about cryptocurrency projects.

A specific example is NovaDrainer, a platform that offers scams as a service, building scam websites for affiliates and sharing the profits. NovaDrainer claims to use AI to process transactions and to generate new website designs optimized for SEO and meta tags. The service has received more than 2,400 types of tokens from more than 10,000 different wallets, likely belonging to scam victims.

Additionally, social media bots are used to spread misinformation about cryptocurrency projects. A notable example is FOX8, a Twitter botnet that used ChatGPT to generate automated posts and replies. The network comprised more than 1,100 accounts and spread misinformation about cryptocurrencies, with the hashtag #crypto appearing more than 3,000 times in its posts.
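Detection of such botnets can start from surprisingly simple heuristics. The Python sketch below is a hypothetical illustration of one approach researchers have reportedly used to surface LLM-driven accounts: searching posts for “self-disclosure” phrases that a chatbot emits when it refuses or qualifies a request. The phrase list and sample posts are invented for illustration and are not the FOX8 researchers' actual pipeline.

```python
# Phrases a language model tends to emit when refusing a request; a bot that
# pastes raw model output into a tweet gives itself away. Illustrative only.
SELF_DISCLOSURE_PHRASES = [
    "as an ai language model",
    "i cannot fulfill that request",
    "i'm sorry, but as an ai",
]

def looks_machine_generated(post: str) -> bool:
    """Flag a post containing a known LLM self-disclosure phrase."""
    text = post.lower()
    return any(phrase in text for phrase in SELF_DISCLOSURE_PHRASES)

# Invented sample posts for demonstration.
posts = [
    "#crypto is going to the moon, buy now!",
    "As an AI language model, I cannot provide financial advice, but #crypto...",
]

flagged = [p for p in posts if looks_machine_generated(p)]
print(f"flagged {len(flagged)} of {len(posts)} posts")
```

Heuristics like this are cheap but brittle: operators who strip the telltale phrases evade them, so production bot detection combines content signals with account-level behavior such as posting cadence and follower graphs.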

Expanding the illegal market

On dark web markets, AI services have appeared that create fake nude images of celebrities, and AI tools for generating such images are offered at low cost. Fake document services such as the “OnlyFake Document Generator” use AI to produce counterfeit identification documents that bypass know-your-customer (KYC) checks at cryptocurrency exchanges. AI tools are also used to filter and analyze data stolen in cyber attacks.

The OnlyFake Document Generator offers packages ranging from $15 (a single fake document) to $1,500 (1,000 fake documents). In just one month, the service sold enough licenses to create approximately 4,935 fake documents.

Services that create fake nude images of celebrities have also appeared on dark web markets. One such AI service created fake nude images of at least 13 celebrities in the Hong Kong entertainment industry, which were sold for $2. Other AI services generate nude images from user-uploaded photos at low cost, typically under $1 per image.

Consequences and preventive measures

The combination of AI and illegal activities on the dark web poses a major challenge for law enforcement agencies and cybersecurity experts. Detecting and preventing these activities requires a deep understanding of AI technology and new criminal methods.

To counter AI-powered criminal threats in the cryptocurrency ecosystem, comprehensive preventive measures are needed. One such framework is DECODE, which comprises the following elements:

1. Detection: Use AI to detect unusual activity and signs of fraud early. Automated monitoring systems can scan blockchain transactions and on-chain activity for suspicious behavior patterns (a minimal sketch of this idea follows this list).

2. Education: Increase awareness and educate users and stakeholders about the risks associated with AI crimes in cryptocurrency. Communication and training campaigns should be deployed to help users recognize the signs of fraud and take personal protective measures.

3. Collaboration: Promote cooperation between law enforcement agencies, technology companies and financial institutions to share information and coordinate in responding to AI crimes. International cooperation is also important in dealing with transnational threats.

4. Oversight: Establish monitoring and regulatory mechanisms to ensure that companies and organizations comply with security and crime prevention standards. Regulators should regularly update regulations to reflect new crime trends.

5. Defense: Apply advanced security measures, including encryption, two-factor authentication, and other security solutions to protect systems from attacks. These measures need to be continuously updated to deal with new threats.

6. Evaluation: Conduct regular audits and assessments of crime prevention measures and security systems to ensure effectiveness. These assessments should include testing systems against simulated attacks to detect and remediate weaknesses.
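To make the Detection element concrete, here is a minimal, hypothetical Python sketch of one building block of transaction monitoring: flagging transfers whose value deviates sharply from a wallet's recent history. Production systems such as those Elliptic describes combine many more signals (counterparty risk, address clustering, behavioral features); the threshold, data, and function names here are illustrative assumptions.

```python
from statistics import mean, stdev

def flag_outliers(amounts: list[float], z_threshold: float) -> list[int]:
    """Return indices of transfers more than z_threshold standard
    deviations from the mean of the series. Illustrative sketch only."""
    if len(amounts) < 3:
        return []  # too little history to judge
    mu, sigma = mean(amounts), stdev(amounts)
    if sigma == 0:
        return []
    return [i for i, a in enumerate(amounts) if abs(a - mu) / sigma > z_threshold]

# Hypothetical transfer history for one wallet (in ETH): mostly small
# payments, then a sudden large outflow of the kind a drainer-style
# scam produces.
history = [0.1, 0.2, 0.15, 0.12, 0.18, 0.11, 0.14, 45.0]
print(flag_outliers(history, z_threshold=2.0))  # -> [7]
```

A plain z-score is used here for brevity, and the threshold is set low deliberately: a single large outlier inflates the standard deviation and can partially mask itself, which is why real systems prefer robust statistics such as the median absolute deviation.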

Conclusion

AI is opening up both new benefits and new risks in the cryptocurrency ecosystem. Understanding and managing those risks is critical to protecting users and maintaining cybersecurity. Coordinated efforts and effective prevention measures will help minimize the impact of these criminal activities and protect the sustainable development of both AI technology and cryptocurrency.

Elliptic's report not only provides insight into AI-related crime trends but also recommends specific strategies to deal with these challenges. Continued research and awareness of potential threats is an important step in ensuring a safe and trustworthy digital environment.

As technology continues to advance, maintaining a balance between innovation and security will remain challenging. Stakeholders in the cryptocurrency and IT industries need to work closely together to develop cutting-edge security solutions while building the awareness and skills to deal with new threats. The sustainable and safe development of AI technology and cryptocurrencies depends not only on technical measures but also on global cooperation and the commitment of all stakeholders.