The word “bot” might bring to mind helpful chatbots that assist with finding information, selecting products, or playing music. However, the term also refers to automated programs designed to mimic human activity for self-serving purposes. This article covers the latter category.

Malicious bots are employed to promote cryptocurrency scams, attack political opponents, and bombard competitors with negative reviews. They target influencers, engage in cyberbullying, and take part in click fraud, which causes annual losses of more than $100 billion.

These bots often disguise themselves as real users on social media platforms like Facebook, Telegram, or X. They manipulate public opinion, spread misinformation and conspiracy theories, and interfere with politics (as seen in the 2016 U.S. presidential election) and financial markets, including cryptocurrencies.

It’s challenging to determine the exact proportion of social media activity that bots represent. For instance, studies from 2017 indicated that about 9% of X users were bots, a figure that rose to over 15% by 2022. However, X's management claims that less than 5% of active users on their platform are bots.

Thousands of bots are organized into "botnets" (derived from "robot" and "network") or "bot farms," as the media calls them. These networks are managed by a few individuals known as "botmasters." They simulate user activity by arguing in comments, liking posts, and boosting follower counts. The ultimate goal depends on the motives of the people behind each specific bot farm.

Promotion of Projects or Tokens

A significant portion of the information in the crypto world is shaped through the social media platform X. Crypto enthusiasts often rely on the opinions of influencers and authoritative users and can become unwitting victims of hype and FOMO. In this landscape, interested parties use bot farms to create the illusion of popularity, engagement, and positive sentiment toward their chosen projects, driving mass purchases of their products, NFTs, or other assets.

In scam promotions, botnet owners emphasize the frequency of project mentions and the reach of advertising posts. They employ social engineering techniques, including the following (a toy scoring heuristic follows the list):

Utilizing numerous emojis (rockets, fires, lightning bolts, and money);

Showing charts that depict significant increases in token prices (even if such increases are fictitious);

Comparing the project to successful cases to exploit investors' greed.
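To make these cues concrete, here is a toy heuristic that scores a post on them. It is purely illustrative: the emoji set, phrases, weights, and caps are invented and reflect no platform's actual filter.

```python
HYPE_EMOJIS = set("🚀🔥⚡💰💎")
HYPE_PHRASES = ["100x", "to the moon", "next bitcoin", "don't miss"]

def shill_score(post: str) -> float:
    """Return a rough 0..1 score of how 'shill-like' a post looks."""
    text = post.lower()
    emoji_hits = sum(ch in HYPE_EMOJIS for ch in post)
    phrase_hits = sum(p in text for p in HYPE_PHRASES)
    # Cap each signal so a single long emoji string cannot dominate.
    return min(emoji_hits, 5) / 5 * 0.5 + min(phrase_hits, 2) / 2 * 0.5

print(shill_score("🚀🚀 100x gem, the next Bitcoin! Don't miss 🔥💰"))  # high
print(shill_score("Quarterly report is published on our blog."))        # near zero
```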

Bots are not solely used for promoting scam tokens. They also create an appearance of interest for legitimate, yet under-the-radar projects. This strategy is commonly used by crypto startup owners and meme coin creators.

The demand for project promotion via bot farms is serviced by specialized companies. These "shilling masters" handle hundreds of fake accounts, create posts, and script dialogues in chats.

One such company offers its clients a monthly subscription. The minimum plan costs $1,399, which includes 250 daily mentions in the project's chat, while the premium plan costs $4,449, offering 2,000 mentions in 30 popular groups and communities.
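Assuming a 30-day month, the minimum plan works out to roughly $1,399 / (250 × 30) ≈ $0.19 per mention.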

Did FTX Use Bots?

In July 2023, the Network Contagion Research Institute (NCRI) analyzed over 3 million tweets associated with 18 cryptocurrencies that were once listed on the FTX exchange. The data spanned from January 1, 2019, to January 27, 2023.

The research indicated that the increased activity on social media was not merely due to the organic popularity of the coins but was rather part of a strategic promotion of FTX-listed tokens. These included cryptocurrencies such as BOBA, GALA, IMX, RNDR, SAND, SPELL, and others.

The NCRI's analysis aimed to quantify the extent of authentic versus inauthentic activities on X related to these cryptocurrencies, using the Botometer tool to detect bot-like behaviors.
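For context, here is a minimal sketch of how an individual account could be scored with the botometer-python package. The credentials are placeholders, the exact response fields depend on the API version, and access conditions have changed since X restricted its API.

```python
import botometer

# Placeholder credentials: Botometer is reached via RapidAPI, and the
# X/Twitter app keys must be your own.
rapidapi_key = "YOUR_RAPIDAPI_KEY"
twitter_app_auth = {
    "consumer_key": "...",
    "consumer_secret": "...",
    "access_token": "...",
    "access_token_secret": "...",
}

bom = botometer.Botometer(
    wait_on_ratelimit=True,
    rapidapi_key=rapidapi_key,
    **twitter_app_auth,
)

# Score one account; the response includes per-category bot scores and a
# "complete automation probability" (CAP).
result = bom.check_account("@example_user")
print(result["cap"])
```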

Out of 182,105 unique accounts analyzed, 172,451 were evaluated for bot-like traits. Those not rated were likely already suspended. Ultimately, 11,215 users were identified as bots, making up about 6.5% of the total. Despite their relatively small number, the tweets they promoted represented about 20% of all analyzed posts.

The presence of inauthentic mentions does not by itself implicate FTX. To probe the relationship further, the NCRI examined the correlation between a token's listing on the exchange and the rise in bot-like account activity. The findings revealed that bot activity consistently spiked after a token began trading on FTX.
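A minimal sketch of that kind of before/after comparison, assuming a table of token mentions where each row carries a timestamp and the author's bot score (all data and column names below are invented):

```python
import pandas as pd

# Hypothetical data: one row per tweet mentioning the token.
tweets = pd.DataFrame({
    "timestamp": pd.to_datetime(
        ["2021-03-01", "2021-03-10", "2021-04-02", "2021-04-05", "2021-04-20"]
    ),
    "bot_score": [0.2, 0.3, 0.7, 0.8, 0.6],
})
listing_date = pd.Timestamp("2021-04-01")  # token starts trading on the exchange

before = tweets.loc[tweets.timestamp < listing_date, "bot_score"]
after = tweets.loc[tweets.timestamp >= listing_date, "bot_score"]

print(f"mean bot score before listing: {before.mean():.2f}")  # 0.25
print(f"mean bot score after listing:  {after.mean():.2f}")   # 0.70
```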

This study sheds light on the issue of social bots within the crypto industry. The greater the number of stakeholders, the higher the likelihood of encountering inauthentic activity. In this instance, the bots could have been deployed by FTX itself, the cryptocurrency projects, influencers, or representatives of decentralized exchanges (DEXs).

Wash Trading

Wash trading is a type of market manipulation in which trading volumes are artificially inflated to create the illusion of higher liquidity than actually exists. The strategy is commonly used to promote fraudulent exchanges, tokens, or NFTs: owners transfer tokens among thousands of wallets they control to falsely suggest an asset's popularity.

Bots play a crucial role in wash trading schemes, enabling the automatic trading and timely sale of tokens to benefit the fraudster.

A typical wash trading model includes four stages (a toy detection sketch follows the list):

Funding wallets A and B via offline exchangers, DEXs, or mixers like Tornado Cash: the fraudster's first step is to convert money into crypto anonymously.

Creating a token and adding liquidity with wallet A to establish the asset's value.

Distributing funds from wallet B to thousands of controlled blockchain addresses (bots) and purchasing the token created by wallet A.

Generating hype around the project through wash trading, which leads to selling the tokens to unsuspecting investors.
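As for the detection sketch promised above: one simple signature of wash trading is the same pair of addresses repeatedly trading the same token back and forth. The addresses, threshold, and trade data below are invented.

```python
from collections import Counter

# Hypothetical trade log: (buyer, seller, token). In practice these rows
# would be reconstructed from on-chain swap/transfer events.
trades = [
    ("0xA", "0xB", "SCAM"), ("0xB", "0xA", "SCAM"),
    ("0xA", "0xB", "SCAM"), ("0xB", "0xA", "SCAM"),
    ("0xC", "0xD", "SCAM"),
]

# Count trades per unordered address pair and token; heavy back-and-forth
# volume between the same two wallets is a classic wash-trading signature.
pair_counts = Counter(
    (frozenset((buyer, seller)), token) for buyer, seller, token in trades
)

THRESHOLD = 3  # arbitrary; a real analysis would weigh volume and timing too
for (pair, token), n in pair_counts.items():
    if n >= THRESHOLD:
        print(f"possible wash trading of {token} between {sorted(pair)}: {n} trades")
```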

Scammers often run multiple such projects simultaneously, waiting for one to attract investments before using the proceeds to create and inflate the value of new ones.

This scheme can also be used for money laundering, as the movement of funds between various wallets and tokens makes it difficult to trace the original source. While crypto detectives like ZachXBT may be able to track these on-chain movements, the trail typically goes cold at offline exchangers, which are often the last step in cashing out "dirty" money.

Airdrop Hunting

Airdrop hunting is a method of earning in crypto by engaging early with projects. In 2024, users received airdrops from zkSync, Starknet, and LayerZero for various activities like token exchanges, quest completions, and community involvement.

To scale airdrop hunting, some crypto enthusiasts resort to multi-accounting—creating hundreds of controlled wallets that mimic real user actions. These multi-accounts often function as bots following a pre-scripted sequence.

Even the most brazen hunters sometimes forgo disguises. For instance, the legendary LayerZero user named Ruslan opted for simplicity, sequentially numbering his accounts: Ruslan002.eth, Ruslan003.eth, Ruslan004.eth, and so on.

Project owners naturally prefer not to reward bots, so they proactively detect and block such accounts. This year, for example, LayerZero CEO Bryan Pellegrino initiated a large-scale sybil hunt: of 5.2 million L0 users, only 1.28 million received the airdrop, and at least 800,000 addresses were flagged as potentially bot-driven.
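One common heuristic in such hunts (not necessarily the exact method LayerZero used) is to cluster wallets by their first funding source: hundreds of addresses topped up from a single parent wallet are a strong sybil signal. A minimal sketch with invented addresses:

```python
from collections import defaultdict

# Hypothetical funding records: (funding_parent, funded_wallet), typically
# derived from each wallet's first inbound transfer.
funding = [
    ("0xPARENT", "0xW1"), ("0xPARENT", "0xW2"),
    ("0xPARENT", "0xW3"), ("0xOTHER", "0xW4"),
]

clusters = defaultdict(list)
for parent, wallet in funding:
    clusters[parent].append(wallet)

SYBIL_MIN = 3  # arbitrary cluster-size cutoff for this sketch
for parent, wallets in clusters.items():
    if len(wallets) >= SYBIL_MIN:
        print(f"possible sybil cluster funded by {parent}: {wallets}")
```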

Methods for Detecting Social Bots

Distinguishing real users from social bots is crucial for maintaining the integrity of social media platforms. Consequently, platforms like Facebook, X, and Instagram employ a variety of methods to identify bots, ranging from manual checks to sophisticated AI-driven solutions.

Early detection typically relies on easily observable behavioral patterns (a toy classifier built on such features follows the list):

Account Activity: Bots tend to show a higher frequency of messages and activity compared to genuine users.

Account Metadata: Bots often have an unusually high ratio of followings to followers and are usually associated with recently created accounts.

Content: Bots generate repetitive content, use identical keywords, or draw from a limited pool of sources.
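As mentioned above, these signals can feed a simple supervised classifier. The sketch below trains a random forest on three of them; the feature values and labels are invented, and a real model would need far more data:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Toy feature matrix: [posts_per_day, following_to_followers_ratio, account_age_days]
X = np.array([
    [120.0, 25.0,   10],   # hyperactive, young, follows far more than it is followed
    [150.0, 40.0,    5],
    [  3.0,  0.8,  900],   # ordinary human-looking account
    [  1.5,  1.2, 2000],
])
y = np.array([1, 1, 0, 0])  # 1 = bot, 0 = human

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

new_account = np.array([[90.0, 30.0, 14]])
print(clf.predict(new_account))        # likely [1]
print(clf.predict_proba(new_account))  # class probabilities
```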

Advanced bots have become adept at circumventing these initial detection strategies. Moreover, these methods have often produced both false positives (genuine users mistakenly flagged as bots) and false negatives (bots that pass as genuine). This challenge has prompted the development of more advanced bot detection technologies, including deep learning and neural networks.

Convolutional Neural Networks (CNNs) are employed to analyze textual content and extract complex features from sequences of words or characters. Recurrent Neural Networks (RNNs) and Long Short-Term Memory Networks (LSTMs) are used to analyze dependencies and patterns in content. Graph Neural Networks (GNNs) are applied to model the network structure and interactions between users.
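As one concrete example, here is a minimal LSTM-based text classifier of the kind described above; the vocabulary size, dimensions, and dummy input are placeholders, and the training loop is omitted:

```python
import torch
import torch.nn as nn

class BotLSTM(nn.Module):
    """Classify a token sequence (e.g., a tweet) as bot- or human-written."""

    def __init__(self, vocab_size=10_000, embed_dim=64, hidden_dim=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, 1)  # single bot/human logit

    def forward(self, token_ids):               # token_ids: (batch, seq_len)
        embedded = self.embed(token_ids)
        _, (h_n, _) = self.lstm(embedded)       # h_n: (1, batch, hidden_dim)
        return self.head(h_n[-1]).squeeze(-1)   # (batch,) logits

model = BotLSTM()
dummy_batch = torch.randint(0, 10_000, (2, 20))  # two dummy posts, 20 tokens each
print(torch.sigmoid(model(dummy_batch)))  # probability each post is bot-generated
```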

Final Thoughts

The application of bots extends beyond the schemes outlined in this article; the range of their activities is limited only by the creativity and goals of their controllers. For example, bots can also be used for DDoS attacks, getting influencer accounts locked, or artificially inflating follower numbers. This association of bots with mostly destructive activities highlights the ongoing challenge they present.

Bots are likely to remain a permanent fixture on the internet. Furthermore, as artificial intelligence evolves, their capabilities will become more sophisticated. They will be able to autonomously respond to comments, read the emotions of others, and understand the context of posts.

To safeguard yourself, filter the information you receive, use critical thinking, and rely on your own judgments. This approach will help ensure that no bot farm can sway your decisions.