U.S. law enforcement authorities seized two domains connected to an AI-powered social media bot farm linked to the government of Russia, the U.S. Department of Justice (DOJ) announced on Tuesday.

“This is a strong example of the disruption-first strategy that the Department, including the FBI, have taken when it comes to cyber and cyber-enabled threats to national security,” a DOJ spokesperson told Decrypt. “We continue to evolve in the way we defend and identify these actors.”

According to documents released by the agency, the cybercriminals used generative AI to create fake social media profiles, many purporting to belong to Americans, which were then used to post pro-Russian messages on Twitter.

“Today’s actions represent a first in disrupting a Russian-sponsored generative AI-enhanced social media bot farm,” U.S. Federal Bureau of Investigation (FBI) Director Christopher Wray said in a statement. 

“Russia intended to use this bot farm to disseminate AI-generated foreign disinformation, scaling their work with the assistance of AI to undermine our partners in Ukraine and influence geopolitical narratives favorable to the Russian government.”

The bot farm allegedly relied on two domains (mlrtr.com and otanmail.com) registered through Arizona-based domain provider Namecheap and on private email services that generated random email addresses, which were then used to create profiles on Twitter.

In one example, a bot account purported to be a Minneapolis resident who described himself as a “humanist” and included a hashtag for Bitcoin.

In total, the bot farm created 968 Twitter accounts between June 11, 2022, and March 1, 2024.

Although Namecheap said it could not comment on any specific investigation, the company told Decrypt that it “actively combats all forms of abuse on our platform.”

“We are recognized widely for our effective tactics to identify and rapidly take down bad actors using our services,” Namecheap wrote via Twitter. “We work alongside law enforcement and implement U.S. subpoenas, court orders and/or warrants related to abuse or illegal activity on our platform.”

Cybersecurity experts say that publicly available generative AI models and standardized developer tools are attractive to state-sponsored cybercriminals.

“The combination of generative AI and developer APIs provided by platforms like Telegram, X, and Meta can be very dangerous if used maliciously,” Steve Walbroehl, co-founder and CTO of blockchain security firm Halborn, said via Telegram.

“In a normal situation, this functionality enables users to automate, manage, or boost their social media presence. But in the hands of a bad actor, it is a very convincing method to mislead people and online communities.”
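To illustrate the kind of automation Walbroehl is referring to, here is a minimal sketch in Python that posts a message through Telegram's public Bot API, the same class of developer interface he mentions. The bot token, chat ID, and message text are hypothetical placeholders, and a real bot would draw its text from a generative model rather than a hard-coded string.

    import requests  # standard HTTP client, used here to call the Telegram Bot API

    # Hypothetical placeholders; a real token and chat are needed for the call to succeed.
    BOT_TOKEN = "0000000000:EXAMPLE_TOKEN"
    CHAT_ID = "@example_channel"

    def post_message(text: str) -> None:
        """Send one message via the Bot API's documented sendMessage method."""
        url = f"https://api.telegram.org/bot{BOT_TOKEN}/sendMessage"
        response = requests.post(url, json={"chat_id": CHAT_ID, "text": text}, timeout=10)
        response.raise_for_status()

    # In the benign case described above, the text might come from a scheduler or a
    # generative model; here it is simply a fixed example string.
    post_message("Automated status update")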

Walbroehl pointed to Telegram groups composed entirely of AI-driven bots that can manufacture hype and social proof, convincing unsuspecting victims to invest in scams, only to have their investments stolen by the scammers controlling the bots.

“While it is a social engineering tactic used for distributing disinformation at scale, it is also a tactic used by scammers to help ‘pump and dump’ meme-coins to steal large amounts of capital from naive users,” Walbroehl added.

“Takedowns and accompanying advisory suggest that U.S. and allies are trying various techniques like these disruptions and seizures… because the operators are currently beyond their direct reach,” Citizen Lab senior researcher John Scott-Railton observed in a Twitter thread.

“Expect the operators to learn, evolve, and come right back targeting the U.S.”

NEW: @TheJusticeDept just disrupted a Russian-run AI-enabled Twitter disinformation bot farm.

Almost 1000 accounts on @X.

Masqueraded as Americans & promoted Russian government narratives.

Fascinating sleuthing behind this 1/ https://t.co/QOsWnoVUCf pic.twitter.com/pvbVZM3nLj

— John Scott-Railton (@jsrailton) July 9, 2024

The operation to root out the fake accounts began in June with two search and seizure warrants from the U.S. District Court for the District of Arizona authorizing the search of servers belonging to X Corp (aka Twitter) and the seizure of the domains from Namecheap. Twitter voluntarily suspended the reported accounts, citing violations of its terms of service.

“X is an important partner in this case and we value their voluntary efforts to remove these bots and conduct their own investigation,” the DOJ spokesperson said.

“As malign actors accelerate their criminal misuse of AI, the Justice Department will respond and we will prioritize disruptive actions with our international partners and the private sector,” Deputy Attorney General Lisa Monaco said in a statement. 

“We will not hesitate to shut down bot farms, seize illegally obtained internet domains, and take the fight to our adversaries.”

The operation was conducted in collaboration with the FBI, the Cyber National Mission Force, government partners in Canada and the Netherlands, and social media companies including Twitter, the DOJ said.

“With these actions, the Justice Department has disrupted a Russian-government backed, AI-enabled propaganda campaign to use a bot farm to spread disinformation in the United States and abroad,” Attorney General Merrick B. Garland said in a statement. 

“As the Russian government continues to wage its brutal war in Ukraine and threatens democracies around the world, the Justice Department will continue to deploy all of our legal authorities to counter Russian aggression and protect the American people.”

It’s not just fake Twitter accounts being used to promote scams and disinformation. Recently, a rash of cyberattacks hijacking the legitimate Twitter accounts of prominent entertainers, including musician Doja Cat, actress Sydney Sweeney, and legendary heavy-metal band Metallica, has been used to promote crypto scams.