Artificial Intelligence (AI), while in its infancy as a technology, is already being harnessed for criminal acts in the crypto space, according to a report from British blockchain analytics firm Elliptic.

While AI-enhanced crypto crime may not yet be a mainstream threat, identifying emerging trends is important for sustainable innovation, the report said.

Scammers promoting crypto investment schemes have recently turned to deepfakes of celebrities and authority figures.

Deepfaked video of Elon Musk and former Singaporean Prime Minister Lee Hsien Loong, among others, has been used in such scams.

DL News reported earlier this month that a social media ad showing Lee hawking crypto investments was a deepfake that layered bogus audio over footage of a speech he gave earlier this year.

“This is extremely worrying: People watching the video may be fooled into thinking that I really said those words,” Lee said in a Facebook post. “The video is not real!”

AI is also being used to generate hype for scam tokens, the Elliptic report said.

For example, there are hundreds of tokens listed on blockchains that have some variant of the term “GPT” in their name. Some may be the product of legitimate ventures, but Elliptic said it has identified numerous scams among them.

Threat actors

The report noted there is debate over whether AI tools can be used to audit code and check for bugs, and whether black hat hackers could turn those same capabilities to finding vulnerabilities and devising exploits.

Though Microsoft and OpenAI have reported instances of Russian and North Korean threat actors making such attempts, others suggest the technology is not yet developed enough for these attacks to succeed, according to the Elliptic report.