The rapid evolution of artificial intelligence (AI) is enabling scammers to execute highly sophisticated attacks at scale, warns Richard Ma, co-founder of Web3 security firm Quantstamp, Cointelegraph reports. In an interview at Korea Blockchain Week, Ma explained that AI is making hackers more convincing and raising the success rate of their social engineering attacks.
Ma shared an example of an AI-powered attack on one of Quantstamp's clients, in which the attacker impersonated the targeted firm's CTO. The attacker held a back-and-forth conversation with one of the company's engineers, discussing a supposed emergency before making any requests. Ma said these preliminary steps make a target more likely to hand over sensitive information.
The most significant threat posed by sophisticated AI is the scale at which such attacks can be executed: attackers can use automated AI systems to run social engineering campaigns and advanced scams across thousands of organizations with minimal human involvement. To protect themselves, Ma advises individuals and organizations to avoid sending sensitive information via email or text and to rely instead on internal communication channels such as Slack. Companies should also invest in anti-phishing software that filters out automated emails generated by bots and AI.
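Ma does not name a specific product, so the following is only a minimal sketch of the kind of rule-based pre-filter such anti-phishing tooling might apply before deeper analysis; the keyword lists, domains, and scoring thresholds are illustrative assumptions, not a vendor implementation.

```python
# Illustrative only: a toy rule-based pre-filter in the spirit of the
# anti-phishing tooling Ma describes. Real products combine ML models,
# sender reputation, and authentication checks; the terms and scoring
# below are assumptions for demonstration.
import re

URGENCY_TERMS = {"urgent", "emergency", "immediately", "asap", "deadline"}
SENSITIVE_REQUESTS = {"password", "seed phrase", "private key", "wire", "credentials"}

def phishing_risk(subject: str, body: str, sender_domain: str, company_domain: str) -> int:
    """Return a crude 0-3 risk score for an inbound email."""
    text = f"{subject} {body}".lower()
    words = set(re.findall(r"[a-z']+", text))
    score = 0
    if sender_domain.lower() != company_domain.lower():
        score += 1  # external sender claiming to be internal staff
    if words & URGENCY_TERMS:
        score += 1  # manufactured urgency, as in the CTO impersonation example
    if any(term in text for term in SENSITIVE_REQUESTS):
        score += 1  # direct request for credentials or funds
    return score

if __name__ == "__main__":
    score = phishing_risk(
        subject="Urgent: need the deployer private key",
        body="Emergency on mainnet, send it to me immediately.",
        sender_domain="company-support.example",  # hypothetical look-alike domain
        company_domain="company.example",
    )
    print("risk score:", score)  # 3 -> route to quarantine or manual review
```

A score above a chosen threshold would simply flag the message for quarantine or human review, which is the point of Ma's advice: slow the automated, high-volume attacks down before they reach an employee.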