Deepfake technology threatens to inflict major damage on the global financial services industry, with estimated losses of up to $40 billion by 2027, according to VentureBeat.

The explosion of artificial intelligence (AI) brings many benefits, but it also creates worrying security risks, especially deepfakes. According to Statista, deepfake technology, which uses AI to create fake videos, voices, or images, is becoming an increasingly serious threat to the financial services industry.

Losses caused by deepfakes are expected to skyrocket from 12.3 billion USD in 2023 to 40 billion USD in 2027, a staggering compound annual growth rate of roughly 32%.
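For readers who want to sanity-check that growth figure, the minimal Python sketch below computes the compound annual growth rate implied by the two loss figures cited above (12.3 billion USD in 2023, 40 billion USD in 2027). Only those figures and the standard CAGR formula are used; the result comes out at roughly 34%, in the same ballpark as the quoted 32% rate.

```python
# Sanity check of the compound annual growth rate (CAGR) implied by the
# deepfake-fraud loss projection cited above: 12.3 billion USD in 2023
# rising to 40 billion USD in 2027.

def cagr(start_value: float, end_value: float, years: int) -> float:
    """Standard compound annual growth rate over the given number of years."""
    return (end_value / start_value) ** (1 / years) - 1

losses_2023_bn = 12.3            # estimated losses in 2023, billions of USD
losses_2027_bn = 40.0            # projected losses in 2027, billions of USD
period_years = 2027 - 2023       # four-year horizon

rate = cagr(losses_2023_bn, losses_2027_bn, period_years)
print(f"Implied CAGR: {rate:.1%}")   # ~34%, close to the ~32% rate cited
```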

Deepfake attacks have surged in recent years. In 2023, the number of deepfake attacks rose by 3,000% compared with the previous year, and is expected to grow a further 50-60% in 2024, to roughly 140,000-150,000 cases globally, according to a Deloitte report.

This alarming increase is partly due to the popularity of next-generation generative AI (GenAI) applications, tools, and platforms. These tools allow attackers to create deepfake videos, voice spoofs, and fraudulent documents quickly and at low cost.

Worryingly, deepfake fraud targeting contact centers causes an estimated $5 billion in losses per year, according to a 2024 Pindrop report. Meanwhile, Bloomberg reports that a shadow industry has formed on the web, selling fraud software at prices ranging from 20 USD to several thousand USD.

Source: Statista

Businesses are not ready to deal with deepfakes

Although the risk from deepfakes is growing, many businesses are still not fully prepared to cope. Ivanti research shows that 30% of businesses have no specific plan to identify and defend against AI-driven attacks.

Ivanti's "State of Cybersecurity 2024" report also shows that 74% of businesses surveyed have noted signs of AI-driven threats, and 89% believe this is just the beginning. .

Source: Ivanti's 2024 State of Cybersecurity Report

Among the Chief Information Security Officers (CISOs), Chief Information Officers (CIOs), and IT leaders interviewed by Ivanti, 60% fear that their organizations are not prepared to defend against AI-driven threats and attacks.

Attackers often use deepfakes as part of a diverse attack strategy that also includes phishing, software exploits, ransomware, and API vulnerabilities. This trend is consistent with security experts' predictions of a dangerous rise in threats from next-generation AI.

Fake videos and voices in particular are powerful weapons for cybercriminals, who target large businesses with the aim of stealing millions of dollars. The situation becomes even more alarming as nation-states and well-resourced criminal organizations invest money, recruit experts, and develop Generative Adversarial Network (GAN) technology, the technique behind sophisticated deepfakes that are difficult to detect.

The danger posed by deepfakes has been acknowledged by leading cybersecurity experts. George Kurtz, CEO of the cybersecurity company CrowdStrike, expressed deep concern in an interview with the Wall Street Journal about how sophisticated deepfakes have become, saying the technology poses a serious threat to the information security and identity security of organizations and businesses.

Businesses need to meet the challenge

Deepfakes have become so widespread that the US Department of Homeland Security has issued guidance on recognizing the dangers of this form of attack. The rise of deepfakes and AI-driven attacks requires businesses to adapt quickly and develop strong defense strategies.

Raising awareness of deepfakes, training employees to recognize signs of fraud, deploying deepfake detection technology, and increasing cooperation between organizations are also considered urgent measures.

The fight against deepfakes and future AI attacks will be a technological arms race. Businesses need to invest proactively in advanced security technology, while continuously updating their knowledge and sharing information to deal effectively with this increasingly sophisticated threat.