Financial institutions face rising threat from sophisticated AI fraud

Many financial institutions are struggling to keep up with the rising sophistication of AI-driven fraud, creating a critical need for enhanced detection and prevention methods.

In the world of finance, artificial intelligence (AI) has emerged as both a tool and a generator of new problems. It brings innovation, productivity and efficiency to companies; however, it has also introduced sophisticated challenges that many financial institutions are unprepared to address.

Since the rise of accessible AI tools, many financial institutions have struggled with a lack of tools to accurately identify AI fraud and segregate it from other types of fraud.

This inability to differentiate various fraud types within their systems leaves these institutions with a blind spot and makes it difficult to comprehend the scope and impact of AI-driven fraud.

Cointelegraph heard from Ari Jacoby, an AI fraud expert and the CEO of Deduce, to better understand how financial institutions can identify and separate AI fraud, what can be done to prevent this type of fraud before it occurs and how its rapid growth may impact the entire industry.

AI fraud identification

The main challenge is that most financial institutions currently have no way of distinguishing AI-generated fraud from all other types, so it is aggregated into a single category of fraud.

Jacoby said the combination of legitimate personally identifiable information — like Social Security numbers, names and birthdates — with socially engineered email addresses and legitimate phone numbers makes detection by legacy systems nearly impossible.

Jacoby said that this makes preventing and remediating the major fraud drivers exceptionally difficult, especially as new types of fraud ramp up.

“AI is particularly difficult to detect because of its ability to create synthetic, lifelike identities at a scale that makes it nearly impossible for technology to identify.”

According to the Deduce CEO, the challenge with solutions is that technology is advancing rapidly, and therefore, so is the skill set of those committing AI fraud. This means that financial institutions must be on top of their game now to understand where AI comes into play in such cases of fraud. 

Finding solutions

According to Jacoby, the first step in implementing solutions is to analyze the online activity patterns of individuals and groups of identities to flag actions that might appear legitimate but are actually fraudulent.

He said legacy fraud prevention methods simply aren’t enough anymore, and financial institutions will need to become “relentlessly proactive” in their pursuit of preventing the continued explosion of AI-generated fraud.

This likely won’t mean implementing just one solution — it will mean creating a layered program that works to identify fraudsters already lingering within the existing customer base while also working to stop new fake identities before they infiltrate.

“By layering solutions, utilizing massive data sets to identify patterns, and more accurately analyzing trust scores, this type of fraud can be better mitigated.”
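The layered approach Jacoby describes — combining multiple independent signals into a trust score and tightening risk thresholds "one peg to the right" — can be illustrated with a minimal sketch. Note this is a hypothetical example for illustration only: the signal names, weights and thresholds below are assumptions, not Deduce's actual methodology.

```python
# Hypothetical sketch of layered trust scoring for synthetic-identity detection.
# Each signal is one "layer"; the weights and cutoffs are illustrative assumptions.
from dataclasses import dataclass


@dataclass
class IdentitySignals:
    email_age_days: int        # how long the email address has existed
    phone_carrier_match: bool  # phone number consistent with the claimed identity
    velocity_24h: int          # sign-up attempts sharing this device/IP in 24 hours
    pii_consistency: float     # 0-1 agreement of name/DOB/SSN across data sources


def trust_score(s: IdentitySignals) -> float:
    """Combine independent signal layers into a 0-1 trust score."""
    score = 1.0
    if s.email_age_days < 30:       # freshly created, possibly engineered email
        score -= 0.3
    if not s.phone_carrier_match:   # phone data doesn't line up with the PII
        score -= 0.2
    if s.velocity_24h > 5:          # burst of sign-ups: a synthetic-identity pattern
        score -= 0.3
    score -= 0.2 * (1.0 - s.pii_consistency)  # penalize inconsistent PII
    return max(score, 0.0)


def risk_tier(score: float) -> str:
    # Thresholds shifted "one peg to the right" versus a looser legacy policy:
    # what would once have passed as low risk now lands in medium.
    if score >= 0.8:
        return "low"
    if score >= 0.5:
        return "medium"
    return "high"
```

For example, an established identity (old email, matching phone, consistent PII) would score 1.0 and land in the low tier, while a synthetic profile with a days-old email, mismatched phone and a burst of sign-up attempts would fall into the high tier — the point being that no single layer flags it, but the combination does.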

Jacoby said most of the financial fraud teams Deduce speaks with are moving risk “one peg to the right” — anything previously categorized as low risk is now medium risk — and are taking additional steps to prevent fraud across all stages of the customer life cycle.

“They’re taking the threat of AI fraud seriously; it’s one of the major issues plaguing the financial industry, and we’re merely at the beginning stages of how advanced this technology will become.”

Jacoby stressed that fraud has surged by 20% year-over-year, with the rise of AI significantly increasing the prevalence of synthetic identities.

“AI-driven fraud is the fastest-growing aspect of identity fraud today and will be [a] over $100B problem this year.”

Beyond traditional financial institutions, AI-generated fake IDs could also reshape crypto exchange Know Your Customer (KYC) measures and cybersecurity as a whole.

The issue is large enough that it is also already being looked at by regulators. On May 2, the United States Commodity Futures Trading Commission (CFTC) Commissioner Kristin Johnson advanced three proposals for the regulation of AI technologies as they apply to U.S. financial markets.

These included, in particular, the introduction of heightened penalties for those who intentionally use AI technologies to engage in fraud, market manipulation or the evasion of regulations.

If financial institutions and regulators don’t take action now, they risk falling too far behind to implement effective solutions.