
Artificial intelligence (AI) technology is increasingly being adopted by fraudsters to commit new account fraud (NAF) and bypass even biometric checks. That is the finding of a new report from Entrust, which analyzed data from more than one billion identity verifications across 30 sectors and 195 countries between September 2024 and September 2025.
The report details how Generative AI (GenAI) has democratized the creation of counterfeit ID documents and deepfakes, allowing fraudsters to generate hyper-realistic replicas of documents and impersonate identities to open new fraudulent accounts.
AI drives digital fakes
The Entrust study highlights a shift in document fraud tactics:
- Physical forgeries: Accounted for almost half (47%) of attempted document fraud.
- Digital fakes: Made up more than a third (35%) of attempts. The report attributes their growing share to the "accessibility and scalability of modern editing tools" and generative AI.
According to the report, what once required specialized software and design skills can now be achieved with an open source model and a few text prompts.
The risk of deepfakes in biometric verification
Scammers are also using AI-powered deepfakes to bypass biometric identity verification systems. Deepfakes account for a fifth (20%) of biometric fraud attempts and are especially prevalent in the financial services sector:
- Cryptocurrencies: 60% of biometric fraud attempts.
- Digital banks: 22% of attempts.
- Payments and commerce: 13% of attempts.
The most common deepfake methods used include:
- Synthetic identities: AI-generated faces that do not correspond to real people.
- Face swaps: Replacing one person's face with another's in a recorded or live video.
- Animated selfies: Taking a still photo and using AI to add motion, simulating proof of life.
Injection attacks: bypassing live capture
The report warns of the rise of injection attacks, where fake images or videos are fed directly into the identity verification system, bypassing the live capture process via camera.
The frequency of these attacks has increased 40% annually, according to Entrust. Virtual camera injection attacks are the most common and are often combined with device emulation techniques to trick verification software into believing it is a legitimate login attempt.
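To make the virtual-camera vector concrete, the sketch below shows a simplistic name-matching heuristic that flags capture devices whose names match common virtual-camera drivers. This is an illustrative assumption only: the device labels and the matching logic are mine, not Entrust's, and real verification systems rely on device attestation and signal forensics rather than name checks, which emulation can spoof.

```python
# Illustrative heuristic only: flag capture devices whose reported name
# matches a known virtual-camera driver. Real injection-attack defenses
# use device attestation and media forensics, not string matching.

# Assumed example labels; actual driver names vary by platform and version.
KNOWN_VIRTUAL_CAMERAS = {
    "obs virtual camera",
    "manycam virtual webcam",
    "droidcam source",
}

def looks_like_virtual_camera(device_name: str) -> bool:
    """Return True if the device name contains a known virtual-camera label."""
    name = device_name.strip().lower()
    return any(known in name for known in KNOWN_VIRTUAL_CAMERAS)

if __name__ == "__main__":
    print(looks_like_virtual_camera("OBS Virtual Camera"))  # flagged
    print(looks_like_virtual_camera("Integrated Webcam"))   # not flagged
```

A check like this is trivially defeated by the device-emulation techniques the report describes, which is precisely why the 40% annual growth in injection attacks pushes vendors toward deeper signal-level detection.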
Conclusions: Identity as the first line of defense
Simon Horswell, senior manager of fraud specialists at Entrust, notes that as detection improves, fraud networks evolve, becoming faster, more organized, and commercially motivated. Generative AI and shared tactics are driving both the volume and the sophistication of these attacks.
According to the expert, identity has become the first line of defense, and protecting it with reliable identity verification throughout the entire customer lifecycle is essential to stay ahead of adaptive threats.
References
- Entrust, 2026 Identity Fraud Report.