Combating Deepfakes in Financial Services

Apr 7, 2025
Adam Preis
Director, Product & Solution Marketing, Ping Identity

“Deepfake financial fraud is rising, with bad actors increasingly leveraging illicit synthetic information like falsified invoices and customer service interactions to access sensitive financial data and even manipulate organizations’ AI models to wreak havoc on financial reports.”

— Mike Weil, Digital Forensics Leader and Managing Director, Deloitte Financial Advisory Services LLP

Imagine receiving a call from your CEO instructing you to transfer millions to a vendor. The face and voice match exactly, leaving no room for doubt. But what if it’s all fake? Welcome to the world of deepfakes—a new frontier in fraud that is transforming financial crime. As these artificial intelligence (AI)-driven scams become more sophisticated, financial institutions face growing risks to their security, reputation, and bottom line.

The Deepfake Landscape: A Threat on the Rise

Deepfakes leverage advanced AI to manipulate video, audio, and images, creating convincingly accurate imitations. Initially a novelty in entertainment and social media, they have evolved into tools for fraudsters seeking to exploit trust. The financial services industry, with its reliance on identity verification, is a prime target. In 2023 alone, deepfake fraud cases surged by 3,000% in the United States, with similar spikes globally. Financial institutions reported escalating threats, with 77% predicting that deepfake fraud will become one of their most significant cybersecurity challenges within the next three years.

The implications are alarming. Fraudsters have already used deepfakes to impersonate CEOs, convincing employees to make unauthorized payments. In one case, an executive transferred $25 million after attending a video call with what he believed were senior colleagues. Every participant, apart from the victim, was a deepfake creation. Even job interviews aren’t safe—one organization discovered that 15% of its software developer hires were fraudulent, with deepfakes used to pass remote screenings.


These attacks highlight the growing sophistication of deepfake technology and its capacity to undermine even the most advanced detection systems. Given their dependence on identity and trust, financial services must adapt by leveraging modern identity and access management (IAM) to detect suspicious activity in real time.


3 Ways to Combat Deepfakes with Digital Identity eBook

Leverage identity verification, authorization, and verified credentials across the end-to-end customer journey to combat deepfakes.

Why the Financial Services Industry Is Vulnerable

The financial sector's digitization and adoption of new AI tools have unlocked unparalleled convenience for customers. However, this digital shift also expands the attack surface for cybercriminals. Deepfakes exploit weak identity verification protocols, bypass traditional fraud detection, and manipulate unsuspecting employees or customers. They target vulnerabilities across various fronts, from loan approvals and onboarding processes to high-stakes wire transfers. Additionally, deepfake fraud schemes attempt to access bank accounts using social engineering, which manipulates someone into performing an action or divulging sensitive information by gaining their trust illegitimately.

The stakes are high. Fraud driven by deepfakes doesn’t just lead to financial losses. It erodes customer trust. A single incident of deepfake fraud can tarnish a bank's reputation, deterring customers and undermining investor confidence. For many financial institutions, the damage to trust can outweigh the financial impact.

The Role of Identity in Combating Deepfake Fraud

Fighting deepfake fraud starts with a robust identity framework. Traditional methods of verifying identities, such as passwords or one-time passcodes, fall short in detecting AI-generated forgeries. To counter these threats, financial institutions must prioritize advanced identity solutions and security measures that leverage biometrics, real-time analytics, decentralized identity, machine learning, and verified credentials. Multi-factor authentication (MFA), while a useful digital security capability, isn’t enough on its own to combat identity fraud in the world of generative AI.


Identity verification

Identity verification is a critical first step. Facial recognition systems, enhanced with liveness detection, can differentiate between a live human and a static image or video. The Ping Identity Platform analyzes micro-expressions and movements to flag deepfake attempts, adding a vital layer of protection. In one case, liveness detection stopped a deepfake video from bypassing a government ID check, saving a financial institution from approving a fraudulent loan.


Verified credentials

Verified credentials introduce a revolutionary way to validate identity. By using cryptographic techniques to bind an individual’s information to a secure digital credential, institutions can confirm identity without relying on physical documents. These credentials are stored in digital wallets, providing a tamper-proof, decentralized means of authentication. For instance, during customer onboarding, a verified credential can authenticate not only the individual’s identity but also the integrity of the information presented, effectively nullifying attempts to use manipulated data.
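To make the cryptographic binding concrete, here is a minimal, hypothetical sketch in Python. Production verified-credential systems use asymmetric signatures (e.g. Ed25519) and standard formats such as W3C Verifiable Credentials so any relying party can verify without holding a secret; this illustration uses a stdlib HMAC purely to show the tamper-evidence property:

```python
import hashlib
import hmac
import json

# Hypothetical issuer key for illustration only; real deployments use
# asymmetric key pairs so verification never requires the signing secret.
ISSUER_KEY = b"issuer-demo-secret"

def issue_credential(claims: dict) -> dict:
    """Bind a set of identity claims to a tamper-evident signature."""
    payload = json.dumps(claims, sort_keys=True).encode()
    sig = hmac.new(ISSUER_KEY, payload, hashlib.sha256).hexdigest()
    return {"claims": claims, "signature": sig}

def verify_credential(credential: dict) -> bool:
    """Recompute the signature; any altered claim invalidates it."""
    payload = json.dumps(credential["claims"], sort_keys=True).encode()
    expected = hmac.new(ISSUER_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, credential["signature"])

cred = issue_credential({"name": "A. Customer", "kyc_passed": True})
assert verify_credential(cred)          # untampered credential verifies

cred["claims"]["kyc_passed"] = False    # a fraudster edits a claim...
assert not verify_credential(cred)      # ...and verification now fails
```

The key point is that the signature covers the claims themselves, so a deepfake-assisted attempt to present manipulated data fails verification even if the presenter looks and sounds legitimate.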


Decentralized identity

Decentralized identity solutions empower users by giving them control over their personal information. Instead of storing identity data centrally, which presents a single point of failure, decentralized models distribute data across secure networks. Users can selectively share specific attributes of their identity, such as proof of age or employment, without exposing additional details. This approach minimizes data exposure, reducing the risk of exploitation through deepfakes or other means. Moreover, decentralized identity makes it easier for institutions to revoke compromised credentials in real time, ensuring ongoing security.
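Selective attribute sharing can be sketched with salted hash commitments, the simplified idea behind selective-disclosure formats such as SD-JWT. Everything below (attribute names, values) is illustrative, and a real system would sign the committed credential with the issuer's key:

```python
import hashlib
import secrets

def commit(value: str, salt: str) -> str:
    """Salted hash commitment to a single attribute value."""
    return hashlib.sha256(f"{salt}:{value}".encode()).hexdigest()

# Issuer: the credential carries only salted hashes of each attribute
# (in practice, this digest set would itself be signed by the issuer).
attributes = {"age_over_18": "true", "employer": "ExampleBank", "salary": "90000"}
salts = {name: secrets.token_hex(8) for name in attributes}
credential = {name: commit(value, salts[name]) for name, value in attributes.items()}

# Holder: reveal ONE attribute plus its salt; employer and salary stay private.
disclosure = {
    "attribute": "age_over_18",
    "value": "true",
    "salt": salts["age_over_18"],
}

# Verifier: check the revealed value against the committed hash.
ok = commit(disclosure["value"], disclosure["salt"]) == credential[disclosure["attribute"]]
print(ok)  # True — proof of age without exposing the other attributes
```

Because each attribute has its own salt, the verifier learns nothing about undisclosed attributes, which is exactly the data-minimization property described above.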


Adaptive authentication

Adaptive authentication methods complement these technologies by continuously analyzing behavioral signals—such as typing speed, device usage, and geolocation—to identify anomalies that could indicate fraudulent activity. If a customer suddenly attempts a high-value transfer from an unusual location, adaptive authentication can trigger additional verification measures or block the transaction altogether.
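The decision flow above can be sketched as a toy risk engine. The baseline values, thresholds, and additive scoring here are purely illustrative; production adaptive-authentication engines learn per-user baselines with machine-learning models rather than hand-written rules:

```python
from dataclasses import dataclass

@dataclass
class Signal:
    """Behavioral context captured for one login or transaction attempt."""
    typing_speed_wpm: float
    known_device: bool
    geo_country: str

# Illustrative baseline for one customer; real systems learn this per user.
BASELINE = {"typing_speed_wpm": 65.0, "geo_country": "US"}

def risk_score(s: Signal) -> int:
    """Crude additive score; shown only to make the idea concrete."""
    score = 0
    if abs(s.typing_speed_wpm - BASELINE["typing_speed_wpm"]) > 25:
        score += 2                  # typing cadence far from the user's norm
    if not s.known_device:
        score += 3                  # unrecognized device
    if s.geo_country != BASELINE["geo_country"]:
        score += 3                  # attempt from an unusual country
    return score

def decide(s: Signal, high_value: bool) -> str:
    score = risk_score(s) + (2 if high_value else 0)
    if score >= 6:
        return "block"
    if score >= 3:
        return "step_up"            # require extra verification, e.g. MFA
    return "allow"

print(decide(Signal(64, True, "US"), high_value=False))   # allow
print(decide(Signal(30, False, "RU"), high_value=True))   # block
```

The second call mirrors the scenario in the paragraph: a high-value transfer from an unusual location and device accumulates enough risk to be blocked outright rather than merely challenged.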

Deepfake Detection and Risk Mitigation

The sophistication of deepfake attacks requires the entire financial system to adopt a proactive, layered defense approach. Early detection is key. AI-driven systems designed to identify manipulated media can analyze visual and audio cues for inconsistencies, such as unnatural lip movements, mismatched lighting, or distorted voice frequencies.


Another vital weapon in this fight is policy-based access control (PBAC). This fine-grained, dynamic authorization capability grants or denies access based on a combination of pre-determined, externalized policies that can draw on the user's role, location, device, and a range of third-party contextual signals. PBAC ensures that even if a deepfake manages to bypass initial authentication, it cannot gain unauthorized access to resources for which entitlements have not been assigned.
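A minimal sketch of externalized, deny-by-default policy evaluation might look like the following. The policy attributes (role names, device IDs, business hours) are hypothetical and not any specific product's schema:

```python
from datetime import time

# One externalized policy: access is granted only when every condition holds.
# All attribute names and values here are illustrative assumptions.
POLICY = {
    "resource": "wire_transfer",
    "allowed_roles": {"treasury_ops"},
    "approved_devices": {"laptop-4431"},
    "business_hours": (time(8, 0), time(18, 0)),
}

def authorize(role: str, device: str, when: time, resource: str) -> bool:
    """Deny by default; passing authentication never implies entitlement."""
    if resource != POLICY["resource"]:
        return False
    start, end = POLICY["business_hours"]
    return (
        role in POLICY["allowed_roles"]
        and device in POLICY["approved_devices"]
        and start <= when <= end
    )

# A deepfaked "executive" who fooled initial authentication, but requests
# access from an unapproved device outside business hours, is still denied:
print(authorize("executive", "unknown-phone", time(23, 30), "wire_transfer"))   # False
print(authorize("treasury_ops", "laptop-4431", time(10, 15), "wire_transfer"))  # True
```

Because the decision logic lives outside the application, policies like this can be updated centrally as risk signals evolve, without redeploying the systems they protect.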


For example, a fraudulent attempt to access a client’s investment portfolio after hours using a deepfake might trigger an alert if the request deviates from established access patterns or exceeds pre-defined transaction limits. By combining PBAC with identity verification and adaptive authentication through no-code/low-code access journey orchestration, financial service providers can create a robust defense that mitigates the risk of deepfake fraud.

Educating and Empowering Stakeholders

Technology alone cannot combat deepfake fraud. Financial service providers must also invest in education and awareness programs for employees and customers. Cybercrime often exploits human psychology, using urgency or authority to bypass critical thinking and gain access to sensitive information. Training employees to recognize red flags, such as unusual requests or inconsistencies in communication, is essential.


Customers, too, play a role in prevention. Financial service institutions should educate them about the risks of deepfakes, emphasizing the importance of verifying requests and safeguarding personal information. Simple measures, like confirming wire transfers through secondary channels, can prevent catastrophic losses.

IAM in Action

Ping Identity’s solutions are at the forefront of the fight against deepfake fraud. By integrating identity verification, fraud detection, decentralized identity, and verified credentials, advanced IAM enables financial service providers to stay one step ahead of cybercriminals.


A leading retail bank recently deployed liveness detection capabilities to strengthen its onboarding process. Within months, the bank reported a 40% reduction in fraudulent account openings. Similarly, adaptive authentication capability helped a European financial institution block a high-value transfer initiated using a deepfake video, saving millions.


Verified credentials enhance identity verification by allowing institutions to issue cryptographically secure, tamper-proof credentials to users. These credentials can be presented by customers during transactions, significantly reducing the risk of manipulation. For example, a customer applying for a mortgage can provide a verified credential proving their income and employment status, eliminating the opportunity for fraudsters to use fabricated documents.


Finally, modern PBAC capabilities dynamically adapt to changing context, offering an additional layer of security against deepfake fraud. By externalizing authorization logic, institutions can ensure that access decisions reflect real-time risk assessments, making it significantly harder for deepfakes to exploit system vulnerabilities.

Priorities for Financial Service Providers

To effectively combat deepfake fraud, financial service providers should embrace IAM as a strategic asset, focusing on these three areas to create a layered defense posture against deepfake attacks:


1. Modernize Identity Verification

A strong foundation for preventing deepfake fraud depends on establishing sophisticated identity verification capabilities. Technologies such as liveness detection and biometric authentication can differentiate between genuine interactions and fraudulent attempts. Verified credentials take this further by securely binding an individual’s identity to cryptographic digital credentials stored in tamper-proof digital wallets. These tools not only verify identity during onboarding but also ensure that every subsequent interaction is anchored in authenticity. For example, leveraging verified credentials can mitigate risks during high-stakes transactions or loan applications by validating both the individual and the integrity of the presented data.


2. Turbo-Charge Dynamic Authorization

PBAC offers dynamic and fine-grained authorization, ensuring that access is granted based on contextual factors such as user role, device, location, and transaction type. By externalizing authorization logic, PBAC makes it far harder for deepfakes to exploit systemic vulnerabilities. For instance, a deepfake impersonating an executive would still be denied access if the request originated from an unapproved device or outside designated hours. PBAC integrates seamlessly with verified credentials, enabling institutions to enforce stringent access policies without adding unnecessary friction to legitimate user interactions.


3. Embed Verified Credentials at the Center of Customer Journeys

Verified credentials empower customers to securely share specific attributes, such as proof of identity or employment, without exposing unnecessary personal data. This approach significantly reduces opportunities for fraudsters to exploit identity blind spots and weak credentials through deepfake attacks. During onboarding, financial service providers can rely on these credentials to streamline verification while maintaining high-security standards. Integrating verified credentials with adaptive fraud detection further enhances protection, allowing real-time decision-making based on evolving risk signals.

The Future of Deepfake Defense in Financial Services

As AI technologies continue to evolve, so too will the methods used by fraudsters. Financial service providers must adopt a mindset of continuous improvement, leveraging the latest advancements in identity and fraud prevention to stay ahead of adversaries. Deepfake fraud is not a fleeting challenge. It is a persistent threat that requires vigilance, innovation, and collaboration.


With the right strategies, tools, and risk management plan, financial service providers can turn the tide, protecting their customers and securing their future in an increasingly digital world. The comprehensive capabilities of modern IAM provide the foundation for this layered defense against adversarial AI and deepfake attacks.


Learn more about how leading financial services firms are future-proofing their IAM strategies.
