How threat actors have shifted from cracking perimeters to exploiting people — and what security leaders in banking, insurance, and fintech need to do about it.

The Attack Surface Has Shifted

Here’s what most financial institutions are still getting wrong: they’re spending millions hardening their perimeters while attackers walk right through the front door — wearing a borrowed face.

Social engineering is no longer the low-tech "Nigerian prince" scam of a decade ago. Today's threat landscape in financial services is defined by AI-synthesized voices impersonating CFOs, deepfake video calls that fool bank executives into authorizing nine-figure wire transfers, and pretexting campaigns so surgically tailored they defeat every piece of security awareness training your employees have ever sat through.

According to the Verizon 2024 Data Breach Investigations Report (DBIR), 87% of breaches in the financial sector involved a human element — whether through phishing, pretexting, misuse of privileges, or error. That number has not materially declined in five years. The perimeter is not the problem. People are.

This post breaks down exactly what these attacks look like today, who they’re targeting inside financial institutions, and what your security program needs to do differently to actually test and harden the human layer.

Five Attack Patterns Dominating Financial Services in 2025

Threat actors targeting banks, credit unions, insurance carriers, and fintechs have converged on a set of techniques that share one common thread: they require no vulnerability in your technology stack. They require only a convincing story and an employee who believes it.

1. AI-Powered Deepfake Impersonation

This is the threat keeping CISOs up at night — and for good reason. Deepfake technology has crossed from science fiction into operational weaponry. In early 2024, a finance employee at a multinational firm was tricked into transferring HK$200 million (approximately $25.6 million USD) after attending a video conference call with what he believed were his company’s CFO and other colleagues. Every person on that call was a deepfake.

This is not an isolated incident. Onfido’s 2024 Identity Fraud Report found that deepfake fraud attempts targeting financial firms increased by 3,000% between 2022 and 2024. Deloitte’s Center for Financial Services projects that AI-generated fraud — including deepfakes — could cost the financial sector up to $25 billion annually by 2027.

The attack vector is straightforward: attackers harvest publicly available video and audio of executives from earnings calls, conference recordings, LinkedIn, and media appearances. Readily available AI tools — some freely accessible — can generate convincing real-time voice and video clones. Targets inside the firm (typically treasury staff, finance operations, or wire transfer teams) receive a call or video conference from a “senior executive” with a plausible, time-sensitive authorization request.

Key threat: The average employee has no training or tool to detect a real-time AI voice clone. Traditional security awareness programs don’t cover this scenario. Most firms have never tested it.

2. Spear-Phishing and Business Email Compromise (BEC)

BEC remains the highest-volume social engineering attack against financial institutions. The FBI’s 2023 Internet Crime Report documented $2.9 billion in BEC losses across all sectors, with financial services among the most targeted industries.

What has changed is the quality of the attacks. Modern BEC campaigns don’t rely on obvious grammatical errors or generic pretexts. They are built from open-source intelligence: LinkedIn profiles of finance staff, SEC filings, earnings call transcripts, press releases announcing leadership changes or M&A activity, and even regulatory correspondence obtained through phishing earlier in the attack chain.

The Verizon DBIR 2024 found that phishing and pretexting together account for roughly 73% of social engineering incidents. In financial services specifically, attackers frequently impersonate regulators (FDIC, OCC, FINRA, SEC), auditors, correspondent banks, and insurance carriers — entities that employees are conditioned to treat with urgency and compliance.
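One practical control against regulator and counterparty impersonation is to treat email authentication results and sender domains as hard signals before anyone acts on an "urgent" request. The sketch below is illustrative only: the domain allow-list is a hypothetical stand-in for a vetted internal registry, and in production the `Authentication-Results` header is stamped by your own receiving gateway, not trusted from the wire.

```python
from email import message_from_string
from email.utils import parseaddr

# Illustrative allow-list; real regulator domains should come from a
# vetted internal registry, not a hard-coded set.
KNOWN_REGULATOR_DOMAINS = {"fdic.gov", "occ.treas.gov", "finra.org", "sec.gov"}

def flag_suspicious_regulator_mail(raw_message: str) -> list[str]:
    """Return reasons a 'regulator' email should be escalated, not actioned."""
    msg = message_from_string(raw_message)
    reasons = []

    _, addr = parseaddr(msg.get("From", ""))
    domain = addr.rsplit("@", 1)[-1].lower() if "@" in addr else ""

    # Lookalike domains (occ-review.net, fdic-secure.com) fail this check.
    if domain not in KNOWN_REGULATOR_DOMAINS:
        reasons.append(f"sender domain {domain!r} is not a known regulator domain")

    # Authentication results are stamped by the receiving mail gateway.
    auth = (msg.get("Authentication-Results") or "").lower()
    if "dmarc=pass" not in auth:
        reasons.append("message did not pass DMARC")

    return reasons
```

A lookalike domain with a failed DMARC check produces two escalation reasons; a message from a known domain that passes DMARC produces none. The point of the design is that urgency in the body never overrides a failed identity check.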

3. Vishing and SIM-Swapping

Voice phishing (vishing) attacks on financial institutions escalated sharply in 2024, driven by the wide availability of AI voice synthesis tools. The pattern mirrors what the cybersecurity community watched unfold with the MGM Resorts attack in 2023 — where attackers bypassed all technical controls by calling the IT help desk and social engineering their way to credential resets.

The same playbook is being actively applied to banks and insurers. Attackers identify employees through LinkedIn and data broker sites, then call internal help desks or branch staff impersonating other employees, IT vendors, or customers. Voice synthesis tools mean the attacker’s voice can be dynamically modulated to sound convincing.

SIM-swapping — socially engineering mobile carriers to transfer a victim’s number to an attacker-controlled SIM — has seen a 160% year-over-year increase in reported cases targeting financial account takeover (2024 data, Lookout Mobile Security Threat Report).

4. Pretexting and Insider Manipulation

Pretexting — the construction of a fabricated scenario to manipulate a target — now represents the majority of social engineering attacks in the Verizon DBIR’s financial services data. The sophistication has grown significantly: multi-stage, multi-actor campaigns that build rapport over weeks before making a request are becoming standard.

Common pretexts in financial services include: impersonating IT vendors with legitimate access to core banking systems; posing as auditors conducting routine reviews; impersonating compliance or legal counsel with urgent regulatory inquiries; and targeting HR or payroll staff with direct deposit redirect requests (a subset of BEC often called “payroll diversion”).

The SANS 2024 Security Awareness Report found that help desk staff and IT support personnel remain the most targeted internal roles for pretexting, because they are trained to be helpful and to resolve issues quickly — instincts that attackers systematically exploit.

5. Supply Chain and Third-Party Social Engineering

Financial institutions typically have robust controls on their own employees but are only beginning to reckon with the social engineering risk that flows through their vendor ecosystem. Fintech partners, insurance brokers, law firms, accounting firms, and managed IT providers all have varying levels of security maturity — and many have legitimate email access, portals, or APIs connected to core banking infrastructure.

Attackers have learned that it’s often easier to compromise a lightly defended vendor and then impersonate them to the financial institution than to attack the bank directly. Once a vendor’s email is compromised or an account is created in a vendor’s identity system, the attacker has a credible identity to operate from inside the trust perimeter.

IBM's Cost of a Data Breach 2024 report put the global average breach cost at $4.88 million, the highest average it has recorded, and found that breaches involving third parties were consistently among the costliest and slowest to identify and contain.

Why Annual Social Engineering Tests Miss the Point

Most financial institutions that conduct any social engineering testing do it once a year: a phishing simulation campaign, maybe a vishing call or two, a physical badge tailgate attempt at headquarters. The report goes to the board. The CISO presents a click rate. Training gets assigned to employees who clicked.

This approach would have been inadequate five years ago. In 2025, it’s operationally negligent.

Here’s the problem: the threat landscape mutates continuously. The tactics we described above — AI-generated voice clones, deepfake video calls, hyper-personalized BEC built from real-time OSINT — didn’t exist in any meaningful form when most security awareness programs were designed. A point-in-time assessment run in Q1 tells you nothing about your exposure to a deepfake attack in Q3.

Gartner calls this emerging discipline Continuous Threat Exposure Management (CTEM): a recognition that security posture must be validated continuously, not on a calendar schedule. Sprocket Security has been operating this model since 2018.

The Gap: Your organization changes daily — new hires, new vendors, new org charts posted to LinkedIn. Attackers update their targeting in real time. A once-a-year social engineering test cannot keep pace with that.

What Financial Services Security Leadership Should Be Testing

Closing the social engineering gap in financial services requires moving beyond phishing simulations. Here is what a mature human-layer security testing program looks like:

  • AI-Voice Impersonation Testing: Deepfake and vishing simulation

Can your treasury staff, branch managers, or help desk recognize a cloned executive voice or a spoofed video call? If you’ve never tested this, you don’t know. Testing requires scenarios that simulate real-world attack vectors, not scripted role plays.

  • Regulator and Auditor Impersonation: Testing the actual pretexts attackers use against your institution

Does your compliance team verify identity before sharing sensitive information with someone claiming to be an OCC examiner? Does your IT help desk require callback verification before resetting credentials? These controls only work if they’ve been tested.

  • Third-Party and Supply Chain Social Engineering: Not just employees — your vendors and partners too

Map which vendors have privileged access to your systems, then assess whether those vendors’ security postures create a viable attack path into your environment. This is a blind spot in nearly every financial institution we assess.

  • Long-Form Pretexting Campaigns: Multi-stage, multi-week scenarios that mirror real adversaries

Real attackers don’t execute one phone call. They build rapport, establish cover identities, and operate over extended timeframes. Your testing program should reflect this reality.

  • Continuous Baseline Monitoring: Tracking controls degradation between annual assessments

Personnel changes, reorgs, new vendors, and new public OSINT about your firm create new attack surface constantly. Continuous attack surface management (ASM) feeds directly into when and how social engineering testing should be triggered.
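As a sketch of how ASM signals might drive test scheduling, the rules below map attack-surface events to the social engineering test they should trigger. The event categories, test names, and SLA windows are illustrative assumptions, not a description of any specific platform:

```python
from datetime import date, timedelta

# Illustrative mapping: attack-surface event -> (test to run, days until due).
# Categories and SLAs here are assumptions for the sketch.
TRIGGER_RULES = {
    "new_finance_hire":      ("BEC / wire-fraud pretext simulation", 14),
    "new_exec_media":        ("voice-clone vishing simulation", 30),
    "new_privileged_vendor": ("third-party impersonation test", 30),
}

def schedule_tests(events, today=None):
    """Turn attack-surface changes into dated social engineering test tasks."""
    today = today or date.today()
    tasks = []
    for event_type, subject in events:
        rule = TRIGGER_RULES.get(event_type)
        if rule is None:
            continue  # no rule: log for review rather than silently testing
        test_name, sla_days = rule
        tasks.append({
            "test": test_name,
            "subject": subject,
            "due": today + timedelta(days=sla_days),
        })
    return tasks
```

For example, a new finance hire surfaced by monitoring on January 1 would generate a BEC simulation task due mid-month, instead of waiting for the next annual cycle.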

Technology-Powered Testing, Human-Verified Results

Sprocket Security’s social engineering assessments go beyond generic phishing simulations. Our team of US-based pentesters designs and executes bespoke campaigns modeled on the actual tactics threat actors use against financial institutions — including AI-assisted pretexting, vishing, deepfake simulation, and physical intrusion testing.

What makes our approach different is the integration of social engineering into a continuous security model. Our platform continuously monitors your attack surface — new assets, new employee data, new OSINT exposure — and uses that intelligence to inform when and how testing should be triggered. You're not waiting the better part of a year to find out whether a new VP of Finance's LinkedIn profile just became the foundation for a wire fraud attack.

  • Continuous attack surface monitoring feeds social engineering targeting
  • US-based expert pentesters design bespoke scenarios, not template phishes
  • Unlimited retests included: validate that control improvements actually work

Findings from social engineering engagements flow directly into the Sprocket platform alongside technical pentest results — giving security leadership a unified view of both technical and human-layer exposure. Every finding includes an attack narrative that explains exactly what happened, what was exploited, and what needs to change.

How Deepfake Attacks Work in Financial Services

The following video provides a detailed breakdown of how AI-generated deepfakes are being weaponized against financial institutions, including a walkthrough of the $25.6M Hong Kong wire fraud case and recommendations for detection and response.

The Bottom Line for Financial Services Security Leaders

The social engineering threat against financial institutions in 2025 is not a training problem. It’s a testing problem. Your employees aren’t failing because they’re careless — they’re failing because the attacks they’re facing are sophisticated, personalized, AI-augmented, and nothing like the scenarios they were trained on.

The answer is not another phishing simulation. The answer is continuous, expert-driven testing of the human layer — using the same tactics your adversaries use, on the same timeline they operate on.

If your last social engineering assessment was more than six months ago, your security posture does not reflect the threat environment you’re operating in today.

Sources

  1. Verizon 2024 Data Breach Investigations Report (DBIR) — https://www.verizon.com/business/resources/reports/dbir/
  2. IBM Cost of a Data Breach Report 2024 — https://www.ibm.com/reports/data-breach
  3. Onfido Identity Fraud Report 2024 — https://onfido.com/landing/identity-fraud-report/
  4. Deloitte Center for Financial Services, “Gaming the System: The Growing Threat of Deepfake Fraud” (2024)
  5. FBI Internet Crime Complaint Center (IC3) 2023 Annual Report — https://www.ic3.gov/Media/PDF/AnnualReport/2023_IC3Report.pdf
  6. Lookout Mobile Security Threat Report 2024 — https://www.lookout.com/threat-intelligence
  7. SANS 2024 Security Awareness Report — https://www.sans.org/security-awareness-training/resources/reports/
  8. Gartner, “Continuous Threat Exposure Management (CTEM),” 2024
  9. South China Morning Post / Hong Kong Police Force — Deepfake Video Conference Wire Fraud, February 2024