A finance worker at engineering firm Arup joined what appeared to be a routine video call with the company’s CFO and several colleagues in January 2024. Over the next few minutes, she authorized 15 wire transfers totaling $25 million to accounts in Hong Kong.
Every person on that call was fake. The CFO, the colleagues, the entire meeting - all generated by AI in real time.
She discovered the fraud only after contacting headquarters through official channels. By then, the money was gone.
This isn’t an outlier. It’s the new baseline.
The Numbers
Deepfake fraud has hit industrial scale:
- An estimated 8 million deepfake files now circulate online, a figure growing roughly 900% annually
- A new deepfake attack occurs every 5 minutes
- Average business loss per incident: $500,000
- Human detection accuracy for high-quality deepfakes: 24.5% - worse than a coin flip
- Voice cloning now requires just 3 seconds of audio
- Cost to produce a convincing deepfake: $1
Deloitte projects $40 billion in generative AI fraud by 2027. We’re on track.
How the Attacks Work
The playbook has evolved beyond simple face-swaps. Modern deepfake fraud operations:
1. Harvest public content. Criminals scrape corporate presentations, earnings calls, LinkedIn videos, and conference appearances. Any public video or audio of an executive becomes training data.
2. Reconstruct digital humans. Using the harvested footage, attackers build synthetic versions of real executives - face, voice, mannerisms, and speech patterns.
3. Deploy multiple synthetic participants. The Arup attack didn’t feature just one deepfake. It created an entire meeting room of fabricated executives, complete with background interactions that made the scene appear normal.
4. Exploit trust hierarchies. Requests come from apparent CFOs and CEOs - people employees are conditioned to obey without question.
A similar attack in Singapore in 2025 extracted $499,000 from a finance director through the same technique: a video call where every participant was synthetic.
The Job Infiltration Problem
Deepfakes aren’t just stealing money. They’re infiltrating companies.
The FBI has issued multiple warnings about North Korean operatives using deepfake technology to pass job interviews at American companies. Amazon alone blocked more than 1,800 suspected DPRK applicants between April 2024 and December 2025, with attempts rising 27% quarter over quarter.
In January 2026, Jason Rebholz, CEO of AI security firm Evoke Security, nearly hired a deepfake. During the video interview, he noticed soft facial edges, body parts that briefly disappeared from the frame, and unnatural dimples. The synthetic candidate had even researched Rebholz’s public statements and quoted them back during the interview.
These operatives aren’t after one-time theft. They want persistent access - to infrastructure, to source code, to internal communications.
Why Detection Is Failing
Humans correctly identify high-quality deepfakes only 24.5% of the time. That’s dramatically worse than random chance.
MIT researcher Simon Mylius notes: “It’s become very accessible to a point where there is really effectively no barrier to entry.”
Automated detection tools don’t fare much better. In laboratory conditions they achieve high accuracy, but in real-world conditions - variable lighting, compression artifacts, network latency - accuracy can drop by 45-50%.
The fundamental problem: generation technology is advancing faster than detection. Every improvement in AI models makes deepfakes more convincing. Detection is playing perpetual catch-up.
The Regulatory Response
FinCEN issued an alert in November 2024 (FIN-2024-Alert004) warning financial institutions about deepfake fraud schemes. The alert covers depository institutions, casinos, money services businesses, insurance companies, mortgage brokers, securities firms, and precious metals dealers.
Financial institutions must now report suspected deepfake-related suspicious activity under the Bank Secrecy Act. FinCEN reports “increased suspicious activity reporting” involving synthetic media.
But reporting fraud after it happens doesn’t prevent it.
What Actually Works
Corporate countermeasures that reduce deepfake fraud risk (a sketch showing how the first few might combine in a payments workflow follows this list):
Mandatory callback protocols. Never authorize transfers based solely on video calls. Call back using numbers from official directories - not numbers provided during the suspicious call.
Code word authentication. Establish verbal authentication codes that change regularly and aren’t shared digitally.
Multi-channel verification. Require confirmation through multiple separate channels for high-value transactions.
Time delays. Institute waiting periods before executing large transfers, allowing verification processes to catch fraud.
Camera and background requirements. Require cameras on with no virtual backgrounds during sensitive calls - though this only slightly raises the difficulty for attackers.
In-person verification for new hires. The FBI recommends requiring physical presence during onboarding, cross-referencing photos with social media, and verifying employment history directly with institutions.
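To make the callback, multi-channel, and time-delay controls concrete, here is a minimal sketch in Python of a payment-release gate. The threshold, hold period, channel names, and function names are illustrative assumptions for this article, not any real institution’s policy or system.

```python
from dataclasses import dataclass, field
from datetime import datetime, timedelta

# Illustrative policy values only - real thresholds and hold periods would
# come from an organization's own risk policy.
HIGH_VALUE_THRESHOLD = 50_000          # USD amount that triggers the extra controls
MANDATORY_HOLD = timedelta(hours=24)   # waiting period before funds can move

# Channels that count toward verification. A callback must use a number from
# the official directory, never one supplied during the suspicious call, and
# the original video call itself is deliberately absent from this set.
RECOGNIZED_CHANNELS = {
    "callback_to_directory_number",
    "secure_internal_messaging",
    "in_person_confirmation",
}

@dataclass
class TransferRequest:
    amount_usd: float
    beneficiary: str
    requested_at: datetime
    confirmations: set = field(default_factory=set)  # channels that confirmed the request

def record_confirmation(request: TransferRequest, channel: str) -> None:
    """Record a confirmation, ignoring channels the policy does not recognize."""
    if channel in RECOGNIZED_CHANNELS:
        request.confirmations.add(channel)

def may_release(request: TransferRequest, now: datetime, min_channels: int = 2) -> bool:
    """Release a high-value transfer only after multi-channel confirmation and a hold."""
    if request.amount_usd < HIGH_VALUE_THRESHOLD:
        return True  # low-value transfers follow the normal approval flow
    enough_channels = len(request.confirmations) >= min_channels
    hold_elapsed = now - request.requested_at >= MANDATORY_HOLD
    return enough_channels and hold_elapsed

if __name__ == "__main__":
    req = TransferRequest(
        amount_usd=1_500_000,
        beneficiary="New vendor account",
        requested_at=datetime(2024, 1, 15, 9, 0),
    )
    record_confirmation(req, "callback_to_directory_number")
    # One channel, hold not elapsed: blocked.
    print(may_release(req, now=datetime(2024, 1, 15, 10, 0)))   # False
    record_confirmation(req, "secure_internal_messaging")
    # Two independent channels and more than 24 hours elapsed: released.
    print(may_release(req, now=datetime(2024, 1, 16, 10, 0)))   # True
```

The point of the sketch is the shape of the gate, not the specific values: the request itself, however convincing the call looked, never satisfies the check; only independent confirmations plus elapsed time do.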
For individuals, security experts recommend establishing family “safe words” for emergency situations, never authorizing money transfers based on video calls alone, and limiting online voice and video presence.
The Uncomfortable Reality
Detection technology cannot keep pace with generation technology. By the time deepfake detectors adapt to current methods, generators have already moved on.
The 72% of business leaders who identify AI-enabled fraud as their top operational challenge for 2026 are correct to worry. The response requires fundamental changes to how organizations verify identity and authorize high-stakes decisions.
Trust, but verify through independent channels. That verification can’t happen over the same medium as the request.
The Bottom Line
A convincing deepfake costs $1 to produce and requires only 3 seconds of audio. The average business loss per incident is $500,000. Human detection accuracy is below random chance. These numbers explain why Deloitte projects $40 billion in generative AI fraud losses by 2027 - and why organizations need to assume that any video call could be synthetic until proven otherwise through independent verification.