Deepfake Scam: How Criminals Stole $25 Million from Arup

What Happened?
In January 2024, Arup, the globally renowned UK engineering firm with over 18,000 employees, lost US$25.6 million to a deepfake scam when an employee in its Hong Kong office was deceived into wiring the money to criminals. The fraud began with a phishing email impersonating the company’s CFO and requesting urgent, confidential transactions. Despite initial skepticism, the employee joined a video conference in which AI-generated deepfakes of senior executives, including the CFO, appeared authentic, with realistic facial expressions and cloned voices. Trusting the synthetic media and the urgent narrative, the employee authorized 15 separate wire transfers of the full sum to multiple Hong Kong bank accounts.
The fraud was discovered only after the employee followed up directly with Arup’s London headquarters, revealing that every executive on the video call had been a fabricated deepfake. The case starkly illustrates how deepfake video-conference fraud and AI-powered social engineering have evolved to defeat security practices that rely on visual and audio identity verification. Arup’s CIO confirmed that fake videos and voices were used, emphasizing that this was “technology-enhanced social engineering” rather than a system breach. Hong Kong police noted that the attackers used deepfakes to bypass facial-recognition checks multiple times during the scam, underscoring the sophistication of modern CEO- and CFO-impersonation fraud.
"To get financial services right, we have to get identity right. It is vital to building trust in the system."
How Did It Happen?
The attack unfolded in several phases. First, a convincing phishing email masquerading as Arup’s UK CFO demanded quick action on a “secret transaction”, a classic business email compromise (BEC) lure. To reinforce legitimacy, the fraudsters then invited the employee to a video call populated by multiple deepfake executives. The AI-generated faces and cloned voices had been trained on real videos and speeches of the executives, enabling a real-time deepfake video call that mimicked their natural speech patterns and expressions via advanced generative AI models.
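The phishing phase relied on the recipient trusting what the message claimed about itself. A minimal, purely illustrative sketch of the kind of header checks a mail gateway or filter applies is shown below; it only inspects headers already present in a raw message (a real gateway validates SPF/DKIM/DMARC cryptographically against DNS), and the function name, flag strings, and sample message are invented for this example.

```python
from email import message_from_string
from email.utils import parseaddr

def bec_red_flags(raw_message: str) -> list[str]:
    """Flag common business-email-compromise signals in a raw RFC 5322 message.

    Illustrative only: real gateways verify SPF/DKIM/DMARC against DNS;
    here we merely inspect headers the upstream server has recorded.
    """
    msg = message_from_string(raw_message)
    flags = []

    # Red flag 1: Reply-To routes responses to a different domain than From.
    _, from_addr = parseaddr(msg.get("From", ""))
    _, reply_addr = parseaddr(msg.get("Reply-To", ""))
    from_domain = from_addr.rsplit("@", 1)[-1].lower()
    reply_domain = reply_addr.rsplit("@", 1)[-1].lower() if reply_addr else from_domain
    if reply_domain != from_domain:
        flags.append("reply-to-domain-mismatch")

    # Red flags 2-3: the receiving server did not record passing SPF/DKIM.
    auth = msg.get("Authentication-Results", "").lower()
    if "spf=pass" not in auth:
        flags.append("spf-not-passed")
    if "dkim=pass" not in auth:
        flags.append("dkim-not-passed")
    return flags

# Hypothetical spoofed "urgent CFO" message for demonstration.
spoofed = (
    "From: CFO <cfo@arup.com>\r\n"
    "Reply-To: cfo@arup-payments.example\r\n"
    "Authentication-Results: mx.example.com; spf=fail; dkim=none\r\n"
    "Subject: Urgent confidential transaction\r\n"
    "\r\n"
    "Wire the funds today.\r\n"
)
print(bec_red_flags(spoofed))
# ['reply-to-domain-mismatch', 'spf-not-passed', 'dkim-not-passed']
```

Any one of these flags would be reason to hold the message for review rather than deliver it to a finance employee's inbox.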
Psychologically, the attackers exploited well-known human biases: authority, social proof, and urgency, pressuring the employee into authorizing massive wire transfers. The video-call format created an illusion of authenticity that neither human senses nor conventional biometric systems could pierce. After the transfers, the fraudsters moved the funds quickly, complicating recovery. The attack exposed critical gaps: no deepfake detection tools, no voice-deepfake detection capability, no out-of-band verification procedure for urgent wire transfers, and no comprehensive employee training against synthetic media fraud.
"To prevent such threats, the agencies recommend that organizations implement deepfake detection tools with real-time verification capabilities and passive detection techniques."
How Could It Have Been Prevented?
Sequenxa would have disrupted this attack through integrated, AI-driven enterprise deepfake defense layers. First, its real-time deepfake detection software uses multimodal video analysis to detect synthetic facial anomalies and GAN artifacts with 95-98% accuracy in live video streams, immediately flagging the fake CFO and other participants.
Second, Sequenxa’s voice-cloning scam prevention leverages voice liveness detection, identifying synthetic audio in under two seconds by analyzing speech rhythm, micro-tremors, and prosodic cues that cloned voices typically lack. Combined with behavioral biometrics, the platform detects irregular transaction patterns and deviations in user behavior, helping block unauthorized wire transfers.
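Sequenxa's actual audio analysis is proprietary, but the underlying intuition, that live speech shows natural frame-to-frame energy variation while an unnaturally steady signal is suspicious, can be sketched with a deliberately naive stdlib toy. The function names, the 20 ms frame size, and the threshold are all invented for illustration; production liveness systems use learned spectral and temporal models, not a single variance statistic.

```python
import math

def short_term_energy_variance(samples, frame=160):
    """Variance of per-frame RMS energy (frame = 20 ms at 8 kHz).

    A crude proxy for the natural amplitude micro-variation of live
    speech. Toy heuristic only, not a real liveness detector.
    """
    energies = []
    for i in range(0, len(samples) - frame + 1, frame):
        chunk = samples[i:i + frame]
        energies.append(math.sqrt(sum(s * s for s in chunk) / frame))
    mean = sum(energies) / len(energies)
    return sum((e - mean) ** 2 for e in energies) / len(energies)

def looks_synthetic(samples, threshold=1e-3):
    # Suspiciously flat energy profile -> possible synthetic/replayed audio.
    return short_term_energy_variance(samples) < threshold

# One second of a perfectly steady 220 Hz tone at 8 kHz: unnaturally constant.
flat = [math.sin(2 * math.pi * 220 * t / 8000) for t in range(8000)]
# The same tone with slow, speech-like amplitude modulation.
lively = [(0.5 + 0.5 * math.sin(2 * math.pi * 3 * t / 8000)) * s
          for t, s in enumerate(flat)]

print(looks_synthetic(flat), looks_synthetic(lively))  # True False
```

The point of the sketch is the architecture, not the heuristic: a liveness check runs on the media stream itself, independently of what the caller looks or sounds like to a human.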
Third, Sequenxa adds multi-factor authentication to fraud-sensitive workflows, requiring independent verification beyond the video call itself: confirmation to the CFO’s known email address, SMS codes, and challenge-response checks that an impersonator cannot satisfy. The platform also strengthens business email compromise prevention by enforcing DMARC, SPF, and DKIM, stopping spoofed phishing emails at the gateway.
Lastly, consistent deepfake-awareness and cybersecurity-awareness training empowers staff to spot deepfake scams, social engineering tactics, and suspicious wire transfer requests, complementing the technical defenses.
Together, these capabilities provide a comprehensive defense against the growing threat of AI deepfake fraud and related synthetic media fraud targeting financial services and multinational companies.
"Trust is built from the inside out. It's not a performance. It's a resonance. Do your actions match your truth?"
Lessons
This high-profile incident serves as a sobering case study that deepfakes and AI-synthesized media are no longer fringe threats but active and sophisticated tools for criminal fraud, especially in wire transfer scams. The key lessons include:
Traditional verification methods relying on visual or auditory confirmation are obsolete. Advanced liveness detection technology and deepfake detection tools are essential.
Multi-layer authentication, including biometrics, independent communications, and behavioral analysis, is critical for preventing unauthorized transfers.
Employee education through deepfake awareness training and simulations significantly improves vigilance against social engineering.
The rapid rise in generative-AI fraud demands that businesses urgently adopt cutting-edge, integrated solutions like Sequenxa’s.
Despite the available technology, many enterprises remain vulnerable because of gaps in implementation and process rigor.
The Arup deepfake scam underscores that enterprises handling financial transactions must proactively deploy state-of-the-art deepfake detection software, wire transfer fraud-prevention technology, and employee training to safeguard assets and reputations in the era of AI-powered social engineering attacks.
This case is a powerful reminder that while criminals exploit advances in AI for deepfake fraud, organizations equipped with modern synthetic media detection and multi-factor processes will reduce risk and maintain trust in digital business communications.
"Zero Trust is a concept that centers on the belief that trust is a vulnerability, and security must be designed with the strategy, 'Never trust, always verify.'"

