American Express Global Business Travel, a major player in corporate travel management, has reportedly been targeted by sophisticated deepfake attacks.
It’s an attack that signals a significant escalation in the tactics used by fraudsters, moving beyond traditional phishing into the realm of highly convincing AI-generated impersonations.
According to The Company Dime, cybercriminals have been leveraging deepfake technology to create fake videos and audio recordings that mimic real Amex GBT executives. These realistic fakes are then used to deceive employees, travel agents, and even corporate clients into divulging sensitive information or authorizing fraudulent transactions.
Attacks like this one on Amex GBT should remind IT and security teams that the uncanny accuracy of deepfakes makes them exceptionally difficult to detect, posing a challenge even for security-aware employees. It’s a situation that demands immediate attention and a hard look at security posture across industries handling private data.
Unpacking the Amex GBT Attack Scenario
Reports suggest the Amex GBT deepfake attack was a focused campaign targeting the ecosystem surrounding corporate travel management. Fraudsters allegedly utilized deepfake technology to impersonate high-level executives, aiming to exploit the inherent trust in leadership figures.
The targets appear multifaceted, including internal employees who might have system access or payment authorization capabilities, affiliated travel agents managing bookings, and corporate clients themselves.
By convincingly mimicking an executive’s voice or appearance, attackers attempt to bypass routine verification procedures.
The objectives are clear: gain unauthorized access to valuable data, including payment details, travel itineraries, and travelers' personal information. The deepfakes were also reportedly used to try to authorize fraudulent transactions, potentially diverting funds or making unauthorized bookings, a direct financial threat alongside the risk of a major data breach.
The Broader Risk: Deepfakes Poised for Growth
Amex GBT’s deepfake attack isn’t isolated. It’s part of a rapidly emerging and dangerous trend. As deepfake technology matures and becomes easier to deploy, security experts anticipate a significant rise in its use for fraud and social engineering across all sectors.
Successful deepfake attacks can lead to direct financial losses, exposure of highly sensitive personal and corporate data, erosion of trust between companies and their clients, and severe reputational damage. Traditional security measures often fall short against such convincing impersonations.
Voice recognition systems can be fooled by cloned audio, and even keen-eyed employees can be deceived by realistic deepfake videos, especially when combined with social engineering tactics that create urgency.
Some industry watchers estimate that global deepfake-related identity fraud attempts will reach 50,000 cases in 2024-2025. While specific numbers may vary, the upward trajectory is clear, demanding immediate strategic adjustments from organizations.
Fortifying Defenses Against AI Impersonation
Given the sophistication of deepfake attacks, organizations must adopt a multi-layered defense strategy that goes beyond conventional methods.
- Enhanced Verification Protocols: Implement mandatory multi-channel verification processes for sensitive requests, especially financial transactions or data access requests. Never rely solely on a video call or voice message. Instead, confirm via a separate, secure, and pre-established channel.
- Advanced Technological Safeguards: Investigate and deploy advanced security technologies where appropriate, including biometric authentication with liveness detection (to distinguish real users from digital manipulations) and AI-driven anomaly detection systems that flag unusual communication patterns.
- Modern Security Awareness Training: Fostering a culture of awareness is crucial. Employees at every level need targeted education on the specific risks of deepfakes. Security awareness training needs to cover how to spot potential signs of video or audio manipulations, the importance of adhering to verification protocols, and procedures for reporting suspicious activity.
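The first safeguard above, multi-channel verification, boils down to a simple rule: a sensitive request is never approved on the channel it arrived on. A minimal sketch of that rule is below; it is purely illustrative, and every name in it (`SensitiveRequest`, `is_authorized`, the approved-channels set) is hypothetical rather than part of any real system mentioned in this article.

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical sketch: a sensitive request is approved only when it is
# confirmed on a separate, pre-established channel -- never on the
# channel it arrived on, since that channel may itself be a deepfake.
APPROVED_CONFIRMATION_CHANNELS = {
    "callback_known_number",  # call back a number on file, not one provided in the request
    "in_person",
    "secure_portal",
}

@dataclass
class SensitiveRequest:
    requester: str
    action: str                               # e.g. "wire_transfer", "data_export"
    arrival_channel: str                      # channel the request came in on
    confirmation_channel: Optional[str] = None  # out-of-band channel used to confirm

def is_authorized(req: SensitiveRequest) -> bool:
    """Approve only requests confirmed on a distinct, pre-approved channel."""
    if req.confirmation_channel is None:
        return False  # no out-of-band confirmation at all
    if req.confirmation_channel == req.arrival_channel:
        return False  # same channel could carry the same impersonation
    return req.confirmation_channel in APPROVED_CONFIRMATION_CHANNELS

# A convincing "video call from the CEO" alone is never sufficient:
video_only = SensitiveRequest("ceo", "wire_transfer", arrival_channel="video_call")
confirmed = SensitiveRequest("ceo", "wire_transfer", arrival_channel="video_call",
                             confirmation_channel="callback_known_number")
```

The key design choice is that the confirmation channel must be pre-established (a number already on file, a known portal), so an attacker cannot simply supply a second channel they also control.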
Integrating these safeguards is essential for comprehensive protection against deepfake threats.
Vigilance in the Age of Deepfakes
While deepfake technology presents daunting challenges, incidents like the one targeting American Express Global Business Travel reinforce the power of preparedness. Investing in the right security toolkit is important, but fostering a resilient human firewall through rigorous verification habits and continuous, realistic security awareness training is the most critical component in defending against sophisticated, AI-driven social engineering.