LastPass, a leader in password management, recently shared a real-world example of AI voice phishing, also known as “vishing,” targeting one of its employees.
Deepfake audio technology was used in the attack to impersonate LastPass CEO Karim Toubba. This fabricated audio was delivered to an employee through multiple channels, including the employee’s WhatsApp messages and voicemail.
The incident highlights how accessible AI voice cloning tools have become, enabling convincing executive impersonation attempts that were once much harder to execute. For organizations and their employees, it is also a stark reminder that a familiar voice over a remote channel now warrants fresh caution.
Fortunately, there’s good news: the LastPass employee thwarted the attack, remaining vigilant and reporting the suspicious activity.
The decision to publicly share the details of this AI voice phishing attack serves as a warning about yet another evolving threat and the need for proactive defense strategies.
The AI Voice Phishing Attack Unfolds
As detailed by LastPass intelligence analyst Mike Kosak, the attack was a coordinated social engineering attempt leveraging multiple channels.
“In our case, an employee received a series of calls, texts, and at least one voicemail featuring an audio deepfake from a threat actor impersonating our CEO via WhatsApp,” Kosak explained.
The use of WhatsApp, a platform typically outside LastPass’ standard business communication protocols, immediately signaled that something was amiss. But the deception didn’t stop there. The attacker aimed to pressure and deceive the employee using a fake, replicated version of the CEO’s voice.
The AI-generated voice mimicking Toubba represented a sophisticated attempt to lend credibility and urgency to the scammer’s demands.
Red Flags Raised: Why the Deepfake Audio Scam Failed
LastPass should feel proud that its employee didn’t fall for the attacker’s ruse. Awareness and critical thinking allowed them to recognize the manipulation attempt.
Kosak noted, “As the attempted communication was outside of normal business communication channels and due to the employee’s suspicion regarding the presence of many of the hallmarks of a social engineering attempt (such as forced urgency), our employee rightly ignored the messages.”
The combination of an unusual communication channel (WhatsApp) and the classic pressure tactic of “forced urgency” triggered the employee’s skepticism.
Instead of complying, the employee ignored the messages and reported the incident to LastPass’ security team.
In its blog post, LastPass confirmed the attempted AI voice phishing attack had zero impact on the company. It chose to share the incident to educate other organizations and individuals about the evolving threat landscape.
As Kosak wrote, their key message was “to raise awareness that deepfakes are increasingly being leveraged for executive impersonation fraud campaigns.”
The Evolving Threat: Vishing Gets an AI Upgrade
Security experts view incidents like this as the next evolution of business email compromise (BEC) scams. While BEC often relies on spoofed emails, AI-powered vishing adds direct, personal pressure through voice calls, text messages, and even video.
The goal remains to manipulate an employee into taking unauthorized actions, such as transferring funds, sharing credentials, or granting system access.
AI voice cloning technology has become disturbingly accessible. Malicious actors can train AI models using publicly available audio samples, like speeches or interviews found online, to generate convincing fake audio.
This lowers the barrier for sophisticated fraud, enabling attackers to craft personalized impersonations at scale.
Building Defenses Against AI Voice Phishing
LastPass’ incident demonstrates that technology alone isn’t enough to stop phishing attacks of any kind. The employee’s skepticism, awareness of social engineering tactics, and adherence to reporting protocols were all essential.
It underscores the significance of robust security awareness training, which should cover:
- Understanding Deepfake Tactics: Educating employees that voices (and videos) can be convincingly faked using AI.
- Recognizing Red Flags: Training staff to spot warning signs such as unusual communication channels (e.g., WhatsApp for official business), unexpected requests, pressure tactics, and deviations from standard practices.
- Verification Procedures: Establishing and reinforcing strict protocols for verifying unusual or high-stakes requests, especially those claiming to come from executives or senior leadership. This means confirming through a separate, trusted communication channel rather than simply replying to the message that made the request (see the sketch after this list).
- Reporting Culture: Encouraging a ‘see something, say something’ culture where employees feel safe and obligated to report suspicious activity immediately without fear of blame.
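To make the verification step concrete, here is a minimal sketch of what an out-of-band verification check might look like. The channel names, risk categories, and the `requires_out_of_band_verification` helper are illustrative assumptions, not LastPass tooling or any specific product:

```python
# A minimal sketch of an out-of-band verification policy check.
# All channel names, action categories, and helper names are illustrative
# assumptions, not part of any real LastPass or vendor tooling.
from dataclasses import dataclass

APPROVED_CHANNELS = {"corporate_email", "slack", "teams"}   # assumed policy
HIGH_RISK_ACTIONS = {"wire_transfer", "credential_share", "grant_access"}

@dataclass
class IncomingRequest:
    channel: str            # e.g. "whatsapp", "corporate_email"
    claimed_sender: str     # e.g. "CEO"
    action: str             # e.g. "credential_share"
    urgent_language: bool   # crude proxy for "forced urgency"

def requires_out_of_band_verification(req: IncomingRequest) -> bool:
    """Flag requests that must be confirmed on a separate, trusted channel."""
    off_channel = req.channel not in APPROVED_CHANNELS
    high_stakes = req.action in HIGH_RISK_ACTIONS
    # Any single red flag is enough; the WhatsApp-plus-urgency combination
    # mirrors the pattern seen in the LastPass incident.
    return off_channel or high_stakes or req.urgent_language

# Example: the pattern from the incident described above.
request = IncomingRequest(channel="whatsapp", claimed_sender="CEO",
                          action="credential_share", urgent_language=True)
if requires_out_of_band_verification(request):
    print("Confirm via a known-good channel before acting, and report it.")
```

The point is the policy shape, not the code: any single red flag (off-channel contact, a high-stakes request, forced urgency) should route the request to a known-good channel for confirmation before anyone acts.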
Employees should be encouraged to think critically: “Why would the CEO contact me directly via WhatsApp, bypassing the usual chain of command?” Asking such questions is a powerful defense mechanism.
Technical measures like multi-factor authentication (MFA) remain vital, but organizations must also consider the risks associated with easily cloned biometrics. AI-powered tools designed to detect deepfakes are emerging, but deepfakes themselves are constantly evolving. And there’s little an organization can do if an attack occurs outside of company-owned communication channels.
The attempted AI voice phishing attack against LastPass signals that vishing must be taken seriously.
Staying ahead requires moving beyond checkbox training. Today’s security awareness training programs need to continuously educate employees on evolving tactics like deepfake audio and multi-channel social engineering.
Phishing simulations, for example, shouldn’t cover email alone. As phishing attacks evolve across every channel, organizations need to prepare for threats like AI voice phishing.
AI voice phishing is here to stay, and equipping employees with the knowledge and simulated experience to recognize and report these sophisticated deceptions is the most effective way to build resilience against this growing threat.