ChatGPT & AI Phishing: Here’s Why It’s Surging
Why is AI phishing surging? Here's how generative AI platforms like ChatGPT have led to more convincing, scalable attacks.

4,151% — that’s the increase in phishing attacks since the launch of ChatGPT, according to a study on the state of phishing.
It’s a statistic that signals the dawn of a new era in cybersecurity. Phishing attacks are becoming significantly more sophisticated and harder to detect, an evolution that directly correlates with the widespread use of generative AI tools like ChatGPT, which produce remarkably human-like text, image, video, and audio content for attackers at scale.
Attackers no longer need to complete time-consuming, manual work to pull off a polished phishing attack. AI does all the work, and fast.
In moments, AI models can generate a lifelike deepfake of a company’s CEO, mirroring their appearance and replicating their voice.
And generative AI is taking phishing far beyond the email inbox. This new era of AI phishing is multi-channel: both old and new threat vectors already incorporate generative AI.
3 Reasons Why Generative AI is a Game-Changer for Phishing Attacks
Generative AI is fueling the surge in AI phishing because it overcomes the limitations previously faced by attackers. Today, it’s easier than ever for a cybercriminal to plan, create, and execute a deceptive phishing attack on an individual or entire organization.
Here are the top reasons why phishing attacks have increased with generative AI.
#1. Eliminating language barriers
Phishing attempts were often flagged by awkward phrasing or grammatical errors, especially when attackers weren’t writing in their native language.
Generative AI erases this tell. Its sophisticated text generation allows anyone to create fluent, natural-sounding, contextual messages in multiple languages, instantly making scams feel more legitimate.
#2. Enabling realistic, dynamic conversations
Until recently, attackers struggled to maintain conversations if a victim replied. Between language barriers and managing several conversations simultaneously, attackers couldn’t keep up. Now, it’s easy to deploy multiple phishing attacks and run conversations in real time.
Attackers can feed victim responses into AI to generate realistic, context-aware replies in real time, creating dynamic, believable conversations that make social engineering far more effective.
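For a sense of how little machinery this requires, here is a minimal sketch of conversation-aware reply generation using the OpenAI Python SDK, framed as the kind of sanctioned phishing-simulation tooling defenders use in awareness training. The model name, persona prompt, and messages are illustrative assumptions, not any specific toolchain.

```python
# Minimal sketch: generating a context-aware reply with an off-the-shelf
# LLM API (OpenAI Python SDK assumed). Framed as sanctioned phishing-
# simulation tooling; model name and prompts are illustrative only.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

history = [
    {"role": "system",
     "content": ("You play 'IT support' in an authorized phishing-awareness "
                 "simulation. Reply briefly, helpfully, and in character.")},
    {"role": "user",
     "content": "Why do I need to re-verify my account? I did that last week."},
]

# Each new victim reply is appended to `history`, so every generated
# response stays consistent with the entire conversation so far.
response = client.chat.completions.create(model="gpt-4o-mini", messages=history)
print(response.choices[0].message.content)
```

The specific API matters less than the pattern: any hosted or local model that accepts a running message history can keep a deceptive thread coherent indefinitely.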
#3. Automating personalization at scale
Effective spear phishing requires personalization, which was once a time-consuming manual process. Generative AI automates this: attackers feed open-source intelligence (OSINT) scraped from public sources like social media and company pages into AI models.
The model then instantly generates phishing content customized for thousands of individuals simultaneously, blending the targeted effectiveness of spear phishing with the volume of mass campaigns.
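To make the mechanics concrete, here is a minimal sketch of the mail-merge pattern behind this kind of personalization; the names, fields, and template are hypothetical stand-ins for data that would be scraped at scale.

```python
# Minimal sketch of mail-merge personalization: one lure template filled
# per target from OSINT-style fields. Names, fields, and wording are
# hypothetical stand-ins; real campaigns scrape these at scale.
from string import Template

lure = Template(
    "Hi $first_name, following up on $company's $event announcement: "
    "please review the attached agenda before $deadline."
)

targets = [
    {"first_name": "Dana", "company": "Acme", "event": "Q3 earnings", "deadline": "Friday"},
    {"first_name": "Lee", "company": "Initech", "event": "product launch", "deadline": "EOD"},
]

for target in targets:
    print(lure.substitute(target))
```

Swap the fixed template for an LLM call and every message also gets unique phrasing, which is precisely what undermines signature-based filtering (more on that below).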
How Cybercriminals Use Generative AI for Phishing
Boosting productivity, aiding creativity, and solving complex problems are all positive use cases associated with generative AI. However, the technology cuts both ways.
The same powerful capabilities are being weaponized by cybercriminals who see generative AI not as a tool for progress but as an arsenal for malicious activity, repurposing it to enhance deception and scale attacks like never before.
Let’s examine how cybercriminals use generative AI for phishing.
Crafting hyper-realistic messages
Attackers leverage generative AI not only to achieve grammatical accuracy and maintain the flow of conversation; the technology also meticulously mimics specific writing styles.
Imagine receiving an email perfectly capturing your CEO’s urgent tone at the end of a quarter, a text message replicating the helpful language of your IT support desk, or a legal notice using the precise formal phrasing of a known partner or vendor. AI makes this level of personalization entirely achievable.
To enhance believability, AI tailors messages based on OSINT, such as recent company news or industry events scraped from public sources like social media, to create contextual lures. Generative AI then spins out variations of these lures to cover a large number of targets in little time.
But the application of these techniques isn’t limited to any particular communication channel, such as email or text message. Attackers employ generative AI across every channel to broaden the attack surface.
Powering the sophistication of deepfakes
Deepfakes, highly realistic video and audio content designed to impersonate real individuals, represent the most alarming application of generative AI today. They allow attackers to convincingly mimic voice patterns, facial expressions, and mannerisms from minimal source data, like an audio recording or an existing image found online.
AI voice cloning represents a dramatic escalation for vishing, short for voice phishing. By replicating the voice of a trusted person like a CEO, colleague, or family member, attackers make urgent, fraudulent requests, demanding anything from immediate wire transfers to sensitive information, that succeed largely because the voice sounds familiar. Cloned voices are also used to leave deceptive voicemails.
Video deepfakes, on the other hand, enable a visual impersonation, posing threats in scenarios that involve video communication. And before you laugh off this type of phishing attack, here’s a newsworthy item: Earlier this year, a firm lost $25 million after a finance employee was tricked by a deepfake during a video conference call.
The combination of realistic audio and video deepfakes presents a huge challenge: verifying identity becomes significantly harder when attackers can manipulate both sight and sound. While the technology, especially for seamless real-time interaction, continues to evolve, current capabilities are already sufficient to stage devastating scams.
Developing malicious tools powered by generative AI
Cybercriminals aren’t only using off-the-shelf generative AI; they’re building specialized versions tailored for malicious purposes.
‘WormGPT’ and ‘FraudGPT’ have emerged, acting as force multipliers that dramatically lower the technical barrier for less skilled attackers and boost the efficiency of experienced ones.
Tools like these provide easy-to-use interfaces that automate tasks. For example, they can generate highly convincing phishing emails or text messages tailored to specific targets. Or they might write malware code designed to evade security software or create sophisticated fake login pages.
Worryingly, access to WormGPT and FraudGPT is often sold as a service on dark web forums. This model democratizes advanced capabilities for phishing attacks, and it’s a trend that presents a significant challenge for law enforcement and security researchers attempting to track, attribute, and dismantle cybercriminal operations.
The Resulting Challenge for Cybersecurity
Generative AI makes the job of cybersecurity professionals incredibly challenging, creating hurdles for the traditional defenses typically used against phishing attacks:
- Quality & Conviction: AI-generated content often bypasses filters looking for basic errors and easily deceives human eyes and ears.
- Volume & Speed: Automation allows for relentless, high-volume phishing attack campaigns.
- Adaptability: Generative AI enables attackers to constantly rephrase and adapt their tactics, making signature-based detection less effective (see the sketch after this list).
- Multi-Channel Threats: Attacks span email, SMS, voice, and video, which require broader defenses.
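As a toy illustration of the adaptability problem, the sketch below shows a static keyword filter catching a classic lure while missing trivially reworded variants that a language model can produce in unlimited supply; the signatures and messages are hypothetical, not rules from any real product.

```python
# Toy illustration: a static keyword signature catches the classic lure
# but misses AI-paraphrased variants. Signatures and messages are
# hypothetical examples, not rules from any real product.
SIGNATURES = ["verify your account", "password expires"]

messages = [
    "Please verify your account within 24 hours.",          # caught
    "Kindly re-confirm your login details before Friday.",  # missed
    "Your credentials need a quick refresh today.",         # missed
]

for msg in messages:
    flagged = any(sig in msg.lower() for sig in SIGNATURES)
    print(f"{'FLAGGED' if flagged else 'missed'}  | {msg}")
```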
While technical solutions like advanced email security and multi-factor authentication (MFA) remain vital, they cannot counter threats crafted by generative AI on their own.
Defending Against AI Phishing: Awareness is Key
Defending against AI phishing attacks demands a focus on human vigilance, supported by strong technical foundations. Keep security tools updated, explore AI-driven detection, and enforce MFA, but recognize that technology alone isn’t sufficient — employees are the most crucial line of defense.
Prioritize modern security awareness training that moves beyond ordinary training practices. Instill a verification culture where unusual or sensitive requests are confirmed via separate channels, and educate employees specifically on generative AI capabilities like deepfakes and hyper-realistic text, while training them to analyze the context of communications.
Given the rapid evolution of AI threats, training must be continuous and agile. Utilize a modern platform, like Adaptive Security, that offers dynamic content, realistic AI phishing simulations, and personalized learning. As part of an ongoing effort, this approach builds a resilient security culture with empowered, vigilant employees.
Building Resilience in the Age of Generative AI
This surge in sophisticated phishing is happening because generative AI fundamentally lowers the barriers for cybercriminals. Phishing attacks are now highly convincing, leveraging personalized deception produced at an unprecedented scale.
While generative AI poses a serious challenge, remember that you’ll need to combine robust technical safeguards with a focus on human vigilance and critical thinking. This requires security awareness training and phishing simulations designed specifically for today’s AI era and its ongoing evolution.
Adaptive’s next-generation platform delivers this crucial training, utilizing simulations that mimic AI-powered attacks to help organizations train employees before real threats strike.
Get a demo with Adaptive — we’ll walk you through tracking your organization’s generative AI risk, running phishing simulations with deepfake personas on any channel, and preparing your team for multi-channel threats with next-generation security awareness training.