4151% increase in phishing since the launch of ChatGPT
Recent data has revealed a startling trend in cybersecurity: phishing attacks have increased by an unprecedented 4151% since the launch of ChatGPT. This dramatic rise has heightened concerns among companies about their digital security, drawing attention to the evolving nature of phishing and social engineering risks.
Phishing is a deceptive practice in which cybercriminals pose as trustworthy entities, such as company executives, to trick individuals into revealing sensitive information. Common forms of phishing include email phishing, where fraudulent emails impersonate legitimate organizations, and spear phishing, which involves targeted attacks on specific individuals or companies using personalized information. Whaling is a form of spear phishing aimed at high-profile targets like C-suite executives. Other variations include vishing, which is voice phishing conducted over phone calls, and smishing, which targets mobile users through SMS messages.
AI Phishing
AI has significantly lowered the barrier to entry for non-native English speakers in crafting convincing phishing attempts. In the past, grammatical errors and awkward phrasing often served as red flags, alerting potential victims to fraudulent communications. However, AI-powered language models can now generate fluent, error-free text that closely mimics authentic correspondence. AI also makes would-be attackers conversationally proficient: if a victim replies to a phishing message, the attacker can feed that reply into an AI model and respond in a realistic, natural tone. This development has expanded the pool of potential attackers and made it more difficult for recipients to discern legitimate messages from fraudulent ones based on language quality alone.
Furthermore, AI has revolutionized the scalability of personalized phishing attacks. Traditionally, highly targeted phishing attempts required considerable time and effort to research and tailor messages to specific individuals. AI now enables attackers to automate this process, rapidly generating customized content based on readily available personal information. This capability allows for mass-produced yet personalized phishing attempts, significantly increasing the reach and potential impact of these attacks.
Perhaps most alarmingly, AI is facilitating the creation of entirely new types of phishing threats through generative voice and video technologies. Deepfake audio can now convincingly mimic the voices of known individuals, enabling vishing (voice phishing) attacks that are far more persuasive than traditional methods. Similarly, AI-generated deepfake video can create highly realistic impersonations, potentially leading to video-based phishing attempts that exploit the inherent trust people place in visual communication. Attackers can walk away with millions of dollars, as happened to a multinational financial firm in Hong Kong when an employee sent $25 million to cybercriminals who used a deepfake to impersonate the company's CFO.
These AI-driven advancements in phishing techniques present a formidable challenge to existing security measures and user awareness strategies. As AI continues to evolve, companies must adapt rapidly to counter these sophisticated threats and protect their employees and organizations from increasingly convincing and pervasive phishing attacks.
Defending Against AI Phishing
Historically, the fight against phishing has involved multiple strategies. One key approach has been the implementation of technical defenses. Organizations have deployed email gateways, which act as a first line of defense by filtering out suspicious messages before they reach employees' inboxes. Building on this, email providers such as Google and Microsoft have incorporated advanced detection algorithms into clients like Gmail and Outlook. These algorithms work to detect potential threats and automatically quarantine suspicious emails, adding an extra layer of protection. Despite these improvements, over 90% of organizations experienced an email security incident in the past 12 months.
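To make the limitation concrete, here is a minimal, hypothetical sketch of the kind of keyword-and-heuristic scoring a traditional rules-based gateway layer applies. Real gateways combine hundreds of signals (sender authentication such as SPF/DKIM/DMARC, URL reputation, attachment analysis); the phrase list, scoring weights, and threshold below are illustrative assumptions, not any vendor's actual rules. Note how an AI-written message in fluent, neutral language would score zero against checks like these.

```python
import re

# Illustrative phrase list; a real gateway's rules are far more extensive.
SUSPICIOUS_PHRASES = [
    "verify your account",
    "urgent wire transfer",
    "password expired",
    "click here immediately",
]

def phishing_score(subject: str, body: str) -> int:
    """Return a crude risk score; higher means more suspicious."""
    text = f"{subject} {body}".lower()
    score = 0
    # Known phishing phrases add to the score.
    for phrase in SUSPICIOUS_PHRASES:
        if phrase in text:
            score += 2
    # Raw IP addresses in links are a classic red flag.
    if re.search(r"https?://\d{1,3}(\.\d{1,3}){3}", text):
        score += 3
    # Excessive urgency markers (capped so punctuation alone can't dominate).
    score += min(text.count("!"), 3)
    return score

def should_quarantine(subject: str, body: str, threshold: int = 4) -> bool:
    """Quarantine the message if its heuristic score crosses the threshold."""
    return phishing_score(subject, body) >= threshold
```

A crude template-based scam trips these rules easily, but a fluent, personalized message generated by an AI model contains none of these markers, which is why rules like these increasingly fail on their own.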
Technology alone is not enough to combat the ever-evolving threat of phishing, especially when personal judgment comes into play. Attacks now arrive across multiple surfaces (SMS, email, and now voice and video via generative AI), and purpose-built criminal tools like FraudGPT challenge rules-based technical controls that typically cover only email.
Recognizing this, companies have increasingly invested in human-centered solutions. User awareness training has become a crucial component of cybersecurity strategies. These programs aim to equip employees with the knowledge and skills needed to identify and report phishing attempts, transforming staff from potential vulnerabilities into active defenders against cyber threats.
Adaptive Security’s training addresses these new challenges by offering a dynamic program that evolves with emerging threats. Unlike static programs, Adaptive continuously adds interactive elements that reflect the latest AI-enhanced phishing techniques and teach employees how to avoid falling for such attacks.
The platform's personalized training caters to individual roles and knowledge levels, ensuring relevant and engaging content for each employee. This tailored approach improves retention and application of security best practices. Adaptive also incorporates real-world simulations that mimic advanced AI-generated phishing tactics, including those using sophisticated language and deepfake audio to impersonate executives. These practical exercises are vital for developing the skills needed to identify and respond to convincing phishing attempts.
By implementing Adaptive Security's security awareness training (SAT) tools, companies foster a proactive security culture where employees become active defenders against cyber threats. Continuous training maintains high security awareness, strengthening defenses against evolving AI-driven attacks. As AI continues to reshape the phishing landscape, organizations must prioritize adaptive, human-centric approaches to stay ahead of increasingly sophisticated threats.