The Rise of AI Copilots Is a Massive Phishing Risk

Justin Herrick
August 11, 2024

4 min read


Artificial intelligence (AI) is revolutionizing the workplace, with AI assistants — or ‘copilots’ — promising huge leaps in productivity. As the pursuit of operational efficiency continues, organizations are investing heavily in AI copilots.

Integrated into software like Microsoft 365 and Google Workspace, AI copilots help draft emails, summarize meetings, and analyze data faster than any employee.

But there’s a dangerous flip side to this innovation, too.

The same AI power that supercharges legitimate work is also supercharging phishing attacks. And as organizations eagerly adopt these tools, cybercriminals are exploiting them, turning helpful assistants into potent weapons for deception and theft.

Phishing Threats Were Already Bad — AI Made Them Worse

Phishing has long been a major cybersecurity headache for IT and security teams. What AI changes is the scale: both the volume and the sophistication of attacks are escalating dramatically.

Consider this: Microsoft detected over 30 billion phishing emails in 2024 alone. Add generative AI to the mix, and attackers gain the ability to craft more personalized, believable, and error-free messages at an unprecedented scale.

AI copilots, meanwhile, have become an attacker’s new best friend. Their deep integration with user data — including emails, documents, contacts, calendars, and chat histories — is precisely what attackers want to leverage.

How AI Copilots Become Phishing Accelerants

The massive phishing risk emerging around AI copilots boils down to their core design: deep access to sensitive corporate data, from financial records to customer information. Security researchers are now actively demonstrating how this access can be twisted for malicious purposes.

Research presented by Michael Bargury at the recent Black Hat USA conference (and covered by Wired) detailed several ways Microsoft’s Copilot chatbot can be specifically exploited. 

Here’s a recap of how AI copilots can be weaponized, according to Bargury:

Automated spear-phishing

Bargury demonstrated a tool dubbed ‘LOLCopilot’: once an attacker compromises an email account, the tool uses Copilot to analyze the victim’s contacts and writing style, right down to their emoji use. It then crafts and sends hundreds of highly personalized phishing emails, complete with malicious links or malware, almost instantly.

As the researcher noted, “A hacker would spend days crafting the right email to get you to click on it, but they can generate hundreds of these emails in a few minutes.”

This turns Microsoft’s Copilot (and other generative AI-powered chatbots) into an automated phishing engine, leveraging core functionality for malicious scale.
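
For defenders, the clearest signal of this kind of automation is volume: one mailbox suddenly sending large batches of near-identical, personalized messages within minutes. Below is a minimal Python sketch of that detection idea, not tied to any specific mail platform; the log format, thresholds, and sample messages are assumptions for illustration.

```python
from collections import defaultdict
from datetime import datetime, timedelta
from difflib import SequenceMatcher

# Illustrative outbound-mail log entries; in practice these would come
# from your mail platform's audit or export tooling.
outbound_log = [
    {"sender": "alice@example.com", "sent_at": datetime(2024, 8, 11, 9, 0, 5),
     "body": "Hi Sam, here is the Q3 report link you asked for..."},
    {"sender": "alice@example.com", "sent_at": datetime(2024, 8, 11, 9, 0, 9),
     "body": "Hi Priya, here is the Q3 report link you asked for..."},
    # ...hundreds more in the same window in a real incident
]

WINDOW = timedelta(minutes=10)   # assumed detection window
MIN_BURST = 2                    # assumed threshold (kept low for the demo)
MIN_SIMILARITY = 0.8             # assumed body-similarity threshold


def similar(a: str, b: str) -> float:
    """Rough textual similarity between two message bodies (0..1)."""
    return SequenceMatcher(None, a, b).ratio()


def flag_bursts(log):
    """Flag senders who emit many highly similar messages in a short window."""
    by_sender = defaultdict(list)
    for msg in log:
        by_sender[msg["sender"]].append(msg)

    alerts = []
    for sender, msgs in by_sender.items():
        msgs.sort(key=lambda m: m["sent_at"])
        for i, first in enumerate(msgs):
            window = [m for m in msgs[i:] if m["sent_at"] - first["sent_at"] <= WINDOW]
            alike = [m for m in window if similar(first["body"], m["body"]) >= MIN_SIMILARITY]
            if len(alike) >= MIN_BURST:
                alerts.append((sender, first["sent_at"], len(alike)))
                break
    return alerts


if __name__ == "__main__":
    for sender, start, count in flag_bursts(outbound_log):
        print(f"ALERT: {sender} sent {count} near-identical messages starting {start}")
```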

Stealthy sensitive data exfiltration

AI copilots routinely access and summarize data, and Bargury showed how attackers (again, post-compromise) can exploit this.

By prompting an AI copilot to retrieve sensitive information (like salary data) but specifically instructing it not to reference the source files, attackers can potentially bypass some security alerts.

Bargury commented, “A bit of bullying does help” when manipulating the AI’s output constraints.
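
A practical takeaway for defenders is that alerts shouldn’t hinge on whether a response cites its source files. The sketch below illustrates the idea of scanning assistant responses for sensitive-data patterns regardless of citations; the patterns and the sample response are placeholders for illustration, not part of any vendor’s API.

```python
import re

# Illustrative patterns for data that should not leave an AI assistant's
# response unchecked. Real deployments would use a proper DLP engine.
SENSITIVE_PATTERNS = {
    "salary figure": re.compile(r"\$\s?\d{2,3},\d{3}\b"),
    "US SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "IBAN": re.compile(r"\b[A-Z]{2}\d{2}[A-Z0-9]{11,30}\b"),
}


def scan_response(text: str):
    """Return (label, match) hits found in an assistant response."""
    hits = []
    for label, pattern in SENSITIVE_PATTERNS.items():
        for match in pattern.findall(text):
            hits.append((label, match))
    return hits


# Placeholder response; note there is no source-file citation to key on.
copilot_response = "Per the latest review cycle, J. Doe's base salary is $145,000."

for label, match in scan_response(copilot_response):
    print(f"DLP hit ({label}): {match} -- review before this answer is shown or forwarded")
```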

Bypassing security with ‘magic words’

While Microsoft and other chatbot makers have security controls in place, Bargury detailed how attackers could circumvent them. After analyzing internal systems and understanding how Copilot accesses resources, he found that “a few magic words” were enough to bypass limitations and “do whatever you want.”

As a result, the researcher noted, attackers can effectively hijack Copilot’s capabilities despite any built-in protections.

Data poisoning for social engineering

Attackers don’t always need full account access. Bargury showed that simply sending a malicious email containing fake information (like incorrect bank details) into a user’s inbox could poison the data an AI copilot draws from.

Later, if the user asks Copilot or another chatbot about that information, the AI might present the attacker’s fake details as fact, demonstrating Bargury’s point: “Every time you give AI access to data, that is a way for an attacker to get in.”
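
One way to blunt this particular poisoning scenario is to treat any change to payment details arriving by email as untrusted until it’s verified against a record kept outside the mailbox. The sketch below illustrates that check; the vendor names, vetted accounts, and regex are assumptions for the example.

```python
import re

# Vetted payment details maintained outside email (e.g., in the ERP system).
TRUSTED_VENDOR_ACCOUNTS = {
    "Acme Supplies": "GB29NWBK60161331926819",
    "Globex Ltd":    "DE89370400440532013000",
}

IBAN_RE = re.compile(r"\b[A-Z]{2}\d{2}[A-Z0-9]{11,30}\b")


def check_payment_change(vendor: str, email_body: str) -> str:
    """Flag inbound mail whose bank details differ from the vetted record."""
    found = IBAN_RE.findall(email_body.replace(" ", ""))
    trusted = TRUSTED_VENDOR_ACCOUNTS.get(vendor)
    for iban in found:
        if trusted and iban != trusted:
            return f"FLAG: {vendor} email cites {iban}, vetted record is {trusted}"
    return "OK: no mismatched bank details found"


# Illustrative poisoned email an attacker might plant for a copilot to repeat.
body = "Please note our new account for all invoices: GB82 WEST 1234 5698 7654 32."
print(check_payment_change("Acme Supplies", body))
```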

Malicious insider impersonation

Other demos from Bargury’s session at the Black Hat USA conference included manipulating Microsoft’s AI copilot into leaking sentiment about internal matters (like earnings calls) or serving users links to phishing websites, turning the helpful AI into a “malicious insider.”

Context Matters: Post-Compromise & Existing Vulnerabilities

Keep in mind that several of the exploits unearthed by Bargury, particularly the automated phishing and sensitive data exfiltration, require an attacker to first gain access to the user’s email account. They’re post-compromise scenarios, and that initial compromise often starts with a phishing attack to obtain login credentials.

In a conversation with Wired, Microsoft’s Phillip Misner expressed appreciation for Bargury’s work. Misner, who leads AI incident detection and response, also revealed that Microsoft is working with the researcher to assess the vulnerabilities identified.

Johann Rehberger, a red team leader at Electronic Arts, emphasized that AI copilots often amplify security weaknesses. If an organization has poor data permissions allowing broad access to files, “Now imagine you put Copilot on top of that problem,” he told Wired. Rehberger added that AI can easily find sensitive information like poorly secured passwords if underlying access controls are weak.
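
Rehberger’s point suggests a concrete pre-rollout step: audit what is already shared too broadly before a copilot can surface it. The toy sketch below works from a hypothetical export of file-sharing metadata (in practice this would come from your collaboration platform’s admin or reporting APIs); the file names, scopes, and keywords are invented for illustration.

```python
# Hypothetical export of file-sharing metadata; a real audit would pull this
# from the collaboration platform's admin or reporting tooling.
shared_files = [
    {"name": "2024-salary-bands.xlsx",  "scope": "organization"},
    {"name": "vpn-passwords.txt",       "scope": "anyone-with-link"},
    {"name": "team-lunch-options.docx", "scope": "organization"},
    {"name": "board-minutes-q2.pdf",    "scope": "specific-people"},
]

# Broad scopes and sensitive-looking names are assumptions for this sketch.
BROAD_SCOPES = {"organization", "anyone-with-link"}
SENSITIVE_KEYWORDS = ("salary", "password", "board", "payroll", "ssn")


def audit(files):
    """Yield files that are both broadly shared and sensitive by name."""
    for f in files:
        name = f["name"].lower()
        if f["scope"] in BROAD_SCOPES and any(k in name for k in SENSITIVE_KEYWORDS):
            yield f


for f in audit(shared_files):
    print(f"Review before copilot rollout: {f['name']} (shared: {f['scope']})")
```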

Navigating the AI Paradox: Productivity vs Peril

AI copilots offer immense potential, but their power is a double-edged sword that significantly amplifies the phishing threat. Ignoring that reality is a perilous path for any organization embracing these tools.

Organizations need to adopt AI copilots with eyes wide open, implementing robust, AI-aware technical defenses. That adoption also requires fostering a vigilant culture through continuous, relevant security awareness training and phishing simulations. It’s not about checking a box; it’s about building a resilient human defense layer that counters the threats that slip past technology.

Effective training needs to tackle AI-specific threats head-on, educating employees on what hyper-personalized attacks look like, how attackers can manipulate chatbot interactions, and the potential for deepfakes. In addition, ongoing phishing simulations provide safe, practical experience against these advanced tactics.

Together, security awareness training and phishing simulations harden employees against increasingly deceptive lures and build the confidence needed to recognize, question, and report suspicious activity before clicking.
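
Measuring whether that resilience is actually improving can be as simple as tracking two numbers per simulation campaign: the share of employees who clicked the simulated lure and the share who reported it. The sketch below uses made-up campaign results purely to illustrate the calculation.

```python
# Illustrative results from two internal phishing-simulation campaigns.
campaigns = {
    "Q1 baseline":          {"sent": 200, "clicked": 46, "reported": 18},
    "Q2 after AI training": {"sent": 200, "clicked": 21, "reported": 67},
}

for name, r in campaigns.items():
    click_rate = r["clicked"] / r["sent"]
    report_rate = r["reported"] / r["sent"]
    print(f"{name}: click rate {click_rate:.0%}, report rate {report_rate:.0%}")
```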

Acknowledging that the very AI features boosting productivity also boost phishing capabilities allows IT and security teams to plan accordingly. By combining technical defenses with well-trained employees, organizations can harness AI’s benefits while limiting misuse and stopping attacks before they cause lasting harm to the business.

Get your team ready for generative AI. Schedule your demo today.