AI Copilots and Phishing
AI Credential Phishing and Security Risks
Credential phishing - a common cybercrime in which attackers steal sensitive login information - has increased by 217% in the last six months alone. As AI grows more capable, expect spear-phishing attacks and data breaches to grow along with it.
A recent exposé in Wired revealed how Microsoft's new Copilot AI could be turned into a “phishing machine.” At this summer’s Black Hat conference, security researcher Michael Bargury demonstrated five proof-of-concept ways that Copilot, which runs inside Microsoft 365 apps such as Word, could be manipulated by attackers, including tricking it into providing false references to files, exfiltrating private data, and dodging Microsoft’s security protections. Once an attacker gains access to an email account, they can use Copilot to impersonate the user and fire off emails to every one of the user's contacts in a matter of minutes; those emails can carry malware or malicious links that put every recipient at risk. Attackers can also use Copilot to pull sensitive information from compromised mailboxes or to bypass security measures. Phishing attacks were already dangerous, but the personalization and efficiency AI provides makes them both more dangerous and more common. Learn more at wired.com.
As AI continues to advance and introduce new risks, particularly around phishing and social engineering, advanced phishing simulations and security awareness training become increasingly critical. These attacks are easy to fall for, so companies need employees who have practiced responding to them. Adaptive’s AI phishing simulations give organizations a crucial tool for improving their defenses and ensuring that employees can identify and respond to increasingly sophisticated phishing schemes. This kind of training can be transformative in protecting a company.
For more information on how phishing simulations can strengthen your company’s security, visit Adaptive's product page.