Deepfake attacks on financial services firms rise 700%

WRITTEN BY
Adaptive Security
Whitepaper
5 min read
July 18, 2024

The financial services industry is contending with the latest form of fraud: generative AI. The Wall Street Journal reported that deepfake incidents at financial technology firms rose more than 700% in the last year, and Deloitte's Center for Financial Services predicts that AI-enabled fraud losses could reach $40 billion in the United States by 2027. As deepfakes become more realistic and the tools to create them more accessible, the financial sector should pay special attention to this evolving threat. Attackers can now create a near-perfect deepfake of someone using nothing more than a single photo and a short audio clip.

In February, a finance worker at Arup was tricked into transferring $25 million to scammers who used deepfake technology to impersonate the company's CFO and other staff. The attackers remain at large. These attacks are not isolated incidents but part of a growing trend: the sophistication of deepfake technology has made it increasingly difficult for individuals and organizations to distinguish genuine communications from fraudulent ones. The vulnerability extends beyond corporate settings, affecting consumers directly and fueling a surge in identity theft cases.

Identity theft is the most common consumer complaint received by the U.S. Federal Trade Commission (FTC), and more than half of those complaints involve robocalls. The Federal Communications Commission (FCC) has recently moved to combat AI-powered scams by banning robocalls that use AI-generated voices. Regulation alone, however, will not eradicate these schemes. Despite these efforts, identity theft remains the biggest threat to financial institutions because of the sensitive data they manage.

Identification, confirming that a person is who they say they are, is becoming increasingly challenging in a generative AI world. Voice recognition was once considered a highly secure authentication factor, and some banks relied on it alone. With the advance of AI voice generators and voice manipulation tools, however, voice verification has become problematic: deepfake technology can convincingly mimic an individual's voice, making it essential to add further factors to customer, account, and employee verification. Financial institutions are now investing heavily in AI-powered detection systems to identify potential deepfakes and suspicious activity in real time. These systems analyze patterns, anomalies, and inconsistencies in audio and video content that may be imperceptible to the human eye or ear.
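To make that concrete, the toy sketch below computes a single hand-picked audio feature, spectral flatness, and flags clips whose frame-to-frame texture is unusually uniform. It is only an illustration of the kind of frame-level signal analysis such systems perform, not any vendor's actual detector; the function names, threshold, and feature choice are assumptions for demonstration.

```python
# Toy illustration only: a heuristic spectral-flatness check on an audio clip.
# Real deepfake detectors combine many features with trained models; this
# sketch merely shows the kind of frame-level analysis described above.
# Assumed input: a mono 16 kHz recording loaded as a NumPy array.

import numpy as np
from scipy.signal import stft

def spectral_flatness_profile(audio: np.ndarray, sample_rate: int = 16000) -> np.ndarray:
    """Return per-frame spectral flatness (geometric mean / arithmetic mean of power)."""
    _, _, z = stft(audio, fs=sample_rate, nperseg=512)
    power = np.abs(z) ** 2 + 1e-12           # small offset avoids log(0)
    geometric = np.exp(np.mean(np.log(power), axis=0))
    arithmetic = np.mean(power, axis=0)
    return geometric / arithmetic

def looks_suspicious(audio: np.ndarray, sample_rate: int = 16000) -> bool:
    """Flag clips whose flatness varies unusually little across frames --
    a crude proxy for the overly uniform texture of some synthetic speech."""
    profile = spectral_flatness_profile(audio, sample_rate)
    return float(np.std(profile)) < 0.01      # threshold chosen for illustration only

# Example with synthetic data standing in for a real recording:
rng = np.random.default_rng(0)
clip = rng.standard_normal(16000 * 3)         # three seconds of noise
print(looks_suspicious(clip))
```

Production systems layer many such features across audio, video, and behavioral signals and feed them to trained models, but the per-frame pattern analysis shown here is the basic ingredient.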

While there is no silver bullet for the range of risks posed by deepfakes, companies are finding solutions that help prevent and detect attacks. Recognizing the gravity of these threats, financial institutions are responding in a variety of ways. Many have adopted a "zero trust" security model, which assumes that no user, system, or service operating inside the security perimeter is automatically trusted. This approach requires verification from everyone trying to access resources on the network, regardless of their role or previous authentications.
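In practice, that principle reduces to re-verifying every request rather than trusting an established session. The sketch below, using only the Python standard library, shows the idea under stated assumptions: the token format, policy table, and function names are illustrative, not any particular institution's implementation.

```python
# Minimal zero-trust-style sketch: every call re-checks a signed, short-lived
# token and an explicit policy; nothing is trusted because of a prior login or
# network location. All names (verify_request, POLICY, etc.) are illustrative.

import hmac, hashlib, time

SECRET_KEY = b"rotate-me-regularly"
POLICY = {"alice": {"payments:read"}, "bob": {"payments:read", "payments:approve"}}
MAX_TOKEN_AGE = 300  # seconds; short-lived credentials limit replay

def sign(user: str, issued_at: int) -> str:
    """Produce an HMAC signature binding the user to an issue time."""
    message = f"{user}:{issued_at}".encode()
    return hmac.new(SECRET_KEY, message, hashlib.sha256).hexdigest()

def verify_request(user: str, issued_at: int, signature: str, action: str) -> bool:
    """Re-verify identity, token freshness, and authorization on every request."""
    expected = sign(user, issued_at)
    if not hmac.compare_digest(expected, signature):
        return False                              # identity check failed
    if time.time() - issued_at > MAX_TOKEN_AGE:
        return False                              # stale credential: force re-auth
    return action in POLICY.get(user, set())      # least-privilege policy check

# Example: a fresh, signed request to approve a payment
now = int(time.time())
print(verify_request("bob", now, sign("bob", now), "payments:approve"))    # True
print(verify_request("alice", now, sign("alice", now), "payments:approve"))  # False
```

Each request is checked for identity (the signature), freshness (the token age), and least-privilege authorization (the policy lookup); failing any one check denies access, regardless of where the request originates.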

One of the best ways firms can combat this emerging threat is modern employee training on voice and video deepfakes. Because deepfakes and generative AI are relatively new technologies, it is essential for banks to provide internal training that builds employee awareness and preparedness. Ensuring that employees understand how these technologies work, the risks they pose, and how to recognize them is the best way to build organizational resilience. Adaptive offers customized training sessions that use your executives' voices and likenesses to simulate potential threats, giving employees a realistic experience of these new risks. Contact a team member today for a personalized demo.

