In 2025, artificial intelligence will be a key focus for cybersecurity analysts and professionals. AI-enabled social engineering attacks are ushering in a new era of cybersecurity threats. Using advanced machine learning algorithms, cybercriminals can now automate and personalize their attacks with unprecedented precision. AI-driven phishing and vishing scams mimic voices and email tones, while deepfakes impersonate trusted figures. As a result, distinguishing between legitimate communication and malicious intent is more challenging than ever.
The New York State Department of Financial Services (DFS) Addresses AI Cybersecurity Risks
The DFS considers AI-enabled social engineering a significant threat to the financial services sector. To address it, the DFS issued an industry letter to Covered Entities* outlining AI-related cybersecurity risks. The letter highlights four main risks from AI, two of which focus on how attackers use AI:
1. AI-Enabled Social Engineering Attacks
2. AI-Enabled Cybersecurity Attacks
While social engineering is not new, AI has made these attacks more personalized and sophisticated. Threat actors are using AI to create realistic deepfake audio, video, and text to target individuals through phishing, vishing, SMiShing, videoconferencing, and online platforms. Here are a few examples from the DFS industry letter:
- A finance worker transferred $25 million to threat actors after a video call with deepfake participants, including the CFO.
- An energy executive wired €220,000 to a threat actor after an AI-generated voice deepfake mimicked the parent company’s CEO, requesting the urgent transfer.
- Threat actors created an AI deepfake hologram of a Binance PR executive to conduct Zoom calls with companies seeking digital asset listings.
*Covered Entity: A covered entity is defined in 23 NYCRR § 500.1(e) as “any person operating under or required to operate under a license, registration, charter, certificate, permit, accreditation or similar authorization under the Banking Law, the Insurance Law or the Financial Services Law, regardless of whether the covered entity is also regulated by other government agencies.”
AI-Enabled Attacks – A Concern for All Enterprises
AI-enabled social engineering attacks threaten all enterprises, not just the financial industry. Consider just a few statistics:
- 26% of C-suite executives faced deepfake incidents in the past year (Validia).
- One report shows a 28% rise in phishing emails in Q2 2024, likely driven by generative AI (Egress).
- 80% of employees are willing to follow AI-impersonated instructions (Validia).
With the rise in AI-enabled social engineering attacks, the need for robust and innovative training has never been more urgent.
Cybersecurity Training for AI-Enabled Attacks
The DFS industry letter highlights the need for comprehensive training on AI-enabled attacks. Staff, executives, and board members should be trained in AI risks and how to respond to AI-driven social engineering threats.
For this training to be effective, we believe it must pair a deep understanding of social engineering techniques with a strong grasp of how AI technologies are used in these attacks. To meet this need, Social-Engineer LLC, in partnership with Validia, offers an innovative Artificial Intelligence and Deep Fake Social Engineering Audit designed to equip your staff and cybersecurity teams with the skills needed to navigate this threat landscape. We use advanced deepfake and digital skin technologies to create hyper-realistic scenarios that test and strengthen your team's vigilance against sophisticated social engineering attacks. This service gives companies a cutting-edge defense against the evolving threat of AI-enabled social engineering.
Please contact us today for a consultation.