The fact that AI is already fueling cybersecurity attacks is surely no surprise to you. However, do you know exactly which types of attacks it helps facilitate, and what defenses can be applied against them? Let’s start by noting that when we talk about AI-driven hacks today, we are not referring to autonomous AI agents independently planning and executing full attack sequences without human involvement. We are still some time away from those kinds of autonomous threats. As of today, AI is used primarily as a tool to amplify existing attack techniques: accelerating reconnaissance, crafting convincing phishing lures, automating vulnerability discovery, and generating malware or exploit code with greater efficiency and precision. In this blog we will dive into how these techniques have been used in the last several years to conduct successful attacks against global organizations. We will also explore what security teams can do to prevent such attacks and how they can use AI to their own advantage. Let’s jump in!

Five AI-Driven Attacks and How They Unfolded

1. Deepfake CEO Scam

In early 2024, a finance worker at an unnamed company (we can understand why!) received an invitation to a video call from senior officials at the company. The employee recognized the faces, and the CFO on the call instructed them to make a series of confidential transactions. Interestingly, the employee had earlier received an email from the CFO about the need for a secret transaction, which raised their suspicion. However, the video call dispelled these doubts, and the employee went ahead and transferred $25M to the attackers’ accounts. Only after making the transfers did the employee contact the IT and security team, which was, of course, too late.

As deepfake technology improves, related scams will only grow. Indeed, for as long as we have Zoom calls, these types of social engineering hacks are here to stay. At the same time, most employees are not ready for them. Whereas email scams have been around for more than two decades and most workers have been conditioned to recognize some of the signs of email fraud, deepfake video is too new for most corporate workers to fully grasp its deceptive potential. Here is just a small sample of how a fake Elon Musk can easily convince Zoom participants that he is real:

2. AI-Powered Social Engineering via LinkedIn

Starting in 2023, executives at companies such as Okta, Cisco, and Microsoft began to be impersonated by AI-generated versions of themselves that sent LinkedIn messages to unsuspecting users. These bots mimicked executives and recruiters using scraped profile data, preying on users’ desire to find a job or expand their professional network. The bots would include a phishing link inside an InMail or email follow-up, which in turn would lead to compromised credentials used for internal phishing or VPN access.

The use of AI in LinkedIn-based attacks highlights the technology’s versatility in enabling social engineering. In these cases, AI is leveraged for: (1) gathering data to target specific individuals, (2) collecting information to mimic real profiles, (3) creating convincing impersonations, (4) crafting persuasive outreach messages, and (5) automating message delivery for maximum engagement. Here is an example of what an AI-generated InMail can look like, and here is a whole LinkedIn article on this tactic:

Fake InMail

3. OpenAI – SweetSpecter Spear Phishing Attempt

In October 2024, employees at OpenAI started to receive highly relevant, targeted, and context-aware emails that contained the SugarGh0st Remote Access Trojan (RAT). The malware is typically embedded in malicious attachments or linked documents, and once installed, it allows an attacker to take full remote control of the machine, access its data, record keystrokes, take screenshots, and more!

Indeed, the Trojan is very dangerous once it gets past company defenses. Fortunately, OpenAI’s email filters and endpoint detection successfully blocked the malware before it reached corporate inboxes. OpenAI published a threat intelligence report outlining the attack tactics and shared Indicators of Compromise (IoCs). Below is an example of the email OpenAI employees received as part of the hack.

SweetSpecter

4. Paradox.ai's Lax Security Measures Compromise McDonald's Hiring System

Just this month, Wired reported that “McDonald’s AI Hiring Bot Exposed Millions of Applicants’ Data to Hackers Who Tried the Password ‘123456’”. Indeed, security researchers discovered that a hacker could easily access the chatbot’s conversations (including personal information of job applicants) with simple tricks like guessing the basic password. All in all, the data included as many as “64 million records, including applicants’ names, email addresses and phone numbers”.

In this case, no actual data was leaked, but the incident showcased the importance of vetting AI technology providers like Paradox.ai. The company that created the bot issued a statement soon after the incident in which it said it took the following actions: ”Both the legacy password and the API endpoint vulnerability have been addressed. Additionally, we are launching several new security initiatives including providing an easy way to contact our security team (security@paradox.ai) on our website and a bug bounty program. We take responsibility for this issue. Full stop.”
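As a minimal illustration of the kind of control such vetting should look for, here is a short Python sketch of a password-acceptance check. The blocklist and length threshold are illustrative assumptions, not Paradox.ai’s actual policy; production systems should also screen candidates against a large breach corpus such as Have I Been Pwned:

```python
# Tiny illustrative blocklist -- real deployments should use a large corpus.
COMMON_PASSWORDS = {"123456", "password", "qwerty", "admin", "letmein"}

def is_acceptable(password: str, min_length: int = 12) -> bool:
    """Reject passwords that are too short or appear on a known-weak list."""
    if len(password) < min_length:
        return False
    if password.lower() in COMMON_PASSWORDS:
        return False
    return True

print(is_acceptable("123456"))                 # False -- the password from the incident
print(is_acceptable("Tr41n-depot-cactus-91"))  # True
```

A check this simple would have rejected the “123456” credential at the center of the Paradox.ai incident before it ever reached production.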

5. AI Voice Cloning in Vishing Attacks Against U.S. Officials

Throughout the new presidency, Trump administration officials have continued to be impersonated by AI “vishing” scammers. Most recently, Secretary of State Marco Rubio had his voice and writing style replicated by an AI and used to send voice and text messages to other American and foreign politicians. The attacker tried to manipulate the officials into granting access to specific information and accounts.

Even though this attack failed to obtain the information, it still succeeded in reaching its target audience. In fact, with the ease of finding emails and phone numbers via tools like ZoomInfo, attackers can easily run extensive vishing campaigns against thousands of people simultaneously.

Best Practices for Security Teams to Prevent AI-powered Attacks

Based on the above examples, it’s clear that AI is driving a wave of social engineering attacks, and this category of attacks is likely to remain the most prevalent in the near future. This highlights the importance of employee training, as well as of securing corporate communication channels:

  • Build Awareness and Controls for Deepfake and Voice Scams – Employees in finance, HR, and executive roles are increasingly targeted by AI-generated video and voice scams. To reduce risk, organizations should implement deepfake-awareness training, enforce multi-person verification for sensitive requests, and use voiceprint authentication or callbacks for high-risk communications.
  • Harden Email and Messaging Defenses – AI-generated messages are often missed by filters due to nearly flawless grammar, tone, and context. Security teams should consider deploying advanced email platforms that use behavioral and AI-based anomaly detection, not just keyword or pattern matching. They should also apply DMARC, DKIM, and SPF to authenticate legitimate emails (see the DNS-lookup sketch after this list) and extend monitoring to Slack and other internal messaging apps.
  • Vet AI & Automation Tools – Most AI vendors are still new in the market and may not have strong security policies. Be sure to review and evaluate all third-party AI vendors for comprehensive security practices, including resilient password policies, data encryption, audit logging, and timely patch management. Treat AI systems like production infrastructure — apply least-privilege access, monitor logs, and isolate them from sensitive environments.
  • Adopt Real-Time Threat Intelligence Sharing – The success of OpenAI’s defensive response lay not only in blocking the malware but also in quickly sharing indicators of compromise (IoCs) and insights. Security teams should participate in relevant threat intelligence exchanges, establish internal escalation protocols for AI-enhanced attacks, and share confirmed IoCs with trusted partners and vendors (a minimal hash-matching sketch follows below).
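
To make the email-authentication point concrete, here is a minimal Python sketch using the third-party dnspython library. It checks whether a domain actually publishes SPF and DMARC records (DKIM is omitted since verifying it requires knowing the sender’s selector). The domain is a placeholder, and this is a quick verification aid, not a full deliverability audit:

```python
# pip install dnspython
import dns.resolver

def get_txt_records(name: str) -> list[str]:
    """Return all TXT records for a DNS name, or an empty list if none exist."""
    try:
        return [r.to_text().strip('"') for r in dns.resolver.resolve(name, "TXT")]
    except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer):
        return []

domain = "example.com"  # placeholder -- use your own domain

spf = [r for r in get_txt_records(domain) if r.startswith("v=spf1")]
dmarc = [r for r in get_txt_records(f"_dmarc.{domain}") if r.startswith("v=DMARC1")]

print("SPF:", spf or "MISSING")      # e.g. ['v=spf1 include:_spf.google.com ~all']
print("DMARC:", dmarc or "MISSING")  # e.g. ['v=DMARC1; p=reject; ...']
```

A missing or permissive DMARC policy (p=none) is a common gap that lets spoofed mail through even when SPF is configured.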
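And to show what acting on shared IoCs can look like in practice, below is a small Python sketch that hashes every file under a directory and flags matches against a list of file-hash indicators. The hash and directory are placeholders, not real SweetSpecter IoCs:

```python
import hashlib
from pathlib import Path

# Placeholder SHA-256 values standing in for file-hash IoCs from a threat feed.
KNOWN_BAD_HASHES = {
    "0000000000000000000000000000000000000000000000000000000000000000",
}

def sha256_of(path: Path) -> str:
    """Stream the file through SHA-256 so large files aren't loaded into memory."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

def scan(directory: str) -> None:
    """Print every file whose hash matches a known indicator."""
    for path in Path(directory).rglob("*"):
        if path.is_file() and sha256_of(path) in KNOWN_BAD_HASHES:
            print(f"IoC match: {path}")

scan("./downloads")  # placeholder directory
```

In practice, teams feed published IoCs into their EDR or SIEM rather than running ad-hoc scripts, but the principle is the same: confirmed indicators only have value if they are actually checked against your environment.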

AI-powered cyberattacks are no longer a future threat. They are already happening and accelerating quickly. The good news is that security teams who stay informed, strengthen their defenses, and use AI as part of their strategy will not only be able to respond effectively but also stay ahead of the curve.