Every week, Sprocket CEO and Founder Casey Cammilleri interviews an expert leading the charge on empowering security experts and practitioners with the knowledge and insights needed to excel in the future of cybersecurity.

He recently spoke with Andrew Grealy, Head of Armis Labs. Here are the top takeaways from the interview.

#1: Deploy Honeypots in Mom and Pop Infrastructure to Catch Zero Days

“One way we do this is we also see what's being exploited right now. So we have our own deception-based honeypots. And those honeypots are not in your normal places like AWS or Akamai or Cloudflare; they're in mom and pop shops. Because what do threat actors do? They compromise, and they use credit cards, fake credit cards, or other people's credit cards, to do that. And they practice that locally. So if we have something locally for them that's of interest, then we see these new types of attacks.

“And for us, we're not about seeing an old attack that's already known. What we're looking for is a new CWE, a new type of attack, or something that we haven't seen, an N-day or a zero day. And we do that well. And with our technology, we can build anything. We can build oil and gas stations, we can build banks, we can build anything based on geography, technology, or industry. So we've done lots of different things. We've even done Formula 1 race cars. We've done Tesla superchargers that got compromised with zero days. We've done all those different types of technologies. But what leads us to that technology is the next part, which is our intelligence collection.

“And intelligence collection, at Armis, it sounds so simplified when we say the Dark Web. But to us it's nothing like the Dark Web, because you're a security guy. So what it is, is, let's just say an organization, and I'll use this as an example, an organization is using JIRA as a ticketing system. Well, what if we pretend we're a researcher that gets invited into this collective? So now we see all the tickets for everyone getting updated, what they're hacking, what they're doing. So if we see all the tickets, then we know what they're prioritizing, what they're weaponizing, and where they are. And when we were a small company, I found one where Exchange was getting compromised, 2,000 of them, and I got all the emails, and I sent it to everyone, and we immediately got marked as spam. ‘These guys, this can't be true.’ And then they all got compromised. But now with Armis, with our early warning flash alerts, people know Armis. So when they get something from us, it's like, ‘Oh, this is real.’ So it's actually been really helpful for people to get that information, those flash alerts that come out.”

Actionable Takeaway: Position deception environments where threat actors practice with stolen credit cards — not in AWS or Cloudflare, but in small local businesses. This approach captures new attack types and zero days because attackers test locally before targeting enterprises. Build custom honeypots matching your industry to see weaponization patterns early.
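To make the idea concrete, here is a minimal, hypothetical sketch of a low-interaction deception listener. It accepts any TCP connection, presents a fake service banner, and records the peer plus the first bytes it sends (typically a scanner probe or exploit attempt). This is an illustration of the capture-and-log pattern only; real deception platforms like the ones described above emulate full services, industries, and device types.

```python
import socket
import socketserver
import threading

CAPTURED = []  # (peer_ip, first_bytes) tuples seen by the listener


class DeceptionHandler(socketserver.BaseRequestHandler):
    # Fake SMTP banner; the hostname is a placeholder, not a real system.
    BANNER = b"220 mail.example.local ESMTP ready\r\n"

    def handle(self):
        # Greet like a real mail server, then record whatever the
        # connecting party sends first.
        self.request.sendall(self.BANNER)
        self.request.settimeout(2.0)
        try:
            probe = self.request.recv(4096)
        except OSError:
            probe = b""
        CAPTURED.append((self.client_address[0], probe))


def run_once(host="127.0.0.1"):
    """Start the listener on an ephemeral port, capture one probe, stop."""
    with socketserver.TCPServer((host, 0), DeceptionHandler) as srv:
        port = srv.server_address[1]
        t = threading.Thread(target=srv.handle_request, daemon=True)
        t.start()

        # Simulate an attacker probing the honeypot.
        with socket.create_connection((host, port), timeout=2.0) as c:
            banner = c.recv(1024)
            c.sendall(b"EHLO attacker\r\n")
        t.join(timeout=5.0)
        return banner


banner = run_once()
print(banner)
print(CAPTURED)
```

In practice the interesting signal is exactly what `CAPTURED` holds: unsolicited traffic to a machine nobody should be talking to, which is why placement (a "mom and pop" network rather than a well-known cloud range) matters more than the listener itself.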

#2: Prepare for AI-Powered Triple Threat Ransomware Attacks

“The triple threat is: you get ransomware, and fewer people are paying, so then they bring it up to, well, exfiltration. And exfiltration is bad for two things. One, customer data, but two, also, fines. So that becomes even bigger; we've seen $1.4 billion in fines in that space. And then the third one, which is just as bad as all of those, is they go through all the emails, all the information in the organization, using AI, because that's an easy way to do it now with tools. Because you say, ‘Hey, is there anyone doing something they should not be, or violating different laws?’ They find people breaking laws.

“And the bigger the organization, the easier it is to find, but more time-consuming if you were doing it the old-school way, by hand. With AI done right, you can send a billion tokens through before you know it. And then they get that extortion material, and they use that to, one, get back into the organization, accounts, all that stuff, to extort money and keep doing it, bringing in more and more and more, because that person's job, their livelihood, or going to jail is on the line. So now you're seeing that not just once; they can do that for a large number of people in an organization.

“So when I say organizations, I've been in large organizations, over 200,000 people, and we say 1% of the people are doing something serious. They may be taking their own toilet paper home with them or stuff like that, but out of that 1%, it's 1% actively doing fraud. So you look at a number like that, you're talking thousands. So if you get into that organization and do that, and those people hide themselves and layer, it's easy for AI to look at this person's layering. Because a normal person would just send an email; this person is sending an email, deleting it, then sending something else, and doing that. So it's anomalous behavior.”

Actionable Takeaway: Modern ransomware combines encryption, data exfiltration, and AI-powered internal investigation to maximize damage. Attackers use AI to scan billions of tokens across emails and documents, identifying regulatory violations and personal misconduct for targeted extortion. In organizations over 200,000 people, approximately 20 individuals are actively committing fraud, creating multiple leverage points for persistent attacks.
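The "layering" behavior described above, sending a message and then deleting it shortly afterwards, is easy to express as a simple rate check. The sketch below is a hypothetical illustration, not any vendor's actual detection logic: the event format, the five-minute window, and the 0.5 threshold are all assumptions.

```python
def send_then_delete_rate(events, window=300):
    """Fraction of a user's sent messages deleted within `window` seconds.

    `events` is a list of (timestamp, action, message_id) tuples, where
    action is "send" or "delete". Timestamps are seconds.
    """
    sent = {}  # message_id -> send timestamp
    deleted = 0
    for ts, action, msg_id in sorted(events):
        if action == "send":
            sent[msg_id] = ts
        elif action == "delete" and msg_id in sent:
            if ts - sent[msg_id] <= window:
                deleted += 1
    return deleted / len(sent) if sent else 0.0


def flag_users(mailbox_events, threshold=0.5):
    """Return user IDs whose send-then-delete rate exceeds the threshold."""
    return sorted(
        user for user, events in mailbox_events.items()
        if send_then_delete_rate(events) > threshold
    )


# Toy data: alice behaves normally, bob sends and quickly deletes.
logs = {
    "alice": [(0, "send", "a1"), (900, "send", "a2")],
    "bob": [(0, "send", "b1"), (60, "delete", "b1"),
            (200, "send", "b2"), (230, "delete", "b2")],
}
print(flag_users(logs))  # bob's rate is 1.0, alice's is 0.0
```

The point of the interview's warning is that attackers can run exactly this kind of analysis, at token-scale with an LLM rather than with hand-written rules, so defenders should assume their own mailbox metadata can be mined the same way.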

#3: Secure AI Coding Pipelines against Supply Chain Package Infiltration

“It was really hard to get into the supply chain of code. Really hard. That was years of exercise to get that and to get that goal, actions on objectives. Whereas what I'm seeing today is two things. One, vibe coding. And 54% of the time, Copilot, OpenAI, and Claude — all of them — put vulnerabilities in the code. So you're automatically putting vulnerabilities in code, and at a faster rate than you ever could before. And the second thing, which I've seen a number of times, and at Armis my continuous hunt team has been searching for, is people downloading packages that aren't real packages. So, typosquatting.

“So before, to get into a line-of-business application, you had to spend, as a threat actor, all the work in the world. All I have to do now is create a package with a name similar to NumPy or PyMongo, with a hyphen, and guess what? The LLMs are all going to do a variation of that, and it's going to come up, and you're going to download it, because the vibe coding is saying we're going fast, we're vibe coding. And now it's in your line-of-business application that no one ever had access to. And what they've done is they just put a wrapper around it.

“So it's the same library as PyMongo, but the wrapper above it is the command-and-control channel to go back out. So you don't know any different. It's running. But now your line-of-business application, which could hardly ever get compromised, and a lot of them aren't even on external GitHub, all that stuff is internal, is compromised. Your whole supply chain. And that scares me, because it used to be really hard to do that.”

Actionable Takeaway: AI coding assistants inject vulnerabilities into generated code 54% of the time while simultaneously recommending malicious packages through typosquatting attacks. Threat actors create packages that LLMs suggest as alternatives, infiltrating internal applications that were previously unreachable. The wrapper appears identical to the real library but includes command-and-control channels, compromising your entire supply chain through legitimate-looking code.
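One practical defense is to gate installs behind a near-miss check: flag any requested package whose name closely resembles, but does not exactly match, a known-good package. The sketch below uses Python's standard-library `difflib.get_close_matches`; the tiny allowlist is an illustration only, and a real check would compare against your organization's approved-package registry.

```python
import difflib

# Illustrative allowlist; in practice, pull this from your internal registry.
KNOWN_PACKAGES = {"numpy", "pymongo", "requests", "pandas", "cryptography"}


def typosquat_suspects(requested):
    """Map each unknown requested name to the known package it resembles."""
    suspects = {}
    for name in requested:
        # Normalize the way package indexes do: lowercase, hyphens.
        normalized = name.lower().replace("_", "-")
        if normalized in KNOWN_PACKAGES:
            continue  # exact match with a known package: fine
        close = difflib.get_close_matches(normalized, KNOWN_PACKAGES,
                                          n=1, cutoff=0.8)
        if close:
            suspects[name] = close[0]  # looks like a typo of a real package
    return suspects


# "py-mongodb" and "numpyy" resemble real packages; "leftpadx" matches nothing.
print(typosquat_suspects(["pymongo", "py-mongodb", "numpyy", "leftpadx"]))
```

A check like this is cheap to run in CI or a pre-install hook, which is exactly where it needs to live if, as the interview warns, the suspicious name is coming from an AI assistant's suggestion rather than a human typo.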


Listen on Apple

Listen on Spotify

Watch on YouTube