Microsoft’s Rob Lefferts On Rise Of AI Attacks: ‘Be Prepared To Go Faster’
With autonomous cyberattacks expected to surge, adopting a security posture that can respond rapidly to threats will increasingly not be optional, the Microsoft corporate vice president tells CRN.
As AI-powered cyberattacks become even more autonomous and widespread, cybersecurity teams will need to adapt by becoming faster and more sophisticated in their response capabilities, according to Microsoft security executive Rob Lefferts.
In an interview with CRN, Lefferts, corporate vice president for threat protection at Microsoft, said that adopting a cyber defense posture capable of rapidly responding to threats will increasingly not be optional in the era of AI- and agentic-driven attacks.
[Related: Microsoft’s Vasu Jakkal On Why Sentinel Is Now The ‘Backbone For Agentic Defense’]
The emergence of cyberattacks that are almost entirely automated—such as the “AI-orchestrated cyber espionage campaign” recently disclosed by Anthropic—has been widely expected in the cybersecurity industry. However, that doesn’t mean that most organizations are prepared to keep up with the potentially massive increase in attacks that are coming, Lefferts said.
The arrival of autonomous attacks “completely changes the game because of speed and scale,” he said.
In other ways, though, protecting against cyberthreats in the AI era is “also the same game,” Lefferts said. “The exploitations and the attacks aren’t unique or different. You have to do all the things that you used to do. You have to have all the same monitoring in place.”
Still, “if there was one call to action for security organizations, it’s [to] be prepared to go faster—because you have less time to respond,” he said. “And be prepared, more and more, to detect signals working across multiple elements, to pull them together.”
Lefferts made the comments amid Microsoft’s major push to bring AI and agentic capabilities to security teams. Such capabilities will become essential for many organizations as a way to keep up with the intensified pace and breadth of AI-powered attacks, he said.
The next generation of tools, Lefferts said, will involve coordinating “systems of agents” that can work largely on their own to complete security tasks for an organization.
In other words, with such a system, it will be about prescribing the desired outcome rather than just assigning individual tasks, he said.
This type of agentic system “is clearly of huge benefit to the security industry and companies’ individual security teams,” Lefferts said. “They’re overwhelmed. They can’t get everything done.”
In September 2025, Microsoft unveiled an array of updates for its Sentinel and Security Copilot platforms aimed at enabling greater interconnectivity between security tools while accelerating the use of AI agents for cyber defense.
The announcements represented a major expansion of Sentinel beyond its roots as a cloud-native SIEM (security information and event management) offering, Microsoft has said. Key updates included general availability for Microsoft’s Sentinel data lake and features such as a new Sentinel graph capability.
Such capabilities will be crucial for security teams looking to harden themselves against autonomous attacks, Lefferts said.
Stopping these attacks will require having security tools that can “all be looking at the same data, and they all have to be driving the same insights,” he said. “With Sentinel and Defender [operating] back and forth across those systems, that’s an automated, machine-speed process that does containment of attacks that are live.”
What follows is more of CRN’s interview with Lefferts.
How would you boil down what AI means for security at this stage?
One side is, how can security teams better leverage AI to be more productive, effective, successful? As I think about security as a domain, it is the kind of place where AI really should make a difference, for a number of reasons. No. 1, we can’t hire enough security professionals. We’re short 4 million professionals globally. How can we make the cybersecurity professionals we’ve got become more productive? The second is, security is intrinsically a big data problem. It is this ocean of data [and you must] find the needle in the haystack, find the unique insight. Having probabilistic big data tools really helps find that unique insight.
This notion of having agents that could work 24/7 to support human analysts and bring them results and insights and ideas is very powerful. And we already see that in the tools that we have deployed. We’ve had tools in production with customers for months now. We can already see some of the results—if you’re using an agent to help you pre-investigate user-submitted phish alerts, the human analyst on the other side of that will be something like 600 percent more efficient. And they will actually be 77 percent more accurate. That second part was surprising to me. The first one I expected: they’re throwing noise out of the system. But the accuracy [shows that] humans work better if they’re not facing constant drudgery.
Why are agents so well-equipped to handle something like phishing analysis for security teams?
We spent decades training employees in all companies to recognize and report phishing emails. The result of that is that they are hypersensitive and over-report phishing email like crazy. It is an incredibly noisy signal. And as I talk to some of the biggest multinationals on the planet, the signal is so bad that they’ve even given up looking at it. They don’t look at end-user submitted phish emails, because it’s mostly false positives. What the phishing triage agent does is it takes all of that end user-submitted email and it effectively pre-investigates it for the analyst. For a bunch of it, [the agent] just throws it out. And then for some, it [tells the analyst], “This was the bad one, you should look at this.” So when the human analyst comes along to go investigate all these potential phishing emails, the queue is curated. A lot of the obvious noise is thrown out, and some of the risky ones are highlighted. Instead of this mind-numbing [process] they get to see a pre-curated list that has some insights already associated with it, and they get to spend time on things that matter. That’s the theme that I think we’ll come back to again and again—if we can get analysts and security professionals spending their time on the things that matter, that’ll make all the difference in the world.
The next steps look like, let’s use these AI tools and use agents to really help the security team be better. They’re more efficient, they’re more effective, they’re more accurate. They’re enjoying their jobs more. And in particular, they’re spending their time on the things that matter, the strategic insights. My dream is that a security team could spend all of its time on the new stuff and none of its time on the boring, repetitive toil. That’s the dream.
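The workflow Lefferts describes (pre-investigate every user-submitted email, throw out the obvious noise, and hand the analyst a curated, annotated queue) can be sketched in a few lines of Python. The names, markers and verdict categories below are illustrative assumptions, not Microsoft’s actual phishing triage agent.

```python
# Minimal sketch of the triage pattern described above; the markers and
# scoring logic are stand-ins, not Microsoft's phishing triage agent.
from dataclasses import dataclass, field

@dataclass
class SubmittedEmail:
    sender: str
    subject: str
    body: str

@dataclass
class TriageResult:
    email: SubmittedEmail
    verdict: str                              # "benign", "suspicious" or "malicious"
    notes: list[str] = field(default_factory=list)

# Stand-ins for the signals a real agent would extract (headers, URLs, detonation results).
SUSPICIOUS_MARKERS = ("verify your account", "urgent wire transfer", "password expires")

def pre_investigate(email: SubmittedEmail) -> TriageResult:
    """Automated pre-investigation of a single user-submitted email."""
    notes = [m for m in SUSPICIOUS_MARKERS if m in email.body.lower()]
    if not notes:
        return TriageResult(email, "benign")
    verdict = "malicious" if len(notes) > 1 else "suspicious"
    return TriageResult(email, verdict, notes)

def curate_queue(submissions: list[SubmittedEmail]) -> list[TriageResult]:
    """Drop the obvious noise and surface the risky items, riskiest first."""
    results = [pre_investigate(e) for e in submissions]
    kept = [r for r in results if r.verdict != "benign"]
    return sorted(kept, key=lambda r: 0 if r.verdict == "malicious" else 1)
```

The analyst then works only the sorted output of curate_queue, which is the “pre-curated list” Lefferts refers to.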
What was your takeaway from the autonomous attack campaign recently disclosed by Anthropic, and what do those sorts of attacks mean for cyber defense?
It completely changes the game because of speed and scale. But [in other ways] it’s also the same game. The exploitations and the attacks aren’t unique or different. You have to do all the things that you used to do. You have to have all the same monitoring in place. If there was one call to action for security organizations, it’s [to] be prepared to go faster—because you have less time to respond. And be prepared, more and more, to detect signals working across multiple elements, to pull them together.
How prepared do you think organizations are for agentic-powered, autonomous attacks?
A lot of the organizations that have been putting in place the building blocks over the last few years have work to do, but it’s not undoable work. The organizations that I worry about are the ones who truly aren’t prepared. And they’re easy to identify. They’re the ones who would get compromised by ransomware anyway. Even if you look back over the last couple of years, there’s this real, clear separation between organizations that are ready and ones that aren’t. If you’re ready, you can detect, monitor and evict. And if you’re not, then you have very few options, because your state’s just too chaotic and you don’t have a grip on it.
[Attackers] have scaled out heavily in infrastructure deployment because setting up all of the fake domains is time-consuming and expensive. So now they have AI do that, so they can just spew these things out. Once they figure out how to use AI for the exploitation itself, of course they will do so as soon as it’s cheaper.
From a tools perspective and what Microsoft is offering there, what are the most essential things organizations should know?
I think you need three critical pieces coming together in order to make this real. First is, you need comprehensive visibility—which means you need data, data access, and a platform that frees the data to be used by AI. That’s what we’ve talked about, starting in the summer, with Sentinel Data Lake and Sentinel Graph. More data is better. And having something like the Data Lake that makes it cheaper for organizations to pencil out how much data they can keep and leverage, that makes a huge difference. We’re exposing it through graph APIs and making it ready for agentic systems to come along and take advantage of it.
The second thing you need is a bunch of tools that are not stuck in islands. A very traditional security model was, I’ll get my SIEM from over here and my EDR [endpoint detection and response] from over there, and then, I’ll just wire them up, because I have a super smart security team. But there can’t be any divide. They have to all be looking at the same data, and they all have to be driving the same insights. With Sentinel and Defender [operating] back and forth across those systems, that’s an automated, machine-speed process that does containment of attacks that are live. The word AI has come to mean just LLMs, but there’s a whole bunch of behavioral models built into that thing and driving it. One of the other things we announced recently was connecting that into predictive shielding. The idea is, when we see the attacker break in and compromise a laptop, we know the five places the attacker could have gotten to from that laptop. So we’re proactively going to start hardening and locking them down, just to make sure the attacker can’t take advantage of it. That’s the kind of end-to-end and machine-speed [approach] that we’re talking about.
Then the last item is, you have to have AI tools. You have to be building agents. Some of that is working with a vendor who’s pushing hard on making this stuff real. And in the world of AI today, of course, making it real means more than just hype—it means running it in production, real world, doing the tuning, doing the iteration and making sure it’s delivering results. Then you need your security teams to be playing with it. They can’t be living in a world where they hope it’s not going to change. Change is coming.
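The “predictive shielding” idea Lefferts describes above, where a single compromised laptop triggers proactive hardening of the assets reachable from it, can be illustrated with a minimal sketch. The lateral-movement graph, asset names and hardening action below are assumptions for illustration, not Microsoft’s implementation in Defender or Sentinel.

```python
# Hedged sketch of graph-driven proactive hardening: when one device is
# compromised, lock down everything one hop away from it. The graph and
# the "hardening" action here are illustrative stand-ins.
LATERAL_MOVEMENT_GRAPH = {
    "laptop-42": ["file-server-1", "vpn-gateway", "hr-db", "build-agent", "admin-workstation"],
    "file-server-1": ["backup-vault"],
}

def predictive_shield(compromised_asset: str, graph: dict[str, list[str]]) -> list[str]:
    """Return the assets to lock down: everything directly reachable from the compromise."""
    targets = graph.get(compromised_asset, [])
    for asset in targets:
        # Stand-in for real containment controls (isolate, require re-auth, restrict tokens).
        print(f"hardening {asset}: tightening access and requiring re-authentication")
    return targets

if __name__ == "__main__":
    predictive_shield("laptop-42", LATERAL_MOVEMENT_GRAPH)
```

In practice the graph would come from identity, device and network telemetry, and the actions would be containment controls rather than print statements; the point is that the loop runs at machine speed, before the attacker moves.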
What are some of the themes or areas Microsoft is looking at next with this?
We unveiled a bunch of agents [in 2025]. We’re going to be unveiling a bunch more. We’re actually taking a step back to think about, how does a SOC [Security Operations Center] work together, end to end? And how do we have an integrated set of agents that communicate together, that support those activities? We’ll make sure that that system of agents becomes an intrinsic part of Defender and Sentinel in the future.
So this concept of orchestrating agents will be increasingly important for security teams?
That’s right. There’s this interesting thing that happens with agentic systems, where it pays to have specific and expert instructions. You can actually tell agents to be working on specific jobs that are much more narrow than you would have put into a human job description, because it’s boring. I wouldn’t want a human to be stuck doing that. But for an agent to be reading phish emails, 24/7, no problem. And so as we think about investigation, I want a bunch of agents that are expert in each category of alert. Then I’m going to have an agent that orchestrates that and pulls together the ultimate results.
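A minimal sketch of the pattern Lefferts describes: narrow, category-specific agents plus one orchestrator that routes alerts and pulls the findings together. The Alert shape, specialist functions and categories are illustrative assumptions, not a Microsoft API.

```python
# Hedged sketch of an orchestrator coordinating narrow, expert agents.
from dataclasses import dataclass

@dataclass
class Alert:
    category: str   # e.g. "phishing", "identity", "endpoint"
    details: str

def phishing_agent(alert: Alert) -> str:
    return f"phishing agent: investigated '{alert.details}'"

def identity_agent(alert: Alert) -> str:
    return f"identity agent: reviewed sign-in anomaly '{alert.details}'"

# Each specialist handles one narrow category of alert, 24/7.
SPECIALISTS = {"phishing": phishing_agent, "identity": identity_agent}

def orchestrate(alerts: list[Alert]) -> list[str]:
    """Route each alert to its specialist and aggregate the findings for a human."""
    findings = []
    for alert in alerts:
        specialist = SPECIALISTS.get(alert.category)
        if specialist is None:
            findings.append(f"orchestrator: no specialist for '{alert.category}', escalating to analyst")
        else:
            findings.append(specialist(alert))
    return findings
```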
Does the fact that AI and agents will be able to handle so many entry-level tasks end up impacting the next generation of analysts in a SOC?
This is a fascinating topic I’ve spent a bunch of time noodling on. The coding example is a great one—if everyone’s always vibe coding, how do you learn the basics? How does that happen? Here’s my current best answer, based on my own experiences. It turns out that you can actually have an instructional dialogue with the AI as you’re doing this stuff, and it can even make recommendations for you. So I think there is a call to action for the AI tool builders to build in education as part of that process. I think the result of that is actually going to be much more efficient than [the usual approach].
We already see that happening inside of the Copilot tools. For example, we have a tool to do reverse engineering of a malware script, a potentially malicious PowerShell script. It does this great teardown in seconds. When an attacker sends across a chunk of PowerShell script, it’s heavily obfuscated and it’s encoded. So you decode it, which is not a big deal. But then every operation in that script is confusing. There’s no straightforward code. An AI can tear it down in seconds and give you this great analysis—“This is what it does, and these are the key chunks of code.” And so if you take that and then go back and look at the original, it’s a much faster way to learn.
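The decoding step Lefferts calls “not a big deal” can be shown concretely. One common case is PowerShell’s -EncodedCommand flag, which takes base64-encoded UTF-16LE text, so the first pass really is a one-liner; the hard part is untangling the obfuscated logic that comes out, which is where the AI analysis helps. A minimal Python sketch:

```python
# Decoding a base64 / UTF-16LE string of the kind passed to PowerShell's
# -EncodedCommand parameter. (The obfuscation inside is the hard part.)
import base64

def decode_powershell(encoded: str) -> str:
    """Decode a base64-encoded UTF-16LE PowerShell command string."""
    return base64.b64decode(encoded).decode("utf-16-le")

# Example: round-trip the harmless command  Write-Output 'hello'
sample = base64.b64encode("Write-Output 'hello'".encode("utf-16-le")).decode()
print(decode_powershell(sample))   # -> Write-Output 'hello'
```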
What do you say to CISOs and security teams that are still unsure what to focus on when it comes to AI?
I have talked to individual companies whose CEOs want to have 1 billion agents by [2026]. And the CISOs feel a bit overwhelmed. That’s a lot. If a company has millions of agents running around, how do I think about that? How do I approach that problem? The first thing is, get a framework. We’ve had a number of announcements around Agent 365 as a framework for how to think about that. And that’s the same thing that we’re doing across our toolset. I think the whole industry is [now aware] that AI will be driving security. Now is the time to start thinking not just about, how does AI add on to the tools that you’ve got—but, how do you think about AI-native security? This will be the time to make sure that you are leaning in and learning. There will be huge transformation over the upcoming years, because it hasn’t settled down. It’s clearly still in a very frothy mode. But what you should be looking for is getting started and then finding partners who can help lead you on the journey—which is exactly what we’ve been working toward with those two pillars of Agent 365 as a framework for protection, and then Defender and Sentinel as a vehicle for supercharging [an organization’s] security.
Are there certain predictions you might have for how AI and security are going to progress this year?
We are still learning a lot about how to optimize AI. How much do you do prompt tuning, or tuning of the agent definition? When and where do you use additional iterations of RAG [retrieval-augmented generation] in order to get a better dataset to drive forward? At what point do you think about fine-tuning of the model for that particular application? We’re still experimenting heavily with all of that. One of the things I will say is that getting good at evaluating the results has been key to delivering quality. Another caution that I would offer people around AI hype is, you’ve got to make sure you’re measuring. If you’re not measuring, you’ll just kind of feel good about it. But it’s not clear that you get the actual results that you want. And so being grounded is essential.
Then the second thing is systems of agents. If we really want to talk about supercharging [security]—if we really want to change how the security team operates—it is about multiple agents that communicate and coordinate, and then humans directing, where are we going? What are the strategic problems? And then making sure that we’re driving to the outcomes we need.
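Lefferts’ point about measuring can be made concrete with a small evaluation harness that scores an agent’s verdicts against a labeled test set before trusting it in production. The verdict labels and sample data below are illustrative, not a Microsoft evaluation tool.

```python
# Minimal sketch of grounding agent output against labeled cases: compute
# precision and recall for one verdict class over a labeled test set.
def score_agent(predicted: list[str], actual: list[str], positive: str = "malicious") -> dict[str, float]:
    """Precision and recall for the chosen positive class."""
    tp = sum(p == positive and a == positive for p, a in zip(predicted, actual))
    fp = sum(p == positive and a != positive for p, a in zip(predicted, actual))
    fn = sum(p != positive and a == positive for p, a in zip(predicted, actual))
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    return {"precision": precision, "recall": recall}

print(score_agent(
    predicted=["malicious", "benign", "malicious", "benign"],
    actual=["malicious", "benign", "benign", "malicious"],
))  # -> {'precision': 0.5, 'recall': 0.5}
```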
So the biggest difference in security will come from being able to actually delegate tasks to agents?
The first generation of AI is, I have an assistant. I have any one of the AI chatbots, and I can ask it questions and it gives me answers. The second generation is, I have an agent do something for me—I send it off on a task, and it comes back with the answers. And it’s kind of a transactional back-and-forth. But then the third generation is when I have systems of agents, coordinating together—and it’s not task-specific, it’s [about the] outcome. And I get to interact with them, supervising the direction, monitoring the results and guiding the next steps in the process.
It’s already coming true in some places. Certainly inside of coding systems, we see more advanced firms operating in that [area]. And security is a place where we’re pushing hard. It is clearly of huge benefit to the security industry and companies’ individual security teams. They’re overwhelmed. They can’t get everything done. Let’s help them out.
What about protecting AI itself—what are the biggest threats or concerns you see there?
Attackers are not only excited about how AI will enhance their activities, but they’re equally excited about how they can use AI as an attack surface area. It’s clear why that’s true. Every single thing we’ve ever built adds complexity, and that’s just a breeding ground for attacks. As we look at the AI stack, there’s multiple levels of attack points. You can attack the chatbots and agents themselves—I’ll call that social engineering for AI, where I try to trick it. I am fascinated by this, because it’s very parallel to social engineering for humans, but the tricks are all different. For example, for humans, you want to create a sense of urgency and fear. So their rational thinking narrows, and they become very driven to [respond]—“I have to send that wire transfer right now or the business will fail.” It doesn’t make any sense, but that’s how you trick them. With AI, [the tactic is] to confuse them about the context of the question—and then their natural urge to be helpful will cause them to over-generate and do more than they should have done, and circumvent their own guardrails.
So that’s just attacking the surface area of the AI itself. Then there is attacking the infrastructure that it runs on—attacking the MCP servers, the cloud infrastructure. One of the things that I worry a lot about is, it has become very easy to prop up MCP servers, and easy for agents to find those. How do you know that it’s not a malicious server feeding inaccurate information and instructions into the agent, and then causing it to go off the rails? Then underneath all of that is the actual models themselves—poisoning of the models, malicious models that have built-in instructions to forward information to a place you didn’t expect it to go. This is the other bold new frontier about AI impacting security. Inside of Defender and Purview and Entra, there are a bunch of new mechanisms to help with that security. The very basic one is inventory. You just have to know where all the agents are. And so Entra has this notion of agentic ID. If you want an agent to do something, it has to be able to get access to data and communicate with the humans—it has to be able to send you email, and so it needs permission. So that’s the hook to give it an Entra ID. And then once you have that, you can inventory them and know what they’re supposed to do and monitor what they actually do. And then that becomes the springboard into the rest of the system.