How Autonomous AI Cyberattacks Will Transform Security: Experts
The speed and volume of attacks enabled by AI agents are likely to quickly overwhelm organizations that haven’t made a real investment in security, cybersecurity experts tell CRN.
For cybersecurity industry veteran Oliver Tavakoli, the recent disclosure about the arrival of a new breed of cyberattacks—almost entirely powered by AI—has a clear message for the cybersecurity world.
In short: The days when an organization could get by without taking security seriously are quickly coming to a close.
[Related: AI Security Week 2026]
“As hard as it has been for defenders to hang on [based on existing attacks], it is believable that over the next 18 months to two years, things will get 10X worse,” said Tavakoli, CTO at cybersecurity vendor Vectra AI.
What that means, according to Tavakoli and other cybersecurity experts who spoke with CRN recently, is that the speed and volume of attacks that AI agents will enable could quickly expose organizations that haven’t made a real investment in security.
As a result, the rise of autonomous cyberattacks is poised to reshape cyber defense as we know it, experts said.
For one thing, that means adopting basic, fundamental security measures will no longer be optional. But it also means organizations will need to embrace a far more automated approach to cybersecurity to avoid being overwhelmed by the new AI-driven threats, security experts told CRN.
Ultimately, “if you’re barely hanging on today, and you believe that things are going to get a lot worse, then what that says is incrementalism [in cyber defense improvements] isn’t going to get you there,” Tavakoli said. “You have to do something transformative.”
Attacks boosted by AI have been with us for several years now, of course. Ever since the public release of OpenAI’s ChatGPT in late 2022, it has been an axiom of cybersecurity that social engineering tactics, such as the writing of phishing emails, are now heavily augmented by GenAI.
What experts believe now, however, is that cybersecurity is on the cusp of a shift into a new phase of how AI can bolster cybercrime and nation-state espionage.
Autonomous Attacks Strike
In November 2025, AI company Anthropic disclosed what it called “the first reported AI-orchestrated cyber espionage campaign.”
The China-linked attack manipulated the company’s coding tool, Claude Code, and enabled the attackers to rapidly execute a series of steps that were “largely” autonomous, from reconnaissance through to data exfiltration, Anthropic said.
In all, AI handled as much as 90 percent of the necessary work, the company estimated.
While just a few dozen organizations were targeted in the attack, many in the industry have viewed the incident as a sign of bigger things to come.
Without question, the arrival of autonomous attacks “completely changes the game because of speed and scale,” said Rob Lefferts, corporate vice president for threat protection at Microsoft.
“If there was one call to action for security organizations, it’s [to] be prepared to go faster—because you have less time to respond,” Lefferts said.
Stopping ‘Machine-Speed’ Attacks
Still, as shown by the Anthropic incident, it’s clear that not everything will be different in the coming era of highly automated attacks, according to security experts.
“Agentic and autonomous attacks do not fundamentally change attackers’ tradecraft in any way,” said Morgan Adamski, a former executive director of U.S. Cyber Command and director of the NSA Cybersecurity Collaboration Center. “Autonomous attacks don’t change what attackers want.”
But without a doubt, “it does help [attackers] increase the scope and speed, in terms of their operational objectives. They just move faster,” said Adamski, who is now a principal and U.S. leader in the cyber, data and technology risk business at PricewaterhouseCoopers.
Thus, if security teams are wondering whether they need to adopt agentic capabilities, the Anthropic incident should serve as a prime case study for why that’s necessary, she said.
The bottom line, Adamski said, is that “you can’t defend against machine-speed attackers with human-speed operators.”
Don’t Skip Ahead
Even before seeking greater adoption of AI and agentic technologies for security, however, experts say many organizations must address some of the lingering cyber defense basics.
Foundational security measures will undoubtedly become even more important going forward, according to Diana Kelley, a longtime security leader who is now CISO at AI security startup Noma Security.
“If you have not [addressed] the low-hanging fruit, that’s going to be the first thing that these automated attacks are going to find,” Kelley said. “The foundational controls matter more than ever.”
Implementation of multi-factor authentication (MFA) has improved in recent years as cyberattacks have escalated, but remains stubbornly low in some segments such as medium-sized businesses (34 percent) and small businesses (27 percent), according to a survey from JumpCloud.
The arrival of autonomous attacks will almost certainly expose those shortcomings, security experts said. As a result, such organizations may finally “find religion” on MFA and other security fundamentals, according to Trey Ford, chief strategy and trust officer at crowdsourced cybersecurity platform Bugcrowd.
“Sadly, I think some of these [AI-powered attacks] will be what motivates those changes,” Ford said.
And while many security teams may be eager to jump ahead to adopting agentic capabilities to boost their defense, the fundamentals really do need to come first, said Moudy Elbayadi, chief AI and innovation officer at Solana Beach, Calif.-based Evotek, No. 92 on CRN’s Solution Provider 500 for 2025.
In addition to MFA, those include phishing training, email filtering and modern endpoint protection, which remain the building blocks of a strong security posture, Elbayadi said.
“If you’re not applying the common best practices, and you’re trying to jump ahead, I think that’s a horrible solution,” he said.
Focus On Identity
There’s no question that autonomous cyberattacks will dramatically amplify long-standing gaps in identity security, experts said.
Because nearly all attacks exploit identity and privileges in some way today, autonomous attacks will necessarily function the same way, said Jason Martin, co-founder and co-CEO at identity security startup Permiso.
Identities with excessive privileges are already a widespread problem, for instance. Companies that fail to address the issue will be leaving the door open for attackers to use those over-permissioned identities to enable attacks, Martin said.
“It’s very likely going to be catastrophic if people don’t get identity under control,” he said.
The vast array of AI agents themselves will also need their own identities, with appropriate permission levels, which means identity security failings could quickly multiply, according to Alex Bovee, co-founder and CEO of identity security startup ConductorOne.
While trying to spread fear and panic is never the right strategy, it’s nonetheless undeniable that identity is “a very real challenge that we need to face as an industry,” he said.
“We’re in a world where most attacks are already identity-based, and we’re about to increase the number of identities by 10X or 20X,” Bovee said. “That’s kind of a disaster waiting to happen.”
Getting identity under control, however, may mean making some tough decisions about automating a company’s own security responses and procedures, experts said.
Many identity and access security issues today, for example, are remediated by way of an IT ticketing system, said Permiso co-founder and co-CEO Paul Nguyen. That’s simply not going to be fast enough in the era of autonomous attacks.
“If the adversary is automating your attack, you have to be able to also automate the response,” Nguyen said. “We have to change our risk appetite for the security team to be able to take mitigation action faster.”
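The shift Nguyen describes, pre-approving certain mitigations so they can execute without waiting on a ticket queue, can be sketched in a few lines of Python. The event fields, risk tiers and action names below are all hypothetical, purely to illustrate the idea of encoding a risk appetite as policy:

```python
from dataclasses import dataclass

@dataclass
class Detection:
    """A hypothetical identity-threat detection event."""
    identity: str
    risk: str    # "low", "medium" or "high"
    signal: str  # e.g. "impossible_travel", "privilege_escalation"

# Policy table: mitigations the security team has pre-approved for
# automatic execution at each risk tier (the "risk appetite").
AUTO_ACTIONS = {
    "high": ["revoke_sessions", "disable_credentials", "open_ticket"],
    "medium": ["revoke_sessions", "open_ticket"],
    "low": ["open_ticket"],
}

def respond(event: Detection) -> list[str]:
    # Look up the pre-approved mitigation steps; no human in the loop
    # for anything already sanctioned by policy.
    return AUTO_ACTIONS[event.risk]

print(respond(Detection("svc-build-01", "high", "privilege_escalation")))
```

The point of the sketch is that the hard work is organizational, not technical: deciding in advance which actions the team is comfortable letting a machine take automatically.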
Surge In Vulnerabilities
Likewise, organizations should expect that AI will enable a surge of new zero-day vulnerabilities due to improvements in automated discovery of software flaws, experts said.
AI is very well-suited for discovering how to break into software, and already the industry has seen evidence—such as from Google’s Big Sleep project in mid-2025—that AI can be used effectively for identifying and weaponizing vulnerabilities, according to Adam Meyers, senior vice president for counter adversary operations at CrowdStrike.
In all likelihood, 2026 will see a big uptick in vulnerabilities as these AI-powered research techniques become more practical, Meyers said.
What that means is that, as is the case with addressing identity shortcomings, the many organizations that are struggling with vulnerability management will want to prioritize getting a better handle on it, he said.
For instance, defenders should “not patch based off of severity or prevalence [of a vulnerability], but rather, patch based off of whether it’s actively being exploited,” Meyers said.
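Meyers’ prioritization rule, active exploitation first and severity second, is simple to sketch in Python. The vulnerability records below are hypothetical; in practice the exploited flag would come from a threat-intelligence feed such as CISA’s Known Exploited Vulnerabilities catalog:

```python
# Hypothetical vulnerability records for illustration only.
vulns = [
    {"cve": "CVE-2026-0001", "cvss": 9.8, "exploited": False},
    {"cve": "CVE-2026-0002", "cvss": 6.5, "exploited": True},
    {"cve": "CVE-2026-0003", "cvss": 7.2, "exploited": True},
]

def patch_order(vulns):
    # Actively exploited flaws jump the queue regardless of CVSS score;
    # within each group, higher severity gets patched first.
    return sorted(vulns, key=lambda v: (not v["exploited"], -v["cvss"]))

for v in patch_order(vulns):
    print(v["cve"], v["cvss"], v["exploited"])
```

Note that under this ordering a 6.5-severity flaw that is being exploited in the wild outranks a 9.8 that is not, which is exactly the inversion of severity-first patching that Meyers recommends.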
Organizations should also explore how to protect against the attacks that will likely still manage to get through, he said.
“You need to expect that there’s going to be zero days out there that you aren’t able to get in front of,” Meyers said. “What this means for those organizations is that they really need to focus on doing security well and having the visibility across their environment—so that even if a zero day is used, that they can catch it quickly and take some sort of action to prevent that adversary from being successful.”
Up-Leveling Security Maturity
All in all, the expansion of autonomous, AI-driven cyberattacks is going to put the security maturity of teams at countless organizations to the test, experts told CRN.
The reality is that while organizations must go through various processes and make investments to change up their cyber defense approach, attackers “don’t have to deal with corporate politics,” said Bryan Sacks, field CISO at New York-based Myriad360, No. 110 on CRN’s Solution Provider 500 for 2025.
“They’re just like, ‘Alright, I have this tool now. I’m going to use it,’” Sacks said.
At the same time, thanks to the availability of GenAI-powered tools and AI agents for defenders, a lot of security teams that have been operating on a “junior varsity” level “can elevate themselves to varsity very quickly,” he said.
Time To ‘Lean In’ On Automation
Ultimately, while the sky is certainly not falling, security teams should be taking the threat of autonomous attacks extremely seriously, according to Noma Security’s Kelley.
“The defenders, I think, really need to lean in to doing more with automation in order to strengthen our defenses,” she said.
And in reality, most security teams will not actually have a choice when it comes to whether to adopt newer AI and agentic capabilities as the autonomous threat landscape intensifies, Kelley said.
“Just going with the traditional defenses isn’t going to be enough, because the attackers are not using the traditional offense,” she said. “That’s not going to be able to keep up.”