GenAI Is A Hit With Hackers. Here’s Why It Will ‘Benefit The Defense’ Even More.

While ChatGPT has made life easier for attackers, over the longer term it’s likely that generative AI ‘will benefit the defense more than it will the offense,’ a top cyberthreat expert tells CRN.

While threat actors are getting a boost from ChatGPT and other generative AI tools, many cybersecurity experts believe cyber defense teams stand to gain even more from GenAI over the longer term — thanks to an abundance of promising uses for the technology that are expected to emerge over time.

A flood of GenAI-powered cybersecurity tools has been introduced in 2023. According to vendors including Microsoft, CrowdStrike, SentinelOne and many others leveraging generative AI, these tools will soon enable security teams and MSSPs to uncover known security issues faster and more efficiently.

[Related: 10 Emerging Cybersecurity Threats And Hacker Tactics In 2023]

On the other side, hackers are already using OpenAI’s ChatGPT to craft more convincing phishing emails, including by polishing the grammar of non-native English speakers.

All of this, of course, is just the beginning. But in looking at ChatGPT and the wave of GenAI technologies that has followed it, and extrapolating out into the future, which side benefits more? Cybercrime or cyberdefense?

CRN has posed this question to a variety of cybersecurity and threat experts in recent months. Many believe that in the short term, the arrival of GenAI seems to be a bigger win for the threat actors.

“Certainly the attackers have a big advantage at the moment,” said Dave DeWalt, a security industry luminary who’s now founder and CEO of venture firm NightDragon. “If you look at all the attack vectors on these generative AI platforms — I mean, it’s just too easy.”

On the other hand, according to Accenture’s Robert Boyce, “I don’t think we’ve even started to think about what the possibilities are” over the longer term. The ultimate potential for GenAI in cybersecurity, he said, is to not just do a better job at finding known issues — but to actually uncover the lurking, high-risk issues that no one knows about yet.

For CRN’s Cybersecurity Week 2023, we’ve collected five predictions for how GenAI could enable cyber defense to stop hackers in the future.

Discovering Zero Days

The idea of using large language models (LLMs) to find common software vulnerabilities was quickly seized upon by security vendors. Vulnerability management platform provider Tenable, for instance, noted in an April report that it had already built LLM-powered tools to more quickly identify vulnerabilities in applications.
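
To make the pattern concrete, here is a minimal Python sketch of the general approach — not Tenable’s actual tooling. The model name and prompt are illustrative, and it assumes OpenAI’s Python SDK with an API key set in the environment.

```python
# Minimal sketch (not any vendor's actual tooling): hand a general-purpose
# LLM a code snippet and ask it to flag likely vulnerabilities.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

SNIPPET = '''
def get_user(conn, username):
    cur = conn.cursor()
    cur.execute("SELECT * FROM users WHERE name = '%s'" % username)
    return cur.fetchone()
'''

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model choice
    messages=[
        {"role": "system",
         "content": "You are a code auditor. List probable vulnerabilities "
                    "with CWE IDs and a one-line fix for each."},
        {"role": "user", "content": SNIPPET},
    ],
)
print(response.choices[0].message.content)  # should flag CWE-89 (SQL injection)
```

A production scanner would chunk entire repositories and cross-check findings against static analysis, but the core loop is this simple.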

But discovering previously unknown, zero-day vulnerabilities is another story. There’s a reason why zero days — responsible for enabling so many major breaches and ransomware attacks — can fetch as much as $10 million.

Still, the idea that GenAI technology will eventually be capable of discovering zero-day vulnerabilities in software does not appear far-fetched. With the help of LLMs, it’s feasible that zero days could someday be eliminated from software before that software is ever shipped, according to Michael Sikorski, CTO and vice president of engineering at Palo Alto Networks’ Unit 42 division.

In this potential scenario, “the developers can then leverage that [capability] ahead of time, before they ship their code — find those [zero days], plug them up, before they ship,” Sikorski said.

Preempting the release of zero days? That would be monumental for the cyber defense side. And so, based on this potential use of GenAI, Sikorski said “there’s an argument that I’ve started to believe in — that actually this technology will benefit the defense more than it will the offense.”

Widening The Talent Pool

Speaking of seemingly intractable issues in cybersecurity, here’s another one that generative AI might be able to help with: the massive talent shortage.

Estimates vary on how many unfilled jobs there are in cybersecurity, but it is universally agreed that the cybersecurity talent pool needs to expand.

With GenAI capabilities, however, there’s strong potential to reduce the technical barrier to entry for many cybersecurity roles, according to Boyce, a managing director and global lead for cyber resilience services at Accenture.

Generative AI could “lessen the requirement that they need to be so technical,” he said. Among other things, the steep technical requirement for many roles in the field is “what frightens people away a lot of times,” Boyce said.

As just one example, to be able to create threat detection rules in Splunk, a person would need to learn Splunk’s search-command language, SPL. With the help of GenAI, however, this significant undertaking may no longer be necessary for creating and applying detection rules in Splunk, according to Boyce.

“Now we can say to GenAI, ‘Create me a detection rule in Splunk for this [threat].’ And it will create the detection rule. And it will apply the detection rule,” he said. At Accenture, “we have tried this — and it’s very, very accurate.”
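
As a rough sketch of the workflow Boyce describes (his team’s actual setup isn’t public), the loop might look like this in Python, assuming OpenAI’s SDK and Splunk’s splunk-sdk library. The host, credentials, index and sourcetype names are hypothetical placeholders.

```python
# Hypothetical sketch: ask an LLM to write an SPL detection rule, then
# register it in Splunk as a saved search via the splunk-sdk library.
from openai import OpenAI
import splunklib.client as splunk_client

llm = OpenAI()

prompt = (
    "Write a Splunk SPL search that detects more than 5 failed logins "
    "from one source IP within 10 minutes in index=auth "
    "sourcetype=linux_secure. Return only the SPL."
)
spl = llm.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model choice
    messages=[{"role": "user", "content": prompt}],
).choices[0].message.content.strip()

# Register the generated rule as a saved search (placeholder credentials).
service = splunk_client.connect(
    host="splunk.example.com", port=8089,
    username="admin", password="changeme",
)
service.saved_searches.create("genai_failed_login_detection", spl)
```

In practice a human would review the generated SPL before it goes live, which is why the rule lands as a saved search rather than being fired blindly.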

For the record, Boyce said he’s not knocking Splunk and that this is just one of many potential ways GenAI could lower technical barriers to working in cybersecurity. Splunk itself also seems to be on the same page: In July — prior to Cisco’s $28 billion acquisition deal for the company — Splunk released a GenAI-powered tool that helps users work with SPL through natural language.

Improving Mental Health In The Field

The high stress level associated with working in many cybersecurity roles is well-documented: 20 percent of cybersecurity professionals reported plans to leave the field within two years, according to a Trellix survey last year. Meanwhile, an even higher share of cybersecurity leaders, 25 percent, plan to do the same by 2025, Gartner has found.

Security Operations Centers (SOCs) can be a mentally taxing environment, making it crucial for organizations to “focus heavily on a SOC analyst’s mental health,” said Jordan Hildebrand, practice director for detection and response at World Wide Technology, No. 9 on CRN’s Solution Provider 500.

But while GenAI-powered tools for SOCs have been widely touted for using automation to boost SOC analysts’ productivity and efficiency, there’s another angle to consider: the possibility that the tools could “give them a better life,” Hildebrand said.

After seeing a demonstration of CrowdStrike’s Charlotte AI in September, for instance, Hildebrand said he sees the potential for the tool to remove some of the mental stress and strain associated with jobs in a SOC. The GenAI-powered assistant has the potential not just to present analysts with “more” data, but also with the “right data,” he said.

And given how difficult it can be for SOC analysts to find what they’re looking for — particularly in a high-pressure situation such as a security incident — this could make a huge difference for their mental wellbeing.

Ultimately, the promise of GenAI tools such as Charlotte is that SOC analysts “are going to be able to control their own destinies more,” Hildebrand said.

Spreading Automation Even Further

While automation of routine tasks is one of the widely touted uses of generative AI in cybersecurity, the industry is only at the beginning of exploring the possibilities, experts said.

For instance, automating the collection and correlation of threat intelligence is another area where GenAI can likely be applied in the future, Boyce said.

Rather than having to read every threat intelligence report individually, security analysts might be able to use GenAI to pull and correlate the data across reports to rapidly get a unified picture, he said.

The data in most threat intelligence platforms is structured consistently, and so GenAI should be capable of doing this, Boyce noted.
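
A minimal sketch of that correlation step, assuming OpenAI’s Python SDK: in a real deployment the report texts would come from a threat intelligence platform’s API rather than inline strings.

```python
# Minimal sketch of the correlation idea: feed several threat intel
# summaries to an LLM and ask for one unified picture.
from openai import OpenAI

client = OpenAI()

reports = [  # stand-ins for reports pulled from a threat intel platform
    "Vendor A: APT41 exploiting CVE-2023-1234 against healthcare targets.",
    "Vendor B: new loader observed; C2 overlaps infrastructure tied to APT41.",
    "ISAC bulletin: spike in exploitation of CVE-2023-1234 at EU hospitals.",
]

summary = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model choice
    messages=[
        {"role": "system",
         "content": "Correlate these threat reports: name shared actors, "
                    "CVEs and infrastructure, and rank the combined risk."},
        {"role": "user", "content": "\n\n".join(reports)},
    ],
)
print(summary.choices[0].message.content)
```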

Thanks to GenAI advancements, security teams in the future will be able to have “all of what’s happening in the world at [their] fingertips,” said DeWalt, who was formerly CEO of prominent cybersecurity vendors including FireEye and McAfee.

The technology will enable security teams to “collect every piece of data to help them understand their situational awareness faster,” he said. Ultimately, “SOC automation looks highly, highly disruptive.”

For vulnerability management, it’s probable that GenAI will be able to help prioritize which patches need to be deployed first — ultimately expediting the deployment of those fixes, according to Boyce.

Many breaches occur in environments where patches were available but simply weren’t deployed quickly enough. Amid fast-moving attacks such as Clop’s MOVEit campaign, the typical 30-day window for patching critical vulnerabilities is “too long,” he said.

If organizations can use GenAI to prioritize patches faster — and then automate the deployment of those fixes — that would make a “tangible difference,” Boyce said.
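
Here is a hedged sketch of what that prioritization step could look like in Python, again assuming OpenAI’s SDK. The environment context is invented, and any LLM-produced ranking would be sanity-checked against CVSS scores and CISA’s KEV catalog before driving automated deployment.

```python
# Hypothetical sketch: ask an LLM to rank open CVEs for a described
# environment; CVE-2023-34362 is the MOVEit Transfer flaw Clop exploited.
from openai import OpenAI

client = OpenAI()

open_cves = ["CVE-2023-34362 (MOVEit Transfer)", "CVE-2023-2868", "CVE-2022-1388"]
context = "Internet-facing MOVEit server; F5 BIG-IP is internal-only."

plan = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model choice
    messages=[
        {"role": "system",
         "content": "Order these CVEs by patching urgency for the described "
                    "environment and justify each ranking in one line."},
        {"role": "user", "content": f"Context: {context}\nCVEs: {open_cves}"},
    ],
)
print(plan.choices[0].message.content)
```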

Asking Better Questions

The bottom line for GenAI and cyberdefense is that, when used in concert with large and varied datasets, it has the potential to help with “predicting what we don’t know yet,” Boyce said.

The result is that defenders should ultimately be enabled to “find things that we haven’t even thought about — these ‘unknown unknowns’ that we’ve been talking about for years, but that we’ve never been able to figure out how to [find],” he said.

“I don’t think we’re asking the right questions of our security data now, as a community. We don’t even know what to be asking,” Boyce said. “We’re just asking stuff that we already know. ‘Are these IOCs present in this dataset?’ Important — but not going to get us to a protection strategy that’s adding a really high level of confidence for your cyber resilience.”
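
For contrast, the kind of question Boyce says the community already knows how to ask — an IOC presence check — takes only a few lines of ordinary Python. The field names and indicator list below are illustrative.

```python
# A plain IOC presence check over log events: the "question we already
# know to ask." Indicators and event fields are illustrative.
KNOWN_IOCS = {"45.77.12.9", "bad-domain.example"}

def iocs_present(events):
    """Return the subset of known IOCs that appear in a stream of events."""
    hits = set()
    for event in events:
        for value in event.values():
            if value in KNOWN_IOCS:
                hits.add(value)
    return hits

events = [
    {"src_ip": "10.0.0.5", "dest": "45.77.12.9"},
    {"src_ip": "10.0.0.7", "dest": "cdn.example"},
]
print(iocs_present(events))  # {'45.77.12.9'}
```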

Hildebrand said he also expects GenAI to prove a huge help in forming the starting hypothesis about a threat, thanks to the technology’s ability to sift through massive amounts of data and surface the context that matters.

In security operations, “one of the hardest things to do is hypothesize,” he said. “That’s what takes time.”

At least one forthcoming GenAI-powered security tool aims to include functionality of this type. SentinelOne’s Purple AI, which entered limited preview earlier this year, has the ability to propose questions that “you might not have thought of,” said SentinelOne Co-Founder and CEO Tomer Weingarten.

“By looking at your environment and knowing what’s there, it can actually suggest follow-up questions,” Weingarten told CRN. “So if you’re an analyst and you were asking about a specific operation — maybe APT39 — but there’s no activity of that in your environment, it might [suggest], ‘Maybe ask about APT41, because I might be seeing some indicators there.’”
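
Purple AI’s actual interface hasn’t been detailed publicly, but the general pattern Weingarten describes can be sketched with any LLM API. Everything below, from the model to the telemetry summary, is an invented stand-in.

```python
# Hypothetical sketch of the pattern Weingarten describes: given the
# analyst's query and a summary of what telemetry actually shows, have
# an LLM suggest better follow-up threat hunts.
from openai import OpenAI

client = OpenAI()

analyst_query = "Any APT39 activity in our environment?"
telemetry_summary = (
    "No APT39 indicators. Two hosts show DLL side-loading plus "
    "Cobalt Strike beacons, a pattern often attributed to APT41."
)

suggestion = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model choice
    messages=[
        {"role": "system",
         "content": "You assist a SOC analyst. Based on the telemetry, "
                    "suggest better follow-up threat-hunting questions."},
        {"role": "user",
         "content": f"Analyst asked: {analyst_query}\nTelemetry: {telemetry_summary}"},
    ],
)
print(suggestion.choices[0].message.content)
```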

Ultimately, according to Boyce, GenAI could help security teams truly achieve their full potential for the first time.

The capabilities should enable “more of the true analyst model — of being able to ask the right questions of the information,” he said, “and have the machines do the machine work.”