How Solution Providers Are Fighting AI With AI

The buzz around generative AI is getting louder as solution providers address the risks and advantages it poses to their customers, and as vendors step up by adding GenAI-powered capabilities to their cyber defense tools.

While a wide array of cybersecurity challenges and opportunities has emerged around generative AI in the less than a year since the public release of ChatGPT, there’s much more to come, security experts and executives told CRN.

For solution providers, GenAI brings massive implications. Helping customers safely use apps such as OpenAI’s ChatGPT has stood out as one of the most immediate issues, but the use of these technologies by hackers also poses a heightened threat that solution providers should be proactively addressing, experts said. Meanwhile, GenAI is being widely leveraged by industry vendors to enhance the way security teams use cyber defense tools, with the goal of improving productivity and enabling faster responses to threats.

Ultimately, cybersecurity is “a game of speed where you want to get people to make a decision faster,” said Ian McShane, vice president of strategy at Eden Prairie, Minn.-based cybersecurity vendor Arctic Wolf. And GenAI can accelerate decision-making by providing greater context around a potential threat, giving security teams the confidence to decide whether it can be ignored or deserves further investigation, he said.

[Related: 10 Emerging Cybersecurity Threats And Hacker Tactics In 2023]

With the help of GenAI, “getting to that decision point with the right context is what’s going to make a difference,” McShane said. But all of this is just the prelude: Solution providers can expect GenAI technologies to be a source of innovation and disruption for years to come.

“There’s a lot of buzz, and the buzz is growing. But I also think that things are moving forward,” said Mike Heller, senior director for managed security services at Phoenix-based solution provider Kudelski Security. “I think the fact that there will be an impact from [generative] AI to our market is clear.”

The exact nature of that impact is still taking shape, however. When it comes to enabling secure usage of ChatGPT and other GenAI applications, many solution providers are already working to advise customers, even as numerous security vendors release tools aimed at protecting sensitive data amid the rise of the technology.

Without a doubt, GenAI “can be a security risk if it’s not managed properly,” said Atul Bhagat, president and CEO of BASE Solutions, a Vienna, Va.-based MSP.

“I think one of our responsibilities as MSPs is we have to get ahead of it and have those conversations early on with our clients about how to use AI safely and correctly. We’ve heard about mistakes and horror stories,” Bhagat said. “But overall, I think generative AI is the future, and we’re seeing in a lot of organizations they’re trying to find ways to use it to their advantage.”

Deploying data security technologies to help prevent the disclosure of intellectual property or sensitive data into GenAI apps is one potential approach. In recent months, a number of vendors have released tools to help enable safe usage of GenAI platforms such as ChatGPT.

For instance, Zscaler has updated its data loss prevention (DLP) product to thwart potential leakage of data into GenAI apps. That has included implementing new data loss policies and security filtering policies, said Deepen Desai, global CISO and head of security research at the San Jose, Calif.-based cybersecurity vendor.

Meanwhile, Zscaler said that its recently introduced Multi-Modal DLP capability can prevent data leakage across not only text and images, but also audio and video.

Overall, the goal is to “allow our customers to securely embrace generative AI without leaking data, without hitting malicious versions of the chatbots,” Desai said. “The risk definitely exists with this technology if it’s not being embraced in the right way.”
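Zscaler hasn’t published the specifics of those policies, but the basic shape of prompt-level DLP is easy to illustrate. Below is a minimal, hypothetical sketch in Python: outbound text is scanned against sensitive-data patterns before it is allowed to reach a GenAI app. The patterns and the `check_prompt` helper are illustrative assumptions, not Zscaler’s implementation.

```python
import re

# Illustrative DLP patterns; a real deployment would rely on a vendor's
# maintained dictionaries, exact-data matching and ML classifiers.
DLP_PATTERNS = {
    "credit_card": re.compile(r"\b(?:\d[ -]*?){13,16}\b"),
    "aws_access_key": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def check_prompt(text: str) -> list[str]:
    """Return the names of any DLP policies the prompt violates."""
    return [name for name, pattern in DLP_PATTERNS.items() if pattern.search(text)]

prompt = "Summarize this: our AWS key is AKIAIOSFODNN7EXAMPLE"
violations = check_prompt(prompt)
if violations:
    # Block or redact before the prompt ever leaves the network.
    print(f"Blocked: prompt matches DLP policies {violations}")
else:
    print("Prompt allowed")
```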

Another well-known security risk posed by GenAI technology is the boost it can give to malicious actors, such as hackers using ChatGPT to craft more convincing phishing emails.

This summer, security researchers also identified GenAI-powered chatbots that are specifically intended for use by hackers—including WormGPT, which was disclosed by researchers at SlashNext, and FraudGPT, which was uncovered by Netenrich researchers.

But even ChatGPT itself can provide a significant aid to malicious actors, such as by improving grammar for non-native English speakers, researchers have noted.

Existing guardrails don’t prevent ChatGPT from serving up emails that could be exploited for social engineering, such as an email from an “uncle” you haven’t talked to in years, said Mike Parkin, senior technical marketing engineer at Tel Aviv, Israel-based vulnerability management vendor Vulcan Cyber.

“The guardrails are there, but if I’m at all clever, I can get around those guardrails,” Parkin said.

In response, a number of security vendors have released tools that can help combat the threat of GenAI-powered email attacks. San Francisco-based Abnormal Security’s CheckGPT tool focuses on detecting attacks created using large language models, tapping into multiple open-source LLMs to determine the likelihood that an email message was created with the help of GenAI.
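Abnormal hasn’t detailed CheckGPT’s internals, but one common building block for detectors of this kind is perplexity: open-source LLMs tend to find machine-generated text more statistically predictable than human writing. A rough sketch using Hugging Face’s transformers library, with GPT-2 standing in for the ensemble of open-source models (the threshold below is an illustrative assumption, not Abnormal’s):

```python
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def perplexity(text: str) -> float:
    """Lower perplexity means the model finds the text more predictable,
    which roughly correlates with machine-generated prose."""
    enc = tokenizer(text, return_tensors="pt", truncation=True, max_length=1024)
    with torch.no_grad():
        out = model(enc.input_ids, labels=enc.input_ids)
    return torch.exp(out.loss).item()

email_body = "Dear customer, your account requires immediate verification..."
score = perplexity(email_body)
# The cutoff is purely illustrative; production systems combine many signals.
print(f"perplexity={score:.1f} -> {'likely AI-assisted' if score < 40 else 'inconclusive'}")
```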

Phishing detection vendor SlashNext, meanwhile, offers capabilities powered by its own GenAI technology, aimed at blocking email-based attacks that are created by ChatGPT and other GenAI apps.

At a time when malicious actors clearly have much to gain from apps such as ChatGPT, the team at Pleasanton, Calif.-based SlashNext believes that “you have to fight AI with AI,” said CEO Patrick Harr.

In the cybersecurity industry as a whole, vendors have been moving aggressively to add GenAI-powered capabilities into their tools for cyber defense.

Among the first was SentinelOne, which debuted a GenAI-powered threat hunting tool, dubbed Purple AI, in April. The tool provides the ability to use natural language to query a system, offering a massive time savings to analysts and allowing security teams to respond to more alerts and catch more attacks, according to the Mountain View, Calif.-based company.

Purple AI essentially “takes any entry-level analyst and makes them a ‘super analyst,’” SentinelOne co-founder and CEO Tomer Weingarten told CRN.
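SentinelOne hasn’t detailed how Purple AI works under the hood, but the pattern it describes, translating an analyst’s plain-English question into a structured telemetry query, is easy to sketch. Everything below is a hypothetical stand-in: `llm_complete` represents any chat-completion API, and the query syntax is invented for illustration, not SentinelOne’s actual query language.

```python
# Hypothetical sketch of LLM-backed natural-language threat hunting.
# llm_complete() stands in for any chat-completion API, and the query
# syntax is invented for illustration.

SYSTEM_PROMPT = (
    "Translate the analyst's question into a telemetry query. "
    "Available fields: event_type, process_name, src_ip, timestamp. "
    "Output only the query."
)

def llm_complete(system: str, user: str) -> str:
    # Wire up a real LLM provider here; for the demo, return a
    # canned translation of the question below.
    return ('event_type = "process_start" AND process_name = "powershell.exe" '
            'AND src_ip NOT IN internal_ranges AND timestamp > now() - 12h')

def hunt(question: str) -> str:
    """Turn a plain-English question into a query an analyst can run."""
    return llm_complete(SYSTEM_PROMPT, question)

print(hunt("Were there PowerShell executions from external IPs last night?"))
```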

In May, CrowdStrike unveiled Charlotte AI, which the Austin, Texas-based company calls a “generative AI security analyst” that can dramatically boost productivity and effectiveness for Security Operations Center (SOC) teams. “The way we think about this technology is to help accelerate the decision-making and help present the information to the SOC analysts to be able to move very quickly,” CrowdStrike President Michael Sentonas told CRN.

Looking ahead, Santa Clara, Calif.-based Palo Alto Networks aims to use GenAI in a way that will “solve the real hard problems,” ultimately going far beyond “superficial” applications for security, Chief Product Officer Lee Klarich told CRN. “What we’re doing is looking at what are those hard problems we want to go solve? How do we architecturally approach that and leverage these new AI technologies to help us get there?” Klarich said. “It will be different than just a little box in the corner of the UI [user interface] where you can type in something, and you get a [response].”

There’s no question that many implications of GenAI for security still have yet to be revealed, experts told CRN.

Utilization of threat intelligence and management of vulnerabilities are just two examples of areas that should see profound improvements from using GenAI, according to Robert Boyce, a managing director and global lead for cyber resilience services at Dublin, Ireland-based Accenture.

With threat intelligence, rather than having to read every report individually, security analysts in the future will likely be able to pull and correlate the data across reports to rapidly get a unified picture, Boyce said.
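The correlation step Boyce describes can be sketched in miniature: extract indicators of compromise from each report, then pivot on the indicators that recur. In a real GenAI workflow an LLM would parse the unstructured prose; the regexes and sample reports below are simplified, illustrative assumptions:

```python
import re
from collections import defaultdict

# Simplified IOC extractors; real pipelines use far richer parsing.
IOC_PATTERNS = {
    "ipv4": re.compile(r"\b(?:\d{1,3}\.){3}\d{1,3}\b"),
    "sha256": re.compile(r"\b[a-fA-F0-9]{64}\b"),
    "domain": re.compile(r"\b[a-z0-9-]+\.(?:com|net|org|io)\b"),
}

def correlate(reports: dict[str, str]) -> dict[str, set[str]]:
    """Map each indicator to the set of reports that mention it."""
    seen = defaultdict(set)
    for report_name, text in reports.items():
        for ioc_type, pattern in IOC_PATTERNS.items():
            for match in pattern.findall(text):
                seen[f"{ioc_type}:{match}"].add(report_name)
    # Indicators appearing in more than one report are pivot points.
    return {ioc: names for ioc, names in seen.items() if len(names) > 1}

reports = {
    "vendor_a": "C2 traffic observed to 203.0.113.7 and evil-cdn.com ...",
    "vendor_b": "The implant beacons to 203.0.113.7 over HTTPS ...",
}
print(correlate(reports))  # {'ipv4:203.0.113.7': {'vendor_a', 'vendor_b'}}
```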

For vulnerability management, meanwhile, it’s probable that GenAI will be able to help prioritize which patches need to be deployed first, and potentially even automate the deployment of those fixes, he said.
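A crude version of that prioritization can be expressed as a weighted risk score, with GenAI’s role being to fill in inputs such as exploit availability from unstructured advisories. The fields and weights below are illustrative assumptions, not an industry standard:

```python
from dataclasses import dataclass

@dataclass
class Vuln:
    cve: str
    cvss: float             # 0-10 base severity
    exploit_public: bool    # known exploit code in the wild?
    asset_criticality: int  # 1 (lab box) .. 5 (crown jewels)

def risk_score(v: Vuln) -> float:
    # Illustrative weighting; real programs combine signals such as
    # EPSS probabilities with asset and business context.
    return v.cvss * (2.0 if v.exploit_public else 1.0) * v.asset_criticality

backlog = [
    Vuln("CVE-2023-0001", 9.8, False, 2),
    Vuln("CVE-2023-0002", 7.5, True, 5),
]
# The lower-CVSS bug patches first because it is exploited and sits
# on a critical asset.
for v in sorted(backlog, key=risk_score, reverse=True):
    print(f"{v.cve}: score={risk_score(v):.1f}")
```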

The use of GenAI for automating more of these actions in security “is something that I’m really interested in,” Boyce said. In a world where attackers always seem to have the advantage, “that would make a tangible difference.”