SlashNext Unveils Its Own Generative AI To Thwart ChatGPT-Powered Email Attacks
The new offering — the first cybersecurity product to utilize independently developed generative AI — aims to help partners and customers ‘fight AI with AI,’ CEO Patrick Harr tells CRN.
Cybersecurity firm SlashNext is debuting new capabilities powered by generative AI technology that it developed in-house, aimed at blocking email-based attacks that are created by similar technology such as OpenAI’s widely used ChatGPT application.
The potential for ChatGPT to aid cyberattacks such as phishing and social engineering scams has been widely demonstrated, thanks to its ability to mimic human writing while generating content tailored to a user's specific request. SlashNext, an AI-focused messaging security platform, has spent the past two years developing its own large language model akin to OpenAI’s GPT-3, which is the underlying technology for ChatGPT.
SlashNext is now making the first offering powered by the large language model available to customers and partners — and the company believes it to be the first cybersecurity product of any kind to utilize independently developed generative AI, according to CEO Patrick Harr.
At a time when malicious actors clearly have much to gain from generative artificial intelligence, the team at SlashNext believes that “you have to fight AI with AI,” Harr told CRN.
Launched to the public in late November, ChatGPT is by some estimates the fastest-growing application ever, with its ability to adeptly produce writing and software code based on user instructions. Numerous security researchers, however, have found that the AI chatbot is also skilled at generating content that’s helpful to hackers.
For phishing and social engineering messages, for instance, a simple prompt asking ChatGPT to generate an email to a superior at work or to a family member could create a convincing message that an attacker could use maliciously. Since such messages could also be used for benign purposes, though, it would appear much harder for the tool to put up guardrails preventing such uses.
“The ChatGPTs of the world have dramatically lowered the barriers to entry to become a threat actor,” Harr said. “It wasn’t that high to begin with. But now it is extremely low.”
What the industry ultimately must do is put countermeasures in place that utilize the same technology to tip the scales back and remove the advantage from the attackers, he said.
“We have to shift from a reactive approach to a proactive approach — from a static, more-supervised AI model to a generative AI model — which is exactly what we’re doing here to solve and stop these threats,” Harr said.
Blocking AI-Driven Email Threats
Specifically, the new SlashNext offering is focused on thwarting a type of email impersonation attack known as business email compromise (BEC). The scams typically target executives or other employees of a company, and involve an attacker pretending to be a colleague who is requesting a transfer of funds. Often, the attacks also utilize a compromised email account to add further legitimacy. BEC attacks were responsible for $43 billion in losses from mid-2016 to 2021, according to the FBI.
The new generative AI product from SlashNext works by leveraging the company’s large language model algorithm to proactively anticipate potential BEC threats created using generative AI. The offering automatically generates thousands of new variants of BEC attacks — and then consults that information when deciding which incoming emails to block on behalf of users.
The new variants are “multiple different iterations of the same way to say that threat” that has previously been detected, Harr said. SlashNext refers to the capability as “cloning” — and in another nod to sci-fi, he compares the tactic to the movie “Minority Report” in that it’s “effectively predicting the threat before it happens.”
“We want to make sure that we’re removing these threats before they get to the user,” Harr said. “It’s happening transparently in the background by looking at the [likely] augmentations of those particular threats. So it anticipates what that [next threat] is, and will pull it out before it infects that user.”
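The "cloning" approach Harr describes — pre-generating likely variants of a known threat and screening incoming mail against them — can be illustrated with a minimal sketch. This is not SlashNext's actual implementation; the substitution lists stand in for what an LLM would produce, and the names, threshold, and similarity measure are all assumptions for illustration.

```python
# Illustrative sketch only -- not SlashNext's implementation.
# Idea: expand a known BEC message into many hypothetical paraphrase
# variants ("cloning"), then flag incoming mail that closely matches any variant.
from difflib import SequenceMatcher
from itertools import product

# Hypothetical substitution sets standing in for LLM-generated paraphrases
# of a previously detected BEC lure.
OPENERS = ["please", "kindly", "i need you to"]
ACTIONS = ["wire the funds", "send the payment", "process the transfer"]
URGENCY = ["today", "immediately", "before close of business"]

def clone_variants():
    """Generate paraphrase variants of the known threat message."""
    return [f"{o} {a} {u}" for o, a, u in product(OPENERS, ACTIONS, URGENCY)]

def is_suspicious(email_body: str, threshold: float = 0.8) -> bool:
    """Flag the email if it closely matches any pre-generated variant."""
    body = email_body.lower().strip()
    return any(
        SequenceMatcher(None, body, variant).ratio() >= threshold
        for variant in clone_variants()
    )

print(is_suspicious("Kindly send the payment immediately"))  # matches a variant
print(is_suspicious("Lunch at noon tomorrow?"))              # benign
```

A production system would generate the variants with a large language model rather than fixed word lists, and would use semantic embeddings rather than string similarity — but the workflow is the same: anticipate the variations first, then match against them transparently in the background.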
The existing SlashNext security product has already been a hit with customers, and the new generative AI capabilities will likely make it even more so, according to Bill Rubin, president and CEO of Connect IT Solutions, a SlashNext partner based in San Mateo, Calif.
Connect IT Solutions focuses on reselling technology to Fortune 1000 companies, and from his vantage point in Silicon Valley, Rubin says that a major part of his role is to be continually looking at new products that will help his customers to stay ahead of the curve technologically. In cybersecurity, that has meant pinpointing “the next vector we have to protect against” and then finding the tools to do that, he said.
With the new generative AI offering from Pleasanton, Calif.-based SlashNext, “I’m pretty excited about the opportunity with a leading-edge product like this,” Rubin said. While some other security vendors have leveraged third-party generative AI systems such as GPT-3, SlashNext stands out by developing its own technology in the space, he said.
“I call it ‘leapfrog technology’ — meaning that we’ve leapfrogged the last generation and we’re on to the next one,” Rubin said. With the new generative AI-based approach from SlashNext, he expects to be advising customers that “you should be strategically looking at this, rather than continuing to band-aid the solutions that are currently being put into use.”
SlashNext has been able to get to market quickly with a generative AI security product because it had already been working on it well before ChatGPT debuted, according to Harr. The company initiated the development of its own large language model two years ago in anticipation of the likelihood that generative AI technology would become more widely available in the future — including to attackers, he said.
The generative AI offering will be available in March as part of SlashNext’s enterprise-tier product.
The technology builds on SlashNext’s prior development of machine learning classifiers used to detect link-based and natural language-based threats, an effort that began nearly six years ago, Harr said.
“Over that period, we’ve developed an extremely large dataset that we’ve been training our models on,” he said. “That’s effectively given us an unfair advantage as we’ve developed this new generative AI.”