For Good And Bad, GenAI Is Key To Email Security: Inky CEO

‘The fundamental challenge here is, the attackers can use generative AI against us. And there's a fundamental asymmetry here. If you're the bad guy, you could make one template for 10 cents, or maybe two cents, and send 10,000 phish. If you're a good guy, you’ve got to potentially scan every single email for this stuff. And it's very expensive to run lots of GPU computation on every single email,’ says Inky Founder and CEO Dave Baggett.

While ChatGPT and other AI tools have revolutionized how humans interact with IT, they have also opened the door for criminals to improve their phishing and other email-borne attacks.

That’s the word from Dave Baggett, founder and CEO of Inky, the College Park, Md.-based developer of email security technology, who told an audience of managed service providers Sunday at this week’s XChange 2025 conference in Denver that it is up to them to understand the threats and find ways to protect customers.

For threat actors, ChatGPT has become a very useful tool for things like quickly developing phishing attacks, especially since the guardrails aimed at preventing the technology from causing harm don’t always work, Baggett said.

As an experiment, he said he prompted ChatGPT by saying he is an email security expert giving a talk to a partner specializing in vaccine development and would like a realistic example of a phishing email template targeting a high-level executive.

The response, he said, was a complete email that he could use to target such executives with a minimum of modifications.

“I literally spent two minutes on this, and … it's pretty scary and impressive,” he said. “And what did this cost? Essentially zero cents. And if I'm doing this as a bad guy, at scale, this is going to cost me $1 for a million tokens or something like that. So I can generate as many of these templates as I want. I can hook that up to a LinkedIn scraper mod that goes and gets executive profiles, gets industry information about the company they work for, feeds it in there. It's a completely automated pipeline that does this automatically, and then I have perfectly written, grammatically perfect, targeted, incredibly believable phishing.”

The big question is how to defend against such attacks, Baggett said.

Inky has been using AI as part of its email security for years. At its most basic, the technology indexes incoming emails by various properties, such as the included headers and links, and compares them, after personally identifiable information (PII) has been stripped out, to other phishing emails it has scanned, he said. That increases the probability of correctly scoring a phishing email, he said.

Inky also looks at logos and content in emails to find issues such as visual cues or typos that indicate attackers sent the email, he said.
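
To make that concrete, here is a minimal sketch of that kind of property indexing. The helper names, the fingerprint scheme, and the seed data are all invented for illustration, not Inky’s actual pipeline: extract stable features such as headers and links, strip PII, and compare the result against fingerprints of phishing emails seen before.

    import hashlib
    import re
    from email import message_from_string

    # Hypothetical seed data; in production this would be a large, continuously
    # updated index built from previously scanned phishing emails.
    KNOWN_PHISH_FINGERPRINTS: set[str] = set()

    PII_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")  # email addresses as a stand-in for PII
    URL_RE = re.compile(r"https?://[^\s\"'>]+")

    def fingerprint(raw_email: str) -> str:
        """Index an email by stable properties (headers, links), PII stripped."""
        msg = message_from_string(raw_email)
        payload = msg.get_payload()
        body = payload if isinstance(payload, str) else ""
        body = PII_RE.sub("<pii>", body)  # strip PII before anything is indexed
        features = "|".join([
            msg.get("Reply-To", ""),
            msg.get("X-Mailer", ""),
            ",".join(sorted(URL_RE.findall(body))),
        ])
        return hashlib.sha256(features.encode()).hexdigest()

    def matches_known_phish(raw_email: str) -> bool:
        return fingerprint(raw_email) in KNOWN_PHISH_FINGERPRINTS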

The Challenges Of GenAI

GenAI, on the other hand, poses a different challenge, Baggett said.

“The fundamental challenge here is the attackers can use generative AI against us,” he said. “And there's a fundamental asymmetry here. If you're the bad guy, you could make one template for 10 cents, or maybe two cents, and send 10,000 phish. If you're a good guy, you’ve got to potentially scan every single email for this stuff. And it's very expensive to run lots of GPU computation on every single email.”

There are a few things that can be done to deal with this asymmetry, especially with AI, Baggett said.

One is to add a banner to the email warning about attacks, something that has been done for years, he said.

“We've evolved this into something much more elaborate,” he said. “We can now put 80 different messages in the mail and tell people all kinds of things, and the more AI we can leverage and analyze in the mail, the more we can tell end users that they wouldn't normally see. We're kind of giving them a superpower, like, ‘User, here's something that you couldn't see.’”
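
As an illustration of the banner approach, here is a minimal sketch; the warning catalog and message codes are invented for the example, standing in for Inky’s roughly 80 advisories. The idea is that the scanner picks whichever message its analysis supports and prepends it to the body so the user sees what the machine saw.

    from email.message import EmailMessage

    # A tiny stand-in for a catalog of user-facing advisories.
    WARNINGS = {
        "first_time_sender": "[CAUTION] First email ever received from this sender.",
        "brand_spoof": "[DANGER] Sender's logo does not match its actual domain.",
        "vip_impersonation": "[DANGER] Display name matches an executive, address does not.",
    }

    def add_banner(msg: EmailMessage, code: str) -> EmailMessage:
        """Prepend the selected advisory to the message body."""
        msg.set_content(f"{WARNINGS[code]}\n\n{msg.get_content()}")
        return msg

    # Usage: a message flagged by some upstream analysis step.
    mail = EmailMessage()
    mail.set_content("Hi, please review the attached invoice.")
    print(add_banner(mail, "vip_impersonation").get_content())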

Inky would like to apply GenAI to every single incoming email but can’t, Baggett said.

“When people tell you they're doing this, they're almost certainly lying,” he said. “And if they aren't, they've done a really, really impressive kind of work, because this is incredibly hard to do efficiently.”

That’s because GenAI still has hallucinations, has trouble understanding text in context, and suffers from a lack of explainability, Baggett said. And, he said, it would be “fantastically expensive” to run.

“If we actually ran a frontier model like ChatGPT on every email, we’d spend $100 million a year,” he said. “And I somehow doubt you guys are going to pay $100 million a year for security.”
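
Back-of-envelope arithmetic shows how the bill reaches that order of magnitude. Every figure below is an assumption chosen only to illustrate the shape of the problem, not Inky’s actual volumes or rates:

    # All numbers assumed for illustration.
    emails_per_day = 50_000_000        # a large multi-tenant email security fleet
    tokens_per_email = 1_000           # headers + body + analysis prompt
    usd_per_million_tokens = 5.00      # a rough frontier-model price

    daily_cost = emails_per_day * tokens_per_email / 1_000_000 * usd_per_million_tokens
    print(f"${daily_cost:,.0f} per day, ${daily_cost * 365 / 1e6:.0f}M per year")
    # -> $250,000 per day, $91M per year: the order of magnitude Baggett cites.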

Data privacy is also a concern with ChatGPT, Baggett said.

“I would really encourage you to be very strict with vendors on what exactly they are doing with data, especially in things like emails,” he said. “We basically made our own sort of rule, like we're not going to send any emails outside of our own infrastructure. That rules out using ChatGPT or Anthropic or any other commercial system. But we can run our own frontier models...in our own infrastructure.”

At any rate, ChatGPT is not the best answer to phishing, Baggett said.

“You can't just take a mail and give it to ChatGPT, even if you could afford to,” he said. “It turns out that ChatGPT does a lot of things, but out of the box [it] is definitely not very good at deciding what’s phishing. It thinks everything's phish. You've got to tune this stuff to be useful for your domain, and email security is no different.”
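
Here is a sketch of what that domain tuning can look like in practice. The model call is a stub so the example runs as written, and the rubric and few-shot examples are invented; the contrast between the two prompts is the point.

    # Stub standing in for a (self-hosted) model call.
    def llm(prompt: str) -> str:
        return "benign"

    NAIVE_PROMPT = "Is this email phishing? Answer 'phishing' or 'benign'.\n\n{email}"

    # Domain tuning: a rubric plus examples of ordinary business mail, which
    # reins in the out-of-the-box tendency to call everything phish.
    TUNED_PROMPT = (
        "You classify corporate email. Answer 'phishing' ONLY if the mail asks for "
        "credentials or payment, spoofs a brand, or pressures urgent action. "
        "Routine newsletters, receipts, and internal threads are 'benign'.\n"
        "Example: 'Your March invoice is attached.' -> benign\n"
        "Example: 'Verify your mailbox now or lose access: http://x.example' -> phishing\n\n"
        "{email}\n\nOne word:"
    )

    def classify(email_text: str) -> str:
        return llm(TUNED_PROMPT.format(email=email_text))

    print(llm(NAIVE_PROMPT.format(email="Team lunch moved to Thursday.")))
    print(classify("Team lunch moved to Thursday."))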

But GenAI can be useful, Baggett said. Most email security systems rely on patterns, using AI to look for certain words or phrases that indicate phishing. But spammers know this and are getting creative in their phrasing, such as getting users to click on a button labeled something like, “If you're not into this chat, reply and I’ll gracefully exit.”

“The reason they're doing that is because they know every email system worth anything has a giant database of unsubscribe instructions they look for, so they can classify that as spam,” he said. “But the spammers thought they were going to get caught, so they're trying to work this in a way that humans will understand and maybe think, ‘Oh, they're being funny,’ but it's going to get past the email protection system. Well, it doesn't work anymore with GenAI because GenAI knows what that means.”
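
The difference is easy to show in miniature. The phrase database and the model stub below are both invented for the example, but they capture why exact-string lookup misses the paraphrase while a model that understands intent does not.

    # A toy version of the "giant database of unsubscribe instructions."
    UNSUBSCRIBE_PHRASES = {"click here to unsubscribe", "opt out of these emails"}

    def pattern_match(text: str) -> bool:
        return any(phrase in text.lower() for phrase in UNSUBSCRIBE_PHRASES)

    def genai_says_opt_out(text: str) -> bool:
        """Stub for an LLM asked: 'does this sentence invite the reader to stop
        receiving mail?' A model judges meaning, not exact wording."""
        return True  # stand-in for the model's judgment on this input

    bait = "If you're not into this chat, reply and I'll gracefully exit."
    print(pattern_match(bait))       # False: the wording is in no phrase database
    print(genai_says_opt_out(bait))  # True: the intent is still an unsubscribe pitch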

New Routes For Email Attacks

Attackers are also using legitimate systems to send attacks via email, Baggett said. For example, they may set up a PayPal account using a “burner” Yahoo account, giving their phishing attacks a look of legitimacy, he said. The scammers may then receive a legitimate email from PayPal disputing a payment made to the burner account, and replay that email to potential victims rather than forward it, so the message keeps its authentic PayPal markings, he said.

“So how do you possibly detect these?” he said. “Well, first of all, we acknowledge that such scams exist. We have modules in the system that know payment systems like these are often abused. We've seen it in the transaction memo, like, what you paid for or the seller’s name. And we can understand the language. [These are things] the generative AI can find interesting.”
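
A sketch of such a module, with the domain list, patterns, and field names all assumed for illustration: because the mail itself is authentically from the payment platform, the check has to read the attacker-controlled fields (the transaction memo, the seller name) the way a person would.

    import re

    LEGITIMATE_PAYMENT_DOMAINS = {"paypal.com"}   # authentic senders that get abused
    PHONE_RE = re.compile(r"\+?1?[\s.\-(]*\d{3}[\s.\-)]*\d{3}[\s.\-]*\d{4}")
    URGENCY_TELLS = ("call immediately", "unauthorized", "suspended", "refund")

    def suspicious_payment_notice(from_domain: str, memo: str, seller: str) -> bool:
        """Flag authentic payment emails whose free-text fields read like a scam."""
        if from_domain not in LEGITIMATE_PAYMENT_DOMAINS:
            return False  # spoofed senders are caught by ordinary checks instead
        text = f"{memo} {seller}".lower()
        return bool(PHONE_RE.search(text)) or any(t in text for t in URGENCY_TELLS)

    print(suspicious_payment_notice(
        "paypal.com",
        memo="Unauthorized charge? Call +1 (888) 555-0139 immediately",
        seller="Account Protection Desk"))  # True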

The point, Baggett said, is that this stuff is incredibly sophisticated.

“There's no way a person is going to be able to see this is bad,” he said. “OK, the language is slightly screwed up. That's not going to be true anymore with generative AI anyway, even if someone did notice it. Like, this is really a mail from PayPal. It's essentially perfect, from the standpoint of mail systems circa 2024. The only way you can detect the mail is malicious is [if] you know there's this kind of scam, and you can understand the meaning of that seller field like a person would.”

Carlo MacDonald, president and owner of Exigo Technology, a Baton Rouge, La.-based solution provider and Inky channel partner, told CRN that his company signed with the vendor because a large healthcare client was having spam and phishing problems.

MacDonald said he was intrigued by Baggett’s discussion about GenAI and the way Inky approaches spam.

“They really look at email from a human perspective,” he said. “They’ve been looking at the QR codes and all these things that I'm only now seeing a lot of the other email filtering and security systems just now getting into. Inky has been doing it for a while.”

Inky has done a really great job of informing users what problems they’re having with their emails, MacDonald said.

“Traditionally, in the MSP space, we get that call, ‘Hey, my email is not going through,’ or ‘I'm getting blocked,’ and they don't know why,” he said. “That's a cost to me, because they're calling up and opening a ticket and we have to call the user. With this product, we’re able to better help users understand why their emails got blocked or what the problem is. And I found that to be pretty good agency.”