Inky Leverages GenAI In Fight Against Malicious Emails
‘We’re on a smaller scale (than the technology vendor giants), but we still have to use generative AI to protect against these next-generation threats,’ says Inky’s Matthew Sywulak.
Matthew Sywulak, Inky senior director for customer engineering, got a generative artificial intelligence chatbot to ignore its guardrails simply by asking it for an example phishing message for a presentation on cybersecurity.
But for all the ways threat actors are leveraging GenAI to improve attacks, companies like the College Park, Md.-based email security vendor, a recent Kaseya acquisition, are turning to GenAI to improve email intent analysis and other ways they help solution providers secure clients, Sywulak told a crowd of solution providers at the XChange NexGen 2025 conference, hosted by CRN parent The Channel Company. The show runs through Tuesday in Houston.
“We’re on a smaller scale (than the technology vendor giants), but we still have to use generative AI to protect against these next-generation threats,” he said. “Today, phishers can use services like ChatGPT, Anthropic and Google without problems. They can just go to any one of them, impersonate someone they want to basically phish you with, and then ChatGPT or anyone will give you the response it’s looking for.”
[RELATED: Kaseya Acquires Email Security Trailblazer Inky, Boosts AI-Powered Protection]
Inky GenAI
About 70 percent of Inky’s revenue comes from channel and alliance partners. Its top channel goals for 2025 include enabling partners to develop an AI strategy and increasing the amount of recurring revenue going through partners, according to CRN’s 2025 Channel Chiefs.
Josh Thomas, co-founder and vice president of sales and marketing at Plano, Texas-based Superior TurnKey Solutions Group, told CRN in an interview that his solution provider is vetting a variety of email security tools to better protect customers, with Inky among the candidates.
The rise in threat actor attacks, and the way AI enables them, is another challenge for the channel in keeping clients secure, Thomas said.
“Seems like you can’t get enough protection tools,” he said.
In the age of AI bots, instead of an individual threat actor working through a database of targets and sending each one a spam message, the attacker can upload a plain-text file of targets to a popular AI chatbot, ask for deep research on each one and tailor the message based on their social media accounts, Sywulak said. Online forums detail so-called “jailbreaking” techniques for tricking chatbots into ignoring their guardrails, some as simple as adjusting a single word after the chatbot refuses a prompt.
“The phish of yesterday would take a lot of time and effort,” he said. “But now generative AI can do it for you in a matter of minutes.”
To analyze the intent of an email and detect phishing attempts, Inky holds down the high cost of GenAI by using it to generate synthetic training data offline rather than generating new text on every message it analyzes.
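As a rough illustration of the approach Sywulak described, and not Inky’s actual pipeline, a vendor might batch-generate labeled synthetic phishing examples offline and train a lightweight classifier on them, so per-email analysis never calls a hosted chatbot. Every name, example and threshold in this sketch is hypothetical:

```python
# Hypothetical sketch: train a phishing-intent classifier on synthetic
# examples generated offline, so live email analysis runs cheaply on a
# vendor's own servers. This does not reflect Inky's implementation.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Stand-ins for the "thousands and thousands" of synthetic examples a
# generative model would produce in an offline batch job.
SYNTHETIC_PHISH = [
    "Your mailbox is full. Verify your password here immediately.",
    "Invoice overdue: wire payment today to avoid account suspension.",
    "IT notice: confirm your credentials to keep access to payroll.",
]
SYNTHETIC_BENIGN = [
    "Attached are the meeting notes from Tuesday's planning call.",
    "Lunch is on me Friday if you can make it.",
    "The quarterly report draft is ready for your review.",
]

texts = SYNTHETIC_PHISH + SYNTHETIC_BENIGN
labels = [1] * len(SYNTHETIC_PHISH) + [0] * len(SYNTHETIC_BENIGN)

# A small, cheap model: no per-message calls to an external LLM.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(texts, labels)

incoming = "Urgent: verify your password now or lose mailbox access."
score = model.predict_proba([incoming])[0][1]
print(f"phishing-intent score: {score:.2f}")
```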
Inky’s GenAI is not trained on customer data, he said. Analysis runs on Inky’s own servers, without sending mail to ChatGPT or other chatbots. Extending the capability to multiple languages is also a possibility in the future.
If the intent analysis finds a call to action for the email recipient that looks malicious, Inky labels the email and highlights the text that triggered the label.
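A minimal sketch of what that flag-and-highlight step could look like; the keyword patterns and tagging format here are illustrative assumptions, since a production system would score intent with a trained model rather than a fixed list:

```python
import re

# Hypothetical call-to-action patterns, standing in for a model's output.
CTA_PATTERNS = [
    r"verify your (password|account|credentials)",
    r"click (here|the link) (now|immediately)",
    r"wire (the )?payment",
]

def label_email(body: str) -> str:
    """Return the email body with suspicious calls to action marked
    and a warning banner prepended, mimicking a label-and-highlight UX."""
    flagged = False
    for pattern in CTA_PATTERNS:
        body, n = re.subn(pattern, lambda m: f">>{m.group(0)}<<",
                          body, flags=re.IGNORECASE)
        flagged = flagged or n > 0
    banner = "[FLAGGED: suspicious call to action]\n" if flagged else ""
    return banner + body

print(label_email("Please verify your password immediately to keep access."))
```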
“We’re not rebuilding the entire analysis stack to catch phishing first,” he said. “We’re rebuilding it to understand the intent of the mail based around generating thousands and thousands and hundreds of thousands of examples of synthetic data.”