ChatGPT Malware Shows It’s Time To Get ‘More Serious’ About Security

Kyle Alspach

Security researchers this week posted findings showing that the tool can in fact be used to create highly evasive malware.


With security researchers showing that OpenAI’s ChatGPT can in fact be used to write malware code with relative ease, managed services providers should be paying close attention.

This week, researchers from security vendors including CyberArk and Deep Instinct posted technical explainers about using the ChatGPT writing automation tool to generate code for malware, including ransomware.

[Related: Google Cloud VP Trashes ChatGPT: Not Cool]

While concerns about the potential for ChatGPT to be used this way have circulated widely of late, CyberArk researchers Eran Shimony and Omer Tsarfati posted findings showing that the tool can in fact be used to create highly evasive malware, known as polymorphic malware.

Based on the findings, it’s clear that ChatGPT can “easily be used to create polymorphic malware,” the researchers wrote.
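To illustrate what "polymorphic" means in this context: the code mutates its on-disk or in-memory byte pattern on each generation while its behavior stays the same, which defeats signature-based detection. The sketch below is a deliberately benign, hypothetical illustration of that core idea using a one-time XOR encoding of a harmless payload string; it is not the researchers' technique and involves no malicious functionality.

```python
import os

def mutate(payload: bytes) -> tuple[bytes, bytes]:
    """Return a freshly XOR-encoded copy of payload plus its random key.

    Each call yields a different byte pattern, but decoding always
    recovers identical content -- the essence of polymorphic code."""
    key = os.urandom(len(payload))
    encoded = bytes(p ^ k for p, k in zip(payload, key))
    return encoded, key

def restore(encoded: bytes, key: bytes) -> bytes:
    # XOR with the same key undoes the encoding
    return bytes(e ^ k for e, k in zip(encoded, key))

payload = b"print('hello')"  # a harmless stand-in payload

enc1, key1 = mutate(payload)
enc2, key2 = mutate(payload)

assert enc1 != enc2                     # byte patterns differ per generation
assert restore(enc1, key1) == payload   # yet both decode to the same content
assert restore(enc2, key2) == payload
```

A scanner looking for a fixed byte signature sees a different pattern every time, even though the decoded behavior never changes; real polymorphic malware applies far more sophisticated mutation, but the evasion principle is the same.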

Deep Instinct threat intelligence researcher Bar Block, meanwhile, wrote that existing controls in ChatGPT do ensure that the tool won’t create malicious code for users who lack know-how about malware execution.

However, “it does have the potential to accelerate attacks for those who do [have such knowledge],” Block wrote. “I believe ChatGPT will continue to develop measures to prevent [malware creation], but as shown, there will be ways to ask the questions to get the results you are looking for.”

The research so far is showing that concerns about the potential for malicious cyber actors to “weaponize” ChatGPT are not unfounded, according to Michael Oh, founder and president of Boston-based managed services provider Tech Superpowers.

“It just accelerates that cat-and-mouse game” between cyber attackers and defenders, Oh said.

As a result, any MSPs or MSSPs (managed security services providers) who thought they still had more time to get their clients fully protected should reconsider that position, he said.

If nothing else, ChatGPT’s potential for malware creation should “drive us to be much more serious about plugging all the holes” in customers’ IT environments, Oh said.

Earlier this month, researchers at cybersecurity firm Check Point disclosed that they’ve found evidence of the “first instances of cybercriminals using OpenAI to develop malicious tools.” The disclosure prompted Dominick Delfino, global vice president of cybersecurity sales at Google Cloud, to write on LinkedIn, “In case you think ChatGPT is cool. This is what [it’s] being used for.”

OpenAI, which is also behind the DALL-E 2 image generator, and whose backers include Microsoft, first introduced ChatGPT in November. The chatbot has gained massive popularity thanks to its ability to effectively mimic human writing and conversation while responding to prompts or questions from users.

Kyle Alspach is a Senior Editor at CRN focused on cybersecurity. His coverage spans news, analysis and deep dives on the cybersecurity industry, with a focus on fast-growing segments such as cloud security, application security and identity security.