ChatGPT Malware Shows It’s Time To Get ‘More Serious’ About Security

Security researchers this week posted findings showing that the tool can in fact be used to create highly evasive malware.

With security researchers showing that OpenAI’s ChatGPT can in fact be used to write malware code with relative ease, managed services providers should be paying close attention.

This week, researchers from security vendors including CyberArk and Deep Instinct posted technical explainers about using the ChatGPT chatbot to generate code for malware, including ransomware.

[Related: Google Cloud VP Trashes ChatGPT: Not Cool]

While concerns about the potential for ChatGPT to be used this way have circulated widely of late, CyberArk researchers Eran Shimony and Omer Tsarfati posted findings showing that the tool can in fact be used to create highly evasive malware, known as polymorphic malware. Polymorphic code mutates from one sample to the next while keeping its behavior the same, so signature-based defenses never see a repeated byte pattern to match.

Based on the findings, it’s clear that ChatGPT can “easily be used to create polymorphic malware,” the researchers wrote.
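To see why polymorphism frustrates signature-based scanners, consider a minimal, deliberately benign sketch. This is not code from the CyberArk research; the payload here is just a harmless string, and the names are illustrative. Each "generation" re-encodes the same payload with a fresh random XOR key, so the stored bytes differ every time even though the decoded behavior is identical:

```python
import os

def make_generation(payload: bytes) -> tuple[bytes, bytes]:
    """Return (key, encoded) where encoded = payload XOR key.

    Each call draws a fresh random key, so the stored bytes are
    different every time, while the decoded payload stays the same.
    """
    key = os.urandom(len(payload))
    encoded = bytes(p ^ k for p, k in zip(payload, key))
    return key, encoded

def decode(key: bytes, encoded: bytes) -> bytes:
    # XOR is self-inverse: applying the same key restores the payload.
    return bytes(e ^ k for e, k in zip(encoded, key))

payload = b"print('hello')"  # benign stand-in for the real payload

gen1 = make_generation(payload)
gen2 = make_generation(payload)

print(gen1[1].hex())  # one byte signature...
print(gen2[1].hex())  # ...and a completely different one
assert decode(*gen1) == decode(*gen2) == payload  # same behavior
```

A scanner that matches fixed byte patterns sees a new signature for every generation. What the CyberArk researchers described goes a step further: rather than re-encrypting fixed bytes, an attacker could repeatedly prompt ChatGPT to regenerate functionally equivalent source code, mutating the malware at the code level instead of the byte level.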

Deep Instinct threat intelligence researcher Bar Block, meanwhile, wrote that existing controls in ChatGPT do ensure that the tool won’t create malicious code for users who lack know-how about how malware is executed.

However, “it does have the potential to accelerate attacks for those who do [have such knowledge],” Block wrote. “I believe ChatGPT will continue to develop measures to prevent [malware creation], but as shown, there will be ways to ask the questions to get the results you are looking for.”

The research so far is showing that concerns about the potential for malicious cyber actors to “weaponize” ChatGPT are not unfounded, according to Michael Oh, founder and president of Boston-based managed services provider Tech Superpowers.

“It just accelerates that cat-and-mouse game” between cyber attackers and defenders, Oh said.

As a result, any MSPs or MSSPs (managed security services providers) who thought they still had more time to get their clients fully protected should reconsider that position, he said.

If nothing else, ChatGPT’s potential for malware creation should “drive us to be much more serious about plugging all the holes” in customers’ IT environments, Oh said.

Earlier this month, researchers at cybersecurity firm Check Point disclosed that they’ve found evidence of the “first instances of cybercriminals using OpenAI to develop malicious tools.” The disclosure prompted Dominick Delfino, global vice president of cybersecurity sales at Google Cloud, to write on LinkedIn, “In case you think ChatGPT is cool. This is what [it’s] being used for.”

OpenAI, which is also behind the DALL-E 2 image generator, and whose backers include Microsoft, first introduced ChatGPT in November. The chatbot has gained massive popularity thanks to its ability to effectively mimic human writing and conversation while responding to prompts or questions from users.