5 Big Pros And Cons Of ChatGPT For Cybersecurity

There’s mounting evidence that the AI chatbot could be a powerful tool both for hackers and for cyber defenders.

ChatGPT’s Cyber Conundrum

History shows that advanced technology, even when it’s developed with the best of intentions, inevitably ends up being used in ways that cause harm. AI is certainly no exception. But with OpenAI’s ChatGPT, both the positive and negative uses of the technology seem to have been taken up a notch. And when it comes to cybersecurity, there’s now mounting evidence that the AI-powered chatbot could be a powerful tool both for hackers and cyber defenders.

ChatGPT — a virtual research and writing assistant that, at least for now, is free to use — is an amazingly helpful tool. Its knowledge base is vast, its ability to boil down complex subjects is superb and, oh yeah, it’s fast. But its ability to write programming code on request is where many of the concerns about potential harms arise.

For those intent on using the tool to write malware code for deployment in cyberattacks, “ChatGPT lowers the barrier to entry for threat actors with limited programming abilities or technical skills,” researchers from threat intelligence firm Recorded Future said in a report Thursday. “It can produce effective results with just an elementary level of understanding in the fundamentals of cybersecurity and computer science.”

[Related: ChatGPT Malware Shows It’s Time To Get ‘More Serious’ About Security]

Of course, ChatGPT has its positive uses too, including in the cybersecurity realm. Researchers at Accenture Security have been trying out ChatGPT’s capabilities for automating some of the work involved in cyber defense. The initial findings around using the AI-powered chatbot in this way are promising, according to Robert Boyce, Accenture’s global lead for cyber resilience services. It’s clear that the tool “helps reduce the barrier to entry with getting into the defensive side as well,” he told CRN.

OpenAI, which is also behind the DALL-E 2 image generator, and whose backers include Microsoft, first introduced ChatGPT in late November. This week, Microsoft said it’s making a new “multiyear, multibillion dollar investment” into OpenAI, which The New York Times reported to be $10 billion. Microsoft had previously invested more than $3 billion into OpenAI starting in 2019, and OpenAI uses Microsoft Azure for its cloud infrastructure.

What follows are the details we’ve assembled on five big pros and cons of ChatGPT for cybersecurity.

Pro: Automating Security Incident Analysis

Typically, after an analyst gets an alert about a potential security incident, they start pulling other data sources to be able to “tell a story” and make a decision on whether they think it’s a real attack or not. That often entails a lot of manual work, or requires using a SOAR (security orchestration, automation and response) tool to be able to pull it together automatically. (Many organizations find SOAR tools to be difficult, however, since they require additional specialized engineers and the introduction of new rules for the security operations center.)

On the other hand, the research at Accenture suggests that taking the data outputs from a security information and event management (SIEM) tool and putting it through ChatGPT can quickly yield a useful “story” about a security incident. Using ChatGPT to create that narrative from the data “is really giving you a clear picture faster than an analyst would by having to gather the same information,” said Boyce, who is also a managing director at Accenture Security in addition to heading its cyber resilience services.

This is important because for years, the security operations space “has been stagnant in a lot of ways because of the immense amount of information coming at an analyst, and because of the enrichment that has to happen before they can make good decisions,” he said. “It’s always been overwhelming. It’s information overload.”

And while many cybersecurity professionals are overburdened, there also aren’t nearly enough of them, as the massive shortage of skilled security pros continues. ChatGPT, however, holds the promise of automating some of the work of overwhelmed security teams while also helping to “erase some of the noise from the signal,” Boyce said.
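The SIEM-to-narrative workflow Boyce describes can be sketched roughly as follows. The alert fields, the prompt wording and the `build_incident_prompt` helper are illustrative assumptions, not Accenture’s actual pipeline; in a real setup the assembled prompt would be sent to a chat-completion API and the model’s reply shown to the analyst alongside the raw alerts.

```python
import json

def build_incident_prompt(alerts):
    """Assemble raw SIEM alerts into a prompt asking an LLM to
    'tell the story' of a potential incident. The alert schema
    here is illustrative, not any particular SIEM's format."""
    lines = [json.dumps(a, sort_keys=True) for a in alerts]
    return (
        "You are a SOC analyst assistant. Given these correlated "
        "alerts, write a short narrative of the likely incident and "
        "assess whether it looks like a real attack:\n" + "\n".join(lines)
    )

# Two correlated alerts an analyst might otherwise stitch together by hand
alerts = [
    {"time": "2023-01-26T09:14:02Z", "source": "edr",
     "event": "powershell.exe spawned by winword.exe"},
    {"time": "2023-01-26T09:14:05Z", "source": "proxy",
     "event": "outbound connection to a rarely seen domain"},
]

prompt = build_incident_prompt(alerts)
# In practice this prompt would go to a chat-completion API;
# here we just show the assembled input.
print(prompt)
```

The point of the sketch is that the enrichment step — gathering and formatting the correlated data — stays automated, while the “story” the analyst needs is delegated to the model.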

Con: Accelerating Malware Development

Last week, researchers from security vendors including CyberArk and Deep Instinct posted technical explainers about using ChatGPT to generate code for malware, including ransomware. CyberArk researchers Eran Shimony and Omer Tsarfati posted findings showing that the tool can in fact be used to create malware that continually mutates its code to evade detection, known as polymorphic malware. Based on the findings, it’s clear that ChatGPT can “easily be used to create polymorphic malware,” the researchers wrote.

In their report released Thursday, Recorded Future’s research team highlighted several ways that the tool could be used for malware creation in an advanced fashion. Those include training ChatGPT on malware code found in open-source repositories to generate “unique variations of that code which evade antivirus detections” and using “syntactical workarounds that ‘trick’ the model” into fulfilling a request to write code that exploits vulnerabilities.

Notably, ChatGPT can also be utilized to generate the malware payload itself that’s intended for distribution as part of a cyberattack, according to Recorded Future researchers. The research team has identified several malware payloads that ChatGPT is effective at generating, including infostealers, remote access trojans and cryptocurrency stealers.

The position of OpenAI appears to be that when it comes to user requests to ChatGPT for code, ChatGPT functions more like a search engine, and is not capable of doing the level of customization with code-writing that a human would be. “When it comes to code-related requests, I can provide examples of code, explain how to write the code and provide information related to the code, but I don’t have the ability to generate new code or execute it,” the ChatGPT chatbot said in response to a question from CRN last week. “My approach with code-related requests is more similar to a search engine or a reference book, where I can provide information that I have seen during my training process, rather than generating original text.”

However, Deep Instinct threat intelligence researcher Bar Block told CRN that in her tryout of ChatGPT, she felt that the tool was functioning as something more than just a really good search engine. “When I started checking its limits to see what it can do — when I tried to make it write ransomware — I first started with telling it, ‘OK, write me code in Go that can iterate over directories and encrypt their content.’ And then I asked it to do more things like, ‘Do the same, but also go over subdirectories.’ And then, ‘Put .txt files in each subdirectory.’ And [ChatGPT changed] the code that it initially gave me,” Block said in an interview. “So it did have the capabilities to understand what it was writing, and to change it. It didn’t just search for another thing that it stumbled across during its training. It actually changed the initial input. So it has the capabilities to generate code.”

Pro: Reducing The Knowledge Gap For Execs And Boards

For just about any field of inquiry, ChatGPT has the potential to improve a user’s understanding of the area at an accelerated pace. The chatbot’s ability to adeptly answer specific questions with highly relevant information, boiled down to a few paragraphs or bullet points, means that ChatGPT is often a faster way to learn about a new topic than combing through the web with Google.

However, there’s a case to be made that ChatGPT is especially advantageous for those looking to learn about cybersecurity. The complexity and nuances of the subject have meant that “security has always been a mystery” to those who don’t focus on it day-to-day, and it’s widely misunderstood, Accenture’s Boyce told CRN. But because ChatGPT can rapidly summarize complex topics, and because the stakes in cybersecurity are so high, it could prove disproportionately valuable for corporate executives and board members, who are increasingly expected to be knowledgeable about cybersecurity.

The world may be able to more quickly “close the knowledge gap that exists between non-security executives and security executives” using ChatGPT, Boyce said. For instance, “if you were a CEO of a financial services company, and you wanted to understand how Russia-Ukraine was going to impact you [in terms of cybersecurity], you could be searching on Google for hours putting together your own point-of-view on this, and looking for the right questions to ask.” With ChatGPT, “you honestly may be able to do this in minutes to be able to educate yourself.”

While a user could do the same for any industry, the chatbot’s value for cybersecurity is particularly meaningful since the topic “just has such a high profile now,” Boyce said. “When you’re looking at the SEC potentially making it a requirement [for public companies] to have a board member who is educated as their designated cyber [expert], where are you going to find all of these cyber people at a board level who can do this?”

Con: Enabling Phishing And Social Engineering

ChatGPT’s specialty in imitating human writing “gives it the potential to be a powerful phishing and social engineering tool,” Recorded Future researchers said in their report Thursday. The AI-powered chatbot could prove especially useful for threat actors who are not fluent in English, with the potential for the tool to be used to “more effectively” distribute malware, according to the report.

In the researchers’ test of ChatGPT, none of the telltale issues with phishing emails written by individuals who are not fluent in English — such as spelling and grammar mistakes, or misuse of English vocabulary — were present in the email text the tool produced. As a result, “we believe that ChatGPT can be used by ransomware affiliates and initial access brokers that are not fluent in English to more effectively distribute infostealer malware, botnet staging tools, remote access trojans, loaders and droppers, or one-time ransomware executables that do not involve data exfiltration,” the researchers wrote.

Pro: Automating Other Areas Of Cybersecurity

Beyond bringing automation to cyber incident analysis by security operations teams, ChatGPT also holds the promise of automating some of the work of penetration testers who are deployed to test cyber defense systems for faults. For instance, the malware creation capabilities offered by ChatGPT can assist “ethical hackers” as well. “Being able to help automate some aspects of the proactive attack simulations is also a super important use case” for ChatGPT, Accenture’s Boyce told CRN. “As a services company, it could allow us to service our customers more effectively, more efficiently, and maybe at lower costs — rather than having to do a lot of the research and create our own malware all the time.”

Looking ahead, ChatGPT may also be a signal that greater automation of cyber defense decision-making is not too far off. Within cybersecurity, “a lot of the ‘AI’ in the past has really just been machine learning, and built on deviations from normal behaviors or things like that,” Boyce said. “Where we’re now going is [the ability] to take in information, and use that information to make a decision. And that’s going to be super interesting.”