Palo Alto Networks Cloud Survey: 4 Takeaways On GenAI And Security

Despite the security trade-offs in using GenAI for code generation, all respondents to the 2024 State of Cloud-Native Security survey say they are embracing the approach.

Despite the security trade-offs in using GenAI for code generation, the approach is more or less universal at this point, according to a new survey released Wednesday by Palo Alto Networks.

The 2024 State of Cloud-Native Security report, conducted by the cybersecurity giant’s Prisma Cloud unit, surfaced a number of findings about how the arrival of generative AI-powered tools is changing the picture for software development teams, as well as for the security of their applications.

[Related: Which Side Wins With GenAI: Cybercrime Or Cyberdefense?]

The report surveyed more than 2,800 practitioners and executives in 10 countries between December and January.

What follows are four major takeaways on GenAI and security from Palo Alto Networks’ latest State of Cloud-Native Security survey.

Universal Adoption Of GenAI Tools

GenAI tools such as GitHub Copilot have been rapidly adopted by developers, and usage appears to have been little dampened by research findings that such tools can introduce a greater number of vulnerabilities. A full 100 percent of respondents said that their organizations are “embracing AI-assisted coding,” according to the survey.

Amol Mathur, senior vice president and general manager for Prisma Cloud at Palo Alto Networks, said he found it to be “a little bit of a surprise that it was literally 100 percent.”

“It wasn't like high 80s or high 90s — every single respondent, they're already doing it,” Mathur told CRN. “It's very rapid adoption.”

More Vulnerabilities In Code

At the same time, many respondents acknowledged that this adoption could have security consequences. The survey found 44 percent of organizations worried that “unforeseen vulnerabilities and exploits” could be introduced through AI-generated code, making that the No. 1 cloud security concern in the report.

Beyond the code these tools generate, the sheer productivity boost, which is driving “massive” gains in the amount of software code produced overall, is likely to result in more application vulnerabilities going forward, according to Mathur.

For instance, cyber insurer Coalition recently reported that Common Vulnerabilities and Exposures (CVEs) are expected to jump to about 2,900 a month in 2024, up 25 percent from last year. And in the future, “I definitely expect that to go up as people start writing a lot more code using AI,” Mathur said.
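As a rough back-of-the-envelope check, a 25 percent jump to about 2,900 CVEs a month would imply a 2023 baseline of roughly 2,320 a month (2,900 ÷ 1.25 ≈ 2,320), though Coalition’s exact prior-year figure isn’t cited here.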

Security Remains A ‘Gating Factor’

At the same time, the vast majority of organizations still see security as an impediment to releasing software as quickly as they would like, according to the survey. The report found 86 percent of respondents saying they believe “security is a gating factor hindering software releases.”

While this dynamic is far from new, its persistence is notable given all of the attention paid to the issue in recent years.

Many more organizations have undoubtedly integrated security into their software development pipelines by now, Mathur said. “But it's still largely not a universally solved problem — where developers feel like they truly understand how to bake security in.”

Disconnect On AI-Powered Threats

On the other side of the equation, the rise of GenAI-enhanced cyberattacks has been viewed as a major threat since the arrival of ChatGPT a year and a half ago. Perhaps surprisingly, though, a sizable percentage of security professionals surveyed in the State of Cloud-Native Security report indicated a lack of concern about the threat.

The survey found just 43 percent of security professionals predicting that “AI-powered threats will evade traditional detection techniques to become a more common threat vector.”

In reality though, “100 percent of the people should expect” this to happen, Mathur said — given that GenAI-assisted phishing and social engineering are just the first examples of ways that the technology “lowers the bar” for attackers.