Palo Alto Networks To Launch Its Own LLM ‘In The Coming Year’: CEO Nikesh Arora
The cybersecurity vendor sees ‘significant opportunity’ around bringing generative AI to its offerings, including through developing a proprietary large language model, Arora said Tuesday.
Palo Alto Networks is developing its own proprietary large language model (LLM) as the cybersecurity giant looks to capitalize on generative AI technology in a number of ways, CEO Nikesh Arora said Tuesday.
The company plans to launch the LLM “in the coming year,” and ultimately sees “significant opportunity as we begin to embed generative AI into our products and workflows,” Arora said during the company’s quarterly call with analysts.
[Related: Palo Alto Networks CEO Nikesh Arora On The ‘Revenge Of The CFO,’ the ChatGPT ‘Boon’ And AI-Based Network Security]
Arora made the comments as Palo Alto Networks disclosed quarterly earnings results Tuesday that surpassed analyst expectations despite the challenging economic environment. Growth in areas including its secure access service edge platform, Prisma SASE — as well as its AI-driven security operations platform, Cortex XSIAM (extended security intelligence and automation management) — helped drive the results, according to the Santa Clara, Calif.-based company.
On the whole, Palo Alto Networks reported $1.72 billion in revenue for its fiscal third quarter of 2023, ended April 30, up 24 percent year-over-year. That was just above the consensus estimate from Wall Street analysts for the quarter.
On earnings, the company reported non-GAAP net income of $1.10 per diluted share for its fiscal Q3, beating the 93 cents per share that had been expected by analysts. Palo Alto Networks’ stock price rose 3.75 percent in after-hours trading Tuesday, to $196.85 a share. The company also raised its guidance for revenue and profits in its fiscal 2023.
Generative AI Opportunities
Numerous times during the quarterly call Tuesday, the topic turned to artificial intelligence — and specifically to the potential for generative AI, a technology that has only seen mainstream usage since late last year following the release of OpenAI’s hugely popular ChatGPT.
During his prepared remarks, Arora said that generative AI will boost Palo Alto Networks in at least three major ways, including by helping the company “improve our core, under-the-hood detection and prevention efficacy” within its product portfolio.
The company also plans to utilize generative AI to provide a “more intuitive and natural language-driven experience within our products,” he said.
Additionally, Palo Alto Networks plans to drive “significant efficiency in our own processes and operations” using the technology, Arora said.
“We intend to deploy a proprietary Palo Alto Networks security LLM in the coming year, and are actively pursuing multiple efforts to realize these three outcomes,” he said.
Arora’s remarks follow recent comments on the topic by Palo Alto Networks CPO Lee Klarich, who said in an interview with CRN that the company’s use of generative AI will go far beyond what has been seen so far in cybersecurity.
“I think that right now, what you’re seeing is a very superficial application of [generative AI] — which is interesting for demos, but it doesn’t necessarily solve the real hard problems,” Klarich told CRN. “That is where the real value will come in. What we’re doing is looking at, what are those hard problems we want to go solve? How do we architecturally approach that and leverage these new AI technologies to help us get there?”
During the quarterly call Tuesday, Klarich said that generative AI technology offers “tremendous promise,” particularly when it comes to “being able to help guide product adoption, product usage, [and] to help enhance security capabilities and to drive greater efficiencies across the business.”
So far, Palo Alto Networks has not been among the cybersecurity vendors to rush out generative AI capabilities powered by existing LLMs such as OpenAI’s GPT-4 — despite the company’s heavy focus on using AI/ML in its products over the years.
Data, Scale Are Key
Arora contended during the call Tuesday that generative AI will offer a disproportionate advantage to organizations that are large and that have significant amounts of data, both of which apply to Palo Alto Networks.
“If you’re in the security business, it definitely helps if you have the largest data lake in the world of security data. So from that perspective, I think [generative AI] favors the people who have a lot of data already as part of their strategy — and [that] have built a business on the back of a data-led strategy,” he said. All in all, “it favors companies which have tremendous amounts of data.”
And for large businesses — such as those with significant customer support operations, for instance — there is a clear benefit to making a big investment in generative AI technology, Arora said.
“I can go spend $30, $40, $50 million deploying an LLM and saving half my cost,” he said. “If you’re running a small company and your entire cost is $50 million, it probably doesn’t behoove you to go out and create an LLM-based generative AI project [that will] take away $20 million of cost.”
In other words, “I think it also benefits [organizations] of scale, who are able to drive efficiencies using generative AI across the enterprise, allowing them to grow their business much faster with limited resources,” Arora said.