8 Partners Weigh In On The ChatGPT, GPT Generative AI Hype
Employees with VCPI, Sourcepass, Net Friends, Accenture, SADA, Novacoast, ProArch and Netrix weigh in on the ChatGPT hype.
From ChatGPT saving time on documentation and writing change orders to speeding up the response to a security issue – and potentially causing security issues – partners with a variety of business models are adopting and trying out the generative AI tool, its competitors and the technology behind them.
Employees with Net Friends, Accenture, Novacoast, ProArch and other businesses have spoken with CRN in recent days on the pros, cons and potential of ChatGPT and the generative pre-trained transformer (GPT) large language model powering the software, which produces written content based on a user’s query.
ChatGPT is now considered the fastest-growing app in history, with 100 million monthly users, according to a study by investment bank UBS reported by CBS. By comparison, Instagram took more than two years to reach that number, TikTok nine months and Google Translate more than six years.
Partners Talk ChatGPT, GPT
ChatGPT was created by OpenAI, which was founded in 2015 and has since drawn billions of dollars in investment from Microsoft. Its founders include CEO Sam Altman (the former president of Y Combinator), Elon Musk, Peter Thiel and fellow PayPal alum and LinkedIn co-founder Reid Hoffman, according to Fortune.
Other donors have included Y Combinator co-founder Jessica Livingston, India-based IT outsourcing firm Infosys and Amazon Web Services, according to Fortune.
One open question is whether the revenue opportunity from ChatGPT and GPT can outweigh the extra compute costs. Recent reports from investment bank Morgan Stanley suggest that a natural language query can cost five times as much in compute capacity as a standard search query.
Morgan Stanley estimates that if Google answered 10 percent of its searches with a 50-word natural language response, the extra operating cost to Google would be $1.2 billion in 2024.
A separate Morgan Stanley report for Microsoft put the total cost of integrating GPT into its search engine between about $600 million and $1 billion annually – assuming Bing’s 3 percent market share and every Bing query going through GPT.
Here’s what partners have told CRN about ChatGPT adoption.
Sales Operations Manager
Stephen Eiting, a sales operations manager at VCPI – a Milwaukee-based managed service provider (MSP) whose vendor partners include Microsoft, Citrix, Cisco and N-able – told CRN in an interview that he’s used ChatGPT to write a project charter, change orders, user guides and even a program script for Microsoft’s PowerShell task automation and configuration management program.
“It saves me endless hours every single week,” Eiting said.
For the project charter, Eiting fed ChatGPT stakeholders, project managers and information on the risks involved to get usable text.
“I learned from that experience that as you become more conversational with it, it really presents a really good result,” Eiting said. “And I wasn’t able to just take that. It wasn’t complete. But it did 70 percent of my work for the project charter (in), I don’t know, 45 seconds. Which is great.”
Eiting, who also maintains a personal blog about technology, has been impressed by its ability to generate posting ideas and to write the actual entries.
He said he hopes OpenAI, Microsoft and other AI vendors maintain guardrails around the potential to use ChatGPT to produce malicious content and malicious code.
“They are walking a very fine line,” he said. “And I really hope that it continues to be used in a positive way.”
Vice President Of Product Development
Nick Ross, vice president of product development at Sourcepass – a New York-based MSP that works with Acronis, Dell Technologies, VMware, SentinelOne, Microsoft and Fortinet – told CRN that he uses ChatGPT to help translate concepts and ideas into writing.
He’s used ChatGPT to correct his grammar for posts on his MSP-focused blog, for creating templates for email and marketing campaigns, and even for low-level programming.
“I’d love to have it a little bit more rolled out and baked in certain ways than it is, but, I mean, it’s a game changer,” he said. “It’s a technology that has the excitement of a blockchain or crypto, but actually has way more applicability to businesses and the things that we do to reshape the world.”
Programmers’ jobs are still protected by the need to explain what a user wants, the ability to read code and the ability to troubleshoot, he said.
“It’s not going to go out and build you a full front-end and back-end that you can maintain,” he said.
Still, Ross sees ChatGPT as a helpful tool for translating jargon from product managers and developers.
For small MSPs limited by employee count, in theory, ChatGPT can write a business plan, do the market research based on information at its disposal, and use virtual agents for sales calls, he said.
“There’s the limitless possibility to being able to scale out a business, at least in the forefront,” he said.
Chief Executive Officer
John Snyder, CEO of Net Friends – a Durham, N.C.-based MSP whose vendor partners include Microsoft, Nextiva and Palo Alto Networks – told CRN in an interview that he’s already using the paid version of ChatGPT, which is faster than the free offering.
“I was eager to sign up for it,” Snyder said. “I’m never gonna miss that $20 a month because I use it so, so much.”
Snyder has played with OpenAI’s image-generating AI Dall-E with his children, but he has not found a business case for it yet.
Snyder told CRN that he is excited for all the lower-level, “subtle AI” features that have been rolling out. Users increasingly grow to expect this “subtle AI” in their applications – just as word processor users lost their excitement long ago over spell checking tools.
An example of “subtle AI” is Slack’s transcription function when a video is added, Snyder said.
“What we’ve seen is actually AI that’s not in your face,” he said. “It’s not like a segregated tool set. We’re seeing it starting to get woven in. … The best AI is going to simply be part of the tools we already use.”
Global Lead For Cyber Resilience Services
Researchers at Accenture Security have been trying out ChatGPT’s capabilities for automating some of the work involved in cyber defense. And the initial findings around using the AI-powered chatbot in this way are promising, according to Robert Boyce, global lead for cyber resilience services at Accenture – No. 1 on the 2022 CRN Solution Provider 500.
After taking in data from a security operations platform, ChatGPT has shown the ability to “actually create for us a really nice summary — almost like an analyst’s report — of what you would expect a human analyst to do as they’re reviewing it,” Boyce told CRN.
These potential applications of ChatGPT for cyber defense deserve attention to round out the picture amid the numerous research reports suggesting that the tool can be misused to enable cyberattacks, he said.
It’s not just the malicious actors who can use ChatGPT as a research and writing assistant, as it’s clear that the tool “helps reduce the barrier to entry with getting into the defensive side as well,” said Boyce, who is also a managing director at Accenture Security in addition to heading up its cyber resilience services.
Typically, after an analyst gets an alert about a potential security incident, they start pulling other data sources to “tell a story” and make a decision on whether they think it’s a real attack or not, he said.
That often entails a lot of manual work, or requires using a SOAR (security orchestration, automation and response) tool to pull it together automatically, Boyce said. (Many organizations find SOAR tools to be difficult, however, since they require additional specialized engineers and the introduction of new rules for the security operations center, he noted.)
On the other hand, the research at Accenture suggests that taking the data outputs from a security information and event management (SIEM) tool and putting it through ChatGPT can quickly yield a useful “story” about a security incident. Using ChatGPT to create that narrative from the data, Boyce said, “is really giving you a clear picture faster than an analyst would by having to gather the same information.”
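Boyce did not detail Accenture’s setup, but the workflow he describes – feeding SIEM output to the model and asking for an analyst-style narrative – can be sketched roughly as below. The event fields, rule names and the commented-out model call are hypothetical illustrations, not Accenture’s implementation.

```python
import json

def build_incident_prompt(events):
    """Format raw SIEM events into a prompt asking the model for
    an analyst-style incident narrative."""
    lines = [json.dumps(e, sort_keys=True) for e in events]
    return (
        "You are a SOC analyst. Summarize the following SIEM events "
        "into a short incident narrative: what happened, which hosts "
        "and accounts were involved, and whether this looks like a "
        "real attack or a false positive.\n\n" + "\n".join(lines)
    )

# Hypothetical alert data as a SIEM might export it
events = [
    {"time": "2023-02-13T09:14:02Z", "rule": "brute_force_login",
     "host": "web-01", "user": "svc_backup", "failures": 142},
    {"time": "2023-02-13T09:15:40Z", "rule": "successful_login",
     "host": "web-01", "user": "svc_backup", "src_ip": "203.0.113.7"},
]

prompt = build_incident_prompt(events)
# The prompt would then be sent to a chat-completion endpoint, e.g.:
# client.chat.completions.create(model="gpt-3.5-turbo",
#     messages=[{"role": "user", "content": prompt}])
```

The model’s reply is the “story” Boyce describes: a failed-login burst followed by a success from an unfamiliar IP, summarized the way a human analyst would write it up.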
Director Of Security Go-To-Market And Solutions
Soon, ChatGPT will have some major competition from Google’s forthcoming Bard chatbot. And this raises a big question from a cybersecurity standpoint: Will Google do more to prevent the malicious use of Bard, including for cybercrime, than OpenAI initially did with ChatGPT?
Rocky Giglio, who is director of security go-to-market and solutions at Los Angeles-based SADA, told CRN that he’s eager to see what Bard can do, but so far hasn’t gotten wind of what its capabilities will be.
His hope, however, is that Bard will have stronger preventions against malicious usage, such as the creation of malware and phishing emails.
“What I’m hopeful to see, in Bard, is a little bit more consideration there on how we control that platform,” Giglio said, including measures to limit the tool’s usefulness for hackers.
And knowing Google’s track record and focus around cybersecurity, “I do think there’s some of that consideration being built into it, for sure,” he said.
Chief Operating Officer
At security services provider Novacoast, the OpenAI chatbot technology that’s behind the popular ChatGPT is about to get a wide-scale tryout to see if it can save time for the company’s hundreds of cybersecurity professionals.
Based on a smaller-scale test that the firm already carried out, Novacoast is optimistic that it will, company Chief Operating Officer Eron Howard told CRN.
Even though the technology has shortcomings, the eight-week test of the chatbot at Novacoast was successful enough to move it into broader usage at the company, Howard said.
Issues with the OpenAI chatbot technology include the fact that it hasn’t been trained on data past 2021, and that it sometimes fabricates, or “hallucinates,” an answer when asked a question it doesn’t have data on.
And yet, “despite the fact that it can ‘hallucinate,’ and despite the fact that the dataset is old — which in security can be very bad — it saved people a ton of time,” Howard told CRN.
At Wichita, Kan.-based Novacoast, No. 258 on CRN’s 2022 Solution Provider 500 list, the tryout of GPT-3 with the more than 400 professionals in its services organization will start in the next week or so, Howard said.
The team includes 100 security operations center (SOC) analysts as well as threat hunters, penetration testers, developers and security engineers. “All of them will have it integrated into their chat, so we can see if they’re getting a boost in time savings,” Howard said.
Novacoast has tailored the GPT-3 technology by putting measures in place that help to limit the chatbot’s hallucinations, he noted.
The security services provider will be looking to see if GPT-3 can accelerate activities such as writing the scripts and rules that are an essential part of security operations, including for detecting and responding to threats.
GPT-3 is also adept at summarizing the steps that are necessary for responding to a security issue, Howard said, since it can essentially query the corpus of knowledge that has been published by SOC analysts.
The technology can provide a SOC analyst with an average recommendation for the steps to take in a specific situation, “without having to go to Google and read a bunch of blogs,” he said.
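Howard did not specify Novacoast’s hallucination guardrails, but one common measure is retrieval-grounded prompting: pull the relevant internal runbook passages first, then constrain the model to answer only from them. A minimal sketch, in which the function name, fallback wording and runbook text are all assumptions rather than Novacoast’s design:

```python
def grounded_prompt(question, passages):
    """Constrain the model to supplied reference text, a common
    guardrail against hallucinated answers."""
    context = "\n---\n".join(passages)
    return (
        "Answer using ONLY the reference material below. If the answer "
        "is not in the material, reply: not covered in our runbooks.\n\n"
        f"Reference material:\n{context}\n\n"
        f"Question: {question}"
    )

# Hypothetical internal SOC runbook excerpt
runbook = [
    "Ransomware response: isolate the host from the network, preserve "
    "memory, and notify the incident commander before any reimaging.",
]
prompt = grounded_prompt(
    "What are the first steps for a ransomware alert?", runbook)
```

Grounding the model this way also sidesteps the stale-data problem Howard mentions: answers come from the supplied passages, so the runbook text can be kept current independently of the model’s 2021-era training data.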
Chief Information Security Officer
The potential is definitely there for OpenAI’s ChatGPT to help security analysts who work with SIEM (security information and event management) tools like Microsoft Sentinel automate and expedite some of the typically manual analysis of security incidents, according to Michael Montagliano, chief information security officer at Atlanta-based solution provider ProArch.
At this early stage, though, more testing of the types of integration methods now being posted online is definitely necessary, which ProArch plans to do, Montagliano told CRN.
“We are going to test that integration into Sentinel in a lab environment,” he said. “One of the things you have to be cautious about is, is that accurate? Is it dependable?”
But the potential use of the tool for cyberattacks also shows that “technology cannot be unpoliced,” he said. “There needs to be a controlling force.”
Still, Reeder does believe that OpenAI is “managing it the best they can—a layman can’t come in and say, ‘Create [malware] for me.’”