Dell Technologies CTO John Roese: Agentic AI Is A Storage Win For Partners
Dell Technologies CTO and Chief AI Officer John Roese says that agentic AI will be deployed throughout major enterprises by the end of the year and that it represents a massive opportunity for partners.
Dell Technologies Global CTO and Chief AI Officer John Roese predicts that agentic AI—independently operating AI bots that can carry out tasks, move between systems and learn on the fly—will be deployed throughout major enterprises by the end of the year, and he recently told CRN it represents a massive opportunity for partners.
Before he was Dell’s global CTO, Roese was CTO at EMC when it was acquired by Dell. He also had CTO stints at Cablevision and Broadcom.
Roese said that over the past three years of AI development, many proofs of concept have turned into products and validated designs that have been deployed at more than 3,000 customers.
For him, the AI era is a new frontier where he can bring the best of what he has picked up in his career to deliver outcomes and value for partners and customers.
Here is more of what Roese had to say in an interview with CRN.
One of the things you mention in your YouTube video series ‘AI Insights’ is that it is very difficult to see more than six months ahead. What are some of the things you’ve learned early on?
It’s funny because more and more what I find is … in having a discussion with people that are trying to do this, they run into, inevitably, the same issues that we’ve worked through. But when it happens in their world, suddenly everything stops and they’re in this mode of ‘I know I should be doing better. I know I should be making this technology work. I know I can get ROI, but how do I organize my engineering organization to do this? How do I find technical talent? How do I decide what is important, and how do I keep the noise out? How do I structure for this thing?’
With two big customers, I had a first conversation with their chief AI officers independently and I laid it out, and then we followed up about a month later, and both times they came back and said, ‘Hey, I took what you told me, and I went back, and I fought the fight.’ And what the topic was generally about was organization. I’m very vocal. You have to do this top down. You cannot do this by goodwill and hope.
What you’ve said is if something’s not working, you change it, you pivot, you go where the data tells you to go. Is that the case with AI as well?
If you look at what happened last year, you’ve heard the narrative, we had like 800 projects that were spinning up. It was an inch deep and a mile wide.
And I think I walked people through this stack that we said, ‘Let’s start with why are we doing this in the first place.’ And we decided that we’d like a financial impact to our company, and so profit, revenue, cost reduction and material regulatory and risk reduction seemed to be the right things to do. So that was kind of the North Star.
And then we said, ‘Hey, where do those exist?’ You could argue those exist everywhere. But are there critical areas? And the data led us very clearly—and you’ve heard us say this outside the AI context—to the four things that make us special.
We have the largest secure supply chain in the world. We have the largest enterprise-selling organization in our industry. We have one of the largest engineering capabilities in the world. We have one of the largest service capabilities in the world. Those four things, if you look at the data, are where most of the cost and the benefit of Dell exist. There are many other organizations that support them, but if you get those right, you’re going to make a difference. And so we decided to prioritize them.
Can you describe how you applied AI to the sales processes?
We said, ‘Well, AI is the application of artificial intelligence against a specific process. It’s not random. You’re applying this technology to make a process better. Well, which of the million processes in the company are we going to target?’ And so in the case of sales, we did this massive diagnostic of 20,000 sellers. Looked at a day in the life of them: What do they do? Where do they spend their time? And it was funny because the data told us where to target AI because we realized, ‘Hey, they’re very good in front of customers. We don’t need virtual sellers. We don’t need robots out there.’ The time was going into everything else around the sale.
So we said, ‘Wow, if we could fix that and free up the time a seller has in the week to go be in front of customers more, that would probably move the needle in a pretty significant way.’
And the impact on it has just been spectacular. We had a 100 percent pickup rate. Sellers were using it across the entire sales force. Every time I run into a salesperson, they tell me all these creative things that they’re doing with it, which all correlate to them moving faster.
The entire Dell team is relentlessly focused on market share and being No. 1. What is it like working with that intensity?
I’ve been doing this for a long time, and as a CTO I get really bored if things aren’t changing. And so I like the fact that we have to move forward and we have to fail forward faster and we have to be more aggressive. But I’ve also run engineering before, I’ve run marketing, I’ve run lots of things, but the bottom line is I have just an ingrained bias to action.
What’s been interesting on this, though, is I have been empowered to move even faster because in the AI cycle, it’s not just simply normal product cycles. The entire thing reinvents itself.
Culturally, inside the company, I’m sure if you talk to [other executive leaders], the tone at the top is definitely not one of, ‘Good try. Keep working on it. We’ll get to it.’ It’s, ‘You’re late already.’ It’s, ‘Pleased, but never satisfied.’
You’ve said agentic systems will live on a platform, but they’re going to do 80 percent of their work outside the data center. Where inside those infrastructure products are partners going to find the opportunity?
OK, so first there are chatbots and reactive AI stuff; that was the first generation.
Now we have agents, which are actually autonomous and self-organizing and learning and distributable. What’s interesting is to compare the two. When you go to the second one, when you go to agentic and you have an ensemble of agents doing work, you see about 10 times the amount of token production, meaning you’re consuming about 10 times the amount of compute. Everybody gets that. What they don’t get is the data side.
Fundamentally, these tools, each one of them interacts with data sets, but more importantly they create their own data. In fact, the underlying foundational technology of an ensemble of agents isn’t a vector database. It isn’t a traditional graph. It’s actually something called a ‘knowledge graph’ in most of the early implementations.
Think of it as a collection of data, objects, information, manuals, content, whatever it is that’s related to what their job is. Think of it as the library they’re going to use. But because it’s a knowledge graph, there are relationships between them, like how your brain works. There are neurons and synapses, and the reality of it is when you put agents into production and you make them do a job, you have to give them information. The information isn’t entirely in the LLM; it’s actually in the knowledge graph. So you arm them with the manual of how to be a good service agent, or a good selling agent, or to understand products. That’s how you give them domain expertise.
But then as they operate, as they start to use that information to learn skills, to understand how to navigate it, they modify the knowledge graph. You actually see paths form between them so if you want to know how to solve this particular problem, all the information is kind of scattered in the knowledge graph. Once they solve it, once there are actual modifications to the dataset that show them the path to do that, the next time they have to do it they know how to do it. They’ve developed a skill.
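To make that idea concrete, here is a minimal, hypothetical sketch of the pattern Roese describes, not Dell’s implementation: agents start from a seeded “library” of related content, and when they work out how to get from a problem to a fix they record that relationship in the graph, so the path becomes a reusable skill the next time. The node names, the solve() helper and the use of Python’s networkx library are all assumptions made purely for illustration.

```python
# Hypothetical sketch of an agent learning by modifying a knowledge graph.
# networkx stands in for whatever graph store a real agent platform uses.
import networkx as nx

# Seed the graph with the "library": manuals, symptoms, procedures.
kg = nx.Graph()
kg.add_nodes_from([
    ("service-manual", {"type": "manual"}),
    ("error-code-E42", {"type": "symptom"}),
    ("replacement-part-list", {"type": "reference"}),
])
# A few relationships are known up front.
kg.add_edge("service-manual", "error-code-E42")
kg.add_edge("service-manual", "replacement-part-list")

def solve(graph: nx.Graph, symptom: str, fix: str) -> list[str]:
    """Hypothetical agent step: find a path from a symptom to a fix.
    If no path exists yet, the agent records the relationship it just
    worked out, so the next lookup is a direct hop instead of a search."""
    if not graph.has_node(fix):
        graph.add_node(fix, type="procedure")
    try:
        return nx.shortest_path(graph, symptom, fix)
    except nx.NetworkXNoPath:
        graph.add_edge(symptom, fix, learned=True)  # the dataset itself changes
        return [symptom, fix]

# First run: no path exists, so the agent adds one -- the "skill."
print(solve(kg, "error-code-E42", "firmware-update-proc"))
# Second run: the learned relationship is already in the graph and is reused.
print(solve(kg, "error-code-E42", "firmware-update-proc"))
```

In this toy version, every learned edge is new data that the agents read back on each subsequent task, which is the storage point Roese goes on to make: the graph keeps growing and keeps being read, so it stays hot rather than archival.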
And so when you think about that from a storage perspective, what it says is every agent in the world is dependent on the knowledge inside the large language models they use and the knowledge you give them and the knowledge they create, which all correlates into storage footprint.
And more importantly, the knowledge they create and they use is always hot. It’s never archival. So if you’re familiar with storage you have cold, warm and hot. There is no cold storage involved in this because fundamentally it’s like your brain. Does your brain turn off?
The reality is, when we start looking at the knowledge graphs underneath these, they are not only very dynamic, meaning high IOPS [input-output operations per second], they have a lot of input-output into them, but they’re also incredibly complex. They’re not that big, but there are lots of them. But then more importantly, they’re incredibly valuable. Because once the agents work on a knowledge set, they create the relationships, and that actually is added, high-value information.