Dell ISG President Arthur Lewis Talks Storage, Servers, Massive On-Prem LLMs, And ‘Crazy’ Pace Of Innovation

‘Last year, we were talking about perception models. Now, we’re talking about tree-of-thought, reasoning models and agents. That wasn’t a thing last year ... The pace of innovation is crazy. But the big unlock here is the compute. Now for the software engineer, if you can think it, you can create it,’ Dell Technologies ISG President Arthur Lewis tells CRN.

Arthur Lewis, president of Dell Technologies’ Infrastructure Solutions Group, told CRN the company is focused on redefining the IT architecture that makes up the enterprise and the data center with a private cloud platform and full-stack innovation that turns data into intelligence.

“You have to be thinking, ‘If I was a company born in AI, unencumbered by the past, what advantages do I have versus legacy companies?’ That is the mindset that you have to have,” he said during an interview at Dell Technologies World 2025 last week. “And you need to be scared that that company is going to run circles around you in a couple of years.”

Lewis leads the teams behind the server and storage breakthroughs that have won Dell the No. 1 market-share position in each of those categories and kept it there quarter after quarter. For Lewis, the only acceptable pace is to never slow down.

“If you think you’ve won, you’ve lost,” he told CRN.

[RELATED: Michael Dell’s Boldest AI Predictions From Dell Technologies World 2025]

In terms of how customers should approach AI, the theme at the show is that organizations must act now. Lewis said those that are afraid to move first risk being left behind.

“I kind of joke that the AI revolution is not a spectator sport. It’s really not. You’ve got to get into the game. You’ve got to learn. You’ve got to make mistakes. You’ve got to learn some more,” he said. “If you’re going to sit around and wait to be a fast follower, you’re kind of dead in the water.”

He said while unlocking the power of AI is a significant task that involves incorporating disparate data from across a vast IT estate, Dell is hoping to make it easier.

He pointed to the company’s new offerings like the Dell AI Data Platform. Data is fuel, he said, and AI needs premium fuel, which requires refinement. The platform’s foundation, he added, is Dell’s fast, scalable storage, engineered for the AI moment.

Dell’s PowerScale F710 and F910 file and object storage arrays have twice the capacity they did a year ago, along with data services wrapped around them that automate the ingestion of all those bytes through a proprietary RAG connector, plus advanced search and discovery capabilities.
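Dell hasn’t published the internals of that RAG connector, but the pattern it automates, pulling documents off storage, chunking them, embedding them and indexing them for retrieval, can be sketched in generic terms. The snippet below is a minimal illustration built on off-the-shelf open-source libraries (sentence-transformers and FAISS); it is not Dell’s implementation, and the sample documents and chunk sizes are invented for the example.

```python
# Generic RAG-style ingestion sketch (chunk -> embed -> index), for illustration only.
# This is NOT Dell's proprietary RAG connector; library choices and data are assumptions.
import numpy as np
import faiss
from sentence_transformers import SentenceTransformer


def chunk(text: str, size: int = 500, overlap: int = 50) -> list[str]:
    """Split raw text into overlapping chunks suitable for retrieval."""
    step = size - overlap
    return [text[i:i + size] for i in range(0, max(len(text) - overlap, 1), step)]


# Stand-ins for documents pulled off a file or object store.
documents = {
    "quarterly_report.txt": "Revenue grew on strong AI server demand across enterprise accounts ...",
    "ops_runbook.txt": "To restore a snapshot, quiesce the application and mount the latest copy ...",
}

model = SentenceTransformer("all-MiniLM-L6-v2")  # small local embedding model

chunks, sources = [], []
for name, text in documents.items():
    for piece in chunk(text):
        chunks.append(piece)
        sources.append(name)

# Embed every chunk and build a cosine-similarity index.
embeddings = model.encode(chunks, normalize_embeddings=True)
index = faiss.IndexFlatIP(embeddings.shape[1])
index.add(np.asarray(embeddings, dtype="float32"))

# The retrieval step a downstream LLM would call before generating an answer.
query = model.encode(["how do I restore a snapshot?"], normalize_embeddings=True)
_, hits = index.search(np.asarray(query, dtype="float32"), 2)
print([sources[i] for i in hits[0]])
```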

Meanwhile, Dell’s ObjectScale integrates with modern data lakes, AI tools and a wide variety of object and file formats while delivering industry-leading throughput of 384 gigabytes per second.
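Because ObjectScale presents an S3-compatible interface, AI pipelines typically reach it the way they would any object store. The sketch below shows that generic access pattern with boto3; the endpoint URL, bucket name, object keys and credentials are placeholders rather than real ObjectScale values.

```python
# Generic access sketch for an S3-compatible object store, for illustration only.
# Endpoint, bucket, keys and credentials are placeholders, not real ObjectScale values.
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="https://objectstore.example.internal",  # placeholder endpoint
    aws_access_key_id="PLACEHOLDER_ACCESS_KEY",
    aws_secret_access_key="PLACEHOLDER_SECRET_KEY",
)

# List the curated objects feeding an AI pipeline.
resp = s3.list_objects_v2(Bucket="training-data", Prefix="curated/")
for obj in resp.get("Contents", []):
    print(obj["Key"], obj["Size"])

# Stream one object directly instead of staging it to local disk first.
body = s3.get_object(Bucket="training-data", Key="curated/part-00000.parquet")["Body"]
header = body.read(1024)  # read the first kilobyte as a quick sanity check
```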

Additionally, the company’s Project Lightning will be the fastest parallel file system in the world, Lewis told the audience at Dell Tech World, with 67 percent faster access to data and twice the throughput of its nearest competitor.

Lewis called Project Lightning a “superhighway for artificial intelligence” that eliminates bottlenecks and is designed for agentic training and workloads. It is capable of saturating “hundreds of thousands of GPUs,” which is needed for checkpointing, key-value caching and high-end metadata analytics to ensure “seamless and continuous access” to data.
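Checkpointing is one of the clearest illustrations of why that throughput matters: a long-running training job periodically dumps its full model and optimizer state to shared storage, producing large bursts of writes the file system has to absorb without stalling the GPUs. The sketch below shows that general pattern in PyTorch; the mount path, interval and model are placeholders, and nothing here touches Project Lightning itself.

```python
# General checkpointing pattern for a long-running training job, for illustration only.
# The mount path, interval and model are placeholders; this does not use Project Lightning.
import os
import torch
import torch.nn as nn

CHECKPOINT_DIR = "/mnt/shared-fs/checkpoints"   # placeholder shared/parallel file system mount
CHECKPOINT_EVERY = 100                          # steps between checkpoints
os.makedirs(CHECKPOINT_DIR, exist_ok=True)

model = nn.Linear(1024, 1024)                   # stand-in for a much larger model
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)


def save_checkpoint(step: int) -> None:
    """Flush model and optimizer state in one burst; this write spike is what a
    high-throughput file system is meant to absorb without idling GPUs."""
    torch.save(
        {"step": step, "model": model.state_dict(), "optimizer": optimizer.state_dict()},
        os.path.join(CHECKPOINT_DIR, f"step_{step:08d}.pt"),
    )


for step in range(1, 1001):
    x = torch.randn(32, 1024)
    loss = model(x).pow(2).mean()               # stand-in loss
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    if step % CHECKPOINT_EVERY == 0:
        save_checkpoint(step)
```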

Dell’s data lakehouse combines the scalability and cost efficiency of a data lake with the structure and reliability of a data warehouse, giving customers a unified approach across diverse data sources, tools and formats, Lewis said on stage.
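The lakehouse idea itself fits in a few lines: open file formats sitting on shared storage, queried through one engine with warehouse-style SQL. The generic PySpark sketch below illustrates that unified access pattern; it is not the Dell Data Lakehouse, and the paths and schema are invented.

```python
# Generic lakehouse access sketch, for illustration only: open file formats on shared
# storage, queried with warehouse-style SQL. Paths and schema are invented; this is
# not the Dell Data Lakehouse itself.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("lakehouse-sketch").getOrCreate()

# Raw, semi-structured events landed on the "lake" side ...
events = spark.read.json("/data/lake/raw/clickstream/")          # placeholder path
# ... joined against curated, schema-enforced "warehouse" tables.
customers = spark.read.parquet("/data/lake/curated/customers/")  # placeholder path

events.createOrReplaceTempView("events")
customers.createOrReplaceTempView("customers")

# One SQL surface across both kinds of data.
top_accounts = spark.sql("""
    SELECT c.account_id, COUNT(*) AS interactions
    FROM events e
    JOIN customers c ON e.customer_id = c.customer_id
    GROUP BY c.account_id
    ORDER BY interactions DESC
    LIMIT 10
""")
top_accounts.show()
```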

“This, ladies and gentlemen, is the Dell AI Data Platform: from fast, scalable storage with PowerScale and ObjectScale, to cutting-edge technologies like Lightning, and end-to-end data management, with the Dell Data Lakehouse, we are giving customers the tools that they need to derive maximum value from their most valuable asset, their data.”

Here is an edited transcript of CRN’s conversation with Lewis:

I heard a lot about storage on stage, but I didn’t hear as much about servers. Dell introduced new PowerEdge servers, the XE9780 and XE9785, and a liquid-cooled XE9785L. Was that by design?

When you think about what’s new and unique about AI Factory 2.0, you expect us to innovate on the compute side. And we’ve done that, and we’ve killed it. On the IR5000 or IR7000 (rack cabinets), getting the (PowerEdge XE) 9780, the 9785, the air-cooled and liquid-cooled pieces, got it. Networking: Spectrum, Quantum, Broadcom, Nvidia, check, got it.

I really wanted to land the storage story, because that was sort of the biggest addition to the AI Factory. I really wanted to land the AI Data Platform. And I wanted to do it, not rushed, trying to jam compute, network and storage. I really wanted to go into the five components of the data platform, and I wanted to go into the components of the private cloud. And then I wanted to talk about the incredible ecosystem that we’re building out.

On the servers, I think last year on the main stage, Dell Technologies CEO Michael Dell said you could fit 96 H200s inside a rack of PowerEdge XE9680s. Now you have server configurations with the PowerEdge XE9785L that can fit 256 GB300s inside a rack. That’s a massive leap in terms of compute, and those chips are big. What does that look like?

So in the NVL structure we can fit 192 (Blackwell chips), and in our own design we can fit 256. It’s a 52U rack. This is ORV3 (Open Rack Version 3), the 21-inch standard. How tall is it? How much does it weigh? It weighs a whole lot. But if you go toward the back of the demo room where we have the AI center, you’ll see two of those racks back there, including the closed-door heat exchanger, which is incredibly cool.

You’re talking about driving 60 percent efficiency by doing 100 percent heat capture. You can actually see how we extract the heat, roll it through the coils, chill it with the fans, send it back through the air hallways, and recirculate it to cool down the servers. What the team’s accomplishing is pretty amazing.

Enterprise customers are not looking to deploy 256 GPU racks. They’re looking to get started with AI. And they’re really not so much struggling with compute, they’re struggling with how do I access my data to make AI a reality, which again, is why we wanted to hit the data story with storage to say, ‘Hey, we’re ready for you.’

We’re defining this. When you’ve got the Dell AI Data Platform, where you have incredibly fast, scalable storage with PowerScale and ObjectScale, Project Lightning on top of it, the caching layer we’re building with Nvidia and the Dell Data Lakehouse on top of that, you have all the things that you need to connect your data, pipe it in and get it into the AI system.

On that ecosystem, you mentioned bringing Google Gemini on prem. How big a model is that? Can you talk about how big a deal that is?

We’re working through them now. There are going to be four different models. There are going to be air-gapped versions and non-air-gapped versions.

There are a lot of customers that are used to Google tooling, right? And because they’re familiar with it, it’s super-efficient for them to deploy Google on their premises using that familiar tooling, where they don’t have to retrain the sales force and they can take advantage of Google’s AI capabilities.

It’s a no-brainer for the thousands of Google customers that are out there. They want security, performance, cost. They can have the best of both worlds. They can use Google Cloud, but they can use it on premises, in a very secure environment with industry-leading security.

And then we have a series of on-prem models, traditional models, thinking models, that all have varying levels of performance based on use cases.

Can you tell me about what you’re seeing out there that should encourage partners that they can manage the sort of data chaos that their customers are going to be bringing to them as they deploy AI?

When I talk to a lot of partners I always say, ‘Start with the fact that the majority of customers are going to take a look at their data center and realize that the data center of today ain’t the data center of tomorrow.’

They need to be thinking about how do they architect things over time for an AI world. Because AI is a system. And this system is fed through data.

So if you’re thinking that 80 to 90 percent of the world’s data sits in cold storage, you can easily think of a world where that data is now going to sit in warm and hot tiers, constantly in circulation, feeding these AI engines, feeding these agents.

So you’re going to need to think about your data strategy a whole lot differently, like I said today, ‘You’re gonna have to have a different data architecture.’

So to me, the biggest value-add that partners bring is to come in and talk about what is your data strategy and how are you architecting for the data future.

So many people talk about the compute, the network, and don’t get me wrong, that’s incredibly important, but that’s a means to an end. The data is the fuel. If you build the engine but you don’t gas it up, the car ain’t going anywhere. Or if you fill it with subpar gas, it’s not going to run for long, and it’s not going to run optimally. You’ve got to feed it with premium-grade fuel.

And to me, it’s about, ‘How do you help customers with their data strategy, near-term, mid-term, long-term?’ And our view is there are things that you can do to get started. You don’t have to wholesale rip everything out and put everything in. You can start to learn. But you need a point of view.

What we do is we write a bunch of hypotheses on paper, and then we get started, and then as we learn and we test these hypotheses, we adjust them. We scratch them out, and we create new hypotheses, right?

But you’ve got to be like, ‘What’s the near-, mid-, long-term strategy to really ensure that your data is ready for AI?’ That’s the number one thing.

In terms of storage, where should partners lean in?

When you think about the progress we’ve seen in PowerStore and some of the progress we’ve seen in PowerScale, we wouldn’t see that progress without an incredibly strong partner community. And not just the partners we partner with to build products, but the partners that we partner with to sell products and solve customer problems.

Think about what the future data center will look like, and think about it from a ‘Hey, I’ve got the private-cloud concept,’ which includes primary storage with PowerStore. It includes backup storage, or cyber resilience storage, with PowerProtect Data Domain, as well as very large scale-out primary storage with PowerFlex. You’ve got all of the things that you need to kind of create a private cloud.

But more importantly, as customers now look to shift to multi-hypervisor environments, which is a very serious thing, everything gets pushed now to disaggregated infrastructure. They want hardware protection. They want the ability to transition smoothly between cloud operating systems, but they want it under a common management framework for simplicity.

So enter the automation platform. This is really cool, and there’s a demo of that in the back as well that you can see how easy it is for a customer.

They order a PowerEdge. They order a PowerStore, just like if they were buying it online. It gets shipped to them. They have a secure key on the box. They connect it to the internet. It pulls down their ELA (enterprise license agreement). It could be a VMware ELA, a Nutanix ELA, a Red Hat ELA. And then it provisions the system and sets it up in a disaggregated way.

It has all the simplicity of HCI, but you got cost protection, you got the ability to provision different operating systems, and you got one management framework for everything. So it’s a very elegant solution.
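Lewis did not detail the automation platform’s interfaces, so the sketch below is a purely hypothetical illustration of the zero-touch flow he describes: a secure on-box key is exchanged for the customer’s license entitlement, and the system is then provisioned as disaggregated compute and storage. Every function and field name here is invented for the illustration.

```python
# Purely hypothetical sketch of a zero-touch provisioning flow; every function and
# field name is invented. This is not the Dell Automation Platform API.
from dataclasses import dataclass


@dataclass
class Entitlement:
    vendor: str        # e.g. "VMware", "Nutanix" or "Red Hat", per the interview
    license_key: str


def fetch_entitlement(secure_key: str) -> Entitlement:
    """Hypothetical call-home step: exchange the on-box secure key for the
    customer's enterprise license agreement. A real system would make an
    authenticated HTTPS request here."""
    return Entitlement(vendor="VMware", license_key="XXXX-XXXX-XXXX")


def provision(entitlement: Entitlement, compute_nodes: list[str], storage_nodes: list[str]) -> None:
    """Hypothetical provisioning step: install the chosen cloud operating system on
    compute while keeping storage as an independently scaled, disaggregated tier."""
    for node in compute_nodes:
        print(f"installing {entitlement.vendor} stack on {node}")
    for node in storage_nodes:
        print(f"joining {node} to the shared storage pool")


if __name__ == "__main__":
    ent = fetch_entitlement(secure_key="on-box-secure-key")
    provision(ent, compute_nodes=["server-01", "server-02"], storage_nodes=["storage-01"])
```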

You showed off some big enhancements to PowerProtect Data Domain. Backup and resiliency is still a big topic. What’s new here?

Now it’s an all-flash solution, which is great. It has industry-leading density. So raw, 504 terabytes per node, but with our five-to-one data reduction, that’s 36 petabytes per node.

So now we’ve actually made restores four times faster. We’ve got replication two times faster because of the efficiency we’re driving. It’s 80 percent less power consumption and 40 percent less rack space.

Not only are customers getting the benefit of the cyber resiliency inherent in Data Domain, they’re getting a lot of density and cost savings as well.

So this is where you can meet your sustainability goals and drive all the cyber resiliency that you want, because now you have an incredibly flexible, incredibly power-efficient cyber resiliency system.

This product literally redefines the art of the possible in enterprise cyber resilience.


What are some of the fears you hear from customers around AI adoption?

When we get in and talk to enterprise customers, I think there are a couple of different things that go through their heads, and it sort of varies by degree based on customer and segment and vertical and whatnot.

Number one is always around ‘How do I get started? What use case should I be testing, and how should I be thinking about calculating my ROI?’ And that sounds easy enough, but actually it’s not, because they have very specific requirements. You have to have that discussion.

Then you get into a model-level discussion, which we started this conversation with, which is, ‘Geez, I got all these different models to choose from. How do I know that I’m choosing the right model for the specific use case?’

Then the third piece comes in, which is a really hard one, which is, ‘All right, so now I understand the use case. I’ve got good thinking in my ROI calculations. I know which model I want to use. So now, how do I think about data?’

Because this particular use case touches 89,000 different applications and 10,000 different databases. And so how do I get all of my data in here?

And so then we have to have an architecture and infrastructure conversation. Customers are also very interested in security and governance, so they want to have some baseline around pipeline security and governance. Another pretty big objection is that they don’t really have the skill sets internally.

I think those would be like the six or seven big things that we hear from the enterprise.

You can get started very small, but this notion that AI is instantly deployable and instantly accessible is just not true.

You’ve got to get in. You’ve got to start small. You have to experiment, you have to learn. You have to build the skills. You have to build the capabilities, because this is where the world is moving. And again, I repeat what I always say, which is, whoever figures it out tomorrow will be eclipsed by the one who figures it out the day after that, and so on and so on, because the algorithmic innovation in this space is crazy.

For so many years in my career, this industry has been bottlenecked by compute. Now, compute is almost unlimited, so the software guys are running crazy and innovating like bandits. It’s amazing. Last year we were talking about perception models. Now, we’re talking about tree of thought, reasoning models and agents. That wasn’t a thing last year.

What’s going to be the reasoning models and tree of thought of next year? And the year after that? And then you’ve got Nvidia pumping out GPUs every six months. AMD is going to catch up.

Then you’ve got network bandwidth, which is going crazy. You went from 200 to 400 to 800 gigabits to 1.6 to 3.2 terabits per second in two or three years. The pace of innovation is crazy. But the big unlock here is the compute. Now for the software engineer, if you can think it, you can create it.