Peak:AIO CEO Says Software-Defined Storage Firm ‘Really Focused On The AI Space’

Peak:AIO, long self-funded and profitable, recently secured its first institutional round to scale globally and expand its channel reach, building on the foundation of its software-defined storage technology, which aims to deliver AI-grade performance from minimal hardware.

Peak:AIO is sharpening its focus on the booming AI infrastructure market with a software-defined storage platform the company says delivers high performance with far less hardware than competing offerings.

A big part of that is a $6.8 million seed funding round the company closed in October, after years of bootstrapping itself to profitability, as a way to accelerate global expansion and scale its channel presence, CEO Roger Cummings told CRN in an exclusive conversation.

Unlike rivals building large, enterprise-heavy architectures, Manchester, U.K.-based Peak:AIO gets its edge from extracting maximum performance from single nodes where competitors have historically required 10 to 12 nodes, Cummings said. That, in turn, makes Peak:AIO’s storage platform especially suited for AI workloads.

[Related: The 50 Coolest Software-Defined Storage Vendors: The 2025 Storage 100]

“With this round of funding and future rounds of funding, we’re going to become even more intelligent about that storage for AI, because we’re going to understand those AI workloads, the read and write characteristics of those workloads, the data placement characteristics of those workloads,” he said. “A lot of scenarios, whether it be internally for an organization that has thousands of GPUs or a GPU-as-a-service organization, we’re going to understand that workload and show the workflow of where that job should run.”

Cummings also said Peak:AIO’s go-to-market strategy leans heavily on channel partners as well as OEMs and ODMs, as the modularity of its technology gives partners a way to land small and grow to exabyte scale by capturing AI workloads early.

With its new funding, Peak:AIO plans to expand its U.S. footprint, invest in workload-aware data placement, deepen its work in vertical AI frameworks such as MONAI (medical open network for AI), and push further into deep-cache and cloud-native orchestration technologies.

There’s a lot going on at Peak:AIO. For a deep dive on what this software-defined storage company is doing, read CRN’s entire conversation with Cummings, which has been lightly edited for clarity.

How do you define Peak:AIO?

Peak:AIO is a software-defined storage platform for organizations to optimize and scale their data performance for AI and HPC workloads.

A lot of software-defined storage companies say pretty much the same thing about preparing data for AI workloads. What is Peak:AIO doing differently?

We started in our very early years with a secret sauce that was always to get the most out of the most minimal infrastructure we could, meaning let’s get the highest level of performance we can out of a single node. And so early in our company, we were getting the same level of performance, storage and data performance, out of a single node that our competitors would get out of 10 to 12 nodes. That was our secret sauce. And that was great for scaling deep, but scaling out was initially an additional challenge.

So with a recent announcement that we made at this year’s MSST (Massive Storage Systems and Technology) conference, we introduced our open-source scale-out file system based on pNFS. I’m very, very excited about this. pNFS takes that single-node efficiency and performance that we have and allows us to build on top of it. So now you can go deep on a single node, add another node, and go deep on that. It combines storage performance and the file system, and allows you to scale out in the very efficient, cost-effective, modular way that organizations need to be successful. What’s really great about that file system is that it is open source, so now organizations can scale out in a non-proprietary fashion and reduce costs as they do it.

Peak:AIO also recently closed a funding round.

Yeah, this is our first round of institutional funding. The great thing about the company is we’ve been there and done it. Mark [Klarzynski, co-founder, chief strategy officer, and chief technology officer] and I have been in the industry for a long time, and we wanted to get outside funding when we had a lot of the proof points in place to get the valuation and the amount of money that we need to take the company further.

Mark self-funded the company, and we were able to turn it into a profitable one. I came on about a year-and-a-half ago, and the company was profitable before I came on board and has continued to be profitable since. We self-funded it until this year.

So Peak:AIO pretty much bootstrapped itself into profitability before it went to seek external funding. That’s a fairly unusual model. Why that approach? Why wait so long?

A lot of times you’re building a product and you need to bring in the talent to do that. We’ve been able to bring in the talent based on our relationships and a tremendous amount of sweat equity to build the company, and we wanted to make sure that we’re building the right company. A lot of times, funding is used by some companies to figure out what they want to do. We wanted to take the funding in when we had the product, the customers, the testimonials, the test cases, and the use cases to accelerate what we’ve done so far. That was the difference in how we approached the market. And I think that’s what the venture capital and investment community wants to see. We bring a lot of experience to the table with me, Mark, and the rest of the management team, to take that investment, use it wisely, and really capitalize on it in our next and future rounds of funding.

Given that the company is already profitable, which suggests it’s cash-flow positive as well, why do you need the funding at this point?

It’s really for global expansion. We have had a tremendous amount of success in Europe, specifically in the U.K. We have some great customers, like Los Alamos National Labs in the U.S., but we really need to put funding behind the development roadmap we have from a product perspective, and also get ourselves anchored in the markets where we have growing success and get the people out there that we need. It was really a global go-to-market fundraising event.

Who would you say Peak:AIO’s top competitors are?

We go against companies like Weka, Vast Data, Hammerspace, and DDN, but they’re really chasing the enterprise market, and they’re chasing things that I sold many, many moons ago, like global redundancy and things of that nature. We’re really focused on the AI space. We want to make the best product available for folks that want to be successful. You hear all these horror stories about AI failure rates and things of that nature. We want to provide a solution that addresses the AI market specifically. We’re doing that with very cost-effective storage performance and acceleration. We’re doing that with a scale-out file system that’s very easy and embedded into our system. For us to be successful in AI, it has got to be simple. We really focus on a solution that is simple to implement and helps organizations scale and be successful in their AI applications. Eventually, as we grow and mature, we might eat away at those competitors’ market share at the enterprise level. But we’re going to stick to our knitting and stay very focused on helping people be successful around both their inference and their training applications within AI.

You talked about storage for AI. What’s different about storage design for AI compared to more general-purpose storage? How can you call it storage for AI?

That’s a great question. With this round of funding and future rounds, we’re going to become even more intelligent about that storage for AI, because we’re going to understand those AI workloads, the read and write characteristics of those workloads, the data placement characteristics of those workloads. In a lot of scenarios, whether it be internally for an organization that has thousands of GPUs or a GPU-as-a-service organization, we’re going to understand that workload and show the workflow of where that job should run. So it’s understanding the idiosyncrasies of the read and write capabilities of those workloads and the HA (high availability) protection that you need within those workloads.

Also, we’re doing a tremendous amount of work within the deep memory space, because an AI workload has a much different characteristic than an enterprise workload when it comes to keeping the number of active tokens you want up and running to avoid unneeded re-computation of the models. Those are the unique things that I think you see within the AI space versus the enterprise space.

What is Peak:AIO’s go-to-market strategy in terms of channel?

I’ve built two- and three-tier distribution throughout my whole career. The channel is always asking, how do I build a business based on your solution? We’re super sensitive to that. We want to make sure that the channel embraces us and can see how to build a solution. And with the modular fashion of our technology, we can grab those workloads and scale to the exabyte scale. There are probably 20 of those super node or super pod customers out there, so there’s a tremendous amount of market for a solution that can scale from a single node. So what we’re doing is giving partners the ability to capture those workloads early and then scale those workloads. We’re doing that through value-added resellers. We’re doing that through OEMs and ODMs. Because of our modular nature and how we are software-defined, we can fit and sit on top of a lot of people’s infrastructure to do that.

So we’re excited about future partnerships that we’ll be announcing on the server side, the disk side, as well as the switch side. There are a lot of different places that you’ll find Peak. And our ability to distribute the product and the partner ecosystem we’re developing are going to be able to take an open-source product and build a great business around it, helping customers, because we’ve done all we can to make the product simple to use. Organizations still lack the amount of talent needed from an AI perspective in general, so we have strategic partnerships lined up, which will also be announced, that will take our solution and really help us go to market.

What are some things Peak:AIO is working on going forward?

I talked a little bit about some of the things we’re doing from a flexible data placement aspect that will become more and more mature. Think of it as using AI within our solution to understand those workloads better. That’ll mature tremendously. You’re also going to see us do a lot of work within what I call the model framework. One example of that is in the MONAI (medical open network for AI) space, which is in the medical imaging space. So you’re going to see us go vertical a little bit as well.

We’re also seeing a big need for deep cache capability, and we’re providing that as well as taking the orchestration of storage and moving it into more of a cloud-native environment. We’re very excited about where we’re taking this whole orchestration of storage, this zero tier that you’ve heard tremendous buzz about. There’s no reason why you can’t make that zero tier expansive across on-prem and off-prem scenarios. So we’re really excited about where we’re going and what we’re doing to take this orchestration of AI infrastructure and move it to a cloud-based environment as well.

Is there anything else we need to know about Peak:AIO?

We’re getting a tremendous amount of interest from organizations across all the verticals. The GPU-as-a-service organizations really are looking at us as a way to drive top-line revenue as well as reduce their bottom-line expenses. Today, the infrastructure they have to implement is very complex. There is an incredible amount of power and cooling associated with those file systems. They’re very expensive and a challenge to manage. We feel our solution, given its modular nature, can grow to support those customers and can scale back, because that’s the nature of AI. [Our] file system is scalable, both from an economic perspective as well as an ease-of-use perspective. I think that’s a unique value proposition we have in the market that has resonated really well with customers.