Nvidia Channel Chief Calls RTX Pro Servers Its ‘Largest Scale-Out Opportunity’

In an interview with CRN, Nvidia Americas Channel Chief Craig Weinstein calls the GPU-accelerated RTX Pro servers the ‘largest scale-out opportunity’ he’s seen in his nearly 10 years at the AI infrastructure giant because of the company’s plan to aim the product line at enterprises.

Nvidia’s Americas channel chief said the AI infrastructure giant’s new brand of GPU-accelerated RTX Pro servers represents the “largest scale-out opportunity” the company has seen in the nearly 10 years that he’s been working there.

In an interview with CRN, Craig Weinstein, vice president of the Americas partner organization, said Nvidia views RTX Pro servers as a multibillion-dollar opportunity for the channel because it’s aiming the product line at the many enterprise data centers that have traditionally run on CPU-only servers and are now coming due for a refresh.

[Related: Exclusive: AMD Makes Big Channel Funding Boost As It Builds ‘True’ Partner Program]

“If you think about what’s been going on in the server space between [the COVID-19 pandemic] and then after COVID, there’s a massive CPU refresh cycle that’s coming. And we believe the opportunity is significant [to] the tune of billions of dollars of opportunity to not only help customers save money but optimize their environment for AI,” Weinstein said last month.

Revealed at Nvidia’s GTC 2025 event in March, the RTX Pro servers are air-cooled systems powered by x86 CPUs and Nvidia’s new RTX Pro 6000 Blackwell Server Edition GPUs. Available in 8U and more standard 2U form factors, these servers are designed by Nvidia’s OEM partners to fit in standard data centers.

OEMs selling RTX Pro servers include Dell Technologies, Hewlett Packard Enterprise, Lenovo, Cisco Systems, Supermicro, Asus and Gigabyte.

These servers and the underlying GPUs are not nearly as powerful as Nvidia’s rack-scale offerings, such as the Blackwell Ultra-based GB300 NVL72 platform, which require liquid cooling and unprecedented amounts of electricity. That makes those rack-scale systems practical only for marquee customers like Microsoft, OpenAI and Amazon that are willing to make massive investments in new data center infrastructure.

But the relatively modest energy and cooling requirements of the RTX Pro servers are what will make them appealing to a much broader set of enterprise customers keen on using GPUs to power AI and other workloads that benefit from accelerated computing, according to Weinstein.

“This exists inside the customer’s current data center. We’re going to have the opportunity to [put RTX Pro servers] inside the existing power footprint of an enterprise customer. That’s a powerful message today when the world is power-constrained,” he said.

Partners See ‘Tremendous Opportunity’ With Some Reservations

Bob Venero, CEO of Fort Lauderdale, Fla.-based Nvidia solution provider partner Future Tech Enterprise, called the RTX Pro servers a “tremendous opportunity,” but he doesn’t anticipate a full-blown takeover of enterprise data centers.

“There are going to be workloads that are going to be very specific to the GPU. And then there are going to be workloads that are very specific to the CPU,” he told CRN. “I don’t see the CPU going away. I see each one of their use cases there, and there’s going to be a cost evaluation that needs to get done on running CPU versus GPU, [including] the power consumption [and] the cooling.”

However, Venero, whose company is No. 81 on CRN’s 2025 Solution Provider 500 list, does believe RTX Pro servers will gain more market share in enterprise data centers as a growing number of workloads get infused with AI.

“We’re working hand-in-hand with Nvidia on bringing that product set to some of the largest companies in the globe. We are heavily involved,” he said.

A director-level employee at a U.S.-based Nvidia systems integration partner told CRN that he sees the RTX Pro server strategy as a move by Nvidia to fight against the rise of CPU-based inferencing, which could threaten GPU adoption in enterprise data centers.

“They’re looking at this through a strategic lens of, ‘Could we introduce the RTX Pro 6000 into that environment as a way to compete and then convince OEMs to start standardizing builds that still allows us to put CUDA in an environment where maybe there isn’t CUDA today?’” said the director, who asked to not be identified to speak candidly.

Christopher Cyr, CTO of North Sioux City, S.D.-based Nvidia systems integration partner Sterling Computers, told CRN that he believes Nvidia will succeed in becoming a major enterprise data center vendor because of the maturity of its software stack as well as the big performance boost GPUs can provide over CPUs for a growing number of applications.

While Cyr said GPU-accelerated servers carry an incremental cost over CPU-only systems, he argued that customers should weigh that cost against the value of Nvidia’s vast software ecosystem, including its NIM microservices, which are designed to speed up the development of AI applications by packaging AI models and other software components in containers.

“If I can download a NIM that I can have up and running today, versus writing code or finding some executable that runs on CPU or whatever that might cost—human resources as well as compute resources—all that needs to be weighed,” said Cyr, whose company is No. 54 on the 2025 CRN Solution Provider 500 and won an Nvidia Partner Network award last year.
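
For a sense of what Cyr’s cost comparison looks like in practice, the sketch below shows how an application might call a locally hosted NIM from Python once the container is running. NIMs expose an OpenAI-compatible HTTP API, but the endpoint, port and model identifier used here are illustrative assumptions rather than details cited by Sterling or Nvidia.

```python
# Minimal sketch of querying a locally hosted NVIDIA NIM microservice.
# NIM containers expose an OpenAI-compatible HTTP API; the base_url, port
# and model identifier below are illustrative assumptions -- check the
# specific NIM's documentation for the actual values.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:8000/v1",  # assumed local NIM endpoint
    api_key="not-used",                   # a local NIM typically ignores the key
)

response = client.chat.completions.create(
    model="meta/llama-3.1-8b-instruct",   # hypothetical model identifier
    messages=[{"role": "user", "content": "Summarize these release notes in three bullets."}],
    max_tokens=256,
)
print(response.choices[0].message.content)
```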

Andy Lin, CTO and vice president of strategy and innovation at Houston-based Nvidia systems integration partner Mark III Systems, said he considers the RTX Pro 6000 GPU to be in the sweet spot for what he calls “tier-two types of models” for training, inference and digital twin purposes.

This is in large part because of the GPU’s 96 GB of memory, which is a significant step up from the 48-GB capacity of its predecessor, the L40S from 2023, but less than what is offered in some of Nvidia’s most expensive GPUs, such as the B200 and B300.

“There are a lot of clients that need greater than 80 [GB] but don’t need 180 or 288 [GB] or whatever, with the B200 and B300, and they don’t want to pay that price point. So having 96 gigs is like the perfect amount,” said Lin, whose company has won multiple Nvidia Partner Network awards over the past few years.
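
Lin’s sizing argument comes down to simple arithmetic on model weights. The back-of-the-envelope sketch below is illustrative only; the model sizes and precisions are assumptions rather than figures cited by Lin, and real deployments also need memory for KV cache, activations and framework overhead.

```python
# Back-of-the-envelope weight-memory sizing (illustrative assumptions only).
# Weights alone take roughly parameters x bytes-per-parameter; real
# deployments also need headroom for KV cache, activations and overhead.
def weight_memory_gb(params_billions: float, bytes_per_param: float) -> float:
    return params_billions * bytes_per_param  # billions of params x bytes each = GB

for params in (8, 70):                        # hypothetical model sizes, in billions
    for precision, nbytes in (("FP16", 2), ("FP8", 1)):
        print(f"{params}B model @ {precision}: ~{weight_memory_gb(params, nbytes):.0f} GB of weights")

# A ~70B-parameter model at FP8 needs roughly 70 GB for weights alone,
# which overflows a 48 GB L40S but leaves headroom on a 96 GB RTX Pro 6000.
```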

That memory capacity, combined with the RTX Pro 6000’s multi-instance GPU capability, makes it the “general-purpose, jack-of-all-trades GPU,” which the Mark III Systems CTO said is why Weinstein “has such optimism” for the product and the servers it will power.

“We’ve already worked with a number of clients who have implemented RTX Pro very successfully, even in just like one or two short months. And we have a good amount of proposals out there for larger deployments,” Lin said.

Why Nvidia Sees RTX Pro As A Big Opportunity With Enterprises

Anne Hecht, senior director of enterprise AI at Nvidia, told CRN that the company is looking to convert the world’s 3,000 largest companies—some of which already use its rack-scale offerings—into RTX Pro server customers.

Among those that have already been convinced are Disney, Foxconn, Hitachi, Hyundai Motor Group, Lilly, SAP and TSMC, the company announced late last month.

Hecht said that businesses are seeking the accelerated computing benefits of GPUs for standard data centers because they are no longer finding meaningful performance boosts with new generations of CPUs and see multiple benefits in shifting workloads to GPUs.

“A 2U system with two RTX Pro [GPUs] is going to require more energy than a CPU-only 2U system, but the performance gains and the amount of work you can do on that system far exceeds what you can do on the CPU system,” Hecht said.

Nvidia is claiming that a 2U system with two RTX Pro Blackwell GPUs can deliver up to 45 times better performance than a CPU-only 2U system. Hecht said this is the average performance boost the company saw in tests across data analytics, simulation, rendering and video processing workloads.

This means that customers can significantly consolidate their server farms while achieving the same level of performance, according to Hecht. “From a few hundred to a handful of systems,” she offered as an example. As a result, companies can reduce their carbon footprint, energy consumption and real estate, she added.
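
Taking Nvidia’s claimed 45-times figure at face value, the consolidation math Hecht describes is straightforward. In the sketch below, the starting fleet size is an assumed example, and actual results will vary widely by workload.

```python
import math

# Rough consolidation estimate using Nvidia's claimed average speedup.
# The fleet size is an assumed example; the 45x figure is Nvidia's own
# claim and will vary widely by workload.
cpu_servers = 300        # hypothetical existing CPU-only 2U fleet
claimed_speedup = 45     # Nvidia's claimed average across tested workloads

gpu_servers_needed = math.ceil(cpu_servers / claimed_speedup)
print(f"{cpu_servers} CPU-only servers -> ~{gpu_servers_needed} GPU-accelerated 2U servers")
# 300 / 45 = 6.7, so about 7 systems -- "from a few hundred to a handful"
```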

Hecht said that customers can use some of that freed-up capital, real estate and energy to “invest in more AI systems.”

It’s in this area where Nvidia sees a big opportunity to sell RTX Pro servers to enterprises, particularly those that are just getting started with AI applications. That is especially the case given the growing hype around AI agents, which are designed to automate a series of tasks, according to Hecht.

Weinstein said enterprises could find multiple uses for RTX Pro servers.

“The modern enterprise has multiple lines of business with different facets of workloads that all need access to GPU computing. And so this platform … meets the needs of all of those applications and can reside in a common, air-cooled, low-power-consumption data center, which is where most enterprise customers are today,” Weinstein said.

While Nvidia is mainly aiming RTX Pro servers at enterprises, the channel chief said the product line will also find appeal among midmarket customers.

“They’ve been running a CPU-bound x86 architecture, and their servers are up for refresh, and we have partners that will help them and recommend RTX Pro as the primary platform for not only their current needs, but future-proof their environment for years to come,” he said.

Nvidia Works ‘Deeply’ With OEMs To Enable RTX Pro Channel Sales

Weinstein said Nvidia is “working deeply with every OEM to make sure that they have the right programs” and financial incentives in place to enable channel partners to sell RTX Pro servers along with related software and services.

“We’re deeply involved with all of our OEMs to make sure they understand everything: the value proposition, the tools, the assets and obviously the benefits of a platform like this,” he said. “Then we’re mapping all of their channel routes to market, so we know who all of their downstream partners are.”

Many of these channel partners, which work with some or all of the OEMs, “also deeply understand” Nvidia’s strategy for the RTX Pro servers, Weinstein added.

To get channel partners properly equipped to sell and service RTX Pro servers, the channel chief said Nvidia is deploying a mix of technical and sales enablement resources.

“Obviously, we want to make sure the solution architects that are building and designing data centers for these emerging workloads understand the value proposition,” Weinstein said. “[Nvidia is also moving to ensure] the sales teams are prepared to talk about the workloads that these environments are being run on.”

On top of selling RTX Pro servers, these partners can also sell licenses for the Nvidia AI Enterprise software, which is a suite of software tools, libraries and frameworks that help customers develop and run AI applications.

“So they have a software opportunity. They have a platform opportunity. They obviously have all the ingredients to build out a larger data center if that’s what the customer would like to do. And so I think the profitability story for partners is fantastic, and we’re directly aligned with our OEMs to help them,” Weinstein said.

With the Nvidia Partner Network’s roster standing at 500 channel partners in North America as of the beginning of this year, the channel chief said he doesn’t see a need for the invite-only partner program to grow its ranks to fulfill the RTX Pro server opportunity.

The only exception Weinstein can imagine is if there are channel partners that are not yet in the program but focus on enterprise high-performance computing opportunities with ISVs such as Ansys, Cadence Design Systems, Hexagon and others that play an important role in the software ecosystem for RTX Pro servers.

“We want the most important partners that serve the enterprise in our program. We do not want to try to create a long tail of partners, and I think that even stays true with RTX Pro,” he said. “We have the right partners to help us scale it out.”

Nvidia Sees ‘Larger Economic, Services Opportunities For Partners’

For channel partners selling RTX Pro servers, Weinstein sees “larger economic and services opportunities” opening up other streams of revenue and profit.

These services opportunities start at the planning stage for customers that are looking to retrofit data centers with RTX Pro servers, with partners having the ability to provide assessments and evaluations on the optimal configuration, according to the channel chief.

“For that, you’re doing quite a bit of planning regarding how the data center will scale, not only from a storage and a power perspective but also the physical server footprint. And customers can do a lot more with a lot less compute,” Weinstein said.

“So redesigning and rethinking what their data center looks like and recapturing power in that data center and repurposing dollars towards these types of new systems is a huge opportunity for partners,” he added.

Weinstein said partners also have “ideation and consulting opportunities around the workloads customers are running” and around the right platform to support them.

From there, the opportunities get more exciting, according to the channel chief, with partners able to help customers accelerate their AI efforts.

“That’s where the services flywheel for partners really starts to get going. It’s less about the physical infrastructure. It’s more about consulting with customers on how they can get some of these really important applications moving now that the platform is in place,” he said.

Cyr, the CTO at Nvidia partner Sterling, said the affordability of the RTX Pro servers is one big factor that will allow Nvidia to create a larger services opportunity for the channel than its previous data center offerings have.

“If a partner is already working within the Nvidia AI ecosystem, it’s going to be a majority role of [their services opportunity with Nvidia] because it’s a much [lower] barrier to entry, lower cost, [and it] runs the same software,” he said.