Nutanix CEO On Treating AI As Core Infrastructure: ‘This Is Now About Your Competitive Edge’
As agentic AI moves from pilots into production, technology executives face mounting pressure to govern cost, data, and risk—forcing a rethink of platforms, not just workloads.
At the Nutanix .NEXT 2026 conference, the message was clear: Agentic AI is no longer a future concept; it is now an operational reality.
Yet as AI moves from pilot projects into production, it is exposing hard questions surrounding governance, cost control, data sovereignty and infrastructure flexibility.
[RELATED: 5 Rules To Getting Started With AI Governance]
In the keynote that officially kicked off the event, Nutanix CEO Rajiv Ramaswami and several of the company's enterprise customers explained why AI adoption is forcing IT leaders to rethink their operational platforms, not just their workloads.
Here are some key takeaways from Nutanix’s CEO and its customers:
Agentic AI Is Moving Into Governed Production
Ramaswami described a shift away from simple AI prompting toward autonomous agents that operate continuously across the enterprise.
“We are rapidly moving from an era of prompting to the era of delegating and empowering autonomy with agents,” Ramaswami said. “This is now about your competitive edge.”
He emphasized that organizations are now dealing with more users, more agents and more data—often with limited infrastructure resources. As a result, AI must be treated like core infrastructure, requiring standardized platforms and operational controls rather than fragmented tools.
Governance And Data Sovereignty Are Non-Negotiable AI Requirements
The keynote and customer stories made it clear that regulated industries cannot let AI operate unchecked. As AI spreads, clear rules on where data lives, who can access what, and how AI is allowed to operate are mandatory.
That concern is already shaping real-world decisions. Dan Regalado, CIO of Wynn North America, who joined Ramaswami on stage at .NEXT, emphasized that regulatory constraints and data residency both define how AI can be deployed.
“Our gaming data cannot leave the state,” Regalado said. “Data security and data residency are non-negotiable for every one of our resorts.”
Even organizations experimenting with AI in the public cloud are reassessing where production workloads belong.
“We’ve done it in the cloud,” Regalado added, “but we’re actively researching how we might do it better on‑prem or in a hybrid model—because governance and control matter.”
AI Cost Predictability Has Become An IT Operations Issue
As AI use scales, tracking usage and managing costs are no longer just financial concerns.
Nutanix highlighted the growing importance of usage metering and “cost per token” visibility to prevent AI initiatives from becoming budget liabilities.
[RELATED: Analysis: How The Midmarket Can Deliver ROI With AI]
“As you use more and more models, you run into challenges around tracking usage and managing cost,” Ramaswami said.
For Wynn, cost efficiency is already influencing platform decisions.
“Cost efficiency is one of the major factors we’re evaluating,” Regalado said, “as we decide whether to stay purely in the cloud or invest on‑prem.”
Without metering, AI spending quickly becomes opaque. CIOs need infrastructure that surfaces usage clearly and supports predictable budgeting.
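To illustrate the kind of "cost per token" visibility described above, here is a minimal sketch of a per-team usage meter. The model names and per-1K-token rates are hypothetical placeholders, not actual vendor pricing, and a production system would pull rates from a billing API rather than a hard-coded table.

```python
from collections import defaultdict
from dataclasses import dataclass, field

# Hypothetical per-1K-token rates; real rates vary by model and provider.
RATES_PER_1K = {"model-a": 0.0005, "model-b": 0.0030}

@dataclass
class UsageMeter:
    """Aggregates token usage and estimated spend per (team, model) pair."""
    tokens: dict = field(default_factory=lambda: defaultdict(int))
    cost: dict = field(default_factory=lambda: defaultdict(float))

    def record(self, team: str, model: str, tokens_used: int) -> None:
        # Accumulate raw tokens and the estimated dollar cost for this call.
        key = (team, model)
        self.tokens[key] += tokens_used
        self.cost[key] += tokens_used / 1000 * RATES_PER_1K[model]

    def team_spend(self, team: str) -> float:
        # Total estimated spend across all models a team has used.
        return sum(c for (t, _), c in self.cost.items() if t == team)

meter = UsageMeter()
meter.record("marketing", "model-a", 120_000)
meter.record("marketing", "model-b", 40_000)
meter.record("support", "model-a", 500_000)

print(f"marketing: ${meter.team_spend('marketing'):.2f}")  # $0.18
print(f"support:   ${meter.team_spend('support'):.2f}")    # $0.25
```

Even a simple ledger like this lets IT surface which teams and models drive spend, which is the prerequisite for the predictable budgeting the customers described.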
Platform Simplicity Matters More Than Ever For Smaller IT Teams
Customer examples showed that many IT teams lack the staff to manage a growing mix of AI tools. Their message was specific: Reducing operational complexity through unified platforms is critical to sustaining innovation without expanding head count.
For many organizations, AI ambition can clash with limited staffing, said Josh Hostetler, lead platform engineer at Tire Rack.
“I’m on a platform engineering team of three people,” Hostetler said. “Our goal was to reduce administrative burden without adding another tech stack or more engineers.”
Tire Rack modernized incrementally—starting with foundational workloads and evolving toward containers—without overwhelming the team.
“We started simple,” Hostetler said. “Stability mattered as much as scale.”
AI platforms that increase operational complexity can halt adoption.
AI Is Forcing Platform Adaptability
From supply chain constraints to cloud cost concerns, CIOs are evaluating where workloads should run. Hybrid and on‑premises options are gaining renewed attention as AI demands more platform adaptability.
[RELATED: What Midmarket CIOs Must Prove By EOY 2026: Fewer Platforms, Faster Security, Measurable Outcomes]
Stephen Hall, vice president of infrastructure and operations at BlueCross BlueShield of Tennessee, underscored the importance of adaptability as AI reshapes infrastructure planning.
“Infrastructure leaders need adaptability,” Hall said, “because the industry will keep evolving.”
This article originally appeared on CRN sister website MES Computing.