Parallel Works Looks To Bring Order To AI Workloads With New Control Platform
Parallel Works has debuted Activate AI, a control plane with integrated Kubernetes support that the company says simplifies the deployment and management of scalable AI and machine learning workloads across hybrid, multi-cloud IT environments.
As AI systems proliferate throughout IT estates, managing AI workloads and the infrastructure that supports them is becoming a challenge. Parallel Works, which develops technology for managing hybrid, multi-cloud computing resources, has launched a new edition of its platform for deploying, scaling and managing AI and machine learning workloads.
Activate AI, according to Parallel Works, is the latest edition of the company’s flagship Activate control plane platform, adding AI resource integrations, Kubernetes support, and support for Neocloud (GPU-as-a-service) providers to simplify the deployment and management of AI and ML applications and processes.
Parallel Works, headquartered in Chicago, describes its Activate unified control plane as an “operating system” for orchestrating and utilizing HPC (high-performance computing) resources, particularly across hybrid and multi-cloud computing environments. That allows IT managers to easily provision, manage and share compute resources across on-premises, cloud and hybrid systems.
[Related: The 10 Hottest AI Startup Companies Of 2025 (So Far)]
“So we become the unifying interface between all these different system types,” said Parallel Works CEO Matthew Shaxted in an interview with CRN.
Operationalizing AI is a challenge due to fragmented infrastructure, high GPU costs, underutilized GPU resources, and the lack of a clear path to production-ready AI, according to the company. Parallel Works’ new Activate AI offering expands the Activate platform’s capabilities into the AI realm and is designed to accelerate the shift from AI research workflows to production.
Activate AI enables more efficient AI infrastructure management across hybrid and GPU-intensive deployments, according to Parallel Works. Organizations can run large-scale model training, inference and simulation workloads across secure environments, including GPU-as-a-Service (GPUaaS) clouds, legacy systems and next-generation containerized systems.
A key feature of Activate AI is its ability to integrate existing Kubernetes clusters, whether on-premises or in the cloud, into these environments for HPC tasks while providing control over user access and namespace resource allocation across projects and teams.
Activate AI is integrated with third-party AI resources such as Amazon SageMaker, Azure Machine Learning Workspace, and open chat APIs for large language models, Shaxted said. And Parallel Works has partnered with several “Neocloud” GPU infrastructure service providers including Canopy Wave, Voltage Park and VULTR to develop a reference architecture model, the CEO said.
Activate AI enables chargeback and showback for assigning usage-based pricing and tracking internal resource consumption across Kubernetes clusters. The system manages GPU resources across multiple users with dynamic partitioning, and it can run and migrate workloads across Kubernetes, batch schedulers and virtualized environments. It also runs and optimizes workloads across Nvidia, AMD and Intel AMX systems.
“Enterprise leaders need AI to be a competitive advantage, but most enterprises are stuck navigating fragmented infrastructure and mounting costs, which can turn AI initiatives into a strategic liability,” Shaxted said. “Our goal is to bridge the final gap between complex infrastructure and practical AI deployment, making systems accessible without requiring deep infrastructure expertise.”
GPUaaS provider Canopy Wave currently has a reference-sell relationship with Parallel Works, offering the Activate control plane system as an option to its customers, said James Liao, Canopy Wave founder and CTO, in an interview with CRN. “We see a lot of common interests between the two companies,” he said.
With AI, many businesses and organizations keep their data on premises but use cloud compute resources from providers such as Canopy Wave for model training and AI inferencing applications. Other resources, such as front-end applications, reside on traditional CPU-based public clouds. All of this requires tools like Activate AI to manage those resources and workloads, according to Liao.
“This is actually a growing need in our data centers,” Liao said. “We’re seeing more and more people having the need to coordinate the CPU public cloud with the private cloud on the GPU side,” including AI tasks, data, the internet connection and different Kubernetes clouds.
In addition to GPU service providers, Parallel Works has been working with a number of technical integrators, including GDIT and Raytheon, which white-label the Activate platform and provide it to their public-sector customers, Shaxted said. The company has pursued opportunities with solution providers World Wide Technology and Cambridge Computer Solutions, and with IT infrastructure providers Mark III Systems and Penguin Solutions, according to the CEO.
Shaxted said Parallel Works is in the early stages of developing a partner program and building up a partner ecosystem. The company recently began working with government solutions provider Carahsoft and has been in discussions with other major systems integrators about adopting the technology.
“So the channel is kind of new for us,” Shaxted said, noting that the company just hired a channel manager in January. “We’re in the process of activating the entire channel, I would say,” he said.