Supercomputing 2025 Shines Spotlight On AI, HPC-Targeted Performance

This week’s Supercomputing 2025 conference in St. Louis brought a wide range of storage, server and other data center infrastructure technology aimed at helping businesses get ready for higher-performing AI and high-performance computing workloads. CRN highlights 18 of the latest offerings.


This was a busy week in St. Louis where the annual Supercomputing 2025 conference brought together a wide range of industry professionals to explore the latest hardware and software offerings targeting the burgeoning AI and high-performance computing (HPC) markets.

There were several hundred exhibitors ranging from small distributors and software vendors to large universities and even governmental organizations looking to demonstrate new technologies aimed at next-generation computing infrastructures.

And then there were the products. Exhibitors showed a wide range of offerings from the tiniest of ICs and SSDs to supercomputers to high-performing clouds.

[Related: HPE’s Nvidia AI Factory Solution Blitz: What You Need To Know]

CRN is highlighting 18 of those new technologies.


Supermicro Air-Cooled 10U GPU Server

San Jose, Calif.-based Supermicro used Supercomputing 2025 to introduce its new air-cooled 10U GPU server, Model AS A126GS-TNMR, with eight AMD Instinct MI355X GPUs. This eight-GPU system leverages the industry-standard OCP Accelerator Module (OAM) and offers 288 GB of HBM3e memory per GPU and 8-TBps memory bandwidth. The MI355X GPUs boost GPU power from 1,000W TDP to 1,400W TDP, delivering double-digit percentage performance gains over the air-cooled 8U MI350X system. With the new 10U option added to Supermicro’s lineup of AMD MI355X-powered GPU servers, the company says customers can unlock higher performance per rack on both air-cooled and liquid-cooled infrastructure at scale.

Broadcom Thor Ultra

Semiconductor and infrastructure software solution developer Broadcom showed off its Thor Ultra, which the Palo Alto, Calif.-based company called the industry’s first 800G AI Ethernet Network Interface Card (NIC) capable of interconnecting hundreds of thousands of XPUs to drive trillion-parameter AI workloads. By adopting the open Ultra Ethernet Consortium (UEC) specification, Thor Ultra lets customers scale AI workloads with what it termed unparalleled performance and efficiency in an open ecosystem.

DataCore Nexus

DataCore Nexus delivers a high-performance, software-defined parallel file system with ultra-low latency and high throughput of up to 180 GBps in a 4U footprint for demanding HPC and AI workloads. Built in part on technology from Fort Lauderdale, Fla.-based DataCore’s 2025 acquisition of ArcaStream to exploit high-speed InfiniBand fabrics and Nvidia GPUDirect for direct, low-overhead data paths, it accelerates compute pipelines at scale. Nexus provides intelligent, policy-driven data orchestration across scratch, project/home directories, S3 archive and cloud tiers, and offers a unified global namespace with multi-protocol access (POSIX, NFS, SMB, S3). By consolidating silos and automating data movement, Nexus helps speed time-to-results, simplify management, and provide global collaboration while ensuring data remains accessible wherever it resides.

HPE Cray Supercomputing GX5000

The HPE Cray Supercomputing GX5000 from Spring, Texas-based HPE is a next-generation supercomputing system purpose-built for the AI era. The system offers three multi-partner, multi-workload compute blade options for industry-leading density, unified HPE Supercomputing Management Software to provide multitenancy, and HPE Slingshot 400, which is designed to perform at scale under large AI workloads. It is augmented by what HPE called the industry’s first factory-built storage system with embedded Distributed Asynchronous Object Storage (DAOS) open-source software, the HPE Cray Supercomputing Storage Systems K3000, which allows supercomputing customers to run input/output-bound AI applications with higher productivity.

Vdura Data Platform V12

The Vdura Data Platform V12 from Milpitas, Calif.-based Vdura was designed to boost scalability and resilience for AI and HPC environments. V12 introduces an Elastic Metadata Engine that scales linearly across nodes to help accelerate metadata operations up to 20X. New Snapshot Support enables instant, space-efficient dataset copies for pipelines, checkpoints and recovery. Optimized integration with SMR HDDs unlocks up to 30 percent more storage capacity per rack while maintaining throughput. Building on V11, V12 delivers over 20 percent higher aggregate performance, reduces cost per terabyte by 20 percent, and simplifies data protection. General availability is planned for the second quarter of 2026 with seamless in-place upgrades.

Hammerspace v5.2

Hammerspace v5.2 delivers major performance, security and ecosystem enhancements that help organizations unify, automate and accelerate AI and high-performance workloads across on-premises, hybrid and cloud environments. Redwood City, Calif.-based Hammerspace says the release raises the bar on standards-based parallel file system performance, especially for AI and HPC workloads. A key driver is Hammerspace’s ongoing contribution of client-side NFS performance improvements to the upstream Linux kernel, specifically engineered to accelerate demanding workloads. By tightly integrating its software with these kernel advancements, Hammerspace said it provides dramatic performance gains without requiring proprietary client installations or locking data into vendor-controlled silos, enabling true flexibility and scale for modern AI initiatives.

Hitachi Vantara VSP One Block High End (BHE)

Hitachi Vantara, Santa Clara, Calif., used Supercomputing 2025 to expand its VSP One family with VSP One Block High-End, a next-generation data platform engineered for performance and resilience. Delivering up to 50 million IOPS, 60 TB of NVMe SSDs, and future-ready 100-Gbit TCP/64G FC connectivity, it unifies high-end block workloads across open systems, mainframes and hybrid clouds. VSP One BHE offers 100 percent data availability, immutable snapshots, FIPS 140-3 compliance and clean cyber recovery within seconds. With dynamic carbon reduction for reduced CO2 emissions, a 4:1 data reduction guarantee, and unified AIOps-driven management through VSP 360, it provides a scalable, secure and energy-efficient data foundation for the AI era.

Quantinuum Helios

The Helios quantum computer from Broomfield, Colo.-based Quantinuum was designed to accelerate quantum computing adoption by enterprises. With what Quantinuum calls the highest fidelity of any commercial system and a real-time control engine, Helios enables developers to program a quantum computer in much the same way they program classical computers. Quantinuum is deepening its partnership with Nvidia, combining its systems with Nvidia GPUs and new integrations with GB200, NVQLink, CUDA-Q and Guppy to advance hybrid quantum-AI computing. An Nvidia GPU-based decoder in Helios showed logical fidelity improvement of 3 percent, and a new GenQAI workflow achieved a 234X speed-up generating complex molecule training data.

Pure Storage FlashBlade//EXA

The FlashBlade//EXA from Santa Clara, Calif.-based Pure Storage is aimed at the requirements of AI and HPC. It helps provide multidimensional performance with massively parallel processing and scalable metadata IOPS to support high-speed AI requirements, with performance of 10-plus terabytes per second in a single namespace. The platform also helps eliminate metadata bottlenecks with high metadata performance, availability and resiliency for massive AI datasets with no manual tuning or additional configuration needed. Its configurable and disaggregated architecture uses industry-standard protocols, including Nvidia ConnectX NICs, Spectrum switches, LinkX cables and accelerated communications libraries.

DDN Sovereign AI Blueprints

Chatsworth, Calif.-based DDN unveiled Sovereign AI Blueprints and Nvidia Reference Designs, delivering validated, production-ready architectures for national- and enterprise-scale AI. Built on DDN’s unified data intelligence platform and Nvidia AI Data Platform reference designs, these technologies ensure infrastructure is sovereign by design, energy-efficient and sustainable. Certified architectures enable greater than 99 percent GPU utilization, advanced security, and predictable performance for training, inference and RAG workloads. Proven in deployments like India’s Yotta Shakti Cloud and Singtel, DDN’s Sovereign AI provides governments and enterprises a repeatable framework to control data, meet compliance requirements, optimize efficiency, and scale AI with trust and operational certainty.

Dell AI Factory

Round Rock, Texas-based Dell Technologies unveiled enhancements to the Dell AI Factory designed to simplify and accelerate enterprise AI adoption. Updates to Dell Automation Platform streamline enterprise deployment for secure, repeatable success; data management solutions optimize performance and accelerate decision-making; enhanced Dell PowerEdge servers deliver faster training and scalable compute; advanced networking solutions help promote AI at scale; and Dell’s Integrated Rack Scalable Solutions additions offer resilient, smarter infrastructure for greater control. With turnkey AI use case pilots and resilient infrastructure, Dell AI Factory helps empower organizations to unlock the full potential of AI, drive innovation, and deliver measurable business value across industries.

IBM Storage Scale System 6000

IBM, Armonk, N.Y., used Supercomputing 2025 to show its updated IBM Storage Scale System 6000 with triple the maximum capacity at 47 petabytes per rack. This was done by adding support for industry-standard QLC flash storage in 30-TB, 60-TB and 122-TB SSD configurations to offer clients more data storage options to fit their needs. IBM also introduced the IBM Scale System All-Flash Expansion Enclosure, which is optimized for high-performance AI training, data inferencing, HPC, and data-intensive workloads. With the introduction of 122-TB QLC NVMe SSDs, the enhanced Storage Scale System 6000 delivers over 47 petabytes of cost-effective high-density flash capacity in a single 42U rack with the All-Flash Expansion Enclosure.

Sandisk UltraQLC 256TB NVMe SSD

The Sandisk UltraQLC 256TB NVMe SSD from Milpitas, Calif.-based Sandisk targets hyperscale flash storage. The SSDs were purpose-built for fast, intelligent data lakes powering AI at scale. Built on Sandisk’s new enterprise-grade UltraQLC platform, they combine BiCS8 QLC CBA NAND, custom controllers, and advanced system optimizations to deliver lower latency, higher bandwidth and greater reliability than previous models. This achievement in NAND architecture helps show Sandisk’s ability to scale performance and efficiency for AI-driven, data-intensive environments. The Sandisk UltraQLC 256TB NVMe SSDs are slated to be available in U.2 form factor in the first half of 2026, with additional form factors available later in the year.

Quantum ActiveScale Ranged Restore

Recent enhancements to Centennial, Colo.-based Quantum’s ActiveScale include a Ranged Restore feature, an industry-first capability that enables organizations to retrieve specific byte ranges from large objects stored in Glacier-class archive tiers. The erasure-coded object-on-tape architecture eliminates the need for full-file rehydration to help cut retrieval times, egress costs and overall compute usage. This release also delivers over five times faster performance for small-object restores, achieved with a redesigned restore engine that intelligently batches and orders retrieval requests. The Ranged Restore enhancements deliver responsive, query-ready data lakes that are optimized for high-volume demands at exabyte scale, vital for both AI and analytics workflows.
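Ranged reads like this use the standard S3/HTTP "Range" request header to pull back only a slice of an object rather than the whole file. A minimal sketch of the idea follows; the bucket and key names are illustrative only and not drawn from Quantum's product documentation.

```python
# Sketch of a byte-range object read via the standard S3 Range header.
# Bucket and key names below are hypothetical examples.

def byte_range_header(start: int, end: int) -> str:
    """Build an HTTP Range header value for bytes start..end inclusive."""
    if start < 0 or end < start:
        raise ValueError("invalid byte range")
    return f"bytes={start}-{end}"

# With any S3-compatible client (boto3 shown), reading only the first 4 KiB
# of a large archived object would look like:
#
#   s3.get_object(Bucket="archive", Key="dataset.bin",
#                 Range=byte_range_header(0, 4095))
#
# Because the server returns just the requested slice, the full object never
# has to be rehydrated from the archive tier before the data is usable.
```

The same header works against any S3-compatible endpoint, which is why range-based retrieval maps naturally onto query-style access patterns over archived data lakes.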

Weka Next Generation WEKApod

Campbell, Calif.-based Weka used Supercomputing 2025 to unveil the next generation of its WEKApod appliances aimed at upending traditional performance-versus-cost trade-offs. The completely redesigned WEKApod Prime appliance achieves 65 percent better price-performance over prior models through AlloyFlash, a new capability in NeuralMesh that intelligently places data across high-performance TLC and high-capacity eTLC drives in the same system. Organizations get the performance AI workloads demand at economics that make sense. WEKApod Nitro doubles performance density with refreshed hardware, enabling organizations to accelerate AI and HPC innovation, maximize GPU utilization and serve more customers. Its higher-density design makes it ideal for large-scale object storage repositories and AI data lakes that demand performance without compromise.

MinIO ExaPOD

ExaPOD is Redwood City, Calif.-based MinIO’s modular reference architecture for building and operating large-scale AI systems. It integrates Supermicro high-density platforms, Intel Xeon 6 processors and Solidigm enterprise SSDs with MinIO AIStor software to deliver reliable, low-latency data performance for AI training and inference. ExaPOD unifies hardware and software-defined storage into a balanced, repeatable system that scales seamlessly across racks and data centers.

ExaPOD delivers a massive, all-inclusive 36-petabyte usable capacity per rack unit to minimize data center footprint, while consuming an average of 900W of power per petabyte of usable capacity (including cooling) to free up maximum power resources for GPU compute.

Western Digital JBOD Platforms

San Jose, Calif.-based Western Digital showcased next-generation AI and HPC storage technologies at Supercomputing 2025, demonstrating how its storage platforms help eliminate performance bottlenecks and democratize access to high-capacity storage. Key innovations include UltraSMR technology expansion beyond hyperscalers and real-world OpenFlex Data24 disaggregated storage performance scenarios. Western Digital’s expanded Open Composable Compatibility Lab ecosystem features new partners, enabling vendor-neutral, prevalidated solutions. Western Digital said its approach delivers superior capacity economics, flexible scaling and reduced total cost of ownership for organizations deploying AI and HPC workloads at any scale.

MSI ORv3 Rack System

Taiwan-based high-performance server developer MSI used Supercomputing 2025 to introduce its ORv3 rack offering and a comprehensive portfolio of power-efficient, multi-node and AI-optimized platforms built on Nvidia MGX and desktop Nvidia DGX designs, targeting high-density environments and mission-critical workloads. MSI’s ORv3 21-inch, 44U rack is a fully validated, integrated offering combining power, thermal and networking systems to streamline engineering and accelerate deployment in hyperscale environments. Featuring 16 CD281-S4051-X2 2U DC-MHS servers, the rack utilizes centralized 48V power shelves and front-facing I/O, maximizing space for CPUs, memory and storage while maintaining optimal airflow and simplifying maintenance. The DC-MHS servers feature either AMD EPYC 9005 processors or Intel Xeon 6 processors.