Nvidia’s 10 New Cloud AI Products For AWS, Microsoft And Google

From Nvidia’s new Blackwell GPU platform being injected into AWS, Azure and GCP, to new generative AI accelerators, here are 10 new Nvidia offerings for Microsoft, Google and Amazon that partners need to know about.

Nvidia launched a slew of new artificial intelligence offerings this week at its GPU Technology Conference on the three leading cloud platforms: Microsoft Azure, Google Cloud and Amazon Web Services.

The AI superstar kicked off its massive event at the San Jose Convention Center on Monday by announcing new offerings on the Azure, GCP and AWS cloud platforms in front of tens of thousands of attendees.

This includes the launch of Nvidia’s new Blackwell GPU platform alongside integration with the world’s three leading cloud platforms.

Nvidia GPU Technology Conference Kicks Off

“AI is transforming our daily lives—opening up a world of new opportunities,” said Nvidia’s CEO and founder Jensen Huang at the conference. “AI is driving breakthroughs at an unprecedented pace, leading to new applications, business models, and innovation across industries.”

Huang said enterprises are looking for solutions that empower them to take full advantage of “generative AI in weeks and months” instead of years.

[Related: Azure Vs. AWS Vs. Google Cloud: Customer Spending Results]

With expanded infrastructure offerings and new integrations with Nvidia's full-stack AI, AWS, Google Cloud and Microsoft Azure can now offer new cloud-based solutions to scale generative AI applications, boost the capabilities of foundation models and large language models, and much more.

New cloud platform integrations revolve around Nvidia’s GB200 Grace Blackwell Superchip, B100 Tensor Core GPUs, NIM inference microservices, as well as Nvidia’s H100 and L4 Tensor Core GPUs.

Google, Amazon and Microsoft are all battling for market share in the booming AI market, spanning from generative AI collaboration tools, such as Google Workspace and Microsoft 365, to their cloud infrastructure powering AI applications and GenAI developer platforms.

CRN breaks down the 10 biggest Nvidia launches with integrations on AWS, GCP and Azure unveiled at Nvidia’s GPU Technology Conference that every partner and customer needs to know about.

Nvidia’s New Blackwell Platform On AWS Cloud

Nvidia’s new Blackwell GPU platform is coming to the AWS cloud.

“Nvidia’s next-generation Grace Blackwell processor marks a significant step forward in generative AI and GPU computing,” said AWS CEO Adam Selipsky in a statement.

AWS said it will also offer the Nvidia GB200 Grace Blackwell Superchip and B100 Tensor Core GPUs on its cloud platform.

Additionally, AWS will soon offer new Nvidia Grace Blackwell GPU-based Amazon EC2 instances and Nvidia DGX Cloud to accelerate building and running inference on multi-trillion-parameter LLMs.

“When combined with AWS’s powerful Elastic Fabric Adapter Networking, Amazon EC2 UltraClusters’ hyperscale clustering, and our unique Nitro system’s advanced virtualization and security capabilities, we make it possible for customers to build and run multi-trillion parameter large language models faster, at massive scale, and more securely than anywhere else,” said Selipsky.

Overall, AWS said that when connected with Amazon's networking and supported by advanced virtualization and hyperscale clustering, customers can scale to thousands of GB200 Superchips.
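For partners that want to script GPU capacity today, a minimal boto3 sketch of launching a GPU-backed EC2 instance follows. Note the caveats: the AMI ID is a placeholder, and because AWS has not yet published names for the Grace Blackwell instance types, an existing H100-based P5 instance type stands in for illustration.

```python
# Minimal sketch: launching a GPU-backed EC2 instance with boto3.
# ImageId is a placeholder; InstanceType uses the existing H100-based
# p5.48xlarge as a stand-in for the not-yet-named GB200 instances.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",  # placeholder Deep Learning AMI ID
    InstanceType="p5.48xlarge",       # stand-in for a future GB200-based type
    MinCount=1,
    MaxCount=1,
)
print(response["Instances"][0]["InstanceId"])
```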

AWS Nitro System On Nvidia GB200 For Cloud Security

AWS' Nitro System, the company's advanced virtualization and security platform, is integrating with Nvidia's GB200 to elevate AI security even further by preventing unauthorized individuals from accessing model weights.

Nvidia’s GB200 allows physical encryption of the NVLink connections between GPUs and encrypts data transfer from the Grace CPU to the Blackwell GPU, while AWS’ Elastic Fabric Adapter encrypts data across servers for distributed training and inference.

The GB200 will also benefit from the AWS Nitro System, which offloads I/O functions from the host CPU/GPU to specialized AWS hardware to deliver better performance, while its enhanced security protects customer code and data during processing.

AWS said the integration of its Nitro System, Elastic Fabric Adapter encryption and AWS Key Management Service with Blackwell encryption gives customers end-to-end control of their training data and model weights, delivering even stronger security for AI applications on AWS.
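As a rough illustration of the AWS Key Management Service piece of that chain, the boto3 sketch below creates a customer-managed key and encrypts a small payload with it. In the Blackwell integration the encryption itself happens in hardware and in the storage and networking layers; this only shows the key-management surface a customer controls.

```python
# Minimal sketch: a customer-managed KMS key, the kind of key AWS says
# anchors end-to-end control over training data and model weights.
import boto3

kms = boto3.client("kms", region_name="us-east-1")

key = kms.create_key(Description="Customer-managed key for AI training data")
key_id = key["KeyMetadata"]["KeyId"]

# Encrypt a small payload under the new key (illustrative only; bulk data
# is normally encrypted with data keys, not directly with the KMS key).
result = kms.encrypt(KeyId=key_id, Plaintext=b"example training manifest")
print(len(result["CiphertextBlob"]), "ciphertext bytes")
```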

Amazon SageMaker Integration With Nvidia NIM

In a move to accelerate the development of generative AI applications and new use cases, AWS and Nvidia joined forces to offer low-cost inference for generative AI by integrating Amazon SageMaker with Nvidia NIM inference microservices.

Customers can use this new offering to quickly deploy foundation models (FMs) that are pre-compiled and optimized for Nvidia GPUs to SageMaker, reducing the time-to-market for generative AI applications.

The goal of the Amazon SageMaker integration with Nvidia NIM microservices is to help businesses further optimize the price-performance of foundation models running on GPUs.
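As a rough sketch of the deployment flow, the snippet below uses the SageMaker Python SDK to stand up a prebuilt, GPU-optimized container as a real-time endpoint. The image URI, IAM role and instance type are illustrative placeholders; actual NIM images are pulled from Nvidia's registry under an Nvidia AI Enterprise entitlement.

```python
# Minimal sketch: deploying a prebuilt GPU-optimized inference container
# to a SageMaker real-time endpoint. All identifiers are placeholders.
import sagemaker
from sagemaker.model import Model

session = sagemaker.Session()

model = Model(
    image_uri="123456789012.dkr.ecr.us-east-1.amazonaws.com/nim-llm:latest",  # placeholder
    role="arn:aws:iam::123456789012:role/SageMakerExecutionRole",             # placeholder
    sagemaker_session=session,
)

predictor = model.deploy(
    initial_instance_count=1,
    instance_type="ml.g5.2xlarge",  # an existing Nvidia GPU instance type
)
print(predictor.endpoint_name)
```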

Nvidia’s New Blackwell Platform On Google Cloud

Google has adopted the new Nvidia Grace Blackwell AI computing platform, as well as the Nvidia DGX Cloud service, on the Google Cloud Platform.

“The strength of our long-lasting partnership with Nvidia begins at the hardware level and extends across our portfolio—from state-of-the-art GPU accelerators, to the software ecosystem, to our managed Vertex AI platform,” said Google Cloud CEO Thomas Kurian in a statement.

The new Grace Blackwell platform enables organizations to build and run real-time inference on trillion-parameter large language models. Google is adopting the platform for various internal deployments and will be one of the first cloud providers to offer Blackwell-powered instances.

Google Cloud Integrates With Nvidia NIM Microservices

Building on their recent collaboration to optimize Gemma, Google's family of lightweight open models, Google will adopt Nvidia NIM inference microservices.

The goal is to provide developers with an open, flexible platform to train and deploy using their preferred tools and frameworks.

Nvidia NIM inference microservices, a part of the Nvidia AI Enterprise software platform, will be integrated into Google Kubernetes Engine (GKE).

Built on inference engines including TensorRT-LLM, Nvidia’s NIM aims to speed up generative AI deployment in enterprises, while supporting a wide range of leading AI models and ensuring scalable AI inferencing.
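Because NIM containers expose an OpenAI-compatible HTTP API, calling a NIM microservice running behind a GKE Service can look roughly like the sketch below; the in-cluster hostname, port and model name are assumptions for illustration.

```python
# Minimal sketch: querying a NIM microservice from inside a GKE cluster.
# The service DNS name, port and model identifier are assumed values.
import requests

resp = requests.post(
    "http://nim-llm.default.svc.cluster.local:8000/v1/chat/completions",
    json={
        "model": "meta/llama3-8b-instruct",  # example model name
        "messages": [{"role": "user", "content": "Explain NVLink in one sentence."}],
        "max_tokens": 64,
    },
    timeout=60,
)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])
```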

Nvidia’s DGX Cloud Platform On Google Cloud

Google will bring Nvidia GB200 NVL72 systems to the Google Cloud Platform.

Nvidia's GB200 NVL72 system combines 72 Blackwell GPUs and 36 Grace CPUs interconnected by NVLink. Google will bring these rack-scale systems to its cloud infrastructure and make them available through DGX Cloud, an AI platform offering a serverless experience for enterprise developers building and serving LLMs.

DGX Cloud is now generally available on Google Cloud A3 VM instances powered by Nvidia H100 Tensor Core GPUs.

“Together with Nvidia, our team is committed to providing a highly accessible, open and comprehensive AI platform for ML developers,” said Kurian.

Nvidia Support For Google's Vertex AI And JAX

Google Cloud announced support for its machine learning framework, JAX, on Nvidia GPUs, as well as Vertex AI instances powered by Nvidia H100 and L4 Tensor Core GPUs.

To advance data science and analytics, Vertex AI now supports Google Cloud A3 VMs powered by Nvidia H100 GPUs and G2 VMs powered by Nvidia L4 GPUs. This provides machine learning operations (MLOps) teams with scalable infrastructure and tooling to manage and deploy AI applications. Dataflow has also expanded support for accelerated data processing on Nvidia GPUs.
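A minimal sketch of what targeting those A3 machines from the google-cloud-aiplatform SDK can look like; the project, bucket and training image below are placeholders.

```python
# Minimal sketch: a Vertex AI custom training job on an H100-backed A3
# machine. Project, location, bucket and image URI are placeholders.
from google.cloud import aiplatform

aiplatform.init(
    project="my-project",
    location="us-central1",
    staging_bucket="gs://my-staging-bucket",
)

job = aiplatform.CustomJob(
    display_name="h100-training-sketch",
    worker_pool_specs=[{
        "machine_spec": {
            "machine_type": "a3-highgpu-8g",          # A3 VM, 8x H100
            "accelerator_type": "NVIDIA_H100_80GB",
            "accelerator_count": 8,
        },
        "replica_count": 1,
        "container_spec": {"image_uri": "us-docker.pkg.dev/my-project/train:latest"},
    }],
)
job.run()
```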

Additionally, Google Cloud and Nvidia collaborated to bring the advantages of Google’s JAX to Nvidia GPUs in a move to widen access to large-scale LLM training.

JAX is a Google framework for high-performance machine learning that is compiler-oriented and Python-native, making it well suited for LLM training. AI practitioners can now use JAX with Nvidia H100 GPUs on Google Cloud through frameworks such as MaxText and the Accelerated Processing Kit (XPK).
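A small JAX sketch of the appeal: the same jit-compiled, Python-native code runs unchanged on CPUs or Nvidia GPUs, depending only on which jaxlib build is installed.

```python
# Minimal sketch: JAX jit-compiles this function for whatever backend is
# present, e.g. an Nvidia H100 when the CUDA build of jaxlib is installed.
import jax
import jax.numpy as jnp

print(jax.devices())  # e.g. [CudaDevice(id=0)] on a GPU VM

@jax.jit
def attention_scores(q, k):
    # Scaled dot-product scores, the core of transformer attention.
    return jnp.einsum("td,sd->ts", q, k) / jnp.sqrt(q.shape[-1])

key = jax.random.PRNGKey(0)
q = jax.random.normal(key, (128, 64))
k = jax.random.normal(key, (128, 64))
print(attention_scores(q, k).shape)  # (128, 128)
```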

Microsoft Azure Adopts New Nvidia Grace Blackwell

Microsoft Azure is integrating Nvidia's Grace Blackwell GB200 and advanced Nvidia Quantum-X800 InfiniBand networking to deliver trillion-parameter foundation models for natural language processing, computer vision and speech recognition.

Microsoft also announced the general availability of its Azure NC H100 v5 virtual machine (VM) based on the Nvidia H100 NVL platform.

Microsoft's NC series of VMs is designed for mid-range training and inferencing, offering customers two classes of VMs ranging from one to two Nvidia H100 94GB PCIe Tensor Core GPUs. The series supports Nvidia Multi-Instance GPU (MIG) technology, which allows customers to partition each GPU into up to seven instances for diverse AI workloads.
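For a sense of what MIG looks like from software, the sketch below uses NVML's Python bindings (the nvidia-ml-py package) to report each GPU's MIG mode. Actually carving a GPU into up to seven instances is an administrative step performed with nvidia-smi, which this script does not attempt.

```python
# Minimal sketch: reporting Multi-Instance GPU (MIG) mode per GPU via NVML.
import pynvml

pynvml.nvmlInit()
try:
    for i in range(pynvml.nvmlDeviceGetCount()):
        handle = pynvml.nvmlDeviceGetHandleByIndex(i)
        name = pynvml.nvmlDeviceGetName(handle)
        try:
            current, pending = pynvml.nvmlDeviceGetMigMode(handle)
            print(f"GPU {i} ({name}): MIG current={current}, pending={pending}")
        except pynvml.NVMLError:
            print(f"GPU {i} ({name}): MIG not supported")
finally:
    pynvml.nvmlShutdown()
```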

“Together with Nvidia, we are making the promise of AI real, helping to drive new benefits and productivity gains for people and organizations everywhere,” said Microsoft CEO Satya Nadella in a statement.

Azure Integrates With Nvidia’s DGX Cloud And Clara

Microsoft Azure has integrated Nvidia DGX Cloud and Nvidia's Clara suite of microservices.

Microsoft said that by harnessing the power of Azure alongside DGX Cloud and Clara, healthcare providers, pharmaceutical and biotechnology companies, and medical device developers will be able to innovate rapidly across clinical research and care delivery with improved efficiency.

“From bringing the GB200 Grace Blackwell processor to Azure, to new integrations between DGX Cloud and Microsoft Fabric, the announcements we are making today will ensure customers have the most comprehensive platforms and tools across every layer of the Copilot stack, from silicon to software, to build their own breakthrough AI capability,” said Nadella.

Nvidia Omniverse Cloud APIs On Azure

Nvidia Omniverse Cloud APIs will be available first on Microsoft Azure later this year, enabling developers to bring increased data interoperability, collaboration, and physics-based visualization to existing software applications.

The five Omniverse Cloud APIs enable developers to integrate core Omniverse technologies directly into existing design and automation software applications for digital twins, or into simulation workflows for testing and validating autonomous machines such as robots and self-driving vehicles.

At the show, Nvidia GTC attendees can watch Microsoft demonstrate a preview of what is possible using Omniverse Cloud APIs on Microsoft Azure. Using an interactive 3D viewer in Microsoft Power BI, attendees can see real-time factory data overlaid on a 3D digital twin of a facility to gain new insights that can speed up production.