Google Cloud CEO: New Hugging Face AI Partnership Makes GenAI More ‘Impactful’ For Developers

‘This partnership ensures that developers on Hugging Face will have access to Google Cloud’s purpose-built AI platform, Vertex AI, along with our secure infrastructure, which can accelerate the next generation of AI services and applications,’ says CEO Thomas Kurian.

Google Cloud’s generative AI march toward becoming a global market leader continued Thursday with its unveiling of a new partnership with AI star Hugging Face aimed at making Google’s GenAI technology more easily accessible to developers.

The new partnership between the two AI companies lets developers use Google Cloud’s infrastructure for all Hugging Face services, and enables training and serving of Hugging Face’s AI models on Google Cloud.

“Google Cloud and Hugging Face share a vision for making generative AI more accessible and impactful for developers,” said Google Cloud CEO Thomas Kurian in a statement Thursday.

[Related: Why Google Cloud Fee Changes Will Help Vs. AWS, Microsoft: 66degrees President]

Kurian said the partnership opens the door for developers to train, tune and serve Hugging Face models with Google’s Vertex AI in just a few clicks from the Hugging Face platform, making it easy to use Google Cloud’s end-to-end MLOps services to build new GenAI applications.

“This partnership ensures that developers on Hugging Face will have access to Google Cloud’s purpose-built AI platform, Vertex AI, along with our secure infrastructure, which can accelerate the next generation of AI services and applications,” said Kurian.

Developers will be able to utilize Google Cloud’s AI infrastructure—which includes compute, tensor processing units (TPUs) and graphics processing units (GPUs)—to train and serve open models and build new generative AI applications.

Google Cloud Becomes ‘Preferred Destination’ For Hugging Face; GKE Deployments

Mountain View, Calif.-based Google Cloud and New York-based Hugging Face are two of the most in-demand AI providers in the world right now.

The new partnership makes Google Cloud a strategic cloud partner for Hugging Face and a preferred destination for Hugging Face training and inference workloads.

One key aspect of the partnership is Google’s push to give more open-source developers access to Google’s Cloud TPU v5e, which offers up to 2.5X more performance per dollar and up to 1.7X lower latency for inference compared with previous versions.

In addition, the partnership adds support for Google Kubernetes Engine (GKE) deployments, so developers on Hugging Face can train, tune and serve their workloads with “do-it-yourself” infrastructure and scale models using Hugging Face’s Deep Learning Containers on GKE, Google said.

Vertex AI and GKE will be available in the first half of 2024 as deployment options on the Hugging Face platform.
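For developers, a Vertex AI deployment of a Hugging Face model typically boils down to pointing a serving container at a model ID. The sketch below illustrates that idea using the real `google-cloud-aiplatform` SDK's `Model.upload` keyword arguments, but it only assembles the arguments locally and does not deploy; the container image path is a hypothetical placeholder, and the exact images and flow the partnership ships may differ.

```python
# Illustrative sketch only: builds the kwargs a Vertex AI Model.upload
# call would take for serving a Hugging Face model. The container image
# URI below is a hypothetical placeholder, not a confirmed product path.

def vertex_upload_args(model_id: str, task: str) -> dict:
    """Assemble Model.upload keyword arguments for a Hugging Face model.

    HF_MODEL_ID and HF_TASK are the environment variables Hugging Face
    inference containers read to know which model and pipeline to load.
    """
    return {
        "display_name": model_id.replace("/", "--"),  # Vertex display names can't contain "/"
        "serving_container_image_uri": (
            # Hypothetical image path for illustration only.
            "us-docker.pkg.dev/example-project/serving/"
            "huggingface-pytorch-inference:latest"
        ),
        "serving_container_environment_variables": {
            "HF_MODEL_ID": model_id,
            "HF_TASK": task,
        },
    }

args = vertex_upload_args("google-bert/bert-base-uncased", "fill-mask")
print(args["display_name"])
```

In a real project, these arguments would be passed to `aiplatform.Model.upload(**args)` after `aiplatform.init(project=..., location=...)`, and the resulting model deployed to an endpoint for online inference.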

Hugging Face CEO: ‘Google Has Been At The Forefront Of AI’

Hugging Face has more than 100,000 free and accessible machine learning models that are downloaded more than 1 million times every day.

The open-source artificial intelligence company offers a slew of AI models, data sets and collaboration offerings for businesses.

Hugging Face CEO Clement Delangue said for many years, Google has been “at the forefront of AI progress and the open science movement.”

“With this new partnership, we will make it easy for Hugging Face users and Google Cloud customers to leverage the latest open models together with leading optimized AI infrastructure and tools from Google Cloud including Vertex AI and TPUs to meaningfully advance developers’ ability to build their own AI models,” Delangue said.

Hugging Face will also become available on the Google Cloud Marketplace, giving customers simplified management and billing for the Hugging Face managed platform, including Inference, Endpoints, Spaces, AutoTrain and others.