Google Cloud Hires Intel Vet Uri Frank To Design Server Chips

Google Cloud has hired Intel engineering veteran Uri Frank to lead new server chip design efforts as part of the cloud service provider’s increasing investments in custom silicon.

The Mountain View, Calif.-based cloud vendor announced Monday that it is “doubling down” on custom chips as “one way to boost performance and efficiency” in its servers, expanding beyond the Tensor Processing Unit, the Video Processing Unit and the open-source OpenTitan silicon root-of-trust project it has introduced over the last six years.

[Related: Intel CEO Pat Gelsinger To Discuss Comeback Plan In Live Webcast]

With the hiring of a seasoned engineer from Intel to work on new server chips, Google Cloud is following in the footsteps of Amazon Web Services and Microsoft Azure, both of which are at different stages of developing and deploying processors in-house for new cloud instances. The moves are creating new competition for the cloud businesses of not only Intel but also its x86 rival AMD, which expects the number of cloud instances using its EPYC CPUs to reach 400 by the end of the year.

In a blog post, Amin Vahdat, a Google fellow and vice president of systems infrastructure, said Frank will lead a “world-class team” in Israel and that the company will focus on system-on-chip (SoC) designs, an approach previously adopted by Intel and AMD that involves putting multiple functions on the same chip, or multiple chips onto one package.

The goal is to increase performance and reduce energy use, Vahdat said, as focusing on the motherboard as the integration point is no longer sufficient.

“In other words, the SoC is the new motherboard,” he said.

By taking an SoC-based approach to computing, Vahdat said, Google can improve the latency and bandwidth between different components by “orders of magnitude.” In addition, the cost and power required for custom SoCs can be “greatly reduced” compared with using individual ASICs on a motherboard.

Vahdat said Google will design individual components of future SoCs when necessary, but the company is also open to buying components from other vendors. In addition, he said, the company will “aim to build ecosystems that benefit the entire industry.”

“Together with our global ecosystem of partners, we look forward to continuing to innovate at the leading edge of compute infrastructure, delivering the next generation of capabilities that are not available elsewhere, and creating fertile ground for the next wave of yet-to-be-imagined applications and services,” he said.

Frank was previously with Intel for more than 20 years, according to his LinkedIn profile. He was most recently head of Intel’s Core and Client Development Group and had been promoted to corporate vice president last month, according to Israeli news outlet Calcalistech.

Calcalistech reported that Google plans to recruit “several hundred employees” for a new hardware-focused development center in Israel that will be led by Frank.

“Google has designed and built some of the world’s largest and most efficient computing systems. For a long time, custom chips have been an important part of this strategy,” Frank wrote in a LinkedIn post. “I look forward to growing a team here in Israel while accelerating Google Cloud’s innovations in compute infrastructure.”

Dominic Daninger, vice president of engineering at Nor-Tech, a Burnsville, Minn.-based high-performance computing system integrator that partners with Intel and AMD, said the rise of alternative CPU architectures that can be licensed from Arm and other chip designers has enabled companies like Google Cloud to create their own processors.

“It just shows a maturity of the CPU design market,” he said.

Beyond improving performance and power efficiency, another potential reason Google Cloud is designing a custom server chip is to better compete with Microsoft Azure and AWS, according to Daninger. While Microsoft is reportedly working on Arm-based processors for new Azure instances, AWS has already launched multiple instances using its Arm-based Graviton2 processors.

“There’s nothing like competitors doing it that will give [Google Cloud] a reason to do it,” he said.

Daninger said Google Cloud will also be able to optimize its silicon for certain workloads, and he expects more large tech companies to build custom silicon in the future.

“I wouldn’t be surprised to see more,” he said.