Google Showcases AI Technologies At I/O Developer Conference

Google's I/O developer conference opened Wednesday with a focus on artificial intelligence as the technology that will power the next generation of its consumer products.

And because innovations Google develops for consumers usually find their way into the enterprise, partners told CRN the products showcased at I/O across various platforms – Android, Chrome, YouTube, Google Home, virtual reality headsets – tease business-grade capabilities they'll be able to introduce to their practices down the road.

The 11th I/O conference, hosted at an amphitheater near Google's Mountain View, Calif., campus, highlighted advances that streamline how users interact with Google products and platforms, seven of which now count more than a billion monthly users.

[Related: Look Out AWS And Azure—Google Is Betting Big On The Enterprise Channel]

Google CEO Sundar Pichai kicked off the morning's keynote by describing to the company's developer community the current evolution in computing as a shift from a "mobile-first to AI-first approach."

"Mobile made us reimagine every product we were working on. Similarly, in an AI-first world, we are rethinking all of our products," Pichai told attendees.

Google is even rethinking its entire computational architecture, the CEO said, building "AI-first data centers."

To power the AI workloads of the future, Pichai announced a new chip called the Cloud TPU.

Last year, Google introduced its Tensor Processing Units – custom chips built to excel at machine learning tasks. TPUs are deployed across Google services, from Search to AlphaGo, the program from Google's DeepMind subsidiary that recently defeated one of the world's top players of the notoriously complex board game Go.

Machine learning, however, comprises two basic tasks: training and inference. TPUs excel at inference – making predictions with an already-trained neural network – but training, which builds that network in the first place, is far more computationally intensive.
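In practice, the split looks something like the minimal sketch below, written against Google's TensorFlow framework. The toy model, layer sizes, and random data are hypothetical stand-ins for illustration, not anything Google demonstrated at I/O:

import numpy as np
import tensorflow as tf

# Hypothetical toy model -- illustrative only.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(32, activation="relu", input_shape=(10,)),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy")

# Training: repeated forward and backward passes over the data to
# adjust the network's weights -- the computationally heavy phase
# that Cloud TPUs are built to accelerate.
x_train = np.random.rand(1000, 10).astype("float32")
y_train = np.random.randint(0, 2, size=(1000, 1))
model.fit(x_train, y_train, epochs=5, batch_size=32)

# Inference: a single forward pass through the now-fixed weights --
# the cheaper task the first-generation TPUs were designed for.
predictions = model.predict(np.random.rand(5, 10).astype("float32"))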

Cloud TPUs are designed to churn through training tasks. Each chip is capable of 180 trillion floating-point operations per second (180 teraflops), and the chips are designed to be stacked into pods in data centers, with a single pod capable of 11.5 petaflops.
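Those two figures imply the scale of a pod. A quick back-of-the-envelope check (the per-pod chip count below is inferred from the stated numbers, not something announced in the keynote):

# Sanity-check the announced Cloud TPU figures.
chip_tflops = 180    # teraflops per Cloud TPU
pod_pflops = 11.5    # petaflops per pod

chips_per_pod = pod_pflops * 1000 / chip_tflops
print(f"~{chips_per_pod:.0f} Cloud TPUs per pod")  # prints "~64 Cloud TPUs per pod"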
