AWS Introduces P2 Instances To Power AI, High-Performance Workloads

Amazon Web Services introduced a virtual machine type Thursday to power a new generation of resource-hungry artificially intelligent applications.

The P2 instance type leverages banks of NVIDIA graphics processors to ramp up compute power, along with advanced memory features to keep those cores humming. The instances also run an "AWS-specific version" of Intel's Broadwell processors, according to Jeff Barr, Amazon's chief cloud evangelist.

In an AWS blog post, Barr described the new virtual machines as "designed to chew through tough, large-scale machine learning, deep learning, computational fluid dynamics (CFD), seismic analysis, molecular modeling, genomics, and computational finance workloads."

[Related: The Race Is On: IBM, Google, Microsoft And AWS Aim To Deliver Machine Learning As A Cloud Service]

Amazon's latest configuration of compute, memory and network resources comes as artificially intelligent workloads, typically employing machine-learning or deep-learning methodologies, are all the rage in the industry. Public cloud rival Google on Thursday hyped its focus on delivering such capabilities as cloud services.

Jeff Aden, executive vice president of Seattle-based AWS partner 2nd Watch, described the P2s as "excellent workhorses" for high-performance computing use cases like machine learning or deep learning, which require "massive parallel processing" to deliver the necessary performance.

2nd Watch has seen early adopters of intelligent and cloud-native applications interested in deploying those kinds of instances in their public cloud environments, he said.

"Companies that are doing scientific research and analytics, or those modeling financial risk, could benefit from this new instance," Aden told CRN.

To help customers apply the P2s to tackle sophisticated deep-learning workloads, AWS concurrently launched a new machine image.
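
As a rough sketch of how a customer might pair the two, the snippet below uses the boto3 Python SDK to launch a single P2 instance from such an image; the AMI ID, region and key pair name are placeholders rather than details from the announcement.

```python
import boto3

# Sketch only: launch one p2.xlarge (the smallest P2 size, with a single
# GPU) from a Deep Learning AMI. The image ID and key pair name below
# are placeholders, not values from the announcement.
ec2 = boto3.client("ec2", region_name="us-east-1")

response = ec2.run_instances(
    ImageId="ami-xxxxxxxx",      # placeholder Deep Learning AMI ID
    InstanceType="p2.xlarge",
    MinCount=1,
    MaxCount=1,
    KeyName="my-key-pair",       # placeholder key pair
)

print(response["Instances"][0]["InstanceId"])
```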

Barr explained the distinction between deep learning and machine learning.

"Deep learning has the potential to generate predictions (also known as scores or inferences) that are more reliable than those produced by less sophisticated machine learning, at the cost of a most complex and more computationally intensive training process," Barr said.

A number of new tools enable distributing computation cycles across multiple GPUs on a single instance, or across multiple instances, each with several GPUs, he said.

The new Deep Learning Amazon Machine Image comes preconfigured with frameworks and libraries commonly used by data scientists, including the MXNet library, the Caffe framework, and TensorFlow, which was originally developed by Google.
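
To illustrate the multi-GPU distribution Barr refers to, here is a minimal, hypothetical sketch using the MXNet library from that image: passing a list of GPU contexts tells MXNet to split each training batch across the GPUs of a single instance. The toy network and the eight-GPU count, which would match a p2.8xlarge, are illustrative assumptions.

```python
import mxnet as mx

# Sketch only: a list of GPU contexts tells MXNet to split each batch
# across all eight GPUs of, say, a p2.8xlarge (data parallelism).
ctx = [mx.gpu(i) for i in range(8)]

# Placeholder network; a real workload would define its own model.
data = mx.sym.Variable("data")
fc = mx.sym.FullyConnected(data=data, num_hidden=10)
net = mx.sym.SoftmaxOutput(data=fc, name="softmax")

mod = mx.mod.Module(symbol=net, context=ctx)
# mod.fit(train_iter, num_epoch=10) would then train across the GPUs.
```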