Amazon CEO Andy Jassy: Customers Shifting Innovation To AWS

AWS’ innovation in cloud compute, networking, storage, database, and generative AI is driving businesses to find new ways to take advantage of Amazon’s cloud capabilities, making Amazon Web Services a major driver of the company’s success, CEO Andy Jassy told financial analysts.


Amazon’s investments in cloud compute, networking, storage, database, and generative AI are helping businesses take advantage of Amazon Web Services, in turn driving a major part of Amazon’s business, according to Amazon CEO Andy Jassy.

Jassy, speaking Thursday to financial analysts during Amazon’s second fiscal quarter 2023 financial analyst conference call, said AWS remains the clear cloud infrastructure leader in terms of the number of customers, the size of its partner ecosystems, its breadth of functionality, and its strong operational performance.

“These are important factors for why AWS has grown the way it has over the last several years and for why AWS has almost doubled the revenue of any other provider,” he said.

[Related: AWS’ 5 New Generative AI Tools For Nvidia, Anthropic And LLMs]

While those have been important drivers of AWS’ business, customers also appreciate the company’s customer focus, Jassy said.

“As the economy has been uncertain over the last year, AWS customers have needed assistance cost optimizing to withstand this challenging time. … We proactively help customers do this,” he said. “And while customers have continued to optimize during the second quarter, we started seeing more customers shift their focus towards driving innovation and bringing new workloads to the cloud.”

AWS continues to innovate at a rapid clip across the product categories where it leads, including compute, networking, storage, database, data solutions, machine learning, and other areas, Jassy said.

He cited as an example a request by customers a few years ago to improve price performance for generalized computing.

“To enable that, we realized that we needed to rethink things all the way down to the silicon and set out to design our own general purpose CPU chips,” he said. “Today, more than 50,000 customers use AWS’ Graviton chips in AWS compute instances, including 98 of our top 100 Amazon EC2 customers. And these chips have about 40 percent better price performance than other leading x86 processors.”

That type of reimagining is now happening with generative AI, Jassy said.

“Generative AI has captured people’s imagination, but most people are talking about the application layer,” he said. “Specifically, what OpenAI has done with ChatGPT. It’s important to remember that we’re in the very early days of the adoption and success of generative AI, and consumer applications are only one layer of the opportunity.”

Large language models and generative AI are thought of as having three key layers, all of which are very important and in all of which AWS is investing heavily, Jassy said.

At the lowest layer is the compute required to train foundation models and do inference, or make predictions, he said. Because of a scarcity of the processors needed to train large models and develop generative AI applications, AWS began working several years ago on its own custom AI chips: AWS Trainium machine learning accelerators for training and AWS Inferentia accelerators for inference. Both are already on their second generation and offer an appealing price-performance option for customers building and running large language models, he said.

“We’re optimistic that a lot of large language model training and inference will be run on AWS’ Trainium and Inferentia chips in the future,” he said.

The middle layer is large language models as a service, Jassy said. Developing large language models takes billions of dollars and multiple years, and most companies prefer not to take on that expense themselves. Instead, they want to customize existing models with their own data while protecting their proprietary information, wrap them in security and other AWS features, and consume them as a managed service, he said.

“[Our Bedrock service] offers customers all these aforementioned capabilities with not just one large language model, but with access to models from multiple leading large language model companies like Anthropic, Stability AI, AI21 Labs, and Cohere, as well as Amazon’s own large language model, Titan. … We just recently announced new capabilities for Bedrock, including new models from Cohere and Anthropic and Stability AI’s Stable Diffusion XL 1.0, as well as Agents on Bedrock that allow customers to create conversational agents to deliver personalized, up-to-date answers based on their proprietary data and to execute actions.”
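The managed-service access Jassy describes can be pictured with a short sketch. Note that everything below — the boto3 client name, the model ID, and the request payload shape — is an illustrative assumption for one provider's text model, not something specified in the article; the exact payload differs per model provider:

```python
import json

# Hypothetical sketch of calling a foundation model through Bedrock.
# In practice this would use a boto3 runtime client, e.g.:
#   import boto3
#   client = boto3.client("bedrock-runtime")

def build_request(prompt: str, max_tokens: int = 200) -> str:
    """Build an illustrative JSON request body for a text-generation model.
    The field names here are assumptions; each provider defines its own schema."""
    return json.dumps({
        "inputText": prompt,
        "textGenerationConfig": {"maxTokenCount": max_tokens},
    })

body = build_request("Summarize our Q2 results in one sentence.")
# response = client.invoke_model(modelId="amazon.titan-text-express-v1", body=body)
print(json.loads(body)["textGenerationConfig"]["maxTokenCount"])  # → 200
```

The point of the middle layer, as Jassy frames it, is that the customer supplies only the prompt and their own data; the model hosting, security wrapping, and scaling are handled by the managed service.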

Those first two layers democratize access to generative AI by lowering the cost of training and running models and by giving customers access to multiple large language model choices, letting customers of all sizes and abilities customize their own large language models and build generative AI applications in a secure, enterprise-grade fashion, Jassy said.

“These are all part of making generative AI accessible to everybody, and very much what AWS has been doing for technology infrastructure over the last 17 years,” he said.

The top layer, where much of the publicity around generative AI is focused, includes the actual applications that run on top of the large language models, Jassy said. ChatGPT is a prime example, but Amazon believes there is also a need for a coding companion, which is why the company built Amazon CodeWhisperer, an AI-powered tool that recommends code snippets directly in the code editor to accelerate developer productivity, he said.

“It’s off to a very strong start, and changes the game with respect to developer productivity,” he said. “Inside Amazon, every one of our teams is working on building generative AI applications that reinvent and enhance their customers’ experience. But while we will build a number of these applications ourselves, most will be built by other companies. And we’re optimistic that the largest number of these will be built on AWS.”

It’s important to remember that the core of AI is data, Jassy said.

“People want to bring generative AI models to the data, not the other way around,” he said. “AWS not only has the broadest array of storage, database, analytics, and data management services for customers, it also has more customers and data stored than anybody else. Coupled with providing customers with unmatched choices in the three layers of the generative AI stack, as well as Bedrock’s enterprise-grade security that’s required for enterprises to feel comfortable putting generative AI applications into production, we think AWS is poised to be customers’ long-term partner of choice in generative AI.”

For its second fiscal quarter 2023, which ended June 30, Amazon reported total revenue of $134.38 billion, up 10.8 percent over the $121.23 billion the company reported for its second fiscal quarter 2022.

This included total product sales of $59.03 billion, up from $56.58 billion, and service sales of $75.35 billion, up from $64.66 billion.

Included in the revenue figure was AWS revenue of $22.14 billion, up from $19.74 billion; North American sales of $82.55 billion, up from $74.43 billion; and international sales of $29.70 billion, up from $27.07 billion.
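The growth rates follow directly from the reported figures; as a quick check (all figures in billions of dollars, taken from the article):

```python
def yoy_growth(current: float, prior: float) -> float:
    """Year-over-year growth as a percentage."""
    return (current / prior - 1) * 100

total = yoy_growth(134.38, 121.23)  # total revenue: Q2 2023 vs. Q2 2022
aws = yoy_growth(22.14, 19.74)      # AWS segment revenue
print(round(total, 1), round(aws, 1))  # → 10.8 12.2
```

The total-revenue figure matches the 10.8 percent growth Amazon reported, and the same calculation puts AWS’ year-over-year growth at about 12.2 percent.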

Note that AWS revenue is counted separately from Amazon’s North America and international revenue.

Amazon also reported GAAP net income of $6.75 billion, or 65 cents per share, up significantly from last year’s net loss of $2.03 billion, or 20 cents per share. On a non-GAAP basis, Amazon reported comprehensive income of $7.04 billion, up from last year’s comprehensive loss of $4.45 billion.

Looking ahead, Amazon expects third fiscal quarter 2023 net sales of $138.0 billion to $143.0 billion, up between 9 percent and 13 percent over the third fiscal quarter 2022.