10 Top Topics From AWS Summit: Swami Sivasubramanian
Amazon Web Services’ vice president of machine learning and artificial intelligence details how AWS cloud tools and services -- and those of its partners -- help organizations innovate faster by migrating and modernizing applications and making data-driven decisions.
As companies re-architect their applications using serverless and containers, it’s a good time to think about how they deal with data, but many are still storing it in a way that harkens back to a Prince song, according to Swami Sivasubramanian, vice president of machine learning and artificial intelligence for Amazon Web Services.
“The world has changed so much, but many companies still store data like it’s 1999,” Sivasubramanian said during his keynote address on Wednesday for the AWS Summit Online -- Americas. “The cloud gives us so many better options. It frees us from the licensing and technology constraints that are common with the old-guard relational databases. When you think about it, a relational database doesn’t always make sense in a world when you’re dealing with petabytes or even exabytes of data.”
To a large extent, a relational database is like the “Swiss Army knife,” Sivasubramanian said.
“It can do many things, but it is not perfectly suited to any one task,” he said.
Sivasubramanian’s keynote focused on how AWS cloud tools and services -- and those of its partners -- help organizations innovate faster by migrating and modernizing applications and making data-driven decisions.
Here are Sivasubramanian’s top remarks.
Accelerating The Cloud Journey And Transformation
The best approach is to think about it in four key steps: migrate, modernize, unify and innovate. For companies that aren’t born in the cloud, the journey often begins by migrating existing workloads. And here, having a good plan is a critical first step. A solid plan sets you up for success by helping you avoid unexpected roadblocks, giving you the confidence to move forward quickly. Think back to a large project in your life that suffered from a lack of coordination. Unclear lines of ownership and communication silos caused deadlines to slip, resulting in a subpar or even unfinished final product -- something that nobody is happy about. Infrastructure migrations are no different: With parallel processes tracked across disconnected systems, it can be difficult to prepare, stay on track and get across the finish line on time.
To help with these migrations, we created AWS Migration Hub, which provides a single place to track the process of application migrations across multiple AWS and partner solutions. Customers like Magellan Health and partners like Wipro and Onica are making use of Migration Hub to plan their migration journeys, resulting in reduced overall migration costs, right-sizing of resource allocation and improved data transparency.
Another big part of planning for migrations is knowing the tools at your disposal. Having the right tool at the right time helps you move more quickly, reliably and cost-effectively.
Let’s start with databases, a critical part of many modern applications. In an end-to-end database migration, there are a number of error-prone tasks that can be extremely costly if not done correctly. From schema creation to data transfer to changing your application code and queries -- each of these traditionally requires its own expertise. That’s why we have tools like AWS Database Migration Service, which enables you to migrate your databases with minimal downtime; it has accelerated the migration of more than 450,000 databases to AWS. We also offer the AWS Schema Conversion Tool to simplify the creation of your data model on your destination database. And finally, we offer Babelfish for Aurora PostgreSQL, which allows you to run Microsoft SQL Server applications on PostgreSQL with little to no code changes.
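To make the schema-conversion step concrete, here is a minimal sketch of the kind of engine-to-engine type translation such a tool automates. The mapping table below is a simplified, hypothetical subset for illustration only, not the AWS Schema Conversion Tool’s actual rules.

```python
# Illustrative sketch of engine-to-engine column type mapping, the kind of
# task a schema conversion tool automates. The mapping is a hypothetical
# subset, not the AWS Schema Conversion Tool's actual conversion rules.
SQLSERVER_TO_POSTGRES = {
    "NVARCHAR(MAX)": "TEXT",
    "DATETIME": "TIMESTAMP",
    "BIT": "BOOLEAN",
    "MONEY": "NUMERIC(19,4)",
}

def convert_column(name: str, sqlserver_type: str) -> str:
    """Translate one column definition to the destination engine's type."""
    pg_type = SQLSERVER_TO_POSTGRES.get(sqlserver_type.upper(), sqlserver_type)
    return f"{name} {pg_type}"

print(convert_column("created_at", "DATETIME"))  # created_at TIMESTAMP
```

Types with no special mapping pass through unchanged, which mirrors why manual conversions are error-prone: the tricky cases are exactly the ones that do not map one-to-one.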
'Don't Go It Alone'
Zooming out, there is much more to migration mode than databases. When moving entire applications, many teams struggle with the complexity of managing multiple migration solutions and a long maintenance window. One AWS partner performing these migrations particularly well was CloudEndure. Today, AWS Application Migration Service, which is based on CloudEndure’s migration technology, enables customers to easily lift and shift a wide range of applications…with minimal downtime.
Speaking of expertise within the community and the AWS Partner Network, we have a deep community of partners to help you with your transformation. So, don’t go it alone. There are over 170 AWS Migration Competency partners that can help you from day one, from migrating to operating. Each has industry and domain expertise, certified technical skills and a proven track record of customer success to help you on your journey. Migrations may seem complicated, but surrounding yourself with the right team can make all the difference.
Once you have the right partners in place, the next important piece is setting your migration up for success by aligning and equipping the people within your organization for innovation. Interestingly, innovation is rarely just about the technology. It is about the people who can make it happen.
Successful migration stories start with strong top-down leadership to chart the course. They are driven by a well-trained team that powers the engine of change. For leaders, this means setting a mission that unites their team and crafting aggressive goals without “boiling the ocean” or making things overly complex. For the rest of the team, training and developing new skills broadens what is possible, opening up solutions that may have previously been out of reach.
One customer that put these principles into practice is Autodesk, a leader in 3D design, engineering and entertainment software that transformed their business through recent large-scale migrations. Managing hundreds of business apps across both on-prem and hybrid infrastructure, their leadership set out to streamline internal development processes and reduce remote user onboarding friction by moving to the cloud. Autodesk leadership knew that leveling up their team members during the migration process was critical, so they created an internal center of excellence to help transform employees into cloud experts. They made use of upskilling programs like AWS Certification, amassing more than 115 AWS certifications in only two months. The teams began to migrate individual applications and increase their feature velocity. By setting aggressive, top-down goals and investing in training certifications, they were able to migrate or retire over 400 enterprise applications, reduce the number of operational incidents by 86 percent and improve disaster recovery by 80 percent. Best of all, they were able to accomplish it four months earlier than anticipated. The outcomes they achieved were truly extraordinary, but the playbook is no secret: A strong vision with the right preparation and training sets you up for success.
Containers And Serverless
One of the ways we have seen customers modernize their workloads is by moving to smaller and smaller units of compute and by that, I mean moving to containers and serverless. Customers like containers because they can build smaller chunks of compute. These smaller chunks allow customers to move faster, be more portable and benefit from the high degree of automation built into container orchestration systems.
The growth in containers is pretty astounding, and lots of that growth is happening on AWS. Of the containers in the cloud, 80 percent run on AWS. Customers choose AWS for their containers because we have more choice than any other cloud provider. Customers can choose a service that suits their needs. For instance, if what you value most is the open-source Kubernetes framework, then we have the Amazon Elastic Kubernetes Service, or EKS. You can easily migrate any standard Kubernetes application to EKS without needing to refactor your code. If you want the container service with the deepest integration to the rest of the AWS platform, then we have the Amazon Elastic Container Service, or ECS. Since we control the development of ECS, we can ensure that it integrates seamlessly with the rest of the AWS platform to provide a secure and easy-to-use solution.
If you don’t want to manage clusters of instances, we have our serverless container offering, AWS Fargate. The Fargate service is unique to AWS. You can use ECS and EKS to launch containers on Fargate, and it removes the need to provision, manage or patch clusters or servers.
Customers have told us that they love the choice and flexibility these services give them; however, they still have containers that need to run on prem as they transition to the cloud. That’s why we have built Amazon ECS Anywhere and Amazon EKS Anywhere. With Amazon ECS Anywhere, you use the familiar ECS control plane based in an AWS region to orchestrate your containers and run tasks on your on-prem infrastructure. Amazon EKS Anywhere provides an installable software package for creating and operating Kubernetes clusters on prem, along with the tooling you will need to support it.
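As a rough sketch of what “launching containers on Fargate” looks like in practice, the function below builds the key fields of an ECS task definition for the Fargate launch type. The family name and container image are hypothetical, and this is only an illustration of the request shape; in practice a dict like this would be passed to the ECS `register_task_definition` API via a client such as boto3.

```python
# Sketch of the key fields in an ECS task definition targeting Fargate.
# Family and image names are hypothetical; in practice you would pass a
# structure like this to the ECS register_task_definition API.
def fargate_task_definition(family: str, image: str, cpu: str = "256",
                            memory: str = "512") -> dict:
    return {
        "family": family,
        "requiresCompatibilities": ["FARGATE"],  # serverless launch type
        "networkMode": "awsvpc",                 # required for Fargate tasks
        "cpu": cpu,                              # task-level CPU units
        "memory": memory,                        # task-level memory (MiB)
        "containerDefinitions": [{
            "name": family,
            "image": image,
            "essential": True,
        }],
    }

task = fargate_task_definition("web-api", "example/web-api:latest")
```

The point of the shape is what it omits: there is no cluster sizing, AMI choice or instance type anywhere, because with Fargate there are no servers to provision or patch.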
AWS App Runner
Another thing we discovered is that customers running containerized web apps and APIs were doing similar things that could be streamlined. So we decided to optimize for this workflow and back in May, we introduced AWS App Runner. App Runner is a fully managed container application service that makes it easier and faster for customers to build, deploy and run containerized web apps and APIs. AWS App Runner load-balances traffic based on incoming requests and automatically scales resources according to your traffic patterns. There is no need to build and configure CI/CD pipelines with AWS App Runner. It automatically detects changes to your code or a container image and deploys a new version of the application.
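The automatic-deployment behavior described above is driven by the service’s source configuration. Here is a hedged sketch of that configuration built as a plain Python dict so it runs anywhere; the ECR image name and port are hypothetical, and the field names reflect my understanding of the App Runner `CreateService` request shape.

```python
# Sketch of an App Runner source configuration with automatic deployments
# turned on: when the tracked container image changes, App Runner deploys
# a new version. The image identifier is hypothetical; in practice a dict
# like this would be part of an App Runner CreateService request.
def app_runner_source(image_identifier: str, port: int = 8080) -> dict:
    return {
        "AutoDeploymentsEnabled": True,  # redeploy on new image pushes
        "ImageRepository": {
            "ImageIdentifier": image_identifier,
            "ImageRepositoryType": "ECR",
            "ImageConfiguration": {"Port": str(port)},
        },
    }

cfg = app_runner_source("example.dkr.ecr.us-east-1.amazonaws.com/web:latest")
```

With `AutoDeploymentsEnabled` set, the CI/CD step the text mentions collapses to pushing a new image tag.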
Another big change in the way customers architect their applications has been serverless computing, with services like AWS Lambda. Lambda enables event-driven serverless computing, which allows you to write applications without managing servers or containers. Lambda is event-driven, which means you can set up your code to automatically trigger based on events from over 200 AWS services and SaaS applications without having to write any integration code, or you can also call it directly from any web or mobile app.
In the simplest terms, an event is a signal that a system state has changed. So, when triggered by an event -- for example, adding an item to a shopping cart -- Lambda spins up the necessary compute to run the code and then shuts it back down when it’s finished. One of the coolest parts: You only pay for the compute time you consume at a millisecond granularity, so you’re never paying for overprovisioned infrastructure. This is an amazing cost efficiency for most customers. With Lambda, you can focus on writing your code and then use Lambda to manage the compute, scaling and fault tolerance. This means faster time to production with the lowest possible costs. We see customers building a wide variety of applications, from automating IT tasks to data processing to building microservices. Here, Lambda really allows for superfast innovation, enabling you to be extraordinary.
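A minimal sketch of the shopping-cart example above: a Lambda-style handler that receives the triggering event and returns a result. The event shape and field names are assumptions made for illustration, and the handler is invoked locally here; in Lambda, the service would call it with the event and a runtime context object.

```python
# Minimal sketch of an event-driven Lambda-style handler for an
# "item added to cart" event. The event shape is hypothetical; Lambda
# invokes the handler with the triggering event and a context object.
def handler(event, context=None):
    item = event["item"]
    quantity = event.get("quantity", 1)
    # A real function might write to a database or emit another event;
    # here we just compute the line total so the sketch runs anywhere.
    return {
        "statusCode": 200,
        "lineTotal": round(item["price"] * quantity, 2),
    }

# Simulate the event that would fire when a cart item is added.
event = {"item": {"sku": "SKU-123", "price": 4.99}, "quantity": 2}
print(handler(event))  # {'statusCode': 200, 'lineTotal': 9.98}
```

Note that nothing in the function manages servers, scaling or retries; that is the division of labor the keynote is describing.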
Over the last year, we have all changed the way we interact with the physical world. When COVID hit, Coca-Cola wanted to allow customers to order drinks from their freestyle drink dispensers without needing to touch the machine. They set out to work, building a smartphone app that allows customers to order and pay for drinks without having to touch their freestyle machines. They used AWS Lambda and several other AWS services to quickly launch this new feature. Because Lambda has security and scalability built in, that team was able to focus on the application. As a result, they built this new app in just 100 days, and now, over 30,000 machines have this touchless capability.
Choosing The Right Databases
It’s time to reevaluate your database and challenge your assumptions. As you modernize, consider if the database you use today is still the best fit for your application. Choosing the right database comes down to understanding both your data model and your access patterns.
Customers with relational data can make the most of Amazon RDS or Amazon Aurora. They can use familiar SQL database engines without having to deal with time-consuming administration tasks such as hardware provisioning, database setup, patching and backups. For non-relational data sets, like key-value pairs or documents, those workloads are well-suited towards Amazon DynamoDB or Amazon DocumentDB.
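The key-value access pattern that suits DynamoDB can be sketched in a few lines. A plain Python dict stands in for the table here so the example runs locally; the key naming convention and item fields are assumptions for illustration, and real code would go through the DynamoDB API (e.g. `put_item`/`get_item`) instead.

```python
# Sketch of the key-value access pattern a store like DynamoDB serves
# well: every read and write addresses exactly one item by its key, with
# no joins. A plain dict stands in for the table so this runs locally.
table = {}

def put_item(pk: str, item: dict):
    """Write one item under its partition key."""
    table[pk] = item

def get_item(pk: str):
    """Fetch one item by key; None if absent (no scans, no joins)."""
    return table.get(pk)

put_item("USER#42", {"name": "Ada", "plan": "pro"})
print(get_item("USER#42"))  # {'name': 'Ada', 'plan': 'pro'}
```

If your queries look like this, single-item lookups by key, a non-relational store fits; if they look like multi-table joins, the relational engines above remain the better match.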
Some use cases have unique patterns surrounding how data is written or queried. Demanding caching or latency requirements, relationship-based queries for graph data sets and consistent write intervals in time-series data -- all of these represent scenarios that become a challenge for more general databases. For these access patterns, purpose-built databases like Amazon ElastiCache, Amazon Neptune, Amazon Timestream and more do the heavy lifting to solve these specialized workload challenges.
One of the problems that comes with choice is that it can be hard to pick what’s right for you. If you find yourself in this situation, there are partners and programs that can help you get from where you are to where you want to be. We have AWS Professional Services and partners who can look at your circumstances and accelerate your path to modernization. We also offer the Database Freedom program, which provides customers with advice on application architecture, migration strategies and employee training, customized for their technology landscape and goals.
The most practical way to unify data across an organization starts with a data lake. AWS customers like Moderna, Pinterest, Intuit, Epic Games and Cerner have centralized their data from various business silos into data lakes.
A data lake enables controlled, secure data access across the entire organization. With such a setup, data scientists, developers and business analysts can access the data lake using their choice of analytics tools and start creating value for their business right away.
Building a data lake on AWS usually starts with Amazon S3. It’s really easy to use and works with a broad portfolio of analytics tools. S3 is built to store and retrieve any amount of data with unmatched availability and built from the ground up to deliver 11 nines of durability. But S3 is just the first piece of a modern data lake architecture. Earlier, I talked about the importance of using purpose-built data stores that are best-suited to your use cases. That kind of optimization is just as important when it comes to using the right analytics tools for the job.
For example, if you want to get started querying your data right away, you can use standard SQL to run queries directly against your S3 data lake with Amazon Athena. If you need to process vast amounts of unstructured data, you can automate tasks like provisioning capacity and tuning clusters with Amazon EMR (previously called Amazon Elastic MapReduce). If you run critical applications and monitoring operational health is critical to your success, you can use Amazon Elasticsearch Service to quickly analyze large amounts of log data. If you need to perform real-time processing of streaming data, you can use Amazon Kinesis, which has become a linchpin in many IoT applications. And if you have large amounts of structured data where you want super-fast query results -- a capability that is critical for predictive analytics -- you want a data warehouse like Amazon Redshift. Redshift is the first data warehouse built for the cloud, and it can run queries at exabyte scale against your data lake and at petabyte scale in your cluster, delivering up to three times better price performance than other cloud data warehouses.
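One detail that makes querying S3 data efficient with engines like Athena is laying objects out under Hive-style `key=value` prefixes, so a query can scan only the partitions it needs. The sketch below generates such keys; the bucket, dataset and file names are hypothetical.

```python
# Sketch of a Hive-style partitioned S3 key layout, the convention that
# lets engines like Athena prune partitions instead of scanning the whole
# data lake. Bucket, dataset and file names are hypothetical.
def partitioned_key(dataset: str, year: int, month: int, filename: str) -> str:
    return f"s3://example-data-lake/{dataset}/year={year}/month={month:02d}/{filename}"

key = partitioned_key("clickstream", 2021, 6, "events-0001.parquet")
print(key)
# s3://example-data-lake/clickstream/year=2021/month=06/events-0001.parquet
```

A query filtering on `year = 2021 AND month = 6` then reads only objects under that one prefix, which is where much of the cost and speed advantage comes from.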
AWS’ Machine Learning Stack
Our customers choose to run their machine learning workloads on AWS because of our breadth and depth of services, the rapid pace of innovation and the AI services that make it easy to get started with machine learning. Of course, machine learning is an incredibly dynamic technology, and how a developer engages with ML is very different from the needs of a data scientist.
At AWS, we think of ML like a stack, where each layer builds on the one below it. As we go up the stack, we work to decrease complexity, making machine learning easier to use and easier to integrate with your applications. You can engage with machine learning at a level that matches your expertise and use case.
At the bottom of the stack, for expert practitioners who need to run highly customized ML applications, AWS supports all the major machine learning frameworks, including TensorFlow, MXNet and PyTorch. We also offer the highest-performance instances for ML training in the cloud with our Amazon EC2 P4d instances. For running machine learning inference, we have our AWS Inferentia-based EC2 Inf1 instances that deliver up to 70 percent lower cost per inference than comparable GPU-based EC2 instances. And to help you further optimize your training workloads, we are building another custom ML chip called Trainium. We expect Trainium-based EC2 instances to be available later this year and to offer significant price performance advantages over other options.
In the middle layer of the stack, targeted towards data scientists and a broad range of ML developers, we offer Amazon SageMaker, the most comprehensive managed machine learning service. SageMaker makes ML more accessible to all levels of technical expertise and was built from the ground up to simplify the process of ML. It includes tools for every step of ML development -- tasks like data preparation, feature engineering, algorithm selection, training, hosting, monitoring and many more. Using SageMaker, you can remove the complexity from each step of the ML development workflow, so that it’s faster, more cost-effective and easier to implement.
At the top layer, we have AI services to address common horizontal and industry-specific use cases. Our goal here is to help developers and business users add intelligence to their apps without needing to learn deep ML skills. We have a suite of solutions for the industrial sector that uses visual and machine data to improve processes, automate visual inspection of equipment and recommend maintenance. For healthcare, we have purpose-built solutions for transcription, medical text comprehension and Amazon HealthLake, a new HIPAA-eligible service to store and analyze health data in the cloud. We also have a variety of solutions for common use cases like computer vision, natural language processing, sentiment analysis and many more.