Five takeaways on the future of the cloud
From the latest in custom processors to the new scale of data – here are some of the hot topics shaping conversations at AWS’s annual re:Invent conference.
on December 03, 2019
AWS re:Invent 2019 has brought together more than 65,000 attendees from around the world to learn about the trends shaping the cloud in 2020. With multiple new capabilities and services announced on Tuesday alone, it can be hard to keep up. Here are five of the biggest trends from re:Invent that you should pay attention to.
Breaking down barriers
With 69 Availability Zones in 22 AWS Regions around the world, customers trust AWS to power mission-critical workloads. Some applications, however, need to run with single-digit millisecond latency, which requires data processing local to end users. At re:Invent, AWS highlighted three services aimed at breaking down barriers, extending the reach of AWS to wherever customers need it: on-premises, in key population and industrial centers, or on 5G mobile and edge devices.
AWS Outposts are fully-managed and configurable racks of AWS-designed hardware that bring native AWS capabilities to on-premises locations using the familiar AWS or VMware control plane and tools. AWS Local Zones place select AWS services in close proximity to large population, industry, and IT centers in order to deliver applications with single-digit millisecond latencies, without requiring customers to build and operate datacenters or co-location facilities. AWS Wavelength enables developers to deploy AWS compute and storage at the edge of the 5G network, in order to support emerging applications like machine learning at the edge, industrial IoT, and virtual and augmented reality on mobile and edge devices.
With the release of AWS Outposts, AWS Local Zones, and AWS Wavelength, AWS is changing one of the defining characteristics of the cloud, bringing services even closer to end users, and supporting whole new classes of applications that run local to end users and connect to the rest of the application and the full range of services running in an AWS Region.
Innovating with custom processors
When AWS first launched EC2 in 2006, most businesses built data centers around general-purpose processors that supported a wide range of workloads. Lacking economies of scale, individual businesses simply couldn’t afford the massive capital investment needed to develop custom processors for specific workloads. As AWS grew to millions of active users a month, and new workloads like machine learning, microservices, and web-tier applications grew in popularity, AWS’s broad customer base made the investment in custom hardware attractive. AWS realized that if it could develop custom processors that deliver greater performance at a lower cost, it could pass that extra performance and those cost savings on to customers.
At this year’s re:Invent, AWS advanced the capabilities of its custom processors with its next-generation AWS-designed, Arm-based Graviton processor, the Graviton2, and the first EC2 machine learning inference instances (Inf1), powered by Inferentia, AWS’s custom-designed accelerator for machine learning inference. The Graviton2 powers new Arm-based versions of the Amazon EC2 M, R, and C instance families, delivering up to 40% improved price/performance over comparable x86-based instances. With Amazon EC2 Inf1 instances, customers receive the highest performance and lowest cost for machine learning inference in the cloud. Amazon EC2 Inf1 instances deliver 2x higher inference throughput, and up to 66% lower cost-per-inference than the Amazon EC2 G4 instance family, which was already the fastest and lowest cost instance for machine learning inference available in the cloud.
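To make the cost-per-inference comparison concrete, here is a small arithmetic sketch. The hourly prices and throughput figures below are invented placeholders, not published Inf1 or G4 pricing; the point is only how throughput and hourly price combine into cost-per-inference:

```python
# Illustrative arithmetic only: the prices and throughputs below are
# made-up placeholders, not real Inf1 or G4 figures.

def cost_per_inference(hourly_price: float, inferences_per_hour: float) -> float:
    """Dollars spent per single inference on one instance."""
    return hourly_price / inferences_per_hour

# Hypothetical baseline instance: $1.00/hour at 1,000 inferences/hour.
baseline = cost_per_inference(1.00, 1_000)   # $0.001 per inference

# Doubling throughput at the same hourly price halves cost-per-inference...
doubled = cost_per_inference(1.00, 2_000)    # $0.0005 per inference
assert doubled == baseline / 2

# ...so reaching "up to 66% lower" also implies a lower effective price:
target = baseline * (1 - 0.66)               # $0.00034 per inference
required_price = target * 2_000              # about $0.68/hour in this sketch
```

This shows that a 2x throughput gain alone accounts for a 50% reduction; the remaining gap in the "up to 66%" figure comes from instance pricing.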
Bringing Amazon’s machine learning expertise to more customers
The demand for machine learning continues to grow rapidly. Over the last year, AWS has introduced multiple AI services that allow customers to benefit from the same machine learning technologies used by Amazon’s consumer business to power its award-winning customer experience.
AWS customers are interested in learning from Amazon’s vast experience operating machine learning at scale to improve operations and deliver better customer experiences, but not every customer is able to invest in the creation of their own custom models. At re:Invent, AWS announced new AI services that build upon Amazon’s rich experience with machine learning, allowing more developers to apply machine learning to create better end user experiences, including machine learning-powered enterprise search, code reviews and profiling, and fraud detection.
Amazon Kendra uses natural language processing and other machine learning techniques to reinvent enterprise search, providing high-quality answers to common queries instead of a random list of links in response to keywords. Amazon CodeGuru uses machine learning to automate code reviews and application profiling. Amazon Fraud Detector is a fully managed service for detecting potential online identity and payment fraud in real time, with no machine learning experience required.
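As a minimal sketch of what a natural-language query looks like in practice, the helper below just assembles the parameters for Kendra’s Query API; with boto3 one would pass the resulting dict to `client("kendra").query(**params)`. The index ID and question are placeholders, and building the dict separately keeps the example runnable without AWS credentials:

```python
# Hypothetical helper: assemble parameters for Amazon Kendra's Query API.
# The index ID is a placeholder; with boto3 you would pass this dict to
# boto3.client("kendra").query(**params).

def build_kendra_query(index_id: str, question: str, page_size: int = 10) -> dict:
    return {
        "IndexId": index_id,
        "QueryText": question,   # a natural-language question, not keywords
        "PageSize": page_size,   # number of results to return per page
    }

params = build_kendra_query("example-index-id", "How do I request a new laptop?")
```

The contrast with keyword search is in `QueryText`: Kendra accepts a full question and ranks answers by relevance, rather than matching terms against a link list.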
The first fully integrated development environment (IDE) for machine learning
An IDE is an established technology in software development that gives a developer everything they need to write, build, and test an application. While IDEs have long been available for software development, due to the relative immaturity of machine learning, these tools simply didn’t exist for machine learning applications until now.
Tens of thousands of customers are using Amazon SageMaker to accelerate their machine learning deployments, but just as they solve one challenge, customers want AWS to help solve the next challenge of building, training, and deploying machine learning at scale.
Amazon SageMaker Studio is the first comprehensive IDE for machine learning, allowing developers to build, train, explain, inspect, monitor, debug, and run their machine learning models from a single interface. Developers now have a simple way to manage end-to-end machine learning development workflows, so they can build, train, and deploy high-quality machine learning models faster and more easily.
Preparing customers to operate at the new scale of data
Customers today regularly operate on petabytes and even exabytes of data. To work at this new scale, analytics tools have to change significantly to scale efficiently. Customers want to analyze all of their data, regardless of format or location, and applications need to scale to support millions of users globally.
AWS provides the broadest and deepest set of analytics services of any cloud provider, and Amazon Redshift is recognized as the world's fastest cloud data warehouse. At re:Invent, AWS expanded the capabilities of Redshift to further improve its performance and provide more flexibility to customers so they can operate effectively at the new scale of data.
Amazon Redshift RA3 instances let customers scale compute and storage separately and deliver 3x better performance than other cloud data warehouse providers. AQUA (Advanced Query Accelerator) for Amazon Redshift provides an innovative hardware-accelerated cache that delivers up to 10x better query performance than other cloud data warehouse providers. Amazon Redshift Data Lake Export allows customers to export data directly from Amazon Redshift to Amazon S3 in an open data format (Apache Parquet) optimized for analytics. Amazon Redshift Federated Query lets customers analyze data across their Amazon Redshift data warehouse, Amazon S3 data lake, and Amazon RDS and Aurora (PostgreSQL) databases.
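As a hedged sketch of what Data Lake Export looks like in practice, the helper below composes a Redshift UNLOAD statement that writes query results to Amazon S3 as Parquet. The table, bucket, and IAM role are placeholders, and the resulting statement would be executed over a normal Redshift SQL connection:

```python
# Hypothetical helper: compose a Redshift Data Lake Export statement.
# The query, S3 path, and IAM role passed in are placeholders; the
# resulting SQL would be run against a Redshift cluster.

def unload_to_parquet(query: str, s3_path: str, iam_role: str) -> str:
    return (
        f"UNLOAD ('{query}') "
        f"TO '{s3_path}' "
        f"IAM_ROLE '{iam_role}' "
        "FORMAT AS PARQUET"
    )

sql = unload_to_parquet(
    "SELECT * FROM sales",                           # placeholder table
    "s3://example-bucket/sales/",                    # placeholder bucket
    "arn:aws:iam::123456789012:role/ExampleRole",    # placeholder role
)
```

Because the output is Parquet in S3, the exported data can then be queried by other engines against the data lake without re-importing it into the warehouse.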
To read about the latest services and customers showcased at re:Invent, check out What’s New with AWS.