AWS re:Invent 2019: Another Year of Madness and Innovation

Frank Palermo
Executive Vice President - Global Digital Solutions
Article

Who would have thought, back in 2012 at the first AWS re:Invent, that the conference would eventually grow to draw over 65,000 professionals worldwide and require more than six venues in Las Vegas to host its agenda?

AWS re:Invent has never been a sales conference. The event remains a gathering of technical experts and developers who continue to push the envelope on what is possible in the cloud. This year featured over 2,500 educational sessions and more than 100 product announcements.

Amazon is now on a path of “dual” disruption: it continues to disrupt horizontal cloud services and leapfrog the competition, while its cloud capabilities also help companies redefine industry-specific business models. Let’s face it: after 13 years of continuous evolution, AWS has become the “everything store” of cloud computing.

There’s no compression algorithm for experience.

Innovation at speed

AWS has been under a lot of pressure lately as other cloud service providers like Microsoft have received accolades for closing the capability gap, and Google has made recent strides with its AI/ML capabilities. AWS has also suffered several high-profile losses, including the U.S. government’s Joint Enterprise Defense Infrastructure (JEDI) contract and other large enterprise deals.

But this year, the king of cloud is back. This year’s re:Invent marks a true “reinvention” of AWS.

You would be hard-pressed to keep up with the pace of innovation at AWS these days. The company is a market leader with over 175 cloud services, but it’s the depth of these services that is truly powerful and sets them apart. In his keynote alone, Andy Jassy packed over 30 new announcements into just under three hours, and he accused other cloud providers of offering shallow capabilities and just being “checkbox monkeys.”

All industries are now cloud-native

Amazon is clear about its strategy to transform the enterprise, not just IT. It was significant to see two highly influential CEOs from highly regulated industries, David Solomon of Goldman Sachs and Brent Shafer of Cerner, discuss how they have truly transformed their businesses in the cloud. Solomon not only provided DJ entertainment before the keynote but also described how Goldman Sachs worked with AWS to create a “bring your own key” service that allowed the firm to fully embrace the cloud and accelerate new market offerings. Building the credit card system that underpins the Apple Card, for instance, would not have been possible without cloud-native technology. The Apple Card launched a few months ago and, according to Solomon, is the most successful credit card launch ever. Next year Goldman Sachs will announce its Transaction Banking service, a cloud-native digital platform that helps organizations better manage their cash.

Cerner, a healthcare technology company that manages 23 petabytes of health data for over 250 million people across 30 countries, continues to explore ways to leverage this data for better care delivery and improved patient outcomes. Historically, its ability to learn from the data and predict care has been limited. By migrating its privately held data to AWS and using machine learning services to build, train, and deploy predictive models, Cerner has been able to prevent costly second episodes of care. As a result, one healthcare system reported its lowest readmission rate in a decade.

It’s now clear that if your business is not born, or re-born, in the cloud, its survival is at risk.

The foundation continues to evolve

AWS continues to innovate rapidly on many of its foundational services, which frequently requires disrupting or re-inventing classic architectures and technology components. At the heart of AWS services are its chips. AWS made a strategic decision to design and build its own silicon through its acquisition of Annapurna Labs in early 2015. The reality is that general-purpose CPUs often aren’t ideal for specialized tasks such as running machine learning algorithms or processing imagery.

This strategy has paid dividends. EC2 now provides the most powerful GPUs, which can be leveraged for complicated machine learning computations. It has the fastest processor in z1d, which clocks in at 4.0 GHz. And it’s the only platform with 100 Gbps connectivity for standard instances.

AWS launched a series of new ARM-based EC2 instances, M6g, R6g, and C6g, powered by Amazon’s own Graviton2 chip. These provide 4x more compute cores, 5x faster memory, and 7x the performance of the first-generation Graviton, and they deliver 40% better price performance than current-generation x86 instances. The only people not excited about this are Intel and AMD.
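To make this concrete, here is a minimal sketch (Python with boto3) of launching a Graviton2-based M6g instance; the AMI ID is a placeholder, and Graviton instances require an arm64 image:

    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")

    # Launch a single Graviton2-based general-purpose instance.
    # The ImageId below is a placeholder and must reference an arm64 AMI.
    response = ec2.run_instances(
        ImageId="ami-0123456789abcdef0",
        InstanceType="m6g.large",
        MinCount=1,
        MaxCount=1,
    )
    print(response["Instances"][0]["InstanceId"])

Aside from choosing an arm64 AMI, nothing about the API call changes, which is what makes trying the new architecture relatively painless.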

As if one set of chips were not enough, AWS also announced Inferentia, its first custom ML inference chip, which delivers up to 3x higher throughput and up to 40% lower cost per inference compared to traditional GPU-powered instances. The new EC2 Inf1 instances now provide the fastest, lowest-cost machine learning inference in the cloud.

Not your mother’s hypervisor

Virtualization has been the bread and butter of cloud computing’s compute services. Classic virtualization has been around since the 1960s, and early techniques like the Xen hypervisor enabled AWS to support virtual machines. However, this traditional virtualization came with management overhead.

There is a concept sometimes called the I/O virtualization tax: guest workloads contend for I/O resources, causing latency and jitter, and a significant share of compute power is consumed by virtualization management rather than by the compute instance itself. You can think of the first generation of virtualization as a monolith, where all hardware components are managed by the same hypervisor. To put the overhead in context, transferring an 8 GB file to S3 could result in hundreds of thousands of kernel traps. It doesn’t take much extrapolation to see how this overhead quickly affects overall system performance.

AWS Nitro applies microservices concepts to virtualization. By moving certain processes to separate chips or cards, outside the main CPUs, Nitro offloads I/O operations, networking, security management, and more. The Nitro system moves the virtualization management functions onto dedicated hardware and its own control plane. The Nitro hypervisor itself is very lightweight and delivers performance that is indistinguishable from bare metal.

AWS Nitro is the next-generation hypervisor underpinning many of the innovations now being released.

Purpose-built data services for scale

Nowadays, all data is big data, and Hadoop is now a legacy technology. Gone are the days of the one-size-fits-all database; today’s modern computing demands purpose-built database solutions. If you don’t have the right data service, you will struggle to gain the insights required at scale.

Application architectures can now consider the type of data processing they require and select the right data service for the job. AWS has assembled the broadest set of data services in the cloud, with more than ten purpose-built databases for building highly scalable applications. If you need ad hoc querying of unstructured data, you leverage Amazon Athena; if you have vast amounts of unstructured data processed on dynamic clusters, you use Amazon EMR; if you need super-fast querying of structured data, you use Amazon Redshift; if you want real-time analytics on streaming data, you have Amazon Kinesis; and if you need BI and data visualization, you use Amazon QuickSight.
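As a quick illustration of the “right service for the job” idea, here is a minimal sketch (Python with boto3) of an ad hoc Athena query; the database, table, and S3 output bucket names are hypothetical:

    import boto3

    athena = boto3.client("athena", region_name="us-east-1")

    # Run an ad hoc SQL query directly against data sitting in S3.
    # The database, table, and output bucket below are placeholders.
    query = athena.start_query_execution(
        QueryString="SELECT status, COUNT(*) AS hits FROM web_logs GROUP BY status",
        QueryExecutionContext={"Database": "analytics"},
        ResultConfiguration={"OutputLocation": "s3://my-athena-results/"},
    )
    print(query["QueryExecutionId"])

There are no clusters to provision and you pay per query, which is exactly the trade-off that distinguishes Athena from EMR or Redshift.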

Bringing compute to your storage

Redshift is the most broadly used data warehouse in the cloud today. According to AWS, it is up to 2x faster than other cloud data warehouses and up to 75% less expensive than other vendors, and over 100 innovations have been added to the service.

However, scaling data workloads cost-effectively in the cloud is not an easy task. Data now has gravity and density, and distributed data workloads are putting more stress on I/O operations, creating challenges in scaling storage and compute linearly. Since 2012, SSD bandwidth has increased 12x while CPU data-streaming bandwidth has only doubled, creating limitations and bottlenecks.

The newly announced Advanced Query Accelerator (AQUA) for Amazon Redshift provides sophisticated hardware-accelerated caching that delivers up to 10x better query performance than other cloud platforms on the market. Existing data warehouse architectures with centralized storage require data to be moved to compute clusters; AQUA instead moves the compute to the storage, limiting the amount of data movement required and enabling compute and storage to scale independently.

Making machine learning easier

Democratizing AI begins with making machine learning more accessible for application developers and data scientists alike. For those comfortable working at the framework level, AWS provides equal support across the three major frameworks: TensorFlow, PyTorch, and MXNet. Notably, 90% of data scientists use multiple frameworks today.

Machine learning development is traditionally complex and time-consuming, and the skills needed for ML are scarce and expensive. Most ML applications are also highly iterative, requiring rapid experimentation. Several years ago, AWS announced SageMaker, an abstraction service that lets developers rapidly build, train, and deploy machine learning models without managing infrastructure or working at the framework level.

Now AWS has released SageMaker Studio, an integrated development environment (IDE) for machine learning. You can perform all the steps of building, training, and deploying models in one visual environment: create notebooks, manage experiments, and perform debugging and profiling to detect model drift and other anomalies that may result from changing model assumptions.
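For a sense of what the SageMaker abstraction looks like in practice, here is a minimal sketch using the SageMaker Python SDK; the IAM role ARN, training script, and S3 paths are placeholders:

    from sagemaker.sklearn.estimator import SKLearn

    # The role ARN, entry-point script, and S3 training path are placeholders.
    role = "arn:aws:iam::123456789012:role/MySageMakerRole"

    estimator = SKLearn(
        entry_point="train.py",        # your training script
        framework_version="0.23-1",
        instance_type="ml.m5.xlarge",
        instance_count=1,
        role=role,
    )

    # Train on data in S3, then deploy the model behind a managed endpoint.
    estimator.fit({"train": "s3://my-bucket/training-data/"})
    predictor = estimator.deploy(initial_instance_count=1, instance_type="ml.m5.large")

The same build/train/deploy steps are what Studio surfaces visually; the SDK and the IDE sit on top of the same underlying service.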

AWS is not alone in this space. Microsoft’s Azure Machine Learning Studio has also attempted to visualize the ML modeling process, providing a drag-and-drop UI that abstracts the nuances of Python coding, and Google has launched Cloud AutoML with sophisticated image and video intelligence tools. These “studios” are in the early phases of adoption. While their true ease of use is still to be determined, it’s clear that for ML to go mainstream, the model tooling needs to become more streamlined.

A quantum leap forward in computing

With Moore’s law finally reaching the limits of chip-density scaling, quantum computing is emerging as the next paradigm in application development at scale. It’s a logical next wave for cloud providers such as AWS to integrate quantum computing into their cloud service offerings. Amazon Braket is a newly announced, fully managed service that gives users access to quantum hardware from a variety of vendors (D-Wave, Rigetti, etc.) for rapid experimentation. Additionally, Amazon is providing access to quantum computing experts through its Amazon Quantum Solutions Lab to help you better understand the possibilities and practical applications of quantum computing within your business. While mainstream applications of quantum computing may be five or more years away, this is an amazing way for businesses to both understand the potential and shape the future direction of the technology.
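To give a flavor of how approachable Braket makes this, here is a minimal sketch using the Braket Python SDK to run a Bell-pair circuit on the SV1 managed simulator; the S3 results bucket is a placeholder:

    from braket.aws import AwsDevice
    from braket.circuits import Circuit

    # Target the SV1 managed state-vector simulator.
    device = AwsDevice("arn:aws:braket:::device/quantum-simulator/amazon/sv1")

    # Entangle two qubits into a Bell pair: Hadamard on qubit 0, then CNOT.
    bell = Circuit().h(0).cnot(0, 1)

    # The S3 bucket/prefix for results is a placeholder.
    task = device.run(bell, ("my-braket-results-bucket", "outputs"), shots=1000)
    print(task.result().measurement_counts)  # expect roughly 50/50 '00' and '11'

Swapping the device ARN sends the same circuit to gate-based quantum hardware such as Rigetti’s, subject to that device’s supported operations.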

Outposts delivered to your door

The reality that some applications will have to run on premises may seem like a fresh perspective for AWS. AWS Outposts was first announced at re:Invent 2018, but general availability only arrived at re:Invent 2019. Outposts are pre-configured racks, shipped to your location, containing AWS services like EC2, S3, EMR, RDS, EKS, and ECS to provide a hybrid cloud solution. Workloads can live and run on premises with the same AWS APIs and control plane, and seamlessly connect to other applications running in the public instances of AWS. Outposts come in two variants: native AWS Outposts or VMware Cloud on AWS. With Outposts, developers can now build something once using native AWS APIs and easily move the application between cloud and on-premises environments.

Are we on the same Wavelength?

One of the key promises of 5G is the enablement of low-latency applications like AR/VR, autonomous cars, smart cities, and many others. 5G revolutionizes mobile computing across several dimensions: 10 Gb/s peak data rates, 5 ms latency, 10 Tb/s of data volume per square kilometer, connection densities of 1 million devices per square kilometer, higher reliability, and lower energy consumption.

For applications that want to take full advantage of 5G and require low latency, it is critical to reduce network hops to the internet. AWS Wavelength brings AWS services to the edge of the 5G network by allowing application traffic to reach servers running in Wavelength Zones without leaving the mobile provider’s network. Application traffic only needs to travel from the device to the cell tower to a Wavelength Zone running in a nearby metro aggregation site.
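In practice, targeting the 5G edge looks much like ordinary VPC networking. Here is a minimal sketch (Python with boto3) of creating a subnet pinned to a Wavelength Zone; the VPC ID and zone name are placeholders:

    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")

    # Create a subnet in a Wavelength Zone; VPC ID and zone name are placeholders.
    subnet = ec2.create_subnet(
        VpcId="vpc-0123456789abcdef0",
        CidrBlock="10.0.5.0/24",
        AvailabilityZone="us-east-1-wl1-bos-wlz-1",  # a Wavelength Zone identifier
    )
    print(subnet["Subnet"]["SubnetId"])

EC2 instances launched into that subnet run inside the carrier’s network, so device traffic reaches them without a trip across the public internet.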

AWS is partnering with Verizon to make AWS Wavelength available across the United States; the service is currently being piloted by select customers on Verizon’s 5G Edge.

Where do we go from here?

Despite the numerous innovations announced, it’s impossible to cover everything, and a few topics were notably absent. The AWS open-source strategy didn’t make the main stage; unlike others such as Google (think Kubernetes), AWS has made relatively few major open-source contributions for a company innovating at its speed. The VMware partnership has seemingly taken a back seat, with no real roadmap; even the AWS Outposts announcement didn’t push VMware as a priority. Despite being critically important for the enterprise, the hybrid-cloud and multi-cloud messaging was very light. And overall ease of use still plagues the AWS platform: it remains an engineer’s toolbox that frequently overwhelms users with a plethora of sometimes overlapping options.

But I guess that’s the point: “There’s no compression algorithm for experience”.

Frank Palermo, Executive Vice President - Technology, Media & Telecommunications. Frank heads the Global Technical Solutions Group, which contains many of Virtusa’s specialized technical competency areas, such as Business Process Management (BPM), Enterprise Content Management (ECM), and Data Warehousing and Business Intelligence (DWBI).
