One of the most significant updates to come out of AWS re:Invent this year was about a processor, not a cloud service.
On the first day of the user conference, AWS added A1 instances, a set of EC2 VMs powered by its own custom, Arm-based Graviton Processor. The news might have been overlooked amid the bevy of services the cloud provider added that week; many developers’ eyes glaze over at this kind of AWS hardware talk. However, these processors will affect software in more ways than many people think. In fact, they could be a game-changer in terms of cost and how we power applications.
Even if you don’t know what an Arm-based Graviton Processor is, there could be a chip like it in your cellphone right now. Arm processors use less power than conventional server processors yet provide good performance in a smaller package, which translates to more processing power for the dollar. They’re also a good match for scale-out workloads, where it’s best to distribute an application across many smaller instances to provide scalability and meet processing needs.
Examples of these types of workloads include containerized microservices, web servers, dev platforms and caching fleets. Re:Invent attendees were most interested in how they could use A1 instances with that first type of workload, because, despite users’ best efforts to optimize containers, they are costly to run on the traditional cores found on AWS and other clouds.
A user can move workloads built on interpreted or scripted languages to the A1 instance type without a rewrite or recompile. However, if your code runs natively, you’ll need to rebuild it specifically for this processor’s Arm architecture.
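One way to see why scripted workloads port so easily: the script itself never references the CPU architecture, only the interpreter binary does. A minimal Python sketch (the `host_arch` helper is hypothetical, for illustration):

```python
import platform

def host_arch() -> str:
    """Report the CPU architecture the interpreter is running on.

    An interpreted script like this runs unchanged on x86 or Arm hosts;
    only the interpreter binary itself is architecture-specific.
    Typical values: "x86_64" on standard EC2 instances, "aarch64" on A1.
    """
    return platform.machine()

if __name__ == "__main__":
    print(f"Running on {host_arch()}")
```

The same file runs on either instance family; compiled binaries, by contrast, must be rebuilt for the arm64 target.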
AWS hardware diversifies to stay ahead in the market
The Arm-based Graviton Processor provides a clock speed of 2.3 GHz and can run at 45% lower cost than AWS’ standard x86 cores. AWS users can now choose from more than 100 instance types and sizes to meet their particular needs, including instances with a sustained all-core frequency of 4 GHz, instances with 12 TB of memory, instances with up to eight field-programmable gate arrays and instances with AMD EPYC processors.
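To put the 45% figure in concrete terms, here is a quick back-of-the-envelope calculation; the hourly rate is an assumed placeholder for the arithmetic, not real AWS pricing:

```python
# Illustrating the stated 45% cost reduction. The $0.10/hr figure is an
# assumed, hypothetical x86 rate used for arithmetic only -- not AWS pricing.
X86_HOURLY = 0.10
SAVINGS = 0.45

a1_hourly = X86_HOURLY * (1 - SAVINGS)           # $0.055/hr
monthly_saving = (X86_HOURLY - a1_hourly) * 730  # ~730 hours/month = $32.85
```

Multiplied across a large scale-out fleet running around the clock, that per-instance difference is where the cost story gets compelling.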
Keep in mind that these Graviton Processors are custom versions of Arm chips, and while little has been said about what work AWS did on the CPU, the provider undoubtedly optimized them for AWS workloads and tenants. We do know the 16 virtual CPUs (vCPUs) that make up each system on a chip are configured in clusters of four, with a 2 MB L2 cache shared within each cluster. Moreover, one vCPU maps to one physical CPU core.
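The layout described above can be sketched as data. This assumes sequential vCPU numbering within clusters, which AWS has not documented:

```python
# Sketch of the A1 vCPU topology described above: 16 vCPUs in four
# clusters of four, each cluster sharing a 2 MB L2 cache, and each
# vCPU mapping to one physical core (no hyperthreading).
N_VCPUS = 16
CLUSTER_SIZE = 4
L2_PER_CLUSTER_MB = 2

clusters = [list(range(start, start + CLUSTER_SIZE))
            for start in range(0, N_VCPUS, CLUSTER_SIZE)]
# e.g. clusters[0] == [0, 1, 2, 3] would share one 2 MB L2 cache
```

Cache sharing in clusters of four is one reason these chips suit scale-out work: each small instance slice gets a predictable share of cache rather than contending across the whole die.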
What’s important here is that these processors are cheaper to run and built to support next-generation architectures, such as containerized microservices and other net-new workloads, on public clouds. Thus, when you configure instances on AWS to support your application workloads, A1 instances will often be the cheaper and better route, depending on your own usage patterns.
Arm-based Graviton Processors are a tactical improvement for Amazon. They may not have received as much attention as the bigger and perhaps more surprising re:Invent news — such as AWS Outposts — but from a pragmatic standpoint, this is a huge improvement for those who push code out to the AWS cloud.
But what about the other cloud players? While Google has its own custom Tensor Processing Unit chip for AI processing, Microsoft has relied on widely available processors, such as the AMD EPYC series. Expect more purpose-built processors from these two in the future, but they both have some work to do to catch up to AWS.