Amazon Web Services (AWS) is the most comprehensive and broadly adopted cloud platform, offering over 175 fully featured services from data centers globally. Millions of customers—including the fastest-growing startups, largest enterprises, and leading government agencies—are using AWS to lower costs, become more agile, and innovate faster.
Amazon Web Services (AWS) introduced the Amazon EC2 A1 instances, powered by first-generation AWS Graviton processors, at AWS re:Invent 2018. These instances are custom-built by AWS around 64-bit Arm Neoverse cores and were the first Arm-based instances on AWS.
Amazon EC2 A1 instances provide up to 45% cost savings over other general-purpose instances for scale-out applications such as web servers, containerized microservices, data/log processing, and other workloads that can run on smaller cores and fit within the available memory footprint.
AWS announced the AWS Graviton2 processors and six new instance types powered by them at AWS re:Invent 2019. The new general-purpose (M6g), compute-optimized (C6g), and memory-optimized (R6g) instances and their disk variants provide up to 40% better price performance than comparable current-generation instances for a wide variety of workloads.
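As a rough sketch, the mapping from workload profile to Graviton2 instance family described above could be expressed in code. The helper name and the workload categories here are illustrative, not part of any AWS SDK:

```python
# Illustrative mapping of workload profile to AWS Graviton2 instance family.
# choose_graviton2_family is a hypothetical helper, not an AWS API.

def choose_graviton2_family(workload: str) -> str:
    """Return the Graviton2 instance family suited to a workload profile."""
    families = {
        "general": "m6g",   # balanced compute/memory (application servers)
        "compute": "c6g",   # compute-optimized (HPC, EDA, gaming)
        "memory": "r6g",    # memory-optimized (open-source databases, caches)
    }
    if workload not in families:
        raise ValueError(f"unknown workload profile: {workload!r}")
    return families[workload]

print(choose_graviton2_family("compute"))  # c6g
```

Disk variants (for example, M6gd) follow the same family naming with a `d` suffix.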
Extensive ecosystem support
AWS Graviton2 processors, based on the 64-bit Arm architecture, are supported by several popular Linux distributions and by many applications and services from AWS and from independent software vendors (ISVs).
Best price performance
AWS Graviton2 processors power Amazon EC2 M6g, C6g, and R6g instances, which provide up to 40% better price performance than comparable current-generation instances for a wide variety of workloads, including application servers, microservices, high-performance computing, electronic design automation, gaming, open-source databases, and in-memory caches.
On-demand, scalable, and cost-effective
Developers building applications on the Arm architecture can use AWS Graviton2 processors to run cloud-native applications securely and at scale, without up-front investment or performance compromises. Inexpensive, straightforward access to cloud compute capacity lets developers focus on innovation rather than provisioning.
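Code targeting both Graviton (Arm) and x86 instances often needs to branch on the host architecture, for example to select the right prebuilt binary. A minimal sketch using Python's standard `platform` module (the `is_arm64` helper is an assumption for illustration; Graviton instances typically report `aarch64`):

```python
# Detect whether a machine string denotes a 64-bit Arm host, as reported by
# platform.machine() on Linux ("aarch64") or macOS ("arm64").
import platform

def is_arm64(machine=None):
    """True if the given (or current) machine string is 64-bit Arm."""
    m = (machine or platform.machine()).lower()
    return m in ("aarch64", "arm64")

print(is_arm64("aarch64"))  # True
print(is_arm64("x86_64"))   # False
```

Calling `is_arm64()` with no argument checks the machine the code is actually running on.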