Amazon EC2 R7iz instances

The fastest 4th Generation Intel Xeon Scalable-based instances in the cloud

Why Amazon EC2 R7iz Instances?

Amazon Elastic Compute Cloud (EC2) R7iz instances are memory-optimized, high CPU performance instances. They are the fastest 4th Generation Intel Xeon Scalable-based (Sapphire Rapids) instances in the cloud, with a 3.9 GHz sustained all-core turbo frequency. R7iz instances deliver up to 20% better performance than previous generation z1d instances. They use DDR5 memory and deliver up to 2.4x higher memory bandwidth than z1d instances. R7iz instances feature an 8:1 ratio of memory to vCPU, with up to 128 vCPUs and up to 1,024 GiB of memory. The combination of high CPU performance and a large memory footprint makes R7iz instances ideal for frontend Electronic Design Automation (EDA), relational database workloads with high per-core licensing fees, and financial, actuarial, and data analytics simulation workloads.

AWS and Intel Partnership

AWS and Intel continue to collaborate to provide cloud services that are designed to meet current and future computing requirements. For more information, see the AWS and Intel partner page.

Benefits

R7iz instances deliver up to 20% higher compute performance than previous generation z1d instances. High CPU performance combined with up to 1,024 GiB of memory results in increased overall performance for applications such as EDA and relational databases. This can help you reduce time to market for product development while lowering licensing costs.

R7iz instances add to the broadest and deepest selection of EC2 instances in the cloud. They provide instance sizes up to 32xlarge and offer two bare metal sizes. With up to 2.6x more vCPUs than other high-frequency instances, you can scale up your memory-intensive workloads.

R7iz instances are built on the AWS Nitro System, a combination of dedicated hardware and a lightweight hypervisor that delivers practically all of the compute and memory resources of the host hardware to your instances for better overall performance and security.

Features

R7iz instances are the fastest 4th Generation Intel Xeon Scalable-based (Sapphire Rapids) instances in the cloud with 3.9 GHz sustained all-core turbo frequency. These instances include support for always-on memory encryption using Intel Total Memory Encryption (TME).

R7iz instances support up to 50 Gbps of network bandwidth and, in the largest sizes, up to 40 Gbps of bandwidth to Amazon Elastic Block Store (EBS). Additionally, you can attach up to 88 EBS volumes to an R7iz instance, compared to up to 28 on z1d instances. R7iz instances use the new DDR5 memory technology and provide up to 2.4x higher memory bandwidth than comparable high-frequency instances. R7iz instances support Elastic Fabric Adapter (EFA) in the 32xlarge and metal-32xl sizes.
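
For illustration only, a minimal boto3 sketch of launching an EFA-enabled r7iz.32xlarge might look like the following. The AMI, key pair, subnet, security group, and placement group identifiers are placeholders, not values from this page, and the AMI is assumed to already have the EFA driver installed.

```python
import boto3

# Minimal sketch: launch an r7iz.32xlarge with an Elastic Fabric Adapter (EFA) interface.
# All resource IDs below are placeholders.
ec2 = boto3.client("ec2", region_name="us-east-1")

response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",           # placeholder AMI (EFA driver assumed installed)
    InstanceType="r7iz.32xlarge",              # EFA is offered on the 32xlarge and metal-32xl sizes
    MinCount=1,
    MaxCount=1,
    KeyName="my-key-pair",                     # placeholder key pair
    Placement={"GroupName": "my-cluster-pg"},  # placeholder cluster placement group
    NetworkInterfaces=[
        {
            "DeviceIndex": 0,
            "InterfaceType": "efa",                    # request an EFA network interface
            "SubnetId": "subnet-0123456789abcdef0",    # placeholder subnet
            "Groups": ["sg-0123456789abcdef0"],        # placeholder security group
        }
    ],
)
print(response["Instances"][0]["InstanceId"])
```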

4th Gen Intel Xeon Scalable processors offer four new built-in accelerators. Advanced Matrix Extensions (AMX), available on all sizes, accelerates matrix multiplication operations for applications such as CPU-based machine learning. Data Streaming Accelerator (DSA), In-Memory Analytics Accelerator (IAA), and QuickAssist Technology (QAT), available on the R7iz bare metal sizes, enable efficient offload and acceleration of data operations and help optimize performance for databases, encryption and compression, and queue management workloads.
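
As a rough sketch (not an AWS-provided utility), you can confirm from a Linux guest that AMX is exposed to the instance by looking for the standard amx_tile, amx_bf16, and amx_int8 CPU flags:

```python
# Minimal sketch: check whether the AMX feature flags are visible in /proc/cpuinfo (Linux only).
def cpu_flags():
    with open("/proc/cpuinfo") as f:
        for line in f:
            if line.startswith("flags"):
                # The "flags" line lists every CPU feature exposed to the OS.
                return set(line.split(":", 1)[1].split())
    return set()

flags = cpu_flags()
for feature in ("amx_tile", "amx_bf16", "amx_int8"):
    print(f"{feature}: {'present' if feature in flags else 'absent'}")
```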

The AWS Nitro System can be assembled in many different ways, allowing AWS to flexibly design and rapidly deliver EC2 instance types with an ever-broadening selection of compute, storage, memory, and networking options. Nitro Cards offload and accelerate I/O functions, increasing overall system performance.

Product details

Amazon EC2 R7iz instances are powered by 4th Generation Intel Xeon Scalable processors and are an ideal fit for high CPU and memory-intensive workloads.

Instance Size   | vCPU | Memory (GiB) | Instance Storage (GB) | Network Bandwidth (Gbps) | EBS Bandwidth (Gbps)
r7iz.large      | 2    | 16           | EBS-Only              | Up to 12.5               | Up to 10
r7iz.xlarge     | 4    | 32           | EBS-Only              | Up to 12.5               | Up to 10
r7iz.2xlarge    | 8    | 64           | EBS-Only              | Up to 12.5               | Up to 10
r7iz.4xlarge    | 16   | 128          | EBS-Only              | Up to 12.5               | Up to 10
r7iz.8xlarge    | 32   | 256          | EBS-Only              | 12.5                     | 10
r7iz.12xlarge   | 48   | 384          | EBS-Only              | 25                       | 19
r7iz.16xlarge   | 64   | 512          | EBS-Only              | 25                       | 20
r7iz.32xlarge   | 128  | 1,024        | EBS-Only              | 50                       | 40
r7iz.metal-16xl | 64   | 512          | EBS-Only              | 25                       | 20
r7iz.metal-32xl | 128  | 1,024        | EBS-Only              | 50                       | 40
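
The same specifications can also be retrieved programmatically. As a minimal sketch (the region and output formatting are arbitrary choices, not part of the table above), the EC2 DescribeInstanceTypes API returns the vCPU, memory, and network figures for each R7iz size:

```python
import boto3

# Minimal sketch: list vCPU, memory, and network performance for all R7iz sizes via the EC2 API.
ec2 = boto3.client("ec2", region_name="us-east-1")

paginator = ec2.get_paginator("describe_instance_types")
pages = paginator.paginate(Filters=[{"Name": "instance-type", "Values": ["r7iz.*"]}])

for page in pages:
    for itype in page["InstanceTypes"]:
        print(
            itype["InstanceType"],
            itype["VCpuInfo"]["DefaultVCpus"],
            itype["MemoryInfo"]["SizeInMiB"] // 1024,   # MiB -> GiB
            itype["NetworkInfo"]["NetworkPerformance"],
            sep="\t",
        )
```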

Customer testimonials

Here are examples of how customers and partners have achieved their business agility, price performance, cost savings, and sustainability goals with Amazon EC2 R7iz instances.

  • Aiven

    Aiven provides an open source cloud data platform for organizations to build modern data infrastructure.

    We help our customers simplify their data infrastructure to drive cost efficiencies and increase software agility. Throughput and latency are critical factors that our customers evaluate when selecting cloud compute options for our data platform. We are excited to push the performance limits on the latest Amazon EC2 R7iz instances to achieve 170% higher throughput and 41% lower average latency than prior generation R6i instances.

    Heikki Nousiainen, CTO, Aiven
  • Astera Labs

    Astera Labs is a leader in purpose-built data and memory connectivity solutions that remove performance bottlenecks throughout the data center.

    We build our solutions 100% in the cloud for the cloud. This is why we are excited about the potential for using the new Amazon EC2 R7iz instance to provide us access to increased single threaded performance vs R6i instances. During our testing of the new R7iz instances, we were able to realize performance gains up to 25% compared to R6i instances. With access to increased performance, we’ll be able to accelerate our ability to deliver silicon, software, and system-level connectivity solutions that realize the vision of artificial intelligence and machine learning in the cloud.

    Jitendra Mohan, CEO, Astera Labs
  • Nasdaq

    Nasdaq is a global electronic exchange for buying and selling securities and other instruments, and a market infrastructure technology provider to 130 other exchanges, regulators, and post-trade organizations in over 50 countries.

    We leverage Amazon EC2 high frequency instances to provide reliable, ultra-low latency and high performance at scale to our clients. Amazon EC2 R7iz instances have a new smaller bare metal size with better NUMA affinity that provides excellent throughput for our workloads, simplifies system architecture, and improves determinism by reducing latency. This innovation is a critical component of AWS and Nasdaq’s partnership to build the next generation of cloud-enabled infrastructure for the world's capital markets.

    Nikolai Larbalestier, Senior Vice President, NASDAQ
  • Akamai

    Noname Security (Akamai) creates a powerful, complete, and easy-to-use API security platform that helps enterprises discover, analyze, remediate, and test all legacy and modern APIs.

    We conduct traffic analysis with AI and machine learning to automatically detect API threats, and it is important for us to deliver low latency and high bandwidth security to our customers. During benchmarking, Amazon EC2 R7iz instances offered near real-time security with 3x faster response times and higher throughput compared to C6i instances. We are also excited to leverage the new Advanced Matrix Extensions (AMX) to accelerate the performance of our machine learning workloads to reduce the risk of API security vulnerabilities and cyberattacks around the world.

    Shay Levi, CTO, Noname Security
  • SingleStoreDB

    SingleStoreDB is a cloud-native database built for speed and scale to power real-time applications.

    Leading companies across nearly every vertical around the world use SingleStoreDB to enhance customer experience and to improve operations and security. Optimizing the compute performance of the underlying infrastructure is necessary to support constantly growing workloads. When testing Amazon EC2 R7iz instances, our engineering teams saw a 19% improvement in database performance versus prior generation Ice Lake based instances. We look forward to leveraging Amazon EC2's latest Sapphire Rapids instances to deliver exceptional performance for transactions and analytics.

    Rob Weidner, Director of Cloud Partnerships, SingleStoreDB
  • TotalCAE

    TotalCAE's platform supports hundreds of engineering applications and makes it simple for customers to adopt High Performance Computing (HPC) applications in the cloud.

    Amazon EC2 R7iz instances combine 1 TB of the newest DDR5 memory and the latest 4th Generation Intel Xeon Scalable processors running at 3.9 GHz to offer next generation performance for applications such as Finite Element Analysis (FEA). We tested several flagship licensed FEA applications on R7iz instances and observed performance gains of up to 19% for the same license cost over previous generation R6id instances. Our clients invest heavily in their FEA application licenses, and we are eager to help them maximize their license investments and accelerate their time to market.

    Rod Mach, President, TotalCAE
  • Amazon Relational Database Service (RDS)

    Amazon Relational Database Service (RDS) is a collection of managed services that makes it simple to set up, operate, and scale databases in the cloud.

    Amazon EC2 R7iz instances are ideal for relational database workloads that typically have high per-core licensing costs. Our Airline and Banking customers running demanding workloads currently use z1d instances. R7iz's 20% higher compute performance, larger sizes (up to 32xlarge), and 2.4x memory throughput (using the latest DDR5) versus z1d will help these customers achieve superior performance as they continue to scale.

    Kambiz Aghili, GM, RDS, DBS Managed Commercial Engines, AWS
  • Hugging Face

    The Hugging Face Hub works as a central place where anyone can share, explore, discover, and experiment with open-source ML.

    At Hugging Face, we are proud of our work with Intel to accelerate the latest models on the latest generation of hardware, from Intel Xeon CPUs to Habana Gaudi AI accelerators, for the millions of AI builders using Hugging Face.

    The new acceleration capabilities of Intel Xeon 4th Gen, readily available on Amazon EC2, introduce bfloat16 and INT8 support for transformers training and inference, thanks to Advanced Matrix Extensions (AMX).

    By integrating Intel Extension for PyTorch (IPEX) into our Optimum-Intel library, we make it super easy for Hugging Face users to get the acceleration benefits with minimal code changes. Using the custom EC2 Gen 7 instances (such as Amazon EC2 R7iz and other instances), we reached an 8x speedup fine-tuning DistilBERT and were able to run inference 3x faster on the same transformers model. Likewise, we achieved a 6.5x speedup when generating images with a Stable Diffusion model.

    Ella Charlaix, ML Engineer, Hugging Face