Amazon Redshift Unleashes Graviton-Powered RG Instances: Up to 2.2x Faster, 30% Cheaper Per vCPU
Amazon Web Services today launched a new instance family for Amazon Redshift—the RG instances—powered by its custom AWS Graviton processors. AWS says the instances run data warehouse workloads up to 2.2 times faster than the previous RA3 generation while cutting the price per vCPU by 30%.
The integrated data lake query engine lets organizations run SQL analytics across both Redshift data warehouses and Amazon S3 data lakes from a single engine, accelerating Apache Iceberg queries up to 2.4x and Apache Parquet queries up to 1.5x relative to RA3 instances.
“This blend of speed, cost efficiency, and an integrated data lake query engine makes Redshift RG instances well-suited to handle the high query volumes and low-latency requirements of today’s analytics and agentic AI workloads,” said Rahul Pathak, Vice President of Amazon Redshift at AWS, in an exclusive interview.
Background
Amazon Redshift has powered cloud data warehouses since 2013, evolving from dense compute to RA3 instances and serverless options. Over the past decade, organizations have increasingly used both structured warehouse tables and cost-effective data lakes to manage growing data volumes.

The rise of AI agents—programs that query data warehouses at scales far exceeding human usage—has driven operational costs higher. In March 2026, AWS had already improved Redshift's BI dashboard and ETL performance by up to 7x for new queries, targeting low-latency SQL needs.
Today's RG instances represent the next architectural leap. They use AWS Graviton processors, ARM-based chips designed for high efficiency, to deliver a step-change in price-performance for analytics workloads.

What This Means
RG instances directly address the cost and complexity of combined data warehouse and data lake environments. By running both workload types from a single query engine, customers can simplify operations and reduce total analytics costs.
The new instance family is especially valuable for organizations deploying autonomous AI agents that need near-real-time responses. Faster query execution at lower cost per vCPU means that high-volume agentic workloads become more economical to run.
Below is a comparison of recommended RG instances against current RA3 instances:
- ra3.xlplus → rg.xlarge: 4 vCPUs, 32 GB memory (small-cluster departmental analytics)
- ra3.4xlarge → rg.4xlarge: 16 vCPUs and 128 GB memory, a 1.33:1 increase in both vCPUs and memory over ra3.4xlarge (standard production workloads)
Getting started is straightforward. You can launch new clusters or migrate existing ones through the AWS Management Console, AWS CLI, or AWS API. The integrated data lake query engine is enabled by default, so no additional configuration is needed.
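As an illustrative sketch, launching a new cluster on the RG family (or resizing an existing RA3 cluster onto it) with the AWS CLI might look like the following. The `rg.4xlarge` node type string is an assumption based on the mapping above, and the cluster identifiers, password, and node counts are placeholders; verify the exact node-type name and Regional availability in the Redshift console before running.

```shell
# Launch a new two-node Redshift cluster on the RG instance family.
# The node type string rg.4xlarge is assumed from the RA3-to-RG mapping above.
aws redshift create-cluster \
  --cluster-identifier analytics-rg \
  --node-type rg.4xlarge \
  --number-of-nodes 2 \
  --db-name analytics \
  --master-username adminuser \
  --master-user-password 'Choose-A-Strong-Password1'

# Migrate an existing RA3 cluster by resizing it to the new node type.
aws redshift resize-cluster \
  --cluster-identifier existing-ra3-cluster \
  --node-type rg.4xlarge \
  --number-of-nodes 2
```

Both commands are asynchronous; you can poll `aws redshift describe-clusters` until the cluster status returns to `available` before pointing workloads at it.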
For estimated savings, AWS recommends using the AWS Pricing Calculator with your specific workload patterns.