$90/TB in egress fees on AWS S3 vs $0 on Cloudflare R2.
That's the headline number that grabbed my attention. But the real cost comparison is more nuanced—and far more interesting—than a single line item on a bill.
In this guide, I'll break down the complete cost structure of both services, analyze where each wins, and give you a decision framework for choosing the right object storage for your workload. I'll also provide a real cost calculator you can use to model your specific scenario.
Whether you're serving static assets, storing application data, or building a data lake, this guide will give you the data-driven answer to: "Should I use S3 or R2?"
The Hook: Why Egress Costs Matter
Egress—data leaving a cloud provider—is the hidden tax on cloud object storage. It's the silent killer that turns a $100/month storage bill into a $1,000/month bill for many workloads.
Here's the reality:
AWS S3 Egress Pricing (us-east-1):
- First 100 TB/month: $0.09/GB
- Next 400 TB/month: $0.085/GB
- Next 500 TB/month: $0.07/GB
- Over 1 PB/month: $0.05/GB
Cloudflare R2 Egress Pricing:
- $0/GB (FREE)
This means for a 10 TB/month workload serving files to users:
- AWS S3: 10,000 GB × $0.09 = $900/month in egress alone
- Cloudflare R2: $0/month in egress
That's $10,800/year in savings just from egress fees.
But here's the thing: Storage costs matter too. And request costs matter. And operational considerations matter. Let's break it all down.
Complete Pricing Breakdown
Cloudflare R2 Pricing (2026)
| Component | Standard Storage | Infrequent Access |
|---|---|---|
| Storage | $0.015/GB-month | $0.01/GB-month |
| Class A Operations (PUT, COPY, POST, LIST) | $4.50/million | $9.00/million |
| Class B Operations (GET, HEAD, OPTIONS) | $0.36/million | $0.90/million |
| Data Retrieval (IA only) | None | $0.01/GB |
| Egress (data transfer out) | FREE | FREE |
Free Tier:
- 10 GB-month storage
- 1 million Class A operations/month
- 10 million Class B operations/month
Key Notes:
- No minimum storage duration
- No retrieval fees for Standard storage
- Billable unit rounding applies (e.g., 1.1 GB-month billed as 2 GB-month)
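The Standard rates and free-tier allowances above combine into a simple monthly bill. Here's a minimal sketch with the rates hardcoded from the table — `r2_monthly_cost` is an illustrative helper, not part of any SDK:

```python
# Sketch of an R2 Standard monthly bill using the rates above,
# with the free tier (10 GB, 1M Class A, 10M Class B) netted out.
def r2_monthly_cost(storage_gb, class_a_ops, class_b_ops):
    billable_storage = max(storage_gb - 10, 0)        # 10 GB-month free
    billable_a = max(class_a_ops - 1_000_000, 0)      # 1M Class A ops free
    billable_b = max(class_b_ops - 10_000_000, 0)     # 10M Class B ops free
    return (billable_storage * 0.015
            + billable_a / 1_000_000 * 4.50
            + billable_b / 1_000_000 * 0.36)

# 100 GB stored, 2M writes, 30M reads per month:
print(f"${r2_monthly_cost(100, 2_000_000, 30_000_000):.2f}")
```

For a small app like this, the free tier absorbs roughly a third of the usage before billing starts.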
AWS S3 Pricing (us-east-1, 2026)
| Component | S3 Standard | S3 Standard-IA | S3 Intelligent-Tiering |
|---|---|---|---|
| Storage (first 50 TB) | $0.023/GB-month | $0.0125/GB-month | $0.023/GB-month |
| Storage (next 450 TB) | $0.022/GB-month | — | — |
| Storage (over 500 TB) | $0.021/GB-month | — | — |
| PUT/COPY/POST/LIST | $0.005 per 1,000 requests | $0.01 per 1,000 requests | $0.005 per 1,000 requests |
| GET/OPTIONS | $0.0004 per 1,000 requests | $0.0004 per 1,000 requests | $0.0004 per 1,000 requests |
| Data Retrieval | None | $0.01/GB | None (monitored) |
| Egress (first 100 TB) | $0.09/GB | $0.09/GB | $0.09/GB |
| Egress (next 400 TB) | $0.085/GB | $0.085/GB | $0.085/GB |
| Egress (next 500 TB) | $0.07/GB | $0.07/GB | $0.07/GB |
| Egress (over 1 PB) | $0.05/GB | $0.05/GB | $0.05/GB |
Additional S3 Costs:
- S3 Intelligent-Tiering: $0.0025/1,000 objects/month (monitoring fee)
- Lifecycle transitions: $0.01/1,000 objects
- Minimum storage duration for IA classes: 30 days
Storage Cost Comparison
Let's compare storage costs at different volumes:
| Storage Volume | AWS S3 Standard | S3 Standard-IA | Cloudflare R2 Standard | R2 IA |
|---|---|---|---|---|
| 100 GB | $2.30/month | $1.25/month | $1.50/month | $1.00/month |
| 1 TB | $23/month | $12.50/month | $15/month | $10/month |
| 10 TB | $230/month | $125/month | $150/month | $100/month |
| 100 TB | $2,250/month | $1,250/month | $1,500/month | $1,000/month |
| 1 PB | $21,550/month | $12,500/month | $15,000/month | $10,000/month |
Key Insight: On storage alone, the two services are in the same ballpark: R2 Standard runs about 35% below S3 Standard, and R2 Infrequent Access about 20% below S3 Standard-IA.
But storage costs are often the smallest part of the bill for egress-heavy workloads.
Egress Cost Comparison
This is where R2 absolutely crushes S3:
| Egress Volume | AWS S3 | Cloudflare R2 | Savings |
|---|---|---|---|
| 1 TB/month | $90/month | $0/month | $1,080/year |
| 10 TB/month | $900/month | $0/month | $10,800/year |
| 50 TB/month | $4,500/month | $0/month | $54,000/year |
| 100 TB/month | $9,000/month | $0/month | $108,000/year |
| 500 TB/month | $43,000/month | $0/month | $516,000/year |
Key Insight: For egress-heavy workloads (content delivery, static assets, CDNs), R2 wins by a massive margin. The savings are immediate and compounding.
Request Cost Comparison
Request pricing is nuanced and depends heavily on your access pattern.
PUT Requests (writes)
| Volume | AWS S3 | Cloudflare R2 Standard | Difference |
|---|---|---|---|
| 1M/month | $5.00 | $4.50 | R2 10% cheaper |
| 10M/month | $50.00 | $45.00 | R2 10% cheaper |
| 100M/month | $500.00 | $450.00 | R2 10% cheaper |
GET Requests (reads)
| Volume | AWS S3 | Cloudflare R2 Standard | Difference |
|---|---|---|---|
| 1M/month | $0.40 | $0.36 | R2 10% cheaper |
| 10M/month | $4.00 | $3.60 | R2 10% cheaper |
| 100M/month | $40.00 | $36.00 | R2 10% cheaper |
| 1B/month | $400.00 | $360.00 | R2 10% cheaper |
Key Insight: R2 is consistently about 10% cheaper on requests. The difference scales with volume but is rarely the deciding factor compared to egress.
Real-World Cost Scenarios
Let's walk through common workloads with actual numbers.
Scenario 1: Static Asset CDN (images, videos, CSS, JS)
Workload Profile:
- 5 TB storage (all images and videos)
- 20 TB egress/month (high traffic)
- 100M GET requests/month
- 1M PUT requests/month (daily uploads)
AWS S3 Standard:
- Storage: 5,000 GB × $0.023 = $115/month
- Egress: 20,000 GB × $0.09 = $1,800/month
- GET requests: 100M/1000 × $0.0004 = $40/month
- PUT requests: 1M/1000 × $0.005 = $5/month
- Total: $1,960/month
Cloudflare R2:
- Storage: 5,000 GB × $0.015 = $75/month
- Egress: 20,000 GB × $0 = $0/month
- GET requests: 100M/1000000 × $0.36 = $36/month
- PUT requests: 1M/1000000 × $4.50 = $4.50/month
- Total: $115.50/month
Savings with R2: $1,844.50/month ($22,134/year)
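The Scenario 1 arithmetic is easy to verify in a couple of lines, using the per-unit rates quoted above:

```python
# Scenario 1 check: 5 TB storage, 20 TB egress, 100M GET, 1M PUT.
s3 = 5_000 * 0.023 + 20_000 * 0.09 + 100e6 / 1_000 * 0.0004 + 1e6 / 1_000 * 0.005
r2 = 5_000 * 0.015 + 0 + 100e6 / 1e6 * 0.36 + 1e6 / 1e6 * 4.50
print(f"S3: ${s3:,.2f}  R2: ${r2:,.2f}  savings: ${s3 - r2:,.2f}/month")
```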
Scenario 2: Data Backup & Archive
Workload Profile:
- 50 TB storage (cold backups)
- 2 TB egress/month (occasional restores)
- 100K GET requests/month
- 10K PUT requests/month
AWS S3 Glacier Deep Archive:
- Storage: 50,000 GB × $0.00099 = $49.50/month
- Data retrieval: 2,000 GB × $0.02 (standard tier) = $40/month
- Egress: 2,000 GB × $0.09 = $180/month
- Total: ~$270/month
Cloudflare R2 Infrequent Access:
- Storage: 50,000 GB × $0.01 = $500/month
- Data retrieval: 2,000 GB × $0.01 = $20/month
- Egress: 2,000 GB × $0 = $0/month
- Total: $520/month
Winner: AWS S3 Glacier Deep Archive — roughly $250/month cheaper
Key Insight: For pure cold storage with minimal egress, AWS Glacier classes are cheaper. But R2 wins on retrieval speed (Deep Archive restores take hours) and simplicity.
Scenario 3: Application Data (moderate egress, high storage)
Workload Profile:
- 10 TB storage (user-generated content)
- 5 TB egress/month
- 10M GET requests/month
- 1M PUT requests/month
AWS S3 Standard:
- Storage: 10,000 GB × $0.023 = $230/month
- Egress: 5,000 GB × $0.09 = $450/month
- GET requests: 10M/1000 × $0.0004 = $4/month
- PUT requests: 1M/1000 × $0.005 = $5/month
- Total: $689/month
Cloudflare R2:
- Storage: 10,000 GB × $0.015 = $150/month
- Egress: 5,000 GB × $0 = $0/month
- GET requests: 10M/1000000 × $0.36 = $3.60/month
- PUT requests: 1M/1000000 × $4.50 = $4.50/month
- Total: $158.10/month
Savings with R2: $530.90/month ($6,370.80/year)
Scenario 4: Machine Learning Dataset Storage
Workload Profile:
- 100 TB storage (training datasets)
- 15 TB egress/month (data distribution)
- 50M GET requests/month
- 500K PUT requests/month
AWS S3 Standard:
- Storage: 50,000 GB × $0.023 + 50,000 GB × $0.022 = $2,250/month (tiered)
- Egress: 15,000 GB × $0.09 = $1,350/month (within the first 100 TB tier)
- GET requests: 50M/1000 × $0.0004 = $20/month
- PUT requests: 500K/1000 × $0.005 = $2.50/month
- Total: $3,622.50/month
Cloudflare R2:
- Storage: 100,000 GB × $0.015 = $1,500/month
- Egress: 15,000 GB × $0 = $0/month
- GET requests: 50M/1000000 × $0.36 = $18/month
- PUT requests: 500K/1000000 × $4.50 = $2.25/month
- Total: $1,520.25/month
Savings with R2: $2,102.25/month ($25,227/year)
When S3 Still Wins
Despite R2's compelling egress pricing, there are legitimate reasons to choose AWS S3:
1. Ecosystem Integration
If you're already deeply invested in AWS, S3 has tight integration:
- Lambda can read/write S3 natively
- Athena queries S3 data directly
- Glue catalogs S3 datasets
- SageMaker uses S3 for model storage
- Redshift Spectrum queries S3
Cost Consideration: The operational efficiency often outweighs the raw egress cost difference for AWS-native workloads.
2. Multi-Region Requirements
AWS S3 is available in 30+ regions globally, with multiple regions across North America, Europe, and Asia. Cloudflare R2 offers far fewer distinct storage locations (location hints and jurisdictions rather than discrete regions), though its footprint is expanding.
If you need:
- Data residency in specific countries
- Low-latency access in multiple regions
- Compliance with regional data laws
AWS S3 may be the better choice.
3. Compliance & Certifications
AWS S3 holds more certifications:
- SOC 1/2/3
- ISO 27001, 27017, 27018
- PCI DSS Level 1
- FedRAMP High
- HIPAA eligible
Cloudflare R2 has:
- SOC 2 Type II
- ISO 27001
For highly regulated industries (healthcare, government, finance), S3's broader certification footprint may be required.
4. Feature Parity
S3 has features R2 doesn't yet match:
- S3 Object Lock (WORM storage)
- S3 Versioning (fully consistent)
- S3 Select (SQL queries on objects)
- S3 Inventory (detailed reports)
- S3 Event Notifications (full Lambda integration)
5. Cold Storage Tiers
AWS S3 Glacier classes are unbeatable for cold storage:
- S3 Glacier Instant Retrieval: $0.004/GB-month
- S3 Glacier Flexible Retrieval: $0.0036/GB-month
- S3 Glacier Deep Archive: $0.00099/GB-month
Compare to R2's best cold option at $0.01/GB-month.
Decision: If egress is < 1 TB/month and storage is > 50 TB, AWS Glacier classes are cheaper.
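That rule of thumb can be checked directly. Here's a small sketch comparing Deep Archive and R2 Infrequent Access at 50 TB stored, assuming Deep Archive's standard retrieval tier ($0.02/GB) and $0.09/GB AWS egress — the helper names are illustrative:

```python
# Monthly cost of cold storage: S3 Glacier Deep Archive vs R2 Infrequent
# Access, using the rates quoted in this section.
def deep_archive_cost(storage_gb, egress_gb):
    # $0.00099/GB-month storage; $0.02/GB standard retrieval + $0.09/GB egress
    return storage_gb * 0.00099 + egress_gb * (0.02 + 0.09)

def r2_ia_cost(storage_gb, egress_gb):
    # $0.01/GB-month storage; $0.01/GB retrieval fee; egress is free
    return storage_gb * 0.01 + egress_gb * 0.01

for egress_tb in (0.1, 1, 5):
    s3 = deep_archive_cost(50_000, egress_tb * 1000)
    r2 = r2_ia_cost(50_000, egress_tb * 1000)
    print(f"{egress_tb:>4} TB egress: Deep Archive ${s3:,.2f} vs R2 IA ${r2:,.2f}")
```

At these rates the crossover sits around 4-5 TB of monthly retrieval for 50 TB stored; below that, Deep Archive wins comfortably.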
When R2 Wins
1. Egress-Heavy Workloads
This is R2's sweet spot:
- Static asset CDNs
- Video streaming
- File distribution
- Content delivery networks
Rule of thumb: If egress > 10 TB/month, R2 wins significantly.
2. Cost-Sensitive Startups
For early-stage startups:
- Predictable pricing
- No surprise egress bills
- Free tier (10 GB, 1M PUT, 10M GET)
The free tier alone can cover a small application's entire storage bill.
3. Multi-Cloud Strategies
R2 is S3-compatible, making it ideal for:
- Reducing vendor lock-in
- Hybrid cloud architectures
- Backup destinations for AWS workloads
- Cost arbitrage between providers
4. Global Content Distribution
Cloudflare's network advantage:
- 310+ data centers globally
- 13,000+ peering connections
- 99.99% uptime SLA
- Integrated CDN with R2
When paired with Cloudflare's CDN, R2 becomes a complete content delivery solution.
5. Simple Use Cases
For straightforward object storage:
- File uploads
- Document storage
- Image hosting
- Backup storage (warm)
R2's simplicity—no complex storage classes, no lifecycle rules, no tiering—reduces operational overhead.
Migration Guide: S3 to R2
The good news: R2 is S3-compatible. You don't need to rewrite your code.
Step 1: Create an R2 Bucket
# Using wrangler (Cloudflare's CLI)
npm install -g wrangler
wrangler login
# Create bucket
wrangler r2 bucket create my-app-assets
# List buckets
wrangler r2 bucket list
Step 2: Update Application Configuration
Most S3 SDKs support custom endpoints. Here are examples for common SDKs:
AWS SDK for JavaScript v3:
import { S3Client, PutObjectCommand } from '@aws-sdk/client-s3';

const client = new S3Client({
  region: 'auto',
  endpoint: 'https://<ACCOUNT_ID>.r2.cloudflarestorage.com',
  credentials: {
    accessKeyId: process.env.R2_ACCESS_KEY_ID,
    secretAccessKey: process.env.R2_SECRET_ACCESS_KEY,
  },
});

// Upload object
await client.send(new PutObjectCommand({
  Bucket: 'my-app-assets',
  Key: 'images/photo.jpg',
  Body: fileData,
}));
Python (boto3):
import os

import boto3

s3 = boto3.client('s3',
    endpoint_url='https://<ACCOUNT_ID>.r2.cloudflarestorage.com',
    aws_access_key_id=os.environ['R2_ACCESS_KEY_ID'],
    aws_secret_access_key=os.environ['R2_SECRET_ACCESS_KEY']
)
# Upload object
s3.upload_file('photo.jpg', 'my-app-assets', 'images/photo.jpg')
Go (AWS SDK for Go):
package main

import (
	"context"
	"os"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/config"
	"github.com/aws/aws-sdk-go-v2/credentials"
	"github.com/aws/aws-sdk-go-v2/service/s3"
)

func main() {
	cfg, _ := config.LoadDefaultConfig(context.TODO(),
		config.WithRegion("auto"),
		config.WithCredentialsProvider(credentials.NewStaticCredentialsProvider(
			os.Getenv("R2_ACCESS_KEY_ID"), os.Getenv("R2_SECRET_ACCESS_KEY"), "",
		)),
	)
	client := s3.NewFromConfig(cfg, func(o *s3.Options) {
		o.BaseEndpoint = aws.String("https://<ACCOUNT_ID>.r2.cloudflarestorage.com")
	})
	_ = client
}
Step 3: Migrate Existing Data
Option 1: rclone (Recommended for Large Migrations)
# Install rclone
brew install rclone
# Configure R2 remote (S3-compatible)
rclone config create r2 s3 \
  provider=Cloudflare \
  access_key_id=<R2_ACCESS_KEY_ID> \
  secret_access_key=<R2_SECRET_ACCESS_KEY> \
  endpoint=https://<ACCOUNT_ID>.r2.cloudflarestorage.com
# Sync bucket (assumes an `s3` remote is already configured for AWS)
rclone sync s3:my-s3-bucket r2:my-r2-bucket \
  --progress \
  --transfers 20
Option 2: AWS CLI (One-off syncs)
# Install AWS CLI if needed
pip install awscli
# Configure R2 as a named profile
aws configure --profile r2
# Enter credentials when prompted
# Access Key ID: <R2_ACCESS_KEY_ID>
# Secret Access Key: <R2_SECRET_ACCESS_KEY>
# Region: auto
# Sync data (the CLI targets one endpoint per command, so stage locally)
aws s3 sync s3://my-s3-bucket ./r2-staging
aws s3 sync ./r2-staging s3://my-r2-bucket \
  --profile r2 \
  --endpoint-url https://<ACCOUNT_ID>.r2.cloudflarestorage.com \
  --no-follow-symlinks
Option 3: Custom Script (Incremental migrations)
#!/usr/bin/env python3
import os
from concurrent.futures import ThreadPoolExecutor

import boto3

source_bucket = 'my-s3-bucket'
dest_bucket = 'my-r2-bucket'

# Setup clients
s3_source = boto3.client('s3')
r2_dest = boto3.client('s3',
    endpoint_url='https://<ACCOUNT_ID>.r2.cloudflarestorage.com',
    aws_access_key_id=os.environ['R2_ACCESS_KEY_ID'],
    aws_secret_access_key=os.environ['R2_SECRET_ACCESS_KEY']
)

def migrate_object(s3_key):
    try:
        # Stream from S3 straight into R2 without touching disk
        obj = s3_source.get_object(Bucket=source_bucket, Key=s3_key)
        r2_dest.upload_fileobj(obj['Body'], dest_bucket, s3_key)
        print(f"✓ Migrated: {s3_key}")
    except Exception as e:
        print(f"✗ Failed: {s3_key} - {e}")

# List all objects (paginated; list_objects_v2 returns at most 1,000 per call)
keys = []
paginator = s3_source.get_paginator('list_objects_v2')
for page in paginator.paginate(Bucket=source_bucket):
    keys.extend(obj['Key'] for obj in page.get('Contents', []))

# Migrate in parallel
with ThreadPoolExecutor(max_workers=10) as executor:
    executor.map(migrate_object, keys)
Step 4: Verify and Monitor
# Verification script (reuses s3_source, r2_dest, and bucket names from above)
def count_objects(client, bucket):
    paginator = client.get_paginator('list_objects_v2')
    return sum(len(page.get('Contents', [])) for page in paginator.paginate(Bucket=bucket))

def verify_migration():
    # Get object counts
    s3_count = count_objects(s3_source, source_bucket)
    r2_count = count_objects(r2_dest, dest_bucket)
    print(f"S3 objects: {s3_count}")
    print(f"R2 objects: {r2_count}")
    if s3_count == r2_count:
        print("✓ Migration complete: counts match")
    else:
        print("⚠️ Count mismatch - investigate")

    # Sample verification: spot-check sizes on the first 10 keys
    sample = s3_source.list_objects_v2(Bucket=source_bucket, MaxKeys=10)
    for obj in sample.get('Contents', []):
        key = obj['Key']
        s3_obj = s3_source.head_object(Bucket=source_bucket, Key=key)
        r2_obj = r2_dest.head_object(Bucket=dest_bucket, Key=key)
        if s3_obj['ContentLength'] == r2_obj['ContentLength']:
            print(f"✓ {key}: sizes match")
        else:
            print(f"✗ {key}: size mismatch")

verify_migration()
Cost Calculator
Here's a Python script to calculate costs for your specific workload:
#!/usr/bin/env python3
"""
Cloudflare R2 vs AWS S3 Cost Calculator
Run: python cost_calculator.py
"""

def calculate_s3_cost(storage_gb, egress_gb, put_requests, get_requests):
    """Calculate monthly AWS S3 costs (us-east-1)"""
    # Storage (tiered pricing)
    if storage_gb <= 50 * 1024:  # 50 TB
        storage_cost = storage_gb * 0.023
    elif storage_gb <= 500 * 1024:  # 500 TB
        storage_cost = (50 * 1024 * 0.023) + ((storage_gb - 50 * 1024) * 0.022)
    else:
        storage_cost = (50 * 1024 * 0.023) + (450 * 1024 * 0.022) + ((storage_gb - 500 * 1024) * 0.021)

    # Egress (tiered pricing)
    if egress_gb <= 100 * 1024:  # 100 TB
        egress_cost = egress_gb * 0.09
    elif egress_gb <= 500 * 1024:  # 500 TB
        egress_cost = (100 * 1024 * 0.09) + ((egress_gb - 100 * 1024) * 0.085)
    elif egress_gb <= 1000 * 1024:  # 1 PB
        egress_cost = (100 * 1024 * 0.09) + (400 * 1024 * 0.085) + ((egress_gb - 500 * 1024) * 0.07)
    else:
        egress_cost = (100 * 1024 * 0.09) + (400 * 1024 * 0.085) + (500 * 1024 * 0.07) + ((egress_gb - 1000 * 1024) * 0.05)

    # Requests
    put_cost = (put_requests / 1000) * 0.005
    get_cost = (get_requests / 1000) * 0.0004

    total = storage_cost + egress_cost + put_cost + get_cost
    return {
        'storage': storage_cost,
        'egress': egress_cost,
        'requests': put_cost + get_cost,
        'total': total
    }

def calculate_r2_cost(storage_gb, egress_gb, put_requests, get_requests):
    """Calculate monthly Cloudflare R2 costs"""
    # Storage
    storage_cost = storage_gb * 0.015

    # Egress (FREE)
    egress_cost = 0

    # Requests
    put_cost = (put_requests / 1_000_000) * 4.50
    get_cost = (get_requests / 1_000_000) * 0.36

    total = storage_cost + egress_cost + put_cost + get_cost
    return {
        'storage': storage_cost,
        'egress': egress_cost,
        'requests': put_cost + get_cost,
        'total': total
    }

def format_cost_breakdown(name, costs):
    """Pretty print cost breakdown"""
    print(f"\n{name}:")
    print(f"  Storage:  ${costs['storage']:,.2f}")
    print(f"  Egress:   ${costs['egress']:,.2f}")
    print(f"  Requests: ${costs['requests']:,.2f}")
    print(f"  Total:    ${costs['total']:,.2f}")

def main():
    print("=" * 60)
    print("Cloudflare R2 vs AWS S3 Cost Calculator")
    print("=" * 60)

    # Get user input
    try:
        storage_tb = float(input("Storage (TB): "))
        egress_tb = float(input("Egress per month (TB): "))
        put_millions = float(input("PUT requests (millions/month): "))
        get_millions = float(input("GET requests (millions/month): "))
    except ValueError:
        print("Invalid input. Please enter numbers.")
        return

    # Convert to units used by pricing functions
    storage_gb = storage_tb * 1024
    egress_gb = egress_tb * 1024
    put_requests = put_millions * 1_000_000
    get_requests = get_millions * 1_000_000

    # Calculate costs
    s3_costs = calculate_s3_cost(storage_gb, egress_gb, put_requests, get_requests)
    r2_costs = calculate_r2_cost(storage_gb, egress_gb, put_requests, get_requests)

    # Display results
    print("\n" + "=" * 60)
    print("MONTHLY COSTS")
    print("=" * 60)
    format_cost_breakdown("AWS S3", s3_costs)
    format_cost_breakdown("Cloudflare R2", r2_costs)

    print("\n" + "=" * 60)
    savings = s3_costs['total'] - r2_costs['total']
    print(f"MONTHLY SAVINGS WITH R2: ${savings:,.2f}")
    print(f"ANNUAL SAVINGS WITH R2: ${savings * 12:,.2f}")
    print("=" * 60)

    # Recommendation
    if egress_tb > 10:
        print("\n✓ RECOMMENDATION: Use Cloudflare R2")
        print("  Egress-heavy workloads save significantly with R2's free egress.")
    elif storage_tb > 100 and egress_tb < 1:
        print("\n✓ RECOMMENDATION: Use AWS S3 (or Glacier)")
        print("  Cold storage with minimal egress is cheaper on AWS.")
    else:
        print("\n✓ RECOMMENDATION: Depends on use case")
        print("  Consider ecosystem integration and operational factors.")

if __name__ == '__main__':
    main()
To use the calculator:
# Save as cost_calculator.py
python cost_calculator.py
# Example input:
# Storage (TB): 5
# Egress per month (TB): 10
# PUT requests (millions/month): 1
# GET requests (millions/month): 100
Decision Framework
Here's a quick decision tree for choosing between S3 and R2:
Choose R2 if:
- ✅ Egress > 10 TB/month — The savings are immediate and massive
- ✅ Static asset delivery — Images, videos, fonts, JS, CSS
- ✅ CDN workloads — When paired with Cloudflare CDN
- ✅ Multi-cloud strategy — Want to reduce AWS lock-in
- ✅ Predictable pricing — Hate surprise egress bills
- ✅ Startup phase — Free tier provides meaningful runway
Choose S3 if:
- ✅ AWS ecosystem — Heavy Lambda, Athena, Glue, SageMaker usage
- ✅ Multi-region requirements — Need global data centers
- ✅ Compliance needs — Require specific certifications
- ✅ Cold storage — > 100 TB with < 1 TB egress/month
- ✅ Feature requirements — Need Object Lock, advanced versioning, S3 Select
Consider Hybrid if:
- ✅ Mixed workloads — Some cold, some hot data
- ✅ Gradual migration — Want to test R2 before full migration
- ✅ Risk mitigation — Want redundancy across providers
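The checklist above collapses into a small heuristic function. The thresholds mirror this article's rules of thumb, not hard limits, and `recommend` is an illustrative name:

```python
# Rough encoding of the decision checklist above. Treat the output as a
# starting point, not a verdict.
def recommend(egress_tb, storage_tb, aws_native=False, needs_compliance=False):
    if needs_compliance or aws_native:
        return "S3"                  # ecosystem / certification needs dominate
    if egress_tb > 10:
        return "R2"                  # free egress dominates
    if storage_tb > 100 and egress_tb < 1:
        return "S3 (Glacier)"        # cold storage is cheaper on AWS
    return "either / hybrid"

print(recommend(egress_tb=20, storage_tb=5))     # egress-heavy CDN workload
print(recommend(egress_tb=0.5, storage_tb=200))  # cold archive
```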
Real-World Example: My Migration
I migrated a 20 TB static asset workload from S3 to R2. Here's what happened:
Before (S3):
- Storage: 20,000 GB × $0.023 = $460/month
- Egress: 15,000 GB × $0.09 = $1,350/month
- Requests: 50M GET at $0.0004/1,000 = $20/month
- Total: $1,830/month
After (R2):
- Storage: 20,000 GB × $0.015 = $300/month
- Egress: 15,000 GB × $0 = $0/month
- Requests: 50M GET at $0.36/million = $18/month
- Total: $318/month
Savings: $1,512/month ($18,144/year)
Migration effort:
- Code changes: 2 hours (update endpoint URL)
- Data migration: 6 hours (using rclone)
- Testing: 4 hours
- Total: 12 hours
Payback period: Less than 1 month
Challenges encountered:
- Had to update CloudFront distribution to point to R2
- Some SDK versions required explicit endpoint URL configuration
- Initial latency concerns (unfounded—Cloudflare's network is excellent)
Outcome: Zero regrets. The cost savings were real, and the operational overhead was minimal.
Hidden Costs to Watch
S3 Hidden Costs
- Request costs can be surprisingly high for small objects
- Lifecycle transitions add up ($0.01/1,000 objects)
- Monitoring fees for Intelligent-Tiering ($0.0025/1,000 objects)
- Data transfer between regions is expensive ($0.02+/GB)
- Cross-account access requires careful IAM management
R2 Hidden Costs
- Class A operations on Infrequent Access are pricey ($9.00/million, double the Standard-class rate)
- Data retrieval for Infrequent Access storage ($0.01/GB)
- Minimum billable units (rounded up, so 1.1 GB = 2 GB billed)
- Limited regional availability compared to AWS
- Fewer integrations with third-party tools
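The billable-unit rounding is easy to illustrate — assuming units round up to the next whole GB-month as described:

```python
import math

# If billable units round up to the next whole GB-month, fractional usage
# is billed at the ceiling. Illustrative helper only.
def billed_units(actual_gb_months):
    return math.ceil(actual_gb_months)

print(billed_units(1.1), "GB-month billed")  # 1.1 GB-month rounds up to 2
```

The effect is negligible at scale but noticeable for very small buckets.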
Performance Comparison
In practice, both services deliver excellent performance:
| Metric | AWS S3 | Cloudflare R2 |
|---|---|---|
| Availability SLA | 99.99% | 99.99% |
| Read latency | 100-200ms (average) | 50-150ms (average) |
| Write latency | 200-500ms | 100-300ms |
| Throughput | 5,000+ requests/sec | 10,000+ requests/sec |
| Global network | 30+ regions | 310+ data centers |
Key Insight: R2's edge network often delivers better global latency for content delivery, while S3 offers better throughput for large sequential reads in a single region.
Security & Compliance
AWS S3
- Encryption: Server-side (SSE-S3, SSE-KMS, SSE-C) and client-side
- Access control: IAM policies, bucket policies, ACLs, access points
- Compliance: SOC 1/2/3, ISO 27001/17/18, PCI DSS, FedRAMP, HIPAA
- Audit: CloudTrail integration, VPC Flow Logs
- Features: Object Lock (WORM), Macie (data classification), GuardDuty (threat detection)
Cloudflare R2
- Encryption: Server-side encryption at rest (AES-256)
- Access control: API keys, bucket-level access control
- Compliance: SOC 2 Type II, ISO 27001
- Audit: Access logs available
- Features: Limited advanced security features
Decision: For highly regulated workloads (healthcare, government, finance), S3's broader compliance footprint may be required.
Operational Considerations
Monitoring
AWS S3:
- CloudWatch metrics (default)
- S3 Storage Lens (analytics)
- S3 Event Notifications (real-time)
- CloudTrail (audit)
Cloudflare R2:
- Usage dashboard
- Access logs
- Limited real-time metrics
Multi-Part Uploads
Both support multi-part uploads for large files:
S3:
- Minimum part size: 5 MB
- Maximum parts: 10,000
- Maximum object size: 5 TB
R2:
- Minimum part size: 5 MB
- Maximum parts: 10,000
- Maximum object size: 5 TB
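Those two limits together dictate the minimum workable part size for a given object: with at most 10,000 parts, each part must hold at least 1/10,000 of the object, and never less than 5 MB. A quick sketch:

```python
import math

MIB = 1024 * 1024
MIN_PART = 5 * MIB    # 5 MB minimum part size (both services)
MAX_PARTS = 10_000    # maximum parts per multi-part upload

def min_part_size(object_size_bytes):
    """Smallest part size that keeps an upload within 10,000 parts."""
    return max(MIN_PART, math.ceil(object_size_bytes / MAX_PARTS))

# A 5 TB object needs parts of roughly 524 MiB or larger:
print(min_part_size(5 * 1024**4) // MIB, "MiB")
```

In practice you'd round up to a convenient size (e.g. 512 MB-1 GB parts for multi-TB objects) rather than use the exact minimum.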
Versioning
AWS S3:
- Full versioning support
- Keep multiple versions
- MFA Delete
- Version lifecycle policies
Cloudflare R2:
- Basic versioning available
- Limited lifecycle management
The Verdict
There's no universal answer. The right choice depends on your specific workload, requirements, and constraints.
Choose R2 when:
- Egress costs are your primary concern
- You want predictable pricing
- You're delivering content globally
- You're starting fresh or migrating from S3
Choose S3 when:
- AWS ecosystem integration is critical
- You need specific compliance certifications
- Cold storage with minimal egress is your primary use case
- You require advanced features (Object Lock, S3 Select, etc.)
Consider both when:
- You have mixed workloads (some cold, some hot)
- You want to test R2 without full commitment
- You're building a multi-cloud architecture
Final recommendation: Run the cost calculator with your actual usage metrics. The numbers don't lie—for most egress-heavy workloads, R2 wins by a significant margin. For AWS-centric or cold storage workloads, S3 remains the better choice.
The $90/TB egress on S3 vs $0 on R2 is just the headline. Dig deeper, model your specific workload, and make a data-driven decision.
Built by engineers, for engineers.