Most Teams Are Paying Too Much for S3

When we audited cloud costs at a previous employer — a fintech running about 2 PB of S3 data — we found that 71% of objects hadn't been accessed in over 90 days. Every one of them was sitting in S3 Standard at $0.023/GB/month.

The fix was a lifecycle policy. It took two hours to write and test, and it saved roughly $22,000/month within 90 days.

AWS offers seven distinct storage classes, each with a different price point, retrieval cost, latency profile, and durability guarantee. Most teams default to S3 Standard because it's the default, not because it's the right choice. This guide breaks down each class with real numbers so you can make an informed decision.


The Seven S3 Storage Classes at a Glance

Here's the pricing as of March 2026 (US East — N. Virginia):

| Storage Class | Storage Cost | Retrieval Fee | Min Duration | Min Object Size | Durability |
|---|---|---|---|---|---|
| S3 Standard | $0.023/GB/mo | None | None | None | 99.999999999% |
| S3 Intelligent-Tiering | $0.023/GB (frequent) | None* | None | None | 99.999999999% |
| S3 Standard-IA | $0.0125/GB/mo | $0.01/GB | 30 days | 128KB | 99.999999999% |
| S3 One Zone-IA | $0.01/GB/mo | $0.01/GB | 30 days | 128KB | 99.999999999% (1 AZ; 99.5% availability) |
| S3 Glacier Instant | $0.004/GB/mo | $0.03/GB | 90 days | 128KB | 99.999999999% |
| S3 Glacier Flexible | $0.0036/GB/mo | $0.01/GB (standard) | 90 days | 40KB | 99.999999999% |
| S3 Glacier Deep Archive | $0.00099/GB/mo | $0.02/GB (standard) | 180 days | 40KB | 99.999999999% |

*Intelligent-Tiering charges a monitoring fee: $0.0025 per 1,000 objects/month.

The range is massive: Standard costs 23x more than Deep Archive. Use the wrong class and you're burning money. Use the wrong retrieval tier and a single restore job can cost more than months of storage.
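
To make the spread concrete, here's a quick sketch that turns the table into a monthly bill for a given volume and retrieval pattern. The price constants are pulled from the table above; treat them as illustrative, not as a live price list.

```python
# Rough monthly cost model for four of the tiers above.
# (storage $/GB/month, retrieval $/GB) -- verify against current AWS pricing.
PRICES = {
    "standard":     (0.023,   0.00),
    "standard_ia":  (0.0125,  0.01),
    "glacier_ir":   (0.004,   0.03),
    "deep_archive": (0.00099, 0.02),
}

def monthly_cost(storage_class: str, stored_gb: float, retrieved_gb: float) -> float:
    storage, retrieval = PRICES[storage_class]
    return stored_gb * storage + retrieved_gb * retrieval

# 1 TB stored, 10 GB read back per month:
for cls in PRICES:
    print(f"{cls:>13}: ${monthly_cost(cls, 1024, 10):.2f}/month")
```

Run it with your own numbers before picking a tier; the ranking flips fast as the retrieved fraction grows.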


Deep Dive: Each Storage Class

S3 Standard — The Default You Probably Overuse

Price: $0.023/GB/month (first 50 TB), $0.022/GB (next 450 TB), $0.021/GB (over 500 TB)

Standard is the right choice when:

  • Objects are accessed frequently (multiple times per month)
  • You can't predict access patterns
  • Latency matters (milliseconds to first byte)
  • You're writing and reading the same objects repeatedly

Standard is the wrong choice when objects are sitting untouched for weeks. Every byte you're not actively reading is money you're leaving on the table.

Data transfer costs: S3 Standard doesn't change data transfer pricing — you still pay $0.09/GB for egress out of AWS (tiering down to $0.05/GB at higher monthly volumes). See our guide on cutting AWS egress costs for that problem specifically.


S3 Intelligent-Tiering — The Right Default for Unpredictable Access

Price:

  • Frequent Access tier: $0.023/GB/month
  • Infrequent Access tier: $0.0125/GB/month (auto-moved after 30 days of no access)
  • Archive Instant Access: $0.004/GB/month (auto-moved after 90 days)
  • Archive Access: $0.0036/GB/month (optional, moves after 90+ days with 3–5 hour retrieval)
  • Deep Archive Access: $0.00099/GB/month (optional, moves after 180+ days with 12 hour retrieval)
  • Monitoring fee: $0.0025 per 1,000 objects/month

Intelligent-Tiering automatically moves objects between tiers based on access patterns. No retrieval fee within the frequent/infrequent/archive-instant tiers.

The monitoring fee is the gotcha. For a bucket with 10 million monitored objects, that's $25/month in monitoring fees alone, regardless of how much data they hold. And objects under 128KB aren't monitored or auto-tiered at all: they sit in the Frequent Access tier at full price, so for small-object workloads Intelligent-Tiering saves nothing over Standard, and manually tiering to Standard-IA may be cheaper.

The math on monitoring fee break-even:

For a 1GB file stored for 6 months with no access after month 1:

  • Standard: $0.023 × 6 = $0.138
  • Intelligent-Tiering: $0.023 + $0.0125 × 5 + monitoring = ~$0.0855 + monitoring
  • Monitoring per object: $0.0000025/month = $0.000015 over 6 months (negligible for large objects)

For a 10KB file stored for 6 months:

  • A 10KB object is ~0.00001 GB, so even at the infrequent tier its storage would run about $0.000000125/month, while the monitoring fee is $0.0000025/month — roughly 20x the storage cost
  • In practice AWS sidesteps this: objects under 128KB aren't monitored or auto-tiered, so they stay in the Frequent Access tier with no monitoring fee and no savings
  • And for objects that do tier, a single access moves them back to the frequent tier at $0.023/GB vs Standard-IA at $0.0125/GB — you lose the savings
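
To see where the crossover sits, here's a sketch comparing six-month per-object costs under the prices above and an idealized tiering schedule (one month frequent, five months infrequent, no further access); a real bucket's access pattern will differ.

```python
# Six-month cost of one object: Intelligent-Tiering vs manual Standard-IA.
# Note: real IT never monitors or demotes objects under 128KB; the model
# applies the schedule anyway to show the fee-to-storage ratio.
MONITORING = 0.0025 / 1000               # $/object/month
IT_FREQUENT, IT_INFREQUENT = 0.023, 0.0125
STANDARD_IA = 0.0125
IA_MIN_GB = 128 / 1024 / 1024            # Standard-IA bills a 128KB minimum

def it_cost_6mo(size_gb: float) -> float:
    return size_gb * (IT_FREQUENT + IT_INFREQUENT * 5) + MONITORING * 6

def ia_cost_6mo(size_gb: float) -> float:
    return max(size_gb, IA_MIN_GB) * STANDARD_IA * 6

for size_gb, label in [(1.0, "1GB"), (10 / 1024 / 1024, "10KB")]:
    print(f"{label:>4}: IT=${it_cost_6mo(size_gb):.6f}  Standard-IA=${ia_cost_6mo(size_gb):.6f}")
```

For the 1GB object the monitoring fee is noise; for the 10KB object it swamps the storage bill, which is why small-object buckets don't belong in Intelligent-Tiering.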

When to use Intelligent-Tiering:

  • Objects > 128KB that you can't predict access for
  • Mixed workloads where some data is hot, some is cold
  • New applications where you don't know access patterns yet
  • You want zero-ops cost management

When NOT to use it:

  • Millions of tiny objects (monitoring fee dominates)
  • Data with known access patterns (manually tier instead)
  • Archives you know won't be touched (use Glacier directly)

S3 Standard-IA — Infrequent Access with Instant Retrieval

Price: $0.0125/GB/month storage + $0.01/GB retrieval
Minimum storage duration: 30 days
Minimum billable object size: 128KB

Standard-IA is Standard with a retrieval fee attached. AWS gives you ~46% off the storage price in exchange for paying per byte when you read.

The break-even point: Standard-IA saves $0.0105/GB/month on storage and charges $0.01/GB to read, so it stays cheaper than Standard until you retrieve more than roughly 105% of your stored bytes each month — the retrieval fee alone almost never flips the math; the minimum-duration and minimum-size rules are the real caveats. In practice, for data accessed once a month or less, Standard-IA is the right tier.
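
The break-even arithmetic generalizes to any pair of tiers. A small helper, using the list prices from the table at the top (a sketch, not an official formula — it ignores request charges and minimum durations):

```python
# Break-even retrieval fraction between a "hot" tier and a "cold" tier:
# the fraction of stored bytes read per month at which the two cost the same.
# Below this fraction, the colder (cheaper-storage) tier wins.
def breakeven_fraction(storage_hot: float, storage_cold: float,
                       retrieval_cold: float, retrieval_hot: float = 0.0) -> float:
    return (storage_hot - storage_cold) / (retrieval_cold - retrieval_hot)

# Standard ($0.023, free reads) vs Standard-IA ($0.0125 + $0.01/GB reads):
print(round(breakeven_fraction(0.023, 0.0125, 0.01), 3))        # ~1.05 (105%/month)

# Standard-IA vs Glacier Instant ($0.004 + $0.03/GB reads):
print(round(breakeven_fraction(0.0125, 0.004, 0.03, 0.01), 3))  # ~0.425 (42.5%/month)
```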

The minimum object size trap: Any object smaller than 128KB gets billed as if it's 128KB. If your bucket has 1 million 10KB objects, each one costs as if it's 128KB. This is a 12.8x cost multiplier on small objects. Always check object size distribution before migrating to IA tiers.

# Check object size distribution in a bucket
aws s3api list-objects-v2 \
  --bucket my-bucket \
  --query 'Contents[].Size' \
  --output json | jq 'sort | 
    {
      total: length,
      under_128kb: map(select(. < 131072)) | length,
      over_128kb: map(select(. >= 131072)) | length,
      avg_size_kb: (add / length / 1024 | round)
    }'

When to use Standard-IA:

  • Disaster recovery copies and backups accessed rarely
  • Compliance archives needing occasional access
  • Log archives beyond your hot retention window
  • Database dumps, snapshots, build artifacts

S3 One Zone-IA — Cheaper IA for Non-Critical Data

Price: $0.01/GB/month storage + $0.01/GB retrieval
Availability: 99.5% (single AZ — durability is still eleven 9s, but only within that one zone)

One Zone-IA stores data in a single Availability Zone instead of across three. AWS gives you another 20% discount off Standard-IA, but if that AZ goes down, your data is temporarily unavailable — and in rare cases, destroyed.

This is the right tier for data you can recreate or that already exists elsewhere:

  • Thumbnails generated from original images (originals in Standard)
  • Transcoded video files (source stored separately)
  • Derived datasets that can be recomputed
  • Secondary backup copies (not the only backup)

Never use One Zone-IA for:

  • Primary data with no other copy
  • Compliance data with availability requirements
  • Anything where data loss is unacceptable

S3 Glacier Instant Retrieval — Cold Storage, Instant Access

Price: $0.004/GB/month storage + $0.03/GB retrieval
Minimum storage duration: 90 days
Retrieval latency: milliseconds

Glacier Instant is the most underused tier in this list. It gives you 83% cost reduction vs Standard for data you access maybe once a quarter. The retrieval is instant — same latency as Standard — but you pay per retrieval.

The economics: Glacier Instant saves $0.0085/GB/month on storage over Standard-IA and charges an extra $0.02/GB on retrieval, so it wins on total cost as long as you retrieve less than about 43% of your data per month.

Standard: $0.023/GB/mo
Standard-IA: $0.0125/GB/mo + $0.01/GB retrieval
Glacier Instant: $0.004/GB/mo + $0.03/GB retrieval

For 1TB stored, accessed 10GB/month:
- Standard: $23.55
- Standard-IA: $12.80 + $0.10 = $12.90
- Glacier Instant: $4.10 + $0.30 = $4.40

The 90-day minimum duration matters: if you delete an object before 90 days, you pay for the remaining time. For frequently-deleted data, this can eliminate the savings.

Best use cases:

  • Medical images (accessed for patient visits, rarely otherwise)
  • Legal documents, contracts (occasional access for audits)
  • Annual financial records
  • Seasonal data (retail: last year's product catalog)

S3 Glacier Flexible Retrieval — The Classic Archive Tier

Price: $0.0036/GB/month storage

Retrieval fees:

  • Expedited: $0.03/GB + $0.01 per request (1–5 minutes)
  • Standard: $0.01/GB + $0.025 per 1,000 requests (3–5 hours)
  • Bulk: $0.0025/GB (5–12 hours)

Minimum storage duration: 90 days

Glacier Flexible is the original Glacier — designed for true archival where you plan restores in advance. The bulk retrieval tier (buried in the pricing page) is the real value: $0.0025/GB at 5–12 hours retrieval time, which for large planned restores is close to free.

# Initiate a Glacier restore with bulk retrieval (cheapest)
aws s3api restore-object \
  --bucket my-archive-bucket \
  --key path/to/archived-file.gz \
  --restore-request '{"Days":7,"GlacierJobParameters":{"Tier":"Bulk"}}'

# Check restore status
aws s3api head-object \
  --bucket my-archive-bucket \
  --key path/to/archived-file.gz \
  --query 'Restore'

The restore creates a temporary Standard-class copy readable at normal latency. After the requested Days window (7 in the example above), that copy is deleted and the Glacier copy remains.

When to use Glacier Flexible:

  • Long-term compliance archives (7-year retention policies)
  • Video masters you might someday need to reprocess
  • Scientific datasets for occasional analysis
  • Anything where you can plan restores 3–5 hours ahead

S3 Glacier Deep Archive — Cheapest Cloud Storage, Period

Price: $0.00099/GB/month (~$1/TB/month)

Retrieval:

  • Standard: $0.02/GB (12 hours)
  • Bulk: $0.0025/GB (48 hours)

Minimum storage duration: 180 days

At $0.99/TB/month, Deep Archive is cheaper than most on-prem tape libraries when you factor in hardware, power, and admin costs. This is the tier for data you're legally required to keep but will almost never access.

The math on a 10-year compliance archive:

  • 100 TB at Standard: $2,300/month = $276,000 over 10 years
  • 100 TB at Deep Archive: $99/month = $11,880 over 10 years

The 180-day minimum duration penalty applies. If you're storing quarterly regulatory filings for 7 years, that's not a concern. If you're experimenting with archival of data that you later decide isn't worth keeping, it will bite you.


Building Lifecycle Policies

Don't manage storage classes manually — use lifecycle policies to automate transitions based on object age and access patterns.

{
  "Rules": [
    {
      "ID": "intelligent-tiering-then-archive",
      "Status": "Enabled",
      "Filter": {
        "Prefix": "logs/"
      },
      "Transitions": [
        {
          "Days": 0,
          "StorageClass": "INTELLIGENT_TIERING"
        },
        {
          "Days": 90,
          "StorageClass": "GLACIER"
        },
        {
          "Days": 365,
          "StorageClass": "DEEP_ARCHIVE"
        }
      ],
      "Expiration": {
        "Days": 2555
      }
    }
  ]
}

# Apply lifecycle policy to a bucket
aws s3api put-bucket-lifecycle-configuration \
  --bucket my-log-bucket \
  --lifecycle-configuration file://lifecycle.json

# Verify the policy was applied
aws s3api get-bucket-lifecycle-configuration \
  --bucket my-log-bucket

In Terraform:

resource "aws_s3_bucket_lifecycle_configuration" "logs" {
  bucket = aws_s3_bucket.logs.id

  rule {
    id     = "log-archive-policy"
    status = "Enabled"

    filter {
      prefix = "logs/"
    }

    transition {
      days          = 30
      storage_class = "STANDARD_IA"
    }

    transition {
      days          = 90
      storage_class = "GLACIER_IR"
    }

    transition {
      days          = 365
      storage_class = "GLACIER"
    }

    transition {
      days          = 730
      storage_class = "DEEP_ARCHIVE"
    }

    expiration {
      days = 2555  # 7 years
    }

    noncurrent_version_transition {
      noncurrent_days = 30
      storage_class   = "STANDARD_IA"
    }

    noncurrent_version_expiration {
      noncurrent_days = 90
    }
  }
}

The Decision Framework

Here's how I approach storage class selection for any new bucket or data category:

Step 1: What's the access pattern?

Accessed daily or multiple times/week → S3 Standard
Accessed a few times/month, unpredictable → S3 Intelligent-Tiering
Accessed once a month or less → S3 Standard-IA
Accessed once a quarter → S3 Glacier Instant Retrieval
Accessed once a year, planned restores OK → S3 Glacier Flexible
Accessed <once/year, compliance retention → S3 Glacier Deep Archive
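
One way to encode that table is a small lookup function. The thresholds are mine, matching the rules of thumb in this guide rather than any AWS recommendation — tune them for your workload:

```python
def pick_storage_class(accesses_per_month: float,
                       predictable: bool = True,
                       planned_restores_ok: bool = False) -> str:
    """Map an access pattern to a starting storage class, following the
    decision table above. Thresholds are rules of thumb, not AWS guidance."""
    if accesses_per_month >= 4:          # daily / multiple times a week
        return "STANDARD"
    if not predictable:
        return "INTELLIGENT_TIERING"
    if accesses_per_month >= 1:          # roughly monthly
        return "STANDARD_IA"
    if accesses_per_month >= 1 / 3:      # roughly quarterly
        return "GLACIER_IR"
    if accesses_per_month >= 1 / 12 and planned_restores_ok:
        return "GLACIER"                 # Glacier Flexible Retrieval
    return "DEEP_ARCHIVE"

print(pick_storage_class(30))                               # STANDARD
print(pick_storage_class(0.5, predictable=False))           # INTELLIGENT_TIERING
print(pick_storage_class(1 / 12, planned_restores_ok=True)) # GLACIER
```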

Step 2: Can you recreate the data?

If yes → consider One Zone-IA instead of Standard-IA (20% cheaper)
If no → never use One Zone-IA

Step 3: What are your object sizes?

# Get p50, p90, p99 object sizes for a bucket
aws s3api list-objects-v2 \
  --bucket my-bucket \
  --output json \
  --query 'sort_by(Contents, &Size)[*].Size' | \
  python3 -c "
import sys, json
sizes = json.load(sys.stdin)
n = len(sizes)
print(f'Count: {n:,}')
print(f'P50: {sizes[n//2] / 1024:.1f} KB')
print(f'P90: {sizes[int(n*0.9)] / 1024:.1f} KB')
print(f'P99: {sizes[int(n*0.99)] / 1024:.1f} KB')
print(f'Avg: {sum(sizes)/n / 1024:.1f} KB')
"

If P50 < 128KB → avoid all IA tiers (minimum object billing kills savings)
If P50 < 128KB → Intelligent-Tiering won't save you either (objects under 128KB aren't auto-tiered, and monitoring fees dominate for the ones that are)

Step 4: What's your deletion frequency?

Minimum duration charges exist on IA, Glacier, and Deep Archive. If objects are frequently deleted before the minimum, you're billed a prorated charge for the remaining days:

  • Standard-IA / One Zone-IA: remainder of the 30-day minimum
  • Glacier Instant/Flexible: remainder of the 90-day minimum
  • Deep Archive: remainder of the 180-day minimum

For short-lived objects, Standard is often cheaper even for cold data.
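
Here's a sketch of the effective cost when objects die young, using the prices and minimums from the sections above (storage only — retrieval and request fees would widen the gap further):

```python
# Effective storage cost per GB when an object is deleted early: you're
# billed for max(days_stored, minimum_days) at the tier's monthly rate.
TIERS = {
    # tier: ($/GB/month, minimum duration in days)
    "standard":     (0.023,    0),
    "standard_ia":  (0.0125,  30),
    "glacier_ir":   (0.004,   90),
    "deep_archive": (0.00099, 180),
}

def cost_per_gb(tier: str, days_stored: int) -> float:
    monthly, minimum = TIERS[tier]
    return monthly / 30 * max(days_stored, minimum)

# Deleted after 10 days, every cold tier still bills its full minimum:
for tier in TIERS:
    print(f"{tier:>13}: ${cost_per_gb(tier, 10):.5f}/GB")
```

At 10 days, Standard ($0.00767/GB) undercuts both Standard-IA ($0.01250) and Glacier Instant ($0.01200); only Deep Archive's rock-bottom rate survives its 180-day minimum, and that ignores the retrieval fee you'd pay to ever read the data back.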


Real-World Savings Example

A media company with 800TB of video assets:

  • 50TB recently uploaded, actively encoded: Standard ($1,150/mo)
  • 150TB delivered to CDN in last 90 days: Standard-IA ($1,875/mo vs $3,450/mo in Standard)
  • 300TB older content, accessed for rare re-deliveries: Glacier Instant ($1,200/mo vs $6,900/mo)
  • 300TB master files, kept for legal compliance: Glacier Deep Archive ($297/mo vs $6,900/mo)

Total optimized: $4,522/month
Previous (all Standard): $18,400/month
Savings: $13,878/month (75%)


Quick Start: Audit Your Current S3 Spend

# Get the latest size of one bucket + storage class from CloudWatch metrics
# (Datapoints aren't guaranteed to be ordered, so sort by timestamp)
aws cloudwatch get-metric-statistics \
  --namespace AWS/S3 \
  --metric-name BucketSizeBytes \
  --dimensions Name=BucketName,Value=my-bucket Name=StorageType,Value=StandardStorage \
  --start-time 2026-03-13T00:00:00Z \
  --end-time 2026-03-20T00:00:00Z \
  --period 86400 \
  --statistics Average \
  --query 'sort_by(Datapoints, &Timestamp)[-1].Average'

# Or use S3 Storage Lens dashboard for all buckets at once
aws s3control get-storage-lens-configuration \
  --config-id default-account-dashboard \
  --account-id $(aws sts get-caller-identity --query Account --output text)

Use the S3 vs R2 vs Backblaze Storage Calculator to model your specific workload across providers — it accounts for storage price, retrieval fees, and egress costs in a single comparison.


The Alternatives Worth Considering

Once you've optimized S3 storage classes, the next question is whether S3 is the right choice at all for certain workloads. Cloudflare R2 offers $0.015/GB/month storage with zero egress fees, which beats even S3 Glacier Instant Retrieval once you factor in egress at $0.09/GB.

For data that's actively read and served externally:

  • R2 at $0.015/GB + $0 egress can beat Standard-IA ($0.0125 + $0.01 retrieval + $0.09 egress) significantly for egress-heavy workloads
  • Backblaze B2 at $0.006/GB + $0.01/GB egress is competitive for large cold datasets

The storage optimizer handles this math automatically if you plug in your workload numbers.
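
If you just want a rough sanity check, the comparison boils down to one formula. The prices below are the list prices quoted above — verify them before relying on the result:

```python
# Monthly cost of storing and serving a dataset: storage + (egress + retrieval)
# per GB served. Prices from the comparison above; illustrative only.
def serve_cost(stored_gb: float, egressed_gb: float,
               storage: float, egress: float, retrieval: float = 0.0) -> float:
    return stored_gb * storage + egressed_gb * (egress + retrieval)

stored, egressed = 1024, 500  # 1 TB stored, 500 GB/month served externally
print(f"S3 Standard-IA: ${serve_cost(stored, egressed, 0.0125, 0.09, 0.01):.2f}")
print(f"Cloudflare R2:  ${serve_cost(stored, egressed, 0.015, 0.0):.2f}")
print(f"Backblaze B2:   ${serve_cost(stored, egressed, 0.006, 0.01):.2f}")
```

For this egress-heavy workload the S3 bill is dominated by the $0.09/GB egress line, which is exactly the term R2 zeroes out.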


Summary

S3's seven storage classes are a cost optimization tool AWS gives you for free — you just have to use them. The default is Standard, which is the most expensive. The right answer for most data is a tiered lifecycle policy that automatically moves objects as they age.

The biggest wins, in order:

  1. Lifecycle policies for aged logs, backups, and archives → automatic, no retrieval risk
  2. Glacier Deep Archive for compliance retention → 95%+ cost reduction
  3. Glacier Instant Retrieval for rarely-accessed operational data → 83% reduction, no latency penalty
  4. Standard-IA for DR backups and monthly reports → 46% reduction

Run the storage calculator against your current S3 spend, apply a lifecycle policy to your largest buckets, and you'll likely see 40–70% cost reduction within 90 days.