
Caching Strategies: Cache Aside, Write Through, Write Behind

Your cache isn't just a faster database. Different caching strategies make fundamentally different trade-offs between consistency, latency, and complexity. Here's how to pick the right one.

10 min read

Your Cache Is Making Promises You Don't Know About

Every time you add a cache to your system, you're making an implicit promise about data freshness. The problem? Most developers never think about which promise they're making.

You slap Redis in front of your database, reads get faster, and life is good -- until a user updates their profile and the old data keeps showing up. Or worse, your cache and database silently diverge, and you spend two days debugging why orders have the wrong prices.

The fix isn't "just invalidate the cache." The fix is choosing the right caching strategy for your access pattern from the start.


Three Strategies, Three Trade-offs

There are three fundamental patterns for how your cache and database interact. Each one makes a different trade-off between consistency, write latency, and complexity.

| Strategy | Write path | Read path | Consistency | Write speed |
|---|---|---|---|---|
| Cache-Aside | Write to DB, invalidate cache | Check cache, fall back to DB | Eventual (short window) | Fast (DB only) |
| Write-Through | Write to cache + DB together | Always read from cache | Strong | Slower (both writes) |
| Write-Behind | Write to cache, async flush to DB | Always read from cache | Eventual (risk of loss) | Fastest |

Let's break down how each one actually works.


Cache-Aside (Lazy Loading)

This is the most common pattern, and the one you're probably already using. The application manages the cache explicitly -- it's the middleman between cache and database.

Cache-Aside: Read Path (Cache Miss)
The application checks the cache first, falls back to the database on a miss, then stores the result back in the cache.

How it works

On read: Check the cache first. If it's there (hit), return it. If not (miss), read from the database, store the result in cache, then return it.

On write: Write directly to the database, then invalidate (delete) the cache entry. The next read will repopulate it.
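The two paths above fit in a few lines. The sketch below uses plain dicts in place of Redis and the database; `CacheAside` and its method names are illustrative, not a library API:

```python
class CacheAside:
    """Cache-aside: the application mediates between cache and DB.
    In-memory dicts stand in for Redis and the database."""

    def __init__(self, db):
        self.cache = {}   # stand-in for Redis
        self.db = db      # stand-in for the database (a dict here)

    def read(self, key):
        # 1. Check the cache first.
        if key in self.cache:
            return self.cache[key]        # cache hit
        # 2. Miss: fall back to the database, then populate the cache.
        value = self.db.get(key)
        if value is not None:
            self.cache[key] = value
        return value

    def write(self, key, value):
        # 1. Write directly to the database.
        self.db[key] = value
        # 2. Invalidate the cache entry; the next read repopulates it.
        self.cache.pop(key, None)
```

Note that the cache is never written on the write path, only deleted. That's what makes it safe to remove the cache entirely: reads just become slower.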

Why this pattern exists

Cache-Aside puts the application in control. You decide what gets cached, when it gets evicted, and how stale data is handled. The cache is purely a performance optimization -- your system works fine without it, just slower.

📌 The stale data window

Between writing to the database and invalidating the cache, there's a tiny window where another request could read stale data from cache. In practice, this window is milliseconds. For most applications, this is acceptable. For financial transactions, it's not.

When to use it

  • Read-heavy workloads where the same data is accessed repeatedly
  • Systems where occasional staleness (milliseconds) is acceptable
  • When you want the cache to be a pure optimization that can be removed without breaking anything
  • Applications with unpredictable access patterns -- only hot data ends up cached

The gotcha

The classic race: a read misses the cache and fetches from the database, but before it stores the result, a concurrent write updates the database and invalidates the cache. The first request then caches its now-stale value, and nothing ever evicts it. The fix: use TTLs as a safety net so stale entries eventually expire.
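Here's a minimal sketch of that TTL safety net: an in-memory cache whose entries expire after a fixed window. The class and its dict-based storage are illustrative, not a real Redis client (in Redis you'd get the same effect with `SET key value EX ttl`):

```python
import time

class TTLCache:
    """Entries expire after ttl seconds, so a stale entry left behind
    by a lost invalidation can only live for a bounded window."""

    def __init__(self, ttl):
        self.ttl = ttl
        self.store = {}   # key -> (value, expires_at)

    def set(self, key, value):
        self.store[key] = (value, time.monotonic() + self.ttl)

    def get(self, key):
        entry = self.store.get(key)
        if entry is None:
            return None
        value, expires_at = entry
        if time.monotonic() >= expires_at:
            del self.store[key]   # lazily evict expired entries on read
            return None
        return value
```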


Write-Through

Write-Through treats the cache as the primary write target. Every write goes to both cache and database synchronously, guaranteeing consistency.

Write-Through: Write Path
The application writes to the cache, the cache layer writes to the database, and the caller is only acknowledged once both are written.

How it works

On write: The application writes to the cache, and the cache layer synchronously writes to the database. The write only succeeds when both are updated.

On read: Always read from cache. Since every write goes through cache, it's always up-to-date.
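As a minimal sketch (again with dicts standing in for the cache layer and the database, and a hypothetical `WriteThrough` class, not a real library):

```python
class WriteThrough:
    """Write-through: every write hits cache AND database synchronously."""

    def __init__(self, db):
        self.cache = {}   # stand-in for the cache layer
        self.db = db      # stand-in for the database (a dict here)

    def write(self, key, value):
        # Both writes happen before this call returns; the write only
        # "succeeds" once cache and database hold the new value.
        self.cache[key] = value
        self.db[key] = value

    def read(self, key):
        # Reads go straight to the cache. Anything written through this
        # class is guaranteed to be there, and to be the latest version.
        return self.cache.get(key)
```

The cost is visible in `write`: two synchronous operations instead of one, which is exactly the write penalty described below.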

Why this pattern exists

Write-Through eliminates the stale data window entirely. If the cache has the data, it's the latest version. Period. This makes reads simple and fast, with no "check if stale" logic needed.

⚠️ The write penalty

Every write now takes longer because you're writing to two places synchronously. If your cache is Redis over the network and your database is Postgres, you're adding a full network round-trip to every write. For write-heavy workloads, this adds up fast.

When to use it

  • Strong consistency is non-negotiable (e.g., user sessions, inventory counts)
  • Your workload is read-heavy with infrequent writes
  • You can tolerate higher write latency for simpler read logic
  • Systems where stale reads cause real problems (pricing, permissions)

The gotcha

Write-Through caches everything that's written, even data that might never be read. This wastes memory. Combine with TTL-based eviction to prevent cache pollution from rarely-accessed data.


Write-Behind (Write-Back)

Write-Behind is the aggressive optimization. Writes go to cache immediately, and the cache asynchronously flushes to the database later. The application gets an instant acknowledgment.

Write-Behind: Write Path
The application writes to the cache and is acknowledged immediately; writes are queued in a buffer and flushed to the database asynchronously.

How it works

On write: Write to cache, immediately return success to the caller. The write is queued in a buffer. A background process flushes the buffer to the database at intervals or when the buffer fills up.

On read: Always read from cache (it has the latest writes, even if the database doesn't yet).
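The buffer-and-flush loop can be sketched like this. It's a simplified single-process illustration (in-memory dicts, a background thread as the flusher, a hypothetical `WriteBehind` class); real implementations also need crash recovery, which this deliberately omits:

```python
import threading
import time

class WriteBehind:
    """Write-behind: writes hit the cache, ack immediately, and a
    background flusher batches them to the database later."""

    def __init__(self, db, flush_interval=0.05):
        self.cache = {}
        self.buffer = {}   # pending writes; repeat writes to a key coalesce
        self.db = db       # stand-in for the database (a dict here)
        self.lock = threading.Lock()
        flusher = threading.Thread(
            target=self._flush_loop, args=(flush_interval,), daemon=True)
        flusher.start()

    def write(self, key, value):
        # Ack as soon as the cache is updated; the DB catches up later.
        with self.lock:
            self.cache[key] = value
            self.buffer[key] = value

    def read(self, key):
        # The cache has the latest writes, even if the database doesn't yet.
        return self.cache.get(key)

    def flush(self):
        # Swap the buffer out under the lock, then do one batched DB write.
        with self.lock:
            pending, self.buffer = self.buffer, {}
        self.db.update(pending)

    def _flush_loop(self, interval):
        while True:
            time.sleep(interval)
            self.flush()
```

Note how repeated writes to the same key coalesce in the buffer before they reach the database; that batching is where the throughput win comes from, and the buffer is also exactly what gets lost if the process dies before `flush` runs.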

Why this pattern exists

Write-Behind gives you the lowest possible write latency. The application never waits for the database. This is how CPU caches work, how SSDs batch writes, and how high-throughput systems handle write bursts.

🔴 Data loss risk

If your cache crashes before flushing to the database, unflushed writes are lost. This is the fundamental trade-off. You're trading durability for speed. Only use this when you can tolerate data loss or have a recovery mechanism.

When to use it

  • Write-heavy workloads where database write latency is the bottleneck
  • Scenarios where write batching improves throughput (e.g., analytics counters, metrics)
  • Systems that can tolerate short-term data loss (session activity, view counts)
  • When you need to absorb write spikes without overwhelming the database

The gotcha

Debugging is harder. When something goes wrong, the database might be several seconds (or minutes) behind the cache. Monitoring the write buffer depth becomes critical.



The Decision Framework

Choosing a caching strategy isn't about which one is "best." It's about which trade-off your system can afford.

Which caching strategy fits your system?

  • Strong consistency is non-negotiable (sessions, pricing, permissions)? Use Write-Through.
  • Write throughput is the bottleneck and short-term data loss is tolerable (counters, metrics)? Use Write-Behind.
  • Otherwise, especially for read-heavy workloads: start with Cache-Aside.


Performance Characteristics at a Glance

Write Latency by Strategy (Relative)

| Strategy | Write latency |
|---|---|
| Write-Behind (cache only) | ~2 ms |
| Cache-Aside (DB + invalidate) | ~15 ms |
| Write-Through (cache + DB sync) | ~25 ms |

Read Latency (Relative)

| Scenario | Read latency |
|---|---|
| Any strategy, cache hit | ~1 ms |
| Cache-Aside, cache miss | ~18 ms |

Real-World Combinations

In production, most systems combine strategies. Here's what that looks like:

Hybrid Caching Architecture: the API layer uses Write-Through for the session store (auth), Cache-Aside for the product cache (reads, filled on miss), and Write-Behind for analytics (events, flushed async) -- all backed by a PostgreSQL database.
  • Sessions use Write-Through because stale auth data means security bugs
  • Product catalog uses Cache-Aside because it's read-heavy and millisecond staleness is fine
  • Analytics events use Write-Behind because losing a few page views is acceptable, but fast writes aren't negotiable

Key Takeaways

✅ The mental model

Think of caching strategies as answering one question: who owns the write path?

  • Cache-Aside: Your application owns both paths. Cache is optional.
  • Write-Through: The cache layer owns the write path. Consistency guaranteed.
  • Write-Behind: The cache absorbs writes. Database catches up later. Speed over safety.

Most teams should start with Cache-Aside. It's the simplest, most flexible, and easiest to reason about. Move to Write-Through when consistency demands it, and Write-Behind when write throughput demands it. Don't optimize for problems you don't have yet.

