CQRS Explained: Why Read and Write Models Should Divorce
CRUD works until your reads and writes have fundamentally different needs. CQRS separates them into independent models -- giving each side the freedom to optimize without compromise. Here's when that separation pays off and when it's overkill.
Your CRUD API Has a Secret Problem
Every CRUD application makes an assumption that nobody questions: reads and writes should use the same model. Same table structure. Same DTOs. Same service layer.
For a blog or a todo app, this is fine. But the moment your system handles different read and write volumes, different read and write shapes, or different read and write performance requirements, you've got a problem. And the problem only gets worse as you scale.
Your GET /orders endpoint needs to join five tables, aggregate totals, and compute stats. Your POST /orders endpoint needs to validate business rules, apply domain logic, and persist a single normalized row. You're forcing one model to serve two fundamentally different masters.
CQRS says: stop forcing them to share a model. Let them divorce.
What CQRS Actually Is
CQRS stands for Command Query Responsibility Segregation. The core idea is deceptively simple: use a different model to update data than the model you use to read data.
| Aspect | Traditional CRUD | CQRS |
|---|---|---|
| Data model | Single model for reads and writes | Separate read model and write model |
| Read path | Query normalized tables, JOIN as needed | Query denormalized, pre-computed views |
| Write path | Validate and INSERT/UPDATE same tables | Process commands, persist to write store |
| Consistency | Immediate (same database) | Eventually consistent (event propagation) |
| Scaling | Scale reads and writes together | Scale reads and writes independently |
| Complexity | Low -- one model, one database | Higher -- two models, sync mechanism needed |
This isn't about using two databases (though you can). At its simplest, CQRS is a code-level separation: one set of objects for commands, another set for queries. The power comes from what that separation enables.
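At its most minimal, that code-level separation is just two sets of DTOs. A sketch in Python (the class and field names are illustrative, not from any framework):

```python
from dataclasses import dataclass

# Write side: a command expresses intent and carries exactly
# the data needed to perform one specific action.
@dataclass(frozen=True)
class PlaceOrder:
    order_id: str
    user_id: int
    product_id: int
    qty: int

# Read side: a query result shaped for what the UI displays --
# denormalized, pre-joined, no domain logic attached.
@dataclass(frozen=True)
class OrderSummary:
    order_id: str
    product_name: str
    user_name: str
    total: float
    status: str
```

Notice the asymmetry: the command holds foreign keys the domain needs to act; the summary holds display-ready strings and a computed total.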
The Two Sides of CQRS
The Write Side (Commands)
Commands represent intent. They're not "update row 42" -- they're "PlaceOrder" or "ShipPackage" or "CancelSubscription." Each command carries the data needed to perform one specific action.
The command handler:
- Validates the command against business rules
- Applies domain logic (calculate totals, check inventory, verify permissions)
- Persists the result to the write-optimized store
- Publishes an event describing what happened
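The four steps above can be sketched end to end. This is a minimal illustration, assuming in-memory stand-ins for the write store and event bus; the names (`PlaceOrder`, `OrderPlaced`, `handle_place_order`) are hypothetical, not from a real library:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class PlaceOrder:
    order_id: str
    product_id: int
    qty: int

@dataclass(frozen=True)
class OrderPlaced:
    order_id: str
    product_id: int
    qty: int
    total: float

PRICES = {7: 19.99}                  # stand-in for a pricing/inventory lookup
write_store: dict[str, dict] = {}    # normalized, write-optimized store
event_log: list[OrderPlaced] = []    # stand-in for a message broker

def handle_place_order(cmd: PlaceOrder) -> OrderPlaced:
    # 1. Validate the command against business rules
    if cmd.qty <= 0:
        raise ValueError("quantity must be positive")
    if cmd.product_id not in PRICES:
        raise ValueError("unknown product")
    # 2. Apply domain logic
    total = PRICES[cmd.product_id] * cmd.qty
    # 3. Persist the result to the write-optimized store
    write_store[cmd.order_id] = {
        "product_id": cmd.product_id,
        "qty": cmd.qty,
        "total": total,
        "status": "pending",
    }
    # 4. Publish an event describing what happened
    event = OrderPlaced(cmd.order_id, cmd.product_id, cmd.qty, total)
    event_log.append(event)
    return event
```

The handler never thinks about how the UI will display this order; that concern belongs entirely to the read side.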
The write store is normalized. It's optimized for data integrity and consistency -- the same things you'd care about in any transactional system.
The Read Side (Queries)
Queries return data. They never modify state. And critically, they don't need to reconstruct data from the same normalized structure that writes use.
The read model is a projection -- a pre-computed, denormalized view of the data shaped exactly for what the UI needs. No joins. No aggregations at query time. No complex WHERE clauses.
When an event arrives (like "OrderPlaced"), a projection handler updates the read store. The read side is always catching up to the write side, but it's fast because the data is already in the shape consumers need.
Try It: CRUD vs CQRS Side-by-Side

Place orders and query them in both architectures. Watch how CRUD uses a single database for everything, while CQRS separates concerns through commands, events, and projections.
The Event Bridge
The glue between write and read sides is the event. When the write side persists a change, it publishes a domain event. The read side subscribes to these events and updates its projections.
This is where CQRS gets interesting. One write event can feed multiple read models. Your order summary view, your analytics dashboard, your search index -- each gets its own optimized projection from the same stream of events.
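The fan-out can be sketched as a list of projection handlers all subscribed to the same event. A minimal illustration, assuming in-memory read stores; the projection names are hypothetical:

```python
# Three independent read models, each fed by the same event stream.
order_summaries: dict[str, dict] = {}   # per-order view for the UI
daily_revenue: dict[str, float] = {}    # analytics projection
search_index: list[str] = []            # search projection

def project_order_summary(event: dict) -> None:
    order_summaries[event["order_id"]] = {
        "qty": event["qty"], "total": event["total"], "status": "pending",
    }

def project_revenue(event: dict) -> None:
    day = event["date"]
    daily_revenue[day] = daily_revenue.get(day, 0.0) + event["total"]

def project_search(event: dict) -> None:
    search_index.append(event["order_id"])

# The "event bus": every subscriber sees every event.
PROJECTIONS = [project_order_summary, project_revenue, project_search]

def publish(event: dict) -> None:
    for projection in PROJECTIONS:
        projection(event)

publish({"order_id": "o1", "qty": 3, "total": 59.97, "date": "2024-05-01"})
```

Adding a fourth read model later means adding one handler and replaying or backfilling; the write side never changes.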
The read model isn't updated instantly when a write happens. There's a propagation delay -- usually milliseconds to low seconds. During this window, a query might return stale data. This is the fundamental trade-off of CQRS: you give up immediate consistency for independent scalability and optimized read performance.
When CQRS Pays Off
CQRS adds real complexity. It's not a default architecture -- it's a targeted tool for specific problems. The first question to ask: are your read and write workloads fundamentally different in volume or shape?
Concrete scenarios where CQRS works
- E-commerce order systems: Write path handles complex order logic (inventory checks, payment, fulfillment). Read path serves product listings, order history, admin dashboards -- each with pre-computed data.
- Reporting-heavy applications: Write path captures events. Read side maintains materialized views for dashboards that would otherwise require expensive analytical queries.
- Collaborative platforms: Multiple users writing concurrently. Read projections can be tuned per user role or device (mobile gets a slimmer projection than desktop).
When to avoid CQRS
- Simple CRUD domains with balanced read/write loads
- Small teams that can't afford the operational overhead of two models
- Systems where immediate consistency is non-negotiable across all read paths
CQRS and Event Sourcing: Related but Separate
People often conflate CQRS with event sourcing. They complement each other, but they're independent patterns.
| Pattern | What it does | Required for CQRS? |
|---|---|---|
| CQRS | Separates read and write models | -- |
| Event Sourcing | Stores state changes as a sequence of events instead of current state | No -- CQRS works with traditional databases |
| CQRS + Event Sourcing | Events are both the write model and the source of read projections | Optional but powerful combination |
You can use CQRS with a regular PostgreSQL database. Write to normalized tables, project to denormalized read tables, sync with database triggers or a change data capture stream. No event store required.
Event sourcing enhances CQRS by making the event stream the single source of truth. But if your domain doesn't benefit from replaying history, you don't need it.
Implementation Patterns
Level 1: Same database, different models
The simplest CQRS. Your write side uses normalized tables. Your read side uses materialized views or denormalized tables in the same database. A database trigger or background job keeps them in sync.
```sql
-- Write model: normalized
INSERT INTO orders (id, user_id, product_id, qty, status)
VALUES (1, 42, 7, 3, 'pending');

-- Read model: denormalized view, updated by trigger
-- order_summaries has pre-joined product name, user name, computed total
SELECT * FROM order_summaries WHERE user_id = 42;
```

Trade-off: Simple to implement, but reads and writes still compete for the same database resources.
Level 2: Separate databases
Write store is a transactional database (PostgreSQL). Read store is something optimized for queries -- Elasticsearch for search, Redis for hot data, a columnar store for analytics.
Events flow from write to read via a message broker (Kafka, RabbitMQ).
Trade-off: Full independent scaling, but now you have distributed system complexity -- message ordering, idempotent consumers, eventual consistency.
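Idempotent consumption deserves a concrete sketch, since most brokers guarantee at-least-once delivery and will occasionally redeliver. A minimal illustration, assuming the event carries a unique ID (the store and function names are hypothetical):

```python
# The projection records which event IDs it has already applied,
# so a redelivered event is detected and skipped.
processed: set[str] = set()
order_counts: dict[int, int] = {}   # read model: order count per user

def consume(event: dict) -> bool:
    """Apply the event once; return False if it was a duplicate."""
    if event["event_id"] in processed:
        return False                 # already applied -- safe to ack and drop
    user = event["user_id"]
    order_counts[user] = order_counts.get(user, 0) + 1
    processed.add(event["event_id"])
    return True

consume({"event_id": "e1", "user_id": 42})
consume({"event_id": "e1", "user_id": 42})   # broker redelivery: ignored
```

In production the `processed` set would live in the read store itself and be updated in the same transaction as the projection, so a crash between the two can't double-count.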
Level 3: Event sourcing + CQRS
The write side is the event log. Instead of storing current state, you store every state change. Read projections are built by replaying events.
Trade-off: Maximum flexibility (rebuild any projection from history), but event schema evolution and replay performance become real concerns.
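That "rebuild any projection from history" property can be shown in a few lines. A sketch with a hypothetical in-memory event log and two event types:

```python
# The event log is the source of truth: current state is never stored,
# only the sequence of things that happened.
events = [
    {"type": "OrderPlaced",  "order_id": "o1", "total": 59.97},
    {"type": "OrderPlaced",  "order_id": "o2", "total": 12.50},
    {"type": "OrderShipped", "order_id": "o1"},
]

def rebuild_status_view(log: list[dict]) -> dict[str, str]:
    """Replay the full history to produce a fresh read model."""
    view: dict[str, str] = {}
    for event in log:
        if event["type"] == "OrderPlaced":
            view[event["order_id"]] = "pending"
        elif event["type"] == "OrderShipped":
            view[event["order_id"]] = "shipped"
    return view

view = rebuild_status_view(events)   # {'o1': 'shipped', 'o2': 'pending'}
```

A new projection (say, revenue by day) is just another replay over the same log, which is why event sourcing and CQRS pair so naturally.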
The Consistency Trade-off in Practice
Eventual consistency sounds scary until you realize you already live with it. Your bank shows pending transactions for days. Your email inbox syncs every few seconds. Amazon shows "in stock" even as the last item sells.
The question isn't "can we tolerate eventual consistency?" It's "for which specific reads can we tolerate it?"
Practical consistency strategy
Keep the write side strongly consistent (transactional guarantees on commands). Accept eventual consistency on the read side with clear SLAs. If a specific read absolutely must reflect the latest write, route that query through the write model -- CQRS doesn't prevent this, it just makes it explicit.
Most applications only need a handful of strongly consistent reads. Everything else -- dashboards, listings, search results, notifications -- can tolerate milliseconds of staleness.
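Routing the handful of must-be-fresh reads through the write model can be as simple as a flag on the query path. A minimal sketch with hypothetical in-memory stores standing in for the two sides:

```python
# The write store is always current; the read store is a projection
# that may lag by the propagation delay.
write_store = {"o1": {"status": "shipped"}}   # strongly consistent
read_store  = {"o1": {"status": "pending"}}   # projection, possibly stale

def get_order(order_id: str, require_fresh: bool = False) -> dict:
    # Explicit routing: callers opt in to the slower, consistent path.
    store = write_store if require_fresh else read_store
    return store[order_id]

get_order("o1")                      # fast path, may be stale
get_order("o1", require_fresh=True)  # routed through the write model
```

Making the choice explicit at the call site is the point: every strongly consistent read is visible in the code and can be counted, budgeted, and questioned.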
Key Takeaways
CQRS separates how you write data from how you read it. The write model is normalized and optimized for consistency. The read model is denormalized and optimized for query speed.
The bridge is events. Write-side changes publish domain events. Read-side projections consume those events and update their denormalized views.
It's not all-or-nothing. Start with code-level separation (different DTOs for commands and queries). Graduate to separate tables, then separate databases, only when the workload demands it.
The cost is complexity. Two models to maintain, eventual consistency to reason about, event infrastructure to operate. Don't reach for CQRS unless the read/write asymmetry justifies it.
If your SELECT and INSERT are both happy with the same table structure and the same database, leave them together. CQRS exists for the moment they aren't -- and when that moment comes, you'll know.