CQRS & Event Sourcing: Separating Reads from Writes

Most applications use the same data model for reading and writing. The same Order entity is used to create orders, update orders, and query orders. The same database table serves both the checkout page and the analytics dashboard.
This works fine for simple CRUD applications. But as systems grow, reads and writes develop fundamentally different requirements. Writes need strict validation and transactional consistency. Reads need fast queries optimized for specific views. Writes are infrequent but complex. Reads are frequent but simple. Scaling them the same way is wasteful.
CQRS (Command Query Responsibility Segregation) separates these concerns into distinct models. Event Sourcing takes it further by storing every state change as an immutable event, making the event log the source of truth instead of the current state.
In this post, we'll cover:
✅ The problem with a single read/write model
✅ CQRS explained: separate command and query models
✅ Why CQRS: scaling, optimization, and security benefits
✅ Event Sourcing fundamentals: events as source of truth
✅ The event store and replaying events
✅ Projections and read model rebuilding
✅ Snapshots for performance
✅ CQRS without Event Sourcing (and vice versa)
✅ Eventual consistency challenges
✅ Practical implementation with Spring Boot
✅ When CQRS adds unnecessary complexity
The Problem: One Model to Rule Them All
In a traditional CRUD application, the same model handles everything:
// Same entity for reads and writes
@Entity
public class Order {
@Id private Long id;
private String customerId;
private OrderStatus status;
private BigDecimal totalAmount;
@OneToMany(cascade = CascadeType.ALL)
private List<OrderItem> items;
@OneToMany(cascade = CascadeType.ALL)
private List<StatusChange> statusHistory;
@OneToMany(cascade = CascadeType.ALL)
private List<Payment> payments;
}

This creates several problems as the system grows:
1. Conflicting optimization needs
The checkout page needs fast writes with minimal validation overhead. The admin dashboard needs complex aggregations across millions of orders. Optimizing the database for one hurts the other.
2. Model bloat
The write model needs validation rules, business logic, and state transitions. The read model needs computed fields, joined data, and denormalized views. Cramming both into one entity creates a bloated God object.
3. Scaling mismatch
Read traffic is typically 10-100x write traffic. But with a single model, you scale both together — adding read replicas requires the same schema as the write database.
4. Security concerns
The write model has sensitive fields (payment details, internal statuses). The read model for public APIs should expose only safe fields. With one model, you rely on careful field filtering rather than structural separation.
CQRS: Command Query Responsibility Segregation
CQRS solves these problems by splitting the application into two sides: a command side (writes) and a query side (reads).
The Command Side (Writes)
The command side handles all state changes. It enforces business rules, validates input, and persists the result.
// Command — a request to change state
public record PlaceOrderCommand(
String customerId,
List<OrderItemRequest> items,
String shippingAddress
) {}
// Command handler — enforces business rules
@Service
public class OrderCommandHandler {
private final OrderRepository orderRepository;
private final InventoryClient inventoryClient;
private final EventPublisher eventPublisher;
public String handle(PlaceOrderCommand command) {
// 1. Validate business rules
if (command.items().isEmpty()) {
throw new ValidationException("Order must have at least one item");
}
// 2. Check inventory
for (OrderItemRequest item : command.items()) {
if (!inventoryClient.isAvailable(item.productId(), item.quantity())) {
throw new InsufficientInventoryException(item.productId());
}
}
// 3. Create and persist order (write model)
Order order = Order.create(command);
orderRepository.save(order);
// 4. Publish event for the read side to update
eventPublisher.publish(new OrderPlacedEvent(
order.getId(),
order.getCustomerId(),
order.getItems(),
order.getTotalAmount()
));
return order.getId();
}
}

Key characteristics of the command side:
- Rich domain model with business rules
- Validates input and enforces invariants
- Normalized data structure (3NF or domain-driven)
- Optimized for consistency, not query performance
- Returns minimal data (often just an ID or success/failure)
The Query Side (Reads)
The query side handles all data retrieval. It uses a model optimized for the specific queries the UI needs.
// Query — a request for information
public record GetOrderDetailsQuery(String orderId) {}
public record ListCustomerOrdersQuery(String customerId, int page, int size) {}
// Read model — optimized for display, denormalized
public record OrderDetailsView(
String orderId,
String customerName, // Denormalized from Customer table
String customerEmail,
List<OrderItemView> items,
BigDecimal totalAmount,
String status,
String statusLabel, // Human-readable status
LocalDateTime placedAt,
LocalDateTime updatedAt,
String shippingAddress,
String trackingNumber // Joined from Shipping table
) {}
// Query handler — simple and fast
@Service
public class OrderQueryHandler {
private final OrderReadRepository readRepository;
public OrderDetailsView handle(GetOrderDetailsQuery query) {
return readRepository.findOrderDetails(query.orderId())
.orElseThrow(() -> new NotFoundException("Order not found"));
}
public Page<OrderSummaryView> handle(ListCustomerOrdersQuery query) {
return readRepository.findByCustomerId(
query.customerId(),
PageRequest.of(query.page(), query.size())
);
}
}

Key characteristics of the query side:
- Denormalized, pre-computed views
- No business logic — just data retrieval
- Optimized for specific UI needs (dashboard, list, detail views)
- Can use different storage (Elasticsearch for search, Redis for caching)
- Returns rich, ready-to-display data
Synchronizing the Two Sides
The write and read models must stay in sync. There are three approaches:
1. Synchronous update (same transaction)
@Transactional
public void handle(PlaceOrderCommand command) {
Order order = Order.create(command);
orderRepository.save(order); // Write model
// Update read model in same transaction
orderReadRepository.save(OrderDetailsView.from(order));
}

Simple but couples the two sides and limits scaling.
2. Asynchronous update (events)
// Projection — updates read model from events
@Component
@KafkaListener(topics = "order-events", groupId = "order-projection")
public class OrderProjection {
private final CustomerClient customerClient;
private final OrderReadRepository orderReadRepository;
// Class-level @KafkaListener + @KafkaHandler dispatches each event to the
// matching method. Two method-level listeners in the same consumer group
// would instead split the topic's partitions between them.
@KafkaHandler
public void onOrderPlaced(OrderPlacedEvent event) {
// Build denormalized view
Customer customer = customerClient.getCustomer(event.getCustomerId());
OrderDetailsView view = new OrderDetailsView(
event.getOrderId(),
customer.getName(),
customer.getEmail(),
event.getItems().stream().map(OrderItemView::from).toList(),
event.getTotalAmount(),
"PLACED",
"Order Placed",
event.getTimestamp(),
event.getTimestamp(),
event.getShippingAddress(),
null // No tracking yet
);
orderReadRepository.save(view);
}
@KafkaHandler
public void onOrderShipped(OrderShippedEvent event) {
orderReadRepository.updateStatus(
event.getOrderId(),
"SHIPPED",
"Shipped",
event.getTrackingNumber()
);
}
}

Decoupled and scalable, but introduces eventual consistency.
3. Change Data Capture (CDC)
Use tools like Debezium to capture changes from the write database's transaction log and stream them to the read side — no application code needed.
Event Sourcing: Events as Source of Truth
Event Sourcing is a fundamentally different approach to data persistence. Instead of storing the current state of an entity, you store every event that ever happened to it.
Traditional State vs Event Sourcing
Traditional approach (store current state):
orders table:
| id | status | total | updated_at |
|-----|-----------|--------|---------------------|
| 123 | SHIPPED | 109.97 | 2026-03-02 14:30:00 |

You know the order is shipped and costs $109.97. But you don't know:
- When was it placed? By whom?
- What was the original total before the discount?
- Was it ever cancelled and re-placed?
- Who changed the shipping address?
Event Sourcing approach (store events):
order_events table:
| event_id | order_id | type | data | timestamp |
|----------|----------|-------------------|--------------------------------|---------------------|
| evt-1 | 123 | OrderPlaced | {customerId: "c1", items: [...], total: 129.97} | 2026-03-01 10:00:00 |
| evt-2 | 123 | DiscountApplied | {code: "SAVE20", discount: 20.00} | 2026-03-01 10:01:00 |
| evt-3 | 123 | PaymentProcessed | {paymentId: "p1", amount: 109.97} | 2026-03-01 10:05:00 |
| evt-4 | 123 | OrderConfirmed | {} | 2026-03-01 10:05:01 |
| evt-5 | 123 | AddressChanged | {old: "123 Main", new: "456 Oak"} | 2026-03-01 15:00:00 |
| evt-6 | 123 | OrderShipped | {carrier: "FedEx", tracking: "FX123"} | 2026-03-02 14:30:00 |

Now you have a complete audit trail of everything that happened to this order. You can reconstruct the state at any point in time by replaying events up to that moment.
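Point-in-time reconstruction falls out of this naturally: replay only the events recorded up to the instant you care about. A minimal sketch, using simplified stand-in types rather than the full aggregate shown later in this post:

```java
import java.time.Instant;
import java.util.List;

// Simplified stand-in for a row in the order_events table above
record StoredEvent(String type, Instant timestamp) {}

class OrderStateReplayer {
    // Returns the order's status as of `asOf` by folding events in order
    // and stopping at the cutoff instant.
    static String statusAt(List<StoredEvent> events, Instant asOf) {
        String status = "NONE";
        for (StoredEvent e : events) {
            if (e.timestamp().isAfter(asOf)) break; // ignore later history
            switch (e.type()) {
                case "OrderPlaced" -> status = "PLACED";
                case "OrderConfirmed" -> status = "CONFIRMED";
                case "OrderShipped" -> status = "SHIPPED";
                case "OrderCancelled" -> status = "CANCELLED";
                default -> {} // events that don't affect status
            }
        }
        return status;
    }
}
```

Asked "what was this order's status yesterday at noon?", a state-based table can only shrug; the event log answers directly.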
How Event Sourcing Works
The flow:
- A command arrives (e.g., PlaceOrder)
- Load the aggregate by replaying its past events from the event store
- The aggregate validates the command against the current state
- If valid, the aggregate produces new events
- Append the new events to the event store (append-only — never update or delete)
- Projections consume events to build read models
// Order aggregate — rebuilt from events
public class OrderAggregate {
private String orderId;
private String customerId;
private OrderStatus status;
private BigDecimal totalAmount;
private List<OrderItem> items;
private final List<DomainEvent> uncommittedEvents = new ArrayList<>();
// Rebuild state from past events
public static OrderAggregate fromEvents(List<DomainEvent> events) {
OrderAggregate aggregate = new OrderAggregate();
for (DomainEvent event : events) {
aggregate.apply(event);
}
return aggregate;
}
// Handle command — validate and produce new events
public void placeOrder(PlaceOrderCommand command) {
if (status != null) {
throw new IllegalStateException("Order already exists");
}
if (command.items().isEmpty()) {
throw new ValidationException("Order must have items");
}
// Don't mutate state directly — produce an event
raiseEvent(new OrderPlacedEvent(
command.orderId(),
command.customerId(),
command.items(),
calculateTotal(command.items())
));
}
public void cancelOrder(CancelOrderCommand command) {
if (status != OrderStatus.PLACED && status != OrderStatus.CONFIRMED) {
throw new IllegalStateException(
"Cannot cancel order in status: " + status
);
}
raiseEvent(new OrderCancelledEvent(orderId, command.reason()));
}
// Apply event — update internal state (no side effects!)
private void apply(DomainEvent event) {
switch (event) {
case OrderPlacedEvent e -> {
this.orderId = e.getOrderId();
this.customerId = e.getCustomerId();
this.items = e.getItems();
this.totalAmount = e.getTotalAmount();
this.status = OrderStatus.PLACED;
}
case OrderConfirmedEvent e -> {
this.status = OrderStatus.CONFIRMED;
}
case OrderCancelledEvent e -> {
this.status = OrderStatus.CANCELLED;
}
case OrderShippedEvent e -> {
this.status = OrderStatus.SHIPPED;
}
default -> {} // Ignore unknown events
}
}
private void raiseEvent(DomainEvent event) {
apply(event); // Update local state
uncommittedEvents.add(event); // Track for persistence
}
}

The Event Store
The event store is an append-only log of all events. Events are never updated or deleted — they're immutable facts.
// Event store interface
public interface EventStore {
// Append new events for an aggregate
void appendEvents(String aggregateId, List<DomainEvent> events,
long expectedVersion);
// Load all events for an aggregate
List<DomainEvent> getEvents(String aggregateId);
// Load events from a specific version
List<DomainEvent> getEvents(String aggregateId, long fromVersion);
}
// Usage in command handler
@Service
public class OrderCommandService {
private final EventStore eventStore;
public void placeOrder(PlaceOrderCommand command) {
// 1. Load aggregate from event history
List<DomainEvent> history = eventStore.getEvents(command.orderId());
OrderAggregate order = OrderAggregate.fromEvents(history);
// 2. Execute command (validates and produces events)
order.placeOrder(command);
// 3. Persist new events (append-only)
eventStore.appendEvents(
command.orderId(),
order.getUncommittedEvents(),
history.size() // Optimistic concurrency check
);
}
}

Event store implementations:
- EventStoreDB — purpose-built event store database
- Apache Kafka — with log compaction and infinite retention
- PostgreSQL — with an events table (good enough for many use cases)
- DynamoDB — with partition key = aggregateId, sort key = version
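Whichever store you pick, the append path must reject concurrent writers. Below is a deliberately simplified in-memory sketch of the EventStore interface above; the version check plays the same role as the UNIQUE (aggregate_id, version) constraint in the PostgreSQL schema that follows. Class and method names are illustrative:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

class ConcurrencyException extends RuntimeException {
    ConcurrencyException(String msg) { super(msg); }
}

class InMemoryEventStore {
    private final Map<String, List<Object>> streams = new ConcurrentHashMap<>();

    // Append fails if someone else appended since we loaded the aggregate,
    // mirroring a unique-constraint violation on (aggregate_id, version).
    synchronized void appendEvents(String aggregateId, List<Object> events, long expectedVersion) {
        List<Object> stream = streams.computeIfAbsent(aggregateId, id -> new ArrayList<>());
        if (stream.size() != expectedVersion) {
            throw new ConcurrencyException(
                "Expected version " + expectedVersion + " but stream is at " + stream.size());
        }
        stream.addAll(events);
    }

    List<Object> getEvents(String aggregateId) {
        return List.copyOf(streams.getOrDefault(aggregateId, List.of()));
    }
}
```

In production the check is usually delegated to the database: the insert violates the unique constraint, the exception surfaces as a concurrency conflict, and the command is retried against the fresh event history.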
-- Simple event store in PostgreSQL
CREATE TABLE events (
event_id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
aggregate_id VARCHAR(255) NOT NULL,
aggregate_type VARCHAR(100) NOT NULL,
event_type VARCHAR(100) NOT NULL,
event_data JSONB NOT NULL,
version BIGINT NOT NULL,
timestamp TIMESTAMPTZ DEFAULT NOW(),
UNIQUE (aggregate_id, version) -- Optimistic concurrency
);
CREATE INDEX idx_events_aggregate ON events (aggregate_id, version);

Projections: Building Read Models from Events
Events in the event store are optimized for writes — they're a sequence of facts about individual aggregates. But reads need denormalized, queryable views. Projections transform the event stream into read models.
Types of Projections
1. Inline projection — built synchronously during event processing
// Simple inline projection
@Component
public class OrderListProjection {
private final OrderListViewRepository viewRepository;
public void handle(OrderPlacedEvent event) {
viewRepository.save(new OrderListView(
event.getOrderId(),
event.getCustomerId(),
event.getTotalAmount(),
"PLACED",
event.getTimestamp()
));
}
public void handle(OrderShippedEvent event) {
viewRepository.updateStatus(event.getOrderId(), "SHIPPED");
}
public void handle(OrderCancelledEvent event) {
viewRepository.updateStatus(event.getOrderId(), "CANCELLED");
}
}

2. Async projection — built from event stream (Kafka, EventStoreDB subscription)
// Async projection — consumes from event stream
@Component
public class CustomerDashboardProjection {
private final CustomerDashboardRepository dashboardRepo;
@KafkaListener(topics = "order-events", groupId = "customer-dashboard")
public void handle(DomainEvent event) {
switch (event) {
case OrderPlacedEvent e -> {
dashboardRepo.incrementOrderCount(e.getCustomerId());
dashboardRepo.addToTotalSpent(
e.getCustomerId(), e.getTotalAmount()
);
dashboardRepo.setLastOrderDate(
e.getCustomerId(), e.getTimestamp()
);
}
case OrderCancelledEvent e -> {
dashboardRepo.decrementOrderCount(e.getCustomerId());
}
default -> {} // Ignore irrelevant events
}
}
}

Rebuilding Read Models
One of the most powerful benefits of Event Sourcing: you can rebuild any read model from scratch by replaying all events.
This means:
- Fix bugs in projections — fix the code, replay events, get correct data
- Add new projections — need a new dashboard? Create a new projection and replay
- Change read model schema — no complex data migrations, just rebuild from events
// Rebuild a projection from scratch
@Service
public class ProjectionRebuilder {
private final EventStore eventStore;
private final OrderListProjection projection;
public void rebuild() {
// 1. Clear existing read model
projection.clear();
// 2. Replay all events through the projection
eventStore.getAllEvents()
.forEach(event -> projection.handle(event));
log.info("Projection rebuilt from {} events",
eventStore.getEventCount());
}
}

Snapshots: Performance Optimization
As aggregates accumulate thousands of events, loading them by replaying all events becomes slow. Snapshots solve this by periodically saving the aggregate's current state.
// Snapshot-aware aggregate loading
@Service
public class OrderCommandService {
private final EventStore eventStore;
private final SnapshotStore snapshotStore;
public OrderAggregate loadAggregate(String orderId) {
// 1. Try to load latest snapshot
Optional<Snapshot> snapshot = snapshotStore.getLatest(orderId);
OrderAggregate aggregate;
long fromVersion;
if (snapshot.isPresent()) {
// Start from snapshot
aggregate = snapshot.get().deserialize(OrderAggregate.class);
fromVersion = snapshot.get().getVersion() + 1;
} else {
// Start from scratch
aggregate = new OrderAggregate();
fromVersion = 0;
}
// 2. Replay events since the snapshot
List<DomainEvent> newEvents = eventStore.getEvents(orderId, fromVersion);
for (DomainEvent event : newEvents) {
aggregate.apply(event);
}
return aggregate;
}
// Save snapshot every N events
public void saveEventsWithSnapshot(String orderId,
List<DomainEvent> events,
OrderAggregate aggregate) {
eventStore.appendEvents(orderId, events, aggregate.getVersion());
if (aggregate.getVersion() % 100 == 0) { // Snapshot every 100 events
snapshotStore.save(new Snapshot(
orderId,
aggregate.getVersion(),
serialize(aggregate)
));
}
}
}

When to use snapshots:
- Aggregates with hundreds or thousands of events
- High-traffic aggregates loaded frequently
- When replay time exceeds acceptable latency
When snapshots are unnecessary:
- Aggregates with fewer than 50-100 events
- Events loaded infrequently
- Event replay is already fast enough
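The SnapshotStore used above was left abstract. A minimal in-memory sketch (hypothetical names; a real system would persist serialized state to a table or key-value store) could look like:

```java
import java.util.Map;
import java.util.Optional;
import java.util.concurrent.ConcurrentHashMap;

// Serialized aggregate state as of a given event-stream version
record Snapshot(String aggregateId, long version, byte[] state) {}

class InMemorySnapshotStore {
    private final Map<String, Snapshot> latest = new ConcurrentHashMap<>();

    // Keep only the newest snapshot per aggregate — older ones are
    // redundant because events after the snapshot are replayed anyway.
    void save(Snapshot snapshot) {
        latest.merge(snapshot.aggregateId(), snapshot,
            (old, fresh) -> fresh.version() > old.version() ? fresh : old);
    }

    Optional<Snapshot> getLatest(String aggregateId) {
        return Optional.ofNullable(latest.get(aggregateId));
    }
}
```

Note that snapshots are a pure optimization: they can always be deleted and rebuilt from the event log, so they carry none of the event store's durability obligations.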
CQRS Without Event Sourcing
CQRS and Event Sourcing are independent patterns that happen to work well together. You can use one without the other.
CQRS with a Traditional Database
The write side uses a normalized relational database. The read side uses Elasticsearch for fast full-text search. Events or CDC synchronize them. No event sourcing involved.
This is actually the most common CQRS pattern in practice. Most teams benefit from separating read and write models without the complexity of event sourcing.
Event Sourcing Without CQRS
You can store events as the source of truth without separating read and write models. The same service handles both commands and queries — but persistence is event-based.
// Event Sourcing without CQRS — same service handles reads
@Service
public class OrderService {
// Write: event-sourced
public void placeOrder(PlaceOrderCommand command) {
OrderAggregate order = loadAggregate(command.orderId());
order.placeOrder(command);
eventStore.appendEvents(command.orderId(), order.getUncommittedEvents(), order.getVersion()); // expected version for concurrency check
}
// Read: replay events to build current state
public OrderDetails getOrder(String orderId) {
OrderAggregate order = loadAggregate(orderId);
return OrderDetails.from(order); // Map aggregate to view
}
}

This works for small-scale systems but defeats the purpose as reads grow — you're replaying events for every query.
Eventual Consistency Challenges
When the read model is updated asynchronously from events, there's a delay between writing and reading. This is eventual consistency — the read model will eventually reflect the write, but not immediately.
The Problem
The user places an order, gets a confirmation, then immediately views their orders — but the new order isn't there yet.
Strategies for Handling Eventual Consistency
1. Return the result directly from the command
// Command returns enough data for immediate display
public OrderConfirmation placeOrder(PlaceOrderCommand command) {
// ... process command ...
return new OrderConfirmation(
order.getId(),
order.getStatus(),
order.getTotalAmount(),
"Your order has been placed!"
);
// Redirect to confirmation page with this data
// Don't query the read model immediately
}

2. Optimistic UI
The frontend optimistically updates the UI before the server confirms the read model is ready:
// Frontend — optimistic update
async function placeOrder(orderData: OrderRequest) {
const result = await api.post('/orders', orderData);
// Immediately add to local state (don't wait for read model)
addToLocalOrders({
id: result.orderId,
status: 'PROCESSING',
...orderData,
});
// The read model will catch up eventually
}

3. Polling with timeout
// Poll until the read model is consistent
public OrderDetailsView getOrderWithConsistency(String orderId, Duration timeout)
throws InterruptedException {
Instant deadline = Instant.now().plus(timeout);
while (Instant.now().isBefore(deadline)) {
Optional<OrderDetailsView> result = readRepository.findById(orderId);
if (result.isPresent()) {
return result.get();
}
Thread.sleep(100); // Brief pause before retry
}
throw new EventualConsistencyTimeoutException(
"Read model not yet updated for order: " + orderId
);
}

4. Subscription-based notification
Use WebSockets or Server-Sent Events to notify the client when the read model is ready:
// Push notification when projection completes
@Component
public class OrderProjection {
private final SimpMessagingTemplate websocket;
public void handle(OrderPlacedEvent event) {
// Update read model...
orderReadRepository.save(view);
// Notify client via WebSocket
websocket.convertAndSendToUser(
event.getCustomerId(),
"/queue/order-updates",
new OrderReadyNotification(event.getOrderId())
);
}
}

When CQRS Adds Unnecessary Complexity
CQRS and Event Sourcing are powerful patterns — but they are not default choices. They add significant complexity that must be justified by real requirements.
Don't Use CQRS When
1. Simple CRUD application
If your app is a basic create-read-update-delete system with no complex queries or scaling needs, CQRS is over-engineering.
// A blog with posts, comments, and tags
// Does NOT need CQRS — a single model works perfectly fine

2. Read and write models are identical
If you're reading exactly what you wrote with no transformations, projections, or aggregations, there's nothing to separate.
3. Small team without event-driven infrastructure
CQRS with async projections requires message brokers, monitoring, and operational expertise. If your team doesn't have this, the infrastructure burden outweighs the benefits.
4. Strong consistency is non-negotiable
Banking, financial transactions, and inventory management sometimes require immediate read-after-write consistency. Eventual consistency in the read model may not be acceptable.
Don't Use Event Sourcing When
1. You don't need an audit trail
Event Sourcing's biggest benefit is the complete history. If you don't need it, you're paying for complexity without getting value.
2. Events would contain sensitive data
PII, payment details, and health records in events create compliance nightmares (GDPR right to deletion is impossible with immutable events without crypto-shredding).
3. The domain is simple
CRUD entities with few state transitions don't benefit from event modeling. An event-sourced UserProfile with events like NameChanged, EmailChanged is over-engineering.
4. Schema evolution is frequent
If your domain model changes constantly (early-stage startup), managing event schema evolution across hundreds of event types becomes a maintenance burden.
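If you do adopt event sourcing in a shifting domain, the standard mitigation is upcasting: translate old event versions into the current shape as they are loaded, so aggregates and projections only ever see the latest schema. A hedged sketch with hypothetical field names:

```java
import java.util.HashMap;
import java.util.Map;

// Upcaster sketch: a v1 AddressChanged payload stored years ago is
// rewritten into the current v2 shape at load time. The v1 format had a
// single "address" string; v2 splits street and city.
class AddressChangedUpcaster {
    static Map<String, String> upcast(Map<String, String> v1Payload) {
        Map<String, String> v2 = new HashMap<>();
        String[] parts = v1Payload.getOrDefault("address", "").split(",", 2);
        v2.put("street", parts[0].trim());
        v2.put("city", parts.length > 1 ? parts[1].trim() : "");
        v2.put("schemaVersion", "2");
        return v2;
    }
}
```

Event-sourcing frameworks typically provide upcaster chains for exactly this; the maintenance burden is that every schema change adds another upcaster you must keep forever, since old events are never rewritten.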
The Complexity Cost
Rule of thumb: Start with a simple architecture. Adopt CQRS when read/write scaling or optimization diverges. Add Event Sourcing when you need a complete audit trail or temporal queries.
Practical Reference: Complete Order System
Here's tooling that helps wire CQRS + Event Sourcing together in a real system:
Frameworks and Libraries
| Tool | Language | What It Provides |
|---|---|---|
| Axon Framework | Java/Kotlin | Full CQRS + ES framework with saga support |
| EventStoreDB | Any (gRPC) | Purpose-built event store database |
| Marten | .NET | Document DB + Event Store on PostgreSQL |
| Eventuous | .NET | Lightweight ES library |
| Sequent | Ruby | CQRS + ES framework |
| Apache Kafka | Any | Event streaming as event store |
Summary
CQRS separates an application into a command side (writes) and a query side (reads), allowing each to be optimized, scaled, and secured independently. Event Sourcing stores every state change as an immutable event, providing a complete audit trail and the ability to reconstruct state at any point in time.
CQRS core concepts:
- Command side — rich domain model, validates business rules, persists state changes
- Query side — denormalized read models optimized for specific views
- Projections — transform events into read models
- Reads and writes can use different databases and scaling strategies
Event Sourcing core concepts:
- Events as source of truth — store what happened, not current state
- Event store — append-only log of immutable events
- Aggregate reconstruction — replay events to rebuild current state
- Snapshots — periodic state saves to avoid replaying all events
Key combinations:
- CQRS without Event Sourcing — most common; separate read/write models with traditional persistence
- Event Sourcing without CQRS — possible but limited; events for writes, same model for reads
- CQRS + Event Sourcing — full power; events drive both write persistence and read model projections
When to use:
- Read and write workloads need different optimization
- You need a complete audit trail of all state changes
- Multiple read models needed for different consumers
- Complex domain with rich business rules
When NOT to use:
- Simple CRUD applications
- Small teams without event infrastructure
- Strong consistency is non-negotiable
- Domain model changes frequently
CQRS and Event Sourcing are optimization patterns, not default architectures. Start simple, add complexity only when the pain justifies it.
What's Next in the Software Architecture Series
This is post 7 of 12 in the Software Architecture Patterns series:
- ✅ ARCH-1: Software Architecture Patterns Roadmap
- ✅ ARCH-2: Monolithic Architecture
- ✅ ARCH-3: Layered (N-Tier) Architecture
- ✅ ARCH-4: MVC, MVP & MVVM Patterns
- ✅ ARCH-5: Microservices Architecture
- ✅ ARCH-6: Event-Driven Architecture
- ✅ ARCH-7: CQRS & Event Sourcing (this post)
- 🔜 ARCH-8: Hexagonal Architecture (Ports & Adapters)
- 🔜 ARCH-9: Clean Architecture
- 🔜 ARCH-10: Domain-Driven Design (DDD)
- 🔜 ARCH-11: Serverless & Function-as-a-Service
- 🔜 ARCH-12: Choosing the Right Architecture