CQRS: Stop Using the Same Model for Reads and Writes 📖✍️
Real talk: The first time our product dashboard took 12 seconds to load, I did what every developer does - added indexes. Then I added caching. Then I added more indexes. The problem? Our PostgreSQL database was trying to be good at EVERYTHING - complex analytical queries, real-time dashboards, AND handling thousands of writes per second. 😱
My CTO: "Why is the database at 98% CPU all the time?"
Me staring at pgAdmin: "Uh... because we're asking it to do the impossible?"
What I learned that week: Trying to use the same database model for reads AND writes is like using the same tool for brain surgery and demolition. Sure, technically both need a sharp instrument, but maybe... just maybe... they need DIFFERENT optimizations! 🔪💣
Welcome to CQRS (Command Query Responsibility Segregation) - the pattern that says "Hey, maybe reading and writing data are fundamentally different operations!"
What's CQRS Anyway? 🤔
Think of CQRS like separating a restaurant kitchen from the dining room:
Without CQRS (Traditional CRUD):
Same Chef Does Everything:
├── Takes orders from customers (interrupts cooking!)
├── Cooks the food (interrupts taking orders!)
├── Describes menu items (interrupts cooking!)
├── Manages inventory (interrupts everything!)
└── Everything is slower because of constant context switching! 😰
With CQRS (Separated Responsibilities):
Kitchen (WRITES):
├── Cooks focus only on cooking
├── Optimized for speed and consistency
└── No interruptions! 🔥
Dining Room (READS):
├── Servers focus on customer experience
├── Optimized for presentation and speed
├── Can serve 100 customers while kitchen cooks for 10
└── No bottlenecks! ✨
Translation: CQRS = Separate models for Commands (writes) and Queries (reads) so each can be optimized independently!
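To make that concrete before the war story, here's a toy sketch of the split in plain JavaScript. In-memory structures stand in for the real write and read databases, and the projection is updated synchronously just to keep it short - the real pattern updates read models asynchronously, as you'll see below:

// Toy CQRS sketch - in-memory stores stand in for real databases
const writeStore = new Map(); // normalized "write model" (source of truth)
const readStore = [];         // denormalized "read model" (query-optimized)

// COMMAND: validates and changes state, returns only an acknowledgment
function createOrder(command) {
  if (command.total < 0) throw new Error('Order total cannot be negative');
  writeStore.set(command.orderId, command);
  // Projection update - synchronous here only to keep the sketch short
  readStore.push({ ...command, createdAt: Date.now() });
  return { orderId: command.orderId };
}

// QUERY: returns data, never changes state
function getOrdersByUser(userId) {
  return readStore.filter(order => order.userId === userId);
}

createOrder({ orderId: 'o-1', userId: 'u-42', total: 99.5 });
console.log(getOrdersByUser('u-42')); // [{ orderId: 'o-1', ... }]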
The Database Meltdown That Taught Me CQRS 💥
When I was architecting our e-commerce analytics dashboard, we had a "beautiful" monolithic PostgreSQL setup:
Year 1 (Everything's Fine):
// Simple CRUD - works great!
app.get('/api/orders', async (req, res) => {
const { rows: orders } = await db.query('SELECT * FROM orders WHERE user_id = $1', [req.user.id]);
res.json(orders);
// 50ms query time ✅
});
app.post('/api/orders', async (req, res) => {
const { rows: [order] } = await db.query(
'INSERT INTO orders (user_id, total) VALUES ($1, $2) RETURNING *',
[req.user.id, req.body.total]
);
res.json(order);
// 20ms insert time ✅
});
Year 2 (Business Wants Analytics):
// Now they want a dashboard...
app.get('/api/dashboard/revenue', async (req, res) => {
const revenue = await db.query(`
SELECT
DATE(created_at) as date,
SUM(total) as revenue,
COUNT(*) as order_count,
AVG(total) as avg_order_value,
COUNT(DISTINCT user_id) as unique_customers
FROM orders
LEFT JOIN order_items ON orders.id = order_items.order_id
LEFT JOIN products ON order_items.product_id = products.id
WHERE created_at > DATE_SUB(NOW(), INTERVAL 30 DAY)
GROUP BY DATE(created_at)
ORDER BY date DESC
`);
res.json(revenue);
// 8 seconds! š„
// Database CPU: 95%
// Writes getting slower: 20ms ā 500ms
// Everything is on fire! š„
});
What went wrong:
- Orders table: 50 million rows
- Dashboard query: Full table scan with 3 JOINs and GROUP BY
- More indexes helped reads, hurt writes
- Writes locked rows, blocked analytics queries
- Analytics queries consumed CPU, slowed writes
- Cache invalidation: nightmare (every write invalidates dashboard!)
The realization: Reads and writes have OPPOSITE optimization needs!
| Aspect | Writes (Commands) | Reads (Queries) |
|---|---|---|
| Focus | Consistency, Validation | Speed, Convenience |
| Model | Normalized (3NF) | Denormalized (flat) |
| Indexes | Minimal (faster writes) | Aggressive (faster reads) |
| Volume | ~5% of traffic | ~95% of traffic |
| Latency tolerance | Low (user waiting) | Very low (user staring) |
| Complexity | Simple inserts/updates | Complex joins/aggregations |
We were using the SAME MODEL for both! 😱
CQRS Pattern: The Separation 🎯
The basic concept:
            User Request
                 |
                 ▼
        ┌────────┴────────┐
        │                 │
     COMMAND            QUERY
     (Write)            (Read)
        │                 │
        ▼                 ▼
   Write Model       Read Model
  (PostgreSQL)   (MongoDB + Redis)
   Normalized      Denormalized
      ACID      Eventually consistent
Implementation - Write Side (Commands):
// commands/createOrder.js
class CreateOrderCommand {
constructor(orderData) {
this.orderId = uuid();
this.userId = orderData.userId;
this.items = orderData.items;
this.total = orderData.total;
this.timestamp = Date.now();
}
}
// Command handler (writes to normalized database)
class CreateOrderHandler {
async handle(command) {
// Validate business rules
if (command.total < 0) {
throw new ValidationError('Order total cannot be negative');
}
// Write to normalized database (ACID!)
await db.transaction(async (trx) => {
// Insert order (we already have the UUID from the command)
await trx('orders').insert({
id: command.orderId,
user_id: command.userId,
total: command.total,
status: 'pending',
created_at: new Date()
});
// Insert order items
await trx('order_items').insert(
command.items.map(item => ({
order_id: command.orderId,
product_id: item.productId,
quantity: item.quantity,
price: item.price
}))
);
});
// Publish event for read model to update
await eventBus.publish('order.created', {
orderId: command.orderId,
userId: command.userId,
items: command.items,
total: command.total,
timestamp: command.timestamp
});
return { orderId: command.orderId };
}
}
// API endpoint
app.post('/api/orders', async (req, res) => {
try {
const command = new CreateOrderCommand(req.body);
const result = await commandHandler.handle(command);
res.status(201).json({
success: true,
orderId: result.orderId
});
} catch (error) {
res.status(400).json({ error: error.message });
}
});
Implementation - Read Side (Queries):
// queries/getOrderHistory.js
class GetOrderHistoryQuery {
constructor(userId, page = 1, limit = 20) {
this.userId = userId;
this.page = page;
this.limit = limit;
}
}
// Query handler (reads from denormalized, optimized view)
class GetOrderHistoryHandler {
async handle(query) {
// Read from MongoDB (denormalized documents!)
const orders = await OrderReadModel.find({
userId: query.userId
})
.sort({ createdAt: -1 })
.skip((query.page - 1) * query.limit)
.limit(query.limit)
.lean();
// Data is already joined and formatted! No JOINs needed! 🚀
return orders;
}
}
// API endpoint
app.get('/api/orders', async (req, res) => {
try {
const query = new GetOrderHistoryQuery(
req.user.id,
req.query.page,
req.query.limit
);
const orders = await queryHandler.handle(query);
res.json(orders);
// 10ms instead of 8 seconds! 🚀
} catch (error) {
res.status(500).json({ error: 'Failed to fetch orders' });
}
});
The magic - Event-driven sync:
// Event handler - Updates read model when writes happen
eventBus.on('order.created', async (event) => {
try {
// Fetch additional data for the denormalized view
const { rows: [user] } = await db.query(
'SELECT name, email FROM users WHERE id = $1',
[event.userId]
);
const { rows: productDetails } = await db.query(
'SELECT id, name, image_url FROM products WHERE id = ANY($1)',
[event.items.map(i => i.productId)]
);
// Create denormalized document in MongoDB
await OrderReadModel.create({
orderId: event.orderId,
userId: event.userId,
userName: user.name, // Denormalized!
userEmail: user.email, // Denormalized!
items: event.items.map(item => ({
productId: item.productId,
productName: productDetails.find(p => p.id === item.productId).name, // Denormalized!
productImage: productDetails.find(p => p.id === item.productId).image_url, // Denormalized!
quantity: item.quantity,
price: item.price
})),
total: event.total,
status: 'pending',
createdAt: new Date(event.timestamp)
});
// Also update the analytics cache (HINCRBYFLOAT since totals are decimals)
await redis.hincrbyfloat('daily_revenue', getTodayKey(), event.total);
await redis.incr('daily_orders:' + getTodayKey());
console.log(`Read model updated for order ${event.orderId}`);
} catch (error) {
console.error('Failed to update read model:', error);
// Add to DLQ for retry
await dlq.push({ event: 'order.created', data: event });
}
});
Results after implementing CQRS:
// Before (single model):
// Write: 500ms (locks + complex indexes)
// Dashboard query: 8 seconds (full scan + JOINs)
// Database CPU: 95%
// Concurrent users: ~100 max
// After (CQRS):
// Write: 50ms (simple inserts, minimal indexes)
// Dashboard query: 10-50ms (denormalized, no JOINs)
// Write DB CPU: 30%
// Read DB CPU: 20%
// Concurrent users: 10,000+ 🚀
// 160x improvement on reads!
// 10x improvement on writes!
CQRS Pattern #2: Simple Projection 📊
When you don't need separate databases:
-- Same database, different tables optimized differently
-- Write tables: normalized
CREATE TABLE orders (
id UUID PRIMARY KEY,
user_id UUID NOT NULL,
total DECIMAL(10,2),
status VARCHAR(20),
created_at TIMESTAMP DEFAULT NOW(),
FOREIGN KEY (user_id) REFERENCES users(id)
);
CREATE TABLE order_items (
id SERIAL PRIMARY KEY,
order_id UUID NOT NULL,
product_id UUID NOT NULL,
quantity INT,
price DECIMAL(10,2),
FOREIGN KEY (order_id) REFERENCES orders(id),
FOREIGN KEY (product_id) REFERENCES products(id)
);
-- Read table: denormalized projection
CREATE TABLE order_history_view (
order_id UUID PRIMARY KEY,
user_id UUID,
user_name VARCHAR(255), -- Denormalized
user_email VARCHAR(255), -- Denormalized
items JSONB, -- Denormalized product details
total DECIMAL(10,2),
status VARCHAR(20),
created_at TIMESTAMP,
updated_at TIMESTAMP DEFAULT NOW()
);
-- Create indexes optimized for reads
CREATE INDEX idx_order_history_user ON order_history_view(user_id);
CREATE INDEX idx_order_history_date ON order_history_view(created_at DESC);
CREATE INDEX idx_order_history_status ON order_history_view(status);
Maintain projection with triggers or application code:
// Application code updates both on write
async function createOrder(orderData) {
const orderId = uuid(); // same uuid() helper as in the command examples
await db.transaction(async (trx) => {
// Write to normalized tables
await trx('orders').insert({
id: orderId,
user_id: orderData.userId,
total: orderData.total,
status: 'pending'
});
await trx('order_items').insert(
orderData.items.map(item => ({
order_id: orderId,
product_id: item.productId,
quantity: item.quantity,
price: item.price
}))
);
// Update denormalized projection in the same transaction
const user = await trx('users').where({ id: orderData.userId }).first();
const products = await trx('products').whereIn('id', orderData.items.map(i => i.productId));
await trx('order_history_view').insert({
order_id: orderId,
user_id: orderData.userId,
user_name: user.name,
user_email: user.email,
items: JSON.stringify(orderData.items.map(item => ({
...item,
product_name: products.find(p => p.id === item.productId).name
}))),
total: orderData.total,
status: 'pending',
created_at: new Date()
});
});
return orderId;
}
// Queries now read from projection (fast!)
async function getUserOrders(userId) {
return await db('order_history_view')
.where({ user_id: userId })
.orderBy('created_at', 'desc')
.limit(20);
// No JOINs! Just a simple SELECT! 🚀
}
CQRS Pattern #3: Event Sourcing + CQRS 📜
The ultimate combo:
// Store events, not current state!
// Event store (write side)
class OrderEventStore {
async saveEvent(event) {
await db('events').insert({
aggregate_id: event.orderId,
event_type: event.type,
event_data: JSON.stringify(event.data),
timestamp: Date.now(),
version: event.version
});
// Publish to read models (flatten the aggregate id into the payload so subscribers get it)
await eventBus.publish(event.type, { orderId: event.orderId, ...event.data });
}
async getEvents(orderId) {
return await db('events')
.where({ aggregate_id: orderId })
.orderBy('version', 'asc');
}
}
// Write side: Append events
app.post('/api/orders', async (req, res) => {
const orderId = uuid();
// Create event
await eventStore.saveEvent({
type: 'order.created',
orderId,
data: {
userId: req.user.id,
items: req.body.items,
total: calculateTotal(req.body.items)
},
version: 1
});
res.json({ orderId });
});
app.post('/api/orders/:id/ship', async (req, res) => {
const events = await eventStore.getEvents(req.params.id);
const currentVersion = events.length;
// Add shipping event
await eventStore.saveEvent({
type: 'order.shipped',
orderId: req.params.id,
data: {
trackingNumber: req.body.trackingNumber,
carrier: req.body.carrier
},
version: currentVersion + 1
});
res.json({ success: true });
});
// Read models subscribe to events
eventBus.on('order.created', async (event) => {
// Order list view (denormalized)
await OrderListView.create({
orderId: event.orderId,
userId: event.userId,
total: event.total,
status: 'pending',
createdAt: Date.now()
});
});
eventBus.on('order.shipped', async (event) => {
// Update read model
await OrderListView.updateOne(
{ orderId: event.orderId },
{
status: 'shipped',
trackingNumber: event.trackingNumber,
shippedAt: Date.now()
}
);
});
// Multiple read models from same events!
eventBus.on('order.created', async (event) => {
// Revenue analytics view
await RevenueAnalytics.updateOne(
{ date: getToday() },
{
$inc: {
totalRevenue: event.total,
orderCount: 1
}
},
{ upsert: true }
);
});
Why event sourcing + CQRS is powerful:
- ✅ Complete audit trail (every state change recorded)
- ✅ Time travel debugging (replay events to any point)
- ✅ Multiple read models from the same events
- ✅ Easy to add new read models (just replay events!)
- ✅ Eventual consistency (reads lag slightly but scale horizontally)
When designing our e-commerce backend, event sourcing + CQRS let us add new analytics dashboards without touching the write side. Just create a new read model, replay events, done! 🎯
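As a hedged illustration of that replay idea (the event shapes mirror the examples above; a Map stands in for the new read model's storage):

// Sketch: bootstrapping a brand-new read model by replaying stored events
const storedEvents = [
  { type: 'order.created', orderId: 'o-1', data: { userId: 'u-1', total: 120 }, version: 1 },
  { type: 'order.shipped', orderId: 'o-1', data: { trackingNumber: 'TN-9' }, version: 2 }
];

// A new "order status" view that didn't exist when these events were written
const orderStatusView = new Map();

function applyEvent(event) {
  switch (event.type) {
    case 'order.created':
      orderStatusView.set(event.orderId, { status: 'pending', total: event.data.total });
      break;
    case 'order.shipped': {
      const view = orderStatusView.get(event.orderId);
      if (view) Object.assign(view, { status: 'shipped', trackingNumber: event.data.trackingNumber });
      break;
    }
  }
}

// Replay history in version order - the new view is fully populated from day one
storedEvents
  .sort((a, b) => a.version - b.version)
  .forEach(applyEvent);
console.log(orderStatusView.get('o-1')); // { status: 'shipped', total: 120, trackingNumber: 'TN-9' }

In production you'd stream events from the event store in batches rather than sorting one array, but the fold is exactly the same.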
The Technology Stack 🛠️
My production CQRS setup:
// Write side (Commands)
const writeDb = new Pool({
host: 'postgres-master.internal',
database: 'orders_write',
// Optimized for writes
max: 20,
idleTimeoutMillis: 30000
});
// Read side (Queries) - connect once during startup
const { MongoClient } = require('mongodb');
const mongoClient = new MongoClient('mongodb://mongo-cluster.internal');
await mongoClient.connect(); // run this inside your async bootstrap
const readDb = mongoClient.db('orders_read');
// Event bus
const eventBus = new RabbitMQ({
url: 'amqp://rabbitmq.internal',
exchange: 'orders',
type: 'topic'
});
// Cache layer
const cache = new Redis({
host: 'redis-cluster.internal',
port: 6379
});
// Command handler
class OrderCommandHandler {
async createOrder(command) {
// Write to PostgreSQL (ACID transactions!)
const order = await writeDb.query(
'INSERT INTO orders (id, user_id, total) VALUES ($1, $2, $3)',
[command.orderId, command.userId, command.total]
);
// Publish event
await eventBus.publish('order.created', {
orderId: command.orderId,
userId: command.userId,
total: command.total
});
return { orderId: command.orderId };
}
}
// Query handler
class OrderQueryHandler {
async getOrders(userId, options) {
// Try cache first
const cacheKey = `orders:${userId}:${JSON.stringify(options)}`;
const cached = await cache.get(cacheKey);
if (cached) {
return JSON.parse(cached);
}
// Query MongoDB (denormalized documents)
const orders = await readDb
.collection('order_views')
.find({ userId })
.sort({ createdAt: -1 })
.limit(options.limit)
.toArray();
// Cache for 1 minute
await cache.setex(cacheKey, 60, JSON.stringify(orders));
return orders;
}
}
Common CQRS Mistakes (I Made Them All) 🪤
Mistake #1: Using CQRS Everywhere
// BAD: CQRS for simple CRUD
// User profile? Just use normal CRUD!
app.get('/api/profile', async (req, res) => {
const { rows: [profile] } = await db.query('SELECT * FROM users WHERE id = $1', [req.user.id]);
res.json(profile);
// This doesn't need CQRS!
});
// GOOD: CQRS only where it helps
// Complex analytics dashboard? CQRS makes sense!
app.get('/api/dashboard/revenue', async (req, res) => {
const last30Days = new Date(Date.now() - 30 * 24 * 60 * 60 * 1000);
const revenue = await RevenueReadModel.find({ date: { $gte: last30Days } });
res.json(revenue);
// Denormalized, pre-aggregated, fast! ✅
});
Golden rule: Use CQRS when reads and writes have VERY different requirements!
Mistake #2: Synchronous Read Model Updates
// BAD: Update read model synchronously
app.post('/api/orders', async (req, res) => {
const order = await createOrder(req.body);
// Blocking! Slows down writes!
await updateOrderReadModel(order); // 500ms
await updateAnalyticsReadModel(order); // 200ms
await updateSearchIndex(order); // 300ms
res.json(order);
// User waited 1+ second! 😱
});
// GOOD: Async event-driven updates
app.post('/api/orders', async (req, res) => {
const order = await createOrder(req.body);
// Fire events and respond immediately!
eventBus.publish('order.created', order);
res.json(order);
// User gets response in 50ms! ✅
});
// Read models update in background
eventBus.on('order.created', async (event) => {
await updateOrderReadModel(event);
await updateAnalyticsReadModel(event);
await updateSearchIndex(event);
// Happens async, doesn't block writes!
});
Mistake #3: No Eventual Consistency Handling
// BAD: Assume read model is immediately consistent
app.post('/api/orders', async (req, res) => {
const order = await createOrder(req.body);
res.json({ success: true, orderId: order.id });
});
app.get('/api/orders/:id', async (req, res) => {
const order = await OrderReadModel.findOne({ orderId: req.params.id });
if (!order) {
return res.status(404).json({ error: 'Order not found' });
// But we JUST created it! User sees 404! 😱
}
res.json(order);
});
// GOOD: Return data from write, handle eventual consistency
app.post('/api/orders', async (req, res) => {
const order = await createOrder(req.body);
// Return full order data immediately!
res.json({
success: true,
order: {
id: order.id,
items: order.items,
total: order.total,
status: 'pending'
}
});
// Client has the data, doesn't need to query immediately!
});
app.get('/api/orders/:id', async (req, res) => {
let order = await OrderReadModel.findOne({ orderId: req.params.id });
if (!order) {
// Fallback to the write model (slower but consistent)
const { rows } = await db.query('SELECT * FROM orders WHERE id = $1', [req.params.id]);
order = rows[0];
}
res.json(order);
});
When to Use CQRS? 🌳
Use CQRS when:
- ✅ Read and write patterns are VERY different
- ✅ Reads are 90%+ of traffic (most apps!)
- ✅ Complex read queries slow down writes
- ✅ Need different scaling for reads vs writes
- ✅ Business analytics on same data as transactional writes
- ✅ Multiple read models from same data (list view, analytics, search)
Skip CQRS when:
- ❌ Simple CRUD app (user profile management)
- ❌ Reads and writes are similar complexity
- ❌ Strong consistency required everywhere
- ❌ Small scale (< 100 req/sec)
- ❌ Team unfamiliar with event-driven architecture
My production usage:
- ✅ E-commerce orders: CQRS (complex analytics dashboards)
- ✅ User activity feed: CQRS (millions of reads, few writes)
- ✅ Real-time dashboards: CQRS (denormalized for speed)
- ❌ User settings: normal CRUD (simple, low volume)
- ❌ Admin CRUD: normal CRUD (consistency > speed)
As a Technical Lead, I've learned: CQRS adds complexity. Only use it where the benefits outweigh the cost!
The Bottom Line 💡
CQRS isn't about being fancy - it's about accepting that reads and writes have different needs!
The essentials:
- Separate models - Different optimization for commands vs queries
- Event-driven sync - Async updates to read models
- Denormalization - Pre-join data for fast reads
- Eventual consistency - Accept slight lag for massive scale
- Use strategically - Not every CRUD needs CQRS
The truth about CQRS:
It's not "let's separate everything!" - it's "these operations have fundamentally different requirements, let's optimize each separately!" You're trading immediate consistency for scalability and performance!
When architecting on AWS, I learned: CQRS shines when you have high read volume with complex queries. Dashboard taking 8 seconds? CQRS can make it 50ms. But for simple user CRUD? Stick with traditional architecture!
You don't need CQRS from day one - start simple, add CQRS to specific areas when they become bottlenecks! 🎯
Your Action Plan 📋
This week:
- Identify read vs write ratios in your app (see the sketch after this list)
- Find slow analytical queries
- Consider where denormalization helps
- Test eventual consistency tolerance
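For the ratio, you don't need fancy tooling - a rough counter is enough. A minimal sketch assuming an Express app (treating GET/HEAD as reads is a simplification, but close enough for a first estimate):

const express = require('express');
const app = express();

const counts = { reads: 0, writes: 0 };

// Count every request: GET/HEAD as reads, everything else as writes
app.use((req, res, next) => {
  if (req.method === 'GET' || req.method === 'HEAD') counts.reads++;
  else counts.writes++;
  next();
});

// Peek at the ratio while the app runs (protect this route in production!)
app.get('/internal/rw-ratio', (req, res) => {
  const total = counts.reads + counts.writes || 1;
  res.json({ ...counts, readPercent: ((counts.reads / total) * 100).toFixed(1) + '%' });
});

app.listen(3000);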
This month:
- Implement CQRS for ONE slow query
- Set up event bus (RabbitMQ/Kafka)
- Create denormalized read model
- Measure performance improvements
This quarter:
- Expand CQRS to other bottlenecks
- Add multiple read models for different views
- Consider event sourcing for audit trail
- Monitor eventual consistency lag (see the sketch below)
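For that last item, the simplest lag measurement is a timestamp on every event: stamp it at publish time, record the delta when the read model applies it. A sketch using Node's built-in EventEmitter as a stand-in for your real bus:

const { EventEmitter } = require('events');
const eventBus = new EventEmitter(); // stand-in for RabbitMQ/Kafka

// Write side stamps the event when it's published
function publishOrderCreated(order) {
  eventBus.emit('order.created', { ...order, timestamp: Date.now() });
}

// Read side records how far behind the projection was when it caught up
eventBus.on('order.created', (event) => {
  // ... update the read model here ...
  const lagMs = Date.now() - event.timestamp;
  if (lagMs > 5000) console.warn(`Read model is ${lagMs}ms behind writes!`);
  // In production, push lagMs to your metrics system instead of the console
});

publishOrderCreated({ orderId: 'o-1', total: 50 });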
Resources Worth Your Time 📚
Tools I use:
- MongoDB - Great for denormalized read models
- RabbitMQ - Event bus for sync
- Redis - Cache layer for reads
Real talk: The best CQRS implementation is the simplest one that solves YOUR problem!
Building scalable read-heavy systems? Connect with me on LinkedIn and share your CQRS stories!
Want to see CQRS implementations? Check out my GitHub - real patterns from production!
Now go forth and separate responsibly! 📖✍️✨
P.S. If your analytics dashboards are killing your database, CQRS might be your answer. We went from 8-second dashboard loads to 50ms by separating reads from writes! 🚀
P.P.S. I once tried to use CQRS for a simple user profile CRUD. Added so much complexity for zero benefit. Ripped it out after a week. Learn from my pain - CQRS is powerful but NOT for everything! Use it strategically! 😅