
⚡ Redis Caching in Node.js: Speed Up Your API Like a Cheat Code

6 min read

Picture this: your API starts getting popular. Users love it. Traffic climbs. Then your database starts sweating, your response times balloon from 50ms to 800ms, and your boss starts asking pointed questions in Slack. You've been here. We've all been here.

The good news? A large chunk of your database queries are probably answering the same question, over and over again. Redis caching is how you stop your database from having the same conversation a thousand times a day, and start handing out pre-cooked answers in microseconds.

Let's fix your API before it becomes a meme.

What Is Caching, Really?

Think of Redis as a hyper-organized sticky note system. Instead of running to the filing cabinet (your database) every time someone asks "what are the top 10 products?", you slap the answer on a sticky note and grab that instead. The sticky note expires after a while so it doesn't go stale — then you check the cabinet again.

That's it. That's caching.

Redis is the world's most popular in-memory data store. It lives in RAM, which is absurdly fast compared to disk-based databases. We're talking sub-millisecond reads vs. the 50–200ms your PostgreSQL query might take after joins and index scans.

Setting Up Redis with Node.js

Install the ioredis package — it's the most robust Redis client for Node.js:

npm install ioredis

Create a reusable Redis client:

// lib/redis.js
import Redis from 'ioredis';

const redis = new Redis({
  host: process.env.REDIS_HOST || 'localhost',
  port: Number(process.env.REDIS_PORT) || 6379, // env vars are strings — cast to a number
  // Reconnect automatically — Redis blips shouldn't take down your app
  retryStrategy: (times) => Math.min(times * 50, 2000),
});

redis.on('error', (err) => console.error('Redis error:', err));

export default redis;

Simple, clean, ready to go. Now let's actually use it.

The Cache-Aside Pattern

The most common caching strategy is cache-aside (also called lazy loading). The logic is:

  1. Check the cache first.
  2. If it's there (cache hit), return it immediately.
  3. If it's not (cache miss), fetch from the database, store in cache, return the result.

Here's an Express route that fetches a user profile using this pattern:

import express from 'express';
import redis from '../lib/redis.js';
import db from '../lib/db.js';

const router = express.Router();

router.get('/users/:id', async (req, res) => {
  const { id } = req.params;
  const cacheKey = `user:${id}`;

  // Step 1: check the cache
  const cached = await redis.get(cacheKey);
  if (cached) {
    return res.json({ source: 'cache', data: JSON.parse(cached) });
  }

  // Step 2: cache miss — go to the database
  const result = await db.query('SELECT * FROM users WHERE id = $1', [id]);
  const user = result.rows[0]; // pg-style clients return a result object; the rows live in .rows
  if (!user) return res.status(404).json({ error: 'User not found' });

  // Step 3: store in cache, expire after 5 minutes (300 seconds)
  await redis.setex(cacheKey, 300, JSON.stringify(user));

  res.json({ source: 'db', data: user });
});

export default router;

The source field is a debugging trick — you can clearly see where data is coming from during development. Remove it in production if you don't want to advertise your caching layer to clients.

Cache Invalidation: The Hard Part

There's a famous saying in computer science: "There are only two hard problems — cache invalidation, naming things, and off-by-one errors."

Cache invalidation means: when data changes, you need to delete the stale cached version so the next request fetches fresh data. Here's how to do it cleanly when a user updates their profile:

router.put('/users/:id', async (req, res) => {
  const { id } = req.params;
  const { name, email } = req.body;

  // Update the database
  const result = await db.query(
    'UPDATE users SET name = $1, email = $2 WHERE id = $3 RETURNING *',
    [name, email, id]
  );
  const updated = result.rows[0]; // RETURNING * gives us the fresh row
  if (!updated) return res.status(404).json({ error: 'User not found' });

  // Bust the cache — the old data is now a lie
  await redis.del(`user:${id}`);

  res.json({ data: updated });
});

No drama. Update the DB, delete the cache key. Next GET request will repopulate it automatically. This is why the cache-aside pattern shines — the cache is always a derivative of the database, never the source of truth.
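It gets slightly more interesting when the same user shows up in other cached results — a "users:all" listing, a team roster page. Those derived keys have to go too. Here's a hedged sketch of one way to keep all the invalidation in a single function (invalidateUser and the users:all key are illustrative names, not from this post); it assumes an ioredis-style del, which accepts multiple keys in one call:

```javascript
// Delete every cache key derived from this user in one round trip.
// ioredis-style del(...keys) accepts multiple keys and returns how many were removed.
async function invalidateUser(cache, id) {
  return cache.del(`user:${id}`, 'users:all');
}
```

Call it from every write path (PUT, DELETE, POST). When you later cache a new derived view, there's exactly one place to add its key.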

A Note on TTL (Time to Live)

Every cached value should have an expiration time. Always. Without TTL, stale data lives forever and you'll spend a Saturday debugging why users are seeing information from three months ago.

Choose your TTL based on how often the data changes and how much staleness you can tolerate:

  Data Type                 Suggested TTL
  User profile              5–15 minutes
  Product catalog           1–6 hours
  Static config/settings    24 hours
  Live scores / prices      10–30 seconds
  Session tokens            Match session expiry
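One easy way to keep these choices from being scattered as magic numbers across your routes is a named-TTL map. A small sketch — the names and exact values here are illustrative picks from the ranges above, not official numbers:

```javascript
// TTLs in seconds, named so call sites read clearly:
// redis.setex(key, TTL.userProfile, payload)
const TTL = {
  userProfile: 10 * 60,     // 5–15 minutes: pick something in the middle
  productCatalog: 3 * 3600, // 1–6 hours
  staticConfig: 24 * 3600,  // a full day
  liveScores: 20,           // 10–30 seconds
};
```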

The setex command (SET + EXPIRE) does both in one atomic operation — set the value and the TTL at the same time. Never use a bare set for cached data: a key with no TTL is a key you'll forget about.

The Big Win: Aggregate Queries

Caching shines brightest on expensive aggregate queries — the kind that scan thousands of rows to produce a leaderboard, a dashboard stat, or a "trending items" list. These are perfect cache candidates because they're:

  • Expensive to compute
  • Read frequently by many users
  • Tolerant of slight staleness (nobody cares if the leaderboard updates every 60 seconds)

Wrap these in a small helper and your database will send you a thank-you note.
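That helper might look something like this — a sketch, not code from this post; cacheAside and its parameters are names I've made up, and it assumes any client exposing ioredis-style get/setex (the redis instance from earlier, or a stub in tests):

```javascript
// Generic cache-aside: check the cache, fall back to a loader function,
// then store the fresh result under a TTL. Works with any client that
// exposes ioredis-style get(key) and setex(key, ttlSeconds, value).
async function cacheAside(cache, key, ttlSeconds, loader) {
  const cached = await cache.get(key);
  if (cached !== null && cached !== undefined) {
    return { source: 'cache', data: JSON.parse(cached) };
  }
  const fresh = await loader(); // e.g. the expensive aggregate query
  await cache.setex(key, ttlSeconds, JSON.stringify(fresh));
  return { source: 'db', data: fresh };
}
```

An endpoint then shrinks to a one-liner along the lines of cacheAside(redis, 'leaderboard:top10', 60, () => fetchLeaderboard()), and every route gets hits, misses, and TTLs handled the same way.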

Practical Takeaways

You don't need to cache everything. Over-caching creates complexity and invalidation nightmares. Start with:

  • Expensive queries that run often and change rarely
  • High-traffic endpoints under load (product pages, user profiles, feed data)
  • Third-party API responses you're re-fetching on every request

Redis caching is one of the highest-ROI performance improvements you can make to a Node.js API. A few dozen lines of code can slash your database load by 80% and turn a sluggish endpoint into something that responds before the user even blinks.

Your database has been answering the same questions in silence. Give it a break.


What's your biggest caching war story? Have you ever had a Redis outage take down your whole app because it was the source of truth instead of a cache? Drop it in the comments — we learn more from disasters than from documentation.

Thanks for reading!
