
⚙️ Background Jobs in Node.js: Stop Making Your Users Wait

5 min read


Picture this: a user clicks "Sign Up" on your app. Behind the scenes, your Express handler fires off a welcome email, resizes their profile picture, updates three analytics dashboards, and queues a coupon for their first purchase — all before sending back the 201 Created response.

Two seconds later, the user is staring at a spinner, wondering if the internet broke.

We've all been that user. We've also all written that handler. Today we fix it.


The Problem: Doing Everything Inline

The naive approach to "do stuff after a request" is to just... do it inside the route handler:

app.post('/register', async (req, res) => {
  const user = await db.createUser(req.body);

  // All this runs BEFORE we respond 😬
  await sendWelcomeEmail(user.email);       // 500ms
  await resizeProfilePicture(user.id);      // 800ms
  await updateAnalytics(user.id);           // 300ms
  await generateWelcomeCoupon(user.id);     // 400ms

  res.status(201).json({ user });           // Finally!
});

Total time before the user gets a response: ~2 seconds. For a signup! Your users didn't ask to watch paint dry — they just want to be registered.

The fix is simple in concept: respond immediately, do the heavy lifting later.


Enter the Job Queue

A job queue is like a restaurant ticket system. The waiter (your API) takes the order and hands a ticket to the kitchen (worker). The customer (user) gets a "your order is in!" confirmation instantly. The kitchen handles the actual cooking in its own time.
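To make the analogy concrete, here's a toy in-memory queue in plain JavaScript. This is a sketch for intuition only: no persistence, no retries, no concurrency control, which is exactly the hard stuff a real queue library adds.

```javascript
// Toy in-memory job queue: the "ticket rail" between API and kitchen.
// Illustrative only — if this process dies, the tickets die with it.
class TinyQueue {
  constructor() {
    this.jobs = [];     // tickets waiting on the rail
    this.waiting = [];  // idle workers waiting for a ticket
  }

  // The API calls add() and can respond to the user immediately.
  add(name, data) {
    const job = { name, data };
    const idleWorker = this.waiting.shift();
    if (idleWorker) idleWorker(job); // hand the ticket straight over
    else this.jobs.push(job);        // otherwise park it on the rail
  }

  // A worker awaits take() until a ticket arrives.
  take() {
    if (this.jobs.length) return Promise.resolve(this.jobs.shift());
    return new Promise((resolve) => this.waiting.push(resolve));
  }
}

// Demo: the "kitchen" processes tickets in its own time.
const queue = new TinyQueue();
const processed = [];

(async () => {
  for (let i = 0; i < 2; i++) {
    const job = await queue.take();
    processed.push(job.name);
  }
})();

queue.add('welcome-email', { email: 'a@example.com' });
queue.add('resize-avatar', { userId: 1 });
```

The crucial property: `add()` returns instantly, regardless of how slow the worker is. Production queues like BullMQ give you the same shape, but park the jobs in Redis so they survive restarts.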

One of the most battle-tested Node.js solutions for this is BullMQ, a Redis-backed queue library that handles retries, priorities, concurrency, and failure tracking out of the box.

Setting Up BullMQ

First, make sure you have Redis running (Docker makes this trivial: docker run -p 6379:6379 redis), then install:

npm install bullmq ioredis

Now let's refactor that registration handler:

import { Queue, Worker } from 'bullmq';
import IORedis from 'ioredis';

const connection = new IORedis({ maxRetriesPerRequest: null });

// Define your queues
const emailQueue = new Queue('emails', { connection });
const imageQueue = new Queue('images', { connection });

// The route handler — lean and mean ✅
app.post('/register', async (req, res) => {
  const user = await db.createUser(req.body);

  // Enqueue and move on: each add() is just a quick Redis write, not the actual work
  await emailQueue.add('welcome-email', { userId: user.id, email: user.email });
  await imageQueue.add('resize-avatar', { userId: user.id });

  res.status(201).json({ user }); // Responds in ~50ms 🚀
});

// Workers run separately (or in a worker thread)
const emailWorker = new Worker('emails', async (job) => {
  if (job.name === 'welcome-email') {
    await sendWelcomeEmail(job.data.email);
    console.log(`Welcome email sent to ${job.data.email}`);
  }
}, { connection });

emailWorker.on('failed', (job, err) => {
  console.error(`Job ${job.id} failed:`, err.message);
});

Your route now responds in milliseconds. The workers pick up the jobs asynchronously, and if something fails (say, your email provider has a hiccup), BullMQ can retry the job. One caveat: retries are opt-in. By default a job is attempted once; you enable retries by setting an attempts count and a backoff policy on the job.
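Retry behavior is configured per job (or queue-wide via `defaultJobOptions`). A hedged sketch using BullMQ's `attempts` and `backoff` job options; this is a configuration fragment, not a runnable script:

```javascript
// Retries are opt-in: without `attempts`, a failed job just stays failed.
await emailQueue.add(
  'welcome-email',
  { userId: user.id, email: user.email },
  {
    attempts: 5,                                   // up to 5 tries total
    backoff: { type: 'exponential', delay: 1000 }, // wait ~1s, 2s, 4s, 8s...
    removeOnComplete: true,                        // keep Redis tidy
  }
);
```

With that in place, a flaky SMTP provider costs you a delayed email instead of a lost one.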


Real-World Patterns Worth Knowing

1. Separate Your Workers Into Their Own Process

Don't run workers inside your Express process. A CPU-heavy job can starve your API's event loop, and every API deploy or crash interrupts in-flight jobs. Instead, run a dedicated worker process:

# In production, you'd run these as separate services
node src/api/server.js      # your Express app
node src/workers/email.js   # email worker
node src/workers/image.js   # image processing worker

This also lets you scale them independently — spin up 10 image-processing workers during peak hours without touching your API layer.
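A minimal standalone worker file might look like the sketch below. The file path and `sendWelcomeEmail` import are this post's hypothetical names, and it needs a running Redis to actually start:

```javascript
// src/workers/email.js — runs as its own process, separate from the API.
import { Worker } from 'bullmq';
import IORedis from 'ioredis';
import { sendWelcomeEmail } from '../lib/mailer.js'; // hypothetical helper

const connection = new IORedis({ maxRetriesPerRequest: null });

const worker = new Worker(
  'emails',
  async (job) => {
    if (job.name === 'welcome-email') {
      await sendWelcomeEmail(job.data.email);
    }
  },
  { connection, concurrency: 5 } // process up to 5 jobs in parallel
);

// Graceful shutdown: finish in-flight jobs before exiting,
// so a deploy doesn't abandon half-processed work.
process.on('SIGTERM', async () => {
  await worker.close();
  await connection.quit();
  process.exit(0);
});
```

The graceful-shutdown handler matters in practice: orchestrators like Kubernetes send SIGTERM on every deploy, and `worker.close()` lets active jobs finish instead of being cut off mid-email.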

2. Use Job Priorities

Not all background work is equally urgent. BullMQ lets you assign priority (lower number = higher priority):

// Password reset emails are urgent
await emailQueue.add('password-reset', { email }, { priority: 1 });

// Weekly newsletters can wait
await emailQueue.add('newsletter', { email }, { priority: 10 });

3. Delayed Jobs Are Underrated

Need to send a "You left items in your cart!" email 30 minutes after a user abandons checkout? Just schedule a delayed job:

await emailQueue.add(
  'cart-abandonment',
  { userId, cartId },
  { delay: 30 * 60 * 1000 } // 30 minutes in ms
);

No cron jobs, no separate scheduling service — just set delay and let BullMQ handle it.


The Payoff

Let's revisit our registration handler. Before: ~2 second response. After: ~50ms response. Your API went from feeling like a DMV visit to feeling like a vending machine.

But beyond speed, you get:

  • Resilience: jobs persist in Redis, so crashes don't lose work
  • Visibility: the Bull Board dashboard (a companion package) lets you inspect queued, active, and failed jobs in real time
  • Retries: transient failures (flaky email providers, rate limits) are handled automatically
  • Decoupling: your API doesn't care how emails are sent, just that the job was queued
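On the visibility point, mounting Bull Board on the existing Express app is only a few lines. A sketch using the `@bull-board` packages; package and adapter names reflect recent versions, so check the Bull Board docs for yours:

```javascript
import { createBullBoard } from '@bull-board/api';
import { BullMQAdapter } from '@bull-board/api/bullMQAdapter';
import { ExpressAdapter } from '@bull-board/express';

const serverAdapter = new ExpressAdapter();
serverAdapter.setBasePath('/admin/queues');

createBullBoard({
  queues: [new BullMQAdapter(emailQueue), new BullMQAdapter(imageQueue)],
  serverAdapter,
});

// Mount the dashboard — and put auth in front of it,
// since it exposes job payloads and retry/delete controls.
app.use('/admin/queues', serverAdapter.getRouter());
```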

When NOT to Use a Queue

Queues add operational complexity (you need Redis). Don't reach for them for:

  • Fast operations (< 100ms, no external services) — just do it inline
  • Returning data to the user — queues are fire-and-forget; if the user needs the result synchronously, a queue is the wrong tool
  • Simple scripts — if you're not building a service, you're probably over-engineering

Go Make Your APIs Snappy

Background job queues are one of those tools that, once you start using them, you wonder how you ever shipped without them. Offloading async work is the difference between an API that feels polished and one that feels like it's thinking too hard.

Start with one queue. Pick your slowest route — probably the one that sends emails — and move that work out of the request lifecycle. See how it feels. I'll bet you'll be queuing everything by the end of the week.

Resources to dive deeper:

  • BullMQ docs — comprehensive and well-maintained
  • Bull Board — beautiful dashboard for your queues
  • Redis documentation — understanding your backing store never hurts

What's the slowest thing your API does inline right now? Drop it in the comments — let's queue it up. 👇

Thanks for reading!
