# 🔌 Node.js Graceful Shutdown: Don't Pull the Plug on Your Users
Imagine you're halfway through a bank transfer and the server just... dies. No warning. No "hang on, let me finish." Just gone.
That's what happens when your Node.js app shuts down ungracefully. It's the digital equivalent of a waiter walking away mid-order because their shift ended. Rude. Chaotic. And completely avoidable.
Let's fix that.
## The Problem: SIGTERM Is Coming for You
Every time you deploy a new version of your app, Kubernetes (or Docker, or your process manager) sends a `SIGTERM` signal to your running process. This is the polite way of saying: "Hey, please wrap up and exit soon."
By default, Node.js responds to `SIGTERM` by... immediately dying. No cleanup. No finishing in-flight requests. Just poof.
Here's what that looks like in the wild:
- A user's API request returns a 502 or a socket hang-up
- A database write gets half-committed
- Open file handles leak like a broken faucet
- Your logs show nothing useful because the logger never flushed
The cruel irony? This happens most often during deployments — the exact moment users expect your app to be more reliable, not less.
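You can watch this happen with a toy server (a minimal sketch; the port and timings here are arbitrary):

```js
const http = require('node:http');

// No signal handlers registered, so Node's default SIGTERM behavior applies:
// `kill -TERM <pid>` during the 5-second window kills the process instantly
// and the client sees a socket hang-up.
http.createServer((req, res) => {
  setTimeout(() => res.end('done'), 5000); // stand-in for a slow query
}).listen(3000, () => console.log(`pid ${process.pid} listening on 3000`));
```

Curl it, send the signal from another terminal, and watch the request die mid-flight.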
## The Solution: Listen, Drain, Exit
Graceful shutdown boils down to three steps:
- Stop accepting new requests
- Finish the ones already in flight
- Clean up resources, then exit
Here's the bare-minimum pattern:
```js
const express = require('express');
const app = express();

app.get('/health', (req, res) => res.json({ status: 'ok' }));

app.get('/slow', async (req, res) => {
  // Simulate a slow database query
  await new Promise(resolve => setTimeout(resolve, 3000));
  res.json({ message: 'Finally done!' });
});

const server = app.listen(3000, () => {
  console.log('Server listening on port 3000');
});

// Graceful shutdown handler
async function shutdown(signal) {
  console.log(`Received ${signal}. Starting graceful shutdown...`);

  // Stop accepting new connections
  server.close(async () => {
    console.log('HTTP server closed. No new requests accepted.');

    // Clean up resources: close DB connections, flush logs, etc.
    try {
      // await db.close();
      // await redisClient.quit();
      console.log('Resources cleaned up. Exiting.');
      process.exit(0);
    } catch (err) {
      console.error('Error during cleanup:', err);
      process.exit(1);
    }
  });
}

process.on('SIGTERM', () => shutdown('SIGTERM'));
process.on('SIGINT', () => shutdown('SIGINT')); // Ctrl+C in dev
```
The key insight: `server.close()` stops the server from accepting new connections but lets existing ones finish. That's exactly what you want.
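One caveat worth knowing: `server.close()` also waits for idle keep-alive connections, which can drag shutdown out even when no request is actually running. If you're on Node 18.2 or newer, `http.Server` can hurry those along; treat this as an optional extra on top of the pattern above:

```js
// Node >= 18.2: close keep-alive sockets that have no request in flight,
// so they don't delay the server.close() callback.
server.closeIdleConnections();

// The blunt version tears down in-flight requests too; save it for the
// forced-exit path if you use it at all:
// server.closeAllConnections();
```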
## The Timeout Safety Net
Here's the sneaky problem with the above code: what if a request never finishes? Maybe there's a client holding a long-lived connection, or a query is stuck waiting for a lock. Your app will just hang forever, refusing to exit.
You need a timeout — a hard deadline that says "I tried to be polite, but now I'm leaving regardless":
```js
async function shutdown(signal) {
  console.log(`Received ${signal}. Graceful shutdown initiated...`);

  // Set a hard timeout: force exit after 10 seconds
  const forceExitTimer = setTimeout(() => {
    console.error('Graceful shutdown timed out. Forcing exit.');
    process.exit(1);
  }, 10_000);

  // Don't let this timer keep the process alive
  forceExitTimer.unref();

  server.close(async () => {
    clearTimeout(forceExitTimer);
    try {
      // `db` and `redisClient` are whatever clients your app created at startup
      await db.close();
      await redisClient.quit();
      console.log('Clean shutdown complete.');
      process.exit(0);
    } catch (err) {
      console.error('Cleanup error:', err);
      process.exit(1);
    }
  });
}
```
The `forceExitTimer.unref()` call is a subtle but important trick: it tells Node's event loop "don't stay alive just because of this timer." Without it, if nothing else is keeping the process running, the timer itself would prevent the process from naturally exiting.
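If you want to see `unref()` in isolation, this one-liner process exits immediately instead of waiting a minute (a standalone toy, unrelated to the server):

```js
// The unref'd timer is the only thing on the event loop, so Node
// exits right away and the callback never runs.
setTimeout(() => console.log('never printed'), 60_000).unref();
```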
Ten seconds is a reasonable default for most APIs. Adjust based on your slowest expected operation.
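One small hardening step the snippets above skip: guard against the handler running twice. A second Ctrl+C, or a SIGTERM followed by a SIGINT, would otherwise kick off a second shutdown mid-shutdown. A sketch:

```js
let shuttingDown = false;

async function shutdown(signal) {
  if (shuttingDown) return; // already draining; ignore repeat signals
  shuttingDown = true;
  // ... the shutdown logic from above ...
}
```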
## The Kubernetes Angle
If you're running in Kubernetes, there's an extra layer to understand. When a pod is terminated, Kubernetes sends the `SIGTERM` and, in parallel, starts removing the pod from the Service's endpoints so traffic stops being routed to it. Because those two things happen independently, there's a race condition: traffic can still arrive for a second or two after `SIGTERM` while kube-proxy and any external load balancers catch up.
The fix is embarrassingly simple: add a small delay before you start the shutdown:
```js
async function shutdown(signal) {
  console.log(`${signal} received. Waiting for load balancer...`);

  // Give the load balancer time to stop routing traffic here
  await new Promise(resolve => setTimeout(resolve, 5000));

  console.log('Starting shutdown...');
  server.close(/* ... */);
}
```
It feels weird to intentionally sleep during shutdown, but those 5 seconds prevent a flood of errors from requests hitting a closing server. Consider it a courtesy wave goodbye. Just make sure the delay plus your hard timeout fits within the pod's `terminationGracePeriodSeconds` (30 seconds by default); once that grace period expires, Kubernetes sends SIGKILL and your cleanup stops mid-stride.
## What to Clean Up
Beyond the HTTP server itself, here's a checklist of resources that need explicit shutdown:
| Resource | Why It Matters |
|---|---|
| Database connections | Lingering connections hold server-side slots and can exhaust the database's connection limit |
| Redis clients | Prevents "connection forcibly closed" errors in logs |
| Message queue consumers | Avoids re-queuing messages that were mid-processing |
| Cron jobs / timers | Prevents tasks from firing during teardown |
| Log streams | Ensures the last few log lines actually get written |
Each of these usually has a `.close()`, `.quit()`, or `.disconnect()` method. Call them in sequence (or in parallel with `Promise.all` if order doesn't matter) inside your shutdown callback.
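As a sketch of what that can look like (the `db`, `redisClient`, and `queueConsumer` handles here are hypothetical placeholders; substitute whatever clients your app actually creates):

```js
async function cleanup() {
  // These resources are independent, so parallel is fine. allSettled
  // ensures one failure doesn't stop the others from closing.
  const results = await Promise.allSettled([
    db.close(),           // e.g. a connection pool
    redisClient.quit(),   // hypothetical Redis client
    queueConsumer.stop(), // hypothetical message queue consumer
  ]);
  for (const r of results) {
    if (r.status === 'rejected') console.error('Cleanup failure:', r.reason);
  }
}
```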
## TL;DR
Graceful shutdown isn't glamorous. It won't get you upvotes on a tech blog (well, except this one). But it's the difference between a professional-grade production app and one that leaves users staring at error pages every time you ship.
The recipe:
- Catch `SIGTERM` and `SIGINT`
- Call `server.close()` to drain existing requests
- Add a hard timeout so you don't hang forever
- Clean up your resources in order
- In Kubernetes, sleep 5 seconds before starting the whole thing
Your future self — and your users mid-transaction — will thank you.
Go add graceful shutdown to your app right now. Seriously, it takes 20 lines of code and could save you a very unpleasant on-call incident at 2am.
Have a horror story about ungraceful shutdowns in production? Drop it in the comments — misery loves company.