
🏊 Node.js Database Connection Pooling: Stop Knocking on a New Door Every Time

6 min read

Picture this: every time a customer walks into your restaurant, you demolish the front door, build a brand new one, seat them, and then demolish it again when they leave. Sounds insane, right?

That's exactly what your Node.js app does when it opens a fresh database connection for every single request.

Database connections are expensive. TCP handshakes, SSL negotiation, authentication — all of that happens before a single query runs. Do it thousands of times per minute, and your database will be sweating bullets while your response times tank.

Enter connection pooling: the practice of keeping a warm set of connections ready to go, reusing them across requests like a civilized application.


What Exactly Is a Connection Pool?

A connection pool is a cache of pre-opened database connections. Instead of opening and closing a connection per request, your app grabs an available connection from the pool, uses it, and returns it when done.

Think of it like a hotel's shuttle van. The van doesn't drive one guest, get destroyed, and get rebuilt for the next. It loops the airport, picks people up, drops them off, and does it again. Efficient. Reusable. Not insane.

A pool has a few key properties:

  • Min connections — always-warm connections sitting idle, ready to serve
  • Max connections — the ceiling; beyond this, requests wait in line
  • Idle timeout — how long a connection can loaf around before getting evicted
  • Acquire timeout — how long a request waits before giving up
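Those four knobs are easier to internalize with a toy model. Here's a deliberately stripped-down sketch — not how pg implements it, and missing the min/idle/acquire timeout logic — showing just the max ceiling, the idle list, and the waiting line:

```javascript
// Toy pool: illustrates the max ceiling and the acquire queue.
// `createConn` stands in for whatever opens a real connection.
class TinyPool {
  constructor(createConn, max) {
    this.createConn = createConn;
    this.max = max;
    this.total = 0;    // connections created and still alive
    this.idle = [];    // warm connections ready to hand out
    this.waiting = []; // resolvers for requests queued behind the max
  }

  async acquire() {
    if (this.idle.length > 0) return this.idle.pop(); // reuse a warm one
    if (this.total < this.max) {                      // room to grow
      this.total++;
      return this.createConn();
    }
    // At the ceiling: park this request until someone releases.
    return new Promise((resolve) => this.waiting.push(resolve));
  }

  release(conn) {
    const next = this.waiting.shift();
    if (next) next(conn);      // hand straight to the next waiter
    else this.idle.push(conn); // otherwise back to the idle list
  }
}
```

A real pool layers eviction timers (idle timeout) and a rejection deadline (acquire timeout) on top of exactly this checkout/return loop.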

The Naive Approach (Don't Do This)

Here's what many beginners do in Node.js:

const { Client } = require('pg');

app.get('/users', async (req, res) => {
  const client = new Client({ connectionString: process.env.DATABASE_URL });
  await client.connect(); // 🐢 Full TCP + auth handshake every. single. time.

  const result = await client.query('SELECT * FROM users');
  await client.end(); // and if query() throws, this never runs — leaked connection

  res.json(result.rows);
});

This works. It's also quietly murdering your performance. Under moderate load, you'll hit PostgreSQL's max_connections limit (default: 100), and suddenly every new request fails with FATAL: sorry, too many clients already while your users stare at spinners.


The Right Way: Use a Pool

With pg (node-postgres), switching to a pool is embarrassingly easy:

const { Pool } = require('pg');

// Create ONE pool for the entire app lifetime
const pool = new Pool({
  connectionString: process.env.DATABASE_URL,
  max: 10,           // max 10 connections in the pool
  idleTimeoutMillis: 30000,   // evict idle connections after 30s
  connectionTimeoutMillis: 2000, // throw if no connection available in 2s
});

// Reuse it across all routes
app.get('/users', async (req, res) => {
  const result = await pool.query('SELECT * FROM users');
  res.json(result.rows);
});

app.get('/posts', async (req, res) => {
  const result = await pool.query('SELECT * FROM posts WHERE published = true');
  res.json(result.rows);
});

Notice: the pool is created once at startup, not inside route handlers. This is crucial. Creating a new pool per request defeats the entire purpose — you'd be back to square one.

The pool manages checkout and return automatically. When the request completes (or throws), the connection goes back into the pool, ready for the next lucky request.
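One lifecycle detail this pattern implies: when the process shuts down, drain the pool so in-flight queries finish and connections close cleanly. pg's pool.end() does the actual draining; the wrapper below is a sketch — gracefulShutdown is our own name, and server is assumed to be your Node HTTP server:

```javascript
// Graceful shutdown sketch: stop taking new requests, let in-flight ones
// finish, then drain the pool. Typical wiring:
//   process.on('SIGTERM', () => gracefulShutdown(server, pool).then(() => process.exit(0)));
function gracefulShutdown(server, pool) {
  return new Promise((resolve, reject) => {
    server.close((err) => {             // fires once open requests complete
      if (err) return reject(err);
      pool.end().then(resolve, reject); // pg: waits for checked-out clients, then closes all
    });
  });
}
```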


Sizing Your Pool: It's Not "More Is Better"

Here's a counterintuitive truth: a giant pool can actually make things worse.

If your database server has 4 CPU cores and you throw 200 simultaneous connections at it, those connections compete for the same cores. The database spends more time context-switching between connections than actually running queries. You get congestion, not speed.

A rough rule from the PostgreSQL community:

max_connections = (num_db_cores * 2) + num_spindle_disks

For a 4-core database with SSD, that's around 9–10 connections per app instance. If you're running 5 Node.js instances behind a load balancer, you need them to share the database's budget: 10 total / 5 instances = 2 per instance. Adjust accordingly.

This is also why PgBouncer exists — a connection pooler that sits between your app and PostgreSQL, letting hundreds of app connections share a small pool of actual database connections. For high-traffic apps, it's a must-have.
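If you go the PgBouncer route, the setup is a small ini file. A hypothetical sketch — the database name, ports, and sizes here are illustrative, not recommendations:

```ini
; pgbouncer.ini sketch — values are made up for illustration
[databases]
appdb = host=127.0.0.1 port=5432 dbname=appdb

[pgbouncer]
listen_port = 6432          ; apps connect here instead of 5432
pool_mode = transaction     ; share a server connection per transaction
max_client_conn = 500       ; app-side connections PgBouncer will accept
default_pool_size = 10      ; actual PostgreSQL connections behind them
```

Point your DATABASE_URL at port 6432 and nothing else changes. One caveat: transaction pooling gives the best reuse, but session-level features like prepared statements need extra care in that mode.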


Handling Transactions Properly

When you need a transaction (all-or-nothing operations), you need a dedicated client for the duration — not a fire-and-forget pool query:

app.post('/transfer', async (req, res) => {
  const client = await pool.connect(); // check out a specific connection

  try {
    await client.query('BEGIN');
    await client.query('UPDATE accounts SET balance = balance - $1 WHERE id = $2', [100, req.body.from]);
    await client.query('UPDATE accounts SET balance = balance + $1 WHERE id = $2', [100, req.body.to]);
    await client.query('COMMIT');
    res.json({ success: true });
  } catch (err) {
    await client.query('ROLLBACK'); // undo everything on failure
    res.status(500).json({ error: 'Transfer failed' });
  } finally {
    client.release(); // ALWAYS return the connection to the pool
  }
});

The client.release() in finally is non-negotiable. Forget it, and that connection is gone from the pool forever — a connection leak. Under sustained traffic, you'll exhaust the pool and your app will freeze while waiting for a connection that never comes back.
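If that worries you, one common pattern is to wrap the checkout in a helper so release can't be forgotten. A sketch — withTransaction is our own name, not a pg API:

```javascript
// Run `fn` inside a transaction on a dedicated client.
// COMMIT on success, ROLLBACK on error, and release() either way.
async function withTransaction(pool, fn) {
  const client = await pool.connect();
  try {
    await client.query('BEGIN');
    const result = await fn(client); // caller's queries run on this client
    await client.query('COMMIT');
    return result;
  } catch (err) {
    await client.query('ROLLBACK');
    throw err; // let the route handler decide the HTTP response
  } finally {
    client.release(); // the part you can no longer forget
  }
}
```

The transfer route above then shrinks to a single withTransaction(pool, async (client) => { ... }) call, and the leak-prone boilerplate lives in exactly one place.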


Monitoring Your Pool

A pool you can't observe is a pool you can't trust. Most pool libraries expose metrics. Log them periodically:

setInterval(() => {
  console.log({
    total: pool.totalCount,
    idle: pool.idleCount,
    waiting: pool.waitingCount,
  });
}, 30000);

If waitingCount is consistently above zero, your pool is too small or your queries are too slow. If idleCount is always at max, your pool might be oversized. Find the sweet spot.
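You can take those rules of thumb a step further and turn the counters into a verdict instead of a raw log line. A sketch — checkPoolHealth and its thresholds are our own, so tune them to your traffic:

```javascript
// Sketch: interpret the same pg Pool counters as a health verdict.
function checkPoolHealth(pool) {
  const status = {
    total: pool.totalCount,
    idle: pool.idleCount,
    waiting: pool.waitingCount,
  };
  if (status.waiting > 0) {
    return { ...status, verdict: 'pressure: requests are queueing for connections' };
  }
  if (status.total > 0 && status.idle === status.total) {
    return { ...status, verdict: 'possibly oversized: every connection is idle' };
  }
  return { ...status, verdict: 'ok' };
}
```

Wire it into the same setInterval, or expose it on a /health endpoint for your monitoring to scrape.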


The Takeaway

Connection pooling isn't an advanced optimization — it's table stakes for any production Node.js app talking to a database. Without it, you're either hammering your database with repeated handshakes or hitting connection limits at the worst possible time (peak traffic, naturally).

The good news: it takes about ten minutes to set up properly, and the payoff is immediate — lower latency, higher throughput, and a database that isn't gasping for air.

Grab a connection from the pool, run your query, return it when you're done. It's just good manners.


What's your pool size set to in production? Drop your setup in the comments — especially if you've got a spicy war story about connection leaks bringing down prod at 2am. We've all been there. 🍵

Thanks for reading!
