
Nginx + Docker: Stop Exposing Your App Ports to the World Like a Rookie 🔧🐳

8 min read

Fun fact: The first production server I ever set up had Node.js listening directly on port 80. No reverse proxy. No rate limiting. No SSL termination. Just raw app, raw port, raw chaos.

It got hammered by a bot within 48 hours. 💀

After countless deployments across Laravel, Node.js, and assorted AWS chaos, I learned: Nginx sitting in front of your app isn't optional. It's the bouncer your service desperately needs.

Let me show you the setup that's been saving my deployments for years.

Why Not Just Expose Port 3000? 🤔

You've probably seen Docker tutorials that end with:

# "Congrats! Your app runs on port 3000!"
docker run -p 3000:3000 my-app

And it works! Until it doesn't.

The problems with raw port exposure:

  • No SSL — your users send passwords in plaintext
  • No rate limiting — one angry bot can take you down
  • No compression — you're sending uncompressed responses like it's 1998
  • No static file serving — Node.js/PHP serving images is embarrassingly slow
  • No request buffering — slow clients hold your app threads hostage
  • Port 3000 in browser URLs looks unprofessional (and users notice)

Nginx solves ALL of this in one config file. Let's build it.

The Architecture We're Building ⚙️

Internet
    │
    ▼
┌─────────────┐
│    Nginx    │  ← The bouncer (port 80/443)
│  Container  │
└──────┬──────┘
       │ Internal Docker network
       ▼
┌─────────────┐
│  Your App   │  ← Never exposed to the internet
│  Container  │  (port 3000/8000/9000 — internal only!)
└─────────────┘

No external access to app ports. Ever. Nginx handles everything public-facing.

The Docker Compose Setup 🐳

Here's the docker-compose.yml that I use as a base for production:

version: '3.8'

services:
  # Your application
  app:
    build: .
    # โŒ DO NOT expose ports here
    # ports:
    #   - "3000:3000"   โ† Never do this in production
    environment:
      - NODE_ENV=production
      - PORT=3000
    networks:
      - internal
    restart: unless-stopped
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:3000/health"]
      interval: 30s
      timeout: 10s
      retries: 3

  # Nginx reverse proxy
  nginx:
    image: nginx:alpine
    ports:
      - "80:80"
      - "443:443"   # Only Nginx touches these!
    volumes:
      - ./nginx/nginx.conf:/etc/nginx/nginx.conf:ro
      - ./nginx/conf.d:/etc/nginx/conf.d:ro
      - ./certbot/conf:/etc/letsencrypt:ro
      - ./certbot/www:/var/www/certbot:ro
      - ./static:/var/www/static:ro
    depends_on:
      app:
        condition: service_healthy
    networks:
      - internal
      - external
    restart: unless-stopped

networks:
  internal:   # App lives here — invisible to the internet
    driver: bridge
  external:   # Only Nginx touches this
    driver: bridge

The key insight: The app service has NO ports mapping. It only exists on the internal network. The outside world can't touch it directly. 🔒

The Nginx Config That Does the Heavy Lifting 🔧

Create nginx/conf.d/app.conf:

# Rate limiting zones — before the server block!
limit_req_zone $binary_remote_addr zone=api_limit:10m rate=10r/s;
limit_req_zone $binary_remote_addr zone=login_limit:10m rate=5r/m;

# Upstream — this is how Nginx finds your app
upstream app_backend {
    server app:3000;   # Docker resolves "app" by container name ✨
    keepalive 32;      # Reuse connections — way faster!
}

# Redirect HTTP → HTTPS
server {
    listen 80;
    server_name yourdomain.com;

    # Let's Encrypt certificate renewal
    location /.well-known/acme-challenge/ {
        root /var/www/certbot;
    }

    location / {
        return 301 https://$host$request_uri;
    }
}

# Main HTTPS server
server {
    listen 443 ssl http2;
    server_name yourdomain.com;

    # SSL certificates (from certbot)
    ssl_certificate /etc/letsencrypt/live/yourdomain.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/yourdomain.com/privkey.pem;

    # Modern TLS only — drop the ancient stuff
    ssl_protocols TLSv1.2 TLSv1.3;
    ssl_ciphers ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256;
    ssl_prefer_server_ciphers off;

    # Security headers — free protection!
    add_header Strict-Transport-Security "max-age=31536000" always;
    add_header X-Frame-Options DENY;
    add_header X-Content-Type-Options nosniff;
    add_header Referrer-Policy "strict-origin-when-cross-origin";

    # Serve static files directly — don't bother your app
    location /static/ {
        root /var/www;
        expires 1y;
        add_header Cache-Control "public, immutable";
        gzip_static on;
    }

    # API routes with rate limiting
    location /api/ {
        limit_req zone=api_limit burst=20 nodelay;
        limit_req_status 429;

        proxy_pass http://app_backend;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_cache_bypass $http_upgrade;
    }

    # Login endpoint — extra tight rate limiting
    location /api/login {
        limit_req zone=login_limit burst=3;
        limit_req_status 429;
        proxy_pass http://app_backend;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }

    # Everything else
    location / {
        proxy_pass http://app_backend;
        proxy_http_version 1.1;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;

        # Buffer slow clients — don't hold app threads!
        proxy_buffering on;
        proxy_buffer_size 4k;
        proxy_buffers 8 8k;
    }
}
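One refinement worth knowing: the proxy blocks above hardcode Connection 'upgrade', which marks every upstream request as a WebSocket upgrade and defeats the keepalive 32 connection reuse for plain HTTP. The standard fix, straight from the Nginx WebSocket proxying docs, is a map that only sends the upgrade when the client actually asked for one:

```nginx
# http context (the top of app.conf works, since conf.d is included there):
# send "Connection: upgrade" only for real WebSocket handshakes, and an
# empty Connection header otherwise so upstream keepalive can do its job.
map $http_upgrade $connection_upgrade {
    default upgrade;
    ''      '';
}

# Then, in the proxy locations, replace the hardcoded value with:
# proxy_set_header Connection $connection_upgrade;
```

If you don't serve WebSockets at all, you can drop the Upgrade/Connection headers from the proxy blocks entirely.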

Getting SSL with Let's Encrypt 🔐

Docker-ize certbot so you never manually renew certificates again:

# Add to docker-compose.yml
  certbot:
    image: certbot/certbot
    volumes:
      - ./certbot/conf:/etc/letsencrypt
      - ./certbot/www:/var/www/certbot
    entrypoint: "/bin/sh -c 'trap exit TERM; while :; do certbot renew; sleep 12h & wait $${!}; done'"

First-time certificate:

# Get your first certificate
docker compose run --rm certbot certonly \
  --webroot \
  --webroot-path=/var/www/certbot \
  --email [email protected] \
  --agree-tos \
  --no-eff-email \
  -d yourdomain.com

# Reload Nginx to pick up the cert
docker compose exec nginx nginx -s reload

After this, the certbot container attempts a renewal every 12 hours; certbot itself only renews certificates that are within 30 days of expiry. You never touch it again. ✨
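One caveat: Nginx loads certificates at startup, so a freshly renewed cert sits unused until the next reload. A common workaround (a sketch, not the only way to do it) is to give the nginx service its own periodic reload loop in docker-compose.yml:

```yaml
  # In the nginx service: reload every 6h so freshly renewed
  # certificates get picked up without any manual steps.
  command: "/bin/sh -c 'while :; do sleep 6h & wait $${!}; nginx -s reload; done & nginx -g \"daemon off;\"'"
```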

The Compression Win 📦

Before I added Nginx compression, my API responses were embarrassingly fat. Add this to nginx.conf:

http {
    # Gzip compression
    gzip on;
    gzip_vary on;
    gzip_min_length 1024;
    gzip_types
        text/plain
        text/css
        text/javascript
        application/json
        application/javascript
        application/x-javascript
        image/svg+xml;

    # Hide Nginx version — don't advertise your attack surface
    server_tokens off;

    # Timeouts — don't let slow clients linger forever
    keepalive_timeout 65;
    client_body_timeout 12;
    client_header_timeout 12;
    send_timeout 10;

    include /etc/nginx/conf.d/*.conf;
}

Before: a 120KB JSON response, uncompressed. After: 18KB with gzip. 85% smaller. Same data. 🤯
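You can sanity-check that math yourself. The snippet below just replays the example figures from this post (not live measurements); the commented curl lines, against a hypothetical domain, show how to measure real transfer sizes with and without gzip:

```shell
#!/bin/sh
# Example figures from this post, in KB.
before=120
after=18
echo "$(( (before - after) * 100 / before ))% smaller"

# Against a real server, compare download sizes (hypothetical URL):
#   curl -so /dev/null -w '%{size_download}\n' https://yourdomain.com/api/data
#   curl -so /dev/null -H 'Accept-Encoding: gzip' -w '%{size_download}\n' https://yourdomain.com/api/data
```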

Common Pitfalls I Learned the Hard Way 🚨

Pitfall #1: Trusting X-Forwarded-For Blindly

Your app sees X-Forwarded-For as the client IP. But if a bad actor adds a fake header, your logs lie to you and your rate limiting breaks.

# In your server block, ONLY trust the proxy you control:
real_ip_header X-Forwarded-For;
real_ip_recursive on;
set_real_ip_from 172.16.0.0/12;  # Your Docker network range

Pitfall #2: Forgetting proxy_set_header X-Forwarded-Proto

Laravel and Express check this header to generate correct HTTPS URLs. Without it, every link your app generates starts with http:// — even on HTTPS. Your users get mixed content warnings. I spent 3 hours on this once.

proxy_set_header X-Forwarded-Proto $scheme;

One line. So much pain saved.

Pitfall #3: Nginx Caching Your 502s

If you've enabled proxy caching, Nginx will happily cache error responses along with the good ones. When your app restarts during a deploy, that can mean serving stale 502s for a while.

# Don't cache errors
proxy_cache_valid any 0s;
proxy_no_cache $http_pragma $http_authorization;

Pitfall #4: Large File Uploads Timing Out

A CI/CD pipeline I set up for a client had a file upload feature. Deployments kept breaking uploads. Turns out Nginx has default size limits:

# Increase upload limit
client_max_body_size 50M;

# Increase proxy timeout for large uploads
proxy_connect_timeout 60s;
proxy_send_timeout 300s;
proxy_read_timeout 300s;

Testing Your Config Before Deploying 🧪

Docker taught me the hard way to always test Nginx config before reloading in production:

# Test config syntax (does NOT reload)
docker compose exec nginx nginx -t

# Output you want:
# nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
# nginx: configuration file /etc/nginx/nginx.conf test is successful

# THEN reload gracefully
docker compose exec nginx nginx -s reload

Never docker compose restart nginx. That's a hard stop, dropping active connections. Always nginx -s reload.
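Those two steps collapse nicely into one guard. Here's a tiny wrapper (a hypothetical helper, not official tooling) that refuses to reload when the config test fails:

```shell
#!/bin/sh
# Reload Nginx only if the config test passes; otherwise leave the
# currently-running (known-good) config untouched.
safe_reload() {
  if docker compose exec nginx nginx -t; then
    docker compose exec nginx nginx -s reload && echo "reloaded"
  else
    echo "config test failed; NOT reloading" >&2
    return 1
  fi
}

# Usage: run safe_reload from the directory with docker-compose.yml.
```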

Before vs After 📊

Metric                 | Raw Port 3000               | Nginx Reverse Proxy
SSL                    | ❌ Manual nightmare          | ✅ Auto-renews
Rate limiting          | ❌ None                      | ✅ Per-route limits
Compression            | ❌ None                      | ✅ 60-85% smaller
Static files           | ❌ App serves them (slow)    | ✅ Nginx serves them (fast)
Security headers       | ❌ You forget them           | ✅ One config, all routes
Attack surface         | ❌ App port exposed          | ✅ Only 80/443 visible
Slow client protection | ❌ Threads held hostage      | ✅ Buffered by Nginx

TL;DR — The Pattern That Works 💡

  1. App containers → internal Docker network only, no public ports
  2. Nginx container → only 80/443 exposed, sits in front of everything
  3. Certbot container → handles SSL renewals automatically
  4. Test config with nginx -t before every reload
  5. Rate limit aggressively — especially login and API endpoints

After years of deploying Laravel APIs and Node.js services to AWS, this is the setup that stops the 3 AM alerts. Your app doesn't need to worry about rate limiting, SSL, compression, or slow clients. Nginx handles all of it, and your app just needs to return JSON.

The first time I deployed this properly and watched Nginx absorb a bot attack that would have flattened the raw app, I understood why every serious production setup has a reverse proxy.

Your app deserves a bouncer. Give it one. 🚪


Got questions, or an Nginx config that's been haunting you? Find me on LinkedIn or GitHub.

Now go put Nginx in front of that Node app running directly on port 80. You know who you are. 🔧