Docker Networking: Stop Guessing Why Your Containers Can't Talk to Each Other
True story: It was a Monday morning. I had two containers running side by side on the same machine. My API container kept screaming ECONNREFUSED 127.0.0.1:5432. The database container was right there. Healthy. Happy. Completely unreachable.
I did what every developer does: I Googled. I restarted. I cursed. I restarted again.
Three hours later I discovered the problem: they were on different Docker networks, and `localhost` inside a container means that container's localhost, not your machine's.
Welcome to Docker networking. It's not complicated, but nobody explains it clearly, and Docker's defaults will bite you if you don't understand what's actually happening.
What Even Is a Docker Network?
Think of Docker networks like office buildings. Each container is an employee. Without a shared network (building), they live in separate offices with no hallways connecting them.
Docker has four network types you'll actually care about:
| Network | Use Case | Isolation |
|---|---|---|
| `bridge` (default) | Single-host container communication | Containers isolated from each other by default |
| `host` | Maximum performance, no isolation | Container uses host's network directly |
| `none` | Zero network access | Fully isolated |
| `overlay` | Multi-host (Swarm/Kubernetes) | Cross-host communication |
The gotcha that burned me: Every container joins the default bridge network unless you say otherwise. But containers on the default bridge cannot talk to each other by name, only by IP. And IP addresses change on every restart.
The Default Bridge Network: The Trap
Here's what happens when you run containers without specifying a network:
```shell
# Start a database
docker run -d --name mydb postgres:15

# Start an API
docker run -d --name myapi node:18-alpine

# Try to connect from the API to the DB...
# postgres://mydb:5432/myapp --> FAILS!
# Why? No hostname resolution on the default bridge!
```
Why it fails:
- Both containers ARE on the `bridge` network
- But the default bridge doesn't do automatic DNS
- `mydb` as a hostname? Docker doesn't know what that is
- You'd need the actual IP: `172.17.0.2` (which changes!)
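You can watch the fragility for yourself. A quick check using Docker's inspect templates (this assumes the `mydb` container from above is running):

```shell
# Print the IP Docker assigned to mydb on the default bridge
docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' mydb

# Restart it and check again -- the address can change
docker restart mydb
docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' mydb
```

Any config that hardcodes the first IP silently breaks after the restart.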
Docker taught me the hard way: never rely on the default bridge network for container-to-container communication.
User-Defined Networks: The Right Way ✅
Create a custom network and your containers get automatic DNS resolution:
```shell
# Create a custom network
docker network create myapp-network

# Now start containers ON that network
docker run -d \
  --name postgres \
  --network myapp-network \
  -e POSTGRES_PASSWORD=secret \
  postgres:15

docker run -d \
  --name api \
  --network myapp-network \
  -e DATABASE_URL="postgres://postgres:secret@postgres:5432/myapp" \
  myapi:latest

# Now "postgres" resolves as a hostname automatically!
# No IP addresses. No guessing. It just works.
```
What changed:
- Custom network = built-in DNS
- Container name becomes its hostname
- IP address? Docker doesn't care, neither do you
- Containers on different networks? Still isolated ✅
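You don't have to recreate a container to fix its network, either. A container can be attached to (or detached from) a network while it runs; this sketch assumes the `myapp-network` and `mydb` names from earlier:

```shell
# Attach an already-running container to the custom network
docker network connect myapp-network mydb

# mydb is now resolvable by name from other containers on myapp-network

# Detach it again if you need to
docker network disconnect myapp-network mydb
```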
Docker Compose: Networking Done Right
Here's the thing about Docker Compose: it creates a user-defined network automatically for your entire stack.
```yaml
# docker-compose.yml
services:
  postgres:
    image: postgres:15
    environment:
      POSTGRES_PASSWORD: secret
    # No network config needed! Compose handles it.

  redis:
    image: redis:7-alpine

  api:
    build: ./api
    environment:
      DATABASE_URL: "postgres://postgres:secret@postgres:5432/myapp"
      REDIS_URL: "redis://redis:6379"
    depends_on:
      - postgres
      - redis
    ports:
      - "3000:3000"

  nginx:
    image: nginx:alpine
    volumes:
      - ./nginx.conf:/etc/nginx/nginx.conf
    ports:
      - "80:80"
    depends_on:
      - api
```
```shell
docker compose up -d

# All four containers are on the myapp_default network
# "postgres", "redis", "api", "nginx" are all valid hostnames
# nginx can reach api; api can reach postgres and redis
# From outside? Only ports 80 and 3000 are exposed
```
A CI/CD pipeline that saved our team: Defining the entire stack in Docker Compose means dev, staging, and prod environments are identical. No more "but it works in dev!"
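If you'd rather Compose use a specific network name instead of the generated `<project>_default`, you can override the default network. A minimal sketch (the `myapp-network` name is just an example):

```yaml
# docker-compose.yml (excerpt)
networks:
  default:
    name: myapp-network
```

This is handy when containers started with plain `docker run --network myapp-network` need to join the same network as your Compose stack.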
Network Isolation: Defense in Depth
Here's a pattern I use in production: separate frontend-facing services from backend-only services.
```yaml
# docker-compose.yml (production-style)
services:
  nginx:
    image: nginx:alpine
    networks:
      - frontend
    ports:
      - "80:80"
      - "443:443"

  api:
    build: ./api
    networks:
      - frontend  # nginx can reach api
      - backend   # api can reach the database

  postgres:
    image: postgres:15
    networks:
      - backend   # ONLY accessible from the backend network
    # No ports published to the host!

  redis:
    image: redis:7-alpine
    networks:
      - backend   # ONLY accessible from the backend network

networks:
  frontend:
    driver: bridge
  backend:
    driver: bridge
    internal: true  # No internet access from this network!
```
Why internal: true on the backend network?
- Your database container can't make outbound HTTP requests
- If an attacker compromises your app, they can't call home from your DB network
- Defense in depth ā even if your app layer is breached, the data layer has a wall around it
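You can verify the wall actually exists. A throwaway experiment with an internal-only network (assumes the `alpine` image is available and outbound ping normally works on your host):

```shell
# Create an internal-only network
docker network create --internal testnet

# Outbound traffic is blocked: this ping should FAIL
docker run --rm --network testnet alpine ping -c 1 -W 2 8.8.8.8

# Clean up
docker network rm testnet
```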
After countless deployments and one memorable security audit, I realized: network isolation isn't paranoia; it's basic hygiene.
Connecting to Containers from Your Host Machine
This is where beginners get confused. localhost inside a container is the container, not your machine.
```shell
# Your host machine wants to connect to postgres running in Docker

# WRONG: postgres://localhost:5432/myapp -- won't work unless the port is published!

# RIGHT: publish the port when running the container
docker run -d \
  --name postgres \
  -p 5432:5432 \
  postgres:15
# -p 5432:5432 means host:container

# NOW localhost:5432 works from your host machine
```
The `-p` flag: what it actually means:

```shell
-p 8080:3000
#  ^    ^
#  |    `--- Container port (what the app listens on inside the container)
#  `-------- Host port (what you access from outside)

# So: curl http://localhost:8080 --> hits the container's port 3000
```
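There's a useful middle ground: bind the published port to the host's loopback interface only, so your local tools can reach it but other machines on the network can't:

```shell
# Publish 5432 on 127.0.0.1 only -- handy for local dev tools
docker run -d \
  --name postgres \
  -p 127.0.0.1:5432:5432 \
  postgres:15

# Reachable from your machine:   psql -h 127.0.0.1 -p 5432
# Not reachable from elsewhere:  the port is never bound on external interfaces
```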
Common mistake I've seen (and made):
```shell
# Exposing EVERYTHING to debug, then forgetting to remove it in production
docker run -d \
  -p 5432:5432 \
  -p 6379:6379 \
  postgres:15
# 5432: database directly accessible from the internet!
# 6379: Redis port directly accessible from the internet!
```
In production: only expose what users actually need. Let nginx/load balancer handle the rest. Everything else stays internal.
Debugging Network Issues (The Tools That Save My Sanity)
1. Inspect a network:

```shell
docker network inspect myapp-network
# Shows: connected containers, their IPs, network config
```

2. List all networks:

```shell
docker network ls
# NETWORK ID   NAME            DRIVER    SCOPE
# abc123       bridge          bridge    local
# def456       myapp_default   bridge    local
# ghi789       host            host      local
```

3. Test connectivity from inside a container:

```shell
# Get a shell inside your API container
docker exec -it myapi sh

# Can you reach the database?
ping postgres         # should resolve
nc -zv postgres 5432  # test the TCP connection

# Can you resolve DNS?
nslookup postgres
```

4. See what's exposed:

```shell
docker port myapi
# 3000/tcp -> 0.0.0.0:3000
```

5. The "why can't my containers see each other" diagnostic:

```shell
# Check which network each container is on
docker inspect myapi | grep -A 20 Networks
docker inspect mydb | grep -A 20 Networks

# If they're on different networks: FOUND YOUR BUG!
docker network connect myapp-network mydb
```
Real-World Production Pattern: Multi-Service App
Here's how I set up a Node.js + PostgreSQL + Redis stack for a real project:
```yaml
# docker-compose.prod.yml
services:
  nginx:
    image: nginx:alpine
    volumes:
      - ./nginx/nginx.conf:/etc/nginx/nginx.conf:ro
      - ./nginx/certs:/etc/nginx/certs:ro
    ports:
      - "80:80"
      - "443:443"
    networks:
      - public
    restart: unless-stopped

  api:
    image: myapp-api:${VERSION:-latest}
    environment:
      NODE_ENV: production
      DATABASE_URL: postgres://api_user:${DB_PASSWORD}@postgres:5432/myapp
      REDIS_URL: redis://redis:6379
    networks:
      - public    # nginx -> api
      - internal  # api -> postgres, redis
    restart: unless-stopped
    healthcheck:
      test: ["CMD", "node", "-e", "require('http').get('http://localhost:3000/health')"]
      interval: 30s
      timeout: 5s
      retries: 3

  postgres:
    image: postgres:15-alpine
    environment:
      POSTGRES_DB: myapp
      POSTGRES_USER: api_user
      POSTGRES_PASSWORD: ${DB_PASSWORD}
    volumes:
      - postgres_data:/var/lib/postgresql/data
    networks:
      - internal  # ONLY internal! Never exposed to the internet
    restart: unless-stopped

  redis:
    image: redis:7-alpine
    command: redis-server --requirepass ${REDIS_PASSWORD}
    volumes:
      - redis_data:/data
    networks:
      - internal  # ONLY internal!
    restart: unless-stopped

networks:
  public:
    driver: bridge
  internal:
    driver: bridge
    internal: true  # No outbound internet access

volumes:
  postgres_data:
  redis_data:
```
What this gets you:
- ✅ nginx handles SSL and proxies to api
- ✅ api can reach postgres and redis (internal network)
- ✅ postgres and redis are unreachable from the internet
- ✅ No sensitive ports published to the host
- ✅ Restart policies handle crashes
- ✅ Health checks mean a broken API shows up as unhealthy in `docker ps` instead of in user reports
The Common Pitfalls That Will Ruin Your Day
Pitfall #1: Connecting from inside a container to the host machine
```shell
# Need to reach a service on your HOST from inside Docker?
# "localhost" from inside a container = the container, not the host!

# Solution: use the special hostname
# host.docker.internal  -> works on Mac and Windows
# 172.17.0.1            -> default Docker bridge gateway on Linux

# Example DB URL from inside a container to the host's postgres:
DATABASE_URL=postgres://user:pass@172.17.0.1:5432/mydb
```
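On Linux, `host.docker.internal` isn't defined by default, but Docker 20.10+ lets you map it yourself with the special `host-gateway` value (the `myapi:latest` image here is just a placeholder):

```shell
# Make host.docker.internal resolve inside the container on Linux
docker run -d \
  --add-host host.docker.internal:host-gateway \
  myapi:latest
```

In Compose, the equivalent is an `extra_hosts` entry on the service: `extra_hosts: ["host.docker.internal:host-gateway"]`.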
Pitfall #2: Forgetting `depends_on` doesn't mean "wait until healthy"

```yaml
# Bad: depends_on only waits for the container to START, not to be ready
depends_on:
  - postgres

# Good: wait for actual health (requires a healthcheck on postgres)
depends_on:
  postgres:
    condition: service_healthy
```
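`condition: service_healthy` only works if the dependency actually defines a healthcheck. A minimal one for postgres, using the `pg_isready` tool shipped in the image (user and password values are just examples):

```yaml
postgres:
  image: postgres:15
  environment:
    POSTGRES_PASSWORD: secret
  healthcheck:
    test: ["CMD-SHELL", "pg_isready -U postgres"]
    interval: 5s
    timeout: 3s
    retries: 5
```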
Pitfall #3: Hardcoding container IPs
```shell
# BAD: IP addresses change on every restart!
DATABASE_URL=postgres://172.18.0.3:5432/myapp

# GOOD: use service names (Docker DNS handles the rest)
DATABASE_URL=postgres://postgres:5432/myapp
```
Pitfall #4: Publishing ports you don't need
```yaml
# BAD: exposes postgres directly to the host (and the internet!)
postgres:
  ports:
    - "5432:5432"

# GOOD: keep it internal; only publish what users access
postgres:
  # No ports section = not accessible from the host
  networks:
    - internal
```
TL;DR: The Mental Model
Docker networking clicks when you think of it like this:
- Each container is an island: its `localhost` is its own
- Networks are bridges between islands: custom networks give automatic DNS
- Port mapping (`-p`) is a ferry from the host to an island
- Use named networks: never rely on the default bridge for container-to-container comms
- Isolate sensitive services: database and cache on internal-only networks
- Use service names, not IPs: Docker DNS is your friend
After countless deployments across Node.js, Laravel, and AWS environments, Docker networking is the one thing I wish someone had explained clearly on day one. The 3 hours I lost to "ECONNREFUSED" would have been spent shipping features instead.
Now go rebuild your docker-compose.yml with proper network isolation. Your future self (and your security team) will thank you.
Still debugging container networking at 2 AM? Hit me up on LinkedIn. I've probably made the same mistake.
Working code is in production, not in my notes: check GitHub for real Docker Compose files from real projects.
Ship more. Debug less. Isolate everything.