Docker Compose: Stop Installing Postgres on Your Laptop Like a Caveman 🐳💻
Real talk: My first day at a new job in 2018, they handed me a 47-step setup document. Install Postgres. Install Redis. Install Elasticsearch. Install RabbitMQ. Configure each one. Fix port conflicts. Realize I installed the wrong Postgres version. Uninstall everything. Start over. Three days later, I finally got "Hello World" to work. 😱
Senior dev: "Why didn't you just use Docker Compose?"
Me: "The what now?"
Welcome to the day I learned that setting up development environments doesn't have to be a week-long nightmare!
What's Docker Compose Anyway? 🤔
Think of Docker Compose like a recipe for your entire development environment:
Without Docker Compose (Stone Age):
# The nightmare setup
brew install postgresql@14
brew install redis
brew install elasticsearch
brew services start postgresql@14
brew services start redis
brew services start elasticsearch
# Wait, wrong Postgres version!
brew uninstall postgresql@14
brew install postgresql@13
# Port 5432 already in use?!
# Kill mystery Postgres process
# Start over
# 3 hours later...
# Still doesn't work
# Cry
With Docker Compose (Future):
# The entire setup
docker-compose up
# That's it! You're done! 🎉
# Postgres, Redis, Elasticsearch all running
# Correct versions, correct configs
# Works on your machine AND everyone else's!
Translation: Docker Compose = One command to rule them all! 🧙‍♂️
The Onboarding Horror Story That Changed Everything 😱
After countless deployments to production, I thought I knew Docker. But using it for LOCAL development? Mind blown!
Monday, New Junior Dev Joins:
9:00 AM - "Welcome! Read the setup docs!"
10:00 AM - "Postgres won't start..."
11:00 AM - "Redis is using port 6379 but something else is too..."
12:00 PM - "My Elasticsearch crashed my laptop..."
2:00 PM - "Can you just give me your database dump?"
3:00 PM - "Now the migrations won't run..."
4:00 PM - "I think I'll just work on documentation today..."
5:00 PM - Still no code written 😭
Cost of bad onboarding:
- 1 full day wasted
- Junior dev demoralized
- My time wasted helping with setup
- Zero productivity on day one
Then I created a docker-compose.yml:
# docker-compose.yml - The entire dev environment!
version: '3.8'
services:
postgres:
image: postgres:14-alpine
environment:
POSTGRES_USER: app_user
POSTGRES_PASSWORD: dev_password
POSTGRES_DB: myapp_dev
ports:
- "5432:5432"
volumes:
- postgres_data:/var/lib/postgresql/data
redis:
image: redis:7-alpine
ports:
- "6379:6379"
elasticsearch:
image: elasticsearch:8.11.0
environment:
- discovery.type=single-node
- xpack.security.enabled=false
ports:
- "9200:9200"
volumes:
- elasticsearch_data:/usr/share/elasticsearch/data
volumes:
postgres_data:
elasticsearch_data:
New dev setup time after Docker Compose:
git clone repo
docker-compose up -d
npm run migrate
npm run dev
# ✅ Coding in 5 minutes!
Result: New devs productive in under 10 minutes! 🚀
Docker Compose 101: The Essentials 📚
Your First docker-compose.yml
Start simple - Just a database:
# docker-compose.yml
version: '3.8'
services:
# Service name (use in code as hostname)
db:
image: postgres:14-alpine # Docker image to use
environment:
POSTGRES_USER: myuser
POSTGRES_PASSWORD: mypassword
POSTGRES_DB: myapp
ports:
- "5432:5432" # host:container
volumes:
- db_data:/var/lib/postgresql/data # Persist data
volumes:
db_data: # Named volume (survives container restarts)
Use it:
# Start everything
docker-compose up
# Start in background
docker-compose up -d
# View logs
docker-compose logs -f
# Stop everything
docker-compose down
# Stop and DELETE all data
docker-compose down -v
Connect from your app:
// config.js
const config = {
database: {
host: 'localhost', // Docker exposes port 5432 to localhost
port: 5432,
user: 'myuser',
password: 'mypassword',
database: 'myapp'
}
};
Why this is brilliant:
- ✅ Same database version for entire team
- ✅ No installing Postgres on laptop
- ✅ Delete and recreate database in seconds
- ✅ Can run multiple projects without port conflicts
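To prove it's really that simple, here's a minimal sketch of that config in use - assuming the pg npm package (npm install pg); the db.js filename is just for illustration:

// db.js - a minimal sketch, assuming the `pg` npm package.
// Same values as the compose file; the container publishes
// port 5432 on localhost, so the host is just 'localhost'.
const { Pool } = require('pg');

const pool = new Pool({
  host: 'localhost',
  port: 5432,
  user: 'myuser',
  password: 'mypassword',
  database: 'myapp',
});

// Quick sanity check: ask Postgres for its version string
async function main() {
  const { rows } = await pool.query('SELECT version()');
  console.log(rows[0].version);
  await pool.end();
}

main().catch(console.error);

If it prints a version string, the container is up and your credentials match.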
Real-World Setup: My Actual Development Stack 🏗️
After 7 years of deploying Laravel and Node.js apps, here's the production-like local stack I actually use:
# docker-compose.yml - Full-stack development environment
version: '3.8'
services:
# PostgreSQL Database
postgres:
image: postgres:14-alpine
container_name: myapp_postgres
environment:
POSTGRES_USER: ${DB_USER:-postgres}
POSTGRES_PASSWORD: ${DB_PASSWORD:-postgres}
POSTGRES_DB: ${DB_NAME:-myapp_dev}
POSTGRES_INITDB_ARGS: "--encoding=UTF8 --locale=en_US.UTF-8"
ports:
- "${DB_PORT:-5432}:5432"
volumes:
- postgres_data:/var/lib/postgresql/data
- ./docker/postgres/init.sql:/docker-entrypoint-initdb.d/init.sql
healthcheck:
test: ["CMD-SHELL", "pg_isready -U postgres"]
interval: 10s
timeout: 5s
retries: 5
networks:
- app_network
# Redis Cache
redis:
image: redis:7-alpine
container_name: myapp_redis
ports:
- "${REDIS_PORT:-6379}:6379"
volumes:
- redis_data:/data
command: redis-server --appendonly yes --requirepass ${REDIS_PASSWORD:-}
healthcheck:
test: ["CMD", "redis-cli", "ping"]
interval: 10s
timeout: 3s
retries: 5
networks:
- app_network
# Elasticsearch (for search)
elasticsearch:
image: elasticsearch:8.11.0
container_name: myapp_elasticsearch
environment:
- discovery.type=single-node
- xpack.security.enabled=false
- "ES_JAVA_OPTS=-Xms512m -Xmx512m"
ports:
- "9200:9200"
- "9300:9300"
volumes:
- elasticsearch_data:/usr/share/elasticsearch/data
networks:
- app_network
# RabbitMQ (message queue)
rabbitmq:
image: rabbitmq:3-management-alpine
container_name: myapp_rabbitmq
environment:
RABBITMQ_DEFAULT_USER: ${RABBITMQ_USER:-guest}
RABBITMQ_DEFAULT_PASS: ${RABBITMQ_PASS:-guest}
ports:
- "5672:5672" # AMQP port
- "15672:15672" # Management UI
volumes:
- rabbitmq_data:/var/lib/rabbitmq
networks:
- app_network
# MinIO (S3-compatible storage for local dev)
minio:
image: minio/minio:latest
container_name: myapp_minio
environment:
MINIO_ROOT_USER: ${MINIO_USER:-minioadmin}
MINIO_ROOT_PASSWORD: ${MINIO_PASSWORD:-minioadmin}
ports:
- "9000:9000" # API
- "9001:9001" # Console
volumes:
- minio_data:/data
command: server /data --console-address ":9001"
networks:
- app_network
# MailHog (catch emails in dev)
mailhog:
image: mailhog/mailhog:latest
container_name: myapp_mailhog
ports:
- "1025:1025" # SMTP
- "8025:8025" # Web UI
networks:
- app_network
volumes:
postgres_data:
redis_data:
elasticsearch_data:
rabbitmq_data:
minio_data:
networks:
app_network:
driver: bridge
.env file (for configuration):
# .env - Not committed to Git!
DB_USER=myapp_user
DB_PASSWORD=super_secret_password
DB_NAME=myapp_dev
DB_PORT=5432
REDIS_PORT=6379
REDIS_PASSWORD=redis_secret
RABBITMQ_USER=myapp
RABBITMQ_PASS=rabbitmq_secret
MINIO_USER=minio_access_key
MINIO_PASSWORD=minio_secret_key
What you get:
- ✅ Full production-like environment
- ✅ Postgres for data
- ✅ Redis for caching
- ✅ Elasticsearch for search
- ✅ RabbitMQ for queues
- ✅ MinIO for S3-like file storage
- ✅ MailHog to catch emails (no accidental emails to customers!)
Total setup time: 3 minutes!
A pattern that saved our team: the same compose file works on Mac, Windows, and Linux. No more "works on my machine" excuses! 💪
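Before blaming my own code, I like a quick smoke test that all the services are actually listening. Here's a zero-dependency sketch using only Node's standard library - the filename and the port list are assumptions based on the compose file above:

// check-services.js - hedged sketch: verify each container's
// published port is accepting TCP connections.
const net = require('net');

const services = [
  ['Postgres', 5432],
  ['Redis', 6379],
  ['Elasticsearch', 9200],
  ['RabbitMQ', 5672],
  ['MinIO', 9000],
  ['MailHog SMTP', 1025],
];

function check(name, port) {
  return new Promise((resolve) => {
    const socket = net.connect({ host: 'localhost', port, timeout: 2000 });
    socket.once('connect', () => { socket.destroy(); resolve(`✅ ${name} (:${port})`); });
    socket.once('error', () => resolve(`❌ ${name} (:${port})`));
    socket.once('timeout', () => { socket.destroy(); resolve(`❌ ${name} (:${port}) timed out`); });
  });
}

Promise.all(services.map(([name, port]) => check(name, port)))
  .then((lines) => lines.forEach((line) => console.log(line)));

Run it with node check-services.js right after docker-compose up -d.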
Advanced Patterns I Actually Use 🎯
Pattern #1: Development vs Production Configs
The problem: Dev needs different settings than production!
Solution - Multiple compose files:
# docker-compose.yml - Base config
version: '3.8'
services:
postgres:
image: postgres:14-alpine
environment:
POSTGRES_USER: ${DB_USER}
POSTGRES_PASSWORD: ${DB_PASSWORD}
volumes:
- postgres_data:/var/lib/postgresql/data
volumes:
postgres_data:
# docker-compose.dev.yml - Development overrides
version: '3.8'
services:
postgres:
ports:
- "5432:5432" # Expose port for local dev
environment:
POSTGRES_DB: myapp_dev
command: postgres -c log_statement=all # Log all queries
# docker-compose.test.yml - Test overrides
version: '3.8'
services:
postgres:
environment:
POSTGRES_DB: myapp_test
tmpfs:
- /var/lib/postgresql/data # Use in-memory DB for speed!
Use them:
# Development
docker-compose -f docker-compose.yml -f docker-compose.dev.yml up
# Testing
docker-compose -f docker-compose.yml -f docker-compose.test.yml up
# Or use Make
# Makefile
dev:
docker-compose -f docker-compose.yml -f docker-compose.dev.yml up
test:
docker-compose -f docker-compose.yml -f docker-compose.test.yml up -d
npm test
docker-compose -f docker-compose.yml -f docker-compose.test.yml down
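The same dev/test split is worth mirroring on the app side. A minimal sketch - the variable and database names just follow the compose files above:

// config.js - sketch: pick the database by NODE_ENV, mirroring
// the docker-compose.dev.yml / docker-compose.test.yml overrides.
const env = process.env.NODE_ENV || 'development';

const databaseByEnv = {
  development: 'myapp_dev', // matches docker-compose.dev.yml
  test: 'myapp_test',       // matches docker-compose.test.yml
};

module.exports = {
  database: {
    host: process.env.DB_HOST || 'localhost',
    port: Number(process.env.DB_PORT || 5432),
    database: process.env.DB_NAME || databaseByEnv[env],
  },
};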
Pattern #2: Database Initialization Scripts
The problem: Need sample data for development!
docker/postgres/init.sql:
-- init.sql - Runs automatically on first startup
CREATE EXTENSION IF NOT EXISTS "uuid-ossp";
CREATE EXTENSION IF NOT EXISTS "pg_trgm";
-- Create tables
CREATE TABLE users (
id UUID PRIMARY KEY DEFAULT uuid_generate_v4(),
email VARCHAR(255) UNIQUE NOT NULL,
name VARCHAR(255) NOT NULL,
created_at TIMESTAMP DEFAULT NOW()
);
-- Insert sample data
INSERT INTO users (email, name) VALUES
('[email protected]', 'Admin User'),
('[email protected]', 'Regular User'),
('[email protected]', 'Test User');
-- Grant permissions
GRANT ALL PRIVILEGES ON DATABASE myapp_dev TO myapp_user;
docker-compose.yml:
services:
postgres:
image: postgres:14-alpine
volumes:
- postgres_data:/var/lib/postgresql/data
- ./docker/postgres/init.sql:/docker-entrypoint-initdb.d/init.sql
# ← Automatically runs on first startup!
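One gotcha: scripts in docker-entrypoint-initdb.d only run when the data volume is empty, so after editing init.sql you need docker-compose down -v for it to re-run. Here's a quick sketch to confirm the seed actually landed (assuming the pg package again; the filename is illustrative):

// verify-seed.js - hedged sketch: confirm init.sql ran on first startup.
const { Pool } = require('pg');

const pool = new Pool({
  host: 'localhost',
  port: 5432,
  user: 'myapp_user',
  password: process.env.DB_PASSWORD,
  database: 'myapp_dev',
});

async function main() {
  const { rows } = await pool.query('SELECT email FROM users ORDER BY email');
  console.log('Seeded users:', rows.map((r) => r.email).join(', '));
  await pool.end();
}

main().catch(console.error);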
After corrupting countless databases during development, I learned: version-control your seed data! Fresh database any time! 🌱
Pattern #3: Healthchecks and Dependencies
The problem: App starts before database is ready!
# Without healthchecks
docker-compose up
# App: "Connecting to database..."
# Postgres: "Still starting up..."
# App: "Connection failed! Crashing!"
# 😱
Solution - Healthchecks + depends_on:
version: '3.8'
services:
app:
build: .
depends_on:
postgres:
condition: service_healthy # Wait for healthy!
redis:
condition: service_healthy
environment:
DB_HOST: postgres
REDIS_HOST: redis
postgres:
image: postgres:14-alpine
healthcheck:
test: ["CMD-SHELL", "pg_isready -U postgres"]
interval: 5s
timeout: 3s
retries: 5
start_period: 10s
redis:
image: redis:7-alpine
healthcheck:
test: ["CMD", "redis-cli", "ping"]
interval: 5s
timeout: 3s
retries: 5
What happens:
- Postgres starts
- Healthcheck runs every 5s
- Once a check passes, Postgres is marked "healthy" (the 5 retries only count consecutive failures)
- App starts (after Postgres is healthy)
- No more race conditions! ✅
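Healthchecks solve this at the compose level, but I also like a belt-and-braces retry inside the app itself, for the day someone starts it outside Docker. A stdlib-only sketch (the filename and retry numbers are just illustrative):

// wait-for.js - zero-dependency sketch: retry a TCP connect
// until the service answers or we run out of attempts.
const net = require('net');

function waitFor(host, port, { retries = 20, delayMs = 1000 } = {}) {
  return new Promise((resolve, reject) => {
    const attempt = (left) => {
      const socket = net.connect({ host, port, timeout: 2000 });
      let settled = false;
      const fail = () => {
        if (settled) return;
        settled = true;
        socket.destroy();
        if (left <= 0) return reject(new Error(`${host}:${port} never came up`));
        setTimeout(() => attempt(left - 1), delayMs);
      };
      socket.once('connect', () => { settled = true; socket.destroy(); resolve(); });
      socket.once('error', fail);
      socket.once('timeout', fail);
    };
    attempt(retries);
  });
}

// Usage: block startup until Postgres accepts TCP connections
waitFor(process.env.DB_HOST || 'localhost', 5432)
  .then(() => console.log('Postgres is reachable - starting app...'))
  .catch((err) => { console.error(err.message); process.exit(1); });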
Pattern #4: Hot Reload for Development
The problem: Rebuild container every time you change code!
Solution - Volume mounts for code:
version: '3.8'
services:
app:
build:
context: .
dockerfile: Dockerfile.dev # Different Dockerfile for dev
volumes:
- .:/app # Mount source code
- /app/node_modules # Don't overwrite node_modules
environment:
NODE_ENV: development
command: npm run dev # Use nodemon or similar
ports:
- "3000:3000"
# In production, you DON'T mount code as a volume!
Dockerfile.dev:
FROM node:18-alpine
WORKDIR /app
# Install dependencies
COPY package*.json ./
RUN npm install # Include dev dependencies
# Code is mounted as volume, not copied!
CMD ["npm", "run", "dev"]
package.json:
{
"scripts": {
"dev": "nodemon --watch src src/index.js"
}
}
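For completeness, here's a minimal hypothetical src/index.js for nodemon to watch - note the 0.0.0.0 bind, because a server bound only to 127.0.0.1 inside the container is unreachable through the port mapping:

// src/index.js - minimal sketch of the entry point, stdlib only.
const http = require('http');

const server = http.createServer((req, res) => {
  res.writeHead(200, { 'Content-Type': 'application/json' });
  res.end(JSON.stringify({ status: 'ok', env: process.env.NODE_ENV }));
});

// Bind to 0.0.0.0 so the "3000:3000" port mapping works from the host
server.listen(3000, '0.0.0.0', () => {
  console.log('Dev server listening on http://localhost:3000');
});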
Now when you edit code:
- ✅ Nodemon detects change
- ✅ App restarts automatically
- ✅ No rebuilding containers!
- ✅ Instant feedback loop! 🔥
Common Docker Compose Mistakes (I Made All of These) 🪤
Mistake #1: Not Using Named Volumes
Bad:
services:
postgres:
image: postgres:14
# No volumes! Data lost when the container is removed! 😭
Good:
services:
postgres:
image: postgres:14
volumes:
- postgres_data:/var/lib/postgresql/data
volumes:
postgres_data: # Data persists!
Mistake #2: Exposing All Ports in Dev
Bad:
services:
postgres:
ports:
- "5432:5432" # Now ANYONE on network can access!
redis:
ports:
- "6379:6379" # Redis with no password exposed!
Good for dev:
# Development - OK to expose
docker-compose.dev.yml:
postgres:
ports:
- "127.0.0.1:5432:5432" # Only localhost!
# Production - DON'T expose!
docker-compose.prod.yml:
postgres:
# No ports exposed! Only accessible via Docker network!
Mistake #3: Hardcoding Passwords
Bad:
services:
postgres:
environment:
POSTGRES_PASSWORD: "super_secret_123" # In Git! š±
Good:
services:
postgres:
environment:
POSTGRES_PASSWORD: ${DB_PASSWORD} # From .env file!
# .env (NOT in Git!)
DB_PASSWORD=actual_secret_password
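Compose reads the .env file automatically for ${...} substitution. On the app side, read process.env (when running outside Docker, a package like dotenv can load the same file). A minimal sketch of the same rule in code:

// config.js - sketch: secrets come from the environment, never source code.
if (!process.env.DB_PASSWORD) {
  // Fail fast with a useful message instead of a cryptic auth error later
  throw new Error('DB_PASSWORD is not set - did you create the .env file?');
}

module.exports = {
  database: {
    host: process.env.DB_HOST || 'localhost', // non-secrets can have defaults
    password: process.env.DB_PASSWORD,        // secrets never get defaults
  },
};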
Mistake #4: Not Cleaning Up
The horror:
# 6 months later
docker system df
# Output:
# TYPE TOTAL ACTIVE SIZE
# Containers 47 3 2.5GB
# Images 132 12 45GB
# Volumes 89 5 78GB
# Total: 125GB 😱
# RIP laptop disk space
The fix:
# Stop and remove containers
docker-compose down
# Remove volumes too (WARNING: deletes data!)
docker-compose down -v
# Clean up everything unused
docker system prune -a --volumes
# Regular cleanup
docker image prune -a # Remove unused images
docker volume prune # Remove unused volumes
A Docker lesson learned the hard way: set up a weekly cleanup cron job! Otherwise Docker eats all your disk space! 💾
My Ultimate Development Workflow 🚀
Morning routine - Start dev environment:
# Makefile
.PHONY: dev test clean
# Start development environment
dev:
docker-compose -f docker-compose.yml -f docker-compose.dev.yml up -d
@echo "ā
Development environment started!"
@echo "š Dashboard: http://localhost:8025 (MailHog)"
@echo "šļø Database: localhost:5432"
@echo "š¦ Redis: localhost:6379"
# Run tests
test:
docker-compose -f docker-compose.yml -f docker-compose.test.yml up -d
npm run test
docker-compose -f docker-compose.yml -f docker-compose.test.yml down
# Clean everything
clean:
docker-compose down -v
docker system prune -f
# View logs
logs:
docker-compose logs -f
# Database shell
db-shell:
docker-compose exec postgres psql -U myapp_user -d myapp_dev
# Redis shell
redis-shell:
docker-compose exec redis redis-cli
Usage:
make dev # Start coding
make logs # Debug issues
make test # Run tests
make clean # Fresh start
After setting up CI/CD for countless projects, I learned: Makefile + Docker Compose = developer happiness! 🎉
Debugging Docker Compose Like a Pro 🔍
Check what's running:
docker-compose ps
# Output:
# Name Command State Ports
# myapp_postgres docker-entrypoint.sh postgres Up 0.0.0.0:5432->5432/tcp
# myapp_redis docker-entrypoint.sh redis Up 0.0.0.0:6379->6379/tcp
View logs:
# All services
docker-compose logs
# Specific service
docker-compose logs postgres
# Follow logs (like tail -f)
docker-compose logs -f
# Last 100 lines
docker-compose logs --tail=100
Execute commands in containers:
# PostgreSQL shell
docker-compose exec postgres psql -U myuser -d myapp
# Redis shell
docker-compose exec redis redis-cli
# Shell access
docker-compose exec postgres sh
# Run one-off commands
docker-compose run postgres psql --version
Check resource usage:
# Container stats
docker stats
# Disk usage
docker system df
The Docker Compose Cheat Sheet 📋
Essential commands:
# Start services
docker-compose up # Foreground
docker-compose up -d # Background
docker-compose up --build # Rebuild images
# Stop services
docker-compose stop # Stop (keep containers)
docker-compose down # Stop and remove containers
docker-compose down -v # Stop, remove, delete volumes
# Restart services
docker-compose restart
docker-compose restart postgres # Just one service
# View services
docker-compose ps # Running services
docker-compose top # Processes
# Logs
docker-compose logs # All logs
docker-compose logs -f # Follow
docker-compose logs postgres # One service
# Execute commands
docker-compose exec postgres bash # Shell
docker-compose run postgres env # Run one-off command
# Scaling
docker-compose up -d --scale web=3 # Run 3 instances
# Rebuild
docker-compose build # Rebuild all
docker-compose build app # Rebuild one service
The Bottom Line 💡
Docker Compose isn't just for deployment - it's the secret weapon for development environments!
What you get:
- ✅ Consistent environments - Same setup for entire team
- ✅ Fast onboarding - New devs productive in minutes
- ✅ No installation hell - No more "install Postgres" docs
- ✅ Isolated projects - Multiple projects, no port conflicts
- ✅ Production parity - Dev matches production
- ✅ Easy cleanup - docker-compose down -v resets everything
The truth about local development:
It's not "how many things can I install on my laptop?" - it's "how fast can I get coding?"
In my 7 years deploying production applications, I learned this: The best development environment is one that:
- Works in 5 minutes
- Works on every machine
- Matches production
- Can be destroyed and recreated instantly
Docker Compose gives you all four! 🎯
Your Action Plan 🚀
Right now:
- Create a docker-compose.yml in your project
- Move your database to Docker
- Run docker-compose up
- Delete Postgres from your laptop!
This week:
- Add Redis if you use it
- Add any other services you need
- Create docker-compose.dev.yml with dev overrides
- Share it with your team
This month:
- Create Makefile for common tasks
- Add database seed scripts
- Set up healthchecks
- Document in README
- Never install development dependencies globally again! 🎉
Resources Worth Your Time 📚
Official docs:
- Docker Compose Docs - Actually good docs!
- Compose File Reference - All the options
Tools I use:
- Lazydocker - TUI for Docker
- Dive - Explore Docker images
- ctop - Top for containers
Reading:
- The Twelve-Factor App - Dev/prod parity
- Docker Compose Best Practices
Real talk: The best tool is the one that gets you coding faster! Start with basic Docker Compose, add complexity as needed!
Still installing databases on your laptop? Connect with me on LinkedIn and let's talk about better development workflows!
Want to see my actual setups? Check out my GitHub - Real docker-compose.yml files from production projects!
Now go containerize your development environment! 🐳✨
P.S. If you're thinking "But I already have Postgres installed..." - you can run both! Map the container to a different host port (e.g. "5433:5432") and keep your system Postgres on 5432. No conflicts! 🎯
P.P.S. I once spent 2 days debugging why my code worked on my laptop but not on my coworker's. Turns out: different Postgres versions. Docker Compose would've prevented this. Learn from my pain! 😭