GitHub Actions. The thing that started as "oh cool, free CI minutes" and somehow became the backbone of half the world's software delivery pipelines.
You've written the YAML. You've debugged the YAML. You've wept into the YAML. And yet your pipeline still takes 18 minutes to run 40 unit tests, burns through your free minutes by Wednesday, and occasionally fails for reasons that only the GitHub Actions gods understand.
Let's fix that.
The "It Works on My Machine" Pipeline
Here's a workflow that a shocking number of production repos use in some form:
```yaml
# The sad version
name: CI
on: [push]

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Install dependencies
        run: npm install
      - name: Run tests
        run: npm test
      - name: Build
        run: npm run build
      - name: Deploy
        run: ./deploy.sh
```
It works. Barely. Every push reinstalls every package from scratch, your deploy step has no environment context, and "ubuntu-latest" is doing a lot of heavy lifting as a security posture.
This pipeline will betray you. Let me show you how to make it better.
Tip #1: Cache Like Your Free Minutes Depend On It (They Do)
The single biggest win in most CI pipelines is dependency caching. Your node_modules folder doesn't change every commit — so why are you downloading it every single time?
```yaml
name: CI
on: [push, pull_request]

jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Setup Node.js
        uses: actions/setup-node@v4
        with:
          node-version: '20'
          cache: 'npm' # That's it. One line. Seriously.
      - name: Install dependencies
        run: npm ci # ci instead of install — faster, reproducible
      - name: Run tests
        run: npm test
```
The cache: 'npm' option in actions/setup-node handles cache key generation based on your package-lock.json automatically. If the lockfile hasn't changed, it pulls from cache. A cold install that takes 90 seconds becomes a warm restore that takes 8.
Real-world result: On a mid-size Node project, this alone cut our CI time from 14 minutes to under 5.
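The built-in cache option only covers the package managers setup-node knows about. For anything else, the general-purpose actions/cache action does the same job; here's a minimal sketch for a pip cache, where the path and key naming are my assumptions — point them at whatever your tool actually caches:

```yaml
- uses: actions/cache@v4
  with:
    path: ~/.cache/pip  # wherever your tool stores its downloads
    # Key changes whenever the lockfile changes, forcing a fresh install
    key: pip-${{ runner.os }}-${{ hashFiles('requirements.txt') }}
    # Fall back to the most recent cache for this OS on a partial miss
    restore-keys: |
      pip-${{ runner.os }}-
```

The restore-keys fallback is the part people forget: without it, one changed dependency means a fully cold install instead of a mostly-warm one.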
Tip #2: Stop Running Everything on Every Push
Here's the thing nobody tells you: you don't need to run your full test suite every time someone fixes a typo in a README. Use path filters and concurrency controls to be surgical about what runs when.
```yaml
name: CI
on:
  push:
    branches: [main]
  pull_request:
    branches: [main]
    paths-ignore:
      - '**.md'
      - 'docs/**'
      - '.github/ISSUE_TEMPLATE/**'

concurrency:
  group: ${{ github.workflow }}-${{ github.ref }}
  cancel-in-progress: true # New push? Kill the old run. No mercy.

jobs:
  test:
    runs-on: ubuntu-latest
    strategy:
      matrix:
        node-version: [18, 20, 22] # Test multiple versions in parallel
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: ${{ matrix.node-version }}
          cache: 'npm'
      - run: npm ci
      - run: npm test
```
The concurrency block is a game-changer. When a developer pushes three commits in quick succession (we all do it), GitHub will cancel the two older runs and only finish the latest. You stop paying for redundant work, and the feedback loop gets faster.
The paths-ignore filter means documentation PRs don't trigger a full build. Your team writes more docs. Everyone wins.
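One caveat before you copy that concurrency block everywhere: "no mercy" is the right policy for PR branches, but cancelling a half-finished deploy on main can leave you in a worse state than running a redundant one. A common variation (sketch, not gospel) makes the cancellation conditional:

```yaml
concurrency:
  group: ${{ github.workflow }}-${{ github.ref }}
  # Cancel superseded runs on PRs and feature branches,
  # but always let runs on main finish to completion
  cancel-in-progress: ${{ github.ref != 'refs/heads/main' }}
```

Expressions are allowed in cancel-in-progress, so this evaluates per-run: true on branches, false on main.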
Tip #3: Secrets Are Not Config — Treat Them Differently
I've reviewed repos where environment URLs, feature flags, and AWS region names were stored as GitHub Secrets right next to actual API keys. That's not a security practice — that's just making things harder to manage.
Secrets = values that would cause harm if leaked (API keys, tokens, passwords)
Config = values that vary by environment but aren't sensitive (URLs, region names, feature flags)
Use GitHub Environments for config, secrets for secrets:
```yaml
jobs:
  deploy:
    runs-on: ubuntu-latest
    environment: production # Links to a GitHub Environment
    steps:
      - uses: actions/checkout@v4
      - name: Deploy to production
        env:
          # Sensitive — lives in Secrets
          AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
          AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          # Non-sensitive config — lives in Environment Variables
          AWS_REGION: ${{ vars.AWS_REGION }}
          APP_URL: ${{ vars.APP_URL }}
        run: |
          aws s3 sync ./dist s3://${{ vars.S3_BUCKET }} --region $AWS_REGION
```
GitHub Environments also give you protection rules — required reviewers before deploying to production, deployment wait timers, and a full audit log. This is the difference between "we have CI/CD" and "we have CI/CD that a compliance auditor won't immediately flag."
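Environments can also carry a deployment URL, which GitHub surfaces on the PR and in the repo's deployments view. The longer form looks like this; the URL here is a placeholder, swap in your actual app:

```yaml
environment:
  name: production
  # Shown as a "View deployment" link on the PR and deployments page
  url: https://app.example.com
```

Small thing, but it means reviewers can click straight from the merged PR to the live deployment without hunting for the address.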
The Lessons That Hurt to Learn
Pinning action versions matters. uses: actions/checkout@v4 is fine. uses: some-random-action@main is a supply chain attack waiting to happen. Pin third-party actions to a specific commit SHA, not a tag.
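SHA pinning looks like this in practice; the SHA below is illustrative, not a real release — look up the actual commit behind the tag you want:

```yaml
# Pinned to a full commit SHA. The trailing comment records the tag it
# corresponds to, so humans (and Dependabot) can still tell what version this is.
- uses: some-org/some-action@3df4ab11eba7bda6032a0b82a6bb43b11571feac # v4.0.2
```

Tags can be moved or deleted by whoever controls the repo; a full commit SHA cannot, which is the whole point.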
ubuntu-latest changes. GitHub rotates what "latest" means, and it will break your pipeline on a Tuesday when you least expect it. For anything beyond experiments, pin to ubuntu-24.04.
Workflow files are code. They need review. They need testing. They have bugs. A bad workflow that runs rm -rf in the wrong directory is just as dangerous as bad application code — possibly more so, because it runs with your deployment credentials.
Free minutes are finite. GitHub gives you 2,000 free minutes per month for private repos (public repos run free). A bloated 20-minute pipeline running 50 times a day burns 1,000 minutes a day; that's the entire month's allowance gone in two days. Caching, path filters, and concurrency controls aren't premature optimization — they're how you stay in the free tier.
The Payoff
Getting GitHub Actions right feels like giving your team a superpower. PRs get feedback in 3 minutes instead of 18. Deployments happen automatically, consistently, and with an audit trail. On-call engineers stop getting paged because someone manually deployed from a laptop with different environment variables.
The YAML is still annoying. That part doesn't get better. But at least it'll be fast, annoying YAML.
What's your GitHub Actions horror story? I know you have one. Drop it in the comments — misery loves company, and the rest of us might learn something from your suffering. And if you're just getting started, go add that cache: 'npm' line right now. Future you will be grateful.