
🚀 GitHub Actions: Stop Deploying on Fridays (And Other CI/CD Lessons Learned the Hard Way)

5 min read

Look, we need to talk about your CI/CD pipeline. Not because it's broken — it works fine. It runs your tests, builds your Docker image, and ships your code. But "fine" is the silent killer of on-call engineers everywhere.

After watching pipelines bring down production on a Friday at 4:55 PM more times than I'd like to admit, I've compiled the lessons that nobody puts in the official docs. Buckle up.

The Friday Deploy Curse is Real

Let's start with the elephant in the room. There is an unofficial rule in software engineering, passed down from grey-bearded SREs like ancient prophecy: thou shalt not deploy on Friday.

It's not superstition. It's math. Deploys that go wrong on a Monday give you 8 business hours to fix them. Deploys that go wrong at 5 PM Friday give you a panic attack and a ruined weekend.

GitHub Actions doesn't stop you from doing this. But you can stop yourself.

# .github/workflows/deploy.yml
name: Deploy to Production

on:
  push:
    branches: [main]

jobs:
  guard-rails:
    runs-on: ubuntu-latest
    steps:
      - name: Block Friday deployments
        run: |
          # Runner clocks are UTC — shift the hour check for your team's timezone.
          DAY=$(date +%u)   # 1=Mon, 5=Fri, 6=Sat, 7=Sun
          HOUR=$(date +%H)
          if [ "$YOLO" = "true" ]; then
            echo "⚠️ YOLO_MODE is set. Skipping the Friday guard. Good luck."
            exit 0
          fi
          if [ "$DAY" -eq 5 ] && [ "$HOUR" -ge 15 ]; then
            echo "🚫 It's Friday after 3 PM. Go home. The pipeline refuses."
            echo "Override by setting the YOLO_MODE repo variable to true if you hate your weekend."
            exit 1
          fi
          echo "✅ Reasonable deploy time. Proceeding."
        env:
          YOLO: ${{ vars.YOLO_MODE }}

  deploy:
    needs: guard-rails  # without this, the guard job blocks nothing
    runs-on: ubuntu-latest
    steps:
      - run: echo "Deploying..."  # your real deploy steps go here

Yes, you can override it. But you'll have to consciously set YOLO_MODE=true in your repo variables. That moment of friction is exactly the point.

Your Secrets Are Leaking (Probably)

Here's a fun game: grep your CI logs for the word "token". I'll wait.

GitHub Actions does a solid job of masking secrets registered in ${{ secrets.* }}, but teams routinely leak credentials by:

  • Printing environment variables for "debugging" and never removing the echo
  • Passing secrets as plain build args to Docker
  • Logging full HTTP responses that happen to include auth headers
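Even for values GitHub never saw as secrets, you can opt into masking yourself. A minimal sketch, where the generated password is a stand-in for any credential fetched at runtime:

```shell
# GitHub Actions only auto-masks registered secrets. A value you compute
# or fetch mid-job must be masked explicitly before anything logs it.
DB_PASSWORD=$(openssl rand -hex 16)   # stand-in for a runtime-fetched credential
echo "::add-mask::$DB_PASSWORD"       # from here on, the runner redacts the value as ***
echo "Connecting with $DB_PASSWORD"   # the job log shows 'Connecting with ***'
```

Run the mask command before the value can appear anywhere; masking is not retroactive for lines already printed.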

The Docker one is particularly sneaky:

# ❌ Dockerfile that bakes your token INTO the image layer history
FROM node:20-alpine
ARG NPM_TOKEN
RUN echo "//registry.npmjs.org/:_authToken=${NPM_TOKEN}" > ~/.npmrc
RUN npm install
# NPM_TOKEN is now visible in `docker history`

# syntax=docker/dockerfile:1
# ✅ A separate Dockerfile using BuildKit secrets — they never touch the layer cache.
# (The syntax directive must be the very first line, before any other comments.)
FROM node:20-alpine
RUN --mount=type=secret,id=npm_token \
    echo "//registry.npmjs.org/:_authToken=$(cat /run/secrets/npm_token)" > ~/.npmrc \
    && npm install \
    && rm ~/.npmrc

Then in your Actions workflow:

- name: Build Docker image
  uses: docker/build-push-action@v5
  with:
    context: .
    secrets: |
      npm_token=${{ secrets.NPM_TOKEN }}

The secret mounts are ephemeral — they exist during the RUN step and vanish. They don't appear in docker history, they don't get cached, they don't end up in your final image. This is what "doing it right" looks like.
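You can verify that claim yourself. The sketch below shows the audit logic; in CI you would pipe the real output of `docker history --no-trunc my-app:latest` (image name is a placeholder) instead of the sample line:

```shell
# One captured layer command stands in for real `docker history` output
# so the check itself is visible and runnable anywhere.
sample_history='/bin/sh -c echo "//registry.npmjs.org/:_authToken=npm_abc" > ~/.npmrc'

if printf '%s\n' "$sample_history" | grep -q '_authToken'; then
  echo "token leaked into layer history"   # what the ARG-based Dockerfile produces
else
  echo "history is clean"                  # what the BuildKit version produces
fi
```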

Cache Invalidation: The Second Hardest Problem

Phil Karlton famously said there are only two hard things in computer science: cache invalidation and naming things. GitHub Actions manages to make both of them your problem simultaneously.

Your pipeline is probably doing this:

- uses: actions/cache@v4
  with:
    path: ~/.npm
    key: npm-${{ hashFiles('**/package-lock.json') }}

This looks reasonable. It caches npm modules keyed on your lockfile hash. But here's what happens in practice: you update one dependency, the hash changes, the entire cache is invalidated, and your 2-minute pipeline is now 8 minutes long while it reinstalls everything from scratch.

The fix is layered cache keys with fallbacks:

- uses: actions/cache@v4
  with:
    path: ~/.npm
    key: npm-${{ runner.os }}-${{ hashFiles('**/package-lock.json') }}
    restore-keys: |
      npm-${{ runner.os }}-
      npm-

The restore-keys are ordered fallbacks. On a cache miss, Actions finds the most recent partial match rather than starting cold. You still get a warm cache with 95% of your dependencies — only the changed packages need downloading. For a project with 300+ dependencies, this routinely shaves 4-5 minutes off pipeline time.
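For intuition, here is a rough sketch of how the key behaves. `hashFiles` is a SHA-256 digest over the matched files; the lockfile contents and the hard-coded `Linux` OS name below are stand-ins:

```shell
# Write a stand-in lockfile and build the cache key roughly the way the
# workflow's `key:` expression does.
printf '{"lockfileVersion": 3}\n' > package-lock.json
KEY="npm-Linux-$(sha256sum package-lock.json | cut -d' ' -f1)"
echo "$KEY"
# Change one byte of the lockfile and the hash suffix changes completely,
# but the 'npm-Linux-' prefix still matches the first restore-key.
```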

The Lessons Nobody Tells You

After years of CI/CD work, here's the stuff that doesn't make it into tutorials:

Pipeline time is money, but it's also morale. A 15-minute pipeline means 15 minutes of context switching per PR. Developers stop running tests locally because "CI will catch it anyway." Then CI becomes the bottleneck and nobody's happy. Invest in speed early.

Make failures loud and obvious. A silent pipeline that exits 0 when it should have failed is worse than no pipeline at all. Use set -euo pipefail in shell scripts. Assert your build artifacts exist before "successfully" pushing an empty Docker image.
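As a concrete sketch of that advice, with strict mode plus an explicit artifact check before any push step (the path and the fake build step are placeholders):

```shell
set -euo pipefail   # exit on errors, unset variables, and failures inside pipes

ARTIFACT="dist/app.tar.gz"
mkdir -p dist
printf 'demo contents' > "$ARTIFACT"   # stand-in for your real build step

# Refuse to continue if the artifact is missing or zero bytes.
if [ ! -s "$ARTIFACT" ]; then
  echo "Build produced no artifact at $ARTIFACT" >&2
  exit 1
fi
echo "Artifact OK: $ARTIFACT"
```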

Your pipeline is code. Treat it like code. Review CI changes in PRs. Don't commit secrets. Don't write 400-line YAML files. Extract reusable workflows. The pipeline that nobody understands is the one that fails at 2 AM.

Test your rollback before you need it. "We can always roll back" is the most dangerous phrase in DevOps. Roll back to what, exactly? Test your rollback procedure in staging before it becomes your production lifeline.
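One way to make "roll back to what, exactly?" answerable: deploy by immutable tag and record it. A hedged sketch, where the file name, tag value, and commented-out deploy command are all assumptions:

```shell
# The deploy job records the exact tag it shipped; a stand-in value here.
echo "v1.4.2" > .last-deployed-tag

# Rollback reads a concrete, previously-tested target instead of guessing.
PREVIOUS_TAG=$(cat .last-deployed-tag)
echo "Rolling back to my-app:$PREVIOUS_TAG"
# kubectl set image deployment/my-app app=my-app:"$PREVIOUS_TAG"
```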

Where to Go From Here

The best CI/CD pipeline is one your whole team understands and trusts. Start small: add the Friday deploy guard, audit your Docker builds for secret leakage, and tune your cache strategy.

If you're just getting started with GitHub Actions, the official docs are genuinely good. If you're ready to go deeper, look into reusable workflows, environments with required reviewers, and OpenID Connect for keyless AWS/GCP auth (no more long-lived access keys in your secrets).

And for the love of all things deployable — stop pushing to main at 4 PM on a Friday.

Your on-call rotation will thank you.


Got a CI/CD war story of your own? I'd love to hear it — the more absurd the better. The funniest ones are always the most educational.

Thanks for reading!
