Helm Charts: Stop Copy-Pasting Kubernetes YAML Like It's 2019 📦
Confession time: For the first year of my Kubernetes journey, I managed environments by literally duplicating YAML files.
k8s/
├── dev/
│   ├── deployment.yaml
│   ├── service.yaml
│   └── ingress.yaml
├── staging/
│   ├── deployment.yaml   ← 95% identical to dev
│   ├── service.yaml      ← 98% identical to dev
│   └── ingress.yaml      ← 90% identical to dev
└── prod/
    ├── deployment.yaml   ← 95% identical to staging
    ├── service.yaml      ← 98% identical to staging
    └── ingress.yaml      ← 90% identical to staging
I thought this was fine. "It's just YAML. How bad can it be?"
Famous last words.
The incident: A teammate updated the health check path in dev/deployment.yaml and staging/deployment.yaml but forgot prod/deployment.yaml. We deployed to prod. Health checks started failing silently. The load balancer kept routing traffic to the broken pod. Users got errors for 4 hours before anyone noticed. All because of a missed copy-paste in a YAML file.
Enter Helm: the package manager for Kubernetes that should've been in my toolkit from day one.
What Is Helm, Really? 🤔
Think of Helm like npm or apt, but for Kubernetes applications. Instead of manually managing 15 YAML files per environment, you define your app once as a chart (a templated package) and deploy it with different values for each environment.
The magic: one source of truth, infinite configurations.
# Without Helm: manually editing 15 files per environment 😫
kubectl apply -f k8s/prod/deployment.yaml
kubectl apply -f k8s/prod/service.yaml
kubectl apply -f k8s/prod/ingress.yaml
# ...and 12 more files

# With Helm: one command, all environments ✅
helm upgrade --install my-app ./chart \
  --namespace production \
  --values values.prod.yaml
Your First Helm Chart in 10 Minutes ⚡
Let's turn a basic Node.js API deployment into a proper Helm chart.
Create the chart structure:
helm create my-api
This generates:
my-api/
├── Chart.yaml            # Chart metadata
├── values.yaml           # Default config values
└── templates/
    ├── deployment.yaml   # Your deployment template
    ├── service.yaml      # Your service template
    ├── ingress.yaml      # Your ingress template
    └── _helpers.tpl      # Reusable template snippets
The values.yaml file, your single source of truth:
# values.yaml (defaults that work for dev)
replicaCount: 1

image:
  repository: my-registry/my-api
  tag: "latest"
  pullPolicy: IfNotPresent

service:
  type: ClusterIP
  port: 3000

resources:
  requests:
    cpu: "100m"
    memory: "128Mi"
  limits:
    cpu: "500m"
    memory: "512Mi"

env:
  LOG_LEVEL: "debug"
  DATABASE_URL: ""  # Set per environment!

autoscaling:
  enabled: false
  minReplicas: 1
  maxReplicas: 5
  targetCPUUtilizationPercentage: 70
The deployment.yaml template, written ONCE:
# templates/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ include "my-api.fullname" . }}
  labels:
    {{- include "my-api.labels" . | nindent 4 }}
spec:
  {{- if not .Values.autoscaling.enabled }}
  replicas: {{ .Values.replicaCount }}
  {{- end }}
  selector:
    matchLabels:
      {{- include "my-api.selectorLabels" . | nindent 6 }}
  template:
    metadata:
      labels:
        {{- include "my-api.selectorLabels" . | nindent 8 }}
    spec:
      containers:
        - name: {{ .Chart.Name }}
          image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
          imagePullPolicy: {{ .Values.image.pullPolicy }}
          ports:
            - containerPort: {{ .Values.service.port }}
          env:
            {{- range $key, $val := .Values.env }}
            - name: {{ $key }}
              value: {{ $val | quote }}
            {{- end }}
          resources:
            {{- toYaml .Values.resources | nindent 12 }}
          livenessProbe:
            httpGet:
              path: /health
              port: {{ .Values.service.port }}
            initialDelaySeconds: 10
            periodSeconds: 15
          readinessProbe:
            httpGet:
              path: /ready
              port: {{ .Values.service.port }}
            initialDelaySeconds: 5
            periodSeconds: 10
Now, instead of 3 copies of this file, you have one template that's always in sync.
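To make the "one template, many environments" idea concrete, here's a toy Python sketch of the substitution step. This is NOT Helm's engine (Helm renders with Go's text/template and far richer features like include and nindent); the dict and placeholder syntax below just illustrate how values flow into a template:

```python
import re

# Toy illustration only: a values dict plus a template with
# {{ .Values.some.path }} placeholders yields a rendered manifest.
values = {
    "replicaCount": 3,
    "image": {"repository": "my-registry/my-api", "tag": "v2.4.1"},
}

template = (
    "replicas: {{ .Values.replicaCount }}\n"
    "image: {{ .Values.image.repository }}:{{ .Values.image.tag }}"
)

def render(tpl: str, vals: dict) -> str:
    """Replace {{ .Values.a.b }} placeholders with nested dict lookups."""
    def lookup(match: re.Match) -> str:
        node = vals
        for key in match.group(1).split("."):
            node = node[key]  # walk one level per dotted segment
        return str(node)
    return re.sub(r"\{\{\s*\.Values\.([\w.]+)\s*\}\}", lookup, tpl)

print(render(template, values))
# replicas: 3
# image: my-registry/my-api:v2.4.1
```

Swap in a different values dict and the same template produces a different manifest; that is the whole trick.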
Environment-Specific Values 🌍
Here's where Helm gets beautiful. Override only what's different per environment:
# values.prod.yaml: ONLY what differs from the defaults
replicaCount: 3

image:
  tag: "v2.4.1"  # Pin exact version in prod!

resources:
  requests:
    cpu: "500m"
    memory: "512Mi"
  limits:
    cpu: "2000m"
    memory: "2Gi"

env:
  LOG_LEVEL: "warn"  # Less noise in prod
  DATABASE_URL: "postgresql://prod-db:5432/myapp"

autoscaling:
  enabled: true
  minReplicas: 3
  maxReplicas: 20
  targetCPUUtilizationPercentage: 60
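This works because Helm deep-merges the chart's values.yaml with every --values file and --set flag you pass, and later sources win. A rough Python sketch of that precedence (simplified; Helm's real merge is implemented in Go inside the Helm binary):

```python
# Sketch of Helm's values precedence: chart defaults < -f files < --set.
def deep_merge(base: dict, override: dict) -> dict:
    """Return base with override laid on top, recursing into nested dicts."""
    merged = dict(base)
    for key, val in override.items():
        if isinstance(val, dict) and isinstance(merged.get(key), dict):
            merged[key] = deep_merge(merged[key], val)
        else:
            merged[key] = val
    return merged

defaults = {  # from values.yaml
    "replicaCount": 1,
    "image": {"tag": "latest", "pullPolicy": "IfNotPresent"},
    "env": {"LOG_LEVEL": "debug"},
}
prod = {  # from values.prod.yaml
    "replicaCount": 3,
    "image": {"tag": "v2.4.1"},
    "env": {"LOG_LEVEL": "warn"},
}
cli_set = {"image": {"tag": "abc123"}}  # --set image.tag=abc123 from CI

effective = deep_merge(deep_merge(defaults, prod), cli_set)
print(effective["replicaCount"])         # 3 (from values.prod.yaml)
print(effective["image"]["tag"])         # abc123 (--set wins last)
print(effective["image"]["pullPolicy"])  # IfNotPresent (untouched default)
```

The practical consequence: keep values.prod.yaml tiny, because anything you don't mention falls through to the defaults.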
Deploy to each environment:
# Dev (uses defaults from values.yaml)
helm upgrade --install my-api ./my-api \
  --namespace dev --create-namespace

# Staging
helm upgrade --install my-api ./my-api \
  --namespace staging \
  --values values.staging.yaml

# Production
helm upgrade --install my-api ./my-api \
  --namespace production \
  --values values.prod.yaml \
  --set image.tag=$CI_COMMIT_SHA  # Override the tag from CI
The --set flag is great for CI/CD: inject the Git commit SHA as the image tag at deploy time without touching any file.
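Conceptually, a --set expression is just a dotted path turned into a nested override that lands on top of all the values files. A simplified sketch (Helm's real parser also handles lists, commas, and escaping, which this ignores):

```python
# Simplified model of what --set key.path=value produces.
def parse_set(expr: str) -> dict:
    """Turn 'image.tag=abc' into {'image': {'tag': 'abc'}}."""
    path, _, value = expr.partition("=")
    result: dict = {}
    node = result
    keys = path.split(".")
    for key in keys[:-1]:
        node = node.setdefault(key, {})  # create intermediate dicts
    node[keys[-1]] = value
    return result

print(parse_set("image.tag=4f2a91c"))
# {'image': {'tag': '4f2a91c'}}
```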
The Killer Feature: Versioned Releases & Rollbacks ⏪
This is the moment I fully converted to Helm. We had a bad deploy in production: new code introduced a memory leak that wasn't caught in staging.
# See what's deployed
helm list -n production
# NAME    NAMESPACE   REVISION  STATUS    CHART         APP VERSION
# my-api  production  14        deployed  my-api-1.2.0  v2.4.1

# Immediately roll back to the previous release
helm rollback my-api 13 -n production
# Rollback was a success! 🎉

# See the full release history
helm history my-api -n production
# REVISION  STATUS      DESCRIPTION
# 13        superseded  Upgrade complete
# 14        superseded  Upgrade complete (broken!)
# 15        deployed    Rollback to revision 13
Without Helm, a rollback meant hunting down the previous YAML, remembering what image tag was deployed, manually editing files, and praying. With Helm, it's one command that takes 10 seconds.
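One detail worth internalizing: rollback doesn't delete anything. It appends a new revision that reuses an old revision's config, which is why the history output shows revision 15 described as "Rollback to revision 13". A simplified sketch of that bookkeeping (in reality Helm stores release state in Secrets inside the cluster, not a Python list):

```python
# Toy model of Helm's release-revision ledger.
history = [
    {"revision": 13, "config": {"image.tag": "v2.4.0"}, "status": "superseded"},
    {"revision": 14, "config": {"image.tag": "v2.4.1"}, "status": "deployed"},  # broken!
]

def rollback(history: list, target_revision: int) -> None:
    """Append a new revision that reuses the target revision's config."""
    target = next(r for r in history if r["revision"] == target_revision)
    for rev in history:
        rev["status"] = "superseded"  # old revisions stay in the ledger
    history.append({
        "revision": history[-1]["revision"] + 1,
        "config": dict(target["config"]),  # the old, known-good config
        "status": "deployed",
    })

rollback(history, 13)
print(history[-1])
# {'revision': 15, 'config': {'image.tag': 'v2.4.0'}, 'status': 'deployed'}
```

Because the ledger only grows, you can roll forward again (or audit exactly what was running at any revision) at any time.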
The rollback that saved our on-call rotation: What would've been a 30-minute scramble turned into a 10-second fix. Our SRE team bought me coffee for a week. ☕
Real-World Lessons Learned 📚
Lesson 1: Use helm diff before upgrading
Install the helm-diff plugin and you'll never deploy a surprise again:
helm plugin install https://github.com/databus23/helm-diff

# See EXACTLY what will change before deploying
helm diff upgrade my-api ./my-api \
  --namespace production \
  --values values.prod.yaml \
  --set image.tag=v2.5.0
# Output shows a git-diff-style view of every change ✅
I made this a required step in our CI/CD pipeline. If the diff touches more than expected, the deploy is blocked for human review.
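The gate itself was a small script, not a built-in helm-diff feature. A sketch of the idea, with a hypothetical threshold (a production version would also exclude diff headers like +++/--- from the count):

```python
# Block a deploy when the rendered diff touches more lines than expected.
def diff_too_large(diff_output: str, max_changed_lines: int = 20) -> bool:
    """Count added/removed lines in a diff and compare to a threshold."""
    changed = [
        line for line in diff_output.splitlines()
        if line.startswith("+") or line.startswith("-")
    ]
    return len(changed) > max_changed_lines

sample_diff = (
    "+  image: my-api:v2.5.0\n"
    "-  image: my-api:v2.4.1\n"
    "   replicas: 3"
)
print(diff_too_large(sample_diff))  # False: only 2 changed lines
```

In CI this runs between helm diff and helm upgrade; a True result fails the job and pings a human.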
Lesson 2: Never use latest as your image tag
# BAD: you have no idea what's actually running
image:
  tag: "latest"

# GOOD: pin the exact version, always know what's deployed
image:
  tag: "v2.4.1"
In your CI/CD pipeline, pass the Git commit SHA:
# .github/workflows/deploy.yml
- name: Deploy to production
  run: |
    helm upgrade --install my-api ./my-api \
      --namespace production \
      --values values.prod.yaml \
      --set image.tag=${{ github.sha }}
Now every production deployment is traceable to a specific commit. Future-you will be grateful.
Lesson 3: Store chart values in Git, secrets in a vault
# In Git (safe to commit):
values.yaml
values.dev.yaml
values.staging.yaml
values.prod.yaml
# NOT in Git (use Kubernetes Secrets or a vault):
DATABASE_PASSWORD
API_KEYS
TLS_CERTIFICATES
Use the helm-secrets plugin to encrypt sensitive values with SOPS:
# Encrypt the secrets file
helm secrets enc secrets.prod.yaml

# Deploy with encrypted secrets (decrypted on the fly at deploy time)
helm upgrade --install my-api ./my-api \
  --values values.prod.yaml \
  --values secrets://secrets.prod.yaml
Your Helm Starter Checklist ✅
Before shipping your next Kubernetes app:
- Run helm create for your chart: stop writing boilerplate from scratch
- Split env-specific config into values.{env}.yaml
- Pin image tags: never use latest in prod
- Install helm-diff: review changes before every deploy
- Add helm rollback to your incident runbook: it's your safety net
- Put secrets in a vault, not in values.yaml
- Add helm lint ./chart to CI: catch template errors before they hit prod
The Bottom Line 💡
Helm didn't just solve my YAML duplication problem; it changed how I think about Kubernetes deployments entirely. Deployments became repeatable, auditable, and reversible.
If you're still copy-pasting YAML files across environments, you're accumulating drift debt that will bite you at the worst possible time (Black Friday, product launch, 3 AM). One source of truth, parameterized for each environment, with rollback on demand: that's the dream, and Helm delivers it.
Start with helm create, migrate one service, and experience the relief of knowing every environment is running exactly what you think it's running.
Take Action Today 🚀
- Install Helm: brew install helm (or your package manager of choice)
- Run helm create my-first-chart and poke around the generated structure
- Pick your most-duplicated Kubernetes YAML and turn it into a chart
- Add helm diff to your workflow before the next deploy
- Share your first helm rollback success story: I want to hear it!
Building your Kubernetes setup and want a second pair of eyes? Connect on LinkedIn; I love talking infrastructure.
Want to see real Helm charts in action? Check out my GitHub for production-grade chart examples.
Now go delete those duplicate YAML files. 🗑️✨
P.S. The first time you run helm rollback in production and it works in 10 seconds flat, you'll want to go back in time and tell past-you to adopt Helm two years earlier. Trust me. ⏪
P.P.S. Yes, you can also use Helm to install community charts for Postgres, Redis, Nginx, Cert-Manager... basically your entire infrastructure. The helm search hub command is a rabbit hole I'll warn you about in advance. You're welcome. 🐇