From local development to production infrastructure — packaging applications into portable, reproducible containers
Every developer has heard "But it works on my machine!" — the classic symptom of environment inconsistency between development, staging, and production.
A container is a lightweight, isolated process that shares the host OS kernel but has its own filesystem, network, and process space.
Image: read-only template with app code, runtime, and libraries. Built in layers from a Dockerfile.
Container: running instance of an image — a writable layer on top, an isolated process with its own filesystem.
Registry: repository for storing and distributing images. Like GitHub, but for container images.
# Node.js Express API Dockerfile
FROM node:20-alpine
# Create app directory
WORKDIR /app
# Install dependencies first (better caching)
COPY package*.json ./
# Dev deps included here: the TypeScript build below needs tsc
RUN npm ci
# Copy application source
COPY src/ ./src/
COPY tsconfig.json ./
# Build TypeScript
RUN npm run build
# Expose the port
EXPOSE 3000
# Health check
HEALTHCHECK --interval=30s --timeout=3s \
CMD wget -qO- http://localhost:3000/health || exit 1
# Run as non-root user
USER node
# Start the app
CMD ["node", "dist/server.js"]
Key Dockerfile instructions:

- FROM — base image to build upon
- WORKDIR — set working directory
- COPY — copy files into image
- RUN — execute build commands
- EXPOSE — document the port
- CMD — default run command

Layer-caching tip: COPY package.json before the source code, so the dependency layer is reused when only source changes.

.dockerignore: like .gitignore — exclude node_modules, .git, .env, test files from the build context.
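A minimal .dockerignore matching the exclusions mentioned above (entries are illustrative):

```text
# .dockerignore — keeps the build context small and secrets out of the image
node_modules
.git
.env
dist
coverage
*.test.ts
Dockerfile
.dockerignore
```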
Use multiple FROM statements to separate build-time and runtime dependencies, dramatically reducing final image size.
# ── Stage 1: Build ──
FROM node:20-alpine AS builder
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
RUN npm run build
RUN npm prune --production
# ── Stage 2: Production ──
FROM node:20-alpine AS production
WORKDIR /app
# Only copy what we need
COPY --from=builder /app/dist ./dist
COPY --from=builder /app/node_modules ./node_modules
COPY --from=builder /app/package.json ./
USER node
EXPOSE 3000
CMD ["node", "dist/server.js"]
Single-stage image: ~850 MB — includes TypeScript compiler, dev dependencies, source files, build tools.
Multi-stage image: ~150 MB — only production node_modules and compiled JS. No compiler, no dev deps.
Minimal base images to consider: scratch · debian-slim · nginx:alpine

| Command | Description | Example |
|---|---|---|
| docker build | Build image from Dockerfile | docker build -t myapp:1.0 . |
| docker run | Create & start container | docker run -d -p 3000:3000 myapp:1.0 |
| docker ps | List running containers | docker ps -a (include stopped) |
| docker logs | View container output | docker logs -f --tail 100 myapp |
| docker exec | Run command in container | docker exec -it myapp sh |
| docker stop | Gracefully stop container | docker stop myapp |
| docker rm | Remove stopped container | docker rm myapp |
| docker images | List local images | docker images --filter dangling=true |
| docker system prune | Clean up unused resources | docker system prune -af |
Common docker run flags: -d (detached) · -p 8080:3000 (port map) · -v ./data:/data (volume) · --name myapp (name) · --rm (auto-remove) · -e KEY=val (env var) · --network mynet (network)
Define and run multi-container applications with a single YAML file. One command to start everything.
# docker-compose.yml
services:
app:
build: .
ports:
- "3000:3000"
environment:
- DATABASE_URL=postgres://user:pass@db:5432/myapp
- REDIS_URL=redis://cache:6379
depends_on:
db:
condition: service_healthy
cache:
condition: service_started
volumes:
- ./src:/app/src # dev hot-reload
db:
image: postgres:16-alpine
environment:
POSTGRES_USER: user
POSTGRES_PASSWORD: pass
POSTGRES_DB: myapp
volumes:
- pgdata:/var/lib/postgresql/data
healthcheck:
test: ["CMD-SHELL", "pg_isready -U user"]
interval: 5s
timeout: 3s
retries: 5
cache:
image: redis:7-alpine
ports:
- "6379:6379"
volumes:
pgdata:
- docker compose up -d — start all services
- docker compose down — stop and remove
- docker compose logs -f app — follow logs
- docker compose build — rebuild images
- docker compose ps — service status

Keep secrets out of the YAML (Compose loads a .env file automatically). Use docker-compose.override.yml for dev settings (hot-reload, debug ports) that don't go to production.
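The override-file tip can be sketched as follows — docker compose up merges this file on top of docker-compose.yml automatically (the dev command, debug port, and paths are illustrative):

```yaml
# docker-compose.override.yml (dev-only settings; not deployed)
services:
  app:
    volumes:
      - ./src:/app/src        # hot-reload: mount source into the container
    environment:
      - NODE_ENV=development
    ports:
      - "9229:9229"           # Node.js inspector port (illustrative)
    command: npm run dev      # hypothetical dev script with a file watcher
```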
A container registry stores and distributes Docker images. Think of it as npm/PyPI but for container images.
| Registry | Free Tier | Best For |
|---|---|---|
| Docker Hub | 1 private repo, unlimited public | Open source, public images |
| GitHub GHCR | 500 MB free storage | GitHub-based workflows |
| AWS ECR | 500 MB/month (free tier) | AWS deployments |
| Google Artifact Registry | 500 MB free w/ Cloud Run | GCP deployments |
| Azure ACR | Basic tier ~$5/mo | Azure deployments |
# Format: registry/namespace/image:tag
docker.io/library/node:20-alpine
ghcr.io/myorg/myapp:v1.2.3
123456789.dkr.ecr.us-east-1.amazonaws.com/myapp:latest
Tagging strategies:

- latest — avoid in production (mutable!)
- v1.2.3 — semantic version (immutable)
- sha-a1b2c3d — git commit hash
- main-20260324 — branch + date

# 1. Build the image
docker build -t myapp:v1.0.0 .
# 2. Tag for your registry
docker tag myapp:v1.0.0 \
ghcr.io/myorg/myapp:v1.0.0
# 3. Authenticate
echo $GITHUB_TOKEN | docker login \
ghcr.io -u USERNAME --password-stdin
# 4. Push to registry
docker push ghcr.io/myorg/myapp:v1.0.0
# 5. Pull on another machine
docker pull ghcr.io/myorg/myapp:v1.0.0
docker run -d -p 3000:3000 \
ghcr.io/myorg/myapp:v1.0.0
# .github/workflows/docker.yml
name: Build & Push
on:
push:
tags: ['v*']
jobs:
build:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- uses: docker/login-action@v3
with:
registry: ghcr.io
username: ${{ github.actor }}
password: ${{ secrets.GITHUB_TOKEN }}
- uses: docker/build-push-action@v5
with:
push: true
tags: |
ghcr.io/${{ github.repository }}:${{ github.ref_name }}
ghcr.io/${{ github.repository }}:latest
| Driver | Use Case |
|---|---|
| bridge | Default. Containers on same host communicate via virtual bridge |
| host | Container shares host network stack. No port mapping needed |
| overlay | Multi-host networking for Docker Swarm / orchestration |
| none | No networking. Complete isolation |
| macvlan | Container gets its own MAC address on the physical network |
On user-defined networks, containers can reach each other by service name. No need for IP addresses.
# Create a custom network
docker network create mynet
# Run containers on the same network
docker run -d --name api \
--network mynet myapp:1.0
docker run -d --name db \
--network mynet postgres:16
# 'api' container can connect to:
# postgres://db:5432/mydb
# Docker resolves 'db' to the container IP
Containers are ephemeral — data inside is lost when the container is removed. Volumes solve this.
Named volumes are managed by Docker and stored under /var/lib/docker/volumes/.

docker volume create pgdata
docker run -v pgdata:/var/lib/postgresql/data postgres:16
docker run \
-v $(pwd)/src:/app/src \
-v $(pwd)/config:/app/config:ro \
myapp:dev
docker run \
--tmpfs /tmp:rw,size=64m \
--tmpfs /run/secrets \
myapp:1.0
Never store database data inside the container without a volume. A docker rm will destroy all your data permanently.
Avoid: running as root inside containers · the latest tag in production · the --privileged flag.

Prefer: USER nonroot in the Dockerfile · alpine or distroless base images · read_only: true where possible.

# Secure Dockerfile example
FROM node:20-alpine AS builder
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
RUN npm run build
FROM gcr.io/distroless/nodejs20-debian12
WORKDIR /app
COPY --from=builder /app/dist ./dist
COPY --from=builder /app/node_modules ./node_modules
# No shell, no package manager, no root
USER nonroot:nonroot
EXPOSE 3000
CMD ["dist/server.js"]
# Scan image for vulnerabilities
trivy image myapp:v1.0.0
# Output:
# Total: 2 (HIGH: 1, CRITICAL: 1)
# ┌──────────┬────────────┬──────────┐
# │ Library │ Vuln ID │ Severity │
# ├──────────┼────────────┼──────────┤
# │ openssl │ CVE-2024-… │ CRITICAL │
# │ curl │ CVE-2024-… │ HIGH │
# └──────────┴────────────┴──────────┘
Running one container on one server is simple. Running dozens across multiple servers with zero downtime requires orchestration.
| Platform | Complexity | Best For |
|---|---|---|
| Kubernetes | High | Large-scale, multi-cloud, full control |
| Docker Swarm | Low | Small teams, simple orchestration |
| AWS ECS | Medium | AWS-native, Fargate serverless |
| Nomad | Medium | Multi-workload (containers + VMs + batch) |
| Cloud Run | Very Low | Serverless containers, scale to zero |
Pod: smallest deployable unit. One or more containers sharing network/storage.
Service: stable network endpoint that load-balances across pod replicas.
Deployment: manages pod replicas, rolling updates, and rollbacks declaratively.
Ingress: routes external HTTP/S traffic to services. Handles TLS termination and path-based routing.
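The Pod, Deployment, and Service concepts above can be sketched as a minimal manifest (names, image tag, and ports are illustrative, reusing this guide's example app):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
spec:
  replicas: 3                      # three pod replicas, rolled out declaratively
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
        - name: myapp
          image: ghcr.io/myorg/myapp:v1.0.0
          ports:
            - containerPort: 3000
---
apiVersion: v1
kind: Service                      # stable endpoint load-balancing across the pods
metadata:
  name: myapp
spec:
  selector:
    app: myapp
  ports:
    - port: 80
      targetPort: 3000
```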
# Common kubectl commands
kubectl get pods
kubectl apply -f deployment.yaml
kubectl scale deploy myapp --replicas=5
kubectl rollout status deploy/myapp
kubectl logs -f deploy/myapp
# Deploy with AWS CLI
aws ecs update-service \
--cluster prod \
--service myapp \
--force-new-deployment
# Deploy to Cloud Run
gcloud run deploy myapp \
--image gcr.io/proj/myapp:v1 \
--region us-central1 \
--allow-unauthenticated
# Deploy to Azure
az containerapp create \
--name myapp \
--resource-group mygroup \
--image myapp:v1 \
--target-port 3000 \
--ingress external
Already on AWS? Use ECS/Fargate. Want simplest option? Google Cloud Run. Need Kubernetes? EKS, GKE, or AKS. Small project? See next slide for free/cheap options.
You don't need a big budget to deploy containers. These platforms offer generous free tiers for side projects and startups.
| Platform | Free Tier | Pricing After | Key Features |
|---|---|---|---|
| Google Cloud Run | 2M requests/mo, 360K vCPU-sec | Pay per request + compute | Scale to zero, HTTPS, custom domains |
| Fly.io | 3 shared VMs, 160GB bandwidth | ~$2/mo per extra VM | Global edge deploy, built-in Postgres |
| Railway | $5 free credit/mo | Usage-based (~$5-10/mo) | Git push deploy, databases included |
| Render | Free for static + web services | $7/mo for always-on | Auto-deploy from Git, managed DBs |
| Coolify | Self-hosted (free OSS) | VPS cost only (~$5/mo) | Self-hosted PaaS, Heroku alternative |
# Google Cloud Run (from Dockerfile)
gcloud run deploy --source .
# Fly.io
fly launch # auto-detects Dockerfile
fly deploy
# Railway
railway up # deploys current directory
For hobby projects: Cloud Run (generous free tier, scales to zero). For small production apps: Fly.io (predictable pricing, global edge). For teams: Railway (great DX, includes databases).
Common mistakes: using node:20 (1 GB) instead of node:20-alpine (130 MB) · copying node_modules and .git into the image · copying source before npm install, which busts the layer cache on every code change.

# Why did my container crash?
docker logs myapp --tail 50
# What's happening inside right now?
docker exec -it myapp sh
# Inspect the full container config
docker inspect myapp
# Check resource usage
docker stats myapp
# See what's eating disk space
docker system df
# Debug a failed build
docker build --progress=plain \
--no-cache -t myapp .
# Dive into image layers
# (install 'dive' tool)
dive myapp:latest
Check: Is CMD running a foreground process? Did you use exec form CMD ["node", "server.js"] vs shell form CMD node server.js? Is the app crashing? Check logs with docker logs.
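The exec-form vs shell-form distinction above can be illustrated directly in a Dockerfile:

```dockerfile
# Exec form — node runs as PID 1 and receives SIGTERM directly,
# so graceful-shutdown handlers fire on `docker stop`
CMD ["node", "dist/server.js"]

# Shell form — actually runs /bin/sh -c "node dist/server.js";
# signals go to the shell, which does not forward them by default
CMD node dist/server.js
```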
minikube (local Kubernetes) · dive (inspect layers) · hadolint (lint Dockerfiles) · trivy (security scan) · lazydocker (TUI) · ctop (container top)