Docker Compose & Multi-Container Apps

Welcome to Phase 2 of the Docker & Kubernetes Learning Roadmap! Now that you understand Docker fundamentals—images, containers, volumes, and networking—it's time to build your own images with Dockerfiles and orchestrate multi-container applications with Docker Compose.
This is where Docker becomes truly powerful. Instead of pulling pre-built images, you'll package your own applications. Instead of running containers one by one, you'll define entire stacks in a single file.
What You'll Learn
✅ Write Dockerfiles from scratch with all essential instructions
✅ Choose the right base image (alpine, slim, distroless)
✅ Use multi-stage builds to reduce image size by 70-90%
✅ Understand layer caching and build optimization
✅ Master Docker Compose for multi-container orchestration
✅ Configure networks, volumes, and environment variables in Compose
✅ Set up development and production configurations
Time commitment: 5–7 days, 1–2 hours daily
Prerequisites: Phase 1: Docker Fundamentals
Part 1: Dockerfile Fundamentals
A Dockerfile is a text file containing instructions to build a Docker image. Think of it as a recipe—each instruction adds a layer to your image.
Your First Dockerfile
Let's start with a simple Node.js application:
// app.js
const http = require('http');
const server = http.createServer((req, res) => {
res.writeHead(200, { 'Content-Type': 'application/json' });
res.end(JSON.stringify({ message: 'Hello from Docker!', time: new Date() }));
});
server.listen(3000, () => {
console.log('Server running on port 3000');
});

Now the Dockerfile:
# Use Node.js as the base image
FROM node:20-alpine
# Set the working directory inside the container
WORKDIR /app
# Copy package files first (for better caching)
COPY package*.json ./
# Install dependencies
RUN npm install --production
# Copy application code
COPY . .
# Expose port 3000
EXPOSE 3000
# Start the application
CMD ["node", "app.js"]

Build and run:
# Build the image
docker build -t my-node-app .
# Run the container
docker run -d -p 3000:3000 --name my-app my-node-app
# Test it
curl http://localhost:3000
# {"message":"Hello from Docker!","time":"2026-03-19T..."}

Essential Dockerfile Instructions
Here's every instruction you need to know:
FROM — Base Image
Every Dockerfile starts with FROM. It defines the base image your application builds upon.
# Official images
FROM node:20-alpine
FROM python:3.12-slim
FROM golang:1.22-alpine
FROM openjdk:21-slim
# Minimal base images
FROM alpine:3.19
FROM ubuntu:24.04
FROM debian:bookworm-slim
# Empty base (for static binaries)
FROM scratch

RUN — Execute Commands
RUN executes commands during the build process. Each RUN creates a new layer.
# Install system packages
RUN apt-get update && apt-get install -y \
curl \
git \
&& rm -rf /var/lib/apt/lists/*
# Install Python dependencies
RUN pip install --no-cache-dir -r requirements.txt
# Create directories
RUN mkdir -p /app/data

Best practice: Chain commands with `&&` and clean up in the same `RUN` instruction to reduce layer size.
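The reason cleanup must happen in the same `RUN`: a layer can only add to the image, and a later layer that deletes files merely hides them without shrinking earlier layers. A toy model (an illustrative sketch, not the real storage driver) makes this concrete:

```python
# Toy model of image layers: each layer records the bytes it ADDS.
# A deletion in a later layer hides files but cannot shrink earlier
# layers, so image size is the sum of all layer sizes.

def image_size(layers):
    """Total image size: every layer's additions count; deletions don't help."""
    return sum(layer["added"] for layer in layers)

# Three separate RUNs: apt lists land in layer 1, are "deleted" in layer 3
separate = [
    {"cmd": "apt-get update",              "added": 40_000_000},
    {"cmd": "apt-get install -y curl",     "added": 5_000_000},
    {"cmd": "rm -rf /var/lib/apt/lists/*", "added": 0},  # hides, doesn't shrink
]

# One chained RUN: cleanup happens before the layer is committed
chained = [
    {"cmd": "apt-get update && apt-get install -y curl "
            "&& rm -rf /var/lib/apt/lists/*",
     "added": 5_000_000},
]

print(image_size(separate))  # 45000000 — still paying for the apt lists
print(image_size(chained))   # 5000000
```

The byte counts are made up; the point is the accounting rule, which is why the chained form is smaller.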
COPY vs ADD
Both copy files into the image, but they behave differently:
# COPY — Simple file copying (preferred)
COPY package.json ./
COPY src/ ./src/
COPY . .
# ADD — Extra features (usually avoid)
ADD https://example.com/file.tar.gz /app/ # Downloads URLs
ADD archive.tar.gz /app/ # Auto-extracts archives

Best practice: Use `COPY` unless you specifically need URL downloading or auto-extraction from `ADD`.
WORKDIR — Working Directory
Sets the working directory for subsequent instructions:
WORKDIR /app
# All following commands run in /app
COPY . .
RUN npm install
CMD ["node", "app.js"]

CMD vs ENTRYPOINT
Both define what runs when the container starts, but they serve different purposes:
# CMD — Default command (can be overridden)
CMD ["node", "app.js"]
CMD ["python", "main.py"]
# ENTRYPOINT — Fixed executable (arguments can be appended)
ENTRYPOINT ["python", "main.py"]
CMD ["--port", "8000"] # Default arguments

The difference in practice:
# With CMD ["node", "app.js"]
docker run my-app # Runs: node app.js
docker run my-app bash # Runs: bash (CMD overridden)
# With ENTRYPOINT ["python", "main.py"]
docker run my-app # Runs: python main.py --port 8000
docker run my-app --port 9000 # Runs: python main.py --port 9000
docker run my-app bash # Runs: python main.py bash (probably an error)

Rule of thumb:
- Use `CMD` for general-purpose images where users might want to run different commands
- Use `ENTRYPOINT` + `CMD` when your container should always run a specific program
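The behavior above boils down to simple list concatenation. A hedged sketch of how the final command line is assembled (a model of the rules, not Docker's actual source):

```python
def container_argv(entrypoint, cmd, run_args):
    """Assemble the process argv the way Docker combines ENTRYPOINT and CMD:
    - ENTRYPOINT (if set) is always the prefix
    - arguments passed to `docker run` replace CMD entirely
    - otherwise CMD supplies the defaults
    """
    args = run_args if run_args else cmd
    return (entrypoint or []) + (args or [])

# CMD-only image
print(container_argv(None, ["node", "app.js"], []))        # ['node', 'app.js']
print(container_argv(None, ["node", "app.js"], ["bash"]))  # ['bash'] — CMD overridden

# ENTRYPOINT + CMD image
ep, default = ["python", "main.py"], ["--port", "8000"]
print(container_argv(ep, default, []))                 # ['python', 'main.py', '--port', '8000']
print(container_argv(ep, default, ["--port", "9000"])) # ['python', 'main.py', '--port', '9000']
```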
ENV and ARG — Variables
# ENV — Available at build time AND runtime
ENV NODE_ENV=production
ENV PORT=3000
# ARG — Available ONLY during build time
ARG VERSION=1.0.0
ARG NODE_VERSION=20
# Using ARG in FROM
ARG NODE_VERSION=20
FROM node:${NODE_VERSION}-alpine
# Using build args
docker build --build-arg VERSION=2.0.0 -t my-app .

EXPOSE — Document Ports
EXPOSE doesn't actually publish ports—it's documentation for which ports the container listens on:
EXPOSE 3000
EXPOSE 8080
EXPOSE 5432

You still need the `-p` flag when running: `docker run -p 3000:3000 my-app`
USER — Run as Non-Root
# Create a non-root user
RUN addgroup -S appgroup && adduser -S appuser -G appgroup
# Switch to non-root user
USER appuser
# All subsequent commands run as appuser
COPY --chown=appuser:appgroup . .
CMD ["node", "app.js"]

Security: Always run production containers as non-root users.
VOLUME — Declare Mount Points
VOLUME ["/data"]
VOLUME ["/var/log/app"]

LABEL — Metadata
LABEL maintainer="chanh@example.com"
LABEL version="1.0"
LABEL description="My awesome application"

Build Context and .dockerignore
When you run docker build ., Docker sends the entire directory (the build context) to the Docker daemon. The .dockerignore file excludes unnecessary files:
# .dockerignore
node_modules
npm-debug.log
.git
.gitignore
.env
.env.*
Dockerfile
docker-compose.yml
README.md
.DS_Store
coverage
.nyc_output
dist
build
*.md

Why this matters:
# Without .dockerignore — sends everything (slow)
Sending build context to Docker daemon 250MB
# With .dockerignore — sends only what's needed (fast)
Sending build context to Docker daemon 2.5MB

Part 2: Dockerfile Best Practices
Writing a Dockerfile is easy. Writing a good Dockerfile requires understanding layers, caching, and security.
Choose the Right Base Image
Your base image dramatically affects image size and security:
| Base Image | Size | Use Case |
|---|---|---|
| node:20 | ~1GB | Full development environment |
| node:20-slim | ~200MB | Production (most dependencies) |
| node:20-alpine | ~130MB | Production (lightweight) |
| python:3.12 | ~1GB | Full Python environment |
| python:3.12-slim | ~150MB | Production Python |
| python:3.12-alpine | ~50MB | Minimal Python |
| golang:1.22 | ~800MB | Go build environment |
| alpine:3.19 | ~7MB | Minimal Linux |
| gcr.io/distroless/static | ~2MB | Static binaries only |
Guidelines:
- Development: Use full images for debugging tools
- Production: Use `slim` or `alpine` variants
- Go/Rust: Use `scratch` or `distroless` for final images (static binaries)
- Always: Pin specific versions (not `:latest`)
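The "pin specific versions" rule is easy to enforce automatically. A small helper (hypothetical, something you might drop into a CI lint script) that flags unpinned image references:

```python
def is_pinned(image: str) -> bool:
    """Return True if an image reference pins a specific version.

    Unpinned: no tag at all, or the mutable ':latest' tag.
    (Simplified: real references can also include registry ports,
    which this handles, and digests, treated as the strictest pin.)
    """
    if "@sha256:" in image:       # digest-pinned is the strictest form
        return True
    name, sep, tag = image.rpartition(":")
    if not sep or "/" in tag:     # no tag; the ':' belonged to a registry port
        return False
    return tag != "latest"

print(is_pinned("node:20-alpine"))          # True
print(is_pinned("node"))                    # False — implicit :latest
print(is_pinned("node:latest"))             # False
print(is_pinned("registry.local:5000/app")) # False — port, but no tag
```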
Layer Caching and Instruction Order
Docker caches each layer. If a layer hasn't changed, Docker reuses the cache. Order matters:
# ❌ BAD — Reinstalls dependencies every time code changes
FROM node:20-alpine
WORKDIR /app
COPY . . # Code change invalidates this layer
RUN npm install # Must reinstall every time
CMD ["node", "app.js"]
# ✅ GOOD — Dependencies cached unless package.json changes
FROM node:20-alpine
WORKDIR /app
COPY package*.json ./ # Only changes when dependencies change
RUN npm install # Cached unless package.json changed
COPY . . # Code changes only affect this layer
CMD ["node", "app.js"]

The principle: Copy files that change less frequently first.
In a typical Dockerfile, the layers fall into three tiers:
- Base image and system layers: rarely change (cached almost always)
- Dependency layers: change only when dependencies update
- Application code layers: change with every code update
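Cache behavior can be modeled simply: a layer's cache key depends on its instruction plus everything before it, and for `COPY` also on the copied files' content. A toy sketch (an assumed model for intuition, not BuildKit's actual algorithm):

```python
import hashlib

def build(instructions, files, cache):
    """Toy layer cache: a layer's key hashes its instruction, the previous
    layer's key, and (for COPY) the content of the copied files.
    Returns the list of instructions that actually had to run."""
    rebuilt, prev_key = [], ""
    for inst in instructions:
        material = prev_key + inst
        if inst.startswith("COPY"):
            src = inst.split()[1]
            material += files.get(src, "")   # content change -> new key
        key = hashlib.sha256(material.encode()).hexdigest()
        if key not in cache:
            cache.add(key)
            rebuilt.append(inst)             # cache miss: layer rebuilds
        prev_key = key
    return rebuilt

cache = set()
good = ["FROM node:20-alpine", "COPY package.json .", "RUN npm install",
        "COPY src .", "CMD node app.js"]
files = {"package.json": '{"deps": 1}', "src": "v1"}

build(good, files, cache)   # first build: every layer runs
files["src"] = "v2"         # edit application code only
print(build(good, files, cache))
# ['COPY src .', 'CMD node app.js'] — npm install stays cached
```

Swap the `COPY` order (code before `package.json`) and the same code edit would invalidate the `npm install` layer too, which is exactly the bad pattern above.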
Minimize the Number of Layers
Each RUN, COPY, and ADD creates a new layer. Combine related commands:
# ❌ BAD — 3 separate layers
RUN apt-get update
RUN apt-get install -y curl
RUN rm -rf /var/lib/apt/lists/*
# ✅ GOOD — 1 layer, with cleanup
RUN apt-get update && apt-get install -y \
curl \
git \
&& rm -rf /var/lib/apt/lists/*

Security Hardening
FROM node:20-alpine
# Create non-root user
RUN addgroup -S app && adduser -S app -G app
WORKDIR /app
# Install dependencies as root
COPY package*.json ./
RUN npm ci --production && npm cache clean --force
# Copy app files with proper ownership
COPY --chown=app:app . .
# Switch to non-root user
USER app
EXPOSE 3000
CMD ["node", "app.js"]

Part 3: Multi-Stage Builds
Multi-stage builds are a game-changer. They let you use one image for building and a different (smaller) image for running.
The Problem
Without multi-stage builds:
# Single-stage: includes ALL build tools in final image
FROM node:20
WORKDIR /app
COPY . .
RUN npm install
RUN npm run build
# Final image: ~1GB (includes TypeScript compiler, dev dependencies, etc.)
CMD ["node", "dist/app.js"]

The Solution
# Stage 1: Build
FROM node:20-alpine AS builder
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
RUN npm run build
# Stage 2: Production
FROM node:20-alpine AS production
WORKDIR /app
COPY package*.json ./
RUN npm ci --production
COPY --from=builder /app/dist ./dist
USER node
EXPOSE 3000
CMD ["node", "dist/app.js"]

The `--from=builder` flag copies only the built artifacts from the first stage. Everything else (source code, dev dependencies, build tools) is discarded.
Multi-Stage Build for Python (FastAPI)
# Stage 1: Build dependencies
FROM python:3.12-slim AS builder
WORKDIR /app
# Install build dependencies
RUN apt-get update && apt-get install -y \
gcc \
&& rm -rf /var/lib/apt/lists/*
COPY requirements.txt .
RUN pip install --no-cache-dir --prefix=/install -r requirements.txt
# Stage 2: Production
FROM python:3.12-slim
WORKDIR /app
# Copy installed packages from builder
COPY --from=builder /install /usr/local
# Create non-root user
RUN useradd --create-home appuser
USER appuser
COPY . .
EXPOSE 8000
CMD ["uvicorn", "main:app", "--host", "0.0.0.0", "--port", "8000"]

Multi-Stage Build for Go
Go shines with multi-stage builds because Go compiles to a static binary:
# Stage 1: Build
FROM golang:1.22-alpine AS builder
WORKDIR /app
COPY go.mod go.sum ./
RUN go mod download
COPY . .
RUN CGO_ENABLED=0 GOOS=linux go build -o /server ./cmd/server
# Stage 2: Production (from scratch — smallest possible image!)
FROM scratch
COPY --from=builder /server /server
COPY --from=builder /etc/ssl/certs/ca-certificates.crt /etc/ssl/certs/
EXPOSE 8080
ENTRYPOINT ["/server"]

Result: Final image is just a few MB instead of ~800MB.
Multi-Stage Build for Java (Spring Boot)
# Stage 1: Build with Maven
FROM maven:3.9-eclipse-temurin-21 AS builder
WORKDIR /app
COPY pom.xml .
RUN mvn dependency:resolve
COPY src ./src
RUN mvn package -DskipTests
# Stage 2: Production
FROM eclipse-temurin:21-jre-alpine
WORKDIR /app
RUN addgroup -S app && adduser -S app -G app
COPY --from=builder /app/target/*.jar app.jar
RUN chown app:app app.jar
USER app
EXPOSE 8080
ENTRYPOINT ["java", "-jar", "app.jar"]

Image Size Comparison
Here's the impact of multi-stage builds:
| Application | Single-Stage | Multi-Stage | Reduction |
|---|---|---|---|
| Node.js API | ~1.1GB | ~180MB | 84% |
| Python FastAPI | ~950MB | ~150MB | 84% |
| Go API | ~800MB | ~8MB | 99% |
| Java Spring Boot | ~800MB | ~280MB | 65% |
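The reduction column is just 1 − multi/single, rounded. A quick check of the table's numbers:

```python
def reduction(single_mb, multi_mb):
    """Percent size reduction going from single-stage to multi-stage."""
    return round((1 - multi_mb / single_mb) * 100)

print(reduction(1100, 180))  # 84  (Node.js API)
print(reduction(950, 150))   # 84  (Python FastAPI)
print(reduction(800, 8))     # 99  (Go API)
print(reduction(800, 280))   # 65  (Java Spring Boot)
```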
Part 4: Docker Compose Fundamentals
Running docker run commands with long flags gets tedious fast, especially when your application has multiple services. Docker Compose solves this by letting you define and manage multi-container applications in a single YAML file.
What is Docker Compose?
Docker Compose is a tool for defining and running multi-container Docker applications. Instead of this:
# Create network
docker network create myapp
# Start database
docker run -d --name postgres \
--network myapp \
-e POSTGRES_DB=mydb \
-e POSTGRES_USER=admin \
-e POSTGRES_PASSWORD=secret \
-v pgdata:/var/lib/postgresql/data \
postgres:16-alpine
# Start Redis
docker run -d --name redis \
--network myapp \
redis:7-alpine
# Start application
docker run -d --name app \
--network myapp \
-p 3000:3000 \
-e DATABASE_URL=postgres://admin:secret@postgres:5432/mydb \
-e REDIS_URL=redis://redis:6379 \
my-app

You write this:
# docker-compose.yml
services:
app:
build: .
ports:
- "3000:3000"
environment:
DATABASE_URL: postgres://admin:secret@postgres:5432/mydb
REDIS_URL: redis://redis:6379
depends_on:
- postgres
- redis
postgres:
image: postgres:16-alpine
environment:
POSTGRES_DB: mydb
POSTGRES_USER: admin
POSTGRES_PASSWORD: secret
volumes:
- pgdata:/var/lib/postgresql/data
redis:
image: redis:7-alpine
volumes:
pgdata:

And run everything with a single command:
docker compose up -d

docker-compose.yml Structure
A Compose file has these top-level keys:
services: # Container definitions (required)
networks: # Custom networks (optional)
volumes: # Named volumes (optional)
configs: # Configuration objects (optional, Swarm mode)
secrets: # Secret objects (optional, Swarm mode)

Service Configuration Deep Dive
Each service can be configured with many options:
services:
web:
# Image source (pick one)
image: nginx:alpine # Use pre-built image
build: . # Build from Dockerfile
build: # Build with options
context: ./app
dockerfile: Dockerfile.prod
args:
NODE_ENV: production
# Container settings
container_name: my-web # Custom container name
hostname: web-server # Container hostname
restart: unless-stopped # Restart policy
# Networking
ports:
- "80:80" # host:container
- "443:443"
- "127.0.0.1:8080:8080" # Bind to specific interface
# Storage
volumes:
- ./src:/app/src # Bind mount
- app-data:/app/data # Named volume
- /app/node_modules # Anonymous volume
# Environment
environment:
NODE_ENV: production
API_KEY: ${API_KEY} # From host environment
env_file:
- .env # Load from file
# Dependencies
depends_on:
- db
- redis
# Resource limits
deploy:
resources:
limits:
cpus: "0.5"
memory: 512M
reservations:
cpus: "0.25"
memory: 256M
# Health check
healthcheck:
test: ["CMD", "curl", "-f", "http://localhost:80"]
interval: 30s
timeout: 10s
retries: 3
start_period: 40s
# Logging
logging:
driver: "json-file"
options:
max-size: "10m"
max-file: "3"

Environment Variables
Docker Compose supports multiple ways to set environment variables:
Inline in docker-compose.yml
services:
app:
environment:
NODE_ENV: production
PORT: "3000"
DEBUG: "false"

From .env File
# .env
POSTGRES_USER=admin
POSTGRES_PASSWORD=supersecret
POSTGRES_DB=myapp
APP_PORT=3000

services:
app:
env_file:
- .env
ports:
- "${APP_PORT}:3000"
db:
image: postgres:16-alpine
environment:
POSTGRES_USER: ${POSTGRES_USER}
POSTGRES_PASSWORD: ${POSTGRES_PASSWORD}
POSTGRES_DB: ${POSTGRES_DB}

Variable Substitution
services:
app:
image: my-app:${TAG:-latest} # Default to 'latest' if TAG not set
environment:
LOG_LEVEL: ${LOG_LEVEL:?error} # Error if LOG_LEVEL is not set

depends_on and Health Checks
By default, depends_on only controls startup order; it doesn't wait for a service to actually be ready. Combine it with health checks to wait for readiness:
services:
app:
build: .
depends_on:
postgres:
condition: service_healthy # Wait for health check to pass
redis:
condition: service_started # Just wait for container to start
postgres:
image: postgres:16-alpine
healthcheck:
test: ["CMD-SHELL", "pg_isready -U postgres"]
interval: 5s
timeout: 5s
retries: 5
redis:
image: redis:7-alpine
healthcheck:
test: ["CMD", "redis-cli", "ping"]
interval: 5s
timeout: 5s
retries: 5

Restart Policies
services:
app:
restart: "no" # Never restart (default)
restart: always # Always restart
restart: on-failure # Restart only on non-zero exit code
restart: unless-stopped # Restart unless manually stopped

Production recommendation: Use `unless-stopped` or `always` for critical services.
Part 5: Compose Networking
Docker Compose automatically creates a network for your project. All services can communicate using their service name as hostname.
Default Network Behavior
# docker-compose.yml
services:
app:
build: .
ports:
- "3000:3000"
db:
image: postgres:16-alpine
redis:
image: redis:7-alpine

With this setup:
- Compose creates a network named `<project>_default`
- `app` can reach PostgreSQL at `db:5432`
- `app` can reach Redis at `redis:6379`
- All services can communicate by service name
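Service discovery here is plain DNS: the hostname in your connection string is the service name, and Docker's embedded DNS server resolves it to the container's IP on the Compose network. Parsing a connection string makes that explicit:

```python
from urllib.parse import urlparse

# The hostname component of a connection string is the Compose service name;
# Docker's embedded DNS resolves it to a container IP on the shared network.
db = urlparse("postgres://admin:secret@db:5432/mydb")
cache = urlparse("redis://redis:6379")

print(db.hostname, db.port)       # db 5432    <- "db" is the service name
print(cache.hostname, cache.port) # redis 6379 <- "redis" is the service name
```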
// Inside the app container, connect using service names:
const dbUrl = 'postgres://admin:secret@db:5432/mydb'; // "db" = service name
const redisUrl = 'redis://redis:6379'; // "redis" = service name

Custom Networks
For more complex setups, define custom networks to isolate groups of services:
services:
# Frontend — only talks to API
frontend:
build: ./frontend
ports:
- "80:80"
networks:
- frontend-net
# API — talks to frontend AND backend services
api:
build: ./api
ports:
- "3000:3000"
networks:
- frontend-net
- backend-net
# Database — only accessible from API
db:
image: postgres:16-alpine
networks:
- backend-net
# Redis — only accessible from API
redis:
image: redis:7-alpine
networks:
- backend-net
networks:
frontend-net:
driver: bridge
backend-net:
driver: bridge

The `frontend` container cannot reach `db` or `redis` directly; network isolation is enforced.
Part 6: Compose for Development
Docker Compose truly shines in development workflows. Set up hot reload, override files, and development-specific configurations.
Volume Mounts for Hot Reload
Mount your source code as a volume so changes reflect immediately:
services:
app:
build: .
ports:
- "3000:3000"
volumes:
- ./src:/app/src # Mount source code
- /app/node_modules # Prevent overwriting node_modules
environment:
NODE_ENV: development
command: npm run dev # Use dev server with hot reload

Override Files
Docker Compose automatically merges docker-compose.yml with docker-compose.override.yml:
docker-compose.yml (base — shared settings):
services:
app:
build: .
ports:
- "3000:3000"
depends_on:
- db
db:
image: postgres:16-alpine
environment:
POSTGRES_DB: myapp
POSTGRES_USER: admin
POSTGRES_PASSWORD: secret
volumes:
- pgdata:/var/lib/postgresql/data
volumes:
pgdata:

docker-compose.override.yml (development, auto-loaded):
services:
app:
build:
target: development
volumes:
- ./src:/app/src
- /app/node_modules
environment:
NODE_ENV: development
DEBUG: "true"
command: npm run dev
db:
ports:
- "5432:5432" # Expose DB port for local tools

docker-compose.prod.yml (production, explicit):
services:
app:
build:
target: production
environment:
NODE_ENV: production
restart: unless-stopped
deploy:
resources:
limits:
cpus: "1.0"
memory: 1G
db:
restart: unless-stopped
# No port exposure; only accessible within the Docker network

Usage:
# Development (auto-loads override file)
docker compose up
# Production (explicitly specify prod file)
docker compose -f docker-compose.yml -f docker-compose.prod.yml up -d

Compose Profiles
Use profiles to conditionally include services:
services:
app:
build: .
ports:
- "3000:3000"
db:
image: postgres:16-alpine
# Only started when "debug" profile is active
adminer:
image: adminer
ports:
- "8080:8080"
profiles:
- debug
# Only started when "monitoring" profile is active
prometheus:
image: prom/prometheus
ports:
- "9090:9090"
profiles:
- monitoring
grafana:
image: grafana/grafana
ports:
- "3001:3000"
profiles:
- monitoring

# Start only app + db
docker compose up -d
# Start app + db + adminer
docker compose --profile debug up -d
# Start app + db + monitoring stack
docker compose --profile monitoring up -d
# Start everything
docker compose --profile debug --profile monitoring up -d

Part 7: Full-Stack Application Example
Let's put it all together with a complete full-stack application: React frontend + Node.js API + PostgreSQL + Redis.
Project Structure
my-fullstack-app/
├── frontend/
│ ├── Dockerfile
│ ├── package.json
│ ├── src/
│ └── nginx.conf
├── api/
│ ├── Dockerfile
│ ├── package.json
│ └── src/
├── docker-compose.yml
├── docker-compose.override.yml
├── docker-compose.prod.yml
├── .env
└── .env.example

API Dockerfile (Multi-Stage)
# api/Dockerfile
FROM node:20-alpine AS base
WORKDIR /app
COPY package*.json ./
# Development stage
FROM base AS development
RUN npm install
COPY . .
CMD ["npm", "run", "dev"]
# Build stage
FROM base AS builder
RUN npm ci
COPY . .
RUN npm run build
# Production stage
FROM node:20-alpine AS production
WORKDIR /app
RUN addgroup -S app && adduser -S app -G app
COPY package*.json ./
RUN npm ci --production
COPY --from=builder /app/dist ./dist
USER app
EXPOSE 3000
CMD ["node", "dist/app.js"]

Frontend Dockerfile (Multi-Stage)
# frontend/Dockerfile
FROM node:20-alpine AS base
WORKDIR /app
COPY package*.json ./
# Development stage
FROM base AS development
RUN npm install
COPY . .
EXPOSE 5173
CMD ["npm", "run", "dev", "--", "--host"]
# Build stage
FROM base AS builder
RUN npm ci
COPY . .
RUN npm run build
# Production stage — serve with Nginx
FROM nginx:alpine AS production
COPY --from=builder /app/dist /usr/share/nginx/html
COPY nginx.conf /etc/nginx/conf.d/default.conf
EXPOSE 80
CMD ["nginx", "-g", "daemon off;"]

Nginx Configuration for Frontend
# frontend/nginx.conf
server {
listen 80;
server_name localhost;
root /usr/share/nginx/html;
index index.html;
# Handle client-side routing
location / {
try_files $uri $uri/ /index.html;
}
# Proxy API requests to backend
location /api/ {
proxy_pass http://api:3000/;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
}
# Cache static assets
location ~* \.(js|css|png|jpg|jpeg|gif|ico|svg)$ {
expires 1y;
add_header Cache-Control "public, immutable";
}
}

Docker Compose Files
docker-compose.yml (base):
services:
frontend:
build:
context: ./frontend
depends_on:
- api
api:
build:
context: ./api
environment:
DATABASE_URL: postgres://${POSTGRES_USER}:${POSTGRES_PASSWORD}@db:5432/${POSTGRES_DB}
REDIS_URL: redis://redis:6379
depends_on:
db:
condition: service_healthy
redis:
condition: service_healthy
db:
image: postgres:16-alpine
environment:
POSTGRES_DB: ${POSTGRES_DB}
POSTGRES_USER: ${POSTGRES_USER}
POSTGRES_PASSWORD: ${POSTGRES_PASSWORD}
volumes:
- pgdata:/var/lib/postgresql/data
healthcheck:
test: ["CMD-SHELL", "pg_isready -U ${POSTGRES_USER}"]
interval: 5s
timeout: 5s
retries: 5
redis:
image: redis:7-alpine
volumes:
- redis-data:/data
healthcheck:
test: ["CMD", "redis-cli", "ping"]
interval: 5s
timeout: 5s
retries: 5
volumes:
pgdata:
redis-data:

docker-compose.override.yml (development):
services:
frontend:
build:
target: development
ports:
- "5173:5173"
volumes:
- ./frontend/src:/app/src
- /app/node_modules
api:
build:
target: development
ports:
- "3000:3000"
volumes:
- ./api/src:/app/src
- /app/node_modules
environment:
NODE_ENV: development
DEBUG: "true"
db:
ports:
- "5432:5432"
redis:
ports:
- "6379:6379"

docker-compose.prod.yml (production):
services:
frontend:
build:
target: production
ports:
- "80:80"
restart: unless-stopped
api:
build:
target: production
environment:
NODE_ENV: production
restart: unless-stopped
db:
restart: unless-stopped
redis:
restart: unless-stopped

.env file:
POSTGRES_DB=myapp
POSTGRES_USER=admin
POSTGRES_PASSWORD=changeme_in_production

Running the Full Stack
# Development — hot reload, exposed ports, debug tools
docker compose up
# Production build and run
docker compose -f docker-compose.yml -f docker-compose.prod.yml up -d --build
# View logs
docker compose logs -f
# View specific service logs
docker compose logs -f api
# Stop everything
docker compose down
# Stop and remove volumes (clean slate)
docker compose down -v

Part 8: Essential Docker Compose Commands
Lifecycle Commands
# Start services (foreground)
docker compose up
# Start services (background)
docker compose up -d
# Start with rebuild
docker compose up -d --build
# Stop services
docker compose stop
# Stop and remove containers, networks
docker compose down
# Stop, remove containers, AND delete volumes
docker compose down -v
# Restart a specific service
docker compose restart api

Inspection Commands
# List running services
docker compose ps
# View logs (all services)
docker compose logs
# Follow logs for specific service
docker compose logs -f api
# Display the running processes in each container
docker compose top

Execution Commands
# Run a command in a running container
docker compose exec api sh
docker compose exec db psql -U admin myapp
# Run a one-off command (creates new container)
docker compose run --rm api npm test
docker compose run --rm api npm run migrate

Build Commands
# Build all images
docker compose build
# Build specific service
docker compose build api
# Build without cache
docker compose build --no-cache
# Pull latest images
docker compose pull

Scaling Services
# Scale a service to 3 instances
docker compose up -d --scale api=3
# Note: remove the fixed host port mapping (and any container_name)
# and put a load balancer (like Nginx) in front

Common Patterns and Tips
1. Wait-for-It Pattern
Sometimes depends_on with health checks isn't enough. Use a wait script:
services:
api:
build: .
command: >
sh -c "
echo 'Waiting for database...' &&
while ! nc -z db 5432; do sleep 1; done &&
echo 'Database is ready!' &&
npm run migrate &&
npm start
"
depends_on:
- db

2. Database Initialization
Run SQL scripts on first start:
services:
db:
image: postgres:16-alpine
volumes:
- pgdata:/var/lib/postgresql/data
- ./init.sql:/docker-entrypoint-initdb.d/init.sql # Auto-runs on first start

3. Sharing Data Between Services
services:
app:
volumes:
- shared-uploads:/app/uploads
worker:
volumes:
- shared-uploads:/app/uploads
volumes:
shared-uploads:

4. Using YAML Anchors to Reduce Duplication
x-common-env: &common-env
NODE_ENV: production
LOG_LEVEL: info
TZ: UTC
services:
api:
build: ./api
environment:
<<: *common-env
PORT: "3000"
worker:
build: ./worker
environment:
<<: *common-env
CONCURRENCY: "5"

5. Multi-Compose Files for Different Environments
# docker-compose.yml — base config
# docker-compose.override.yml — dev (auto-loaded)
# docker-compose.prod.yml — production
# docker-compose.test.yml — testing
# Run tests
docker compose -f docker-compose.yml -f docker-compose.test.yml run --rm api npm test
# Deploy to production
docker compose -f docker-compose.yml -f docker-compose.prod.yml up -d

Practice Exercises
Exercise 1: Dockerfile Challenge
Write a multi-stage Dockerfile for a Python Flask application:
- Build stage: install dependencies from `requirements.txt`
- Production stage: use `python:3.12-slim`, run as non-root user
- The final image should be under 200MB
Exercise 2: Compose Stack
Create a docker-compose.yml for a WordPress stack:
- WordPress container (port 8080)
- MySQL container with persistent volume
- Use environment variables for database credentials
- Add health checks to both services
Exercise 3: Full Development Environment
Set up a development environment with:
- A Node.js API with hot reload
- PostgreSQL with initialization scripts
- Redis for caching
- Adminer for database management (using profiles)
- Separate development and production configurations
What's Next?
In the next post, we'll dive into Kubernetes Fundamentals—learning how to orchestrate containers at scale with:
- Kubernetes architecture (control plane, worker nodes)
- Pods, Deployments, and Services
- ConfigMaps and Secrets
- kubectl essential commands
- Setting up a local Kubernetes cluster
Summary and Key Takeaways
✅ Dockerfiles define how to build images — use FROM, COPY, RUN, CMD
✅ Always use .dockerignore to keep build context small
✅ Order Dockerfile instructions for optimal layer caching
✅ Multi-stage builds reduce image size by 70-99%
✅ Run containers as non-root users for security
✅ Docker Compose defines multi-container apps in one YAML file
✅ Services communicate by name within Compose networks
✅ Use override files to separate dev and production configs
✅ Health checks + depends_on ensure proper startup order
✅ Profiles conditionally include services like debug tools
Series: Docker & Kubernetes Learning Roadmap
Previous: Phase 1: Docker Fundamentals
Next: Phase 3: Kubernetes Fundamentals
Have questions about Dockerfiles or Docker Compose? Feel free to reach out or leave a comment!