Build a Personal Blog — Phase 5: Docker Compose

This is Phase 5 of the Build a Personal Blog series. Your blog has MDX rendering, a PostgreSQL database, tags, search, and pagination. It runs perfectly on your laptop with npm run dev. But what happens when you want to run it on a VPS? You'd need to install Node.js, set up PostgreSQL, manage environment variables, and hope everything works the same way it does locally. Docker fixes all of that.
Series: Build a Personal Blog — Complete Roadmap
Previous: Phase 4 — Tags, Search & Pagination
Next: Phase 6 — Deploy to Ubuntu VPS on Hostinger
Source Code: GitHub — personal-blog-phase-5
What You'll Build
By the end of this phase:
✅ A multi-stage Dockerfile that produces a tiny, production-ready Next.js image
✅ A docker-compose.yml for local development with hot reload
✅ A docker-compose.prod.yml override for production settings
✅ Automatic database migrations that run on container startup
✅ Health checks for both PostgreSQL and the Next.js app
✅ A .dockerignore that keeps images small and builds fast
✅ Environment variable management across dev, staging, and production
Time commitment: 2–3 hours
Prerequisites: Phase 4 — Tags, Search & Pagination
Why Docker for a Blog?
You might be thinking: "It's just a blog — do I really need Docker?" Fair question. Here's why it's worth it:
| Problem | Without Docker | With Docker |
|---|---|---|
| "Works on my machine" | Node.js version mismatch, OS differences | Identical environment everywhere |
| PostgreSQL setup | Install locally, manage users, ports | docker compose up — done |
| New machine onboarding | README with 15 steps | git clone + docker compose up |
| Production deploy | SSH in, install everything, pray | Same docker compose up |
| Rollback | "What version was running before?" | docker compose up with previous image tag |
The key insight: Docker doesn't just help with deployment — it makes local development reproducible. One command spins up your entire stack.
1. Enable Standalone Output in Next.js
Next.js can produce a standalone build — a self-contained folder with only the files needed to run the app, without node_modules. This is essential for small Docker images.
Open next.config.mjs (or next.config.ts) and add the output option:
// next.config.ts
import type { NextConfig } from "next";
const nextConfig: NextConfig = {
output: "standalone",
};
export default nextConfig;

What output: "standalone" does:
- Traces all imports and copies only the required node_modules files into .next/standalone/
- Produces a server.js entry point — run with node .next/standalone/server.js
- Reduces the final image from ~1 GB (full node_modules) to ~100–150 MB
Note: Static assets (the public/ folder and .next/static/) are NOT included in the standalone output. You'll copy them explicitly in the Dockerfile.
2. Create the Dockerfile
This is the heart of containerization. We use a multi-stage build with three stages:
- deps — install all node_modules
- builder — build the Next.js app
- runner — copy only what's needed to run
Create a Dockerfile at the project root:
# Dockerfile
# ============================================
# Stage 1: Install dependencies
# ============================================
FROM node:20-alpine AS deps
WORKDIR /app
# Copy package files
COPY package.json package-lock.json ./
# Install all dependencies (including devDependencies for build)
RUN npm ci
# ============================================
# Stage 2: Build the application
# ============================================
FROM node:20-alpine AS builder
WORKDIR /app
# Copy dependencies from deps stage
COPY --from=deps /app/node_modules ./node_modules
# Copy source code
COPY . .
# Build arguments for environment variables needed at build time
ARG DATABASE_URL
ENV DATABASE_URL=${DATABASE_URL}
# Run database migrations before building
RUN npx drizzle-kit push
# Build Next.js (standalone output)
RUN npm run build
# ============================================
# Stage 3: Production runner
# ============================================
FROM node:20-alpine AS runner
WORKDIR /app
# Set production environment
ENV NODE_ENV=production
ENV NEXT_TELEMETRY_DISABLED=1
# Create non-root user for security
RUN addgroup --system --gid 1001 nodejs && \
adduser --system --uid 1001 nextjs
# Copy the standalone server
COPY --from=builder /app/.next/standalone ./
# Copy static assets (not included in standalone)
COPY --from=builder /app/.next/static ./.next/static
# Copy public folder (images, fonts, etc.)
COPY --from=builder /app/public ./public
# Set correct permissions
RUN chown -R nextjs:nodejs /app
# Switch to non-root user
USER nextjs
# Expose port
EXPOSE 3000
# Set hostname for Next.js
ENV HOSTNAME="0.0.0.0"
ENV PORT=3000
# Start the server
CMD ["node", "server.js"]Why Multi-Stage?
Each stage starts from a clean base image. Only the final COPY --from=builder lines determine what ends up in the production image. This means:
- Build tools, TypeScript compiler, dev dependencies → not in the final image
- Source code (.ts, .tsx files) → not in the final image
- Only compiled output + runtime dependencies → in the final image
Why a Non-Root User?
Running as root inside a container is a security risk. If an attacker exploits your app, they'd have root access to the container (and potentially break out). The nextjs user can only access /app — nothing else.
3. Create .dockerignore
Without .dockerignore, Docker copies everything in your project into the build context — including node_modules, .git, and .next. This makes builds slow and images bloated.
Create .dockerignore at the project root:
# .dockerignore
# Dependencies (installed fresh in Docker)
node_modules
# Build output (rebuilt in Docker)
.next
# Version control
.git
.gitignore
# IDE and editor files
.vscode
.idea
*.swp
*.swo
# Environment files (mounted at runtime, not baked in)
.env
.env.local
.env.production
# Docker files (don't need to copy themselves)
Dockerfile
docker-compose*.yml
.dockerignore
# Documentation
README.md
PLANNING.md
CLAUDE.md
LICENSE
# OS files
.DS_Store
Thumbs.db

Why exclude .env files? Environment variables should be injected at runtime (via docker-compose.yml or --env-file), not baked into the image. Baking secrets into images is a security risk — anyone with access to the image can extract them.
4. Set Up Docker Compose for Development
Docker Compose lets you define and run multi-container applications. For local development, you need two services: the Next.js app and PostgreSQL.
Create docker-compose.yml at the project root:
# docker-compose.yml — Local development configuration
services:
# PostgreSQL database
db:
image: postgres:17-alpine
container_name: blog-db
restart: unless-stopped
environment:
POSTGRES_USER: ${POSTGRES_USER:-bloguser}
POSTGRES_PASSWORD: ${POSTGRES_PASSWORD:-blogpass}
POSTGRES_DB: ${POSTGRES_DB:-blogdb}
ports:
- "5432:5432"
volumes:
- postgres_data:/var/lib/postgresql/data
healthcheck:
test: ["CMD-SHELL", "pg_isready -U ${POSTGRES_USER:-bloguser} -d ${POSTGRES_DB:-blogdb}"]
interval: 10s
timeout: 5s
retries: 5
start_period: 10s
# Next.js application
app:
build:
context: .
dockerfile: Dockerfile
args:
DATABASE_URL: postgresql://${POSTGRES_USER:-bloguser}:${POSTGRES_PASSWORD:-blogpass}@db:5432/${POSTGRES_DB:-blogdb}
container_name: blog-app
restart: unless-stopped
ports:
- "3000:3000"
environment:
DATABASE_URL: postgresql://${POSTGRES_USER:-bloguser}:${POSTGRES_PASSWORD:-blogpass}@db:5432/${POSTGRES_DB:-blogdb}
NODE_ENV: production
depends_on:
db:
condition: service_healthy
healthcheck:
test: ["CMD-SHELL", "wget --no-verbose --tries=1 --spider http://localhost:3000/ || exit 1"]
interval: 30s
timeout: 10s
retries: 3
start_period: 30s
volumes:
postgres_data:
driver: local

Key Design Decisions
depends_on with condition: service_healthy: The app container won't start until PostgreSQL reports healthy. Without this, the app would crash on startup because the database isn't ready yet.
Named volume postgres_data: Your data persists across container restarts and rebuilds. Without this, you'd lose all data every time you run docker compose down.
Health checks: Docker knows when services are truly ready, not just "started". The pg_isready command checks PostgreSQL can accept connections. The wget command checks Next.js responds to HTTP requests.
Variable defaults (${POSTGRES_USER:-bloguser}): Works out of the box without any .env file, but you can override by creating one.
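Compose's `${VAR:-default}` syntax follows the same expansion rules as POSIX shell, so you can check the behavior directly in `sh` (a minimal sketch — the variable name matches the compose file, but any variable would behave the same):

```shell
#!/bin/sh
# Unset: the default after :- is substituted
unset POSTGRES_USER
echo "user=${POSTGRES_USER:-bloguser}"   # prints user=bloguser

# Set: the variable's own value wins
POSTGRES_USER=admin
echo "user=${POSTGRES_USER:-bloguser}"   # prints user=admin

# Set but empty: :- still substitutes (a plain ${VAR-default} would not)
POSTGRES_USER=
echo "user=${POSTGRES_USER:-bloguser}"   # prints user=bloguser
```

The third case is why `:-` is the right choice here: an accidentally empty `POSTGRES_USER=` in your `.env` still falls back to a working default.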
5. Create the .env File for Local Development
Create .env at the project root for local development:
# .env — Local development environment variables
# PostgreSQL
POSTGRES_USER=bloguser
POSTGRES_PASSWORD=blogpass
POSTGRES_DB=blogdb
# Database URL (used by Drizzle ORM and the app)
DATABASE_URL=postgresql://bloguser:blogpass@db:5432/blogdb

Important: Notice the hostname is db, not localhost. Inside Docker's network, services reference each other by the service name defined in docker-compose.yml.
Add .env to your .gitignore if it's not already there:
# .gitignore
.env
.env.local
.env.production

And create a .env.example file for documentation:
# .env.example — Copy to .env and fill in your values
# PostgreSQL
POSTGRES_USER=bloguser
POSTGRES_PASSWORD=changeme
POSTGRES_DB=blogdb
# Database URL
DATABASE_URL=postgresql://bloguser:changeme@db:5432/blogdb

6. Production Overrides with docker-compose.prod.yml
Docker Compose supports override files. The base docker-compose.yml defines the services, and docker-compose.prod.yml adds production-specific settings.
Create docker-compose.prod.yml:
# docker-compose.prod.yml — Production overrides
services:
db:
restart: always
logging:
driver: "json-file"
options:
max-size: "10m"
max-file: "3"
# Don't expose port 5432 to the host in production
ports: !override []
app:
restart: always
logging:
driver: "json-file"
options:
max-size: "10m"
max-file: "3"What Changed for Production?
| Setting | Dev | Production |
|---|---|---|
| Restart policy | unless-stopped | always (survives server reboot) |
| DB port exposed | 5432:5432 (for local tools) | Not exposed (only app can reach it) |
| Log rotation | Unlimited | 10 MB × 3 files (30 MB max) |
Log rotation is critical for production. Without it, container logs grow forever and eventually fill your disk. The json-file driver with max-size and max-file automatically rotates logs.
Removing the DB port in production means PostgreSQL is only accessible from within the Docker network. No one can connect to it from the internet, even if the firewall allows port 5432.
Running with Production Overrides
# Development (default)
docker compose up -d --build
# Production (merges both files)
docker compose -f docker-compose.yml -f docker-compose.prod.yml up -d --build

When you pass multiple -f flags, Docker Compose deep-merges the files. Values in the later file override the earlier one.
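You can inspect the merged result at any time with docker compose -f docker-compose.yml -f docker-compose.prod.yml config. For the db service, the effective configuration comes out roughly like this (a sketch of the merge, not literal tool output):

```yaml
# Effective db service after merging both files (sketch)
db:
  image: postgres:17-alpine        # from docker-compose.yml
  restart: always                  # prod file overrides unless-stopped
  logging:                         # added by the prod file
    driver: "json-file"
    options:
      max-size: "10m"
      max-file: "3"
  # ports: removed entirely by the `!override []` directive
```

Running the config subcommand before a production deploy is a cheap way to catch a typo in an override file before it takes down a container.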
7. Running Migrations Inside Docker
In Phase 3, you used npx drizzle-kit push to apply database schema changes. In Docker, migrations run during the build stage — before the production image is created.
Look back at the Dockerfile:
# In the builder stage
ARG DATABASE_URL
ENV DATABASE_URL=${DATABASE_URL}
# Run database migrations before building
RUN npx drizzle-kit push

This means migrations are applied every time you build the image. For a personal blog, this approach is simple and effective.
Alternative: Runtime Migrations
If you prefer to run migrations when the container starts (not when the image is built), you can use an entrypoint script:
#!/bin/sh
# scripts/docker-entrypoint.sh
set -e
echo "Running database migrations..."
npx drizzle-kit push
echo "Starting Next.js server..."
exec node server.js

Then update the Dockerfile's runner stage:
# Copy migration files needed at runtime
COPY --from=builder /app/drizzle ./drizzle
COPY --from=builder /app/drizzle.config.ts ./drizzle.config.ts
COPY --from=builder /app/lib/db ./lib/db
COPY --from=deps /app/node_modules ./node_modules
COPY scripts/docker-entrypoint.sh ./docker-entrypoint.sh
RUN chmod +x docker-entrypoint.sh
CMD ["./docker-entrypoint.sh"]Trade-off: Build-time migrations are simpler but require database access during build. Runtime migrations don't need DB access at build time but add startup latency and require more files in the production image.
For this series, we'll stick with build-time migrations since the database is always available via Docker Compose.
8. Environment Variable Management
Managing environment variables across environments is one of the trickiest parts of Docker. Here's the strategy:
The Three Files
| File | In Git? | Purpose |
|---|---|---|
.env.example | ✅ Yes | Documents required variables with placeholder values |
.env | ❌ No | Local development values (auto-loaded by Docker Compose) |
.env.production | ❌ No | Production secrets (only exists on the VPS) |
Using a Production .env File
On your VPS, you'll create .env.production with real credentials:
# .env.production (on VPS only — NEVER commit this)
POSTGRES_USER=bloguser
POSTGRES_PASSWORD=s3cur3_pr0duct10n_p4ss!
POSTGRES_DB=myblog
DATABASE_URL=postgresql://bloguser:s3cur3_pr0duct10n_p4ss!@db:5432/myblogThen reference it when starting:
docker compose -f docker-compose.yml -f docker-compose.prod.yml \
--env-file .env.production up -d --build

9. Project Structure After Phase 5
Your project should now have these new files:
my-blog/
├── Dockerfile ← NEW (multi-stage build)
├── .dockerignore ← NEW (keeps builds fast)
├── docker-compose.yml ← NEW (dev configuration)
├── docker-compose.prod.yml ← NEW (production overrides)
├── .env ← NEW (local dev variables, not in Git)
├── .env.example ← NEW (committed template)
├── next.config.ts ← MODIFIED (output: "standalone")
├── .gitignore ← MODIFIED (added .env files)
├── app/
│ ├── api/
│ ├── blog/
│ └── ...
├── components/
├── content/posts/
├── lib/
│ ├── db/
│ │ ├── index.ts
│ │ ├── schema.ts
│ │ └── queries.ts
│ └── posts.ts
├── public/
│ └── images/
└── ...

10. Build and Test Locally
Time to verify everything works. Run these commands step by step:
Start the Stack
# Build and start all services
docker compose up -d --build

Expected output:
[+] Building 45.2s (17/17) FINISHED
=> [deps] npm ci 12.3s
=> [builder] npm run build 28.4s
=> [runner] COPY --from=builder 0.3s
[+] Running 3/3
✔ Network my-blog_default Created
✔ Container blog-db Healthy
✔ Container blog-app  Started

Check Service Status
docker compose ps

You should see both containers running and healthy:
NAME       IMAGE                STATUS                   PORTS
blog-db    postgres:17-alpine   Up 2 minutes (healthy)   0.0.0.0:5432->5432/tcp
blog-app   my-blog-app          Up 1 minute (healthy)    0.0.0.0:3000->3000/tcp

Test the Application
- Open http://localhost:3000 — your blog should load
- Click around — posts, tags, search should all work
- Check the view counter — it should increment (database is working)
View Logs
# All services
docker compose logs -f
# Just the app
docker compose logs -f app
# Just the database
docker compose logs -f db

Stop Everything
# Stop containers (data persists in volume)
docker compose down
# Stop and delete data volume (fresh start)
docker compose down -v

11. Useful Docker Commands
Here's a quick reference for commands you'll use daily:
# Rebuild after code changes
docker compose up -d --build
# Restart a single service
docker compose restart app
# Open a shell inside the app container
docker compose exec app sh
# Open a psql session in the database
docker compose exec db psql -U bloguser -d blogdb
# Check image sizes
docker images | grep blog
# Remove dangling images (free disk space)
docker image prune -f
# View resource usage
docker stats

Checking the Image Size
After building, check your image size:
docker images | grep blog

Expected:
my-blog-app   latest   abc123   2 minutes ago   ~150 MB

Compare that to a naive Dockerfile that copies all of node_modules:
my-blog-app   latest   def456   5 minutes ago   ~1.2 GB

The multi-stage build with standalone output saves roughly 1 GB per image.
Common Issues
Container can't connect to database
The most common cause is using localhost instead of db in DATABASE_URL:
# ❌ Wrong — localhost refers to the app container itself
DATABASE_URL=postgresql://bloguser:blogpass@localhost:5432/blogdb
# ✅ Correct — 'db' is the service name in docker-compose.yml
DATABASE_URL=postgresql://bloguser:blogpass@db:5432/blogdb

Build fails with "Cannot find module"
This usually means the Dockerfile's COPY commands are missing a file. Check that .dockerignore isn't excluding something you need. Common culprits:
# These should NOT be in .dockerignore
# drizzle.config.ts ← needed for migrations
# lib/ ← needed for build
# content/           ← needed for MDX posts

Database data lost after docker compose down
By default, docker compose down stops and removes containers but keeps volumes. If you used docker compose down -v, the -v flag deletes volumes too — that's where PostgreSQL stores data.
# Safe — data persists
docker compose down
# Destructive — data deleted
docker compose down -v

Build is slow
Docker caches each layer. If you change package.json, the npm ci layer is invalidated and all subsequent layers rebuild. The Dockerfile above optimizes this by copying package.json first, then source code — so dependency installation is cached when only source code changes.
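The ordering matters because Docker invalidates a layer — and every layer after it — whenever that layer's inputs change. Compare the two patterns (a sketch, stripped down to the caching-relevant lines):

```dockerfile
# ❌ Cache-hostile: any source file change invalidates COPY . .,
#    so npm ci re-runs on every build
COPY . .
RUN npm ci
RUN npm run build

# ✅ Cache-friendly: npm ci re-runs only when package files change;
#    editing source code reuses the cached dependency layer
COPY package.json package-lock.json ./
RUN npm ci
COPY . .
RUN npm run build
```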
If builds are still slow, check that .dockerignore excludes node_modules and .next. Without this, Docker sends gigabytes of context to the build daemon.
Port 5432 already in use
If you have PostgreSQL installed locally, it might conflict with the Docker container:
# Check what's using port 5432
lsof -i :5432
# Option 1: Stop local PostgreSQL
brew services stop postgresql
# Option 2: Change the Docker port mapping
# In docker-compose.yml, change "5432:5432" to "5433:5432"
# Then connect with port 5433 from your host

Summary
In this phase you:
✅ Enabled standalone output in Next.js for minimal Docker images
✅ Created a multi-stage Dockerfile (deps → builder → runner) that produces ~150 MB images
✅ Set up Docker Compose with PostgreSQL health checks and named volumes
✅ Added production overrides with log rotation and restricted port access
✅ Configured database migrations to run automatically during build
✅ Established environment variable management across dev and production
Your blog now runs in containers — the same containers that will run on your production VPS. No more "works on my machine" surprises.
What's Next
In Phase 6, you'll take these Docker containers and deploy them to an Ubuntu VPS on Hostinger. You'll set up SSH access, install Docker on the server, configure Nginx as a reverse proxy, and get free SSL certificates from Let's Encrypt. One docker compose up on the VPS and your blog is live.
Next Post: Phase 6 — Deploy to Ubuntu VPS on Hostinger
Series Index
| Post | Title | Status |
|---|---|---|
| BLOG-1 | Build a Personal Blog — Roadmap | ✅ Complete |
| BLOG-2 | Phase 1: Project Setup — Next.js 16 + ShadCN/UI | ✅ Complete |
| BLOG-3 | Phase 2: MDX On-Demand Rendering | ✅ Complete |
| BLOG-4 | Phase 3: PostgreSQL + Drizzle ORM | ✅ Complete |
| BLOG-5 | Phase 4: Tags, Search & Pagination | ✅ Complete |
| BLOG-6 | Phase 5: Docker Compose | ✅ You are here |
| BLOG-7 | Phase 6: Deploy to Ubuntu VPS on Hostinger | ✅ Complete |
| BLOG-8 | Phase 7: Custom Domain Setup on Hostinger | ✅ Complete |