
Writing a Dockerfile: Build Your Own Docker Images

Tags: docker, dockerfile, devops, containers, tutorial

In the previous post, you learned what Docker images and containers are. You pulled images from Docker Hub and ran them. But how do you package your own app into an image?

That's what a Dockerfile is for.

What is a Dockerfile?

A Dockerfile is a plain text file containing a set of instructions that Docker uses to build an image. Think of it as a recipe:

  • Each instruction creates a layer in the image
  • Docker reads from top to bottom and executes each instruction in order
  • Layers are cached — if an instruction hasn't changed, Docker reuses the cached layer instead of rebuilding it

That last point is key: good Dockerfile structure means fast builds.


The Core Instructions

FROM

Every Dockerfile starts with FROM. It specifies the base image your new image will build on top of.

FROM node:24
FROM node:24-alpine    # Lighter image using Alpine Linux
FROM ubuntu:22.04

You're not starting from nothing — you're standing on the shoulders of existing images. node:24 already has Node.js and npm installed. ubuntu:22.04 gives you a full Ubuntu environment.

WORKDIR

Sets the working directory inside the container for all subsequent instructions (RUN, COPY, CMD, etc.). Docker creates the directory if it doesn't exist.

WORKDIR /app

Without WORKDIR, files end up scattered in the root / directory — messy and hard to maintain.

COPY

Copies files or directories from your host machine into the image.

COPY index.js .          # Copy index.js into WORKDIR
COPY . .                 # Copy everything from current directory into WORKDIR
COPY package*.json ./    # Copy package.json and package-lock.json

The . on the right means "into the current working directory" (your WORKDIR).

RUN

Executes a command during the build process, creating a new layer. Used for installing dependencies, compiling code, or any setup step.

RUN npm install express
RUN apt-get update && apt-get install -y curl

Each RUN instruction creates one layer. Chaining commands with && keeps layers minimal.
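
Chaining matters for cleanup too: files deleted by a later RUN still exist in the earlier layer, so install and clean up in the same instruction. A sketch for Debian-based images (curl is just an example package):

```dockerfile
# Install and clean up in a single layer, so the apt cache
# never gets baked into any layer of the final image
RUN apt-get update && \
    apt-get install -y --no-install-recommends curl && \
    rm -rf /var/lib/apt/lists/*
```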

EXPOSE

Documents which port the container will listen on. This is informational only — it doesn't automatically publish the port to your host machine. You still need the -p flag with docker run.

EXPOSE 3000

Think of it as documentation for whoever uses your image.

CMD

Specifies the default command to run when a container starts. Each Dockerfile should have one CMD — if there are multiple, only the last one takes effect. It can be overridden by passing arguments to docker run.

CMD ["node", "index.js"]
CMD ["npm", "start"]

Always use the JSON array (exec) form: CMD ["executable", "arg1", "arg2"]. The string form runs through a shell and can cause issues with signal handling.
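
Because CMD is only a default, anything you pass after the image name in docker run replaces it. A quick sketch (my-node-app is the image built later in this post; any image works):

```shell
# Runs the default CMD: node index.js
docker run my-node-app

# Overrides CMD entirely and drops into a shell instead
docker run -it my-node-app bash
```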

ENV

Sets environment variables inside the image and all containers created from it.

ENV NODE_ENV=production
ENV PORT=3000

These are accessible at both build time and runtime.
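
Your application code reads these like any other environment variable: in Node.js, through process.env. A minimal sketch (the fallback values are illustrative, for when the variables are unset):

```javascript
// PORT comes from ENV PORT=3000 in the Dockerfile; fall back if unset
const port = parseInt(process.env.PORT || '3000', 10);

// NODE_ENV=production is a common switch for framework optimizations
const isProduction = process.env.NODE_ENV === 'production';

console.log(`port=${port}, production=${isProduction}`);
```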


A Real Example: Node.js App

Here's a complete Dockerfile for a simple Node.js Express app.

index.js:

const express = require('express');
const app = express();
 
app.get('/', (req, res) => {
  res.send('Hello from Docker!');
});
 
app.listen(3000, () => {
  console.log('Server running on port 3000');
});

Dockerfile:

FROM node:24
WORKDIR /app
 
RUN npm install express
COPY index.js .
 
EXPOSE 3000
 
CMD ["node", "index.js"]

Build and run:

# Build the image (tag it as "my-node-app")
docker build -t my-node-app .
 
# Run a container, mapping host port 3000 to container port 3000
docker run -p 3000:3000 my-node-app

Visit http://localhost:3000 — you'll see "Hello from Docker!"
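
The same check from a terminal, assuming the container from the docker run above is still running:

```shell
# The response body should be: Hello from Docker!
curl http://localhost:3000
```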

What each step does:

  1. FROM node:24 — start with Node.js 24 (includes node and npm)
  2. WORKDIR /app — all following commands run inside /app
  3. RUN npm install express — install Express into the image during build
  4. COPY index.js . — copy source code into /app
  5. EXPOSE 3000 — document that the app listens on port 3000
  6. CMD ["node", "index.js"] — start the app when the container runs

Instruction Order Matters: Optimizing Cache

Docker caches each layer. If a layer changes, all subsequent layers are invalidated and rebuilt from scratch.

The rule: put things that change least at the top, things that change most at the bottom.

Here's the wrong order:

FROM node:24
WORKDIR /app
 
COPY . .               # Copy everything first
RUN npm install        # Then install dependencies
 
EXPOSE 3000
CMD ["node", "index.js"]

The problem: every time you change any source file — even a single line in index.js — the COPY . . layer changes, which invalidates the RUN npm install cache. npm install runs from scratch every time. Slow.

Here's the right order:

FROM node:24
WORKDIR /app
 
# Copy package files first — only rebuilds when dependencies change
COPY package*.json ./
RUN npm install
 
# Copy source code last — changes most often
COPY . .
 
EXPOSE 3000
CMD ["node", "index.js"]

Now npm install only reruns when package.json or package-lock.json changes. For day-to-day development, your builds are fast because only the COPY . . layer and below are invalidated.
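
One more cache-friendly habit: a .dockerignore file keeps local artifacts like node_modules out of the build context, so COPY . . neither bloats the image nor invalidates layers over files the container never needs. A typical sketch:

```
node_modules
npm-debug.log
.git
Dockerfile
```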


Multi-Stage Builds

Single-stage Dockerfiles include everything: compiler, build tools, dev dependencies, source files. The production image doesn't need any of that — it just needs the compiled output.

Multi-stage builds solve this by using multiple FROM instructions in one Dockerfile. Each FROM starts a new stage. The final image only contains the last stage.

# Stage 1: Build
FROM node:24 AS builder
WORKDIR /app
 
COPY package*.json ./
RUN npm install
 
COPY . .
RUN npm run build        # Compile TypeScript, bundle, etc.
 
# Stage 2: Production
FROM node:24-alpine AS production
WORKDIR /app
 
COPY --from=builder /app/dist ./dist         # Only copy the build output
COPY --from=builder /app/package*.json ./
RUN npm install --omit=dev                   # Only production dependencies
 
EXPOSE 3000
CMD ["node", "dist/index.js"]

Key details:

  • AS builder — names the stage so you can reference it later
  • COPY --from=builder — copies files from the builder stage into the current stage
  • The production stage uses node:24-alpine — a much smaller base image
  • Docker discards all previous stages from the final image

The size difference is significant:

| Approach | Base Image | Estimated Size |
| --- | --- | --- |
| Single-stage | node:24 | ~1.1 GB |
| Multi-stage | node:24-alpine | ~200 MB |

You can also build a specific stage (useful for debugging):

# Build the full image (defaults to the last stage)
docker build -t my-app .
 
# Build only up to the builder stage
docker build --target builder -t my-app-builder .

Dockerfile Instructions Cheat Sheet

| Instruction | Purpose |
| --- | --- |
| FROM image:tag | Set base image |
| WORKDIR /path | Set working directory |
| COPY src dest | Copy files from host into image |
| RUN command | Execute command during build |
| EXPOSE port | Document the port the app listens on |
| CMD ["cmd", "arg"] | Default command when container starts |
| ENV KEY=value | Set environment variable |
| ARG NAME=default | Build-time variable (not available at runtime) |
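
ARG is the only instruction in this table not shown above. Unlike ENV, its value exists only while the image builds; a short sketch (the variable names are illustrative):

```dockerfile
# Declared before FROM, so it can parameterize the base image;
# override at build time with: docker build --build-arg NODE_VERSION=22 .
ARG NODE_VERSION=24
FROM node:${NODE_VERSION}-alpine

# ARGs reset after FROM and must be redeclared to use them in this stage
ARG APP_VERSION=dev
# Promote to ENV if the running container needs the value too
ENV APP_VERSION=${APP_VERSION}
```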

Summary

✅ A Dockerfile is a recipe for building a Docker image — Docker reads it top to bottom
✅ Each instruction creates a cached layer — unchanged layers are reused for fast builds
✅ Put COPY package*.json and RUN npm install before COPY . . to maximize cache hits
✅ Use CMD ["executable", "arg"] (exec form) for proper signal handling
✅ Multi-stage builds drastically reduce image size by discarding build-time tools
✅ EXPOSE is documentation only — use -p with docker run to actually publish ports


Series: Docker & Kubernetes Learning Roadmap
Previous: Docker Images and Containers Explained
Next: Docker Networking & Volumes


Ready to go further? The next post covers Docker networking and volumes — how containers talk to each other and how to persist data.
