Dockerfile Best Practices & Multi-Stage Builds

Welcome to the first Deep Dive in the Docker & Kubernetes series! In Phase 2 you learned how to write Dockerfiles and use Docker Compose. Now it's time to write Dockerfiles that are production-ready — small, secure, fast to build, and easy to maintain.
A poorly written Dockerfile can produce images that are 1GB+, take minutes to build, run as root, and leak secrets. A well-written one produces images under 50MB, builds in seconds (thanks to caching), runs as a non-root user, and contains only what's needed to run your app.
What You'll Learn
✅ Reduce image size by 70-95% with the right base images
✅ Master layer caching for lightning-fast builds
✅ Use multi-stage builds to separate build-time and runtime dependencies
✅ Harden images with non-root users and read-only filesystems
✅ Write optimized Dockerfiles for Node.js, Python, Go, Java, and Rust
✅ Leverage BuildKit features like cache mounts and build secrets
✅ Scan images for vulnerabilities and debug build issues
✅ Integrate Docker builds into CI/CD pipelines
Time commitment: 3–5 days, 1–2 hours daily
Prerequisites: Phase 2: Docker Compose & Multi-Container Apps
Part 1: Image Size Optimization
Why Image Size Matters
Every megabyte in your Docker image has consequences:
| Impact | Small Image (50MB) | Large Image (1.2GB) |
|---|---|---|
| Pull time | ~2 seconds | ~45 seconds |
| Storage | 50MB × 100 nodes = 5GB | 1.2GB × 100 nodes = 120GB |
| Attack surface | Minimal packages = fewer CVEs | Full OS = hundreds of CVEs |
| Startup time | Seconds (less to download) | Minutes on cold start |
| CI/CD costs | Fast pipelines, less bandwidth | Slow pipelines, expensive egress |
Choosing the Right Base Image
The base image is the single biggest factor in your final image size:
| Base Image | Size | Shell | Package Manager | Best For |
|---|---|---|---|---|
| ubuntu / debian | 77-130MB | Yes | apt | Development, debugging |
| *-slim | 50-130MB | Yes | apt (minimal) | Production (when you need a shell) |
| *-alpine | 5-50MB | Yes | apk | Production (small footprint) |
| distroless | 2-50MB | No | None | Production (maximum security) |
| scratch | 0MB | No | None | Statically compiled binaries (Go, Rust) |
Alpine caveats: Alpine uses musl libc instead of glibc. This can cause subtle compatibility issues with some libraries. If you hit strange segfaults or performance issues, try slim instead.
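If you need to confirm which libc a base image actually ships, a quick check (image names and file paths here are typical, not guaranteed) is to look for the musl loader:

```shell
# Alpine images ship the musl loader:
docker run --rm python:3.12-alpine sh -c 'ls /lib/ld-musl-*'
# Debian-based images ship glibc; ldd reports its version:
docker run --rm python:3.12-slim ldd --version
```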
The .dockerignore File
Before building, Docker sends the entire build context to the daemon. Without .dockerignore, it sends everything — including node_modules, .git, and test files.
# Version control
.git
.gitignore
# Dependencies (will be installed fresh)
node_modules
vendor
__pycache__
*.pyc
# Build artifacts
dist
build
target
*.o
*.exe
# Development files
.env
.env.*
*.md
LICENSE
docker-compose*.yml
Dockerfile*
# IDE
.vscode
.idea
*.swp
# Tests (not needed in production image)
tests
test
__tests__
*.test.js
*.spec.js
coverage
.nyc_output
Impact: A Node.js project with node_modules can have a 500MB+ build context. With .dockerignore, it drops to a few MB.
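One way to verify your .dockerignore is working (assuming BuildKit) is to watch the context-transfer line in the build output:

```shell
# BuildKit reports how much context it sends to the daemon, e.g.
# "=> => transferring context: 2.35MB"; rerun after editing .dockerignore
docker build --progress=plain . 2>&1 | grep -i "transferring context"
```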
Cleaning Up Package Manager Caches
Every RUN instruction creates a new layer. Install and clean up in the same layer:
# ❌ BAD: Cache stays in the layer
RUN apt-get update
RUN apt-get install -y curl wget
RUN rm -rf /var/lib/apt/lists/*
# ✅ GOOD: Install and clean in one layer
RUN apt-get update && \
apt-get install -y --no-install-recommends \
curl \
wget \
&& rm -rf /var/lib/apt/lists/*
# ❌ BAD: pip cache stays in the layer
RUN pip install -r requirements.txt
# ✅ GOOD: No cache
RUN pip install --no-cache-dir -r requirements.txt
# ❌ BAD: apk cache stays
RUN apk add curl
# ✅ GOOD: No cache
RUN apk add --no-cache curl
Part 2: Layer Caching
Docker builds images layer by layer. If a layer hasn't changed, Docker reuses the cached version. Understanding this is the key to fast builds.
How Layer Caching Works
Rule: When a layer changes, all subsequent layers are invalidated.
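As a concrete sketch (file names assumed), consider a Dockerfile ordered so the expensive layer caches well:

```dockerfile
FROM node:20-alpine
WORKDIR /app
COPY package.json package-lock.json ./   # changes rarely
RUN npm ci                               # expensive; cached until package files change
COPY . .                                 # changes often; invalidates everything below
RUN npm run build
CMD ["node", "dist/server.js"]
```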
For example, if you change your application code but not package.json, Docker reuses the cached `npm ci` layer (which can take 30+ seconds to rebuild). Only `COPY . .` and `npm run build` re-execute.
Instruction Ordering
The golden rule: Copy files that change least frequently first.
# ✅ GOOD: Dependencies change rarely, source code changes often
FROM node:20-alpine
WORKDIR /app
# Layer 1: Dependencies (changes rarely)
COPY package.json package-lock.json ./
RUN npm ci --production
# Layer 2: Source code (changes often)
COPY . .
CMD ["node", "server.js"]
# ❌ BAD: Any source code change invalidates the npm install cache
FROM node:20-alpine
WORKDIR /app
COPY . .
RUN npm ci --production
CMD ["node", "server.js"]
Separating Dev and Prod Dependencies
FROM node:20-alpine
WORKDIR /app
# Install production dependencies only
COPY package.json package-lock.json ./
RUN npm ci --production
# Copy source code
COPY . .
CMD ["node", "server.js"]
For Python:
FROM python:3.12-slim
WORKDIR /app
# Install dependencies first
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
# Copy application code
COPY . .
CMD ["uvicorn", "main:app", "--host", "0.0.0.0"]
Cache Busting
Sometimes you need to force a layer to rebuild. Common technique — use build arguments:
FROM ubuntu:22.04
# This arg changes the cache key — increment to force fresh packages
ARG CACHE_BUST=1
RUN apt-get update && apt-get install -y curl
# Force rebuild of this layer
docker build --build-arg CACHE_BUST=$(date +%s) .
Part 3: Multi-Stage Builds
Multi-stage builds are the most powerful Dockerfile technique. They let you use one image for building and a completely different (smaller) image for running.
The Problem
A typical build image contains compilers, build tools, dev dependencies — things you don't need at runtime.
# ❌ Single-stage: 1.2GB image with build tools included
FROM node:20
WORKDIR /app
COPY . .
RUN npm install
RUN npm run build
CMD ["node", "dist/server.js"]
# Image contains: Node.js, npm, node_modules (dev + prod), source code, build tools
The Solution: Multi-Stage
# Stage 1: Build
FROM node:20-alpine AS builder
WORKDIR /app
COPY package.json package-lock.json ./
RUN npm ci
COPY . .
RUN npm run build
# Stage 2: Production
FROM node:20-alpine AS production
WORKDIR /app
# Only copy what we need from the build stage
COPY --from=builder /app/dist ./dist
COPY --from=builder /app/package.json /app/package-lock.json ./
RUN npm ci --production
# Security: run as non-root
RUN addgroup -S appgroup && adduser -S appuser -G appgroup
USER appuser
EXPOSE 3000
CMD ["node", "dist/server.js"]
Result: Build stage has everything needed to compile. Production stage only has the compiled output and production dependencies.
Named Stages and Targets
You can have multiple stages and build specific ones:
# Base stage: shared dependencies
FROM node:20-alpine AS base
WORKDIR /app
COPY package.json package-lock.json ./
# Development stage
FROM base AS development
RUN npm install
COPY . .
CMD ["npm", "run", "dev"]
# Build stage
FROM base AS builder
RUN npm ci
COPY . .
RUN npm run build
# Test stage
FROM builder AS test
RUN npm run test
# Production stage
FROM node:20-alpine AS production
WORKDIR /app
COPY --from=builder /app/dist ./dist
COPY --from=builder /app/package.json /app/package-lock.json ./
RUN npm ci --production
RUN addgroup -S appgroup && adduser -S appuser -G appgroup
USER appuser
CMD ["node", "dist/server.js"]
Build specific targets:
# Build only the dev image
docker build --target development -t my-app:dev .
# Build only the test stage (runs tests during build)
docker build --target test -t my-app:test .
# Build the production image (default — last stage)
docker build -t my-app:prod .
Copying from External Images
You can copy files from any image, not just build stages:
FROM alpine:3.19
# Copy a binary from another image
COPY --from=golang:1.22-alpine /usr/local/go/bin/go /usr/local/bin/go
# Copy nginx config from the official nginx image
COPY --from=nginx:alpine /etc/nginx/nginx.conf /etc/nginx/nginx.conf
Part 4: Security Hardening
Running containers with default settings is a security risk. Here are the essential hardening techniques.
Running as Non-Root
By default, containers run as root. If an attacker escapes the container, they're root on the host.
# ❌ BAD: Running as root (default)
FROM node:20-alpine
WORKDIR /app
COPY . .
CMD ["node", "server.js"]
# ✅ GOOD: Create and use a non-root user
FROM node:20-alpine
WORKDIR /app
# Create a system group and user
RUN addgroup -S appgroup && adduser -S appuser -G appgroup
# Copy files and set ownership
COPY --chown=appuser:appgroup . .
# Switch to non-root user
USER appuser
CMD ["node", "server.js"]For Debian/Ubuntu-based images:
RUN groupadd -r appgroup && useradd -r -g appgroup -d /app -s /sbin/nologin appuser
Read-Only Filesystem
Prevent writes to the container filesystem — any modifications should go to mounted volumes:
# Run with read-only filesystem
docker run --read-only --tmpfs /tmp my-app:latest
In Kubernetes:
securityContext:
readOnlyRootFilesystem: true
runAsNonRoot: true
runAsUser: 1000
Minimizing Attack Surface
# ✅ Remove unnecessary packages after build
FROM python:3.12-slim AS builder
RUN apt-get update && apt-get install -y gcc libpq-dev
COPY requirements.txt .
RUN pip install --no-cache-dir --prefix=/install -r requirements.txt
FROM python:3.12-slim
# Only copy the installed packages — no gcc, no build tools
COPY --from=builder /install /usr/local
COPY . /app
WORKDIR /app
USER 1000
CMD ["uvicorn", "main:app", "--host", "0.0.0.0"]
Scanning for Vulnerabilities
Scan your images before pushing to production:
# Trivy — popular open-source scanner
trivy image my-app:latest
# Docker Scout (built into Docker Desktop)
docker scout cves my-app:latest
# Snyk
snyk container test my-app:latest
# Grype
grype my-app:latest
Example Trivy output:
my-app:latest (alpine 3.19)
============================
Total: 3 (UNKNOWN: 0, LOW: 1, MEDIUM: 1, HIGH: 1, CRITICAL: 0)
┌───────────────┬──────────────────┬──────────┬─────────────┐
│ Library │ Vulnerability │ Severity │ Version │
├───────────────┼──────────────────┼──────────┼─────────────┤
│ libcrypto3 │ CVE-2024-XXXXX │ HIGH │ 3.1.4-r1 │
│ libssl3 │ CVE-2024-XXXXX │ MEDIUM │ 3.1.4-r1 │
│ busybox │ CVE-2024-XXXXX │ LOW │ 1.36.1-r15 │
└───────────────┴──────────────────┴──────────┴─────────────┘
Don't Leak Secrets
# ❌ BAD: Secret visible in image history
FROM node:20-alpine
ENV API_KEY=sk-12345
COPY . .
CMD ["node", "server.js"]
# ❌ BAD: Secret in a layer (even if deleted later)
COPY .env .
RUN source .env && npm run build
RUN rm .env # Still in the previous layer!
# ✅ GOOD: Use BuildKit secrets (never stored in image)
# syntax=docker/dockerfile:1
FROM node:20-alpine
RUN --mount=type=secret,id=api_key \
API_KEY=$(cat /run/secrets/api_key) npm run build
# Pass secret at build time (never stored in image layers)
docker build --secret id=api_key,src=./api_key.txt .
Part 5: Language-Specific Best Practices
Node.js
# syntax=docker/dockerfile:1
FROM node:20-alpine AS builder
WORKDIR /app
# Use npm ci for reproducible builds (respects package-lock.json exactly)
COPY package.json package-lock.json ./
RUN npm ci
COPY . .
RUN npm run build
RUN npm prune --production
FROM node:20-alpine
WORKDIR /app
# Use tini as PID 1 for proper signal handling
RUN apk add --no-cache tini
ENTRYPOINT ["/sbin/tini", "--"]
# Copy only production artifacts
COPY --from=builder /app/dist ./dist
COPY --from=builder /app/node_modules ./node_modules
COPY --from=builder /app/package.json ./
RUN addgroup -S appgroup && adduser -S appuser -G appgroup
USER appuser
EXPOSE 3000
CMD ["node", "dist/server.js"]
Key points:
- Use `npm ci` (not `npm install`) for deterministic builds
- `npm prune --production` removes devDependencies
- Tini handles SIGTERM properly (graceful shutdown)
- Final image: ~80MB instead of ~1GB
Python
# syntax=docker/dockerfile:1
FROM python:3.12-slim AS builder
WORKDIR /app
# Install build dependencies
RUN apt-get update && \
apt-get install -y --no-install-recommends gcc libpq-dev && \
rm -rf /var/lib/apt/lists/*
# Install Python dependencies to a custom prefix
COPY requirements.txt .
RUN pip install --no-cache-dir --prefix=/install -r requirements.txt
FROM python:3.12-slim
WORKDIR /app
# Copy only installed packages from builder
COPY --from=builder /install /usr/local
# Copy application code
COPY . .
# Non-root user
RUN useradd -r -s /sbin/nologin appuser
USER appuser
EXPOSE 8000
CMD ["uvicorn", "main:app", "--host", "0.0.0.0", "--port", "8000"]
Key points:
- Build dependencies (gcc) only in builder stage
- `--prefix=/install` puts packages in a clean directory for copying
- Final image has no compiler, no build tools — just Python + your packages
- Final image: ~150MB instead of ~1GB
Go
Go produces statically compiled binaries — perfect for minimal images:
# syntax=docker/dockerfile:1
FROM golang:1.22-alpine AS builder
WORKDIR /app
# Download dependencies first (cache-friendly)
COPY go.mod go.sum ./
RUN go mod download
# Build the binary
COPY . .
RUN CGO_ENABLED=0 GOOS=linux go build -ldflags="-s -w" -o /app/server ./cmd/server
# Final image: scratch = 0MB base
FROM scratch
# Copy SSL certificates (needed for HTTPS calls)
COPY --from=builder /etc/ssl/certs/ca-certificates.crt /etc/ssl/certs/
# Copy the binary
COPY --from=builder /app/server /server
EXPOSE 8080
ENTRYPOINT ["/server"]
Key points:
- `CGO_ENABLED=0` produces a fully static binary
- `-ldflags="-s -w"` strips debug info (smaller binary)
- `scratch` is an empty image — nothing but your binary
- Final image: ~5-15MB instead of ~300MB
- No shell, no package manager, no attack surface
Java (Spring Boot)
# syntax=docker/dockerfile:1
FROM eclipse-temurin:21-jdk-alpine AS builder
WORKDIR /app
# Copy Gradle/Maven files first for caching
COPY build.gradle settings.gradle gradlew ./
COPY gradle ./gradle
RUN ./gradlew dependencies --no-daemon
# Build the application
COPY src ./src
RUN ./gradlew bootJar --no-daemon
# Extract layers for better caching
RUN java -Djarmode=layertools -jar build/libs/*.jar extract
FROM eclipse-temurin:21-jre-alpine
WORKDIR /app
# Copy Spring Boot layers (most to least likely to change)
COPY --from=builder /app/dependencies/ ./
COPY --from=builder /app/spring-boot-loader/ ./
COPY --from=builder /app/snapshot-dependencies/ ./
COPY --from=builder /app/application/ ./
RUN addgroup -S appgroup && adduser -S appuser -G appgroup
USER appuser
EXPOSE 8080
ENTRYPOINT ["java", "org.springframework.boot.loader.launch.JarLauncher"]
Key points:
- Use JRE (not JDK) in production — no compiler needed
- Spring Boot layer extraction enables better Docker caching
- Dependencies rarely change → cached layer
- Application code changes often → only the top layer rebuilds
- Final image: ~200MB instead of ~500MB
Rust
# syntax=docker/dockerfile:1
FROM rust:1.77-alpine AS builder
WORKDIR /app
# Install musl target for static linking
RUN apk add --no-cache musl-dev
# Cache dependencies
COPY Cargo.toml Cargo.lock ./
RUN mkdir src && echo "fn main() {}" > src/main.rs
RUN cargo build --release --target=x86_64-unknown-linux-musl
RUN rm -rf src
# Build real application
COPY src ./src
RUN touch src/main.rs && cargo build --release --target=x86_64-unknown-linux-musl
FROM scratch
COPY --from=builder /app/target/x86_64-unknown-linux-musl/release/myapp /myapp
COPY --from=builder /etc/ssl/certs/ca-certificates.crt /etc/ssl/certs/
EXPOSE 8080
ENTRYPOINT ["/myapp"]Key points:
- musl target produces fully static binaries (no libc dependency)
- Dummy `main.rs` trick caches dependency compilation
- `scratch` base — binary only
- Final image: ~5-10MB
Part 6: BuildKit Features
BuildKit is the modern Docker build engine. It's faster, more flexible, and has features not available in the legacy builder.
Enabling BuildKit
# Option 1: Environment variable
export DOCKER_BUILDKIT=1
docker build .
# Option 2: Use docker buildx (BuildKit is always enabled)
docker buildx build .
# Option 3: Set as default in Docker daemon config
# /etc/docker/daemon.json
{
"features": { "buildkit": true }
}
Note: BuildKit is the default builder in Docker Engine 23.0+ and in Docker Desktop, so on recent installs none of the above is needed.
Cache Mounts
Cache mounts persist a directory between builds — perfect for package manager caches:
# syntax=docker/dockerfile:1
# Node.js: Cache npm packages
FROM node:20-alpine
WORKDIR /app
COPY package.json package-lock.json ./
RUN --mount=type=cache,target=/root/.npm \
npm ci
COPY . .
# Python: Cache pip packages
FROM python:3.12-slim
WORKDIR /app
COPY requirements.txt .
RUN --mount=type=cache,target=/root/.cache/pip \
pip install -r requirements.txt
COPY . .
# Go: Cache module downloads and build cache
FROM golang:1.22
WORKDIR /app
COPY go.mod go.sum ./
RUN --mount=type=cache,target=/go/pkg/mod \
--mount=type=cache,target=/root/.cache/go-build \
go mod download
COPY . .
RUN --mount=type=cache,target=/go/pkg/mod \
--mount=type=cache,target=/root/.cache/go-build \
go build -o /app/server .
Build Secrets
Pass secrets at build time without storing them in layers:
# syntax=docker/dockerfile:1
FROM node:20-alpine
WORKDIR /app
COPY package.json package-lock.json ./
# Mount secret during build — not stored in the image
RUN --mount=type=secret,id=npmrc,target=/root/.npmrc \
npm ci
COPY . .
CMD ["node", "server.js"]
# Build with secret
docker build --secret id=npmrc,src=$HOME/.npmrc .
Multi-Platform Builds
Build images for different architectures (x86, ARM):
# Create a multi-platform builder
docker buildx create --name multiplatform --use
# Build for multiple platforms
docker buildx build \
--platform linux/amd64,linux/arm64 \
-t myregistry/my-app:latest \
--push .
This is especially important for:
- Apple Silicon (M1/M2/M3) developers deploying to x86 servers
- Supporting both x86 and ARM cloud instances (ARM is cheaper on AWS)
- Edge/IoT devices running ARM
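To check which platforms your active builder supports, `docker buildx ls` lists them per builder instance (output abbreviated and illustrative):

```shell
docker buildx ls
# NAME/NODE        DRIVER/ENDPOINT    STATUS    PLATFORMS
# multiplatform *  docker-container   running   linux/amd64, linux/arm64, ...
```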
Part 7: CI/CD Integration
GitHub Actions Example
name: Build and Push Docker Image
on:
push:
branches: [main]
pull_request:
branches: [main]
jobs:
build:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@v3
- name: Login to Container Registry
uses: docker/login-action@v3
with:
registry: ghcr.io
username: ${{ github.actor }}
password: ${{ secrets.GITHUB_TOKEN }}
- name: Build and Push
uses: docker/build-push-action@v5
with:
context: .
push: ${{ github.event_name != 'pull_request' }}
tags: |
ghcr.io/${{ github.repository }}:latest
ghcr.io/${{ github.repository }}:${{ github.sha }}
cache-from: type=gha
cache-to: type=gha,mode=max
platforms: linux/amd64,linux/arm64
Key features:
- `cache-from`/`cache-to: type=gha` — uses GitHub Actions cache for Docker layers
- Multi-platform build (amd64 + arm64)
- Only pushes on main branch (not on PRs)
- Tags with both `latest` and commit SHA
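Many teams add a scan gate to the same pipeline so serious findings fail the job; with Trivy this is typically done with its `--severity` and `--exit-code` flags:

```shell
# Fails with exit code 1 (failing the CI job) if any HIGH or CRITICAL CVE is found
trivy image --severity HIGH,CRITICAL --exit-code 1 my-app:latest
```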
Image Tagging Strategies
# Semantic versioning
my-app:1.2.3
my-app:1.2
my-app:1
# Git-based
my-app:abc123def # commit SHA
my-app:main # branch name
my-app:pr-42 # pull request number
# Date-based
my-app:2024-01-20
my-app:20240120-abc123
# Combined (recommended)
my-app:1.2.3-abc123
Best practice: Never rely solely on latest. Always tag with a specific version or commit SHA so you can trace exactly which code is running.
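A sketch of deriving such a tag in a build script (the variable names and the VERSION source are assumptions, not a convention):

```shell
#!/bin/sh
# Combine a semantic version with the short commit SHA for an immutable tag.
VERSION="1.2.3"                                            # e.g. read from a VERSION file
SHA="$(git rev-parse --short=7 HEAD 2>/dev/null || echo unknown)"
TAG="my-app:${VERSION}-${SHA}"
echo "$TAG"
# Then build with both the immutable and the version tag:
# docker build -t "$TAG" -t "my-app:${VERSION}" .
```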
Part 8: Debugging Dockerfiles
Inspecting Image Layers
# View layer history and sizes
docker history my-app:latest
# Example output
IMAGE CREATED CREATED BY SIZE
a1b2c3d4e5 2 hours ago CMD ["node" "server.js"] 0B
f6g7h8i9j0 2 hours ago EXPOSE map[3000/tcp:{}] 0B
k1l2m3n4o5 2 hours ago COPY . . # buildkit 45.2kB
p6q7r8s9t0 2 hours ago RUN npm ci --production # buildkit 38.5MB
u1v2w3x4y5 2 hours ago COPY package*.json ./ # buildkit 1.2kB
z6a7b8c9d0 2 hours ago WORKDIR /app 0B
e1f2g3h4i5 3 weeks ago /bin/sh -c #(nop) CMD ["node"] 0B
Using Dive for Layer Analysis
Dive is an excellent tool for exploring Docker image layers:
# Install dive
brew install dive # macOS
apt-get install dive # Debian/Ubuntu
# Analyze an image
dive my-app:latest
Dive shows you:
- Each layer's size and contents
- Wasted space (files added then deleted in later layers)
- Image efficiency score
Debugging Build Failures
# Verbose BuildKit output
docker build --progress=plain .
# Build up to a specific stage
docker build --target builder -t debug-stage .
# Shell into the failed stage
docker run -it debug-stage /bin/sh
# Build without cache (start fresh)
docker build --no-cache .
Common Issues and Fixes
| Issue | Cause | Fix |
|---|---|---|
| Image too large | Wrong base image or uncleaned cache | Use alpine/slim, clean in same RUN |
| Build is slow | Cache invalidation too early | Copy dependency files before source code |
| npm install runs every time | COPY . . before npm install | Copy package*.json first, then npm install |
| Permission denied at runtime | Files owned by root, running as non-root | Use COPY --chown=user:group |
| Secrets in image history | ENV or COPY for secrets | Use BuildKit --secret mount |
| Signal handling issues | App is PID 1 without init | Use tini or dumb-init as entrypoint |
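For the PID 1 row above on Debian-based images, the tini pattern from the Node.js section translates directly (the Debian tini package installs the binary at /usr/bin/tini):

```dockerfile
FROM node:20-slim
WORKDIR /app
RUN apt-get update && \
    apt-get install -y --no-install-recommends tini && \
    rm -rf /var/lib/apt/lists/*
COPY . .
# tini runs as PID 1 and forwards signals to the node process
ENTRYPOINT ["/usr/bin/tini", "--"]
CMD ["node", "server.js"]
```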
Exercises
Exercise 1: Optimize an Existing Dockerfile
Take this unoptimized Dockerfile and improve it:
FROM node:20
WORKDIR /app
COPY . .
RUN npm install
EXPOSE 3000
CMD ["node", "index.js"]
Goals:
- Reduce image size by at least 70%
- Use layer caching for dependencies
- Run as non-root user
- Add a .dockerignore file
Exercise 2: Multi-Stage Go Application
Write a Dockerfile for a Go web server that:
- Compiles to a static binary
- Uses `scratch` as the final base image
- Includes SSL certificates for HTTPS calls
- Final image should be under 15MB
Exercise 3: Language Comparison
Build the same "Hello World" HTTP server in three languages (Node.js, Go, Python) and compare:
- Build time with cold cache
- Build time with warm cache
- Final image size
- Number of CVEs (use Trivy)
What's Next?
In the next post, we'll dive into Docker Networking & Volumes — understanding how containers communicate and persist data:
- Docker networking architecture (bridge, host, overlay, macvlan)
- Container-to-container communication
- Storage drivers and union filesystems
- Volumes, bind mounts, and tmpfs
- Production storage patterns
Summary and Key Takeaways
✅ Choose the smallest base image that works: prefer alpine or slim over a full OS, and scratch or distroless for statically compiled languages
✅ Always use .dockerignore to exclude node_modules, .git, tests, and dev files from the build context
✅ Order Dockerfile instructions by change frequency — dependencies first, source code last
✅ Multi-stage builds separate build-time tools from production — reduce image size by 70-95%
✅ Run containers as non-root users — never run as root in production
✅ Use BuildKit features: cache mounts for package managers, secrets for sensitive data
✅ Scan images with Trivy/Snyk/Grype before deploying to production
✅ Use npm ci (not npm install) for reproducible Node.js builds
✅ Go and Rust can use scratch base images for minimal 5-15MB containers
✅ Tag images with specific versions or commit SHAs — never rely solely on latest
Series: Docker & Kubernetes Learning Roadmap
Previous: Phase 3: Kubernetes Fundamentals
Next: Deep Dive: Docker Networking & Volumes
Have questions about Dockerfile optimization? Feel free to reach out or leave a comment!