Docker Networking & Volumes Deep Dive

You've learned to build and run containers — now let's understand how they talk to each other and how they persist data. Networking and storage are two areas where developers often run into unexpected behavior, and understanding them deeply will save you hours of debugging.
This post covers everything you need to know about Docker networking and storage for production use.
Time commitment: 3–4 days, 1–2 hours daily
Prerequisites: Phase 1: Docker Fundamentals, Phase 2: Docker Compose & Multi-Container Apps
Part 1: Docker Networking
How Docker Networking Works
When the Docker daemon starts, it creates a virtual bridge interface (docker0) on your host machine. Every container gets its own network namespace — an isolated view of the network stack with its own interfaces, routing table, and firewall rules. Containers communicate through Docker-managed networks that bridge these namespaces together.
# See all networks on your system
docker network ls
# Output
NETWORK ID NAME DRIVER SCOPE
a1b2c3d4e5f6 bridge bridge local
b2c3d4e5f6a1 host host local
c3d4e5f6a1b2 none null local

Docker ships with three built-in networks and lets you create custom ones.
Network Drivers
Bridge Network (Default)
The bridge driver is the default for standalone containers. Docker creates a virtual Ethernet bridge (docker0) on the host, and every container connected to it gets an IP address in that subnet.
# Run a container — it joins the default bridge automatically
docker run -d --name web nginx
# Inspect the default bridge network
docker network inspect bridge

Default bridge limitations:
- Containers can only communicate by IP address (no DNS)
- Not recommended for production — use custom bridge networks
Custom bridge networks solve this:
# Create a custom network
docker network create myapp-network
# Run containers on it
docker run -d --name backend --network myapp-network myapp:latest
docker run -d --name db --network myapp-network postgres:16
# backend can now reach db by hostname
docker exec backend ping db # ✅ Works via DNS

Custom bridge networks give you:
- Automatic DNS resolution by container name
- Better isolation from other containers
- Can connect/disconnect containers at runtime
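Docker Compose leans on exactly this: each project gets its own auto-created custom bridge network, which is why Compose services resolve each other by name with no extra flags. A minimal sketch (the image name is a placeholder):

```yaml
services:
  backend:
    image: myapp:latest # placeholder image
  db:
    image: postgres:16
# Compose attaches both services to an auto-created custom bridge
# network, so `backend` can reach `db` at the hostname "db".
```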
Host Network
The container shares the host's network stack directly — no network namespace isolation.
docker run --network host nginx
# nginx now listens on host port 80 directly (no -p needed)

When to use host networking:
- Maximum network performance (no NAT overhead)
- Tools that need access to host network interfaces (monitoring agents, packet sniffers)
- Linux only — Docker Desktop for Mac/Windows runs containers inside a VM, so host networking doesn't bind ports on your actual machine
Security consideration: The container sees all host network interfaces. Use with caution.
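In Compose, host networking is requested with `network_mode`; note that port mappings are meaningless alongside it, since there is no NAT layer to map through. A sketch (the service name is illustrative):

```yaml
services:
  monitor:
    image: nginx:alpine # stand-in for a monitoring agent
    network_mode: host # shares the host's network stack; a `ports:` section would be ignored
```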
None Network
Completely disables networking for the container.
docker run --network none alpine sh
# No network interfaces except loopback

Useful for batch jobs that process local files and don't need network access.
Overlay Network
Overlay networks span multiple Docker hosts (used with Docker Swarm). They allow containers on different machines to communicate as if they're on the same network. Kubernetes solves the same problem with its own CNI-based networking rather than Docker's overlay driver.
# Only available in Swarm mode
docker network create --driver overlay my-overlay

For single-host development, you won't need overlay networks — multi-host networking is orchestrator territory.
Container DNS and Service Discovery
Docker runs an embedded DNS server at 127.0.0.11 for custom networks. When a container does a DNS lookup for another container name, Docker resolves it to that container's IP.
# Create network and containers
docker network create backend-net
docker run -d --name postgres --network backend-net postgres:16
docker run -d --name redis --network backend-net redis:7
docker run -d --name api --network backend-net myapi:latest
# api can reach postgres and redis by name
# postgres://postgres:5432 ✅
# redis://redis:6379 ✅

DNS aliases — give a container multiple hostnames:
docker run -d \
--network backend-net \
--network-alias db \
--network-alias database \
--name postgres \
postgres:16
# Now reachable as "postgres", "db", OR "database"

Service discovery in Docker Compose is automatic — every service name becomes a DNS hostname:
services:
  api:
    build: .
    # Can reach "db" and "cache" by name
  db:
    image: postgres:16
  cache:
    image: redis:7

Port Publishing
Containers are isolated by default — you must explicitly publish ports to make them accessible from the host.
# -p host_port:container_port
docker run -p 8080:80 nginx # host:8080 → container:80
docker run -p 127.0.0.1:8080:80 nginx # bind to localhost only (safer)
docker run -p 80 nginx # random host port → container:80
# Check which port was assigned
docker port <container_id>

Best practice: Bind to 127.0.0.1 in development so the port isn't exposed to the local network:
docker run -p 127.0.0.1:5432:5432 postgres:16
# Only accessible from localhost, not other machines on your network

Network Security and Isolation
Separate networks per concern
# docker-compose.yml — production-style network isolation
services:
  nginx:
    image: nginx:alpine
    networks:
      - frontend
      - backend
  api:
    build: .
    networks:
      - backend
      - data
  db:
    image: postgres:16
    networks:
      - data # db is NOT on frontend — nginx can't reach it directly
networks:
  frontend:
  backend:
  data:

This architecture means:
- nginx talks to api (both on backend)
- api talks to db (both on data)
- nginx cannot directly reach db — it must go through api
Internal networks (no external access)
docker network create --internal secure-net
# Containers on this network can't reach the internet

Useful for databases and internal services that should never have outbound internet access.
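Compose supports the same flag via `internal: true` on a network definition. A sketch mirroring the command above:

```yaml
services:
  db:
    image: postgres:16
    networks:
      - secure-net
networks:
  secure-net:
    internal: true # no route out of the Docker host
```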
Useful Network Commands
# List networks
docker network ls
# Inspect a network (shows connected containers and their IPs)
docker network inspect myapp-network
# Create a network
docker network create --driver bridge --subnet 172.20.0.0/16 myapp-network
# Connect a running container to a network
docker network connect myapp-network my-container
# Disconnect
docker network disconnect myapp-network my-container
# Remove unused networks
docker network prune
# Show container's network settings
docker inspect --format='{{json .NetworkSettings}}' my-container | jq

Part 2: Docker Storage
The Container Filesystem Problem
Containers have a writable layer on top of their image layers. By default, everything written inside a container is stored in this writable layer:
┌─────────────────────────────┐
│ Writable Container Layer │ ← deleted when container is removed
├─────────────────────────────┤
│ Image Layer 3 (RO) │
├─────────────────────────────┤
│ Image Layer 2 (RO) │
├─────────────────────────────┤
│ Image Layer 1 (RO) │
└─────────────────────────────┘

Problems with the writable layer:
- Data is lost when the container is removed
- Data can't be shared between containers
- Writing uses a copy-on-write storage driver — slower than native filesystem I/O
Docker offers three solutions: volumes, bind mounts, and tmpfs.
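All three mechanisms can coexist on a single service; a sketch contrasting them (names and paths are illustrative):

```yaml
services:
  app:
    image: myapp:latest # placeholder image
    volumes:
      - appdata:/app/data # volume: Docker-managed, survives container removal
      - ./src:/app/src # bind mount: exact host path, typical for dev
    tmpfs:
      - /tmp # tmpfs: in-memory only, gone when the container stops
volumes:
  appdata:
```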
Volumes
Volumes are the preferred mechanism for persisting data. Docker manages them completely — they live in /var/lib/docker/volumes/ on Linux and are independent of the container lifecycle.
# Create a named volume
docker volume create mydata
# Use it when running a container
docker run -d \
--name postgres \
-v mydata:/var/lib/postgresql/data \
postgres:16
# The data in /var/lib/postgresql/data persists even after the container is removed

Volume in Docker Compose:
services:
  db:
    image: postgres:16
    volumes:
      - pgdata:/var/lib/postgresql/data
    environment:
      POSTGRES_PASSWORD: secret
volumes:
  pgdata: # Docker manages this volume

Volume commands
# List volumes
docker volume ls
# Inspect a volume (shows mountpoint on host)
docker volume inspect mydata
# Remove a volume
docker volume rm mydata
# Remove all unused volumes
docker volume prune
# Create with options
docker volume create \
--driver local \
--opt type=nfs \
--opt o=addr=192.168.1.100,rw \
--opt device=:/path/to/dir \
nfs-volume

Sharing volumes between containers
# First container writes data
docker run -d --name writer -v shared-data:/data alpine \
sh -c "echo 'hello' > /data/message.txt && sleep 3600"
# Second container reads from the same volume
docker run --rm -v shared-data:/data alpine cat /data/message.txt
# Output: hello

Bind Mounts
Bind mounts map a host filesystem path directly into the container. Unlike volumes, you control exactly where the data lives on the host.
# Mount current directory into container
docker run -v $(pwd):/app node:20 npm test
# Mount a specific host path
docker run -v /host/path:/container/path nginx

Bind mount in Docker Compose (typical dev setup):
services:
  api:
    build: .
    volumes:
      - ./src:/app/src # source code — live reload
      - ./config:/app/config # config files
      - /app/node_modules # anonymous volume — don't overwrite node_modules

The anonymous volume trick (/app/node_modules) prevents the host's node_modules from overwriting the container's installed packages.
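The same trick reads more explicitly in Compose's long mount syntax, where the anonymous volume is just a `volume` entry with a target but no source (the service layout here is illustrative):

```yaml
services:
  api:
    build: .
    volumes:
      - type: bind
        source: ./
        target: /app
      - type: volume
        target: /app/node_modules # no source → anonymous volume shadows the host dir
```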
Bind mount options
# Read-only mount — container can't modify the files
docker run -v $(pwd)/config:/app/config:ro myapp
# Consistent, cached, delegated (macOS performance hints — legacy)
docker run -v $(pwd):/app:cached myapp

Volumes vs Bind Mounts — when to use each
| | Volumes | Bind Mounts |
|---|---|---|
| Managed by | Docker | You (host path) |
| Best for | Persistent data (databases) | Development (live code reload) |
| Portable | ✅ Yes | ❌ Depends on host path |
| Performance | ✅ Better on Linux | ✅ Great for read-heavy |
| Backup | Helper container + tar (Pattern 3 below) | Direct filesystem access |
| Production | ✅ Preferred | ⚠️ Use carefully |
tmpfs Mounts
tmpfs mounts store data in the host's memory — never written to disk. Perfect for sensitive data (tokens, passwords) that shouldn't persist.
docker run -d \
--tmpfs /tmp \
--tmpfs /run:rw,noexec,nosuid,size=100m \
nginx

In Docker Compose:
services:
  api:
    image: myapi
    tmpfs:
      - /tmp
      - /run

Use cases:
- Temporary session data
- Sensitive credentials that must never hit disk
- High-speed scratch space for processing
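The short `tmpfs:` list syntax doesn't take options; to set options like a size cap, use Compose's long mount syntax instead (the 100 MB figure is illustrative):

```yaml
services:
  api:
    image: myapi
    volumes:
      - type: tmpfs
        target: /run
        tmpfs:
          size: 104857600 # bytes (~100 MB)
```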
Storage Drivers
Docker uses a union filesystem (storage driver) to layer images. The driver affects I/O performance and is mostly transparent to you, but worth understanding.
| Driver | OS | Notes |
|---|---|---|
| overlay2 | Linux | Default, recommended |
| aufs | Ubuntu (legacy) | Older, being phased out |
| devicemapper | RHEL/CentOS (legacy) | Avoid unless required |
| vfs | Any | Slow, used for testing |
# Check your storage driver
docker info | grep "Storage Driver"
# Storage Driver: overlay2

The storage driver only affects the writable container layer — volumes bypass it entirely and use native filesystem I/O.
Production Storage Patterns
Pattern 1: Database with named volume
services:
  postgres:
    image: postgres:16-alpine
    environment:
      POSTGRES_DB: myapp
      POSTGRES_USER: app
      POSTGRES_PASSWORD_FILE: /run/secrets/db_password
    volumes:
      - pgdata:/var/lib/postgresql/data
      - ./init.sql:/docker-entrypoint-initdb.d/init.sql:ro
    secrets:
      - db_password
volumes:
  pgdata:
secrets:
  db_password:
    file: ./secrets/db_password.txt

Pattern 2: Application with separate code and data volumes
services:
  app:
    image: myapp:${VERSION}
    volumes:
      - uploads:/app/uploads # user-uploaded files persist
      - logs:/app/logs # logs persist for debugging
    tmpfs:
      - /tmp # temp files in memory
volumes:
  uploads:
  logs:

Pattern 3: Volume backup and restore
# Backup: create a tar from the volume
docker run --rm \
-v pgdata:/data \
-v $(pwd):/backup \
alpine tar czf /backup/pgdata-backup.tar.gz -C /data .
# Restore: extract tar into volume
docker run --rm \
-v pgdata:/data \
-v $(pwd):/backup \
alpine tar xzf /backup/pgdata-backup.tar.gz -C /data

Pattern 4: NFS volume for shared storage across hosts
docker volume create \
--driver local \
--opt type=nfs4 \
--opt o=addr=nfs-server.example.com,rw \
--opt device=:/exports/mydata \
nfs-shared

Common Storage Mistakes
Mistake 1: Writing database data to the container writable layer
# ❌ BAD — data lost when container is removed
docker run -d postgres:16
# ✅ GOOD — data persists
docker run -d -v pgdata:/var/lib/postgresql/data postgres:16

Mistake 2: Using bind mounts for production databases
# ❌ BAD — depends on specific host path, permissions issues
volumes:
  - /home/ubuntu/postgres-data:/var/lib/postgresql/data

# ✅ GOOD — Docker manages it
volumes:
  - pgdata:/var/lib/postgresql/data

Mistake 3: Forgetting to exclude node_modules from bind mounts
# ❌ BAD — host's node_modules overwrites container's
volumes:
  - .:/app

# ✅ GOOD — anonymous volume shadows node_modules
volumes:
  - .:/app
  - /app/node_modules

Mistake 4: Not setting volume permissions
# In Dockerfile — create directory and set ownership before declaring VOLUME
RUN mkdir -p /app/data && chown -R node:node /app/data
USER node
VOLUME /app/data

Complete Example: Full-Stack App with Networking and Storage
# docker-compose.yml
services:
  nginx:
    image: nginx:alpine
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - ./nginx.conf:/etc/nginx/nginx.conf:ro
      - ssl-certs:/etc/nginx/certs:ro
    networks:
      - frontend
    depends_on:
      - api
  api:
    build:
      context: .
      dockerfile: Dockerfile
      target: production
    environment:
      DATABASE_URL: postgresql://app:${DB_PASSWORD}@db:5432/myapp
      REDIS_URL: redis://cache:6379
    volumes:
      - uploads:/app/uploads
    tmpfs:
      - /tmp
    networks:
      - frontend
      - backend
    depends_on:
      db:
        condition: service_healthy
      cache:
        condition: service_healthy
  db:
    image: postgres:16-alpine
    environment:
      POSTGRES_DB: myapp
      POSTGRES_USER: app
      POSTGRES_PASSWORD: ${DB_PASSWORD}
    volumes:
      - pgdata:/var/lib/postgresql/data
    networks:
      - backend
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U app -d myapp"]
      interval: 10s
      timeout: 5s
      retries: 5
  cache:
    image: redis:7-alpine
    command: redis-server --appendonly yes --requirepass ${REDIS_PASSWORD}
    volumes:
      - redisdata:/data
    networks:
      - backend
    healthcheck:
      test: ["CMD", "redis-cli", "--pass", "${REDIS_PASSWORD}", "ping"]
      interval: 10s
      timeout: 5s
      retries: 5
networks:
  frontend: # nginx ↔ api
  backend: # api ↔ db ↔ cache (nginx can NOT reach db or cache directly)
volumes:
  pgdata: # postgres data
  redisdata: # redis persistence
  uploads: # user uploads
  ssl-certs: # TLS certificates

This setup:
- nginx can only talk to api (not db or cache directly)
- api can talk to db and cache by name via DNS
- All data volumes persist across container restarts
- Healthchecks ensure dependent services are ready before api starts
Debugging Networking Issues
# Is the container on the right network?
docker inspect --format='{{json .NetworkSettings.Networks}}' my-container | jq
# Can container A reach container B?
docker exec container-a ping container-b
docker exec container-a curl http://container-b:8080/health
# What ports is a container listening on?
docker port my-container
# See real-time network traffic (requires tcpdump in container)
docker exec -it my-container sh -c "apk add tcpdump && tcpdump -i eth0"
# DNS resolution from inside container
docker exec my-container nslookup db
docker exec my-container cat /etc/resolv.conf

Debugging Storage Issues
# What volumes does a container have?
docker inspect --format='{{json .Mounts}}' my-container | jq
# What's in a volume?
docker run --rm -v my-volume:/data alpine ls -la /data
# Volume disk usage
docker system df -v
# Check permissions inside container
docker exec my-container ls -la /app/uploads
# Copy files to/from container
docker cp my-container:/app/logs/error.log ./error.log
docker cp ./seed.sql my-container:/tmp/seed.sql

Summary and Key Takeaways
✅ Custom bridge networks provide automatic DNS — containers find each other by name, not IP
✅ Use separate networks per tier (frontend/backend/data) to isolate services from each other
✅ Publish ports with 127.0.0.1: prefix in development to avoid exposing to the local network
✅ Volumes are Docker-managed, portable, and preferred for production data persistence
✅ Bind mounts are great for development (live code reload) — avoid for production databases
✅ Use tmpfs for sensitive in-memory data that must never be written to disk
✅ Always declare named volumes for stateful services (postgres, redis, elasticsearch)
✅ The anonymous volume trick (- /app/node_modules) prevents bind mounts from clobbering container dependencies
✅ Use docker network inspect and docker exec ... ping for network debugging
✅ Volumes bypass the storage driver — they use native filesystem I/O and are faster than the writable layer
Series: Docker & Kubernetes Learning Roadmap
Previous: Deep Dive: Dockerfile Best Practices & Multi-Stage Builds
Next: Deep Dive: Kubernetes Workloads (Coming Soon)
Have questions about Docker networking or storage? Feel free to reach out or leave a comment!