Kubernetes Fundamentals

Welcome to Phase 3 of the Docker & Kubernetes Learning Roadmap! You've mastered Docker fundamentals, written Dockerfiles, and orchestrated multi-container apps with Docker Compose. Now it's time for the big leap — Kubernetes.
Docker Compose is great for single-machine deployments. But what happens when your app needs to run across multiple servers? When a container crashes at 3 AM and nobody's awake to restart it? When Black Friday traffic hits and you need 50 replicas instead of 2? That's where Kubernetes comes in.
What You'll Learn
✅ Understand Kubernetes architecture (control plane and worker nodes)
✅ Set up a local Kubernetes cluster with minikube
✅ Create and manage Pods — the smallest deployable unit
✅ Use Deployments for rolling updates and rollbacks
✅ Expose applications with Services (ClusterIP, NodePort, LoadBalancer)
✅ Manage configuration with ConfigMaps and Secrets
✅ Organize resources with Namespaces, Labels, and Selectors
✅ Debug and troubleshoot Kubernetes workloads
Time commitment: 7–10 days, 1–2 hours daily
Prerequisites: Phase 2: Docker Compose & Multi-Container Apps
Part 1: What is Kubernetes?
Kubernetes (often abbreviated as K8s — the 8 stands for the eight letters between "K" and "s") is an open-source container orchestration platform. Originally designed by Google and now maintained by the Cloud Native Computing Foundation (CNCF), it automates deploying, scaling, and managing containerized applications.
Why Kubernetes?
Docker runs containers. Kubernetes runs containers at scale. Here's what it gives you:
| Capability | Without K8s | With K8s |
|---|---|---|
| Scaling | Manually start more containers | kubectl scale --replicas=10 |
| Self-healing | Container crashes → you restart it | K8s automatically restarts failed containers |
| Load balancing | Set up nginx/HAProxy yourself | Built-in Service load balancing |
| Rolling updates | Stop old → start new (downtime) | Zero-downtime rolling deployments |
| Service discovery | Hardcode IPs or use external DNS | Built-in DNS for every Service |
| Configuration | Environment files on each server | Centralized ConfigMaps and Secrets |
| Resource management | Hope containers don't hog resources | CPU/memory requests and limits |
Kubernetes vs Docker Compose vs Docker Swarm
| Feature | Docker Compose | Docker Swarm | Kubernetes |
|---|---|---|---|
| Complexity | Simple | Moderate | Complex |
| Multi-node | No | Yes | Yes |
| Auto-scaling | No | Limited | Advanced (HPA, VPA) |
| Self-healing | No | Basic | Advanced |
| Ecosystem | Docker only | Docker only | Massive (CNCF) |
| Learning curve | Low | Medium | High |
| Best for | Development | Small production | Large-scale production |
Kubernetes Distributions
Not all Kubernetes is the same. There are several distributions:
- Vanilla Kubernetes (kubeadm) — The standard, DIY installation
- minikube — Single-node cluster for local development
- kind (Kubernetes in Docker) — Runs K8s nodes as Docker containers
- k3s — Lightweight K8s by Rancher (great for edge/IoT)
- MicroK8s — Canonical's lightweight K8s (snap-based)
- OpenShift — Red Hat's enterprise Kubernetes platform
- Managed services — EKS (AWS), GKE (Google), AKS (Azure)
For learning, we'll use minikube — it's the easiest way to get a full Kubernetes cluster running on your laptop.
Part 2: Kubernetes Architecture
Understanding Kubernetes architecture is essential before you start deploying anything. A Kubernetes cluster has two main parts: the control plane and the worker nodes.
Control Plane Components
The control plane is the "brain" of the cluster. It makes decisions about scheduling, detecting events, and responding to changes.
API Server (kube-apiserver)
- The front door to Kubernetes — every command goes through it
- kubectl communicates with the API server
- RESTful API that validates and processes requests
- The only component that talks to etcd directly
etcd
- Distributed key-value store that holds all cluster state
- Stores configuration, secrets, service discovery data
- If etcd dies, your cluster loses its memory
- Always run with backups in production
Scheduler (kube-scheduler)
- Decides which node should run a new Pod
- Considers: resource requirements, affinity rules, taints/tolerations
- Doesn't run the Pod — just assigns it to a node
Controller Manager (kube-controller-manager)
- Runs controller loops that watch cluster state and make corrections
- Deployment controller: ensures desired replicas are running
- Node controller: monitors node health
- Job controller: manages batch jobs
Worker Node Components
Worker nodes are the machines that actually run your containers.
kubelet
- Agent that runs on every worker node
- Receives Pod specs from the API server
- Ensures containers are running and healthy
- Reports node and Pod status back to the control plane
kube-proxy
- Network proxy on each node
- Maintains network rules for Service communication
- Implements the Service abstraction (load balancing between Pods)
Container Runtime
- The software that runs containers (containerd, CRI-O)
- Docker was the original runtime but Kubernetes removed Docker support in v1.24
- containerd is now the most common runtime
How It All Fits Together
When you run kubectl apply -f deployment.yaml, here's what happens:
1. kubectl sends the manifest to the API server, which authenticates and validates the request.
2. The API server writes the desired state to etcd.
3. The Deployment controller notices the new object and creates a ReplicaSet, which in turn creates Pod objects.
4. The scheduler assigns each pending Pod to a suitable node.
5. The kubelet on that node sees the assignment and tells the container runtime to start the containers.
6. Status flows back up, and the controllers keep reconciling actual state toward desired state.
Part 3: Setting Up Kubernetes
Installing minikube
minikube creates a single-node Kubernetes cluster on your local machine.
macOS:
# Install with Homebrew
brew install minikube
# Or download directly (Apple Silicon; on Intel Macs use minikube-darwin-amd64)
curl -LO https://storage.googleapis.com/minikube/releases/latest/minikube-darwin-arm64
sudo install minikube-darwin-arm64 /usr/local/bin/minikube
Linux:
curl -LO https://storage.googleapis.com/minikube/releases/latest/minikube-linux-amd64
sudo install minikube-linux-amd64 /usr/local/bin/minikube
Windows (PowerShell):
winget install Kubernetes.minikube
Starting Your Cluster
# Start minikube with default settings
minikube start
# Start with specific resources
minikube start --cpus=4 --memory=8192 --driver=docker
# Check cluster status
minikube status
Output:
minikube
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured
Installing kubectl
kubectl is the command-line tool for interacting with Kubernetes.
macOS:
brew install kubectl
Linux:
curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl"
sudo install kubectl /usr/local/bin/kubectl
Verify installation:
# Check kubectl version
kubectl version --client
# Check cluster connection
kubectl cluster-info
# View cluster nodes
kubectl get nodes
Output:
NAME STATUS ROLES AGE VERSION
minikube Ready control-plane 2m v1.31.0
kubectl Essentials
Here are the commands you'll use every day:
# Get resources
kubectl get pods # List all pods
kubectl get pods -o wide # With extra details (IP, node)
kubectl get services # List all services
kubectl get all # List everything
# Describe (detailed info)
kubectl describe pod <pod-name> # Detailed pod info
kubectl describe node minikube # Node details
# Create and apply
kubectl apply -f manifest.yaml # Apply a manifest (create or update)
kubectl create -f manifest.yaml # Create (fails if exists)
# Delete
kubectl delete pod <pod-name> # Delete a pod
kubectl delete -f manifest.yaml # Delete resources from manifest
# Logs and debugging
kubectl logs <pod-name> # View pod logs
kubectl logs -f <pod-name> # Stream logs (follow)
kubectl exec -it <pod-name> -- sh # Shell into a pod
# Context management
kubectl config get-contexts # List available contexts
kubectl config use-context minikube # Switch context
Pro tip: Set up an alias to save typing:
alias k=kubectl
Now you can type k get pods instead of kubectl get pods.
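If you use bash completion, the short alias loses tab completion unless you re-register it. A sketch for your ~/.bashrc, assuming kubectl's completion script is already sourced (this pairing is the one suggested in the kubectl docs):

```shell
# ~/.bashrc: alias kubectl to k and keep tab completion working for it
alias k=kubectl
complete -o default -F __start_kubectl k
```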
Part 4: Pods
A Pod is the smallest deployable unit in Kubernetes. It's a wrapper around one or more containers that share the same network and storage.
Why Pods, Not Containers?
In Docker, you think in terms of containers. In Kubernetes, you think in terms of Pods. Why the extra layer?
- Pods can run multiple containers that need to work together (sidecar pattern)
- Containers in the same Pod share the same IP address and localhost
- Containers in the same Pod share volumes
- Pods are the unit of scheduling — they always run on the same node
Most of the time, a Pod runs one container. Multi-container Pods are for specific patterns like logging sidecars, proxies, or init containers.
Creating Your First Pod
Imperative approach (quick testing):
# Run a pod directly
kubectl run nginx --image=nginx:alpine --port=80
# Check it's running
kubectl get pods
Output:
NAME READY STATUS RESTARTS AGE
nginx 1/1 Running 0 10s
Declarative approach (recommended):
Create a file called pod.yaml:
apiVersion: v1
kind: Pod
metadata:
name: my-app
labels:
app: my-app
environment: dev
spec:
containers:
- name: app
image: nginx:alpine
ports:
- containerPort: 80
resources:
requests:
memory: "64Mi"
cpu: "100m"
limits:
memory: "128Mi"
cpu: "250m"
Apply it:
kubectl apply -f pod.yaml
kubectl get pods
kubectl describe pod my-app
Pod Lifecycle
Pods go through several phases:
| Phase | Meaning |
|---|---|
| Pending | Pod accepted but containers not running yet (pulling images, scheduling) |
| Running | At least one container is running |
| Succeeded | All containers exited successfully (exit code 0) |
| Failed | All containers have terminated and at least one exited with an error |
| Unknown | Pod state can't be determined (usually node communication issues) |
Multi-Container Pods
Sometimes containers need to work together closely. Common patterns:
Sidecar pattern — a helper container alongside the main app:
apiVersion: v1
kind: Pod
metadata:
name: app-with-sidecar
labels:
app: web
spec:
containers:
# Main application
- name: app
image: nginx:alpine
ports:
- containerPort: 80
volumeMounts:
- name: shared-logs
mountPath: /var/log/nginx
# Sidecar: ships logs to a central system
- name: log-shipper
image: busybox
command: ["sh", "-c", "tail -f /logs/access.log"]
volumeMounts:
- name: shared-logs
mountPath: /logs
volumes:
- name: shared-logs
emptyDir: {}
Both containers share the shared-logs volume. The nginx container writes logs, and the log-shipper container reads and forwards them.
Init Containers
Init containers run before the main containers start. They're perfect for setup tasks:
apiVersion: v1
kind: Pod
metadata:
name: app-with-init
labels:
app: web
spec:
initContainers:
# Wait for a database to be ready
- name: wait-for-db
image: busybox
command: ["sh", "-c", "until nc -z db-service 5432; do echo waiting for db; sleep 2; done"]
# Run database migrations
- name: run-migrations
image: my-app:latest
command: ["python", "manage.py", "migrate"]
containers:
- name: app
image: my-app:latest
ports:
- containerPort: 8000
Init containers run sequentially — each must complete before the next starts. The main container only starts after all init containers succeed.
Debugging Pods
# View pod details (events at the bottom are most useful)
kubectl describe pod my-app
# View logs
kubectl logs my-app
kubectl logs my-app -c log-shipper # Specific container in multi-container pod
kubectl logs my-app --previous # Logs from a crashed container
# Shell into a running pod
kubectl exec -it my-app -- /bin/sh
# Port-forward to access a pod locally
kubectl port-forward my-app 8080:80
# Now visit http://localhost:8080
# Check resource usage
kubectl top pod my-app
Common Pod issues and fixes:
| Status | Common Cause | Debug Command |
|---|---|---|
| ImagePullBackOff | Wrong image name or no access to registry | kubectl describe pod → check Events |
| CrashLoopBackOff | App crashes on startup | kubectl logs --previous |
| Pending | Not enough resources on nodes | kubectl describe pod → check Events |
| OOMKilled | Container exceeded memory limit | Increase resources.limits.memory |
Part 5: Deployments
In practice, you almost never create Pods directly. Instead, you use a Deployment — a higher-level resource that manages Pods for you.
Why Deployments?
| Feature | Bare Pod | Deployment |
|---|---|---|
| Self-healing | Pod dies → stays dead | Pod dies → new one created |
| Scaling | Manual pod creation | kubectl scale --replicas=5 |
| Rolling updates | Stop old, start new | Zero-downtime updates |
| Rollback | Not possible | kubectl rollout undo |
| History | None | Full revision history |
Creating a Deployment
apiVersion: apps/v1
kind: Deployment
metadata:
name: my-app
labels:
app: my-app
spec:
replicas: 3
selector:
matchLabels:
app: my-app
template:
metadata:
labels:
app: my-app
spec:
containers:
- name: app
image: nginx:1.25-alpine
ports:
- containerPort: 80
resources:
requests:
memory: "64Mi"
cpu: "100m"
limits:
memory: "128Mi"
cpu: "250m"
readinessProbe:
httpGet:
path: /
port: 80
initialDelaySeconds: 5
periodSeconds: 10
livenessProbe:
httpGet:
path: /
port: 80
initialDelaySeconds: 15
periodSeconds: 20
Let's break down the key parts:
- replicas: 3 — Run 3 identical Pods
- selector.matchLabels — How the Deployment finds its Pods
- template — The Pod template (what each replica looks like)
- readinessProbe — Checks if the Pod is ready to receive traffic
- livenessProbe — Checks if the Pod is still alive (restarts it if not)
Apply and inspect:
kubectl apply -f deployment.yaml
# View the deployment
kubectl get deployments
kubectl get rs # ReplicaSet created by the Deployment
kubectl get pods # Pods created by the ReplicaSet
Output:
NAME READY UP-TO-DATE AVAILABLE AGE
my-app 3/3 3 3 30s
NAME DESIRED CURRENT READY AGE
my-app-7d9f8b6c5 3 3 3 30s
NAME READY STATUS RESTARTS AGE
my-app-7d9f8b6c5-abc12 1/1 Running 0 30s
my-app-7d9f8b6c5-def34 1/1 Running 0 30s
my-app-7d9f8b6c5-ghi56 1/1 Running 0 30s
Notice the hierarchy: Deployment → ReplicaSet → Pods. The Deployment manages ReplicaSets, and ReplicaSets manage Pods.
Scaling
# Scale up
kubectl scale deployment my-app --replicas=5
# Scale down
kubectl scale deployment my-app --replicas=2
# Check the result
kubectl get pods
Rolling Updates
This is where Deployments shine. When you update the container image, Kubernetes gradually replaces old Pods with new ones — zero downtime.
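The pod-count bounds during a rollout follow directly from the strategy fields maxSurge and maxUnavailable (shown further down in this section). A quick, illustrative sketch of that arithmetic for a 3-replica Deployment:

```shell
# Pod-count bounds during a rolling update (illustrative arithmetic only)
REPLICAS=3
MAX_SURGE=1          # how many extra Pods may exist above the desired count
MAX_UNAVAILABLE=0    # how many Pods may be missing below the desired count
echo "most Pods at any moment:   $((REPLICAS + MAX_SURGE))"
echo "fewest ready Pods allowed: $((REPLICAS - MAX_UNAVAILABLE))"
```

With maxUnavailable set to 0, Kubernetes never takes a Pod away until a replacement is ready — that's what makes the update zero-downtime.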
# Update the image
kubectl set image deployment/my-app app=nginx:1.26-alpine
# Watch the rollout
kubectl rollout status deployment/my-app
What happens during a rolling update: Kubernetes creates a new ReplicaSet for the new image, scales it up Pod by Pod, waits for each new Pod to pass its readiness probe, and scales the old ReplicaSet down — traffic always has healthy Pods to land on.
You can control the update strategy:
spec:
strategy:
type: RollingUpdate
rollingUpdate:
maxSurge: 1 # Max extra pods during update
maxUnavailable: 0 # Zero downtime — never remove a pod until new one is ready
Rollbacks
Something went wrong with the new version? Roll back instantly:
# View rollout history
kubectl rollout history deployment/my-app
# Roll back to previous version
kubectl rollout undo deployment/my-app
# Roll back to a specific revision
kubectl rollout undo deployment/my-app --to-revision=2
# Check rollout status
kubectl rollout status deployment/my-app
Health Checks (Probes)
Probes tell Kubernetes whether your application is healthy:
| Probe | Purpose | Action on Failure |
|---|---|---|
| livenessProbe | Is the container alive? | Restart the container |
| readinessProbe | Is the container ready for traffic? | Remove from Service endpoints |
| startupProbe | Has the app finished starting? | Kill and restart |
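Probe timing compounds: a probe only counts as failed after failureThreshold consecutive misses spaced periodSeconds apart. So the startup probe in the manifest below (failureThreshold: 30, periodSeconds: 10) gives a slow app up to 300 seconds to boot. A quick check of that arithmetic:

```shell
# Worst-case window before a startup probe gives up:
# failureThreshold consecutive failures, one every periodSeconds
FAILURE_THRESHOLD=30
PERIOD_SECONDS=10
echo "max boot window: $((FAILURE_THRESHOLD * PERIOD_SECONDS))s"
```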
containers:
- name: app
image: my-app:latest
# Startup probe: give slow-starting apps time to boot
startupProbe:
httpGet:
path: /healthz
port: 8080
failureThreshold: 30
periodSeconds: 10
# Liveness probe: restart if the app hangs
livenessProbe:
httpGet:
path: /healthz
port: 8080
initialDelaySeconds: 0
periodSeconds: 15
failureThreshold: 3
# Readiness probe: only send traffic when ready
readinessProbe:
httpGet:
path: /ready
port: 8080
initialDelaySeconds: 5
periodSeconds: 5
Part 6: Services
Pods are ephemeral — they come and go, get new IP addresses each time. You can't rely on Pod IPs. Services provide a stable network endpoint to access a group of Pods.
How Services Work
A Service uses label selectors to find Pods and routes traffic to them.
Service Types
Kubernetes offers four Service types, each with different access levels:
1. ClusterIP (default)
Only accessible within the cluster. Perfect for internal service-to-service communication.
apiVersion: v1
kind: Service
metadata:
name: backend-service
spec:
type: ClusterIP
selector:
app: backend
ports:
- port: 80 # Service port
targetPort: 8080 # Container port
protocol: TCP
Other Pods can access this service at backend-service:80 or backend-service.default.svc.cluster.local:80.
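The full DNS pattern can be captured in a tiny helper. This is only a sketch — svc_fqdn is a hypothetical function name, and cluster.local is the default cluster domain (clusters can be configured with a different one):

```shell
# Build a Service's in-cluster DNS name (assumes the default cluster.local domain)
svc_fqdn() {
  local service="$1" namespace="${2:-default}"  # namespace defaults to 'default'
  echo "${service}.${namespace}.svc.cluster.local"
}

svc_fqdn backend-service             # backend-service.default.svc.cluster.local
svc_fqdn backend-service production  # backend-service.production.svc.cluster.local
```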
2. NodePort
Exposes the Service on a static port on every node. Accessible from outside the cluster.
apiVersion: v1
kind: Service
metadata:
name: frontend-service
spec:
type: NodePort
selector:
app: frontend
ports:
- port: 80
targetPort: 3000
nodePort: 30080 # External port (30000-32767)
Access via <node-ip>:30080. With minikube:
minikube service frontend-service --url
3. LoadBalancer
Creates an external load balancer (on cloud providers like AWS, GKE, AKS). In minikube, use minikube tunnel to simulate it.
apiVersion: v1
kind: Service
metadata:
name: public-api
spec:
type: LoadBalancer
selector:
app: api
ports:
- port: 80
targetPort: 8080
# In a separate terminal (for minikube)
minikube tunnel
# Now check the external IP
kubectl get service public-api
4. ExternalName
Maps a Service to an external DNS name. No proxying — just a DNS alias.
apiVersion: v1
kind: Service
metadata:
name: external-db
spec:
type: ExternalName
externalName: db.example.com
Pods can access external-db and it resolves to db.example.com.
Service Discovery
Kubernetes provides built-in DNS for Services. Every Service gets a DNS entry:
<service-name>.<namespace>.svc.cluster.local
Within the same namespace, you can just use the service name:
# Inside a Pod in the same namespace
import requests
# Short form (same namespace)
response = requests.get("http://backend-service:80/api/users")
# Full DNS name (cross-namespace)
response = requests.get("http://backend-service.production.svc.cluster.local:80/api/users")
Part 7: ConfigMaps and Secrets
Hardcoding configuration in container images is a bad practice. ConfigMaps and Secrets let you decouple configuration from your application code.
ConfigMaps
ConfigMaps store non-sensitive configuration data as key-value pairs.
Creating ConfigMaps:
# From literal values
kubectl create configmap app-config \
--from-literal=APP_ENV=production \
--from-literal=LOG_LEVEL=info \
--from-literal=MAX_CONNECTIONS=100
# From a file
kubectl create configmap nginx-config --from-file=nginx.conf
# View the ConfigMap
kubectl get configmap app-config -o yaml
Declarative YAML:
apiVersion: v1
kind: ConfigMap
metadata:
name: app-config
data:
APP_ENV: "production"
LOG_LEVEL: "info"
MAX_CONNECTIONS: "100"
# Multi-line config file
app.properties: |
server.port=8080
spring.profiles.active=production
logging.level.root=INFO
Using ConfigMaps in Pods:
apiVersion: v1
kind: Pod
metadata:
name: app-with-config
spec:
containers:
- name: app
image: my-app:latest
# Option 1: As environment variables
envFrom:
- configMapRef:
name: app-config
# Option 2: Specific keys as env vars
env:
- name: LOG_LEVEL
valueFrom:
configMapKeyRef:
name: app-config
key: LOG_LEVEL
# Option 3: Mount as files
volumeMounts:
- name: config-volume
mountPath: /etc/config
readOnly: true
volumes:
- name: config-volume
configMap:
name: app-config
Secrets
Secrets are like ConfigMaps but for sensitive data — passwords, API keys, TLS certificates. Values are base64-encoded (not encrypted by default).
Creating Secrets:
# From literal values
kubectl create secret generic db-credentials \
--from-literal=DB_USER=admin \
--from-literal=DB_PASSWORD=s3cur3p@ss
# From files (e.g., TLS certificates)
kubectl create secret tls my-tls-secret \
--cert=tls.crt \
--key=tls.key
# View the secret (values are base64 encoded)
kubectl get secret db-credentials -o yaml
Declarative YAML:
apiVersion: v1
kind: Secret
metadata:
name: db-credentials
type: Opaque
data:
# Values must be base64 encoded
DB_USER: YWRtaW4= # echo -n "admin" | base64
DB_PASSWORD: czNjdXIzcEBzcw== # echo -n "s3cur3p@ss" | base64
---
# Or use stringData for plain text (Kubernetes encodes it for you)
apiVersion: v1
kind: Secret
metadata:
name: api-keys
type: Opaque
stringData:
API_KEY: "my-super-secret-api-key"
JWT_SECRET: "jwt-signing-secret-256-bit"
Using Secrets in Pods:
apiVersion: v1
kind: Pod
metadata:
name: app-with-secrets
spec:
containers:
- name: app
image: my-app:latest
# As environment variables
env:
- name: DB_USER
valueFrom:
secretKeyRef:
name: db-credentials
key: DB_USER
- name: DB_PASSWORD
valueFrom:
secretKeyRef:
name: db-credentials
key: DB_PASSWORD
# Mount as files
volumeMounts:
- name: secret-volume
mountPath: /etc/secrets
readOnly: true
volumes:
- name: secret-volume
secret:
secretName: db-credentials
Important: Kubernetes Secrets are base64-encoded, not encrypted. For production, consider:
- Enabling encryption at rest for etcd
- Using external secret managers (Vault, AWS Secrets Manager, Azure Key Vault)
- Using the External Secrets Operator to sync from external stores
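To produce values for a Secret's data: field by hand, encode them in a shell. Use printf (or echo -n), because a stray trailing newline would end up inside the decoded secret:

```shell
# Encode a value for the data: field (printf avoids a trailing newline)
printf '%s' 'admin' | base64            # YWRtaW4=

# Round-trip to verify the value decodes back cleanly
printf '%s' 'YWRtaW4=' | base64 -d      # admin

# Contrast: plain echo appends a newline, producing a different encoding
echo 'admin' | base64                   # YWRtaW4K
```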
Part 8: Namespaces
Namespaces are virtual clusters within a physical cluster. They provide isolation, organization, and resource management.
Default Namespaces
Every Kubernetes cluster starts with these namespaces:
kubectl get namespaces
NAME STATUS AGE
default Active 1d # Where your resources go if you don't specify
kube-system Active 1d # Kubernetes system components (API server, DNS, etc.)
kube-public Active 1d # Publicly accessible data (rarely used)
kube-node-lease Active 1d # Node heartbeat leases
Creating and Using Namespaces
apiVersion: v1
kind: Namespace
metadata:
name: staging
labels:
environment: staging
---
apiVersion: v1
kind: Namespace
metadata:
name: production
labels:
environment: production
# Create namespace
kubectl apply -f namespaces.yaml
# Create resources in a specific namespace
kubectl apply -f deployment.yaml -n staging
# List resources in a namespace
kubectl get pods -n staging
kubectl get all -n production
# List resources across all namespaces
kubectl get pods --all-namespaces
kubectl get pods -A # Short form
Setting Default Namespace
Tired of typing -n staging every time?
# Set default namespace for current context
kubectl config set-context --current --namespace=staging
# Now all commands target 'staging' by default
kubectl get pods # Shows pods in 'staging'
Resource Quotas
Control how much resources a namespace can consume:
apiVersion: v1
kind: ResourceQuota
metadata:
name: staging-quota
namespace: staging
spec:
hard:
pods: "20"
requests.cpu: "4"
requests.memory: "8Gi"
limits.cpu: "8"
limits.memory: "16Gi"
services: "10"
persistentvolumeclaims: "5"
Cross-Namespace Communication
Services in different namespaces can communicate using the full DNS name:
<service-name>.<namespace>.svc.cluster.local
# Pod in 'staging' namespace accessing a service in 'production' namespace
env:
- name: API_URL
value: "http://api-service.production.svc.cluster.local:80"
Part 9: Labels and Selectors
Labels are key-value pairs attached to Kubernetes objects. They're how Kubernetes organizes and selects resources.
Adding Labels
apiVersion: v1
kind: Pod
metadata:
name: api-server
labels:
app: api
environment: production
team: backend
version: v2.1.0
# Add a label to an existing resource
kubectl label pod api-server tier=backend
# Remove a label
kubectl label pod api-server tier-
# Update a label
kubectl label pod api-server version=v2.2.0 --overwrite
Using Selectors
Selectors filter resources by labels:
# Equality-based selectors
kubectl get pods -l app=api
kubectl get pods -l environment=production
kubectl get pods -l 'environment!=staging'
# Set-based selectors
kubectl get pods -l 'environment in (production, staging)'
kubectl get pods -l 'team notin (frontend)'
kubectl get pods -l 'app, environment' # Has both labels (any value)
# Combine selectors
kubectl get pods -l 'app=api,environment=production'
Labels vs Annotations
| Feature | Labels | Annotations |
|---|---|---|
| Purpose | Identify and select resources | Attach non-identifying metadata |
| Used by selectors | Yes | No |
| Size limit | 63 chars (value) | 256KB |
| Examples | app=web, env=prod | description, git-commit, config-hash |
metadata:
labels:
app: api # Used for selection
environment: production # Used for selection
annotations:
description: "Main API server" # Not used for selection
git-commit: "abc123def456" # Build metadata
kubectl.kubernetes.io/last-applied-configuration: "..." # System annotation
Label Best Practices
Follow the recommended labeling convention:
metadata:
labels:
# Recommended labels (kubernetes.io convention)
app.kubernetes.io/name: my-app
app.kubernetes.io/instance: my-app-prod
app.kubernetes.io/version: "2.1.0"
app.kubernetes.io/component: backend
app.kubernetes.io/part-of: ecommerce
app.kubernetes.io/managed-by: helm
# Custom labels for your organization
team: platform
cost-center: engineering
Part 10: Putting It All Together
Let's deploy a complete application: a Node.js API with a PostgreSQL database.
Project Structure
k8s/
├── namespace.yaml
├── configmap.yaml
├── secret.yaml
├── postgres-deployment.yaml
├── postgres-service.yaml
├── api-deployment.yaml
└── api-service.yaml
Step 1: Namespace
# k8s/namespace.yaml
apiVersion: v1
kind: Namespace
metadata:
name: demo-app
labels:
app: demo
Step 2: ConfigMap and Secret
# k8s/configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
name: api-config
namespace: demo-app
data:
NODE_ENV: "production"
PORT: "3000"
DB_HOST: "postgres-service"
DB_PORT: "5432"
DB_NAME: "myapp"
# k8s/secret.yaml
apiVersion: v1
kind: Secret
metadata:
name: db-credentials
namespace: demo-app
type: Opaque
stringData:
DB_USER: "appuser"
DB_PASSWORD: "supersecretpassword"
POSTGRES_PASSWORD: "supersecretpassword"
Step 3: PostgreSQL
# k8s/postgres-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: postgres
namespace: demo-app
labels:
app: postgres
spec:
replicas: 1
selector:
matchLabels:
app: postgres
template:
metadata:
labels:
app: postgres
spec:
containers:
- name: postgres
image: postgres:16-alpine
ports:
- containerPort: 5432
env:
- name: POSTGRES_DB
valueFrom:
configMapKeyRef:
name: api-config
key: DB_NAME
- name: POSTGRES_USER
valueFrom:
secretKeyRef:
name: db-credentials
key: DB_USER
- name: POSTGRES_PASSWORD
valueFrom:
secretKeyRef:
name: db-credentials
key: POSTGRES_PASSWORD
volumeMounts:
- name: postgres-data
mountPath: /var/lib/postgresql/data
resources:
requests:
memory: "256Mi"
cpu: "250m"
limits:
memory: "512Mi"
cpu: "500m"
readinessProbe:
exec:
command: ["pg_isready", "-U", "appuser"]
initialDelaySeconds: 5
periodSeconds: 10
volumes:
- name: postgres-data
emptyDir: {} # Use PersistentVolume in production!
# k8s/postgres-service.yaml
apiVersion: v1
kind: Service
metadata:
name: postgres-service
namespace: demo-app
spec:
type: ClusterIP
selector:
app: postgres
ports:
- port: 5432
targetPort: 5432
Step 4: Node.js API
# k8s/api-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: api
namespace: demo-app
labels:
app: api
spec:
replicas: 3
selector:
matchLabels:
app: api
strategy:
type: RollingUpdate
rollingUpdate:
maxSurge: 1
maxUnavailable: 0
template:
metadata:
labels:
app: api
spec:
initContainers:
- name: wait-for-db
image: busybox
command: ["sh", "-c", "until nc -z postgres-service 5432; do echo waiting for postgres; sleep 2; done"]
containers:
- name: api
image: node:20-alpine
command: ["node", "server.js"]
ports:
- containerPort: 3000
envFrom:
- configMapRef:
name: api-config
env:
- name: DB_USER
valueFrom:
secretKeyRef:
name: db-credentials
key: DB_USER
- name: DB_PASSWORD
valueFrom:
secretKeyRef:
name: db-credentials
key: DB_PASSWORD
resources:
requests:
memory: "128Mi"
cpu: "100m"
limits:
memory: "256Mi"
cpu: "500m"
readinessProbe:
httpGet:
path: /health
port: 3000
initialDelaySeconds: 10
periodSeconds: 5
livenessProbe:
httpGet:
path: /health
port: 3000
initialDelaySeconds: 30
periodSeconds: 15
# k8s/api-service.yaml
apiVersion: v1
kind: Service
metadata:
name: api-service
namespace: demo-app
spec:
type: NodePort
selector:
app: api
ports:
- port: 80
targetPort: 3000
nodePort: 30000
Step 5: Deploy Everything
# Apply all manifests in order
kubectl apply -f k8s/namespace.yaml
kubectl apply -f k8s/configmap.yaml
kubectl apply -f k8s/secret.yaml
kubectl apply -f k8s/postgres-deployment.yaml
kubectl apply -f k8s/postgres-service.yaml
kubectl apply -f k8s/api-deployment.yaml
kubectl apply -f k8s/api-service.yaml
# Or apply everything at once
kubectl apply -f k8s/
# Check everything is running
kubectl get all -n demo-app
Expected output:
NAME READY STATUS RESTARTS AGE
pod/api-7d9f8b6c5-abc12 1/1 Running 0 60s
pod/api-7d9f8b6c5-def34 1/1 Running 0 60s
pod/api-7d9f8b6c5-ghi56 1/1 Running 0 60s
pod/postgres-5c8f9d7b2-xyz89 1/1 Running 0 65s
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S)
service/api-service NodePort 10.96.100.50 <none> 80:30000/TCP
service/postgres-service ClusterIP 10.96.100.51 <none> 5432/TCP
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/api 3/3 3 3 60s
deployment.apps/postgres 1/1 1 1 65s
# Access the API (minikube)
minikube service api-service -n demo-app --url
# Test it
curl http://<minikube-ip>:30000/health
Cleanup
# Delete all resources in the namespace
kubectl delete namespace demo-app
# Or delete specific resources
kubectl delete -f k8s/
Exercises
Exercise 1: Deploy a Web Application
Deploy an nginx web server with:
- 3 replicas
- A ConfigMap with a custom index.html
- Liveness and readiness probes
Exercise 2: Rolling Update Practice
- Deploy nginx:1.24 with 3 replicas
- Update to nginx:1.25 and watch the rolling update
- Check rollout history
- Roll back to 1.24
- Verify the rollback succeeded
Exercise 3: Multi-Service Application
Deploy a complete stack in a my-project namespace:
- A Redis deployment (1 replica) with a ClusterIP Service
- A Python/Node.js API (3 replicas) that connects to Redis
- Use ConfigMaps for Redis connection settings
- Use Secrets for any authentication
- An init container that waits for Redis to be ready
What's Next?
In the next post, we'll dive into Dockerfile Best Practices & Multi-Stage Builds — mastering advanced techniques to build production-ready container images:
- Image size optimization (alpine, slim, distroless, scratch)
- Layer caching strategies
- Multi-stage builds for any language
- Security hardening (non-root, read-only filesystem)
- BuildKit features (cache mounts, secrets, multi-platform)
Summary and Key Takeaways
✅ Kubernetes orchestrates containers at scale with self-healing, scaling, and rolling updates
✅ The control plane (API server, etcd, scheduler) manages cluster state; worker nodes run Pods
✅ Pods are the smallest deployable unit — use Deployments to manage them, never create Pods directly
✅ Deployments provide rolling updates, rollbacks, and scaling with zero downtime
✅ Services give Pods a stable network endpoint — use ClusterIP for internal, NodePort/LoadBalancer for external
✅ ConfigMaps store configuration; Secrets store sensitive data (base64-encoded, not encrypted)
✅ Namespaces isolate resources and enable resource quotas per team or environment
✅ Labels and selectors are how Kubernetes organizes and connects resources
✅ Always set resource requests/limits and health checks (liveness, readiness, startup probes)
✅ Use kubectl describe and kubectl logs as your primary debugging tools
Series: Docker & Kubernetes Learning Roadmap
Previous: Phase 2: Docker Compose & Multi-Container Apps
Next: Deep Dive: Dockerfile Best Practices & Multi-Stage Builds
Have questions about Kubernetes? Feel free to reach out or leave a comment!