Spring Boot Docker & Kubernetes Deployment

Your Spring Boot application works perfectly on your machine. Now ship it to production. That means containers, orchestration, health checks, scaling, and configuration management — without turning your deployment into a fragile house of cards.
This guide takes you from a working Spring Boot app to a production-ready container running on Kubernetes. We'll cover Docker best practices specific to Java/Spring Boot, then deploy to Kubernetes with proper health probes, configuration, and auto-scaling.
What You'll Learn
✅ Multi-stage Dockerfiles — small, secure images for Spring Boot
✅ Layered JARs — faster builds with Spring Boot's layer extraction
✅ Docker Compose — spin up your app with PostgreSQL and Redis locally
✅ Kubernetes Deployments — declarative manifests for your application
✅ Services and Ingress — expose your app to the outside world
✅ ConfigMaps and Secrets — externalize configuration securely
✅ Health probes — liveness and readiness with Spring Boot Actuator
✅ Resource limits — prevent one pod from consuming all cluster resources
✅ Horizontal Pod Autoscaling — scale based on CPU and memory
✅ Local development — minikube and kind for testing Kubernetes locally
Prerequisites
- Spring Boot fundamentals (Getting Started)
- Database integration (JPA & PostgreSQL)
- Docker basics (Docker Fundamentals) or general Docker knowledge
- Java 17+ and Docker installed
- kubectl installed (for Kubernetes sections)
1. Why Containerize Spring Boot?
Spring Boot applications are self-contained — they embed Tomcat and produce a single JAR. That sounds deployment-friendly until you realize:
- "Works on my machine" — your local JDK version, environment variables, and OS differ from production
- Dependency conflicts — multiple apps on the same server compete for ports, libraries, system resources
- Scaling — adding more instances means provisioning servers, configuring load balancers, managing state
Containers solve all three. Package your app with its exact JDK, dependencies, and configuration into an image. Run that image anywhere — laptop, staging, production. Scale by running more copies.
| Without Containers | With Containers |
|---|---|
| Install JDK on every server | JDK bundled in the image |
| Configure env vars manually | Env vars declared in manifests |
| Port conflicts between apps | Each container has isolated networking |
| Snowflake servers | Identical images everywhere |
| Scale by buying bigger servers | Scale by running more containers |
2. Multi-Stage Dockerfile
A naive Dockerfile copies your source code and builds inside the image. The result: a 600MB+ image with build tools, source code, and the JDK compiler — none of which you need at runtime.
Multi-stage builds fix this. Build in one stage, copy only the JAR to the final stage:
# Stage 1: Build
FROM eclipse-temurin:21-jdk AS builder
WORKDIR /app
# Copy build files first (cache dependencies)
COPY pom.xml mvnw ./
COPY .mvn .mvn
RUN ./mvnw dependency:go-offline -B
# Copy source and build
COPY src src
RUN ./mvnw package -DskipTests -B
# Stage 2: Runtime
FROM eclipse-temurin:21-jre AS runtime
WORKDIR /app
# Create non-root user
RUN groupadd -r appuser && useradd -r -g appuser appuser
# Copy only the JAR
COPY --from=builder /app/target/*.jar app.jar
# Set ownership
RUN chown -R appuser:appuser /app
USER appuser
# JVM flags for containers
ENV JAVA_OPTS="-XX:+UseContainerSupport -XX:MaxRAMPercentage=75.0"
EXPOSE 8080
ENTRYPOINT ["sh", "-c", "java $JAVA_OPTS -jar app.jar"]
What This Gets You
| Aspect | Naive Dockerfile | Multi-Stage |
|---|---|---|
| Image size | ~600MB (JDK + build tools + source) | ~280MB (JRE only) |
| Security | Build tools, source code exposed | Only runtime artifacts |
| Build cache | Rebuilds everything on code change | Dependencies cached separately |
| User | Runs as root | Runs as non-root user |
Key Details
eclipse-temurin:21-jre — use JRE, not JDK, for the runtime stage. You don't need the compiler in production.
-XX:+UseContainerSupport — tells the JVM to respect container memory limits rather than sizing itself from the host's total RAM. It has been enabled by default since Java 10, but stating it explicitly documents the intent and guards against it being switched off elsewhere.
-XX:MaxRAMPercentage=75.0 — use up to 75% of the container's memory limit for the heap. Leave 25% for non-heap memory (metaspace, threads, native memory).
Non-root user — running as root inside a container is a security risk. If an attacker escapes the container, they have root on the host.
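To see what these flags mean in concrete numbers, here is a back-of-envelope sketch in plain Java (no Spring required). `maxHeapBytes` is a hypothetical helper for illustration; the real calculation happens inside the JVM:

```java
// Sketch: what -XX:MaxRAMPercentage=75.0 works out to for a given
// container memory limit.
public class HeapSizing {

    // Heap ceiling = container limit * (MaxRAMPercentage / 100)
    static long maxHeapBytes(long containerLimitBytes, double maxRamPercentage) {
        return (long) (containerLimitBytes * maxRamPercentage / 100.0);
    }

    public static void main(String[] args) {
        long oneGi = 1024L * 1024 * 1024;          // a 1Gi pod memory limit
        long heap = maxHeapBytes(oneGi, 75.0);     // 75% of the limit
        System.out.println(heap / (1024 * 1024));  // heap ceiling in MiB: 768
        // The remaining ~256Mi is left for metaspace, thread stacks,
        // and other native memory.
    }
}
```

With a 1Gi limit, the heap tops out around 768Mi — if your app needs more headroom for native memory, lower the percentage rather than the limit.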
3. Layered JARs (Spring Boot Optimization)
Spring Boot 2.3+ supports layered JARs. Instead of a single fat JAR, the application is split into layers that Docker can cache independently:
Layer 1: dependencies (rarely changes — cached)
Layer 2: spring-boot-loader (rarely changes — cached)
Layer 3: snapshot-dependencies (changes occasionally)
Layer 4: application (changes every build)
When you change your code, only Layer 4 rebuilds. Layers 1-3 come from cache. This makes builds dramatically faster.
Enable Layered JARs
Add to pom.xml:
<build>
<plugins>
<plugin>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-maven-plugin</artifactId>
<configuration>
<layers>
<enabled>true</enabled>
</layers>
</configuration>
</plugin>
</plugins>
</build>
Layered Dockerfile
# Stage 1: Build
FROM eclipse-temurin:21-jdk AS builder
WORKDIR /app
COPY pom.xml mvnw ./
COPY .mvn .mvn
RUN ./mvnw dependency:go-offline -B
COPY src src
RUN ./mvnw package -DskipTests -B
# Extract layers
RUN java -Djarmode=layertools -jar target/*.jar extract --destination extracted
# Stage 2: Runtime
FROM eclipse-temurin:21-jre AS runtime
WORKDIR /app
RUN groupadd -r appuser && useradd -r -g appuser appuser
# Copy layers in order (least-changing first for cache efficiency)
COPY --from=builder /app/extracted/dependencies/ ./
COPY --from=builder /app/extracted/spring-boot-loader/ ./
COPY --from=builder /app/extracted/snapshot-dependencies/ ./
COPY --from=builder /app/extracted/application/ ./
RUN chown -R appuser:appuser /app
USER appuser
ENV JAVA_OPTS="-XX:+UseContainerSupport -XX:MaxRAMPercentage=75.0"
EXPOSE 8080
ENTRYPOINT ["sh", "-c", "java $JAVA_OPTS org.springframework.boot.loader.launch.JarLauncher"]
Note: With layered extraction, the entry point uses JarLauncher instead of -jar app.jar. The org.springframework.boot.loader.launch package is the Spring Boot 3.2+ location; earlier 3.x versions use org.springframework.boot.loader.JarLauncher.
Build Time Comparison
First build: ~90 seconds
Code change (no layers): ~90 seconds (rebuilds everything)
Code change (with layers): ~15 seconds (only application layer rebuilds)
4. Docker Compose for Development
Don't install PostgreSQL, Redis, and your app separately. Docker Compose runs everything together:
# docker-compose.yml
version: "3.8"
services:
app:
build:
context: .
dockerfile: Dockerfile
ports:
- "8080:8080"
environment:
SPRING_PROFILES_ACTIVE: dev
SPRING_DATASOURCE_URL: jdbc:postgresql://db:5432/taskdb
SPRING_DATASOURCE_USERNAME: postgres
SPRING_DATASOURCE_PASSWORD: password
SPRING_DATA_REDIS_HOST: redis
depends_on:
db:
condition: service_healthy
redis:
condition: service_started
restart: unless-stopped
db:
image: postgres:16-alpine
environment:
POSTGRES_DB: taskdb
POSTGRES_USER: postgres
POSTGRES_PASSWORD: password
ports:
- "5432:5432"
volumes:
- pgdata:/var/lib/postgresql/data
healthcheck:
test: ["CMD-SHELL", "pg_isready -U postgres"]
interval: 5s
timeout: 5s
retries: 5
redis:
image: redis:7-alpine
ports:
- "6379:6379"
volumes:
- redisdata:/data
volumes:
pgdata:
redisdata:
Usage
# Start everything
docker compose up -d
# View logs
docker compose logs -f app
# Rebuild after code changes
docker compose up -d --build app
# Stop everything
docker compose down
# Stop and remove volumes (fresh start)
docker compose down -v
Development vs Production Compose
For development, add hot-reload by mounting source code:
# docker-compose.dev.yml (override)
version: "3.8"
services:
app:
build:
context: .
target: builder # Stop at build stage
command: ./mvnw spring-boot:run
volumes:
- ./src:/app/src # Mount source for hot reload
environment:
SPRING_PROFILES_ACTIVE: dev
SPRING_DEVTOOLS_RESTART_ENABLED: "true"
# Development with hot-reload
docker compose -f docker-compose.yml -f docker-compose.dev.yml up
5. Building and Pushing Images
Build Locally
# Build with a tag
docker build -t my-app:1.0.0 .
# Build with multiple tags
docker build -t my-app:1.0.0 -t my-app:latest .
# Build for a specific platform (e.g., ARM for Apple Silicon → x86 for production)
docker build --platform linux/amd64 -t my-app:1.0.0 .
Push to Container Registry
# Docker Hub
docker tag my-app:1.0.0 username/my-app:1.0.0
docker push username/my-app:1.0.0
# GitHub Container Registry
docker tag my-app:1.0.0 ghcr.io/username/my-app:1.0.0
docker push ghcr.io/username/my-app:1.0.0
# AWS ECR
aws ecr get-login-password --region us-east-1 | docker login --username AWS --password-stdin 123456789.dkr.ecr.us-east-1.amazonaws.com
docker tag my-app:1.0.0 123456789.dkr.ecr.us-east-1.amazonaws.com/my-app:1.0.0
docker push 123456789.dkr.ecr.us-east-1.amazonaws.com/my-app:1.0.0
Spring Boot Maven Plugin (No Dockerfile Needed)
Spring Boot can build OCI images directly without a Dockerfile using Cloud Native Buildpacks:
./mvnw spring-boot:build-image -Dspring-boot.build-image.imageName=my-app:1.0.0
This creates an optimized, layered image automatically. Good for quick builds; use a custom Dockerfile for more control.
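If you build images this way regularly, the image name can live in the plugin configuration instead of the command line. A sketch for pom.xml (the `my-app` name is a placeholder for your own image name):

```xml
<plugin>
  <groupId>org.springframework.boot</groupId>
  <artifactId>spring-boot-maven-plugin</artifactId>
  <configuration>
    <image>
      <!-- Picked up by ./mvnw spring-boot:build-image with no extra flags -->
      <name>my-app:${project.version}</name>
    </image>
  </configuration>
</plugin>
```

With this in place, plain `./mvnw spring-boot:build-image` produces a consistently named image across the team.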
6. Kubernetes Fundamentals for Spring Boot
Before deploying, understand the key Kubernetes objects:
| Object | Purpose | Spring Boot Relevance |
|---|---|---|
| Pod | Smallest deployable unit (runs your container) | One pod = one instance of your app |
| Deployment | Manages pod replicas and rolling updates | Declares how many instances and how to update |
| Service | Stable network endpoint for pods | Load balances across your app instances |
| Ingress | HTTP routing from outside the cluster | Maps domain names to your Service |
| ConfigMap | Non-sensitive configuration | application.properties values |
| Secret | Sensitive configuration | Database passwords, API keys |
| HPA | Horizontal Pod Autoscaler | Scale pods based on CPU/memory |
7. Kubernetes Deployment Manifest
Create a Deployment that runs your Spring Boot application:
# k8s/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: spring-app
labels:
app: spring-app
spec:
replicas: 3
selector:
matchLabels:
app: spring-app
strategy:
type: RollingUpdate
rollingUpdate:
maxSurge: 1 # At most 1 extra pod during update
maxUnavailable: 0 # All existing pods stay running during update
template:
metadata:
labels:
app: spring-app
spec:
containers:
- name: spring-app
image: ghcr.io/username/spring-app:1.0.0
ports:
- containerPort: 8080
envFrom:
- configMapRef:
name: spring-app-config
- secretRef:
name: spring-app-secrets
resources:
requests:
cpu: "250m"
memory: "512Mi"
limits:
cpu: "1000m"
memory: "1Gi"
livenessProbe:
httpGet:
path: /actuator/health/liveness
port: 8080
initialDelaySeconds: 30
periodSeconds: 10
failureThreshold: 3
readinessProbe:
httpGet:
path: /actuator/health/readiness
port: 8080
initialDelaySeconds: 15
periodSeconds: 5
failureThreshold: 3
startupProbe:
httpGet:
path: /actuator/health
port: 8080
initialDelaySeconds: 10
periodSeconds: 5
failureThreshold: 30   # 30 * 5s = 150s max startup time
Rolling Update Strategy
maxSurge: 1, maxUnavailable: 0
This means: during updates, Kubernetes creates one new pod first, waits until it's ready, then terminates one old pod. Repeat until all pods are updated. Zero downtime.
Resource Requests vs Limits
resources:
requests:
cpu: "250m" # Guaranteed: 0.25 CPU cores
memory: "512Mi" # Guaranteed: 512MB RAM
limits:
cpu: "1000m" # Maximum: 1 CPU core
memory: "1Gi"    # Maximum: 1GB RAM (OOM-killed if exceeded)
- Requests — what the pod is guaranteed. Kubernetes uses this for scheduling.
- Limits — the maximum. Exceeding CPU → throttled. Exceeding memory → OOM-killed.
For Spring Boot, 512Mi-1Gi is typical. The JVM needs heap + metaspace + thread stacks + native memory. Monitor actual usage before setting limits.
8. Health Probes with Spring Boot Actuator
Kubernetes uses three types of probes to manage your pods:
| Probe | Purpose | When It Fails |
|---|---|---|
| Startup | Is the app still starting? | Keep waiting (don't restart yet) |
| Liveness | Is the app alive? | Restart the pod |
| Readiness | Can the app handle traffic? | Remove from load balancer |
Enable Actuator Health Probes
Add the Actuator dependency:
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-actuator</artifactId>
</dependency>
Configure in application.yaml:
management:
endpoints:
web:
exposure:
include: health,info,prometheus
endpoint:
health:
show-details: always
probes:
enabled: true # Enable /actuator/health/liveness and /readiness
health:
livenessState:
enabled: true
readinessState:
enabled: true
db:
enabled: true # Check database connectivity
redis:
enabled: true # Check Redis connectivity
How Probes Work Together
Critical distinction:
- Database down → readiness fails (stop sending traffic), liveness passes (don't restart — restarting won't fix a database outage)
- App deadlocked → liveness fails (restart the pod)
- App starting → startup probe keeps running, liveness and readiness wait
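To wire this distinction up, Spring Boot lets you choose which indicators belong to each probe group. A sketch for application.yaml, assuming the db and redis health indicators from the configuration above — this puts dependency checks in readiness but keeps liveness independent of them:

```yaml
management:
  endpoint:
    health:
      group:
        readiness:
          include: readinessState,db,redis   # DB/Redis outage -> stop routing traffic
        liveness:
          include: livenessState             # dependencies never trigger a restart
```

With this grouping, /actuator/health/readiness fails during a database outage while /actuator/health/liveness keeps passing, which is exactly the behavior the bullets above describe.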
Custom Health Indicators
import org.springframework.boot.actuate.health.Health;
import org.springframework.boot.actuate.health.HealthIndicator;
import org.springframework.stereotype.Component;

@Component
public class ExternalServiceHealthIndicator implements HealthIndicator {

    // ExternalServiceClient is your own client for the downstream API
    private final ExternalServiceClient client;

    public ExternalServiceHealthIndicator(ExternalServiceClient client) {
        this.client = client;
    }

    @Override
    public Health health() {
        try {
            client.ping();
            return Health.up()
                    .withDetail("service", "external-api")
                    .withDetail("status", "reachable")
                    .build();
        } catch (Exception e) {
            return Health.down()
                    .withDetail("service", "external-api")
                    .withDetail("error", e.getMessage())
                    .build();
        }
    }
}
9. ConfigMaps and Secrets
Never hardcode configuration in your Docker image. Use ConfigMaps for non-sensitive values and Secrets for credentials.
ConfigMap
# k8s/configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
name: spring-app-config
data:
SPRING_PROFILES_ACTIVE: "prod"
SPRING_DATASOURCE_URL: "jdbc:postgresql://postgres-service:5432/taskdb"
SPRING_DATA_REDIS_HOST: "redis-service"
SERVER_PORT: "8080"
MANAGEMENT_ENDPOINTS_WEB_EXPOSURE_INCLUDE: "health,info,prometheus"
Secret
# k8s/secret.yaml
apiVersion: v1
kind: Secret
metadata:
name: spring-app-secrets
type: Opaque
data:
SPRING_DATASOURCE_USERNAME: cG9zdGdyZXM= # base64 encoded "postgres"
SPRING_DATASOURCE_PASSWORD: cGFzc3dvcmQ= # base64 encoded "password"
JWT_SECRET: bXktc3VwZXItc2VjcmV0LWtleQ== # base64 encoded
Important: Base64 is encoding, not encryption. For real secrets management, use Sealed Secrets, External Secrets Operator, or your cloud provider's secrets manager (AWS Secrets Manager, Azure Key Vault, GCP Secret Manager).
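The base64 values above can be produced and checked from the shell. Note the `-n` flag — without it, echo appends a newline that ends up inside the encoded credential:

```shell
# Encode a value for the Secret manifest
echo -n 'postgres' | base64        # cG9zdGdyZXM=

# Decode to verify what a Secret actually contains
echo 'cG9zdGdyZXM=' | base64 -d    # postgres
```

The same round trip works for inspecting live Secrets, e.g. piping the output of kubectl get secret through base64 -d.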
Create Secrets from Command Line
# More secure than putting base64 in YAML
kubectl create secret generic spring-app-secrets \
--from-literal=SPRING_DATASOURCE_USERNAME=postgres \
--from-literal=SPRING_DATASOURCE_PASSWORD=password \
--from-literal=JWT_SECRET=my-super-secret-key
Spring Boot Configuration Hierarchy
Spring Boot resolves configuration in this order (highest priority first):
- Environment variables (from ConfigMap/Secret) — highest priority
- application-{profile}.yaml — profile-specific
- application.yaml — defaults in your JAR
- @ConfigurationProperties defaults — lowest priority
This means ConfigMap/Secret values override anything in your packaged application.yaml. You ship a JAR with sensible defaults, and Kubernetes overrides what needs to change per environment.
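The mapping between environment variables and properties follows Spring's relaxed binding rules: uppercase with underscores on the Kubernetes side, dotted lowercase in your YAML. A sketch with placeholder values:

```yaml
# Default baked into the JAR (application.yaml)
spring:
  datasource:
    url: jdbc:postgresql://localhost:5432/taskdb
```

At runtime, the SPRING_DATASOURCE_URL environment variable injected from the ConfigMap binds to spring.datasource.url and wins, so the same image points at localhost in development and at postgres-service in the cluster without a rebuild.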
10. Service and Ingress
Service
A Service gives your pods a stable network identity:
# k8s/service.yaml
apiVersion: v1
kind: Service
metadata:
name: spring-app-service
spec:
type: ClusterIP
selector:
app: spring-app
ports:
- port: 80
targetPort: 8080
protocol: TCP
| Service Type | Access From | Use Case |
|---|---|---|
| ClusterIP | Inside the cluster only | Internal services, microservices |
| NodePort | External via node IP:port | Development, testing |
| LoadBalancer | External via cloud LB | Production on cloud providers |
For production, use ClusterIP + Ingress (most flexible) or LoadBalancer (simplest on cloud).
Ingress
Route external HTTP traffic to your Service:
# k8s/ingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: spring-app-ingress
annotations:
nginx.ingress.kubernetes.io/ssl-redirect: "true"
cert-manager.io/cluster-issuer: "letsencrypt-prod"
spec:
ingressClassName: nginx
tls:
- hosts:
- api.example.com
secretName: api-tls-cert
rules:
- host: api.example.com
http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: spring-app-service
port:
number: 80
This maps https://api.example.com to your Spring Boot application with automatic TLS via cert-manager.
11. Horizontal Pod Autoscaling
Scale your Spring Boot application based on CPU and memory usage:
# k8s/hpa.yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
name: spring-app-hpa
spec:
scaleTargetRef:
apiVersion: apps/v1
kind: Deployment
name: spring-app
minReplicas: 2
maxReplicas: 10
metrics:
- type: Resource
resource:
name: cpu
target:
type: Utilization
averageUtilization: 70 # Scale up when CPU > 70%
- type: Resource
resource:
name: memory
target:
type: Utilization
averageUtilization: 80 # Scale up when memory > 80%
behavior:
scaleUp:
stabilizationWindowSeconds: 60 # Wait 60s before scaling up again
policies:
- type: Pods
value: 2
periodSeconds: 60 # Add at most 2 pods per minute
scaleDown:
stabilizationWindowSeconds: 300 # Wait 5 minutes before scaling down
policies:
- type: Pods
value: 1
periodSeconds: 120 # Remove at most 1 pod per 2 minutes
How It Works
Scale-down is cautious (5-minute stabilization, 1 pod at a time) because scaling down too fast can cause cascading failures. Scale-up is faster because latency spikes hurt users immediately.
Prerequisites
HPA requires the Metrics Server:
# Install Metrics Server (if not already installed)
kubectl apply -f https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml
# Verify
kubectl top pods
12. Database on Kubernetes
For production databases, use a managed service (AWS RDS, Cloud SQL, Azure Database). Running PostgreSQL on Kubernetes is possible but adds operational complexity.
For development and testing, here's a PostgreSQL deployment:
# k8s/postgres.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: postgres
spec:
replicas: 1
selector:
matchLabels:
app: postgres
template:
metadata:
labels:
app: postgres
spec:
containers:
- name: postgres
image: postgres:16-alpine
ports:
- containerPort: 5432
envFrom:
- secretRef:
name: postgres-secrets
volumeMounts:
- name: pgdata
mountPath: /var/lib/postgresql/data
resources:
requests:
cpu: "250m"
memory: "256Mi"
limits:
cpu: "500m"
memory: "512Mi"
volumes:
- name: pgdata
persistentVolumeClaim:
claimName: postgres-pvc
---
apiVersion: v1
kind: Service
metadata:
name: postgres-service
spec:
type: ClusterIP
selector:
app: postgres
ports:
- port: 5432
targetPort: 5432
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: postgres-pvc
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 10Gi
---
apiVersion: v1
kind: Secret
metadata:
name: postgres-secrets
type: Opaque
data:
POSTGRES_DB: dGFza2Ri # taskdb
POSTGRES_USER: cG9zdGdyZXM= # postgres
POSTGRES_PASSWORD: cGFzc3dvcmQ= # password
13. Complete Deployment
Directory Structure
k8s/
├── namespace.yaml
├── configmap.yaml
├── secret.yaml
├── deployment.yaml
├── service.yaml
├── ingress.yaml
├── hpa.yaml
└── postgres.yaml # Dev/test only
Namespace
# k8s/namespace.yaml
apiVersion: v1
kind: Namespace
metadata:
name: spring-app
Deploy Everything
# Create namespace
kubectl apply -f k8s/namespace.yaml
# Deploy in order
kubectl apply -f k8s/configmap.yaml -n spring-app
kubectl apply -f k8s/secret.yaml -n spring-app
kubectl apply -f k8s/postgres.yaml -n spring-app # Dev only
kubectl apply -f k8s/deployment.yaml -n spring-app
kubectl apply -f k8s/service.yaml -n spring-app
kubectl apply -f k8s/ingress.yaml -n spring-app
kubectl apply -f k8s/hpa.yaml -n spring-app
# Or deploy everything at once
kubectl apply -f k8s/ -n spring-app
Verify
# Check pods
kubectl get pods -n spring-app
# NAME READY STATUS RESTARTS AGE
# spring-app-6d4f8b7c9d-abc12 1/1 Running 0 30s
# spring-app-6d4f8b7c9d-def34 1/1 Running 0 30s
# spring-app-6d4f8b7c9d-ghi56 1/1 Running 0 30s
# postgres-7f8d9e6c5b-xyz78 1/1 Running 0 45s
# Check services
kubectl get svc -n spring-app
# Check logs
kubectl logs -f deployment/spring-app -n spring-app
# Check health
kubectl exec -it deployment/spring-app -n spring-app -- curl localhost:8080/actuator/health
# Check HPA status
kubectl get hpa -n spring-app
# NAME REFERENCE TARGETS MINPODS MAXPODS REPLICAS
# spring-app-hpa   Deployment/spring-app   25%/70%, 40%/80%   2         10        3
14. Local Kubernetes Development
You don't need a cloud cluster to test Kubernetes manifests. Use minikube or kind locally.
minikube
# Start a local cluster
minikube start --cpus=4 --memory=4096
# Enable ingress
minikube addons enable ingress
minikube addons enable metrics-server
# Build your image inside minikube's Docker
eval $(minikube docker-env)
docker build -t spring-app:latest .
# Deploy (use imagePullPolicy: Never for local images)
kubectl apply -f k8s/ -n spring-app
# Access your app
minikube service spring-app-service -n spring-app
# Dashboard
minikube dashboard
kind (Kubernetes in Docker)
# Create a cluster
kind create cluster --name spring-dev
# Load your image into kind
docker build -t spring-app:latest .
kind load docker-image spring-app:latest --name spring-dev
# Deploy
kubectl apply -f k8s/ -n spring-app
# Port forward to access
kubectl port-forward svc/spring-app-service 8080:80 -n spring-app
15. Production Checklist
Before deploying to production, verify each item:
| Category | Item | Why |
|---|---|---|
| Docker | Multi-stage Dockerfile | Smaller images, no build tools in production |
| Docker | Non-root user | Security — limit container escape damage |
| Docker | JRE, not JDK | Smaller image, reduced attack surface |
| Docker | UseContainerSupport flag | JVM respects container memory limits |
| Docker | Layered JARs | Faster builds with layer caching |
| K8s | Resource requests and limits | Prevent resource starvation |
| K8s | Liveness probe | Restart dead pods |
| K8s | Readiness probe | Don't send traffic to unready pods |
| K8s | Startup probe | Allow slow-starting apps time to boot |
| K8s | ConfigMap for config | Don't bake configuration into images |
| K8s | Secrets for credentials | Don't store passwords in ConfigMaps |
| K8s | HPA | Scale under load |
| K8s | Rolling update strategy | Zero-downtime deployments |
| K8s | Managed database | Don't run production databases on K8s |
| K8s | Ingress with TLS | HTTPS for external traffic |
Summary and Key Takeaways
✅ Multi-stage Dockerfiles produce small, secure images — JRE only, no build tools
✅ Layered JARs make rebuilds fast — only the application layer changes on code updates
✅ Docker Compose simplifies local development — app, database, and cache in one command
✅ Kubernetes Deployments declare desired state — replicas, updates, resource limits
✅ Health probes are essential — startup, liveness, readiness serve different purposes
✅ ConfigMaps and Secrets externalize configuration — never bake credentials into images
✅ HPA scales your app automatically — scale up fast, scale down cautiously
✅ Rolling updates with maxSurge: 1, maxUnavailable: 0 give zero-downtime deploys
✅ Use managed databases in production — don't add database operations to your K8s workload
✅ Local K8s tools (minikube, kind) let you test manifests before deploying to the cloud
What's Next?
Now that your Spring Boot app is containerized and running on Kubernetes, explore these topics:
Continue the Spring Boot series:
- CI/CD with GitHub Actions - Automate builds, tests, and deployments
- Cloud Deployment (AWS, Azure, GCP) - Deploy to managed Kubernetes services
- Structured Logging & Centralized Logging - Observability in containers
Related Spring Boot Posts:
- Performance Optimization & Profiling - Tune before containerizing
- Testing Guide with JUnit & Mockito - Test before deploying
- Spring Boot Learning Roadmap - Complete learning path
Infrastructure Fundamentals:
- Docker Fundamentals - Container basics
- Load Balancing Explained - Traffic distribution
- Reverse Proxy Explained - Nginx, Traefik, SSL termination
Part of the Spring Boot Learning Roadmap series.