Reverse Proxy Explained

Nearly every production web application sits behind a reverse proxy. You may not realize it, but if you've deployed to Vercel, AWS, or any cloud provider — there's a reverse proxy between your users and your code.
In the Load Balancing post, we covered distributing traffic across multiple servers. A reverse proxy does more than that — it's the front door to your entire backend, handling SSL, caching, security, routing, and yes, load balancing too.
Time commitment: 1-2 hours
Prerequisites: Basic understanding of HTTP and client-server architecture
What You'll Learn
✅ What a reverse proxy is and how it differs from a forward proxy
✅ How reverse proxies work — the complete request flow
✅ SSL/TLS termination — offload encryption from your application
✅ Caching — serve responses without hitting your backend
✅ Security benefits — hide servers, block attacks, rate limit
✅ Load balancing — distribute traffic with a reverse proxy
✅ WebSocket and gRPC proxying — real-time protocol support
✅ Practical setup — Nginx, Traefik, and Caddy configurations
1. What is a Reverse Proxy?
A reverse proxy sits between clients and your backend servers. Clients never talk to your servers directly — they talk to the proxy, and the proxy forwards requests to the right server.
The key insight: clients don't know which server handles their request. They only see the reverse proxy's IP address. This simple concept enables everything else — SSL termination, caching, load balancing, security.
What It Actually Does
When a request arrives at the reverse proxy:
- Receives the client's HTTP request
- Inspects the request (URL path, headers, method)
- Decides which backend server should handle it
- Forwards the request to that server
- Receives the response from the backend
- Returns the response to the client
The client thinks it's talking to one server. In reality, there could be dozens of servers behind the proxy.
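The inspect-and-forward loop above can be sketched as a routing function. This is purely illustrative (the pool addresses and prefixes are hypothetical); a real proxy also forwards the request and relays the response:

```javascript
// Sketch of the proxy's routing decision (steps 2-3 above).
const pools = {
  "/api/": ["10.0.2.101:8080", "10.0.2.102:8080"],
  "/": ["10.0.1.101:3000", "10.0.1.102:3000"],
};

let counter = 0;
function chooseBackend(path) {
  // Longest-prefix match, similar to how Nginx picks a location block
  const prefix = Object.keys(pools)
    .filter((p) => path.startsWith(p))
    .sort((a, b) => b.length - a.length)[0];
  const pool = pools[prefix];
  return pool[counter++ % pool.length]; // round robin within the pool
}

console.log(chooseBackend("/api/users")); // 10.0.2.101:8080
console.log(chooseBackend("/index.html")); // 10.0.1.102:3000
```

The client never sees which address came back; it only ever talks to the proxy.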
2. Forward Proxy vs Reverse Proxy
These are often confused. They sit on opposite sides of the connection.
| Aspect | Forward Proxy | Reverse Proxy |
|---|---|---|
| Protects | Clients | Servers |
| Who uses it | Client-side (corporate networks, VPNs) | Server-side (web infrastructure) |
| Client knows about it | Yes (configured in browser/OS) | No (transparent) |
| Server knows about it | No (sees proxy's IP) | Yes (configured to receive from proxy) |
| Primary purpose | Access control, anonymity, caching | Security, performance, routing |
| Examples | Squid, corporate firewalls, VPN | Nginx, Traefik, Cloudflare |
Forward proxy: Your company's proxy that filters your web traffic. You configure your browser to use it. The website doesn't know you're behind a proxy.
Reverse proxy: Nginx sitting in front of your Node.js app. Users don't know it exists. Your Node.js app knows requests come through it.
3. Why Use a Reverse Proxy?
You could expose your application directly to the internet. Here's why you shouldn't:
Security
Your application servers stay hidden on a private network. Attackers can't directly reach them.
Without reverse proxy:
Internet → App Server (public IP, exposed ports, vulnerable)
With reverse proxy:
Internet → Reverse Proxy (public IP) → App Server (private network)

The reverse proxy is a hardened, purpose-built gateway. Your application server is free to focus on business logic without worrying about DDoS protection, rate limiting, or IP filtering.
Performance
- SSL termination: Handle encryption at the proxy, not your app
- Caching: Serve cached responses without hitting the backend
- Compression: Gzip/Brotli responses before sending to clients
- Connection pooling: Reuse backend connections instead of creating new ones
- Static file serving: Let the proxy serve images, CSS, JS directly
Flexibility
- Zero-downtime deploys: Swap backend servers without dropping connections
- A/B testing: Route traffic to different app versions
- API gateway: Route /api/* to one service, /web/* to another
- Protocol translation: Accept HTTP/2 from clients, speak HTTP/1.1 to backends
4. How a Reverse Proxy Works
Let's trace a complete request through a reverse proxy:
The Request Journey
Step 1 — SSL Termination: The proxy decrypts the HTTPS request. Backend servers receive plain HTTP on the private network.
Step 2 — Cache Check: If this response is cached and still fresh, return it immediately. Backend never sees this request.
Step 3 — Rate Limiting: Check if this client IP has exceeded request limits. If so, return 429 Too Many Requests.
Step 4 — Backend Selection: Choose a backend server (round robin, least connections, URL-based routing).
Step 5 — Cache Storage: If the backend's response includes Cache-Control headers, store it for future requests.
Step 6 — Compression: Compress the response body with gzip or Brotli before sending to the client.
Headers Added by the Proxy
Reverse proxies add headers so backends know the original request details:
X-Forwarded-For: 203.0.113.50 # Client's real IP
X-Forwarded-Proto: https # Original protocol
X-Forwarded-Host: example.com # Original hostname
X-Real-IP: 203.0.113.50 # Client IP (Nginx-specific)
X-Request-ID: abc-123-def # Unique request identifier

Without these headers, your backend would see the proxy's IP as the client IP — breaking logging, rate limiting, and geolocation.
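On the backend, recovering the real client IP means reading the leftmost X-Forwarded-For entry. A minimal sketch, assuming the header was set by your own trusted proxy (clients can forge it otherwise):

```javascript
// Sketch: recover the original client IP behind a trusted proxy.
// X-Forwarded-For may carry a chain: "client, proxy1, proxy2".
function clientIp(headers, socketAddr) {
  const xff = headers["x-forwarded-for"];
  if (!xff) return socketAddr; // no proxy in front of us
  return xff.split(",")[0].trim(); // leftmost entry = original client
}

console.log(clientIp({ "x-forwarded-for": "203.0.113.50, 10.0.0.2" }, "10.0.0.2"));
// 203.0.113.50
```

Only trust this header when requests can reach the backend exclusively through the proxy.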
5. SSL/TLS Termination
SSL termination is the most common reason to use a reverse proxy. It offloads encryption from your application.
Why Terminate at the Proxy?
| Without Proxy | With Proxy |
|---|---|
| Every app server needs SSL certificates | One certificate on the proxy |
| App handles TLS handshakes (CPU-intensive) | Proxy handles all TLS work |
| Certificate renewal on every server | Renew once, in one place |
| App frameworks need SSL configuration | App runs plain HTTP |
| Mixed protocols are harder | Clean HTTP internally |
Nginx SSL Termination
server {
listen 443 ssl http2;
server_name example.com;
# SSL certificates
ssl_certificate /etc/letsencrypt/live/example.com/fullchain.pem;
ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;
# Modern TLS configuration
ssl_protocols TLSv1.2 TLSv1.3;
ssl_ciphers ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256;
ssl_prefer_server_ciphers off;
# HSTS (tell browsers to always use HTTPS)
add_header Strict-Transport-Security "max-age=63072000" always;
# Proxy to backend (plain HTTP)
location / {
proxy_pass http://127.0.0.1:3000;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
}
}
# Redirect HTTP to HTTPS
server {
listen 80;
server_name example.com;
return 301 https://$server_name$request_uri;
}

SSL Modes
SSL Termination: the proxy decrypts traffic and forwards plain HTTP to the backend.
SSL Passthrough: the proxy forwards the encrypted stream untouched (TCP-level proxying).
SSL Re-encryption: the proxy decrypts and inspects traffic, then opens a new TLS connection to the backend.
| Mode | Client → Proxy | Proxy → Backend | Use Case |
|---|---|---|---|
| Termination | HTTPS | HTTP | Most common. Backend on private network |
| Passthrough | HTTPS | HTTPS (same connection) | Proxy can't inspect traffic. End-to-end encryption |
| Re-encryption | HTTPS | HTTPS (new connection) | Compliance requirements. Proxy can inspect + re-encrypt |
For most applications, SSL termination is the right choice. Your backend runs on a private network where HTTP is fine.
6. Caching
A reverse proxy can cache responses and serve them without touching the backend. This is the single biggest performance win for read-heavy applications.
How It Works
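At its core, proxy caching is a keyed lookup with a TTL: a miss goes to the backend and the response is stored; later requests for the same key are served from the cache until the TTL expires. A minimal sketch (names hypothetical; Nginx's real store is on disk with zone metadata in shared memory):

```javascript
// Sketch: a TTL cache producing the states Nginx reports in X-Cache-Status.
const cache = new Map(); // cacheKey → { body, expiresAt }

function lookup(key, now) {
  const entry = cache.get(key);
  if (!entry) return { status: "MISS" };
  if (now >= entry.expiresAt) return { status: "EXPIRED" };
  return { status: "HIT", body: entry.body };
}

function store(key, body, ttlMs, now) {
  cache.set(key, { body, expiresAt: now + ttlMs });
}

// MISS → fetch from backend → store → HITs until the TTL runs out
store("GET/api/posts", "[...]", 10_000, 0);
console.log(lookup("GET/api/posts", 5_000).status); // HIT
console.log(lookup("GET/api/posts", 15_000).status); // EXPIRED
```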
Nginx Caching Configuration
# Define cache zone (10MB metadata, 1GB storage, 60-minute expiry)
proxy_cache_path /var/cache/nginx levels=1:2
keys_zone=app_cache:10m
max_size=1g
inactive=60m
use_temp_path=off;
server {
listen 443 ssl http2;
server_name example.com;
location /api/ {
proxy_pass http://backend;
# Enable caching
proxy_cache app_cache;
proxy_cache_valid 200 10m; # Cache 200 responses for 10 minutes
proxy_cache_valid 404 1m; # Cache 404 responses for 1 minute
proxy_cache_use_stale error timeout updating; # Serve stale on error
proxy_cache_lock on; # Only one request to backend for same URL
# Cache key (what makes each cached entry unique)
proxy_cache_key "$scheme$request_method$host$request_uri";
# Add cache status header for debugging
add_header X-Cache-Status $upstream_cache_status;
# Don't cache POST, PUT, DELETE (note: a second "location /api/" block
# would be a duplicate and Nginx refuses to start, so the bypass
# logic lives in the same location)
if ($request_method != GET) {
set $no_cache 1;
}
proxy_cache_bypass $no_cache;
proxy_no_cache $no_cache;
}
# Cache static assets aggressively
location /static/ {
proxy_pass http://backend;
proxy_cache app_cache;
proxy_cache_valid 200 7d; # 7 days for static files
add_header Cache-Control "public, max-age=604800";
}
}

Cache Status Headers
The X-Cache-Status header tells you what happened:
| Status | Meaning |
|---|---|
| HIT | Served from cache |
| MISS | Not in cache, forwarded to backend |
| EXPIRED | Was in cache but expired, re-fetched |
| STALE | Cache expired but serving stale (backend is down) |
| BYPASS | Cache was bypassed (POST, cookie-based, etc.) |
| UPDATING | Serving stale while updating in background |
What to Cache and What Not To
| Cache | Don't Cache |
|---|---|
| API responses with Cache-Control headers | User-specific data (profile, cart, settings) |
| Static assets (CSS, JS, images, fonts) | Responses with Set-Cookie |
| Public content (blog posts, product listings) | POST/PUT/DELETE requests |
| Search results (with short TTL) | Real-time data (stock prices, chat) |
| Third-party API responses | Authenticated API responses |
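The table above reduces to a small cacheability predicate. A simplified sketch (the real HTTP caching rules, per RFC 9111, are considerably richer):

```javascript
// Sketch: should the proxy cache this response at all?
function isCacheable(method, resHeaders) {
  if (method !== "GET") return false; // never cache writes
  if (resHeaders["set-cookie"]) return false; // likely user-specific
  const cc = resHeaders["cache-control"] || "";
  if (/no-store|private/.test(cc)) return false; // explicitly uncacheable
  return /public|max-age=\d+/.test(cc); // require explicit opt-in
}

console.log(isCacheable("GET", { "cache-control": "public, max-age=60" })); // true
console.log(isCacheable("POST", { "cache-control": "public" })); // false
```

Requiring an explicit opt-in (rather than caching by default) is the safer design: a wrongly cached private response is a data leak, while a missed caching opportunity is only a performance cost.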
7. Load Balancing with a Reverse Proxy
Most reverse proxies include load balancing out of the box. We covered algorithms in depth in the Load Balancing post — here's the practical setup.
Nginx Load Balancing
# Define backend server group
upstream api_servers {
least_conn; # Use least connections algorithm
server 10.0.1.101:3000 weight=3; # More powerful server gets more traffic
server 10.0.1.102:3000 weight=2;
server 10.0.1.103:3000 weight=1;
server 10.0.1.104:3000 backup; # Only used when others are down
# Health checks (Nginx Plus)
# health_check interval=10 fails=3 passes=2;
}
server {
listen 443 ssl http2;
server_name api.example.com;
location / {
proxy_pass http://api_servers;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
# Timeouts
proxy_connect_timeout 5s;
proxy_read_timeout 30s;
proxy_send_timeout 10s;
# Retry on failure
proxy_next_upstream error timeout http_502 http_503;
proxy_next_upstream_tries 2;
}
}

Path-Based Routing
Route different paths to different backend services:
upstream web_app {
server 10.0.1.101:3000;
server 10.0.1.102:3000;
}
upstream api_service {
server 10.0.2.101:8080;
server 10.0.2.102:8080;
}
upstream admin_panel {
server 10.0.3.101:4000;
}
server {
listen 443 ssl http2;
server_name example.com;
# Frontend application
location / {
proxy_pass http://web_app;
}
# API service
location /api/ {
proxy_pass http://api_service;
}
# Admin panel (restricted by IP)
location /admin/ {
allow 10.0.0.0/8;
deny all;
proxy_pass http://admin_panel;
}
}

This is the API gateway pattern — one entry point routing to multiple services based on URL path, headers, or other request attributes.
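The weight parameters in the upstream block earlier can be pictured as an expanded schedule. This is a sketch of plain weighted round robin (Nginx actually uses a smoother interleaving variant, but the traffic split is the same):

```javascript
// Sketch: weighted round robin, mirroring "weight=3/2/1" above.
const servers = [
  { addr: "10.0.1.101:3000", weight: 3 },
  { addr: "10.0.1.102:3000", weight: 2 },
  { addr: "10.0.1.103:3000", weight: 1 },
];

// Expand by weight: a 3-2-1 split over every 6 requests.
const schedule = servers.flatMap((s) => Array(s.weight).fill(s.addr));

let i = 0;
function nextBackend() {
  return schedule[i++ % schedule.length];
}

console.log(nextBackend()); // 10.0.1.101:3000
```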
8. Security Benefits
A reverse proxy is your first line of defense. Here's what it can do:
Hide Backend Servers
Direct exposure:
Client → App Server at 203.0.113.10:3000
(Attacker knows your server IP, port, and technology)
Behind reverse proxy:
Client → Proxy at 198.51.100.1:443
(Attacker sees only the proxy. Backend on private 10.x.x.x network)

The proxy exposes only ports 80 and 443. Your application servers are invisible to the internet.
Rate Limiting
# Define rate limit zone (10MB shared memory, 10 requests/second per IP)
limit_req_zone $binary_remote_addr zone=api_limit:10m rate=10r/s;
limit_req_zone $binary_remote_addr zone=login_limit:10m rate=1r/s;
server {
# API endpoints — 10 req/s with burst of 20
location /api/ {
limit_req zone=api_limit burst=20 nodelay;
proxy_pass http://backend;
}
# Login endpoint — strict 1 req/s
location /api/auth/login {
limit_req zone=login_limit burst=3;
limit_req_status 429;
proxy_pass http://backend;
}
}

Request Filtering
Block suspicious requests before they reach your application:
server {
# Block known bad user agents
if ($http_user_agent ~* "(bot|crawler|spider|scraper)") {
return 403;
}
# Block requests with SQL injection patterns
location /api/ {
if ($query_string ~* "(union|select|insert|update|delete|drop)") {
return 403;
}
proxy_pass http://backend;
}
# Block large request bodies (prevent upload attacks)
client_max_body_size 10m;
# Hide server version
server_tokens off;
# Security headers
add_header X-Frame-Options "SAMEORIGIN" always;
add_header X-Content-Type-Options "nosniff" always;
add_header X-XSS-Protection "1; mode=block" always;
add_header Referrer-Policy "strict-origin-when-cross-origin" always;
}

IP Allowlisting
# Admin endpoints only accessible from specific IPs
location /admin/ {
allow 10.0.0.0/8; # Internal network
allow 192.168.1.0/24; # Office network
deny all; # Block everyone else
proxy_pass http://admin_backend;
}

9. WebSocket and gRPC Proxying
Reverse proxies handle more than HTTP. Modern applications need WebSocket and gRPC support.
WebSocket Proxying
WebSocket connections start as HTTP and then upgrade. The proxy needs to handle this upgrade:
# WebSocket support
map $http_upgrade $connection_upgrade {
default upgrade;
"" close;
}
server {
listen 443 ssl http2;
server_name example.com;
# WebSocket endpoint
location /ws/ {
proxy_pass http://websocket_backend;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection $connection_upgrade;
proxy_set_header Host $host;
# Longer timeouts for WebSocket connections
proxy_read_timeout 3600s;
proxy_send_timeout 3600s;
}
# Regular HTTP
location / {
proxy_pass http://web_backend;
}
}

Key settings:
- proxy_http_version 1.1 — WebSocket requires HTTP/1.1
- Upgrade and Connection headers — tell the backend to upgrade
- Long timeouts — WebSocket connections are long-lived
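The map block above encodes a tiny function: if the client sent any Upgrade header, forward Connection: upgrade; if it sent none, send Connection: close. As a sketch:

```javascript
// Sketch of the Nginx map: $http_upgrade → $connection_upgrade.
// An empty or missing Upgrade header means a plain HTTP request.
function connectionUpgrade(httpUpgrade) {
  return httpUpgrade ? "upgrade" : "close"; // "" and undefined → close
}

console.log(connectionUpgrade("websocket")); // upgrade
console.log(connectionUpgrade("")); // close
```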
gRPC Proxying
server {
listen 443 ssl http2;
server_name grpc.example.com;
location / {
grpc_pass grpc://grpc_backend;
grpc_set_header Host $host;
# Error handling
error_page 502 = /error502grpc;
}
location = /error502grpc {
internal;
default_type application/grpc;
add_header grpc-status 14;
add_header content-length 0;
return 204;
}
}

10. Request/Response Manipulation
Reverse proxies can modify requests and responses as they pass through.
URL Rewriting
server {
# Rewrite /old-api/* to /api/v2/*
location /old-api/ {
rewrite ^/old-api/(.*) /api/v2/$1 break;
proxy_pass http://backend;
}
# Strip prefix: /service-a/users → /users
location /service-a/ {
proxy_pass http://service_a_backend/; # Trailing slash strips the prefix
}
# Add prefix: /users → /api/v1/users
location /users {
proxy_pass http://backend/api/v1/users;
}
}

Header Manipulation
server {
location /api/ {
proxy_pass http://backend;
# Add headers to the request sent to backend
proxy_set_header X-Request-ID $request_id;
proxy_set_header X-Real-IP $remote_addr;
# Add headers to the response sent to client
add_header X-Served-By $hostname;
add_header X-Response-Time $request_time;
# Remove sensitive headers from backend response
proxy_hide_header X-Powered-By;
proxy_hide_header Server;
}
}

Response Compression
server {
# Enable gzip compression
gzip on;
gzip_vary on;
gzip_min_length 1024;
gzip_comp_level 5;
gzip_types
text/plain
text/css
text/javascript
application/json
application/javascript
application/xml
image/svg+xml;
# Compress responses to proxied requests as well
gzip_proxied any;
location / {
proxy_pass http://backend;
}
}

11. Reverse Proxy in Microservices
In a microservices architecture, the reverse proxy becomes an API gateway — the single entry point for all client requests.
What the API Gateway Handles
| Responsibility | Example |
|---|---|
| Routing | /api/users/* → User Service |
| Authentication | Validate JWT before forwarding |
| Rate limiting | 100 req/min per API key |
| Request aggregation | Combine responses from multiple services |
| Protocol translation | Accept REST, forward as gRPC |
| Circuit breaking | Stop forwarding to failed services |
| Logging | Centralized access logs |
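The authentication responsibility can be sketched as a gate in front of routing. This is purely illustrative (the token set and route names are hypothetical; a real gateway verifies a JWT signature rather than checking a lookup table):

```javascript
// Sketch: an API-gateway auth gate, as in the table above.
const validTokens = new Set(["token-abc"]); // hypothetical; stand-in for JWT verification

function gateway(path, authHeader) {
  const publicPrefixes = ["/api/auth/"]; // no auth required
  const isPublic = publicPrefixes.some((p) => path.startsWith(p));
  const token = (authHeader || "").replace(/^Bearer /, "");
  if (!isPublic && !validTokens.has(token)) {
    return { status: 401 }; // rejected before any backend sees it
  }
  return {
    status: 200,
    route: path.startsWith("/api/users/") ? "user_service" : "other",
  };
}

console.log(gateway("/api/users/42", "Bearer token-abc").route); // user_service
console.log(gateway("/api/users/42", undefined).status); // 401
```

This is exactly what the auth_request setup below does in Nginx: an internal subrequest validates the token, and only validated requests reach a service.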
Nginx as API Gateway
upstream auth_service {
server 10.0.1.101:3001;
}
upstream user_service {
server 10.0.2.101:3002;
server 10.0.2.102:3002;
}
upstream order_service {
server 10.0.3.101:3003;
server 10.0.3.102:3003;
server 10.0.3.103:3003;
}
server {
listen 443 ssl http2;
server_name api.example.com;
# Auth subrequest — validate token before forwarding
location = /auth/validate {
internal;
proxy_pass http://auth_service/validate;
proxy_pass_request_body off;
proxy_set_header Content-Length "";
proxy_set_header X-Original-URI $request_uri;
}
# Protected endpoints — require auth
location /api/users/ {
auth_request /auth/validate;
auth_request_set $auth_user $upstream_http_x_auth_user;
proxy_set_header X-Auth-User $auth_user;
proxy_pass http://user_service/;
}
location /api/orders/ {
auth_request /auth/validate;
auth_request_set $auth_user $upstream_http_x_auth_user;
proxy_set_header X-Auth-User $auth_user;
proxy_pass http://order_service/;
}
# Public endpoints — no auth required
location /api/auth/ {
proxy_pass http://auth_service/;
}
}

12. Popular Reverse Proxy Solutions
Nginx
The most widely used reverse proxy. Handles millions of concurrent connections with minimal resources.
Best for: General-purpose reverse proxy, static file serving, SSL termination.
# Complete Nginx reverse proxy setup
server {
listen 80;
server_name example.com;
return 301 https://$server_name$request_uri;
}
server {
listen 443 ssl http2;
server_name example.com;
ssl_certificate /etc/letsencrypt/live/example.com/fullchain.pem;
ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;
# Security
server_tokens off;
add_header X-Frame-Options "SAMEORIGIN" always;
add_header X-Content-Type-Options "nosniff" always;
# Compression
gzip on;
gzip_types text/plain text/css application/json application/javascript;
# Proxy to backend
location / {
proxy_pass http://127.0.0.1:3000;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
# Timeouts
proxy_connect_timeout 5s;
proxy_read_timeout 60s;
}
# Static files
location /static/ {
alias /var/www/static/;
expires 7d;
add_header Cache-Control "public, immutable";
}
}

Traefik
Designed for containerized environments. Automatically discovers services from Docker and Kubernetes.
Best for: Docker Compose, Kubernetes, dynamic service discovery.
# docker-compose.yml with Traefik
version: "3.8"
services:
traefik:
image: traefik:v3.0
command:
- "--api.insecure=true"
- "--providers.docker=true"
- "--providers.docker.exposedbydefault=false"
- "--entrypoints.web.address=:80"
- "--entrypoints.websecure.address=:443"
- "--certificatesresolvers.letsencrypt.acme.httpchallenge.entrypoint=web"
- "--certificatesresolvers.letsencrypt.acme.email=admin@example.com"
- "--certificatesresolvers.letsencrypt.acme.storage=/letsencrypt/acme.json"
ports:
- "80:80"
- "443:443"
- "8080:8080" # Dashboard
volumes:
- /var/run/docker.sock:/var/run/docker.sock:ro
- letsencrypt:/letsencrypt
api:
image: my-api:latest
labels:
- "traefik.enable=true"
- "traefik.http.routers.api.rule=Host(`api.example.com`)"
- "traefik.http.routers.api.tls.certresolver=letsencrypt"
- "traefik.http.services.api.loadbalancer.server.port=3000"
deploy:
replicas: 3
web:
image: my-web:latest
labels:
- "traefik.enable=true"
- "traefik.http.routers.web.rule=Host(`example.com`)"
- "traefik.http.routers.web.tls.certresolver=letsencrypt"
- "traefik.http.services.web.loadbalancer.server.port=8080"
volumes:
letsencrypt:

Traefik's superpower: no configuration files for routing. Add Docker labels to your services, and Traefik automatically creates routes, SSL certificates, and load balancing.
Caddy
The simplest reverse proxy. Automatic HTTPS with zero configuration.
Best for: Small to medium deployments where simplicity matters most.
# Caddyfile — this is the entire configuration
example.com {
reverse_proxy 127.0.0.1:3000
# Automatic HTTPS — Caddy handles certificates
# Automatic HTTP → HTTPS redirect
# Automatic certificate renewal
}
api.example.com {
reverse_proxy /api/users/* 10.0.1.101:3001 10.0.1.102:3001 {
lb_policy least_conn
health_uri /health
health_interval 10s
}
reverse_proxy /api/orders/* 10.0.2.101:3002
# Compression
encode gzip
# Rate limiting (requires the caddy-ratelimit plugin, not core Caddy)
rate_limit {
zone api {
key {remote_host}
events 100
window 1m
}
}
}

Comparison
| Feature | Nginx | Traefik | Caddy |
|---|---|---|---|
| Configuration | Config files | Labels/API | Caddyfile |
| Auto HTTPS | Manual (certbot) | Built-in | Built-in |
| Docker integration | Manual | Automatic | Plugin |
| Performance | Excellent | Good | Good |
| Learning curve | Moderate | Low (Docker) | Very low |
| Ecosystem | Huge | Growing | Growing |
| Best for | Production at scale | Containers | Simple deployments |
13. Performance Optimization
Connection Pooling
Without connection pooling, every request creates a new TCP connection to the backend:
Client → Proxy → [new TCP connection] → Backend → [close] → Proxy → Client
Client → Proxy → [new TCP connection] → Backend → [close] → Proxy → Client

With keepalive connections:
Client → Proxy → [reuse connection] → Backend → Proxy → Client
Client → Proxy → [reuse connection] → Backend → Proxy → Client

upstream backend {
server 10.0.1.101:3000;
server 10.0.1.102:3000;
# Keep 32 idle connections alive per worker
keepalive 32;
}
server {
location / {
proxy_pass http://backend;
proxy_http_version 1.1;
proxy_set_header Connection ""; # Enable keepalive to upstream
}
}

Buffering
Control how the proxy buffers responses from backends:
server {
location /api/ {
proxy_pass http://backend;
# Buffer the backend response
proxy_buffering on;
proxy_buffer_size 4k; # Buffer for the first part of response
proxy_buffers 8 16k; # 8 buffers of 16k each
proxy_busy_buffers_size 32k; # Max size before sending to client
}
# Disable buffering for streaming endpoints
location /api/stream/ {
proxy_pass http://backend;
proxy_buffering off;
}
}

Timeouts
Set appropriate timeouts to prevent hanging connections:
server {
# Time to establish connection to backend
proxy_connect_timeout 5s;
# Time to read response from backend
proxy_read_timeout 60s;
# Time to send request to backend
proxy_send_timeout 10s;
# Client request body timeout
client_body_timeout 10s;
# Client response timeout
send_timeout 10s;
# Keep-alive timeout
keepalive_timeout 65s;
}

14. Common Patterns
Blue-Green Deployment
Switch traffic between two identical environments:
# Blue environment (currently live)
upstream blue {
server 10.0.1.101:3000;
server 10.0.1.102:3000;
}
# Green environment (new version)
upstream green {
server 10.0.2.101:3000;
server 10.0.2.102:3000;
}
# Toggle by changing which upstream is used
server {
location / {
proxy_pass http://blue; # Change to "green" to switch
}
}Canary Deployment
Send a percentage of traffic to the new version:
upstream stable {
server 10.0.1.101:3000 weight=9;
server 10.0.1.102:3000 weight=9;
}
upstream canary {
server 10.0.2.101:3000 weight=1;
}
# Split traffic using split_clients (defined at the http level)
split_clients "${remote_addr}" $backend {
90% stable;
* canary;
}
server {
location / {
proxy_pass http://$backend;
}
}

Health Check Endpoint
Your backend should expose a health check endpoint for the proxy:
// Express.js health check (async so a real DB check can be awaited)
app.get("/health", async (_req, res) => {
const health = {
status: "ok",
uptime: process.uptime(),
timestamp: new Date().toISOString(),
};
try {
// Check database connection
// await db.query("SELECT 1");
res.json(health);
} catch (error) {
res.status(503).json({
...health,
status: "error",
message: "Database connection failed",
});
}
});

15. Troubleshooting
Common Issues
| Problem | Cause | Fix |
|---|---|---|
| 502 Bad Gateway | Backend is down or not responding | Check backend health, increase timeouts |
| 504 Gateway Timeout | Backend too slow | Increase proxy_read_timeout, optimize backend |
| WebSocket drops | Missing upgrade headers | Add Upgrade and Connection headers |
| Wrong client IP | Missing forwarded headers | Add X-Real-IP and X-Forwarded-For |
| Mixed content | Wrong X-Forwarded-Proto | Set proxy_set_header X-Forwarded-Proto $scheme |
| Large uploads fail | client_max_body_size too small | Increase client_max_body_size |
| CORS errors | Headers stripped by proxy | Add CORS headers in proxy config |
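Several of these failures come down to backend health state. The fails/passes hysteresis that health checks use (e.g. the commented health_check interval=10 fails=3 passes=2 earlier) can be sketched as:

```javascript
// Sketch: fails/passes health tracking, like Nginx's
// "fails=3 passes=2" semantics (simplified).
function makeHealthTracker(fails = 3, passes = 2) {
  let healthy = true, failCount = 0, passCount = 0;
  return {
    record(ok) {
      if (ok) {
        passCount++; failCount = 0;
        if (passCount >= passes) healthy = true; // needs consecutive passes
      } else {
        failCount++; passCount = 0;
        if (failCount >= fails) healthy = false; // needs consecutive failures
      }
      return healthy;
    },
  };
}

const t = makeHealthTracker();
t.record(false); t.record(false);
console.log(t.record(false)); // false — 3 consecutive failures
t.record(true);
console.log(t.record(true)); // true — 2 consecutive passes
```

Requiring consecutive passes before re-admitting a server prevents a flapping backend from oscillating in and out of the pool.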
Debugging Checklist
- Check backend directly: curl http://127.0.0.1:3000 — does it work without the proxy?
- Check proxy logs: tail -f /var/log/nginx/error.log
- Check response headers: curl -I https://example.com — look for X-Cache-Status, Server
- Test SSL: openssl s_client -connect example.com:443 — certificate chain OK?
- Check upstream health: curl http://backend:3000/health — backend healthy?
Summary
| Topic | What We Covered |
|---|---|
| Concept | Reverse proxy sits between clients and servers, handling routing, security, and performance |
| Forward vs Reverse | Forward protects clients, reverse protects servers |
| SSL Termination | Offload encryption to the proxy, backends run plain HTTP |
| Caching | Cache responses at the proxy to reduce backend load |
| Security | Hide servers, rate limit, filter requests, add security headers |
| Load Balancing | Distribute traffic across multiple backend servers |
| WebSocket/gRPC | Handle connection upgrades and HTTP/2 streaming |
| API Gateway | Route requests to different microservices based on path |
| Solutions | Nginx (production), Traefik (containers), Caddy (simplicity) |
| Performance | Connection pooling, compression, buffering, timeouts |
Key takeaways:
✅ Every production application should sit behind a reverse proxy
✅ SSL termination at the proxy simplifies certificate management
✅ Caching is the single biggest performance win for read-heavy apps
✅ The proxy is your first security layer — rate limiting, IP filtering, header security
✅ Path-based routing turns a reverse proxy into an API gateway
✅ Start with Nginx for most use cases, Traefik for Docker, Caddy for simplicity
✅ Always set proper timeouts, forwarded headers, and health checks
A reverse proxy isn't just infrastructure plumbing — it's where performance, security, and deployment flexibility come together. Set it up once, configure it well, and your backend can focus on what matters: your application logic.