Build a URL Shortener: Deployment & Production

We've built a full-stack URL shortener — API, database, cache, auth, frontend, and tests. It all works beautifully on your laptop. But software that only runs on localhost isn't software — it's a prototype.
This final post takes our project from development to production. We'll containerize everything with Docker, automate deployments with GitHub Actions, add monitoring with Prometheus and Grafana, and harden the application for real-world traffic.
Time commitment: 2–3 hours
Prerequisites: Phase 8: Testing Strategy
What we'll build in this post:
✅ Multi-stage Docker builds for API and frontend
✅ Docker Compose orchestrating the full stack
✅ GitHub Actions CI/CD pipeline (lint, test, build, deploy)
✅ Detailed health check endpoints with dependency status
✅ Prometheus metrics and Grafana dashboards
✅ Structured JSON logging with pino
✅ Production hardening (Helmet, CORS, compression)
✅ Nginx reverse proxy configuration
Production Architecture Overview
Here's what our production deployment looks like:
Every component runs in its own Docker container. Let's build it layer by layer.
Multi-Stage Docker Build: API Server
Multi-stage builds keep production images small by separating the build environment from the runtime environment.
API Dockerfile
# Dockerfile (API)
# ── Stage 1: Build ──────────────────────────────
FROM node:20-alpine AS builder
WORKDIR /app
# Copy dependency files first (layer caching)
COPY package.json package-lock.json ./
COPY prisma ./prisma/
# Install all dependencies (including devDependencies for building)
RUN npm ci
# Generate Prisma client
RUN npx prisma generate
# Copy source code
COPY tsconfig.json ./
COPY src ./src/
# Build TypeScript
RUN npm run build
# ── Stage 2: Production ────────────────────────
FROM node:20-alpine AS production
WORKDIR /app
# Create non-root user for security
RUN addgroup -g 1001 -S appgroup && \
adduser -S appuser -u 1001 -G appgroup
# Copy dependency files
COPY package.json package-lock.json ./
COPY prisma ./prisma/
# Install production dependencies only
RUN npm ci --omit=dev && \
npx prisma generate && \
npm cache clean --force
# Copy built files from builder stage
COPY --from=builder /app/dist ./dist
# Set ownership
RUN chown -R appuser:appgroup /app
USER appuser
# Expose port
EXPOSE 3000
# Health check
HEALTHCHECK --interval=30s --timeout=5s --start-period=10s --retries=3 \
CMD wget --no-verbose --tries=1 --spider http://localhost:3000/health || exit 1
# Start the server
CMD ["node", "dist/index.js"]

Why multi-stage?
- The builder stage includes the TypeScript compiler and dev dependencies — ~400MB
- The production stage has only runtime code — ~150MB
- That's a 60% reduction in image size
Why non-root user? If an attacker exploits a vulnerability in our app, they get limited access instead of root privileges inside the container.
Why npm ci instead of npm install? npm ci installs from lock file exactly, ensuring reproducible builds. It also deletes node_modules first, preventing stale dependencies.
.dockerignore
# .dockerignore
node_modules
dist
.env
.env.local
.git
.gitignore
*.md
tests
coverage
.vscode
docker-compose*.yml

This prevents copying unnecessary files into the Docker build context, speeding up builds significantly.
Docker Build: Frontend
The React frontend gets its own Docker image, served by Nginx for production performance.
# frontend/Dockerfile
# ── Stage 1: Build ──────────────────────────────
FROM node:20-alpine AS builder
WORKDIR /app
COPY package.json package-lock.json ./
RUN npm ci
COPY . .
# Build-time environment variables
ARG VITE_API_URL=/api
ENV VITE_API_URL=$VITE_API_URL
RUN npm run build
# ── Stage 2: Serve with Nginx ──────────────────
FROM nginx:alpine AS production
# Remove default nginx config
RUN rm /etc/nginx/conf.d/default.conf
# Copy custom nginx config
COPY nginx.conf /etc/nginx/conf.d/
# Copy built React app
COPY --from=builder /app/dist /usr/share/nginx/html
# Expose port
EXPOSE 80
CMD ["nginx", "-g", "daemon off;"]

Frontend Nginx Config
# frontend/nginx.conf
server {
listen 80;
server_name _;
root /usr/share/nginx/html;
index index.html;
# Enable gzip compression
gzip on;
gzip_types text/plain text/css application/json application/javascript text/xml;
gzip_min_length 1000;
# Cache static assets aggressively
location /assets/ {
expires 1y;
add_header Cache-Control "public, immutable";
}
# SPA fallback — all routes serve index.html
location / {
try_files $uri $uri/ /index.html;
}
}

Docker Compose: Full Stack
Now we orchestrate everything with Docker Compose:
# docker-compose.yml
services:
# ── PostgreSQL ─────────────────────────────────
postgres:
image: postgres:16-alpine
container_name: url-shortener-db
environment:
POSTGRES_DB: url_shortener
POSTGRES_USER: ${DB_USER:-postgres}
POSTGRES_PASSWORD: ${DB_PASSWORD:?Database password required}
volumes:
- postgres_data:/var/lib/postgresql/data
ports:
- "5432:5432"
healthcheck:
test: ["CMD-SHELL", "pg_isready -U ${DB_USER:-postgres} -d url_shortener"]
interval: 10s
timeout: 5s
retries: 5
restart: unless-stopped
# ── Redis ──────────────────────────────────────
redis:
image: redis:7-alpine
container_name: url-shortener-redis
command: redis-server --requirepass ${REDIS_PASSWORD:?Redis password required}
volumes:
- redis_data:/data
ports:
- "6379:6379"
healthcheck:
test: ["CMD", "redis-cli", "-a", "${REDIS_PASSWORD}", "ping"]
interval: 10s
timeout: 5s
retries: 5
restart: unless-stopped
# ── API Server ─────────────────────────────────
api:
build:
context: .
dockerfile: Dockerfile
container_name: url-shortener-api
environment:
NODE_ENV: production
PORT: 3000
DATABASE_URL: postgresql://${DB_USER:-postgres}:${DB_PASSWORD}@postgres:5432/url_shortener
REDIS_URL: redis://:${REDIS_PASSWORD}@redis:6379
JWT_SECRET: ${JWT_SECRET:?JWT secret required}
CORS_ORIGIN: ${CORS_ORIGIN:-https://yourdomain.com}
BASE_URL: ${BASE_URL:-https://yourdomain.com}
ports:
- "3000:3000"
depends_on:
postgres:
condition: service_healthy
redis:
condition: service_healthy
restart: unless-stopped
# ── Frontend ───────────────────────────────────
frontend:
build:
context: ./frontend
dockerfile: Dockerfile
args:
VITE_API_URL: /api
container_name: url-shortener-frontend
ports:
- "8080:80"
restart: unless-stopped
# ── Nginx Reverse Proxy ────────────────────────
nginx:
image: nginx:alpine
container_name: url-shortener-nginx
ports:
- "80:80"
- "443:443"
volumes:
- ./nginx/nginx.conf:/etc/nginx/nginx.conf:ro
- ./nginx/ssl:/etc/nginx/ssl:ro
depends_on:
- api
- frontend
restart: unless-stopped
# ── Prometheus ─────────────────────────────────
prometheus:
image: prom/prometheus:latest
container_name: url-shortener-prometheus
volumes:
- ./monitoring/prometheus.yml:/etc/prometheus/prometheus.yml:ro
- prometheus_data:/prometheus
ports:
- "9090:9090"
restart: unless-stopped
# ── Grafana ────────────────────────────────────
grafana:
image: grafana/grafana:latest
container_name: url-shortener-grafana
environment:
GF_SECURITY_ADMIN_PASSWORD: ${GRAFANA_PASSWORD:-admin}
volumes:
- grafana_data:/var/lib/grafana
ports:
- "3001:3000"
depends_on:
- prometheus
restart: unless-stopped
volumes:
postgres_data:
redis_data:
prometheus_data:
grafana_data:

Starting the Stack
# Create .env file for production secrets
cat > .env << 'EOF'
DB_USER=postgres
DB_PASSWORD=your-secure-db-password
REDIS_PASSWORD=your-secure-redis-password
JWT_SECRET=your-secure-jwt-secret-at-least-32-chars
CORS_ORIGIN=https://yourdomain.com
BASE_URL=https://yourdomain.com
GRAFANA_PASSWORD=your-grafana-password
EOF
# Start everything
docker compose up -d --build
# Run database migrations
docker compose exec api npx prisma migrate deploy
# Check all services are healthy
docker compose ps

Expected output:
NAME                       STATUS         PORTS
url-shortener-api          Up (healthy)   0.0.0.0:3000->3000/tcp
url-shortener-db           Up (healthy)   0.0.0.0:5432->5432/tcp
url-shortener-redis        Up (healthy)   0.0.0.0:6379->6379/tcp
url-shortener-frontend     Up             0.0.0.0:8080->80/tcp
url-shortener-nginx        Up             0.0.0.0:80->80/tcp, 0.0.0.0:443->443/tcp
url-shortener-prometheus   Up             0.0.0.0:9090->9090/tcp
url-shortener-grafana      Up             0.0.0.0:3001->3000/tcp

Environment Variable Management
Never hardcode secrets. Here's a clean pattern for managing environment variables across environments:
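Before the pattern itself, note the two interpolation modifiers the compose file above relies on: ${VAR:-default} falls back when the variable is unset or empty, and ${VAR:?message} aborts startup with an error. A small sketch of those semantics — the `resolve` function is purely illustrative, not a compose API:

```typescript
// Sketch of docker compose's ${VAR:-default} and ${VAR:?message} interpolation.
// `resolve` is an illustrative name, not part of compose or this project.
function resolve(
  env: Record<string, string | undefined>,
  name: string,
  modifier?: ':-' | ':?',
  arg?: string
): string {
  const value = env[name];
  if (value !== undefined && value !== '') return value; // set and non-empty wins
  if (modifier === ':-') return arg ?? '';               // fall back to the default
  if (modifier === ':?') {
    throw new Error(`required variable ${name} is missing: ${arg ?? ''}`);
  }
  return value ?? '';                                    // bare ${VAR} resolves to empty
}

// ${DB_USER:-postgres} with DB_USER unset falls back to "postgres"
const user = resolve({}, 'DB_USER', ':-', 'postgres');
```

This is why `docker compose up` without a `.env` file fails immediately on `DB_PASSWORD` and `REDIS_PASSWORD` instead of starting with empty credentials.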
.env.example
# .env.example — commit this to git (no real values!)
# Database
DB_USER=postgres
DB_PASSWORD=change-me
DATABASE_URL=postgresql://postgres:change-me@localhost:5432/url_shortener
# Redis
REDIS_PASSWORD=change-me
REDIS_URL=redis://:change-me@localhost:6379
# Auth
JWT_SECRET=change-me-to-at-least-32-characters
# App
NODE_ENV=production
PORT=3000
BASE_URL=https://yourdomain.com
CORS_ORIGIN=https://yourdomain.com
# Monitoring
GRAFANA_PASSWORD=admin

Environment Validation at Startup
// src/config/env.ts
import { z } from 'zod';
const envSchema = z.object({
NODE_ENV: z.enum(['development', 'production', 'test']).default('development'),
PORT: z.coerce.number().default(3000),
DATABASE_URL: z.string().url(),
REDIS_URL: z.string().url(),
JWT_SECRET: z.string().min(32, 'JWT_SECRET must be at least 32 characters'),
BASE_URL: z.string().url(),
CORS_ORIGIN: z.string(),
});
const parsed = envSchema.safeParse(process.env);
if (!parsed.success) {
console.error('❌ Invalid environment variables:');
console.error(parsed.error.format());
process.exit(1);
}
export const env = parsed.data;

Fail fast. If a required environment variable is missing, the server refuses to start with a clear error message. This is much better than discovering an undefined value at 3 AM in production.
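If you'd rather not pull in zod, the same fail-fast behavior can be sketched by hand — `requireEnv` below is an illustrative helper, not part of the project, and zod gives you the same thing with far richer error reporting:

```typescript
// Hand-rolled fail-fast validation. `requireEnv` is an illustrative helper name;
// in the real project the zod schema above does this with better error messages.
function requireEnv(
  env: Record<string, string | undefined>,
  name: string,
  validate?: (value: string) => boolean
): string {
  const value = env[name];
  if (value === undefined || value === '' || (validate && !validate(value))) {
    throw new Error(`Invalid or missing environment variable: ${name}`);
  }
  return value;
}

// Same rule as the zod schema: JWT_SECRET must be at least 32 characters
const checkJwtSecret = (env: Record<string, string | undefined>) =>
  requireEnv(env, 'JWT_SECRET', (v) => v.length >= 32);
```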
Health Check Endpoints
A basic /health endpoint isn't enough for production. We need to know if our dependencies are healthy too.
// src/routes/healthRoutes.ts
import { Router, Request, Response } from 'express';
import { PrismaClient } from '@prisma/client';
import { redisClient } from '../config/redis';
const router = Router();
const prisma = new PrismaClient();
interface HealthStatus {
status: 'healthy' | 'degraded' | 'unhealthy';
timestamp: string;
uptime: number;
version: string;
checks: {
database: ComponentHealth;
redis: ComponentHealth;
};
}
interface ComponentHealth {
status: 'up' | 'down';
latency?: number;
error?: string;
}
async function checkDatabase(): Promise<ComponentHealth> {
const start = Date.now();
try {
await prisma.$queryRaw`SELECT 1`;
return { status: 'up', latency: Date.now() - start };
} catch (error) {
return {
status: 'down',
latency: Date.now() - start,
error: error instanceof Error ? error.message : 'Unknown error',
};
}
}
async function checkRedis(): Promise<ComponentHealth> {
const start = Date.now();
try {
await redisClient.ping();
return { status: 'up', latency: Date.now() - start };
} catch (error) {
return {
status: 'down',
latency: Date.now() - start,
error: error instanceof Error ? error.message : 'Unknown error',
};
}
}
// Simple liveness probe — is the process running?
router.get('/health/live', (_req: Request, res: Response) => {
res.json({ status: 'ok' });
});
// Readiness probe — can this instance serve traffic?
router.get('/health/ready', async (_req: Request, res: Response) => {
const [database, redis] = await Promise.all([
checkDatabase(),
checkRedis(),
]);
const allUp = database.status === 'up' && redis.status === 'up';
const health: HealthStatus = {
status: allUp ? 'healthy' : database.status === 'down' ? 'unhealthy' : 'degraded',
timestamp: new Date().toISOString(),
uptime: process.uptime(),
version: process.env.npm_package_version || '1.0.0',
checks: { database, redis },
};
res.status(allUp ? 200 : 503).json(health);
});
export default router;

Two endpoints, two purposes:
- /health/live — Kubernetes liveness probe. If this fails, restart the container.
- /health/ready — Kubernetes readiness probe. If this fails, stop routing traffic to this instance but don't restart it.
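The status field in the readiness payload follows a simple precedence: a database outage makes the instance unhealthy, while a Redis outage only degrades it (the cache-aside pattern can fall back to the database). Extracted as a pure function:

```typescript
// Mirrors the ternary in the readiness handler above: database down → unhealthy,
// only Redis down → degraded, both up → healthy.
type ComponentStatus = 'up' | 'down';
type Overall = 'healthy' | 'degraded' | 'unhealthy';

function overallStatus(database: ComponentStatus, redis: ComponentStatus): Overall {
  if (database === 'down') return 'unhealthy'; // can't serve redirects at all
  if (redis === 'down') return 'degraded';     // slower, but still correct
  return 'healthy';
}
```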
Example response when everything is healthy:
{
"status": "healthy",
"timestamp": "2026-03-21T10:00:00.000Z",
"uptime": 3600,
"version": "1.0.0",
"checks": {
"database": { "status": "up", "latency": 2 },
"redis": { "status": "up", "latency": 1 }
}
}

Example response when Redis is down:
{
"status": "degraded",
"timestamp": "2026-03-21T10:00:00.000Z",
"uptime": 3600,
"version": "1.0.0",
"checks": {
"database": { "status": "up", "latency": 2 },
"redis": { "status": "down", "latency": 5000, "error": "Connection refused" }
}
}

Register the health routes in your app:
// src/app.ts (add to existing app setup)
import healthRoutes from './routes/healthRoutes';
// Health checks — no auth required, no rate limiting
app.use(healthRoutes);

Structured Logging with Pino
console.log is fine for development. In production, you need structured JSON logs that can be parsed by log aggregation tools like ELK Stack, Datadog, or CloudWatch.
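To make "structured" concrete: pino emits one JSON object per line, merging base fields (service, environment) with per-call context. A dependency-free sketch of that shape — the `logLine` function is purely illustrative, pino's real output has more fields:

```typescript
// One JSON object per line: base fields merged with call-site context.
// This is roughly the shape pino produces; `logLine` itself is just for illustration.
function logLine(
  level: number,
  base: Record<string, unknown>,
  context: Record<string, unknown>,
  msg: string
): string {
  return JSON.stringify({ level, time: Date.now(), ...base, ...context, msg });
}

const line = logLine(30, { service: 'url-shortener' }, { shortCode: 'abc123' }, 'URL shortened');
// A log aggregator can now filter on parsed.shortCode instead of grepping free text
const parsed = JSON.parse(line);
```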
Install Pino
npm install pino pino-http
# Pretty printing for development">
npm install -D pino-pretty # Pretty printing for development

Logger Setup
// src/config/logger.ts
import pino from 'pino';
import { env } from './env';
export const logger = pino({
level: env.NODE_ENV === 'production' ? 'info' : 'debug',
transport: env.NODE_ENV === 'development'
? { target: 'pino-pretty', options: { colorize: true } }
: undefined,
base: {
service: 'url-shortener',
environment: env.NODE_ENV,
},
serializers: {
err: pino.stdSerializers.err,
req: pino.stdSerializers.req,
res: pino.stdSerializers.res,
},
// Redact sensitive fields
redact: ['req.headers.authorization', 'req.headers.cookie'],
});

HTTP Request Logging
// src/middleware/requestLogger.ts
import pinoHttp from 'pino-http';
import { logger } from '../config/logger';
export const requestLogger = pinoHttp({
logger,
// Don't log health check requests (too noisy)
autoLogging: {
ignore: (req) => req.url === '/health/live' || req.url === '/health/ready',
},
customSuccessMessage: (req, res) => {
return `${req.method} ${req.url} ${res.statusCode}`;
},
customErrorMessage: (req, res) => {
return `${req.method} ${req.url} ${res.statusCode} ERROR`;
},
// Add custom properties to each log line
customProps: (req) => ({
requestId: req.headers['x-request-id'] || crypto.randomUUID(),
}),
});

Using the Logger
// Replace console.log everywhere
import { logger } from '../config/logger';
// Structured log with context
logger.info({ shortCode: 'abc123', userId: 'user-1' }, 'URL shortened successfully');
// Error with stack trace
logger.error({ err: error, shortCode: 'abc123' }, 'Failed to resolve short code');
// Performance timing
const start = Date.now();
await someOperation();
logger.info({ duration: Date.now() - start }, 'Database query completed');

Production log output (JSON — one line per entry):
{"level":30,"time":1711018800000,"service":"url-shortener","environment":"production","requestId":"550e8400-e29b-41d4-a716-446655440000","shortCode":"abc123","msg":"URL shortened successfully"}

Development log output (pretty printed):
[10:00:00] INFO (url-shortener): URL shortened successfully
shortCode: "abc123"
userId: "user-1"

Add the request logger to your app:
// src/app.ts
import { requestLogger } from './middleware/requestLogger';
app.use(requestLogger);

Prometheus Metrics
Prometheus scrapes a /metrics endpoint on your server and stores time-series data. You then query it with Grafana for dashboards and alerts.
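Under the hood, a labeled counter is just a monotonically increasing number per label value, rendered in Prometheus's text exposition format. A simplified sketch of the idea — prom-client additionally handles registration, label validation, and the full format spec:

```typescript
// Simplified labeled counter: one monotonically increasing value per label value,
// rendered roughly in the text format that /metrics serves. prom-client does much more.
class SimpleCounter {
  private values = new Map<string, number>();
  constructor(private name: string, private labelName: string) {}

  inc(label: string, by = 1): void {
    this.values.set(label, (this.values.get(label) ?? 0) + by);
  }

  // Roughly the exposition format Prometheus scrapes: name{label="value"} count
  expose(): string {
    return [...this.values.entries()]
      .map(([label, v]) => `${this.name}{${this.labelName}="${label}"} ${v}`)
      .join('\n');
  }
}

const cacheOps = new SimpleCounter('urlshortener_cache_operations_total', 'operation');
cacheOps.inc('hit');
cacheOps.inc('hit');
cacheOps.inc('miss');
```

Prometheus scrapes these raw totals and derives rates and ratios at query time, which is why the application only ever increments.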
Install prom-client
npm install prom-client

Metrics Setup
// src/config/metrics.ts
import client from 'prom-client';
// Collect default Node.js metrics (memory, CPU, event loop)
client.collectDefaultMetrics({ prefix: 'urlshortener_' });
// ── Custom Metrics ────────────────────────────────
// HTTP request duration
export const httpRequestDuration = new client.Histogram({
name: 'urlshortener_http_request_duration_seconds',
help: 'Duration of HTTP requests in seconds',
labelNames: ['method', 'route', 'status_code'],
buckets: [0.001, 0.005, 0.01, 0.05, 0.1, 0.5, 1, 5],
});
// Total URLs shortened
export const urlsShortenedTotal = new client.Counter({
name: 'urlshortener_urls_shortened_total',
help: 'Total number of URLs shortened',
labelNames: ['type'], // 'random' or 'custom_alias'
});
// Total redirects served
export const redirectsTotal = new client.Counter({
name: 'urlshortener_redirects_total',
help: 'Total number of redirects served',
labelNames: ['status'], // 'success', 'not_found', 'expired'
});
// Cache hit/miss ratio
export const cacheOperations = new client.Counter({
name: 'urlshortener_cache_operations_total',
help: 'Redis cache operations',
labelNames: ['operation'], // 'hit', 'miss', 'set', 'delete'
});
// Active database connections
export const dbConnections = new client.Gauge({
name: 'urlshortener_db_connections_active',
help: 'Number of active database connections',
});
// Request size
export const requestSize = new client.Histogram({
name: 'urlshortener_request_size_bytes',
help: 'Size of HTTP requests in bytes',
buckets: [100, 500, 1000, 5000, 10000],
});
// Expose the registry for the /metrics endpoint
export const register = client.register;

Metrics Middleware
// src/middleware/metricsMiddleware.ts
import { Request, Response, NextFunction } from 'express';
import { httpRequestDuration, requestSize } from '../config/metrics';
export function metricsMiddleware(req: Request, res: Response, next: NextFunction) {
const start = Date.now();
// Track request size
const contentLength = parseInt(req.headers['content-length'] || '0', 10);
if (contentLength > 0) {
requestSize.observe(contentLength);
}
// Track response time when response finishes
res.on('finish', () => {
const duration = (Date.now() - start) / 1000;
const route = req.route?.path || req.path;
httpRequestDuration
.labels(req.method, route, res.statusCode.toString())
.observe(duration);
});
next();
}

Metrics Endpoint
// src/routes/metricsRoutes.ts
import { Router, Request, Response } from 'express';
import { register } from '../config/metrics';
const router = Router();
router.get('/metrics', async (_req: Request, res: Response) => {
try {
res.set('Content-Type', register.contentType);
res.end(await register.metrics());
} catch (error) {
res.status(500).end();
}
});
export default router;

Instrument Your Services
Add metric tracking to your URL service:
// src/services/urlService.ts (add to existing methods)
import { urlsShortenedTotal, redirectsTotal, cacheOperations } from '../config/metrics';
async shortenUrl(url: string, customAlias?: string) {
// ... existing logic ...
urlsShortenedTotal.labels(customAlias ? 'custom_alias' : 'random').inc();
return result;
}
async resolveAndTrack(shortCode: string) {
// Check cache first
const cached = await redisClient.get(`url:${shortCode}`);
if (cached) {
cacheOperations.labels('hit').inc();
redirectsTotal.labels('success').inc();
return cached;
}
cacheOperations.labels('miss').inc();
// ... database lookup ...
if (!url) {
redirectsTotal.labels('not_found').inc();
return null;
}
// Cache for next time
await redisClient.setex(`url:${shortCode}`, 3600, url.originalUrl);
cacheOperations.labels('set').inc();
redirectsTotal.labels('success').inc();
return url.originalUrl;
}

Register metrics middleware and route:
// src/app.ts
import { metricsMiddleware } from './middleware/metricsMiddleware';
import metricsRoutes from './routes/metricsRoutes';
app.use(metricsMiddleware);
app.use(metricsRoutes);

Prometheus Configuration
# monitoring/prometheus.yml
global:
scrape_interval: 15s
evaluation_interval: 15s
scrape_configs:
- job_name: 'url-shortener-api'
metrics_path: '/metrics'
static_configs:
- targets: ['api:3000']
labels:
app: 'url-shortener'
environment: 'production'
- job_name: 'node-exporter'
static_configs:
- targets: ['node-exporter:9100']

Visit http://localhost:9090 to access the Prometheus UI and query your metrics. (The node-exporter job assumes you've added a node-exporter container to docker-compose for host metrics; drop that job otherwise.)
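The dashboard queries below lean on histogram_quantile, which estimates a quantile from cumulative bucket counts by linear interpolation. A simplified sketch of the idea — Prometheus's real implementation also handles +Inf buckets, empty histograms, and other edge cases:

```typescript
// Simplified histogram_quantile: buckets are [upperBound, cumulativeCount] pairs,
// sorted by bound. Find the bucket containing the target rank and interpolate
// linearly inside it. Prometheus additionally handles +Inf and empty histograms.
function histogramQuantile(q: number, buckets: Array<[number, number]>): number {
  const total = buckets[buckets.length - 1][1];
  const rank = q * total;
  let prevBound = 0;
  let prevCount = 0;
  for (const [bound, count] of buckets) {
    if (count >= rank && count > prevCount) {
      return prevBound + (bound - prevBound) * ((rank - prevCount) / (count - prevCount));
    }
    prevBound = bound;
    prevCount = count;
  }
  return buckets[buckets.length - 1][0];
}

// 100 requests: 50 finished under 0.1s, 90 under 0.5s, all under 1s.
// The 95th-percentile request falls halfway into the 0.5–1s bucket → 0.75s
const p95 = histogramQuantile(0.95, [[0.1, 50], [0.5, 90], [1, 100]]);
```

This is also why the bucket boundaries chosen in the metrics setup matter: the estimate is only as precise as the buckets around your real latencies.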
Grafana Dashboard Setup
1. Add Prometheus as Data Source
- Open Grafana at http://localhost:3001
- Log in with admin / your GRAFANA_PASSWORD
- Go to Configuration > Data Sources > Add data source
- Select Prometheus
- Set URL to http://prometheus:9090
- Click Save & Test
2. Create the Dashboard
Create a JSON dashboard definition for import:
{
"dashboard": {
"title": "URL Shortener Dashboard",
"panels": [
{
"title": "Request Rate (req/sec)",
"type": "graph",
"targets": [{
"expr": "rate(urlshortener_http_request_duration_seconds_count[5m])",
"legendFormat": "{{method}} {{route}}"
}]
},
{
"title": "Response Time (p95)",
"type": "graph",
"targets": [{
"expr": "histogram_quantile(0.95, rate(urlshortener_http_request_duration_seconds_bucket[5m]))",
"legendFormat": "p95 latency"
}]
},
{
"title": "URLs Shortened",
"type": "stat",
"targets": [{
"expr": "urlshortener_urls_shortened_total"
}]
},
{
"title": "Cache Hit Rate",
"type": "gauge",
"targets": [{
"expr": "rate(urlshortener_cache_operations_total{operation='hit'}[5m]) / (rate(urlshortener_cache_operations_total{operation='hit'}[5m]) + rate(urlshortener_cache_operations_total{operation='miss'}[5m])) * 100"
}]
},
{
"title": "Redirects per Second",
"type": "graph",
"targets": [{
"expr": "rate(urlshortener_redirects_total[5m])",
"legendFormat": "{{status}}"
}]
}
]
}
}

Key Metrics to Monitor
| Metric | What it tells you | Alert threshold |
|---|---|---|
| Request rate | Traffic volume | Spike > 10x normal |
| p95 latency | User experience | > 500ms |
| Error rate | Application health | > 1% of requests |
| Cache hit rate | Redis effectiveness | < 80% |
| DB connection pool | Database health | > 80% pool used |
| Memory usage | Resource consumption | > 80% of limit |
| Redirect latency | Core feature performance | > 50ms |
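The cache hit rate threshold in the table is computed exactly like the Grafana gauge query above: hits over hits plus misses. As a plain function, e.g. for an alerting script (names are illustrative):

```typescript
// Hit rate as a percentage, matching the Grafana gauge query:
// hits / (hits + misses) * 100. Returns null when there's no traffic yet,
// so an alerting script can skip the check instead of dividing by zero.
function cacheHitRate(hits: number, misses: number): number | null {
  const total = hits + misses;
  return total === 0 ? null : (hits / total) * 100;
}

// 800 hits, 200 misses → 80%, right at the alert threshold from the table
const rate = cacheHitRate(800, 200);
```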
Production Hardening
Security Middleware
npm install helmet cors compression
npm install -D @types/cors @types/compression

// src/middleware/security.ts
import helmet from 'helmet';
import cors from 'cors';
import compression from 'compression';
import { env } from '../config/env';
// Helmet sets security headers
export const securityHeaders = helmet({
contentSecurityPolicy: {
directives: {
defaultSrc: ["'self'"],
scriptSrc: ["'self'"],
styleSrc: ["'self'", "'unsafe-inline'"],
imgSrc: ["'self'", 'data:', 'https:'],
},
},
// HSTS — force HTTPS
hsts: {
maxAge: 31536000, // 1 year
includeSubDomains: true,
preload: true,
},
});
// CORS — restrict which origins can call your API
export const corsMiddleware = cors({
origin: env.CORS_ORIGIN.split(','),
methods: ['GET', 'POST', 'PUT', 'DELETE'],
allowedHeaders: ['Content-Type', 'Authorization'],
credentials: true,
maxAge: 86400, // Cache preflight for 24 hours
});
// Compression — gzip responses to reduce bandwidth
export const compressionMiddleware = compression({
// Don't compress responses smaller than 1KB
threshold: 1024,
// Don't compress if client doesn't accept it
filter: (req, res) => {
if (req.headers['x-no-compression']) return false;
return compression.filter(req, res);
},
});

Apply Security Middleware
// src/app.ts
import { securityHeaders, corsMiddleware, compressionMiddleware } from './middleware/security';
const app = express();
// Security middleware — apply FIRST
app.use(securityHeaders);
app.use(corsMiddleware);
app.use(compressionMiddleware);
// Then your existing middleware
app.use(express.json({ limit: '10kb' }));
app.use(requestLogger);
app.use(metricsMiddleware);
// ... routes

Graceful Shutdown
Update your server entry point to handle shutdown signals properly:
// src/index.ts
import app from './app';
import { env } from './config/env';
import { logger } from './config/logger';
import { PrismaClient } from '@prisma/client';
import { redisClient } from './config/redis';
const prisma = new PrismaClient();
const server = app.listen(env.PORT, () => {
logger.info({ port: env.PORT, env: env.NODE_ENV }, 'Server started');
});
// Graceful shutdown
async function shutdown(signal: string) {
logger.info({ signal }, 'Shutdown signal received');
// Stop accepting new connections
server.close(async () => {
logger.info('HTTP server closed');
// Close database connection
await prisma.$disconnect();
logger.info('Database connection closed');
// Close Redis connection
await redisClient.quit();
logger.info('Redis connection closed');
process.exit(0);
});
// Force exit after 30 seconds
setTimeout(() => {
logger.error('Forced shutdown after timeout');
process.exit(1);
}, 30000);
}
process.on('SIGTERM', () => shutdown('SIGTERM'));
process.on('SIGINT', () => shutdown('SIGINT'));
// Handle uncaught errors
process.on('uncaughtException', (error) => {
logger.fatal({ err: error }, 'Uncaught exception');
process.exit(1);
});
process.on('unhandledRejection', (reason) => {
logger.fatal({ err: reason }, 'Unhandled rejection');
process.exit(1);
});

Why graceful shutdown matters: Without it, a deploy kills in-flight requests. Users see 502 errors. Database connections leak. With graceful shutdown, the server finishes processing current requests, closes connections cleanly, and then exits.
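The force-exit pattern can also be expressed as a promise race, if you prefer awaiting each shutdown step over a bare setTimeout. `withTimeout` below is an illustrative helper, not part of the project:

```typescript
// Race a shutdown step against a hard deadline — the promise flavor of the
// setTimeout force-exit above. `withTimeout` is an illustrative helper name.
function withTimeout<T>(work: Promise<T>, ms: number, label: string): Promise<T> {
  let timer!: ReturnType<typeof setTimeout>;
  const deadline = new Promise<never>((_, reject) => {
    timer = setTimeout(() => reject(new Error(`${label} timed out after ${ms}ms`)), ms);
  });
  // Clear the timer so a fast shutdown doesn't keep the event loop alive
  return Promise.race([work, deadline]).finally(() => clearTimeout(timer));
}

// e.g. await withTimeout(prisma.$disconnect(), 5000, 'prisma disconnect');
```

Either way, the key property is the same: cleanup gets a bounded amount of time, and a hung dependency can't block the deploy forever.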
Nginx Reverse Proxy
Nginx sits in front of everything, handling SSL termination, static file serving, and request routing.
# nginx/nginx.conf
events {
worker_connections 1024;
}
http {
# Rate limiting zone
limit_req_zone $binary_remote_addr zone=api:10m rate=10r/s;
# Logging format
log_format main '$remote_addr - $remote_user [$time_local] '
'"$request" $status $body_bytes_sent '
'"$http_referer" "$http_user_agent" '
'$request_time';
upstream api_backend {
server api:3000;
}
upstream frontend_backend {
server frontend:80;
}
server {
listen 80;
server_name yourdomain.com;
# Redirect HTTP to HTTPS
return 301 https://$server_name$request_uri;
}
server {
listen 443 ssl http2;
server_name yourdomain.com;
# SSL certificates (use Let's Encrypt / certbot)
ssl_certificate /etc/nginx/ssl/fullchain.pem;
ssl_certificate_key /etc/nginx/ssl/privkey.pem;
ssl_protocols TLSv1.2 TLSv1.3;
ssl_ciphers HIGH:!aNULL:!MD5;
# Security headers
add_header X-Frame-Options DENY;
add_header X-Content-Type-Options nosniff;
add_header X-XSS-Protection "1; mode=block";
# Gzip compression
gzip on;
gzip_types text/plain text/css application/json application/javascript;
gzip_min_length 1000;
# API routes
location /api/ {
limit_req zone=api burst=20 nodelay;
proxy_pass http://api_backend;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
}
# Short code redirects (single path segment, not starting with api/health/metrics)
location ~ ^/[a-zA-Z0-9]{5,10}$ {
proxy_pass http://api_backend;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
}
# Health checks (no rate limiting)
location /health/ {
proxy_pass http://api_backend;
}
# Metrics (restrict to internal network)
location /metrics {
allow 10.0.0.0/8;
allow 172.16.0.0/12;
deny all;
proxy_pass http://api_backend;
}
# Frontend (everything else)
location / {
proxy_pass http://frontend_backend;
proxy_set_header Host $host;
}
}
}

Key decisions:
- SSL termination at Nginx — API and frontend don't need to handle HTTPS
- Rate limiting at Nginx — first line of defense before requests reach your app
- Metrics restricted to internal network — don't expose Prometheus data publicly
- Short code regex — routes like /aBc123 go to the API, everything else to the frontend
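You can sanity-check which paths that location regex captures by reproducing it in TypeScript:

```typescript
// The nginx short-code location regex, reproduced to sanity-check routing:
// exactly one path segment, alphanumeric, 5–10 characters.
const shortCodePath = /^\/[a-zA-Z0-9]{5,10}$/;

shortCodePath.test('/aBc123');   // true  → proxied to the API
shortCodePath.test('/api/urls'); // false → contains another slash, handled by /api/
shortCodePath.test('/ab');       // false → too short, falls through to the frontend
```

One caveat worth knowing: any 5–10 character alphanumeric path matches, so a frontend route like /login would also be proxied to the API. Keep frontend route names outside that pattern, or add explicit location blocks for them ahead of the regex.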
GitHub Actions CI/CD Pipeline
Automate everything: lint, test, build, and deploy on every push.
# .github/workflows/ci-cd.yml
name: CI/CD Pipeline
on:
push:
branches: [main]
pull_request:
branches: [main]
env:
REGISTRY: ghcr.io
IMAGE_NAME: ${{ github.repository }}
jobs:
# ── Lint & Type Check ──────────────────────────
lint:
name: Lint & Type Check
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- uses: actions/setup-node@v4
with:
node-version: 20
cache: 'npm'
- run: npm ci
- run: npm run lint
- run: npx tsc --noEmit
# ── Unit & Integration Tests ───────────────────
test:
name: Test
runs-on: ubuntu-latest
needs: lint
services:
postgres:
image: postgres:16-alpine
env:
POSTGRES_DB: url_shortener_test
POSTGRES_USER: postgres
POSTGRES_PASSWORD: test-password
ports:
- 5432:5432
options: >-
--health-cmd pg_isready
--health-interval 10s
--health-timeout 5s
--health-retries 5
redis:
image: redis:7-alpine
ports:
- 6379:6379
options: >-
--health-cmd "redis-cli ping"
--health-interval 10s
--health-timeout 5s
--health-retries 5
steps:
- uses: actions/checkout@v4
- uses: actions/setup-node@v4
with:
node-version: 20
cache: 'npm'
- run: npm ci
- run: npx prisma generate
- run: npx prisma migrate deploy
env:
DATABASE_URL: postgresql://postgres:test-password@localhost:5432/url_shortener_test
- run: npm test -- --coverage
env:
DATABASE_URL: postgresql://postgres:test-password@localhost:5432/url_shortener_test
REDIS_URL: redis://localhost:6379
JWT_SECRET: test-secret-at-least-32-characters-long
NODE_ENV: test
- name: Upload coverage
uses: actions/upload-artifact@v4
with:
name: coverage
path: coverage/
# ── Build Docker Image ─────────────────────────
build:
name: Build & Push Docker Image
runs-on: ubuntu-latest
needs: test
if: github.ref == 'refs/heads/main'
permissions:
contents: read
packages: write
steps:
- uses: actions/checkout@v4
- name: Log in to Container Registry
uses: docker/login-action@v3
with:
registry: ${{ env.REGISTRY }}
username: ${{ github.actor }}
password: ${{ secrets.GITHUB_TOKEN }}
- name: Build and push API image
uses: docker/build-push-action@v5
with:
context: .
push: true
tags: |
${{ env.REGISTRY }}/${{ env.IMAGE_NAME }}-api:latest
${{ env.REGISTRY }}/${{ env.IMAGE_NAME }}-api:${{ github.sha }}
- name: Build and push Frontend image
uses: docker/build-push-action@v5
with:
context: ./frontend
push: true
tags: |
${{ env.REGISTRY }}/${{ env.IMAGE_NAME }}-frontend:latest
${{ env.REGISTRY }}/${{ env.IMAGE_NAME }}-frontend:${{ github.sha }}
# ── Deploy ─────────────────────────────────────
deploy:
name: Deploy to Production
runs-on: ubuntu-latest
needs: build
if: github.ref == 'refs/heads/main'
environment: production
steps:
- name: Deploy via SSH
uses: appleboy/ssh-action@v1
with:
host: ${{ secrets.SERVER_HOST }}
username: ${{ secrets.SERVER_USER }}
key: ${{ secrets.SSH_PRIVATE_KEY }}
script: |
cd /opt/url-shortener
# Pull latest images
docker compose pull
# Run database migrations
docker compose run --rm api npx prisma migrate deploy
# Restart with zero downtime
docker compose up -d --remove-orphans
# Verify deployment
sleep 5
curl -f http://localhost:3000/health/ready || exit 1
echo "✅ Deployment successful"

CI/CD Flow
Required GitHub Secrets
| Secret | Description |
|---|---|
| SERVER_HOST | Production server IP or hostname |
| SERVER_USER | SSH username |
| SSH_PRIVATE_KEY | SSH private key for deployment |
The GITHUB_TOKEN is automatically available — no setup needed for GitHub Container Registry.
Production Checklist
Before going live, verify everything on this list:
Security
- All secrets in environment variables (not in code)
- Helmet security headers enabled
- CORS restricted to your domain
- Rate limiting on API endpoints
- Non-root user in Docker containers
- SSL/TLS configured (HTTPS only)
- /metrics endpoint restricted to internal network
Reliability
- Health check endpoints responding
- Graceful shutdown handling SIGTERM
- Database connection pooling configured
- Redis connection with retry logic
- Uncaught exception handlers
- Docker healthchecks for all services
Observability
- Structured JSON logging (no console.log)
- Request/response logging with correlation IDs
- Prometheus metrics exposed
- Grafana dashboard configured
- Key metrics: latency, error rate, cache hit rate
Performance
- Gzip compression enabled
- Multi-stage Docker builds (small images)
- Static assets cached with long expiry
- Database queries indexed
- Redis caching on hot paths
CI/CD
- Automated linting and type checking
- Test suite runs on every PR
- Docker images built and pushed automatically
- Zero-downtime deployment
- Rollback strategy documented
Deployment Commands Cheat Sheet
# ── Local Development ────────────────────────────
docker compose up -d postgres redis # Start dependencies only
npm run dev # Run API with hot reload
# ── Full Stack (Local) ───────────────────────────
docker compose up -d --build # Build and start everything
docker compose logs -f api # Follow API logs
docker compose exec api npx prisma studio # Open Prisma Studio
# ── Production ───────────────────────────────────
docker compose -f docker-compose.yml up -d --build
docker compose exec api npx prisma migrate deploy
docker compose ps # Check service status
# ── Monitoring ───────────────────────────────────
curl http://localhost:3000/health/ready # Check health
curl http://localhost:3000/metrics # View Prometheus metrics
# Grafana: http://localhost:3001
# Prometheus: http://localhost:9090
# ── Troubleshooting ──────────────────────────────
docker compose logs api --tail 100 # Last 100 API log lines
docker compose exec postgres psql -U postgres -d url_shortener # DB shell
docker compose exec redis redis-cli # Redis shell
docker compose restart api # Restart API only

Series Recap: What We Built
Over 10 posts, we built a production-ready URL shortener from an empty directory. Let's look back at what each phase taught us:
| Post | Phase | Key Concepts |
|---|---|---|
| #1 | Series Overview | Architecture, tech stack decisions, system design |
| #2 | Project Setup & API | Express + TypeScript, REST API design, validation |
| #3 | Database Design | PostgreSQL, Prisma ORM, migrations, indexing |
| #4 | Short Code Generation | Base62 encoding, collision handling, custom aliases |
| #5 | Redirect Engine & Analytics | 301/302 redirects, click tracking, aggregation |
| #6 | Caching with Redis | Cache-aside pattern, TTL, rate limiting |
| #7 | Authentication | JWT, bcrypt, middleware auth, API keys |
| #8 | Frontend with React | React, Vite, charts, QR codes, dashboard |
| #9 | Testing Strategy | Unit, integration, load testing with Vitest + k6 |
| #10 | Deployment (this post) | Docker, CI/CD, monitoring, production hardening |
The Full Tech Stack
Where to Go From Here
Congratulations — you've built a complete, production-deployed application. Here are some directions to explore next:
Feature Extensions
- Link expiration scheduler — background job to deactivate expired URLs
- Custom domains — let users bring their own short domains
- Link previews — show a preview page before redirecting
- Bulk URL import — CSV upload for batch shortening
- Webhooks — notify users when their links reach click milestones
Infrastructure Improvements
- Kubernetes deployment — orchestrate with Helm charts
- CDN integration — CloudFlare or AWS CloudFront for global edge caching
- Database replicas — read replicas for analytics queries
- Message queue — RabbitMQ or Kafka for async click processing
- Blue-green deployments — zero-downtime releases with traffic switching
Learning Paths
- System design interviews — you can now discuss URL shorteners with real implementation experience
- Microservices — split the monolith into separate services (redirect service, analytics service, auth service)
- Serverless — rewrite the redirect engine as a Cloudflare Worker for edge performance
- Open source — publish your shortener and accept contributions
The skills you've learned — API design, database modeling, caching, auth, testing, containerization, CI/CD, monitoring — are the same skills used to build systems at every scale. But we're not done yet — in the next phase, we'll add an admin panel with role-based access control and user management.
Series: Build a URL Shortener
Previous: Phase 8: Testing Strategy
Next: Phase 10: Admin Panel — RBAC & User Management