
Build a URL Shortener: SEO & Link Previews

Tags: typescript, nodejs, seo, opengraph, fullstack

When someone shares a YouTube link on Slack, the channel lights up — a thumbnail, a title, a description, maybe even the video duration. You see immediately what the link is about, and you're far more likely to click. Now compare that to sharing a shortened URL: https://short.ly/x7Kq9. Nothing. Just a cryptic string. No context, no preview, no trust.

That gap between "mysterious short link" and "rich, informative preview card" is what we're closing in this post. Link previews are the difference between a URL that gets clicked and one that gets ignored. They're also a trust signal — users are rightfully suspicious of short URLs that could lead anywhere.

This is the final post in our URL shortener series. We've built the API, the database, the cache, the auth system, the frontend, the tests, and the deployment pipeline. Now we're adding the finishing touch that makes our short links feel professional — rich link previews that work across every social platform.

Time commitment: 2-3 hours
Prerequisites: Phase 13: React Admin Dashboard UI

What we'll build in this post:
✅ OG meta tag scraping from destination URLs
✅ Link preview metadata storage and caching
✅ Preview page for short URLs (shows destination info before redirect)
✅ Dynamic OG image generation for short links
✅ Bot-aware redirect handling (serve meta tags to crawlers, redirect humans)
✅ Social platform compatibility (Facebook, Twitter/X, Slack, LinkedIn, Discord)


How Link Previews Work

When you paste a URL into Slack, Twitter, Facebook, or any modern messaging platform, something interesting happens behind the scenes. The platform doesn't just display the raw URL — it sends a bot (crawler) to fetch the page and extract metadata from specific HTML <meta> tags.

These tags follow the Open Graph protocol, originally created by Facebook, and now adopted by virtually every platform that renders link previews.

Here are the tags platforms look for:

<!-- Open Graph (Facebook, LinkedIn, Slack, Discord) -->
<meta property="og:title" content="How to Learn TypeScript in 2026" />
<meta property="og:description" content="A practical guide to mastering TypeScript from scratch." />
<meta property="og:image" content="https://example.com/images/typescript-guide.png" />
<meta property="og:url" content="https://example.com/typescript-guide" />
<meta property="og:type" content="website" />
<meta property="og:site_name" content="Dev Blog" />
 
<!-- Twitter/X Cards -->
<meta name="twitter:card" content="summary_large_image" />
<meta name="twitter:title" content="How to Learn TypeScript in 2026" />
<meta name="twitter:description" content="A practical guide to mastering TypeScript from scratch." />
<meta name="twitter:image" content="https://example.com/images/typescript-guide.png" />

The flow: the platform's crawler requests your page, parses these meta tags, and renders a preview card from the extracted title, description, and image. Without OG tags, platforms either show nothing or display the raw URL. With them, your short links become rich, clickable preview cards.


The Bot vs Human Problem

Here's the core challenge: when a browser visits https://short.ly/x7Kq9, we want to redirect them to the destination. But when a Slack bot visits the same URL, we want to serve an HTML page with OG meta tags so the platform can render a preview.

We need to handle the same URL differently depending on who's visiting: bots get an HTML page with meta tags, humans get a 302 redirect.

Detecting Social Media Bots

Every social platform identifies its crawler with a specific User-Agent string. Here's how we detect them:

// src/utils/bot-detection.ts
 
const BOT_USER_AGENTS = [
  'facebookexternalhit',
  'Twitterbot',
  'Slackbot',
  'LinkedInBot',
  'Discordbot',
  'WhatsApp',
  'TelegramBot',
  'Googlebot',
  'bingbot',
  'Pinterestbot',
  'redditbot',
  'Applebot',
  'Embedly',
];
 
export function isSocialBot(userAgent: string): boolean {
  if (!userAgent) return false;
  const ua = userAgent.toLowerCase();
  return BOT_USER_AGENTS.some(bot => ua.includes(bot.toLowerCase()));
}
 
export function identifyBot(userAgent: string): string | null {
  if (!userAgent) return null;
  const ua = userAgent.toLowerCase();
  const match = BOT_USER_AGENTS.find(bot => ua.includes(bot.toLowerCase()));
  return match || null;
}

Why include Googlebot? Search engines also use OG tags and structured metadata for rich search results. Serving them proper meta tags improves your short links' SEO.
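A quick sanity check of the detector against representative User-Agent strings (the detector is duplicated inline with a shortened bot list so this snippet runs standalone):

```typescript
// Inline copy of the detector so this snippet runs standalone.
const BOTS = ['facebookexternalhit', 'Twitterbot', 'Slackbot', 'Discordbot'];

function isSocialBot(userAgent: string): boolean {
  if (!userAgent) return false;
  const ua = userAgent.toLowerCase();
  return BOTS.some(bot => ua.includes(bot.toLowerCase()));
}

// Representative (abbreviated) User-Agent strings:
console.log(isSocialBot('facebookexternalhit/1.1 (+http://www.facebook.com/externalhit_uatext.php)')); // true
console.log(isSocialBot('Slackbot-LinkExpanding 1.0 (+https://api.slack.com/robots)'));                // true
console.log(isSocialBot('Mozilla/5.0 (Windows NT 10.0; Win64; x64) Chrome/120.0'));                    // false
```

The substring match is intentionally loose: Slack, for example, sends `Slackbot-LinkExpanding`, which still contains `Slackbot`.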


Storing Link Metadata

We need to store scraped metadata from destination URLs. Add a new model to your Prisma schema:

// prisma/schema.prisma
 
model UrlMetadata {
  id            String   @id @default(uuid())
  urlId         String   @unique
  url           Url      @relation(fields: [urlId], references: [id], onDelete: Cascade)
  ogTitle       String?
  ogDescription String?
  ogImage       String?
  ogSiteName    String?
  favicon       String?
  scrapedAt     DateTime @default(now())
  expiresAt     DateTime // Refresh metadata periodically
 
  @@index([urlId])
  @@index([expiresAt])
}
 
// Update the existing Url model to add the relation
model Url {
  id          String       @id @default(uuid())
  shortCode   String       @unique
  originalUrl String
  // ... existing fields ...
  metadata    UrlMetadata?
 
  // ... existing relations ...
}

Run the migration:

npx prisma migrate dev --name add-url-metadata

Key design decisions:

  • @unique on urlId — one metadata record per URL (1:1 relationship)
  • expiresAt index — efficiently query for stale metadata that needs refreshing
  • onDelete: Cascade — when a URL is deleted, its metadata is cleaned up automatically
  • All metadata fields are optional — scraping can fail or return partial data

OG Meta Scraper Service

Now let's build the service that fetches a destination URL and extracts its OG meta tags. We'll use cheerio to parse the HTML:

npm install cheerio

Recent versions of cheerio ship their own TypeScript types; only older releases need a separate npm install -D @types/cheerio.
// src/services/metadata.service.ts
 
import * as cheerio from 'cheerio';
import { logger } from '../utils/logger';
 
export interface LinkMetadata {
  ogTitle: string | null;
  ogDescription: string | null;
  ogImage: string | null;
  ogSiteName: string | null;
  favicon: string | null;
}
 
export async function scrapeMetadata(url: string): Promise<LinkMetadata> {
  const controller = new AbortController();
  const timeout = setTimeout(() => controller.abort(), 5000);
 
  try {
    const response = await fetch(url, {
      signal: controller.signal,
      headers: {
        'User-Agent': 'ShortLinkBot/1.0 (metadata preview)',
        'Accept': 'text/html',
        'Accept-Language': 'en-US,en;q=0.9',
      },
      redirect: 'follow',
    });
 
    if (!response.ok) {
      logger.warn({ url, status: response.status }, 'Failed to fetch URL for scraping');
      return emptyMetadata();
    }
 
    const contentType = response.headers.get('content-type') || '';
    if (!contentType.includes('text/html')) {
      logger.info({ url, contentType }, 'Non-HTML content, skipping scrape');
      return emptyMetadata();
    }
 
    // Only read first 100KB to avoid memory issues with large pages
    const reader = response.body?.getReader();
    if (!reader) return emptyMetadata();
 
    let html = '';
    const decoder = new TextDecoder();
    const MAX_BYTES = 100 * 1024; // 100KB
    let totalBytes = 0;
 
    while (totalBytes < MAX_BYTES) {
      const { done, value } = await reader.read();
      if (done) break;
      html += decoder.decode(value, { stream: true });
      totalBytes += value.length;
    }
    reader.cancel();
 
    const $ = cheerio.load(html);
 
    return {
      ogTitle:
        $('meta[property="og:title"]').attr('content') ||
        $('meta[name="twitter:title"]').attr('content') ||
        $('title').text().trim() ||
        null,
      ogDescription:
        $('meta[property="og:description"]').attr('content') ||
        $('meta[name="twitter:description"]').attr('content') ||
        $('meta[name="description"]').attr('content') ||
        null,
      ogImage:
        $('meta[property="og:image"]').attr('content') ||
        $('meta[name="twitter:image"]').attr('content') ||
        null,
      ogSiteName:
        $('meta[property="og:site_name"]').attr('content') ||
        null,
      favicon:
        $('link[rel="icon"]').attr('href') ||
        $('link[rel="shortcut icon"]').attr('href') ||
        $('link[rel="apple-touch-icon"]').attr('href') ||
        null,
    };
  } catch (err) {
    if ((err as Error).name === 'AbortError') {
      logger.warn({ url }, 'Metadata scrape timed out after 5s');
    } else {
      logger.warn({ url, err }, 'Metadata scrape failed');
    }
    return emptyMetadata();
  } finally {
    clearTimeout(timeout);
  }
}
 
function emptyMetadata(): LinkMetadata {
  return {
    ogTitle: null,
    ogDescription: null,
    ogImage: null,
    ogSiteName: null,
    favicon: null,
  };
}

Resolving Relative URLs

OG images and favicons are often specified as relative paths. We need to resolve them to absolute URLs:

// src/utils/url-helpers.ts
 
export function resolveUrl(baseUrl: string, path: string | null): string | null {
  if (!path) return null;
 
  // Already absolute
  if (path.startsWith('http://') || path.startsWith('https://')) {
    return path;
  }
 
  // Protocol-relative
  if (path.startsWith('//')) {
    return `https:${path}`;
  }
 
  try {
    // Resolve against the full base URL, not just its origin, so relative
    // paths keep their directory context (e.g. "img/a.png" on "/blog/post/").
    return new URL(path, baseUrl).toString();
  } catch {
    return null;
  }
}
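Node's WHATWG URL constructor does the actual resolution. Note that resolving against the full base URL (rather than just its origin) preserves directory context for relative paths:

```typescript
// Absolute path: resolved against the origin.
console.log(new URL('/img/a.png', 'https://example.com/blog/post').toString());
// → https://example.com/img/a.png

// Relative path: resolved against the base URL's directory.
console.log(new URL('img/a.png', 'https://example.com/blog/post').toString());
// → https://example.com/blog/img/a.png

// Protocol-relative: inherits the base's scheme.
console.log(new URL('//cdn.example.com/a.png', 'https://example.com/x').toString());
// → https://cdn.example.com/a.png
```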

Update the scraper to resolve URLs:

// After scraping, resolve relative URLs
const metadata = await scrapeRawMetadata(url);
 
return {
  ...metadata,
  ogImage: resolveUrl(url, metadata.ogImage),
  favicon: resolveUrl(url, metadata.favicon),
};

SSRF Prevention

Before we scrape any URL, we must prevent Server-Side Request Forgery (SSRF) attacks. A malicious user could submit http://169.254.169.254/latest/meta-data/ (AWS metadata endpoint) or http://localhost:3000/admin as their destination URL, and our scraper would dutifully fetch it.

// src/utils/ssrf-prevention.ts
 
import dns from 'dns/promises';
import { logger } from './logger';
 
const PRIVATE_IP_RANGES = [
  /^127\./,                    // Loopback
  /^10\./,                     // Class A private
  /^172\.(1[6-9]|2\d|3[01])\./, // Class B private
  /^192\.168\./,               // Class C private
  /^169\.254\./,               // Link-local
  /^0\./,                      // Current network
  /^fc00:/i,                   // IPv6 unique local
  /^fe80:/i,                   // IPv6 link-local
  /^::1$/,                     // IPv6 loopback
];
 
export function isPrivateIP(ip: string): boolean {
  return PRIVATE_IP_RANGES.some(range => range.test(ip));
}
 
const BLOCKED_HOSTNAMES = [
  'localhost',
  'metadata.google.internal',
  'metadata.google',
];
 
export async function validateScrapeTarget(url: string): Promise<boolean> {
  try {
    const { hostname } = new URL(url);
 
    // Block known dangerous hostnames
    if (BLOCKED_HOSTNAMES.includes(hostname.toLowerCase())) {
      logger.warn({ url }, 'Blocked scrape attempt — dangerous hostname');
      return false;
    }
 
    // Block IP literals used directly in URLs (IPv4 dotted, or bracketed IPv6
    // like http://[::1]/ — DNS resolution below won't catch these)
    const bareHost = hostname.replace(/^\[|\]$/g, '');
    if ((/^\d+\.\d+\.\d+\.\d+$/.test(bareHost) || bareHost.includes(':')) && isPrivateIP(bareHost)) {
      logger.warn({ url }, 'Blocked scrape attempt — private IP in URL');
      return false;
    }
 
    // Resolve hostname and check all IPs
    const addresses = await dns.resolve4(hostname).catch(() => []);
    const addresses6 = await dns.resolve6(hostname).catch(() => []);
    const allAddresses = [...addresses, ...addresses6];
 
    if (allAddresses.some(ip => isPrivateIP(ip))) {
      logger.warn({ url, addresses: allAddresses }, 'Blocked SSRF attempt — resolves to private IP');
      return false;
    }
 
    return true;
  } catch (err) {
    logger.warn({ url, err }, 'Failed to validate scrape target');
    return false;
  }
}
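The IPv4 patterns are easy to get subtly wrong (the 172.16–31 block especially), so it's worth checking them against known cases. The ranges are repeated inline so this check runs standalone:

```typescript
// Inline copy of the IPv4 private-range patterns so this snippet runs standalone.
const PRIVATE_IP_RANGES = [
  /^127\./,                     // Loopback
  /^10\./,                      // Class A private
  /^172\.(1[6-9]|2\d|3[01])\./, // Class B private (172.16.0.0 – 172.31.255.255)
  /^192\.168\./,                // Class C private
  /^169\.254\./,                // Link-local
];

const isPrivateIP = (ip: string) => PRIVATE_IP_RANGES.some(r => r.test(ip));

console.log(isPrivateIP('169.254.169.254')); // true  — cloud metadata endpoint
console.log(isPrivateIP('172.16.0.1'));      // true  — inside the Class B block
console.log(isPrivateIP('172.32.0.1'));      // false — just outside the 172.16–31 block
console.log(isPrivateIP('8.8.8.8'));         // false — public
```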

Now wrap the scraper with SSRF protection:

// src/services/metadata.service.ts
 
export async function safeScrapeMetadata(url: string): Promise<LinkMetadata> {
  const isSafe = await validateScrapeTarget(url);
  if (!isSafe) {
    return emptyMetadata();
  }
  return scrapeMetadata(url);
}

Scraping on URL Creation

When a user shortens a URL, we want to scrape metadata in the background — fire-and-forget. This keeps the API response fast while populating preview data asynchronously.

// src/services/metadata.service.ts
 
import { prisma } from '../db';
import { redis } from '../cache';
 
const METADATA_TTL = 24 * 60 * 60; // 24 hours in seconds
const METADATA_REFRESH_HOURS = 24;
 
export async function scrapeAndStoreMetadata(
  urlId: string,
  originalUrl: string
): Promise<void> {
  const metadata = await safeScrapeMetadata(originalUrl);
 
  // Don't store if we got nothing useful
  if (!metadata.ogTitle && !metadata.ogDescription && !metadata.ogImage) {
    logger.info({ urlId }, 'No useful metadata found, skipping storage');
    return;
  }
 
  const expiresAt = new Date();
  expiresAt.setHours(expiresAt.getHours() + METADATA_REFRESH_HOURS);
 
  // Store in database
  await prisma.urlMetadata.upsert({
    where: { urlId },
    create: {
      urlId,
      ogTitle: metadata.ogTitle,
      ogDescription: metadata.ogDescription,
      ogImage: metadata.ogImage,
      ogSiteName: metadata.ogSiteName,
      favicon: metadata.favicon,
      expiresAt,
    },
    update: {
      ogTitle: metadata.ogTitle,
      ogDescription: metadata.ogDescription,
      ogImage: metadata.ogImage,
      ogSiteName: metadata.ogSiteName,
      favicon: metadata.favicon,
      scrapedAt: new Date(),
      expiresAt,
    },
  });
 
  // Cache in Redis
  const cacheKey = `metadata:${urlId}`;
  await redis.setex(cacheKey, METADATA_TTL, JSON.stringify(metadata));
 
  logger.info({ urlId, title: metadata.ogTitle }, 'Metadata scraped and stored');
}

Now integrate it into the URL creation flow:

// src/controllers/url.controller.ts
 
export async function shortenUrl(req: Request, res: Response) {
  const { url: originalUrl, customAlias, expiresAt } = req.body;
 
  // ... validation, short code generation, database insert ...
 
  const newUrl = await prisma.url.create({
    data: {
      shortCode,
      originalUrl,
      userId: req.user?.id,
      expiresAt: expiresAt ? new Date(expiresAt) : null,
    },
  });
 
  // Fire-and-forget: scrape metadata in the background
  scrapeAndStoreMetadata(newUrl.id, originalUrl).catch(err => {
    logger.warn({ err, urlId: newUrl.id }, 'Background metadata scrape failed');
  });
 
  // Return immediately — don't wait for scraping
  return res.status(201).json({
    shortCode: newUrl.shortCode,
    shortUrl: `${BASE_URL}/${newUrl.shortCode}`,
    originalUrl: newUrl.originalUrl,
    expiresAt: newUrl.expiresAt,
  });
}

The key pattern here is fire-and-forget. We call scrapeAndStoreMetadata() but don't await it. The API response returns immediately, and scraping happens in the background. If it fails, we log a warning but the URL still works — it just won't have rich previews.
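The pattern in isolation looks like this (a minimal sketch; backgroundTask stands in for scrapeAndStoreMetadata):

```typescript
// Stand-in for the background scrape; in the real service this hits the network.
async function backgroundTask(fail: boolean): Promise<string> {
  if (fail) throw new Error('scrape failed');
  return 'done';
}

// Fire-and-forget: no await, but always attach .catch, because an
// unhandled promise rejection can crash a Node process.
backgroundTask(true).catch(err => {
  console.log(`logged, not thrown: ${(err as Error).message}`);
});

console.log('API response already sent'); // runs before the task settles
```

The `.catch` is the load-bearing part: without it, a single failed scrape could take down the server instead of just losing one preview.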


Metadata Caching Strategy

We need a fast lookup path for metadata since it's called on every bot visit. The caching layer follows the same cache-aside pattern we used for URL lookups:

// src/services/metadata.service.ts
 
export async function getMetadata(urlId: string): Promise<LinkMetadata | null> {
  // 1. Check Redis cache first
  const cacheKey = `metadata:${urlId}`;
  const cached = await redis.get(cacheKey);
 
  if (cached) {
    return JSON.parse(cached) as LinkMetadata;
  }
 
  // 2. Fall back to database
  const metadata = await prisma.urlMetadata.findUnique({
    where: { urlId },
  });
 
  if (!metadata) {
    return null;
  }
 
  // 3. Check if metadata needs refreshing
  if (new Date() > metadata.expiresAt) {
    // Trigger background refresh, but return stale data for now
    const url = await prisma.url.findUnique({
      where: { id: urlId },
      select: { originalUrl: true },
    });
 
    if (url) {
      scrapeAndStoreMetadata(urlId, url.originalUrl).catch(err => {
        logger.warn({ err, urlId }, 'Background metadata refresh failed');
      });
    }
  }
 
  // 4. Populate cache from database result
  const linkMetadata: LinkMetadata = {
    ogTitle: metadata.ogTitle,
    ogDescription: metadata.ogDescription,
    ogImage: metadata.ogImage,
    ogSiteName: metadata.ogSiteName,
    favicon: metadata.favicon,
  };
 
  await redis.setex(cacheKey, METADATA_TTL, JSON.stringify(linkMetadata));
 
  return linkMetadata;
}

The caching flow: check Redis, fall back to Postgres, trigger a background refresh if the record is stale, then repopulate the cache.

Key design choices:

  • Stale-while-revalidate — return stale metadata immediately while refreshing in the background
  • 24-hour TTL — metadata refreshes daily to catch changes on destination pages
  • Graceful degradation — if metadata is missing, we use fallback values (the raw URL)
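The stale-while-revalidate shape can be sketched in a few lines, with a Map standing in for Redis/Postgres (names here are illustrative, not the series' code):

```typescript
interface Entry { value: string; expiresAt: number }

const store = new Map<string, Entry>();

// Stand-in for the background scrape; in the real service this hits the network.
async function refresh(key: string): Promise<void> {
  store.set(key, { value: `fresh:${key}`, expiresAt: Date.now() + 60_000 });
}

function get(key: string): string | null {
  const entry = store.get(key);
  if (!entry) return null;
  if (Date.now() > entry.expiresAt) {
    // Stale: kick off a refresh, but serve the stale value immediately.
    refresh(key).catch(() => {});
  }
  return entry.value;
}

store.set('abc', { value: 'stale:abc', expiresAt: Date.now() - 1 });
console.log(get('abc')); // "stale:abc" — served immediately while the refresh runs
```

The caller never waits on the network: the worst case is one request serving day-old metadata, which is fine for link previews.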

Updated Redirect Handler — Bot Detection

Now let's update the redirect route to handle bots differently from humans:

// src/routes/redirect.ts
 
import { Request, Response } from 'express';
import { isSocialBot, identifyBot } from '../utils/bot-detection';
import { getMetadata } from '../services/metadata.service';
import { renderPreviewHtml } from '../templates/preview-html';
import { renderPreviewPage } from '../templates/preview-page';
import { recordClick } from '../services/analytics.service';
import { resolveShortCode } from '../services/url.service';
import { logger } from '../utils/logger';
 
export async function handleRedirect(req: Request, res: Response) {
  const rawCode = req.params.code;

  // A trailing "+" (e.g. short.ly/abc123+) requests the preview page.
  // Strip it before resolving, otherwise the lookup for "abc123+" would 404.
  const wantsPreview = rawCode.endsWith('+') || req.query.preview === 'true';
  const code = rawCode.endsWith('+') ? rawCode.slice(0, -1) : rawCode;

  const url = await resolveShortCode(code);

  if (!url) {
    return res.status(404).json({ error: 'Short URL not found' });
  }

  if (!url.isActive) {
    return res.status(410).json({ error: 'This short URL has been deactivated' });
  }

  if (url.expiresAt && new Date() > url.expiresAt) {
    return res.status(410).json({ error: 'This short URL has expired' });
  }

  const userAgent = req.headers['user-agent'] || '';

  // Path 1: Social media bot — serve OG tags
  if (isSocialBot(userAgent)) {
    const botName = identifyBot(userAgent);
    logger.info({ code, botName }, 'Serving OG preview to bot');

    const metadata = await getMetadata(url.id);
    return res.send(renderPreviewHtml(url, metadata));
  }

  // Path 2: Human requesting preview page
  if (wantsPreview) {
    const metadata = await getMetadata(url.id);
    return res.send(renderPreviewPage(url, metadata));
  }
 
  // Path 3: Normal human redirect
  recordClick(url.id, req).catch(err => {
    logger.warn({ err, urlId: url.id }, 'Failed to record click');
  });
 
  return res.redirect(302, url.originalUrl);
}

Notice we support three modes:

  1. Bot visits — detected by User-Agent, served HTML with OG meta tags
  2. Preview mode — human appends ?preview=true or + to see destination info before redirecting
  3. Normal redirect — the standard 302 redirect for human visitors
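The three-way branch is easy to unit-test if extracted into a pure function (a sketch; classifyVisit and looksLikeBot are illustrative names, not part of the series' code):

```typescript
type Visit = 'bot' | 'preview' | 'redirect';

// Simplified stand-in for isSocialBot, inlined so the snippet runs standalone.
const looksLikeBot = (ua: string) =>
  ['facebookexternalhit', 'twitterbot', 'slackbot', 'discordbot']
    .some(bot => ua.toLowerCase().includes(bot));

function classifyVisit(code: string, userAgent: string, previewParam?: string): Visit {
  if (looksLikeBot(userAgent)) return 'bot';
  if (previewParam === 'true' || code.endsWith('+')) return 'preview';
  return 'redirect';
}

console.log(classifyVisit('x7Kq9', 'Twitterbot/1.0'));  // "bot"
console.log(classifyVisit('x7Kq9+', 'Mozilla/5.0'));    // "preview"
console.log(classifyVisit('x7Kq9', 'Mozilla/5.0'));     // "redirect"
```

Note the ordering: bot detection wins even if the bot fetches a `+` URL, since crawlers should always get meta tags.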

Preview HTML Template (For Bots)

This is the minimal HTML page served to social media crawlers. It contains OG meta tags and a <meta http-equiv="refresh"> fallback in case a human somehow ends up here:

// src/templates/preview-html.ts
 
import { escapeHtml } from '../utils/html';
 
const BASE_URL = process.env.BASE_URL || 'https://short.ly';
 
interface UrlRecord {
  shortCode: string;
  originalUrl: string;
}
 
interface LinkMetadata {
  ogTitle: string | null;
  ogDescription: string | null;
  ogImage: string | null;
  ogSiteName: string | null;
  favicon: string | null;
}
 
export function renderPreviewHtml(
  url: UrlRecord,
  metadata: LinkMetadata | null
): string {
  const title = metadata?.ogTitle || extractDomain(url.originalUrl);
  const description =
    metadata?.ogDescription || `Shortened link to ${extractDomain(url.originalUrl)}`;
  const image = metadata?.ogImage || `${BASE_URL}/api/og/${url.shortCode}`;
  const siteName = metadata?.ogSiteName || extractDomain(url.originalUrl);
  const shortUrl = `${BASE_URL}/${url.shortCode}`;
 
  return `<!DOCTYPE html>
<html lang="en">
<head>
  <meta charset="utf-8" />
  <meta name="viewport" content="width=device-width, initial-scale=1" />
 
  <!-- Open Graph -->
  <meta property="og:title" content="${escapeHtml(title)}" />
  <meta property="og:description" content="${escapeHtml(description)}" />
  <meta property="og:image" content="${escapeHtml(image)}" />
  <meta property="og:url" content="${escapeHtml(shortUrl)}" />
  <meta property="og:type" content="website" />
  <meta property="og:site_name" content="${escapeHtml(siteName)}" />
 
  <!-- Twitter Card -->
  <meta name="twitter:card" content="summary_large_image" />
  <meta name="twitter:title" content="${escapeHtml(title)}" />
  <meta name="twitter:description" content="${escapeHtml(description)}" />
  <meta name="twitter:image" content="${escapeHtml(image)}" />
 
  <!-- Fallback redirect for humans -->
  <meta http-equiv="refresh" content="0;url=${escapeHtml(url.originalUrl)}" />
 
  <title>${escapeHtml(title)}</title>
</head>
<body>
  <p>Redirecting to <a href="${escapeHtml(url.originalUrl)}">${escapeHtml(url.originalUrl)}</a>...</p>
</body>
</html>`;
}
 
function extractDomain(url: string): string {
  try {
    return new URL(url).hostname;
  } catch {
    return url;
  }
}

HTML Escaping for XSS Prevention

This is critical — the metadata we scraped from the destination URL is untrusted input. A malicious page could set its OG title to "><script>alert('xss')</script>, and if we inject that raw into our HTML, we've got an XSS vulnerability.

// src/utils/html.ts
 
const HTML_ESCAPE_MAP: Record<string, string> = {
  '&': '&amp;',
  '<': '&lt;',
  '>': '&gt;',
  '"': '&quot;',
  "'": '&#x27;',
};
 
export function escapeHtml(str: string): string {
  return str.replace(/[&<>"']/g, char => HTML_ESCAPE_MAP[char] || char);
}
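Worth verifying against the exact payload described above (the escaper is repeated inline so the check runs standalone):

```typescript
// Inline copy of the escaper so this snippet runs standalone.
const MAP: Record<string, string> = {
  '&': '&amp;', '<': '&lt;', '>': '&gt;', '"': '&quot;', "'": '&#x27;',
};
const escapeHtml = (s: string) => s.replace(/[&<>"']/g, c => MAP[c] || c);

const payload = `"><script>alert('xss')</script>`;
console.log(escapeHtml(payload));
// &quot;&gt;&lt;script&gt;alert(&#x27;xss&#x27;)&lt;/script&gt;
```

After escaping, the payload can no longer break out of the attribute value or open a script tag; the browser renders it as inert text.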

Dynamic OG Image Generation

When a destination URL doesn't have its own OG image, we generate one dynamically. This ensures every short link has a rich preview, even if the destination page is poorly configured.

We'll use satori (by Vercel) to convert a JSX-like template to SVG, then sharp to convert it to PNG:

npm install satori sharp

Recent versions of sharp ship their own TypeScript types (as does satori); only older sharp releases need a separate npm install -D @types/sharp.
// src/routes/og-image.ts
 
import { Request, Response } from 'express';
import satori from 'satori';
import sharp from 'sharp';
import fs from 'fs';
import path from 'path';
import { resolveShortCode } from '../services/url.service';
import { getMetadata } from '../services/metadata.service';
import { redis } from '../cache';
import { logger } from '../utils/logger';
 
const BASE_URL = process.env.BASE_URL || 'https://short.ly';
 
// Load font once at startup
const interFont = fs.readFileSync(
  path.join(__dirname, '../assets/fonts/Inter-Bold.ttf')
);
const interRegular = fs.readFileSync(
  path.join(__dirname, '../assets/fonts/Inter-Regular.ttf')
);
 
export async function generateOgImage(req: Request, res: Response) {
  const { shortCode } = req.params;
 
  // Check cache first
  const cacheKey = `og-image:${shortCode}`;
  const cached = await redis.getBuffer(cacheKey);
 
  if (cached) {
    res.setHeader('Content-Type', 'image/png');
    res.setHeader('Cache-Control', 'public, max-age=86400');
    return res.send(cached);
  }
 
  const url = await resolveShortCode(shortCode);
  if (!url) {
    return res.status(404).json({ error: 'Short URL not found' });
  }
 
  const metadata = await getMetadata(url.id);
 
  const title = metadata?.ogTitle || 'Shortened Link';
  const description =
    metadata?.ogDescription || url.originalUrl;
  const domain = extractDomain(url.originalUrl);
 
  try {
    const svg = await satori(
      {
        type: 'div',
        props: {
          style: {
            width: '1200px',
            height: '630px',
            display: 'flex',
            flexDirection: 'column',
            justifyContent: 'space-between',
            padding: '60px',
            background: 'linear-gradient(135deg, #0f172a 0%, #1e3a5f 50%, #0f172a 100%)',
            color: '#ffffff',
            fontFamily: 'Inter',
          },
          children: [
            // Top: short URL badge
            {
              type: 'div',
              props: {
                style: {
                  display: 'flex',
                  alignItems: 'center',
                  gap: '12px',
                },
                children: [
                  {
                    type: 'div',
                    props: {
                      style: {
                        background: 'rgba(59, 130, 246, 0.3)',
                        borderRadius: '9999px',
                        padding: '8px 20px',
                        fontSize: '20px',
                        color: '#93c5fd',
                      },
                      children: `${BASE_URL}/${shortCode}`,
                    },
                  },
                ],
              },
            },
            // Middle: title and description
            {
              type: 'div',
              props: {
                style: {
                  display: 'flex',
                  flexDirection: 'column',
                  gap: '16px',
                },
                children: [
                  {
                    type: 'div',
                    props: {
                      style: {
                        fontSize: '48px',
                        fontWeight: 700,
                        lineHeight: 1.2,
                        maxHeight: '180px',
                        overflow: 'hidden',
                      },
                      children: truncate(title, 80),
                    },
                  },
                  {
                    type: 'div',
                    props: {
                      style: {
                        fontSize: '24px',
                        color: '#94a3b8',
                        lineHeight: 1.4,
                      },
                      children: truncate(description, 120),
                    },
                  },
                ],
              },
            },
            // Bottom: branding
            {
              type: 'div',
              props: {
                style: {
                  display: 'flex',
                  justifyContent: 'space-between',
                  alignItems: 'center',
                  fontSize: '20px',
                  color: '#64748b',
                },
                children: [
                  { type: 'span', props: { children: `Destination: ${domain}` } },
                  { type: 'span', props: { children: 'short.ly' } },
                ],
              },
            },
          ],
        },
      },
      {
        width: 1200,
        height: 630,
        fonts: [
          { name: 'Inter', data: interFont, weight: 700, style: 'normal' },
          { name: 'Inter', data: interRegular, weight: 400, style: 'normal' },
        ],
      }
    );
 
    // Note: sharp's `quality` option for PNG only applies with `palette: true`,
    // so the default lossless encoding is used here
    const png = await sharp(Buffer.from(svg)).png().toBuffer();
 
    // Cache for 24 hours
    await redis.setex(cacheKey, 86400, png);
 
    res.setHeader('Content-Type', 'image/png');
    res.setHeader('Cache-Control', 'public, max-age=86400, s-maxage=86400');
    return res.send(png);
  } catch (err) {
    logger.error({ err, shortCode }, 'Failed to generate OG image');
    return res.status(500).json({ error: 'Failed to generate image' });
  }
}
 
function truncate(str: string, maxLen: number): string {
  if (str.length <= maxLen) return str;
  return str.slice(0, maxLen - 3) + '...';
}
 
function extractDomain(url: string): string {
  try {
    return new URL(url).hostname;
  } catch {
    return url;
  }
}

Register the route:

// src/routes/index.ts
 
import { generateOgImage } from './og-image';
 
router.get('/api/og/:shortCode', generateOgImage);

Now every short link has a guaranteed OG image. If the destination has its own OG image, we use that. If not, we dynamically generate one with the title, description, and branding.


Preview Page for Humans

Sometimes users want to see where a short link leads before clicking. We support this with a preview page accessible at short.ly/abc123+ or short.ly/abc123?preview=true.

// src/templates/preview-page.ts
 
import { escapeHtml } from '../utils/html';
 
const BASE_URL = process.env.BASE_URL || 'https://short.ly';
 
interface UrlRecord {
  shortCode: string;
  originalUrl: string;
  clickCount: number;
  createdAt: Date;
}
 
interface LinkMetadata {
  ogTitle: string | null;
  ogDescription: string | null;
  ogImage: string | null;
  ogSiteName: string | null;
  favicon: string | null;
}
 
export function renderPreviewPage(
  url: UrlRecord,
  metadata: LinkMetadata | null
): string {
  const title = metadata?.ogTitle || extractDomain(url.originalUrl);
  const description = metadata?.ogDescription || 'No description available';
  const image = metadata?.ogImage;
  const favicon = metadata?.favicon;
  const domain = extractDomain(url.originalUrl);
 
  return `<!DOCTYPE html>
<html lang="en">
<head>
  <meta charset="utf-8" />
  <meta name="viewport" content="width=device-width, initial-scale=1" />
  <title>Link Preview — ${escapeHtml(title)}</title>
  <style>
    * { margin: 0; padding: 0; box-sizing: border-box; }
    body {
      font-family: -apple-system, BlinkMacSystemFont, 'Segoe UI', Inter, sans-serif;
      background: #0f172a;
      color: #e2e8f0;
      min-height: 100vh;
      display: flex;
      align-items: center;
      justify-content: center;
      padding: 20px;
    }
    .card {
      background: #1e293b;
      border: 1px solid #334155;
      border-radius: 16px;
      max-width: 600px;
      width: 100%;
      overflow: hidden;
    }
    .card-image {
      width: 100%;
      height: 300px;
      object-fit: cover;
      border-bottom: 1px solid #334155;
    }
    .card-body { padding: 32px; }
    .card-site {
      display: flex;
      align-items: center;
      gap: 8px;
      margin-bottom: 12px;
      font-size: 14px;
      color: #94a3b8;
    }
    .card-site img {
      width: 16px;
      height: 16px;
      border-radius: 4px;
    }
    .card-title {
      font-size: 24px;
      font-weight: 700;
      margin-bottom: 8px;
      line-height: 1.3;
    }
    .card-desc {
      font-size: 16px;
      color: #94a3b8;
      line-height: 1.5;
      margin-bottom: 24px;
    }
    .card-meta {
      display: flex;
      gap: 24px;
      margin-bottom: 24px;
      font-size: 14px;
      color: #64748b;
    }
    .destination {
      background: #0f172a;
      border: 1px solid #334155;
      border-radius: 8px;
      padding: 12px 16px;
      margin-bottom: 24px;
      font-size: 14px;
      word-break: break-all;
      color: #94a3b8;
    }
    .destination-label {
      font-size: 12px;
      text-transform: uppercase;
      color: #64748b;
      margin-bottom: 4px;
      letter-spacing: 0.05em;
    }
    .btn {
      display: block;
      width: 100%;
      padding: 14px;
      background: #3b82f6;
      color: white;
      border: none;
      border-radius: 8px;
      font-size: 16px;
      font-weight: 600;
      cursor: pointer;
      text-align: center;
      text-decoration: none;
      transition: background 0.2s;
    }
    .btn:hover { background: #2563eb; }
    .safety {
      text-align: center;
      margin-top: 16px;
      font-size: 13px;
      color: #475569;
    }
  </style>
</head>
<body>
  <div class="card">
    ${image ? `<img class="card-image" src="${escapeHtml(image)}" alt="${escapeHtml(title)}" />` : ''}
    <div class="card-body">
      <div class="card-site">
        ${favicon ? `<img src="${escapeHtml(favicon)}" alt="" />` : ''}
        <span>${escapeHtml(domain)}</span>
      </div>
      <h1 class="card-title">${escapeHtml(title)}</h1>
      <p class="card-desc">${escapeHtml(description)}</p>
      <div class="card-meta">
        <span>Clicks: ${url.clickCount.toLocaleString()}</span>
        <span>Created: ${url.createdAt.toLocaleDateString()}</span>
      </div>
      <div class="destination">
        <div class="destination-label">Destination URL</div>
        ${escapeHtml(url.originalUrl)}
      </div>
      <a href="${escapeHtml(url.originalUrl)}" class="btn">
        Continue to ${escapeHtml(domain)}
      </a>
      <p class="safety">
        This link was shortened with short.ly. Always verify the destination before clicking.
      </p>
    </div>
  </div>
</body>
</html>`;
}
 
function extractDomain(url: string): string {
  try {
    return new URL(url).hostname;
  } catch {
    return url;
  }
}

The preview page shows everything a user needs to decide whether to click:

  • OG image from the destination (if available)
  • Title and description scraped from meta tags
  • Full destination URL — so users can verify it's not malicious
  • Click count — social proof
  • "Continue to destination" button — explicit action

Social Platform Compatibility

Different platforms have different requirements for link previews. Here's what each platform looks for:

Platform   | Bot User-Agent         | Required Tags                               | Recommended Image Size
Facebook   | facebookexternalhit    | og:title, og:image, og:description          | 1200x630
Twitter/X  | Twitterbot             | twitter:card, twitter:title, twitter:image  | 1200x628
LinkedIn   | LinkedInBot            | og:title, og:image, og:description          | 1200x627
Slack      | Slackbot-LinkExpanding | og:title, og:description                    | 1200x630
Discord    | Discordbot             | og:title, og:description, og:image          | 1200x630
WhatsApp   | WhatsApp               | og:title, og:image                          | 300x200 min
Telegram   | TelegramBot            | og:title, og:description, og:image          | 1200x630

Platform-Specific Quirks

Facebook:

  • Caches aggressively — use Facebook Sharing Debugger to force refresh
  • Requires images to be at least 200x200px
  • Prefers 1.91:1 aspect ratio (1200x630)

Twitter/X:

  • Falls back to og:* tags if twitter:* tags are missing
  • twitter:card must be summary or summary_large_image
  • Images must be less than 5MB

Slack:

  • Unfurls links automatically in channels
  • Shows og:site_name as a subtle label above the preview
  • Respects og:image dimensions

LinkedIn:

  • Very aggressive caching — previews can take hours to update
  • Use the LinkedIn Post Inspector to debug and force a re-scrape
  • Requires og:image to be accessible without authentication

Since we serve both og:* and twitter:* tags, our implementation covers all platforms out of the box.


Background Metadata Refresh

Metadata goes stale — pages change their titles, update their images, or even go offline. We need a background job to refresh expired metadata:

// src/jobs/refresh-metadata.ts
 
import { prisma } from '../db';
import { scrapeAndStoreMetadata } from '../services/metadata.service';
import { logger } from '../utils/logger';
 
const BATCH_SIZE = 50;
const DELAY_BETWEEN_BATCHES_MS = 2000;
 
export async function refreshExpiredMetadata(): Promise<void> {
  const now = new Date();
 
  const expiredMetadata = await prisma.urlMetadata.findMany({
    where: {
      expiresAt: { lt: now },
      url: { isActive: true },
    },
    include: {
      url: { select: { id: true, originalUrl: true } },
    },
    take: BATCH_SIZE,
    orderBy: { expiresAt: 'asc' },
  });
 
  if (expiredMetadata.length === 0) {
    logger.info('No expired metadata to refresh');
    return;
  }
 
  logger.info({ count: expiredMetadata.length }, 'Refreshing expired metadata');
 
  let refreshed = 0;
  let failed = 0;
 
  for (const record of expiredMetadata) {
    try {
      await scrapeAndStoreMetadata(record.url.id, record.url.originalUrl);
      refreshed++;
    } catch (err) {
      logger.warn(
        { err, urlId: record.url.id },
        'Failed to refresh metadata'
      );
      failed++;
 
      // If scraping fails, push the expiry forward to avoid retrying too often
      await prisma.urlMetadata.update({
        where: { id: record.id },
        data: {
          expiresAt: new Date(Date.now() + 6 * 60 * 60 * 1000), // retry in 6 hours
        },
      });
    }
  }
 
  logger.info({ refreshed, failed }, 'Metadata refresh complete');
}

Schedule it with a cron job or a simple setInterval:

// src/jobs/scheduler.ts
 
import { refreshExpiredMetadata } from './refresh-metadata';
import { logger } from '../utils/logger';
 
const REFRESH_INTERVAL = 60 * 60 * 1000; // Every hour
 
export function startScheduler(): void {
  logger.info('Starting metadata refresh scheduler');
 
  setInterval(async () => {
    try {
      await refreshExpiredMetadata();
    } catch (err) {
      logger.error({ err }, 'Metadata refresh scheduler error');
    }
  }, REFRESH_INTERVAL);
}

Metadata Refresh API Endpoint

Admin users might want to manually trigger a metadata refresh for a specific URL:

// src/routes/admin.ts
 
router.post(
  '/api/admin/urls/:id/refresh-metadata',
  authenticate,
  requireRole('admin'),
  async (req: Request, res: Response) => {
    const { id } = req.params;
 
    const url = await prisma.url.findUnique({
      where: { id },
      select: { id: true, originalUrl: true },
    });
 
    if (!url) {
      return res.status(404).json({ error: 'URL not found' });
    }
 
    // Scrape and update metadata synchronously for admin
    const metadata = await safeScrapeMetadata(url.originalUrl);
 
    const expiresAt = new Date();
    expiresAt.setHours(expiresAt.getHours() + 24);
 
    const updated = await prisma.urlMetadata.upsert({
      where: { urlId: id },
      create: {
        urlId: id,
        ogTitle: metadata.ogTitle,
        ogDescription: metadata.ogDescription,
        ogImage: metadata.ogImage,
        ogSiteName: metadata.ogSiteName,
        favicon: metadata.favicon,
        expiresAt,
      },
      update: {
        ogTitle: metadata.ogTitle,
        ogDescription: metadata.ogDescription,
        ogImage: metadata.ogImage,
        ogSiteName: metadata.ogSiteName,
        favicon: metadata.favicon,
        scrapedAt: new Date(),
        expiresAt,
      },
    });
 
    // Invalidate cache
    await redis.del(`metadata:${id}`);
 
    return res.json({
      message: 'Metadata refreshed',
      metadata: updated,
    });
  }
);

Security Considerations

Link preview functionality introduces several security surfaces. Let's address each one:

1. XSS via Scraped Metadata

The destination page controls its OG tags. A malicious page could set:

<meta property="og:title" content='"><script>alert("xss")</script>' />

Mitigation: We escape all scraped values before injecting them into HTML (see escapeHtml() above). Never use innerHTML or template literals without escaping.
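The escapeHtml() helper was defined back in the redirect-handler section; if you're reading this section standalone, a minimal version looks like this (a trimmed restatement, not new behavior):

```typescript
// Minimal HTML-escaping helper. Order matters: escape '&' first so we
// don't double-escape the entities we produce.
function escapeHtml(input: string): string {
  return input
    .replace(/&/g, '&amp;')
    .replace(/</g, '&lt;')
    .replace(/>/g, '&gt;')
    .replace(/"/g, '&quot;')
    .replace(/'/g, '&#39;');
}
```

Run the malicious og:title above through it and the payload renders as inert text instead of executing.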

2. SSRF via Metadata Scraping

Covered earlier — our validateScrapeTarget() function blocks requests to internal networks and known metadata endpoints.

3. Resource Exhaustion

A destination page could be arbitrarily large, or could stream bytes indefinitely, exhausting our scraper's memory.

Mitigation: We limit the response body to 100KB and set a 5-second timeout:

// Already implemented in our scraper:
const MAX_BYTES = 100 * 1024; // 100KB limit
const timeout = setTimeout(() => controller.abort(), 5000); // 5s timeout
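Here's what applying that byte cap looks like while streaming a fetch response body. This is a simplified standalone sketch; the real scraper from earlier in the post combines it with the AbortController timeout:

```typescript
// Read at most maxBytes from a fetch Response body, then cancel the stream
// so we stop downloading. OG tags live in <head>, so 100KB is plenty.
async function readBodyCapped(res: Response, maxBytes = 100 * 1024): Promise<string> {
  if (!res.body) return '';
  const reader = res.body.getReader();
  const chunks: Uint8Array[] = [];
  let received = 0;
  for (;;) {
    const { done, value } = await reader.read();
    if (done) break;
    chunks.push(value);
    received += value.byteLength;
    if (received >= maxBytes) {
      await reader.cancel(); // stop downloading the rest of the page
      break;
    }
  }
  // Concatenate the chunks, trimming to the cap. (A multi-byte character
  // may be cut at the boundary, which is acceptable for scraping.)
  const buf = new Uint8Array(Math.min(received, maxBytes));
  let offset = 0;
  for (const chunk of chunks) {
    const room = buf.byteLength - offset;
    if (room <= 0) break;
    const slice = chunk.subarray(0, Math.min(chunk.byteLength, room));
    buf.set(slice, offset);
    offset += slice.byteLength;
  }
  return new TextDecoder().decode(buf);
}
```

Cancelling the reader is the important part: without it, the runtime keeps buffering the response even though you've stopped reading.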

4. Scraper Rate Limiting

Prevent abuse of the scraping functionality:

// src/middleware/scrape-rate-limit.ts
 
import rateLimit from 'express-rate-limit';
 
export const scrapeRateLimit = rateLimit({
  windowMs: 60 * 1000, // 1 minute
  max: 10, // 10 scrape requests per minute per IP
  message: { error: 'Too many scrape requests, please try again later' },
  // The default keyGenerator uses req.ip. Behind a load balancer, configure
  // Express's `trust proxy` setting so req.ip reflects the real client
  // address, rather than reading X-Forwarded-For yourself, which is
  // trivially spoofable and would let attackers rotate rate-limit keys.
});

5. Open Redirect Abuse

Our short URLs are, by definition, open redirects. Attackers might use them to disguise malicious URLs behind our trusted domain.

Mitigations:

  • The preview page (?preview=true) lets users verify the destination
  • Bot previews show the actual destination URL in the description
  • Admin moderation (from earlier in the series) can flag suspicious URLs
  • Consider maintaining a blocklist of known malicious domains
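That last point could be sketched like this, checked at URL-creation time. The domain list and function name here are hypothetical; in production you'd back it with a database table or a feed such as Google Safe Browsing:

```typescript
// Hypothetical blocklist check. The domains below are placeholders.
const BLOCKED_DOMAINS = new Set(['evil.example', 'phish.example']);

function isBlockedDomain(originalUrl: string): boolean {
  try {
    const hostname = new URL(originalUrl).hostname.toLowerCase();
    // Block exact matches and subdomains (e.g. login.evil.example),
    // but not lookalike suffixes (e.g. notevil.example).
    return [...BLOCKED_DOMAINS].some(
      (d) => hostname === d || hostname.endsWith(`.${d}`)
    );
  } catch {
    return true; // unparseable URLs are rejected outright
  }
}
```

Call it in the POST /api/shorten handler before creating the short code, and return a 400 for blocked destinations.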

Common Pitfalls

Here are mistakes you'll want to avoid:

1. Not Escaping HTML in OG Tag Values

If the destination page's title is My Page" /><script>alert(1)</script><meta content=", and you inject it directly into your HTML, you've created an XSS vulnerability. Always escape. Always.

2. Scraping Synchronously During URL Creation

// BAD - blocks the API response
const metadata = await scrapeMetadata(originalUrl);
await saveMetadata(urlId, metadata);
return res.json({ shortUrl });
 
// GOOD - fire and forget
scrapeAndStoreMetadata(urlId, originalUrl).catch(logError);
return res.json({ shortUrl });

Scraping takes 1-5 seconds. Your URL creation endpoint should return in under 100ms.

3. Not Handling Redirects During Scraping

Many URLs redirect (HTTP 301/302) before reaching the final page. Our scraper uses redirect: 'follow' to handle this, but you should be aware of redirect chains that could lead to timeout or SSRF.
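If you want hop-by-hop control, you can follow redirects manually and re-run the SSRF check on every hop. A sketch, where `validate` is a callback such as the validateScrapeTarget() function from the SSRF section:

```typescript
const MAX_REDIRECTS = 5;

// Resolve a Location header against the current URL (Location may be relative).
function resolveRedirect(currentUrl: string, location: string): string {
  return new URL(location, currentUrl).toString();
}

// Follow redirects one hop at a time, validating each intermediate URL.
// `validate` should throw for private IPs / blocked hosts.
async function fetchWithValidatedRedirects(
  startUrl: string,
  validate: (url: string) => void
): Promise<Response> {
  let url = startUrl;
  for (let hop = 0; hop <= MAX_REDIRECTS; hop++) {
    validate(url); // re-check SSRF rules on every hop, not just the first
    const res = await fetch(url, { redirect: 'manual' });
    const location = res.headers.get('location');
    if (res.status < 300 || res.status >= 400 || !location) {
      return res; // not a redirect, so this is the final response
    }
    url = resolveRedirect(url, location);
  }
  throw new Error(`Too many redirects (>${MAX_REDIRECTS}) for ${startUrl}`);
}
```

With `redirect: 'follow'` (our current approach), fetch handles all of this internally, but the first DNS/IP check is the only one you get.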

4. Caching Metadata Forever

If you never refresh metadata, your previews become stale. The destination page might change its title, update its image, or even go offline. Our 24-hour expiry with stale-while-revalidate handles this gracefully.

5. Ignoring Non-HTML Responses

Not every URL points to an HTML page. PDFs, images, and API endpoints don't have OG tags. Check the Content-Type header before attempting to parse:

const contentType = response.headers.get('content-type') || '';
if (!contentType.includes('text/html')) {
  return emptyMetadata(); // Skip non-HTML content
}

Testing

Unit Tests: Bot Detection

// src/__tests__/bot-detection.test.ts
 
import { describe, it, expect } from 'vitest';
import { isSocialBot, identifyBot } from '../utils/bot-detection';
 
describe('isSocialBot', () => {
  it('detects Facebook crawler', () => {
    expect(isSocialBot('facebookexternalhit/1.1')).toBe(true);
  });
 
  it('detects Twitterbot', () => {
    expect(isSocialBot('Twitterbot/1.0')).toBe(true);
  });
 
  it('detects Slackbot', () => {
    expect(
      isSocialBot('Slackbot-LinkExpanding 1.0 (+https://api.slack.com/robots)')
    ).toBe(true);
  });
 
  it('detects LinkedIn bot', () => {
    expect(isSocialBot('LinkedInBot/1.0')).toBe(true);
  });
 
  it('detects Discord bot', () => {
    expect(
      isSocialBot('Mozilla/5.0 (compatible; Discordbot/2.0)')
    ).toBe(true);
  });
 
  it('returns false for normal browsers', () => {
    expect(
      isSocialBot(
        'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36'
      )
    ).toBe(false);
  });
 
  it('returns false for empty user agent', () => {
    expect(isSocialBot('')).toBe(false);
  });
 
  it('is case-insensitive', () => {
    expect(isSocialBot('FACEBOOKEXTERNALHIT/1.1')).toBe(true);
  });
});
 
describe('identifyBot', () => {
  it('returns bot name when detected', () => {
    expect(identifyBot('facebookexternalhit/1.1')).toBe('facebookexternalhit');
  });
 
  it('returns null for normal browsers', () => {
    expect(identifyBot('Mozilla/5.0 (Windows NT 10.0)')).toBeNull();
  });
});

Unit Tests: Metadata Scraping

// src/__tests__/metadata.service.test.ts
 
import { describe, it, expect, vi, beforeEach } from 'vitest';
import { scrapeMetadata } from '../services/metadata.service';
 
// Mock fetch globally
const mockFetch = vi.fn();
global.fetch = mockFetch;
 
describe('scrapeMetadata', () => {
  beforeEach(() => {
    vi.clearAllMocks();
  });
 
  it('extracts OG tags from HTML', async () => {
    mockFetch.mockResolvedValue({
      ok: true,
      headers: new Headers({ 'content-type': 'text/html' }),
      body: createReadableStream(`
        <html>
        <head>
          <meta property="og:title" content="Test Page" />
          <meta property="og:description" content="A test page description" />
          <meta property="og:image" content="https://example.com/image.png" />
          <meta property="og:site_name" content="Example" />
          <link rel="icon" href="/favicon.ico" />
        </head>
        <body></body>
        </html>
      `),
    });
 
    const metadata = await scrapeMetadata('https://example.com/page');
 
    expect(metadata.ogTitle).toBe('Test Page');
    expect(metadata.ogDescription).toBe('A test page description');
    expect(metadata.ogImage).toBe('https://example.com/image.png');
    expect(metadata.ogSiteName).toBe('Example');
    expect(metadata.favicon).toBe('/favicon.ico');
  });
 
  it('falls back to <title> when og:title is missing', async () => {
    mockFetch.mockResolvedValue({
      ok: true,
      headers: new Headers({ 'content-type': 'text/html' }),
      body: createReadableStream(`
        <html>
        <head><title>Fallback Title</title></head>
        <body></body>
        </html>
      `),
    });
 
    const metadata = await scrapeMetadata('https://example.com');
    expect(metadata.ogTitle).toBe('Fallback Title');
  });
 
  it('returns empty metadata on timeout', async () => {
    mockFetch.mockImplementation(() =>
      new Promise((_, reject) => {
        setTimeout(() => reject(new DOMException('Aborted', 'AbortError')), 100);
      })
    );
 
    const metadata = await scrapeMetadata('https://slow-site.com');
    expect(metadata.ogTitle).toBeNull();
    expect(metadata.ogDescription).toBeNull();
  });
 
  it('returns empty metadata for non-HTML responses', async () => {
    mockFetch.mockResolvedValue({
      ok: true,
      headers: new Headers({ 'content-type': 'application/json' }),
      body: createReadableStream('{"key": "value"}'),
    });
 
    const metadata = await scrapeMetadata('https://api.example.com/data');
    expect(metadata.ogTitle).toBeNull();
  });
});
 
// Helper to create a ReadableStream from a string
function createReadableStream(text: string): ReadableStream {
  return new ReadableStream({
    start(controller) {
      controller.enqueue(new TextEncoder().encode(text));
      controller.close();
    },
  });
}

Unit Tests: SSRF Prevention

// src/__tests__/ssrf-prevention.test.ts
 
import { describe, it, expect } from 'vitest';
import { isPrivateIP } from '../utils/ssrf-prevention';
 
describe('isPrivateIP', () => {
  it('blocks loopback addresses', () => {
    expect(isPrivateIP('127.0.0.1')).toBe(true);
    expect(isPrivateIP('127.0.0.2')).toBe(true);
  });
 
  it('blocks Class A private', () => {
    expect(isPrivateIP('10.0.0.1')).toBe(true);
    expect(isPrivateIP('10.255.255.255')).toBe(true);
  });
 
  it('blocks Class B private', () => {
    expect(isPrivateIP('172.16.0.1')).toBe(true);
    expect(isPrivateIP('172.31.255.255')).toBe(true);
  });
 
  it('blocks Class C private', () => {
    expect(isPrivateIP('192.168.0.1')).toBe(true);
    expect(isPrivateIP('192.168.255.255')).toBe(true);
  });
 
  it('blocks link-local', () => {
    expect(isPrivateIP('169.254.169.254')).toBe(true);
  });
 
  it('allows public IPs', () => {
    expect(isPrivateIP('8.8.8.8')).toBe(false);
    expect(isPrivateIP('1.1.1.1')).toBe(false);
    expect(isPrivateIP('203.0.113.1')).toBe(false);
  });
});

Integration Test: Redirect Handler

// src/__tests__/redirect.integration.test.ts
 
import { describe, it, expect, beforeAll, afterAll } from 'vitest';
import request from 'supertest';
import { app } from '../app';
import { prisma } from '../db';
 
describe('Redirect with bot detection', () => {
  let testUrl: { id: string; shortCode: string };
 
  beforeAll(async () => {
    // Create a test URL with metadata
    testUrl = await prisma.url.create({
      data: {
        shortCode: 'test-bot',
        originalUrl: 'https://example.com/article',
        isActive: true,
        clickCount: 0,
      },
    });
 
    await prisma.urlMetadata.create({
      data: {
        urlId: testUrl.id,
        ogTitle: 'Test Article',
        ogDescription: 'A great test article',
        ogImage: 'https://example.com/image.png',
        expiresAt: new Date(Date.now() + 86400000),
      },
    });
  });
 
  afterAll(async () => {
    await prisma.urlMetadata.deleteMany({ where: { urlId: testUrl.id } });
    await prisma.url.delete({ where: { id: testUrl.id } });
  });
 
  it('redirects normal browsers with 302', async () => {
    const res = await request(app)
      .get(`/${testUrl.shortCode}`)
      .set('User-Agent', 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36');
 
    expect(res.status).toBe(302);
    expect(res.headers.location).toBe('https://example.com/article');
  });
 
  it('serves OG tags to Facebook bot', async () => {
    const res = await request(app)
      .get(`/${testUrl.shortCode}`)
      .set('User-Agent', 'facebookexternalhit/1.1');
 
    expect(res.status).toBe(200);
    expect(res.text).toContain('og:title');
    expect(res.text).toContain('Test Article');
    expect(res.text).toContain('og:image');
  });
 
  it('serves OG tags to Twitter bot', async () => {
    const res = await request(app)
      .get(`/${testUrl.shortCode}`)
      .set('User-Agent', 'Twitterbot/1.0');
 
    expect(res.status).toBe(200);
    expect(res.text).toContain('twitter:card');
    expect(res.text).toContain('twitter:title');
  });
 
  it('serves preview page when requested', async () => {
    const res = await request(app)
      .get(`/${testUrl.shortCode}?preview=true`)
      .set('User-Agent', 'Mozilla/5.0');
 
    expect(res.status).toBe(200);
    expect(res.text).toContain('Test Article');
    expect(res.text).toContain('Continue to');
    expect(res.text).toContain('example.com');
  });
 
  it('returns 404 for non-existent short codes', async () => {
    const res = await request(app)
      .get('/nonexistent')
      .set('User-Agent', 'facebookexternalhit/1.1');
 
    expect(res.status).toBe(404);
  });
});

Putting It All Together

Here's the complete flow of our link preview system:

  • URL created → scrapeAndStoreMetadata() fires in the background, never blocking the API response
  • Request hits /:shortCode → the handler inspects the User-Agent
  • Social bot → HTTP 200 with og:* and twitter:* meta tags
  • Human with ?preview=true → the preview page with title, image, destination URL, and click count
  • Any other visitor → HTTP 302 redirect to the destination
  • An hourly background job refreshes expired metadata so previews stay fresh


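The bot-aware dispatch at the heart of this post reduces to one small decision function. A condensed sketch, where isSocialBot is a trimmed version of the detector built earlier:

```typescript
// Trimmed bot detector (the full version earlier in the post covers more crawlers).
const BOT_PATTERNS = [
  'facebookexternalhit', 'twitterbot', 'slackbot', 'linkedinbot',
  'discordbot', 'whatsapp', 'telegrambot',
];

function isSocialBot(userAgent: string): boolean {
  const ua = userAgent.toLowerCase();
  return BOT_PATTERNS.some((p) => ua.includes(p));
}

type RedirectAction = 'serve-og-tags' | 'serve-preview-page' | 'redirect-302';

// Pure decision: what the redirect handler does for a given request.
function decideAction(userAgent: string, previewRequested: boolean): RedirectAction {
  if (isSocialBot(userAgent)) return 'serve-og-tags'; // crawlers get meta tags
  if (previewRequested) return 'serve-preview-page';  // ?preview=true
  return 'redirect-302';                              // humans get redirected
}
```

Keeping the decision pure makes it trivially unit-testable, which is exactly what the integration tests above exercise end to end.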
Series Complete

Congratulations. You've just built a production-ready URL shortener from an empty directory to a fully deployed, feature-rich application. Let's look at what we've accomplished across this entire series:

Phase    | What We Built
Phase 1  | Express + TypeScript project setup, first POST /api/shorten endpoint
Phase 2  | PostgreSQL database with Prisma ORM, schema design, migrations
Phase 3  | Base62 short code generation, collision handling, custom aliases
Phase 4  | Redirect engine, click analytics, geolocation tracking
Phase 5  | Redis caching for sub-millisecond redirects, rate limiting
Phase 6  | JWT authentication, user accounts, API keys
Phase 7  | React frontend with dashboard, charts, QR codes
Phase 8  | Unit, integration, and load testing with Vitest + k6
Phase 9  | Docker, CI/CD, Prometheus monitoring, production deployment
Phase 10 | Admin RBAC system with role-based access control
Phase 11 | URL moderation, flagging, and content review workflows
Phase 12 | Analytics dashboard with advanced metrics and visualizations
Phase 13 | React admin dashboard UI with data tables and moderation tools
Phase 14 | SEO, OG tag scraping, dynamic OG images, bot-aware redirects

That's a complete, professional-grade web application covering API design, database modeling, caching, authentication, frontend development, testing, deployment, admin tooling, and SEO.

What's Next?

Here are some advanced features you could add to take this project even further:

  • Custom domains — let users bring their own domain (e.g., links.mycompany.com)
  • Link scheduling — schedule links to activate/deactivate at specific times
  • Team workspaces — shared URL management for organizations
  • API documentation — auto-generated Swagger/OpenAPI docs
  • A/B testing — redirect to different destinations based on percentage splits
  • Webhook notifications — notify users when their links hit click milestones
  • Link-in-bio pages — aggregate multiple short links into a landing page

Whatever you build next, you now have the engineering foundation to build it well. The patterns you've learned — caching strategies, background processing, bot detection, SSRF prevention, cache-aside with stale-while-revalidate — these are the same patterns used at scale by companies like Bitly, Rebrandly, and short.io.

Build something great.

Series: Build a URL Shortener
Previous: Phase 13: React Admin Dashboard UI
