Structured Logging & Centralized Logging in Spring Boot

Introduction
Your Spring Boot app is running in production. Something breaks at 2 AM. You SSH in, tail the logs, and see:
ERROR 2026-03-11 02:14:33 - NullPointerException at UserService.java:87
ERROR 2026-03-11 02:14:33 - NullPointerException at UserService.java:87
ERROR 2026-03-11 02:14:33 - NullPointerException at UserService.java:87
Which user? Which request? Which tenant? You have no idea. The logs are useless.
Good logging isn't just printing debug lines — it's building observability into your application so that when production breaks, you can find the problem in seconds, not hours.
This post covers the full logging stack:
- Structured logging — JSON instead of plain text, so logs are queryable
- MDC (Mapped Diagnostic Context) — trace every log line back to its request
- Log levels and best practices — what to log, what not to log
- Centralized log aggregation — ship logs to ELK Stack or Grafana Loki
- Alerting on log patterns — get paged when ERROR rate spikes
What You'll Learn
✅ Configure Logback for production-grade structured logging
✅ Output JSON logs with Logstash encoder
✅ Use MDC to add request ID, user ID, and tenant ID to every log line
✅ Write a LoggingFilter that auto-injects trace context
✅ Implement request/response logging with sensitive field masking
✅ Configure log levels per package via application.yml
✅ Ship logs to Elasticsearch with Logstash or Filebeat
✅ Query logs with Kibana Discover and build dashboards
✅ Set up Grafana Loki as a lightweight alternative
✅ Alert on error rate spikes
Prerequisites
- Java 17+ and Spring Boot 3.x
- Spring Boot basics (Getting Started guide)
- Docker & Docker Compose for running ELK locally
1. The Problem with Plain-Text Logs
Spring Boot ships with Logback and sensible defaults. Out of the box, logs look like:
2026-03-11T02:14:33.421+00:00 INFO 12345 --- [main] c.e.demo.UserService : User created: john@example.com
2026-03-11T02:14:33.425+00:00 ERROR 12345 --- [http-nio-8080-exec-1] c.e.demo.OrderService : Order failed
This is fine for local development. In production, it fails you in several ways:
Problems with plain-text logs:
| Problem | Plain Text | Structured JSON |
|---|---|---|
| Searching | grep "ERROR" — brittle | Query by field: level:ERROR AND userId:42 |
| Correlation | Manually scan timestamps | requestId field links all logs |
| Parsing | Regex — fragile, breaks on format changes | Native JSON parsing |
| Aggregation | Hard to count errors by endpoint | Faceted search on any field |
| Alerting | Complex log parsing rules | SQL-like queries on fields |
The fix: Emit logs as JSON from the start.
2. Structured Logging with Logback
Dependencies
Add the Logstash encoder for JSON output:
<dependencies>
<!-- Spring Boot Web -->
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-web</artifactId>
</dependency>
<!-- Structured JSON logging -->
<dependency>
<groupId>net.logstash.logback</groupId>
<artifactId>logstash-logback-encoder</artifactId>
<version>8.0</version>
</dependency>
<!-- Lombok for @Slf4j -->
<dependency>
<groupId>org.projectlombok</groupId>
<artifactId>lombok</artifactId>
<optional>true</optional>
</dependency>
</dependencies>
Logback Configuration
Create src/main/resources/logback-spring.xml. The -spring suffix lets Spring Boot process <springProfile> tags:
<?xml version="1.0" encoding="UTF-8"?>
<configuration>
<!-- ======================== -->
<!-- Development: Human-readable colored output -->
<!-- ======================== -->
<springProfile name="dev,default">
<appender name="CONSOLE" class="ch.qos.logback.core.ConsoleAppender">
<encoder>
<pattern>
%clr(%d{yyyy-MM-dd HH:mm:ss.SSS}){faint} %clr(%5p){highlight} %clr(${PID:-}){magenta} --- [%clr(%15.15t){faint}] %clr(%-40.40logger{39}){cyan} : %m%n%throwable
</pattern>
</encoder>
</appender>
<root level="INFO">
<appender-ref ref="CONSOLE"/>
</root>
</springProfile>
<!-- ======================== -->
<!-- Production: JSON structured output -->
<!-- ======================== -->
<springProfile name="prod,staging">
<appender name="JSON_CONSOLE" class="ch.qos.logback.core.ConsoleAppender">
<encoder class="net.logstash.logback.encoder.LogstashEncoder">
<!-- Application metadata included in every log line -->
<customFields>{"app":"my-spring-app","env":"${SPRING_PROFILES_ACTIVE:-prod}"}</customFields>
<!-- Include MDC fields (requestId, userId, etc.) -->
<includeMdcKeyName>requestId</includeMdcKeyName>
<includeMdcKeyName>userId</includeMdcKeyName>
<includeMdcKeyName>tenantId</includeMdcKeyName>
<includeMdcKeyName>traceId</includeMdcKeyName>
<!-- Skip verbose stack trace in the message field -->
<throwableConverter class="net.logstash.logback.stacktrace.ShortenedThrowableConverter">
<maxDepthPerCause>10</maxDepthPerCause>
<shortenedClassNameLength>20</shortenedClassNameLength>
<rootCauseFirst>true</rootCauseFirst>
</throwableConverter>
</encoder>
</appender>
<!-- Rolling file for backup / local disk -->
<appender name="JSON_FILE" class="ch.qos.logback.core.rolling.RollingFileAppender">
<file>/var/log/app/application.log</file>
<rollingPolicy class="ch.qos.logback.core.rolling.TimeBasedRollingPolicy">
<fileNamePattern>/var/log/app/application.%d{yyyy-MM-dd}.%i.log.gz</fileNamePattern>
<maxHistory>7</maxHistory>
<timeBasedFileNamingAndTriggeringPolicy
class="ch.qos.logback.core.rolling.SizeAndTimeBasedFNATP">
<maxFileSize>100MB</maxFileSize>
</timeBasedFileNamingAndTriggeringPolicy>
</rollingPolicy>
<encoder class="net.logstash.logback.encoder.LogstashEncoder">
<customFields>{"app":"my-spring-app","env":"prod"}</customFields>
</encoder>
</appender>
<root level="INFO">
<appender-ref ref="JSON_CONSOLE"/>
<appender-ref ref="JSON_FILE"/>
</root>
<!-- Tune noisy frameworks -->
<logger name="org.hibernate.SQL" level="WARN"/>
<logger name="org.springframework.web" level="WARN"/>
<logger name="com.zaxxer.hikari" level="WARN"/>
</springProfile>
</configuration>
What JSON Output Looks Like
A single log.info("User created successfully") now emits:
{
"@timestamp": "2026-03-11T02:14:33.421Z",
"@version": "1",
"message": "User created successfully",
"logger_name": "com.example.service.UserService",
"thread_name": "http-nio-8080-exec-1",
"level": "INFO",
"level_value": 20000,
"app": "my-spring-app",
"env": "prod",
"requestId": "a3f7c928-1234-4abc-8def-0987654321ab",
"userId": "42",
"tenantId": "acme-corp"
}
Every field is indexed and queryable. Find all logs for a specific request: requestId:"a3f7c928-1234-4abc-8def-0987654321ab".
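MDC (next section) covers per-request fields. For one-off fields on a single statement, logstash-logback-encoder's StructuredArguments helper adds a key to both the message text and the JSON output. A sketch; the class name here is hypothetical:

```java
import static net.logstash.logback.argument.StructuredArguments.kv;

import lombok.extern.slf4j.Slf4j;
import org.springframework.stereotype.Service;

@Service
@Slf4j
public class CheckoutLogger { // hypothetical example class

    public void logOrderPlaced(String orderId, long amountCents) {
        // Message text renders the pairs inline, e.g. "Order placed: orderId=o-123 amountCents=4200",
        // and the JSON output additionally gets top-level "orderId" and "amountCents" fields
        log.info("Order placed: {} {}", kv("orderId", orderId), kv("amountCents", amountCents));
    }
}
```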
3. MDC: Trace Every Log Line to Its Request
Mapped Diagnostic Context (MDC) is a thread-local key-value store. Whatever you put in MDC automatically appears in every log statement made on that thread — including library code you don't control.
The MDC Idea
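A minimal sketch of the mechanism using plain SLF4J outside Spring: whatever you put into MDC on the current thread appears on every later log line, including lines logged by code that never saw the value, until you remove it.

```java
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.slf4j.MDC;

public class MdcDemo { // hypothetical demo class

    private static final Logger log = LoggerFactory.getLogger(MdcDemo.class);

    public static void main(String[] args) {
        log.info("before");          // no requestId field yet

        MDC.put("requestId", "a3f7c928");
        log.info("during");          // carries requestId=a3f7c928
        helper();                    // helper never received the ID...

        MDC.clear();
        log.info("after");           // requestId is gone again
    }

    private static void helper() {
        log.info("inside helper");   // ...but its log line carries it too
    }
}
```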
LoggingFilter Implementation
package com.example.filter;
import jakarta.servlet.FilterChain;
import jakarta.servlet.ServletException;
import jakarta.servlet.http.HttpServletRequest;
import jakarta.servlet.http.HttpServletResponse;
import lombok.extern.slf4j.Slf4j;
import org.slf4j.MDC;
import org.springframework.core.annotation.Order;
import org.springframework.security.core.Authentication;
import org.springframework.security.core.context.SecurityContextHolder;
import org.springframework.stereotype.Component;
import org.springframework.web.filter.OncePerRequestFilter;
import java.io.IOException;
import java.util.UUID;
@Component
@Order(1)
@Slf4j
public class LoggingFilter extends OncePerRequestFilter {
private static final String REQUEST_ID_HEADER = "X-Request-ID";
private static final String TRACE_ID_HEADER = "X-Trace-ID";
@Override
protected void doFilterInternal(HttpServletRequest request,
HttpServletResponse response,
FilterChain filterChain) throws ServletException, IOException {
long startTime = System.currentTimeMillis();
String requestId = getOrGenerateRequestId(request);
try {
// Populate MDC — all subsequent logs on this thread get these fields
MDC.put("requestId", requestId);
MDC.put("method", request.getMethod());
MDC.put("path", request.getRequestURI());
MDC.put("clientIp", getClientIp(request));
// Add authenticated user info (if available)
Authentication auth = SecurityContextHolder.getContext().getAuthentication();
if (auth != null && auth.isAuthenticated() && !"anonymousUser".equals(auth.getPrincipal())) {
MDC.put("userId", auth.getName());
}
// Echo request ID back to client for correlating their logs with yours
response.setHeader(REQUEST_ID_HEADER, requestId);
log.info("Request started: {} {}", request.getMethod(), request.getRequestURI());
filterChain.doFilter(request, response);
} finally {
long duration = System.currentTimeMillis() - startTime;
MDC.put("durationMs", String.valueOf(duration));
MDC.put("statusCode", String.valueOf(response.getStatus()));
log.info("Request completed: {} {} → {} ({}ms)",
request.getMethod(), request.getRequestURI(),
response.getStatus(), duration);
// CRITICAL: Always clear MDC to prevent leaking to pool-reused threads
MDC.clear();
}
}
private String getOrGenerateRequestId(HttpServletRequest request) {
String existingId = request.getHeader(REQUEST_ID_HEADER);
return (existingId != null && !existingId.isBlank()) ? existingId : UUID.randomUUID().toString();
}
private String getClientIp(HttpServletRequest request) {
String forwardedFor = request.getHeader("X-Forwarded-For");
if (forwardedFor != null && !forwardedFor.isBlank()) {
return forwardedFor.split(",")[0].trim();
}
return request.getRemoteAddr();
}
}
Filter Registration (optional — auto-detection usually works)
If Spring doesn't auto-register your filter, add:
@Configuration
public class FilterConfig {
@Bean
public FilterRegistrationBean<LoggingFilter> loggingFilter(LoggingFilter filter) {
FilterRegistrationBean<LoggingFilter> bean = new FilterRegistrationBean<>(filter);
bean.setOrder(1);
bean.addUrlPatterns("/*");
return bean;
}
}
Using MDC in Services
MDC values are already there — just log normally:
@Service
@Slf4j
@RequiredArgsConstructor
public class OrderService {
private final OrderRepository orderRepository;
private final PaymentService paymentService;
public Order createOrder(CreateOrderRequest request) {
// requestId and userId are already in MDC from LoggingFilter
log.info("Creating order for {} items, total: {}",
request.getItems().size(), request.getTotalAmount());
Order order = orderRepository.save(Order.from(request));
// Add order-specific context for this operation's logs
MDC.put("orderId", order.getId().toString());
log.info("Order persisted, initiating payment");
try {
paymentService.charge(order);
log.info("Payment successful");
return order;
} catch (PaymentException e) {
// This error log automatically includes requestId, userId, orderId
log.error("Payment failed for order: {}", e.getMessage());
throw e;
} finally {
MDC.remove("orderId"); // Clean up order-specific context
}
}
}
The resulting ERROR log automatically includes everything:
{
"@timestamp": "2026-03-11T02:14:33.421Z",
"level": "ERROR",
"message": "Payment failed for order: Insufficient funds",
"requestId": "a3f7c928-...",
"userId": "42",
"orderId": "789",
"method": "POST",
"path": "/api/orders"
}No more playing detective. One look and you know: user 42, request a3f7c928, order 789.
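If pairing MDC.put with a manual MDC.remove in a finally block feels error-prone, SLF4J also offers MDC.putCloseable, which removes the key automatically via try-with-resources. A sketch of the same orderId handling; the surrounding class and method are simplified and hypothetical:

```java
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.slf4j.MDC;

public class OrderContextSketch { // hypothetical demo class

    private static final Logger log = LoggerFactory.getLogger(OrderContextSketch.class);

    void chargeWithContext(String orderId, Runnable charge) {
        // The returned MDCCloseable removes "orderId" when the block
        // exits, even if charge.run() throws
        try (MDC.MDCCloseable ignored = MDC.putCloseable("orderId", orderId)) {
            log.info("Initiating payment");  // carries orderId
            charge.run();
            log.info("Payment successful");  // carries orderId
        }
        log.info("Done");                    // orderId no longer present
    }
}
```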
4. Request/Response Logging with Sensitive Data Masking
Logging request and response bodies is useful for debugging, but dangerous for privacy. Never log passwords, tokens, or credit card numbers.
ContentCachingWrapper Approach
Spring provides ContentCachingRequestWrapper and ContentCachingResponseWrapper to let you read the body after it's been consumed:
package com.example.filter;
import com.fasterxml.jackson.databind.JsonNode;
import com.fasterxml.jackson.databind.ObjectMapper;
import com.fasterxml.jackson.databind.node.ObjectNode;
import jakarta.servlet.FilterChain;
import jakarta.servlet.ServletException;
import jakarta.servlet.http.HttpServletRequest;
import jakarta.servlet.http.HttpServletResponse;
import lombok.RequiredArgsConstructor;
import lombok.extern.slf4j.Slf4j;
import org.springframework.core.annotation.Order;
import org.springframework.stereotype.Component;
import org.springframework.web.filter.OncePerRequestFilter;
import org.springframework.web.util.ContentCachingRequestWrapper;
import org.springframework.web.util.ContentCachingResponseWrapper;
import java.io.IOException;
import java.util.Set;
@Component
@Order(2) // After LoggingFilter
@Slf4j
@RequiredArgsConstructor
public class RequestResponseLoggingFilter extends OncePerRequestFilter {
private final ObjectMapper objectMapper;
// Fields to mask — never log these
private static final Set<String> SENSITIVE_FIELDS = Set.of(
"password", "confirmPassword", "currentPassword",
"token", "accessToken", "refreshToken", "apiKey",
"cardNumber", "cvv", "ssn", "creditCard"
);
// Only log bodies for these content types
private static final Set<String> LOGGABLE_CONTENT_TYPES = Set.of(
"application/json", "application/xml"
);
private static final int MAX_BODY_LOG_SIZE = 5000; // chars
@Override
protected void doFilterInternal(HttpServletRequest request,
HttpServletResponse response,
FilterChain filterChain) throws ServletException, IOException {
// Only log for JSON/XML requests
if (!shouldLogBody(request)) {
filterChain.doFilter(request, response);
return;
}
ContentCachingRequestWrapper wrappedRequest = new ContentCachingRequestWrapper(request);
ContentCachingResponseWrapper wrappedResponse = new ContentCachingResponseWrapper(response);
try {
filterChain.doFilter(wrappedRequest, wrappedResponse);
} finally {
logRequestBody(wrappedRequest);
logResponseBody(wrappedResponse);
wrappedResponse.copyBodyToResponse(); // Essential: send body to client
}
}
private void logRequestBody(ContentCachingRequestWrapper request) {
byte[] bodyBytes = request.getContentAsByteArray();
if (bodyBytes.length == 0) return;
String body = new String(bodyBytes, java.nio.charset.StandardCharsets.UTF_8); // avoid platform-default charset
String maskedBody = maskSensitiveFields(truncate(body));
log.debug("Request body: {}", maskedBody);
}
private void logResponseBody(ContentCachingResponseWrapper response) {
byte[] bodyBytes = response.getContentAsByteArray();
if (bodyBytes.length == 0) return;
String body = new String(bodyBytes, java.nio.charset.StandardCharsets.UTF_8); // avoid platform-default charset
String maskedBody = maskSensitiveFields(truncate(body));
log.debug("Response body [{}]: {}", response.getStatus(), maskedBody);
}
private String maskSensitiveFields(String json) {
try {
JsonNode node = objectMapper.readTree(json);
if (node.isObject()) { // root may be an array or scalar — only mask objects
maskNode((ObjectNode) node);
}
return objectMapper.writeValueAsString(node);
} catch (Exception e) {
return "[unparseable body]";
}
}
private void maskNode(ObjectNode node) {
node.fieldNames().forEachRemaining(field -> {
if (SENSITIVE_FIELDS.stream().anyMatch(s -> field.toLowerCase().contains(s.toLowerCase()))) {
node.put(field, "***MASKED***");
} else if (node.get(field).isObject()) {
maskNode((ObjectNode) node.get(field));
}
});
}
private String truncate(String body) {
return body.length() > MAX_BODY_LOG_SIZE
? body.substring(0, MAX_BODY_LOG_SIZE) + "...[truncated]"
: body;
}
private boolean shouldLogBody(HttpServletRequest request) {
String contentType = request.getContentType();
return contentType != null && LOGGABLE_CONTENT_TYPES.stream()
.anyMatch(contentType::startsWith);
}
}
Security Note: Set this filter to DEBUG level in production and only enable it temporarily during incident investigation. Logging all request/response bodies in production creates storage costs and privacy risks.
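One way to enforce that discipline is to register the filter only when a property is set, so body logging stays off unless explicitly enabled during an incident. A sketch; the property name logging.http-body.enabled is an assumption, not a Spring convention:

```java
import org.springframework.boot.autoconfigure.condition.ConditionalOnProperty;
import org.springframework.core.annotation.Order;
import org.springframework.stereotype.Component;
import org.springframework.web.filter.OncePerRequestFilter;

// Replaces the plain @Component on RequestResponseLoggingFilter above:
// the bean is only created when logging.http-body.enabled=true is set
// (e.g. via an environment variable), so body logging is opt-in.
@Component
@Order(2)
@ConditionalOnProperty(name = "logging.http-body.enabled", havingValue = "true")
public class RequestResponseLoggingFilter extends OncePerRequestFilter {
    // ... same doFilterInternal implementation as shown above ...
}
```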
5. Log Levels: What to Log (and What Not to)
The Five Levels
| Level | Use For | Production Setting |
|---|---|---|
| TRACE | Method entry/exit, parameter dumps | Never |
| DEBUG | Decision points, intermediate values, SQL | Only during incidents |
| INFO | Business events (user created, order placed, payment processed) | ✅ Always |
| WARN | Degraded operation — still works, investigate soon (cache miss storm, slow query) | ✅ Always |
| ERROR | Operation failed — requires investigation (payment failed, DB connection lost) | ✅ Always |
Configuration via application.yml
logging:
level:
root: INFO
# Your application — verbose during development
com.example: INFO
# SQL queries — only in dev
org.hibernate.SQL: DEBUG
org.hibernate.orm.jdbc.bind: TRACE # Parameter values
# Spring framework — usually too noisy
org.springframework.web: WARN
org.springframework.security: INFO
org.springframework.data: WARN
# Connection pool — only if troubleshooting timeouts
com.zaxxer.hikari: WARN
com.zaxxer.hikari.HikariConfig: INFO # Log pool config at startup
# Redis
io.lettuce: WARN
# Actuator — only endpoint calls
org.springframework.boot.actuate.endpoint: WARN
Log Level Best Practices
Log at INFO for business events:
// ✅ Good — business event, easy to query
log.info("Order placed: orderId={}, userId={}, amount={}, items={}",
order.getId(), order.getUserId(), order.getTotalAmount(), order.getItemCount());
// ✅ Good — audit trail
log.info("User password changed: userId={}, ip={}", userId, clientIp);
// ❌ Bad — not useful in production, pollutes logs
log.info("Entering getUser method");
log.info("userId is: {}", userId);
log.info("Exiting getUser method");
Log at WARN for degraded states:
// ✅ Good — operation succeeded but something is off
log.warn("Cache miss for userId={} — DB fallback used. Consider warming cache.", userId);
log.warn("Slow query detected: {}ms for orderId={}", duration, orderId);
log.warn("Rate limit approaching for userId={}: {}/100 requests used", userId, count);
Log at ERROR for actual failures:
// ✅ Good — always include context and the exception
log.error("Failed to process payment for orderId={}, userId={}: {}",
order.getId(), order.getUserId(), e.getMessage(), e);
// ❌ Bad — no context
log.error("Error occurred");
// ❌ Bad — swallowed exception, no stack trace
log.error("Payment failed: " + e.getMessage());
6. Application Logging Patterns
Service Layer: Structured Event Logging
@Service
@Slf4j
@RequiredArgsConstructor
public class UserService {
private final UserRepository userRepository;
private final EmailService emailService;
public User registerUser(RegisterRequest request) {
log.info("User registration started: email={}", request.getEmail());
if (userRepository.existsByEmail(request.getEmail())) {
log.warn("Registration rejected — email already exists: {}", request.getEmail());
throw new EmailAlreadyExistsException(request.getEmail());
}
User user = createUser(request);
userRepository.save(user);
log.info("User registered: userId={}, email={}", user.getId(), user.getEmail());
try {
emailService.sendWelcomeEmail(user);
log.info("Welcome email queued: userId={}", user.getId());
} catch (EmailException e) {
// Email failure is not fatal — warn, don't error
log.warn("Welcome email failed for userId={}: {}", user.getId(), e.getMessage());
}
return user;
}
}
Global Exception Handler: Consistent Error Logging
@RestControllerAdvice
@Slf4j
public class GlobalExceptionHandler {
// Expected business errors — log at WARN
@ExceptionHandler(ResourceNotFoundException.class)
public ResponseEntity<ErrorResponse> handleNotFound(ResourceNotFoundException ex) {
log.warn("Resource not found: {}", ex.getMessage());
return ResponseEntity.status(HttpStatus.NOT_FOUND)
.body(new ErrorResponse("NOT_FOUND", ex.getMessage()));
}
@ExceptionHandler(ValidationException.class)
public ResponseEntity<ErrorResponse> handleValidation(ValidationException ex) {
log.warn("Validation failed: {}", ex.getMessage());
return ResponseEntity.badRequest()
.body(new ErrorResponse("VALIDATION_ERROR", ex.getMessage()));
}
// Unexpected errors — log at ERROR with full stack trace
@ExceptionHandler(Exception.class)
public ResponseEntity<ErrorResponse> handleGeneral(Exception ex) {
log.error("Unexpected error: {} — {}", ex.getClass().getSimpleName(), ex.getMessage(), ex);
return ResponseEntity.internalServerError()
.body(new ErrorResponse("INTERNAL_ERROR", "An unexpected error occurred"));
}
}
Scheduled Job Logging
@Component
@Slf4j
@RequiredArgsConstructor
public class OrderCleanupJob {
private final OrderRepository orderRepository;
@Scheduled(cron = "0 0 2 * * *") // 2 AM daily
public void cleanupExpiredOrders() {
String jobId = UUID.randomUUID().toString().substring(0, 8);
MDC.put("jobId", jobId);
MDC.put("jobName", "order-cleanup");
log.info("Job started");
long start = System.currentTimeMillis();
try {
int deleted = orderRepository.deleteExpiredOrders();
long duration = System.currentTimeMillis() - start;
log.info("Job completed: deleted={}, durationMs={}", deleted, duration);
} catch (Exception e) {
log.error("Job failed after {}ms: {}", System.currentTimeMillis() - start, e.getMessage(), e);
} finally {
MDC.remove("jobId");
MDC.remove("jobName");
}
}
}
7. Centralized Logging with ELK Stack
When you have multiple instances or microservices, each writing its own log files, you need a central place to search and analyze them. The ELK Stack (Elasticsearch + Logstash + Kibana) is the most widely used solution.
Architecture
The flow: app instances write JSON logs to disk, Filebeat tails the files and ships them to Logstash, Logstash parses and enriches each event, Elasticsearch indexes it, and Kibana is where you search and visualize.
Docker Compose for Local ELK
Create docker-compose.elk.yml:
version: "3.8"
services:
elasticsearch:
image: docker.elastic.co/elasticsearch/elasticsearch:8.12.0
container_name: elasticsearch
environment:
- discovery.type=single-node
- xpack.security.enabled=false # Disable for local dev only
- ES_JAVA_OPTS=-Xms512m -Xmx512m
volumes:
- esdata:/usr/share/elasticsearch/data
ports:
- "9200:9200"
healthcheck:
test: ["CMD", "curl", "-f", "http://localhost:9200"]
interval: 10s
timeout: 5s
retries: 10
logstash:
image: docker.elastic.co/logstash/logstash:8.12.0
container_name: logstash
volumes:
- ./logstash/pipeline:/usr/share/logstash/pipeline:ro
ports:
- "5044:5044" # Beats input
- "5000:5000/udp" # Syslog
depends_on:
elasticsearch:
condition: service_healthy
kibana:
image: docker.elastic.co/kibana/kibana:8.12.0
container_name: kibana
environment:
- ELASTICSEARCH_HOSTS=http://elasticsearch:9200
ports:
- "5601:5601"
depends_on:
elasticsearch:
condition: service_healthy
filebeat:
image: docker.elastic.co/beats/filebeat:8.12.0
container_name: filebeat
user: root
volumes:
- ./filebeat/filebeat.yml:/usr/share/filebeat/filebeat.yml:ro
- /var/log/app:/var/log/app:ro # Mount app log directory
- /var/lib/docker/containers:/var/lib/docker/containers:ro
depends_on:
- logstash
command: filebeat -e -strict.perms=false
volumes:
esdata:
Logstash Pipeline
Create logstash/pipeline/spring-boot.conf:
input {
beats {
port => 5044
}
}
filter {
# Parse JSON log output from Spring Boot / Logstash encoder
json {
source => "message"
target => "log"
}
# Promote important fields to top-level for easier Kibana faceting
if [log][level] {
mutate {
add_field => {
"level" => "%{[log][level]}"
"requestId" => "%{[log][requestId]}"
"userId" => "%{[log][userId]}"
"app" => "%{[log][app]}"
"path" => "%{[log][path]}"
}
}
}
# Parse @timestamp from log (overrides Filebeat's ingestion time)
date {
match => ["[log][@timestamp]", "ISO8601"]
target => "@timestamp"
}
}
output {
elasticsearch {
hosts => ["http://elasticsearch:9200"]
index => "spring-boot-logs-%{+YYYY.MM.dd}"
# Daily indices — easy rotation and retention management
}
stdout { codec => rubydebug } # Remove in production
}
Filebeat Configuration
Create filebeat/filebeat.yml:
filebeat.inputs:
- type: log
enabled: true
paths:
- /var/log/app/*.log
json.keys_under_root: true
json.add_error_key: true
multiline.pattern: '^\{'
multiline.negate: true
multiline.match: after
output.logstash:
hosts: ["logstash:5044"]
# Add Docker metadata if running in containers
processors:
- add_host_metadata:
when.not.contains.tags: forwarded
- add_docker_metadata: ~
Start the Stack
# Start ELK
docker compose -f docker-compose.elk.yml up -d
# Verify Elasticsearch is up
curl http://localhost:9200/_cluster/health?pretty
# Open Kibana
open http://localhost:5601
In Kibana → Stack Management → Data Views (called Index Patterns before 8.x), create a data view matching spring-boot-logs-*.
8. Alternative: Grafana Loki (Lightweight)
ELK is powerful but heavyweight (4+ GB RAM). For smaller deployments, Grafana Loki is a better fit — it's like Prometheus but for logs.
Loki Architecture
The flow: the app writes JSON logs to disk, Promtail tails the files and attaches labels, Loki stores the labeled log streams, and Grafana queries them with LogQL.
Docker Compose for Loki
version: "3.8"
services:
loki:
image: grafana/loki:2.9.4
container_name: loki
ports:
- "3100:3100"
command: -config.file=/etc/loki/local-config.yaml
volumes:
- loki-data:/loki
promtail:
image: grafana/promtail:2.9.4
container_name: promtail
volumes:
- /var/log/app:/var/log/app:ro
- ./promtail/config.yml:/etc/promtail/config.yml:ro
command: -config.file=/etc/promtail/config.yml
grafana:
image: grafana/grafana:10.3.1
container_name: grafana
environment:
- GF_SECURITY_ADMIN_PASSWORD=admin
ports:
- "3000:3000"
volumes:
- grafana-data:/var/lib/grafana
depends_on:
- loki
volumes:
loki-data:
grafana-data:
Promtail Configuration
Create promtail/config.yml:
server:
http_listen_port: 9080
positions:
filename: /tmp/positions.yaml
clients:
- url: http://loki:3100/loki/api/v1/push
scrape_configs:
- job_name: spring-boot
static_configs:
- targets:
- localhost
labels:
job: spring-boot
app: my-spring-app
__path__: /var/log/app/*.log
pipeline_stages:
- json:
expressions:
level: level
requestId: requestId
userId: userId
message: message
- labels:
level:
requestId:
userId:
- timestamp:
source: '@timestamp'
format: RFC3339Nano
Loki uses LogQL for querying, which is similar to PromQL:
# All ERROR logs
{app="my-spring-app"} | json | level="ERROR"
# Errors for a specific user
{app="my-spring-app"} | json | level="ERROR" | userId="42"
# Slow requests (duration > 1000ms)
{app="my-spring-app"} | json | durationMs > 1000
# Error rate over time
rate({app="my-spring-app"} | json | level="ERROR" [5m])
9. Spring Boot Actuator Integration
Spring Boot Actuator exposes operational endpoints — including log level management at runtime.
Actuator Setup
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-actuator</artifactId>
</dependency>
management:
endpoints:
web:
exposure:
include: health, info, loggers, metrics, logfile
base-path: /actuator
endpoint:
loggers:
enabled: true
health:
show-details: when-authorized
Change Log Levels at Runtime
No restart needed — change log levels via HTTP:
# See current level for a package
curl http://localhost:8080/actuator/loggers/com.example
# Response:
# {"configuredLevel": "INFO", "effectiveLevel": "INFO"}
# Enable DEBUG for your service package (incident investigation)
curl -X POST http://localhost:8080/actuator/loggers/com.example \
-H "Content-Type: application/json" \
-d '{"configuredLevel": "DEBUG"}'
# Enable Hibernate SQL logging (see actual queries)
curl -X POST http://localhost:8080/actuator/loggers/org.hibernate.SQL \
-H "Content-Type: application/json" \
-d '{"configuredLevel": "DEBUG"}'
# Reset to INFO when done
curl -X POST http://localhost:8080/actuator/loggers/com.example \
-H "Content-Type: application/json" \
-d '{"configuredLevel": "INFO"}'
Security: Protect Actuator endpoints with Spring Security. Never expose them publicly.
@Configuration
public class ActuatorSecurityConfig {
@Bean
public SecurityFilterChain actuatorSecurity(HttpSecurity http) throws Exception {
http.securityMatcher("/actuator/**")
.authorizeHttpRequests(auth -> auth
.requestMatchers("/actuator/health").permitAll()
.anyRequest().hasRole("ADMIN")
);
return http.build();
}
}
10. Alerting on Log Patterns
Logs are only valuable if you act on them. Set up alerts so errors notify your team automatically.
Kibana Alerting (ELK)
In Kibana → Stack Management → Rules → Create Rule:
- Rule type: Elasticsearch query
- Query:
{ "query": { "bool": { "filter": [{ "term": { "level": "ERROR" }}, { "term": { "app": "my-spring-app" }}]}}}
- Condition: Count > 10 in last 5 minutes
- Action: Slack/PagerDuty/Email notification
Grafana Alerting (Loki)
In Grafana → Alerting → New Alert Rule:
# Alert query: error rate
sum(rate({app="my-spring-app"} | json | level="ERROR" [5m])) by (app)
- Condition: IS ABOVE 0.5 (errors per second)
- Evaluation: Every 1 minute for 5 minutes
- Notification: Slack channel #on-call
Application-Level Metrics for Alerting
Use Micrometer (built into Spring Boot) to emit error rate metrics:
@Component
@Slf4j
@RequiredArgsConstructor
public class PaymentService {
private final MeterRegistry meterRegistry;
public void processPayment(Order order) {
Timer.Sample sample = Timer.start(meterRegistry);
try {
// ... payment logic ...
meterRegistry.counter("payment.success",
"method", order.getPaymentMethod()).increment();
} catch (PaymentException e) {
meterRegistry.counter("payment.failure",
"reason", e.getReason(),
"method", order.getPaymentMethod()).increment();
log.error("Payment failed: orderId={}, reason={}", order.getId(), e.getReason(), e);
throw e;
} finally {
sample.stop(Timer.builder("payment.duration")
.tag("method", order.getPaymentMethod())
.register(meterRegistry));
}
}
}
Alert on payment.failure counter spikes in Grafana/Prometheus for operational alerting that complements log-based alerting.
11. Production Checklist
Before going live, verify:
Logging configuration:
- prod profile uses JSON Logstash encoder
- Root level is INFO (not DEBUG)
- Noisy frameworks (hibernate, hikari, spring.web) set to WARN
- File appender has rolling policy (max 7 days, max 100 MB per file)
- logback-spring.xml uses <springProfile> for env separation
MDC & tracing:
- LoggingFilter is registered and populates requestId, userId
- MDC.clear() is called in a finally block
- Async operations propagate MDC (see below)
Security:
- Passwords and tokens are NEVER logged
- RequestResponseLoggingFilter is DEBUG level in production
- Actuator log-level endpoint is secured behind ROLE_ADMIN
- Log storage follows data retention policy (GDPR: no PII without consent)
Centralized logging:
- Logs ship to Elasticsearch/Loki
- Index retention policy configured (e.g., 30 days)
- Error rate alert is configured and tested
MDC in Async Contexts
MDC is thread-local. It does not propagate automatically to @Async threads or virtual threads. Fix this with a task decorator:
@Configuration
public class AsyncConfig implements AsyncConfigurer {
@Override
public Executor getAsyncExecutor() {
ThreadPoolTaskExecutor executor = new ThreadPoolTaskExecutor();
executor.setCorePoolSize(4);
executor.setMaxPoolSize(10);
executor.setQueueCapacity(100);
executor.setThreadNamePrefix("async-");
// Copy MDC from calling thread to async thread
executor.setTaskDecorator(runnable -> {
Map<String, String> mdcContext = MDC.getCopyOfContextMap();
return () -> {
try {
if (mdcContext != null) {
MDC.setContextMap(mdcContext);
}
runnable.run();
} finally {
MDC.clear();
}
};
});
executor.initialize();
return executor;
}
}
Summary
Good logging is the difference between a 2-minute production fix and a 2-hour war room. Here's what you built:
Structured JSON logs — every log line is machine-parseable with consistent fields.
MDC tracing — requestId, userId, and tenantId automatically attached to every log, including library code.
Request/response logging with masking — full observability without leaking passwords or tokens.
Log level hygiene — INFO for business events, WARN for degraded states, ERROR for real failures. Frameworks set to WARN.
Centralized log aggregation — ELK Stack (full-featured) or Loki (lightweight). Both support structured queries.
Runtime log level control — Actuator lets you enable DEBUG for a package during an incident, then reset — without a restart.
Alerting — error rate spikes notify your team before users file bug reports.
Key Takeaways
✅ Use logback-spring.xml with <springProfile> to switch between colored console (dev) and JSON (prod)
✅ Always use MDC.clear() in a finally block — never let MDC leak between requests
✅ The Logstash encoder makes every MDC field a top-level JSON field — fully indexable
✅ Sensitive fields (password, token, card number) must be masked before logging
✅ Actuator /loggers endpoint lets you debug production without a restart
✅ Propagate MDC manually to @Async threads with a task decorator
What's Next
With structured logging in place, the natural next step is full observability:
- Monitoring with Actuator, Prometheus & Grafana — metrics, dashboards, and SLO tracking alongside your logs (SB-18)
Part of the Spring Boot Learning Roadmap — a comprehensive guide from basics to production.