Java 25: JVM and Performance Improvements Explained

Java 25 is an LTS release, so most upgrade discussions naturally start with language features. But for many production systems, the more important story is actually under the hood.

JDK 25 includes several JVM and runtime improvements that aim to:

  • reduce memory overhead,
  • improve warmup,
  • make profiling more useful,
  • strengthen low-latency GC options,
  • and make startup optimization workflows easier to use.

In the overview post, we looked at the release as a whole. In the language changes post, we focused on source-level improvements. This article moves down a layer and focuses on what changes in the HotSpot JVM, JIT, GC, and JFR tooling.

The most important Java 25 JVM and performance changes are:

  • JEP 519: Compact Object Headers
  • JEP 514: Ahead-of-Time Command-Line Ergonomics
  • JEP 515: Ahead-of-Time Method Profiling
  • JEP 521: Generational Shenandoah
  • JEP 509: JFR CPU-Time Profiling (Experimental)
  • JEP 518: JFR Cooperative Sampling
  • JEP 520: JFR Method Timing & Tracing

Quick Classification

From a practical operations perspective, it helps to group them like this:

Mostly about runtime efficiency

  • Compact Object Headers
  • Generational Shenandoah

Mostly about startup and warmup

  • Ahead-of-Time Command-Line Ergonomics
  • Ahead-of-Time Method Profiling

Mostly about observability and diagnostics

  • JFR CPU-Time Profiling
  • JFR Cooperative Sampling
  • JFR Method Timing & Tracing

That is the simplest mental model for deciding what to evaluate first.

1. Compact Object Headers

JEP 519 changes compact object headers from an experimental feature into a product feature in Java 25.

What It Means

Every Java object carries header metadata. That header is useful, but it also consumes memory. In object-heavy applications, even a modest per-object reduction can have a meaningful system-wide effect.

Compact object headers aim to reduce that overhead.

This is not the kind of feature that changes your code. It is the kind of feature that can improve the economics of your code when the heap is full of many small objects.

Why It Matters

According to the JEP, experiments showed:

  • lower heap usage,
  • fewer garbage collections in some workloads,
  • and measurable CPU-time improvements in some benchmarks.

One of the examples in the JEP reports 22% less heap space and 8% less CPU time for SPECjbb2015 in one setting, plus fewer collections in another.

That does not mean every application will see the same gains, but the direction is clear: reducing object-header overhead can improve both memory efficiency and GC behavior.

Important Caveat

This feature is a product feature in Java 25, but it is not the default object-header layout.

That matters operationally. You should think of this as something to benchmark intentionally rather than something you automatically receive after upgrading.
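
If you decide to benchmark it, opting in is a one-flag change. A minimal sketch, using the flag name from JEP 519 (app.jar and com.example.App are placeholders):

```shell
# Java 25: compact object headers are a product feature, but opt-in.
java -XX:+UseCompactObjectHeaders -cp app.jar com.example.App

# On JDK 24 the same feature additionally required the experimental unlock:
#   java -XX:+UnlockExperimentalVMOptions -XX:+UseCompactObjectHeaders ...
```

Benchmark with and without the flag under a realistic allocation profile; heap usage and GC frequency are the metrics most likely to move.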

Who Should Care Most

Compact object headers are especially interesting for:

  • memory-sensitive services,
  • object-heavy caches,
  • allocation-heavy backend workloads,
  • and teams running large heaps where small percentage improvements matter financially.

2. Ahead-of-Time Command-Line Ergonomics

JEP 514 is not a raw runtime optimization by itself. Instead, it makes Java's ahead-of-time cache workflow much easier to use.

The Problem Before Java 25

Before this change, creating an AOT cache required a more awkward two-step workflow:

  1. Run a training phase to record AOT configuration.
  2. Run a second command to build the cache.

That is flexible, but it creates friction. Friction matters because performance features that are difficult to use often do not get used.

What Java 25 Adds

Java 25 introduces a simpler one-command path via the -XX:AOTCacheOutput option.

Example:

java -XX:AOTCacheOutput=app.aot -cp app.jar com.example.App

Then later:

java -XX:AOTCache=app.aot -cp app.jar com.example.App

The high-level goal is simple: make startup-optimization workflows more practical for real teams.

Why This Matters

This change is important because Java startup work is moving toward a more explicit performance-engineering model:

  • training runs,
  • caches,
  • warmup acceleration,
  • and more deliberate startup tuning.

JEP 514 reduces the operational cost of trying these techniques.

Caveat

The official JEP notes that the one-step workflow can require significantly more memory during cache creation, because the cache-creation sub-invocation uses its own heap.

So the practical advice is:

  • prefer the one-step workflow for convenience,
  • but keep the two-step workflow in mind for constrained environments or more customized pipelines.
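
For reference, the two-step workflow carried over from JDK 24's JEP 483 looks roughly like this (app.jar and com.example.App are placeholders):

```shell
# Step 1: training run -- record an AOT configuration while the app
# exercises a representative workload
java -XX:AOTMode=record -XX:AOTConfiguration=app.aotconf \
     -cp app.jar com.example.App

# Step 2: build the cache from the recorded configuration
# (no main class: this step does not run the application)
java -XX:AOTMode=create -XX:AOTConfiguration=app.aotconf \
     -XX:AOTCache=app.aot -cp app.jar
```

Because the steps are separate processes, each can be given its own heap size, which is exactly what makes this path attractive in memory-constrained build environments.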

3. Ahead-of-Time Method Profiling

JEP 515 is one of the most strategically important performance changes in Java 25.

The Problem It Solves

Java applications often do not reach peak performance immediately. The JIT compiler needs time to observe the application, identify hot methods, and optimize them.

That warmup period is real overhead in:

  • short-lived processes,
  • autoscaled services,
  • serverless-style workloads,
  • and systems where fast recovery matters.

What Java 25 Adds

Java 25 extends the AOT cache so that it can store method-execution profiles collected during a prior training run.

That means the JVM can start with better knowledge of what is hot, instead of waiting to learn everything again from scratch in production.

Practical Impact

The JEP's core claim is not that Java suddenly becomes fully ahead-of-time compiled. It is that Java can warm up faster because prior profiling information is available immediately at startup.

That is a subtle but important distinction.

  • This is not "replace the JIT."
  • This is "help the JIT start smarter."

The official example in the JEP shows a short Stream-based program improving from 90 ms to 73 ms, roughly a 19% improvement, after adding cached profiles.

Real applications will vary, but the big idea is compelling: shift some warmup intelligence from production runs into training runs.

Who Should Evaluate This

This is especially relevant for:

  • platforms that care about startup time,
  • services that scale up and down frequently,
  • apps with repeatable workloads,
  • and teams already exploring Project Leyden-style optimizations.

4. Generational Shenandoah

JEP 521 changes the generational mode of Shenandoah from experimental to a product feature in Java 25.

Why This Is Important

Low-pause collectors are valuable, but generational collection is also one of the most proven ideas in GC design. Most objects die young, so collecting young objects differently from old ones usually improves efficiency.

Java 25 brings those ideas together more cleanly for Shenandoah.

What Changes in Practice

In JDK 24, generational Shenandoah required experimental flags. In JDK 25, it no longer requires -XX:+UnlockExperimentalVMOptions.

That lowers the barrier to evaluation and signals greater maturity.
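
Concretely, opting in now takes only the mode selection itself (flag names per JEP 521; app.jar and com.example.App are placeholders):

```shell
# JDK 25: generational Shenandoah as a product feature
java -XX:+UseShenandoahGC -XX:ShenandoahGCMode=generational \
     -cp app.jar com.example.App

# JDK 24 additionally required the experimental unlock:
#   java -XX:+UnlockExperimentalVMOptions -XX:+UseShenandoahGC \
#        -XX:ShenandoahGCMode=generational ...
```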

Important Caveat

According to the JEP, generational Shenandoah is not made the default mode in Java 25. Shenandoah still defaults to single-generation mode unless you choose otherwise.

So again, this is a feature to evaluate deliberately, not a default behavior change.

Who Should Care

Generational Shenandoah is most interesting for:

  • low-latency systems,
  • workloads sensitive to GC pause behavior,
  • teams already using Shenandoah,
  • and performance engineers comparing collectors under real production-like traffic.

5. JFR CPU-Time Profiling

JEP 509 adds CPU-time profiling to JFR on Linux, as an experimental feature.

Why This Matters

Traditional execution-time sampling and true CPU-time profiling are not the same.

A method can:

  • consume a lot of wall-clock time because it waits on IO,
  • or consume a lot of CPU because it is actually doing computation.

Those are different optimization problems.

What Java 25 Adds

Java 25 lets JFR capture CPU-time samples on Linux using the kernel's CPU timer mechanism. This makes CPU profiling more accurate than relying only on ordinary execution-time sampling.

The JEP also explicitly notes an important benefit: CPU-time profiling can better account for CPU consumed while Java code is interacting with native code.
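
As a sketch of how to try it, based on the examples in JEP 509 (the event name is jdk.CPUTimeSample; profile.jfr, app.jar, and com.example.App are placeholders):

```shell
# Start a recording with the experimental CPU-time sampler enabled
# (Linux only in Java 25; the event is off by default)
java -XX:StartFlightRecording:jdk.CPUTimeSample#enabled=true,filename=profile.jfr \
     -cp app.jar com.example.App
```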

Why It Is Operationally Interesting

This makes JFR more credible as a built-in production profiling tool, especially for Linux-heavy deployments.

Instead of treating third-party profilers as the only serious option, Java teams get a stronger built-in path for CPU analysis.

Caveat

This feature is:

  • Linux-only in Java 25,
  • not enabled by default,
  • and still experimental.

So teams should treat it as promising, but not yet universal.

6. JFR Cooperative Sampling

JEP 518 improves the stability of JFR stack sampling.

The Old Problem

Accurate stack sampling is hard. Sampling only at safepoints can introduce bias, but sampling asynchronously away from safepoints can be risky and complicated.

Historically, that tension created both accuracy and stability issues.

What Java 25 Changes

Java 25 redesigns JFR's sampling mechanism so stack walking happens only at safepoints, while still reducing safepoint bias through a cooperative approach.

The practical takeaway is:

  • safer stack sampling,
  • simpler internal logic,
  • and improved scalability for the sampler thread.

Why You Should Care

Most application developers will never see this directly, but they may feel it indirectly through a more robust JFR experience.

This is the kind of infrastructure work that improves confidence in platform tooling.

It also matters because JEP 509 depends on the mechanism introduced here.

7. JFR Method Timing and Tracing

JEP 520 extends JFR with method timing and method tracing via bytecode instrumentation.

What This Enables

Instead of relying only on statistical sampling, Java 25 adds a way to collect exact timing and trace information for selected methods.

That is extremely useful when you want to answer specific questions such as:

  • which method is triggering a slow startup path,
  • which code path is causing an expensive initialization,
  • or how often a particular hot method is really being invoked.
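
Per JEP 520, this is driven by filters on the new events: jdk.MethodTrace records individual invocations with stack traces, while jdk.MethodTiming aggregates counts and durations. A sketch, with java.util.HashMap::resize standing in for a method under investigation:

```shell
# Record every invocation of one specific method, with stack traces
java -XX:StartFlightRecording:jdk.MethodTrace#filter=java.util.HashMap::resize,filename=rec.jfr \
     -cp app.jar com.example.App

# Inspect the recording afterwards with the jfr tool
jfr print --events jdk.MethodTrace rec.jfr
```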

Why It Is Different from Ordinary Profiling

Sampling is great when you want broad visibility across the whole application.

Method timing and tracing are better when you want surgical visibility into a smaller set of methods.

That trade-off matters because exact instrumentation can be very powerful, but you do not want to apply it to too many methods at once.

Practical Operations Advice

Use this for:

  • focused investigations,
  • startup analysis,
  • validating performance fixes,
  • and targeted debugging.

Do not think of it as a universal replacement for low-overhead continuous sampling.

What Most Teams Should Evaluate First

If you are upgrading to Java 25 and want the shortest high-value checklist, start here:

  1. Compact Object Headers if memory footprint and GC pressure matter.
  2. Ahead-of-Time Method Profiling if warmup and startup matter.
  3. Generational Shenandoah if you already care deeply about collector behavior and low latency.
  4. JFR CPU-Time Profiling and JFR Method Timing/Tracing if your team does serious production diagnostics.

That gives most teams a sensible first evaluation path without trying every new knob at once.

Final Thoughts

The JVM and performance story in Java 25 is strong because it improves multiple stages of the application lifecycle:

  • before startup, through better AOT workflows,
  • during warmup, through cached method profiles,
  • during steady-state execution, through memory and GC improvements,
  • and during investigation, through a significantly better JFR toolbox.

This is exactly what a mature LTS release should do.

It does not promise magic. Instead, it gives performance-minded teams better building blocks, better diagnostics, and fewer excuses to postpone serious benchmarking.

In the next post, we will move from the JVM into the standard library side of the release and look at the new APIs and libraries in Java 25.
