Final Isn’t Final? 5 Deeply Impactful Changes Coming in JDK 26

Introduction

When a new Java release is on the horizon, the most talked-about features are often the big, shiny additions to the language or APIs. But in a platform as mature and ubiquitous as Java, the most profound changes often happen at a deeper level—refining core principles for better safety, performance, and developer ergonomics. These are the changes that strengthen the very foundation of the ecosystem.

JDK 26 is a release that exemplifies this philosophy. It brings a series of fundamental improvements that challenge long-held assumptions and address subtle but critical issues that affect nearly every developer. These aren’t just incremental updates; they are foundational shifts that will change how we write and reason about our Java code for the better.

In this article, we’ll explore five of the most surprising and impactful changes slated for JDK 26. From reinforcing the meaning of a core keyword to eliminating painful performance trade-offs, these updates demonstrate a commitment to making Java safer, faster, and more reliable by default.


1. Final Isn’t Actually Final (But It’s About to Be)

The Surprising Truth

Here’s a fact that might surprise you: despite its name, a final field in Java can currently be mutated after initialization. This is possible through a mechanism called “deep reflection,” which allows code to bypass the language’s normal access rules.
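You can see the loophole with a few lines of standard reflection. A minimal sketch (the Config class is a hypothetical stand-in; the reflection calls are the ordinary java.lang.reflect API):

```java
import java.lang.reflect.Field;

public class FinalMutation {

    // Hypothetical class for illustration.
    static class Config {
        final int port;
        Config(int port) { this.port = port; }
    }

    static int mutatedPort() throws Exception {
        Config config = new Config(8080);
        Field f = Config.class.getDeclaredField("port");
        f.setAccessible(true);   // "deep reflection": suppress the language's access checks
        f.setInt(config, 9090);  // rewrite a final field -- the loophole JEP 500 closes
        return config.port;
    }

    public static void main(String[] args) throws Exception {
        System.out.println(mutatedPort()); // prints 9090 on current JDKs
    }
}
```

On current JDKs this quietly succeeds; under JEP 500 it will first trigger a warning and is eventually meant to fail unless explicitly enabled.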

The Problem

This loophole has significant consequences for both correctness and performance, as powerfully stated in JEP 500:

Final fields are, in reality, as mutable as non-final fields. We cannot rely on final fields to be immutable when reasoning about correctness, and we cannot use final fields to construct the deeply immutable graphs of objects that enable the JVM to deliver the best performance optimizations.

This ambiguity is problematic because it undermines a developer’s ability to reason about their code’s state. If a final field can be changed unexpectedly, guarantees of immutability evaporate. Furthermore, it prevents the JVM from applying crucial performance optimizations like constant folding, where the value of a constant expression is computed once and reused, because the JVM cannot trust that the final field’s value will truly remain constant.

The Solution

JEP 500 proposes to close this loophole in JDK 26. Initially, using deep reflection to mutate final fields will issue a runtime warning. This is the first step in a plan to make this an error that throws an IllegalAccessException by default in a future release. Application developers who have a legitimate need for this capability (often for serialization libraries) will need to explicitly enable it with the --enable-final-field-mutation command-line flag.

Concluding Reflection

This change is more than just a minor tweak; it’s about reinforcing Java’s commitment to “integrity by default.” By making final truly mean final, the platform ensures its core promises are kept, making all Java programs inherently safer and creating new opportunities for performance optimizations.

2. No More Choosing Between Fast Startups and Low-Latency GC

The Painful Choice

For years, developers of latency-sensitive applications have faced a difficult dilemma, as outlined in JEP 516. They could use the Ahead-of-Time (AOT) cache for significantly faster application startup times, or they could use the Z Garbage Collector (ZGC) for ultra-low garbage-collection pause times. They couldn’t have both, because the two were incompatible.

The Root Cause

The problem was that the AOT cache stored objects in a format specific to certain garbage collectors, like G1. This format was bitwise-compatible with how G1 lays out objects in the heap, allowing the JVM to map them directly into memory. However, ZGC uses a different object and reference format, making it unable to use these pre-cached objects.

The Clever Solution

The proposed solution is to introduce a new, “GC-agnostic” cache format. Instead of storing objects in a layout specific to one GC, the cache will use a neutral format. At startup, the JVM streams these objects from the neutral format into the heap, converting them on the fly into the layout required by whichever garbage collector is active—including ZGC. When the cache is opened, a background thread eagerly begins materializing objects, hiding most of the conversion cost from application threads.
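In practice, the command-line workflow looks roughly like this (a sketch: app.jar and com.example.App are placeholders; the AOT flags are the ones introduced with the AOT cache in JDK 24):

```shell
# 1. Training run: record an AOT configuration while the app exercises its startup path.
java -XX:AOTMode=record -XX:AOTConfiguration=app.aotconf -cp app.jar com.example.App

# 2. Assemble the AOT cache from the recorded configuration.
java -XX:AOTMode=create -XX:AOTConfiguration=app.aotconf -XX:AOTCache=app.aot -cp app.jar

# 3. With JEP 516, the same cache can finally be combined with ZGC.
java -XX:+UseZGC -XX:AOTCache=app.aot -cp app.jar com.example.App
```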

Concluding Reflection

This is a significant step forward and a perfect example of a deep-level JVM enhancement. It removes a difficult trade-off, allowing applications to benefit from both the fast startup provided by AOT caching and the ultra-low-latency operation of ZGC simultaneously. Developers get the best of both worlds without compromise.

3. Concurrency That Finally Cleans Up After Itself

A Relatable Problem

Anyone who has worked with ExecutorService for concurrent tasks knows the common frustrations detailed in JEP 525. It’s dangerously easy to create “thread leaks,” where a subtask continues running in the background even after the main task has failed or been cancelled. Propagating cancellation when one part of a complex concurrent operation fails is notoriously difficult and error-prone.

The Paradigm Shift

Structured Concurrency offers a new model that solves this by tying the lifecycle of concurrent subtasks to a clear, lexical code block. Its core principle is simple yet powerful:

If a task splits into concurrent subtasks then they all return to the same place, namely the task’s code block.

The Benefits

This principle enables a more reliable and understandable approach to concurrency. In practice, it delivers two key benefits automatically:

  • Error handling with short-circuiting: If one subtask fails (throws an exception), all other subtasks forked within the same scope are automatically cancelled.
  • Cancellation propagation: If the main task is cancelled (e.g., its thread is interrupted), the cancellation is automatically propagated to all of its subtasks.
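Both behaviors fall out of a single try-with-resources block. A minimal sketch using the preview API from JEP 525 (requires --enable-preview; findUser, fetchOrder, and Response are hypothetical stand-ins for real work):

```java
import java.util.concurrent.StructuredTaskScope;
import java.util.concurrent.StructuredTaskScope.Subtask;

public class OrderHandler {
    record Response(String user, int order) {}

    // Hypothetical subtasks standing in for real I/O.
    static String findUser()   { return "alice"; }
    static int    fetchOrder() { return 42; }

    static Response handle() throws InterruptedException {
        try (var scope = StructuredTaskScope.open()) {
            Subtask<String>  user  = scope.fork(OrderHandler::findUser);
            Subtask<Integer> order = scope.fork(OrderHandler::fetchOrder);
            scope.join(); // waits for both; if one fails, the other is cancelled and join() throws
            return new Response(user.get(), order.get());
        } // leaving the block guarantees both subtasks have finished: no thread leaks
    }
}
```

If fetchOrder throws, findUser is cancelled automatically and the failure surfaces at join(); nothing outlives the try block.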

Concluding Reflection

Structured Concurrency isn’t just a new utility; it’s a fundamental shift that makes concurrent code as reliable and easy to reason about as traditional single-threaded, structured code. By confining the lifetime of concurrent operations to a well-defined scope, it eliminates an entire class of common bugs like thread leaks and delayed cancellation, making robust concurrent programming far more accessible.

4. Laziness Meets Immutability: The Best of Both Worlds

The Developer’s Dilemma

JEP 526 highlights another classic trade-off. Using final fields provides the safety of immutability but forces eager initialization, which can slow down application startup if the initialization is expensive. The alternative—using mutable, non-final fields for lazy initialization—is flexible but introduces risks in multi-threaded code and prevents the JVM from applying performance optimizations that rely on immutability.

The Solution

The LazyConstant API offers an elegant solution to this problem. It introduces the concept of “deferred immutability”—an object that is initialized only when its value is first requested. It gives you the best of both worlds: lazy initialization and true immutability.

The Magic

Here’s how it works: you create a LazyConstant with a function that computes its value. That function runs the first time .get() is called, and the API guarantees it runs only once, even under concurrent access from multiple threads. Crucially, once the value is initialized, the JVM can treat it as a true constant and apply performance optimizations like constant folding, just as it would for a traditional final field.
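A small sketch of what this looks like, assuming the preview API shape described in JEP 526 (LazyConstant.of taking a supplier); the OrderService class and the Logger are illustrative:

```java
import java.util.logging.Logger;

class OrderService {
    // The supplier runs at most once, on the first get(), even under concurrent access;
    // afterwards the JVM may constant-fold reads just like a regular final field.
    private static final LazyConstant<Logger> LOGGER =
        LazyConstant.of(() -> Logger.getLogger(OrderService.class.getName()));

    void process() {
        LOGGER.get().info("processing order"); // first call initializes; later calls see a constant
    }
}
```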

The Power of Aggregation

The power of this concept extends beyond single objects. JEP 526 also introduces List.ofLazy(...) and Map.ofLazy(...), allowing developers to create collections whose elements are initialized on demand. Consider an application that needs a pool of OrderController objects to handle concurrent requests. Instead of eagerly creating the entire pool at startup, you can use a lazy list:

static final List<OrderController> ORDERS = List.ofLazy(POOL_SIZE, i -> new OrderController());

Here, each element in the list is a lazy constant. An OrderController is only instantiated the first time its specific index in the list is accessed. This enables on-demand initialization of collection elements, providing a powerful and efficient pattern for managing resources like connection pools or worker object pools without impacting startup time.

Concluding Reflection

This is a highly practical feature that solves a common and frustrating design problem. It gives developers the flexibility of lazy initialization for faster startup times without forcing them to sacrifice the safety and performance benefits of immutability.

5. G1 Gets a Speed Boost… By Doubling Down

The Counter-Intuitive Hook

It may sound paradoxical, but Java’s default garbage collector, G1, is getting a significant throughput boost by adding a second major data structure to its internals.

The Background

As described in JEP 522, G1 uses a data structure called the “card table” to keep track of object references that cross between different memory regions. When your application code modifies an object field to point to another object, a small piece of injected code called a “write barrier” updates this card table.

The Bottleneck

The problem was that the application threads (which update the card table) and the GC’s refinement threads (which process the card table to prepare for collection) had to synchronize their access to it. This coordination created overhead that could slow down the application.

The Solution

The fix is both simple and elegant: introduce a second card table. With two tables, G1 can let the application threads write to one table without any synchronization, while the refinement threads safely process the other. When needed, G1 atomically swaps the two tables, reversing their roles. This design largely eliminates the need for synchronization between the two kinds of threads.
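The mechanism is essentially double-buffering. As a rough analogy in plain Java (not HotSpot code; a concurrent queue stands in for a card table):

```java
import java.util.Queue;
import java.util.concurrent.ConcurrentLinkedQueue;
import java.util.concurrent.atomic.AtomicReference;

// Conceptual analogy only: writers always use the active "table";
// the refiner atomically swaps in a fresh one and processes the old one.
class DoubleBuffer {
    private final AtomicReference<Queue<Integer>> active =
        new AtomicReference<>(new ConcurrentLinkedQueue<>());

    void write(int card) {            // application thread: no coordination with refinement
        active.get().add(card);
    }

    Queue<Integer> swapAndDrain() {   // refinement thread: take the full table, hand out a fresh one
        return active.getAndSet(new ConcurrentLinkedQueue<>());
    }
}
```

In the real JVM the swap also lets G1 use simpler write barriers, which is where the extra throughput comes from.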

The Impressive Results

The performance gains observed in the JEP are substantial: 5–15% throughput improvements in applications that heavily modify object fields. Even applications with fewer modifications see gains of up to 5% thanks to simpler, faster write barriers.

Concluding Reflection

This is a fantastic example of a clever, low-level JVM optimization that will provide a “free” performance boost for many applications. No code changes are required; the improvement comes simply by upgrading the JDK.


Conclusion

The changes coming in JDK 26 are connected by a powerful, underlying theme: a focus on deep refinements to Java’s core. By strengthening the guarantees of final, eliminating performance trade-offs, making concurrency safer, and providing new tools for practical immutability, this release fortifies the foundations of safety, performance, and developer experience that have made Java so enduring.

These updates show a willingness to re-examine even the most fundamental parts of the platform to make it better. It leaves us with an exciting question: What long-standing assumption about Java do you think should be challenged next?