A deep look at some of the new features in Spring Boot 3.2.
Recently, the Spring Boot team announced the release of Spring Boot 3.2, and we are excited to walk through some of the most interesting features shipping in this version.
This release introduces numerous new features and enhancements. Check the release notes page to view the complete list.
1. Support for Virtual Threads (Project Loom)
Project Loom aims to provide lightweight concurrency and new programming models on the Java platform by exploring and delivering Java VM features and APIs.
Virtual Threads, the most important part of this project, were finally delivered in Java 21, although two other important parts, Scoped Values and Structured Concurrency, are not yet final. There are plenty of articles about Project Loom and Virtual Threads in Java 21, so we will not repeat them here.
The most important advantage of Virtual Threads is that they improve the scalability of Java applications by introducing a lightweight threading model, while keeping backward compatibility and removing much of the complexity of asynchronous programming without sacrificing performance.
Spring Boot has started to add support for this feature in version 3.2, and the team has decided to roll it out gradually, so there is still room for more. To use it in your Spring Boot application, you need JDK 21 and the spring.threads.virtual.enabled configuration property enabled.
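For example, in the application.properties file:
spring.threads.virtual.enabled=true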
We can group the effects of enabling Virtual Threads in a Spring Boot application into three categories:
1- The Web MVC stack: servlet containers (Tomcat and Jetty)
According to the Spring Boot 3.2 release notes:
When virtual threads are enabled, Tomcat and Jetty will use virtual threads for request processing. This means that your application code that is handling a web request, such as a method in a controller, will run on a virtual thread.
What does this mean? Thanks to the @ConditionalOnThreading annotation introduced in Spring Boot 3.2, when virtual threads are enabled in the configuration, the applicationTaskExecutor bean, which is responsible for executing async tasks and is of type SimpleAsyncTaskExecutor, is configured to use virtual threads under the hood. In other words, a lightweight virtual thread is created (instead of a heavy platform thread) for each task, for example, an HTTP request handled by a controller, which leads to better scalability.
To make this clear, take a look at this simple REST controller:
@RestController
public class TestController {

    @GetMapping("/test")
    public String currentThread() {
        return Thread.currentThread().toString();
    }
}
We return the current thread information using Thread.currentThread(). If we do not enable virtual threads, we get this result:
Thread[#50,http-nio-8080-exec-1,5,main]
By enabling the virtual threads, we will get the following result:
VirtualThread[#53,tomcat-handler-0]/runnable@ForkJoinPool-1-worker-1
2- The WebFlux stack: blocking execution
Spring WebFlux supports blocking execution, and in this case it uses the applicationTaskExecutor bean (similar to Web MVC request handling). So when virtual threads are enabled in the configuration, this AsyncTaskExecutor bean is configured to run each blocking execution on a virtual thread under the hood.
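For example, here is a minimal sketch of what this looks like (the controller and endpoint names are illustrative): a handler method with a blocking, non-reactive signature is handed off to the applicationTaskExecutor, so with virtual threads enabled it runs on a virtual thread instead of blocking an event-loop thread.

@RestController
public class BlockingController {

    // Blocking (non-reactive) return type: WebFlux executes this on the
    // applicationTaskExecutor, one virtual thread per call when enabled
    @GetMapping("/blocking")
    public String blocking() throws InterruptedException {
        Thread.sleep(1000); // simulated blocking work
        return Thread.currentThread().toString();
    }
}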
3- Technology-specific integrations
If you enable virtual threads in your Spring Boot 3.2 project and your application integrates with RabbitMQ, Kafka, Redis, or Apache Pulsar, you should be aware of the following impacts, especially when upgrading an existing project to Spring Boot 3.2:
- A virtual thread executor is auto-configured for the RabbitMQ listener.
- A virtual thread executor is auto-configured for the Kafka listener.
- Spring Data Redis’ ClusterCommandExecutor will use virtual threads.
- Spring for Apache Pulsar will use a VirtualThreadTaskExecutor for the auto-configured ConcurrentPulsarListenerContainerFactory and DefaultPulsarReaderContainerFactory.
Note: Before enabling Virtual Threads, I highly recommend reading about their caveats, such as pinned virtual threads.
2. JdbcClient and RestClient
The Spring Framework offers comprehensive tools for working with SQL databases and calling remote REST APIs.
To develop a non-blocking reactive application with Spring WebFlux, we have the R2DBC DatabaseClient for SQL database access and WebClient for calling remote REST APIs. On the other hand, to develop an application with Spring Web MVC on top of the Servlet API, we have traditionally used JdbcTemplate for SQL database access and RestTemplate for remote REST API calls.
Why did we need another pair of tools for SQL database access and remote REST API calls?
The main reason is to modernize the tooling for the Spring Web MVC (blocking) stack. DatabaseClient and WebClient both offer fluent, functional-style APIs and are very pleasant to work with. With the new generation of tools (JdbcClient and RestClient), we now have the same kind of fluent, functional-style APIs for SQL database access and remote REST API calls in the Web MVC stack. In fact, JdbcClient and RestClient are two modern alternatives to JdbcTemplate and RestTemplate.
JdbcClient vs JdbcTemplate
In addition to the above, one advantage of JdbcClient over JdbcTemplate is that we usually do not need to write a RowMapper; a row mapper is created dynamically. Let’s look at the following example.
@Repository
@Transactional
public class EmployeeRepository {

    private final JdbcClient jdbcClient;

    public EmployeeRepository(JdbcClient jdbcClient) {
        this.jdbcClient = jdbcClient;
    }

    public Optional<Employee> findById(Long id) {
        return jdbcClient.sql("SELECT id, full_name FROM employee WHERE id = :id")
                .param("id", id)
                .query(Employee.class)
                .optional();
    }
}
The only time you need to write a RowMapper is when you want to have more control over data mapping.
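For example, here is a minimal sketch of passing an explicit RowMapper lambda to the same query (assuming Employee is a record with id and fullName components):

Optional<Employee> employee = jdbcClient
        .sql("SELECT id, full_name FROM employee WHERE id = :id")
        .param("id", id)
        // Explicit row mapping instead of the dynamically created mapper
        .query((rs, rowNum) -> new Employee(rs.getLong("id"), rs.getString("full_name")))
        .optional();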
In fact, JdbcClient is a wrapper over JdbcTemplate and NamedParameterJdbcTemplate. We can still use those lower-level templates directly for complex JDBC operations like batch inserts or stored procedure calls.
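For instance, a minimal sketch of a batch insert that still goes through NamedParameterJdbcTemplate (assuming an injected namedParameterJdbcTemplate field and the same Employee record as above):

public int[] saveAll(List<Employee> employees) {
    // One parameter source per row; the template sends them as a single batch
    MapSqlParameterSource[] batch = employees.stream()
            .map(e -> new MapSqlParameterSource()
                    .addValue("id", e.id())
                    .addValue("fullName", e.fullName()))
            .toArray(MapSqlParameterSource[]::new);
    return namedParameterJdbcTemplate.batchUpdate(
            "INSERT INTO employee (id, full_name) VALUES (:id, :fullName)", batch);
}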
RestClient vs RestTemplate
As you know, RestTemplate, for a long time the only tool in the Web MVC stack for calling remote REST APIs, is now in maintenance mode. The Spring team introduced WebClient for the WebFlux stack in Spring Framework 5, and we can use it in the Web MVC stack as well (by calling its block() operation to make the call synchronous).
Spring Framework 6.1 introduces RestClient, which provides a fluent, functional-style API similar to WebClient, but for the Web MVC (blocking) stack. So if we need a client with a fluent API to call remote REST APIs, we can use RestClient instead of WebClient: the API is similar, but it is blocking rather than reactive.
Let’s see an example that uses RestClient to call a REST API to find an employee by ID:
@Service
public class EmployeeService {

    private final RestClient restClient;

    public EmployeeService() {
        this.restClient = RestClient.builder()
                .baseUrl("https://employee.em")
                .build();
    }

    public Employee findById(Long id) {
        return restClient.get()
                .uri("/employees/{id}", id)
                .retrieve()
                .body(Employee.class);
    }
}
One of the significant advantages of RestClient is that it supports declarative HTTP interfaces, similar to WebClient, which lets us reduce boilerplate code and define our client as a Java interface:
public interface EmployeeClient {

    @GetExchange("/employees/{id}")
    Employee findById(@PathVariable("id") Long id);
}
Also, we need to create a proxy bean for the client interface:
@Bean
EmployeeClient employeeClient() {
    RestClient client = RestClient.create("https://employee.em");
    HttpServiceProxyFactory factory = HttpServiceProxyFactory
            .builderFor(RestClientAdapter.create(client))
            .build();
    return factory.createClient(EmployeeClient.class);
}
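Once the proxy bean exists, the interface can be injected like any other bean. A minimal sketch (the EmployeeLookupService name is hypothetical):

@Service
public class EmployeeLookupService {

    private final EmployeeClient employeeClient;

    public EmployeeLookupService(EmployeeClient employeeClient) {
        this.employeeClient = employeeClient;
    }

    public Employee find(Long id) {
        // Executes GET /employees/{id} through the underlying RestClient
        return employeeClient.findById(id);
    }
}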
3. Service Connection Support for ActiveMQ
Service Connection is a concept introduced in Spring Boot 3.1 with the goal of improving the integration between Spring Boot and Testcontainers, both in integration tests and at development time.
A Service Connection needs a ConnectionDetails to connect to a remote service (e.g., Kafka, ActiveMQ, PostgreSQL, and so on). The ContainerConnectionDetailsFactory class is responsible for creating this ConnectionDetails based on a Container subclass (Testcontainers) or the Docker image name. Spring Boot 3.1 provides about 15 ConnectionDetails implementations out of the box in the spring-boot-testcontainers library (for Cassandra, Couchbase, Kafka, PostgreSQL, and more). Now, Spring Boot 3.2 adds ConnectionDetails support for ActiveMQ.
Service connection in integration tests
Before Service Connection was introduced, to write an integration test using Testcontainers in Spring Boot, we needed to configure the application with @DynamicPropertySource to connect to the service running in the container:
@SpringBootTest
@Testcontainers
class SampleIntegrationTests {

    @Container
    static GenericContainer<?> activemq = new GenericContainer<>("symptoma/activemq");

    @DynamicPropertySource
    static void activemqProperties(DynamicPropertyRegistry registry) {
        registry.add("spring.activemq.broker-url",
                () -> "tcp://%s:%d".formatted(activemq.getHost(), activemq.getMappedPort(61616)));
    }
}
But with the @ServiceConnection annotation, we no longer need this configuration. In this case, the ActiveMQConnectionDetails bean is created by the ContainerConnectionDetailsFactory class.
@SpringBootTest
@Testcontainers
class TestIntegrationTests {

    @Container
    @ServiceConnection
    static GenericContainer<?> activemq = new GenericContainer<>("symptoma/activemq");
}
Service Connection at development time
Using Service Connection and Testcontainers at development time is a lesser-known feature provided by Spring Boot (since version 3.1).
We can use Testcontainers at development time to run our application's dependencies as containers next to the application. With this integration between Testcontainers and Spring Boot, when we run our Spring Boot application, Spring Boot first starts the required containers using Testcontainers and then builds the application context.
Spring Boot 3.1 provides this integration with Testcontainers at development time in two ways:
- Declarative: by defining service dependencies in a docker-compose file in the root of the Spring Boot application. Spring Boot implements this feature using Service Connections under the hood.
- Programmatic: by defining the service dependencies as beans annotated with @ServiceConnection (see the sketch after this list).
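A minimal sketch of the programmatic approach, reusing the same ActiveMQ image as in the test above (the class names MyApplication, TestMyApplication, and DevTimeContainersConfiguration are illustrative): the container bean lives in the test sources, and a launcher class starts the application together with it.

@TestConfiguration(proxyBeanMethods = false)
public class DevTimeContainersConfiguration {

    @Bean
    @ServiceConnection
    GenericContainer<?> activemqContainer() {
        // Started by Spring Boot before the application context is built
        return new GenericContainer<>("symptoma/activemq");
    }
}

public class TestMyApplication {

    public static void main(String[] args) {
        SpringApplication.from(MyApplication::main)
                .with(DevTimeContainersConfiguration.class)
                .run(args);
    }
}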
I wrote two separate articles and described both approaches in detail:
Declarative approach:
Programmatic approach:
4. Observability improvements
We have seen significant changes and improvements in Spring Boot observability from version 3.0 onward, and this version (Spring Boot 3.2) is no exception.
The new observability foundation in Spring Boot 3 is based on Micrometer for recording metrics and Micrometer Tracing (which replaced Spring Cloud Sleuth) for shipping traces via OpenZipkin Brave or OpenTelemetry.
We have a lot of new features and improvements in Spring Boot 3.2 for observability. We will review some of them below.
Using Micrometer’s annotations
If you have the Spring AOP library on the classpath (spring-boot-starter-aop), you can now use Micrometer’s annotations (@Timed, @Counted, @NewSpan, @ContinueSpan, and @Observed) to define observations declaratively, instead of defining custom observations programmatically using the Observation and ObservationRegistry classes.
@Service
public class TestService {

    private final ObservationRegistry observationRegistry;

    public TestService(ObservationRegistry observationRegistry) {
        this.observationRegistry = observationRegistry;
    }

    public void doSomething() {
        Observation.createNotStarted("doSomething", this.observationRegistry)
                .lowCardinalityKeyValue("locale", "en-US")
                .observe(() -> {
                    // Write business logic here
                });
    }
}
Now we can use the @Observed annotation instead:
@Service
public class TestService {

    @Observed(name = "doSomething", lowCardinalityKeyValues = {"locale", "en-US"})
    public void doSomething() {
        // Write business logic here
    }
}
Observations starting with a prefix can now be disabled through properties
Disabling observations you do not need to listen to is much easier in Spring Boot 3.2. This is usually needed when an external library produces a lot of noise. We can now disable observations starting with a specific prefix through configuration. For example, to stop Spring Security from reporting observations, we can set this configuration:
management.observations.enable.spring.security=false
Key/values can be applied to all observations
The management.metrics.tags.* property is deprecated, and we can use management.observations.key-values.* instead. This new property is useful when you want to automatically add a low-cardinality key/value to all observations. For example, to add the key region with the value us-west to all observations, we can use the following configuration:
management.observations.key-values.region=us-west
@Scheduled methods are now instrumented for observability
There was an open ticket in the Spring Framework GitHub repository to support observability instrumentation for @Scheduled-annotated methods and report the relevant metrics and traces. Spring Boot 3.2 now supports this feature.
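For example, assuming @EnableScheduling is set on a configuration class, a job like the following (the ReportJob class is hypothetical) is now observed automatically, so each run produces timing metrics and a trace span without any extra code:

@Component
public class ReportJob {

    @Scheduled(fixedRate = 60_000)
    public void generateReport() {
        // Business logic here; every execution is instrumented by the framework
    }
}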
Observability infrastructure isn’t completely disabled during integration tests
Before Spring Boot 3.2, metrics and tracing were disabled by default in integration tests: the MeterRegistry was replaced by a SimpleMeterRegistry (a simple in-memory implementation), and the Tracer was replaced with a no-op implementation. Other beans annotated with @ConditionalOnEnabledTracing were also not created when running an integration test.
In Spring Boot 3.2, only the smallest possible set of beans is disabled, so that no spans are sent to backends. You need to consider the following:
- If you have custom Brave SpanHandler or OpenTelemetry SpanExporter beans, make sure to annotate them with @ConditionalOnEnabledTracing so that they won’t be created when running integration tests with observability switched off (see the sketch after this list).
- If you want to run your integration tests with observability enabled, you can use the @AutoConfigureObservability annotation on the test class.
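For example, a minimal sketch of the first point, assuming Brave is the tracing bridge in use (the handler itself is illustrative and only logs finished spans):

@Configuration(proxyBeanMethods = false)
public class TracingConfiguration {

    @Bean
    @ConditionalOnEnabledTracing // not created when tracing is switched off, e.g. in integration tests
    SpanHandler loggingSpanHandler() {
        return new SpanHandler() {
            @Override
            public boolean end(TraceContext context, MutableSpan span, Cause cause) {
                System.out.println("finished span: " + span.name());
                return true; // keep passing the span to the next handler
            }
        };
    }
}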
Test and observe Spring Boot 3.2’s new observability features using Digma
Let’s see some of these features and improvements in Spring Boot 3.2 observability in practice using Digma. Digma is an IDE Plugin that uncovers risky code, bottlenecks, and query issues in the darkest reaches of your code. Digma collects code runtime data behind the scenes using OpenTelemetry.
First, we must install the Digma Continuous Feedback IntelliJ plugin in our IDE and make sure Docker is installed.
The Digma JetBrains plugin has many features, which you can read about at this link.
Now, we need to get Digma up and running. Watch this video to see how to do it:
After setting up the Digma plugin and Docker, you can clone this sample project from GitHub, open it in IntelliJ IDEA, and then continue.
If you open the Digma window in IntelliJ, you will see that no data has been collected yet.
Digma window in IntelliJ
There is another important window in the Digma IntelliJ plugin called the Observability window. As you can see, there is no recent activity in that window either:
Digma Observability window in IntelliJ
Let’s start with this new feature in Spring Boot 3.2: key/values can be applied to all observations. In our sample Spring Boot 3.2 application, we add this configuration to the application.properties (or .yml) file:
management.observations.key-values.region=my-local-mac
Now run the project in IntelliJ IDEA and start observing the application in the Digma Observability window. Then hit the test API using the curl command:
curl -s http://localhost:8080/test
After a few seconds, you should see that the data has been collected by Digma and shown in both the Digma and Observability windows.
Recent activities Digma Observability window in IntelliJ
Now we want to check that the region tag is present in the generated trace, so we click the purple Trace button on the right, and a new tab with trace information opens. Open the Tags section to see all the tags for a span:
Digma tracing information tab in IntelliJ
Also, you can see a lot of important information and insights in the Digma window:
Assets tab in Digma window in IntelliJ
Insights in Digma window in IntelliJ
The last thing we want to observe using Digma is the Using Micrometer’s annotations feature. We annotate the doSomething method of the TestService bean with the @Observed annotation and then call it in a CommandLineRunner. After running the application, we expect to see metrics and trace information about that method in the Digma windows. As you can see below, in the Digma window, the method call information appears in the Other section:
Assets in Digma window in IntelliJ
Also, we can see the call and trace in the Observability window:
Recent activities Digma Observability window in IntelliJ
For more information about how to collect important data about your code in the dev and test environments using Digma, you can read this article:
https://digma.ai/blog/couch-to-fully-observed-code-with-spring-boot-3-2-micrometer-tracing-and-digma/
Final Thoughts
Spring Boot 3.2 is another important step for the new generation of Spring Boot 3.x, continuing to leverage modern Java 17+ features, better observability, and improved performance.
You can read the Spring Boot 3.2 Release Notes for the complete list of changes and features.
Install Digma for Free!