Recently, the Spring Boot team announced the release of Spring Boot 3.2, and we are excited to share more of its notable features. In the first part of this article series, we covered some of them; in this second part, we will look at several others in more detail.
1. Support for Project CRaC (Coordinated Restore at Checkpoint)
While much of the Java community's attention is focused on GraalVM as a way to reduce the startup time (and, of course, the resource consumption) of Java programs, the CRaC project has quietly reached the production stage, and the Spring team has provided initial support for it as one of the deployment options for Spring Boot applications.
What is the CRaC project?
The CRaC project is an official OpenJDK project based on the Linux CRIU utility, which can:
“freeze a running application (or part of it) and checkpoint it to a hard drive as a collection of files. You can then use the files to restore and run the application from the point it was frozen at.”
Similar to CRIU, Project CRaC defines a new API that allows us to checkpoint and restore a Java application on the HotSpot JVM. We can run our Java application on a CRaC-enabled JDK, such as Azul Zulu JDK, let the application warm up, and then take a checkpoint of the running application. The checkpoint contains a snapshot image of the running Java application that is stored on disk and can be used to restore the application later. This helps us reduce the boot time of our Java applications and also skip their warmup phase. Some benchmarks have shown that CRaC can improve the startup time of a Java application by a factor of 10.
How does Spring Framework support Project CRaC?
The most important requirement for the Spring Framework to support Project CRaC is managing the lifecycle of beans and resources so that checkpoint and restore happen smoothly. Fortunately, Spring Boot 3.2 supports Project CRaC out of the box, and the only things we need are the following:
- A CRaC-enabled version of the JDK (for now, the CRaC project only supports Linux).
- The `org.crac:crac` library in our project dependencies.
- The required JVM parameters (`-XX:CRaCCheckpointTo=PATH` or `-XX:CRaCRestoreFrom=PATH`).
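For the second requirement, the dependency can be added like this in Maven (the version below is illustrative; check Maven Central for the latest one):

```xml
<dependency>
    <groupId>org.crac</groupId>
    <artifactId>crac</artifactId>
    <version>1.4.0</version>
</dependency>
```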
How can we use the checkpoint and restore feature in Spring Boot?
Spring Boot provides two approaches to support the checkpoint and restore:
- Automatic: Simple and fast to apply, but it only improves the startup time of a Spring application.
- On-demand: Powerful and flexible, but it needs more configuration and steps. In addition to improving the startup time of a Spring application, we can take snapshots of a warmed-up application.
The automatic approach is the simplest way to use the checkpoint and restore feature in a Spring Boot application. To use it, you only need to set the `-Dspring.context.checkpoint=onRefresh` JVM system property and the `-XX:CRaCCheckpointTo=PATH` command-line parameter:

```shell
java -Dspring.context.checkpoint=onRefresh -XX:CRaCCheckpointTo=checkpoint_folder -jar your-spring-app.jar
```
In this way, a checkpoint is created and stored in the specified folder at startup, during the `onRefresh` phase, when the application context is started. The next time, by setting the `-XX:CRaCRestoreFrom=PATH` option, you can restore the application much faster from the checkpoint image that was created at startup:

```shell
java -XX:CRaCRestoreFrom=checkpoint_folder
```
The on-demand approach is more powerful and flexible than the automatic approach and lets us create checkpoints from a running application. This means we can create checkpoints not only at startup but also while the application is running, for example, after warming it up. To use the on-demand approach, we do not set the `-Dspring.context.checkpoint=onRefresh` property when we run the application:

```shell
java -XX:CRaCCheckpointTo=checkpoint_folder -jar your-spring-app.jar
```
But we need one more step to create the checkpoint: running the `jcmd` command while the application started by the previous command is up and running:

```shell
jcmd your-spring-app.jar JDK.checkpoint
```
After running the `jcmd` command, a checkpoint is created and stored in the specified folder, and, as in the automatic approach, the application is closed.
The advantage of the on-demand approach is that we can create a checkpoint once our application has warmed up, so in addition to reducing the startup time, we restore the application at its peak performance.
Checkpoint and Restore Considerations
- CRaC checkpoint files may contain sensitive data accessed by the JVM, requiring careful security assessment.
- The on-demand approach requires bean lifecycle management to stop and start resources like files, sockets, and active threads gracefully.
- To integrate checkpoint/restore functionality with other libraries, we must create an `org.crac.Resource` implementation and register the corresponding instance.
- To trigger the on-demand checkpoint, besides the `jcmd` command, we can use other mechanisms such as API calls or an HTTP endpoint.
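The `org.crac.Resource` contract mentioned above boils down to two lifecycle callbacks: one invoked before the checkpoint (to close OS-level resources like sockets and files) and one after restore (to reopen them). The sketch below mirrors the shape of that API with a local stand-in interface so it runs without the `org.crac` dependency; in a real project you would implement `org.crac.Resource` (whose callbacks also receive a `Context` argument) and register the instance with the global context.

```java
import java.util.ArrayList;
import java.util.List;

public class CracResourceSketch {

    // Stand-in for org.crac.Resource: the real interface has the same two callbacks.
    interface Resource {
        void beforeCheckpoint() throws Exception; // release sockets, files, pools
        void afterRestore() throws Exception;     // re-acquire them after restore
    }

    // A hypothetical resource wrapping a connection-pool-like handle.
    static class ConnectionPoolResource implements Resource {
        final List<String> events = new ArrayList<>();
        boolean open = true;

        @Override
        public void beforeCheckpoint() {
            open = false;             // close OS-level resources before the snapshot
            events.add("closed");
        }

        @Override
        public void afterRestore() {
            open = true;              // reopen them when the image is restored
            events.add("reopened");
        }
    }

    public static void main(String[] args) throws Exception {
        ConnectionPoolResource pool = new ConnectionPoolResource();
        // With the real library, the JVM drives these calls around the actual
        // checkpoint/restore after Core.getGlobalContext().register(pool).
        pool.beforeCheckpoint();
        pool.afterRestore();
        System.out.println(pool.events); // [closed, reopened]
    }
}
```

Spring Boot registers such resources for its own beans automatically; a custom `Resource` is only needed for libraries that are not yet CRaC-aware.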
2. Logging Correlation IDs
If we have one of the Micrometer tracing bridge/facade libraries (like `micrometer-tracing-bridge-otel`, `micrometer-tracing-bridge-brave`, or others) in our project dependencies, Spring Boot 3.2 will now automatically, without any further configuration, add a correlation ID to the logs.
The correlation ID comprises the `traceId` and `spanId` in the following format, where the first part is the `traceId` and the second part is the `spanId`:

```
[d0b2b69642ccfb566ea4f5b52d3a587b-1b03064c921c0f63] // [traceId-spanId]
```
A sample log with the correlation ID is as follows:
```
2024-01-20T17:11:53.714+01:00  INFO 30176 --- [omcat-handler-0] [d0b2b69642ccfb566ea4f5b52d3a587b-1b03064c921c0f63] com.saeed.demo32.TestController : Calling current thread controller...
2024-01-20T17:11:53.714+01:00  INFO 30176 --- [omcat-handler-0] [d0b2b69642ccfb566ea4f5b52d3a587b-3d5ea723aa5cb6c8] com.saeed.demo32.TestService : Starting business logic...
```
To change the correlation ID format, we can set our desired pattern with the `logging.pattern.correlation` config:

```properties
logging.pattern.correlation=[${spring.application.name:},%X{traceId:-},%X{spanId:-}]
```
By setting `logging.include-application-name` to `false`, we can prevent the application name from being repeated in the log messages. We can also disable this feature entirely by setting the `management.tracing.enabled` config to `false`.
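Taken together, these toggles might appear in `application.properties` roughly as follows (a sketch; values are illustrative):

```properties
# Custom correlation pattern: application name, traceId, spanId
logging.pattern.correlation=[${spring.application.name:},%X{traceId:-},%X{spanId:-}]
# Do not repeat the application name in log lines
logging.include-application-name=false
# Set to false to disable tracing (and with it, correlation IDs)
management.tracing.enabled=true
```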
3. SSL Bundle Reloading
SSL bundles are a compelling Spring Boot feature that lets us configure SSL trust material in a Spring Boot application for use by other components, such as web servers, or by connections like Kafka or RabbitMQ (support for the latter just added in Spring Boot 3.2). SSL bundles can be configured with either Java KeyStore files or PEM-encoded certificates.
Thanks to Spring Boot's auto-configuration, a bean of type `SslBundles` can be injected into the application to access all named bundles configured via the `spring.ssl.bundle` properties.
Spring Boot 3.2 brings reloading support for SSL bundles. This means SSL bundles can be reloaded at runtime when their configuration changes. To enable this feature, we need to set the `reload-on-update` config to `true`. The feature is implemented with a file watcher that checks the bundle directories for changes, then reloads the bundles and notifies the consuming component, such as a web server, about the reload. At the moment, the Tomcat and Netty web servers are the only components (consumers) compatible with this feature.
This feature makes the SSL certificate rotation much more manageable in Spring Boot applications.
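As a rough sketch, a PEM-based bundle with reloading enabled and consumed by the embedded web server could be configured like this (the bundle name `mybundle` and the certificate paths are illustrative):

```properties
# A named PEM bundle that reloads when its files change
spring.ssl.bundle.pem.mybundle.reload-on-update=true
spring.ssl.bundle.pem.mybundle.keystore.certificate=file:certs/server.crt
spring.ssl.bundle.pem.mybundle.keystore.private-key=file:certs/server.key

# Point the embedded web server at the bundle
server.ssl.bundle=mybundle
```

Rotating the certificate then amounts to replacing the files on disk; the file watcher picks up the change, and the server reloads the bundle without a restart.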
4. Docker Image Building
Spring Boot 3.2 brings several improvements to the Docker image building feature. As you may know, Spring Boot uses Paketo Buildpacks as the implementation of its Cloud Native Buildpacks builder to build a Docker image from a Spring Boot application without any `Dockerfile`!
I recommend reading my article about Cloud Native Buildpacks as a standard and Paketo Buildpacks as one of the best implementations of that standard.
Let's see what those improvements are:
- The default Cloud Native Buildpacks builder image (Paketo Buildpacks) was upgraded to Ubuntu 22.04 (`paketobuildpacks:builder-jammy-base`). If you are using the Gradle plugin, the builder will be `paketobuildpacks:builder-jammy-tiny`.
- Starting from this version, Spring Boot will use Docker CLI configuration files to determine the host address and other connection details to use by default when communicating with the Docker daemon.
- Bitbucket CI had an issue where volumes couldn't be accessed from CI pipelines. To fix this, it is now possible to configure the build and launch caches for Paketo Buildpacks to use bind mounts instead of named volumes.
- The temporary workspace used by Paketo Buildpacks can now be configured to use bind mounts or custom-named volumes.
- It is now possible to customize the security options of the Paketo Buildpacks builder container to support Docker environments where the default Linux security option `label=disable` is used.
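For example, with the Spring Boot Maven plugin, the build and launch caches can be pointed at bind mounts roughly like this (a sketch; the cache paths are illustrative, and the exact element names should be checked against the plugin documentation):

```xml
<plugin>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-maven-plugin</artifactId>
    <configuration>
        <image>
            <buildCache>
                <bind>
                    <source>/tmp/cache-${project.artifactId}.build</source>
                </bind>
            </buildCache>
            <launchCache>
                <bind>
                    <source>/tmp/cache-${project.artifactId}.launch</source>
                </bind>
            </launchCache>
        </image>
    </configuration>
</plugin>
```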
5. Service Connection Support for OpenTelemetry Collector
As I mentioned in part 1 of this article, Spring Boot 3.2 brings many improvements to its observability features. I have also described the Service Connection feature introduced in Spring Boot 3.1 in detail here.
Spring Boot 3.1 offered 15 `ConnectionDetails` via the `spring-boot-testcontainers` library for databases, messaging systems, and more (e.g., Cassandra, Couchbase, Kafka, and PostgreSQL). Spring Boot 3.2 introduces additional `ConnectionDetails`, including one for the OpenTelemetry Collector.
Before Spring Boot 3.2, to use the OpenTelemetry Collector container via Testcontainers in integration tests or at development time (read the previous part to learn more about the difference), we needed to use `@DynamicPropertySource` to connect our integration test (or application) to the running OpenTelemetry Collector container:

```java
@Container
static final GenericContainer<?> container =
        new GenericContainer<>("otel/opentelemetry-collector-contrib:latest")
                .withCommand("--config=/etc/collector-config.yml")
                .withCopyToContainer(MountableFile.forClasspathResource("collector-config.yml"),
                        "/etc/collector-config.yml")
                .withExposedPorts(4318);

@DynamicPropertySource
static void otlpProperties(DynamicPropertyRegistry registry) {
    registry.add("management.otlp.metrics.export.url",
            () -> "http://%s:%d/v1/metrics".formatted(container.getHost(), container.getMappedPort(4318)));
    registry.add("management.otlp.tracing.endpoint",
            () -> "http://%s:%d/v1/traces".formatted(container.getHost(), container.getMappedPort(4318)));
}
```
With the new `ConnectionDetails` for the OpenTelemetry Collector, when we use a Testcontainers `GenericContainer` with the `otel/opentelemetry-collector-contrib` Docker image, a `ConnectionDetails` will automatically provide the service connection:

```java
@Container
@ServiceConnection
static final GenericContainer<?> container =
        new GenericContainer<>("otel/opentelemetry-collector-contrib:latest")
                .withCommand("--config=/etc/collector-config.yml")
                .withCopyToContainer(MountableFile.forClasspathResource("collector-config.yml"),
                        "/etc/collector-config.yml")
                .withExposedPorts(4318, 9090);
```
Dev/Test Observability using Digma
There is no doubt that observability is a crucial component of modern applications. As you can read in the previous section, Spring Boot 3.2, by providing service connection support for the OpenTelemetry Collector, makes writing integration tests for observability-related concerns much more accessible. And, as I described earlier, by using the service connection for the OpenTelemetry Collector, developers can have a full-fledged observability infrastructure during development (Service Connection at development time).
Digma is another approach to having observability during development, but it lives in your IDE (as a plugin) and has several more advanced features that help us write performant code. Digma uses OpenTelemetry to automatically capture traces, logs, and metrics of your code when it runs locally. Digma then analyzes them to detect meaningful insights about the code: it looks for regressions, anomalies, code smells, and other patterns that are useful to know about and act on during development.
One of the significant advantages of Digma is that it allows developers to observe the metrics and traces quickly during the development inside the IDE without waiting for deployment in any environment.
Learn more about Digma:
- Digma: the referenced tool
- Digma: guide to setting it up for your application
- Couch to Fully-Observed Code with Spring Boot 3.2, Micrometer Tracing, and Digma
Final Thoughts
Spring Boot 3.2 includes many great new features and enhancements. I have tried to describe and expand on them in two parts. For the complete list of changes and features, read the Spring Boot 3.2 Release Notes.
The story is still ongoing: Spring Boot 3.3 will be released in the coming months (scheduled for May 23rd) with many new features.
Download Digma for free and start using it in your local environment now.