Java Real World Projects
A Pragmatic Guide for Building Modern Java Applications
Davi Vieira
www.bpbonline.com
First Edition 2025
ISBN: 978-93-65898-972
All Rights Reserved. No part of this publication may be reproduced, distributed or transmitted in any form
or by any means or stored in a database or retrieval system, without the prior written permission of the
publisher with the exception to the program listings which may be entered, stored and executed in a
computer system, but they can not be reproduced by the means of publication, photocopy, recording, or by
any electronic and mechanical means.
All trademarks referred to in the book are acknowledged as properties of their respective owners but BPB
Publications cannot guarantee the accuracy of this information.
www.bpbonline.com
About the Author
Davi Vieira is a software craftsman with a vested interest in the challenges large
enterprises face in software design, development, and architecture. He has over
ten years of experience constructing and maintaining complex, long-lasting, and
mission-critical systems using object-oriented languages. Davi values the good
lessons and the software development tradition left by others who came before
him. Inspired by such software tradition, he develops and evolves his ideas.
Davi started his career in technology by working as a Linux system
administrator in the web hosting industry. After learning much about server task
automation with shell scripting, he moved on to the banking industry, where he
fixed bugs in legacy Java systems. Working with such legacy systems enabled
Davi to face exciting challenges in a telecommunications organization, where he
played a crucial role in helping the company adopt cloud-native development
practices. Eager to learn and make a lasting impact, Davi works currently as a
tech lead for an enterprise software company.
Preface
After so many years since its first release, Java remains as relevant as ever by
powering the most critical applications in enterprises of all sizes. It is not
uncommon to see Java applications that have been running for ten, twenty, or more
years, which serves as a testament to Java’s robust and reliable nature. On the other
side of the coin, Java continues to be the language of choice for many new
projects that have the luxury of choosing from a set of high-quality frameworks
like Spring Boot, Quarkus, or Jakarta EE, to name a few, that foster innovation
and keep the Java language fresh to tackle the challenges of modern software
development in the age of cloud and artificial intelligence. There has never been
a better time to be part of such an exciting technological ecosystem, which offers
endless opportunities to impact other people’s lives through software.
Based on this landscape of opportunities and innovations, this book was written
for those who decide to face the complexities, ambiguities, and hardships that
may derive from any serious Java project. It is not meant to be a comprehensive
guide for the Java programming language; instead, it takes a pragmatic approach
in emphasizing, from the author’s perspective, the relevant Java features and
anything related to producing production-ready software.
Starting with exploring Java fundamentals, this book revisits core Java API
components used to efficiently handle data structures, files, exceptions, logs, and
other essential elements found in most enterprise Java applications. It also
examines how modern Java features such as sealed classes, pattern matching,
record patterns, and virtual threads can be used to create software systems that
extract the best of what Java can provide. This book presents techniques for
efficiently handling relational databases by tapping into the Java Database
Connectivity (JDBC) and Jakarta Persistence APIs (JPA), thereby providing
a solid Java data handling foundation. Still, in the context of Java fundamentals,
it explores how to increase overall code quality by employing unit and
integration tests.
After covering Java fundamentals, this book explores how reliable software
development frameworks such as Spring Boot, Quarkus, and Jakarta EE can be
used to develop applications based on developer-friendly and productivity-oriented
principles. These frameworks empower Java developers by enabling them to
use cutting-edge technology and industry standards that are the basis for most
critical back-end applications.
With an eye on the everyday challenge of keeping Java applications running
reliably in production, this book describes essential monitoring and observability
techniques that help Java developers better understand how well their
applications behave under different and quite often unexpected circumstances,
putting them in a proactive rather than reactive position when dealing with
bottlenecks, scalability issues, and anything else that can represent a risk to
application availability.
It concludes with an exploration of how different software architecture ideas,
such as domain-driven design (DDD), layered architecture, and hexagonal
architecture, can play crucial roles in developing change-tolerable and
maintainable applications that not only deliver what the customer wants but also
establish solid foundations that enable developers to gracefully introduce code
changes with reduced refactoring efforts.
Chapter 1: Revisiting the Java API - This chapter revisits essential core Java
APIs commonly seen in real-world projects. It starts by exploring the Collections
API’s data structures, showing the possible ways to handle data as objects in
Java systems. Considering how often files must be dealt with, this chapter shows
how to manipulate files using the NIO2. A closer examination of exceptions,
followed by the Logging API, provides a solid foundation for helpful error
handling and enhanced logging management. As most Java applications
somehow need to deal with date and time, the Date-Time API is also explored.
Closing the chapter, functional programming features such as streams and
lambdas are covered to show how to write more efficient and concise Java code.
Chapter 2: Exploring Modern Java Features - Java is constantly changing.
Therefore, it is essential to keep up to date with its new features. This chapter
looks into modern Java capabilities developers can leverage to build robust
applications with sophisticated language features. It starts by explaining how to
use sealed classes to increase inheritance control. It presents an intuitive way of
matching Java types with pattern matching and using record patterns to extract
data from matched types. Finally, it shows how to simplify the development of
concurrent applications with virtual threads.
Chapter 3: Handling Relational Databases with Java - The ability to
efficiently communicate with databases is a crucial characteristic of Java
applications requiring persistence. Based on this premise, this chapter explores
the Java Database Connectivity (JDBC) API, a fundamental Java component
for handling relational databases. To enable developers to handle database
entities as Java objects, it distills the main features of the Jakarta Persistence.
The chapter finishes by examining local development approaches with container-
based and in-memory databases for Java.
Chapter 4: Preventing Unexpected Behaviors with Tests - Automated tests
help to prevent code changes from breaking existing system behaviors. To help
developers with such an outcome, this chapter overviews two automated test
approaches: unit and integration testing. It explores the reliable and widely used
test framework JUnit 5. Finally, it describes how to use Testcontainers to
implement reliable integration tests that rely on real systems as test
dependencies.
Chapter 5: Building Production-Grade Systems with Spring Boot - Regarded
as one of the most widely used Java frameworks, Spring Boot has withstood the
test of time. So, this chapter explores the fundamental Spring components
present in Spring Boot. Such fundamental knowledge leads to an analysis of how
to bootstrap a new Spring Boot project and implement a CRUD application with
major Spring Boot features.
Chapter 6: Improving Developer Experience with Quarkus - In the age of
cloud, Quarkus has arisen as a cloud-first Java development framework with the
promise of delivering a developer-friendly framework that empowers developers
to create cloud-native applications based on the best industry standards and
technology. This chapter starts assessing the benefits Quarkus provides, shifting
quickly to an explanation that details how to kickstart a new Quarkus project. It
then shows how to use well-known Quarkus features to develop a CRUD
application, including support for native compilation.
Chapter 7: Building Enterprise Applications with Jakarta EE and
MicroProfile - Accumulating decades of changes and improvements, the Jakarta
EE (formerly Java EE and J2EE) framework still plays a major role in enterprise
software development. Relying on Jakarta EE, there is also MicroProfile, a lean
framework to develop cloud-native microservices. Focusing on these two
frameworks, this chapter starts with an overview of the Jakarta EE development
model and its specifications. Then, it jumps to hands-on practice by showing
how to start a new Jakarta EE project and develop an enterprise application.
Finally, it shows how to create microservices using MicroProfile.
Chapter 8: Running Your Application in Cloud-Native Environments - Any
serious Java developer must be able to make Java applications take full advantage
of cloud environments and perform well inside them. That is why this chapter
explores cloud technologies, starting with container technologies, including
Docker and Kubernetes. It explains how Java applications developed using
frameworks like Spring Boot, Quarkus, and Jakarta EE can be properly
dockerized to run in containers. Finally, it describes deploying such applications
into a Kubernetes cluster.
Chapter 9: Learning Monitoring and Observability Fundamentals -
Understanding how a Java application behaves in production while being
accessed by many users is critical to ensuring the business health of any
organization. Considering this concern, this chapter explores what monitoring
and observability mean and why they are crucial for production-grade Java
systems. It then shows how to implement distributed tracing with Spring Boot
and OpenTelemetry. Also, it explains how to handle logs using Elasticsearch,
Fluentd, and Kibana.
Chapter 10: Implementing Application Metrics with Micrometer - Metrics
are essential to answer questions about whether a Java application behaves as
expected. Micrometer is a key technology that enables developers to implement
metrics that answer those questions. This chapter explains how to use Micrometer
to provide metrics in a Spring Boot application.
Chapter 11: Creating Useful Dashboards with Prometheus and Grafana -
Visualizing important information regarding an application’s behavior through
dashboards can prevent or considerably speed up the resolution of incidents in
Java applications. Based on such concern, this chapter examines how to capture
application metrics using Prometheus. It then covers the integration between
Prometheus and Grafana, two essential monitoring tools. Finally, it shows how
to create helpful Grafana dashboards with metrics generated by a Java
application and use Alertmanager to trigger alerts based on such metrics.
Chapter 12: Solving problems with Domain-driven Design - Based on the
premise that the application code can serve as an accurate representation of a
problem domain, Domain-driven Design (DDD) proposes a development
approach that puts the problem domain as the driving factor that dictates the
system’s architecture, resulting in better maintainable software. Considering
such maintainability benefits, this chapter starts with a DDD introduction,
followed by an analysis of essential ideas like value objects, entities, and
specifications. The chapter closes by exploring how to test the domain model
produced by a DDD application.
Chapter 13: Fast Application Development with Layered Architecture -
Enterprises of all sorts rely on back-end Java applications to support their
business. The ability to deliver such applications quickly is fundamental in a
competitive environment. Layered architecture emerged organically among the
developer community due to its straightforward approach to organizing
application responsibilities into layers. In order to show the layered architecture
benefits, this chapter starts with an analysis of the major ideas that comprise
such architecture, followed by a closer look into applying layered architecture
concepts in the development of a data layer for handling database access, a
service layer for providing business rules, and an API layer for exposing system
behaviors.
Chapter 14: Building Applications with Hexagonal Architecture - The pace
at which technologies change in software systems has increased considerably
over the last years, creating challenges for those who want to tap into the latest
cutting-edge innovations to build the best software possible. However,
incorporating new technologies into existing working software may be
challenging. That is where hexagonal architecture comes in as a solution to build
change-tolerable applications that can receive significant technological changes
without major refactoring efforts. With such advantage in mind, this chapter
introduces the hexagonal architecture ideas, followed by hands-on guidance
explaining the development of a hexagonal system based on the domain,
application, and framework hexagons.
Code Bundle and Coloured Images
Please follow the link to download the
Code Bundle and the Coloured Images of the book:
https://rebrand.ly/bdae2b
The code bundle for the book is also hosted on GitHub at
https://github.com/bpbpublications/Java-Real-World-Projects. In case
there’s an update to the code, it will be updated on the existing GitHub
repository.
We have code bundles from our rich catalogue of books and videos available at
https://github.com/bpbpublications. Check them out!
Errata
We take immense pride in our work at BPB Publications and follow best
practices to ensure the accuracy of our content to provide with an indulging
reading experience to our subscribers. Our readers are our mirrors, and we use
their inputs to reflect and improve upon human errors, if any, that may have
occurred during the publishing processes involved. To let us maintain the quality
and help us reach out to any readers who might be having difficulties due to any
unforeseen errors, please write to us at :
[email protected]
Your support, suggestions and feedbacks are highly appreciated by the BPB
Publications’ Family.
Did you know that BPB offers eBook versions of every book published, with PDF and ePub files
available? You can upgrade to the eBook version at www.bpbonline.com and as a print book customer,
you are entitled to a discount on the eBook copy. Get in touch with us at :
[email protected] for more details.
At www.bpbonline.com, you can also read a collection of free technical articles, sign up for a range of
free newsletters, and receive exclusive discounts and offers on BPB books and eBooks.
Piracy
If you come across any illegal copies of our works in any form on the internet, we would be grateful if
you would provide us with the location address or website name. Please contact us at
[email protected] with a link to the material.
Reviews
Please leave a review. Once you have read and used this book, why not leave a review on the site that
you purchased it from? Potential readers can then see and use your unbiased opinion to make purchase
decisions. We at BPB can understand what you think about our products, and our authors can see your
feedback on their book. Thank you!
For more information about BPB, please visit www.bpbonline.com.
CHAPTER 1
Revisiting the Java API
Introduction
Java has been widely used in enterprises across many industries. One good
example of how Java helps boost developer productivity is how it deals with
memory management. While other programming languages may require
developers to specify how program memory will be allocated, Java takes this
memory allocation responsibility and lets the developers focus on the problem
domain they want to solve. Another thing that can boost developer productivity
even more is knowing how to use the Java API properly.
The Java API provides the language building blocks for application
development. It crosses domains such as data structure handling with the
Collections API, file manipulation with the NIO2 API, exception handling, and
more. Instead of reinventing the wheel by implementing your data structures or
algorithms, you can save a lot of time by tapping into what the Java API has to
offer.
So, revisiting the Java API while observing how it can be used to solve problems
commonly seen in software projects will give us a solid foundation to explore
further how Java can help us develop better software.
Structure
The chapter covers the following topics:
• Handling data structures with collections
• Using the NIO2 to manipulate files
• Error handling with exceptions
• Improving application maintenance with the Logging API
• Exploring the Date-Time APIs
• Functional programming with Streams and Lambdas
• Compiling and running the sample project
Objectives
By the end of this chapter, you will know how to use some of the most crucial
Java APIs. With such knowledge, you will be able to solve common problems
that may occur in your software project. Understanding the Java API is the
foundation for developing robust Java applications.
        System.out.println("Map size: " + mapOfMessages.size()); // Map size: 2

        mapOfMessages.forEach((id, message) -> {
            appendToContent(message);
            notifyProcessing(id);
        });
    }

    public static void appendToContent(Message message) {
        var content = message.getContent();
        var processedContent = content + " - processed";
        message.setContent(processedContent);
    }

    public static void notifyProcessing(Id id) {
        System.out.println("Message " + id + " processed.");
    }
}
Executing the above code will produce the following output:
Map size: 2
Message Id{id='MSG-2'} processed.
Message Id{id='MSG-1'} processed.
In the HandleMessage class, we create an empty HashMap, the most
common Map interface implementation. The Map<Id, Message> type
specifies that we have the Id class as the key and the Message class as the
value. Next, we add four map entries and remove one entry. Notice we add two
entries with the same id, "MSG-1". When we try to add the "MSG-1" entry for
the second time, the insertion is ignored because the map already contains an
entry with such a key. Then, we remove the entry "MSG-3" from the map.
Ignoring duplicates and removing an entry using the key is possible because of
our equals and hashCode implemented earlier in the Id class. Finally, we
print the map size, which is two.
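For context, the part of the listing that builds the map is not shown in this excerpt. A minimal sketch consistent with the description above could look like the following; the Id and Message constructors, the message contents, and the use of putIfAbsent (which matches the "insertion is ignored" behavior described) are assumptions rather than the book's exact code:

var mapOfMessages = new HashMap<Id, Message>();

// Four insertions; the second "MSG-1" entry is ignored because that key already exists
mapOfMessages.putIfAbsent(new Id("MSG-1"), new Message("First message"));
mapOfMessages.putIfAbsent(new Id("MSG-2"), new Message("Second message"));
mapOfMessages.putIfAbsent(new Id("MSG-1"), new Message("Duplicate message"));
mapOfMessages.putIfAbsent(new Id("MSG-3"), new Message("Third message"));

// Removal by key works because Id implements equals and hashCode
mapOfMessages.remove(new Id("MSG-3"));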
Proceeding with the code, we use forEach to iterate over the map entries. The
syntax we use here is a lambda expression, which we will explore later in this
chapter. The map processing is straightforward; we call
appendToContent(Message message) and append the " -
processed" to the original message, followed by a call to
notifyProcessing(Id id), where we use the Id to notify the message
was processed.
The previous example illustrates a typical pattern where we map data coming
from somewhere, perform some data processing, and finally take further action
after finishing with data processing. Instead of using a List or Set, we chose a
Map to map the Id to a Message, allowing us to conveniently use each object
for different purposes.
The Java Collections Framework is an extensive topic with much more to it than we
covered in this section. Going into every aspect of it is out of this book’s
scope, so this section focused on collection approaches commonly seen in Java
projects. Let us see now how we can efficiently handle files using Java.
Creating paths
A file system represents how files and directories are organized in an operating
system. Such representation follows a tree-based structure where we have a root
directory and other directories and files below it. We have two kinds of file
systems, one deriving from Windows and another from Unix (e.g., Linux or
macOS). On Windows, the root directory is usually C:, while on Linux, it is /.
How we separate paths can also change depending on the operating system. For
example, on Windows, we use the backslash \, while on Linux, it is the forward
slash /.
The following is how we can identify the possible root directories available for a
Java application:
System.out.println(FileSystems.getDefault()); // sun.nio.fs.LinuxFileSystem
FileSystems.getDefault().getRootDirectories().forEach(System.out::println); // "/"
Since the above code is executed on a Linux machine, the
FileSystems.getDefault() returns
sun.nio.fs.LinuxFileSystem. When checking for the possible root
directories, it gets only the /, which is the root directory in Linux systems.
The NIO.2 API provides the Path interface representing paths in a file system.
A path can be either a file or a directory. There is also the concept of symbolic
links, which resolve to a file or directory, so Java allows us to create Path
objects using symbolic links.
The following is how we can create a Path object:
Path path = Path.of("/path/example");
We use the factory method Path.of with a String representing the path we
want to create. One important thing to understand here is that the path String
you pass to the Path.of may not exist. So, creating a new Path object does
not mean the path exists in the operating system.
Paths can be absolute or relative. We call a path absolute when it is complete, meaning
it contains all path components, including the root directory. The following are
examples of absolute paths:
Path.of("/home/john/textFile.txt"); // Linux absolute
path
Path.of("C:\\users\\john\\textFile.txt"); // Windows
absolute path
The absolute path above comprises a file called textFile.txt and three
directories, including the root directory. Note we are using a double backslash
for the Windows path; this is required because one backslash is interpreted as the
escape character.
Relative paths are those partially representing a path location:
Path.of("./john/textFile.txt"); // Linux relative path
Path.of(".\\john\\textFile.txt"); // Windows relative
path
We use ./ or .\ to indicate the current directory. In the example above, we are
not providing the complete path where the textFile.txt is located, making
the path a relative one.
The NIO.2 API offers helpful factory methods to manipulate the Path objects
in different ways. We can, for example, combine an absolute path with a relative
one:
Path absPath = Path.of("/home");
Path relativePath = Path.of("./john/textFile.txt");
Path combinedPath = absPath.resolve(relativePath);
System.out.println(combinedPath); //
/home/./john/textFile.txt
System.out.println(combinedPath.normalize()); //
/home/john/textFile.txt
The resolve method called in a Path object allows us to combine it with
another Path object as we did with the absPath and the relative path.
When printing the combinedPath for the first time, we see the presence of the
./ (current directory) path element, which is redundant and can be excluded
from the path representation. When printing the result of calling the
normalize method for the combinedPath, we see the normalized
combinedPath without the ./ element. The normalize method is helpful
to clean up paths containing redundant elements such as ./ (current directory)
and ../ (previous directory).
Now that we know how to manipulate Path objects, let us see how to use them
to handle files and directories.
// dirA and fileA are Path objects; the locations below are assumptions for illustration
Path dirA = Path.of("/tmp/dirA");
Path fileA = Path.of("/tmp/dirA/fileA.txt");

Files.createDirectory(dirA);
Files.createFile(fileA);

System.out.println(Files.isDirectory(dirA)); // true
System.out.println(Files.isRegularFile(fileA)); // true
Checked exceptions
Here, we have an example showing how checked exceptions can be used:
public class ExceptionAnalysis {

    public static void main(String[] args) {
        try {
            checkParameter(-1);
        } catch (Exception e) {
            System.out.println("The following error occurred: " + e.getMessage());
            e.printStackTrace();
        }
    }

    // Assumed helper: throws a checked exception when the parameter is invalid
    public static void checkParameter(int parameter) throws Exception {
        if (parameter < 0) {
            throw new Exception("Parameter cannot be negative");
        }
    }
}
Unchecked exceptions
Contrary to checked exceptions, unchecked exceptions are not required to be
caught or declared by our code; the compiler does not enforce their handling.
Although generally not recommended, handling unchecked exceptions is legal.
Following is an example showing how an application can throw an unchecked
exception:
var listOfStrings = List.of("a","b");
var first = listOfStrings.get(0);
var second = listOfStrings.get(1);
var third = listOfStrings.get(2); //
IndexOutOfBoundsException: Index: 2 Size: 2
The IndexOutOfBoundsException is considered an unchecked exception
because it extends the RuntimeException class. It occurs when we try to
access an index outside the list bounds. The default JVM exception handler
handles this exception directly.
When handling exceptions, there are scenarios where we want to execute some
code whenever the application leaves a try-catch. That is when the finally
block comes into play, allowing us to always execute some code, regardless of
whether an exception is caught.
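As a quick illustration (a minimal sketch, not from the chapter's sample project, reusing the checkParameter method from the earlier example), the finally block below runs whether or not an exception is caught:

try {
    checkParameter(-1);
} catch (Exception e) {
    System.out.println("The following error occurred: " + e.getMessage());
} finally {
    // Always executed, whether the catch block ran or not
    System.out.println("Cleaning up resources");
}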
public class TestApp {

    static final Logger logger =
            Logger.getLogger(org.corp.TestApp.class.getName());

    public static void main(String... args) throws IOException {
        logger.log(INFO, "Start operation");
        try {
            execute();
        } catch (Exception e) {
            logger.log(SEVERE, "Error while executing operation", e);
        }
        logger.log(INFO, "Finish operation");
    }

    public static void execute() throws Exception {
        logger.log(INFO, "Executing operation");
        throw new Exception("Application failure");
    }
}
We obtain a Logger object by executing the following line:
Logger logger = Logger.getLogger(org.corp.TestApp.class.getName());
We use this object to log messages. It is also possible to set the default log level
for the entire class by executing something like the below:
logger.setLevel(Level.INFO);
Setting such a configuration means the logger will capture all messages logged
with SEVERE, WARNING, and INFO levels. Other logging levels below INFO,
like CONFIG and FINE, would be ignored.
Executing the TestApp program will cause the creation of the file
/tmp/java-severe-0.log with the following content:
Jan 28, 2024 12:24:44 PM org.corp.TestApp main
SEVERE: Error while executing operation
java.lang.Exception: Application failure
at org.corp.TestApp.execute(TestApp.java:26)
at org.corp.TestApp.main(TestApp.java:17)
The FileHandler provides the above output. Notice we only see SEVERE
log messages. The INFO log messages are omitted because the FileHandler
log level is set to WARNING, which means we display only WARNING and
SEVERE log messages.
The following is the output provided by the ConsoleHandler:
Jan 28, 2024 12:24:44 PM org.corp.TestApp main
INFO: Start operation
Jan 28, 2024 12:24:44 PM org.corp.TestApp execute
INFO: Executing operation
Jan 28, 2024 12:24:44 PM org.corp.TestApp main
SEVERE: Error while executing operation
java.lang.Exception: Application failure
at org.corp.TestApp.execute(TestApp.java:26)
at org.corp.TestApp.main(TestApp.java:17)
Jan 28, 2024 12:24:44 PM org.corp.TestApp main
INFO: Finish operation
Since we rely on the default ConsoleHandler configuration, the default log
level is INFO, so we see SEVERE and INFO log messages here.
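The handler setup that produces this split output is not shown above. A minimal sketch of how it could be configured inside TestApp (the file pattern and the explicit WARNING level are assumptions based on the output file and behavior described) is the following:

// Attach a FileHandler that records only WARNING and SEVERE messages
FileHandler fileHandler = new FileHandler("/tmp/java-severe-%g.log"); // assumed pattern
fileHandler.setLevel(Level.WARNING);
fileHandler.setFormatter(new SimpleFormatter());
logger.addHandler(fileHandler);

// The default ConsoleHandler attached to the root logger keeps its INFO level,
// so INFO and above still appear on the console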
Let us explore how we can work with date and time in Java.
LocalDate
We use the LocalDate class when we are only interested in the date
representation formed by days, months, and years. You may be working with an
application that provides range filtering capabilities where the time granularity is
defined by days, excluding any time aspects related to hours or minutes. Such a
use case could benefit from the LocalDate capabilities. The following is how
we can create a LocalDate representing the current date:
System.out.println(LocalDate.now()); // 2024-01-28
When calling the now method, it returns the current date based on the default
time zone where the JVM is running unless we enforce a default time zone using
something like below:
TimeZone.setDefault(TimeZone.getTimeZone("Japan"));
If we do not enforce a default timezone through the application, Java relies on
the timezone defined by the operating system where the JVM is running. For
example, my computer clock is configured to use the "Europe/Berlin"
timezone. If executed on my computer, the now method would return the current
date based on the "Europe/Berlin" timezone.
We can pass another timezone if we do not want to use the default one:
System.out.println(LocalDate.now(ZoneId.of("Japan")));
// 2024-01-29
We can specify a different timezone using the of factory method from the
ZoneId class. In the Japan timezone, the date returned is one day ahead of the
previous example, where I used the default timezone provided by the operating
system.
We can get a LocalDate by providing the year, month, and day:
System.out.println(LocalDate.of(2024, Month.JANUARY, 28)); // 2024-01-28
The LocalDate is composed based on the provided date components like year,
month, and day. We achieve the same result by providing only the year and the
day of the year:
System.out.println(LocalDate.ofYearDay(2024, 28)); // 2024-01-28
The day 28 of 2024 falls in January, resulting in a LocalDate with that month.
We can also provide a string that can be parsed and transformed into a
LocalDate:
System.out.println(LocalDate.parse("2024-01-28")); //
2024-01-28
The string above must follow the default date format, which expects the year,
month, and day in this order. If we try to pass something like "2024-28-01",
we get a DateTimeParseException. We can solve it by using a date
formatter:
String myFormat = "yyyy-dd-MM";
DateTimeFormatter myDateFormatter =
DateTimeFormatter.ofPattern(myFormat);
System.out.println(LocalDate.parse("2024-28-01",
myDateFormatter)); // 2024-01-28
There is an overloaded parse method from the LocalDate class that accepts
a DateTimeFormatter to allow proper parsing of different date format
strings.
We can also increase or decrease the LocalDate:
System.out.println(LocalDate.of(2024, Month.JANUARY, 28).plusDays(1)); // 2024-01-29
System.out.println(LocalDate.of(2024, Month.JANUARY, 28).minusMonths(2)); // 2023-11-28
System.out.println(LocalDate.of(2024, Month.JANUARY, 28).plusYears(1)); // 2025-01-28
It is important to note that when we use methods like plusDays or
minusMonths, they do not change the current LocalDate instance but
instead return a new instance with the changed date.
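A quick sketch (not from the chapter's sample project) makes this immutability visible:

var date = LocalDate.of(2024, Month.JANUARY, 28);
date.plusDays(1); // the returned instance is discarded here

System.out.println(date); // 2024-01-28, the original instance is unchanged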
What if we are only interested in the time aspect of a given date? That is when
the LocalTime comes into play to help us.
LocalTime
The way we manipulate LocalTime objects is similar to how we manipulate
LocalDate. The LocalTime class provides some factory methods that allow
the creation of LocalTime objects in different ways:
System.out.println(LocalTime.now()); // 21:33:08.740527179
System.out.println(LocalTime.now(ZoneId.of("Japan"))); // 05:33:08.740560750
System.out.println(LocalTime.of(20, 10, 5)); // 20:10:05
System.out.println(LocalTime.of(20, 10, 5, 10)); // 20:10:05.000000010
System.out.println(LocalTime.parse("20:10:05")); // 20:10:05
As with LocalDate, we can create LocalTime instances following the
principle where we can get the current time with the now method or get a
specific time by providing the time components like hour, minute, second, and
nanosecond. It is also possible to parse a String representing time. Increasing
or decreasing a LocalTime is also supported:
System.out.println(LocalTime.of(20, 10, 5).plusHours(5)); // 01:10:05
System.out.println(LocalTime.of(20, 10, 5).minusSeconds(30)); // 20:09:35
Let us check now how we can have an object that expresses both date and time.
LocalDateTime
By joining the date and time into a single object, the LocalDateTime class
provides a complete local representation of date and time. The term local means
that these date-time objects do not carry any time zone reference; that is only
possible with the time-zone-aware variant, which we will see later in this section.
The following is how we can create a LocalDateTime:
System.out.println(LocalDateTime.now()); // 2024-01-28T22:21:38.049971256
System.out.println(LocalDateTime.now(ZoneId.of("Japan"))); // 2024-01-29T06:21:38.050009338

var localDate = LocalDate.of(2024, Month.JANUARY, 28);
var localTime = LocalTime.of(20, 10, 5);

System.out.println(LocalDateTime.of(localDate, localTime)); // 2024-01-28T20:10:05
System.out.println(LocalDateTime.of(2024, Month.JANUARY, 28, 20, 10, 5)); // 2024-01-28T20:10:05
As with other date-time classes, LocalDateTime has a now method that
returns the current date and time. When creating LocalDateTime with a
specific date and time, we can provide LocalDate and LocalTime objects to
the factory method. We see next the date-time type that supports timezones.
ZonedDateTime
In all the previous classes we have seen so far, LocalDate, LocalTime, and
LocalDateTime all provide a date and time representation without regard to a
given time zone. There are use cases where the time zone is helpful as part of the
date-time object. We can save date-times using the time zone from the region
where the Java application is running, like the scenarios where a system is
available for users in different time zones. We can also ensure the application
always uses the Coordinated Universal Time (UTC), a universal way to
represent time, no matter the region where the Java application is running. The
use cases can be numerous, and to enable them, we have the ZonedDateTime
class, which works similarly to the LocalDateTime but with support for
time zones. The following is how to create a ZonedDateTime object
representing the current date and time:
System.out.println(ZonedDateTime.now()); // 2024-01-28T23:46:26.709136750+01:00[Europe/Berlin]
System.out.println(ZonedDateTime.now(ZoneId.of("Japan"))); // 2024-01-29T07:47:33.206255659+09:00[Japan]
In addition to the date and time components, we have the time zone component
as +01:00[Europe/Berlin] when calling the now method without a parameter
defining a time zone and +09:00[Japan] when calling the now method with the
Japan time zone parameter.
It is possible to create the ZonedDateTime from a LocalDateTime:
var localDateTime = LocalDateTime.now();
System.out.println(ZonedDateTime.of(localDateTime, ZoneId.of("UTC"))); // 2024-01-28T23:55:27.406187953Z[UTC]
Remember that the now method gets the current date and time based on the
operating system’s default time zone unless we force the Java Virtual Machine
(JVM) to use another time zone. The example above gets the local date and time
and then attaches the UTC zone to it when creating a ZonedDateTime.
Instant
In the previous section, we saw how to use ZonedDateTime to create a moment
representation with UTC. UTC stands for Coordinated Universal Time and is
beneficial for situations where we want to represent time regardless of the
geographical location or different time zones. Database systems usually adopt
UTC as the standard for storing date-time values.
Java has the Instant class, which represents a moment as a nanosecond-precision
count from the epoch of the beginning of 1970 in UTC. Here is how we can
create an Instant:
System.out.println(Instant.now()); // 2024-01-28T23:24:36.093286873Z
System.out.println(Instant.now().getNano()); // 93372648
Using Instant.now() is common when persisting Java data objects into
database systems because it provides a UTC moment representation that is
usually compatible with timestamp data types from most database technologies.
Predicates
We use predicates to create expressions that always return a Boolean. The
following is how the Predicate interface is defined in the Java API:
@FunctionalInterface
public interface Predicate<T> {

    // Code omitted

    boolean test(T t);

    // Code omitted
}
The @FunctionalInterface annotation is recommended but not
mandatory. We can use this annotation to ensure the functional interface has only
one abstract method. T is the generic parameter type used in the expression we
create, and boolean is what we always return as a result of evaluating the
expression. Look how we can implement a Predicate with a Lambda
expression:
Predicate<String> emptyPredicate = value -> value != null && value.isEmpty();
A lambda expression is composed of zero, one or more parameters on the left
side, then an arrow, and finally, the expression body on the right side. Notice we
are assigning this expression to a variable named emptyPredicate. There is
no need to specify the value parameter type here because the Java compiler
infers it from the type we provide on Predicate<String>. We can evaluate
our lambda expression by calling the test method with different values, as
follows:
System.out.println(emptyPredicate.test(null)); // false
System.out.println(emptyPredicate.test("")); // true
System.out.println(emptyPredicate.test("abc")); // false
Passing null or "abc" makes our expression return false. It returns true
when we pass an empty String.
Functions
We typically use functions when we want to pass a value, do something with it,
and return a result. The following is how the Function functional interface is
defined in the Java API:
@FunctionalInterface
public interface Function<T, R> {

    // Code omitted

    R apply(T t);

    // Code omitted
}
R is the result type we expect to get, and T is the parameter type we use inside
our expression.
You may wonder what R and T mean. These letters are identifiers we use to
define generic type parameters used by generic classes or interfaces. Function is
a generic functional interface that receives two generic type parameters, R and T,
where R stands for return and T stands for type. Generics give us flexibility by
providing a way to design classes and interfaces capable of working with
different types.
The following is how we can declare and use the Function interface:
Function<Integer, String> putNumberBetweenParentheses = (input) -> "(" + input + ")";

var java = putNumberBetweenParentheses.apply(25);
var function = putNumberBetweenParentheses.apply(8);

System.out.println(java); // (25)
System.out.println(function); // (8)
In the example above, Integer is the type of the parameter we are passing,
and String is the return type. Our function gets an Integer, converts it to a
String, and wraps it into parentheses.
Suppliers
There are scenarios where we need to get an object without providing any input.
Suppliers can help us because that is how they behave: we call them, and they
just return something. Following is how the Java API defines the Supplier
interface:
@FunctionalInterface
public interface Supplier<T> {

    // Code omitted

    T get();
}
T is the return type of the object we receive when calling the get method. Here
is an example showing how to use a Supplier:
Supplier<List<String>> optionsSupplier = () ->
        List.of("Option 1", "Option 2", "Option 3");

System.out.println(optionsSupplier.get()); // [Option 1, Option 2, Option 3]
Notice that the parameter side () of the lambda expression is empty. If the
functional interface does not expect any parameter, we do not provide it when
creating the lambda expression.
Consumers
With Suppliers, we get an object without providing any input. Consumers
work in the opposite direction because we give input to them, but they return
nothing. We use them to perform some action on the provided input. The
following is the Consumer functional interface definition:
@FunctionalInterface
public interface Consumer<T> {

    // Code omitted

    void accept(T t);

    // Code omitted
}
We pass a generic T type to the accept method that returns nothing. Here, we
can see a lambda expression showing how to use the Consumer functional
interface:
Consumer<List<String>> print = list -> {
    for (String item : list) {
        System.out.println(item + ": " + item.length());
    }
};

print.accept(List.of("Item A", "Item AB", "Item ABC"));
Implementing the Consumer interface, we have a lambda expression that
receives a List<String> as a parameter. This lambda shows that an
expression can be a whole code block with multiple statements between curly
brackets. Here, we are iterating over the list of items and printing their value and
length. Note that this lambda expression does not return anything; it just gets the
list and performs some action with it.
The Java API provides many other functional interfaces. Here, we have just
covered some of the main interfaces to understand their fundamental idea and
how they can help us develop code in a functional way.
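As a small taste of those other interfaces (a minimal sketch, not part of the book's sample project), BiFunction works like Function but accepts two inputs:

// BiFunction<T, U, R> receives two parameters and returns a result
BiFunction<String, Integer, String> repeat = (text, times) -> text.repeat(times);

System.out.println(repeat.apply("ab", 3)); // ababab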
Let us explore how to use streams to handle collections of objects.
Streams
Introduced together with functional interfaces and lambda expressions in Java 8,
the Stream interface allows us to handle collections of objects functionally.
One significant difference between streams and the data structures provided by
the Collection interface is that streams are lazily evaluated. For example, when
we create a list of objects, the entire list is allocated in memory. If we deal with
large amounts of data, our application may struggle and have memory issues
loading all objects into a list. On the other hand, streams allow us to process one
object at a time in a pipeline of steps called intermediate and terminal operations.
Instead of using a collection and loading everything in memory, we rely on a
stream to process object by object, performing the required operations on them.
It is a common situation where getting results from a large database table is
necessary. Instead of returning the results as a List, we can return them as a
Stream to prevent memory bottlenecks in the application.
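The following sketch (not from the chapter's sample project, and using operations explained in the rest of this section) illustrates this laziness: the pipeline describes an unbounded sequence, yet only five elements are ever produced:

Stream.iterate(1, n -> n + 1)          // describes an infinite sequence, nothing is computed yet
        .map(n -> n * n)               // intermediate operation, still lazy
        .limit(5)                      // only five elements will ever be produced
        .forEach(System.out::println); // 1, 4, 9, 16, 25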
Streams may be beneficial for preventing memory bottlenecks and also for
helping us handle collections of objects more flexibly. To understand how we
can tap into the benefits provided by streams, we first need to understand the
stream structure.
Streams are designed like a pipeline where we have the following components:
• Stream source
• Intermediate operations
• Terminal operation
The stream source is where we get the data we want to process as a stream. We
can use a List, for example, to source a stream. Intermediate operations allow
us to manipulate stream data in a pipeline fashion, where the output from one
intermediate operation can be used as the input to another intermediate
operation. As the last component, we have the terminal operation that determines
the stream’s final outcome. Terminal operations can return an object or handle
the stream data without returning something.
Let us start by checking how we can source a stream.
Sourcing streams
The simplest way to initialize a stream is through the of factory method from
the Stream interface:
Stream<String> streamOfStrings = Stream.of("Element
1", "Element 2", "Element 3");
A more recurrent use is creating streams out of an existing collection like a
List, for example:
var listOfElements = List.of("Element 1", "Element 2",
"Element 3");
Stream<String> streamOfStrings =
listOfElements.stream();
Once we have a stream, the next step is to handle its data with intermediate and
terminal operations. Let us first check how intermediate operations work.
Intermediate operation
For those familiar with Unix/Linux shells, you may know how helpful it is to use
the output of a command as the input for another command through the usage of
the pipe | character. The Java streams’ intermediate operations work similarly by
letting us use the result from one intermediate operation as the input for the next
intermediate operation. Because of this continuing and compounding nature,
these operations are called intermediate. They do not represent the final outcome
of the stream processing.
Here are some common intermediate operations examples:
Stream<String> streamOfStrings = Stream.of("Element
1", "Element 2", "Element 3");
Stream<String> intermediateStream1 =
streamOfStrings.map(value -> value.toUpperCase());
Stream<String> intermediateStream2 =
streamOfStrings.filter(value -> value.contains("1"));
Stream<String> intermediateStream3 =
streamOfStrings.map(value ->
value.toUpperCase()).filter(value ->
value.contains("1"));
We start by creating a stream of strings. The first intermediate operation puts all
string characters in uppercase. Note that intermediate operations always return a
Stream object, so we can use it to perform another intermediate operation or
terminate the stream processing with a terminal operation. Next, the second
intermediate operation filters the stream data, returning only the objects
containing the "1" string. Finally, we concatenate the map and filter
intermediate operations in the third intermediate operation.
The map intermediate operation receives a lambda expression implementing the
Function interface as its parameter. Remember, a Function is a functional
interface that accepts a parameter and returns an object. We are passing a
String and returning another String with its characters in uppercase.
We are passing a lambda expression for the filter intermediate operation that
implements the Predicate functional interface.
Instead of creating multiple streams containing intermediate operations, we can
create one single stream with multiple intermediate operations, as shown below:
streamOfStrings
.map(value -> value.toUpperCase())
.filter(value -> value.contains("1"))
.filter(value -> value.contains("3"));
It is important to understand that we are not processing data by creating
intermediate operations. The real processing only happens when we call a
terminal operation. Let us check next how that works.
Terminal operation
Creating one or more intermediate operations on top of a stream does not mean
the application is doing something with the stream data. We are only stating how
the data should be processed at the intermediate level. The stream data is only
processed after we call a terminal operation. Here are some examples of terminal
operations:
Stream<String> streamOfStrings = Stream.of("Element
1", "Element 2", "Element 3");
streamOfStrings
.map(String::toUpperCase)
.filter(value -> value.contains("1"))
.forEach(System.out::println); // ELEMENT 1
//.collect(Collectors.toList()); // [ELEMENT
1]
//.count(); // 1
After concatenating the map and filter intermediate operations, we applied a
forEach terminal operation. Notice we use a more concise syntax instead of a
lambda expression. We call this syntax a method reference, and we can use it in
scenarios where the parameter declared on the left side of the lambda expression
is simply passed to the method called in the lambda body. Here, we have a
method reference and a lambda expression that produce the same result:
.forEach(System.out::println);
.forEach(value -> System.out.println(value));
In addition to forEach, we also have the collect terminal operation, which
allows us to return the stream processing result as a List, for example. The
count terminal operation counts the number of elements that remain in the
stream after the intermediate operations are applied.
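Since collect and count appear above only as commented-out alternatives, the following sketch shows each of them used on its own; note that every terminal operation needs its own freshly created stream:

var collected = Stream.of("Element 1", "Element 2", "Element 3")
        .map(String::toUpperCase)
        .filter(value -> value.contains("1"))
        .collect(Collectors.toList()); // [ELEMENT 1]

var counted = Stream.of("Element 1", "Element 2", "Element 3")
        .filter(value -> value.contains("1"))
        .count(); // 1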
Conclusion
This chapter explored some of the Java APIs often used in most Java projects.
Starting with the Java Collections Framework, we learned to use lists, sets, and
maps to provide a solid data structure foundation for Java applications. We saw
how the NIO.2 API lets us easily manipulate files and directories. Keeping an eye on
the possibility that a Java application may not behave as expected, we explored
creating, throwing, and handling exceptions. To ensure better visibility of what a
Java application is doing, we learned how the Logging API helps to log system
behaviors. Next, we explored how the Date-Time API gracefully lets us handle
different time and date components, including time zones. Tapping into
functional programming, we learned how powerful streams and lambdas can be in
developing code in a functional way.
This chapter reminded us of some cool things the Java API provides. The next
chapter will dive deep into what is new in Java, especially the changes
introduced between the Java 17 and 21 releases.
Introduction
A fascinating aspect of Java is that after more than twenty-five years since its
first release, the language keeps innovating with new features and enhancing
how we develop software. Many of us rely on Java for its robustness and
commitment to staying as stable as possible regarding backward compatibility
across releases. An amazing point about Java is its capacity to be rock-
solid for mission-critical applications and, simultaneously, a language offering
cutting-edge features for applications that explore new concepts and ways of
doing things.
As Java is a language that changes and evolves quickly, keeping up with every
new feature may be challenging. So, in this chapter, we explore essential features
introduced between the Java 17 and 21 releases that help us develop applications
more efficiently.
Structure
The chapter covers the following topics:
• Getting more control over inheritance with sealed classes
• Increasing code readability with pattern matching
• Increasing application throughput with virtual threads
• Compiling and running the sample project
Objectives
By the end of this chapter, you will be able to tap into Java’s modern features to
develop software more efficiently. By understanding virtual threads, for example,
you will have the means to simplify the development and increase the throughput of
concurrent systems. By employing pattern matching and record patterns, you
can make your code concise and easier to understand. With sealed classes, you
will have fine-grained inheritance control and enhanced encapsulation.
The features and techniques presented in this chapter can increase the quality
and efficiency of the code written in your Java projects.
public class Report {

    // Printer held as an attribute (Has-A relationship)
    private final Printer printer;

    Report(Printer printer) {
        this.printer = printer;
    }

    public void printSomething() {
        printer.print();
    }
}
The Report class has the Printer class as an attribute, which means all the
visible attributes and methods from the Printer class will be available to the
Report class through the printer attribute.
On the other hand, with the Is-A, we can rely on the inheritance between classes
to extend the behaviors and data of another class. The code is as follows:
class Printer {
    protected void print() {
        // Print something
    }
}

public class Report extends Printer {
    public void printSomething() {
        print();
    }
}
We can access the inherited print method by extending the Printer into the
Report class. From an object-oriented design perspective, it is counterintuitive
that a Report is also a Printer. Still, the example above shows we can do
something like that if we want to access a method from another class through
inheritance. So, to help prevent counterintuitive inheritances, we can favor the
Has-A relationship over the Is-A relationship because of the flexibility of
composing class capabilities by referring to other classes instead of tightly
coupling them through inheritance.
Still, there can be scenarios where the Is-A relationship is entirely valid to extend
class capabilities and better represent the class design of the problem domain we
are dealing with. A common use case is when we create an abstract class
with the intent to provide some data and behavior that will be shared across other
classes, like in the following example:
abstract class Report {

    String name;

    abstract void print();

    abstract void generate();

    void printReportName() {
        System.out.println("Report " + name);
    }
}
Based on the generic abstract Report class, we can provide concrete classes
like PDFReport and WordReport containing specific logic to generate and
print reports:
class PDFReport extends Report {

    @Override
    void print() {
        printReportName();
        System.out.println("Print PDF");
    }

    @Override
    void generate() {
        // Generate PDF
    }
}

class WordReport extends Report {

    @Override
    void print() {
        printReportName();
        System.out.println("Print Word");
    }

    @Override
    void generate() {
        // Generate Word
    }
}
Generating and printing reports are behaviors we can always expect from
Report type objects, whether they are PDFReport or WordReport
subtypes. However, what if someone extended the Report class to create, for
example, the ExcelReport class with slightly different behavior:
class ExcelReport extends Report {

    @Override
    void print() {
        // Dummy implementation
    }

    @Override
    void generate() {
        // Generate Excel
    }
}
The ExcelReport class is only concerned with generating reports; it does not
need to print anything, so it just provides a dummy implementation of the print
method. That is not how we expect classes implementing Report to behave.
Our expectation is based on the behaviors provided by the PDFReport and
WordReport classes. Java allows us to enforce such expectations with sealed
classes. The code block below shows a sealed class example:
sealed abstract class Report permits PDFReport, WordReport {

    String name;

    abstract void print();

    abstract void generate();

    void printReportName() {
        System.out.println("Report " + name);
    }
}
We know that the PDFReport and WordReport classes fulfill the Report
abstract class contract by providing meaningful implementations of the print
and generate abstract methods, so we seal the Report abstract class to
restrict its inheritance to the classes we trust.
The classes implementing the sealed abstract Report class can be defined as
final, sealed, or non-sealed. The following is how we can define it as
final:
final class WordReport extends Report {
// Code omitted //
}
We use the final keyword to ensure the hierarchy ends on the WordReport
without further extension. It is also possible to define the child class as a
sealed one, like in the following example:
sealed class WordReport extends Report permits Word97, Word2003 {
    // Code omitted //
}

final class Word97 extends WordReport {
    // Code omitted //
}

final class Word2003 extends WordReport {
    // Code omitted //
}
The WordReport class is extensible only to Word97 and Word2003 classes.
However, what if you wanted to make WordReport extensible to any class
without restriction? We can achieve that using a non-sealed class like in the
following example:
public sealed abstract class Report permits WordReport {
    // Code omitted //
}

non-sealed class WordReport extends Report {
    // Code omitted //
}
There is no permits clause on the WordReport class because we want to make it
extendable by any other class.
Another benefit of using sealed classes is that switch statements become more
straightforward because the compiler can infer all the possible types of a sealed
hierarchy, which removes the necessity to define the default case in a switch
statement. Consider the following example:
sealed abstract class Report permits WordReport, PDFReport {
    // Code omitted //
}

final class WordReport extends Report {
    // Code omitted //
}

final class PDFReport extends Report {
    // Code omitted //
}
class TestSwitch {

    int result(Report report) {
        return switch (report) { // pattern matching switch
            case WordReport wordReport -> 1;
            case PDFReport pdfReport -> 2;
            // there is no need for a default case
        };
    }
}
The pattern matching switch receives the sealed abstract Report class as a
parameter that permits only the WordReport and PDFReport classes.
Because the Java compiler knows these two classes are the only possible types
deriving from the Report class, we do not need to define a default case in the
switch statement.
Sealed classes help us emphasize which class hierarchies should be controlled to
avoid code inconsistencies due to unadvised inheritances.
Moving out from sealed classes, let us see how we can write more concise and
easily understood code by applying pattern-matching techniques.
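As a brief preview of that technique (a minimal sketch reusing the Report and WordReport classes from the previous examples, and assuming a report variable of type Report), pattern matching for instanceof combines the type test and the cast into a single step:

// Without pattern matching: an explicit cast after the instanceof check
if (report instanceof WordReport) {
    WordReport wordReport = (WordReport) report;
    wordReport.print();
}

// With pattern matching: the test and the typed variable come together
if (report instanceof WordReport wordReport) {
    wordReport.print();
}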
The new virtual thread implementation introduced the term platform thread to
differentiate traditional threads (now called platform threads) from virtual threads.
Developing multi-thread applications using platform threads has been the
standard approach since the first Java release. This approach worked well for
quite a while, but as Java applications started to deal with more intense
concurrent workloads, such an approach began to show some limitations. Let us
assess these limitations.
The issue here lies in the platform thread waiting for a response. The thread is
consuming memory resources but is not doing anything; it is only waiting for a
database response. We cannot process more requests because no free available
memory exists to create more threads, decreasing our program throughput
capacity.
Besides consuming considerable memory, platform threads also take time to create:
creating a new thread object for every request is a costly activity. To
overcome this issue, we can create thread pools with reusable threads. Having a
thread pool solves the overhead of creating new threads, but we still
have the hardware limitation on the total number of threads that can be created.
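As a quick illustration (a minimal sketch, not taken from the book's sample project), a fixed thread pool reuses a bounded set of platform threads:

// A pool of 10 reusable platform threads; extra tasks wait in the queue
ExecutorService pool = Executors.newFixedThreadPool(10);

for (int i = 0; i < 100; i++) {
    int requestId = i;
    pool.submit(() -> System.out.println("Handling request " + requestId));
}

pool.shutdown();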
A technique involving reactive programming has emerged in Java to solve the
throughput problem caused by blocking IO operations. Let us look at how it
works.
In the image above, the non-blocking IO tasks #1 and #2 run in parallel on non-
blocking IO threads, which means no thread hangs waiting for a response. Once
the IO operation finishes, the non-blocking IO task resumes its execution,
possibly on a different thread than the one it started on.
The fact that the execution of non-blocking IO tasks is distributed in different
threads creates a debugging challenge because the task execution stack trace will
be scattered among distinct threads. If something fails, we may need to
investigate the execution of more than one thread to understand what happened.
In imperative programming, for example, we can rest assured the stack trace of a
single task is confined to one thread, which considerably decreases the
debugging effort.
Reactive programming may be challenging not only on the debugging side.
Reactive code also reads quite differently from imperative code, relying heavily
on functional programming constructs like streams and lambda expressions, and
not all developers are well-versed in such a programming style.
Although reactive programming solves the IO blocking problem, it may create
maintainability problems because it is more difficult to debug and write reactive
code. Java architects aware of this situation came up with a solution that allows
us to leverage non-blocking IO threads and write and debug code as we do in the
imperative programming style. We call this solution virtual threads. Let us see
how it works.
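As a brief illustration of the idea, and assuming Java 21 or later, virtual threads
can be created through the standard Thread.ofVirtual() builder or through a
virtual-thread-per-task executor; the following sketch is not from the sample
project:
import java.util.concurrent.Executors;

class VirtualThreadExample {
    public static void main(String... args) throws InterruptedException {
        // One virtual thread started directly through the builder.
        Thread virtualThread = Thread.ofVirtual()
                .start(() -> System.out.println("Running in a virtual thread"));
        virtualThread.join();

        // An executor that creates a new virtual thread for every submitted task.
        try (var executor = Executors.newVirtualThreadPerTaskExecutor()) {
            for (int i = 0; i < 10_000; i++) {
                executor.submit(() -> {
                    // If this task performed blocking IO, the JVM would park the
                    // virtual thread instead of blocking a platform thread.
                    return 1;
                });
            }
        } // Closing the executor waits for the submitted tasks to finish.
    }
}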
Conclusion
By keeping ourselves up-to-date with modern Java features, we can solve
problems more efficiently by tapping into the newest features of the Java
language. In this chapter, we had the chance to explore how sealed classes help
us enforce inheritance expectations. We learned how pattern matching allows us
to better deal with logic that does type casting using instanceof, including
the switch statement and data extraction from record classes. We concluded by
learning how virtual threads can significantly increase the throughput capacity of
IO-intensive applications.
In the next chapter, we will explore the technologies and techniques we can use
to handle relational databases in Java, such as the Java Database Connectivity
(JDBC), which provides an interface that simplifies the interaction with
databases, and the Jakarta Persistence, which lets us map Java classes to
database tables.
Introduction
Relational databases continue to be widely used in Java projects that require data
to be stored in a structured form. In the schema-enforced structure of relational
databases, developers can store data in tables where each column has a specific
data type. Java provides good support for those wanting to interact with
relational databases. Serving as the foundation for database access and data
handling, we have the Java Database Connectivity (JDBC) specification. On a
higher level, we have the Jakarta Persistence (previously known as Java
Persistence API), which allows us to map Java classes to database tables.
Understanding how JDBC and Jakarta Persistence work is essential for anyone
involved in Java projects that depend on relational databases.
Structure
The chapter covers the following topics:
• Introduction to JDBC
• Simplifying data handling with the Jakarta Persistence
• Exploring local development approaches when using databases
• Compiling and running the sample project
Objectives
By the end of this chapter, the reader will have the skills to develop Java
applications that correctly employ the JDBC and Jakarta Persistence to deal with
databases. The reader will also learn about the approaches to providing databases
while locally running and developing a Java application.
Introduction to JDBC
The Java Database Connectivity (JDBC) specification defines how Java
applications should interact with relational databases. It is also an API composed
of interfaces describing how to connect and handle data from databases.
Applications relying on the JDBC may benefit, to a certain extent, from the
platform-independent nature of the JDBC specification, which enables
applications to change database vendors without significant refactoring on the
code responsible for connecting to the database. Changing database technologies
is often a non-trivial activity that requires considerable refactoring, primarily due
to the differences in the Structured Query Language (SQL) syntax and data
types that may occur across different database vendors. This problem arises when
database vendors employ non-ANSI-compliant SQL, which can cause trouble for
applications migrating from database vendor A to vendor B.
The JDBC specification provides standardization on an application level
regarding the fundamental operations of any relational database. As fundamental
operations, every relational database must support operations based on the
following:
• Data Definition Language: Data Definition Language (DDL) operations
encompass things like creating or changing a database table. It is the
language that deals with the structure in which the data is organized in a
relational database.
• Data Manipulation Language: Data Manipulation Language (DML)
operations, on the other hand, deal directly with data from a relational
database. They provide a language for selecting, inserting, updating, and
deleting data from a relational database.
Database vendors comply with the JDBC specification by implementing the
JDBC API interfaces from the java.sql and javax.sql packages. These
interfaces describe which methods can be used to, for example, create a
connection to a database or send an SQL statement to select data from a table.
Let us start our exploration by learning how to use Java to connect to a relational
database.
"jdbc:"+dbProvider+"://"+dbHost+":"+dbPort+"\"+dbName);
}
}
Assuming a MySQL Server is running locally with a user, password, and
database name defined as test, we compose a database connection URL using
the following structure:
protocol:provider://host:port/database
The protocol is usually jdbc. The provider represents the database vendor,
which in our case is mysql. As the database is running on the same machine as
the Java application, the host is 127.0.0.1. The MySQL server from our
example uses the default port, 3306. The database name is test.
After getting the connection, we confirm it is open by calling the
isClosed method, which returns false.
For this approach to work, we must ensure the JDBC driver for MySQL is
loaded into the Java application’s class or module path. The JDBC driver usually
comes as a JAR file, which the database vendor provides.
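For reference, a minimal sketch of obtaining such a connection through the
DriverManager class, assuming the connection values described above:
import java.sql.Connection;
import java.sql.DriverManager;

class JdbcConnectionExample {
    public static void main(String... args) throws Exception {
        var url = "jdbc:mysql://127.0.0.1:3306/test";
        // DriverManager locates the MySQL driver on the class path
        // and opens a connection to the database.
        Connection connection = DriverManager.getConnection(url, "test", "test");
        System.out.println(connection.isClosed()); // false
        connection.close();
    }
}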
The same Connection object can be acquired by using the DataSource
interface, as shown in the upcoming section.
4. We can use this Connection object to start interacting with the MySQL
database:
public static void main(String... args) throws Exception {
var user = "test";
var password = "test";
var dbName = "test";
var connection = getConnectionWithDataSource(user, password, dbName);
System.out.println(connection.isClosed()); // false
}
a. When working with production-grade Java projects, you will not
need to manually create the DataSource and Context objects
because they will be provided through the application server or the
framework your Java program is using.
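For local experiments without an application server, one way to build such a
DataSource by hand is through the MySQL driver's MysqlDataSource class. The
helper below is a hypothetical sketch of what a getConnectionWithDataSource
method could look like; the real project may obtain its DataSource through JNDI
or a framework instead:
import java.sql.Connection;
import javax.sql.DataSource;
import com.mysql.cj.jdbc.MysqlDataSource;

class DataSourceExample {
    // Hypothetical helper for illustration only.
    static Connection getConnectionWithDataSource(
            String user, String password, String dbName) throws Exception {
        var mysqlDataSource = new MysqlDataSource();
        mysqlDataSource.setURL("jdbc:mysql://127.0.0.1:3306/" + dbName);
        mysqlDataSource.setUser(user);
        mysqlDataSource.setPassword(password);
        DataSource dataSource = mysqlDataSource;
        return dataSource.getConnection();
    }
}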
Now that we know how to set up a database connection using JDBC, let us
explore how to send and process statements to the database.
    PreparedStatement preparedStatement = connection.prepareStatement(sql);
    preparedStatement.setString(1, firstName);
    ResultSet result = preparedStatement.executeQuery();
    while (result.next()) {
        String resultFirstName = result.getString("FIRST_NAME");
        long resultAge = result.getLong("AGE");
        System.out.println("First Name: " + resultFirstName + " | Age: " + resultAge);
    }
}
The parameter is denoted by the ? character in the parts of the SQL statement
where we want to use parameter values. We can set which value must be used for
each parameter by calling methods like setString from the
PreparedStatement interface. In our example, setString receives the
number 1, which represents the index position of the ? character, and a
String object that is used as the value for the ? character at that index position.
The PreparedStatement ensures protection against SQL injection attacks like
the one we saw previously in the Statement approach. Consider the following
call to the findAndPrintPersonByFirstName method:
findAndPrintPersonByFirstName("James' OR '1'='1"); // Prints nothing
Attempts to inject arbitrary SQL fragments like ' OR '1'='1 are appropriately
handled by the PreparedStatement and not executed in the database. That is
why it is always recommended to use PreparedStatement when
constructing SQL statements with data coming from external sources.
Besides offering the Statement for simple queries and the
PreparedStatement for parameterized queries, the JDBC API also has the
CallableStatement interface, which allows us to call stored procedures. Let
us look at how it works.
    ResultSet result = callableStatement.executeQuery();
    while (result.next()) {
        String resultFirstName = result.getString("FIRST_NAME");
        long resultAge = result.getLong("AGE");
        System.out.println("First Name: " + resultFirstName + " | Age: " + resultAge);
    }
}
The CallableStatement interface extends the PreparedStatement
interface and supports statement parametrization. In our example above, the
findPersonOlderThanAge stored procedure expects an integer parameter
representing age. We use the CALL SQL keyword before the stored procedure
name. Note also that we are using the ? character to set a parameter at
findPersonOlderThanAge(?). Because CallableStatement
extends the PreparedStatement interface, we can set the age parameter
using the setInt method. We call the
findAndPrintPersonOlderThanAge method, passing the age integer as
a parameter:
findAndPrintPersonOlderThanAge(29);
// First Name: Mary | Age: 35
// First Name: James | Age: 51
As expected, the stored procedure returned two records of persons older than 29.
You may have noted that in this section and the sections covering the
Statement and PreparedStatement interfaces, we have been processing
results using a ResultSet object. Let us examine this further.
Processing results with the ResultSet
The ResultSet is an interface that allows data to be retrieved from and
persisted in the database. Whenever we execute the method executeQuery
from a Statement-type object (which also includes
PreparedStatement and CallableStatement), the result is a
ResultSet object because the executeQuery method is used for SELECT
statements whose purpose is to retrieve data from the database. The
executeUpdate method, in turn, is used when making changes in the database
with INSERT, UPDATE, or DELETE statements; it returns an integer representing
the number of rows affected by the statement. This section will focus on the
executeQuery method and the
ResultSet object it returns.
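As a brief, hypothetical illustration of the executeUpdate method, reusing the
PERSON table from this chapter and assuming an open connection is passed in:
import java.sql.Connection;
import java.sql.SQLException;

class UpdateExample {
    // Hypothetical helper; the connection is assumed to be opened elsewhere.
    static int incrementAge(Connection connection, String firstName) throws SQLException {
        var updateSql = "UPDATE PERSON SET AGE = AGE + 1 WHERE FIRST_NAME = ?";
        try (var preparedStatement = connection.prepareStatement(updateSql)) {
            preparedStatement.setString(1, firstName);
            // executeUpdate returns the number of affected rows.
            return preparedStatement.executeUpdate();
        }
    }
}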
When querying data from a database, we are interested in the rows and columns
returned by the query. The ResultSet interface allows us to inspect what a
SELECT statement returned after it was executed in the database. The following
command will help us get a ResultSet:
var retrieveAllPersons = "SELECT * FROM PERSON";
ResultSet resultSet =
statement.executeQuery(retrieveAllPersons);
The ResultSet stores the query result in a table-like structure that we can
navigate through using a cursor that the ResultSet provides. This cursor
starts right before the first row returned by the executed query. We can move the
cursor position by using the next method from the ResultSet:
while (resultSet.next()) {
    String firstName = resultSet.getString("FIRST_NAME");
    long age = resultSet.getLong("AGE");
    System.out.println("First Name: " + firstName + " | Age: " + age);
}
As the cursor starts right before the first row, when the while loop calls the
resultSet.next() for the first time, it moves the cursor to the first row.
While the ResultSet cursor is in the first table row position, we extract the
data we want using the resultSet.getString and
resultSet.getLong methods. These methods are mapped to the data type
of the database columns. You should use the getString method to retrieve
data stored in a VARCHAR column. If you store data in a BOOLEAN column, you
should use the getBoolean to retrieve data from that column. Accessing a
ResultSet column using the column index rather than the name is also
possible. For example, instead of calling
resultSet.getString("FIRST_NAME") we can call
resultSet.getString(1).
In our example, the while iteration continues until resultSet.next()
returns false, meaning there are no more rows to process. By default, the
ResultSet cursor only moves forward, from beginning to end, and if something
changes in the database while the ResultSet is being traversed, those changes
will not be reflected in the ResultSet object. We can change this behavior by
passing
special properties when creating Statement objects. The following example
shows how we can do that:
private static void printPerson() throws SQLException {
    String sql = "SELECT * FROM PERSON";
    PreparedStatement preparedStatement = connection.prepareStatement(
            sql,
            ResultSet.TYPE_SCROLL_SENSITIVE,
            ResultSet.CONCUR_READ_ONLY);
    var resultSet = preparedStatement.executeQuery();
    resultSet.last();
    resultSet.previous();
    String resultFirstName = resultSet.getString("FIRST_NAME");
    long resultAge = resultSet.getLong("AGE");
    System.out.println("First Name: " + resultFirstName + " | Age: " + resultAge);
}
...
printPerson(); // First Name: Samuel | Age: 28
We can specify different ResultSet options when creating a Statement
object. Here, we are passing the ResultSet.TYPE_SCROLL_SENSITIVE
option to make the ResultSet scrollable—its cursor can move forward and
backward. By default, ResultSet objects are created with the
ResultSet.TYPE_FORWARD_ONLY option, which means the cursor can only
move forward and changes in the database are not reflected in the ResultSet
object. In addition,
we have the ResultSet.CONCUR_READ_ONLY option that makes this
ResultSet read-only, meaning we cannot make database changes. To have a
modifiable ResultSet, we need to pass the
ResultSet.CONCUR_UPDATABLE option that allows us to use the
ResultSet to make database changes.
The following is an example of how we can update rows of data using the
ResultSet:
private static void updatePersonCountry(String firstName, String country) throws SQLException {
    String sql = "SELECT * FROM PERSON WHERE FIRST_NAME = ?";
    PreparedStatement preparedStatement = connection.prepareStatement(
            sql,
            ResultSet.TYPE_SCROLL_SENSITIVE,
            ResultSet.CONCUR_UPDATABLE);
    preparedStatement.setString(1, firstName);
    var resultSet = preparedStatement.executeQuery();
    while (resultSet.next()) {
        String resultFirstName = resultSet.getString("FIRST_NAME");
        String resultCountry = resultSet.getString("COUNTRY");
Defining entities
The magic behind the Jakarta Persistence lies in mapping a Java class to a
database table. We can do that by placing the Entity annotation on top of a Java
class:
@Entity
@Table(name = "USER")
public class User {
@Id
@Column(name = "id", updatable = false, nullable =
false)
private UUID id;
ManyToOne
A many-to-one relationship occurs when multiple rows of a table can be mapped
to only one row of another. In the context of attribute definitions and values, we
can have multiple attribute values of only one attribute type. We can implement
the AttributeValue entity using the ManyToOne annotation as follows:
@Entity
@Table(name = "ATTRIBUTE_VALUE")
public class AttributeValue {
@Id
private Long id;
private String value;
@ManyToOne
@JoinColumn(name="definition_id", nullable=false)
private AttributeDefinition attributeDefinition;
// Constructor, getters and setters omitted
}
Remember we specified the attributeDefinition in the mappedBy
property of the AttributeDefinition entity class. The
attributeDefinition appears here as a class attribute of the
AttributeValue entity class. Also note that in addition to the ManyToOne
annotation, we have a JoinColumn annotation specifying which column from
the AttributeValue entity, definition_id in our example, is used to
map AttributeValue entities back to the AttributeDefinition
entity.
We use the ManyToOne annotation here because this is a bidirectional
relationship in which the AttributeValue entity, holding the join column, is the
owning side of the relationship with the AttributeDefinition entity.
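For context, the inverse side of this bidirectional mapping, based on the
mappedBy usage described above, might look like the following sketch; the id
and name fields are assumptions:
@Entity
@Table(name = "ATTRIBUTE_DEFINITION")
public class AttributeDefinition {
    @Id
    private Long id;
    private String name; // Assumed field
    // The inverse side refers to the attributeDefinition field
    // declared in the AttributeValue entity.
    @OneToMany(mappedBy = "attributeDefinition")
    private List<AttributeValue> attributeValues;
    // Constructor, getters and setters omitted
}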
Let us check next how to use Jakarta Persistence to implement a one-to-one
relationship.
OneToOne
A one-to-one relationship occurs when a row from a table maps to only one row
of another table.
Consider the scenario where we need to map the relationship between the
ACCOUNT and PROFILE tables used in a database serving an internet forum.
The SQL code for MySQL that creates the ACCOUNT and PROFILE tables is as
follows:
CREATE TABLE PROFILE(
id INT PRIMARY KEY NOT NULL AUTO_INCREMENT,
name VARCHAR(255) NOT NULL,
description VARCHAR(255) NOT NULL,
website VARCHAR(255) NOT NULL
);
CREATE TABLE ACCOUNT(
id INT PRIMARY KEY NOT NULL AUTO_INCREMENT,
email VARCHAR(255) NOT NULL,
password VARCHAR(255) NOT NULL,
profile_id INT NOT NULL,
FOREIGN KEY (profile_id) REFERENCES PROFILE(id)
);
The PROFILE table must be created first because its id column is referenced by a
foreign key in the ACCOUNT table. Next, we can start by defining the Account
entity class:
@Entity
@Table(name = "ACCOUNT")
public class Account {
@Id
@GeneratedValue (strategy =
GenerationType.IDENTITY)
private Long id;
private String email;
private String password;
@OneToOne(cascade = CascadeType.ALL)
@JoinColumn(name = "profile_id",
referencedColumnName = "id")
private Profile profile;
// Constructor, getters and setters omitted
}
We have the GeneratedValue annotation right below the Id annotation. It
signals that the ID database column has an auto-increment mechanism for
generating ID values. The GenerationType.IDENTITY strategy is used for
scenarios where a special database identity column is used when a new entity is
created and needs an ID value as the primary key. The Jakarta Persistence
provider does not generate the ID value; instead, the underlying database
generates the ID. There are other strategies, like GenerationType.AUTO,
GenerationType.SEQUENCE, and GenerationType.TABLE, that
provide different behaviors for ID generation.
When a new user registers in the internet forum, they get an account and a
profile. The account contains login data such as email and password, while the
profile has data like name, description, and website. Every account must have
exactly one profile linked to it. We express that using the OneToOne annotation.
Note the usage of the cascade = CascadeType.ALL property: it means that
operations such as deleting an Account entity are propagated to its child
entities, so the associated Profile entity, being a child of the Account parent
entity, is also deleted.
In the ACCOUNT table, we have a profile_id column that acts as a foreign
key that points to the id column in the PROFILE table. Such a relationship is
represented using the JoinColumn annotation. Note that the Account entity
class has a class attribute called profile. When defining the Profile entity
class, we refer to the profile attribute:
@Entity
@Table(name = "PROFILE")
public class Profile {
@Id
@GeneratedValue(strategy =
GenerationType.IDENTITY)
private Long id;
private String name;
private String description;
private String website;
@OneToOne(mappedBy = "profile")
private Account account;
// Constructor, getters and setters omitted
}
Again, we use the OneToOne annotation to map the Account entity back to
the Profile entity. The mappedBy contains the name of the class attribute
profile defined in the Account entity class.
The last relationship to check is the many-to-many relationship. Let us see how
we can implement it using Jakarta Persistence.
ManyToMany
In many-to-many relationships, a row from one table can appear multiple times
in another table and vice versa. Such a relationship usually occurs when a join
table connects two tables.
Let us consider the scenario of a user management system that stores users,
groups, and the user group membership. The following is the SQL code for
MySQL that we can use to create tables to support the system:
CREATE TABLE USER(
id UUID PRIMARY KEY NOT NULL,
email VARCHAR(255) NOT NULL,
password VARCHAR(255) NOT NULL,
name VARCHAR(255) NOT NULL
);
@Id
@GeneratedValue(generator = "UUID")
@GenericGenerator(
name = "UUID",
strategy =
"org.hibernate.id.UUIDGenerator"
)
@ColumnDefault(" uuid()")
private UUID id;
private String email;
private String password;
private String name;
@ManyToMany
@JoinTable(
name="MEMBERSHIP",
joinColumns = @JoinColumn(name="user_id"),
inverseJoinColumns =
@JoinColumn(name="group_id")
)
private List<Group> groups;
// Constructor, getters and setters omitted
}
Instead of using a number as the ID, we use a UUID. Since UUIDs are not
numbers, we cannot rely on ID generators like
GenerationType.IDENTITY or GenerationType.AUTO. That is why
we use the GenericGenerator annotation with the
org.hibernate.id.UUIDGenerator class as the generator strategy; this
class is provided by Hibernate (which we will explore further in the next section),
not by Jakarta Persistence. Note the ColumnDefault annotation; it defines a
default column value when none is provided. We pass the uuid() function
from the MySQL database, which generates random UUID values. Such a function
can appear under different names depending on your database technology.
We have a groups class attribute annotated with the ManyToMany annotation,
followed by a JoinTable annotation that specifies the join table name as
MEMBERSHIP. The joinColumns property refers to the user_id column,
and the inverseJoinColumns property refers to the group_id column;
both columns belong to the MEMBERSHIP table.
Next, we define the Group entity:
@Entity
@Table(name = "GROUP")
public class Group {
@Id
@GeneratedValue(generator = "UUID")
@GenericGenerator(
name = "UUID",
strategy =
"org.hibernate.id.UUIDGenerator"
)
@ColumnDefault("uuid()")
private UUID id;
private String name;
@ManyToMany(mappedBy = "groups")
private List<User> users;
// Constructor, getters and setters omitted
}
The ManyToMany is used again, but the Group entity is mapped back to the
User entity through the mappedBy property.
Now that we have covered the fundamentals of defining entities and their
relationships, let us explore how to use Hibernate to retrieve and persist database
entities.
xsi:schemaLocation="http://java.sun.com/xml/ns/persistence
http://java.sun.com/xml/ns/persistence/persistence_1_0.xsd"
version="1.0">
<persistence-unit name="user">
<provider>org.hibernate.jpa.HibernatePersistenceProvider.
</provider>
<properties>
<property name="jakarta.persistence.jdbc.url"
value="jdbc:mysql://localhost:3306/user"/>
<property name="jakarta.persistence.jdbc.user"
value="test"/>
<property
name="jakarta.persistence.jdbc.password"
value="test"/>
</properties>
</persistence-unit>
</persistence>
This file specifies a persistence unit called user. Inside it, we can define
which Jakarta Persistence provider we want to use: the
org.hibernate.jpa.HibernatePersistenceProvider in our case.
It is in the persistence.xml that we also define the database connection
details. The configuration provided by the persistence.xml file enables the
creation of the EntityManager, an object responsible for initiating the
interaction with a database:
EntityManagerFactory entityManagerFactory =
Persistence.createEntityManagerFactory("user");
var entityManager =
entityManagerFactory.createEntityManager();
When we call
Persistence.createEntityManagerFactory("user"), Hibernate
tries to identify in the persistence.xml file a persistence unit called user.
From an EntityManagerFactory object, we call
createEntityManager to get an EntityManager that can be used for
further interactions with the database. The EntityManager allows sending
queries to the database:
List<Account> account = entityManager
.createQuery("SELECT a FROM Account a",
Account.class)
.getResultList();
The query we pass to the createQuery method is not plain SQL: it uses the
name of the Account entity class rather than the ACCOUNT table name. Also,
we pass Account.class to ensure that Hibernate returns Account entities.
The getResultList method is used when the query returns one or more
results, stored in a List collection.
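As a small additional sketch, entity queries like this can also take named
parameters through the standard setParameter method; the email value below is
only an illustration:
List<Account> accounts = entityManager
        .createQuery("SELECT a FROM Account a WHERE a.email = :email", Account.class)
        .setParameter("email", "[email protected]")
        .getResultList();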
It is also possible to create SQL queries that refer to real table names using the
createNativeQuery:
List<Account> account = entityManager
.createNativeQuery("SELECT * FROM ACCOUNT",
Account.class)
.getResultList();
We can insert new rows in the database using the persist method:
private void persist(Account account) {
entityManager.persist(account);
}
The persist method expects, as a parameter, an instance of an entity class that
is mapped to a database table.
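In a plain Java SE setup like the one shown here, write operations such as
persist typically run inside a transaction. The following is a minimal sketch,
assuming the entityManager created earlier:
var transaction = entityManager.getTransaction();
transaction.begin();
entityManager.persist(account); // account is an instance of a mapped entity class
transaction.commit();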
Next, we have the remove method, which can be used to delete database rows,
as in the following example:
private void remove(Account account) {
    entityManager.remove(account);
}
The remove method uses the entity's primary key to identify and remove the
table row.
In addition to the EntityManager, which comes from the Jakarta Persistence
specification, we can also use the Session, which is a Hibernate-specific
interface that extends the EntityManager interface:
Session session = entityManager.unwrap(Session.class);
List<Account> account = session
.createNativeQuery("SELECT * FROM ACCOUNT",
Account.class)
.getResultList();
A Session object is recommended when the application relies on special
features available only in Hibernate. Otherwise, the EntityManager is
enough.
Let us explore next the Jakarta Persistence Query Language (JPQL) and the
Criteria API.
criteriaQuery
.select(user)
.where(criteriaBuilder.equal(user.get("email"),
email));
return
session.createQuery(criteriaQuery).getSingleResult();
}
We create a CriteriaBuilder that builds a CriteriaQuery object
parameterized to work with User entities. From the CriteriaQuery, we get
the Root object for the User entity. With CriteriaQuery, we express the
database interaction using the select and where methods. The constraint to select
User entities having a specified email address is applied inside the where
method using the equal method from the CriteriaBuilder. Besides
equal, the CriteriaBuilder provides other helpful methods, like
notEqual, isNull, and isNotNull, to name a few.
In this section, we saw how to create Jakarta Persistence entities and their
relationships. We learned how to configure and use Hibernate, the most well-
known Jakarta Persistence implementation, to handle entity classes by querying,
persisting, and removing them. We also explored JPQL and the Criteria API.
This section covered the fundamentals required to prepare a Java application to
take advantage of the Jakarta Persistence specification and ORM technologies
like Hibernate.
The following section will cover the approaches to providing databases while
developing an application locally.
Conclusion
In this chapter, we learned how Java Database Connectivity (JDBC) provides
the building blocks for database interactions. Seeking a consistent and
standardized way to handle databases, we also learned about the Jakarta
Persistence specification and one of its main implementations, Hibernate. We
finished this chapter by assessing the possible approaches to accessing a
database while locally running a Java application.
In the next chapter, we will cover the technologies and techniques used to test
Java code. We will look at essential topics like unit and integration tests, the
prominent testing framework JUnit 5, and the ability to test using real systems
with Testcontainers.
1 https://github.com/h-thurow/Simple-JNDI
2 (check https://hibernate.org/)
CHAPTER 4
Preventing Unexpected Behaviors
with Tests
Introduction
A fundamental aspect of any successful Java project lies in how developers
handle tests. They can handle tests using different approaches, like unit and
integration testing. The time invested in testing pays off in the number of bugs
prevented.
So, in this chapter, we will explore how to use technologies like JUnit 5 and
Testcontainers to implement helpful unit and integration tests that prevent
unexpected system behaviors, help us better understand application behaviors,
and design simple yet effective code. While exploring testing technologies, we
will also learn about good practices for writing helpful tests that are easy to
grasp and maintain.
Structure
The chapter covers the following topics:
• Overviewing unit and integration tests
• Using JUnit 5 to write effective unit tests
• Implementing reliable integration tests with Testcontainers
• Compiling and running the sample project
Objectives
By the end of this chapter, you will have the fundamental skills to write effective
unit and integration tests on Java applications using JUnit 5 and Testcontainers.
Such skills will make you a better developer, ready to tackle any programming
challenges with much more confidence by ensuring the features you are
developing are secured by automated tests. Let us embark together on this
fascinating testing journey.
Unit tests
When discussing tests, it is always important to consider the scope we want to
address when validating system features. Such consideration is essential because
the bigger the scope, the broader the test dependencies, which may include
different systems, including databases, front-end applications, APIs, and so on.
Another thing to consider when widening the scope is the cost and complexity
associated with testing broader aspects of a system.
With the awareness that tests encompassing a large scope of dependencies are
complex and costly, we can reflect on which system elements can be tested in a
smaller scope. This leads us to the behaviors an application may expose through
methods containing a sequence of instructions. At the core of any Java
application, we have a collection of classes and methods orchestrating a
sequence of instructions to produce useful application behaviors aimed at
hopefully solving real-life problems.
With that in mind, unit tests are a technique to validate application behaviors on
an isolated, self-contained level. At this level, we usually have methods
containing logic that dictates how accurate and well an application handles its
use cases. Unit tests are not concerned with the application’s behavior when
interacting with external resources like a database. Instead, unit tests validate
application behaviors that can be checked without needing external resources or
dependencies. That is why, for example, we can employ the so-called mocks to
simulate such external resources when unit testing an application functionality
that depends on external resources.
We can then state that unit tests are not supposed to test an entire use case or
application feature, but rather a smaller part that contributes to such a use
case or feature. It is also important to understand that unit tests cannot replace
other tests, like acceptance tests, which validate how an application behaves
from the user’s perspective.
Unit tests are so appealing because they are cheap to implement yet so
beneficial. When properly implemented, they can protect critical areas of the
application from unwanted changes and side effects. As the code complexity
increases, unit tests can also help us identify what the application is doing with
test cases targeting all the required behaviors to deliver helpful application
features.
Although unit tests help validate how units of code behave, they cannot help us
answer how the application behaves when it needs to deal with external
resources like a database or an API. For such a purpose, we can rely on
integration tests, which we will examine next.
Integration tests
Earlier in this chapter, we discussed the idea of manual testing, which consists of
locally running an application and observing how it behaves when something is
changed. We can also locally run an application and see how it interacts with its
external dependencies, like databases or APIs served by other systems. This
validation increases our confidence that the application correctly handles data
provided by external resources.
Besides checking how an application behaves when dealing with external
dependencies, we can manually validate how different application components
or modules interact. For example, imagine an application based on the API,
service, and persistence layers materialized as application modules. We can
validate how a request arrives at the API layer, passes through the service and
persistence layers, and returns with some response data. Again, we can locally
run our application and manually test this flow by preparing test data and
request payloads, and by ensuring all external dependencies are in place so our
tests will not fail.
Real value can be achieved if we can automate the validation of application
behaviors that span different modules and external dependencies. We are no
longer interested only in the unit level of how an application behaves; instead,
we want to validate the end-to-end behavior across application modules and their
dependencies. We use integration tests to automate the validation of end-to-end
system functionalities, considering their required dependencies.
Note that the integration test scope is broader than that of a unit test, which
focuses on self-contained units of code responsible for supporting an application
use case. With integration tests, we can validate how different units of code work
together when combined.
In the next sections of this chapter, we will explore the techniques and
technologies we can use to create unit and integration tests. Let us start by
exploring what JUnit 5 can offer in terms of unit test automation.
Setting up JUnit 5
The JUnit 5 framework is built into a modular structure, allowing Java projects
to rely only on the framework modules that are relevant to the project. We can
select those modules when defining the Java project’s dependencies through a
dependency manager like Maven or Gradle. The following code is an example
showing which dependencies we must include in the Maven’s pom.xml file of
the Java project to have JUnit 5 working correctly:
<dependency>
<groupId>org.junit.jupiter</groupId>
<artifactId>junit-jupiter-api</artifactId>
<version>5.10.2</version>
<scope>test</scope>
</dependency>
<dependency>
<groupId>org.junit.jupiter</groupId>
<artifactId>junit-jupiter-engine</artifactId>
<version>5.10.2</version>
<scope>test</scope>
</dependency>
<dependency>
<groupId>org.junit.platform</groupId>
<artifactId>junit-platform-launcher</artifactId>
<scope>test</scope>
<version>1.8.2</version>
</dependency>
The dependencies with artifactId junit-jupiter-api and junit-
jupiter-engine are mandatory and represent the minimum requirement to get
started with JUnit 5. The last dependency, with artifactId
junit-platform-launcher, can be included if you face issues while
executing tests directly through an IDE like IntelliJ.
Aside from including the JUnit 5 dependencies in the pom.xml file, we also
need to add the maven-surefire-plugin to the plugins configuration of the
pom.xml:
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-surefire-plugin</artifactId>
<version>3.2.5</version>
</plugin>
The maven-surefire-plugin allows us to execute tests when using
Maven to build a Java application.
With all dependencies in place, we are ready to start creating unit tests. Before
we do so, let us create a simple account registration application that will serve as
the basis for our unit test exploration.
var isBirthDateValid =
validateBirthDate(accountPayload.birthDate());
if(!isBirthDateValid) {
throw new
Exception(INVALID_BIRTHDATE_MESSAGE);
}
}
// Code omitted
}
The INVALID_EMAIL_MESSAGE, INVALID_PASSWORD_MESSAGE, and
INVALID_BIRTHDATE_MESSAGE are constants we use to store messages
returned when the validation fails. After the constants, the validateAccount
method receives an AccountPayload object as a parameter. This method
relies on the validateEmail, validatePassword, and
validateBirthDate to check if the AccountPayload data is valid. If
the data is invalid, it throws an exception with a message describing why the
validation failed. The implementation of the validation methods is as follows:
public class ValidatorService {
//** Code omitted **//
private boolean validateEmail(String email) {
var regexPattern = "^(.+)@(\\S+)$";
return Pattern.compile(regexPattern)
.matcher(email)
.matches();
}
private boolean validatePassword(String password)
{
return password.length() >= 6;
}
private boolean validateBirthDate(LocalDate
birthDate) {
return Period.between(birthDate,
LocalDate.now()).getYears() >= 18;
}
}
The validateEmail checks if the email provided is in the correct format.
The validatePassword ensures the password contains at least six
characters. Finally, the validateBirthDate ensures that only birth dates
corresponding to an age of at least eighteen years are accepted.
These two classes are enough to implement the account registration system
initially. Let us start testing it then.
@Test
public void
givenInValidEmailString_thenValidationThrowsException()
{
// Arrange
var email = "@daviveira.dev"; // Invalid email address
var password = "123456";
var birthDate = LocalDate.of(1980, 1, 1);
// Prepare
var accountPayload = getAccountPayload(email, password, birthDate);
// Execute and Pre-assert
Exception exception = assertThrows(Exception.class, () -> {
    validatorService.validateAccount(accountPayload);
});
// Post-assert
String expected = "Email format is not valid";
String actual = exception.getMessage();
assertEquals(actual, expected);
}
// Code omitted
}
We place the org.junit.jupiter.api.Test annotation on top of the
methods we want to test. The test above checks if an exception is thrown when
we pass AccountPayload with an invalid email address. Also, it checks if the
exception message is indeed the one that says the email format is incorrect. Note
that we use comment terms like Arrange, Prepare, Execute, Pre-assert, and
Post-assert to describe the different stages of the testing method. This approach
derives from the Arrange-Act-Assert (AAA) testing pattern that prescribes a
way to organize tests to make them easier to read and maintain. In the following
sections, we explore the AAA pattern further.
Assertions
As one of the cornerstones of the JUnit 5 framework, the assertX methods
validate the test behaviors and data. In the
givenInValidEmailString_thenValidationThrowsException
test, we are using the assertThrows to check if the
validatorService.validateAccount(accountPayload) throws
an Exception when we pass an AccountPayload with an invalid email:
// Execute and Pre-assert
Exception exception = assertThrows(Exception.class, ()
-> {
validatorService.validateAccount(accountPayload);
});
Moving forward, there is also the assertEquals method, which receives two
parameters: one representing the actual value returned by the method under test,
and the other representing the expected value we want the actual value to match:
// Post-assert
String expected = "Email format is not valid";
String actual = exception.getMessage();
assertEquals(actual, expected);
If the actual and expected variables match, then the assertion is successful;
otherwise, the test fails. JUnit 5 provides plenty of other assertX methods,
such as assertTrue, assertFalse, and others, allowing you to perform
many different assertions.
There are situations when we need to unit test a method that contains calls to
external resources like a database. We can overcome it by mocking those calls
and allowing the test to execute without issues. Let us explore mocking
techniques using a technology called Mockito.
public RegistrationService(ValidatorService
validatorService,
AccountRepository accountRepository) {
this.validatorService = validatorService;
this.accountRepository = accountRepository;
}
// Code omitted
}
We have the ValidatorService we implemented previously, and the
AccountRepository intends to persist data in a database. Let us now
implement the logic that registers the account:
public class RegistrationService {
//Code omitted
public Account register(AccountPayload
accountPayload) throws Exception
{
validatorService.validateAccount(accountPayload);
return createAccount(accountPayload);
}
private Account createAccount(AccountPayload
accountPayload) {
var account = new Account(
accountPayload.email(),
accountPayload.password(),
accountPayload.birthDate(),
Instant.now(),
Status.ACTIVE
);
accountRepository.persist(account); // Throws
a RuntimeException
return account;
}
}
The register method receives an AccountPayload object as a parameter,
which is validated by
validatorService.validateAccount(accountPayload). If the
validation is okay, then it creates an Account object and persists it into the
database by calling accountRepository.persist(account). Finally,
it returns the Account object. Calling the register method in a unit test will
make it fail because of the RuntimeException. Let us see how to address it
using Mockito.
@BeforeEach
private void init(@Mock AccountRepository
accountRepository) {
registrationService = new
RegistrationService(new
ValidatorService(), accountRepository);
doNothing().when(accountRepository).persist(any());
}
// Code omitted
}
We use the class-level annotation
@ExtendWith(MockitoExtension.class) to make Mockito
capabilities available while executing JUnit 5 tests. Next, we add the
@BeforeEach annotation above the init method. The idea behind this
annotation is that the init method will be executed before every testing method
in the class is executed. Other variations like @BeforeAll, @AfterEach,
and @AfterAll allow us to execute helpful logic across different stages of the
test life cycle.
Note that the init method receives the @Mock AccountRepository
accountRepository as a parameter. The @Mock annotation marks the
accountRepository parameter as a mock, which Mockito injects during
test execution. Inside the init method, we create a new instance of the
RegistrationService, passing a real instance of the
ValidatorService and a mock of the AccountRepository.
Creating mocks is not enough; we need to specify in which conditions the mock
will be used. Mockito enables us to define the application’s behavior when a
method from the mocked object is called. Remember that the persist method
from the AccountRepository throws an exception:
public void persist(Account account) {
throw new RuntimeException("Database integration
is not implemented
yet");
}
To avoid this exception, we instruct Mockito to do nothing when the
persist(Account account) method is called:
doNothing().when(accountRepository).persist(any());
The idea behind the construct above is that we state the desired behavior,
doNothing(), followed by the condition
when(accountRepository).persist(any()), where we pass the
mocked object accountRepository and describe which method from it we
are mocking, which is the persist(any()) method. The any() is an
argument matcher that instructs Mockito to accept any object passed as a
parameter to a given method.
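For mocked methods that return a value rather than void, Mockito offers a
similar construct based on when and thenReturn. The following line is a
hypothetical sketch using the findByEmail method of the AccountRepository
and a prepared account object:
// Stubs the mocked repository to return a prepared account
// whenever findByEmail is called with any argument.
when(accountRepository.findByEmail(any())).thenReturn(account);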
Having prepared the mocks, we can proceed to create our test using the
following code:
@ExtendWith(MockitoExtension.class)
public class RegistrationServiceTest {
// Code omitted
@Test
public void
givenValidAccountPayload_thenAccountObjectIsCreated()
throws Exception {
// Arrange
var email = "[email protected]";
var password = "123456";
var birthDate = LocalDate.of(1980, 1, 1);
// Prepare
var accountPayload = getAccountPayload(email,
password, birthDate);
// Execute
var account =
registrationService.register(accountPayload);
// Assert
assertAll("Account is properly created",
() -> assertEquals(account.email(),
email),
() -> assertEquals(account.password(),
password),
() ->
assertEquals(account.birthDate(), birthDate),
() -> assertEquals(account.status(),
ACTIVE)
);
}
// Code omitted
}
The test above aims to check if the Account object returned by calling
registrationService.register(accountPayload) contains all
the expected data. Note we have multiple assertEquals calls inside the
assertAll method. Even if one of the assertEquals fails, the
assertAll will continue executing the other assertions until the end and
report which assertions have failed.
Next, let us check how to use Maven to execute the tests we created with the
RegistrationServiceTest and ValidatorServiceTest testing
classes.
Setting up Testcontainers
Like JUnit 5, Testcontainers has a modular structure, so we can bring
only the relevant dependencies into our Java project. Let us start by adding the
Testcontainers dependencies to the Maven pom.xml file, as follows:
<dependency>
<groupId>org.testcontainers</groupId>
<artifactId>mysql</artifactId>
<version>1.19.7</version>
<scope>test</scope>
</dependency>
<dependency>
<groupId>org.testcontainers</groupId>
<artifactId>junit-jupiter</artifactId>
<version>1.19.7</version>
<scope>test</scope>
</dependency>
The first dependency, identified by the artifactId mysql, allows the
creation of MySQL database containers. The next dependency, junit-
jupiter, enables us to hook containers into the JUnit 5 test life cycle.
Before implementing integration tests, let us provide a Jakarta Persistence
configuration to ensure the account registration system can connect to a real
MySQL database.
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://java.sun.com/xml/ns/persistence
http://java.sun.com/xml/ns/persistence/persistence_2_0.xsd"
version="2.0">
<persistence-unit name="account" transaction-
type="RESOURCE_LOCAL">
<provider>org.hibernate.jpa.HibernatePersistenceProvider</provi
<properties>
<property
name="jakarta.persistence.jdbc.driver"
value="com.mysql.cj.jdbc.Driver"
/>
<property
name="jakarta.persistence.jdbc.url"
value="jdbc:mysql://localhost:3306/account" />
<property
name="jakarta.persistence.jdbc.user"
value="test" />
<property
name="jakarta.persistence.jdbc.password" value="test"
/>
<property
name="jakarta.persistence.schema-
generation.database.action"
value="drop-and-create" />
<property
name="hibernate.connection.autocommit"
value="true" />
<property
name="hibernate.allow_update_outside_transaction"
value="true"/>
</properties>
</persistence-unit>
</persistence>
Note that we specify the com.mysql.cj.jdbc.Driver as the
jakarta.persistence.jdbc.driver. Next, we set
jdbc:mysql://localhost:3306/account as the database connection
URL. We intend to connect to the database on our machine on port 3306, using
the value test as both the user and the password. The property
jakarta.persistence.schema-generation.database.action is
set to drop-and-create on purpose to ensure Jakarta Persistence entities are
dropped and created every time the application starts. This configuration is not
recommended in production scenarios; we only use it here to demonstrate
integration tests with MySQL databases.
Finally, we adjust the AccountRepository class to use the
EntityManager to persist accounts and find them using their email
addresses. Following is the code implementation that lets us persist and find
Account database entities:
public class AccountRepository {
@PersistenceContext
private EntityManager entityManager;
// Code omitted
public void persist(Account account) {
AccountData accountData =
convertEntityToData(account);
entityManager.merge(accountData);
entityManager.flush();
}
public Account findByEmail(String email) {
Query query =
entityManager.createQuery("SELECT a FROM AccountData
a WHERE
email = :email", AccountData.class);
query.setParameter("email", email);
var accountData = (AccountData)
query.getSingleResult();
return convertDataToEntity(accountData);
}
// Code omitted
}
The persist method receives the Account object that we convert to an
AccountData Jakarta Persistence entity object required to persist data into the
database. The findByEmail method receives a String representing the
email address used to query AccountData objects, which we then convert to the
Account type. We do this conversion because Account is the domain entity
object, and AccountData is the database entity object.
We are ready to implement an integration test with Testcontainers.
@Container
public MySQLContainer<?> mySQLContainer = new
MySQLContainer<>("mysql:8.3.0")
.withDatabaseName("account")
.withUsername("test")
.withPassword("test")
.withExposedPorts(3306)
.withCreateContainerCmdModifier(cmd ->
cmd.withHostConfig(
new
HostConfig().withPortBindings(new
PortBinding(Ports.Binding.bindPort(3306), new
ExposedPort(3306)))
));
private RegistrationService registrationService;
@BeforeEach
public void init() {
registrationService = new
RegistrationService(new
ValidatorService(), new AccountRepository());
}
// Code omitted
}
The @Testcontainers enables the automatic creation of containers during
the JUnit 5 tests. It ensures that containers are running before testing starts. Next,
we have the @Container annotation on top of the instance variable
mySQLContainer, which receives a MySQLContainer instance
initialized with the "mysql:8.3.0" Docker image tag pulled from
the Docker Hub registry. We also use the withX methods to set database
settings such as database name, username, and password. The
withExposedPorts and withCreateContainerCmdModifier ensure
the application can connect to the MySQL container through the 3306 port.
Remember that was the port we defined in the persistence.xml file:
<property name="jakarta.persistence.jdbc.url"
value="jdbc:mysql://localhost:3306/account" />
Testcontainers let us create containers per test or a single container shared by all
tests in the class. We control this behavior using a static or instance variable with
the @Container annotation. In our example, the mySQLContainer is an
instance variable, so a new container will be created for every test method.
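For comparison, a single container shared by all test methods can be declared as
a static field instead; a minimal sketch:
// Declared static, this container is started once and shared
// by all test methods in the class.
@Container
static MySQLContainer<?> sharedMySqlContainer = new MySQLContainer<>("mysql:8.3.0")
        .withDatabaseName("account")
        .withUsername("test")
        .withPassword("test");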
Once the Testcontainers configuration is done, we can write our test method, as
follows:
@Testcontainers
public class EndToEndIT {
// Code omitted
@Test
public void
givenAnActiveAccountIsProvided_thenAccountIsSuspended()
throws Exception {
// Arrange
var email = "[email protected]";
var password = "123456";
var birthDate = LocalDate.of(2000, 1, 1);
// Prepare
var accountPayload = getAccountPayload(email,
password, birthDate);
var activeAccount =
registrationService.register(accountPayload);
// Pre-assert
assertEquals(activeAccount.status(), ACTIVE);
// Execute
var suspendAccount =
registrationService.suspend(email);
// Post-assert
assertEquals(suspendAccount.status(),
SUSPENDED);
}
// Code omitted
}
The above test checks if the account registration system can successfully
suspend an account. First, it creates an active account. We confirm it by running
a pre-assertion that confirms the ACTIVE status. Then, we execute the
registrationService.suspend(email) and assert that the returned
account is SUSPENDED. This integration test validates the account suspension
use case using a real MySQL database provided by Testcontainers.
Next, we learn how to run integration tests with Maven.
Conclusion
When we talk about testing Java applications, we mainly talk about running unit
and integration tests. Understanding what they are, their benefits, and how to
employ them constitutes a fundamental skill for any Java developer. We learned
in this chapter how unit tests backed by JUnit 5 can help validate application
behaviors at the unit, self-contained level of methods containing sequences of
instructions, including methods that make external resource calls such as database access.
For such methods, we learned how to mock external calls using Mockito.
Moving ahead, we tapped into the Testcontainers capabilities to quickly provide
containers for integration tests.
In the next chapter, we will examine the Java software development frameworks,
starting our exploration with Spring Boot. We will cover the fundamentals of the
Spring Boot framework, learn how to bootstrap a new project, implement a
RESTful API with Spring Web, and persist data with Spring Data JPA.
Introduction
Software development frameworks constitute the cornerstone of most enterprise
applications, providing the foundation for developing robust systems. Relying on
features provided by a framework can also save us precious time. Aware of the
benefits development frameworks can offer, we explore essential Spring Boot
features to build Java enterprise applications.
Over the years, Spring Boot has become one of the most mature software
development frameworks, widely used across various industries. If you are an
experienced Java developer, you have probably encountered at least one Spring Boot
application during your career. If you are a beginner Java developer, chances are
high that you will find Spring Boot applications crossing your path in your
developer journey. No matter how experienced you are, knowing Spring Boot is
a fundamental skill to remain relevant in the Java development market.
This chapter starts with a brief introduction to Spring fundamentals and then
proceeds with a hands-on approach to developing a simple CRUD Spring Boot
application from scratch with RESTful support.
Structure
The chapter covers the following topics:
• Learning Spring fundamentals
• Bootstrapping a new Spring Boot project
• Implementing a CRUD application with Spring Boot
• Compiling and running the sample project
Objectives
By the end of this chapter, you will have mastered the fundamental skills to build
Spring Boot applications, handle HTTP requests through RESTful APIs, and
persist data into relational databases using the Spring Data JPA. You will also
grasp how Spring Boot’s convention-over-configuration philosophy helps you
quickly set up new applications without spending too much time on
configuration details. With the knowledge in this chapter, you will have the
essentials to tackle challenging Spring Boot projects.
AnnotationConfigApplicationContext(PersonConfig.class);
var person = context.getBean(Person.class);
System.out.println(person.getName()); // John
Doe
}
}
The following steps describe how the PersonConfiguration and
PersonExample classes are implemented:
1. We place the @Configuration annotation on the
PersonConfiguration class. The @Configuration annotation
lets us configure the Spring context using a Java class.
2. We place the @Bean annotation above the person method. This
annotation tells Spring to create a new object instance based on the
annotated method.
3. Inside the person method, we create a new Person instance, set its
name attribute, and return it. Beans have names; by convention, a bean
is named after the method that produces the bean instance, which in
our example is person.
4. To use the Person bean, we need to initialize the Spring context by
passing the configuration class:
var context = new AnnotationConfigApplicationContext(PersonConfiguration.class);
5. The AnnotationConfigApplicationContext lets us
programmatically create a Spring context, which we can use to retrieve a
bean by calling the getBean method with the bean class type:
var person = context.getBean(Person.class);
The approach described by the previous example works well when there is only
one bean of the Person type in the Spring context. However, we can have
trouble if we produce multiple beans of the Person type:
@Configuration
class PersonConfiguration {
@Bean
public Person person() {
var person = new Person();
person.setName("John Doe");
return person;
}
@Bean
public Person anotherPerson() {
var person = new Person();
person.setName("Mary Doe");
return person;
}
}
The second bean is defined by the anotherPerson method, which also
returns a Person. We get an exception if we try to retrieve a bean by just
informing the bean type to the getBean method:
var person = context.getBean(Person.class);
// NoUniqueBeanDefinitionException: No qualifying bean
of type 'dev.davivieira.Person' available: expected
single matching bean but found 2: person,anotherPerson
We can overcome it by specifying also the bean name to the getBean method:
var person = context.getBean("anotherPerson",
Person.class);
System.out.println(person.getName()); // Mary Doe
It is also possible to override the default behavior where the method name is
used as the bean name and, instead, define your own bean’s name:
@Bean("mary")
public Person anotherPerson() {
var person = new Person();
person.setName("Mary Doe");
return person;
}
...
var person = context.getBean("mary", Person.class);
System.out.println(person.getName()); // Mary Doe
In the above example, instead of relying on the default behavior where the bean
name would be anotherPerson, we defined our bean name as mary. We
referred to it when calling the getBean method from the Spring context.
The previous bean creation examples considered only our classes, but we can
also create beans from classes we do not own:
@Bean
public LocalDate currentDate() {
return LocalDate.now();
}
That is especially useful when you want Spring to control instances of third-
party classes.
Using the @Bean annotation is not the only way to add beans into the Spring
context. We can also use special Spring stereotype annotations to transform a
class into a bean. Let us explore this further.
AnnotationConfigApplicationContext(PersonConfiguration.class);
var person = context.getBean(Person.class);
System.out.println(person.getName()); // null
}
}
It is not enough to just place the @Component annotation on top of the
Person class to make Spring detect and put it into its context. We need to tell
Spring where it must look to find classes annotated with stereotype annotations.
To achieve it, we use @ComponentScan(basePackages =
"dev.davivieira") with the @Configuration annotation in the
PersonConfiguration class. Note that the PersonConfiguration
class is empty this time, as we no longer provide beans using the @Bean
annotation. The @ComponentScan annotation accepts the basePackages
parameter to specify the packages containing the classes Spring needs to scan.
The Person class is in the dev.davivieira package in our example. Note
that in the previous example, when we call person.getName() to print the
name attribute of the Person bean, it returns null. That is expected because
Spring creates the Person instance using its default empty constructor and
does not set any value on class attributes like the Person's name attribute.
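Putting the pieces described above together, a minimal sketch of the component-scanning arrangement could look like this, assuming the Person class declares only a name attribute with a getter:
@Configuration
@ComponentScan(basePackages = "dev.davivieira")
class PersonConfiguration {
    // No @Bean methods; Spring discovers beans through component scanning
}

@Component
class Person {
    private String name;

    public String getName() {
        return name; // null until something assigns a value
    }
}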
When using the @Component annotation to create beans, we can set class
attribute values using the @PostConstruct annotation. This annotation is not
part of Spring; it comes from Jakarta EE, but Spring supports it.
We use the @PostConstruct annotation when we want Spring to execute
some code just after the bean object is created. The following is an example of
how we can use such an annotation to set the Person's name value:
@Component
class Person {
@PostConstruct
private void postConstruct() {
this.name = "John Doe";
}
// Code omitted
}
The postConstruct method is called just after the Person bean instance is
created.
There is also the @PreDestroy annotation that lets us execute code just before
a bean instance is destroyed. It is often used to close resources like database
connections.
The significant difference between beans created using stereotype annotations
like @Component and those created using the @Bean annotation is that the
stereotype annotations can only be used with our classes. In contrast, the @Bean
annotation approach lets us create beans from classes we do not own, like those
from third-party libraries.
Next, let us explore how to use beans to provide dependency injection using the
@Autowired annotation.
AnnotationConfigApplicationContext(PersonConfig.class);
var person = context.getBean(Person.class);
person.drive(); // John Doe knows how to drive
}
}
By calling the drive method from the Person bean, we can confirm it works
because it relies on the behavior provided by the Skills bean.
Injecting dependencies using the @Autowired annotation with the class
constructor is not Spring’s only dependency injection approach. We can also
inject dependencies by placing the @Autowired annotation on top of an
instance attribute:
@Component
class Person {
@Autowired
private Skills skills;
}
Although possible, this approach is not recommended because injecting
dependencies directly into class attributes makes the class harder to unit test,
since the dependency cannot easily be supplied without the Spring container.
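For comparison, the constructor-based injection mentioned above could look roughly like the following sketch, assuming the Skills bean and the drive behavior from the earlier example:
@Component
class Person {
    private final String name = "John Doe"; // Could also be set in a @PostConstruct method
    private final Skills skills;

    @Autowired
    Person(Skills skills) {
        this.skills = skills; // Spring injects the Skills bean here
    }

    public void drive() {
        skills.drive(name);
    }
}
In recent Spring versions, the @Autowired annotation can even be omitted when the class declares a single constructor.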
Now that we know how Spring creates and injects beans, let us explore an
exciting feature that allows us to intercept methods executed by Spring beans.
Aspect
Any behavior that can be shared across different application components is
called an aspect. Aspects are also considered cross-cutting elements because
they capture these shared application behaviors. Once a given behavior is
implemented as an aspect, it can be used by different parts of an application.
Aspects represent the code containing the behavior you want to add without
changing the existing code.
Join point
Application behaviors are usually represented through method executions, which
are called join points from the Spring AOP perspective.
Advice
We must define when the new behavior provided by an aspect is executed. For
that purpose, we rely on advice, which determines whether the aspect code runs
before, after, or around an existing application method.
Pointcut
The Spring AOP framework intercepts existing application methods and
executes aspect code before or after them. A pointcut is the expression that
selects which methods are intercepted by Spring.
With a fundamental understanding of AOP concepts, we are ready to explore
how they work in a Spring application. Let us start by learning how to enable the
aspect mechanism.
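The configuration that enables the aspect mechanism is not reproduced in this excerpt. In plain Spring, it is usually done by adding @EnableAspectJAutoProxy to the configuration class, with the AspectJ annotations (the aspectjweaver dependency) on the classpath; a sketch, assuming the PersonConfiguration class from the earlier examples, is shown below, and the LogSkillAspect excerpt then follows:
@Configuration
@EnableAspectJAutoProxy
@ComponentScan(basePackages = "dev.davivieira")
class PersonConfiguration {
}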
@Around("execution(*
dev.davivieira.Person.drive(..))")
public void logSkill(ProceedingJoinPoint
joinPoint) throws Throwable {
logger.log(Level.INFO, "Executing skill");
joinPoint.proceed();
logger.log(Level.INFO, "Skill executed with
success");
}
}
We examine the LogSkillAspect implementation through the following
steps:
1. We start by placing the @Aspect and @Component annotations above
the LogSkillAspect class declaration. The @Component is required
because the @Aspect does not make the LogSkillAspect class a
Spring bean.
2. We use the Java Logging API to get a Logger object.
3. We use the @Around annotation to specify which methods should be
intercepted.
4. The string "execution(* dev.davivieira.Person.drive(..))" we pass matches the
drive method from the Person class. We could use the string
"execution(* dev.davivieira.Person.*(..))" if we wanted to match any method of
the Person class. The * character is a wildcard that can mean different things
depending on where it is placed. For example, when used just after the opening
parenthesis of execution, it matches any method return type. When used after
the class name, like Person in our example, it matches any method name.
5. Note the logSkill method receives a ProceedingJoinPoint as a
parameter. This parameter represents the method being executed. Inside
the logSkill method, we call logger.log(Level.INFO,
"Executing skill") before calling joinPoint.proceed().
6. The proceed method delegates control to the intercepted Person's
drive method. After the drive method executes, logSkill calls
logger.log(Level.INFO, "Skill executed with
success").
With the aspect adequately implemented, we can remove the logger from the
Person class:
@Component
class Person {
// Code omitted
public void drive () {
skills.drive(name);
}
// Code omitted
}
We get the following output when rerunning the application with the aspect
adequately implemented:
Apr 06, 2024 10:27:37 PM dev.davivieira.LogSkillAspect
logSkill
INFO: Executing skill
Apr 06, 2024 10:27:37 PM dev.davivieira.Skills drive
INFO: John Doe knows how to drive
Apr 06, 2024 10:27:37 PM dev.davivieira.LogSkillAspect
logSkill
INFO: Skill executed with success
The log starts with the "Executing skill" entry, which refers to the
dev.davivieira.LogSkillAspect class, followed by the "John Doe
knows how to drive" entry, which refers to the
dev.davivieira.Skills class, where the drive method is executed.
After the drive method executes, the LogSkillAspect class retakes control
and provides the last "Skill executed with success" log entry.
Aspects, beans, and dependency injection comprise the building blocks of most
Spring Boot applications. Now that we understand how those building blocks
work, let us explore how to bootstrap a new Spring Boot project.
Setting up dependencies
Using Maven as the build tool for our Spring Boot project, we define the
following dependencies in the pom.xml file:
<dependencies>
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-web</artifactId>
</dependency>
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-data-jpa</artifactId>
</dependency>
<dependency>
<groupId>com.h2database</groupId>
<artifactId>h2</artifactId>
<scope>runtime</scope>
</dependency>
</dependencies>
The spring-boot-starter-web dependency allows the Spring Boot
application to expose RESTful HTTP endpoints. With spring-boot-
starter-data-jpa, we can interact with databases using ORM
technologies like Hibernate and Jakarta Persistence. The h2 dependency lets us
use an in-memory database while the application is running.
With the dependencies in place, we can configure the Spring Boot application.
SpringApplication.run(SpringSampleApplication.class,
args);
}
}
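For context, this run call normally lives inside the project's bootstrap class; a minimal sketch, assuming the standard @SpringBootApplication annotation and the class name shown in the listing:
@SpringBootApplication
public class SpringSampleApplication {
    public static void main(String[] args) {
        SpringApplication.run(SpringSampleApplication.class, args);
    }
}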
At this stage, the Spring Boot project can already be up and running. We are
ready to start implementing the application logic. Let’s start with the database
entity.
@Entity
public class Person {
    @Id
    String email;
    String name;
    // Getters and setters omitted
}
We place the @Entity annotation above the Person class to make it into a
Jakarta Persistence database entity. Spring Data JPA lets us easily interact with
databases by using repository interfaces. Let us explore it further in the next
section.
Creating a repository
To handle Person database entities, we need to implement a repository
interface:
@Repository
public interface PersonRepository extends
CrudRepository<Person, String> {
Optional<Person> findByEmail(String email);
}
The CrudRepository is an interface provided by Spring with standard
database operations like delete, save, and findAll. By extending
CrudRepository, we can create our own interface with new operations, as in
the example above, where we declare findByEmail on the
PersonRepository interface. The CrudRepository is a generic type in
which we need to specify the entity class handled by the repository (Person in
our example) and the type of the entity's ID, which in our case is String
because the email is the entity's ID.
Following the convention-over-configuration approach, Spring Data JPA lets us
define methods like findByX, where X can be one of the entity attributes.
With the entity and its repository implemented, we can create a service
responsible for handling database objects.
Implementing a service
It is a common practice to have service classes containing logic related to the
database entities. Also, service classes are sometimes introduced to form a
service layer that intermediates communication with repository classes from the
data layer. The following code can be used to implement the PersonService
class:
@Service
public class PersonService {
private final PersonRepository personRepository;
@Autowired
public PersonService(PersonRepository
personRepository) {
this.personRepository = personRepository;
}
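The service methods themselves are omitted from this excerpt; a minimal sketch, assuming they simply delegate to the injected PersonRepository, could look like the following (the book's actual implementation may differ):
    public List<Person> getAllPersons() {
        var persons = new ArrayList<Person>();
        personRepository.findAll().forEach(persons::add);
        return persons;
    }

    public void addPerson(Person person) {
        personRepository.save(person);
    }

    public Optional<Person> getPerson(String email) {
        return personRepository.findByEmail(email);
    }
}
The HTTP layer is provided by a PersonController class annotated with @RestController; its constructor and request-handling methods are shown next.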
@Autowired
PersonController(PersonService personService) {
this.personService = personService;
}
@GetMapping("/person")
private List<Person> getAllPersons() {
return personService.getAllPersons();
}
@PostMapping("/person")
private void addPerson(@RequestBody Person person)
{
personService.addPerson(person);
}
@GetMapping("/person/{email}")
private Person getPerson(@PathVariable String
email) throws Exception {
return
personService.getPerson(email).orElseThrow(() -> new
Exception("Person not
found"));
}
}
We start by placing the @RestController annotation above the
PersonController class, enabling the provision of methods that handle
HTTP requests. Note we have annotations like @GetMapping and
@PostMapping. We use them to define the endpoint’s relative URL and the
HTTP method it supports, such as GET or POST.
Note that we also use the @RequestBody annotation with the Person class.
This annotation allows Spring Boot to map the attributes of a JSON request
payload into class attributes. If we send a JSON payload with the email and
name attributes, Spring Boot will map those attributes to the Person class
attributes.
We can also pass URL parameters as we did with
@GetMapping("/person/{email}"), where the email parameter value
is captured by the @PathVariable annotation.
The controller is the last piece of our CRUD application. Let us see how we can
play with it next.
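As an illustration of such interaction, once the application is running on Spring Boot's default port 8080, requests along the following lines exercise the endpoints we implemented (the email and name values are made up for this example):
$ curl -H "Content-Type: application/json" -XPOST --data '{"email":"[email protected]","name":"John Doe"}' localhost:8080/person
$ curl localhost:8080/person
$ curl localhost:8080/person/[email protected]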
Conclusion
This chapter taught us how powerful Spring Boot can be for developing
enterprise applications. Starting with Spring fundamentals, we grasped essential
concepts like beans representing Java objects managed by Spring through its
context. To fully tap into the benefits provided by Spring beans, we explored
how the Spring dependency injection mechanism lets us inject beans into other
beans. In closing the fundamentals topic, we learned about aspect-oriented
programming (AOP) and how the Spring AOP lets us add new application
behaviors without modifying existing code. We checked how easy it is to start a
new Spring Boot application using the Spring Initializr CLI. Finally, we
developed a CRUD Spring Boot application with RESTful support to understand
how different components are arranged in a Spring Boot project.
In the next chapter, we continue our journey through Java frameworks by
exploring Quarkus’s cloud-native approach. We will learn the benefits Quarkus
provides and how to kickstart a new Quarkus project that will serve as the
foundation for developing a system exploring Quarkus features such as Quarkus
DI, Quarkus REST, and Panache.
CHAPTER 6
Improving Developer Experience
with Quarkus
Introduction
Designed to be a cloud-first framework, Quarkus presents an attractive
alternative for creating applications to run in the cloud. Based on industry
standards through specifications provided by projects like Jakarta EE and
MicroProfile, Quarkus helps developers build cloud-native applications by
offering, through its framework, libraries that support dependency injection,
data persistence, RESTful APIs, and much more.
Quarkus manages to conciliate cloud-native and enterprise development
practices gracefully, bringing the best of both worlds and making developers’
lives easier. So, this chapter introduces Quarkus and covers some of its features
most frequently used in enterprise Java applications running in the cloud.
Structure
The chapter covers the following topics:
• Assessing Quarkus benefits
• Kickstarting a new Quarkus project
• Building a CRUD app with Quarkus
• Writing native applications
• Compiling and running the sample project
Objectives
By the end of this chapter, you will understand why Quarkus can be a viable
framework choice for your next Java project. Once you grasp how fluid software
development can be when using Quarkus, you will see why it can help boost
developer productivity. You will also acquire the skills to build modern Java
applications by learning the Quarkus way to develop enterprise software.
@GET
@Produces(MediaType.TEXT_PLAIN)
public String hello() {
return "Hello from Quarkus REST";
}
}
The GreetingResource class exposes a REST endpoint that we can
test by sending an HTTP GET request to the application:
$ curl localhost:8080/hello
Hello from Quarkus REST
It shows the Quarkus application correctly handles HTTP requests.
If you are familiar with Spring Boot, you will appreciate that Quarkus eliminates
the need for a bootstrap class to start the application. While it is still an option
for customizing the start-up process, Quarkus is designed to start the application
without it.
Now that we have the Quarkus application up and running, let us implement a
CRUD project using fundamental Quarkus features like Quarkus DI, Hibernate,
Panache, and Quarkus REST.
Managed beans
A managed bean is a Java object controlled by the framework. In Spring,
managed beans live in the application context. In Quarkus, they live in what is
called the container, which is the framework environment where the application
is running. The managed beans’ lifecycle is controlled by the container that
decides when managed beans are created and destroyed. In Quarkus, we can
create beans at different levels, including the class, method, and field levels. It is
also possible to define the bean scope to determine how visible it will be to other
beans in the application. Some of the scopes we can use in a Quarkus project are
application-scoped, singleton, and request-scoped. Next, we explore how to
create and inject beans using such scopes.
Application-scoped beans
Whenever we need to provide an object that should be accessible from any place
in the application, we can use application-scoped beans. An application-scoped
bean object is created once and lives for the entire application runtime. The
following is how we can create an application-scoped bean:
@ApplicationScoped
public class Person {
    // Code omitted
}
...
@Path("/person")
public class SampleApplication {
    @Inject
    Person person;
    @GET
    @Produces(MediaType.TEXT_PLAIN)
    public String personName() {
        return person.getName(); // John Doe
    }
}
The SampleApplication is annotated with @Path("/person"), making
it a RESTful endpoint. It contains a Person attribute annotated with @Inject.
Class attributes annotated with @Inject are called injection points. When the
Quarkus application starts, it tries to locate a managed bean that can be assigned
to an injection point. Because we have annotated the Person class with
@ApplicationScoped, Quarkus can find a Person managed bean
and assign it to the person attribute injection point. For those coming from the
Spring world, the @Inject annotation works similarly to the @Autowired
annotation.
It is worth noting that application-scoped beans are lazy loaded, which means
their instances are created only when one of their methods or attributes is
invoked. In our previous example, the Person bean instance is created only
when its getName method is called.
An alternative to application-scoped beans is singleton beans. Let us check
how they work.
Singleton beans
Like application-scoped beans, singleton beans are also created only once and
made available for the entire application. Contrary to application-scoped beans,
singleton beans are eagerly loaded, appearing when the Quarkus application
starts. The following is how we can create a singleton bean:
@Singleton
public class Location {
    List<String> cities = List.of("Vancouver", "Tokyo", "Rome");
    @Produces
    List<String> countries() {
        return List.of("Canada", "Japan", "Italy");
    }
}
Here, we have two beans, one created at the class level through the
@Singleton annotation and the other at the method level with the
@Produces annotation. The following code is how we can inject those beans:
@Path("/location")
public class SampleApplication {
@Inject
Location location;
@Inject
List<String> countries;
@GET
@Produces(MediaType.TEXT_PLAIN)
public List<String> location() {
return Stream
.of(countries, location.cities)
.flatMap(Collection::stream)
.toList();
// [Canada, Japan, Italy, Vancouver, Tokyo, Rome]
}
}
Consider the two injection points, represented by the location and countries
attributes. Upon the start of the Quarkus application, these attributes are eagerly
assigned bean instances compatible with their types. In this case, injecting
Location would be enough to allow direct access to both the cities field
and the countries method from the Location singleton bean instance.
However, we introduce a new bean based on the countries method to
illustrate a specific use case: the possibility of having beans at the method
level.
Let us explore the practical application of request-scoped beans in Quarkus. This
understanding will equip us with the knowledge to effectively manage beans in
real-world scenarios.
Request-scoped beans
Both application and singleton-scoped beans are accessible from any part of the
system and last for as long as the Quarkus application is alive. On the other
hand, request-scoped beans let us create objects available only in the context of
an HTTP request. Once the request is finished, the request-scoped bean object
ceases to exist. Every HTTP request the Quarkus application receives will trigger
the creation of a new request-scoped bean object. Following is an example
showing how to use request-scoped beans:
@RequestScoped
public class Account {
    // name, email, and randomId attributes omitted
    @PostConstruct
    void setAccountAttributes() {
        // Initializes name and email and assigns a random number to randomId
    }
    @Override
    public String toString() {
        return "Account{" +
                "name='" + name + '\'' +
                ", email='" + email + '\'' +
                ", randomId=" + randomId +
                '}';
    }
}
The @RequestScoped annotation turns the Account class into a request-
scoped bean. Note that we have the @PostConstruct annotation above the
setAccountAttributes method, which initializes the class attributes after
creating the bean instance. Every time a new Account instance is created, the
setAccountAttributes will be executed to initialize all the attributes,
including the random attribute that receives a random number. The following is
how we use the Account bean:
@Path("/account")
public class SampleApplication {
@Inject
Account account;
@GET
@Produces(MediaType.TEXT_PLAIN)
public Account account() {
return account;
//#1 Account{name='John Doe',
email='[email protected]',
randomId=49}
//#2 Account{name='John Doe',
email='[email protected]',
randomId=23}
//#3 Account{name='John Doe',
email='[email protected]',
randomId=27}
}
}
The comments show which response we may get every time we send a GET
request to http://localhost:8080/account. Note the randomId changes every
time a new request is sent, which confirms that new Account beans are being
created for each request.
In addition to the application-scoped, singleton, and request-scoped, there are
also the dependent, session, and other customized scopes. Those additional
scopes provide different bean behaviors that are helpful in specific situations;
however, most of the time, you will be using the scopes we covered in this
section.
Quarkus comes with solid support for data persistence. Let us explore it next.
@Id
private String email;
private String password;
// Code omitted
}
We can implement an AccountRepository class to handle Account
entities:
@ApplicationScoped
@Transactional
public class AccountRepository {
@Inject
EntityManager entityManager;
// createAccount and getAccount methods omitted
}
...
@Inject
AccountRepository accountRepository;
@Override
public void run() {
accountRepository.createAccount("[email protected]",
"pass");
accountRepository.createAccount("[email protected]",
"pass");
accountRepository.createAccount("[email protected]",
"pass");
System.out.println(accountRepository.getAccount("user2@daviviei
// Account{email='[email protected]',
password='pass'}
}
}
As an alternative to using the EntityManager directly, Quarkus provides a
convenient Panache library that simplifies handling database entities. Let us
explore it.
@Inject
AccountRepository accountRepository;
@Override
public void run() {
accountRepository.persist(new
Account("[email protected]",
"pass"));
accountRepository.persist(new
Account("[email protected]",
"pass"));
accountRepository.persist(new
Account("[email protected]",
"pass"));
System.out.println(accountRepository.findByEmail("user2@davivie
}
}
Observe that we are calling the persist method from the
AccountRepository class. The persist is a built-in operation provided
by Panache that lets us save entities into the database.
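The Panache-based repository itself is not shown in this excerpt; a minimal sketch of what it could look like, assuming the custom findByEmail and deleteByEmail queries used elsewhere in the example, follows. It implements Quarkus's PanacheRepository interface, which provides persist, listAll, find, and delete out of the box:
@ApplicationScoped
public class AccountRepository implements PanacheRepository<Account> {

    public Account findByEmail(String email) {
        // "email" refers to the Account entity attribute
        return find("email", email).firstResult();
    }

    public void deleteByEmail(String email) {
        delete("email", email);
    }
}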
Let us now see how we use the active record pattern to achieve the same results
we achieved using the repository pattern.
@Id
private String email;
private String password;
// Code omitted
}
...
@Override
@Transactional
public void run() {
new Account("[email protected]",
"pass").persist();
new Account("[email protected]",
"pass").persist();
new Account("[email protected]",
"pass").persist();
System.out.println(Account.findByEmail("[email protected]"))
}
}
The @Transactional annotation is placed above the run method because,
inside it, we have database writing operations triggered when we call persist
after creating the Account object. With the active record pattern approach, we
concentrate everything on the Account class, which establishes the database
table mapping and provides customized database operations like
findByEmail.
Deciding between the repository and active record patterns is something that the
needs of your project will dictate.
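For reference, an active record version of the Account entity along the lines described above could be sketched as follows; it extends PanacheEntityBase because the entity declares its own @Id, and the static findByEmail follows the standard Panache idiom (the book's exact implementation may differ):
@Entity
public class Account extends PanacheEntityBase {
    @Id
    private String email;
    private String password;

    public Account() {
        // Required by Jakarta Persistence
    }

    public Account(String email, String password) {
        this.email = email;
        this.password = password;
    }

    public static Account findByEmail(String email) {
        return find("email", email).firstResult();
    }
    // toString omitted
}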
Having covered the fundamental aspects of how Quarkus deals with databases,
let us explore how to build an API with Quarkus.
@Path("/account")
public class AccountEndpoint {
    @Inject
    AccountRepository accountRepository;
    // Code omitted
}
We start by placing the @Path("/account") annotation above the
class name. By doing that, we establish a URL path that will be part of all
endpoints defined inside the AccountEndpoint class.
4. We first implement the endpoint that allows creating new accounts:
@Path("/account")
public class AccountEndpoint {
@Inject
AccountRepository accountRepository;
@POST
@Transactional
@Consumes(MediaType.APPLICATION_JSON)
@Produces(MediaType.APPLICATION_JSON)
public void create(Account account) {
accountRepository.persist(account);
}
// Code omitted
}
HTTP requests that create resources should be handled as POST requests,
so we have the @POST annotation on the create method. The
@Transactional annotation is also present because this endpoint
triggers a database write operation that must be done inside a
transaction. With the @Consumes and @Produces annotations, we
determine which media type this endpoint consumes and produces:
MediaType.APPLICATION_JSON for both.
5. Moving on, we implement the endpoints that allow us to retrieve
accounts:
@Path("/account")
public class AccountEndpoint {
@Inject
AccountRepository accountRepository;
// Code omitted
@Path("/{email}")
@GET
public Account get(@PathParam("email") String email) {
return accountRepository.findByEmail(email);
}
@Path("/all")
@GET
public List<Account> getAll() {
return accountRepository.listAll();
}
// Code omitted
}
The endpoint defined by the get method receives GET requests from the
/account/{email} path. The {email} is a path parameter mapped
to the endpoint method parameter at @PathParam("email")
String email. We also define an additional GET endpoint at
/account/all that retrieves all existing accounts.
6. Finally, we define an endpoint to delete accounts:
@Path("/account")
public class AccountEndpoint {
@Inject
AccountRepository accountRepository;
// Code omitted
@Path("/{email}")
@Transactional
@DELETE
public void delete(@PathParam("email") String email) {
accountRepository.deleteByEmail(email);
}
// Code omitted
}
The endpoint defined by the delete method receives HTTP DELETE
requests at /account/{email} and uses the email address to locate
and delete Account entities from the database. The @Transactional
annotation is required because deletion is a database write operation.
Once the Quarkus application is up and running, it is time to get hands-on with
the API. We can interact with it by sending various requests.
We send POST requests to create new accounts:
$ curl -H "Content-Type: application/json" -XPOST --data '{"email":"[email protected]","password":"123"}' localhost:8080/account
$ curl -H "Content-Type: application/json" -XPOST --data '{"email":"[email protected]","password":"123"}' localhost:8080/account
$ curl -H "Content-Type: application/json" -XPOST --data '{"email":"[email protected]","password":"123"}' localhost:8080/account
We can send the following GET request to retrieve a specific account:
$ curl -s localhost:8080/account/[email protected] | jq
{
"email": "[email protected]",
"password": "123"
}
The following is how we can retrieve all accounts:
$ curl -s localhost:8080/account/all| jq
[
{
"email": "[email protected]",
"password": "123"
},
{
"email": "[email protected]",
"password": "123"
},
{
"email": "[email protected]",
"password": "123"
}
]
To delete an account, we need to send an HTTP DELETE request:
curl -XDELETE
localhost:8080/account/[email protected]
If we send a new request to get all accounts, we can confirm that one of the
accounts no longer exists in the database:
$ curl -s localhost:8080/account/all| jq
[
{
"email": "[email protected]",
"password": "123"
},
{
"email": "[email protected]",
"password": "123"
}
]
Having working REST endpoints in a Quarkus application does not take much.
The framework lets us easily express how the application endpoints should
behave through intuitive annotations.
Let us explore next how to write native applications using Quarkus.
<quarkus.native.enabled>true</quarkus.native.enabled>
</properties>
</profile>
</profiles>
<!-- Code omitted -->
</project>
Both the profile ID and name are defined as native. We use the profile name later
when executing the mvn command to compile the native executable. The native
compilation is activated through the quarkus.native.enabled property
set as true. After creating the Maven profile, we can create a native executable
by executing the following command:
$ mvn clean package -Pnative
...
Produced artifacts:
/project/chapter06-1.0.0-SNAPSHOT-runner (executable)
/project/chapter06-1.0.0-SNAPSHOT-runner-build-output-stats.json (build_info)
...
Instead of producing a JAR file, the command above creates a native executable
called chapter06-1.0.0-SNAPSHOT-runner that we can execute by
issuing the following command from within the project’s root directory:
$ ./target/chapter06-1.0.0-SNAPSHOT-runner
__ ____ __ _____ ___ __ ____ ______
--/ __ \/ / / / _ | / _ \/ //_/ / / / __/
-/ /_/ / /_/ / __ |/ , _/ ,< / /_/ /\ \
--\___\_\____/_/ |_/_/|_/_/|_|\____/___/
2024-09-08 21:47:13,318 INFO [io.quarkus] (main)
chapter06 1.0.0-SNAPSHOT native (powered by Quarkus
3.9.4) started in 0.016s. Listening on:
http://0.0.0.0:8080
2024-09-08 21:47:13,318 INFO [io.quarkus] (main)
Profile prod activated.
2024-09-08 21:47:13,318 INFO [io.quarkus] (main)
Installed features: [agroal, cdi, hibernate-orm,
hibernate-orm-panache, jdbc-h2, narayana-jta, rest,
rest-jackson, smallrye-context-propagation, vertx]
Notice that the first logged line mentions that the Quarkus application runs in the
native mode.
To wrap up what we have learned in this chapter, in the next section, we run the
Account application and send sample requests to ensure it is working as
expected.
Introduction
Developing enterprise-grade applications in Java involves following the
standards and best practices that contribute to the robustness, stability, reliability,
and maintainability requirements that often appear when building mission-
critical applications. Such requirements can be met by relying on the
specifications provided by the Jakarta EE, a project that prescribes how
fundamental aspects of enterprise applications, like persistence, dependency
injection, transactions, security, and more, should work. Although extensive in
its coverage of what enterprise applications require, Jakarta EE does not
contain specifications that support the development of lightweight cloud-native
applications based on highly distributed architectures like microservices. To fill
this gap, the MicroProfile specification helps developers build modern enterprise
applications that are better prepared for cloud-native environments.
This chapter teaches us how powerful Jakarta EE and MicroProfile are when
combined to build modern enterprise Java applications.
Structure
The chapter covers the following topics:
• Overviewing Jakarta EE
• Starting a new Jakarta EE project
• Building an enterprise application with Jakarta EE
• Adding microservices and cloud-native support with MicroProfile
• Compiling and running the sample project
Objectives
By the end of this chapter, you will be able to develop cloud-native enterprise
applications using the Jakarta EE and MicroProfile specifications. This chapter
shows you how the technologies derived from both specifications support the
development of applications that harness the time-tested enterprise features
provided by Jakarta while also tapping into the cloud-native development
capabilities offered by MicroProfile.
Overviewing Jakarta EE
Jakarta EE is a continuation of a project officially launched in 1999 under the
name of Java 2 Enterprise Edition (J2EE). The vision behind this project was
to provide a set of specifications to support the development of Java enterprise
applications. These specifications would target technologies like databases,
messaging, and web protocols such as HTTP and WebSockets, which were
common in enterprise projects. Instead of reinventing the wheel by defining how
to deal with those technologies, developers could rely on the standards provided
by the J2EE specifications. Relying on the specifications would also grant some
flexibility and vendor lock-in protection because multiple vendors offered
implementations of the same J2EE specifications.
Sun Microsystems, later acquired by Oracle, was responsible for the first
versions of the J2EE specifications. In 2006, the project was renamed Java EE.
In 2017, Oracle decided to turn Java EE into an open-source project by
transferring its governance to the Eclipse Foundation, which later renamed it
Jakarta EE. Oracle gave up on Java EE partly because it could not keep up with
the pace of innovation of other frameworks like Spring. Many developers
regarded Java EE as too complex and less productive than its alternatives,
which delivered the same functionality, and more, in a more straightforward
way.
However, even with its reputation for being complex and heavyweight, a
reputation it partly shed after Java EE 5 introduced annotation-based
configuration as an alternative to the XML-based one, Java EE found strong
adoption across many industries, such as banking and telecommunications, that
relied on Java enterprise technology to enable their most critical operations. The
specifications provided the stability and robustness that big corporations needed
to ensure the health of their businesses.
Many things have changed since the first J2EE/Java EE version until its latest
incarnation, the Jakarta EE. Some specifications were removed because they no
longer make sense today, and other specifications evolved to reflect the needs of
modern software development. Although new frameworks and ways to develop
enterprise Java software have appeared, some principles and ideas from Jakarta
EE remain relevant and are still in use today.
Jakarta EE proposes a multitier architecture for developing enterprise
applications. Let us explore what multitiered applications mean further.
Client tier
The client tier is where all the enterprise system clients live. A client can be a
user interface (UI) served through a web browser or desktop application that
interacts with the enterprise system. Other applications that are not UIs can also
act as clients of the enterprise system. The main characteristic of the client-tier
components is that they trigger behaviors in the enterprise system by making
requests to it. A client is technology-agnostic; it can be developed in Java or any
other technology.
Web tier
An enterprise system may offer a web application that renders HTML pages
accessible through a web browser. Although not so common today, where most
front-end development is client-side, support for server-side front-end
applications is also part of the Jakarta EE specification. The Jakarta Server
Pages (JSP) and Jakarta Server Faces (JSF) specifications, built on top of the
Jakarta Servlet specification, are the technologies we find on the web tier that
enable enterprise system components to serve web resources.
Business tier
Business rules represent the most critical component of an enterprise system.
Whatever business problem an enterprise system aims to solve, the business tier
has components containing the business logic responsible for solving it. That is
where Jakarta EE components like Jakarta Persistence entities and stateless,
stateful, and message-driven beans, also known as enterprise beans, are used to
solve business problems.
Note that the communication flow starts with the client and then goes through
the web, business, and the EIS tier. Employing all tiers is not mandatory in a
Jakarta EE project. You can have an enterprise system that does not contain any
web component, so the web tier would not exist in such a system.
Jakarta EE is a collection of specifications governing the development of
enterprise software in Java. Let us further explore these specifications.
Note that the figures accompanying this discussion are not reproduced here; they
show only a subset of the Platform's specification components, with anything not
considered strictly necessary for the development of cloud applications removed.
Web Archive
Classes representing web components like servlets and other web resources,
including HTML pages and images, are packaged in a WAR file. JAR files
containing enterprise beans can also be bundled into a WAR file. So, in this
packaging structure, we can have both web and enterprise components packed
into a single WAR file.
Enterprise Archive
It is possible to put all the related modules of an enterprise application into an
EAR file, which allows a packaging structure based on JAR files containing
enterprise beans and WAR files containing web components.
Once you have packaged your enterprise application, you can deploy it into a
compatible server. Fully compliant Jakarta EE projects must run on certified
Jakarta EE servers, which adhere to one of the Jakarta EE Platform or Profile
specifications. Application servers like Oracle WebLogic, RedHat JBoss, and
Payara are examples of certified Jakarta EE servers. It is also possible to deploy
WAR and JAR files into non-certified servers like Tomcat, which does not
implement all the required Jakarta EE specifications but provides enough
capabilities to host enterprise Java applications.
Jakarta EE continues evolving to remain relevant as a solid platform for
developing enterprise applications. In this section, we covered only the surface
of this vast project, which offers a standard for creating enterprise Java
applications through its specifications. Let us now explore another ramification
of Jakarta EE: MicroProfile, a project that targets microservices development.
Introducing MicroProfile
Over the years, we have seen significant changes in the approaches to
developing Java enterprise applications. With decreased computing costs and the
popularization of cloud computing technologies like containers, developers
started to think in different ways to design enterprise systems. Instead of
developing a single monolith application to run in heavyweight application
servers, developers are now exploring distributed architectures based on smaller
applications, the so-called microservices, that run on containers orchestrated by
Kubernetes.
Organizations seeking to improve their ability to respond quickly to customer
needs have widely adopted distributed architectures, like microservices. The
main argument is that breaking a monolith system into smaller applications, such
as microservices, helps to tackle the maintainability and scalability issues when a
monolith system becomes too big. New challenges arise for the Java developer
aspiring to tap into the benefits of cloud-native applications based on the
microservices architecture. It is essential to understand how Java applications
behave when running inside containers. Knowing how to implement monitoring
and observability capabilities becomes a critical development activity to ensure
all components of a distributed system are running as expected. It is also
fundamental to know how the components of a distributed system communicate
with each other.
It is not trivial to ensure that a Java distributed system based on multiple
microservices applies the techniques and technologies to leverage all the benefits
provided by cloud environments. So, MicroProfile proposes, through its
specifications, a standard for developers aiming to create Java microservices that
run in cloud runtimes based on container technologies like Docker and
Kubernetes.
MicroProfile is similar to Jakarta EE in that it prescribes how to do things in a
Java enterprise system. However, it differs by providing a set of specifications
explicitly tailored for designing enterprise applications using cloud-native
development techniques and technologies.
To better understand how MicroProfile works, let us explore its specifications
further.
MicroProfile specifications
A typical Java microservice application may require monitoring, observability,
configuration, security, and other essential capabilities. The MicroProfile covers
the following capabilities with a set of specifications for cloud-native
development practices:
• MicroProfile Telemetry 1.1
• MicroProfile OpenAPI 3.1
• MicroProfile Rest Client 3.0
• MicroProfile Config 3.1
• MicroProfile Fault Tolerance 4.0
• MicroProfile Metrics 5.1
• MicroProfile JWT Authentication 2.1
• MicroProfile Health 4.0
Troubleshooting errors is one of the challenges when using microservices
architecture because to understand why a request failed, a developer may need to
check the log of multiple microservices involved in the failing request. The
MicroProfile Telemetry 1.1 provides advanced observability capabilities with
spans and traces, elements that help us to understand the flow of requests
crossing different applications. With MicroProfile Rest Client 3.0, we can make
microservices communicate with one another. With MicroProfile Health 4.0, we
explore the approaches to notify external agents about the health of a Java
application.
The MicroProfile specifications govern the development aspects of Java
enterprise applications running in cloud-native environments. Vendors like Open
Liberty, Quarkus, and Payara implement the MicroProfile specifications.
After covering the Jakarta EE and MicroProfile specifications, let us learn how
to use them to build Java enterprise applications.
}
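For context, the configuration class discussed below is typically just a Jakarta REST Application subclass; the following is a sketch, where the actual @ApplicationPath value comes from the generated project and may differ:
@ApplicationPath("/")
public class ApplicationConfig extends Application {
}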
The ApplicationConfig from the example above is the same one produced
by the Maven command that created the initial Jakarta EE project. The
@ApplicationPath annotation lets us configure the root path that will
precede all RESTful endpoints our application provides. We can create a new
RESTful endpoint by implementing the SampleResource class:
@Path("sample")
public class SampleResource {
@GET
@Produces(MediaType.TEXT_PLAIN)
public String sample() {
return "Sample data for the Jakarta EE
application";
}
}
The "sample" path defined here through the @Path annotation will be
appended to the path provided by the @ApplicationPath defined in the
ApplicationConfig class. The @GET and @Produces annotations come
from the Jakarta RESTful Web Services specification. We use these annotations
to expose an endpoint that receives HTTP GET requests and produces plain text
responses.
To run the enterprise application, we must provide a compatible Jakarta EE
runtime—an application server that implements the Jakarta EE specification. We
can accomplish this using the WildFly application server. We can configure it on
the project’s pom.xml file by adding the following Maven plugin:
<plugin>
<groupId>org.wildfly.plugins</groupId>
<artifactId>wildfly-maven-plugin</artifactId>
<version>5.0.1.Final</version>
<executions>
<execution>
<phase>install</phase>
<goals>
<goal>deploy</goal>
</goals>
</execution>
</executions>
</plugin>
The plugin above must be placed within the plugins section of the pom.xml file.
Once WildFly is adequately configured, we can start the application by running
the following command:
$ mvn clean package wildfly:run
We can confirm the application is working by accessing the URL
http://localhost:8080/enterpriseapp/sample in the browser or executing the
following command:
$ curl -X GET
http://localhost:8080/enterpriseapp/sample
Sample data for the Jakarta EE application
We get a plain text response by sending a GET request to
http://localhost:8080/enterpriseapp/sample.
Next, we explore how to combine Jakarta EE and MicroProfile to build
enterprise applications that support cloud-native and microservices capabilities.
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-compiler-plugin</artifactId>
<version>3.11.0</version>
</plugin>
<plugin>
<artifactId>maven-war-plugin</artifactId>
<version>3.4.0</version>
<configuration>
<failOnMissingWebXml>false</failOnMissingWebXml>
</configuration>
</plugin>
<!-- code omitted -->
</plugins>
</build>
</project>
We use the maven-compiler-plugin to compile the Java source files from
our MicroProfile project. The maven-war-plugin lets us package the
application files into a deployable WAR file.
Finally, we configure our MicroProfile project to run using the Payara runtime:
<?xml version="1.0" encoding="UTF-8"?>
<project ...>
<!-- code omitted -->
<build>
<finalName>license-management</finalName>
<plugins>
<!-- code omitted -->
<plugin>
<groupId>org.codehaus.cargo</groupId>
<artifactId>cargo-maven3-plugin</artifactId>
<version>1.10.11</version>
<configuration>
<container>
<containerId>payara</containerId>
<artifactInstaller>
<groupId>
fish.payara.distributions
</groupId>
<artifactId>payara</artifactId>
<version>6.2024.1</version>
</artifactInstaller>
</container>
</configuration>
</plugin>
</plugins>
</build>
</project>
The cargo-maven3-plugin configured with the payara dependency
allows us to run our MicroProfile application on a Payara server that is provided
as a dependency of the Maven project.
After properly setting up the Maven project with the correct dependencies and
build configurations, we can start development using the Jakarta EE and
MicroProfile specifications. Let us begin defining a persistent data source for our
license management application.
Defining a data source
The license management application relies on an H2 in-memory database to
store data. We can configure this database by first creating a web.xml file in
the src/main/webapp/WEB-INF directory of the MicroProfile project:
<?xml version="1.0" encoding="UTF-8"?>
<web-app
xmlns="http://xmlns.jcp.org/xml/ns/javaee"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://xmlns.jcp.org/xml/ns/javaee
http://xmlns.jcp.org/xml/ns/javaee/web-
app_3_1.xsd"
version="3.1" >
<data-source>
    <name>java:global/h2-db</name>
    <class-name>org.h2.jdbcx.JdbcDataSource</class-name>
    <url>jdbc:h2:mem:test</url>
</data-source>
</web-app>
The name component defines the data source name the MicroProfile application
uses to establish a connection with the database. The class-name component
specifies that this is an H2 data source. The url component contains the JDBC
URL expressing the connection to an in-memory H2 database called test. Having
a data source declaration inside a MicroProfile application’s web.xml file
allows this data source configuration to be deployed into the Payara server when
the MicroProfile application is also deployed.
After defining the data source in the web.xml file, we need to configure the
MicroProfile application to use it. We do that through the persistence.xml
file in the src/main/resources/META-INF directory:
<?xml version="1.0" encoding="UTF-8"?>
<persistence
xmlns="https://jakarta.ee/xml/ns/persistence"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="https://jakarta.ee/xml/ns/persistence
https://jakarta.ee/xml/ns/persistence/persistence_3_0.xsd"
version="3.0">
<persistence-unit name="H2DB">
<jta-data-source>java:global/h2-db</jta-data-source>
<properties>
    <property name="jakarta.persistence.schema-generation.database.action"
              value="drop-and-create"/>
</properties>
</persistence-unit>
</persistence>
By providing the persistence.xml file, we enable our MicroProfile application to
use Jakarta Persistence. Note that the value java:global/h2-db used in the
jta-data-source configuration component is the same value from the data
source definition of the web.xml file. We set the property
jakarta.persistence.schema-generation.database.action
to drop-and-create to ensure our MicroProfile application creates tables in
the H2 database. These tables are created based on the Jakarta Persistence
entities implemented by the MicroProfile application.
After configuring the data source, we can start implementing the business logic
of our license management application. Let us start our implementation by
defining a Jakarta Persistence entity.
@Entity
public class License {
    @Id
    @GeneratedValue(strategy = GenerationType.SEQUENCE)
    long id;
    String name;
    LocalDate startDate;
    LocalDate endDate;
    boolean isExpired;
    // Code omitted
}
We place the @Entity annotation to make this class a valid Jakarta Persistence
entity. All entities require an ID, which is provided by declaring the id attribute
with the @Id and @GeneratedValue annotations. The @GeneratedValue
annotation is configured with the GenerationType.SEQUENCE strategy that
automatically generates IDs when persisting new entities to the database. Let us
create a repository class responsible for handling license database entities.
Implementing a repository with the EntityManager
Still relying on the Jakarta Persistence specification, we implement a repository
class using the EntityManager:
@ApplicationScoped
@Transactional
public class LicenseRepository {
@PersistenceContext
private EntityManager entityManager;
public void persist(License license) {
entityManager.persist(license);
}
public List<License> findAllLicenses() {
    return (List<License>) entityManager
        .createQuery("SELECT license from License license")
        .getResultList();
}
}
Coming from the Jakarta CDI specification, we have the
@ApplicationScoped annotation to ensure the creation of one managed
bean instance of the LicenseRepository class. The
@PersistenceContext annotation placed above the entityManager
field relies on the configuration provided by the persistence.xml we
created earlier. This repository class contains a method that persists License
entities and another that retrieves all License entities from the database.
The persistence context represents the entities we can persist into the database.
When the Java application starts, the Jakarta entities are mapped and put into the
persistence context. Every persistence context is associated with an entity
manager, allowing entities to be handled from such a context.
Implementing a service class as a Jakarta CDI managed bean
Service classes are usually implemented to apply business logic and handle
application behaviors that may depend on external data sources. When dealing
with business logic that depends on the database, the service class can have a
direct dependency on the repository class responsible for handling database
entities:
@ApplicationScoped
public class LicenseService {
    @Inject
    private LicenseRepository licenseRepository;
    // Code omitted
}
...
@Path("/license")
@Tag(name = "License API", description = "It allows managing licenses")
public class LicenseEndpoint {
    @Inject
    private LicenseService licenseService;
    // Code omitted
}
The @Path annotation from the Jakarta RESTful Web Services specification is
crucial as it allows us to define the endpoint path. It is worth noting that the base
application path is /api, defined in the LicenseApplication class, which
is then prepended to the /license path used here in the LicenseEndpoint
class, resulting in the /api/license path. The @Tag annotation from the
MicroProfile OpenAPI specification is essential for generating API
documentation.
Continuing with the LicenseEndpoint implementation, we implement an
endpoint allowing the creation of new licenses:
@Path("/license")
@Tag(name = "License API", description = "It allows
managing licenses")
public class LicenseEndpoint {
// Code omitted
@POST
@Consumes(MediaType.APPLICATION_JSON)
@Produces(MediaType.APPLICATION_JSON)
@Operation(summary = "It creates a license",
description = "A new
license is created and
persisted into the database")
@APIResponses(value = {
@APIResponse(
responseCode = "200",
description = "A new license has
been successfully
created"
)
})
public void createLicense(License license) {
licenseService.createLicense(license);
}
// Code omitted
}
The @POST annotation creates an endpoint path accessible at /api/license
through HTTP POST requests.
The @POST, @Consumes, and @Produces come from the Jakarta RESTful
Web Services specification. Note that this endpoint consumes and produces
JSON data. The Jakarta JSON Processing and Jakarta JSON Binding
specifications provide JSON support. The remaining annotations come from the
MicroProfile OpenAPI specification and allow us to provide detailed
information that will be used to document this endpoint in the API
documentation.
We finish by implementing an endpoint that retrieves all available licenses:
@Path("/license")
@Tag(name = "License API", description = "It allows
managing licenses")
public class LicenseEndpoint {
// Code omitted
@GET
@Operation(summary = "It retrieves all licenses",
description = "It
returns all non-expired
and expired licenses")
@APIResponses(value = {
@APIResponse(
responseCode = "200",
description = "List of licenses
retrieved
successfully",
content = @Content(
mediaType =
"application/json",
schema =
@Schema(implementation = List.class,
type = SchemaType.ARRAY)
)
)
})
public List<License> getAllLicenses() {
return licenseService.getAllLicenses();
}
// Code omitted
}
We place the @GET annotation to create a new endpoint path accessible at
/api/license through HTTP GET requests.
We need to add the following dependency to the pom.xml file to enable the
OpenAPI UI that lets us see the project’s API documentation in a web browser:
<dependency>
    <groupId>org.microprofile-ext.openapi-ext</groupId>
    <artifactId>openapi-ui</artifactId>
    <version>2.0.0</version>
    <scope>runtime</scope>
</dependency>
To ensure the OpenAPI annotations we placed when implementing the API will
be used to generate the API documentation, we need to create the
microprofile-config.properties file in the
src/main/resources/META-INF directory:
openapi.ui.title=License Management
mp.openapi.scan=true
@ApplicationScoped
public class LicenseHealthCheck {
    @Produces
    @Liveness
    HealthCheck checkMemoryUsage() {
        return () -> HealthCheckResponse.named("memory-usage").status(true).build();
    }
    @Produces
    @Readiness
    HealthCheck checkCpuUsage() {
        return () -> HealthCheckResponse.named("cpu-usage").status(true).build();
    }
}
We implement the LicenseHealthCheck class as a managed bean through
the @ApplicationScoped class-level annotation. The @Produces method-
level annotation is used with the checkMemoryUsage and
checkCpuUsage methods because they produce managed beans of type
HealthCheck containing health information we collect from the application.
On the checkMemoryUsage method, we have the @Liveness annotation
that lets third-party services know if the MicroProfile application is running
correctly. The @Readiness annotation used with the checkCpuUsage
method lets third-party services know if the MicroProfile application is ready to
receive requests.
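As a usage illustration, MicroProfile Health exposes these checks over HTTP at the /health/live and /health/ready endpoints; assuming the default Payara HTTP port 8080, requests along these lines return the check status in the response format defined by the MicroProfile Health specification (the exact host and port depend on the runtime configuration):
$ curl -s localhost:8080/health/live
{"status":"UP","checks":[{"name":"memory-usage","status":"UP"}]}
$ curl -s localhost:8080/health/ready
{"status":"UP","checks":[{"name":"cpu-usage","status":"UP"}]}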
Third-party services are often represented through Kubernetes cluster
components responsible for periodically sending liveness and readiness probes
to check the application’s health. If Kubernetes detects something wrong when
performing liveness and readiness health checks, it can trigger remedial actions,
like restarting the Pod where the MicroProfile application is running in the
Kubernetes cluster.
In the next section, we will learn how to compile and run the license
management application.
Conclusion
This chapter explored Jakarta EE and how its Platform, Web Profile, and Core
Profile specifications can be used to develop Java enterprise applications. While
comparing the specifications, we learned that the Platform specification includes
all individual Jakarta EE specifications, while the Web Profile targets enterprise
applications requiring web components and the Core Profile suits applications
running in cloud-native environments. We also learned how the Jakarta EE Core Profile
specification complements the MicroProfile specification to provide a set of
specifications to support the development of Java microservices. After
overviewing the Jakarta EE and MicroProfile specifications, we learned how to
use them by developing the license management application.
The next chapter covers the techniques and technologies for deploying and
running Java applications in cloud-native environments. We will learn about
Kubernetes, its architecture, and how to deploy and run containerized Java
applications in a Kubernetes cluster.
Introduction
Gone are the days when we used to deploy Java applications in heavyweight
application servers running on expensive dedicated hardware. Nowadays, it is far
more common to see Java systems running in Kubernetes clusters offered by
major cloud providers like Amazon Web Services (AWS), Google Cloud
Platform (GCP), or Microsoft Azure. By delegating most of their computing
infrastructure operations to cloud providers, organizations can allocate their time
to activities that have more potential to generate profit or reduce costs. The Java
developer working in an organization that has its infrastructure on the cloud
must know how to tap into the benefits of running Java applications in cloud-
native environments. That is why this chapter explores good practices for
deploying and running Java applications using technologies like Docker and
Kubernetes.
Structure
The chapter covers the following topics:
• Understanding container technologies
• Introducing Kubernetes
• Dockerizing a Spring Boot, Quarkus, and Jakarta EE application
• Deploying Docker-based applications on Kubernetes
• Compiling and running the sample project
Objectives
By the end of this chapter, you will know how to package Java applications into
Docker images and deploy them in a Kubernetes cluster. To serve as a solid
foundation for your practical skills, you will also acquire fundamental
knowledge about cloud technologies like virtualization and containers, allowing
you to use such technologies properly according to your project’s requirements.
Introducing virtualization
Virtualization is the core technology enabling cloud-native environments. What
we call cloud computing nowadays is only possible due to the ability to
virtualize computing resources. Knowledge of fundamental virtualization
concepts can help us better decide how to run Java applications in cloud
environments. Let us start exploring the full virtualization concept.
Full virtualization
Full virtualization is the technique of running software in an environment that
reproduces a real computer’s behaviors and instructions. It allows the complete
virtualization of computer resources like CPU and memory. All things provided
by computer hardware are virtualized, enabling the execution of software
entirely unaware it is executing in a virtualized environment.
A host machine can provide full virtualization with support for hardware-assisted
virtualization technologies like Intel VT-x or AMD-V. The host machine runs
hypervisor software responsible for creating virtual machine instances, also
known as guests of the host machine. KVM, VMware, and VirtualBox are some
of the hypervisors that provide full virtualization. When a machine is fully
virtualized, it lets us install an operating system that is different from the
operating system running on the host machine. For example, a Linux host
machine can have Windows guest machines.
Cloud providers rely on full virtualization technologies to provide virtual servers
as an Infrastructure-as-a-Service (IaaS) solution. These virtual servers run on
physical servers and are managed by hypervisor software.
When organizations started to move their infrastructure to the cloud, virtual
servers allowed those organizations to keep running their legacy and modern
applications. An important feature of virtual servers is the flexibility to adjust
computing resources like CPU and memory according to user demand.
Organizations moving to the cloud would no longer struggle with over- or under-provisioning computing resources through physical hardware because
these resources could now be easily managed through the virtualization
hypervisor.
Full virtualization is excellent for running any application. Modern virtualization
technologies make the performance of applications running in virtualized servers
practically the same as if they were running in bare metal servers. However, it
comes at a cost. Full virtualization is a costly way to virtualize software
execution because it requires virtualizing all computer components. There is a
cheaper alternative to full virtualization called paravirtualization. Let us explore
it next.
Paravirtualization
Partial virtualization, also known as paravirtualization, is possible when the host
machine executes some instructions of the virtual guest machine. This implies
that the virtualized operating system needs to be aware that it is running in a
virtual environment. The paravirtualization technique relies on the cooperation of both host and guest machines to identify which instructions are better executed by the virtualized hardware and which are better executed by the real hardware. Such collaboration between host and guest machines enhances
performance in the paravirtualized environment.
A paravirtualization hypervisor requires a guest virtual machine running an
operating system that can communicate with the hypervisor. So, the operating
system needs to be modified or provide special drivers in order to be compatible
with paravirtualized environments. That can be a limitation if you want to run applications in a virtualized operating system that does not support paravirtualization.
Paravirtualization is cheaper than full virtualization but still implies significant
costs because it consumes valuable computing resources to provide a functional
virtualized environment. Xen and Hyper-V are some of the available
paravirtualization technologies cloud providers may use to offer virtualized
servers and other cloud solutions backed by paravirtualization.
Full virtualization and paravirtualization techniques are extensively used to run Java applications: Java systems often run in application servers executing in virtualized environments, and Kubernetes cluster nodes are commonly provisioned as virtual servers. Although these techniques make Java applications portable, it can be cumbersome to package and distribute applications bundled into full or paravirtualized machine images.
Let us explore a virtualization technique that allows virtualizing the structure of
an operating system, which enables a convenient way to package applications
and their dependencies into a single, leaner, virtualized environment.
Container-based virtualization
Also known as OS-level virtualization, container-based virtualization consists of
virtualizing components of an operating system. In this approach, the guest
virtualized environment shares the same kernel used by the host machine. Container-based virtualization does not go as deep as full virtualization or paravirtualization, which virtualize hardware instructions to provide the virtual environment. We call containers the virtual environments offered by container-based virtualization technologies.
Container technology has been around for quite a while through solutions like
OpenVZ, Linux Containers (LXC), and Docker. It is based on two essential
Linux kernel features: cgroups and namespaces. These features let applications
run as isolated processes within an operating system. The namespace kernel
mechanism allows the creation of an isolated environment in the hosting
operating system. From the application perspective, an environment provided by
a Linux namespace looks like a real operating system containing its own file
system and process tree. Processes running in one namespace cannot see
processes from other namespaces. Computing resources like CPU and memory
are managed by the cgroups kernel feature, which is responsible for allocating
computing resources to containers controlled by the host operating system.
A container can run as one or more isolated processes. Processes running inside
a container behave as if they were running in a real machine.
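Container runtimes expose the cgroups mechanism directly. With Docker, for example, CPU and memory caps can be set per container; the limits below are arbitrary values chosen only for illustration:
docker run --rm --memory=256m --cpus="0.5" busybox sleep 30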
Before the advent of Docker, containers were not widely embraced by
developers. The earlier container technologies lacked a straightforward method
to package and distribute environment dependencies with application binaries.
However, Docker revolutionized this landscape by offering a container-based
virtualization solution. This allowed developers to effortlessly bundle their
applications into container images, complete with dependencies like libraries and
customized configurations necessary for application execution. Docker’s impact extends beyond simplifying development processes; it also brings significant cost benefits to organizations, which made it a game-changer in the world of container technology.
The whole IT industry changed because of Docker. An increasing number of
organizations started to deploy their applications as Docker containers, which
raised challenges regarding how to efficiently operate containers at a large scale.
That is when technologies like Kubernetes appeared as a solution to managing
containers. Before diving into Kubernetes, let us discover how Docker works.
Exploring Docker
It is relatively simple to operate a bunch of Docker containers without the
assistance of any other tool than Docker itself. Containers run like processes in
the operating system, so it is essential to ensure container processes are always
running without errors. A system administrator can inspect container logs and
restart containers when necessary if something goes wrong. For simple use
cases, Docker alone may be enough to host applications. There may be manual
administration tasks to keep containers running, which is fine for smaller, non-
critical systems. However, Docker may need to be complemented with other
technologies to host critical enterprise applications.
Docker delivers effective container virtualization technology but lacks a reliable
mechanism for operating containers in cluster-based environments that can meet
strict application requirements involving high availability and fault tolerance.
With the increasing adoption of container technologies, container orchestrator
solutions like Mesos, Marathon, Rancher, Docker Swarm, and Kubernetes
appeared in the market. These container orchestrators were designed to facilitate
the operation of critical applications running on containers. With a container
orchestrator, we can adequately manage operating system resources like
networks and storage to meet the requirements of container-based systems. Also,
container orchestrators play a fundamental role in deploying the software by
allowing deployment strategies that decrease the risk of application downtime.
We cannot discuss Kubernetes without also discussing containers, because Kubernetes exists only because of container technology. So, to prepare ourselves for a deeper
investigation of Kubernetes, we first explore Docker by examining its
fundamentals.
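A Dockerfile of the kind discussed next can be as short as two lines; the echoed message here is just an illustrative assumption:
FROM busybox
CMD ["echo", "Hello from a container"]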
The two lines above are placed in a file called Dockerfile, the default file name
that Docker uses when building Docker images. There are two image types:
parent-derived image and base image. Parent-derived images are always built on
top of a parent Docker image. The example above uses the FROM directive to
refer to the busybox parent image. Base images have no parent; they use the
FROM scratch directive in their Dockerfile to indicate they do not refer to
any parent image. We also use the CMD directive to execute a command once the
container starts. A command is a program executed as an isolated process in the
Docker container. The Dockerfile supports other directives, allowing us to
customize Docker images by adding files, creating directories, creating
environment variables, setting file system permissions, and other operations.
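Assuming the two-line Dockerfile above sits in the current directory, building the image and starting a container from it could look like this (the image name is arbitrary):
docker build -t hello-busybox .
docker run --rm hello-busybox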
Introducing Kubernetes
Kubernetes has its roots in Borg, an internal Google project used to operate containers on a large scale. Drawing on that experience, Google created Kubernetes and released it as an open-source project. It is considered the most used container orchestrator in the market. Kubernetes can operate containers on small devices like a Raspberry Pi or be used on top of an entire data center dedicated to containers.
Kubernetes has become popular, especially in large enterprises, because it
provides a reliable platform for managing and running containers. Organizations
found Kubernetes to be a mature technology capable of hosting mission-critical
applications with high availability and fault tolerance.
The decision to use Kubernetes comes when operating containers solely with
Docker, for example, is insufficient. Docker alone does not provide a high-
availability solution for distributing container workloads across multiple cluster
servers. Docker cannot dynamically auto-scale containers based on CPU or
memory usage. Applying a load balancing mechanism on container network
traffic is not possible with just Docker; however, it is possible when using
Kubernetes.
So, Kubernetes gets the core container technology and surrounds it with
additional components that make containers viable for running enterprise
applications.
Most cloud providers offer Kubernetes-based solutions, and companies of all
sizes rely on them to run their software. Designing an application to run on
Kubernetes gives an organization the flexibility to change cloud providers
without significant impact. Applications can be developed to rely on pure
Kubernetes standards instead of features provided by a specific cloud provider.
However, there are customized container orchestrators built on top of
Kubernetes, like OpenShift from Red Hat and Kyma from SAP, that provide
additional capabilities on top of those already provided by Kubernetes.
A Java developer needs a basic understanding of how Kubernetes works because
a growing number of Java projects are running in Kubernetes clusters. So, next,
we explore the fundamentals of Kubernetes architecture and some of its main
objects commonly seen when deploying an application.
Kubernetes architecture
Kubernetes is a cluster made of at least one worker node, which is where
containerized applications run. We can have Kubernetes running on a single or
multiple machines. When running on a single machine, this machine acts
simultaneously as the master and worker node. When running with multiple
machines, master and worker nodes run in separate machines. Master nodes are
responsible for cluster management activities, while worker nodes run
containerized applications.
Inside a Kubernetes cluster, we have control plane and node components. Next, we cover the control plane components.
kube-scheduler
When Kubernetes objects need to be provisioned in one of the worker nodes, the
kube-scheduler is responsible for finding a worker node that best suits the
requirements of a given Kubernetes object. Worker nodes with high CPU,
memory, and storage usage may be skipped in favor of worker nodes with more
free capacity. A Pod is one of the Kubernetes object examples we will explore
further in the next section.
kube-apiserver
The kube-apiserver interconnects different Kubernetes components and provides
an external API that command-line tools like kubectl use to interact with the
Kubernetes cluster. Practically everything that occurs inside a Kubernetes cluster
passes through the kube-apiserver.
kube-controller-manager
Kubernetes objects have a current and a desired state. The desired state is expressed through a YAML (YAML Ain’t Markup Language) representation that describes how a Kubernetes object should be provided. If, for some reason, a Kubernetes object’s current state is not the same as the desired one, then the kube-controller-manager may take action to ensure the desired state is achieved.
Control plane components usually run in a master node, a machine in the
Kubernetes cluster used only for management purposes.
Next, we cover Kubernetes node components.
Container runtime
It is the container engine running in a Kubernetes node. It can be containerd,
which Docker is based on, or any other compatible container technology.
Kubernetes provides the Container Runtime Interface (CRI), which is a
specification defining how container technologies can be implemented to be
supported by Kubernetes, so any container runtime that implements such a
specification is also compatible with Kubernetes.
kubelet
Every node in a Kubernetes cluster needs an agent called kubelet to manage
containers running in the node machine. When adding a new worker node to an
existing Kubernetes cluster, we need to ensure this node has the kubelet agent
properly installed.
kube-proxy
Network access to containers managed by Kubernetes can be done through the
network proxy provided by kube-proxy. We can use this proxy to set up a direct
connection with one of the ports exposed by a container running in a Kubernetes
cluster.
Node components run on every Kubernetes node, including master and worker
nodes.
Next, we learn about Kubernetes objects and how they are used to run
containerized applications.
Kubernetes objects
The containerized application represents the fundamental element by which all
the Kubernetes machinery is driven. When hosting an application in Kubernetes,
we can make it available for external networks, provide environment variables
required by the application, and control how many instances of the application
will run simultaneously. Such tasks can be accomplished by using Kubernetes
objects. We explore them further in the upcoming sections.
Pod
We do not deal directly with containers in a Kubernetes environment. Instead,
we use Pods composed of one or more containers. A Pod acts like a wrapper that
controls the entire lifecycle of a container. A Pod object specifies the container
image that must be used when the Pod is deployed in a Kubernetes cluster. We
have an example showing how to create a YAML representation of a Pod, as follows:
apiVersion: v1
kind: Pod
metadata:
  name: httpd
spec:
  containers:
    - name: httpd
      image: httpd:2.4
      ports:
        - containerPort: 80
When defining a Kubernetes object, we utilize a key component called kind
that specifies the object’s type, which in the example above is Pod. The
metadata block contains elements that describe the Pod, including its name.
Within the containers block, we can define one or more containers that are
managed by this Pod. Each container is specified by its name and image. The
ports block, specifically containerPort, allows us to define the port on
which the containerized application is running.
Although possible, it is not a common practice to create Pods directly. Most of
the time, Pods are managed by other Kubernetes objects like the Deployment.
Let us check it next.
Deployment
Applications running on Kubernetes can scale horizontally, which means
multiple instances of the same application can be provisioned to distribute
application processing better and provide high availability. We can accomplish
this by using the Deployment object that manages Pod objects. The Deployment
lets us define, for example, how many replicas of a Pod must run simultaneously.
Check the following example of the YAML representation of a Deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: httpd
  labels:
    app: httpd
spec:
  replicas: 3
  selector:
    matchLabels:
      app: httpd
  template:
    metadata:
      labels:
        app: httpd
    spec:
      containers:
        - name: httpd
          image: httpd:2.4
          ports:
            - containerPort: 80
Part of the Deployment declaration is similar to a Pod because we need to define
how Pods managed by the Deployment will be created. Note the replicas
component; we use it to tell Kubernetes how many Pod instances must run under
this Deployment. This YAML declaration expresses the desired state of a
Deployment object. Suppose Kubernetes detects, for example, that only two
Pod instances are running when three instances are the expected amount. In that
case, Kubernetes automatically tries to bring up another instance to ensure the
current state matches the desired state. Also note the selector.matchLabels block; it tells the Deployment which Pods it manages by matching the labels declared in the Pod template, and other Kubernetes objects can use the same labels to refer to the Pods created by the Deployment. A common use case for these labels is when we want to expose a Deployment in the network using a Service object.
We explore Service objects next.
Service
Pods can communicate with each other and be made externally accessible to clients outside the Kubernetes cluster. However, a newly deployed Pod does not get a stable, discoverable address that other Pods can rely on. We can solve this by creating a Service object:
apiVersion: v1
kind: Service
metadata:
  name: httpd
  labels:
    app: httpd
spec:
  type: ClusterIP
  ports:
    - port: 80
      targetPort: 80
      protocol: TCP
  selector:
    app: httpd
The selector block may contain a reference pointing to the same label value
used in a Deployment or Pod object. That is how a Service can expose other
Kubernetes objects to the network. Note that we use ClusterIP as the Service
type in the example above. We use this Service to expose a Pod to the internal
Kubernetes cluster network, allowing Pods to communicate with each other
through the Service. We can use the NodePort or LoadBalancer Service type to expose a Pod to networks outside the Kubernetes cluster. A Service must
specify its port and protocol. The targetPort component refers to the port
the containerized application listens to in the Pod. When omitted, the
targetPort is the same as the Service port.
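Once saved to files, these YAML definitions can be applied with kubectl, the command-line tool mentioned earlier; the file names below are arbitrary:
kubectl apply -f httpd-deployment.yaml
kubectl apply -f httpd-service.yaml
kubectl get pods,services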
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-maven-plugin</artifactId>
            <version>3.3.4</version>
        </plugin>
    </plugins>
</build>
With the finalName property, we define the JAR file name. The plugin
responsible for doing the magic is called spring-boot-maven-plugin.
When using this plugin, the mvn package command will create the sample-
spring-boot-app.jar inside the target directory of the Spring Boot
Maven project.
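As a quick usage check, and assuming the commands run from the Spring Boot module's root directory, the repackaged JAR can be built and started as follows:
mvn package
java -jar target/sample-spring-boot-app.jar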
Next, we will learn how to create a bootable JAR for a Quarkus application.
Conclusion
Cloud technologies like Docker and Kubernetes are familiar to most Java
developers. Knowing how to use these technologies is fundamental for anyone
interested in designing and operating Java cloud-native systems.
This chapter explored how crucial virtualization technology, especially container virtualization, is for any cloud-native environment. We learned that a container comprises one or more isolated processes provided by the namespaces and cgroups Linux kernel features. We discovered how Docker, one of the most popular container technologies, makes developers’ lives easier by providing a convenient way to package and deliver containerized applications through Docker images created with a Dockerfile. Going deeper into the
containers, we learned how powerful Kubernetes is in providing a container
orchestration solution that reliably hosts containerized applications. Finally, we
applied techniques like configuration externalization on the Spring Boot
application to make it cloud-native and ready to run in Kubernetes clusters.
In the next chapter, we will look at monitoring and observability, activities that
play a fundamental role in the availability and reliability of Java applications
running in production. We will learn how to implement distributed tracing with
Spring Boot and OpenTelemetry. Also, we will explore handling logs using the
Elasticsearch, Fluentd, and Kibana (EFK) stack.
CHAPTER 9
Learning Monitoring and
Observability Fundamentals
Introduction
Those tasked with supporting Java applications in production understand the
value of comprehending system behavior in various situations. This
understanding is built on the foundation of monitoring and observability
techniques, which leverage metrics, logs, events, and other data to predict or
address application failures. By looking at the basics of monitoring and
observability, we can make informed decisions about the most effective
technologies and approaches to swiftly respond to unexpected application
behaviors.
Structure
The chapter covers the following topics:
• Understanding monitoring and observability
• Implementing distributed tracing with Spring Boot and OpenTelemetry
• Handling logs with Elasticsearch, Fluentd, and Kibana
• Compiling and running the sample project
Objectives
By the end of this chapter, you will learn the main concepts behind monitoring
and observability. With these concepts as a solid foundation, you will learn how
to apply distributed tracing techniques to understand through traces the life cycle
of requests spanning multiple microservices. Finally, you will know how to
collect and see logs from a Java application using Elasticsearch, Fluentd, and
Kibana.
Monitoring
Any application behaves in its own way, given the constraints and load exerted
upon it. The constraints are the computing resources, like CPU, memory,
storage, and network bandwidth, available to the application to perform its
activities. The load refers to how much of the available computing resources the
application uses to carry on with its tasks. Routinely inspecting how an
application behaves in the face of its constraints and load is one form of
monitoring. The goal of this kind of monitoring is, for example, to prevent
resource bottlenecks like lack of storage space or memory. That may be
accomplished by setting up monitoring dashboards and alerts configured to send
messages or phone calls when a predefined threshold is met, such as 80% of the
disk being used.
Another form of monitoring is concerned with application logic. Some
applications are developed in an enterprise environment to solve business
problems. The logic to solve those problems may be susceptible to dependencies
like database availability, input data provided through an application request, or
data obtained from an API. The ability to inspect how well an application solves
business problems is fundamental to predicting or quickly remediating issues.
Monitoring application logic can be accomplished, for example, through the
usage of metrics and application logs. Dashboards and alerts can also be used on
top of data provided by metrics and logs.
Observability techniques were conceived to enhance standard monitoring
practices and achieve a holistic understanding of what happens in a software
system by considering the behavior of not only a single application but also the
relationship between multiple of them, as in the case of distributed architecture
systems like microservices. Let us explore more of it next.
Observability
For quite some time, the standard way to build server-side systems was by
developing a single monolith application. The focal point for all monitoring
activities would be around that single monolith application, resulting in the
creation of monitoring dashboards based on application metrics. These
dashboards, a cornerstone of our understanding of the application’s behavior,
were heavily relied upon by developers and other interested parties. They served
as a window into the application’s world, providing crucial insights and
supporting troubleshooting activities. The information obtained from the
monitoring dashboards could trigger further investigation of application logs to
identify an issue’s root cause.
A problem arose when server-side systems started to be developed based on
distributed architectures. The logic from a distributed system is scattered across
multiple applications having particular responsibilities. Instead of having a
single monolith application, several smaller applications are now working
together to provide system functionalities. The shift in how server-side software
is developed, from monolith to distributed, also triggered a change in techniques
to understand how distributed software behaves, which ultimately culminated in
what is called observability.
Observability is the ability to understand software system behaviors through
structured events containing contextual data. It aims to enable the discovery of
what, when, where, and why something happened in a system. Such contextual
data is made of high-cardinality and high-dimensionality data.
A structured event is a piece of data describing system behavior at a given time.
Its attributes are arbitrarily defined to provide as much context as possible for
what happened when the software system attempted to do something.
High cardinality refers to data uniqueness. An example of high-cardinality data
is an event having attributes like the Request ID that must store unique values in
a system. On the other hand, a low-cardinality data example would be an event
having the Country attribute, which can contain non-unique values. High-
cardinality data is one of the cornerstones of observability because it allows us to
identify events describing system behaviors accurately.
High dimensionality refers to the data attributes used in an event describing
system behavior. An event containing a comprehensive set of attributes is helpful
in understanding system behaviors from different dimensions. For example, User
ID, Organization ID, Region, Status, Source, or Destination can be used as
dimensions where User ID is one dimension, Organization ID is another, and so
on. An event lacking crucial attributes may compromise the comprehension of
system behaviors; that is why it is essential to ensure structured events have high
dimensionality based on relevant data attributes.
Structured events with high-cardinality and high-dimensionality data are the
foundation for observability tools and techniques. In distributed systems,
observability techniques can be used to understand the flow of a request going
through multiple applications. Next, we learn how to implement distributed
tracing using Spring Boot and OpenTelemetry.
The trace example starts with Span 1, which represents a request coming to
Service A. Span 2 shows us that Service A requested Service B, which, in turn,
made a request to the database, represented through Span 3. Finally, we can see
that Service B requested Service C after receiving a response from the database.
OpenTelemetry provides a set of SDKs, APIs, and libraries that let us
instrumentalize applications to generate telemetry data such as metrics, logs, and
traces like the one presented in the previous example. Once adopted,
OpenTelemetry also enables us to collect and export traces to observability
applications, like Jaeger, that let us visualize the traces generated by a system.
Having a way to visualize system traces is very helpful for troubleshooting
purposes.
Let us start by building a simple distributed system based on two Spring Boot
applications. That system will serve as the scenario for implementing distributed
tracing using OpenTelemetry.
Configuring dependencies
We start by defining the base Maven project structure in the pom.xml file:
<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0
         http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>
    <parent>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-parent</artifactId>
        <version>2.7.18</version>
        <relativePath/>
    </parent>
    <groupId>dev.davivieira</groupId>
    <artifactId>chapter09</artifactId>
    <version>1.0-SNAPSHOT</version>
    <packaging>pom</packaging>
    <properties>
        <maven.compiler.source>21</maven.compiler.source>
        <maven.compiler.target>21</maven.compiler.target>
    </properties>
    <modules>
        <module>inventory-service</module>
        <module>report-service</module>
    </modules>
    <!-- Code omitted -->
</project>
Note that this is a Maven multi-module project with a module for the inventory
service and another for the report service. All the dependencies and build
configurations defined in this pom.xml file are shared with the inventory service
and report service modules, which will be defined soon.
Spring Boot simplifies our task by providing support for OpenTelemetry
libraries. This allows us to enable distributed tracing effortlessly. Here is how we
configure the first part of Maven’s dependencies for Spring Boot and
OpenTelemetry:
<dependencyManagement>
    <dependencies>
        <dependency>
            <groupId>org.springframework.cloud</groupId>
            <artifactId>spring-cloud-dependencies</artifactId>
            <version>2021.0.5</version>
            <type>pom</type>
            <scope>import</scope>
        </dependency>
        <dependency>
            <groupId>org.springframework.cloud</groupId>
            <artifactId>spring-cloud-sleuth-otel-dependencies</artifactId>
            <version>1.1.2</version>
            <scope>import</scope>
            <type>pom</type>
        </dependency>
    </dependencies>
</dependencyManagement>
We use the dependencyManagement block to get the spring-cloud-
dependencies and spring-cloud-sleuth-otel-dependencies POM
dependencies. From the POM dependencies, we can specify the JAR
dependencies, which we will do next:
<dependencies>
    <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-web</artifactId>
    </dependency>
    <dependency>
        <groupId>org.springframework.cloud</groupId>
        <artifactId>spring-cloud-starter-sleuth</artifactId>
        <exclusions>
            <exclusion>
                <groupId>org.springframework.cloud</groupId>
                <artifactId>spring-cloud-sleuth-brave</artifactId>
            </exclusion>
        </exclusions>
    </dependency>
    <dependency>
        <groupId>org.springframework.cloud</groupId>
        <artifactId>spring-cloud-sleuth-otel-autoconfigure</artifactId>
    </dependency>
    <dependency>
        <groupId>io.opentelemetry</groupId>
        <artifactId>opentelemetry-exporter-otlp</artifactId>
        <version>1.23.1</version>
    </dependency>
</dependencies>
We use the spring-boot-starter-web dependency to create REST API
endpoints for our services. The spring-cloud-starter-sleuth
dependency provides auto-configuration for distributed tracing on Spring Boot
applications. We exclude spring-cloud-sleuth-brave because Brave is Sleuth's default trace generator, and we want traces generated with OpenTelemetry instead, which is what the spring-cloud-sleuth-otel-autoconfigure dependency provides. The opentelemetry-exporter-otlp dependency lets us collect
Let us now start implementing the distributed system with the inventory service.
    @GetMapping
    public List<String> getAllInventory() {
        LOGGER.info("Getting all inventory items");
        return List.of("Inventory Item 1", "Inventory Item 2", "Inventory Item 3");
    }
}
We use the @GetMapping annotation to expose inventory data through the
GET endpoint at /inventory. That is all we need from the Java class
implementation perspective. Now, we need to enable the application to produce
traces. The following application.yml file shows how we can do that:
server:
  port: 8080
spring:
  application:
    name: inventory-service
  sleuth:
    otel:
      config:
        trace-id-ratio-based: 1.0
      exporter:
        otlp:
          endpoint: http://collector:4317
There are three essential configurations to pay attention to here:
1. The name property is used to group trace data based on the application’s
name.
2. The trace-id-ratio-based property defines the sampling ratio, that is, the fraction of spans that will be captured. The value 1.0 means all spans are captured.
3. The endpoint property sets the collector’s URL where trace data will
be exported.
The trace collector is an external system that must be available. Otherwise, our
Spring Boot application won’t be able to export trace data. We will see soon how
to provide such a collector system.
Using Docker Compose, we intend to provide the inventory service as a
containerized application. To do so, we need to create a Dockerfile:
FROM openjdk:21-slim
ENV JAR_FILE inventory-service-1.0-SNAPSHOT.jar
ENV JAR_HOME /usr/apps
COPY target/$JAR_FILE $JAR_HOME/
WORKDIR $JAR_HOME
ENTRYPOINT ["sh", "-c"]
CMD ["exec java -jar $JAR_FILE"]
EXPOSE 8080
Note the port 8080 we expose in the Dockerfile is the same port used in the
application.yml file from the Spring Boot application.
Let us now implement the report service:
    @Autowired
    public ReportEndpoint(RestTemplate restTemplate) {
        this.restTemplate = restTemplate;
    }
    @GetMapping(path = "/generate")
    public List<String> generateReport() {
        LOGGER.info("Generating report");
        return getInventoryItems();
    }
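The generateReport method delegates to a getInventoryItems helper that is not shown above. A minimal sketch of such a helper could look like the following, assuming the inventoryService.baseUrl property is injected with @Value and that the RestTemplate is a Spring-managed bean, which lets Sleuth propagate the trace context on outgoing calls:
    @Value("${inventoryService.baseUrl}")
    private String inventoryServiceBaseUrl;

    private List<String> getInventoryItems() {
        // Calls the inventory service GET /inventory endpoint and converts
        // the JSON array response into a List of strings.
        String[] items = restTemplate.getForObject(
                inventoryServiceBaseUrl + "/inventory", String[].class);
        return items == null ? List.of() : List.of(items);
    }
The report service also gets its own application.yml configuration: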
server:
  port: 9090
spring:
  application:
    name: report-service
  sleuth:
    otel:
      config:
        trace-id-ratio-based: 1.0
      exporter:
        otlp:
          endpoint: http://collector:4317
inventoryService:
  baseUrl: ${INVENTORY_BASE_URL:http://localhost:8080}
Note that the configuration is quite similar to the inventory service. The only
differences are the server port, which is 9090, the application’s name, report-
service, and the presence of the baseUrl property containing the inventory
service URL.
The Dockerfile configuration is also quite similar to the inventory service:
FROM openjdk:21-slim
ENV JAR_FILE report-service-1.0-SNAPSHOT.jar
ENV JAR_HOME /usr/apps
COPY target/$JAR_FILE $JAR_HOME/
WORKDIR $JAR_HOME
ENTRYPOINT ["sh", "-c"]
CMD ["exec java -jar $JAR_FILE"]
EXPOSE 9090
To ensure the Spring Boot application will be accessible externally, we expose
the 9090 port, the same port configured in the Spring Boot application.
At this stage, we have two Spring Boot applications: the inventory and report
services. To complete the setup, we need to configure Docker Compose. This
configuration not only starts both applications but also provides an instance of
the Jaeger and OpenTelemetry Collector. These tools are crucial as they enable
us to get and visualize application traces.
service:
  pipelines:
    traces:
      receivers: [ otlp ]
      processors: [ batch ]
      exporters: [ logging, jaeger ]
Note that we have specified jaeger-service:14250 as the Jaeger
endpoint. The host jaeger-service and 14250 port are part of the Docker
Compose configuration defined previously.
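For reference, the receiver and exporter blocks that the pipeline above relies on could be sketched as follows; this assumes an OpenTelemetry Collector version that still ships the jaeger and logging exporters, and the gRPC port matches the collector:4317 endpoint configured in the Spring Boot applications:
receivers:
  otlp:
    protocols:
      grpc:
        endpoint: 0.0.0.0:4317
processors:
  batch:
exporters:
  logging:
  jaeger:
    endpoint: jaeger-service:14250
    tls:
      insecure: true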
With Jaeger and OpenTelemetry Collector integrated with traces produced by
our Spring Boot applications, we can see how distributed tracing works in
practice. Before we start playing with it, let us add an essential element to our
observability setup: centralized logging.
Fluentd
Known as data collection software, Fluentd is frequently used to capture
application log outputs and send them to search engines like Elasticsearch. It has
been widely adopted in distributed architecture systems running in a Kubernetes
cluster.
Elasticsearch
Based on the Lucene search engine library, Elasticsearch is a distributed
enterprise search engine often used with data collection systems like Logstash
and Fluentd. Elasticsearch stores data in indexes and allows for the fast search of
large amounts of data.
Kibana
Built specifically to work with Elasticsearch, Kibana is a system that provides a
user interface for managing and searching Elasticsearch data. Users can search
data using customized filters and criteria, allowing high flexibility in the search.
The three technologies, Elasticsearch, Fluentd, and Kibana, are commonly
used together, forming the reliable and widely used EFK stack. In the next
section, we will guide you through setting up an EFK stack to capture logs from
the Spring Boot applications we developed earlier.
fluentd:
  build: ./fluentd
  volumes:
    - ./fluentd/conf/fluent.conf:/fluentd/etc/fluent.conf
  links:
    - "elasticsearch"
  ports:
    - "24224:24224"
    - "24224:24224/udp"
elasticsearch:
  image: elasticsearch:8.13.4
  container_name: elasticsearch
  environment:
    - discovery.type=single-node
    - xpack.security.enabled=false
  expose:
    - "9200"
  ports:
    - "9200:9200"
kibana:
  image: kibana:8.13.4
  environment:
    - XPACK_SECURITY_ENABLED=false
    - ELASTICSEARCH_HOSTS=http://elasticsearch:9200
    - INTERACTIVESETUP_ENABLED=false
  links:
    - "elasticsearch"
  ports:
    - "5601:5601"
  depends_on:
    - elasticsearch
Elasticsearch and Kibana require security mechanisms by default. For the
sake of simplicity, we are turning them off through environment variables
like XPACK_SECURITY_ENABLED=false and
INTERACTIVESETUP_ENABLED=false.
2. For the Fluentd setup, we need to provide a customized Fluentd Docker
image configured with the Elasticsearch plugin. The following code
shows how the Dockerfile for such an image should be defined:
FROM fluent/fluentd:v1.17
USER root
RUN ["gem", "install", "fluent-plugin-elasticsearch", "--no-document"]
USER fluent
Note also that the docker-compose.yml refers to the file ./fluentd/conf/fluent.conf, which specifies configurations like the IP address and port Fluentd will listen on to receive application logs, as well as the destination, Elasticsearch in our case, where log data will be sent:
<source>
  @type forward
  port 24224
  bind 0.0.0.0
</source>
<match *.**>
  @type copy
  <store>
    @type elasticsearch
    host elasticsearch
    port 9200
    logstash_format true
    logstash_prefix fluentd
    logstash_dateformat %Y%m%d
    include_tag_key true
    type_name access_log
    tag_key @log_name
    flush_interval 1s
  </store>
  <store>
    @type stdout
  </store>
</match>
The source configuration block defines a TCP endpoint through the
@type forward option that accepts TCP packets from applications
sharing their output. Note port 24224’s definition; we use it when
configuring the Spring Boot application container’s logging on Docker
Compose. After the source block, we have a match block to determine
the log output destination. Two necessary configurations here are the
host and port, which are defined as elasticsearch and 9200,
respectively.
3. Next, we add to the docker-compose.yml the configuration for the
inventory and report service Spring Boot applications we built in the
previous section:
# Code omitted
inventory-service:
  build: inventory-service/
  ports:
    - "8080"
  links:
    - fluentd
  logging:
    driver: "fluentd"
    options:
      fluentd-address: localhost:24224
      tag: inventory.service
report-service:
  environment:
    - INVENTORY_BASE_URL=http://inventory-service:8080
  build: report-service/
  ports:
    - "9090:9090"
  links:
    - fluentd
  logging:
    driver: "fluentd"
    options:
      fluentd-address: localhost:24224
      tag: report.service
Note that inventory and report services have a logging configuration
block that defines Fluentd as the log driver connecting to the
localhost:24224 address. Remember, we previously configured Fluentd to listen on port 24224. What happens here is that logs produced by
inventory-service and report-service containers will be forwarded to the
Fluentd server.
Let us combine all the pieces and play with our observability engine, which
supports distributed tracing and centralized logging.
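From the directory containing the docker-compose.yml, everything can be built and started with a single command (use docker-compose instead of docker compose on older installations); the Jaeger UI is typically reachable on its default port 16686 if the Compose file maps it:
docker compose up --build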
Note that the trace comprises spans showing requests crossing through the
report-service and inventory-service applications.
We can find application logs on Kibana at the URL http://localhost:5601. The
screen is as follows:
Figure 9.3: Kibana showing application logs
When accessing Kibana for the first time, click on the Explore on my own
option, then on Discover, and finally, Try ES|QL, which lets you see data
without configuring indexes.
Conclusion
In this chapter, we explored monitoring and observability. We learned that
monitoring refers to the traditional approach of understanding system behavior
through metrics, logs, dashboards, and alerts. We discovered that observability
complements traditional monitoring techniques and is especially helpful for
understanding the behavior of systems based on distributed architectures, like
microservices.
We implemented a simple distributed system based on two Spring Boot
applications that can generate traces. On top of those traces, we configured
OpenTelemetry Collector to capture application traces and send them to Jaeger, which lets us visualize traces and their spans.
Finally, we configured centralized logging using the Elasticsearch, Fluentd,
and Kibana (EFK) stack, a solution that aggregates logs from different
applications.
In the upcoming chapter, we will look at the exciting world of Micrometer, a
library that has the potential to enhance the observability of Java applications
significantly. We will learn the importance of providing application-specific
metrics to enable better monitoring of application behaviors and explore how to
implement such metrics.
CHAPTER 10
Implementing Application Metrics
with Micrometer
Introduction
Understanding how an application behaves through metrics can help us
remediate issues faster or prevent problems from getting worse. Metrics play a
fundamental role in monitoring because they let us capture the state of the
functionalities provided by the application. When interpreted, those states can
indicate whether things are working as expected or if there are deviations
requiring further investigation.
To harness the power of metrics in Java applications, we need to instrument them by configuring metrics that track application behaviors. We can do this using
Micrometer, a well-known Java library that allows us to instrumentalize Java
applications to generate metrics. That is why, in this chapter, we will explore
how the Micrometer works, the metrics types it provides, and how and when we
should use these metric types.
Structure
The chapter covers the following topics:
• Providing application metrics
• Introducing Micrometer
• Using Micrometer and Spring Boot to implement metrics
• Compiling and running the sample project
Objectives
By the end of this chapter, you will learn the benefits of instrumentalizing Java
applications to generate metrics that describe how well the system is performing
its activities. Knowing the advantages of application-specific metrics, you will
also learn how to implement them using Micrometer, a powerful metrics library
for Java.
Introducing Micrometer
Designed to be an agnostic metrics solution, Micrometer is a powerful library
that lets us instrumentalize Java systems to produce metrics. It is considered
agnostic because the Micrometer metrics are not vendor-specific; they can work
with different monitoring vendors. Micrometer metrics are compatible, for
example, with tools like Elastic and Dynatrace, to name a few. So, using
Micrometer allows us to switch across monitoring vendors without having to
refactor the Java application.
With Micrometer, the instrumentalized Java application exposes an endpoint from which external monitoring tools like Prometheus can read the metrics and record them in a time series database. Having a place to record the metrics produced by a Java application is beneficial because once the application is restarted, the metrics collected before are gone unless we have stored them elsewhere.
This section explores some core Micrometer concepts, like the registry and
meters, and some of the most used meter types, including counters, gauges,
timers, and distribution summaries. Let us proceed by checking what is a registry
in Micrometer.
Registry
The registry is represented through the MeterRegistry interface, which acts
as the fundamental component of the Micrometer architecture. All metrics
produced by the Micrometer come from a registry that enables the creation of
different metric types, like counters and gauges.
The MeterRegistry is an interface with implementations supporting
multiple monitoring systems. The SimpleMeterRegistry class, for
example, can be used for testing purposes because it keeps metrics data only in
memory. For real-world scenarios, you might use the
PrometheusMeterRegistry if your monitoring tool is Prometheus. You
can create a SimpleMeterRegistry using the following code:
MeterRegistry registry = new SimpleMeterRegistry();
By default, a registry publishes metrics only to one monitoring system, but it is
possible to publish metrics data to multiple monitoring systems by using the
CompositeMeterRegistry:
CompositeMeterRegistry composite = new CompositeMeterRegistry();
The metrics generation activity always starts with a registry, regardless of
whether it is a single or composite one. Next, let us check how registries use
meters to capture metric data.
Counters
There are scenarios where we are interested in knowing the frequency at which
something occurs inside the system. Imagine a web application that receives
HTTP requests at different endpoints. We can use a counter to measure the rate
at which the application processes HTTP requests. A counter corresponds to a
positive number that can be incremented by one or any other arbitrary number.
The following is how we can create and use a counter:
SimpleMeterRegistry meterRegistry = new SimpleMeterRegistry();
Counter httpRequestCounter = Counter
    .builder("http.request")
    .description("HTTP requests")
    .tags("Source IP Address", "Operating System")
    .register(meterRegistry);
httpRequestCounter.increment();
httpRequestCounter.increment();
httpRequestCounter.increment();
System.out.println(httpRequestCounter.count()); // 3
Micrometer provides a builder that lets us intuitively construct the Counter object. Note that we are setting the counter meter name as http.request, along with other data, including the description and the meter tags; the tags method takes key/value pairs, so in this example the tag key is "Source IP Address" and its value is "Operating System". Ultimately, we need to pass the meterRegistry reference object used to create the counter meter. A metric is generated when calling the increment method on the Counter object. Every time the increment method is called, the count metric is incremented by one.
Gauges
We use gauges whenever we want to measure the size of a collection of things
that can increase or decrease in a system. For example, we can check how many
threads are active in the system. The number of threads can increase or decrease
depending on the moment the metric is captured. The following code shows how
we can create a gauge meter:
AtomicInteger totalThreads = meterRegistry.gauge(
    "Total threads",
    new AtomicInteger(ManagementFactory.getThreadMXBean().getThreadCount())
);
totalThreads.set(ManagementFactory.getThreadMXBean().getThreadCount());
Instead of using the builder as we did for the counter, we create the gauge meter
directly in the meterRegistry object by providing the value the gauge
measures. In the example above, we provide the total number of threads in the
system. We wrap the value in an AtomicInteger because primitive numbers and the number wrapper classes from java.lang are immutable, which would not allow us to update the gauge value after defining it for the first time.
Timers
In some situations, we want to know how long an operation takes to complete,
and timers are the measure we can use for those situations. Timers are
particularly helpful in identifying if some system behavior is taking longer than
expected to finish, for example. The following code lets us use the timer meter:
SimpleMeterRegistry meterRegistry = new SimpleMeterRegistry();
Timer durationTimer = meterRegistry.timer("task.duration");
durationTimer.record(() -> {
    try {
        TimeUnit.SECONDS.sleep(5);
    } catch (InterruptedException _) {
    }
});
System.out.println(durationTimer.totalTime(TimeUnit.SECONDS)); // 5.000323186
The Timer interface has a method called record. As we did in the example
above, we can put the system operation we want to measure inside the record
method by placing the call to TimeUnit.SECONDS.sleep(5). After the
execution, we could check that the timer metric recorded five seconds as the
time to execute the task.
Distribution summaries
A recurrent use case for distribution summaries is when we want to measure the
file size an application provides for download. Similarly, we can use a
distribution summary to track the payload size of upload requests handled by the
application. The following is how we can create and use a distribution summary
meter:
SimpleMeterRegistry registry = new SimpleMeterRegistry();
var fileSize = 243000.81;
DistributionSummary responseSizeSummary = DistributionSummary
    .builder("file.size")
    .baseUnit("bytes")
    .register(registry);
responseSizeSummary.record(fileSize);
System.out.println(responseSizeSummary.totalAmount()); // 243000.81
Although not mandatory, setting the baseUnit to express which size unit you
intend to track is recommended.
Now that we know how Micrometer works, let us learn how to use it together
with Spring Boot to produce application metrics.
management:
  metrics:
    enable:
      all: false
      file: true
  endpoints:
    web:
      exposure:
        include: prometheus
We set the properties max-file-size and max-request-size to 10MB
because the Spring Boot default configuration is 1MB, which is insufficient
because we intend to upload files bigger than that. The datasource property
block contains the configuration that defines the H2 in-memory database mode.
The management block is where we define the properties that govern the
behavior of the actuator and Micrometer. By default, Spring Boot collects many
technical metrics from the system, mainly from the JVM. For simplicity’s sake,
we are not interested in them, so we disable these metrics by setting
management.metrics.enable.all to false to turn off all metrics
from the system. However, we cannot leave the configuration that way;
otherwise, no metric will be captured. To solve it, we set the property
management.metrics.enable.file to true, making Spring Boot
capture all metrics with the word file at the beginning of their names. Finally,
we define how metrics should be exposed by setting
endpoints.web.exposure.include to prometheus. That does not
mean Spring Boot will connect to a Prometheus instance to export Micrometer
metrics. Instead, it means that the Micrometer will generate metrics compatible
with Prometheus.
Having correctly defined the dependencies and adequately configured the Spring
Boot application to work with Micrometer, we are ready to start implementing
the file storage system with metrics instrumentalization.
        SpringApplication.run(FileStorageApplication.class, args);
    }
}
We place the @SpringBootApplication annotation on top of the
FileStorageApplication class to make Spring Boot use this class to
initiate the application.
As our file storage system operates by storing files in a database, it is imperative
that we implement a Jakarta entity. This entity will be our key component for
interacting with the database. Let us proceed with this task.
@Id
private String id;
@Lob
@Column(length = 20971520)
private byte[] content;
@Autowired
public FileMetric(MeterRegistry meterRegistry) {
this.meterRegistry = meterRegistry;
}
// Code omitted
}
The FileMetric class is annotated with the @Component, making it a
Spring managed bean that can be used in other Spring beans. We use the class
constructor to initialize the meterRegistry class attribute. Remember that
MeterRegistry is an interface with implementations for different monitoring
systems. Since we defined the specific Micrometer Maven dependency with
support for Prometheus, when we start this Spring Boot application, the
meterRegistry attribute will be initialized with a
PrometheusMeterRegistry type that is a MeterRegistry
implementation. Next, we implement the methods that return the meters we
intend to use in the application:
@Component
public class FileMetric {
    // Code omitted
    public Counter requestCounter(String method, String path) {
        return Counter
            .builder("file.http.request")
            .description("HTTP request")
            .tags("method", method)
            .tags("path", path)
            .register(meterRegistry);
    }
    public DistributionSummary fileDownloadSizeSummary(String fileName) {
        return DistributionSummary
            .builder("file.download.size")
            .baseUnit("bytes")
            .description("File Download Size")
            .tags("fileName", fileName)
            .register(meterRegistry);
    }
}
The requestCounter method returns a Counter object that lets us measure
how many HTTP requests are arriving at the file storage system endpoints,
which will be implemented soon. Note that we pass the method and path
parameters used in meter tags. Then, we have the fileUploadTimer method
which returns a Timer object we use to measure how long the application takes
to upload a file. Finally, we have the fileDownloadSizeSummary method
which returns a DistributionSummary object that we use to measure the
size of files users download from the file storage system.
Implementing Micrometer metrics with annotations like @Counted and
@Timed is also possible. However, the author recommends using the builder
approach because not all Micrometer metrics annotations may work out of the
box in a Spring Boot application. For example, using the @Counted annotation
with classes annotated with @Controller requires additional Spring
configuration.
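For reference, the annotation style might look like the following sketch; the metric name is an assumption, and the TimedAspect bean is the kind of extra configuration such annotations typically need (it comes from micrometer-core and relies on Spring AOP being on the classpath):
@Configuration
public class MetricsConfig {
    // Registers the aspect that makes @Timed annotations produce Timer meters.
    @Bean
    public TimedAspect timedAspect(MeterRegistry registry) {
        return new TimedAspect(registry);
    }
}

@Service
public class TimedFileService {
    // Records how long the annotated method takes on each invocation.
    @Timed(value = "file.upload.duration", description = "Time to upload a file")
    public void storeFile(File file) {
        // Code omitted
    }
}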
After implementing the FileMetric class, let us implement the service and
controller classes with metrics instrumentalization. Let us proceed with the
service class.
Implementing the File service
We use the service class as an intermediate layer between the controller and the
repository. The service class contains the methods responsible for uploading and
downloading files. The following code is how we can define the initial class
structure:
@Service
public class FileService {
    // Code omitted
    fileMetric.fileUploadTimer(file.getName()).record(
        () -> fileRepository.save(file)
    );
}
    @Autowired
    FileController(FileService fileService, FileMetric fileMetric) {
        this.fileService = fileService;
        this.fileMetric = fileMetric;
    }
    // Code omitted
    private void incrementRequestCounter(String method, String path) {
        fileMetric.requestCounter(method, path).increment();
    }
    private void recordDownloadSizeSummary(File file) {
        fileMetric
            .fileDownloadSizeSummary(file.getName())
            .record(file.getContent().length);
    }
}
We inject the FileService and FileMetric dependencies through the
class constructor annotated with @Autowired. Next, we define the
incrementRequestCounter method using the requestCounter
method from the FileMetric to increment the counter metric. We do
something similar with the recordDownloadSizeSummary, which records
the file content size.
We are now ready to implement the methods representing the HTTP endpoints
for uploading and downloading files. Let us start with the upload endpoint:
@RestController
public class FileController {
    // Code omitted
    @PostMapping("/file")
    public String uploadFile(@RequestParam("file") MultipartFile file)
            throws IOException {
        incrementRequestCounter(HttpMethod.POST.name(), "/file");
        // Code omitted
    }

    @GetMapping("/file/{id}")
    public ResponseEntity<Resource> downloadFile(@PathVariable String id)
            throws IOException {
        incrementRequestCounter(HttpMethod.GET.name(), "/file/" + id);
        // Code omitted: the file is looked up by its ID as an Optional
        if (file.isEmpty()) {
            return ResponseEntity.notFound().build();
        }
        recordDownloadSizeSummary(file.get());
        // Code omitted: the file content is wrapped into a Resource named resource
        return ResponseEntity.ok()
                .contentType(MediaType.APPLICATION_OCTET_STREAM)
                .contentLength(resource.contentLength())
                .header(HttpHeaders.CONTENT_DISPOSITION,
                        ContentDisposition.attachment()
                                .filename(file.get().getName())
                                .build().toString())
                .body(resource);
    }
    // Code omitted
}
The downloadFile method tracks two metrics: first, a request counter metric
by calling incrementRequestCounter(HttpMethod.GET.name(),
"/file/"+id) and second, a download size summary metric through
recordDownloadSizeSummary(file.get()). Requests to the
/file/{id} endpoint will trigger a file download in the web browser.
Next, let us check how we can make requests to test our application and
visualize the metrics captured by Micrometer.
Note that the file name random.txt is the same file name provided
when the file was uploaded.
2. We can access the URL http://localhost:8080/actuator/prometheus to
visualize the application metrics, as shown in the following figure:
Figure 10.2: Checking metrics produced by the application
Looking at the metrics in the image above, we can state that the /file
endpoint received one POST request captured by the following metric:
file_http_request_total{method="POST",path="/file",} 1.0
3. On the other hand, we can check that the /file/69d67cea-4022-
4262-a217-a89b9f52e57b endpoint received three requests
measured by using the following metric:
file_http_request_total{method="GET",path="/file/69d67cea-4
The following two metrics captured the time to upload the file and the file
download size, respectively:
file_upload_duration_seconds_sum{fileName="random.txt",} 0.
file_download_size_bytes_sum{fileName="random.txt",} 1.515E
Note that both metrics have the fileName="random.txt" as their
only tag.
Conclusion
In this chapter, we learned how important metrics are in helping us understand
application behavior, playing a fundamental role in measuring how well the
business rule code solves the problems the application is supposed to solve. We
also explored the main components behind Micrometer. We learned that the
most fundamental components are the registry, which enables the exposure of
metrics to specific monitoring tools like Prometheus, and the meter, which
represents different metric types like counter, gauge, timer, and distribution
summary. After grasping the fundamental Micrometer concepts, we
implemented a file storage system using Spring Boot and Micrometer, where we
had the chance to implement metrics to measure the HTTP requests to the
application endpoints, the time required to upload a file, and the size of
downloaded files.
This chapter focused on how Micrometer can be used to generate application
metrics. How such metrics can be scraped is explored in the next chapter,
where we will learn how to use Prometheus and Grafana to scrape application
metrics, enabling us to create helpful monitoring dashboards and set alerts.
We will learn how to capture and use application metrics with Prometheus
after integrating it with Grafana. We will also explore the features provided
by Grafana to create dashboards.
CHAPTER 11
Creating Useful Dashboards with
Prometheus and Grafana
Introduction
Over the last chapters, we have been exploring how essential monitoring tools
and techniques are to clarify how well a software system is running. Such tools
and techniques are fundamental in providing helpful input through metrics,
traces, and logs with details describing system behaviors. Metrics, in particular,
represent one of the cornerstones of monitoring practices that shed light on the
inner workings of applications’ operations. In today’s world, where the number
of applications and the volume of metrics they produce is bigger than ever, a
monitoring solution capable of capturing, storing, and serving large amounts of
metrics data is essential to ensure efficient monitoring of highly complex and
demanding applications. We have Prometheus as the solution that can help us
tackle such a monitoring challenge.
Complementing Prometheus in a frequently used technology monitoring stack is
Grafana, a powerful tool that lets us build beautiful and helpful dashboards using
metrics from many supported systems, including Prometheus. In this chapter, we
explore how to use Grafana and Prometheus to monitor Java applications.
Structure
The chapter covers the following topics:
• Capturing application metrics with Prometheus
• Integrating Prometheus with Grafana
• Creating Grafana dashboards with application-generated metrics
• Compiling and running the sample project
Objectives
By the end of this chapter, you will know the Prometheus architecture and its
fundamental concepts, giving you the essential knowledge to introduce
Prometheus as a monitoring tool to capture metrics from your Java applications.
Having comprehended the core Prometheus concepts, you will learn how to
apply them in a real-world monitoring setup where Prometheus captures metrics
from a Spring Boot application and serves them to Grafana, which provides a
helpful dashboard on top of those metrics. You will also learn how to use
Prometheus metrics to trigger alerts using Alertmanager.
Starting at the top in the figure above, the metrics exporters provide metrics data
that the Prometheus server scrapes. Metrics data is stored in the Prometheus
storage, allowing us to trigger alerts through Alertmanager and create
dashboards. Let us assess each component of the architecture.
Metrics exporters
Exporters are the architecture components responsible for making metrics
available to Prometheus. There are three types of exporters we will cover, as
follows:
1. The third-party exporter is used when one wants to get metrics from a
system whose source code one does not control. Suppose your application
stores data in a MySQL database, and you want to get metrics from it. As
you have no control over the MySQL source code, you must rely on a
third-party exporter that exposes MySQL metrics.
2. The application exporter provides metrics generated by instrumented
applications. Contrary to third-party exporters, we have control over the
metrics generated by application exporters because we can change the
application’s code.
3. The node exporter aims to expose machine-level metrics like
CPU, memory, and disk usage. It runs as a process in the operating
system.
All exporters provide an HTTP endpoint that the Prometheus server uses to pull
metrics data. Let us explore what is inside a Prometheus server.
Prometheus server
The Prometheus server is a fundamental architecture component. Its storage is
based on a time-series database that persists data directly on the operating
system disk. However, remote storage is also supported. The Prometheus server
scrapes metrics data through a pull mechanism that reaches out to the HTTP
endpoints provided by metric exporters. The alerting engine uses metrics data
and lets us define rules for triggering alerts. Finally, we have metrics consumers,
covered next.
Metrics consumers
We have two major metrics consumers: the Alertmanager and the dashboard
systems. Alertmanager receives alert events from the Prometheus server and
triggers notifications through emails, phone calls, and messages in chat systems
like Slack. Prometheus provides a basic dashboard engine that lets us visually
represent the metrics. However, it is a recommended and widespread practice to
plug in dedicated dashboard systems like Grafana, which we will explore further
in this chapter.
Now that we know the Prometheus architecture, let us learn how to install the
Prometheus server and explore using PromQL, a powerful query language that
allows us to perform aggregations on metrics.
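For instance, using the file.http.request counter from the previous chapter, a query like the following would show the per-second request rate for each path over the last five minutes (the metric name comes from the earlier example; the five-minute window is an arbitrary choice for illustration):
sum by (path) (rate(file_http_request_total[5m]))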
Configuring Prometheus
Prometheus configuration is defined through the prometheus.yml file that
lets us set, for example, the exporter endpoint that the Prometheus server will
use to scrape metrics. Next, we will cover the steps to configure Prometheus.
1. Let us first consider what the prometheus.yml default configuration
looks like:
# my global config
global:
  scrape_interval: 15s # Set the scrape interval to every 15 seconds. Default is every 1 minute.
  evaluation_interval: 15s # Evaluate rules every 15 seconds. The default is every 1 minute.
  # scrape_timeout is set to the global default (10s).

# Alertmanager configuration
alerting:
  alertmanagers:
    - static_configs:
        - targets:
          # - alertmanager:9093
The figure shows the initial page when you access the URL
http://localhost:9090/ in your web browser.
4. When you click on Status and then Targets, you can see which exporter
endpoints Prometheus is connected to, as shown in the following
figure:
Figure 11.3: Prometheus user interface in the web browser
1. We define the Prometheus and Grafana services in a docker-compose file:
prometheus:
  image: prom/prometheus:v2.45.6
  network_mode: host
  volumes:
    - ./monitoring/prometheus.yml:/etc/prometheus/prometheus.yml
grafana:
  image: grafana/grafana:10.4.4
  network_mode: host
  depends_on:
    - prometheus
2. The Prometheus configuration is done through the prometheus.yml
file. Below is how we configure this file:
global:
  scrape_interval: 15s
  evaluation_interval: 15s
scrape_configs:
  - job_name: "File Storage"
    metrics_path: /actuator/prometheus
    static_configs:
      - targets: ["localhost:8080"]
The metrics_path we pass here is the one that is exposed by the file
storage application through the Spring Boot actuator. The
localhost:8080 is the URL endpoint where the file storage system
will run.
3. We start the Grafana and Prometheus containers up by executing the
following command:
$ docker-compose up -d
Creating chapter11_prometheus_1 ... done
Creating chapter11_grafana_1 ... done
We can confirm that the container setup is working by accessing Prometheus at
http://localhost:9090 and Grafana at http://localhost:3000. Next, let us see
how to configure Prometheus as a Grafana data source.
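One common way to do that is through a Grafana provisioning file mounted into the Grafana container (typically under /etc/grafana/provisioning/datasources/); the file name and values below are illustrative assumptions rather than the exact setup used in this book:
# monitoring/grafana-datasource.yml (illustrative)
apiVersion: 1
datasources:
  - name: Prometheus
    type: prometheus
    access: proxy
    url: http://localhost:9090
    isDefault: true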
To enable alerting, we extend the docker-compose file with an Alertmanager
service and mount the alert rules file into the Prometheus container:
alertmanager:
  image: prom/alertmanager:v0.27.0
  network_mode: host
  volumes:
    - ./monitoring/alertmanager.yml:/etc/alertmanager/config.yml
prometheus:
  image: prom/prometheus:v2.45.6
  network_mode: host
  volumes:
    - ./monitoring/prometheus.yml:/etc/prometheus/prometheus.yml
    - ./monitoring/alert-rules.yml:/etc/prometheus/alert-rules.yml
  depends_on:
    - alertmanager
# Code omitted
We need to adjust the prometheus.yml file to enable
the alerting mechanism:
# Code omitted
alerting:
  alertmanagers:
    - static_configs:
        - targets: [ 'localhost:9093' ]
rule_files:
  - "/etc/prometheus/alert-rules.yml"
# Code omitted
The alerting configuration block lets us specify the Alertmanager URL
localhost:9093 that is used to trigger alerts. With the rule_files block,
we can tell Prometheus where it can find the files containing the alert rules. Let
us explore next how we can define Prometheus rules and configure
Alertmanager to send email notifications.
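For reference, the following is a minimal sketch of what the alert-rules.yml file could look like with the HighFileUploadDurationTime alert used later in this chapter; the PromQL expression, threshold, and labels are illustrative assumptions, not the book's exact rule:
groups:
  - name: file-storage-alerts
    rules:
      - alert: HighFileUploadDurationTime
        expr: rate(file_upload_duration_seconds_sum[1m]) / rate(file_upload_duration_seconds_count[1m]) > 0.1
        for: 30s
        labels:
          severity: warning
        annotations:
          summary: File uploads are taking longer than expected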
After making many requests to the file storage system, you should see a
Grafana dashboard similar to the one shown in the picture above.
5. Execute the following command every 10 seconds for 1 minute:
$ curl --form file='@random.txt' localhost:8080/file
After executing the command above, you can check the alert captured by
Alertmanager at http://localhost:9093, as shown in the following figure:
Figure 11.11: Alert captured by Alertmanager
We can confirm the alert was successfully captured by checking the alert
name HighFileUploadDurationTime, which is the same as
defined in the alert-rules.yml file.
Conclusion
Storing and serving metrics data is a fundamental responsibility greatly fulfilled
by Prometheus, which acts as a monitoring backbone solution by letting us
capture application-generated metrics to build meaningful Grafana dashboards
and trigger alerts with Alertmanager.
In this chapter, we learned the Prometheus architecture, exploring how metrics
are collected through the application, node, or third-party exporters. We also
learned how Prometheus stores metrics data and makes them available for
alerting and dashboard creation purposes. To explore Prometheus’s possibilities
further, we integrated it with Grafana by creating visualizations based on the
metrics produced by the file storage system. Finally, we configured a
Prometheus rule to trigger an alert through the Alertmanager.
In the next chapter, we start a discussion on software architecture by examining a
technique called domain-driven design (DDD), which lets us structure software
code in a way that closely represents real-world problems. We will explore DDD
concepts such as entities to express identity and uniqueness in a system. We will
also learn how to use value objects to enhance the meaning of the domain model.
CHAPTER 12
Solving problems with Domain-
driven Design
Introduction
As a Java developer, most of the software you will see is made to solve business
problems. This software represents the processes, rules, and all sorts of things an
organization needs to do to stay profitable and fulfill customer expectations.
Malfunctions and bugs in such software may translate directly into financial and
reputational damage because most, if not all, business activities depend on the
software that enables them.
As the software development industry matured over the years and software
systems shifted from mere supporters of business operations to becoming the
core actors of business success, developers became more concerned about the
practices that allowed them to develop applications that captured business
knowledge more accurately. From that concern, one practice called domain-
driven design (DDD) emerged as a software development technique with the
main goal of designing applications driven by real-world business problems.
Knowing how to employ domain-driven design is a fundamental skill for any
Java developer interested in building enterprise applications that act as crucial
assets for businesses. That is why this chapter covers essential domain-driven
design principles and techniques.
Structure
The chapter covers the following topics:
• Introducing domain-driven design
• Conveying meaning with value objects
• Expressing identity with entities
• Defining business rules with specifications
• Testing the domain model
• Compiling and running the sample project
Objectives
By the end of this chapter, you will know how domain-driven design principles
such as bounded context and ubiquitous languages help you to model a problem
domain that lets you write code that not only works but also serves as an
accurate expression of the business operations that are conducted via software to
solve real-world problems. By employing techniques such as entities, value
objects, and specifications, you will be able to develop better-structured
applications by keeping complexity under control and avoiding the so-called big
ball of mud systems, where any code change represents a high risk of breaking
things.
Bounded contexts
To understand bounded contexts and why mapping them is a fundamental
undertaking in domain-driven design, let us consider the business scenario of a
personal finance solution. A person usually expects a personal finance solution
to track their expenses: how much money they received and where, when, and on
what they spent it. Seeing the money activity through a monthly report is also a
valuable capability of such a solution.
When imagining a software system capable of delivering the personal finance
features described previously, we can consider the following three system
responsibilities:
• Money handling: It is responsible for storing and providing access to
money transactions. It also allows for the organization of transactions
through user-defined categories like gym, rent, grocery, investments, etc.
• Report generation: It contains the rules for generating monthly money
activity reports using Excel spreadsheets.
• File storage: It provides file storage capabilities, allowing Excel report files
to be uploaded and downloaded.
In a traditional monolithic approach, all these three responsibilities would be part
of the same application, packed together in the same deployable unit. They
would probably be together in the same source code repository. The following
figure illustrates the structure of a monolith application based on the three
responsibilities described above:
Figure 12.1: Personal finance monolith structure
Ubiquitous language
Clear communication is essential for the success of any software project. On the
one hand, we have customers expressing their needs. On the other hand, business
analysts try to understand those needs and share their learning with product
owners and developers. Failure to express or interpret an idea may have serious
consequences, as applications may be developed based on faulty thinking. How
can we bridge the communication gap across all stakeholders involved in a
software project? We can employ the domain-driven design principle called
ubiquitous language, which helps us define a set of terms and their meanings that
accurately describe a problem domain. These terms must be understood the same
way by developers, product owners, designers, business analysts, and any other
relevant stakeholders.
The primary benefit of establishing a ubiquitous language is that when a
software system is developed based on the terms defined by such language, the
application code becomes a source of knowledge of how the business operates.
By sharing the same understanding of the problem domain as the domain
experts, developers go on to create application code driven primarily by
business needs rather than anything else. There is technology integration with
databases and other resources, but it is not the technology choices that drive the
code structure; instead, it is the problem domain.
Coming up with an accurate ubiquitous language can be challenging.
Sometimes, developers have no clue about the problem domain in which they
are supposed to develop a software solution. Such problem domain knowledge
usually can be found in the minds of business analysts or experienced developers
who understand how the business operates. Documentation can also be a source
of problem-domain knowledge. However, there may be scenarios where people
with problem domain knowledge no longer work in the company, and there is no
documentation explaining the problem domain. By employing domain-driven
design, we can use knowledge-crunching techniques to learn more about the
problem domain.
Knowledge crunching can range from reading books on the problem domain
area to talking with people who can provide helpful information to understand
how a business operates. We can use event storming as a means of knowledge
crunching. Event storming is a technique that originated from domain-driven
design practices and can yield significant results in understanding the problem
domain and building the ubiquitous language. Next, we discuss what event
storming is and the benefits it can provide to a software project.
Event storming
Most businesses operate based on events representing the interaction of people
with business processes. These interactions come from the desire to achieve
some outcome carried out by the business process, which is expected to
represent a series of steps that, when executed, produce a result. Mapping those
business processes and how they work constitutes the major goal of event
storming, a workshop session that brings together people who do not know how
the business works and people who do. Software developers seeking problem-
domain knowledge are the ones who need to learn how the business works. On
the other hand, those who know the business processes are the so-called
domain experts.
Interested parties, like developers and domain experts, sit together to identify
domain events and the business processes those events are associated with.
Learning about domain events lets one know what must be done to achieve
business outcomes. People usually leave these event storming sessions with a better
understanding of how problems are solved. Interested parties get the input they
need to implement applications that will benefit customers. Domain experts
provide their expertise and validate their knowledge by walking through the
steps of the business process.
Aware of the benefits that event storming can provide in helping to define clear
bounded contexts, the ubiquitous language, and the domain model, let us
explore how to conduct an event storming session.
Domain events
The event storming session starts with the intent to acquire knowledge about how a
business process works. Consider, for example, a personal finance solution and
how it should work to achieve business outcomes. One of the most critical
aspects of such a solution is to enable people to track their expenses by adding
their transactions. Based on that, we can define a domain event named
TransactionAdded, as shown in the following figure:
Domain events are always named in the past tense, describing something
that has already happened. Alright, we have the
TransactionAdded event, but how is it triggered? To trigger it, we need to
define a command. Let us check it next.
Commands
Having identified our first domain event, the TransactionAdded, we must
determine which command triggers such an event. We can solve it by defining
the AddTransaction command, as shown in the following figure:
Figure 12.4: The command stick note
Actors
Actors play a fundamental role in event storming because it is through them that
we can track the source of the existence of domain events. Actors can be defined
as humans or non-humans, and their relationship with domain events is mediated
through commands as follows:
Identifying the actors helps us understand who is triggering the events and
allows us to explore the motivations behind their interactions with commands
and the generated domain events.
Aggregates
We have identified the TransactionAdded domain event and the
AddTransaction command. The first represents an event that happened,
while the second refers to the action generating the event. Domain events and
commands represent a connection, an aggregation of activities to fulfill some
business outcome. This connection between domain events and commands is
described, in our personal finance example, through an aggregate called
Transaction, as shown in the following figure:
Think of an aggregate as an entity or data that ties together the command and
domain events responsible for enabling the business process. In the example
above, the aggregate is positioned between the command and the domain event
stick notes.
We can map all the business processes we are interested in by using domain
events, commands, actors, and aggregates. Once we have them mapped, the final
result will serve as the input for the domain model implementation. In the
following section, we will explore what the domain model is.
@Builder
public Account(Id id, String name, List<Transaction> transactions,
               List<Category> categories) {
    this.id = id;
    this.name = name;
    if (transactions == null) {
        throw new RuntimeException("Transaction list cannot be null");
    } else {
        this.transactions = transactions;
    }
    if (categories == null) {
        throw new RuntimeException("Categories list cannot be null");
    } else {
        this.categories = categories;
    }
}
}
The Account entity is implemented as a record whose constructor contains
guard checks to ensure the lists of transactions and categories are never null.
When an account is created for the first time, we expect it to have no
transactions or categories. From the code implementation perspective, the
constructor accepts empty lists of transactions and categories but cannot
accept nulls.
In the domain model, we use entities to capture the data and behaviors that
represent the business problem we intend to solve. The entity’s behavior can be
subjected to constraints that define what can and cannot be done. Such
constraints can be expressed through specifications that we will explore next.
@Override
public void check(T t) throws GenericSpecificationException {
    // Code omitted
}
}
The validation occurs inside the isSatisfiedBy method, which evaluates the
results of spec1 and spec2.
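For reference, the combining specification described above could be sketched as follows; the exact abstraction in the book's sample project may differ, and here AndSpecification implements a simple Specification interface directly:
public interface Specification<T> {
    boolean isSatisfiedBy(T t);
    void check(T t) throws GenericSpecificationException;
}

public final class AndSpecification<T> implements Specification<T> {

    private final Specification<T> spec1;
    private final Specification<T> spec2;

    public AndSpecification(Specification<T> spec1, Specification<T> spec2) {
        this.spec1 = spec1;
        this.spec2 = spec2;
    }

    @Override
    public boolean isSatisfiedBy(T t) {
        // The combined specification is satisfied only when both are satisfied.
        return spec1.isSatisfiedBy(t) && spec2.isSatisfiedBy(t);
    }

    @Override
    public void check(T t) throws GenericSpecificationException {
        if (!isSatisfiedBy(t)) {
            throw new GenericSpecificationException("Specification not satisfied");
        }
    }
}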
Having implemented the specification abstraction, we implement specifications
for the personal finance application. For that, we can, for example, define a
specification with a business rule that ensures no transactions with zero amount
are entered into the system:
public final class TransactionAmountSpec extends AbstractSpecification<Double> {

    @Override
    public boolean isSatisfiedBy(Double amount) {
        return amount > 0;
    }

    @Override
    public void check(Double amount) throws GenericSpecificationException {
        if (!isSatisfiedBy(amount))
            throw new GenericSpecificationException(
                "Transaction value 0 is invalid");
    }
}
The business rule is implemented inside the isSatisfiedBy, where we check
if the transaction value is greater than zero. Below is how we use the
TransactionAmountSpec in the Transaction entity:
@Builder
public record Transaction(Id id, String name, Double amount,
                          Type type, Instant timestamp) {

    public static Transaction createTransaction(Account account, String name,
                                                Double amount, Type type) {
        var transaction = getTransaction(name, amount, type);
        var transactions = account.transactions();
        new TransactionAmountSpec().check(transaction.amount());
        transactions.add(transaction);
        return transaction;
    }
    // Code omitted
}
Whenever a new transaction is created, we add it to a list of transactions. Still,
before doing so, we check through the TransactionAmountSpec to see if
the transaction amount is greater than zero. If it is not, then the system throws an
exception. We can follow the same approach for the implementation of a
specification that ensures the user does not provide duplicate categories:
public final class DuplicateCategorySpec extends AbstractSpecification<Category> {

    private final List<Category> categories;

    public DuplicateCategorySpec(List<Category> categories) {
        this.categories = categories;
    }

    @Override
    public boolean isSatisfiedBy(Category category) {
        return categories.contains(category);
    }

    @Override
    public void check(Category category) throws GenericSpecificationException {
        if (isSatisfiedBy(category))
            throw new GenericSpecificationException("Category already exists");
    }
}
The DuplicateCategorySpec has a constructor that receives a list of
categories of an account. The specification checks if the new category exists in
such a list. If it exists, then it throws an exception. The following is how we can
use the DuplicateCategorySpec in the Category entity:
@Builder
public record Category(Id id, String name, List<Transaction> transactions) {

    public static Category createCategory(Account account, String name) {
        // Code omitted
        new DuplicateCategorySpec(categories).check(category);
        categories.add(category);
        return category;
    }
    // Code omitted
}
The way we use the DuplicateCategorySpec in the Category entity is
similar to what we did previously in the Transaction entity. If the validation
passes, the category is added to the account’s list of categories.
@Test
public void Given_an_account_exists_create_a_category() {
    var name = "testAccount";
    var category = "testCategory";
    var account = createAccount(name);
    assertEquals(0, account.categories().size());
    Category.createCategory(account, category);
    assertEquals(1, account.categories().size());
}

@Test
public void Given_a_category_already_exists_throw_exception() {
    var name = "testAccount";
    var category = "testCategory";
    var account = createAccount(name);
    Category.createCategory(account, category);
    assertThrows(GenericSpecificationException.class, () ->
        Category.createCategory(account, category));
}
// Code omitted
}
With the Given_an_account_exists_create_a_category
test, we can confirm whether a new category is created in an existing
account. We also test if the specification is really working with the
Given_a_category_already_exists_throw_exception
test, which checks if an exception is caught when we try to add an already
existing category.
3. To conclude, we test the Transaction entity:
public class TransactionTest {
    // Code omitted
    @Test
    public void Given_an_invalid_transaction_throw_exception() {
        var account = createAccount("testAccount");
        assertThrows(GenericSpecificationException.class, () ->
            Transaction.createTransaction(account, "testTransaction",
                0.0, Type.DEBIT));
    }

    @Test
    public void Given_a_category_add_and_remove_a_credit_transaction() {
        var account = createAccount("testAccount");
        var category = Category.createCategory(account, "testCategory");
        var transaction = Transaction.createTransaction(account,
            "testTransaction", 10.0, Type.CREDIT);
        assertEquals(0, category.transactions().size());
        transaction.addTransactionToCategory(category);
        assertEquals(1, category.transactions().size());
        transaction.removeTransactionFromCategory(category);
        assertEquals(0, category.transactions().size());
    }
    // Code omitted
}
As we did in the CategoryTest, here in the TransactionTest, we
also test if the specification is working with the
Given_an_invalid_transaction_throw_exception test,
which checks if an exception is thrown when the transaction has zero
value. With the
Given_a_category_add_and_remove_a_credit_transaction
test, we check if the application adds a transaction to a category and then
removes it from that category.
To wrap up, let us compile the personal finance project and run its tests.
Conclusion
Domain-driven design stands as a reliable approach to designing enterprise
applications. By putting business concerns, rather than technology ones, as the
main drivers of application development, the domain-driven design approach
with concepts like ubiquitous language, bounded context, and domain model helps
us better understand business problems and how to solve them.
Motivated by the benefits of domain-driven design, we looked at fundamental
principles like the ubiquitous language, which fosters shared understanding
among developers, business analysts, project owners, and other stakeholders. We
also explored the importance of mapping bounded contexts to eliminate
ambiguities and clearly define system responsibilities. Furthermore, we
discovered event storming, a powerful collaboration technique that brings
together developers and domain experts to discuss and gain clarity on the
business problems that a software project intends to solve.
Finally, we put these concepts into action by implementing a personal finance
application. This practical exercise allowed us to see how entities, value objects,
and specifications, all key elements of domain-driven design, can be expressed
through Java code.
In the next chapter, we explore how to implement Java applications using
layered architecture. We will learn how to develop a Java application using an
architecture where the data layer is responsible for data access and manipulation,
the service layer provides business rules, and the API layer exposes system
behaviors.
Introduction
Whenever a new software project is started, developers need to decide how the
different software components will be structured and interact with each other to
fulfill user requirements. Such decisions are made to provide working software
running in production in the best way possible. Over the years, developers have
been exploring techniques to structure application code that produces working
software and let them do so sustainably by identifying and separating concerns
in a software system.
One technique, known as layered architecture, has been widely adopted in the
enterprise software industry due to its reasonable simplicity and pragmatic
approach. Layered architecture does not take much effort to implement or to
explain to other team members, which may make it a viable alternative for
those wanting to deliver working software faster while
keeping, to a certain degree, some order on how the application code is
structured. So, in this chapter, we will explore layered architecture and how we
can use it to develop better-structured Java applications.
Structure
The chapter covers the following topics:
• Importance of software architecture
• Understanding layered architecture
• Handling and persisting data in the data layer
• Defining business rules in the service layer
• Exposing application behaviors in the presentation layer
• Compiling and running the sample project
Objectives
By the end of this chapter, you will understand layered architecture by arranging
the application code into layers, each holding a specific system responsibility.
You will learn how the layered approach helps establish boundaries in the
application code, which can contribute to identifying and separating concerns in
a software system, positively influencing the overall software architecture. To
solidify the concepts explored in this chapter, we will examine the development
steps to implement a Java application using layered architecture ideas.
With this approach, we can avoid the situation where a layer is used only as a
proxy to access another layer.
Regardless of the layer communication approach, the layer dependency direction
must always go downwards because a high-level layer always depends on a
lower-level layer. For example, the presentation layer can depend on the service
or data layer, but the service layer cannot depend on the presentation layer.
Having grasped the fundamental ideas of layered architecture, let us now look at
their practical application in developing a personal finance system based on the
Spring Boot framework. Starting from the layer structure we have discussed so
far, which is based on the data, service, and presentation layers, we begin by
exploring the role of the data layer in handling data entities and persistence.
@Id
private String id;
private String name;
private Double amount;
private String type;
private Instant timestamp;
}
We use the @NoArgsConstructor, @AllArgsConstructor, @Getter,
and @Builder annotations from Lombok to make the code more concise.
Lombok is a Java library that helps considerably reduce the boilerplate produced
by the recurring usage of common language constructs such as constructor
declarations and the definition of getters and setters. We use the @Entity
annotation to make the Transaction class a Jakarta Persistence entity. When
declaring the entity class attributes, we must specify which attribute will be used
for the entity ID. We do that by placing the @Id annotation on the id attribute.
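Putting the annotations described above together with the attributes from the fragment, the complete entity declaration could look like the following sketch (Lombok and jakarta.persistence imports are omitted, as in the book's listings; the book's full listing may differ in minor details):
@Entity
@NoArgsConstructor
@AllArgsConstructor
@Getter
@Builder
public class Transaction {

    @Id
    private String id;
    private String name;
    private Double amount;
    private String type;
    private Instant timestamp;
}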
Next, we can define the repository interface:
@Repository
public interface TransactionRepository extends
CrudRepository<Transaction, String> { }
Here, we are just extending the CrudRepository without defining any
additional method because we are relying only on the basic database operations
that are already provided when the CrudRepository is extended.
Next, we implement the category entity and its repository.
@Id
private String id;
private String name;
@Override
public boolean equals(Object o) {
if (this == o) return true;
if (o == null || getClass() != o.getClass())
return false;
Category category = (Category) o;
return Objects.equals(name, category.name);
}
@Override
public int hashCode() {
return Objects.hash(name);
}
}
Note that we have a @OneToMany annotation placed above the transactions
class attribute. This annotation expresses a one-to-many relationship between a
category and one or more transactions. Following this, we override both the
equals and hashCode methods. The logic we define establishes that
Category objects with the same name are considered equal. Next, we
implement the repository interface as follows:
@Repository
public interface CategoryRepository extends
CrudRepository<Category, String> { }
As we previously did for the TransactionRepository interface, we
extend the CrudRepository in the CategoryRepository, hence
inheriting all built-in database operations sufficient to handle Category
entities.
Finally, we implement the entity and repository classes to handle accounts.
@Id
private String id;
private String email;
private String password;
// Code omitted
}
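The AccountRepository itself is not shown in the listing above. Based on how it is used later in the chapter, where findByEmail returns an Optional, it could look like the following sketch:
@Repository
public interface AccountRepository extends CrudRepository<Account, String> {
    Optional<Account> findByEmail(String email);
}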
With the data layer in place, we move to the service layer, which we implement
through the following steps.
1. We start with the initial structure of the TransactionService class:
@Service
public class TransactionService {
    // Code omitted
    @Autowired
    public TransactionService(CategoryRepository categoryRepository,
                              AccountRepository accountRepository) {
        this.categoryRepository = categoryRepository;
        this.accountRepository = accountRepository;
    }
    // Code omitted
}
We use the @Service annotation from Spring to make it a managed
bean, so we do not need to worry about creating class instances or
providing dependencies. We also define CategoryRepository and
AccountRepository class attributes injected through the
TransactionService's constructor.
2. Continuing with the implementation, we implement methods responsible
for creating new transactions:
@Service
public class TransactionService {
    // Code omitted
    public void createTransaction(Account account,
            TransactionPayload transactionPayload) throws Exception {
        validateAmount(transactionPayload);
        var transaction = Transaction.builder()
                .id(transactionPayload.getId())
                .name(transactionPayload.getName())
                .amount(transactionPayload.getAmount())
                .type(transactionPayload.getType())
                .timestamp(transactionPayload.getTimestamp())
                .build();
        account.getTransactions().add(transaction);
        accountRepository.save(account);
    }
    // Code omitted
}
Next, we implement the CategoryService class through the following steps.
1. We start with its constructor:
@Service
public class CategoryService {

    @Autowired
    public CategoryService(AccountRepository accountRepository) {
        this.accountRepository = accountRepository;
    }
    // Code omitted
}
We do not persist categories directly into the database. Instead, we save
them through the Account entity to which they belong. That is why
AccountRepository should be injected as a dependency.
2. Next is the code that lets us create a new category:
@Service
public class CategoryService {
    // Code omitted
    public void createCategory(Account account,
            CategoryPayload categoryPayload) throws Exception {
        // Code omitted
    }
}
Finally, we implement the AccountService class, which is responsible for
creating accounts and validating their email addresses:
@Service
public class AccountService {

    @Autowired
    public AccountService(AccountRepository accountRepository) {
        this.accountRepository = accountRepository;
    }

    public Account createAccount(AccountPayload accountPayload) throws Exception {
        validateEmail(accountPayload);
        var account = Account.builder()
                .id(accountPayload.getId())
                .email(accountPayload.getEmail())
                .password(accountPayload.getPassword())
                .categories(List.of())
                .transactions(List.of())
                .build();
        accountRepository.save(account);
        return account;
    }

    private void validateEmail(AccountPayload accountPayload) throws Exception {
        if (!Pattern.matches("^(.+)@(\\S+)$", accountPayload.getEmail())) {
            throw new Exception("Email format is invalid.");
        }
        if (accountRepository.findByEmail(accountPayload.getEmail()).isPresent()) {
            throw new Exception("Email provided already exists.");
        }
    }
}
The AccountService is a straightforward service class implementation that
relies only on the AccountRepository class. The createAccount
method receives an AccountPayload object as a parameter used by the
validateEmail method to ensure the email provided is valid. The new
account will be saved in the database if the validation passes. The following is
how the AccountPayload class can be implemented:
@Getter
public class AccountPayload {
    private String id;
    private String email;
    private String password;
}
With the service and payload classes in place, we can implement the
presentation layer through the following steps.
1. We start with the TransactionEndpoint class and its constructor:
@RestController
public class TransactionEndpoint {

    @Autowired
    private TransactionEndpoint(TransactionService transactionService,
                                TransactionRepository transactionRepository,
                                AccountRepository accountRepository,
                                CategoryRepository categoryRepository) {
        this.transactionService = transactionService;
        this.transactionRepository = transactionRepository;
        this.accountRepository = accountRepository;
        this.categoryRepository = categoryRepository;
    }
    // Code omitted
}
We put the @RestController annotation from Spring Boot on top of
the TransactionEndpoint class to expose API endpoints. We inject
the TransactionService, TransactionRepository,
AccountRepository, and CategoryRepository as
dependencies to allow proper transaction management.
2. Having the basic class structure, let us define the endpoints to create and
retrieve transactions:
@RestController
public class TransactionEndpoint {
    // Code omitted
    @PostMapping("/transaction")
    public void createTransaction(
            @RequestBody TransactionPayload transactionPayload) throws Exception {
        var account = accountRepository
                .findById(transactionPayload.getAccountId()).get();
        transactionService.createTransaction(account, transactionPayload);
    }

    @GetMapping("/transactions")
    public List<Transaction> allTransactions() {
        return (List<Transaction>) transactionRepository.findAll();
    }
    // Code omitted
}
The createTransaction method handles HTTP POST requests at
/transaction containing a JSON payload mapped to the
TransactionPayload class. The payload, based on the data provided
by the user, is used to save a new transaction in the system. We use the
account ID obtained from the payload to fetch an Account object. Then,
we pass the Account and TransactionPayload objects to create
the transaction using
transactionService.createTransaction(account,
transactionPayload).
The allTransactions method is straightforward. It handles HTTP
GET requests that retrieve all transactions in the system.
3. Besides allowing the creation and retrieval of transactions, the
TransactionEndpoint also contains endpoints that let us add a
transaction to a category or remove it from one:
@RestController
public class TransactionEndpoint {
    // Code omitted
    @PutMapping("/{categoryId}/{transactionId}")
    public void addTransactionToCategory(
            @PathVariable String categoryId,
            @PathVariable String transactionId) {
        var category = categoryRepository.findById(categoryId).get();
        var transaction = transactionRepository.findById(transactionId).get();
        transactionService.removeTransactionFromCategory(category, transaction);
        transactionService.addTransactionToCategory(category, transaction);
    }

    @DeleteMapping("/{categoryId}/{transactionId}")
    public void removeTransactionFromCategory(
            @PathVariable String categoryId,
            @PathVariable String transactionId) {
        var category = categoryRepository.findById(categoryId).get();
        var transaction = transactionRepository.findById(transactionId).get();
        transactionService.removeTransactionFromCategory(category, transaction);
    }
    // Code omitted
}
To add a transaction to a category, the system expects an HTTP PUT
request at the /{categoryId}/{transactionId} endpoint, handled by the
addTransactionToCategory method. This method receives the
categoryId and transactionId parameters, which are used to fetch the
Category and Transaction objects needed to categorize the
transaction through the call to
transactionService.addTransactionToCategory(category,
transaction). A similar operation occurs for the deletion endpoint,
which handles HTTP DELETE requests and relies on
transactionService.removeTransactionFromCategory(category,
transaction) to remove a transaction from a category. Note that to
change a transaction’s category, we must first delete it from the existing
category by sending an HTTP DELETE request.
Next, we learn how to implement the category endpoint.
@RestController
public class CategoryEndpoint {

    @Autowired
    private CategoryEndpoint(CategoryService categoryService,
                             CategoryRepository categoryRepository,
                             AccountRepository accountRepository) {
        this.categoryService = categoryService;
        this.categoryRepository = categoryRepository;
        this.accountRepository = accountRepository;
    }
    // Code omitted
}
We inject the CategoryService, CategoryRepository, and
AccountRepository classes as dependencies. The AccountRepository
is required because every category is associated with an account, so we use the
AccountRepository to retrieve Account objects. The
CategoryEndpoint lets users create and list categories, as follows:
@RestController
public class CategoryEndpoint {

    @PostMapping("/category")
    public void createCategory(@RequestBody CategoryPayload categoryPayload)
            throws Exception {
        var account = accountRepository
                .findById(categoryPayload.getAccountId()).get();
        categoryService.createCategory(account, categoryPayload);
    }

    @GetMapping("/categories")
    public List<Category> allCategories() {
        return (List<Category>) categoryRepository.findAll();
    }
}
The createCategory method handles HTTP POST requests at the
/category endpoint, creating a new category based on the user-provided
JSON payload mapped to CategoryPayload. On the other hand, the
allCategories method handles HTTP GET requests at the /categories
endpoint, retrieving all available categories.
Having implemented the endpoint classes to handle transactions and categories,
we still need to create an endpoint to handle accounts.
@RestController
public class AccountEndpoint {

    @Autowired
    private AccountEndpoint(AccountService accountService,
                            AccountRepository accountRepository) {
        this.accountService = accountService;
        this.accountRepository = accountRepository;
    }

    @PostMapping("/account")
    public Account createAccount(@RequestBody AccountPayload accountPayload)
            throws Exception {
        return accountService.createAccount(accountPayload);
    }

    @GetMapping("/account/{email}")
    public Account getAccount(@PathVariable String email) throws Exception {
        return accountRepository.findByEmail(email)
                .orElseThrow(() -> new Exception("Account not found"));
    }
}
The createAccount method handles HTTP POST requests at the
/account endpoint that lets users create accounts, while the getAccount
method handles HTTP GET requests, allowing users to get account details by
passing the account email address.
At this stage, the personal finance application is implemented using the data,
service, and presentation layers. Let us explore next how to compile and run the
personal finance application.
Conclusion
The ability to group system responsibilities into layers allows one to define
boundaries within a system. These boundaries are formed based on our
understanding of the steps a software system needs to conduct to fulfill user
needs. The layered architecture helps to capture such an understanding into
layers that cooperate in realizing system behaviors. Because of its pragmatic
approach, the learning curve to grasp layered architecture is not so high, which
makes such architecture a good candidate for fast application development.
Aware of the benefits that layered architecture can provide, in this chapter, we
explored the ideas behind designing software systems as layers by
implementing the personal finance system: first the data layer, responsible for
abstracting and handling all database interactions; then the service layer, in
charge of enforcing constraints through business rules that dictate how the
software should behave; and finally the presentation layer, responsible for
presenting, through an API or a graphical user interface, the behaviors
supported by the personal finance application.
In the next and final chapter, we explore the software design approach called
hexagonal architecture, which allows us to develop more change-tolerable
applications. We will learn how to arrange the domain model in the domain
hexagon, provide input and output ports in the application hexagon, and expose
input and output adapters in the framework hexagon.
CHAPTER 14
Building Applications with Hexagonal
Architecture
Introduction
Software development in an environment of constant changes can be
challenging. Customers want to receive good service, and organizations strive to
provide it with efficient software systems. However, customer needs change. Not
only that, the way to fulfill customer needs can also change. After all, enterprises
are in an unending quest to produce value in the most inexpensive way possible.
Such a landscape presents a formidable challenge for developers who need to
solve customer problems now and, at the same time, ensure the systems they are
creating can evolve sustainably. By being sustainable, we are referring to the
ability to handle software changes gracefully, especially those that deal with
fundamental technological dependencies like, for example, database or
messaging systems.
We tackle uncertainty by developing software in a change-tolerable way. We can
accomplish that by using hexagonal architecture, a technique that helps develop
software so that the technological aspects are entirely decoupled from the
business ones. This chapter explores how hexagonal architecture helps create
software systems capable of welcoming fundamental technological changes
without significant refactoring efforts.
Structure
The chapter covers the following topics:
• Introducing hexagonal architecture
• Arranging the domain model
• Providing input and output ports
• Exposing input and output adapters
• Compiling and running the sample project
Objectives
By the end of this chapter, you will know how to develop Java applications using
the hexagonal architecture. You will learn how to define the domain model
provided by the domain hexagon. You will also learn how to use input and
output ports from the application hexagon to orchestrate system dependencies
required to enable behaviors established by the domain hexagon. Finally, you
will learn how to use adapters from the framework hexagon to make your system
compatible with different technologies.
On the driver side, we have systems that can trigger behaviors in the hexagonal
application. On the driven side, we have systems on which the hexagonal
application depends. We can also use the primary and secondary terms instead of
driver and driven. In the upcoming sections, we will explore the driver and
driven sides further.
The domain, application, and framework hexagons each have responsibilities in
the hexagonal system. Let us start exploring such responsibilities with the
domain hexagon.
Entities
In domain-driven design, we use entities to describe everything that can be
uniquely identified. For example, a person, a product, a user, or an account can
be uniquely identified and modeled as entities in a hexagonal application. These
domain entities should not be confused with database entities. Instead, they
should be designed based on our knowledge of the problem domain we are
dealing with. We are not concerned about which
technologies we will use to handle the entities on the domain hexagon. Instead,
our focus is on establishing the entity in the most straightforward way possible
without depending on data or behaviors coming from outside of the application.
When considering entity definition from the Java development perspective,
entity classes are defined as Plain Old Java Objects (POJO), classes that rely
only on the standard Java API and nothing more.
Entities carry data and behavior. For example, an account entity can use email
and password as data attributes. It can also have a method called
resetPassword as one of its behaviors.
Value objects
Contrary to entities, value objects are not uniquely identifiable. We can use,
however, value objects as attributes to describe an entity. Implemented as POJOs
in a Java application, value objects contribute to the domain hexagon's purpose
of solving business problems without depending on data and behaviors provided
by third parties. Because of its immutable nature, a value object can be
implemented as a record in Java. However, the author recommends using
ordinary classes if the value object contains behaviors. The value objects are
pure Java classes that compose classes inside the domain hexagon. For example,
instead of using a String or UUID class to define an ID attribute, we can create
an ID value object containing the data and behaviors that accurately
capture the identification needs of the problem domain we are working with.
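As an illustration, an Id value object could be implemented as a record wrapping a UUID (java.util.UUID import omitted); the factory method names below are assumptions for the example rather than the exact ones used in the book's sample project:
public record Id(UUID uuid) {

    // Wraps an existing identifier provided as a string.
    public static Id withId(String id) {
        return new Id(UUID.fromString(id));
    }

    // Generates a brand-new random identifier.
    public static Id withoutId() {
        return new Id(UUID.randomUUID());
    }
}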
Specifications
Enterprise applications are designed with a real business problem in mind.
Specifications are the domain-driven design technique that lets us capture the
rules for solving a system’s business problems. In a hexagonal architecture
application, specifications are considered the bread and butter because they
carry the most critical asset of an application, the codified business rules that
solve real-world problems. Specifications are essential because understanding
business rules can be challenging, requiring knowledge and experience on how a
business operates. Like entities and value objects, specifications are also modeled
as POJOs without depending on data or behaviors provided by third parties.
Other domain-driven design elements include aggregates and domain services;
however, entities, value objects, and specifications are enough to start
implementing the domain hexagon.
The domain hexagon is the foundation for the hexagonal system because it
contains all the fundamental data and behaviors on which all other system parts
will depend. However, the domain hexagon alone is insufficient to provide a
working system. The domain hexagon becomes powerful when combined with
other hexagonal architecture elements, such as ports and adapters that rely on the
domain hexagon to provide fully functional features. However, it must be
reinforced that the domain hexagon code must evolve without any dependency
on technological details. Such a requirement is necessary to achieve the essential
outcome of shielding the code responsible for solving business problems from
the code responsible for providing the technology to solve those problems.
Let us explore next how the application hexagon helps to provide data and
behaviors that work based on the domain model provided by the domain
hexagon through input and output ports.
When we start developing the application hexagon, the domain hexagon should
be already implemented. The application hexagon depends on the domain
hexagon to process the driver and driven operations. So, we need to determine
which kind of data the hexagonal application is expected to receive from the
driver side and how such data should be processed, considering the possible
constraints provided by the domain hexagon. The hexagonal application may
need to retrieve data from somewhere else to process the data received from the
driver side. Such concerns about how the data will be received, processed, or
persisted can be expressed in a technology-agnostic way through input ports,
output ports, and use cases. Let us proceed by exploring use cases.
Use cases
One of the advantages of a software system is the ability to automate things that
would instead be done manually if the software did not exist. Imagine sending a
message to a friend using a letter instead of email. To send a letter, we must
write the message on paper, put it into an envelope, paste a postal seal, and drop
it in a mailbox. These are all manual tasks we must perform to send a physical
letter. Now, imagine the scenario of sending an email. The system responsible
for it must provide the means to capture the message digitally written by a user.
Then, it must communicate with an SMTP server to send the email message to
its destination. The ability to store the email message in the system memory and
then send it using an SMTP server is an automated task provided by the software
that has email delivery as one of its use cases.
The use case represents the intent of an actor using the software system. Such an
actor can be a human or another system. In hexagonal applications, use cases can
be defined as abstractions, through interfaces or abstract classes, that represent
what the application can do. A use case abstraction may contain the operations to
accomplish a given goal. In the case of an email delivery system, we can have a
Java interface with two abstract methods called sendEmail(String
message) and getSMTPServer() that, when used together, let the system
send email messages.
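As a reference, such a use case could be sketched as a plain Java interface; the interface name and the SMTPServer representation below are assumptions for illustration:
public interface SendEmailUseCase {

    // A simple representation of the SMTP server configuration (assumed type).
    record SMTPServer(String host, int port) { }

    // Returns the SMTP server the application should use to deliver messages.
    SMTPServer getSMTPServer();

    // Sends the digitally written message to its destination.
    void sendEmail(String message);
}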
If the use case is an abstraction, who implements it? The input ports are
responsible for that. Let us check the input ports next.
Input ports
Input ports describe how a use case will be fulfilled. Input ports are responsible
for handling data provided by clients sitting on the driver side of a hexagonal
application. Such data can be processed based on the constraints provided by the
domain hexagon. If necessary, the input port can use output ports to persist or
retrieve data from systems on the hexagonal application’s driven side.
All operations in the application hexagon are defined without specifying the
technology details of systems from both the driver and driven sides of the
hexagonal application. Designing the system without providing the technology
details in the application hexagon gives greater flexibility because we can define
system functionalities without specifying which technologies will enable those
functionalities.
We learned earlier that input ports can rely on output ports to persist or retrieve
data from systems on the hexagonal application’s driven side. Let us explore
output ports.
Output ports
Acting as abstractions, we use output ports to define what the hexagonal system
needs from the outside. The idea here is to express the need to get some data
without specifying how such data will be obtained. We do that to avoid
dependency on the underlying technology that provides the data. If output ports
are abstractions, you may wonder who provides their implementations. That is
the responsibility of the output adapter, which we will learn next while exploring
the framework hexagon.
Next, we explore what input and output adapters are and how we can use them to
make hexagonal applications compatible with any technology.
Input adapters
Once we have the hexagon application features defined as use cases and
implemented as input ports, we may want to expose those features to the outside
world. We accomplish that using input adapters that provide an interface
enabling driver actors to interact with the hexagonal application. Input adapters
resemble physical adapters in that both exist to fit one format into another.
From the software system perspective, a format can be seen as a protocol
supported by the hexagonal system. A protocol is supported through the
definition of an API that establishes which technologies are used to
communicate with the hexagonal application. We can define APIs using
HTTP-based solutions like REST, gRPC, or SOAP.
Depending on the use case, we can explore protocols like FTP for file transfer or
SMTP for email delivery.
Hexagonal architecture allows us to support as many technologies as we want in
the form of input adapters. A hexagonal application can be initially designed to
provide an input adapter supporting RESTful requests. A new input adapter
supporting gRPC calls can be quickly introduced as the application matures.
Adding new input adapters impacts only the framework hexagon. The code in
the application and domain hexagons is entirely protected from changes in the
framework hexagon. One or more input adapters can be connected to the same
input port, exposing the same system functionality through the different
technologies supported by those adapters.
Input adapters are the hexagonal architecture elements that open the door for
those interested in the functionalities offered by the hexagonal application.
However, to enable those functionalities, the hexagonal application needs to get
data and access systems backed by different technologies. Access to those
systems is made possible through the usage of output adapters. Let us explore
them next.
Output adapters
Most back-end applications depend on other systems to conduct their activities.
Such dependency can be expressed by the need to access, for example, an
external API responsible for processing payments, a scheduler system that
executes tasks at predetermined times, a message broker that allows
asynchronous processing, or a relational database for data persistence. Countless
scenarios can be used as dependency examples for a back-end application. The
critical concept here is that every dependency on something outside the
hexagonal application is handled through an output adapter.
Output adapters support the driven side of a hexagonal system. They are direct
implementations of the output ports defined in the application hexagon. Through
the output adapters, we define the code responsible for dealing with whatever
technology is necessary to provide the data or behavior to enable the hexagonal
application functionalities. The system can start persisting data in MySQL
databases and evolve without major refactoring to support Oracle databases.
Each newly supported technology is a new output adapter implementing the same
output port, which expresses, in terms of the domain hexagon, the kind of data
the hexagonal system requires.
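To close the hypothetical email delivery example, the following sketch shows one possible output adapter for the SMTPServerOutputPort assumed earlier; the environment variable and the print statement are illustrative stand-ins for real delivery code:
// Hypothetical output adapter: one concrete way of fulfilling SMTPServerOutputPort.
// Supporting another technology later only means adding a new adapter for the same port.
public class EnvironmentSMTPAdapter implements SMTPServerOutputPort {

    @Override
    public String getSMTPServer() {
        // Reads the server address from the environment; a real adapter could use
        // configuration properties or service discovery instead.
        return System.getenv().getOrDefault("SMTP_SERVER", "localhost");
    }

    @Override
    public void sendThroughSMTPServer(String smtpServer, String message) {
        // Stand-in for technology-specific code such as an SMTP client library.
        System.out.printf("Sending '%s' through %s%n", message, smtpServer);
    }
}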
Having covered the fundamental hexagonal architecture ideas, let us see how we
can apply the concepts learned in developing a hexagonal application next. We
start by arranging the domain model in the domain hexagon.
We define the Note entity with the following attributes:
public class Note {

    private Id id;
    private Title title;
    private String content;
    private Instant creationTime;

    // constructor and accessor methods omitted in this excerpt
}
A Title value object can only be created through its static of(String
name) method. Using the Title value object also allows us to enforce a
constraint through the class’ constructor, preventing the creation of titles that are
too long.
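A minimal sketch of such a value object could look like the following; the exact maximum length is an assumption made only for illustration:
public class Title {

    private final String name;

    private Title(String name) {
        // Assumed constraint: the text only states that overly long titles are rejected.
        if (name == null || name.length() > 255) {
            throw new IllegalArgumentException("The title is missing or too long");
        }
        this.name = name;
    }

    public static Title of(String name) {
        return new Title(name);
    }

    public String getName() {
        return name;
    }

    @Override
    public String toString() {
        return name;
    }
}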
The Note entity, the Id, and the Title value objects comprise the domain
model provided by the domain hexagon. As we will see in upcoming sections,
the domain hexagon plays a fundamental role in the hexagonal architecture
because the other hexagons, namely application and framework, depend on it.
Next, let us see how input and output ports are implemented in the application
hexagon.
We begin with the NoteOutputPort interface:
public interface NoteOutputPort {
    Note persistNote(Note note);
    List<Note> getNotes();
}
The NoteOutputPort interface definition shows that the application hexagon
depends on the domain hexagon by using the Note entity in the persistNote
and getNotes methods. We do not need to know how the system will persist
and get Note objects at the application hexagon level, so we express this
through the NoteOutputPort interface.
The next step is to define the NoteUseCase interface:
public interface NoteUseCase {
    Note createNote(Title title, String content);
    List<Note> getNotes();
}
The NoteInputPort implements the NoteUseCase interface:
@Service
public class NoteInputPort implements NoteUseCase {
    // the constructor (which receives a NoteOutputPort) and the createNote method
    // are omitted in this excerpt
    @Override
    public List<Note> getNotes() {
        return noteOutputPort.getNotes();
    }
}
We use the @Service annotation to make the NoteInputPort a managed
bean controlled by Spring Boot. Note that we are injecting
NoteOutputPort as a dependency through the NoteInputPort constructor.
That is when things get interesting because NoteOutputPort is defined as an
interface, which means there may be different interface implementations
enabling us to get the required data to fulfill the operations of the
NoteInputPort. Moving ahead in the code, we implement the
createNote method, which creates a Note object using the title and
content attributes. Then, we persist the Note object using the
persistNote method, which relies on the NoteOutputPort. Finally, we
implement the getNotes method, which uses NoteOutputPort to get all
stored Notes.
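A sketch of how the createNote implementation might look is shown below; how the Note receives its Id and creation time, as well as the return type, are assumptions made for illustration:
    @Override
    public Note createNote(Title title, String content) {
        // Assumed construction: the identifier and creation time may be produced
        // differently in the actual project.
        var note = new Note(
                Id.withId(UUID.randomUUID().toString()),
                title,
                content,
                Instant.now());
        return noteOutputPort.persistNote(note);
    }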
The NoteInputPort is based on its NoteUseCase interface, the
NoteOutputPort abstraction it uses to get data from outside, and the domain
model provided by the domain hexagon. How can we use the
NoteInputPort? Let us discover it next when exposing input and output
adapters in the framework hexagon.
Moving to the framework hexagon, we start on its driven side with the NoteData Jakarta entity:
@Entity
@Builder
@Getter
@AllArgsConstructor
@NoArgsConstructor
public class NoteData {
    @Id
    private String id;
    private String title;
    private String content;
    private Instant creationTime;
}
We use the @Builder, @Getter, @AllArgsConstructor, and
@NoArgsConstructor Lombok annotations to make the code more concise.
The NoteData is a Jakarta entity representing the Note domain entity defined
in the domain hexagon. As we have these two different types of entities, we need
a mapper mechanism that lets us convert one entity into another:
public class NoteMapper {

    public static Note noteDataToDomain(NoteData noteData) {
        return new Note(
                Id.withId(noteData.getId().toString()),
                Title.of(noteData.getTitle()),
                noteData.getContent(),
                noteData.getCreationTime()
        );
    }

    public static NoteData noteDomainToData(Note note) {
        return NoteData.builder()
                .id(note.getId().toString())
                .title(note.getTitle().getName())
                .content(note.getContent())
                .creationTime(note.getCreationTime())
                .build();
    }
}
The NoteMapper provides the noteDataToDomain and
noteDomainToData helper methods that produce Note and NoteData
objects, respectively. We will use NoteMapper when implementing the output
adapter class later in this section. Before doing so, let us first define the
repository interface to enable us to handle NoteData Jakarta entities:
@Repository
public interface NoteRepository extends CrudRepository<NoteData, String> {
}
The NoteData, NoteMapper, and NoteRepository are the dependencies
we need to implement the NoteH2Adapter:
@Component
public class NoteH2Adapter implements NoteOutputPort {

    private final NoteRepository noteRepository;

    public NoteH2Adapter(NoteRepository noteRepository) {
        this.noteRepository = noteRepository;
    }

    @Override
    public Note persistNote(Note note) {
        var noteData = NoteMapper.noteDomainToData(note);
        var persistedNoteData = noteRepository.save(noteData);
        return NoteMapper.noteDataToDomain(persistedNoteData);
    }

    @Override
    public List<Note> getNotes() {
        var allNoteData = noteRepository.findAll();
        var notes = new ArrayList<Note>();
        allNoteData.forEach(noteData -> {
            var note = NoteMapper.noteDataToDomain(noteData);
            notes.add(note);
        });
        return notes;
    }
}
We use the @Component Spring annotation to make the NoteH2Adapter
class a managed bean controlled by Spring Boot. The NoteH2Adapter
implements the NoteOutputPort by providing the persistNote and
getNotes methods. The persistNote method uses the NoteMapper to convert
the Note domain entity to the NoteData Jakarta entity, then saves it into the
database using the NoteRepository. The getNotes method retrieves all
NoteData Jakarta entities through the NoteRepository and relies on the
NoteMapper to convert them back to the domain entity format.
Such conversions in the output adapter are what make the system
change-tolerant. NoteH2Adapter is just one way that the system
can persist data. Using output ports and adapters lets us introduce new output
adapters with their proper data conversions whenever necessary.
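As an illustration of that flexibility, a second adapter could satisfy the same output port with a completely different persistence mechanism; the in-memory adapter below is only a sketch and is not part of the note keeper code:
// Hypothetical in-memory adapter implementing the same NoteOutputPort. It is not
// annotated as a Spring bean here to avoid clashing with the NoteH2Adapter bean.
public class NoteInMemoryAdapter implements NoteOutputPort {

    private final Map<String, Note> store = new ConcurrentHashMap<>();

    @Override
    public Note persistNote(Note note) {
        store.put(note.getId().toString(), note);
        return note;
    }

    @Override
    public List<Note> getNotes() {
        return new ArrayList<>(store.values());
    }
}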
Now that we have implemented the note keeper system’s output adapter, let us
implement the input adapters.
We start with the NoteCLIAdapter; its printNotes method is shown in the following excerpt:
    public void printNotes() {
        noteUseCase.getNotes().forEach(System.out::println);
    }
}
The NoteCLIAdapter relies on the NoteUseCase interface that the
NoteInputPort implements. The input port provides the behaviors the
system supports, and an adapter can trigger them. The createNote method
uses the Scanner object to read data from the user’s keyboard. The user-
provided data is captured through the stdinParams method. Finally, we have
the printNotes method, which displays all stored notes.
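Because only the tail of the listing appears above, the following sketch shows how the adapter's createNote and stdinParams methods might fit together; the @Component annotation, the prompts, and the exact reading logic are assumptions:
// Illustrative sketch of NoteCLIAdapter; details may differ from the actual listing.
@Component
public class NoteCLIAdapter {

    private final NoteUseCase noteUseCase;

    public NoteCLIAdapter(NoteUseCase noteUseCase) {
        this.noteUseCase = noteUseCase;
    }

    public void createNote(Scanner scanner) {
        var params = stdinParams(scanner);
        noteUseCase.createNote(Title.of(params[0]), params[1]);
    }

    // Reads the title and content typed by the user on the keyboard.
    private String[] stdinParams(Scanner scanner) {
        System.out.println("Enter the note title:");
        var title = scanner.nextLine();
        System.out.println("Enter the note content:");
        var content = scanner.nextLine();
        return new String[] {title, content};
    }

    // printNotes() is shown in the excerpt above.
}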
As an alternative to the NoteCLIAdapter, we implement the
NoteRestAdapter:
@RestController
public class NoteRestAdapter {

    private final NoteUseCase noteUseCase;

    @Autowired
    NoteRestAdapter(NoteUseCase noteUseCase) {
        this.noteUseCase = noteUseCase;
    }

    @PostMapping("/note")
    private void addNote(@RequestBody NotePayload notePayload) {
        noteUseCase.createNote(Title.of(notePayload.title()), notePayload.content());
    }

    @GetMapping("/notes")
    private List<Note> all() {
        return noteUseCase.getNotes();
    }
}
Also relying on the NoteUseCase, the NoteRestAdapter provides the
HTTP POST /note endpoint, which lets us create new notes in the system, and
the HTTP GET /notes endpoint, which returns all stored notes. Both the
NoteCLIAdapter and the NoteRestAdapter connect to the NoteInputPort
through the NoteUseCase interface, providing the same behavior through
different means.
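The NotePayload referenced by addNote is not shown in the excerpt above; based on the title() and content() accessors, it is presumably a simple record along these lines:
// Assumed shape of the request payload; the accessor names match those used in NoteRestAdapter.
public record NotePayload(String title, String content) {
}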
For the NoteCLIAdapter and the NoteRestAdapter to work, it is
necessary to configure Spring Boot properly. To enable the
system to work in the CLI mode, we need to implement the
NoteKeeperCLIApplication class:
@Component
@ConditionalOnNotWebApplication
public class NoteKeeperCLIApplication implements CommandLineRunner {

    private final NoteCLIAdapter noteCLIAdapter;

    public NoteKeeperCLIApplication(NoteCLIAdapter noteCLIAdapter) {
        this.noteCLIAdapter = noteCLIAdapter;
    }

    @Override
    public void run(String... args) {
        var operation = args[0];
        Scanner scanner = new Scanner(System.in);
        switch (operation) {
            case "createNote" -> noteCLIAdapter.createNote(scanner);
            case "printNotes" -> noteCLIAdapter.printNotes();
            default -> throw new InvalidParameterException(
                    "The supported operations are: createNote and printNotes");
        }
    }
}
We implement the CommandLineRunner interface from Spring, which
enables us to execute the application in the CLI mode.
The run method receives the arguments passed to the Java program through its
String varargs parameter, enabling it to either create a new note or print the
stored notes.
To enable the system to work in the web mode, we need to implement the
NoteKeeperWebApplication class. The code is as follows:
@SpringBootApplication
public class NoteKeeperWebApplication {

    public static void main(String[] args) {
        SpringApplication.run(NoteKeeperWebApplication.class, args);
    }
}
To make the application run in the web mode, it is enough to use the
@SpringBootApplication annotation on a class whose main method calls
SpringApplication.run.
To wrap up, let us see how to compile and run the note keeper system.
Conclusion
We reach the end of this book with the exploration of the hexagonal architecture.
This software design technique lets us create change-tolerable applications by
decoupling technology-related code from the code responsible for solving
business problems. This final chapter covered the fundamentals of hexagonal
architecture by exploring essential concepts like the domain hexagon and its role
in providing the domain model based on domain-driven design techniques. We
learned how the application hexagon helps to express system behaviors in a
technology-agnostic way, which gives excellent flexibility by not coupling the
hexagonal system with third-party technologies. We also learned how the
framework hexagon is fundamental in making the system behaviors compatible
with different technologies.
We applied the ideas shared in this chapter by implementing the note keeper
system, a Spring Boot application structured using hexagonal architecture. This
application shows how to apply concepts like use cases, input and output ports,
and input and output adapters to create an application accessible through a
command-line interface and a REST API.
As we conclude this book, the author wants to acknowledge the challenges you
have faced and the knowledge you have gained in our exploration of Java. You
have persevered by exploring the core Java API, the latest Java features, testing
techniques, cloud-native development, observability, and software architecture.
The author encourages you to continue learning, practicing, and overcoming the
complexities that are quite often present in the life of a Java developer.
B
beans 113
creating, with @Bean annotation 113-116
creating, with Spring stereotype annotations 117
blocking IO
handling, with reactive programming 51-53
bootable JAR 204
of Jakarta EE application 206, 207
of Quarkus application 205, 206
of Spring Boot application 205
bounded contexts 279-281
business rules
defining, with specifications 291-295
C
CallableStatement interface
stored procedures, calling with 68, 69
checked exceptions 17, 18
Collections API 1
command line interface (CLI) application 126
container-based virtualization 194
Container Runtime Interface (CRI) 200
container technologies 192
Criteria API 84
CRUD application, with Spring Boot
API endpoints, exposing with controller 133, 134
configuring 130, 131
database entity, defining 131
dependencies, setting up 130
HTTP requests, sending 134, 135
implementation 129
repository, creating 131, 132
service, implementing 132, 133
CRUD app, with Quarkus
building 143
database entity, simplifying with Panache 150
data persistence, with Hibernate 148
dependency injection, with Quarkus DI 143
D
database connection
creating, with DataSource interface 62-64
creating, with DriverManager class 61, 62
creating, with JDBC API 60, 61
Data Definition Language (DDL) 60
data layer
account entity and repository, implementing 308
category entity and repository, implementing 306, 307
data handling 305, 306
data persisting 305, 306
Data Manipulation Language (DML) 60
data structures
handling, with collections 2
key-value data structures, creating with maps 9-12
non-duplicate collections, providing with set 6-8
ordered object collections, creating with lists 3-5
Date-Time APIs 24
Instant class 28
LocalDate 24-26
LocalDateTime 26
LocalTime 26
ZonedDateTime 27
dependency injection
with @Autowired 119-122
distributed tracing 222
implementing, with Spring Boot and OpenTelemetry 222, 223
distribution summaries 244
Docker 195
fundamentals 196
Docker-based applications
access, allowing with Service 214-216
application configuration, externalizing 210
application configuration, providing with ConfigMap 210, 211
deploying, on Kubernetes 209
deploying, with Deployment 212-214
kubectl, for installing Kubernetes objects 216, 217
Kubernetes objects, creating 210
Secret, for defining database credentials 211, 212
Docker containers
creating 197, 198
Docker image
creating 207-209
managing 196, 197
domain-driven design (DDD) 277-279
domain hexagon 326, 327
arranging 332-335
entities 327
input adapter, creating 340-342
input and output adapters 337
input and output ports, providing 335-337
output adapter, creating 337-340
specifications 328
testing 295-298
value objects 327
DriverManager class 61
E
EFK stack 233
setting up, with Docker Compose 233-236
Elasticsearch 232
enterprise information system (EIS) tiers 167
enterprise information system tier 166
Enterprise Java Bean (EJB) 167
enterprise resource planning (ERP) 166
entity 288
identity, expressing with 288-291
entity relationships
defining 74
many-to-many relationship 78-81
many-to-one relationship 76
one-to-many relationship 74-76
one-to-one relationship 76-78
error handling, with exceptions 17
checked exceptions 17, 18
custom exceptions, creating 20, 21
finally block 19
try-with-resources 19, 20
unchecked exceptions 18, 19
event storming 282, 283
event storm session participants
actors 285
aggregates 286
commands 285
domain events 284
domain model 286
event storm session, preparing 283, 284
identifying 283
F
Fluentd 232
framework hexagon 330, 331
input adapters 331, 332
output adapters 332
functional interfaces 28
Consumer 31
Function 29, 30
Predicate 29
Stream 31, 32
Supplier 30
functional programming
with streams and lambdas 28
G
Google Cloud Platform (GCP) 191
GraalVM 159
Grafana dashboard
building 268, 269
creating, with application-generated metrics 268
visualization for download size 271
visualization for file upload duration 270
visualization for number of requests per HTTP method 269, 270
H
hexagonal architecture 324, 325
application hexagon 328, 329
domain hexagon 326
framework hexagon 330
Hibernate 81
configuring 81
for handling database entities 81-83
I
Infrastructure-as-a-Service (IaaS) solution 193
inheritance 38
expectations 39-43
Instant class 28
integration tests 89
implementing, with Testcontainers 105-107
running, with Maven 107-109
intermediate operations 32
Inversion of Control (IoC) 113
J
Jakarta EE 164-167
enterprise application, building with 174, 175
Jakarta EE Core Profile specification 169, 170
Jakarta EE Platform specification 167, 168
Jakarta EE Web Profile specification 168, 169
multitiered applications, designing 165
Jakarta EE project 173, 174
Jakarta EE tiers
business tier 165
client tier 165
web tier 165
Jakarta Persistence
data handling, simplifying with 72, 73
entities, defining 73, 74
entity relationships, defining 74
Jakarta Server Faces (JSF) 165
Jakarta Server Pages (JSP) 165
Java 1
Java 2 Enterprise Edition (J2EE) 164
Java API 1
Java Archive (JAR) 170
Java Collections Framework 2, 3
Java Database Connectivity (JDBC) 59, 60
Java Development Kit (JDK) 159
Java Enterprise Edition (EE) 112
Java platform threads 49, 50
blocking IO operations 50, 51
limitations 50
Java Server Page (JSP) 167
java.util.Set interface 6
Java Virtual Machine (JVM) 27, 49, 158, 159, 208
JPQL
exploring 83
JUnit 5
account registration system 91-93
setting up 90, 91
using, for writing effective unit tests 90
K
Kibana 232
Kubernetes 198, 199
architecture 199
container runtime 200
kube-apiserver 200
kube-controller-manager 200
kubelet 200
kube-proxy 200
kube-scheduler 199
Kubernetes objects
ConfigMap 203, 204
Deployment 201, 202
Pod 201
Secret 203, 204
Service 203
L
lambda expression 29
layered architecture 303, 304
Linux Containers (LXC) 194
lists 3
LocalDate 24
LocalDateTime class 26
local development
approaches 85
with container databases 86
with in-memory databases 85
with remote databases 85
LocalTime class 26
Logging API
for improving application maintenance 21
formats 21-24
levels 21-24
log handlers 21-24
M
managed bean 143
many-to-many relationship 78
many-to-one relationship 76
Maven
tests, executing with 101
Micrometer 241
counters 242, 243
gauges 243
meters 242
registry 241
tags 242
timers 243, 244
MicroProfile 171, 172
Jakarta EE Core Profile specifications 172
specifications 173
MicroProfile project
API, building with Jakarta EE and MicroProfile 183-186
data source, defining 179, 180
health checks, implementing 186, 187
Jakarta Persistence entity, implementing 180, 181
repository, implementing with EntityManager 181, 182
service class, implementing as Jakarta CDI managed bean 182
setting up 176-178
Mockito 96
external calls, mocking with 98-100
setting up, with JUnit 5 96
monitoring 220, 221
N
native applications
native image 159
writing, with Quarkus 158, 159
native executable
creating, with Quarkus 159, 160
NIO.2 API 13
NIO2, for file manipulation
files and directories, handling 14-16
paths, creating 12-14
using 12
O
object-oriented programming (OOP) 28
Object–Relational Mapping (ORM) 4, 83, 309
technologies 72
observability 221, 222
one-to-many relationship 74
one-to-one relationship 76
P
Panache
database entity handling, simplifying with 150
with active record pattern 152, 153
with repository pattern 151, 152
paravirtualization 193, 194
pattern matching 44
for record 48, 49
for switch statement 46, 47
for type 45, 46
Plain Old Java Objects (POJO) 327
pointcut
joinpoint 123
PreparedStatement interface
parameterized queries, executing with 67, 68
presentation layer 304, 305
account endpoint, implementing 320, 321
application behaviors, exposing 315
category endpoint, implementing 319, 320
transaction endpoint, implementing 316-318
Prometheus 258
alerting rule, defining 273
architecture 259
configuring 261-264
configuring, as Grafana data source 267
downloading 261
installing 261
integrating, with Grafana 266, 267
metrics consumers 260
metrics exporters 260
server 260
setting up 261
PromQL
exploring 264-266
Q
Quarkus
benefits 138, 139
Quarkus project
bootstrapping 139-143
database entities, handling with EntityManager 149, 150
database support, enabling 148, 149
Quarkus REST
API implementation with 153-158
R
Remote File Converter 57
request-scoped beans 146-148
ResultSet object
results, processing with 69-72
S
service layer
account service, implementing 314, 315
business rules, defining 309
category service, implementing 312-314
transaction service, implementing 309-312
simple distributed system
building 223
Collector, setting up 230, 231
dependencies, configuring 223-226
Docker Compose, setting up 230
inventory service, implementing 226-228
Jaeger, setting up 230
report service, implementing 228-230
singleton beans 145, 146
software architecture 302, 303
software development 2
Spring 112
fundamentals 112
Spring AOP
using 123-125
Spring Boot Maven project, with Micrometer
configuration 246, 247
controller class, implementing 252-254
File entity, implementing 247, 248
File metrics, implementing 248-250
File repository, implementing 248
File service, implementing 250, 251
metrics, enabling on file storage system 247
setting up 245, 246
Spring Boot project 113
bootstrapping 126
creating, with Spring Initializr 126-129
Spring Boot with Micrometer
for implementing metrics 244
Spring context
for managing beans 113
Spring Core project 113
Spring stereotype annotations
@Component annotation 117
@Repository annotation 117
@Service annotation 117
using 117-119
Statement interface
simple queries, executing 64-67
Stream interface 31, 32
intermediate operation 33
stream source 32
terminal operation 34
stream source 32
Structured Query Language (SQL) syntax 60
T
terminal operation 32
Testcontainers
reliable integration tests, implementing with 102
setting up 102
thread 49
Title value objects 335
U
ubiquitous language 281, 282
unchecked exceptions 18, 19
unit tests 88, 89
V
value objects 287, 288
virtualization 192
container-based virtualization 194, 195
full virtualization 193
paravirtualization 193, 194
virtual threads 49
simple concurrent code, writing 53-57
W
Web Archive (WAR) 171
Y
Yet Another Markup Language (YAML) 200
Z
ZonedDateTime class 27