Microservices for Java Developers
A Hands-On Introduction to Frameworks and Containers
2nd Edition
The O’Reilly logo is a registered trademark of O’Reilly Media, Inc. Microservices for
Java Developers, the cover image, and related trade dress are trademarks of O’Reilly
Media, Inc.
The views expressed in this work are those of the authors, and do not represent the
publisher’s views. While the publisher and the authors have used good faith efforts
to ensure that the information and instructions contained in this work are accurate,
the publisher and the authors disclaim all responsibility for errors or omissions,
including without limitation responsibility for damages resulting from the use of or
reliance on this work. Use of the information and instructions contained in this
work is at your own risk. If any code samples or other technology this work contains
or describes is subject to open source licenses or the intellectual property rights of
others, it is your responsibility to ensure that your use thereof complies with such
licenses and/or rights.
This work is part of a collaboration between O’Reilly and Red Hat. See our statement
of editorial independence.
978-1-492-03826-9
Table of Contents
5. Deploying Microservices at Scale with Docker and Kubernetes. . . . 61
Immutable Delivery 62
Docker and Linux Containers 63
Kubernetes 65
Getting Started with Kubernetes 68
Where to Look Next 71
CHAPTER 1
Microservices for Java Developers
areas. They all come together to allow the people in an organization
to truly exhibit agile, responsive learning behaviors and to stay com‐
petitive in a fast-evolving business world. Let’s take a closer look.
Open source is also leading the charge in the technology space. Fol‐
lowing the commoditization curve, open source is a place develop‐
ers can go to challenge proprietary vendors by building and
innovating on software that was once only available (without source,
no less) with high license costs. This drives communities to build
things like operating systems (Linux), programming languages (Go),
message queues (Apache ActiveMQ), and web servers (httpd). Even
companies that originally rejected open source are starting to come
around by open sourcing their technologies and contributing to
existing communities. As open source and open ecosystems have
become the norm, we’re starting to see a lot of the innovation in
software technology coming directly from open source communities
(e.g., Apache Spark, Docker, and Kubernetes).
Disruption
The confluence of these two factors—service design and technology
evolution—is lowering the barrier of entry for anyone with a good
idea to start experimenting and trying to build new services. You
can learn to program, use advanced frameworks, and leverage on-
demand computing for next to nothing. You can post to social net‐
works, blog, and carry out bidirectional conversations with potential
MSA helps solve the problem of how we decouple our services and
teams to move quickly at scale. It allows teams to focus on providing
the services and making changes when necessary, and to do so
without costly synchronization points. Here are some things you
won’t hear about once you’ve adopted microservices:
• Jira tickets
• Unnecessary meetings
• Shared libraries
• Enterprise-wide canonical models
could not be completed properly and that users should try again
later. But errors in network requests or distributed applications
aren’t always that easy. What if the downstream application you
must call takes longer than normal to respond? This is a killer
because now your application must take into account this slowness
by throttling requests, timing out downstream requests, and poten‐
tially stalling all calls through your service. This backup can cause
upstream services to experience slowdowns and even grind to a halt.
And it can cause cascading failures.
Figure 1-5. Bounded contexts
you call our service, and one of our backends (the database that
stores that user’s current view of recommendations) is unavailable?
We could throw exceptions and stack traces back to you, but that
would not be a very good experience and could potentially blow up
other parts of the system. Because we made a promise, we can
instead try to do everything we can to keep it, including returning a
default list of books, or a subset of all the books. There are times
when promises cannot be kept, and identifying the best course of
action in these circumstances should be driven by the desired expe‐
rience or outcome for our users. The key here is the onus on our
service to try to keep its promise (return some recommendations),
even if our dependent services cannot keep theirs (the database was
down). In the course of trying to keep a promise, it helps to have
empathy for the rest of the system and the service quality we’re try‐
ing to uphold.
Another way to look at a promise is as an agreed-upon exchange
that provides value for both parties (like a producer and a con‐
sumer). But how do we go about deciding between two parties what
is valuable and what promises we’d like to agree upon? If nobody
calls our service or gets value from our promises, how useful is the
service? One way of articulating the promises between consumers
and providers is with consumer-driven contracts. With consumer-
driven contracts, we are able to capture the value of our promises
with code or assertions, and as a provider, we can use this knowl‐
edge to test whether we’re upholding our promises.
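One way to make this concrete is to write the consumer's expectations as executable assertions that the provider runs against itself. The sketch below uses plain JUnit; recommendationsClient is a hypothetical consumer-side client, and tools such as Pact automate this exchange of contracts between teams:
// Consumer-driven contract (sketch): the consumer states the promise it
// relies on; the provider runs this test to verify the promise still holds.
@Test
public void recommendationsPromiseIsKept() {
    List<String> books = recommendationsClient.recommendationsFor("user-123");
    assertNotNull(books);          // never a stack trace or null payload
    assertFalse(books.isEmpty());  // a default list is an acceptable fallback
}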
These are not easy problems to solve. The rest of this report will be
devoted to getting Java developers up and running with microservi‐
ces and able to solve some of the problems listed here.
Technology Solutions
Throughout the rest of the report, we’ll introduce you to some pop‐
ular technology components and how they help solve some of the
problems of developing and delivering software using a microservi‐
ces architecture. As touched upon earlier, microservices aren’t just a
technological problem, and getting the right organizational struc‐
ture and teams in place to facilitate this approach is paramount.
Switching from SOAP to REST doesn’t make a microservices archi‐
tecture.
The first step for a Java development team creating microservices is
to get something working locally on their machines. This report will
introduce you to three opinionated Java frameworks for working
with microservices: Spring Boot, MicroProfile, and Apache Camel.
Each framework has upsides for different teams, organizations, and
approaches to microservices. As is the norm with technology, some
tools are a better fit for the job or team using them than others. Of
course, these are not the only frameworks to use. There are a couple
that take a reactive approach to microservices, like Vert.x and
Lagom. Developing with an event-based model requires a mind shift and comes with its own learning curve, though, so for this report we'll stick with a model that most enterprise Java developers will find comfortable.
If you want to know more about reactive programming and reactive
microservices, you can download the free ebook Building Reactive
Microservices in Java by Clement Escoffier from the Red Hat Devel‐
opers website.
The goal of this report is to get you up and running with the basics
for each framework. We’ll dive into a few advanced concepts in the
last chapter, but for the first steps with each framework, we’ll assume
a “Hello World” microservice application. This report is not an all-
encompassing reference for developing microservices; each chapter
ends with links to reference material that you can explore to learn
more as needed. We will iterate on the Hello World application by
creating multiple services and show some simple interaction pat‐
terns.
The final iteration for each framework will look at concepts like bulkheading and promise theory to make services resilient in the face
of faults. We will dig into parts of the NetflixOSS stack, like Hystrix,
that can make our lives easier when implementing this functionality.
We will discuss the pros and cons of this approach and explore what
other options exist.
First, though, let’s take a look at the prerequisites you’ll need to get
started.
• JDK 1.8
The Spring ecosystem has some great tools you may wish to use
either at the command line or in an IDE. Most of the examples will
stick to the command line to stay IDE-neutral and because each IDE
has its own way of working with projects. For Spring Boot, we’ll use
the Spring Boot CLI 2.1.x.
Additional tooling used throughout this report for the Spring, MicroProfile, and Camel examples includes:
• Minishift
• Kubernetes/OpenShift CLI
• Docker CLI
Simplified Configuration
Spring historically was a nightmare to configure. Although the
framework improved upon other high-ceremony component mod‐
els (EJB 1.x, 2.x, etc.), it did come along with its own set of heavy‐
weight usage patterns. Namely, Spring required a lot of XML
configuration and a deep understanding of the individual beans
needed to construct JdbcTemplates, JmsTemplates, BeanFactory
lifecycle hooks, servlet listeners, and many other components. In
fact, writing a simple “Hello World” with Spring MVC required
understanding of DispatcherServlet and a whole host of Model–
View–Controller classes. Spring Boot aims to eliminate all of this
boilerplate configuration with some implied conventions and sim‐
plified annotations—although, you can still finely tune the underly‐
ing beans if you need to.
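To see how little ceremony remains, consider that a complete, runnable Spring Boot application can be a single annotated class (a minimal sketch; the class name matches the application we work with later in this report):
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;

// One annotation replaces the XML and DispatcherServlet wiring described
// above; auto-configuration supplies sensible defaults for everything else.
@SpringBootApplication
public class HelloSpringbootApplication {
    public static void main(String[] args) {
        SpringApplication.run(HelloSpringbootApplication.class, args);
    }
}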
Starter Dependencies
Spring was used in large enterprise applications that typically lever‐
aged lots of different technologies to do the heavy lifting: JDBC
databases, message queues, file systems, application-level caching,
etc. Developers often had to stop what they were doing, switch cog‐
nitive contexts, figure out what dependencies belonged to which
piece of functionality (“Oh, I need the JPA dependencies!”), and
spend lots of time sorting out versioning mismatches and other
issues that arose when trying to use these various pieces together.
Spring Boot offers a large collection of curated sets of libraries for
adding these pieces of functionality. These starter modules allow
you to add things like:
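For example, pulling in an embedded servlet container, Spring MVC, and JSON support is a single starter dependency in your pom.xml (the version is managed by the Spring Boot parent POM):
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-web</artifactId>
</dependency>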
Application Packaging
Spring Boot really is a set of bootstrap libraries with some conven‐
tion for configurations, but there’s no reason why you couldn’t run a
Spring Boot application inside your existing application servers as a
Web Application Archive (WAR). The idiom that most developers who use Spring Boot prefer for their applications, however, is the self-contained, executable “fat JAR.”
Production-Ready Features
Spring Boot ships with a module called the Spring Boot Actuator that
enables things like metrics and statistics about your application. For
example, you can collect logs, view metrics, perform thread dumps,
show environment variables, understand garbage collection, and
show which beans are configured in the BeanFactory. You can
expose this information via HTTP or Java Management Extensions
(JMX), or you can even log in directly to the process via SSH.
With Spring Boot, you can leverage the power of the Spring frame‐
work and reduce boilerplate configuration and code to more quickly
build powerful, production-ready microservices. Let’s see how.
Getting Started
We’re going to use the Spring Boot command-line interface (CLI) to
bootstrap our first Spring Boot application (the CLI uses Spring Ini‐
tializr under the covers). You are free to explore the different ways to
do this if you’re not comfortable with the CLI. Alternatives include
using Spring Initializr plug-ins for your favorite IDE, or using the
web version. The Spring Boot CLI can be installed a few different
ways, including through package managers and by downloading it
straight from the website. Check the documentation for the installation instructions most appropriate for your development environment.
Once you’ve installed the CLI tools, you should be able to check the
version of Spring you have:
$ spring --version
Hello World
Now that we have a Spring Boot application that can run, let’s add
some simple functionality. We want to expose an HTTP/REST end‐
point at /api/hello that will return “Hello Spring Boot from X,” where
X is the IP address where the service is running. To do this, navigate
to src/main/java/com/examples/hellospringboot. This location should
have been created for you if you followed the preceding steps. Then
create a new Java class called HelloRestController, as shown in
Example 2-1. We’ll add a method named hello() that returns a
string along with the IP address of where the service is running.
You’ll see in Chapter 6, when we discuss load balancing and service
discovery, how the host IPs can be used to demonstrate proper fail‐
over, load balancing, etc.
@RestController
@RequestMapping("/api")
public class HelloRestController {
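    // A minimal body for this class (a sketch mirroring the MicroProfile
    // version in Chapter 3): return a greeting plus this host's IP address.
    @RequestMapping(value = "/hello",
            method = RequestMethod.GET, produces = "text/plain")
    public String hello() {
        String hostname;
        try {
            hostname = InetAddress.getLocalHost().getHostAddress();
        } catch (UnknownHostException e) {
            hostname = "unknown";
        }
        return "Hello Spring Boot from " + hostname;
    }
}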
want to say “Guten Tag” if we deploy our app in production for Ger‐
man users. We need a way to inject properties into our app.
Externalize Configuration
Spring Boot makes it easy to use external property sources like
properties files, command-line arguments, the OS environment, or
Java system properties. We can even bind entire “classes” of proper‐
ties to objects in our Spring context. For example, if we want to bind
all helloapp.* properties to the HelloRestController, we can add
@ConfigurationProperties(prefix="helloapp"), and Spring
Boot will automatically try to bind helloapp.foo and helloapp.bar
to Java Bean properties in the HelloRestController class. We can
define new properties in src/main/resources/application.properties.
The application.properties file was automatically created for us when
we created our project. (Note that we could change the filename to
application.yml and Spring would still recognize the YAML file as
the source of properties.)
Let’s add a new property to our src/main/resources/application.prop‐
erties file:
helloapp.saying=Guten Tag aus
@RestController
@RequestMapping("/api")
@ConfigurationProperties(prefix="helloapp")
public class HelloRestController {
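    // Sketch: with the helloapp prefix above, Spring Boot binds the
    // helloapp.saying property to this field; a public setter is required.
    private String saying;

    public void setSaying(String saying) {
        this.saying = saying;
    }

    @RequestMapping(value = "/hello",
            method = RequestMethod.GET, produces = "text/plain")
    public String hello() {
        // getHostname() is a small private helper (an assumption here)
        // wrapping the InetAddress lookup shown earlier
        return saying + " " + getHostname();
    }
}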
Stop the application from running (if you haven’t already) and
restart it:
$ mvn clean spring-boot:run
Now if you navigate to http://localhost:8080/api/hello, you should see
the German version of the saying as shown in Figure 2-3.
Let’s see what it takes to enable the actuator. Open up the pom.xml
file for your hello-springboot microservice and add the following
Maven dependency in the <dependencies>...</dependencies>
section:
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-actuator</artifactId>
</dependency>
Because we’ve added the actuator dependency, our application now
can expose a lot of information that will be very handy for debug‐
ging or general microservice insight.
Not all endpoints provided by the actuator dependency are exposed
by default, however. We need to manually specify which endpoints
will be exposed.
Add the following property to src/main/resources/application.proper‐
ties to expose some technology-agnostic endpoints:
#Enable management endpoints
management.endpoints.web.exposure.include=beans,env,health,metrics,httptrace,mappings
Now restart your microservice by stopping it and running:
$ mvn clean spring-boot:run
Try hitting the following URLs and examine what gets returned:
• http://localhost:8080/actuator/beans
• http://localhost:8080/actuator/env
• http://localhost:8080/actuator/health
• http://localhost:8080/actuator/metrics
• http://localhost:8080/actuator/httptrace
• http://localhost:8080/actuator/mappings
If you look at the source code for this report, you’ll see a Maven
module called backend that contains a very simple HTTP servlet
that can be invoked with a GET request and query parameters. The
code for this backend is very simple, and it does not use any of the
microservice frameworks (Spring Boot, MicroProfile, etc.). We have
created a ResponseDTO object that encapsulates time, ip, and greet
ing fields. We also leverage the awesome Jackson library for JSON
data binding, as seen here:
@WebServlet(urlPatterns = {"/api/backend"})
public class BackendHttpServlet extends HttpServlet {
@Override
protected void doGet(HttpServletRequest req,
HttpServletResponse resp)
throws ServletException, IOException {
resp.setContentType("application/json");
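        // Sketch of the rest of doGet(): populate the ResponseDTO described
        // above (time, ip, greeting) and serialize it with Jackson.
        String greeting = req.getParameter("greeting");
        ResponseDTO response = new ResponseDTO();
        response.setTime(System.currentTimeMillis());
        response.setIp(InetAddress.getLocalHost().getHostAddress());
        response.setGreeting(greeting + " from cluster Backend");
        new ObjectMapper().writeValue(resp.getOutputStream(), response);
    }
}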
@RestController
@RequestMapping("/api")
@ConfigurationProperties(prefix="greeting")
public class GreeterRestController {

    // bound from the greeting.* properties; setters omitted here for brevity
    private String saying;
    private String backendServiceHost;
    private int backendServicePort;

    @RequestMapping(value = "/greeting",
        method = RequestMethod.GET, produces = "text/plain")
    public String greeting(){
        String backendServiceUrl = String.format(
            "http://%s:%d/api/backend?greeting={greeting}",
            backendServiceHost, backendServicePort);
        System.out.println("Sending to: " + backendServiceUrl);
        return backendServiceUrl;
    }
}
We’ve left out the setters for the properties in this class, but make
sure you have them in your source code! Note that we are using the
@ConfigurationProperties annotation again to configure the
REST controller here, although this time we are using the greeting
prefix. We also create a GET endpoint, like we did with the hello
service; and all it returns at the moment is a string with the values of
the backend service host and port concatenated (these values are
injected in via the @ConfigurationProperties annotation). Let’s
add the backendServiceHost and backendServicePort to our
application.properties file:
greeting.saying=Hello Spring Boot
greeting.backendServiceHost=localhost
greeting.backendServicePort=8080
@RestController
@RequestMapping("/api")
@ConfigurationProperties(prefix="greeting")
public class GreeterRestController {
@RequestMapping(value = "/greeting",
method = RequestMethod.GET, produces = "text/plain")
public String greeting(){
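        // Sketch of the finished method: call the backend with Spring's
        // RestTemplate and unwrap its JSON response. BackendDTO is assumed
        // to mirror the backend's time/ip/greeting fields.
        RestTemplate template = new RestTemplate();
        String backendServiceUrl = String.format(
            "http://%s:%d/api/backend?greeting={greeting}",
            backendServiceHost, backendServicePort);
        BackendDTO response = template.getForObject(
            backendServiceUrl, BackendDTO.class, saying);
        return response.getGreeting() + " at host: " + response.getIp();
    }
}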
Now let’s build the microservice and verify that we can call this new
greeting endpoint and that it properly calls the backend. First, start
the backend if it’s not already running. Navigate to the backend
directory of the source code that comes with this application and
run it:
$ mvn clean wildfly:run
Next, we’ll build and run the Spring Boot microservice. Let’s also configure this service to run on a different port than the default (8080) so that it doesn’t collide with the backend service, which is already running on that port.
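One way to do this with version 2.x of the Spring Boot Maven plugin (an assumption; check the flag for your plugin version) is to pass the port as an application argument:
$ mvn clean spring-boot:run \
  -Dspring-boot.run.arguments="--server.port=8180"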
• Spring Boot
• Spring Boot Reference Guide
• Spring Boot in Action, by Craig Walls (Manning)
• Spring Boot on GitHub
• Spring Boot Samples on GitHub
TomEE from Apache, and OpenLiberty from IBM, just to name a
few.
Because Java EE had a strong influence on MicroProfile, it is worth
mentioning that in 2017, Oracle donated Java EE to the Eclipse
Foundation under the Jakarta EE brand. Although Jakarta EE and
MicroProfile share the same origin, their purpose remains very dif‐
ferent. While Jakarta EE is a continuation of Java EE and focuses on
enterprise applications, MicroProfile instead focuses on enterprise microservices.
The MicroProfile 2.1 specification defines the base programming
model using the Jakarta EE CDI, JSON-P, JAX-RS, and JSON-B
APIs, and adds Open Tracing, Open API, Rest Client, Config, Fault
Tolerance, Metrics, JWT Propagation, and Health Check APIs (see
Figure 3-1). Development remains active, with groups working on
Reactive Streams, Reactive Messaging, GraphQL, Long Running Actions, and service mesh features.
Thorntail
Thorntail is the MicroProfile implementation from Red Hat. It is a
complete teardown of the WildFly application server into bite-sized,
reusable components that can be assembled and formed into a
microservice application. Assembling these components is as simple
as including a dependency in your Java Maven (or Gradle) build file;
Thorntail takes care of the rest.
Getting Started
You can start a new Thorntail project by using the Thorntail Gener‐
ator web console to bootstrap it (similar to Spring Initializr for
Spring Boot). Simply open the page and fill in the fields with the fol‐
lowing values, as shown in Figure 3-2:
Now click the blue Generate Project button. This will cause a file
called hello-microprofile.zip to be downloaded. Save the file and
extract it.
Navigate to the hello-microprofile directory, and try running the fol‐
lowing command:
$ mvn thorntail:run
Make sure that you have stopped the backend service that you
started in the previous chapter.
If everything boots up without any errors, you should see some log‐
ging similar to this:
2018-12-14 15:23:54,119 INFO [org.jboss.as.server] (main)
WFLYSRV0010: Deployed "demo.war" (runtime-name : "demo.war")
2018-12-14 15:23:54,129 INFO [org.wildfly.swarm] (main)
THORN99999: Thorntail is Ready
Congrats! You have just gotten your first MicroProfile application
up and running. If you navigate to http://localhost:8080/hello in your
browser, you should see the output shown in Figure 3-3.
Hello World
Just like with the Spring Boot framework in the preceding chapter,
we want to add some basic “Hello World” functionality and then
incrementally add more functionality on top of it.
We want to expose an HTTP/REST endpoint at /api/hello that will
return “Hello MicroProfile from X,” where X is the IP address where
the service is running. To do this, navigate to src/main/java/com/
examples/hellomicroprofile/rest. This location should have been cre‐
ated for you if you followed the preceding steps. Then create a new
Java class called HelloRestController, as shown in Example 3-1.
We’ll add a method named hello() that returns a string along with
the IP address of where the service is running. You’ll see in Chap‐
ter 6, in the sections on load balancing and service discovery, how
the host IPs can be used to demonstrate proper failover, load balanc‐
ing, etc.
@Path("/api")
public class HelloRestController {
@GET
@Produces("text/plain")
@Path("/hello")
public String hello() {
String hostname = null;
try {
hostname = InetAddress.getLocalHost()
.getHostAddress();
} catch (UnknownHostException e) {
hostname = "unknown";
}
return "Hello MicroProfile from " + hostname;
}
}
Now, the same way as we did for Spring Boot, we will see how to
inject external properties into our app using MicroProfile’s Config
API.
Now, as shown in Example 3-3, let’s add the @Inject and @ConfigProperty("helloapp.saying") annotations and our new saying
field to the HelloRestController class. Note that, unlike with
Spring Boot, we don’t need setters or getters.
@Path("/api")
public class HelloRestController {
@Inject
@ConfigProperty(name="helloapp.saying")
private String saying;
@GET
@Produces("text/plain")
@Path("/hello")
public String hello() {
String hostname = null;
try {
hostname = InetAddress.getLocalHost()
.getHostAddress();
} catch (UnknownHostException e) {
hostname = "unknown";
}
return saying + " " + hostname;
}
}
Because we’ve started using the CDI API in our examples, we’ll also
need to add the beans.xml file, with the contents shown in
Example 3-4.
This file will instruct the CDI API to process all the injection points
marked with the @Inject annotation.
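A minimal beans.xml (a sketch of the usual CDI 1.1 descriptor; it lives under WEB-INF for a WAR or META-INF for a JAR) looks like this:
<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://xmlns.jcp.org/xml/ns/javaee"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xsi:schemaLocation="http://xmlns.jcp.org/xml/ns/javaee
           http://xmlns.jcp.org/xml/ns/javaee/beans_1_1.xsd"
       bean-discovery-mode="all">
</beans>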
Let’s stop our application from running (if we haven’t already) and restart it:
$ mvn clean thorntail:run
Now if we navigate to http://localhost:8080/api/hello we should see
the German version of the saying, as shown in Figure 3-5.
Easy, right?
If you look at the source code for this report, you’ll see a Maven
module called backend which contains a very simple HTTP servlet
that can be invoked with a GET request and query parameters. The
code for this backend is very simple, and it does not use any of the
microservice frameworks (Spring Boot, MicroProfile, etc.).
To start up the backend service on port 8080, navigate to the backend directory and run the following:
$ mvn clean wildfly:run
@Path("/api")
public class GreeterRestController {
@Inject
@ConfigProperty(name="greeting.saying", defaultValue = "Hello")
private String saying;
@Inject
@ConfigProperty(name = "greeting.backendServiceHost",
defaultValue = "localhost")
private String backendServiceHost;
@Inject
@ConfigProperty(name = "greeting.backendServicePort",
defaultValue = "8080")
private int backendServicePort;
@GET
@Path("/api")
public class GreeterRestController {
@Inject
@ConfigProperty(name="greeting.saying", defaultValue = "Hello")
private String saying;
@Inject
@ConfigProperty(name = "greeting.backendServiceHost",
defaultValue = "localhost")
private String backendServiceHost;
@Inject
@ConfigProperty(name = "greeting.backendServicePort",
defaultValue = "8080")
private int backendServicePort;
@GET
@Produces("text/plain")
@Path("greeting")
public String greeting() {
    String backendServiceUrl = String.format("http://%s:%d",
        backendServiceHost, backendServicePort);
    // Fetch the greeting from the backend; a plain JAX-RS client is one
    // way to do this (BackendDTO maps the backend's JSON response).
    BackendDTO backendDTO = ClientBuilder.newClient()
        .target(backendServiceUrl)
        .path("api").path("backend")
        .queryParam("greeting", saying)
        .request("application/json")
        .get(BackendDTO.class);
    return backendDTO.getGreeting()
        + " at host: " + backendDTO.getIp();
}
Now let’s build the microservice and verify that we can call this new
greeting endpoint and that it properly calls the backend. We’ll con‐
figure this service to run on a different port than the default (8080)
so that it doesn’t collide with the backend service, which is already
running on that port:
$ mvn thorntail:run \
  -Dswarm.network.socket-binding-groups.standard-sockets.port-offset=100
In Chapter 6, we’ll see how running these microservices in their own
Linux containers removes the restriction of port swizzling at run‐
time. With all that done, you can point your browser to http://localhost:8180/api/greeting to see if our microservice properly calls the
backend and displays what we’re expecting, as shown in Figure 3-8.
• MicroProfile
• MicroProfile slides
• Thorntail
• Thorntail Documentation
Now that you know how to build microservices, you could continue
building more and more. However, as the number of microservices
grows, the complexity for the client who is consuming these APIs
also grows.
Real applications could have dozens or even hundreds of microser‐
vices. A simple process like buying a book from an online store like
Amazon can cause a client (your web browser or your mobile app)
to use several other microservices. A client with direct access to the microservices would have to locate and invoke each of them, and handle any failures itself. So, usually a better approach is to hide
those services behind a new service layer. This aggregator service
layer is known as an API gateway.
Another advantage of using an API gateway is that you can add
cross-cutting concerns like authorization and data transformation in
this layer. Services that use non-internet-friendly protocols can also
benefit from the usage of an API gateway. However, keep in mind
that it usually isn’t recommended to have a single API gateway for
all the microservices in your application. If you (wrongly) decided
to take that approach, it would act just like a monolithic bus, violat‐
ing microservice independence by coupling all the microservices.
Adding business logic to an API gateway is a mistake and should be
avoided.
Apache Camel
Apache Camel is an open source integration framework that is well
suited to implementing API gateways. The framework implements
most of the patterns for enterprise application integration (EAI)
described in the book Enterprise Integration Patterns, by Gregor
Hohpe and Bobby Woolf (Addison-Wesley). Each enterprise inte‐
gration pattern (EIP) describes a solution for a common design
problem that occurs repeatedly in many integration projects. The
book documents 65 EIPs, taking a technology-agnostic approach.
Apache Camel uses a consistent API based on EIPs to have a well-
defined programming model for integration. With over 200 compo‐
nents, the developer can connect Apache Camel to almost any
source/destination. In addition to HTTP, FTP, File, JPA, SMTP, and
Websocket components, there are even components for platforms
like Twitter, Facebook, AWS, etc.
Apache Camel is very powerful, yet very simple to use. This makes it
an ideal choice for creating the API gateway for our microservices.
Apache Camel can be executed as a standalone application or be
embedded in an existing application. For our API gateway example,
we will use Camel in a Spring Boot application.
Getting Started
Camel applications can be created by declaring the Maven depen‐
dencies in an existing application, or by using an existing Maven
Archetype.
Since we already showed how to use the Spring CLI to create the
hello-springboot application, this time we will use the Maven Arche‐
type approach.
The following command will create the Spring Boot application with
Camel in a directory named api-gateway:
$ mvn archetype:generate -B \
-DarchetypeGroupId=org.apache.camel.archetypes \
-DarchetypeArtifactId=camel-archetype-spring-boot \
-DgroupId=com.redhat.examples \
-DartifactId=api-gateway \
-Dversion=1.0
Note that Hello World will be printed in the console every two sec‐
onds.
@Component
@ConfigurationProperties(prefix="gateway")
public class MySpringBootRouter extends RouteBuilder {

    // The full endpoint URI is defined in the report's source code; it
    // ends with the connectionClose option shown here.
    private static final String REST_ENDPOINT = "..."
        + "&connectionClose=true";

    @Override
    public void configure() {
        // springbootsvcurl is bound from the gateway.* properties
        from("direct:springboot").streamCaching()
            .toF(REST_ENDPOINT, springbootsvcurl)
            .log("Response from Spring Boot microservice: ${body}")
            .convertBodyTo(String.class)
            .end();

        rest()
            .get("/gateway").enableCORS(true)
            .route()
            .multicast(AggregationStrategies.flexible()
                .accumulateInCollection(ArrayList.class))
            .parallelProcessing()
            .to("direct:microprofile")
            .to("direct:springboot")
            .end()
            .marshal().json(JsonLibrary.Jackson)
            .convertBodyTo(String.class)
            .endRest();
    }
}
• backend: 8080
• hello-springboot: 8180
• hello-microprofile: 8280
• api-gateway: 8380
• Apache Camel
• Camel components
• Camel Maven Archetypes
• Camel Spring Boot
• API gateway pattern
environment variables and configurations. We also deploy our
application servers in clusters with redundant hardware, load bal‐
ancers, and shared disks and try to keep things from failing as much
as possible. We may have built some automation around the infra‐
structure that supports this with great tools like Chef or Ansible, but
somehow deploying applications still tends to be fraught with mis‐
takes, configuration drift, and unexpected behaviors.
With this model, we do a lot of hoping, which tends to break down
quickly in current environments (never mind at scale). Is the appli‐
cation server configured in dev/QA/prod like it is on our machine?
If it’s not, have we completely captured the changes that need to be
made and expressed to the operations folks? Do any of our changes
impact other applications also running in the same application
server(s)? Are the runtime components like the operating system,
Java virtual machine (JVM), and associated dependencies exactly the
same as on our development machine? The JVM that runs an appli‐
cation is an implementation detail that’s highly coupled to how we
configure and tune the application, so variations across environ‐
ments can wreak havoc. When you start to deliver microservices, do
you run them in separate processes on traditional servers? Is process
isolation enough? What happens if one JVM goes berserk and takes
over 100% of the CPU? Or the network I/O? Or a shared disk? What
if all of the services running on that host crash? Are your applica‐
tions designed to accommodate that? As we split our applications
into smaller pieces, these issues become magnified.
Immutable Delivery
Immutable delivery concepts help us reason about these problems.
With immutable delivery, we try to reduce the number of moving
pieces into prebaked images as part of the build process. For exam‐
ple, imagine in your build process you could output a fully baked
image consisting of the operating system, the intended version of
the JVM, any sidecar applications, and all the configuration. You
could then deploy this in one environment, test it, and migrate it
along a delivery pipeline toward production without worrying about
whether the environment or application is configured consistently.
If you needed to make a change to your application, you could sim‐
ply rerun this pipeline to produce a new immutable image of the
application and then do a rolling upgrade to deliver it. If it didn’t
work, you could roll the change back by deploying the previous image.
Kubernetes
Google is known for running Linux containers at scale. In 2014,
Google engineer Joe Beda said the company started more than two
billion containers per week. In fact, “everything” running at Google
runs in Linux containers, and it’s all managed by their Borg cluster
management platform. Google even had a hand in creating the
underlying Linux technology that makes containers possible: in
2006 its engineers started working on “process containers,” which
eventually became cgroups and was merged into the Linux kernel
code base and released in 2008. With its breadth and background of
operating containers at scale, it’s no surprise Google has had a
strong influence on platforms built around containers. Here are just
a few examples:
ments is game-changing. The web-scale companies have been doing
this for years, and a lot of them (Netflix, Amazon, etc.) had to hand-
build much of the functionality that Kubernetes now has baked in.
Kubernetes has a handful of simple primitives that you should
understand before we dig into examples. In this chapter we’ll intro‐
duce you to these concepts, and in the following chapter we’ll make
use of them for managing a cluster of microservices.
Pods
A pod is a grouping of one or more Docker containers (like a pod of
whales?). A typical deployment of a pod, however, will often be one-
to-one with a Docker container. If you have sidecar, ambassador, or
adapter deployments that must always be colocated with the applica‐
tion, a pod is the way to group them. This abstraction is also a way
to guarantee container affinity (i.e., Docker container A will always
be deployed alongside Docker container B on the same host).
Kubernetes orchestrates, schedules, and manages pods. When we
refer to an application running inside of Kubernetes, it’s running
within a Docker container inside of a pod. A pod is given its own IP
address, and all containers within the pod share this address (which
is different from plain Docker, where each container gets an IP
address). When volumes are mounted to the pod, they are also
shared between the individual Docker containers running in the
pod.
One last thing to know about pods: they are fungible. This means
they can disappear at any time (either because the service crashed or
because the cluster killed it). They are not like VMs, which you care
for and nurture. Pods can be destroyed at any point. This falls within
our expectation in a microservice world that things will (and do)
fail, so we are strongly encouraged to write our microservices with
this premise in mind. This is an important distinction to keep in
mind as we talk about some of the other concepts in the following
sections.
Labels
Labels are simple key/value pairs that we can assign to pods, like
release=stable or tier=backend. Pods (and other resources, but
we’ll focus on pods) can have multiple labels that group and catego‐
rize them in a loosely coupled fashion, making it easier to build
Services
The last Kubernetes concept you should understand is the Kuber‐
netes Service. We’ve seen that ReplicationControllers can con‐
trol the number of replicas of a service we have. We also saw that
pods die (either crash on their own or be killed, maybe as part of a
ReplicationController scale-down event). Therefore, when we try
to communicate with a group of pods, we should not rely directly on
their IP addresses (each pod will have its own IP address), as pods
can come and go. What we need is a way to group pods to discover
where they are and how to communicate with them, and possibly
load balance against them. That’s exactly what the Kubernetes Ser
vice does. It allows us to use a label selector to group our pods and
abstract them with a single virtual (cluster) IP that we can then use
to discover them and interact with them. We’ll show some concrete
examples in the next chapter.
With these simple concepts (pods, labels, ReplicationControllers, and Services), we can manage and scale our microservices the
way Google has learned to (or learned not to). It takes many years
and many failures to identify simple solutions to complex problems,
so we highly encourage you to familiarize yourself with these con‐
cepts and experience the power of managing containers with Kuber‐
netes for your microservices.
Minishift
To get started developing microservices with Docker and Kuber‐
netes, we’re going to leverage a free developer tool called Minishift.
Minishift is a small tool that runs on a developer’s machine and makes it easy to run a single-node OpenShift cluster locally inside a VM.
OpenShift
Red Hat OpenShift 3.x is an Apache v2 licensed open source devel‐
oper self-service platform, OpenShift Origin, that has been revam‐
ped to use Docker and Kubernetes. OpenShift at one point had its
own cluster management and orchestration engine, but with the
knowledge, simplicity, and power that Kubernetes brings to the
world of container cluster management, it would have been silly to
try to compete. The broader community is converging around
Kubernetes, and Red Hat is all in with Kubernetes.
OpenShift has many features, but one of the most important is that
it’s still native Kubernetes under the covers and supports role-based
access control, out-of-the-box software defined networking, secu‐
rity, logins, developer builds, and many other things. We mention it
here because the flavor of Kubernetes that we’ll use for the rest of
this book is based on OpenShift. We’ll also use the oc OpenShift
command-line tools, which give us a better user experience and
allow us to easily log in to our Kubernetes clusters and control
which project we’re deploying into. Minishift has both vanilla
Kubernetes and OpenShift. For the rest of this book, we’ll be refer‐
ring to OpenShift and Kubernetes interchangeably but using Open‐
Shift.
Next, let’s create a new project/namespace into which we’ll deploy
our microservices:
$ oc new-project tutorial
Now using project "tutorial" on server
"https://192.168.64.30:8443".
Although not required to run these examples, installing the Docker CLI on your developer laptop is useful as well. This will
allow you to list Docker images and Docker containers right from
your developer laptop as opposed to having to log in to the
Minishift VM. Once you have the Docker CLI installed, you should
be able to run Docker directly from your command-line shell:
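For example (assuming you first point your shell at the Minishift Docker daemon):
$ eval $(minishift docker-env)
$ docker ps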
Getting Started
To deploy our microservices, we will assume that a Docker image
exists. Each microservice described here already has a Docker image
available at the Docker Hub registry, ready to be consumed. How‐
ever, if you want to craft your own Docker image, this chapter will
cover the steps to make it available inside your Kubernetes/Open‐
Shift cluster.
Each microservice uses the same base Docker image provided by the
Fabric8 team. The image fabric8/java-alpine-openjdk8-jdk uses
OpenJDK 8.0 installed on the Alpine Linux distribution, which makes
the image as small as 74 MB.
This image also provides nice features like adjusting the JVM argu‐
ments -Xmx and -Xms, and makes it really simple to run fat JARs.
An example Dockerfile to build a Java fat jar image would be as sim‐
ple as:
FROM fabric8/java-alpine-openjdk8-jdk
ENV JAVA_APP_JAR <your-fat-jar-name>
ENV AB_OFF true
ADD target/<your-fat-jar-name> /deployments/
Deploying to Kubernetes
There are several ways that we could deploy our microservices/
containers inside a Kubernetes/OpenShift cluster. However, for
didactic purposes, we will use YAML files that express very well what
behavior we expect from the cluster.
In the source code for this report, for each microservice example
there is a folder called kubernetes containing two files: deploy‐
ment.yml and service.yml. The deployment file will create a Deploy
ment object with one replica.
The Deployment also provides at least two environment variables.
The one called JAVA_OPTIONS specifies the JVM arguments, like
-Xms and -Xmx. The one called GREETING_BACKENDSERVICEHOST
replaces the values we defined in our first two microservices to find
the BACKEND service, as you’ll see in “Service discovery” on page
80.
Here is the deployment.yml file used for the hello-springboot micro‐
service.
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: hello-springboot
labels:
app: hello-springboot
book: microservices4javadev
version: v1
spec:
replicas: 1
selector:
matchLabels:
app: hello-springboot
version: v1
template:
metadata:
labels:
app: hello-springboot
book: microservices4javadev
version: v1
spec:
containers:
- env:
- name: JAVA_OPTIONS
value: -Xmx256m -Djava.net.preferIPv4Stack=true
- name: GREETING_BACKENDSERVICEHOST
value: backend
image: rhdevelopers/hello-springboot:1.0
imagePullPolicy: IfNotPresent
livenessProbe:
httpGet: # make an HTTP request
port: 8080 # port to use
path: /actuator/health # endpoint to hit
scheme: HTTP # or HTTPS
initialDelaySeconds: 20
periodSeconds: 5
timeoutSeconds: 1
name: hello-springboot
ports:
- containerPort: 8080
name: http
protocol: TCP
readinessProbe:
httpGet: # make an HTTP request
port: 8080 # port to use
path: /actuator/health # endpoint to hit
scheme: HTTP # or HTTPS
initialDelaySeconds: 10
periodSeconds: 5
timeoutSeconds: 1
Since the deployment.yml and service.yml files are stored together with the source code for this report, you can deploy the microservices right from GitHub. First, deploy the hello-springboot microservice:
$ oc create -f http://raw.githubusercontent.com/redhat-developer/microservices-book/master/hello-springboot/kubernetes/deployment.yml
$ oc create -f http://raw.githubusercontent.com/redhat-developer/microservices-book/master/hello-springboot/kubernetes/service.yml
and the hello-microprofile microservice:
$ oc create -f http://raw.githubusercontent.com/
redhat-developer/microservices-book/
master/hello-microprofile/kubernetes/deployment.yml
$ oc create -f http://raw.githubusercontent.com/
redhat-developer/microservices-book/
master/hello-microprofile/kubernetes/service.yml
Finally, deploy the api-gateway microservice:
$ oc create -f http://raw.githubusercontent.com/
redhat-developer/
microservices-book/master/api-gateway/kubernetes/deployment.yml
$ oc create -f http://raw.githubusercontent.com/
redhat-developer/
microservices-book/master/api-gateway/kubernetes/service.yml
The deployment files will create four pods (one replica for each
microservice). The service files will make each of these replicas visi‐
ble to each other. You can check the pods that have been created
through the command:
$ oc get pods
NAME READY STATUS
api-gateway-5985d46fd5-4nsfs 1/1 Running
backend-659d8c4cb9-5hv2r 1/1 Running
hello-microprofile-844c6c758-mmx4h 1/1 Running
hello-springboot-5bf5c4c7fd-k5mf4 1/1 Running
What advantages does Kubernetes bring as a cluster manager? Let’s
start by exploring the first of many. Let’s kill a pod and see what hap‐
pens:
$ oc delete pod hello-springboot-5bf5c4c7fd-k5mf4
pod "hello-springboot-5bf5c4c7fd-k5mf4" deleted
Now let’s list our pods again:
$ oc get pods
NAME READY STATUS
api-gateway-5985d46fd5-4nsfs 1/1 Running
backend-659d8c4cb9-5hv2r 1/1 Running
hello-microprofile-844c6c758-mmx4h 1/1 Running
hello-springboot-5bf5c4c7fd-28mpk 1/1 Running
Wow! There are still four pods! Another pod was created after we
deleted the previous one. Kubernetes can start/stop/auto-restart
your microservices for you. Can you imagine what a headache it
would be to manually determine whether your services are started/
stopped at any kind of scale? Let’s continue exploring some of the
other valuable cluster management features Kubernetes brings to
the table for managing microservices.
External access
Now that all our microservices have been deployed inside the clus‐
ter, we need to provide external access. Since we have an API Gate‐
way defined, only this microservice needs to be exposed; it will be
the single point of access to invoke the hello-microprofile and hello-
springboot microservices. For a refresher on our microservices
architecture, take a look at Figure 6-1.
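On OpenShift, one way to expose the gateway is to create a route for its Service (a sketch; the URL used below assumes Minishift plus the nip.io wildcard DNS):
$ oc expose service api-gateway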
Now you can try the curl command to test all microservices.
$ curl http://api-gateway-tutorial.$(minishift ip).nip.io/api/gateway
["Hello from cluster Backend at host: 172.17.0.7",
"Hello Spring Boot from cluster Backend at host: 172.17.0.7"]
The output should show that you reached both microservices
through the API gateway and both of them accessed the backend
microservice, which means that everything is working as expected.
Scaling
One of the advantages of deploying in a microservices architecture
is independent scalability. We should be able to replicate the number
of services in our cluster easily without having to worry about port
conflicts, JVM or dependency mismatches, or what else is running
on the same machine. With Kubernetes, these types of scaling con‐
cerns can be accomplished with the Deployment/ReplicationCon
troller. Let’s see what deployments exist in our project:
$ oc get deployments
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE
api-gateway 1 1 1 1
backend 1 1 1 1
hello-microprofile 1 1 1 1
hello-springboot 1 1 1 1
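To scale the hello-springboot deployment up, we can use the same oc scale command that appears later in this chapter (three replicas here, to match the pod listing that follows):
$ oc scale deployment hello-springboot --replicas=3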
Now if we list the pods, we should see three pods running our hello-
springboot application:
$ oc get pods
NAME READY STATUS
api-gateway-76649cffc-dgr84 1/1 Running
backend-659d8c4cb9-5hv2r 1/1 Running
hello-microprofile-844c6c758-mmx4h 1/1 Running
hello-springboot-5bf5c4c7fd-j77lj 1/1 Running
hello-springboot-5bf5c4c7fd-ltv5p 1/1 Running
hello-springboot-5bf5c4c7fd-w4z7c 1/1 Running
If any of those pods dies or gets deleted, Kubernetes will do what it
needs to do to make sure the replica count for this service is 3.
Notice also that we didn’t have to change ports on these services or
do any unnatural port remapping. Each of the services is listening
on port 8080 and does not collide with the others.
Kubernetes also has the ability to do autoscaling by watching met‐
rics like CPU, memory usage, or user-defined triggers and scaling
the number of replicas up or down to suit. Autoscaling is outside the
scope of this report but is a very valuable piece of the cluster man‐
agement puzzle.
Service discovery
In Kubernetes, a Service is a simple abstraction that provides a level
of indirection between a group of pods and an application using the
service represented by that group of pods. We’ve seen how pods are
managed by Kubernetes and can come and go. We’ve also seen how
Kubernetes can easily scale up the number of instances of a particu‐
lar service. In our example, we deployed our backend service from
the previous chapters to play the role of service provider. How does
our hello-springboot service communicate with that service?
Let’s take a look at what Kubernetes services exist:
$ oc get services
NAME TYPE CLUSTER-IP PORT(S)
api-gateway ClusterIP 172.30.227.148 8080/TCP
backend ClusterIP 172.30.169.193 8080/TCP
hello-microprofile ClusterIP 172.30.31.211 8080/TCP
hello-springboot ClusterIP 172.30.200.142 8080/TCP
"greeting.backendServiceHost", defaultValue = "local
host") (see Example 3-5).
Because we declared the name of the service and not the IP address,
all it takes to find it is a little bit of DNS and the power of Kuber‐
netes service discovery. One big thing to notice about this approach
is that we did not specify any extra client libraries or set up any reg‐
istries or anything. We happen to be using Java in this case, but
using Kubernetes cluster DNS provides a technology-agnostic way
of doing basic service discovery!
Fault Tolerance
Complex distributed systems like microservices architectures must
be built with an important premise in mind: things will fail. We can
spend a lot of energy trying to prevent failures, but even then we
won’t be able to predict every case of where and how dependencies
in a microservices environment can fail. A corollary to our premise
that things will fail is that we must therefore design our services for failure.
Another way of saying that is that we need to figure out how to sur‐
vive in an environment where there are failures.
Cluster Self-Healing
If a service begins to misbehave, how will we know about it? Ideally,
you might think, our cluster management solution could detect and
alert us about failures and let human intervention take over. This is
the approach we typically take in traditional environments. But
when running microservices at scale, where we have lots of services
that are supposed to be identical, do we really want to have to stop
and troubleshoot every possible thing that can go wrong with a ser‐
vice? Long-running services may experience unhealthy states. An
easier approach is to design our microservices such that they can be
terminated at any moment, especially when they appear to be behav‐
ing incorrectly.
Kubernetes has a couple of health probes we can use out of the box
to allow the cluster to administer and self-heal itself. The first is a
readiness probe, which allows Kubernetes to determine whether or
not a pod should be considered in any service discovery or load-
balancing algorithms. For example, some Java apps may take a few
seconds to bootstrap the containerized process, even though the pod
is technically up and running. If we start sending traffic to a pod in that state before it is ready to handle requests, those requests are likely to fail.
Circuit Breaker
As a service provider, your responsibility is to your consumers to
provide the functionality you’ve promised. Following promise
theory, a service provider may depend on other services or down‐
stream systems but cannot and should not impose requirements
upon them. A service provider is wholly responsible for its promise
to consumers. Because distributed systems can and do fail, however,
there will be times when service promises can’t be met or can be
only partly met. In our previous examples, we showed our “hello”
microservices reaching out to a backend service to form a greeting at
the /api/greeting endpoint. What happens if the backend service is
not available? How do we hold up our end of the promise?
We need to be able to deal with these kinds of distributed systems
faults. A service may not be available; a network may be experienc‐
ing intermittent connectivity; the backend service may be experienc‐
ing enough load to slow it down and introduce latency; a bug in the
backend service may be causing application-level exceptions. If we
don’t deal with these situations explicitly, we run the risk of degrading our own service, holding up threads, database locks, and resources, and contributing to rolling, cascading failures that can take an
entire distributed network down. The following subsections present
two different approaches to help us account for these failures (one
for each technology, Spring Boot and MicroProfile).
@GET
@Produces("text/plain")
@Path("greeting")
@CircuitBreaker
@Timeout
@Fallback(fallbackMethod = "fallback")
public String greeting(){
// greeting implementation
}
• Injecting business logic and other stateful handling of faults
@RestController
@RequestMapping("/api")
@ConfigurationProperties(prefix="greeting")
public class GreeterRestController {
@RequestMapping(value = "/greeting",
method = RequestMethod.GET, produces = "text/plain")
@HystrixCommand(fallbackMethod = "fallback")
public String greeting(){
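        // Sketch: call the backend as in the earlier examples; if the call
        // fails or times out, Hystrix invokes the fallback method named above.
        RestTemplate template = new RestTemplate();
        String backendServiceUrl = String.format(
            "http://%s:%d/api/backend?greeting={greeting}",
            backendServiceHost, backendServicePort);
        BackendDTO backendDTO = template.getForObject(
            backendServiceUrl, BackendDTO.class, saying);
        return backendDTO.getGreeting() + " at host: " + backendDTO.getIp();
    }

    // The degraded-but-promise-keeping response. HOSTNAME is set inside the
    // container (an assumption) and yields the pod name seen in the curl
    // output later in this chapter.
    public String fallback() {
        return saying + " at host " + System.getenv("HOSTNAME") + " (fallback)";
    }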
@EnableCircuitBreaker
@SpringBootApplication
public class HelloSpringbootApplication {
$ cd <PROJECT_ROOT>/hello-microprofile
$ mvn clean package
$ docker build -t rhdevelopers/hello-microprofile:1.0 .
Now that the images have been rebuilt, we can delete the previous
running containers—Kubernetes will restart them using the new
Docker images:
$ oc delete pod -l app=hello-springboot
$ oc delete pod -l app=hello-microprofile
We also need to scale the backend service to 0 replicas:
$ oc scale deployment backend --replicas=0
Now, we wait for all pods but the backend to be Running and try the
api-gateway service to get the response from both microservices:
$ curl http://api-gateway-tutorial.$(minishift ip).nip.io/api/gateway
["Hello at host hello-
microprofile-57c9f8f9f4-24c2l- (fallback)",
"Hello Spring Boot at host hello-
springboot-f797878bd-24hxm- (fallback)"]
Load Balancing
In a highly scaled distributed system, we need a way to discover and
load balance against services in the cluster. As we’ve seen in previous
examples, our microservices must be able to handle failures; there‐
fore, we have to be able to load balance against services that exist,
services that may be joining or leaving the cluster, or services that
exist in an autoscaling group. Rudimentary approaches to load bal‐
ancing, like round-robin DNS, are not adequate. We may also need
sticky sessions, autoscaling, or more complex load-balancing algo‐
rithms. Let’s take a look at a few different ways of doing load balanc‐
ing in a microservices environment.
requests. The Service will load balance to the pods listed in the Endpoints field:
$ oc describe service backend
Name: backend
Namespace: microservices4java
Labels: app=backend
Annotations: <none>
Selector: app=backend
Type: ClusterIP
IP: 172.30.169.193
Port: http 8080/TCP
TargetPort: 8080/TCP
Endpoints: 172.17.0.11:8080,172.17.0.13:8080,
172.17.0.14:8080
Session Affinity: None
Events: <none>
We can see here that the backend service will select all pods with the
label app=backend. Let’s take a moment to see what labels are on one
of the backend pods:
$ oc describe pod/backend-859bbd5cc-ck68q | grep Labels
Labels: app=backend
The backend pods have a label that matches what the service is look‐
ing for, so any communication with the service will be load-
balanced over these matching pods.
Let’s make a few calls to our api-gateway service. We should see the
responses contain different IP addresses for the backend service:
$ curl http://api-gateway-tutorial.$(minishift ip).nip.io/api/gateway
["Hello from cluster Backend at host: 172.17.0.11",
"Hello Spring Boot from cluster Backend at host: 172.17.0.14"]
We used curl here, but you can use your favorite HTTP/REST tool,
including your web browser. Just refresh your browser a few times;
you should see that the backend that gets called is different each
time, indicating that the Kubernetes Service is load balancing over
the respective pods as expected.
When you’re done experimenting, don’t forget to reduce the number
of replicas in your cluster to reduce the resource consumption:
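For example, using the same oc scale command as before (assuming you scaled hello-springboot and backend up earlier):
$ oc scale deployment hello-springboot --replicas=1
$ oc scale deployment backend --replicas=1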
• Spring Cloud
• MicroProfile Fault Tolerance
• Hystrix on GitHub
• Kubernetes
• OpenShift 3.11 Documentation
• “Why Kubernetes Is the New Application Server” by Rafael
Benevides
C++, and C#. There are already several OpenTracing implementa‐
tions, including Jaeger (from Uber), Apache SkyWalking, Instana, and others. In this report, we will use Jaeger, which is the
most widely used implementation of OpenTracing.
Installing Jaeger
All information captured on each microservice should be reported
to a server that will collect and store this information, so it can be
queried later.
So, before instrumenting the source code of our microservices, first
we need to install the Jaeger server and its components. Jaeger pro‐
vides an all-in-one distribution composed of the Jaeger UI, collector,
query, and agent, with an in-memory storage component.
We can install this distribution with the following command:
$ oc process -f \
  http://raw.githubusercontent.com/jaegertracing/jaeger-openshift/master/all-in-one/jaeger-all-in-one-template.yml \
  | oc create -f -
@SpringBootApplication
@CamelOpenTracing
public class MySpringBootApplication {
/**
* A main method to start this application.
*/
public static void main(String[] args) {
SpringApplication.run(MySpringBootApplication.class, args);
    }
}
@RestController
@RequestMapping("/api")
@ConfigurationProperties(prefix="greeting")
public class GreeterRestController {
@RequestMapping(value = "/greeting",
method = RequestMethod.GET, produces = "text/plain")
@HystrixCommand(fallbackMethod = "fallback")
public String greeting(){
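        // "template" is the RestTemplate this controller uses to call the
        // backend (assumed to be created or injected elsewhere in the class);
        // registering the interceptor propagates the active span downstream.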
template.setInterceptors(
Collections.singletonList(
new TracingRestTemplateInterceptor(
TracerResolver.resolveTracer())));
@Path("/api")
public class GreeterRestController {
@Inject
@ConfigProperty(name="greeting.saying",
defaultValue = "Hello")
private String saying;
@Inject
@ConfigProperty(name = "greeting.backendServiceHost",
defaultValue = "localhost")
private String backendServiceHost;
@Inject
@ConfigProperty(name = "greeting.backendServicePort",
defaultValue = "8080")
private int backendServicePort;
@GET
@Produces("text/plain")
@Path("greeting")
@CircuitBreaker
@Timeout
@Fallback(fallbackMethod = "fallback")
@Traced(operationName = "greeting")
public String greeting() {
String backendServiceUrl = String.format("http://%s:%d",
backendServiceHost,backendServicePort);
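        // The call that fetches backendDTO from the backend (shown in
        // Chapter 3) is elided here; @Traced wraps this method in a span.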
return backendDTO.getGreeting()
+ " at host: " + backendDTO.getIp();
}
That’s it! Just one annotation and we are good to go. But let’s not for‐
get about the JAEGER_SERVICE_NAME in the Dockerfile:
FROM fabric8/java-alpine-openjdk8-jdk
ENV JAVA_APP_JAR demo-thorntail.jar
ENV AB_OFF true
ENV JAEGER_SERVICE_NAME hello-microprofile
ADD target/demo-thorntail.jar /deployments/
We can then rebuild the JAR file and the Docker image and restart
the Kubernetes pod with the following commands:
$ mvn clean package -DskipTests
$ docker build -t rhdevelopers/hello-microprofile:1.0 .
$ oc delete pod -l app=hello-microprofile
//Perform work
scope.span().finish();
@WebServlet(urlPatterns = {"/api/backend"})
public class BackendHttpServlet extends HttpServlet {
@Override
protected void doGet(HttpServletRequest req,
HttpServletResponse resp)
throws ServletException, IOException {
resp.setContentType("application/json");
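        // "scope" refers to a span started at the beginning of this request,
        // e.g. via tracer.buildSpan("backend").startActive(true) (an
        // assumption; see the report's source code); we finish it below.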
scope.span().finish();
}
This means that the default configuration for the tracer uses a UDP
Sender that sends the tracing information to localhost:5778. The
ProbabilisticSampler defines that only 0.1% (0.001) of the
requests will be traced. Tracing only 0.1% of the requests seems fine
for production usage. However, for our tutorial we will change the
tracer to collect all requests.
According to the environment variable definitions in the jaeger-
core module, we will need to configure the following keys/values for
all microservices:
• JAEGER_ENDPOINT: http://jaeger-collector:14268/api/
traces
• JAEGER_REPORTER_LOG_SPANS: true
• JAEGER_SAMPLER_TYPE: const
• JAEGER_SAMPLER_PARAM: 1
Now click Search in the top menu, and select the API-Gateway ser‐
vice. Scroll down the page, and click the Find Traces button. You
should see the tracing generated by your request with the curl com‐
mand, as shown in Figure 7-3.
Click on the trace, and Jaeger will open the details. It’s easy to see
that the api-gateway service made parallel requests to hello-microprofile and hello-springboot.
Feel free to go ahead and search for the backend service spans.
• OpenTracing
• Jaeger
• The camel-opentracing component
• Jaeger bindings for the Java OpenTracing API
• “Using OpenTracing with Jaeger to Collect Application Metrics
in Kubernetes” by Diane Mueller-Klingspor
• “OpenShift Commons Briefing #82: Distributed Tracing with
Jaeger & Prometheus on Kubernetes” by Diane Mueller
Configuration
Configuration is a very important part of any distributed system,
and it becomes even more difficult with microservices. We need to
find a good balance between configuration and immutable delivery
because we don’t want to end up with snowflake services. For exam‐
ple, we’ll need to be able to change logging, switch on features for
A/B testing, configure database connections, and use secret keys or
passwords. We saw in some of our examples how to configure our
microservices using each of the three Java frameworks presented
here, but each framework does configuration slightly differently.
What if we have microservices written in Python, Scala, Golang,
Node.js, etc.?
To be able to manage configuration across technologies and within
containers, we need to adopt an approach that works regardless of
what’s actually running in the containers. In a Docker environment,
we can inject environment variables and allow our application to
consume those environment variables. Kubernetes allows us to do
that as well, and considers it a good practice. Kubernetes also
includes APIs for mounting Secrets that allow us to safely decouple
usernames, passwords, and private keys from our applications and
inject them into the Linux container when needed. Furthermore, the
recently added ConfigMaps, which are very similar to Secrets in that
they allow application-level configuration to be managed and
decoupled from the application’s Docker image, and also allow us to
inject configuration via environment variables and/or files on the
container’s filesystem. If an application can consume configuration
files from the filesystem (which we saw with all three Java frame‐
works) or read environment variables, it can leverage the Kuber‐
netes configuration functionality. Taking this approach, we don’t
have to set up additional configuration services and complex clients
for consuming it. Configuration for our microservices running
inside containers (or even outside them), regardless of technology, is
now baked into the cluster management infrastructure.
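A sketch of what this looks like in practice (the names here are hypothetical; the env wiring mirrors the deployment.yml entries shown in Chapter 6):
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config
data:
  greeting.saying: Guten Tag aus

Then, in the container spec of a Deployment:
env:
- name: GREETING_SAYING
  valueFrom:
    configMapKeyRef:
      name: app-config
      key: greeting.saying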
Continuous Delivery
Deploying microservices with immutable images, as discussed ear‐
lier in Chapter 5, is paramount. When we have many more (if
smaller) services than before, our existing manual processes will not
scale. Moreover, with each team owning and operating their own
microservices, we need a way for teams to make immutable delivery
a reality without bottlenecks and human error. Once we release our
microservices, we need to have insight and feedback about their
usage to help drive further change. As the business requests change,
and as we get more feedback loops into the system, we will be doing
more releases more often. To make this a reality, we need a capable
software delivery pipeline. This pipeline may be composed of multi‐
ple subpipelines with gates and promotion steps, but ideally, we
want to automate the build, test, and deploy mechanics as much as
possible.
Tools like Docker and Kubernetes also give us the built-in capacity
to implement rolling upgrades, blue-green deployments, canary
releases, and other deployment strategies. Obviously these tools are
not required to deploy in this manner (places like Amazon and Net‐
flix have done it for years without Linux containers), but the incep‐
tion of containers does give us the isolation and immutability factors
to make this easier. You can use your CI/CD tooling, like Jenkins
and Jenkins Pipeline, in conjunction with Kubernetes and build out
flexible yet powerful build and deployment pipelines. Take a look at
OpenShift for more details on an implementation of CI/CD with
Kubernetes based on Jenkins Pipeline.
Summary
This report was meant as a hands-on, step-by-step guide for getting
started with building distributed systems with some popular Java
frameworks following a microservices approach. Microservices is
not a technology-only solution, as we discussed in the opening
chapter. People are the most important part of a complex system (a