
RED HAT® TRAINING

Comprehensive, hands-on training that solves real-world problems

Red Hat Application Development II: Implementing Microservice Architectures
Student Workbook (ROLE)

© 2018 Red Hat, Inc. JB283-RHOAR1.0-en-1-20180517


Red Hat Application Development II: Implementing Microservice Architectures
Red Hat OpenShift Application Runtimes 1.0 JB283


Red Hat Application Development II: Implementing Microservice Architectures
Edition 1 20180517

Authors: Jim Rigsbee, Richard Allred, Ricardo Taniguchi, Douglas Silva, Nancy K.A.N, Zach Gutterman
Editor: Seth Kenlon

Copyright © 2018 Red Hat, Inc.

The contents of this course and all its modules and related materials, including handouts to
audience members, are Copyright © 2018 Red Hat, Inc.

No part of this publication may be stored in a retrieval system, transmitted or reproduced in
any way, including, but not limited to, photocopy, photograph, magnetic, electronic or other
record, without the prior written permission of Red Hat, Inc.

This instructional program, including all material provided herein, is supplied without any
guarantees from Red Hat, Inc. Red Hat, Inc. assumes no liability for damages or legal action
arising from the use or misuse of contents or details contained herein.

If you believe Red Hat training materials are being used, copied, or otherwise improperly
distributed, please e-mail [email protected] or phone toll-free (USA) +1 (866) 626-2994
or +1 (919) 754-3700.

Red Hat, Red Hat Enterprise Linux, the Shadowman logo, JBoss, Hibernate, Fedora, the
Infinity Logo, and RHCE are trademarks of Red Hat, Inc., registered in the United States and
other countries.

Linux® is the registered trademark of Linus Torvalds in the United States and other
countries.

Java® is a registered trademark of Oracle and/or its affiliates.

XFS® is a registered trademark of Silicon Graphics International Corp. or its subsidiaries in
the United States and/or other countries.

The OpenStack® Word Mark and OpenStack Logo are either registered trademarks/service
marks or trademarks/service marks of the OpenStack Foundation, in the United States
and other countries and are used with the OpenStack Foundation's permission. We are not
affiliated with, endorsed or sponsored by the OpenStack Foundation, or the OpenStack
community.

All other trademarks are the property of their respective owners.


Document Conventions vii
Notes and Warnings ................................................................................................ vii

Introduction ix
Red Hat Application Development II: Implementing Microservice Architectures ............... ix
Orientation to the Classroom Environment ................................................................. x
Internationalization ................................................................................................. xii

1. Describing Microservice Architectures 1


Describing Microservices ........................................................................................... 2
Quiz: Describing Microservices .................................................................................. 8
Describing Microservices Architectural Patterns ......................................................... 12
Quiz: Describing Microservice Architecture Patterns .................................................. 26
Summary .............................................................................................................. 30

2. Deploying Microservice-based Applications 31


Deploying a Microservice from the MicroProfile Conference Application ....................... 32
Guided Exercise: Deploying a Microservice ................................................................ 42
Deploying a Microservice with the fabric8 Maven Plug-in ........................................... 48
Guided Exercise: Deploying a Microservice with the fabric8 Maven Plug-in .................... 55
Lab: Deploying a Microservice-based Application ....................................................... 65
Summary .............................................................................................................. 76

3. Implementing a Microservice with MicroProfile 77


Describing MicroProfile and Its Specifications ........................................................... 78
Quiz: Describing MicroProfile and Its Specifications ................................................... 83
Implementing a Microservice with CDI, JAX-RS, and JSON-P ....................................... 87
Guided Exercise: Implementing a RESTful Microservice .............................................. 93
Lab: Implementing a Microservice with MicroProfile .................................................. 101
Summary .............................................................................................................. 115

4. Testing Microservices 117


Testing Microservices with Arquillian ....................................................................... 118
Guided Exercise: Testing Microservices with Arquillian .............................................. 124
Testing Microservices with Mock Frameworks ........................................................... 129
Guided Exercise: Testing Microservices with Mock Frameworks ................................... 134
Lab: Testing Microservices ..................................................................................... 139
Summary ............................................................................................................. 148

5. Injecting Configuration Data into a Microservice 149


Injecting Configuration Data with the Config Specification ......................................... 150
Guided Exercise: Injecting Configuration Data .......................................................... 159
Implementing Service Discovery with OpenShift ....................................................... 164
Quiz: Implementing Service Discovery with OpenShift ............................................... 166
Lab: Injecting Configuration Data into a Microservice ................................................ 168
Summary ............................................................................................................. 178

6. Creating Application Health Checks 179


Implementing a Health Check Monitored By OpenShift ............................................. 180
Guided Exercise: Implementing a Health Check ........................................................ 190
Lab: Creating Application Health Checks ................................................................. 194
Summary ............................................................................................................ 206

7. Implementing Fault Tolerance 207


Applying Fault Tolerance Policies to a Microservice .................................................. 208


Guided Exercise: Applying Fault Tolerance Policies .................................................... 216


Lab: Implementing Fault Tolerance .......................................................................... 221
Summary .............................................................................................................. 231
8. Developing an API Gateway 233
Describing the API Gateway Pattern ....................................................................... 234
Quiz: Describing the API Gateway Pattern ............................................................... 239
Developing an API Gateway for Microservices ......................................................... 243
Guided Exercise: Implementing Fault Tolerance in an API Gateway ............................. 247
Lab: Developing an API Gateway ............................................................................ 253
Summary ............................................................................................................. 275
9. Securing Microservices with JWT 277
Implementing a JSON Web Token Generator ........................................................... 278
Guided Exercise: Implementing a JSON Web Token Generator ................................... 285
Securing a Microservice Endpoint .......................................................................... 289
Guided Exercise: Securing a Microservice Endpoint .................................................. 292
Lab: Securing Microservices with JWT ................................................................... 299
Summary ............................................................................................................. 314
10. Monitoring Microservices 315
Adding Metrics to a Microservice ............................................................................ 316
Guided Exercise: Adding Metrics to a Microservice ................................................... 324
Enabling Distributed Tracing in a Microservice ......................................................... 328
Quiz: Enabling Distributed Tracing in a Microservice ................................................. 334
Describing OpenShift Log Aggregation ................................................................... 336
Quiz: Describing Log Aggregation ........................................................................... 341
Lab: Monitoring Microservices ............................................................................... 343
Summary ............................................................................................................. 351
11. Comprehensive Review: Red Hat Application Development II: Implementing
Microservice Architectures 353
Comprehensive Review ........................................................................................ 354
Lab: Developing a Microservice Endpoint ................................................................ 356
Lab: Monitoring a Microservice .............................................................................. 376

Document Conventions
Notes and Warnings

Note
"Notes" are tips, shortcuts or alternative approaches to the task at hand. Ignoring a
note should have no negative consequences, but you might miss out on a trick that
makes your life easier.

Important
"Important" boxes detail things that are easily missed: configuration changes that
only apply to the current session, or services that need restarting before an update
will apply. Ignoring a box labeled "Important" will not cause data loss, but may cause
irritation and frustration.

Warning
"Warnings" should not be ignored. Ignoring warnings will most likely cause data loss.

References
"References" describe where to find external documentation relevant to a subject.

Introduction
Red Hat Application Development II:
Implementing Microservice Architectures
Microservices are an important component of modern application architectures. In order to shift
monolithic application workloads to microservices, a developer must understand the principles
of microservice architecture. Red Hat Application Development II: Implementing Microservice
Architectures introduces the principles of microservice architecture and provides practical hands-
on experience creating microservices using Red Hat OpenShift Application Runtimes. This course
teaches students how to develop microservices with Wildfly Swarm, a Java EE MicroProfile
implementation. MicroProfile is a set of both Java EE specifications and new specifications
created specifically for microservice implementations. These specifications include support for
health checks, metrics, fault tolerance, JAX-RS, CDI, JSON-P, and JSON web tokens for security.

Objectives
• Demonstrate knowledge of implementing microservice architectures.

• Develop microservices using MicroProfile.

• Deploy microservices on an OpenShift cluster.

Audience
• Software Developers

• Software Architects

Prerequisites
• Red Hat Application Development I: Programming in Java EE (JB183) or equivalent Java EE
experience

• Introduction to Containers, Kubernetes, and Red Hat OpenShift course (DO180). Having
equivalent knowledge is helpful, but not required.

• RHCSA or higher is helpful for navigation and usage of the command line, but not required.


Orientation to the Classroom Environment

Figure 0.1: Classroom Environment

In this course, the main computer system used for hands-on learning activities is workstation.
Four other machines will also be used by students for these activities. These are master, node1,
node2, and services. All four of these systems are in the lab.example.com DNS domain.

All student computer systems have a standard user account, student, which has the password
student. The root password on all student systems is redhat.

Classroom Machines
Machine name IP addresses Role
workstation.lab.example.com 172.25.250.254 Graphical workstation used
for system administration
master.lab.example.com 172.25.250.10 Master of the OpenShift
cluster
node1.lab.example.com 172.25.250.11 Node in the OpenShift cluster
node2.lab.example.com 172.25.250.12 Node in the OpenShift cluster
services.lab.example.com 172.25.250.13 Provides supporting services
such as: container image
registry, nexus repository, and
Git server

One additional function of workstation is that it acts as a router between the network that
connects student machines and the classroom network. If workstation is down, other student
machines will only be able to access systems on the student network.

There are several systems in the classroom that provide supporting services. Two servers,
content.example.com and materials.example.com are sources for software and lab
materials used in hands-on activities. Information on how to use these servers will be provided in
the instructions for those activities.


Controlling Your Station


The top of the console describes the state of your machine.

Machine States
State Description
none Your machine has not yet been started. When started, your machine
will boot into a newly initialized state (the disk will have been reset).
starting Your machine is in the process of booting.
running Your machine is running and available (or, when booting, soon will be.)
stopping Your machine is in the process of shutting down.
stopped Your machine is completely shut down. Upon starting, your machine
will boot into the same state as when it was shut down (the disk will
have been preserved).
impaired A network connection to your machine cannot be made. Typically this
state is reached when a student has corrupted networking or firewall
rules. If the condition persists after a machine reset, or is intermittent,
please open a support case.

Depending on the state of your machine, a selection of the following actions will be available to
you.

Machine Actions
Action Description
Start Station Start ("power on") the machine.
Stop Station Stop ("power off") the machine, preserving the contents of its disk.
Reset Station Stop ("power off") the machine, resetting the disk to its initial state.
Caution: Any work generated on the disk will be lost.
Refresh Refreshing the page re-probes the machine state.
Increase Timer Adds 15 minutes to the timer for each click.

The Station Timer


Your Red Hat Online Learning enrollment entitles you to a certain amount of computer time. In
order to help you conserve your time, the machines have an associated timer, which is initialized
to 60 minutes when your machine is started.

The timer operates as a "dead man's switch," which counts down while your machine is running. As
the timer approaches 0, you may choose to increase it.


Internationalization

Language Support
Red Hat Enterprise Linux 7 officially supports 22 languages: English, Assamese, Bengali, Chinese
(Simplified), Chinese (Traditional), French, German, Gujarati, Hindi, Italian, Japanese, Kannada,
Korean, Malayalam, Marathi, Odia, Portuguese (Brazilian), Punjabi, Russian, Spanish, Tamil, and
Telugu.

Per-user Language Selection


Users may prefer to use a different language for their desktop environment than the system-
wide default. They may also want to set their account to use a different keyboard layout or input
method.

Language Settings
In the GNOME desktop environment, the user may be prompted to set their preferred language
and input method on first login. If not, then the easiest way for an individual user to adjust their
preferred language and input method settings is to use the Region & Language application. Run
the command gnome-control-center region, or from the top bar, select (User) > Settings.
In the window that opens, select Region & Language. The user can click the Language box and
select their preferred language from the list that appears. This will also update the Formats
setting to the default for that language. The next time the user logs in, these changes will take
full effect.

These settings affect the GNOME desktop environment and any applications, including gnome-
terminal, started inside it. However, they do not apply to that account if accessed through an
ssh login from a remote system or a local text console (such as tty2).

Note
A user can make their shell environment use the same LANG setting as their graphical
environment, even when they log in through a text console or over ssh. One way to do
this is to place code similar to the following in the user's ~/.bashrc file. This example
code will set the language used on a text login to match the one currently set for the
user's GNOME desktop environment:

i=$(grep 'Language=' /var/lib/AccountsService/users/${USER} \
  | sed 's/Language=//')
if [ "$i" != "" ]; then
  export LANG=$i
fi

Japanese, Korean, Chinese, or other languages with a non-Latin character set may not
display properly on local text consoles.

Individual commands can be made to use another language by setting the LANG variable on the
command line:

[user@host ~]$ LANG=fr_FR.utf8 date


jeu. avril 24 17:55:01 CDT 2014

Subsequent commands will revert to using the system's default language for output. The locale
command can be used to check the current value of LANG and other related environment
variables.

Input Method Settings


GNOME 3 in Red Hat Enterprise Linux 7 automatically uses the IBus input method selection
system, which makes it easy to change keyboard layouts and input methods quickly.

The Region & Language application can also be used to enable alternative input methods. In the
Region & Language application's window, the Input Sources box shows what input methods are
currently available. By default, English (US) may be the only available method. Highlight English
(US) and click the keyboard icon to see the current keyboard layout.

To add another input method, click the + button at the bottom left of the Input Sources window.
An Add an Input Source window will open. Select your language, and then your preferred input
method or keyboard layout.

Once more than one input method is configured, the user can switch between them quickly by
typing Super+Space (sometimes called Windows+Space). A status indicator will also appear
in the GNOME top bar, which has two functions: It indicates which input method is active, and
acts as a menu that can be used to switch between input methods or select advanced features of
more complex input methods.

Some of the methods are marked with gears, which indicate that those methods have advanced
configuration options and capabilities. For example, the Japanese Japanese (Kana Kanji) input
method allows the user to pre-edit text in Latin and use Down Arrow and Up Arrow keys to
select the correct characters to use.

US English speakers may also find this useful. For example, under English (United States) is the
keyboard layout English (international AltGr dead keys), which treats AltGr (or the right Alt)
on a PC 104/105-key keyboard as a "secondary-shift" modifier key and dead key activation key
for typing additional characters. There are also Dvorak and other alternative layouts available.

Note
Any Unicode character can be entered in the GNOME desktop environment if the user
knows the character's Unicode code point, by typing Ctrl+Shift+U, followed by the
code point. After Ctrl+Shift+U has been typed, an underlined u will be displayed to
indicate that the system is waiting for Unicode code point entry.

For example, the lowercase Greek letter lambda has the code point U+03BB, and can be
entered by typing Ctrl+Shift+U, then 03bb, then Enter.

System-wide Default Language Settings


The system's default language is set to US English, using the UTF-8 encoding of Unicode as its
character set (en_US.utf8), but this can be changed during or after installation.

From the command line, root can change the system-wide locale settings with the localectl
command. If localectl is run with no arguments, it will display the current system-wide locale
settings.


To set the system-wide language, run the command localectl set-locale LANG=locale,
where locale is the appropriate $LANG from the "Language Codes Reference" table in this
chapter. The change will take effect for users on their next login, and is stored in /etc/
locale.conf.

[root@host ~]# localectl set-locale LANG=fr_FR.utf8

In GNOME, an administrative user can change this setting from the Region & Language application
by clicking the Login Screen button at the upper-right corner of the window. Changing the Language of
the login screen will also adjust the system-wide default language setting stored in the /etc/
locale.conf configuration file.

Important
Local text consoles such as tty2 are more limited in the fonts that they can display
than gnome-terminal and ssh sessions. For example, Japanese, Korean, and Chinese
characters may not display as expected on a local text console. For this reason, it may
make sense to use English or another language with a Latin character set for the
system's text console.

Likewise, local text consoles are more limited in the input methods they support, and
this is managed separately from the graphical desktop environment. The available
global input settings can be configured through localectl for both local text virtual
consoles and the X11 graphical environment. See the localectl(1), kbd(4), and
vconsole.conf(5) man pages for more information.

Language Packs
When using non-English languages, you may want to install additional "language packs" to
provide additional translations, dictionaries, and so forth. To view the list of available langpacks,
run yum langavailable. To view the list of langpacks currently installed on the system,
run yum langlist. To add an additional langpack to the system, run yum langinstall
code, where code is the code in square brackets after the language name in the output of yum
langavailable.

References
locale(7), localectl(1), kbd(4), locale.conf(5), vconsole.conf(5),
unicode(7), utf-8(7), and yum-langpacks(8) man pages

Conversions between the names of the graphical desktop environment's X11 layouts and
their names in localectl can be found in the file /usr/share/X11/xkb/rules/
base.lst.


Language Codes Reference


Language Codes
Language $LANG value
English (US) en_US.utf8
Assamese as_IN.utf8
Bengali bn_IN.utf8
Chinese (Simplified) zh_CN.utf8
Chinese (Traditional) zh_TW.utf8
French fr_FR.utf8
German de_DE.utf8
Gujarati gu_IN.utf8
Hindi hi_IN.utf8
Italian it_IT.utf8
Japanese ja_JP.utf8
Kannada kn_IN.utf8
Korean ko_KR.utf8
Malayalam ml_IN.utf8
Marathi mr_IN.utf8
Odia or_IN.utf8
Portuguese (Brazilian) pt_BR.utf8
Punjabi pa_IN.utf8
Russian ru_RU.utf8
Spanish es_ES.utf8
Tamil ta_IN.utf8
Telugu te_IN.utf8

CHAPTER 1

DESCRIBING MICROSERVICE
ARCHITECTURES

Overview

Goal: Describe components and patterns of microservice-based application architectures.

Objectives:
• Define what a microservice is and the guiding principles for their creation.
• Describe the major patterns implemented in microservice architectures.

Sections:
• Describing Microservices (and Quiz)
• Describing Microservice Architecture Patterns (and Quiz)


Describing Microservices

Objective
After completing this section, students should be able to define microservices and the guiding
principles for their creation.

Describing the Microservice Architecture


Microservice architecture is a method of dividing a traditional monolithic enterprise application
into a set of small, modular services. Each service is built around a specific business domain,
such as customer information or inventory pricing. Microservices can be written in different
programming languages, even managed and deployed using completely different tools. For
example, a developer can build a microservice for a specific business domain and then develop a
web application that calls the microservice.

Microservices can be independently managed and deployed using automation. They run in
unique processes and communicate using a lightweight mechanism, usually HTTP calls to a REST
API. This modularity means that microservices are good for running within a cloud environment,
and fit naturally into containerized deployment platforms like OpenShift Container Platform.

Additionally, a microservice generally has the following characteristics:

• modeled around a single business problem or domain

• implements its own business logic, persistent storage, and external collaboration

• has an individually published contract, also known as an API

• capable of running in isolation

• independent and loosely-coupled from other services

• easily replaced or upgraded

• scaled and deployed independently of other services

Analyzing Technology Trends and Reasons for Adoption
Many organizations want to adopt microservice architecture in their new development, and to re-
architect their existing applications using this approach. The most popular reasons for adopting
microservices include:

• Faster deployment: Microservices are much smaller than traditional monolithic applications.
Smaller services improve the time required to fix bugs because these services are released
independently, meaning new features can be added, tested, and released quickly.

• Rapid development: Microservices are developed and maintained by small teams. Small teams
are typically focused on one or two microservices at most.


Figure 1.1: Rapid Development

Collaboration becomes a bottleneck in large teams where team size is ten or more members.
To encourage small teams in the development, management, and operation of microservices,
the "two-pizza team" concept is often used. The two-pizza concept specifies that if you
cannot feed all the members of a team with two pizzas, then the team is too large. Generally, a
reasonable team size is 5-7 members.

• Less complex code: Microservices are much smaller than monolithic applications due to
their design and architecture. Each microservice is built for a specific business function, and
is treated as a separate entity. This results in less complex code than larger applications. This
means each microservice can be tested separately and have its own release schedule.

• Easier to scale: Microservices are typically deployed independently. Individual services can
scale out horizontally depending on the amount of load the service is receiving, meaning that
more resources are allocated to services that are receiving higher traffic.

Comparing Microservices and Monolithic Applications


Monolithic applications are large enterprise applications that are developed and deployed as a
single project. These applications are written in a single programming language, managed by a
single team of developers who test, deploy, and release as a single artifact. They are complex
to build, test, and migrate into production. Even though monolithic applications are divided into
tiers or layers, the applications are almost always packaged into a single WAR or EAR file.

Because monolithic applications are deployed as a single app, all of the memory, and other
resources, are limited by the hardware. Monolithic applications must scale by replicating the
entire application on multiple servers. Additionally, these applications tend to be more complex
and tightly coupled, making them more difficult to maintain and update.


Figure 1.2: Comparing monolithic applications and microservices

Compared to monolithic applications, microservices are better organized, smaller, more loosely-
coupled, and they are independently developed, tested, and deployed. Because microservices can
be released independently, the time required to fix bugs or to add new features is much shorter,
and changes can be deployed to production more efficiently. Additionally, because microservices
are small and stateless, they can scale much more easily.


Reviewing the Principles for Successful Adoption


As more companies attempt to move towards using a microservice architecture, a few important
principles have become critical to the success of microservice-based development. These
principles include:

• Reorganizing to DevOps: Microservices teams are responsible for the entire lifecycle of their
microservice, from development to operation. DevOps teams can work on their individual
product at their own pace. The principle of "you build it, you own it" provides autonomy and
speed for delivery teams.

• Packaging the service as a container: A container is a set of isolated processes that
provides all of the necessary dependencies to run an application. By using tools like Docker,
the microservice and its runtime can be packaged and deployed as a container. Using
containers allows developers to have control and confidence over the exact runtime
environment the service uses.

• Using an elastic infrastructure: Provide elastic infrastructure to meet the on-demand
requirements of the microservices. OpenShift Container Platform automates the provisioning,
management, and scaling of containerized applications and provides on-demand scaling
through automatic horizontal pod scaling. As load increases on certain microservices,
OpenShift automatically increases available resources for those microservices.

• Automating Processes: Use scripting or other tools to automate everything related to the
provisioning of the service infrastructure and the deployment of the service. Manage these
automation scripts or other resources in your version control system just like any other
application code. Infrastructure As Code (IAC) is an approach that uses descriptive language
to code versatile and adaptive provisioning and deployment processes using tools such as
Ansible, Puppet, or Jenkins.

• Continuous integration and delivery pipeline: Continuous integration and delivery (CI/CD)
is the practice of building, testing, and delivering software as the code is developed. A key to
continuous delivery is to automate the entire pipeline, including code check-in, builds, tests,
and deployments across multiple environments. Make sure software is always deployable.
When the build is broken, make sure that fixing the broken code takes priority over other tasks.
Tools like Jenkins can be used for CI/CD.

Understanding the Complexities of Microservice Architecture

Designing and building a large-scale distributed application with many microservices can be
challenging and complex. These challenges tend to arise when planning and managing a large
number of small components that must interact with each other. Testing an individual
microservice is simple, but when multiple services are dependent upon each other, they must be
tested together. Further, introducing dependencies between microservices can cause the system
to communicate in unpredictable ways if one of the services does not behave as expected.

Additionally, maintaining data consistency may be difficult in a large distributed system. Every
microservice can have its own persistent storage, so shared entity data formats cannot be
changed without changing all of the microservices. For example, if two microservices are both
persisting instances of the entity Employee and the data model for Employee changes, both
services need to update the entity.


Tracing and monitoring is also more complex for microservice-based applications. Instead of
monitoring a single application server, microservices are typically running on multiple servers
and writing to multiple log files. Collecting all of the log information and aggregating it in one
place to analyze and visualize efficiently is a complex process.

Finally, securing microservice applications can be a challenging and complex task. Users
authenticate with the UI layer just like a monolithic application, but individual services need to
be protected by authentication and authorization as well. The added communication between
services in a microservice-based application creates more opportunity for security risks, and
requires more points of authentication.

Understanding the Principles of a Resilient Microservice Architecture

Domain-Driven Design and Bounded Contexts
Microservices are designed based on Domain-Driven Design, an approach to software
development that requires a close understanding of the business domain. Domains can have
multiple bounded contexts, each of which encapsulates the details of a single business domain
and defines integration points with other bounded contexts. Using the example of an e-commerce
application: order, delivery, and billing are all examples of different bounded contexts. Each
bounded context maintains a data model derived from the bounded domain model. Each
bounded context is translated into one or more microservices.

Advantages of the Bounded Domain Model

• Changes in the domain model only affect a limited number of services.

• Services are autonomous.

Disadvantages of the Bounded Domain Model

• Excellent domain knowledge is required to define a bounded context.

• Complexity increases to keep the system consistent between contexts.

Deploy Independently on a Lightweight Runtime


Deploying services independently is one of the most important principles in microservices
architecture. Microservices are designed to be stateless, and state is maintained outside the
application in databases or data grids. This is done so that a microservice can scale easily and
deploy independently of other services.

Microservices are good candidates for lightweight, embedded runtimes over full-fledged
application servers. Some of the desired runtimes for microservices are Eclipse Vert.x,
Wildfly Swarm, and Spring Boot. Additionally, JBoss Enterprise Application Platform (EAP) is
considered lightweight as most of EAP's application server components are started on-demand.

Microservices are packaged and delivered as containers that include the runtime and its
dependencies. To support continuous integration and delivery of the microservices, fully-
automated build and deployment pipelines are required. Microservice deployments often use Blue-Green
deployment for quick rollback to the last working version. Blue-Green deployment is a technique
used to reduce downtime by running two identical production environments called Blue and
Green. Only one environment is active at a time.


Design for Failure


Distributed applications can fail due to application, hardware, or network failures. Applications
need to be designed to withstand such failure. Various design patterns can be incorporated to
make microservices applications failure-tolerant.

• High availability is provided for individual microservices for failure management.

• Use the circuit breaker design pattern to prevent a service or network failure from cascading to
other services. The fault tolerance chapter of this course covers this pattern in more detail.

• Use the bulkhead design pattern to prevent overload of an individual service and to isolate
failures from rippling throughout the system by limiting concurrent access to dependent
services and minimizing the impact of a dependent service that is down. The fault tolerance
chapter of this course covers this pattern in more detail.

Applications require extensive testing to assess effects of a variety of different types of system
failure on the end user experience. Real-time monitoring is required to detect these failures
quickly and alert developers. Additionally, use a self-healing infrastructure like Kubernetes and
OpenShift Container Platform to leverage readiness and liveness checks which monitor the state
of running services, and allow the deployment platform to act on failures automatically. These
approaches are discussed in detail later in the course.
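
The readiness and liveness checks mentioned above are implemented later in this course with the
MicroProfile Health specification. The following is a minimal sketch, assuming a hypothetical
inventory-service check name; it is meant only to illustrate the shape of such a check.

import javax.enterprise.context.ApplicationScoped;

import org.eclipse.microprofile.health.Health;
import org.eclipse.microprofile.health.HealthCheck;
import org.eclipse.microprofile.health.HealthCheckResponse;

@Health
@ApplicationScoped
public class ServiceHealthCheck implements HealthCheck {

    @Override
    public HealthCheckResponse call() {
        // Report UP while the service can do useful work. OpenShift probes the
        // health endpoint and can restart or withhold traffic from failing pods.
        return HealthCheckResponse.named("inventory-service") // hypothetical check name
                .up()
                .build();
    }
}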

References
What are Microservices?
http://microservices.io/


Quiz: Describing Microservices

Choose the correct answers to the following questions:

1. Which of the following statements about monolithic applications is true?

a. A monolithic application is always packaged into a .tar.gz file.


b. A monolithic application is a lightweight application that is divided into multiple
services and a client tier.
c. Monolithic applications are often complex to build and can be difficult to maintain.
d. Individual pieces of a monolithic application are typically developed by individual
small autonomous teams.

2. Which two of the following statements about microservices are true? (Choose two.)

a. Microservices are managed centrally, and typically share one common database.
b. Microservices are typically deployed on full-fledged application servers that manage
their lifecycle.
c. Microservices are self-contained, independent services.
d. Microservices within a single organization can be built using different programming
languages.

3. A company named "At Your Doorstep Inc." is considering using microservice architecture
for their new grocery delivery application. They have two small teams to manage IT
development and operations in 7 different states. Which of the following two statements
support their decision in favor of microservices? (Choose two.)

a. All order operations must be centralized in a single data center.


b. Microservices can be built for each business operation.
c. Microservices must be deployed manually to ensure there are no errors.
d. Individual services in a microservice-based application can go live quickly by using
continuous integration and delivery.

4. Which of the following statements regarding quicker development of microservices is true?

a. Microservices are developed and managed by large teams to support quicker
development.
b. Separate development and operations team is key to quicker development.
c. Individual microservices are often developed, maintained, and operated by small
teams.
d. All microservices in an application are tested collectively, which reduces the testing
time.

5. Which of the two following statements about microservices architecture are correct?
(Choose two.)

a. Microservice applications are required to be deployed on the same physical host.


b. Microservices architecture supports high availability of individual microservices.

c. Microservices cannot be used if a company is embracing DevOps.
d. Microservices are designed using a bounded context that can communicate with other
bounded contexts.


Solution

Choose the correct answers to the following questions:

1. Which of the following statements about monolithic applications is true?

a. A monolithic application is always packaged into a .tar.gz file.


b. A monolithic application is a lightweight application that is divided into multiple
services and a client tier.
c. Monolithic applications are often complex to build and can be difficult to maintain.
d. Individual pieces of a monolithic application are typically developed by individual
small autonomous teams.

2. Which two of the following statements about microservices are true? (Choose two.)

a. Microservices are managed centrally, and typically share one common database.
b. Microservices are typically deployed on full-fledged application servers that manage
their lifecycle.
c. Microservices are self-contained, independent services.
d. Microservices within a single organization can be built using different programming
languages.

3. A company named "At Your Doorstep Inc." is considering using microservice architecture
for their new grocery delivery application. They have two small teams to manage IT
development and operations in 7 different states. Which of the following two statements
support their decision in favor of microservices? (Choose two.)

a. All order operations must be centralized in a single data center.


b. Microservices can be built for each business operation.
c. Microservices must be deployed manually to ensure there are no errors.
d. Individual services in a microservice-based application can go live quickly by using
continuous integration and delivery.

4. Which of the following statements regarding quicker development of microservices is true?

a. Microservices are developed and managed by large teams to support quicker
development.
b. Separate development and operations team is key to quicker development.
c. Individual microservices are often developed, maintained, and operated by small
teams.
d. All microservices in an application are tested collectively, which reduces the testing
time.

5. Which of the two following statements about microservices architecture are correct?
(Choose two.)

a. Microservice applications are required to be deployed on the same physical host.


b. Microservices architecture supports high availability of individual microservices.
c. Microservices cannot be used if a company is embracing DevOps.


d. Microservices are designed using a bounded context that can communicate with
other bounded contexts.


Describing Microservices Architectural Patterns

Objective
After completing this section, students should be able to describe the major patterns
implemented in microservice architectures.

Reviewing Synchronous and Asynchronous Inter-process Communication
While microservices are typically deployed individually, most enterprise-level microservice
architectures require that the services interact with each other as well as other external services.
This communication is achieved using an inter-process communication (IPC) mechanism.
Depending on the application's requirements, communication between microservices can be
synchronous or asynchronous.

Synchronous Communication
Synchronous communication is based on a request and response model. In this model, the
client waits for a timely response from a service. A basic example is communicating with a REST
service over HTTP.

Figure 1.3: Synchronous communication between services

In this diagram, a passenger is using a smart phone client to buy a new train ticket. The phone
client sends a POST request to the trip management service. The trip management service sends
a GET request to the passenger management service. The passenger management service sends
a response back with the status 200 OK to the trip management, which returns a success status
201 CREATED. In this example, both clients wait for a response.
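
As an illustration of this request-and-response style, the following is a minimal sketch of a
synchronous REST call using the standard JAX-RS client API. The passenger-management host name
and path are hypothetical placeholders, not part of the course's applications.

import javax.ws.rs.client.Client;
import javax.ws.rs.client.ClientBuilder;
import javax.ws.rs.core.MediaType;
import javax.ws.rs.core.Response;

public class PassengerClient {
    public static void main(String[] args) {
        Client client = ClientBuilder.newClient();
        try {
            // The call blocks: the client waits for the HTTP status and body
            // before continuing, exactly as in the diagram above.
            Response response = client
                    .target("http://passenger-management:8080/passengers/42") // hypothetical endpoint
                    .request(MediaType.APPLICATION_JSON)
                    .get();

            System.out.println("Status: " + response.getStatus());
            System.out.println("Body: " + response.readEntity(String.class));
        } finally {
            client.close();
        }
    }
}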

Synchronous IPC - Advantages and Disadvantages


• Advantages

◦ Easy to program and test

◦ Provides a better real-time response

◦ Firewall friendly, uses standard ports

◦ No need for an intermediate broker or other integration software


• Disadvantages

◦ Supports only request-and-response style interaction

◦ Both client and service must be available for the entire duration of the exchange

◦ The client must know the URL (location) of the service or use a service discovery mechanism
to locate the service instances

Asynchronous Message-based Communication


Microservices can communicate using an asynchronous message-based communication such as
the AMQP or MQTT protocols. Microservices can use other message-based patterns like point-
to-point, publish-and-subscribe, request-and-reply, or request-and-notification. Asynchronous
communication is non-blocking, so the client is able to continue making requests without
needing to wait for a response.

Figure 1.4: Asynchronous communication between services

In the diagram, the three services, trip management, passenger management, and driver
management, receive messages from a dispatcher using a single publish-subscribe channel.
A trip management service uses another publish-subscribe channel to send messages to the
dispatcher. In this example, the dispatcher service does not reply directly to the trip management
service when a new trip is submitted. Instead, it does some internal processing and then, once
it is ready, uses a different channel to reply to the trip management service, as well as to notify
the passenger and driver management services. This asynchronous approach allows the trip
management service to continue processing user requests for more new trips without waiting on
the dispatcher's processing and subsequent response.
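
A publish-and-subscribe exchange like the one in the diagram can be sketched with the standard
JMS 2.0 API from Java EE. The JNDI names and message payload below are hypothetical; the course
does not prescribe a particular message broker.

import javax.jms.ConnectionFactory;
import javax.jms.JMSContext;
import javax.jms.Topic;
import javax.naming.InitialContext;

public class TripEventPublisher {
    public static void main(String[] args) throws Exception {
        InitialContext ctx = new InitialContext();
        // Hypothetical JNDI names; a real broker configuration would define these.
        ConnectionFactory factory = (ConnectionFactory) ctx.lookup("java:/ConnectionFactory");
        Topic tripEvents = (Topic) ctx.lookup("java:/jms/topic/tripEvents");

        try (JMSContext jms = factory.createContext()) {
            // Fire-and-forget: the publisher does not wait for the dispatcher,
            // passenger, or driver services to consume the message.
            jms.createProducer().send(tripEvents, "{\"tripId\": 42, \"status\": \"SUBMITTED\"}");
        }
    }
}

Subscribers would create consumers on the same topic and process each message at their own pace.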

Asynchronous Communication - Advantages and Disadvantages
• Advantages


◦ Decouples the client from the service: The client is unaware of service instances. No
discovery mechanism required.

◦ Message buffering: The message broker queues messages in a message buffer while the
consumer is slow or unavailable.

◦ Flexible client-service interaction: The communication between client and service is
flexible. The clients need not be available to receive the messages. Messaging supports
various styles to ensure message delivery.

• Disadvantages

◦ Additional operational complexity: There is additional configuration for message
components. The message broker component must be highly available to ensure system
reliability.

◦ Complexity of implementing request-and-response based interaction: Each request
message must contain a reply channel and a correlation identifier. The service writes
the response and the correlation identifier to the reply channel. The client identifies the
message using the correlation identifier.

Understanding Service Discovery


Services in monolithic applications invoke one another by using procedure calls or language-level
methods. Another form of service discovery is using EJB/CDI or even JNDI to look up resources
on the classpath.

In traditional distributed system deployment, services must call one another using HTTP/REST or
a remote procedure call (RPC) mechanism, and services run on known fixed locations (hosts and
ports).

A microservice-based application typically runs in a virtualized or containerized cloud
environment. Network locations are dynamically assigned to service instances and are subject
to change due to failures, auto scaling, and upgrades. These frequent changes make service
discovery challenging.


Figure 1.5: Service discovery challenges in a microservice-based application

Clients of a microservice must be able to discover these service instances with dynamically
changing network locations to make API calls. These clients need an elaborate mechanism for
successful service discovery. There are two main service discovery patterns: client-side discovery
and server-side discovery.

Reviewing the Client-Side Service Discovery Pattern


When the client-side service discovery pattern is used, the client queries for available service
instances in the service registry database. The client then uses a load-balancing algorithm to
select one of the available service instances. Once the service instance is selected, the client
makes a request. When a service starts, it registers its location with the service registry. When
the service instance terminates, its service registration is removed from the service registry. The
service registry is periodically updated by a heartbeat mechanism.

Figure 1.6: Client side service discovery

Reviewing the Server-Side Service Discovery Pattern


When the server-side service discovery pattern is used, a client makes a request to a service
through a load balancer. The load balancer queries the registry, and then routes each request to
an available service instance. As with client-side service discovery, service instances must still
register themselves with the registry, and the registry is responsible for monitoring their health
and readiness, and for removing any instances that become unavailable.


Figure 1.7: Server side service discovery

Reviewing Service Discovery in Kubernetes and OpenShift
OpenShift provides its own service discovery mechanism, which leverages dynamic DNS to
route requests properly. In OpenShift, services run in pods, which are the equivalent of a virtual
machine instance for a container. A service can be placed over a group of pods, which can
be running on the same or different physical hosts. A service is a routable object with an IP
address and a port that acts as a service endpoint for external communication. Once created,
a service is mapped to a pod or group of pods by using a selector label. Then, a unique name is
associated with each service that is resolved by DNS. At a high level, a service can now act as a
load balancer for all the pods in the group.


Figure 1.8: Service discovery in Kubernetes/OpenShift

Containers can use environment variables to inject the values of other service endpoints.
Kubernetes can create environment variables that are accessible in all pods. For example, the
Service redis-master, which exposes TCP port 6379 and has an allocated cluster IP address
10.0.0.11, produces the following environment variables:

REDIS_MASTER_SERVICE_HOST=10.0.0.11
REDIS_MASTER_SERVICE_PORT=6379
REDIS_MASTER_PORT=tcp://10.0.0.11:6379
REDIS_MASTER_PORT_6379_TCP=tcp://10.0.0.11:6379
REDIS_MASTER_PORT_6379_TCP_PROTO=tcp
REDIS_MASTER_PORT_6379_TCP_PORT=6379
REDIS_MASTER_PORT_6379_TCP_ADDR=10.0.0.11

Cluster-level DNS is built into Kubernetes. The cluster DNS points to the cluster IP. A cluster IP is
a virtual IP that is assigned to a service when the service object is created. A cluster IP is a fixed
IP, so there are no issues with DNS caching.

An internal DNS server creates a set of DNS records for every service. The naming system that
the DNS server uses is a hierarchical and logical tree called a namespace. Within the same
namespace, services are resolved using their names. Pods in other namespaces can access the
service by adding the namespace to the DNS path, as shown in the following example:

my-service.my-namespace.svc.cluster.local
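
Inside a pod, application code can consume either discovery mechanism directly. The following
minimal sketch reads the injected environment variables from the redis-master example above and
resolves the service name through the cluster DNS; it assumes it runs inside a pod in the same
namespace as the service.

import java.net.InetAddress;

public class ServiceLookup {
    public static void main(String[] args) throws Exception {
        // Option 1: read the environment variables that Kubernetes injects into the pod.
        String host = System.getenv("REDIS_MASTER_SERVICE_HOST"); // for example, 10.0.0.11
        String port = System.getenv("REDIS_MASTER_SERVICE_PORT"); // for example, 6379
        System.out.println("From environment: " + host + ":" + port);

        // Option 2: resolve the service name through the cluster DNS.
        InetAddress address = InetAddress.getByName("redis-master");
        System.out.println("From DNS: " + address.getHostAddress());
    }
}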

Kubernetes/OCP Service Types


The Kubernetes ServiceType specifies the type of a service. The default type is ClusterIP.

• ClusterIP: exposes the service on a cluster-internal IP. The service is reachable only from
within the cluster.

• NodePort: exposes the service on each Node’s IP at a static port (the NodePort). The service
is reachable through an external NodeIP:NodePort address on each node.

• LoadBalancer: exposes the service externally using a cloud provider’s load balancer.

• Route: exposes the service at a host name, so that external clients can reach it by name.


Describing the API Gateway Pattern


Another approach to solving the problem of service discovery when developing microservices
is to use the API gateway pattern. The clients of a microservices-based application face many
challenges, including:

• Microservices provide fine-grained APIs. Clients of a microservices-based application need to
interact with different services.

• Different clients need different data. For example, the desktop browser version of a product
details page is typically more elaborate than the mobile version.

• Network performance is different for different types of clients. For example, mobile and LAN
clients are usually subject to different network performance.

• The number of service instances and their locations (a hostname and a port number) changes
dynamically.

• The partitioning of contexts into services can change over time and should be hidden from
clients.

• Services might use a diverse set of protocols, some of which, such as AMQP and binary RPC
(Thrift), may not be web-friendly.

The API gateway pattern addresses all of these concerns by providing an intermediary service
that acts as a pass-through layer between back-end microservices and UI-focused clients, such as
web applications or mobile applications.

Using an API Gateway


An API gateway is a service that is the primary entry point for one or more microservices. The
gateway handles requests by proxying the request to the intended microservice. The API gateway
is responsible for request routing, composition, protocol translation, security, caching, and
analytics.


Figure 1.9: A mobile client directly communicates with multiple microservices without using a gateway


Figure 1.10: A mobile client communicates with multiple microservices through an API gateway
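
As a rough illustration of this pass-through role, a gateway endpoint can be written as an
ordinary JAX-RS resource that uses a JAX-RS client to call the back-end microservice and return
its payload. This is a simplified sketch: the product-service URL is a hypothetical placeholder,
and the gateway built later in this course adds fault tolerance and composition around such calls.

import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.PathParam;
import javax.ws.rs.Produces;
import javax.ws.rs.client.Client;
import javax.ws.rs.client.ClientBuilder;
import javax.ws.rs.core.MediaType;

@Path("/gateway/products")
public class ProductGatewayResource {

    // Hypothetical internal service name, resolved by OpenShift service discovery.
    private static final String PRODUCT_SERVICE = "http://product-service:8080/products";

    @GET
    @Path("/{id}")
    @Produces(MediaType.APPLICATION_JSON)
    public String getProduct(@PathParam("id") String id) {
        Client client = ClientBuilder.newClient();
        try {
            // Proxy the request to the back-end microservice and return its payload.
            return client.target(PRODUCT_SERVICE + "/" + id)
                    .request(MediaType.APPLICATION_JSON)
                    .get(String.class);
        } finally {
            client.close();
        }
    }
}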

Advantages of using an API Gateway


• Reduced coupling of clients to services: Insulates clients from the application structure

• Simplified service discovery: Insulates clients from the service instance location

• Client-specific APIs: Provides an optimal API for each type of client, simplifying client
development

Disadvantages of using an API Gateway


• Increased complexity: Represents another highly-available component (API gateway) that
must be developed, deployed, and managed

• Increased response time: Adds another network hop through the API gateway

• Potential development bottleneck: Requires updates whenever new microservices or APIs
need to be exposed

Understanding Fault Tolerance


Microservices-based applications can still fail like any other distributed application. The failure
can happen in an individual service or in a chain of services. If one service in the dependency
chain fails, all of the upstream clients are affected. This causes cascaded failure.

Use the circuit breaker and bulkhead patterns to provide fault tolerance to microservice-
based applications. Fault tolerance means that services can handle failures, and the end
user experience is not impacted by a single service failure. Fault tolerance is imperative in a
microservice-based application, because there are so many points of failure.


Describing the Circuit Breaker Pattern


The circuit breaker pattern is an application design pattern for avoiding cascading service failure in a microservices architecture. Cascading failures can occur for many reasons. In a microservice application with dependencies on several subsystems, when a single dependency shows increased latency under high volume, user request threads in an upstream system become saturated and the entire application can become unresponsive, causing cascading failures.

Figure 1.11: A single service failure impedes user requests

The circuit breaker pattern helps ensure a microservice can gracefully handle downstream
failures of services it depends on. A circuit breaker object wraps the function calls to the
dependent services and monitors the success of the calls. When everything is normal and the
calls are succeeding, the circuit breaker is in the closed state. When the number of failures (an
exception or timeout during the call) reaches a preconfigured threshold, the circuit breaker trips
open. When the circuit breaker is open, no calls are made to the dependent service, but a fallback
response is returned. After a configurable amount of time, the circuit breaker moves to a half-
open state. In the half-open state, the circuit breaker executes the service calls periodically to
check the health of the dependent service. If the service is healthy again, and the test calls are
successful, the circuit state switches back to closed. The circuit breaker life cycle is shown in the
following diagram:


Figure 1.12: The circuit breaker life cycle

Reviewing Circuit Breaker Implementations


The Hystrix library implements the circuit breaker and bulkhead patterns. It is a part of the Netflix OSS suite. Hystrix also includes the Hystrix dashboard, which allows developers to monitor Hystrix metrics in real time. The dashboard also provides stream aggregation to monitor a cluster of servers with Netflix Turbine.
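
As a minimal sketch of the pattern, the following hypothetical Hystrix command wraps a call to a speaker service behind a circuit breaker. The service URL, resource path, and JSON payloads are assumptions made only for illustration; they do not come from the course application code.

import com.netflix.hystrix.HystrixCommand;
import com.netflix.hystrix.HystrixCommandGroupKey;

import javax.ws.rs.client.Client;
import javax.ws.rs.client.ClientBuilder;

public class SpeakerCommand extends HystrixCommand<String> {

    private final String speakerId;

    public SpeakerCommand(String speakerId) {
        // Commands in the same group share circuit breaker metrics by default.
        super(HystrixCommandGroupKey.Factory.asKey("SpeakerService"));
        this.speakerId = speakerId;
    }

    @Override
    protected String run() {
        // An exception or timeout here counts as a failure toward the trip threshold.
        Client client = ClientBuilder.newClient();
        try {
            return client.target("http://microservice-speaker:8080")
                    .path("/speaker/retrieve/" + speakerId)
                    .request()
                    .get(String.class);
        } finally {
            client.close();
        }
    }

    @Override
    protected String getFallback() {
        // Returned when the call fails or the circuit is open.
        return "{\"nameFirst\":\"Unknown\",\"nameLast\":\"Speaker\"}";
    }
}

The command is executed with new SpeakerCommand("42").execute(). Hystrix times the call, records the result, and returns the fallback value instead of calling the service while the circuit is open.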

Other implementations of a circuit breaker pattern are available in:

• Hystrix EIP integration in Camel

• Vert.x circuit breaker component

• Hystrix-javanica integration in Spring Boot

• WildFly Swarm MicroProfile implementation (based on Hystrix)

Describing the Bulkhead Pattern


Use the bulkhead pattern to isolate dependencies from each other and to limit the number of concurrent threads attempting to access each of them. When a bulkhead is applied to a service call, that call is allocated a dedicated thread pool (or semaphore). This isolation restricts the call to those threads, so that if the calls become saturated or the dependent service performs poorly, the performance of other parts of the service is not affected.

An application makes a request to a component for a connection, and an individual bulkhead controls the connections to each component. When a request for a new connection is made, the bulkhead checks for the availability of a connection to the requested component. If a thread to make the connection is available, it allocates the connection. If a thread is not available, the request waits for a predefined interval of time. If a thread becomes available within this duration, the connection is allocated to the waiting request; otherwise the call is rejected and the fallback is invoked.
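
The following is a minimal sketch of thread pool isolation with Hystrix, assuming a hypothetical vote service; the pool sizes, key names, and placeholder return values are illustrative only. Calls wrapped in this command are confined to their own thread pool, so saturation of that pool cannot exhaust the threads used to call other dependencies.

import com.netflix.hystrix.HystrixCommand;
import com.netflix.hystrix.HystrixCommandGroupKey;
import com.netflix.hystrix.HystrixThreadPoolKey;
import com.netflix.hystrix.HystrixThreadPoolProperties;

public class VoteCommand extends HystrixCommand<String> {

    public VoteCommand() {
        super(Setter
                .withGroupKey(HystrixCommandGroupKey.Factory.asKey("VoteService"))
                // A dedicated thread pool isolates calls to the vote service (the bulkhead).
                .andThreadPoolKey(HystrixThreadPoolKey.Factory.asKey("VotePool"))
                .andThreadPoolPropertiesDefaults(HystrixThreadPoolProperties.Setter()
                        .withCoreSize(5)          // at most 5 concurrent calls to this dependency
                        .withMaxQueueSize(10)));  // waiting requests beyond this are rejected
    }

    @Override
    protected String run() {
        // The real call to the vote service would go here; a placeholder keeps the sketch runnable.
        return "[]";
    }

    @Override
    protected String getFallback() {
        // Invoked when the pool is exhausted, the queue is full, or the call fails.
        return "[]";
    }
}

Because the pool is dedicated, a slow vote service can tie up only its own five threads; commands bound to other thread pools continue unaffected.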

Figure 1.13: Dependent service failure limited by the bulkhead pattern

Distributed Tracing
In monolithic applications, tracing a single user's interaction with the system could be accomplished by isolating a single instance of an application and reproducing a problem. Microservices-based applications are more complex; observing a single microservice cannot reveal the behavior, performance, or correctness of the entire application.

Distributed tracing is a technique that provides a complete view of the application's behavior as requests pass through multiple services. Distributed tracing tools can profile running services for reporting purposes. These tools collect data in a central aggregator for storage, reporting, and visualization.


Figure 1.14: Distributed tracing - visualizing traces

Distributed tracing injects the services with code that assigns each external request a unique
external request ID or trace ID. The trace ID is passed to all services that are involved in handling
the request and the trace ID is included in all log messages. Every service adds a new span ID to
the trace. The service adds metadata, such as start and stop timestamps, and business-relevant
data, to the span. Span data are collected by or sent to a central aggregator for storage and
visualization.

OpenTracing API
The OpenTracing API is a vendor-neutral open standard for tracing. OpenTracing provides distributed tracing of applications with minimal effort. It is supported across many languages, such as Java, JavaScript, and Go. It is implemented by tracers such as Zipkin from Twitter, Jaeger from Uber, and Hawkular APM from Red Hat.
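
The following is a minimal sketch of creating a span with the OpenTracing Java API (io.opentracing), assuming a tracer implementation such as Jaeger has already been registered with GlobalTracer at application startup. The operation name, tag names, and service logic are hypothetical.

import io.opentracing.Span;
import io.opentracing.Tracer;
import io.opentracing.util.GlobalTracer;

public class ScheduleLookup {

    public String findSchedule(String sessionId) {
        Tracer tracer = GlobalTracer.get();            // tracer registered at application startup
        Span span = tracer.buildSpan("find-schedule")  // one span per unit of work in this service
                .withTag("session.id", sessionId)
                .start();
        try {
            // Call the downstream service here, propagating the span context in the request headers.
            return "{}";
        } finally {
            span.finish();                             // records the stop timestamp and reports the span
        }
    }
}

The finished span, with its trace ID, tags, and timestamps, is reported to the central aggregator for storage and visualization.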

Using Aggregated Logging


Most microservices-based applications consist of many independent microservices. These
services are responsible for carrying out unique business tasks. Additionally, each service
instance can run on multiple machines, or in separate containers. Each of these running service
instances has its own logs. When services are running in containers, logs are written to stdout
and stderr, and both the containers and the logs are ephemeral.

Managing and monitoring all of these logs efficiently is quite a challenge. Use a log aggregation mechanism to put all the logs into central storage, and use a tool that can parse the log data appropriately. To provide the most value, services should write logs in a standardized and structured format. Application loggers should add context to each log message, such as the date and time, class name, or thread number. Logs should be indexable, parseable, filterable, and searchable. Log encoders can be used to produce JSON log messages.
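
One common way to attach request-scoped context to log messages, assuming SLF4J is the logging facade, is the Mapped Diagnostic Context (MDC); a JSON encoder such as the Logstash Logback encoder can then emit MDC entries as structured fields. The field names and service below are hypothetical.

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.slf4j.MDC;

public class VoteLogger {

    private static final Logger LOG = LoggerFactory.getLogger(VoteLogger.class);

    public void recordVote(String traceId, String sessionId) {
        // Context placed in the MDC is attached to every log line written on this thread.
        MDC.put("traceId", traceId);
        MDC.put("sessionId", sessionId);
        try {
            LOG.info("Recording vote");   // a JSON encoder can emit traceId/sessionId as fields
        } finally {
            MDC.clear();                  // avoid leaking context into the next request on this thread
        }
    }
}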

The OpenShift platform uses a stack known as EFK (Elasticsearch, fluentd, and Kibana) for log aggregation. Logs are collected using fluentd, which is a log collector daemon that monitors container logs for all the running pods on the node. Elasticsearch is used for storing, indexing, and querying the logs. Kibana is a web UI for log visualization.


Maintaining Security in Microservices


Maintaining identity and access management through a series of independent services can be
a real challenge in microservice-based applications. Requiring every service call to include an
authentication step is not ideal. Fortunately, there are a number of possible solutions, including:

• Single Sign-On: A common approach for authentication and authorization that permits the
client to use a single set of login credentials to access multiple services.

• Distributed sessions: A method of sharing session data, and therefore identity, between microservices across the entire system.

• Client-side token: The client requests a token and uses this token to access a microservice. The token is signed by an authentication service. A microservice validates the token without calling the authentication service. JSON Web Token (JWT) is an example of token-based authentication; a minimal validation sketch appears after this list.

• Client-side token with API Gateway: API gateways cache client-side tokens. The validation
of tokens is handled by the API gateway.
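
As a minimal sketch of client-side token validation, the following example verifies a JWT signature locally, assuming the jjwt library is on the classpath and the authentication service's public key has been distributed to the microservice. No call back to the authentication service is required.

import java.security.PublicKey;

import io.jsonwebtoken.Claims;
import io.jsonwebtoken.Jws;
import io.jsonwebtoken.JwtException;
import io.jsonwebtoken.Jwts;

public class TokenValidator {

    private final PublicKey authServicePublicKey;

    public TokenValidator(PublicKey authServicePublicKey) {
        this.authServicePublicKey = authServicePublicKey;
    }

    public String validateAndGetUser(String token) {
        try {
            // The signature is verified locally against the authentication service's public key.
            Jws<Claims> jws = Jwts.parser()
                    .setSigningKey(authServicePublicKey)
                    .parseClaimsJws(token);
            return jws.getBody().getSubject();
        } catch (JwtException e) {
            throw new SecurityException("Invalid or expired token", e);
        }
    }
}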

References
Microservices Patterns by Chris Richardson, Manning Publications
https://www.manning.com/books/microservices-patterns


Quiz: Describing Microservice Architecture Patterns

Choose the correct answers to the following questions:

1. Which three of the following statements about asynchronous message-based communication in microservices are true? (Choose three.)

a. Microservices can use the AMQP or MQTT protocols for asynchronous message-based
communication.
b. An advantage of asynchronous communication is that messages can be buffered
using message queues.
c. In asynchronous communication, clients need to be available to receive messages to
avoid losing the messages.
d. Using an asynchronous communication mechanism such as AMQP requires the use of
a message broker or other similar middleware integration technology.

2. Which of the following two statements about service discovery are true? (Choose two.)

a. When a client-side discovery pattern is used, the client sends a request to the service
through a load balancer.
b. Kubernetes uses environment variables to expose service endpoint addresses to
running pods.
c. When a server-side discovery pattern is used, a client directly queries the service
registry to obtain the location of a service instance.
d. In OpenShift container platform (OCP), services are resolved using their names within
their namespace through DNS.

3. A company named "At Your Doorstep Inc." is considering using Microservice architecture for
their new grocery delivery application. Customers can access this application from a desktop
or a smartphone. Both of these clients need to access many different microservices, such as
the order service, the catalog service, the shopping cart service, the delivery service, and the
payment service. As an architect, you propose using an API gateway pattern. Which of the
following two statements support your proposal in favor of an API gateway pattern? (Choose
two.)

a. An API gateway allows clients to directly communicate with their dependent services.
b. An API gateway can provide different answers for desktop and smartphone clients.
c. An API gateway centralizes the responsibility of request routing and protocol
translation.
d. An API gateway reduces the number of requests and roundtrips required by an
application.

4. Which of the following two statements regarding the fault tolerance patterns are true?
(Choose two.)

a. The circuit breaker pattern has three distinct states: open, closed, and closing.

b. In the half-open state, a circuit breaker periodically checks the health of the
dependent service.
c. The bulkhead pattern allows you to allocate a dedicated thread pool for each
dependent service call, preventing a failure in one from impacting communication
with another.
d. The bulkhead pattern is used as a layer of abstraction between a client and services.

5. Which of the following two statements about distributed tracing and aggregated logging are
correct? (Choose two.)

a. OpenTracing API is popularly used with monolithic applications as well as microservices-based applications.
b. Distributed tracing tools collect the trace data for storage and visualization.
c. Logs should not contain contextual information added by the logger.
d. OpenShift uses a log collector daemon called fluentd to collect container logs.


Solution
Choose the correct answers to the following questions:

1. Which three of the following statements about asynchronous message-based communication in microservices are true? (Choose three.)

a. Microservices can use the AMQP or MQTT protocols for asynchronous message-
based communication.
b. An advantage of asynchronous communication is that messages can be buffered
using message queues.
c. In asynchronous communication, clients need to be available to receive messages to
avoid losing the messages.
d. Using an asynchronous communication mechanism such as AMQP requires the use
of a message broker or other similar middleware integration technology.

2. Which of the following two statements about service discovery are true? (Choose two.)

a. When a client-side discovery pattern is used, the client sends a request to the service
through a load balancer.
b. Kubernetes uses environment variables to expose service endpoint addresses to
running pods.
c. When a server-side discovery pattern is used, a client directly queries the service
registry to obtain the location of a service instance.
d. In OpenShift container platform (OCP), services are resolved using their names
within their namespace through DNS.

3. A company named "At Your Doorstep Inc." is considering using Microservice architecture for
their new grocery delivery application. Customers can access this application from a desktop
or a smartphone. Both of these clients need to access many different microservices, such as
the order service, the catalog service, the shopping cart service, the delivery service, and the
payment service. As an architect, you propose using an API gateway pattern. Which of the
following two statements support your proposal in favor of an API gateway pattern? (Choose
two.)

a. An API gateway allows clients to directly communicate with their dependent services.
b. An API gateway can provide different answers for desktop and smartphone clients.
c. An API gateway centralizes the responsibility of request routing and protocol
translation.
d. An API gateway reduces the number of requests and roundtrips required by an
application.

4. Which of the following two statements regarding the fault tolerance patterns are true?
(Choose two.)

a. The circuit breaker pattern has three distinct states: open, closed, and closing.
b. In the half-open state, a circuit breaker periodically checks the health of the
dependent service.
c. The bulkhead pattern allows you to allocate a dedicated thread pool for each
dependent service call, preventing a failure in one from impacting communication
with another.


d. The bulkhead pattern is used as a layer of abstraction between a client and services.

5. Which of the following two statements about distributed tracing and aggregated logging are
correct? (Choose two.)

a. OpenTracing API is popularly used with monolithic applications as well as microservices-based applications.
b. Distributed tracing tools collect the trace data for storage and visualization.
c. Logs should not contain contextual information added by the logger.
d. OpenShift uses a log collector daemon called fluentd to collect container logs.


Summary
In this chapter, you learned:

• Microservices are small, self-contained, loosely coupled, and independently deployable services. They are decentralized and can be developed in different programming languages, run in their own process, and communicate using a lightweight mechanism.

• Microservices are modeled around business capabilities or domains. These domains can be further separated into subdomains, each forming a bounded context.

• Microservices interact using inter-process communication, synchronously or asynchronously.

• The API gateway pattern provides a single entry point for all clients, and simplifies service
discovery.

• The circuit breaker and bulkhead patterns provide fault tolerance in microservices that call
dependent services.

• Log aggregation stores logs from all microservices in a central location. OpenShift uses the
EFK stack for log aggregation.

• Token-based authentication technologies, such as single sign-on, distributed session, client-side token, and client-side token with API gateway, help secure microservices.

CHAPTER 2

DEPLOYING MICROSERVICE-BASED APPLICATIONS

Overview

Goal: Deploy portions of the course case study applications to an OpenShift cluster.

Objectives:
• Deploy a microservice from the MicroProfile Conference application to an OpenShift cluster.
• Deploy a microservice to OpenShift using the fabric8 Maven plug-in.

Sections:
• Deploying a Microservice from the MicroProfile Conference Application (and Guided Exercise)
• Deploying a Microservice with the fabric8 Maven Plug-in (and Guided Exercise)

Lab: Deploying Microservice-based Applications


Deploying a Microservice from the MicroProfile Conference Application

Objectives
After completing this section, students should be able to deploy a microservice from the
MicroProfile conference application to an OpenShift cluster.

Introducing the MicroProfile Conference Application


The MicroProfile conference application is a conference management application. This
application was originally built as a cross-platform demonstration of MicroProfile, and the
services are entirely built with MicroProfile version 1.3. This course uses this application as one of
its primary working examples of a microservice architecture built using MicroProfile.

Reviewing the Architecture of the MicroProfile Conference Application


The MicroProfile conference application contains six microservices and a front-end web UI. The
web UI uses the aggregated endpoints provided by the API gateway to access the microservices
architecture.

Figure 2.1: Components of the conference application

The conference application uses the following back-end services:

AuthZ
This service generates JSON web tokens (JWTs) so that web client users can authenticate.

Speaker
This service stores information about speakers presenting at the conference. It provides
endpoints to create, update, retrieve, or delete information about the conference speakers.


Session
This service stores the conference session list. It provides endpoints to create, update,
retrieve, or delete conference session information.

Schedule
This service manages the schedules for the conference sessions. It provides endpoints to
create, update, retrieve, or delete session times for conference sessions.

Vote
This service uses a CouchDB database to store and manage the votes that conference
attendees cast for various conference sessions. It provides endpoints to create, update,
retrieve, or delete session vote data.

Exploring the Web Application User Interface


The web application provides a unified view of all the microservices. Each microservice is called
directly by the web application and provides data to display the required information for a given
tab.

Speakers Tab
This tab displays the speakers of the conference. It is the only tab available if you are not
authenticated. This tab displays all the registered speakers in the left panel and a biography
of the selected speaker in the right panel. The Speakers tab retrieves its information from the
speaker microservice, which loads data from a JSON file.

Figure 2.2: The MicroProfile conference application landing page

Login Tab
This tab allows the user to log in to the application. The web UI contacts the authz microservice
which provides a JWT authentication token. The user can log out at any time. By default log in
sessions are invalidated after five minutes.


Figure 2.3: The web UI Login tab

Sessions Tab
This tab lists the conference sessions. It shows all of the available sessions in the left panel and
the selected session details in the right panel. This tab also fetches and displays information from
the following microservices:
• speaker microservice: Provides the session's speaker name.

• schedule microservice: Provides the times for each session.

• vote microservice: Provides functionality for users to vote for a session.

Figure 2.4: The web UI Sessions tab

Schedules Tab
This tab displays the conference session schedules. The default view displays all the events
scheduled for a month. This information is retrieved from the schedule microservice.


Figure 2.5: The web UI Schedules tab

Votes Tab
This tab aggregates the votes for each session into a pie chart. This information is retrieved from
the vote microservice.

Figure 2.6: The web UI Votes tab


Reviewing the Speaker Service Endpoints


The speaker microservice provides a REST API which implements functionality to add, update, delete, search, and view speakers at the conference. The speaker microservice is implemented using the JAX-RS API.

The application path (base URL) for the speaker microservice is /speaker, which is specified in
the Application class.

@ApplicationPath("/speaker")
public class Application extends javax.ws.rs.core.Application {
}

The ResourceSpeaker class implements the REST endpoints using JAX-RS.

Speaker Service Endpoints

Endpoint                  HTTP Method   Description
/speaker                  GET           Lists all speakers
/speaker/add              POST          Adds a speaker
/speaker/remove/{id}      DELETE        Removes a speaker by id
/speaker/update           PUT           Updates an existing speaker
/speaker/retrieve/{id}    GET           Retrieves a speaker with a given id
/speaker/search           PUT           Searches for a speaker
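
The following is a simplified, hypothetical sketch of how two of these endpoints could be implemented with JAX-RS. It is not the actual ResourceSpeaker code, and the returned JSON is only a placeholder.

import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.PathParam;
import javax.ws.rs.Produces;
import javax.ws.rs.core.MediaType;
import javax.ws.rs.core.Response;

@Path("/")  // combined with @ApplicationPath("/speaker"), these methods are served under /speaker
public class SpeakerResourceSketch {

    @GET
    @Produces(MediaType.APPLICATION_JSON)
    public Response listAll() {
        // The real service returns the full speaker list loaded by demo-bootstrap.
        return Response.ok("[]").build();
    }

    @GET
    @Path("/retrieve/{id}")
    @Produces(MediaType.APPLICATION_JSON)
    public Response retrieve(@PathParam("id") String id) {
        // The real service looks up the speaker by id; this placeholder just echoes it.
        return Response.ok("{\"id\":\"" + id + "\"}").build();
    }
}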

The speaker microservice includes the demo-bootstrap project as a dependency in its pom.xml
file. The demo-bootstrap project loads speaker data from a JSON file and provides that data to
the speaker microservice.

<dependency>
<groupId>io.microprofile.showcase</groupId>
<artifactId>demo-bootstrap</artifactId>
<version>${project.version}</version>
</dependency>

Containers and Applications


Containers are the preferred mechanism for running microservices. Containers are a kind of operating system (OS) virtualization. Each container is an isolated partition inside a single host operating system. Containers provide many of the same benefits as virtual machines, such as security, storage, and network isolation, but require far fewer hardware resources and are quicker to launch and terminate. Containers isolate the libraries and the runtime resources (such as CPU and storage) used by an application.


Figure 2.7: Containers sharing hardware and a host operating system

Containers improve the efficiency, elasticity, and portability of the platform as well as the
reusability of the hosted applications. Docker is one of the container implementations available
for deployment.

OpenShift and Kubernetes


Red Hat OpenShift Container Platform (OCP) is a container management platform that you can
use to assist you with deploying microservices in containers. OCP adds PaaS (Platform-as-a-
Service) capabilities such as remote management, multitenancy, increased security, application
life-cycle management, and self-service interfaces for developers.


Figure 2.8: OCP architecture

Within OCP, Kubernetes manages containerized applications across a set of hosts. Kubernetes manages a cluster of hosts (physical or virtual) that run containers. It works with resources that describe multicontainer applications composed of multiple resources, and how they interconnect. It provides mechanisms for deployment, maintenance, and application scaling.

Docker provides the basic container management API and the container image file format.
The Docker service packages, instantiates, and runs containerized applications. Kubernetes
manages all of the Docker containers that you are running in your cluster, and provides a wealth
of additional functionality to ease the administration and maintenance of those containers.

Containerized services fulfill many PaaS infrastructure functions, such as networking and
authorization. OCP uses the basic container infrastructure from Docker and Kubernetes for
most internal functions. That is, most OCP internal services run as containers orchestrated by
Kubernetes.

OCP provides web UI and command-line interface (CLI) management tools for managing user applications and OCP services. These tools can be used by external tools such as integrated development environments (IDEs) and continuous integration (CI) platforms.


Figure 2.9: The high-level OpenShift/Kubernetes workflow

A Kubernetes cluster is a set of node servers that run containers and are centrally managed by
a set of master servers. A single server can act as both a master and a node, but those roles are
usually segregated for increased stability.

The Master host manages the data store (Etcd) and communications in the Kubernetes cluster.

Node hosts are where the containers actually run, in pods that are in the Kubernetes cluster.

Pods
Pods provide the rough equivalent of a machine instance (physical or virtual) to a container. Each pod is allocated its own internal IP address and therefore owns its entire port space, and containers within a pod can share local storage and networking. Each pod can be treated as a physical host or virtual machine in terms of port allocation, networking, DNS, load balancing, application configuration, and migration. Pods can communicate with external networks using the address of the host they reside on. As long as the host can resolve the server that the pod wants to communicate with, the pod can communicate with the target server using network address translation.

Services
Services provide a single IP and port combination to access a pool of pods running the same container image. A service acts as a load balancer in front of one or more pods. The service provides a stable IP address, and it allows communication with pods without having to keep track of individual pod IP addresses. By default, services connect clients to pods in a round-robin fashion.
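
For example, a client pod can reach a service simply by using the service name as the host name. The following sketch assumes a hypothetical microservice-schedule service listening on port 8080 with a /schedule resource; the path is illustrative only.

import javax.ws.rs.client.Client;
import javax.ws.rs.client.ClientBuilder;
import javax.ws.rs.core.MediaType;

public class ScheduleClient {

    public String fetchSchedules() {
        Client client = ClientBuilder.newClient();
        try {
            // "microservice-schedule" is the Kubernetes Service name; cluster DNS and the
            // service's stable IP route the call to one of the backing pods (round-robin by default).
            return client.target("http://microservice-schedule:8080")
                    .path("/schedule")
                    .request(MediaType.APPLICATION_JSON)
                    .get(String.class);
        } finally {
            client.close();
        }
    }
}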

Routes
Routes are the key to exposing containerized applications to external users using a human-
readable address. Routes act as an entry point to expose the service externally to the OCP
cluster.


OCP Web Console


The OpenShift web console allows users to execute many of the same tasks as the OpenShift CLI.
The web console manages applications within projects and manipulates and examines application
resources.

The web console is accessible by accessing the master node from a web browser. In this
classroom, the URL is https://master.lab.example.com.

Note
For this classroom environment, the username is developer and the password is
redhat to log in to the OpenShift cluster.

Figure 2.10: The OCP web console

Related applications and resources are managed within a project. To create a new project using the web console, click Create Project at the upper-right corner of the web console.

Figure 2.11: Creating a new project

Within a project, OpenShift Container Platform allows users to deploy applications based on the application source code. By selecting the preferred runtime and providing a link to the application's Git repository, OpenShift provides the necessary libraries and deploys the application to the cluster.


Figure 2.12: Specify the repository information

When creating a new application or attempting to debug a failed build, view the logs by clicking
build. When the build is complete, click Applications > Deployments to view the deployed
application.

The overview page for the project shows useful project details, including the image name, ports,
services, pod status, and routes. After an application is deployed, use the route URL to access the
application.


Guided Exercise: Deploying a Microservice

Outcomes
You should be able to use the OpenShift web console to deploy the hello microservice directly
from the Git repository.

Before you begin


Run lab hello-webconsole setup on the workstation machine to prepare for the
exercise.

[student@workstation ~]$ lab hello-webconsole setup

1. Create a new project using S2I with the OpenShift web console:

1.1. Open a web browser on the workstation VM. Click Applications > Favorites > Firefox
Web Browser.

1.2. Navigate to https://master.lab.example.com to access the OpenShift web console.

Note
If you see a message indicating that your connection is not secure, click
Advanced and then click Add Exception in the dialog box. In the Add Security
Exception dialog box, click Confirm Security Exception.

Log in to the web console using developer as the user name and redhat as the
password.

Figure 2.13: The OpenShift web console

1.3. Create a new project. Click Create Project at the upper-right corner of the web console,
and enter the following information:

• Name: hello-workshop

• Display Name: Hello Workshop

• Description: Hello workshop is a demo project to deploy a microservice with S2I using the web console.

Click Create to create the project. A message displays indicating that the project
successfully created.

1.4. On the web console, click the Languages tab to display the supported languages. Click
Java.

Click Red Hat OpenJDK 8 and then click Next.

1.5. In the resulting window enter the following information:

Warning
Do not click Create yet.

• Add to project: Hello Workshop

• Version: latest

• Application Name: hello-workshop

• Git Repository: http://services.lab.example.com/hello-microservices


Figure 2.14: Specifying the project repository information

1.6. Click advanced options.

Enter standalone for the Context Dir.

Scroll down to the Build Configuration section, and in the Environment Variables
(Build and Runtime) name and value fields, enter MAVEN_MIRROR_URL and http://
services.lab.example.com:8081/nexus/content/groups/training-java/,
respectively.

Figure 2.15: Environment variable for a custom Maven repository

Important
The MAVEN_MIRROR_URL variable configures the Maven build to use the
internal Maven repository available in the classroom to retrieve the project
dependencies.

Leave the other options at their default values and click Create at the bottom of the
page.

2. Monitor the build and deployment status.

2.1. View the build logs.

Click the Builds tab in the left panel, and then click Builds in the sub-menu to view all
the builds currently in the OpenShift cluster.

Click the #1 link from the hello-workshop to view the output of the S2I build.

Click the Logs tab to view the raw log output from the build.

For this project Maven builds a REST endpoint running on WildFly Swarm. The resulting
WildFly Swarm JAR is then wrapped in the OpenJDK container and pushed into the
OpenShift container registry.

Figure 2.16: Full logs of hello-workshop S2I build

The following message is displayed at the end of the log after the build completes and
pushes the final container to the registry:

Pushing image docker-registry.default.svc:5000/hello-workshop/hello-workshop:latest ...
Pushed 0/6 layers, 17% complete
Pushed 1/6 layers, 24% complete
Pushed 2/6 layers, 43% complete
Pushed 3/6 layers, 67% complete
Pushed 4/6 layers, 82% complete
Pushed 5/6 layers, 100% complete
Pushed 6/6 layers, 100% complete
Push successful

The deployment, service, pods, and route details can be accessed from the OpenShift
web console on the Applications tab in the left panel after the pod is deployed.

2.2. View the project overview.

Click the Overview tab in the left panel to view a summary of all the projects currently
running on the OpenShift cluster.

Use the small arrow icon on the left to expand the hello-workshop Deployment. The
full overview of the hello-workshop deployment displays:


Figure 2.17: Overview of the hello-workshop deployment

The Overview tab displays details about the hello-workshop application, including
the container image it is built from, information about the source code, as well as the
service and route definitions associated with the deployment.

3. Test the application's REST endpoint by accessing the endpoint in a browser.

3.1. Click the route URL in the upper-right corner of the Overview page.

The message Use the endpoint at /api/hello displays.

3.2. In the browser address bar, append /api/hello to the URL and press Enter.

The endpoint returns the message hello.

4. Delete the Hello workshop project.

4.1. Return to the home page of the web console.

Click the OpenShift Container Platform logo in the top left corner of the page.

4.2. Click the menu icon next to the project name, and click Delete Project:

Figure 2.18: Deleting the project

4.3. The following dialog box displays:

Figure 2.19: Delete project confirmation dialog box

Enter hello-workshop and click Delete.

The message Project 'Hello Workshop' is marked for deletion appears in the upper-right corner. In a few seconds, the hello-workshop project is deleted and removed from the panel.

This concludes the guided exercise.


Deploying a Microservice with the fabric8 Maven Plug-in

Objectives
After completing this section, students should be able to deploy a microservice to OpenShift
using the fabric8 Maven plug-in.

Introducing the fabric8 Maven Plug-in


To deploy a microservice to an OpenShift cluster, you must first wrap the microservice
application in a container image that is properly customized to run the application. You also need
to create a Deployment object for the microservice container to deploy the container image.
Then, to expose the microservice endpoints to the other pods running in the cluster, you must
create a Service object. Finally, you must create a Route object to make the microservice
endpoints reachable from outside of the cluster.

The fabric8 Maven plug-in simplifies the container image build process, because it uses the
OpenShift S2I build process, introduced in the previous section, to produce a container image
from the application. The plug-in also generates the resource descriptor artifacts which can be
used to create the objects that OpenShift needs to deploy the microservice.

The fabric8 Maven plug-in still needs to build the actual Java application and all the
dependencies, typically using the regular Maven package goal. However, when that build is
complete, the plug-in starts an S2I build on the OpenShift cluster to generate a container image
to run the Java application. It then produces the other resource descriptors that are necessary to
deploy the microservice on OpenShift. The plug-in can also automatically trigger a deployment
of the microservice application to the OpenShift cluster using the container image and resource
descriptors that it generates.

Configuring the Plug-in in the Maven POM File


To enable the fabric8 Maven plug-in for the project, add the following configuration to the
pom.xml file:

<plugin>
<groupId>io.fabric8</groupId>
<artifactId>fabric8-maven-plugin</artifactId>
<version>${version.fabric8-maven-plugin}</version>
</plugin>

In this example, the ${version.fabric8-maven-plugin} property is populated using a Maven variable.

Configuration Options
The fabric8 Maven plug-in can be configured using one or more of the following methods: zero-
config, using the XML plug-in configuration in the project's pom.xml file, or using OpenShift
resource fragments.

Zero-config
This option is simplest and requires no additional XML configuration in the pom.xml file beyond
defining the plug-in itself.


The zero-config approach makes a lot of assumptions about your application, and may not
work for more customized needs. Without any further configuration, the fabric8 Maven plug-in
defaults to the following values:

• The base image fabric8/java-jboss-openjdk8-jdk [https://github.com/fabric8io-images/java/tree/master/images/jboss/openjdk8/jdk] is chosen as the source container image where the application runs.

• An OpenShift Build Config, Deployment Config, ImageStream, and a Service are created as resource objects.

• Port 8080 is exposed as the application service port (as well as ports 8778 and 9779 for
Jolokia and jmx_exporter access, respectively).

Note
Jolokia is remote JMX with JSON accessible over HTTP. jmx_exporter is intended
to be run as a Java Agent, exposing an HTTP server and serving metrics of the local
JVM. See the References at the end of the section for more information.

XML Plug-in Configuration


The zero-config mode offers a convenient starting point for developers, and can be marginally
customized. However, in many applications, more flexibility and control is required. To support
this, you can use an XML-based plug-in configuration in the pom.xml file directly inline with the
Maven plug-in configuration.

The fabric8 Maven plug-in XML configuration in the pom.xml file can be roughly divided into the
following sections:

An <images> section specifies the container images to build and how to build them.

<configuration>
...
<images>
<image>

<name>xml-config-demo:1.0.0</name>
<!-- "alias" is used to correlate to the containers in the pod spec -->
<alias>camel-app</alias>
<build>

<from>fabric8/java</from>

<assembly>
<basedir>/deployments</basedir>
<descriptorRef>artifact-with-dependencies</descriptorRef>
</assembly>

<env>
<JAVA_LIB_DIR>/deployments</JAVA_LIB_DIR>
<JAVA_MAIN_CLASS>org.apache.camel.cdi.Main</JAVA_MAIN_CLASS>
</env>
</build>
</image>
</images>
...
</configuration>


Specifies the name and version of the container that the plug-in builds. In this case the
name is xml-config-demo and the version is 1.0.0.
Specifies the base image to use during the docker build. In this case the plug-in uses the
fabric8/java as the base image.
Specifies how build artifacts and other files can enter the container image.
Sets one or more environment variables that are present in the container while it is built.

Note
Refer to the plug-in documentation for more information regarding the specifics of
building container images, which is not covered in detail in this class.

A <resources> section defines the resource descriptors for deploying on an OpenShift.

<configuration>
...
<resources>
<labels>
<all>
<group>quickstarts</group>
</all>
</labels>
<deployment>
<name>${project.artifactId}</name>
<replicas>1</replicas>
<containers>
<container>
<alias>camel-app</alias>
<ports>
<port>8778</port>
</ports>
</container>
</containers>
</deployment>
<services>
<service>
<name>camel-service</name>
</service>
</services>
</resources>
...
</configuration>

A <generator> section configures generators that are responsible for creating images.
Generators are used as an alternative to a dedicated <images> section. A generator is a Java
component that provides an auto-detection mechanism for certain build types, such as WildFly
Swarm, Spring Boot, or plain Java builds.

<configuration>
...
<generator>

<includes>

<include>wildfly-swarm</include>
</includes>
</generator>
...


</configuration>

Contains one or more <include> elements with generator names which should be
included. If present, only this list of generators is included and in the given order. The order
is important because by default only the first matching generator is applied.
Specifies the name of the generator to include. The supported values by default include:

• java-exec: Generic generator for flat class path and fat-jar Java applications

• spring-boot: Spring Boot specific generator

• wildfly-swarm: Generator for WildFly Swarm applications


When a generator detects that it is applicable, it is called with the list of images configured in the
pom.xml file. A generator typically only creates a new image configuration dynamically if the
list is empty. A generator can also add new images to an existing list or even change the current
image list. Each generator also supports a set of customization options that can be used to tweak
the default configurations.

An <enricher> section can configure various enrichers that are supported by the plug-in for creating or enhancing resource descriptors. Enrichers are the complementary concept to generators. Whereas generators are used to create and customize container images, enrichers are used to create and customize OpenShift resource object definitions.

<configuration>
...
<enricher>
<config>

<wildfly-swarm-health-check>

<port>4444</port>

<scheme>HTTPS</scheme>

<path>health/myapp</path>
</wildfly-swarm-health-check>
</config>
</enricher>
...
</configuration>

This enricher automatically defines the OpenShift readiness and liveness probes for a
WildFly Swarm application. This requires the monitor or health-check fraction to be
enabled in the WildFly Swarm application.
The port to use for the health check. The default value is 8080.
This is the scheme to use for the health check. The default value is http.
This is the URL path to use for the health check. The default value is /health.

OpenShift Resource Fragments


This method uses external configuration in the form of YAML resource descriptors, which are located in the project's src/main/fabric8 directory. Each resource (service, deployment, route) can be defined in its own file, which contains a skeleton of a resource description. The fabric8 Maven plug-in picks up these resource fragments, enriches them, and then combines them into two versions of a single YAML file, one specific to OCP (openshift.yml) and one for Kubernetes (kubernetes.yml).


Reviewing the Plug-in Goals


fabric8:resource
This goal creates the resource descriptors needed to deploy the application to an OpenShift
cluster. To use it, run the mvn fabric8:resource command in the same directory as the
project pom.xml file:

[user@demo project]$ mvn fabric8:resource

The fabric8:resource goal automatically creates a YAML resource descriptor for a deployment configuration, along with any service or route you define. The goal uses the descriptor fragments you provide in the src/main/fabric8 directory of your Maven project to produce enriched resource descriptors. For example, the following YAML fragment is defined in a route.yml file:

spec:
port:
targetPort: 8080
to:
kind: Service
name: ${project.artifactId}

As you can see, there is no metadata section as expected for each OpenShift resource object.
The fabric8 Maven plug-in creates this section automatically. The plug-in also extracts the
object's kind, if not specified, from the filename. In this case it is a Route because the file is
called route.yml.

The resulting route definition after enrichment is:

- apiVersion: v1
kind: Route
metadata:
labels:
app: inventory-service
provider: fabric8
version: 1.2.0-SNAPSHOT
group: com.redhat.coolstore
name: inventory-service
spec:
port:
targetPort: 8080
to:
kind: Service
name: inventory-service

The plug-in places the final configuration files in the projectname/target/classes/META-INF/fabric8/openshift directory.

fabric8:build
The fabric8:build goal builds a container image that wraps the Java application. The plug-in supports two different ways to build the image. The build mode is selected with the fabric8.mode property, which supports the following values:

• kubernetes: Builds plain Docker container images and Kubernetes resource descriptors.


• openshift: Builds the images compatible with the OpenShift deployment model using an S2I
build.

• auto (default): Checks whether an OpenShift cluster is accessible. If that is true, then the
openshift value is used.

To pass the fabric8.mode environment variable explicitly, use the following command:

[demo@demo project]$ mvn package fabric8:build \
-Dfabric8.mode=openshift

OpenShift Build
Whenever the fabric8.mode variable is set to openshift, the fabric8.build.strategy
environment variable can be defined to set up how the container must be built. The plug-in
supports the following OpenShift binary source builds:

• s2i: Uses a binary deployment model. The application is built locally with Maven, and the
resulting binary is pushed to the OpenShift cluster and then injected into the builder image.
The resulting image is pushed to the OpenShift registry.

• docker: Similar to a regular Docker container image build process except that it is done by the
OpenShift cluster. This build pushes the generated image to the OpenShift internal registry to
make it accessible to the whole cluster. This course does not cover this option.

To pass the fabric8.build.strategy environment variable explicitly, use the following


command:

[user@demo project]$ mvn package fabric8:build \
-Dfabric8.mode=openshift -Dfabric8.build.strategy=s2i

fabric8:deploy
The fabric8:deploy goal builds the container image, generates the OpenShift resources, and
deploys them to the cluster.

[demo@demo project]$ mvn fabric8:deploy

The deploy goal is designed to run after the fabric8:build and fabric8:resource goals
have been run and generated their respective outputs.

To ensure this is the case, you can bind the resource and build goals to the standard Maven
life cycle so that they are called with normal goals, such as package or install. For example,
to always include the building of the OpenShift resource files and the container images, add the
following goals to the execution section of the plug-in.

<plugin>
<groupId>io.fabric8</groupId>
<artifactId>fabric8-maven-plugin</artifactId>
<version>${version.fabric8-maven-plugin}</version>
<executions>
<execution>
<id>fmp</id>
<goals>
<goal>resource</goal>
<goal>build</goal>
</goals>
</execution>
</executions>
</plugin>

fabric8:undeploy
The fabric8:undeploy goal deletes the OpenShift resources deployed by fabric8:deploy
and fabric8:resource goals on the cluster.

[student@workstation project] $ mvn fabric8:undeploy

References
fabric8 Maven Plug-in Homepage
https://maven.fabric8.io

Jolokia Homepage
https://jolokia.org/

JMX Exporter on GitHub
https://github.com/prometheus/jmx_exporter


Guided Exercise: Deploying a Microservice with the fabric8 Maven Plug-in

In this exercise, you will configure the fabric8 Maven plug-in to deploy the microservice-schedule
microservice to an OpenShift cluster.

Outcomes
You should be able to configure the fabric8 plug-in in the microservice-schedule project and
deploy it to OpenShift.

Before you begin


Use the git clone command to clone the microprofile-conference repository to the
workstation machine.

[student@workstation ~]$ git clone \
http://services.lab.example.com/microprofile-conference
Cloning into 'microprofile-conference'...
...output omitted...
Resolving deltas: 100% (2803/2803), done.

Run the following command on the workstation machine to prepare for this exercise.

[student@workstation ~]$ lab deploy-fabric8 setup

Steps
1. Check out the lab-deploy-fabric8 Git branch to get the correct version of the
application code for this exercise.

1.1. Run the following commands to change to the correct directory and check out the
required branch:

[student@workstation ~]$ cd microprofile-conference
[student@workstation microprofile-conference]$ git checkout lab-deploy-fabric8
Switched to a new branch 'lab-deploy-fabric8'

1.2. Use the git status command to ensure that you are on the correct branch.

[student@workstation microprofile-conference]$ git status
# On branch lab-deploy-fabric8
nothing to commit, working directory clean

2. Import the microprofile-conference project into JBoss Developer Studio.

2.1. Double-click the JBoss Developer Studio icon on the workstation VM desktop. Click
Launch in the Eclipse Launcher dialog box.


Note
If the JBoss Developer Studio Usage dialog box appears, click No to dismiss
it.

2.2. In the JBoss Developer Studio menu, click File > Import to open the Import wizard.

2.3. In the Import dialog box, click Maven > Existing Maven Projects, and then click Next.

2.4. In the Import Maven Projects dialog box, click Browse. The Select Root Folder dialog
box displays.

2.5. Navigate to the /home/student directory. Select the microprofile-conference folder and click OK.

2.6. Click Finish to start the import.

2.7. Monitor the progress of the import operation using the JBoss Developer Studio status
bar (lower-right corner), until the Building workspace message disappears.

Note
It may take 5-10 minutes or sometimes longer to download all of the required
dependencies and build the workspace.

3. Configure the fabric8 Maven plug-in in the pom.xml file in the microservice-schedule
project.

3.1. Expand the microservice-schedule item in the Project Explorer pane on the left, and
then double-click the pom.xml file.

3.2. Click the pom.xml tab at the bottom of the file to view the contents of the pom.xml file.

3.3. Add the fabric8 Maven plug-in to the project by adding the following code to the
plugins section, immediately after the <!-- Add fabric8 maven plugin
configuration here --> comment:

<!-- Add fabric8 maven plugin configuration here -->


<plugin>
<groupId>io.fabric8</groupId>
<artifactId>fabric8-maven-plugin</artifactId>
<version>${version.fabric8-maven-plugin}</version>
</plugin>

The version is specified in the parent pom.xml file.

3.4. Configure the fabric8 Maven plug-in to execute with the resource and build goals by
adding the <executions> to the <plugin> section created in the previous step.

<!-- Add fabric8 maven plugin configuration here -->

<plugin>
<groupId>io.fabric8</groupId>
<artifactId>fabric8-maven-plugin</artifactId>
<version>${version.fabric8-maven-plugin}</version>
<executions>
<execution>
<id>fmp</id>
<goals>
<goal>resource</goal>
<goal>build</goal>
</goals>
</execution>
</executions>
</plugin>

3.5. Configure the plug-in to use the wildfly-swarm generator, which expects the
application to be a swarm uberJAR.

Append the <configuration> section, after the <executions> section added in the
previous step.

<!-- Add fabric8 maven plugin configuration here -->


<plugin>
<groupId>io.fabric8</groupId>
<artifactId>fabric8-maven-plugin</artifactId>
<version>${version.fabric8-maven-plugin}</version>
<executions>
<execution>
<id>fmp</id>
<goals>
<goal>resource</goal>
<goal>build</goal>
</goals>
</execution>
</executions>
<configuration>
<generator>
<includes>
<include>wildfly-swarm</include>
</includes>
</generator>
</configuration>
</plugin>

3.6. Press Ctrl+S to save the changes.

4. Review the template files used to enrich the OpenShift deployment configuration file.

4.1. In the Project Explorer tab in the left pane of JBoss Developer Studio, click
microservice-schedule > src > main > fabric8, and then double-click the
deployment.yml file to open it.

4.2. Inspect the containerPort, name, and protocol elements. They expose port 8080
of the application to external access.

- ports:
- containerPort: 8080
name: http


protocol: TCP

4.3. Review the service.yml file.

metadata:
name: ${project.artifactId}
spec:

The service name is specified with the project.artifactId Maven variable.

4.4. Review the route.yml file.

metadata:
name: ${project.artifactId}
spec:
to:
kind: Service
name: ${project.artifactId}

Review the to element. It states the kind of resource this route points to. In this
case it is a service and the name of the service is the value retrieved from the
project.artifactId Maven variable.

5. Use mvn fabric8:resource to create resource descriptors.

5.1. Log in to the OpenShift cluster from the command line. From the existing terminal
window, run the following command:

[student@workstation microprofile-conference]$ oc login \
-u developer -p redhat https://master.lab.example.com
Login successful.

You don't have any projects. You can try to create a new project, by running

oc new-project <projectname>

5.2. Create a new project in OpenShift. From the command line, run the following command:

[student@workstation microprofile-conference]$ oc new-project deploy-fabric8
Now using project "deploy-fabric8" on server "https://master.lab.example.com:443".
...output omitted...

5.3. Run Maven using the fabric8:resource goal. In the same terminal window, run the
following commands:

[student@workstation microprofile-conference]$ cd microservice-schedule
[student@workstation microservice-schedule]$ mvn fabric8:resource

Note that it generates the complete YAML files. At this stage, it does not build the
containers.

[INFO] Scanning for projects...
[INFO]
[INFO] ------------------------------------------------------------------------
[INFO] Building Conference :: Schedule 1.0.0-SNAPSHOT
[INFO] ------------------------------------------------------------------------
[INFO]
[INFO] --- fabric8-maven-plugin:3.5.34:resource (default-cli) @ microservice-
schedule ---
[INFO] F8: Running in OpenShift mode
[INFO] F8: Using docker image name of namespace: default
[INFO] F8: Running generator wildfly-swarm
[INFO] F8: wildfly-swarm: Using Docker image registry.lab.example.com:5000
/redhat-openjdk-18/openjdk18-openshift as base / builder
[INFO] F8: using resource templates from /home/student/microprofile-conference
/microservice-schedule/src/main/fabric8
[INFO] F8: fmp-revision-history: Adding revision history limit to 2
[INFO] F8: f8-icon: Adding icon for deployment
[INFO] F8: f8-icon: Adding icon for service
[INFO] F8: validating /home/student/microprofile-conference/microservice-
schedule
/target/classes/META-INF/fabric8/openshift/microservice-schedule-svc.yml
resource
[INFO] F8: validating /home/student/microprofile-conference/microservice-
schedule
/target/classes/META-INF/fabric8/openshift/microservice-schedule-
deploymentconfig.yml resource
[INFO] F8: validating /home/student/microprofile-conference/microservice-
schedule
/target/classes/META-INF/fabric8/openshift/microservice-schedule-route.yml
resource
[INFO] F8: validating /home/student/microprofile-conference/microservice-
schedule
/target/classes/META-INF/fabric8/kubernetes/microservice-schedule-svc.yml
resource
[INFO] F8: validating /home/student/microprofile-conference/microservice-
schedule
/target/classes/META-INF/fabric8/kubernetes/microservice-schedule-deployment.yml
resource
[INFO] ------------------------------------------------------------------------
[INFO] BUILD SUCCESS
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 5.279 s

Note
The following error might occur during execution and can be safely ignored:

[ERROR] Exception in reconnect
java.util.concurrent.RejectedExecutionException: Task
java.util.concurrent.ScheduledThreadPoolExecutor
$ScheduledFutureTask@3cce6de9
rejected from
java.util.concurrent.ScheduledThreadPoolExecutor@79ccf414[Shutting
down, pool size = 1, active threads = 1,
queued tasks = 0, completed tasks = 10]

5.4. Review the build output.

The build uses the template files from /home/student/microprofile-conference/microservice-schedule/src/main/fabric8. Then, it creates the resource files in the /home/student/microprofile-conference/microservice-schedule/target/classes/META-INF/fabric8/openshift directory. Inspect the resulting files:

[student@workstation microservice-schedule]$ ls \
target/classes/META-INF/fabric8/openshift
microservice-schedule-deploymentconfig.yml microservice-schedule-route.yml
microservice-schedule-svc.yml

5.5. Inspect the generated deployment configuration file at /home/student/microprofile-conference/microservice-schedule/target/classes/META-INF/fabric8/openshift/microservice-schedule-deploymentconfig.yml.

apiVersion: apps.openshift.io/v1
kind: DeploymentConfig
...
ports:
- containerPort: 8080
name: http
protocol: TCP
...

This file is a deployment configuration to deploy the microservice-schedule microservice to OpenShift. Scroll down and review the ports section. It uses the containerPort, name, and protocol specified in the deployment.yml file.

5.6. Inspect the generated service configuration file at /home/student/microprofile-conference/microservice-schedule/target/classes/META-INF/fabric8/openshift/microservice-schedule-svc.yml.

kind: Service
...
name: microservice-schedule

The kind element describes the type of resource. The name attribute uses the Maven project name, resolved from the ${project.artifactId} variable defined in the service.yml file.

5.7. Inspect the generated route configuration file at /home/student/microprofile-conference/microservice-schedule/target/classes/META-INF/fabric8/openshift/microservice-schedule-route.yml.

kind: Route
...
  group: io.microprofile.showcase
  name: microservice-schedule
spec:
  to:
    kind: Service
    name: microservice-schedule

Review the kind: Route element that describes the type of resource. In the
metadata: element, review the name: element. The name was generated by resolving
the ${project.artifactId} variable in the resource fragment file.

6. Deploy the container image to OpenShift.

6.1. Create the container image using the S2I build. Run the following command in the
existing terminal window:

[student@workstation microservice-schedule]$ mvn package fabric8:build \
-DskipTests

Review the build output.

[INFO] Scanning for projects...
[INFO]
[INFO] ------------------------------------------------------------------------
[INFO] Building Conference :: Schedule 1.0.0-SNAPSHOT
[INFO] ------------------------------------------------------------------------
[INFO]
[INFO] F8: Using OpenShift build with strategy S2I
...output omitted...
[INFO] F8: Pushed 6/6 layers, 100% complete
[INFO] F8: Push successful
[INFO] F8: Build microservice-schedule-s2i-1 Complete
...output omitted...
[INFO] ------------------------------------------------------------------------
[INFO] BUILD SUCCESS
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 01:39 min

The fabric8 Maven plug-in uses the S2I strategy for the build because there is an
OpenShift client connected to a cluster.

7. Deploy the microservice-schedule microservice with the fabric8 Maven plug-in.

7.1. Run the mvn fabric8:deploy command to deploy the application using the container
image built by the S2I build.

[student@workstation microservice-schedule]$ mvn fabric8:deploy -DskipTests


[INFO] Scanning for projects...
[INFO]
[INFO] ------------------------------------------------------------------------
[INFO] Building Conference :: Schedule 1.0.0-SNAPSHOT
[INFO] ------------------------------------------------------------------------
[INFO]
[INFO] >>> fabric8-maven-plugin:3.5.34:deploy (default-cli) > install @
microservice-schedule >>>
[INFO]
[INFO] --- maven-resources-plugin:3.0.1:resources (default-resources) @
microservice-schedule ---
[INFO] Using 'UTF-8' encoding to copy filtered resources.
[INFO] Copying 1 resource
[INFO]
[INFO] --- fabric8-maven-plugin:3.5.34:resource (fmp) @ microservice-schedule
---
[INFO] F8: Running in OpenShift mode
...output omitted...
[INFO] --- fabric8-maven-plugin:3.5.34:deploy (default-cli) @ microservice-
schedule ---
[INFO] F8: Using OpenShift at https://master.lab.example.com:443/ in namespace
default with manifest


/home/student/microprofile-conference/microservice-schedule/target/classes/META-
INF/fabric8/openshift.yml
[INFO] OpenShift platform detected
[INFO] Using project: deploy-fabric8
[INFO] Creating a Service from openshift.yml namespace default name
microservice-schedule
[INFO] Created Service:
microservice-schedule/target/fabric8/applyJson/default/service-microservice-
schedule.json
[INFO] Using project: default
[INFO] Creating a DeploymentConfig from openshift.yml namespace default name
microservice-schedule
[INFO] Created DeploymentConfig:
microservice-schedule/target/fabric8/applyJson/default/deploymentconfig-
microservice-schedule.json
[INFO] Creating Route default:microservice-schedule host: null
[INFO] F8: HINT: Use the command `oc get pods -w` to watch your pods start up
[INFO] ------------------------------------------------------------------------
[INFO] BUILD SUCCESS
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 45.673 s

Review the Maven build output. Note that the fabric8:deploy goal creates all the
configuration files and deploys the pod to OpenShift.

Note
The following error might occur during execution:

[ERROR] Exception in reconnect


java.util.concurrent.RejectedExecutionException: Task
java.util.concurrent.ScheduledThreadPoolExecutor
$ScheduledFutureTask@3cce6de9
rejected from
java.util.concurrent.ScheduledThreadPoolExecutor@79ccf414[Shutting
down, pool size = 1, active threads = 1,
queued tasks = 0, completed tasks = 10]

You can disregard this error.

8. To get the route URL, run the following command in the terminal window:

[student@workstation microservice-schedule]$ oc status

The route information is displayed on the workstation VM. It should be similar to the
following:

In project deploy-fabric8 on server https://master.lab.example.com:443


...
http://microservice-schedule-deploy-fabric8.apps.lab.example.com (svc/microservice-
schedule)

9. Test the microservice by accessing it with the URL captured in the previous step.

9.1. Test the REST endpoints of the microservice using the RESTClient Firefox plug-in.

Start Firefox on the workstation VM and click the RESTClient plug-in on the browser's
toolbar.

Figure 2.20: The Firefox RESTClient plug-in

9.2. Select GET as the Method. Enter http://microservice-schedule-deploy-fabric8.apps.lab.example.com/schedule/all as the URL. Click Send.

9.3. In the Headers tab, verify that the Status Code is 200 OK.

9.4. In the Response tab, verify that the response is similar to the following:

[{"id":"190","sessionId":"89","venue":"Rijkevorsel","venueId":"88",
"date":"2018-01-15","startTime":"00:54:17","duration":"PT1H"},
{"id":"191","sessionId":"90","venue":"Lens-Saint-Remy","venueId":"89",
"date":"2017-08-01","startTime":"12:16:38","duration":"PT1H"},
{"id":"192","sessionId":"91","venue":"Edremit","venueId":"90",
"date":"2017-11-27","startTime":"09:21:46","duration":"PT1H"},
{"id":"193","sessionId":"92","venue":"St. Catharines","venueId":"91",
"date":"2018-04-13","startTime":"17:30:53","duration":"PT1H"},
{"id":"194","sessionId":"93","venue":"Malbaie","venueId":"92",
"date":"2016-12-24","startTime":"13:16:26","duration":"PT1H"},
{"id":"195","sessionId":"94","venue":"Fossato di Vico","venueId":"93",
"date":"2016-07-23","startTime":"21:55:37","duration":"PT1H"},
{"id":"196","sessionId":"95","venue":"Heikruis","venueId":"94",
"date":"2017-07-10","startTime":"15:02:31","duration":"PT1H"},
{"id":"197","sessionId":"96","venue":"Antofagasta","venueId":"95",
"date":"2017-05-10","startTime":"22:47:48","duration":"PT1H"},
{"id":"110","sessionId":"9","venue":"East Kilbride","venueId":"9",
"date":"2018-03-15","startTime":"19:21:01","duration":"PT1H"},
...output omitted...

10. Undeploy the microservice-schedule microservice using the fabric8 Maven plug-in.

[student@workstation microservice-schedule]$ mvn fabric8:undeploy

Log messages from the operation are displayed.

[INFO] Scanning for projects...
[INFO]
[INFO] ------------------------------------------------------------------------
[INFO] Building Conference :: Schedule 1.0.0-SNAPSHOT
[INFO] ------------------------------------------------------------------------
[INFO]
[INFO] --- fabric8-maven-plugin:3.5.34:undeploy (default-cli) @ microservice-
schedule ---
[INFO] F8: Using OpenShift at https://master.lab.example.com:443/ in namespace
default with manifest /home/student/microprofile-conference/microservice-schedule/
target/classes/META-INF/fabric8/openshift.yml
[INFO] OpenShift platform detected
[INFO] Using project: default
[INFO] F8: Deleting resource Route default/microservice-schedule


[INFO] F8: Deleting resource DeploymentConfig default/microservice-schedule


[INFO] F8: Deleting resource Service default/microservice-schedule
[INFO] ------------------------------------------------------------------------
[INFO] BUILD SUCCESS
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 4.075 s

11. Clean up, commit your changes to your local Git repository in the lab branch, and return to
the master branch.

11.1. Delete the deploy-fabric8 project.

[student@workstation microservice-schedule]$ oc delete project deploy-fabric8


project "deploy-fabric8" deleted

11.2. In the terminal window where the microservice-schedule was built, use the git add
command to stage the uncommitted changes.

[student@workstation microservice-schedule]$ git add .

11.3. Use the git commit command to commit your changes to the local branch.

[student@workstation microservice-schedule]$ git commit \
-m"completing lab deploy application using fabric8."
...output omitted...

11.4. Check out the master branch to finish cleaning up.

[student@workstation microservice-schedule]$ git checkout master


Switched to branch 'master'

This concludes the guided exercise.


Lab: Deploying a Microservice-based Application

In this lab, you will configure the microservice-speaker application to use the fabric8 Maven
plug-in to deploy it to OpenShift, and test the service using the RESTClient Firefox plug-in.

Outcomes
You should be able to deploy the microservice-speaker application running as a container to an
OpenShift cluster using the fabric8 Maven plug-in and test it using the RESTClient plug-in.

Before you begin


If you have not already, use the git clone command to download the microprofile-conference
repository to the workstation machine.

[student@workstation ~]$ git clone \
http://services.lab.example.com/microprofile-conference
Cloning into 'microprofile-conference'...
...output omitted...
Resolving deltas: 100% (2803/2803), done.

Run the following command on the workstation machine to prepare for the exercise.

[student@workstation ~]$ lab deploy-speaker setup

Steps
1. Check out the lab-deploy-speaker branch of the Git repository to get the correct
version of the application code for this exercise.

2. Complete the configuration of the fabric8 Maven plug-in in the pom.xml file from the
microservice-speaker project. Be sure to fix all the TODO comments.

3. Review the service.yml file and update it to specify the service exposes port 8080. Name
the exposed port http.

4. If you are not already authenticated, log in to the OpenShift cluster using developer as the user
name and redhat as the password. Create a new project called deploy-speaker.

5. Package the microservice-speaker application with Maven and then use the fabric8 Maven
plug-in to build the container image that is deployed to OpenShift.

This starts a new S2I build in the OCP cluster. You can skip the tests for a faster build time
using the -DskipTests option.

6. Deploy the microservice-speaker microservice with the fabric8 Maven plug-in. You can skip
the tests for a faster build time using the -DskipTests option.


Note
The following error might occur during execution:

[ERROR] Exception in reconnect


java.util.concurrent.RejectedExecutionException:
Task java.util.concurrent.ScheduledThreadPoolExecutor
$ScheduledFutureTask@3cce6de9
rejected from
java.util.concurrent.ScheduledThreadPoolExecutor@79ccf414[Shutting down,
pool size = 1, active threads = 1,
queued tasks = 0, completed tasks = 10]

You can disregard this error.

7. To get the route URL, run the following command in the terminal window:

[student@workstation microservice-speaker]$ oc status

The route information is displayed on the workstation VM. It should be similar to the
following:

In project deploy-speaker on server https://master.lab.example.com:443


...
http://microservice-speaker-deploy-speaker.apps.lab.example.com (svc/microservice-
speaker)

8. Test the microservice, accessing it with the URL captured in the previous step.

8.1. Test the REST endpoints of the microservice using the RESTClient Firefox plug-in.

Start Firefox on the workstation VM and click the RESTClient plug-in icon on the
browser's toolbar.

The Firefox RESTClient plug-in

8.2. Select GET as the Method. Enter http://microservice-speaker-deploy-speaker.apps.lab.example.com/speaker/ as the URL and click Send.

8.3. In the Headers tab, verify that the Status Code is 200 OK.

8.4. Verify in the Response tab that the response is similar to the following:

[{"id":"25","title":"Mr.","nameFirst":"Abbot","nameLast":"Blanchard",
"organization":"n/a","biography":"Lorem ipsum dolor sit amet, consectetur
adipiscing elit. Nullam commodo eget nisl eu fermentum. Phasellus tellus
elit, eleifend vel bibendum quis, hendrerit sit amet enim. Donec nulla tortor,

...output omitted...

9. Grade the lab.

[student@workstation ~]$ lab deploy-speaker grade

10. Undeploy the microservice-speaker microservice using the fabric8 Maven plug-in.

[student@workstation microservice-speaker]$ mvn fabric8:undeploy

Log messages from the operation are displayed.

[INFO] Scanning for projects...

[INFO]
[INFO] ------------------------------------------------------------------------
[INFO] Building Conference :: Speaker 1.0.0-SNAPSHOT
[INFO] ------------------------------------------------------------------------
[INFO]
[INFO] --- fabric8-maven-plugin:3.5.34:undeploy (default-cli) @ microservice-speaker
---
[INFO] F8: Using OpenShift at https://master.lab.example.com:443/ in namespace
default with manifest /home/student/microprofile-conference/microservice-speaker/
target/classes/META-INF/fabric8/openshift.yml
[INFO] OpenShift platform detected
[INFO] Using project: default
[INFO] F8: Deleting resource Route default/microservice-speaker
[INFO] F8: Deleting resource DeploymentConfig default/microservice-speaker
[INFO] F8: Deleting resource Service default/microservice-speaker
[INFO] ------------------------------------------------------------------------
[INFO] BUILD SUCCESS
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 4.075 s

11. Clean up, commit your changes to your local Git repository in the lab branch, and return to
the master branch.

11.1. Delete the deploy-speaker project.

[student@workstation microservice-speaker]$ oc delete project deploy-speaker


project "deploy-speaker" deleted

11.2. In the terminal window where the microservice-speaker was built, use the git add
command to stage the uncommitted changes.

[student@workstation microservice-speaker]$ git add .

11.3. Use the git commit command to commit your changes to the local branch.

[student@workstation microservice-speaker]$ git commit \
-m"completing lab deploy application using fabric8."
...output omitted...


11.4. Check out the master branch to finish cleaning up.

[student@workstation microservice-speaker]$ git checkout master


Switched to branch 'master'

This concludes the lab.


Solution
In this lab, you will configure the microservice-speaker application to use the fabric8 Maven
plug-in to deploy it to OpenShift, and test the service using the RESTClient Firefox plug-in.

Outcomes
You should be able to deploy the microservice-speaker application running as a container to an
OpenShift cluster using the fabric8 Maven plug-in and test it using the RESTClient plug-in.

Before you begin


If you have not already, use the git clone command to download the microprofile-conference
repository to the workstation machine.

[student@workstation ~]$ git clone \
http://services.lab.example.com/microprofile-conference
Cloning into 'microprofile-conference'...
...output omitted...
Resolving deltas: 100% (2803/2803), done.

Run the following command on the workstation machine to prepare for the exercise.

[student@workstation ~]$ lab deploy-speaker setup

Steps
1. Check out the lab-deploy-speaker branch of the Git repository to get the correct
version of the application code for this exercise.

1.1. Run the following commands to change to the correct directory and check out the lab-
deploy-speaker branch.

[student@workstation ~]$ cd microprofile-conference


[student@workstation microprofile-conference]$ git checkout lab-deploy-speaker
Switched to a new branch 'lab-deploy-speaker'

1.2. Use the git status command to ensure that you are on the correct branch.

[student@workstation microprofile-conference]$ git status


# On branch lab-deploy-speaker
nothing to commit, working directory clean

2. Complete the configuration of the fabric8 Maven plug-in in the pom.xml file from the
microservice-speaker project. Be sure to fix all the TODO comments.

2.1. Expand the microservice-speaker item in the Project Explorer pane on the left, and
then double-click the pom.xml file.

Click the pom.xml tab at the bottom of the pane to view the content of the pom.xml
file.

2.2. Specify the groupId and artifactId of the fabric8 Maven plug-in in the project's
pom.xml file. Update the XML code nested in the plugins section, immediately after
the <!-- Add fabric8 maven plugin configuration here --> comment:


<plugin>
<!--TODO set groupId -->
<groupId>io.fabric8</groupId>
<!--TODO set artifactId -->
<artifactId>fabric8-maven-plugin</artifactId>
<version>${version.fabric8-maven-plugin}</version>

The version is specified in the parent pom.xml file.

2.3. Define the execution goals in the pom.xml file in which the plug-in is triggered. Nested
in the plugin section created in the previous step, and after the version element, add
the resource and the build goals to the Maven execution.

<executions>
<execution>
<id>fmp</id>
<goals>
<!--TODO configure Maven to build the container image and resources each
build -->
<goal>resource</goal>
<goal>build</goal>
</goals>
</execution>
</executions>

2.4. Configure the plug-in to use the wildfly-swarm generator, which expects the
application to be a swarm uberJAR. Immediately after the executions section added
in the previous step, append the following configuration.

<configuration>
<generator>
<includes>
<!-- TODO set generator -->
<include>wildfly-swarm</include>
</includes>
</generator>
</configuration>

2.5. Press Ctrl+S to save the changes.

3. Review the service.yml file and update it to specify the service exposes port 8080. Name
the exposed port http.

3.1. In the Project Explorer tab in the left pane of JBoss Developer Studio, click
microservice-speaker > src > main > fabric8.

Double-click the service.yml file to open it.

3.2. Update the port and name values to 8080 and http, respectively.

metadata:
  name: ${project.artifactId}
spec:
  #TODO Set port to 8080 and name it http
  ports:
  - port: 8080
    name: http

3.3. Press Ctrl+S to save the changes.

4. If you are not already authenticated, log in to the OpenShift cluster using developer as the user
name and redhat as the password. Create a new project called deploy-speaker.

4.1. Log in to the OpenShift cluster from the command line:

[student@workstation microprofile-conference]$ oc login \
-u developer -p redhat https://master.lab.example.com
Login Successful
...output omitted...

4.2. Create a new project. From the command line, run the following command:

[student@workstation microprofile-conference]$ oc new-project deploy-speaker


Now using project "deploy-speaker" on server "https://
master.lab.example.com:443".
...output omitted...

5. Package the microservice-speaker application with Maven and then use the fabric8 Maven
plug-in to build the container image that is deployed to OpenShift.

This starts a new S2I build in the OCP cluster. You can skip the tests for a faster build time
using the -DskipTests option.

5.1. Create the container image for the microservice-speaker application using the S2I
build.

Navigate to the directory where the microservice-speaker project is located, and run
the following command:

[student@workstation microprofile-conference]$ cd microservice-speaker


[student@workstation microservice-speaker]$ mvn package fabric8:build \
-DskipTests

Review the build output.

[INFO] Scanning for projects...
[INFO]
[INFO] ------------------------------------------------------------------------
[INFO] Building Conference :: Speaker 1.0.0-SNAPSHOT
[INFO] ------------------------------------------------------------------------
[INFO]
[INFO] F8: Using OpenShift build with strategy S2I
...output omitted...
[INFO] F8: Pushed 6/6 layers, 100% complete
[INFO] F8: Push successful
[INFO] F8: Build microservice-speaker-s2i-1 Complete
...output omitted...
[INFO] ------------------------------------------------------------------------
[INFO] BUILD SUCCESS
[INFO] ------------------------------------------------------------------------


[INFO] Total time: 01:39 min

The fabric8 Maven plug-in uses the S2I strategy for the build because there is an
OpenShift client connected to a cluster.

6. Deploy the microservice-speaker microservice with the fabric8 Maven plug-in. You can skip
the tests for a faster build time using the -DskipTests option.

6.1. Use the mvn fabric8:deploy command to deploy the application using the container
image built by the S2I build.

[student@workstation microservice-speaker]$ mvn fabric8:deploy -DskipTests


[INFO] Scanning for projects...
[INFO]
[INFO] ------------------------------------------------------------------------
[INFO] Building Conference :: Speaker 1.0.0-SNAPSHOT
[INFO] ------------------------------------------------------------------------
[INFO]
[INFO] >>> fabric8-maven-plugin:3.5.34:deploy (default-cli) > install @
microservice-speaker >>>
[INFO]
[INFO] --- maven-resources-plugin:3.0.1:resources (default-resources) @
microservice-speaker ---
[INFO] Using 'UTF-8' encoding to copy filtered resources.
[INFO] Copying 1 resource
[INFO]
[INFO] --- fabric8-maven-plugin:3.5.34:resource (fmp) @ microservice-speaker ---
[INFO] F8: Running in OpenShift mode
...output omitted...
[INFO] --- fabric8-maven-plugin:3.5.34:deploy (default-cli) @ microservice-
speaker ---
[INFO] F8: Using OpenShift at https://master.lab.example.com:443/ in namespace
default with manifest
/home/student/microprofile-conference/microservice-speaker/target/classes/META-
INF/fabric8/openshift.yml
[INFO] OpenShift platform detected
[INFO] Using project: deploy-speaker
[INFO] Creating a Service from openshift.yml namespace default name
microservice-speaker
[INFO] Created Service:
microservice-speaker/target/fabric8/applyJson/default/service-microservice-
speaker.json
[INFO] Using project: default
[INFO] Creating a DeploymentConfig from openshift.yml namespace default name
microservice-speaker
[INFO] Created DeploymentConfig:
microservice-speaker/target/fabric8/applyJson/default/deploymentconfig-
microservice-speaker.json
[INFO] Creating Route default:microservice-speaker host: null
[INFO] F8: HINT: Use the command `oc get pods -w` to watch your pods start up
[INFO] ------------------------------------------------------------------------
[INFO] BUILD SUCCESS
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 45.673 s

Review the Maven build output. Note that the fabric8:deploy goal creates all the
configuration files and deploys the pod to OpenShift.


Note
The following error might occur during execution:

[ERROR] Exception in reconnect


java.util.concurrent.RejectedExecutionException:
Task java.util.concurrent.ScheduledThreadPoolExecutor
$ScheduledFutureTask@3cce6de9
rejected from
java.util.concurrent.ScheduledThreadPoolExecutor@79ccf414[Shutting down,
pool size = 1, active threads = 1,
queued tasks = 0, completed tasks = 10]

You can disregard this error.

7. To get the route URL, run the following command in the terminal window:

[student@workstation microservice-speaker]$ oc status

The route information is displayed on the workstation VM. It should be similar to the
following:

In project deploy-speaker on server https://master.lab.example.com:443


...
http://microservice-speaker-deploy-speaker.apps.lab.example.com (svc/microservice-
speaker)

8. Test the microservice, accessing it with the URL captured in the previous step.

8.1. Test the REST endpoints of the microservice using the RESTClient Firefox plug-in.

Start Firefox on the workstation VM and click the RESTClient plug-in icon on the
browser's toolbar.

The Firefox RESTClient plug-in

8.2. Select GET as the Method. Enter http://microservice-speaker-deploy-speaker.apps.lab.example.com/speaker/ as the URL and click Send.

8.3. In the Headers tab, verify that the Status Code is 200 OK.

8.4. Verify in the Response tab that the response is similar to the following:

[{"id":"25","title":"Mr.","nameFirst":"Abbot","nameLast":"Blanchard",
"organization":"n/a","biography":"Lorem ipsum dolor sit amet, consectetur
adipiscing elit. Nullam commodo eget nisl eu fermentum. Phasellus tellus
elit, eleifend vel bibendum quis, hendrerit sit amet enim. Donec nulla tortor,


...output omitted...

9. Grade the lab.

[student@workstation ~]$ lab deploy-speaker grade

10. Undeploy the microservice-speaker microservice using the fabric8 Maven plug-in.

[student@workstation microservice-speaker]$ mvn fabric8:undeploy

Log messages from the operation are displayed.

[INFO] Scanning for projects...

[INFO]
[INFO] ------------------------------------------------------------------------
[INFO] Building Conference :: Speaker 1.0.0-SNAPSHOT
[INFO] ------------------------------------------------------------------------
[INFO]
[INFO] --- fabric8-maven-plugin:3.5.34:undeploy (default-cli) @ microservice-speaker
---
[INFO] F8: Using OpenShift at https://master.lab.example.com:443/ in namespace
default with manifest /home/student/microprofile-conference/microservice-speaker/
target/classes/META-INF/fabric8/openshift.yml
[INFO] OpenShift platform detected
[INFO] Using project: default
[INFO] F8: Deleting resource Route default/microservice-speaker
[INFO] F8: Deleting resource DeploymentConfig default/microservice-speaker
[INFO] F8: Deleting resource Service default/microservice-speaker
[INFO] ------------------------------------------------------------------------
[INFO] BUILD SUCCESS
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 4.075 s

11. Clean up, commit your changes to your local Git repository in the lab branch, and return to
the master branch.

11.1. Delete the deploy-speaker project.

[student@workstation microservice-speaker]$ oc delete project deploy-speaker


project "deploy-speaker" deleted

11.2. In the terminal window where the microservice-speaker was built, use the git add
command to stage the uncommitted changes.

[student@workstation microservice-speaker]$ git add .

11.3. Use the git commit command to commit your changes to the local branch.

[student@workstation microservice-speaker]$ git commit \
-m"completing lab deploy application using fabric8."
...output omitted...


11.4. Check out the master branch to finish cleaning up.

[student@workstation microservice-speaker]$ git checkout master


Switched to branch 'master'

This concludes the lab.


Summary
In this chapter, you learned:

• The MicroProfile conference application is a conference management application that demonstrates a microservices architecture using WildFly Swarm and MicroProfile. The application contains six microservices and a front-end web UI. The six services are authz, speaker, session, schedule, vote, and an API gateway.

• The web application calls all of the back-end services and uses the data returned by those
services to display information to the user.

• The preferred mechanism for running microservices is to use containers. Containers provide
many of the same benefits as virtual machines, such as security, storage, and network
isolation, but require far fewer hardware resources and are quicker to launch and terminate.

• Red Hat OpenShift Container Platform is the enterprise distribution of Kubernetes for CI/CD
application development. OpenShift provides both browser-based and command-line interface
management tools for managing user applications and OpenShift services.

• The OpenShift web console can be used to create, build, deploy, and run applications. You can
use a Source-to-Image (S2I) strategy to create new applications.

• The fabric8 Maven plug-in simplifies the container image build process and the generation of artifacts required by OpenShift for the S2I process.

• The main goals of the fabric8 plug-in are listed below:

◦ fabric8:resource: Enriches the YAML resource fragments used to create the OpenShift
resources

◦ fabric8:build: Starts the S2I build process and produces a container image to run the
application along with the enriched YAML files

◦ fabric8:deploy: Triggers a deployment using the container image and resources that the
plug-in previously created

CHAPTER 3

IMPLEMENTING A MICROSERVICE WITH MICROPROFILE

Overview

Goal: Describe the specifications in MicroProfile, implement a microservice with some of the specifications, and deploy it to an OpenShift cluster.

Objectives:
• Describe the specifications included in MicroProfile.
• Implement a microservice using the CDI, JAX-RS, and JSON-P specifications of MicroProfile.

Sections:
• Describing MicroProfile and Its Specifications (and Quiz)
• Implementing a Microservice with CDI, JAX-RS, and JSON-P (and Guided Exercise)

Lab: Implementing a Microservice with MicroProfile


Describing MicroProfile and Its Specifications

Objective
After completing this section, students should be able to describe the specifications included in
MicroProfile.

Reviewing the History of the MicroProfile Specification
The technologies that developers use to build microservices continue to evolve. Java EE
remains a critical technology in today's modern IT infrastructure. As the Java ecosystem has
matured, the number of vendors and differing server technologies in the market has grown
significantly. Developers have an enormous amount of Java EE experience, and the tools built
to develop Java applications have become quite advanced. Further, the industry is beginning to
pivot towards a microservices-based architecture for new application development, which are
also mostly cloud-native applications.

The MicroProfile specification is a joint venture between the Eclipse Foundation and many large
vendors including Red Hat to define a baseline platform definition that optimizes Java for a
microservices-based architecture and provides portability of MicroProfile-based applications
across multiple runtimes. The intention was to provide a loose framework supporting many
of the most common design patterns that were already in use by Java developers building
microservices across the industry.

MicroProfile was never intended to be a full standard like a Java Specification Request (JSR), because these complete standards require years to finalize and are cumbersome to update. As a matter of fact, no platform, specification, or standard is ever truly finalized, because microservices applications are constantly evolving. By focusing only on the high-level areas of
commonality found in Java microservices applications, both vendors and the community can
innovate collaboratively without needing to use a rigid standard. This approach is much more
agile and allows innovation to occur much faster than using more rigid standardization. The
community can always opt to standardize functionality later when there is more stability in a
future version of MicroProfile. This approach, while different from the rigid JSR process of the
Java world, allows community members and vendors to continue to innovate independently while
using and contributing to MicroProfile wherever there is commonality across many different
microservices. By standardizing the areas of commonality, developers maintain a degree of
application portability, with many MicroProfile runtime implementations from which to choose.

The initial 1.0 version of the MicroProfile specification included only the JAX-RS, CDI, and JSON-
P specifications from Java EE, the absolute bare minimum required to build a microservice in
Java. MicroProfile uses these traditional Java EE specifications because after over a decade of
investment and optimization, these Java EE implementations are quite efficient. Additionally,
there is the added benefit of developer familiarity, as these specifications are ubiquitous in
Java development. One of the stated goals of the MicroProfile initiative is to utilize existing API
specifications where possible and combine them with new ones to create a baseline platform
optimized for developing microservices in the cloud. The MicroProfile community continues
to play an active role in defining the future versions of MicroProfile as Java EE technologies
continue to evolve.


Reviewing the Components of the MicroProfile Version 1.3 Specification
This course focuses mainly on the unchanged and updated APIs in version 1.3, released January 2, 2018. These APIs include the Fault Tolerance, Metrics, JWT, Health Check, CDI, JSON-P, JAX-RS, and Config specifications. OpenTracing and RestClient concepts are covered later in the course, but those MicroProfile specifications were not supported by WildFly Swarm at the time of writing. The 1.3 version includes the following APIs that were unchanged or updated from version 1.2 (a brief code sketch illustrating how a few of them appear in application code follows Figure 3.1):

CDI 1.2
The Contexts and Dependency Injection (CDI) 1.2 API specification defines a set of
complementary services that help improve the structure of application code. CDI layers
an enhanced life cycle and interaction model over existing Java components, including
managed beans and Enterprise Java Beans. This includes life-cycle management for stateful
objects, dependency injection, event notifications, and more.

JSON-P 1.0
JSON Processing (JSON-P) is a Java API to process (for example, parse, generate, transform,
and query) JSON messages. It produces and consumes JSON text as a stream and allows you to
build a Java object model for JSON text using API classes.

JAX-RS 2.0
The Java API for RESTful Web Services (JAX-RS) API specification provides support in
creating web services according to the Representational State Transfer (REST) architectural
pattern. JAX-RS uses annotations to simplify development and deployment of web service
clients and endpoints.

Config 1.2
The MicroProfile Config API specification defines an easy to use and flexible system for
application configuration. It also defines ways to extend the configuration mechanism itself
with a Service Provider Interface (SPI) in a portable fashion.

Fault Tolerance 1.0


The MicroProfile Fault Tolerance API specification provides implementations for the
bulkhead, circuit breaker, and fallback patterns, as well as retry policies and timeouts to
microservices making service calls to dependent services, resulting in a more available and
resilient application.

JWT Propagation 1.0


The MicroProfile JSON Web Token (JWT) Propagation API specification defines MicroProfile-
JWT tokens and how to map them to Java EE and non-Java EE containers, providing a
universal security solution for all microservices. This results in more secure microservices
using existing infrastructure environments.

Health Check 1.0


The MicroProfile Health Check API specification provides the ability for infrastructure
systems to monitor the health of a microservice, and to act upon its findings, resulting in a
more robust and available application. This is important in cloud environments when using
technologies like OpenShift Container Platform that rely on probes to determine the health
of the pods that are running.


Metrics 1.1
The MicroProfile Metrics API specification helps determine the health of an application.
It helps find issues, provides long-term trend data for capacity planning, and implements
proactive discovery of issues (for example, disk usage growing without bounds). Metrics can
also help scheduling systems decide when to scale an application to run on more or fewer
instances, based on application metrics.

Figure 3.1: Specifications included in MicroProfile version 1.3
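
As a taste of how several of these specifications surface in application code, the following sketch shows MicroProfile Config and Fault Tolerance annotations on an ordinary CDI bean. The SessionClient class, the property name, and the default URL are hypothetical and are not part of the conference application; the sketch assumes the corresponding MicroProfile APIs are on the classpath:

import java.util.Collections;
import java.util.List;

import javax.enterprise.context.ApplicationScoped;
import javax.inject.Inject;

import org.eclipse.microprofile.config.inject.ConfigProperty;
import org.eclipse.microprofile.faulttolerance.Fallback;
import org.eclipse.microprofile.faulttolerance.Retry;
import org.eclipse.microprofile.faulttolerance.Timeout;

@ApplicationScoped
public class SessionClient {

    // Config: inject an externalized value, falling back to a default if the property is absent
    @Inject
    @ConfigProperty(name = "session.service.url", defaultValue = "http://localhost:8080")
    String sessionServiceUrl;

    // Fault Tolerance: bound the call time, retry on failure, and fall back when retries fail
    @Timeout(500)
    @Retry(maxRetries = 2)
    @Fallback(fallbackMethod = "emptySessions")
    public List<String> fetchSessionTitles() {
        // A remote call to sessionServiceUrl would go here
        throw new UnsupportedOperationException("remote call not implemented in this sketch");
    }

    // The fallback method must match the signature of the guarded method
    public List<String> emptySessions() {
        return Collections.emptyList();
    }
}

At runtime, the MicroProfile implementation supplies the configured value and applies the timeout, retry, and fallback behavior around the annotated method.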

Developing MicroProfile Services Using WildFly Swarm


There are a number of runtime implementation options for deploying MicroProfile applications.
In this course, the focus is on WildFly Swarm, which is a community open source project
sponsored by Red Hat. WildFly Swarm offers an innovative approach to packaging and running
Java EE applications with their server runtimes in a single Java Archive (JAR), known as
UberJars. This approach fits well in microservices architecture because it greatly simplifies the
packaging and deployment process, especially in containerized environments.

WildFly Swarm has not fully implemented the MicroProfile 1.3 specification as of the 2018.3.3
release. This course uses that version for all course lab materials. Red Hat also provides the
Red Hat OpenShift Application Runtimes, which includes support for WildFly Swarm applications
deployed on OpenShift Container Platform, but at the time of writing the OpenShift Application
Runtimes release 1.0 only includes support for MicroProfile version 1.0.

To use MicroProfile with WildFly Swarm in a project that uses Maven to manage
dependencies, first be sure to include the WildFly Swarm bill of materials (BOM) in the
dependencyManagement section of your Maven POM file.

<properties>
  <version.wildfly.swarm>2018.3.3</version.wildfly.swarm>
</properties>
<dependencyManagement>
  <dependencies>
    <dependency>
      <groupId>org.wildfly.swarm</groupId>
      <artifactId>bom-all</artifactId>
      <version>${version.wildfly.swarm}</version>
      <scope>import</scope>
      <type>pom</type>
    </dependency>
  </dependencies>
</dependencyManagement>

Additionally, include the wildfly-swarm-plugin in the build section of the Maven POM file
so that Maven can build the final UberJar that includes the WildFly Swarm server runtimes.

<build>
<finalName>${project.artifactId}</finalName>
<plugins>
<plugin>
<groupId>org.wildfly.swarm</groupId>
<artifactId>wildfly-swarm-plugin</artifactId>
<configuration>
<useUberJar>true</useUberJar>
</configuration>
</plugin>
</plugins>
</build>

Finally, WildFly Swarm uses the term fractions to refer to the different MicroProfile specifications
that it can include in your application. Add these fractions to your Maven POM file as
dependencies to include support for the MicroProfile components in your microservices
application.

<dependencies>
<!-- other dependencies omitted -->

<!-- Include appropriate fractions for MicroProfile -->


<dependency>
<groupId>org.wildfly.swarm</groupId>
<artifactId>microprofile-jwt</artifactId>
</dependency>
<dependency>
<groupId>org.wildfly.swarm</groupId>
<artifactId>microprofile-health</artifactId>
</dependency>
<dependency>
<groupId>org.wildfly.swarm</groupId>
<artifactId>microprofile-config</artifactId>
</dependency>
</dependencies>

The MicroProfile fraction includes the following fractions:

• JAX-RS

• CDI

• JSON-P

• MicroProfile config

• MicroProfile health

• MicroProfile fault tolerance

• MicroProfile JWT

• MicroProfile metrics


References
MicroProfile Home Page
http://microprofile.io/

WildFly Swarm Home Page
http://wildfly-swarm.io/

WildFly Swarm MicroProfile fractions
https://github.com/wildfly-swarm/wildfly-swarm/tree/master/fractions/microprofile

WildFly Swarm MicroProfile Documentation
http://docs.wildfly-swarm.io/2018.3.3/#_microprofile


Quiz: Describing MicroProfile and Its Specifications

Match the items below to their counterparts in the table.

CDI 1.2    Config 1.2    Fault Tolerance 1.0    Health Check 1.0

JAX-RS 2.0    JSON-P 1.0    JWT Auth 1.0    Metrics 1.1

MicroProfile Specification    Description

____________________
A specification that defines a set of complementary services that help improve the structure of
application code, including an enhanced lifecycle and interaction model over existing Java component
types, including managed beans and Enterprise Java Beans.

____________________
A specification that provides support in creating web services according to the Representational
State Transfer (REST) architectural pattern using annotations to simplify the development and
deployment of web service clients and endpoints.

____________________
A specification to process (parse, generate, transform, and query) JSON messages. It produces and
consumes JSON text in a streaming fashion and allows you to build a Java object model for JSON text
using API classes.

____________________
A specification that defines an easy to use and flexible system for application configuration. It
also defines ways to extend the configuration mechanism itself with an SPI (Service Provider
Interface) in a portable fashion.

____________________
A specification that provides definitions for the bulkhead, circuit breaker, and fallback patterns,
as well as retry policies and timeouts to microservices making service calls to dependent services,
resulting in a more available and resilient application.

____________________
A specification that provides the ability for infrastructure systems to monitor the health and act
upon it, resulting in a more robust and available application.

____________________
A specification that helps track an application's performance. It can help pinpoint issues, provide
long-term trend data for capacity planning, and enable proactive discovery of issues (such as disk
usage growing without bounds).

____________________
A specification that defines MicroProfile-JWT tokens and how to map them to Java EE and non-Java EE
containers, providing a universal security solution for all microservices, resulting in more secure
microservices using existing infrastructure environments.


Solution

Match the items below to their counterparts in the table.

MicroProfile Specification    Description

CDI 1.2
A specification that defines a set of complementary services that help improve the structure of
application code, including an enhanced lifecycle and interaction model over existing Java component
types, including managed beans and Enterprise Java Beans.

JAX-RS 2.0
A specification that provides support in creating web services according to the Representational
State Transfer (REST) architectural pattern using annotations to simplify the development and
deployment of web service clients and endpoints.

JSON-P 1.0
A specification to process (parse, generate, transform, and query) JSON messages. It produces and
consumes JSON text in a streaming fashion and allows you to build a Java object model for JSON text
using API classes.

Config 1.2
A specification that defines an easy to use and flexible system for application configuration. It
also defines ways to extend the configuration mechanism itself with an SPI (Service Provider
Interface) in a portable fashion.

Fault Tolerance 1.0
A specification that provides definitions for the bulkhead, circuit breaker, and fallback patterns,
as well as retry policies and timeouts to microservices making service calls to dependent services,
resulting in a more available and resilient application.

Health Check 1.0
A specification that provides the ability for infrastructure systems to monitor the health and act
upon it, resulting in a more robust and available application.

Metrics 1.1
A specification that helps track an application's performance. It can help pinpoint issues, provide
long-term trend data for capacity planning, and enable proactive discovery of issues (such as disk
usage growing without bounds).

JWT Auth 1.0
A specification that defines MicroProfile-JWT tokens and how to map them to Java EE and non-Java EE
containers, providing a universal security solution for all microservices, resulting in more secure
microservices using existing infrastructure environments.


Implementing a Microservice with CDI, JAX-RS, and JSON-P

Objective
After completing this section, students should be able to implement a microservice using the CDI,
JAX-RS, and JSON-P specifications of MicroProfile.

Reviewing the CDI Specification


The Java EE 6 specification first introduced the Contexts and Dependency Injection for Java
(CDI) specification in 2009. Since that introduction, CDI has become an industry standard and
is now one of the most critical components available to enterprise Java developers. The CDI
specification defines a framework allowing developers to create a robust set of complementary
tools and services that assist with the structure and organization of application code. Some of
the most important features include life cycle and state management of CDI beans, type-safe
dependency injection, object decoration for injected objects, and event notifications.

Any CDI managed object that is bound to a life cycle context is called a bean. When creating
beans using CDI, you no longer need to manage many tricky problems manually and can instead
leverage CDI. These problems include how to:

• Handle the life cycle of a bean, including when to create a bean and when to destroy it

• Store references to managed beans

• Share beans among other CDI-managed beans

A bean specifies only the type and semantics of other beans it depends upon. It need not be
aware of the actual life cycle, concrete implementation, threading model, or other clients of any
bean it interacts with. Even better, the concrete implementation, life cycle, and threading model
of a bean may vary according to the deployment scenario, without affecting any client. This
loose-coupling makes your code easier to maintain.
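
As a minimal sketch of these ideas (the GreetingService and GreetingResource classes are hypothetical and are shown together for brevity; each would normally live in its own source file):

import javax.enterprise.context.ApplicationScoped;
import javax.inject.Inject;

@ApplicationScoped
public class GreetingService {
    // CDI manages the life cycle of this bean; one instance is shared across the application
    public String greet(String name) {
        return "Hello, " + name;
    }
}

class GreetingResource {
    // Type-safe injection: CDI locates and injects a bean of type GreetingService
    @Inject
    GreetingService greetingService;

    public String hello(String name) {
        return greetingService.greet(name);
    }
}

Neither class constructs the other or manages its scope; the CDI container wires them together and controls their life cycles.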

What is New in CDI 1.2


The Java Community Process formally approved version 1.2 of the CDI specification in April 2014.
While this minor release does not significantly change the core concepts of CDI, it does introduce
some important new features, including:

• Explicit @Interceptor, @Decorator, and @Stereotype annotations

• The @Priority annotation to solve ambiguous injection issues

• Automatic enablement of CDI beans without the presence of a beans.xml file

• A complete rebuild of the Event API (a minimal event sketch follows this list)

• A large number of other minor clarifications or improvements
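
As referenced in the list above, the following is a minimal sketch of firing and observing a CDI event. The OrderPlaced event type and the bean names are hypothetical, and the classes would normally live in separate source files:

import javax.enterprise.context.ApplicationScoped;
import javax.enterprise.event.Event;
import javax.enterprise.event.Observes;
import javax.inject.Inject;

public class OrderPlaced {
    public final String orderId;

    public OrderPlaced(String orderId) {
        this.orderId = orderId;
    }
}

@ApplicationScoped
class OrderService {
    // Inject a typed Event; firing it notifies every matching observer method
    @Inject
    Event<OrderPlaced> orderPlacedEvent;

    public void placeOrder(String orderId) {
        orderPlacedEvent.fire(new OrderPlaced(orderId));
    }
}

@ApplicationScoped
class AuditLogger {
    // Observer method: invoked by the container whenever an OrderPlaced event is fired
    void onOrderPlaced(@Observes OrderPlaced event) {
        System.out.println("Order placed: " + event.orderId);
    }
}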


Reviewing the JAX-RS Specification


The Java API for RESTful Web Services (JAX-RS) is a specification that provides support for
creating web services following the Representational State Transfer (REST) architectural pattern.
The JAX-RS API is primarily annotation-based and intends to facilitate the development of web
service endpoints and clients in Java by simplifying and standardizing much of the boilerplate
code required to build a web service. This approach allows developers to focus their efforts
entirely on the business logic of their application. Version 1.1 of the JAX-RS specification was included in
the Java EE 6 specification, making it the industry-standard API for building Java-based REST
services.

To enable JAX-RS in your application, include a class that extends the Application class from
the javax.ws.rs.core package, and is annotated with the @ApplicationPath annotation,
as listed in the following example:

import javax.ws.rs.ApplicationPath;
import javax.ws.rs.core.Application;

@ApplicationPath("/")
public class GatewayApplication extends Application {
}

Now you are ready to create a service class.

The following table includes descriptions of the core annotations that the JAX-RS specification
defines for building REST service classes. These annotations are all found in the javax.ws.rs
package and its sub-packages:

JAX-RS Class and Method-Level Annotations Summary

Annotation                          Description
@Path                               The relative path for a class or method
@GET, @PUT, @POST, @DELETE, @HEAD   The HTTP request type of a method
@Produces, @Consumes                The Internet media types that can be received in the request or
                                    are sent in the response. Enums for these values can be found in
                                    the javax.ws.rs.core.MediaType class

These annotations are all used at either the class or method level. They define how the service
interacts with clients, and allow the application server to map the incoming HTTP requests to
the appropriate method.

Additionally, JAX-RS provides a set of annotations that you apply at the method parameter level
and use to retrieve information directly from the HTTP request. These annotations, which are
also found in the javax.ws.rs package and its sub-packages, are summarized in the following
table:

JAX-RS Method Parameter Level Annotations Summary

Annotation      Description
@PathParam      Binds to a segment of the URI. It is possible to include a placeholder with a
                matching name in the @Path value (see example below).
@QueryParam     Binds to an HTTP query parameter using its name. An example URI that includes a
                query parameter is: http://www.example.com/rest?name=Test
@HeaderParam    Binds to a header in the HTTP request using its name.
@FormParam      Binds to a form field using its name.
@Context        Returns the entire context of the HttpServletRequest object or the SecurityContext
                object for the incoming HTTP request. This annotation can also be used to inject
                class-level variables instead of method parameters. This can also retrieve the
                UriInfo for incoming requests.

The following example is a REST service built using only JAX-RS annotations:

@Path("/")
public class RestResource {

    private final Logger log = LoggerFactory.getLogger(RestResource.class);

    @Context
    private SecurityContext securityContext;

    @Context
    private HttpServletRequest servletRequest;

    @PostConstruct
    private void init() {
        log.info("Rest service created");
    }

    @GET
    @Path("/rest")
    @Produces("text/plain")
    public String hello() {
        String hostname = servletRequest.getServerName();
        return String.format("Hello World. Request received from %s", hostname);
    }

    @GET
    @Path("/rest/{firstName}")
    @Produces("text/plain")
    public String hello(@PathParam("firstName") String firstName) {
        String hostname = servletRequest.getServerName();
        return String.format("Hello World. Request received from %s on %s", firstName,
            hostname);
    }
}

The annotations in this example, in order of appearance:

• @Path("/") on the class sets / as the relative path for the resource class.
• @Context injects the SecurityContext object.
• @Context also injects the HttpServletRequest object.
• @GET maps a method to HTTP GET requests.
• @Path("/rest") maps the method to a path of /rest, relative to the class-level path.
• @Produces("text/plain") specifies that the method produces a media type of text/plain.
• The {firstName} placeholder names the path parameter to map to the value in the @PathParam annotation.
• @PathParam("firstName") maps the firstName value from the path into the method parameter.
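
The other parameter annotations from the table are used in the same way. The following sketch is hypothetical (the search method and its parameters are not part of the example above); it binds a query parameter and a request header, and uses @DefaultValue, another standard javax.ws.rs annotation, to supply a value when the parameter is absent:

@GET
@Path("/search")
@Produces("text/plain")
public String search(@QueryParam("name") String name,
                     @DefaultValue("10") @QueryParam("limit") int limit,
                     @HeaderParam("User-Agent") String userAgent) {
    // name and limit come from the query string, for example /search?name=Test&limit=5
    // userAgent is read from the User-Agent HTTP request header
    return String.format("Searching for %s (limit %d), requested by %s", name, limit, userAgent);
}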

What is New in JAX-RS 2.0


Version 2.0 of the JAX-RS specification was released in May 2013 and was subsequently included
in the Java EE 7 specification. This major release included many new important features and
integration with other APIs, including:

Client API
While JAX-RS 1.0 was purely server-side, version 2.0 includes a full client API for consuming
web services (a minimal client sketch follows this list).

Bean Validation
An annotation-based facility for specifying parameter metadata. For example, a parameter
declared as @NotNull shares indicates that the shares parameter may not be null. You can also supply custom
annotations to ensure parameter values match certain data formats such as a zip code or
phone number.

Asynchronous Support
This allows a client to send a request to the server, and optionally get a Future object or an
InvocationCallback object to be notified when the response is complete.

Filters and Handlers


The new filters API in JAX-RS 2.0 provides the ability to chain Servlet filters in a chain of
responsibility pattern. This is useful for addressing universal concerns that might
apply to all methods of a given type, such as logging. Handlers are similar to filters, except
that they wrap a method invocation at a specified point. A handler intercepts calls at that
point, for example to customize or enrich the request or response data.

HATEOAS (Hypermedia)
JAX-RS 2.0 provides Link and Target classes to allow a server to introduce hyperlinks into
a response, and clients to react to them.

Content Negotiation
JAX-RS 2.0 introduces more functionality to the @Consumes and @Produces annotations,
which allow you to prioritize request and response formats.
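
As referenced in the Client API item above, the following is a minimal sketch of consuming a REST endpoint with the JAX-RS 2.0 client API; the target URL is only a placeholder:

import javax.ws.rs.client.Client;
import javax.ws.rs.client.ClientBuilder;
import javax.ws.rs.core.MediaType;

public class HelloClient {
    public static void main(String[] args) {
        // Build a client, target a resource, and issue a GET request for a plain-text response
        Client client = ClientBuilder.newClient();
        String response = client.target("http://localhost:8080/rest")
                .request(MediaType.TEXT_PLAIN)
                .get(String.class);
        System.out.println(response);
        client.close();
    }
}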

Understanding the JSON-P Specification Version 1.0


JSON Processing (JSON-P) is a Java API that can parse, generate, transform, and query JSON
data. Introduced in May 2013, the version 1.0 of the JSON-P API includes two JSON processing
models: an object model and a streaming model. Both of these models can be used to read and
write JSON data, but the streaming model is particularly efficient when processing high volumes
of JSON data.

The object model used by JSON-P represents the elements that form the JSON data structure
as a Java object called a JsonObject, which implements the java.util.Map interface. The
following example builds a JsonObject instance with some data in it, using a JsonObjectBuilder:


// Create Json and serialize
JsonObjectBuilder builder = Json.createObjectBuilder();
builder.add("name", "Test");
builder.add("age", BigDecimal.valueOf(35));
builder.add("active", Boolean.FALSE);
JsonObject json = builder.build();
String result = json.toString();

The Json class also includes factory methods to create other useful objects when working with
the object model such as the JsonGenerator, JsonParser, and JsonReader class instances.
The following example creates a JsonObject instance from raw JSON data. The example uses
the createReaderFactory method to instantiate a new JsonReaderFactory with optional
configuration options. Next it creates a JsonReader object using the factory to read the raw
JSON data and create a JsonObject from it. Finally, it uses the JsonObject to retrieve data
from its properties:

String json = "{\"id\": 123456, \"api\": \"JSON-Processing\", \"deployed\": true}";


JsonReaderFactory factory = Json.createReaderFactory(null);
JsonReader jsonReader = factory.createReader(new StringReader(json));
JsonObject jsonObject = jsonReader.readObject();
jsonReader.close();
int id = jsonObject.getInt("id");
String api = jsonObject.getString("api");
boolean deployed = jsonObject.getBoolean("deployed");

The streaming model API included in JSON-P is implemented differently from the object model
and is a more low-level API: it writes JSON data by emitting events directly to an output stream
with a JsonGenerator, and it reads JSON data event by event with a JsonParser. The object model
also supports a fluent builder style, in which the method calls that add data are chained
together. The following example uses the createObjectBuilder method of the Json class to build
an instance of JsonObject with data already in it:

// Create Json and serialize


JsonObject json = Json.createObjectBuilder()
.add("name", "Test")
.add("age", BigDecimal.valueOf(35))
.add("active", Boolean.FALSE).build();
String result = json.toString();
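
For comparison, the following is a minimal sketch of writing the same data with the streaming
model's JsonGenerator, assuming output to a java.io.StringWriter:

// Streaming model: emit the JSON directly to a writer instead of building
// an intermediate JsonObject in memory.
StringWriter writer = new StringWriter();
JsonGenerator generator = Json.createGenerator(writer);
generator.writeStartObject()
        .write("name", "Test")
        .write("age", BigDecimal.valueOf(35))
        .write("active", false)
        .writeEnd();
generator.close();
String result = writer.toString();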

The JSON-P API uses special objects to model each of the possible data types present in JSON
data. These include:

JsonObject
Represents a JSON object, a collection of name/value pairs; it is typically the top-level
wrapper of a JSON document.

JsonArray
Represents JSON list objects.

JsonNumber
Represents a numeric value in JSON data.

JsonString
Represents a string value in JSON data.


JsonValue
A generic reference that can represent any of: an object (JsonObject), an array
(JsonArray), a number (JsonNumber), a string (JsonString), true (JsonValue.TRUE),
false (JsonValue.FALSE), or null (JsonValue.NULL).

The following example parses a JsonArray and processes each of its JsonValue elements:

final JsonReaderFactory factory = Json.createReaderFactory(null);

final JsonReader reader = factory.createReader(scheduleResource.openStream());

final JsonArray items = reader.readArray();

// parse session objects
final List<JsonObject> sessions = new LinkedList<>();

for (final JsonValue item : items) {
    final JsonObject session = (JsonObject) item;
    // JsonObject instances are immutable, so copy the parsed properties into a
    // builder, add the generated id, and build a new JsonObject
    final JsonObjectBuilder builder = Json.createObjectBuilder();
    for (final Map.Entry<String, JsonValue> entry : session.entrySet()) {
        builder.add(entry.getKey(), entry.getValue());
    }
    builder.add("id", String.valueOf(this.id.incrementAndGet()));
    sessions.add(builder.build());
}
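
The streaming model reads data in a similar event-driven way. The following is a minimal
sketch that consumes the raw JSON string from the earlier JsonReader example with a
JsonParser, assuming that json variable is in scope:

// Streaming model read: walk the document event by event instead of
// materializing a JsonObject.
JsonParser parser = Json.createParser(new StringReader(json));
while (parser.hasNext()) {
    JsonParser.Event event = parser.next();
    if (event == JsonParser.Event.KEY_NAME) {
        System.out.print(parser.getString() + " = ");
    } else if (event == JsonParser.Event.VALUE_STRING) {
        System.out.println(parser.getString());
    } else if (event == JsonParser.Event.VALUE_NUMBER) {
        System.out.println(parser.getInt());
    } else if (event == JsonParser.Event.VALUE_TRUE
            || event == JsonParser.Event.VALUE_FALSE) {
        System.out.println(event == JsonParser.Event.VALUE_TRUE);
    }
}
parser.close();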

References
Home Page of the CDI Specification
http://cdi-spec.org/

Home Page of the JAX-RS Specification


https://github.com/jax-rs

JAX-RS Javadoc
https://docs.oracle.com/javaee/7/api/javax/ws/rs/package-summary.html

Home Page of the JSON-P Specification


https://javaee.github.io/jsonp/


Guided Exercise: Implementing a RESTful Microservice

In this exercise, you will implement a "hello world" microservice using only the core
specifications of MicroProfile, which includes CDI, JAX-RS, and JSON-P.

Outcomes
You should be able to create a REST service using JAX-RS and CDI, and parse JSON data using
JSON-P.

Before you begin


If you have not already, execute the git clone command to clone the hello-microservices
repository onto the workstation machine.

[student@workstation ~]$ git clone http://services.lab.example.com/hello-microservices


Cloning into 'hello-microservices'...
...output omitted...
Resolving deltas: 100% (2803/2803), done.

Then run the lab setup to begin the exercise.

[student@workstation ~]$ lab cdi-jaxrs setup

Steps
1. Switch the repository to the lab-cdi-jaxrs branch to get the correct version of the
application code for this exercise.

1.1. Switch to the branch using the git checkout command.

[student@workstation ~]$ cd hello-microservices


[student@workstation hello-microservices]$ git checkout lab-cdi-jaxrs
Switched to a new branch 'lab-cdi-jaxrs'

1.2. Use the git status command to ensure that you are on the correct branch.

[student@workstation hello-microservices]$ git status


# On branch lab-cdi-jaxrs
nothing to commit, working directory clean

2. Import the hello-microservices project into JBoss Developer Studio.

2.1. In the JBoss Developer Studio menu, click File > Import to open the Import wizard.

2.2. In the Import dialog box, click Maven > Existing Maven Projects, and then click Next.

2.3. In the Import Maven Projects dialog box, click Browse. The Select Root Folder dialog
box displays.

2.4. Navigate to the /home/student directory. Select the hello-microservices folder


and click OK.


2.5. Click Finish to start the import.

2.6. Monitor the progress of the import operation using the JBoss Developer Studio status
bar (lower-right corner), until the Building workspace message disappears.

Note
It may take 5-10 minutes or sometimes longer to download all of the required
dependencies and build the workspace.

3. Enable JAX-RS for the aloha application by updating the JaxRsActivator class.

3.1. Open the JaxRsActivator class by expanding the aloha item in the Project Explorer
tab in the left pane of JBoss Developer Studio, then click aloha > Java Resources >
src/main/java > com.redhat.training.msa.aloha.rest to expand it. Double-click the
JaxRsActivator.java file.

3.2. Update this class to extend the javax.ws.rs.core.Application superclass.

//TODO Enable JaxRs by extending the Application superclass


//TODO Set a root path of '/api' for the entire application
public class JaxRsActivator extends Application {
/* class body intentionally left blank */
}

3.3. Set the application path for this REST application to /api using the
@ApplicationPath annotation.

//TODO Enable JaxRs by extending the Application superclass


//TODO Set a root path of '/api' for the entire application
@ApplicationPath("/api")
public class JaxRsActivator extends Application {
/* class body intentionally left blank */
}

3.4. Save your changes to the file using Ctrl+S.

4. Implement the AlohaResource REST service using JAX-RS annotations.

4.1. Review the AlohaResource REST service class implementation by expanding the aloha
item in the Project Explorer tab in the left pane of JBoss Developer Studio, then click
aloha > Java Resources > src/main/java > com.redhat.training.msa.aloha.rest to expand
it. Double-click the AlohaResource.java file.

4.2. Set the class-level path to a value of / using the @Path annotation.

//TODO Add a class-level path of '/'


@Path("/")
public class AlohaResource {

4.3. The PersonParser class is a CDI-managed bean that is eligible for injection. Use the
@Inject annotation to inject an instance of this class into the REST service.

//TODO Inject the parser class using CDI


@Inject
private PersonParser parser;

4.4. Inject the HttpServletRequest object for each incoming request using the
@Context annotation.

//TODO Inject the request using the Context


@Context
private HttpServletRequest servletRequest;

4.5. Use the @PostConstruct annotation to run the init() method every time a new
instance of AlohaResource is created by CDI.

//TODO Use the PostConstruct annotation to run this method every time an AlohaResource is created
@PostConstruct
private void init() {
log.info("AlohaResource created!");
}

4.6. Annotate the hola() method using JAX-RS annotations to specify that it should:

• Map to HTTP GET requests using the @GET annotation

• Map to the relative path of /aloha using the @Path annotation

• Produce text using the @Produces annotation and the constant provided by the
MediaType class

//TODO Map this method to HTTP GET requests


@GET
//TODO Add a path of '/aloha'
@Path("/aloha")
//TODO Specify that this method produces a media type of text/plain
@Produces(MediaType.TEXT_PLAIN)
public String hola() {
String hostname = servletRequest.getServerName();
return String.format("Aloha mai %s", hostname);
}

4.7. Annotate the hola(String json) method using JAX-RS annotations to specify that
it must:

• Map to HTTP POST requests using the @POST annotation

• Map to the relative path of /aloha using the @Path annotation

• Produce text using the @Produces annotation and the constant provided by the
MediaType class


• Consume JSON data using the @Consumes annotation and the constant provided by
the MediaType class

//TODO Map this method to HTTP POST requests


@POST
//TODO Add a path of '/aloha'
@Path("/aloha")
//TODO Specify that this method produces a media type of text/plain
@Produces(MediaType.TEXT_PLAIN)
//TODO Specify that this method consumes a media type of application/json
@Consumes(MediaType.APPLICATION_JSON)
public String hola(String json) {
Person p = parser.parse(json);
String hostname = servletRequest.getServerName();
return String.format("Aloha mai %s %s from %s on %s", p.getFirstName(),
p.getLastName(), p.getLocation(), hostname);
}

4.8. Save your changes to the file using Ctrl+S.

5. Implement the PersonParser class using JSON-P.

5.1. Review the Person model class implementation by expanding the aloha item in the
Project Explorer tab in the left pane of JBoss Developer Studio, then click aloha > Java
Resources > src/main/java > com.redhat.training.msa.aloha.json to expand it. Double-
click the Person.java file.

public class Person {

protected JsonObject underlying;

public Person(final JsonObject underlying) {


this.underlying = underlying;
}

@JsonIgnore
JsonObject getUnderlying() {
return underlying;
}

public String getFirstName() {

return underlying.getString("firstName");
}

public String getLastName() {


return underlying.getString("lastName");
}

public String getLocation() {


return underlying.getString("location");
}

An instance of the JsonObject class is used to represent the underlying JSON


data for each instance of Person.

The getter method simply delegates to the getString function to retrieve data
from the underlying JsonObject instance.

5.2. Review the PersonParser class, which needs to be updated to use JSON-P to parse
the incoming JSON data.

In the same package from the previous step, double-click the PersonParser.java file.

@Named
public class PersonParser {

public Person parse(final String json) {


InputStream stream = new
ByteArrayInputStream(json.getBytes(StandardCharsets.UTF_8));
//TODO Create a new JsonReaderFactory with a default configuration
JsonReaderFactory factory = null;
//TODO Use the factory to create a JsonReader for the stream
JsonReader reader = null;
//TODO use the reader to read the JSON into a new JsonObject
JsonObject object = null;
return new Person(object);
}
}

This class must convert a String of raw JSON data into a Person object. Recall
that the Person class uses an underlying JsonObject instance. In order to set
this underlying object, you need to use JSON-P to convert the raw JSON data into a
JsonObject object, and then you can create the instance of Person.

5.3. Create a new JsonReaderFactory instance using the Json class. This method
requires a java.util.Map parameter with the configuration options. To use the
default configuration, just pass in a null value as the parameter value.

//TODO Create a new JsonReaderFactory with a default configuration


JsonReaderFactory factory = Json.createReaderFactory(null);

5.4. Use the factory object to create an instance of JsonReader, which can be used to
parse the InputStream instance that was created from the JSON data String object.

//TODO Use the factory to create a JsonReader for the stream


JsonReader reader = factory.createReader(stream);

5.5. Use the reader object to parse the JSON data into a new JsonObject instance.

//TODO use the reader to read the JSON into a new JsonObject
JsonObject object = reader.readObject();

5.6. Save your changes to the file using Ctrl+S.

6. Test the GET endpoint of the service.

6.1. Build and run the WildFly Swarm application using the Maven plug-in.


In your terminal window, navigate to the aloha directory and run mvn clean
wildfly-swarm:run to start the server.

[student@workstation hello-microservices]$ cd aloha


[student@workstation aloha]$ mvn clean wildfly-swarm:run

6.2. Test the service from a client using the RESTClient Firefox plug-in.

Start Firefox on the workstation VM and click the RESTClient plug-in in the browser's
toolbar.

Figure 3.2: The Firefox RESTClient plug-in

6.3. Select GET as the Method. In the URL form, enter http://localhost:8080/api/
aloha.

6.4. Click Send.

6.5. Verify in the Headers tab that the Status Code is 200 OK.

6.6. Verify in the Response tab that the response matches the following:

Aloha mai localhost

7. Test the POST endpoint of the service.

7.1. In the top toolbar, click Headers, and select Custom Header to add a new custom
header to the request.

7.2. In the custom header dialog, enter the following information:


• Name: Content-Type

• Value: application/json

Figure 3.3: Creating a custom request header in RESTClient

Click Okay.

7.3. Select POST as the Method. In the URL form, enter http://localhost:8080/api/
aloha.

7.4. In the Body section of the request, add the following JSON (this can be copied and pasted
from /home/student/JB283/labs/lab-cdi-jaxrs/json.txt) representation of
a Person entity:

{
"firstName" : "Test",
"lastName" : "User",
"location" : "Virginia"
}

Click Send.

7.5. Verify in the Headers tab that the Status Code is 200 OK.

7.6. Verify in the Response tab that the response matches the following:

Aloha mai Test User from Virginia on localhost

8. Return to the terminal window where WildFly Swarm is running and stop the service using
Ctrl+C.

9. Clean up, commit your changes to your local Git repository in the lab branch, and return to
the master branch.

9.1. Stage the uncommitted changes using the git add command.


[student@workstation aloha]$ git add .

9.2. Commit your changes to the local branch using the git commit command.

[student@workstation aloha]$ git commit -m"completing lab cdi-jaxrs"


[lab-cdi-jaxrs 7210573] completing lab cdi-jaxrs

9.3. Switch the working copy back to the master branch to finish cleaning up.

[student@workstation aloha]$ git checkout master


Switched to branch 'master'


Lab: Implementing a Microservice with MicroProfile

In this lab, you will finish the implementation of the microservice-speaker service using
MicroProfile and deploy it to OpenShift Container Platform (OCP) using the fabric8 Maven plug-
in.

Outcomes
You should be able to implement a RESTful microservice using the JAX-RS, CDI, and JSON-P APIs
that MicroProfile provides.

Before you begin


If you have not already, use git clone to download the microprofile-conference repository
onto the workstation machine.

[student@workstation ~]$ git clone \


http://services.lab.example.com/microprofile-conference
Cloning into 'microprofile-conference'...
...output omitted...
Resolving deltas: 100% (2803/2803), done.

Then run the lab setup to begin the exercise.

[student@workstation ~]$ lab implement-microprofile setup

Steps
1. Switch the repository to the lab-implement-microprofile branch to get the correct
version of the application code for this exercise.

2. Enable JAX-RS for the application by updating the SpeakerApplication class.

3. Update the getSpeakersFile method in the


io.microprofile.showcase.speaker.domain.VenueJavaOne2016 class to use the
JSON-P API to parse a JSON file with conference speaker data into a set of Speaker Java
objects. Be sure to fix all of the comments marked with //TODO in the method.

4. Update the io.microprofile.showcase.speaker.rest.ResourceSpeaker REST


service class to match the following attributes, using JAX-RS and CDI annotations:

• It should be an application-scoped CDI managed bean.

• It should specify at the class-level that it produces a content type of application/json.

• It should set a class-level relative path of "/".

• It should use CDI injection to obtain an instance of the SpeakerDAO class as a member
variable.

• It should inject the UriInfo so that the service can use this information to produce
dynamic URLs for its endpoints that are relative to where the client sent the HTTP
request.


• It should map incoming HTTP GET method requests to invoke the retrieveAll()
method.

• It should map incoming HTTP POST method requests that are to a relative path of /add to
invoke the add(Speaker speaker) method.

• It should map incoming HTTP DELETE method requests that are to a relative path of /
remove/id, where id is a parameter to invoke the remove(String id) method.

• It should map incoming HTTP PUT method requests that are to a relative path of /update
to invoke the update(Speaker speaker) method.

5. Create a new OCP project named lab-implement-microprofile, then deploy the


microservice to OCP using the fabric8 Maven plug-in.

6. Test the HTTP GET method that invokes the retrieveAll method using the RESTClient
Firefox plug-in.

Use the oc status command to find the name of the route connected to the speaker
microservice deployment and copy this value to the clipboard.

7. Verify that the output from the HTTP GET method invocation returns some entries.

8. Test the HTTP POST method that invokes the add method using the RESTClient Firefox
plug-in. Use the previously captured URL to invoke the microservice. A JSON Speaker
entity representation is available in the /home/student/JB283/labs/implement-
microprofile/json.txt file. Take note of the id provided by the JSON response for the
following steps.

9. Check that the output from the method execution is successful and it returns a new
Speaker JSON entity, such as:

{"id":"7f59e4cc-3665-4210-94b2-162ce95551c9","title":"Mr.",
"nameFirst":"Test","nameLast":"User","organization":"Tester
Inc.","biography":"Lorem ipsum dolor sit amet, consectetur adipiscing
elit. Nullam commodo eget nisl eu fermentum. Fusce vitae diam
fringilla, tincidunt dolor in, condimentum","picture":"assets/images/
unknown.jpg","twitterHandle":"@test_user","links":{"add":"http://
microservice-speaker-lab-implement-microprofile.apps.lab.example.com/
speaker/","search":"http://microservice-speaker-lab-implement-
microprofile.apps.lab.example.com/speaker/","self":"http://microservice-
speaker-lab-implement-microprofile.apps.lab.example.com/speaker/
retrieve/7f59e4cc-3665-4210-94b2-162ce95551c9","update":"http://microservice-
speaker-lab-implement-microprofile.apps.lab.example.com/speaker/","remove":"http://
microservice-speaker-lab-implement-microprofile.apps.lab.example.com/speaker/
remove/7f59e4cc-3665-4210-94b2-162ce95551c9"}}

10. Test the HTTP PUT method that invokes the update method using the RESTClient Firefox
plug-in. Use the previously captured URL to invoke the microservice. A JSON Speaker
entity representation is available in the /home/student/JB283/labs/implement-
microprofile/json2.txt file. Update the FIXME value with the ID generated in the
previous step.

11. Check that the output from the method execution is successful and that it returns a new
Speaker JSON entity, such as:

{"id":"7f59e4cc-3665-4210-94b2-162ce95551c9","title":"Mr.","nameFirst":"TestUpdate",
"nameLast":"UserUpdate","organization":"Tester Inc.","biography":"Lorem ipsum dolor
sit amet, consectetur adipiscing elit. Nullam commodo eget nisl eu fermentum.
Fusce vitae diam fringilla, tincidunt dolor in, condimentum","picture":"assets/
images/unknown.jpg","twitterHandle":"@test_user","links":{"add":"http://
microservice-speaker-lab-implement-microprofile.apps.lab.example.com/
speaker/","search":"http://microservice-speaker-lab-implement-
microprofile.apps.lab.example.com/speaker/","self":"http://microservice-
speaker-lab-implement-microprofile.apps.lab.example.com/speaker/
retrieve/7f59e4cc-3665-4210-94b2-162ce95551c9","update":"http://microservice-
speaker-lab-implement-microprofile.apps.lab.example.com/speaker/","remove":"http://
microservice-speaker-lab-implement-microprofile.apps.lab.example.com/speaker/
remove/7f59e4cc-3665-4210-94b2-162ce95551c9"}}

12. Test the HTTP DELETE method that invokes the remove method using the RESTClient
Firefox plug-in. Use the previously captured URL to invoke the microservice. Append to the
URL the id attribute from the previous step.

13. Verify that the output from the method execution is successful.

14. Grade the lab.

[student@workstation ~]$ lab implement-microprofile grade

15. Clean up the OCP project, commit your changes to your local Git repository in the lab
branch, and return to the master branch.

15.1. Delete the OCP project lab-implement-microprofile to undeploy the service and
remove the other OCP resources.

[student@workstation microservice-speaker]$ oc delete project \


lab-implement-microprofile
project "lab-implement-microprofile" deleted

15.2.Stage the uncommitted changes using the git add command.

[student@workstation microservice-speaker]$ git add .

15.3.Commit your changes to the local branch using the git commit command.

[student@workstation microservice-speaker]$ git commit \


-m"completing lab implement-microprofile"
[lab-implement-microprofile 7210573] completing lab implement-microprofile

15.4.Switch the working copy back to the master branch to finish cleaning up.

[student@workstation microservice-speaker]$ git checkout master


Switched to branch 'master'


Solution
In this lab, you will finish the implementation of the microservice-speaker service using
MicroProfile and deploy it to OpenShift Container Platform (OCP) using the fabric8 Maven plug-
in.

Outcomes
You should be able to implement a RESTful microservice using the JAX-RS, CDI, and JSON-P APIs
that MicroProfile provides.

Before you begin


If you have not already, use git clone to download the microprofile-conference repository
onto the workstation machine.

[student@workstation ~]$ git clone \


http://services.lab.example.com/microprofile-conference
Cloning into 'microprofile-conference'...
...output omitted...
Resolving deltas: 100% (2803/2803), done.

Then run the lab setup to begin the exercise.

[student@workstation ~]$ lab implement-microprofile setup

Steps
1. Switch the repository to the lab-implement-microprofile branch to get the correct
version of the application code for this exercise.

1.1. Switch to the branch using the git checkout command.

[student@workstation ~]$ cd microprofile-conference


[student@workstation microprofile-conference]$ git checkout \
lab-implement-microprofile
Switched to branch 'lab-implement-microprofile'

1.2. Use the git status command to ensure that you are on the correct branch.

[student@workstation microprofile-conference]$ git status


# On branch lab-implement-microprofile
nothing to commit, working directory clean

2. Enable JAX-RS for the application by updating the SpeakerApplication class.

2.1. In JBoss Developer Studio, open the SpeakerApplication class by expanding


the microservice-speaker item in the Project Explorer tab in the left pane of
JBoss Developer Studio, then click microservice-speaker > Java Resources > src/
main/java > io.microprofile.showcase.speaker.rest to expand it. Double-click the
SpeakerApplication.java file.

2.2. Set the application path for this REST application to /speaker using the
@ApplicationPath annotation.

//TODO Set a root path of "/speaker" for the entire application


@ApplicationPath("/speaker")
//TODO Enable JaxRs by extending the Application superclass
public class SpeakerApplication {

2.3. Update this class to extend the javax.ws.rs.core.Application superclass.

//TODO Enable JaxRs by extending the Application superclass


public class SpeakerApplication extends Application {

2.4. Save your changes to the file using Ctrl+S.

3. Update the getSpeakersFile method in the


io.microprofile.showcase.speaker.domain.VenueJavaOne2016 class to use the
JSON-P API to parse a JSON file with conference speaker data into a set of Speaker Java
objects. Be sure to fix all of the comments marked with //TODO in the method.

private Set<Speaker> getSpeakersFile() throws IOException {


Set<Speaker> speakers = new HashSet<>();
final InputStream is = this.getClass().getResourceAsStream("/ConferenceData.json");
//TODO Create a JsonReaderFactory
JsonReaderFactory factory = null;
//TODO Create a JsonReader for the InputStream 'is' using the JsonReaderFactory
JsonReader reader = null;
//TODO Create a JsonArray using the JsonReader
JsonArray speakerList = null;

for(JsonValue item : speakerList) {


JsonObject speaker = (JsonObject) item;
//See Speaker(JsonObject obj) constructor for conversion details
Speaker speakerObj = new Speaker(speaker);
speakers.add(speakerObj);
}

return speakers;

3.1. Open the VenueJavaOne2016 class by expanding the


io.microprofile.showcase.speaker.domain package and then double-click the
VenueJavaOne2016.java file.

3.2. Create the JsonReaderFactory instance with the default configuration using the
Json.createReaderFactory() method.

//TODO Create a JsonReaderFactory with the default configuration


JsonReaderFactory factory = Json.createReaderFactory(null);

3.3. Create a JsonReader by passing the InputStream object into the


createReader(InputStream is) method.

//TODO Create a JsonReader for the InputStream 'is' using the JsonReaderFactory


JsonReader reader = factory.createReader(is);

3.4. Create the JsonArray using the readArray() method on the JsonReader instance.

//TODO Create a JsonArray using the JsonReader


JsonArray speakerList = reader.readArray();

3.5. Save your changes to the file using Ctrl+S.

4. Update the io.microprofile.showcase.speaker.rest.ResourceSpeaker REST


service class to match the following attributes, using JAX-RS and CDI annotations:

• It should be an application-scoped CDI managed bean.

• It should specify at the class-level that it produces a content type of application/json.

• It should set a class-level relative path of "/".

• It should use CDI injection to obtain an instance of the SpeakerDAO class as a member
variable.

• It should inject the UriInfo so that the service can use this information to produce
dynamic URLs for its endpoints that are relative to where the client sent the HTTP
request.

• It should map incoming HTTP GET method requests to invoke the retrieveAll()
method.

• It should map incoming HTTP POST method requests that are to a relative path of /add to
invoke the add(Speaker speaker) method.

• It should map incoming HTTP DELETE method requests that are to a relative path of /
remove/id, where id is a parameter to invoke the remove(String id) method.

• It should map incoming HTTP PUT method requests that are to a relative path of /update
to invoke the update(Speaker speaker) method.

4.1. Open the ResourceSpeaker class by expanding the


io.microprofile.showcase.speaker.rest package and then double-click the
ResourceSpeaker.java file.

4.2. Use the @ApplicationScoped CDI annotation to specify that the class is an
application-scoped CDI managed bean.

//TODO make this an application-scoped CDI managed bean


@ApplicationScoped
//TODO specify that this service produces and consumes content with a type of "application/json"
//TODO set the relative path of this service to "/"
public class ResourceSpeaker {

4.3. Use the @Consumes and @Produces JAX-RS annotations to specify the content type as
application/json.


//TODO make this an application-scoped CDI managed bean


@ApplicationScoped
//TODO specify that this service produces and consumes content with a type of "application/json"
@Produces(MediaType.APPLICATION_JSON)
@Consumes(MediaType.APPLICATION_JSON)
//TODO set the relative path of this service to "/"
public class ResourceSpeaker {

4.4. Use the @Path JAX-RS annotation to specify a relative path of / for this service.

//TODO make this an application-scoped CDI managed bean


@ApplicationScoped
//TODO specify that this service produces and consumes content with a type of "application/json"
@Produces({MediaType.APPLICATION_JSON})
@Consumes({MediaType.APPLICATION_JSON})
//TODO set the relative path of this service to "/"
@Path("/")
public class ResourceSpeaker {

4.5. Use the @Inject CDI annotation to inject an instance of the SpeakerDAO class that
the service needs.

//TODO Inject this using CDI


@Inject
private SpeakerDAO speakerDAO;

4.6. Use the @Context JAX-RS annotation to inject the UriInfo object that the service
needs.

//TODO Inject this using the request context


@Context
private UriInfo uriInfo;

4.7. Use the @GET JAX-RS annotation to map HTTP GET method requests to the
retrieveAll() method.

//TODO Map HTTP GET requests


@GET
public Collection<Speaker> retrieveAll() {
final Collection<Speaker> speakers = this.speakerDAO.getSpeakers();
speakers.forEach(this::addHyperMedia);
return speakers;
}

4.8. Use the @POST JAX-RS annotation to map HTTP POST method requests to the
add(Speaker speaker) method, and specify the relative path of the method as /
add.

//TODO Map HTTP POST requests


@POST
//TODO Specify a relative path of "/add"


@Path("/add")
public Speaker add(final Speaker speaker) {
return this.addHyperMedia(this.speakerDAO.persist(speaker));
}

4.9. Use the @DELETE JAX-RS annotation to map HTTP DELETE method requests to the
remove(Speaker speaker) method, and specify the relative path of the method as /
remove/id, where id is a parameter that is mapped into the method parameter using
the @PathParam JAX-RS annotation.

//TODO Map HTTP DELETE requests


@DELETE
//TODO Specify a relative path of "/remove/{id}"
@Path("/remove/{id}")
//TODO Specify that the "id" parameter is mapped into the String parameter of the method
public void remove(@PathParam("id") final String id) {
this.speakerDAO.remove(id);
}

4.10.Use the @PUT JAX-RS annotation to map HTTP PUT method requests to the
update(Speaker speaker) method, and specify the relative path of the method as /
update.

//TODO Map HTTP PUT requests


@PUT
//TODO Specify a relative path of "/update"
@Path("/update")
public Speaker update(final Speaker speaker) {
return this.addHyperMedia(this.speakerDAO.update(speaker));
}

4.11. Save your changes to the file using Ctrl+S.

5. Create a new OCP project named lab-implement-microprofile, then deploy the


microservice to OCP using the fabric8 Maven plug-in.

5.1. If you are not already logged into to the OCP cluster, use the oc login -u
developer -p redhat command to authenticate yourself.

[student@workstation ~]$ oc login -u developer -p redhat


Login successful.

You don't have any projects. You can try to create a new project, by running

oc new-project <projectname>

5.2. Navigate to the microservice-speaker folder:

[student@workstation ~]$ cd ~/microprofile-conference/microservice-speaker

5.3. Create a new project named lab-implement-microprofile using the oc new-


project command. Log in as developer user with the password redhat to the OCP
master host https://master.lab.example.com if you are not logged in.


[student@workstation microservice-speaker]$ oc new-project \


lab-implement-microprofile
Now using project "lab-implement-microprofile" on server "https://
master.lab.example.com:443".

You can add applications to this project with the 'new-app' command. For
example, try:

oc new-app centos/ruby-22-centos7~https://github.com/openshift/ruby-ex.git

to build a new example application in Ruby.

5.4. Use the fabric8:deploy goal to deploy the microservice to OCP.

[student@workstation microservice-speaker]$ mvn fabric8:deploy


[INFO] Scanning for projects...
[INFO]
[INFO] ------------------------------------------------------------------------
[INFO] Building Conference :: Speaker 1.0.0-SNAPSHOT
[INFO] ------------------------------------------------------------------------

...Output omitted...

[INFO]
[INFO] <<< fabric8-maven-plugin:3.5.34:deploy (default-cli) < install @
microservice-speaker <<<
[INFO]
[INFO] --- fabric8-maven-plugin:3.5.34:deploy (default-cli) @ microservice-
speaker ---
[INFO] F8: Using OpenShift at https://master.lab.example.com:443/ in namespace
lab-implement-microprofile with manifest /home/student/microprofile-conference/
microservice-speaker/target/classes/META-INF/fabric8/openshift.yml
[INFO] OpenShift platform detected
[INFO] Using project: lab-implement-microprofile
[INFO] Creating a Service from openshift.yml namespace lab-implement-
microprofile name microservice-speaker
[INFO] Created Service: microservice-speaker/target/fabric8/applyJson/lab-
implement-microprofile/service-microservice-speaker.json
[INFO] Using project: lab-implement-microprofile
[INFO] Creating a DeploymentConfig from openshift.yml namespace lab-implement-
microprofile name microservice-speaker
[INFO] Created DeploymentConfig: microservice-speaker/target/fabric8/applyJson/
lab-implement-microprofile/deploymentconfig-microservice-speaker.json
[INFO] Creating Route lab-implement-microprofile:microservice-speaker host: null
[INFO] F8: HINT: Use the command `oc get pods -w` to watch your pods start up
[INFO] ------------------------------------------------------------------------
[INFO] BUILD SUCCESS
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 01:59 min
[INFO] Finished at: 2018-02-23T09:22:57-05:00
[INFO] Final Memory: 63M/640M
[INFO] ------------------------------------------------------------------------


Note
If your build fails during the tests, review the previous steps and make sure you have
completed all the required tasks. The build completes successfully only when the previous
steps are complete and the service is functioning properly.

6. Test the HTTP GET method that invokes the retrieveAll method using the RESTClient
Firefox plug-in.

Use the oc status command to find the name of the route connected to the speaker
microservice deployment and copy this value to the clipboard.

[student@workstation microservice-speaker]$ oc status


In project lab-implement-microprofile on server https://master.lab.example.com:443

http://microservice-speaker-lab-implement-microprofile.apps.lab.example.com (svc/
microservice-speaker)
dc/microservice-speaker deploys istag/microservice-speaker:latest <-
bc/microservice-speaker-s2i source builds uploaded code on
registry.lab.example.com:5000/redhat-openjdk-18/openjdk18-openshift:latest
deployment #1 deployed 4 minutes ago - 1 pod

View details with 'oc describe <resource>/<name>' or list everything with 'oc get
all'.

6.1. Start Firefox on the workstation VM and click the RESTClient plug-in in the browser's
toolbar.

6.2. Test the retrieveAll() function of the microservice.

Select GET as the Method. In the URL form, paste the value http://microservice-
speaker-lab-implement-microprofile.apps.lab.example.com from the
clipboard and append the relative URI for the microservice /speaker.

6.3. Click Send.

7. Verify that the output from the HTTP GET method invocation returns some entries.

7.1. Verify in the Headers tab that the Status Code is 200 OK.

7.2. Verify in the Response tab that the response matches the following. Only the first JSON
result is included below, but there are hundreds of entries:

[{"id":"25","title":"Mr.","nameFirst":"Abbot",
"nameLast":"Blanchard","organization":"n/a","biography":"Lorem ipsum dolor
sit amet, consectetur adipiscing elit. Nullam commodo eget nisl eu fermentum.
Phasellus tellus elit, eleifend vel bibendum quis, hendrerit sit amet
enim. Donec nulla tortor, consectetur sed massa sed, luctus aliquet diam.
Fusce vitae iaculis risus, sed consectetur dolor. Donec mollis rhoncus
nisl porttitor ullamcorper. Suspendisse egestas diam ornare massa venenatis
efficitur. Mauris neque risus, facilisis vel consectetur at, tristique
nec eros. Sed scelerisque velit eget pulvinar rutrum. Quisque accumsan
ligula at dui commodo, eu fringilla metus condimentum. Etiam facilisis


tempus mi a egestas. Vivamus vitae ullamcorper lorem. Fusce vitae diam


fringilla, tincidunt dolor in, condimentum tellus.","picture":"assets/
images/unknown.jpg","twitterHandle":"@abbot_blanchard","links":{"add":"http://
microservice-speaker-lab-implement-microprofile.apps.lab.example.com/
speaker/","search":"http://microservice-speaker-lab-implement-
microprofile.apps.lab.example.com/speaker/","self":"http://microservice-
speaker-lab-implement-microprofile.apps.lab.example.com/speaker/
retrieve/25","update":"http://microservice-speaker-lab-implement-
microprofile.apps.lab.example.com/speaker/",
"remove":"http://microservice-speaker-lab-implement-
microprofile.apps.lab.example.com/speaker/remove/25"}},...

8. Test the HTTP POST method that invokes the add method using the RESTClient Firefox
plug-in. Use the previously captured URL to invoke the microservice. A JSON Speaker
entity representation is available in the /home/student/JB283/labs/implement-
microprofile/json.txt file. Take note of the id provided by the JSON response for the
following steps.

8.1. Test the add() function of the microservice.

Select POST as the Method. Update the URL field to
http://microservice-speaker-lab-implement-microprofile.apps.lab.example.com/speaker/add.

8.2. In the Body section of the request, add the following JSON (this can be copied and pasted
from /home/student/JB283/labs/implement-microprofile/json.txt)
representation of a Speaker entity:

{
"title":"Mr.",
"nameFirst":"Test",
"nameLast":"User",
"organization":"Tester Inc.",
"biography":"Lorem ipsum dolor sit amet, consectetur adipiscing elit. Nullam
commodo eget nisl eu fermentum. Fusce vitae diam fringilla, tincidunt dolor in,
condimentum",
"picture":"assets/images/unknown.jpg",
"twitterHandle":"@test_user"
}

8.3. In the top toolbar, click Headers, and select Custom Header to add a new custom
header to the request.

8.4. In the custom header dialog, enter the following information:


• Name: Content-Type

• Value: application/json

Click Okay.

8.5. Click Send.

9. Check that the output from the method execution is successful and it returns a new
Speaker JSON entity, such as:


{"id":"7f59e4cc-3665-4210-94b2-162ce95551c9","title":"Mr.",
"nameFirst":"Test","nameLast":"User","organization":"Tester
Inc.","biography":"Lorem ipsum dolor sit amet, consectetur adipiscing
elit. Nullam commodo eget nisl eu fermentum. Fusce vitae diam
fringilla, tincidunt dolor in, condimentum","picture":"assets/images/
unknown.jpg","twitterHandle":"@test_user","links":{"add":"http://
microservice-speaker-lab-implement-microprofile.apps.lab.example.com/
speaker/","search":"http://microservice-speaker-lab-implement-
microprofile.apps.lab.example.com/speaker/","self":"http://microservice-
speaker-lab-implement-microprofile.apps.lab.example.com/speaker/
retrieve/7f59e4cc-3665-4210-94b2-162ce95551c9","update":"http://microservice-
speaker-lab-implement-microprofile.apps.lab.example.com/speaker/","remove":"http://
microservice-speaker-lab-implement-microprofile.apps.lab.example.com/speaker/
remove/7f59e4cc-3665-4210-94b2-162ce95551c9"}}

9.1. Verify in the Headers tab that the Status Code is 200 OK.

9.2. Verify in the Response tab that the response matches the following:

{"id":"7f59e4cc-3665-4210-94b2-162ce95551c9","title":"Mr.","nameFirst":"Test",
"nameLast":"User","organization":"Tester Inc.","biography":"Lorem ipsum dolor
sit amet, consectetur adipiscing elit. Nullam commodo eget nisl eu fermentum.
Fusce vitae diam fringilla, tincidunt dolor in, condimentum","picture":"assets/
images/unknown.jpg","twitterHandle":"@test_user","links":{"add":"http://
microservice-speaker-lab-implement-microprofile.apps.lab.example.com/
speaker/","search":"http://microservice-speaker-lab-implement-
microprofile.apps.lab.example.com/speaker/","self":"http://microservice-
speaker-lab-implement-microprofile.apps.lab.example.com/speaker/
retrieve/7f59e4cc-3665-4210-94b2-162ce95551c9","update":"http://
microservice-speaker-lab-implement-microprofile.apps.lab.example.com/
speaker/","remove":"http://microservice-speaker-lab-
implement-microprofile.apps.lab.example.com/speaker/
remove/7f59e4cc-3665-4210-94b2-162ce95551c9"}}

9.3. Take note of the id field from the JSON response for subsequent HTTP requests.

10. Test the HTTP PUT method that invokes the update method using the RESTClient Firefox
plug-in. Use the previously captured URL to invoke the microservice. A JSON Speaker
entity representation is available in the /home/student/JB283/labs/implement-
microprofile/json2.txt file. Update the FIXME value with the ID generated in the
previous step.

10.1. Test the update() function of the microservice.

Select PUT as the Method. In the URL form, paste the value http://microservice-
speaker-lab-implement-microprofile.apps.lab.example.com from the
clipboard and append the relative URI for the service /speaker/update.

10.2. In the Body section of the request, add the following updated JSON (this can be copied
and pasted from /home/student/JB283/labs/implement-microprofile/json2.txt)
representation of a Speaker entity. Update the id field using the ID
parameter from the previous step:

{
"id": "FIXME",
"title":"Mr.",


"nameFirst":"TestUpdate",
"nameLast":"UserUpdate",
"organization":"Tester Inc.",
"biography":"Lorem ipsum dolor sit amet, consectetur adipiscing elit. Nullam
commodo eget nisl eu fermentum. Fusce vitae diam fringilla, tincidunt dolor in,
condimentum",
"picture":"assets/images/unknown.jpg",
"twitterHandle":"@test_user"
}

11. Check that the output from the method execution is successful and that it returns a new
Speaker JSON entity, such as:

{"id":"7f59e4cc-3665-4210-94b2-162ce95551c9","title":"Mr.","nameFirst":"TestUpdate",
"nameLast":"UserUpdate","organization":"Tester Inc.","biography":"Lorem ipsum dolor
sit amet, consectetur adipiscing elit. Nullam commodo eget nisl eu fermentum.
Fusce vitae diam fringilla, tincidunt dolor in, condimentum","picture":"assets/
images/unknown.jpg","twitterHandle":"@test_user","links":{"add":"http://
microservice-speaker-lab-implement-microprofile.apps.lab.example.com/
speaker/","search":"http://microservice-speaker-lab-implement-
microprofile.apps.lab.example.com/speaker/","self":"http://microservice-
speaker-lab-implement-microprofile.apps.lab.example.com/speaker/
retrieve/7f59e4cc-3665-4210-94b2-162ce95551c9","update":"http://microservice-
speaker-lab-implement-microprofile.apps.lab.example.com/speaker/","remove":"http://
microservice-speaker-lab-implement-microprofile.apps.lab.example.com/speaker/
remove/7f59e4cc-3665-4210-94b2-162ce95551c9"}}

11.1. Click Send.

11.2. Verify in the Headers tab that the Status Code is 200 OK.

11.3. Verify in the Response tab that the response matches the following:

{"id":"7f59e4cc-3665-4210-94b2-162ce95551c9","title":"Mr.",
"nameFirst":"TestUpdate","nameLast":"UserUpdate","organization":"Tester
Inc.","biography":"Lorem ipsum dolor sit amet, consectetur adipiscing
elit. Nullam commodo eget nisl eu fermentum. Fusce vitae diam
fringilla, tincidunt dolor in, condimentum","picture":"assets/images/
unknown.jpg","twitterHandle":"@test_user","links":{"add":"http://
microservice-speaker-lab-implement-microprofile.apps.lab.example.com/
speaker/","search":"http://microservice-speaker-lab-implement-
microprofile.apps.lab.example.com/speaker/","self":"http://microservice-
speaker-lab-implement-microprofile.apps.lab.example.com/speaker/
retrieve/7f59e4cc-3665-4210-94b2-162ce95551c9","update":"http://
microservice-speaker-lab-implement-microprofile.apps.lab.example.com/
speaker/","remove":"http://microservice-speaker-lab-
implement-microprofile.apps.lab.example.com/speaker/
remove/7f59e4cc-3665-4210-94b2-162ce95551c9"}}

12. Test the HTTP DELETE method that invokes the remove method using the RESTClient
Firefox plug-in. Use the previously captured URL to invoke the microservice. Append to the
URL the id attribute from the previous step.

12.1. Test the remove() function of the microservice.

Select DELETE as the Method. In the URL form, paste the value
http://microservice-speaker-lab-implement-microprofile.apps.lab.example.com from the clipboard
and append the relative URI for the service
/speaker/remove/7f59e4cc-3665-4210-94b2-162ce95551c9.

13. Verify that the output from the method execution is successful.

13.1. Click Send.

13.2.Verify in the Headers tab that the Status Code is 204 No Content.

14. Grade the lab.

[student@workstation ~]$ lab implement-microprofile grade

15. Clean up the OCP project, commit your changes to your local Git repository in the lab
branch, and return to the master branch.

15.1. Delete the OCP project lab-implement-microprofile to undeploy the service and
remove the other OCP resources.

[student@workstation microservice-speaker]$ oc delete project \


lab-implement-microprofile
project "lab-implement-microprofile" deleted

15.2.Stage the uncommitted changes using the git add command.

[student@workstation microservice-speaker]$ git add .

15.3.Commit your changes to the local branch using the git commit command.

[student@workstation microservice-speaker]$ git commit \


-m"completing lab implement-microprofile"
[lab-implement-microprofile 7210573] completing lab implement-microprofile

15.4.Switch the working copy back to the master branch to finish cleaning up.

[student@workstation microservice-speaker]$ git checkout master


Switched to branch 'master'


Summary
In this chapter, you learned:

• The MicroProfile specification is a joint venture between the Eclipse Foundation and many
large vendors, including Red Hat, to define a baseline platform definition that optimizes
Java for a microservices-based architecture and provides portability of MicroProfile-based
applications across multiple runtimes.

• The initial 1.0 version of the MicroProfile specification included only the JAX-RS, CDI, and
JSON-P specifications from Java EE, which is the absolute bare minimum required to build a
microservice in Java.

• The 1.2 version of the MicroProfile specification, released October 2, 2017, includes the
following APIs:

◦ CDI 1.2

◦ JSON-P 1.0

◦ JAX-RS 2.0

◦ Config 1.1

◦ Fault Tolerance 1.0

◦ Health 1.0

◦ JWT Propagation 1.0

◦ Metrics 1.0

• WildFly Swarm offers an innovative approach to packaging and running Java EE applications
with their server runtimes in a single Java Archive (JAR), also known as UberJars.

• WildFly Swarm has fully implemented the MicroProfile 1.2 specification as of the 2017.12.1
release.

• To use MicroProfile with WildFly Swarm, in a project that uses Maven to manage dependencies,
you must include the WildFly Swarm bill of materials (BOM) in the dependencyManagement
section of your pom file.

CHAPTER 4

TESTING MICROSERVICES

Overview

Goal: Implement unit and integration tests for microservices.

Objectives:
• Implement a microservice test case using Arquillian.
• Implement a microservice test using mock frameworks.

Sections:
• Testing Microservices with Arquillian (and Guided Exercise)
• Testing Microservices with Mock Frameworks (and Guided Exercise)

Lab: Testing Microservices


Testing Microservices with Arquillian

Objectives
After completing this section, students should be able to:

• Implement a microservice test case using Arquillian.

• Create a JAR file with Shrinkwrap in a test case.

Comparing Unit Tests and Integration Tests


In an agile development process, any change or new feature added to the existing microservices
may potentially break the application functionality. Developers use testing frameworks such as
JUnit and TestNG to create unit tests to verify functionality of small pieces of self-contained
code.
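
For example, a minimal JUnit unit test of the PersonParser class from the previous chapter
needs no running server at all; the test class name and assertion shown here are illustrative.

// A plain unit test: no container, no deployment, only the class under test.
public class PersonParserTest {

    @Test
    public void parseExtractsFirstName() {
        PersonParser parser = new PersonParser();
        Person person = parser.parse(
                "{\"firstName\":\"Test\",\"lastName\":\"User\",\"location\":\"Virginia\"}");
        assertEquals("Test", person.getFirstName());
    }
}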

However, when external systems are accessed by an application, such as databases or external
services, creating unit tests is not enough. To test communication among multiple systems, a
developer creates integration tests that exercise the systems as a whole.

To reduce the amount of code needed to develop such tests, use a testing framework extension
that manages the system under test. Arquillian is a testing framework extension that allows the
underlying application server infrastructure of a microservice, such as WildFly Swarm, to be
started and controlled during the test. This provides the resources needed to run integration
tests without complex test setup code.

Implementing an Integration Test with Arquillian


The first step to building an integration test is to annotate the test class with the @RunWith
annotation and pass the Arquillian.class class as the test runner parameter. This
annotation specifies that the test should run as an Arquillian integration test.

To run Arquillian tests on Wildfly Swarm, Arquillian requires that you generate the application
package, typically a Web Application Resource (WAR) file, that will be deployed in the Wildfly
Swarm container. Use the Shrinkwrap library to build this deployable WAR file. Shrinkwrap
provides an API that allows you to create the deployable package as part of the integration test
before the test container is started.

To use Shrinkwrap, a static method in the test class must be marked with the @Deployment
annotation and return an instance of the WebArchive class. This annotation tells Arquillian
to use this method to build the WAR during test execution before starting the Wildfly Swarm
container. If the project uses Maven to manage its dependencies, then this annotated method
must use the Maven.resolver static method to read the pom.xml file for the project
and discover all external JAR dependencies needed by the application to run. Download
a list of any external JAR files used by the project from the Maven repository using the
importDependencies method.

After resolving dependencies, use the ShrinkWrap.create static method to bundle all
dependencies, classes, and configuration from the project to generate a Java-compliant file
(WebArchive.class). To achieve this, use the addPackages method to add the packages and
classes from the project that are required to run the test to the WAR file. Then, to activate CDI,
use the addAsWebInfResource method to add an empty beans.xml file to the web archive.
Next, use the addAsLibraries method to include the list of dependencies downloaded from
Maven to the final file.


Finally, to trigger WildFly Swarm, configure the test server by setting parameters such as the
port number in a static method that is marked with the @CreateSwarm annotation. This method
must return a Swarm object with the necessary parameters set.

In some test methods, the runtime environment information, such as the URL where the
REST API is accessible, may be needed. To solve this problem, Arquillian provides the
@ArquillianResource annotation to inject runtime information and use it in the test
methods.

The following example is a complete integration test class written using Arquillian and
Shrinkwrap. This runs tests inside a running WildFly Swarm container:

@RunWith(Arquillian.class)
public class ResourceSpeakerTest {

    private final Logger log = Logger.getLogger(ResourceSpeakerTest.class.getName());

    @ArquillianResource
    private URL url;

    @Deployment
    public static WebArchive deploy() {

        File[] deps = Maven.resolver()
                .loadPomFromFile("pom.xml")
                .importDependencies(ScopeType.COMPILE, ScopeType.RUNTIME)
                .resolve().withTransitivity().asFile();

        WebArchive wrap = ShrinkWrap.create(WebArchive.class,
                ResourceSpeakerTest.class.getName() + ".war")
                .addPackages(true, "io.microprofile.showcase.speaker")
                .addAsWebInfResource(EmptyAsset.INSTANCE, "beans.xml")
                .addAsResource(new File("src/test/resources/ConferenceData.json"))
                .addAsManifestResource("META-INF/microprofile-config.properties",
                        "microprofile-config.properties")
                .addAsLibraries(deps);
        return wrap;
    }

    @CreateSwarm
    public static Swarm newContainer() throws Exception {
        Properties properties = new Properties();
        properties.put("swarm.http.port", 8080);
        properties.put("java.util.logging.manager", "org.jboss.logmanager.LogManager");
        Swarm swarm = new Swarm(properties);
        return swarm.withProfile("defaults");
    }

    @Test
    public void testGet() {
        ...
    }
}


Customizes execution of a test case by enabling extensions from Arquillian with the
@RunWith annotation from JUnit.
Injects information from the runtime environment, such as the URL of the REST API.
Annotates, with @Deployment, the method responsible for bundling the application.
Gets all the API dependencies from the current project.
Creates a web archive (WAR) file.
Includes all the classes and packages from the project.
Adds an empty beans.xml file to trigger CDI extensions.
Adds the API dependencies from the project.
Creates the Swarm configuration needed for testing.
The following arquillian.xml file provides some extra configuration, such as ports and host
names, that must be externalized from the test source code:

<arquillian>

<container qualifier="jboss-embedded" default="true">


<configuration>

<property name="managementPort">10990</property>
</configuration>
</container>
</arquillian>

Identifies the container used to test the application.


Configures the port used for management purposes in WildFly.
Store the arquillian.xml file in the src/test/resources directory of the project.

Finally, to run the test, a pom.xml file used by Maven must declare the dependencies used by
Arquillian and Shrinkwrap.

<project>
...
<artifactId>microservice-speaker</artifactId>
<name>Conference :: Speaker</name>
<description>The Speaker microservice resource</description>
<packaging>war</packaging>
<dependencies>
<dependency>

<groupId>org.wildfly.swarm</groupId>
<artifactId>arquillian</artifactId>
<scope>test</scope>
</dependency>
<dependency>

<groupId>org.jboss.shrinkwrap.resolver</groupId>
<artifactId>shrinkwrap-resolver-impl-maven</artifactId>
<scope>test</scope>
</dependency>
...
</dependencies>
...
</project>

Imports the org.wildfly.swarm:arquillian artifact with all the dependencies from Arquillian.


Imports the org.jboss.shrinkwrap.resolver:shrinkwrap-resolver-impl-maven artifact with all the
dependencies from Shrinkwrap.

Comparing In-Container Testing and Client Testing


A developer may need to execute a test under different conditions:

• Check the external outcome from the test execution: In microservices, a developer may need to
check the output from a REST API call, and this is only possible if the application is running
and you call the API as an ordinary client.

• Inspect the test execution running inside the container: A developer may need to inspect the
outcome from code execution that is generating a different output than expected.

In both scenarios, the microservices must be running, but the latter evaluates the outcome
before it is transformed into human-readable output.

Arquillian supports both scenarios, but it executes in-container tests by default. To run a test
as a client, the developer must annotate the test method with @RunAsClient. For client-side
testing, use a client library such as RESTEasy or REST Assured.

In the following source code, the test method is annotated with @RunAsClient, and it uses the
JAX-RS client API (implemented by RESTEasy) to call the REST API.

@ArquillianResource
private URL url;

@Test
@RunAsClient
public void testGet() {
    Client client = ClientBuilder.newBuilder().build();
    Response response = client
            .target(this.url.toExternalForm() + endpoint)
            .request(MediaType.APPLICATION_JSON_TYPE)
            .get();
    ...
}

Alternatively, to run an in-container test, the object under test must be injected using the
@javax.inject.Inject annotation. The test method cannot have the @RunAsClient
annotation.

@ArquillianResource
private URL url;

@Inject
private ShoppingCartService service;

@Test
public void testCart() {
ShoppingCart cart = service.checkOut();
...
}

Demonstration: Building an Arquillian Deployment Archive
1. Log in to the workstation VM as student using student as the password.


2. Check out the demo-arquillian branch with Git by running the following commands:

[student@workstation ~]$ cd microprofile-conference


[student@workstation microprofile-conference]$ git checkout demo-arquillian

3. Inspect the @RunWith(Arquillian.class) class-level annotation from the microprofile-conference/microservice-speaker/src/main/java/io/microprofile/showcase/speaker/rest/ResourceSpeakerTest.java file. The JUnit framework processes the @RunWith annotation and starts Arquillian's infrastructure to run the integration tests.

4. Inspect the @Deployment annotation declared on the deploy method. The method builds
the WAR file used by Arquillian to start the application for testing purposes.

5. Inspect the @CreateSwarm annotation declared on the newContainer method. The method configures WildFly Swarm to run on port 8080.

6. Open the parent pom.xml file to inspect the test dependencies. In the
dependencyManagement section, look for the BOM that references the WildFly Swarm
dependencies.

7. Open the project's pom.xml file to inspect the test dependencies.

8. The arquillian artifact dependency includes all libraries needed by Arquillian.

9. The shrinkwrap-resolver-impl-maven artifact dependency includes all libraries needed by Shrinkwrap.

10. The resteasy-client artifact includes all libraries needed to invoke the REST API in the
test.

11. Open the arquillian.xml file located in src/test/resources. This file defines the
container configuration needed to start the tests.

12. Inspect the testGet method. Demonstrate how this method is called and how it calls the
REST API from the speaker microservice.

13. Run the test case in Red Hat Developer Studio by right-clicking the
ResourceSpeakerTest test case and selecting Run As JUnit Test. From the console
output, show that the application is not bundled by the test case, and that it fails.

14. Implement the deploy method responsible for bundling the application. Add the following
source code to the deploy method:

public static WebArchive deploy() {
    File[] deps = Maven.resolver()
        .loadPomFromFile("pom.xml")
        .importDependencies(ScopeType.COMPILE, ScopeType.RUNTIME)
        .resolve().withTransitivity().asFile();

    WebArchive wrap = ShrinkWrap.create(WebArchive.class,
            ResourceSpeakerTest.class.getName() + ".war")
        .addPackages(true, "io.microprofile.showcase.speaker")
        .addAsWebInfResource(EmptyAsset.INSTANCE, "beans.xml")
        .addAsResource(new File("src/test/resources/ConferenceData.json"))
        .addAsManifestResource("META-INF/microprofile-config.properties",
            "microprofile-config.properties")
        .addAsLibraries(deps);
    return wrap;
}

The source code can be found in the /home/student/JB283/labs/demo-arquillian/code.txt file.

15. Re-run the test case. This time, the tests should run without problems.

This concludes the demonstration.

References
Arquillian web site
http://arquillian.org/
JUnit web site
http://junit.org/
WildFly Swarm documentation
http://wildfly-swarm.io/documentation/


Guided Exercise: Testing Microservices with Arquillian

In this exercise, you will implement an integration test with Arquillian.

Outcomes
You should be able to develop integration tests with Arquillian.

Before you begin


If you have not already done so, use git clone to download the hello-microservices repository onto
the workstation machine.

[student@workstation ~]$ git clone http://services.lab.example.com/hello-microservices


Cloning into 'hello-microservices'...
... output omitted ...
Resolving deltas: 100% (2803/2803), done.

Then, run lab setup to begin the exercise.

[student@workstation ~]$ lab microservices-arquillian setup

Steps
1. Switch the repository to the lab-microservices-arquillian branch to get the correct
version of the application code for this exercise.

1.1. Switch to the correct branch using the git checkout command.

[student@workstation ~]$ cd hello-microservices


[student@workstation hello-microservices]$ git checkout \
lab-microservices-arquillian
... output omitted ...
Switched to a new branch 'lab-microservices-arquillian'

1.2. Use the git status command to ensure that you are on the correct branch.

[student@workstation hello-microservices]$ git status


# On branch lab-microservices-arquillian
nothing to commit, working directory clean

2. Run the test case.

2.1. In JBoss Developer Studio, open the HolaResourceFallBackIntegrationTest test case by expanding the hola item in the Project Explorer tab in the left pane, then click hola > Java Resources > src/test/java > com.redhat.training.msa.hola.rest to expand it. Double-click the HolaResourceFallBackIntegrationTest.java file.

2.2. The source code is mostly composed of comments providing you with directions. The testFallback test method must check that the REST endpoint /api/hola returns the Hola de localhost message. Currently the test is calling the fail method from JUnit, which you must fix.

2.3. Run the JUnit test case.

Right-click the HolaResourceFallBackIntegrationTest test case and select Run As > JUnit Test in JBoss Developer Studio. The JUnit tab shows output from the test case execution, and it displays a Failure Trace panel that says the testFallback method has an AssertionError. This is expected, because the fail static method is called.

3. Enable Arquillian in the test case.

3.1. Set the JUnit test runner for the test case to Arquillian. Add the @RunWith annotation
just before the class declaration. Use Arquillian.class as the annotation
parameter, as follows:

//TODO Annotate the class to support Arquillian
@RunWith(Arquillian.class)
public class HolaResourceFallBackIntegrationTest {
...

3.2. Implement the deploy method, which bundles the UberJar package.

Add the @Deployment method-level annotation.

To simplify development, the com.redhat.training.msa.hola.rest.ArquillianTestUtils helper class provides the deploy method, which bundles all dependencies needed by the Arquillian test case.

However, to run the lab, the microprofile-config.properties file should be added to the META-INF directory in the UberJar. Use the addAsManifestResource method to include this file into the archive.

The deploy method must have the following code:

//TODO Annotate the method to provide the webarchive
@Deployment
public static WebArchive deploy() {
    //TODO Delegate a call to the deploy static method from ArquillianTestUtils class
    //TODO Add the microprofile-config.properties file to the META-INF directory
    return ArquillianTestUtils.deploy()
        .addAsManifestResource(
            "META-INF/microprofile-config.properties",
            "microprofile-config.properties");
}

3.3. Implement the method that configures the WildFly Swarm runtime.

Add the @CreateSwarm method-level annotation.

To simplify development, the com.redhat.training.msa.hola.rest.ArquillianTestUtils helper class provides the newContainer method, which configures all the common parameters, such as port number and environment variables, needed by WildFly Swarm.

Add the following code to the createSwarm method:

return ArquillianTestUtils.newContainer();

The createSwarm method must have the following code:

//TODO To provide the Swarm configuration
@CreateSwarm
public static Swarm createSwarm() throws Exception {
    //TODO Delegate a call to the newContainer static method from ArquillianTestUtils class
    return ArquillianTestUtils.newContainer();
}

3.4. Save your changes to the file using Ctrl+S.

3.5. Re-run the JUnit test case.

Right-click the HolaResourceFallBackIntegrationTest test case and select Run As > JUnit Test in JBoss Developer Studio. The JUnit tab shows the output from the test case execution, and it displays a Failure Trace panel that says the testFallback method has an AssertionError exception.

This test takes longer to run than the previous execution. The startup takes longer because WildFly Swarm is initialized and loads all the fractions used by the integration test.

Warning
The test may take some time to start because Shrinkwrap downloads all the
required dependencies from the Maven remote repository.

4. Implement the test.

The testFallback method must call the /api/hola REST endpoint. In order to call it, use
the JAX-RS client APIs in the test method. The method must call the REST endpoint by using
the ClientBuilder class.

4.1. The REST endpoint URL is needed to work with the ClientBuilder class. To get the
value provided by Arquillian during test execution, declare an url attribute to the test
case and annotate it with @ArquillianResource.

//TODO Inject the URL used by Arquillian to test the application
@ArquillianResource
private URL url;

4.2. To call the REST endpoint, build a Client instance with the ClientBuilder class as
follows:

@Test
public void testFallback() {
//TODO Use the ClientBuilder class from javax.ws.rs.client.ClientBuilder class
final Client client = ClientBuilder.newBuilder().build();

4.3. To identify the REST endpoint, call the target method from the client variable. Get
the REST endpoint using the url attribute that was injected previously.

//TODO Define call the target method and pass this.url.toExternalForm()+"/api/


hola" as a parameter
WebTarget target = client.target(this.url.toExternalForm()+"/api/hola");

4.4. Call the REST endpoint using the HTTP GET method.

//TODO The REST Endpoint returns only text, set the request to get MediaType.TEXT_PLAIN and store the output to a Response object
Response response = target.request(MediaType.TEXT_PLAIN).get();

4.5. To evaluate the outputs from the test, use the assertEquals method.

//TODO Evaluate the HTTP code is 200
assertEquals(200, response.getStatus());
//TODO Evaluate the Body of the REST response with "Hola de localhost"
assertEquals("Hola de localhost", response.readEntity(String.class));

4.6. Comment out the fail method call. The test is now fully implemented and the fail method does not need to be called.

// fail("Not implemented yet!")

4.7. Save your changes to the file using Ctrl+S.

4.8. Re-run the JUnit test case.

Right-click the HolaResourceFallBackIntegrationTest test case and select Run As > JUnit Test in JBoss Developer Studio. The JUnit tab shows the output from the test case execution. This time, the entire test passes and a green bar is displayed after the test execution.

5. Clean up and commit your changes to your local Git repository in the lab branch, and return
to the master branch.

5.1. Stage the uncommitted changes using the git add command.

[student@workstation hello-microservices]$ git add .

5.2. Commit your changes to the local branch using the git commit command.

[student@workstation hello-microservices]$ git commit -m" completing lab


microservices-arquillian"

JB283-RHOAR1.0-en-1-20180517 127
Chapter 4. Testing Microservices

[lab-microservices-arquillian 7a5f023] completing lab microservices-arquillian


1 file changed, 23 insertions(+), 8 deletions(-)

5.3. Switch the working copy back to the master branch to finish cleaning up.

[student@workstation hello-microservices]$ git checkout master


Switched to branch 'master'

This concludes this guided exercise.


Testing Microservices with Mock Frameworks

Objective
After completing this section, students should be able to implement a microservice test using
mock frameworks.

Understanding Problems in Creating Integration Tests


There are a number of complications that you face as a developer trying to create an integration
test. Two of the most common problems that arise include integration with unreliable or
unavailable external systems, and integration with services that are not yet implemented.

• External systems: To test code that uses external services, such as databases, message brokers, or legacy systems, those external systems must be running. Otherwise, you cannot properly assess the functionality of that code.

• Unimplemented services: During development, some services may not be ready for use because of unexpected delays in the project.

In both cases, dependent services are not available for a developer to run tests. To work around
these missing dependencies, developers must build tools that can mimic the absent services,
such as lightweight message brokers, in-memory databases, or dummy legacy systems.

Alternatively, a developer can use a mock framework. Mock frameworks provide mechanisms
that intercept calls made to a Java interface or class and return dummy values that can be used
by the test.

Unlike dummy services, the mock framework approach does not require you to start up these
services externally or instantiate them in Java code to trigger the test. This means it does not
consume the same amount of memory and CPU cycles needed by these external services, saving
time and resources.

Important
During initial development cycles, using mock frameworks avoids development delays,
and supports good development practices, including the use of interfaces to define
communication protocols with external services. However, it is important to remember
that mocks cannot directly substitute true integration tests.

Developing with Mock Frameworks and Other Microservice Testing Tools
There are many mock framework options to use in Java projects. In microservice-driven
development, it is important to use frameworks that support the ways a microservice is called,
such as REST-based and Java API calls. There are some mock frameworks that simplify test
development, such as:

• Wiremock: A REST mock facility that mimics calls to other microservices. It removes the need
to start external services before the test.


• Mockito: A mock framework used to proxy Java interface method calls. Mockito can also be
used to validate method call order and provide return values needed to test the application.

Both of these libraries provide a large set of functionality that eases the work required to create tests and reduces the number of integration points with external systems.

Another common problem when developing tests for microservices is that each unit test usually
checks many of the same conditions, such as the return values from REST method calls, or the
final state of an existing object. This means that developers need to write a lot of boilerplate
code to make HTTP connections and compare the expected values and test results. There are
a number of tools available to help alleviate these issues. This course covers two of the most
common:

• Rest Assured calls REST APIs using a fluent interface, simplifying the way a REST call is made in a test with any testing framework, such as JUnit or TestNG.

• Hamcrest provides static methods that make the source code more readable and maintainable using fluent interfaces.

Wiremock
Wiremock is a REST mock framework that emulates calls to other REST APIs. It is a useful tool to test the processing of calls made to an external service from a microservice that is already deployed using Arquillian. Wiremock allows the developer to control the response provided by a REST endpoint.

To use Wiremock, the pom.xml file from the project must reference it by adding the following
dependency:

<dependency>
<groupId>com.github.tomakehurst</groupId>
<artifactId>wiremock-standalone</artifactId>
<scope>test</scope>
</dependency>

To import the classes and static methods used by Wiremock, add the following import
declarations in your test classes:

import static com.github.tomakehurst.wiremock.client.WireMock.*;
import static com.github.tomakehurst.wiremock.core.WireMockConfiguration.*;
import com.github.tomakehurst.wiremock.junit.WireMockRule;

To mock a call to a REST API, start the mock server that will respond to requests to the service
by declaring an attribute with the @Rule annotation:

@Rule
public WireMockRule wireMockRule = new WireMockRule(options().port(7070));

In the previous example, a server listens to requests on port 7070.

To mimic the response of a REST service, the REST endpoint, HTTP method, and the expected
response are declared before the test is executed:

wireMockRule.stubFor(get(urlMatching("/api/aloha"))
.willReturn(aResponse()

130 JB283-RHOAR1.0-en-1-20180517
Developing with Mock Frameworks and Other Microservice Testing Tools

.withStatus(200)
.withHeader("Content-Type", "application/json")
.withBody("Aloha [MOCK]")));

In the previous code, any request to the /api/aloha REST endpoint returns an HTTP code 200, with a header defining the content type (application/json) and with the body payload (Aloha [MOCK]).
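Although not shown in the course code, WireMock can also verify that the stubbed endpoint was actually called by the code under test. A minimal sketch, using the wireMockRule attribute declared above and WireMock's request verification API:

// Confirm that a GET request reached the stubbed endpoint during the test
wireMockRule.verify(getRequestedFor(urlMatching("/api/aloha")));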

Mockito
Mockito is a mock framework focused on Java code testing. It has important features that most
mock frameworks do not provide, such as:

• Mocking abstract and concrete classes: A mock framework is useful when defining a protocol that should be developed in order to integrate systems with Java interfaces. Sometimes, however, some code may have already been developed and provided as abstract or concrete classes. If you need to mock existing Java classes or interfaces, Mockito can mock either concrete or abstract classes.

• Inspecting the number of calls made to methods: Some mock frameworks evaluate only
whether the methods from the mocked class or interface were called in a certain order.
Mockito can evaluate not only if the methods were called, but it can also count the number
of calls and their order. If a strict evaluation is needed, Mockito can enforce the order and the
number of calls.

To use Mockito in a project, import the dependency using the pom.xml file:

<dependency>
<groupId>org.mockito</groupId>
<artifactId>mockito-core</artifactId>
<scope>test</scope>
</dependency>

To enable all the static methods needed to create Mockito-based tests, declare the following
import in the test classes:

import static org.mockito.Mockito.*;

To mock a class or interface, include the following call in the test method before executing the code under test:

ClassOrInterface mock = mock(ClassOrInterface.class);

The verify method verifies the method calls made to the mock object. In the following example,
the developer expects that a method from the mock is called.

List list = mock(List.class);

verify(list).get(anyInt());

To return values whenever a method is called, use the when static method. In the following
example, the call to the get method returns an empty List value:

List list = mock(List.class);

when(list.get(anyInt())).thenReturn(Collections.emptyList());
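Putting the mock, when, and verify calls together, a complete (if trivial) Mockito-based unit test might look like the following sketch; the class name and values are illustrative only:

import static org.junit.Assert.assertEquals;
import static org.mockito.Mockito.*;

import java.util.List;

import org.junit.Test;

public class ListMockTest {

    @Test
    public void returnsStubbedValue() {
        // Create the mock and define the canned return value
        @SuppressWarnings("unchecked")
        List<String> list = mock(List.class);
        when(list.get(0)).thenReturn("speaker");

        // The code under test would normally call the mock; here it is called directly
        assertEquals("speaker", list.get(0));

        // Confirm that get(0) was called exactly once
        verify(list, times(1)).get(0);
    }
}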

Rest Assured
To evaluate the output from a REST API, a developer usually has to handle the JSON data
manually. Rest Assured provides an interface that minimizes the need to parse JSON data using
complicated APIs.

To use Rest Assured in a project, import the dependency using the pom.xml file:

<dependency>
<groupId>io.rest-assured</groupId>
<artifactId>rest-assured</artifactId>
<scope>test</scope>
</dependency>

To use the Rest Assured static methods, add the following import declaration in your test classes:

import static io.restassured.RestAssured.*;

Each test method must use the given method to trigger Rest Assured startup. The when method defines some initial information needed to trigger the REST API, such as the endpoint and some parameters and header values.

The then method identifies the expected values from the REST call output.

given()
.when()
.get("/api/hola-chaining")
.then()
.statusCode(200);

For complex outcomes, the evaluation may use JSONPath notation to check the body output:

given()
.get("/api/hola")
.then()
.body("user.login", equalTo("john doe"));

To store the body's output to a variable, Rest Assured provides the extract method. The
method processes output from the body and stores it in a variable by using the as method. In the
following example, the extract method stores the data from the REST endpoint call execution
in the body variable.

String body = given()
    .get("/api/hola")
    .then()
    .extract().as(String.class);
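The relative paths passed to get assume that Rest Assured knows the base URL of the application under test. When that URL is injected by Arquillian, as in the earlier examples, one approach is to set it before each test. A sketch, assuming the url field shown earlier in this chapter (requires io.restassured.RestAssured and org.junit.Before):

@Before
public void configureRestAssured() {
    // Point Rest Assured at the deployment URL injected by Arquillian
    RestAssured.baseURI = url.toExternalForm();
}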

Hamcrest
Hamcrest is a set of static methods developed to simplify the evaluation of outcomes from a test. In traditional test frameworks, a test verifies data from a method execution by creating assertions:


assertEquals(1,calc.result());

For complex evaluations, the assertion can get complicated:

assertEquals("1",calc.getMemory().get(1).toString());

Hamcrest makes the test code readable, as it defines a fluent interface that mimics the English
language:

assertThat("1", is(equalTo(calc.getMemory().get(1).toString())));

To use Hamcrest in a project, import the dependency using the pom.xml file:

<dependency>
<groupId>org.hamcrest</groupId>
<artifactId>hamcrest-library</artifactId>
<scope>test</scope>
</dependency>

To enable all the classes and static methods needed to create Hamcrest-based tests, declare the
following imports in the test classes:

import static org.hamcrest.MatcherAssert.assertThat;
import static org.hamcrest.Matchers.*;
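The matchers used with assertThat in the exercises later in this chapter follow the same fluent pattern. A brief sketch with illustrative values (the speaker and sessionIds variables are hypothetical):

// Common Hamcrest matchers used with assertThat
assertThat(speaker, notNullValue());
assertThat(speaker.getNameLast(), containsString("Gumbrecht"));
assertThat(sessionIds, hasSize(3));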

References
Mock objects advantages
https://martinfowler.com/articles/mocksArentStubs.html
Wiremock web site
http://wiremock.org/
Mockito web site
http://site.mockito.org
Rest Assured web site
http://rest-assured.io
Hamcrest web site
http://hamcrest.org/JavaHamcrest/


Guided Exercise: Testing Microservices with Mock Frameworks

In this exercise, you will implement mock tests with Wiremock and Rest Assured mock
frameworks.

Outcomes
You should be able to develop unit tests with mock frameworks and some helper classes provided
by Hamcrest.

Before you begin


If you have not already done so, use git clone to download the microprofile-conference
repository to the workstation machine.

[student@workstation ~]$ git clone \


http://services.lab.example.com/microprofile-conference
Cloning into 'microprofile-conference'...
... output omitted ...
Resolving deltas: 100% (2803/2803), done.

Then, run lab setup to begin the exercise.

[student@workstation ~]$ lab microservices-mock setup

Steps
1. Switch the repository to the lab-microservices-mock branch to get the correct version
of the application code for this exercise.

1.1. Switch to the branch using the git checkout command.

[student@workstation ~]$ cd microprofile-conference


[student@workstation microprofile-conference]$ git checkout \
lab-microservices-mock
... output omitted ...
Switched to a new branch 'lab-microservices-mock'

1.2. Use the git status command to ensure that you are on the correct branch.

[student@workstation microprofile-conference]$ git status


# On branch lab-microservices-mock
nothing to commit, working directory clean

2. Run the test case.

2.1. Open the MockResourceSpeakerTest test case by expanding the microservice-speaker item in the Project Explorer tab in the left pane of JBDS, then click microservice-speaker > Java Resources > src/test/java > io.microprofile.showcase.speaker.rest and expand it. Double-click the MockResourceSpeakerTest.java file.

2.2. The source code is mostly composed of comments that provide you with directions. The testGet test method must check that the REST endpoint /speaker returns a set of speakers that are enrolled in the conference application. However, the test is not currently implemented, and is calling the fail method from JUnit.

2.3. Run the JUnit test case.

Right-click the MockResourceSpeakerTest test case and select Run As > JUnit Test
in JBDS. The JUnit tab shows the output from the test case execution, and it displays
a Failure Trace panel that says the testGet method has an AssertionError. This is
expected, because the fail static method is called.

3. Inspect the mock server instantiation. In order to accept REST endpoint calls, the test has a
WireMockRule attribute. It instantiates the mock server that responds to the requests. To
configure the mock server to run on port 7070, use the options().port(7070) method.
JUnit starts and stops the mock server on all test methods using the @Rule annotation.

@Rule
public WireMockRule wireMockRule = new WireMockRule(options().port(7070));

4. Configure the Wiremock server. The test method sends a REST call to the microservice-
session application, but the microservice is not started for this test purpose. To answer the
request, the mock server must be configured by the developer. For this purpose, prepare the
mock server for calls by using the WireMockRule attribute.

4.1. Prepare the mock server to answer requests to the /sessions/speaker/speakerId/99 URI. The underlying microservice returns a list of session IDs whose speaker ID is 99.

Note
To analyze the REST endpoint called by the microservice-speaker
application, open the SessionResource class by expanding the
microservice-session item in the Project Explorer tab in the left pane
of JBDS, then click microservice-session > Java Resources > src/main/
java > io.microprofile.showcase.session and expand it. Double-click the
SessionResource.java file and look for the getSpeakersSession
method.

In the beginning of the testGet method, call the stubFor method from the
wireMockRule class attribute. To answer HTTP GET method calls, call the get
static method. Provide the result of the urlMatching ("/sessions/speaker/
speakerId/99") method call as the parameter.

//TODO Implement a mock that responds to /sessions/speakerspeakerId/99/ using the HTTP GET method
wireMockRule.stubFor(get(urlMatching("/sessions/speaker/speakerId/99"))


Note
Wiremock uses a fluent interface. Do not add a semicolon after the stubFor
method.

4.2. To respond to the REST endpoint call, call the willReturn() method.

//TODO Implement a mock that responds to /sessions/speakerspeakerId/99/ using the HTTP GET method
wireMockRule.stubFor(get(urlMatching("/sessions/speaker/speakerId/99"))
    .willReturn(aResponse()

4.3. The expected response is an HTTP code 200. Use the aResponse().withStatus(200) static method to create this response and pass it into the willReturn() method.

//TODO Implement a mock that responds to /sessions/speakerspeakerId/99/ using the HTTP GET method
wireMockRule.stubFor(get(urlMatching("/sessions/speaker/speakerId/99"))
    .willReturn(aResponse()
        //TODO The mock must return an HTTP 200 Code
        .withStatus(200)

4.4. The mock returns JSON data with the speakers as the payload. To prepare the client to
receive JSON data, the Content-Type HTTP header must be declared.

//TODO Implement a mock that responds to /sessions/speakerspeakerId/99/ using the HTTP GET method
wireMockRule.stubFor(get(urlMatching("/sessions/speaker/speakerId/99"))
    .willReturn(aResponse()
        //TODO The mock must return an HTTP 200 Code
        .withStatus(200)
        //TODO the mock returns a JSON payload
        .withHeader("Content-Type", "application/json")

4.5. The JSON data is provided by a preexisting attribute named sessions. Use this
attribute to pass the data into the withBody() method so that this data is sent as the
HTTP body content.

//TODO Implement a mock that responds to /sessions/speakerspeakerId/99/ using the HTTP GET method
wireMockRule.stubFor(get(urlMatching("/sessions/speaker/speakerId/99"))
    .willReturn(aResponse()
        //TODO The mock must return an HTTP 200 Code
        .withStatus(200)
        //TODO the mock returns a JSON payload
        .withHeader("Content-Type", "application/json")
        //TODO The JSON payload must be returned via HTTP Body and it must be obtained using the readFile method
        .withBody(sessions)));

5. Implement the test using REST Assured. To call the REST endpoint, use the REST Assured
API.

5.1. Call the given method to start the REST Assured client. Right after the Wiremock
server preparation, call the REST Assured given method.

//TODO Using REST Assured framework, invoke the /speaker REST endpoint with HTTP GET method
given()

5.2. Call the when method to prepare REST Assured to call REST endpoints.

//TODO Using REST Assured framework, invoke the /speaker REST endpoint with HTTP GET method
given()
    .when()

5.3. Call the get static method with the "/speaker/sessions/speakerId/99"


parameter to invoke the HTTP GET method.

given().
when()
.get("/speaker/sessions/speakerId/99")

5.4. Check the expected output by calling the then method.

given()
.when()
.get("/speaker/sessions/speakerId/99")
.then()

5.5. The expected output is a JSON array with three session IDs. To verify that, use the
size() function from REST Assured assertion mechanisms.

given()
    .when()
    .get("/speaker/sessions/speakerId/99")
    .then()
    //TODO using REST Assured framework functions, check the number of items returned (3)
    //TODO Use the size() function
    .body("size()", is(3));

5.6. Comment out the fail method call. The test is now fully implemented and the fail method does not need to be called.

// fail("Not implemented yet!")

5.7. Save your changes to the file using Ctrl+S.

5.8. Re-run the JUnit test case.


Right-click the MockResourceSpeakerTest test case and select Run As > JUnit Test
in JBDS. The JUnit tab displays the output from the test case execution. This time, the
test passes and a green bar is displayed after the test execution.

6. Clean up and commit your changes to your local Git repository in the lab branch, and return
to the master branch.

6.1. Stage the uncommitted changes using the git add command.

[student@workstation microprofile-conference]$ git add .

6.2. Commit your changes to the local branch using the git commit command.

[student@workstation microprofile-conference]$ git commit -m" completing lab


microservices-mock"
[lab-microservices-mock 7a5f023] completing lab microservices-mock
1 file changed, 23 insertions(+), 8 deletions(-)

6.3. Switch the working copy back to the master branch to finish cleaning up.

[student@workstation microprofile-conference]$ git checkout master


Switched to branch 'master'

This concludes the guided exercise.


Lab: Testing Microservices

In this lab, you will implement an integration test and a unit test using mock frameworks for the
microservice-speaker application.

Outcomes
You should be able to implement an integration test and a unit test using Arquillian, Wiremock,
Rest Assured, and Hamcrest.

Before you begin


If you have not already done so, use git clone to download the microprofile-conference
repository to the workstation machine.

[student@workstation ~]$ git clone \


http://services.lab.example.com/microprofile-conference
Cloning into 'microprofile-conference'...
...output omitted...
Resolving deltas: 100% (2803/2803), done.

Then run the lab setup to begin the exercise.

[student@workstation ~]$ lab test-review setup

Steps
1. To begin the exercise, change to the lab-test-review branch of the application code.

[student@workstation ~]$ cd microprofile-conference


[student@workstation microprofile-conference]$ git checkout lab-test-review
Switched to branch 'lab-test-review'

1.1. Switch to the branch using the git checkout command.

[student@workstation ~]$ cd microprofile-conference


[student@workstation microprofile-conference]$ git checkout lab-test-review
... output omitted ...
Switched to a new branch 'lab-test-review'

1.2. Use the git status command to ensure that you are on the correct branch.

[student@workstation microprofile-conference]$ git status


# On branch lab-test-review
nothing to commit, working directory clean

2. Implement the integration test method with Arquillian.

The io.microprofile.showcase.speaker.rest.ResourceSpeakerTest test case tests the io.microprofile.showcase.speaker.rest.ResourceSpeaker class REST endpoints. You must implement the testAddAndRetrieve test method. It adds and retrieves the information for a new speaker.


To accomplish that goal, the test must call the /speaker/add REST endpoint with the HTTP
POST method. A JSON representation of a speaker is provided as a string that must be
passed as part of the request body.

The microservice-speaker application stores the speaker and returns a JSON representation
of a new speaker with the ID generated as part of the response.

You must parse the ID and call the /speaker/retrieve/id to verify that the speaker was
actually saved. The JSON marshaling and unmarshaling process is already implemented
in the test case. You must use REST Assured to invoke both the /speaker/add and /
speaker/retrieve REST endpoints.

Run the integration test to evaluate that the changes are working. Right-click the
ResourceSpeakerTest test case and select Run As > JUnit Test in JBoss Developer
Studio. The JUnit tab should display that the test passes, and a green bar should be
displayed after the test execution.

3. Implement a test method using Wiremock.

The io.microprofile.showcase.speaker.rest.MockResourceSpeakerTest
test case uses Arquillian to start the microservice-speaker application. The
testGetSessions test method invokes the microservice-session application /
sessions/speaker/amount/speakerId REST endpoint. To support the test, you must
mock the REST endpoint from another microservice with Wiremock. The mock microservice
must respond with the number 10 to the method invocation, as JSON output.

Note
To analyze the REST endpoint called by the microservice-speaker application,
open the SessionResource class by expanding the microservice-session
item in the Project Explorer tab in the left pane of JBoss Developer
Studio, then click microservice-session > Java Resources > src/main/
java > io.microprofile.showcase.session to expand it. Double-click the
SessionResource.java file and look for the getAmountSpeakersSession
method.

4. Implement the test using REST Assured. To call the REST endpoint, use the REST Assured
API. Invoke the microservice-speaker application /speaker/session/amount/515 REST
endpoint. As the invocation result, the number 10 is expected.

Run the integration test to verify that the changes are working. Right-click the
MockResourceSpeakerTest test case and select Run As > JUnit Test in JBoss Developer
Studio. The JUnit tab should display that the test passes, and a green bar is displayed after
the test execution.

5. Grade the lab.

[student@workstation ~]$ lab test-review grade

The execution may take some time because all of the tests are executed.

6. Clean up and commit your changes to your local Git repository in the lab branch, and return
to the master branch.

6.1. Stage the uncommitted changes using the git add command.

[student@workstation microprofile-conference]$ git add .

6.2. Commit your changes to the local branch using the git commit command.

[student@workstation microprofile-conference]$ git commit -m"completing lab


test-review"
[lab-test-review e59dc43] completing lab test-review
...output omitted...
3 files changed, 41 insertions(+), 18 deletions(-)

6.3. Switch the working copy back to the master branch to finish cleaning up.

[student@workstation microprofile-conference]$ git checkout master


Switched to branch 'master'

This concludes this lab.


Solution
In this lab, you will implement an integration test and a unit test using mock frameworks for the
microservice-speaker application.

Outcomes
You should be able to implement an integration test and a unit test using Arquillian, Wiremock,
Rest Assured, and Hamcrest.

Before you begin


If you have not already done so, use git clone to download the microprofile-conference
repository to the workstation machine.

[student@workstation ~]$ git clone \


http://services.lab.example.com/microprofile-conference
Cloning into 'microprofile-conference'...
...output omitted...
Resolving deltas: 100% (2803/2803), done.

Then run the lab setup to begin the exercise.

[student@workstation ~]$ lab test-review setup

Steps
1. To begin the exercise, change to the lab-test-review branch of the application code.

[student@workstation ~]$ cd microprofile-conference


[student@workstation microprofile-conference]$ git checkout lab-test-review
Switched to branch 'lab-test-review'

1.1. Switch to the branch using the git checkout command.

[student@workstation ~]$ cd microprofile-conference


[student@workstation microprofile-conference]$ git checkout lab-test-review
... output omitted ...
Switched to a new branch 'lab-test-review'

1.2. Use the git status command to ensure that you are on the correct branch.

[student@workstation microprofile-conference]$ git status


# On branch lab-test-review
nothing to commit, working directory clean

2. Implement the integration test method with Arquillian.

The io.microprofile.showcase.speaker.rest.ResourceSpeakerTest test case tests the io.microprofile.showcase.speaker.rest.ResourceSpeaker class REST endpoints. You must implement the testAddAndRetrieve test method. It adds and retrieves the information for a new speaker.

To accomplish that goal, the test must call the /speaker/add REST endpoint with the HTTP
POST method. A JSON representation of a speaker is provided as a string that must be
passed as part of the request body.


The microservice-speaker application stores the speaker and returns a JSON representation
of a new speaker with the ID generated as part of the response.

You must parse the ID and call the /speaker/retrieve/id to verify that the speaker was
actually saved. The JSON marshaling and unmarshaling process is already implemented
in the test case. You must use REST Assured to invoke both the /speaker/add and /
speaker/retrieve REST endpoints.

Run the integration test to evaluate that the changes are working. Right-click the
ResourceSpeakerTest test case and select Run As > JUnit Test in JBoss Developer
Studio. The JUnit tab should display that the test passes, and a green bar should be
displayed after the test execution.

2.1. Open the ResourceSpeakerTest test case by expanding the microservice-speaker item in the Project Explorer tab in the left pane of JBoss Developer Studio, then click microservice-speaker > Java Resources > src/test/java > io.microprofile.showcase.speaker.rest to expand it. Double-click the ResourceSpeakerTest.java file.

2.2. Inspect the testAddAndRetrieve method from the test case. The testAddAndRetrieve test method must call the /speaker/add REST endpoint with the HTTP POST method and the /speaker/retrieve/id REST endpoint with the HTTP GET method. However, the test method is not currently completely implemented.

2.3. Implement the call to the /speaker/add REST endpoint using REST Assured.

After the json variable declaration, call the fluent interface methods from REST
Assured and send the json variable declared in the test method with the HTTP POST
method. Store the output from the fluent interface method calls to the result variable.

//TODO Call the speaker/add REST endpoint using the HTTP POST method using Rest Assured.
//TODO capture the response to the result variable using the extract().asString() method.
String result = given().
    when()
    .with()
    .body(json)
    .contentType(ContentType.JSON)
    .post("speaker/add")
    .then()
    .statusCode(200)
    .extract().asString();

2.4. Capture the speaker ID to retrieve the data using the /speaker/retrieve/id REST
endpoint.

The response variable transforms the String captured in the previous step into a
Speaker object. Using Hamcrest, evaluate that the transformed object is not null.

//TODO validate the output from the response.
assertThat(response, notNullValue());

2.5. Invoke the /speaker/retrieve/id REST endpoint using REST Assured.


After the value variable declaration, call the fluent interface methods from REST
Assured and request the speaker information with the HTTP GET method. Replace the
id with the variable captured in the previous step. Store the output from the fluent
interface method calls to the returnedSpeaker variable.

//TODO Call the /speaker/retrieve/{speakerId} endpoint. Use the Id obtained from the previous REST endpoint call
String returnedSpeaker = given().
    when()
    .contentType(ContentType.JSON)
    .get("/speaker/retrieve/"+value)
    .then()
    .statusCode(200)
    .extract().asString();

2.6. Inspect the returned value with Hamcrest.

The response variable transforms the String captured in the previous step into a
Speaker object. Using Hamcrest, evaluate that the transformed object is not null and
the last name is Gumbrecht.

//TODO Assert that the value is not null.
assertThat(resultSpeaker, notNullValue());
//TODO Assert that speaker last name is "Gumbrecht".
assertThat(resultSpeaker.getNameLast(), containsString("Gumbrecht"));

2.7. Save your changes to the file using Ctrl+S.

2.8. Run the JUnit test case.

Right-click the ResourceSpeakerTest test case and select Run As > JUnit Test in
JBoss Developer Studio. The JUnit tab displays that the test passes and a green bar is
displayed after the test execution.

3. Implement a test method using Wiremock.

The io.microprofile.showcase.speaker.rest.MockResourceSpeakerTest
test case uses Arquillian to start the microservice-speaker application. The
testGetSessions test method invokes the microservice-session application /
sessions/speaker/amount/speakerId REST endpoint. To support the test, you must
mock the REST endpoint from another microservice with Wiremock. The mock microservice
must respond with the number 10 to the method invocation, as JSON output.


Note
To analyze the REST endpoint called by the microservice-speaker application,
open the SessionResource class by expanding the microservice-session
item in the Project Explorer tab in the left pane of JBoss Developer
Studio, then click microservice-session > Java Resources > src/main/
java > io.microprofile.showcase.session to expand it. Double-click the
SessionResource.java file and look for the getAmountSpeakersSession
method.

3.1. In the beginning of the testGetSessions method, call the stubFor method from
the wireMockRule class attribute. To answer HTTP GET method calls, call the get
static method. Provide the result of the urlMatching ("/sessions/speaker/
speakerId/515") method call as the parameter.

//TODO use wiremockrule to train the /sessions/speaker/amount/515 REST endpoint using the HTTP GET method
//TODO from the microservice-session application. Because this microservice is not available, you need to mock it.
//TODO the endpoint must return 10 as the response.
wireMockRule.stubFor(get(urlMatching("/sessions/speaker/amount/515"))

3.2. To respond to the REST endpoint call, call the willReturn() method. It must return an HTTP code 200, with a Content-Type header set to application/json. Add the number 10 as the body payload.

//TODO use wiremockrule to train the /sessions/speaker/amount/515 REST endpoint using the HTTP GET method
//TODO from the microservice-session application. Because this microservice is not available, you need to mock it.
//TODO the endpoint must return 10 as the response.
wireMockRule.stubFor(get(urlMatching("/sessions/speaker/amount/515"))
    .willReturn(aResponse()
        .withStatus(200)
        .withHeader("Content-Type", "application/json")
        .withBody("10")));

3.3. Save your changes to the file using Ctrl+S.

4. Implement the test using REST Assured. To call the REST endpoint, use the REST Assured
API. Invoke the microservice-speaker application /speaker/session/amount/515 REST
endpoint. As the invocation result, the number 10 is expected.

Run the integration test to verify that the changes are working. Right-click the
MockResourceSpeakerTest test case and select Run As > JUnit Test in JBoss Developer
Studio. The JUnit tab should display that the test passes, and a green bar is displayed after
the test execution.

4.1. Call the given method to start the REST Assured client. Just after the Wiremock server
preparation, call the REST Assured given method with the when clause. The invocation
must send an HTTP GET method request to the /speaker/sessions/amount/515
REST endpoint.


//TODO Use REST Assured to call the /speaker/sessions/amount/515 REST endpoint using the HTTP GET method.
//TODO as the method delegates the call to the microservice-session application, the expected result should be 10
given().
    when()
    .get("/speaker/sessions/amount/515")

4.2. Check the expected output by calling the then method. Use this to confirm that the
body has the number 10.

given()
.when()
.get("/speaker/sessions/amount/515")
.then()
.body(containsString("10"));

4.3. Comment out the fail method call. The test is now fully implemented and the fail method does not need to be called.

// fail("Not implemented yet!")

4.4. Save your changes to the file using Ctrl+S.

4.5. Re-run the JUnit test case.

Right-click the MockResourceSpeakerTest test case and select Run As > JUnit
Test in JBoss Developer Studio. The JUnit tab displays the output from the test
case execution. This time, the test passes and a green bar is displayed after the test
execution.

5. Grade the lab.

[student@workstation ~]$ lab test-review grade

The execution may take some time because all of the tests are executed.

6. Clean up and commit your changes to your local Git repository in the lab branch, and return
to the master branch.

6.1. Stage the uncommitted changes using the git add command.

[student@workstation microprofile-conference]$ git add .

6.2. Commit your changes to the local branch using the git commit command.

[student@workstation microprofile-conference]$ git commit -m"completing lab


test-review"
[lab-test-review e59dc43] completing lab test-review
...output omitted...
3 files changed, 41 insertions(+), 18 deletions(-)


6.3. Switch the working copy back to the master branch to finish cleaning up.

[student@workstation microprofile-conference]$ git checkout master


Switched to branch 'master'

This concludes this lab.


Summary
In this chapter, you learned:

• Microservice development must include integration tests to validate changes made to the source code and its integrations.

• Arquillian supports the WildFly Swarm testing framework, allowing either in-container testing or client testing.

• To create Arquillian integration tests, annotate the test case with @RunWith(Arquillian.class).

• To create a WildFly Swarm deployable archive, use Shrinkwrap with the @Deployment annotation on a method responsible for creating the archive.

• To emulate REST endpoints called by a microservice, use Wiremock.

• To emulate method calls, use Mockito.

• To simplify REST calls in a test method, use Rest Assured.

CHAPTER 5

INJECTING CONFIGURATION DATA INTO A MICROSERVICE

Overview
Goal: Inject configuration data from an external source into a microservice.
Objectives:
• Inject configuration data into a microservice using the config specification.
• Implement service discovery with a dependent microservice.
Sections:
• Injecting Configuration Data with the Config Specification (and Guided Exercise)
• Implementing Service Discovery with OpenShift (and Quiz)
Lab: Injecting Configuration Data into a Microservice


Injecting Configuration Data with the Config Specification

Objectives
After completing this section, students should be able to inject configuration data into a
microservice using the MicroProfile config specification.

Describing the MicroProfile Config Specification Details
The objective of the MicroProfile config specification is to enable users to dynamically configure
applications. Typically, applications require some set of properties based on the environment in
which the applications are running. For example, an application that manages files might store
the files in different folders depending on whether the application is deployed to a development
environment or a production environment.

To solve this problem, you can use system properties, system environment variables, and
properties files to dynamically provide a value for the folder location. Each of these mechanisms
for dynamically loading property values is called a ConfigSource.

By default, the MicroProfile config specification defines three ConfigSource resources:

• JVM system properties

• System environment variables

• Any META-INF/microprofile-config.properties files in the Java class path.

The specification also allows you to write and register a custom ConfigSource resource that
can retrieve properties from any location. For example, you could write a custom ConfigSource
resource to retrieve parameters from a database.

Additionally, multiple ConfigSource resources can define the same property. In this case, the application uses the value from the source with the higher priority. To define the priority, each ConfigSource resource defines an ordinal number. The ConfigSource resource with the highest ordinal number has the highest priority.

By default, the three provided ConfigSource resources have the following ordinal values:

• JVM system properties: 400

• System environment variables: 300

• microprofile-config.properties: 100

Reviewing the Config Specification API


There are two approaches to retrieve a property from a ConfigSource resource. The first one is
by injecting the Config object:

import org.eclipse.microprofile.config.Config;
...
@Inject
private Config config;
...
public void displayProperties(){

    Optional<String> optionalValue = config.getOptionalValue("optional", String.class);
    optionalValue.ifPresent(v -> System.out.println(v));

    String requiredValue = config.getValue("required", String.class);
    System.out.println(requiredValue);
}

Injects the config object. This object has methods to recover properties from a
ConfigSource resource.
Retrieves a java.util.Optional object. Use this method if your application does not
need the property to work. For example, the application should run a job only if a certain
property exists.
Retrieves the property value. Use this method if your property is required.

The second approach to retrieve a property is by using the ConfigProperty qualifier:

@Inject
@ConfigProperty(name = "folderP")
private String folder;

@Inject
@ConfigProperty(name = "numberP", defaultValue = "320")
private Integer number;

@Inject
@ConfigProperty(name = "socialSecurity")
private Person person;

Retrieves the folderP property. The folder attribute is null if a ConfigSource does not specify the folderP property.
Retrieves the numberP property. The number attribute has the default value of 320 if a ConfigSource does not specify the numberP property.
Retrieves the socialSecurity property and converts it to the person attribute. Because the property has a text value, you need to create a custom converter to convert the text into the Person object.

Creating a Custom Converter


A converter is required to convert a configuration property text value into an object. The following example converts a String to a Person object so that it can be injected as a configuration property. The searchPersonBySocialSecurity method needs to return an instance of Person based on the social security number.

...
import org.eclipse.microprofile.config.spi.Converter;

public class PersonConverter implements Converter<Person> {

    @Override
    public Person convert(String socialSecurity) {
        Person p = searchPersonBySocialSecurity(socialSecurity);
        return p;
    }

    public Person searchPersonBySocialSecurity(String socialSecurity) {
        PersonDao dao = new PersonDao();
        return dao.searchPersonBySocialSecurity(socialSecurity);
    }
}

After implementing the convert() method, you need to register the custom converter. To register a converter, provide a file named org.eclipse.microprofile.config.spi.Converter in the META-INF/services/ directory. This file must contain the fully qualified class name of the converter as its only content, as shown in the following example:

com.redhat.training.msa.config.converter.PersonConverter

Externalizing Application Configuration in OpenShift


When deploying applications to OpenShift, configuration management presents a challenge
due to the immutable nature of containers and container images. Unlike in traditional, non-
containerized deployments that bundle configuration and application in the same deployment,
containerized deployments should avoid coupling both the application and the configuration
within the same immutable container image.

OpenShift provides secret and configuration map resource types to externalize and
manage configuration for applications.

Secret resources are used to store sensitive information, such as passwords, keys, and tokens. You can also create your own secrets to store your application's sensitive data, such as passwords and authentication credentials.

Configuration map resources are similar to secret resources, but store nonsensitive data.
A configuration map resource can be used to store detailed information, such as individual
properties, or general information, such as entire configuration files and JSON data.

Configuration maps and secrets can be mounted as data volumes, or exposed as environment
variables, inside an application container. If the configuration map is exposed as environment
variables, a ConfigSource resource is automatically created. If the configuration map is
mounted as data volumes, the application needs to read the file from the volume and load the
properties as Java system properties. The next section of this chapter covers configuration maps
in OpenShift in more detail.

Demonstration: Implementing a ConfigMap in OpenShift
1. Log in to the workstation VM as student using student as the password.

2. Check out the demo-configmap branch from Git by running the following commands:

[student@workstation ~]$ cd hello-microservices


[student@workstation hello-microservices]$ git checkout demo-configmap

3. Open the ClientConfiguration Java class located in the com.redhat.training.msa.hola.client package.


3.1. Inspect the @Inject annotation declared on the alohaPort property. The annotation
injects the value of the port to connect to the aloha microservice.

3.2. Inspect the @ConfigProperty annotation declared on the alohaPort property. The
annotation is a qualifier to provide the correct value that is injected by the @Inject
annotation.

3.3. Inspect the @Inject and the @ConfigProperty annotations declared on the
alohaHostname property. The property defines the host name to connect to the
aloha microservice.

4. Open the microprofile-config.properties properties file located in the src/main/resources/META-INF/ directory.

4.1. Inspect the alohaPort property. The connection to the aloha microservice uses the
7070 port if any other ConfigSource with a higher priority is not specified.

4.2. Inspect the alohaHostname property. The connection to the aloha microservice uses
the localhost hostname if any other ConfigSource with a higher priority is not
specified.

5. Open the pom.xml file to inspect the MicroProfile dependency.

<dependency>
<groupId>org.wildfly.swarm</groupId>
<artifactId>microprofile</artifactId>
</dependency>

One of the dependencies imported by the MicroProfile BOM is the microprofile-


config-api artifactId from the org.eclipse.microprofile.config groupId.

6. Create a new project in OpenShift.

6.1. Open a web browser and enter the following URL:

https://master.lab.example.com

Accept the self-signed, insecure certificate. The page displays the OpenShift
authentication page.

6.2. Log in to the OpenShift web console as the developer user, with redhat as the password.

6.3. In the right frame, click + Create Project.

6.4. Fill in the Create Project form with the following values:

• In the Name field, enter helloconfigmap.

• In the Display Name field, enter Hello Microservices.

Leave the remaining fields empty. Click Create.
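
If you prefer the command line, an equivalent project can be created with the oc client after logging in with oc login (shown in the next step); this is an illustrative alternative to the web console form:

[student@workstation ~]$ oc new-project helloconfigmap --display-name="Hello Microservices"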

7. Deploy the microservices.


7.1. Open a terminal window on the workstation VM and log in to OpenShift as the
developer user.

[student@workstation ~]$ oc login -u developer -p redhat \
https://master.lab.example.com

7.2. Select the helloconfigmap project:

[student@workstation ~]$ oc project helloconfigmap


Now using project "helloconfigmap"...

7.3. Navigate to the aloha microservice project and deploy it on OpenShift:

[student@workstation ~]$ cd hello-microservices/aloha


[student@workstation aloha]$ mvn clean fabric8:deploy -DskipTests
[INFO] Scanning for projects...
...
[INFO] F8: Running in OpenShift mode
...
[INFO] Current reconnect backoff is 4000 milliseconds (T2)
...
[INFO] BUILD SUCCESS
...

7.4. Navigate to the hola microservice project and deploy it on OpenShift:

[student@workstation aloha]$ cd ../hola


[student@workstation hola]$ mvn clean fabric8:deploy -DskipTests
[INFO] Scanning for projects...
...
[INFO] F8: Running in OpenShift mode
...
[INFO] Current reconnect backoff is 4000 milliseconds (T2)
...
[INFO] BUILD SUCCESS
...

8. Test the microservices.

8.1. Return to the web browser logged in to OpenShift. In the right navigation bar, click
Hello Microservices.

8.2. Wait for the build to complete and for the aloha and hola applications to have one pod
running for each application.


Figure 5.1: Application status

8.3. Test the aloha microservice. Open a web browser and navigate to
http://aloha.apps.lab.example.com/api/aloha. The page displays the Aloha mai
aloha.apps.lab.example.com message.

8.4. Test the hola microservice. Navigate to
http://hola.apps.lab.example.com/api/hola. The page displays the Hola de
hola.apps.lab.example.com message.

8.5. Test the hola chained microservice call.

The hola chained microservice call tries to connect to the aloha microservice using
the localhost host name with the 7070 port specified in the microprofile-
config.properties file.

Navigate to http://hola.apps.lab.example.com/api/hola-chaining.
The page displays the ["Hola de hola.apps.lab.example.com","Aloha
fallback"] message.

The chained microservice call fails because the properties must be updated to use
the aloha.apps.lab.example.com host name and port 80.

9. Create a configuration map to update the values for the alohaPort and the
alohaHostname properties.

9.1. In the OpenShift web console, in the left navigation bar, click Resources and then click
Config Maps.

9.2. Click Create Config Map.

9.3. Fill in the Create Config Map form with the following values:

• In the Name field, enter appconfig.

• In the Key field, enter project-defaults.yml.

9.4. In the text area, enter two properties:

• alohaPort: 80


• alohaHostname: aloha.apps.lab.example.com

Figure 5.2: Config map form

9.5. Click Create.
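
The same configuration map can also be created from the command line, assuming a local project-defaults.yml file that contains the two properties entered above; this is an equivalent alternative to the web console form:

[student@workstation ~]$ oc create configmap appconfig --from-file=project-defaults.yml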

10. Configure the deployment configuration resource to use the configuration map as a volume.

10.1. In the left navigation bar, click Applications and then click Deployments.

10.2. In the first column of the table, click hola.

10.3. In the tab menu, click Configuration.

10.4. In the Volumes section, click Add Config files.

10.5. Fill in the Add Config Files to hola form with the following values:

• In the Source field, select appconfig.

• In the Mount Path field, enter /app/config.

Leave the remaining fields blank.

Click Add.
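
As a command-line alternative to the previous step, the oc set volume command can mount the same configuration map into the hola deployment configuration; the volume name is generated automatically when it is omitted:

[student@workstation ~]$ oc set volume dc/hola --add --type=configmap \
--configmap-name=appconfig --mount-path=/app/config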

11. Define the swarm.project.stage.file system property in the deployment configuration.

11.1. In the tab menu, click Environment.

11.2. Update the JAVA_OPTIONS variable to add a new system property. In the Value field,
append the following value:

-Dswarm.project.stage.file=file:///app/config/project-defaults.yml


Figure 5.3: Environment variable form

Click Save.
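
The equivalent change can be made with the oc set env command. Note that oc set env replaces the current value of the variable, so include any options that JAVA_OPTIONS already contains; the value below assumes the system property is the only option needed:

[student@workstation ~]$ oc set env dc/hola \
JAVA_OPTIONS="-Dswarm.project.stage.file=file:///app/config/project-defaults.yml"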

12. Test the chaining again.

12.1. In the left navigation bar, click Overview.

12.2. Wait for the build to complete and for the aloha and hola applications to have one pod
for each application running.

Figure 5.4: Application status

12.3. Open a web browser and navigate to
http://hola.apps.lab.example.com/api/hola-chaining to test the hola chaining
microservice call. The page now displays the ["Hola de
hola.apps.lab.example.com","Aloha mai aloha.apps.lab.example.com"] message.

This concludes the demonstration.


References
Config Specification Project
https://github.com/eclipse/microprofile-config


Guided Exercise: Injecting Configuration Data

In this exercise, you will configure properties in a simple microservice to locate another
microservice, and then build and deploy it.

Outcomes
You should be able to inject configuration properties using the MicroProfile config specification.

Before you begin


If you have not already, execute the git clone command to clone the hello-microservices
repository onto the workstation machine.

[student@workstation ~]$ git clone http://services.lab.example.com/hello-microservices


Cloning into 'hello-microservices'...
...output omitted...
Resolving deltas: 100% (2803/2803), done.

Run lab setup to begin the exercise.

[student@workstation ~]$ lab configmap setup

1. Switch the repository to the lab-configmap branch to get the correct version of the
application code for this exercise.

1.1. Switch to the branch using the git checkout command.

[student@workstation ~]$ cd hello-microservices


[student@workstation hello-microservices]$ git checkout lab-configmap
Branch lab-configmap set up to track remote branch lab-configmap from origin.
Switched to a new branch 'lab-configmap'

1.2. Use the git status command to ensure that you are on the correct branch.

[student@workstation hello-microservices]$ git status


#On branch lab-configmap
nothing to commit, working tree clean

2. Enable the MicroProfile config fraction for the application by updating the pom.xml Maven
configuration file.

2.1. Open the pom.xml file by expanding the hola item in the Project Explorer tab in the left
pane of JBoss Developer Studio. Double-click the pom.xml file. Select the pom.xml tab
at the bottom of this tab.

2.2. In the <dependencies> section, add the microprofile dependency:

<!-- add the microprofile dependency -->


<dependency>
<groupId>org.wildfly.swarm</groupId>
<artifactId>microprofile</artifactId>
</dependency>


2.3. Save your changes to the file using Ctrl+S.

3. Implement the ClientConfiguration Java class using the MicroProfile config


specification.

3.1. Open the ClientConfiguration Java class by expanding the hola item in the
Project Explorer tab in the left pane of JBoss Developer Studio, then click hola > Java
Resources > src/main/java > com.redhat.training.msa.hola.client and expand it. Double-
click the ClientConfiguration.java file.

3.2. Inject the alohaPort property using the MicroProfile config annotation to the
alohaPort attribute. This attribute configures which port the client uses to connect to
the aloha microservice. Set 9090 as the default value.

//Inject the alohaPort property.


@Inject
@ConfigProperty(name = "alohaPort", defaultValue="9090")
private String alohaPort;

3.3. Inject the alohaHostname property using the MicroProfile config annotation to the
alohaHostname attribute. This attribute configures the server that the client uses to
connect to the aloha microservice. Set alohahost as the default value.

//Inject the alohaHostname property.


@Inject
@ConfigProperty(name = "alohaHostname", defaultValue="alohahost")
private String alohaHostname;

4. Update the microprofile-config.properties properties file to include the required


properties.

4.1. Update the microprofile-config.properties properties file by expanding the


hola item in the Project Explorer tab in the left pane of JBoss Developer Studio, then
click hola > Java Resources > src/main/resources > META-INF and expand it. Double-
click the microprofile-config.properties file and select the Source tab at the
bottom of this tab.

4.2. Include the alohaPort property. Assign the 7070 value.

#Include the alohaPort


alohaPort=7070

4.3. Include the alohaHostname property. Assign the localhost value.

#Include the alohaHostname


alohaHostname=localhost

5. Test the application.

5.1. Build and run the aloha microservice using the Maven WildFly Swarm plug-in.

In your terminal window, navigate to the aloha directory, start the microservice on port
7070 without running the tests, and disable the management service.

[student@workstation hello-microservices]$ cd aloha


[student@workstation aloha]$ mvn clean wildfly-swarm:run -DskipTests \
-Dswarm.http.port=7070 -Dswarm.management.http.disable=true

5.2. Build and run the hola microservice using the Maven WildFly Swarm plug-in.

Open a new terminal window, navigate to the hello-microservices/hola directory,


start the microservice without running the tests, and disable the management service.

[student@workstation ~]$ cd ~/hello-microservices/hola


[student@workstation hola]$ mvn clean wildfly-swarm:run -DskipTests \
-Dswarm.management.http.disable=true

5.3. Test the service from a client using the RESTClient Firefox plug-in.

Start Firefox on the workstation VM and click the RESTClient plug-in in the browser's
toolbar.

Figure 5.5: The Firefox RESTClient plug-in

5.4. Select GET as the Method. In the URL form, enter http://localhost:8080/api/
hola-chaining.

The hola-chaining endpoint attempts to call the aloha microservice using the values
set by the MicroProfile config specification. In this case, the server uses the value 7070
for the port because the microprofile-config.properties file overrides the
default value of 9090 specified in the @ConfigProperty annotation.

5.5. Click Send.

5.6. Verify in the Headers tab that the Status Code is 200 OK.

5.7. Verify in the Response tab that the response matches the following:

["Hola de localhost","Aloha mai localhost"]

6. Test the application with a JVM system property.

6.1. Return to the terminal window where the hola microservice is running and stop the
service using Ctrl+C.

6.2. Build the hola microservice:


[student@workstation hola]$ mvn clean package -DskipTests

Note
The WildFly Swarm plug-in only accepts Java system properties used by
the plug-in. Therefore, any -D parameter passed to the mvn wildfly-
swarm:run command that is not WildFly Swarm plug-in-related configuration
is discarded. For the purpose of this lab, you need to provide the parameter
alohaPort to set up the port in the microservice. To accomplish this goal,
you must start the microservice using the java command instead.

6.3. Inspect the run.sh script from the hola microservice that starts the application.
Observe that the alohaPort property has the 2020 value, which is invalid because no
microservice is running on this port.

[student@workstation hola]$ cat run.sh


java -jar target/hola-swarm.jar -Dswarm.management.http.disable=true -
DalohaHostname=localhost -DalohaPort=2020

6.4. Start the application by running the run.sh script:

[student@workstation hola]$ ./run.sh


...
INFO [org.wildfly.swarm] (main) WFSWARM99999: WildFly Swarm is Ready

6.5. In the RESTClient Firefox plug-in, send a new request to the same REST endpoint
(http://localhost:8080/api/hola-chaining). Click Send.

6.6. Verify in the Headers tab that the Status Code is 200 OK.

6.7. Verify in the Response tab that the response matches the following:

["Hola de localhost", "Aloha fallback"]

Because the JVM system property that was set using the -DalohaPort=2020
parameter has a higher priority than the properties file, the client fails to reach the
aloha microservice, which is running on port 7070 not port 2020.

7. Clean up, commit your changes to your local Git repository in the lab branch, and return to
the master branch.

7.1. Return to the terminal window where the hola microservice is running and stop the
service using Ctrl+C.

7.2. Return to the terminal window where the aloha microservice is running and stop the
service using Ctrl+C.

7.3. In the terminal window where the hola microservice was stopped, stage the
uncommitted changes using the git add command.

[student@workstation hola]$ git add .

7.4. Commit your changes to the local branch using the git commit command.

[student@workstation hola]$ git commit -m"completing lab configmap"


...output omitted...
[lab-configmap 7210256] completing lab configmap

7.5. Switch the working copy back to the master branch to finish cleaning up.

[student@workstation hola]$ git checkout master


Switched to branch 'master'

This concludes this guided exercise.


Implementing Service Discovery with OpenShift

Objectives
After completing this section, students should be able to implement service discovery with a
dependent microservice.

Discovering Services
An application is a composition of multiple services. In a monolithic application, a service
invokes another service by using a language-level method call. In a microservice-based
application, each microservice runs as a separate process, so a microservice must first find the
network location of any other microservice it needs to invoke, unless every microservice is
registered in a common registry. This problem is known as service discovery. It is especially
challenging when your application runs in a cloud environment, because the locations and the
number of instances of a service change frequently and are not predictable.

To solve this problem, OpenShift provides the Service resource. An OpenShift service is a
DNS name representing a set of pods (or external servers) that are accessed by other pods.
An OpenShift service also serves as an internal load balancer. The service identifies a set of
replicated pods to proxy the connections it receives to the pods. Backing pods can be added to or
removed from an OpenShift service arbitrarily while the service remains consistently available,
enabling anything that depends on the service to refer to it at a consistent address.

An OpenShift service is assigned an internal IP address and port number as well as a DNS name.
The DNS name can be exposed externally by an OpenShift Route, but internal service discovery
does not require this. It is also easy to consume services from pods because OCP automatically
injects environment variables for the host name and port number into other pods.

For each OpenShift service inside an OCP project, the following environment variables are
automatically defined and injected into the containers for all of the pods running inside the
project:

• SVC_NAME_SERVICE_HOST is the service IP address.

• SVC_NAME_SERVICE_PORT is the service TCP port.

Note
The SVC_NAME part is derived from the service name: all letters are converted to
uppercase and dashes (-) are replaced by underscores (_), because dashes are not valid
in environment variable names.
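
Because system environment variables are one of the default ConfigSource resources in the MicroProfile config specification, these injected variables can be consumed with the same annotations used earlier in this chapter. The following fragment is a minimal sketch for a CDI bean, assuming a service named aloha; the default values are placeholders:

//Minimal sketch: read the environment variables that OpenShift injects
//for a service named "aloha". The default values are placeholders for
//local development.
@Inject
@ConfigProperty(name = "ALOHA_SERVICE_HOST", defaultValue = "localhost")
private String alohaServiceHost;

@Inject
@ConfigProperty(name = "ALOHA_SERVICE_PORT", defaultValue = "8080")
private String alohaServicePort;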

Another way to discover a service from a pod is by using the OCP internal DNS server, which is
visible only to pods. Each service is dynamically assigned a DNS record with an FQDN of the form:

SVC_NAME.PROJECT_NAME.svc

The service is available only for applications deployed in the same cluster. Use OpenShift routes
if you need to access the service from outside the cluster.
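
For example, a container running inside the cluster could reach the aloha service from the earlier demonstration through its internal DNS name; the project name and port below are assumptions based on that example:

$ curl http://aloha.helloconfigmap.svc:8080/api/aloha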

Locating Service Information on Web Console


You can locate service information by using the OpenShift web console.


1. In the right navigation bar, click the name of your project.

2. In the left navigation bar, click Applications and then click Services.

Figure 5.6: The OpenShift services view

The page displays a summary of the service, including the IP and ports.

3. Click the name of the service.

Figure 5.7: The OpenShift services details

The page displays details about the service.

References
Additional information about services is available in the Pods and Services section of
the OpenShift Container Platform documentation:
Architecture
https://access.redhat.com/documentation/en-us/openshift_container_platform/3.7/html/architecture/


Quiz: Implementing Service Discovery with OpenShift

Choose the correct answers to the following questions:

1. Which two of the following items does OpenShift assign to a service (Choose two)?

a. An internal IP address.
b. A router to access the application outside the cluster.
c. A DNS name.
d. A firewall.

2. An OpenShift service is only available to which of the following?

a. Pods inside of the same project.


b. The whole cluster.

3. An OpenShift service called myservice is configured in the myproject project. What is the
name of the environment variable that OCP creates to access the service port in all of the
pods that belong to the myproject project?

a. myservice_service_port
b. myservice_port
c. MYSERVICE_SERVICE_PORT
d. MYSERVICE_PORT

4. An OpenShift service called myservice is configured in the myproject project. What is the
name of the environment variable that OCP creates to access the service IP address in all of
the pods that belong to the myproject project assuming default network configuration?

a. MY_SERVICE_IP
b. MY_SERVICE_HOST
c. MYSERVICE_SERVICE_IP
d. MYSERVICE_SERVICE_HOST

5. A pod needs to contact a service called myservice from the myproject project. This pod
does not belong to the myproject project. Choose the correct DNS name to access this
service from a different project.

a. myservice-myproject.svc
b. MYSERVICE.SVC
c. MYSERVICE_MYPROJECT.SVC
d. MYSERVICE.MYPROJECT.SVC


Solution
Choose the correct answers to the following questions:

1. Which two of the following items does OpenShift assign to a service (Choose two)?

a. An internal IP address. (correct)
b. A router to access the application outside the cluster.
c. A DNS name. (correct)
d. A firewall.

2. An OpenShift service is only available to which of the following?

a. Pods inside of the same project.

b. The whole cluster. (correct)

3. An OpenShift service called myservice is configured in the myproject project. What is the
name of the environment variable that OCP creates to access the service port in all of the
pods that belong to the myproject project?

a. myservice_service_port
b. myservice_port
c. MYSERVICE_SERVICE_PORT (correct)
d. MYSERVICE_PORT

4. An OpenShift service called myservice is configured in the myproject project. What is the
name of the environment variable that OCP creates to access the service IP address in all of
the pods that belong to the myproject project assuming default network configuration?

a. MY_SERVICE_IP
b. MY_SERVICE_HOST
c. MYSERVICE_SERVICE_IP
d. MYSERVICE_SERVICE_HOST (correct)

5. A pod needs to contact a service called myservice from the myproject project. This pod
does not belong to the myproject project. Choose the correct DNS name to access this
service from a different project.

a. myservice-myproject.svc
b. MYSERVICE.SVC
c. MYSERVICE_MYPROJECT.SVC
d. MYSERVICE.MYPROJECT.SVC (correct)


Lab: Injecting Configuration Data into a Microservice

In this lab, you will configure a system property in a web application to locate services, and then
build and deploy it on OpenShift.

Outcomes
You should be able to inject configuration properties using the MicroProfile config specification
and deploy the application on OpenShift by using the ConfigMap resource.

Before you begin


If you have not already done so, use git clone to download the microprofile-conference
repository to the workstation machine.

[student@workstation ~]$ git clone \
http://services.lab.example.com/microprofile-conference
Cloning into 'microprofile-conference'...
...output omitted...
Resolving deltas: 100% (2803/2803), done.

Then run the lab setup to begin the exercise.

[student@workstation ~]$ lab config-review setup

Steps
1. Switch the repository to the lab-config-review branch to get the correct version of the
application code for this exercise.

2. Enable the MicroProfile fraction for the application by updating the pom.xml Maven
configuration file in the web-application microservice.

3. The web-application has two configuration properties files available in the microprofile-
conference/web-application/src/main/local/webapp/WEB-INF folder:
• conference.properties: Load this file when you are working locally.

• openshift.properties: Load this file when you are working on OpenShift.

The getEndpoints() method from the EndPointService Java class loads the correct
configuration file by reading a system environment variable.

Update the EndPointService Java class to use the MicroProfile config specification. You
need to create a new class attribute that is a String named application. Inject the value
of the application variable by using a configuration property, also named application.
The configuration property must have the default value defined as conference.

Refactor the getEndpoints() method to use the injected attribute instead of the system
environment variable.

4. Create a new project on OpenShift called lab-config-review.

5. Deploy the web-application microservice to the OpenShift cluster using the fabric8
Maven plug-in. Skip the npm portion of the web-application build by passing the -
Dskip.npm system property.

Note
The nodejs portion of the application has been pre-built to accommodate the
offline classroom environment that this class uses. For this reason, you must skip
the npm portion of the web-application build.

6. Wait until fabric8 finishes deploying the microservice to the OpenShift cluster. Then, test the
microservice using the RESTClient Firefox plug-in.

You can list the microservices endpoints by requesting the
http://web.apps.lab.example.com/service/endpoints/list URL using the GET method.

7. Create a configuration map named appconfig on the OpenShift cluster.

This configuration map must contain a project-defaults.yml file that contains the
application property with the openshift value.

8. Configure the deployment configuration resource for lab-config-review project to use the
appconfig configuration map as a volume. The volume mount path where the file must be
mounted inside the pod is /app/config.

9. Define the swarm.project.stage.file system property in the deployment


configuration.

Set the value to file:///app/config/project-defaults.yml so that the application


running in the pod can reference the mounted configuration file.

10. Wait until OpenShift redeploys the microservice with the new configuration changes.
Then run a new test of the endpoint using the RESTClient Firefox plug-in and check that
the EndPointService Java class loaded the openshift.properties file and is now
returning updated values.

11. Grade the lab.

[student@workstation ~]$ lab config-review grade

12. Clean up the OCP project, commit your changes to your local Git repository in the lab
branch, and return to the master branch.

12.1. Delete the OCP project lab-config-review to undeploy the service and remove the
other OCP resources.

[student@workstation web-application]$ oc delete project \


lab-config-review
project "lab-config-review" deleted

12.2.Stage the uncommitted changes using the git add command.


[student@workstation web-application]$ git add .

12.3.Commit your changes to the local branch using the git commit command.

[student@workstation web-application]$ git commit \


-m"completing lab config-review"
[lab-config-review 72109445] completing lab config-review

12.4.Switch the working copy back to the master branch to finish cleaning up.

[student@workstation web-application]$ git checkout master


Switched to branch 'master'

This concludes this lab.


Solution
In this lab, you will configure a system property in a web application to locate services, and then
build and deploy it on OpenShift.

Outcomes
You should be able to inject configuration properties using the MicroProfile config specification
and deploy the application on OpenShift by using the ConfigMap resource.

Before you begin


If you have not already done so, use git clone to download the microprofile-conference
repository to the workstation machine.

[student@workstation ~]$ git clone \
http://services.lab.example.com/microprofile-conference
Cloning into 'microprofile-conference'...
...output omitted...
Resolving deltas: 100% (2803/2803), done.

Then run the lab setup to begin the exercise.

[student@workstation ~]$ lab config-review setup

Steps
1. Switch the repository to the lab-config-review branch to get the correct version of the
application code for this exercise.

1.1. Switch to the branch using the git checkout command.

[student@workstation ~]$ cd microprofile-conference


[student@workstation microprofile-conference]$ git checkout \
lab-config-review
Switched to branch 'lab-config-review'

1.2. Use the git status command to ensure that you are on the correct branch.

[student@workstation microprofile-conference]$ git status


# On branch lab-config-review
nothing to commit, working directory clean

2. Enable the MicroProfile fraction for the application by updating the pom.xml Maven
configuration file in the web-application microservice.

2.1. Open the pom.xml file by expanding the web-application item in the Project Explorer
tab in the left pane of JBoss Developer Studio. Double-click the pom.xml file. Select the
pom.xml tab at the bottom of this tab.

2.2. In the <dependencies> section, add the microprofile dependency:

<!-- add the microprofile dependency -->

<dependency>
  <groupId>org.wildfly.swarm</groupId>
  <artifactId>microprofile</artifactId>
</dependency>

2.3. Save your changes to the file using Ctrl+S.

3. The web-application has two configuration properties files available in the microprofile-
conference/web-application/src/main/local/webapp/WEB-INF folder:
• conference.properties: Load this file when you are working locally.

• openshift.properties: Load this file when you are working on OpenShift.

The getEndpoints() method from the EndPointService Java class loads the correct
configuration file by reading a system environment variable.

Update the EndPointService Java class to use the MicroProfile config specification. You
need to create a new class attribute that is a String named application. Inject the value
of the application variable by using a configuration property, also named application.
The configuration property must have the default value defined as conference.

Refactor the getEndpoints() method to use the injected attribute instead of the system
environment variable.

3.1. Open the EndpointService Java class by expanding the web-application item in the
Project Explorer tab in the left pane of JBoss Developer Studio. Click web-application
> Java Resources > src/main/local/java > io.microprofile.showcase.web and expand it.
Double-click the EndpointService.java file.

3.2. Inspect the getEndpoints() method to check that it loads the correct configuration
properties file by reading a system environment variable called ENDPOINT_NAME.

3.3. Create the application attribute and inject the application property using the
MicroProfile config annotation to the application attribute. Set conference as the
default value.

//Create the application attribute


@Inject
@ConfigProperty(name = "application", defaultValue = "conference")
private String application;

Note
If you see the following warning in JBoss Developer Studio, it can be safely
ignored:

No bean is eligible for injection to the injection point [JSR-346


§5.2.2]

3.4. Refactor the getEndpoints() method to use the injected application attribute:

public Endpoints getEndpoints() {
    //refactor the getEndpoints() method
    return this.getCachedEndpoints(application);
}

4. Create a new project on OpenShift called lab-config-review.

4.1. Open a web browser and enter the following URL:

https://master.lab.example.com

If required, accept the self-signed, insecure certificate. The page displays the OpenShift
authentication page.

4.2. Log in to the OpenShift web console as the developer user, with the redhat password.

4.3. In the right frame, click + Create Project.

4.4. Fill in the Create Project form with the following values:

• In the Name field, enter lab-config-review.

• In the Display Name field, enter Web Microservice.

Leave the remaining fields empty. Click Create.

5. Deploy the web-application microservice to the OpenShift cluster using the fabric8
Maven plug-in. Skip the npm portion of the web-application build by passing the -
Dskip.npm system property.

Note
The nodejs portion of the application has been pre-built to accommodate the
offline classroom environment that this class uses. For this reason, you must skip
the npm portion of the web-application build.

5.1. Open a terminal window on the workstation VM and log in to OpenShift as the
developer user.

[student@workstation ~]$ oc login -u developer -p redhat \
https://master.lab.example.com

5.2. Select the lab-config-review project:

[student@workstation ~]$ oc project lab-config-review


Now using project "lab-config-review"...

5.3. Navigate to the web-application microservice project and deploy it on OpenShift:

[student@workstation ~]$ cd ~/microprofile-conference/web-application/


[student@workstation web-application]$ mvn clean fabric8:deploy -Dskip.npm
[INFO] Scanning for projects...
...
[INFO] F8: Running in OpenShift mode
...


[INFO] Current reconnect backoff is 4000 milliseconds (T2)


...
[INFO] BUILD SUCCESS
...

6. Wait until fabric8 finishes deploying the microservice to the OpenShift cluster. Then, test the
microservice using the RESTClient Firefox plug-in.

You can list the microservices endpoints by requesting the
http://web.apps.lab.example.com/service/endpoints/list URL using the GET method.

6.1. Return to the web browser logged in to OpenShift. In the right navigation bar, click Web
Microservice.

6.2. Wait for the build to complete and for the web-application application to have one pod
running.

6.3. Start Firefox on the workstation VM and click the RESTClient plug-in in the browser's
toolbar.

6.4. Select GET as the Method. In the URL form, enter
http://web.apps.lab.example.com/service/endpoints/list.

6.5. Click Send.

6.6. Verify in the Headers tab that the Status Code is 200 OK.

6.7. Verify in the Response tab that the response matches the following:

{"endpoints":[{"name":"vote-health","url":"http://localhost:7070/
health"},{"name":"schedule-metrics","url":"http://localhost:6060/
metrics"},{"name":"speaker-health","url":"http://localhost:4040/health"},
{"name":"session","url":"http://localhost:6055/gateway/sessions"},
{"name":"vote-metrics","url":"https://localhost:9443/metrics"},{"name":"session-
health","url":"http://localhost:5050/health"},{"name":"vote","url":"http://
localhost:6055/gateway/vote"},{"name":"session-metrics","url":"http://
localhost:5050/metrics"},{"name":"schedule-health","url":"http://
localhost:6060/health"},{"name":"speaker","url":"http://localhost:6055/
gateway/speaker"},{"name":"authz-health","url":"http://localhost:5055/
health"},{"name":"speaker-metrics","url":"http://localhost:4040/
metrics"},{"name":"schedule","url":"http://localhost:6055/gateway/
schedule"},{"name":"authz-metrics","url":"http://localhost:5055/
metrics"},{"name":"authz","url":"http://localhost:6055/gateway/
authz"}],"application":"conference","links":{"self":"http://
web.apps.lab.example.com/service/endpoints"}}

7. Create a configuration map named appconfig on the OpenShift cluster.

This configuration map must contain a project-defaults.yml file that contains the
application property with the openshift value.

7.1. In left navigation bar of the OpenShift web console, click Resources and then click
Config Maps.

7.2. Click Create Config Map.


7.3. Fill in the Create Config Map form with the following values:

• In the Name field, enter appconfig.

• In the Key field, enter project-defaults.yml.

7.4. In the text area, enter the application property:

• application: "openshift"

7.5. Click Create.

8. Configure the deployment configuration resource for lab-config-review project to use the
appconfig configuration map as a volume. The volume mount path where the file must be
mounted inside the pod is /app/config.

8.1. In the left navigation bar, click Applications and then click Deployments.

8.2. In the first column of the table, click web-application.

8.3. In the tab menu, click Configuration.

8.4. In the Volumes section, click Add Config files.

8.5. Fill in the Add Config Files to web-application form with the following values:

• In the Source field, select appconfig.

• In the Mount Path field, enter /app/config.

Leave the remaining fields blank.

Click Add.

9. Define the swarm.project.stage.file system property in the deployment


configuration.

Set the value to file:///app/config/project-defaults.yml so that the application


running in the pod can reference the mounted configuration file.

9.1. In the tab menu, click Environment.

9.2. Click Add Value.

9.3. Create the JAVA_OPTIONS variable to add a new system property. In the Value field,
append the following value:

-Dswarm.project.stage.file=file:///app/config/project-defaults.yml

9.4. Click Save.

10. Wait until OpenShift redeploys the microservice with the new configuration changes.
Then run a new test of the endpoint using the RESTClient Firefox plug-in and check that
the EndPointService Java class loaded the openshift.properties file and is now
returning updated values.


10.1. Return to the web browser logged in to OpenShift. In the left navigation bar, click
Overview.

10.2. Wait for the build to complete and for the web-application application to have one pod
running.

10.3.Return to the web browser running the RESTClient plug-in.

10.4. Select GET as the Method. In the URL form, enter
http://web.apps.lab.example.com/service/endpoints/list.

10.5.Click Send.

10.6.Verify in the Headers tab that the Status Code is 200 OK.

10.7. Verify in the Response tab that the response matches the following:

{"endpoints":[{"name":"authz","url":"http://microservice-
authz.apps.lab.example.com/authz"},{"name":"vote-metrics","url":"https://
microservice-vote-ssl.apps.lab.example.com/metrics"},
{"name":"session","url":"http://microservice-session.apps.lab.example.com/
sessions"},{"name":"vote","url":"http://microservice-vote.apps.lab.example.com/
vote"},{"name":"session-health","url":"http://microservice-
session.apps.lab.example.com/health"},{"name":"speaker","url":"http://
microservice-speaker.apps.lab.example.com/speaker"},
{"name":"schedule","url":"http://microservice-schedule.apps.lab.example.com/
schedule"},{"name":"session-metrics","url":"http://microservice-
session.apps.lab.example.com/metrics"},{"name":"authz-health","url":"http://
microservice-authz.apps.lab.example.com/health"},{"name":"vote-
health","url":"http://microservice-vote.apps.lab.example.com/
health"}],"application":"openshift","links":{"self":"http://localhost:8080/
service/endpoints"}}

11. Grade the lab.

[student@workstation ~]$ lab config-review grade

12. Clean up the OCP project, commit your changes to your local Git repository in the lab
branch, and return to the master branch.

12.1. Delete the OCP project lab-config-review to undeploy the service and remove the
other OCP resources.

[student@workstation web-application]$ oc delete project \


lab-config-review
project "lab-config-review" deleted

12.2.Stage the uncommitted changes using the git add command.

[student@workstation web-application]$ git add .

12.3.Commit your changes to the local branch using the git commit command.


[student@workstation web-application]$ git commit \


-m"completing lab config-review"
[lab-config-review 72109445] completing lab config-review

12.4.Switch the working copy back to the master branch to finish cleaning up.

[student@workstation web-application]$ git checkout master


Switched to branch 'master'

This concludes this lab.


Summary
In this chapter, you learned:

• The MicroProfile config specification is a feature that allows users to dynamically configure
applications.

• The MicroProfile config specification defines three default ConfigSource resources:

◦ JVM system properties

◦ System environment variables

◦ Any META-INF/microprofile-config.properties files on the Java class path

• Use the @Inject and @ConfigProperty annotations to retrieve a property value from a
ConfigSource resource.

• In OpenShift, a configuration map resource can be used to store configuration data, such as
individual properties or entire configuration files.

CHAPTER 6

CREATING APPLICATION
HEALTH CHECKS

Overview
Goal       Create a health check for a microservice.
Objective  • Implement a health check in a microservice and enable a probe in OpenShift to monitor it.
Section    • Implementing a Health Check Monitored by OpenShift (and Guided Exercise)
Lab        Creating Application Health Checks


Implementing a Health Check Monitored by OpenShift

Objectives
After completing this section, students should be able to implement a health check in a
microservice and enable a probe in OpenShift to monitor it.

Describing the MicroProfile Health Specification


As the number of microservices running in your environment scales up, proactive monitoring of
the health of all instances of your microservices becomes both more critical and much harder
to do. Using a container management technology like OpenShift allows you to leverage health
checks to make automated decisions about discarding and replacing unhealthy containers with
new ones. By replacing unhealthy containers quickly, OpenShift greatly improves the overall
uptime of the service.

To better integrate a microservice deployed in a WildFly Swarm container and running on
a platform like OpenShift, the MicroProfile Health specification provides a simple way for
an automated process to check the health of a microservice. The health check architecture
defined in the specification consists of a single /health REST endpoint in a MicroProfile-based
microservice that reports the health status of the entire microservice using an HTTP status code.
This approach works well because the health check is easily consumed and parsed by OpenShift,
and requires very little extra work for the microservice developer.

To leverage this functionality in a microservice running on WildFly Swarm, include the
microprofile dependency in your pom.xml to load all of the available specifications in
MicroProfile 1.3. Note that you do not need to specify a version if you are using the WildFly
Swarm bill of materials, as shown in the following example:

<dependency>
<groupId>org.wildfly.swarm</groupId>
<artifactId>microprofile</artifactId>
</dependency>

To create a new health check for a microservice, use the @Health annotation on any class that
implements the HealthCheck interface. The HealthCheck interface requires implementing a
single method named call() that returns a HealthCheckResponse object, as shown in the
following example:

@Health
@ApplicationScoped
public class HealthDemo implements HealthCheck {

    @Override
    public HealthCheckResponse call() {
        HealthCheckResponseBuilder alive = HealthCheckResponse.named("alive");
        // add other info
        return alive.up().build();
    }
}


• Use the @Health annotation to create a new health check in your microservice.

• Health check classes must implement the HealthCheck interface.

• Classes that implement the HealthCheck interface must implement the call() method and
return a HealthCheckResponse object.

• Use the HealthCheckResponseBuilder factory class to build the health check response.

When a microservice that includes one or more health checks runs, WildFly Swarm automatically
exposes an HTTP endpoint at the URL /health, which is not relative to the base application
URL. When the WildFly Swarm server receives a request on this health endpoint, the call()
method in each health check is triggered by the server. If the health check is successful, and the
HealthCheckResponse is set to a value of UP, then an HTTP status code of 200 is set as the
response. If the health check fails and the HealthCheckResponse is set to a value of DOWN,
then a 503 status code is returned. In addition to the response code, the /health endpoint
also returns JSON data with the details about the health checks that were run, as shown in the
following example:

[student@workstation ~]$ curl http://localhost:8080/health


{
"outcome": "UP",
"checks": [
{
"name": "alive",
"state": "UP"
}
]
}

If multiple health checks are defined in a single microservice, WildFly Swarm aggregates the
checks and reports a single overall status, which represents the logical AND of all of the checks.
That is, if a single check fails, the health outcome of the entire microservice is reported as DOWN:

[student@workstation ~]$ curl http://localhost:8080/health


{
"outcome": "DOWN",
"checks": [
{
"name": "alive",
"state": "UP"
},
{
"name": "database",
"state": "DOWN"
}
]
}

For convenience, the HealthCheckResponse class offers the named(String name) method
to produce an instance of HealthCheckResponseBuilder that already has its name set. You
can use method chaining to build the entire HealthCheckResponse object in a single line. Use
the methods available on the HealthCheckResponseBuilder to control the name of the
health check or to return custom data with the health response. The following table summarizes
the available methods:


HealthCheckResponseBuilder methods

Method                                 Description
name(String name)                      Set the name of the health check.
withData(String name, String value)    Add extra data to the health check response, with a type of String.
withData(String name, long value)      Add extra data to the health check response, with a type of long.
withData(String name, Boolean value)   Add extra data to the health check response, with a type of Boolean.
up()                                   Set the status of the health check to UP.
down()                                 Set the status of the health check to DOWN.
state(boolean up)                      Set the status of the health check using a Boolean expression.
build()                                Build and return the HealthCheckResponse object.

The following example shows how to build a HealthCheckResponse with custom data
attached:

HealthCheckResponse.named("sessions-check")
.withData("sessionCount", sessionCount)
.withData("lastCheckDate", new Date().toString())
.state(sessionCount > 0)
.build();

If you hit the /health endpoint using the code from the previous example, the response is the
following:

[student@workstation ~]$ curl http://localhost:8080/health


{
"outcome": "UP",
"checks": [
{
"name": "sessions-check",
"sessionCount": 160,
"lastCheckDate": "Wed Apr 4 02:00:00 EST 2018"
"state": "UP"
}
]
}

Monitoring Container Health Checks with OpenShift Using Probes
In containerized microservice environments, it is common for individual components to become
unhealthy due to issues such as temporary connectivity loss, configuration errors, or problems
with external dependencies. OpenShift Container Platform provides a number of options to
detect and handle unhealthy containers. The primary resource used by OpenShift to monitor
container health is called a probe.

A probe is a diagnostic process that uses some action to query the health of individual
containers, typically on a configurable schedule. There are two main types of probes that
OpenShift leverages: liveness probes and readiness probes.

Liveness Probes
A liveness probe checks if the container in which it is configured is still running. If the
liveness probe fails, OpenShift kills the container, which is then subjected to its restart policy.
After a pod is successfully deployed, its liveness probes are run continually on a schedule
monitoring the health of the pod.

Readiness Probes
A readiness probe determines whether a container is ready to service requests. Readiness
probes are run during the deployment of the pod to determine if the pod is finished
deploying. If the readiness probe fails for a container, the endpoints controller built into
OpenShift ensures the container has its IP address removed from the endpoints of all
attached services. OpenShift also uses readiness probes to signal to the endpoints controller
that even though a container is running, it should not receive any traffic from a proxy.

When you design a health check, it is important to consider whether it will be used as a liveness
probe or a readiness probe. The distinction is important, as the readiness probe health check
must indicate whether the container is up and running and ready to serve requests. A failed
readiness probe can simply indicate that the pod needs more time to finish starting up. A
liveness probe health check, however, can be much simpler, and only needs to indicate the
current status, either up or down, of the container. A failed liveness probe indicates that the pod
needs to be restarted immediately.

Both liveness and readiness probes support some common options for controlling when they are
to be executed by OpenShift and how they react to failures. These common options include:

initialDelaySeconds
The time in seconds that the probe must wait after the container finishes starting.

timeoutSeconds
The time in seconds that OpenShift must wait for the probe to finish, before considering the
probe a failure because no response was received.

Additionally, both liveness and readiness probes are configured by leveraging one of the three
possible approaches for defining probes. These approaches include:

HTTP Checks
OpenShift sends an HTTP GET request to a configurable URL to determine the healthiness of
the pod. The check is deemed successful if the HTTP response is received before the timeout
and the response code is between 200 and 399. The following is an example of a readiness
probe using the httpGet method for probing a pod:

...
readinessProbe:
  httpGet:
    path: /health
    port: 8080
  initialDelaySeconds: 15
  timeoutSeconds: 1
...

Container Execution Checks


OpenShift executes a command inside the container. Exiting the check with status 0 is
considered a success. The following is an example of a liveness probe using the exec method
for probing a pod:

...
livenessProbe:
  exec:
    command:
    - cat
    - /tmp/health
  initialDelaySeconds: 15
  timeoutSeconds: 1
...

TCP Socket Checks


OpenShift attempts to open a socket to the container. The container is only considered
healthy if the check can establish a connection.

...
livenessProbe:
  tcpSocket:
    port: 8080
  initialDelaySeconds: 15
  timeoutSeconds: 1
...

Using the HTTP check works very well with MicroProfile health specification health check
endpoints because they return an HTTP status of 200 if the health check succeeds and an HTTP
status of 503 if it fails. Both container execution checks and TCP socket checks are useful for
probing containers where this type of HTTP-based health check endpoint is not available.
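
Probes can also be added to an existing deployment configuration from the command line with oc set probe. The following is a sketch that assumes a deployment configuration named hola exposing the /health endpoint on port 8080:

[student@workstation ~]$ oc set probe dc/hola --readiness --liveness \
--get-url=http://:8080/health --initial-delay-seconds=15 --timeout-seconds=1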

Creating a Health Check Probe in the OpenShift Web Console
It is also possible to configure probes after your microservice is deployed onto the OpenShift
cluster. You can do this using the YAML resource definitions above, or you can use the OpenShift
web console. To create probes using the web console, make sure you have your current project
selected and then select Applications > Deployments and choose the deployment for your
microservice:


Figure 6.1: OpenShift web console deployments window

From the deployment summary screen, use the Actions drop-down menu in the top right corner,
and select Edit Health Checks:

Figure 6.2: OpenShift web console deployment summary window

On the Edit Health Checks page, you are presented with a form where you can configure both
the liveness and readiness probes for this deployment. In this example, use an HTTP GET for the
readiness probe with the path /health and port 8080:


Figure 6.3: OpenShift web console readiness probe window

Define the liveness probe the same way, further down the form, using the same path and port. In
this example, both probes are monitoring the health check endpoint provided by the MicroProfile
health specification. Be sure to click the Save button when you finish configuring the probes.

Figure 6.4: OpenShift web console liveness probe window

Defining Health Check Resources with the fabric8 Maven Plug-in
The fabric8 Maven plug-in offers a simple approach to automatically creating application health
checks for your microservice deployed on OpenShift Container Platform. To do this, include
YAML definitions for whatever probes you want in a deployment.yml OpenShift resource
fragment. Place this YAML file in the src/main/fabric8 directory of your project. The
following is an example of a deployment.yml file that defines a liveness and a readiness probe
for its microservice:

spec:
  template:
    spec:
      containers:
      - readinessProbe:
          httpGet:
            path: /health
            port: 8080
            scheme: HTTP
          initialDelaySeconds: 15
          timeoutSeconds: 5
        livenessProbe:
          httpGet:
            path: /health
            port: 8080
            scheme: HTTP
          initialDelaySeconds: 15
          timeoutSeconds: 5
        ports:
        - containerPort: 8080
          name: http
          protocol: TCP

Demonstration: Aggregating Health Checks


1. Log in to the workstation VM as student using student as the password.

2. In the microservice-vote project, open the HashMapAttendeeDAO class located in the


io.microprofile.showcase.vote.persistence package.

2.1. Inspect the @Health class-level annotation declared on the class. The annotation
configures the class as a health check information provider.

2.2. Inspect the interfaces that the HashMapAttendeeDAO class


implements. Classes providing a health check must implement the
org.eclipse.microprofile.health.HealthCheck interface.

2.3. Inspect the call() method. Currently, this method always returns the
HealthCheckResponse.named("HashMap").up().build() value.

3. Open the CouchSessionRatingDAO class located in the


io.microprofile.showcase.vote.persistence.couch package.

3.1. Inspect the @Health class-level annotation declared on the class.

3.2. Verify that the class implements the


org.eclipse.microprofile.health.HealthCheck interface.

3.3. Inspect the return value from the call() method. It returns the
HealthCheckResponse.named().up().build() value if the CouchDB server is
accessible, and the HealthCheckResponse.named().down().build() value if
not.
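
For reference, a connectivity-based health check such as these generally follows the pattern sketched below. The isServerAccessible() helper is hypothetical and stands in for the actual CouchDB connection test performed by these classes.

import javax.enterprise.context.ApplicationScoped;
import org.eclipse.microprofile.health.Health;
import org.eclipse.microprofile.health.HealthCheck;
import org.eclipse.microprofile.health.HealthCheckResponse;

@Health
@ApplicationScoped
public class CouchHealthSketch implements HealthCheck {

    @Override
    public HealthCheckResponse call() {
        // isServerAccessible() is a hypothetical helper that attempts a
        // connection to the CouchDB server and returns true on success.
        return HealthCheckResponse.named("CouchSessionRatingDAO")
                .state(isServerAccessible())
                .build();
    }

    private boolean isServerAccessible() {
        // Placeholder for the real connectivity test.
        return true;
    }
}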

4. Open the CouchAttendeeDAO class located in the


io.microprofile.showcase.vote.persistence.couch package.

4.1. Inspect the @Health class-level annotation declared on the class.

4.2. Verify that the class implements the


org.eclipse.microprofile.health.HealthCheck interface.

4.3. Inspect the return value from the call() method. It returns the
HealthCheckResponse.named().up().build() value if the CouchDB server is
accessible, and HealthCheckResponse.named().down().build() value if not.

5. Test the health check endpoint.

5.1. Open a terminal window on the workstation VM and navigate to the microservice-
vote project.

[student@workstation ~]$ cd microprofile-conference/microservice-vote

5.2. Start the microservice.


[student@workstation microservice-vote]$ mvn clean wildfly-swarm:run \


-DskipTests

5.3. Start Firefox on the workstation VM and click the RESTClient plug-in in the
browser's toolbar.

5.4. Select GET as the Method. In the URL form, enter http://localhost:8080/health.

Click Send.

5.5. Verify in the Headers tab that the Status Code is 503 Service Unavailable.

5.6. Verify in the Preview tab that the response matches the following:

{
"checks": [{
"name": "CouchAttendeeDAO",
"state": "DOWN"
},
{
"name": "CouchSessionRatingDAO",
"state": "DOWN"
},
{
"name": "HashMap",
"state": "UP"
}
],
"outcome": "DOWN"
}

The health check status is DOWN because the call() methods implemented in the
CouchAttendeeDAO and CouchSessionRatingDAO classes inspect the CouchDB
server availability. Because the server is not currently running, these methods return the
HealthCheckResponse.named().down().build() value.

6. Open a new terminal window on the workstation VM and start the CouchDB server:

[student@workstation microservice-vote]$ sudo systemctl start couchdb

7. Restart the microservice. This step is required because the microservice only attempts to
connect to the database during its initial start up.

7.1. Stop the microservice. Hit Ctrl+C in the terminal window running the microservice.

7.2. Start the microservice.

[student@workstation microservice-vote]$ mvn clean wildfly-swarm:run \


-DskipTests

8. Invoke the health check REST endpoint.


8.1. Using the RESTClient plug-in, select GET as the Method. In the URL form, enter
http://localhost:8080/health.

Click Send.

8.2. Verify in the Headers tab that the Status Code is 200 OK.

8.3. Verify in the Preview tab that the response matches the following:

{
"checks": [{
"name": "CouchAttendeeDAO",
"state": "UP"
},
{
"name": "CouchSessionRatingDAO",
"state": "UP"
},
{
"name": "HashMap",
"state": "UP"
}
],
"outcome": "UP"
}

This concludes the demonstration.

References
MicroProfile Health Specification
https://github.com/eclipse/microprofile-health

OpenShift Developer Guide - Health Checks


https://docs.openshift.com/container-platform/3.7/dev_guide/
application_health.html


Guided Exercise: Implementing a Health Check

In this exercise, you will activate health check capabilities in a microservice and monitor it in
OpenShift with a probe.

Outcomes
You should be able to activate health check capabilities in a microservice implemented with
WildFly Swarm.

Before you begin


If you have not already, execute the git clone command to clone the hello-microservices
repository onto the workstation machine.

[student@workstation ~]$ git clone http://services.lab.example.com/hello-microservices


Cloning into 'hello-microservices'...
...output omitted...
Resolving deltas: 100% (2803/2803), done.

Then, run lab setup to begin the exercise.

[student@workstation ~]$ lab health setup

Steps
1. Switch the repository to the lab-health branch to get the correct version of the
application code for this exercise.

1.1. Switch to the branch using the git checkout command.

[student@workstation ~]$ cd hello-microservices


[student@workstation hello-microservices]$ git checkout lab-health
Branch lab-health set up to track remote branch lab-health from origin.
Switched to a new branch 'lab-health'

1.2. Use the git status command to ensure that you are on the correct branch.

[student@workstation hello-microservices]$ git status


#On branch lab-health
nothing to commit, working tree clean

2. Implement the MicroProfile health specification requirements in the HolaHealth class.

2.1. Open the HolaHealth class by expanding the hola item in the Project Explorer tab
in the left pane of JBoss Developer Studio, then click hola > Java Resources > src/
main/java > com.redhat.training.msa.hola.health and expand it. Double-click the
HolaHealth.java file.

2.2. Add the @Health class-level annotation to configure the class as a health check
information provider.

//Add the @Health annotation


@Health

//Implements the HealthCheck interface
public class HolaHealth {
...

2.3. Support the requirement from the MicroProfile health specification. Declare the
org.eclipse.microprofile.health.HealthCheck interface as one of the interfaces
that the HolaHealth class implements.

//Implements the HealthCheck interface


public class HolaHealth implements HealthCheck {
...

2.4. Implement the call() method to report to the health check probe that the
application's endpoints are always running. This method needs to return the
HealthCheckResponse.named("hola service").up().build() value.

//Implement the call() method


public HealthCheckResponse call() {
return HealthCheckResponse.named("hola service")
.up().build();
}

3. Customize the deployment configuration file to configure the readiness health check probe
from OpenShift.

3.1. Open the deployment.yml file by expanding the hola item in the Project Explorer tab
in the left pane of JBoss Developer Studio, then click hola > src > main > fabric8 and
expand it. Double-click the deployment.yml file.

3.2. Update the file to configure a readiness health check probe with the following values:

• path: /health

• port: 8080

• scheme: HTTP

• initialDelaySeconds: 30

...
readinessProbe:
failureThreshold: 3
httpGet:
path: /health
port: 8080
scheme: HTTP
initialDelaySeconds: 30
...

4. Create a new project in OpenShift.

4.1. Open a terminal window on the workstation VM and log in to OpenShift cluster as the
developer user:


[student@workstation ~]$ oc login -u developer -p redhat \


https://master.lab.example.com

4.2. Create the hellohealth project:

[student@workstation ~]$ oc new-project hellohealth


Now using project "hellohealth"...

5. Deploy the application on the OpenShift cluster.

5.1. Navigate to the hola microservice project and deploy it on the OpenShift cluster:

[student@workstation ~]$ cd hello-microservices/hola


[student@workstation hola]$ mvn clean fabric8:deploy -DskipTests
[INFO] Scanning for projects...
[INFO] F8: Running in OpenShift mode
...
[INFO] Current reconnect backoff is 4000 milliseconds (T2)
...
[INFO] BUILD SUCCESS
...

6. Configure the liveness health check probe in the OpenShift cluster.

6.1. Open a web browser on the workstation VM.

6.2. Navigate to https://master.lab.example.com to access the OpenShift web


console.

6.3. Log in to the web console using developer as the user name and redhat as the
password.

6.4. Return to the web browser that you used to log in to the OpenShift cluster. In the right
navigation bar, click hellohealth.

6.5. In the left navigation bar, click Applications and then click Deployments.

6.6. In the table, click hola.

6.7. Click the Actions drop-down menu and then click Edit Health Checks.

6.8. Click Add Liveness Probe at the bottom of the page.

6.9. Fill in the Liveness Probe form with the following values:

• In the Type field, select HTTP GET.

• In the Path field, enter /health.

• In the Initial Delay field, enter 30.

• In the Timeout field, enter 5.

Click Save.

7. Test the health check probe.

7.1. In the left navigation bar, click Applications and then click Pods.

7.2. Wait until the newest pod is in the Running state, and container's ready value is 1/1.
The pod changes to the Running state only when the readiness health check runs
successfully.

7.3. Open a web browser and navigate to http://hola.apps.lab.example.com/


health to test the health check. The expected result is:

{"checks": [
{"name":"hola service","state":"UP"}],
"outcome": "UP"
}

8. Clean up the OCP project, commit your changes to your local Git repository in the lab
branch, and return to the master branch.

8.1. Delete the OCP project hellohealth to undeploy the OCP resources associated with
the project.

[student@workstation hola]$ oc delete project \


hellohealth
project "hellohealth" deleted

8.2. Stage the uncommitted changes using the git add command.

[student@workstation hola]$ git add .

8.3. Commit your changes to the local branch using the git commit command.

[student@workstation hola]$ git commit \


-m"completing lab health"
[lab-health 72109445] completing lab health

8.4. Switch the working copy back to the master branch to finish cleaning up.

[student@workstation hola]$ git checkout master


Switched to branch 'master'

This concludes the guided exercise.


Lab: Creating Application Health Checks

In this lab, you will activate health check capability in a microservice and monitor it on OpenShift
with probes.

Outcomes
You should be able to activate health check capabilities in a microservice implemented with
WildFly Swarm.

Before you begin


If you have not already done so, use git clone to download the microprofile-conference
repository to the workstation machine.

[student@workstation ~]$ git clone \


http://services.lab.example.com/microprofile-conference
Cloning into 'microprofile-conference'...
...output omitted...
Resolving deltas: 100% (2803/2803), done.

Then run the lab setup to begin the exercise.

[student@workstation ~]$ lab health-review setup

Steps
1. Switch the repository to the lab-health-review branch to get the correct version of the
application code for this exercise.

2. Implement a health check in the SessionCheck class from the microservice-session


microservice. The application is considered healthy when the getSessionCount() method
returns an integer that is greater than zero. Name the health check sessions-check.
The health check must return the number of sessions and the current date information.
They must be labeled sessionCount and lastCheckDate, respectively.

3. Test the health check by starting WildFly Swarm and using the RESTClient Firefox plug-in
and accessing the http://localhost:8080/health URL. By default, the microservice-
session microservice does not have any session loaded, which causes the health check to
return a DOWN state. To avoid this condition, you may customize the loadSampleData
property from the MicroProfile configuration specification to load sessions during the start
up process. Use the following command to change the property value:

[student@workstation microprofile-conference]$ export loadSampleData=true

Start the microservice before accessing the application with the web browser. Inspect the
health check behavior with and without the loadSampleData variable set.

Note
Consider skipping the tests when starting WildFly Swarm to minimize the amount
of time needed to start the application.

4. Implement the MicroProfile health check specification in the
io.microprofile.showcase.vote.persistence.couch.CouchAttendeeDAO class.
Consider the microservice-vote microservice healthy when it can connect to the CouchDB
server. Use the connected attribute to check whether the microservice is connected to the
CouchDB server. If the attribute is set to true, the method must return the UP state. Name
the health check CouchAttendeeDAO.

5. Customize the fabric8 deployment configuration file to configure the readiness and liveness
health check probes from OpenShift in the microservice-vote microservice. Configure the
probes with the following values:

• path: /health

• port: 8080

• scheme: HTTP

• initialDelaySeconds: 15

6. Create a new OpenShift project called health-review-unhealthy. Deploy the


microservice on the OpenShift cluster using the fabric8 Maven plug-in. Observe, in the
OpenShift web console, that the pod is restarting because the health check is returning
DOWN. The microservice is unhealthy because there is no CouchDB database available for
connection.

7. Delete the health-review-unhealthy project and create a new one called health-
review-healthy to deploy a CouchDB database pod and the microservice-vote
microservice again. To deploy it, execute the /home/student/JB283/labs/health-
review/deploy-couchdb.sh script.

7.1. Delete the health-review-unhealthy project.

[student@workstation microservice-vote]$ oc delete project \


health-review-unhealthy

7.2. Create the health-review-healthy project.

[student@workstation microservice-vote]$ oc new-project health-review-healthy

7.3. Deploy the CouchDB database:

[student@workstation microservice-vote]$ cd ~/JB283/labs/health-review/


[student@workstation health-review]$ ./deploy-couchdb.sh
...output omitted...
Deploying a CouchDB database...
Deployed

Note
The script may raise messages such as No resources found. You may
disregard it.


8. Deploy the microservice-vote microservice. Test the health check from a client using
the RESTClient Firefox plug-in accessing the http://microservice-vote-health-
review-healthy.apps.lab.example.com/health URL.

9. Grade the lab.

[student@workstation microservice-vote]$ lab health-review grade

10. Clean up the OCP project, commit your changes to your local Git repository in the lab
branch, and return to the master branch.

10.1. Delete the OCP project health-review-healthy to undeploy the service and remove
the other OCP resources.

[student@workstation microservice-vote]$ oc delete project \


health-review-healthy
project "health-review-healthy" deleted

10.2.Stage the uncommitted changes using the git add command.

[student@workstation microservice-vote]$ git add .

10.3.Commit your changes to the local branch using the git commit command.

[student@workstation microservice-vote]$ git commit \


-m"completing lab health-review"
[lab-health-review 72109278] completing lab lab-health-review

10.4.Switch the working copy back to the master branch to finish cleaning up.

[student@workstation microservice-vote]$ git checkout master


Switched to branch 'master'

This concludes the lab.


Solution
In this lab, you will activate health check capability in a microservice and monitor it on OpenShift
with probes.

Outcomes
You should be able to activate health check capabilities in a microservice implemented with
WildFly Swarm.

Before you begin


If you have not already done so, use git clone to download the microprofile-conference
repository to the workstation machine.

[student@workstation ~]$ git clone \


http://services.lab.example.com/microprofile-conference
Cloning into 'microprofile-conference'...
...output omitted...
Resolving deltas: 100% (2803/2803), done.

Then run the lab setup to begin the exercise.

[student@workstation ~]$ lab health-review setup

Steps
1. Switch the repository to the lab-health-review branch to get the correct version of the
application code for this exercise.

1.1. Switch to the branch using the git checkout command.

[student@workstation ~]$ cd microprofile-conference


[student@workstation microprofile-conference]$ git checkout \
lab-health-review
Switched to branch 'lab-health-review'

1.2. Use the git status command to ensure that you are on the correct branch.

[student@workstation microprofile-conference]$ git status


# On branch lab-health-review
nothing to commit, working directory clean

2. Implement a health check in the SessionCheck class from the microservice-session


microservice. The application is considered healthy when the getSessionCount() method
returns an integer that is greater than zero. Name the health check sessions-check.
The health check must return the number of sessions and the current date information.
They must be labeled sessionCount and lastCheckDate, respectively.

2.1. Open the SessionCheck class by expanding the microservice-session item in the
Project Explorer tab in the left pane of JBoss Developer Studio. Click microservice-
session > Java Resources > src/main/java > io.microprofile.showcase.session to expand
it. Double-click the SessionCheck.java file.

2.2. Add the @Health class-level annotation to configure the class as a health check
information provider.


//Add the @Health annotation


@Health
@ApplicationScoped
//Implements the HealthCheck interface
public class SessionCheck {
...

2.3. Support the requirement from the MicroProfile Health specification. Declare the
org.eclipse.microprofile.health.HealthCheck interface as one of the interfaces
that the SessionCheck class implements.

//Implements the HealthCheck interface


public class SessionCheck implements HealthCheck {
...

2.4. Implement the call() method. The method must provide the following information:

• the number of sessions

• the date the check was executed

public HealthCheckResponse call() {


long sessionCount = getSessionCount();
HealthCheckResponseBuilder healthCheckResponse =
HealthCheckResponse.named("sessions-check")
.withData(sessionCountName, sessionCount)
.withData("lastCheckDate", new Date().toString());
return (sessionCount > 0) ? healthCheckResponse.up().build()
: healthCheckResponse.down().build();
}

2.5. Save your changes to the file using Ctrl+S.

3. Test the health check by starting WildFly Swarm and using the RESTClient Firefox plug-in
and accessing the http://localhost:8080/health URL. By default, the microservice-
session microservice does not have any session loaded, which causes the health check to
return a DOWN state. To avoid this condition, you may customize the loadSampleData
property from the MicroProfile configuration specification to load sessions during the start
up process. Use the following command to change the property value:

[student@workstation microprofile-conference]$ export loadSampleData=true

Start the microservice before accessing the application with the web browser. Inspect the
health check behavior with and without the loadSampleData variable set.

Note
Consider skipping the tests when starting WildFly Swarm to minimize the amount
of time needed to start the application.

3.1. Navigate to the microservice-session microservice and start it:


[student@workstation microprofile-conference]$ cd microservice-session


[student@workstation microservice-session]$ mvn clean wildfly-swarm:run \
-DskipTests

3.2. Test the service from a client using the RESTClient Firefox plug-in.

Start Firefox on the workstation VM and click the RESTClient plug-in in the browser's
toolbar.

3.3. Select GET as the Method. In the URL form, enter http://localhost:8080/health.

3.4. Click Send.

3.5. Verify, in the Headers tab, that the Status Code is 503 Service Unavailable.

3.6. Verify, in the Preview tab, that the response matches the following:

{
"checks": [{
"name": "sessions-check",
"state": "DOWN",
"data": {
"lastCheckDate": Wed Mar 28 16:05:13 BRT 2018,
"session-count": 0
}
}],
"outcome": "DOWN"
}

3.7. Return to the terminal window running the microservice-session microservice and stop
the service using Ctrl+C.

3.8. Create the loadSampleData environment variable to load sessions during startup.

[student@workstation microservice-session]$ export loadSampleData=true

3.9. Start the microservice:

[student@workstation microservice-session]$ mvn clean wildfly-swarm:run \


-DskipTests

3.10.In the RESTClient Firefox plug-in, send a new request to the same REST endpoint
(http://localhost:8080/health). Click Send.

3.11. Verify, in the Headers tab, that the Status Code is 200 OK.

3.12. Verify, in the Preview tab, that the response matches the following:

{
"checks": [{
"name": "sessions-check",
"state": "UP",
"data": {

JB283-RHOAR1.0-en-1-20180517 199
Chapter 6. Creating Application Health Checks

"lastCheckDate": Wed Mar 28 16:08:33 BRT 2018,


"session-count": 101
}
}],
"outcome": "UP"
}

3.13. Return to the terminal window running the microservice-session microservice and stop
the service using Ctrl+C.

4. Implement the MicroProfile health check specification in the


io.microprofile.showcase.vote.persistence.couch.CouchAttendeeDAO class.
Consider the microservice-vote microservice healthy when it can connect to the CouchDB
server. Use the connected attribute to check whether the microservice is connected to the
CouchDB server. If the attribute is set to true, the method must return the UP state. Name
the health check CouchAttendeeDAO.

4.1. Open the CouchAttendeeDAO class by expanding the microservice-


vote item in the Project Explorer tab in the left pane of JBoss Developer
Studio. Click microservice-vote > Java Resources > src/main/java >
io.microprofile.showcase.vote.persistence.couch to expand it. Double-click the
CouchAttendeeDAO.java file.

4.2. Add the @Health class-level annotation to configure the class as a health check
information provider.

//Add the @Health annotation


@Health
//Implements the HealthCheck interface
public class CouchAttendeeDAO implements AttendeeDAO {
...

4.3. Support the requirement from the MicroProfile health specification. Declare the
org.eclipse.microprofile.health.HealthCheck interface as one of the interfaces
that the CouchAttendeeDAO class implements.

//Implements the HealthCheck interface


public class CouchAttendeeDAO implements AttendeeDAO,HealthCheck {
...

4.4. Implement the call() method.

public HealthCheckResponse call() {


HealthCheckResponseBuilder b =
HealthCheckResponse.named(CouchAttendeeDAO.class.getSimpleName());
return connected ? b.up().build() : b.down().build();
}

5. Customize the fabric8 deployment configuration file to configure the readiness and liveness
health check probes from OpenShift in the microservice-vote microservice. Configure the
probes with the following values:

• path: /health


• port: 8080

• scheme: HTTP

• initialDelaySeconds: 15

5.1. Open the deployment.yml file by expanding the microservice-vote item in the
Project Explorer tab in the left pane of JBoss Developer Studio. Click microservice-
vote > src > main > fabric8 to expand it. Double-click the deployment.yml file.

5.2. Update the file to configure a readiness health check probe:

...
- readinessProbe:
httpGet:
path: /health
port: 8080
scheme: HTTP
initialDelaySeconds: 15
...

5.3. Update the file to configure a liveness health check probe:

...
livenessProbe:
httpGet:
path: /health
port: 8080
scheme: HTTP
initialDelaySeconds: 15
...

6. Create a new OpenShift project called health-review-unhealthy. Deploy the


microservice on the OpenShift cluster using the fabric8 Maven plug-in. Observe, in the
OpenShift web console, that the pod is restarting because the health check is returning
DOWN. The microservice is unhealthy because there is no CouchDB database available for
connection.

6.1. Open a terminal window on the workstation VM and log in to OpenShift cluster as the
developer user:

[student@workstation ~]$ oc login -u developer -p redhat \


https://master.lab.example.com

6.2. Create the health-review-unhealthy project:

[student@workstation ~]$ oc new-project health-review-unhealthy


Now using project "health-review-unhealthy"...

6.3. Open a new terminal window, and navigate to the microservice-vote microservice
project. Deploy it on the OpenShift cluster:

[student@workstation ~]$ cd microprofile-conference/microservice-vote


[student@workstation microservice-vote]$ mvn clean fabric8:deploy -DskipTests


[INFO] Scanning for projects...
[INFO] F8: Running in OpenShift mode
...
[INFO] Current reconnect backoff is 4000 milliseconds (T2)
...
[INFO] BUILD SUCCESS
...

Note
The following error may occur during the deployment:

[ERROR] Exception in reconnect


java.util.concurrent.RejectedExecutionException: Task
java.util.concurrent.ScheduledThreadPoolExecutor
$ScheduledFutureTask@7eae1359 rejected from
java.util.concurrent.ScheduledThreadPoolExecutor@3bac82e8[Shutting
down, pool size = 1, active threads = 1, queued tasks = 0, completed
tasks = 12]

You may disregard it.

6.4. Open a web browser and enter the following URL:

https://master.lab.example.com.

6.5. Log in to the web console as the developer user, using redhat as the password.

6.6. In the right navigation bar, click health-review-unhealthy.

6.7. In the left navigation bar, click Applications, and then click Pods. Observe that the
number of restarts is increasing for the microservice-vote-1-rs76j pod.

7. Delete the health-review-unhealthy project and create a new one called health-
review-healthy to deploy a CouchDB database pod and the microservice-vote
microservice again. To deploy it, execute the /home/student/JB283/labs/health-
review/deploy-couchdb.sh script.

7.1. Delete the health-review-unhealthy project.

[student@workstation microservice-vote]$ oc delete project \


health-review-unhealthy

7.2. Create the health-review-healthy project.

[student@workstation microservice-vote]$ oc new-project health-review-healthy

7.3. Deploy the CouchDB database:

[student@workstation microservice-vote]$ cd ~/JB283/labs/health-review/


[student@workstation health-review]$ ./deploy-couchdb.sh


...output omitted...
Deploying a CouchDB database...
Deployed

Note
The script may raise messages such as No resources found. You may
disregard it.

8. Deploy the microservice-vote microservice. Test the health check from a client using
the RESTClient Firefox plug-in accessing the http://microservice-vote-health-
review-healthy.apps.lab.example.com/health URL.

8.1. Navigate to the microservice-vote microservice project and deploy it on the OpenShift
cluster:

[student@workstation health-review]$ cd \
~/microprofile-conference/microservice-vote
[student@workstation microservice-vote]$ mvn clean fabric8:deploy -DskipTests
[INFO] Scanning for projects...
[INFO] F8: Running in OpenShift mode
...
[INFO] Current reconnect backoff is 4000 milliseconds (T2)
...
[INFO] BUILD SUCCESS
...

Note
The following error may occur during the deployment:

[ERROR] Exception in reconnect


java.util.concurrent.RejectedExecutionException: Task
java.util.concurrent.ScheduledThreadPoolExecutor
$ScheduledFutureTask@7eae1359 rejected from
java.util.concurrent.ScheduledThreadPoolExecutor@3bac82e8[Shutting
down, pool size = 1, active threads = 1, queued tasks = 0, completed
tasks = 12]

You may disregard it.

8.2. Return to the web browser that you used to log in to the OpenShift cluster. Select the
health-review-healthy project. In the left navigation bar, click Applications, and
then click Pods. Wait until the newest pod is in the Running state and the Containers
Ready value is 1/1.

8.3. Start Firefox on the workstation VM and click the RESTClient plug-in in the browser's
toolbar.

8.4. Select GET as the Method. In the URL form, enter http://microservice-vote-
health-review-healthy.apps.lab.example.com/health.


8.5. Click Send.

8.6. Verify in the Headers tab that the Status Code is 200 OK.

8.7. Verify in the Preview tab that the response matches the following:

{
"checks": [{
"name": "CouchAttendeeDAO",
"state": "UP"
},
{
"name": "CouchSessionRatingDAO",
"state": "UP"
},
{
"name": "HashMap",
"state": "UP"
}],
"outcome": "UP"
}

9. Grade the lab.

[student@workstation microservice-vote]$ lab health-review grade

10. Clean up the OCP project, commit your changes to your local Git repository in the lab
branch, and return to the master branch.

10.1. Delete the OCP project health-review-healthy to undeploy the service and remove
the other OCP resources.

[student@workstation microservice-vote]$ oc delete project \


health-review-healthy
project "health-review-healthy" deleted

10.2.Stage the uncommitted changes using the git add command.

[student@workstation microservice-vote]$ git add .

10.3.Commit your changes to the local branch using the git commit command.

[student@workstation microservice-vote]$ git commit \


-m"completing lab health-review"
[lab-health-review 72109278] completing lab lab-health-review

10.4.Switch the working copy back to the master branch to finish cleaning up.

[student@workstation microservice-vote]$ git checkout master


Switched to branch 'master'


This concludes the lab.


Summary
In this chapter, you learned:

• The MicroProfile Health specification provides a simple way to include health checks in a
microservice. The specification aims to provide health checks that can be consumed by
platforms such as OpenShift.

• The health check architecture defined in the specification consists of a single /health
REST endpoint in a MicroProfile-based microservice that reports the status of the entire
microservice using an HTTP status code.

• To create a new health check for the microservice, use the @Health annotation on any class
that implements the HealthCheck interface.

• If multiple health checks are defined in a single microservice, WildFly Swarm aggregates the
checks and reports a single overall status representing the logical AND of all of the checks.
That is, if a single check fails, the health outcome of the entire microservice is reported as
DOWN.

• In containerized microservice environments, it is common for individual components to


become unhealthy due to transient issues (such as temporary connectivity loss), configuration
errors, or problems with external dependencies.

• OpenShift Container Platform provides a number of options to detect and handle unhealthy
containers. The primary resource that OpenShift uses to monitor container health is called a
probe. A probe is a diagnostic process that uses some action to query the health of individual
containers, typically on a configurable schedule.

• There are two main types of probes that OpenShift leverages: liveness probes and readiness
probes.

TRAINING
CHAPTER 7

IMPLEMENTING FAULT
TOLERANCE

Overview
Goal: Implement fault tolerance in a microservice architecture.
Objectives: Apply fault tolerance policies to a microservice.
Sections: Applying Fault Tolerance Policies to a Microservice (and Guided Exercise)
Lab: Implementing Fault Tolerance


Applying Fault Tolerance Policies to a Microservice

Objectives
After completing this section, students should be able to apply fault tolerance policies to a
microservice.

Developing Robust Applications with the MicroProfile Fault Tolerance Specification
Fault tolerance ensures that a microservice fails gracefully if a dependent service becomes
unavailable. For example, hardware failures, network connectivity issues, routine maintenance,
and failed deployments are all reasons a service could go offline at any moment.

Whenever your microservice depends on another application, you must use a reliable fault
tolerance framework to ensure that your microservice does not succumb to any downstream
failures.

The MicroProfile fault tolerance specification uses multiple strategies to minimize the effects
of dependency failures by implementing a set of recovery procedures for microservices. The
recovery procedures defined by the fault tolerance specification include:

Circuit breaker (@org.eclipse.microprofile.faulttolerance.CircuitBreaker)


This annotation supports a fail-fast approach if the system is suffering from an overload or is
unavailable.

Bulkhead (@org.eclipse.microprofile.faulttolerance.Bulkhead)
This annotation isolates the part of the system with problems while allowing the remainder
of the system to continue to respond.

Fallback (@org.eclipse.microprofile.faulttolerance.Fallback)
This annotation executes an alternative method if the execution fails for the annotated
method.

Retry policy (@org.eclipse.microprofile.faulttolerance.Retry)
This annotation defines the criteria for when an execution should be retried.

Timeout (@org.eclipse.microprofile.faulttolerance.Timeout)
This annotation defines the maximum execution time before raising an error.

Asynchronous (@org.eclipse.microprofile.faulttolerance.Asynchronous)
This annotation executes the method asynchronously.

The MicroProfile implementations, such as WildFly Swarm, identify failures as they occur, but you
must define which recovery procedure your implementation supports using the fault tolerance
annotations defined in the specification.


Hystrix
WildFly Swarm uses Hystrix, a third-party library from Netflix OSS, as its underlying fault
tolerance implementation to support the MicroProfile requirements. In addition to fault
tolerance, Hystrix also provides latency management, monitoring, and concurrency control
capabilities, as well as a web UI for monitoring metrics and identifying problems.

To enable the Hystrix fault tolerance fraction in WildFly Swarm, add the following dependency to
the project's POM file:

<dependency>
<groupId>org.wildfly.swarm</groupId>
<artifactId>microprofile-fault-tolerance</artifactId>
</dependency>

Implementing Fault Tolerance with Annotations


To add support for the fault tolerance recovery policies, annotate each method in a microservice
with your preferred policy's annotation.

Retry Policy
The fault tolerance implementation uses the @Retry annotation to support multiple invocations
of a method if an exception, timeout, or other condition occurs. In the following example, a
method is re-executed whenever a RuntimeException is raised.

@Retry(retryOn={RuntimeException.class})
public Session getSessionById(int id){
...output omitted...
}

To configure the maximum duration that all the retry attempts can take, set the
maxDuration attribute.

@Retry(maxRetries=90, maxDuration=1000,retryOn={RuntimeException.class})
public Product getProduct(int id) {
...output omitted...
}

In the previous example, the method can be executed up to 90 times, or until the total execution
time exceeds 1000 milliseconds, whichever comes first.

The following table lists all the Retry annotation parameters:

Retry Parameters
Parameter        Java type                      Description
maxRetries       long                           The maximum number of retries
delay            long                           The delay between each retry
delayUnit        java.time.temporal.ChronoUnit  The delay time unit
maxDuration      long                           The maximum amount of time that all retry attempts may take
durationUnit     java.time.temporal.ChronoUnit  The duration time unit
jitter           long                           The random variation applied to each retry delay
jitterDelayUnit  java.time.temporal.ChronoUnit  The jitter time unit
retryOn          java.lang.Class[]              The exceptions that cause a retry
abortOn          java.lang.Class[]              The exceptions that cause the execution to abort
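
For example, the delay, jitter, retryOn, and abortOn parameters can be combined as shown below. This is a sketch; the exception types and values are illustrative, not taken from the course projects.

// Retry up to 5 times, waiting roughly 400 ms (plus or minus up to 200 ms of
// jitter) between attempts; retry on IOException, but abort immediately on
// SecurityException.
@Retry(maxRetries = 5, delay = 400, jitter = 200,
       retryOn = {IOException.class}, abortOn = {SecurityException.class})
public Product getProduct(int id) {
...output omitted...
}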

Timeout
The fault tolerance implementation uses the @Timeout annotation to limit how long a method
may take to execute. If the method execution time exceeds the value defined in the annotation,
the implementation aborts the execution and throws a TimeoutException exception.

@Timeout(1000)
public Product getProduct(int id) {
...output omitted...
}

In the previous example, the method throws a TimeoutException exception if the method
takes more than 1000 milliseconds to execute.

For longer durations, you can use the unit attribute, which supports SECONDS, MINUTES,
HOURS, DAYS, WEEKS, and other time periods from the java.time.temporal.ChronoUnit
enumeration.

@Timeout(value=1000, unit=ChronoUnit.DAYS)
public Product getProduct(int id) {
...output omitted...
}

The following table lists all the Timeout annotation parameters:

Timeout Parameters
Parameter  Java type                      Description
value      long                           The maximum amount of time the method may take before timing out
unit       java.time.temporal.ChronoUnit  The time unit of the value parameter

Fallback
The fault tolerance implementation uses the @Fallback annotation to define an alternative
method if any exception is raised by the method execution.

@Fallback(fallbackMethod="getCachedProduct")
public Product getProduct(int id) {
...output omitted...
}


In the previous example, whenever the getProduct method executes, the fault tolerance
implementation monitors the method execution output. If the method raises an exception, the
fallback implementation calls the method defined in the fallbackMethod parameter.

The fallback implementation throws the FaultToleranceDefinitionException exception
whenever the fallback method does not declare the same return type as the original method.

In the following example, the @Timeout annotation is used to limit the time a method takes
to execute. If the TimeoutException exception is raised, the fault tolerance implementation
triggers the method defined in the fallbackMethod attribute.

@Timeout(500)
@Fallback(fallbackMethod="getCachedProduct")
public Product getProduct(int id) {
...output omitted...
}

In the previous example, if the method does not return within 500 ms, the timeout
implementation throws a TimeoutException exception.

The following table lists all the Fallback annotation parameters:

Fallback Parameters
Parameter       Java type         Description
fallbackMethod  java.lang.String  The method to call if an exception is raised
value           java.lang.Class   The FallbackHandler implementation that manages the fallback
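
As an alternative to the fallbackMethod parameter, the value parameter accepts a FallbackHandler implementation. The following sketch assumes a Product type and a placeholder cached value; only the handler structure reflects the MicroProfile API.

public class ProductFallbackHandler implements FallbackHandler<Product> {

    @Override
    public Product handle(ExecutionContext context) {
        // context.getParameters() holds the arguments of the failed invocation
        int id = (Integer) context.getParameters()[0];
        return new Product(id, "cached product"); // placeholder cached value
    }
}

@Fallback(ProductFallbackHandler.class)
public Product getProduct(int id) {
...output omitted...
}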

Bulkhead
The fault tolerance implementation uses the @Bulkhead annotation to reduce the risk of
overloading an application by defining the maximum number of concurrent invocations of a
given method.

@Bulkhead(4)
public List getProducts() {
...
}

In the previous example, the getProducts() method supports at most four concurrent
invocations. If you invoke the method with more concurrent calls than the value defined in
the bulkhead, the fault tolerance implementation throws a BulkheadException exception.
To support alternative method execution, use the @Fallback annotation.

The @Bulkhead annotation can also be used simultaneously with the @Timeout annotation.

The following table lists all the Bulkhead annotation parameters:

Bulkhead Parameters
Parameter         Java type  Description
waitingTaskQueue  int        The maximum number of requests in the waiting queue
value             int        The maximum number of concurrent requests that the method can process
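
The waitingTaskQueue parameter is intended for asynchronous invocations, where additional requests wait in a queue instead of being rejected immediately. A sketch combining the two annotations follows; the values are illustrative.

// Allow at most 4 concurrent executions and queue up to 10 additional
// asynchronous requests before rejecting calls with a BulkheadException.
@Asynchronous
@Bulkhead(value = 4, waitingTaskQueue = 10)
public Future<List<Product>> getProducts() {
...output omitted...
}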

Circuit Breaker
The fault tolerance implementation uses the @CircuitBreaker annotation to protect the
execution of a method by marking it as unavailable when failures occur. To recover from a
failure, the circuit breaker implementation lets a single request through after a certain amount
of time to check whether the failure still occurs. If that request is processed successfully, the
circuit breaker implementation allows subsequent requests to access the method again.

There are three circuit states:

Closed
The service functions as expected. If a failure occurs, the circuit breaker keeps the record.
The circuit opens after the specified number of failures are reached.

Open
The service does not work and every call to the circuit breaker fails immediately. The circuit
transitions to a half-open state after the specified timeout is reached.

Half-open
The service is behaving unexpectedly, but a single call is passed to the service to check
whether the unstable condition has changed. If it fails, the circuit remains open.
Otherwise, subsequent calls are allowed. After the specified number of successful
executions, the circuit closes.

@CircuitBreaker(requestVolumeThreshold = 4, failureRatio = 0.5, delay = 1000)


public List<String> holaChaining() {
...output omitted...
}

In the previous code, the circuit opens if half (failureRatio = 0.5) of four consecutive
invocations (requestVolumeThreshold = 4) fail. The circuit stays open for
1000 milliseconds and then transitions to half-open. After a successful invocation, the
circuit is closed again.

The following table lists all the CircuitBreaker annotation parameters:

CircuitBreaker Parameters
Parameter               Java type  Description
successThreshold        long       The number of consecutive successful invocations required to close the circuit
requestVolumeThreshold  long       The number of consecutive invocations used as the baseline to calculate the failure ratio
failureRatio            double     The minimum failure ratio required to open the circuit
delay                   long       The amount of time the circuit stays open before a trial request is allowed

Whenever a circuit is opened, the fault tolerance implementation throws a
CircuitBreakerOpenException exception, and you can use the @Fallback annotation to
provide an alternative method execution. Similarly, to limit how long each guarded invocation
may take, combine the circuit breaker with the @Timeout annotation.
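
For example, the three annotations can be combined so that slow calls time out, repeated failures open the circuit, and callers always receive a response. This is a sketch; the method and fallback names are illustrative.

@Timeout(500)
@CircuitBreaker(requestVolumeThreshold = 4, failureRatio = 0.5, delay = 1000)
@Fallback(fallbackMethod = "holaFallback")
public List<String> holaChaining() {
...output omitted...
}

// Invoked when the call times out or the circuit is open
public List<String> holaFallback() {
    return Collections.singletonList("Aloha fallback");
}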

Asynchronous Policy
The fault tolerance implementation uses the @Asynchronous annotation to invoke a method
asynchronously. An important difference is that the return value from an @Asynchronous
annotated method must be a Future instance that is managed asynchronously by the client-
side application. In the following example, a method is invoked asynchronously by the REST
endpoint.

@Asynchronous
public Future<Session> getSessionById(int id){
...output omitted...
}
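
On the calling side, the client receives the Future immediately and retrieves the result later. The following is a sketch that assumes an injected sessionService bean; it is not part of the course projects.

Future<Session> future = sessionService.getSessionById(42);
// Do other work while the invocation runs, then block for the result.
// Note that get() can throw InterruptedException or ExecutionException.
Session session = future.get();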

Implementing Fault Tolerance Policies Using the MicroProfile Configuration Specification
As an alternative to hard-coding values in the fault tolerance annotations, you can use the
microprofile-config.properties file to configure fault tolerance. The MicroProfile
configuration specification allows finer tuning of the fault tolerance policies than the
annotation values alone.

As mentioned in the MicroProfile configuration chapter, the source code definition is overridden
by the values defined in the configuration file. To change the value of a parameter, edit the
microprofile-config.properties file and add a line using the following format:

<classname>/<methodname>/<annotation>/<parameter>

For example, to change the delay value from a circuit breaker annotation defined in the
holaChaining method from the com.redhat.training.msa.hola.rest.HolaResource
class, add the following line:

com.redhat.training.msa.hola.rest.HolaResource/holaChaining/CircuitBreaker/delay=5000

Demonstration: Combining Fault Tolerance Policies


1. Log in to the workstation VM as student using student as the password.

[student@workstation ~]$ cd hello-microservices


[student@workstation hello-microservices]$ git checkout \
demo-fault-tolerance-annotation

2. In JBoss Developer Studio, open the


com.redhat.training.msa.hola.rest.HolaResource class from the hola
microservice.

3. Inspect the holaChaining method annotations. The method is annotated with two
MicroProfile fault tolerance annotations: @Fallback and @Timeout.


The @Timeout annotation defines the maximum time the method should take to execute.
If the method takes longer than this value, the MicroProfile fault tolerance implementation
must either throw an exception or manage the exception using the @Fallback annotation.

In the current implementation, the holaChaining method calls the aloha microservice,
which takes a long time to respond. This behavior triggers the fallback method
defined on the holaChaining method.

4. Test the application.

Start the aloha microservice. From the terminal window on the workstation VM, run the
following commands:

[student@workstation hello-microservices]$ cd aloha


[student@workstation aloha]$ ./run.sh

Start the hola microservice. Open a new terminal window on the workstation VM and run
the following commands:

[student@workstation ~]$ cd hello-microservices/hola
[student@workstation hola]$ ./run.sh

5. Test the service from a client using the RESTClient Firefox plug-in.

Start Firefox on the workstation VM and click the RESTClient plug-in on the browser's
toolbar.

Figure 7.1: The Firefox RESTClient plug-in

6. Select GET as the Method. In the URL form, enter http://localhost:8080/api/hola-


chaining and then click Send.

7. In the Headers tab, verify that the Status Code is 200 OK.

8. In the Response tab, verify that the response matches the following:

["Hola de localhost","Aloha fallback"]

This address triggers the holaChaining method execution and due to the timeout policy,
the alohaFallback method is executed.

9. Stop the hola microservice to update the application source code. Press Ctrl+C in the
terminal window running the hola microservice.

10. Remove the fallback capability from the method execution. Comment out the @Fallback
annotation from the holaChaining method.


11. Rerun the hola microservice without the fallback behavior. From the terminal window where
you stopped the hola microservice, run the following command:

[student@workstation hola]$ ./run.sh

12. Test the service from a client using the RESTClient Firefox plug-in.

Start Firefox on the workstation VM and click the RESTClient plug-in on the browser's
toolbar.

13. Select GET as the Method. In the URL form, enter http://localhost:8080/api/hola-
chaining and then click Send.

14. In the Headers tab, verify that the Status Code is 500 Internal Server Error.

15. In the Preview tab, verify that the response matches the following:

Error processing request


org.eclipse.microprofile.faulttolerance.exceptions.TimeoutException:
com.netflix.hystrix.exception.HystrixRuntimeException:
public java.util.List<java.lang.String>
com.redhat.training.msa.hola.rest.HolaResource.holaChaining()
timed-out and no fallback available
...output omitted...

This address triggers the holaChaining method execution and, because the fallback policy
was removed, the REST endpoint raises an error.

16. Stop the microservices. In each terminal window running the lab, press Ctrl+C.

This concludes the demo.

References
MicroProfile fault tolerance specification page
http://microprofile.io/project/eclipse/microprofile-fault-tolerance

Hystrix Netflix OSS page


https://github.com/Netflix/Hystrix


Guided Exercise: Applying Fault Tolerance Policies

In this exercise, you will annotate a method to enable fault tolerance policies in the hello-
microservices application.

Outcomes
You should be able to implement fault tolerance policies to provide a robust behavior for
MicroProfile-compliant applications.

Before you begin


If you have not already, clone the hello-microservices repository to the workstation VM.

[student@workstation ~]$ git clone http://services.lab.example.com/hello-microservices


Cloning into 'hello-microservices'...
...output omitted...
Resolving deltas: 100% (2803/2803), done.

Use the lab setup command to ensure the environment is sound to begin the exercise.

[student@workstation ~]$ lab fault-tolerance-annotation setup

Steps
1. Switch the repository to the lab-fault-tolerance-annotation branch to get the
correct version of the application code for this exercise.

1.1. Use the git checkout command to check out the required branch.

[student@workstation ~]$ cd hello-microservices


[student@workstation hello-microservices]$ git checkout \
lab-fault-tolerance-annotation
...output omitted...
Switched to a new branch 'lab-fault-tolerance-annotation'

1.2. Use the git status command to ensure you are on the correct branch.

[student@workstation hello-microservices]$ git status


# On branch lab-fault-tolerance-annotation
nothing to commit, working directory clean

2. Start the hola and aloha microservices.

2.1. Start the aloha microservice. From the existing terminal window, run the following
commands:

[student@workstation hello-microservices]$ cd aloha


[student@workstation aloha]$ ./run.sh
...output omitted...
2018-03-09 17:03:23,329 INFO [org.wildfly.swarm] (main) WFSWARM99999: WildFly
Swarm is Ready

Leave the terminal window running.

2.2. Start the hola microservice. Open a new terminal window on the workstation VM and
run the following commands:

[student@workstation ~]$ cd hello-microservices/hola


[student@workstation hola]$ ./run.sh
...output omitted...
2018-03-09 17:03:23,329 INFO [org.wildfly.swarm] (main) WFSWARM99999: WildFly
Swarm is Ready

2.3. Test the service from a client using Firefox. The /api/hola-chaining REST endpoint
calls the /api/aloha REST endpoint from the aloha microservice. The /api/aloha
REST endpoint takes a long time to execute. In the hola microservice, there is a
requirement that any execution that takes longer than one second should raise an
exception. Also, to minimize the processing impact, the method should not accept
requests until the other REST endpoint is restored to an acceptable processing time.

Start Firefox on the workstation VM.

Note
Unlike the previous guided exercises and labs, do not use the
RESTClient plug-in. The rendering process of the plug-in may prevent you
from seeing the expected errors.

2.4. In the address bar, enter http://localhost:8080/api/hola-chaining and press


Enter.

2.5. Verify that the response matches the following:

["Hola de localhost","Aloha mai localhost"]

The hola microservice requirement states that no REST endpoint call should take
more than one second to respond, but this REST endpoint takes longer to process the
request.

2.6. Return to the terminal window where the hola microservice is running and press
Ctrl+C to stop the service.

3. Implement the requirements using the MicroProfile fault tolerance annotations.

3.1. In JBoss Developer Studio, open the HolaResource class by expanding the hola item
in the Project Explorer tab in the left pane of JBoss Developer Studio.

Click hola > Java Resources > src/main/java > com.redhat.training.msa.hola.rest to


expand it. Double-click the HolaResource.java file.

3.2. Implement the timeout fault tolerance procedure. According to the specification,
the method may run for at most one second before an exception is raised. Look for the
holaChaining method in the class. Immediately after the last holaChaining method
annotation, add the @Timeout annotation:

...
@GET
@Path("/hola-chaining")
@Produces("application/json")
@ApiOperation("Returns the greeting plus the next service in the chain")
@PermitAll
//TODO Implement the @Timeout with 1000ms
@Timeout(1000)
//TODO Implement the @CircuitBreaker with 500ms delay, with the
//one as the requestVolumeThreshold and the failureRatio of 0.5
public List<String> holaChaining() {
...

3.3. Implement the circuit breaker fault tolerance procedure. According to the specification,
after the first failure the method must immediately throw an exception. For the purpose
of this lab, the failure ratio must be 0.5 and the expected delay to half-open the circuit
is 5000 milliseconds. Immediately after the last holaChaining method annotation,
add the @CircuitBreaker annotation:

...
@GET
@Path("/hola-chaining")
@Produces("application/json")
@ApiOperation("Returns the greeting plus the next service in the chain")
@PermitAll
//TODO Implement the @Timeout with 1000ms
@Timeout(1000)
//TODO Implement the @CircuitBreaker with 500ms delay, with the
//one as the requestVolumeThreshold and the failureRatio of 0.5
@CircuitBreaker(requestVolumeThreshold = 1,
failureRatio = 0.50, delay = 5000)
public List<String> holaChaining() {
...

3.4. Press Ctrl+S to save your changes.

4. Restart the hola microservice.

4.1. Start the hola microservice. In the terminal window where you previously started the
hola microservice, run the following command:

[student@workstation hola]$ ./run.sh


...output omitted...
2018-03-09 17:03:23,329 INFO [org.wildfly.swarm] (main) WFSWARM99999: WildFly
Swarm is Ready

4.2. Use Firefox to test the service with the timeout and circuit breaker fault tolerance
procedures.

Start Firefox on the workstation VM.

4.3. In the address bar, enter http://localhost:8080/api/hola-chaining and press


Enter.

4.4. Verify that the response contains the following:

org.jboss.resteasy.spi.UnhandledException:
org.eclipse.microprofile.faulttolerance.exceptions.TimeoutException:
com.netflix.hystrix.exception.HystrixRuntimeException:
public java.util.List<java.lang.String>
com.redhat.training.msa.hola.rest.HolaResource.holaChaining() timed-out and no
fallback available.

The hola microservice does not allow any REST endpoint call to take more than one
second to respond.

5. Call the hola microservice REST endpoint again.

5.1. Use the same URL in the address bar and press Enter.

5.2. Verify that the response contains the following:

org.jboss.resteasy.spi.UnhandledException:
org.eclipse.microprofile.faulttolerance.exceptions.CircuitBreakerOpenException:
holaChaining

The hola microservice raises a different error message because the circuit breaker is
now rejecting calls to the method due to the previous failure. The request also completes
faster than the previous one because the circuit is open, which prevents the
TimeoutException exception from being raised again.

Warning
If the re-execution is not fast enough, the same TimeoutException
exception is raised. This behavior is expected because the delay is set to a
small value and there was enough time for the circuit breaker to become
half-open. To reproduce the expected result, refresh the web browser
repeatedly.

6. Clean up and commit your changes to your local Git repository in the lab branch and return
to the master branch.

6.1. Stop the microservice. In the terminal window running the microservice, press Ctrl+C.

6.2. Use the git add command to stage any uncommitted changes.

[student@workstation hello-microservices]$ git add .

6.3. Use the git commit command to commit your changes to the local branch.

[student@workstation hello-microservices]$ git commit \


-m" completing lab fault-tolerance-annotation"
[lab-fault-tolerance-annotation 7a5f023] completing lab fault-tolerance-
annotation


1 file changed, 23 insertions(+), 8 deletions(-)

6.4. Switch the working copy back to the master branch to finish cleaning up.

[student@workstation hello-microservices]$ git checkout master


Switched to branch 'master'

This concludes the guided exercise.


Lab: Implementing Fault Tolerance

In this lab, you will enable fault tolerance capabilities in methods from the microservice-vote
application.

Outcomes
You should be able to enable fault tolerance capabilities by using MicroProfile fault tolerance
annotations.

Before you begin


If you have not already done so, use git clone to download the microprofile-conference
repository to the workstation VM.

[student@workstation ~]$ git clone \


http://services.lab.example.com/microprofile-conference
Cloning into 'microprofile-conference'...
...output omitted...
Resolving deltas: 100% (2803/2803), done.

Use the lab setup command to ensure that the environment is sound so you can begin the
exercise.

[student@workstation ~]$ lab fault-tolerance-review setup

Steps
1. To begin the exercise, change to the lab-fault-tolerance-review branch of the
application code.

1.1. Use the git checkout command to check out the lab-fault-tolerance-review branch.

[student@workstation ~]$ cd microprofile-conference


[student@workstation microprofile-conference]$ git checkout \
lab-fault-tolerance-review
...output omitted...
Switched to a new branch 'lab-fault-tolerance-review'

1.2. Use the git status command to ensure that you are on the correct branch.

[student@workstation microprofile-conference]$ git status


# On branch lab-fault-tolerance-review
nothing to commit, working directory clean

2. Start the CouchDB server to support the test execution.

From the existing terminal window, run the following command:

[student@workstation microprofile-conference]$ sudo systemctl start couchdb

3. Add fault tolerance annotations to the microservice-vote project to increase the fault
tolerance of the application.


The getRatingsByAttendee method from the


io.microprofile.showcase.vote.persistence.couch.CouchSessionRatingDAO
class handles a large amount of data. This can have a major impact on performance when
there is a large number of concurrent users. This can cause intermittent failures on some of
the CouchDB calls, which can be resolved with a retry.

Retry the execution once if the method execution fails. Additionally,
to minimize the impact of any CouchDB outages, use the MicroProfile fault tolerance
implementation to detect any execution that takes longer than one second and throw
a TimeoutException exception immediately instead of waiting.
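
One possible way to express these requirements with the fault tolerance annotations is sketched below. The method signature is abbreviated for illustration; only the annotations and their values follow from the requirements above.

// Retry once on any failure, and raise a TimeoutException if the call
// takes longer than one second.
@Retry(maxRetries = 1)
@Timeout(1000)
public List<SessionRating> getRatingsByAttendee(String attendeeId) {
...output omitted...
}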

4. Add fault tolerance annotations in the microservice-vote project to minimize performance


issues in the application. Some users reported that the application has performance issues
and asked that you return a null value, instead of an error code or message, when there are
issues connecting to the CouchDB server.

In the getAllRatings method from the


io.microprofile.showcase.vote.persistence.couch.CouchSessionRatingDAO
class, the processing time is increasing. The slowness comes from the CouchDB server and it
affects the method execution time.

Use the MicroProfile implementation features to raise a TimeoutException exception


whenever the method takes more than 1000 ms to process a request.

To meet customer needs, this method must return a null value if the method execution is
slow enough to cause a TimeoutException exception, so a fallback is required. The
getAllRatingsEmpty method provides the expected behavior for the fallback, but you need
to configure the getAllRatings method to use it.

Finally, to support a fail-fast approach, use the MicroProfile implementation to implement


the circuit breaker pattern. The circuit breaker must have a failure ratio of 0.5 and it must
wait 1000 ms before evaluating whether the circuit can be closed again.

5. If you are not already authenticated, log in to the OpenShift cluster as developer using
redhat as the password.

The fault-tolerance-review project with the couchdb pod has already been created
for you to accomplish the lab. When you log in to the cluster, the fault-tolerance-
review project is already selected.

6. Deploy the microservice-vote microservice with the fabric8 Maven plug-in. You can use the
-DskipTests option to skip the tests for a faster build time.

7. Get the route URL to access the microservice endpoints.

Run the following command in the existing terminal window:

[student@workstation microservice-vote]$ oc status

The route information is displayed on the workstation VM, and should look similar to the
following:

In project fault-tolerance-review on server https://master.lab.example.com:443
...
http://microservice-vote-fault-tolerance-review.apps.lab.example.com (svc/
microservice-vote)

8. Test the /vote/rate REST endpoint from the microservice, using it with the URL captured
in the previous step. The expected result is a 204 status code and no answer is provided due
to the timeout fault tolerance annotation.

9. Test the /vote/ratingsByAttendee/1 REST endpoint from the microservice, using


it with the URL captured in the previous step. The expected result is a 404 status code
because the fallback fault tolerance annotation returns a null value. Notice that the first
execution takes a long time, but the following requests take less time because of the circuit
breaker fault tolerance annotation.

10. Grade the lab.

[student@workstation microservice-vote]$ lab fault-tolerance-review grade

All the checks should pass.

11. Delete the project from the terminal window.

12. Stop the CouchDB server, clean up, and commit your changes to your local Git repository in
the lab branch, and return to the master branch.

12.1. Run the following command to stop the CouchDB server:

[student@workstation microprofile-conference]$ sudo systemctl stop couchdb

12.2. Use the git add command to stage any uncommitted changes.

[student@workstation microprofile-conference]$ git add .

12.3. Use the git commit command to commit your changes to the local branch.

[student@workstation microprofile-conference]$ git commit \
-m"completing lab fault-tolerance-review"
[lab-fault-tolerance-review e59dc43] completing lab fault-tolerance-review
...output omitted...
3 files changed, 41 insertions(+), 18 deletions(-)

12.4. Check out the working copy back to the master branch to finish cleaning up.

[student@workstation microprofile-conference]$ git checkout master


Switched to branch 'master'

This concludes the lab.

Solution
In this lab, you will enable fault tolerance capabilities in methods from the microservice-vote
application.

Outcomes
You should be able to enable fault tolerance capabilities by using MicroProfile fault tolerance
annotations.

Before you begin


If you have not already done so, use git clone to download the microprofile-conference
repository to the workstation VM.

[student@workstation ~]$ git clone \
http://services.lab.example.com/microprofile-conference
Cloning into 'microprofile-conference'...
...output omitted...
Resolving deltas: 100% (2803/2803), done.

Use the lab setup command to ensure that the environment is sound so you can begin the
exercise.

[student@workstation ~]$ lab fault-tolerance-review setup

Steps
1. To begin the exercise, change to the lab-fault-tolerance-review branch of the
application code.

1.1. Use the git checkout command to check out the lab-fault-tolerance-review.

[student@workstation ~]$ cd microprofile-conference


[student@workstation microprofile-conference]$ git checkout \
lab-fault-tolerance-review
...output omitted...
Switched to a new branch 'lab-fault-tolerance-review'

1.2. Use the git status command to ensure that you are on the correct branch.

[student@workstation microprofile-conference]$ git status


# On branch lab-fault-tolerance-review
nothing to commit, working directory clean

2. Start the CouchDB server to support the test execution.

From the existing terminal window, run the following command:

[student@workstation microprofile-conference]$ sudo systemctl start couchdb

3. Add fault tolerance annotations to the microservice-vote project to increase the fault
tolerance of the application.

The getRatingsByAttendee method from the


io.microprofile.showcase.vote.persistence.couch.CouchSessionRatingDAO
class handles a large amount of data. This can have a major impact on performance when
there is a large number of concurrent users. This can cause intermittent failures on some of
the CouchDB calls, which can be resolved with a retry.

Retry the execution once if there is any failure in the method execution. Additionally,
to minimize the impact of any CouchDB outages, use the MicroProfile fault tolerance
implementation to identify any execution slowness that goes beyond one second and throw
a TimeoutException exception immediately instead of waiting.

3.1. Open the CouchSessionRatingDAO class by expanding the microservice-


vote item in the Project Explorer tab in the left pane of JBoss Developer
Studio. Click microservice-vote > Java Resources > src/main/java >
io.microprofile.showcase.vote.persistence.couch to expand it. Double-click the
CouchSessionRatingDAO.java file.

3.2. Enable timeout support for the getRatingsByAttendee method. To enable timeout
support, annotate the method with the @Timeout method-level annotation.

@Override
// TODO annotate the method to support a 1000ms timeout.
@Timeout(1000)
// TODO annotate the method with Retry fault tolerance annotation to run once
// more
public Collection<SessionRating> getRatingsByAttendee(String attendeeId) {
return querySessionRating("attendee", attendeeId);
}

3.3. Enable retry policy support for the getRatingsByAttendee method. To enable the
retry policy, annotate the method with the @Retry method-level annotation.

@Override
// TODO annotate the method to support a 1000ms timeout.
@Timeout(1000)
// TODO annotate the method with Retry fault tolerance annotation to run once
// more
@Retry(maxRetries=1)
public Collection<SessionRating> getRatingsByAttendee(String attendeeId) {
return querySessionRating("attendee", attendeeId);
}

3.4. Press Ctrl+S to save your changes.

3.5. Run the JUnit test case.

Open the CouchSessionRatingTimeoutDAOTest class by expanding the


microservice-vote item in the Project Explorer tab in the left pane of JBoss
Developer Studio, then click microservice-vote > Java Resources > src/test/java
> io.microprofile.showcase.vote.persistence.couch and expand it. Right-click the
CouchSessionRatingTimeoutDAOTest test case and select Run As > JUnit Test
in JBoss Developer Studio. The JUnit tab displays a green bar after the test execution.

4. Add fault tolerance annotations in the microservice-vote project to minimize performance


issues in the application. Some users identified that the application has performance issues
and ask that you return a null value when there are issues connecting to the CouchDB
server, instead of an error code or message.

In the getAllRatings method from the


io.microprofile.showcase.vote.persistence.couch.CouchSessionRatingDAO
class, the processing time is increasing. The slowness comes from the CouchDB server and it
affects the method execution time.

Use the MicroProfile implementation features to raise a TimeoutException exception


whenever the method takes more than 1000 ms to process a request.

To meet customer needs, this method must return a null value if the method execution is
slow causing a TimeoutException and a fallback is required. The getAllRatingsEmpty
method provides the expected behavior for the fallback, but you need to configure the
getAllRatings method to use it.

Finally, to support a fail-fast approach, use the MicroProfile implementation to implement


the circuit breaker pattern. The circuit breaker must have a failure ratio of 0.5 and it must
wait 1000 ms before evaluating whether the circuit can be closed again.

4.1. Open the CouchSessionRatingDAO class by expanding the microservice-


vote item in the Project Explorer tab in the left pane of JBoss Developer
Studio, then click microservice-vote > Java Resources > src/main/java >
io.microprofile.showcase.vote.persistence.couch and expand it. Double-click the
CouchSessionRatingDAO.java file.

4.2. Enable the timeout procedure from the MicroProfile specification in the
getAllRatings method. Annotate the getAllRatings method with the
@Timeout(1000) method-level annotation.

@Override
// TODO annotate the method with Timeout fault tolerance annotation
@Timeout(1000)
// TODO annotate the method to support a 1000ms timeout.
public Collection<SessionRating> getAllRatings() {
...

4.3. Enable the fallback procedure from the MicroProfile specification in the
getAllRatings method. Annotate the getAllRatings method with the
@Fallback(fallbackMethod = "getAllRatingsEmpty") method-level annotation.

@Override
// TODO annotate the method with Timeout fault tolerance annotation to run once
// more
@Timeout(1000)
// TODO annotate the method to a fallback to the getAllRatingsEmpty method.
@Fallback(fallbackMethod = "getAllRatingsEmpty")

public Collection<SessionRating> getAllRatings() {


...

4.4. Enable the circuit breaker procedure from the MicroProfile specification
in the getAllRatings method. Annotate the getAllRatings
method with the @CircuitBreaker(requestVolumeThreshold=1,
failureRatio=0.5,delay=1000) method-level annotation.

@Override
// TODO annotate the method with Timeout fault tolerance annotation to run once
// more
@Timeout(1000)
// TODO annotate the method to a fallback to the getAllRatingsEmpty method.
@Fallback(fallbackMethod = "getAllRatingsEmpty")
// TODO Enable circuit breaker
@CircuitBreaker(requestVolumeThreshold=1, failureRatio=0.5,delay=1000)

public Collection<SessionRating> getAllRatings() {


...

4.5. Press Ctrl+S to save your changes.

4.6. Run the JUnit test case.

Open the CouchSessionRatingFallbackDAOTest class by expanding the


microservice-vote item in the Project Explorer tab in the left pane of JBoss
Developer Studio, then click microservice-vote > Java Resources > src/test/java
> io.microprofile.showcase.vote.persistence.couch and expand it. Right-click the
CouchSessionRatingFallbackDAOTest test case and select Run As > JUnit Test
in JBoss Developer Studio. The JUnit tab displays a green bar after the test execution.

5. If you are not already authenticated, log in to the OpenShift cluster as developer using
redhat as the password.

The fault-tolerance-review project with the couchdb pod has already been created
for you to accomplish the lab. When you log in to the cluster, the fault-tolerance-
review project is already selected.

Log in to the OpenShift cluster from the command line. From the existing terminal window,
run the following command:

[student@workstation microprofile-conference]$ oc login \
-u developer -p redhat https://master.lab.example.com
Login Successful
...output omitted...
Using project "fault-tolerance-review".

6. Deploy the microservice-vote microservice with the fabric8 Maven plug-in. You can use the
-DskipTests option to skip the tests for a faster build time.

Run mvn fabric8:deploy to deploy the application using the container image built by the
S2I build.

[student@workstation microprofile-conference]$ cd microservice-vote


[student@workstation microservice-vote]$ mvn package fabric8:deploy \
-DskipTests

Review the Maven build outputs. Note that the fabric8:deploy goal creates all the
configuration files and deploys the pod to OpenShift.

The following error may occur during the execution. You may disregard it.

[ERROR] Exception in reconnect
java.util.concurrent.RejectedExecutionException:
Task java.util.concurrent.ScheduledThreadPoolExecutor
$ScheduledFutureTask@3cce6de9
rejected from
java.util.concurrent.ScheduledThreadPoolExecutor@79ccf414[Shutting down,
pool size = 1, active threads = 1,
queued tasks = 0, completed tasks = 10]

7. Get the route URL to access the microservice endpoints.

Run the following command in the existing terminal window:

[student@workstation microservice-vote]$ oc status

The route information is displayed on the workstation VM, and should look similar to the
following:

In project fault-tolerance-review on server https://master.lab.example.com:443
...
http://microservice-vote-fault-tolerance-review.apps.lab.example.com (svc/
microservice-vote)

8. Test the /vote/rate REST endpoint from the microservice, using it with the URL captured
in the previous step. The expected result is a 204 status code and no answer is provided due
to the timeout fault tolerance annotation.

8.1. Test the service from a client using the RESTClient Firefox plug-in.

Start Firefox on the workstation VM and click the RESTClient plug-in on the browser's
toolbar.

The Firefox RESTClient plug-in

8.2. Select GET as the Method. In the URL form, enter
http://microservice-vote-fault-tolerance-review.apps.lab.example.com/vote/rate.

8.3. In the Headers tab, verify that the Status Code is 204 No Content.

8.4. In the Response tab, verify that the response is empty.

9. Test the /vote/ratingsByAttendee/1 REST endpoint from the microservice, using


it with the URL captured in the previous step. The expected result is a 404 status code
because the fallback fault tolerance annotation returns a null value. Notice that the first
execution takes a long time, but the following requests take less time because of the circuit
breaker fault tolerance annotation.

9.1. Using the RESTClient Firefox plug-in, select GET as the Method. In the URL form, enter
http://microservice-vote-fault-tolerance-review.apps.lab.example.com/vote/ratingsByAttendee/1.

9.2. In the Headers tab, verify that the Status Code is 404 Not Found.

9.3. In the Response tab, verify that the response is empty.

9.4. Click Send multiple times and evaluate the response time displayed at the bottom of the
page. Each request should take less than the first request because of the circuit breaker
procedure.

10. Grade the lab.

[student@workstation microservice-vote]$ lab fault-tolerance-review grade

All the checks should pass.

11. Delete the project from the terminal window.

[student@workstation microservice-vote]$ oc delete project fault-tolerance-review

12. Stop the CouchDB server, clean up, and commit your changes to your local Git repository in
the lab branch, and return to the master branch.

12.1. Run the following command to stop the CouchDB server:

[student@workstation microprofile-conference]$ sudo systemctl stop couchdb

12.2. Use the git add command to stage any uncommitted changes.

[student@workstation microprofile-conference]$ git add .

12.3. Use the git commit command to commit your changes to the local branch.

[student@workstation microprofile-conference]$ git commit \
-m"completing lab fault-tolerance-review"
[lab-fault-tolerance-review e59dc43] completing lab fault-tolerance-review
...output omitted...
3 files changed, 41 insertions(+), 18 deletions(-)

12.4. Check out the working copy back to the master branch to finish cleaning up.

[student@workstation microprofile-conference]$ git checkout master


Switched to branch 'master'

This concludes the lab.

Summary
In this chapter, you learned:

• The MicroProfile specification defines a set of annotations that support common patterns
defined by microservice developers to minimize the risks associated with failures.

• WildFly Swarm implements the fault tolerance specification using Hystrix.

• The available fault tolerance policies are described below; a brief combined usage sketch follows the list:

◦ Circuit breaker, which supports a fail-fast approach if the system is suffering from an
overload or is unavailable.

◦ Bulkhead, which isolates the part of the system with problems while allowing the remainder
of the system to continue to respond.

◦ Fallback, which executes an alternative path for a failed execution.

◦ Retry, which defines criteria for when an execution should be retried.

◦ Timeout, which defines the amount of time an execution should take before raising an error.
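
To make these policies concrete, the following minimal sketch combines the corresponding MicroProfile fault tolerance annotations on a single CDI bean method. The class and method names (GreetingService, findGreetings, findGreetingsFallback) and the parameter values are illustrative assumptions, not part of the course applications.

import java.util.Collections;
import java.util.List;
import javax.enterprise.context.ApplicationScoped;
import org.eclipse.microprofile.faulttolerance.Bulkhead;
import org.eclipse.microprofile.faulttolerance.CircuitBreaker;
import org.eclipse.microprofile.faulttolerance.Fallback;
import org.eclipse.microprofile.faulttolerance.Retry;
import org.eclipse.microprofile.faulttolerance.Timeout;

@ApplicationScoped
public class GreetingService {

    // Time out after 1 second, retry once, limit concurrent callers to 10,
    // trip the circuit breaker on repeated failures, and fall back to an
    // empty list when everything else fails.
    @Timeout(1000)
    @Retry(maxRetries = 1)
    @Bulkhead(10)
    @CircuitBreaker(requestVolumeThreshold = 4, failureRatio = 0.5, delay = 1000)
    @Fallback(fallbackMethod = "findGreetingsFallback")
    public List<String> findGreetings() {
        // Placeholder for a call to a slow or unreliable back-end service.
        return Collections.singletonList("hello");
    }

    // The fallback method must match the signature of the guarded method.
    public List<String> findGreetingsFallback() {
        return Collections.emptyList();
    }
}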

CHAPTER 8

DEVELOPING AN API GATEWAY

Overview
Goal Describe the API Gateway pattern and develop an API
gateway for a series of microservices.
Objectives • Describe the API Gateway pattern.

• Develop an API gateway for a series of microservices.


Sections • Describing the API Gateway Pattern (and Quiz)

• Developing an API Gateway for Microservices (and Guided


Exercise)
Lab Developing an API Gateway

Describing the API Gateway Pattern

Objectives
After completing this section, students should be able to describe the API gateway pattern.

Solving Problems with the API Gateway Pattern


When building applications that use a microservice-based architecture, one of the most desirable
benefits is that the microservices can be shared by a wide variety of clients, from mobile
applications to web-based clients. However, there are complicating issues that arise from using
this architecture, including:

• The granularity of information or actions provided by the microservice API does not always
directly align with what individual clients need and can require data transformation. This
also means that it is common for one user action in a client to translate to multiple back-
end microservice calls. For example, when a user clicks Add to Cart in an e-commerce web
application, it can trigger microservice calls to a pricing service, an inventory service, and a
cart service.

• The clients might need to communicate with services using a variety of protocols, including
some that are not HTTP-based, such as a messaging-based service. This can also complicate
the client code that a developer must write.

• The locations of the microservice instances can change dynamically, requiring a service
discovery solution. This is especially true if your microservices are deployed in the cloud.

• The microservices' bounded contexts can change over time, resulting in the APIs for those
microservices also changing. If these changes are not isolated from service clients using a
gateway, significant code changes are required to the clients themselves.

An API gateway can mitigate all of these issues by providing an intermediary between the clients
and the back-end services, as shown in the following figure:

Figure 8.1: Using an API gateway to serve client requests

Advantages of Using an API Gateway


Including an API gateway in your application can potentially solve a lot of common issues that
microservice developers encounter. An API gateway includes the following features:

• Insulates clients from changes to the back-end microservices' separate bounded contexts

• Provides a service discovery solution so that clients only need to locate the gateway instead of
every back-end microservice

• Provides an optimal API for each client, which can greatly simplify the client code

• Reduces the total number of requests that a client needs to make if the gateway can retrieve
data from multiple services with a single round trip

• Provides an intermediary standard HTTP API in front of any services that are required by the
application that do not use client-friendly protocols, such as messaging or other non-HTTP
protocols

Disadvantages of Using an API Gateway


Including an API gateway in your microservices application has some disadvantages, including:

• Increases complexity overall. The API gateway is yet another service that developers must
build, test, manage, and deploy.

• Increases response time due to the additional network hop through the API gateway, although
in environments with low latency this is typically insignificant.

• Increases difficulty to scale the application if the gateway receives high volumes of traffic.
An API gateway must be built using a tool and platform that can support the anticipated
application traffic.

Implementing an API Gateway


An API gateway should provide a single entry point for clients that need to consume
microservices behind the gateway. The API gateway typically handles requests using two basic
strategies. The gateway can provide a proxy to route calls directly to the appropriate back-end
microservice. Alternatively, if necessary, the gateway can take an individual request from a client
and call multiple back-end microservices to compile the necessary data and then return that
data to the client. The API gateway can take responsibility for security, to ensure clients are
properly authenticated, and that they are authorized to communicate with the services they are
attempting to reach through the gateway. The API gateway can even include fault tolerance to
handle back-end microservice failures.
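
As a rough illustration of these two request-handling strategies, the following hypothetical JAX-RS resource both routes a call straight through to one back-end service and aggregates two back-end calls into a single response. The proxy interfaces (CartClient, PricingClient), the paths, and the JSON composition are assumptions made for this sketch and do not come from the course applications.

import javax.inject.Inject;
import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.PathParam;
import javax.ws.rs.Produces;

// Placeholder proxy interfaces; in a real gateway these would be REST client
// proxies for the back-end cart and pricing microservices.
interface CartClient { String getCart(String cartId); }
interface PricingClient { String getPrices(String cartId); }

@Path("/gateway")
public class StoreGatewayResource {

    @Inject
    private CartClient cartClient;

    @Inject
    private PricingClient pricingClient;

    // Strategy 1: simple pass-through routing to a single back-end service.
    @GET
    @Path("/cart/{id}")
    @Produces("application/json")
    public String cart(@PathParam("id") String id) {
        return cartClient.getCart(id);
    }

    // Strategy 2: one client request fans out to several back-end calls and
    // the gateway composes the results into a single response.
    @GET
    @Path("/cart/{id}/summary")
    @Produces("application/json")
    public String cartSummary(@PathParam("id") String id) {
        return "{\"cart\":" + cartClient.getCart(id)
                + ",\"prices\":" + pricingClient.getPrices(id) + "}";
    }
}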

A simple API gateway might provide a single API for all types of clients, whereas a more complex
gateway might provide a different API for each type of client, such as mobile and web. A third
approach is to provide a completely separate API gateway service for each type of client, as
shown in the following diagram.

Figure 8.2: Using separate API gateways for each type of client

There is no standard technique or technology that you must use when building a custom API
gateway. The most important factor to consider is performance and reliability because most
API gateways need to support large numbers of API calls. The correct approach and technology
to use for a given use case depends on the number and variety of clients consuming your
microservices and the complexity of those clients' needs.

Managing Your APIs Using an API Management


Platform
As the number of microservices that are in use by an organization grows, any gateway solution
becomes increasingly complex. Depending on the use case, in environments where the number
of microservices is quite large, an API management product such as 3scale by Red Hat can help
manage these microservices. 3scale provides APIcast, which implements the functionality of an
API gateway. APIcast is an NGINX-based API gateway used to integrate your internal and external
API services with 3scale API Management Platform. APIcast 2.0 is the latest supported version,
and brings enhanced integration with Red Hat OpenShift Container Platform to minimize the
effort required to deploy APIcast as a container running in OpenShift.

You can use APIcast in either hosted or self-managed mode. In both cases, it needs a connection
to the rest of the 3scale API management platform.

APIcast hosted
3scale hosts APIcast in the cloud. In this case, APIcast is already deployed for you and it is
limited to 50,000 calls per day.

APIcast self-managed
The self-managed mode is the intended mode of operation for production environments.
Recommended options for deploying APIcast include:

• Native deployment: Install OpenResty and other dependencies on your own server and
run APIcast using the code and configuration provided by 3scale.

• Docker: Download a ready-to-use container image that includes all of the dependencies to
run APIcast in a container.

• OpenShift: Run APIcast on OpenShift. You can connect self-managed APIcast instances
either to a 3scale AMP installation or to a 3scale online account.

In addition to the APIcast API gateway, 3scale also provides many other built-in features that
would otherwise require significant development time to build into any custom-developed API
gateway. These features include:

Access Control and Security


Access control features are essential to determine exactly who uses your API, how it is used,
and how many calls they can invoke. This type of role-based access is an industry standard
security practice and a necessity to properly secure the sensitive parts of your API. 3scale
makes it easy to centrally set up and manage policy and application plans for all of your APIs
on one platform. 3scale also offers a range of authentication patterns and credentials to
choose from, including unique API keys and OAuth tokens.

Rate Limits
After you establish who gets access, how they get it, and what can be done with your API,
you can set even finer management details to control the traffic itself. Rate limits allow you
to manage and control the rate of API calls your clients can make to your microservices. You
can control access by type of plan or by specific user, down to calls allowed per minute for
each individual endpoint, ensuring users do not abuse access to your API.

Developer Portal
One of the best ways to improve the performance and success of your API is to provide
a portal to enhance your developers' experience. 3scale provides a content management
system (CMS) which makes it simple to create your own custom domain portal to manage
developer interactions and increase API adoption.

Analytics
3scale allows you to monitor and set alerts on traffic flow. Using this data you can provide
API consumers and developers with reports on their traffic using a user dashboard
designed for them. You can also analyze your API traffic through detailed traffic analytics
by user account, application, or microservice, and share performance insights across the
organization using the built-in reporting tools.

Monetization Tools
3scale makes it easy to monetize your API by charging users to access the API through
simple in-product integration with payment options such as Stripe, Braintree, Authorize.net,
and Ogone. 3scale provides the ability to set up pricing rules, send invoices and collect
payments with 3scale's Payment Card Industry Data Security Standard (PCI DSS) compliant
system.

Centralized Dashboard
The 3scale administration portal dashboard gives you easy, centrally located insight into any
traffic and customer engagement opportunities or issues with your APIs.

Figure 8.3: 3scale dashboard provides a centralized view into your APIs

References
Microservices.io API Gateway Reference
http://microservices.io/patterns/apigateway.html

3scale by Red Hat Homepage
https://www.3scale.net/

Quiz: Describing the API Gateway Pattern

Choose the correct answers to the following questions:

1. An enterprise architect at a large company is considering using a microservice-based


architecture to develop the next generation of enterprise applications. Which two of the
following would be the most compelling reasons to incorporate the gateway pattern into
their design? (Choose two.)

a. Including a gateway drastically reduces the complexity of the system overall.


b. A gateway can provide data integrity and synchronization across microservices.
c. Using a gateway allows developers to create client-specific APIs that can be optimized
for different types of clients consuming the microservices.
d. A gateway reduces the total number of network hops required to reach all of the
necessary microservices when responding to a client request.
e. Using a gateway service that provides access to all of the back-end microservices
alleviates the issue of service discovery that developers typically must address in a
microservice-based system.

2. Which of the following statements most accurately describes the performance impact of
using a gateway service on a microservice-based client application?

a. A gateway drastically reduces performance due to the extra network hop required.
b. A gateway can improve performance when it can optimize client APIs to run multiple
back-end calls concurrently with a single client API call.
c. A gateway removes the requirement to persist data in the back-end microservices,
improving the speed of all client API calls.
d. A gateway reduces system complexity allowing developers to write more efficient
code in the client, thereby improving performance.

3. Which three of the following features of microservices can be implemented at the gateway
level? (Choose three.)

a. Security
b. Log monitoring
c. Fault tolerance
d. Service discovery
e. Persistence

4. Which of the following features of the 3scale API management platform provides an API
gateway?

a. Developer portal
b. Advanced analytics
c. APIcast
d. Rate limiting

5. Which two of the following features does the 3scale API management platform provide?
(Choose two.)

a. Kernel-level hardware monitoring for the underlying platforms where the APIs run
b. Log aggregation and visualization for the server logs of each microservice
c. Rate limiting and billing for API usage
d. Data duplication, synchronization, and storage
e. A built-in developer portal for all APIs that includes documentation and code
examples

Solution

Choose the correct answers to the following questions:

1. An enterprise architect at a large company is considering using a microservice-based


architecture to develop the next generation of enterprise applications. Which two of the
following would be the most compelling reasons to incorporate the gateway pattern into
their design? (Choose two.)

a. Including a gateway drastically reduces the complexity of the system overall.


b. A gateway can provide data integrity and synchronization across microservices.
c. Using a gateway allows developers to create client-specific APIs that can be
optimized for different types of clients consuming the microservices. (correct)
d. A gateway reduces the total number of network hops required to reach all of the
necessary microservices when responding to a client request.
e. Using a gateway service that provides access to all of the back-end microservices
alleviates the issue of service discovery that developers typically must address in a
microservice-based system. (correct)

2. Which of the following statements most accurately describes the performance impact of
using a gateway service on a microservice-based client application?

a. A gateway drastically reduces performance due to the extra network hop required.
b. A gateway can improve performance when it can optimize client APIs to run
multiple back-end calls concurrently with a single client API call. (correct)
c. A gateway removes the requirement to persist data in the back-end microservices,
improving the speed of all client API calls.
d. A gateway reduces system complexity allowing developers to write more efficient
code in the client, thereby improving performance.

3. Which three of the following features of microservices can be implemented at the gateway
level? (Choose three.)

a. Security (correct)
b. Log monitoring
c. Fault tolerance (correct)
d. Service discovery (correct)
e. Persistence

4. Which of the following features of the 3scale API management platform provides an API
gateway?

a. Developer portal
b. Advanced analytics
c. APIcast (correct)
d. Rate limiting

5. Which two of the following features does the 3scale API management platform provide?
(Choose two.)

a. Kernel-level hardware monitoring for the underlying platforms where the APIs run
b. Log aggregation and visualization for the server logs of each microservice
c. Rate limiting and billing for API usage (correct)
d. Data duplication, synchronization, and storage
e. A built-in developer portal for all APIs that includes documentation and code
examples (correct)

Developing an API Gateway for Microservices

Objectives
After completing this section, students should be able to develop an API gateway for a series of
microservices.

Developing Custom API Gateway Endpoints for the


REST-based Microservices
You must use a REST client to create an API gateway for any microservice that uses REST-based
HTTP communication. RESTEasy is the REST client implementation provided for WildFly Swarm
applications, and is the REST client on which this course focuses. To use the RESTEasy client
library, include it as a dependency in your gateway's pom.xml project file.

<dependency>
<groupId>org.jboss.resteasy</groupId>
<artifactId>resteasy-client</artifactId>
<version>3.0.24.Final-redhat-1</version>
<scope>provided</scope>
</dependency>

This library is included automatically by the WildFly Swarm runtime and therefore can be
specified to have a <scope> value of provided, indicating it does not need to be packaged into
the application's dependencies.

The RESTEasy Proxy Framework


The simplest approach to building proxy endpoints for multiple remote REST resources is using
the RESTEasy proxy framework. By using the RESTEasy proxy framework, you can minimize the
boilerplate code needed to communicate with the other microservices and focus on the business
logic needed in your gateway to provide the optimal APIs that your clients need.

When using the RESTEasy proxy framework, the client framework builds outgoing HTTP requests
to invoke a remote RESTful web service. This remote service does not have to be a JAX-RS
service and can be any web resource that accepts HTTP requests.

To use the RESTEasy client proxy framework to communicate with a microservice, write a Java
interface to represent the microservice. In that interface, define methods for each of the remote
endpoints that your gateway uses. Annotate these methods using JAX-RS annotations to define
how the proxy framework builds the outgoing request. For example:

public interface SimpleClient {

    @GET
    @Path("basic")
    @Produces("text/plain")
    String getBasic();

    @PUT
    @Path("basic")
    @Consumes("text/plain")
    void putBasic(String body);

    @GET
    @Path("queryParam")
    @Produces("text/plain")
    String getQueryParam(@QueryParam("param") String param);

    @GET
    @Path("uriParam/{param}")
    @Produces("text/plain")
    int getUriParam(@PathParam("param") int param);
}

Define a Java interface to represent the remote REST service.


Map this method to the HTTP GET method.
Specify the URI path for this method as basic.
Specify the content type that this method produces as text/plain.
Map this method to the HTTP PUT method.
Specify the content type that the method consumes as text/plain.
Specify an HTTP query parameter named param that the client forwards to the endpoint
destination.
Specify an HTTP path parameter named param that the client forwards to the endpoint
destination.
The RESTEasy client has a simple API based on Apache HttpClient. You generate a proxy
using the interface, and then you can invoke methods on the proxy service class. RESTEasy
automatically translates any method you invoke on the proxy to an HTTP request. RESTEasy
bases this translation on how you annotate the method in the interface. After the RESTEasy
proxy creates the HTTP request it automatically posts the request to the server and returns the
response. The following example shows how to configure an HTTP client using a RESTEasy proxy
interface:

Client client = ClientBuilder.newClient();

WebTarget target = client.target("http://example.com/base/uri");

ResteasyWebTarget rtarget = (ResteasyWebTarget)target;

SimpleClient simple = rtarget.proxy(SimpleClient.class);

simple.putBasic("hello world");

Create a new standard Client instance named client.


Create a new WebTarget instance named target using the client object and the base
URI for the remote service.
Cast the target object as a ResteasyWebTarget instance, which extends the
WebTarget interface and provides the proxy method.
Create the proxy using the interface that you annotated previously. The implementation
class that RESTEasy creates provides each method defined in the interface and connects
directly to the remote service when you invoke the methods.
Call one of the methods on the proxy directly.
After you create the proxy clients for all of the services that your gateway communicates
with, you can easily create your own set of REST endpoints using JAX-RS, and use the proxy
clients to forward requests coming into the gateway microservice to the appropriate back-end
microservices.
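
A minimal sketch of such a forwarding endpoint, reusing the SimpleClient proxy from the examples above, might look like the following. The resource path, the CDI injection of the proxy, and the method name are assumptions for illustration only; they are not taken from the course projects.

import javax.inject.Inject;
import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.Produces;

@Path("/gateway")
public class GatewayResource {

    // Injected proxy for the remote service, for example produced by a CDI
    // producer method that builds it with the RESTEasy client as shown above.
    @Inject
    private SimpleClient simpleClient;

    @GET
    @Path("/basic")
    @Produces("text/plain")
    public String basic() {
        // Delegate the incoming gateway request to the back-end service.
        return simpleClient.getBasic();
    }
}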

Adding Fault Tolerance in a Custom API Gateway


Because the API gateway is constantly communicating with most or all of the other
microservices in an enterprise, any outage in any of the downstream services affects the
API gateway. This is why resilience to downstream failures, or fault tolerance, is an important
consideration when designing and building your API gateway.

Using the fault tolerance facilities provided by MicroProfile can improve the performance and
reliability of your API gateway. This can be as simple as defining service timeouts for proxy calls
that are taking too long, or including failure handlers to return default data or a simple outage
message. You can also use more complex fault tolerance methods, such as a circuit breaker, if
required.
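
Building on the forwarding endpoint sketched earlier, the following hypothetical variant guards the proxy call with a timeout and a fallback, so that a slow or failed back-end service results in a default message rather than an error. The three-second timeout and the fallback text are arbitrary values chosen for this sketch.

import javax.inject.Inject;
import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.Produces;
import org.eclipse.microprofile.faulttolerance.Fallback;
import org.eclipse.microprofile.faulttolerance.Timeout;

@Path("/gateway")
public class ResilientGatewayResource {

    @Inject
    private SimpleClient simpleClient;

    @GET
    @Path("/basic")
    @Produces("text/plain")
    @Timeout(3000)                               // fail fast after 3 seconds
    @Fallback(fallbackMethod = "basicFallback")  // serve a default on failure
    public String basic() {
        return simpleClient.getBasic();
    }

    // Invoked when the proxy call fails or times out.
    public String basicFallback() {
        return "The greeting service is temporarily unavailable";
    }
}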

Describing Security in the Custom API Gateway


Design
Depending on your design, the API gateway can either handle the authentication and
authorization of all clients, or it can delegate this to the individual services. Alternatively,
both the gateway and the back-end services could implement their own authentication and
authorization. There is no single correct approach to configuring security, and the best
solution depends on the authentication and authorization infrastructure currently in use by the
enterprise.

A simple and effective solution is to use token-based authentication such as OAuth delegated
authorization together with JSON Web Tokens (JWTs). JWT is discussed in detail later in this
course. An example of using JWT authentication through an API gateway is shown in the
following figure:

Figure 8.4: JWT-based authentication through an API gateway

References
RESTEasy Documentation
http://docs.jboss.org/resteasy/docs/3.0.24.Final/userguide/html_single/index.html

Guided Exercise: Implementing Fault Tolerance


in an API Gateway

In this exercise, you will implement an API gateway using the RESTEasy proxy framework and
include fault tolerance using MicroProfile.

Outcomes
You should be able to create an API gateway for two REST services using the RESTEasy proxy
framework and include fault tolerance in the API gateway to handle any downstream failures.

Before you begin


If you have not already done so, clone the hello-microservices repository to the workstation
machine.

[student@workstation ~]$ git clone http://services.lab.example.com/hello-microservices
Cloning into 'hello-microservices'...
...output omitted...
Resolving deltas: 100% (2803/2803), done.

Use the lab setup command to set up the exercise.

[student@workstation ~]$ lab gateway-ft setup

Steps
1. Switch the repository to the lab-gateway-ft branch to get the correct version of the
application code for this exercise.

1.1. Switch to the branch using the git checkout command.

[student@workstation ~]$ cd hello-microservices


[student@workstation hello-microservices]$ git checkout lab-gateway-ft
Branch lab-gateway-ft set up to track remote branch lab-gateway-ft from origin.
Switched to a new branch 'lab-gateway-ft'

1.2. Use the git status command to ensure that you are on the correct branch.

[student@workstation hello-microservices]$ git status


# On branch lab-gateway-ft
nothing to commit, working directory clean

2. Complete the AlohaService proxy interface, which provides a proxy for the aloha
microservice.

The aloha microservice provides an endpoint with the relative path of /aloha. This
endpoint responds to HTTP GET requests and produces text-based content.

2.1. In JBoss Developer Studio, open the AlohaService proxy interface by expanding the
api-gateway item in the Project Explorer tab in the left pane of JBoss Developer Studio.

Click api-gateway > Java Resources > src/main/java >


com.redhat.training.msa.gateway.rest and expand it. Double-click the
AlohaService.java file.

2.2. Update the interface to include the JAX-RS annotations that the RESTEasy proxy uses
to build outgoing requests.

//TODO specify the path as 'aloha'
@Path("aloha")
//TODO specify that this method produces content with a content-type of 'text/
plain'
@Produces("text/plain")
//TODO specify this method maps to HTTP GET requests
@GET
public String aloha();

2.3. Press Ctrl+S to save your changes.

3. The hola microservice provides an endpoint with the relative path of /hola. This endpoint
responds to HTTP GET requests and produces text-based content.

Complete the HolaService proxy interface, which provides a proxy for the hola
microservice.

4. Complete the HolaService proxy interface.

4.1. In JBoss Developer Studio, open the HolaService proxy interface by expanding the
api-gateway item in the Project Explorer tab in the left pane of JBoss Developer Studio.

Click api-gateway > Java Resources > src/main/java >


com.redhat.training.msa.gateway.rest and expand it. Double-click the
HolaService.java file.

4.2. Update the interface to include the JAX-RS annotations that the RESTEasy proxy uses
to build outgoing requests.

//TODO specify the path as 'hola'
@Path("hola")
//TODO specify that this method produces content with a content-type of 'text/
plain'
@Produces("text/plain")
//TODO specify this method maps to HTTP GET requests
@GET
public String hola();

4.3. Press Ctrl+S to save your changes.

5. Complete the ClientConfiguration class to create proxies for both services using the
two interfaces you created previously.

5.1. In JBoss Developer Studio, open the ClientConfiguration class by expanding the
api-gateway item in the Project Explorer tab in the left pane of JBoss Developer Studio.

Click api-gateway > Java Resources > src/main/java >
com.redhat.training.msa.gateway.client and expand it. Double-click the
ClientConfiguration.java file.

5.2. Note that the alohaService() method is marked using the @Produces CDI
annotation, registering this method as a producer of AlohaService instances for the
application.

Update the method to create a proxy for the remote aloha microservice using the
AlohaService interface you completed previously:

@Produces
public AlohaService alohaService() {
Client client = ClientBuilder.newClient();
WebTarget target = client.target("http://" + alohaHostname + ":" + alohaPort +
"/api");
log.info("Aloha service is located at " + target.getUri());
ResteasyWebTarget rtarget = (ResteasyWebTarget) target;
//TODO create the service using the proxy interface
AlohaService service = rtarget.proxy(AlohaService.class);
return service;
}

5.3. Note that the holaService() method is marked using the @Produces CDI
annotation, registering this method as a producer of HolaService instances for the
application.

Update the method to create a proxy for the remote hola microservice using the
HolaService interface you completed previously:

@Produces
public HolaService holaService() {
Client client = ClientBuilder.newClient();
WebTarget target = client.target("http://" + holaHostname + ":" + holaPort + "/
api");
log.info("Hola service is located at " + target.getUri());
ResteasyWebTarget rtarget = (ResteasyWebTarget) target;
//TODO create the service using the proxy interface
HolaService service = rtarget.proxy(HolaService.class);
return service;
}

5.4. Press Ctrl+S to save your changes.

6. Inject the service proxy classes into the APIGatewayResource class using CDI and
configure fallback methods to provide fault tolerance when either of the downstream
services are not available.

6.1. In JBoss Developer Studio, open the APIGatewayResource class by expanding the
api-gateway item in the Project Explorer tab in the left pane of JBoss Developer Studio.

Click api-gateway > Java Resources > src/main/java >


com.redhat.training.msa.gateway.rest and expand it. Double-click the
APIGatewayResource.java file.

6.2. Inject the service proxy classes to be used by the RESTEasy client using CDI.

//TODO inject this using CDI
@Inject
private AlohaService alohaService;

//TODO inject this using CDI
@Inject
private HolaService holaService;

6.3. Specify the fallback methods for the hola() and aloha() methods using the
@Fallback fault tolerance annotation. Be sure to include the appropriate method name
so that the fallback methods are the holaFallback and alohaFallback methods,
respectively, which are provided for you.

@GET
@Path("/es")
@Produces("text/plain")
@ApiOperation("Returns the greeting in Spanish")
//TODO specify the holaFallback method as the fallback
@Fallback(fallbackMethod="holaFallback")
public String hola() {
String response = holaService.hola();
return response;
}

@GET
@Path("/haw")
@Produces("text/plain")
@ApiOperation("Returns the greeting in Hawaiian")
//TODO specify the alohaFallback method as the fallback
@Fallback(fallbackMethod="alohaFallback")
public String aloha() {
String response = alohaService.aloha();

return response;
}

6.4. Press Ctrl+S to save your changes.

7. Start the aloha, hola, and api-gateway microservices.

7.1. Use the provided run.sh script to execute the WildFly Swarm Maven plug-in to build
and run the aloha microservice.

In your terminal window, navigate to the aloha directory and execute the run.sh
script to start the application.

[student@workstation hello-microservices]$ cd aloha


[student@workstation aloha]$ ./run.sh

7.2. Use the provided run.sh script to execute the WildFly Swarm Maven plug-in to build
and run the hola microservice.

While the aloha service is still running, open a new terminal window, navigate to the
hola directory and execute the run.sh script to start the application.

[student@workstation ~]$ cd hello-microservices/hola


[student@workstation hola]$ ./run.sh

Important
You might see the following exception in the script output. It is safe to ignore
this exception.

org.eclipse.aether.resolution.ArtifactResolutionException: Could not
find artifact commons-io:commons-io:jar:2.7-SNAPSHOT

7.3. Use the provided run.sh script to execute the WildFly Swarm Maven plug-in to build
and run the api-gateway microservice.

While the aloha and hola microservices are still running, open a new terminal window,
navigate to the api-gateway directory, and execute the run.sh script to start the
application.

[student@workstation ~]$ cd hello-microservices/api-gateway


[student@workstation api-gateway]$ ./run.sh

7.4. Test calling the api-gateway microservice from a client using the RESTClient Firefox
plug-in.

Start Firefox on the workstation VM and click the RESTClient plug-in on the browser's
toolbar.

7.5. Select GET as the Method. In the URL form, enter http://localhost:8080/api/haw
and then click Send.

7.6. In the Headers tab, verify that the Status Code is 200 OK.

7.7. In the Response tab, verify that the response matches the following:

Aloha mai localhost

7.8. In the URL form, enter http://localhost:8080/api/es and then click Send.

7.9. In the Headers tab, verify that the Status Code is 200 OK.

7.10. In the Response tab, verify that the response matches the following:

Hola de localhost

8. Stop the two downstream services and test the API gateway's fault tolerance.

8.1. Stop the WildFly Swarm instance running the aloha microservice. Return to the terminal
where it is running and press Ctrl+C to stop the microservice.

8.2. Stop the WildFly Swarm instance running the hola microservice. Return to the terminal
where it is running and press Ctrl+C to stop the microservice.

8.3. Return to the Firefox window where you have the RESTClient plug-in open.

8.4. Select GET as the Method. In the URL form, enter http://localhost:8080/api/haw
and then click Send.

8.5. In the Headers tab, verify that the Status Code is 200 OK.

8.6. In the Response tab, verify that the response matches the following:

Aloha fallback

8.7. In the URL form, enter http://localhost:8080/api/es and then click Send.

8.8. In the Headers tab, verify that the Status Code is 200 OK.

8.9. In the Response tab, verify that the response matches the following:

Hola fallback

9. Stop the api-gateway microservice.

Return to the terminal window where the WildFly Swarm instance is running the api-gateway
microservice. Press Ctrl+C to stop the service.

10. Clean up, commit your changes to your local Git repository in the lab branch, and return to
the master branch.

10.1. Use the git add command to stage the uncommitted changes.

[student@workstation api-gateway]$ git add .

10.2.Use the git commit command to commit your changes to the local branch.

[student@workstation api-gateway]$ git commit -m"completing lab gateway-ft"


[lab-gateway-ft 7210573] completing lab gateway-ft

10.3. Switch the working copy back to the master branch to finish cleaning up.

[student@workstation api-gateway]$ git checkout master


Switched to branch 'master'

This concludes this guided exercise.

Lab: Developing an API Gateway

In this lab, you will implement an API gateway using the RESTEasy proxy framework and include
fault tolerance support using MicroProfile. Then, you will deploy and test the gateway on
OpenShift.

Outcomes
You should be able to create an API gateway for two microservices using the RESTEasy proxy
framework and include fault tolerance in the API gateway to handle any downstream failures.

Before you begin


If you have not already done so, clone the microprofile-conference repository to the
workstation machine.

[student@workstation ~]$ git clone \
http://services.lab.example.com/microprofile-conference
Cloning into 'microprofile-conference'...
...output omitted...
Resolving deltas: 100% (2803/2803), done.

Run the lab setup command to set up the exercise.

[student@workstation ~]$ lab gateway-review setup

Steps
1. Switch the repository to the lab-gateway-review branch to get the correct version of the
application code for this exercise.

2. Complete the io.microprofile.showcase.proxy.SpeakerResource proxy interface


from the microservice-gateway project, which provides a proxy for the microservice-
speaker REST endpoints.

You can review the endpoint definitions located in the ResourceSpeaker JAX-RS class in
the io/microprofile/showcase/speaker/rest/ package of the microservice-speaker application.

The base URI path for the service is speaker and all of the service methods produce and
consume JSON data.

The following table contains a summary of the available endpoints; an illustrative sketch of the proxy interface follows the table:

ResourceSpeaker Endpoints
Method Path HTTP Method
getAllSpeakers / GET
add(Speaker) /add POST
remove(id) /remove/{id} DELETE
update(Speaker) /update PUT
retrieve(id) /retrieve/{id} GET
search /search PUT
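
One possible shape for the completed proxy interface is sketched below. This is an illustrative sketch only: the base path and endpoint paths come from the table above, the return types are an assumption inferred from the fallback helpers described later in this lab, and the Speaker type and exact parameter types belong to the course source code, which may differ from what is shown here.

import java.util.Collection;
import java.util.Set;
import javax.ws.rs.Consumes;
import javax.ws.rs.DELETE;
import javax.ws.rs.GET;
import javax.ws.rs.POST;
import javax.ws.rs.PUT;
import javax.ws.rs.Path;
import javax.ws.rs.PathParam;
import javax.ws.rs.Produces;
import javax.ws.rs.core.MediaType;

// Illustrative sketch of the RESTEasy proxy interface for microservice-speaker.
@Path("speaker")
@Produces(MediaType.APPLICATION_JSON)
@Consumes(MediaType.APPLICATION_JSON)
public interface SpeakerResource {

    @GET
    Collection<Speaker> getAllSpeakers();

    @POST
    @Path("add")
    Speaker add(Speaker speaker);

    @DELETE
    @Path("remove/{id}")
    void remove(@PathParam("id") String id);

    @PUT
    @Path("update")
    Speaker update(Speaker speaker);

    @GET
    @Path("retrieve/{id}")
    Speaker retrieve(@PathParam("id") String id);

    @PUT
    @Path("search")
    Set<Speaker> search(Speaker searchPattern);
}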

3. Complete the SessionResource proxy interface from the microservice-gateway project,


which provides a proxy for the microservice-session microservice REST endpoints.

You can review the endpoint definitions located in the SessionResource JAX-RS class in
the io/microprofile/showcase/session/ package of the microservice-session microservice.

The base URI path for the service is sessions, and all of the service's methods produce and
consume JSON data.

The following table contains a summary of the available endpoints:

SessionResource Endpoints
Method                                      Path                               HTTP Method
allSessions()                               /                                  GET
create(Session)                             /                                  POST
getSession(sessionId)                       /{sessionId}                       GET
updateSession(sessionId, Session)           /{sessionId}                       PUT
deleteSession(sessionId)                    /{sessionId}                       DELETE
getSessionSpeakers(sessionId)               /{sessionId}/speakers              GET
addSessionSpeaker(sessionId, speakerId)     /{sessionId}/speakers/{speakerId}  PUT
removeSessionSpeaker(sessionId, speakerId)  /{sessionId}/speakers/{speakerId}  DELETE

4. Use the proxy client created in the previous steps to finish the
io.microprofile.showcase.gateway.GatewaySpeakerResource class
implementation from the microservice-gateway project.

Be sure to address the following aspects of the class:

• Configure a timeout of three seconds for all methods so that the gateway fails quickly and
avoids lengthy wait times for its clients.

• Complete the buildClient() method to use the ClientBuilder class to create a new
RESTEasy REST client. Be sure to use the SpeakerResource proxy interface to configure
the client to connect to the microservice-speaker microservice.

• Update all of the endpoints to use the appropriate fallback based on the return type of
the method. There are four fallback methods available; some of the methods have been
overloaded to support different combinations of possible parameters:

◦ speakerCollectionFallback: Returns an empty Collection object for methods


that have a return type of Collection<Speaker>.

◦ speakerSetFallback: Returns an empty Set object for methods that have a return
type of Set<Speaker>.

◦ speakerFallback: Returns a placeholder Speaker object for methods that have a
return type of Speaker.

◦ voidFallback: Does not return anything. Use this for methods with a void return
type.

5. Use the proxy client that you created in the previous steps to finish the
io.microprofile.showcase.gateway.GatewaySessionResource class
implementation from the microservice-gateway microservice.

Be sure to address the following aspects of the gateway service class:

• Configure a class-level timeout of three seconds so that the gateway fails quickly and
avoids lengthy wait times for its clients.

• Complete the buildClient() method to use the ClientBuilder class to create a new
RESTEasy REST client. Be sure to use the SessionResource proxy interface to configure
the client to connect to the microservice-session microservice.

• Update all of the endpoints to use the appropriate fallback based on the method's
return type. There are three fallback methods available. Some of the methods have been
overloaded to support different combinations of possible parameters:

◦ sessionCollectionFallback: Returns an empty Set for methods that have a


return type of Collection<Session>.

◦ sessionFallback: Returns a placeholder Session object for methods that have a


return type of Session.

◦ sessionResponseFallback: Returns a success response for methods that have a


return type of Response.

6. Log in to the OpenShift cluster as the developer user, create a new OpenShift project
called gateway-review. Then deploy the microservice-speaker, microservice-session, and
microservice-gateway applications to the OpenShift cluster using the fabric8 Maven plug-in.

Use the -DskipTests flag to skip the tests during the deployment process.

Use the oc status command to verify the three deployments and capture the URL of the
microservice-gateway microservice in the OpenShift cluster.

7. Test accessing the microservice-speaker microservice endpoints through the microservice-


gateway microservice from a client using the RESTClient Firefox plug-in.

7.1. Start Firefox on the workstation VM and click the RESTClient plug-in on the browser's
toolbar.

7.2. Select GET as the Method. In the URL form, enter http://microservice-gateway-
gateway-review.apps.lab.example.com/gateway/speaker and click Send.

7.3. In the Headers tab, verify that the Status Code is 200 OK.

7.4. In the Response tab, verify that the response contains the following:


[{"id":"25","title":"Mr.","nameFirst":"Abbot","nameLast":"Blanchard",
"organization":"n/a","biography":"Lorem ipsum dolor sit amet, consectetur
adipiscing elit. Nullam commodo eget nisl eu fermentum. Phasellus tellus
elit, eleifend vel bibendum quis, hendrerit sit amet enim. Donec nulla tortor,
consectetur sed massa sed, luctus aliquet diam...
...output omitted...

8. Test the /speaker/add REST endpoint of the microservice-gateway microservice.

8.1. Select POST as the Method. Update the URL field to http://microservice-
gateway-gateway-review.apps.lab.example.com/gateway/speaker/add.

8.2. In the Body section of the request, add the following JSON representation of a
Speaker entity:

Note
This can be copied and pasted from /home/student/JB283/labs/
gateway-review/json.txt

{
"title":"Mr.",
"nameFirst":"Test",
"nameLast":"User",
"organization":"Tester Inc.",
"biography":"Lorem ipsum dolor sit amet, consectetur adipiscing elit. Nullam
commodo eget nisl eu fermentum. Fusce vitae diam fringilla, tincidunt dolor in,
condimentum",
"picture":"assets/images/unknown.jpg",
"twitterHandle":"@test_user"
}

8.3. On the toolbar, click Headers and select Custom Header to add a new custom header to
the request.

8.4. In the custom header dialog box, enter the following information:

• Name: Content-Type

• Value: application/json

Click Okay and then click Send.

8.5. In the Headers tab, verify that the Status Code is 200 OK.

8.6. In the Response tab, verify that the response matches the following:

{"id":"7f59e4cc-3665-4210-94b2-162ce95551c9","title":"Mr.","nameFirst":"Test",
"nameLast":"User","organization":"Tester Inc.","biography":"Lorem ipsum dolor
sit amet, consectetur adipiscing elit. Nullam commodo eget nisl eu fermentum.
Fusce vitae diam fringilla, tincidunt dolor in, condimentum","picture":"assets/
images/unknown.jpg","twitterHandle":"@test_user","links":
{"add":"http://microservice-gateway-gateway-review.apps.lab.example.com/
speaker/","search":"http://microservice-gateway-gateway-

review.apps.lab.example.com/speaker/","self":"http://
microservice-gateway-gateway-review.apps.lab.example.com/speaker/
retrieve/7f59e4cc-3665-4210-94b2-162ce95551c9","update":"http://microservice-
gateway-gateway-review.apps.lab.example.com/speaker/","remove":"http://
microservice-gateway-gateway-review.apps.lab.example.com/speaker/
remove/7f59e4cc-3665-4210-94b2-162ce95551c9"}}
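
If you prefer to exercise the endpoint from code instead of the RESTClient plug-in, the
following JAX-RS client sketch issues the same POST request. It is illustrative only and is
not part of the lab files; the class name and the abbreviated biography are assumptions, and
the URL is the gateway route used in this lab.

import javax.ws.rs.client.Client;
import javax.ws.rs.client.ClientBuilder;
import javax.ws.rs.client.Entity;
import javax.ws.rs.core.MediaType;
import javax.ws.rs.core.Response;

public class AddSpeakerCheck {
    public static void main(String[] args) {
        // Abbreviated Speaker JSON body; the full body is shown in the step above
        String json = "{\"title\":\"Mr.\",\"nameFirst\":\"Test\",\"nameLast\":\"User\","
                + "\"organization\":\"Tester Inc.\",\"biography\":\"Lorem ipsum\","
                + "\"picture\":\"assets/images/unknown.jpg\",\"twitterHandle\":\"@test_user\"}";

        Client client = ClientBuilder.newClient();
        try {
            // Entity.entity() with APPLICATION_JSON sets the Content-Type header of the request
            Response response = client
                .target("http://microservice-gateway-gateway-review.apps.lab.example.com")
                .path("gateway/speaker/add")
                .request(MediaType.APPLICATION_JSON)
                .post(Entity.entity(json, MediaType.APPLICATION_JSON));

            // Expect 200 OK and the created Speaker, including its generated id and links
            System.out.println("Status: " + response.getStatus());
            System.out.println("Body: " + response.readEntity(String.class));
        } finally {
            client.close();
        }
    }
}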

9. Test the /gateway/speaker/update REST endpoint of the microservice-gateway
microservice.

9.1. Select PUT as the Method. In the URL form, paste the value http://microservice-
gateway-gateway-review.apps.lab.example.com from the clipboard and
append the relative URI for the /gateway/speaker/update service.

9.2. In the Body section of the request, add the following updated JSON representation of a
Speaker entity. Update the id field using the id parameter from the previous step:

Note
This can be copied and pasted from /home/student/JB283/labs/
gateway-review/json-2.txt

{
"id": "7f59e4cc-3665-4210-94b2-162ce95551c9",
"title":"Mr.",
"nameFirst":"TestUpdate",
"nameLast":"UserUpdate",
"organization":"Tester Inc.",
"biography":"Lorem ipsum dolor sit amet, consectetur adipiscing elit. Nullam
commodo eget nisl eu fermentum. Fusce vitae diam fringilla, tincidunt dolor in,
condimentum",
"picture":"assets/images/unknown.jpg",
"twitterHandle":"@test_user"
}

9.3. Click Send.

9.4. In the Headers tab, verify that the Status Code is 200 OK.

9.5. In the Response tab, verify that the response matches the following:

{"id":"7f59e4cc-3665-4210-94b2-162ce95551c9","title":"Mr.",
"nameFirst":"TestUpdate","nameLast":"UserUpdate","organization":"Tester
Inc.","biography":"Lorem ipsum dolor sit amet, consectetur adipiscing
elit. Nullam commodo eget nisl eu fermentum. Fusce vitae diam
fringilla, tincidunt dolor in, condimentum","picture":"assets/images/
unknown.jpg","twitterHandle":"@test_user","links":{"add":"http://
microservice-gateway-gateway-review.apps.lab.example.com/
speaker/","search":"http://microservice-gateway-gateway-
review.apps.lab.example.com/speaker/","self":"http://
microservice-gateway-gateway-review.apps.lab.example.com/speaker/
retrieve/7f59e4cc-3665-4210-94b2-162ce95551c9","update":"http://microservice-
gateway-gateway-review.apps.lab.example.com/speaker/","remove":"http://
microservice-gateway-gateway-review.apps.lab.example.com/speaker/
remove/7f59e4cc-3665-4210-94b2-162ce95551c9"}}


10. Test the /gateway/speaker/remove endpoint of the microservice-gateway microservice.

10.1. Select DELETE as the Method. In the URL form, paste the value http://
microservice-gateway-gateway-review.apps.lab.example.com
from the clipboard and append the relative URI for the /gateway/speaker/
remove/7f59e4cc-3665-4210-94b2-162ce95551c9 service.

10.2.Click Send.

10.3.In the Headers tab, verify that the Status Code is 204 No Content.

11. Test accessing the microservice-session endpoints through the microservice-gateway
microservice from a client using the RESTClient Firefox plug-in.

11.1. Select GET as the Method. In the URL form, enter http://microservice-gateway-
gateway-review.apps.lab.example.com/gateway/sessions and click Send.

11.2. In the Headers tab, verify that the Status Code is 200 OK.

11.3. In the Response tab, verify that the response matches the following:

[{"schedule":189,"id":"88","speakers":["32"],"title":"euismod in, dolor. Fusce


feugiat.",
"code":"A7858A80-383D-7635-C1C4-6F92DCBE6004","type":"Legal
Department","abstract":
"accumsan neque et nunc. Quisque ornare tortor at risus. Nunc ac sem ut dolor
dapibus
...output omitted...

12. Delete the OpenShift routes that expose the microservice-session and microservice-
speaker microservices to make them inaccessible. After these two microservices are no
longer available, test the fault tolerance of the gateway using the RESTClient plug-in.

12.1. Use the oc delete route microservice-session command to delete the
microservice-session route:

[student@workstation microservice-gateway]$ oc delete route microservice-session
route "microservice-session" deleted

12.2. Use the oc delete route microservice-speaker command to delete the
microservice-speaker route:

[student@workstation microservice-gateway]$ oc delete route microservice-speaker
route "microservice-speaker" deleted

12.3.Return to the Firefox window where you have the RESTClient plug-in open.

12.4. Select GET as the Method. In the URL form, enter http://microservice-gateway-
gateway-review.apps.lab.example.com/gateway/speaker and click Send.

12.5.In the Headers tab, verify that the Status Code is 200 OK.

12.6.In the Response tab, verify that the response matches the following:

[]

12.7. In the URL form, enter http://microservice-gateway-gateway-
review.apps.lab.example.com/gateway/sessions and click Send.

12.8.In the Headers tab, verify that the Status Code is 200 OK.

12.9. In the Response tab, verify that the response matches the following:

[]

13. Grade the lab.

[student@workstation microservice-gateway]$ lab gateway-review grade

14. Clean up the OCP project, commit your changes to your local Git repository in the lab
branch, and return to the master branch.

14.1. Delete the gateway-review OCP project to undeploy the service and remove the
other OCP resources.

[student@workstation microservice-gateway]$ oc delete project gateway-review
project "gateway-review" deleted

14.2.Use the git add command to stage the uncommitted changes.

[student@workstation microservice-gateway]$ git add .

14.3.Use the git commit command to commit your changes to the local branch.

[student@workstation microservice-gateway]$ git commit -m"completing lab gateway-review"
[lab-gateway-review 7210573] completing lab gateway-review

14.4.Switch the working copy back to the master branch to finish cleaning up.

[student@workstation microservice-gateway]$ git checkout master
Switched to branch 'master'


Solution
In this lab, you will implement an API gateway using the RESTEasy proxy framework and include
fault tolerance support using MicroProfile. Then, you will deploy and test the gateway on
OpenShift.

Outcomes
You should be able to create an API gateway for two microservices using the RESTEasy proxy
framework and include fault tolerance in the API gateway to handle any downstream failures.

Before you begin


If you have not already done so, clone the microprofile-conference repository to the
workstation machine.

[student@workstation ~]$ git clone \
http://services.lab.example.com/microprofile-conference
Cloning into 'microprofile-conference'...
...output omitted...
Resolving deltas: 100% (2803/2803), done.

Run the lab setup command to set up the exercise.

[student@workstation ~]$ lab gateway-review setup

Steps
1. Switch the repository to the lab-gateway-review branch to get the correct version of the
application code for this exercise.

1.1. Use the git checkout command to switch to the correct branch.

[student@workstation ~]$ cd microprofile-conference
[student@workstation microprofile-conference]$ git checkout \
lab-gateway-review
Switched to a new branch 'lab-gateway-review'

1.2. Use the git status command to ensure that you are on the correct branch.

[student@workstation microprofile-conference]$ git status
# On branch lab-gateway-review
nothing to commit, working directory clean

2. Complete the io.microprofile.showcase.proxy.SpeakerResource proxy interface
from the microservice-gateway project, which provides a proxy for the microservice-
speaker REST endpoints.

You can review the endpoint definitions located in the ResourceSpeaker JAX-RS class in
the io/microprofile/showcase/speaker/rest/ package of the microservice-speaker application.

The base URI path for the service is speaker and all of the service methods produce and
consume JSON data.

The following table contains a summary of the available endpoints:


ResourceSpeaker Endpoints
Method             Path               HTTP Method
getAllSpeakers     /                  GET
add(Speaker)       /add               POST
remove(id)         /remove/{id}       DELETE
update(Speaker)    /update            PUT
retrieve(id)       /retrieve/{id}     GET
search             /search            PUT

2.1. In JBoss Developer Studio, open the SpeakerResource interface by expanding
the microservice-gateway item in the Project Explorer tab in the left pane of JBoss
Developer Studio.

Click microservice-gateway > Java Resources > src/main/java >
io.microprofile.showcase.proxy to expand it. Double-click the
SpeakerResource.java file.

2.2. Add the class-level JAX-RS annotations to specify the media types that the remote
microservice produces and consumes, as well as the base URI path for the microservice.

//TODO specify that the remote service consumes JSON data
@Consumes(MediaType.APPLICATION_JSON)
//TODO specify that the remote service produces JSON data
@Produces(MediaType.APPLICATION_JSON)
//TODO set the relative path to "speaker"
@Path("speaker")
public interface SpeakerResource {

2.3. Add the method-level JAX-RS annotations to map each of the proxy methods in the
interface to their matching methods on the remote microservice.

public interface SpeakerResource {

//TODO specify JAX-RS annotations
@GET
public Collection<Speaker> getAllSpeakers();

//TODO specify JAX-RS annotations
@POST
@Path("/add")
public Speaker add(final Speaker speaker);

//TODO specify JAX-RS annotations
@DELETE
@Path("/remove/{id}")
public void remove(@PathParam("id") final String id);

//TODO specify JAX-RS annotations
@PUT
@Path("/update")
public Speaker update(final Speaker speaker);

//TODO specify JAX-RS annotations
@GET
@Path("/retrieve/{id}")
public Speaker retrieve(@PathParam("id") final String id);

//TODO specify JAX-RS annotations
@PUT
@Path("/search")
public Set<Speaker> search(final Speaker speaker);
...output omitted...

2.4. Press Ctrl+S to save your changes.

3. Complete the SessionResource proxy interface from the microservice-gateway project,
which provides a proxy for the microservice-session microservice REST endpoints.

You can review the endpoint definitions located in the SessionResource JAX-RS class in
the io/microprofile/showcase/session/ package of the microservice-session microservice.

The base URI path for the service is sessions, and all of the service's methods produce and
consume JSON data.

The following table contains a summary of the available endpoints:

SessionResource Endpoints
Method                                        Path                                 HTTP Method
allSessions()                                 /                                    GET
create(Session)                               /                                    POST
getSession(sessionId)                         /{sessionId}                         GET
updateSession(sessionId, Session)             /{sessionId}                         PUT
deleteSession(sessionId)                      /{sessionId}                         DELETE
getSessionSpeakers(sessionId)                 /{sessionId}/speakers                GET
addSessionSpeaker(sessionId, speakerId)       /{sessionId}/speakers/{speakerId}    PUT
removeSessionSpeaker(sessionId, speakerId)    /{sessionId}/speakers/{speakerId}    DELETE

3.1. In JBoss Developer Studio, open the SessionResource interface.
Expand the microservice-gateway > Java Resources > src/main/java >
io.microprofile.showcase.proxy package. Double-click the SessionResource.java
file.

3.2. Add the class-level JAX-RS annotations to specify the media types that the remote
microservice produces and consumes, as well as the base URI path for the microservice.

//TODO specify that the remote service consumes JSON data
@Consumes(MediaType.APPLICATION_JSON)
//TODO specify that the remote service produces JSON data
@Produces(MediaType.APPLICATION_JSON)
//TODO set the relative path to "sessions"
@Path("sessions")
public interface SessionResource {

3.3. Add the method-level JAX-RS annotations to map each of the proxy methods in the
interface to their matching methods on the remote microservice.

public interface SessionResource {

//TODO specify JAX-RS annotations
@GET
public Collection<Session> allSessions();

//TODO specify JAX-RS annotations
@POST
public Session createSession(Session session);

//TODO specify JAX-RS annotations
@GET
@Path("/{sessionId}")
public Session getSession(@PathParam("sessionId")String sessionId);

//TODO specify JAX-RS annotations
@PUT
@Path("/{sessionId}")
public Session updateSession(@PathParam("sessionId")String sessionId, Session
session);

//TODO specify JAX-RS annotations
@DELETE
@Path("/{sessionId}")
public Response deleteSession(@PathParam("sessionId")String sessionId);

//TODO specify JAX-RS annotations
@GET
@Path("/{sessionId}/speakers")
public Response getSessionSpeakers(@PathParam("sessionId")String sessionId);

//TODO specify JAX-RS annotations
@PUT
@Path("/{sessionId}/speakers/{speakerId}")
public Session addSessionSpeaker(@PathParam("sessionId")String sessionId,
@PathParam("speakerId")String speakerId);

//TODO specify JAX-RS annotations
@DELETE
@Path("/{sessionId}/speakers/{speakerId}")
public Response removeSessionSpeaker(@PathParam("sessionId")String sessionId,
@PathParam("speakerId")String speakerId);
...output omitted...

3.4. Press Ctrl+S to save your changes.

4. Use the proxy client created in the previous steps to finish the
io.microprofile.showcase.gateway.GatewaySpeakerResource class
implementation from the microservice-gateway project.

Be sure to address the following aspects of the class:


• Configure a timeout of three seconds for all methods so that the gateway fails quickly and
avoids lengthy wait times for its clients.

• Complete the buildClient() method to use the ClientBuilder class to create a new
RESTEasy REST client. Be sure to use the SpeakerResource proxy interface to configure
the client to connect to the microservice-speaker microservice.

• Update all of the endpoints to use the appropriate fallback based on the return type of
the method. There are four fallback methods available; some of the methods have been
overloaded to support different combinations of possible parameters:

◦ speakerCollectionFallback: Returns an empty Collection object for methods that have a
return type of Collection<Speaker>.

◦ speakerSetFallback: Returns an empty Set object for methods that have a return
type of Set<Speaker>.

◦ speakerFallback: Returns a placeholder Speaker object for methods that have a return
type of Speaker.

◦ voidFallback: Does not return anything. Use this for methods with a void return
type.

4.1. In JBoss Developer Studio, open the GatewaySpeakerResource JAX-RS service.
Expand the microservice-gateway > Java Resources > src/main/java >
io.microprofile.showcase.gateway package. Double-click the
GatewaySpeakerResource.java file.

4.2. Use the @Timeout fault tolerance annotation to configure the timeout for all methods
in the class:

@Path("gateway/speaker")
@ApplicationScoped
@Consumes(MediaType.MEDIA_TYPE_WILDCARD)
@Produces(MediaType.APPLICATION_JSON)
//TODO add class-level timeout of 3 seconds
@Timeout(3000)
public class GatewaySpeakerResource {

4.3. Update the buildClient() method to use the ClientBuilder class and the
SpeakerResource proxy interface:

private SpeakerResource buildClient() {
System.out.println("Building new client");
// TODO create a new client
Client client = ClientBuilder.newClient();
WebTarget target = client.target(speakerURL);
ResteasyWebTarget restEasyTarget = (ResteasyWebTarget) target;
// TODO create proxy using the proxy interface
return restEasyTarget.proxy(SpeakerResource.class);
}

4.4. Specify the appropriate fallback methods for each JAX-RS method:


@GET
//TODO specify fallback method as speakerCollectionFallback
@Fallback(fallbackMethod="speakerCollectionFallback")
public Collection<Speaker> getAllSpeakers() {
SpeakerResource proxy = buildClient();
return proxy.getAllSpeakers();
}

@POST
@Path("/add")
//TODO specify fallback method as speakerFallback
@Fallback(fallbackMethod="speakerFallback")
public Speaker add(final Speaker speaker) {
SpeakerResource proxy = buildClient();
return proxy.add(speaker);
}

@DELETE
@Path("/remove/{id}")
//TODO specify fallback method as speakerVoidFallback
@Fallback(fallbackMethod="speakerVoidFallback")
public void remove(@PathParam("id") final String id) {
SpeakerResource proxy = buildClient();
proxy.remove(id);
}

@PUT
@Path("/update")
//TODO specify fallback method as speakerFallback
@Fallback(fallbackMethod="speakerFallback")
public Speaker update(final Speaker speaker) {
SpeakerResource proxy = buildClient();
return proxy.update(speaker);
}

@GET
@Path("/retrieve/{id}")
//TODO specify fallback method as speakerFallback
@Fallback(fallbackMethod="speakerFallback")
public Speaker retrieve(@PathParam("id") final String id) {
SpeakerResource proxy = buildClient();
return proxy.retrieve(id);
}

@PUT
@Path("/search")
//TODO specify fallback method as speakerSetFallback
@Fallback(fallbackMethod="speakerSetFallback")
public Set<Speaker> search(final Speaker speaker) {
SpeakerResource proxy = buildClient();
return proxy.search(speaker);
}

4.5. Press Ctrl+S to save your changes.

5. Use the proxy client that you created in the previous steps to finish the
io.microprofile.showcase.gateway.GatewaySessionResource class
implementation from the microservice-gateway microservice.

Be sure to address the following aspects of the gateway service class:


• Configure a class-level timeout of three seconds so that the gateway fails quickly and
avoids lengthy wait times for its clients.

• Complete the buildClient() method to use the ClientBuilder class to create a new
RESTEasy REST client. Be sure to use the SessionResource proxy interface to configure
the client to connect to the microservice-session microservice.

• Update all of the endpoints to use the appropriate fallback based on the method's
return type. There are three fallback methods available. Some of the methods have been
overloaded to support different combinations of possible parameters:

◦ sessionCollectionFallback: Returns an empty Set for methods that have a return type of
Collection<Session>.

◦ sessionFallback: Returns a placeholder Session object for methods that have a return
type of Session.

◦ sessionResponseFallback: Returns a success response for methods that have a return
type of Response.

5.1. In JBoss Developer Studio, open the GatewaySessionResource JAX-RS service.
Expand the microservice-gateway > Java Resources > src/main/java >
io.microprofile.showcase.gateway package. Double-click the
GatewaySessionResource.java file.

5.2. Use the @Timeout fault tolerance annotation to configure the class-level timeout
behavior:

@Path("gateway/sessions")
@ApplicationScoped
@Consumes(MediaType.MEDIA_TYPE_WILDCARD)
//TODO add class-level timeout of 3 seconds
@Timeout(3000)
public class GatewaySessionResource {

5.3. Update the buildClient() method to use the ClientBuilder class and the
SessionResource proxy interface:

private SessionResource buildClient() {
System.out.println("Building new client");
//TODO create a new client
Client client = ClientBuilder.newClient();
WebTarget target = client.target(sessionURL);
ResteasyWebTarget restEasyTarget = (ResteasyWebTarget)target;
//TODO create proxy using the proxy interface
return restEasyTarget.proxy(SessionResource.class);
}

5.4. Specify the appropriate fallback method for each JAX-RS method:

@GET
@Produces(MediaType.APPLICATION_JSON)
@Path("/")
//TODO specify fallback method as sessionCollectionFallback

@Fallback(fallbackMethod="sessionCollectionFallback")
public Collection<Session> allSessions() {
SessionResource sessionProxy = buildClient();
return sessionProxy.allSessions();
}

@POST
@Consumes(MediaType.APPLICATION_JSON)
@Produces(MediaType.APPLICATION_JSON)
@Path("/")
//TODO specify fallback method as sessionFallback
@Fallback(fallbackMethod="sessionFallback")
public Session createSession(Session session) {
SessionResource sessionProxy = buildClient();
return sessionProxy.createSession(session);
}

@GET
@Produces(MediaType.APPLICATION_JSON)
@Path("/{sessionId}")
//TODO specify fallback method as sessionFallback
@Fallback(fallbackMethod="sessionFallback")
public Session getSession(@PathParam("sessionId")String sessionId) {
SessionResource sessionProxy = buildClient();
return sessionProxy.getSession(sessionId);
}

@PUT
@Consumes(MediaType.APPLICATION_JSON)
@Produces(MediaType.APPLICATION_JSON)
@Path("/{sessionId}")
//TODO specify fallback method as sessionFallback
@Fallback(fallbackMethod="sessionFallback")
public Session updateSession(@PathParam("sessionId")String sessionId, Session
session) {
SessionResource sessionProxy = buildClient();
return sessionProxy.updateSession(sessionId, session);
}

@DELETE
@Path("/{sessionId}")
//TODO specify fallback method as sessionResponseFallback
@Fallback(fallbackMethod="sessionResponseFallback")
public Response deleteSession(@PathParam("sessionId")String sessionId) {
SessionResource sessionProxy = buildClient();
return sessionProxy.deleteSession(sessionId);
}

@GET
@Produces(MediaType.APPLICATION_JSON)
@Path("/{sessionId}/speakers")
//TODO specify fallback method as sessionResponseFallback
@Fallback(fallbackMethod="sessionResponseFallback")
public Response getSessionSpeakers(@PathParam("sessionId")String sessionId) {
SessionResource sessionProxy = buildClient();
return sessionProxy.getSessionSpeakers(sessionId);
}

@PUT
@Produces(MediaType.APPLICATION_JSON)
@Path("/{sessionId}/speakers/{speakerId}")
//TODO specify fallback method as sessionFallback
@Fallback(fallbackMethod="sessionFallback")

public Session addSessionSpeaker(@PathParam("sessionId")String sessionId,
@PathParam("speakerId")String speakerId) {
SessionResource sessionProxy = buildClient();
return sessionProxy.addSessionSpeaker(sessionId, speakerId);
}

@DELETE
@Path("/{sessionId}/speakers/{speakerId}")
//TODO specify fallback method as sessionResponseFallback
@Fallback(fallbackMethod="sessionResponseFallback")
public Response removeSessionSpeaker(@PathParam("sessionId")String sessionId,
@PathParam("speakerId")String speakerId) {
SessionResource sessionProxy = buildClient();
return sessionProxy.removeSessionSpeaker(sessionId, speakerId);
}

5.5. Press Ctrl+S to save your changes.

6. Log in to the OpenShift cluster as the developer user and create a new OpenShift project
called gateway-review. Then deploy the microservice-speaker, microservice-session, and
microservice-gateway applications to the OpenShift cluster using the fabric8 Maven plug-in.

Use the -DskipTests flag to skip the tests during the deployment process.

Use the oc status command to verify the three deployments and capture the URL of the
microservice-gateway microservice in the OpenShift cluster.

6.1. Open a terminal window on the workstation VM and log in to the OpenShift cluster as
the developer user:

[student@workstation ~]$ oc login -u developer -p redhat \
https://master.lab.example.com

6.2. Create the gateway-review OpenShift project:

[student@workstation ~]$ oc new-project gateway-review
Now using project "gateway-review"...

6.3. Use the fabric8 Maven plug-in to deploy the microservice-speaker application to the
OpenShift cluster.

Open a new terminal window, navigate to the microservice-speaker microservice
directory, and deploy it to the OpenShift cluster:

[student@workstation ~]$ cd microprofile-conference/microservice-speaker
[student@workstation microservice-speaker]$ mvn clean fabric8:deploy -DskipTests
[INFO] Scanning for projects...
[INFO] F8: Running in OpenShift mode
...
[INFO] Current reconnect backoff is 4000 milliseconds (T2)
...
[INFO] BUILD SUCCESS
...


6.4. Use the fabric8 Maven plug-in to deploy the microservice-session application to the
OpenShift cluster.

Navigate to the microservice-session microservice directory and deploy it to the
OpenShift cluster:

[student@workstation microservice-speaker]$ cd ../microservice-session
[student@workstation microservice-session]$ mvn clean fabric8:deploy -DskipTests
[INFO] Scanning for projects...
[INFO] F8: Running in OpenShift mode
...
[INFO] Current reconnect backoff is 4000 milliseconds (T2)
...
[INFO] BUILD SUCCESS
...

6.5. Use the fabric8 Maven plug-in to deploy the microservice-gateway application to the
OpenShift cluster.

Navigate to the microservice-gateway microservice directory and deploy it to the
OpenShift cluster:

[student@workstation microservice-session]$ cd ../microservice-gateway
[student@workstation microservice-gateway]$ mvn clean fabric8:deploy -DskipTests
[INFO] Scanning for projects...
[INFO] F8: Running in OpenShift mode
...
[INFO] Current reconnect backoff is 4000 milliseconds (T2)
...
[INFO] BUILD SUCCESS
...

6.6. Use the oc status command to verify the three deployments. If the deployments
succeed, the output is similar to the following:

[student@workstation microservice-gateway]$ oc status
In project gateway-review on server https://master.lab.example.com:443

http://microservice-gateway-gateway-review.apps.lab.example.com to pod port 8080 (svc/microservice-gateway)
dc/microservice-gateway deploys istag/microservice-gateway:latest <-
bc/microservice-gateway-s2i source builds uploaded code on
registry.lab.example.com:5000/redhat-openjdk-18/openjdk18-openshift:latest
deployment #1 deployed 11 seconds ago - 1 pod

http://microservice-session-gateway-review.apps.lab.example.com to pod port 8080 (svc/microservice-session)
dc/microservice-session deploys istag/microservice-session:latest <-
bc/microservice-session-s2i source builds uploaded code on
registry.lab.example.com:5000/redhat-openjdk-18/openjdk18-openshift:latest
deployment #1 deployed 4 minutes ago - 1 pod

http://microservice-speaker-gateway-review.apps.lab.example.com (svc/microservice-speaker)
dc/microservice-speaker deploys istag/microservice-speaker:latest <-
bc/microservice-speaker-s2i source builds uploaded code on
registry.lab.example.com:5000/redhat-openjdk-18/openjdk18-openshift:latest

deployment #1 deployed 7 minutes ago - 1 pod

View details with 'oc describe <resource>/<name>' or list everything with 'oc
get all'.

7. Test accessing the microservice-speaker microservice endpoints through the microservice-
gateway microservice from a client using the RESTClient Firefox plug-in.

7.1. Start Firefox on the workstation VM and click the RESTClient plug-in on the browser's
toolbar.

7.2. Select GET as the Method. In the URL form, enter http://microservice-gateway-
gateway-review.apps.lab.example.com/gateway/speaker and click Send.

7.3. In the Headers tab, verify that the Status Code is 200 OK.

7.4. In the Response tab, verify that the response contains the following:

[{"id":"25","title":"Mr.","nameFirst":"Abbot","nameLast":"Blanchard",
"organization":"n/a","biography":"Lorem ipsum dolor sit amet, consectetur
adipiscing elit. Nullam commodo eget nisl eu fermentum. Phasellus tellus
elit, eleifend vel bibendum quis, hendrerit sit amet enim. Donec nulla tortor,
consectetur sed massa sed, luctus aliquet diam...
...output omitted...

8. Test the /speaker/add REST endpoint of the microservice-gateway microservice.

8.1. Select POST as the Method. Update the URL field to http://microservice-
gateway-gateway-review.apps.lab.example.com/gateway/speaker/add.

8.2. In the Body section of the request, add the following JSON representation of a
Speaker entity:

Note
This can be copied and pasted from /home/student/JB283/labs/
gateway-review/json.txt

{
"title":"Mr.",
"nameFirst":"Test",
"nameLast":"User",
"organization":"Tester Inc.",
"biography":"Lorem ipsum dolor sit amet, consectetur adipiscing elit. Nullam
commodo eget nisl eu fermentum. Fusce vitae diam fringilla, tincidunt dolor in,
condimentum",
"picture":"assets/images/unknown.jpg",
"twitterHandle":"@test_user"
}

8.3. On the toolbar, click Headers and select Custom Header to add a new custom header to
the request.

8.4. In the custom header dialog box, enter the following information:


• Name: Content-Type

• Value: application/json

Click Okay and then click Send.

8.5. In the Headers tab, verify that the Status Code is 200 OK.

8.6. In the Response tab, verify that the response matches the following:

{"id":"7f59e4cc-3665-4210-94b2-162ce95551c9","title":"Mr.","nameFirst":"Test",
"nameLast":"User","organization":"Tester Inc.","biography":"Lorem ipsum dolor
sit amet, consectetur adipiscing elit. Nullam commodo eget nisl eu fermentum.
Fusce vitae diam fringilla, tincidunt dolor in, condimentum","picture":"assets/
images/unknown.jpg","twitterHandle":"@test_user","links":
{"add":"http://microservice-gateway-gateway-review.apps.lab.example.com/
speaker/","search":"http://microservice-gateway-gateway-
review.apps.lab.example.com/speaker/","self":"http://
microservice-gateway-gateway-review.apps.lab.example.com/speaker/
retrieve/7f59e4cc-3665-4210-94b2-162ce95551c9","update":"http://microservice-
gateway-gateway-review.apps.lab.example.com/speaker/","remove":"http://
microservice-gateway-gateway-review.apps.lab.example.com/speaker/
remove/7f59e4cc-3665-4210-94b2-162ce95551c9"}}

9. Test the /gateway/speaker/update REST endpoint of the microservice-gateway
microservice.

9.1. Select PUT as the Method. In the URL form, paste the value http://microservice-
gateway-gateway-review.apps.lab.example.com from the clipboard and
append the relative URI for the /gateway/speaker/update service.

9.2. In the Body section of the request, add the following updated JSON representation of a
Speaker entity. Update the id field using the id parameter from the previous step:

Note
This can be copied and pasted from /home/student/JB283/labs/
gateway-review/json-2.txt

{
"id": "7f59e4cc-3665-4210-94b2-162ce95551c9",
"title":"Mr.",
"nameFirst":"TestUpdate",
"nameLast":"UserUpdate",
"organization":"Tester Inc.",
"biography":"Lorem ipsum dolor sit amet, consectetur adipiscing elit. Nullam
commodo eget nisl eu fermentum. Fusce vitae diam fringilla, tincidunt dolor in,
condimentum",
"picture":"assets/images/unknown.jpg",
"twitterHandle":"@test_user"
}

9.3. Click Send.


9.4. In the Headers tab, verify that the Status Code is 200 OK.

9.5. In the Response tab, verify that the response matches the following:

{"id":"7f59e4cc-3665-4210-94b2-162ce95551c9","title":"Mr.",
"nameFirst":"TestUpdate","nameLast":"UserUpdate","organization":"Tester
Inc.","biography":"Lorem ipsum dolor sit amet, consectetur adipiscing
elit. Nullam commodo eget nisl eu fermentum. Fusce vitae diam
fringilla, tincidunt dolor in, condimentum","picture":"assets/images/
unknown.jpg","twitterHandle":"@test_user","links":{"add":"http://
microservice-gateway-gateway-review.apps.lab.example.com/
speaker/","search":"http://microservice-gateway-gateway-
review.apps.lab.example.com/speaker/","self":"http://
microservice-gateway-gateway-review.apps.lab.example.com/speaker/
retrieve/7f59e4cc-3665-4210-94b2-162ce95551c9","update":"http://microservice-
gateway-gateway-review.apps.lab.example.com/speaker/","remove":"http://
microservice-gateway-gateway-review.apps.lab.example.com/speaker/
remove/7f59e4cc-3665-4210-94b2-162ce95551c9"}}

10. Test the /gateway/speaker/remove endpoint of the microservice-gateway microservice.

10.1. Select DELETE as the Method. In the URL form, paste the value http://
microservice-gateway-gateway-review.apps.lab.example.com
from the clipboard and append the relative URI for the /gateway/speaker/
remove/7f59e4cc-3665-4210-94b2-162ce95551c9 service.

10.2.Click Send.

10.3.In the Headers tab, verify that the Status Code is 204 No Content.

11. Test accessing the microservice-session endpoints through the microservice-gateway
microservice from a client using the RESTClient Firefox plug-in.

11.1. Select GET as the Method. In the URL form, enter http://microservice-gateway-
gateway-review.apps.lab.example.com/gateway/sessions and click Send.

11.2. In the Headers tab, verify that the Status Code is 200 OK.

11.3. In the Response tab, verify that the response matches the following:

[{"schedule":189,"id":"88","speakers":["32"],"title":"euismod in, dolor. Fusce


feugiat.",
"code":"A7858A80-383D-7635-C1C4-6F92DCBE6004","type":"Legal
Department","abstract":
"accumsan neque et nunc. Quisque ornare tortor at risus. Nunc ac sem ut dolor
dapibus
...output omitted...

12. Delete the OpenShift routes that expose the microservice-session and microservice-
speaker microservices to make them inaccessible. After these two microservices are no
longer available, test the fault tolerance of the gateway using the RESTClient plug-in.

12.1. Use the oc delete route microservice-session command to delete the
microservice-session route:


[student@workstation microservice-gateway]$ oc delete route microservice-session
route "microservice-session" deleted

12.2. Use the oc delete route microservice-speaker command to delete the
microservice-speaker route:

[student@workstation microservice-gateway]$ oc delete route microservice-speaker
route "microservice-speaker" deleted

12.3.Return to the Firefox window where you have the RESTClient plug-in open.

12.4. Select GET as the Method. In the URL form, enter http://microservice-gateway-
gateway-review.apps.lab.example.com/gateway/speaker and click Send.

12.5.In the Headers tab, verify that the Status Code is 200 OK.

12.6.In the Response tab, verify that the response matches the following:

[]

12.7. In the URL form, enter http://microservice-gateway-gateway-
review.apps.lab.example.com/gateway/sessions and click Send.

12.8.In the Headers tab, verify that the Status Code is 200 OK.

12.9. In the Response tab, verify that the response matches the following:

[]

13. Grade the lab.

[student@workstation microservice-gateway]$ lab gateway-review grade

14. Clean up the OCP project, commit your changes to your local Git repository in the lab
branch, and return to the master branch.

14.1. Delete the gateway-review OCP project to undeploy the service and remove the
other OCP resources.

[student@workstation microservice-gateway]$ oc delete project gateway-review
project "gateway-review" deleted

14.2.Use the git add command to stage the uncommitted changes.

[student@workstation microservice-gateway]$ git add .

14.3.Use the git commit command to commit your changes to the local branch.


[student@workstation microservice-gateway]$ git commit -m"completing lab gateway-review"
[lab-gateway-review 7210573] completing lab gateway-review

14.4.Switch the working copy back to the master branch to finish cleaning up.

[student@workstation microservice-gateway]$ git checkout master
Switched to branch 'master'


Summary
In this chapter, you learned:

• An API gateway provides a number of benefits for microservice-based applications by acting
as an intermediary between the microservice clients and the back-end microservices.
These benefits include:
These benefits include:

◦ Insulating clients from the partitioning of the back-end microservices, which can change
over time.

◦ Implementing service discovery so that clients only need to locate the gateway instead of
every back-end service.

◦ Providing an optimal API for each type of client, which can greatly simplify the client code.

◦ Reducing the total number of requests a client needs to make if the gateway can retrieve
data from multiple services with a single round trip.

◦ Providing a standard HTTP API in front of any required services that do not use
client-friendly protocols, such as messaging-based services.

• An API gateway also presents some challenges, which you must consider when developing
your API gateway. These challenges include:

◦ Increasing overall system complexity. The API gateway is another service that developers
must build, test, manage, and deploy.

◦ Potentially increasing response time due to the additional network hop through the API
gateway.

◦ Increasing the difficulty of scaling the application under heavy load. The gateway must be
built using a platform that can support such requirements.

• 3scale provides APIcast, which implements the functionality of an API gateway as well as a
number of other useful features. APIcast is an NGINX-based API gateway used to integrate
your internal and external API services with 3scale API Management Platform.

CHAPTER 9

SECURING MICROSERVICES
WITH JWT

Overview
Goal          Secure a microservice using the JSON web token specification.
Objectives    • Implement a microservice that generates a JSON web token.
              • Secure a microservice endpoint using JWT authentication and authorization.
Sections      • Implementing a JSON Web Token Generator (and Guided Exercise)
              • Securing a Microservice Endpoint (and Guided Exercise)
Lab           Securing Microservices with JWT


Implementing a JSON Web Token Generator

Objective
After completing this section, students should be able to implement a microservice that
generates a JSON web token (JWT).

Creating Secured Microservices in the MicroProfile Specification
Implementing reliable and robust security in a microservices architecture is
important. The architecture of a microservice exposes multiple entry points to an application,
and communication likely necessitates multiple network hops, so the risk of unauthorized access
is high. This requires more planning than in traditional applications. Furthermore, security in
microservices using REST endpoints is hard to implement because of the following features of
REST services:

• REST is based on a stateless protocol (HTTP): Any sensitive information transmitted between
client and microservice must be transferred for each request.

• REST is based on a text-based protocol (HTTP): The information sent with each request is
available for anyone eavesdropping on communication, as HTTP is a plain-text protocol. Any
sensitive data is visible and may be captured by a third party.

• REST does not define a unique standard way to transmit sensitive data: There are at least three
ways to transmit information in a secure way in REST, including OAuth2, OpenID Connect
(OIDC), and JSON web tokens (JWT).

To avoid interoperability problems and the complexities mentioned, use the MicroProfile JWT
specification to secure information passed among your microservices.

The specification uses JSON web tokens (JWT), a token-based authentication mechanism that
defines how to guarantee that any sensitive information is transferred in a reliable and
secure way in a REST-based application.

The token-based authentication workflow involves the following entities:

Issuer
Issues security tokens after asserting an identity. This is usually a unique microservice that
works as the identity provider, providing a JWT token generator.

Client
The microservice that requests the tokens from the issuer.

Subject
The person, system, or entity to which the information in the token refers.

Resource Server
The microservice that consumes the token.

The resource server uses the following token workflow:

1. Extract the security token from the HTTP header field named Authorization.


2. Validate the token by checking its signature, encryption, and expiration.

3. Extract information about the subject.

4. Create a security context for the subject.
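
The following is a simplified, hand-rolled sketch of this workflow written as a JAX-RS filter. It
is for illustration only; the MicroProfile JWT implementation covered in this chapter performs
these steps for you. The sketch assumes the Nimbus JOSE JWT library, which this course uses,
and assumes that the issuer's RSA public key has been loaded elsewhere.

import java.security.interfaces.RSAPublicKey;
import java.util.Date;

import javax.ws.rs.container.ContainerRequestContext;
import javax.ws.rs.container.ContainerRequestFilter;
import javax.ws.rs.core.HttpHeaders;
import javax.ws.rs.core.Response;
import javax.ws.rs.ext.Provider;

import com.nimbusds.jose.crypto.RSASSAVerifier;
import com.nimbusds.jwt.SignedJWT;

@Provider
public class JwtRequestFilter implements ContainerRequestFilter {

    // Assumption: the issuer's public key is loaded at startup (not shown here)
    private RSAPublicKey publicKey;

    @Override
    public void filter(ContainerRequestContext requestContext) {
        // 1. Extract the security token from the Authorization header
        String header = requestContext.getHeaderString(HttpHeaders.AUTHORIZATION);
        if (header == null || !header.startsWith("Bearer ")) {
            requestContext.abortWith(Response.status(Response.Status.UNAUTHORIZED).build());
            return;
        }
        String token = header.substring("Bearer ".length());

        try {
            // 2. Validate the token: check the signature and the expiration claim
            SignedJWT jwt = SignedJWT.parse(token);
            boolean validSignature = jwt.verify(new RSASSAVerifier(publicKey));
            Date expiration = jwt.getJWTClaimsSet().getExpirationTime();
            boolean notExpired = expiration != null && expiration.after(new Date());
            if (!validSignature || !notExpired) {
                requestContext.abortWith(Response.status(Response.Status.UNAUTHORIZED).build());
                return;
            }

            // 3. Extract information about the subject
            String subject = jwt.getJWTClaimsSet().getSubject();

            // 4. A complete implementation would now build a security context for this subject
            System.out.println("Authenticated subject: " + subject);
        } catch (Exception e) {
            // Malformed tokens are rejected
            requestContext.abortWith(Response.status(Response.Status.UNAUTHORIZED).build());
        }
    }
}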

Developing Microservices with the JWT Specification


The JSON web token (JWT) specification defines a secure way to transfer information between
applications over the HTTP protocol.

JWT Claims
According to the Internet Engineering Task Force (IETF), any piece of sensitive information
that you add to a JWT, such as a mailing address, credit card number, or other financial data,
is a claim. JWT supports three different types of claims:

Registered claims
Standard claims that identify technical concerns, such as when the token was created and
token time-outs

Default or public claims
Standard claims defined by the Internet Assigned Numbers Authority (IANA) defining
standard information, including personal information, such as names and phone numbers,
and technical data, such as expiration time and public keys

Custom or private claims
Custom or private claims are used by a developer to store application-based information,
such as credit cards or Kerberos tokens.

Claims associated with a JWT are stored in data structures similar to a map or a dictionary, using
key value pairs. This structure is converted into a serialized format when it is transferred during
the request.

Registered claims include important claims such as those that control when the token is
activated or invalidated, also known as time-to-live claims. These claims include:

Expiration time (exp)
Defines the time stamp when the JWT expires

Not before (nbf)
Defines a starting time stamp when the JWT may be used

Issued at (iat)
Provides the time stamp of when the JWT was created

These claims are very important to maintaining the integrity of stored data. They restrict the
amount of time an authenticated user is able to access data without reauthenticating.

Warning
Even though these claims are not mandatory, set them when the JWT is first created so
that tokens cannot remain valid after they become stale or outdated.
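
As a minimal sketch, these time-to-live claims can be populated with the builder API of the
Nimbus JOSE JWT library used later in this chapter; the one-hour lifetime and the subject value
are arbitrary example values.

import java.util.Date;

import com.nimbusds.jwt.JWTClaimsSet;

public class TimeToLiveClaims {
    public static JWTClaimsSet build() {
        Date now = new Date();
        Date expiration = new Date(now.getTime() + 3600 * 1000L); // one hour from now

        return new JWTClaimsSet.Builder()
                .issueTime(now)             // iat: when the token was created
                .notBeforeTime(now)         // nbf: the token is not valid before this time
                .expirationTime(expiration) // exp: when the token expires
                .subject("24400320")        // sub: the subject the claims refer to
                .build();
    }
}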


JWT Content Integrity


To avoid any data manipulation and to ensure the integrity of the message from the sender to
the final destination, the JWT specification requires that JWT data must be either signed or
encrypted.

• Signed: Uses a private key to guarantee that the content comes from a reliable source. The
signature should be compliant with the JSON web signature (JWS) specification.

• Encrypted: Uses a private key to encrypt the content following the JSON web encryption (JWE)
specification.

JWT Structure
The resulting JWT content is organized using the following format:

xxxxxxxx.yyyyyyyyy.zzzzzzzzz

All blocks are base64-encoded so that they can be transmitted as plain text. Note that base64
encoding is not encryption: anyone can decode the content, which is why the token must be
signed or encrypted.

First Block xxxxxxxx
Represents the JWT header that contains information used to process the second block,
such as the hashing algorithm and the type of the token, that is JWT.

Second Block yyyyyyyyy
Represents the JWT payload containing all claims added to the JWT. If the message is
encrypted, the content is encrypted and then encoded with base64 encoding.

Third Block zzzzzzzzz
Represents the signature for the header and payload, guaranteeing that nothing was
changed during the transmission.

In the following example, you have a JWT with each of the three blocks separated by a dot.

eyJhbGciOiJSUzI1NiJ9.

eyJzdWIiOiIyNDQwMDMyMCIsImF1ZCI...output omitted...Q1MDc3NSwianRpIjoiYS0xMjMifQ.

f7nJ_3Sdk1tibBRBnmII-NeNnpf1N-v...output omitted...rurw7NQf1yMIhQICq1SpJ0lU9SQ

The JWT header, containing the hashing algorithm and the type of token encoded in base64.
The payload from JWT in a base64-encoded format.
The signature of the header and the payload encoded in base64.
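
Because the first two blocks are only base64-encoded, not encrypted (unless JWE is used), they
can be decoded and inspected directly. The following sketch is illustrative only; it splits a
token string and prints the JSON content of its header and payload.

import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class DecodeJwtBlocks {
    public static void main(String[] args) {
        String jwt = args[0]; // a token string such as the example above

        // Split the token into its three dot-separated blocks
        String[] blocks = jwt.split("\\.");

        // The header and payload are base64url-encoded JSON; the third block is the signature
        Base64.Decoder decoder = Base64.getUrlDecoder();
        String header = new String(decoder.decode(blocks[0]), StandardCharsets.UTF_8);
        String payload = new String(decoder.decode(blocks[1]), StandardCharsets.UTF_8);

        System.out.println("Header : " + header);
        System.out.println("Payload: " + payload);
    }
}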

Transmitting JWT in a REST Endpoint


REST endpoints that need to send sensitive information must first request a token from the
JWT token provider. In the following diagram, Microservice A authenticates with the JWT
microservice provider. After validating the authentication, the JWT microservice provider
returns a JWT string that Microservice A can use for authentication with Microservice
B. Microservice A sends the JWT value using the Authorization HTTP header field. In
order to be accepted by Microservice B, the Authorization header field must contain the
Bearer prefix followed by the JWT string.


Figure 9.1: JSON web token workflow
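
A minimal JAX-RS client sketch of how Microservice A could attach the token when calling
Microservice B is shown below; the target URL is a placeholder and error handling is omitted.

import javax.ws.rs.client.Client;
import javax.ws.rs.client.ClientBuilder;
import javax.ws.rs.core.HttpHeaders;
import javax.ws.rs.core.MediaType;
import javax.ws.rs.core.Response;

public class CallSecuredService {
    public static void callMicroserviceB(String jwt) {
        Client client = ClientBuilder.newClient();
        try {
            Response response = client
                .target("http://microservice-b.example.com/api/resource") // placeholder URL
                .request(MediaType.APPLICATION_JSON)
                // The Authorization header must use the Bearer prefix followed by the JWT string
                .header(HttpHeaders.AUTHORIZATION, "Bearer " + jwt)
                .get();

            System.out.println("Status: " + response.getStatus());
            System.out.println("Body: " + response.readEntity(String.class));
        } finally {
            client.close();
        }
    }
}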

Creating JWT with Java


To align with a microservice architecture, where each service serves a single function, you can
create a single microservice that provides JWT to all other microservices that need to leverage
the token. This microservice is known as the JWT provider. Java libraries such as
Auth0, Jose4J, and Nimbus JOSE JWT are available to create JWTs. This course uses the Nimbus JOSE JWT
implementation. After implementing the JWT generator, the resulting string is used to access a
secured microservice, which is discussed in a later section. The following example creates a JWT
using this library:

JSONObject claims = ...;

claims.put(Claims.iat.name(), System.currentTimeMillis() / 1000); // iat is expressed in seconds
...

JWSSigner signer = new RSASSASigner(pk);

JWTClaimsSet claimsSet = JWTClaimsSet.parse(claims);

JWSHeader jwsHeader = new JWSHeader(JWSAlgorithm.RS256);

SignedJWT signedJWT = new SignedJWT(jwsHeader, claimsSet);

signedJWT.sign(signer);

String jwt = signedJWT.serialize();

Create claims as a JSON object, and define the registered and default claims using the
Claims enum values.
Instantiate the object that signs the payload. You must provide a private key, created with
the ssh-keygen command, to instantiate a JWSSigner object to sign the claims.
Parse the claims into a JWTClaimsSet object.
Instantiate the JWSHeader object with the appropriate algorithm.
Combine the header and the claims into a SignedJWT object.
Sign the claims and the header, creating base64-encoded content that follows the JWT structure.
Create the String that represents the JWT structure.


Demonstration: Adding Custom Claims to a JSON Web Token
1. Log in to the workstation VM as student using student as the password.

2. Check out the demo-security-jwt branch with Git by running the following commands:

[student@workstation ~]$ cd microprofile-conference
[student@workstation microprofile-conference]$ git checkout demo-security-jwt

3. Inspect the createTokenForCredentials method from the microprofile-
conference/microservice-authz/src/main/io/microprofile/showcase/
tokens/AuthzResource.java file. The method is responsible for exposing a REST
endpoint that provides the JWT token string used by the microprofile-conference
application.

String username = credentials.getUsername();

String simpleName = simpleName(username);

String password = credentials.getPassword();

String expectedPassword = simpleName+"-secret";


if(!password.equals(expectedPassword)) {
System.err.printf("password(%s) != %s\n", password, expectedPassword);
return Response.status(403).build();
}
HashMap<String, Object> claims = new HashMap<>();

claims.put(Claims.upn.name(), username);
claims.put(Claims.preferred_username.name(), simpleName);

claims.put("cpny", "Red Hat");

String jsonResName = String.format("/%s.json", simpleName);


String stoken = TokenUtils.generateTokenString(jsonResName, claims, timeToLive);
System.out.printf("Created token: %s\n", stoken);
AuthToken token = new AuthToken(credentials.getUsername(), stoken);

Read the user name from the Credentials object.
Use the string before the @ character from the email as the user name.
Read the password from the Credentials object.
Define the password as the concatenation of the user name and -secret string.
Populate the standard claims from JWT, such as preferred_username and upn.
Populate a custom claim named cpny representing the company name responsible for
generating the token.
Define the JSON template file name that must be used as a baseline.
In order to minimize JSON creation coding, the method reads existing JSON files to build
the JWT token. The token creation process is delegated to the generateTokenString
method from the TokenUtils class.

4. Inspect how the generateTokenString method from the TokenUtils class creates or
updates the claims from the existing JWT token.

The method creates all the claims needed by the application.


JSONParser parser = new JSONParser(DEFAULT_PERMISSIVE_MODE);

JSONObject jwtContent = (JSONObject) parser.parse(content);

long currentTimeInSecs = currentTimeInSecs();

long exp = currentTimeInSecs + timeToLive;

if (claims != null && claims.containsKey(Claims.exp.name())) {
Number inputExp = (Number) claims.get(Claims.exp.name());
exp = inputExp.longValue();
}

jwtContent.put(Claims.iat.name(), currentTimeInSecs);

jwtContent.put(Claims.auth_time.name(), currentTimeInSecs);

jwtContent.put(Claims.exp.name(), exp);

for(String key : claims.keySet()) {
switch (key) {
case "exp":
case "iat":
case "auth_time":
break;
default:
jwtContent.put(key, claims.get(key));
break;
}
}

Change the parser to support a permissive JSON format.
Transform the file into a JSONObject instance.
Capture the current time stamp.
Update the expiration time using the timeToLive variable.
If the Claim was created previously, reuse the existing expiration time.
Update with the last time stamp when the existing Claim object was accessed.
Update with the time stamp when the user was authenticated with the existing Claim
object.
Update with the expiration time stamp in the Claim object.
Copy all fields obtained by the REST endpoint, such as user name and password, to the
Claim object.

5. Inspect how the generateTokenString method signs the JSON web token.

PrivateKey pk = readPrivateKey("/privateKey.pem");

JWSSigner signer = new RSASSASigner(pk);

JWTClaimsSet claimsSet = JWTClaimsSet.parse(jwtContent);

JWSHeader jwsHeader = new JWSHeader(JWSAlgorithm.RS256);

SignedJWT signedJWT = new SignedJWT(jwsHeader, claimsSet);

signedJWT.sign(signer);

String jwt = signedJWT.serialize();

Read the private key used to sign the JWT token.
Create an RSA signer.


Parse the claim that was populated in the previous step.
Build the header, specifying the RSA signing algorithm, following JWT standards.
Instantiate the JWT with the header and JWT claims, creating the JWT without signing it.
Sign the JWT using the private key signer.
Generate text to be used as the JWT.
This concludes the demonstration.

References
Registered claims defined by IETF
https://www.iana.org/assignments/jwt/jwt.xhtml

Public claims defined by IANA
https://www.iana.org/assignments/jwt/jwt.xhtml

Nimbus JOSE JWT implementation
https://bitbucket.org/connect2id/nimbus-jose-jwt/wiki/Home

Auth0 JWT implementation
https://github.com/auth0/java-jwt

JWT.io
https://jwt.io


Guided Exercise: Implementing a JSON Web Token Generator

In this exercise, you will implement a custom claim embedded in a JSON web token.

Outcomes
You should be able to add new custom claims to a JSON web token generated by an external
application.

Before you begin


If you have not already done so, execute the git clone command to clone the microprofile-conference
repository onto the workstation machine.

[student@workstation ~]$ git clone http://services.lab.example.com/microprofile-conference
Cloning into 'microprofile-conference'...
...output omitted...
Resolving deltas: 100% (2803/2803), done.

Run the lab setup to begin the exercise.

[student@workstation ~]$ lab security-jwt setup

Steps
1. Switch the repository to the lab-security-jwt branch to get the correct version of the
application code for this exercise.

1.1. Switch to the branch using the git checkout command.

[student@workstation ~]$ cd microprofile-conference
[student@workstation microprofile-conference]$ git checkout lab-security-jwt
Switched to a new branch 'lab-security-jwt'

1.2. Use the git status command to ensure that you are on the correct branch.

[student@workstation microprofile-conference]$ git status
# On branch lab-security-jwt
nothing to commit, working directory clean

2. Inspect the REST endpoint responsible for providing a JSON web token (JWT) to the
microservices.

2.1. Open the AuthzResource class by expanding the microservice-authz item in the
Project Explorer tab in the left pane of JBoss Developer Studio, then click microservice-
authz > Java Resources > src/main/java > io.microprofile.showcase.tokens to expand it.
Double-click the AuthzResource.java file.

2.2. Inspect the REST endpoint that captures the user name and the password from the
request. The createTokenForCredentials method accesses the user name and
password using a Credentials object processed by the request.


public Response createTokenForCredentials(Credentials credentials) throws Exception {
String username = credentials.getUsername();
String simpleName = simpleName(username);
String password = credentials.getPassword();

2.3. Inspect the REST endpoint that adds the upn and the preferred_username default
claims into a HashMap instance, which is used later to generate a JWT string. The
HashMap object is passed as a parameter to the TokenUtils utility class, which builds
the token string.

HashMap<String, Object> claims = new HashMap<>();
claims.put(Claims.upn.name(), username);
claims.put(Claims.preferred_username.name(), simpleName);

3. Add a custom claim to the JWT string used by the application.

3.1. Open the TokenUtils class by expanding the microservice-authz item in the Project
Explorer tab in the left pane of JBoss Developer Studio, then click microservice-
authz > Java Resources > src/main/java > io.microprofile.showcase.tokens to expand it.
Double-click the TokenUtils.java file.

3.2. In the generateTokenString method, add a new claim named dvlpr_nm to the
jwtContent object. Use your name as the claim value:

if (claims != null && claims.containsKey(Claims.exp.name())) {


Number inputExp = (Number) claims.get(Claims.exp.name());
exp = inputExp.longValue();
}
//TODO Add the new claim here
jwtContent.put("dvlpr_nm", "YourName");
jwtContent.put(Claims.iat.name(), currentTimeInSecs);
jwtContent.put(Claims.auth_time.name(), currentTimeInSecs);

3.3. Save your changes to the file using Ctrl+S.

4. Test the token generator REST endpoint.

4.1. Build and run the WildFly Swarm application using the Maven plug-in.

In your terminal window, navigate to the microservice-authz directory and run mvn
clean wildfly-swarm:run to start the server.

[student@workstation microprofile-conference]$ cd microservice-authz


[student@workstation microservice-authz]$ mvn clean wildfly-swarm:run \
-DskipTests

Note
To minimize the amount of time needed to start WildFly Swarm, use the -
DskipTests flag to bypass unit tests.

4.2. Generate the JWT string by querying the service from a client using the RESTClient
Firefox plug-in.

Start Firefox on the workstation VM and click the RESTClient plug-in in the browser's
toolbar.

Figure 9.2: The Firefox RESTClient plug-in

4.3. Select POST as the Method. In the URL form, enter http://localhost:8080/authz.
Add the following string to the Body text area:

{ "username": "alumni", "password": "alumni-secret" }

Note
The content may be copied and pasted from the /home/student/labs/
security-jwt/auth.txt file.

Select Headers > Custom Headers from the top menu and use the following values in
the Request Headers dialog box.

• Name: Content-Type

• Attribute Value: application/json

Click Okay.

4.4. Click Send.

4.5. Verify in the Headers tab that the Status Code is 200 OK.

4.6. Verify in the Response tab that the response is similar to the following output.

{"username":"alumni",
"id_token":"eyJhbGciOiJSUzI1NiJ9....output omitted...gVZX4JGQ"}

4.7. Copy the id_token content for later use.

Select the content of the id_token in the RESTClient Firefox plug-in and press
Ctrl+C to copy the token to the clipboard.


5. Verify that the generated JWT string contains the custom claim.

5.1. The io.microprofile.showcase.authz.Translate class provides a way to get
the claims from a signed JWT string from the command line.

Execute the Translate class by expanding the microservice-authz item in the Project
Explorer tab in the left pane of JBoss Developer Studio, then click microservice-authz
> Java Resources > src/main/java > io.microprofile.showcase.authz to expand it. Right-
click the Translate.java file, select Run As > Java Application.

5.2. In the Console tab, the Enter token: prompt is displayed. Click your mouse just after
the prompt and press Ctrl+V to paste the id_token. Press Enter to process the JWT
content.

5.3. In the Console tab, verify that the response is similar to the following output:

Claim name:[sub] / Claim value:[24400320]


Claim name:[aud] / Claim value:[[vote, sessions, schedule]]
Claim name:[upn] / Claim value:[alumni]
Claim name:[auth_time] / Claim value:[1522872208]
Claim name:[iss] / Claim value:[https://mpconference.com]
Claim name:[groups] / Claim value:[["Alumni","Voter","Registered"]]
Claim name:[preferred_username] / Claim value:[alumni]
Claim name:[exp] / Claim value:[Wed Apr 04 17:08:28 BRT 2018]
Claim name:[iat] / Claim value:[Wed Apr 04 17:03:28 BRT 2018]
Claim name:[dvlpr_nm] / Claim value:[YourName]
Claim name:[jti] / Claim value:[a-123]

Your name must appear as one of the claims added to the token.

6. Return to the terminal window where WildFly Swarm is running, and stop the service using
Ctrl+C.

7. Clean up, commit your changes to your local Git repository in the lab branch, and return to
the master branch.

7.1. Use the git add command to stage the uncommitted changes.

[student@workstation microservice-authz]$ git add .

7.2. Use the git commit command to commit your changes to the local branch.

[student@workstation microservice-authz]$ git commit -m"completing lab security-jwt"
[lab-security-jwt 7210573] completing lab security-jwt

7.3. Switch the working copy back to the master branch to finish cleaning up.

[student@workstation microservice-authz]$ git checkout master


Switched to branch 'master'

This concludes the guided exercise.


Securing a Microservice Endpoint

Objectives
After completing this section, students should be able to secure a microservice endpoint using
JWT authentication and authorization.

Using JSON Web Tokens (JWT) for Authentication


A JSON web token (JWT) generated by a credential provider microservice is the gateway
to access a MicroProfile JWT-secured application. To authenticate with the JWT-secured
application, the HTTP request must include the Authorization header. The header value
must start with the prefix Bearer, followed by a space, and then the JWT string generated
by the credential provider microservice.

Configuring the MicroProfile JWT Fraction


The MicroProfile JWT fraction provides the dependencies necessary to generate a JWT
without requiring an external library, such as Auth0, Jose4J, or Nimbus JOSE JWT. In addition
to providing dependencies, including this fraction in your JWT provider simplifies configuration
and allows you to customize aspects of the JWT, such as:

Expiration Grace Period
The amount of time, in seconds, that an expired JWT is still accepted (the grace period)

Issuer information
The URI of the JWT issuer

Public Key Information
The public key of the JWT signer

To configure these aspects, you must first include the MicroProfile JWT fraction in the
microservice provider's pom.xml file and update the project-defaults.yml file to include
the following:

swarm:
  bind:
    address: localhost
  microprofile:
    jwtauth:
      token:
        issuedBy: "https://server.example.com"
        expGracePeriod: 3600
        signerPubKey: xsaswq123q1dsws214

issuedBy customizes the issuer for each request.
expGracePeriod customizes the amount of time for the endpoint grace period.
signerPubKey customizes the public key used by the endpoint.

Configuring WildFly Swarm for JWT Login Module


Any microservice authenticating with JWT and developed to work with WildFly Swarm requires
the following contents in the project-defaults.yml file in the src/main/resources
directory:


swarm:
  bind:
    address: localhost
  security:
    security-domains:
      hello-domain:
        jaspi-authentication:
          login-module-stacks:
            roles-lm-stack:
              login-modules:
                - login-module: rm
                  code: org.wildfly.swarm.microprofile.jwtauth.deployment.auth.jaas.JWTLoginModule
                  flag: required
                  module-options:
                    rolesProperties: jwt-roles.properties
                    logExceptions: true
          auth-modules:
            http:
              code: org.wildfly.extension.undertow.security.jaspi.modules.HTTPSchemeServerAuthModule
              module: org.wildfly.extension.undertow
              flag: required
              login-module-stack-ref: roles-lm-stack

Configures WildFly Swarm to use the JWTLoginModule class for authentication purposes
Locates the properties file with the roles and their respective users that can authenticate to
the application
Customizes Undertow to support JWT
Configures the Undertow module to accept authentication using headers
The JWTLoginModule class is responsible for converting groups defined in JWT into Java
authentication and authorization service (JAAS) roles.

Capturing User Information With JAAS


The WildFly Swarm login module translates all information from JWT into JAAS-
compliant annotations and classes. This enables you to use JAAS annotations such as
@javax.annotation.security.DeclareRoles to declare which JWT roles are allowed to
access the application.

@DeclareRoles({"VIP", "Voter", "Alumni"})


public class MyRoles {
...

To allow specific roles to invoke a method, use the @RolesAllowed annotation:

@RolesAllowed("Alumni")
public String fooBar() {
...
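
The two annotations are typically combined on a single JAX-RS resource, as in the following sketch. The resource path, class name, and method are hypothetical; the roles are assumed to arrive as groups claims in the caller's JWT.

import javax.annotation.security.DeclareRoles;
import javax.annotation.security.RolesAllowed;
import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.Produces;

@DeclareRoles({"VIP", "Voter", "Alumni"})
@Path("/report")
public class ReportResource {

    // Only callers whose JWT groups map to the Alumni role can invoke this method;
    // other authenticated callers receive a 403 Forbidden response.
    @GET
    @Produces("text/plain")
    @RolesAllowed("Alumni")
    public String alumniReport() {
        return "alumni-only data";
    }
}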

Securing Programmatically with JAAS


In microservices that require low-level security management, such as time-based restrictions,
use JAAS to create specific rules. For example, you can grant a user access only during certain
hours of each day.


Inject the javax.ws.rs.core.SecurityContext object into a class to get user information,
such as the user name. In the following example, the customer user is granted access only after
1:00 PM.

@Inject
private SecurityContext context;

public String fooBar() {


...
String name=context.getUserPrincipal().getName();
if("customer".equals(name) && new Date().getHours()>13){
...
}
...

To capture JWT-specific information, such as the claims, you must cast the UserPrincipal
object into an org.eclipse.microprofile.jwt.JsonWebToken object, as follows:

JsonWebToken token = (JsonWebToken) securityContext.getUserPrincipal();

This can be useful when programmatically extracting claim data, such as token expiry or some
custom attribute.
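
The following sketch shows how a JAX-RS resource might read both standard and custom claims. It is illustrative only: the resource path is hypothetical, the injection of SecurityContext with @Context assumes the code runs inside a JAX-RS resource, and it reuses the dvlpr_nm custom claim from the earlier guided exercise.

import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.core.Context;
import javax.ws.rs.core.SecurityContext;

import org.eclipse.microprofile.jwt.JsonWebToken;

@Path("/whoami")
public class ClaimInspector {

    @Context
    private SecurityContext securityContext;

    @GET
    public String describeCaller() {
        // Cast the principal to access JWT-specific data
        JsonWebToken token = (JsonWebToken) securityContext.getUserPrincipal();

        long expiresAt = token.getExpirationTime();        // exp claim, in seconds since the epoch
        Object developer = token.getClaim("dvlpr_nm");     // custom claim added by the token generator

        return token.getName() + " expires at " + expiresAt + ", dvlpr_nm=" + developer;
    }
}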

References
WildFly Swarm JWT RBAC Auth MicroProfile configuration
http://docs.wildfly-swarm.io/2018.3.3/#_microprofile_jwt_rbac_auth_fraction

MicroProfile JWT Auth GitHub web site
https://github.com/eclipse/microprofile-jwt-auth

MicroProfile JWT Auth Specification
https://github.com/eclipse/microprofile-jwt-auth/files/1305001/microprofile-jwt-auth-spec-1.0.pdf

JAAS Reference Guide
https://docs.oracle.com/javase/8/docs/technotes/guides/security/jaas/JAASRefGuide.html


Guided Exercise: Securing a Microservice Endpoint

In this exercise, you will configure a microservice to accept JSON web tokens (JWT) for
authentication purposes.

Outcomes
You should be able to configure the hola microservice to accept JWT and allow users from a
specific role to access a REST endpoint from the microservice.

Before you begin


If you have not already, clone the hello-microservices repository to the workstation VM.

[student@workstation ~]$ git clone http://services.lab.example.com/hello-microservices


Cloning into 'hello-microservices'...
...output omitted...
Resolving deltas: 100% (2803/2803), done.

Use the lab setup command to ensure the environment is sound to begin the exercise.

[student@workstation ~]$ lab security-integration setup

Steps
1. Switch the repository to the lab-security-integration branch to get the correct
version of the application code for this exercise.

1.1. Use the git checkout command to check out the required branch.

[student@workstation ~]$ cd hello-microservices


[student@workstation hello-microservices]$ git checkout \
lab-security-integration
...output omitted...
Switched to a new branch 'lab-security-integration'

1.2. Use the git status command to ensure that you are on the correct branch.

[student@workstation hello-microservices]$ git status


# On branch lab-security-integration
nothing to commit, working directory clean

2. Configure the hola microservice to use a JWT to authenticate users.

2.1. In JBoss Developer Studio, open the project-defaults.yml file by expanding the
hola item in the Project Explorer tab in the left pane of JBoss Developer Studio.

Click hola > Java Resources > src/main/resources to expand it. Double-click the
project-defaults.yml file.

2.2. Update the file to support authentication using a JWT.

In the login-modules section, update the code attribute with
org.wildfly.swarm.microprofile.jwtauth.deployment.auth.jaas.JWTLoginModule.

login-modules:
  - login-module: rm
    # TODO use the org.wildfly.swarm.microprofile.jwtauth.deployment.auth.jaas.JWTLoginModule login module
    code: org.wildfly.swarm.microprofile.jwtauth.deployment.auth.jaas.JWTLoginModule
...

The login module class name should be defined in a single line.


Press Ctrl+S to save your changes.

3. Configure the microservice REST endpoint to allow access to users with the Alumni role.

3.1. Update the secureHola method of the HolaResource class to allow access only to
users that are part of the Alumni role.

In JBoss Developer Studio, open the HolaResource class by expanding the hola item
in the Project Explorer tab in the left pane.

Click hola > Java Resources > src/main/java > com.redhat.training.msa.hola.rest to
expand it. Double-click the HolaResource.java file.

3.2. Add the @RolesAllowed method-level annotation to the secureHola method to allow
access only to the users that are part of the Alumni role.

@GET
@Path("/hola-secure")
@Produces("application/json")
//TODO Allow only roles with Alumni to access this method
@RolesAllowed("Alumni")
public SecurePackage secureHola() {
...

4. Retrieve the JWT from the request and use this information to create the response to the
client.

4.1. Inject a SecurityContext instance into the HolaResource class to get the
authentication information for the user.

In the HolaResource class, add a SecurityContext attribute and the @Context
annotation.

@Inject
@WithoutTracing
private AlohaService alohaService;

//TODO Inject the securityContext
@Context
private SecurityContext securityContext;

4.2. Implement the secureHola method to capture the user name and expiration time
stamp from the securityContext attribute in the username and expirationTime
variables, respectively.

In the HolaResource class, add the following code:

public SecurePackage secureHola() {


//TODO GET JWT
JsonWebToken token = (JsonWebToken) securityContext.getUserPrincipal();
//TODO GET the user name from the JWT
String username = token.getName();
//TODO GET the expiration time from the JWT
long expirationTime = token.getExpirationTime();
return new SecurePackage(username, new Date(expirationTime * 1000).toString(),
true);
}

5. Start the hola microservice.

In a new terminal window, run the following commands:

[student@workstation ~]$ cd hello-microservices/hola


[student@workstation hola]$ ./run.sh
...output omitted...
2018-03-09 17:03:23,329 INFO [org.wildfly.swarm] (main) WFSWARM99999: WildFly Swarm
is Ready

6. Validate that the hola microservice only accepts users that are authenticated with a JWT.

6.1. Start Firefox on the workstation VM and click the RESTClient plug-in in the browser's
toolbar.

6.2. Select GET as the Method. In the URL form, enter http://localhost:8080/api/
hola-secure and click Send.

6.3. Verify in the Headers tab that the Status Code is 401 Unauthorized.

7. Start the authz microservice. The microservice generates JWT authentication tokens usable
with any application in the microprofile-conference application.

In a new terminal window, run the following commands:

[student@workstation ~]$ cd hello-microservices/authz


[student@workstation authz]$ ./run-swarm.sh
...output omitted...
2018-03-09 17:03:23,329 INFO [org.wildfly.swarm] (main) WFSWARM99999: WildFly Swarm
is Ready

8. Validate that the hola microservice accepts the JWT generated by the authz microservice.

The authz microservice is a clone of the microservice-authz microservice from the previous
guided exercise. It accepts credentials where alumni is the user name and alumni-secret
is the password. This user is part of the Alumni role.

8.1. Open a new tab in Firefox and click the RESTClient plug-in in the browser's toolbar.

Note
This step is important because you are going to copy the generated token and
use it with the already opened Firefox tab.

8.2. Select POST as the Method. In the URL form, enter http://localhost:5055/
authz.

8.3. Click Headers > Custom Header from the top menu. Fill out the form with the following
values:

• Name: Content-Type

• Attribute Value: application/json

Click Okay.

8.4. In the body form, add:

{
"username": "alumni",
"password": "alumni-secret"
}

Note
Content may be copied from the /home/student/JB283/labs/security-
integration/alumni.json file.

Click Send.

8.5. In the Headers tab, verify that the Status Code is 200 OK.

8.6. In the Response tab, verify that the response matches the following:

{"username":"alumni","id_token":"eyJhbGciOiJSUzI...84K5U85NAb2zpm3lQ"}

Copy the id_token attribute content to use as part of the authentication process.

9. Validate that the hola microservice accepts the alumni user's JWT.

Warning
The JWT may become stale due to timeout policies. If that happens during the
guided exercise, request a new token from the authz microservice.


9.1. In the Firefox tab that you used to invoke the hola-secure endpoint, select Headers
> Custom Header. Complete the form with the following values, updating the text after
the Bearer string with the id_token captured previously.

• Name: Authorization

• Attribute Value: Bearer eyJhbGciOiJSUzI...84K5U85NAb2zpm3lQ

Click Okay.

Note
You must leave a space between the Bearer string and the id_token value.

9.2. Click Send.

9.3. In the Headers tab, verify that the Status Code is 200 OK.

9.4. In the Response tab, verify that the response matches the following:

{"username":"alumni","expires":"Fri Apr 06 13:26:10 EDT 2018","isVIP":true}

10. Validate that the hola microservice does not accept users that are not part of the Alumni
role.

The unregistered user is not part of the Alumni role and its access is blocked.

10.1. On the workstation VM, use the Firefox tab accessing the authz microservice REST
endpoint.

10.2. Add the following values to the Body form:

{
"username": "unregistered",
"password": "unregistered-secret"
}

Note
This content may be copied from the /home/student/JB283/labs/
security-integration/unregistered.json file.

Click Send.

10.3. In the Headers tab, verify that the Status Code is 200 OK.

10.4. In the Response tab, verify that the response matches the following:

{"username":"unregistered","id_token":"eyJhbGciOiJSUzI...output omitted..."}

Copy the id_token attribute content to use as part of the authentication process.

11. Validate that the hola microservice does not accept the unregistered user JWT.

Warning
The JWT may become stale due to the timeout policies. If that happens during the
guided exercise, request a new token from the authz microservice.

11.1. In the Firefox tab containing the hola-secure endpoint, click the Authorization value
from the Headers panel. Update the form with the following values, changing the text
after Bearer with the id_token captured previously.

• Name: Authorization

• Attribute Value: Bearer eyJhbGciOiJSUzI...84K5U85NAb2zpm3lQ

Click Okay.

Note
You must leave a space between the Bearer and the id_token value.

11.2. Click Send.

11.3. Verify in the Headers tab that the Status Code is 403 Forbidden.

11.4. Verify in the Response tab that the response matches the following:

<html><head><title>Error</title></head><body>Forbidden</body></html>

12. Clean up and commit your changes to your local Git repository in the lab branch and return
to the master branch.

12.1. Stop the authz microservice. In the terminal window running the authz microservice,
press Ctrl+C.

12.2. Stop the hola microservice. In the terminal window running the hola microservice, press
Ctrl+C.

12.3. Use the git add command to stage any uncommitted changes.

[student@workstation hello-microservices]$ git add .

12.4. Use the git commit command to commit your changes to the local branch.

[student@workstation hello-microservices]$ git commit \
-m"completing lab security-integration"
[lab-security-integration 7a5f023] completing lab security-integration


1 file changed, 23 insertions(+), 8 deletions(-)

12.5. Switch the working copy back to the master branch to finish cleaning up.

[student@workstation hello-microservices]$ git checkout master


Switched to branch 'master'

This concludes the guided exercise.


Lab: Securing Microservices with JWT

In this lab, you will configure the MicroProfile conference application to authenticate with JWT,
develop application code using JAAS, and deploy the application on an OpenShift cluster.

Outcomes
You should be able to configure JWTLoginModule in the microservice-session microservice,
develop business logic code with JAAS API, and deploy the microservice-authz, the
microservice-session, and the web-application microservices on an OpenShift cluster.

Before you begin


If you have not already, execute the git clone command to clone the microprofile-conference
repository onto the workstation VM.

[student@workstation ~]$ git clone \


http://services.lab.example.com/microprofile-conference
Cloning into 'microprofile-conference'...
...output omitted...
Resolving deltas: 100% (2803/2803), done.

Use the lab setup command to ensure that the environment is sound so you can begin the
exercise.

[student@workstation ~]$ lab security-review setup

Steps
1. To begin the exercise, switch the repository to the lab-security-review branch to get
the application code.

1.1. Use the git checkout command to check out the lab-security-review branch.

[student@workstation ~]$ cd microprofile-conference


[student@workstation microprofile-conference]$ git checkout \
lab-security-review
...output omitted...
Switched to a new branch 'lab-security-review'

1.2. Use the git status command to ensure that you are on the correct branch.

[student@workstation microprofile-conference]$ git status


# On branch lab-security-review
nothing to commit, working directory clean

2. If you are not already authenticated, log in to the OpenShift cluster as developer using
redhat as the password. Create the security-review project in the OpenShift cluster.

2.1. Log in to the OpenShift cluster from the command line.

From the existing terminal window, run the following command:

[student@workstation microprofile-conference]$ oc login \


-u developer -p redhat https://master.lab.example.com


Login Successful
...output omitted...

2.2. Create the security-review project in the OpenShift cluster.

Run the following command:

[student@workstation microprofile-conference]$ oc new-project \


security-review
Now using project "security-review" on server "https://
master.lab.example.com:443"

3. Deploy the microservice-authz microservice with the fabric8 Maven plug-in.

You can use the -DskipTests option to skip the tests for a faster build time.

Note
The following error may occur during execution. You may disregard it.

[ERROR] Exception in reconnect


java.util.concurrent.RejectedExecutionException:
Task java.util.concurrent.ScheduledThreadPoolExecutor
$ScheduledFutureTask@3cce6de9
rejected from
java.util.concurrent.ScheduledThreadPoolExecutor@79ccf414[Shutting down,
pool size = 1, active threads = 1,
queued tasks = 0, completed tasks = 10]

4. Get the route URL to access the microservice-authz microservice endpoints.

Run the following command in the existing terminal window:

[student@workstation microservice-authz]$ oc status

Route information is displayed on the workstation VM. It should look similar to the
following:

In project security-review on server https://master.lab.example.com:443


...
http://microservice-authz-security-review.apps.lab.example.com (svc/microservice-
authz)

5. Test the microservice-authz microservice endpoint responsible for generating the JWT
using the RESTClient Firefox plug-in. Use the following JSON content to authenticate with a
valid user.

{ "username": "alumni", "password": "alumni-secret" }

6. Deploy the microservice-gateway microservice with the fabric8 Maven plug-in.

The API gateway is a proxy that invokes the microservice-authz microservice.

You can use the -DskipTests option to skip the tests for a faster build time.

Note
The following error may occur during execution. You may disregard it.

[ERROR] Exception in reconnect


java.util.concurrent.RejectedExecutionException:
Task java.util.concurrent.ScheduledThreadPoolExecutor
$ScheduledFutureTask@3cce6de9
rejected from
java.util.concurrent.ScheduledThreadPoolExecutor@79ccf414[Shutting down,
pool size = 1, active threads = 1,
queued tasks = 0, completed tasks = 10]

7. Get the route URL to access the microservice-gateway microservice endpoints.

Run the following command in the existing terminal window:

[student@workstation microservice-gateway]$ oc status

Route information is displayed on the workstation VM. It should look similar to the
following:

In project security-review on server https://master.lab.example.com:443


...
http://microservice-gateway-security-review.apps.lab.example.com (svc/microservice-
gateway)

8. Test the microservice-gateway microservice proxy endpoint responsible for generating


the JWT using the RESTClient Firefox plug-in. The proxy endpoint is located at http://
microservice-gateway-security-review.apps.lab.example.com/gateway/
authz. Use the following JSON content to authenticate as a valid user:

{ "username": "alumni", "password": "alumni-secret" }

9. Change the allSessions method of the SessionResource class in the
microservice-session microservice so that it returns a Collection with all sessions for
authenticated users.

Some sessions require very important person (VIP) passes. You must remove these sessions
from the result for non-VIP users; the Java stream is provided in the existing source code. List
all sessions for individuals with VIP passes. A user is a VIP whenever the user is part of the VIP
JAAS role.

10. Configure the microservice-session microservice to use JWTLoginModule to authenticate
users.


11. Deploy the microservice-session microservice with the fabric8 Maven plug-in. You can use
the -DskipTests option to skip the tests for a faster build time.

Note
The following error may occur during execution. You may disregard it.

[ERROR] Exception in reconnect


java.util.concurrent.RejectedExecutionException:
Task java.util.concurrent.ScheduledThreadPoolExecutor
$ScheduledFutureTask@3cce6de9
rejected from
java.util.concurrent.ScheduledThreadPoolExecutor@79ccf414[Shutting down,
pool size = 1, active threads = 1,
queued tasks = 0, completed tasks = 10]

12. Get the route URL to access the microservice-session microservice endpoints.

Run the following command in the existing terminal window:

[student@workstation microservice-gateway]$ oc status

Route information is displayed on the workstation VM. It looks similar to the following:

In project security-review on server https://master.lab.example.com:443


...
http://microservice-session-security-review.apps.lab.example.com (svc/microservice-
session)

13. Test the microservice-gateway microservice proxy endpoint to access the session
endpoint using the RESTClient Firefox plug-in. The proxy endpoint is located at http://
microservice-gateway-security-review.apps.lab.example.com/gateway/
sessions and it may be accessed using the HTTP GET method.

14. Deploy the web-application microservice with the fabric8 Maven plug-in. You can use the -
DskipTests option to skip the tests for a faster build time and the -Dskip.npm to skip the
Node.js build.

Note
The nodejs portion of the application has been pre-built to accommodate the
offline classroom environment that this class uses. For this reason, you must skip
the npm portion of the web-application build.

Note
The following error may occur during the execution. You may disregard it.

[ERROR] Exception in reconnect


java.util.concurrent.RejectedExecutionException:
Task java.util.concurrent.ScheduledThreadPoolExecutor
$ScheduledFutureTask@3cce6de9
rejected from
java.util.concurrent.ScheduledThreadPoolExecutor@79ccf414[Shutting down,
pool size = 1, active threads = 1,
queued tasks = 0, completed tasks = 10]

15. Get the route URL to access the web-application microservice.

Run the following command in the existing terminal window:

[student@workstation web-application]$ oc status

Route information is displayed on the workstation VM. It should look similar to the
following:

In project security-review on server https://master.lab.example.com:443


...
http://web.apps.lab.example.com (svc/web-application)

16. Test the web-application project that uses the microservice-authz, microservice-gateway,
and microservice-session microservices. From the existing Firefox window, open a new tab
and access http://web.apps.lab.example.com URI and click Login to authenticate.
Use the following credentials:

• username: [email protected]

• password: vipuser-secret

Click Create.

17. Navigate to the sessions section and get the list of sessions.

18. Grade the lab.

[student@workstation web-application]$ lab security-review grade

All the checks should pass.

19. Delete the project from the terminal window.

20. Clean up and commit your changes to your local Git repository in the lab branch, and return
to the master branch.

20.1. Use the git add command to stage any uncommitted changes.


[student@workstation web-application]$ git add .

20.2. Use the git commit command to commit your changes to the local branch.

[student@workstation web-application]$ git commit \


-m"completing lab security-review"
[lab-test-review e59dc43] completing lab security-review
...output omitted...
3 files changed, 41 insertions(+), 18 deletions(-)

20.3. Check out the working copy to change to the master branch.

[student@workstation web-application]$ git checkout master


Switched to branch 'master'

This concludes the lab.


Solution
In this lab, you will configure the MicroProfile conference application to authenticate with JWT,
develop application code using JAAS, and deploy the application on an OpenShift cluster.

Outcomes
You should be able to configure JWTLoginModule in the microservice-session microservice,
develop business logic code with JAAS API, and deploy the microservice-authz, the
microservice-session, and the web-application microservices on an OpenShift cluster.

Before you begin


If you have not already, execute the git clone command to clone the microprofile-conference
repository onto the workstation VM.

[student@workstation ~]$ git clone \


http://services.lab.example.com/microprofile-conference
Cloning into 'microprofile-conference'...
...output omitted...
Resolving deltas: 100% (2803/2803), done.

Use the lab setup command to ensure that the environment is sound so you can begin the
exercise.

[student@workstation ~]$ lab security-review setup

Steps
1. To begin the exercise, switch the repository to the lab-security-review branch to get
the application code.

1.1. Use the git checkout command to check out the lab-security-review branch.

[student@workstation ~]$ cd microprofile-conference


[student@workstation microprofile-conference]$ git checkout \
lab-security-review
...output omitted...
Switched to a new branch 'lab-security-review'

1.2. Use the git status command to ensure that you are on the correct branch.

[student@workstation microprofile-conference]$ git status


# On branch lab-security-review
nothing to commit, working directory clean

2. If you are not already authenticated, log in to the OpenShift cluster as developer using
redhat as the password. Create the security-review project in the OpenShift cluster.

2.1. Log in to the OpenShift cluster from the command line.

From the existing terminal window, run the following command:

[student@workstation microprofile-conference]$ oc login \


-u developer -p redhat https://master.lab.example.com
Login Successful


...output omitted...

2.2. Create the security-review project in the OpenShift cluster.

Run the following command:

[student@workstation microprofile-conference]$ oc new-project \


security-review
Now using project "security-review" on server "https://
master.lab.example.com:443"

3. Deploy the microservice-authz microservice with the fabric8 Maven plug-in.

You can use the -DskipTests option to skip the tests for a faster build time.

Run mvn fabric8:deploy to deploy the application using the container image built by the
S2I build.

[student@workstation microprofile-conference]$ cd microservice-authz


[student@workstation microservice-authz]$ mvn package fabric8:deploy \
-DskipTests

Review the Maven build outputs. Note that the fabric8:deploy goal creates all the
configuration files and deploys the pod to the OpenShift cluster.

Note
The following error may occur during execution. You may disregard it.

[ERROR] Exception in reconnect


java.util.concurrent.RejectedExecutionException:
Task java.util.concurrent.ScheduledThreadPoolExecutor
$ScheduledFutureTask@3cce6de9
rejected from
java.util.concurrent.ScheduledThreadPoolExecutor@79ccf414[Shutting down,
pool size = 1, active threads = 1,
queued tasks = 0, completed tasks = 10]

4. Get the route URL to access the microservice-authz microservice endpoints.

Run the following command in the existing terminal window:

[student@workstation microservice-authz]$ oc status

Route information is displayed on the workstation VM. It should look similar to the
following:

In project security-review on server https://master.lab.example.com:443


...
http://microservice-authz-security-review.apps.lab.example.com (svc/microservice-
authz)


5. Test the microservice-authz microservice endpoint responsible for generating the JWT
using the RESTClient Firefox plug-in. Use the following JSON content to authenticate with a
valid user.

{ "username": "alumni", "password": "alumni-secret" }

5.1. Generate the JWT string by contacting the service from a client using the RESTClient
Firefox plug-in.

Start Firefox on the workstation VM and click the RESTClient plug-in in the browser's
toolbar.

5.2. Select POST as the Method. In the URL form, enter http://microservice-authz-
security-review.apps.lab.example.com/authz. Add the following string to the
Body text area:

{ "username": "alumni", "password": "alumni-secret" }

Note
This content may be copied and pasted from the /home/student/JB283/
labs/security-review/auth.txt file.

Select Headers > Custom Headers from the top menu and use the following values in
the Request Headers dialog box:

• Name: Content-Type

• Attribute Value: application/json

Click Okay.

5.3. Click Send.

5.4. In the Headers tab, verify that the Status Code is 200 OK.

5.5. In the Response tab, verify that the response is similar to the following output.

{"username":"alumni",
"id_token":"eyJhbGciOiJSUzI1NiJ9....output omitted...gVZX4JGQ"}

6. Deploy the microservice-gateway microservice with the fabric8 Maven plug-in.

The API gateway is a proxy that invokes the microservice-authz microservice.

You can use the -DskipTests option to skip the tests for a faster build time.

Run mvn fabric8:deploy to deploy the application using the container image built by the
S2I build.

[student@workstation microservice-authz]$ cd ../microservice-gateway


[student@workstation microservice-gateway]$ mvn package fabric8:deploy \


-DskipTests

Review the Maven build outputs. Note that the fabric8:deploy goal creates all the
necessary configuration files and deploys the pod to the OpenShift cluster.

Note
The following error may occur during execution. You may disregard it.

[ERROR] Exception in reconnect


java.util.concurrent.RejectedExecutionException:
Task java.util.concurrent.ScheduledThreadPoolExecutor
$ScheduledFutureTask@3cce6de9
rejected from
java.util.concurrent.ScheduledThreadPoolExecutor@79ccf414[Shutting down,
pool size = 1, active threads = 1,
queued tasks = 0, completed tasks = 10]

7. Get the route URL to access the microservice-gateway microservice endpoints.

Run the following command in the existing terminal window:

[student@workstation microservice-gateway]$ oc status

Route information is displayed on the workstation VM. It should look similar to the
following:

In project security-review on server https://master.lab.example.com:443


...
http://microservice-gateway-security-review.apps.lab.example.com (svc/microservice-
gateway)

8. Test the microservice-gateway microservice proxy endpoint responsible for generating


the JWT using the RESTClient Firefox plug-in. The proxy endpoint is located at http://
microservice-gateway-security-review.apps.lab.example.com/gateway/
authz. Use the following JSON content to authenticate as a valid user:

{ "username": "alumni", "password": "alumni-secret" }

8.1. Generate the JWT string by contacting the service from a client using the RESTClient
Firefox plug-in.

From the existing Firefox window, open a new tab and click the RESTClient plug-in in the
browser's toolbar.

8.2. Select POST as the Method. In the URL form, enter http://microservice-
gateway-security-review.apps.lab.example.com/gateway/authz. Add the
following string to the Body text area:

{ "username": "alumni", "password": "alumni-secret" }


Select Headers > Custom Headers from the top menu. Use the following values in the
Request Headers dialog box:

• Name: Content-Type

• Attribute Value: application/json

Click Okay.

8.3. Click Send.

8.4. Verify in the Headers tab that the Status Code is 200 OK.

8.5. In the Response tab, verify that the response is similar to the following output:

{"username":"alumni",
"id_token":"eyJhbGciOiJSUzI1NiJ9....output omitted...gVZX4JGQ"}

9. Change the allSessions method of the SessionResource class in the
microservice-session microservice so that it returns a Collection with all sessions for
authenticated users.

Some sessions require very important person (VIP) passes. You must remove these sessions
from the result for non-VIP users; the Java stream is provided in the existing source code. List
all sessions for individuals with VIP passes. A user is a VIP whenever the user is part of the VIP
JAAS role.

9.1. Open the SessionResource class by expanding the microservice-session item in the
Project Explorer tab in the left pane of JBoss Developer Studio. Click microservice-
session > Java Resources > src/main/java > io.microprofile.showcase.session and
expand it. Double-click the SessionResource.java file.

9.2. Cast the UserPrincipal instance captured from the securityContext instance into
a JsonWebToken instance.

public Collection<Session> allSessions(@Context SecurityContext securityContext,


@Context HttpHeaders headers) throws Exception {

// TODO Access the authenticated user as a JsonWebToken


JsonWebToken jwt = (JsonWebToken) securityContext.getUserPrincipal();
...

9.3. Evaluate whether the JWT is provided. If not, return an empty Collection.

//TODO inspect the JWT. If it is null, then return an empty collection


if (jwt == null) {
return Collections.emptyList();
}
...

9.4. Evaluate whether the user is part of the VIP JAAS role. If not, filter out the sessions that
are VIP sessions using the Java stream provided. Otherwise, return all sessions.

// TODO If the user does NOT have a VIP role, filter out the VIP sessions


boolean isVIP = securityContext.isUserInRole("VIP");


Collection<Session> sessions = null;
if (!isVIP) {
// TODO Filter sessions that are not VIP only.
sessions = sessionStore.getSessions().stream().filter(session -> !
session.isVIPOnly()).collect(Collectors.toList());
} else {
// TODO Show all the sessions
sessions = sessionStore.getSessions();
}

Press Ctrl+S to save your changes.

10. Configure the microservice-session microservice to use JWTLoginModule to authenticate
users.

10.1. In JBoss Developer Studio, open the project-defaults.yml file by expanding the
microservice-session item in the Project Explorer tab in the left pane.

Click microservice-session > Java Resources > src/main/resources to expand it. Double-click the
project-defaults.yml file.

10.2. Update the file to support authentication using JWT.

In the login-modules section, update the code attribute with
org.wildfly.swarm.microprofile.jwtauth.deployment.auth.jaas.JWTLoginModule.

login-modules:
  - login-module: rm
    # TODO use the org.wildfly.swarm.microprofile.jwtauth.deployment.auth.jaas.JWTLoginModule login module
    code: org.wildfly.swarm.microprofile.jwtauth.deployment.auth.jaas.JWTLoginModule
...

The login module class name should be defined in a single line.


Press Ctrl+S to save your changes.

11. Deploy the microservice-session microservice with the fabric8 Maven plug-in. You can use
the -DskipTests option to skip the tests for a faster build time.

Run mvn fabric8:deploy to deploy the application using the container image built by the
S2I build.

[student@workstation microservice-gateway]$ cd ../microservice-session


[student@workstation microservice-session]$ mvn package fabric8:deploy \
-DskipTests

Review the Maven build outputs. Note that the fabric8:deploy goal creates all the
necessary configuration files and deploys the pod to the OpenShift cluster.


Note
The following error may occur during execution. You may disregard it.

[ERROR] Exception in reconnect


java.util.concurrent.RejectedExecutionException:
Task java.util.concurrent.ScheduledThreadPoolExecutor
$ScheduledFutureTask@3cce6de9
rejected from
java.util.concurrent.ScheduledThreadPoolExecutor@79ccf414[Shutting down,
pool size = 1, active threads = 1,
queued tasks = 0, completed tasks = 10]

12. Get the route URL to access the microservice-session microservice endpoints.

Run the following command in the existing terminal window:

[student@workstation microservice-gateway]$ oc status

Route information is displayed on the workstation VM. It looks similar to the following:

In project security-review on server https://master.lab.example.com:443


...
http://microservice-session-security-review.apps.lab.example.com (svc/microservice-
session)

13. Test the microservice-gateway microservice proxy endpoint to access the session
endpoint using the RESTClient Firefox plug-in. The proxy endpoint is located at http://
microservice-gateway-security-review.apps.lab.example.com/gateway/
sessions and it may be accessed using the HTTP GET method.

13.1. Get an empty session list.

From the existing Firefox window, open a new tab and click the RESTClient plug-in in the
browser's toolbar.

13.2. Select GET as the Method. In the URL form, enter http://microservice-gateway-
security-review.apps.lab.example.com/gateway/sessions.

Select Headers > Custom Headers from the top menu and use the following values in
the Request Headers dialog box:

• Name: Content-Type

• Attribute Value: application/json

Click Okay.

13.3. Click Send.

13.4. In the Headers tab, verify that the Status Code is 200 OK.


13.5. In the Response tab, verify that the response is identical to the following output:

[]

14. Deploy the web-application microservice with the fabric8 Maven plug-in. You can use the -
DskipTests option to skip the tests for a faster build time and the -Dskip.npm to skip the
Node.js build.

Note
The nodejs portion of the application has been pre-built to accommodate the
offline classroom environment that this class uses. For this reason, you must skip
the npm portion of the web-application build.

Run mvn fabric8:deploy to deploy the application using the container image built by the
S2I build.

[student@workstation microservice-session]$ cd ../web-application


[student@workstation web-application]$ mvn package fabric8:deploy \
-DskipTests -Dskip.npm

Review the Maven build outputs. Note that the fabric8:deploy goal creates all necessary
configuration files and deploys the pod to OpenShift.

Note
The following error may occur during the execution. You may disregard it.

[ERROR] Exception in reconnect


java.util.concurrent.RejectedExecutionException:
Task java.util.concurrent.ScheduledThreadPoolExecutor
$ScheduledFutureTask@3cce6de9
rejected from
java.util.concurrent.ScheduledThreadPoolExecutor@79ccf414[Shutting down,
pool size = 1, active threads = 1,
queued tasks = 0, completed tasks = 10]

15. Get the route URL to access the web-application microservice.

Run the following command in the existing terminal window:

[student@workstation web-application]$ oc status

Route information is displayed on the workstation VM. It should look similar to the
following:

In project security-review on server https://master.lab.example.com:443


...


http://web.apps.lab.example.com (svc/web-application)

16. Test the web-application project that uses the microservice-authz, microservice-gateway,
and microservice-session microservices. From the existing Firefox window, open a new tab
and access http://web.apps.lab.example.com URI and click Login to authenticate.
Use the following credentials:

• username: [email protected]

• password: vipuser-secret

Click Create.

17. Navigate to the sessions section and get the list of sessions.

From the top menu, click Sessions and check that all the sessions are listed.

18. Grade the lab.

[student@workstation web-application]$ lab security-review grade

All the checks should pass.

19. Delete the project from the terminal window.

[student@workstation web-application]$ oc delete project security-review

20. Clean up and commit your changes to your local Git repository in the lab branch, and return
to the master branch.

20.1. Use the git add command to stage any uncommitted changes.

[student@workstation web-application]$ git add .

20.2. Use the git commit command to commit your changes to the local branch.

[student@workstation web-application]$ git commit \


-m"completing lab security-review"
[lab-test-review e59dc43] completing lab security-review
...output omitted...
3 files changed, 41 insertions(+), 18 deletions(-)

20.3. Check out the working copy to change to the master branch.

[student@workstation web-application]$ git checkout master


Switched to branch 'master'

This concludes the lab.


Summary
In this chapter, you learned:

• The MicroProfile specification defines JSON web tokens (JWT) as the base technology for
authentication and authorization in Java microservices.

• JWT defines standard fields, named claims, to transmit data.

• There are three different types of claims: registered, default, and custom claims.

• JWT is composed of three different blocks that are all base64-encoded:

◦ header: contains all the information needed by the counterpart to process the JWT

◦ payload: contains all claims

◦ signature: signs the contents of the JWT using a private key to guarantee integrity and
authenticity

• In a microservices-based architecture, a dedicated microservice provides credentials by
generating JWTs for other microservices.

• The WildFly Swarm microservices using a JWT must configure the project-defaults.yml
file to support the JWTLoginModule.

• A MicroProfile-based application can read claims and authentication information using the
Java authentication and authorization services (JAAS) API.

• You use JAAS-based annotations to secure MicroProfile-based applications declaratively.

• The groups claim of a JWT is automatically translated into JAAS roles.

CHAPTER 10

MONITORING MICROSERVICES

Overview

Goal: Monitor the operation of a microservice using metrics, distributed tracing, and log
aggregation.

Objectives:
• Use the Metrics specification to add metrics to a microservice.
• Enable distributed tracing in a microservice using the OpenTracing API and Jaeger.
• Describe the log aggregation feature of OpenShift.

Sections:
• Adding Metrics to a Microservice (and Guided Exercise)
• Enabling Distributed Tracing in a Microservice (and Quiz)
• Describing OpenShift Log Aggregation (and Quiz)

Lab: Monitoring Microservices


Adding Metrics to a Microservice

Objectives
After completing this section, students should be able to use the MicroProfile metrics
specification to add metrics to a microservice.

Describing the MicroProfile Metrics Specification Version 1.1
Any application that you deploy in production needs a monitoring solution to track metrics about
the application's performance and usage. The value of monitoring the performance details
of an application cannot be overstated when defects arise or application performance is
unexpectedly poor. It is important to standardize how your applications expose metrics data, and
the content and format of that metrics data, to enable the most efficient and simple monitoring
solutions.

The primary function of a health check is to provide a quick indication of the application's
health. Platforms, such as OpenShift, that orchestrate the deployment of applications use
health information to restart the application if the health check fails. Metrics, on the other hand,
help determine the health of an application by pinpointing underlying software issues, provide
long-term trend data for capacity planning, and enable proactive discovery of platform-
related problems (such as disk usage growing without bounds). You can also use metrics to
configure OpenShift to decide when to scale the application to run on more or fewer pods based
on application usage.

Java offers Java Management Extensions (JMX) as a standard to expose the low-level metrics
of a Java Virtual Machine. However, remote-JMX does not fit well in a polyglot environment
where other services may not be running on the JVM. The main goal of the MicroProfile metrics
specification is to provide a standard that outlines how to expose both a standard set of metrics
data as well as any custom metric data for MicroProfile-based applications. Standardizing the
metrics data, and how it is exposed, enables you to use a common monitoring strategy for all of
your microservices. All MicroProfile implementations must follow the standards defined in the
metrics specification for the metrics that are available, the HTTP return codes, the API path, and
the JSON data types the server uses to represent the metrics data.

When using a MicroProfile metrics implementation such as WildFly Swarm, metrics data is
exposed by REST over HTTP under the /metrics base path in two different data formats for
HTTP GET requests:

• JSON format: the response format when the HTTP Accept header matches application/
json

• Prometheus text format: the default response format when the HTTP Accept header does
not match a more specific media type, such as application/json
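
For example, a simple JAX-RS client can request the JSON representation of the metrics data, as in the following sketch. The host and port are assumptions and depend on where the WildFly Swarm application runs.

import javax.ws.rs.client.Client;
import javax.ws.rs.client.ClientBuilder;
import javax.ws.rs.core.MediaType;

public class MetricsClient {

    public static void main(String[] args) {
        Client client = ClientBuilder.newClient();

        // Sending Accept: application/json selects the JSON format;
        // omitting it returns the Prometheus text format instead.
        String json = client.target("http://localhost:8080/metrics")
                .request(MediaType.APPLICATION_JSON)
                .get(String.class);

        System.out.println(json);
        client.close();
    }
}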

Important
Future versions may allow for more export formats triggered by their specific media
type.


Note
The Prometheus text format is not covered in detail in this course. For more
information, refer to the documentation [https://github.com/prometheus/docs/blob/
master/content/docs/instrumenting/exposition_formats.md].

The MicroProfile metrics specification divides metrics into three major categories. In the
specification, these categories are referred to as scopes and serve to organize available metrics.
The three available scopes defined in the specification are:

Base metrics
Base metrics are the minimum required metrics as outlined in the specification. These
base metrics include JVM statistics such as the current heap sizes, garbage collection
times, thread counts, and other OS and host system information. All vendors implementing
MicroProfile must include these metrics. These metrics are automatically exposed as a REST
endpoint using the relative path /metrics/base.

Note
For the full list of base metrics, review the Required Metrics chapter in the
metrics specification [https://github.com/eclipse/microprofile-metrics/releases/
download/1.1/metrics_spec-1-1.pdf].

Vendor metrics
Vendor metrics include any metrics data on top of the base set of required metrics that the
MicroProfile implementation can optionally include. Vendor specific metrics are exposed as
a REST endpoint using the relative path /metrics/vendor. An example of vendor specific
data is any metric that is platform specific, such as OSGi statistics if the MicroProfile-enabled
container internally runs on top of OSGi.

Application metrics
Application metrics are custom metrics that the application developer defines, and
are specific to that particular application. An example of an application metric is how
many times a specific method is invoked, or the current count of active users in the last
fifteen minutes. Application-specific metrics cannot be included automatically by the
implementation because they are provided by the application at runtime. To solve this, the
specification defines a Java API that uses annotations to define custom application metrics.
These metrics are automatically exposed by the MicroProfile implementation as a REST
endpoint using the relative path /metrics/application.

To provide the greatest value with metrics data, the specification also defines a common set of
metadata that any implementation of the specification must provide to give context to the data.
The attributes that the specification defines for all metrics data include the following fields:

unit
A fixed set of string units.

type
Defines the metric type. There is a fixed set of types available. These include:


• counter: An incrementally increasing or decreasing numeric value. For example, a counter
might track the total number of requests received or total number of concurrently active
HTTP sessions.

• gauge: A metric that must be sampled to obtain its value. For example, the CPU
temperature, or the disk usage.

• meter: Tracks mean throughput and one, five, and fifteen-minute exponentially-weighted
moving average throughput; for example, how many database queries the microservice is
running per second.

• histogram: Calculates the distribution of a value.

• timer: Aggregates timing duration and provides duration statistics, plus throughput
statistics.

description (optional)
A description of the metric.

displayName (optional)
A name of the metric for display purposes, if the metric name is not otherwise human
readable. For instance, a metric name might be a UUID or some other generated value.

tags (optional)
A list of key value pairs that are separated by a comma. Microservice scheduling platforms
like OpenShift use tags to identify where an application is running. Because the application
code can run on any node and can be rescheduled to a different node at any time, the typical
mapping of a host to a node, and of the application runtime to that node, is no longer reliable.

reusable (optional)
If set to true, then the implementation is allowed to register multiple metrics under the
same name. Note that all such instances must set the reusable attribute to true. The
default value is false.

The specification also defines a MetricRegistry class that stores the metrics data and other
metadata information. There is one MetricRegistry instance for each of the three scopes:
base, vendor, and application.
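
The registry can also be used programmatically. The following is a minimal sketch, assuming a CDI bean in your own application code; the CartService class and the cartItemsAdded metric name are hypothetical and are not part of the course applications:

package com.example;

import javax.enterprise.context.ApplicationScoped;
import javax.inject.Inject;

import org.eclipse.microprofile.metrics.Counter;
import org.eclipse.microprofile.metrics.MetricRegistry;

@ApplicationScoped
public class CartService {

    // The injected registry is the application-scoped MetricRegistry
    @Inject
    private MetricRegistry registry;

    public void addItem() {
        // Looks up, or registers on first use, a counter named "cartItemsAdded"
        Counter itemsAdded = registry.counter("cartItemsAdded");
        itemsAdded.inc();
    }
}

Metrics registered this way appear in the application scope, alongside the annotation-based metrics described in the next section.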

Creating Application-specific Metrics Using Annotations
To leverage MicroProfile metrics functionality in a microservice running on WildFly Swarm,
include the microprofile dependency in your pom.xml. This loads all of the available
specifications in MicroProfile 1.3. You do not need to specify a version if you are using the WildFly
Swarm bill of materials, as shown in the following example:

<dependency>
<groupId>org.wildfly.swarm</groupId>
<artifactId>microprofile</artifactId>
</dependency>

The MicroProfile metrics specification provides a way to register application-specific metrics to allow applications to expose metrics in the application scope. This makes it very simple to define your application metrics directly inside your application code itself. Configure these metrics in the application code using the Java API, which defines the MicroProfile metrics annotations.

The specification defines the following annotations to use when defining application metrics:

MicroProfile Metrics Annotations Summary

@Counted
Denotes a counter, which counts the invocations of the annotated object. Default unit: MetricUnits.NONE.

@Gauge
Denotes a gauge, which samples the value of the annotated object. Default unit: none, you must specify it.

@Metered
Denotes a meter, which tracks the frequency of invocations of the annotated object. Default unit: MetricUnits.PER_SECOND.

@Metric
An annotation that contains the metadata information when requesting a metric object to be injected or produced. This annotation can be used on fields of type Meter, Timer, Counter, and Histogram. If you use @Metric on an instance of Gauge, it must be on a producer method or field. Default unit: MetricUnits.NONE.

@Timed
Denotes a timer, which tracks the duration of the annotated object. Default unit: MetricUnits.NANOSECONDS.

Each of these annotations supports the following options, which you can use to specify the necessary metadata attributes:

String name (optional)


Sets the name of the metric. If not explicitly given, then the server uses the name of the
annotated object.

boolean absolute
If set to true, then the server uses the given name as the absolute name of the metric. If
set to false, the server prepends the package name and class name before the given name.
The default value is false.

String displayName (optional)


A display name for the metric.

String description (optional)


A description of the metric.

String unit
The unit of the metric. For the @Gauge annotation, no default is provided. Check the
MetricUnits class for a set of predefined units.

String[] tags (optional)


An array of String objects in the <key>=<value> format to supply special tags to a
metric.

boolean reusable
Denotes whether a metric with a certain name can be registered in more than one place.
Does not apply to gauges.


The following sample code includes four application metrics defined using MicroProfile metrics
annotations:

package com.example;

import javax.inject.Inject;

import org.eclipse.microprofile.metrics.Counter;
import org.eclipse.microprofile.metrics.annotation.Metric;

public class Colours {

    @Inject
    @Metric
    Counter redCount;

    @Inject
    @Metric(name="blue")
    Counter blueCount;

    @Inject
    @Metric(absolute=true)
    Counter greenCount;

    @Inject
    @Metric(name="purple", absolute=true)
    Counter purpleCount;
}

The above class produces the following entries in the metrics registry:

com.example.Colours.redCount
com.example.Colours.blue
greenCount
purple
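
Besides injecting metric objects with @Metric, you can apply the interceptor-based annotations from the summary above directly to methods. The following is a hedged sketch of a hypothetical JAX-RS resource; the StatusResource class, the metric names, and the tier=backend tag are illustrative only and are not part of the course applications:

package com.example;

import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.Produces;
import javax.ws.rs.core.MediaType;

import org.eclipse.microprofile.metrics.MetricUnits;
import org.eclipse.microprofile.metrics.annotation.Counted;
import org.eclipse.microprofile.metrics.annotation.Gauge;
import org.eclipse.microprofile.metrics.annotation.Timed;

@Path("/status")
public class StatusResource {

    private int activeJobs = 0;

    // Counts invocations of the method; the tags option uses the <key>=<value> format
    @Counted(name = "statusRequestCount", absolute = true,
            description = "Requests made to the status endpoint",
            tags = {"tier=backend"})
    // Tracks the duration and throughput of each invocation
    @Timed(name = "statusRequestTimer", absolute = true)
    @GET
    @Produces(MediaType.TEXT_PLAIN)
    public String status() {
        return "OK";
    }

    // Gauges have no default unit, so the unit must be specified explicitly
    @Gauge(name = "activeJobs", absolute = true, unit = MetricUnits.NONE)
    public int getActiveJobs() {
        return activeJobs;
    }
}

Because absolute is set to true, these metrics appear in the application scope under short names such as statusRequestCount, without the package and class name prefix.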

Accessing Metrics Data Using HTTP


The MicroProfile metrics specification includes a number of REST endpoints that are exposed automatically on WildFly Swarm simply by including the microprofile or microprofile-metrics fractions. All of the metrics HTTP endpoints must implement the same logic with regard to the HTTP response codes they return, as defined in the specification. The possible HTTP response codes include:

200
Indicates the successful retrieval of an object.

204
Indicates the retrieval of a sub-tree that would exist, but has no content. For example, if an
application-specific sub-tree has no application-specific metrics defined.

404
Indicates the retrieval of a directly-addressed item that does not exist. This may be a
nonexistent sub-tree or nonexistent object.

406
Indicates that the HTTP Accept Header in the request cannot be handled by the server.


500
Indicates that a request failed due to an internal server error.

The following endpoints are exposed automatically. Remember to specify the HTTP Accept
header with a value of application/json if you want to retrieve the data in JSON format.

Supported MicroProfile Metrics REST Endpoints

• GET /metrics (JSON, Prometheus): Returns all registered metrics.

• GET /metrics/scope (JSON, Prometheus): Returns metrics registered for the respective scope.

• GET /metrics/scope/metric_name (JSON, Prometheus): Returns the metric that matches the metric name for the respective scope.

• OPTIONS /metrics (JSON): Returns metadata for all registered metrics.

• OPTIONS /metrics/scope (JSON): Returns metadata registered for the respective scope.

• OPTIONS /metrics/scope/metric_name (JSON): Returns metadata that matches the metric name for the respective scope.

When using JSON format, the REST API responds to HTTP GET requests with data formatted in a
tree-like fashion with sub-trees for the sub-resources. Any sub-tree that does not contain data is
omitted.

For example, if you access the /metrics endpoint and request JSON data, the response
includes a wrapper for each scope:

{
"application":
{
"hitCount": 45
},
"base":
{
"thread.count" : 33,
"thread.max.count" : 47
},
"vendor":
{...}
}
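
Any HTTP client can retrieve this data. The following is a minimal sketch using the standard JAX-RS client API, assuming the microservice is listening locally on port 8080; the MetricsClient class name is illustrative:

package com.example;

import javax.ws.rs.client.Client;
import javax.ws.rs.client.ClientBuilder;
import javax.ws.rs.core.MediaType;

public class MetricsClient {

    public static void main(String[] args) {
        Client client = ClientBuilder.newClient();
        try {
            // Request all registered metrics; the Accept header selects the JSON format
            String json = client.target("http://localhost:8080/metrics")
                    .request(MediaType.APPLICATION_JSON)
                    .get(String.class);
            System.out.println(json);
        } finally {
            client.close();
        }
    }
}

Requesting the same endpoint without the application/json Accept header returns the data in the Prometheus text format instead.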

Demonstration: Retrieving Metrics from a Microservice
1. Log in to the workstation VM as student using student as the password.


2. In the hola project, open the HolaResource class located in the com.redhat.training.msa.hola.rest package.

2.1. Inspect the @Inject annotation declared on the requestCounter attribute. The
annotation injects the Counter object used by the MicroProfile metrics fraction to
count the number of requests received by the microservice.

2.2. Inspect the @Metric annotation declared on the requestCounter attribute. The
annotation adds a new metric named requestCount to the list of monitored metrics.

2.3. Inspect the @Inject annotation declared on the failedCount attribute. The
annotation injects the Counter object used by the MicroProfile metrics fraction to
count the number of failed requests received by the microservice.

2.4. Inspect the hola and holaChaining methods. They increment the
requestCounter attribute as they are invoked.

2.5. Inspect the @Metric annotation declared on the failedCount attribute. The
annotation adds a new metric named failureCount to the list of monitored metrics.

2.6. Inspect the alohaFallback method. It increments the attribute named failedCount.

3. Start the application.

3.1. Open a terminal window on the workstation VM and navigate to the hola project.

[student@workstation ~]$ cd hello-microservices/hola

3.2. Start the microservice.

[student@workstation hola]$ mvn clean wildfly-swarm:run \
-DskipTests

Note
You may see the following exception in the output from the script. It can be safely ignored.

org.eclipse.aether.resolution.ArtifactResolutionException: Could not
find artifact commons-io:commons-io:jar:2.7-SNAPSHOT

4. Retrieve the metrics.

4.1. Start Firefox on the workstation VM and click the RESTClient plug-in in the
browser's toolbar.

4.2. In the top navigation bar, click Headers and then click Custom Header.

4.3. Fill in the Request Headers form with the following values:


• In the Name field, enter Accept.

• In the Attribute Value field, enter application/json.

4.4. Click Okay.

Note
Do not change the request headers for the following steps.

4.5. Select GET as the Method. In the URL form, enter http://localhost:8080/api/hola.

4.6. Click Send three times to increase the counter used to monitor the number of requests
made to the microservice.

4.7. In the URL form, enter http://localhost:8080/metrics and click Send.

4.8. Verify that the Response tab lists all metrics provided by the MicroProfile metrics
fraction.

4.9. In the URL form, enter http://localhost:8080/metrics/base and click Send.

4.10. Verify that the Response tab lists the system metrics provided by the MicroProfile
metrics fraction.

4.11. In the URL form, enter http://localhost:8080/metrics/vendor, and click Send.

4.12. Verify that the Response tab lists the metrics provided by the MicroProfile metrics
fraction.

4.13. In the URL form, enter http://localhost:8080/metrics/application and click Send.

4.14. Verify that the Response tab lists all the application metrics.

4.15. In the URL form, enter http://localhost:8080/metrics/application/requestCount and click Send.

4.16. Verify that the Response tab lists only the requestCount application metric.

This concludes the demonstration.

References
MicroProfile Metrics Specification
https://github.com/eclipse/microprofile-metrics/


Guided Exercise: Adding Metrics to a Microservice

In this exercise, you will enable and retrieve metrics from a microservice.

Outcomes
You should be able to enable a counter, a histogram, and a gauge metric.

Before you begin


If you have not already, execute the git clone command to clone the microprofile-conference
repository onto the workstation VM.

[student@workstation ~]$ git clone http://services.lab.example.com/microprofile-conference
Cloning into 'microprofile-conference'...
...output omitted...
Resolving deltas: 100% (2803/2803), done.

Then, run lab setup to begin the exercise.

[student@workstation ~]$ lab metrics setup

Steps
1. Switch the repository to the lab-metrics branch to get the application code for this
exercise.

1.1. Switch to the branch using the git checkout command.

[student@workstation ~]$ cd microprofile-conference


[student@workstation microprofile-conference]$ git checkout lab-metrics
Branch lab-metrics set up to track remote branch lab-metrics from origin.
Switched to a new branch 'lab-metrics'

1.2. Use the git status command to ensure that you are on the correct branch.

[student@workstation microprofile-conference]$ git status
# On branch lab-metrics
nothing to commit, working directory clean

2. Implement a request counter that increases on each invocation of the allSessions method in the SessionResource class.

2.1. Open the SessionResource class by expanding the microservice-session item in the Project Explorer tab in the left pane of JBoss Developer Studio, then click microservice-session > Java Resources > src/main/java > io.microprofile.showcase.session to expand it. Double-click the SessionResource.java file.

2.2. Annotate the requestCount attribute with the @Metric annotation to configure a counter metric. Name the metric requestCount.

// Provide a metric 'requestCount' that records how many times a GET method is invoked
@Inject
@Metric(name = "requestCount",
        description = "All JAX-RS request made to the SessionResource",
        displayName = "SessionResource#requestCount")
private Counter requestCount;

2.3. Update the allSessions method to increase the number of requests.

public Collection<Session> allSessions(@Context SecurityContext securityContext,
        @Context HttpHeaders headers) throws Exception {
    //increase the number of requests
    requestCount.inc();
    return sessionStore.getSessions();
}

Press Ctrl+S to save your changes.

3. Implement and configure a histogram metric in the SessionResource class.

3.1. Inject a MetricRegistry instance to configure a histogram.

//Inject the metric registry object to register the histogram
@Inject
private MetricRegistry metrics;

3.2. In the generateHistogram method, configure the metric type as MetricType.HISTOGRAM.

private void generateHistogram(Collection<Session> sessions){
    //set the metric type as MetricType.HISTOGRAM
    MetricType type = MetricType.HISTOGRAM;
    ...
}

3.3. In the generateHistogram method, register a new histogram.

private void generateHistogram(Collection<Session> sessions){
    //register a new histogram
    Histogram abstractWordCount = metrics.histogram(metadata);
    ...
}

Press Ctrl+S to save your changes.

4. Configure the getSessionsNumber method from the SessionResource class to define a gauge metric. Annotate the getSessionsNumber method with the @Gauge annotation.

//Add the Gauge metric
@Gauge(name = "sessionNumber", description = "The number of sessions",
        displayName = "The number of sessions", unit = MetricUnits.NONE)
public int getSessionsNumber(){
    return sessionStore.getSessions().size();
}

Press Ctrl+S to save your changes.

5. Start the microservice-session microservice.

5.1. Open a terminal window on the workstation VM and navigate to the microservice-session project.

[student@workstation ~]$ cd ~/microprofile-conference/microservice-session

5.2. Start the microservice.

[student@workstation microservice-session]$ mvn clean wildfly-swarm:run \
-DskipTests

6. Retrieve the metrics.

6.1. Start Firefox on the workstation VM and click the RESTClient plug-in in the browser's
toolbar.

6.2. Select GET as the Method. In the URL form, enter http://localhost:8080/sessions.

6.3. Click Send four times to increase the counter used to monitor the number of requests
made to the microservice.

6.4. In the top navigation bar, click Headers and then click Custom Header.

6.5. Fill in the Request Headers form with the following values:

• Name: Accept.

• Attribute Value: application/json.

6.6. Click Okay.

Note
Do not change the request headers for the following steps.

6.7. In the URL form, enter http://localhost:8080/metrics and click Send.

6.8. Verify that the Response tab lists all metrics provided by the MicroProfile metrics
fraction.

6.9. In the URL form, enter http://localhost:8080/metrics/base and click Send.

6.10.Verify that the Response tab lists the system metrics provided by the MicroProfile
metrics fraction.

6.11. In the URL form, enter http://localhost:8080/metrics/vendor, and click Send.

6.12.Verify that the Response tab lists the vendor-specific metrics provided by the WildFly
MicroProfile metrics fraction.

6.13. In the URL form, enter http://localhost:8080/metrics/application and click Send.

6.14.Verify that the Response tab lists all application metrics defined previously.

6.15. In the URL form, enter http://localhost:8080/metrics/application/io.microprofile.showcase.session.SessionResource.sessionNumber and click Send.

Verify that the Response tab lists only the sessionNumber application-specific metric.

7. Clean up, commit your changes to your local Git repository in the lab branch, and return to
the master branch.

7.1. Return to the terminal window running the microservice-session microservice and stop
the service using Ctrl+C.

7.2. In the terminal window where the microservice-session microservice was stopped, use
the git add command to stage the uncommitted changes.

[student@workstation microservice-session]$ git add .

7.3. Use the git commit command to commit your changes to the local branch:

[student@workstation microservice-session]$ git commit \
-m"completing lab metrics"
...output omitted...
[lab-metrics 7210256] completing lab metrics

7.4. Switch the working copy back to the master branch to finish cleaning up.

[student@workstation microservice-session]$ git checkout master
Switched to branch 'master'

This concludes the guided exercise.


Enabling Distributed Tracing in a Microservice

Objectives
After completing this section, students should be able to enable distributed tracing in a
microservice using the OpenTracing API and Jaeger.

Solving the Problem of Distributed Tracing Using OpenTracing
Tracing is a specialized way to monitor the path of execution of a piece of software for the
purpose of debugging or troubleshooting. You may be familiar with the TRACE log level, which
includes information about each and every method call as they occur. The goal of tracing
in microservices is analogous to this level of logging. At the highest level, a trace from one
microservice to another tells the story of a transaction or request as it propagates through a
microservice-based system.

Distributed tracing refers specifically to tracing the flow of a request across microservice
boundaries. This is more challenging than traditional tracing inside a single application because the request moves across completely different microservices. Tracing, however, is particularly
important in a microservice environment where a request can flow through multiple services.
This is because tracing provides you with valuable performance data that you can use to
efficiently identify application bottlenecks, bugs, or other issues introducing latency into your
microservice-based application.

OpenTracing is a new, open distributed tracing standard for applications and open source
software packages. The stated goal of the OpenTracing project is to provide "high-quality
distributed traces with little to no instrumentation effort by the application programmer." By
offering consistent, expressive, vendor-neutral APIs for popular platforms, using OpenTracing
makes it easy for developers to add or switch tracing implementations with a simple
configuration change.

The following list includes some of the more popular implementations that support the
OpenTracing specification:

• Jaeger

• Appdash

• Lightstep

• Hawkular

• Apache SkyWalking

In OpenTracing, a trace is a directed acyclic graph (DAG) of spans. A DAG is a graph of nodes
where the edges show direction, and there are no cycles. Spans are named, timed operations
representing a contiguous unit of work in that trace. This contiguous unit of work could represent
a single call to a database service, or a complex operation that requires multiple downstream
services.

Each microservice that participates in a distributed trace can create its own span or spans. Spans
are hierarchical, meaning that parent-child relationships can exist between spans. This helps to organize the trace data into both larger high-level tasks, such as adding an item to your cart in an e-commerce web application, which typically represent multiple operations using a parent span, and low-level granular operations, such as individual database lookups or external service calls, which are represented using child spans. A parent span may explicitly start other spans, either in serial or in parallel. In OpenTracing, it is even possible to model a child span with multiple parents.

For example, in the MicroProfile conference application, a sample trace shown in the following
figure goes from the web-application client through the API gateway, to the microservice-vote
endpoint, which calls the CouchDB service, and then returns the result back through the API
gateway to the web-application client:

Figure 10.1: Example trace from a web-application

By default, the trace shown in the previous figure contains three individual spans. One span
is created for each web service call made. Each subsequent span after the first inherits the
previous span as its parent. This means the span for the web application call to the API gateway
includes all the time it took for the API gateway to call the microservice-vote application. It also
includes the time required for the microservice-vote application to call the CouchDB service and
return the result back to the API gateway, which then returns the final result back to the web
application.
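
Although a MicroProfile implementation creates spans like these automatically for JAX-RS requests, the parent-child relationship can also be expressed directly with the OpenTracing API. The following is a conceptual sketch only; it assumes an io.opentracing.Tracer instance is already available, the CheckoutTracing class and operation names are hypothetical, and the exact span builder method names vary slightly between OpenTracing API versions:

package com.example;

import io.opentracing.Span;
import io.opentracing.Tracer;

public class CheckoutTracing {

    // Wraps a high-level task in a parent span and a low-level lookup in a child span
    public void addItemToCart(Tracer tracer) {
        Span parent = tracer.buildSpan("addItemToCart").start();
        try {
            Span child = tracer.buildSpan("lookupInventory")
                    .asChildOf(parent)
                    .start();
            try {
                // ... call the inventory database here ...
            } finally {
                child.finish();
            }
        } finally {
            parent.finish();
        }
    }
}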

Describing the MicroProfile OpenTracing API Version 1.0
A common companion to distributed trace logging is a service where the distributed trace records can be stored. The storage service for distributed trace records typically provides features to visualize cross-service trace records associated with particular request flows. By using a standard approach to tracing instrumentation, microservices written in compliance with the MicroProfile specification are able to integrate well with a distributed trace system that is part of the larger microservices environment.

The MicroProfile OpenTracing specification defines an API and implementation behaviors that allow microservices to easily participate in an environment where distributed tracing is enabled. The stated goal of the specification is to make it easy to instrument services at run time with distributed tracing functionality, given an existing distributed tracing system in the environment.

In order for a distributed tracing system to be effective and usable, every microservice in your
environment requires two things:

1. They must agree on the mechanism for transferring correlation IDs across microservices.
Correlation IDs are used internally by the tracing implementation to track individual spans
that are already present on incoming requests from upstream systems.

2. They must produce their trace records in a standard format that is consumable by the
common storage service for distributed trace records.

The MicroProfile OpenTracing specification does not address the problem of defining,
implementing, or configuring the underlying distributed tracing system. The specification focuses
on three areas: it gives developers a simple, standardized, vendor-independent mechanism to
introduce tracing into MicroProfile-based microservices, it provides solutions to standardize how
tracing data is transferred from one microservice to another, and it produces the tracing data in
a standard format.

The MicroProfile OpenTracing implementation allows JAX-RS applications to participate in distributed tracing, without requiring developers to add any distributed tracing code into their applications, and without requiring developers to know anything about the distributed tracing environment into which they deploy their JAX-RS application.

To facilitate these requirements, the MicroProfile OpenTracing specification dictates that all
MicroProfile implementations must automatically:

• Detect and configure an io.opentracing.Tracer implementation available on the classpath for use by JAX-RS applications.

• Extract SpanContext information from any incoming JAX-RS request.

• Start a Span for any incoming JAX-RS request, and finish the Span when the request
completes.

• Inject SpanContext information into any outgoing JAX-RS request.

• Start a Span for any outgoing JAX-RS request, and finish the Span when the request is
complete.

Adding Distributed Tracing to MicroProfile-Based Microservices Using OpenTracing
By default, including the MicroProfile OpenTracing libraries and an implementation of
io.opentracing.Tracer in your application's dependencies is enough to enable distributed
tracing for your microservice. In the following example, the MicroProfile OpenTracing API is
included and Jaeger is the implementation of Tracer.


<dependencies>
...
<dependency>
<groupId>org.eclipse.microprofile.opentracing</groupId>
<artifactId>microprofile-opentracing-api</artifactId>
</dependency>
<dependency>
<groupId>com.uber.jaeger</groupId>
<artifactId>jaeger-core</artifactId>
<version>0.20.0</version>
</dependency>
<dependency>
<groupId>com.uber.jaeger</groupId>
<artifactId>jaeger-tracerresolver</artifactId>
<version>0.20.0</version>
</dependency>
</dependencies>

Using this default configuration, all incoming and outgoing requests are traced automatically.
This means that you do not need to write any custom instrumentation code to support tracing,
simplifying your application code drastically.

It is possible to further configure this behavior using the @Traced annotation. This allows you to
manually define custom spans that you want to trace.

Using the @Traced Annotation


When applied to a class, the @Traced annotation is automatically applied to all methods of
the class. If the @Traced annotation is applied to a class and method, then the annotation
configuration applied to the method overrides the configuration of the annotation at the class
level. The annotation starts a span at the beginning of the method execution, and finishes the
span at the end of the method execution.

The @Traced annotation has the following two optional arguments:

• value either enables or disables explicit tracing at the class or method level. If the @Traced
annotation is specified at the class level, then use @Traced(false) to annotate specific
methods to disable creation of a span for those methods. By default, the value is set to true.

• operationName is used to specify a custom name for the span. If the @Traced annotation
finds the operationName unset or set to an empty string, the implementation uses the
default operation name, which is:

<HTTP method>:<package name>.<class name>.<method name>

The following example includes the use of the @Traced annotation on a method:

package com.redhat.training.bookstore.inventory.rest;

...imports excluded...

@Path("/")
@Api("inventory")
public class InventoryResource {

    private final Logger log = LoggerFactory.getLogger(InventoryResource.class);

    @Inject
    private InventoryDatabase db;

    @GET
    @Path("/inventory/{isbn}")
    @Produces(MediaType.APPLICATION_JSON)
    @ApiOperation("Returns the inventory count for a book identified by ISBN")
    @Traced(value = true, operationName = "getInventory")
    public Response getInventory(@PathParam("isbn") String isbn) {
        log.debug("get Inventory endpoint called");
        for (BookInventory inventory : db.getInventory()) {
            if (isbn.equals(inventory.getIsbn()))
                return Response.ok(inventory, MediaType.APPLICATION_JSON).build();
        }

        return Response.status(404).build();
    }
}

Important
At the time of writing this course, the 2018.3.3 version of WildFly Swarm does
not support the MicroProfile OpenTracing specification, but this support is planned for
a future release.

Using Jaeger to View Tracing Data


Jaeger is a distributed tracing system released as open source by Uber Technologies. Jaeger
features an OpenTracing-compatible data model and includes implementations in Go, Java, Node,
Python, and C++. Jaeger is composed of multiple components, including a web UI and backend collection agents.

The Jaeger Web UI is implemented in JavaScript using the popular open source framework React. It provides a unified view into all tracing data in your application, with helpful visualizations. The Jaeger backend is distributed as a collection of Docker images. The binaries support various configuration methods, including command-line options, environment variables, and configuration files.

Additionally, Jaeger provides an all-in-one Docker container image. This container image, designed for quick local testing, launches the Jaeger UI, collector, query, and agent with an in-memory storage component.

The simplest way to start the all-in-one container image is to use the pre-built image published to
DockerHub using the following command:

[user@localhost ~]$ docker run -d \
-e COLLECTOR_ZIPKIN_HTTP_PORT=9411 \
-p 5775:5775/udp \
-p 6831:6831/udp \
-p 6832:6832/udp \
-p 5778:5778 \
-p 16686:16686 \
-p 14268:14268 \
-p 9411:9411 \
jaegertracing/all-in-one:latest


You can then navigate to http://localhost:16686 to access the Jaeger UI.

Figure 10.2: The Jaeger UI

References
MicroProfile OpenTracing Specification
https://github.com/eclipse/microprofile-opentracing

OpenTracing Documentation
http://opentracing.io/documentation/

Jaeger Tracing
https://github.com/jaegertracing/jaeger


Quiz: Enabling Distributed Tracing in a Microservice

Choose the correct answers to the following questions:

1. Which two of the following statements about OpenTracing are true? (Choose two.)

a. Distributed tracing with OpenTracing requires the use of closed-source proprietary libraries.
b. OpenTracing is an open distributed tracing standard for applications and OSS
packages.
c. OpenTracing-compliant implementations are completely swappable with minimal to no
code changes for the developer.
d. Jaeger is the only supported OpenTracing implementation available.

2. Which three of the following rules apply to all MicroProfile OpenTracing implementations?
(Choose three.)

a. A MicroProfile implementation must provide a mechanism to configure an io.opentracing.Tracer implementation for use by each JAX-RS application.
b. A MicroProfile implementation must provide a mechanism to log, store, and visualize
all Span objects present on any incoming request.
c. A MicroProfile implementation must provide a mechanism to automatically start
a Span for any incoming JAX-RS request, and finish the Span when the request
completes.
d. A MicroProfile implementation must provide a mechanism to automatically start
a Span for any outgoing JAX-RS request, and finish the Span when the request
completes.
e. A MicroProfile implementation must provide a mechanism to remove the Span
objects attached to any outgoing JAX-RS request.

3. MicroProfile OpenTracing implementations must use which of the following patterns to derive the name of a Span if no custom name is specified?

a. <package name>.<class name>.<method name>:<HTTP method>
b. <class name>.<method name>-span
c. <HTTP method>:<package name>.<class name>.<method name>
d. <HTTP path>:<HTTP method>

4. Which of the following annotations disables span creation for a specific class or method?

a. @NoTrace
b. @Trace(false)
c. @Traced(false)
d. @Trace(span=false)


Solution
Choose the correct answers to the following questions:

1. Which two of the following statements about OpenTracing are true? (Choose two.)

a. Distributed tracing with OpenTracing requires the use of closed-source proprietary libraries.
b. OpenTracing is an open distributed tracing standard for applications and OSS packages. (correct)
c. OpenTracing-compliant implementations are completely swappable with minimal to no code changes for the developer. (correct)
d. Jaeger is the only supported OpenTracing implementation available.

2. Which three of the following rules apply to all MicroProfile OpenTracing implementations?
(Choose three.)

a. A MicroProfile implementation must provide a mechanism to configure an io.opentracing.Tracer implementation for use by each JAX-RS application. (correct)
b. A MicroProfile implementation must provide a mechanism to log, store, and visualize all Span objects present on any incoming request.
c. A MicroProfile implementation must provide a mechanism to automatically start a Span for any incoming JAX-RS request, and finish the Span when the request completes. (correct)
d. A MicroProfile implementation must provide a mechanism to automatically start a Span for any outgoing JAX-RS request, and finish the Span when the request completes. (correct)
e. A MicroProfile implementation must provide a mechanism to remove the Span objects attached to any outgoing JAX-RS request.

3. MicroProfile OpenTracing implementations must use which of the following patterns to derive the name of a Span if no custom name is specified?

a. <package name>.<class name>.<method name>:<HTTP method>
b. <class name>.<method name>-span
c. <HTTP method>:<package name>.<class name>.<method name> (correct)
d. <HTTP path>:<HTTP method>

4. Which of the following annotations disables span creation for a specific class or method?

a. @NoTrace
b. @Trace(false)
c. @Traced(false) (correct)
d. @Trace(span=false)


Describing OpenShift Log Aggregation

Objectives
After completing this section, students should be able to describe the log aggregation feature of
OpenShift.

Introduction to Log Aggregation


When you deploy multiple microservices on a container deployment platform such as OpenShift,
it is important to consider log management, which includes log rotation policies for your
containers as well as a log aggregation solution. Each microservice instance deployed in the
cluster has its own set of server logs, which can make log management a complicated and
tedious task. For instance, you are likely to spend a lot of time tracing the root cause of an
intermittent failure that is occurring sporadically on multiple nodes if no log management
solution is deployed. Log aggregation provides a single point of view into the logs of all instances
of your microservices, making debugging problems more efficient.

You can use a centralized system to facilitate troubleshooting. While there are many centralized
log aggregators, OpenShift Container Platform provides a solution called EFK.

EFK is an acronym composed of the initial letters of three open source projects:

• Elasticsearch: an open source search engine and object store that provides a distributed
RESTful API for log data

• Fluentd: a data collector project that gathers logs from the application nodes and sends them
to the Elasticsearch service

• Kibana: a web interface for Elasticsearch

EFK features include user management for log access, graphs and charts, a quick overview of
common errors, as well as simple searching and filtering of log files. When you deploy the EFK
stack into your environment, it aggregates logs from all nodes and projects in the Elasticsearch
database, and uses Kibana to provide a web interface with access to logs.

Elasticsearch
Elasticsearch is a highly scalable open source full-text search and analytics engine meant to
run in distributed environments. It allows you to store, search, and analyze big volumes of data
quickly and nearly in real time. It is generally used as the underlying engine and technology that
powers applications that have complex search features and requirements. Elasticsearch provides
standard RESTful APIs that can produce JSON to expose your log data.

Fluentd
Fluentd is an open source data collector for a unified logging layer. Fluentd allows you to
unify data collection and consumption for a better use and understanding of data. When
deploying the EFK logging environment to OpenShift, Fluentd is deployed as a DaemonSet.
A DaemonSet is an OpenShift object which ensures that all nodes run a copy of a pod. By
default, the Fluentd service reads log entries from the /var/log/messages and /var/log/
containers/container.log files. However, you can also use the systemd journal as the log
source.


Kibana
Kibana is the web interface that reads log entries from the Elasticsearch database. It creates
visualization graphs, charts, time tables, and reports, using time-based and non-time-based
events. You can visualize the cluster data, export CSV files, create dashboards, and run advanced
requests without having to write complicated scripts or build custom solutions.

Figure 10.3, The Kibana web interface, displays some Kibana charts. Access them from the Discover menu. The chart view has a search field you can use to run searches with advanced patterns. For example, a search of NullPointerException lists all logs containing a log line with a NullPointerException entry. Use an exclamation mark to negate a query, such as !CartService to exclude logs that contain entries with the text CartService.

You can use the Discover page to exclude values from a list, add columns to tables, or use the
search bar to search for documents.

Figure 10.3: The Kibana web interface

Use the Visualize tab to generate visualizations, such as area charts, data tables, line charts, tile
maps, and pie charts. You can use the search bar to update any visualization.


Figure 10.4: The Visualize page

If you find a visualization particularly useful, you can save it by adding it to the dashboard. You can also share entire dashboards with external teams as embedded iFrames or with a generated HTML link.

Use the date picker and the search bar to update charts in real time. For each graph, tools such
as filters are also available. You can also share a snapshot of the data retrieved at a current point
in time by Kibana.

Figure 10.5: The Kibana dashboard

Demonstration: Examining the OpenShift Logging Console
1. Log in to the workstation VM as student using student as the password.

2. Log in to the OpenShift cluster.

2.1. Open a terminal window on the workstation VM and run the following command.

[student@workstation ~]$ oc login -u developer \
-p redhat https://master.lab.example.com

3. Create a new project in OpenShift.

3.1. Open a terminal window on the workstation VM and run the following command.

[student@workstation ~]$ oc new-project demo-logging

4. Start the hola microservice.

4.1. Open a terminal window on the workstation VM and navigate to the hola project.

[student@workstation ~]$ cd hello-microservices/hola


4.2. Start the microservice.

[student@workstation hola]$ mvn clean fabric8:deploy \
-DskipTests

5. Raise exceptions in the application to get them in the Kibana web interface.

5.1. Start Firefox on the workstation VM and click the RESTClient plug-in in the
browser's toolbar.

5.2. Select GET as the Method. In the URL form, enter http://hola.apps.lab.example.com/api/hola.

5.3. Click Send three times to increase the counter used to monitor the number of requests
made to the microservice.

5.4. In a new Firefox tab, enter https://kibana.apps.lab.example.com and press Enter.

5.5. Log in to the Kibana web interface with the following credentials:

• Username: admin

• Password: redhat

Press Enter.

Important
Accept any insecure certificate from the server. The OpenShift cluster installation in the classroom was deployed only with self-signed certificates.

5.6. From the Discover page, select the project.demo-logging.UUID project.

Add the following search fields:

• kubernetes.container_name

• message

• hostname

The information provides any stack traces or outputs from all the hosts where the application was deployed.

5.7. Click the kubernetes.container_name filter in the Selected Fields section to filter
the results with all discovered container names. Click the magnifier icon with a plus
sign (+) next to any container, for example, wildfly-swarm, to filter the results with
this container.


Notice the new entry under the search bar, which reads
kubernetes.container_name: "wildfly-swarm".

5.8. Delete the project from OpenShift.

Run the following command to remove the project from OpenShift:

[student@workstation hola]$ oc delete project demo-logging

This concludes the demonstration.

References
Elasticsearch
http://elastic.co

Kibana
https://www.elastic.co/products/kibana

Fluentd
https://www.fluentd.org


Quiz: Describing Log Aggregation

Match the items below to their counterparts in the table.

Items: DaemonSet, Elasticsearch, Fluentd, Kibana, Log Aggregation

Descriptions (match each item to one description):

• Service that collects data and gathers logs from application nodes

• Provides a centralized view into the logs of all instances of your microservices, making debugging more efficient

• Service that provides a web interface to view and analyze log data

• Service that stores log data objects and provides a search engine

• An OpenShift object that ensures all nodes run a copy of a pod


Solution
Match the items below to their counterparts in the table.

• Fluentd: Service that collects data and gathers logs from application nodes

• Log Aggregation: Provides a centralized view into the logs of all instances of your microservices, making debugging more efficient

• Kibana: Service that provides a web interface to view and analyze log data

• Elasticsearch: Service that stores log data objects and provides a search engine

• DaemonSet: An OpenShift object that ensures all nodes run a copy of a pod


Lab: Monitoring Microservices

In this lab, you will enable and retrieve metrics from a microservice deployed on OpenShift.

Outcomes
You should be able to configure and customize counter and gauge metrics.

Before you begin


If you have not already, execute the git clone command to clone the microprofile-conference
repository onto the workstation VM.

[student@workstation ~]$ git clone http://services.lab.example.com/microprofile-conference
Cloning into 'microprofile-conference'...
...output omitted...
Resolving deltas: 100% (2803/2803), done.

Run lab setup to begin the exercise.

[student@workstation ~]$ lab monitor-review setup

Steps
1. Switch the repository to the lab-monitor branch to get the correct version of the
application code for this exercise.

2. Enable a counter metric in the ResourceSpeaker class from the microprofile-speaker microservice. Name the metric requestCount. You need to increase the counter metric in the retrieveAll method.

3. Get the number of speakers stored in the microservice using a gauge from the MicroProfile metrics specification. Define the gauge metric in the ResourceSpeaker class from the microprofile-speaker microservice. Name the metric speakersSize. You need to enable the gauge metric in the getSpeakersSize method.

Press Ctrl+S to save your changes.

4. Create a new OpenShift project named monitor-review. Deploy the microservice-speaker microservice on the OpenShift cluster using the fabric8 Maven plug-in.

5. Get the route URL to access the microservice-speaker microservice endpoints.

Run the following command in the existing terminal window:

[student@workstation microservice-speaker]$ oc status

The route information is displayed on the workstation VM, and should look similar to the
following:

In project monitor-review on server https://master.lab.example.com:443
...
http://microservice-speaker-monitor-review.apps.lab.example.com (svc/microservice-speaker)


6. Access http://microservice-speaker-monitor-review.apps.lab.example.com/speaker five times to update the counter metrics using the HTTP GET method. Use the following request header:

• Name: Accept

• Attribute Value: application/json

Note
Do not change the request headers for the following steps.

7. Get all the metrics available after the load using the RESTClient Firefox plug-in.

8. Get the base metrics after the load using the RESTClient Firefox plug-in.

9. Get the vendor-specific metrics after the load using RESTClient Firefox plug-in.

10. Get the metrics configured in the previous steps after the load using RESTClient Firefox
plug-in.

11. Grade the lab.

[student@workstation microservice-speaker]$ lab monitor-review grade

12. Clean up the OCP project, commit your changes to your local Git repository in the lab
branch, and return to the master branch.

12.1. Delete the OCP project monitor-review to undeploy the service and remove the
other OCP resources.

[student@workstation microservice-speaker]$ oc delete project \
monitor-review
project "monitor-review" deleted

12.2. Use the git add command to stage the uncommitted changes.

[student@workstation microservice-speaker]$ git add .

12.3.Use the git commit command to commit your changes to the local branch.

[student@workstation microservice-speaker]$ git commit \
-m"completing lab monitor"
[lab-monitor 72109278] completing lab lab-monitor

12.4.Switch the working copy back to the master branch to finish cleaning up.

[student@workstation microservice-speaker]$ git checkout master
Switched to branch 'master'

This concludes the lab.


Solution
In this lab, you will enable and retrieve metrics from a microservice deployed on OpenShift.

Outcomes
You should be able to configure and customize counter and gauge metrics.

Before you begin


If you have not already, execute the git clone command to clone the microprofile-conference
repository onto the workstation VM.

[student@workstation ~]$ git clone http://services.lab.example.com/microprofile-conference
Cloning into 'microprofile-conference'...
...output omitted...
Resolving deltas: 100% (2803/2803), done.

Run lab setup to begin the exercise.

[student@workstation ~]$ lab monitor-review setup

Steps
1. Switch the repository to the lab-monitor branch to get the correct version of the
application code for this exercise.

1.1. Switch to the branch using the git checkout command.

[student@workstation ~]$ cd microprofile-conference
[student@workstation microprofile-conference]$ git checkout \
lab-monitor
Switched to branch 'lab-monitor'

1.2. Use the git status command to ensure that you are on the correct branch.

[student@workstation microprofile-conference]$ git status
# On branch lab-monitor
nothing to commit, working directory clean

2. Enable a counter metric in the ResourceSpeaker class from the microprofile-speaker microservice. Name the metric requestCount. You need to increase the counter metric in the retrieveAll method.

2.1. Open the ResourceSpeaker class by expanding the microservice-speaker item in the Project Explorer tab in the left pane of JBoss Developer Studio, then click microservice-speaker > Java Resources > src/main/java > io.microprofile.showcase.speaker.rest to expand it. Double-click the ResourceSpeaker.java file.

2.2. Annotate the requestCount attribute with the @Metric annotation to configure a
counter metric. Name the metric requestCount.

// Provide a metric 'requestCount' that records how many times a GET method is invoked
@Inject
@Metric(name = "requestCount")
private Counter requestCount;

2.3. Update the retrieveAll method to increment the counter metric.

public Collection<Speaker> retrieveAll() {
    //increase the number of requests
    requestCount.inc();
    final Collection<Speaker> speakers = this.speakerDAO.getSpeakers();
    speakers.forEach(this::addHyperMedia);
    return speakers;
}

Press Ctrl+S to save your changes.

3. Get the number of speakers stored in the microservice using a gauge from the MicroProfile metrics specification. Define the gauge metric in the ResourceSpeaker class from the microprofile-speaker microservice. Name the metric speakersSize. You need to enable the gauge metric in the getSpeakersSize method.

//Configure the Gauge metric.
@Gauge(name = "speakersSize", unit = MetricUnits.NONE)
public Integer getSpeakersSize(){
    return speakerDAO.getSpeakers().size();
}

Press Ctrl+S to save your changes.

4. Create a new OpenShift project named monitor-review. Deploy the microservice-speaker microservice on the OpenShift cluster using the fabric8 Maven plug-in.

4.1. Open a terminal window on the workstation VM and log in to OpenShift cluster as the
developer user:

[student@workstation ~]$ oc login -u developer -p redhat \
https://master.lab.example.com

4.2. Create the monitor-review project:

[student@workstation ~]$ oc new-project monitor-review
Now using project "monitor-review"...

4.3. Open a new terminal window, and navigate to the microservice-speaker microservice
project. Deploy it on the OpenShift cluster:

[student@workstation ~]$ cd microprofile-conference/microservice-speaker
[student@workstation microservice-speaker]$ mvn clean fabric8:deploy -DskipTests
[INFO] Scanning for projects...
[INFO] F8: Running in OpenShift mode
...
[INFO] Current reconnect backoff is 4000 milliseconds (T2)
...
[INFO] BUILD SUCCESS


...

Note
The following error may occur during deployment:

[ERROR] Exception in reconnect
java.util.concurrent.RejectedExecutionException: Task
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask@7eae1359
rejected from java.util.concurrent.ScheduledThreadPoolExecutor@3bac82e8[Shutting
down, pool size = 1, active threads = 1, queued tasks = 0, completed tasks = 12]

You may disregard it.

5. Get the route URL to access the microservice-speaker microservice endpoints.

Run the following command in the existing terminal window:

[student@workstation microservice-speaker]$ oc status

The route information is displayed on the workstation VM, and should look similar to the
following:

In project monitor-review on server https://master.lab.example.com:443
...
http://microservice-speaker-monitor-review.apps.lab.example.com (svc/microservice-speaker)

6. Access http://microservice-speaker-monitor-review.apps.lab.example.com/speaker five times to update the counter metrics using the HTTP GET method. Use the following request header:

• Name: Accept

• Attribute Value: application/json

Note
Do not change the request headers for the following steps.

6.1. Start Firefox on the workstation VM and click the RESTClient plug-in in the browser's
toolbar.

6.2. In the top navigation bar, click Headers and then click Custom Header.

6.3. Fill in the Request Headers form with the following values:

• Name: Accept.


• Attribute Value: application/json.

6.4. Click Okay.

6.5. Select GET as the Method. In the URL form, enter http://microservice-speaker-monitor-review.apps.lab.example.com/speaker.

6.6. Click Send five times to increase the counter used to monitor the number of requests
made to the microservice.

7. Get all the metrics available after the load using the RESTClient Firefox plug-in.

7.1. Access http://microservice-speaker-monitor-review.apps.lab.example.com/metrics to retrieve all the metrics from the microservice.

7.2. Verify that the Response tab lists all the metrics provided by the MicroProfile metrics
fraction.

8. Get the base metrics after the load using the RESTClient Firefox plug-in.

8.1. In the URL form, enter http://microservice-speaker-monitor-review.apps.lab.example.com/metrics/base and click Send.

8.2. Verify that the Response tab lists the system metrics provided by the MicroProfile
metrics fraction.

9. Get the vendor-specific metrics after the load using RESTClient Firefox plug-in.

9.1. In the URL form, enter http://microservice-speaker-monitor-review.apps.lab.example.com/metrics/vendor, and click Send.

9.2. Verify that the Response tab lists the vendor-specific metrics provided by the WildFly
MicroProfile metrics fraction.

10. Get the metrics configured in the previous steps after the load using RESTClient Firefox
plug-in.

10.1. In the URL form, enter http://microservice-speaker-monitor-review.apps.lab.example.com/metrics/application and click Send.

10.2.Verify that the Response tab lists all the application metrics defined previously.

11. Grade the lab.

[student@workstation microservice-speaker]$ lab monitor-review grade

12. Clean up the OCP project, commit your changes to your local Git repository in the lab
branch, and return to the master branch.

12.1. Delete the OCP project monitor-review to undeploy the service and remove the
other OCP resources.

[student@workstation microservice-speaker]$ oc delete project \
monitor-review
project "monitor-review" deleted

12.2. Use the git add command to stage the uncommitted changes.

[student@workstation microservice-speaker]$ git add .

12.3.Use the git commit command to commit your changes to the local branch.

[student@workstation microservice-speaker]$ git commit \
-m"completing lab monitor"
[lab-monitor 72109278] completing lab lab-monitor

12.4.Switch the working copy back to the master branch to finish cleaning up.

[student@workstation microservice-speaker]$ git checkout master
Switched to branch 'master'

This concludes the lab.


Summary
In this chapter, you learned:

• The MicroProfile metrics specification divides metrics into three major categories: base,
vendor, and application.

• The MicroProfile metrics specification provides a way to register application-specific metrics, making it simple to define your application metrics directly inside the application code itself.

• The MicroProfile metrics specification automatically exposes REST endpoints that provide
access to all metrics data in either JSON or Prometheus format.

• Distributed tracing can provide you with valuable performance data that can help to efficiently
identify performance problems, bugs, or other issues that can introduce latency into your
microservice-based application.

• OpenTracing is a new, open distributed tracing standard for applications and OSS packages.
The stated goal of the OpenTracing project is to provide "high-quality distributed traces with
little to no instrumentation effort by the application programmer."

• Spans are named, timed operations representing a contiguous unit of work. A trace is a
directed acyclic graph (DAG) of spans.

• Jaeger is an open source distributed tracing system. Jaeger features an OpenTracing-compatible data model and includes implementations in Go, Java, Node, Python, and C++. Jaeger is composed of multiple components, including a web UI and backend collection agents.

• OpenShift Container Platform provides a logging aggregation solution called EFK. EFK is
composed of three open source projects:

◦ Elasticsearch: an open source search engine and object store that provides a distributed
RESTful API for logs

◦ Fluentd: a data collector project that gathers logs from the application nodes and sends
them to the Elasticsearch service

◦ Kibana: a web interface for Elasticsearch

CHAPTER 11
COMPREHENSIVE REVIEW: RED HAT APPLICATION DEVELOPMENT II: IMPLEMENTING MICROSERVICE ARCHITECTURES

Overview

Goal
Review tasks from Red Hat Application Development II: Implementing Microservice Architectures

Objectives
• Demonstrate knowledge of Developing a Microservice Endpoint and Monitoring a Microservice.

Sections
• Comprehensive Review: Developing a Microservice Endpoint
• Comprehensive Review: Monitoring a Microservice

Lab
• Lab: Developing a Microservice Endpoint
• Lab: Monitoring a Microservice


Comprehensive Review

Objectives
After completing this section, students should be able to review and refresh knowledge and skills
learned in Red Hat Application Development II: Implementing Microservice Architectures.

Reviewing Red Hat Application Development II: Implementing Microservice Architectures
Before beginning the comprehensive review for this course, you should be comfortable with the
topics covered in each chapter.

You can refer to earlier sections in the textbook for extra study.

Chapter 1, Describing Microservice Architectures


Describe components and patterns of microservice-based application architectures.
• Define what a microservice is and the guiding principles for creating microservices.

• Describe the major patterns implemented in microservice architectures.

Chapter 2, Deploying Microservice-based Applications


Deploy portions of the course case study applications to an OpenShift cluster.
• Deploy a microservice from the MicroProfile Conference application to an OpenShift cluster.

• Deploy a microservice to OpenShift using the fabric8 Maven plug-in.

Chapter 3, Implementing a Microservice with MicroProfile


Describe the specifications in MicroProfile, implement a microservice with some of the
specifications, and deploy it to an OpenShift cluster.
• Describe the specifications included in MicroProfile.

• Implement a microservice using the CDI, JAX-RS, and JSON-P specifications of MicroProfile.

Chapter 4, Testing Microservices


Implement unit and integration tests for microservices.
• Implement a microservice test case using Arquillian.

• Implement a microservice test using mock frameworks.

Chapter 5, Injecting Configuration Data into a Microservice


Inject configuration data from an external source into a microservice.
• Inject configuration data into a microservice using the config specification.

• Implement service discovery with a dependent microservice.

Chapter 6, Creating Application Health Checks


Create a health check for a microservice.
• Implement a health check in a microservice and enable a probe in OpenShift to monitor it.

Chapter 7, Implementing Fault Tolerance


Implement fault tolerance in a microservice architecture.


• Apply fault tolerance policies to a microservice.

Chapter 8, Developing an API Gateway


Describe the API Gateway pattern and develop an API gateway for a series of microservices.
• Describe the API Gateway pattern.

• Develop an API gateway for a series of microservices.

Chapter 9, Securing Microservices with JWT


Secure a microservice using the JSON web token specification.
• Implement a microservice that generates a JSON web token.

• Secure a microservice endpoint using JWT authentication and authorization.

Chapter 10, Monitoring Microservices


Monitor the operation of a microservice using metrics, distributed tracing, and log aggregation.
• Use the Metric specification to add metrics to a microservice.

• Enable distributed tracing in a microservice using the OpenTracing API and Jaeger.

• Describe the log aggregation feature of OpenShift.


Lab: Developing a Microservice Endpoint

In this review, you will configure access to the bookstore inventory microservice using REST
endpoints, configure a proxy interface to a third-party microservice, and implement the response
to a REST endpoint.

Outcomes
You should be able to:
• Configure a microservice to provide access using REST endpoints.

• Access a third-party REST endpoint microservice using RESTEasy proxy framework interface.

• Access the proxy interface with RESTEasy client proxy framework.

Before you begin


If you did not reset the workstation, master, node1, and node2 VMs at the end of the last
chapter, save any work you want to keep from earlier exercises on those machines, and reset
them now.

Use the git clone command to clone the JB283-comprehensive-review repository to the
workstation machine.

[student@workstation ~]$ git clone \


http://services.lab.example.com/JB283-comprehensive-review
Cloning into 'JB283-comprehensive-review'...
...output omitted...
Resolving deltas: 100% (2803/2803), done.

Log in to workstation as student and run the following command:

[student@workstation ~]$ lab comp-review-develop setup

Instructions
The bookstore application provides the catalog microservice responsible for providing
information about the available books for sale.

The source code from the application is available in the http://services.lab.example.com/JB283-comprehensive-review Git repository.

Use the Existing Maven Projects import wizard in JBoss Developer Studio to import the JB283-comprehensive-review project.

For the purpose of this lab, work with the lab-comp-review-develop branch, which contains
unfinished work from a former developer.

Implement the parser responsible for converting a JSON file into a set
of Book Java objects that the application uses as a mock database. The
com.redhat.training.bookstore.catalog.model.BookParser class parses a JSON
file into a Set<Book> collection containing data from the books.json file included in the
src/main/resources directory of the project. Use the JSON-P API to parse the file. Use
the com.redhat.training.bookstore.catalog.model.BookParserTest to evaluate
whether the implementation is working.

Configure the catalog microservice to enable REST endpoints using the following guidelines:

• Trigger REST support for the microservice and make all


the endpoints available under the /api URI. Create the
com.redhat.training.bookstore.catalog.rest.JaxRsActivator class to support
this requirement.

• The getBooks method from


com.redhat.training.bookstore.catalog.rest.CatalogResource returns a list of
all books available for sale. Make it accessible using the /api/books REST endpoints through
a GET request. The method must return JSON data.

• The getBookWithInventory method from


com.redhat.training.bookstore.catalog.rest.CatalogResource returns the number of books available in stock for a given ISBN. Make it accessible through a GET request to the /api/bookinventory/{isbn} REST endpoint. The method must return JSON data.

Use the
com.redhat.training.bookstore.catalog.rest.CatalogServiceEndpointTest test
case to confirm that the endpoints are available.

Configure a proxy interface to forward the requests to the inventory


microservice. Annotate the InventoryService interface from the
com.redhat.training.bookstore.catalog.rest.client package to be used as the
proxy of the third-party inventory microservice. The inventory microservice provides a REST endpoint at /inventory/{isbn}, which returns inventory data for a book by its isbn value in JSON format. This REST endpoint responds to HTTP GET requests.

• Annotate the getInventory method from the InventoryService interface to forward


requests to the inventory/{isbn} REST endpoint.

• Invoke the inventory service using the RESTEasy proxy framework in the
ClientConfiguration class.

Create a new instance of the Client class and point to the URL where all the inventory
microservice REST endpoints are available. As the URL may be different depending on
the environment executing the application, you must use the inventoryHost and the
inventoryPort attributes to dynamically configure the URL.

Inject the configuration defined in the MicroProfile configuration specification into the
ClientConfiguration class. If no value is set in the inventoryHost attribute, use
inventory-service as the default value. Similarly, if no value is set in the inventoryPort
attribute, use 8080 as the default value.

• Inject the InventoryService interface to access the REST endpoint with the proxy interface.

• Implement logic to provide the inventory microservice information in the catalog microservice.
You may get the source code in the /home/student/JB283/labs/lab-comp-review-
develop/getBookWithInventory.txt file.

Use the
com.redhat.training.bookstore.catalog.rest.CatalogServiceNotAuthenticatedTest
test case to confirm that the endpoints are correctly implemented.


Configure the inventory microservice to request JSON web tokens for authentication.

• Update the microservice to authenticate and authorize users using the JWTLoginModule
login module.

• The getInventory method provided by the InventoryResource class from the inventory
microservice must allow only users with the InventoryHandler role to access the REST
endpoint.

Configure the AuthService proxy interface to forward requests to


the auth microservice. Annotate the AuthService interface from the
com.redhat.training.bookstore.catalog.rest.client package to be used as a
proxy of the third-party auth microservice. The auth microservice provides a REST endpoint
available at /auth. This endpoint produces plain text data representing a JSON web token. This
JWT allows the catalog microservice to connect to the auth microservice and potentially other
microservices in the architecture. The auth microservice REST endpoint responds to HTTP POST
requests.

• Annotate the createToken method from the AuthService interface to forward requests to
the /auth REST endpoint.

• Implement logic to provide the auth microservice information in the catalog microservice.

Inject the configuration defined in the MicroProfile configuration specification into the
ClientConfiguration class. If no value is set in the authorizationHost attribute, use
auth-service as the default value. Similarly, if no value is set in the authorizationPort
attribute, use 8080 as the default value.

• Inject the AddAuthorizationHeaderFilter interface to access the REST endpoint with the
proxy interface.

Use the com.redhat.training.bookstore.catalog.rest.CatalogServiceTest test


case to confirm that the endpoints are correctly implemented.

Important
The other tests fail if the final implementation is correct.

Deploy the application on the OpenShift cluster in a project named comp-review-develop using
the fabric8 Maven plug-in.

Test the endpoints you implemented to verify that the microservice works, using the RESTClient Firefox plug-in. To validate that your deployment is correct, the http://catalog.apps.lab.example.com/api/bookinventory/12345 endpoint must return:

{"bookTitle":"Gone with the Wind","isbn":"12345","price":9.95,"inventory":12}

Evaluation
As the student user on workstation, run the lab comp-review-develop script with the
grade argument to confirm the success of this exercise. Correct any reported failures and rerun
the script until successful.

[student@workstation ~]$ lab comp-review-develop grade

This concludes the lab.


Solution
In this review, you will configure access to the bookstore inventory microservice using REST
endpoints, configure a proxy interface to a third-party microservice, and implement the response
to a REST endpoint.

Outcomes
You should be able to:
• Configure a microservice to provide access using REST endpoints.

• Access a third-party REST endpoint microservice using RESTEasy proxy framework interface.

• Access the proxy interface with RESTEasy client proxy framework.

Before you begin


If you did not reset the workstation, master, node1, and node2 VMs at the end of the last
chapter, save any work you want to keep from earlier exercises on those machines, and reset
them now.

Use the git clone command to clone the JB283-comprehensive-review repository to the
workstation machine.

[student@workstation ~]$ git clone \


http://services.lab.example.com/JB283-comprehensive-review
Cloning into 'JB283-comprehensive-review'...
...output omitted...
Resolving deltas: 100% (2803/2803), done.

Log in to workstation as student and run the following command:

[student@workstation ~]$ lab comp-review-develop setup

Instructions
The bookstore application provides the catalog microservice responsible for providing
information about the available books for sale.

The source code from the application is available in the http://services.lab.example.com/JB283-comprehensive-review Git repository.

Use the Existing Maven Projects import wizard in JBoss Developer Studio to import the JB283-comprehensive-review project.

For the purpose of this lab, work with the lab-comp-review-develop branch, which contains
unfinished work from a former developer.

Implement the parser responsible for converting a JSON file into a set
of Book Java objects that the application uses as a mock database. The
com.redhat.training.bookstore.catalog.model.BookParser class parses a JSON
file into a Set<Book> collection containing data from the books.json file included in the
src/main/resources directory of the project. Use the JSON-P API to parse the file. Use
the com.redhat.training.bookstore.catalog.model.BookParserTest to evaluate
whether the implementation is working.

Configure the catalog microservice to enable REST endpoints using the following guidelines:


• Trigger REST support for the microservice and make all


the endpoints available under the /api URI. Create the
com.redhat.training.bookstore.catalog.rest.JaxRsActivator class to support
this requirement.

• The getBooks method from


com.redhat.training.bookstore.catalog.rest.CatalogResource returns a list of
all books available for sale. Make it accessible using the /api/books REST endpoints through
a GET request. The method must return JSON data.

• The getBookWithInventory method from


com.redhat.training.bookstore.catalog.rest.CatalogResource returns the number of books available in stock for a given ISBN. Make it accessible through a GET request to the /api/bookinventory/{isbn} REST endpoint. The method must return JSON data.

Use the
com.redhat.training.bookstore.catalog.rest.CatalogServiceEndpointTest test
case to confirm that the endpoints are available.

Configure a proxy interface to forward the requests to the inventory


microservice. Annotate the InventoryService interface from the
com.redhat.training.bookstore.catalog.rest.client package to be used as the
proxy of the third-party inventory microservice. The inventory microservice provides a REST endpoint at /inventory/{isbn}, which returns inventory data for a book by its isbn value in JSON format. This REST endpoint responds to HTTP GET requests.

• Annotate the getInventory method from the InventoryService interface to forward


requests to the inventory/{isbn} REST endpoint.

• Invoke the inventory service using the RESTEasy proxy framework in the
ClientConfiguration class.

Create a new instance of the Client class and point to the URL where all the inventory
microservice REST endpoints are available. As the URL may be different depending on
the environment executing the application, you must use the inventoryHost and the
inventoryPort attributes to dynamically configure the URL.

Inject the configuration defined in the MicroProfile configuration specification into the
ClientConfiguration class. If no value is set in the inventoryHost attribute, use
inventory-service as the default value. Similarly, if no value is set in the inventoryPort
attribute, use 8080 as the default value.

• Inject the InventoryService interface to access the REST endpoint with the proxy interface.

• Implement logic to provide the inventory microservice information in the catalog microservice.
You may get the source code in the /home/student/JB283/labs/lab-comp-review-
develop/getBookWithInventory.txt file.

Use the
com.redhat.training.bookstore.catalog.rest.CatalogServiceNotAuthenticatedTest
test case to confirm that the endpoints are correctly implemented.

Configure the inventory microservice to request JSON web tokens for authentication.


• Update the microservice to authenticate and authorize users using the JWTLoginModule
login module.

• The getInventory method provided by the InventoryResource class from the inventory
microservice must allow only users with the InventoryHandler role to access the REST
endpoint.

Configure the AuthService proxy interface to forward requests to


the auth microservice. Annotate the AuthService interface from the
com.redhat.training.bookstore.catalog.rest.client package to be used as a
proxy of the third-party auth microservice. The auth microservice provides a REST endpoint
available at /auth. This endpoint produces plain text data representing a JSON web token. This
JWT allows the catalog microservice to connect to the auth microservice and potentially other
microservices in the architecture. The auth microservice REST endpoint responds to HTTP POST
requests.

• Annotate the createToken method from the AuthService interface to forward requests to
the /auth REST endpoint.

• Implement logic to provide the auth microservice information in the catalog microservice.

Inject the configuration defined in the MicroProfile configuration specification into the
ClientConfiguration class. If no value is set in the authorizationHost attribute, use
auth-service as the default value. Similarly, if no value is set in the authorizationPort
attribute, use 8080 as the default value.

• Inject the AddAuthorizationHeaderFilter interface to access the REST endpoint with the
proxy interface.

Use the com.redhat.training.bookstore.catalog.rest.CatalogServiceTest test


case to confirm that the endpoints are correctly implemented.

Important
The other tests fail if the final implementation is correct.

Deploy the application on the OpenShift cluster in a project named comp-review-develop using
the fabric8 Maven plug-in.

Test the endpoints you implemented to verify that the microservice works, using the RESTClient Firefox plug-in. To validate that your deployment is correct, the http://catalog.apps.lab.example.com/api/bookinventory/12345 endpoint must return:

{"bookTitle":"Gone with the Wind","isbn":"12345","price":9.95,"inventory":12}

Steps
1. Check out the lab-comp-review-develop Git branch to get the correct version of the
application code for this exercise.

1.1. Run the following commands to change to the correct directory and check out the
required branch:

[student@workstation ~]$ cd JB283-comprehensive-review


[student@workstation JB283-comprehensive-review]$ git checkout lab-comp-review-develop
Switched to a new branch 'lab-comp-review-develop'

1.2. Use the git status command to ensure that you are on the correct branch.

[student@workstation JB283-comprehensive-review]$ git status


# On branch lab-comp-review-develop
nothing to commit, working directory clean

2. Import the JB283-comprehensive-review project into JBoss Developer Studio.

2.1. Double-click the JBoss Developer Studio icon on the workstation VM desktop. Click
Launch in the Eclipse Launcher dialog box.

Note
If the JBoss Developer Studio Usage dialog box appears, click No to dismiss
it.

2.2. In the JBoss Developer Studio menu, click File > Import to open the Import wizard.

2.3. In the Import dialog box, click Maven > Existing Maven Projects, and then click Next.

2.4. In the Import Maven Projects dialog box, click Browse. The Select Root Folder dialog
box displays.

2.5. Navigate to the /home/student directory. Select the JB283-comprehensive-


review folder and click OK.

2.6. Click Finish to start the import.

3. Implement the parser logic responsible for reading the books.json file used by the
bookstore application.

3.1. In JBoss Developer Studio, open the BookParser class by expanding the catalog-
service item in the Project Explorer tab in the left pane.

Click catalog-service > Java Resources > src/main/java >


com.redhat.training.bookstore.catalog.model to expand it. Double-click the
BookParser.java file.

3.2. Implement the parse method that the catalog microservice uses to read data from the file where all the books are stored. To parse the JSON file, use JSON-P as the base API.

In the BookParser class, create a JsonReaderFactory instance to generate a


JsonReader instance and process the file. Read each item in the file into a JsonArray
object and convert it to a Set instance.

public Set<Book> parse(final URL bookFile) {

Set<Book> books = new HashSet<Book>();


try {
JsonReaderFactory factory = Json.createReaderFactory(null);
JsonReader reader = factory.createReader(bookFile.openStream());
JsonArray bookArray = reader.readArray();
for (JsonValue book : bookArray) {
books.add(new Book((JsonObject) book));
}

} catch (IOException e) {
System.out.println(e);
}
return books;
}

Note
You may get the source code in the /home/student/JB283/labs/lab-
comp-review-develop/book.txt file.

3.3. Press Ctrl+S to save your changes.

3.4. Run the BookParserTest test case to evaluate whether you implemented the parser
correctly.

Click catalog-service > Java Resources > src/test/java >


com.redhat.training.bookstore.catalog.model to expand it. Double-click the
BookParserTest.java file.

Right-click the BookParserTest test case and select Run As > JUnit Test in JBoss
Developer Studio. The JUnit tab shows the output from the test case execution and a
green bar is displayed after the test execution.

4. Configure the catalog microservice to support REST endpoints. Create the


com.redhat.training.bookstore.catalog.rest.JaxRsActivator class and
annotate it with JAX-RS API annotations to allow REST endpoints at /api URL.

4.1. In JBoss Developer Studio, select the


com.redhat.training.bookstore.catalog.rest package and create the
JaxRsActivator class.

Click catalog-service > Java Resources > src/main/java >


com.redhat.training.bookstore.catalog.rest to expand it. Right-click the
com.redhat.training.bookstore.catalog.rest package and select New >
Class. Use the following values to create a class:

• Name: JaxRsActivator

• Superclass: javax.ws.rs.core.Application

Click Finish to create the class.

4.2. Configure the JaxRsActivator class you just created to respond to requests at /api
URL with the @ApplicationPath class-level annotation.


Before the class declaration, use the @ApplicationPath annotation with the /api
parameter.

@ApplicationPath("/api")
public class JaxRsActivator extends Application {
}

4.3. Press Ctrl+S to save your changes.

4.4. Annotate the class and the methods that respond to requests from REST
endpoints in the CatalogResource class with JAX-RS API annotations. The
getBooks method must be available at /api/books URI using the HTTP GET
method. The getBookWithInventory method must be available at /api/
bookinventory/{isbn} URI.

In JBoss Developer Studio, open the CatalogResource class by expanding the


catalog-service item in the Project Explorer tab in the left pane. Click catalog-service
> Java Resources > src/main/java > com.redhat.training.bookstore.catalog.rest to
expand it. Double-click the CatalogResource.java file.

4.5. Annotate the CatalogResource class with @Path class-level annotation and make it
available at / URI.

@Path("/")
public class CatalogResource {

...output omitted...

4.6. Annotate the getBooks method with the @GET method-level annotation and make it
available at /api/books URI.

// TODO Enable the endpoint to support GET HTTP method


@GET
// TODO Make the endpoint accessible via /books
@Path("/books")
@Produces(MediaType.APPLICATION_JSON)
public Set<Book> getBooks(){
...output omitted...

4.7. Annotate the getBookWithInventory method with the @GET method-level


annotation and make it available at the /api/bookinventory/{isbn} URI. The isbn
parameter is provided by the request to the endpoint.

// TODO Enable the endpoint to support GET HTTP method


@GET
// TODO Make the endpoint accessible via /bookinventory/{isbn}
@Path("/bookinventory/{isbn}")
// TODO return the result as a JSON object.
@Produces(MediaType.APPLICATION_JSON)
// TODO get the ISBN parameter as a path parameter.
public Response getBookWithInventory(@PathParam("isbn") String isbn) {
... output omitted...


4.8. Press Ctrl+S to save your changes.

4.9. Run the CatalogServiceEndpointTest test case to confirm that you implemented
the endpoint addresses correctly.

Click catalog-service > Java Resources > src/test/java >


com.redhat.training.bookstore.catalog.rest to expand it. Double-click the
CatalogServiceEndpointTest.java file.

Right-click the CatalogServiceEndpointTest test case and select Run As > JUnit
Test in JBoss Developer Studio. The JUnit tab shows the output from the test case
execution and a green bar is displayed after the test execution.

5. Configure the InventoryService proxy interface that invokes the inventory microservice
REST endpoint to use RESTEasy proxy framework.

5.1. In JBoss Developer Studio, open the InventoryService class by


expanding the catalog-service item in the Project Explorer tab in the left
pane, and then click catalog-service > Java Resources > src/main/java >
com.redhat.training.bookstore.catalog.rest.client to expand it. Double-click the
InventoryService.java file.

5.2. Configure the getInventory method from the InventoryService interface to


accept HTTP GET method requests and forward it to the inventory/{isbn} REST
endpoint from the inventory microservice. Use the @GET method-level annotation.

Update the interface with the following code:

//TODO implement the proxy interface to access the inventory/{isbn} endpoint from the inventory application
@GET
@Path("inventory/{isbn}")
public BookInventory getInventory(@PathParam("isbn") String isbn);

5.3. Press Ctrl+S to save your changes.

5.4. Invoke the inventory service using the RESTEasy proxy framework in the
ClientConfiguration class.

In JBoss Developer Studio, open the ClientConfiguration class by


expanding the catalog-service item in the Project Explorer tab in the left
pane, and then click catalog-service > Java Resources > src/main/java >
com.redhat.training.bookstore.catalog.rest.client to expand it. Double-click the
ClientConfiguration.java file.

Create a new instance of the Client class and point to the URL where all the inventory
microservice REST endpoints are available. As the URL may be different depending on
the environment executing the application, you must use the inventoryHost and the
inventoryPort attributes to dynamically configure the URL.

Cast the target address to a ResteasyWebTarget instance and create the proxy.

Return the resulting object as the method outcome.


@Produces
@Singleton
public InventoryService inventoryService() {
Client client = ClientBuilder.newClient();
WebTarget target = client.target("http://" + inventoryHost + ":" + inventoryPort + "/api");
ResteasyWebTarget rtarget = (ResteasyWebTarget) target;
InventoryService service = rtarget.proxy(InventoryService.class);
return service;
}

Note
You may get the source code in the /home/student/JB283/labs/lab-
comp-review-develop/inventoryService.txt file.

Save the file.

5.5. Inject the inventoryHost and the inventoryPort attributes using the MicroProfile
configuration specification.

Inject the configuration defined in the MicroProfile configuration specification into the
ClientConfiguration class. If no value is set in the inventoryHost attribute,
use inventory-service as the default value. Similarly, if no value is set in the
inventoryPort attribute, use 8080 as the default value.

Annotate with @ConfigProperty attribute-level annotation as follows:

//TODO use the MicroProfile configuration spec to inject the inventoryPort variable
@Inject
@ConfigProperty(name="inventoryPort", defaultValue="8080")
private String inventoryPort;

//TODO use the MicroProfile configuration spec to inject the inventoryHost variable
@Inject
@ConfigProperty(name="inventoryHost", defaultValue="inventory-service")
private String inventoryHost;

5.6. Press Ctrl+S to save your changes.

5.7. Implement the logic to get the inventory for an ISBN in the CatalogResource class
and return it to the REST endpoint invocation.

In JBoss Developer Studio, open the CatalogResource.java file by expanding the


catalog-service item in the Project Explorer tab in the left pane, and then click catalog-
service > Java Resources > src/main/java > com.redhat.training.bookstore.catalog.rest
to expand it. Double-click the CatalogResource.java file.

Inject the InventoryService instance created by the inventoryService method


as the class attribute and use it to get the inventory.


@Inject
private InventoryService inventoryService;
...output omitted...

5.8. Implement the getBookWithInventory method to look for the inventory using the
inventoryService attribute declared in the previous step. If the book is not available,
then set the inventory to zero.

// TODO Enable the endpoint to support GET HTTP method


@GET
// TODO Make the endpoint accessible via /bookinventory/{isbn}
@Path("/bookinventory/{isbn}")
// TODO return the result as a JSON object.
@Produces(MediaType.APPLICATION_JSON)

public Response getBookWithInventory(@PathParam("isbn") String isbn) {


for (Book book : db.getBooks()) {
if (isbn.equals(book.getIsbn())) {
try {
BookInventory inventory = inventoryService.getInventory(isbn);
if (inventory == null)
book.setInventory(0);
else
book.setInventory(inventory.getInventory());
return Response.ok(book, MediaType.APPLICATION_JSON).build();
} catch (NotAuthorizedException e) {
log.error("Inventory service call failed. Unauthorized.");
throw e;
}
}
}
return Response.status(404).build();
// return Response.ok(null, MediaType.APPLICATION_JSON).build();
}
...

Note
You may get the source code in the /home/student/JB283/labs/lab-
comp-review-develop/getBookWithInventory.txt file.

5.9. Press Ctrl+S to save your changes.

5.10.Run the CatalogServiceNotAuthenticatedTest test case to confirm that you


implemented the endpoint addresses correctly.

Right-click the CatalogServiceNotAuthenticatedTest test case and select Run


As > JUnit Test in JBoss Developer Studio. The JUnit tab shows the output from the
test case execution and a green bar is displayed after the test execution.

6. Configure the inventory microservice to request JSON web tokens for authentication.

6.1. In the project-defaults.yml file from the inventory-service microservice,


configure the microservice to support JWT.


In JBoss Developer Studio, open the project-defaults.yml file by expanding the


inventory-service item in the Project Explorer tab in the left pane, then click inventory-
service > Java Resources > src/main/resources > META-INF to expand it. Double-click
the project-defaults.yml file.

6.2. Update the microservice to authenticate and authorize users using the
JWTLoginModule login module.

Configure the rm login module to use the JWTLoginModule login module.

login-modules:
  - login-module: rm
    #TODO Use the JWTLoginModule for configuration
    code: org.wildfly.swarm.microprofile.jwtauth.deployment.auth.jaas.JWTLoginModule

The login module class name should be defined on a single line.

6.3. Press Ctrl+S to save your changes.

6.4. Annotate the InventoryResource from the inventory microservice to request


credentials.

In JBoss Developer Studio, open the InventoryResource class by


expanding the inventory-service item in the Project Explorer tab in the left
pane, and then click inventory-service > Java Resources > src/main/java >
com.redhat.training.bookstore.inventory.rest to expand it. Double-click the
InventoryResource.java file.

6.5. Annotate the class with the @DeclareRoles class-level annotation to define which
roles may access the REST endpoints.

@DeclareRoles("InventoryHandler")
public class InventoryResource {
...output omitted...

6.6. Annotate the getInventory method with @RolesAllowed method-level annotation


to allow only users with the InventoryHandler role to access the REST endpoint.

@RolesAllowed("InventoryHandler")
public Response getInventory(@PathParam("isbn") String isbn) {
...output omitted...

6.7. Press Ctrl+S to save your changes.

7. Configure a proxy interface that invokes the auth microservice REST endpoint.

7.1. In JBoss Developer Studio, open the AuthService class by expanding the catalog-
service item in the Project Explorer tab in the left pane, and then click catalog-service
> Java Resources > src/main/java > com.redhat.training.bookstore.catalog.rest.client
to expand it. Double-click the AuthService.java file.


7.2. Configure the createToken method from the AuthService interface to accept HTTP POST method requests and forward them to the /auth REST endpoint from the auth microservice. Use the @POST method-level annotation.

Update the interface with the following code:

//TODO implement the proxy interface to access the auth endpoint from the auth application
@POST
@Path("/auth")
@Produces(MediaType.TEXT_PLAIN)
@Consumes(MediaType.TEXT_PLAIN)
public String createToken(String credentials);

7.3. Press Ctrl+S to save your changes.

7.4. Invoke the auth microservice using the RESTEasy proxy framework in the
ClientConfiguration class.

In JBoss Developer Studio, open the ClientConfiguration class by


expanding the catalog-service item in the Project Explorer tab in the left
pane, and then click catalog-service > Java Resources > src/main/java >
com.redhat.training.bookstore.catalog.rest.client to expand it. Double-click the
ClientConfiguration.java file.

Create a new instance of the Client class and set the URL where all the auth microservice REST endpoints are available. As the URL may be different depending on the environment executing the application, you must use the authorizationHost and the authorizationPort attributes to dynamically configure the URL.

Cast the target address to a ResteasyWebTarget instance and create the proxy.

Return the resulting object as the method outcome.

@Produces
@Singleton
// TODO Instantiate the AuthService using the RESTEasy proxy framework
public AuthService authService() {
    // TODO Connect to the remote authentication service using the RESTEasy proxy framework
    Client client = ClientBuilder.newClient();
    WebTarget target = client.target("http://" + authorizationHost + ":" + authorizationPort + "/api");
    ResteasyWebTarget rtarget = (ResteasyWebTarget) target;
    AuthService service = rtarget.proxy(AuthService.class);
    return service;
}

Note
You may get the source code in the /home/student/JB283/labs/lab-
comp-review-develop/authService.txt file.


7.5. Press Ctrl+S to save your changes.

7.6. Inject the authorizationHost and the authorizationPort attributes using the MicroProfile configuration specification.

7.7. Inject the configuration defined in the MicroProfile configuration specification into


the ClientConfiguration class. If no value is set in the authorizationHost
attribute, use auth-service as the default value. Similarly, if no value is set in the
authorizationPort attribute, use 8080 as the default value.

Annotate with @ConfigProperty attribute-level annotation as follows:

//TODO use the MicroProfile configuration spec to inject the authorizationPort variable
@Inject
@ConfigProperty(name="authorizationPort", defaultValue="8080")
private String authorizationPort;

//TODO use the MicroProfile configuration spec to inject the authorizationHost variable
@Inject
@ConfigProperty(name="authorizationHost", defaultValue="auth-service")
private String authorizationHost;

7.8. Press Ctrl+S to save your changes.

7.9. Implement the logic to get the JWT using the AddAuthorizationHeaderFilter class and add it to the HTTP request header on each REST endpoint invocation. Inject the AuthService instance created by the authService method as a class attribute, and use it to request the token from the auth microservice.

@Inject
private AuthService service;

//TODO use the MicroProfile configuration spec to inject the inventoryServiceUsername variable
@Inject
@ConfigProperty(name="inventoryServiceUsername")
private String username;

//TODO use the MicroProfile configuration spec to inject the inventoryServicePassword variable
@Inject
@ConfigProperty(name="inventoryServicePassword")
private String password;

@Override
public void filter(ClientRequestContext requestContext) throws IOException {
    //TODO Implement the filter to request a JWT from the authorization microservice
    String token = service.createToken(username + ":" + password);
    //TODO Add the header named Authorization with the requested token
    requestContext.getHeaders().add("Authorization", "Bearer " + token);
}


Note
You may get the source code in the /home/student/JB283/labs/lab-
comp-review-develop/filter.txt file.

7.10. Press Ctrl+S to save your changes.

7.11. Inject an AddAuthorizationHeaderFilter instance into the ClientConfiguration class, and register it with the client instance to automatically invoke the filter on each request made to the inventory microservice.

@Inject
private AddAuthorizationHeaderFilter filter;

public InventoryService inventoryService() {


// TODO Connect to the remote inventory using the RESTEasy proxy framework
Client client = ClientBuilder.newClient();
client.register(filter);
...

7.12. Run the CatalogServiceTest test case to confirm that you implemented the secure
REST endpoint addresses correctly.

Right-click the CatalogServiceTest test case and select Run As > JUnit Test in
JBoss Developer Studio. The JUnit tab shows the output from the test case execution
and a green bar is displayed after the test execution.

8. Start the auth, catalog, and inventory microservices.

8.1. Start the auth microservice. From your terminal window, run the following commands:

[student@workstation JB283-comprehensive-review]$ cd auth


[student@workstation auth]$ ./run.sh
...output omitted...
2018-03-09 17:03:23,329 INFO [org.wildfly.swarm] (main) WFSWARM99999: WildFly
Swarm is Ready

Leave the terminal window running.

8.2. Start the catalog microservice. Open a new terminal window on the workstation VM
and run the following commands:

[student@workstation ~]$ cd JB283-comprehensive-review/catalog


[student@workstation catalog]$ ./run.sh
...output omitted...
2018-03-09 17:03:23,329 INFO [org.wildfly.swarm] (main) WFSWARM99999: WildFly
Swarm is Ready

8.3. Start the inventory microservice. Open a new terminal window on the workstation
VM and run the following commands:


[student@workstation ~]$ cd JB283-comprehensive-review/inventory


[student@workstation inventory]$ ./run.sh
...output omitted...
2018-03-09 17:03:23,329 INFO [org.wildfly.swarm] (main) WFSWARM99999: WildFly
Swarm is Ready

8.4. Test the service from a client using the RESTClient Firefox plug-in. Start Firefox on the
workstation VM and click the RESTClient plug-in in the browser's toolbar.

The /api/bookinventory/12345 REST endpoint calls the /api/inventory/12345


REST endpoint from the inventory microservice. To authenticate, the catalog microservice requests a JWT from the auth microservice.

8.5. Select GET as the Method. In the URL form, enter http://localhost:8080/api/bookinventory/12345.

8.6. Click Send.

8.7. In the Headers tab, verify that the Status Code is 200 OK.

8.8. In the Preview tab, verify that the response matches the following:

{"bookTitle":"Gone with the Wind","isbn":"12345","price":9.95,"inventory":12}

8.9. Return to the terminal window running the auth microservice and stop the service using
Ctrl+C.

8.10.Return to the terminal window running the catalog microservice and stop the service
using Ctrl+C.

8.11. Return to the terminal window running the inventory microservice and stop the service
using Ctrl+C.

9. Deploy the auth, catalog, and inventory microservices to the OpenShift cluster.

9.1. Open a terminal window on the workstation VM and log in to OpenShift cluster as the
developer user:

[student@workstation ~]$ oc login -u developer -p redhat \


https://master.lab.example.com

9.2. Create the comp-review-develop project:

[student@workstation ~]$ oc new-project comp-review-develop


Now using project "comp-review-develop"...

9.3. Open a new terminal window and navigate to the auth microservice project. Deploy it on
the OpenShift cluster:

[student@workstation ~]$ cd JB283-comprehensive-review/auth


[student@workstation auth]$ mvn clean fabric8:deploy -DskipTests
[INFO] Scanning for projects...


[INFO] F8: Running in OpenShift mode


...
[INFO] Current reconnect backoff is 4000 milliseconds (T2)
...
[INFO] BUILD SUCCESS
...

Note
The following error may occur during the deployment. You may disregard it.

[ERROR] Exception in reconnect


java.util.concurrent.RejectedExecutionException: Task
java.util.concurrent.ScheduledThreadPoolExecutor
$ScheduledFutureTask@7eae1359 rejected from
java.util.concurrent.ScheduledThreadPoolExecutor@3bac82e8[Shutting
down, pool size = 1, active threads = 1, queued tasks = 0, completed
tasks = 12]

9.4. Navigate to the catalog microservice project. Deploy it on the OpenShift cluster:

[student@workstation auth]$ cd ../catalog


[student@workstation catalog]$ mvn clean fabric8:deploy -DskipTests
[INFO] Scanning for projects...
[INFO] F8: Running in OpenShift mode
...
[INFO] Current reconnect backoff is 4000 milliseconds (T2)
...
[INFO] BUILD SUCCESS
...

Note
The following error may occur during deployment. You may disregard it.

[ERROR] Exception in reconnect


java.util.concurrent.RejectedExecutionException: Task
java.util.concurrent.ScheduledThreadPoolExecutor
$ScheduledFutureTask@7eae1359 rejected from
java.util.concurrent.ScheduledThreadPoolExecutor@3bac82e8[Shutting
down, pool size = 1, active threads = 1, queued tasks = 0, completed
tasks = 12]

9.5. Navigate to the inventory microservice project. Deploy it on the OpenShift cluster:

[student@workstation catalog]$ cd ../inventory


[student@workstation inventory]$ mvn clean fabric8:deploy -DskipTests
[INFO] Scanning for projects...
[INFO] F8: Running in OpenShift mode
...
[INFO] Current reconnect backoff is 4000 milliseconds (T2)
...


[INFO] BUILD SUCCESS


...

Note
The following error may occur during deployment. You may disregard it.

[ERROR] Exception in reconnect


java.util.concurrent.RejectedExecutionException: Task
java.util.concurrent.ScheduledThreadPoolExecutor
$ScheduledFutureTask@7eae1359 rejected from
java.util.concurrent.ScheduledThreadPoolExecutor@3bac82e8[Shutting
down, pool size = 1, active threads = 1, queued tasks = 0, completed
tasks = 12]

10. Test the service from a client using the RESTClient Firefox plug-in. Start Firefox on the
workstation VM and click the RESTClient plug-in in the browser's toolbar.

The http://catalog.apps.lab.example.com/api/bookinventory/12345 REST endpoint calls the /api/inventory/12345 REST endpoint of the inventory microservice. To authenticate, the catalog microservice requests a JWT from the auth microservice.

11. Select GET as the Method. In the URL form, enter http://catalog.apps.lab.example.com/api/bookinventory/12345.

12. Click Send.

13. Verify, in the Headers tab, that the Status Code is 200 OK.

14. In the Preview tab, verify that the response matches the following:

{"bookTitle":"Gone with the Wind","isbn":"12345","price":9.95,"inventory":12}

Evaluation
As the student user on workstation, run the lab comp-review-develop script with the
grade argument to confirm the success of this exercise. Correct any reported failures and rerun
the script until successful.

[student@workstation ~]$ lab comp-review-develop grade

This concludes the lab.


Lab: Monitoring a Microservice

In this review, you will implement health checks and custom metrics into a pair of WildFly
Swarm microservices to monitor their availability and performance in real time. You will also
implement fault tolerance into one of the microservices so that if the service which it depends on
is unavailable, it can insulate its clients from that failure.

Outcomes
You should be able to:
• Use the MicroProfile health specification to inspect microservice availability.

• Expose custom metrics from a microservice using the MicroProfile metrics specification.

• Enable fault tolerance policies in a microservice using the MicroProfile fault tolerance
specification.

• Deploy and test the microservices locally, then containerize the services and deploy them to
the OpenShift cluster using the fabric8 Maven plug-in.

Before you begin


Log in to workstation as student, and run the following command:

[student@workstation ~]$ lab comp-review-monitor setup

Instructions
For the purpose of this lab, work with the lab-comp-review-monitor branch, which contains
the solution from the previous lab as a starting point.

The bookstore microservices you built in the previous lab are starting to attract real clients, as
the developers working on the front-end web application have started to call your microservices.
However as the traffic increases, you are beginning to have problems with performance and
downtime, and people are starting to complain.

In order to better diagnose problems in the inventory and catalog microservices, you decide to
implement health check REST endpoints that OpenShift can call to determine the health of your
microservices. To build these endpoints use the MicroProfile health specification that is available
in WildFly Swarm applications.

Follow these guidelines to implement the health checks:

• Be sure to implement health checks in both the inventory and catalog projects.

• Empty classes are provided for you to implement the health checks in both projects:

◦ inventory:
com.redhat.training.bookstore.inventory.health.InventoryHealth

◦ catalog:
com.redhat.training.bookstore.catalog.health.CatalogServiceCheck

• Both health checks must include a data point with the name catalogSize that contains the
current number of books in the database in the response from the health check endpoint.

• The health checks must use the following names:

◦ inventory: inventory-service-check

◦ catalog: catalog-service-check

• Both health checks must use the current size of their databases to determine the health of
their respective services. If the database is empty, the microservice must report its status as
DOWN. If the database has any data in it at all, the microservice must report its status as UP.

After the health checks are in place, you determine that you need to improve the performance
of the catalog microservice, which is especially poor when the inventory microservice is not
performing well or is unavailable. To fix this, use the MicroProfile fault tolerance specification
available in WildFly Swarm applications.

Follow these guidelines to implement fault tolerance in the catalog microservice:

• Implement a timeout policy so that the catalog microservice automatically times out when any
calls to the inventory microservice take longer than 5 seconds.

• Complete the fallback method called getBookWithDefaultInventory(String isbn)


that meets the following requirements:
◦ If the passed-in isbn value is present in the database, this method must return an HTTP
response code of 200 with an inventory value of -1 as an indicator that the inventory
service call failed.

◦ If the isbn is not present in the database, the fallback method must return an HTTP
response code 404.

• Configure the getBookWithInventory(String isbn) method to use the


getBookWithDefaultInventory(String isbn) as its fallback.

• Implement a retry policy so that failed executions caused by any instance of an Exception
thrown by the getBookWithInventory method are reattempted twice before the fallback
method is called. The retry policy must adhere to the following guidelines:

◦ The method must be retried for a maximum of two retry attempts.

◦ The delay between retry attempts must be 1 second.

• Implement a circuit breaker policy so that the service fails fast after a certain number of
failures. The circuit breaker must adhere to the following guidelines:

◦ Once the circuit opens, the threshold to close the circuit is five successful executions.

◦ The rolling window of executions the policy needs to use to determine the state of the circuit
is the last ten method invocations.

◦ The failure threshold required to open the circuit is an 80% failure rate, that is, eight of the last ten executions resulting in failure.

◦ When the circuit opens, a ten second delay must occur before subsequent executions are
allowed to occur.

• Configure a limit of five concurrent requests that can be processed simultaneously using the
semaphore approach, instead of the thread pool approach.


Finally, to keep track of failures in the catalog microservice, you decide to add some
custom metrics that you can monitor. You wish to track both the number of calls to
the inventory microservice that fail, as well as timing data for each invocation of the
getBookWithInventory method, so that you can identify poor performance.

Follow these guidelines to implement custom metrics in the catalog microservice:

• Include a counter metric named failureCount, which uses the following attributes:
◦ Name: failureCount

◦ Description: Number of times the inventory endpoint fails

◦ Display Name: InventoryLookupFailureCount

◦ Absolute: True

• Update the getBookWithDefaultInventory(String isbn) method to increment the


failureCount each time it is invoked.

• Configure the getBookWithInventory(String isbn) method to be timed using a metric


named inventoryTimer, which uses the following attributes:
◦ Name: inventoryTimer

◦ Unit: MetricUnits.MILLISECONDS

◦ Description: Invocation time for getting book inventory

◦ Display Name: InventoryTimer

◦ Absolute: True

Test the application locally by running the three microservices included with this lab (auth,
catalog, and inventory) using the provided run.sh scripts, located in each microservice's root directory.

Use the RESTClient Firefox plug-in to test the fault tolerance added to the catalog microservice,
as well as the health and metrics endpoints implemented to verify that the changes made in the
lab work.

Finally, deploy the application on the OpenShift cluster in a project named comp-review-monitor using the fabric8 Maven plug-in to prepare the lab for grading.

Evaluation
As the student user on workstation, run the lab comp-review-monitor script with the
grade argument to confirm the success of this exercise. Correct any reported failures and rerun
the script until successful.

[student@workstation ~]$ lab comp-review-monitor grade

This concludes the lab.


Solution
In this review, you will implement health checks and custom metrics into a pair of WildFly
Swarm microservices to monitor their availability and performance in real time. You will also
implement fault tolerance into one of the microservices so that if the service which it depends on
is unavailable, it can insulate its clients from that failure.

Outcomes
You should be able to:
• Use the MicroProfile health specification to inspect microservice availability.

• Expose custom metrics from a microservice using the MicroProfile metrics specification.

• Enable fault tolerance policies in a microservice using the MicroProfile fault tolerance
specification.

• Deploy and test the microservices locally, then containerize the services and deploy them to
the OpenShift cluster using the fabric8 Maven plug-in.

Before you begin


Log in to workstation as student, and run the following command:

[student@workstation ~]$ lab comp-review-monitor setup

Instructions
For the purpose of this lab, work with the lab-comp-review-monitor branch, which contains
the solution from the previous lab as a starting point.

The bookstore microservices you built in the previous lab are starting to attract real clients, as
the developers working on the front-end web application have started to call your microservices.
However as the traffic increases, you are beginning to have problems with performance and
downtime, and people are starting to complain.

In order to better diagnose problems in the inventory and catalog microservices, you decide to
implement health check REST endpoints that OpenShift can call to determine the health of your
microservices. To build these endpoints use the MicroProfile health specification that is available
in WildFly Swarm applications.

Follow these guidelines to implement the health checks; a short code sketch follows the list:

• Be sure to implement health checks in both the inventory and catalog projects.

• Empty classes are provided for you to implement the health checks in both projects:

◦ inventory:
com.redhat.training.bookstore.inventory.health.InventoryHealth

◦ catalog:
com.redhat.training.bookstore.catalog.health.CatalogServiceCheck

• Both health checks must include a data point with the name catalogSize that contains the
current number of books in the database in the response from the health check endpoint.


• The health checks must use the following names:

◦ inventory: inventory-service-check

◦ catalog: catalog-service-check

• Both health checks must use the current size of their databases to determine the health of
their respective services. If the database is empty, the microservice must report its status as
DOWN. If the database has any data in it at all, the microservice must report its status as UP.
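
For orientation, the following is a minimal sketch of the general shape such a health check can take with the MicroProfile health API, shown for the catalog service. The injected BookDatabase bean and its getBooks() method are assumptions used only for illustration; use whatever data access the project actually provides, and keep the class and check names listed above.

package com.redhat.training.bookstore.catalog.health;

import javax.inject.Inject;

import org.eclipse.microprofile.health.Health;
import org.eclipse.microprofile.health.HealthCheck;
import org.eclipse.microprofile.health.HealthCheckResponse;

// Hypothetical data access bean, assumed to expose getBooks().
import com.redhat.training.bookstore.catalog.model.BookDatabase;

@Health
public class CatalogServiceCheck implements HealthCheck {

    @Inject
    private BookDatabase db;

    @Override
    public HealthCheckResponse call() {
        int catalogSize = db.getBooks().size();

        // Report UP only when the database contains at least one book,
        // and always include the catalogSize data point.
        return HealthCheckResponse.named("catalog-service-check")
                .withData("catalogSize", catalogSize)
                .state(catalogSize > 0)
                .build();
    }
}

The inventory check follows the same pattern, using the inventory-service-check name.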

After the health checks are in place, you determine that you need to improve the performance
of the catalog microservice, which is especially poor when the inventory microservice is not
performing well or is unavailable. To fix this, use the MicroProfile fault tolerance specification
available in WildFly Swarm applications.

Follow these guidelines to implement fault tolerance in the catalog microservice; a short code sketch follows the list:

• Implement a timeout policy so that the catalog microservice automatically times out when any
calls to the inventory microservice take longer than 5 seconds.

• Complete the fallback method called getBookWithDefaultInventory(String isbn)


that meets the following requirements:
◦ If the passed-in isbn value is present in the database, this method must return an HTTP
response code of 200 with an inventory value of -1 as an indicator that the inventory
service call failed.

◦ If the isbn is not present in the database, the fallback method must return an HTTP
response code 404.

• Configure the getBookWithInventory(String isbn) method to use the


getBookWithDefaultInventory(String isbn) as its fallback.

• Implement a retry policy so that failed executions caused by any instance of an Exception
thrown by the getBookWithInventory method are reattempted twice before the fallback
method is called. The retry policy must adhere to the following guidelines:

◦ The method must be retried for a maximum of two retry attempts.

◦ The delay between retry attempts must be 1 second.

• Implement a circuit breaker policy so that the service fails fast after a certain number of
failures. The circuit breaker must adhere to the following guidelines:

◦ Once the circuit opens, the threshold to close the circuit is five successful executions.

◦ The rolling window of executions the policy needs to use to determine the state of the circuit
is the last ten method invocations.

◦ The failure threshold required to open the circuit is an 80% failure rate, that is, eight of the last ten executions resulting in failure.

◦ When the circuit opens, a ten second delay must occur before subsequent executions are
allowed to occur.

• Configure a limit of five concurrent requests that can be processed simultaneously using the
semaphore approach, instead of the thread pool approach.


Finally, to keep track of failures in the catalog microservice, you decide to add some
custom metrics that you can monitor. You wish to track both the number of calls to
the inventory microservice that fail, as well as timing data for each invocation of the
getBookWithInventory method, so that you can identify poor performance.

Follow these guidelines to implement custom metrics in the catalog microservice:

• Include a counter metric named failureCount, which uses the following attributes:
◦ Name: failureCount

◦ Description: Number of times the inventory endpoint fails

◦ Display Name: InventoryLookupFailureCount

◦ Absolute: True

• Update the getBookWithDefaultInventory(String isbn) method to increment the
failureCount counter each time it is invoked.

• Configure the getBookWithInventory(String isbn) method to be timed using a metric
named inventoryTimer, which uses the following attributes:
◦ Name: inventoryTimer

◦ Unit: MetricUnits.MILLISECONDS

◦ Description: Invocation time for getting book inventory

◦ Display Name: InventoryTimer

◦ Absolute: True

Test the application locally by running the three microservices included with this lab (auth,
catalog, and inventory) using the provided run.sh scripts, located in each microservice's root
directory.

Use the RESTClient Firefox plug-in to test the fault tolerance added to the catalog microservice,
as well as the health and metrics endpoints you implemented, to verify that the changes made in
this lab work.

Finally, deploy the application on the OpenShift cluster in a project named comp-review-monitor
using the fabric8 Maven plug-in to prepare the lab for grading.

Steps
1. Check out the lab-comp-review-monitor Git branch to get the correct version of the
application code for this exercise.

1.1. Run the following commands to change to the correct directory and check out the
required branch:

[student@workstation ~]$ cd JB283-comprehensive-review


[student@workstation JB283-comprehensive-review]$ git checkout lab-comp-review-monitor
Switched to a new branch 'lab-comp-review-monitor'

1.2. Use the git status command to ensure that you are on the correct branch.


[student@workstation JB283-comprehensive-review]$ git status


# On branch lab-comp-review-monitor
nothing to commit, working directory clean

2. Implement a health check in the catalog microservice.

2.1. Open the empty CatalogServiceCheck class where you are to implement the health
check endpoint.

In JBoss Developer Studio, open the CatalogServiceCheck class by expanding the
catalog-service item in the Project Explorer tab in the left pane.

Click catalog-service > Java Resources > src/main/java >
com.redhat.training.bookstore.catalog.health to expand it. Double-click the
CatalogServiceCheck.java file.

2.2. Annotate this class as the health check endpoint for the catalog microservice using the
@Health class-level annotation:

//TODO annotate this class as the health check endpoint


@Health
//TODO make this class implement the HealthCheck interface
public class CatalogServiceCheck {

2.3. Make the CatalogServiceCheck implement the HealthCheck interface:

//TODO annotate this class as the health check endpoint


@Health
//TODO make this class implement the HealthCheck interface
public class CatalogServiceCheck implements HealthCheck{

2.4. Inject the BookDatabase class using the @Inject so that you can check the current
database size during the health check.

//TODO annotate this class as the health check endpoint


@Health
//TODO make this class implement the HealthCheck interface
public class CatalogServiceCheck implements HealthCheck{

//TODO Inject BookDatabase to use the catalogSize in the health check data
@Inject
private BookDatabase db;

2.5. Implement the call() method, as required by the HealthCheck interface.

Use the withData() method to include the catalogSize data in the response. Use
the HealthCheckResponseBuilder class to construct a health check response based
on the current database size:

//TODO annotate this class as the health check endpoint


@Health
//TODO make this class implement the HealthCheck interface
public class CatalogServiceCheck implements HealthCheck{


//TODO Inject BookDatabase to use the catalogSize in the health check data
@Inject
private BookDatabase db;

//TODO implement the required call() method, include the current database size
in the response.
//TODO The service should report up if the database contains any books,
otherwise it should report down
@Override
public HealthCheckResponse call() {

HealthCheckResponseBuilder healthCheckBuilder =
HealthCheckResponse.named("catalog-service-check")
.withData("catalogSize", db.getBooks().size());
return (db.getBooks().size() == 0) ?
healthCheckBuilder.down().build() : healthCheckBuilder.up().build();
}

2.6. Press Ctrl+S to save your changes.

3. Implement a health check in the inventory microservice.

3.1. Open the empty InventoryHealth class where you are to implement the health check
endpoint.

In JBoss Developer Studio, open the InventoryHealth class by expanding the
inventory-service item in the Project Explorer tab in the left pane.

Click inventory-service > Java Resources > src/main/java >
com.redhat.training.bookstore.inventory.health to expand it. Double-click the
InventoryHealth.java file.

3.2. Annotate this class as the health check endpoint for the inventory microservice using
the @Health class-level annotation:

//TODO annotate this class as the health check endpoint


@Health
//TODO make this class implement the HealthCheck interface
public class InventoryHealth {

3.3. Make the InventoryHealth implement the HealthCheck interface:

//TODO annotate this class as the health check endpoint


@Health
//TODO make this class implement the HealthCheck interface
public class InventoryHealth implements HealthCheck{

3.4. Inject the InventoryDatabase class using the @Inject so that you can check the
current database size during the health check.

//TODO annotate this class as the health check endpoint


@Health
//TODO make this class implement the HealthCheck interface
public class InventoryHealth implements HealthCheck{


//TODO Inject InventoryDatabase to use the catalogSize in the health check data
@Inject
private InventoryDatabase db;

3.5. Implement the call() method, which is required by the HealthCheck interface.

Use the withData() method to include the catalogSize data in the response. Use
the HealthCheckResponseBuilder class to construct a health check response based
on the current database size:

//TODO annotate this class as the health check endpoint


@Health
//TODO make this class implement the HealthCheck interface
public class InventoryHealth implements HealthCheck{

//TODO Inject InventoryDatabase to use the catalogSize in the health check data
@Inject
private InventoryDatabase db;

//TODO implement the required call() method, include the current database size
in the response.
//TODO The service should report up if the database contains any books,
otherwise it should report down
@Override
public HealthCheckResponse call() {

HealthCheckResponseBuilder healthCheckBuilder =
HealthCheckResponse.named("inventory-service-check")
.withData("catalogSize", db.getInventory().size());

return (db.getInventory().size() == 0) ?
healthCheckBuilder.down().build() : healthCheckBuilder.up().build();
}

3.6. Press Ctrl+S to save your changes.

4. Implement a timeout policy so that the catalog microservice automatically times out when
any calls to the inventory microservice take longer than 5 seconds.

4.1. In JBoss Developer Studio, open the CatalogResource class by expanding the
catalog-service item in the Project Explorer tab in the left pane.

Click catalog-service > Java Resources > src/main/java >


com.redhat.training.bookstore.catalog.rest to expand it. Double-click the
CatalogResource.java file.

4.2. Add the timeout fault tolerance policy using the @Timeout annotation on the
getBookWithInventory with a value of 5000 milliseconds:

@GET
@Path("/bookinventory/{isbn}")
@Produces(MediaType.APPLICATION_JSON)
//TODO Add a metric named inventoryTimer to time the execution of this method

//TODO configure this method to have a timeout of 5 seconds


@Timeout(5000)


//TODO configure this method to use the getBookWithDefaultInventory as its fallback

//TODO configure a retry policy so that exceptions are retried twice, with a 1
second delay

//TODO configure a circuit breaker policy so that the service fails fast after a
certain number of failures

//TODO configure this method to run a maximum of 5 concurrent requests

public Response getBookWithInventory(@PathParam("isbn") String isbn) {

4.3. Press Ctrl+S to save your changes.

5. Implement a fallback method called getBookWithDefaultInventory(String isbn) that returns
an inventory value of -1 when the requested book exists. Then configure the
getBookWithInventory method to use this as its fallback.

5.1. Complete the fallback method provided by adding a line of code to set the current
inventory of the book to a value of -1.

@SuppressWarnings("unused")
private Response getBookWithDefaultInventory(String isbn) {

//TODO Increment failureCounter

for(Book book : db.getBooks()) {


if (isbn.equals(book.getIsbn())) {

//TODO set inventory value to -1 to indicate this is the fallback


book.setInventory(-1);

return Response.ok(book, MediaType.APPLICATION_JSON).build();


}
}
return Response.status(404).build();
}

5.2. Configure the getBookWithInventory(String isbn) method to use
getBookWithDefaultInventory(String isbn) as its fallback.

@GET
@Path("/bookinventory/{isbn}")
@Produces(MediaType.APPLICATION_JSON)
//TODO Add a metric named inventoryTimer to time the execution of this method

//TODO configure this method to have a timeout of 5 seconds


@Timeout(5000)
//TODO configure this method to use the getBookWithDefaultInventory as its
fallback
@Fallback(fallbackMethod="getBookWithDefaultInventory")
//TODO configure a retry policy so that exceptions are retried twice, with a 1
second delay

//TODO configure a circuit breaker policy so that the service fails fast after a
certain number of failures

//TODO configure this method to run a maximum of 5 concurrent requests


public Response getBookWithInventory(@PathParam("isbn") String isbn) {

5.3. Press Ctrl+S to save your changes.

6. Implement a retry policy so that failed executions caused by an Exception thrown by the
getBookWithInventory method are reattempted twice before the fallback method is
called.

6.1. Add the @Retry annotation to the getBookWithInventory method. Configure
the retryOn attribute to re-execute only when an Exception is thrown. Use the
maxRetries attribute to set two retries, and use the delay and delayUnit attributes for
a one second delay between attempts.

@GET
@Path("/bookinventory/{isbn}")
@Produces(MediaType.APPLICATION_JSON)
//TODO Add a metric named inventoryTimer to time the execution of this method

//TODO configure this method to have a timeout of 5 seconds


@Timeout(5000)
//TODO configure this method to use the getBookWithDefaultInventory as its
fallback
@Fallback(fallbackMethod="getBookWithDefaultInventory")
//TODO configure a retry policy so that exceptions are retried twice, with a 1
second delay
@Retry(delay=1,delayUnit=ChronoUnit.SECONDS, maxRetries=2,
retryOn=Exception.class)
//TODO configure a circuit breaker policy so that the service fails fast after a
certain number of failures

//TODO configure this method to run a maximum of 5 concurrent requests

public Response getBookWithInventory(@PathParam("isbn") String isbn) {

6.2. Press Ctrl+S to save your changes.

7. Implement a circuit breaker policy so that the service fails fast after a certain number of
failures.

7.1. Add the @CircuitBreaker annotation to the getBookWithInventory method.
Configure the successThreshold attribute to 5, the requestVolumeThreshold
attribute to 4, the failureRatio attribute to 0.75, and the delay attribute to 10000.

@GET
@Path("/bookinventory/{isbn}")
@Produces(MediaType.APPLICATION_JSON)
//TODO Add a metric named inventoryTimer to time the execution of this method

//TODO configure this method to have a timeout of 5 seconds


@Timeout(5000)
//TODO configure this method to use the getBookWithDefaultInventory as its
fallback
@Fallback(fallbackMethod="getBookWithDefaultInventory")
//TODO configure a retry policy so that exceptions are retried twice, with a 1
second delay
@Retry(delay=1,delayUnit=ChronoUnit.SECONDS, maxRetries=2,
retryOn=Exception.class)


//TODO configure a circuit breaker policy so that the service fails fast after a
certain number of failures
@CircuitBreaker(successThreshold=5, requestVolumeThreshold=4, failureRatio=0.75,
delay=10000)
//TODO configure this method to run a maximum of 5 concurrent requests

public Response getBookWithInventory(@PathParam("isbn") String isbn) {

7.2. Press Ctrl+S to save your changes.

8. Configure a limit of five concurrent requests that can be processed simultaneously, using the
semaphore approach rather than the thread pool approach.

8.1. Add the @Bulkhead annotation with a value of 5 to the getBookWithInventory
method:

@GET
@Path("/bookinventory/{isbn}")
@Produces(MediaType.APPLICATION_JSON)
//TODO Add a metric named inventoryTimer to time the execution of this method

//TODO configure this method to have a timeout of 5 seconds


@Timeout(5000)
//TODO configure this method to use the getBookWithDefaultInventory as its
fallback
@Fallback(fallbackMethod="getBookWithDefaultInventory")
//TODO configure a retry policy so that exceptions are retried twice, with a 1
second delay
@Retry(delay=1,delayUnit=ChronoUnit.SECONDS, maxRetries=2,
retryOn=Exception.class)
//TODO configure a circuit breaker policy so that the service fails fast after a
certain number of failures
@CircuitBreaker(successThreshold=5, requestVolumeThreshold=4, failureRatio=0.75,
delay=10000)
//TODO configure this method to run a maximum of 5 concurrent requests
@Bulkhead(5)
public Response getBookWithInventory(@PathParam("isbn") String isbn) {

8.2. Press Ctrl+S to save your changes.

9. Include the failureCount and inventoryTimer metrics to better monitor the
performance and failures of the getBookWithInventory method.

9.1. Use the @Inject annotation in conjunction with the @Metric annotation to create the
failureCount metric, which is a Counter object:

@Inject
@Metric(name = "failureCount", description = "Number of times the inventory
endpoint fails",
displayName="InventoryLookupFailureCount", absolute=true)
private Counter failureCount;

9.2. Update the getBookWithDefaultInventory method to increment the
failureCount counter value each time a failure occurs and the server uses that
method as the fallback:

@SuppressWarnings("unused")


private Response getBookWithDefaultInventory(String isbn) {

//TODO Increment failureCounter
failureCount.inc();

...code omitted...
}

9.3. Use the @Timed annotation to configure a timer for the getBookWithInventory
method:

@GET
@Path("/bookinventory/{isbn}")
@Produces(MediaType.APPLICATION_JSON)
//TODO Add a metric named inventoryTimer to time the execution of this method
@Timed(absolute=true, unit = MetricUnits.MILLISECONDS, name = "inventoryTimer",
displayName = "inventoryTimer",description = "Invocation time for getting
book inventory")
...code omitted...
public Response getBookWithInventory(@PathParam("isbn") String isbn) {

10. Start the auth, catalog, and inventory microservices.

10.1. Start the auth microservice. In your terminal window, run the following commands:

[student@workstation JB283-comprehensive-review]$ cd auth


[student@workstation auth]$ ./run.sh
...output omitted...
2018-03-09 17:03:23,329 INFO [org.wildfly.swarm] (main) WFSWARM99999: WildFly
Swarm is Ready

Leave the terminal window running.

10.2.Start the catalog microservice. Open a new terminal window on the workstation VM
and run the following commands:

[student@workstation ~]$ cd JB283-comprehensive-review/catalog


[student@workstation catalog]$ ./run.sh
...output omitted...
2018-03-09 17:03:23,329 INFO [org.wildfly.swarm] (main) WFSWARM99999: WildFly
Swarm is Ready

10.3.Start the inventory microservice. Open a new terminal window on the workstation
VM and run the following commands:

[student@workstation ~]$ cd JB283-comprehensive-review/inventory


[student@workstation inventory]$ ./run.sh
...output omitted...
2018-03-09 17:03:23,329 INFO [org.wildfly.swarm] (main) WFSWARM99999: WildFly
Swarm is Ready

11. Test the health check endpoints from a client using the RESTClient Firefox plug-in.

11.1. Start Firefox on the workstation VM and click the RESTClient plug-in in the browser's
toolbar.


11.2. Select GET as the Method. In the URL form, enter http://localhost:8080/health.

Click Send.

11.3. In the Headers tab, verify that the Status Code is 200 OK.

In the Preview tab, verify that the response matches the following:

{
"checks": [{
"name": "catalog-service-check",
"state": "UP",
"data": {
"catalogSize": 2
}
}],
"outcome": "UP"
}

11.4. In the URL form, enter http://localhost:7070/health.

Click Send.

11.5. In the Headers tab, verify that the Status Code is 200 OK.

In the Preview tab, verify that the response matches the following:

{
"checks": [{
"name": "inventory-service-check",
"state": "UP",
"data": {
"catalogSize": 2
}
}],
"outcome": "UP"
}
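
If you prefer the command line, the same two health checks can be run with curl. This is an
optional alternative to the RESTClient plug-in and assumes curl is installed on the
workstation VM:

[student@workstation ~]$ curl -s http://localhost:8080/health
[student@workstation ~]$ curl -s http://localhost:7070/health

Both commands should print JSON similar to the responses shown above, with an outcome of UP.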

12. Test the fault tolerance of the /api/bookinventory endpoint from a client using the
RESTClient Firefox plug-in.

12.1. First, test the endpoint with the inventory service still running.

Return to the RESTClient plug-in that you have open in Firefox.

12.2.In the URL form, enter http://localhost:8080/api/bookinventory/12345.

Click Send.

12.3.Verify, in the Headers tab, that the Status Code is 200 OK.

Verify, in the Preview tab, that the response matches the following:

{
"bookTitle": "Gone with the Wind",
"isbn": "12345",


"price": 9.95,
"inventory": 12
}

12.4.Examine the server logs from the catalog microservice visible in the terminal window.
You will see a single log entry similar to the following:

2018-04-25 12:11:15,180 INFO [stdout] (default task-1) Called inventory with token eyJraWQiOi...
...remaining token value omitted...

12.5.Stop the inventory microservice running locally to test the fault tolerance of the /api/
bookinventory endpoint, which relies on the inventory microservice to provide the
current inventory data.

Return to the terminal window running the inventory microservice and stop it using
Ctrl+C.

12.6.Return to the RESTClient plug-in that you have open in Firefox.

12.7. In the URL form, enter http://localhost:8080/api/bookinventory/12345.

Click Send.

12.8.Verify, in the Headers tab, that the Status Code is 200 OK.

Verify, in the Preview tab, that the response matches the following:

{
"bookTitle": "Gone with the Wind",
"isbn": "12345",
"price": 9.95,
"inventory": -1
}

Note that the inventory value is now -1, indicating you are now receiving data from the
fallback method.

12.9. Re-examine the server logs from the catalog microservice visible in the terminal
window. You should now see multiple log entries similar to the following. There should
be three new entries:

2018-04-24 12:11:15,180 INFO [stdout] (default task-1) Called inventory with token eyJraWQiOi...
...remaining token value omitted...
2018-04-24 12:11:16,185 INFO [stdout] (default task-1) Called inventory with
token eyJraWQiOi...
...remaining token value omitted...
2018-04-24 12:11:17,378 INFO [stdout] (default task-1) Called inventory with
token eyJraWQiOi...
...remaining token value omitted...

These represent the original call and the two retry attempts. Note the 1 second
differences in time stamps, representing the delay you configured on the retry policy.


13. Test the metrics endpoints from a client using the RESTClient Firefox plug-in.

13.1. Return to the RESTClient plug-in that you have open in Firefox.

13.2.Add a header to the request to tell the metrics endpoint to report JSON data.

In the top navigation bar, click Headers and then click Custom Header.

Fill in the Request Headers form with the following values:

• Name: Accept.

• Attribute Value: application/json.

Click Okay.

13.3. In the URL form, enter http://localhost:8080/metrics/base.

Click Send.

13.4.Verify, in the Headers tab, that the Status Code is 200 OK.

Verify, in the Preview tab, that the response is similar to the following:

{
"classloader.totalLoadedClass.count": 15817,
"cpu.systemLoadAverage": 0.06,
"thread.count": 35,
"classloader.currentLoadedClass.count": 15785,
"jvm.uptime": 724079,
"gc.PS MarkSweep.count": 3,
"memory.committedHeap": 588775424,
"thread.max.count": 58,
"gc.PS Scavenge.count": 14,
"cpu.availableProcessors": 2,
"thread.daemon.count": 15,
"classloader.totalUnloadedClass.count": 32,
"memory.maxHeap": 1353711616,
"memory.usedHeap": 332985672,
"gc.PS MarkSweep.time": 457,
"gc.PS Scavenge.time": 268
}
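
If you prefer the command line, the same data can be retrieved with curl by passing the
Accept header explicitly. This is an optional sketch and assumes curl is installed on the
workstation VM; the /metrics/application endpoint used in the next step accepts the same header:

[student@workstation ~]$ curl -s -H "Accept: application/json" http://localhost:8080/metrics/base

Without the Accept header, MicroProfile Metrics endpoints typically return the Prometheus text
format rather than JSON.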

13.5.In the URL form, enter http://localhost:8080/metrics/application.

Click Send.

13.6.Verify, in the Headers tab, that the Status Code is 200 OK.

Verify, in the Preview tab, that the response is similar to the following:

{
"failureCount": 1,
"inventoryTimer": {
"p50": 2.812532695E9,
"p75": 2.812532695E9,
"p95": 2.812532695E9,
"p98": 2.812532695E9,
"p99": 2.812532695E9,


"p999": 2.812532695E9,
"min": 2812532695,
"mean": 2.812532695E9,
"max": 2812532695,
"stddev": 0.0,
"count": 1,
"meanRate": 0.0013171228334273545,
"oneMinRate": 7.453306344157396E-7,
"fiveMinRate": 0.016416999724779772,
"fifteenMinRate": 0.08691964170141538
}
}

Note
If you called the catalog service more than once while the inventory service
was down, you will see a higher value for the failureCount metric.

14. Stop the microservices that are running locally.

14.1. Return to the terminal window running the auth microservice and stop it using Ctrl+C.

14.2.Return to the terminal window running the catalog microservice and stop it using
Ctrl+C.

15. Deploy the microservices to the OpenShift cluster.

15.1. Open a terminal window on the workstation VM and log in to the OpenShift cluster as
the developer user:

[student@workstation ~]$ oc login -u developer -p redhat \
https://master.lab.example.com

15.2.Create the comp-review-monitor project:

[student@workstation ~]$ oc new-project comp-review-monitor


Now using project "comp-review-monitor"...

15.3.Open a new terminal window and navigate to the auth microservice directory. Deploy it
on the OpenShift cluster:

[student@workstation ~]$ cd JB283-comprehensive-review/auth


[student@workstation auth]$ mvn clean fabric8:deploy -DskipTests
[INFO] Scanning for projects...
[INFO] F8: Running in OpenShift mode
...
[INFO] Current reconnect backoff is 4000 milliseconds (T2)
...
[INFO] BUILD SUCCESS
...


Note
The following error may occur during deployment. You may disregard it.

[ERROR] Exception in reconnect


java.util.concurrent.RejectedExecutionException: Task
java.util.concurrent.ScheduledThreadPoolExecutor
$ScheduledFutureTask@7eae1359 rejected from
java.util.concurrent.ScheduledThreadPoolExecutor@3bac82e8[Shutting
down, pool size = 1, active threads = 1, queued tasks = 0, completed
tasks = 12]

15.4.Navigate to the catalog microservice directory. Deploy it on the OpenShift cluster:

[student@workstation auth]$ cd ../catalog


[student@workstation catalog]$ mvn clean fabric8:deploy -DskipTests
[INFO] Scanning for projects...
[INFO] F8: Running in OpenShift mode
...
[INFO] Current reconnect backoff is 4000 milliseconds (T2)
...
[INFO] BUILD SUCCESS
...

Note
The following error may occur during the deployment. You may disregard it.

[ERROR] Exception in reconnect


java.util.concurrent.RejectedExecutionException: Task
java.util.concurrent.ScheduledThreadPoolExecutor
$ScheduledFutureTask@7eae1359 rejected from
java.util.concurrent.ScheduledThreadPoolExecutor@3bac82e8[Shutting
down, pool size = 1, active threads = 1, queued tasks = 0, completed
tasks = 12]

15.5.Navigate to the inventory microservice directory. Deploy it on the OpenShift cluster:

[student@workstation auth]$ cd ../inventory


[student@workstation inventory]$ mvn clean fabric8:deploy -DskipTests
[INFO] Scanning for projects...
[INFO] F8: Running in OpenShift mode
...
[INFO] Current reconnect backoff is 4000 milliseconds (T2)
...
[INFO] BUILD SUCCESS
...


Note
The following error may occur during the deployment. You may disregard it.

[ERROR] Exception in reconnect


java.util.concurrent.RejectedExecutionException: Task
java.util.concurrent.ScheduledThreadPoolExecutor
$ScheduledFutureTask@7eae1359 rejected from
java.util.concurrent.ScheduledThreadPoolExecutor@3bac82e8[Shutting
down, pool size = 1, active threads = 1, queued tasks = 0, completed
tasks = 12]
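
Optionally, before testing through the routes, confirm that all three deployments are running
and that routes were created. This is a typical verification sketch; the exact pod names and
route host names in your output will differ:

[student@workstation inventory]$ oc get pods -n comp-review-monitor
[student@workstation inventory]$ oc get routes -n comp-review-monitor

All three pods should eventually reach the Running state, and the catalog and inventory routes
should match the host names used in the following steps.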

16. Test the health check endpoints from a client using the RESTClient Firefox plug-in.

16.1. Return to the Firefox window where the RESTClient plug-in is running.

16.2. Select GET as the Method. In the URL form, enter
http://catalog.apps.lab.example.com/health.

Click Send.

16.3.Verify, in the Headers tab, that the Status Code is 200 OK.

Verify, in the Preview tab, that the response matches the following:

{
"checks": [{
"name": "catalog-service-check",
"state": "UP",
"data": {
"catalogSize": 2
}
}],
"outcome": "UP"
}

16.4.In the URL form, enter http://inventory.apps.lab.example.com/health.

Click Send.

16.5.Verify, in the Headers tab, that the Status Code is 200 OK.

Verify, in the Preview tab, that the response matches the following:

{
"checks": [{
"name": "inventory-service-check",
"state": "UP",
"data": {
"catalogSize": 2
}
}],
"outcome": "UP"
}


17. Test the fault tolerance of the /api/bookinventory endpoint from a client using the
RESTClient Firefox plug-in.

17.1. First, test the endpoint with the inventory service still running.

Return to the RESTClient plug-in that you have open in Firefox.

17.2. In the URL form, enter http://catalog.apps.lab.example.com/api/bookinventory/12345.

Click Send.

17.3. Verify, in the Headers tab, that the Status Code is 200 OK.

Verify, in the Preview tab, that the response matches the following:

{
"bookTitle": "Gone with the Wind",
"isbn": "12345",
"price": 9.95,
"inventory": 12
}

17.4. Delete the route to the inventory microservice to make it inaccessible to the catalog
microservice without undeploying it.

Open a new terminal window and use the oc delete route inventory-service
command to make the inventory microservice inaccessible.

[student@workstation catalog]$ oc delete route inventory-service
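
As an optional sanity check, list the remaining routes to confirm that the inventory-service
route is gone:

[student@workstation catalog]$ oc get routes

The inventory-service route should no longer appear in the output.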

17.5. Return to the RESTClient plug-in that you have open in Firefox.

17.6. In the URL form, enter http://catalog.apps.lab.example.com/api/bookinventory/12345.

Click Send.

17.7. Verify, in the Headers tab, that the Status Code is 200 OK.

Verify, in the Preview tab, that the response matches the following:

{
"bookTitle": "Gone with the Wind",
"isbn": "12345",
"price": 9.95,
"inventory": -1
}

Note that the inventory value is now -1, indicating that you are now receiving data from
the fallback method.

18. Test the metrics endpoints from a client using the RESTClient Firefox plug-in.

18.1. Return to the RESTClient plug-in that you have open in Firefox.


18.2.Add a header to the request to tell the metrics endpoint to report JSON data.

In the top navigation bar, click Headers and then click Custom Header.

Fill in the Request Headers form with the following values:

• Name: Accept.

• Attribute Value: application/json.

Click Okay.

18.3.In the URL form, enter http://catalog.apps.lab.example.com/metrics/base.

Click Send.

18.4.Verify, in the Headers tab, that the Status Code is 200 OK.

Verify, in the Preview tab, that the response is similar to the following:

{
"classloader.totalLoadedClass.count": 15817,
"cpu.systemLoadAverage": 0.06,
"thread.count": 35,
"classloader.currentLoadedClass.count": 15785,
"jvm.uptime": 724079,
"gc.PS MarkSweep.count": 3,
"memory.committedHeap": 588775424,
"thread.max.count": 58,
"gc.PS Scavenge.count": 14,
"cpu.availableProcessors": 2,
"thread.daemon.count": 15,
"classloader.totalUnloadedClass.count": 32,
"memory.maxHeap": 1353711616,
"memory.usedHeap": 332985672,
"gc.PS MarkSweep.time": 457,
"gc.PS Scavenge.time": 268
}

18.5. In the URL form, enter http://catalog.apps.lab.example.com/metrics/application.

Click Send.

18.6.Verify, in the Headers tab, that the Status Code is 200 OK.

Verify, in the Preview tab, that the response is similar to the following:

{
"failureCount": 1,
"inventoryTimer": {
"p50": 2.812532695E9,
"p75": 2.812532695E9,
"p95": 2.812532695E9,
"p98": 2.812532695E9,
"p99": 2.812532695E9,
"p999": 2.812532695E9,
"min": 2812532695,


"mean": 2.812532695E9,
"max": 2812532695,
"stddev": 0.0,
"count": 1,
"meanRate": 0.0013171228334273545,
"oneMinRate": 7.453306344157396E-7,
"fiveMinRate": 0.016416999724779772,
"fifteenMinRate": 0.08691964170141538
}
}

Note
If you called the catalog service more than once while the inventory service
was down, you will see a higher value for the failureCount metric.

Evaluation
As the student user on workstation, run the lab comp-review-monitor script with the
grade argument to confirm the success of this exercise. Correct any reported failures and rerun
the script until successful.

[student@workstation ~]$ lab comp-review-monitor grade

This concludes the lab.
