Reference Architectures 2017: Spring Boot Microservices On Red Hat OpenShift Container Platform 3
Babak Mozaffari
[email protected]
Legal Notice
The text of and illustrations in this document are licensed by Red Hat under a Creative Commons
Attribution–Share Alike 3.0 Unported license ("CC-BY-SA"). An explanation of CC-BY-SA is
available at
http://creativecommons.org/licenses/by-sa/3.0/
. In accordance with CC-BY-SA, if you distribute this document or an adaptation of it, you must
provide the URL for the original version.
Red Hat, as the licensor of this document, waives the right to enforce, and agrees not to assert,
Section 4d of CC-BY-SA to the fullest extent permitted by applicable law.
Red Hat, Red Hat Enterprise Linux, the Shadowman logo, JBoss, OpenShift, Fedora, the Infinity
logo, and RHCE are trademarks of Red Hat, Inc., registered in the United States and other
countries.
Linux ® is the registered trademark of Linus Torvalds in the United States and other countries.
XFS ® is a trademark of Silicon Graphics International Corp. or its subsidiaries in the United
States and/or other countries.
MySQL ® is a registered trademark of MySQL AB in the United States, the European Union and
other countries.
Node.js ® is an official trademark of Joyent. Red Hat Software Collections is not formally related
to or endorsed by the official Joyent Node.js open source or commercial project.
The OpenStack ® Word Mark and OpenStack logo are either registered trademarks/service marks
or trademarks/service marks of the OpenStack Foundation, in the United States and other
countries and are used with the OpenStack Foundation's permission. We are not affiliated with,
endorsed or sponsored by the OpenStack Foundation, or the OpenStack community.
Abstract
This reference architecture demonstrates the design, development and deployment of Spring
Boot Microservices on Red Hat® OpenShift Container Platform 3.
Table of Contents

COMMENTS AND FEEDBACK
CHAPTER 1. EXECUTIVE SUMMARY
CHAPTER 2. SOFTWARE STACK
    2.1. FRAMEWORK
    2.2. CLIENT LIBRARY
        2.2.1. Overview
        2.2.2. Ribbon
        2.2.3. gRPC
    2.3. SERVICE REGISTRY
        2.3.1. Overview
        2.3.2. Eureka
        2.3.3. Consul
        2.3.4. ZooKeeper
        2.3.5. OpenShift
    2.4. LOAD BALANCER
        2.4.1. Overview
        2.4.2. Ribbon
        2.4.3. gRPC
        2.4.4. OpenShift Service
    2.5. CIRCUIT BREAKER
        2.5.1. Overview
        2.5.2. Hystrix
    2.6. EXTERNALIZED CONFIGURATION
        2.6.1. Overview
        2.6.2. Spring Cloud Config
        2.6.3. OpenShift ConfigMaps
    2.7. DISTRIBUTED TRACING
        2.7.1. Overview
        2.7.2. Sleuth/Zipkin
        2.7.3. Jaeger
    2.8. PROXY/ROUTING
        2.8.1. Overview
        2.8.2. Zuul
        2.8.3. Istio
CHAPTER 3. REFERENCE ARCHITECTURE ENVIRONMENT
CHAPTER 4. CREATING THE ENVIRONMENT
    4.1. OVERVIEW
    4.2. PROJECT DOWNLOAD
    4.3. SHARED STORAGE
    4.4. OPENSHIFT CONFIGURATION
    4.5. ZIPKIN DEPLOYMENT
    4.6. SERVICE DEPLOYMENT
    4.7. FLIGHT SEARCH
    4.8. EXTERNAL CONFIGURATION
    4.9. A/B TESTING
CHAPTER 5. DESIGN AND DEVELOPMENT
    5.1. OVERVIEW
    5.2. RESOURCE LIMITS
CHAPTER 6. CONCLUSION
APPENDIX A. AUTHORSHIP HISTORY
APPENDIX B. CONTRIBUTORS
APPENDIX C. REVISION HISTORY
CHAPTER 1. EXECUTIVE SUMMARY
Red Hat OpenShift Application Runtimes (RHOAR) is an ongoing effort by Red Hat to
provide official OpenShift images with a combination of fully supported Red Hat software and popular
third-party open-source components. With the first public release, a large number of Spring Boot
components have been tested and verified on top of supported components including OpenJDK and
the base image itself.
The reference architecture serves as a potential blueprint for certain greenfield and brownfield
projects. This includes scenarios where teams or environments have a strong preference to use the
software stack most common in Spring Boot microservices, despite the availability of other options
when taking advantage of OpenShift as the deployment platform. This architecture can also help guide
the migration and deployment of existing Spring Boot microservices on OpenShift Container Platform.
CHAPTER 2. SOFTWARE STACK
2.1. FRAMEWORK
Numerous frameworks are available for building microservices, and each provides various advantages
and disadvantages. This reference architecture focuses on a microservice architecture built on top of
the Spring Boot framework. The Spring Boot framework can use various versions of Tomcat, Jetty
and Undertow as its embedded servlet containers. This paper focuses on the use of Spring Boot with
an embedded Tomcat server, running on an OpenShift base image from Red Hat®, with a supported
JVM and environment.
2.2. CLIENT LIBRARY

2.2.1. Overview
While invoking a microservice is typically a simple matter of sending a JSON or XML payload over
HTTP, various considerations have led to the prevalence of specialized client libraries, particularly in a
Spring Boot environment. These libraries provide integration with not only Spring Boot, but also many
other tools and libraries often required in a microservice architecture.
2.2.2. Ribbon
Ribbon is an Inter-Process Communication (remote procedure calls) library with built-in client-side
load balancers. The primary usage model involves REST calls with various serialization scheme
support.
This reference architecture uses Ribbon, without relying on it for much intelligence. The main reason
for including and using Ribbon is its prevalence in Spring Boot microservice applications, and relatedly,
its support for and integration with various tools and libraries commonly used in such applications.
2.2.3. gRPC
The more modern gRPC is a replacement for Ribbon that’s been developed by Google and adopted by
a large number of projects.
While Ribbon uses simple text-based JSON or XML payloads over HTTP, gRPC relies on Protocol
Buffers for faster and more compact serialization. The payload is sent over HTTP/2 in binary form. The
result is better performance and security, at the expense of compatibility and tooling support in the
existing market.
2.3. SERVICE REGISTRY

2.3.1. Overview

Microservice architecture often implies dynamic scaling of individual services, in a private, hybrid or public cloud where the number and address of hosts cannot always be predicted or statically configured in advance. The solution is the use of a service registry as a starting point for discovering the deployed instances of each service. This is often paired with a client library or load balancer layer that seamlessly fails over upon discovering that an instance no longer exists, and caches service registry lookups. Taking things one step further, integration between the client library and the service registry can turn this lookup-and-invoke process into a single step, transparent to developers.
In modern cloud environments, such capability is often provided by the platform, and service
replication and scaling is a core feature. This reference architecture is built on top of OpenShift,
therefore benefiting from the Kubernetes Service abstraction.
2.3.2. Eureka
Eureka is a REST (REpresentational State Transfer) based service that is primarily used in the AWS
cloud for locating services for the purpose of load balancing and failover of middle-tier servers.
Tight integration between Ribbon and Eureka allows declarative use of Eureka when the caller is using
the Ribbon library.
2.3.3. Consul
Consul is a tool for discovering and configuring services in your infrastructure. It is provided both as
part of the HashiCorp enterprise suite of software, as well as an open source component that is used
in the Spring Cloud.
Integration with Ribbon within a Spring Cloud environment allows transparent and declarative lookups
of services registered with Consul.
2.3.4. ZooKeeper
Apache ZooKeeper is a centralized service for maintaining configuration information, naming,
providing distributed synchronization, and providing group services.
Once again, the support of ZooKeeper within Spring Cloud environments and integration with Ribbon
allows declarative lookups of service instances before invocation.
2.3.5. OpenShift
In OpenShift, a Kubernetes service serves as an internal load balancer. It identifies a set of replicated
pods in order to proxy the connections it receives to them. Additional backing pods can be added to, or
removed from a service, while the service itself remains consistently available, enabling anything that
depends on the service to refer to it through a consistent address.
Contrary to a third-party service registry, the platform in charge of service replication can provide a
current and accurate report of service replicas at any moment. The service abstraction is also a critical
platform component that is as reliable as the underlying platform itself. This means that the client
does not need to keep a cache and account for the failure of the service registry itself. Ribbon can be
declaratively configured to use OpenShift instead of a service registry, without any code changes.
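A client can therefore bypass registry lookups entirely and point Ribbon at the OpenShift service address. A minimal sketch, assuming a microservice and OpenShift service named airports (the same pattern appears with real values in Chapter 5):

airports:
  ribbon:
    listOfServers: airports:8080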
2.4. LOAD BALANCER

2.4.1. Overview
For client calls to stateless services, high availability (HA) translates to a need to look up the service
from a service registry, and load balance among available instances. The client libraries previously
mentioned include the ability to combine these two steps, but OpenShift makes both actions
redundant by including load balancing capability in the service abstraction. OpenShift provides a single
address where calls will be load balanced and redirected to an appropriate instance.
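A minimal sketch of such a service definition, assuming a microservice whose pods are labeled app: flights and listen on port 8080:

apiVersion: v1
kind: Service
metadata:
  name: flights
spec:
  selector:
    app: flights
  ports:
  - port: 8080
    targetPort: 8080

Callers simply address http://flights:8080, and the platform distributes their requests across all ready pods.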
2.4.2. Ribbon
Ribbon allows load balancing among a statically declared list of instances, or among however many instances of the service are discovered through a registry lookup.
2.4.3. gRPC
gRPC also provides load balancing capability within the same library layer.
2.5. CIRCUIT BREAKER

2.5.1. Overview
The highly distributed nature of microservices implies a higher risk of failure of a remote call, as the
number of such remote calls increases. The circuit breaker pattern can help avoid a cascade of such
failures by isolating problematic services and avoiding damaging timeouts.
2.5.2. Hystrix
Hystrix is a latency and fault tolerance library designed to isolate points of access to remote
systems, services and 3rd party libraries, stop cascading failure and enable resilience in complex
distributed systems where failure is inevitable.
2.6. EXTERNALIZED CONFIGURATION

2.6.1. Overview
Externalized configuration management solutions can provide an elegant alternative to the typical
combination of configuration files, command line arguments, and environment variables that are used
to make applications more portable and less rigid in response to outside changes. This capability is
largely dependent on the underlying platform and is provided by ConfigMaps in OpenShift.
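For example, overriding properties can be packaged as a ConfigMap created from a local file and mounted into pods, an approach demonstrated in Chapter 4. A sketch, assuming a properties file named application.yml:

$ oc create configmap app-config --from-file=application.yml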
2.7. DISTRIBUTED TRACING

2.7.1. Overview
For all its advantages, a microservice architecture is very difficult to analyze and troubleshoot. Each
business request spawns multiple calls to, and between, individual services at various layers.
Distributed tracing ties all individual service calls together, and associates them with a business
request through a unique generated ID.
2.7.2. Sleuth/Zipkin
Spring Cloud Sleuth generates trace IDs for every call and span IDs at the requested points in an
application. This information can be integrated with a logging framework to help troubleshoot the
application by following the log files, or broadcast to a Zipkin server and stored for analytics and
reports.
2.7.3. Jaeger
Jaeger, inspired by Dapper and OpenZipkin, is an open source distributed tracing system that fully
conforms to the Cloud Native Computing Foundation (CNCF) OpenTracing standard. It can be
used for monitoring microservice-based architectures and provides distributed context propagation
and transaction monitoring, as well as service dependency analysis and performance / latency
optimization.
2.8. PROXY/ROUTING
2.8.1. Overview
Adding a proxy in front of every service call enables the application of various filters before and after
calls, as well as a number of common patterns in a microservice architecture, such as A/B testing.
Static and dynamic routing rules can help select the desired version of a service.
2.8.2. Zuul
Zuul is an edge service that provides dynamic routing, monitoring, resiliency, security, and more. Zuul
supports multiple routing models, ranging from declarative URL patterns mapped to a destination, to
groovy scripts that can reside outside the application archive and dynamically determine the route.
2.8.3. Istio
Istio is an open platform-independent service mesh that provides traffic management, policy
enforcement, and telemetry collection. Istio is designed to manage communications between
microservices and applications. Istio is still in pre-release stages.
CHAPTER 3. REFERENCE ARCHITECTURE ENVIRONMENT
Each microservice instance runs in a container instance, with one container per OpenShift pod and one
pod per service replica. At its core, an application built in the microservice architectural style consists
of a number of replicated containers calling each other.
The core functionality of the application is provided by microservices, each fulfilling a single responsibility. One service acts as the API gateway, calling individual microservices and aggregating the responses so they can be consumed more easily.
The architecture makes extensive use of Spring Sleuth and OpenZipkin for distributed tracing.
OpenZipkin runs as a separate service with a MySQL database used to persist its data, and it is called
from every service in the application.
Finally, the reference architecture uses Zuul as an edge service to provide static and dynamic routing.
The result is that all service calls are actually directed to Zuul and it proxies the request as
appropriate. This capability is leveraged to demonstrate A/B testing by providing an alternate version
of the Sales service and making a runtime decision to use it for a group of customers.
CHAPTER 4. CREATING THE ENVIRONMENT
4.1. OVERVIEW
This reference architecture can be deployed in either a production or a trial environment. In both
cases, it is assumed that ocp-master1 refers to one (or the only) OpenShift master host and that the
environment includes two other OpenShift schedulable hosts with the host names of ocp-node1 and
ocp-node2. Production environments would have at least 3 master hosts to provide High Availability
(HA) resource management, and presumably a higher number of working nodes.
It is further assumed that OpenShift Container Platform has been properly installed, and that a Linux
user with sudo privileges has access to the host machines. This user can then set up an OpenShift user
through its identity providers.
4.2. PROJECT DOWNLOAD

Download the project source from its public repository, then change directory to the root of the project. It is assumed that from this point on, all instructions are executed from inside the LambdaAir directory.
$ cd LambdaAir
4.3. SHARED STORAGE

Attach 2GB of storage to the NFS server, create a volume group on it, and create two 1GB logical volumes, one for each required persistent volume:
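A sketch of these steps, assuming the attached disk appears as /dev/vdc and the volume group is named vg-ocp:

$ sudo pvcreate /dev/vdc
$ sudo vgcreate vg-ocp /dev/vdc
$ sudo lvcreate -L 1G -n zipkin-mysql-data vg-ocp
$ sudo lvcreate -L 1G -n groovy vg-ocp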
Create a corresponding mount directory for each logical volume and mount them.
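For example, assuming XFS filesystems, with /mnt/zuul/volume used later for the Zuul scripts and /mnt/zipkin/mysql assumed for the database volume:

$ sudo mkfs.xfs /dev/vg-ocp/zipkin-mysql-data
$ sudo mkfs.xfs /dev/vg-ocp/groovy
$ sudo mkdir -p /mnt/zipkin/mysql /mnt/zuul/volume
$ sudo mount /dev/vg-ocp/zipkin-mysql-data /mnt/zipkin/mysql
$ sudo mount /dev/vg-ocp/groovy /mnt/zuul/volume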
Share these mounts with all nodes by configuring the /etc/exports file on the NFS server, and make
sure to restart the NFS service before proceeding.
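The corresponding /etc/exports entries might look like the following sketch, before the service is restarted:

/mnt/zipkin/mysql *(rw,sync,all_squash)
/mnt/zuul/volume *(rw,sync,all_squash)

$ sudo systemctl restart nfs-server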
4.4. OPENSHIFT CONFIGURATION
Grant OpenShift admin and cluster admin roles to this user, so it can create persistent volumes:
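A sketch of the required grants, assuming an OpenShift user named ocpAdmin:

$ oc adm policy add-cluster-role-to-user admin ocpAdmin
$ oc adm policy add-cluster-role-to-user cluster-admin ocpAdmin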
At this point, the new OpenShift user can be used to sign in to the cluster through the master server:
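For example, assuming the same user and the default master port:

$ oc login -u ocpAdmin https://ocp-master1:8443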
Login successful.
4.5. ZIPKIN DEPLOYMENT

$ oc create -f Zipkin/zipkin-mysql-pv.json
persistentvolume "zipkin-mysql-data" created
$ oc get pv
NAME                CAPACITY   ACCESSMODES   RECLAIMPOLICY   STATUS      CLAIM     REASON    AGE
zipkin-mysql-data   1Gi        RWO           Recycle         Available                       1m
Once available, use the provided zipkin template to deploy both MySQL and Zipkin services:
$ oc new-app -f Zipkin/zipkin-mysql.yml
--> Deploying template "lambdaair/" for "Zipkin/zipkin-mysql.yml" to project lambdaair
---------
MySQL database service, with persistent storage. For more information
about using this template, including OpenShift considerations, see
https://github.com/sclorg/mysql-container/blob/master/5.7/README.md.
NOTE: Scaling to more than one replica is not supported. You must
have persistent volumes available in your cluster to use this template.
Username: zipkin
Password: TwnDiEpoMqOGiJNb
Database Name: zipkin
Connection URL: mysql://zipkin-mysql:3306/
* With parameters:
* Memory Limit=512Mi
* Namespace=openshift
* Database Service Name=zipkin-mysql
* MySQL Connection Username=zipkin
* MySQL Connection Password=TwnDiEpoMqOGiJNb # generated
* MySQL root user Password=YJmmYOO3BVyX77wL # generated
* MySQL Database Name=zipkin
* Volume Capacity=1Gi
* Version of MySQL Image=5.7
NOTE

The output above includes randomly generated passwords for the database that will be different each time. It is advisable to note down the passwords for your deployed database, in case they are later needed for troubleshooting.
You can use oc status to get a report, but for further details and to view the progress of the
deployment, watch the pods as they get created and deployed:
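For example, the watch flag streams pod status changes as they happen:

$ oc get pods -w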
It may take a few minutes for the deployment process to complete, at which point there should be two
pods in the Running state:
$ oc get pods
NAME READY STATUS RESTARTS AGE
zipkin-1-k0dv6 1/1 Running 0 5m
zipkin-mysql-1-g44s7 1/1 Running 0 4m
Once the deployment is complete, you will be able to access the Zipkin console. Discover its address by
querying the routes:
$ oc get routes
NAME     HOST/PORT                              PATH   SERVICES   PORT   TERMINATION   WILDCARD
zipkin   zipkin-lambdaair.ocp.xxx.example.com          zipkin     9411                 None
Use the displayed URL to access the console from a browser and verify that it works correctly.
4.6. SERVICE DEPLOYMENT
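Build and deploy the services with the fabric8 Maven plugin. A sketch of the invocation, assuming the plugin's deploy goal is run from the root aggregation project:

$ mvn clean fabric8:deploy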
[INFO]
[INFO] ------------------------------------------------------------------------
[INFO] Building Lambda Air 1.0-SNAPSHOT
[INFO] ------------------------------------------------------------------------
...
[INFO] --- fabric8-maven-plugin:3.5.30:deploy (default-cli) @ aggregation ---
[WARNING] F8: No such generated manifest file /Users/bmozaffa/RedHatDrive/SysEng/Microservices/SpringBoot/SpringBootOCP/LambdaAir/target/classes/META-INF/fabric8/openshift.yml for this project so ignoring
[INFO] ------------------------------------------------------------------------
[INFO] Reactor Summary:
[INFO]
[INFO] Lambda Air ......................................... SUCCESS [01:33 min]
[INFO] Lambda Air ......................................... SUCCESS [02:21 min]
[INFO] Lambda Air ......................................... SUCCESS [01:25 min]
[INFO] Lambda Air ......................................... SUCCESS [01:05 min]
[INFO] Lambda Air ......................................... SUCCESS [02:20 min]
[INFO] Lambda Air ......................................... SUCCESS [01:06 min]
[INFO] Lambda Air ......................................... SUCCESS [  1.659 s]
[INFO] ------------------------------------------------------------------------
[INFO] BUILD SUCCESS
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 09:55 min
[INFO] Finished at: 2017-12-08T16:03:12-08:00
[INFO] Final Memory: 67M/661M
[INFO] ------------------------------------------------------------------------
Once all services have been built and deployed, there should be a total of 8 running pods, including the
2 Zipkin pods from before, and a new pod for each of the 6 services:
$ oc get pods
NAME READY STATUS RESTARTS AGE
airports-1-72kng 1/1 Running 0 18m
airports-s2i-1-build 0/1 Completed 0 21m
flights-1-4xkfv 1/1 Running 0 15m
flights-s2i-1-build 0/1 Completed 0 16m
presentation-1-k2xlz 1/1 Running 0 10m
presentation-s2i-1-build 0/1 Completed 0 11m
sales-1-fqxjd 1/1 Running 0 7m
...
$ oc get routes
NAME           HOST/PORT                                    PATH   SERVICES       PORT   TERMINATION   WILDCARD
presentation   presentation-lambdaair.ocp.xxx.example.com          presentation   8080                 None
zipkin         zipkin-lambdaair.ocp.xxx.example.com                zipkin         9411                 None
4.7. FLIGHT SEARCH

Use the URL of the presentation route to access the HTML application from a browser, and verify that it comes up. Search for a flight by entering values for each of the four fields. The first search may take a bit longer, so wait a few seconds for the response.
4.8. EXTERNAL CONFIGURATION
Create a new application.yml file that assumes a higher number of Sales service pods relative to
Presentation pods:
$ vi application.yml
hystrix:
  threadpool:
    SalesThreads:
      coreSize: 30
      maxQueueSize: 300
      queueSizeRejectionThreshold: 300
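Create a ConfigMap from this file, named presentation to match the deployment config that will mount it:

$ oc create configmap presentation --from-file=application.yml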
Edit the Presentation deployment config and mount this ConfigMap as /deployments/config, where it
will automatically be part of the Spring Boot application classpath:
$ oc edit dc presentation
Add a new volume with an arbitrary name, such as config-volume, that references the previously created ConfigMap. The volumes definition is a child of the template spec. Next, create a volume mount under the container to reference this volume and specify where it should be mounted. The final result is as follows, with the volumeMounts and volumes entries being the new additions:
...
        resources: {}
        securityContext:
          privileged: false
        terminationMessagePath: /dev/termination-log
        volumeMounts:
        - name: config-volume
          mountPath: /deployments/config
      volumes:
      - name: config-volume
        configMap:
          name: presentation
      dnsPolicy: ClusterFirst
      restartPolicy: Always
...
Once the deployment config is modified and saved, OpenShift will deploy a new version of the service
that will include the overriding properties. This change is persistent and pods created in the future
with this new version of the deployment config will also mount the yaml file.
List the pods and note that a new pod is being created to roll out the change in the deployment config, namely the mounted file:
$ oc get pods
NAME READY STATUS RESTARTS AGE
airports-1-72kng 1/1 Running 0 18m
airports-s2i-1-build 0/1 Completed 0 21m
flights-1-4xkfv 1/1 Running 0 15m
flights-s2i-1-build 0/1 Completed 0 16m
presentation-1-k2xlz 1/1 Running 0 10m
presentation-2-deploy 0/1 ContainerCreating 0 3s
presentation-s2i-1-build 0/1 Completed 0 11m
sales-1-fqxjd 1/1 Running 0 7m
sales-s2i-1-build 0/1 Completed 0 8m
salesv2-1-s1wq0 1/1 Running 0 5m
Wait until the second version of the pod has reached the running state. The first version will be terminated and subsequently removed:
$ oc get pods
NAME READY STATUS RESTARTS AGE
...
presentation-2-pxx85 1/1 Running 0 5m
presentation-s2i-1-build 0/1 Completed 0 1h
...
Once this has happened, use the browser to do one or several more flight searches, then check the batch size by searching the logs of the new presentation pod:
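A sketch of this check, assuming the new pod name shown earlier:

$ oc logs presentation-2-pxx85 | grep -i batch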
Notice that with the mounted overriding properties, pricing now happens in concurrent batches of 30 items instead of 20.
4.9. A/B TESTING

Copy the provided Groovy script to the shared volume; external Groovy scripts placed in this location can provide dynamic routing:

$ cp Zuul/misc/ABTestingFilterBean.groovy /mnt/zuul/volume/

Create a persistent volume for the Zuul service:
$ oc create -f Zuul/misc/zuul-pv.json
persistentvolume "groovy" created
$ oc create -f Zuul/misc/zuul-pvc.json
persistentvolumeclaim "groovy-claim" created
$ oc get pvc
NAME STATUS VOLUME CAPACITY ACCESSMODES AGE
groovy-claim Bound groovy 1Gi RWO 7s
zipkin-mysql Bound zipkin-mysql-data 1Gi RWO 2h
Attach the persistent volume claim to the deployment config as a directory called groovy on the root
of the filesystem:
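A sketch of this change, assuming the deployment config is named zuul:

$ oc set volume dc/zuul --add --name=groovy-volume --type=persistentVolumeClaim --claim-name=groovy-claim --mount-path=/groovy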
Once again, the change prompts a new deployment; the original zuul pod is terminated once the new version is up and running.
Wait until the second version of the pod reaches the running state:
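For example, with the age column being illustrative:

$ oc get pods
NAME                       READY     STATUS    RESTARTS   AGE
...
zuul-2-gz7hl               1/1       Running   0          1m
...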
Return to the browser and perform one or more flight searches. Then return to the OpenShift
environment and look at the log for the zuul pod.
If the IP address received from your browser ends in an odd number, the groovy script filters pricing
calls and sends them to version 2 of the sales service instead. This will be clear in the zuul log:
$ oc logs zuul-2-gz7hl
...
... groovy.ABTestingFilterBean : Caller IP address is 10.3.116.79
Running filter
In this case, the logs from salesv2 will show tickets being priced with a modified algorithm:
$ oc logs salesv2-1-s1wq0
... c.r.r.o.b.l.sales.service.Controller : Priced ticket at 463 with lower hop discount
... c.r.r.o.b.l.sales.service.Controller : Priced ticket at 425 with lower hop discount
... c.r.r.o.b.l.sales.service.Controller : Priced ticket at 407 with lower hop discount
... c.r.r.o.b.l.sales.service.Controller : Priced ticket at 549 with lower hop discount
... c.r.r.o.b.l.sales.service.Controller : Priced ticket at 509 with lower hop discount
... c.r.r.o.b.l.sales.service.Controller : Priced ticket at 598 with lower hop discount
... c.r.r.o.b.l.sales.service.Controller : Priced ticket at 610 with lower hop discount
If that is not the case and your IP address ends in an even number, the address will still be printed, but the Running filter statement will not appear:
$ oc logs zuul-2-gz7hl
...
... groovy.ABTestingFilterBean : Caller IP address is 10.3.116.78
... groovy.ABTestingFilterBean : Caller IP address is 10.3.116.78
... groovy.ABTestingFilterBean : Caller IP address is 10.3.116.78
... groovy.ABTestingFilterBean : Caller IP address is 10.3.116.78
In this case, you can change the filter criteria to send IP addresses ending in an even digit to the new version of the pricing algorithm, instead of the odd ones:
$ vi /mnt/zuul/volume/ABTestingFilterBean.groovy
...
if( lastDigit % 2 == 0 )
{
	//Even IP address will be filtered
	true
}
else
{
	//Odd IP address won't be filtered
	false
}
...
Deploy a new version of the zuul service to pick up the updated groovy script:
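A sketch, assuming the deployment config is named zuul:

$ oc rollout latest zuul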
Once the new pod is running, do a flight search again and check the logs. The calls to pricing should go
to the salesv2 service now, and logs should appear as previously described.
CHAPTER 5. DESIGN AND DEVELOPMENT
5.1. OVERVIEW
The source code for the Lambda Air application is made available in a public GitHub repository. This chapter briefly covers each microservice and its functionality while reviewing the pieces of the software stack used in the reference architecture.
5.2. RESOURCE LIMITS

The OpenShift template provided in the project repository uses resource requests and limits to ask that at least 20% of a CPU core and 200 megabytes of memory be made available to its container. Twice the processing power and four times the memory may be provided to the container, if necessary and available, but no more than that will be assigned:
resources:
  limits:
    cpu: "400m"
    memory: "800Mi"
  requests:
    cpu: "200m"
    memory: "200Mi"
When the fabric8 Maven plugin is used to create the image and direct edits to the deployment
configuration are not convenient, resource fragments can be used to provide the desired snippets. This
application provides deployment.yml files to leverage this capability and set resource requests and
limits on Spring Boot projects:
spec:
  replicas: 1
  template:
    spec:
      containers:
      - resources:
          requests:
            cpu: '200m'
            memory: '400Mi'
          limits:
            cpu: '400m'
            memory: '800Mi'
Control over the memory and processing use of individual services is often critical. Proper configuration of these values, as specified above, fits seamlessly into the deployment and administration process. It can also be helpful to set up resource quotas in projects, to enforce the inclusion of such values in pod deployment configurations.
5.3.1. Overview
The Airports service is the simplest microservice of the application, which makes it a good point of
reference for building a basic Spring Boot REST service.
package com.redhat.refarch.obsidian.brownfield.lambdaair.airports;

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;

@SpringBootApplication
public class RestApplication
{
	public static void main(String[] args)
	{
		SpringApplication.run( RestApplication.class, args );
	}
}
It is also good practice to declare the application name, which can be done as part of the common application properties. This application uses an application.yml file in each project that begins by declaring that project's name:
spring:
  application:
    name: airports
The POM file uses a property to declare the base image containing the operating system and Java
Development Kit (JDK). All the services in this application build on top of a Red Hat
Enterprise Linux (RHEL) base image, containing a supported version of OpenJDK:
<properties>
  ...
  <fabric8.generator.from>registry.access.redhat.com/redhat-openjdk-18/openjdk18-openshift</fabric8.generator.from>
</properties>
To easily include the dependencies for a simple Spring Boot application that provides a REST service,
declare the following two artifacts:
<dependency>
  <groupId>org.springframework.boot</groupId>
  <artifactId>spring-boot-starter-web</artifactId>
</dependency>
<dependency>
  <groupId>org.springframework.boot</groupId>
  <artifactId>spring-boot-starter-tomcat</artifactId>
</dependency>
Every service in this application also declares a dependency on the Spring Boot Actuator
component, which includes a number of additional features to help monitor and manage your
application.
<dependency>
  <groupId>org.springframework.boot</groupId>
  <artifactId>spring-boot-starter-actuator</artifactId>
</dependency>
When a dependency on the Actuator is declared, fabric8 generates default OpenShift health probes
that communicate with Actuator services to determine whether a service is running and ready to
service requests:
livenessProbe:
  failureThreshold: 3
  httpGet:
    path: /health
    port: 8080
    scheme: HTTP
  initialDelaySeconds: 180
  periodSeconds: 10
  successThreshold: 1
  timeoutSeconds: 1
readinessProbe:
  failureThreshold: 3
  httpGet:
    path: /health
    port: 8080
    scheme: HTTP
  initialDelaySeconds: 10
  periodSeconds: 10
  successThreshold: 1
  timeoutSeconds: 1
...
REST operations are implemented in a class designated as a REST controller:

import org.springframework.web.bind.annotation.RestController;

@RestController
public class Controller
Specify the listening port for this service in the application properties:
server:
  port: 8080
Each REST operation is implemented by a Java method. Business operations typically require
specifying request arguments:
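A minimal sketch of such an operation, assuming a hypothetical Airport bean and an in-memory airports map keyed by airport code; the actual controller offers a similar lookup:

@RequestMapping( value = "/airports", method = RequestMethod.GET )
public Collection<Airport> airports( @RequestParam( value = "filter", required = false ) String filter )
{
	if( filter == null )
	{
		// No filter provided, return every known airport:
		return airports.values();
	}
	// Return only the airports whose code begins with the requested prefix:
	return airports.values().stream()
			.filter( airport -> airport.getCode().startsWith( filter.toUpperCase() ) )
			.collect( Collectors.toList() );
}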
The service performs initialization at startup by listening for the context refresh event:

import org.springframework.context.ApplicationListener;
import org.springframework.context.event.ContextRefreshedEvent;
import org.springframework.stereotype.Component;

@Component
public class ApplicationInitialization implements ApplicationListener<ContextRefreshedEvent>
{
	@Override
	public void onApplicationEvent(ContextRefreshedEvent contextRefreshedEvent)
5.4.1. Overview
The Flights service has a similar structure to that of the Airports service, but relies on, and calls the
Airports service. As such, it makes use of Ribbon and the generated OpenShift service for high
availability.
Using Ribbon requires declaring the corresponding starter dependency:

<dependency>
  <groupId>org.springframework.cloud</groupId>
  <artifactId>spring-cloud-starter-ribbon</artifactId>
</dependency>
This application also makes use of the Jackson JSR 310 libraries to correctly serialize and deserialize
Java 8 date objects:
<dependency>
  <groupId>com.fasterxml.jackson.datatype</groupId>
  <artifactId>jackson-datatype-jsr310</artifactId>
  <version>2.8.8</version>
</dependency>
A load-balanced RestTemplate is declared as a bean and injected where needed:

@LoadBalanced
@Bean
RestTemplate restTemplate()
{
	return new RestTemplate();
}

@Autowired
private RestTemplate restTemplate;
The service address provided as the host part of the URL is resolved through Ribbon, based on values
provided in application properties:
zuul:
  ribbon:
    listOfServers: zuul:8080
In this case, Ribbon expects a list of statically defined service addresses, but a single one is provided: the hostname zuul with port 8080. Zuul uses the second part of the address, the root web context, to redirect the request through static or dynamic routing, as explained later in this document.

The provided hostname of zuul is the OpenShift service name, which resolves to the cluster IP address of the service and is routed through an internal OpenShift load balancer. The OpenShift service name is determined when the service is created with the oc tool; when an image is deployed through the fabric8 Maven plugin, the name is declared in the service yaml file.
Ribbon is effectively not load balancing requests, but rather sending them to an OpenShift internal
load balancer, which is aware of replication and failure of service instances, and can redirect the
request properly.
5.5.1. Overview
The Presentation service makes minimal use of Spring MVC to serve the client-side HTML application
to calling browsers.
import org.springframework.stereotype.Controller;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.servlet.ModelAndView;
@Controller
@RequestMapping( "/" )
This class declares that the service will be listening to HTTP GET requests on the root context of the
application. It serves the index file, provided as an index.html in the templates directory, back to the
browser.
Templates typically allow parameter substitution, but as previously mentioned, this service makes very
minimal use of Spring MVC functionality.
5.5.4. PatternFly
The HTML application developed for this reference architecture uses PatternFly to provide consistent
visual design and improved user experience.
<!-- PatternFly -->
<script src="bower_components/patternfly/dist/js/patternfly.min.js"></script>
5.5.5. JavaScript
The presentation tier of this application is built in HTML5 and relies heavily on JavaScript. This includes Ajax calls to the API gateway, as well as minor changes to HTML elements that are made visible and displayed to the user.
5.5.5.1. jQuery UI
Some features of the jQuery UI library, including autocomplete for airport fields, are utilized in the
presentation layer.
5.5.5.2. Bootstrap Table

To display flight search results in a dynamic table with pagination, and the ability to expand each row to reveal more data, a jQuery Bootstrap Table library is included and utilized.
5.6. HYSTRIX
5.6.1. Overview
The Presentation service includes a second listening controller, this time a REST controller, that acts as
an API gateway. The API gateway makes simple REST calls to the Airports service, similar to the
previously discussed Flights service, but also calls the Sales service to get pricing information and uses
a different pattern for this call. Hystrix is used to avoid a large number of hung threads and lengthy
timeouts when the Sales service is down. Instead, flight information can be returned without providing a
ticket price. The reactive interface of Hystrix is also leveraged to implement parallel processing.
private class PricingCall extends HystrixCommand<Itinerary>
{
	private final Flight flight;

	PricingCall(Flight flight)
	{
		super( HystrixCommandGroupKey.Factory.asKey( "Sales" ),
				HystrixThreadPoolKey.Factory.asKey( "SalesThreads" ) );
		this.flight = flight;
	}

	@Override
	protected Itinerary run() throws Exception
	{
		try
		{
			return restTemplate.postForObject( "http://zuul/sales/price", flight, Itinerary.class );
		}
		catch( Exception e )
		{
			logger.log( Level.SEVERE, "Failed!", e );
			throw e;
		}
	}

	@Override
	protected Itinerary getFallback()
	{
		logger.warning( "Failed to obtain price, " + getFailedExecutionException().getMessage() + " for " + flight );
		return new Itinerary( flight );
	}
}
After being instantiated and provided a flight for pricing, the command takes one of two routes. When successful and able to reach the service being called, the run method is executed, which uses the now-familiar pattern of calling the service through Ribbon and the OpenShift service abstraction. However, if an error prevents the Sales service from being reached, getFallback() provides a chance to recover from the error, which in this case involves returning the itinerary without a price.
The fallback scenario can happen simply because the call has failed, but also in cases when the circuit
is open (tripped). Configure Hystrix as part of the service properties to specify when a thread should
time out and fail, as well as the queue used for concurrent processing of outgoing calls.
To configure the command timeout for a specific command (and not globally), the HystrixCommandKey
is required. This defaults to the command class name, which is PricingCall in this implementation.
Configure thread pool properties for this specific thread pool by using the specified thread pool key of
SalesThreads.
hystrix.command.PricingCall.execution.isolation.thread.timeoutInMilliseconds: 2000
hystrix:
  threadpool:
    SalesThreads:
      coreSize: 20
      maxQueueSize: 200
      queueSizeRejectionThreshold: 200
The API gateway service queries and stores the configured thread pool size as a field:
@Value("${hystrix.threadpool.SalesThreads.coreSize}")
private int threadSize;
The thread size is later used as the batch size for the concurrent calls to calculate the price of a flight:
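A sketch of this batching logic, assuming a hypothetical flights list and the PricingCall command shown earlier; the actual controller code may be structured differently:

List<Itinerary> itineraries = new ArrayList<>();
for( int index = 0; index < flights.size(); index += threadSize )
{
	List<Observable<Itinerary>> batch = new ArrayList<>();
	for( Flight flight : flights.subList( index, Math.min( index + threadSize, flights.size() ) ) )
	{
		// observe() schedules each Hystrix command on its thread pool immediately:
		batch.add( new PricingCall( flight ).observe() );
	}
	// zip waits for every call in the batch and emits the combined results:
	List<Object> priced = Observable.zip( batch, results -> Arrays.asList( results ) ).toBlocking().single();
	for( Object itinerary : priced )
	{
		itineraries.add( (Itinerary) itinerary );
	}
}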
The Reactive zip operator is used to process the calls for each batch concurrently and store results in
a collection. The number of batches depends on the ratio of total flights found to the batch size, which
is set to 20 in this service configuration.
5.7.1. Overview
While considering the concurrent execution of pricing calls, it should be noted that the API gateway is
itself multi-threaded, so the batch size is not the final determinant of the thread count. In this example
of a batch size of 20, with a maximum queue size of 200 and the same threshold leading to rejection,
receiving more than 10 concurrent query calls can lead to errors. These values should be fine-tuned
based on realistic expectations of load as well as the horizontal scaling of the environment.
This configuration can be externalized by creating a ConfigMap for each OpenShift environment, with
overriding values provided in a properties file that is then provided to all future pods.
5.8. ZIPKIN
5.8.1. Overview
This reference architecture uses Spring Sleuth to collect and broadcast tracing data to OpenZipkin,
which is deployed as an OpenShift service and backed by a persistent MySQL database image. The
tracing data can be queried from the Zipkin console, which is exposed through an OpenShift route.
Logging integration is also possible, although not demonstrated here, where trace IDs tie together the distributed execution of the same business request.
To enable persistent storage for the MySQL database image, this reference architecture creates and mounts a logical volume that is exposed through NFS. An OpenShift persistent volume exposes the storage to the image. Once the storage is set up and shared by the NFS server:
$ oc create -f zipkin-mysql-pv.json
persistentvolume "zipkin-mysql-data" created
This reference architecture provides a single OpenShift template to create a database image, the
database schema required for OpenZipkin, and the OpenZipkin image itself. This template relies on the
MySQL image definition that is available by default in the openshift project.
This reference architecture demonstrates the use of lifecycle hooks to initialize a database after the
pod has been created. Specifically, a post hook is used as follows:
recreateParams:
  post:
    failurePolicy: Abort
    execNewPod:
      containerName: mysql
      command:
      - /bin/sh
      - -c
      - hostname && sleep 10 && /opt/rh/rh-mysql57/root/usr/bin/mysql -h $DATABASE_SERVICE_NAME -u $MYSQL_USER -D $MYSQL_DATABASE -p$MYSQL_PASSWORD -P 3306 < /docker-entrypoint-initdb.d/init.sql && echo Initialized database
      env:
      - name: DATABASE_SERVICE_NAME
        value: ${DATABASE_SERVICE_NAME}
      volumes:
      - mysql-init-script
Notice that the hook uses the command line mysql utility to run the SQL script located at /docker-entrypoint-initdb.d/init.sql. Some database images standardize on this location for initialization scripts, in which case a lifecycle hook is not required.
The SQL script to create the schema is embedded in the template as a config map. It is then declared
as a volume and mounted at its final path under /docker-entrypoint-initdb.d/.
The Zipkin deployment itself is based on an OpenZipkin image:

image: openzipkin/zipkin:1.19.2
Required parameters for OpenZipkin to access the associated MySQL database are either configured or generated as part of the same template. Database passwords are randomly generated by OpenShift as part of the template and stored in a secret, which makes them inaccessible to users and administrators in the future. That is why a template message is printed, allowing one-time access to the database password for monitoring and troubleshooting purposes:
$ oc new-app -f LambdaAir/Zipkin/zipkin-mysql.yml
--> Deploying template "lambdaair/" for "zipkin-mysql.yml" to project lambdaair
---------
MySQL database service, with persistent storage. For more information
about using this template, including OpenShift considerations, see
https://github.com/sclorg/mysql-container/blob/master/5.7/README.md.
NOTE: Scaling to more than one replica is not supported. You must
have persistent volumes available in your cluster to use this template.
Username: zipkin
Password: Y4hScBSPH5bAhDL2
Database Name: zipkin
Connection URL: mysql://zipkin-mysql:3306/
* With parameters:
* Memory Limit=512Mi
* Namespace=openshift
* Database Service Name=zipkin-mysql
* MySQL Connection Username=zipkin
* MySQL Connection Password=Y4hScBSPH5bAhDL2 # generated
* MySQL root user Password=xYVNsuRXRV5xqu4A # generated
* MySQL Database Name=zipkin
* Volume Capacity=1Gi
* Version of MySQL Image=5.7
Integration with Ribbon and other framework libraries makes it very easy to use Spring Sleuth in the application. Include the libraries by declaring a dependency in the project Maven file:
<dependency>
  <groupId>org.springframework.cloud</groupId>
  <artifactId>spring-cloud-sleuth-zipkin</artifactId>
</dependency>
Also specify in the application properties the percentage of requests that should be traced, as well as the address of the zipkin server. Once again, the OpenShift service abstraction is relied on to reach zipkin:
spring:
  sleuth:
    sampler:
      percentage: 1.0
  zipkin:
    baseUrl: http://zipkin/
These two steps are enough to collect tracing data, but a Tracer object can also be injected into the code for extended functionality. While every remote call can produce and store a trace by default, adding a tag can help to better understand zipkin reports. The service also creates and demarcates its own spans around notable units of work.
While Spring Sleuth is primarily intended as a distributed tracing tool, its ability to correlate distributed
calls can have other practical uses as well. Every created span allows the attachment of arbitrary data,
called a baggage item, that will be automatically inserted into the HTTP header and seamlessly carried
along with the business request from service to service, for the duration of the span. This application is
interested in making the original caller’s IP address available to every microservice. In an OpenShift
environment, the calling IP address is stored in the HTTP header under a standard key. To retrieve and
set this value on the span:
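A sketch of this step, assuming the injected tracer and the servlet request are in scope; Sleuth prefixes baggage keys with baggage- when propagating them as HTTP headers:

String forwardedFor = request.getHeader( "x-forwarded-for" );
if( forwardedFor != null )
{
	// Attach the caller IP to the current span as a baggage item,
	// carried downstream as the baggage-forwarded-for header:
	tracer.getCurrentSpan().setBaggageItem( "forwarded-for", forwardedFor );
}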
This value will later be accessible from any service within the same call span under the header key of
baggage-forwarded-for. It is used by the Zuul service in a Groovy script to perform dynamic routing.
5.9. ZUUL
5.9.1. Overview
This reference architecture uses Zuul as a central proxy for all calls between microservices. By default,
the service uses static routing as defined in its application properties:
zuul:
  routes:
    airports:
      path: /airports/**
      url: http://airports:8080/
    flights:
      path: /flights/**
      url: http://flights:8080/
    sales:
      path: /sales/**
      url: http://sales:8080/
The path provided in the above rules uses the first part of the web address to determine the service to be called, and the rest of the address as the context. For dynamic routing, the provided Groovy filter first verifies that the request targets the Sales service:
if( !RequestContext.currentContext.getRequest().getRequestURI().matches( "/sales.*" ) )
{
	//Won't filter this request URL
	false
}
Only those calls to the Sales service that originate from an IP address ending in an odd digit are
filtered:
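A sketch of that check, assuming the caller IP address was propagated in the baggage-forwarded-for header as described earlier:

String ipAddress = RequestContext.currentContext.getRequest().getHeader( "baggage-forwarded-for" )
int lastDigit = Character.getNumericValue( ipAddress.charAt( ipAddress.length() - 1 ) )
if( lastDigit % 2 == 1 )
{
	//Odd IP address will be filtered
	true
}
else
{
	//Even IP address won't be filtered
	false
}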
If the caller has an odd digit at the end of their IP address, the request is rerouted. That means the run
method of the filter is executed, which changes the route host:
@Override
Object run() {
	println( "Running filter" )
	RequestContext.currentContext.routeHost = new URL( "http://salesv2:8080" )
}
To enable dynamic routing without changing application code, shared storage is made available to the
OpenShift nodes and a persistent volume is created and claimed. With the volume set up and the
groovy filter in place, the OpenShift deployment config can be adjusted administratively to mount a
directory as a volume:
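A sketch of the resulting deployment config fragment, assuming the volume name and claim used in Chapter 4:

volumeMounts:
- name: groovy-volume
  mountPath: /groovy
volumes:
- name: groovy-volume
  persistentVolumeClaim:
    claimName: groovy-claim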
This results in all groovy scripts under the groovy directory being found. The zuul application code
anticipates the introduction of dynamic routing filters by seeking and applying any groovy script under
this path:
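A sketch of that startup logic, following the standard Netflix Zuul filter-loading pattern; the five-second polling interval is an assumption:

FilterLoader.getInstance().setCompiler( new GroovyCompiler() );
FilterFileManager.setFilenameFilter( new GroovyFileFilter() );
// Poll the mounted /groovy directory for filter scripts every five seconds:
FilterFileManager.init( 5, "/groovy" );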
CHAPTER 6. CONCLUSION
Spring Boot applications designed based on the microservices architectural style are often very easy
to migrate to, and deploy on, Red Hat® OpenShift Container Platform 3. Most of the open
source libraries commonly found in such applications can run on OpenShift without any changes.
This paper and its accompanying technical implementation seek to serve as a useful reference for
such a migration, while providing a proof of concept that can easily be replicated in a customer
environment.
APPENDIX A. AUTHORSHIP HISTORY
APPENDIX B. CONTRIBUTORS
We would like to thank the following individuals for their time and patience as we collaborated on this
process. This document would not have been possible without their many contributions.
APPENDIX C. REVISION HISTORY