Introducing Istio Service Mesh for Microservices
SECOND EDITION
The O’Reilly logo is a registered trademark of O’Reilly Media, Inc. Introducing Istio
Service Mesh for Microservices, the cover image, and related trade dress are trade‐
marks of O’Reilly Media, Inc.
While the publisher and the authors have used good faith efforts to ensure that the
information and instructions contained in this work are accurate, the publisher and
the authors disclaim all responsibility for errors or omissions, including without
limitation responsibility for damages resulting from the use of or reliance on this
work. Use of the information and instructions contained in this work is at your own
risk. If any code samples or other technology this work contains or describes is sub‐
ject to open source licenses or the intellectual property rights of others, it is your
responsibility to ensure that your use thereof complies with such licenses and/or
rights.
This work is part of a collaboration between O’Reilly and Red Hat. See our statement
of editorial independence.
978-1-492-05260-9
Table of Contents
1. Introduction
The Challenge of Going Faster
Meet Istio
Understanding Istio Components
2. Installation and Getting Started
Kubernetes/OpenShift Installation
Istio Installation
Example Java Microservices Installation
3. Traffic Control
Smarter Canaries
Traffic Routing
Dark Launch
Egress
4. Service Resiliency
Load Balancing
Timeout
Retry
Circuit Breaker
Pool Ejection
Combination: Circuit Breaker + Pool Ejection + Retry
5. Chaos Testing
HTTP Errors
Delays
6. Observability
Tracing
Metrics
Service Graph
7. Security
mutual Transport Layer Security (mTLS)
Access Control with Mixer Policy
Role-Based Access Control (RBAC)
Conclusion
CHAPTER 1
Introduction
If you are looking for an introduction to the world of Istio, the
service mesh platform, with detailed examples, this is the book for you.
This book is for the hands-on application architect and develop‐
ment team lead focused on cloud native applications based on the
microservices architectural style. This book assumes that you have
had hands-on experience with Docker, and while Istio will be avail‐
able on multiple Linux container orchestration solutions, the focus
of this book is specifically targeted at Istio on Kubernetes/OpenShift.
Throughout this book, we will use the terms Kubernetes and Open‐
Shift interchangeably. (OpenShift is Red Hat’s supported distribution
of Kubernetes.)
If you need an introduction to Java microservices covering Spring
Boot and Thorntail (formerly known as WildFly Swarm), check out
Microservices for Java Developers (O’Reilly), by Christian Posta.
Also, if you are interested in reactive microservices, an excellent
place to start is Building Reactive Microservices in Java (O’Reilly), by
Clement Escoffier, as it is focused on Vert.x, a reactive toolkit for the
Java Virtual Machine.
In addition, this book assumes that you have a comfort level with
Kubernetes/OpenShift; if that is not the case, OpenShift for Develop‐
ers (O’Reilly), by Grant Shipley and Graham Dumpleton, is an excel‐
lent ebook on that very topic. We will be deploying, interacting with,
and configuring Istio through the lens of OpenShift; however, the
commands we'll use are mostly portable to vanilla Kubernetes as well.
To begin, we discuss the challenges that Istio can help developers
solve and then describe Istio’s primary components.
Whenever those frameworks and libraries were
updated, the applications also needed to be updated to stay in lock
step. Finally, even if they created an implementation of these frame‐
works for every possible permutation of language runtime, they’d
have massive overhead in trying to apply the functionality consis‐
tently. At least in the Netflix example, these libraries were created in
a time when the virtual machine (VM) was the main deployable unit
and they were able to standardize on a single cloud platform plus a
single application runtime, the Java Virtual Machine. Most compa‐
nies cannot and will not do this.
The advent of the Linux container (e.g., Docker) and Kubernetes/
OpenShift have been fundamental enablers for DevOps teams to
achieve vastly higher velocities by focusing on the immutable image
that flows quickly through each stage of a well-automated pipeline.
How development teams manage their pipeline is now independent
of the language or framework that runs inside the container. Open‐
Shift has enabled us to provide better elasticity and overall manage‐
ment of a complex set of distributed, polyglot workloads. OpenShift
ensures that developers can easily deploy and manage hundreds, if
not thousands, of individual services. Those services are packaged as
containers running in Kubernetes pods complete with their respec‐
tive language runtime (e.g., Java Virtual Machine, CPython, and V8)
and all their necessary dependencies, typically in the form of
language-specific frameworks (e.g., Spring or Express) and libraries
(e.g., jars or npms). However, OpenShift does not get involved with
how each of the application components, running in their individual
pods, interact with one another. This is the crossroads where we, as
architects and developers, find ourselves. The tooling and infrastructure
to quickly deploy and manage polyglot services is becoming mature,
but we’re missing similar capabilities when we talk about how those
services interact. This is where the capabilities of a service mesh
such as Istio allow you, the application developer, to build better
software and deliver it faster than ever before.
Meet Istio
Istio is an implementation of a service mesh. A service mesh is the
connective tissue between your services that adds additional capa‐
bilities like traffic control, service discovery, load balancing, resil‐
ience, observability, security, and so on. A service mesh allows
applications to offload these capabilities from application-level libra‐
ries and allows developers to focus on differentiating business logic.
Istio has been designed from the ground up to work across deploy‐
ment platforms, but it has first-class integration and support for
Kubernetes.
Like many complementary open source projects within the Kuber‐
netes ecosystem, Istio is a Greek nautical term that means “sail”—
much like Kubernetes itself is the Greek term for “helmsman” or
“ship’s pilot”. With Istio, there has been an explosion of interest in
the concept of the service mesh—where Kubernetes/OpenShift has
left off is where Istio begins. Istio provides developers and architects
with vastly richer and declarative service discovery and routing
capabilities. Where Kubernetes/OpenShift itself gives you default
round-robin load balancing behind its service construct, Istio allows
you to introduce unique and finely grained routing rules among all
services within the mesh. Istio also provides us with greater
observability: the ability to drill down deeper into the network
topology of various distributed microservices, understanding the
flows (tracing) between them and being able to see key metrics
immediately.
Because the network is not always reliable, that critical link between
and among our microservices needs to be not only subjected to
greater scrutiny but also managed with greater rigor. Istio provides us
with network-level resiliency capabilities such as retry, timeout, and
various circuit-breaker capabilities.
Istio also gives developers and architects the foundation to delve
into a basic exploration of chaos engineering. In Chapter 5, we
describe Istio’s ability to drive chaos injection so that you can see
how resilient and robust your overall application and its potentially
dozens of interdependent microservices actually are.
Before we begin that discussion, we want to ensure that you have a
basic understanding of Istio. The following section will provide you
with an overview of Istio’s essential components.
Understanding Istio Components
Istio's architecture splits into a data plane and a control plane, as
shown in Figure 1-1.
Figure 1-1. Data plane versus control plane
Data Plane
The data plane is implemented in such a way that it intercepts all
inbound (ingress) and outbound (egress) network traffic. Your busi‐
ness logic, your app, your microservice is blissfully unaware of this
fact. Your microservice can use simple framework capabilities to
invoke a remote HTTP endpoint (e.g., Spring RestTemplate or JAX-
RS client) across the network and mostly remain ignorant of the fact
that a lot of interesting cross-cutting concerns are now being applied
automatically. Figure 1-2 describes your typical microservice before
the advent of Istio.
Service proxy
A service proxy augments an application service. The application
service calls through the service proxy any time it needs to commu‐
nicate over the network. The service proxy acts as an intermediary
or interceptor that can add capabilities like automatic retries, circuit
breaker, service discovery, security, and more. The default service
proxy for Istio is based on Envoy proxy.
Envoy proxy is a layer 7 (L7) proxy (see the OSI model on Wikipe‐
dia) developed by Lyft, the ridesharing company, which currently
uses it in production to handle millions of requests per second.
Written in C++, it is battle-tested, highly performant, and light‐
weight. It provides features like load balancing for HTTP1.1,
HTTP2, and gRPC. It has the ability to collect request-level metrics,
trace spans, provide for service discovery, inject faults, and much
more. You might notice that some of the capabilities of Istio overlap
with Envoy. This is simply explained: Istio uses Envoy for its
implementation of these capabilities.
But how does Istio deploy Envoy as a service proxy? Istio brings the
service proxy capabilities as close as possible to the application code
through a deployment technique known as the sidecar.
Sidecar
When Kubernetes/OpenShift were born, they did not refer to a
Linux container as the runnable/deployable unit as you might
expect. Instead, the name pod was born, and it is the primary thing
to manage in a Kubernetes/OpenShift world. Why pod? Some think
it is an obscure reference to the 1956 film Invasion of the Body
Snatchers, but it is actually based on the concept of a family or group
of whales. The whale was the early image associated with the Docker
open source project—the most popular Linux container solution of
its era. So, a pod can be a group of Linux containers. The sidecar is
yet another Linux container that lives directly alongside your busi‐
ness logic application or microservice container. Unlike a real-world
sidecar that bolts onto the side of a motorcycle and is essentially a
simple add-on feature, this sidecar can take over the handlebars and
throttle.
With Istio, a second Linux container called “istio-proxy” (aka the
Envoy service proxy) is manually or automatically injected into the
pod that houses your application or microservice. This sidecar is
responsible for intercepting all inbound (ingress) and outbound
(egress) network traffic from your business logic container, which
means new policies can be applied that reroute the traffic (in or out),
perhaps apply policies such as access control lists (ACLs) or rate
limits, also snatch monitoring and tracing data (Mixer), and even
introduce a little chaos such as network delays or HTTP errors.
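In practice, injection looks like the following; istioctl kube-inject is the real command, while the file name here is illustrative:
istioctl kube-inject -f customer-deployment.yml | oc apply -f -
Automatic injection achieves the same result through an admission webhook, which you will enable during installation in Chapter 2.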
Control Plane
The control plane is responsible for being the authoritative source
for configuration and policy and making the data plane usable in a
cluster potentially consisting of hundreds of pods scattered across a
number of nodes. Istio’s control plane comprises three primary Istio
services: Pilot, Mixer, and Citadel.
Pilot
The Pilot is responsible for managing the overall fleet—all of your
microservices’ sidecars running across your Kubernetes/OpenShift
cluster. The Istio Pilot ensures that each of the independent micro‐
services, wrapped as individual Linux containers and running inside
their pods, has the current view of the overall topology and an up-
to-date “routing table.” Pilot provides capabilities like service discov‐
ery as well as support for VirtualService. The VirtualService is
what gives you fine-grained request distribution, retries, timeouts,
etc. We cover this in more detail in Chapter 3 and Chapter 4.
Citadel
The Istio Citadel component, formerly known as Istio CA or Auth,
is responsible for certificate signing, certificate issuance, and revoca‐
tion/rotation. Istio issues X.509 certificates to all your microservices,
allowing for mutual Transport Layer Security (mTLS) between those
services, encrypting all their traffic transparently. It uses identity
built into the underlying deployment platform and builds that into
the certificates. This identity allows you to enforce policy. An exam‐
ple of setting up mTLS is discussed in Chapter 7.
CHAPTER 2
Installation and Getting Started
In this section, we show you how to get started with Istio on Kuber‐
netes. Istio is not tied to Kubernetes in any way, and in fact, it’s
intended to be agnostic of any deployment infrastructure. With that
said, Kubernetes is a great place to run Istio with its native support
of the sidecar-deployment concept. Feel free to use any distribution
of Kubernetes you wish, but here we use Minishift, a developer-focused
tool that runs OpenShift, Red Hat's enterprise distribution of
Kubernetes, in a local VM.
kubectl
We will focus on the usage of the oc CLI throughout this book,
but it is mostly interchangeable with kubectl. You can switch
back and forth between the two easily.
oc
minishift oc-env will output the path to the oc client binary; there is
no need to download it separately.
OpenJDK
You will need access to both javac and java command-line tools.
Maven
For building the sample Java projects.
stern
For easily viewing logs.
Siege
For load testing the Istio resiliency options in Chapter 4.
Git
For git clone, downloading the sample code.
istioctl
Will be installed via the steps that follow.
curl and tar
To use as part of your bash shell.
Kubernetes/OpenShift Installation
Keep in mind when bootstrapping Minishift that you’ll be creating a
lot of services. You’ll be installing the Istio control plane, some sup‐
porting metrics and visualization applications, and your sample
application services. To accomplish this, the virtual machine (VM)
that you use to run Kubernetes will need to have enough resources.
Although we recommend 8 GB of RAM and 3 CPUs for the VM, we
have seen the examples contained in this book run successfully on 4
GB of RAM and 2 CPUs.
After you’ve installed Minishift, you can bootstrap the environment
by using this script:
#!/bin/bash
minishift start
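If you want the VM sized according to the recommendation above, one approach (a sketch, assuming the standard minishift config commands and the admin-user addon that the login below relies on) is to run these before minishift start:
minishift config set memory 8GB
minishift config set cpus 3
minishift addons enable admin-user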
When things have launched correctly, you should be able to set up
your environment to have access to Minishift’s included docker dae‐
mon and also log in to the Kubernetes cluster:
eval $(minishift oc-env)
eval $(minishift docker-env)
oc login $(minishift ip):8443 -u admin -p admin
If everything is successful up to this point, you should be able to run
the following command:
oc get node
NAME STATUS AGE VERSION
localhost Ready 5m v1.11.0+d4cacc0
Plus, you can view the main web console with the following:
minishift dashboard
If you have errors along the way, review the current steps of the Istio
Tutorial for Java Microservices and potentially file a GitHub issue.
Istio Installation
Istio distributions come bundled with the necessary binary
command-line interface (CLI) tool, installation resources, and sam‐
ple applications. You should download the Istio 1.0.4 release:
curl -L https://github.com/istio/istio/releases/download/1.0.4/istio-1.0.4-osx.tar.gz | tar xz
cd istio-1.0.4
Now you need to prepare your OpenShift/Kubernetes environment.
Istio uses ValidatingAdmissionWebhook for validating Istio config‐
uration and MutatingAdmissionWebhook for automatically injecting
the sidecar proxy into user pods. Update Minishift’s default configu‐
ration by running the following:
minishift openshift config set --target=kube --patch '{
  "admissionConfig": {
    "pluginConfig": {
      "ValidatingAdmissionWebhook": {
        "configuration": {
          "apiVersion": "v1",
          "kind": "DefaultAdmissionConfig",
          "disable": false
        }
      },
      "MutatingAdmissionWebhook": {
        "configuration": {
          "apiVersion": "v1",
          "kind": "DefaultAdmissionConfig",
          "disable": false
        }
      }
    }
  }
}'
Now you can install Istio. From the Istio distribution’s root folder,
run the following:
oc apply -f install/kubernetes/helm/istio/templates/crds.yaml
oc apply -f install/kubernetes/istio-demo.yaml
oc project istio-system
This will install all the necessary Istio control plane components
including Istio Pilot, Mixer (the actual Mixer pods are called teleme‐
try and policy), and Citadel. In addition, it installs some useful com‐
panion services: Prometheus, for metrics collection; Jaeger, for
distributed tracing support; Grafana for metrics dashboard; and
Servicegraph for simple visualization of services. You will be touch‐
ing these services in Chapter 6.
Finally, because we’re on OpenShift, you can expose these services
directly through the OpenShift Router. This way you don’t have to
mess around with node ports:
oc expose svc servicegraph
oc expose svc grafana
oc expose svc prometheus
oc expose svc tracing
Now, from your command line you should be able to type istioctl
version and see a valid response:
istioctl version
Version: 1.0.4
GitRevision: a44d4c8bcb427db16ca4a439adfbd8d9361b8ed3
User: root@0ead81bba27d
Hub: docker.io/istio
GolangVersion: go1.10.4
BuildStatus: Clean
At this point, you can move on to installing the sample services.
Example Java Microservices Installation
To effectively demonstrate the capabilities of Istio, you’ll need to use
a set of services that interact and communicate with one another.
The services we have you work with in this section are a fictitious
and simplistic re-creation of a customer portal for a website (think
retail, finance, insurance, and so forth). In these scenarios, a cus‐
tomer portal would allow customers to set preferences for certain
aspects of the website. Those preferences will have the opportunity
to take recommendations from a recommendation engine that
offers up suggestions. The flow of communication looks like this:
Customer ⇒ Preference ⇒ Recommendation
From this point forward, it would be best for you to have the source
code that accompanies the book. You can check out the source code
from the Istio Tutorial for Java Microservices and switch to the
branch book-1.0.4, as demonstrated here:
git clone \
https://github.com/redhat-developer-demos/istio-tutorial.git
cd istio-tutorial
git checkout book-1.0.4
As one example from the code, the customer service's endpoint reads
the caller's User-Agent header and an optional user-preference header:
@RequestMapping("/")
public ResponseEntity<String> getCustomer(
        @RequestHeader("User-Agent") String userAgent,
        @RequestHeader(value = "user-preference", required = false)
        String userPreference) {
Before you deploy your services, make sure that you create the tar‐
get namespace/project and apply the correct security permissions:
oc new-project tutorial
oc adm policy add-scc-to-user privileged -z default -n tutorial
oc apply -f ../../kubernetes/Deployment.yml
oc create -f ../../kubernetes/Service.yml
oc get pods -w
Look for "2/2" under the READY column. Press Ctrl-C to break out of
the wait, and now when you curl the customer endpoint, you should
receive a better response:
curl customer-tutorial.$(minishift ip).nip.io
Smarter Canaries
The concept of the canary deployment has become fairly popular in
the last few years. The name comes from the “canary in the coal
mine” concept. Miners used to take an actual bird, a canary in a
cage, into the mines to detect whether there were dangerous gases
present because canaries are more susceptible to these gases than
humans. The canary would not only provide nice musical songs to
entertain the miners, but if at any point it collapsed off its perch, the
miners knew to get out of the mine quickly.
The canary deployment has similar semantics. With a canary
deployment, you deploy a new version of your code to production
but allow only a subset of traffic to reach it. Perhaps only beta cus‐
tomers, perhaps only internal employees of your organization, perhaps
only iOS users, and so on. After the canary is out there, you
can monitor it for exceptions, bad behavior, changes in service-level
agreement (SLA), and so forth. If the canary deployment/pod exhib‐
its no bad behavior, you can begin to slowly increase end-user traffic
to it. If it exhibits bad behavior, you can easily pull it from produc‐
tion. The canary deployment allows you to deploy faster but with
minimal disruption should a “bad” code change make it through
your automated QA tests in your continuous deployment pipeline.
By default, Kubernetes offers out-of-the-box round-robin load bal‐
ancing of all the pods behind a Kubernetes Service. If you want only
10% of all end-user traffic to hit your newest pod, you must have at
least a 10-to-1 ratio of old pods to the new pod. With Istio, you can
be much more fine-grained. You can specify that only 2% of traffic,
across only three pods, be routed to the latest version. Istio will also
let you gradually increase overall traffic to the new version until all
end users have been migrated over and the older versions of the app
logic/code can be removed from the production environment.
Traffic Routing
With Istio, you can specify routing rules that control the traffic to a
set of pods. Specifically, Istio uses DestinationRule and VirtualSer
vice resources to describe these rules. The following is an example
of a DestinationRule that establishes which pods make up a spe‐
cific subset:
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: recommendation
  namespace: tutorial
spec:
  host: recommendation
  subsets:
  - labels:
      version: v1
    name: version-v1
  - labels:
      version: v2
    name: version-v2
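A VirtualService then references these subsets to split traffic between them. The following sketch shows a 90/10 canary split; the weights here are illustrative, and the tutorial's istiofiles directory contains ready-made variants:
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: recommendation
  namespace: tutorial
spec:
  hosts:
  - recommendation
  http:
  - route:
    - destination:
        host: recommendation
        subset: version-v1
      weight: 90
    - destination:
        host: recommendation
        subset: version-v2
      weight: 10
To have two versions to route between, you first need to build and deploy the v2 variant of the recommendation service: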
cd recommendation/java/vertx
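A sketch of the build-and-deploy steps, modeled on the v1 workflow shown earlier; the Deployment-v2.yml name is an assumption based on the tutorial repo's conventions:
mvn clean package
docker build -t example/recommendation:v2 .
oc apply -f ../../kubernetes/Deployment-v2.yml -n tutorial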
You can run oc get pods to see the pods as they all come up; it
should look like this when all the pods are running successfully:
NAME READY STATUS RESTARTS AGE
customer-3600192384-fpljb 2/2 Running 0 17m
preference-243057078-8c5hz 2/2 Running 0 15m
recommendation-v1-60483540 2/2 Running 0 12m
recommendation-v2-99634814 2/2 Running 0 15s
At this point, if you curl the customer endpoint, you should see traf‐
fic load balanced across both versions of the recommendation ser‐
vice. You should see something like this:
#!/bin/bash
while true
do curl customer-tutorial.$(minishift ip).nip.io
sleep .1
done
To route all traffic to v1 of the recommendation service, apply the
following VirtualService:
oc -n tutorial create -f \
istiofiles/virtual-service-recommendation-v1.yml
Now if you try to query the customer service, you should see all traf‐
fic routed to v1 of the service:
customer => preference => recommendation v1 from '60483540': 32
customer => preference => recommendation v1 from '60483540': 33
customer => preference => recommendation v1 from '60483540': 34
If you start sending load against the customer service like in the pre‐
vious steps, you should see that only a fraction of traffic actually
makes it to v2. This is a canary release. Monitor your logs, metrics,
and tracing systems to see whether this new release has introduced
any negative or unexpected behaviors into your system.
Istio can also route traffic to a service version based on a
specified set of criteria. For example, you might want to split traffic
to a particular service based on geography, mobile device, or
browser. Let’s see how to do that with Istio.
With Istio, you can use a match clause in the VirtualService to
specify a predicate. For example, take a look at the following Vir
tualService:
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  creationTimestamp: null
  name: recommendation
  namespace: tutorial
spec:
  hosts:
  - recommendation
  http:
  - match:
    - headers:
        baggage-user-agent:
          regex: .*Safari.*
    route:
    - destination:
        host: recommendation
        subset: version-v2
  - route:
    - destination:
        host: recommendation
        subset: version-v1
This rule uses a request header–based matching clause that will
match only if the request includes “Safari” as part of the user-agent
header. If the request matches the predicate, it will be routed to v2 of
the recommendation service.
Install the rule:
oc -n tutorial create -f \
istiofiles/virtual-service-safari-recommendation-v2.yml
And let’s try it out:
curl customer-tutorial.$(minishift ip).nip.io
Cleaning up rules
Before moving on, you can clean up all of the Istio objects
you've installed:
oc delete virtualservice recommendation -n tutorial
oc delete destinationrule recommendation -n tutorial
Dark Launch
Dark launch can mean different things to different people. In
essence, a dark launch is a deployment to production that is invisible
to customers. In this case, Istio allows you to duplicate or mirror
traffic to a new version of your application and see how it behaves
compared to the live application pod. This way you’re able to put
production quality requests into your new service without affecting
any live traffic.
For example, you could say recommendation v1 takes the live traffic
and recommendation v2 will be your new deployment. You can use
Istio to mirror traffic that goes to v1 into the v2 pod. When Istio
mirrors traffic, it does so in a fire-and-forget manner. In other
words, Istio will do the mirroring asynchronously from the critical
path of the live traffic, send the mirrored request to the test pod, and
not worry about or care about a response. Let’s try this out.
The first thing you should do is make sure that no DestinationRule
or VirtualService is currently being used:
oc get destinationrules -n tutorial
No resources found.
oc get virtualservices -n tutorial
No resources found.
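The mirroring rule itself is a VirtualService; a sketch of what istiofiles/virtual-service-recommendation-v1-mirror-v2.yml contains (check the tutorial repo for the authoritative version):
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: recommendation
  namespace: tutorial
spec:
  hosts:
  - recommendation
  http:
  - route:
    - destination:
        host: recommendation
        subset: version-v1
    mirror:
      host: recommendation
      subset: version-v2
Apply it: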
oc -n tutorial create -f \
istiofiles/virtual-service-recommendation-v1-mirror-v2.yml
In one terminal, tail the logs for the recommendation v2 service:
oc -n tutorial logs -f \
  `oc get pods | grep recommendation-v2 | awk '{ print $1 }'` \
  -c recommendation
You can also use stern as another way to see logs of both recommen‐
dation v1 and v2:
stern recommendation
In another window, you can send in a request:
curl customer-tutorial.$(minishift ip).nip.io
Egress
By default, Istio directs all traffic originating in a service through the
Istio proxy that’s deployed alongside the service. This proxy evalu‐
ates its routing rules and decides how best to deliver the request.
One nice thing about the Istio service mesh is that by default it
blocks all outbound (outside of the cluster) traffic unless you specifi‐
cally and explicitly create rules to allow traffic out. From a security
standpoint, this is crucial. You can use Istio in both zero-trust net‐
working architectures as well as traditional perimeter-based security.
In both cases, Istio helps protect against a nefarious agent gaining
access to a single service and calling back out to a command-and-
control system, thus allowing an attacker full access to the network.
By blocking any outgoing access by default and allowing routing
rules to control not only internal traffic but any and all outgoing
traffic, you can make your security posture more resilient to outside
attacks irrespective of where they originate.
You can test this concept by shelling into one of your pods and sim‐
ply running a curl command:
oc get pods -n tutorial
NAME READY STATUS RESTARTS AGE
customer-6564ff969f-jqkkr 2/2 Running 0 19m
preference-v1-5485dc6f49-hrlxm 2/2 Running 0 19m
recommendation-v1-60483540 2/2 Running 0 20m
recommendation-v2-99634814 2/2 Running 0 7m
oc exec -it recommendation-v2-99634814 /bin/bash
But first make sure to have your DestinationRule set up and then
apply the ServiceEntry:
oc -n tutorial create -f \
istiofiles/destination-rule-recommendation-v1-v2.yml
oc -n tutorial create -f \
istiofiles/service-entry-egress-httpbin.yml
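The ServiceEntry manifest whitelists a specific external host. A sketch of its likely shape, inferred from the file name (httpbin.org over plain HTTP; the tutorial repo has the exact contents):
apiVersion: networking.istio.io/v1alpha3
kind: ServiceEntry
metadata:
  name: httpbin-egress-rule
  namespace: tutorial
spec:
  hosts:
  - httpbin.org
  ports:
  - name: http
    number: 80
    protocol: HTTP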
Now when you shell into your pod and run curl you get back a cor‐
rect 200 response:
oc exec -it recommendation-v2-99634814 /bin/bash
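From inside the container, curl the host you just whitelisted (the path here is arbitrary):
curl http://httpbin.org/user-agent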
CHAPTER 4
Service Resiliency
Istio comes with several capabilities for making the interactions
between your services more resilient. This chapter covers the following:
Timeout
Wait only N seconds for a response and then give up.
Retry
If one pod returns an error (e.g., 503), retry for another pod.
Simple circuit breaker
Instead of overwhelming the degraded service, open the circuit
and reject further requests.
Pool ejection
This provides auto removal of error-prone pods from the load-
balancing pool.
Let’s look at each capability with an example. Here, we use the same
set of services of customer, preference, and recommendation as in the
previous chapters.
Load Balancing
A core capability for increasing throughput and lowering latency is
load balancing. A straightforward way to implement this is to have a
centralized load balancer with which all clients communicate and
that knows how to distribute load to any backend systems. This is a
great approach, but it can become both a bottleneck as well as a sin‐
gle point of failure. Load-balancing capabilities can be distributed to
clients with client-side load balancers. These client load balancers
can use sophisticated, cluster-specific, load-balancing algorithms to
increase availability, lower latency, and increase overall throughput.
The Istio proxy provides client-side load balancing through the
following configurable algorithms:
ROUND_ROBIN
This algorithm evenly distributes the load, in order, across the
endpoints in the load-balancing pool.
RANDOM
This evenly distributes the load across the endpoints in the
load-balancing pool but without any order.
LEAST_CONN
This algorithm picks two random hosts from the load-balancing
pool and routes the request to the one with fewer outstanding
requests.
The following DestinationRule configures the load-balancing
algorithm:
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: recommendation
  namespace: tutorial
spec:
  host: recommendation
  trafficPolicy:
    loadBalancer:
      simple: RANDOM
This destination policy configures traffic to the recommendation ser‐
vice to be sent using a random load-balancing algorithm.
Let’s create this DestinationRule:
oc -n tutorial create -f \
istiofiles/destination-rule-recommendation_lb_policy_app.yml
You should now see a more random distribution when you call your
service:
customer => ... => recommendation v2 from '2819441432': 183
customer => ... => recommendation v2 from '6375428941': 3
customer => ... => recommendation v2 from '2819441432': 184
customer => ... => recommendation v1 from '99634814': 1153
customer => ... => recommendation v1 from '99634814': 1154
customer => ... => recommendation v2 from '2819441432': 185
customer => ... => recommendation v2 from '6375428941': 4
customer => ... => recommendation v2 from '6375428941': 5
customer => ... => recommendation v2 from '2819441432': 186
customer => ... => recommendation v2 from '4876125439': 3
Timeout
Timeouts are a crucial component for making systems resilient and
available. Calls to services over a network can result in lots of unpre‐
dictable behavior, but the worst behavior is latency. Did the service
fail? Is it just slow? Is it not even available? Unbounded latency
means any of those things could have happened. But what does your
service do? Just sit around and wait? Waiting is not a good solution
if there is a customer on the other end of the request. Waiting also
uses resources, causes other systems to potentially wait, and is usually
a contributor to cascading failures. To see Istio's timeout in action,
the tutorial first has you add an artificial three-second delay to the
recommendation v2 code; the end of the modified Vert.x startup
looks like this:
vertx.createHttpServer().requestHandler(router::accept)
.listen(LISTEN_ON);
}
You should save your changes before continuing and then build the
service and deploy it:
cd recommendation/java/vertx
mvn clean package
docker build -t example/recommendation:v2 .
oc delete pod -l app=recommendation,version=v2 -n tutorial
The last step here is to restart the v2 pod with the latest image of
your recommendation service. Now, if you call your customer service
endpoint, you should experience the delay when the call hits the
recommendation v2 service:
time curl customer-tutorial.$(minishift ip).nip.io
real 0m3.054s
user 0m0.003s
sys 0m0.003s
You might need to make the call a few times for it to route to the v2
service. The v1 version of recommendation does not have the delay
as it is based on the v1 variant of the code.
Let’s look at your VirtualService that introduces a rule that impo‐
ses a timeout when making calls to the recommendation service:
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  creationTimestamp: null
  name: recommendation
  namespace: tutorial
spec:
  hosts:
  - recommendation
  http:
  - route:
    - destination:
        host: recommendation
    timeout: 1.000s
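Create the rule and time the call again; the file name below is assumed to follow the tutorial's istiofiles naming:
oc -n tutorial create -f \
istiofiles/virtual-service-recommendation-timeout.yml
time curl customer-tutorial.$(minishift ip).nip.io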
real 0m1.151s
user 0m0.003s
sys 0m0.003s
Retry
Because you know the network is not reliable, you might experience
transient, intermittent errors. This can be even more pronounced
with distributed microservices rapidly deploying several times a
week or even a day. The service or pod might have gone down only
briefly. With Istio’s retry capability, you can make a few more
attempts before having to truly deal with the error, potentially falling
back to default logic. Here, we show you how to configure Istio to
do this.
The first thing you need to do is simulate transient network errors.
In the recommendation service example, you find a special endpoint
that simply sets a flag; this flag indicates that the return value of
getRecommendations should always be a 503. To flip the flag, call the
endpoint from inside the recommendation-v2 container; one way (a
sketch of the exec step) is:
oc exec -it $(oc get pods | grep recommendation-v2 | awk '{ print $1 }') \
  -c recommendation -- curl localhost:8080/misbehave
Now when you send traffic to the customer service, you should see
some 503 errors:
#!/bin/bash
while true
do
curl customer-tutorial.$(minishift ip).nip.io
sleep .1
done
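The retry rule described next is a VirtualService; a sketch consistent with those settings (the real file is istiofiles/virtual-service-recommendation-v2_retry.yml):
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: recommendation
  namespace: tutorial
spec:
  hosts:
  - recommendation
  http:
  - route:
    - destination:
        host: recommendation
        subset: version-v2
    retries:
      attempts: 3
      perTryTimeout: 2.000s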
This rule sets your retry attempts to 3 and will use a 2s timeout for
each retry. The cumulative timeout is therefore 6 seconds plus the
time of the original call.
Let’s create your retry rule and try the traffic again:
oc -n tutorial create -f \
istiofiles/virtual-service-recommendation-v2_retry.yml
Now when you send traffic, you shouldn’t see any errors. This
means that even though you are experiencing 503s, Istio is automat‐
ically retrying the request, as shown here:
customer => preference => recommendation v1 from '99634814': 35
customer => preference => recommendation v1 from '99634814': 36
customer => preference => recommendation v1 from '99634814': 37
customer => preference => recommendation v1 from '99634814': 38
Now you can clean up all the rules you’ve installed:
oc delete destinationrules --all -n tutorial
oc delete virtualservices --all -n tutorial
Circuit Breaker
Much like the electrical safety mechanism in the modern home (we
used to have fuse boxes, and “blew a fuse” is still part of our vernac‐
ular), the circuit breaker ensures that any specific appliance does not
overdraw electrical current through a particular outlet. If you’ve ever
lived with someone who plugged their radio, hair dryer, and perhaps
a portable heater into the same circuit, you have likely seen this in
action. The overdraw of current creates a dangerous situation
because you can overheat the wire, which can result in a fire. The
circuit breaker opens and disconnects the electrical current flow.
Before applying the circuit breaker, make sure a DestinationRule
defining the v1 and v2 subsets is in place:
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  creationTimestamp: null
  name: recommendation
  namespace: tutorial
spec:
  host: recommendation
  subsets:
  - labels:
      version: v1
    name: version-v1
  - labels:
      version: v2
    name: version-v2
All the requests to the application were successful, but it took some
time to run the test because the v2 pod was a slow performer.
Suppose that in a production system this 3-second delay was caused
by too many concurrent requests to the same instance or pod. You
don’t want multiple requests getting queued or making that instance
or pod even slower. So, we’ll add a circuit breaker that will open
whenever you have more than one request being handled by any
instance or pod.
To create circuit breaker functionality for our services, we use an
Istio DestinationRule that looks like this:
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  creationTimestamp: null
  name: recommendation
  namespace: tutorial
spec:
  host: recommendation
  subsets:
  - name: version-v1
    labels:
      version: v1
  - name: version-v2
    labels:
      version: v2
    trafficPolicy:
      connectionPool:
        http:
          http1MaxPendingRequests: 1
          maxRequestsPerConnection: 1
        tcp:
          maxConnections: 1
      outlierDetection:
        baseEjectionTime: 120.000s
        consecutiveErrors: 1
        interval: 1.000s
        maxEjectionPercent: 100
Here, you’re configuring the circuit breaker for any client calling
into v2 of the recommendation service. Remember in the previous
VirtualService that you are splitting (50%) traffic between both v1
and v2, so this DestinationRule should be in effect for half the traf‐
fic. You are limiting the number of connections and number of
pending requests to one. (We discuss the other settings in “Pool
Ejection” on page 50, in which we look at outlier detection.) Let’s
create this circuit breaker policy:
oc -n tutorial replace -f \
istiofiles/destination-rule-recommendation_cb_policy_version_v2.yml
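Then drive concurrent load at the customer endpoint; Siege, from the tooling list in Chapter 2, works well for this (the flags here, 20 concurrent clients for 2 rounds, are illustrative):
siege -r 2 -c 20 -v customer-tutorial.$(minishift ip).nip.io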
You can now see that almost all calls completed in less than a second
with either a success or a failure. You can try this a few times to see
that this behavior is consistent. The circuit breaker will short circuit
any pending requests or connections that exceed the specified thres‐
hold (in this case, an artificially low number, 1, to demonstrate these
capabilities). The goal of the circuit breaker is to fail fast.
You can clean up these Istio rules with a simple “oc delete”:
oc delete virtualservice recommendation -n tutorial
oc delete destinationrule recommendation -n tutorial
Pool Ejection
The last of the resilience capabilities that we discuss has to do with
identifying badly behaving cluster hosts and not sending any more
traffic to them for a cool-off period (essentially kicking the bad-
behaving pod out of the load-balancing pool). Because the Istio
proxy is based on Envoy and Envoy calls this implementation outlier
detection, we’ll use the same terminology for discussing Istio.
In a scenario where your software development teams are deploying
their components into production, perhaps multiple times per week,
during the middle of the workday, being able to kick out misbehav‐
ing pods adds to overall resiliency. Pool ejection or outlier detection
is a resilience strategy that is valuable whenever you have a group of
pods (multiple replicas) to serve a client request. If the request is
forwarded to a certain instance and it fails (e.g., returns a 50x error
code), Istio will eject this instance from the pool for a certain sleep
window. In our example, the sleep window is configured to be 15s.
This increases the overall availability by making sure that only
healthy pods participate in the pool of instances.
First, you need to ensure that you have a DestinationRule and Vir
tualService in place. Let’s use a 50/50 split of traffic:
oc -n tutorial create -f \
istiofiles/destination-rule-recommendation-v1-v2.yml
oc -n tutorial create -f \
istiofiles/virtual-service-recommendation-v1_and_v2_50_50.yml
Next, you can scale the number of pods for the v2 deployment of
recommendation so that you have multiple instances in the load-
balancing pool with which to work:
oc scale deployment recommendation-v2 --replicas=2 -n tutorial
Wait a moment for all of the pods to get to the ready state, then
generate some traffic against the customer service:
#!/bin/bash
while true
do curl customer-tutorial.$(minishift ip).nip.io
done
You’ll see that whenever the pod recommendation-v2-3416541697
receives a request, you get a 503 error:
customer => ... => recommendation v1 from '2039379827': 495
customer => ... => recommendation v2 from '2036617847': 248
customer => ... => recommendation v1 from '2039379827': 496
customer => ... => recommendation v1 from '2039379827': 497
customer => 503 preference => 503 misbehavior from '3416541697'
customer => ... => recommendation v2 from '2036617847': 249
customer => ... => recommendation v1 from '2039379827': 498
customer => 503 preference => 503 misbehavior from '3416541697'
Now let’s see what happens when you configure Istio to eject misbe‐
having hosts. Look at the DestinationRule in the following:
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  creationTimestamp: null
  name: recommendation
  namespace: tutorial
spec:
  host: recommendation
  subsets:
  - labels:
      version: v1
    name: version-v1
    trafficPolicy:
      connectionPool:
        http: {}
        tcp: {}
      loadBalancer:
        simple: RANDOM
      outlierDetection:
        baseEjectionTime: 15.000s
        consecutiveErrors: 1
        interval: 5.000s
        maxEjectionPercent: 100
  - labels:
      version: v2
    name: version-v2
    trafficPolicy:
      connectionPool:
        http: {}
        tcp: {}
      loadBalancer:
        simple: RANDOM
      outlierDetection:
        baseEjectionTime: 15.000s
        consecutiveErrors: 1
        interval: 5.000s
        maxEjectionPercent: 100
CHAPTER 5
Chaos Testing
HTTP Errors
Based on exercises earlier in this book, make sure that recommenda‐
tion v1 and v2 are both deployed with no code-driven misbehavior
or long waits/latency. Now, you will be injecting errors via Istio
instead of using Java code:
oc get pods -l app=recommendation -n tutorial
NAME READY STATUS RESTARTS AGE
recommendation-v1-3719512284 2/2 Running 6 18m
recommendation-v2-2815683430 2/2 Running 0 13m
The VirtualService:
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: recommendation
  namespace: tutorial
spec:
  hosts:
  - recommendation
  http:
  - fault:
      abort:
        httpStatus: 503
        percent: 50
    route:
    - destination:
        host: recommendation
        subset: app-recommendation
oc -n tutorial create -f \
istiofiles/virtual-service-recommendation-503.yml
Testing the change is as simple as issuing a few curl commands at
the customer endpoint. Make sure to test it a few times, looking for
the resulting 503 approximately 50% of the time:
curl customer-tutorial.$(minishift ip).nip.io
customer => preference => recommendation v1 from
'3719512284': 88
Delays
The most insidious of possible distributed computing faults is not a
“dead” service but a service that is responding slowly, potentially
causing a cascading failure in your network of services. More impor‐
tantly, if your service has a specific service-level agreement (SLA) it
must meet, how do you verify that slowness in your dependencies
doesn’t cause you to fail in delivering to your awaiting customer?
Much like the HTTP error injection, network delays use the
VirtualService kind as well. The following manifest injects 7 seconds
of delay into 50% of the responses from the recommendation service:
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  creationTimestamp: null
  name: recommendation
  namespace: tutorial
spec:
  hosts:
  - recommendation
  http:
  - fault:
      delay:
        fixedDelay: 7.000s
        percent: 50
    route:
    - destination:
        host: recommendation
        subset: app-recommendation
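Apply it as before; the file name is assumed from the tutorial's conventions:
oc -n tutorial create -f \
istiofiles/virtual-service-recommendation-delay.yml
Then issue a few curl commands against the customer endpoint; roughly half of them should take at least 7 seconds longer than usual.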
CHAPTER 6
Observability
Tracing
Often the first thing to understand about your microservices archi‐
tecture is specifically which microservices are involved in an end-
user transaction. If many teams are deploying their dozens of
microservices, all independently of one another, it is often difficult
to understand the dependencies across that “mesh” of services. Istio’s
Mixer comes “out of the box” with the ability to pull tracing spans
from your distributed microservices. This means that tracing is
programming-language agnostic so that you can use this capability
in a polyglot world where different teams, each with its own micro‐
service, can be using different programming languages and frame‐
works.
Although Istio supports both Zipkin and Jaeger, for our purposes we
focus on Jaeger, which implements OpenTracing, a vendor-neutral
tracing API. Jaeger was originally open sourced by the Uber Tech‐
nologies team and is a distributed tracing system specifically focused
on microservices architecture.
An important term to understand here is span, which Jaeger defines
as “a logical unit of work in the system that has an operation name,
the start time of the operation, and the duration. Spans can be nes‐
ted and ordered to model causal relationships. An RPC call is an
example of a span.”
Another important term is trace, which Jaeger defines as “a data/
execution path through the system, and can be thought of as a direc‐
ted acyclic graph of spans.”
Open the Jaeger console by using the following command:
minishift openshift service tracing --in-browser
You can then select Customer from the drop-down list box and
explore the traces found, as illustrated in Figure 6-1.
For spans from different services to be stitched into a single trace,
your application must propagate the following HTTP headers from
incoming requests to any outgoing requests:
x-request-id
x-b3-traceid
x-b3-spanid
x-b3-parentspanid
x-b3-sampled
x-b3-flags
x-ot-span-context
However, your chosen framework may have support for automati‐
cally carrying those headers. In the case of the customer and prefer‐
ence services, for the Spring Boot implementations, there is
opentracing_spring_cloud.
Our customer and preference services are using the TracerResolver
library, so that the concrete tracer can be loaded automatically
without our code having a hard dependency on Jaeger. Given that
the Jaeger tracer can be configured via environment variables, we
don’t need to do anything in order to get a properly configured
Jaeger tracer ready and registered with OpenTracing. That said,
there are cases where it’s appropriate to manually configure a tracer.
Refer to the Jaeger documentation for more information on how to
do that.
By default, Istio captures or samples 100% of the requests flowing
through the mesh. This is valuable for a development scenario
where you are attempting to debug aspects of the application but it
might be too voluminous in a different setting such as a performance
benchmark or a production environment. The sampling rate is
defined by the "PILOT_TRACE_SAMPLING" environment variable
on the Istio Pilot Deployment. This can be viewed/edited via the
following command:
oc edit deployment istio-pilot -n istio-system
Metrics
By default, Istio gathers telemetry data across the service mesh,
leveraging Prometheus and Grafana to get you started with this
important capability. You can get the URL to the Grafana console
using the minishift service command:
minishift openshift service grafana --url
Make sure to select Istio Workload Dashboard in the upper left of
the Grafana dashboard, as demonstrated in Figure 6-2.
Figure 6-2. The Grafana dashboard—selecting Istio Workload Dash‐
board
You can also visit the Prometheus dashboard directly with the fol‐
lowing command:
minishift openshift service prometheus --in-browser
The Prometheus dashboard allows you to query for specific metrics
and graph them. For instance, you can review the total request
count for the recommendation service, specifically the “v2” version
as seen in Figure 6-3:
istio_requests_total{destination_app="recommendation", destination_version="v2"}
You can also review other interesting datapoints such as the pod
memory usage with the following query string:
container_memory_rss{container_name="customer"}
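Because istio_requests_total is a counter, it is often more useful to graph its per-second rate; this is standard PromQL rather than anything specific to this setup:
rate(istio_requests_total{destination_app="recommendation"}[5m])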
Prometheus is a very powerful tool for gathering and extracting
metric data from your Kubernetes/OpenShift cluster. Prometheus is
currently a top-level or graduated project within the Cloud Native
Computing Foundation alongside Kubernetes itself. For more infor‐
mation on query syntax and alerting, please review the documenta‐
tion at the Prometheus website.
Service Graph
Istio has provided the out-of-the-box basic Servicegraph visualiza‐
tion since its earliest days. Now, a new, more comprehensive service
graph tool and overall health monitoring solution called Kiali has
been created by the Red Hat team, as depicted in Figure 6-4. The
Kiali project provides answers to interesting questions like: What
microservices are part of my Istio service mesh and how are they
connected?
At the time of this writing, Kiali must be installed separately and
those installation steps are somewhat complicated. Kiali wants to
know the URLs for both Jaeger and Grafana and that requires some
interesting environment variable substitution. The envsubst tool
comes from a package called gettext and is available for Fedora via:
dnf install gettext
Or macOS:
brew install gettext
And the Kiali installation steps:
# URLS for Jaeger and Grafana
export JAEGER_URL="https://tracing-istio-system.$(minishift ip).nip.io"
export GRAFANA_URL="https://grafana-istio-system.$(minishift ip).nip.io"
export IMAGE_VERSION="v0.10.0"
Figure 6-4. The Kiali dashboard
CHAPTER 7
Security
mutual Transport Layer Security (mTLS)
In a standard Kubernetes cluster, service-to-service traffic travels
unencrypted by default, which means anyone with access to the cluster
could deploy their own service and attempt to sniff the traffic flow‐
ing through the system. To make that point, open up two command
shells where one is using tcpdump to sniff traffic while the other is
performing a curl command.
Shell 1:
PREFPOD=$(oc get pod -n tutorial -l app=preference -o \
'jsonpath={.items[0].metadata.name}')
oc exec -it $PREFPOD -n tutorial -c istio-proxy /bin/bash
sudo tcpdump -A -s 0 \
  'tcp port 8080 and (((ip[2:2]-((ip[0]&0xf)<<2))-((tcp[12]&0xf0)>>2))!=0)'
Shell 2:
PREFPOD=$(oc get pod -n tutorial -l app=preference -o \
'jsonpath={.items[0].metadata.name}')
oc exec -it $PREFPOD -n tutorial -c preference /bin/bash
curl recommendation:8080
The results for Shell 1 should look similar to the following:
..:...:.HTTP/1.1 200 OK
content-length: 47
x-envoy-upstream-service-time: 0
date: Mon, 24 Dec 2018 17:16:13 GMT
server: envoy
The curl command works and the tcpdump command outputs the
results in clear text as seen in Figure 7-1.
Figure 7-1. Three shells before mTLS policy
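The step that changes this picture is enabling mTLS for the tutorial namespace. A sketch of the two manifests that the cleanup commands later in this chapter refer to (istiofiles/authentication-enable-tls.yml and istiofiles/destination-rule-tls.yml; the tutorial repo has the authoritative versions):
apiVersion: authentication.istio.io/v1alpha1
kind: Policy
metadata:
  name: default
  namespace: tutorial
spec:
  peers:
  - mtls: {}
---
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: default
  namespace: tutorial
spec:
  host: "*.tutorial.svc.cluster.local"
  trafficPolicy:
    tls:
      mode: ISTIO_MUTUAL
After applying both, rerun the tcpdump and curl commands.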
You should notice that the tcpdump shell is no longer providing clear
text and your curl command executes successfully. You can also use
the istioctl tool to verify if mTLS is enabled:
istioctl authn tls-check | grep tutorial
Now it is time to test from the external world’s perspective. In Shell
2, exit from the preference container back to your host OS. Then
curl the customer endpoint, which results in “Empty reply from
server”:
curl customer-tutorial.$(minishift ip).nip.io
curl: (52) Empty reply from server
This particular external URL was generated via an OpenShift Route
and minishift leverages a special service called nip.io for DNS reso‐
lution. Now that you have enabled mTLS, you need to leverage a
gateway to achieve end-to-end encryption. Istio has its own ingress
gateway, aptly named Istio Gateway, a solution that exposes a URL
external to the cluster and supports Istio features such as monitor‐
ing, traffic management, and policy.
To set up the Istio Gateway for the customer service, create the
Gateway and its supporting VirtualService objects:
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: customer-gateway
  namespace: tutorial
spec:
  selector:
    istio: ingressgateway # use istio default controller
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - "*"
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: customer
  namespace: tutorial
spec:
  hosts:
  - "*"
  gateways:
  - customer-gateway
  http:
  - match:
    - uri:
        exact: /
    route:
    - destination:
        host: customer
        port:
          number: 8080
And you can apply these manifests:
oc apply -f istiofiles/gateway-customer.yml
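GATEWAY_URL combines the Minishift IP with the ingress gateway's node port. One way to look up that port (assuming the standard istio-demo installation, where the HTTP port of istio-ingressgateway is named http2):
INGRESS_PORT=$(oc -n istio-system get service istio-ingressgateway \
  -o jsonpath='{.spec.ports[?(@.name=="http2")].nodePort}')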
GATEWAY_URL=$(minishift ip):$INGRESS_PORT
curl http://${GATEWAY_URL}/
Clean up:
oc delete -n tutorial -f istiofiles/gateway-customer.yml
oc delete -n tutorial -f istiofiles/destination-rule-tls.yml
oc delete -n tutorial -f istiofiles/authentication-enable-tls.yml
And return to the original invocation mechanism with the Open‐
Shift Route:
oc expose service customer
Access Control with Mixer Policy
Istio's Mixer can also enforce coarse-grained access control,
ensuring that services only
follow an approved invocation path. In the case of the example serv‐
ices, it is expected that customer calls preference and then preference
calls recommendation, in that specific order. Therefore there are
some alternative paths that are specifically denied:
Next, curl the customer service and see that it succeeds, because all
the services are visible to one another by default (as expected in a
Kubernetes/OpenShift cluster):
curl customer:8080
customer => preference => recommendation v2 from
'7cbd9f9c79': 23
Use the describe verb for the kubectl or oc tool to see the rules you
have in place:
oc get rules
NAME AGE
no-customer-to-recommendation 3m
no-preference-to-customer 3m
no-recommendation-to-customer 3m
no-recommendation-to-preference 3m
oc describe rule no-preference-to-customer
...
Spec:
Actions:
Handler: do-not-pass-go.denier
Instances:
just-stop.checknothing
Match: source.labels["app"]=="preference" &&
destination.labels["app"] == "customer"
Events: <none>
You can remove these rules to return to original state:
oc delete rules --all -n tutorial
Istio's Mixer also supports a whitelist and blacklist mechanism
involving the listchecker and listentry objects. If you are interested
in that capability, check out the Istio Tutorial and/or the Istio
Documentation.
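Role-Based Access Control (RBAC)
Istio also provides role-based access control for services. Turning it on is done with a mesh-level RbacConfig object; a sketch for enabling it only for the tutorial namespace (Istio 1.0 API, matching the book's version):
apiVersion: "rbac.istio.io/v1alpha1"
kind: RbacConfig
metadata:
  name: default
spec:
  mode: ON_WITH_INCLUSION
  inclusion:
    namespaces: ["tutorial"]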
Now if you curl your customer endpoint, you will receive “RBAC:
access denied”:
curl customer-tutorial.$(minishift ip).nip.io
RBAC: access denied
Istio’s RBAC uses a deny-by-default strategy, meaning that nothing
is permitted until you explicitly define an access-control policy to
grant access to any service. To reopen the customer endpoint to
end-user traffic, create a ServiceRole and a ServiceRoleBinding:
apiVersion: "rbac.istio.io/v1alpha1"
kind: ServiceRole
metadata:
name: service-viewer
namespace: tutorial
spec:
rules:
- services: ["*"]
methods: ["GET"]
constraints:
- key: "destination.labels[app]"
values: ["customer", "recommendation", "preference"]
---
apiVersion: "rbac.istio.io/v1alpha1"
kind: ServiceRoleBinding
metadata:
name: bind-service-viewer
namespace: tutorial
spec:
subjects:
- user: "*"
roleRef:
kind: ServiceRole
name: "service-viewer"
Apply it:
oc -n tutorial apply -f \
istiofiles/namespace-rbac-policy.yml
The ServiceRole object defines which services may be accessed,
scoped here by the destination.labels[app] constraint
demonstrated in the example. You can
also specify which methods are allowed such as GET versus POST.
The ServiceRoleBinding object allows you to specify which users
are permitted. In the current case, user: “*” with no additional prop‐
erties means that any user is allowed to access these services.
The concept of users and user management has always been unique
per organization, often unique per application. In the case of a
Kubernetes cluster, your cluster administrator will likely have a pre‐
ferred strategy for user authentication and authorization.
Istio has support for user authentication and authorization via JWT
(JSON Web Token). From the “Introduction to JSON Web Tokens”
page: “JSON Web Token (JWT) is an open standard (RFC 7519) that
defines a compact and self-contained way for securely transmitting
information between parties as a JSON object.” To leverage JWT,
you will need a JWT issuer like auth0.com, or perhaps a local service
based on the open source software project called Keycloak, which
supports OpenID Connect, OAuth 2.0, and SAML 2.0.
Setup and configuration of a JWT Issuer is beyond the scope of this
book, but more information can be found at the Istio Tutorial.
In addition, Istio Security has more information about Istio’s secu‐
rity capabilities.
Conclusion
You have now taken a relatively quick tour through some of the
capabilities of Istio service mesh. You saw how this service mesh can
solve distributed systems problems in cloud native environments,
and how Istio concepts like observability, resiliency, and chaos injec‐
tion can be immediately beneficial to your current application.
Moreover, Istio has capabilities beyond those we discussed in this
book. If you’re interested, we suggest that you explore the following
topics more deeply:
• Policy enforcement
• Mesh expansion
• Hybrid deployments
• Phasing in Istio into an existing environment
• Gateway/Advanced ingress
Istio is also evolving at a rapid rate. To keep up with the latest devel‐
opments, we suggest that you keep an eye on the upstream commu‐
nity project page as well as Red Hat’s evolving Istio Tutorial.
About the Authors
Burr Sutter (@burrsutter) is a lifelong developer advocate, commu‐
nity organizer, technology evangelist, and featured speaker at tech‐
nology events around the globe—from Bangalore to Brussels and
Berlin to Beijing (and most parts in between). He is currently Red
Hat’s Director of Developer Experience. A Java Champion since
2005 and former president of the Atlanta Java User Group, Burr
founded the DevNexus conference, now the second-largest Java
event in the United States. When spending time away from the com‐
puter, he enjoys going off-grid in the jungles of Mexico and the bush
of Kenya. You can find Burr online at burrsutter.com.
Christian Posta (@christianposta) is Field CTO at solo.io and well
known in the community for being an author (Istio in Action, Man‐
ning; Microservices for Java Developers, O’Reilly), frequent blogger,
speaker, open-source enthusiast and committer on various open-
source projects including Istio and Kubernetes. Christian has spent
time at web-scale companies and now helps companies create and
deploy large-scale, resilient, distributed architectures—many of
what we now call Serverless and Microservices. He enjoys mentor‐
ing, training, and leading teams to be successful with distributed
systems concepts, microservices, devops, and cloud native applica‐
tion design.