
The Docker Book

James Turnbull

September 24, 2017

Version: v17.07.0-ce-2 (e269502)

Website: The Docker Book


Some rights reserved. No part of this publication may be reproduced, stored in a
retrieval system, or transmitted in any form or by any means, electronic,
mechanical or photocopying, recording, or otherwise, for commercial purposes
without the prior permission of the publisher.
This work is licensed under the Creative Commons
Attribution-NonCommercial-NoDerivs 3.0 Unported License. To view a copy of
this license, visit here.
© Copyright 2015 - James Turnbull <[email protected]>
Chapter 7

Docker Orchestration and Service Discovery

Orchestration is a pretty loosely defined term. It’s broadly the process of auto-
mated configuration, coordination, and management of services. In the Docker
world we use it to describe the set of practices around managing applications run-
ning in multiple Docker containers and potentially across multiple Docker hosts.
Native orchestration is in its infancy in the Docker community but an exciting
ecosystem of tools is being integrated and developed.
In the current ecosystem there are a variety of tools being built and integrated
with Docker. Some of these tools are simply designed to elegantly ”wire” together
multiple containers and build application stacks using simple composition. Other
tools provide larger scale coordination between multiple Docker hosts as well as
complex service discovery, scheduling and execution capabilities.
Each of these areas really deserves its own book but we’ve focused on a few useful
tools that give you some insight into what you can achieve when orchestrating
containers. They provide some useful building blocks upon which you can grow
your Docker-enabled environment.
In this chapter we will focus on three areas:

• Simple container orchestration. Here we’ll look at Docker Compose. Docker


Compose (previously Fig) is an open source Docker orchestration tool developed
by the Orchard team and then acquired by Docker Inc in 2014. It’s
written in Python and licensed with the Apache 2.0 license.

• Distributed service discovery. Here we’ll introduce Consul. Consul is also


open source, licensed with the Mozilla Public License 2.0, and written in
Go. It provides distributed, highly available service discovery. We’re going
to look at how you might use Consul and Docker to manage application
service discovery.

• Orchestration and clustering of Docker. Here we’re looking at Swarm.


Swarm is open source, licensed with the Apache 2.0 license. It’s written in
Go and developed by the Docker Inc team. As of Docker 1.12 the Docker
Engine now has a Swarm-mode built in and we’ll be covering that later in
this chapter.

TIP We’ll also talk about many of the other orchestration tools available to
you later in this chapter.

Docker Compose

Now let’s get familiar with Docker Compose. With Docker Compose, we define a
set of containers to boot up, and their runtime properties, all defined in a YAML
file. Docker Compose calls each of these containers ”services” which it defines as:

A container that interacts with other containers in some way and that
has specific runtime properties.

We’re going to take you through installing Docker Compose and then using it to
build a simple, multi-container application stack.


Installing Docker Compose

We start by installing Docker Compose. Docker Compose is currently available for


Linux, Windows, and OS X. It can be installed directly as a binary, via Docker for
Mac or Windows or via a Python Pip package.
To install Docker Compose on Linux we can grab the Docker Compose binary from
GitHub and make it executable. Like Docker, Docker Compose is currently only
supported on 64-bit Linux installations. We’ll need the curl command available
to do this.

Listing 7.1: Installing Docker Compose on Linux

$ sudo curl -L https://github.com/docker/compose/releases/


download/1.16.1/docker-compose-`uname -s`-`uname -m` > /usr/
local/bin/docker-compose
$ sudo chmod +x /usr/local/bin/docker-compose

This will download the docker-compose binary from GitHub and install it into
the /usr/local/bin directory. We’ve also used the chmod command to make the
docker-compose binary executable so we can run it.

If we’re on OS X Docker Compose comes bundled with Docker for Mac or we can
install it like so:

Listing 7.2: Installing Docker Compose on OS X

$ sudo bash -c "curl -L https://github.com/docker/compose/


releases/download/1.16.1/docker-compose-Darwin-x86_64 > /usr/
local/bin/docker-compose"
$ sudo chmod +x /usr/local/bin/docker-compose


TIP Replace the 1.16.1 with the release number of the current Docker Com-
pose release.

If we’re on Windows Docker Compose comes bundled inside Docker for Windows.
Compose is also available as a Python package if you’re on another platform or
if you prefer installing via package. You will need to have the Python-Pip tool
installed to use the pip command. This is available via the python-pip package
on most Red Hat, Debian and Ubuntu releases.

Listing 7.3: Installing Compose via Pip

$ sudo pip install -U docker-compose

Once you have installed the docker-compose binary you can test it’s working using
the docker-compose command with the --version flag:

Listing 7.4: Testing Docker Compose is working

$ docker-compose --version
docker-compose version 1.16.1, build f3628c7

NOTE If you’re upgrading from a pre-1.3.0 release you’ll need to migrate any
existing containers to the new 1.3.0 format using the docker-compose
migrate-to-labels command.


Getting our sample application

To demonstrate how Compose works we’re going to use a sample Python Flask
application that combines two containers:

• An application container running our sample Python application.


• A container running the Redis database.

Let’s start with building our sample application. Firstly, we create a directory and
a Dockerfile.

Listing 7.5: Creating the composeapp directory

$ mkdir composeapp
$ cd composeapp

Here we’ve created a directory to hold our sample application, which we’re calling
composeapp.

Next, we need to add our application code. Let’s create a file called app.py in the
composeapp directory and add the following Python code to it.


Listing 7.6: The app.py file

from flask import Flask


from redis import Redis
import os

app = Flask(__name__)
redis = Redis(host="redis", port=6379)

@app.route('/')
def hello():
redis.incr('hits')
return 'Hello Docker Book reader! I have been seen {0} times'.format(redis.get('hits'))

if __name__ == "__main__":
app.run(host="0.0.0.0", debug=True)

TIP You can find this source code on GitHub here or on the Docker Book site
here.

This simple Flask application tracks a counter stored in Redis. The counter is
incremented each time the root URL, /, is hit.
We also need to create a requirements.txt file to store our application’s depen-
dencies. Let’s create that file now and add the following dependencies.


Listing 7.7: The requirements.txt file

flask
redis

Now let’s populate our Compose Dockerfile.

Listing 7.8: The composeapp Dockerfile

# Compose Sample application image


FROM python:2.7
MAINTAINER James Turnbull <[email protected]>
ENV REFRESHED_AT 2016-06-01

ADD . /composeapp

WORKDIR /composeapp

RUN pip install -r requirements.txt

Our Dockerfile is simple. It is based on the python:2.7 image. We add our app
.py and requirements.txt files into a directory in the image called /composeapp.
The Dockerfile then sets the working directory to /composeapp and runs the pip
installation process to install our application’s dependencies: flask and redis.
Let’s build that image now using the docker build command.


Listing 7.9: Building the composeapp application

$ sudo docker build -t jamtur01/composeapp .


Sending build context to Docker daemon 16.9 kB
Sending build context to Docker daemon
Step 0 : FROM python:2.7
---> 1c8df2f0c10b
Step 1 : MAINTAINER James Turnbull <[email protected]>
---> Using cache
---> aa564fe8be5a
Step 2 : ADD . /composeapp
---> c33aa147e19f
Removing intermediate container 0097bc79d37b
Step 3 : WORKDIR /composeapp
---> Running in 76e5ee8544b3
---> d9da3105746d
Removing intermediate container 76e5ee8544b3
Step 4 : RUN pip install -r requirements.txt
---> Running in e71d4bb33fd2
Downloading/unpacking flask (from -r requirements.txt (line 1))
. . .
Successfully installed flask redis Werkzeug Jinja2 itsdangerous
markupsafe
Cleaning up...
---> bf0fe6a69835
Removing intermediate container e71d4bb33fd2
Successfully built bf0fe6a69835

This will build a new image called jamtur01/composeapp containing our sample
application and its required dependencies. We can now use Compose to deploy
our application.


NOTE We’ll be using a Redis container created from the default Redis
image on the Docker Hub so we don’t need to build or customize that.

The docker-compose.yml file

Now we’ve got our application image built we can configure Compose to create
both the services we require. With Compose, we define a set of services (in the
form of Docker containers) to launch. We also define the runtime properties we
want these services to start with, much as you would do with the docker run
command. We define all of this in a YAML file. We then run the docker-compose
up command. Compose launches the containers, executes the appropriate runtime
configuration, and multiplexes the log output together for us.
Let’s create a docker-compose.yml file for our application inside our composeapp
directory.

Listing 7.10: Creating the docker-compose.yml file

$ touch docker-compose.yml

Let’s populate our docker-compose.yml file. The docker-compose.yml file is a


YAML file that contains instructions for running one or more Docker containers.
Let’s look at the instructions for our example application.


Listing 7.11: The docker-compose.yml file

version: '3'
services:
web:
image: jamtur01/composeapp
command: python app.py
ports:
- "5000:5000"
volumes:
- .:/composeapp
redis:
image: redis

Each service we wish to launch is specified as a YAML hash inside a hash called
services. Here our two services are: web and redis.

TIP The version tag tells Docker Compose what configuration version to use.
The Docker Compose API has evolved over the years and each change has been
marked by incrementing the version.

For our web service we’ve specified some runtime options. Firstly, we’ve specified
the image we’re using: the jamtur01/composeapp image. Compose can also build
Docker images. You can use the build instruction and provide the path to a
Dockerfile to have Compose build an image and then create services from it.


Listing 7.12: An example of the build instruction

web:
build: /home/james/composeapp
. . .

This build instruction would build a Docker image from a Dockerfile found in
the /home/james/composeapp directory.
We’ve also specified the command to run when launching the service. Next we
specify the ports and volumes as a list of the port mappings and volumes we want
for our service. We’ve specified that we’re mapping port 5000 inside our service
to port 5000 on the host. We’re also creating /composeapp as a volume.
If we were executing the same configuration on the command line using docker
run we’d do it like so:

Listing 7.13: The docker run equivalent command

$ sudo docker run -d -p 5000:5000 -v "$PWD":/composeapp \
--name web jamtur01/composeapp python app.py

Next we’ve specified another service called redis. For this service we’re not set-
ting any runtime defaults at all. We’re just going to use the base redis image. By
default, containers run from this image launch a Redis database on the standard
port. So we don’t need to configure or customize it.
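
If we did want to override those defaults we could add options to the redis service
just as we did for web. As a purely illustrative sketch (we don’t need this for our
example), publishing Redis to the host would look like:

redis:
  image: redis
  ports:
    - "6379:6379"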

TIP You can see a full list of the available instructions you can use in the
docker-compose.yml file here.


Running Compose

Once we’ve specified our services in docker-compose.yml we use the docker-


compose up command to execute them both.

Listing 7.14: Running docker-compose up with our sample application

$ cd composeapp
$ sudo docker-compose up
Creating network "composeapp_default" with the default driver
Recreating composeapp_web_1 ...
Recreating composeapp_web_1
Recreating composeapp_redis_1 ...
Recreating composeapp_web_1 ... done
Attaching to composeapp_redis_1, composeapp_web_1
web_1 | * Running on http://0.0.0.0:5000/ (Press CTRL+C to
quit)
. . .

TIP You must be inside the directory with the docker-compose.yml file in
order to execute most Compose commands.

Compose has created two new services: composeapp_redis_1 and composeapp_web_1.
So where did these names come from? Well, to ensure our services are unique,
Compose has prefixed and suffixed the names specified in the docker-compose.yml
file with the directory and a number respectively.

Compose then attaches to the logs of each service; each line of log output is pre-
fixed with the abbreviated name of the service it comes from, and outputs them
multiplexed:


Listing 7.15: Compose service log output

redis_1 | 1:M 05 Aug 17:49:17.839 * The server is now ready to


accept connections on port 6379

The services (and Compose) are being run interactively. That means if you use
Ctrl-C or the like to cancel Compose then it’ll stop the running services. We
could also run Compose with -d flag to run our services daemonized (similar to
the docker run -d flag).

Listing 7.16: Running Compose daemonized

$ sudo docker-compose up -d

Let’s look at the sample application that’s now running on the host. The applica-
tion is bound to all interfaces on the Docker host on port 5000. So we can browse
to that site on the host’s IP address or via localhost.

Figure 7.1: Sample Compose application.

We see a message displaying the current counter value. We can increment the
counter by refreshing the site. Each refresh stores the increment in Redis. The


Redis update is done via the link between the Docker containers controlled by
Compose.
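
If you prefer the command line to a browser, you can also exercise the application
with curl; assuming the stack is bound to localhost, each request should return the
greeting with an incremented counter, something like:

$ curl localhost:5000
Hello Docker Book reader! I have been seen 2 times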

TIP By default, Compose tries to connect to a local Docker daemon but it’ll
also honor the DOCKER_HOST environment variable to connect to a remote Docker
host.
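
For example, a minimal sketch of pointing Compose at a remote daemon; the address
below is a placeholder you’d replace with your own remote Docker host:

$ export DOCKER_HOST=tcp://docker.example.com:2375
$ docker-compose ps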

Using Compose

Now let’s explore some of Compose’s other options. Firstly, let’s use Ctrl-C to
cancel our running services and then restart them as daemonized services.
Press Ctrl-C inside the composeapp directory and then re-run the docker-compose
up command, this time with the -d flag.

Listing 7.17: Restarting Compose as daemonized

$ sudo docker-compose up -d
Starting composeapp_web_1 ...
Starting composeapp_redis_1 ...
Starting composeapp_redis_1
Starting composeapp_web_1 ... done
$ . . .

We see that Compose has recreated our services, launched them and returned to
the command line.
Our Compose-managed services are now running daemonized on the host. Let’s
look at them now using the docker-compose ps command; a close cousin of the
docker ps command.


TIP You can get help on Compose commands by running docker-compose help
and the command you wish to get help on, for example docker-compose help ps.

The docker-compose ps command lists all of the currently running services from
our local docker-compose.yml file.

Listing 7.18: Running the docker-compose ps command

$ cd composeapp
$ sudo docker-compose ps
Name Command State Ports
-----------------------------------------------------------------
-
composeapp_redis_1 docker-entrypoint.sh redis Up 6379/tcp
composeapp_web_1 python app.py Up 0.0.0.0:5000
->5000/tcp

This shows some basic information about our running Compose services. The
name of each service, what command we used to start the service, and the ports
that are mapped on each service.
We can also drill down further using the docker-compose logs command to show
us the log events from our services.


Listing 7.19: Showing a Compose services logs

$ sudo docker-compose logs


Attaching to composeapp_redis_1, composeapp_web_1
redis_1 | ( ' , .-` | `, ) Running in
stand alone mode
redis_1 | |`-._`-...-` __...-.``-._|'` _.-'| Port: 6379
redis_1 | | `-._ `._ / _.-' | PID: 1
. . .

This will tail the log files of your services, much like the tail -f command. Like
the tail -f command you’ll need to use Ctrl-C or the like to exit from it.
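You can also limit the output to a single service by naming it, for example:

$ sudo docker-compose logs redis
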
We can also stop our running services with the docker-compose stop command.

Listing 7.20: Stopping running services

$ sudo docker-compose stop


Stopping composeapp_web_1...
Stopping composeapp_redis_1...

This will stop both services. If the services don’t stop you can use the docker-
compose kill command to force kill the services.
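
The kill command also accepts individual service names, for example to force kill
just the web service:

$ sudo docker-compose kill web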

We can verify this with the docker-compose ps command again.


Listing 7.21: Verifying our Compose services have been stopped

$ sudo docker-compose ps
Name Command State Ports
-------------------------------------------------
composeapp_redis_1 redis-server Exit 0
composeapp_web_1 python app.py Exit 0

If you’ve stopped services using docker-compose stop or docker-compose kill


you can also restart them again with the docker-compose start command. This
is much like using the docker start command and will restart these services.
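For example, to bring our stopped services back up in the background:

$ sudo docker-compose start
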
Finally, we can remove services using the docker-compose rm command.

Listing 7.22: Removing Compose services

$ sudo docker-compose rm
Going to remove composeapp_redis_1, composeapp_web_1
Are you sure? [yN] y
Removing composeapp_redis_1...
Removing composeapp_web_1...

You’ll be prompted to confirm you wish to remove the services and then both
services will be deleted. The docker-compose ps command will now show no
running or stopped services.


Listing 7.23: Showing no Compose services

$ sudo docker-compose ps
Name Command State Ports
------------------------------

Compose in summary

Now in one file we have a simple Python-Redis stack built! You can see how much
easier this can make constructing applications from multiple Docker containers.
It’s especially a great tool for building local development stacks. This, however,
just scratches the surface of what you can do with Compose. There are some
more examples using Rails, Django and Wordpress on the Compose website that
introduce some more advanced concepts.

TIP You can see a full command line reference here.

Consul, Service Discovery and Docker

Service discovery is the mechanism by which distributed applications manage


their relationships. A distributed application is usually made up of multiple com-
ponents. These components can be located together locally or distributed across
data centers or geographic regions. Each of these components usually provides or
consumes services to or from other components.
Service discovery allows these components to find each other when they want
to interact. Due to the distributed nature of these applications, service discovery
mechanisms also need to be distributed. As they are usually the ”glue” between


components of distributed applications they also need to be dynamic, reliable,


resilient and able to quickly and consistently share data about these services.
Docker, with its focus on distributed applications and service-oriented and mi-
croservices architectures, is an ideal candidate for integration with a service dis-
covery tool. Each Docker container can register its running service or services
with the tool. This provides the information needed, for example an IP address or
port or both, to allow interaction between services.
Our example service discovery tool, Consul, is a specialized datastore that uses
consensus algorithms. Consul specifically uses the Raft consensus algorithm to
require a quorum for writes. It also exposes a key value store and service catalog
that is highly available, fault-tolerant, and maintains strong consistency guaran-
tees. Services can register themselves with Consul and share that registration
information in a highly available and distributed manner.
Consul is also interesting because it provides:

• A service catalog with an API instead of the traditional key=value store of


most service discovery tools.
• Both a DNS-based query interface through an inbuilt DNS server and a HTTP-
based REST API to query the information. The choice of interfaces, especially
the DNS-based interface, allows you to easily drop Consul into your existing
environment.
• Service monitoring AKA health checks. Consul has powerful service moni-
toring built into the tool.

To get a better understanding of how Consul works, we’re going to see how to
run distributed Consul inside Docker containers. We’re then going to register
services from Docker containers to Consul and query that data from other Docker
containers. To make it more interesting we’re going to do this across multiple
Docker hosts.
To do this we’re going to:

• Create a Docker image for the Consul service.


• Build three hosts running Docker and then run Consul on each. The three
hosts will provide us with a distributed environment to see how resiliency
and failover works with Consul.
• Build services that we’ll register with Consul and then query that data from
another service.

NOTE You can see a more generic introduction to Consul here.

Building a Consul image

We’re going to start with creating a Dockerfile to build our Consul image. Let’s
create a directory to hold our Consul image first.

Listing 7.24: Creating a Consul Dockerfile directory

$ mkdir consul
$ cd consul
$ touch Dockerfile

Now let’s look at the Dockerfile for our Consul image.


Listing 7.25: The Consul Dockerfile

FROM ubuntu:16.04
MAINTAINER James Turnbull <[email protected]>
ENV REFRESHED_AT 2014-08-01

RUN apt-get -qqy update


RUN apt-get -qqy install curl unzip

ADD https://releases.hashicorp.com/consul/0.6.4/consul_0.6.4
_linux_amd64.zip /tmp/consul.zip
RUN cd /usr/sbin; unzip /tmp/consul.zip; chmod +x /usr/sbin/
consul; rm /tmp/consul.zip

RUN mkdir -p /webui/


ADD https://releases.hashicorp.com/consul/0.6.4/consul_0.6.4
_web_ui.zip /webui/webui.zip
RUN cd /webui; unzip webui.zip; rm webui.zip

ADD consul.json /config/

EXPOSE 53/udp 8300 8301 8301/udp 8302 8302/udp 8400 8500

VOLUME ["/data"]

ENTRYPOINT [ "/usr/sbin/consul", "agent", "-config-dir=/config" ]


CMD []

Our Dockerfile is pretty simple. It’s based on an Ubuntu 16.04 image. It installs
curl and unzip. We then download the Consul zip file containing the consul
binary. We move that binary to /usr/sbin/ and make it executable. We also
download Consul’s web interface and place it into a directory called /webui. We’re
going to see this web interface in action a little later.


We then add a configuration file for Consul, consul.json, to the /config directory.
Let’s create and look at that file now.

Listing 7.26: The consul.json configuration file

{
"data_dir": "/data",
"ui_dir": "/webui",
"client_addr": "0.0.0.0",
"ports": {
"dns": 53
},
"recursor": "8.8.8.8"
}

The consul.json configuration file is JSON formatted and provides Consul with
the information needed to get running. We’ve specified a data directory, /data,
to hold Consul’s data. We also specify the location of the web interface files: /
webui. We use the client_addr variable to bind Consul to all interfaces inside our
container.
We also use the ports block to configure on which ports various Consul services
run. In this case we’re specifying that Consul’s DNS service should run on port
53. Lastly, we’ve used the recursor option to specify a DNS server to use for
resolution if Consul can’t resolve a DNS request. We’ve specified 8.8.8.8 which
is one of the IP addresses of Google’s public DNS service.

TIP You can find the full list of available Consul configuration options here.

Back in our Dockerfile we’ve used the EXPOSE instruction to open up a series of
ports that Consul requires to operate. I’ve added a table showing each of these


ports and what they do.

Table 7.1: Consul’s default ports.

Port Purpose
53/udp DNS server
8300 Server RPC
8301 + udp Serf LAN port
8302 + udp Serf WAN port
8400 RPC endpoint
8500 HTTP API

You don’t need to worry about most of them for the purposes of this chapter. The
important ones for us are 53/udp which is the port Consul is going to be running
DNS on. We’re going to use DNS to query service information. We’re also going to
use Consul’s HTTP API and its web interface, both of which are bound to port 8500.
The rest of the ports handle the backend communication and clustering between
Consul nodes. We’ll configure them in our Docker container but we don’t do
anything specific with them.
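
To give you a feel for that HTTP API, once an agent is running with port 8500
mapped you’ll be able to query it with curl; this sketch assumes the agent is
reachable on localhost:

$ curl http://localhost:8500/v1/catalog/nodes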

NOTE You can find more details of what each port does here.

Next, we’ve also made our /data directory a volume using the VOLUME instruction.
This is useful if we want to manage or work with this data as we saw in Chapter
6.
Finally, we’ve specified an ENTRYPOINT instruction to launch Consul using the
consul binary when a container is launched from our image.

Let’s step through the command line options we’ve used. We’ve specified the
consul binary in /usr/sbin/. We’ve passed it the agent command which tells
Consul to run as an agent and the -config-dir flag and specified the location of
our consul.json file in the /config directory.


Let’s build our image now.

Listing 7.27: Building our Consul image

$ sudo docker build -t="jamtur01/consul" .

NOTE You can get our Consul Dockerfile and configuration file here or
on GitHub here. If you don’t want to use a home-grown image there is also an
officially sanctioned Consul image on the Docker Hub.

Testing a Consul container locally

Before we run Consul on multiple hosts, let’s see it working locally on a single
host. To do this we’ll run a container from our new jamtur01/consul image.


Listing 7.28: Running a local Consul node

$ sudo docker run -p 8500:8500 -p 53:53/udp \


-h node1 jamtur01/consul -server -bootstrap
==> WARNING: Bootstrap mode enabled! Do not enable unless
necessary
==> Starting Consul agent...
==> Starting Consul agent RPC...
==> Consul agent running!
Node name: 'node1'
Datacenter: 'dc1'
Server: true (bootstrap: true)
Client Addr: 0.0.0.0 (HTTP: 8500, HTTPS: -1, DNS: 53, RPC:
8400)
Cluster Addr: 172.17.0.8 (LAN: 8301, WAN: 8302)
Gossip encrypt: false, RPC-TLS: false, TLS-Incoming: false
Atlas: <disabled>

==> Log data will now stream in as it occurs:

. . .

2016/08/05 17:59:38 [INFO] consul: cluster leadership acquired


2016/08/05 17:59:38 [INFO] consul: New leader elected: node1
2016/08/05 17:59:38 [INFO] raft: Disabling EnableSingleNode (
bootstrap)
2016/08/05 17:59:38 [INFO] consul: member 'node1' joined, marking
health alive
2016/08/05 17:59:40 [INFO] agent: Synced service 'consul'

We’ve used the docker run command to create a new container. We’ve mapped
two ports, port 8500 in the container to 8500 on the host and port 53 in the con-
tainer to 53 on the host. We’ve also used the -h flag to specify the hostname of


the container, here node1. This is going to be both the hostname of the container
and the name of the Consul node. We’ve then specified the name of our Consul
image, jamtur01/consul.
Lastly, we’ve passed two flags to the consul binary: -server and -bootstrap. The
-server flag tells the Consul agent to operate in server mode. The -bootstrap flag
tells Consul that this node is allowed to self-elect as a leader. This allows us to
see a Consul agent in server mode doing a Raft leadership election.

WARNING It is important that no more than one server per datacenter


be running in bootstrap mode. Otherwise consistency cannot be guaranteed if
multiple nodes are able to self-elect. We’ll see some more on this when we add
other nodes to the cluster.

We see that Consul has started node1 and done a local leader election. As we’ve
got no other Consul nodes running it is not connected to anything else.
We can also see this via the Consul web interface if we browse to our local host’s
IP address on port 8500.

Figure 7.2: The Consul web interface.

Running a Consul cluster in Docker

As Consul is distributed we’d normally create three (or more) hosts to run in sep-
arate data centers, clouds or regions. Or even add an agent to every application
server. This will provide us with sufficient distributed resilience. We’re going to


mimic this required distribution by creating three new hosts each with a Docker
daemon to run Consul. We will create three new Ubuntu 16.04 hosts: larry,
curly, and moe. On each host we’ll install a Docker daemon. We’ll also pull down
the jamtur01/consul image.

TIP Create the hosts using whatever means you normally use to run up new hosts;
to install Docker you can use the installation instructions in Chapter 2.

Listing 7.29: Pulling down the Consul image

$ sudo docker pull jamtur01/consul

On each host we’re going to run a Docker container with the jamtur01/consul
image. To do this we need to choose a network to run Consul over. In most cases
this would be a private network but as we’re just simulating a Consul cluster I
am going to use the public interfaces of each host. To start Consul on this public
network I am going to need the public IP address of each host. This is the address
to which we’re going to bind each Consul agent.
Let’s grab that now on larry and assign it to an environment variable, $PUBLIC_IP.

Listing 7.30: Getting public IP on larry

larry$ PUBLIC_IP="$(ifconfig eth0 | awk -F ' *|:' '/inet addr/{


print $4}')"
larry$ echo $PUBLIC_IP
162.243.167.159

And then create the same $PUBLIC_IP variable on curly and moe too.


Listing 7.31: Assigning public IP on curly and moe

curly$ PUBLIC_IP="$(ifconfig eth0 | awk -F ' *|:' '/inet addr/{


print $4}')"
curly$ echo $PUBLIC_IP
162.243.170.66
moe$ PUBLIC_IP="$(ifconfig eth0 | awk -F ' *|:' '/inet addr/{
print $4}')"
moe$ echo $PUBLIC_IP
159.203.191.16

We see we’ve got three hosts and three IP addresses, each assigned to the
$PUBLIC_IP environmental variable.

Table 7.2: Consul host IP addresses

Host IP Address
larry 162.243.167.159
curly 162.243.170.66
moe 159.203.191.16

We’re also going to need to nominate a host to bootstrap to start the cluster. We’re
going to choose larry. This means we’ll need larry’s IP address on curly and moe
to tell them which Consul node’s cluster to join. Let’s set that up now by adding
larry’s IP address of 162.243.167.159 to curly and moe as the environment vari-
able, $JOIN_IP.


Listing 7.32: Adding the cluster IP address

curly$ JOIN_IP=162.243.167.159
moe$ JOIN_IP=162.243.167.159

Starting the Consul bootstrap node

Let’s start our initial bootstrap node on larry. Our docker run command is going
to be a little complex because we’re mapping a lot of ports. Indeed, we need to
map all the ports listed in Table 7.1 above. And, as we’re both running Consul in
a container and connecting to containers on other hosts, we’re going to map each
port to the corresponding port on the local host. This will allow both internal and
external access to Consul.
Let’s see our docker run command now.

Listing 7.33: Start the Consul bootstrap node

larry$ sudo docker run -d -h $HOSTNAME \


-p 8300:8300 -p 8301:8301 \
-p 8301:8301/udp -p 8302:8302 \
-p 8302:8302/udp -p 8400:8400 \
-p 8500:8500 -p 53:53/udp \
--name larry_agent jamtur01/consul \
-server -advertise $PUBLIC_IP -bootstrap-expect 3

Here we’ve launched a daemonized container using the jamtur01/consul image


to run our Consul agent. We’ve set the -h flag to set the hostname of the container
to the value of the $HOSTNAME environment variable. This sets our Consul agent’s
name to be the local hostname, here larry. We’ve also mapped a series of eight
ports from inside the container to the respective ports on the local host.


We’ve also specified some command line options for the Consul agent.

Listing 7.34: Consul agent command line arguments

-server -advertise $PUBLIC_IP -bootstrap-expect 3

The -server flag tells the agent to run in server mode. The -advertise flag tells
that server to advertise itself on the IP address specified in the $PUBLIC_IP environ-
ment variable. Lastly, the -bootstrap-expect flag tells Consul how many agents
to expect in this cluster. In this case, 3 agents. It also bootstraps the cluster.
Let’s look at the logs of our initial Consul container with the docker logs com-
mand.


Listing 7.35: Starting bootstrap Consul node

larry$ sudo docker logs larry_agent


==> WARNING: Expect Mode enabled, expecting 3 servers
==> Starting Consul agent...
==> Starting Consul agent RPC...
==> Consul agent running!
Node name: 'larry'
Datacenter: 'dc1'
Server: true (bootstrap: false)
Client Addr: 0.0.0.0 (HTTP: 8500, HTTPS: -1, DNS: 53, RPC:
8400)
Cluster Addr: 162.243.167.159 (LAN: 8301, WAN: 8302)
Gossip encrypt: false, RPC-TLS: false, TLS-Incoming: false
Atlas: <disabled>

==> Log data will now stream in as it occurs:

. . .

2016/08/06 12:35:11 [INFO] serf: EventMemberJoin: larry.dc1


162.243.167.159
2016/08/06 12:35:11 [INFO] consul: adding LAN server larry (Addr:
162.243.167.159:8300) (DC: dc1)
2016/08/06 12:35:11 [INFO] consul: adding WAN server larry.dc1 (
Addr: 162.243.167.159:8300) (DC: dc1)
2016/08/06 12:35:11 [ERR] agent: failed to sync remote state: No
cluster leader
2016/08/06 12:35:12 [WARN] raft: EnableSingleNode disabled, and
no known peers. Aborting election.

We see that the agent on larry is started but because we don’t have any more
nodes yet no election has taken place. We know this from the only error returned.


Listing 7.36: Cluster leader error

[ERR] agent: failed to sync remote state: No cluster leader

Starting the remaining nodes

Now we’ve bootstrapped our cluster we can start our remaining nodes on curly
and moe. Let’s start with curly. We use the docker run command to launch our
second agent.

Listing 7.37: Starting the agent on curly

curly$ sudo docker run -d -h $HOSTNAME \


-p 8300:8300 -p 8301:8301 \
-p 8301:8301/udp -p 8302:8302 \
-p 8302:8302/udp -p 8400:8400 \
-p 8500:8500 -p 53:53/udp \
--name curly_agent jamtur01/consul \
-server -advertise $PUBLIC_IP -join $JOIN_IP

We see our command is similar to our bootstrapped node on larry with the ex-
ception of the command we’re passing to the Consul agent.

Listing 7.38: Launching the Consul agent on curly

-server -advertise $PUBLIC_IP -join $JOIN_IP

Again we’ve enabled the Consul agent’s server mode with -server and bound the
agent to the public IP address using the -advertise flag. Finally, we’ve told Consul
to join our Consul cluster by specifying larry’s IP address using the $JOIN_IP
environment variable.
Let’s see what happened when we launched our container.


Listing 7.39: Looking at the Curly agent logs

curly$ sudo docker logs curly_agent


==> Starting Consul agent...
==> Starting Consul agent RPC...
==> Joining cluster...
Join completed. Synced with 1 initial agents
==> Consul agent running!
Node name: 'curly'
Datacenter: 'dc1'
Server: true (bootstrap: false)
Client Addr: 0.0.0.0 (HTTP: 8500, HTTPS: -1, DNS: 53, RPC:
8400)
Cluster Addr: 162.243.170.66 (LAN: 8301, WAN: 8302)
Gossip encrypt: false, RPC-TLS: false, TLS-Incoming: false
Atlas: <disabled>

==> Log data will now stream in as it occurs:

. . .

2016/08/06 12:37:17 [INFO] consul: adding LAN server curly (Addr:


162.243.170.66:8300) (DC: dc1)
2016/08/06 12:37:17 [INFO] consul: adding WAN server curly.dc1 (
Addr: 162.243.170.66:8300) (DC: dc1)
2016/08/06 12:37:17 [INFO] agent: (LAN) joining:
[162.243.167.159]
2016/08/06 12:37:17 [INFO] serf: EventMemberJoin: larry
162.243.167.159
2016/08/06 12:37:17 [INFO] agent: (LAN) joined: 1 Err: <nil>
2016/08/06 12:37:17 [ERR] agent: failed to sync remote state: No
cluster leader
2016/08/06 12:37:17 [INFO] consul: adding LAN server larry (Addr:
162.243.167.159:8300) (DC: dc1)
2016/08/06 12:37:18 [WARN] raft: EnableSingleNode disabled, and
no known peers. Aborting election.

We see curly has joined larry; indeed, on larry we should see something like the
following:

Listing 7.40: Curly joining Larry

2016/08/06 12:37:17 [INFO] serf: EventMemberJoin: curly


162.243.170.66
2016/08/06 12:37:17 [INFO] consul: adding LAN server curly (Addr:
162.243.170.66:8300) (DC: dc1)

But we’ve still not got a quorum in our cluster; remember we told -bootstrap-
expect to expect 3 nodes. So let’s start our final agent on moe.

Listing 7.41: Starting the agent on moe

moe$ sudo docker run -d -h $HOSTNAME \


-p 8300:8300 -p 8301:8301 \
-p 8301:8301/udp -p 8302:8302 \
-p 8302:8302/udp -p 8400:8400 \
-p 8500:8500 -p 53:53/udp \
--name moe_agent jamtur01/consul \
-server -advertise $PUBLIC_IP -join $JOIN_IP

Our docker run command is basically the same as what we ran on curly. But this
time we have three agents in our cluster. Now, if we look at the container’s logs,
we will see a full cluster.


Listing 7.42: Consul logs on moe

moe$ sudo docker logs moe_agent


==> Starting Consul agent...
==> Starting Consul agent RPC...
==> Joining cluster...
Join completed. Synced with 1 initial agents
==> Consul agent running!
Node name: 'moe'
Datacenter: 'dc1'
Server: true (bootstrap: false)
Client Addr: 0.0.0.0 (HTTP: 8500, HTTPS: -1, DNS: 53, RPC:
8400)
Cluster Addr: 159.203.191.16 (LAN: 8301, WAN: 8302)
Gossip encrypt: false, RPC-TLS: false, TLS-Incoming: false
Atlas: <disabled>

==> Log data will now stream in as it occurs:

. . .

2016/08/06 12:39:14 [ERR] agent: failed to sync remote state: No


cluster leader
2016/08/06 12:39:15 [INFO] consul: New leader elected: larry
2016/08/06 12:39:16 [INFO] agent: Synced service 'consul'

We see from our container’s logs that moe has joined the cluster. This causes Consul
to reach its expected number of cluster members and triggers a leader election. In
this case larry is elected cluster leader.
We see the result of this final agent joining in the Consul logs on larry too.


Listing 7.43: Consul leader election on larry

2016/08/06 12:39:14 [INFO] consul: Attempting bootstrap with


nodes: [162.243.170.66:8300 159.203.191.16:8300
162.243.167.159:8300]
2016/08/06 12:39:15 [WARN] raft: Heartbeat timeout reached,
starting election
2016/08/06 12:39:15 [INFO] raft: Node at 162.243.170.66:8300 [
Candidate] entering Candidate state
2016/08/06 12:39:15 [WARN] raft: Remote peer 159.203.191.16:8300
does not have local node 162.243.167.159:8300 as a peer
2016/08/06 12:39:15 [INFO] raft: Election won. Tally: 2
2016/08/06 12:39:15 [INFO] raft: Node at 162.243.170.66:8300 [
Leader] entering Leader state
2016/08/06 12:39:15 [INFO] consul: cluster leadership acquired
2016/08/06 12:39:15 [INFO] consul: New leader elected: larry
2016/08/06 12:39:15 [INFO] raft: pipelining replication to peer
159.203.191.16:8300
2016/08/06 12:39:15 [INFO] consul: member 'larry' joined, marking
health alive
2016/08/06 12:39:15 [INFO] consul: member 'curly' joined, marking
health alive
2016/08/06 12:39:15 [INFO] raft: pipelining replication to peer
162.243.170.66:8300
2016/08/06 12:39:15 [INFO] consul: member 'moe' joined, marking
health alive

We can also browse to the Consul web interface on larry on port 8500 and select
the Consul service to see the current state.


Figure 7.3: The Consul service in the web interface.

Finally, we can test that DNS is working using the dig command. We specify our
local Docker bridge IP as the DNS server. That’s the IP address of the Docker
interface: docker0.

Listing 7.44: Getting the docker0 IP address

larry$ ip addr show docker0


3: docker0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc
noqueue state UP group default
link/ether 56:84:7a:fe:97:99 brd ff:ff:ff:ff:ff:ff
inet 172.17.0.1/16 scope global docker0
valid_lft forever preferred_lft forever
inet6 fe80::5484:7aff:fefe:9799/64 scope link
valid_lft forever preferred_lft forever

We see the interface has an IP of 172.17.0.1. We then use this with the dig
command.


Listing 7.45: Testing the Consul DNS

larry$ dig @172.17.0.1 consul.service.consul


; <<>> DiG 9.10.3-P4-Ubuntu <<>> @172.17.0.1 consul.service.
consul
; (1 server found)
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 42298
;; flags: qr aa rd ra; QUERY: 1, ANSWER: 3, AUTHORITY: 0,
ADDITIONAL: 0

;; QUESTION SECTION:
;consul.service.consul. IN A

;; ANSWER SECTION:
consul.service.consul. 0 IN A 162.243.170.66
consul.service.consul. 0 IN A 159.203.191.16
consul.service.consul. 0 IN A 162.243.167.159

;; Query time: 1 msec


;; SERVER: 172.17.0.1#53(172.17.0.1)
;; WHEN: Sat Aug 06 12:54:18 UTC 2016
;; MSG SIZE rcvd: 150

Here we’ve queried the IP of the local Docker interface as a DNS server and asked
it to return any information on consul.service.consul. This format is Consul’s
DNS shorthand for services: consul is the host and service.consul is the domain.
Here consul.service.consul represents the DNS entry for the Consul service itself.
For example:


Listing 7.46: Querying another Consul service via DNS

larry$ dig @172.17.0.1 webservice.service.consul

Would return all DNS A records for the service webservice. We can also query
individual nodes.

Listing 7.47: Querying another Consul service via DNS

larry$ dig @172.17.0.1 curly.node.consul +noall +answer

; <<>> DiG 9.10.3-P4-Ubuntu <<>> @172.17.0.1 curly.node.consul +


noall +answer
; (1 server found)
;; global options: +cmd
curly.node.consul. 0 IN A 162.243.170.66

TIP You can see more details on Consul’s DNS interface here.

We now have a running Consul cluster inside Docker containers running on three
separate hosts. That’s pretty cool but it’s not overly useful. Let’s see how we can
register a service in Consul and then retrieve that data.
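
Before we wire up a real application it’s worth noting that you can also register a
service by hand against a local agent’s HTTP API. A minimal sketch, where the
webservice name and port are purely illustrative:

$ curl -X PUT -d '{"Name": "webservice", "Port": 80}' \
  http://localhost:8500/v1/agent/service/register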

Running a distributed service with Consul in Docker

To register our service we’re going to create a phony distributed application writ-
ten in the uWSGI framework. We’re going to build our application in two pieces.

• A web application, distributed_app. It runs web workers and registers


them as services with Consul when it starts.


• A client for our application, distributed_client. The client reads data
about distributed_app from Consul and reports the current application
state and configuration.

We’re going to run the distributed_app on two of our Consul nodes: larry and
curly. We’ll run the distributed_client client on the moe node.

Building our distributed application

We’re going to start with creating a Dockerfile to build distributed_app. Let’s


create a directory to hold our image first.

Listing 7.48: Creating a distributed_app Dockerfile directory

$ mkdir distributed_app
$ cd distributed_app
$ touch Dockerfile

Now let’s look at the Dockerfile for our distributed_app application.


Listing 7.49: The distributed_app Dockerfile

FROM ubuntu:16.04
MAINTAINER James Turnbull "[email protected]"
ENV REFRESHED_AT 2016-06-01

RUN apt-get -qqy update


RUN apt-get -qqy install ruby-dev git libcurl4-openssl-dev curl
build-essential python
RUN gem install --no-ri --no-rdoc uwsgi sinatra

RUN mkdir -p /opt/distributed_app


WORKDIR /opt/distributed_app

RUN uwsgi --build-plugin https://github.com/unbit/uwsgi-consul

ADD uwsgi-consul.ini /opt/distributed_app/


ADD config.ru /opt/distributed_app/

ENTRYPOINT [ "uwsgi", "--ini", "uwsgi-consul.ini", "--ini", "


uwsgi-consul.ini:server1", "--ini", "uwsgi-consul.ini:server2"
]
CMD []

Our Dockerfile installs some required packages including the uWSGI and Sinatra
frameworks as well as a plugin to allow uWSGI to write to Consul. We create a
directory called /opt/distributed_app/ and make it our working directory. We
then add two files, uwsgi-consul.ini and config.ru to that directory.
The uwsgi-consul.ini file configures uWSGI itself. Let’s look at it now.


Listing 7.50: The uWSGI configuration

[uwsgi]
plugins = consul
socket = 127.0.0.1:9999
master = true
enable-threads = true

[server1]
consul-register = url=http://%h.node.consul:8500,name=
distributed_app,id=server1,port=2001
mule = config.ru

[server2]
consul-register = url=http://%h.node.consul:8500,name=
distributed_app,id=server2,port=2002
mule = config.ru

The uwsgi-consul.ini file uses uWSGI’s Mule construct to run two identical ap-
plications that do ”Hello World” in the Sinatra framework. Let’s look at those in
the config.ru file.


Listing 7.51: The distributed_app config.ru file

require 'rubygems'
require 'sinatra'

get '/' do
"Hello World!"
end

run Sinatra::Application

Each application is defined in a block, labelled server1 and server2 respectively.


Also inside these blocks is a call to the uWSGI Consul plugin. This call connects
to our Consul instance and registers a service called distributed_app with an ID
of server1 or server2. Each service is assigned a different port, 2001 and 2002
respectively.
When the framework runs this will create our two web application workers and
register a service for each on Consul. The application will use the local Consul
node to create the service with the %h configuration shortcut populating the Consul
URL with the right hostname.

Listing 7.52: The Consul plugin URL

url=http://%h.node.consul:8500...

Lastly, we’ve configured an ENTRYPOINT instruction to automatically run our web


application workers.
Let’s build our image now.


Listing 7.53: Building our distributed_app image

$ sudo docker build -t="jamtur01/distributed_app" .

NOTE You can get our distributed_app Dockerfile, configuration, and
application files on the book’s site here or on GitHub here.

Building our distributed client

We’re now going to create a Dockerfile to build our distributed_client image.


Let’s create a directory to hold our image first.

Listing 7.54: Creating a distributed_client Dockerfile directory

$ mkdir distributed_client
$ cd distributed_client
$ touch Dockerfile

Now let’s look at the Dockerfile for the distributed_client application.


Listing 7.55: The distributed_client Dockerfile

FROM ubuntu:16.04
MAINTAINER James Turnbull "[email protected]"
ENV REFRESHED_AT 2016-06-01

RUN apt-get -qqy update


RUN apt-get -qqy install ruby ruby-dev build-essential
RUN gem install --no-ri --no-rdoc json

RUN mkdir -p /opt/distributed_client


ADD client.rb /opt/distributed_client/

WORKDIR /opt/distributed_client

ENTRYPOINT [ "ruby", "/opt/distributed_client/client.rb" ]


CMD []

The Dockerfile installs Ruby and some prerequisite packages and gems. It creates
the /opt/distributed_client directory and makes it the working directory. It
copies our client application code, contained in the client.rb file, into the /opt
/distributed_client directory.

Let’s take a quick look at our application code now.


Listing 7.56: The distributed_client application

require "rubygems"
require "json"
require "net/http"
require "uri"
require "resolv"

uri = URI.parse("http://consul.service.consul:8500/v1/catalog/
service/distributed_app")

http = Net::HTTP.new(uri.host, uri.port)


request = Net::HTTP::Get.new(uri.request_uri)
response = http.request(request)

while true
if response.body == "{}"
puts "There are no distributed applications registered in
Consul"
sleep(1)
elsif
result = JSON.parse(response.body)
result.each do |service|
puts "Application #{service['ServiceName']} with element #{
service["ServiceID"]} on port #{service["ServicePort"]}
found on node #{service["Node"]} (#{service["Address"]}).
"
dns = Resolv::DNS.new.getresources("distributed_app.service
.consul", Resolv::DNS::Resource::IN::A)
puts "We can also resolve DNS - #{service['ServiceName']}
resolves to #{dns.collect { |d| d.address }.join(" and ")
}."
sleep(1)
end
end
end

Our client checks the Consul HTTP API and the Consul DNS for the presence of
a service called distributed_app. It queries the host consul.service.consul
which is the DNS CNAME entry we saw earlier that contains all the A records of
our Consul cluster nodes. This provides us with a simple DNS round robin for our
queries.
If no service is present it puts a message to that effect on the console. If it detects
a distributed_app service then it:

• Parses out the JSON output from the API call and returns some useful infor-
mation to the console.
• Performs a DNS lookup for any A records for that service and returns them
to the console.

This will allow us to see the results of launching our distributed_app containers
on our Consul cluster.
Lastly our Dockerfile specifies an ENTRYPOINT instruction that runs the client.rb
application when the container is started.
Let’s build our image now.

Listing 7.57: Building our distributed_client image

$ sudo docker build -t="jamtur01/distributed_client" .

NOTE You can get our distributed_client Dockerfile, configuration, and
application files on the book’s site here or on GitHub here.

Starting our distributed application

Now we’ve built the required images we can launch our distributed_app applica-
tion container on larry and curly. We’ve assumed that you have Consul running


as we’ve configured it earlier in the chapter. Let’s start by running one application
instance on larry.

Listing 7.58: Starting distributed_app on larry

larry$ sudo docker run --dns=172.17.0.1 -h $HOSTNAME -d --name


larry_distributed \
jamtur01/distributed_app

Here we’ve launched the jamtur01/distributed_app image and specified the --


dns flag to add a DNS lookup from the Docker server, here represented by the
docker0 interface bridge IP address of 172.17.0.1. As we’ve bound Consul’s DNS
lookup when we ran the Consul server this will allow the application to lookup
nodes and services in Consul. You should replace this with the IP address of your
own docker0 interface.
We’ve also specified -h flag to set the hostname. This is important because we’re
using this hostname to tell uWSGI what Consul node to register the service on.
We’ve called our container larry_distributed and run it daemonized.
If we check the log output from the container we should see uWSGI starting our
web application workers and registering the service on Consul.


Listing 7.59: The distributed_app log output

larry$ sudo docker logs larry_distributed


*** Starting uWSGI 2.0.13.1 (64bit) on [Sat Aug 6 13:44:26 2016]
***
compiled with version: 5.4.0 20160609 on 06 August 2016 12:58:54
os: Linux-4.4.0-31-generic #50-Ubuntu SMP Wed Jul 13 00:07:12 UTC
2016

. . .

Sat Aug 6 13:44:26 2016 - [consul] workers ready, let's register


the service to the agent
spawned uWSGI mule 2 (pid: 13)
[consul] service distributed_app registered successfully
Sat Aug 6 13:44:27 2016 - [consul] workers ready, let's register
the service to the agent
[consul] service distributed_app registered successfully

We see a subset of the logs here and that uWSGI has started. The Consul plugin has
constructed a service entry for each distributed_app worker and then registered
them with Consul. If we now look at the Consul web interface we should be able
to see our new services.


Figure 7.4: The distributed_app service in the Consul web interface.

Let’s start some more web application workers on curly now.

Listing 7.60: Starting distributed_app on curly

curly$ sudo docker run --dns=172.17.0.1 -h $HOSTNAME -d --name curly_distributed \
jamtur01/distributed_app

If we check the logs and the Consul web interface we should now see more services
registered.


Figure 7.5: More distributed_app services in the Consul web interface.

Starting our distributed application client

Now we've got web application workers running on larry and curly, let's start our client on moe and see if we can query data from Consul.

Listing 7.61: Starting distributed_client on moe

moe$ sudo docker run -ti --dns=172.17.0.1 --name moe_distributed_client \
jamtur01/distributed_client

This time we’ve run the jamtur01/distributed_client image on moe and created
an interactive container called moe_distributed_client. It should start emitting
log output like so:


Listing 7.62: The distributed_client logs on moe

Application distributed_app with element server1 on port 2001 found on node curly (162.243.170.66).
We can also resolve DNS - distributed_app resolves to 162.243.167.159 and 162.243.170.66.
Application distributed_app with element server2 on port 2002 found on node curly (162.243.170.66).
We can also resolve DNS - distributed_app resolves to 162.243.167.159 and 162.243.170.66.
Application distributed_app with element server1 on port 2001 found on node larry (162.243.167.159).
We can also resolve DNS - distributed_app resolves to 162.243.170.66 and 162.243.167.159.
Application distributed_app with element server2 on port 2002 found on node larry (162.243.167.159).
We can also resolve DNS - distributed_app resolves to 162.243.167.159 and 162.243.170.66.

We see that our distributed_client application has queried the HTTP API and
found service entries for distributed_app and its server1 and server2 workers
on both larry and curly. It has also done a DNS lookup to discover the IP addresses of the nodes running that service, 162.243.167.159 and 162.243.170.66.
If this was a real distributed application our client and our workers could take advantage of this information to configure, connect to, and route between elements of the distributed application. This provides a simple and resilient way to build distributed applications running inside separate Docker containers and hosts. You can also replicate the client's queries by hand, as shown in the sketch below.
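
As a rough sketch of what the client is doing under the hood, you can issue the same queries by hand with curl and dig. This assumes the Consul agent's HTTP API is reachable on its default port of 8500 and that its DNS interface is bound to the docker0 address we've been using; adjust the address and ports for your own setup, and expect output along these lines rather than exactly this.

# Ask the Consul HTTP API for the distributed_app service entries.
$ curl -s http://172.17.0.1:8500/v1/catalog/service/distributed_app

# Resolve the A records for the service via Consul's DNS interface.
$ dig @172.17.0.1 distributed_app.service.consul +short
162.243.167.159
162.243.170.66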

Docker Swarm

Docker Swarm is native clustering for Docker. It turns a pool of Docker hosts into a single virtual Docker host. Swarm has a simple architecture: it clusters together multiple Docker hosts and serves the standard Docker API on top of that cluster. This is incredibly powerful because it moves the abstraction of Docker containers up to the cluster level without you having to learn a new API. This makes integration easy with tools that already support the Docker API, including the standard Docker client. To a Docker client a Swarm cluster is just another Docker host.
Swarm, like many other Docker tools, follows a design principle of ”batteries in-
cluded but removable”. This means it ships with tooling and backend integration
for simple use cases and provides an API for integration with more complex tools
and use cases. Swarm has shipped integrated into Docker since Docker 1.12; prior to that it was a standalone application licensed under the Apache 2 license.

Understanding the Swarm

A swarm is a cluster of Docker hosts onto which you can deploy services. Since
Docker 1.12 the Docker command line tool has included a swarm mode. This
allows the docker binary to create and manage swarms as well as run local con-
tainers.
A swarm is made up of manager and worker nodes. Managers do the dispatching and organizing of work on the swarm. Each unit of work is called a task. Managers also handle all the cluster management functions that keep the swarm healthy and active. You can have many manager nodes; if there is more than one, the manager nodes will conduct an election for a leader.
Worker nodes run the tasks dispatched from manager nodes. Out of the box, every node, managers and workers, will run tasks. You can instead configure a swarm manager node to only perform management activities and not run tasks, as shown in the sketch below.
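
For example, once a swarm is up you could drain a manager so the scheduler stops assigning tasks to it. This is only a sketch; manager-node-name is a placeholder for the name shown by docker node ls.

# Run on a manager; stop scheduling tasks onto the named node.
$ sudo docker node update --availability drain manager-node-name

# Return the node to accepting tasks.
$ sudo docker node update --availability active manager-node-name
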
As a task is a fairly atomic unit, swarms use a bigger abstraction, called a service, as a building block. Services define which tasks are executed on your nodes. Each service consists of a container image and a series of commands to execute inside one or more containers on the nodes. You can run services in a number of modes:


• Replicated services - a swarm manager distributes replica tasks amongst workers according to a scale you specify.

• Global services - a swarm manager dispatches one task for the service on every available worker.

The swarm also manages load balancing and DNS much like a local Docker host. Each swarm can expose ports, much like Docker containers publish ports. Like container ports, these can be automatically or manually defined. The swarm handles internal DNS much like a Docker host, allowing services and workers to be discoverable inside the swarm.
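
As a hedged illustration of how service ports work, the sketch below publishes port 80 of an nginx service on port 8080 of the swarm; the image and port numbers are examples only and aren't part of the application we build in this chapter.

$ sudo docker service create --name web --replicas 2 --publish 8080:80 nginx

Thanks to the swarm's routing mesh, a request to port 8080 on any node in the swarm will be load balanced across the service's tasks.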

Installing Swarm

The easiest way to install Swarm is to use Docker itself. As a result, Swarm doesn't have any more prerequisites than what we saw in Chapter 2. These instructions assume you've installed Docker in accordance with those instructions.

TIP Prior to Docker 1.12, when Swarm was integrated into Docker, you could use Swarm via a Docker image provided by the Docker Inc team called swarm. Instructions for installation and usage are available on the Docker Swarm documentation site.

We’re going to reuse our larry, curly and moe hosts to demonstrate Swarm.
The latest Docker release is already installed on these hosts and we’re going to
turn them into nodes of a Swarm cluster.

Setting up a Swarm

Now let's create a Swarm cluster. Each node in our cluster runs a Swarm node agent. Each agent registers its related Docker daemon with the cluster. Also available is the Swarm manager that we'll use to manage our cluster. We're going to create two cluster workers and a manager on our three hosts.

Table 7.3: Swarm addresses and roles

Host   IP Address       Role
larry  162.243.167.159  Manager
curly  162.243.170.66   Worker
moe    159.203.191.16   Worker

We also need to make sure some ports are open between all our nodes. We need
to consider the following access:

Table 7.4: Docker Swarm default ports.

Port            Purpose
2377/tcp        Cluster management
7946/tcp + udp  Node communication
4789/udp        Overlay network traffic
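
How you open these ports depends on your environment. As a sketch, on Ubuntu hosts using ufw you might allow them like so on each node; adjust for whatever firewall you actually run.

$ sudo ufw allow 2377/tcp    # cluster management
$ sudo ufw allow 7946/tcp    # node communication
$ sudo ufw allow 7946/udp
$ sudo ufw allow 4789/udp    # overlay network traffic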

We’re going to start with registering a Swarm on our larry node and use this host
as our Swarm manager. We’re again going to need larry’s public IP address. Let’s
make sure it’s still assigned to an environment variable.

Listing 7.63: Getting public IP on larry again

larry$ PUBLIC_IP="$(ifconfig eth0 | awk -F ' *|:' '/inet addr/{print $4}')"
larry$ echo $PUBLIC_IP
162.243.167.159

Now let’s initialize a swarm on larry using this address.


Listing 7.64: Initializing a swarm on larry

$ sudo docker swarm init --advertise-addr $PUBLIC_IP
Swarm initialized: current node (bu84wfix0h0x31aut8qlpbi9x) is now a manager.

To add a worker to this swarm, run the following command:

    docker swarm join \
    --token SWMTKN-1-2mk0wnb9m9cdwhheoysr3pt8orxku8c7k3x3kjjsxatc5ua72v-776lg9r60gigwb32q329m0dli \
    162.243.167.159:2377

To add a manager to this swarm, run the following command:

    docker swarm join \
    --token SWMTKN-1-2mk0wnb9m9cdwhheoysr3pt8orxku8c7k3x3kjjsxatc5ua72v-78bsc54abf35rhpr3ntbh98t8 \
    162.243.167.159:2377

You can see we’ve run a docker command: swarm. We’ve then used the init option
to initialize a swarm and the --advertise-addr flag to specify the management
IP of the new swarm.
We can see the swarm has been started, assigning larry as the swarm manager.
Each swarm has two registration tokens initialized when the swarm begins: one token for managers and another for worker nodes. Each type of node can use its token to join the swarm. We can see one of our tokens:
SWMTKN-1-2mk0wnb9m9cdwhheoysr3pt8orxku8c7k3x3kjjsxatc5ua72v-776lg9r60gigwb32q329m0dli

You can see that the output from initializing the swarm has also provided sample
commands for adding workers and managers to the new swarm.


TIP If you ever need to get this token back again you can run the docker swarm join-token worker (or docker swarm join-token manager) command on the Swarm manager to retrieve it.
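
For example, to print the worker join command again (output abbreviated here; your token will differ):

larry$ sudo docker swarm join-token worker

To add a worker to this swarm, run the following command:

    docker swarm join \
    --token SWMTKN-1-... \
    162.243.167.159:2377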

Let’s look at the state of our Swarm by running the docker info command.

Listing 7.65: The docker info output

larry$ sudo docker info


. . .
Swarm: active
NodeID: bu84wfix0h0x31aut8qlpbi9x
Is Manager: true
ClusterID: 0qtrjqv37gs3yc5f7ywt8nwfq
Managers: 1
Nodes: 1
Orchestration:
Task History Retention Limit: 5
Raft:
Snapshot interval: 10000
Heartbeat tick: 1
Election tick: 3
Dispatcher:
Heartbeat period: 5 seconds
CA configuration:
Expiry duration: 3 months
Node Address: 162.243.167.159
. . .

With swarm mode enabled you'll see a new section in the docker info output.
We can also view information on the nodes inside the swarm using the docker
node command.


Listing 7.66: The docker node command

larry$ sudo docker node ls
ID                           HOSTNAME  STATUS  AVAILABILITY  MANAGER STATUS
bu84wfix0h0x31aut8qlpbi9x *  larry     Ready   Active        Leader

The docker node command with the ls flag shows the list of nodes in the swarm. Currently we only have one node, larry, which is active and shows its role as Leader of the manager nodes.

Let’s add our curly and moe hosts to the swarm as workers. We can use the
command emitted when we initialized the swarm.

Listing 7.67: Adding worker nodes to the cluster

curly$ sudo docker swarm join \
--token SWMTKN-1-2mk0wnb9m9cdwhheoysr3pt8orxku8c7k3x3kjjsxatc5ua72v-776lg9r60gigwb32q329m0dli \
162.243.167.159:2377
This node joined a swarm as a worker.

The docker swarm join command takes a token, in our case the worker token,
and the IP address and port of a Swarm manager node and adds that Docker host
to the swarm.
We then run the same command on the moe node. Now let's look at our node list again on the larry host.


Listing 7.68: Running the docker node command again

larry$ sudo docker node ls
ID                           HOSTNAME  STATUS  AVAILABILITY  MANAGER STATUS
bu84wfix0h0x31aut8qlpbi9x *  larry     Ready   Active        Leader
c6viix7oja1twnyuc8ez7txhd    curly     Ready   Active
dzxrvk6awnegjtj5aixnojetf    moe       Ready   Active

Now we can see two more nodes added to our swarm as workers.
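
If you later need to remove a worker, a rough sketch of the process is to have the node leave the swarm and then remove it from the manager's view; verify the result afterwards with docker node ls.

# On the worker being removed.
moe$ sudo docker swarm leave

# Then, on a manager, remove the now-down node.
larry$ sudo docker node rm moe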

Running a service on your Swarm

With the swarm running, we can now start to run services on it. Remember that a service is a container image and the commands that will be executed on our swarm nodes. Let's create a simple replicated service now. Remember that replicated services run the number of tasks you specify.

Listing 7.69: Creating a swarm service

$ sudo docker service create --replicas 2 --name heyworld ubuntu \
/bin/sh -c "while true; do echo hey world; sleep 1; done"
8bl7yw1z3gzir0rmcvnrktqol

TIP You can find the full list of docker service create flags on the Docker documentation site.

We've used the docker service command with the create keyword. This creates services on our swarm. We've used the --name flag to call the service heyworld. The heyworld service runs the ubuntu image and a while loop that echoes hey world. The --replicas flag controls how many tasks are run on the swarm. In this case we're running two tasks.
Let’s look at our service using the docker service ls command.

Listing 7.70: Listing the services

$ sudo docker service ls
ID            NAME      REPLICAS  IMAGE   COMMAND
8bl7yw1z3gzi  heyworld  2/2       ubuntu  /bin/sh -c while true; do echo hey world; sleep 1; done

This command lists all services in the swarm. We can see that our heyworld service is running with two replicas. We can inspect the service in further detail using the docker service inspect command. We've also passed in the --pretty flag to return the output in a more readable form.


Listing 7.71: Inspecting the heyworld service

$ sudo docker service inspect --pretty heyworld
ID:        8bl7yw1z3gzir0rmcvnrktqol
Name:      heyworld
Mode:      Replicated
 Replicas: 2
Placement:
UpdateConfig:
 Parallelism: 1
 On failure:  pause
ContainerSpec:
 Image: ubuntu
 Args:  /bin/sh -c while true; do echo hey world; sleep 1; done
Resources:
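
If you want a single field rather than the pretty output, for example for scripting, docker service inspect also accepts a --format flag with a Go template. The template path below is a sketch that assumes the service is running in replicated mode.

$ sudo docker service inspect --format '{{.Spec.Mode.Replicated.Replicas}}' heyworld
2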

But we still don't know where the service is running. Let's look at another command: docker service ps.

Listing 7.72: Checking the heyworld service process

$ sudo docker service ps heyworld
ID       NAME        IMAGE   NODE   DESIRED STATE  CURRENT STATE
103q...  heyworld.1  ubuntu  larry  Running        Running about a minute ago
6ztf...  heyworld.2  ubuntu  moe    Running        Running about a minute ago

We can see each task, suffixed with the task number, and the node it is running
on.
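
The tasks themselves are ordinary containers, so on any node you can see that node's task with the standard docker ps command. As a sketch, filtering on the service name should match containers named like heyworld.1.<task ID>.

larry$ sudo docker ps --filter name=heyworld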


Now, let’s say we wanted to add another task to the service, scaling it up. To do
this we use the docker service scale command.

Listing 7.73: Scaling the heyworld service

$ sudo docker service scale heyworld=3
heyworld scaled to 3

We specify the service we want to scale and then the new number of tasks we want to run, here 3. The swarm has let us know it has scaled. Let's again check the running processes.

Listing 7.74: Checking the heyworld service process

$ sudo docker service ps heyworld
ID       NAME        IMAGE   NODE   DESIRED STATE  CURRENT STATE
103q...  heyworld.1  ubuntu  larry  Running        Running 5 minutes ago
6ztf...  heyworld.2  ubuntu  moe    Running        Running 5 minutes ago
1gib...  heyworld.3  ubuntu  curly  Running        Running about a minute ago

We can see that our service is now running on a third node.


In addition to running replicated services we can also run global services. Rather than running as many replicas as you specify, global services run on every worker in the swarm.


Listing 7.75: Running a global service

$ sudo docker service create --name heyworld_global --mode global ubuntu \
/bin/sh -c "while true; do echo hey world; sleep 1; done"

Here we’ve started a global service called heyworld_global. We’ve specified the
--mode flag with a value of global and run the same ubuntu image and the same
command we ran above.
Let’s see the processes for the heyworld_global service using the docker service
ps command.

Listing 7.76: The heyworld_global process

$ sudo docker service ps heyworld_global
ID       NAME                IMAGE   NODE   DESIRED STATE  CURRENT STATE
c8c1...  heyworld_global     ubuntu  moe    Running        Running 30 seconds ago
48wm...  \_ heyworld_global  ubuntu  curly  Running        Running 30 seconds ago
8b8u...  \_ heyworld_global  ubuntu  larry  Running        Running 29 seconds ago

We can see that the heyworld_global service is running on every one of our nodes.
If we want to stop a service we can run the docker service rm command.


Listing 7.77: Deleting the heyworld service

$ sudo docker service rm heyworld
heyworld

We can now list the running services.

Listing 7.78: Listing the remaining services

$ sudo docker service ls
ID       NAME             REPLICAS  IMAGE   COMMAND
5k3t...  heyworld_global  global    ubuntu  /bin/sh -c...

And we can see that only the heyworld_global service remains running.

TIP Swarm mode also allows for scaling, draining, and staged upgrades. You can find some examples of these in the Docker Swarm tutorial.
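
As a hedged sketch of a staged upgrade using the service from this section, the command below would roll a new image out one task at a time with a ten second delay between updates; the ubuntu:16.04 tag is only an example.

$ sudo docker service update --image ubuntu:16.04 \
  --update-parallelism 1 --update-delay 10s heyworld_global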

Orchestration alternatives and components

As we mentioned earlier, Compose and Consul aren't the only games in town when it comes to Docker orchestration tools. There's a fast-growing ecosystem of them. This is a non-comprehensive list of some of the tools available in that ecosystem. Not all of them have matching functionality; they broadly fall into two categories:

• Scheduling and cluster management.


• Service discovery.


NOTE All of the tools listed are open source under various licenses.

Fleet and etcd

Fleet and etcd are released by the CoreOS team. Fleet is a cluster management tool and etcd is a highly available key-value store for shared configuration and service discovery. Fleet combines systemd and etcd to provide cluster management and scheduling for containers. Think of it as an extension of systemd that operates at the cluster level instead of the machine level.

Kubernetes

Kubernetes is a container cluster management tool open sourced by Google. It allows you to deploy and scale applications using Docker across multiple hosts. Kubernetes is primarily targeted at applications comprised of multiple containers, such as elastic, distributed micro-services.

Apache Mesos

The Apache Mesos project is a highly available cluster management tool. Since Mesos 0.20.0 it has built-in Docker integration to allow you to use containers with Mesos. Mesos is popular with a number of startups, notably Twitter and Airbnb.

Helios

The Helios project has been released by the team at Spotify and is a Docker or-
chestration platform for deploying and managing containers across an entire fleet.
It creates a ”job” abstraction that you can deploy to one or more Helios hosts
running Docker.


Centurion

Centurion is a Docker-based deployment tool open sourced by the New Relic team. Centurion takes containers from a Docker registry and runs them on a fleet of hosts with the correct environment variables, host volume mappings, and port mappings. It is designed to help you do continuous deployment with Docker.

Summary

In this chapter we’ve introduced you to orchestration with Compose. We’ve shown
you how to add a Compose configuration file to create simple application stacks.
We’ve shown you how to run Compose and build those stacks and how to perform
basic management tasks on them.
We’ve also shown you a service discovery tool, Consul. We’ve installed Consul
onto Docker and created a cluster of Consul nodes. We’ve also demonstrated how
a simple distributed application might work on Docker.
We also took a look at Docker Swarm as a Docker clustering and scheduling tool.
We saw how to install Swarm, how to manage it and how to schedule workloads
across it.
Finally, we’ve seen some of the other orchestration tools available to us in the
Docker ecosystem.
In the next chapter we’ll look at the Docker API, how we can use it, and how we
can secure connections to our Docker daemon via TLS.
