Week 4, Lesson 2: The Docker Book, Chapter 7: Docker Orchestration and Service Discovery
James Turnbull
Orchestration is a pretty loosely defined term. It’s broadly the process of auto-
mated configuration, coordination, and management of services. In the Docker
world we use it to describe the set of practices around managing applications run-
ning in multiple Docker containers and potentially across multiple Docker hosts.
Native orchestration is in its infancy in the Docker community but an exciting
ecosystem of tools is being integrated and developed.
In the current ecosystem there are a variety of tools being built and integrated
with Docker. Some of these tools are simply designed to elegantly ”wire” together
multiple containers and build application stacks using simple composition. Other
tools provide larger scale coordination between multiple Docker hosts as well as
complex service discovery, scheduling and execution capabilities.
Each of these areas really deserves its own book but we’ve focused on a few useful
tools that give you some insight into what you can achieve when orchestrating
containers. They provide some useful building blocks upon which you can grow
your Docker-enabled environment.
In this chapter we will focus on three areas:
• Docker Compose.
• Consul, a service discovery tool.
• Docker Swarm.

We'll start with Docker Compose, which was originally developed by the Orchard team and then acquired by Docker Inc in 2014. It's written in Python and licensed with the Apache 2.0 license.
TIP We’ll also talk about many of the other orchestration tools available to
you later in this chapter.
Docker Compose
Now let’s get familiar with Docker Compose. With Docker Compose, we define a
set of containers to boot up, and their runtime properties, all defined in a YAML
file. Docker Compose calls each of these containers ”services” which it defines as:
A container that interacts with other containers in some way and that
has specific runtime properties.
We’re going to take you through installing Docker Compose and then using it to
build a simple, multi-container application stack.
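If we're on a Linux host, one common way is to download the docker-compose binary from the project's GitHub releases and make it executable, for example:

$ sudo curl -L "https://github.com/docker/compose/releases/download/1.16.1/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose
$ sudo chmod +x /usr/local/bin/docker-compose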
This will download the docker-compose binary from GitHub and install it into
the /usr/local/bin directory. We’ve also used the chmod command to make the
docker-compose binary executable so we can run it.
If we're on OS X, Docker Compose comes bundled with Docker for Mac, or we can install it with the same curl command shown above.
TIP Replace the 1.16.1 with the release number of the current Docker Com-
pose release.
If we’re on Windows Docker Compose comes bundled inside Docker for Windows.
Compose is also available as a Python package if you’re on another platform or
if you prefer installing via package. You will need to have the Python-Pip tool
installed to use the pip command. This is available via the python-pip package
on most Red Hat, Debian and Ubuntu releases.
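With pip available, installing Compose looks like:

$ sudo pip install -U docker-compose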
Once you have installed the docker-compose binary you can test it’s working using
the docker-compose command with the --version flag:
$ docker-compose --version
docker-compose version 1.16.1, build f3628c7
To demonstrate how Compose works we're going to use a sample Python Flask application that combines two containers:

• An application container running our sample Python application.
• A container running the Redis database.
Let’s start with building our sample application. Firstly, we create a directory and
a Dockerfile.
$ mkdir composeapp
$ cd composeapp
Here we’ve created a directory to hold our sample application, which we’re calling
composeapp.
Next, we need to add our application code. Let’s create a file called app.py in the
composeapp directory and add the following Python code to it.
from flask import Flask
from redis import Redis

app = Flask(__name__)
redis = Redis(host="redis", port=6379)

@app.route('/')
def hello():
    redis.incr('hits')
    return 'Hello Docker Book reader! I have been seen {0} times'.format(redis.get('hits'))

if __name__ == "__main__":
    app.run(host="0.0.0.0", debug=True)
TIP You can find this source code on GitHub here or on the Docker Book site
here.
This simple Flask application tracks a counter stored in Redis. The counter is
incremented each time the root URL, /, is hit.
We also need to create a requirements.txt file to store our application’s depen-
dencies. Let’s create that file now and add the following dependencies.
flask
redis
Now we also need a Dockerfile for our application image. Let's create one in the composeapp directory.

FROM python:2.7
ADD . /composeapp
WORKDIR /composeapp
RUN pip install -r requirements.txt
Our Dockerfile is simple. It is based on the python:2.7 image. We add our app
.py and requirements.txt files into a directory in the image called /composeapp.
The Dockerfile then sets the working directory to /composeapp and runs the pip
installation process to install our application’s dependencies: flask and redis.
Let’s build that image now using the docker build command.
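We run the build from inside the composeapp directory:

$ sudo docker build -t jamtur01/composeapp .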
This will build a new image called jamtur01/composeapp containing our sample
application and its required dependencies. We can now use Compose to deploy
our application.
NOTE We’ll be using a Redis container created from the default Redis
image on the Docker Hub so we don’t need to build or customize that.
The docker-compose.yml file
Now we’ve got our application image built we can configure Compose to create
both the services we require. With Compose, we define a set of services (in the
form of Docker containers) to launch. We also define the runtime properties we
want these services to start with, much as you would do with the docker run
command. We define all of this in a YAML file. We then run the docker-compose
up command. Compose launches the containers, executes the appropriate runtime
configuration, and multiplexes the log output together for us.
Let’s create a docker-compose.yml file for our application inside our composeapp
directory.
$ touch docker-compose.yml
version: '3'
services:
  web:
    image: jamtur01/composeapp
    command: python app.py
    ports:
      - "5000:5000"
    volumes:
      - .:/composeapp
  redis:
    image: redis
Each service we wish to launch is specified as a YAML hash inside a hash called
services. Here our two services are: web and redis.
TIP The version statement tells Docker Compose what configuration version to use.
The Docker Compose API has evolved over the years and each change has been
marked by incrementing the version.
For our web service we’ve specified some runtime options. Firstly, we’ve specified
the image we’re using: the jamtur01/composeapp image. Compose can also build
Docker images. You can use the build instruction and provide the path to a
Dockerfile to have Compose build an image and then create services from it.
web:
  build: /home/james/composeapp
  . . .
This build instruction would build a Docker image from a Dockerfile found in
the /home/james/composeapp directory.
We’ve also specified the command to run when launching the service. Next we
specify the ports and volumes as a list of the port mappings and volumes we want
for our service. We’ve specified that we’re mapping port 5000 inside our service
to port 5000 on the host. We’re also creating /composeapp as a volume.
If we were executing the same configuration on the command line using docker
run we’d do it like so:
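A rough equivalent (the container name here is illustrative) would be:

$ sudo docker run -d -p 5000:5000 -v "$PWD":/composeapp --name composeapp_web jamtur01/composeapp python app.py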
Next we’ve specified another service called redis. For this service we’re not set-
ting any runtime defaults at all. We’re just going to use the base redis image. By
default, containers run from this image launch a Redis database on the standard
port, so we don't need to configure or customize it.
TIP You can see a full list of the available instructions you can use in the
docker-compose.yml file here.
Running Compose

Once we've specified our services in the docker-compose.yml file we use the docker-compose up command to launch them, running it from inside the composeapp directory.

$ cd composeapp
$ sudo docker-compose up
Creating network "composeapp_default" with the default driver
Recreating composeapp_web_1 ...
Recreating composeapp_web_1
Recreating composeapp_redis_1 ...
Recreating composeapp_web_1 ... done
Attaching to composeapp_redis_1, composeapp_web_1
web_1 | * Running on http://0.0.0.0:5000/ (Press CTRL+C to
quit)
. . .
Compose then attaches to the logs of each service. Each line of log output is
prefixed with the abbreviated name of the service it comes from, and the output
of all services is multiplexed together, as we see above.
The services (and Compose) are being run interactively. That means if you use
Ctrl-C or the like to cancel Compose then it’ll stop the running services. We
could also run Compose with the -d flag to run our services daemonized (similar to
the docker run -d flag).
$ sudo docker-compose up -d
Let’s look at the sample application that’s now running on the host. The applica-
tion is bound to all interfaces on the Docker host on port 5000. So we can browse
to that site on the host’s IP address or via localhost.
We see a message displaying the current counter value. We can increment the
counter by refreshing the site. Each refresh stores the increment in Redis. The
Redis update is done via the link between the Docker containers controlled by
Compose.
TIP By default, Compose tries to connect to a local Docker daemon but it'll
also honor the DOCKER_HOST environment variable to connect to a remote Docker
host.
Using Compose
Now let’s explore some of Compose’s other options. Firstly, let’s use Ctrl-C to
cancel our running services and then restart them as daemonized services.
Press Ctrl-C inside the composeapp directory and then re-run the docker-compose
up command, this time with the -d flag.
$ sudo docker-compose up -d
Starting composeapp_web_1 ...
Starting composeapp_redis_1 ...
Starting composeapp_redis_1
Starting composeapp_web_1 ... done
$ . . .
We see that Compose has recreated our services, launched them and returned to
the command line.
Our Compose-managed services are now running daemonized on the host. Let’s
look at them now using the docker-compose ps command; a close cousin of the
docker ps command.
The docker-compose ps command lists all of the currently running services from
our local docker-compose.yml file.
$ cd composeapp
$ sudo docker-compose ps
       Name                  Command               State   Ports
------------------------------------------------------------------------------
composeapp_redis_1   docker-entrypoint.sh redis    Up      6379/tcp
composeapp_web_1     python app.py                 Up      0.0.0.0:5000->5000/tcp
This shows some basic information about our running Compose services. The
name of each service, what command we used to start the service, and the ports
that are mapped on each service.
We can also drill down further using the docker-compose logs command to show
us the log events from our services.
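For example, again from inside the composeapp directory:

$ sudo docker-compose logs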
This will tail the log files of our services, much like the tail -f command. Like
tail -f, you'll need to use Ctrl-C or the like to exit from it.
We can also stop our running services with the docker-compose stop command.
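For example:

$ sudo docker-compose stop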
This will stop both services. If the services don’t stop you can use the docker-
compose kill command to force kill the services.
$ sudo docker-compose ps
       Name              Command        State    Ports
--------------------------------------------------------
composeapp_redis_1   redis-server       Exit 0
composeapp_web_1     python app.py      Exit 0
If we're done with the services we can remove them entirely with the docker-compose rm command.

$ sudo docker-compose rm
Going to remove composeapp_redis_1, composeapp_web_1
Are you sure? [yN] y
Removing composeapp_redis_1...
Removing composeapp_web_1...
You’ll be prompted to confirm you wish to remove the services and then both
services will be deleted. The docker-compose ps command will now show no
running or stopped services.
$ sudo docker-compose ps
Name Command State Ports
------------------------------
Compose in summary
Now in one file we have a simple Python-Redis stack built! You can see how much
easier this can make constructing applications from multiple Docker containers.
It's an especially great tool for building local development stacks. This, however,
just scratches the surface of what you can do with Compose. There are some
more examples using Rails, Django and Wordpress on the Compose website that
introduce some more advanced concepts.
Consul, Service Discovery and Docker

To get a better understanding of how Consul works, we're going to see how to
run a distributed Consul cluster inside Docker containers. We're then going to register
services from Docker containers with Consul and query that data from other Docker
containers. To make it more interesting we're going to do this across multiple
Docker hosts.
To do this we’re going to:
• Build three hosts running Docker and then run Consul on each. The three
hosts will provide us with a distributed environment to see how resiliency
and failover work with Consul.
• Build services that we’ll register with Consul and then query that data from
another service.
We’re going to start with creating a Dockerfile to build our Consul image. Let’s
create a directory to hold our Consul image first.
$ mkdir consul
$ cd consul
$ touch Dockerfile
FROM ubuntu:16.04
MAINTAINER James Turnbull <[email protected]>
ENV REFRESHED_AT 2014-08-01
RUN apt-get -qqy update; apt-get -qqy install curl unzip
ADD https://releases.hashicorp.com/consul/0.6.4/consul_0.6.4_linux_amd64.zip /tmp/consul.zip
RUN cd /usr/sbin; unzip /tmp/consul.zip; chmod +x /usr/sbin/consul; rm /tmp/consul.zip
# Web UI; the exact download URL is assumed from the HashiCorp 0.6.4 release
ADD https://releases.hashicorp.com/consul/0.6.4/consul_0.6.4_web_ui.zip /tmp/webui.zip
RUN mkdir -p /webui; cd /webui; unzip /tmp/webui.zip; rm /tmp/webui.zip
ADD consul.json /config/
EXPOSE 53/udp 8300 8301 8301/udp 8302 8302/udp 8400 8500
VOLUME ["/data"]
ENTRYPOINT [ "/usr/sbin/consul", "agent", "-config-dir=/config" ]
Our Dockerfile is pretty simple. It’s based on an Ubuntu 16.04 image. It installs
curl and unzip. We then download the Consul zip file containing the consul
binary. We move that binary to /usr/sbin/ and make it executable. We also
download Consul’s web interface and place it into a directory called /webui. We’re
going to see this web interface in action a little later.
We then add a configuration file for Consul, consul.json, to the /config directory.
Let’s create and look at that file now.
{
  "data_dir": "/data",
  "ui_dir": "/webui",
  "client_addr": "0.0.0.0",
  "ports": {
    "dns": 53
  },
  "recursor": "8.8.8.8"
}
The consul.json configuration file is JSON formatted and provides Consul with
the information needed to get running. We’ve specified a data directory, /data,
to hold Consul’s data. We also specify the location of the web interface files: /
webui. We use the client_addr variable to bind Consul to all interfaces inside our
container.
We also use the ports block to configure on which ports various Consul services
run. In this case we’re specifying that Consul’s DNS service should run on port
53. Lastly, we’ve used the recursor option to specify a DNS server to use for
resolution if Consul can’t resolve a DNS request. We’ve specified 8.8.8.8 which
is one of the IP addresses of Google’s public DNS service.
TIP You can find the full list of available Consul configuration options here.
Back in our Dockerfile we've used the EXPOSE instruction to open up a series of
ports that Consul requires to operate. I've added a table, Table 7.1, showing each
of these ports and what they are used for.

Table 7.1: Consul's required ports

Port Purpose
53/udp DNS server
8300 Server RPC
8301 + udp Serf LAN port
8302 + udp Serf WAN port
8400 RPC endpoint
8500 HTTP API
You don’t need to worry about most of them for the purposes of this chapter. The
important ones for us are 53/udp which is the port Consul is going to be running
DNS on. We’re going to use DNS to query service information. We’re also going to
use Consul’s HTTP API and its web interface, both of which are bound to port 8500.
The rest of the ports handle the backend communication and clustering between
Consul nodes. We’ll configure them in our Docker container but we don’t do
anything specific with them.
NOTE You can find more details of what each port does here.
Next, we’ve also made our /data directory a volume using the VOLUME instruction.
This is useful if we want to manage or work with this data as we saw in Chapter
6.
Finally, we’ve specified an ENTRYPOINT instruction to launch Consul using the
consul binary when a container is launched from our image.
Let’s step through the command line options we’ve used. We’ve specified the
consul binary in /usr/sbin/. We’ve passed it the agent command which tells
Consul to run as an agent and the -config-dir flag and specified the location of
our consul.json file in the /config directory.
NOTE You can get our Consul Dockerfile and configuration file here or
on GitHub here. If you don't want to use a home grown image there is also an
officially sanctioned Consul image on the Docker Hub.
Before we run Consul on multiple hosts, let’s see it working locally on a single
host. To do this we’ll run a container from our new jamtur01/consul image.
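We first build the image from our Dockerfile and then launch a container from it, roughly like so:

$ sudo docker build -t="jamtur01/consul" .
$ sudo docker run -p 8500:8500 -p 53:53/udp -h node1 jamtur01/consul -server -bootstrap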
. . .
We’ve used the docker run command to create a new container. We’ve mapped
two ports, port 8500 in the container to 8500 on the host and port 53 in the con-
tainer to 53 on the host. We’ve also used the -h flag to specify the hostname of
the container, here node1. This is going to be both the hostname of the container
and the name of the Consul node. We’ve then specified the name of our Consul
image, jamtur01/consul.
Lastly, we’ve passed two flags to the consul binary: -server and -bootstrap. The
-server flag tells the Consul agent to operate in server mode. The -bootstrap flag
tells Consul that this node is allowed to self-elect as a leader. This allows us to
see a Consul agent in server mode doing a Raft leadership election.
We see that Consul has started node1 and done a local leader election. As we’ve
got no other Consul nodes running it is not connected to anything else.
We can also see this via the Consul web interface if we browse to our local host’s
IP address on port 8500.
As Consul is distributed we’d normally create three (or more) hosts to run in sep-
arate data centers, clouds or regions. Or even add an agent to every application
server. This will provide us with sufficient distributed resilience. We’re going to
mimic this required distribution by creating three new hosts each with a Docker
daemon to run Consul. We will create three new Ubuntu 16.04 hosts: larry,
curly, and moe. On each host we’ll install a Docker daemon. We’ll also pull down
the jamtur01/consul image.
TIP Create the hosts using whatever means you normally use to run up new hosts.
To install Docker you can use the installation instructions in Chapter 2.
On each host we’re going to run a Docker container with the jamtur01/consul
image. To do this we need to choose a network to run Consul over. In most cases
this would be a private network but as we’re just simulating a Consul cluster I
am going to use the public interfaces of each host. To start Consul on this public
network I am going to need the public IP address of each host. This is the address
to which we’re going to bind each Consul agent.
Let’s grab that now on larry and assign it to an environment variable, $PUBLIC_IP.
And then create the same $PUBLIC_IP variable on curly and moe too.
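One way to do this, assuming the public address is on the eth0 interface, is:

larry$ PUBLIC_IP="$(ip -4 addr show eth0 | awk '/inet / {print $2}' | cut -d/ -f1)"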
We see we've got three hosts and three IP addresses, each assigned to the
$PUBLIC_IP environment variable.
Host IP Address
larry 162.243.167.159
curly 162.243.170.66
moe 159.203.191.16
We’re also going to need to nominate a host to bootstrap to start the cluster. We’re
going to choose larry. This means we’ll need larry’s IP address on curly and moe
to tell them which Consul node’s cluster to join. Let’s set that up now by adding
larry’s IP address of 162.243.167.159 to curly and moe as the environment vari-
able, $JOIN_IP.
curly$ JOIN_IP=162.243.167.159
moe$ JOIN_IP=162.243.167.159
Let’s start our initial bootstrap node on larry. Our docker run command is going
to be a little complex because we’re mapping a lot of ports. Indeed, we need to
map all the ports listed in Table 7.1 above. And, as we’re both running Consul in
a container and connecting to containers on other hosts, we’re going to map each
port to the corresponding port on the local host. This will allow both internal and
external access to Consul.
Let’s see our docker run command now.
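A sketch of that command follows; the container name and the -h $HOSTNAME setting are illustrative rather than taken from the original text.

larry$ sudo docker run -d -h $HOSTNAME --name larry_agent \
  -p 8300:8300 -p 8301:8301 -p 8301:8301/udp \
  -p 8302:8302 -p 8302:8302/udp \
  -p 8400:8400 -p 8500:8500 -p 53:53/udp \
  jamtur01/consul -server -advertise $PUBLIC_IP -bootstrap-expect 3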
We’ve also specified some command line options for the Consul agent.
The -server flag tells the agent to run in server mode. The -advertise flag tells
that server to advertise itself on the IP address specified in the $PUBLIC_IP environ-
ment variable. Lastly, the -bootstrap-expect flag tells Consul how many agents
to expect in this cluster. In this case, 3 agents. It also bootstraps the cluster.
Let’s look at the logs of our initial Consul container with the docker logs com-
mand.
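Using the illustrative container name from the sketch above:

larry$ sudo docker logs larry_agent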
. . .
We see that the agent on larry is started but because we don’t have any more
nodes yet no election has taken place. We know this from the only error returned.
Now we’ve bootstrapped our cluster we can start our remaining nodes on curly
and moe. Let’s start with curly. We use the docker run command to launch our
second agent.
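Again a sketch, mirroring the larry command but joining the existing cluster:

curly$ sudo docker run -d -h $HOSTNAME --name curly_agent \
  -p 8300:8300 -p 8301:8301 -p 8301:8301/udp \
  -p 8302:8302 -p 8302:8302/udp \
  -p 8400:8400 -p 8500:8500 -p 53:53/udp \
  jamtur01/consul -server -advertise $PUBLIC_IP -join $JOIN_IP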
We see our command is similar to our bootstrapped node on larry with the ex-
ception of the command we’re passing to the Consul agent.
Again we’ve enabled the Consul agent’s server mode with -server and bound the
agent to the public IP address using the -advertise flag. Finally, we’ve told Con-
sul to join our Consul cluster by specifying larry’s IP address using the $JOIN_IP
environment variable.
Let’s see what happened when we launched our container.
. . .
We see curly has joined larry, indeed on larry we should see something like the
following:
But we’ve still not got a quorum in our cluster, remember we told -bootstrap-
expect to expect 3 nodes. So let’s start our final agent on moe.
Our docker run command is basically the same as what we ran on curly. But this
time we have three agents in our cluster. Now, if we look at the container’s logs,
we will see a full cluster.
. . .
We see from our container’s logs that moe has joined the cluster. This causes Consul
to reach its expected number of cluster members and triggers a leader election. In
this case larry is elected cluster leader.
We see the result of this final agent joining in the Consul logs on larry too.
We can also browse to the Consul web interface on larry on port 8500 and select
the Consul service to see the current state.
Finally, we can test the DNS is working using the dig command. We specify our
local Docker bridge IP as the DNS server. That’s the IP address of the Docker
interface: docker0.
We see the interface has an IP of 172.17.0.1. We then use this with the dig
command.
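For example:

larry$ dig @172.17.0.1 consul.service.consul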
;; QUESTION SECTION:
;consul.service.consul. IN A
;; ANSWER SECTION:
consul.service.consul. 0 IN A 162.243.170.66
consul.service.consul. 0 IN A 159.203.191.16
consul.service.consul. 0 IN A 162.243.167.159
Here we’ve queried the IP of the local Docker interface as a DNS server and asked
it to return any information on consul.service.consul. This format is Consul’s
DNS shorthand for services: consul is the host and service.consul is the domain.
Here consul.service.consul represents the DNS entry for the Consul service itself.
For example:
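larry$ dig @172.17.0.1 webservice.service.consul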
Would return all DNS A records for the service webservice. We can also query
individual nodes.
TIP You can see more details on Consul’s DNS interface here.
We now have a running Consul cluster inside Docker containers running on three
separate hosts. That’s pretty cool but it’s not overly useful. Let’s see how we can
register a service in Consul and then retrieve that data.
To register our service we're going to create a phony distributed application written
with the uWSGI framework. We're going to build our application in two pieces:

• A web application, distributed_app, whose workers register themselves as services with Consul when it starts.
• A client, distributed_client, that queries Consul for information about those services.

We're going to run the distributed_app on two of our Consul nodes: larry and
curly. We'll run the distributed_client client on the moe node.
$ mkdir distributed_app
$ cd distributed_app
$ touch Dockerfile
FROM ubuntu:16.04
MAINTAINER James Turnbull "[email protected]"
ENV REFRESHED_AT 2016-06-01
Our Dockerfile installs some required packages including the uWSGI and Sinatra
frameworks as well as a plugin to allow uWSGI to write to Consul. We create a
directory called /opt/distributed_app/ and make it our working directory. We
then add two files, uwsgi-consul.ini and config.ru to that directory.
The uwsgi-consul.ini file configures uWSGI itself. Let's look at it now.
[uwsgi]
plugins = consul
socket = 127.0.0.1:9999
master = true
enable-threads = true
[server1]
consul-register = url=http://%h.node.consul:8500,name=distributed_app,id=server1,port=2001
mule = config.ru

[server2]
consul-register = url=http://%h.node.consul:8500,name=distributed_app,id=server2,port=2002
mule = config.ru
The uwsgi-consul.ini file uses uWSGI’s Mule construct to run two identical ap-
plications that do ”Hello World” in the Sinatra framework. Let’s look at those in
the config.ru file.
require 'rubygems'
require 'sinatra'
get '/' do
"Hello World!"
end
run Sinatra::Application
Note the url option passed to consul-register, url=http://%h.node.consul:8500..., where %h is a placeholder that uWSGI replaces with the hostname, so each worker registers with the Consul agent on its own node. Now let's create our client application, distributed_client, starting with a directory and a Dockerfile.
$ mkdir distributed_client
$ cd distributed_client
$ touch Dockerfile
FROM ubuntu:16.04
MAINTAINER James Turnbull "[email protected]"
ENV REFRESHED_AT 2016-06-01
# Install Ruby and prerequisites (the exact package list is an assumption)
RUN apt-get -qqy update; apt-get -qqy install ruby ruby-dev build-essential
ADD client.rb /opt/distributed_client/
WORKDIR /opt/distributed_client
ENTRYPOINT [ "ruby", "/opt/distributed_client/client.rb" ]
The Dockerfile installs Ruby and some prerequisite packages and gems. It creates
the /opt/distributed_client directory and makes it the working directory. It
copies our client application code, contained in the client.rb file, into the /opt
/distributed_client directory.
require "rubygems"
require "json"
require "net/http"
require "uri"
require "resolv"
uri = URI.parse("http://consul.service.consul:8500/v1/catalog/
service/distributed_app")
while true
if response.body == "{}"
puts "There are no distributed applications registered in
Consul"
sleep(1)
elsif
result = JSON.parse(response.body)
result.each do |service|
puts "Application #{service['ServiceName']} with element #{
service["ServiceID"]} on port #{service["ServicePort"]}
found on node #{service["Node"]} (#{service["Address"]}).
"
dns = Resolv::DNS.new.getresources("distributed_app.service
.consul", Resolv::DNS::Resource::IN::A)
puts "We can also resolve DNS - #{service['ServiceName']}
resolves to #{dns.collect { |d| d.address }.join(" and ")
}."
sleep(1)
end
end
Version: v17.07.0-ce-2 (e269502) 301
end
Chapter 7: Docker Orchestration and Service Discovery
Our client checks the Consul HTTP API and the Consul DNS for the presence of
a service called distributed_app. It queries the host consul.service.consul
which is the DNS CNAME entry we saw earlier that contains all the A records of
our Consul cluster nodes. This provides us with a simple DNS round robin for our
queries.
If no service is present it puts a message to that effect on the console. If it detects
a distributed_app service then it:
• Parses out the JSON output from the API call and returns some useful infor-
mation to the console.
• Performs a DNS lookup for any A records for that service and returns them
to the console.
This will allow us to see the results of launching our distributed_app containers
on our Consul cluster.
Lastly our Dockerfile specifies an ENTRYPOINT instruction that runs the client.rb
application when the container is started.
Let’s build our image now.
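From inside the distributed_client directory:

$ sudo docker build -t jamtur01/distributed_client .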
Now we’ve built the required images we can launch our distributed_app applica-
tion container on larry and curly. We’ve assumed that you have Consul running
as we’ve configured it earlier in the chapter. Let’s start by running one application
instance on larry.
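A sketch of launching it; the image name, container name and the -h $HOSTNAME setting (which lets the uWSGI Consul plugin register against the local %h.node.consul address) are assumptions:

larry$ sudo docker run -d -h $HOSTNAME --name larry_distributed jamtur01/distributed_app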
. . .
We see a subset of the logs here and that uWSGI has started. The Consul plugin has
constructed a service entry for each distributed_app worker and then registered
them with Consul. If we now look at the Consul web interface we should be able
to see our new services.
We can then launch a second application instance on curly in the same way. If we
check the logs and the Consul web interface we should now see more services registered.
Now we’ve got web application workers running on larry and curly let’s start
our client on moe and see if we can query data from Consul.
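A sketch of that command, matching the description below:

moe$ sudo docker run -ti -h $HOSTNAME --name moe_distributed_client jamtur01/distributed_client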
This time we’ve run the jamtur01/distributed_client image on moe and created
an interactive container called moe_distributed_client. It should start emitting
log output like so:
We see that our distributed_client application has queried the HTTP API and
found service entries for distributed_app and its server1 and server2 workers
on both larry and curly. It has also done a DNS lookup to discover the IP address
of the nodes running that service, 162.243.167.159 and 162.243.170.66.
If this was a real distributed application our client and our workers could take
advantage of this information to configure, connect, route between elements of
the distributed application. This provides a simple, easy and resilient way to build
distributed applications running inside separate Docker containers and hosts.
Docker Swarm
Docker Swarm is native clustering for Docker. It turns a pool of Docker hosts
into a single virtual Docker host. Swarm has a simple architecture. It clusters
together multiple Docker hosts and serves the standard Docker API on top of that
cluster. This is incredibly powerful because it moves up the abstraction of Docker
containers to the cluster level without you having to learn a new API. This makes
integration with tools that already support the Docker API easy, including the
standard Docker client. To a Docker client a Swarm cluster is just another Docker
host.
Swarm, like many other Docker tools, follows a design principle of ”batteries in-
cluded but removable”. This means it ships with tooling and backend integration
for simple use cases and provides an API for integration with more complex tools
and use cases. Swarm is shipped integrated into Docker since Docker 1.12. Prior
to that it was a standalone application licensed with the Apache 2 license.
A swarm is a cluster of Docker hosts onto which you can deploy services. Since
Docker 1.12 the Docker command line tool has included a swarm mode. This
allows the docker binary to create and manage swarms as well as run local con-
tainers.
A swarm is made up of manager and worker nodes. Managers do the dispatching
and organizing of work on the swarm. Each unit of work is called a task. Managers
also handle all the cluster management functions that keep the swarm healthy and
active. You can have many manager nodes; if there is more than one, the
manager nodes will conduct an election for a leader.
Worker nodes run the tasks dispatched from manager nodes. Out of the box, every
node, managers and workers, will run tasks. You can instead configure a swarm
manager node to only perform management activities and not run tasks.
As a task is a pretty atomic unit, swarms use a bigger abstraction, called a service,
as a building block. Services define which tasks are executed on your nodes.
Each service consists of a container image and a series of commands to execute
inside one or more containers on the nodes. You can run services in two modes:

• Replicated services - a swarm manager distributes a specified number of replica
tasks among the nodes.
• Global services - a swarm manager dispatches one task for the service on
every available worker.
The swarm also manages load balancing and DNS much like a local Docker host.
Each swarm can expose ports, much like Docker containers publish ports. Like
container ports, these can be automatically or manually defined. The swarm
handles internal DNS much like a Docker host allowing services and workers to
be discoverable inside the swarm.
Installing Swarm
The easiest way to install Swarm is to use Docker itself. As a result, Swarm doesn't
have any more prerequisites than those we saw in Chapter 2. These instructions
assume you’ve installed Docker in accordance with those instructions.
TIP Prior to Docker 1.12, when Swarm was integrated into Docker, you could
use Swarm via a Docker image provided by the Docker Inc team called swarm.
Instructions for installation and usage are available on the Docker Swarm docu-
mentation site.
We’re going to reuse our larry, curly and moe hosts to demonstrate Swarm.
The latest Docker release is already installed on these hosts and we’re going to
turn them into nodes of a Swarm cluster.
Setting up a Swarm
Now let’s create a Swarm cluster. Each node in our cluster runs a Swarm node
agent. Each agent registers its related Docker daemon with the cluster. Also
available is the Swarm manager that we’ll use to manage our cluster. We’re going
to create two cluster workers and a manager on our three hosts.
We also need to make sure some ports are open between all our nodes. We need
to consider the following access:
Port Purpose
2377 Cluster Management
7946 + udp Node communication
4789 + udp Overlay network
We’re going to start with registering a Swarm on our larry node and use this host
as our Swarm manager. We’re again going to need larry’s public IP address. Let’s
make sure it’s still assigned to an environment variable.
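For example:

larry$ echo $PUBLIC_IP
162.243.167.159
larry$ sudo docker swarm init --advertise-addr $PUBLIC_IP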
You can see we’ve run a docker command: swarm. We’ve then used the init option
to initialize a swarm and the --advertise-addr flag to specify the management
IP of the new swarm.
We can see the swarm has been started, assigning larry as the swarm manager.
Each swarm has two registration tokens initialized when the swarm begins. One
token for a manager and another for worker nodes. Each type of node can use
this token to join the swarm. We can see one of our tokens:
SWMTKN-1-2mk0wnb9m9cdwhheoysr3pt8orxku8c7k3x3kjjsxatc5ua72v-776lg9r60gigwb32q329m0dli
You can see that the output from initializing the swarm has also provided sample
commands for adding workers and managers to the new swarm.
TIP If you ever need to get this token back again then you can run the
docker swarm join-token worker command on the Swarm manager to retrieve it.
Let’s look at the state of our Swarm by running the docker info command.
By enabling a swarm you’ll see a new section in the docker info output.
We can also view information on the nodes inside the swarm using the docker
node command.
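For example:

larry$ sudo docker node ls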
The docker node command with the ls flag shows the list of nodes in the swarm.
Currently we only have one node larry which is active and shows its role as
Leader of the manager nodes.
Let’s add our curly and moe hosts to the swarm as workers. We can use the
command emitted when we initialized the swarm.
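On curly that looks like:

curly$ sudo docker swarm join \
  --token SWMTKN-1-2mk0wnb9m9cdwhheoysr3pt8orxku8c7k3x3kjjsxatc5ua72v-776lg9r60gigwb32q329m0dli \
  162.243.167.159:2377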
The docker swarm join command takes a token, in our case the worker token,
and the IP address and port of a Swarm manager node and adds that Docker host
to the swarm.
And then again with the same command on the moe node. Now let’s look at our
node list again on the larry host.
Now we can see two more nodes added to our swarm as workers.
With the swarm running, we can now start to run services on it. Remember ser-
vices are a container image and commands that will be executed on our swarm
nodes. Let’s create a simple replica service now. Remember that replica services
run the number of tasks you specify.
TIP You can find the full list of docker service create flags on the Docker
documentation site.
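A sketch of the command, based on the description that follows:

$ sudo docker service create --name heyworld --replicas 2 \
  ubuntu /bin/sh -c "while true; do echo hey world; sleep 1; done"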
We’ve used the docker service command with the create keyword. This creates
services on our swarm. We’ve used the --name flag to call the service: heyworld.
The heyworld service runs the ubuntu image and a while loop that echoes hey world. The
--replicas flag controls how many tasks are run on the swarm. In this case we’re
running two tasks.
Let’s look at our service using the docker service ls command.
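For example:

$ sudo docker service ls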
This command lists all services in the swarm. We can see that our heyworld service
is running on two replicas. We can inspect the service in further detail using the
docker service inspect command. We’ve also passed in the --pretty flag to
return the output in an elegant form.
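Like so:

$ sudo docker service inspect --pretty heyworld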
But we still don't know where the service is running. Let's look at another command:
docker service ps.
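For example:

$ sudo docker service ps heyworld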
We can see each task, suffixed with the task number, and the node it is running
on.
Now, let’s say we wanted to add another task to the service, scaling it up. To do
this we use the docker service scale command.
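Like so:

$ sudo docker service scale heyworld=3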
We specify the service we want to scale and then the new number of tasks we
want run, here 3. The swarm has then let us know it has scaled. Let’s again check
the running processes.
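Next, to demonstrate the second service mode, we can launch a global service. A sketch, based on the description that follows:

$ sudo docker service create --name heyworld_global --mode global \
  ubuntu /bin/sh -c "while true; do echo hey world; sleep 1; done"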
Here we’ve started a global service called heyworld_global. We’ve specified the
--mode flag with a value of global and run the same ubuntu image and the same
command we ran above.
Let’s see the processes for the heyworld_global service using the docker service
ps command.
We can see that the heyworld_global service is running on every one of our nodes.
If we want to stop a service we can run the docker service rm command.
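For example, removing the replicated heyworld service:

$ sudo docker service rm heyworld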
And we can see that only the heyworld_global service remains running.
TIP Swarm mode also allows for scaling, draining and staged upgrades. You
can find some examples of this in the Docker Swarm tutorial.
Orchestration alternatives and options

As we mentioned earlier, Compose and Consul aren't the only games in town when
it comes to Docker orchestration tools. There's a fast-growing ecosystem of them.
This is a non-comprehensive list of some of the tools available in that ecosystem.
They don't all have matching functionality and broadly fall into two categories:
tools that compose and wire together multi-container applications, and tools that
provide scheduling, cluster management and service discovery across multiple
Docker hosts.
NOTE All of the tools listed are open source under various licenses.
Fleet and etcd

Fleet and etcd are released by the CoreOS team. Fleet is a cluster management tool
and etcd is a highly available key-value store for shared configuration and service
discovery. Fleet combines systemd and etcd to provide cluster management and
scheduling for containers. Think of it as an extension of systemd that operates at
the cluster level instead of the machine level.
Kubernetes

Kubernetes is a container cluster management tool open sourced by Google. It
allows you to deploy, scale and manage containerized applications across clusters
of hosts.
Apache Mesos
The Apache Mesos project is a highly available cluster management tool. Since
Mesos 0.20.0 it has built-in Docker integration to allow you to use containers with
Mesos. Mesos is popular with a number of startups, notably Twitter and AirBnB.
Helios
The Helios project has been released by the team at Spotify and is a Docker or-
chestration platform for deploying and managing containers across an entire fleet.
It creates a ”job” abstraction that you can deploy to one or more Helios hosts
running Docker.
Centurion

Centurion is a Docker-based deployment tool built and open sourced by the New
Relic team. It takes containers from a Docker registry and runs them on a fleet of
hosts.
Summary
In this chapter we’ve introduced you to orchestration with Compose. We’ve shown
you how to add a Compose configuration file to create simple application stacks.
We’ve shown you how to run Compose and build those stacks and how to perform
basic management tasks on them.
We’ve also shown you a service discovery tool, Consul. We’ve installed Consul
onto Docker and created a cluster of Consul nodes. We’ve also demonstrated how
a simple distributed application might work on Docker.
We also took a look at Docker Swarm as a Docker clustering and scheduling tool.
We saw how to install Swarm, how to manage it and how to schedule workloads
across it.
Finally, we’ve seen some of the other orchestration tools available to us in the
Docker ecosystem.
In the next chapter we’ll look at the Docker API, how we can use it, and how we
can secure connections to our Docker daemon via TLS.