Docker

This document describes how to create a simple Python Flask app without Docker and then package it into a Docker container. It includes the steps to: 1) Create a basic Python Flask "hello world" app without any dependencies beyond Flask. 2) Define a Dockerfile that uses the Python Alpine image as a base, installs Flask via pip, copies the Python file, and sets the default command to run the app. 3) Build the Docker image using the Dockerfile with the name "python-hello-world".


1. Create a Python app (without using Docker)

1. Copy and paste this entire command into the terminal. Running this command creates a file named app.py:

   echo 'from flask import Flask

app = Flask(__name__)

@app.route("/")
def hello():
    return "hello world!"

if __name__ == "__main__":
    app.run(host="0.0.0.0")' > app.py

This is a simple Python app that uses Flask to expose an HTTP web server on port 5000. (5000 is the default port for Flask.) Don't worry if you are not too familiar with Python or Flask; these concepts can be applied to an application written in any language.

2. Optional: If you have Python and pip installed, run this app locally. If not, move on to the next section of this lab.

   $ python3 --version
   Python 3.6.1
   $ pip3 --version
   pip 9.0.1 from /usr/local/lib/python3.6/site-packages (python 3.6)
   $ pip3 install flask
   Requirement already satisfied: flask in /usr/local/lib/python3.6/site-packages
   Requirement already satisfied: Werkzeug>=0.7 in /usr/local/lib/python3.6/site-packages (from flask)
   Requirement already satisfied: itsdangerous>=0.21 in /usr/local/lib/python3.6/site-packages (from flask)
   Requirement already satisfied: Jinja2>=2.4 in /usr/local/lib/python3.6/site-packages (from flask)
   Requirement already satisfied: click>=2.0 in /usr/local/lib/python3.6/site-packages (from flask)
   Requirement already satisfied: MarkupSafe>=0.23 in /usr/local/lib/python3.6/site-packages (from Jinja2>=2.4->flask)
   $ python3 app.py
   * Running on http://0.0.0.0:5000/ (Press CTRL+C to quit)
2. Create and build the Docker image

If you don't have Python installed locally, don't worry because you don't need it. One of the
advantages of using Docker containers is that you can build Python into your containers
without having Python installed on your host.

1. Create a file named Dockerfile and add the following content:

   FROM python:3.6.1-alpine
   RUN pip install flask
   CMD ["python","app.py"]
   COPY app.py /app.py

A Dockerfile lists the instructions needed to build a Docker image. Let's go through
the Dockerfile line by line.

o FROM python:3.6.1-alpine

This is the starting point for your Dockerfile. Every Dockerfile typically starts
with a FROM line that is the starting image to build your layers on top of. In this
case, you are selecting the python:3.6.1-alpine base layer because it
already has the version of Python and pip that you need to run your
application. The alpine variant uses the Alpine Linux distribution, which is significantly smaller than most alternative flavors of Linux. A smaller image downloads (deploys) much faster, and it is also more secure because it has a smaller attack surface.

Here you are using the 3.6.1-alpine tag for the Python image. Look at the
available tags for the official Python image on the Docker Hub. It is best
practice to use a specific tag when inheriting a parent image so that changes to
the parent dependency are controlled. If no tag is specified, the latest tag takes
effect, which acts as a dynamic pointer that points to the latest version of an
image.

For security reasons, you must understand the layers that you build your
docker image on top of. For that reason, it is highly recommended to only use
official images found in the Docker Hub, or noncommunity images found in
the Docker Store. These images are vetted to meet certain security
requirements, and also have very good documentation for users to follow. You
can find more information about this Python base image and other images that
you can use on the Docker store.

For a more complex application, you might need to use a FROM image that is
higher up the chain. For example, the parent Dockerfile for your Python
application starts with FROM alpine, then specifies a series of CMD and RUN
commands for the image. If you needed more control, you could start with
FROM alpine (or a different distribution) and run those steps yourself.
However, to start, it's recommended that you use an official image that closely
matches your needs.

o RUN pip install flask


The RUN command executes commands needed to set up your image for your
application, such as installing packages, editing files, or changing file
permissions. In this case, you are installing Flask. The RUN commands are
executed at build time and are added to the layers of your image.

o CMD ["python","app.py"]

CMD is the command that is executed when you start a container. Here, you are using CMD to run your Python application.

There can be only one CMD per Dockerfile. If you specify more than one CMD, only the last one takes effect. The parent python:3.6.1-alpine image also specifies a CMD (CMD ["python3"]). You can look at the Dockerfile for the official python:alpine image.

You can use the official Python image directly to run Python scripts without
installing Python on your host. However, in this case, you are creating a
custom image to include your source so that you can build an image with your
application and ship it to other environments.

o COPY app.py /app.py

This line copies the app.py file in the local directory (where you will run
docker image build) into a new layer of the image. This instruction is the
last line in the Dockerfile. Layers that change frequently, such as copying
source code into the image, should be placed near the bottom of the file to take full advantage of the Docker layer cache. This lets you avoid rebuilding layers that could otherwise be cached. For instance, a change to the FROM instruction invalidates the cache for all subsequent layers of the image. You'll see this a little later in this lab.

It might seem counterintuitive to put this line after the CMD ["python","app.py"] line. Remember, the CMD line is executed only when the container is started, so you won't get a file-not-found error here.
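The caching point above is worth seeing concretely. A common variant of this Dockerfile (a sketch, assuming a hypothetical requirements.txt file that pins the dependency list) copies the dependency list before the application source, so the pip layer stays cached while you edit app.py:

```dockerfile
FROM python:3.6.1-alpine
# Copy only the dependency list first; this file changes rarely
COPY requirements.txt /requirements.txt
# This layer stays cached until requirements.txt itself changes
RUN pip install -r /requirements.txt
# Source code changes often, so it comes last; only this layer rebuilds
COPY app.py /app.py
CMD ["python", "app.py"]
```

With this ordering, editing app.py and rebuilding reuses the cached pip layer instead of reinstalling Flask on every build.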

And there you have it: a very simple Dockerfile. See the full list of commands
that you can put into a Dockerfile. Now that you've defined the Dockerfile,
you'll use it to build your custom docker image.

2. Build the Docker image. Pass in the -t parameter to name your image python-hello-world:

   $ docker image build -t python-hello-world .
   Sending build context to Docker daemon 3.072kB
   Step 1/4 : FROM python:3.6.1-alpine
   3.6.1-alpine: Pulling from library/python
   acb474fa8956: Pull complete
   967ab02d1ea4: Pull complete
   640064d26350: Pull complete
   db0225fcac8f: Pull complete
   5432cc692c60: Pull complete
   Digest: sha256:768360b3fad01adffcf5ad9eccb4aa3ccc83bb0ed341bbdc45951e89335082ce
   Status: Downloaded newer image for python:3.6.1-alpine
    ---> c86415c03c37
   Step 2/4 : RUN pip install flask
    ---> Running in cac3222673a3
   Collecting flask
     Downloading Flask-0.12.2-py2.py3-none-any.whl (83kB)
   Collecting itsdangerous>=0.21 (from flask)
     Downloading itsdangerous-0.24.tar.gz (46kB)
   Collecting click>=2.0 (from flask)
     Downloading click-6.7-py2.py3-none-any.whl (71kB)
   Collecting Werkzeug>=0.7 (from flask)
     Downloading Werkzeug-0.12.2-py2.py3-none-any.whl (312kB)
   Collecting Jinja2>=2.4 (from flask)
     Downloading Jinja2-2.9.6-py2.py3-none-any.whl (340kB)
   Collecting MarkupSafe>=0.23 (from Jinja2>=2.4->flask)
     Downloading MarkupSafe-1.0.tar.gz
   Building wheels for collected packages: itsdangerous, MarkupSafe
     Running setup.py bdist_wheel for itsdangerous: started
     Running setup.py bdist_wheel for itsdangerous: finished with status 'done'
     Stored in directory: /root/.cache/pip/wheels/fc/a8/66/24d655233c757e178d45dea2de22a04c6d92766abfb741129a
     Running setup.py bdist_wheel for MarkupSafe: started
     Running setup.py bdist_wheel for MarkupSafe: finished with status 'done'
     Stored in directory: /root/.cache/pip/wheels/88/a7/30/e39a54a87bcbe25308fa3ca64e8ddc75d9b3e5afa21ee32d57
   Successfully built itsdangerous MarkupSafe
   Installing collected packages: itsdangerous, click, Werkzeug, MarkupSafe, Jinja2, flask
   Successfully installed Jinja2-2.9.6 MarkupSafe-1.0 Werkzeug-0.12.2 click-6.7 flask-0.12.2 itsdangerous-0.24
    ---> ce41f2517c16
   Removing intermediate container cac3222673a3
   Step 3/4 : CMD python app.py
    ---> Running in 2197e5263eff
    ---> 0ab91286958b
   Removing intermediate container 2197e5263eff
   Step 4/4 : COPY app.py /app.py
    ---> f1b2781b3111
   Removing intermediate container b92b506ee093
   Successfully built f1b2781b3111
   Successfully tagged python-hello-world:latest

3. Verify that your image shows in your image list:

   $ docker image ls
   REPOSITORY           TAG            IMAGE ID       CREATED          SIZE
   python-hello-world   latest         f1b2781b3111   26 seconds ago   99.3MB
   python               3.6.1-alpine   c86415c03c37   8 days ago       88.7MB

Notice that your base image, python:3.6.1-alpine, is also in your list.


3. Run the Docker image

Now that you have built the image, you can run it to see that it works.

1. Run the Docker image:

   $ docker run -p 5001:5000 -d python-hello-world
   0b2ba61df37fb4038d9ae5d145740c63c2c211ae2729fc27dc01b82b5aaafa26

The -p flag maps a port running inside the container to your host. In this case, you're
mapping the Python app running on port 5000 inside the container to port 5001 on
your host. Note that if port 5001 is already being used by another application on your
host, you might need to replace 5001 with another value, such as 5002.

2. Navigate to http://localhost:5001 in a browser to see the results.

You should see "hello world!" in your browser.

3. Check the log output of the container.

If you want to see logs from your application, you can use the docker container
logs command. By default, docker container logs prints out what is sent to
standard out by your application. Use the command docker container ls to find
the ID for your running container.

$ docker container ls (gets container id)


$ docker container logs [container id]
* Running on http://0.0.0.0:5000/ (Press CTRL+C to quit)
172.17.0.1 - - [28/Jun/2017 19:35:33] "GET / HTTP/1.1" 200 -

The Dockerfile is used to create reproducible builds for your application. A common
workflow is to have your CI/CD automation run docker image build as part of its
build process. After images are built, they will be sent to a central registry where they
can be accessed by all environments (such as a test environment) that need to run
instances of that application. In the next section, you will push your custom image to
the public Docker registry, which is the Docker Hub, where it can be consumed by
other developers and operators.

4. Push to a central registry

1. Navigate to Docker Hub and create a free account if you haven't already.

For this lab, you will use the Docker Hub as your central registry. Docker Hub is a
free service to publicly store available images. You can also pay to store private
images.
Most organizations that use Docker extensively will set up their own registry
internally. To simplify things, you will use Docker Hub, but the following concepts
apply to any registry.

2. Log in to the Docker registry account by entering docker login on your terminal:
   $ docker login
   Login with your Docker ID to push and pull images from Docker Hub. If you don't have a Docker ID, head over to https://hub.docker.com to create one.
   Username:

3. Tag the image with your username.

   The Docker Hub naming convention is to tag your image with [dockerhub username]/[image name]. To do this, tag your previously created image python-hello-world to fit that format.

   $ docker tag python-hello-world [dockerhub username]/python-hello-world

4. After you properly tag the image, use the docker push command to push your image to the Docker Hub registry:

   $ docker push jzaccone/python-hello-world
   The push refers to a repository [docker.io/jzaccone/python-hello-world]
   2bce026769ac: Pushed
   64d445ecbe93: Pushed
   18b27eac38a1: Mounted from library/python
   3f6f25cd8b1e: Mounted from library/python
   b7af9d602a0f: Mounted from library/python
   ed06208397d5: Mounted from library/python
   5accac14015f: Mounted from library/python
   latest: digest: sha256:508238f264616bf7bf962019d1a3826f8487ed6a48b80bf41fd3996c7175fd0f size: 1786

5. Check your image on Docker Hub in your browser.

Navigate to Docker Hub and go to your profile to see your uploaded image.

Now that your image is on Docker Hub, other developers and operators can use the
docker pull command to deploy your image to other environments.

Remember: Docker images contain all the dependencies that an application needs to run, inside the image itself. This is useful because you no longer need to worry about environment drift (version differences) caused by relying on dependencies installed separately in every environment you deploy to. Nor do you need extra steps to provision those environments. There is just one step: install Docker, and that's it.

6. Understand image layers

One of the important design properties of Docker is its use of the union file system.
Consider the Dockerfile that you created before:

FROM python:3.6.1-alpine
RUN pip install flask
CMD ["python","app.py"]
COPY app.py /app.py

Each of these lines is a layer. Each layer contains only the delta, or changes from the layers
before it. To put these layers together into a single running container, Docker uses the union
file system to overlay layers transparently into a single view.

Each layer of the image is read-only except for the top layer, which is created for the
container. The read/write container layer implements "copy-on-write," which means that files
that are stored in lower image layers are pulled up to the read/write container layer only when
edits are being made to those files. Those changes are then stored in the container layer.

The "copy-on-write" function is very fast and in almost all cases, does not have a noticeable
effect on performance. You can inspect which files have been pulled up to the container level
with the docker diff command. For more information, see the command-line reference on
the docker diff command.
Because image layers are read-only, they can be shared by images and by running containers.
For example, creating a new Python application with its own Dockerfile with similar base
layers will share all the layers that it had in common with the first Python application.

FROM python:3.6.1-alpine
RUN pip install flask
CMD ["python","app2.py"]
COPY app2.py /app2.py

You can also see the sharing of layers when you start multiple containers from the same
image. Because the containers use the same read-only layers, you can imagine that starting
containers is very fast and has a very low footprint on the host.
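As a rough analogy (a sketch in Python, not how Docker is actually implemented), you can model the union file system as a stack of read-only dictionaries with one writable dictionary on top; collections.ChainMap gives exactly this layered view:

```python
from collections import ChainMap

# Read-only image layers (lower entries come from earlier Dockerfile lines)
base_layer = {"/usr/bin/python3": "cpython-3.6.1", "/bin/sh": "busybox"}
flask_layer = {"/usr/lib/flask": "flask-0.12.2"}

# Each container gets its own empty read/write layer on top
container_layer = {}
view = ChainMap(container_layer, flask_layer, base_layer)

# Reads fall through the layers transparently
assert view["/usr/lib/flask"] == "flask-0.12.2"

# "Copy-on-write": writes land only in the container layer
view["/usr/bin/python3"] = "patched"
assert container_layer["/usr/bin/python3"] == "patched"
assert base_layer["/usr/bin/python3"] == "cpython-3.6.1"  # image layer untouched
```

Because writes never touch the lower dictionaries, many "containers" (ChainMaps) can share the same read-only layers, which is why starting containers is cheap.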

You might notice that there are duplicate lines in this Dockerfile and the Dockerfile that you
created earlier in this lab. Although this is a trivial example, you can pull common lines of
both Dockerfiles into a base Dockerfile, which you can then point to with each of your child
Dockerfiles by using the FROM command.

Image layering enables the docker caching mechanism for builds and pushes. For example,
the output for your last docker push shows that some of the layers of your image already
exist on the Docker Hub.

$ docker push jzaccone/python-hello-world
The push refers to a repository [docker.io/jzaccone/python-hello-world]
94525867566e: Pushed
64d445ecbe93: Layer already exists
18b27eac38a1: Layer already exists
3f6f25cd8b1e: Layer already exists
b7af9d602a0f: Layer already exists
ed06208397d5: Layer already exists
5accac14015f: Layer already exists
latest: digest: sha256:91874e88c14f217b4cab1dd5510da307bf7d9364bd39860c9cc8688573ab1a3a size: 1786

To look more closely at layers, you can use the docker image history command on the Python image you created:

$ docker image history python-hello-world
IMAGE          CREATED         CREATED BY                                       SIZE      COMMENT
f1b2781b3111   5 minutes ago   /bin/sh -c #(nop) COPY file:0114358808a1bb...    159B
0ab91286958b   5 minutes ago   /bin/sh -c #(nop) CMD ["python" "app.py"]        0B
ce41f2517c16   5 minutes ago   /bin/sh -c pip install flask                     10.6MB
c86415c03c37   8 days ago      /bin/sh -c #(nop) CMD ["python3"]                0B
<missing>      8 days ago      /bin/sh -c set -ex; apk add --no-cache -...      5.73MB
<missing>      8 days ago      /bin/sh -c #(nop) ENV PYTHON_PIP_VERSION=...     0B
<missing>      8 days ago      /bin/sh -c cd /usr/local/bin && ln -s idl...     32B
<missing>      8 days ago      /bin/sh -c set -ex && apk add --no-cache ...     77.5MB
<missing>      8 days ago      /bin/sh -c #(nop) ENV PYTHON_VERSION=3.6.1       0B
<missing>      8 days ago      /bin/sh -c #(nop) ENV GPG_KEY=0D96DF4D411...     0B
<missing>      8 days ago      /bin/sh -c apk add --no-cache ca-certificates    618kB
<missing>      8 days ago      /bin/sh -c #(nop) ENV LANG=C.UTF-8               0B
<missing>      8 days ago      /bin/sh -c #(nop) ENV PATH=/usr/local/bin...     0B
<missing>      9 days ago      /bin/sh -c #(nop) CMD ["/bin/sh"]                0B
<missing>      9 days ago      /bin/sh -c #(nop) ADD file:cf1b74f7af8abcf...    4.81MB

Each line represents a layer of the image. You'll notice that the top lines match to the
Dockerfile that you created, and the lines below are pulled from the parent Python image.
Don't worry about the <missing> tags. These are still normal layers; they have just not been
given an ID by the Docker system.

7. Remove the containers

Completing this lab results in a lot of running containers on your host. You'll stop and remove these containers.

1. Get a list of the running containers with the command docker container ls:

   $ docker container ls
   CONTAINER ID   IMAGE                COMMAND           CREATED         STATUS         PORTS                    NAMES
   0b2ba61df37f   python-hello-world   "python app.py"   7 minutes ago   Up 7 minutes   0.0.0.0:5001->5000/tcp   practical_kirch

2. Run docker container stop [container id] for each container in the list that is running:

   $ docker container stop 0b2
   0b2

3. Remove the stopped containers by running docker system prune:

   $ docker system prune
   WARNING! This will remove:
   - all stopped containers
   - all volumes not used by at least one container
   - all networks not used by at least one container
   - all dangling images
   Are you sure you want to continue? [y/N] y
   Deleted Containers:
   0b2ba61df37fb4038d9ae5d145740c63c2c211ae2729fc27dc01b82b5aaafa26

   Total reclaimed space: 300.3kB

1. Create your first swarm

In this section, you will create your first swarm by using Play-with-Docker.

1. Navigate to Play-with-Docker. You're going to create a swarm with three nodes.


2. Click Add new instance on the left side three times to create three nodes.
3. Initialize the swarm on node1:

   $ docker swarm init --advertise-addr eth0
   Swarm initialized: current node (vq7xx5j4dpe04rgwwm5ur63ce) is now a manager.

   To add a worker to this swarm, run the following command:

       docker swarm join \
       --token SWMTKN-1-50qba7hmo5exuapkmrj6jki8knfvinceo68xjmh322y7c8f0pj-87mjqjho30uue43oqbhhthjui \
       10.0.120.3:2377

   To add a manager to this swarm, run 'docker swarm join-token manager' and follow the instructions.

You can think of Docker Swarm as a special mode that is activated by the command docker swarm init. The --advertise-addr option specifies the address that the other nodes will use to join the swarm.

This docker swarm init command generates a join token. The token makes sure
that no malicious nodes join the swarm. You need to use this token to join the other
nodes to the swarm. For convenience, the output includes the full command docker
swarm join, which you can just copy/paste to the other nodes.

4. On both node2 and node3, copy and run the docker swarm join command that was output to your console by the last command.

You now have a three-node swarm!

5. Back on node1, run docker node ls to verify your three-node cluster:

   $ docker node ls
   ID                            HOSTNAME   STATUS   AVAILABILITY   MANAGER STATUS
   7x9s8baa79l29zdsx95i1tfjp     node3      Ready    Active
   x223z25t7y7o4np3uq45d49br     node2      Ready    Active
   zdqbsoxa6x1bubg3jyjdmrnrn *   node1      Ready    Active         Leader

This command outputs the three nodes in your swarm. The asterisk (*) next to the ID of a node marks the node that handled this specific command (docker node ls in this case).

Your swarm consists of one manager node and two worker nodes. Managers handle commands and manage the state of the swarm. Workers cannot handle commands; they simply run containers at scale. By default, managers also run containers.

All docker service commands for the rest of this lab need to be executed on the manager
node (Node1).

Note: Although here you control the swarm directly from the node on which it is running, you can control a Docker swarm remotely by connecting to the Docker Engine of the manager by using the remote API or by activating a remote host from your local Docker installation (using the $DOCKER_HOST and $DOCKER_CERT_PATH environment variables). This becomes useful when you want to control production applications remotely instead of using SSH to directly access production servers.

2. Deploy your first service

Now that you have your three-node Swarm cluster initialized, you'll deploy some containers.
To run containers on a Docker Swarm, you need to create a service. A service is an
abstraction that represents multiple containers of the same image deployed across a
distributed cluster.

Let's do a simple example using NGINX. For now, you will create a service with one running
container, but you will scale up later.

1. Deploy a service by using NGINX:

   $ docker service create --detach=true --name nginx1 --publish 80:80 \
     --mount source=/etc/hostname,target=/usr/share/nginx/html/index.html,type=bind,ro \
     nginx:1.12
   pgqdxr41dpy8qwkn6qm7vke0q

This command statement is declarative, and Docker Swarm will try to maintain the
state declared in this command unless explicitly changed by another docker service
command. This behavior is useful when nodes go down, for example, and containers
are automatically rescheduled on other nodes. You will see a demonstration of that a
little later in this lab.
The --mount flag is useful to have NGINX print out the hostname of the node it's
running on. You will use this later in this lab when you start load balancing between
multiple containers of NGINX that are distributed across different nodes in the cluster
and you want to see which node in the swarm is serving the request.

You are using NGINX tag 1.12 in this command. You will see a rolling update with
version 1.13 later in this lab.

The --publish option uses the swarm's built-in routing mesh. In this case, port 80 is exposed on every node in the swarm. The routing mesh routes a request coming in on port 80 to one of the nodes running the container.

2. Inspect the service. Use the command docker service ls to inspect the service you just created:

   $ docker service ls
   ID             NAME     MODE         REPLICAS   IMAGE        PORTS
   pgqdxr41dpy8   nginx1   replicated   1/1        nginx:1.12   *:80->80/tcp
3. Check the running container of the service.

To take a deeper look at the running tasks, use the command docker service ps. A
task is another abstraction in Docker Swarm that represents the running instances of a
service. In this case, there is a 1-1 mapping between a task and a container.

$ docker service ps nginx1
ID             NAME       IMAGE        NODE    DESIRED STATE   CURRENT STATE           ERROR   PORTS
iu3ksewv7qf9   nginx1.1   nginx:1.12   node1   Running         Running 8 minutes ago

If you know which node your container is running on (you can see which node based
on the output from docker service ps), you can use the command docker
container ls to see the container running on that specific node.

4. Test the service.

Because of the routing mesh, you can send a request to any node of the swarm on port
80. This request will be automatically routed to the one node that is running the
NGINX container.

Try this command on each node:

$ curl localhost:80
node1

Curling will output the hostname where the container is running. For this example, it
is running on node1, but yours might be different.
3. Scale your service

In production, you might need to handle large amounts of traffic to your application, so you'll
learn how to scale.

1. Update your service with an updated number of replicas.

Use the docker service command to update the NGINX service that you created
previously to include 5 replicas. This is defining a new state for the service.

$ docker service update --replicas=5 --detach=true nginx1
nginx1

When this command is run, the following events occur:

o The state of the service is updated to 5 replicas, which is stored in the swarm's
internal storage.
o Docker Swarm recognizes that the number of replicas that is scheduled now
does not match the declared state of 5.
o Docker Swarm schedules 4 more tasks (containers) in an attempt to meet the
declared state for the service.

The swarm actively checks whether the desired state equals the actual state and attempts to reconcile if needed.

2. Check the running instances.

After a few seconds, you should see that the swarm did its job and successfully started
4 more containers. Notice that the containers are scheduled across all three nodes of
the cluster. The default placement strategy that is used to decide where new
containers are to be run is the emptiest node, but that can be changed based on your
needs.

$ docker service ps nginx1
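The reconcile-to-desired-state loop and the emptiest-node placement can be sketched in a few lines of Python (a toy model, not Swarm's actual scheduler; the task names simply mimic the nginx1.N pattern you see in docker service ps):

```python
def reconcile(desired, tasks_by_node):
    """Start new tasks until the actual replica count matches the
    desired count, placing each new task on the emptiest node
    (the default placement strategy)."""
    def total():
        return sum(len(tasks) for tasks in tasks_by_node.values())

    while total() < desired:
        # Pick the node currently running the fewest tasks
        emptiest = min(tasks_by_node, key=lambda node: len(tasks_by_node[node]))
        tasks_by_node[emptiest].append(f"nginx1.{total() + 1}")
    return tasks_by_node

# One replica is already running on node1; scale to 5 replicas
state = {"node1": ["nginx1.1"], "node2": [], "node3": []}
reconcile(5, state)
assert sum(len(t) for t in state.values()) == 5
# Tasks end up spread evenly across the three nodes
assert max(len(t) for t in state.values()) - min(len(t) for t in state.values()) <= 1
```

Running the update command again with the same replica count would be a no-op, which is exactly the declarative behavior described above.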

3. Send a lot of requests to http://localhost:80.

The --publish 80:80 parameter is still in effect for this service; that was not
changed when you ran the docker service update command. However, now when
you send requests on port 80, the routing mesh has multiple containers in which to
route requests to. The routing mesh acts as a load balancer for these containers,
alternating where it routes requests to.
Try it out by curling multiple times. Note that it doesn't matter to which node you send the requests; there is no connection between the node that receives a request and the node that the request is routed to.

$ curl localhost:80
node3
$ curl localhost:80
node3
$ curl localhost:80
node2
$ curl localhost:80
node1
$ curl localhost:80
node1

You should see which node is serving each request because of the --mount option you used earlier.

Limits of the routing mesh: The routing mesh can publish only one service on port
80. If you want multiple services exposed on port 80, you can use an external
application load balancer outside of the swarm to accomplish this.
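The load-balancing behavior you just observed can be modeled as simple rotation over the service's tasks (a toy sketch; the real routing mesh uses kernel-level load balancing, so the actual request order is not guaranteed to be a strict rotation):

```python
from itertools import cycle

# 5 replicas of the service, spread across the three nodes; each
# entry is the hostname the task would serve back via the --mount trick
tasks = ["node1", "node1", "node2", "node3", "node3"]
backend = cycle(tasks)

# 10 incoming requests on port 80, arriving at any node of the swarm
served = [next(backend) for _ in range(10)]
assert served == tasks * 2  # every task gets an equal share of requests
```

The key point the model captures: which node answers depends only on the set of running tasks, not on which node received the request.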

4. Check the aggregated logs for the service.

Another easy way to see which nodes those requests were routed to is to check the
aggregated logs. You can get aggregated logs for the service by using the command
docker service logs [service name]. This aggregates the output from every
running container, that is, the output from docker container logs [container
name].

$ docker service logs nginx1

Based on these logs, you can see that each request was served by a different container.

In addition to seeing whether the request was sent to node1, node2, or node3, you can
also see which container on each node that it was sent to. For example, nginx1.5
means that request was sent to a container with that same name as indicated in the
output of the command docker service ps nginx1.

4. Apply rolling updates

Now that you have your service deployed, you'll see a release of your application. You are
going to update the version of NGINX to version 1.13.
1. Run the docker service update command:

   $ docker service update --image nginx:1.13 --detach=true nginx1

This triggers a rolling update of the swarm. Quickly enter the command docker
service ps nginx1 over and over to see the updates in real time.

You can fine-tune the rolling update by using these options:

o --update-parallelism: specifies the number of containers to update immediately (defaults to 1).
o --update-delay: specifies the delay between finishing updating one set of containers and moving on to the next set.

2. After a few seconds, run the command docker service ps nginx1 to see that all the replicas have been updated to nginx:1.13:

   $ docker service ps nginx1

You have successfully updated your application to the latest version of NGINX.

5. Reconcile problems with containers

In the previous section, you updated the state of your service by using the command docker
service update. You saw Docker Swarm in action as it recognized the mismatch between
desired state and actual state, and attempted to solve the issue.

The inspect-and-then-adapt model of Docker Swarm enables it to perform reconciliation when something goes wrong. For example, when a node in the swarm goes down, it might take down running containers with it. The swarm recognizes this loss of containers and attempts to reschedule containers on available nodes to achieve the desired state for that service.

You are going to remove a node and see tasks of your nginx1 service be rescheduled on other
nodes automatically.

1. To get clean output, create a new service by copying the following command. Change the name and the published port to avoid conflicts with your existing service, and add the --replicas option to scale the service to five instances:

   $ docker service create --detach=true --name nginx2 --replicas=5 --publish 81:80 \
     --mount source=/etc/hostname,target=/usr/share/nginx/html/index.html,type=bind,ro \
     nginx:1.12
   aiqdh5n9fyacgvb2g82s412js
2. On node1, use the watch utility to watch the updates in the output of the docker service ps command.

   Tip: watch is a Linux utility and might not be available on other operating systems.

   $ watch -n 1 docker service ps nginx2

3. Click node3 and enter the command to leave the swarm cluster:

   $ docker swarm leave

Tip: This is the typical way to leave the swarm, but you can also kill the node and the
behavior will be the same.

4. Click node1 to watch the reconciliation in action. You should see the swarm attempt to get back to the declared state by automatically rescheduling the containers that were running on node3 onto node1 and node2.

6. Determine how many nodes you need

In this lab, your Docker Swarm cluster consists of one manager and two worker nodes. This configuration is not highly available. The manager node contains the necessary information
to manage the cluster, but if this node goes down, the cluster will cease to function. For a
production application, you should provision a cluster with multiple manager nodes to allow
for manager node failures.

You should have at least three manager nodes but typically no more than seven. Manager
nodes implement the raft consensus algorithm, which requires that more than 50% of the
nodes agree on the state that is being stored for the cluster. If you don't achieve more than
50% agreement, the swarm will cease to operate correctly. For this reason, note the following
guidance for node failure tolerance:

 Three manager nodes tolerate one node failure.
 Five manager nodes tolerate two node failures.
 Seven manager nodes tolerate three node failures.
It is possible to have an even number of manager nodes, but it adds no value in terms of the number of node failures tolerated. For example, four manager nodes tolerate only one node failure, the same as a three-manager cluster. However, the more manager nodes you have, the harder it is to achieve consensus on the state of the cluster.
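The failure-tolerance figures above follow directly from the majority requirement; a one-line sketch makes the arithmetic explicit:

```python
def tolerated_failures(managers: int) -> int:
    """Raft keeps working only while a majority (more than 50%) of
    the manager nodes are reachable, so n managers tolerate the
    loss of floor((n - 1) / 2) of them."""
    return (managers - 1) // 2

assert tolerated_failures(3) == 1
assert tolerated_failures(5) == 2
assert tolerated_failures(7) == 3
assert tolerated_failures(4) == 1  # an even count adds no extra tolerance
```

This is why manager counts of 3, 5, or 7 are the useful choices: each step of two buys one more tolerated failure.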

While you typically want to limit the number of manager nodes to no more than seven, you
can scale the number of worker nodes much higher than that. Worker nodes can scale up into
the thousands of nodes. Worker nodes communicate by using the gossip protocol, which is optimized to perform well under heavy traffic and with a large number of nodes.

If you are using Play-with-Docker, you can easily deploy clusters with multiple manager nodes by using the built-in templates. Click the Templates icon in the upper left to view the available templates.
