14. Docker
1. Applications Era
In today's world we are surrounded by apps and websites. We use our smartphones and computers to browse the internet and consume web services through mobile apps or browsers. All of this web-based data comes from computers located far away in some datacenter. We generally call them servers; they are the physical machines we see racked up in a datacenter with all those flashing lights and cables.
If we take examples like Amazon, Google, Netflix or Goibibo, all of these businesses run on applications; we could even say their applications are their business. This makes a very important point: we cannot separate their business from their applications.
An application needs compute resources to run, and those come from the server where the application is hosted. In the old days, before virtualization and cloud computing, we used to run applications directly on physical servers.
So, if I wanted to host an application on 10 web servers, I needed ten physical servers behind a load balancer serving the web traffic.
These servers are very expensive and need a lot of maintenance:
➢ We need to procure a server, a process where we place an order for the purchase.
➢ There is capital expenditure (CapEx) required.
➢ There is operational expenditure (OpEx): cooling, power, and admins to maintain the server farm.
We deploy one application per server because we want our applications to be isolated. For example, if we need a web app, a DB app and a few backend apps, we may end up with multiple physical systems, each running a single instance of one app.
So every time we need to run a new app, we buy a server, install an OS and set up the app on it. And most of the time nobody knew the performance requirements of the new application! This meant IT had to guess when choosing the model and size of servers to buy.
As a result, IT did the only reasonable thing: it bought big, fast servers with lots of resiliency. After all, the last thing anyone wanted, including the business, was under-powered servers.
Most of the time these physical servers are under-utilized, running at as little as 5-10% of their potential capacity. A tragic waste of company capital and resources.
2. Virtualization Revolution.
VMware gave the world the virtual machine, and everything changed after that. Now we could run multiple applications, isolated in separate OS instances, on the same physical server.
In the virtualization chapter, we discussed the benefits and features of virtualization and the hypervisor architecture.
We also know that every VM has its own OS, which is a problem. An OS needs a fair amount of resources: CPU, memory, storage and so on. We also have to maintain OS licences and nurse each OS regularly with patching, upgrades and config changes. We only wanted to host an application, but we have collected a good amount of fat on our infrastructure; we are wasting OpEx and CapEx here.
Think about shipping a VM from one place to another. It sounds like a great idea: if we could bundle everything into a VM image and ship it, the other person would not need to set up the VM from scratch and could run it directly from the image. We did exactly that in the Vagrant chapter, where we downloaded a preinstalled VM and just ran it. But these images are heavy and bulky because they contain the OS along with the app, and booting them is slow. So even though a VM is portable, it is not convenient to ship one every time.
What if we could ship an application bundled with all its dependencies and libraries in an image, without the OS? That sounds like it would solve a big problem, and that is exactly what containers are.
3. Containers
If virtual machines are hardware virtualization, then containers are OS virtualization. We don't need a full OS inside the container to install our application. Applications inside containers depend on the kernel of the host OS they are running on. So if I host a Java application inside a container, it will use the Java libraries and config files from the container's data, but for compute resources it relies on the host OS kernel.
Containers are very lightweight because they contain just the libraries and the application. That means less compute resource is used, which in turn means more free capacity to run more containers. So in terms of resources we are also saving CapEx and OpEx.
Containers are not a new technology; they have been around for a long time in different forms. But Docker has taken them to a whole new level when it comes to building, shipping and managing containers.
4. Docker
Docker, Inc. started its life as a platform as a service (PaaS) provider called dotCloud. Behind the scenes, the dotCloud platform leveraged Linux containers, and to help create and manage those containers they built an in-house tool that eventually became Docker.
When most people talk about Docker, they are generally referring to the Docker Engine. The Docker Engine runs and orchestrates containers. For now we can think of the Docker Engine like a hypervisor: in the same way that hypervisor technology runs virtual machines, the Docker Engine is the core container runtime that runs containers.
There are many Docker technologies that integrate with the Docker Engine to automate, orchestrate or manage Docker containers.
Install Docker:
sudo apt-get install docker-ce -y
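The docker-ce package comes from Docker's own apt repository, not from the default Ubuntu repositories. If apt cannot find it, the repository has to be added first; a minimal sketch for Ubuntu (the exact steps can differ between releases):
$ sudo apt-get update
$ sudo apt-get install -y apt-transport-https ca-certificates curl software-properties-common
$ curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
$ sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"
$ sudo apt-get update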
Docker should now be installed, the daemon started, and the process enabled to start on boot. Check
that it's running:
sudo systemctl status docker
docker --version
Let's quickly get a taste of the Docker Engine before we dive deep into it. We will look at two building blocks:
➢ Docker images
➢ Docker containers
Images
For now you can think of images as Vagrant boxes. They are quite different from VM images, but the analogy will feel right initially: Vagrant boxes are the stopped state of a VM, and images are the stopped state of containers.
$ docker images
This command lists the images downloaded onto your machine, so you won't see anything in the output yet. We need to download some images; in the Docker world we call this pulling an image.
So where does Docker pull the image from? Again, the same analogy as Vagrant boxes applies: we download Vagrant boxes from Vagrant Cloud, and Docker images are downloaded from Docker registries. The most famous Docker registry is Docker Hub, but there are other registries as well, from Google, Red Hat and others.
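Let's pull the ubuntu image with the latest tag; the end of the pull output includes a digest line like the one shown below:
$ docker pull ubuntu:latest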
Digest: sha256:ea1d854d38be82f54d39efe2c67000bed1b03348bcc2f3dc094f260855dff368
Run the docker images command again to see the ubuntu:latest image you just pulled.
$ docker images
Containers
Now that we have an image pulled locally on our Docker host, we can use the docker run command
to launch a container from it.
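The examples below assume a command along these lines (the container ID in the prompt will be different on your machine):
$ docker run -it ubuntu:latest /bin/bash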
root@b8765d3a67a9:/#
Look closely at the output from the command above. You should notice that your shell prompt has changed. This is because your shell is now attached to the shell of the new container: you are literally inside of the new container!
➢ The -it flags tell the daemon to make the container interactive and to attach our current shell
to the shell of the container.
➢ Next, the command tells Docker that we want the container to be based on the ubuntu:latest
image.
Run the following ps command from inside of the container to list all running processes.
root@b8765d3a67a9:/# ps -elf
F S UID PID PPID C PRI NI ADDR SZ WCHAN STIME TTY TIME CMD
As you can see from the output of the ps command, there are only two processes running inside of
the container:
➢ PID 1. This is the /bin/bash process that we told the container to run with the docker run command.
➢ PID 10. This is the ps -elf process that we ran to list the running processes.
Press Ctrl-PQ to exit the container. This will land you back in the shell of your Docker host; you can verify this by looking at your shell prompt.
Exiting with Ctrl-PQ detaches you from the container without killing it. You can see all the running containers on your system using the docker ps command.
imran@DevOps:~$ docker ps
The output above shows a single running container. This is the container that you created earlier.
The presence of your container in this output
proves that it’s still running. You can also see that it was created 6 minutes ago and has been
running for 6 minutes.
You can attach your shell to running containers with the docker exec command. As the container from the previous steps is still running, let's connect back to it.
Note: The example below references a container called “inspiring_heyrovsky”. The name of your
container will be different, so remember to substitute “inspiring_heyrovsky” with the name or ID of
the container running on your Docker host.
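A sketch of the command, using the example name from the note above:
$ docker exec -it inspiring_heyrovsky bash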
root@b8765d3a67a9:/#
Notice that your shell prompt has changed again. You are back inside the container.
In our example, we used the -it options to attach our shell to the container’s shell. We referenced the
container by name and told it to run the bash shell.
Run the docker ps command again to verify that your container is still running.
imran@DevOps:~$ docker ps
Stop the container and then remove it using the docker stop and docker rm commands.
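First stop it (substituting your own container name):
$ docker stop inspiring_heyrovsky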
inspiring_heyrovsky
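Then remove it:
$ docker rm inspiring_heyrovsky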
inspiring_heyrovsky
Verify that the container was successfully deleted by running another docker ps command.
imran@DevOps:~$ docker ps
imran@DevOps:~$
Now you have had a taste of Docker images and containers: we pulled an image, ran a container, then stopped and removed it. In the next sections we will dig into images in more detail, and then into containers.
5. Images
We have seen the very basics of Docker images; now we will do a deep dive into them.
Images are built and distributed like software. As we saw in the Continuous Integration chapter, there should be a build-and-release process for software, and we must do the same if we are releasing images.
We mentioned earlier that images are the stopped state of containers, so you can stop a container and create a new image from it.
The images we ship should be lightweight and should contain only the files and libraries required to run the application inside them. For example, if we are shipping a Java application, the image should contain only the Java libraries, an application server such as Tomcat, and the files needed to run our app, and nothing extra.
Pulling Images.
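As an example, let's pull the node image with the latest tag; this is the image whose digest appears below and whose details we will inspect later in this section:
$ docker pull node:latest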
Digest: sha256:1d496e5c8e692dfabeb1cc8a18f01e2b501111f32c3d08e94e5402daeceb94e6
$ docker images
Image registries
Docker images are stored in image registries. The most common image registry is Docker Hub.
Other registries exist including 3rd party registries and secure on-premises registries, but Docker
Hub is the default, and it’s the one we’ll use in this tutorial.
https://hub.docker.com/
Image registries contain multiple image repositories, and image repositories contain images. That might be a bit confusing at first: picture an image registry containing several repositories, with each repository containing a few images.
Docker Hub has the concept of official and unofficial repositories. Official repositories contain images that have been vetted by Docker, Inc.; unofficial repositories can be uploaded by anyone and are not verified by Docker, Inc.
The list below contains a few of the official repositories and shows their URLs, which exist at the top level of the Docker Hub namespace:
• nginx - https://hub.docker.com/_/nginx/
• busybox - https://hub.docker.com/_/busybox/
• redis - https://hub.docker.com/_/redis/
• mongo - https://hub.docker.com/_/mongo/
Our personal images live in the unofficial repositories. Below are some examples of images in my
repositories:
visualpath/myjsonsinatra - https://hub.docker.com/r/visualpath/myjsonsinatra/
visualpath/devops-docker-ci - https://hub.docker.com/r/visualpath/devops-docker-ci/
Image Tags.
While pulling an image we give imagename:TAG, and Docker will by default reach out to the Docker Hub registry and find the image with the TAG we specified.
A tag generally refers to the version of the image in the repository.
If we are looking for some other version, like 1.12.0, we can use a command like the one shown below.
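A sketch of such a pull (using nginx, the image used in the next example; 1.12.0 is just an illustrative version tag):
$ docker pull nginx:1.12.0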
If we do not specify any tag, the default tag is latest. The latest tag does not mean that the image is the latest version; it is just the name of a tag, and that's all.
docker pull nginx
The above command will download the nginx image with the latest tag.
All Docker images are made up of one or more read-only layers, and we have already seen one way to observe them. Let's take a second look at the output of the docker pull node:latest command from earlier:
Digest: sha256:1d496e5c8e692dfabeb1cc8a18f01e2b501111f32c3d08e94e5402daeceb94e6
Each line in the output above that ends with “Pull complete” represents a layer in the image that
was pulled. As we can see, this image has 5 layers.
Another way to see the layers that make up an image is to inspect the image with the docker inspect command. The example below inspects the same node:latest image we pulled above.
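The command is simply:
$ docker inspect node:latest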
"Id": "sha256:f3068bc71556e181c774ee7dadc4d3ebbf5643e95680a202779f08146332547d",
"RepoTags": [
"node:latest"
],
"RepoDigests": [
"node@sha256:1d496e5c8e692dfabeb1cc8a18f01e2b501111f32c3d08e94e5402daeceb94e6"
],
"Parent": "",
"Comment": "",
"Created": "2017-06-15T17:26:33.424702587Z",
"Container": "8c6ffc2b7a445e6f2ade22c6be3a430c772e0ab61bf0ee4fff69b36a24e9123e",
"ContainerConfig": {
"Hostname": "5c84359661e5",
"Domainname": "",
"User": "",
"AttachStdin": false,
"AttachStdout": false,
"Tty": false,
"OpenStdin": false,
"StdinOnce": false,
"Env": [
"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
"NPM_CONFIG_LOGLEVEL=info",
"NODE_VERSION=8.1.2",
"YARN_VERSION=0.24.6"
],
"Cmd": [
"/bin/sh",
"-c",
"#(nop) ",
"CMD [\"node\"]"
],
"ArgsEscaped": true,
"Image": "sha256:3864d82628abf07bbdebe3d1d529aa90eef89f8dc99d06dea2a35329879c81a7",
"Volumes": null,
"WorkingDir": "",
"Entrypoint": null,
"OnBuild": [],
"Labels": {}
},
"DockerVersion": "17.03.1-ce",
"Author": "",
"Config": {
"Hostname": "5c84359661e5",
"Domainname": "",
"User": "",
"AttachStdin": false,
"AttachStdout": false,
"AttachStderr": false,
"Tty": false,
"OpenStdin": false,
"StdinOnce": false,
"Env": [
"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
"NPM_CONFIG_LOGLEVEL=info",
"NODE_VERSION=8.1.2",
"YARN_VERSION=0.24.6"
"Cmd": [
"node"
],
"ArgsEscaped": true,
"Image": "sha256:3864d82628abf07bbdebe3d1d529aa90eef89f8dc99d06dea2a35329879c81a7",
"Volumes": null,
"WorkingDir": "",
"Entrypoint": null,
"OnBuild": [],
"Labels": {}
},
"Architecture": "amd64",
"Os": "linux",
"Size": 666628404,
"VirtualSize": 666628404,
"GraphDriver": {
"Data": null,
"Name": "aufs"
},
"RootFS": {
"Type": "layers",
"Layers": [
"sha256:007ab444b234be691d8dafd51839b7713324a261b16e2487bf4a3f989ded912d",
"sha256:4902b007e6a712835de8e09c385c0f061638323c3cacc13f7190676f05dad9d7",
"sha256:bb07d0c1008de4fd468f865764e6f1129ba53f4bfe6ab14dd5eb3ab256947ab0",
"sha256:ecf5c2e2468e7fe6600193972ffc659214050d8829f44e5194a22997de13aab4",
"sha256:7b3b4fef39c1df95f7a015716bc980dad38fff92ebba6de82d5add10b1258523",
"sha256:677f02386f077da16bfbe00af5305928545c11c40dccf70f93fa332f617c1fba",
"sha256:99c62c5bb4f21ab5b288339ce1a29e33cb3a82670ec1a9254730b6925d7da7dc",
"sha256:0dd4309d61fe600b230931ce669a93ee0baf9b2c18f574749c3af1e3eed44b83"
Deleting Images
When you no longer need an image, you can delete it from your Docker host with the docker rmi command; rmi is short for remove image.
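For example, to remove the node image pulled earlier:
$ docker rmi node:latest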
6. Containers
A container is the runtime instance of an image, much like starting a VM from a Vagrant box. We can start multiple containers from one single image.
We create a container from an image with the docker run command. A container runs until the process inside it exits. There must be at least one process running inside the container with PID 1; if this process dies, the container dies with it.
root@ae141c80e436:/# ps -ef
In the container above, /bin/bash has PID 1; this process will be killed if we run the exit command.
root@ae141c80e436:/# exit
exit
imran@DevOps:~$ docker ps
Because exit logs out and kills the current shell, which is our PID 1, the container is killed along with it.
ae141c80e436
ae141c80e436
ae141c80e436
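The long ID below came from starting a new container in detached mode with its ports published. A sketch of such a command, inferred from the ports and the /var/jenkins_home path discussed below (the jenkins image name is an assumption):
$ docker run -d -p 8080:8080 -p 50000:50000 jenkins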
c695f224adf6a91f971018c6934b255e5cb86d6d8f6a4445d24fa7d7f1f70967
Here we are mapping two ports, 8080 and 50000. The host and container ports are the same, which is fine as long as those host ports are not already in use.
Container data is not persistent: if we delete a container, its data is lost as well, which is to be expected. But if we want to keep that data safe on the host machine, we can use the -v flag, which is for volumes. The left-hand side is the host machine directory path and the right-hand side is the container directory path that you want to persist on the host machine. It is similar to our Vagrant synced directories.
Now even if we delete the container, its data in /var/jenkins_home will be safe on the host machine in the /your/home directory.
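Putting this together for the Jenkins example, a sketch of such a run command (the paths are the ones from the paragraph above):
$ docker run -d -p 8080:8080 -p 50000:50000 -v /your/home:/var/jenkins_home jenkins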
Inspecting Containers
In the previous example you might have noticed that we didn't specify a command for the container when we issued docker run, yet the container ran a simple web service. How did this happen?
When building a Docker image, it is possible to embed a default command or process that containers using the image should run. If we run a docker inspect command against the container we started from that image, we can see the command/process that it runs when it starts.
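A sketch of the command, using the ID returned by docker run earlier:
$ docker inspect c695f224adf6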
"Id": "c695f224adf6a91f971018c6934b255e5cb86d6d8f6a4445d24fa7d7f1f70967",
"Created": "2017-06-16T22:13:41.841190294Z",
"Path": "/bin/sh",
"Args": [
"/usr/sbin/apache2ctl -D FOREGROUND"
],
"State": {
"Status": "running",
"Running": true,
"Paused": false,
"Restarting": false,
"OOMKilled": false,
"Dead": false,
"Pid": 17026,
"ExitCode": 0,
"Error": "",
"StartedAt": "2017-06-16T22:13:42.656496753Z",
"FinishedAt": "0001-01-01T00:00:00Z"
A Dockerfile defines what goes on in the environment inside your container. Access to resources like networking interfaces and disk drives is virtualized inside this environment, which is isolated from the rest of your system, so you have to map ports to the outside world and be specific about what files you want to copy in to that environment. However, after doing that, you can expect that the build of your app defined in this Dockerfile will behave exactly the same wherever it runs.
Dockerfile
Create an empty directory and put this file in it, with the name Dockerfile. Take note of the comments
that explain each statement.
# Use an official Python runtime as a base image
FROM python:2.7-slim
WORKDIR /app
ADD . /app
RUN pip install -r requirements.txt
EXPOSE 80
ENV NAME World
CMD ["python", "app.py"]
This Dockerfile refers to a couple of things we haven’t created yet, namely app.py and
requirements.txt. Let’s get those in place next.
The app itself
Grab these two files and place them in the same folder as the Dockerfile. This completes our app, which as you can see is quite simple. When the above Dockerfile is built into an image, app.py and requirements.txt will be present because of that Dockerfile's ADD command, and the output from app.py will be accessible over HTTP thanks to the EXPOSE command.
requirements.txt
Flask
Redis
app.py
from flask import Flask
from redis import Redis, RedisError
import os
import socket

# Connect to Redis
redis = Redis(host="redis", db=0, socket_connect_timeout=2, socket_timeout=2)

app = Flask(__name__)

@app.route("/")
def hello():
    try:
        visits = redis.incr("counter")
    except RedisError:
        visits = "<i>cannot connect to Redis, counter disabled</i>"

    html = "<h3>Hello {name}!</h3>" \
           "<b>Hostname:</b> {hostname}<br/>" \
           "<b>Visits:</b> {visits}"
    return html.format(name=os.getenv("NAME", "world"),
                       hostname=socket.gethostname(), visits=visits)

if __name__ == "__main__":
    app.run(host='0.0.0.0', port=80)
Now we see that pip install -r requirements.txt installs the Flask and Redis libraries for Python, and the app
prints the environment variable NAME, as well as the output of a call to socket.gethostname(). Finally,
because Redis isn’t running (as we’ve only installed the Python library, and not Redis itself), we should
expect that the attempt to use it here will fail and produce the error message.
Note: Accessing the name of the host when inside a container retrieves the container ID,
which is like the process ID for a running executable.
That’s it! You don’t need Python or anything in requirements.txt on your system, nor will building or
running this image install them on your system. It doesn’t seem like you’ve really set up an environment
with Python and Flask, but you have.
$ ls
Now run the build command. This creates a Docker image, which we’re going to tag using -t so it has a
friendly name.
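A sketch of the build command (the image name friendlyhello is just an example; pick any name you like):
$ docker build -t friendlyhello .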
Where is your built image? It’s in your machine’s local Docker image registry:
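A quick check with the familiar command:
$ docker images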
Run the app, mapping your machine’s port 4000 to the container’s EXPOSEd port 80 using -p:
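A sketch of the run command (reusing the example image name from the build step):
$ docker run -p 4000:80 friendlyhello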
You should see a notice that Python is serving your app at http://0.0.0.0:80. But that message is coming
from inside the container, which doesn’t know you mapped port 80 of that container to 4000, making
the correct URL http://localhost:4000.
Go to that URL in a web browser to see the display content served up on a web page, including
“Hello World” text, the container ID, and the Redis error message.
You can also use the curl command in a shell to view the same content.
$ curl http://localhost:4000
Note: This port remapping of 4000:80 is to demonstrate the difference between what you EXPOSE within the Dockerfile and what you publish using docker run -p. In later steps, we'll just map port 80 on the host to port 80 in the container and use http://localhost.
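Now let's run the app in the background, in detached mode, which is what the next paragraph describes (again using the example image name):
$ docker run -d -p 4000:80 friendlyhello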
You get the long container ID for your app and then are kicked back to your terminal. Your container is
running in the background. You can also see the abbreviated container ID with docker ps (and both work
interchangeably when running commands):
$ docker ps
Now use docker stop to end the process, using the CONTAINER ID, like so:
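$ docker stop <CONTAINER ID>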
To demonstrate the portability of what we just created, let’s upload our built image and run it
somewhere else. After all, you’ll need to learn how to push to registries when you want to deploy
containers to production.
Now, put it all together to tag the image. Run docker tag image with your username, repository, and tag
names so that the image will upload to your desired destination. The syntax of the command is:
docker tag image username/repository:tag
For example:
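(Here yourhubusername and the repository and tag names are placeholders; substitute your own Docker Hub account and whatever names you prefer.)
$ docker tag friendlyhello yourhubusername/friendlyhello:v1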
Run docker images to see your newly tagged image. (You can also use docker image ls.)
$ docker images
...
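Now upload the tagged image to the repository (after logging in with docker login, and with the same placeholder names as before):
$ docker push yourhubusername/friendlyhello:v1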
Once complete, the results of this upload are publicly available. If you log in to Docker Hub, you will
see the new image there, with its pull command.
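From now on, you can run the image on any machine that has Docker, pulling it straight from the registry (placeholder names again):
$ docker run -p 4000:80 yourhubusername/friendlyhello:v1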
If the image isn’t available locally on the machine, Docker will pull it from the repository.
Digest: sha256:0601c866aab2adcc6498200efd0f754037e909e5fd42069adeff72d1e2439068
Note: If you don’t specify the :tag portion of these commands, the tag of :latest will be
assumed, both when you build and when you run images. Docker will use the last version of
the image that ran without a tag specified (not necessarily the most recent image).
No matter where docker run executes, it pulls your image, along with Python and all the dependencies
from requirements.txt, and runs your code. It all travels together in a neat little package, and the host
machine doesn’t have to install anything but Docker to run it.
Dockerfile Instructions
We have seen in the previous section that a Dockerfile is used to build Docker images. It contains the list of instructions that Docker reads to build an image. There are around a dozen instructions that we can use in a Dockerfile:
1.ADD
2.CMD
3.ENTRYPOINT
4.ENV
5.EXPOSE
6.FROM
7.MAINTAINER
8.RUN
9.USER
10.VOLUME
11.WORKDIR
12.ONBUILD
FROM
This instruction is used to set the base image for subsequent instructions. It is mandatory, and it must appear on the first line of a Dockerfile (though you can use it more than once in the same file).
Example:
FROM ubuntu:latest
MAINTAINER
This is a non-executable instruction used to indicate the author of the Dockerfile.
Example:
MAINTAINER <name>
RUN
This instruction lets you execute a command on top of an existing layer and create a new layer with
the results of command execution.
For example, if there is a pre-condition to install PHP before running an application, you can run the appropriate commands to install PHP on top of a base image (say Ubuntu), like this:
FROM ubuntu
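The install step itself is not shown in the text; a minimal sketch of what it could look like (the package name is an assumption and varies by Ubuntu release):
RUN apt-get update && apt-get install -y php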
CMD
The major difference between CMD and RUN is that CMD doesn't execute anything during the build; it just specifies the intended command for the image, whereas RUN executes its command at build time.
Note: there can be only one CMD instruction in a Dockerfile, if you add more, only the last one takes
effect.
Example:
CMD "echo" "Hello World!"
EXPOSE
While running your service in the container you may want your container to listen on specified
ports. The EXPOSE instruction helps you do this.
Example:
EXPOSE 6456
ENV
This instruction can be used to set the environment variables in the container.
Example:
ENV var_home="/var/etc"
ADD
This instruction is similar to the COPY instruction, with a few added features like remote URL support in the source field and local-only tar extraction. But if you don't need these extra features, it is suggested to use COPY as it is more readable.
Example:
ADD http://www.site.com/downloads/sample.tar.xz /usr/src
ENTRYPOINT
You can use this instruction to set the primary command for the image.
For example, if you have installed only one application in your image and want it to run whenever
the image is executed, ENTRYPOINT is the instruction for you.
Note: arguments are optional, and you can pass them at runtime with something like docker run <image-name> <arguments>.
Also, all the elements specified using CMD will be overridden, except the arguments; those will be passed to the command specified in ENTRYPOINT.
Example:
CMD "Hello World!"
ENTRYPOINT echo
VOLUME
You can use the VOLUME instruction to enable access to a location on the host system from a
container. Just pass the path of the location to be accessed.
Example:
VOLUME /data
USER
This is used to set the UID (or username) to use when running the image.
Example:
USER daemon
WORKDIR
This is used to set the currently active directory for other instructions such as RUN, CMD,
ENTRYPOINT, COPY and ADD.
Note that if a relative path is provided, it is taken as relative to the path set by the previous WORKDIR instruction.
Example:
WORKDIR home
RUN pwd
ONBUILD
This instruction adds a trigger instruction to be executed when the image is used as the base for
some other image. It behaves as if a RUN instruction is inserted immediately after the FROM
instruction of the downstream Dockerfile. This is typically helpful in cases where you need a static
base image with a dynamic config value that changes whenever a new image must be built (on top
of the base image).
Example:
ONBUILD RUN rm -rf /usr/temp
Docker Hub has a Dockerfile for every official image hosted there.
The screenshot above is from the nginx official repository on Docker Hub. You can see that there are links to the Dockerfile for every version of the image; the links point to Dockerfiles hosted on GitHub.
############################################################
# Based on Ubuntu
############################################################
FROM ubuntu
# Ref: http://docs.mongodb.org/manual/tutorial/install-mongodb-on-ubuntu/
EXPOSE 27017
ENTRYPOINT /usr/bin/mongod
Run the Docker Hub image nginx, which contains a basic web server:
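A sketch of the command: -d runs the container in the background, and -P publishes all of the image's exposed ports on random high ports of the host, which matches the 32769/32768 mappings described below:
$ docker run -d -P nginx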
b53c99839ccf662b17ff96424352701cc494b788e4d97525d930406b0cfdb237
imran@DevOps:~$ docker ps
The web server is running on ports 80 and 443 inside the container. Those ports are mapped to ports 32769 and 32768 on our Docker host. We will explain the whys and hows of this port mapping.
But first, let's make sure that everything works properly.
Make sure to use the right port number if it is different from the example below:
$ curl localhost:32769
<head>
<title>Welcome to nginx!</title>
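The commands that started these additional servers are not reproduced in the text; a sketch of what they would look like, with the ports taken from the description below:
$ docker run -d -p 80:80 nginx
$ docker run -d -p 8000:80 nginx
$ docker run -d -p 8080:80 -p 8888:80 nginx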
We are now running three NGINX web servers. The first one is exposed on port 80, the second one on port 8000, and the third one on ports 8080 and 8888. Note: the convention is port-on-host:port-on-container.
There are many ways to integrate containers into your network:
➢ Start the container, letting Docker allocate a public port for it; then retrieve that port number and feed it to your configuration.
➢ Pick a fixed port number in advance, when you generate your configuration; then start your container by setting the port numbers manually.
➢ Use a network plugin, connecting your containers with e.g. VLANs, tunnels, etc.
➢ Enable Swarm Mode to deploy across a cluster; the container will then be reachable through any node of the cluster.
We can use the docker inspect command to find the IP address of the container.
docker inspect is an advanced command, that can retrieve a ton of information about our
containers. Here, we provide it with a format string to extract exactly the private IP address of the
container.
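A sketch of such a command (substitute your own container name or ID):
$ docker inspect --format '{{ .NetworkSettings.IPAddress }}' <containerID>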
We can test connectivity to the container using the IP address we've just discovered.
Let's see this now by using the ping tool.
$ ping <ipAddress>
A container can be attached to the network in one of several modes (selected with the --net option of docker run):
➢ bridge (default)
➢ none
➢ host
➢ container
By default, the container gets a virtual eth0 interface. (In addition to its own private lo loopback
interface.)
Outbound traffic goes through an iptables MASQUERADE rule. Inbound traffic goes through an
iptables DNAT rule. The container can have its own routes, iptables rules, etc.
The Container Network Model (CNM) adds the notion of a network, and a new top-level command to manipulate and inspect those networks: docker network.
What's in a network?
Conceptually, a network is like a virtual switch: containers attached to the same network can reach each other, while containers on different networks are kept isolated.
Creating a network
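The create command itself is not shown; it would be along these lines (dev is the network name used in the examples that follow):
$ docker network create dev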
27c06defdb9d99002e670935d8fbcd24ef7f24eb3fd1c22b18f383005722a3c4
$ docker network ls
We will create a named container on this network. It will be reachable with its name, search.
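A sketch of such a command; the elasticsearch image is an assumption based on the ElasticSearch output shown later in this section:
$ docker run -d --name search --net dev elasticsearch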
Digest: sha256:0b94d1d1b5eb130dd0253374552445b39470653fb1a1ec2d81490948876e462c
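Next we start a throwaway interactive container on the same network to test name resolution; a sketch (alpine is the image used for similar tests later in the text):
$ docker run -ti --net dev alpine sh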
/ #
From this new container, we can resolve and ping the other one, using its assigned name:
/ # ping search
In Docker Engine 1.9, name resolution is implemented with /etc/hosts, which is updated each time containers are added or removed.
$ docker ps -l
If we connect to the application now, we should see that the app is working correctly:
When the app tries to resolve redis, instead of getting a DNS error, it gets the IP address
of our Redis container.
$ docker rm -f redis
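The next step is not shown in the text, but for the app to keep working a replacement Redis container has to be started with the same name (the dev network here is an assumption based on the earlier examples):
$ docker run -d --name redis --net dev redis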
Check that the app still works (but the counter is back to 1, since we wiped out the old Redis
container).
Let's try to ping our search container from another container, when that other container is not on
the dev network.
$ docker run --rm alpine ping search
ping: bad address 'search'
Names can be resolved only when containers are on the same network. Containers can contact
each other only when they are on the same network (you can try to ping using the IP address to
verify).
Network aliases
We would like to have another network, prod, with its own search container. But there can be
only one container named search! We will use network aliases.
A container can have multiple network aliases. Network aliases are local to a given network
(only exist in this network). Multiple containers can have the same network alias (even on the
same network). In Docker Engine 1.11, resolving a network alias yields the IP addresses of all
containers holding this alias.
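The commands themselves are not reproduced in the text; a sketch of what they might look like, with the prod network and the search alias described above (the elasticsearch image and the es1/es2 container names are assumptions). The two long IDs below would be the IDs of the two containers:
$ docker network create prod
$ docker run -d --name es1 --net prod --net-alias search elasticsearch
$ docker run -d --name es2 --net prod --net-alias search elasticsearch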
5a41562fecf2d8f115bedc16865f7336232a04268bdf2bd816aecca01b68d50c
38079d21caf0c5533a391700d9e9e920724e89200083df73211081c8a356d771
Let's try DNS resolution first, using the nslookup tool that ships with the alpine image.
$ docker run --net prod --rm alpine nslookup search
Name: search
Each ElasticSearch instance has a name (generated when it is started). This name can be seen
when we issue a simple HTTP request on the ElasticSearch API endpoint.
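A sketch of such a request from a container on the dev network (port 9200 is ElasticSearch's HTTP API port, and wget is used because it ships with the alpine image):
$ docker run --net dev --rm alpine wget -qO- http://search:9200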
"name" : "Tarot",
...
Then try it a few times by replacing --net dev with --net prod:
...
Docker will not create network names and aliases on the default bridge network.
Therefore, if you want to use those features, you have to create a custom network first.
Network aliases are not unique: you can give multiple containers the same alias on the same
network.
In Engine 1.10: one container will be selected and only its IP address will be returned when
resolving the network alias.
In Engine 1.11: when resolving the network alias, the DNS reply includes the IP addresses of all
containers with this network alias. This allows crude load balancing across multiple containers
(but is not a substitute for a real load balancer).
In Engine 1.12: enabling Swarm Mode gives access to clustering features, including an advanced
load balancer using Linux IPVS.
Creation of networks and network aliases is generally automated with tools like Compose (covered in a few chapters).
Never again:
• "Works on my machine"
• "Not the same version"
• "Missing dependency"
Check the port number with docker ps and open the application.
Where's my code?
According to the Dockerfile, the code is copied into /src :
FROM ruby
COPY . /src
WORKDIR /src
EXPOSE 9292
We want to make changes inside the container without rebuilding it each time.
For that, we will use a volume.
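The run command itself is not reproduced in the text; from the flags described below it would look something like this:
$ docker run -d -v $(pwd):/src -p 80:9292 jpetazzo/namer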
➢ The -d flag indicates that the container should run in detached mode (in the background).
➢ The -v flag provides volume mounting inside containers.
➢ The -p flag maps port 9292 inside the container to port 80 on the host.
➢ jpetazzo/namer is the name of the image we will run.
➢ We don't need to give a command to run because the Dockerfile already specifies rackup.
The -v flag mounts a directory from your host into your Docker container. The flag structure is:
[host-path]:[container-path]:[rw|ro]
http://<yourHostIP>:80
Our customer really doesn't like the color of our text. Let's change it.
$ vi company_name_generator.rb
And change the color, for example from:
color: royalblue;
to another color of your choice (say, color: red;). Then reload the application in your browser and the new color shows up immediately:
http://<yourHostIP>:80
You can also start the container with the following command:
$ docker-compose up -d
provided there is a docker-compose.yml file describing the same volume and port mappings, for example:
volumes:
- .:/src
ports:
- 80:9292
docker exec allows users to run a new process in a container which is already running. If you sometimes find yourself wishing you could SSH into a container, you can use docker exec instead. You can get a shell prompt inside an existing container this way, or run an arbitrary process for automation.
$ # You can run ruby commands in the area the app is running and more!
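A sketch of the exec command itself (substitute the ID of the container running the app):
$ docker exec -ti <yourContainerID> bash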
root@5ca27cf74c2e:/opt/namer# irb
irb(main):002:0> exit
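When you are done, stop the container (same ID as before):
$ docker stop <yourContainerID>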
And remove it
$ docker rm <yourContainerID>
Dockerfiles are great to build a single container. But when you want to start a complex stack made of
multiple containers, you need a different tool. This tool is Docker Compose.
In this lesson, you will use Compose to bootstrap a development environment.
Docker Compose (formerly known as fig) is an external tool. It is optional (you do not need Compose to
run Docker and containers) but we recommend it highly! The general idea of Compose is to enable a very
simple, powerful onboarding workflow:
Compose overview
You describe a set (or stack) of containers in a YAML file called docker-compose.yml.
You run docker-compose up.
Compose automatically pulls images, builds containers, and starts them.
Compose can set up links, volumes, and other Docker options for you.
Compose can run the containers in the background, or in the foreground.
When containers are running in the foreground, their aggregated output is shown.
If you are using the official training virtual machines, Compose has been pre-installed.
You can always check that it is installed by running:
$ docker-compose --version
Installing Compose
If you want to install Compose on your machine, there are (at least) two methods.
Compose is written in Python. If you have pip and use it to manage other Python packages, you can
install compose with:
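$ pip install docker-compose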
(Note: if you are familiar with virtualenv, you can also use it to install Compose.)
Otherwise, you can download a pre-built binary and make it executable:
$ curl -L https://github.com/docker/compose/releases/download/1.8.0/docker-compose-`uname -s`-`uname -m` > /usr/local/bin/docker-compose
$ chmod +x /usr/local/bin/docker-compose
First step: clone the source code for the app we will be working on.
$ cd
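The clone command itself is not shown; it would be something like this (the repository URL is a placeholder, not the real location):
$ git clone https://<your-git-server>/trainingwheels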
...
$ cd trainingwheels
$ docker-compose up
Watch Compose build and run your app with the correct parameters, including linking the relevant
containers together.
version: "2"
services:
www:
build: www
ports:
- 8000:5000
user: nobody
environment:
DEBUG: 1
volumes:
- ./www:/src
redis:
image: redis
This file uses version 2 of the Compose file format. Version 1 directly has the various containers (www, redis...) at the top level of the file.
Containers in docker-compose.yml
Each service in the YAML file must contain either build or image.
Container parameters
The other keys (ports, user, environment, volumes...) correspond to the options we have been passing to docker run.
Compose commands
We already saw docker-compose up, but another useful command is docker-compose build. It will execute docker build for all services that specify a build path. It is common to execute the build and run steps in sequence:
$ docker-compose build
$ docker-compose up -d
It can be tedious to check the status of your containers with docker ps, especially when running multiple
apps at the same time.
Compose makes it easier; with docker-compose ps you will see only the status of the containers of the
current stack:
----------------------------------------------------------------------------------------
Cleaning up
If you have started your application in the background with Compose and want to stop it easily, you can
use the kill command:
$ docker-compose kill
$ docker-compose rm
Removing trainingwheels_redis_1...
Removing trainingwheels_www_1...
$ docker-compose down
Stopping trainingwheels_redis_1
... done
Compose is smart: if your containers use volumes, then when you restart your application, Compose will create new containers but carefully re-use the volumes they were using previously.
This makes it easy to upgrade a stateful service: pull the new image and just restart your stack with Compose.