
ADM-019

Docker Essential
Dan Pomohaci

Introduction

1. Course Goal

2. Course Target

3. Prerequisites

4. Course Structure

Course Goal
This course covers the essential information about Docker:

- installation,
- basic usage,
- image creation,
- services and stacks,
- swarm mode.

Course Target
This course is addressed to anyone who wants to learn to use Docker (developers, admins, testers, DevOps engineers, etc.)

Prerequisites
- Basic knowledge of shell scripting
- Some knowledge of Linux
- Some knowledge of networking
- Knowledge of Git

Course Structure
- 4 days, 4 hours per day
- 4 parts:

  1. General Knowledge
  2. Creating Images
  3. Swarm, Services, and Stack
  4. Using Docker, Useful Scripts, Howtos, and Tips

- session structure: 50 minutes of course + a 10-minute break

Containers vs VMs

There are two types of virtualization tools:

- Virtual Machines: VMware Workstation and VirtualBox
- Containers: Docker

Virtual Machines

Abstraction of physical hardware. Each VM includes a full copy of an operating system.

Pros:

- Flexibility (different OS per VM)
- Full separation of users
- Security

Cons:

- Big hypervisor overhead
- Needs extra software
- Needs more resources
- Complex configuration

Containers

Containers are an abstraction at the app layer that packages code and dependencies together. Multiple containers can run on the same machine and share the OS kernel with other containers, each running as an isolated process in user space.

Pros/Cons

Pros:

- Real separation of users
- No overhead
- Users can create containers (unprivileged containers)

Cons:

- Not so flexible (shared kernel)
- Not as secure as one might expect
- By default containers run as root

Short history of Docker

- 2010: Solomon Hykes started the Docker project inside dotCloud, a PaaS company
- 2013: Docker goes open source
- 2013: Red Hat becomes a contributor
- 2014: version 0.9; Docker dropped LXC as the default container execution environment and introduced its own libcontainer
- 2014: Microsoft announced integration with Docker
- 2017: the Docker team announced Moby, a framework for assembling containers

Docker Dictionary

When you use Docker, you are creating and using images, containers, networks,
volumes, plugins, and other objects. This section is a brief overview of some of
those concepts.

Image
An image is a read-only template with instructions for creating a Docker container.
An image typically contains a union of layered filesystems stacked on top of each other.
An image does not have state and it never changes.
Often, an image is based on another image, with some additional customization.
An image is identified by:

- repository, in the form namespace/imagename (see also Registry);
- tag (optional), the image version. The special word latest specifies the most recent version; if the tag is missing, latest is used.

Container
A container is a standardized, encapsulated environment that runs applications.
A container is defined by its image as well as any configuration options you provide to it when you create or start it.
When a container is removed, any changes to its state that are not stored in persistent storage disappear.
A container has the following attributes:

- ID: unique ID of the container
- image: source image from which the container was created
- command: command executed at container launch
- created: creation date
- status: current status of the container
- ports: ports assigned to the container
- name: name set at start, or a randomly generated human-friendly name

Volume
A volume is a specially-designated directory within one or more containers that bypasses the Union File System.
Volumes are designed to persist data, independent of the container's life cycle.
Docker therefore never automatically deletes volumes when you remove a container.

Docker Engine
Docker Engine is the underlying client-server technology that builds and runs containers using Docker's components and services. It consists of:

- A server, which is a type of long-running program called a daemon process (the dockerd command).
- A REST API, which specifies interfaces that programs can use to talk to the daemon and instruct it what to do.
- A command line interface (CLI) client (the docker command).

Service
Services are really just containers in production.
A service only runs one image, but it codifies the way that image runs: what ports it should use, how many replicas of the container should run so the service has the capacity it needs, and so on.

Docker Compose
Compose is a tool for defining and running multi-container Docker applications.
With Compose, you use a YAML file to configure your application's services. Then, with a single command, you create and start all the services from your configuration.

Swarm
A swarm is a cluster of one or more Docker Engines running in swarm mode.

Node
A node is an instance of the Docker engine participating in the swarm.
You can run one or more nodes on a single physical computer or cloud server,
but production swarm deployments typically include Docker nodes distributed
across multiple physical and cloud machines.

Task
A task carries a Docker container and the commands to run inside the container.
It is the atomic scheduling unit of swarm.
Manager nodes assign tasks to worker nodes according to the number of
replicas set in the service scale. Once a task is assigned to a node, it cannot
move to another node. It can only run on the assigned node or fail.

Registry
The registry is a stateless, highly scalable server-side application that stores and lets you distribute Docker images.
Besides private registries, the most widely used registry is Docker Hub.

Exercise: Use Docker Hub
1. Create an account on: Docker Hub

2. Explore repositories on Docker Hub

3. Other repositories

Docker Installation

Docker is available in two editions:

ˆ Community Edition (CE)

ˆ Enterprise Edition (EE)

We will install Docker CE version.

Docker Community Edition

Docker Community Edition (CE) is ideal for individual developers and small teams looking to get started with Docker and experiment with container-based apps.

Docker Enterprise Edition

Docker Enterprise Edition (EE) is designed for enterprise development and IT teams who build, ship, and run business-critical applications in production at scale.

Hardware - Minimum requirements

A 64-bit architecture. There are no other hard requirements.
However, remember that less RAM means fewer containers can be run; the same goes for CPU.
For comfortable usage you need at least 2GB of RAM and 2 cores.
Still, it is possible to run Docker on IoT devices like the Raspberry Pi Zero (512MB RAM, 1GHz single-core CPU).

Linux
Docker is available in all major distributions: Arch Linux, Debian, Fedora, Ubuntu, Mint, etc.

CentOS
To install Docker Engine - Community, you need a maintained version of CentOS 7.
Archived versions aren't supported or tested.
The centos-extras repository must be enabled.

Uninstall old versions
Older versions of Docker were called docker or docker-engine. If these are installed, uninstall them, along with associated dependencies.

sudo yum remove docker \
    docker-client \
    docker-client-latest \
    docker-common \
    docker-latest \
    docker-latest-logrotate \
    docker-logrotate \
    docker-engine

Install required packages

yum-utils provides the yum-config-manager utility, and device-mapper-persistent-data and lvm2 are required by the devicemapper storage driver.

sudo yum install -y yum-utils \
    device-mapper-persistent-data \
    lvm2

Set up the stable repository


sudo yum-config-manager \
--add-repo \
https://download.docker.com/linux/centos/docker-ce.repo

Install Docker CE
sudo yum install docker-ce docker-ce-cli containerd.io

Verify that Docker CE is installed correctly by running the hello-world image:

sudo docker run hello-world

Fedora
To install Docker, you need the 64-bit version of one of these Fedora versions:

- 28
- 29

Uninstall old versions


Older versions of Docker were called docker or docker-engine. If these are installed, uninstall them, along with associated dependencies.

sudo dnf remove docker \
    docker-client \
    docker-client-latest \
    docker-common \
    docker-latest \
    docker-latest-logrotate \
    docker-logrotate \
    docker-selinux \
    docker-engine-selinux \
    docker-engine

Install required packages


Install the dnf-plugins-core package which provides the commands to manage
your DNF repositories from the command line.

sudo dnf -y install dnf-plugins-core

Set up the stable repository


sudo dnf config-manager \
--add-repo \
https://download.docker.com/linux/fedora/docker-ce.repo

Install Docker CE
sudo dnf install docker-ce docker-ce-cli containerd.io
Verify that Docker CE is installed correctly by running the hello-world image:

sudo docker run hello-world

Debian based distributions


To install Docker CE, you need the 64-bit version of one of these Debian/Ubuntu
versions:

- Buster 10
- Stretch 9 (stable)
- Cosmic 18.10
- Bionic 18.04 (LTS)
- Xenial 16.04 (LTS)

Docker CE is supported on x86_64 (or amd64), armhf, and arm64 architectures.

Uninstall old versions


Older versions of Docker were called docker, docker.io, or docker-engine. If these are installed, uninstall them:

sudo apt-get remove docker docker-engine docker.io containerd runc


It's OK if apt-get reports that none of these packages are installed.
The contents of /var/lib/docker/, including images, containers, volumes,
and networks, are preserved.

Install packages to allow apt to use a repository over HTTPS
sudo apt-get update
sudo apt-get install \
apt-transport-https \
ca-certificates \
curl \
gnupg2 \
software-properties-common

For Ubuntu, replace gnupg2 with gnupg-agent.

Add Docker's official GPG key

curl -fsSL https://download.docker.com/linux/debian/gpg | sudo apt-key add -

For Ubuntu, replace debian with ubuntu in the URL.


Verify that you now have the key:

sudo apt-key fingerprint 0EBFCD88

pub 4096R/0EBFCD88 2017-02-22


Key fingerprint = 9DC8 5822 9FC7 DD38 854A E2D8 8D81 803C 0EBF CD88
uid Docker Release (CE deb) <[email protected]>
sub 4096R/F273FCD8 2017-02-22

Set Docker Repository


sudo add-apt-repository \
"deb [arch=amd64] https://download.docker.com/linux/debian \
$(lsb_release -cs) \
stable"

For Ubuntu, replace debian with ubuntu in the URL.

Install Docker CE
sudo apt-get update
sudo apt-get install docker-ce docker-ce-cli \
containerd.io docker-compose

Verify that Docker CE is installed correctly by running the hello-world image:

sudo docker run hello-world

Arch Linux based distributions

The Arch Linux Docker installation procedure is simpler than the Debian one.

Enable the loop module


sudo tee /etc/modules-load.d/loop.conf <<< "loop"
sudo modprobe loop

Install docker
sudo pacman -S docker

Post-installation settings for Linux

These settings are optional, but they make working with Docker more convenient.

Manage Docker as a non-root user


sudo usermod -aG docker $USER
# or
sudo gpasswd -a $USER docker

Log out and log back in so that your group membership is re-evaluated.

Configure Docker to start on boot


Start Docker:

sudo systemctl start docker.service


Enable Docker:

sudo systemctl enable docker.service

Create custom scripts


I prefer to create custom local scripts to start/stop docker.
~/.local/bin/start-docker.sh:
#!/usr/bin/env bash

systemctl start docker.service


~/.local/bin/stop-docker.sh:
#!/usr/bin/env bash

systemctl stop docker.service

Windows
System Requirements
- Windows 10 64-bit: Pro, Enterprise or Education (Build 15063 or later).
- Virtualization enabled in the BIOS (for more detail see Virtualization must be enabled).
- A SLAT-capable CPU (see the Hyper-V list of SLAT-capable CPUs for more info).
- At least 4GB of RAM.

Installation
Download the installer from download.docker.com and run it.

Mac
System Requirements
- Mac hardware must be a 2010 or newer model, with Intel's hardware support for memory management unit (MMU) virtualization, including Extended Page Tables (EPT) and Unrestricted Mode. You can check whether your machine has this support by running the following command in a terminal:

  sysctl kern.hv_support

- macOS Sierra 10.12 and newer macOS releases are supported.
- At least 4GB of RAM.

Installation:
Download the installer from Docker Hub and run it.

Exercise: Install docker on your laptop


1. Check your platform specifications. For Linux:

lsb_release -a
uname -a

2. Use one of the previous sections depending on the platform type.

Docker Basics

The general syntax of docker operations is:

docker [OPTIONS] COMMAND


To list available commands, either run docker with no parameters or execute:

docker help
In this section we will explore the most commonly used commands.

Basic Operations on Images

In all image operations, an image is identified by NAME[:TAG].
The default value for TAG is latest.

List Images
docker images [OPTIONS] [REPOSITORY[:TAG]]

The default command will show all top level images, their repository and
tags, and their size:

REPOSITORY TAG IMAGE ID CREATED SIZE


debian stable-slim 87f6f3b892c0 8 weeks ago 55.3MB
gitlab/gitlab-ce <none> 17d5117a2e37 2 months ago 1.76GB
gitlab/gitlab-runner alpine e97317eab92c 3 months ago 116MB
gitlab/gitlab-ce 11.8.1-ce.0 2db70ee84b5c 4 months ago 1.62GB
gitlab/gitlab-ce latest 2db70ee84b5c 4 months ago 1.62GB
loomchild/volume-backup latest 4a39291c06e2 4 months ago 5.53MB
ubuntu latest 47b19964fb50 4 months ago 88.1MB
astefanutti/decktape 2.8.4 9ab66751310f 13 months ago 373MB
postgres 9.5.4 2417ea518abc 2 years ago 264MB

Get an Image
docker pull [OPTIONS] NAME[:TAG]

Download a particular image from a registry (default Docker Hub).


Example:

docker pull ubuntu


Result:

Using default tag: latest


latest: Pulling from library/ubuntu
5b7339215d1d: Pull complete
14ca88e9f672: Pull complete
a31c3b1caad4: Pull complete
b054a26005b7: Pull complete
Digest: sha256:9b1702dcfe32c873a770a32cfd306dd7fc1c4fd134adfb783db68defc8894b3c
Status: Downloaded newer image for ubuntu:latest

Run an Image
docker run [OPTIONS] IMAGE[:TAG] [COMMAND] [ARG...]

Run a command in a new isolated container.


The docker run command first creates a writable container layer over the specified image, and then starts it using the specified command.
Example:

docker run ubuntu echo "Hello World!"


Hello World!

Run an Image - main options

Options            Description
--detach, -d       run container in background and print container ID
--interactive, -i  keep STDIN open even if not attached
--publish, -p      publish a container's port(s) to the host
--tty, -t          allocate a pseudo-TTY
--rm               automatically remove the container when it exits
--env, -e          set an environment variable in the container
--name             define a name for the container

Run an Image - interactive mode


docker run -it --name test ubuntu
root@2b5a86d20bce:/# env
LS_COLORS=rs=0:di=01;34:ln=01;36:mh=00:pi=40;33:so=01;35:do=01;35:bd=40;33;01:cd=40;33;01:
HOSTNAME=2b5a86d20bce
PWD=/
HOME=/root
TERM=xterm
SHLVL=1
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
_=/usr/bin/env
root@2b5a86d20bce:/# exit 9
exit

The -it flags instruct Docker to allocate a pseudo-TTY connected to the container's stdin, creating an interactive bash shell in the container.
In the example, the bash shell is quit by entering exit 9.
This exit code is passed on to the caller of docker run, and is recorded in the container's metadata.
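The recorded exit code can then be read back from the container metadata, for example (assuming the container named test from above has exited):

```shell
# Read the recorded exit code of the stopped container "test"
docker inspect --format '{{.State.ExitCode}}' test
# prints: 9
```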

Basic Operations on Containers

All basic operations on containers need a container identifier (ID or NAME).

List Containers
docker ps [OPTIONS]

Without options it lists all running containers:

CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS


c06288d696cc postgres:9.5.4 "/docker-entrypoint...." 19 hours ago Up 19 hours 5432/t
73e4a12f6f22 zookeeper:latest "/run.py" 19 hours ago Up 19 hours 9000/tcp
66a1a96de143 kafka:0.8.2.2-8 "/run.py" 19 hours ago Up 19 hours 9092/tcp

With the -a option it shows all containers, including stopped ones.

Logs
docker logs [OPTIONS] CONTAINER

Fetch the logs of a container


Options:

- -f, --follow: follow log output
- --tail N: number of lines to show from the end of the logs (default all)
- --since TIME: show logs since a timestamp (e.g. 2019-07-10T12:40:37) or relative (e.g. 16m for 16 minutes)
- --until TIME: show logs before a timestamp (e.g. 2019-07-10T12:40:37) or relative (e.g. 16m for 16 minutes)
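A couple of typical invocations (web is a hypothetical container name):

```shell
# Follow the logs, starting from the last 100 lines
docker logs -f --tail 100 web

# Show only the logs produced in the last 30 minutes
docker logs --since 30m web
```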

Stop
docker stop CONTAINER [CONTAINER ...]

The main process inside the container will receive SIGTERM, and after a grace
period, SIGKILL.

Restart
docker restart [OPTIONS] CONTAINER [CONTAINER ...]

Restart one or more containers

Kill
docker kill [OPTIONS] CONTAINER [CONTAINER...]

Kill one or more running containers.

Exec
docker exec [OPTIONS] CONTAINER COMMAND [ARG...]

Run a command in a running container.

Exec - main options

Options            Description
--detach, -d       run command in the background
--interactive, -i  keep STDIN open even if not attached
--tty, -t          allocate a pseudo-TTY
--user, -u         username or UID (format: <name|uid>[:<group|gid>])
--env, -e          set environment variable in the container
--workdir, -w      working directory inside the container
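For instance (web is a hypothetical running container), the most common use is to open an interactive shell inside a running container:

```shell
# Open an interactive shell, as root, in the running container "web"
docker exec -it -u root web bash

# Run a one-off command in a specific working directory
docker exec -w /var/log web ls -l
```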

Recap and cheat sheet (1)
## List Docker CLI commands
docker
docker container --help

## Display Docker version and info


docker version
docker info

## Execute Docker image


docker run hello-world
docker run -p 4000:80 friendlyhello # Run "friendlyhello" mapping port 4000 to 80
docker run -d -p 4000:80 friendlyhello # Same thing, but in detached mode

## List Docker images


docker image ls
docker image rm <image id> # Remove specified image from this machine

Recap and cheat sheet (2)


## List Docker containers (running, all)
docker container ls
docker container ls -a
docker container stop <hash> # Gracefully stop the specified container
docker container kill <hash> # Force shutdown of the specified container
docker container rm <hash> # Remove specified container from this machine

Exercise: Play with basic docker commands


Admins and testers go to: First Alpine Linux Containers and do all the exercises.
Devs go to: Docker for Beginners - Linux and do only Task 0 and Task 1.
DevOps should do both assignments :)

Sharing Data

By default all files created inside a container are stored on a writable container layer.
This means that:

- The data doesn't persist when that container no longer exists, and it can be difficult to get the data out of the container if another process needs it.
- A container's writable layer is tightly coupled to the host machine where the container is running. You can't easily move the data somewhere else.

Docker has two options for containers to store files on the host machine, so that the files are persisted even after the container stops:

- bind mounts,
- volumes.

Bind Mounts
When you use a bind mount, a file or directory on the host machine is mounted into a container.
The file or directory is referenced by its full or relative path on the host machine.
The file or directory does not need to exist on the Docker host already; it is created on demand if it does not yet exist.

Good use cases for bind mounts

Bind mounts are appropriate for the following types of use case:

- Sharing configuration files from the host machine to containers.
- Sharing source code or build artifacts between a development environment on the Docker host and a container.
- When the file or directory structure of the Docker host is guaranteed to be consistent with the bind mounts the containers require.
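As a quick sketch of the first use case (the host path and image here are placeholders, not from the course):

```shell
# Bind-mount a host config directory into the container, read-only
docker run -d \
    --name web \
    -v /srv/nginx/conf:/etc/nginx/conf.d:ro \
    nginx:latest
```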

Volumes
Volumes are the preferred mechanism for persisting data generated by and used
by Docker containers.
Volumes are created and managed by Docker. You can create a volume
explicitly using the docker volume create command, or Docker can create a
volume during container or service creation.
When you create a volume, it is stored within a directory on the Docker
host. When you mount the volume into a container, this directory is what is
mounted into the container. This is similar to the way that bind mounts work,
except that volumes are managed by Docker and are isolated from the core
functionality of the host machine.
A given volume can be mounted into multiple containers simultaneously.
When no running container is using a volume, the volume is still available to
Docker and is not removed automatically.

Create a volume
docker volume create my-vol

List volumes
docker volume ls

Inspect a volume
docker volume inspect my-vol
[
    {
        "Driver": "local",
        "Labels": {},
        "Mountpoint": "/var/lib/docker/volumes/my-vol/_data",
        "Name": "my-vol",
        "Options": {},
        "Scope": "local"
    }
]

Remove a volume
docker volume rm my-vol

Good use cases for volumes

Volumes are the preferred way to persist data in Docker containers and services.
Some use cases for volumes include:

- Sharing data among multiple running containers. If you don't explicitly create it, a volume is created the first time it is mounted into a container. When that container stops or is removed, the volume still exists. Multiple containers can mount the same volume simultaneously, either read-write or read-only. Volumes are only removed when you explicitly remove them.
- When the Docker host is not guaranteed to have a given directory or file structure. Volumes help you decouple the configuration of the Docker host from the container runtime.
- When you want to store your container's data on a remote host or a cloud provider, rather than locally.
- When you need to back up, restore, or migrate data from one Docker host to another.

Flags
Originally, the -v or --volume flag was used for standalone containers and the --mount flag was used for swarm services.
However, starting with Docker 17.06, you can also use --mount with standalone containers, and it has become the recommended syntax.

volume flag
Consists of three fields, separated by colon characters (:). The fields must be in the correct order:

- In the case of bind mounts, the first field is the path to the file or directory on the host machine.
- The second field is the path where the file or directory is mounted in the container.
- The third field is optional, and is a comma-separated list of options, such as ro, consistent, delegated, cached, z, and Z.

Example:

docker run -d \
-it \
--name devtest \
-v "$(pwd)"/target:/app \
nginx:latest

mount flag
Consists of multiple key-value pairs, separated by commas, each consisting of a <key>=<value> tuple.
The --mount syntax is more verbose than --volume, but the order of the keys is not significant:

- type of the mount, which can be bind, volume, or tmpfs.
- source (or src) of the mount. For bind mounts, this is the path to the file or directory on the Docker daemon host.
- target (or dst) takes as its value the path where the file or directory is mounted in the container.
- readonly option, if present, causes the bind mount to be mounted into the container as read-only.

Example:

docker run -d \
-it \
--name devtest \
--mount type=bind,source="$(pwd)"/target,target=/app \
nginx:latest

Networking

Docker's networking subsystem is pluggable, using drivers. Several drivers exist by default and provide core networking functionality.

Network Commands
All network commands have the syntax:

docker network COMMAND


To see all available subcommands run:

docker network --help

Bridge
The default network driver. If you don't specify a driver, this is the type of
network you are creating.
Bridge networks are usually used when your applications run in standalone
containers that need to communicate.

Default Bridge
When you start Docker, a default bridge network (also called bridge) is created automatically, and newly-started containers connect to it unless otherwise specified.

docker network ls

If you do not specify a network using the --network flag, your container is connected to the default bridge network.
Containers connected to the default bridge network can communicate, but only by IP address.

User-defined Bridge
Command to create a user-defined bridge network:

docker network create my-net

Command to remove a user-defined bridge network. If containers are currently connected to the network, disconnect them first:

docker network rm my-net

When you create a new container, you can specify one or more --network flags.
This example connects an Nginx container to the my-net network. It also publishes port 80 in the container as port 8080 on the Docker host, so external clients can access that port. Any other container connected to the my-net network has access to all ports on the my-nginx container, and vice versa.

docker create --name my-nginx \
    --network my-net \
    --publish 8080:80 \
    nginx:latest

To connect a running container to an existing user-defined bridge, use the docker network connect command.
The following command connects an already-running my-nginx container to an already-existing my-net network:

docker network connect my-net my-nginx

To disconnect a running container from a user-defined bridge, use the docker network disconnect command.
The following command disconnects the my-nginx container from the my-net network:

docker network disconnect my-net my-nginx

Advantages of user-defined bridges
- User-defined bridges provide better isolation and interoperability between containerized applications.
- User-defined bridges provide automatic DNS resolution between containers.
- Containers can be attached to and detached from user-defined networks on the fly.
- Each user-defined network creates a configurable bridge.
- Linked containers on the default bridge network share environment variables.

Exercise: Docker networking


Go to Docker Networking Hand-on Lab and do Section #1 and #2

host
For standalone containers, the host driver removes network isolation between the container and the Docker host, and uses the host's networking directly.
For swarm services, the host network is available only on Docker 17.06 and higher.

Creating Images

Docker can build images automatically by reading the instructions from a Dock-
erle.
A Dockerle is a text document that contains all the commands a user
could call on the command line to assemble an image.
Using docker build users can create an automated build that executes
several command-line instructions in succession.

Build Image
docker build [OPTIONS] PATH

The docker build command builds Docker images from a Dockerfile and a context.
A build's context is the set of files located in the specified PATH.

Build Options

Options     Description
--file, -f  Name of the Dockerfile (default is 'PATH/Dockerfile')
--tag, -t   Name and optionally a tag in the 'name:tag' format
--rm        Remove intermediate containers after a successful build

Build Examples
Build the image using the current directory. A Dockerfile must exist in it:

docker build .

Build the image using the current directory and a specific Dockerfile:

docker build -f Dockerfile.test .

Build an image with specific tags using the current directory:

docker build -t testuser/test-image:2.1 -t testuser/test-image:latest .

Dockerfile
A Dockerfile is a plain text document with a simple syntax:

# Comment
INSTRUCTION arguments

Instructions are not case-sensitive. However, convention is for them to be UPPERCASE to distinguish them from arguments more easily.
Docker runs the instructions in a Dockerfile in order.
Docker treats lines that begin with # as comments. A # marker anywhere else in a line is treated as an argument.
Next we will go through the most common instructions.
For a complete list and more details see: Dockerfile reference

FROM
FROM <image>[:<tag>] [AS <name>]

The FROM instruction initializes a new build stage and sets the base image for subsequent instructions.
As such, a valid Dockerfile must start with a FROM instruction.

ENV
ENV <key>=<value>

The ENV instruction sets the environment variable <key> to the value
<value>.
This value will be in the environment for all subsequent instructions in the
build stage and can be replaced inline in many as well.
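A minimal sketch (the variable names and image are illustrative, not from the course):

```dockerfile
FROM ubuntu:latest
# Variables set with ENV are visible to all later instructions
ENV APP_HOME=/opt/app \
    APP_PORT=8080
# $APP_HOME is expanded here during the build
RUN mkdir -p $APP_HOME
```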

WORKDIR
WORKDIR /path/to/workdir

The WORKDIR instruction sets the working directory for any RUN, CMD, ENTRYPOINT, COPY and ADD instructions that follow it in the Dockerfile.
If the WORKDIR doesn't exist, it will be created even if it's not used in any subsequent Dockerfile instruction.
The WORKDIR instruction can be used multiple times in a Dockerfile.
If a relative path is provided, it will be relative to the path of the previous WORKDIR instruction.
The WORKDIR instruction can resolve environment variables previously set using ENV. You can only use environment variables explicitly set in the Dockerfile.
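The relative-path behaviour can be sketched like this (the base image is an assumption):

```dockerfile
FROM alpine:latest
WORKDIR /a
# Relative paths resolve against the previous WORKDIR
WORKDIR b
WORKDIR c
# The working directory here is /a/b/c
RUN pwd
```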

USER
USER <user>[:<group>]
USER <UID>[:<GID>]

The USER instruction sets the user name (or UID) and optionally the user group (or GID) to use when running the image and for any RUN, CMD and ENTRYPOINT instructions that follow it in the Dockerfile.
This instruction doesn't create the user; the user must already exist before the instruction is used.
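Since USER does not create accounts, a typical pattern is to create the user first (user name and image are illustrative):

```dockerfile
FROM debian:stable-slim
# Create the account before switching to it
RUN useradd --create-home appuser
USER appuser
WORKDIR /home/appuser
```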

RUN
# shell form
RUN <command>
# exec form
RUN ["executable", "param1", "param2"]
The RUN instruction will execute any commands in a new layer on top of the current image and commit the results. The resulting committed image will be used for the next step in the Dockerfile.
In the shell form you can use a \ (backslash) to continue a single RUN instruction onto the next line.
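For example, a common pattern is to chain commands in one RUN instruction so they are committed as a single layer (the package and base image are illustrative):

```dockerfile
FROM debian:stable-slim
# One RUN instruction continued over several lines = one layer
RUN apt-get update && \
    apt-get install -y --no-install-recommends curl && \
    rm -rf /var/lib/apt/lists/*
```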

COPY
COPY [--chown=<user>:<group>] <src> <dest>

The COPY instruction copies new files or directories from <src> and adds them to the filesystem of the container at the path <dest>.
There is also the ADD instruction, which does the same thing but also supports two other sources (URLs and archives). The Docker documentation recommends always using COPY because it is more explicit.

EXPOSE
EXPOSE <port> [<port>/<protocol>...]

The EXPOSE instruction informs Docker that the container listens on the specified network ports at runtime.
You can specify whether the port listens on TCP or UDP; the default is TCP if the protocol is not specified.
This instruction does not actually publish the port. It functions as a type of documentation between the person who builds the image and the person who runs the container, about which ports are intended to be published. To actually publish the port when running the container, use the -p flag on docker run.
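For example (my-image is a placeholder for an image whose Dockerfile contains EXPOSE 8080):

```shell
# Publish the exposed container port 8080 as host port 80
docker run -d -p 80:8080 my-image

# Or publish all EXPOSEd ports on random high host ports
docker run -d -P my-image
```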

LABEL
LABEL <key>=<value> <key>=<value> <key>=<value> ...

The LABEL instruction adds metadata to an image. To view an image's labels, use the docker inspect command.
Example:

LABEL version="1.0" \
      description="This image is used for ..."

ENTRYPOINT
# exec form, preferred
ENTRYPOINT ["executable", "param1", "param2"]
# shell form
ENTRYPOINT command param1 param2

An ENTRYPOINT allows you to configure a container that will run as an
executable.
Only the last ENTRYPOINT instruction in the Dockerfile will have an effect.

CMD
# exec form, this is the preferred form
CMD ["executable","param1","param2"]
# as default parameters to ENTRYPOINT
CMD ["param1","param2"]
# shell form
CMD command param1 param2

The main purpose of a CMD is to provide defaults for an executing container.
There can only be one CMD instruction in a Dockerfile. If you list more
than one CMD then only the last CMD will take effect.

Understand how CMD and ENTRYPOINT interact


Both CMD and ENTRYPOINT instructions define what command gets exe-
cuted when running a container. There are a few rules that describe how they
cooperate:

1. A Dockerfile should specify at least one of the CMD or ENTRYPOINT
instructions.

2. ENTRYPOINT should be defined when using the container as an
executable.

3. CMD will be overridden when running the container with alternative
arguments.

Some combinations:
A. If ENTRYPOINT is in shell form:

ENTRYPOINT cmd1 p1

CMD is ignored and docker executes:

/bin/sh -c 'cmd1 p1'

B. If ENTRYPOINT is in exec form:

ENTRYPOINT ["cmd1", "p1"]

• if there is no CMD, docker runs:

cmd1 p1

• if CMD is in exec form ["p2", "p3"], it runs:

cmd1 p1 p2 p3

• if CMD is in shell form cmd2 p3, it runs:

cmd1 p1 /bin/sh -c 'cmd2 p3'

C. If there is no ENTRYPOINT, CMD alone is executed, depending on its
form.
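The most common pattern is case B with a CMD supplying default arguments; a minimal sketch (the commands are illustrative):

```dockerfile
FROM alpine:3.19
# The fixed executable; always runs.
ENTRYPOINT ["ping", "-c", "3"]
# Default argument; replaced by any arguments passed to docker run.
CMD ["localhost"]
```

With this image, docker run <image> pings localhost, while docker run <image> 8.8.8.8 overrides only the CMD part.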

When Should You Use ENTRYPOINT?


The choice you make is largely an artistic one, and it will depend significantly
on your use case. My experience, though, is that ENTRYPOINT suits almost
every case I've encountered. Consider the following use cases:

• Wrappers

Some images contain a so-called wrapper that decorates a legacy program
or otherwise prepares it for use in a containerized environment.

For example, suppose your service was written to read its configuration
from a file instead of from environment variables. In such a situation,
you might include a wrapper script that generates the app's config file
from the environment variables, then launches the app by calling exec
/path/to/app at the very end.

Declaring an ENTRYPOINT that points to the wrapper is a great way to
ensure the wrapper is always run, no matter what arguments are passed
to docker run.
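A minimal sketch of such a wrapper; the variable names (APP_GREETING, APP_PORT) and the config path are hypothetical, not part of any real image:

```shell
#!/usr/bin/env sh
# Hypothetical wrapper: build the app's config file from environment
# variables, then hand control to the real program.
CONFIG_FILE="${CONFIG_FILE:-/tmp/app.conf}"
{
  echo "greeting=${APP_GREETING:-hello}"
  echo "port=${APP_PORT:-8080}"
} > "$CONFIG_FILE"
# exec replaces the shell, so the app becomes the container's main
# process (PID 1) and receives signals directly.
exec "$@"
```

With ENTRYPOINT ["/wrapper.sh"] and the real program as CMD, the wrapper always runs first while docker run arguments replace only the program it launches.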

• Single-Purpose Images

If your image is built to do only one thing (for example, run a web server),
use ENTRYPOINT to specify the path to the server binary and any
mandatory arguments.

A textbook example of this is the nginx image, whose sole purpose is


to run the nginx web server. This lends itself to a pleasant and natural
command line invocation:

docker run nginx

Then you can append program arguments naturally on the command line,
such as:

docker run nginx -c /test.conf

just like you would if you were running nginx without Docker.

• Multi-Mode Images

It's also a common pattern for images that support multiple modes to
use the first argument to docker run <image> to specify a verb that maps
to the mode, such as shell, migrate, or debug.

For such use cases I recommend setting ENTRYPOINT to point to a
script that parses the verb argument and does the right thing based on
its value, such as:

ENTRYPOINT ["/bin/parse_container_args"]

Best practices for writing Dockerfiles


A Docker image consists of read-only layers, each of which represents a Dockerfile
instruction. The layers are stacked and each one is a delta of the changes from
the previous layer.

Create ephemeral containers


The container can be stopped and destroyed, then rebuilt and replaced with an
absolute minimum of setup and configuration.

Understand build context


Regardless of where the Dockerfile actually lives, all recursive contents of files
and directories in the current directory are sent to the Docker daemon as the
build context. Inadvertently including files that are not necessary for building an
image results in a larger build context and larger image size. Use a .dockerignore
file to exclude such files.

Leverage build cache


When building an image, Docker steps through the instructions in your Dock-
erfile, executing each in the order specified. As each instruction is examined,
Docker looks for an existing image in its cache that it can reuse, rather than
creating a new (duplicate) image.
If your build contains several layers, you can order them from the less fre-
quently changed (to ensure the build cache is reusable) to the more frequently
changed:

• Install tools you need to build your application,

• Install or update library dependencies,

• Generate your application.
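The ordering above can be sketched in a Dockerfile; here for a hypothetical Node.js application (file names are illustrative):

```dockerfile
FROM node:20-alpine
WORKDIR /app
# 1. Dependency manifests change rarely: copy them first, so the
#    expensive install layer stays cached across source edits.
COPY package.json package-lock.json ./
RUN npm ci
# 2. Application sources change often: copy them last.
COPY . .
RUN npm run build
```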

Use multi-stage builds
Individual study:
https://docs.docker.com/develop/develop-images/multistage-build/
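A minimal multi-stage sketch, assuming a Go program in the build context: the toolchain stays in the first stage and only the compiled binary reaches the final image:

```dockerfile
# Stage 1: build with the full toolchain
FROM golang:1.22 AS builder
WORKDIR /src
COPY . .
RUN CGO_ENABLED=0 go build -o /out/app .

# Stage 2: ship only the binary in a small base image
FROM alpine:3.19
COPY --from=builder /out/app /usr/local/bin/app
ENTRYPOINT ["/usr/local/bin/app"]
```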

Don't install unnecessary packages


To reduce complexity, dependencies, file sizes, and build times, avoid installing
extra or unnecessary packages just because they might be nice to have.
For example, you don't need to include a text editor in a database image.

Decouple applications
Each container should have only one concern.
Decoupling applications into multiple containers makes it easier to scale
horizontally and reuse containers.
For instance, a web application stack might consist of three separate contain-
ers, each with its own unique image, to manage the web application, database,
and an in-memory cache in a decoupled manner.

Sort multi-line arguments


Whenever possible, ease later changes by sorting multi-line arguments
alphanumerically.
This helps to avoid duplication of packages and makes the list much easier to
update.
This also makes PRs a lot easier to read and review.
Adding a space before a backslash (\) helps as well.
Here's an example from the buildpack-deps image:

RUN apt-get update && apt-get install -y \


bzr \
cvs \
git \
mercurial \
subversion

Publishing Images
By publishing the image you save it in a safe place and make it available
for future use.

Push
docker push [OPTIONS] NAME[:TAG]

Push an image or a repository to a registry

Exercise 1: Push an image to Docker Hub
1. Login on Docker Hub,

2. Create a repository:

• name it test,

• select Private,

• click the Create button.

3. Open the terminal and sign in to Docker Hub on your computer by running
docker login
4. Create an image:

cat > Dockerfile <<EOF


FROM busybox
CMD echo "Hello world! This is my first Docker image."
EOF

5. Build your Docker image:

docker build -t <your_username>/test .

6. Test your Docker image locally:

docker run <your_username>/test

7. Push the image:

docker push <your_username>/test

Exercise 2: Push an image to Docker Hub


Go to: Docker for Beginners - Linux and do Task 1 and Task 2.

Recap and cheat sheet


docker build -t friendlyhello . # Create image using this directory's Dockerfile
docker login # Log in this CLI session using your Docker credentials
docker tag <image> username/repository:tag # Tag <image> for upload to registry
docker push username/repository:tag # Upload tagged image to registry
docker run username/repository:tag # Run image from a registry

Docker Compose

Compose is a tool for defining and running multi-container Docker applications.
With a single command, you create and start all the services from your
configuration.
Compose works in all environments: production, staging, development,
testing, as well as CI workflows.

Command General Syntax
docker-compose [OPTIONS] [COMMAND] [ARGS]

Options

Options Description
--file -f Specify the compose file (default: docker-compose.yml)
--project-name -p Project name (default: directory name)
--project-directory Working directory (default: the path of the compose file)

Commands
• build : build or rebuild services

• config : validate and view the compose file

• down : stop containers and remove containers, networks, volumes, and
images created by up

• exec : run arbitrary commands in services (interactive by default)

• logs : display log output from services

• ps : list containers

• stop/start : stop/start a service

• up : create and start containers

Compose file
The Compose file is a YAML file defining services, networks and volumes.
The default path for a Compose file is ./docker-compose.yml.
A service definition contains configuration that is applied to each container
started for that service, much like passing command-line parameters to docker
container create. Likewise, network and volume definitions are analogous to
docker network create and docker volume create.
Next we will go through the most common options.
For a complete list and more details see: Compose file reference.

Structure of a compose file
version: "3.7"
services:
<service_name1>:
<options>
...
volumes:
<volume_name1>:
<options>
...
networks:

<network_name1>:
<options>
...
configs:
<config_name1>:
<options>
secrets:
<secrets_name1>:
<options>

Service Option: build


Configuration options that are applied at build time.

version: "3.7"
services:
webapp:
build:
context: ./dir
dockerfile: Dockerfile-alternate

Service Option: image


Specify the image to start the container from.

version: "3.7"
services:
web:
image: ubuntu:14.04
db:
image: postgres:9.5.4

Service Option: networks


Networks to join, referencing entries under the top-level networks key.

version: "3.7"

services:
app:
image: nginx:alpine
networks:
app_net:
ipv4_address: 172.16.238.10
...

Service Option: ports


Expose ports.

version: "3.7"
services:

web:
image: ubuntu:14.04
ports:
- "80:8080"
- "90:8090"
db:
image: postgres:9.5.4
ports:
- "5432:5432"

Service Option: volumes


Mount host paths or named volumes, specified as sub-options to a service.

version: "3.7"
services:
web:
image: nginx:alpine
volumes:
- type: volume
source: mydata
target: /data
db:
image: postgres:9.5.4
volumes:
- "/var/run/postgres/postgres.sock:/var/run/postgres/postgres.sock"
- "dbdata:/var/lib/postgresql/data"

volumes:
mydata:
dbdata:

Volumes Section
This section allows you to create named volumes that can be reused across mul-
tiple services, and are easily retrieved and inspected using the docker command
line or API.

version: "3.7"

services:
db:
image: postgres
volumes:
- data:/var/lib/postgresql/data

volumes:
data:
external: true

Network Section
The top-level networks key lets you specify networks to be created.

version: "3.7"

services:
app:
image: nginx:alpine
networks:
app_net:
ipv4_address: 172.16.238.10

networks:
app_net:
ipam:
driver: default
config:
- subnet: "172.16.238.0/24"

Recap and cheat sheet


docker-compose build # Build services from the current compose file
docker-compose up -d # Start the containers in the background and leave them running
docker-compose ps # List running services
docker-compose exec web sh # Open an sh shell in the web service
docker-compose stop web # Stop the web service
docker-compose down # Stop and remove containers, networks, volumes, and images created by up

Swarm

A swarm consists of multiple Docker hosts which run in swarm mode and act
as managers (to manage membership and delegation) and workers (which run
swarm services).
One of the key advantages of swarm services over standalone containers
is that you can modify a service's configuration, including the networks and
volumes it is connected to, without the need to manually restart the service.
Docker will update the configuration, stop the service tasks with the out-of-date
configuration, and create new ones matching the desired configuration.

Swarm mode
Current versions of Docker include swarm mode for natively managing a clus-
ter of Docker Engines called a swarm.
We will use the Docker CLI to create a swarm, deploy application services
to a swarm, and manage swarm behavior.

Init
docker swarm init [OPTIONS]

Initialize a swarm.
It generates two random tokens, a worker token and a manager token. When
you join a new node to the swarm, the node joins as a worker or manager node
based upon the token you pass to swarm join.

Join-token
docker swarm join-token [OPTIONS] (worker|manager)

It displays one of the tokens generated at initialisation. The token is
necessary when you add new nodes with the docker swarm join command.

Join
docker swarm join [OPTIONS] HOST:PORT

Join a node to a swarm.


The node joins as a manager node or worker node based upon the token you
pass with the --token flag. If you pass a manager token, the node joins as a
manager. If you pass a worker token, the node joins as a worker.

Leave
docker swarm leave [--force]

When you run this command on a worker, that worker leaves the swarm.
You can use the --force option on a manager to remove it from the swarm.
However, this does not reconfigure the swarm to ensure that there are enough
managers to maintain a quorum in the swarm.
The safe way to remove a manager from a swarm is to demote it to a worker
and then direct it to leave the quorum without using --force.
Only use --force in situations where the swarm will no longer be used after
the manager leaves, such as in a single-node swarm.
https://docs.docker.com/engine/reference/commandline/swarm_join/

Exercise: Deploying an app to a Swarm


https://github.com/docker/labs/blob/master/beginner/chapters/votingapp.
md

Recap and cheat sheet


docker swarm init --advertise-addr <ip> # Set up the first manager
docker swarm init --force-new-cluster --advertise-addr <ip> # Force manager on broken cluster

docker swarm join-token worker # Get token to join workers


docker swarm join-token manager # Get token to join new manager
docker swarm join --token <token> <server> # Join host as a worker or manager, per token

docker swarm leave


docker swarm unlock # Unlock a manager host after docker

# daemon restart when autolock is on
docker swarm unlock-key # Print key needed for 'unlock'

docker node ls # Print swarm node list


docker node rm <node id>
docker node inspect --pretty <node id>

docker node promote <node id> # Promote node to manager


docker node demote <node id>

Using Docker, Useful Scripts, Howtos, and Tips

Cleaning your docker platform


Remove all containers:

docker container rm $(docker container ls -a -q)


Remove all images from this machine:

docker image rm $(docker image ls -a -q)

Tips for reducing images


https://medium.com/@gdiener/how-to-build-a-smaller-docker-image-76779e18d48a

Custom User
https://medium.com/faun/set-current-host-user-for-docker-container-4e521cef9ffc

# Note: UID and GID are read-only shell variables in bash, so use
# lower-case names instead of exporting UID/GID.
uid=$(id -u)
gid=$(id -g)
docker run -it \
  --user "$uid:$gid" \
  --workdir="/home/$USER" \
  --volume="/etc/group:/etc/group:ro" \
  --volume="/etc/passwd:/etc/passwd:ro" \
  --volume="/etc/shadow:/etc/shadow:ro" \
  <IMAGE NAME> /bin/bash

Save/Restore Docker Volumes


Docker volumes are platform-local, but with the following scripts they can be
transported from one site to another.

backup-vol
#!/usr/bin/env bash

script_name=$(basename "$0")
usage() {
>&2 echo "Usage: $script_name volume_name"

exit 1
}
if [ $# -ne 1 ]; then
usage
fi
set -e

volume=$1
today="$(date '+%Y%m%d%H%M')"
bin_dir=$(dirname "$(readlink -f "$0")")
project_dir=$(dirname "$bin_dir")
bkp_dir=$project_dir/backups
bkp_file="$volume-$today.tar.bz2"

cd "$bkp_dir"
echo "backup $volume"
docker run -v "$volume":/volume --rm loomchild/volume-backup backup - > "$bkp_file"
echo "$bkp_file" > "$volume.last"
cd "$project_dir"

restore-vol
#!/usr/bin/env bash

script_name=$(basename "$0")
usage() {
>&2 echo "Usage: $script_name volume_name backup_tar"
exit 1
}
if [ $# -ne 2 ]; then
usage
fi

volume=$1
bkp_file=$2
echo "restore $bkp_file in $volume"
cat "$bkp_file" | docker run -i -v "$volume":/volume --rm loomchild/volume-backup restore -

Global Resources

1. Docker Documentation

2. Awesome-docker

3. A collection of how-to guides for establishing your own container-based
self-hosting platform
