Docker Training


Docker Workshop (Part 1)

Basic Docker Components


Instructor Intro

• John Willis
• Director of Ecosystem Development
• @botchagalupe
[email protected]
• 35 Years in IT Operations
• Exxon, Canonical, Chef, Enstratius, Socketplane
• Devopsdays Core Organizer
• Devopscafe on iTunes

2
Session logistics and pre-requisites
• 6 hours including questions
• Short break every hour
• Lunch break at noon
• Ask questions at any time
Prerequisites
• No Docker experience required
• Be familiar with Linux command line

3
First Half of Day
• Introduction to containers
• Installing Docker
• Docker concepts and terms
• Introduction to images
• Running and managing containers
• Building images
• Managing and distributing images
• Container volumes
• Container networking

4
Second Half of Day
• Docker Machine
• Docker Swarm
• Docker Compose

5
Module 1: 

Introduction to Containers
Module objectives
In this module we will:
• Introduce the concept of container based virtualization
• Outline the benefits of containers over virtual machines

7
What is Docker?
Docker is a platform for developing, shipping and running
applications using container technology

• The Docker Platform consists of multiple products/tools


– Docker Engine
– Docker Hub
– Docker Trusted Registry
– Docker Machine
– Docker Swarm
– Docker Compose
– Kitematic
– Docker Toolbox (new)
8
A History Lesson
In the Dark Ages

One application on one physical server

9
Historical limitations of application deployment
• Slow deployment times
• Huge costs
• Wasted resources
• Difficult to scale
• Difficult to migrate
• Vendor lock in

10
A History Lesson
Hypervisor-based Virtualization

• One physical server can contain multiple applications


• Each application runs in a virtual machine (VM)

11
Benefits of VMs
• Better resource pooling
– One physical machine divided into multiple virtual machines
• Easier to scale
• VMs in the cloud
– Rapid elasticity
– Pay as you go model

12
Limitations of VMs
• Each VM still requires
– CPU allocation
– Storage
– RAM
– An entire guest operating system
• The more VMs you run, the more resources you need
• Guest OS means wasted resources
• Application portability not guaranteed

13
Introducing Containers
Containerization uses the kernel on the host operating system
to run multiple root file systems

• Each root file system is called a container


• Each container also has its own
– Processes
– Memory
– Devices
– Network stack

14
Containers

15
Containers vs VMs
• Containers are more lightweight
• No need to install guest OS
• Less CPU, RAM, storage space required
• More containers per machine than VMs
• Speed of instantiation
• Greater portability

16
Why Docker?
• Isolation
• Lightweight
• Simplicity
• Workflow
• Community

17
The shipping container
• Multiple types of goods
– Do I worry about how goods interact? (i.e. place coffee beans next to spices)
• Multiple methods of transportation
– Can I transport quickly and smoothly? (i.e. unload from ship onto train)

18
Docker containers
• Multiple development stacks
– Do services and apps interact appropriately?
• Multiple hardware environments
– Can I migrate quickly and smoothly?

19
Benefits of Docker
• Separation of concerns
– Developers focus on building their apps
– System admins focus on deployment
• Fast development cycle
• Application portability
– Build in one environment, ship to another
• Scalability
– Easily spin up new containers if needed
• Run more apps on one host machine

20
Module 2:

Docker Concepts and Terms

21
Module objectives
In this module we will:
• Explain all the main Docker concepts including
– Docker Engine
– Docker client
– Containers
– Images
– Registry and Repositories
– Docker Hub
– Orchestration
– Boot2Docker

22
Docker and the Linux Kernel
• Docker Engine is the program that enables containers to be distributed and run
• Docker Engine uses Linux Kernel namespaces and control groups
• Namespaces give us the isolated workspace

23
Docker Client and Daemon
• Client / Server architecture
• Client takes user inputs and sends them to the daemon
• Daemon runs and distributes containers
• Client and daemon can run on the same host or on different hosts
• CLI client and GUI (Kitematic)

24
Checking Client and Daemon Version
• Run docker version
• Output shows both the Client and the Daemon (server) versions

25
Boot2Docker
• Docker daemon does not run natively on Windows and OSX
• Boot2Docker is a lightweight Linux VM designed to run the Docker
daemon
• VM is run in VirtualBox
• During installation, Boot2Docker will also install VirtualBox and the
Docker client onto your machine
• For Windows, there is also an option to install msysgit, which is an
alternative terminal to the Windows CMD terminal

26
Docker Containers and Images
• Images
– Read only template used to create containers (binaries)
– Built by you or other Docker users
– Stored in Docker Hub, Docker Trusted Registry or your own registry
• Containers
– Isolated application platform
– Contains everything needed to run your application
– Based on one or more images
27
Registry and Repository
• A registry stores repositories; each repository stores tagged images

28
Docker Hub
Docker Hub is the public registry that contains a large number of images
available for your use

29
Docker Orchestration
• Three tools for orchestrating distributed applications with Docker
• Docker Machine
– Tool that provisions Docker hosts and installs the Docker Engine on them
• Docker Swarm
– Tool that clusters many Engines and schedules containers
• Docker Compose
– Tool to create and manage multi-container applications
• Covered in the afternoon

30
Module 3: 

Installing Docker
Module objectives
In this module we will:
• Outline the ways to install Docker on various Linux operating systems
• Install Docker on our Amazon AWS Ubuntu instance
• Explain how to run Docker without requiring “sudo”
• Install Boot2Docker on our PC or Mac

32
Installation options overview
• Docker can be installed in a variety of ways depending on your platform
• Packaged distributions available for most Linux platforms
– Ubuntu, CentOS, Fedora, Arch Linux, RHEL, Debian, Gentoo, openSUSE
• Binary download
• Installation script from Docker

33
Installation script
• Available at https://get.docker.com
• The script will install Docker and all necessary dependencies
• Grab the script with curl and pipe the output to sh

curl -sSL https://get.docker.com/ | sh

OR

wget -qO- https://get.docker.com/ | sh
• Works for
– Ubuntu
– Fedora
– Gentoo
– Debian


34
EX3.1 – Install Docker on Ubuntu
1. Install Docker by running the following command

wget -qO- https://get.docker.com/ | sh
2. Run the hello-world container to test your installation

sudo docker run hello-world

35
Verify the installation
• Run a simple hello-world container

sudo docker run hello-world
• Run sudo docker version to check the version

36
The docker group
• To run Docker commands without requiring sudo, add your user account
to the docker group

sudo usermod -aG docker <user>
– Also allows for tabbed completion of commands
• Logout and log back in for the changes to take effect
• Note: The docker group might not have been created on certain
distributions. If this is the case, create the group first:

sudo groupadd docker
• Note: users in the docker group essentially have root access. Be aware
of the security implications

37
EX3.2 – Docker group
1. Add your user account to the docker group

sudo usermod -aG docker <user>
2. Logout of your terminal and log back in for the changes to take effect
3. Verify that you can run the hello-world container without using sudo

docker run hello-world

38
Install on Windows and Mac
• Download and install Boot2Docker from 

https://github.com/boot2docker/windows-installer/releases OR

https://github.com/boot2docker/osx-installer/releases

39
EX3.3 – Install Boot2Docker
1. Install Boot2Docker on your PC or Mac
2. Verify the installation by running the hello-world container

docker run hello-world
Note: Students using their own Linux machine will not need to perform this exercise
40
Module 4:

Introduction to Images
Module objectives
In this module we will:
• Learn to search for images on Docker Hub and with the Docker client
• Explain what official repositories are
• Create a Docker Hub account
• Explain the concept of image tags
• Learn to pull images from Docker Hub

42
Search for Images on Docker Hub
• Lots of Images available for use
• Images reside in various Repositories

43
Search for images using Docker client
• Run the docker search command
• Results displayed in table form

44
Official repositories
• Official repositories are a certified and curated set of Docker repositories
that are promoted on Docker Hub
• Repositories come from vendors such as NGINX, Ubuntu, Red Hat,
Redis, etc…
• Images are supported by their maintainers, optimised and up to date
• Official repository images are a mixture of
– Base images for Linux operating systems (Ubuntu, CentOS etc…)
– Images for popular development tools, programming languages, web and
application servers, data stores

45
Identifying an official repository
• There are a few ways to tell if a repository is official
– Marked on the OFFICIAL column in the terminal output
– Prefixed with "library" on autocomplete search results in Docker Hub
– Repository has the Docker logo on the Docker Hub search results

46
EX4.1 - Create a Docker Hub Account
1. Go to https://hub.docker.com/account/signup/ and sign up for an account if you do not already have one.

No credit card details are needed
2. Find your confirmation email and activate your account
3. Browse some of the repositories
4. Search for some images of your favorite dev tools, languages, servers
etc…
a) (examples: Java, Perl, Maven, Tomcat, NGINX, Apache)

47
Display Local Images
• Run docker images
• When creating a container Docker will attempt to use a local image first
• If no local image is found, the Docker daemon will look in Docker Hub
unless another registry is specified

48
Image Tags
• Images are specified by repository:tag
• The same image may have multiple tags
• The default tag is latest
• Look up the repository on Docker Hub to see what tags are available
Pulling images
• To download an image from Docker Hub or any registry, use the docker pull command
• When running a container with the docker run command, images are
automatically pulled if no local copy is found

Pull the latest image from the Ubuntu repository on Docker Hub
docker pull ubuntu

Pull the image with tag 12.04 from the Ubuntu repository
docker pull ubuntu:12.04

50
EX4.2 – Display local images
1. Pull a busybox image from Docker Hub

docker pull busybox
2. Display your local images and verify that the image is present

51
Module 5: 

Running and Managing Containers

52
Module objectives
In this module we will
• Learn how to launch containers with the docker run command
• Explore different methods of accessing a container
• Explain how container processes work
• Learn how to stop and start containers
• Learn how to check container logs
• Learn how to find your containers with the docker ps command
• Learn how to display information about a container

53
Container lifecycle
• Basic lifecycle of a Docker container
– Create container from image
– Run container with a specified process
– Process finishes and container stops
– Destroy container
• More advanced lifecycle
– Create container from image
– Run container with a specified process
– Interact and perform other actions inside the container
– Stop the container
– Restart the container

54
Creating and running a Container
• Use docker run command
• The docker run command actually does two things
– Creates the container using the image we specify
– Runs the container
• Syntax 

docker run [options] [image] [command] [args]
• Image is specified with repository:tag

Examples
docker run ubuntu:14.04 echo "Hello World"
docker run ubuntu ps ax

55
EX5.1 - Run a Simple Container
1. On your terminal type

docker run ubuntu:14.04 echo "hello world"
2. Observe the output
3. Then type 

docker run ubuntu:14.04 ps -ef
4. Observe the output
5. Notice the much faster execution time compared to the first container
that was run. This is due to the fact that Docker now has the Ubuntu
14.04 image locally and thus does not need to download the image

56
Find your Containers
• Use docker ps to list running containers
• The –a flag to list all containers (includes containers that are stopped)


57
EX5.2 – List containers
1. List your running containers. What can you observe?

docker ps
2. List all your containers, including ones that have stopped

docker ps -a

58
Container with Terminal
• Use -i and -t flags with docker run
• The -i flag tells docker to connect to STDIN on the container
• The -t flag specifies to get a pseudo-terminal
• Note: You need to run a terminal process as your command (e.g. bash)

Example
docker run -i -t ubuntu:latest bash

59
Exit the Terminal
• Type exit to quit the terminal and return to your host terminal
• Exiting the terminal will shutdown the container
• To exit the terminal without a shutdown, hit CTRL + P + Q together

60
EX5.3 - Terminal Access
1. Create a container using the ubuntu 14.04 image and connect to STDIN and a
terminal

docker run -i -t ubuntu:14.04 bash
2. In your container, create a new user using your first and last name as the username

adduser username
3. Add the user to the sudo group 

adduser username sudo
4. Exit the container

exit
5. Notice how the container shut down
6. Once again run:

docker run -i -t ubuntu:14.04 bash
7. Try and find your user
8. Notice that it does not exist

61
Container Processes
• A container only runs as long as the process from your specified docker run command is running
• Your command's process is always PID 1 inside the container

62
Container ID
• Containers can be specified using their ID or name
• Long ID and short ID
• Short ID and name can be obtained using docker ps command to list
containers
• Long ID obtained by inspecting a container

63
docker ps command
• To view only the container IDs (displays short ID)

docker ps -q
• To view the last container that was started

docker ps -l

64
docker ps command
• Combining flags to list all containers with only their short ID

docker ps -aq
• Combining flags to list the short ID of the last container started

docker ps -lq

65
docker ps filtering
• Use the --filter flag to specify filtering conditions
• Currently you can filter based on the container’s exit code and status
• Status can be one of
– Restarting
– Running
– Exited
– Paused
• To specify multiple conditions, pass multiple --filter flags

66
docker ps filtering examples
• List containers with an exit code of 1 (exited with error)

docker ps -a --filter "exited=1"
67
Running in Detached Mode
• Also known as running in the background or as a daemon
• Use -d flag
• To observe output use docker logs [container id]

Create a centos container and run the ping command to ping the container itself 50 times
docker run -d centos:7 ping 127.0.0.1 -c 50

68
EX5.4 – Run in detached mode
1. Run

docker run -d centos:7 ping 127.0.0.1 -c 50
2. List your containers by running docker ps
3. Notice the centos container running
4. Wait for a minute and run docker ps again
5. You may notice that the container has stopped (if it is still running, wait
a bit longer). Why do you think the container has stopped?

69
A More Practical Container
• Run a web application inside a container
• The -P flag to map container ports to host ports

Create a container using the tomcat image, run in detached mode and map the tomcat ports to host ports
docker run -d -P tomcat:7
EX5.5 - Web Application Container
1. Run 

docker run -d -P nginx
2. Check your container details by running

docker ps
3. Notice the port mapping.
4. Go to <your linux server url>:<port number> and verify that
you can see the NGINX welcome page

71
EX5.6 – Check container process
1. Run an Ubuntu container with bash as the process. Remember to use -it to gain terminal access
2. Run ps -ef and check the PID number of the bash process
3. Exit the container without shutting it down

CTRL + P + Q
4. Run ps -ef on your host and look for the bash processes 

ps -ef | grep bash
5. Note down the parent PID of each bash process
6. Find the processes with the parent PID numbers you noted down
7. What do you notice?
72
Attaching to a container
• Attaching a client to a container will bring a container which is running in the
background into the foreground
• The container's PID 1 process output will be displayed on your terminal
• Use docker attach command and specify the container ID or name
• Warning: Attaching to containers is error prone because if you hit 

CTRL + C by accident, you will stop the process and therefore stop the
container

73
Detaching from a container
• Hit CTRL + P + Q together on your terminal
• Only works if the following two conditions are met
– The container standard input is connected
– The container has been started with a terminal
– For example: docker run -i -t ubuntu
• Hitting CTRL + C will terminate the process, thus shutting down the
container

74
EX5.7 – Attach and detach
1. Run 

docker run -d ubuntu ping 127.0.0.1 -c 50
2. Attach to the newly created container and observe the output
3. Try and detach from the container. Notice how you can only exit with
CTRL + C
4. Check your running containers
5. Run another container 

docker run -d -it ubuntu ping 127.0.0.1 -c 50
6. Attach to the newly created container
7. Detach from the container with CTRL + P + Q
8. List your running containers
75
Docker exec command
• docker exec command allows us to execute additional processes
inside a container
• Typically used to gain command line access
• docker exec -i -t [container ID] bash
• Exiting from the terminal will not terminate the container

76
EX5.8 – docker exec
1. Run an NGINX container in the background

docker run -d nginx
2. Use docker exec to start a bash terminal in the container

docker exec -it <container id> bash
3. Exit the container terminal
4. Verify that your container is still running

docker ps

77
Inspecting container logs
• Container PID 1 process output can be viewed with docker logs
command
• Will show whatever PID 1 writes to stdout
• Displays the entire log output from the time the container was created

View the output of the container's PID 1 process


docker logs <container name>

78
Following container logs
• Use docker logs command and specify the -f option
• Similar to Linux tail -f command
• CTRL + C to exit
• Will still follow the log output from the very beginning

79
Tailing container logs
• We can show only the last "x" lines from the logs
• Use --tail option and specify the number

Show the last 5 lines from the container log
docker logs --tail 5 <container ID>

Show the last 5 lines and follow the log
docker logs --tail 5 -f <container ID>
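The --tail behaviour mirrors the plain Linux tail -n command, which can be tried without Docker at all. A minimal sketch (the log file and its contents below are made up for illustration):

```shell
# --tail works like plain `tail -n`: keep only the last N lines.
# Docker-free sketch; the log file is a made-up stand-in.
log=$(mktemp)
printf 'line %s\n' 1 2 3 4 5 6 7 > "$log"
last_two=$(tail -n 2 "$log")   # analogous to: docker logs --tail 2 <id>
echo "$last_two"
rm -f "$log"
```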

80
EX5.9 – container logs
1. Start a new Ubuntu container and run the ping command to ping
127.0.0.1 and have it repeat 100 times

docker run -d ubuntu:14.04 ping 127.0.0.1 -c 100
2. Inspect the container log, run this command a few times

docker logs <container id>
3. Then inspect and follow the container log

docker logs -f <container id>
4. Hit CTRL + C to stop following the log
5. Now follow the log again but trim the output to start from the last 10
lines

docker logs --tail 10 -f <container id>

81
Stopping a container
• Two commands we can use
– docker stop
– docker kill
• docker stop sends a SIGTERM to the main container process
– Process then receives a SIGKILL after a grace period
– Grace period can be specified with -t flag (default is 10 seconds)
• docker kill sends a SIGKILL immediately to the main container
process
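The SIGTERM/SIGKILL distinction can be observed with ordinary shell processes, no Docker daemon required. A sketch under that assumption (exit status 137 = 128 + 9, i.e. terminated by SIGKILL):

```shell
# Docker-free sketch of stop vs kill semantics.
# `docker stop` sends SIGTERM first: trappable, allows a clean exit.
sh -c 'sleep 30 & trap "kill $!; exit 0" TERM; wait $!' &
pid=$!
sleep 1
kill -TERM "$pid"      # what `docker stop` does first
wait "$pid"
term_status=$?         # 0: the trap ran and exited cleanly

# `docker kill` sends SIGKILL: untrappable, immediate death.
sh -c 'sleep 30' &
pid=$!
sleep 1
kill -KILL "$pid"
wait "$pid"
kill_status=$?         # 137 = 128 + 9 (terminated by SIGKILL)

echo "stop-like exit: $term_status  kill-like exit: $kill_status"
```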

82
Restarting a container
• Use docker start to restart a container that has been stopped
• Container will start using the same options and command specified
previously
• Can attach to the container with -a flag

Start a stopped container and attach to the process that it is running


docker start -a <container ID>

83
EX5.10 – Stop and Start container
1. Run a tomcat container in detached mode

docker run -d tomcat
2. Inspect and follow the container log 

docker logs -f <container id>
3. Stop the container

docker stop <container id>
4. Start the container again and re-attach to it

docker start -a <container id>

84
EX5.10 – (cont’d)
5. Detach and stop the container

CTRL + C
6. Restart the container again and follow the log output

docker start <container id>

docker logs -f <container id>
7. Notice how there are so many log lines it is difficult to follow
8. Hit CTRL + C to stop following the log
9. Now inspect the container log again but this time, trim the output to only
show the last 10 lines and follow it

docker logs --tail 10 -f <container id>

85
Inspecting a container
• docker inspect command displays all the details about a container
• Outputs details in JSON array

86
EX5.11 – Inspect container properties
1. Inspect your tomcat container from exercise 5.10

docker inspect <container id>
2. Look through the JSON output and look for the container IP address
and long ID

87
Finding a specific property
• You can pipe the output of docker inspect to grep and use it to
search a particular container property.
• Example

docker inspect <container name> | grep IPAddress
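The pipe-to-grep pattern can be tried on any JSON text. The snippet below uses fabricated inspect-style sample data, not real docker output:

```shell
# Docker-free sketch: the JSON below is made-up sample data shaped
# like a fragment of `docker inspect` output.
json='{"NetworkSettings": {"IPAddress": "172.17.0.2", "Gateway": "172.17.0.1"}}'
match=$(echo "$json" | grep -o '"IPAddress": "[^"]*"')
echo "$match"
```

Note that grep returns the raw matching text, key and quotes included, rather than just the value; that is one reason the --format option is usually preferable.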

88
Formatting docker inspect output
• Piping the output to grep is simple, but not the most effective method of
formatting
• Use the --format option of the docker inspect command
• Format option uses Go’s text/template package

http://golang.org/pkg/text/template/

89
docker inspect formatting syntax
• docker inspect --format='{{.<field1>.<field2>}}' <container id>
• Field names are case sensitive
• Example – to get the value of the Cmd field

docker inspect --format='{{.Config.Cmd}}' <container id>

90
EX5.12 – Format inspect output
1. Inspect your tomcat container and pipe the output to grep. Grep for
“IPAddress”

docker inspect <container name> | grep IPAddress
2. Now try and grep for “Cmd”. Notice how the output does not give us what we
want

docker inspect <container name> | grep Cmd
3. Use the --format option and grab the output of the Cmd field

docker inspect --format='{{.Config.Cmd}}' <container name>
4. Use the --format option and grab the output of the IPAddress field

docker inspect --format='{{.NetworkSettings.IPAddress}}' <container name>
5. Repeat step 4, trying different fields of your choice

91
Inspecting a whole JSON object
• When you want to output all the fields of a
JSON object you need to use the Go
template’s JSON function

Incorrect approach
docker inspect --format='{{.Config}}' <container name>

Correct approach
docker inspect --format='{{json .Config}}' <container name>

92
EX5.13 – More inspection
1. Inspect your tomcat container and take note of the output of the 

Config object
2. Format the output of your docker inspect command to only display the Config
object without using the JSON function

docker inspect --format='{{.Config}}' <container name>
3. Notice how the output is not correct
4. Now apply the json function in your formatting and compare the difference

docker inspect --format='{{json .Config}}' <container name>
5. Pick another JSON object from the full docker inspect output and repeat
the same steps as above

93
Deleting containers
• Can only delete containers that have been stopped
• Use docker rm command
• Specify the container ID or name

94
Delete all containers
• Use docker ps -aq to list the IDs of all containers
• Feed the output into docker rm command

Delete all containers that are stopped


docker rm $(docker ps -aq)
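The `$(...)` part is ordinary shell command substitution, not a Docker feature, so it can be sketched without a Docker daemon (the IDs below are invented stand-ins):

```shell
# Command substitution: the inner command's output becomes arguments
# to the outer command, exactly as in `docker rm $(docker ps -aq)`.
# These IDs are made-up stand-ins for real short container IDs.
fake_ids=$(printf '%s\n' a1b2c3 d4e5f6 7a8b9c)
set -- $fake_ids   # unquoted on purpose: word-split into arguments
msg="would delete $# containers: $*"
echo "$msg"
```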

95
EX5.14 – Delete containers
1. List all your stopped containers

docker ps --filter "status=exited"
2. Delete one container using its short ID

docker rm <container id>
3. Delete the latest container that was run

docker rm $(docker ps -ql)
4. Delete all stopped containers

docker rm $(docker ps -aq)

96
Module Summary
• Containers can be run in the foreground and background
• A container only runs as long as the process we specified during creation
is running
• The container log is the output of its PID 1 process
• Key commands we learnt
– docker run
– docker ps
– docker logs
– docker inspect

97
Module 6:

Building Images
Module objectives
In this module we will
• Explain how image layers work
• Build an image by committing changes in a container
• Learn how to build images with a Dockerfile
• Work through examples of key Dockerfile instructions
– RUN
– CMD
– ENTRYPOINT
– COPY
• Talk about the best practices when writing a Dockerfile
99
Understanding image layers
• An image is a collection of files and
some meta data
• Images are comprised of multiple layers
• A layer is also just another image
• Each image contains software you want
to run
• Every image contains a base layer
• Docker uses a copy on write system
• Layers are read only
• COW/Union filesystems (AUFS/BTRFS)

100
Sharing layers
• Images can share layers in order to speed up transfer times and
optimize disk and memory usage
• Parent images that already exist on the host do not have to be
downloaded

101
The container writable layer
• Docker creates a top writable
layer for containers
• Parent images are read only
• All changes are made at the writable layer
• When changing a file from a read
only layer, the copy on write
system will copy the file into the
writable layer

102
Methods of building images
• Three ways
– Commit changes from a container as a new image
– Build from a Dockerfile
– Import a tarball into Docker as a standalone base layer

103
Committing changes in a container
• Allows us to build images interactively
• Get terminal access inside a container and install the necessary
programs and your application
• Then save the container as a new image using the docker commit
command

104
EX6.1 - Make changes in a container
1. Create a container from an Ubuntu image and run a bash terminal

docker run -i -t ubuntu:14.04 bash
2. Run apt-get update to refresh the distro packages
3. Install wget
4. Install vim
5. Exit the container terminal

105
Comparing container changes
• Use the docker diff command to compare a container with its parent image
– Recall that images are read only and changes occur in a new layer
– The parent image (the original) is being compared with the new layer
• Copy on write system ensures that starting a container from a large image does
not result in a large copy operation
• Lists the files and directories that have changed

106
EX6.2 – Compare changes
1. Run docker diff against the container you created in exercise 6.1
2. What can you observe?

107
Docker Commit
• docker commit command saves changes in a container as a new image
• Syntax

docker commit [options] [container ID] [repository:tag]
• Repository name should be based on username/application
• Can reference the container with container name instead of ID

Save the container with ID 984d25f537c5 as a new image in the repository johnnytu/myapplication. Tag the image as 1.0
docker commit 984d25f537c5 johnnytu/myapplication:1.0

108
Image namespaces
• Image repositories belong in one of three namespaces
– Root

ubuntu:14.04

centos:7

nginx
– User OR organization

johnnytu/myapp

mycompany/myapp
– Self hosted

registry.mycompany.com:5000/my-image

109
Uses for namespaces
• Root namespace is for official repositories
• User and organization namespaces are for images you create and plan
to distribute on Docker Hub
• Self-hosted namespace is for images you create and plan to distribute in
your own registry server

110
EX6.3 - Build new image
1. Save your container as a new image. For the repository name use <yourname>/myapp. Tag the image as 1.0

docker commit <container ID> <yourname>/myapp:1.0
2. Run docker images and verify that you can see your new image
3. Create a container using the new image you created in the previous
exercise. Run bash as the process to get terminal access

docker run -i -t <yourname>/myapp:1.0 bash
4. Verify that vim and wget are installed
5. Create a file in your container and commit that as a new image. Use the
same image name but tag it as 1.1
6. Run docker diff on your container. What do you observe?

111
Intro to Dockerfile
A Dockerfile is a configuration file that contains instructions
for building a Docker image

• Provides a more effective way to build images compared to using docker commit
• Easily fits into your development workflow and your continuous
integration and deployment process

112
Process for building images from Dockerfile
1. Create a Dockerfile in a new folder or in your existing application folder
2. Write the instructions for building the image
– What programs to install
– What base image to use
– What command to run
3. Run docker build command to build an image from the Dockerfile

113
Dockerfile Instructions
• Instructions specify what to do when building the image
• FROM instruction specifies what the base image should be
• RUN instruction specifies a command to execute
• Comments start with "#"

# Example of a comment
FROM ubuntu:14.04
RUN apt-get update
RUN apt-get install -y vim
RUN apt-get install -y curl

114
FROM instruction
• Must be the first instruction specified in the Dockerfile (not including comments)
• Can be specified multiple times to build multiple images
– Each FROM marks the beginning of a new image
• Can use any image, including images from official repositories, user images and images in self-hosted registries

Examples
FROM ubuntu
FROM ubuntu:14.04
FROM johnnytu/myapplication:1.0
FROM company.registry:5000/myapplication:1.0

115
More about RUN
• RUN will do the following:
– Execute a command.
– Record changes made to the filesystem.
– Works great to install libraries, packages, and various files.
• RUN will NOT do the following:
– Record state of processes.
– Automatically start daemons.

116
EX6.4 – A Simple Dockerfile
1. In your home directory, create a folder called myimage
2. Change directory into the myimage folder and create a new file called
Dockerfile
3. Open Dockerfile, write the following and then save the file
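The slide image showing the file contents did not survive conversion, so the exact text is unknown. A minimal Dockerfile consistent with the surrounding exercises (EX6.5 later verifies that wget is installed) might look like this:

```dockerfile
# Hypothetical reconstruction of the missing slide content
FROM ubuntu:14.04
RUN apt-get update
RUN apt-get install -y wget
```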

117
Docker Build
• Syntax

docker build [options] [path]
• Common option to tag the build

docker build -t [repository:tag] [path]

Build an image using the current folder as the context path. Put the image in
the johnnytu/myimage repository and tag it as 1.0
docker build -t johnnytu/myimage:1.0 .
!
As above but use the myproject folder as the context path
docker build -t johnnytu/myimage:1.0 myproject

118
EX6.5 – Build the Dockerfile
1. Run docker build -t myimage .
2. Observe the output
3. Verify that the image is built by running docker images
4. Run a container from your image and verify that wget is installed
5. Optional: Use the -f flag to point to a Dockerfile with a different name or
location

119
Build output

120
The build context

• The build context is the directory that the Docker client sends to the
Docker daemon during the docker build command
• Directory is sent as an archive
• Docker daemon will build using the files available in the context
• Specifying “.” for the build context means to use the current directory

121
Examining the build process
• Each RUN instruction will execute the command on the top writable layer
of a new container
• At the end of the execution of the command, the container is committed
as a new image and then deleted

122
Examining the build process (cont’d)
• The next RUN instruction will be executed in a new container using the
newly created image
• The container is committed as a new image and then removed. Since
this RUN instruction is the final instruction, the image committed is what
will be used when we specify the repository name and tag used in the
docker build command
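Conceptually, each build step is equivalent to running these commands by hand (an illustrative sketch; the container ID is a placeholder):

```shell
# what docker build does for each RUN instruction, by hand:
docker run ubuntu:14.04 apt-get update   # execute the command in a new container
docker commit <container ID>             # commit the result as an intermediate image
docker rm <container ID>                 # remove the temporary container
# the next RUN instruction starts from the image just committed
```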

123
EX6.6 - The build cache
1. In your myimage folder run the same build command again

docker build -t myimage .
2. Notice how the build was much faster. Almost instant.
3. Observe the output. Do you notice anything different?

124
The build cache
• Docker saves a snapshot of the image after each build step
• Before executing a step, Docker checks to see if it has already run that
build sequence previously
• If yes, Docker will use the result of that instead of executing the
instruction again
• Docker uses exact strings in your Dockerfile to compare with the cache
– Simply changing the order of instructions will invalidate the cache
• To disable the cache manually use the --no-cache flag

docker build --no-cache -t myimage .

125
EX6.7 – Experiment with the cache
1. Open your Dockerfile and add another RUN instruction to install vim
2. Build the image and observe the output. Notice the cache being used
for the RUN instructions we had defined previously but not for the new
one
3. Build the image again and notice the cache being used for all three
RUN instructions
4. Edit the Dockerfile and swap the order of the two apt-get install
commands
5. Build the image again and notice the cache being invalidated for both
apt-get install instructions

126
Run Instruction aggregation (Best Practice)
• Can aggregate multiple RUN instructions by using “&&”
• Commands will all be run in the same container and committed as a new
image at the end
• Reduces the number of image layers that are produced

RUN apt-get update && apt-get install -y \
    curl \
    vim \
    openjdk-7-jdk

127
Viewing Image layers and history
• docker history command shows us the layers that make up an
image
• See when each layer was created, its size and the command that was
run

IMAGE CREATED CREATED BY SIZE


10f1e1747aa1 12 seconds ago /bin/sh -c apt-get install -y wget 6.119 MB
9b6aeef1e9cc 23 seconds ago /bin/sh -c apt-get install -y vim 43.12 MB
334d0289feff 10 minutes ago /bin/sh -c apt-get update 20.86 MB
07f8e8c5e660 2 weeks ago /bin/sh -c #(nop) CMD ["/bin/bash"] 0 B
37bea4ee0c81 2 weeks ago /bin/sh -c sed -i 's/^#\s*\(deb.*universe\)$/ 1.895 kB
a82efea989f9 2 weeks ago /bin/sh -c echo '#!/bin/sh' > /usr/sbin/polic 194.5 kB
e9e06b06e14c 2 weeks ago /bin/sh -c #(nop) ADD file:f4d7b4b3402b5c53f2 188.1 MB

128
EX6.8 (optional) – Build history
1. Examine the build history of the image built from the Dockerfile
2. Note down the image id of the layer with the apt-get update
command
3. Now aggregate the two RUN instructions with the apt-get install
command

RUN apt-get install -y wget vim
4. Build the image again
5. Run docker history on the image. What do you observe about the
image layers

129
CMD Instruction
• CMD defines a default command to execute when a container is created
• Shell format and EXEC format
• Can only be specified once in a Dockerfile
– If specified multiple times, the last CMD instruction is executed
• Can be overridden at run time
• (Array/exec format does not wrap the command in /bin/sh -c)

Shell format
CMD ping 127.0.0.1 -c 30
Exec format
CMD ["ping", "127.0.0.1", "-c", "30"]

130
EX6.9 - Try CMD
1. Go into the myimage folder and open your Dockerfile from the previous exercise
2. Add the following line to the end

CMD ["ping", "127.0.0.1", "-c", "30"]
3. Build the image

docker build -t <yourname>/myimage:1.0 .
4. Execute a container from the image and observe the output

docker run <yourname>/myimage:1.0
5. Execute another container from the image and specify the echo command

docker run <yourname>/myimage:1.0 echo "hello world"
6. Observe how the container argument overrides the CMD instruction

131
ENTRYPOINT Instruction
• Defines the command that will run when a container is executed
• Run time arguments and CMD instruction are passed as parameters to the
ENTRYPOINT instruction
• Shell and EXEC form
• Container essentially runs as an executable
• By default, ENTRYPOINT is not overridden at run time (it can be, using the --entrypoint flag)

ENTRYPOINT ["/usr/sbin/nginx"]
CMD ["-h"]

132
EX6.10 - Entrypoint
1. Open your Dockerfile in the myimage folder
2. Delete the CMD instruction line and replace it with

ENTRYPOINT ["ping"]
3. Build the image
4. Run a container from the image but do not specify any commands or
arguments

docker run myimage:1.0
5. What do you notice?
6. Now run a container and specify an argument of 127.0.0.1

docker run myimage:1.0 127.0.0.1

133
Using CMD with ENTRYPOINT
• If ENTRYPOINT is used, the CMD instruction can be used to specify
default parameters
• Parameters specified during docker run will override CMD
• If no parameters are specified during docker run, the CMD arguments
will be used for the ENTRYPOINT command
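A minimal sketch of this pattern (not from the slides; the ping count is chosen arbitrarily):

```dockerfile
FROM ubuntu:14.04
ENTRYPOINT ["ping", "-c", "3"]
CMD ["127.0.0.1"]
```

Running docker run myimage pings 127.0.0.1 (the CMD default); running docker run myimage 8.8.8.8 replaces the default and pings 8.8.8.8 instead.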

134
EX6.11 – CMD and ENTRYPOINT
1. Open your Dockerfile and modify the ENTRYPOINT instruction to include 2
arguments for the ping command

ENTRYPOINT ["ping", "-c", "50"]
2. Add a CMD instruction and specify the IP 127.0.0.1

CMD ["127.0.0.1"]
3. Save the file and build the image
4. Run a container and specify your IP address as the parameter

docker run myimage:1.0 <your ip>
5. Now run another container but do not specify any parameters. What do
you observe?

docker run myimage:1.0

135
Shell vs exec format
• The RUN, CMD and ENTRYPOINT instructions can be specified in either
shell or exec form

In shell form, the command will run inside a shell with /bin/sh -c
RUN apt-get update

Exec format allows execution of commands in images that don't have /bin/sh
RUN ["apt-get", "update"]

136
Shell vs exec format
• Shell form is easier to write and you can perform shell parsing of
variables
• For example

CMD sudo -u ${USER} java ….
• Exec form does not require image to have a shell
• For the ENTRYPOINT instruction, using shell form will prevent the ability
to specify arguments at run time
– The CMD arguments will not be used as parameters for ENTRYPOINT
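To illustrate the caveat (a sketch, not from the slides): with shell form, the command is wrapped in /bin/sh -c, so anything you pass after the image name at docker run never reaches it.

```dockerfile
# Shell form: actually runs `/bin/sh -c "ping 127.0.0.1"`.
# `docker run myimage 8.8.8.8` will NOT pass 8.8.8.8 to ping.
ENTRYPOINT ping 127.0.0.1
```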

137
Overriding ENTRYPOINT
• Specifying parameters during docker run will result in your
parameters being used as arguments for the ENTRYPOINT command
• To override the command specified by ENTRYPOINT, use the

--entrypoint flag.
• Useful for troubleshooting your images

Run a container using the image “myimage” and specify to run a bash
terminal instead of the program specified in the image ENTRYPOINT
instruction
docker run -it --entrypoint bash myimage

138
Copying source files
• When building “real” images you would want to do more than just install
some programs
• Examples
– Compile your source code and run your application
– Copy configuration files
– Copy other content
• How do we get our content on our host into the container?
• Use the COPY instruction

139
COPY instruction
• The COPY instruction copies new files or directories from a specified
source and adds them to the container filesystem at a specified
destination
• Syntax

COPY <src> <dest>
• The <src> path must be inside the build context
• If the <src> path is a directory, all files in the directory are copied. The
directory itself is not copied
• You can specify multiple <src> directories

140
COPY examples
Copy the server.conf file in the build context into the root folder of the
container
COPY server.conf /
Copy the files inside the data/server folder of the build context into
the /data/server folder of the container
COPY data/server /data/server

141
Dockerize an application
• The Dockerfile is essential if we want to adapt our existing application to
run on containers
• Take a simple Java program as an example. To build and run it, we need
the following on our host
– The Java Development Kit (JDK)
– The Java Virtual Machine (JVM)
– Third party libraries depending on the application itself
• You compile the code, run the application and everything looks good

142
Dockerize an application
• Then you distribute the application and run it on a different environment and it
fails
• Why might the Java application fail?
– Missing libraries in the environment
– Missing the JDK or JVM
– Wrong version of libraries
– Wrong version of JDK or JVM
• So why not run your application in a Docker container?
• Install all the necessary libraries in the container
• Build and run the application inside the container and distribute the image for the
container
• Will run on any environment with the Docker Engine installed

143
EX6.12 – Setup sample application
1. Install java 7

sudo apt-get install openjdk-7-jdk
2. In your home directory, create a folder called javahelloworld
3. In your new folder, create a file called HelloWorld.java
4. Write the following code
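The code sample referenced here would be a minimal HelloWorld class; a sketch consistent with the "hello world" output expected in the later exercises:

```java
// HelloWorld.java - minimal program for the Dockerize exercises
public class HelloWorld {
    public static void main(String[] args) {
        System.out.println("hello world");
    }
}
```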

144
EX6.12 – Setup sample application
5. Compile your HelloWorld.java class

javac HelloWorld.java
6. Run the program

java HelloWorld

145
EX6.13 – Dockerize the application
1. In the javahelloworld folder, create a Dockerfile
2. Use java:7 as your base image. This image contains the JDK needed to build
and run our application

FROM java:7
3. Copy your source file into the container root folder

COPY HelloWorld.java /
4. Add an instruction to compile your code

RUN javac HelloWorld.java
5. Add an instruction to run your program when running the container

ENTRYPOINT ["java", "HelloWorld"]
6. Build the image
7. Run a container from the image and observe the output

146
EX6.13 - Dockerize the application
8. Let’s take a look inside the container. Run another container and
override the ENTRYPOINT instruction. Specify to run a bash terminal

docker run -it --entrypoint bash <your java image>
9. Find where your HelloWorld.java source file is
10. Find where the compiled code is

147
Specify a working directory
• Previously all our instructions have been executed at the root folder in
our container
• WORKDIR instruction allows us to set the working directory for any
subsequent RUN, CMD, ENTRYPOINT and COPY instructions to be
executed in
• Syntax

WORKDIR /path/to/folder
• Path can be absolute or relative to the current working directory
• Instruction can be used multiple times
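A short sketch of how WORKDIR composes (the paths are illustrative):

```dockerfile
FROM ubuntu:14.04
WORKDIR /app        # absolute: subsequent instructions run in /app
COPY src src        # source files land in /app/src
WORKDIR build       # relative: working directory is now /app/build
RUN pwd             # would print /app/build during the build
```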

148
EX6.14 – Restructure our application
1. In your javahelloworld folder, create two folders called src and
bin
2. Move your HelloWorld.java source file into src
3. Compile your code and place the compiled class into the bin folder

javac -d bin src/HelloWorld.java
4. Run your application

java -cp bin HelloWorld

149
EX6.15 – Redefine our Dockerfile
1. Open your Dockerfile
2. Modify the COPY instruction to copy all files in the src folder into
/home/root/javahelloworld/src

COPY src /home/root/javahelloworld/src
3. Modify the RUN instruction to compile the code by referencing the
correct src folder and to place the compiled code into the bin folder

javac -d bin src/HelloWorld.java
4. Modify ENTRYPOINT to specify java -cp bin

ENTRYPOINT ["java", "-cp", "bin", "HelloWorld"]
5. Build your image. Notice the error?

150
EX6.16 – Specify the working directory
1. Before the RUN instruction, add a WORKDIR instruction to specify where to
execute your commands

WORKDIR /home/root/javahelloworld
2. Build your image. Notice there is still an error.
3. Add another RUN instruction after WORKDIR to create the bin folder

RUN mkdir bin
4. Build your image.
5. Run a container from your image and verify you can see the "hello world"
output
6. Override the ENTRYPOINT and run bash. Check your folder structure and verify
you have src and bin inside /home/root/javahelloworld

docker run -it --entrypoint bash <your image>

151
MAINTAINER Instruction
• Specifies who wrote the Dockerfile
• Optional but best practice to include
• Usually placed straight after the FROM instruction

Example
MAINTAINER Docker Training <[email protected]>

152
ENV instruction
• Used to set environment variables in any container launched from the
image
• Syntax

ENV <variable> <value>

Examples
ENV JAVA_HOME /usr/bin/java
ENV APP_PORT 8080

153
EX6.17 – Environment variables
1. Modify the javahelloworld image Dockerfile and set a variable
called “foo”. Specify any value

ENV FOO bar
2. Build the image
3. Run a container from the image and check for the presence of the
variable

154
ADD instruction
• The ADD instruction copies new files or directories from a specified source and adds
them to the container filesystem at a specified destination
• Syntax

ADD <src> <dest>
• The <src> path is relative to the directory containing the Dockerfile
• If the <src> path is a directory, all files in the directory are copied. The directory itself is
not copied
• You can specify multiple <src> directories

Example

ADD /src /myapp/src

155
COPY vs ADD
• Both instructions perform a near identical function
• ADD has the ability to auto unpack tar files
• ADD instruction also allows you to specify a URL for your content
(although this is not recommended)
• Both instructions compute a checksum of the files being added. If the
checksum does not match the cached one, the build cache will be
invalidated
– Because it means we have modified the files
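A sketch contrasting the two (the file names are made up):

```dockerfile
# ADD auto-unpacks a local tar archive into the destination
ADD site.tar.gz /var/www/      # archive contents end up under /var/www
# COPY places the file as-is; no unpacking
COPY site.tar.gz /var/www/
# ADD can also fetch content from a URL (not recommended)
ADD http://example.com/file.txt /tmp/file.txt
```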

156
Best practices for writing Dockerfiles
• Remember, each line in a Dockerfile creates a new layer
• You need to find the right balance between having lots of layers created for the
image and readability of the Dockerfile
• Don’t install unnecessary packages
• One ENTRYPOINT per Dockerfile
• Combine similar commands into one by using “&&” and “\”

Example

RUN apt-get update && \
    apt-get install -y vim && \
    apt-get install -y curl

157
Best practices for writing Dockerfiles
• Use the caching system to your advantage
– The order of statements is important
– Add files that are least likely to change first and the ones most likely to change last
• The example below is not ideal because our build system does not know whether the
requirements.txt file has changed
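The Dockerfile this slide refers to might look like the following (a reconstruction; the python:2.7 base image and file names are assumptions):

```dockerfile
FROM python:2.7
# ADD-ing everything first means ANY file change invalidates the cache
# for the pip install step below
ADD . /code
WORKDIR /code
RUN pip install -r requirements.txt
```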

158
Best practices for writing Dockerfiles
• The correct way is shown in the example below.
• requirements.txt is added in a separate step so Docker can cache more
efficiently. If there’s no change in the file the RUN pip install instruction does not have
to execute and Docker can use the cache for that layer.
• The rest of the files are added afterwards
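A reconstruction of the cache-friendly version (same assumed base image and file names as above):

```dockerfile
FROM python:2.7
# add only requirements.txt first, so the pip install layer stays
# cached until the requirements actually change
ADD requirements.txt /code/requirements.txt
WORKDIR /code
RUN pip install -r requirements.txt
# the rest of the files are added afterwards
ADD . /code
```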

159
Module summary
• Images are made up of multiple layers
• Committing changes we make in a container as a new image is a simple way to
create our own images but is not a very effective method as part of a
development workflow
• A Dockerfile is the preferred method of creating images
• Key Dockerfile instructions we learnt about
– RUN
– CMD
– ENTRYPOINT
– COPY
• Key commands
– docker build

160
Module 7:

Managing and Distributing Images
Module objectives
In this module we will
• Outline where we can distribute our images to
• Push an image into Docker Hub
• Setup public and private repositories in Docker Hub
• Explain how image tagging works
• Learn how to delete images

162
Distributing your image
• To distribute your image there are two options
– Push to Docker Hub
– Push to your own registry server
• Images in Docker Hub can reside in public or private repositories
• Registry server can be setup to be publicly available or behind the
firewall

163
Docker Hub Repositories
• Users can create their own repositories on Docker Hub
• Public and Private
• Push local images to a repository

164
Creating a repository
• Repository will reside in the user or organization namespace. For
example:
– trainingteam/myrepo
– johnny/myrepo
• Public repositories are listed and searchable for public use
• Anyone can pull images from a public repository

165
Creating a repository

166
Repository description
• Once a repository has been created we can write a detailed description
about the images
• Good to include instructions on how to run the images
• You may want to
– Link to the source repository of the application the image is designed to run
– Link to the source of the Dockerfile
• Description is written in markdown 

http://daringfireball.net/projects/markdown/syntax

167
EX7.1 – Create a public repository
1. Login to your Docker Hub account
2. Create a new public repository called javahelloworld
3. Write a basic description for your repository

168
Pushing Images to Docker Hub
• Use docker push command
• Syntax

docker push [repo:tag]
• Local repo must have same name and tag as the Docker Hub repo
• Only the image layers that have changed get pushed
• You will be prompted to login to your Docker Hub account

169
Pushing Images

170
EX7.2 – Push image to Docker Hub
1. Try to push the javahelloworld image we created in Exercise 6.16 to
Docker Hub

docker push javahelloworld
2. Notice the error message saying that you cannot push a root repository

171
Tagging Images
• Used to rename a local image repository before pushing to Docker Hub
• Syntax:

docker tag [image ID] [repo:tag]

OR

docker tag [local repo:tag] [Docker Hub repo:tag]

Tag image with ID (trainingteam/testexample is the name of the repository on
Docker Hub)
docker tag edfc212de17b trainingteam/testexample:1.0

Tag image using the local repository tag
docker tag johnnytu/testimage:1.5 trainingteam/testexample

172
One image, many tags
• The same image can have multiple tags
• An image can be identified by its ID

REPOSITORY TAG IMAGE ID CREATED VIRTUAL SIZE
trainingteam/javahelloworld 1.1 76b3b2455967 5 minutes ago 598.1 MB
trainingteam/testimage 1.0 ee8800b0677b 8 minutes ago 263.8 MB
javahelloworld 1.0 b8a9f23d0df8 3 hours ago 588.7 MB
javahelloworld latest b8a9f23d0df8 3 hours ago 588.7 MB
trainingteam/javahelloworld 1.0 b8a9f23d0df8 3 hours ago 588.7 MB
java 7 31dd6207396b 2 weeks ago 588.7 MB
ubuntu 14.04 07f8e8c5e660 2 weeks ago 188.3 MB

173
Tagging images for a push
(Screenshot: the docker tag command mapping the local repo name to the repo
name on Docker Hub)

174
EX7.3 – Tag and push image
1. Tag your local image so that it has the same repository name as the
Docker Hub repository created in exercise 7.1

docker tag javahelloworld:1.0 <username>/javahelloworld:1.0
2. List your images and verify the image you just tagged
3. Now try and push the image

docker push <username>/javahelloworld:1.0
4. Navigate to your repository on Docker Hub and open the Tags tab
5. Verify that you can see the tag 1.0

175
EX7.4 – Modify image and push
1. Go into the javahelloworld folder and edit the Dockerfile
2. Add another RUN instruction just before ENTRYPOINT and install an
application of your choice.
3. Save and build the image. Tag the image as 1.1

docker build -t trainingteam/javahelloworld:1.1 .
4. Push the image to Docker Hub
5. Observe the output. What do you notice?

176
Private repositories
• Docker Hub private repositories are not available to the public and will
not be listed in search results
• Only you can push and pull images to your repository
• Additional users who want access need to be added as collaborators

177
EX7.5 - Create a private repository
1. Create a private repository in Docker Hub called myapplication
2. Push the image from exercise 6.5 (repo name myimage) into the
repository. Remember, you will have to tag the image first. 

docker tag myimage:1.0 <username>/myapplication:1.0

docker push <username>/myapplication:1.0
3. Collaborate with another student in the class and try to download their
image from their private repository.
4. Notice how it doesn’t work

178
Collaborators
• Collaborators can push and pull from a private repository but do not have
administrative access
• Collaborators will be able to see the private repository on their “Repositories” page
• You also need to be a collaborator if you want to push to a public repository

179
Adding collaborators
• Go to your repository settings on the right side of
the page and click “collaborators”
• Specify the Docker Hub username of the person
you wish to add

180
EX7.6 – Add collaborators
1. Add another student in the class as a collaborator in your private
repository
2. Get that person to verify that they can see your private repository on
their “Repositories” page
3. Get that person to download an image from your private repository

181
Deleting local Images
• Use docker rmi command
• docker rmi [image ID]

or

docker rmi [repo:tag]
• If an image is tagged multiple times, remove each tag

REPOSITORY TAG IMAGE ID CREATED VIRTUAL SIZE
test1 latest cbfa5ab76a11 12 seconds ago 262.5 MB
test latest cbfa5ab76a11 12 seconds ago 262.5 MB
johnnytu@dockertraining:~/test$ docker rmi test
Untagged: test:latest
johnnytu@dockertraining:~/test$ docker rmi test1
Untagged: test1:latest
Deleted: cbfa5ab76a11eec84b751ae261d3f870a0be61bb899e651c857ae4cc3eed9bc9

182
EX7.7 – Delete images
1. Use the docker rmi command and delete the image you downloaded
from the other student’s private repository

183
Mark repository as unlisted
• Marking a repository as unlisted means that it
will not be displayed in Docker Hub or command
line search
• Repository is still publicly accessible to those
who know the exact name
• Uses
– Do a limited release of an application and only
inform a select few people

184
Module summary
• Images can be pushed to the public registry (Docker Hub) or your own
private registry
• To push an image to a Docker Hub repository, the local repository name
and tag must be the same
• Image deletion is based on the image tags
• Key commands
– docker push
– docker tag

185
Module 8:

Volumes
Module objectives
In this module we will
• Explain what volumes are and what they are used for
• Learn the different methods of mounting a volume in a container
• Mount volumes during the docker run command and also in a
Dockerfile
• Explain how data containers work
• Create some data containers

187
Volumes
A Volume is a designated directory in a container, which is designed
to persist data, independent of the container’s life cycle

• Volume changes are excluded when updating an image


• Persist when a container is deleted (the data remains on the host, albeit in an undocumented directory)
• Can be mapped to a host folder
• Can be shared between containers

188
Volumes and copy on write
• Volumes bypass the copy on write system
• Act as passthroughs to the host filesystem
• When you commit a container as a new image, the content of the
volumes will not be brought into that image
• If a RUN instruction in a Dockerfile changes the content of a volume,
those changes are not recorded either.

189
Uses of volumes
• Decouple the data that is stored from the container which created the
data
• Good for sharing data between containers
– Can set up a data container which has a volume you mount in other
containers
– Share directories between multiple containers
• Bypassing the copy on write system to achieve native disk I/O
performance
• Share a host directory with a container
• Share a single file between the host and container

190
Mount a Volume
• Volumes can be mounted when running a container
• Use the -v option on docker run
• Volume paths specified must be absolute
• Can mount multiple volumes by using the -v option multiple times

Execute a new container and mount the folder /myvolume into its file system
docker run -d -P -v /myvolume nginx:1.7
Example of mounting multiple volumes
docker run -d -P -v /data/www -v /data/images nginx

191
EX8.1 - Create and test a Volume
1. Execute a new container and initialise a volume at /www/website. 

Run a bash terminal as your container process 

docker run -i -t -v /www/website ubuntu:14.04 bash
2. Inside the container, verify that you can get to /www/website
3. Create a file inside the /www/website folder
4. Exit the container
5. Commit the updated container as a new image called test and tag it as 1.0

docker commit <container ID> test:1.0
6. Execute a new container with your test image and go into its bash shell

docker run -i -t test:1.0 bash
7. Verify that the /www/website folder exists and that there are no files inside

192
Where are our volumes?
• Volumes exist independently from containers
• If a container is stopped, we can still access our volume
• To find where the volume is use docker inspect on the container

193
EX8.2 – Find your volume
1. Run docker inspect on the container you created in exercise 8.1
2. Scroll to the volume field and copy the path to your volume. Save this
path to a text editor for quick reference. You will need it later
3. Run sudo ls <path>. Verify that you can see the file you created
4. Now run 

sudo cat <path>/<file name>
5. Verify you can see the contents of that file
6. Inspect the container again, but this time format the output to only show
the volume path

docker inspect -f '{{json .Volumes}}' <container name>

194
Deleting a volume
• Volumes are not deleted when you delete a container
• To remove the volumes associated with a container use the -v option in
the docker rm command

Delete a container and remove its associated volumes


docker rm -v <container ID>

195
Deleting volumes
• If you created multiple containers which referenced the same volume,
you will be able to access that volume as long as one of those containers
is still present (running or stopped)
• When you remove the last container referencing a volume, that volume
will be orphaned
• Orphaned volumes still exist but are very difficult to access and remove
• Best practice: When deleting a container, make sure you delete the
volume associated with it, unless there are other containers using that
volume

196
EX8.3 – Deleting volumes
1. Delete the container from exercise 8.1 without using any options 

docker rm <container ID>
2. Copy the volume path from your text editor and run

sudo ls <path>
3. Notice the folder and files are still present
4. Run another container and specify a volume 

docker run -d -it -v /data/www ubuntu:14.04
5. Inspect the container and copy down the volume path to a text editor or file
6. Stop and delete the container using the -v option

docker rm -v <container name>
7. Grab the volume path from 6) and run sudo ls <path>
8. Notice that the folder no longer exists

197
Mounting host folders to a volume
• When running a container, you can map folders on the host to a volume
• The files from the host folder will be present in the volume
• Changes made on the host are reflected inside the container volume
• Syntax

docker run -v [host path]:[container path]:[rw|ro]
• rw or ro controls the write status of the volume

198
Simple Example
• In the example below, files inside /home/user/public_html on the
host will appear in the /data/www folder of the container
• If the host path or container path does not exist, it will be created
• If the container path is a folder with existing content, the files will be
replaced by those from the host path

Mount the contents of the public_html folder on the host to the container
volume at /data/www
docker run -d -v /home/user/public_html:/data/www ubuntu

199
Inspecting the mapped volume
• The volumes field from docker inspect will show the container
volume being mapped to the host path specified during docker run

200
EX8.4 – Mount a host folder
1. In your home directory, create a public_html folder and create an
index.html file inside this folder
2. Run an Ubuntu container and mount the public_html folder to the volume
/data/www

docker run -it -v /home/<user>/public_html:/data/www ubuntu:14.04
3. In the container, look inside the /data/www folder and verify you can see the
index.html file
4. Exit the container without stopping it

CTRL + P + Q
5. Modify the index.html file on the host by adding some new lines
6. Attach to the container and check the index.html file inside /data/www for the
changes

201
A more practical example with NGINX
• Let’s run an NGINX container and have it serve web pages that we have
on the host machine
• That way we can conveniently edit the page on the host instead of
having to make changes inside the container
• Quick NGINX 101
– NGINX starts with one default server on port 80
– Default location for pages is the /usr/share/nginx/html folder
– By default the folder has an index.html file which is the welcome page

202
The NGINX welcome page

(Screenshot: the default welcome page, served from the index.html page in the
/usr/share/nginx/html folder)

203
Running the container
• Key aspects to remember when running the container
– Use -d to run it in detached mode
– Use -P to automatically map the container ports (more on this in Module 9)
• When we visit our server URL we should now see our index.html file inside
our public_html folder instead of the default NGINX welcome page


Run an nginx container and map our public_html folder to the volume at
/usr/share/nginx/html in the container. <path> is the path to public_html
docker run -d -P -v <path>:/usr/share/nginx/html nginx

204
Running the container (More Examples)

docker run -it -v ~/.bash_history:/.bash_history myimage

docker run --rm -v $(pwd):/myfiles busybox sh

docker run --rm --volumes-from john1 -v $(pwd):/backup busybox tar cvf /backup/john2.tar /john

205
EX8.5 – Run an NGINX container
1. Run an NGINX container and map your public_html folder to a volume at
/usr/share/nginx/html.

docker run -d -P -v <public_html>:/usr/share/nginx/html nginx

Remember to specify the full path to your public_html folder
2. Get terminal access to your container

docker exec -it <container id> bash
3. Check the /usr/share/nginx/html folder for your index.html file and then exit
the terminal
4. Run docker ps to find the host port which is mapped to port 80 on the container
5. On your browser, access your AWS server URL and specify the port from question
4)
6. Verify you can see the contents of your index.html file from your public_html
folder

206
EX8.5 – (cont’d)
1. Now modify your index.html file
2. Refresh your browser and verify that you can see the changes

207
Use cases for mounting host directories
• You want to manage storage and snapshots yourself.
– With LVM, or a SAN, or ZFS, or anything else!
• You have a separate disk with better performance (SSD) or resiliency
(EBS) than the system disk, and you want to put important data on that
disk.
• You want to share your source directory between your host (where the
source gets edited) and the container (where it is compiled or executed).
– Good for testing purposes but not for production deployment

208
Volumes in Dockerfile
• VOLUME instruction creates a mount point
• Can specify arguments in a JSON array or string
• Cannot map volumes to host directories
• Volumes are initialized when the container is executed

String example
VOLUME /myvol

String example with multiple volumes
VOLUME /www/website1.com /www/website2.com

JSON example
VOLUME ["/myvol", "/myvol2"]

209
Example Dockerfile with Volumes
• When we run a container from this image, the volume will be initialized along with any
data in the specified location
• If we want to setup default files in the volume folder, the folder and file must be
created first

FROM ubuntu:14.04

RUN apt-get update
RUN apt-get install -y vim \
    wget

RUN mkdir /data/myvol -p && \
    echo "hello world" > /data/myvol/testfile
VOLUME ["/data/myvol"]

210
EX8.6 – Volumes in Dockerfile
1. Open your Dockerfile in the myimage folder
2. Delete or comment out the CMD and ENTRYPOINT instructions
3. Add an instruction to create a folder /data/myvol

RUN mkdir /data/myvol -p
4. Add an instruction to create a file called test inside /data/myvol

RUN echo "put some text here" > /data/myvol/test
5. Define a volume at /data/myvol

VOLUME /data/myvol
6. Build the image and tag as 1.1

docker build -t myimage:1.1 .
7. Run a container from the image and specify a bash terminal as your process
8. Check the /data/myvol folder and verify that your file is present


211
Data containers
• A data container is a container created for the purpose of referencing
one or many volumes
• Data containers don’t run any application or process
• Used when you have persistent data that needs to be shared with other
containers
• When creating a data container, you should give it a custom name to
make it easier to reference

212
Custom container names
• By default, containers we create are given a randomly generated name
• To give your container a specific name, use the --name option on the docker
run command
• An existing container can be renamed using the docker rename command

docker rename <old name> <new name>

Create a container and name it mynginx


docker run -d -P --name mynginx nginx

Rename the container called happy_einstein to mycontainer
docker rename happy_einstein mycontainer

213
EX8.7 – Rename some containers
1. Run a container with a custom name of “mycontainer”
2. Rename the container to “testcontainer”

214
Creating data containers
• We just need to run a container and specify a volume
• Should run a container using a lightweight image such as busybox
• No need to run any particular process, just run “true”

Run our data container using the busybox image


docker run --name mydata -v /data/app1 busybox true

215
Using data containers
• Data containers can be used by other containers via the 

--volumes-from option in the docker run command
• Reference your data container by its container name. For example: 

--volumes-from datacontainer …

216
EX8.8 – Data containers
1. Run a busybox container and define a volume at /srv/www. Call the container
“webdata”

docker run --name webdata -v /srv/www busybox
2. Run an ubuntu container and reference the volume in your webdata container.
Name it webserver

docker run -it --name webserver --volumes-from webdata
ubuntu:14.04
3. On the container terminal, verify that you can see the /srv/www folder
4. Add a text file to the folder
5. Exit the terminal but do not stop the container 

CTRL + P + Q

217
Chaining containers
Three containers chained together:

app1_test --volumes-from--> app1 --volumes-from--> appdata (volume: /srv/www)
• Container app1 will mount the /srv/www volume from appdata


• Container app1_test will mount all volumes in app1, which will be the
same /srv/www volume from appdata
218
EX8.9 – Chain container volumes
1. Run another ubuntu container and mount all the volumes from the
“webserver” container that was created in exercise 8.8. Call this
container webserver2

docker run -it --name webserver2 --volumes-from
webserver ubuntu:14.04
2. In the container terminal, check the /srv/www folder and verify that
the file you created in exercise 8.8 is present
3. Create another text file in the folder
4. Go back into the webserver container and check the /srv/www
folder there to verify the new file from question 3) is present

docker attach webserver

219
A more practical example
• Let’s run an NGINX container but separate its operation across multiple
containers
– One container to handle the log files
– One container to handle the web content that is to be served
– One container to run NGINX

220
Diagram of example

Three containers:

• logdata – holds a volume at /var/log/nginx
• webdata – holds a volume at /usr/share/nginx/html, mapped to the host
folder /home/user/public_html (index.html, file1.html, …)
• webserver – mounts both volumes via --volumes-from logdata and
--volumes-from webdata
221
Setup the data containers
• Setup the logdata container
• Setup the webdata container, mount it to our public_html folder

222
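The commands for these two data containers were shown as screenshots in the original deck; a sketch following the pattern from the "Creating data containers" slide (the container names and the public_html path are assumptions):

```shell
# Data container for the NGINX log files
docker run --name logdata -v /var/log/nginx busybox true

# Data container for the web content, mapping the host's
# public_html folder onto the NGINX document root
docker run --name webdata \
  -v /home/user/public_html:/usr/share/nginx/html \
  busybox true
```

Both use the lightweight busybox image and run `true` so the containers exit immediately while their volumes remain available.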
Setup the webserver container
• This container needs to run in detached mode
• We can specify the --volumes-from option multiple times
• Remember to use -P for port mapping

223
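Putting the points above together, the webserver container might be run like this (the command was a screenshot in the original deck; container names are assumptions):

```shell
# Run NGINX detached, mounting the volumes from both data
# containers and auto-mapping its exposed ports to the host
docker run -d -P --name webcontainer \
  --volumes-from logdata \
  --volumes-from webdata \
  nginx
```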
EX8.10 – Build the NGINX example
1. Double check that your public_html folder contains an index.html
file
2. Create one if it is missing
3. Follow the steps on the previous slides to create your data containers
first
4. Then create the webcontainer and mount the volumes from both
data containers
5. Run docker ps to find the port mapping for port 80 for your
webserver container. Note down this number somewhere.

224
EX8.10 – (cont’d)
5. Get terminal access into webcontainer and check the /usr/share/nginx/
html and /var/log/nginx folders. 

docker exec -it webcontainer bash
6. Check that the index.html file matches the one in your public_html folder
on the host
7. Go to /var/log/nginx, open and follow the access log

tail -f /var/log/nginx/access.log
8. On your browser, go to the URL of your AWS instance and specify the port
number
9. Verify that you can see the contents of your index.html page in
public_html
10. Hit the page a few more times and check for log entries on your terminal

225
EX8.10 – (cont’d)
11. Exit the webcontainer terminal and return to your host terminal
12. Modify the contents of your index.html inside public_html and hit
the server URL on your browser again to confirm that the changes have
been picked up.

226
Backup your data containers
• It’s a good idea to back up data containers such as our logdata
container, which has our NGINX log files
• Backups can be done with the following process:
– Create a new container and mount the volumes from the data container
– Mount a host directory as another volume on the container
– Run the tar process to backup the data container volume onto your host
folder

227
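The three steps above can be combined into a single throwaway container run — a sketch assuming the logdata container from the earlier example and a backup folder under your home directory:

```shell
# Mount the logdata volumes plus a host folder, then tar the
# log volume into the host folder; --rm removes the helper
# container once the tar process exits
docker run --rm \
  --volumes-from logdata \
  -v /home/user/backup:/backup \
  ubuntu:14.04 \
  tar cvf /backup/nginxlogs.tar /var/log/nginx
```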
EX8.11 – backup our log container
1. Create a folder called backup in your home directory
2. Backup the volume in your logdata container using the process from the
previous slide (replace "johnnytu" in the host backup path with your
username and put the whole command on one line)
3. Check your backup folder for the tar file
4. Run tar -tvf nginxlogs.tar and verify that you can see the files
error.log and access.log
228
Volumes defined in images
• Most images will have volumes defined in their Dockerfile
• Can check by using docker inspect command against the image
• docker inspect can be run against an image or a container
• To run against an image, specify either the image repository and tag or the image
id.

Inspect the properties of the ubuntu:14.04 image


docker inspect ubuntu:14.04
OR
docker inspect <image id>

229
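To pull out just the volume definitions rather than the full inspect output, docker inspect accepts a Go-template --format option — for example (the postgres image is used here because its Dockerfile defines a volume; exact output shape can vary by Docker version):

```shell
# Show only the volumes defined in the image's Dockerfile
# (postgres defines /var/lib/postgresql/data)
docker inspect --format='{{json .Config.Volumes}}' postgres
```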
Inspecting an image

230
Module summary
• Volumes can be mounted when we run a container during the docker
run command or in a Dockerfile
• Volumes bypass the copy-on-write system
• We can map a host directory to a volume in a container
• A volume persists even after its container has been deleted
• A data container is a container created for the purpose of referencing
one or many volumes

231
Module 9:

Container Networking
Module objectives
In this module we will
• Explain the Docker networking model for containers
• Learn how to map container ports to host ports manually and
automatically
• Learn how to link containers together

233
Docker networking model
• Containers do not get a public IPv4 address
• They are allocated a private address
• Services running in a container must be exposed port by port
• Container ports have to be mapped to host ports to avoid conflicts

• (This describes the pre-libnetwork networking model)

234
The docker0 bridge
• When Docker starts, it creates a virtual interface called docker0 on the host
machine
• docker0 is assigned a random IP address and subnet from the private range
defined by RFC 1918

johnnytu@docker-ubuntu:~$ ip a


3: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state
DOWN group default
link/ether 56:84:7a:fe:97:99 brd ff:ff:ff:ff:ff:ff
inet 172.17.42.1/16 scope global docker0
valid_lft forever preferred_lft forever
inet6 fe80::5484:7aff:fefe:9799/64 scope link
valid_lft forever preferred_lft forever

235
The docker0 bridge
• The docker0 interface is a virtual Ethernet bridge interface
• It passes or switches packets between two connected devices just like a
physical bridge or switch
– Host to container
– Container to container
• Each new container gets one interface that is automatically attached to
the docker0 bridge

236
Checking the bridge interface
• We can use the brctl (bridge control) command to check the
interfaces on our docker0 bridge
• Install bridge-utils package to get the command

apt-get install bridge-utils
• Run 

brctl show docker0

237
Checking the bridge interface
• Spin up some containers and then check the bridge again

238
Check container interface
• Get into the container terminal and run the ip a command

239
Check container networking properties
• Use docker inspect command and look for the NetworkSettings field

240
Diagram of networking model
Host
 └─ docker0 bridge (IP: 172.17.42.1)
     ├─ vethdc.. ↔ eth0 of container1 (IP: 172.17.0.120)
     └─ vethdd.. ↔ eth0 of container2 (IP: 172.17.0.121)
241
EX9.1 – Ping container
1. Launch an Ubuntu container in detached mode

docker run -d -it ubuntu:14.04
2. Inspect the container and note down its IP address

docker inspect \
  --format='{{.NetworkSettings.IPAddress}}' \
  <container id>
3. Launch another Ubuntu container and get terminal access inside

docker run -it ubuntu:14.04 bash
4. Ping your first container using the IP Address obtained in question 2.
5. Ping both your containers from your host machine

242
Mapping ports
• Recap: containers have their own network and IP address
• Map exposed container ports to ports on the host machine
• Ports can be manually mapped or auto mapped
• You can see the port mapping for each container on the docker ps
output

243
Manual port mapping
• Uses the -p option (lowercase p) in the docker run command
• Syntax

-p [host port]:[container port]
• To map multiple ports, specify the -p option multiple times

Map port 8080 on the tomcat container to port 80 on the host

docker run -d -p 80:8080 tomcat

Map port 80 on the host to port 80 on the nginx container and port 81 on
the host to port 8080 on the nginx container
docker run -d -p 80:80 -p 81:8080 nginx

244
Docker port command
• The docker ps output is not ideal for displaying port mappings
• We can use the docker port command instead

245
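The command shown on this slide was a screenshot; a sketch of typical usage (the container name mynginx is an assumption, and the host ports you see will depend on your system):

```shell
# List all port mappings for a container
docker port mynginx

# Query the mapping for a single container port,
# e.g. prints something like 0.0.0.0:49153
docker port mynginx 80
```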
EX9.2 – Manual port mapping
1. Run an nginx container and map port 80 on the container to port 80 

on your host. Map port 8080 on the container to port 90 on the host

docker run -d -p 80:80 -p 90:8080 nginx
2. Verify the port mappings with the docker port command

docker port <container name>

246
Automapping ports
• Use the -P option in docker run command
• Automatically maps exposed ports in the container to a port number in the host
• Host port numbers used go from 49153 to 65535
• Only works for ports defined in the Dockerfile EXPOSE instruction

Auto map ports exposed by the NGINX container to a port value on the
host
docker run -d -P nginx:1.7

247
EXPOSE instruction
• Configures which ports a container will listen on at runtime
• Ports still need to be mapped when container is executed

FROM ubuntu:14.04
RUN apt-get update
RUN apt-get install -y nginx

EXPOSE 80 443

CMD ["nginx", "-g", "daemon off;"]

248
EX9.3 – Auto mapping
1. Run a container using the “myimage” image built from previous
exercises. Use the -P option for auto mapping ports

docker run -d -P myimage
2. Check the port mappings. Notice that there are none
3. Modify the Dockerfile inside myimage and expose port 80 and 8080.

EXPOSE 80 8080
4. Build the image 

docker build -t myimage .
5. Repeat step 1 and 2. This time you should notice your port mapping

249
Linking Containers
Linking is a communication method between containers which
allows them to securely transfer data from one to another

• A link involves a source container and a recipient container
• Recipient containers have access to data on source containers
(recipient --link--> source)
• Links are established based on container names

250
Uses of Linking
• Containers can talk to each other without having to expose ports to the
host
• Essential for micro service application architecture
• Example:
– Container with Tomcat running
– Container with MySQL running
– Application on Tomcat needs to connect to MySQL

251
Creating a Link
1. Create the source container first
2. Create the recipient container and use the --link option
• Best practice – give your containers meaningful names
• Format for linking

name:alias

Create the source container using the postgres image

docker run -d --name database postgres

Create the recipient container and link it
docker run -d -P --name website --link database:db nginx

252
The underlying mechanism
• Linking provides a secure tunnel between the containers
• Docker will create a set of environment variables based on your --link
parameter
• Docker also exposes the environment variables from the source
container.
– Only the variables created by Docker are exposed
– Variables are prefixed by the link alias
– ENV instruction in the container Dockerfile
– Variables defined during docker run
• A DNS lookup entry will be added to the /etc/hosts file based on your alias

253
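As a sketch of what this looks like with the database/db example from the previous slide (variable names follow Docker's documented legacy-link scheme; exact values depend on the image and your network):

```shell
# Start the source container, then inspect the link variables
# from a short-lived recipient container
docker run -d --name database postgres
docker run --rm --link database:db ubuntu:14.04 env | grep '^DB_'

# Typical variables created by the link, for example:
#   DB_PORT=tcp://172.17.0.5:5432
#   DB_PORT_5432_TCP_ADDR=172.17.0.5
#   DB_PORT_5432_TCP_PORT=5432
#   DB_ENV_...  (variables from the source container's ENV)
```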
EX9.4 - Link two Containers
1. Run a container in detached mode using the tomcat image. 

Name the container “appserver”

docker run -d --name appserver tomcat
2. Run another container using the Ubuntu image and link it with the “appserver”
container. Use the alias “appserver”, run the bash terminal as the main process

docker run -it --name client \
  --link appserver:appserver \
  ubuntu:14.04 bash
3. In the “client” container terminal, open the /etc/hosts file
4. What can you observe?
5. Ping your Tomcat container without using the IP address

ping appserver

254
Environment variables in source container
• Let’s inspect the variables defined in our “appserver” Tomcat container

255
Environment variables in recipient container
• Now we will check the environment variables in the Ubuntu container
which is linked to our “appserver” container

• Connection information created from the --link option
• Variables created from the source container

256
Module summary
• Docker containers run in a subnet provisioned by the docker0 bridge on
the host machine
• Auto mapping of container ports to host ports only applies to the port
numbers defined in the Dockerfile EXPOSE instruction
• Linking provides a secure method of communication between containers
• Key Dockerfile instructions
– EXPOSE

257
Module summary
• Docker Compose makes it easier to manage micro service applications
by making it easy to spin up and manage multiple containers
• Each service defined in the application is created as a container and can
be scaled to multiple containers
• Docker Compose can create and run containers in a Swarm cluster

258
Further Information
Additional resources
• Docker homepage - http://www.docker.com/
• Docker Hub - https://hub.docker.com
• Docker blog - http://blog.docker.com/
• Docker documentation - http://docs.docker.com/
• Docker Getting Started Guide - http://www.docker.com/gettingstarted/
• Docker code on GitHub - https://github.com/docker/docker
• Docker mailing list - https://groups.google.com/forum/#!forum/docker-user
• Docker on IRC: irc.freenode.net and channels #docker and #docker-dev
• Docker on Twitter - http://twitter.com/docker
• Get Docker help on Stack Overflow - http://stackoverflow.com/search?q=docker

260
THANK YOU
