Painless Docker Sample


Painless Docker

Unlock The Power Of Docker & Its Ecosystem

Aymen El Amri @eon01


This book is for sale at http://leanpub.com/painless-docker

This version was published on 2017-02-06

This is a Leanpub book. Leanpub empowers authors and publishers with the Lean Publishing
process. Lean Publishing is the act of publishing an in-progress ebook using lightweight tools and
many iterations to get reader feedback, pivot until you have the right book and build traction once
you do.

2016 - 2017 Aymen El Amri @eon01


Tweet This Book!
Please help Aymen El Amri @eon01 by spreading the word about this book on Twitter!
The suggested tweet for this book is:
I just bought Painless Docker: Unlock The Power Of Docker & Its Ecosystem
The suggested hashtag for this book is #PainlessDocker.
Find out what other people are saying about the book by clicking on this link to search for this
hashtag on Twitter:
https://twitter.com/search?q=#PainlessDocker
Also By Aymen El Amri @eon01
Saltstack For DevOps
The Jumpstart Up
Contents

Preface
    To Whom Is This Book Addressed?
    How To Properly Enjoy This Book
    Conventions Used In This Book
    How To Contribute And Support This Book?

Chapter I - Introduction To Docker & Containers
    What Are Containers
    Containers Types
        Chroot Jail
        FreeBSD Jails
        Linux-VServer
        Solaris Containers
        OpenVZ
        Process Containers
        LXC
        Warden
        LMCTFY
        Docker
        RKT
    Introduction To Docker
    What Is The Relation Between The Host OS And Docker
    What Does Docker Add To LXC Tools
    Docker Use Cases
        Versioning & Fast Deployment
        Distribution & Collaboration
        Multi Tenancy & High Availability
        CI/CD
        Isolation & The Dependency Hell
        Using The Ecosystem

Chapter II - Installation & Configuration
    Requirements & Compatibility
    Installing Docker On Linux
        Ubuntu
        CentOS
        Debian
    Docker Toolbox
    Docker For Mac
    Docker For Windows
    Docker Experimental Features
        Docker Experimental Features For Mac And Windows
    Removing Docker
    Docker Hub
    Docker Registry
        Deploying Docker Registry On Amazon Web Services
        Deploying Docker Registry On Azure
    Docker Store

Docker and the Docker logo are trademarks or registered trademarks of Docker, Inc. in the United
States and/or other countries.
Preface
Docker is an amazing tool. Maybe you have tried or tested it, or maybe you have already started using it on some or all of your production servers, but managing and optimizing it can become complex very quickly if you don't understand some of the basic and advanced concepts that I try to explain in this book.
The fact that the container ecosystem is rapidly changing is also a constraint on stability and a source of confusion for many operations engineers and developers.
Most of the examples found in blog posts and tutorials are, in many cases, promoting Docker or giving tiny examples; managing and orchestrating Docker is more complicated, especially under high-availability constraints.
This containerization technology is changing the way system engineering, development and release management have worked for years, so it deserves your full attention, because it will be one of the pillars of future IT technologies, if it is not already the case.
At Google, everything runs in a container: according to The Register, two billion containers are launched every week. Google has been running containers for years, since before containerization technologies were democratized, and this is one of the secrets of the performance and operational smoothness of the Google search engine and all of its other services.
Some years ago, I was in doubt about using Docker. I played with it on testing machines and later decided to use it in production. I have never regretted that choice: some months ago, I created a self-service platform for the developers at my startup, an internal, scalable PaaS. That was awesome! I gained more than 14x on some production metrics and reached my goal of a service with an SLA and an Apdex score of 99%.

Apdex (Application Performance Index) is an open standard that defines a standardized method to report, benchmark, and track application performance.

SLA (Service Level Agreement) is a contract between a service provider (either internal or external) and the end user that defines the level of service expected from the service provider.

Goal Reached

It was not just the usage of Docker (that would be too easy); it was a whole list of things to do, like moving to micro-services and service-oriented architectures, changing the application and infrastructure architecture, continuous integration, etc. But Docker was one of the most important items on my checklist, because it smoothed the whole stack's operations and transformation, helped me with continuous integration and the automation of routine tasks, and was a good platform on which to create our own internal PaaS.
Some years ago, computers had a central processing unit and a main memory hosted in a main machine; then came mainframes, which were inspired by that technology. Just after that, IT had a newborn called the virtual machine. The revolution was that computer hardware, using a hypervisor, allows a single machine to act as if it were many machines. Virtual machines were mostly run on on-premise servers, but since the emergence of cloud technologies, VMs have moved to the cloud, so instead of having to invest heavily in data centers and physical servers, one can use the same virtual machine in a provider's infrastructure and benefit from the pay-as-you-go advantage of the cloud.
Over the years, requirements change and new problems appear; that is why solutions also tend to change and new technologies emerge.
Nowadays, with the fast democratization of software development and cloud infrastructures, new problems appear, and containers are being widely adopted since they offer suitable solutions.
A good example of these problems is supporting a software environment identical to production while developing. Weird things happen when your development and testing environments are not the same, and the same goes for the production environment. In this particular case, you should provide and distribute this environment to your R&D and QA teams.
But running a Node.js application that has 1 MB of dependencies, plus the 20 MB Node.js runtime, in an Ubuntu 14.04 VM will take up to 1.75 GB. It is better to distribute a small container image than 1 GB of unused libraries.

Containers contain only the OS libraries and the Node.js dependencies, so rather than starting with everything included, you can start with the minimum and then add dependencies, so that the same Node.js application can be 22 times smaller. Using optimized containers, you can run more applications per host.
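To make this concrete, here is a minimal sketch of such an optimized image; the base image choice, entry point and file layout are illustrative assumptions, not a prescription:

```dockerfile
# Hypothetical Node.js service; an alpine-based image keeps the final
# size in the tens of megabytes instead of gigabytes.
FROM node:alpine
WORKDIR /app
# Copy the dependency manifest first so the install layer is cached
# as long as package.json does not change.
COPY package.json .
RUN npm install --production
COPY . .
CMD ["node", "server.js"]
```

Built this way, the image carries the runtime and the application's own dependencies, and nothing else.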
Containers are problem solvers, and one of the most sophisticated and widely adopted container solutions is Docker.

To Whom Is This Book Addressed?


To developers, system administrators, QA engineers, operation engineers, architects and anyone who has to work in one of these environments in collaboration with the others, or simply in an environment that requires knowledge of development, integration and system administration.
The most common idea is that developers think they are here to serve the machines by writing code and applications, while system administrators think that machines should work for them simply by keeping them happy (maintenance, optimization, etc.).
Moreover, within the same company there is generally some tension between the two teams:

System administrators accuse developers of writing code that consumes memory, does not meet system security standards or is not adapted to the configuration of the available machines.
Developers accuse system administrators of being lazy, lacking innovation and being seriously uncool!

No more mutual accusations: with the evolution of software development, infrastructure and Agile engineering, the concept of DevOps was born.
DevOps is more a philosophy and a culture than a job (even if some of the positions I have held were called DevOps). This philosophy seeks closer collaboration between, and a combination of, the different roles involved in software development, such as the developer, the person responsible for operations and the person responsible for quality assurance. Software must be produced at a frenetic pace, while at the same time waterfall development seems to have reached its limits.

If you are a fan of service-oriented architectures, automation and the collaboration culture;
if you are a system engineer, a release manager or an IT administrator working on DevOps, SysOps or WebOps;
if you are a developer seeking to join the new movement;

then this book is addressed to you. Docker is one of the most used tools in DevOps environments.
Whatever your level with Docker, even if you are new to its ecosystem, through this book you will first learn the basics of Docker (installation, configuration, the Docker CLI, etc.) and

then move easily to more complicated things like using Docker in your development, testing and
live environments.
You will also see how to write your own Docker API wrapper and then master the Docker ecosystem, from orchestration and continuous integration to configuration management and much more.
I believe in learning led by practical, real-world examples, and you will be guided through all of this book by tested examples.

How To Properly Enjoy This Book


This book contains technical explanations, with each case showing an example of a command or a configuration to follow. The explanation gives you the general idea, and the code that follows gives you convenience and helps you practice what you are reading. Preferably, you should always look at both parts for maximum understanding.
As with any new tool or programming language you have learned, it is normal to encounter difficulties and confusion in the beginning, perhaps even later. If you are not used to learning new technologies, you may even have only a modest understanding while being at an advanced stage of this book. Do not worry, everyone has been through this kind of situation at least once.
At the beginning, you could read diagonally while focusing on the basic concepts; then you could try the first practical manipulations on your server or laptop, and occasionally come back to this book for further reading about a specific subject or concept.
This book is not an encyclopedia, but it sets out the most important parts needed to learn, and even master, Docker and its fast-growing ecosystem. If you find words or concepts that you are not comfortable with, just take your time and do your own online research.
Learning can be serial, so understanding one topic may require understanding another; do not lose patience: you will go through chapters with good examples of explained, practical use cases. Through the examples, try to apply your acquired understanding, and no, it will not hurt to go back to previous chapters if you are unsure or in doubt.
Finally, try to be pragmatic and keep an open mind if you encounter a problem: resolution begins by asking the right questions.

Conventions Used In This Book


Basically, this is a technical book where you will find commands (Docker commands) and code (YAML, Python, etc.).
Commands and code are written in a different format. Example:

docker run hello-world

This book uses an italic font for technical words such as the names of libraries, modules and languages. The goal is to get your attention while you are reading and to help you identify them.
You will also find two icons. I have tried to keep things as simple as possible, so I have chosen not to use too many symbols; you will only find:

To highlight useful and important information.

To highlight a warning or a cautionary advice.

How To Contribute And Support This Book?


This work will always be a work in progress, but that does not mean it is not a complete learning resource: writing a perfect book is impossible, unlike iterative and continuous improvement.
I am an adopter of the lean philosophy, so the book will be continuously improved according to many criteria, the most important one being your feedback.
I imagine that some readers do not know how lean publishing works, so I'll try to explain briefly: say the book is 25% complete; if you pay for it at this stage, you pay the price of the 25% but get all of the updates until it reaches 100%.
Another point: for me, lean publishing is not about money. I refused several interesting offers from known publishers because I want to be free from restrictions, DRM, etc.
If you have any suggestions, or if you encounter a problem, the best way to report it is through a tracking system for issues and recommendations about this book; I recommend using this GitHub repository. You can find me on Twitter, or use my blog's contact page if you would like to get in touch.
This book is not perfect, so you may find typos, punctuation errors or missing words. On the other hand, every line of the code, configurations and commands used was tested beforehand.
https://github.com/eon01/PainlessDocker/issues
https://twitter.com/eon01
http://eon01.com

If you enjoyed reading Painless Docker and would like to support it, your testimonials will be more than welcome; send me an email. If you need a development/testing server to manipulate Docker, I recommend using DigitalOcean; you can also show your support by using this link to sign up.
If you want to join more than 1000 developers, SRE engineers, sysadmins and IT experts, you can subscribe to the DevOpsLinks community, where you will be invited to join our newsletter and our team chat.
mailto:[email protected]
https://m.do.co/c/76a5a96b38be
http://devopslinks.com
Chapter I - Introduction To Docker &
Containers

 o   ^__^
  o  (oo)\_______
     (__)\       )\/\
         ||----w |
         ||     ||

What Are Containers


Containers are the new virtualization.
Containers are the technology that allows you to isolate, build, package, ship, run and scale an application.
A container makes it easy to move an application between development, testing and production environments.
This technology has existed for a long time and is not revolutionary in itself: the real value it brings is not the technology, but getting people to agree on something. On the other hand, it is experiencing a rebirth thanks to easy-to-manage containerization tools like Docker.

Containers Types
The popularity of Docker has made some people think that it is the only container technology, but there are actually many others. Let's enumerate most of them.
The following list is ordered from the least to the most recent technology.

Chroot Jail
The first container technology was the chroot.
Chroot is a system call on *nix OSs that changes the apparent root directory of the current running process and its children. A process running in a chroot jail does not know about the real filesystem root directory.
A program run in such an environment cannot access files and commands outside that directory tree. This modified environment is called a chroot jail.
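As a minimal sketch (the paths and the assumption of a dynamically linked /bin/sh are specific to typical Linux systems), you can build a tiny chroot jail by copying a shell and the libraries it depends on into a directory, then entering it with the chroot command:

```shell
# Build a throwaway jail directory containing only /bin/sh and its libraries.
JAIL=$(mktemp -d)
mkdir -p "$JAIL/bin"
cp /bin/sh "$JAIL/bin/"

# Copy every shared library that /bin/sh depends on into the jail,
# preserving the original directory layout.
for lib in $(ldd /bin/sh | grep -o '/[^ )]*'); do
  mkdir -p "$JAIL$(dirname "$lib")"
  cp "$lib" "$JAIL$lib"
done

# Entering the jail requires root; inside it, / is the jail directory
# and nothing outside it is visible.
echo "Enter the jail with: sudo chroot $JAIL /bin/sh"
```

Once inside, commands like `ls /` only see the jail's own tree, which is exactly the isolation property described above.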

FreeBSD Jails
The FreeBSD jail mechanism is an implementation of OS-level virtualization: a FreeBSD-based OS can be partitioned into several independent jails.
While a chroot jail restricts processes to a particular filesystem view, a FreeBSD jail goes further: it restricts the activities of a process with respect to the rest of the system. Jailed processes are sandboxed.

Linux-VServer
Linux-VServer is a virtual private server implementation based on OS-level virtualization capabilities that were added to the Linux kernel. Linux-VServer has many advantages, but its networking is based on isolation rather than virtualization, which prevents each virtual server from creating its own internal routing policy.

Solaris Containers
Solaris Containers are an OS-level virtualization technology for x86 and SPARC systems. A Solaris Container is a combination of system resource controls and the boundary separation provided by zones.
Zones act as completely isolated virtual servers within a single operating system instance (source: Wikipedia).

OpenVZ
Open Virtuozzo, or OpenVZ, is also an OS-level virtualization technology for Linux. OpenVZ allows system administrators to run multiple isolated OS instances, called containers, virtual private servers or virtual environments.

Process Containers
Engineers at Google (primarily Paul Menage and Rohit Seth) started work on this feature in 2006 under the name process containers. It was later renamed cgroups (control groups). We will see more details about cgroups later in this book.
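You can already observe cgroups from any Linux process: the kernel exposes each process's cgroup membership under /proc. The short Python sketch below (assuming a Linux host) lists the cgroups of the current process:

```python
# Read the cgroup membership of the current process from the proc filesystem.
# Each line has the form "hierarchy-id:controller-list:cgroup-path";
# on a cgroup v2 system the controller list is empty.
def read_cgroups(path="/proc/self/cgroup"):
    entries = []
    with open(path) as f:
        for line in f:
            hierarchy_id, controllers, cgroup_path = line.rstrip("\n").split(":", 2)
            entries.append((hierarchy_id, controllers or "v2", cgroup_path))
    return entries

if __name__ == "__main__":
    for hierarchy_id, controllers, cgroup_path in read_cgroups():
        print(f"{hierarchy_id:>3} {controllers:<25} {cgroup_path}")
```

Run inside a Docker container, the paths printed here reveal the cgroup the container runtime placed you in; on the bare host they usually point at your login session.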

LXC
Linux Containers, or LXC, is an OS-level virtualization technology that allows running multiple isolated Linux systems (containers) on a control host using a single Linux kernel. LXC provides a virtual environment that has its own process and network space. It relies on cgroups.
The difference between Docker and LXC is explained in detail later.

Warden
Warden used LXC in its initial stage; LXC was later replaced with CloudFoundry's own implementation. It can provide isolation on systems other than Linux, as long as they support isolation.

LMCTFY
Let Me Contain That For You, or LMCTFY, is the open source version of Google's container stack, which provides Linux application containers.
Google engineers have been collaborating with Docker on libcontainer and are in the process of porting the core LMCTFY concepts and abstractions to libcontainer.
The project is no longer actively developed; in the future, the core of LMCTFY will be replaced by libcontainer.

Docker
This is what we are going to discover through this book.

RKT
CoreOS started building a container runtime called rkt (pronounced "rocket").
CoreOS is designing rkt following the original premise of containers that Docker introduced, but with more focus on:

Composable ecosystem
Security
A different image distribution philosophy
Openness

Introduction To Docker
Docker is a containerization tool with a rich ecosystem that was conceived to help you develop, deploy and run any application, anywhere.
Unlike a traditional virtual machine, a Docker container shares the resources of the host machine without needing an intermediary (a hypervisor), so you don't need to install an operating system. A container holds the application and its dependencies, but works in an isolated and autonomous way.

Virtual Machines VS Docker

In other words, instead of a hypervisor with a guest OS on top, Docker uses its engine with containers on top.
Most of us are used to virtual machines, so why are containers and Docker taking such an important place in today's infrastructures?
This table briefly explains the differences and the advantages of using Docker:

                  VM                        Docker

Size              CoreOS = 1.2 GB           A Busybox container = 2.5 MB
Startup Time      Measured in minutes       An optimized Docker container will
                                            run in less than a second
Integration       Difficult                 More open to integration with
                                            other tools
Dependency Hell   Frustration               Docker fixes this
Versioning        No                        Yes

Docker is a process isolation tool that used LXC (an operating-system-level virtualization method for running multiple isolated Linux systems, i.e. containers, on a control host using a single Linux kernel) until version 0.9.
The basic difference between LXC and VMs is that with LXC there is only one instance of the Linux kernel running.
For curious readers: LXC was replaced by Docker's own libcontainer library, written in the Go programming language.
So a Docker container isolates your application running in a host OS, and the latter can run many other containers. Using Docker and its ecosystem, you can easily manage a cluster of containers, stop and start multiple applications, scale them, take snapshots of running containers, link multiple services running Docker, manage containers and clusters using APIs on top of them, automate tasks, create application watchdogs, and use many other features that are complicated to build without containers.
After finishing this book, you will know how to use all of these features and more.

What Is The Relation Between The Host OS And Docker


In a simple phrase: the host OS and the container share the same kernel.
If you are running Ubuntu as a host, your containers are going to use Ubuntu's kernel, but you can use CentOS or any other OS image inside your container. This is why the main difference between a virtual machine and a Docker container is that there is nothing between the kernel and the guest: Docker runs directly on your host's kernel.
You are probably asking: if Docker is using the host kernel, why should I install an OS within my container?
You are right: in some cases you can use Docker's scratch image, which is an explicitly empty image intended for building images from scratch. This is useful for containers that contain only a single binary and whatever it requires, such as the hello-world container that we are going to use in the next section.
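For illustration, a scratch-based image can look like the sketch below; the binary name "hello" is hypothetical:

```dockerfile
# An explicitly empty base: the image contains nothing but what we copy in.
FROM scratch
# "hello" is a hypothetical statically linked binary built beforehand;
# a dynamically linked binary would fail here, since the image has no libraries.
COPY hello /hello
ENTRYPOINT ["/hello"]
```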
So Docker is a process isolation environment, not an OS isolation environment (like virtual machines), and as said, you can use a container without an OS. But imagine you want to run an Nginx or an Apache container: you can run the server's binary, but you will need access to the filesystem in order to configure nginx.conf, apache.conf, httpd.conf or the available sites' configurations.
In this case, if you run a container without an OS, you will need to map folders from the container to the host, such as the /etc directory (since the configuration files live under /etc).
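One way to express such a mapping, for example with a Docker Compose file from the Docker ecosystem (the file names here are illustrative), is a read-only bind mount of the configuration file:

```yaml
# Bind-mount a local nginx.conf over the container's configuration file.
version: "2"
services:
  web:
    image: nginx
    volumes:
      - ./nginx.conf:/etc/nginx/nginx.conf:ro
```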
You can actually do that, but you will lose the change management feature that Docker containers offer: every change within the container filesystem would also be mapped to the host filesystem.
Therefore, among other reasons, Docker containers running an OS are used for portability and change management.
In the examples explained in this book, we often rely on official images that can be found on the official Docker Hub.

What Does Docker Add To LXC Tools


LXC owes its origin to the development of cgroups and namespaces in the Linux kernel. One of the most asked questions about Docker on the net is the difference between Docker and VMs, but also the difference between Docker and LXC.
http://hub.docker.com

This question was asked on Stack Overflow, and I am sharing the response of Solomon Hykes (the creator of Docker) under the CC BY-SA 3.0 license:

Docker is not a replacement for lxc. "lxc" refers to capabilities of the linux kernel (specifically namespaces and control groups) which allow sandboxing processes from one another, and controlling their resource allocations.
On top of this low-level foundation of kernel features, Docker offers a high-level tool with several powerful functionalities:

Portable deployment across machines. Docker defines a format for bundling an application and all its dependencies into a single object which can be transferred to any docker-enabled machine, and executed there with the guarantee that the execution environment exposed to the application will be the same. Lxc implements process sandboxing, which is an important pre-requisite for portable deployment, but that alone is not enough for portable deployment. If you sent me a copy of your application installed in a custom lxc configuration, it would almost certainly not run on my machine the way it does on yours, because it is tied to your machine's specific configuration: networking, storage, logging, distro, etc. Docker defines an abstraction for these machine-specific settings, so that the exact same docker container can run - unchanged - on many different machines, with many different configurations.

Application-centric. Docker is optimized for the deployment of applications, as opposed to machines. This is reflected in its API, user interface, design philosophy and documentation. By contrast, the lxc helper scripts focus on containers as lightweight machines - basically servers that boot faster and need less ram. We think there's more to containers than just that.

Automatic build. Docker includes a tool for developers to automatically assemble a container from their source code, with full control over application dependencies, build tools, packaging etc. They are free to use make, maven, chef, puppet, salt, debian packages, rpms, source tarballs, or any combination of the above, regardless of the configuration of the machines.

Versioning. Docker includes git-like capabilities for tracking successive versions of a container, inspecting the diff between versions, committing new versions, rolling back etc. The history also includes how a container was assembled and by whom, so you get full traceability from the production server all the way back to the upstream developer. Docker also implements incremental uploads and downloads, similar to git pull, so new versions of a container can be transferred by only sending diffs.

Component re-use. Any container can be used as a base image to create more specialized components. This can be done manually or as part of an automated build. For example you can prepare the ideal python environment, and use it as a base for 10 different applications. Your ideal postgresql setup can be re-used for all your future projects. And so on.

Sharing. Docker has access to a public registry (https://registry.hub.docker.com/) where thousands of people have uploaded useful containers: anything from redis, couchdb, postgres to irc bouncers to rails app servers to hadoop to base images for various distros. The registry also includes an official standard library of useful containers maintained by the docker team. The registry itself is open-source, so anyone can deploy their own registry to store and transfer private containers, for internal server deployments for example.

Tool ecosystem. Docker defines an API for automating and customizing the creation and deployment of containers. There are a huge number of tools integrating with docker to extend its capabilities. PaaS-like deployment (Dokku, Deis, Flynn), multi-node orchestration (maestro, salt, mesos, openstack nova), management dashboards (docker-ui, openstack horizon, shipyard), configuration management (chef, puppet), continuous integration (jenkins, strider, travis), etc. Docker is rapidly establishing itself as the standard for container-based tooling.

http://stackoverflow.com/questions/17989306/what-does-docker-add-to-lxc-tools-the-userspace-lxc-tools/18208445#18208445

Docker Use Cases


Docker has many use cases and advantages:

Versioning & Fast Deployment


The Docker Registry (or Docker Hub) can be considered a version control system for a given application; rollbacks and updates are easier this way.
Just like GitHub, Bitbucket or any other Git system, you can use tags to mark your image versions. If you tag a container image with each application release, it becomes easier to deploy and to roll back to the n-1 release.

ElasticSearch Tags

As you may already know, Git-like systems give you commit identifiers like 2.1-3-xxxxxx, but those
are not tags. Tags are created explicitly and point to a precise version; in Docker, tagging is done
with the docker tag command. Docker's versioning and tagging system works in the same way.
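To make the n-1 rollback idea concrete, here is a minimal shell sketch. The previous_release helper and the v1/v2/v3 tag names are hypothetical, purely for illustration; in practice the tags would come from your own release scheme.

```shell
# Hypothetical helper: given release tags ordered oldest-to-newest,
# print the tag to roll back to (the n-1 release).
previous_release() {
  shift $(( $# - 2 ))  # drop everything but the last two tags
  echo "$1"            # the tag just before the newest one
}

previous_release v1 v2 v3   # prints: v2
```

Rolling back then amounts to deploying the image carrying the printed tag.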

Distribution & Collaboration


If you would like to share images and containers, Docker enables this social feature so that anyone
can contribute to a public (or private) image.
Individuals and communities can collaborate and share images, and users can also vote for images.
On Docker Hub, you can find both trusted (official) and community images.
Some images offer continuous builds and security scanning to keep them up to date.

Multi Tenancy & High Availability


Using the right tools from the ecosystem, it is easier to run many instances of the same application
on the same server with Docker than with the mainstream approach.
Using a proxy, service discovery and a scheduling tool, you can start a second server (or more)
and load-balance your traffic between the cluster nodes.

CI/CD
Docker is used in production systems, but it is also a tool to run the same application on a
developer's laptop. A Docker image may move from development to QA to production without being
changed, so if you would like development to be as close as possible to production, Docker is a good solution.
Since Docker solves the "works on my machine" problem, it is important to highlight this use
case: most problems in software development and operations are due to the differences between
development and production environments.
If your R&D team uses the same image that the QA team tests against, and the same environment
is pushed to the live servers, it is certain that a great part of the problems (dev vs ops) will disappear.

Isolation & The Dependency Hell


Dockerizing an application also isolates it into a separate environment.
Imagine having two APIs written in two different languages, or in the same language but with
different versions.
In many cases, Docker simplifies this dependency hell through its isolation feature.

Using The Ecosystem


You can use Docker with multiple external tools: configuration management tools, orchestration
tools, file storage technologies, filesystem types, logging software, monitoring, self-healing, etc.
On the other hand, even with all the benefits of Docker, it is not always the best solution to use;
there are always exceptions.
Chapter II - Installation & Configuration

        o   ^__^
         o  (oo)\_______
            (__)\       )\/\
                ||----w |
                ||     ||

In Painless Docker, we are going to use version 1.12 of Docker. I used to use previous stable
versions like 1.11, but an important new feature (detailed later), the Swarm Mode, was
introduced in version 1.12: Swarm orchestration technology is now directly integrated into Docker,
where before it was an add-on.
I am a GNU/Linux user, but for Windows and Mac users, Docker unveiled with the same version
the first full desktop editions of the software for development on Mac and Windows machines.
There are many other interesting features, enhancements and simplifications in version 1.12 of
Docker; you can find the whole list in the Docker GitHub repository.
If you are completely new to Docker, you will not get all of the following new features yet, but you
will be able to understand them as you go along with this book. Maybe the most important new
features in Docker 1.12 are:
Builder:

- Support for UTF-8 in Dockerfiles

Distribution:

- Add max-concurrent-downloads and max-concurrent-uploads daemon flags, useful for situations where network connections don't support multiple downloads/uploads
- Provide more information to the user on docker load

Logging:
https://github.com/docker/docker

- Syslog logging driver now supports DGRAM sockets
- Add details option to docker logs to also display log tags
- Enable the syslog logger to have access to env and labels
- An additional syslog-format option, rfc5424micro, to allow microsecond resolution in syslog timestamps
- Inherit the daemon log options when creating containers
- Remove the docker/ prefix from log message tags and replace it with {{.DaemonName}} so that users have the option of changing the prefix

Networking:

- Built-in Virtual-IP based internal and ingress load-balancing using IPVS
- Routing Mesh using ingress overlay network
- Add network filter to docker ps
- Add container's short-id as default network alias
- run options --dns and --net=host are no longer mutually exclusive
- Fix DNS issue when renaming containers with generated names
- Allow both network inspect -f {{.Id}} and network inspect -f {{.ID}} to address inconsistency with inspect output

Plugins (experimental):

- New plugin command to manage plugins, with install, enable, disable, rm, inspect and set subcommands

Remote API (v1.24) & Client:

- Add security options to docker info output
- Add insecure registries to docker info output
- Prevent docker run -i --restart from hanging on exit

Runtime:

- New load/save image events
- Add support for reloading daemon configuration through systemd
- Add support for docker run --pid=container:<id>
- Add a detach event
- Fix an issue where containers are stuck in a Removal In Progress state

- Fix a bug that was returning an HTTP 500 instead of a 400 when not specifying a command on run/create
- If volume-mounted into a container, /etc/hosts, /etc/resolv.conf and /etc/hostname are no longer SELinux-relabeled (#22993)

Swarm Mode:

- New swarm command to manage swarms, with init, join, join-token, leave and update subcommands
- New service command to manage swarm-wide services, with create, inspect, update, rm and ps subcommands
- New node command to manage nodes, with accept, promote, demote, inspect, update, ps, ls and rm subcommands
- (experimental) New stack and deploy commands to manage and deploy multi-service applications

Volume:

- Add support for local and global volume scopes (analogous to network scopes)
- Allow volume drivers to provide a Status field
- Add name/driver filter support for volumes
- Mount/Unmount operations now receive an opaque ID to allow volume drivers to differentiate between two callers

I use Ubuntu 14.04 (Trusty) server edition with a 64-bit architecture as my main operating system,
but you will also see how to install Docker on other OSs like Windows and macOS.

Requirements & Compatibility


Docker itself doesn't need many resources, so a small amount of RAM is enough to install and run
the Docker Engine. What your containers need depends on what exactly you are running: if you run
a MySQL or MongoDB server inside a container, you will need a lot of memory.
Docker requires a 64-bit kernel.
For developers using Windows or Mac, you have the choice between Docker Toolbox and native Docker.
Native Docker is certainly faster, but you still have the choice.
If you will use Docker Toolbox:

Mac users: Your Mac must be running OS X 10.8 Mountain Lion or newer to run Docker.

Windows users: Your machine must have a 64-bit operating system running Windows 7 or
higher, with virtualization enabled.

If you prefer Docker for Mac, as mentioned on the official Docker website:

- Your Mac must be a 2010 or newer model, with Intel's hardware support for memory management unit (MMU) virtualization, i.e. Extended Page Tables (EPT)
- You must be running OS X 10.10.3 Yosemite or newer
- You must have at least 4 GB of RAM
- VirtualBox prior to version 4.3.30 must NOT be installed (it is incompatible with Docker for Mac; uninstall the older version of VirtualBox and retry the install if you missed this)

And if you prefer Docker for Windows:

- Your machine should have 64-bit Windows 10 Pro, Enterprise or Education (1511 November update, Build 10586 or later)
- The Hyper-V package must be enabled; if it is not, the Docker for Windows installer will enable it for you

Installing Docker On Linux


Docker is supported by all Linux distributions satisfying the requirements, but not all of their
versions, due to Docker's incompatibility with old kernel versions.
Kernels older than 3.10 do not support Docker and can cause data loss or other bugs.
Check your Kernel by typing:

uname -r

Docker recommends doing an upgrade and a dist-upgrade, and having a recent kernel version on
your servers, before using Docker in production.
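As a quick sanity check, the uname -r output can be compared against the 3.10 minimum. The kernel_at_least_3_10 function below is a hypothetical helper sketched for this book, not part of Docker:

```shell
# Hypothetical helper: succeed (exit 0) if the given kernel release
# string is at least 3.10, the minimum Docker supports.
kernel_at_least_3_10() {
  major=$(echo "$1" | cut -d. -f1)
  minor=$(echo "$1" | cut -d. -f2 | cut -d- -f1)
  [ "$major" -gt 3 ] || { [ "$major" -eq 3 ] && [ "$minor" -ge 10 ]; }
}

if kernel_at_least_3_10 "$(uname -r)"; then
  echo "Kernel is recent enough for Docker"
else
  echo "Kernel too old: upgrade before installing Docker"
fi
```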

Ubuntu
For Ubuntu, only these versions are supported to run and manage containers:

- Ubuntu Xenial 16.04 (LTS)
- Ubuntu Wily 15.10
- Ubuntu Trusty 14.04 (LTS)
- Ubuntu Precise 12.04 (LTS)

Remember that we are using Ubuntu 14.04.


Update your package manager, add the apt key and the Docker apt source list, then run the update command again.

sudo apt-get update
sudo apt-get install apt-transport-https ca-certificates
sudo apt-key adv --keyserver hkp://p80.pool.sks-keyservers.net:80 --recv-keys 58118E89F3A912897C070ADBF76221572C52609D
echo "deb https://apt.dockerproject.org/repo ubuntu-trusty main" | sudo tee -a /etc/apt/sources.list.d/docker.list
sudo apt-get update

Purge the old lxc-docker if you were using it before and install the new Docker Engine:

sudo apt-get purge lxc-docker
sudo apt-get install docker-engine

If you need to run Docker without root rights (as your current user), run the following commands,
then log out and back in so the group change takes effect:

sudo groupadd docker
sudo usermod -aG docker $USER

If everything went well, running this command will create a container that prints a Hello
World message and then exits without errors:

docker run hello-world

There is a good explanation of how Docker works in the output; if you have not noticed it, here
it is:

Hello from Docker!
This message shows that your installation appears to be working correctly.

To generate this message, Docker took the following steps:
 1. The Docker client contacted the Docker daemon.
 2. The Docker daemon pulled the "hello-world" image from the Docker Hub.
 3. The Docker daemon created a new container from that image which runs the
    executable that produces the output you are currently reading.
 4. The Docker daemon streamed that output to the Docker client, which sent it
    to your terminal.

CentOS
Docker runs only on CentOS 7.x. The same installation may apply to other EL7 distributions (but
they are not supported by Docker).
Add the yum repo:

sudo tee /etc/yum.repos.d/docker.repo <<-'EOF'
[dockerrepo]
name=Docker Repository
baseurl=https://yum.dockerproject.org/repo/main/centos/7/
enabled=1
gpgcheck=1
gpgkey=https://yum.dockerproject.org/gpg
EOF

Install Docker:

sudo yum install docker-engine

Start its service:

sudo service docker start

Set the daemon to run at system boot:

sudo chkconfig docker on

Test the Hello World image:

docker run hello-world

If you see output similar to the following, then your installation is fine:

Unable to find image 'hello-world:latest' locally
latest: Pulling from library/hello-world

c04b14da8d14: Pull complete
Digest: sha256:0256e8a36e2070f7bf2d0b0763dbabdd67798512411de4cdcf9431a1feb60fd9
Status: Downloaded newer image for hello-world:latest

Hello from Docker!
This message shows that your installation appears to be working correctly.

To generate this message, Docker took the following steps:
 1. The Docker client contacted the Docker daemon.
 2. The Docker daemon pulled the "hello-world" image from the Docker Hub.
 3. The Docker daemon created a new container from that image which runs the
    executable that produces the output you are currently reading.
 4. The Docker daemon streamed that output to the Docker client, which sent it
    to your terminal.

To try something more ambitious, you can run an Ubuntu container with:
 $ docker run -it ubuntu bash

Share images, automate workflows, and more with a free Docker Hub account:
 https://hub.docker.com

For more examples and ideas, visit:
 https://docs.docker.com/engine/userguide/

Now, if you would like to create a docker group and add your current user to it in order to avoid
running commands with sudo privileges:

sudo groupadd docker
sudo usermod -aG docker $USER

Verify your work by running the hello-world container without sudo.

Debian
Only the following versions are supported:

- Debian testing stretch (64-bit)
- Debian 8.0 Jessie (64-bit)
- Debian 7.7 Wheezy (64-bit) (backports required)

We are going to use the installation for Wheezy. In order to install Docker on Jessie (8.0), change the
backports entry and the sources.list entry to Jessie.
First of all, enable backports:

sudo su
echo "deb http://http.debian.net/debian wheezy-backports main" | tee -a /etc/apt/sources.list.d/backports.list
apt-get update

Purge other Docker versions if you have already used them:

apt-get purge "lxc-docker*"
apt-get purge "docker.io*"

and update your package manager:

apt-get update

Install apt-transport-https and ca-certificates:

apt-get install apt-transport-https ca-certificates

Add the GPG key:

apt-key adv --keyserver hkp://p80.pool.sks-keyservers.net:80 --recv-keys 58118E89F3A912897C070ADBF76221572C52609D

Add the repository:

echo "deb https://apt.dockerproject.org/repo debian-wheezy main" | tee -a /etc/apt/sources.list.d/docker.list
apt-get update

And install Docker:

apt-get install docker-engine

Start the service:

service docker start

Run the Hello World container in order to check that everything is good:

sudo docker run hello-world

You will see output similar to the following if Docker was installed without problems:

Unable to find image 'hello-world:latest' locally
latest: Pulling from library/hello-world

c04b14da8d14: Pull complete
Digest: sha256:0256e8a36e2070f7bf2d0b0763dbabdd67798512411de4cdcf9431a1feb60fd9
Status: Downloaded newer image for hello-world:latest

Hello from Docker!
This message shows that your installation appears to be working correctly.

To generate this message, Docker took the following steps:
 1. The Docker client contacted the Docker daemon.
 2. The Docker daemon pulled the "hello-world" image from the Docker Hub.
 3. The Docker daemon created a new container from that image which runs the
    executable that produces the output you are currently reading.
 4. The Docker daemon streamed that output to the Docker client, which sent it
    to your terminal.

To try something more ambitious, you can run an Ubuntu container with:
 $ docker run -it ubuntu bash

Share images, automate workflows, and more with a free Docker Hub account:
 https://hub.docker.com

For more examples and ideas, visit:
 https://docs.docker.com/engine/userguide/

Now, in order to use your current user (not the root user) to manage and run Docker, add the docker
group if it does not already exist:

exit # exit from the root shell
sudo groupadd docker

Add your preferred user to this group:

sudo gpasswd -a ${USER} docker

Restart the Docker daemon:

sudo service docker restart

Test the Hello World container to check that your current user has the right to execute Docker commands.

Docker Toolbox
A few months ago, installing Docker for my developers using macOS and Windows was a pain. Now
the new Docker Toolbox has made things easier: it is a quick and easy installer that will set up a full
Docker environment. The installation includes Docker, Machine, Compose, Kitematic and VirtualBox.

Installation Wizard

Docker Toolbox can be downloaded from Docker's website.

Using this tool you will be able to work with:

- docker-machine commands
- docker commands
- docker-compose commands
- the Docker GUI (Kitematic)
- a shell preconfigured for a Docker command-line environment
- Oracle VirtualBox

https://www.docker.com/products/docker-toolbox

The installation is quite easy:

Installation Wizard - 1

This is a screenshot of the Docker Toolbox installer:



Installation Wizard - 2

If you would like a default installation, press Next to accept all defaults and then click Install. If you
are running Windows, make sure you allow the installer to make the necessary changes.

Installation Wizard - 3

Now that you have finished the installation, in the application folder, click on Docker Quickstart
Terminal.

Docker Quickstart Terminal

Mac users, type the following command in order to:

- Create a machine called dev
- Create a VirtualBox VM
- Create an SSH key
- Start the VirtualBox VM
- Start the machine dev
- Set the environment variables for machine dev

Windows users can also follow these instructions, since many commands are common between the
two OSs.

bash '/Applications/Docker Quickstart Terminal.app/Contents/Resources/Scripts/start.sh'

Running the following command will show you how to connect Docker to this machine:

docker-machine env dev

Now, for testing, use the Hello World container:

docker run hello-world

Don't worry if you see this message:



Unable to find image 'hello-world:latest' locally

This is not an error: Docker is saying that the Hello World image is not on your local disk, so it
will be pulled from Docker Hub.

latest: Pulling from library/hello-world
535020c3e8ad: Pull complete
af340544ed62: Pull complete
Digest: sha256:a68868bfe696c00866942e8f5ca39e3e31b79c1e50feaee4ce5e28df2f051d5c
Status: Downloaded newer image for hello-world:latest

Hello from Docker.
This message shows that your installation appears to be working correctly.

If you are using Windows, it is actually not very different.
Click on the Docker Quickstart Terminal icon; if your operating system displays a prompt to allow
VirtualBox, choose yes, and a terminal will show on your screen. To test that Docker is working, type:

docker run hello-world

You will see the following message:

Hello from Docker.

You may also notice the explanation of how Docker works on your local machine.

To generate this message (the "Hello World" message), Docker took the following steps:

- The Docker Engine CLI client contacted the Docker Engine daemon.
- The Docker Engine daemon pulled the "hello-world" image from the Docker Hub (assuming it was not already locally available).
- The Docker Engine daemon created a new container from that image which runs the executable that produces the output you are currently reading.
- The Docker Engine daemon streamed that output to the Docker Engine CLI client, which sent it to your terminal.

After the installation, you can start using the GUI or the command line; click on the create
button to create a Hello World container, just to make sure everything is OK.

Kitematic

Docker Toolbox is a very good tool for every developer, but you may need more performance for
your local development; Docker for Mac and Docker for Windows are native to each OS.

Docker For Mac


Use the following link to download the .dmg file and install native Docker:

https://download.docker.com/mac/stable/Docker.dmg

To use native Docker, go back to the requirements section and check your system configuration.
After the installation, drag and drop Docker.app into your Applications folder and start Docker
from your applications list.

You will see a whale icon in your status bar; when you click on it, you can see a list of choices, and
you can also click on About Docker to verify that you are using the right version.
If you prefer using the CLI, open your terminal and type:

docker --version

or

docker -v

If you installed Docker 1.12, you will see:

Docker version 1.12.0, build 8eab29e

If you go to Docker.app preferences, you will find some settings; among the most important are the
file sharing options. In many cases, containers running on your local machine use a file system
mounted from a folder on your host. We do not need this for the moment, but remember later in this
book that if you mount a local folder into a container, you should come back to this step and share
the relevant files, directories or volumes on your local system with your containers.

Click + and navigate to the directory you want to share

Docker For Windows


Use the following link to download the .msi file and install native Docker:

https://download.docker.com/win/stable/InstallDocker.msi

The same applies to Windows: to use native Docker, go back to the requirements section and
check your system configuration.

- Double-click InstallDocker.msi to run the installer
- Follow the installation wizard
- Authorize Docker if your system asks you to
- Click Finish to start Docker

Start Docker on Windows

If everything went well, you will get a popup with a success message.
Now open cmd.exe (or PowerShell) and type:

docker --version

or

docker version

Containers running in your local development environment may need, in many cases that we will
see in this book, to access your file system: folders, files or drives. This is the case when you mount
a folder from your host file system into a Docker container. We will see many examples of this kind,
so remember to come back here and make the right configuration if mounting a directory or a file
is needed later in this book.

Sharing local drives with Docker in order to make them available to your containers

Docker Experimental Features


Even though Docker has a stable version that can safely be used in production environments, many
features are still in development, and you may need to plan future projects using Docker; in that
case you will want to test some of these features.
I had been testing Docker Swarm Mode since it was experimental, because I needed to evaluate this
feature in order to prepare the adequate architecture and servers, and to adapt development and
integration workflows to the coming changes.

You may encounter some instability and bugs using the experimental installation packages;
this is normal.

Docker Experimental Features For Mac And Windows


For native Docker running on both systems, to evaluate the experimental features you need to
download the beta channel installation packages.
For Mac:

https://download.docker.com/mac/beta/Docker.dmg

For Windows:

https://download.docker.com/win/beta/InstallDocker.msi

Docker Experimental Features For Linux


Running the following command will install the experimental version of Docker (you should have
curl installed):

curl -sSL https://experimental.docker.com/ | sh

Generally, curl | sh is not a good security practice, even if the transport is over HTTPS:
content can be modified on the server.
You can download the script, read it and then execute it:

wget https://experimental.docker.com/

Or you can get one of the following binaries, depending on your system architecture:

https://experimental.docker.com/builds/Linux/i386/docker-latest.tgz
https://experimental.docker.com/builds/Linux/x86_64/docker-latest.tgz

For the remainder of the installation:

tar -xvzf docker-latest.tgz
sudo mv docker/* /usr/bin/
sudo dockerd &

Removing Docker
Let's take Ubuntu as an example.
Purge the Docker Engine:

sudo apt-get purge docker-engine
sudo apt-get autoremove --purge docker-engine
sudo apt-get autoclean

This is enough in most cases, but to remove all of Docker's files, follow the next steps.
If you wish to remove all the images, containers and volumes:

sudo rm -rf /var/lib/docker

Then remove docker from apparmor.d:

sudo rm /etc/apparmor.d/docker

Then remove the docker group:

sudo groupdel docker

You have now completely removed Docker.



Docker Hub
Docker Hub is a cloud registry service for Docker.
Docker allows you to package artifacts/code and configuration into a single image. These images can
be reused by you, your colleagues or even your customers. If you would like to share your code, you
will generally use a Git repository like GitHub or Bitbucket; you can also run your own GitLab,
which allows you to have private on-premise Git repositories.
Things are very similar with Docker: you can use a cloud-based solution to share your images, like
Docker Hub, or run your own hub (a private Docker registry).

Docker Hub is a public Docker repository, but if you want to use a cloud-based solution
while keeping your images private, the paid version of Docker Hub allows you to have private
repositories.
Docker Hub allows you to:

- Access community, official and private image libraries
- Have public or paid private image repositories, to which you can push your images and from which you can pull them to your servers
- Create and build new images, with different tags, when the source code inside your container changes
- Create and configure webhooks, and trigger actions after a successful push to a repository
- Create workgroups and manage access to your private images
- Integrate with GitHub and Bitbucket

Bitbucket and Github Integration With Docker Hub

Basically, Docker Hub can be a component of your dev-test pipeline automation.


In order to use Docker Hub, go to the following link and create an account:

https://hub.docker.com/

If you would like to test whether your account is enabled, type:

docker login

Login with your Docker ID to push and pull images from Docker Hub. If you don't have
a Docker ID, head over to https://hub.docker.com to create one.
Now, go to the Docker Hub website and create a public repository. We will see how to send a running
container as an image to Docker Hub, and for that purpose we are going to use a sample app
generally used by Docker for demos, called vote (you can also find it in Docker's official GitHub
repository).
The vote app is a Python webapp which lets you vote between two options; it uses a Redis queue to
collect new votes, a .NET worker which consumes votes and stores them in a Postgres database
backed by a Docker volume, and a Node.js webapp which shows the results of the voting in real
time.

The Architecture Of The Vote App

I assume that you have created a working account on Docker Hub, typed the login command and
entered the right password. If you are at a beginner level with Docker, you may not understand all
of the next commands, but the goal of this section is just to demonstrate how a Docker registry
works (in this case, the registry used is the cloud-based one built by Docker and, as said, called
Docker Hub).
When you type the following command, Docker will check whether it has the image locally;
otherwise it will look for it on Docker Hub:

docker run -d -it -p 80:80 instavote/vote

You can find the image here:

https://hub.docker.com/r/instavote/vote/

Vote App On Docker Hub

Now type this command to show the running containers; for Docker, this is the equivalent of the
ps command on Linux systems:

docker ps

You can see here that the nauseous_albattani container (a name given automatically by Docker)
is running the vote application pulled from the instavote/vote repository:

CONTAINER ID   IMAGE            COMMAND                  CREATED         STATUS         PORTS                NAMES
136422f45b02   instavote/vote   "gunicorn app:app -b "   8 minutes ago   Up 8 minutes   0.0.0.0:80->80/tcp   nauseous_albattani

The container id is 136422f45b02 and the application is reachable via http://0.0.0.0:80.



Vote App Running

Just like with Git, we are going to commit and push the image to our Docker Hub repository. There
is no need to create a new repository first: commit/push can be used in a lazy mode and will create
it for you.
Commit:

docker commit -m "Painless Docker first commit" -a "Aymen El Amri" 136422f45b02 eon01/painlessdocker.com_voteapp:v1
sha256:bf2a7905742d85cca806eefa8618a6f09a00c3802b6f918cb965b22a94e7578a

And push:

docker push eon01/painlessdocker.com_voteapp:v1

The push refers to a repository [docker.io/eon01/painlessdocker.com_voteapp]
1f31ef805ed1: Mounted from eon01/painless_docker_vote_app
3c58cbbfa0a8: Mounted from eon01/painless_docker_vote_app
02e23fb0be8d: Mounted from eon01/painless_docker_vote_app
f485a8fdd8bd: Mounted from eon01/painless_docker_vote_app
1f1dc3de0e7d: Mounted from eon01/painless_docker_vote_app
797c28e44049: Mounted from eon01/painless_docker_vote_app
77f08abee8bf: Mounted from eon01/painless_docker_vote_app
v1: digest: sha256:658750e57d51df53b24bf0f5a7bc6d52e3b03ce710a312362b99b530442a089f size: 1781

Change eon01 to your username.


Notice that a new repository was added automatically to my Docker Hub dashboard:

Vote App Added

Now you can pull the same image with the latest tag:

docker pull eon01/painlessdocker.com_voteapp

Or with a specific tag:

docker pull eon01/painlessdocker.com_voteapp:v1

In our case, v1 is the latest version, so the two commands above will pull the same image to your
local machine.
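The tag resolution at play here can be sketched in a few lines of shell. The image_tag function is a hypothetical helper that mimics, in a simplified way, how a reference without an explicit tag falls back to latest; it deliberately ignores registry hosts that contain a port (such as host:5000/image), which would need more careful parsing.

```shell
# Hypothetical helper: print the tag part of an image reference,
# defaulting to "latest" when no tag is given. Simplified: it does
# not handle registry hosts that contain a port (host:5000/image).
image_tag() {
  case "$1" in
    *:*) echo "${1##*:}" ;;  # everything after the last colon
    *)   echo "latest" ;;
  esac
}

image_tag eon01/painlessdocker.com_voteapp      # prints: latest
image_tag eon01/painlessdocker.com_voteapp:v1   # prints: v1
```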

Docker Registry
Docker Registry is a scalable server-side application conceived to be an on-premise Docker Hub.
Just like Docker Hub, it helps you push, pull and distribute your images. The software powering
Docker Registry is open source under the Apache license. Docker Registry can also be consumed as
a cloud-based solution, since Docker offers a commercial product called Docker Trusted Registry.
Docker Registry can be run using Docker itself; a Docker image for the registry is available here:

https://hub.docker.com/_/registry/

It is easy to create a registry: just pull and run the image like this:

docker run -d -p 5000:5000 --name registry registry:2.5.0

Let's test it: we will pull an image from Docker Hub, tag it and push it to our own Docker Registry.

docker pull ubuntu
docker tag ubuntu localhost:5000/myfirstimage
docker push localhost:5000/myfirstimage
docker pull localhost:5000/myfirstimage
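The tag step above follows a naming rule worth spelling out: to push to a private registry, the image name is prefixed with the registry's host and port. A tiny sketch, where registry_ref is a hypothetical helper used only to illustrate the convention:

```shell
# Hypothetical helper: build the full reference used to push an image
# to a private registry (registry host:port followed by the image name).
registry_ref() {
  echo "$1/$2"
}

registry_ref localhost:5000 myfirstimage   # prints: localhost:5000/myfirstimage
```

Docker uses this prefix to decide which registry to talk to when pushing or pulling.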

Deploying Docker Registry On Amazon Web Services


You need an Amazon Web Services account with the right privileges, and you need to configure
the aws CLI, since we are going to use it for the remainder of this part:

aws configure

Type your credentials, choose your region and your preferred output format:

AWS Access Key ID [None]: ******************
AWS Secret Access Key [None]: ***********************
Default region name [None]: eu-west-1
Default output format [None]: json

Create an EBS (Elastic Block Store) volume, specifying the region you are using and the availability zone:

aws ec2 create-volume --size 80 --region eu-west-1 --availability-zone eu-west-1a --volume-type standard

You should have an similar output to the following one:

1 {
2 "AvailabilityZone": "eu-west-1a",
3 "Encrypted": false,
4 "VolumeType": "standard",
5 "VolumeId": "vol-xxxxxx",
6 "State": "creating",
7 "SnapshotId": "",
8 "CreateTime": "2016-10-14T15:29:35.400Z",
9 "Size": 80
10 }
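
Rather than copying the volume id out of that JSON by hand, it can be extracted programmatically. A small sketch using a saved sample of the output above (the vol-xxxxxx value is a placeholder):

```shell
# Save a sample of the create-volume output (placeholder values)
cat > /tmp/volume.json <<'EOF'
{
    "AvailabilityZone": "eu-west-1a",
    "VolumeId": "vol-xxxxxx",
    "State": "creating",
    "Size": 80
}
EOF
# Extract the VolumeId field with python3 (jq is an alternative, if installed)
VOLUME_ID=$(python3 -c "import json; print(json.load(open('/tmp/volume.json'))['VolumeId'])")
echo "$VOLUME_ID"   # → vol-xxxxxx
```

With the real command, the same result can be obtained directly through the CLI's `--query VolumeId --output text` options.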

Keep the output, because we are going to use the volume id later.
We chose the standard volume type here, but you should pick the type that suits your workload. The
following table can help you:

Type       IOPS                     Use Case
Magnetic   Up to 100 IOPS/volume    Infrequent access
GP         Up to 3000 IOPS/volume   Larger access needs, suitable for the majority of classic cases
PIOPS      Up to 4000 IOPS/volume   High speed access

Start an EC2 instance:

1 aws ec2 run-instances --image-id ami-xxxxxxxx --count 1 --instance-type t2.medium --key-name MyKeyPair --security-group-ids sg-xxxxxxxx --subnet-id subnet-xxxxxxxx

Replace the image id, instance type, key name, security group ids and subnet id with your own
values. In the output, look for the instance id because we are going to use it.

1 {
2     "OwnerId": "xxxxxxxx",
3     "ReservationId": "r-xxxxxxx",
4     "Groups": [
5         {
6             [..]
7         }
8     ],
9     "Instances": [
10         {
11             "InstanceId": "i-5203422c",
12             [..]
13         }
14     ]
15 }

Attach the volume to the instance:

1 aws ec2 attach-volume --volume-id vol-xxxxxxxxxxx --instance-id i-xxxxxxxx --device /dev/sdf

Now that the volume is attached, connect to the EC2 instance and list the block devices with:

1 lsblk

and you will see your newly attached volume (note that df -kh only lists mounted file systems, so
the new volume will not appear there until it is mounted). We suppose in this example that the
attached EBS volume has the following device name:

1 /dev/xvdf

Then create a file system on the volume and a folder to mount it on:

1 sudo mkfs -t ext4 /dev/xvdf
2 sudo mkdir /data

Make sure you get the right device name for the new attached volume.
Now go to the fstab configuration file:

1 /etc/fstab

And add:

1 /dev/xvdf /data ext4 defaults 1 1
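
For reference, an fstab entry has six whitespace-separated fields: device, mount point, file system type, mount options, dump flag and fsck pass order. A small sketch to sanity-check the entry before saving the file:

```shell
FSTAB_ENTRY='/dev/xvdf /data ext4 defaults 1 1'
# Count the whitespace-separated fields; a valid fstab entry has exactly 6
FIELDS=$(echo "$FSTAB_ENTRY" | awk '{print NF}')
echo "$FIELDS"   # → 6
```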

Now mount the volume by typing:

1 mount -a

You should have Docker installed in order to run a private Docker registry.
The next step is to run the registry:

1 docker run -d -p 80:5000 --restart=always -v /data:/var/lib/registry registry:2

If you type docker ps, you should see the registry running:

1 CONTAINER ID  IMAGE       COMMAND                 CREATED       STATUS            PORTS                 NAMES
2 bb6201f63cc5  registry:2  "/entrypoint.sh /etc/"  21 hours ago  Up About an hour  0.0.0.0:80->5000/tcp  furious_swanson

Now you should create an ELB, but first create its Security Group (expose port 443). Create the ELB
using the AWS CLI or the AWS Console and redirect its traffic to port 80 on the EC2 instance. Note
the ELB DNS name, since we are going to use it to push and pull images.
Opening port 443 is needed since the Docker Registry uses it to send and receive data; that is why
we used an ELB, since the latter offers integrated certificate management and SSL decryption. It is
also used to build highly available systems.
Now let's test it by pushing an image:

1 docker pull hello-world


2 docker tag hello-world load_balancer_dns/hello-world:1
3 docker push load_balancer_dns/hello-world
4 docker pull load_balancer_dns/hello-world

If you don't want to use an ELB, you should bring your own certificates and run:

1 docker run -d -p 5000:5000 --restart=always --name registry \


2 -v `pwd`/certs:/certs \
3 -v /data:/var/lib/registry \
4 -e REGISTRY_HTTP_TLS_CERTIFICATE=/certs/domain.crt \
5 -e REGISTRY_HTTP_TLS_KEY=/certs/domain.key \
6 registry:2
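
If you only need certificates for testing, a self-signed pair can be generated with openssl. A sketch (the CN value myregistry.example.com is a placeholder; Docker clients will not trust a self-signed certificate unless it is added to their trusted CAs):

```shell
# Generate a self-signed certificate and private key, for testing only
CERT_DIR=$(mktemp -d)
openssl req -newkey rsa:2048 -nodes -sha256 \
    -keyout "$CERT_DIR/domain.key" \
    -x509 -days 365 \
    -out "$CERT_DIR/domain.crt" \
    -subj "/CN=myregistry.example.com"
ls "$CERT_DIR"   # lists domain.crt and domain.key
```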

Another option is to use AWS S3 as the registry's storage backend (the environment variables below follow the registry:2 configuration format):

1 docker run -d \
2 -e REGISTRY_STORAGE=s3 \
3 -e REGISTRY_STORAGE_S3_BUCKET=my_bucket \
4 -e REGISTRY_STORAGE_S3_REGION="eu-west-1" \
5 -e REGISTRY_STORAGE_S3_ROOTDIRECTORY=/data \
6 -e REGISTRY_STORAGE_S3_ACCESSKEY=*********** \
7 -e REGISTRY_STORAGE_S3_SECRETKEY=*********** \
8 -p 80:5000 \
9 registry:2

In this case, you should not forget to attach an IAM policy that allows the Docker Registry to read and
write your images to the S3 bucket.
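
As a sketch, such a policy could look like the following (the bucket name my_bucket matches the example above; the action list is a minimal assumption and may need adjusting for your setup):

```json
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "s3:ListBucket",
                "s3:GetBucketLocation",
                "s3:ListBucketMultipartUploads"
            ],
            "Resource": "arn:aws:s3:::my_bucket"
        },
        {
            "Effect": "Allow",
            "Action": [
                "s3:PutObject",
                "s3:GetObject",
                "s3:DeleteObject",
                "s3:ListMultipartUploadParts",
                "s3:AbortMultipartUpload"
            ],
            "Resource": "arn:aws:s3:::my_bucket/*"
        }
    ]
}
```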

Deploying Docker Registry On Azure


In Azure, we are going to deploy the same Docker Registry using the Azure Storage service. We need to
create a storage account using the Azure CLI:

1 azure storage account create -l "North Europe" <storage_account_name>

Replace <storage_account_name> with your own value. Now we need to list the storage account
keys to use one of them later:

1 azure storage account keys list <storage_account_name>

Then run:

1 docker run -d -p 80:5000 \


2 -e REGISTRY_STORAGE=azure \
3 -e REGISTRY_STORAGE_AZURE_ACCOUNTNAME="<storage_account_name>" \
4 -e REGISTRY_STORAGE_AZURE_ACCOUNTKEY="<storage_key>" \
5 -e REGISTRY_STORAGE_AZURE_CONTAINER="registry" \
6 --name=registry \
7 registry:2

If port 80 is closed on your Azure virtual machine, you should open it:

1 azure vm endpoint create <machine-name> 80 80

Configuring security for the Docker Registry is not covered in this part.

Docker Store
Docker Store is a Docker Inc. product designed to provide a scalable self-service system for
ISVs to publish and distribute trusted and enterprise-ready content.
It provides a publishing process that includes:

- security scanning
- component inventory
- open-source license usage
- image construction guidelines

Docker Store

In other words, it is an official marketplace, with workflows to create and distribute content, where
you can find free and commercial images.
