Learning Docker - Sample Chapter
Jeeva has been working at the IBM Global Cloud Center of Excellence (CoE) in India for the last 8 years. He has
more than 18 years of experience in the IT industry. In various capacities, he has
technically managed and mentored diverse teams across the globe in envisaging and
building pioneering telecommunication products. He specializes in cloud solution
delivery, with a focus on data center optimization, software-defined environments
(SDEs), and distributed application development, deployment, and delivery
using the newest Docker technology. Jeeva is also a strong proponent of Agile
methodologies, DevOps, and IT automation. He holds a master's degree in computer
science from Manonmaniam Sundaranar University and a graduation certificate in
project management from Boston University. He has been instrumental in crafting
reusable assets for IBM solution architects and consultants in Docker-inspired
containerization technology.
Vinod Singh is a lead architect for IBM's cloud computing offerings. He has
more than 18 years of experience in the cloud computing, networking, and data
communication domains. Currently, he works for IBM's cloud application services
and partner marketplace offerings. Vinod has worked on architecting, deploying,
and running IBM's PaaS offering (BlueMix) on the SoftLayer infrastructure cloud.
He also provides consultancy and advisory services to clients across the globe on
the adoption of cloud technologies. He is currently focusing on various applications
and services on the IBM Marketplace/BlueMix/SoftLayer platform. He is a graduate
engineer from the National Institute of Technology, Jaipur, and completed his
master's degree at BITS, Pilani.
Preface
We have been fiddling with virtualization techniques and tools for quite a long time
now in order to establish the much-demanded software portability. The inhibiting
dependency between software and hardware needs to be broken by leveraging
virtualization, a beneficial abstraction enacted through an additional layer
of indirection. The idea is to run any software on any hardware. This is achieved
by creating multiple virtual machines (VMs) out of a single physical server, with
each VM having its own operating system (OS). Through this isolation, which is
enacted through automated tools and controlled resource sharing, heterogeneous
applications are accommodated in a physical machine.
With virtualization, IT infrastructures become open, programmable, remotely
monitorable, manageable, and maintainable. Business workloads can be hosted in
appropriately-sized virtual machines and delivered to the outside world, ensuring
broader and more frequent utilization. On the other hand, for high-performance
applications, virtual machines across multiple physical machines can be readily
identified and rapidly combined to guarantee any kind of high-performance
requirement.
The virtualization paradigm has its own drawbacks. Because every VM carries its
own operating system, VMs are bloated: provisioning typically takes a while, and
performance degrades due to the excessive use of computational
resources. Furthermore, the growing need for portability is not fully met
by virtualization. Hypervisor software from different vendors comes in the way of
ensuring application portability. Differences in the OS and application distributions,
versions, editions, and patches hinder smooth portability. Computer virtualization
has flourished, whereas the other, closely associated concepts of network and storage
virtualization are just taking off. Building distributed applications through VM
interactions also poses some practical difficulties.
Chapter 11, Securing Docker Containers, is crafted to explain the brewing security
and privacy challenges and concerns, and how they are addressed through the
liberal usage of competent standards, technologies, and tools. This chapter describes
the mechanism for dropping user privileges inside an image. There is also a brief
introduction on how the security capabilities introduced in SELinux come in handy
when securing Docker containers.
An introduction to Docker
Due to its overwhelming usage across industry verticals, the IT domain has been
flooded with many new and pathbreaking technologies, used not only to bring
in more decisive automation but also to overcome existing complexities.
Virtualization has set the goal of bringing forth IT infrastructure optimization and
portability. However, virtualization technology has serious drawbacks, such as
performance degradation due to the heavyweight nature of virtual machines (VM),
the lack of application portability, slowness in provisioning of IT resources, and so
on. Therefore, the IT industry has been steadily embarking on a Docker-inspired
containerization journey. The Docker initiative has been specifically designed for
making the containerization paradigm easier to grasp and use. Docker enables the
containerization process to be accomplished in a risk-free and accelerated fashion.
Precisely speaking, Docker is an open source containerization engine, which
automates the packaging, shipping, and deployment of any software applications
that are presented as lightweight, portable, and self-sufficient containers that will
run virtually anywhere.
A Docker container is a software bucket comprising everything necessary to run
the software independently. There can be multiple Docker containers in a single
machine and containers are completely isolated from one another as well as from
the host machine.
In other words, a Docker container includes a software component along with
all of its dependencies (binaries, libraries, configuration files, scripts, jars, and so
on). Therefore, Docker containers can run fluently on any x64 Linux kernel that
supports namespaces, control groups, and a union file system, such as Another Union
File System (AUFS). However, as indicated in this chapter, there are pragmatic
workarounds for running Docker on other mainstream operating systems, such
as Windows, Mac, and so on. The Docker container has its own process space and
network interface. It can also run things as root, and have its own /sbin/init,
which can be different from the host machine's.
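If an engine is already installed, these isolation properties can be observed with a quick probe; this is a sketch that assumes a working Docker engine and the busybox image:

```shell
$ sudo docker run busybox ps        # PID 1 is the container's own command
$ sudo docker run busybox hostname  # a generated ID, not the host's name
```

The first command lists only the container's own process tree, and the second prints a container-specific hostname, confirming that each container gets its own process space and network identity.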
In a nutshell, the Docker solution lets us quickly assemble composite, enterprise-scale,
and business-critical applications out of different, distributed software components.
Containers eliminate the friction that comes with
shipping code to distant locations. Docker also lets us test the code and then deploy
it in production as fast as possible. The Docker solution primarily consists of the
following components:
Chapter 1
Docker on Linux
Suppose that we want to directly run the containers on a Linux machine. The Docker
engine produces, monitors, and manages multiple containers as illustrated in the
following diagram:

[Diagram: the Docker engine on a Linux machine producing, monitoring, and managing multiple containers]
The preceding diagram vividly illustrates how future IT systems would have
hundreds of application-aware containers, which would innately be capable of
facilitating their seamless integration and orchestration for deriving modular
applications (business, social, mobile, analytical, and embedded solutions). These
contained applications could fluently run on converged, federated, virtualized,
shared, dedicated, and automated infrastructures.
[Diagram: a virtual machine stack (hardware → host OS → Type 2 hypervisor → guest OSes → bins/libs → web server, app server, and DB apps) compared with the Docker stack (hardware → host OS → bins/libs → Docker apps)]
Virtual machines: heavyweight, slow provisioning, limited performance.
Containers: lightweight, native performance.
Containerization technologies
Having recognized the role and the relevance of the containerization paradigm
for IT infrastructure augmentation and acceleration, a few technologies that leverage
the unique and decisive impacts of the containerization idea have come into
existence, and they are enumerated as follows:
LXC (Linux Containers): This is the father of all kinds of containers and it
represents an operating-system-level virtualization environment for running
multiple isolated Linux systems (containers) on a single Linux machine.
The article LXC on the Wikipedia website states that:
"The Linux kernel provides the cgroups functionality that allows
limitation and prioritization of resources (CPU, memory, block I/O,
network, etc.) without the need for starting any virtual machines,
and namespace isolation functionality that allows complete isolation
of an applications' view of the operating environment, including
process trees, networking, user IDs and mounted file systems."
You can get more information from http://en.wikipedia.org/wiki/LXC.
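The kernel facilities mentioned in the preceding quote can be inspected directly on any modern Linux host, without Docker or LXC installed; the following is a minimal probe:

```shell
$ cat /proc/self/cgroup   # control group membership of the current shell
$ ls /proc/self/ns        # namespaces the shell lives in (pid, net, mnt, and so on)
```

Every process on the host is already a member of some control groups and namespaces; containerization simply places each container's processes into their own set.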
In this book, considering Docker's surging popularity and mass adoption, we have
chosen to dig deeper and dwell in detail on the Docker platform, the one-stop
solution for the simplified and streamlined containerization movement.
2. Kick-start the installation by using the following command. This setup will
install the Docker engine along with a few more support files, and it will also
start the docker service instantaneously:
$ sudo apt-get install -y docker.io
3. For your convenience, you can create a soft link for docker.io called docker.
This will enable you to execute Docker commands as docker instead of
docker.io. You can do this by using the following command:
$ sudo ln -sf /usr/bin/docker.io /usr/local/bin/docker
The official Ubuntu package does not come with the latest
stable version of docker.
2. Import the Docker release tool's public key by running the following
command:
$ sudo apt-key adv --keyserver \
hkp://keyserver.ubuntu.com:80 --recv-keys \
36A1D7869245C8950F966E92D8576A8BA88D21E9
The Docker community has taken a step forward by hiding these details in an
automated install script. This script enables the installation of Docker on most
of the popular Linux distributions, either through the curl command or through
the wget command, as shown here:
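A typical invocation, shown here as a sketch (the get.docker.com URL is the script's published location; run one form or the other, not both):

```shell
$ wget -qO- https://get.docker.com/ | sh
$ curl -sSL https://get.docker.com/ | sh
```

Both forms download the same script and hand it to the shell, which detects the distribution and installs the matching Docker package.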
Let's start our docker journey with the docker version subcommand,
as shown here:
$ sudo docker version
Client version: 1.5.0
Client API version: 1.17
Go version (client): go1.4.1
Git commit (client): a8a31ef
OS/Arch (client): linux/amd64
Server version: 1.5.0
Server API version: 1.17
Go version (server): go1.4.1
Git commit (server): a8a31ef
Although the docker version subcommand lists many lines of text, as a Docker
user, you should know what these output lines mean: the client and server versions
shown here are both 1.5.0, and the client and server APIs are both at version 1.17.
If we dissect the internals of the docker version subcommand, then it will first
list the client-related information that is stored locally. Subsequently, it will make a
REST API call to the server over HTTP to obtain the server-related details.
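That REST call can also be made by hand; this sketch assumes the default socket path (/var/run/docker.sock) and a netcat build that supports Unix sockets (the -U option):

```shell
$ printf 'GET /version HTTP/1.1\r\nHost: localhost\r\n\r\n' | \
  sudo nc -U /var/run/docker.sock
```

The daemon replies with an HTTP response whose JSON body carries the same server details that docker version prints.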
Let's learn more about the Docker environment using the docker info subcommand:
$ sudo docker -D info
Containers: 0
Images: 0
Storage Driver: aufs
Root Dir: /var/lib/docker/aufs
Backing Filesystem: extfs
Dirs: 0
Execution Driver: native-0.2
Kernel Version: 3.13.0-45-generic
Operating System: Ubuntu 14.04.1 LTS
CPUs: 4
Total Memory: 3.908 GiB
Name: dockerhost
ID: ZNXR:QQSY:IGKJ:ZLYU:G4P7:AXVC:2KAJ:A3Q5:YCRQ:IJD3:7RON:IJ6Y
Debug mode (server): false
Debug mode (client): true
Fds: 10
Goroutines: 14
EventsListeners: 0
Init Path: /usr/bin/docker
Docker Root Dir: /var/lib/docker
WARNING: No swap limit support
As you can see in the output of a freshly installed Docker engine, the number of
Containers and Images is invariably nil. The Storage Driver has been set up
as aufs, and the directory has been given the /var/lib/docker/aufs location.
The Execution Driver has been set to the native mode. This command also lists
other details, such as the Kernel Version, the Operating System, the number of CPUs,
the Total Memory, and the Name of the Docker host.
Once the images have been downloaded, they can be verified by using the docker
images subcommand, as shown here:
$ sudo docker images
REPOSITORY   TAG      IMAGE ID       CREATED   VIRTUAL SIZE
busybox      latest   4986bf8c1536
Cool, isn't it? You have set up your first Docker container in no time. In the
preceding example, the docker run subcommand has been used for creating a
container and for printing Hello World! by using the echo command.
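A minimal sketch of the run described above, assuming the busybox image pulled earlier:

```shell
$ sudo docker run busybox echo "Hello World!"
Hello World!
```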
The Amazon EC2 container service (only available in preview mode at the
time of writing this book)
The Amazon EC2 container service lets you start and stop the container-enabled
applications with the help of simple API calls. AWS has introduced the concept of
a cluster for viewing the state of your containers. You can view the tasks from a
centralized service, and it gives you access to many familiar Amazon EC2 features,
such as security groups, EBS volumes, and IAM roles.
Please note that this service is still not available in the AWS console. You need to
install AWS CLI on your machine to deploy, run, and access this service.
The AWS Elastic Beanstalk service also supports Docker. To deploy a sample
application through it, perform the following steps:
1. Open the https://console.aws.amazon.com/elasticbeanstalk/ URL.
2. Select a region where you want to deploy your application, as shown here:
3. Select the Docker option, which is in the drop-down menu, and then click
on Launch Now. The next screen will be shown after a few minutes, as
shown here:
Now, click on the URL that is next to Default-Environment (Default-Environment-pjgerbmmjm.elasticbeanstalk.com), as shown here:
Troubleshooting
Most of the time, you will not encounter any issues when installing Docker.
However, unplanned failures might occur. Therefore, it is necessary to discuss
prominent troubleshooting techniques and tips. Let's begin by discussing the
troubleshooting know-how in this section. The first tip is that the running status of
Docker should be checked by using the following command:
$ sudo service docker status
However, if Docker has been installed by using the Ubuntu package, then you will
have to use docker.io as the service name. If the docker service is running, then
this command will print the status as start/running along with its process ID.
If you are still experiencing issues with the Docker setup, then you could open
the Docker log by using the /var/log/upstart/docker.log file for further
investigation.
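Putting the two tips together, a quick triage session on the Upstart-based Ubuntu layout described above (the service may be named docker.io on package installs) might look like this:

```shell
$ sudo service docker status
$ tail -n 50 /var/log/upstart/docker.log
```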
Summary
Containerization is going to be a dominant and decisive paradigm for the enterprise
as well as cloud IT environments in the future because of its hitherto unforeseen
automation and acceleration capabilities. There are several mechanisms in place
for taking the containerization movement to greater heights. However, Docker has
zoomed ahead of everyone in this hot race, and it has successfully demolished the
previously elucidated barriers.
In this chapter, we have exclusively concentrated on the practical side of Docker for
giving you a head start in learning about the most promising technology. We have
listed the appropriate steps and tips for effortlessly installing the Docker engine in
different environments, and for building, installing, and running a
few sample Docker containers, in both local and remote environments. In the
ensuing chapters, we will dive deeper into the world of Docker to extract and share
tactically and strategically sound information with you. Please read on to
gain the required knowledge about advanced topics, such as container integration,
orchestration, management, governance, security, and so on, through the Docker
engine. We will also discuss a bevy of third-party tools.