Docker
In technology, sometimes the jumps in progress are small but, as is the case with
containerization, the jumps have been massive and turn the long-held practices and
teachings completely upside down. With this chapter, we will take you from running a tiny
service to building elastically scalable systems using containerization with Docker, the
cornerstone of this revolution.
We will perform a steady but consistent ramp-up through the basic building blocks, with a focus on the inner workings of Docker, and, as we continue, we will spend the majority of our time in the world of complex deployments and their considerations.
Let's take a look at what we will cover in this chapter:
In the old days, developers would develop a new application. Once that application was
completed in their eyes, they would hand that application over to the operations engineers,
who were then supposed to install it on the production servers and get it running. If the
operations engineers were lucky, they even got a somewhat accurate document with
installation instructions from the developers. So far, so good, and life was easy.
But things get a bit out of hand when, in an enterprise, there are many teams of developers that create quite different types of applications, yet all of them need to be installed on the same production servers and kept running there. Usually, each application has some external dependencies, such as the framework it was built on, the libraries it uses, and so on. Sometimes, two applications use the same framework but in different versions that might or might not be compatible with each other. Our operations engineers' lives became much harder over time. They had to be really creative with how they could load their ship (that is, their servers) with different applications without breaking something.
Installing a new version of a certain application was now a complex project on its own, and
often needed months of planning and testing. In other words, there was a lot of friction in
the software supply chain. But these days, companies rely more and more on software, and
the release cycles need to become shorter and shorter. We cannot afford to just release
twice a year or so anymore. Applications need to be updated in a matter of weeks or days,
or sometimes even multiple times per day. Companies that do not comply risk going out of
business, due to the lack of agility. So, what's the solution?
One of the first approaches was to use virtual machines (VMs). Instead of running multiple
applications, all on the same server, companies would package and run a single application
on each VM. With this, all the compatibility problems were gone and life seemed to be good
again. Unfortunately, that happiness didn't last long. VMs are pretty heavy beasts on their
own since they all contain a full-blown operating system such as Linux or Windows Server,
and all that for just a single application. This is just as if you were in the transportation
industry and were using a whole ship just to transport a single truckload of bananas. What a
waste! That could never be profitable.
The ultimate solution to this problem was to provide something that is much more
lightweight than VMs, but is also able to perfectly encapsulate the goods it needs to
transport. Here, the goods are the actual application that has been written by our
developers, plus – and this is important – all the external dependencies of the application,
such as its framework, libraries, configurations, and more. This holy grail of a software
packaging mechanism was the Docker container.
These days, the time between new releases of an application becomes shorter and shorter,
yet the software itself doesn't become any simpler. On the contrary, software projects
increase in complexity. Thus, we need a way to tame the beast and simplify the software
supply chain.
Also, every day, we hear that cyber-attacks are on the rise. Many well-known companies
are and have been affected by security breaches. Highly sensitive customer data gets
stolen during such events, such as social security numbers, credit card information, and
more. But not only customer data is compromised – sensitive company secrets are stolen
too.
Containers can help in many ways. First of all, Gartner found that applications running in a container are more secure than their counterparts not running in a container. Containers use Linux security primitives such as Linux kernel namespaces to sandbox different applications running on the same host, and control groups (cgroups) to avoid the noisy-neighbor problem, where one rogue application uses all the available resources of a server and starves all the other applications.
Due to the fact that container images are immutable, it is easy to have them scanned
for common vulnerabilities and exposures (CVEs), and in doing so, increase the overall
security of our applications.
Another way to make our software supply chain more secure is to have our containers use
a content trust. A content trust basically ensures that the author of a container image is who
they pretend to be and that the consumer of the container image has a guarantee that the
image has not been tampered with in transit. The latter is known as a man-in-the-middle
(MITM) attack.
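In Docker, this kind of content trust is typically enabled through Docker Content Trust; a minimal sketch (the alpine image is just an example):
$ export DOCKER_CONTENT_TRUST=1
$ docker pull alpine:latest
With the environment variable set, the Docker CLI will only pull images whose signatures can be verified.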
Everything I have just said is, of course, technically also possible without using containers,
but since containers introduce a globally accepted standard, they make it so much easier to
implement these best practices and enforce them.
OK, but security is not the only reason why containers are important. There are other
reasons too.
One is the fact that containers make it easy to simulate a production-like environment, even
on a developer's laptop. If we can containerize any application, then we can also
containerize, say, a database such as Oracle or MS SQL Server. Now, everyone who has
ever had to install an Oracle database on a computer knows that this is not the easiest thing
to do, and it takes up a lot of precious space on your computer. You wouldn't want to do that
to your development laptop just to test whether the application you developed really works
end-to-end. With containers at hand, we can run a full-blown relational database in a
container as easily as saying 1, 2, 3. And when we're done with testing, we can just stop
and delete the container and the database will be gone, without leaving a trace on our
computer.
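As a rough sketch of that workflow, here we use PostgreSQL purely as an example; the image tag, password, and container name are arbitrary choices:
$ docker run -d --name test-db -e POSTGRES_PASSWORD=secret -p 5432:5432 postgres:15
$ docker rm -f test-db
Once the container is removed, no database files remain on the host (unless a volume was explicitly mounted).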
Since containers are very lean compared to VMs, it is not uncommon to have many
containers running at the same time on a developer's laptop without overwhelming the
laptop.
A third reason why containers are important is that operators can finally concentrate on
what they are really good at: provisioning the infrastructure and running and monitoring
applications in production. When the applications they have to run on a production system
are all containerized, then operators can start to standardize their infrastructure. Every
server becomes just another Docker host. No special libraries or frameworks need to be
installed on those servers, just an OS and a container runtime such as Docker.
Also, operators do not have to have intimate knowledge of the internals of applications
anymore, since those applications run self-contained in containers that ought to look like
black boxes to them, similar to how shipping containers look to the personnel in the
transportation industry.
Containers are an abstraction at the app layer that packages code and dependencies together. Multiple containers can run on the same machine and share the OS kernel with other containers, each running as isolated processes in user space. Containers take up less space than VMs (container images are typically tens of MBs in size), and start almost instantly.
Virtual machines (VMs) are an abstraction of physical hardware, turning one server into many servers. The hypervisor allows multiple VMs to run on a single machine. Each VM includes a full copy of an operating system, one or more apps, and the necessary binaries and libraries, taking up tens of GBs. VMs can also be slow to boot.
Now, let's take a look at the differences between containers and typical virtual machine environments. The following diagram demonstrates the difference between a dedicated, bare-metal server and a server running virtual machines:
As you can see, for a dedicated machine we have three applications, all sharing the same orange software stack. Running virtual machines allows us to run three applications running two completely different software stacks. The following diagram shows the same orange and green applications running in containers using Docker:
This diagram gives us a lot of insight into the biggest key benefit of Docker, that is, there is no need for a complete operating system every time we need to bring up a new container, which cuts down on the overall size of containers. Since almost all versions of Linux use the standard kernel model, Docker relies on the host operating system's Linux kernel, whatever distribution it was built upon, such as Red Hat, CentOS, or Ubuntu.
For this reason, you can have almost any Linux operating system as your host operating system and be able to layer other Linux-based operating systems on top of the host. That is, your applications are led to believe that a full operating system is installed, but in reality we only install the binaries, such as a package manager, and, for example, Apache/PHP and the libraries required to get just enough of an operating system for your applications to run.
For example, in the earlier diagram, we could have Red Hat running for the orange
application, and Debian running for the green application, but there would never be a need
to actually install Red Hat or Debian on the host. Thus, another benefit of Docker is the size
of images when they are created. They are built without the largest piece: the kernel or the
operating system. This makes them incredibly small, compact, and easy to ship.
pod-username-node01
ssh root@IPADDRESSVMNODE01
pod-username-node02
ssh root@IPADDRESSVMNODE02
Before starting the lab, run the lab login doops command first to log in to your account. The account credentials are the same as your credentials on the Adinusa platform.
Instructions
2. VM specifications.
- pod-[username]-node01 : Ubuntu 20.04 (2 vCPU, 2 GB RAM)
- pod-[username]-node02 : Ubuntu 20.04 (2 vCPU, 2 GB RAM)
3. Make sure the IP, gateway, DNS resolver, and hostname are correct (the IPs shown are examples; change them to match the IPs in your lab).
Node pod-[username]-node01
IP Address : 10.X.X.10/24
Gateway : 10.X.X.1
Hostname : pod-[username]-node01
Node pod-[username]-node02
IP Address : 10.X.X.20/24
Gateway : 10.X.X.1
Hostname : pod-[username]-node02
Execute on pod-[username]-node01
4. Change the hostname.
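On Ubuntu 20.04 this step is typically done with hostnamectl; a sketch (substitute your own pod name):
root@pod-username-node01:~# hostnamectl set-hostname pod-username-node01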
root@pod-username-node01:~# ssh-keygen
root@pod-username-node01:~# cat ~/.ssh/id_rsa.pub
ssh-rsa
AAAAB3NzaC1yc2EAAAADAQABAAABgQC+1fVCjBbUz7E6kJWGzfEFwrZQ0iACw4/Xz/8c221vA
lUOjdqTv70nKOQc+Y87MP730KIvZSJQ7Km9vTK+7TV/0uuPDqXVST5kUmHyNRm+fDjaBcbq/Z
d+UPOXHUtNM1bxt6N9REvIzKm/wfAkAgt6Y+x0goa209iS8rCXOItyBRH7gw9Mhhlo6d82Nb4
DGg56JYwE9NV6duY60yLPb+PqKKQ5qCgkqc7D278eMJMDQ99Ld0Dgc+1xP4JgbOVKI69AZbzF
PPosAW7JFa1q2t6D2tZL4P80m6EPsnzJGA1CUY9sGuByTAVduUxT5p+IzgFiKXuIanUAxkM4W
U9SrIGR7XPKwkSf4BFY7XcH3/iR0VSPp/
+hBcN8u7PY28ysA5+KGTopxYFcEzaaUc4PqxIkJnat0XfH22/lJMFF/vqkmS8rxs/ZMUF0QFz
sGFba5MQR3Q1GwSUMvlAQqv0u0RfBOooAv6c+uAtRjB6XW/Ow6JDhIf3oXiEFqAReVybyOCM=
root@pod-username-node01
root@pod-username-node02:~# vi ~/.ssh/authorized_keys
... output omitted ...
ssh-rsa
AAAAB3NzaC1yc2EAAAADAQABAAABgQC+1fVCjBbUz7E6kJWGzfEFwrZQ0iACw4/Xz/8c221vA
lUOjdqTv70nKOQc+Y87MP730KIvZSJQ7Km9vTK+7TV/0uuPDqXVST5kUmHyNRm+fDjaBcbq/Z
d+UPOXHUtNM1bxt6N9REvIzKm/wfAkAgt6Y+x0goa209iS8rCXOItyBRH7gw9Mhhlo6d82Nb4
DGg56JYwE9NV6duY60yLPb+PqKKQ5qCgkqc7D278eMJMDQ99Ld0Dgc+1xP4JgbOVKI69AZbzF
PPosAW7JFa1q2t6D2tZL4P80m6EPsnzJGA1CUY9sGuByTAVduUxT5p+IzgFiKXuIanUAxkM4W
U9SrIGR7XPKwkSf4BFY7XcH3/iR0VSPp/
+hBcN8u7PY28ysA5+KGTopxYFcEzaaUc4PqxIkJnat0XfH22/lJMFF/vqkmS8rxs/ZMUF0QFz
sGFba5MQR3Q1GwSUMvlAQqv0u0RfBOooAv6c+uAtRjB6XW/Ow6JDhIf3oXiEFqAReVybyOCM=
root@pod-username-node01
7. Change the hostname.
root@pod-username-node02:~# ssh-keygen
root@pod-username-node02:~# cat ~/.ssh/id_rsa.pub
ssh-rsa
AAAAB3NzaC1yc2EAAAADAQABAAABgQDYFD929XkulrBijod4lpLZthAnnvSift7ZNCFT5R+VS
f9APlu8MpHnb8dzXpzIN0jYLAJvSQ0Q+aavpFrJSGIAq0q2XhxryPUZyAHkEx9h0fltxDnxxO
hDd8ImIJkcn3OEtrfNu+afsvH3wecVdyInk6tKNgJ1C/RXaEqAiY1QRIHqlkx6oJqWUk7GYW9
XWr+3PnzwJ8G+VMW5jzo47wX6jFrG7+SbYCCp4AEX4P7/4R8T96FUowotlKWRcBYezupIjBbC
+F2BmHd4ZJR/Z5oRJXU0Fgo/zbm2gFuZeFIdUWduJIC58r13F/H88IgM/ZK5i7nuMnALrU3lw
ASfqXUNtlx92xcuwuSIgUEfbXy235OeYY6hOGyS32LboyphiZxf+t86xqRlzLchpbm54egxQf
S7txu3m80Xc+LDSFQ6qghTlcd5iQ/Ep1eTVAT49ZSVWLqr5+7WGoxNZixGDL3t9YNsM74uYU2
sN25qi5cp1ETj/liyCHB9WUDZi0mMhIs= root@pod-username-node02
root@pod-username-node01:~# vi ~/.ssh/authorized_keys
... output omitted ...
ssh-rsa
AAAAB3NzaC1yc2EAAAADAQABAAABgQDYFD929XkulrBijod4lpLZthAnnvSift7ZNCFT5R+VS
f9APlu8MpHnb8dzXpzIN0jYLAJvSQ0Q+aavpFrJSGIAq0q2XhxryPUZyAHkEx9h0fltxDnxxO
hDd8ImIJkcn3OEtrfNu+afsvH3wecVdyInk6tKNgJ1C/RXaEqAiY1QRIHqlkx6oJqWUk7GYW9
XWr+3PnzwJ8G+VMW5jzo47wX6jFrG7+SbYCCp4AEX4P7/4R8T96FUowotlKWRcBYezupIjBbC
+F2BmHd4ZJR/Z5oRJXU0Fgo/zbm2gFuZeFIdUWduJIC58r13F/H88IgM/ZK5i7nuMnALrU3lw
ASfqXUNtlx92xcuwuSIgUEfbXy235OeYY6hOGyS32LboyphiZxf+t86xqRlzLchpbm54egxQf
S7txu3m80Xc+LDSFQ6qghTlcd5iQ/Ep1eTVAT49ZSVWLqr5+7WGoxNZixGDL3t9YNsM74uYU2
sN25qi5cp1ETj/liyCHB9WUDZi0mMhIs= root@pod-username-node02
ping -c 3 google.com
ping -c 3 detik.com
ping -c 3 10.X.X.10
ping -c 3 10.X.X.20
hostname
Task
Run the lab grade do-001-1 command to grade your lab results.
[student@servera ~]$ sudo -i
[root@servera ~]# lab grade do-001-1
Introduction to Docker
What is Docker?
Docker is an open source project for building, shipping, and running programs. It is:
A command-line program,
A background process, and
A set of remote services that take a logistical approach to solving common software problems, simplifying your experience of installing, running, publishing, and removing software.
Docker products
Docker currently separates its product lines into two segments. There is the Community Edition (CE), which is open source and completely free, and then there is the Enterprise Edition (EE), which is closed-source and needs to be licensed on a yearly basis. The enterprise products are backed by 24/7 support and are supported with bug fixes.
Docker CE
Part of the Docker Community Edition are products such as the Docker Toolbox and Docker for Desktop, with its editions for Mac and Windows. All these products are mainly targeted at developers.
Docker for Desktop is an easy-to-install desktop application that can be used to build,
debug, and test Dockerized applications or services on a macOS or Windows machine.
Docker for macOS and Docker for Windows are complete development environments that
are deeply integrated with their respective hypervisor framework, network, and filesystem.
These tools are the fastest and most reliable way to run Docker on a Mac or Windows.
Docker EE
Docker 18.06 CE: This is the last of the quarterly Docker CE releases, released July 18th, 2018.
Docker 18.09 CE: This release, due late September/early October 2018, is the first release of the biannual release cycle of Docker CE.
Docker 19.03 CE: The first supported Docker CE of 2019 is scheduled to be released March/April 2019.
Docker 19.09 CE: The second supported release of 2019 is scheduled to be released September/October 2019.
Prerequisites
Before starting the lab, run the lab login doops command first to log in to your account. The account credentials are the same as your credentials on the Adinusa platform.
2. Create a new user with a name matching your username on the ADINUSA platform.
3. Install Docker.
apt -y update
apt -y install apt-transport-https ca-certificates curl gnupg-agent software-properties-common
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
apt-key fingerprint 0EBFCD88
add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"
apt -y update
apt -y install docker-ce docker-ce-cli containerd.io
systemctl status docker
docker version
docker info
docker image ls
docker container ls -a
groupadd docker
su - <username>
sudo usermod -aG docker $USER
docker container ls
Lab 2.2: Docker Run - Part 1
Prerequisites
Before starting the lab, run the lab login doops command first to log in to your account. The account credentials are the same as your credentials on the Adinusa platform.
After logging in, run the lab start do-002-2 command. This command is used to prepare the lab environment.
docker ps
docker container ls
docker ps -a
docker container ls -a
9. Stop the container.
5. Test browsing.
curl localhost:$(docker port nginx2 80 | cut -d : -f 2)
docker ps -a
docker container ls -a
docker images
Prerequisites
Before starting the lab, run the lab login doops command first to log in to your account. The account credentials are the same as your credentials on the Adinusa platform.
After logging in, run the lab start do-002-3 command. This command is used to prepare the lab environment.
docker ps -a
docker container ls -a
curl localhost:8080
curl localhost:8081
nano index.html
...
hello, testing.
...
mv index.html /usr/share/nginx/html
curl localhost:8080
curl localhost:8081
docker ps -a
docker ps -a
Prerequisites
Before starting the lab, run the lab login doops command first to log in to your account. The account credentials are the same as your credentials on the Adinusa platform.
After logging in, run the lab start do-002-4 command. This command is used to prepare the lab environment.
4. Test browsing.
docker ps
4. Check the container list to confirm that the container status has changed to paused.
docker ps
5. Check the resource usage while the ubuntu container is paused.
4. Test browsing.
Abstract
Overview Design and code a Dockerfile to build a custom container image.
Objectives
After completing this section, students should be able to manage the life cycle of a
container from creation to deletion.
The Docker client, implemented by the docker command, provides a set of verbs to create
and manage containers. The following figure shows a summary of the most commonly used
verbs that change container state.
The Docker client also provides a set of verbs to obtain information about running and
stopped containers. The following figure shows a summary of the most commonly used
verbs that query information related to Docker containers.
Use these two figures as a reference while you learn about the docker command verbs throughout this course.
Creating Containers
The docker run command creates a new container from an image and starts a process
inside the new container. If the container image is not available, this command also tries to
download it:
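For example, a minimal sketch using the nginx image (which is also used later in this section):
$ docker run nginx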
Another important option is to run the container as a daemon, running the containerized
process in the background. The -d option is responsible for running in detached mode.
The management docker commands require an ID or a name. The docker run command
generates a random ID and a random name that are unique. The docker ps command is
responsible for displaying these attributes:
If desired, the container name can be explicitly defined. The --name option is responsible for
defining the container name:
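For instance, a sketch reusing the name my-nginx-container that appears later in this section:
$ docker run -d --name my-nginx-container nginx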
Note The name must be unique. An error is thrown if another container has the same
name, including containers that are stopped.
The container image itself specifies the command to run to start the containerized process,
but a different one can be specified after the container image name in docker run:
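For example, the following sketch overrides the default command of the nginx image (the ls target is an arbitrary choice):
$ docker run nginx ls /usr/share/nginx/html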
Note Since a specified command was provided in the previous example, the nginx service
does not start.
Sometimes it is desired to run a container executing a Bash shell. This can be achieved
with:
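A sketch of such a command (the nginx image is an arbitrary choice):
$ docker run -it nginx /bin/bash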
Options -t and -i are usually needed for interactive text-based programs, so they get a
proper terminal, but not for background daemons.
Follow this step-by-step guide as the instructor shows how to create and manipulate containers.
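A plausible form of the first command of this exercise, assuming the ubuntu image and a dd invocation that never finishes (both specifics are assumptions):
ubuntu@ubuntu20:~$ docker run --name demo-container ubuntu dd if=/dev/zero of=/dev/null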
This command downloads the image from the official Docker registry and starts it using the dd command. The container exits when the dd command returns its result. For educational purposes, the provided dd never stops.
2. Open a new terminal window from the node VM and check if the container is running:
ubuntu@ubuntu20:~$ docker ps
Some information about the container, including the container name demo-container
specified in the last step, is displayed.
3. Open a new terminal window and stop the container using the provided name:
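A minimal example of what this step runs (using the name given above):
ubuntu@ubuntu20:~$ docker stop demo-container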
4. Return to the original terminal window and verify that the container was stopped:
ubuntu@ubuntu20:~$ docker ps
1. Open a terminal window and verify the name that was generated
ubuntu@ubuntu20:~$ docker ps
An output similar to the following will be listed:
ubuntu@ubuntu20:~$ docker ps
CONTAINER ID   IMAGE   COMMAND                  CREATED         STATUS        PORTS    NAMES
2ef4addadf8b   nginx   "/docker-entrypoint.…"   3 seconds ago   Up 1 second   80/tcp   busy_newton
busy_newton is the generated name. You will probably have a different name for this step.
8. Containers can have a default long-running command. For these cases, it is possible to run a container as a daemon using the -d option. For example, when a MySQL container is started, it creates the databases and keeps the server actively listening on its port. Another example using dd as the long-running command is as follows:
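A sketch of such a command (the container name and dd arguments are assumptions):
ubuntu@ubuntu20:~$ docker run -d --name demo-container-2 ubuntu dd if=/dev/zero of=/dev/null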
ubuntu@ubuntu20:~$ docker ps
12. It is possible to run a container in interactive mode. This mode allows for staying in the container when the container runs:
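A sketch of such a command (the image and container name are assumptions):
ubuntu@ubuntu20:~$ docker run --name demo-container-4 -it ubuntu /bin/bash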
The -i option specifies that this container should run in interactive mode, and the -t allocates a pseudo-TTY.
root@7e202f7faf07:/# exit
exit
1. Remove all stopped containers from the environment by running the following from a
terminal window:
ubuntu@ubuntu20:~$ docker ps -a
ubuntu@ubuntu20:~$ docker rm demo-container demo-container-2 demo-container-3 demo-container-4
15. Remove the container started without a name. Replace the placeholder with the container name from step 7:
CONTAINER ID: Each container, when created, gets a container ID, which is a hexadecimal number and looks like an image ID, but is actually unrelated.
PORTS: Ports that were exposed by the container, or the port forwards, if configured.
Stopped containers are not discarded immediately. Their local file systems and other states
are preserved so they can be inspected for post-mortem analysis. Option -a lists all
containers, including containers that were not discarded yet:
ubuntu@ubuntu20:~$ docker ps -a
CONTAINER ID   IMAGE   COMMAND                  CREATED        STATUS                      PORTS    NAMES
2ef4addadf8b   nginx   "/docker-entrypoint.…"   18 hours ago   Exited (137) 18 hours ago            busy_newton
9aa8f1ff1c03   nginx   "/docker-entrypoint.…"   21 hours ago   Up 21 hours                 80/tcp   my-nginx-container
docker inspect: This command is responsible for listing metadata about a running or stopped container. The command produces JSON output:
ubuntu@ubuntu20:~$ docker inspect my-nginx-container
[
{
"Id":
"9aa8f1ff1c0399cbb24b657de6ce0a078c7ce69dc0c07760664670ea7593aaf3",
...OUTPUT OMITTED....
"NetworkSettings": {
"Networks": {
"bridge": {
"IPAMConfig": null,
"Links": null,
"Aliases": null,
"NetworkID":
"9e91a2e58d8a7c21532b0a48bb4d36093a347aa056e85d5b7f3941ff19eeb840",
"EndpointID":
"9d847b45ad4c55b2c1fbdaa8f44a502f2215f569c08cbcc6599b6ff1c25a2869",
"Gateway": "172.17.0.1",
"IPAddress": "172.17.0.2",
"IPPrefixLen": 16,
"IPv6Gateway": "",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"MacAddress": "02:42:ac:11:00:02",
"DriverOpts": null
...OUTPUT OMITTED....
This command allows formatting of the output string using a given Go template with the -f option. For example, to retrieve only the IP address, the following command can be executed:
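For instance, a sketch using the container inspected above; the 172.17.0.2 address comes from the JSON output shown earlier:
$ docker inspect -f '{{ .NetworkSettings.IPAddress }}' my-nginx-container
172.17.0.2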
Using docker stop is easier than finding the container's main process on the host OS and killing it.
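docker stop gracefully stops a running container; for example, using the container name from above:
$ docker stop my-nginx-container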
docker kill: This command is responsible for stopping a running container forcefully:
$ docker kill my-nginx-container
The docker restart command restarts a stopped container: it keeps the same container ID and reuses the stopped container's state and filesystem.
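For example, using the container name from above:
$ docker restart my-nginx-container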
docker rm: This command is responsible for deleting a container, discarding its
state and filesystem:
$ docker rm my-nginx-container
It is possible to delete all containers at the same time. The docker ps command has the -
q option that returns only the ID of the containers. This list can be passed to the docker rm
command:
Before deleting all containers, all running containers must be stopped. It is possible
to stop all containers with:
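As a sketch, the two commands described here are commonly combined as follows:
$ docker stop $(docker ps -q)
$ docker rm $(docker ps -aq)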
Note The commands docker inspect, docker stop, docker kill, docker restart, and docker rm
can use the container ID instead of the container name.
$ docker ps
$ docker ps
$ docker ps -a
8. An important feature is the ability to list metadata about a running or stopped container. The following command returns the metadata:
9. It is possible to format and retrieve a specific item from the inspect command. To retrieve the IPAddress attribute from the NetworkSettings object, use the following command:
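A sketch of such a command; the container name demo-1-nginx is taken from a later step and is an assumption here:
$ docker inspect -f '{{ .NetworkSettings.IPAddress }}' demo-1-nginx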
Make a note of the IP address of this container. It will be necessary for a later step.
12. Using the IP address from step 8, try to access the previously created page:
$ curl IP:80/adinusa.html
The following output will be displayed:
14. When the container is restarted, the data is preserved. Verify the IP address from the restarted container and check that the adinusa page is still available:
17. Verify the IP address from the new container and check if the adinusa page is available:
The page is not available because this page was created just for the previous container.
New containers will not have the page since the container image did not change.
18. In case of a freeze, it is possible to kill a container like any process. The following command will kill a container:
This command kills the container with the SIGKILL signal. It is possible to specify the signal
with the -s option.
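A sketch of both forms (the container name is an assumption):
$ docker kill demo-1-nginx
$ docker kill -s SIGTERM demo-1-nginx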
19. Containers can be removed, discarding their state and filesystem. It is possible to remove a container by name or by its ID. Remove the demo-nginx container:
$ docker ps -a
$ docker rm demo-1-nginx
20. It is also possible to remove all containers at the same time. The -q option returns the list of container IDs and docker rm accepts a list of IDs to remove all containers:
$ docker ps -a
22. Clean up the images downloaded by running the following from a terminal window:
OR
Docker Volume
Volumes are the preferred mechanism for persisting data generated by and used by Docker
containers. While bind mounts are dependent on the directory structure and OS of the host
machine, volumes are completely managed by Docker. Volumes have several advantages
over bind mounts:
In addition, volumes are often a better choice than persisting data in a container’s writable
layer, because a volume does not increase the size of the containers using it, and the
volume’s contents exist outside the lifecycle of a given container.
If your container generates non-persistent state data, consider using a tmpfs mount to avoid
storing the data anywhere permanently, and to increase the container’s performance by
avoiding writing into the container’s writable layer.
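A brief sketch of both mechanisms (all names are arbitrary):
$ docker volume create my-vol
$ docker run -d --name web1 -v my-vol:/usr/share/nginx/html nginx
$ docker run -d --name tmp1 --tmpfs /tmp nginx
The first container persists its HTML directory in the my-vol volume; the second keeps /tmp in memory only.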
When building fault-tolerant applications, you might need to configure multiple replicas of
the same service to have access to the same files.
There are several ways to achieve this when developing your applications. One is to add
logic to your application to store files on a cloud object storage system like Amazon S3.
Another is to create volumes with a driver that supports writing files to an external storage
system like NFS or Amazon S3.
Volume drivers allow you to abstract the underlying storage system from the application
logic. For example, if your services use a volume with an NFS driver, you can update the
services to use a different driver, as an example to store data in the cloud, without changing
the application logic.
Volume Driver
When you create a volume using docker volume create, or when you start a container which
uses a not-yet-created volume, you can specify a volume driver. The following examples
use the vieux/sshfs volume driver, first when creating a standalone volume, and then when
starting a container which creates a new volume.
Initial set-up
This example assumes that you have two nodes, the first of which is a Docker host and can
connect to the second using SSH.
This example specifies an SSH password, but if the two hosts have shared keys configured, you can omit the password. Each volume driver may have zero or more configurable options, each of which is specified using an -o flag. If the volume driver requires you to pass options, you must use the --mount flag to mount the volume, rather than -v.
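A sketch of the standalone-volume variant, assuming the plugin has been installed first (for example with docker plugin install --grant-all-permissions vieux/sshfs):
$ docker volume create --driver vieux/sshfs \
  -o sshcmd=test@node2:/home/test \
  -o password=testpassword \
  sshvolume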
$ docker run -d \
  --name sshfs-container \
  --volume-driver vieux/sshfs \
  --mount src=sshvolume,target=/app,volume-opt=sshcmd=test@node2:/home/test,volume-opt=password=testpassword \
  nginx:latest
This example shows how you can create an NFS volume when creating a service. This
example uses 10.0.0.10 as the NFS server and /var/docker-nfs as the exported directory on
the NFS server. Note that the volume driver specified is local.
NFSV3
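A sketch along the lines of the Docker documentation's NFSv3 example, using the 10.0.0.10 server and /var/docker-nfs export mentioned above; the service and volume names are assumptions:
$ docker service create -d \
  --name nfs-service \
  --mount 'type=volume,source=nfsvolume,target=/app,volume-driver=local,volume-opt=type=nfs,volume-opt=device=:/var/docker-nfs,volume-opt=o=addr=10.0.0.10' \
  nginx:latest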
NFSV4
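The NFSv4 variant differs only in the mount options; again a sketch with assumed names:
$ docker service create -d \
  --name nfs-service \
  --mount 'type=volume,source=nfsvolume,target=/app,volume-driver=local,volume-opt=type=nfs,volume-opt=device=:/var/docker-nfs,"volume-opt=o=addr=10.0.0.10,rw,nfsvers=4,async"' \
  nginx:latest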
You can mount a Samba share directly in docker without configuring a mount point on your
host.
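A sketch of such a volume using the local driver's CIFS support; the server address, share, and credentials below are placeholders:
$ docker volume create \
  --driver local \
  --opt type=cifs \
  --opt device=//smbserver.example.com/share \
  --opt o=addr=smbserver.example.com,username=smbuser,password=smbpass,file_mode=0777,dir_mode=0777 \
  smb-volume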
Volumes are useful for backups, restores, and migrations. Use the --volumes-from flag to
create a new container that mounts that volume.
Backup a container
Launch a new container and mount the volume from the dbstore container
Mount a local host directory as /backup
Pass a command that tars the contents of the dbdata volume to a backup.tar file
inside our /backup directory.
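Putting those steps together, a sketch (dbstore and dbdata are the names used in the description above; ubuntu is an arbitrary helper image):
$ docker run --rm --volumes-from dbstore -v $(pwd):/backup ubuntu tar cvf /backup/backup.tar /dbdata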
With the backup just created, you can restore it to the same container, or another that you
made elsewhere.
Then un-tar the backup file in the new container's data volume:
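A sketch, assuming the new container is named dbstore2 and mounts a fresh dbdata volume:
$ docker run --rm --volumes-from dbstore2 -v $(pwd):/backup ubuntu bash -c "cd /dbdata && tar xvf /backup/backup.tar --strip 1"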
Remove volumes
A Docker data volume persists after a container is deleted. There are two types of volumes
to consider:
Named volumes have a specific source from outside the container, for example
awesome:/bar.
Anonymous volumes have no specific source, so when the container is deleted, you can instruct the Docker Engine daemon to remove them.
To automatically remove anonymous volumes, use the --rm option. For example, this
command creates an anonymous /foo volume. When the container is removed, the Docker
Engine removes the /foo volume but not the awesome volume.
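For example, a sketch that mirrors the /foo and awesome:/bar names used above (busybox and top are arbitrary choices):
$ docker run --rm -v /foo -v awesome:/bar busybox top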
Docker Network
Networking overview
One of the reasons Docker containers and services are so powerful is that you can connect
them together, or connect them to non-Docker workloads. Docker containers and services
do not even need to be aware that they are deployed on Docker, or whether their peers are
also Docker workloads or not. Whether your Docker hosts run Linux, Windows, or a mix of
the two, you can use Docker to manage them in a platform-agnostic way.
This topic defines some basic Docker networking concepts and prepares you to design and
deploy your applications to take full advantage of these capabilities.
Network drivers
Docker’s networking subsystem is pluggable, using drivers. Several drivers exist by default,
and provide core networking functionality:
bridge: The default network driver. If you don’t specify a driver, this is the type of network
you are creating. Bridge networks are usually used when your applications run in
standalone containers that need to communicate. See bridge networks.
host: For standalone containers, remove network isolation between the container and the
Docker host, and use the host’s networking directly. See use the host network.
overlay: Overlay networks connect multiple Docker daemons together and enable swarm
services to communicate with each other. You can also use overlay networks to facilitate
communication between a swarm service and a standalone container, or between two
standalone containers on different Docker daemons. This strategy removes the need to do
OS-level routing between these containers. See overlay networks.
macvlan: Macvlan networks allow you to assign a MAC address to a container, making it
appear as a physical device on your network. The Docker daemon routes traffic to
containers by their MAC addresses. Using the macvlan driver is sometimes the best choice
when dealing with legacy applications that expect to be directly connected to the physical
network, rather than routed through the Docker host’s network stack. See Macvlan
networks.
none: For this container, disable all networking. Usually used in conjunction with a custom
network driver. none is not available for swarm services. See disable container networking.
Network plugins: You can install and use third-party network plugins with Docker. These
plugins are available from Docker Hub or from third-party vendors. See the vendor’s
documentation for installing and using a given network plugin.
Network driver summary
User-defined bridge networks are best when you need multiple containers to
communicate on the same Docker host. Host networks are best when the network
stack should not be isolated from the Docker host, but you want other aspects of the
container to be isolated.
Overlay networks are best when you need containers running on different Docker
hosts to communicate, or when multiple applications work together using swarm
services.
Macvlan networks are best when you are migrating from a VM setup or need your
containers to look like physical hosts on your network, each with a unique MAC
address.
Third-party network plugins allow you to integrate Docker with specialized network
stacks.
In terms of networking, a bridge network is a Link Layer device which forwards traffic
between network segments. A bridge can be a hardware device or a software device
running within a host machine’s kernel.
In terms of Docker, a bridge network uses a software bridge which allows containers
connected to the same bridge network to communicate, while providing isolation from
containers which are not connected to that bridge network. The Docker bridge driver
automatically installs rules in the host machine so that containers on different bridge
networks cannot communicate directly with each other.
Bridge networks apply to containers running on the same Docker daemon host. For
communication among containers running on different Docker daemon hosts, you can either
manage routing at the OS level, or you can use an overlay network.
When you start Docker, a default bridge network (also called bridge) is created
automatically, and newly-started containers connect to it unless otherwise specified. You
can also create user-defined custom bridge networks. User-defined bridge networks are
superior to the default bridge network.
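A minimal sketch of creating and using a user-defined bridge network (all names are arbitrary):
$ docker network create my-bridge-net
$ docker run -d --name web --network my-bridge-net nginx
$ docker run --rm --network my-bridge-net alpine ping -c 3 web
On a user-defined bridge, containers can resolve each other by name, which the default bridge network does not provide.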
Use overlay networks
The overlay network driver creates a distributed network among multiple Docker daemon
hosts. This network sits on top of (overlays) the host-specific networks, allowing containers
connected to it (including swarm service containers) to communicate securely when
encryption is enabled. Docker transparently handles routing of each packet to and from the
correct Docker daemon host and the correct destination container.
When you initialize a swarm or join a Docker host to an existing swarm, two new networks
are created on that Docker host:
an overlay network called ingress, which handles control and data traffic related to
swarm services. When you create a swarm service and do not connect it to a user-
defined overlay network, it connects to the ingress network by default.
a bridge network called docker_gwbridge, which connects the individual Docker
daemon to the other daemons participating in the swarm.
You can create user-defined overlay networks using docker network create, in the same
way that you can create user-defined bridge networks. Services or containers can be
connected to more than one network at a time. Services or containers can only
communicate across networks they are each connected to.
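For example, a sketch (this requires an initialized swarm; --attachable is only needed if standalone containers should also join the network):
$ docker network create -d overlay --attachable my-overlay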
Although you can connect both swarm services and standalone containers to an overlay
network, the default behaviors and configuration concerns are different. For that reason, the
rest of this topic is divided into operations that apply to all overlay networks, those that apply
to swarm service networks, and those that apply to overlay networks used by standalone
containers.
If you use the host network mode for a container, that container’s network stack is not
isolated from the Docker host (the container shares the host’s networking namespace), and
the container does not get its own IP-address allocated. For instance, if you run a container
which binds to port 80 and you use host networking, the container’s application is available
on port 80 on the host’s IP address.
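A sketch of the nginx example just described (Linux hosts only):
$ docker run -d --network host --name host-nginx nginx
$ curl http://localhost:80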
Note: Given that the container does not have its own IP address when using host mode networking, port mapping does not take effect, and the -p, --publish, -P, and --publish-all options are ignored, producing a warning instead:
WARNING: Published ports are discarded when using host network mode
Host mode networking can be useful to optimize performance, and in situations where a
container needs to handle a large range of ports, as it does not require network address
translation (NAT), and no “userland-proxy” is created for each port.
The host networking driver only works on Linux hosts, and is not supported on Docker
Desktop for Mac, Docker Desktop for Windows, or Docker EE for Windows Server.
You can also use a host network for a swarm service, by passing --network host to the
docker service create command. In this case, control traffic (traffic related to managing the
swarm and the service) is still sent across an overlay network, but the individual swarm
service containers send data using the Docker daemon’s host network and ports. This
creates some extra limitations. For instance, if a service container binds to port 80, only one
service container can run on a given swarm node.
Prerequisites
Before starting the lab, run the lab login doops command first to log in to your account. The account credentials are the same as your credentials on the Adinusa platform.
After logging in, run the lab start do-003-1 command. This command is used to prepare the lab environment.
cd $HOME
mkdir -p latihan/latihan01-volume
cd latihan/latihan01-volume
3. Mount volumes.
Prerequisites
Before starting the lab, run the lab login doops command first to log in to your account. The account credentials are the same as your credentials on the Adinusa platform.
After logging in, run the lab start do-003-2 command. This command is used to prepare the lab environment.
1. Create a directory.
mkdir -p /data/nfs-storage01/
touch /mnt/file1.txt
touch /mnt/file2.txt
exit
4. Verify.
ls /data/nfs-storage01/
Prerequisites
Before starting the lab, run the lab login doops command first to log in to your account. The account credentials are the same as your credentials on the Adinusa platform.
After logging in, run the lab start do-003-3 command. This command is used to prepare the lab environment.
1. Create a volume.
docker volume ls
curl http://172.17.XXX.XXX
curl http://172.17.XXX.XXX
10. Display the details of the nginxtest-rovol container and note the Mode in the Mounts section.
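One way to display that information is sketched below; the format string is just one of several possibilities:
docker inspect -f '{{ json .Mounts }}' nginxtest-rovol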
Prerequisites
Before starting the lab, run the lab login doops command first to log in to your account. The account credentials are the same as your credentials on the Adinusa platform.
lab login doops
After logging in, run the lab start do-003-4 command. This command is used to prepare the lab environment.
5. Test browsing.
sudo docker ps
curl http://localhost:8090
Before starting the lab, run the lab login doops command first to log in to your account. The account credentials are the same as your credentials on the Adinusa platform.
After logging in, run the lab start do-003-5 command. This command is used to prepare the lab environment.
ip add
7. Test pinging the internet (should succeed).
ping -c 3 8.8.8.8
ping -c 3 172.17.YYY.YYY
ping -c 3 alpine2
10. Detach from the alpine1 container without closing the shell by pressing Ctrl+P, then Ctrl+Q.
Prerequisites
Before starting the lab, run the lab login doops command first to log in to your account. The account credentials are the same as your credentials on the Adinusa platform.
After logging in, run the lab start do-003-6 command. This command is used to prepare the lab environment.
curl http://localhost
Abstract
Overview Manipulate pre-built container images to create and manage containerized
services.
Objectives:
Sections:
Docker images
Building Custom Container Images with Dockerfile
Docker images
In Linux, everything is a file. The whole operating system is basically a filesystem with files
and folders stored on the local disk. This is an important fact to remember when looking at
what container images are. As we will see, an image is basically a big tarball containing a
filesystem. More specifically, it contains a layered filesystem.
Container images are templates from which containers are created. These images are not
made up of just one monolithic block but are composed of many layers. The first layer in the
image is also called the base layer. We can see this in the following graphic:
The image as a stack of layers
Each individual layer contains files and folders. Each layer only contains the changes to the filesystem with respect to the underlying layers. A storage
driver handles the details regarding the way these layers interact with each other. Different
storage drivers are available that have advantages and disadvantages in different
situations.
The layers of a container image are all immutable. Immutable means that once generated,
the layer cannot ever be changed. The only possible operation affecting the layer is its
physical deletion. This immutability of layers is important because it opens up a tremendous
amount of opportunities, as we will see.
In the following screenshot, we can see what a custom image for a web application, using
Nginx as a web server, could look like:
A sample custom image based on Alpine and Nginx
Our base layer here consists of the Alpine Linux distribution. Then, on top of that, we have
an Add Nginx layer where Nginx is added on top of Alpine. Finally, the third layer contains
all the files that make up the web application, such as HTML, CSS, and JavaScript files.
As has been said previously, each image starts with a base image. Typically, this base
image is one of the official images found on Docker Hub, such as a Linux distro, Alpine,
Ubuntu, or CentOS. However, it is also possible to create an image from scratch.
Docker Hub is a public registry for container images. It is a central hub ideally suited for
sharing public container images.
Each layer only contains the delta of changes in regard to the previous set of layers. The
content of each layer is mapped to a special folder on the host system, which is usually a
subfolder of /var/lib/docker/.
Since layers are immutable, they can be cached without ever becoming stale. This is a big
advantage, as we will see.
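You can see the layers of any image in the local cache; for example (nginx is just an example image):
$ docker image history nginx:latest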
The writable container layer
Since the layers of a container image are immutable, each running container adds its own thin, writable layer on top of the shared, read-only image layers; all changes a container makes are written to this layer, while the underlying image layers are shared between all containers created from the same image.
This technique, of course, results in a tremendous reduction in the resources that are consumed. Furthermore, it helps to decrease the loading time of a container since only a thin container layer has to be created once the image layers have been loaded into memory, which only happens for the first container.
Docker Registry
An overview of Docker Registry
Source: thecustomizewindows.com
Docker Registry, as stated earlier, is an open source application that you can utilize to store
your Docker images on a platform of your choice. This allows you to keep them 100%
private if you wish, or share them as needed.
Docker Registry makes a lot of sense if you want to deploy your own registry without having to pay for all the private features of Docker Hub. Next, let's take a look at some comparisons between Docker Hub and Docker Registry to help you make an educated decision as to which platform to choose to store your images.
Docker Hub may be the better choice if you want to:
Get a GUI-based interface that you can use to manage your images
Have a location already set up in the cloud that is ready to handle public and/or
private images
Have the peace of mind of not having to manage a server that is hosting all your
images
Docker Hub
Docker Hub is the world's easiest way to create, manage, and deliver your teams' container
applications.
Docker Hub repositories allow you to share container images with your team, customers, or the Docker community at large.
Docker images are pushed to Docker Hub through the docker push command. A single
Docker Hub repository can hold many Docker images (stored as tags).
Manipulating Container Images
Introduction
There are various ways to manage container images in a DevOps fashion. For example, a developer has finished testing a custom container on one machine and needs to transfer this container image to another host for another developer, or to a production server. There are two ways to accomplish this: saving and loading the image as a file, or pushing and pulling it through an image registry.
One of the ways a developer could have created this custom container is discussed later in this chapter (docker commit). However, the recommended way to do so, that is, using Dockerfiles, is discussed in the next chapters.
Existing images from the Docker cache can be saved to a .tar file using the docker save
command. The generated file is not a regular tar file: it contains image metadata and
preserves the original image layers. By doing so, the original image can be later recreated
exactly as it was.
If the -o option is not used the generated image is sent to the standard output as binary
data.
In the following example, the MySQL container image from the Docker registry is saved to
the mysql.tar file:
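A sketch of such a command:
$ docker save -o mysql.tar mysql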
.tar files generated using the save verb can be used for backup purposes. To restore the
container image, use the docker load command. The general syntax of the command is as
follows:
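A sketch of restoring the image saved above:
$ docker load -i mysql.tar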
To publish an image to a registry, it must be stored in Docker's cache and should be tagged for identification purposes. To tag an image, use the tag verb, as follows:
For example, let's say I want to push the latest version of Alpine to my account and give it a tag of versi1. I can do this in the following way:
After a successful login, I can then push the image, like this:
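A sketch of that workflow; replace <username> with your own Docker Hub account (versi1 is the tag from the example):
$ docker tag alpine:latest <username>/alpine:versi1
$ docker login
$ docker push <username>/alpine:versi1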
For each image that we push to Docker Hub, we automatically create a repository. A
repository can be private or public. Everyone can pull an image from a public repository.
From a private repository, an image can only be pulled if one is logged in to the registry and
has the necessary permissions configured.
To delete all images that are not used by any container, use the following command:
The command returns all the image IDs available in the cache and passes them as a parameter to the docker rmi command for removal. Images that are in use are not deleted; however, this does not prevent any unused images from being removed.
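A sketch of the command just described:
$ docker rmi $(docker images -q)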
In this section, we will cover Dockerfiles in-depth, along with the best practices to use. So
what is a Dockerfile?
A Dockerfile is simply a plain text file that contains a set of user-defined instructions. When the Dockerfile is called by the docker image build command, which we will look at next, it is used to assemble a container image. A Dockerfile looks like the following:
FROM alpine:latest
LABEL maintainer="adinusa <[email protected]>"
LABEL description="This example Dockerfile installs NGINX."
RUN apk add --update nginx && \
rm -rf /var/cache/apk/* && \
mkdir -p /tmp/nginx/
COPY files/nginx.conf /etc/nginx/nginx.conf
COPY files/default.conf /etc/nginx/conf.d/default.conf
ADD files/html.tar.gz /usr/share/nginx/
EXPOSE 80/tcp
ENTRYPOINT ["nginx"]
CMD ["-g", "daemon off;"]
As you can see, even with no explanation, it is quite easy to get an idea of what each step
of the Dockerfile instructs the build command to do.
Before we move on to working our way through the previous file, we should quickly touch
upon Alpine Linux.
Alpine Linux, due both to its size, and how powerful it is, has become the default image
base for the official container images supplied by Docker. Because of this, we will be using
it throughout this book. To give you an idea of just how small the official image for Alpine
Linux is, let's compare it to some of the other distributions available at the time of writing:
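One way to make this comparison yourself is sketched below; the exact sizes you see will vary by tag and architecture:
$ docker image pull alpine:latest
$ docker image pull ubuntu:latest
$ docker image pull fedora:latest
$ docker image ls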
As you can see from the Terminal output, Alpine Linux weighs in at only 4.41 MB, as
opposed to the biggest image, which is Fedora, at 253 MB. A bare-metal installation of
Alpine Linux comes in at around 130 MB, which is still almost half the size of the Fedora
container image.
Let's take a look at the instructions used in the Dockerfile example. We will look at them in
the order in which they appear:
FROM
The FROM instruction tells Docker which base you would like to use for your image; as
already mentioned, we are using Alpine Linux, so we simply have to put the name of the
image and the release tag we wish to use. In our case, to use the latest official Alpine Linux
image, we simply need to add alpine:latest.
LABEL
The LABEL instruction can be used to add extra information to the image. This information
can be anything from a version number to a description. It's also recommended that you
limit the number of labels you use. A good label structure will help others who have to use
our image later on.
However, using too many labels can cause the image to become inefficient as well, so I
would recommend using the label schema detailed at http://label-schema.org/. You can
view the containers' labels with the following Docker inspect command:
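For example, a sketch; the image tag local:dockerfile-example is an assumption:
$ docker image inspect -f '{{ json .Config.Labels }}' local:dockerfile-example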
RUN
The RUN instruction is where we interact with our image to install software and run scripts,
commands, and other tasks. As you can see from our RUN instruction, we are actually
running three commands:
The first of our three commands is the equivalent of running the following command if we
had a shell on an Alpine Linux host:
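On an Alpine host, that command would be:
$ apk add --update nginx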
We are using the && operator to move on to the next command if the previous command
was successful. To make it more obvious which commands we are running, we are also
using \ so that we can split the command over multiple lines, making it easy to read.
The next command in our chain removes any temporary files and so on to keep the size of
our image to a minimum:
$ rm -rf /var/cache/apk/*
The final command in our chain creates a folder with a path of /tmp/nginx/, so that nginx will
start correctly when we run the container:
$ mkdir -p /tmp/nginx/
We could have also used the following in our Dockerfile to achieve the same results:
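That is, something like the following, which is equivalent to the chained RUN above but produces three layers instead of one:
RUN apk add --update nginx
RUN rm -rf /var/cache/apk/*
RUN mkdir -p /tmp/nginx/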
However, much like adding multiple labels, this is considered inefficient as it can add to the overall size of the image, which for the most part we should try to avoid.
There are some valid use cases for this, which we will look at later in the chapter. For the
most part, this approach to running commands should be avoided when your image is being
built.
COPY and ADD
At first glance, COPY and ADD look like they are doing the same task; however, there are some important differences. The COPY instruction is the more straightforward of the two:
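The COPY lines in question are the ones from the Dockerfile at the start of this section:
COPY files/nginx.conf /etc/nginx/nginx.conf
COPY files/default.conf /etc/nginx/conf.d/default.conf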
As you have probably guessed, we are copying two files from the files folder on the host we
are building our image on. The first file is nginx.conf.
EXPOSE
The EXPOSE instruction lets Docker know that when the image is executed, the port and
protocol defined will be exposed at runtime. This instruction does not map the port to the
host machine, but instead, opens the port to allow access to the service on the container
network.
For example, in our Dockerfile, we are telling Docker to open port 80 every time the image
runs:
EXPOSE 80/tcp
ENTRYPOINT and CMD
For example, if you want to have a default command that you want to execute inside a container, you could do something similar to the following example, but be sure to use a command that keeps the container alive. In our case, we are using the following:
ENTRYPOINT ["nginx"]
CMD ["-g", "daemon off;"]
What this means is that whenever we launch a container from our image, the nginx binary is
executed, as we have defined that as our ENTRYPOINT, and then whatever we have as
the CMD is executed, giving us the equivalent of running the following command:
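As a sketch, assuming the image built from the Dockerfile above is tagged local:nginx-example (the tag is an assumption):
$ nginx -g "daemon off;"
If we instead override the CMD by passing an argument when starting a container, for example:
$ docker container run --name nginx-version local:nginx-example -v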
This would be the equivalent of running the following command on our host:
$ nginx -v
Notice that we didn't have to tell Docker to use nginx. As we have the nginx binary as our
entry point, any command we pass overrides the CMD that had been defined in the
Dockerfile.
This would display the version of nginx we have installed, and our container would stop, as
the nginx binary would only be executed to display the version information and then the
process would stop. We will look at this example later in this chapter, once we have built our
image.
USER
The USER instruction specifies the username or the UID to use when running the container image for the RUN, CMD, and ENTRYPOINT instructions in the Dockerfile. It is good practice to define a user other than root, for security reasons.
WORKDIR
The WORKDIR instruction sets the working directory for the same set of instructions that
the USER instruction can use (RUN, CMD, and ENTRYPOINT). It will allow you to use the
CMD and ADD instructions as well.
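A brief sketch of both instructions in a Dockerfile (the user and directory are arbitrary examples):
USER nginx
WORKDIR /usr/share/nginx/html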
Prerequisites
Before starting the lab, run the lab login doops command first to log in to your account. The login credentials are the same as the credentials registered on this platform (Adinusa).
Dockerfile (latihan01)
3. Build the image.
4. Create the container.
curl localhost:8080
Dockerfile (latihan02)
cd $HOME
mkdir latihan02
cd latihan02
vim Dockerfile
...
# Use whalesay image as a base image
FROM docker/whalesay:latest
# Install fortunes
RUN apt -y update && apt install -y fortunes
# Execute command
CMD /usr/games/fortune -a | cowsay
...
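A sketch of the build and run commands for the following steps (the tag latihan02 is an assumption):
docker build -t latihan02 .
docker run --rm latihan02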
docker image ls
5. Test running the image.
6. Display the containers.
docker ps
docker container ls -a
Prerequisites
Before starting the lab, run the lab login doops command first to log in to your account. The login credentials are the same as the credentials registered on this platform (Adinusa).
Dockerfile (latihan03)
cd $HOME
mkdir latihan03
cd latihan03
4. Create the Flask application file.
vim app.py
...
from flask import Flask

app = Flask(__name__)

@app.route('/')
def hello_world():
    return 'Hey, we have Flask in a Docker container!'

if __name__ == '__main__':
    app.run(debug=True, host='0.0.0.0')
...
5. Create requirements.txt.
vi requirements.txt
...
Flask==0.10.1
Werkzeug==1.0.0
Jinja2==2.8.1
MarkupSafe==1.1.1
itsdangerous==1.1.0
...
vim Dockerfile
...
FROM ubuntu:16.04
RUN apt-get update -y && \
    apt-get install python-pip python-dev -y
RUN mkdir /app
# Copy the application files into the image so pip can find requirements.txt
COPY requirements.txt /app/requirements.txt
COPY app.py /app/app.py
WORKDIR /app
RUN pip install -r requirements.txt
ENTRYPOINT ["python"]
CMD ["app.py"]
...
7. Build an image from the Dockerfile.
8. Tag image.
9. Push image.
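A sketch of what steps 7-9 typically look like, plus running the container so the final curl works; the image name flask-app, the <username> placeholder, and the port mapping are assumptions:
docker build -t flask-app:1.0 .
docker tag flask-app:1.0 <username>/flask-app:1.0
docker push <username>/flask-app:1.0
docker run -d --name flask-app -p 5000:5000 flask-app:1.0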
curl localhost:5000
Prerequisites
Before starting the lab, run the lab login doops command first to log in to your account. The login credentials are the same as the credentials registered on this platform (Adinusa).
Dockerfile (latihan03)
1. If you do not have a Docker ID yet, register at https://id.docker.com.
cd $HOME
mkdir latihan03
cd latihan03
vim app.py
...
from flask import Flask

app = Flask(__name__)

@app.route('/')
def hello_world():
    return 'Hey, we have Flask in a Docker container!'

if __name__ == '__main__':
    app.run(debug=True, host='0.0.0.0')
...
5. Create requirements.txt.
vi requirements.txt
...
Flask==0.10.1
Werkzeug==1.0.0
Jinja2==2.8.1
MarkupSafe==1.1.1
itsdangerous==1.1.0
...
vim Dockerfile
...
FROM ubuntu:16.04
RUN apt-get update -y && \
    apt-get install python-pip python-dev -y
RUN mkdir /app
# Copy the application files into the image so pip can find requirements.txt
COPY requirements.txt /app/requirements.txt
COPY app.py /app/app.py
WORKDIR /app
RUN pip install -r requirements.txt
ENTRYPOINT ["python"]
CMD ["app.py"]
...
8. Tag image.
9. Push image.
curl localhost:5000
Before starting the lab, run the lab login doops command first to log in to your account. The login credentials are the same as the credentials registered on this platform (Adinusa).
6. Test browsing.
Docker Compose
Abstract
Overview In the previous chapter, we learned a lot about how container networking works
on a single Docker host. We introduced the Container Network Model (CNM), which forms
the basis of all networking between Docker containers, and then we dove deep into different
implementations of the CNM, specifically the bridge network. Finally, we introduced Traefik,
a reverse proxy, to enable sophisticated HTTP application-level routing between containers.
This chapter introduces the concept of an application consisting of multiple services, each
running in a container, and how Docker Compose allows us to easily build, run, and scale
such an application using a declarative approach.
This chapter covers the following topics:
After completing this chapter, the reader will be able to do the following:
Explain in a few short sentences the main differences between an imperative and
declarative approach for defining and running an application
Describe in their own words the difference between a container and a Docker
Compose service
Author a Docker Compose YAML file for a simple multi-service application
Build, push, deploy, and tear down a simple multi-service application using Docker
Compose
Use Docker Compose to scale an application service up and down
Define environment-specific Docker Compose files using overrides
Docker Compose is a tool provided by Docker that is mainly used where you need to run
and orchestrate containers running on a single Docker host. This includes, but is not limited
to, development, continuous integration (CI), automated testing, manual QA, or demos.
Imperative: This is a way in which we can solve problems by specifying the exact
procedure that has to be followed by the system.
If we tell a system such as the Docker daemon imperatively how to run an application, then that means that we have to describe step by step what the system has to do and how it has to react if some unexpected situation occurs. We have to be very explicit and precise in our instructions, and we need to cover all edge cases and how they need to be treated.
Declarative: This is a way in which we can solve problems without requiring the
programmer to specify an exact procedure to be followed.
A declarative approach means that we tell the Docker engine what our desired state for an application is, and it has to figure out on its own how to achieve this desired state and how to reconcile it if the system deviates from it.
Docker clearly recommends the declarative approach when dealing with containerized
applications. Consequently, the Docker Compose tool uses this approach.
In most cases, applications do not consist of only one monolithic block, but rather of several
application services that work together. When using Docker containers, each application
service runs in its own container. When we want to run such a multi-service application, we
can, of course, start all the participating containers with the well-known docker container run
command, and we have done this in previous chapters. But this is inefficient at best. With
the Docker Compose tool, we are given a way to define the application in a declarative way
in a file that uses the YAML format.
cat docker-compose.yml
version: "3.2"
services:
web:
image: palopalepalo/web:1.0
build: web
ports:
- 80:3000
db:
image: palopalepalo/db:1.0
build: db
volumes:
- pets-data:/var/lib/postgresql/data
volumes:
pets-data:
version: In this line, we specify the version of the Docker Compose file format we want to use. In our sample, this is version 3.2.
services: In this section, we specify the services that make up our application in the
services block. In our sample, we have two application services and we call them
web and db:
web: The web service is using an image called palopalepalo/web:1.0, which, if not already in the image cache, is built from the Dockerfile found in the web folder. The service is also publishing container port 3000 to the host port 80.
db: The db service, on the other hand, is using the image name palopalepalo/db:1.0, which is a customized PostgreSQL database. Once again, if the image is not already in the cache, it is built from the Dockerfile found in the db folder. We are mounting a volume called pets-data into the container of the db service.
volumes: The volumes used by any of the services have to be declared in this
section. In our sample, this is the last section of the file. The first time the application
is run, a volume called pets-data will be created by Docker and then, in subsequent
runs, if the volume is still there, it will be reused. This could be important when the
application, for some reason, crashes and has to be restarted. Then, the previous
data is still around and ready to be used by the restarted database service.
Note that version 2.x of the Docker Compose file syntax is targeted toward deployments on a single Docker host, while version 3.x is used when you want to define an application that is targeted at Docker Swarm or Kubernetes. Our sample uses version 3.2, which also works fine with Docker Compose on a single host. We will discuss orchestrators in more detail starting with Chapter 12, Orchestrators.
Navigate to the practice03 subfolder of the exercise folder and then build the images:
# cd ~/Docker-For-DevOps/exercise/practice03
# docker-compose build
When we enter the preceding command, the tool assumes that there is a file called docker-compose.yml in the current directory and uses it to run. In our case, this is indeed true, and the tool builds the images.
In the output of the preceding command, you can see that docker-compose first downloads the base image for the web image we are building, node:12.12-alpine, from Docker Hub. Subsequently,
it uses the Dockerfile found in the web folder to build the image and names it
palopalepalo/web:1.0. But this is only the first part; the second part of the output should look
similar to this:
Building db
Step 1/5 : FROM postgres:12-alpine
12-alpine: Pulling from library/postgres
df20fa9351a1: Already exists
600cd4e17445: Already exists
04c8eedc9a76: Already exists
27e869070ff2: Already exists
a64f75a232e2: Already exists
ee71e22a1c96: Already exists
0ef267de4e32: Already exists
69bfaaa66791: Already exists
Digest:
sha256:b49bafad6a2b7cbafdae4f974d9c4c7ff5c6bb7ec98c09db7b156a42a3c57baf
Status: Downloaded newer image for postgres:12-alpine
---> 3d77dd9d9dc3
Step 2/5 : COPY init-db.sql /docker-entrypoint-initdb.d/
---> 1e296971fa64
Step 3/5 : ENV POSTGRES_USER dockeruser
---> Running in 8d5cf6ed47f5
Removing intermediate container 8d5cf6ed47f5
---> 968c208ece21
Step 4/5 : ENV POSTGRES_PASSWORD dockerpass
---> Running in f7bac4b31ddc
Removing intermediate container f7bac4b31ddc
---> 50b5ae16aa0a
Step 5/5 : ENV POSTGRES_DB pets
---> Running in 4c54a5fa0e94
Removing intermediate container 4c54a5fa0e94
---> 5061eb4cc803
Successfully built 5061eb4cc803
Successfully tagged palopalepalo/db:1.0
Here, once again, docker-compose pulls the base image, postgres:12-alpine, from Docker Hub and then uses the Dockerfile found in the db folder to build the image we call palopalepalo/db:1.0.
Scaling a service
Now, let's, for a moment, assume that our sample application has been live on the web and
become very successful. Loads of people want to see our cute animal images. So now
we're facing a problem since our application has started to slow down. To counteract this
problem, we want to run multiple instances of the web service. With Docker Compose, this
is readily done.
Running more instances is also called scaling up. We can use this tool to scale our web
service up to, say, three instances:
root@ubuntu20:~/Docker-For-DevOps/exercise/practice03# docker-compose up
--scale web=3
practice03_db_1 is up-to-date
WARNING: The "web" service specifies a port on the host. If multiple
containers for this service are created on a single host, the port will
clash.
Starting practice03_web_1 ... done
Creating practice03_web_2 ... error
Creating practice03_web_3 ... error
ERROR: for web Cannot start service web: driver failed programming
external connectivity on endpoint practice03_web_3
(376611b5eddf8c40095e716035ee55fc13c6ba262089bf14421c636241592216): Bind
for 0.0.0.0:80 failed: port is already allocated
ERROR: Encountered errors while bringing up the project.
The second and third instances of the web service fail to start. The error message tells us
why: we cannot use the same host port 80 more than once. When instances 2 and 3 try to
start, Docker realizes that port 80 is already taken by the first instance. What can we do?
Well, we can just let Docker decide which host port to use for each instance.
If in the ports section of the compose file, we only specify the container port and leave out
the host port, then Docker automatically selects an ephemeral port. Let's do exactly this:
version: "3.2"
services:
web:
image: palopalepalo/web:1.0
build: web
ports:
- 3000
db:
image: palopalepalo/db:1.0
build: db
volumes:
- pets-data:/var/lib/postgresql/data
volumes:
pets-data:
3. Now, we can start the application again and scale it up immediately after that:
# docker-compose up -d
# docker-compose up -d --scale web=3
Starting practice03_web_1 ... done
Creating practice03_web_2 ... done
Creating practice03_web_3 ... done
root@ubuntu20:~/Docker-For-DevOps/exercise/practice03# docker-compose ps
Name                Command                          State   Ports
----------------------------------------------------------------------------------
practice03_db_1     docker-entrypoint.sh postgres    Up      5432/tcp
practice03_web_1    docker-entrypoint.sh /bin/ ...   Up      0.0.0.0:32769->3000/tcp
practice03_web_2    docker-entrypoint.sh /bin/ ...   Up      0.0.0.0:32770->3000/tcp
practice03_web_3    docker-entrypoint.sh /bin/ ...   Up      0.0.0.0:32771->3000/tcp
# curl -4 localhost:32771
Pets Demo Application
The answer, Pets Demo Application, tells us that, indeed, our application is still working as
expected. Try it out for the other two instances to be sure.
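To scale back down or to tear the application down completely, the same declarative approach applies. A sketch of the typical commands:
# Reduce the web service back to a single instance
docker-compose up -d --scale web=1
# Stop and remove the containers and the default network (add -v to also remove the pets-data volume)
docker-compose down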
Prerequisites
Before starting the lab, first run the command lab login doops to log in to your account. The login credentials are the same as the credentials registered on this platform (Adinusa).
Install Compose
1. Download Compose.
2. Test the installation.
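A sketch of one common way to perform these two steps on Linux (the release version below is only an example; the lab's exact commands may pin a different one):
sudo curl -L "https://github.com/docker/compose/releases/download/1.29.2/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose
sudo chmod +x /usr/local/bin/docker-compose
# Verify the installation
docker-compose --version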
cd $HOME
mkdir -p latihan/my_wordpress
cd latihan/my_wordpress
vim docker-compose.yml
version: '3.2'
services:
  db:
    image: mysql:5.7
    volumes:
      - dbdata:/var/lib/mysql
    restart: always
    environment:
      MYSQL_ROOT_PASSWORD: somewordpress
      MYSQL_DATABASE: wordpress
      MYSQL_USER: [username]
      MYSQL_PASSWORD: [password]
  wordpress:
    depends_on:
      - db
    image: wordpress:latest
    ports:
      - "8000:80"
    restart: always
    environment:
      WORDPRESS_DB_HOST: db:3306
      WORDPRESS_DB_USER: [username]
      WORDPRESS_DB_PASSWORD: [password]
volumes:
  dbdata:
5. Run Compose.
sudo docker-compose up -d
6. List the containers and test browsing to the WordPress page that was created.
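A sketch of how step 6 can be verified from the command line (the WordPress setup page itself is normally opened in a browser):
sudo docker-compose ps
# The wordpress service publishes port 8000 on the host
curl -I http://localhost:8000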
According to the 2020 JetBrains developer survey, 44% of developers are now using some form of continuous integration and deployment with Docker containers. We understand that a large number of developers have set this up using Docker Hub as their container registry for part of their workflow. This guide contains some best practices for doing this and provides guidance on how to get started.
We have also heard feedback that, given the changes Docker introduced relating to network egress and the number of pulls for free users, there are questions around the best way to use Docker Hub as part of CI/CD workflows without hitting these limits. This guide covers best practices that improve your experience and encourage a sensible consumption of Docker Hub that mitigates the risk of hitting these limits, and it contains tips on how to increase the limits depending on your use case.
CI Using Docker
CI/CD merges development with testing, allowing developers to build code collaboratively, submit it to the master branch, and have it checked for issues. This allows developers to not only build their code, but also test their code in any environment type and as often as possible to catch bugs early in the application's development lifecycle. Since Docker can integrate with tools like Jenkins and GitHub, developers can submit code to GitHub, test the code, and automatically trigger a build using Jenkins; once the image is complete, it can be added to a Docker registry. This streamlines the process and saves time on build and setup processes, all while allowing developers to run tests in parallel and automate them so that they can continue to work on other projects while tests are being run.
Reference: https://collabnix.com/5-minutes-to-continuous-integration-pipeline-using-docker-jenkins-github-on-play-with-docker-platform/
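As a rough, hedged sketch of one such pipeline stage (the image name, tag variable, test script, and credential variables are all hypothetical placeholders), a CI job might build, test, and push roughly like this; logging in first also counts pulls against your account's rate limit rather than the lower anonymous one:
# Authenticate to Docker Hub (credentials injected by the CI system)
echo "$DOCKERHUB_TOKEN" | docker login -u "$DOCKERHUB_USER" --password-stdin
# Build the image and run a smoke test against it
docker build -t myorg/myapp:$GIT_COMMIT .
docker run --rm myorg/myapp:$GIT_COMMIT ./run-tests.sh
# Push only after the tests pass
docker push myorg/myapp:$GIT_COMMIT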
Docker Hub Automated Build
Docker Hub can automatically build images from source code in an external repository and
automatically push the built image to your Docker repositories.
When you set up automated builds (also called autobuilds), you create a list of branches
and tags that you want to build into Docker images. When you push code to a source code
branch (for example in GitHub) for one of those listed image tags, the push uses a webhook
to trigger a new build, which produces a Docker image. The built image is then pushed to
the Docker Hub registry.
Automated builds build images from a build context stored in a repository. A build context is a Dockerfile and any files at a specific location. Automated Builds have several advantages: images built in this way are built exactly as specified, the Dockerfile is available to anyone with access to your Docker Hub repository, and your repository is kept up to date with code changes automatically.
Automated Builds are supported for both public and private repositories on both GitHub and Bitbucket.
Build Statuses
The docker logs command shows information logged by a running container. The docker
service logs command shows information logged by all containers participating in a service.
The information that is logged and the format of the log depends almost entirely on the
container’s endpoint command.
By default, docker logs or docker service logs shows the command’s output just as it would
appear if you ran the command interactively in a terminal. UNIX and Linux commands
typically open three I/O streams when they run, called STDIN, STDOUT, and STDERR.
STDIN is the command’s input stream, which may include input from the keyboard or input
from another command. STDOUT is usually a command’s normal output, and STDERR is
typically used to output error messages. By default, docker logs shows the command’s
STDOUT and STDERR. To read more about I/O and Linux, see the Linux Documentation
Project article on I/O redirection.
In some cases, docker logs may not show useful information unless you take additional
steps.
If you use a logging driver which sends logs to a file, an external host, a database, or
another logging back-end, docker logs may not show useful information.
If your image runs a non-interactive process such as a web server or a database, that
application may send its output to log files instead of STDOUT and STDERR.
In the first case, your logs are processed in other ways and you may choose not to use
docker logs. In the second case, the official nginx image shows one workaround, and the
official Apache httpd image shows another.
The official httpd image changes the httpd application’s configuration to write its normal output directly to /proc/self/fd/1 (which is STDOUT) and its errors to /proc/self/fd/2 (which is STDERR).
Example logs:
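A short sketch of inspecting container logs ahead of the lab below (the container name web01 is only an example):
docker run -d --name web01 -p 8080:80 nginx
curl -s localhost:8080 > /dev/null
# Show everything the container wrote to STDOUT/STDERR
docker logs web01
# Show only the last 10 entries with timestamps (add -f to keep following)
docker logs --timestamps --tail 10 web01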
Before starting the lab, first run the command lab login doops to log in to your account. The login credentials are the same as the credentials registered on this platform (Adinusa).
2. Run nginx.
4. Check the logs.
Logging Driver
When building containerized applications, logging is definitely one of the most important
things to get right from a DevOps standpoint. Log management helps DevOps teams debug
and troubleshoot issues faster, making it easier to identify patterns, spot bugs, and make
sure they don’t come back to bite you!
In this article, we’ll refer to Docker logging in terms of container logging, meaning logs that
are generated by containers. These logs are specific to Docker and are stored on the
Docker host. Later on, we’ll check out Docker daemon logs as well. These are the logs that
are generated by Docker itself. You will need those to debug errors in the Docker engine.
Prerequisites
Before starting the lab, first run the command lab login doops to log in to your account. The login credentials are the same as the credentials registered on this platform (Adinusa).
vim /etc/docker/daemon.json
...
{
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "10m",
    "max-file": "100"
  }
}
...
2. Reload the daemon configuration and restart Docker.
systemctl daemon-reload
systemctl restart docker
3. Run a container.
cat /var/lib/docker/containers/CONTAINER/xxx-json.log
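To confirm which logging driver is actually in effect after the restart, checks along these lines can be used (replace CONTAINER with the name or ID of the container you ran):
# Daemon-wide default logging driver
docker info --format '{{.LoggingDriver}}'
# Effective log configuration of a specific container
docker inspect --format '{{json .HostConfig.LogConfig}}' CONTAINER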
Health Check
A health check is used to determine whether the application inside a container is ready and able to perform its function.
The health check command is defined with the HEALTHCHECK instruction, which is described below.
Health Check
The HEALTHCHECK instruction tells Docker how to test a container to check that it is still
working. This can detect cases such as a web server that is stuck in an infinite loop and
unable to handle new connections, even though the server process is still running.
When a container has a healthcheck specified, it has a health status in addition to its normal
status. This status is initially starting. Whenever a health check passes, it becomes healthy
(whatever state it was previously in). After a certain number of consecutive failures, it
becomes unhealthy.
--interval=DURATION (default: 30s)
--timeout=DURATION (default: 30s)
--start-period=DURATION (default: 0s)
--retries=N (default: 3)
The health check will first run interval seconds after the container is started, and then again
interval seconds after each previous check completes.
If a single run of the check takes longer than timeout seconds then the check is considered
to have failed.
It takes retries consecutive failures of the health check for the container to be considered
unhealthy.
start period provides initialization time for containers that need time to bootstrap. Probe
failure during that period will not be counted towards the maximum number of retries.
However, if a health check succeeds during the start period, the container is considered
started and all consecutive failures will be counted towards the maximum number of retries.
There can only be one HEALTHCHECK instruction in a Dockerfile. If you list more than one
then only the last HEALTHCHECK will take effect.
The command after the CMD keyword can be either a shell command (e.g.
HEALTHCHECK CMD /bin/check-running) or an exec array (as with other Dockerfile
commands; see e.g. ENTRYPOINT for details).
The command’s exit status indicates the health status of the container. The possible values
are:
0: success - the container is healthy and ready for use
1: unhealthy - the container is not working correctly
2: reserved - do not use this exit code
For example, to check every five minutes or so that a web-server is able to serve the site’s
main page within three seconds:
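A HEALTHCHECK instruction along the following lines, mirroring the example in the Docker documentation, would do this:
HEALTHCHECK --interval=5m --timeout=3s \
  CMD curl -f http://localhost/ || exit 1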
To help debug failing probes, any output text (UTF-8 encoded) that the command writes on
stdout or stderr will be stored in the health status and can be queried with docker inspect.
Such output should be kept short (only the first 4096 bytes are stored currently).
Before starting the lab, first run the command lab login doops to log in to your account. The login credentials are the same as the credentials registered on this platform (Adinusa).
1. Create a directory.
cd $HOME
mkdir hc-latihan01
cd hc-latihan01
vim Dockerfile
...
FROM katacoda/docker-http-server:health
HEALTHCHECK --interval=1s --retries=3 \
CMD curl --fail http://localhost:80/ || exit 1
...
3. Build the image.
4. Run the image.
5. Check the container status.
docker ps
# check the STATUS column
curl http://localhost/
# the page is accessible
curl http://localhost/unhealthy
docker container ls
curl http://localhost/
cd $HOME
mkdir hc-latihan02
cd hc-latihan02
vim server.js
...
"use strict";
const http = require('http');
function createServer () {
  return http.createServer(function (req, res) {
    res.writeHead(200, {'Content-Type': 'text/plain'});
    res.end('OK\n');
  }).listen(8080);
}
// Start the application server and expose a management endpoint on port 8081
// that stops/starts it, so the health check can be made to fail on demand.
let server = createServer();
http.createServer(function (req, res) {
  res.writeHead(200, {'Content-Type': 'text/plain'});
  if (server) { server.close(); server = null; res.end('App server stopped\n'); }
  else { server = createServer(); res.end('App server started\n'); }
}).listen(8081);
...
vim Dockerfile
...
FROM node
COPY server.js /
EXPOSE 8080 8081
HEALTHCHECK --interval=5s --timeout=10s --retries=3 \
  CMD curl -sS 127.0.0.1:8080 || exit 1
CMD ["node","server.js"]
...
4. Build the image.
5. Run the image.
6. Check the container.
curl 127.0.0.1:8080
docker ps
docker inspect --format "{{ json .State.Health }}" nodeserver
7. Check the container.
curl 127.0.0.1:8081
docker ps
docker inspect --format "{{ json .State.Health }}" nodeserver
8. Check the container.
curl 127.0.0.1:8081
docker ps
docker inspect --format "{{ json .State.Health }}" nodeserver
Security
There are four major areas to consider when reviewing Docker security:
the intrinsic security of the kernel and its support for namespaces and cgroups;
the attack surface of the Docker daemon itself;
loopholes in the container configuration profile, either by default, or when customized by users;
the “hardening” security features of the kernel and how they interact with containers.
Docker Security
Kernel namespaces
Docker containers are very similar to LXC containers, and they have similar security
features. When you start a container with docker run, behind the scenes Docker creates a
set of namespaces and control groups for the container.
Each container also gets its own network stack, meaning that a container doesn’t get
privileged access to the sockets or interfaces of another container. Of course, if the host
system is set up accordingly, containers can interact with each other through their respective
network interfaces — just like they can interact with external hosts. When you specify public
ports for your containers or use links then IP traffic is allowed between containers. They can
ping each other, send/receive UDP packets, and establish TCP connections, but that can
be restricted if necessary. From a network architecture point of view, all containers on a
given Docker host are sitting on bridge interfaces. This means that they are just like
physical machines connected through a common Ethernet switch; no more, no less.
How mature is the code providing kernel namespaces and private networking? Kernel
namespaces were introduced between kernel version 2.6.15 and 2.6.26. This means that
since July 2008 (date of the 2.6.26 release ), namespace code has been exercised and
scrutinized on a large number of production systems. And there is more: the design and
inspiration for the namespaces code are even older. Namespaces are actually an effort to
reimplement the features of OpenVZ in such a way that they could be merged within the
mainstream kernel. And OpenVZ was initially released in 2005, so both the design and the
implementation are pretty mature.
Control groups
Control Groups are another key component of Linux Containers. They implement resource
accounting and limiting. They provide many useful metrics, but they also help ensure that
each container gets its fair share of memory, CPU, disk I/O; and, more importantly, that a
single container cannot bring the system down by exhausting one of those resources.
So while they do not play a role in preventing one container from accessing or affecting the
data and processes of another container, they are essential to fend off some denial-of-
service attacks. They are particularly important on multi-tenant platforms, like public and
private PaaS, to guarantee a consistent uptime (and performance) even when some
applications start to misbehave.
Control Groups have been around for a while as well: the code was started in 2006, and
initially merged in kernel 2.6.24.
Docker daemon attack surface
Running containers (and applications) with Docker implies running the Docker daemon.
This daemon requires root privileges unless you opt-in to Rootless mode, and you should
therefore be aware of some important details.
First of all, only trusted users should be allowed to control your Docker daemon. This
is a direct consequence of some powerful Docker features. Specifically, Docker allows you
to share a directory between the Docker host and a guest container; and it allows you to do
so without limiting the access rights of the container. This means that you can start a
container where the /host directory is the / directory on your host; and the container can
alter your host filesystem without any restriction. This is similar to how virtualization systems
allow filesystem resource sharing. Nothing prevents you from sharing your root filesystem
(or even your root block device) with a virtual machine.
This has a strong security implication: for example, if you instrument Docker from a web
server to provision containers through an API, you should be even more careful than usual
with parameter checking, to make sure that a malicious user cannot pass crafted
parameters causing Docker to create arbitrary containers.
For this reason, the REST API endpoint (used by the Docker CLI to communicate with the
Docker daemon) changed in Docker 0.5.2, and now uses a UNIX socket instead of a TCP
socket bound on 127.0.0.1 (the latter being prone to cross-site request forgery attacks if you
happen to run Docker directly on your local machine, outside of a VM). You can then use
traditional UNIX permission checks to limit access to the control socket.
You can also expose the REST API over HTTP if you explicitly decide to do so. However, if
you do that, be aware of the above mentioned security implications. Note that even if you
have a firewall to limit accesses to the REST API endpoint from other hosts in the network,
the endpoint can still be accessible from containers, and it can easily result in privilege escalation. Therefore, it is mandatory to secure API endpoints with HTTPS and certificates.
It is also recommended to ensure that it is reachable only from a trusted network or VPN.
You can also use DOCKER_HOST=ssh://USER@HOST or ssh -L
/path/to/docker.sock:/var/run/docker.sock instead if you prefer SSH over TLS.
The daemon is also potentially vulnerable to other inputs, such as image loading from either
disk with docker load, or from the network with docker pull. As of Docker 1.3.2, images are
now extracted in a chrooted subprocess on Linux/Unix platforms, being the first-step in a
wider effort toward privilege separation. As of Docker 1.10.0, all images are stored and
accessed by the cryptographic checksums of their contents, limiting the possibility of an
attacker causing a collision with an existing image.
Finally, if you run Docker on a server, it is recommended to run exclusively Docker on the
server, and move all other services within containers controlled by Docker. Of course, it is
fine to keep your favorite admin tools (probably at least an SSH server), as well as existing
monitoring/supervision processes, such as NRPE and collectd.
By default, Docker starts containers with a restricted set of capabilities. What does that
mean?
Capabilities turn the binary “root/non-root” dichotomy into a fine-grained access control
system. Processes (like web servers) that just need to bind on a port below 1024 do not
need to run as root: they can just be granted the net_bind_service capability instead. And
there are many other capabilities, for almost all the specific areas where root privileges are
usually needed.
Typical servers run several processes as root, including the SSH daemon, cron daemon,
logging daemons, kernel modules, network configuration tools, and more. A container is
different, because almost all of those tasks are handled by the infrastructure around the
container:
SSH access is typically managed by a single server running on the Docker host;
cron, when necessary, should run as a user process, dedicated and tailored for the
app that needs its scheduling service, rather than as a platform-wide facility;
log management is also typically handed to Docker, or to third-party services like
Loggly or Splunk;
hardware management is irrelevant, meaning that you never need to run udevd or
equivalent daemons within containers;
network management happens outside of the containers, enforcing separation of
concerns as much as possible, meaning that a container should never need to
perform ifconfig, route, or ip commands (except when a container is specifically
engineered to behave like a router or firewall, of course).
This means that in most cases, containers do not need “real” root privileges at all. And
therefore, containers can run with a reduced capability set, meaning that “root” within a container has far fewer privileges than the real “root”. For instance, it is possible to deny all mount operations, deny access to raw sockets (to prevent packet spoofing), deny access to some filesystem operations (such as creating new device nodes or changing the owner of files), and deny module loading, among others.
This means that even if an intruder manages to escalate to root within a container, it is
much harder to do serious damage, or to escalate to the host.
This doesn’t affect regular web apps, but reduces the vectors of attack by malicious users
considerably. By default Docker drops all capabilities except those needed, an allowlist
instead of a denylist approach. You can see a full list of available capabilities in Linux
manpages.
One primary risk with running Docker containers is that the default set of capabilities and
mounts given to a container may provide incomplete isolation, either independently, or
when used in combination with kernel vulnerabilities.
Docker supports the addition and removal of capabilities, allowing use of a non-default
profile. This may make Docker more secure through capability removal, or less secure
through the addition of capabilities. The best practice for users would be to remove all
capabilities except those explicitly required for their processes.
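A sketch of this practice (the image name is a placeholder, and the exact set of capabilities to add back depends on what the application actually needs):
# Drop everything, then add back only the ability to bind privileged ports
docker run -d --cap-drop ALL --cap-add NET_BIND_SERVICE my-web-server-image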
Docker Content Trust Signature Verification
The Docker Engine can be configured to only run signed images. The Docker Content Trust
signature verification feature is built directly into the dockerd binary. This is configured in the
Dockerd configuration file.
This feature provides more insight to administrators than previously available with the CLI
for enforcing and performing image signature verification.
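The daemon-side configuration itself is not shown here, but as a related client-side sketch, content trust can also be enabled per shell with an environment variable so that unsigned images are refused:
# Refuse to pull or run images that are not signed
export DOCKER_CONTENT_TRUST=1
docker pull alpine:latest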
Capabilities are just one of the many security features provided by modern Linux kernels. It
is also possible to leverage existing, well-known systems like TOMOYO, AppArmor,
SELinux, GRSEC, etc. with Docker.
While Docker currently only enables capabilities, it doesn’t interfere with the other systems.
This means that there are many different ways to harden a Docker host. Here are a few
examples.
You can run a kernel with GRSEC and PAX. This adds many safety checks, both at
compile-time and run-time; it also defeats many exploits, thanks to techniques like
address randomization. It doesn’t require Docker-specific configuration, since those
security features apply system-wide, independent of containers.
If your distribution comes with security model templates for Docker containers, you
can use them out of the box. For instance, we ship a template that works with
AppArmor and Red Hat comes with SELinux policies for Docker. These templates
provide an extra safety net (even though it overlaps greatly with capabilities).
You can define your own policies using your favorite access control mechanism. Just
as you can use third-party tools to augment Docker containers, including special
network topologies or shared filesystems, tools exist to harden Docker containers
without the need to modify Docker itself.
As of Docker 1.10 User Namespaces are supported directly by the docker daemon. This
feature allows for the root user in a container to be mapped to a non uid-0 user outside the
container, which can help to mitigate the risks of container breakout. This facility is available
but not enabled by default.
Conclusions
Docker containers are, by default, quite secure; especially if you run your processes as non-
privileged users inside the container.
You can add an extra layer of safety by enabling AppArmor, SELinux, GRSEC, or another
appropriate hardening system.
If you think of ways to make docker more secure, we welcome feature requests, pull
requests, or comments on the Docker community forums.
The Center for Internet Security (CIS) creates best practices for cyber security and defense.
The CIS uses crowdsourcing to define its security recommendations. The CIS Benchmarks
are among its most popular tools.
Organizations can use the CIS Benchmark for Docker to validate that their Docker
containers and the Docker runtime are configured as securely as possible. There are open
source and commercial tools that can automatically check your Docker environment against
the recommendations defined in the CIS Benchmark for Docker to identify insecure
configurations.
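One widely used open source example is Docker Bench for Security, which runs the CIS checks from a container. A simplified invocation might look roughly like the following; the project's documentation lists the full set of recommended mounts and flags:
docker run --rm -it --net host --pid host \
  -v /var/run/docker.sock:/var/run/docker.sock \
  docker/docker-bench-security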
The CIS Benchmark for Docker provides a number of helpful configuration checks, but
organizations should think of them as a starting point and go beyond the CIS checks to
ensure best practices are applied. Setting resource constraints, reducing privileges, and
ensuring images run in read-only mode are a few examples of additional checks you’ll want
to run on your container files.
The latest benchmark for Docker (CIS Docker Benchmark v1.2.0) covers the following areas:
Host configuration
Docker daemon configuration
Docker daemon configuration files
Container images and build file
Container runtime
Operations
Swarm configuration
Secure computing mode (seccomp) is a Linux kernel feature. You can use it to restrict the
actions available within the container. The seccomp() system call operates on the seccomp
state of the calling process. You can use this feature to restrict your application’s access.
This feature is available only if Docker has been built with seccomp and the kernel is
configured with CONFIG_SECCOMP enabled. To check if your kernel supports seccomp:
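One common way to check is to look for the option in the running kernel's configuration:
grep CONFIG_SECCOMP= /boot/config-$(uname -r)
# Expected output if seccomp is enabled:
# CONFIG_SECCOMP=y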
Note: seccomp profiles require seccomp 2.2.1 which is not available on Ubuntu 14.04,
Debian Wheezy, or Debian Jessie. To use seccomp on these distributions, you must
download the latest static Linux binaries (rather than packages).
The default seccomp profile provides a sane default for running containers with seccomp
and disables around 44 system calls out of 300+. It is moderately protective while providing
wide application compatibility. The default Docker profile can be found in the Docker Engine (Moby) source repository.
In effect, the profile is a whitelist which denies access to system calls by default, then
whitelists specific system calls. The profile works by defining a defaultAction of
SCMP_ACT_ERRNO and overriding that action only for specific system calls. The effect of
SCMP_ACT_ERRNO is to cause a Permission Denied error. Next, the profile defines a
specific list of system calls which are fully allowed, because their action is overridden to be
SCMP_ACT_ALLOW. Finally, some specific rules are for individual system calls such as
personality, and others, to allow variants of those system calls with specific arguments.
seccomp is instrumental for running Docker containers with least privilege. It is not
recommended to change the default seccomp profile.
When you run a container, it uses the default profile unless you override it with the
--security-opt option. For example, the following explicitly specifies a policy:
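A sketch of such an invocation, where the profile path is only a placeholder:
docker run --rm -it \
  --security-opt seccomp=/path/to/seccomp/profile.json \
  hello-world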
Secret
“A secret is a blob of data that should not be transmitted over a network or stored
unencrypted in a Dockerfile or in your application’s source code. “
In terms of Docker Swarm services, a secret is a blob of data, such as a password, SSH
private key, SSL certificate, or another piece of data that should not be transmitted over a
network or stored unencrypted in a Dockerfile or in your application’s source code. You can
use Docker secrets to centrally manage this data and securely transmit it to only those
containers that need access to it. Secrets are encrypted during transit and at rest in a
Docker swarm. A given secret is only accessible to those services which have been granted
explicit access to it, and only while those service tasks are running.
Note: Docker secrets are only available to swarm services, not to standalone containers. To
use this feature, consider adapting your container to run as a service. Stateful containers
can typically run with a scale of 1 without changing the container code.
Another use case for using secrets is to provide a layer of abstraction between the
container and a set of credentials. Consider a scenario where you have separate
development, test, and production environments for your application. Each of these
environments can have different credentials, stored in the development, test, and
production swarms with the same secret name. Your containers only need to know the
name of the secret to function in all three environments.
You can also use secrets to manage non-sensitive data, such as configuration files.
However, Docker supports the use of configs for storing non-sensitive data. Configs are
mounted into the container’s filesystem directly, without the use of a RAM disk.
When you add a secret to the swarm, Docker sends the secret to the swarm manager over
a mutual TLS connection. The secret is stored in the Raft log, which is encrypted. The entire
Raft log is replicated across the other managers, ensuring the same high availability
guarantees for secrets as for the rest of the swarm management data.
When you grant a newly-created or running service access to a secret, the decrypted secret
is mounted into the container in an in-memory filesystem. The location of the mount point
within the container defaults to /run/secrets/<secret_name> in Linux containers,
or C:\ProgramData\Docker\secrets in Windows containers. You can also specify a
custom location.
You can update a service to grant it access to additional secrets or revoke its access to a
given secret at any time.
A node only has access to (encrypted) secrets if the node is a swarm manager or if it is
running service tasks which have been granted access to the secret. When a container task
stops running, the decrypted secrets shared to it are unmounted from the in-memory
filesystem for that container and flushed from the node’s memory.
If a node loses connectivity to the swarm while it is running a task container with access to a
secret, the task container still has access to its secrets, but cannot receive updates until the
node reconnects to the swarm.
You can add or inspect an individual secret at any time, or list all secrets. You cannot
remove a secret that a running service is using. See Rotate a secret for a way to remove a
secret without disrupting running services.
To update or roll back secrets more easily, consider adding a version number or date to the
secret name. This is made easier by the ability to control the mount point of the secret
within a given container.
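A hedged sketch of this workflow on a swarm manager (the secret name, value, and service name are illustrative):
# Create a secret from standard input
printf "S3cr3tP4ssw0rd" | docker secret create db_password -
# Inspect and list secrets (the values themselves are never shown)
docker secret inspect db_password
docker secret ls
# Grant a service access; the value appears at /run/secrets/db_password inside its tasks
docker service create --name web --secret db_password nginx:alpine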
Secret in Compose
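As a rough sketch (service, secret, and file names are illustrative), a version 3.1 or later Compose file used for a swarm stack can declare secrets at the top level and attach them to individual services:
...
version: "3.7"
services:
  db:
    image: postgres:12-alpine
    environment:
      POSTGRES_PASSWORD_FILE: /run/secrets/db_password
    secrets:
      - db_password
secrets:
  db_password:
    file: ./db_password.txt
...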
Storage Driver
To use storage drivers effectively, it’s important to know how Docker builds and stores
images, and how these images are used by containers. You can use this information to
make informed choices about the best way to persist data from your applications and avoid
performance problems along the way.
Storage drivers allow you to create data in the writable layer of your container. The files
won’t be persisted after the container is deleted, and both read and write speeds are lower
than native file system performance.
Note: Operations that are known to be problematic include write-intensive database
storage, particularly when pre-existing data exists in the read-only layer. More details are
provided in this document.
Ideally, very little data is written to a container’s writable layer, and you use Docker volumes
to write data. However, some workloads require you to be able to write to the container’s
writable layer. This is where storage drivers come in.
Docker supports several different storage drivers, using a pluggable architecture. The
storage driver controls how images and containers are stored and managed on your Docker
host.
After you have read the storage driver overview, the next step is to choose the best storage
driver for your workloads. In making this decision, there are three high-level factors to
consider:
If multiple storage drivers are supported in your kernel, Docker has a prioritized list of which
storage driver to use if no storage driver is explicitly configured, assuming that the storage
driver meets the prerequisites.
Use the storage driver with the best overall performance and stability in the most usual
scenarios.
overlay2 is the preferred storage driver, for all currently supported Linux
distributions, and requires no extra configuration.
aufs was the preferred storage driver for Docker 18.06 and older, when running on
Ubuntu 14.04 on kernel 3.13 which had no support for overlay2.
fuse-overlayfs is preferred only for running Rootless Docker on a host that does
not provide support for rootless overlay2. On Ubuntu and Debian 10, the fuse-overlayfs driver does not need to be used; overlay2 works even in rootless mode. See the Rootless mode documentation.
devicemapper is supported, but requires direct-lvm for production environments,
because loopback-lvm, while zero-configuration, has very poor performance.
devicemapper was the recommended storage driver for CentOS and RHEL, as their
kernel version did not support overlay2. However, current versions of CentOS and
RHEL now have support for overlay2, which is now the recommended driver.
The btrfs and zfs storage drivers are used if they are the backing filesystem (the
filesystem of the host on which Docker is installed). These filesystems allow for
advanced options, such as creating “snapshots”, but require more maintenance and
setup. Each of these relies on the backing filesystem being configured correctly.
The vfs storage driver is intended for testing purposes, and for situations where no
copy-on-write filesystem can be used. Performance of this storage driver is poor, and
is not generally recommended for production use.
Docker’s source code defines the selection order. You can see the order at the source code
for Docker Engine 20.10
If you run a different version of Docker, you can use the branch selector at the top of the file
viewer to choose a different branch.
Some storage drivers require you to use a specific format for the backing filesystem. If you
have external requirements to use a specific backing filesystem, this may limit your choices.
See Supported backing filesystems.
After you have narrowed down which storage drivers you can choose from, your choice is
determined by the characteristics of your workload and the level of stability you need. See
Other considerations for help in making the final decision.
NOTE: Your choice may be limited by your operating system and distribution. For instance,
aufs is only supported on Ubuntu and Debian, and may require extra packages to be
installed, while btrfs is only supported on SLES, which is only supported with Docker
Enterprise. See Supported storage drivers per Linux distribution for more information.
At a high level, the storage drivers you can use are partially determined by the Docker edition you use.
In addition, Docker does not recommend any configuration that requires you to disable
security features of your operating system, such as the need to disable selinux if you use
the overlay or overlay2 driver on CentOS.
For Docker Engine - Community, only some configurations are tested, and your operating
system’s kernel may not support every storage driver. In general, the following
configurations work on recent versions of the Linux distribution:
¹) The overlay storage driver is deprecated, and will be removed in a future release. It is
recommended that users of the overlay storage driver migrate to overlay2.
²) The devicemapper storage driver is deprecated, and will be removed in a future release.
It is recommended that users of the devicemapper storage driver migrate to overlay2.
Note The comparison table above is not applicable for Rootless mode. For the drivers
available in Rootless mode, see the Rootless mode documentation.
When possible, overlay2 is the recommended storage driver. When installing Docker for the
first time, overlay2 is used by default. Previously, aufs was used by default when available,
but this is no longer the case. If you want to use aufs on new installations going forward,
you need to explicitly configure it, and you may need to install extra packages, such as
linux-image-extra. See aufs.
When in doubt, the best all-around configuration is to use a modern Linux distribution with a
kernel that supports the overlay2 storage driver, and to use Docker volumes for write-heavy
workloads instead of relying on writing data to the container’s writable layer.
The vfs storage driver is usually not the best choice. Before using the vfs storage driver, be
sure to read about its performance and storage characteristics and limitations.
The recommendations in the table above are based on automated regression testing and
the configurations that are known to work for a large number of users. If you use a
recommended configuration and find a reproducible issue, it is likely to be fixed very quickly.
If the driver that you want to use is not recommended according to this table, you can run it
at your own risk. You can and should still report any issues you run into. However, such
issues have a lower priority than issues encountered when using a recommended
configuration.
Docker Desktop for Mac and Docker Desktop for Windows
Docker Desktop for Mac and Docker Desktop for Windows are intended for development,
rather than production. Modifying the storage driver on these platforms is not possible.
The detailed documentation for each individual storage driver details all of the set-up steps
to use a given storage driver.
To see what storage driver Docker is currently using, use docker info and look for the
Storage Driver line:
$ docker info
Containers: 0
Images: 0
Storage Driver: overlay2
Backing Filesystem: xfs
<...>
Prerequisites
Before starting the lab, first run the command lab login doops to log in to your account. The login credentials are the same as the credentials registered on this platform (Adinusa).
vim /etc/docker/daemon.json
...
{
  "storage-driver": "vfs"
}
...
docker info
ls -al /var/lib/docker/vfs/dir/
du -sh /var/lib/docker/vfs/dir/