Docker

Containers provide a standardized way to package applications and their dependencies to simplify software development and deployment. Containers allow applications to be isolated from each other and share operating system resources more efficiently than virtual machines alone. Containers help improve security, enable consistent development environments, and reduce complexity for operations teams by standardizing infrastructure around containerized applications.

Introduction to Containers

In technology, sometimes the jumps in progress are small but, as is the case with
containerization, the jumps have been massive and turn the long-held practices and
teachings completely upside down. With this chapter, we will take you from running a tiny
service to building elastically scalable systems using containerization with Docker, the
cornerstone of this revolution.
We will perform a steady but consistent ramp-up through the basic blocks with a focus on
the inner workings of Docker, and, as we continue, we will try to spend a majority of the time
in the world of complex deployments and their considerations.
Let’s take a look at what we will cover in this chapter :

 What are containers and why do we need them?


 Docker’s place in the container world
 Thinking with a container mindset

What Are Containers?

In the old days, developers would develop a new application. Once that application was
completed in their eyes, they would hand that application over to the operations engineers,
who were then supposed to install it on the production servers and get it running. If the
operations engineers were lucky, they even got a somewhat accurate document with
installation instructions from the developers. So far, so good, and life was easy.

But things get a bit out of hand when, in an enterprise, there are many teams of developers
that create quite different types of applications, yet all of them need to be installed on the
same production servers and kept running there. Usually, each application has some
external dependencies, such as which framework it was built on, what libraries it uses, and
so on. Sometimes, two applications use the same framework but in different versions that
might or might not be compatible with each other. Our operations engineers' lives became
much harder over time. They had to be really creative in how they could load their ship
(their servers) with different applications without breaking something.
Installing a new version of a certain application was now a complex project on its own, and
often needed months of planning and testing. In other words, there was a lot of friction in
the software supply chain. But these days, companies rely more and more on software, and
the release cycles need to become shorter and shorter. We cannot afford to just release
twice a year or so anymore. Applications need to be updated in a matter of weeks or days,
or sometimes even multiple times per day. Companies that do not comply risk going out of
business, due to the lack of agility. So, what's the solution?

One of the first approaches was to use virtual machines (VMs). Instead of running multiple
applications, all on the same server, companies would package and run a single application
on each VM. With this, all the compatibility problems were gone and life seemed to be good
again. Unfortunately, that happiness didn't last long. VMs are pretty heavy beasts on their
own since they all contain a full-blown operating system such as Linux or Windows Server,
and all that for just a single application. This is just as if you were in the transportation
industry and were using a whole ship just to transport a single truckload of bananas. What a
waste! That could never be profitable.

The ultimate solution to this problem was to provide something that is much more
lightweight than VMs, but is also able to perfectly encapsulate the goods it needs to
transport. Here, the goods are the actual application that has been written by our
developers, plus – and this is important – all the external dependencies of the application,
such as its framework, libraries, configurations, and more. This holy grail of a software
packaging mechanism was the Docker container.

Developers use Docker containers to package their applications, frameworks, and libraries


into them, and then they ship those containers to the testers or operations engineers. To
testers and operations engineers, a container is just a black box. It is a standardized black
box, though. All containers, no matter what application runs inside them, can be treated
equally. The engineers know that, if any container runs on their servers, then any other
containers should run too. And this is actually true, apart from some edge cases, which
always exist.
Thus, Docker containers are a means to package applications and their dependencies in a
standardized way. Docker then coined the phrase Build, ship, and run anywhere.
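
As a rough sketch of what Build, ship, and run anywhere can look like in practice, the
following hypothetical example packages a small Python script and its dependency list into
an image, pushes it to a registry, and runs it. The application files, image name, and
registry address are illustrative assumptions, not part of this course material.

# Minimal sketch, assuming a hypothetical app.py and requirements.txt in the
# current directory and a fictional registry address.
cat > Dockerfile <<'EOF'
# base image provides the language runtime
FROM python:3.11-slim
# copy the application and its dependency list into the image
COPY requirements.txt app.py /app/
RUN pip install -r /app/requirements.txt
# default process started when the container runs
CMD ["python", "/app/app.py"]
EOF

docker build -t registry.example.com/myapp:1.0 .   # build the image
docker push registry.example.com/myapp:1.0         # ship it to a registry
docker run -d registry.example.com/myapp:1.0       # run it on any Docker host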

Why are containers important?

These days, the time between new releases of an application becomes shorter and shorter,
yet the software itself doesn't become any simpler. On the contrary, software projects
increase in complexity. Thus, we need a way to tame the beast and simplify the software
supply chain.

Also, every day, we hear that cyber-attacks are on the rise. Many well-known companies
are and have been affected by security breaches. Highly sensitive customer data gets
stolen during such events, such as social security numbers, credit card information, and
more. But not only customer data is compromised – sensitive company secrets are stolen
too.

Containers can help in many ways. First of all, Gartner found that applications running in a
container are more secure than their counterparts not running in a container. Containers
use Linux security primitives such as Linux kernel namespaces to sandbox different
applications running on the same computer, and control groups (cgroups) to avoid the
noisy-neighbor problem, where one badly behaved application uses all the available
resources of a server and starves all the other applications.
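
As a hedged illustration, the same kernel controls are exposed directly as docker run
flags; the values below are arbitrary examples, not recommendations.

# The cgroup controls behind the noisy-neighbor protection, surfaced as flags:
#   --memory      caps memory usage via the memory cgroup
#   --cpus        caps CPU time via the cpu cgroup
#   --pids-limit  caps the number of processes via the pids cgroup
docker run -d --name limited-nginx --memory 256m --cpus 0.5 --pids-limit 100 nginx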

Due to the fact that container images are immutable, it is easy to have them scanned
for common vulnerabilities and exposures (CVEs), and in doing so, increase the overall
security of our applications.
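
As one hedged example, a third-party scanner such as Trivy (an assumption here, not a tool
prescribed by this course) can be pointed at an immutable image to report known CVEs.

# Hypothetical CVE scan of an image, assuming the open source Trivy scanner is installed.
trivy image nginx:latest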

Another way to make our software supply chain more secure is to have our containers use
a content trust. A content trust basically ensures that the author of a container image is who
they pretend to be and that the consumer of the container image has a guarantee that the
image has not been tampered with in transit. The latter is known as a man-in-the-middle
(MITM) attack.
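
For illustration, Docker Content Trust can be switched on for a client session with an
environment variable, so that only signed images are pulled; the sketch below assumes the
publisher has actually signed the tag being pulled.

# Enable content trust for this shell session; subsequent pulls verify signatures.
export DOCKER_CONTENT_TRUST=1
# The pull fails if the tag has no valid signature or was tampered with in transit.
docker pull nginx:latest
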
Everything I have just said is, of course, technically also possible without using containers,
but since containers introduce a globally accepted standard, they make it so much easier to
implement these best practices and enforce them.

OK, but security is not the only reason why containers are important. There are other
reasons too.

One is the fact that containers make it easy to simulate a production-like environment, even
on a developer's laptop. If we can containerize any application, then we can also
containerize, say, a database such as Oracle or MS SQL Server. Now, everyone who has
ever had to install an Oracle database on a computer knows that this is not the easiest thing
to do, and it takes up a lot of precious space on your computer. You wouldn't want to do that
to your development laptop just to test whether the application you developed really works
end-to-end. With containers at hand, we can run a full-blown relational database in a
container as easily as saying 1, 2, 3. And when we're done with testing, we can just stop
and delete the container and the database will be gone, without leaving a trace on our
computer.
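
A minimal sketch of that workflow, assuming the official postgres image and illustrative
credentials:

# Start a throwaway database for end-to-end testing (values are examples only).
docker run -d --name test-db -e POSTGRES_PASSWORD=secret -p 5432:5432 postgres:15

# ... run the application tests against localhost:5432 ...

# When testing is done, stop and remove the container; nothing is left behind.
docker stop test-db && docker rm test-db
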
Since containers are very lean compared to VMs, it is not uncommon to have many
containers running at the same time on a developer's laptop without overwhelming the
laptop.

A third reason why containers are important is that operators can finally concentrate on
what they are really good at: provisioning the infrastructure and running and monitoring
applications in production. When the applications they have to run on a production system
are all containerized, then operators can start to standardize their infrastructure. Every
server becomes just another Docker host. No special libraries or frameworks need to be
installed on those servers, just an OS and a container runtime such as Docker.

Also, operators do not have to have intimate knowledge of the internals of applications
anymore, since those applications run self-contained in containers that ought to look like
black boxes to them, similar to how shipping containers look to the personnel in the
transportation industry.

Host, Virtual Machine and Container

Containers are an abstraction at the app layer that packages code and dependencies
together. Multiple containers can run on the same machine and share the OS kernel with
other containers, each running as isolated processes in user space. Containers take up less
space than VMs (container images are typically tens of MBs in size), and start almost
instantly.

Virtual machines (VMs) are an abstraction of physical hardware, turning one server into
many servers. The hypervisor allows multiple VMs to run on a single machine. Each VM
includes a full copy of an operating system, one or more apps, and the necessary binaries and
libraries, taking up tens of GBs. VMs can also be slow to boot.

Now, let's take a look at the differences between Containers and typical virtual machine
environments. The following diagram demonstrates the difference between a dedicated,
bare-metal server and a server running virtual machines :
As you can see, for a dedicated machine we have three applications, all sharing the same
orange software stack. Running virtual machines allows us to run three applications across
two completely different software stacks. The following diagram shows the same orange
and green applications running in containers using Docker :
This diagram gives us a lot of insight into the biggest key benefit of Docker, that is, there is
no need for a complete operating system every time we need to bring up a new container,
which cuts down on the overall size of containers. Since almost all versions of Linux use
the same standard kernel model, Docker relies on the host operating system's Linux kernel,
whichever distribution it was built from, such as Red Hat, CentOS, or Ubuntu.

For this reason, you can have almost any Linux operating system as your host operating
system and be able to layer other Linux-based operating systems on top of the host. Well,
that is, your applications are led to believe that a full operating system is actually installed,
but in reality, we only install the binaries, such as a package manager and, for example,
Apache/PHP, plus the libraries required to provide just enough of an operating system for your
applications to run.
For example, in the earlier diagram, we could have Red Hat running for the orange
application, and Debian running for the green application, but there would never be a need
to actually install Red Hat or Debian on the host. Thus, another benefit of Docker is the size
of images when they are created. They are built without the largest piece: the kernel or the
operating system. This makes them incredibly small, compact, and easy to ship.
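
A quick, hedged way to see this on any Docker host is to pull two commonly used base images
and compare their sizes; the exact numbers vary by tag and platform.

docker pull alpine:3.18
docker pull ubuntu:22.04
docker image ls    # alpine is typically only a few MB, ubuntu a few tens of MB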

Orientation to the Lab Environment

Create a virtual machine on VirtualBox.

Access the virtual machine using a Secure Shell connection via PuTTY.


Access the virtual machine using a Secure Shell connection via a Linux terminal.

pod-username-node01
ssh root@IPADDRESSVMNODE01

pod-username-node02
ssh root@IPADDRESSVMNODE02

Lab 1.1 Preparation of Lab Environment


Prerequisites

Before starting the lab, first run the lab login doops command to log in to your
account. The account credentials are the same as your Adinusa platform account credentials.

lab login doops

Instructions

1. Wherever [username] appears, replace it with your username. [X] in the IP address is
your attendance number.

2. VM specifications.
- pod-[username]-node01 : Ubuntu 20.04 (2vcpu, 2gb ram)
- pod-[username]-node02 : Ubuntu 20.04 (2vcpu, 2gb ram)

3. Make sure the IP, Gateway, DNS Resolver, & Hostname are correct (the IPs below are
examples; change them to match the IPs in your lab).

Node pod-[username]-node01

 IP Address : 10.X.X.10/24
 Gateway : 10.X.X.1
 Hostname : pod-[username]-node01

Node pod-[username]-node02

 IP Address : 10.X.X.20/24
 Gateway : 10.X.X.1
 Hostname : pod-[username]-node02

Execute on pod-[username]-node01

4. Change the hostname

root@docker-lab:~# echo 'pod-username-node01' > /etc/hostname


root@docker-lab:~# hostname pod-username-node01
root@docker-lab:~# exit
5. Create an SSH key and add the public key to the authorized_keys file on node pod-
[username]-node01. Also add the public key to the authorized_keys file on node pod-
[username]-node02.

 Generate an SSH key.

root@pod-username-node01:~# ssh-keygen
root@pod-username-node01:~# cat ~/.ssh/id_rsa.pub
ssh-rsa
AAAAB3NzaC1yc2EAAAADAQABAAABgQC+1fVCjBbUz7E6kJWGzfEFwrZQ0iACw4/Xz/8c221vA
lUOjdqTv70nKOQc+Y87MP730KIvZSJQ7Km9vTK+7TV/0uuPDqXVST5kUmHyNRm+fDjaBcbq/Z
d+UPOXHUtNM1bxt6N9REvIzKm/wfAkAgt6Y+x0goa209iS8rCXOItyBRH7gw9Mhhlo6d82Nb4
DGg56JYwE9NV6duY60yLPb+PqKKQ5qCgkqc7D278eMJMDQ99Ld0Dgc+1xP4JgbOVKI69AZbzF
PPosAW7JFa1q2t6D2tZL4P80m6EPsnzJGA1CUY9sGuByTAVduUxT5p+IzgFiKXuIanUAxkM4W
U9SrIGR7XPKwkSf4BFY7XcH3/iR0VSPp/
+hBcN8u7PY28ysA5+KGTopxYFcEzaaUc4PqxIkJnat0XfH22/lJMFF/vqkmS8rxs/ZMUF0QFz
sGFba5MQR3Q1GwSUMvlAQqv0u0RfBOooAv6c+uAtRjB6XW/Ow6JDhIf3oXiEFqAReVybyOCM=
root@pod-username-node01

 Add the public key to the authorized_keys file on node pod-[username]-node01.

root@pod-username-node01:~# cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys

 Add the public key from node pod-[username]-node01 to the
authorized_keys file on node pod-[username]-node02.

root@pod-username-node02:~# vi ~/.ssh/authorized_keys
... output omitted ...
ssh-rsa
AAAAB3NzaC1yc2EAAAADAQABAAABgQC+1fVCjBbUz7E6kJWGzfEFwrZQ0iACw4/Xz/8c221vA
lUOjdqTv70nKOQc+Y87MP730KIvZSJQ7Km9vTK+7TV/0uuPDqXVST5kUmHyNRm+fDjaBcbq/Z
d+UPOXHUtNM1bxt6N9REvIzKm/wfAkAgt6Y+x0goa209iS8rCXOItyBRH7gw9Mhhlo6d82Nb4
DGg56JYwE9NV6duY60yLPb+PqKKQ5qCgkqc7D278eMJMDQ99Ld0Dgc+1xP4JgbOVKI69AZbzF
PPosAW7JFa1q2t6D2tZL4P80m6EPsnzJGA1CUY9sGuByTAVduUxT5p+IzgFiKXuIanUAxkM4W
U9SrIGR7XPKwkSf4BFY7XcH3/iR0VSPp/
+hBcN8u7PY28ysA5+KGTopxYFcEzaaUc4PqxIkJnat0XfH22/lJMFF/vqkmS8rxs/ZMUF0QFz
sGFba5MQR3Q1GwSUMvlAQqv0u0RfBOooAv6c+uAtRjB6XW/Ow6JDhIf3oXiEFqAReVybyOCM=
root@pod-username-node01

6. Check whether the hosts can be accessed without a password (passwordless).

 SSH into node pod-[username]-node01.


root@pod-username-node01:~# ssh 10.X.X.10

 SSH into node pod-[username]-node02.

root@pod-username-node01:~# ssh 10.X.X.20

Execute on pod-[username]-node02

7. Change the hostname

root@docker-lab:~# echo 'pod-username-node02' > /etc/hostname


root@docker-lab:~# hostname pod-username-node02
root@docker-lab:~# exit

8. Create an SSH key and add the public key to the authorized_keys file on node pod-
[username]-node02. Also add the public key to the authorized_keys file on node pod-
[username]-node01.

 Generate an SSH key.

root@pod-username-node02:~# ssh-keygen
root@pod-username-node02:~# cat ~/.ssh/id_rsa.pub
ssh-rsa
AAAAB3NzaC1yc2EAAAADAQABAAABgQDYFD929XkulrBijod4lpLZthAnnvSift7ZNCFT5R+VS
f9APlu8MpHnb8dzXpzIN0jYLAJvSQ0Q+aavpFrJSGIAq0q2XhxryPUZyAHkEx9h0fltxDnxxO
hDd8ImIJkcn3OEtrfNu+afsvH3wecVdyInk6tKNgJ1C/RXaEqAiY1QRIHqlkx6oJqWUk7GYW9
XWr+3PnzwJ8G+VMW5jzo47wX6jFrG7+SbYCCp4AEX4P7/4R8T96FUowotlKWRcBYezupIjBbC
+F2BmHd4ZJR/Z5oRJXU0Fgo/zbm2gFuZeFIdUWduJIC58r13F/H88IgM/ZK5i7nuMnALrU3lw
ASfqXUNtlx92xcuwuSIgUEfbXy235OeYY6hOGyS32LboyphiZxf+t86xqRlzLchpbm54egxQf
S7txu3m80Xc+LDSFQ6qghTlcd5iQ/Ep1eTVAT49ZSVWLqr5+7WGoxNZixGDL3t9YNsM74uYU2
sN25qi5cp1ETj/liyCHB9WUDZi0mMhIs= root@pod-username-node02

 Add the public key to the authorized_keys file on node pod-[username]-node02.

root@pod-username-node02:~# cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys

 Add the public key from node pod-[username]-node02 to the
authorized_keys file on node pod-[username]-node01.

root@pod-username-node01:~# vi ~/.ssh/authorized_keys
... output omitted ...
ssh-rsa
AAAAB3NzaC1yc2EAAAADAQABAAABgQDYFD929XkulrBijod4lpLZthAnnvSift7ZNCFT5R+VS
f9APlu8MpHnb8dzXpzIN0jYLAJvSQ0Q+aavpFrJSGIAq0q2XhxryPUZyAHkEx9h0fltxDnxxO
hDd8ImIJkcn3OEtrfNu+afsvH3wecVdyInk6tKNgJ1C/RXaEqAiY1QRIHqlkx6oJqWUk7GYW9
XWr+3PnzwJ8G+VMW5jzo47wX6jFrG7+SbYCCp4AEX4P7/4R8T96FUowotlKWRcBYezupIjBbC
+F2BmHd4ZJR/Z5oRJXU0Fgo/zbm2gFuZeFIdUWduJIC58r13F/H88IgM/ZK5i7nuMnALrU3lw
ASfqXUNtlx92xcuwuSIgUEfbXy235OeYY6hOGyS32LboyphiZxf+t86xqRlzLchpbm54egxQf
S7txu3m80Xc+LDSFQ6qghTlcd5iQ/Ep1eTVAT49ZSVWLqr5+7WGoxNZixGDL3t9YNsM74uYU2
sN25qi5cp1ETj/liyCHB9WUDZi0mMhIs= root@pod-username-node02

9. Check whether the hosts can be accessed without a password (passwordless).

 SSH into node pod-[username]-node02.

root@pod-username-node01:~# ssh 10.X.X.20

 SSH into node pod-[username]-node01.

root@pod-username-node01:~# ssh 10.X.X.10

Execute on pod-[username]-node01 & pod-[username]-node02

10. Edit name resolution.

sudo vim /etc/hosts


.....
10.X.X.10 pod-username-node01
10.X.X.20 pod-username-node02
.....

11. Check connectivity.

ping -c 3 google.com
ping -c 3 detik.com
ping -c 3 10.X.X.10
ping -c 3 10.X.X.20
hostname

Task
Run the lab grade do-001-1 command to grade your lab results.
[student@servera ~]$ sudo -i
[root@servera ~]# lab grade do-001-1
Introduction to Docker

What is Docker?
Docker is an open source project for building, shipping, and running programs.

 It is a command-line program,
 A background process, and
 A set of remote services that take a logistical approach to solving common software
problems and simplifying your experience installing, running, publishing, and removing
software.

Docker products

Docker, Inc. is the company formed to develop Docker CE and Docker EE. It also
provides SLA-based support services for Docker EE. Finally, they offer consultative
services to companies that wish to take their existing applications and containerize them as
part of Docker's Modernize Traditional Apps (MTA) program.

Docker currently separates its product lines into two segments. There is the Community
Edition (CE), which is closed-source yet completely free, and then there is the Enterprise
Edition (EE), which is also closed-source and needs to be licensed on a yearly basis.
These enterprise products are backed by 24/7 support and are supported by bug fixes.

Docker CE

Part of the Docker Community Edition are products such as the Docker Toolbox and
Docker for Desktop with its editions for Mac and Windows. All these products are mainly
targeted at developers.

Docker for Desktop is an easy-to-install desktop application that can be used to build,
debug, and test Dockerized applications or services on a macOS or Windows machine.
Docker for macOS and Docker for Windows are complete development environments that
are deeply integrated with their respective hypervisor framework, network, and filesystem.
These tools are the fastest and most reliable way to run Docker on a Mac or Windows.
Docker EE

The Docker Enterprise Edition consists of the Universal Control Plane (UCP) and
the Docker Trusted Registry (DTR), both of which run on top of Docker Swarm. Both are
swarm applications. Docker EE builds on top of the upstream components of the Moby
project and adds enterprise-grade features such as role-based access control
(RBAC), multi-tenancy, mixed clusters of Docker swarm and Kubernetes, web-based
UI, and content trust, as well as image scanning on top.

Docker Release Cycle


From September 2018, the release cycle for the stable version of Docker CE will be
biannual, which means that it will have a seven-month maintenance cycle. This means that
you have plenty of time to review and plan any upgrades.

At the time of writing, the current timetable for Docker CE releases is :

 Docker 18.06 CE : This is the last of the quarterly Docker CE releases,
released July 18th 2018.
 Docker 18.09 CE : This release, due late September/early October 2018, is the
first release of the biannual release cycle of Docker CE.
 Docker 19.03 CE : The first supported Docker CE of 2019 is scheduled to be
released March/April 2019.
 Docker 19.09 CE : The second supported release of 2019 is scheduled to be
released September/October 2019.

Lab 2.1 : Installing Docker

Prerequisites

Before starting the lab, first run the lab login doops command to log in to your
account. The account credentials are the same as your Adinusa platform account credentials.

lab login doops

Execute on all nodes

1. Wherever [username] appears, replace it with your username.

2. Create a new user whose name matches your username on the ADINUSA platform.

useradd -m <username> -s /bin/bash


echo '<username> ALL=(ALL) NOPASSWD:ALL' > /etc/sudoers.d/<username>

3. Install Docker.

apt -y update
apt -y install apt-transport-https ca-certificates curl gnupg-agent software-properties-common
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
apt-key fingerprint 0EBFCD88
add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"
apt -y update
apt -y install docker-ce docker-ce-cli containerd.io
systemctl status docker

4. Display the Docker version.

docker version

5. Display the Docker installation details.

docker info

6. Test the Docker installation.

docker run hello-world

7. List the images that have been downloaded.

docker image ls

8. List all containers (running or exited).

docker container ls -a

9. Add the user to the docker group.

groupadd docker
su - <username>
sudo usermod -aG docker $USER

10. Exit the terminal, then log back in.

11. Test Docker without sudo.

docker container ls
Lab 2.2 : Docker Run - Part 1

Prerequisites

Before starting the lab, first run the lab login doops command to log in to your
account. The account credentials are the same as your Adinusa platform account credentials.

lab login doops

After logging in, run the lab start do-002-2 command. This command is used to
prepare the lab environment

lab start do-002-2

Execute on node pod-[username]-node01

docker run (latihan01)

1. Search for the redis image

docker search redis

2. Run the redis image

docker run redis # press CTRL+C to exit


docker run -d redis # run in the background (detached)
docker run -d --name redis1 redis # give the container a name

3. List the running containers.

docker ps
docker container ls

4. List all Docker containers.

docker ps -a
docker container ls -a

5. Display the container details.


docker inspect CONTAINER_NAME/CONTAINER_ID
docker inspect redis1

6. Display the container logs.

docker logs CONTAINER_NAME/CONTAINER_ID


docker logs redis1

7. Display a live stream of the container's resource usage.

docker stats CONTAINER_NAME/CONTAINER_ID


docker stats redis1

8. Display the running processes in the container.

docker top CONTAINER_NAME/CONTAINER_ID


docker top redis1

9. Stop the container.

docker stop CONTAINER_NAME/CONTAINER_ID


docker stop redis1

docker run (latihan02)

1. Search for the nginx image.

docker search nginx

2. Run the nginx image and publish a host port.

docker run -d --name nginx1 -p 80:80 nginx:latest

3. Display the nginx container details.

docker inspect nginx1

4. Run the nginx image and declare only the container port.

docker run -d --name nginx2 -p 80 nginx:latest

5. Test access.
curl localhost:$(docker port nginx2 80| cut -d : -f 2)

6. List containers (running or exited).

docker ps -a
docker container ls -a

7. List the Docker images.

docker images

Lab 2.3 : Docker Run - Part 2

Prerequisites

Before starting the lab, first run the lab login doops command to log in to your
account. The account credentials are the same as your Adinusa platform account credentials.

lab login doops

After logging in, run the lab start do-002-3 command. This command is used to
prepare the lab environment

lab start do-002-3

Execute on node pod-[username]-node01

docker run (latihan03)

1. Search for the nginx image.

docker search nginx

2. Run the nginx image and publish a port.

docker run -d --name nginx1 -p 8080:80 nginx:latest

3. Display the nginx container details.


docker inspect nginx1

4. Run another nginx image and publish a different port.

docker run -d --name nginx2 -p 8081:80 nginx:latest

5. List containers (running or exited).

docker ps -a
docker container ls -a

6. Check access to the containers.

curl localhost:8080
curl localhost:8081

7. Enter the container.

docker exec -it nginx2 /bin/bash

8. Update the package index and install an editor inside the container.

apt-get update -y && apt-get install nano -y

9. Edit index.html and replace the 'Welcome to nginx' page content.

nano index.html
...
hello, testing.
...

mv index.html /usr/share/nginx/html

10. Restart the nginx service.

service nginx restart

11. Start the container again.

docker start nginx2

12. List the containers.


docker ps
docker container ls

13. Check access to the containers.

curl localhost:8080
curl localhost:8081

14. Display the container details.

docker inspect nginx1


docker inspect nginx2

15. Display the container logs.

docker logs nginx1


docker logs nginx2

16. Display a live stream of the containers' resource usage.

docker stats nginx1


docker stats nginx2

17. Display the running processes in the containers.

docker top nginx1


docker top nginx2

docker run (latihan04)

1. Search for the ubuntu image.

docker search ubuntu

2. Pull the ubuntu image.

docker pull ubuntu

3. Run a container and attach directly to its console.

docker run -it --name ubuntu1 ubuntu


exit the container with Ctrl+D or the exit command

docker ps -a

4. Run a container that is removed automatically on exit.

docker run -it --rm --name ubuntu2 ubuntu

exit the container with Ctrl+D or the exit command

docker ps -a

Lab 2.4 : Docker Run - Part 3

Prerequisites

Before starting the lab, first run the lab login doops command to log in to your
account. The account credentials are the same as your Adinusa platform account credentials.

lab login doops

After logging in, run the lab start do-002-4 command. This command is used to
prepare the lab environment

lab start do-002-4

Execute on node pod-[username]-node01

docker run (latihan05)

1. Run a mysql container.

docker run -d --name my-own-mysql -e MYSQL_ROOT_PASSWORD=RAHASIA \
  -e MYSQL_DATABASE=latihan05 -p 3306:3306 mysql

2. Pull the phpmyadmin image.

docker pull phpmyadmin/phpmyadmin:latest

3. Run a phpmyadmin container and link it to the mysql container.


docker run --name my-own-phpmyadmin -d --link my-own-mysql:db -p 8090:80 \
  phpmyadmin/phpmyadmin

4. Test access.

ssh [email protected] -p10XX1 -DYYYY


open http://10.X.X.10:8090 in a browser, then log in with user: `root`
and password: `RAHASIA`

docker run (latihan06)

1. Run the ubuntu containers.

docker run -dit --name ubuntu1 ubuntu


docker run -dit --name ubuntu2 ubuntu

2. List the containers.

docker ps

3. Pause the ubuntu containers.

docker pause ubuntu1


docker pause ubuntu2

4. Check the container list to confirm that the containers' status is now Paused.

docker ps

5. Check resource usage while the ubuntu containers are paused.

docker stats ubuntu1


docker stats ubuntu2

6. Unpause the ubuntu1 container.

docker unpause ubuntu1

docker run (latihan07)

1. Create the database container.


docker container run -d --name ch6_mariadb --memory 256m --cpu-shares 1024 \
  --cap-drop net_raw -e MYSQL_ROOT_PASSWORD=test mariadb:5.5

2. Create the wordpress container.

docker container run -d -p 80:80 -P --name ch6_wordpress --memory 512m \
  --cpu-shares 512 \
  --cap-drop net_raw --link ch6_mariadb:mysql -e WORDPRESS_DB_PASSWORD=test \
  wordpress:5.0.0-php7.2-apache

3. Check the logs, running processes, and resource usage.

docker logs ch6_mariadb


docker logs ch6_wordpress

docker top ch6_mariadb


docker top ch6_wordpress

docker stats ch6_mariadb


docker stats ch6_wordpress

4. Test access.

open http://10.X.X.10 in a browser, then complete the installation steps.

Managing Docker Containers

Abstract
Overview: Design and code a Dockerfile to build a custom container image.
Objectives:

 Describe the approaches for creating custom container images.


 Create a container image using common Dockerfile commands.

Managing the Life Cycle of Containers - Part 1

Objectives
After completing this section, students should be able to manage the life cycle of a
container from creation to deletion.

Docker Client Verbs

The Docker client, implemented by the docker command, provides a set of verbs to create
and manage containers. The following figure shows a summary of the most commonly used
verbs that change container state.
The Docker client also provides a set of verbs to obtain information about running and
stopped containers. The following figure shows a summary of the most commonly used
verbs that query information related to Docker containers.
Use these two figures as a reference while you learn about the docker command verbs
throughout this course.

Creating Containers
The docker run command creates a new container from an image and starts a process
inside the new container. If the container image is not available, this command also tries to
download it:

ubuntu@ubuntu20:~$ docker run -d nginx


Unable to find image 'nginx:latest' locally
latest: Pulling from library/nginx
d121f8d1c412: Pull complete
ebd81fc8c071: Pull complete
655316c160af: Pull complete
d15953c0e0f8: Pull complete
2ee525c5c3cc: Pull complete
Digest:
sha256:c628b67d21744fce822d22fdcc0389f6bd763daac23a6b77147d0712ea7102d0
Status: Downloaded newer image for nginx:latest
bc17ead1a9a36294d511aa3ce40a667aa851b4337bbd404edb7b78087e81da54
ubuntu@ubuntu20:~$

Another important option is to run the container as a daemon, running the containerized
process in the background. The -d option is responsible for running in detached mode.

The management docker commands require an ID or a name. The docker run command
generates a random ID and a random name that are unique. The docker ps command is
responsible for displaying these attributes:

CONTAINER ID This ID is generated automatically and must be unique.

NAMES This name can be generated automatically or manually specified.

If desired, the container name can be explicitly defined. The --name option is responsible for
defining the container name:

ubuntu@ubuntu20:~$ docker run -d --name my-nginx-container nginx

Note The name must be unique. An error is thrown if another container has the same
name, including containers that are stopped.
The container image itself specifies the command to run to start the containerized process,
but a different one can be specified after the container image name in docker run:

ubuntu@ubuntu20:~$ docker run nginx ls /usr


bin
games
include
lib
local
sbin
share
src

The specified command must exist inside the container image.

Note Since a specified command was provided in the previous example, the nginx service
does not start.

Sometimes it is desired to run a container executing a Bash shell. This can be achieved
with:

ubuntu@ubuntu20:~$ docker run --name my-nginx-container -it nginx /bin/bash
root@190caba3547d:/#

Options -t and -i are usually needed for interactive text-based programs, so they get a
proper terminal, but not for background daemons.

Running Commands in a Container

When a container is created, a default command is executed according to what is specified
by the container image. However, it may be necessary to execute other commands to
manage the running container. The docker exec command starts an additional process
inside a running container:

ubuntu@ubuntu20:~$ docker exec 5d0c8fa3c1d9 cat /etc/hostname
5d0c8fa3c1d9

The previous example used the container ID to execute the command. It is also possible to
use the container name:

ubuntu@ubuntu20:~$ docker exec my-nginx-container cat /etc/hostname


5d0c8fa3c1d9

Demonstration: Creating Containers

Follow this step guide as the instructor shows how to create and manipulate containers.

1.Run the following command:

ubuntu@ubuntu20:~$ docker run --name demo-container nginx dd if=/dev/zero of=/dev/null

This command downloads the image from the official Docker registry and starts a container
from it using the dd command. The container exits when the dd command returns its result.
For educational purposes, the provided dd never stops.

2.Open a new terminal window from the node VM and check if the container is running:

ubuntu@ubuntu20:~$ docker ps

Some information about the container, including the container name demo-container
specified in the last step, is displayed.

3.Open a new terminal window and stop the container using the provided name:

ubuntu@ubuntu20:~$ docker stop demo-container

4.Return to the original terminal window and verify that the container was stopped:

ubuntu@ubuntu20:~$ docker ps

5.Start a new container without specifying a name:

ubuntu@ubuntu20:~$ docker run nginx dd if=/dev/zero of=/dev/null


If a container name is not provided, docker generates a name for the container
automatically.

6.Open a terminal window and verify the name that was generated:

ubuntu@ubuntu20:~$ docker ps

An output similar to the following will be listed:

ubuntu@ubuntu20:~$ docker ps
CONTAINER ID   IMAGE   COMMAND                  CREATED         STATUS        PORTS    NAMES
2ef4addadf8b   nginx   "/docker-entrypoint.…"   3 seconds ago   Up 1 second   80/tcp   busy_newton

busy_newton is the generated name; you will probably have a different name at this
step.

7.Stop the container with the generated name:

ubuntu@ubuntu20:~$ docker stop busy_newton

8.Containers can have a default long-running command. For these cases, it is possible to
run a container as a daemon using the -d option. For example, when a MySQL container is
started it creates the databases and keeps the server actively listening on its port. Another
example using dd as the long-running command is as follows:

ubuntu@ubuntu20:~$ docker run --name demo-container-2 -d nginx dd if=/dev/zero of=/dev/null

9.Stop the container

ubuntu@ubuntu20:~$ docker stop demo-container-2

10.Another possibility is to run a container to just execute a specific command:

ubuntu@ubuntu20:~$ docker run --name demo-container-3 nginx ls /etc


This command starts a new container, lists all files available in the /etc directory in the
container, and exits.

11.Verify that the container is not running:

ubuntu@ubuntu20:~$ docker ps

12.It is possible to run a container in interactive mode. This mode allows for staying in the
container when the container runs:

ubuntu@ubuntu20:~$ docker run --name demo-container-4 -it nginx /bin/bash

The -i option specifies that this container should run in interactive mode, and the -t allocates
a pseudo-TTY.

13.Exit the Bash shell from the container:

root@7e202f7faf07:/# exit
exit

14.Remove all stopped containers from the environment by running the following from a
terminal window:

ubuntu@ubuntu20:~$ docker ps -a
ubuntu@ubuntu20:~$ docker rm demo-container demo-container-2 demo-container-3 demo-container-4

15.Remove the container started without a name. Replace <container_name> with the
generated container name from step 7:

ubuntu@ubuntu20:~$ docker rm <container_name>

16.Remove the nginx container image:

ubuntu@ubuntu20:~$ docker image ls


ubuntu@ubuntu20:~$ docker image rm nginx

This concludes the demonstration.

Managing the Life Cycle of Containers - Part 2


Managing Containers

Docker provides the following commands to manage containers :


 docker ps : This command is responsible for listing running containers.

CONTAINER ID Each container, when created, gets a container ID, which is a hexadecimal
number and looks like an image ID, but is actually unrelated.

IMAGE Container image that was used to start the container.

COMMAND Command that was executed when the container started.

CREATED Date and time the container was started.

STATUS Total container uptime, if still running, or time since terminated.

PORTS Ports that were exposed by the container or the port forwards, if configured.

NAMES The container name.

Stopped containers are not discarded immediately. Their local file systems and other states
are preserved so they can be inspected for post-mortem analysis. Option -a lists all
containers, including containers that were not discarded yet:

ubuntu@ubuntu20:~$ docker ps -a
CONTAINER ID   IMAGE   COMMAND                  CREATED        STATUS                      PORTS    NAMES
2ef4addadf8b   nginx   "/docker-entrypoint.…"   18 hours ago   Exited (137) 18 hours ago            busy_newton
9aa8f1ff1c03   nginx   "/docker-entrypoint.…"   21 hours ago   Up 21 hours                 80/tcp   my-nginx-container

 docker inspect: This command is responsible for listing metadata about a running or
stopped container. The command produces a JSON output:
ubuntu@ubuntu20:~$ docker inspect my-nginx-container
[
{
"Id":
"9aa8f1ff1c0399cbb24b657de6ce0a078c7ce69dc0c07760664670ea7593aaf3",
...OUTPUT OMITTED....
"NetworkSettings": {
"Networks": {
"bridge": {
"IPAMConfig": null,
"Links": null,
"Aliases": null,
"NetworkID":
"9e91a2e58d8a7c21532b0a48bb4d36093a347aa056e85d5b7f3941ff19eeb840",
"EndpointID":
"9d847b45ad4c55b2c1fbdaa8f44a502f2215f569c08cbcc6599b6ff1c25a2869",
"Gateway": "172.17.0.1",
"IPAddress": "172.17.0.2",
"IPPrefixLen": 16,
"IPv6Gateway": "",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"MacAddress": "02:42:ac:11:00:02",
"DriverOpts": null
...OUTPUT OMITTED....

 This command allows formatting of the output string using a given Go template with
the -f option. For example, to retrieve only the IP address, the following command
can be executed:

ubuntu@ubuntu20:~$ docker inspect -f '{{.Name}} - {{.NetworkSettings.IPAddress }}' $(docker ps -aq)
/busy_newton -
/my-nginx-container - 172.17.0.2

 docker stop: This command is responsible for stopping a running container gracefully:

$ docker stop my-nginx-container

Using docker stop is easier than finding the container start process on the host OS and
killing it.

 docker kill: This command is responsible for stopping a running container forcefully:
$ docker kill my-nginx-container

 It is possible to specify the signal with the -s option:

$ docker kill -s SIGKILL my-nginx-container

 The following signals are available

Signal    Default action      Description

SIGHUP    Terminate process   Terminal line hangup
SIGINT    Terminate process   Interrupt program
SIGQUIT   Create core image   Quit program
SIGABRT   Create core image   Abort program
SIGKILL   Terminate process   Kill program
SIGTERM   Terminate process   Software termination signal
SIGUSR1   Terminate process   User-defined signal 1
SIGUSR2   Terminate process   User-defined signal 2

 docker restart: This command is responsible for restarting a stopped container:

$ docker restart my-nginx-container

The docker restart command creates a new container with the same container ID, reusing
the stopped container state and filesystem.

 docker rm: This command is responsible for deleting a container, discarding its
state and filesystem:
$ docker rm my-nginx-container

It is possible to delete all containers at the same time. The docker ps command has the -
q option that returns only the ID of the containers. This list can be passed to the docker rm
command:

$ docker rm $(docker ps -aq)

 Before deleting all containers, all running containers must be stopped. It is possible
to stop all containers with:

$ docker stop $(docker ps -q)

Note The commands docker inspect, docker stop, docker kill, docker restart, and docker rm
can use the container ID instead of the container name.

Demonstration: Managing a Container

Follow this guide as the instructor shows how to manage a container.

1.Run the following command:

$ docker run --name demo-container -d nginx

This command will start an nginx container as a daemon.

2.List all running containers:

$ docker ps

3.Stop the container with the following command:

$ docker stop demo-container

4.Verify that the container is not running:

$ docker ps

5.Run a new container with the same name:


$ docker run --name demo-container -d nginx

A conflict error is displayed. Remember that a stopped container is not discarded
immediately; its local file system and other state are preserved so it can be
inspected for post-mortem analysis

6.It is possible to list all containers with the following command:

$ docker ps -a

7.Start a new nginx container:

$ docker run --name demo-1-nginx -d nginx

8.An important feature is the ability to list metadata about a running or stopped container.
The following command returns the metadata:

$ docker inspect demo-1-nginx

9.It is possible to format and retrieve a specific item from the inspect command. To retrieve
the IPAddress attribute from the NetworkSettings object, use the following command:

$ docker inspect -f '{{ .NetworkSettings.IPAddress }}' demo-1-nginx

Make a note of the IP address of this container. It will be needed in a later step.

10.Run the following command to access the container bash:

$ docker exec -it demo-1-nginx /bin/bash

11.Create a new HTML file on the container and exit:

root@e994cd7b1c5e:/# echo "This is my web nginx1." >


/usr/share/nginx/html/adinusa.html
root@e994cd7b1c5e:/# exit

12.Using the IP address from step 8, try to access the previously created page:

$ curl IP:80/adinusa.html

The following output is displayed:

"This is my web nginx1."

13.It is possible to restart the container with the following command:

$ docker restart demo-1-nginx

14.When the container is restarted, the data is preserved. Verify the IP address from the
restarted container and check that the adinusa page is still available:

$ docker inspect demo-1-nginx | grep IPAddress


$ curl IP:80/adinusa.html

15.Stop the HTTP container:

$ docker stop demo-1-nginx

16.Start a new HTTP container:

$ docker run --name demo-2-nginx -d nginx

17.Verify the IP address from the new container and check if the adinusa page is available:

$ docker inspect demo-2-nginx | grep IPAddress


$ curl IP:80/adinusa.html

The page is not available because this page was created just for the previous container.
New containers will not have the page since the container image did not change.

18.In case of a freeze, it is possible to kill a container like any process. The following
command will kill a container:

$ docker kill demo-2-nginx

This command kills the container with the SIGKILL signal. It is possible to specify the signal
with the -s option.
19.Containers can be removed, discarding their state and filesystem. It is possible to
remove a container by name or by its ID. Remove the demo-1-nginx container:

$ docker ps -a
$ docker rm demo-1-nginx

20.It is also possible to remove all containers at the same time. The -q option returns the list
of container IDs and the docker rm accepts a list of IDs to remove all containers:

$ docker rm $(docker ps -aq)

21.Verify that all containers were removed:

$ docker ps -a

22.Clean up the images downloaded by running the following from a terminal window:

$ docker rmi nginx

OR

$ docker image rm nginx

This concludes the demonstration.

Docker Volume

Volumes are the preferred mechanism for persisting data generated by and used by Docker
containers. While bind mounts are dependent on the directory structure and OS of the host
machine, volumes are completely managed by Docker. Volumes have several advantages
over bind mounts:

 Volumes are easier to back up or migrate than bind mounts.


 You can manage volumes using Docker CLI commands or the Docker API.
 Volumes work on both Linux and Windows containers.
 Volumes can be more safely shared among multiple containers.
 Volume drivers let you store volumes on remote hosts or cloud providers, to encrypt
the contents of volumes, or to add other functionality.
 New volumes can have their content pre-populated by a container.
 Volumes on Docker Desktop have much higher performance than bind mounts from
Mac and Windows hosts.

In addition, volumes are often a better choice than persisting data in a container’s writable
layer, because a volume does not increase the size of the containers using it, and the
volume’s contents exist outside the lifecycle of a given container.

If your container generates non-persistent state data, consider using a tmpfs mount to avoid
storing the data anywhere permanently, and to increase the container’s performance by
avoiding writing into the container’s writable layer.
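
A hedged sketch of such a tmpfs mount on a Linux host; the mount point and size are
illustrative:

# Scratch data lives only in memory and disappears when the container stops.
docker run -d --name tmp-demo --tmpfs /app/cache:rw,size=64m nginx:latest
docker exec tmp-demo df -h /app/cache    # reports a tmpfs filesystem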

Share data among machines

When building fault-tolerant applications, you might need to configure multiple replicas of
the same service to have access to the same files.

There are several ways to achieve this when developing your applications. One is to add
logic to your application to store files on a cloud object storage system like Amazon S3.
Another is to create volumes with a driver that supports writing files to an external storage
system like NFS or Amazon S3.
Volume drivers allow you to abstract the underlying storage system from the application
logic. For example, if your services use a volume with an NFS driver, you can update the
services to use a different driver, as an example to store data in the cloud, without changing
the application logic.

Volume Driver

When you create a volume using docker volume create, or when you start a container which
uses a not-yet-created volume, you can specify a volume driver. The following examples
use the vieux/sshfs volume driver, first when creating a standalone volume, and then when
starting a container which creates a new volume.

Initial set-up

This example assumes that you have two nodes, the first of which is a Docker host and can
connect to the second using SSH.

On the Docker host, install the vieux/sshfs plugin:

$ docker plugin install --grant-all-permissions vieux/sshfs

Create a volume using a volume driver

This example specifies an SSH password, but if the two hosts have shared keys configured,
you can omit the password. Each volume driver may have zero or more configurable
options, each of which is specified using an -o flag.

$ docker volume create --driver vieux/sshfs \
  -o sshcmd=test@node2:/home/test \
  -o password=testpassword \
  sshvolume

Start a container which creates a volume using a volume driver

This example specifies an SSH password, but if the two hosts have shared keys configured,
you can omit the password. Each volume driver may have zero or more configurable
options. If the volume driver requires you to pass options, you must use the --mount flag to
mount the volume, rather than -v.
$ docker run -d \
  --name sshfs-container \
  --volume-driver vieux/sshfs \
  --mount src=sshvolume,target=/app,volume-opt=sshcmd=test@node2:/home/test,volume-opt=password=testpassword \
  nginx:latest

Create a service which creates an NFS volume

This example shows how you can create an NFS volume when creating a service. This
example uses 10.0.0.10 as the NFS server and /var/docker-nfs as the exported directory on
the NFS server. Note that the volume driver specified is local.

NFSV3

$ docker service create -d \
  --name nfs-service \
  --mount 'type=volume,source=nfsvolume,target=/app,volume-driver=local,volume-opt=type=nfs,volume-opt=device=:/var/docker-nfs,volume-opt=o=addr=10.0.0.10' \
  nginx:latest

NFSV4

docker service create -d \
  --name nfs-service \
  --mount 'type=volume,source=nfsvolume,target=/app,volume-driver=local,volume-opt=type=nfs,volume-opt=device=:/var/docker-nfs,"volume-opt=o=addr=10.0.0.10,rw,nfsvers=4,async"' \
  nginx:latest

Create CIFS/Samba volumes

You can mount a Samba share directly in docker without configuring a mount point on your
host.

docker volume create \
  --driver local \
  --opt type=cifs \
  --opt device=//uxxxxx.your-server.de/backup \
  --opt o=addr=uxxxxx.your-server.de,username=uxxxxxxx,password=*****,file_mode=0777,dir_mode=0777 \
  --name cif-volume

Notice the addr option is required if using a hostname instead of an IP so docker can
perform the hostname lookup.

Backup, restore, or migrate data volumes

Volumes are useful for backups, restores, and migrations. Use the --volumes-from flag to
create a new container that mounts that volume.

Backup a container

For example, create a new container named dbstore:

$ docker run -v /dbdata --name dbstore ubuntu /bin/bash

Then in the next command, we:

 Launch a new container and mount the volume from the dbstore container
 Mount a local host directory as /backup
 Pass a command that tars the contents of the dbdata volume to a backup.tar file
inside our /backup directory.

$ docker run --rm --volumes-from dbstore -v $(pwd):/backup ubuntu tar cvf /backup/backup.tar /dbdata

When the command completes and the container stops, we are left with a
backup of our dbdata volume.

Restore container from backup

With the backup just created, you can restore it to the same container, or another that you
made elsewhere.

For example, create a new container named dbstore2:

$ docker run -v /dbdata --name dbstore2 ubuntu /bin/bash

Then un-tar the backup file in the new container's data volume:

$ docker run --rm --volumes-from dbstore2 -v $(pwd):/backup ubuntu bash -c "cd /dbdata && tar xvf /backup/backup.tar --strip 1"

You can use the techniques above to automate backup, migration and restore testing using
your preferred tools.

Remove volumes

A Docker data volume persists after a container is deleted. There are two types of volumes
to consider:

 Named volumes have a specific source from outside the container, for example
awesome:/bar.
 Anonymous volumes have no specific source, so when the container is deleted, you can
instruct the Docker Engine daemon to remove them.

Remove anonymous volumes

To automatically remove anonymous volumes, use the --rm option. For example, this
command creates an anonymous /foo volume. When the container is removed, the Docker
Engine removes the /foo volume but not the awesome volume.

$ docker run --rm -v /foo -v awesome:/bar busybox top

Remove all volumes

To remove all unused volumes and free up space:

$ docker volume prune

Docker Network

Networking overview

One of the reasons Docker containers and services are so powerful is that you can connect
them together, or connect them to non-Docker workloads. Docker containers and services
do not even need to be aware that they are deployed on Docker, or whether their peers are
also Docker workloads or not. Whether your Docker hosts run Linux, Windows, or a mix of
the two, you can use Docker to manage them in a platform-agnostic way.
This topic defines some basic Docker networking concepts and prepares you to design and
deploy your applications to take full advantage of these capabilities.

Network drivers

Docker’s networking subsystem is pluggable, using drivers. Several drivers exist by default,
and provide core networking functionality:

bridge: The default network driver. If you don’t specify a driver, this is the type of network
you are creating. Bridge networks are usually used when your applications run in
standalone containers that need to communicate. See bridge networks.

host: For standalone containers, remove network isolation between the container and the
Docker host, and use the host’s networking directly. See use the host network.

overlay: Overlay networks connect multiple Docker daemons together and enable swarm
services to communicate with each other. You can also use overlay networks to facilitate
communication between a swarm service and a standalone container, or between two
standalone containers on different Docker daemons. This strategy removes the need to do
OS-level routing between these containers. See overlay networks.

macvlan: Macvlan networks allow you to assign a MAC address to a container, making it
appear as a physical device on your network. The Docker daemon routes traffic to
containers by their MAC addresses. Using the macvlan driver is sometimes the best choice
when dealing with legacy applications that expect to be directly connected to the physical
network, rather than routed through the Docker host’s network stack. See Macvlan
networks.

none: For this container, disable all networking. Usually used in conjunction with a custom
network driver. none is not available for swarm services. See disable container networking.

Network plugins: You can install and use third-party network plugins with Docker. These
plugins are available from Docker Hub or from third-party vendors. See the vendor’s
documentation for installing and using a given network plugin.
Network driver summary

 User-defined bridge networks are best when you need multiple containers to
communicate on the same Docker host. Host networks are best when the network
stack should not be isolated from the Docker host, but you want other aspects of the
container to be isolated.
 Overlay networks are best when you need containers running on different Docker
hosts to communicate, or when multiple applications work together using swarm
services.
 Macvlan networks are best when you are migrating from a VM setup or need your
containers to look like physical hosts on your network, each with a unique MAC
address.
 Third-party network plugins allow you to integrate Docker with specialized network
stacks.

Use bridge networks

In terms of networking, a bridge network is a Link Layer device which forwards traffic
between network segments. A bridge can be a hardware device or a software device
running within a host machine’s kernel.

In terms of Docker, a bridge network uses a software bridge which allows containers
connected to the same bridge network to communicate, while providing isolation from
containers which are not connected to that bridge network. The Docker bridge driver
automatically installs rules in the host machine so that containers on different bridge
networks cannot communicate directly with each other.

Bridge networks apply to containers running on the same Docker daemon host. For
communication among containers running on different Docker daemon hosts, you can either
manage routing at the OS level, or you can use an overlay network.

When you start Docker, a default bridge network (also called bridge) is created
automatically, and newly-started containers connect to it unless otherwise specified. You
can also create user-defined custom bridge networks. User-defined bridge networks are
superior to the default bridge network.
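
As an illustrative sketch (the network and container names are assumptions), a user-defined
bridge network also gives containers automatic DNS resolution of each other's names:

docker network create --driver bridge app-net
docker run -d --name backend --network app-net nginx:latest
# The name "backend" resolves through the embedded DNS of the user-defined bridge.
docker run --rm --network app-net busybox wget -qO- http://backend
docker network inspect app-net
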
Use overlay networks

The overlay network driver creates a distributed network among multiple Docker daemon
hosts. This network sits on top of (overlays) the host-specific networks, allowing containers
connected to it (including swarm service containers) to communicate securely when
encryption is enabled. Docker transparently handles routing of each packet to and from the
correct Docker daemon host and the correct destination container.

When you initialize a swarm or join a Docker host to an existing swarm, two new networks
are created on that Docker host:

 an overlay network called ingress, which handles control and data traffic related to
swarm services. When you create a swarm service and do not connect it to a user-
defined overlay network, it connects to the ingress network by default.
 a bridge network called docker_gwbridge, which connects the individual Docker
daemon to the other daemons participating in the swarm.

You can create user-defined overlay networks using docker network create, in the same
way that you can create user-defined bridge networks. Services or containers can be
connected to more than one network at a time. Services or containers can only
communicate across networks they are each connected to.
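
A hedged sketch of creating a user-defined overlay network and attaching a swarm service to
it; it assumes swarm mode has already been initialized with docker swarm init, and the names
are illustrative:

docker network create --driver overlay --attachable my-overlay   # --attachable also admits standalone containers
docker service create --name web --replicas 2 --network my-overlay nginx:latest
docker network ls --filter driver=overlay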

Although you can connect both swarm services and standalone containers to an overlay
network, the default behaviors and configuration concerns are different. For that reason, the
rest of this topic is divided into operations that apply to all overlay networks, those that apply
to swarm service networks, and those that apply to overlay networks used by standalone
containers.
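
As a minimal sketch (the names my-overlay, my-service, and standalone1 are examples only), a
user-defined overlay network can be created and used as follows; the --attachable flag
additionally allows standalone containers to join it. These commands assume a swarm has
already been initialized and are run on a manager node:

docker network create --driver overlay --attachable my-overlay
docker service create --name my-service --network my-overlay --replicas 2 nginx
docker run -dit --name standalone1 --network my-overlay alpine ash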

Use host networking

If you use the host network mode for a container, that container’s network stack is not
isolated from the Docker host (the container shares the host’s networking namespace), and
the container does not get its own IP-address allocated. For instance, if you run a container
which binds to port 80 and you use host networking, the container’s application is available
on port 80 on the host’s IP address.

Note: Given that the container does not have its own IP-address when
using host mode networking, port-mapping does not take effect, and the
-p, --publish, -P, and --publish-all options are ignored, producing a
warning instead:

WARNING: Published ports are discarded when using host network mode

Host mode networking can be useful to optimize performance, and in situations where a
container needs to handle a large range of ports, as it does not require network address
translation (NAT), and no “userland-proxy” is created for each port.

The host networking driver only works on Linux hosts, and is not supported on Docker
Desktop for Mac, Docker Desktop for Windows, or Docker EE for Windows Server.

You can also use a host network for a swarm service, by passing --network host to the
docker service create command. In this case, control traffic (traffic related to managing the
swarm and the service) is still sent across an overlay network, but the individual swarm
service containers send data using the Docker daemon’s host network and ports. This
creates some extra limitations. For instance, if a service container binds to port 80, only one
service container can run on a given swarm node.
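
A minimal sketch of both cases follows (the container and service names are examples only).
In the standalone case, nginx binds directly to port 80 on the host, so no -p option is needed
or honored:

docker run -d --network host --name web-host nginx
curl http://localhost:80
docker service create --name web-svc --network host nginx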

Lab 3.1: Mount Volume

Prerequisites

Before starting the lab, run the lab login doops command first to log in to your account. The
account credentials are the same as the credentials of your Adinusa platform account.

lab login doops

After logging in, run the lab start do-003-1 command. This command prepares the lab
environment.

lab start do-003-1

Run the following on node pod-[username]-node01

1. Create a working directory.

cd $HOME
mkdir -p latihan/latihan01-volume
cd latihan/latihan01-volume

2. Create a container for the data volume, named test-volume.

for i in {1..10};do touch file-$i;done
sudo docker create -v /test-volume --name test-volume busybox
sudo docker cp . test-volume:/test-volume

3. Mount the volumes.

sudo docker run --volumes-from test-volume ubuntu ls /test-volume

Lab 3.2: Mount Volume with NFS Server

Prerequisites

Before starting the lab, run the lab login doops command first to log in to your account. The
account credentials are the same as the credentials of your Adinusa platform account.

lab login doops

After logging in, run the lab start do-003-2 command. This command prepares the lab
environment.

lab start do-003-2

Run the following on node pod-[username]-node01

1. Create a directory.

mkdir -p /data/nfs-storage01/

2. Create the NFS server with Docker.

docker run -itd --privileged --restart unless-stopped -e SHARED_DIRECTORY=/data -v /data/nfs-storage01:/data -p 2049:2049 itsthenetwork/nfs-server-alpine:12

Run the following on node pod-[username]-node02

3. Mount the volume on the NFS client.

ssh 10.x.x.20 -l root

sudo apt install nfs-client -y
sudo mount -v -o vers=4,loud 10.x.x.10:/ /mnt
or
sudo mount -v -t nfs4 10.x.x.10:/ /mnt
df -h

touch /mnt/file1.txt
touch /mnt/file2.txt
exit

Run the following on node pod-[username]-node01

4. Verify.

ls /data/nfs-storage01/

Lab 3.3: Mount Volume with Read-only Mode

Prerequisites

Before starting the lab, run the lab login doops command first to log in to your account. The
account credentials are the same as the credentials of your Adinusa platform account.

lab login doops

After logging in, run the lab start do-003-3 command. This command prepares the lab
environment.

lab start do-003-3

Run the following on node pod-[username]-node01

1. Create a volume.

docker volume create my-vol

2. List the volumes.

docker volume ls

3. Show the volume details.

docker volume inspect my-vol

4. Run a container with the volume.

docker run -d --name=nginx-test -v my-vol:/usr/share/nginx/html -p 4005:80 nginx:latest

5. Show the container's IP address.

docker inspect nginx-test | grep -i ipaddress

6. Test browsing to the container's IP address.

curl http://172.17.XXX.XXX

7. Create an index.html file and move it to the volume's source directory.

sudo echo "This is from my-vol source directory" > index.html
sudo mv index.html /var/lib/docker/volumes/my-vol/_data

8. Test browsing to the container's IP address again.

curl http://172.17.XXX.XXX

9. Run a container with the volume mounted read-only.

docker run -d --name=nginxtest-rovol -v my-vol:/usr/share/nginx/html:ro nginx:latest

10. Show the details of the nginxtest-rovol container and note the Mode in the Mounts section.

docker inspect nginxtest-rovol

Lab 3.4: Volume Driver

Prerequisites

Before starting the lab, run the lab login doops command first to log in to your account. The
account credentials are the same as the credentials of your Adinusa platform account.

lab login doops

After logging in, run the lab start do-003-4 command. This command prepares the lab
environment.

lab start do-003-4

Run the following on node pod-[username]-node02

1. Create the /share folder and add an index.html file.

sudo mkdir /share
sudo chmod 777 /share
echo "Hello World!" > /share/index.html

Run the following on node pod-[username]-node01

2. Install the sshfs plugin.

sudo docker plugin install --grant-all-permissions vieux/sshfs
sudo docker plugin ls
sudo docker plugin disable [PLUGIN ID]
sudo docker plugin set vieux/sshfs sshkey.source=/root/.ssh/
sudo docker plugin enable [PLUGIN ID]
sudo docker plugin ls

3. Create a volume with the sshfs driver.

sudo docker volume create --driver vieux/sshfs -o [email protected]:/share -o allow_other sshvolume

4. Test running a container with the volume.

sudo docker run -d --name=nginxtest-ssh -p 8090:80 -v sshvolume:/usr/share/nginx/html nginx:latest

5. Test browsing.

sudo docker ps
curl http://localhost:8090

Lab 3.5: Default Bridge Network


Prerequisites

Before starting the lab, run the lab login doops command first to log in to your account. The
account credentials are the same as the credentials of your Adinusa platform account.

lab login doops

After logging in, run the lab start do-003-5 command. This command prepares the lab
environment.

lab start do-003-5

Run the following on node pod-[username]-node01

1. List the current networks.

sudo docker network ls

2. Run two alpine containers that run the ash shell.

docker run -dit --name alpine1 alpine ash
docker run -dit --name alpine2 alpine ash
docker container ls

3. Create a new network and connect a container to it.

docker network create --driver bridge bridge1
docker network connect bridge1 alpine1

4. Check the IP address of the alpine2 container.

docker inspect alpine2 | grep -i ipaddress

5. Enter the alpine1 container.

docker exec -it alpine1 sh

6. Show the IP addresses of the alpine1 container.

ip add

7. Test pinging the internet (succeeds).

ping -c 3 8.8.8.8

8. Test pinging the IP address of the alpine2 container (succeeds).

ping -c 3 172.17.YYY.YYY

9. Test pinging the alpine2 container by name (fails).

ping -c 3 alpine2

10. Detach from the alpine1 container without closing its shell by pressing Ctrl+P, then Ctrl+Q.

Lab 3.6: Host Network

Prerequisites

Before starting the lab, run the lab login doops command first to log in to your account. The
account credentials are the same as the credentials of your Adinusa platform account.

lab login doops

After logging in, run the lab start do-003-6 command. This command prepares the lab
environment.

lab start do-003-6

Run the following on node pod-[username]-node01

1. Run a container from the nginx image.

sudo docker run -itd --network host --name my_nginx nginx

2. Test browsing to localhost.

curl http://localhost

3. Verify the IP addresses and the process bound to port 80.

ip add
sudo netstat -tulpn | grep :80

Creating Custom Docker Container Image

Abstract
Overview: Manipulate pre-built container images to create and manage containerized
services.
Objectives:

 Manage the life cycle of a container from creation to deletion.


 Save application data across container restarts through the use of persistent
storage.
 Describe how Docker provides network access to containers, and access a container
through port forwarding.

Sections:

 Docker images
 Building Custom Container Images with Dockerfile

Docker images

What are images?

In Linux, everything is a file. The whole operating system is basically a filesystem with files
and folders stored on the local disk. This is an important fact to remember when looking at
what container images are. As we will see, an image is basically a big tarball containing a
filesystem. More specifically, it contains a layered filesystem.

The layered filesystem

Container images are templates from which containers are created. These images are not
made up of just one monolithic block but are composed of many layers. The first layer in the
image is also called the base layer. We can see this in the following graphic:
The image as a stack of layers

Each individual layer contains files and folders. Each layer
only contains the changes to the filesystem with respect to the underlying layers. A storage
driver handles the details regarding the way these layers interact with each other. Different
storage drivers are available that have advantages and disadvantages in different
situations.

The layers of a container image are all immutable. Immutable means that once generated,
the layer cannot ever be changed. The only possible operation affecting the layer is its
physical deletion. This immutability of layers is important because it opens up a tremendous
amount of opportunities, as we will see.

In the following screenshot, we can see what a custom image for a web application, using
Nginx as a web server, could look like:
A sample custom image based on Alpine and Nginx

Our base layer here consists of the Alpine Linux distribution. Then, on top of that, we have
an Add Nginx layer where Nginx is added on top of Alpine. Finally, the third layer contains
all the files that make up the web application, such as HTML, CSS, and JavaScript files.

As has been said previously, each image starts with a base image. Typically, this base
image is one of the official images found on Docker Hub, such as a Linux distro, Alpine,
Ubuntu, or CentOS. However, it is also possible to create an image from scratch.

Docker Hub is a public registry for container images. It is a central hub ideally suited for
sharing public container images.

Each layer only contains the delta of changes in regard to the previous set of layers. The
content of each layer is mapped to a special folder on the host system, which is usually a
subfolder of /var/lib/docker/.

Since layers are immutable, they can be cached without ever becoming stale. This is a big
advantage, as we will see.
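
You can observe these layers yourself with the docker image history command, which lists each
layer of an image together with the instruction that created it and its size. A quick sketch, using
the official nginx image as an example:

docker image pull nginx:latest
docker image history nginx:latest

Layers shown with a size of 0B are typically metadata-only instructions such as ENV, EXPOSE,
or CMD.
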
The writable container layer

As we have discussed, a container image is made of a stack of immutable or read-only
layers. When the Docker Engine creates a container from such an image, it adds a writable
container layer on top of this stack of immutable layers. Our stack now looks as follows:

The writable container layer

The Container Layer is marked as read/write. Another advantage of the immutability of
image layers is that they can be shared among many containers created from this image.
All that is needed is a thin, writable container layer for each container, as shown in the
following screenshot:
Multiple containers sharing the same image layers

This technique, of course, results in a tremendous reduction in the resources that are
consumed. Furthermore, this helps to decrease the loading time of a container since only a
thin container layer has to be created once the image layers have been loaded into
memory, which only happens for the first container.
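
One way to see the writable container layer in action is the docker diff command, which lists
the files that have been added (A), changed (C), or deleted (D) in that thin top layer since the
container started. A small sketch, using an example container name:

docker run -dit --name demo alpine ash
docker exec demo touch /tmp/hello.txt
docker diff demo

Only the writable layer is affected; the underlying image layers remain untouched and continue
to be shared with other containers.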

Docker Registry
An overview of Docker Registry

In this section, we will be looking at Docker Registry. Docker Registry is an open-source
application that you can run anywhere you please and store your Docker images in. We will
look at the comparison between Docker Registry and Docker Hub, and how to choose
between the two. By the end of the section, you will learn how to run your own Docker
Registry and see whether it's a proper fit for you.

source : thecustomizewindows.com

Docker Registry, as stated earlier, is an open source application that you can utilize to store
your Docker images on a platform of your choice. This allows you to keep them 100%
private if you wish, or share them as needed.

Docker Registry makes a lot of sense if you want to deploy your own registry without having
to pay for all the private features of Docker Hub. Next, let's take a look at some
comparisons between Docker Hub and Docker Registry to help you make an educated
decision as to which platform to choose to store your images.

Docker Registry has the following features:


 Host and manage your own registry from which you can serve all the repositories as
private, public, or a mix between the two
 Scale the registry as needed, based on how many images you host or how many pull
requests you are serving out
 Everything is command-line based

With Docker Hub, you will:

 Get a GUI-based interface that you can use to manage your images
 Have a location already set up in the cloud that is ready to handle public and/or
private images
 Have the peace of mind of not having to manage a server that is hosting all your
images
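
If you decide to host your own registry, a minimal sketch looks like this (the port and tag below
follow the defaults of the official registry image; treat the image names as examples only):

docker run -d -p 5000:5000 --restart=always --name registry registry:2
docker tag alpine:latest localhost:5000/my-alpine
docker push localhost:5000/my-alpine
docker pull localhost:5000/my-alpine

For anything beyond local experiments, you would normally also configure TLS and
authentication for the registry.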

Docker Hub

Docker Hub is the world's easiest way to create, manage, and deliver your teams' container
applications.

Docker Hub repositories allow you to share container images with your team, customers, or
the Docker community at large.

Docker images are pushed to Docker Hub through the docker push command. A single
Docker Hub repository can hold many Docker images (stored as tags).
Manipulating Container Images

Introduction

There are various ways to manage container images in a DevOps fashion. For example, a
developer finished testing a custom container in a machine, and needs to transfer this
container image to another host for another developer, or to a production server. There are
two ways to accomplish this:

1. Save the container image to a *.tar file.

2. Publish (push) the container image to an image registry.

One of the ways a developer could have created this custom container is discussed later in
this chapter (docker commit). However, the recommended way to do so, that is,
using Dockerfiles, is discussed in the next chapters.

Saving and Loading Images

Existing images from the Docker cache can be saved to a .tar file using the docker save
command. The generated file is not a regular tar file: it contains image metadata and
preserves the original image layers. By doing so, the original image can be later recreated
exactly as it was.

The general syntax of the docker command save verb is as follows:

# docker save [-o FILE_NAME] IMAGE_NAME[:TAG]

If the -o option is not used, the generated image is sent to the standard output as binary
data.

In the following example, the MySQL container image from the Docker registry is saved to
the mysql.tar file:

# docker save -o mysql.tar mysql:5.7

.tar files generated using the save verb can be used for backup purposes. To restore the
container image, use the docker load command. The general syntax of the command is as
follows:

# docker load [-i FILE_NAME]


To save disk space, the file generated by the save verb can be compressed as a Gzip file.
The load verb uses the gunzip command before importing the file to the cache directory.
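
A small sketch of the compressed round trip (the file names are examples only):

docker save mysql:5.7 | gzip > mysql.tar.gz
docker load -i mysql.tar.gz

docker load accepts the compressed file directly, so no manual gunzip step is required.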

Publishing Images to a Registry

To publish an image to the registry, it must be stored in Docker's cache, and
should be tagged for identification purposes. To tag an image, use the tag verb, as follows:

# docker tag IMAGE[:TAG] [REGISTRYHOST/][USERNAME/]NAME[:TAG]

For example, let's say I want to push the latest version of Alpine to my account and give it a
tag of versi1. I can do this in the following way:

# docker image tag alpine:latest palopalepalo/alpine:versi1


# docker images
REPOSITORY TAG IMAGE ID CREATED
SIZE
nginx latest 7e4d58f0e5f3 10 days ago
133MB
mysql 5.7 ef08065b0a30 10 days ago
448MB
mysql latest e1d7dc9731da 10 days ago
544MB
alpine latest a24bb4013296 3 months
ago 5.57MB
palopalepalo/alpine versi1 a24bb4013296 3 months
ago 5.57MB

Now, to be able to push the image, I have to log in to my account, as follows:

# docker login -u palopalepalo -p <my secret password>

After a successful login, I can then push the image, like this:

# docker image push palopalepalo/alpine:versi1

You will see something similar to this in the Terminal:

# docker image push palopalepalo/alpine:versi1


The push refers to repository [docker.io/palopalepalo/alpine]
50644c29ef5a: Mounted from library/alpine
versi1: digest:
sha256:a15790640a6690aa1730c38cf0a440e2aa44aaca9b0e8931a9f2b0d7cc90fd65
size: 528

For each image that we push to Docker Hub, we automatically create a repository. A
repository can be private or public. Everyone can pull an image from a public repository.
From a private repository, an image can only be pulled if one is logged in to the registry and
has the necessary permissions configured.

Deleting All Images

To delete all images that are not used by any container, use the following command:

# docker rmi $(docker images -q)

The command returns all the image IDs available in the cache and passes them as a
parameter to the docker rmi command for removal. Images that are in use are not deleted;
however, this does not prevent any of the unused images from being removed.
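
If the goal is simply to clean up unused images, a gentler alternative worth knowing is docker
image prune; with the -a flag it removes all images that are not referenced by at least one
container:

docker image prune        # removes dangling images only
docker image prune -a     # removes all unused images

Both variants ask for confirmation unless you add -f.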

Building Custom Container Images with Dockerfile

Introducing the Dockerfile

In this section, we will cover Dockerfiles in-depth, along with the best practices to use. So
what is a Dockerfile?

A Dockerfile is simply a plain text file that contains a set of user-defined instructions. When
the Dockerfile is called by the docker image build command, which we will look at next, it is
used to assemble a container image. A Dockerfile looks like the following:

FROM alpine:latest
LABEL maintainer="adinusa <[email protected]>"
LABEL description="This example Dockerfile installs NGINX."
RUN apk add --update nginx && \
rm -rf /var/cache/apk/* && \
mkdir -p /tmp/nginx/
COPY files/nginx.conf /etc/nginx/nginx.conf
COPY files/default.conf /etc/nginx/conf.d/default.conf
ADD files/html.tar.gz /usr/share/nginx/

EXPOSE 80/tcp

ENTRYPOINT ["nginx"]
CMD ["-g", "daemon off;"]

As you can see, even with no explanation, it is quite easy to get an idea of what each step
of the Dockerfile instructs the build command to do.

Before we move on to working our way through the previous file, we should quickly touch
upon Alpine Linux.

Alpine Linux is a small, independently developed, non-commercial Linux distribution
designed for security, efficiency, and ease of use. While small (see the following section), it
offers a solid foundation for container images due to its extensive repository of packages,
and also thanks to the unofficial port of grsecurity/PaX, which is patched into its kernel and
offers proactive protection against dozens of potential zero-day and other vulnerabilities.

Alpine Linux, due both to its size, and how powerful it is, has become the default image
base for the official container images supplied by Docker. Because of this, we will be using
it throughout this book. To give you an idea of just how small the official image for Alpine
Linux is, let's compare it to some of the other distributions available at the time of writing:

As you can see from the Terminal output, Alpine Linux weighs in at only 4.41 MB, as
opposed to the biggest image, which is Fedora, at 253 MB. A bare-metal installation of
Alpine Linux comes in at around 130 MB, which is still almost half the size of the Fedora
container image.

Reviewing the Dockerfile in Depth

Let's take a look at the instructions used in the Dockerfile example. We will look at them in
the order in which they appear:

FROM

The FROM instruction tells Docker which base you would like to use for your image; as
already mentioned, we are using Alpine Linux, so we simply have to put the name of the
image and the release tag we wish to use. In our case, to use the latest official Alpine Linux
image, we simply need to add alpine:latest.

LABEL

The LABEL instruction can be used to add extra information to the image. This information
can be anything from a version number to a description. It's also recommended that you
limit the number of labels you use. A good label structure will help others who have to use
our image later on.

However, using too many labels can cause the image to become inefficient as well, so I
would recommend using the label schema detailed at http://label-schema.org/. You can
view an image's labels with the following docker inspect command:

$ docker image inspect <IMAGE_ID>


Alternatively, you can use the following to filter just the labels:

$ docker image inspect -f {{.Config.Labels}} <IMAGE_ID>

RUN

The RUN instruction is where we interact with our image to install software and run scripts,
commands, and other tasks. As you can see from our RUN instruction, we are actually
running three commands:

RUN apk add --update nginx && \


rm -rf /var/cache/apk/* && \
mkdir -p /tmp/nginx/

The first of our three commands is the equivalent of running the following command if we
had a shell on an Alpine Linux host:

$ apk add --update nginx

This command installs nginx using Alpine Linux's package manager.

We are using the && operator to move on to the next command if the previous command
was successful. To make it more obvious which commands we are running, we are also
using \ so that we can split the command over multiple lines, making it easy to read.

The next command in our chain removes any temporary files and so on to keep the size of
our image to a minimum:

$ rm -rf /var/cache/apk/*

The final command in our chain creates a folder with a path of /tmp/nginx/, so that nginx will
start correctly when we run the container:

$ mkdir -p /tmp/nginx/

We could have also used the following in our Dockerfile to achieve the same results:

RUN apk add --update nginx


RUN rm -rf /var/cache/apk/*
RUN mkdir -p /tmp/nginx/

However, much like adding multiple labels, this is considered inefficient as
it can add to the overall size of the image, which for the most part we should try to avoid.
There are some valid use cases for this, which we will look at later in the chapter. For the
most part, this approach to running commands should be avoided when your image is being
built.

COPY and ADD

At first glance, COPY and ADD look like they are doing the same task; however, there are
some important differences. The COPY instruction is the more straightforward of the two:

COPY files/nginx.conf /etc/nginx/nginx.conf


COPY files/default.conf /etc/nginx/conf.d/default.conf

As you have probably guessed, we are copying two files from the files folder on the host we
are building our image on. The first file is nginx.conf.

EXPOSE

The EXPOSE instruction lets Docker know that when the image is executed, the port and
protocol defined will be exposed at runtime. This instruction does not map the port to the
host machine, but instead, opens the port to allow access to the service on the container
network.

For example, in our Dockerfile, we are telling Docker to open port 80 every time the image
runs:

EXPOSE 80/tcp
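
To actually make the port reachable from the host, you publish it at run time; a short sketch
(the host port 8080 and the image name my-image are examples only):

docker run -d -p 8080:80 my-image
docker run -d -P my-image   # publishes all EXPOSEd ports to random host ports

EXPOSE is therefore documentation plus a hint for -P, rather than a port mapping by itself.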

ENTRYPOINT and CMD


ENTRYPOINT specifies the default command to execute when the container is created.
CMD provides the default arguments for the ENTRYPOINT instruction.

For example, if you want to have a default command that you want to execute inside a
container, you could do something similar to the following example, but be sure to use a
command that keeps the container alive. In our case, we are using the following:

ENTRYPOINT ["nginx"]
CMD ["-g", "daemon off;"]

What this means is that whenever we launch a container from our image, the nginx binary is
executed, as we have defined that as our ENTRYPOINT, and then whatever we have as
the CMD is executed, giving us the equivalent of running the following command:

$ nginx -g daemon off;

Another example of how ENTRYPOINT can be used is the following:

$ docker container run --name nginx-version dockerfile-example -v

This would be the equivalent of running the following command on our host:

$ nginx -v

Notice that we didn't have to tell Docker to use nginx. As we have the nginx binary as our
entry point, any command we pass overrides the CMD that had been defined in the
Dockerfile.

This would display the version of nginx we have installed, and our container would stop, as
the nginx binary would only be executed to display the version information and then the
process would stop. We will look at this example later in this chapter, once we have built our
image.

USER
The USER instruction specifies the username or the UID to use when running the container
image for the RUN, CMD, and ENTRYPOINT instructions in the Dockerfile. It is a good
practice to define a user other than root, for security reasons.

WORKDIR

The WORKDIR instruction sets the working directory for the instructions that follow it in the
Dockerfile, such as RUN, CMD, ENTRYPOINT, COPY, and ADD.
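
A short, hypothetical Dockerfile fragment showing both instructions together (the user name,
group, and paths are examples only):

FROM alpine:latest
RUN addgroup -S app && adduser -S app -G app
WORKDIR /app
COPY . .
USER app
CMD ["sh", "-c", "whoami && pwd"]

From the WORKDIR line onward, relative paths are resolved against /app, and the final process
runs as the unprivileged app user, so this container would print "app" and "/app".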

Lab 4.1: Exploring Dockerfile

Prerequisites

Before starting the lab, run the lab login doops command first to log in to your account. The
login credentials are the same as the credentials registered on this platform (Adinusa).

lab login doops

After logging in, run the lab start do-004-1 command. This command prepares the lab
environment.

lab start do-004-1

Run the following on node pod-[username]-node01

Dockerfile (latihan01)

1. Clone the exercise repository.

git clone https://github.com/spkane/docker-node-hello.git \
--config core.autocrlf=input latihan01

2. Change into the directory.

cd latihan01

3. Build the image.

docker build -t node-latihan01 .

4. Create a container.

docker run -d --rm --name node-latihan01 -p 8080:8080 node-latihan01

5. Try accessing the port.

curl localhost:8080

Dockerfile (latihan02)

1. Create the latihan02 directory.

cd $HOME
mkdir latihan02
cd latihan02

2. Create a Dockerfile.

vim Dockerfile
...
# Use whalesay image as a base image
FROM docker/whalesay:latest

# Install fortunes
RUN apt -y update && apt install -y fortunes

# Execute command
CMD /usr/games/fortune -a | cowsay
...

3. Build an image from the Dockerfile.

docker build -t docker-whale .

4. List the built images.

docker image ls

5. Test running the image.

docker run docker-whale

6. List the containers.

docker ps
docker container ls -a

Lab 4.2: Exploring Dockerfile (Flask Apps)

Prerequisites

Before starting the lab, run the lab login doops command first to log in to your account. The
login credentials are the same as the credentials registered on this platform (Adinusa).

lab login doops

After logging in, run the lab start do-004-2 command. This command prepares the lab
environment.

lab start do-004-2

Run the following on node pod-[username]-node01

Dockerfile (latihan03)

1. If you do not have a Docker ID yet, register at https://id.docker.com.

2. Log in with your Docker ID.

sudo docker login

3. Create the latihan03 directory.

cd $HOME
mkdir latihan03
cd latihan03

4. Create the Flask application file.

vim app.py
...
from flask import Flask
app = Flask(__name__)

@app.route('/')
def hello_world():
    return 'Hey, we have Flask in a Docker container!'

if __name__ == '__main__':
    app.run(debug=True, host='0.0.0.0')
...

5. Create requirements.txt.

vi requirements.txt
...
Flask==0.10.1
Werkzeug==1.0.0
Jinja2==2.8.1
MarkupSafe==1.1.1
itsdangerous==1.1.0
...

6. Create the Dockerfile.

vim Dockerfile
...
FROM ubuntu:16.04
RUN mkdir /app
RUN apt-get update -y && \
    apt-get install python-pip python-dev -y

COPY ./requirements.txt /app
COPY . /app

WORKDIR /app
RUN pip install -r requirements.txt

ENTRYPOINT ["python"]
CMD ["app.py"]
...

7. Build an image from the Dockerfile.

docker build -t flask-latihan03 .

8. Tag the image.

sudo docker tag flask-latihan03 [username]/flask-latihan03:latest

9. Push the image.

sudo docker push [username]/flask-latihan03:latest

10. Run the image.

sudo docker run -d -p 5000:5000 --name flask03 [username]/flask-latihan03

11. Test browsing.

curl localhost:5000


Lab 4.3: Exploring Dockerfile (Quiz)


Prerequisites

Before starting the lab, run the lab login doops command first to log in to your account. The
login credentials are the same as the credentials registered on this platform (Adinusa).

lab login doops

After logging in, run the lab start do-004-3 command. This command prepares the lab
environment.

lab start do-004-3

Run the following on node pod-[username]-node01

1. Create the latihan-quiz01 directory, then change into it.

2. Clone the repository from https://github.com/rivawahyuda/mywebsite.git

3. Create a Dockerfile with the following requirement: base image nginx:latest.

4. Build an image named latihan-quiz01-image.

5. Run a container from the latihan-quiz01-image image, expose port 80, and name it quiz01.

6. Test browsing.

Docker Compose

Abstract
Overview In the previous chapter, we learned a lot about how container networking works
on a single Docker host. We introduced the Container Network Model (CNM), which forms
the basis of all networking between Docker containers, and then we dove deep into different
implementations of the CNM, specifically the bridge network. Finally, we introduced Traefik,
a reverse proxy, to enable sophisticated HTTP application-level routing between containers.
This chapter introduces the concept of an application consisting of multiple services, each
running in a container, and how Docker Compose allows us to easily build, run, and scale
such an application using a declarative approach.
This chapter covers the following topics:

 Introducing Docker Compose


 Running a multi-service application
 Scaling a service
 Building and pushing an application
 Using Docker Compose overrides

After completing this chapter, the reader will be able to do the following:

 Explain in a few short sentences the main differences between an imperative and
declarative approach for defining and running an application
 Describe in their own words the difference between a container and a Docker
Compose service
 Author a Docker Compose YAML file for a simple multi-service application
 Build, push, deploy, and tear down a simple multi-service application using Docker
Compose
 Use Docker Compose to scale an application service up and down
 Define environment-specific Docker Compose files using overrides

Introducing Docker Compose


source : towardsdatascience.com

Docker Compose is a tool provided by Docker that is mainly used where you need to run
and orchestrate containers running on a single Docker host. This includes, but is not limited
to, development, continuous integration (CI), automated testing, manual QA, or demos.

Docker Compose uses files formatted in YAML as input. By default, Docker Compose


expects these files to be called docker-compose.yml, but other names are possible. The
content of a docker-compose.yml is said to be a declarative way of describing and running a
containerized application potentially consisting of more than a single container.

So, what is the meaning of declarative?


First of all, declarative is the antonym of imperative. Well, that doesn't help much. Now that
we have introduced another definition, we need to explain both of them:

 Imperative: This is a way in which we can solve problems by specifying the exact
procedure that has to be followed by the system.

If we tell a system such as the Docker daemon imperatively how to run an application, then
that means that we have to describe step by step what the system has to do and how it has
to react if some unexpected situation occurs. We have to be very explicit and precise in our
instructions, and we need to cover all edge cases and how they need to be treated.

 Declarative: This is a way in which we can solve problems without requiring the
programmer to specify an exact procedure to be followed.

A declarative approach means that we tell the Docker engine what our desired state for an
application is and it has to figure out on its own how to achieve this desired state and how to
reconcile it if the system deviates from it.

Docker clearly recommends the declarative approach when dealing with containerized
applications. Consequently, the Docker Compose tool uses this approach.

Running a multi-service app

In most cases, applications do not consist of only one monolithic block, but rather of several
application services that work together. When using Docker containers, each application
service runs in its own container. When we want to run such a multi-service application, we
can, of course, start all the participating containers with the well-known docker container run
command, and we have done this in previous chapters. But this is inefficient at best. With
the Docker Compose tool, we are given a way to define the application in a declarative way
in a file that uses the YAML format.

Let's have a look at the content of a simple docker-compose.yml file:

Clone the repo for Docker Compose:

git clone https://github.com/ammarun11/Docker-For-DevOps.git
cd Docker-For-DevOps/exercise/practice03

Display the docker-compose.yml file:

cat docker-compose.yml
version: "3.2"
services:
  web:
    image: palopalepalo/web:1.0
    build: web
    ports:
      - 80:3000
  db:
    image: palopalepalo/db:1.0
    build: db
    volumes:
      - pets-data:/var/lib/postgresql/data

volumes:
  pets-data:

The lines in the file are explained as follows:

 version: In this line, we specify the version of the Docker Compose format we want
to use. In our example file, this is version 3.2.
 services: In this section, we specify the services that make up our application in the
services block. In our sample, we have two application services and we call them
web and db:
 web: The web service is using an image called palopalepalo/web:1.0, which, if not
already in the image cache, is built from the Dockerfile found in the web folder . The
service is also publishing container port 3000 to the host port 80.
 db: The db service, on the other hand, is using the image name palopalepalo/db:1.0,
which is a customized PostgreSQL database. Once again, if the image is not already
in the cache, it is built from the Dockerfile found in the db folder . We are mounting a
volume called pets-data into the container of the db service.
 volumes: The volumes used by any of the services have to be declared in this
section. In our sample, this is the last section of the file. The first time the application
is run, a volume called pets-data will be created by Docker and then, in subsequent
runs, if the volume is still there, it will be reused. This could be important when the
application, for some reason, crashes and has to be restarted. Then, the previous
data is still around and ready to be used by the restarted database service.

Note that there are two major versions of the Docker Compose file syntax: version 2.x,
which is targeted toward deployments on a single Docker host, and version 3.x, which is
also used when you want to define an application that is targeted at Docker Swarm or
Kubernetes. Our example file uses version 3.2. We will discuss this in more detail starting
with Chapter 12, Orchestrators.


Building images with Docker Compose

Navigate to the practice03 subfolder of the exercise folder and then build the images:

# cd ~/Docker-For-DevOps/exercise/practice03
# docker-compose build

If we enter the preceding command, then the tool will assume that there must be a file in the
current directory called docker-compose.yml and it will use that one to run. In our case, this
is indeed the case and the tool will build the images.

In your Terminal window, you should see an output similar to this:

root@ubuntu20:~/Docker-For-DevOps# docker-compose build


Building web
Step 1/9 : FROM node:12.10-alpine
12.10-alpine: Pulling from library/node
e7c96db7181b: Already exists
95b3c812425e: Already exists
778b81d0468f: Already exists
28549a15ba3e: Already exists
Digest:
sha256:744b156ec2dca0ad8291f80f9093273d45eb85378b6290b2fbbada861cc3ed01
Status: Downloaded newer image for node:12.10-alpine
---> ef7d474eab14
Step 2/9 : RUN mkdir /app
---> Running in 175624c5eb32
Removing intermediate container 175624c5eb32
---> 15f158d2dd62
Step 3/9 : WORKDIR /app
---> Running in 5626bfd248a4
Removing intermediate container 5626bfd248a4
---> a7c260bb5028
Step 4/9 : COPY package.json /app/
---> 8e297fe63c20
Step 5/9 : RUN npm install
---> Running in fea27bd8b088
npm notice created a lockfile as package-lock.json. You should commit
this file.
npm WARN [email protected] No repository field.

added 72 packages from 55 contributors and audited 72 packages in 5.465s


found 0 vulnerabilities

Removing intermediate container fea27bd8b088


---> bb09c1e47060
Step 6/9 : COPY ./public /app/public
---> ec4d77e57ba7
Step 7/9 : COPY ./src /app/src
---> 94a332e4d47c
Step 8/9 : EXPOSE 3000
---> Running in a15f21001281
Removing intermediate container a15f21001281
---> 4d1741aa7ac7
Step 9/9 : CMD node src/server.js
---> Running in 2ecf595b2f15
Removing intermediate container 2ecf595b2f15
---> 9ffeb0e23988
Successfully built 9ffeb0e23988
Successfully tagged palopalepalo/web:1.0

Building the Docker image for the web service

In the preceding output, you can see that docker-compose first downloads the base
image, node:12.10-alpine, for the web image we're building from Docker Hub. Subsequently,
it uses the Dockerfile found in the web folder to build the image and names it
palopalepalo/web:1.0. But this is only the first part; the second part of the output should look
similar to this:

Building db
Step 1/5 : FROM postgres:12-alpine
12-alpine: Pulling from library/postgres
df20fa9351a1: Already exists
600cd4e17445: Already exists
04c8eedc9a76: Already exists
27e869070ff2: Already exists
a64f75a232e2: Already exists
ee71e22a1c96: Already exists
0ef267de4e32: Already exists
69bfaaa66791: Already exists
Digest:
sha256:b49bafad6a2b7cbafdae4f974d9c4c7ff5c6bb7ec98c09db7b156a42a3c57baf
Status: Downloaded newer image for postgres:12-alpine
---> 3d77dd9d9dc3
Step 2/5 : COPY init-db.sql /docker-entrypoint-initdb.d/
---> 1e296971fa64
Step 3/5 : ENV POSTGRES_USER dockeruser
---> Running in 8d5cf6ed47f5
Removing intermediate container 8d5cf6ed47f5
---> 968c208ece21
Step 4/5 : ENV POSTGRES_PASSWORD dockerpass
---> Running in f7bac4b31ddc
Removing intermediate container f7bac4b31ddc
---> 50b5ae16aa0a
Step 5/5 : ENV POSTGRES_DB pets
---> Running in 4c54a5fa0e94
Removing intermediate container 4c54a5fa0e94
---> 5061eb4cc803
Successfully built 5061eb4cc803
Successfully tagged palopalepalo/db:1.0

Building the Docker image for the db service

Here, once again, docker-compose pulls the base image, postgres:12-alpine, from Docker
Hub and then uses the Dockerfile found in the db folder to build the image we call
palopalepalo/db:1.0.

Scaling a service

Now, let's, for a moment, assume that our sample application has been live on the web and
become very successful. Loads of people want to see our cute animal images. So now
we're facing a problem since our application has started to slow down. To counteract this
problem, we want to run multiple instances of the web service. With Docker Compose, this
is readily done.

Running more instances is also called scaling up. We can use this tool to scale our web
service up to, say, three instances:

# docker-compose up --scale web=3


If we do this, we are in for a surprise. The output will look similar to the following
screenshot:

root@ubuntu20:~/Docker-For-DevOps/exercise/practice03# docker-compose up
--scale web=3
practice03_db_1 is up-to-date
WARNING: The "web" service specifies a port on the host. If multiple
containers for this service are created on a single host, the port will
clash.
Starting practice03_web_1 ... done
Creating practice03_web_2 ... error
Creating practice03_web_3 ... error

ERROR: for practice03_web_3 Cannot start service web: driver failed


programming external connectivity on endpoint practice03_web_3
(376611b5eddf8c40095e716035ee55fc13c6ba262089bf14421c636241592216): Bind
for 0.0.0.0:80 failed: port is already allocated

ERROR: for practice03_web_2 Cannot start service web: driver failed


programming external connectivity on endpoint practice03_web_2
(992631eb6f25e950ba941c5ba3f77fcead2bbf540a34d5689e71f76415f6082b): Bind
for 0.0.0.0:80 failed: port is already allocated

ERROR: for web Cannot start service web: driver failed programming
external connectivity on endpoint practice03_web_3
(376611b5eddf8c40095e716035ee55fc13c6ba262089bf14421c636241592216): Bind
for 0.0.0.0:80 failed: port is already allocated
ERROR: Encountered errors while bringing up the project.

The output of docker-compose --scale

The second and third instances of the web service fail to start. The error message tells us
why: we cannot use the same host port 80 more than once. When instances 2 and 3 try to
start, Docker realizes that port 80 is already taken by the first instance. What can we do?
Well, we can just let Docker decide which host port to use for each instance.

If in the ports section of the compose file, we only specify the container port and leave out
the host port, then Docker automatically selects an ephemeral port. Let's do exactly this:

1. First, let's tear down the application:


# docker-compose down

2. Then, we modify the docker-compose.yml file to look as follows:

version: "3.2"
services:
  web:
    image: palopalepalo/web:1.0
    build: web
    ports:
      - 3000
  db:
    image: palopalepalo/db:1.0
    build: db
    volumes:
      - pets-data:/var/lib/postgresql/data

volumes:
  pets-data:

3. Now, we can start the application again and scale it up immediately after that:

# docker-compose up -d
# docker-compose up -d --scale web=3
Starting practice03_web_1 ... done
Creating practice03_web_2 ... done
Creating practice03_web_3 ... done

4. If we now do docker-compose ps, we should see the following output:

root@ubuntu20:~/Docker-For-DevOps/exercise/practice03# docker-compose ps
Name Command State
Ports
-------------------------------------------------------------------------
----------
practice03_db_1 docker-entrypoint.sh postgres Up 5432/tcp
practice03_web_1 docker-entrypoint.sh /bin/ ... Up
0.0.0.0:32769->3000/tcp
practice03_web_2 docker-entrypoint.sh /bin/ ... Up
0.0.0.0:32770->3000/tcp
practice03_web_3 docker-entrypoint.sh /bin/ ... Up
0.0.0.0:32771->3000/tcp

The output of docker-compose ps


As we can see, each instance of the web service has been associated with a different host
port. We can try to see whether they work, for example, using curl. Let's test the third instance,
practice03_web_3:

# curl -4 localhost:32771
Pets Demo Application

The answer, Pets Demo Application, tells us that, indeed, our application is still working as
expected. Try it out for the other two instances to be sure.

Lab 5.1: Using Docker Compose

Prerequisites

Before starting the lab, run the lab login doops command first to log in to your account. The
login credentials are the same as the credentials registered on this platform (Adinusa).

lab login doops

After logging in, run the lab start do-005-1 command. This command prepares the lab
environment.

lab start do-005-1

Run the following on pod-[username]-node01 & pod-[username]-node02

Install Compose

1. Install Compose.

sudo apt install -y docker-compose

2. Verify the installation.

sudo docker-compose --version

Run the following on pod-[username]-node01

3. Create the my_wordpress directory and change into it.

cd $HOME
mkdir -p latihan/my_wordpress
cd latihan/my_wordpress

4. Create the docker-compose.yml file.

vim docker-compose.yml
version: '3.2'

services:
  db:
    image: mysql:5.7
    volumes:
      - dbdata:/var/lib/mysql
    restart: always
    environment:
      MYSQL_ROOT_PASSWORD: somewordpress
      MYSQL_DATABASE: wordpress
      MYSQL_USER: [username]
      MYSQL_PASSWORD: [password]

  wordpress:
    depends_on:
      - db
    image: wordpress:latest
    ports:
      - "8000:80"
    restart: always
    environment:
      WORDPRESS_DB_HOST: db:3306
      WORDPRESS_DB_USER: [username]
      WORDPRESS_DB_PASSWORD: [password]

volumes:
  dbdata:

5. Run Compose.

sudo docker-compose up -d

6. List the containers and test browsing to the WordPress page you just deployed.

sudo docker container ls

Docker Continous Integration (CI)

According to the 2020 JetBrains developer survey, 44% of developers are now using some
form of continuous integration and deployment with Docker containers. We understand that
a large number of developers have got this set up using Docker Hub as their container
registry for part of their workflow. This guide contains some best practices for doing this and
provides guidance on how to get started.
We have also heard feedback that, given the changes Docker introduced relating to network
egress and the number of pulls for free users, there are questions around the best way
to use Docker Hub as part of CI/CD workflows without hitting these limits. This guide covers
best practices that improve your experience and encourage a sensible consumption of Docker
Hub that mitigates the risk of hitting these limits, and it contains tips on how to increase the
limits depending on your use case.

CI Using Docker

Continuous Integration (CI) is a development practice that requires developers to integrate
code into a shared repository several times a day. Each check-in is then verified by an
automated build, allowing teams to detect problems early. CI doesn’t get rid of bugs, but it
does make them dramatically easier to find and remove.

CI/CD merges development with testing, allowing developers to build code collaboratively,
submit it to the master branch, and have it checked for issues. This allows developers to not
only build their code, but also test it in any environment type and as often as possible to
catch bugs early in the applications development lifecycle. Since Docker can integrate with
tools like Jenkins and GitHub, developers can submit code in GitHub, test the code and
automatically trigger a build using Jenkins, and once the image is complete, images can be
added to Docker registries. This streamlines the process, saves time on build and set up
processes, all while allowing developers to run tests in parallel and automate them so that
they can continue to work on other projects while tests are being run.

1. Developer pushes a commit to GitHub
2. GitHub uses a webhook to notify Jenkins of the update
3. Jenkins pulls the GitHub repository, including the Dockerfile describing the image, as
well as the application and test code.
4. Jenkins builds a Docker image on the Jenkins slave node
5. Jenkins instantiates the Docker container on the slave node, and executes the
appropriate tests
6. If the tests are successful the image is then pushed up to Dockerhub or Docker
Trusted Registry.

Reference: https://collabnix.com/5-minutes-to-continuous-integration-pipeline-using-docker-jenkins-github-on-play-with-docker-platform/
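
The commands that a CI job such as the pipeline above typically runs boil down to a handful of
Docker CLI calls. A hedged sketch with placeholder image, tag, and credential names; the exact
test command and credentials handling depend on the project and the CI system:

docker build -t myorg/myapp:${GIT_COMMIT} .
docker run --rm myorg/myapp:${GIT_COMMIT} npm test   # or whatever the project's test command is
docker login -u "$DOCKER_USER" -p "$DOCKER_PASS"
docker push myorg/myapp:${GIT_COMMIT}

The flow of build, test, and push is what stays the same across CI systems.
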
Docker Hub Automated Build

Docker Hub can automatically build images from source code in an external repository and
automatically push the built image to your Docker repositories.

When you set up automated builds (also called autobuilds), you create a list of branches
and tags that you want to build into Docker images. When you push code to a source code
branch (for example in GitHub) for one of those listed image tags, the push uses a webhook
to trigger a new build, which produces a Docker image. The built image is then pushed to
the Docker Hub registry.

Build images automatically from a build context stored in a repository. A build context is a
Dockerfile and any files at a specific location. Automated Builds have several advantages:

 Images built in this way are built exactly as specified.


 The Dockerfile is available to anyone with access to your Docker Hub repository.
 Your repository is kept up-to-date with code changes automatically.

Automated Builds are supported for both public and private repositories on both GitHub and
Bitbucket.

Build Statuses

Queued: The image is in line to be built.

Building: The image is building.

Success: The image has been built with no issues.

Error: There was an issue with your image.


Logging and Error Handling

The docker logs command shows information logged by a running container. The docker
service logs command shows information logged by all containers participating in a service.
The information that is logged and the format of the log depends almost entirely on the
container’s endpoint command.

By default, docker logs or docker service logs shows the command’s output just as it would
appear if you ran the command interactively in a terminal. UNIX and Linux commands
typically open three I/O streams when they run, called STDIN, STDOUT, and STDERR.
STDIN is the command’s input stream, which may include input from the keyboard or input
from another command. STDOUT is usually a command’s normal output, and STDERR is
typically used to output error messages. By default, docker logs shows the command’s
STDOUT and STDERR. To read more about I/O and Linux, see the Linux Documentation
Project article on I/O redirection.
In some cases, docker logs may not show useful information unless you take additional
steps.

If you use a logging driver which sends logs to a file, an external host, a database, or
another logging back-end, docker logs may not show useful information.

If your image runs a non-interactive process such as a web server or a database, that
application may send its output to log files instead of STDOUT and STDERR.

In the first case, your logs are processed in other ways and you may choose not to use
docker logs. In the second case, the official nginx image shows one workaround, and the
official Apache httpd image shows another.

The official nginx image creates a symbolic link from /var/log/nginx/access.log to
/dev/stdout, and creates another symbolic link from /var/log/nginx/error.log to /dev/stderr,
overwriting the log files and causing logs to be sent to the relevant special device instead.
See the Dockerfile.
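
The relevant lines in the official nginx Dockerfile are essentially symbolic-link commands like
the following (paraphrased here rather than copied verbatim):

RUN ln -sf /dev/stdout /var/log/nginx/access.log \
    && ln -sf /dev/stderr /var/log/nginx/error.log

With these links in place, anything nginx writes to its log files ends up on the container's
STDOUT and STDERR and is therefore visible via docker logs.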

The official httpd image changes the httpd application's configuration to write its normal
output directly to /proc/self/fd/1 (which is STDOUT) and its errors to /proc/self/fd/2 (which is
STDERR).


Lab 7.1: Log Check


Prerequisites

Before starting the lab, run the lab login doops command first to log in to your account. The
login credentials are the same as the credentials registered on this platform (Adinusa).

lab login doops

After logging in, run the lab start do-007-1 command. This command prepares the lab
environment.

lab start do-007-1

Run the following on node pod-[username]-node01

1. Check the image history.

docker history nginx:latest

2. Run nginx.

docker run -dit -p 80:80 --name nginx1 nginx

3. Check the filesystem changes in the nginx container.

docker diff nginx1

4. Check the logs.

docker logs --details nginx1
docker logs --timestamps nginx1
docker logs nginx1

Logging Driver

When building containerized applications, logging is definitely one of the most important
things to get right from a DevOps standpoint. Log management helps DevOps teams debug
and troubleshoot issues faster, making it easier to identify patterns, spot bugs, and make
sure they don’t come back to bite you!
In this article, we’ll refer to Docker logging in terms of container logging, meaning logs that
are generated by containers. These logs are specific to Docker and are stored on the
Docker host. Later on, we’ll check out Docker daemon logs as well. These are the logs that
are generated by Docker itself. You will need those to debug errors in the Docker engine.
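The daemon logs live outside of docker logs. On a systemd-based host they can usually be read with journalctl (a minimal sketch; the unit name and log location may differ on other init systems):

journalctl -u docker.service --since "1 hour ago"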

Lab 8.1: Configuring Logging Driver

Prerequisites

Before starting the lab, run the command lab login doops to log in to your account. The login credentials are the same as the credentials registered on this platform (Adinusa).

lab login doops

After logging in, run the command lab start do-008-1. This command is used to prepare the lab environment.

lab start do-008-1

Execute on node pod-[username]-node01

1. Create the file daemon.json.

vim /etc/docker/daemon.json
...
{
"log-driver": "json-file",
"log-opts": {
"max-size": "10m",
"max-file": "100"
}
}
...
2. Reload the daemon and restart Docker.

systemctl daemon-reload
systemctl restart docker

3. Run a container.

docker run --log-driver json-file --log-opt max-size=10m alpine echo hello world

4. Check the log in the Docker directory.

cat /var/lib/docker/containers/CONTAINER/xxx-json.log

Note: The container ID can be obtained from the docker ps -a command.

Health Check

A health check is used to determine whether the application is ready and able to perform its function.
A health check command can be specified in two ways:

 With a HEALTHCHECK instruction when defining the image
 On the command line when running a container

The HEALTHCHECK instruction has two forms:

 HEALTHCHECK [OPTIONS] CMD command (check container health by running a command inside the container)
 HEALTHCHECK NONE (disable any healthcheck inherited from the base image)

The HEALTHCHECK instruction tells Docker how to test a container to check that it is still
working. This can detect cases such as a web server that is stuck in an infinite loop and
unable to handle new connections, even though the server process is still running.
When a container has a healthcheck specified, it has a health status in addition to its normal
status. This status is initially starting. Whenever a health check passes, it becomes healthy
(whatever state it was previously in). After a certain number of consecutive failures, it
becomes unhealthy.

The options that can appear before CMD are:

 --interval=DURATION (default: 30s)
 --timeout=DURATION (default: 30s)
 --start-period=DURATION (default: 0s)
 --retries=N (default: 3)

The health check will first run interval seconds after the container is started, and then again
interval seconds after each previous check completes.

If a single run of the check takes longer than timeout seconds then the check is considered
to have failed.

It takes retries consecutive failures of the health check for the container to be considered
unhealthy.

start period provides initialization time for containers that need time to bootstrap. Probe
failure during that period will not be counted towards the maximum number of retries.
However, if a health check succeeds during the start period, the container is considered
started and all consecutive failures will be counted towards the maximum number of retries.

There can only be one HEALTHCHECK instruction in a Dockerfile. If you list more than one
then only the last HEALTHCHECK will take effect.

The command after the CMD keyword can be either a shell command (e.g.
HEALTHCHECK CMD /bin/check-running) or an exec array (as with other Dockerfile
commands; see e.g. ENTRYPOINT for details).

The command’s exit status indicates the health status of the container. The possible values
are:
 0: success - the container is healthy and ready for use
 1: unhealthy - the container is not working correctly
 2: reserved - do not use this exit code

For example, to check every five minutes or so that a web-server is able to serve the site’s
main page within three seconds:

HEALTHCHECK --interval=5m --timeout=3s \
  CMD curl -f http://localhost/ || exit 1

To help debug failing probes, any output text (UTF-8 encoded) that the command writes on
stdout or stderr will be stored in the health status and can be queried with docker inspect.
Such output should be kept short (only the first 4096 bytes are stored currently).

When the health status of a container changes, a health_status event is generated with the new status.
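These events can be watched from the CLI, for example (a minimal sketch):

docker events --filter 'event=health_status'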

Health Check Command

Health Check Command when Defining the Image

Health Check Command when Running a Container
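As an illustration of the command-line form, the docker run options mirror the Dockerfile instruction. A sketch (the image name my-web and its endpoint are assumptions):

docker run -d --name web \
  --health-cmd='curl -f http://localhost/ || exit 1' \
  --health-interval=30s \
  --health-timeout=3s \
  --health-retries=3 \
  my-web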

Lab 9.1: Health Check


Prerequisites

Before starting the lab, run the command lab login doops to log in to your account. The login credentials are the same as the credentials registered on this platform (Adinusa).

lab login doops

After logging in, run the command lab start do-009-1. This command is used to prepare the lab environment.

lab start do-009-1

Execute on node pod-[username]-node01

Health Check Exercise 01

1. Create a directory.

cd $HOME
mkdir hc-latihan01
cd hc-latihan01

2. Create a Dockerfile.

vim Dockerfile
...
FROM katacoda/docker-http-server:health
HEALTHCHECK --interval=1s --retries=3 \
CMD curl --fail http://localhost:80/ || exit 1
...

3. Build the image.

docker build -t http-healthcheck .

4. Run the image.

docker run -d -p 80:80 --name http-healthcheck http-healthcheck

5. Check the container status.

docker ps
# check the STATUS column

6. Check with curl.

curl http://localhost/
# the page should be accessible

7. Trigger the unhealthy state.

curl http://localhost/unhealthy

8. Check the container status.

docker container ls

9. Check with curl.

curl http://localhost/

Health Check Exercise 02

1. Create a new directory.

cd $HOME
mkdir hc-latihan02
cd hc-latihan02

2. Create the server.js file.

vim server.js
...
"use strict";
const http = require('http');

function createServer () {
return http.createServer(function (req, res) {
res.writeHead(200, {'Content-Type': 'text/plain'});
res.end('OK\n');
}).listen(8080);
}

let server = createServer();


http.createServer(function (req, res) {
res.writeHead(200, {'Content-Type': 'text/plain'});
if (server) {
server.close();
server = null;
res.end('Shutting down...\n');
} else {
server = createServer();
res.end('Starting up...\n');
}
}).listen(8081);
...

3. Create a Dockerfile.

vim Dockerfile
...
FROM node
COPY server.js /
EXPOSE 8080 8081
HEALTHCHECK --interval=5s --timeout=10s --retries=3 CMD curl -sS 127.0.0.1:8080 || exit 1
CMD ["node","server.js"]
...

4. Build the image.

docker build -t node-server .

5. Run the image.

docker run -d --name nodeserver -p 8080:8080 -p 8081:8081 node-server

6. Check the container (healthy state).

curl 127.0.0.1:8080
docker ps
docker inspect --format "{{ json .State.Health }}" nodeserver

7. Shut the server down via port 8081 and check the health status.

curl 127.0.0.1:8081
docker ps
docker inspect --format "{{ json .State.Health }}" nodeserver

8. Start the server again via port 8081 and check the health status.

curl 127.0.0.1:8081
docker ps
docker inspect --format "{{ json .State.Health }}" nodeserver

Security

There are four major areas to consider when reviewing Docker security:

 the intrinsic security of the kernel and its support for namespaces and cgroups;
 the attack surface of the Docker daemon itself;
 loopholes in the container configuration profile, either by default, or when customized by users;
 the “hardening” security features of the kernel and how they interact with containers.

Docker Security

Kernel namespaces

Docker containers are very similar to LXC containers, and they have similar security
features. When you start a container with docker run, behind the scenes Docker creates a
set of namespaces and control groups for the container.

Namespaces provide the first and most straightforward form of isolation: processes running within a container cannot see, and even less affect, processes running in another container, or in the host system.

Each container also gets its own network stack, meaning that a container doesn’t get
privileged access to the sockets or interfaces of another container. Of course, if the host
system is set up accordingly, containers can interact with each other through their respective
network interfaces — just like they can interact with external hosts. When you specify public
ports for your containers or use links then IP traffic is allowed between containers. They can
ping each other, send/receive UDP packets, and establish TCP connections, but that can
be restricted if necessary. From a network architecture point of view, all containers on a
given Docker host are sitting on bridge interfaces. This means that they are just like
physical machines connected through a common Ethernet switch; no more, no less.

How mature is the code providing kernel namespaces and private networking? Kernel
namespaces were introduced between kernel version 2.6.15 and 2.6.26. This means that
since July 2008 (date of the 2.6.26 release), namespace code has been exercised and
scrutinized on a large number of production systems. And there is more: the design and
inspiration for the namespaces code are even older. Namespaces are actually an effort to
reimplement the features of OpenVZ in such a way that they could be merged within the
mainstream kernel. And OpenVZ was initially released in 2005, so both the design and the
implementation are pretty mature.

Control groups

Control Groups are another key component of Linux Containers. They implement resource
accounting and limiting. They provide many useful metrics, but they also help ensure that
each container gets its fair share of memory, CPU, disk I/O; and, more importantly, that a
single container cannot bring the system down by exhausting one of those resources.

So while they do not play a role in preventing one container from accessing or affecting the
data and processes of another container, they are essential to fend off some denial-of-
service attacks. They are particularly important on multi-tenant platforms, like public and
private PaaS, to guarantee a consistent uptime (and performance) even when some
applications start to misbehave.

Control Groups have been around for a while as well: the code was started in 2006, and
initially merged in kernel 2.6.24.
Docker daemon attack surface

Running containers (and applications) with Docker implies running the Docker daemon.
This daemon requires root privileges unless you opt-in to Rootless mode, and you should
therefore be aware of some important details.

First of all, only trusted users should be allowed to control your Docker daemon. This
is a direct consequence of some powerful Docker features. Specifically, Docker allows you
to share a directory between the Docker host and a guest container; and it allows you to do
so without limiting the access rights of the container. This means that you can start a
container where the /host directory is the / directory on your host; and the container can
alter your host filesystem without any restriction. This is similar to how virtualization systems
allow filesystem resource sharing. Nothing prevents you from sharing your root filesystem
(or even your root block device) with a virtual machine.

This has a strong security implication: for example, if you instrument Docker from a web
server to provision containers through an API, you should be even more careful than usual
with parameter checking, to make sure that a malicious user cannot pass crafted
parameters causing Docker to create arbitrary containers.

For this reason, the REST API endpoint (used by the Docker CLI to communicate with the
Docker daemon) changed in Docker 0.5.2, and now uses a UNIX socket instead of a TCP
socket bound on 127.0.0.1 (the latter being prone to cross-site request forgery attacks if you
happen to run Docker directly on your local machine, outside of a VM). You can then use
traditional UNIX permission checks to limit access to the control socket.

You can also expose the REST API over HTTP if you explicitly decide to do so. However, if
you do that, be aware of the above-mentioned security implications. Note that even if you have a firewall to limit access to the REST API endpoint from other hosts in the network, the endpoint can still be accessible from containers, and this can easily result in privilege escalation. Therefore it is mandatory to secure API endpoints with HTTPS and certificates.
It is also recommended to ensure that it is reachable only from a trusted network or VPN.
You can also use DOCKER_HOST=ssh://USER@HOST or ssh -L
/path/to/docker.sock:/var/run/docker.sock instead if you prefer SSH over TLS.
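For example, a sketch of the SSH approach (assuming the remote user admin on docker-host can access the Docker socket there):

DOCKER_HOST=ssh://admin@docker-host docker ps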

The daemon is also potentially vulnerable to other inputs, such as image loading from either
disk with docker load, or from the network with docker pull. As of Docker 1.3.2, images are
now extracted in a chrooted subprocess on Linux/Unix platforms, being the first step in a
wider effort toward privilege separation. As of Docker 1.10.0, all images are stored and
accessed by the cryptographic checksums of their contents, limiting the possibility of an
attacker causing a collision with an existing image.

Finally, if you run Docker on a server, it is recommended to run exclusively Docker on the
server, and move all other services within containers controlled by Docker. Of course, it is
fine to keep your favorite admin tools (probably at least an SSH server), as well as existing
monitoring/supervision processes, such as NRPE and collectd.

Linux kernel capabilities

By default, Docker starts containers with a restricted set of capabilities. What does that
mean?

Capabilities turn the binary “root/non-root” dichotomy into a fine-grained access control
system. Processes (like web servers) that just need to bind on a port below 1024 do not
need to run as root: they can just be granted the net_bind_service capability instead. And
there are many other capabilities, for almost all the specific areas where root privileges are
usually needed.

This means a lot for container security; let’s see why!

Typical servers run several processes as root, including the SSH daemon, cron daemon,
logging daemons, kernel modules, network configuration tools, and more. A container is
different, because almost all of those tasks are handled by the infrastructure around the
container:

 SSH access is typically managed by a single server running on the Docker host;
 cron, when necessary, should run as a user process, dedicated and tailored for the
app that needs its scheduling service, rather than as a platform-wide facility;
 log management is also typically handed to Docker, or to third-party services like
Loggly or Splunk;
 hardware management is irrelevant, meaning that you never need to run udevd or
equivalent daemons within containers;
 network management happens outside of the containers, enforcing separation of
concerns as much as possible, meaning that a container should never need to
perform ifconfig, route, or ip commands (except when a container is specifically
engineered to behave like a router or firewall, of course).

This means that in most cases, containers do not need “real” root privileges at all. And
therefore, containers can run with a reduced capability set; meaning that “root” within a
container has much less privileges than the real “root”. For instance, it is possible to:

 deny all “mount” operations;
 deny access to raw sockets (to prevent packet spoofing);
 deny access to some filesystem operations, like creating new device nodes,
changing the owner of files, or altering attributes (including the immutable flag);
 deny module loading;
 and many others.

This means that even if an intruder manages to escalate to root within a container, it is
much harder to do serious damage, or to escalate to the host.

This doesn’t affect regular web apps, but reduces the vectors of attack by malicious users
considerably. By default Docker drops all capabilities except those needed, an allowlist
instead of a denylist approach. You can see a full list of available capabilities in Linux
manpages.

One primary risk with running Docker containers is that the default set of capabilities and
mounts given to a container may provide incomplete isolation, either independently, or
when used in combination with kernel vulnerabilities.

Docker supports the addition and removal of capabilities, allowing use of a non-default
profile. This may make Docker more secure through capability removal, or less secure
through the addition of capabilities. The best practice for users would be to remove all
capabilities except those explicitly required for their processes.
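As a sketch of that practice, the flags below drop everything and add back only what the process needs (the exact capability set an image requires varies; the ones shown are a common minimal set for a web server that binds to port 80):

# drop all capabilities, then re-add only the ones the server needs
docker run -d -p 80:80 \
  --cap-drop ALL \
  --cap-add CHOWN --cap-add SETGID --cap-add SETUID --cap-add NET_BIND_SERVICE \
  nginx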
Docker Content Trust Signature Verification

The Docker Engine can be configured to only run signed images. The Docker Content Trust
signature verification feature is built directly into the dockerd binary. This is configured in the
Dockerd configuration file.

To enable this feature, trustpinning can be configured in daemon.json, whereby only repositories signed with a user-specified root key can be pulled and run.

This feature provides more insight to administrators than previously available with the CLI
for enforcing and performing image signature verification.

For more information on configuring Docker Content Trust Signature Verification, go to Content trust in Docker.

Other kernel security features

Capabilities are just one of the many security features provided by modern Linux kernels. It
is also possible to leverage existing, well-known systems like TOMOYO, AppArmor,
SELinux, GRSEC, etc. with Docker.

While Docker currently only enables capabilities, it doesn’t interfere with the other systems.
This means that there are many different ways to harden a Docker host. Here are a few
examples.

 You can run a kernel with GRSEC and PAX. This adds many safety checks, both at
compile-time and run-time; it also defeats many exploits, thanks to techniques like
address randomization. It doesn’t require Docker-specific configuration, since those
security features apply system-wide, independent of containers.
 If your distribution comes with security model templates for Docker containers, you
can use them out of the box. For instance, we ship a template that works with
AppArmor and Red Hat comes with SELinux policies for Docker. These templates
provide an extra safety net (even though it overlaps greatly with capabilities).
 You can define your own policies using your favorite access control mechanism. Just
as you can use third-party tools to augment Docker containers, including special
network topologies or shared filesystems, tools exist to harden Docker containers
without the need to modify Docker itself.
As of Docker 1.10 User Namespaces are supported directly by the docker daemon. This
feature allows the root user in a container to be mapped to a non-uid-0 user outside the
container, which can help to mitigate the risks of container breakout. This facility is available
but not enabled by default.
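A sketch of enabling it (assuming /etc/docker/daemon.json does not yet define other keys; note that after the restart, images and containers created without user namespaces are hidden until the setting is reverted):

# map the container's root user to a subordinate UID range on the host
cat <<'EOF' | sudo tee /etc/docker/daemon.json
{
  "userns-remap": "default"
}
EOF
sudo systemctl restart docker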

Conclusions

Docker containers are, by default, quite secure; especially if you run your processes as non-
privileged users inside the container.

You can add an extra layer of safety by enabling AppArmor, SELinux, GRSEC, or another
appropriate hardening system.

If you think of ways to make docker more secure, we welcome feature requests, pull
requests, or comments on the Docker community forums.

CIS Docker Benchmark

The Center for Internet Security (CIS) creates best practices for cyber security and defense.
The CIS uses crowdsourcing to define its security recommendations. The CIS Benchmarks
are among its most popular tools.

Organizations can use the CIS Benchmark for Docker to validate that their Docker
containers and the Docker runtime are configured as securely as possible. There are open
source and commercial tools that can automatically check your Docker environment against
the recommendations defined in the CIS Benchmark for Docker to identify insecure
configurations.

The CIS Benchmark for Docker provides a number of helpful configuration checks, but
organizations should think of them as a starting point and go beyond the CIS checks to
ensure best practices are applied. Setting resource constraints, reducing privileges, and
ensuring images run in read-only mode are a few examples of additional checks you’ll want
to run on your container files.
The latest benchmark for Docker (CIS Docker Benchmark v1.2.0) covers the following areas:

 Host configuration
 Docker daemon configuration
 Docker daemon configuration files
 Container images and build file
 Container runtime
 Operations
 Swarm configuration

Secure Computing Mode

Secure computing mode (seccomp) is a Linux kernel feature. You can use it to restrict the
actions available within the container. The seccomp() system call operates on the seccomp
state of the calling process. You can use this feature to restrict your application’s access.

This feature is available only if Docker has been built with seccomp and the kernel is
configured with CONFIG_SECCOMP enabled. To check if your kernel supports seccomp:

$ grep CONFIG_SECCOMP= /boot/config-$(uname -r)


CONFIG_SECCOMP=y

Note: seccomp profiles require seccomp 2.2.1 which is not available on Ubuntu 14.04,
Debian Wheezy, or Debian Jessie. To use seccomp on these distributions, you must
download the latest static Linux binaries (rather than packages).

Pass a profile for a container

The default seccomp profile provides a sane default for running containers with seccomp
and disables around 44 system calls out of 300+. It is moderately protective while providing
wide application compatibility. The default Docker profile can be found here.

In effect, the profile is a whitelist which denies access to system calls by default, then
whitelists specific system calls. The profile works by defining a defaultAction of
SCMP_ACT_ERRNO and overriding that action only for specific system calls. The effect of
SCMP_ACT_ERRNO is to cause a Permission Denied error. Next, the profile defines a
specific list of system calls which are fully allowed, because their action is overridden to be
SCMP_ACT_ALLOW. Finally, some specific rules are for individual system calls such as
personality, and others, to allow variants of those system calls with specific arguments.

seccomp is instrumental for running Docker containers with least privilege. It is not
recommended to change the default seccomp profile.

When you run a container, it uses the default profile unless you override it with the
--security-opt option. For example, the following explicitly specifies a policy:

$ docker run --rm \
    -it \
    --security-opt seccomp=/path/to/seccomp/profile.json \
    hello-world

More: Seccomp security profiles for Docker

Secret

“A secret is a blob of data that should not be transmitted over a network or stored unencrypted in a Dockerfile or in your application’s source code.”

Data such as:

 Usernames and passwords
 SSH keys
 Other important data such as the name of a database or internal server
 Generic strings or binary content (up to 500 kb in size)

In terms of Docker Swarm services, a secret is a blob of data, such as a password, SSH
private key, SSL certificate, or another piece of data that should not be transmitted over a
network or stored unencrypted in a Dockerfile or in your application’s source code. You can
use Docker secrets to centrally manage this data and securely transmit it to only those
containers that need access to it. Secrets are encrypted during transit and at rest in a
Docker swarm. A given secret is only accessible to those services which have been granted
explicit access to it, and only while those service tasks are running.
Note: Docker secrets are only available to swarm services, not to standalone containers. To
use this feature, consider adapting your container to run as a service. Stateful containers
can typically run with a scale of 1 without changing the container code.

Another use case for using secrets is to provide a layer of abstraction between the
container and a set of credentials. Consider a scenario where you have separate
development, test, and production environments for your application. Each of these
environments can have different credentials, stored in the development, test, and
production swarms with the same secret name. Your containers only need to know the
name of the secret to function in all three environments.

You can also use secrets to manage non-sensitive data, such as configuration files.
However, Docker supports the use of configs for storing non-sensitive data. Configs are
mounted into the container’s filesystem directly, without the use of a RAM disk.

How Docker manages secrets

When you add a secret to the swarm, Docker sends the secret to the swarm manager over
a mutual TLS connection. The secret is stored in the Raft log, which is encrypted. The entire
Raft log is replicated across the other managers, ensuring the same high availability
guarantees for secrets as for the rest of the swarm management data.

When you grant a newly-created or running service access to a secret, the decrypted secret
is mounted into the container in an in-memory filesystem. The location of the mount point
within the container defaults to /run/secrets/<secret_name> in Linux containers,
or C:\ProgramData\Docker\secrets in Windows containers. You can also specify a
custom location.

You can update a service to grant it access to additional secrets or revoke its access to a
given secret at any time.

A node only has access to (encrypted) secrets if the node is a swarm manager or if it is
running service tasks which have been granted access to the secret. When a container task
stops running, the decrypted secrets shared to it are unmounted from the in-memory
filesystem for that container and flushed from the node’s memory.

If a node loses connectivity to the swarm while it is running a task container with access to a
secret, the task container still has access to its secrets, but cannot receive updates until the
node reconnects to the swarm.

You can add or inspect an individual secret at any time, or list all secrets. You cannot
remove a secret that a running service is using. See Rotate a secret for a way to remove a
secret without disrupting running services.

To update or roll back secrets more easily, consider adding a version number or date to the
secret name. This is made easier by the ability to control the mount point of the secret
within a given container.
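As a sketch of the workflow on an initialized swarm (the names db_password and db are examples only):

# Create a secret from stdin, then grant a service access to it
echo "s3cr3t-value" | docker secret create db_password -
docker service create --name db --secret db_password redis:alpine
# Inside the service's tasks the value is readable at /run/secrets/db_password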
Secret in Compose
Storage Driver

To use storage drivers effectively, it’s important to know how Docker builds and stores
images, and how these images are used by containers. You can use this information to
make informed choices about the best way to persist data from your applications and avoid
performance problems along the way.
Storage drivers allow you to create data in the writable layer of your container. The files
won’t be persisted after the container is deleted, and both read and write speeds are lower
than native file system performance.
Note: Operations that are known to be problematic include write-intensive database
storage, particularly when pre-existing data exists in the read-only layer. More details are
provided in this document.

Docker Storage Drivers

Ideally, very little data is written to a container’s writable layer, and you use Docker volumes
to write data. However, some workloads require you to be able to write to the container’s
writable layer. This is where storage drivers come in.

Docker supports several different storage drivers, using a pluggable architecture. The
storage driver controls how images and containers are stored and managed on your Docker
host.

After you have read the storage driver overview, the next step is to choose the best storage
driver for your workloads. In making this decision, there are three high-level factors to
consider:

If multiple storage drivers are supported in your kernel, Docker has a prioritized list of which
storage driver to use if no storage driver is explicitly configured, assuming that the storage
driver meets the prerequisites.
Use the storage driver with the best overall performance and stability in the most usual
scenarios.

Docker supports the following storage drivers:

 overlay2 is the preferred storage driver, for all currently supported Linux
distributions, and requires no extra configuration.
 aufs was the preferred storage driver for Docker 18.06 and older, when running on
Ubuntu 14.04 on kernel 3.13 which had no support for overlay2.
 fuse-overlayfs is preferred only for running Rootless Docker on a host that does not provide support for rootless overlay2. On Ubuntu and Debian 10, the fuse-overlayfs driver is not needed; overlay2 works even in rootless mode. See the Rootless mode documentation.
 devicemapper is supported, but requires direct-lvm for production environments,
because loopback-lvm, while zero-configuration, has very poor performance.
devicemapper was the recommended storage driver for CentOS and RHEL, as their
kernel version did not support overlay2. However, current versions of CentOS and
RHEL now have support for overlay2, which is now the recommended driver.
 The btrfs and zfs storage drivers are used if they are the backing filesystem (the
filesystem of the host on which Docker is installed). These filesystems allow for
advanced options, such as creating “snapshots”, but require more maintenance and
setup. Each of these relies on the backing filesystem being configured correctly.
 The vfs storage driver is intended for testing purposes, and for situations where no
copy-on-write filesystem can be used. Performance of this storage driver is poor, and
is not generally recommended for production use.

Docker’s source code defines the selection order. You can see the order at the source code
for Docker Engine 20.10

If you run a different version of Docker, you can use the branch selector at the top of the file
viewer to choose a different branch.

Some storage drivers require you to use a specific format for the backing filesystem. If you
have external requirements to use a specific backing filesystem, this may limit your choices.
See Supported backing filesystems.

After you have narrowed down which storage drivers you can choose from, your choice is
determined by the characteristics of your workload and the level of stability you need. See
Other considerations for help in making the final decision.
NOTE: Your choice may be limited by your operating system and distribution. For instance,
aufs is only supported on Ubuntu and Debian, and may require extra packages to be
installed, while btrfs is only supported on SLES, which is only supported with Docker
Enterprise. See Supported storage drivers per Linux distribution for more information.

Supported storage drivers per Linux distribution

At a high level, the storage drivers you can use are partially determined by the Docker edition
you use.

In addition, Docker does not recommend any configuration that requires you to disable
security features of your operating system, such as the need to disable selinux if you use
the overlay or overlay2 driver on CentOS.

Docker Engine - Community

For Docker Engine - Community, only some configurations are tested, and your operating
system’s kernel may not support every storage driver. In general, the following
configurations work on recent versions of the Linux distribution:

¹) The overlay storage driver is deprecated, and will be removed in a future release. It is
recommended that users of the overlay storage driver migrate to overlay2.
²) The devicemapper storage driver is deprecated, and will be removed in a future release.
It is recommended that users of the devicemapper storage driver migrate to overlay2.

Note The comparison table above is not applicable for Rootless mode. For the drivers
available in Rootless mode, see the Rootless mode documentation.

When possible, overlay2 is the recommended storage driver. When installing Docker for the
first time, overlay2 is used by default. Previously, aufs was used by default when available,
but this is no longer the case. If you want to use aufs on new installations going forward,
you need to explicitly configure it, and you may need to install extra packages, such as
linux-image-extra. See aufs.

On existing installations using aufs, it is still used.

When in doubt, the best all-around configuration is to use a modern Linux distribution with a
kernel that supports the overlay2 storage driver, and to use Docker volumes for write-heavy
workloads instead of relying on writing data to the container’s writable layer.
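A sketch of pinning that choice explicitly (assuming a fresh host with no existing /etc/docker/daemon.json; switching drivers hides images and containers created under the previous driver):

# select overlay2 explicitly and verify the active driver
cat <<'EOF' | sudo tee /etc/docker/daemon.json
{
  "storage-driver": "overlay2"
}
EOF
sudo systemctl restart docker
docker info --format '{{ .Driver }}'   # should print: overlay2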

The vfs storage driver is usually not the best choice. Before using the vfs storage driver, be
sure to read about its performance and storage characteristics and limitations.

Expectations for non-recommended storage drivers: Commercial support is not available for Docker Engine - Community, and you can technically use any storage driver that is available for your platform. For instance, you can use btrfs with Docker Engine - Community, even though it is not recommended on any platform for Docker Engine - Community, and you do so at your own risk.

The recommendations in the table above are based on automated regression testing and
the configurations that are known to work for a large number of users. If you use a
recommended configuration and find a reproducible issue, it is likely to be fixed very quickly.
If the driver that you want to use is not recommended according to this table, you can run it
at your own risk. You can and should still report any issues you run into. However, such
issues have a lower priority than issues encountered when using a recommended
configuration.
Docker Desktop for Mac and Docker Desktop for Windows

Docker Desktop for Mac and Docker Desktop for Windows are intended for development,
rather than production. Modifying the storage driver on these platforms is not possible.

Supported backing filesystems

With regard to Docker, the backing filesystem is the filesystem where /var/lib/docker/ is located. Some storage drivers only work with specific backing filesystems.

Check your current storage driver

The detailed documentation for each individual storage driver details all of the set-up steps
to use a given storage driver.

To see what storage driver Docker is currently using, use docker info and look for the
Storage Driver line:

$ docker info

Containers: 0
Images: 0
Storage Driver: overlay2
Backing Filesystem: xfs
<...>

Lab 11.1: Configuring Storage Driver

Prerequisites

Before starting the lab, run the command lab login doops to log in to your account. The login credentials are the same as the credentials registered on this platform (Adinusa).

lab login doops

After logging in, run the command lab start do-011-1. This command is used to prepare the lab environment.

lab start do-011-1

Execute on node pod-[username]-node01

1. Edit the daemon.json file.

vim /etc/docker/daemon.json
...
{
"storage-driver": "vfs"
}
...

2. Restart the Docker service.

sudo service docker restart

3. Check with docker info.

docker info

4. Check with docker pull.


docker pull ubuntu

5. Check the Docker directory.

ls -al /var/lib/docker/vfs/dir/
du -sh /var/lib/docker/vfs/dir/
