Lesson 1 Docker Introduction and Installation
Docker is an open-source application container engine based on the Go
language and licensed under Apache License 2.0.
Docker enables developers to package their applications and
dependencies into lightweight and portable containers, which can be deployed
to popular Linux machines and virtualized environments.
Containers are fully isolated from one another using sandboxing mechanisms, with no interfaces between them (similar to apps on an iPhone). More importantly, containers incur extremely low performance overhead.
Docker Logo
1. Docker Introduction
Docker is a virtualization technology at the operating-system level. Because each isolated process is independent of the host's processes and of other isolated processes, Docker is also called a container technology. Its initial implementation was based on LXC.
Docker automates repetitive tasks, such as setting up and configuring development environments, allowing developers to focus on what really matters: building excellent software.
You can easily create and use containers to run your applications. Containers can also be managed, replicated, shared, and versioned, just like ordinary code.
The image shown above is the Docker logo, which features a whale carrying containers on its back. The containers are isolated from each other, representing the core concept of Docker.
For example, multiple applications run on the same server, which may
cause conflicts in software port occupation. However, with Docker's isolation
feature, each application can run independently. Additionally, Docker can
maximize the utilization of server resources.
1.4 Virtual Machines vs. Docker
The Docker daemon communicates directly with the host operating system to allocate resources to each Docker container. It can also isolate the containers from the host operating system and from each other. Virtual machines
take several minutes to start, while the Docker containers can start within a few
milliseconds. Since Docker does not carry a full guest operating system, it saves a significant amount of disk space and other system resources.
Virtual machines specialize in thoroughly isolating entire running environments. For example, cloud service providers usually adopt virtual machine technology to isolate different users, whereas Docker is widely used to isolate different applications, such as the front end, back end, and database.
Docker containers are more resource-efficient and faster (in starting,
closing, creating, and deleting) than virtual machines.
Consistent Runtime Environment
Docker images provide a complete, reproducible runtime environment, which helps avoid problems such as "This code works fine on my machine".
Faster Start Time
Docker containers can start within seconds or even milliseconds, greatly reducing development, testing, and deployment time.
Isolation
Docker’s isolation prevents resources from being affected by other users
on shared servers.
Elastic Scaling and Rapid Expansion
Docker makes it easy to scale services up and down, helping servers cope with concentrated bursts of load.
Transfer Convenience
Docker can easily move applications running on one platform to another without worrying about application malfunctions caused by changes in the runtime environment.
Continuous Delivery and Deployment
Docker can use customized application images to realize continuous
integration, delivery, and deployment.
1.7 Three Basic Concepts of Docker
1.7.1 Image --- A Special File System
An operating system consists of a kernel and a user space. In Linux, after the kernel starts, it mounts a root file system to provide support for the user space. A Docker image is essentially such a root file system.
A Docker image is a special file system, and it not only provides necessary
files, such as programs, libraries, resources, and configurations, for container
operation, but also includes some configuration parameters (such as
anonymous volumes, environment variables, and users) to prepare for the
runtime. The image does not contain any dynamic data, and its content
remains unchanged after construction.
Docker's design makes full use of Union FS technology to implement a layered storage architecture: an image is composed of multiple file-system layers united together.
The Docker image is constructed layer by layer with the former layer
serving as the basis for the latter layer. Once a layer is constructed, it remains
unchanged, and any changes made in subsequent layers only affect that
specific layer itself. For example, when a file from a previous layer is deleted in a later layer, the file is not actually removed; it is only marked as deleted in the current layer. When the container runs, the file is not visible, but it is still present in the image. Therefore, when constructing a Docker image, operate with caution: each layer should include only the necessary files and configurations, and any unnecessary files should be cleaned up before the layer's construction is completed.
The layered storage design makes it easier to reuse and customize
images. You can use a previously constructed image as a base layer to add
new layers to customize the content and construct a new image according to
your specific needs.
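As a sketch of how layering affects images, consider the hypothetical Dockerfile below (the base image and package are just examples). Because each layer is immutable once built, cleanup only shrinks the image if it happens in the same layer as the files it removes:

```dockerfile
# Hypothetical example of layer-conscious image construction.
FROM debian:bookworm-slim

# Download, install, and clean up within ONE RUN instruction, so the
# apt package lists never end up stored in any layer of the image.
RUN apt-get update && \
    apt-get install -y --no-install-recommends curl && \
    rm -rf /var/lib/apt/lists/*

# By contrast, a separate "RUN rm -rf /var/lib/apt/lists/*" after the
# install would NOT shrink the image: the files would merely be marked
# as deleted in a new layer while remaining in the layer below.
```

The same reuse principle applies to the `FROM` line: the `debian:bookworm-slim` layers are shared by every image built on top of them.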
1.7.2 Container --- Entity of Image Runtime
The relationship between images and containers is similar to that between
classes and instances in object-oriented programming. An image is a static
definition, and a container is the entity that runs the image and can be created,
started, stopped, deleted, paused, and more.
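The lifecycle operations above map directly onto Docker CLI commands. A minimal sketch, assuming a Docker daemon is running (the container name `demo` and the `nginx` image are illustrative):

```shell
docker create --name demo nginx   # create a container from an image
docker start demo                 # start the created container
docker pause demo                 # pause its processes
docker unpause demo               # resume them
docker stop demo                  # stop the container
docker rm demo                    # delete it (its storage layer is removed too)
```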
The essence of a container is a process, but unlike a process executed directly on the host, the container process runs in its own independent namespaces. As mentioned earlier, images and containers both use layered storage.
The lifecycle of a container storage layer is the same as that of the
container itself. When a container is deleted, its storage layer is also deleted,
resulting in the loss of any information stored in the storage layer.
According to Docker best practices, containers should not write any data to their storage layer; the storage layer should remain stateless. All file-writing operations should use data volumes or bind-mount a host directory. This approach bypasses the storage layer and reads and writes directly to the host (or to network storage), resulting in higher performance and stability. Data volumes
network storage), resulting in higher performance and stability. Data volumes
have an independent lifecycle from containers. The data will not be lost even if
the container is deleted. Therefore, using data volumes allows containers to be
deleted and re-run without losing any data.
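A minimal sketch of this behavior, assuming a running Docker daemon (the volume name `mydata` and the `alpine` image are illustrative): data written to a named volume survives even when every container that used it has been deleted.

```shell
docker volume create mydata                                       # volume with its own lifecycle
docker run --rm -v mydata:/data alpine sh -c 'echo hello > /data/msg'
# The container above is gone (--rm), but the volume's data persists:
docker run --rm -v mydata:/data alpine cat /data/msg
```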
1.7.3 Repository --- The Place for Centralized Storage of Image Files
Once an image is constructed, it can easily run on the current host.
However, if the image needs to be used on other servers, a centralized service for storing and distributing images is required, such as a Docker Registry.
A Docker Registry can contain multiple repositories. Each repository
contains multiple tags and each tag corresponds to an image. Therefore, an
Image Repository is the place for Docker to centrally store image files, which is
similar to the commonly used code repository.
Generally, a repository includes images of different versions of the same
software, and tags are usually used to correspond to each version. The
The "<Repository Name>:<Tag>" format can be used to specify which version of the software's image to use. If a tag is not provided, "latest" is used as the default tag.
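For example, the `ubuntu` repository on Docker Hub can be pulled either with an explicit tag or with the implicit default:

```shell
docker pull ubuntu:22.04   # explicit tag: version 22.04 from the ubuntu repository
docker pull ubuntu         # no tag given, equivalent to ubuntu:latest
docker image ls ubuntu     # list the locally stored ubuntu images and their tags
```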
docker run
Like an executable program, an image can be run with the "docker run" command once it has been obtained. The Docker daemon receives the command, locates the specified image, loads it into memory, and executes it.
The running instance of the image is called a container.
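As a sketch, assuming a running Docker daemon (the `nginx` image, the name `web`, and the port numbers are illustrative):

```shell
# Run an nginx container in the background, mapping host port 8080
# to the container's port 80.
docker run -d --name web -p 8080:80 nginx
docker ps                     # the running instance (container) appears here
docker stop web && docker rm web   # clean up when finished
```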
docker pull
Users send a command using the Docker client, and the Docker daemon
receives the command to send a request to the Docker Registry to download
the image. Once downloaded, the image is stored locally and can be used.
2. Docker Installation
Official Installation Reference Manual: Install Docker Engine on Debian |
Docker Docs
Note: The Raspberry Pi image already has Docker installed when you receive the package. The following steps are for learning and reference only.
Let’s use the Aliyun source for downloading.
1) Press “Ctrl+Alt+T” to open the command line terminal, type “sudo
apt-get update”, and press “Enter” to update the apt package list.
3) Enter the command “sudo install -m 0755 -d /etc/apt/keyrings” to
create a directory for storing the GPG keyring.
6) Enter “sudo apt-get update” to update the apt software package list.
8) Enter “sudo groupadd docker” to create a Docker user group.
9) Enter “sudo gpasswd -a $USER docker” to add the current user to the
Docker user group, which avoids having to enter “sudo” every time you run a
Docker command.
11) After the installation is complete, enter "docker version" to check the Docker version.
12) Enter "docker run hello-world". If the message shown below appears, Docker has been installed successfully.
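For reference, the numbered steps above (including those whose commands appear only in the screenshots) roughly follow the official "Install Docker Engine on Debian" manual. The sketch below is an assumption based on that manual and on the text's mention of the Aliyun source; the Aliyun mirror URL in particular should be verified against the official instructions before use:

```shell
sudo apt-get update
sudo apt-get install ca-certificates curl
sudo install -m 0755 -d /etc/apt/keyrings
# Assumed Aliyun mirror of the Docker GPG key and apt repository:
sudo curl -fsSL https://mirrors.aliyun.com/docker-ce/linux/debian/gpg \
    -o /etc/apt/keyrings/docker.asc
sudo chmod a+r /etc/apt/keyrings/docker.asc
echo "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.asc] \
  https://mirrors.aliyun.com/docker-ce/linux/debian \
  $(. /etc/os-release && echo "$VERSION_CODENAME") stable" | \
  sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
sudo apt-get update
sudo apt-get install docker-ce docker-ce-cli containerd.io \
    docker-buildx-plugin docker-compose-plugin
sudo groupadd docker                 # step 8
sudo gpasswd -a $USER docker         # step 9
docker version                       # step 11
docker run hello-world               # step 12
```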