Containers and Docker: Fast, Consistent Delivery of Your Applications
INTRODUCTION
Docker is a set of platform-as-a-service (PaaS) products that use OS-level virtualization to deliver software in
packages called containers. Containers are isolated from one another and bundle their own software, libraries,
and configuration files; they can communicate with each other through well-defined channels. Because all
containers on a host are run by a single operating-system kernel, they are more lightweight than virtual
machines. The software that hosts the containers is called Docker Engine. Docker was first released in 2013 and
is developed by Docker, Inc.; the service has both free and premium tiers.
APPLICATIONS
Docker streamlines the development lifecycle by allowing developers to work in standardized environments using
local containers which provide your applications and services. Containers are great for continuous integration and
continuous delivery (CI/CD) workflows.
Your developers write code locally and share their work with their colleagues using Docker containers.
They use Docker to push their applications into a test environment and execute automated and manual
tests.
When developers find bugs, they can fix them in the development environment and redeploy the application to
the test environment for testing and validation.
When testing is complete, getting the fix to the customer is as simple as pushing the updated image to the
production environment.
Docker’s container-based platform allows for highly portable workloads. Docker containers can run on a
developer’s local laptop, on physical or virtual machines in a data center, on cloud providers, or in a mixture of
environments. Docker’s portability and lightweight nature also make it easy to dynamically manage workloads,
scaling up or tearing down applications and services as business needs dictate, in near real time.
Docker is lightweight and fast. It provides a viable, cost-effective alternative to hypervisor-based virtual machines,
so you can use more of your compute capacity to achieve your business goals. Docker is perfect for high-density
environments and for small and medium deployments where you need to do more with fewer resources.
DOCKER ENGINE
Docker Engine is a client-server application with three major components:
Server: a long-running daemon process (the dockerd command).
REST API: specifies interfaces that programs can use to talk to the daemon and instruct it what to do.
Command line interface (CLI) client: the docker command.
The CLI uses the Docker REST API to control or interact with the Docker daemon through scripting or direct CLI
commands. Many other Docker applications use the underlying API and CLI.
The daemon creates and manages Docker objects, such as images, containers, networks, and volumes.
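To make the CLI-to-daemon relationship concrete, the sketch below builds the kind of raw HTTP request line a client sends to the Engine REST API. It is illustrative only: the API version prefix (`v1.41`) is an assumption, and real clients negotiate the version with the daemon over the local socket.

```python
# Illustrative sketch of a Docker Engine REST API request line.
# Assumption: API version v1.41; real clients negotiate this with dockerd.

def engine_request(method, endpoint, api_version="v1.41"):
    """Build the raw HTTP request for a Docker Engine API call."""
    # The daemon typically listens on the unix socket /var/run/docker.sock,
    # so the Host header is a placeholder rather than a real network host.
    return (f"{method} /{api_version}{endpoint} HTTP/1.1\r\n"
            f"Host: docker\r\n\r\n")

# Listing containers (what `docker ps` does under the hood):
print(engine_request("GET", "/containers/json"))
```

Because the transport is plain HTTP over a unix socket, any tool that can speak HTTP to a socket can drive the daemon the same way the CLI does.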
DOCKER ARCHITECTURE
Docker uses a client-server architecture. The Docker client talks to the Docker daemon, which does the heavy
lifting of building, running, and distributing your Docker containers. The Docker client and daemon can run on the
same system, or you can connect a Docker client to a remote Docker daemon. The Docker client and daemon
communicate using a REST API, over UNIX sockets or a network interface.
The Docker daemon (dockerd) listens for Docker API requests and manages Docker objects such as images,
containers, networks, and volumes. A daemon can also communicate with other daemons to manage Docker
services.
The Docker client (docker) is the primary way that many Docker users interact with Docker. When you use
commands such as docker run, the client sends these commands to dockerd, which carries them out. The docker
command uses the Docker API. The Docker client can communicate with more than one daemon.
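A rough sketch of how familiar CLI commands translate into Engine API calls is shown below. The mapping is illustrative and not exhaustive; note that a single CLI command can fan out into several API requests.

```python
# Hedged, illustrative mapping from docker CLI commands to Engine API
# endpoints; not an exhaustive or authoritative list.
CLI_TO_API = {
    "docker ps":   ("GET",  "/containers/json"),
    "docker pull": ("POST", "/images/create"),
    # `docker run` is really two API calls: create the container, then start it.
    "docker run":  [("POST", "/containers/create"),
                    ("POST", "/containers/{id}/start")],
}
```

This is why the docs describe the daemon as doing the "heavy lifting": the client mostly translates commands into API requests and renders the responses.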
DOCKER REGISTRIES
A Docker registry stores Docker images. Docker Hub is a public registry that anyone can use, and Docker is
configured to look for images on Docker Hub by default. You can even run your own private registry. If you use
Docker Datacenter (DDC), it includes Docker Trusted Registry (DTR).
When you use the docker pull or docker run commands, the required images are pulled from your configured
registry. When you use the docker push command, your image is pushed to your configured registry.
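The "configured registry" is encoded in the image reference itself. The sketch below mirrors Docker's defaulting rules in simplified form: no registry host means Docker Hub, no tag means `latest`. The heuristic for spotting a registry host (a dot or port in the first path component) is a simplification of the real parser.

```python
def parse_image_reference(ref):
    """Split an image reference into (registry, repository, tag).

    Simplified sketch of Docker's defaulting rules: if no registry host
    is given, Docker Hub ("docker.io") is assumed; if no tag, "latest".
    """
    registry = "docker.io"  # Docker Hub, the default registry
    first, _, rest = ref.partition("/")
    # A leading component counts as a registry host only if it looks like
    # one (contains a dot or a port), e.g. "registry.example.com/app".
    if rest and ("." in first or ":" in first):
        registry, ref = first, rest
    repo, _, tag = ref.partition(":")
    return registry, repo, tag or "latest"
```

So `docker pull ubuntu` resolves against Docker Hub, while `docker pull registry.example.com/myapp:1.0` resolves against the private registry named in the reference.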
OBJECTS
Docker objects are various entities used to assemble an application in Docker. The main classes of Docker objects
are images, containers, and services.
TOOLS
COMPOSE
Docker Compose is a tool for defining and running multi-container Docker applications. It uses YAML files to
configure the application's services and performs the creation and start-up process of all the containers with a
single command. The docker-compose CLI utility allows users to run commands on multiple containers at once, for
example, building images, scaling containers, restarting stopped containers, and more. Commands related
to image manipulation, or user-interactive options, are not relevant in Docker Compose because they apply to a
single container. The docker-compose.yml file is used to define an application's services and includes various
configuration options. For example, the build option defines configuration options such as the Dockerfile path, the
command option allows one to override default Docker commands, and more. The first public beta version of
Docker Compose (version 0.0.1) was released on December 21, 2013. The first production-ready version (1.0) was
made available on October 16, 2014.
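A minimal docker-compose.yml along these lines might look as follows; the service names, ports, and image tag are illustrative assumptions, not part of any real project:

```yaml
version: "3.8"
services:
  web:
    build: .            # build option: Dockerfile in this directory
    ports:
      - "8000:8000"
    depends_on:
      - db
  db:
    image: postgres:15  # illustrative image tag
    volumes:
      - db-data:/var/lib/postgresql/data
volumes:
  db-data:
```

With this file in place, a single command (docker-compose up) creates and starts both containers.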
SWARM
Docker Swarm provides native clustering functionality for Docker containers, which turns a group of Docker
engines into a single virtual Docker engine. In Docker 1.12 and higher, Swarm mode is integrated with Docker
Engine. The swarm CLI utility allows users to run Swarm containers, create discovery tokens, list nodes in the
cluster, and more. The docker node CLI utility allows users to run various commands to manage nodes in a swarm,
for example, listing the nodes in a swarm, updating nodes, and removing nodes from the swarm. Docker manages
swarms using the Raft consensus algorithm. Under Raft, for an update to be performed, a majority of the
swarm's manager nodes must agree on it.
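The majority requirement can be sketched as simple arithmetic over the number of manager nodes (the function names here are illustrative, not part of any Docker API):

```python
def raft_quorum(manager_count):
    """Smallest majority of swarm managers that must agree on an update."""
    return manager_count // 2 + 1

def fault_tolerance(manager_count):
    """Managers that can fail while the swarm can still commit updates."""
    return (manager_count - 1) // 2

# A 3-manager swarm needs 2 managers to agree and tolerates 1 failure;
# a 5-manager swarm needs 3 and tolerates 2.
```

This is why odd manager counts are preferred: going from 3 to 4 managers raises the quorum without improving fault tolerance.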
KUBERNETES
DOCKER AND MICROSOFT
On October 15, 2014, Microsoft announced the integration of the Docker engine into the next Windows Server
release and native support for the Docker client role in Windows. On June 8, 2016, Microsoft announced that
Docker could now be used natively on Windows 10 with Hyper-V Containers, to build, ship, and run containers
utilizing the Windows Server 2016 Technical Preview 5 Nano Server container OS image.
Since then, a feature known as Windows Containers was made available for Windows 10 and Windows Server
2016. There are two types of Windows Containers: "Windows Server Containers" and "Hyper-V Isolation". The
former has nothing to do with Docker and falls outside the scope of this article. The latter, however, is a form of
hardware virtualization (as opposed to OS-level virtualization) that uses Docker to deliver the guest OS image. This
guest OS image is a Windows Nano Server image, which is 652 MB in size, with a separate end-user license
agreement.
On May 6, 2019, Microsoft announced the second version of Windows Subsystem for Linux (WSL). This version of
WSL integrates a full Linux kernel into Windows 10. As such, Docker, Inc. announced that it has started working on
a version of Docker for Windows that runs on WSL 2 instead of a full virtual machine.