Docker


Containers

Virtualization is a process that allows for more efficient utilization of physical computer
hardware.
Virtualization uses software to create an abstraction layer over computer hardware that
allows the hardware elements of a single computer—processors, memory, storage and
more—to be divided into multiple virtual computers, commonly called virtual machines
(VMs). Each VM runs its own operating system (OS) and behaves like an independent
computer, even though it is running on just a portion of the actual underlying computer
hardware.
Today, virtualization is a standard practice in enterprise IT architecture. It is also the
technology that drives cloud computing economics. Virtualization enables cloud providers to
serve users with their existing physical computer hardware; it enables cloud users to
purchase only the computing resources they need when they need them, and to scale those
resources cost-effectively as their workloads grow.
Benefits of virtualization
Virtualization brings several benefits to data centre operators and service providers:
 Resource efficiency: Server virtualization lets you run several applications—each on
its own VM with its own OS—on a single physical computer (typically an x86 server)
without sacrificing reliability. This enables maximum utilization of the physical
hardware’s computing capacity.
 Easier management: Automated deployment and configuration tools enable
administrators to define collections of virtual machines and applications as services,
in software templates.
 Minimal downtime: OS and application crashes can cause downtime and disrupt
user productivity. Admins can run multiple redundant virtual machines alongside
each other and failover between them when problems arise. Running multiple
redundant physical servers is more expensive.
 Faster provisioning: Buying, installing, and configuring hardware for each application
is time-consuming. Provided that the hardware is already in place, provisioning
virtual machines to run all your applications is significantly faster. You can even
automate it using management software and build it into existing workflows.
 Portability: VMs can be relocated as needed among the physical computers in a
network. This makes it possible to allocate workloads to servers that have spare
computing power. VMs can even move between on-premises and cloud
environments, making them useful for hybrid cloud scenarios in which you share
computing resources between your data center and a cloud service provider.
Types of virtualization
 Desktop virtualization
 Network virtualization
 Storage virtualization
 Data virtualization
 Application virtualization
 Data center virtualization
 CPU virtualization
 GPU virtualization
 Linux virtualization
 Cloud virtualization
Network virtualization
Network virtualization uses software to create a “view” of the network that an
administrator can use to manage the network from a single console. It takes hardware
elements and functions (e.g., connections, switches, and routers) and abstracts them into
software running on a hypervisor. The network administrator can modify and control these
elements without touching the underlying physical components, which dramatically
simplifies network management.
Types of network virtualization include software-defined networking (SDN), which
virtualizes hardware that controls network traffic routing (called the “control plane”),
and network function virtualization (NFV), which virtualizes one or more hardware
appliances that provide a specific network function (e.g., a firewall, load balancer, or traffic
analyzer), making those appliances easier to configure, provision, and manage.
CPU virtualization
CPU (central processing unit) virtualization is the fundamental technology that makes
hypervisors, virtual machines, and operating systems possible. It allows a single CPU to be
divided into multiple virtual CPUs for use by multiple VMs.
At first, CPU virtualization was entirely software-defined, but many of today’s processors
include extended instruction sets that support CPU virtualization, which improves VM
performance.
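As a quick illustration (a sketch assuming a Linux host), you can check whether a processor advertises these hardware virtualization extensions before installing a hypervisor:

# Count the CPU flag lines that advertise hardware virtualization support
# (vmx = Intel VT-x, svm = AMD-V); a result of 0 means no support is exposed.
grep -Ec '(vmx|svm)' /proc/cpuinfo

A non-zero count means the extensions are present, although they may still need to be enabled in the system firmware.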

Virtualization offers some security benefits. For example, VMs infected with malware can be
rolled back to a point in time (called a snapshot) when the VM was uninfected and stable;
they can also be more easily deleted and recreated. You can’t always disinfect a non-
virtualized OS, because malware is often deeply integrated into the core components of the
OS, persisting beyond system rollbacks.

Virtualization also presents some security challenges. If an attacker compromises a
hypervisor, they potentially own all the VMs and guest operating systems. Because
hypervisors can also allow VMs to communicate between themselves without touching the
physical network, it can be difficult to see their traffic, and therefore to detect suspicious
activity.

A Type 2 hypervisor on a host OS is also susceptible to host OS compromise.

The market offers a range of virtualization security products that can scan and patch VMs
for malware, encrypt entire VM virtual disks, and control and audit VM access.

HYPERVISOR
A hypervisor is a small software layer that enables multiple operating systems to run
alongside each other, sharing the same physical computing resources. These operating
systems come as virtual machines (VMs)—files that mimic an entire computing hardware
environment in software.
The hypervisor, also known as a virtual machine monitor (VMM), manages these VMs as
they run alongside each other. It separates VMs from each other logically, assigning each its
own slice of the underlying computing power, memory, and storage. This prevents the VMs
from interfering with each other; if, for example, one OS suffers a crash or a security
compromise, the others survive.
There are two broad categories of hypervisors: Type 1 and Type 2.
Type 1 hypervisor
A Type 1 hypervisor runs directly on the underlying computer’s physical hardware,
interacting directly with its CPU, memory, and physical storage. For this reason, Type 1
hypervisors are also referred to as bare-metal hypervisors. A Type 1 hypervisor takes the
place of the host operating system.
Examples: VMware ESXi (Elastic Sky X Integrated), Microsoft Hyper-V, Citrix Hypervisor
Type 2 hypervisor
A Type 2 hypervisor doesn’t run directly on the underlying hardware. Instead, it runs as an
application in an OS. Type 2 hypervisors rarely show up in server-based environments.
Instead, they’re suitable for individual PC users needing to run multiple operating systems.
Examples include engineers, security professionals analyzing malware, and business users
who need access to applications only available on other software platforms.
Type 2 hypervisors often feature additional toolkits for users to install into the guest OS.
These tools provide enhanced connections between the guest and the host OS, often
enabling the user to cut and paste between the two or access host OS files and folders from
within the guest VM.
Examples: VMware Fusion, VMware Workstation, Oracle VirtualBox
Virtual Machines
A virtual machine is a virtual representation of a physical computer. Virtualization makes it
possible to create multiple virtual machines, each with their own operating system (OS) and
applications, on a single physical machine. A VM cannot interact directly with a physical
computer. Instead, it needs a lightweight software layer called a hypervisor to coordinate
between it and the underlying physical hardware. The hypervisor allocates physical
computing resources—such as processors, memory, and storage—to each VM. It keeps each
VM separate from others so they don’t interfere with each other.
Virtual machines vs. bare metal servers
Bare metal servers are all about raw hardware, power, and isolation. They’re single-tenant,
physical servers completely devoid of hypervisor cycles (virtualization software), and entirely
dedicated to a single customer – you.
Workloads that prioritize performance and isolation, such as data-intensive applications
and workloads subject to regulatory compliance mandates, are typically best suited to bare
metal servers – especially when deployed over sustained periods of time.
E-commerce, ERP, CRM, SCM, and financial services applications are just a few workloads
ideal for bare metal servers.
So, when would you place a hypervisor on top of the bare metal hardware to make a virtual
machine? When your workloads demand maximum flexibility and scalability.
Virtual machines effortlessly drive up server capacity and increase utilization – ideal for
moving data from one virtual machine to another, resizing data sets, and dividing dynamic
workloads.

CONTAINERIZATION
Containerization is the packaging of software code with just the operating system (OS)
libraries and dependencies required to run the code to create a single lightweight
executable—called a container—that runs consistently on any infrastructure.
Containerization allows developers to create and deploy applications faster and more
securely. With traditional methods, code is developed in a specific computing environment
which, when transferred to a new location, often results in bugs and errors: for example,
when a developer transfers code from a desktop computer to a VM, or from a Linux to a
Windows operating system. Containerization eliminates this problem by bundling the
application code together with the related configuration files, libraries, and dependencies
required for it to run. This single package of software or “container” is abstracted away from
the host operating system, and hence, it stands alone and becomes portable—able to run
across any platform or cloud, free of issues.
Containers are often referred to as “lightweight,” meaning they share the machine’s
operating system kernel and do not carry the overhead of running an operating system
within each application. Containers are inherently smaller in capacity than a VM and require
less start-up time, allowing far more containers to run on the same compute capacity as a
single VM. This drives higher server efficiencies and, in turn, reduces server and licensing
costs.
Perhaps most important, containerization allows applications to be “written once and run
anywhere.” This portability speeds development, prevents cloud vendor lock-in and offers
other notable benefits such as fault isolation, ease of management, simplified security and
more (see below).
Application containerization
Containers encapsulate an application as a single executable package of software that
bundles application code together with all of the related configuration files, libraries, and
dependencies required for it to run. Containerized applications are “isolated” in that they do
not bundle in a copy of the operating system. Instead, an open-source runtime engine (such
as the Docker runtime engine) is installed on the host’s operating system and becomes the
conduit for containers to share an operating system with other containers on the same
computing system.
Other container layers, like common bins and libraries, can also be shared among multiple
containers. This eliminates the overhead of running an operating system within each
application and makes containers smaller in capacity and faster to start up, driving higher
server efficiencies. The isolation of applications as containers also reduces the chance that
malicious code present in one container will impact other containers or invade the host
system.
The abstraction from the host operating system makes containerized applications portable
and able to run uniformly and consistently across any platform or cloud. Containers can be
easily transported from a desktop computer to a virtual machine (VM) or from a Linux to a
Windows operating system, and they will run consistently on virtualized infrastructures or
on traditional “bare metal” servers, either on-premises or in the cloud. This ensures that
software developers can continue using the tools and processes they are most comfortable
with.
One can see why enterprises are rapidly adopting containerization as a superior approach to
application development and management. Containerization allows developers to create
and deploy applications faster and more securely, whether the application is a traditional
monolith (a single-tiered software application) or a modular application built
on microservices architecture. New cloud-based applications can be built from the ground
up as containerized microservices, breaking a complex application into a series of smaller
specialized and manageable services. Existing applications can be repackaged into
containers (or containerized microservices) that use compute resources more efficiently.

Benefits of containerization
Containerization offers significant benefits to developers and development teams. Among
these are the following:
Portability: A container creates an executable package of software that is abstracted away
from (not tied to or dependent upon) the host operating system, and hence, is portable and
able to run uniformly and consistently across any platform or cloud. 
Agility: The open-source Docker Engine for running containers established the industry
standard for containers, with simple developer tools and a universal packaging approach that
works on both Linux and Windows operating systems. The container ecosystem has since
shifted to engines that conform to standards managed by the Open Container Initiative (OCI).
Software developers can continue using agile or DevOps tools and processes for rapid
application development and enhancement.
Speed: Containers are often referred to as “lightweight,” meaning they share the machine’s
operating system (OS) kernel and are not bogged down with this extra overhead. Not only
does this drive higher server efficiencies, it also reduces server and licensing costs while
speeding up start-times as there is no operating system to boot.
Fault isolation: Each containerized application is isolated and operates independently of
others. The failure of one container does not affect the continued operation of any other
containers. Development teams can identify and correct any technical issues within one
container without any downtime in other containers. Also, the container engine can
leverage any OS security isolation techniques—such as SELinux access control—to isolate
faults within containers.
Efficiency: Software running in containerized environments shares the machine’s OS kernel,
and application layers within a container can be shared across containers. Thus, containers
are inherently smaller in capacity than a VM and require less start-up time, allowing far
more containers to run on the same compute capacity as a single VM. This drives higher
server efficiencies, reducing server and licensing costs.
Ease of management: A container orchestration platform automates the installation,
scaling, and management of containerized workloads and services. Container orchestration
platforms can ease management tasks such as scaling containerized apps, rolling out new
versions of apps, and providing monitoring, logging and debugging, among other
functions. Kubernetes, perhaps the most popular container orchestration system available,
is an open-source technology (originally open-sourced by Google, based on their internal
project called Borg) that was originally developed to automate Linux container functions.
Kubernetes works with many container engines, such as Docker, but it also works with any
container system that conforms to the Open Container Initiative (OCI) standards for
container image formats and runtimes.
Security: The isolation of applications as containers inherently prevents the invasion of
malicious code from affecting other containers or the host system. Additionally, security
permissions can be defined to automatically block unwanted components from entering
containers or limit communications with unnecessary resources.
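As a brief sketch of such security permissions in practice (the alpine image name is only an example), a container can be started with most privileges stripped away using standard docker run options:

# Run a container with a reduced attack surface: drop all Linux capabilities
# and mount the container's filesystem read-only.
docker run --rm --cap-drop ALL --read-only alpine echo "locked-down container"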
Microservices and containerization
Software companies large and small are embracing microservices as a superior approach to
application development and management, compared to the earlier monolithic model that
combines a software application with the associated user interface and underlying database
into a single unit on a single server platform. With microservices, a complex application is
broken up into a series of smaller, more specialized services, each with its own database and
its own business logic. Microservices then communicate with each other across common
interfaces, such as APIs and REST interfaces like HTTP. Using microservices, development
teams can focus on updating specific areas of an application without impacting it as a whole,
resulting in faster development, testing, and deployment.
The concepts behind microservices and containerization are similar as both are software
development practices that essentially transform applications into collections of smaller
services or components which are portable, scalable, efficient and easier to manage.
Moreover, microservices and containerization work well when used together. Containers
provide a lightweight encapsulation of any application, whether it is a traditional monolith
or a modular microservice. A microservice, developed within a container, then gains all of
the inherent benefits of containerization—portability in terms of the development process
and vendor compatibility (no vendor lock-in), as well as developer agility, fault isolation,
server efficiencies, automation of installation, scaling and management, and layers of
security, among others.
Today’s communications are rapidly moving to the cloud where users can develop
applications quickly and efficiently. Cloud-based applications and data are accessible from
any internet-connected device, allowing team members to work remotely and on-the-go.
Cloud service providers (CSPs) manage the underlying infrastructure, which saves
organizations the cost of servers and other equipment and also provides automated
network backups for additional reliability. Cloud infrastructures scale on demand and can
dynamically adjust computing resources, capacity, and infrastructure as load requirements
change. On top of that, CSPs regularly update offerings, giving users continued access to the
latest innovative technology.
Containers, microservices, and cloud computing are working together to bring application
development and delivery to new levels not possible with traditional methodologies and
environments. These next-generation approaches add agility, efficiency, reliability, and
security to the software development lifecycle—all of which leads to faster delivery of
applications and enhancements to end users and the market.
Security
Containerized applications inherently have a level of security since they can run as isolated
processes and can operate independently of other containers. This isolation can prevent
malicious code from affecting other containers or invading the host system.
However, application layers within a container are often shared across containers. In terms
of resource efficiency, this is a plus, but it also opens the door to interference and security
breaches across containers. The same could be said of the shared Operating System since
multiple containers can be associated with the same host Operating System. Security
threats to the common Operating System can impact all of the associated containers, and
conversely, a container breach can potentially invade the host Operating System.
But what about the container image itself? How secure are the applications and open-source
components packaged within a container? Container technology providers,
such as Docker, continue to actively address container security challenges. Containerization
has taken a “secure-by-default” approach, believing that security should be inherent in the
platform and not a separately deployed and configured solution. To this end, the container
engine supports all of the default isolation properties inherent in the underlying operating
system. Security permissions can be defined to automatically block unwanted components
from entering containers or to limit communications with unnecessary resources.
For example, Linux namespaces help to provide an isolated view of the system to each
container; this includes networking, mount points, process IDs, user IDs, inter-process
communication, and hostname settings. Namespaces can be used to limit access to any of
those resources by the processes within each container. Typically, subsystems that do
not have namespace support are not accessible from within a container. Administrators can
easily create and manage these “isolation constraints” on each containerized application
through a simple user interface.
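For instance (a minimal demonstration, assuming Docker is installed; the public alpine image is used only as an example), the PID namespace means a container sees only its own processes:

# Inside the container, the process listing shows only the container's own
# processes (the ps command itself runs as PID 1), not the host's processes.
docker run --rm alpine ps

# On the host, the same tool lists every process on the machine.
ps -e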
Researchers are working to further strengthen Linux container security, and a wide range of
security solutions are available to automate threat detection and response across an
enterprise, to monitor and enforce compliance to meet industry standards and security
policies, to ensure the secure flow of data through applications and endpoints, and much
more.

What are containers?


Containers are executable units of software in which application code is packaged, along
with its libraries and dependencies, in common ways so that it can be run anywhere,
whether it be on desktop, traditional IT, or the cloud.
To do this, containers take advantage of a form of operating system (OS) virtualization in
which features of the OS (in the case of the Linux kernel, namely the namespaces and
cgroups primitives) are leveraged to both isolate processes and control the amount of CPU,
memory, and disk that those processes have access to.
Containers are small, fast, and portable because, unlike a virtual machine, containers do not
need to include a guest OS in every instance and can, instead, simply leverage the features
and resources of the host OS.
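A brief sketch of the cgroups side (the limit values below are arbitrary examples):

# Cap the container at half a CPU core and 256 MB of memory; the limits are
# enforced by the kernel's control groups, not by the application itself.
docker run --rm --cpus="0.5" --memory="256m" alpine echo "resource-limited container"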
Use cases for containers
Containers are becoming increasingly prominent, especially in cloud environments. Many
organizations are even considering containers as a replacement for VMs as the general-
purpose compute platform for their applications and workloads. But within that very broad
scope, there are key use cases where containers are especially relevant.
 Microservices: Containers are small and lightweight, which makes them a good
match for microservice architectures where applications are constructed of many
loosely coupled and independently deployable smaller services.
 DevOps: The combination of microservices as an architecture and containers as a
platform is a common foundation for many teams that embrace DevOps as the way
they build, ship and run software.
 Hybrid, multi-cloud: Because containers can run consistently anywhere, across
laptop, on-premises and cloud environments, they are an ideal underlying
architecture for hybrid cloud and multi cloud scenarios where organizations find
themselves operating across a mix of multiple public clouds in combination with
their own data center.
 Application modernization and migration: One of the most common approaches
to application modernization is to containerize applications in preparation for cloud
migration.
Containerization
Software needs to be designed and packaged differently in order to take advantage of
containers—a process commonly referred to as containerization.

When containerizing an application, the process includes packaging an application with its
relevant environment variables, configuration files, libraries, and software dependencies.
The result is a container image that can then be run on a container platform.
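In practice (a hedged sketch; the image name myapp is hypothetical and the directory is assumed to contain a Dockerfile like the one shown later in the Dockerfile section), the packaging and run steps look like this:

# Build a container image from the Dockerfile in the current directory
# and tag it with a name and version.
docker build -t myapp:1.0 .

# Run the resulting image on any host with a container runtime;
# --rm removes the container when it exits, -p publishes a port.
docker run --rm -p 8080:8080 myapp:1.0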

Container orchestration with Kubernetes

As companies began embracing containers—often as part of modern, cloud-native
architectures—the simplicity of the individual container began colliding with the complexity
of managing hundreds (even thousands) of containers across a distributed system.

To address this challenge, container orchestration emerged as a way of managing large
volumes of containers throughout their lifecycle, including:
 Provisioning
 Redundancy
 Health monitoring
 Resource allocation
 Scaling and load balancing
 Moving between physical hosts

Kubernetes enables developers and operators to declare a desired state of their overall
container environment through YAML files, and then Kubernetes does all the hard work of
establishing and maintaining that state, with activities that include deploying a specified
number of instances of a given application or workload, rebooting that application if it fails,
load balancing, auto-scaling, zero-downtime deployments and more.
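As a minimal sketch of such a YAML file (the names and image are illustrative, not taken from the text), a Kubernetes Deployment that declares three replicas of a containerized application might look like this:

# deployment.yaml: declare the desired state, three replicas of one image.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
spec:
  replicas: 3                 # Kubernetes keeps three instances running
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
        - name: myapp
          image: myapp:1.0    # hypothetical image name
          ports:
            - containerPort: 8080

Applying the file with kubectl apply -f deployment.yaml hands the desired state to Kubernetes, which then creates, monitors, restarts, and scales the containers to keep that state true.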

Containers vs. VMs: What are the differences?
In traditional virtualization, a hypervisor virtualizes physical hardware. The result is that
each virtual machine contains a guest OS, a virtual copy of the hardware that the OS
requires to run, and an application with its associated libraries and dependencies. VMs with
different operating systems can be run on the same physical server; for example, a Linux VM
can run next to a Windows VM, and so on.
Instead of virtualizing the underlying hardware, containers virtualize the operating
system (typically Linux or Windows) so each individual container contains only the
application and its libraries and dependencies. Containers are small, fast, and portable
because, unlike a virtual machine, containers do not need to include a guest OS in every
instance and can, instead, simply leverage the features and resources of the host OS. 
Just like virtual machines, containers allow developers to improve CPU and memory
utilization of physical machines. Containers go even further, however, because they also
enable microservice architectures, where application components can be deployed and
scaled more granularly. This is an attractive alternative to having to scale up an
entire monolithic application because a single component is struggling with load.
Why containers?
While there are still many reasons to use VMs, containers provide a level of flexibility
and portability that is perfect for the multi cloud world. When developers create new
applications, they might not know all of the places it will need to be deployed. Today, an
organization might run the application on its private cloud, but tomorrow it might need to
deploy it on a public cloud from a different provider. Containerizing applications provides
teams the flexibility they need to handle the many software environments of modern IT. 
Containers are also ideal for automation and DevOps pipelines, including continuous
integration and continuous deployment (CI/CD) implementation.

Docker
What is Docker?
Docker is an open-source platform that enables developers to build, deploy, run, update and
manage containers—standardized, executable components that combine application source
code with the operating system (OS) libraries and dependencies required to run that code in
any environment.
How containers work, and why they're so popular
Containers are made possible by process isolation and virtualization capabilities built into
the Linux kernel. These capabilities—such as control groups (Cgroups) for allocating
resources among processes, and namespaces for restricting a process’s access or visibility
into other resources or areas of the system—enable multiple application components to
share the resources of a single instance of the host operating system in much the same way
that a hypervisor enables multiple virtual machines (VMs) to share the CPU, memory and
other resources of a single hardware server. 
As a result, container technology offers all the functionality and benefits of VMs—including
application isolation, cost-effective scalability, and disposability—plus important additional
advantages:
 Lighter weight: Unlike VMs, containers don’t carry the payload of an entire OS
instance and hypervisor. They include only the OS processes and dependencies
necessary to execute the code. Containers are measured in megabytes (vs.
gigabytes for some VMs), make better use of hardware capacity, and have faster
startup times.
 Improved developer productivity: Containerized applications can be written once
and run anywhere. And compared to VMs, containers are faster and easier to
deploy, provision and restart. This makes them ideal for use in continuous
integration and continuous delivery (CI/CD) pipelines and a better fit for
development teams adopting Agile and DevOps practices.
 Greater resource efficiency: With containers, developers can run several times as
many copies of an application on the same hardware as they can using VMs. This can
reduce cloud spending.
Why use Docker?
Linux Containers (LXC) builds on containerization capabilities implemented in the Linux
kernel, enabling full OS-level virtualization within a single instance of Linux. While LXC is still
used today, newer technologies that use the same kernel features are available, and modern
open-source Linux distributions such as Ubuntu ship with this capability.
Docker lets developers access these native containerization capabilities using simple
commands, and automate them through a work-saving application programming interface
(API). Compared to LXC, Docker offers:
 Improved and seamless container portability: While LXC containers often reference
machine-specific configurations, Docker containers run without modification across
any desktop, data center and cloud environment. 
 Even lighter weight and more granular updates: With LXC, multiple processes can
be combined within a single container, whereas a Docker container is designed to
run a single process. This makes it possible to build an application that can continue
running while one of its parts is taken down for an update or repair.
 Automated container creation: Docker can automatically build a container based on
application source code.
 Container versioning: Docker can track versions of a container image, roll back to
previous versions, and trace who built a version and how. It can even upload only
the deltas between an existing version and a new one.
 Container reuse: Existing containers can be used as base images—essentially like
templates for building new containers.
 Shared container libraries: Developers can access an open-source registry
containing thousands of user-contributed containers.
Today Docker containerization also works with Microsoft Windows and Apple macOS.
Developers can run Docker containers on any operating system, and most leading cloud
providers, including Amazon Web Services (AWS), Microsoft Azure, and IBM Cloud, offer
specific services to help developers build, deploy and run applications containerized with
Docker.
Dockerfile
Every Docker container starts with a simple text file containing instructions for how to build
the Docker container image. The Dockerfile automates the process of Docker image creation.
It’s essentially a list of command-line interface (CLI) instructions that Docker Engine will run
in order to assemble the image.
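A minimal example of such a file (a sketch for a hypothetical Python web app; file names like app.py and requirements.txt are placeholders):

# Dockerfile: each instruction below adds a layer to the resulting image.
# Start from a public base image that already contains the language runtime.
FROM python:3.12-slim
# Set the working directory inside the image.
WORKDIR /app
# Copy the dependency list first so this layer can be cached between builds.
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
# Copy the application source code into the image.
COPY . .
# Default command executed when a container is started from this image.
CMD ["python", "app.py"]

Running docker build -t myapp:1.0 . in the same directory turns this file into a tagged image.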
Docker images
Docker images contain executable application source code as well as all the tools, libraries,
and dependencies that the application code needs to run as a container. When you run the
Docker image, it becomes one instance (or multiple instances) of the container. 
It’s possible to build a Docker image from scratch, but most developers pull them down
from common repositories. Multiple Docker images can be created from a single base
image, and they’ll share the commonalities of their stack. 
Docker images are made up of layers, and each layer corresponds to a version of the image.
Whenever a developer makes changes to the image, a new top layer is created, and this top
layer replaces the previous top layer as the current version of the image. Previous layers are
saved for rollbacks or to be re-used in other projects. 
Each time a container is created from a Docker image, yet another new layer called the
container layer is created. Changes made to the container—such as the addition or deletion
of files—are saved to the container layer only and exist only while the container is running.
This iterative image-creation process enables increased overall efficiency since multiple live
container instances can run from just a single base image, and when they do so, they
leverage a common stack. 
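A short illustration of inspecting layers and versions from the CLI (the public nginx image, its tag, and the registry name are only examples):

# Pull a specific version (tag) of an image from a registry.
docker pull nginx:1.27
# List the layers that make up the image, newest layer first.
docker history nginx:1.27
# Give the same image an additional tag, for example as a rollback target.
docker tag nginx:1.27 registry.example.com/nginx:stable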
Docker containers
Docker containers are the live, running instances of Docker images. While Docker images are
read-only files, containers are live, ephemeral, executable content. Users can interact with
them, and administrators can adjust their settings and conditions using Docker commands.
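For example, a few everyday commands for interacting with running containers (the container name web is a placeholder):

# List the containers that are currently running.
docker ps
# Open an interactive shell inside a running container named "web".
docker exec -it web sh
# Stop the container and remove it once it is no longer needed.
docker stop web
docker rm web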
Docker Hub
Docker Hub is the public repository of Docker images that calls itself the “world’s largest
library and community for container images.” It holds over 100,000 container images
sourced from commercial software vendors, open-source projects, and individual
developers. It includes images that have been produced by Docker, Inc., certified images
belonging to the Docker Trusted Registry, and many thousands of other images. 
All Docker Hub users can share their images at will. They can also download predefined base
images from the Docker Hub library to use as a starting point for any containerization
project.
Other image repositories exist, as well, notably GitHub. GitHub is a repository hosting
service, well known for application development tools and as a platform that fosters
collaboration and communication. Users of Docker Hub can create a repository (repo) which
can hold many images. The repository can be public or private, and can be linked to GitHub
or BitBucket accounts. 
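A typical publish-and-share workflow against Docker Hub might look like this (myuser and myapp are placeholders):

# Authenticate against Docker Hub.
docker login
# Tag a local image with your Docker Hub namespace, then publish it.
docker tag myapp:1.0 myuser/myapp:1.0
docker push myuser/myapp:1.0
# Anyone with access to the repository can now pull the image elsewhere.
docker pull myuser/myapp:1.0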
Docker Desktop
Docker Desktop is an application for Mac or Windows that includes Docker Engine, Docker
CLI client, Docker Compose, Kubernetes, and others. It also includes access to Docker Hub. 

Docker daemon
Docker daemon is a service that creates and manages Docker images, using the commands
from the client. Essentially Docker daemon serves as the control center of your Docker
implementation. The server on which Docker daemon runs is called the Docker host.
The Docker daemon (dockerd) listens for Docker API requests and manages Docker objects
such as images, containers, networks, and volumes. A daemon can also communicate with
other daemons to manage Docker services.
Docker registry
A Docker registry is a scalable open-source storage and distribution system for Docker
images. The registry enables you to track image versions in repositories, using tagging for
identification.
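For example (a minimal local sketch using the open-source registry image; myapp is a placeholder), you can run a private registry and push a tagged image to it:

# Start a private registry container listening on port 5000.
docker run -d -p 5000:5000 --name registry registry:2
# Re-tag a local image for that registry and push it there.
docker tag myapp:1.0 localhost:5000/myapp:1.0
docker push localhost:5000/myapp:1.0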
Docker Compose
Developers can use Docker Compose to manage multi-container applications, where all
containers run on the same Docker host. Docker Compose reads a YAML (.yml) file that
specifies which services are included in the application and can deploy and run the
containers with a single command.
Developers can also use Docker Compose to define persistent volumes for storage, specify
base nodes, and document and configure service dependencies. 
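A minimal compose file sketch (the service names, images, and volume are illustrative):

# docker-compose.yml: declare the services that make up the application.
services:
  web:
    build: .                 # build the image from the local Dockerfile
    ports:
      - "8080:8080"
    depends_on:
      - db                   # record the service dependency
  db:
    image: postgres:16
    environment:
      POSTGRES_PASSWORD: example            # required by the postgres image
    volumes:
      - db-data:/var/lib/postgresql/data    # persistent volume for the data
volumes:
  db-data:

Running docker compose up then builds and starts both containers with a single command.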
Docker Client
The Docker client (docker) is the primary way that many Docker users interact with Docker.
When you use commands such as docker run, the client sends these commands to dockerd,
which carries them out. The docker command uses the Docker API. 
