History: Adoption
Contents
1 History
  1.1 Adoption
2 Technology
  2.1 Components
  2.2 Tools
3 Operation
  3.1 Integration
  3.2 For Windows
4 See also
5 Notes
6 References
7 External links
History
Solomon Hykes started Docker in France as an internal project within dotCloud, a platform-as-a-
service company,[9] with initial contributions by other dotCloud engineers including Andrea Luzzardi
and Francois-Xavier Bourlet.[10] Jeff Lindsay also became involved as an independent
collaborator.[citation needed] Docker represents an evolution of dotCloud's proprietary technology, which is
itself built on earlier open-source projects such as Cloudlets.[clarification needed][citation needed]
The software debuted to the public in Santa Clara at PyCon in 2013.[11]
Docker was released as open source in March 2013.[12] On March 13, 2014, with the release of
version 0.9, Docker dropped LXC as the default execution environment and replaced it with its
own libcontainer library written in the Go programming language.[13][14]
Adoption
On September 19, 2013, Red Hat and Docker announced a collaboration around Fedora, Red
Hat Enterprise Linux, and OpenShift.[15]
In November 2014, Docker container services were announced for the Amazon Elastic Compute
Cloud (EC2).[16]
On November 10, 2014, Docker announced a partnership with Stratoscale.[17]
On December 4, 2014, IBM announced a strategic partnership with Docker that enabled Docker
to integrate more closely with the IBM Cloud.[18]
On June 22, 2015, Docker and several other companies announced that they were working on a
new vendor- and operating-system-independent standard for software containers.[19][20]
As of October 24, 2015, the project had over 25,600 GitHub stars (making it the 20th most-
starred GitHub project), over 6,800 forks, and nearly 1,100 contributors.[21]
In April 2016, Windocks, an independent ISV, released a port of Docker's open-source project to
Windows, supporting Windows Server 2012 R2 and Server 2016, with all editions of SQL Server
2008 onward.[22]
A May 2016 analysis showed the following organizations as main contributors to Docker: The
Docker team, Cisco, Google, Huawei, IBM, Microsoft, and Red Hat.[23]
On October 4, 2016, Solomon Hykes announced InfraKit as a new self-healing container
infrastructure effort for Docker container environments.[24][25]
A January 2017 analysis of LinkedIn profile mentions showed that Docker's presence grew by 160% in
2016.[26] As of 2017, the software had been downloaded more than 13 billion times.
Technology
Docker can use different interfaces to access virtualization features of the Linux kernel.[27]
Docker is developed primarily for Linux, where it uses the resource isolation features of the Linux
kernel such as cgroups and kernel namespaces, and a union-capable file system such
as OverlayFS and others[28] to allow independent "containers" to run within a single Linux instance,
avoiding the overhead of starting and maintaining virtual machines (VMs).[29] The Linux kernel's
support for namespaces mostly[30] isolates an application's view of the operating environment,
including process trees, network, user IDs and mounted file systems, while the kernel's cgroups
provide resource limiting for memory and CPU.[31] Since version 0.9, Docker includes
the libcontainer library as its own way to directly use virtualization facilities provided by the
Linux kernel, in addition to using abstracted virtualization interfaces via libvirt, LXC and systemd-
nspawn.[13][32][27]
Building on top of facilities provided by the Linux kernel (primarily cgroups and namespaces), a
Docker container, unlike a virtual machine, does not require or include a separate operating
system.[33] Instead, it relies on the kernel's functionality, using resource isolation for CPU and
memory[31] and separate namespaces to isolate the application's view of the operating system.[27][14]
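As a minimal sketch of these kernel facilities in use (assuming a Linux host with Docker installed; the alpine image and the limit values are illustrative examples, not taken from the cited sources), the docker client can start containers whose resources are limited through cgroups and whose process view is isolated through namespaces:

    # cgroups: cap the container at 256 MB of RAM and give it a
    # relative CPU weight of 512 (both are standard docker run flags).
    docker run --rm --memory=256m --cpu-shares=512 alpine echo "resource-limited"

    # namespaces: the container's PID namespace hides host processes,
    # so ps reports only the container's own process tree.
    docker run --rm alpine ps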
Components
The Docker software is a service consisting of three components (illustrated with a brief command-line example after the list):
Software: The Docker daemon, called dockerd, is a persistent process that manages Docker
containers and handles container objects. The daemon listens for requests sent via the Docker
Engine API.[34][35] The Docker client program, called docker, provides a command-line
interface that allows users to interact with Docker daemons.[36][34]
Objects: Docker objects are various entities used to assemble an application in Docker. The
main classes of Docker objects are images, containers, and services.[34]
A Docker container is a standardized, encapsulated environment that runs applications.[37] A
container is managed using the Docker API or CLI.[34]
A Docker image is a read-only template used to build containers. Images are used to store
and ship applications.[34]
A Docker service allows containers to be scaled across multiple Docker daemons. The
result is known as a "swarm", a set of cooperating daemons that communicate through the
Docker API.[34]
Registries: A Docker registry is a repository for Docker images. Docker clients connect to
registries to download ("pull") images for use or upload ("push") images that they have built.
Registries can be public or private. Two main public registries are Docker Hub and Docker
Cloud. Docker Hub is the default registry where Docker looks for images.[38][34]
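A brief command-line sketch of how these components cooperate (assuming a running dockerd daemon and network access to Docker Hub; the alpine image is an arbitrary example):

    docker pull alpine            # the client asks the daemon to pull an image from the default registry (Docker Hub)
    docker run alpine echo hello  # the daemon creates and starts a container object from that image
    docker ps -a                  # list the container objects the daemon knows about
    docker images                 # list the read-only image templates stored locally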
Tools
Docker Compose is a tool for defining and running multi-container Docker applications.[39] It
uses YAML files to configure the application's services and performs the creation and start-up
process of all the containers with a single command. The docker-compose CLI utility allows
users to run commands on multiple containers at once, for example, building
images, scaling containers, running containers that were stopped, and more.[40] Commands
related to image manipulation, or user-interactive options, are not relevant in Docker Compose
because they address a single container.[41] The docker-compose.yml file is used to define an
application's services and includes various configuration options. For example,
the build option defines configuration options such as the Dockerfile path,
the command option allows one to override default Docker commands, and more.[42] The first
public version of Docker Compose (version 0.0.1) was released on December 21, 2013.[43] The
first production-ready version (1.0) was made available on October 16, 2014.[44]
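A minimal sketch of such a file and the single-command start-up it enables (the service names, image choices, and file contents are illustrative assumptions, not drawn from the cited sources):

    # Write a two-service docker-compose.yml, then start everything at once.
    cat > docker-compose.yml <<'EOF'
    version: "2"
    services:
      web:
        build: .                 # 'build' points at the directory holding the Dockerfile
        command: python app.py   # 'command' overrides the image's default command
      redis:
        image: redis             # use a prebuilt image instead of building one
    EOF
    docker-compose up -d         # create and start all services with a single command
    docker-compose scale web=3   # run three containers for the 'web' service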
Docker Swarm provides native clustering functionality for Docker containers, which turns a
group of Docker engines into a single virtual Docker engine.[45] In Docker 1.12 and higher,
Swarm mode is integrated with Docker Engine.[46] The swarm CLI utility allows users to run
Swarm containers, create discovery tokens, list nodes in the cluster, and more.[47] The docker
node CLI utility allows users to run various commands to manage nodes in a swarm, for
example, listing the nodes in a swarm, updating nodes, and removing nodes from the
swarm.[48] Docker manages swarms using the Raft consensus algorithm. Under Raft, an update
can be performed only after a majority of the swarm's manager nodes agree on it.[49][50]
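A brief sketch of swarm mode as integrated into Docker Engine 1.12 and later (the service name, replica counts, and node name are illustrative assumptions):

    docker swarm init                                      # turn this engine into a swarm manager
    docker node ls                                         # list the nodes in the swarm
    docker service create --name web --replicas 3 nginx   # spread three nginx containers across the swarm
    docker service scale web=5                             # rescale the service to five containers
    docker node update --availability drain node-2         # stop scheduling new work on one node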
Operation
Docker implements a high-level API to provide lightweight containers that run processes in
isolation.[12]
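That API can also be exercised directly; a minimal sketch, assuming the daemon listens on its default Unix socket and using an example API version in the request path:

    # Ask the Docker Engine API for the list of running containers;
    # the daemon answers with JSON over its Unix socket.
    curl --unix-socket /var/run/docker.sock http://localhost/v1.24/containers/json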
According to a Linux.com article,
Docker is a tool that can package an application and its dependencies in a virtual container that can
run on any Linux server. This helps enable flexibility and portability on where the application can run,
whether on premises, public cloud, private cloud, bare metal, etc.[33]
Because Docker containers are lightweight, a single server or virtual machine can run several
containers simultaneously. A 2016 analysis found that a typical Docker use case involves running
five containers per host, but that many organizations run 10 or more.[51]
Using containers may simplify the creation of highly distributed systems by allowing multiple
applications, worker tasks and other processes to run autonomously on a single physical machine or
across multiple virtual machines. This lets nodes be deployed as resources become available or
when more nodes are needed, enabling a platform-as-a-service (PaaS)-style deployment and
scaling for systems such as Apache Cassandra, MongoDB, and Riak.[52][53]
Integration
Docker can be integrated into various infrastructure tools, including Amazon Web
Services,[54] Ansible,[55] CFEngine,[56] Chef,[57] Google Cloud Platform,[58] IBM Bluemix,[59] HPE Helion
Stackato, Jelastic,[60] Jenkins,[61] Kubernetes,[62] Microsoft
Azure,[63] OpenStack Nova,[64] OpenSVC,[65] Oracle Container Cloud
Service,[66] Puppet,[67] ProGet,[68] Salt,[69] Vagrant,[70] and VMware vSphere Integrated Containers.[71][72]
The Cloud Foundry Diego project integrates Docker into the Cloud Foundry PaaS.[73]
Nanobox uses Docker (natively and with VirtualBox) containers as a core part of its software
development platform.[74]
Red Hat's OpenShift PaaS integrates Docker with related projects (Kubernetes, Geard, Project
Atomic and others) since v3 (June 2015).[75]
The Apprenda PaaS integrates Docker containers in version 6.0 of its product.[76]
Jelastic PaaS provides managed multi-tenant Docker containers with full compatibility with the native
ecosystem.[77]
The Tsuru PaaS integrated Docker containers into its product in 2013, making it the first PaaS to use
Docker in a production environment.[78]
For Windows
On October 15, 2014, Microsoft announced integration of the Docker engine into the next Windows
Server release, and native support for the Docker client role in Windows.[79][80] On June 8, 2016,
Microsoft announced that Docker now could be used natively on Windows 10 with Hyper-V
Containers, to build, ship and run containers utilizing the Windows Server 2016 Technical Preview 5
Nano Server container OS image.[81]
Since then, a feature known as Windows Containers was made available for Windows
10 and Windows Server 2016. There are two types of Windows Containers: "Windows Server
Containers" and "Hyper-V Isolation". The former has nothing to do with Docker. The latter, however,
is a form of hardware virtualization (as opposed to OS-level virtualization) and uses Docker to deliver
the guest OS image.[82] The guest OS image is a Windows Nano Server image, which is 652 MB in
size and has the same limitations as Nano Server,[83] as well as a separate end-user license
agreement.[84]