Docker Notes
DOCKER
Virtualization
This is the process of running multiple operating systems in parallel on a single piece of hardware. Here we have the hardware (bare metal), on top of which we have the host OS, and on the host OS we install an application called a hypervisor. On the hypervisor we can run any number of operating systems as guest OSs.
The disadvantage of this approach is that applications running on a guest OS have to pass through a number of
layers to access the hardware resources.
Containerization
Here we have bare metal, on top of which we install the host OS, and on the host OS we install an application
called the Docker Engine. On the Docker Engine we can run any application in the form of containers. Docker is a
technology for creating these containers.
Docker achieves what is commonly called "process isolation": applications (processes) normally have a
dependency on a specific OS. Docker removes this dependency, so we can run them as containers on any OS
that has the Docker Engine installed.
These containers pass through fewer layers to access the hardware resources. Organizations also need not spend
money on purchasing licenses for different OSs to maintain various applications. Docker can be used at all the
stages of the software development life cycle:
Build ---> Ship ---> Run
A Docker image is a combination of the binaries and libraries that are necessary for a software application to work. Initially,
all Docker software is available in the form of Docker images. A running instance of an image is called a container.
Docker Host: The server where docker is installed is called docker host
Docker Client: This is the CLI of Docker where the user can execute Docker commands. The Docker client
accepts these commands and passes them to a background process called the "Docker daemon".
Docker daemon: This process accepts the commands coming from the Docker client and routes them to
work on Docker images, containers, or the Docker registry.
Docker registry: This is the cloud site of docker where docker images are stored. This is of two types
1 Public Registry( hub.docker.com)
2 Private Registry(Setup on one of our local servers)
2|Page
DOCKER Madhav
11 To create a docker image from a dockerfile
docker build -t image_name .
15 To start a container
docker start container_id/container_name
16 To stop a container
docker stop container_id/container_name
17 To restart a container
docker restart container_id/container_name
To restart with a timeout of 10 seconds (docker waits 10 seconds before killing the container)
docker restart -t 10 container_id/container_name
40 To delete a volume
docker volume rm volume_name/volume_id
Day 3
Use Case 1
Create an nginx container in detached mode and name it webserver
Also perform port mapping
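The command for this use case is missing here; a likely version (the webserver name comes from the use case, while host port 8888 is an assumption) is:

```shell
# Run nginx detached, name it webserver, and map host port 8888 to container port 80
docker run --name webserver -d -p 8888:80 nginx

# Verify the container and its port mapping
docker ps
```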
Use Case 2
Start tomcat as a container and perform automatic port mapping
docker run --name appserver -d -P tomee
Start a jenkins container in detached mode and also perform port mapping
docker run --name myjenkins -d -p 9999:8080 jenkins/jenkins
Use Case
Create a mysql container and login as root user and create some SQL tables
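The commands for this use case are not shown; a plausible sketch (the password "intelliqit" follows the convention used elsewhere in these notes) is:

```shell
# MYSQL_ROOT_PASSWORD is mandatory for the mysql image
docker run --name mydb -d -e MYSQL_ROOT_PASSWORD=intelliqit mysql:5

# Login as the root user (enter the password when prompted)
docker exec -it mydb mysql -u root -p

# Inside the mysql prompt, create a database and some tables:
#   create database test; use test;
#   create table emp(id int, name varchar(20));
#   create table dept(deptno int, dname varchar(20));
```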
8 To see the data of the tables
select * from emp;
select * from dept;
Use Case
4 Python Scripting
5 Ansible Playbooks
Use Case
4 Check if c2 can ping c1
ping c1
Use Case
Setup wordpress and link it with mysql container
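The commands are missing here; a sketch using the (deprecated) --link option, with assumed names mydb/mywordpress and host port 8888:

```shell
docker run --name mydb -d -e MYSQL_ROOT_PASSWORD=intelliqit mysql:5

# Link the wordpress container to the mysql container under the alias "mysql"
docker run --name mywordpress -d -p 8888:80 --link mydb:mysql wordpress
```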
Use Case
Setup CI-CD environment where a Jenkins container is linked with 2 tomcat containers for Qaserver and
Prodserver
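The commands for this use case are not shown; a likely sketch (the ports are chosen to match the docker compose version of this setup later in the notes):

```shell
docker run --name myjenkins -d -p 5050:8080 jenkins/jenkins
docker run --name qaserver -d -p 6060:8080 --link myjenkins:jenkins tomee
docker run --name prodserver -d -p 7070:8080 --link myjenkins:jenkins tomee
```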
3 Create a php container and link with mysql and apache containers
docker run --name php -d --link mydb:mysql --link apache:httpd php:7.2-apache
Create a testing environment where a selenium hub container is linked with 2 node containers one with
chrome and other with firefox installed
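The link commands are missing here; a plausible sketch (images and ports match the docker compose version of this setup later in the notes):

```shell
docker run --name hub -d -p 4444:4444 selenium/hub
docker run --name chrome -d -p 5901:5900 --link hub:selenium selenium/node-chrome-debug
docker run --name firefox -d -p 5902:5900 --link hub:selenium selenium/node-firefox-debug
```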
4 The above 2 containers are GUI ubuntu containers and we can access their GUI using VNC viewer
a) Install VNC viewer from
https://www.realvnc.com/en/connect/download/viewer/
Docker compose
The disadvantage of the "link" option is that it is deprecated, and the same individual commands have to be given multiple
times to set up similar architectures. To avoid this, we can use Docker Compose. Docker Compose uses yml files to
set up a multi-container architecture, and these files can be reused any number of times.
UseCase
Create a docker compose file to setup a mysql and wordpress container and link them
---
version: '3.8'
services:
  mydb:
    image: mysql:5
    environment:
      MYSQL_ROOT_PASSWORD: intelliqit
  mywordpress:
    image: wordpress
    ports:
      - 8888:80
    links:
      - mydb:mysql
...
UseCase
Create a docker compose file to setup the CI-CD environment where a jenkins container is linked with 2 tomee
containers, one for the qaserver and the other for the prodserver
vim docker-compose.yml
---
version: '3.8'
services:
  myjenkins:
    image: jenkins/jenkins
    ports:
      - 5050:8080
  qaserver:
    image: tomee
    ports:
      - 6060:8080
    links:
      - myjenkins:jenkins
  prodserver:
    image: tomee
    ports:
      - 7070:8080
    links:
      - myjenkins:jenkins
...
UseCase
Create a docker compose file to setup a LAMP architecture where mysql, apache and php containers are linked together
vim lamp.yml
---
version: '3.8'
services:
  mydb:
    image: mysql
    environment:
      MYSQL_ROOT_PASSWORD: intelliqit
  apache:
    image: httpd
    ports:
      - 8989:80
    links:
      - mydb:mysql
  php:
    image: php:7.2-apache
    links:
      - mydb:mysql
      - apache:httpd
...
UseCase
Create a docker compose file to setup the selenium hub and node architecture
vim docker-compose.yml
---
version: '3.8'
services:
  hub:
    image: selenium/hub
    ports:
      - 4444:4444
    container_name: hub
  chrome:
    image: selenium/node-chrome-debug
    ports:
      - 5901:5900
    links:
      - hub:selenium
    container_name: chrome
  firefox:
    image: selenium/node-firefox-debug
    ports:
      - 5902:5900
    links:
      - hub:selenium
    container_name: firefox
...
Docker Volumes
Containers are ephemeral (temporary), but the data processed by the containers should be persistent. Once a
container is deleted, all the data of the container is lost. To preserve the data even after the container is deleted, we
can use volumes.
4 Create another centos container c2 and it should use the volumes used by c1
docker run --name c2 -it --volumes-from c1 centos
7 Create another centos container c3 and it should use the volume used by c2
docker run --name c3 -it --volumes-from c2 centos
10 Go into any of the 3 containers and we will see all the files
docker attach c1
cd /data
ls
exit
1 Create a volume
docker volume create myvolume
5 Create a centos container and mount the above volume into the tmp folder
docker run --name c1 -it -v myvolume:/tmp centos
If we create any files here, they will be reflected on the host machine, and these files will be present on the host
even after the container is deleted.
UseCase
Create a volume "newvolume" and create a tomcat-users.xml file in it
Create a tomcat container and mount the above volume into it, then copy the tomcat-users.xml file to the required
location
1 Create a volume
docker volume create newvolume
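The remaining steps are missing; a sketch under the assumption of a default Linux docker host (the volume Mountpoint path) and the standard tomee conf location:

```shell
# 2 Find the volume's location on the host and create tomcat-users.xml there
docker volume inspect newvolume          # shows the Mountpoint
vim /var/lib/docker/volumes/newvolume/_data/tomcat-users.xml

# 3 Mount the volume into a tomcat container and copy the file to the conf folder
docker run --name appserver -d -P -v newvolume:/data tomee
docker exec appserver cp /data/tomcat-users.xml /usr/local/tomee/conf/
```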
UseCase
Create an ubuntu container and install some software in it. Save this container as an image and later create a new
container from the newly created image. We will find all the software that we installed.
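The commands are not shown; this is typically done with docker commit (the names u1/u2 and the image tag myubuntu:v1 are assumptions):

```shell
docker run --name u1 -it ubuntu
# inside the container:
#   apt-get update && apt-get install -y git
#   exit

# Save the container as a new image
docker commit u1 myubuntu:v1

# A container created from the new image already has git installed
docker run --name u2 -it myubuntu:v1
git --version
```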
Dockerfile
A dockerfile uses predefined keywords to create customised docker images.
FROM : Used to specify the base image from which the customised image is built
MAINTAINER : This represents the name of the organization or the author that has created this dockerfile
RUN : Used to run Linux commands in the container. Generally it is used to do software installations or to run scripts
USER : This is used to specify who should be the default user to login into the container
COPY : Used to copy files from the host to the customised image that we are creating
ADD : This is similar to COPY in that it can copy files from the host to the image, but ADD can also download files from
a remote server
VOLUME : Used for automatic volume mounting, i.e. we will have a volume mounted automatically when the
container starts
EXPOSE : Used to specify the port on which the container runs its default process
CMD : Used to run the default process of the container from outside
ENTRYPOINT : This is also used to run the default process of the container
LABEL : Used to store data about the docker image in key-value pairs
SHELL : Used to specify which shell should be used by default by the image
UseCase
1 Create a dockerfile to use nginx as the base image and specify the maintainer as intelliqit
FROM nginx
MAINTAINER intelliqit
3 Check if the image is created or not
docker images
UseCase
4 Create a container from the new image and it should have git installed
docker run --name u1 -it myubuntu
git --version
Cache Busting
When we create an image from a dockerfile, docker stores all the executed instructions in its cache. The next time,
if we edit the same dockerfile, add a few new instructions, and build an image out of it, docker will not execute
the previously executed statements; instead, it will read them from the cache. This is a time-saving mechanism. The
disadvantage is that if the dockerfile is edited after a long time gap, we might end up installing software that is
outdated.
Eg:
FROM ubuntu
RUN apt-get update
RUN apt-get install -y git
If we build an image from the above dockerfile docker saves all these instructions in the docker cache and if we
add the below statement
RUN apt-get install -y tree
only this latest statement will be executed.
To avoid this problem and make docker execute all the instructions once more without reading from the cache,
we use "cache busting":
docker build --no-cache -t myubuntu .
Download the docker install shell script onto the docker host, copy it into the customised docker image, and
install it at the time of creating the image
2 Build an image
docker build -t ansible .
This container will have ansible installed in it
Create a dockerfile from ubuntu base image and download jenkins.war into it
1 Create a dockerfile
vim dockerfile
FROM ubuntu
MAINTAINER intelliqit
ADD https://get.jenkins.io/war-stable/2.263.4/jenkins.war /
Create a dockerfile from jenkins base image and make the default user as root
1 vim dockerfile
FROM jenkins/jenkins
MAINTAINER intelliqit
USER root
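Steps 2 and 3 (build and run) are elided here; they were presumably similar to:

```shell
# 2 Build an image
docker build -t myjenkins .

# 3 Create a container (the name j1 matches step 4 below)
docker run --name j1 -d -P myjenkins
```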
4 Go into the interactive shell and check if the default user is root
docker exec -it j1 bash
whoami
Create a dockerfile from nginx base image and expose port 90
1 vim dockerfile
FROM nginx
MAINTAINER intelliqit
EXPOSE 90
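The build and run steps are missing; a likely sketch (the image name mynginx is an assumption):

```shell
docker build -t mynginx .
docker run --name n1 -d -P mynginx

# With -P, the exposed ports (80 from the base image and the extra 90) are mapped to random host ports
docker port n1
```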
1 Create a dockerfile
vim dockerfile
FROM ubuntu
MAINTAINER intelliqit
VOLUME /data
2 Create an image
docker build -t myubuntu .
3 Create a container
docker run --name u1 -it myubuntu
cd data
touch file1 file2
exit
4 Delete container
docker rm -f u1
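A verification step fits here; since docker rm without -v keeps volumes, the anonymous volume created by the VOLUME instruction survives the container:

```shell
# 5 The anonymous volume is still listed even though the container is gone
docker volume ls

# Mounting that volume into a new container would show file1 and file2 again
```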
vim dockerfile
FROM ubuntu
MAINTAINER intelliqit
RUN apt-get update
RUN apt-get install -y openjdk-11-jdk
ADD https://get.jenkins.io/war-stable/2.426.2/jenkins.war /
ENTRYPOINT ["java","-jar","jenkins.war"]
2 Create an image
docker build -t myubuntu .
3 Create a container
docker run --name u1 -it myubuntu
The container now behaves like jenkins
UseCase
Create a dockerfile from ubuntu base image and make it behave like nginx
1 Create a dockerfile
vim dockerfile
FROM ubuntu
MAINTAINER intelliqit
RUN apt-get update
RUN apt-get install -y nginx
ENTRYPOINT ["/usr/sbin/nginx","-g","daemon off;"]
EXPOSE 80
3 Create a container from the above image and it will work like nginx
docker run --name n1 -d -P myubuntu
Eg:
FROM ubuntu
RUN apt-get update
RUN apt-get install -y nginx
CMD ["/usr/sbin/nginx","-g","daemon off;"]
EXPOSE 80
Though the default process is to trigger nginx, we can bypass that and make the container work on some other process.
Here, if we inspect the default process, we will see nginx as the default process:
docker container ls
On the other hand, we can modify that default process to something else:
docker run --name u1 -d -P myubuntu ls -la
Now if we do "docker container ls" we will see the default process to be "ls -la"
Docker Networking
Docker supports 4 types of networks
1 Bridge
2 Host
3 Null
4 Overlay
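The first three exist by default on any docker host and can be listed directly:

```shell
docker network ls
# The default networks are named bridge, host and none,
# using the bridge, host and null drivers respectively
```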
UseCase
Create 2 bridge networks intelliq1 and intelliq2
Create 3 busybox containers c1, c2 and c3
c1 and c2 should run on the intelliq1 network and should ping each other
c3 should run on the intelliq2 network and it should not be able to ping c1 or c2
Now put c2 on the intelliq2 network as well; since c2 is on both the intelliq1 and intelliq2
networks, it should be able to ping both c1 and c3, but c1 and c3 should not be able to ping each other directly
1 Create 2 bridge networks
docker network create --driver bridge intelliq1
docker network create --driver bridge intelliq2
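The remaining steps are not shown; a plausible sketch (sleep keeps the busybox containers alive; ping by name works because these are user-defined bridge networks):

```shell
# 2 Create the containers on their networks
docker run --name c1 -d --network intelliq1 busybox sleep 3600
docker run --name c2 -d --network intelliq1 busybox sleep 3600
docker run --name c3 -d --network intelliq2 busybox sleep 3600

# 3 c1 and c2 can ping each other; c3 cannot reach them
docker exec c1 ping -c 2 c2     # works
docker exec c3 ping -c 2 c1     # fails

# 4 Attach c2 to the second network as well
docker network connect intelliq2 c2
docker exec c2 ping -c 2 c3     # now works
```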
UseCase
Create a customised centos image and upload into the public registry
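Steps 1 and 2 are elided; they were presumably a commit and a login (the image tag comes from the push command in step 3):

```shell
# 1 Save a container as an image named after the Docker Hub account
docker commit c1 intelliqit/nginx19

# 2 Authenticate against hub.docker.com
docker login
```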
3 Push to registry
docker push intelliqit/nginx19
==================================================================================
Day 12
==================================================================================
Private Registry
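The setup commands are missing here; a common sketch uses the official registry image (port 5000 is its conventional port):

```shell
# Run a private registry as a container on one of our local servers
docker run --name myregistry -d -p 5000:5000 registry:2

# Tag an image with the registry address and push it
docker tag nginx localhost:5000/nginx
docker push localhost:5000/nginx
```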
ECR
1 Create an IAM role with admin privileges and assign to docker host
2 Search for ECR service on aws and create a private ecr registry
3 Click on View push command and copy paste the command in the docker host
Docker compose by default creates its own customised bridge network and creates the containers on that network
vim docker-compose.yml
---
version: '3.8'
services:
  mydb:
    image: postgres
    environment:
      POSTGRES_PASSWORD: intelliqit
      POSTGRES_DB: mydb
      POSTGRES_USER: myuser
  adminer:
    image: adminer
    ports:
      - 8080:8080
The above 2 containers will be running on a new bridge network that is created by docker compose.
When we bring the setup down with "docker-compose down", this will not only delete the containers, it will also delete the networks that got created.
UseCase
Create a custom bridge network and create a docker compose file to start postgres and adminer container on
the above created network
  adminer:
    image: adminer
    ports:
      - 8888:8080
networks:
  default:
    external:
      name: intelliqit
...
UseCase
Use docker compose to build a customised jenkins image from a dockerfile and link it with a tomee container
vim dockerfile
FROM jenkins/jenkins
MAINTAINER intelliqit
RUN apt-get update
RUN apt-get install -y git
vim docker-compose.yml
---
version: '3.8'
services:
  jenkins:
    build: .
    ports:
      - 7070:8080
  mytomcat:
    image: tomee
    ports:
      - 6060:8080
...
Docker compose file to create 2 networks and run containers on different networks
vim docker-compose.yml
---
version: '3.8'
services:
  mydb:
    image: jenkins/jenkins
    ports:
      - 5050:8080
    networks:
      - abc
  qaserver:
    image: tomee
    ports:
      - 6060:8080
    networks:
      - xyz
  prodserver:
    image: tomee
    ports:
      - 7070:8080
    networks:
      - xyz
networks:
  abc: {}
  xyz: {}
...
Docker compose file to create 2 containers and also create 2 volumes, one for each container
---
version: '3.8'
services:
  db:
    image: mysql:5
    environment:
      MYSQL_ROOT_PASSWORD: intelliqit
    volumes:
      - mydb:/var/lib/mysql
  wordpress:
    image: wordpress
    ports:
      - 9999:80
    volumes:
      - wordpress:/var/www/html
volumes:
  mydb:
  wordpress:
...
==================================================================================
Container orchestration
Container orchestration is the process of managing multiple containers in a way that ensures they all
run at their best. This can be done through container orchestration tools, which are software
programs that automatically manage and monitor a set of containers on a single machine or across
multiple machines. Here are some key points about container orchestration:
3. Popular Platform: Kubernetes is currently the most widely used container orchestration platform.
Major public cloud providers, such as Amazon Web Services (AWS), Google Cloud Platform, IBM
Cloud, and Microsoft Azure, offer managed Kubernetes services.
4. Other Tools: Besides Kubernetes, there are other container orchestration tools like Docker
Swarm and Apache Mesos.
5. How It Works:
o The configuration file typically includes information about container images, their locations
(registries), resource provisioning, network connections, and security settings.
In summary, container orchestration simplifies the management of container lifecycles, making it easier
to handle complex, dynamic environments where containerized software and applications thrive.
Docker Swarm
Docker Swarm is a container orchestration tool that simplifies the deployment and management
of containerized applications in a clustered environment. Here are some key concepts and
features of Docker Swarm:
1. Cluster Management Integrated with Docker Engine:
o Docker Swarm allows you to create a swarm of Docker Engines (nodes) without needing
additional orchestration software.
o You can deploy application services to the swarm using the Docker CLI.
o The decentralized design means that specialization between node roles (managers and
workers) is handled at runtime, allowing you to build an entire swarm from a single disk
image.
2. Declarative Service Model:
o Define the desired state of services in your application stack using a declarative approach.
o For example, you can describe an application with a web front end service, message
queueing services, and a database backend.
3. Scaling:
o Specify the number of tasks (replicas) you want to run for each service.
o The swarm manager automatically adapts by adding or removing tasks to maintain the
desired state.
4. Desired State Reconciliation:
o The swarm manager constantly monitors the cluster state and reconciles differences
between the actual state and the desired state.
o If a worker machine hosting replicas crashes, the manager creates new replicas to replace
them.
5. Multi-Host Networking:
o Specify an overlay network for your services.
o The swarm manager assigns addresses to containers on the overlay network during
initialization or updates.
6. Service Discovery:
o Each service in the swarm is assigned a unique DNS name by the swarm manager.
o Load balancing is handled, and you can query every container running in the swarm through
an embedded DNS server.
7. Load Balancing:
o Expose service ports to an external load balancer.
o Internally, specify how to distribute service containers between nodes.
8. Secure by Default:
o TLS mutual authentication and encryption secure communications between swarm nodes.
Docker Swarm is a powerful tool for managing multiple Docker containers across different
machines, providing a robust and efficient way to orchestrate containerized applications
===============================================================================
TCP port 2376 for secure Docker client communication. This port is required for Docker Machine to work. Docker
Machine is used to orchestrate Docker hosts.
TCP port 2377. This port is used for communication between the nodes of a Docker Swarm or cluster. It only
needs to be opened on manager nodes.
TCP and UDP port 7946 for communication among nodes (container network discovery).
UDP port 4789 for overlay network traffic (container ingress networking).
=========================================================================
Load Balancing:
Each docker container has the capability to sustain a specific user load. To increase this capability, we can increase
the number of replicas (containers) on which a service runs.
UseCase
Create nginx with 5 replicas and check where these replicas are running
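The commands are not shown; a likely sketch (the service name webserver and host port 8080 are assumptions):

```shell
docker service create --name webserver --replicas 5 -p 8080:80 nginx

# Shows on which node each replica is running
docker service ps webserver
```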
UseCase
Create mysql with 3 replicas and also pass the necessary environment variables
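A plausible version of this (the password again follows the notes' convention):

```shell
docker service create --name mydb --replicas 3 -e MYSQL_ROOT_PASSWORD=intelliqit mysql:5
docker service ps mydb
```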
Day 14
Scaling
This is the process of increasing or decreasing the replica count based on requirement,
without the end user experiencing any downtime.
UseCase
Create httpd with 4 replicas and scale it to 8 and scale it down to 2
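The commands are missing; scaling is done with docker service scale (the service name myhttpd and the port are assumptions):

```shell
docker service create --name myhttpd --replicas 4 -p 9090:80 httpd
docker service scale myhttpd=8
docker service scale myhttpd=2
```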
Rolling updates
Services running in docker swarm should be updated from one version to another without the end user
experiencing any downtime.
UseCase
Create redis:3 with 5 replicas and later update it to redis:4 also, rollback to redis:3
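Steps 1 to 3 and 5 are elided; they were presumably:

```shell
# 1 Create the service
docker service create --name myredis --replicas 5 redis:3

# 3 Update to redis:4 (rolling update)
docker service update --image redis:4 myredis

# 5 Roll back to redis:3
docker service update --rollback myredis
```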
4 Check that the redis:3 replicas are shut down and in their place redis:4 replicas are running
docker service ps myredis
6 Check that the redis:4 replicas are shut down and in their place redis:3 replicas are running
docker service ps myredis
================================================================================
To remove a worker from swarm cluster
docker node update --availability drain Worker1
Drain Worker1 from the docker swarm and check if all 6 replicas are running on Manager and Worker2; make
Worker1 rejoin the swarm
Make Worker2 leave the swarm and check if all the 6 replicas are running on Manager and Worker1
4 Delete a replica
docker rm -f container_id_from_step3
To avoid this, we should maintain multiple managers. Manager nodes have the status Leader or Reachable.
If one manager node goes down, another manager becomes the Leader. The quorum is responsible for this
activity, and it uses the Raft algorithm for handling manager failovers. The quorum is also responsible for
maintaining the minimum number of managers.
The minimum count of managers required for the docker swarm to function should always be more than half of the total count of
managers.
==================================================================================
Overlay network
This is the default network used by docker swarm when containers run across multiple servers; the name of this
network is ingress.
UseCase
Docker Stack
docker compose + docker swarm = docker stack
docker compose + Kubernetes = kompose
Docker compose, when implemented at the level of docker swarm, is called docker stack. Using docker stack we
can create and orchestrate a microservices architecture at the level of production servers.
4 To delete a stack
docker stack rm stack_name
UseCase
Create a docker stack file to start 3 replicas of wordpress and one replica of mysql
vim stack1.yml
---
version: '3.8'
services:
  db:
    image: "mysql:5"
    environment:
      MYSQL_ROOT_PASSWORD: intelliqit
  wordpress:
    image: wordpress
    ports:
      - "8989:80"
    deploy:
      replicas: 3
UseCase
Create a stack file to setup a CI-CD architecture where a Jenkins container is linked with tomcats for the qa and prod
environments. The jenkins container should run only on Manager, the qaserver tomcat should run only on
Worker1, and the prodserver tomcat should run only on Worker2.
vim stack2.yml
---
version: '3.8'
services:
  myjenkins:
    image: jenkins/jenkins
    ports:
      - 5050:8080
    deploy:
      replicas: 2
      placement:
        constraints:
          - node.hostname == Manager
  qaserver:
    image: tomcat
    ports:
      - 6060:8080
    deploy:
      replicas: 3
      placement:
        constraints:
          - node.hostname == Worker1
  prodserver:
    image: tomcat
    ports:
      - 7070:8080
    deploy:
      replicas: 4
      placement:
        constraints:
          - node.hostname == Worker2
...
UseCase
Create a stack file to setup the selenium hub and nodes architecture, but also specify an upper limit on the hardware resources
vim stack3.yml
---
version: '3.8'
services:
  hub:
    image: selenium/hub
    ports:
      - 4444:4444
    deploy:
      replicas: 2
      resources:
        limits:
          cpus: "0.1"
          memory: "300M"
  chrome:
    image: selenium/node-chrome-debug
    ports:
      - 5901:5900
    deploy:
      replicas: 3
      resources:
        limits:
          cpus: "0.01"
          memory: "100M"
  firefox:
    image: selenium/node-firefox-debug
    ports:
      - 5902:5900
    deploy:
      replicas: 3
      resources:
        limits:
          cpus: "0.01"
          memory: "100M"
...
Global mode
Certain containers have to be created one per server in the cluster; for this we use global mode.
This ensures that one replica is created on every node; as the node count increases, the replica count will also
increase automatically.
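The command is missing here; global mode is requested with --mode global (the service name and image are assumptions):

```shell
docker service create --name myglobal --mode global nginx
```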
If we check the services now, we will see 3 replicas created, as we have 3 servers in the cluster:
docker service ls
Docker Secrets
This is a feature of docker swarm using which we can pass secret data to the services running in the swarm cluster.
These secrets are created on the host machine and they will be available in all the replicas of the swarm
cluster.
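The commands are not shown; a typical sketch (the secret name mypassword is an assumption; the mysql image's _FILE convention reads the password from the mounted secret):

```shell
# Create a secret from standard input
echo "intelliqit" | docker secret create mypassword -
docker secret ls

# Replicas see the secret at /run/secrets/mypassword
docker service create --name mydb --secret mypassword \
  -e MYSQL_ROOT_PASSWORD_FILE=/run/secrets/mypassword mysql:5
```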