Mod 2
Fundamentals
Day 2
Course Content – Day 2
◦ Persistent Data: Data Volumes
◦ Introduction to Docker Compose
◦ Introduction to Docker Swarm
◦ Design & Build Docker Swarm Cluster
◦ Docker Swarm features and Swarm App Lifecycle
◦ Introduction to Kubernetes [ pushed to Day 3 ]
Persistent Data: Data Volumes
Persistent Data: Bind Mounting
◦ Maps a host file or directory into a container
◦ Uses your local disk rather than the container's UFS-based storage
◦ Containers are immutable infrastructure
◦ Persistent data (e.g. a DATABASE) cannot be immutable
◦ The solutions for persistent data are:
◦ Volumes
◦ Runtime solution: bind mounts
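◦ A minimal illustration of both approaches (paths, names and passwords below are only examples):

    # bind mount: map a host directory into the container
    docker run -d --name web -v /home/user/html:/usr/share/nginx/html nginx

    # named volume: let Docker manage where the data lives on disk
    docker run -d --name db -v mysql-data:/var/lib/mysql -e MYSQL_ROOT_PASSWORD=mypass mysql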
Persistent Data: Volume Mounting
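◦ A quick sketch of the volume lifecycle (container and volume names are illustrative); note that the data survives removal of the container:

    docker volume create mydata
    docker volume ls
    docker volume inspect mydata

    # attach the volume; the database files outlive the container
    docker run -d --name db1 -v mydata:/var/lib/mysql -e MYSQL_ALLOW_EMPTY_PASSWORD=yes mysql
    docker rm -f db1
    docker run -d --name db2 -v mydata:/var/lib/mysql -e MYSQL_ALLOW_EMPTY_PASSWORD=yes mysql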
Introduction to Docker Compose
What’s in this section?
◦ Introduction to Docker Compose
◦ YAML file
◦ docker-compose sample usage
◦ Using Compose to build the image and bring up the containers
◦ Practice A: Using docker-compose to create a multi-container environment – nginx-httpd-mysql
◦ Practice B: Using docker-compose to create a multi-container environment – Drupal-Postgres
Introduction to Docker Compose
◦ Compose is a tool for defining and running multi-container Docker applications.
◦ With Compose, you use a YAML file to configure your application’s services.
◦ Then, with a single command, you create and start all the services from your configuration.
◦ Compose works in all environments: production, staging, development, testing, as well as CI workflows.
◦ Using Compose is basically a three-step process:
◦ A) Define your app’s environment with a Dockerfile so it can be reproduced anywhere
◦ B) Define the services that make up your app in docker-compose.yml so they can be run together in an isolated environment
◦ C) Run docker-compose up and Compose starts and runs your entire app
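◦ A minimal sketch of the three steps, assuming a simple Python web app (file names, services and ports are illustrative):

    # A) Dockerfile – defines the app's environment
    FROM python:3.7
    WORKDIR /app
    COPY . .
    RUN pip install flask
    CMD ["python", "app.py"]

    # B) docker-compose.yml – defines the services
    version: '3'
    services:
      web:
        build: .
        ports:
          - "5000:5000"
      redis:
        image: redis

    # C) one command to start everything
    docker-compose up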
YAML file concept for build setup
◦ The top-level version key must match your Docker version (each Compose file format version requires a minimum Docker Engine version)
◦ Naming of the containers is automated; if no name is supplied, Compose generates default names from the project and service names
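◦ For example, a service named web in a project directory myproject gets a default container name like myproject_web_1; the container_name key overrides this (names here are illustrative):

    version: '3'
    services:
      web:
        image: nginx
        container_name: my-nginx   # optional explicit name instead of the default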
Using Compose to Build the Image and Bring Up the Containers
◦ Standard image build operation (without Compose):
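◦ For reference, the standard operation looks like this (image and container names are illustrative):

    docker build -t myapp:latest .
    docker run -d --name myapp myapp:latest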
Using Compose to Build the Image and Bring Up the Containers
◦ Instead of the standard build operation, you can have docker-compose build the image before bringing up the container
◦ In docker-compose.yml, you are required to use the build option to specify the build operation before bringing the container up
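◦ A sketch of a compose file using the build option (paths, tags and ports are illustrative):

    version: '3'
    services:
      web:
        build:
          context: .
          dockerfile: Dockerfile
        image: myapp:latest    # tag applied to the built image
        ports:
          - "8080:80"

    # build (or rebuild) the image and bring the container up in one step
    docker-compose up -d --build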
Practice A : Using docker-compose to create
multi-container environment – nginx-httpd-mysql
◦ Create a docker-compose.yml file that will bring up 3 containers
◦ The 3 containers are:
◦ Nginx that listens on port 8080, named proxy
◦ MySQL that uses 'mypass' as the password, named mydb
◦ An httpd service that uses port 80, named webserver
◦ The 3 containers are not connected to each other; this practice is about docker-compose.yml and understanding how to bring multiple containers up with a single command
Practice A: Solution
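◦ One possible solution (interpreting "listens on port 8080" as publishing host port 8080 to nginx's port 80):

    version: '3'
    services:
      proxy:
        image: nginx
        ports:
          - "8080:80"
      mydb:
        image: mysql
        environment:
          MYSQL_ROOT_PASSWORD: mypass
      webserver:
        image: httpd
        ports:
          - "80:80"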
Practice B : Using docker-compose to create
multi-container environment – Drupal - Postgres
◦ Set up 2 containers, the Drupal CMS with a Postgres database as backend, using docker-compose
◦ The drupal image needs customization:
◦ Use drupal 8.6 (or any other version if this one does not work)
◦ Install git in the drupal image
◦ Use git to clone https://git.drupal.org/project/bootstrap.git
◦ This clones a newer theme into the custom drupal image (so we can use it after the container is up)
◦ The postgres image does not need any customization
◦ Try to use version 9.6, or drop to another version if it is not working
◦ You also need to create separate volumes to preserve the data of both drupal and postgres
◦ This is required to keep data when you bring the containers down; when you start them again, they will use the volumes and your data is preserved
◦ Read the documentation on Docker Hub for both drupal and postgres to get insight into which keys to use for customization
Practice B: Solution
◦ NEED to FIND ONE ☺
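◦ In the meantime, a possible sketch based on the exercise text and the drupal/postgres Hub docs (the password and the exact volume list are assumptions):

    # Dockerfile
    FROM drupal:8.6
    RUN apt-get update && apt-get install -y git && rm -rf /var/lib/apt/lists/*
    WORKDIR /var/www/html/themes
    RUN git clone --depth 1 https://git.drupal.org/project/bootstrap.git
    WORKDIR /var/www/html

    # docker-compose.yml
    version: '2'
    services:
      drupal:
        build: .
        ports:
          - "8080:80"
        volumes:
          - drupal-sites:/var/www/html/sites
          - drupal-themes:/var/www/html/themes
      postgres:
        image: postgres:9.6
        environment:
          POSTGRES_PASSWORD: mypasswd
        volumes:
          - drupal-data:/var/lib/postgresql/data
    volumes:
      drupal-sites:
      drupal-themes:
      drupal-data: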
Introduction to Docker Swarm
◦ Docker Swarm is a container orchestration tool, meaning that it allows the user to manage multiple containers deployed across multiple host machines
◦ Once a group of machines has been clustered together, you can still run the Docker commands that you're used to, but they will now be carried out by the machines in your cluster
◦ One of the key benefits of operating a Docker swarm is the high level of availability offered for applications
Docker Definitions
◦ Docker is a software platform that enables software developers to easily integrate the use of containers into the software development process
◦ Containers, and their utilization and management in the software development process, are the main focus of Docker. Containers allow developers to package applications with all of the code and dependencies necessary for them to function in any computing environment
◦ An image is a package of executable files that contains all of the code, libraries, runtime, binaries and configuration files necessary to run an application
◦ A Dockerfile is the file that defines the contents of a portable image
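◦ For instance, a minimal Dockerfile that defines an image serving a static page (the copied file is hypothetical):

    FROM nginx:alpine
    COPY index.html /usr/share/nginx/html/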
Docker Swarm Definitions
◦ A docker swarm is composed of a group of physical or virtual machines operating in a cluster
◦ When a machine joins the cluster, it becomes a node in that swarm
◦ The docker swarm function recognizes three different types of nodes, each with a different role within the docker swarm ecosystem:
◦ Manager Node
◦ The primary function of manager nodes is to assign tasks to worker nodes in the swarm
◦ Leader Node
◦ The leader node makes all of the swarm management and task orchestration decisions for the swarm
◦ Worker Node
◦ By default, all manager nodes are also worker nodes and are capable of executing tasks when they have the resources available to do so
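◦ Node roles can be inspected and changed from any manager (the node name is a placeholder):

    docker node ls                 # list nodes and their manager status
    docker node promote centosC    # make a worker a manager
    docker node demote centosC     # turn it back into a plain worker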
Benefits of Running a Docker Swarm
◦ Leverage the Power of Containers
◦ Developers love using docker swarm because it fully leverages the design advantages offered by containers
◦ Containers allow developers to deploy applications or services in self-contained virtual environments
◦ Containers provide a more lightweight alternative to virtual machines, as their architecture allows them to make more efficient use of computing power
Benefits of Running a Docker Swarm
◦ Docker Swarm Helps Guarantee High Service Availability
◦ One of the main benefits of docker swarms is increasing application availability through redundancy.
◦ In order to function, a docker swarm must have a swarm manager that can assign tasks to worker nodes
◦ By implementing multiple managers, developers ensure that the system can continue to function even if one of the manager nodes fails
◦ Docker recommends a maximum of seven manager nodes for each cluster.
Benefits of Running a Docker Swarm
◦ Automated Load-Balancing
◦ Docker swarm schedules tasks using a variety of methodologies to ensure that there are enough resources available for all of the containers
◦ Through a process that can be described as automated load balancing, the swarm manager ensures that container workloads are assigned to run on the most appropriate host for optimal efficiency
Design & Build Docker Swarm Cluster
What’s in this section?
◦ Docker Swarm Concept
◦ Build Single node Docker Swarm
◦ Build 3 Node Docker Swarm
Docker Swarm Concept
◦ Manager nodes handle cluster management tasks:
◦ maintaining cluster state
◦ scheduling services
◦ serving swarm mode HTTP API endpoints
◦ Using a Raft implementation, the managers maintain a consistent internal state of the entire swarm and all the services running on it
◦ Worker nodes
◦ Worker nodes are also instances of Docker Engine, whose sole purpose is to execute containers
◦ Worker nodes don't participate in the Raft distributed state and don't make scheduling decisions
Docker Swarm Concept
◦ Swarm Manager – uses the docker service command
◦ Replicas = how many tasks a service should spawn across the nodes
◦ Tasks are the containers that you want to run in cluster mode
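◦ For example (service name and replica counts are illustrative):

    # spawn a service with 3 tasks (containers) spread across the nodes
    docker service create --name web --replicas 3 -p 80:80 nginx

    docker service ls          # list services
    docker service ps web      # list the tasks of the service
    docker service scale web=5 # scale up or down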
Docker Swarm Concept
◦ When you create a new docker service, an entire orchestration pipeline runs:
◦ API → Orchestrator → Allocator → Scheduler → Dispatcher
Before Going into Swarm – Requirements
◦ 3 Nodes [ VM or PM ] ( centosA, centosC, centosD )
◦ Turn off SELinux – bad practice in general, but it simplifies this lab
◦ Nodes must be able to communicate with each other via TCP/IP [ IP / DNS must be set ]
◦ Update /etc/hosts on each machine so the machines can see each other by name
◦ Open protocols and ports between the hosts:
◦ TCP port 2377 for cluster management communications
◦ TCP and UDP port 7946 for communication among nodes
◦ UDP port 4789 for overlay network traffic
◦ TCP port 80 for the HTTP API endpoint
◦ Additional: set up key-based SSH between all nodes [ for ease of checking status in this course ]
◦ Remove any old containers that are running (see docker system prune) – start fresh
◦ https://docs.docker.com/engine/swarm/swarm-tutorial/
Before Going into Swarm – Requirements
◦ Checklist
❑ Turn off SELinux and reboot each node
❑ Update /etc/hosts so each machine can see the others by name
❑ Open protocols and ports between the hosts
❑ TCP port 2377 for cluster management communications
❑ TCP and UDP port 7946 for communication among nodes
❑ UDP port 4789 for overlay network traffic
❑ TCP port 80 for the HTTP API endpoint
❑ Additional: set up key-based SSH between all nodes
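◦ A possible way to satisfy the checklist on each CentOS node (lab use only; disabling SELinux is not a production practice):

    setenforce 0
    sed -i 's/^SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config

    firewall-cmd --permanent --add-port=2377/tcp
    firewall-cmd --permanent --add-port=7946/tcp
    firewall-cmd --permanent --add-port=7946/udp
    firewall-cmd --permanent --add-port=4789/udp
    firewall-cmd --permanent --add-port=80/tcp
    firewall-cmd --reload

    # start fresh: remove stopped containers, dangling images, unused networks
    docker system prune -f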
Build Single Node Docker Swarm
◦ docker swarm init : What happened?
◦ Lots of PKI and security automation:
◦ A root signing certificate is created for our swarm
◦ A certificate is issued for the first manager node
◦ Join tokens are created
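◦ The commands behind a single-node swarm (the IP address is illustrative):

    docker swarm init --advertise-addr 192.168.1.10

    # print the join command (including token) for workers or extra managers
    docker swarm join-token worker
    docker swarm join-token manager

    docker node ls    # verify: one node, marked as Leader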
Docker Swarm Secrets
◦ You can use secrets to manage any sensitive data which a container needs at runtime but you don't want to store in the image or in source control, such as usernames and passwords, TLS certificates and keys, and SSH keys
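◦ A short sketch of the secrets workflow (the names and password are illustrative; the official postgres image reads *_FILE environment variables):

    # create a secret from stdin
    echo "mypass" | docker secret create db_password -

    # the secret appears inside the container at /run/secrets/db_password
    docker service create --name db \
      --secret db_password \
      -e POSTGRES_PASSWORD_FILE=/run/secrets/db_password \
      postgres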