Mod 2

This document provides an overview of the content covered in Day 2 of a Docker Fundamentals course. The topics covered include persistent data using volumes and bind mounting, an introduction to Docker Compose including using a YAML file and docker-compose commands, and an introduction to Docker Swarm including concepts, building a single and multi-node swarm cluster, and swarm features. Hands-on practices are also provided to use docker-compose to create multi-container environments for nginx-httpd-mysql and Drupal with Postgres.


Docker

Fundamentals
Day 2
Course Content – Day 2
◦ Persistent Data: Data Volumes
◦ Introduction to Docker Compose
◦ Introduction to Docker Swarm
◦ Design & Build Docker Swarm Cluster
◦ Docker Swarm features and Swarm App Lifecycle
◦ Introduction to Kubernetes [ moved to Day 3 ]
Persistent Data: Data
Volumes
Persistent Data: Bind Mounting
◦ Maps a host file or directory into a container
◦ Uses your local disk rather than the container's UFS-based disk
◦ Containers are immutable infrastructure
◦ Persistent data (e.g., a database) cannot be immutable
◦ Persistent data solutions:
◦ Volume
◦ Run-time solution: bind mount
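A minimal sketch of both approaches (image names, paths, and the password are examples, not from the course environment):

```shell
# Bind mount: map a host directory into the container (a run-time decision)
docker run -d --name web \
  -v /home/user/html:/usr/share/nginx/html \
  nginx

# Named volume: Docker manages the storage; data survives container removal
docker volume create mydata
docker run -d --name db \
  -v mydata:/var/lib/mysql \
  -e MYSQL_ROOT_PASSWORD=mypass \
  mysql:5.7
```

`docker volume ls` and `docker volume inspect mydata` show where Docker stores the volume on the host.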
Persistent Data: Volume Mounting
Introduction to
Docker Compose
What’s in this section?
◦ Introduction to Docker Compose
◦ YAML file
◦ docker-compose sample usage
◦ Using compose to Build Image and bringing up the containers
◦ Practice A : Using docker-compose to create multi-container environment – nginx-httpd-mysql
◦ Practice B : Using docker-compose to create multi-container environment – Drupal - Postgres
Introduction to Docker Compose
◦ Compose is a tool for defining and running multi-container Docker applications.
◦ With Compose, you use a YAML file to configure your application’s services.
◦ Then, with a single command, you create and start all the services from your configuration.
◦ Compose works in all environments: production, staging, development, testing, as well as CI workflows.
◦ Using Compose is basically a three-step process:
◦ A) Define your app’s environment with a Dockerfile so it can be reproduced anywhere

◦ B) Define the services that make up your app in docker-compose.yml so they can be run together in an isolated
environment

◦ C) Run docker-compose up and Compose starts and runs your entire app
YAML file concept for build setup
◦ Version – to match your Docker version
◦ Services – the containers we want to run
◦ Container-specific settings and naming

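These three parts can be sketched in a minimal docker-compose.yml (service name, image, and ports are examples):

```yaml
version: '3'        # match to your Docker Engine version

services:           # the containers we want to run
  web:              # container-specific settings and naming
    image: nginx
    ports:
      - "8080:80"
```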
docker-compose sample usage
◦ Create the YAML file – you might not get it right the first time; try again
docker-compose sample usage
◦ Run docker-compose command with up
docker-compose sample usage
◦ If docker-compose ran successfully, it pulls all the images needed and starts the containers with all the settings supplied in the docker-compose.yml file

◦ Naming of the containers is automated; if no settings are supplied, images will use defaults
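The common command cycle looks like this (standard docker-compose commands, shown as a sketch):

```shell
# Bring the whole stack up in the background
docker-compose up -d

# List the running containers and follow their logs
docker-compose ps
docker-compose logs

# Tear everything down (add -v to also remove named volumes)
docker-compose down
```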
Using compose to Build Image and bringing
up the containers
◦ Standard build image operation:
Using compose to Build Image and bringing
up the containers
◦ Instead of using the standard build operation, you can use docker-compose to build the image before
bringing up the container
◦ In docker-compose.yml, you are required to use the build option to specify the build operation before
bringing the container up
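A sketch of a service with a build section (context path, image tag, and ports are examples):

```yaml
version: '3'

services:
  web:
    build:
      context: .              # directory containing the Dockerfile
      dockerfile: Dockerfile
    image: myapp:latest       # tag applied to the built image
    ports:
      - "8080:80"
```

`docker-compose up --build` forces a rebuild of the image before starting the container.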
Practice A : Using docker-compose to create
multi-container environment – nginx-httpd-mysql
◦ Create a docker-compose.yml file that will bring up 3 containers
◦ The 3 containers are:
◦ nginx that listens on port 8080, named proxy
◦ MySQL that uses 'mypass' as the password, named mydb
◦ An httpd service that uses port 80, named webserver

◦ The 3 containers are not connected to each other; this practice is about docker-compose.yml and
understanding how to bring up multiple containers with a single command
Practice A: Solution
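One possible solution, as a sketch (here the service names carry the required names; the course's reference solution may differ, e.g. by using container_name):

```yaml
version: '3'

services:
  proxy:
    image: nginx
    ports:
      - "8080:80"

  mydb:
    image: mysql
    environment:
      MYSQL_ROOT_PASSWORD: mypass

  webserver:
    image: httpd
    ports:
      - "80:80"
```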
Practice B : Using docker-compose to create
multi-container environment – Drupal - Postgres
◦ Set up 2 containers, using the Drupal CMS with a Postgres database as the backend, using docker-compose
◦ The drupal image needs customization:
◦ Use drupal 8.6 ( or any other version if this one does not work )
◦ Install git in the drupal image
◦ Use git to clone https://git.drupal.org/project/bootstrap.git
◦ This clones a newer theme into the custom drupal image ( so we can use it after the container is up )
◦ The postgres image does not need any customization
◦ Try version 9.6; drop to an older version if it's not working
◦ You also need to create separate volumes to preserve the data for both drupal and postgres
◦ This is required to keep data when you bring the container down; when you start it again, it will use the volumes and
your data is preserved
◦ Read the documentation on Docker Hub for both drupal and postgres to get insight into which keys to use to
customize them
Practice B: Solution
◦ NEED to FIND ONE ☺

◦ Find in compose-assignment2 && good luck
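One possible shape of the solution, as a hedged sketch: a custom Dockerfile (FROM drupal:8.6, RUN apt-get install git, then git clone the bootstrap theme into the themes directory) plus a compose file like the following. Volume names, mount paths, and the password are assumptions; check the drupal and postgres pages on Docker Hub:

```yaml
version: '3'

services:
  drupal:
    build: .                        # builds the customized drupal image
    ports:
      - "8080:80"
    volumes:
      - drupal-data:/var/www/html/sites

  postgres:
    image: postgres:9.6
    environment:
      POSTGRES_PASSWORD: mypass     # example password
    volumes:
      - db-data:/var/lib/postgresql/data

volumes:
  drupal-data:
  db-data:
```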


Introduction to
Docker Swarm
What’s in this section?
◦ Problems with Containers
◦ What is docker swarm?
Problems with Containers
◦ Containers are everywhere …. ☺
◦ How do we automate container delivery, i.e., the lifecycle?
◦ Demands grow…. up up up … more more more …. slow slow slow …. 
◦ How to scale out?
◦ How to scale up?
◦ How to make containers HA ( highly available ) or FT ( fault tolerant )?
◦ How to update a container without downtime? --- rolling update ☺
◦ How to create a cross-node network?
◦ How to store secrets, keys, and passwords so that only the right container can retrieve them?
HA and Scale up / Out
What is docker swarm?
◦ A Docker Swarm is a group of either physical or virtual machines that are running the Docker
application and that have been configured to join together in a cluster

◦ Once a group of machines have been clustered together, you can still run the Docker commands that
you're used to, but they will now be carried out by the machines in your cluster.

◦ Docker swarm is a container orchestration tool, meaning that it allows the user to manage multiple
containers deployed across multiple host machines

◦ One of the key benefits associated with the operation of a docker swarm is the high level of availability
offered for applications
Docker Definitions
◦ Docker is a software platform that enables software developers to easily integrate the use of containers
into the software development process

◦ Containers and their utilization and management in the software development process are the main
focus of the docker application. Containers allow developers to package applications with all of the
code and dependencies necessary for them to function in any computing environment

◦ An Image is a package of executable files that contains all of the code, libraries, runtime, binaries and
configuration files necessary to run an application

◦ A Dockerfile is the name given to the type of file that defines the contents of a portable image
Docker Swarm Definitions
◦ A docker swarm is comprised of a group of physical or virtual machines operating in a cluster.
◦ When a machine joins the cluster, it becomes a node in that swarm.
◦ The docker swarm function recognizes three different types of nodes, each with a different role within
the docker swarm ecosystem
◦ Manager Node
◦ The primary function of manager nodes is to assign tasks to worker nodes in the swarm.
◦ Leader Node
◦ The leader node makes all of the swarm management and task orchestration decisions for the swarm
◦ Worker Node
◦ By default, all manager nodes are also worker nodes and are capable of executing tasks when they have the
resources available to do so.
Benefits of Running a Docker Swarm
◦ Leverage the Power of Containers
◦ Developers love using docker swarm because it fully leverages the design advantages offered by
containers.
◦ Containers allow developers to deploy applications or services in self-contained virtual environments
◦ Containers are a more lightweight alternative to virtual machines, as their architecture allows them
to make more efficient use of computing power.
Benefits of Running a Docker Swarm
◦ Docker Swarm Helps Guarantee High Service Availability
◦ One of the main benefits of docker swarms is increasing application availability through redundancy.
◦ In order to function, a docker swarm must have a swarm manager that can assign tasks to worker nodes
◦ By implementing multiple managers, developers ensure that the system can continue to function even if
one of the manager nodes fails.
◦ Docker recommends a maximum of seven manager nodes for each cluster.
Benefits of Running a Docker Swarm
◦ Automated Load-Balancing
◦ Docker swarm schedules tasks using a variety of methodologies to ensure that there are enough
resources available for all of the containers.
◦ Through a process that can be described as automated load balancing, the swarm manager ensures that
container workloads are assigned to run on the most appropriate host for optimal efficiency.
Design & Build Docker
Swarm Cluster
What’s in this section?
◦ Docker Swarm Concept
◦ Build Single node Docker Swarm
◦ Build 3 Node Docker Swarm
Docker Swarm Concept
◦ Manager nodes handle cluster management tasks:
◦ maintaining cluster state
◦ scheduling services
◦ serving swarm mode HTTP API endpoints
◦ Using a Raft implementation, the managers maintain a consistent internal state of the entire swarm and
all the services running on it.

◦ Worker nodes
◦ Worker nodes are also instances of Docker Engine whose sole purpose is to execute containers.
◦ Worker nodes don’t participate in the Raft distributed state and don’t make scheduling decisions.
Docker Swarm Concept
◦ Swarm Manager – uses the docker service command
◦ Replicas = how many tasks a service should spawn across the nodes
◦ Tasks are the containers that you want to run in cluster mode
Docker Swarm Concept
◦ When you create a new docker service, an entire orchestration flow happens:

API → Orchestrator → Allocator → Scheduler → Dispatcher
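The flow above is triggered by a single command; a minimal sketch (service name and image are examples):

```shell
# Declare the desired state: 3 replicas of an nginx task
docker service create --name web --replicas 3 -p 8080:80 nginx

# Inspect how the tasks were scheduled across the nodes
docker service ps web
```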
Before going into swarm – requirements
◦ 3 nodes [ VM or PM ] ( centosA , centosC , centosD )
◦ Turn off SELinux – note: this weakens security and is done here only to simplify the lab
◦ Nodes must be able to communicate with each other via TCP/IP [ IP / DNS must be set ]
◦ Update /etc/hosts – so each machine can see the others by name
◦ Open protocols and ports between the hosts:
◦ TCP port 2377 for cluster management communications
◦ TCP and UDP port 7946 for communication among nodes
◦ UDP port 4789 for overlay network traffic
◦ TCP port 80 for the HTTP API endpoint
◦ Additional : set up KEY-based SSH between all nodes [ for ease of checking status during this course ]
◦ Remove any old containers that are running …. [ docker system prune … research this … ] ( start fresh )
◦ https://docs.docker.com/engine/swarm/swarm-tutorial/
Before going into swarm – requirements
◦ Checklist
❑ Turn off SELinux and reboot each node
❑ Update /etc/hosts – so each machine can see the others by name
❑ Open protocols and ports between the hosts
❑ TCP port 2377 for cluster management communications
❑ TCP and UDP port 7946 for communication among nodes
❑ UDP port 4789 for overlay network traffic
❑ TCP port 80 for the HTTP API endpoint
❑ Additional : set up KEY-based SSH between all nodes
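On CentOS nodes the port checklist can be applied with firewall-cmd (this assumes firewalld is the active firewall; adjust if your nodes use something else):

```shell
# Run on each node, then reload to apply
firewall-cmd --permanent --add-port=2377/tcp   # cluster management
firewall-cmd --permanent --add-port=7946/tcp   # node-to-node communication
firewall-cmd --permanent --add-port=7946/udp
firewall-cmd --permanent --add-port=4789/udp   # overlay (VXLAN) traffic
firewall-cmd --permanent --add-port=80/tcp     # HTTP endpoint
firewall-cmd --reload
```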
Build Single node Docker Swarm
◦ docker swarm init : What happened?
◦ Lots of PKI and security automation
◦ Root Signing Certificate created for our Swarm
◦ Certificate is issued for first Manager node
◦ Join tokens are created
◦ Raft database created to store root CA, configs and secrets
◦ Encrypted by default on disk
◦ No need for another key/value system to hold orchestration state or secrets
◦ Replicates logs amongst Managers via mutual TLS in the “control plane”
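All of the above happens from one command; a minimal sketch (the IP is an example):

```shell
# Initialize the swarm; --advertise-addr is the address other nodes will contact
docker swarm init --advertise-addr 192.168.1.10

# Verify: this node is now a manager (and the leader)
docker node ls
```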
Docker Swarm Commands
◦ New commands for docker swarm:
◦ docker node – manage Swarm nodes
◦ docker swarm – manage the Swarm ( join / leave )
◦ docker service – manage services [ the swarm counterpart of docker container run ]
◦ docker stack – manage Docker stacks ( deploy a new stack or update an existing stack )
◦ docker secret – manage Docker secrets ( for certs, passwords )
Build Single node Docker Swarm – TestOut!!
◦ Use docker service to create single node docker swarm service
◦ Task
◦ Pull stv707/vote:v1 image
Build Single node Docker Swarm
◦ Run docker service create -d --replicas 3 -p 9090:80 stv707/vote:v1
◦ This will create a swarm service with 3 replica tasks on a single node
Build Single node Docker Swarm
◦ Access port 9090 via a browser
◦ NOTE: this is to test that the container is served from 3 replicas [ single-node cluster ]
◦ The application will NOT WORK fully! It's a proof of concept: the container ID changes as you refresh, showing requests served by different replicas
Docker Swarm – Handling Failure
Build Single node: Sample RUN
◦ In this section, we will run a simple web app to demonstrate multiple web app replicas accessing a single database to
retrieve data
◦ Pull stv707/mywebapp:v5 and stv707/mysql57:v5
◦ Run stv707/mysql57 as a normal container
◦ Run stv707/mywebapp:v5 as a swarm service that uses an overlay network
Build Single node- Overlay network
◦ The only way to let containers running on different hosts connect to each other is by using an overlay
network.
◦ It can be thought of as a container network that is built on top of another network (in this case, the
physical hosts network).
◦ Docker Swarm mode comes with a default overlay network, which implements a VXLAN-based solution
with the help of libnetwork and libkv
◦ The overlay network ensures that internal load balancing will not interrupt sessions
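Creating and using an overlay network can be sketched as follows (the network name is an example; the image is the one from this section):

```shell
# Create an attachable overlay network spanning the swarm nodes
docker network create --driver overlay --attachable mynet

# Attach a service to it at creation time
docker service create --name web --network mynet \
  --replicas 3 -p 9090:80 stv707/mywebapp:v5
```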
Build Single node: Sample RUN
Build 3 Node Docker Swarm
◦ Just add the other nodes using the swarm join command with the token
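The join step can be sketched as follows (the token and IP shown are placeholders; use the ones printed by your manager):

```shell
# On the manager: print the join command for worker nodes
docker swarm join-token worker

# On each new node: paste the printed command, e.g.
docker swarm join --token SWMTKN-1-<token> 192.168.1.10:2377

# Back on the manager: confirm all nodes joined
docker node ls
```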
Build 3 Node Docker Swarm
Build 3 Node Docker Swarm – Practice
◦ Create a Multi-Node Web App
◦ ## Goal: create networks, volumes, and services for a web-based "cats vs. dogs" voting app.
◦ Here is a basic diagram of how the 5 services will work:
Build 3 Node Docker Swarm – Practice
◦ Create a Multi-Node Web App
◦ - All images are on Docker Hub, so you should use an editor to craft your commands locally, then paste
them into the swarm shell (at least that's how I'd do it)
◦ - A `backend` and a `frontend` overlay network are needed.
◦ Nothing different about them, other than that the backend will help protect the database from the voting web
app. (similar to how a VLAN setup might be used in traditional architecture)
◦ - The database server should use a named volume for preserving data. Use the new `--mount` format to
do this: `--mount type=volume,source=db-data,target=/var/lib/postgresql/data`
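A partial sketch of the setup steps (service names are placeholders; apart from postgres, which the mount path implies, the practice's actual image names come from Docker Hub):

```shell
# Two overlay networks: frontend for the voting app, backend for the database
docker network create -d overlay frontend
docker network create -d overlay backend

# Database service on the backend network, with a named volume for its data
docker service create --name db --network backend \
  --mount type=volume,source=db-data,target=/var/lib/postgresql/data \
  postgres:9.6
```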
Docker Swarm features
Swarm App Lifecycle
Swarm features : swarm mode routing mesh
◦ Docker Engine swarm mode makes it easy to publish ports for services to make them available to
resources outside the swarm.
◦ All nodes participate in an ingress routing mesh.
◦ The routing mesh enables each node in the swarm to accept connections on published ports for any
service running in the swarm, even if there’s no task running on the node
◦ Limitation:
◦ Needs an external load balancer in front of the nodes; otherwise it uses the built-in load balancer, which is stateless
Swarm features : Using Secrets in Swarm
◦ In terms of Docker Swarm services, a secret is a blob of data, such as a password, SSH private key, SSL certificate, or another piece of
data that should not be transmitted over a network or stored unencrypted in a Dockerfile or in your application’s source code.
◦ In Docker 1.13 and higher, you can use Docker secrets to centrally manage this data and securely transmit it to only those containers
that need access to it.
◦ Secrets are encrypted during transit and at rest in a Docker swarm. A given secret is only accessible to those services which have
been granted explicit access to it, and only while those service tasks are running.

◦ You can use secrets to manage any sensitive data which a container needs at runtime but you don’t want to store in the image or in
source control, such as:
◦ Usernames and passwords
◦ TLS certificates and keys
◦ SSH keys
◦ Other important data such as the name of a database or internal server
◦ Generic strings or binary content (up to 500 kb in size)
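Using a secret can be sketched as follows (secret name, value, and image are examples; the official postgres image does read `*_FILE` environment variables, but verify on its Docker Hub page):

```shell
# Create a secret from stdin
echo "myDBpassword" | docker secret create db_pass -

# Grant a service access; the secret appears as a file under /run/secrets/
docker service create --name db --secret db_pass \
  -e POSTGRES_PASSWORD_FILE=/run/secrets/db_pass \
  postgres:9.6
```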
Swarm features : Using Secrets in Swarm
Swarm features : Using Stacks in Swarm
◦ Compose File
Swarm features : Using Stacks in Swarm : Run
Swarm features : Using Stacks in Swarm
◦ Example: Stack Image
◦ Practice with : swarm-stack-1
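Deploying a stack reuses the compose file format; a sketch (the stack name is an example):

```shell
# Deploy (or update) a stack from a compose file
docker stack deploy -c docker-compose.yml voteapp

# Inspect the stack's services and tasks
docker stack services voteapp
docker stack ps voteapp

# Remove the stack
docker stack rm voteapp
```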
Swarm App Lifecycle
Swarm App Lifecycle – update image
Swarm App Lifecycle – update meta
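The two update slides above can be sketched with docker service update (service and image names are examples):

```shell
# Update the image: swarm replaces tasks gradually (rolling update)
docker service update --image stv707/vote:v2 web

# Update metadata, e.g. a published port or an environment variable
docker service update --publish-rm 8080 --publish-add 9090:80 web
docker service update --env-add NODE_ENV=production web
```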
The END of DAY 2
