DevOps in 4 Weeks (O'Reilly)
DevOps in 4 Weeks
Agenda
Poll Question
What is your experience with DevOps?
• What is DevOps?
• None
• Just starting
• Reasonable
• Advanced
Poll Question
• Which days are you planning to attend (select all that apply)
• Day 1: DevOps intro
• Day 2: Containers intro
• Day 3: Kubernetes intro
• Day 4: OpenShift intro
Poll Question
Which of the following topics are most interesting to you? (choose all that apply)
• Working with Git
• Understanding DevOps
• Using CI/CD
• Working with Containers
• Kubernetes basics
• Kubernetes intermediate
• OpenShift basics
• OpenShift intermediate
Poll Question
Which of the following topics do you feel already confident with?
(select all that apply)
• Working with Git
• Understanding DevOps
• Using CI/CD
• Working with Containers
• Kubernetes basics
• Kubernetes intermediate
• OpenShift basics
• OpenShift intermediate
Poll Question
• Where are you from?
• India
• Asia (not India)
• USA or Canada
• Central America
• South America
• Africa
• Netherlands
• Europe
• Australia/Pacific
WARNING
• Today is the second time I'm starting this course
• You may see small differences between the course agenda and the
course topics list that is published for this course
• Some things may go wrong
• Your feedback is more important than ever! Feel free to send it to [email protected]
Course Overview
• On day 1, you'll learn about DevOps fundamentals. It includes a significant number of lectures, and you'll learn how to work with GitHub, Jenkins and Ansible, which are essential DevOps tools
• On day 2, we'll explore containers, the preferred way of offering access to applications in a DevOps world. A strong focus is on managing container images the DevOps way
• On day 3, you'll learn how to work with Kubernetes, the perfect tool to build container-based microservices and decouple site-specific information from the code you want to distribute
• On day 4, you'll learn how to work with the OpenShift Kubernetes
distribution, because it has a strong and advanced approach to
DevOps integration
Course Objectives
• In this course, you will learn about DevOps and common DevOps
solutions
• You will learn how to apply these solutions in Orchestrated
Containerized IT environments
• We'll zoom in on the specific parts, but in the end the main goal is to bring these parts together, allowing you to make DevOps work more efficiently by working with containers
Minimal Software Requirements
• Day 1: a base installation of any Linux distribution as a virtual machine. Recommended: Ubuntu 20.04 LTS Workstation
• Day 2: Ubuntu 20.04 LTS Workstation for working with Docker. Add a CentOS or RHEL 8.x VM if you want to learn about Podman
• Day 3: CentOS 7.x in a VM with at least 4 GiB RAM
• Day 4: Fedora Workstation in a VM with at least 12 GiB RAM
• At the end of each day, next day lab setup instructions are provided
Day 1 Agenda
• Understanding DevOps
• Using Git
• Using CI/CD
• Understanding Microservices
• Using Containers in Microservices
• Getting started with Ansible
• Homework assignment
Day 1 Objectives
• Learn about different DevOps concepts and tools
• Learn about Microservices fundamentals
• Understand why Containers can be used to bring it all together
• Learn how Container based Microservices are the perfect solution
for working the DevOps way
Day 2 Agenda
• Understanding Containers
• Running Containers in Docker or Podman
• Managing Container Images
• Managing Container Storage
• Accessing Container Workloads
• Preparing week 3 Setup and Homework
Day 2 Objectives
• Learn about containers
• Learn how to set up a containerized environment with Ansible
Day 3 Agenda
• Understanding Kubernetes
• Running Applications in Kubernetes
• Exposing Applications
• Configuring Application Storage
• Implementing Decoupling in Kubernetes
• Exploring week 4 setup and homework
Day 3 Objectives
• Learn about Kubernetes Fundamentals
• Learn how to implement Microservices based decoupling using
common Kubernetes tools
Day 4 Agenda
• Comparing OpenShift to Kubernetes
• Running Kubernetes applications in OpenShift
• Building OpenShift applications from Git source code
• Using OpenShift Pipelines
Day 4 Objectives
• Learn about OpenShift Fundamentals
• Understand how OpenShift brings Microservices based decoupling
together with CI/CD
How this course is different
• Topics in this course have overlap with other courses I'm teaching
• Containers in 4 Hours
• Kubernetes in 4 Hours
• Ansible in 4 Hours
• Getting Started with OpenShift
• This course is different, as its purpose is to teach you how to do DevOps using the tools described in these courses
• As such, this course gives an overview of technology that is explained in more depth in the above-mentioned courses
• Consider attending these other courses to fill in some of the details
Container DevOps in 4 Weeks
Day 1
Day 1 Agenda
• Understanding DevOps
• Understanding Microservices
• Using Git
• Using CI/CD
• An Introduction to Jenkins
• Getting Started with Ansible
• Using Containers in Microservices
• Homework assignment
Container DevOps in 4 Weeks
Understanding DevOps
Understanding DevOps
• In DevOps, Developers and Operators work together on
implementing new software and updates to software in the most
efficient way
• The purpose of DevOps is to reduce the time between committing a
change to a system and the change being placed in production
• DevOps is Microservices-oriented by nature, as multiple smaller projects are easier to manage than one monolithic project
• In DevOps, CI/CD pipelines are commonly implemented, using
anything from simple GitHub repositories, up to advanced CI/CD-
oriented software solutions such as Jenkins and OpenShift
Configuration as Code
• In the DevOps way of working, Configuration as code is the
common approach
• Complex commands are to be avoided; use manifest files that contain the desired configuration instead
• YAML is a common language to create these manifest files
• YAML is used in different DevOps based solutions, including
Kubernetes and Ansible
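As a minimal illustration of the format (all names and values are invented for this example), a YAML manifest describes desired state with key-value pairs, lists and nested mappings:

```yaml
# webserver.yaml - hypothetical desired-state manifest
package: nginx                  # scalar key-value pair
state: started
listen_ports:                   # a list of values
  - 80
  - 443
tls:                            # a nested mapping
  enabled: true
  certificate: /etc/pki/web.crt
```

Indentation with spaces (never tabs) defines the structure; both Ansible playbooks and Kubernetes resource files build on exactly these constructs.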
The DevOps Cycle and its Tools
This is the framework for this course
• Coding: source code management tools - Git
• Building: continuous integration tools – Jenkins, OpenShift
• Testing: continuous testing tools – Jenkins, OpenShift
• Packaging: packaging tools – Jenkins, Dockerfile, Docker compose,
OpenShift
• Releasing: release automation – Docker, Kubernetes, OpenShift
• Configuring: configuration management tools – Ansible, Kubernetes
• Monitoring: applications monitoring – Kubernetes
Container DevOps in 4 Weeks
Understanding Microservices
Understanding Microservices
• Microservices define an application as a collection of loosely
coupled services
• Each of these services can be deployed independently
• Each of them is independently developed and maintained
• Microservices components are typically deployed as containers
• Microservices are a replacement for monolithic applications
Microservices benefits
• When broken down into pieces, applications are easier to build and maintain
• Smaller pieces are easier to understand
• Developers can work on applications independently
• Smaller components are easier to scale
• One failing component doesn't necessarily bring down the entire
application
Container DevOps in 4 Weeks
Understanding CI/CD
What is CI/CD?
• CI/CD is Continuous Integration and Continuous Delivery/Continuous Deployment
• It's a core DevOps element that enforces automation in building, testing and deployment of applications
• The CI/CD pipeline is the backbone of modern DevOps operations
• In CI, all developers merge code changes in a central repository
multiple times a day
• CD automates the software release process based on these
frequent changes
• To do so, CD includes automated infrastructure provisioning and
deployment
Understanding CI/CD pipelines
• The CI/CD pipeline automates the software delivery process
• It builds code, runs tests (CI) and deploys a new version of the application (CD)
• Pipelines are automated so that errors can be reduced
• Pipelines are a runnable specification of the steps that a developer
needs to perform to deliver a new version of a software product
• A CI/CD pipeline can be used as just a procedure that describes how
to get from code to running software
• CI/CD pipelines can also be automated using software like Jenkins
or OpenShift
Understanding Stages of Software Release
• 1: From source to Git: git push
• 2: From Git to running code: docker build, make
• 3: Testing: smoke test, unit test, integration test
• 4: Deployment: staging, QA, production
Source Stage
• Source code ends up in a repository
• Developers need to use git push or a similar command to get their software into the repository
• The pipeline run is triggered by the source code repository
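A quick sketch of this stage (repository path, file name and commit message are made up for the example; a real setup would push to a remote repository that triggers the pipeline):

```shell
# Create a throwaway repository and commit a change, exactly what a
# developer does before 'git push' hands the code to the pipeline.
mkdir -p /tmp/pipeline-demo && cd /tmp/pipeline-demo
git init -q
git config user.email "[email protected]"   # hypothetical identity, local to this repo
git config user.name "Demo Developer"
echo 'print("hello")' > app.py
git add app.py
git commit -q -m "Add app.py"
git log --oneline    # the commit that a push would deliver to the central repository
```

In a real environment the repository has a remote (GitHub, GitLab, and so on) configured, and the push event is what starts the pipeline run.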
Build Stage
• The source code is converted into a runnable instance
• Source code written in C, Go or Java needs to be compiled
• Cloud-native software is deployed by using container images
• Failure to pass the build stage indicates there's a fundamental
problem in either the code or the generic CI/CD configuration
Test Stage
• Automated testing is used to validate code correctness and product
behavior
• Automated tests should be written by the developers
• Smoke tests are quick sanity checks
• End-to-end tests should test the entire system from the user point
of view
• Typically, test suites are used
• Failure in this stage will expose problems that the developers didn't
foresee while writing their code
Deploy Stage
• In deployment, the software is first deployed in a beta or staging
environment
• After it passes the beta environment successfully, it can be pushed to the production environment for end users
• Deployment can be a continuous process, where different parts of a microservice are deployed individually and can automatically be approved and committed to the master branch for production
Benefits of using pipelines
• Developers can focus on writing code and monitoring behavior of
their code in production
• QA has access to the latest version of the system at any time
• Product updates are easy
• Logs of all changes are always available
• Rolling back to a previous version is easy
• Feedback can be provided fast
Container DevOps in 4 Weeks
Configuration Management:
Using Ansible in DevOps
What is Ansible?
• Ansible is a Configuration Management tool
• It can be used to manage Linux, Windows, Network Devices, Cloud,
Docker and more
• The Control node runs the Ansible software, which is based on
Python
• The Control node reaches out to the managed nodes to compare
the current state with the desired state
• Desired state is defined in Playbooks, which are written in YAML
Why is Ansible DevOps?
• Ansible is Configuration as Code
Setting up a simple Ansible Environment
• On control hosts
• Use CentOS 8.x
• Enable EPEL repository
• Enable host name resolving for all managed nodes
• Generate SSH keys and copy over to managed hosts
• Install Ansible software
• Create an inventory file
• On managed hosts
• Ensure Python is installed
• Enable (key-based) SSH access
• Make sure you have a user with (passwordless) sudo privileges
Lab: Setting up Ansible
• On the Ubuntu 20.04 LTS managed hosts
• sudo apt install openssh-server
• On the CentOS 8.x control host
• sudo dnf install epel-release
• sudo dnf install -y ansible
• sudo sh -c 'echo <your.ip.addr.ess> ubuntu.example.com ubuntu >> /etc/hosts'
• ssh-keygen
• ssh-copy-id ubuntu
• echo ubuntu >> inventory
• ansible ubuntu -m ping -i inventory -u student
Using Ad-Hoc Commands
• Ansible provides 3000+ different modules
• Modules provide specific functionality and run as Python scripts on
managed nodes
• Use ansible-doc -l for a list of all modules
• Modules can be used in ad-hoc commands:
• ansible ubuntu -i inventory -u student -b -K -m user -a "name=linda"
• ansible ubuntu -i inventory -u student -b -K -m package -a
"name=nmap"
Using ansible.cfg
• While using Ansible commands, command line options can be used
to provide further details
• Alternatively, use ansible.cfg to provide some standard values
• An example ansible.cfg is in the Git repository at
https://github.com/sandervanvugt/devopsinfourweeks
Using Playbooks
• Playbooks provide a DevOps way for working with Ansible
• In a playbook the desired state is defined in YAML
• The ansible-playbook command is used to compare the current
state of the managed machine with the desired state, and if they
don't match the desired state is implemented
• ansible-playbook -i inventory -u student -K my-playbook.yaml
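As an illustration of the format (the host group matches the lab inventory, but the package and task names are invented for this sketch), a playbook for the ansible-playbook command above might look like this:

```yaml
---
# my-playbook.yaml - hypothetical playbook: the desired state is
# "nmap is installed" on all hosts in the 'ubuntu' inventory group
- name: Ensure nmap is installed
  hosts: ubuntu
  become: true
  tasks:
    - name: Install the nmap package
      package:
        name: nmap
        state: present
```

Running the same playbook twice makes no further changes: if the current state already matches the desired state, Ansible reports "ok" instead of "changed".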
Container DevOps in 4 Weeks
Day 2
Day 2 Agenda
• Understanding Containers
• Using Ansible to Setup a Docker Environment
• Running Containers in Docker or Podman
• Managing Container Images
• Uploading Images to Docker Hub
• Managing Container Storage
• Accessing Container Workloads
• Using Docker Compose
• Preparing week 3 Setup and Homework
Poll Question
Have you attended last week's class or seen its recording?
• yes
• no
Poll Question
Are you planning to attend the remaining days of this class?
• Yes
• Only day 3: Kubernetes
• Only day 4: OpenShift
Poll Question
How would you rate your own knowledge about containers?
• 0
• 1
• 2
• 3
• 4
• 5
Container DevOps in 4 Weeks
Understanding Containers
Understanding Containers
• A container is a running instance of a container image that is
fetched from a registry
• An image is like a smartphone App that is downloaded from the
AppStore
• It's a fancy way of running an application, which includes all that is
required to run the application
• A container is NOT a virtual machine
• Containers run on top of a Linux kernel, and depend on two
important kernel features
• Cgroups
• Namespaces
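Both features can be inspected on any Linux host; a short illustrative look at the current shell's namespaces and cgroups:

```shell
# Every process runs in a set of kernel namespaces; /proc exposes them
# as symlinks. A container simply gets its own set instead of the host's.
ls -l /proc/self/ns
# The cgroup membership of the current process, used to limit resources:
cat /proc/self/cgroup
```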
Understanding Container History
• Containers started as chroot directories, and have been around for
a long time
• Docker kickstarted the adoption of containers in 2013/2014
• Docker was based on LXC, a Linux native container alternative that
had been around a bit longer
Understanding Container Solutions
• Containers run on top of a container engine
• Different Container engines are provided by different solutions
• Some of the main solutions are:
• Docker
• Podman
• LXC/LXD
• systemd-nspawn
Understanding Container Types
• System containers are used as the foundation to build your own
application containers. They are not a replacement for a virtual
machine
• Application containers are used to start just one application.
Application containers are the standard
• To run multiple connected containers, you need to create a
microservice. Use docker-compose or Kubernetes Pods to do this in
an efficient way
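A minimal sketch of such a connected multi-container setup in docker-compose format (service names, images and the password are examples only):

```yaml
# docker-compose.yml - hypothetical microservice: a web front end
# plus the database it depends on, started together
version: "3"
services:
  web:
    image: nginx:latest
    ports:
      - "8080:80"       # expose the web container on the host
    depends_on:
      - db
  db:
    image: mariadb:latest
    environment:
      MYSQL_ROOT_PASSWORD: password   # example value only
```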
Container DevOps in 4 Weeks
Using Dockerfile
Understanding Dockerfile
• Dockerfile is a way to automate container builds
• It contains all instructions required to build a container image
• So instead of distributing images, you could just distribute the
Dockerfile
• Use docker build . to build the container image based on the
Dockerfile in the current directory
• Images will be stored on your local system, but you can direct the
image to be stored in a repository
• Tip: images on hub.docker.com have a link to their Dockerfile. Read it to understand how an image is built using Dockerfile!
Using Dockerfile Instructions
• FROM: identifies the base image to use. This must be the first
instruction in Dockerfile
• MAINTAINER: the author of the image
• RUN: executes a command while building the container, it is
executed before the container is run and changes what is in the
resulting container
• CMD: specifies a command to run when the container starts
• EXPOSE: exposes container ports on the container host
• ENV: sets environment variables that are passed to the CMD
• ADD: copies files from the host to the container. By default files are
copied from the Dockerfile directory
• ENTRYPOINT: specifies a command to run when the container starts
Using Dockerfile Instructions
• VOLUME: specifies the name of a volume that should be mounted
from the host into the container
• USER: identifies the user that should run tasks for building this
container, use for services to run as a specific user
• WORKDIR: set the current working directory for commands that are
running from the container
Understanding ENTRYPOINT and CMD
• Both ENTRYPOINT and CMD specify a command to run when the
container starts
• CMD specifies the command that should be run by default after
starting the container. You may override that, using docker run
mycontainer othercommand
• ENTRYPOINT can be overridden as well, but it's more work: you
need docker run --entrypoint mycommand mycontainer to
override the default command that is started
• Best practice is to use ENTRYPOINT in situations where you
wouldn't expect this default command to be overridden
ENTRYPOINT and CMD Syntax
• Commands in ENTRYPOINT and CMD can be specified in different ways
• The most common way is the Exec form, which is shaped as
<instruction> ["executable", "arg1", "arg2"]
• The alternative is to use Shell form, which is shaped as
<instruction> <command>
• While shell form seems easier to use, it runs <command> as an
argument to /bin/sh, which may lead to confusion
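For example (base image and commands are illustrative only), the same default command expressed in both forms:

```dockerfile
# Hypothetical Dockerfile fragment comparing the two forms
FROM busybox

# Exec form: the executable runs directly, without a shell
ENTRYPOINT ["echo", "hello"]
CMD ["world"]                 # default argument; 'docker run image bye' overrides it

# Shell form equivalent (commented out): runs as an argument to /bin/sh -c,
# so shell processing applies and the process is a child of sh
# CMD echo hello world
```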
Demo: Using a Dockerfile
• Dockerfile demo is in
https://github.com/devopsinfourweeks/dockerfile
• Use docker build -t nmap . to run it from the current directory
• Tip: use docker build --no-cache -t nmap . to ensure the complete
procedure is performed again if you need to run again
• Next, use docker run nmap to run it
Building Images in Podman
• podman build allows you to build images based on Dockerfile
• Alternatively, use buildah
• buildah build does exactly the same thing
• buildah from scratch allows you to build images from scratch, using
script-like style
• See also:
https://developers.redhat.com/blog/2019/02/21/podman-and-
buildah-for-docker-users/
Lab: Working with Dockerfile
• Create a Dockerfile that deploys an httpd web server that is based
on the latest Fedora container image. Use a sample file index.html
which contains the text "hello world" and copy this file to the
/var/www/html directory. Ensure that the following packages are
installed: nginx curl wget
• Use the Dockerfile to generate the image and test its working
Using Tags
• Consider using tags on custom images you create
• Without setting a tag, the default tag :latest is used
• Use docker tag to set a tag manually: docker tag myapache myapache:1.0
• Consider using meaningful tags: docker tag myapache myapache:testing; docker tag myapache myapache:production
Using docker commit
• docker commit allows you to easily save changes to a container in
an image
• The container image keeps its original metadata
• Use on a running container
• docker commit -m my-change running_containername my-container
• docker images
• docker push my-container
Container DevOps in 4 Weeks
Homework
Day 2 homework
• For next week's session, prepare a CentOS 7.x virtual machine that has at least 4 GiB RAM, 20 GiB disk and 2 CPUs. Install this machine with the minimal installation server pattern so that it is ready for installation of a Kubernetes all-in-one (AiO) server. Further instructions are provided next week.
Container DevOps in 4 Weeks
Day 3
Day 3 Agenda
• Understanding Kubernetes
• Running Applications in Kubernetes
• Exposing Applications
• Configuring Application Storage
• Implementing Decoupling in Kubernetes
• Exploring week 4 setup and homework
Poll Question
Have you attended the previous course days or watched their recordings?
• Day 1 only
• Day 2 only
• Day 1 and Day 2
• no
Container DevOps in 4 Weeks
Understanding Kubernetes
Understanding Kubernetes
• Kubernetes offers enterprise features that are needed in a
containerized world
• Scalability
• Availability
• Decoupling between static code and site specific data
• Persistent external storage
• The flexibility to be used on premises or in the cloud
• Kubernetes is the de facto standard and currently there are no
relevant competing products
Installing Kubernetes
• In the cloud, managed Kubernetes solutions exist that offer a Kubernetes environment in just a few clicks
• On premise, administrators can build their own Kubernetes cluster
using kubeadm
• For testing, minikube can be used
Installing an AiO on-prem Cluster - 1/4
• Install some packages
• yum install git vim bash-completion
• As ordinary user with sudo privileges, clone the course Git
repository
• git clone https://github.com/sandervanvugt/microservices
• Run the setup scripts:
• cd microservices
• ./setup-docker.sh
• ./setup-kubetools.sh
• In a root shell, install a Kubernetes master node
• kubeadm init --pod-network-cidr=10.10.0.0/16
Installing an AiO on-prem Cluster - 2/4
• In a user shell, set up the kubectl client:
• mkdir -p $HOME/.kube
• sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
• sudo chown $(id -un):$(id -un) .kube/config
Installing an AiO on-prem Cluster - 3/4
• In a user shell, set up the Calico networking agent
• kubectl create -f https://docs.projectcalico.org/manifests/tigera-
operator.yaml
• wget https://docs.projectcalico.org/manifests/custom-resources.yaml
• You now need to define the Pod network, which by default is set to 192.168.0.0/16, which in general is a bad idea. I suggest setting it to 10.10.0.0/16 (the range passed to kubeadm earlier) - make sure this address range is not yet used for something else!
• sed -i -e s/192.168.0.0/10.10.0.0/g custom-resources.yaml
• kubectl create -f custom-resources.yaml
• kubectl get pods -n calico-system: wait until all pods show a state of
Ready, this can take about 5 minutes!
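To make the sed step above concrete, here it is applied to a stand-in file (not the real manifest), showing how only the network prefix is replaced:

```shell
# Stand-in for the relevant part of custom-resources.yaml
cat > /tmp/custom-resources-demo.yaml <<'EOF'
spec:
  calicoNetwork:
    ipPools:
    - cidr: 192.168.0.0/16
EOF
# Replace the default Pod network with the range passed to kubeadm earlier
sed -i -e 's/192.168.0.0/10.10.0.0/g' /tmp/custom-resources-demo.yaml
grep cidr /tmp/custom-resources-demo.yaml   # -> - cidr: 10.10.0.0/16
```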
Installing an AiO on-prem Cluster - 4/4
• By default, user Pods cannot run on the Kubernetes control node. Use the following command to remove the taint so that you can schedule Pods on it:
kubectl taint nodes --all node-role.kubernetes.io/master-
• Type kubectl get all to verify the cluster works.
• Use kubectl create deployment nginx --image=nginx to verify that
you can create applications in Kubernetes
Container DevOps in 4 Weeks
Understanding Kubernetes
dropping Docker
Docker and Kubernetes
• Kubernetes orchestrates containers, which are based on images
• Container images are highly standardized
• Docker container images are compatible with other container
runtimes such as containerd and CRI-O
• Hence, when switching to a different container runtime, your
images will still run
• Docker is no longer supported as the runtime on the K8s cluster
starting version 1.22 (late 2021)
• That means that instead of setting up Docker as the container
runtime for building a Kubernetes cluster, you'll need to set up
another runtime
Understanding the Situation
• Currently Docker is used as the container runtime in K8s
• But the only thing that K8s cares about is the containerd
component in Docker
• Including the full Docker engine makes things too complicated for K8s
• K8s needs the runtime to be compliant with the Container Runtime
Interface (CRI)
• Docker is not CRI compliant, which is why an additional component
called Dockershim needs to be included as well, and that is not
efficient
• Full explanation is here:
https://kubernetes.io/blog/2020/12/02/dont-panic-kubernetes-
and-docker/
Container DevOps in 4 Weeks
Running Applications in
Kubernetes
Understanding Kubernetes Resources
• Kubernetes resources are defined in the APIs
• Use kubectl api-resources for an overview
• Kubernetes resources are extensible, which means that you can add
your own resources
Understanding Kubernetes Key Resources
• Pod: used to run one (or more) containers and volumes
• Deployment: adds scalability and update strategy to pods
• Service: exposes pods for external use
• Persistent Volume Claim: connects to persistent storage
• ConfigMap: used to store site specific data separate from pods
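As a sketch of how these resources look in practice (resource name, label and image are examples), a minimal Deployment that manages three nginx pods:

```yaml
# nginx-deploy.yaml - hypothetical minimal Deployment
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  replicas: 3              # scalability: keep three pod copies running
  selector:
    matchLabels:
      app: nginx           # must match the pod template labels below
  template:                # the pod that this deployment manages
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:latest
```

A Service exposing these pods would select on the same app: nginx label.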
Exploring kubectl
• kubectl is the main management interface
• Make sure that bash-completion is installed for awesome tab
completion
• source <(kubectl completion bash)
• Explore kubectl -h at all levels of kubectl
Running Applications in Kubernetes
• kubectl create deployment allows you to create a deployment
• kubectl run allows you to run individual pods
• Individual pods (aka "naked pods") are unmanaged and should not
be used
• kubectl get pods will show all running Pods in the current
namespace
• kubectl get all shows running Pods and related resources in the
current namespace
• kubectl get all -A shows resources in all namespaces
Troubleshooting Kubernetes Applications
• kubectl describe pod <podname> is the first thing to do: it shows
events that have been generated while defining the application in
the Etcd database
• kubectl logs connects to the application STDOUT and can indicate errors while starting the application. This only works on running applications
• kubectl exec -it <podname> -- sh can be used to open a shell on a
running application
Lab: Troubleshooting Kubernetes Applications
• Use kubectl create deployment mybusybox --image=busybox to start a Busybox deployment
• It fails: use the appropriate tools to find out why
• After finding out why it fails, delete the deployment and start it
again, this time in a way that it doesn't fail
Container DevOps in 4 Weeks
Exposing Applications
Understanding Application Access
• Kubernetes applications are running as scaled pods in the pod
network
• The pod network is provided by the network add-on (such as Calico) and is not reachable from the outside
• To expose access to applications, service resources are used
Container DevOps in 4 Weeks
Implementing Decoupling in
Kubernetes
Demo: Running MySQL
• kubectl run mymysql --image=mysql:latest
• kubectl get pods
• kubectl describe pod mymysql
• kubectl logs mymysql
Providing Variables to Kubernetes Apps
• In an imperative way, the --env command line option can be used to provide environment variables to Kubernetes applications
• That's not very DevOps though, and something better is needed
• But let's verify that it works first: kubectl run newmysql --image=mysql --env=MYSQL_ROOT_PASSWORD=password
• Notice the alternative syntax: kubectl set env deploy/mysql MYSQL_DATABASE=mydb
Understanding ConfigMaps
• ConfigMaps are used to separate site-specific data from static data
in a Pod
• Variables: kubectl create cm variables --from-
literal=MYSQL_ROOT_PASSWORD=password
• Config files: kubectl create cm myconf --from-file=my.conf
• Secrets are base64 encoded ConfigMaps
• Addressing the ConfigMap from a Pod depends on the type of ConfigMap
• Use envFrom to address variables
• Use volumes to mount ConfigMaps that contain files
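As a sketch (the ConfigMap name 'variables' follows the example above; pod and container names are invented), a pod that imports all keys from the ConfigMap as environment variables using envFrom:

```yaml
# mysql-cm-pod.yaml - hypothetical pod consuming the 'variables' ConfigMap
apiVersion: v1
kind: Pod
metadata:
  name: mymysql
spec:
  containers:
  - name: mysql
    image: mysql:latest
    envFrom:               # import every key in the ConfigMap as a variable
    - configMapRef:
        name: variables    # the ConfigMap created with --from-literal
```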
Demo: Using a ConfigMap for Variables
• kubectl create cm myvars --from-literal=VAR1=goat --from-
literal=VAR2=cow
• kubectl create -f cm-test-pod.yaml
• kubectl logs test-pod
Demo: Using a ConfigMap for Storage
• kubectl create cm nginxconf --from-file nginx-custom-config.conf
• kubectl create -f nginx-cm.yml
• Did that work? Fix it!
• kubectl exec -it nginx-cm -- /bin/bash
• cat /etc/nginx/conf.d/default.conf
Lab: Running MySQL the DevOps way
• Create a ConfigMap that stores all required MySQL variables
• Start a new mysql pod that uses the ConfigMap to ensure the
availability of the required variables within the Pod
Container DevOps in 4 Weeks
Homework
Day 3 homework
• Next week, we are going to work with Red Hat CodeReady Containers. Make sure to prepare the following to follow along. If you don't have the resources to create this installation, that's OK: you can just attend next week's session without working through the labs
• Latest version of Fedora Workstation, with at least 12 GiB RAM, 4 CPUs and 40 GiB disk space, and nested virtualization enabled
• Create an account on https://developers.redhat.com, and download
the CodeReady Containers software
• Further instructions are provided next week
Container DevOps in 4 Weeks
Day 4
Day 4 Agenda
• Getting Started with CodeReady Containers
• Comparing OpenShift to Kubernetes
• Running Kubernetes applications in OpenShift
• Understanding Helm Charts, Operators and Custom Resources
• Building OpenShift applications from Git source code
• Creating an Application with a Template
• Understanding Pipelines in Kubernetes
• Using OpenShift Pipelines
Poll Question
Have you attended the previous course days or watched their recordings?
• Day 1 only
• Day 2 only
• Day 3 only
• Day 1 and Day 2
• Day 2 and Day 3
• Day 1 and Day 3
• All days
• no
Container DevOps in 4 Weeks
Comparing OpenShift to
Kubernetes
Understanding OpenShift
• OpenShift is a Kubernetes distribution!
• In terms of its main functionality, OpenShift is a Kubernetes distribution where developer options are integrated in an automated way
• Source 2 Image
• Pipelines (currently tech preview)
• More developed authentication and RBAC
• OpenShift adds more operators than vanilla Kubernetes
• OpenShift adds many extensions to the Kubernetes APIs
Container DevOps in 4 Weeks
Running Kubernetes
Applications in OpenShift
Running Applications in OpenShift
• Applications can be managed like in Kubernetes
• OpenShift adds easier to use interfaces as well
• oc new-app --docker-image=mariadb
• oc set -h
• oc adm -h
• Managing a running environment is very similar
• oc get all
• oc logs
• oc describe
• oc explain
• etc.
Container DevOps in 4 Weeks
Understanding Pipelines in
Kubernetes
Understanding Pipelines in a K8s Environment
• Based on https://containerjournal.com/topics/container-
ecosystems/kubernetes-pipelines-hello-new-world/
• Traditional pipelines focus on deploying workloads to specific types
of servers: Dev, Test, Prod and so on
• Pipelines in Kubernetes often do just that, with the only difference being that the dev, test and prod applications run in a dev, test or prod Kubernetes cluster or namespace
• In a Microservices approach this model doesn't work anymore
Understanding Pipelines in a Microservices environment
• Kubernetes is all about Microservices
• In Microservices, small incremental updates happen daily
• The monolithic traditional pipeline doesn't work well anymore in
such environments, and a different model is needed
• Pipelines are no longer needed for the entire application, but for
the packages in the microservice
Pipelines and Servicemesh
• Service Mesh is the solution for pipelines in a Kubernetes driven
microservices environment
• The Kubernetes pipeline will instruct service mesh to route (dev or
test) users to specific versions of application components
• In this scenario, different versions of the application run side by side, and rolling back or rolling forward is just an instruction from the service mesh to reroute to another version of the application
• As a result, a completely new approach to CI/CD pipelines is needed
• OpenShift Pipelines
• Istio Servicemesh
Container DevOps in 4 Weeks
Further Learning
Related Live Courses
• Containers:
• Containers in 4 Hours: May 4th
• Kubernetes
• Kubernetes in 4 Hours: March 9
• CKAD Crash Course: March 15-17
• CKA Crash Course: March 18, 19
• Building Microservices with Containers: May 21st
• OpenShift
• Getting Started with OpenShift: March 25
• EX180 Crash Course: April 19/20
• EX280 Crash Course: April 21/22