
Container DevOps in 4 Weeks

Agenda
Poll Question
What is your experience with DevOps?
• What is DevOps?
• None
• Just starting
• Reasonable
• Advanced
Poll Question
• Which days are you planning to attend (select all that apply)
• Day 1: DevOps intro
• Day 2: Containers intro
• Day 3: Kubernetes intro
• Day 4: OpenShift intro
Poll Question
Which of the following topics are most interesting to you? (choose all
that apply)
• Working with Git
• Understanding DevOps
• Using CI/CD
• Working with Containers
• Kubernetes basics
• Kubernetes intermediate
• OpenShift basics
• OpenShift intermediate
Poll Question
Which of the following topics do you feel already confident with?
(select all that apply)
• Working with Git
• Understanding DevOps
• Using CI/CD
• Working with Containers
• Kubernetes basics
• Kubernetes intermediate
• OpenShift basics
• OpenShift intermediate
Poll Question
• Where are you from?
• India
• Asia (not India)
• USA or Canada
• Central America
• South America
• Africa
• Netherlands
• Europe
• Australia/Pacific
WARNING
• Today is the second time I'm starting this course
• You may see small differences between the course agenda and the
course topics list that is published for this course
• Some things may go wrong
• Your feedback is more important than ever! Feel free to send to
[email protected]
Course Overview
• On day 1, you'll learn about DevOps fundamentals. It includes a significant
amount of lecture material, and you'll learn how to work with GitHub,
Jenkins and Ansible, which are essential DevOps tools
• On day 2, we'll explore containers, the preferred way of offering
access to applications in a DevOps world. A strong focus is on
managing container images the DevOps way
• On day 3, you'll learn how to work with Kubernetes, the perfect
tool to build container-based microservices and decouple site-
specific information from the code you want to distribute
• On day 4, you'll learn how to work with the OpenShift Kubernetes
distribution, because it has a strong and advanced approach to
DevOps integration
Course Objectives
• In this course, you will learn about DevOps and common DevOps
solutions
• You will learn how to apply these solutions in orchestrated,
containerized IT environments
• We'll zoom in on the specific parts, but in the end the main goal is to
bring these parts together, allowing you to make DevOps work
more efficiently by working with containers
Minimal Software Requirements
• Day 1: a base installation of any Linux distribution as a virtual
machine. Recommended: Ubuntu 20.04 LTS Workstation
• Day 2: Ubuntu 20.04 LTS Workstation for working with Docker. Add
a CentOS or RHEL 8.x VM if you want to learn about Podman
• Day 3: CentOS 7.x in a VM with at least 4 GiB RAM
• Day 4: Fedora Workstation in a VM with at least 12 GiB RAM
• At the end of each day, next day lab setup instructions are provided
Day 1 Agenda
• Understanding DevOps
• Using Git
• Using CI/CD
• Understanding Microservices
• Using Containers in Microservices
• Getting started with Ansible
• Homework assignment
Day 1 Objectives
• Learn about different DevOps concepts and tools
• Learn about Microservices fundamentals
• Understand why Containers can be used to bring it all together
• Learn how Container based Microservices are the perfect solution
for working the DevOps way
Day 2 Agenda
• Understanding Containers
• Running Containers in Docker or Podman
• Managing Container Images
• Managing Container Storage
• Accessing Container Workloads
• Preparing week 3 Setup and Homework
Day 2 Objectives
• Learn about containers
• Learn how to set up a containerized environment with Ansible
Day 3 Agenda
• Understanding Kubernetes
• Running Applications in Kubernetes
• Exposing Applications
• Configuring Application Storage
• Implementing Decoupling in Kubernetes
• Exploring week 4 setup and homework
Day 3 Objectives
• Learn about Kubernetes Fundamentals
• Learn how to implement Microservices based decoupling using
common Kubernetes tools
Day 4 Agenda
• Comparing OpenShift to Kubernetes
• Running Kubernetes applications in OpenShift
• Building OpenShift applications from Git source code
• Using OpenShift Pipelines
Day 4 Objectives
• Learn about OpenShift Fundamentals
• Understand how OpenShift brings Microservices based decoupling
together with CI/CD
How this course is different
• Topics in this course have overlap with other courses I'm teaching
• Containers in 4 Hours
• Kubernetes in 4 Hours
• Ansible in 4 Hours
• Getting Started with OpenShift
• This course is different, as its purpose is to learn how to do DevOps
using the tools described in these courses
• As such, this course gives an overview of technology explained
more in depth in the above mentioned courses
• Consider attending these other courses to fill in some of the details
Container DevOps in 4 Weeks

Day 1
Day 1 Agenda
• Understanding DevOps
• Understanding Microservices
• Using Git
• Using CI/CD
• An Introduction to Jenkins
• Getting Started with Ansible
• Using Containers in Microservices
• Homework assignment
Container DevOps in 4 Weeks

Understanding DevOps
Understanding DevOps
• In DevOps, Developers and Operators work together on
implementing new software and updates to software in the most
efficient way
• The purpose of DevOps is to reduce the time between committing a
change to a system and the change being placed in production
• DevOps is Microservices-oriented by nature, as multiple smaller
projects are easier to manage than one monolithic project
• In DevOps, CI/CD pipelines are commonly implemented, using
anything from simple GitHub repositories, up to advanced CI/CD-
oriented software solutions such as Jenkins and OpenShift
Configuration as Code
• In the DevOps way of working, Configuration as Code is the
common approach
• Complex commands are to be avoided; use manifest files containing
the desired configuration instead
• YAML is a common language to create these manifest files
• YAML is used in different DevOps based solutions, including
Kubernetes and Ansible
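As a minimal illustration (not taken from the course materials), a desired-state manifest in YAML could look like this; the resource names and values are placeholders:

# hypothetical desired-state manifest
webserver:
  image: nginx:1.14      # the image version that should be running
  replicas: 3            # how many instances are wanted
  ports:
    - 8080               # the port that should be exposed

A tool that understands such a manifest compares it with the current state and only makes changes where the two differ.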
The DevOps Cycle and its Tools
This is the framework for this course
• Coding: source code management tools - Git
• Building: continuous integration tools – Jenkins, OpenShift
• Testing: continuous testing tools – Jenkins, OpenShift
• Packaging: packaging tools – Jenkins, Dockerfile, Docker compose,
OpenShift
• Releasing: release automation – Docker, Kubernetes, OpenShift
• Configuring: configuration management tools – Ansible, Kubernetes
• Monitoring: applications monitoring – Kubernetes
Container DevOps in 4 Weeks

Understanding Microservices
Understanding Microservices
• Microservices define an application as a collection of loosely
coupled services
• Each of these services can be deployed independently
• Each of them is independently developed and maintained
• Microservices components are typically deployed as containers
• Microservices are a replacement for monolithic applications
Microservices benefits
• When broken down into pieces, applications are easier to build and
maintain
• Smaller pieces are easier to understand
• Developers can work on applications independently
• Smaller components are easier to scale
• One failing component doesn't necessarily bring down the entire
application
Container DevOps in 4 Weeks

Coding: Using Git


Using Git in a Microservices Environment
• Git can be used for version control and cooperation between
different developers and teams
• Using Git makes it easy to manage many changes in an effective
way
• Different projects in a Microservice can have their own Git
repository
• For that reason, Git and Microservices are a perfect match
Using Git
• Git is typically offered as a web service
• GitHub and GitLab are commonly used
• Alternatively, private Git repositories can be used
Understanding Git
• Git is a version control system that makes collaboration easy and
effective
• Git works with a repository, which can contain different
development branches
• Developers and users can easily upload as well as download new
files to and from the Git repository
• To do so, a Git client is needed
• Git clients are available for all operating systems
• Git servers are available online, and can be installed locally as well
• Common online services include GitHub and GitLab
Git Client and Repository
• The Git repository is where files are uploaded, and shared with
other users
• Individual developers have a local copy of the Git repository on
their computer and use the Git client to upload and download to
and from the repository
• The organization of the Git client lives in the .git directory, which
contains several files to maintain the status
Understanding Git Workflow
• To offer the best possible workflow control, a Git repository
consists of three trees maintained in the Git-managed directory
• The working directory holds the actual files
• The Index acts as a staging area
• The HEAD points to the last commit that was made
Applying the Git Workflow
• The workflow starts by creating new files in the working directory
• When working with Git, the git add command is used to add files to
the index
• To commit these files to the head, use git commit -m "commit
message"
• Use git remote add origin https://server/reponame to connect to the
remote repository
• To complete the sequence, use git push origin master. Replace
"master" with the actual branch you want to push changes to
Creating a GitHub Repository
• Create the repository on your GitHub server
• Set your user information
• git config --global user.name “Your Name”
• git config --global user.email “[email protected]
• Create a local directory that contains a README.md file. This should
contain information about the current repository
• Use git init to generate the Git repository metadata
• Use git add <filenames> to add files to the staging area
• From there, use git commit -m "commit message" to commit the
files. This will commit the files to HEAD, but not to the remote
repository yet
• Use git remote add origin https://server/reponame
• Push local files to remote repository: git push -u origin master
Understanding GitHub 2FA
• GitHub supports two-factor authentication (2FA)
• On the GitHub website, use Account > Account Security > Two-factor
authentication to set up 2FA
• Check github.com/ateucher/setup-gh-cli-auth-2fa.md for
instructions on how to use 2FA in CLI clients
Using Git Repositories
• Use git clone https://gitserver/reponame to clone the contents of
a remote repository to your computer
• To update the local repository to the latest commit, use git pull
• Use git push to send local changes back to the Git server (after
using git add and git commit obviously)
Uploading Changed Files
• Modified files need to go through the staging process
• After changing files, use git status to see which files have changed
• Next, use git add to add these files to the staging area; use git rm
<filename> to remove files
• Then, commit changes using git commit -m "minor changes"
• Synchronize, using git push origin master
• From any client, use git pull to update the current Git clone
Removing Files from Git Repos
• While removing files, the Git repository needs to receive specific
instructions about the removal
• rm vault*
• git rm vault*
• git commit
• git push
• Check in the repository, the files will now be removed
Understanding Branches
• Branches are used to develop new features in isolation from the
main branch
• The master branch is the default branch, other branches can be
manually added
• After completion, merge the branches back to the master
Using Branches
• Use git checkout -b dev-branch to create a new branch and start
using it
• Use git push origin dev-branch to push the new branch to the
remote repository
• Use git checkout master to switch back to the master
• Use git merge dev-branch to merge the dev-branch back into the
master
• Delete the branch using git branch -d dev-branch
Lab: Using Git
• Go to https://github.com, and create an account if you don't have an
account yet
• Create a new Git repository from the website
• From a Linux client, create a local directory with the name of the Git
repository
• Use the following commands to put some files in it
• echo "new git repo" >README.md
• git init
• git add *
• git status
• git commit -m "first commit"
• git remote add origin https://github.com/yourname/yourrepo
• git push -u origin master
Container DevOps in 4 Weeks

Understanding CI/CD
What is CI/CD
• CI/CD is continuous integration and continuous delivery/continuous
deployment
• It's a core DevOps element that enforces automation in building,
testing and deployment of applications
• The CI/CD pipeline is the backbone of modern DevOps operations
• In CI, all developers merge code changes in a central repository
multiple times a day
• CD automates the software release process based on these
frequent changes
• To do so, CD includes automated infrastructure provisioning and
deployment
Understanding CI/CD pipelines
• The CI/CD pipeline automates the software delivery process
• It builds code, runs tests (CI) and deploys a new version of the
application (CD)
• Pipelines are automated so that errors can be reduced
• Pipelines are a runnable specification of the steps that a developer
needs to perform to deliver a new version of a software product
• A CI/CD pipeline can be used as just a procedure that describes how
to get from code to running software
• CI/CD pipelines can also be automated using software like Jenkins
or OpenShift
Understanding Stages of Software Release
• 1: From source to Git: git push
• 2: From Git to running code: docker build, make
• 3: Testing: smoke test, unit test, integration test
• 4: Deployment: staging, QA, production
Source Stage
• Source code ends up in a repository
• Developers use git push or a similar command to get their software
into the repository
• The pipeline run is triggered by the source code repository
Build Stage
• The source code is converted into a runnable instance
• Source code written in C, Go or Java needs to be compiled
• Cloud-native software is deployed by using container images
• Failure to pass the build stage indicates there's a fundamental
problem in either the code or the generic CI/CD configuration
Test Stage
• Automated testing is used to validate code correctness and product
behavior
• Automated tests should be written by the developers
• Smoke tests are quick sanity checks
• End-to-end tests should test the entire system from the user point
of view
• Typically, test suites are used
• Failure in this stage will expose problems that the developers didn't
foresee while writing their code
Deploy Stage
• In deployment, the software is first deployed in a beta or staging
environment
• After it passes the beta environment successfully, it can be pushed
to the production environment for end users
• Deployment can be a continuous process, where different parts of a
microservice are deployed individually and can automatically be
approved and committed to the master branch for production
Benefits of using pipelines
• Developers can focus on writing code and monitoring the behavior of
their code in production
• QA has access to the latest version of the system at any time
• Product updates are easy
• Logs of all changes are always available
• Rolling back to a previous version is easy
• Feedback can be provided fast
Container DevOps in 4 Weeks

Building – testing – packaging:


Taking a Jenkins Quick Start
Using Jenkins Pipelines
• Jenkins is a very common open source tool to manage pipelines
• Jenkins pipelines offer four stages of continuous delivery
• Build
• Deploy
• Test
• Release
Benefits of using Jenkins pipelines
• CI/CD is defined in a Jenkinsfile that can be checked into Source
Code Management
• It supports complex pipelines with conditional loops, forks, parallel
execution and more
• It can resume from previously saved checkpoints
• Many plugins are available
Understanding Jenkinsfile
• Jenkinsfile stores the whole process as code
• Jenkinsfile can be written in Groovy DSL syntax (scripted approach)
• Jenkinsfile can be generated by a tool (declarative approach)
Installing Jenkins on Ubuntu
• sudo apt-get install openjdk-11-jdk
• wget -q -O - https://pkg.jenkins.io/debian-stable/jenkins.io.key | sudo
apt-key add -
• sudo sh -c 'echo deb https://pkg.jenkins.io/debian-stable binary/ >
/etc/apt/sources.list.d/jenkins.list'
• sudo apt-get update
• sudo apt-get install jenkins
• read password: sudo cat /var/lib/jenkins/secrets/initialAdminPassword
• Access Jenkins at http://localhost:8080, skip over initial user creation
• Install suggested plugins
Understanding How Jenkins Works
• Jenkins is going to run jobs on the computer that hosts Jenkins
• If the Jenkinsfile needs the Docker agent to run a Job, the Docker
software must be installed on the host computer
• Also, the jenkins user must be a member of the docker group, so
that this user has sufficient permissions to run the jobs
Installing Docker on Ubuntu
• sudo apt-get install apt-transport-https ca-certificates curl gnupg-agent
software-properties-common
• curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key
add -
• sudo apt-key fingerprint 0EBFCD88
• sudo add-apt-repository "deb [arch=amd64]
https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"
• sudo apt-get update
• sudo apt-get install docker-ce docker-ce-cli containerd.io
• sudo docker run hello-world
• sudo usermod -aG docker jenkins
• sudo systemctl restart jenkins
Understanding Jenkinsfile syntax
• pipeline is a mandatory block in the Jenkinsfile that defines all
stages that need to be processed
• node is a system that runs a workflow
• agent defines which agent should process the CI/CD. docker is a
common agent, and can be specified to run a specific image that
runs the commands in the Jenkinsfile
• stages defines the different levels that should be processed: (build,
test, qa, deploy, monitor)
• steps defines the steps in the individual stages
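To see how these blocks fit together, here is a minimal sketch of a declarative Jenkinsfile; the stage names and echo commands are placeholders, and this is not the firstpipeline example used later in the course:

pipeline {
    agent { docker { image 'alpine:latest' } }   // requires the Docker Pipeline plugin
    stages {
        stage('Build') {
            steps {
                echo 'building...'               // replace with real build commands
            }
        }
        stage('Test') {
            steps {
                echo 'testing...'                // replace with real test commands
            }
        }
    }
}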
Using Jenkins
• Log in at http://localhost:8080
• Select Manage Jenkins > Manage Plugins
• Select Available > Docker and Docker Pipeline plugins and install them
Creating your First Pipeline
• Select Dashboard > New Item
• Enter an item name: myfirstpipeline
• Select Pipeline, click OK
• Select Pipeline, set Definition to Pipeline Script
• In the script code, manually copy the contents of
https://github.com/sandervanvugt/devopsinfourweeks/firstpipeline
• Click Apply, Save
• In the menu on the left, select Build Now, it should run successfully
• Click the build time and date, from there select Console Output to
see console output
Lab: Running a Pipeline
• In https://github.com/sandervanvugt/devopsinfourweeks, you'll
find the file secondpipeline, which contains a pipeline script. Build a
pipeline in Jenkins based on this file and verify that it is successful
• Notice that step two prompts for input. Click the step 2 field to view
the prompt and provide your input
Container DevOps in 4 Weeks

Configuration Management:
Using Ansible in DevOps
What is Ansible?
• Ansible is a Configuration Management tool
• It can be used to manage Linux, Windows, Network Devices, Cloud,
Docker and more
• The Control node runs the Ansible software, which is based on
Python
• The Control node reaches out to the managed nodes to compare
the current state with the desired state
• Desired state is defined in playbooks, which are written in YAML
Why is Ansible DevOps?
• Ansible is Configuration as Code
Setting up a simple Ansible Environment
• On control hosts
• Use CentOS 8.x
• Enable EPEL repository
• Enable host name resolving for all managed nodes
• Generate SSH keys and copy over to managed hosts
• Install Ansible software
• Create an inventory file
• On managed hosts
• Ensure Python is installed
• Enable (key-based) SSH access
• Make sure you have a user with (passwordless) sudo privileges
Lab: Setting up Ansible
• On the Ubuntu 20.04 LTS managed hosts
• sudo apt install openssh-server
• On the CentOS 8.x control host
• sudo dnf install epel-release
• sudo dnf install -y ansible
• sudo sh -c 'echo <your.ip.addr.ess> ubuntu.example.com ubuntu >>
/etc/hosts'
• ssh-keygen
• ssh-copy-id ubuntu
• echo ubuntu >> inventory
• ansible ubuntu -m ping -i inventory -u student
Using Ad-Hoc Commands
• Ansible provides 3000+ different modules
• Modules provide specific functionality and run as Python scripts on
managed nodes
• Use ansible-doc -l for a list of all modules
• Modules can be used in ad-hoc commands:
• ansible ubuntu -i inventory -u student -b -K -m user -a "name=linda"
• ansible ubuntu -i inventory -u student -b -K -m package -a
"name=nmap"
Using ansible.cfg
• While using Ansible commands, command line options can be used
to provide further details
• Alternatively, use ansible.cfg to provide some standard values
• An example ansible.cfg is in the Git repository at
https://github.com/sandervanvugt/devopsinfourweeks
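As an illustration of such standard values (not the exact file from the repository), an ansible.cfg might look like this; the inventory path and remote user are assumptions:

[defaults]
inventory = ./inventory     # default inventory, so -i can be omitted
remote_user = student       # default remote user, so -u can be omitted

[privilege_escalation]
become = true               # escalate privileges by default
become_ask_pass = true      # prompt for the sudo password, like -K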
Using Playbooks
• Playbooks provide a DevOps way of working with Ansible
• In a playbook the desired state is defined in YAML, as shown in the
sketch below
• The ansible-playbook command is used to compare the current
state of the managed machine with the desired state, and if they
don't match the desired state is implemented
• ansible-playbook -i inventory -u student -K my-playbook.yaml
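A minimal playbook sketch, assuming an inventory group called ubuntu and a sudo-capable remote user; the package name is just an example:

---
- name: Ensure a web server is installed and running
  hosts: ubuntu
  become: true
  tasks:
    - name: Install nginx
      package:
        name: nginx
        state: present
    - name: Start and enable nginx
      service:
        name: nginx
        state: started
        enabled: true

Running ansible-playbook against this file twice illustrates idempotency: the second run reports no changes because the desired state is already met.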
Container DevOps in 4 Weeks

Day 1 Homework Assignment


Day 1 Homework
• Next week, we're going to work with Docker containers managed
by Ansible. Set up a CentOS 8.x based Ansible control node to
manage an Ubuntu 20.04 LTS workstation according to the
instructions in today's session, so that we're ready to deploy Docker
using Ansible. Further instructions on how to do this are provided
next week.
Container DevOps in 4 Weeks

Day 2
Day 2 Agenda
• Understanding Containers
• Using Ansible to Setup a Docker Environment
• Running Containers in Docker or Podman
• Managing Container Images
• Uploading Images to Docker Hub
• Managing Container Storage
• Accessing Container Workloads
• Using Docker Compose
• Preparing week 3 Setup and Homework
Poll Question
Have you attended last week's class or seen its recording?
• yes
• no
Poll Question
Are you planning to attend the next days in this class?
• Yes
• Only day 3: Kubernetes
• Only day 4: OpenShift
Poll Question
How would you rate your own knowledge about containers?
• 0
• 1
• 2
• 3
• 4
• 5
Container DevOps in 4 Weeks

Understanding Containers
Understanding Containers
• A container is a running instance of a container image that is
fetched from a registry
• An image is like a smartphone App that is downloaded from the
AppStore
• It's a fancy way of running an application, which includes all that is
required to run the application
• A container is NOT a virtual machine
• Containers run on top of a Linux kernel, and depend on two
important kernel features
• Cgroups
• Namespaces
Understanding Container History
• Containers started as chroot directories, and have been around for
a long time
• Docker kickstarted the adoption of containers in 2013/2014
• Docker was based on LXC, a Linux native container alternative that
had been around a bit longer
Understanding Container Solutions
• Containers run on top of a container engine
• Different Container engines are provided by different solutions
• Some of the main solutions are:
• Docker
• Podman
• LXC/LXD
• systemd-nspawn
Understanding Container Types
• System containers are used as the foundation to build your own
application containers. They are not a replacement for a virtual
machine
• Application containers are used to start just one application.
Application containers are the standard
• To run multiple connected containers, you need to create a
microservice. Use docker-compose or Kubernetes Pods to do this in
an efficient way
Container DevOps in 4 Weeks

Using Ansible to Set Up a Docker Environment
Demo: Using Ansible to Set Up Docker
• Make sure you have set up the Ubuntu 20.04 workstation for
management by Ansible
• Use ansible-playbook -u student -K -i inventory ansible-ubuntu.yml
to set up the Ubuntu host
• On Ubuntu, log out and log in as your user student
• Use docker run hello-world
Container DevOps in 4 Weeks

Running Containers in Docker and Podman
Podman or Docker?
• Red Hat has changed from Docker to Podman as the default
container stack in RHEL 8
• Docker is no longer supported in RHEL 8 and related distributions
• Even if you can install Docker on top of RHEL 8, you shouldn't do it
as it will probably break with the next software update
• Podman is highly compatible with Docker
• By default, Podman runs rootless containers, which have no IP
address and cannot bind to privileged ports
• Both Docker and Podman are based on OCI standards
• For optimal compatibility, install the podman-docker package
Demo: Running Containers
• docker run ubuntu
• docker ps
• docker ps -a
• docker run -d nginx
• docker ps
• docker run -it ubuntu sh; Ctrl-p, Ctrl-q
• docker inspect ubuntu
• docker rm ubuntu
• docker run --name webserver --memory="128m" -d -p 8080:80
nginx
• curl localhost:8080
Container DevOps in 4 Weeks

Managing Container Images


Understanding Images
• A container is a running instance of an image
• The image contains application code, language runtime and
libraries
• External libraries such as libc are typically provided by the host
operating system, but in a container they are included in the image
• When a container is started, a writable layer is added on top to
store any changes that are made while working with the container
• These changes are ephemeral
• Container images are highly compatible, and either defined in
Docker or in OCI format
Getting Container Images
• Container images are normally fetched from registries
• Public registries such as https://hub.docker.com are available
• Red Hat offers https://quay.io as a registry with more advanced CI
features
• Alternatively, private registries can easily be created
• Use Dockerfile to create custom images
Fetching Images from Registries
• By default, Docker fetches images from Docker Hub
• In Podman, the /etc/containers/registries.conf file is used to specify
registry location
• Alternatively, the complete path to an image can be used to fetch it
from a specific registry: docker pull localhost:5000/fedora:latest
Understanding Image Tags
• Normally, different versions of images are available
• If nothing is specified, the latest version is pulled
• Use tags to pull a different version: docker pull nginx:1.14
Demo: Managing Container Images
• Explore https://hub.docker.com
• docker search mariadb will search for the mariadb image
• docker pull mariadb
• docker images
• docker inspect mariadb
• docker image history mariadb
• docker image rm mariadb
Container DevOps in 4 Weeks

Creating a Private Registry


Running a Private Registry
• A private registry can easily be implemented by running the registry
image as a container
• Expose the registry on local port 5000
• Tag the image with the hostname and port in the first part of the
tag, to ensure that Docker interprets it as the location of the
registry: docker tag fedora localhost:5000/fedora
• Consider using a dedicated volume for storing images in your
private registry: docker run -d -p 5000:5000 --restart=always
--name registry -v /mnt/registry:/var/lib/registry registry:2
Running a Private Registry for External Use
• To provide services for external users, a private registry should be
configured with TLS certificates for secure access
• Assuming that a CA-signed certificate key pair is available as
/certs/my.key and /certs/my.crt, use the following command to
start the secured registry:
• docker run -d --restart=always --name registry -v
"$(pwd)"/certs:/certs -e REGISTRY_HTTP_ADDR=0.0.0.0:443 -e
REGISTRY_HTTP_TLS_CERTIFICATE=/certs/my.crt -e
REGISTRY_HTTP_TLS_KEY=/certs/my.key -p 443:443 registry:2
Demo: Running a local Private Registry
• docker run -d -p 5000:5000 --restart=always --name registry registry:latest
• sudo ufw allow 5000/tcp
• docker pull fedora
• docker images
• docker tag fedora:latest localhost:5000/myfedora (the tag is required to
push it to your own image registry)
• docker push localhost:5000/myfedora
• docker rmi fedora; also remove the image based on the tag you've just
created
• docker exec -it registry sh; find . -name "myfedora"
• docker pull localhost:5000/myfedora downloads it again from your own
local registry
Container DevOps in 4 Weeks

Using Dockerfile
Understanding Dockerfile
• Dockerfile is a way to automate container builds
• It contains all instructions required to build a container image
• So instead of distributing images, you could just distribute the
Dockerfile
• Use docker build . to build the container image based on the
Dockerfile in the current directory
• Images will be stored on your local system, but you can direct the
image to be stored in a repository
• Tip: images on hub.docker.com have a link to their Dockerfile. Read it to
understand how an image is built using Dockerfile!
Using Dockerfile Instructions
• FROM: identifies the base image to use. This must be the first
instruction in Dockerfile
• MAINTAINER: the author of the image
• RUN: executes a command while building the container, it is
executed before the container is run and changes what is in the
resulting container
• CMD: specifies a command to run when the container starts
• EXPOSE: documents the ports on which the containerized application listens
• ENV: sets environment variables that are passed to the CMD
• ADD: copies files from the host to the container. By default files are
copied from the Dockerfile directory
• ENTRYPOINT: specifies a command to run when the container starts
Using Dockerfile Instructions
• VOLUME: specifies the name of a volume that should be mounted
from the host into the container
• USER: sets the user that runs subsequent instructions and the resulting
container; use it for services that should run as a specific user
• WORKDIR: set the current working directory for commands that are
running from the container
Understanding ENTRYPOINT and CMD
• Both ENTRYPOINT and CMD specify a command to run when the
container starts
• CMD specifies the command that should be run by default after
starting the container. You may override that, using docker run
mycontainer othercommand
• ENTRYPOINT can be overridden as well, but it's more work: you
need docker run --entrypoint mycommand mycontainer to
override the default command that is started
• Best practice is to use ENTRYPOINT in situations where you
wouldn't expect this default command to be overridden
ENTRYPOINT and CMD Syntax
• Commands in ENTRYPOINT and CMD can be specified in
different ways
• The most common way is the Exec form, which is shaped as
<instruction> ["executable", "arg1", "arg2"]
• The alternative is to use Shell form, which is shaped as
<instruction> <command>
• While shell form seems easier to use, it runs <command> as an
argument to /bin/sh, which may lead to confusion
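To make the difference concrete, here is a minimal, hypothetical Dockerfile using the exec form; it is not the demo file from the course repository:

FROM ubuntu:20.04
# RUN changes the image contents at build time
RUN apt-get update && apt-get install -y curl
# ENTRYPOINT: the command that is normally always executed
ENTRYPOINT ["curl"]
# CMD: default arguments that are easy to override at run time
CMD ["--help"]

With this image, docker run myimage prints the curl help text, while docker run myimage https://example.com overrides only the CMD part and fetches the page.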
Demo: Using a Dockerfile
• The Dockerfile demo is in
https://github.com/devopsinfourweeks/dockerfile
• Use docker build -t nmap . to run it from the current directory
• Tip: use docker build --no-cache -t nmap . to ensure the complete
procedure is performed again if you need to run again
• Next, use docker run nmap to run it
Building Images in Podman
• podman build allows you to build images based on Dockerfile
• Alternatively, use buildah
• buildah build is doing the exact same thing
• buildah from scratch allows you to build images from scratch, using
script-like style
• See also:
https://developers.redhat.com/blog/2019/02/21/podman-and-
buildah-for-docker-users/
Lab: Working with Dockerfile
• Create a Dockerfile that deploys an httpd web server that is based
on the latest Fedora container image. Use a sample file index.html
which contains the text "hello world" and copy this file to the
/var/www/html directory. Ensure that the following packages are
installed: nginx curl wget
• Use the Dockerfile to generate the image and test its working
Using Tags
• Consider using tags on custom images you create
• Without setting a tag, the default tag :latest is used
• Use docker tag to manually set a tag: docker tag myapache myapache:1.0
• Consider using meaningful tags: docker tag myapache myapache:testing;
docker tag myapache myapache:production
Using docker commit
• docker commit allows you to easily save changes to a container in
an image
• The container image keeps its original metadata
• Use on a running container
• docker commit -m my-change running_containername my-container
• docker images
• docker push my-container
Container DevOps in 4 Weeks

Publishing on Docker Hub


Creating an autobuild Repository on Docker Hub
• Access https://github.com
• If required, create an account and log in
• Click Create Repository
• Enter the name of the new repo; e.g. devops and set to Public
• Check Settings > Webhooks. Don't change anything, but check
again later
Creating an autobuild Repository on Docker Hub
• On a Linux console, create the local repository
• mkdir devops
• echo "hello" >> README.md
• cat > Dockerfile <<EOF
FROM busybox
CMD echo "Hello world!"
EOF
• git init
• git add *
• git commit -m "initial commit"
• git remote add origin https://github.com/yourname/devops.git
• git push -u origin master
Creating an autobuild Repository on Docker Hub
• Access https://hub.docker.com
• If required, create an account and log in
• Click Create Repository
• Enter the name of the new repo; e.g. devops and set to Public
• Under Build Settings, click Connected and enter your GitHub
"organization" as well as a repository
Creating an autobuild Repository on Docker Hub
• Still from hub.docker.com: Add a Build Rule that sets the following:
• Source: Branch
• Source: master
• Docker Tag: latest
• Dockerfile location: Dockerfile
• Build Context: /
• Check Builds > Build Activity to see progress
• Once the build is completed successfully, from a terminal use
docker pull yourname/devops:latest to pull this latest image
• On GitHub, check Settings > Webhooks for your repo
Creating an autobuild Repository on Docker Hub
• From the Git repo on the Linux console: edit the Dockerfile and add
the following line: MAINTAINER yourname [email protected]
• git status
• git add *
• git commit -m "minor update"
• git push -u origin master
• From Docker hub: Check your repository > Builds > Build Activity.
You'll see that a new automatic build has been triggered
Container DevOps in 4 Weeks

Managing Container Storage


Configuring Storage
• To work with storage, bind mounts and volumes can be used
• A bind mount provides access to a directory on the Docker host
• Volumes exist outside of the container spec, and as such outlive the
container lifetime
• Volumes offer an option to use other storage types as well
• Within Docker-CE, volume types are limited to local and nfs
• In Kubernetes or Docker Swarm more useful storage types are
provided
Demo: Using an NFS-based Volume (1/2)
• sudo apt install nfs-server nfs-common
• sudo mkdir /nfsdata
• sudo vim /etc/exports
• /nfsdata *(rw,no_root_squash)
• sudo chown nobody:nogroup /nfsdata
• sudo systemctl restart nfs-kernel-server
• showmount -e localhost
Demo: Using an NFS-based Volume (2/2)
• docker volume create --driver local --opt type=nfs --opt
o=addr=127.0.0.1,rw --opt device=:/nfsdata nfsvol
• docker volume ls
• docker volume inspect nfsvol
• docker run -it --name nfstest --rm --mount
source=nfsvol,target=/data nginx:latest /bin/sh
• touch /data/myfile; exit
• ls /nfsdata
Lab: Managing Volumes
• Use docker volume create myvol to create a volume that uses local
storage as its backend
• Inspect the volume to see what it's doing with files that are created
• Mount this volume in an nginx container in the directory /data.
Create a file in this directory and verify this file is created on the
local volume storage backend
Working with Volumes
• docker volume create myvol creates a simple volume that uses the local
file system as the storage backend
• docker volume ls will show the volume
• docker volume inspect myvol shows the properties of the volume
• docker run -it --name voltest --rm --mount source=myvol,target=/data
nginx:latest /bin/sh will run a container and attach to the running volume
• From the container, use cp /etc/hosts /data; touch /data/testfile; ctrl-p,
ctrl-q
• sudo -i; ls /var/lib/docker/volumes/myvol/_data/
• docker run -it --name voltest2 --rm --mount source=myvol,target=/data
nginx:latest /bin/sh
• From the second container: ls /data; touch /data/newfile; ctrl-p, ctrl-q
Container DevOps in 4 Weeks

Using Docker Compose


Understanding Docker Compose
• Docker Compose uses the declarative approach to start Docker
containers, or Microservices consisting of multiple Docker
containers
• The YAML file is used to include parameters that are normally used
on the command line while starting a Docker container
• To use it, create a docker-compose.yml file in a directory, and from
that directory run the docker-compose up -d command
• Use docker-compose down to remove the container
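A minimal docker-compose.yml sketch (not the simple-nginx example from the course repository); the service name, image and port mapping are placeholders:

version: "3"
services:
  web:
    image: nginx:latest    # image to run
    ports:
      - "8080:80"          # map host port 8080 to container port 80
    restart: always        # restart policy, normally a docker run option

Running docker-compose up -d in the directory that contains this file starts the service; docker-compose down removes it again.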
Demo: Bringing up a Simple Nginx Server
• Use the simple-nginx/docker-compose.yml file from
https://github.com/sandervanvugt/devopsinfourweeks
• cd simple-nginx
• docker-compose up -d
• docker ps
Demo: Bringing up a Microservice
• Use the wordpress-mysql/docker-compose.yml file from
https://github.com/sandervanvugt/devopsinfourweeks
• cd wordpress-mysql
• docker-compose up -d
• docker ps
Lab: Using Docker Compose
• Start an nginx container, and copy the
/etc/nginx/conf.d/default.conf configuration file to the local
directory ~/nginx-conf/
• Use Docker compose to deploy an application that runs Nginx.
Expose the application on ports 80 and 443 and mount the
configuration file by using a volume that exposes the ~/nginx-conf
directory
Container DevOps in 4 Weeks

Homework
Day 2 homework
• For next week's session, prepare a CentOS 7.x virtual machine that
has at least 4 GiB RAM, 20 GiB disk and 2 CPUs. Install this machine
with the minimal installation server pattern so that it is ready for
installation of a Kubernetes AiO server. Further instructions are
provided next week.
Container DevOps in 4 Weeks

Day 3
Day 3 Agenda
• Understanding Kubernetes
• Running Applications in Kubernetes
• Exposing Applications
• Configuring Application Storage
• Implementing Decoupling in Kubernetes
• Exploring week 4 setup and homework
Poll Question
Have you attended the previous course days or watched their recordings?
• Day 1 only
• Day 2 only
• Day 1 and Day 2
• no
Container DevOps in 4 Weeks

Understanding Kubernetes
Understanding Kubernetes
• Kubernetes offers enterprise features that are needed in a
containerized world
• Scalability
• Availability
• Decoupling between static code and site specific data
• Persistent external storage
• The flexibility to be used on premises or in the cloud
• Kubernetes is the de facto standard and currently there are no
relevant competing products
Installing Kubernetes
• In the cloud, managed Kubernetes solutions offer a Kubernetes
environment in just a few clicks
• On premise, administrators can build their own Kubernetes cluster
using kubeadm
• For testing, minikube can be used
Installing an AiO on-prem Cluster - 1/4
• Install some packages
• yum install git vim bash-completion
• As an ordinary user with sudo privileges, clone the course Git
repository
• git clone https://github.com/sandervanvugt/microservices
• Run the setup scripts:
• cd microservices
• ./setup-docker.sh
• ./setup-kubetools.sh
• In a root shell, install a Kubernetes master node
• kubeadm init --pod-network-cidr=10.10.0.0/16
Installing an AiO on-prem Cluster - 2/4
• In a user shell, set up the kubectl client:
• mkdir -p $HOME/.kube
• sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
• sudo chown $(id -un):$(id -un) .kube/config
Installing an AiO on-prem Cluster - 3/4
• In a user shell, set up the Calico networking agent
• kubectl create -f https://docs.projectcalico.org/manifests/tigera-
operator.yaml
• wget https://docs.projectcalico.org/manifests/custom-resources.yaml
• You now need to define the Pod network, which by default is set to
192.168.0.0/24, which in general is a bad idea. I suggest setting it to
10.10.0.0 - make sure this address range is not yet used for something
else!
• sed -i -e s/192.168.0.0/10.10.0.0/g custom-resources.yaml
• kubectl create -f custom-resources.yaml
• kubectl get pods -n calico-system: wait until all pods show a state of
Ready, this can take about 5 minutes!
Installing an AiO on-prem Cluster - 4/4
• By default, user Pods cannot run on the Kubernetes control node.
Use the following command to remove the taint so that you can
schedule Pods on it:
kubectl taint nodes --all node-role.kubernetes.io/master-
• Type kubectl get all to verify the cluster works.
• Use kubectl create deployment nginx --image=nginx to verify that
you can create applications in Kubernetes
Container DevOps in 4 Weeks

Understanding Kubernetes Dropping Docker
Docker and Kubernetes
• Kubernetes orchestrates containers, which are based on images
• Container images are highly standardized
• Docker container images are compatible with other container
runtimes such as containerd and CRI-O
• Hence, when switching to a different container runtime, your
images will still run
• Docker is no longer supported as the runtime on the K8s cluster
starting version 1.22 (late 2021)
• That means that instead of setting up Docker as the container
runtime for building a Kubernetes cluster, you'll need to set up
another runtime
Understanding the Situation
• Currently Docker is used as the container runtime in K8s
• But the only thing that K8s cares about is the containerd
component in Docker
• Including full Docker makes work for K8s too complicated
• K8s needs the runtime to be compliant with the Container Runtime
Interface (CRI)
• Docker is not CRI compliant, which is why an additional component
called Dockershim needs to be included as well, and that is not
efficient
• Full explanation is here:
https://kubernetes.io/blog/2020/12/02/dont-panic-kubernetes-
and-docker/
Container DevOps in 4 Weeks

Running Applications in Kubernetes
Understanding Kubernetes Resources
• Kubernetes resources are defined in the APIs
• Use kubectl api-resources for an overview
• Kubernetes resources are extensible, which means that you can add
your own resources
Understanding Kubernetes Key Resources
• Pod: used to run one (or more) containers and volumes
• Deployment: adds scalability and update strategy to pods
• Service: exposes pods for external use
• Persistent Volume Claim: connects to persistent storage
• ConfigMap: used to store site-specific data separate from pods
Exploring kubectl
• kubectl is the main management interface
• Make sure that bash-completion is installed for awesome tab
completion
• source <(kubectl completion bash)
• Explore kubectl -h at all levels of kubectl
Running Applications in Kubernetes
• kubectl create deployment allows you to create a deployment
• kubectl run allows you to run individual pods
• Individual pods (aka "naked pods") are unmanaged and should not
be used
• kubectl get pods will show all running Pods in the current
namespace
• kubectl get all shows running Pods and related resources in the
current namespace
• kubectl get all -A shows resources in all namespaces
Troubleshooting Kubernetes Applications
• kubectl describe pod <podname> is the first thing to do: it shows
events that have been generated while defining the application in
the Etcd database
• kubectl logs connects to the application STDOUT and can indicate
errors while starting application. This only works on running
applications
• kubectl exec -it <podname> -- sh can be used to open a shell on a
running application
Lab: Troubleshooting Kubernetes Applications
• Use kubectl create deployment mybusybox --image=busybox to start a
Busybox deployment
• It fails: use the appropriate tools to find out why
• After finding out why it fails, delete the deployment and start it
again, this time in a way that it doesn't fail
Container DevOps in 4 Weeks

Managing Kubernetes the DevOps Way
Declarative versus Imperative
• In Imperative mode, the administrator uses commands with
command line options to define Kubernetes resources
• In Declarative mode, Configuration as Code is used by the DevOps
engineer to ensure that resources are created in a consistent way
throughout the entire environment
• To do so, YAML files are used
• YAML files can be written from scratch (not recommended), or
generated: kubectl create deployment mynginx --image=nginx --
replicas=3 --dry-run=client -o yaml > mynginx.yaml
• For complete documentation: use kubectl explain <resource>.spec
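For reference, the generated mynginx.yaml looks roughly like the following sketch (trimmed; kubectl also emits some empty fields and a status section):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: mynginx
  labels:
    app: mynginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: mynginx
  template:
    metadata:
      labels:
        app: mynginx
    spec:
      containers:
      - name: nginx
        image: nginx

Applying it with kubectl apply -f mynginx.yaml creates the same deployment that the imperative command would.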
Container DevOps in 4 Weeks

Exposing Applications
Understanding Application Access
• Kubernetes applications run as scaled pods in the pod
network
• The pod network is internal to the cluster and not
reachable from the outside
• To expose access to applications, Service resources are used (see
the sketch below)
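A minimal NodePort Service sketch that could expose the mynginx deployment from the earlier example; the names and port numbers are placeholders:

apiVersion: v1
kind: Service
metadata:
  name: mynginx
spec:
  type: NodePort           # expose the service on a port of every node
  selector:
    app: mynginx           # select the pods of the mynginx deployment
  ports:
  - port: 80               # service port inside the cluster
    targetPort: 80         # container port
    nodePort: 32000        # node port in the 30000-32767 range

The roughly equivalent imperative command is kubectl expose deployment mynginx --type=NodePort --port=80.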
Container DevOps in 4 Weeks

Configuring Application Storage


Understanding K8s Storage Solutions
• Pod storage by nature is ephemeral
• Pods can refer to external storage to make it less ephemeral
• Storage can be decoupled by using a Persistent Volume Claim (PVC)
• The PVC binds to a Persistent Volume (PV)
• Persistent Volumes can be manually created
• Persistent Volumes can be automatically provisioned using a
StorageClass
• StorageClass provides default storage in specific (cloud)
environments
• Check pv-pvc-pod.yaml for an example
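As an illustration of that decoupling (not the pv-pvc-pod.yaml file from the course repository), a PVC and a Pod that mounts it could look like this:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-pvc
spec:
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 1Gi                # satisfied by a matching PV or a StorageClass
---
apiVersion: v1
kind: Pod
metadata:
  name: pvc-pod
spec:
  containers:
  - name: web
    image: nginx
    volumeMounts:
    - name: data
      mountPath: /usr/share/nginx/html
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: my-pvc           # the pod only knows the claim, not the backend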
Container DevOps in 4 Weeks

Implementing Decoupling in Kubernetes
Demo: Running MySQL
• kubectl run mymysql --image=mysql:latest
• kubectl get pods
• kubectl describe pod mymysql
• kubectl logs mymysql
Providing Variables to Kubernetes Apps
• In the imperative way, the --env command line option can be used to
provide environment variables to Kubernetes applications
• That's not very DevOps though, and something better is needed
• But let's verify that it works first: kubectl run newmysql
--image=mysql --env=MYSQL_ROOT_PASSWORD=password
• Notice the alternative syntax: kubectl set env deploy/mysql
MYSQL_DATABASE=mydb
Understanding ConfigMaps
• ConfigMaps are used to separate site-specific data from static data
in a Pod
• Variables: kubectl create cm variables
--from-literal=MYSQL_ROOT_PASSWORD=password
• Config files: kubectl create cm myconf --from-file=my.conf
• Secrets are base64-encoded ConfigMaps
• Addressing the ConfigMap from a Pod depends on the type of
ConfigMap
• Use envFrom to address variables (see the sketch below)
• Use volumes to mount ConfigMaps that contain files
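A minimal sketch of a Pod that imports the keys of the variables ConfigMap as environment variables; this is an illustration, not the cm-test-pod.yaml file used in the demo below:

apiVersion: v1
kind: Pod
metadata:
  name: cm-env-pod
spec:
  containers:
  - name: mysql
    image: mysql:latest
    envFrom:
    - configMapRef:
        name: variables      # every key in this ConfigMap becomes an env variable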
Demo: Using a ConfigMap for Variables
• kubectl create cm myvars --from-literal=VAR1=goat
--from-literal=VAR2=cow
• kubectl create -f cm-test-pod.yaml
• kubectl logs test-pod
Demo: Using a ConfigMap for Storage
• kubectl create cm nginxconf --from-file nginx-custom-config.conf
• kubectl create -f nginx-cm.yml
• Did that work? Fix it!
• kubectl exec -it nginx-cm -- /bin/bash
• cat /etc/nginx/conf.d/default.conf
Lab: Running MySQL the DevOps way
• Create a ConfigMap that stores all required MySQL variables
• Start a new mysql pod that uses the ConfigMap to ensure the
availability of the required variables within the Pod
Container DevOps in 4 Weeks

Creating Custom Resources


Why Create Custom Resources?
• Kubernetes is used to manage the life cycle of an application
• By using Custom Resources, you have the Kubernetes API server
handle the entire application lifecycle
• By using custom resources, Role-Based Access Control (RBAC) or
service accounts can be used to provide access to specific other
applications
• Custom resources are also the foundation for Operators, which are
common in OpenShift
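A minimal CustomResourceDefinition sketch; the group and kind are hypothetical, and this is not the sslcerts-crd.yaml file used in the demo below:

apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: backups.example.com        # must be <plural>.<group>
spec:
  group: example.com
  scope: Namespaced
  names:
    plural: backups
    singular: backup
    kind: Backup
  versions:
  - name: v1
    served: true
    storage: true
    schema:
      openAPIV3Schema:
        type: object
        properties:
          spec:
            type: object
            properties:
              schedule:
                type: string       # for example a cron expression

After applying this file, kubectl api-resources | grep example shows the new backups resource, and Backup objects can be created like any other resource.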
Demo: Creating Custom Resources
• kubectl apply -f sslcerts-crd.yaml
• kubectl apply -f my-sslcert.yaml
• kubectl api-resources | grep example
Container DevOps in 4 Weeks

Homework
Day 3 homework
• Next week, we are going to work with Red Hat CodeReady
Containers. Make sure to prepare the following to follow along. If
you don't have the resources to create this installation, that's OK.
You can just attend next week's session without working through
the labs
• Latest version of Fedora Workstation, with at least 12 GiB RAM, 4 CPUs
and 40 GiB disk space, and nested virtualization enabled
• Create an account on https://developers.redhat.com, and download
the CodeReady Containers software
• Further instructions are provided next week
Container DevOps in 4 Weeks

Day 4
Day 4 Agenda
• Getting Started with CodeReady Containers
• Comparing OpenShift to Kubernetes
• Running Kubernetes applications in OpenShift
• Understanding Helm Charts, Operators and Custom Resources
• Building OpenShift applications from Git source code
• Creating an Application with a Template
• Understanding Pipelines in Kubernetes
• Using OpenShift Pipelines
Poll Question
Have you attended the previous course days or watched their recordings?
• Day 1 only
• Day 2 only
• Day 3 only
• Day 1 and Day 2
• Day 2 and Day 3
• Day 1 and Day 3
• All days
• no
Container DevOps in 4 Weeks

Getting Started with CodeReady Containers
Understanding CodeReady Containers
Click to edit Master title style
• CodeReady Containers (CRC) is a free all-in-one OpenShift solution
• You need a free Red Hat developer account
• CodeReady Containers can be installed in different ways
• On top of your current OS
• Isolated in a Linux VM
• To prevent conflicts with other software running on your computer, it's recommended to install CRC in an isolated VM
• For other usage options, see https://developers.redhat.com/products/codeready-containers/overview
Installing CRC in an Isolated VM
Click to edit Master title style
• The VM needs the following
• 12 GB RAM
• 4 CPU cores
• 40 GB disk
• Support for nested virtualization
• Download the tarball and the pull secret
• Extract the tarball
• Move the crc file to /usr/local/bin
• crc setup
• crc start -p pull-secret -m 8192
Container Devops in 4 Weeks
Click to edit Master title style

Comparing OpenShift to
Kubernetes
Understanding OpenShift
Click to edit Master title style
• OpenShift is a Kubernetes distribution!
• In terms of main functionality, OpenShift is a Kubernetes distribution where developer options are integrated in an automated way
• Source 2 Image
• Pipelines (currently tech preview)
• More developed authentication and RBAC
• OpenShift adds more operators than vanilla Kubernetes
• OpenShift adds many extensions to the Kubernetes APIs
Container Devops in 4 Weeks
Click to edit Master title style

Running Kubernetes
Applications in OpenShift
Running Applications in OpenShift
Click to edit Master title style
• Applications can be managed like in Kubernetes
• OpenShift adds easier-to-use interfaces as well
• oc new-app --docker-image=mariadb
• oc set -h
• oc adm -h
• Managing a running environment is very similar
• oc get all
• oc logs
• oc describe
• oc explain
• etc.
Container Devops in 4 Weeks
Click to edit Master title style

Understanding Helm Charts, Operators and Custom Resources
What is this about?
Click to edit Master title style
• It's all about running custom applications in Kubernetes
• Helm Charts are like packages that can be used in Kubernetes
• Custom Resource Definitions allow you to extend the Kubernetes
API to add new resources (discussed last week)
• Operators use Custom Resource Definitions to provide applications
Understanding Helm
Click to edit Master title style
• Helm is about reusing YAML manifests through templates
• These templates work with properties that are defined in a
separate file
• Helm merges the YAML templates with the values before applying them to the cluster (see the sketch below)
• The resulting package is called a Helm Chart
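A minimal sketch of how this fits together (file contents are illustrative, not taken from an existing chart):

# values.yaml
replicaCount: 2
image: nginx:1.21

# templates/deployment.yaml (trimmed)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ .Release.Name }}
spec:
  replicas: {{ .Values.replicaCount }}    # filled in from values.yaml
  selector:
    matchLabels:
      app: {{ .Release.Name }}
  template:
    metadata:
      labels:
        app: {{ .Release.Name }}
    spec:
      containers:
      - name: app
        image: {{ .Values.image }}        # filled in from values.yaml

Running helm template (or helm install) renders the template with the values and produces plain Kubernetes YAML.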
Understanding Operators
Click to edit Master title style
• An operator extends the Kubernetes API to let a stateful application run natively on Kubernetes
• An operator consists of Kubernetes custom resources and/or APIs
and controllers
• Operators are typically written in a standard programming language
like Golang, Python or Java
• Operators are packaged as container images and deployed using YAML manifests
• As a result, new resources will be available in the cluster
• Operators can be distributed using Helm Charts
• We'll later use operators to deploy Red Hat CI/CD in OpenShift
Working with Helm
Click to edit Master title style
• To work with Helm, you'll need to install it
• Make sure you use Helm 3; version 2 is obsolete
• Helm charts are the Helm packages
• A running instance of a Helm chart is called a release
Installing Helm
Click to edit Master title style
• https://docs.openshift.com/container-platform/4.6/cli_reference/helm_cli/getting-started-with-helm-on-openshift-container-platform.html
• Use helm version to verify
• Use helm create my-demo-app and check the directory that is
created and its contents
Demo: Installing a Helm Chart on OpenShift
Click to edit Master title style
• oc new-project mysql
• helm repo add stable https://charts.helm.sh/stable
• helm repo update
• helm list
• helm install example-mysql stable/mysql
• helm list
• oc get all
Demo: Working with Customized Helm Charts
Click to edit Master title style
• cat my-ghost-app/Chart.yaml
• cat my-ghost-app/templates/deployment.yaml
• cat my-ghost-app/templates/service.yaml
• cat my-ghost-app/values.yaml
• helm template --debug my-ghost-app
• helm install -f my-ghost-app/values.yaml my-ghost-app my-ghost-app/
Container Devops in 4 Weeks
Click to edit Master title style

Building OpenShift Applications from Git Source
Understanding S2I
Click to edit Master title style
• S2i allows you to run an application directly from source code
• oc new-app allows you to work with S2i directly
• S2i connects source code to an S2i image stream builder image to
create a temporary builder pod that writes an application image to
the internal image registry
• Based on this custom image, a deployment configuration is created
• S2i takes away the need for the developer to know anything about Dockerfiles and related items
• S2i also allows for continuous patching, as updates can be triggered using webhooks
Understanding S2i Image Stream
Click to edit Master title style
• The image stream is offered by the internal image repository to
provide different versions of images
• Use oc get is -n openshift for a list
• Image streams are managed by Red Hat through OpenShift. If a new image becomes available in an image stream, it can automatically trigger a new build of the application code
• Custom image streams can also be integrated (a sketch follows below)
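A trimmed custom ImageStream that tracks an external image (the name and tag are illustrative):

apiVersion: image.openshift.io/v1
kind: ImageStream
metadata:
  name: myphp
spec:
  tags:
  - name: "8.0"
    from:
      kind: DockerImage
      name: docker.io/library/php:8.0   # external image to track
    importPolicy:
      scheduled: true                   # re-import periodically so builds can be retriggered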
Understanding S2i Resources
Click to edit Master title style
• ImageStream: defines the interpreter needed to create the custom
image
• BuildConfig: defines all that is needed to convert source code into an image (Git repo, imagestream); a trimmed example follows below
• DeploymentConfig/Deployment: defines how to run the container
in the cluster; contains the Pod template that refers to the custom
built image
• Service: defines how the application running in the deployment is
exposed
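A trimmed BuildConfig, roughly what oc new-app generates for an S2I build (the php:latest tag and the names are assumptions):

apiVersion: build.openshift.io/v1
kind: BuildConfig
metadata:
  name: simple-app
spec:
  source:
    type: Git
    git:
      uri: https://github.com/sandervanvugt/simpleapp   # the Git repo with the source code
  strategy:
    type: Source
    sourceStrategy:
      from:
        kind: ImageStreamTag
        name: php:latest              # the S2I builder image stream
        namespace: openshift
  output:
    to:
      kind: ImageStreamTag
      name: simple-app:latest         # the custom image pushed to the internal registry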
Performing the S2i process
Click to edit Master title style
• oc new-app php~https://github.com/sandervanvugt/simpleapp --name=simple-app
• oc get is -n openshift
• oc get builds: allows for monitoring the build process
• oc get buildconfig: shows the buildconfig used
• oc get deployment: shows the resulting deployment
Container Devops in 4 Weeks
Click to edit Master title style

Understanding Pipelines in
Kubernetes
Understanding Pipelines in a K8s Environment
Click to edit Master title style
• Based on https://containerjournal.com/topics/container-ecosystems/kubernetes-pipelines-hello-new-world/
• Traditional pipelines focus on deploying workloads to specific types of servers: Dev, Test, Prod and so on
• Pipelines in Kubernetes often do just that, the only difference being that the dev, test and prod applications run in a dev, test or prod Kubernetes cluster or namespace
• In a Microservices approach, this model doesn't work anymore
Understanding Pipelines in a Microservices environment
Click to edit Master title style
• Kubernetes is all about Microservices
• In Microservices, small incremental updates happen daily
• The monolithic traditional pipeline doesn't work well anymore in
such environments, and a different model is needed
• Pipelines are no longer needed for the entire application, but for
the packages in the microservice
Pipelines and Servicemesh
Click to edit Master title style
• Service Mesh is the solution for pipelines in a Kubernetes driven
microservices environment
• The Kubernetes pipeline will instruct service mesh to route (dev or
test) users to specific versions of application components
• In this scenario, different versions of the application run side by side, and rollback or roll-forward is just an instruction from the service mesh to reroute to another version of the application (see the sketch after this list)
• As a result, a completely new approach to CI/CD pipelines is needed
• OpenShift Pipelines
• Istio Servicemesh
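As an illustration (not part of the course labs), an Istio VirtualService that shifts a share of the traffic to a new version could look like this; the v1 and v2 subsets would be defined in a separate DestinationRule:

apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: myapp
spec:
  hosts:
  - myapp                  # the service that clients address
  http:
  - route:
    - destination:
        host: myapp
        subset: v1         # current version keeps most of the traffic
      weight: 90
    - destination:
        host: myapp
        subset: v2         # new version receives a small share
      weight: 10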
Container Devops in 4 Weeks
Click to edit Master title style

Using OpenShift Pipelines


Understanding OpenShift Pipelines
Click to edit Master title style
• In OpenShift 4.6, Pipelines is a tech preview feature
• It uses Tekton Custom Resource Definitions (CRDs) for defining
CI/CD pipelines that are portable across K8s distributions
• The CI/CD system runs in isolated containers
OpenShift Pipeline Components
Click to edit Master title style
• The OpenShift pipeline CRDs bring different resources:
• Task: the basic element that executes different steps to create
Kubernetes resources
• TaskRun: the running instance of a task
• Pipeline: a collection of tasks that executes as a workflow (a trimmed example follows after this list)
• PipelineRun: the running instance of a pipeline
• Workspace: a storage volume used for input and output
• Trigger: the external event that triggers a pipeline
• Conditions: the if statements that are used to trigger a task
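A trimmed example of a Task and a Pipeline that uses it (names are illustrative; the official documentation linked below has complete examples):

apiVersion: tekton.dev/v1beta1
kind: Task
metadata:
  name: say-hello
spec:
  steps:
  - name: hello
    image: registry.access.redhat.com/ubi8/ubi-minimal
    script: |
      echo "hello from a pipeline task"
---
apiVersion: tekton.dev/v1beta1
kind: Pipeline
metadata:
  name: demo-pipeline
spec:
  tasks:
  - name: first-task
    taskRef:
      name: say-hello      # runs the Task defined above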
Installing OpenShift Pipelines
Click to edit Master title style
• From the Web Console (use crc console to start it), log in and install the OpenShift Pipelines Operator from OperatorHub
• Use default settings
• After installing, give it 5 minutes to create all resources
• Install the tkn CLI utility: https://docs.openshift.com/container-platform/4.6/pipelines/creating-applications-with-cicd-pipelines.html
Using OpenShift Pipelines
Click to edit Master title style
• https://docs.openshift.com/container-platform/4.6/pipelines/creating-applications-with-cicd-pipelines.html
• Note that this is tech preview and may fail!
Container Devops in 4 Weeks
Click to edit Master title style

Summary: Container Based Devops
Container Devops in 4 Weeks
Click to edit Master title style

Further Learning
Related Live Courses
Click to edit Master title style
• Containers:
• Containers in 4 Hours: May 4th
• Kubernetes
• Kubernetes in 4 Hours: March 9
• CKAD Crash Course: March 15-17
• CKA Crash Course: March 18, 19
• Building Microservices with Containers: May 21st
• OpenShift
• Getting Started with OpenShift: March 25
• EX180 Crash Course: April 19/20
• EX280 Crash Course: April 21/22
