Practical Devops Tools
Practical Devops Tools
v1.3
SUMMARY
A technological development environment is presented to introduce the reader to DevOps tools that enable efficient infrastructure management through practical cases, and thus helps in the preparation of the certification.
GETTING STARTED
Students who are interested in starting their career in DevOps by preparing for the LPI DevOps Tools Engineer certification.
Junior engineers who work as System Administrators, Software Engineers, DevOps or Cloud Engineers and who need to improve their skills in a specific area.
Basic knowledge of Linux is required.
https://learning.lpi.org/en/learning-materials/all-materials/#devops-version-10
https://gitlab.com/GilbertFongan/devops-book-labs
6
PART I.
Software Engineering
…
9
PLAN
- SDLC: Waterfall, Iterative, V-model, Agile model
- DevOps
- REST API, CORS Headers, CSRF Token
- Monolithic Architecture
MODULE I-1
11
SDLC
Software Development Life Cycle (SDLC) is a process used by the software industry to design, develop and
test high-quality software. The SDLC aims to produce high-quality software that meets or exceeds
customer expectations and reaches completion within time and cost estimates.
Waterfall model
Iterative model
V model
Agile model
MODULE I-1
12
Each phase in the development process begins only when the previous phase is complete. The outcome of one
phase acts as the input for the next phase, sequentially.
MODULE I-1
13
System Design : Helps in specifying hardware and system requirements and helps in defining the
overall system architecture.
Implementation : The system is first developed in small programs called units, which are integrated in
the next phase. Each unit is developed and tested for its functionality.
Integration and testing : All units developed in the previous phase are integrated into a system after
testing of each unit. Post integration the entire system is tested for any faults and failures.
Deployment of system : Once the functional and non-functional testing is done, the product is
deployed in the customer environment or released into the market.
Maintenance : Patches are released to fix some issues which come up in the client environment.
Maintenance is done to deliver these changes in the customer environment.
MODULE I-1
14
Advantages:
- Highly disciplined model; phases are completed one at a time
- Clearly defined stages; process and results are well documented

Disadvantages:
- Not a good model for complex and object-oriented projects
- Difficult to go back and change functionality in the testing stage
MODULE I-1
15
Iteratively enhances the evolving versions until the full system is implemented
Design modifications are made, and new functional capabilities are added (at each iteration)
MODULE I-1
16
Advantages:
- Highly disciplined model; phases are completed one at a time
- Clearly defined stages; process and results are well documented

Disadvantages:
- Not a good model for complex and object-oriented projects
- Difficult to go back and change functionality in the testing stage
MODULE I-1
17
SDLC/ V-model
It is also known as the Verification and Validation model
MODULE I-1
18
SDLC/ V-model
Advantages and disadvantages are :
Advantages Disadvantages
MODULE I-1
19
Agile SDLC model is a combination of iterative and incremental process models with focus on process
adaptability and customer satisfaction by rapid delivery of working software product.
Individuals and interactions : self-organization and motivation are important, as are interactions like
co-location and pair programming.
Working software : Communication with the customers to understand their requirements, instead of
just depending on documentation.
Customer collaboration : Continuous customer interaction is very important to get proper product
requirements.
Responding to change : Focused on quick responses to change and continuous development.
MODULE I-1
20
MODULE I-1
21
Advantages:
- Promotes teamwork and cross-training
- Suitable for fixed or changing requirements

Disadvantages:
- An overall plan, an agile leader and agile PM practice are a must; without them it will not work
MODULE I-1
22
SDLC/ Synthesis
Agile software development has broken down some of the silos between
requirements analysis, testing and development.
MODULE I-1
23
SDLC/ DevOps
DevOps combines development (dev) and operations (ops) to increase the efficiency, speed and security of
software development and delivery compared to traditional processes.
It is defined as a software engineering methodology which aims to integrate the work of software
development and software operations teams by facilitating a culture of collaboration and shared
responsibility.
These four (04) key principles can improve the organization's software development practice :
MODULE I-1
24
MODULE I-1
25
Parameter: Agile vs DevOps (software development technologies)
- Goal: Agile bridges the gap between customer needs and the development & testing teams; DevOps bridges the gap between development + testing and Ops.
- Advantage: Agile offers a shorter development cycle and improved defect detection; DevOps supports Agile's release cycle.
MODULE I-1
26
The menu provides a list of pizzas you can order, along with a description of each pizza.
You don't know exactly how the restaurant prepares this food, and you don't really need to.
MODULE I-1
27
MODULE I-1
28
REST is an acronym for REpresentational State Transfer and an architectural style for distributed
hypermedia systems.
REST is a way for two computer systems to communicate over HTTP in a similar way to web browsers and
servers.
MODULE I-1
29
Client-Server Architecture : Client and server systems can be improved and updated independently of each
other
Statelessness : All client requests are treated equally. There's no special, server-side memory of past
client activity. The responsibility of managing state is on the client.
Cacheability : Clients and servers should be able to cache resource data that changes infrequently
further improving scalability and performance.
Layered System : A client cannot ordinarily tell whether it is connected directly to the end server or an
intermediary along the way. Intermediary servers can also improve system scalability.
Code on demand (optional) : Servers can temporarily extend or customize the functionality of a client by
transferring executable code.
Uniform interface : All resources should be accessible through a common approach such as HTTP GET
and similarly modified using a consistent approach.
MODULE I-1
30
Syntax rules
MODULE I-1
31
An Endpoint URL : An application implementing a RESTful API will define one or more URL endpoints
with a domain, port, path, and/or query string for example, https://mydomain/user/123?format=json
The HTTP method : Differing HTTP methods can be used on any endpoint which map to application
create, read, update, and delete (CRUD) operations :
HTTP headers : Information such as authentication tokens or cookies can be contained in the HTTP
request header.
Body Data : Data is normally transmitted in the HTTP body in an identical way to HTML <form>
submissions or by sending a single JSON-encoded data string
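For illustration, a request combining these elements might look like the following sketch (the endpoint, token and payload are hypothetical):

# Create a resource: POST maps to the "create" CRUD operation
curl -X POST "https://mydomain/user" \
  -H "Authorization: Bearer <token>" \
  -H "Content-Type: application/json" \
  -d '{"name": "Alice", "format": "json"}'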
MODULE I-1
32
Data responses are typically JSON-encoded, but XML, CSV, simple strings, or any other format can be
used.
Return format could be specified in the request. For example, /user/123?format=json or
/user/123?format=xml
An appropriate HTTP status code should also be set in the response header.
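A typical response to such a request could look like this (status code and body are illustrative):

HTTP/1.1 200 OK
Content-Type: application/json

{"id": 123, "name": "Alice"}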
MODULE I-1
33
REST challenges
Several challenges are possible :
API Versioning : API changes are inevitable, but endpoint URLs should never be invalidated when they're
being used internally and/or by third-party applications
Authentication : Client-side applications on the same domain as the RESTful API will send and receive
cookies. An API request can therefore be validated to ensure a user is logged in and has appropriate
rights.
Security : A RESTful API provides another route to access and manipulate your application.
Use HTTPS
Use a robust authentication method
Use CORS to limit client-side calls to specific domains
Provide minimum functionality
Validate all endpoint URLs and body data
Avoid exposing API tokens in client-side JavaScript
Block unexpectedly large payloads.
MODULE I-1
34
CSRF token is transmitted to the client in such a way that it is included in a subsequent HTTP request
made by the client.
CSRF tokens can prevent CSRF attacks by making it impossible for an attacker to construct a fully valid
HTTP request suitable for feeding to a victim user.
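For example, a server-rendered page might embed the token in a hidden form field that is sent back with the state-changing request (field name and value are illustrative):

<!-- The server compares the submitted csrf_token with the one stored in the user's session -->
<form action="/transfer" method="POST">
  <input type="hidden" name="csrf_token" value="d41d8cd98f00b204e9800998ecf8427e">
  <input type="text" name="amount">
  <input type="submit" value="Send">
</form>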
MODULE I-1
35
Cross-origin resource sharing (CORS) is a mechanism that adds HTTP headers to allow a user
agent to access resources on a server located on an origin other than the current site.
CORS is used to relax the Same-Origin Policy (SOP), which by default prevents a website from loading
resources from other origins
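As an illustration, a server that wants to allow a specific front-end origin might return headers like these (origin and methods are examples):

# Response headers sent by the API server
Access-Control-Allow-Origin: https://app.example.com
Access-Control-Allow-Methods: GET, POST, PUT, DELETE
Access-Control-Allow-Headers: Content-Type, Authorization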
MODULE I-1
36
The architecture of a system describes its major components, their relationships (structures), and how
they interact with each other.
Software architecture and design includes several contributory factors such as Business strategy,
quality attributes, human dynamics, design, and IT environment. Software architecture ≠ software design.
MODULE I-1
37
Software Architecture serves as a blueprint for a system. It provides an abstraction to manage the system
complexity and establish a communication and coordination mechanism among components.
- Fundamental properties; defines guidelines
- Cross-cutting and high-impact concerns
- Communicates with business stakeholders
- Non-functional requirements
- Manages uncertainty
- Conceptual integrity
- Scope: system
Software design provides a design plan that describes the elements of a system, how they fit, and work
together to fulfill the requirement of the system. The objectives of having a design plan are as follows
- Detailed properties
- Communicates with developers
- Functional requirements
- Individual components
- Uses guidelines
- Avoids uncertainty
- Scope: module
MODULE I-1
38
Microservices Architecture
Serverless Architecture
MODULE I-1
39
Monolithic Architecture
A Monolithic architecture is a traditional model of a software program, which is built as a unified unit that is
self-contained and independent from other applications.
Traditional solution
Comfortable for small teams
Interconnected and interdependent
Self-contained software
MODULE I-1
40
Monolithic Architecture
Advantages Disadvantages
MODULE I-1
41
Business value
Strategic goals
Basic interoperability
Shared services
Continued improvement
MODULE I-1
42
Advantages Disadvantages
MODULE I-1
43
Microservice Architecture
Microservice is a type of service-oriented software architecture that focuses on building a series of
autonomous components that make up an application. It is an architectural style that structures an
application as a collection of services that are :
MODULE I-1
44
Microservice Architecture
Advantages Disadvantages
MODULE I-1
45
Comparison of Architectures
MODULE I-1
46
Comparison of Architectures
In summary:
Monolithic apps consist of interdependent, indivisible units and feature very slow development speed.
SOA is broken into smaller, moderately coupled services and features slow development.
Microservices are very small, loosely coupled independent services and feature rapid continuous
development.
MODULE I-1
47
Serverless Architecture
Serverless architecture is an approach to software design that allows developers to build and run services
without having to manage the underlying infrastructure.
MODULE I-1
48
Serverless Architecture
Advantages Disadvantages
MODULE I-1
49
Which Architecture?
MODULE I-1
50
Quizz

1. Which of the following software development processes are Agile? (Choose two correct answers.)
- Kanban
- Rational Unified Process
- V-Model
- SCRUM
- Waterfall

2. A service should be provided to arbitrary clients on the Internet using HTTPS. Any standard client on the Internet should be able to consume the service without further configuration. Which of the following approaches can be used to implement these requirements? (Choose three correct answers.)
- Configure the web servers to not use a server certificate when serving HTTPS
- Generate self-signed certificates during the deployment of each backend server
- Use a certificate issuing service to request certificates during each server deployment
- Use a load balancer that decrypts incoming requests and passes them on via HTTP
- Install a wildcard certificate and the respective private key on all the backend servers
MODULE I-1
51
PLAN
- INFRASTRUCTURE: Mutable, Immutable
- DATA STORAGE: Object, Block, File
- DATA STRUCTURE
- CAP & ACID
- DATABASE: SQL, NoSQL, Serverless
- MESSAGE QUEUES
- CLOUD COMPUTING: Key features, Benefits, Service model, Deployment model
MODULE I-2
53
Mutable infrastructure
Mutable Server means the infrastructure will be continually updated, tweaked, and tuned to meet the ongoing
needs of the purpose it serves.
Ability to change
Updating Operating System
Updating Software
MODULE I-2
54
Mutable infrastructure
Advantages:
- The IT team does not need to build servers from scratch every time a change is required
- Ensures that the infrastructure used meets the specific needs of each user

Disadvantages:
- Configuration drift (each server becomes harder to diagnose and manage)
- Updates can fail for several reasons; debugging is time-consuming due to update-tracking problems
MODULE I-2
55
Immutable infrastructure
Immutable Server means the infrastructure cannot be modified once deployed. When changes are
necessary, it is recommended to deploy afresh, add infrastructure and decommission old infrastructure.
No updates, security patches or configuration changes
New version of the architecture is built and deployed
New servers are deployed instead of updating the ones already used
MODULE I-2
56
Immutable infrastructure
Advantages:
- Easier tracking and testing of different servers, and easier rollback
- Great for interdependent environments such as cloud technologies

Disadvantages:
- In case of problems, servers with the same configuration need a complete overhaul
- Data storage must be externalized instead of being copied to a local disk
MODULE I-2
57
Data Storage
Data storage is the retention of information using technology specifically developed to keep that data and
have it as accessible as necessary.
File Storage is a hierarchical storage methodology used to organize and store data on a computer hard
drive or on a network-attached storage (NAS)
Block Storage is when a category of data storage is saved in huge volumes known as blocks
Object Storage is a computer data storage architecture that manages data as objects
MODULE I-2
58
Protocols:
- Object storage: REST and SOAP over HTTP
- File storage: SMB and NFS
- Block storage: SCSI, Fibre Channel, SATA
MODULE I-2
59
Semi-Structured Data is a form of structured data that does not conform with the formal structure of
data models associated with relational databases or other forms of data tables
Structured Data is data that adheres to a pre-defined data model and is therefore straightforward to
analyze.
Unstructured Data is information that is not organized in a pre-defined manner.
MODULE I-2
60
MODULE I-2
61
In normal operation, your data store provides all three properties. But the CAP theorem maintains that when
a distributed database experiences a network partition, you can provide either consistency or availability, not both.
MODULE I-2
62
ACID describe the set of properties of database transactions that guarantee data integrity despite errors,
system failures, power failures, or other issues.
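As a minimal sketch of atomicity (the table and values are hypothetical), a transaction either applies all of its statements or none of them:

-- Transfer 100 between two accounts: both updates commit together or not at all
BEGIN;
UPDATE accounts SET balance = balance - 100 WHERE id = 1;
UPDATE accounts SET balance = balance + 100 WHERE id = 2;
COMMIT;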
MODULE I-2
63
Database
Relational Database : These databases are categorized by a set of tables where data fits into a pre-defined category.
- The table consists of rows and columns, where each column holds data for a specific category
- Rows contain instances of the data defined according to the category
- The Structured Query Language (SQL) is the standard user and application program interface

NoSQL Database : These are used for large sets of distributed data.
- Some big data performance issues are not handled effectively by relational databases; such issues are easily managed by NoSQL databases
- They are very efficient at analyzing large amounts of unstructured data that may be stored on multiple virtual servers in the cloud
MODULE I-2
64
Database/Relational Database
MODULE I-2
65
Database/NoSQL Database
MODULE I-2
66
MODULE I-2
67
DataBase Comparison
NoSQL
Pros:
- Continuous availability
- Query speed
- Agility
- Cost

Cons:
- No standardized language
- Smaller user community
- Inefficiency with complex queries
- Data retrieval inconsistency
MODULE I-2
68
Message Queues
Message queuing allows applications to communicate by sending messages to each other. The message
queue provides temporary message storage when the destination program is busy or not connected.
The producer creates a message and sends it to the queue, which stores the message if the consumer is busy
The consumer retrieves the message from the queue and starts processing it
The queue temporarily locks the message to prevent it from being read by another consumer
The consumer deletes the message from the queue after it completes the message processing
MODULE I-2
69
Cloud Computing/Definition
Cloud Computing "Using resources without directly owning them" is a model that allows ubiquitous,
convenient, on-demand access to a shared network and a set of configurable computing resources. "NIST"
05 Key features
03 Service model
04 Deployment model
Benefits
MODULE I-2
70
On-demand self-service
Measurable and billable service
Universal access via the network
MODULE I-2
71
Cloud Computing/Benefits
Benefit from massive economies of scale
Increase speed and agility
Stop spending money to manage data centers
MODULE I-2
72
MODULE I-2
73
MODULE I-2
74
Different types of architectures: PRIVATE, PUBLIC, COMMUNITY, HYBRID
MODULE I-2
75
Cloud Computing/Stakeholders
MODULE I-2
76
Quizz
1. Which of the following statements are true regarding immutable servers? (Choose two correct answers.)
- In case of small changes, immutable servers must be rebuilt and redeployed
- Preparation and configuration tasks are moved from the time of deployment to the time of building
- Immutable servers store persistent data and cannot be deleted without data loss
- Immutable servers cannot use external services such as databases and object stores
- All interactions with immutable servers have to happen through shared file systems

2. An online shop needs to store information about clients and orders. A list of fixed properties for clients and orders exists. The data storage should enforce specific data types on these properties and ensure that each order is associated with an existing client. Which of the following cloud services is capable of fulfilling these requirements?
- An object store like OpenStack Swift
- An in-memory database like memcached
- A relational database like MariaDB
- A messaging service like OpenStack Zaqar
- A NoSQL database like MariaDB
MODULE I-2
77
PLAN
- VERSION CONTROL: Benefits; Local, Centralized, Distributed
- GIT: Installation, Configuration, Terminology, LifeCycle, Workflow
- GIT BRANCHING
- GIT MERGING
- GIT REBASE
MODULE I-3
79
Version control systems are a category of software tools that help record changes made to files by
keeping track of modifications made to the code in a special kind of database.
MODULE I-3
80
MODULE I-3
81
MODULE I-3
82
MODULE I-3
83
MODULE I-3
84
MODULE I-3
85
Distributed Version Control Systems: contain multiple repositories. Each user has their own repository and
working copy.
MODULE I-3
86
Git
Advantages: Code changes are easily and clearly tracked
Disadvantages: Does not support keyword expansion
MODULE I-3
87
Git installation
On Windows :
https://gitforwindows.org/
http://babun.github.io/ (Shell Emulator)
On OS X :
https://sourceforge.net/projects/git-osx-installer/
On Linux
<your package manager> install git
(apt-get, rpm,…)
MODULE I-3
88
Git configuration
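A minimal first-time setup typically looks like this (name, e-mail and default branch name are placeholders):

# Identity used in every commit
git config --global user.name "Your Name"
git config --global user.email "you@example.com"
# Optional: default branch name for new repositories
git config --global init.defaultBranch main
# Review the resulting configuration
git config --list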
MODULE I-3
89
Git terminology
MODULE I-3
90
Git terminology
MODULE I-3
91
MODULE I-3
92
MODULE I-3
93
Git Workflow
MODULE I-3
94
Git command
git init : Converts an existing unversioned project (workspace) into a git repository, or creates a new empty git repository. A ".git" subdirectory is created.
git clone : Downloads an existing git repository to your local computer. git clone -b branch_name <git url>: the -b argument lets you specify a specific branch to clone instead of the branch the remote HEAD is pointing to, usually the master branch.
git status : Shows the current branch | files that differ between Workspace ↔ Staging area (untracked (new) files and unstaged changes) | files that differ between Staging area ↔ Local Git repository (uncommitted changes).
git add : Adds changes in the workspace to the staging area. git add <file-name>, or git add . to add all files.
git commit : Adds changes in the staging area to the local Git repository. git commit: Staging area → Local git repository | git commit -a: Workspace → Local git repository (untracked files are not included, only those that have been added with git add at some point) | git commit -m 'commit message'.
git pull : Updates the local git repository from the corresponding remote git repository. git pull <remote> <branch>: Local git repository ← Remote git repository.
git push : Adds changes in the local git repository to the remote repository. git push <remote> <branch>: Local git repository → Remote git repository.
git branch : Lists all local branches. git branch -a: list remote branches as well | git branch -d <branch>: delete the specified branch | git branch <new branch>: create a new branch.
git checkout : Navigates between different branches. git checkout <branch> | git checkout -b <new branch>: create a new branch from your current branch and switch to it.
git merge : Integrates changes from multiple branches into one. git merge <branch>.
MODULE I-3
95
Git command
git remote : Manages connections to remote repositories. It allows you to show which remotes are currently connected, and also to add new connections or remove existing ones. git remote -v: list all remote connections | git remote add <name> <url>: create a new remote connection | git remote rm <name>: delete a connection to a remote repository | git remote rename <old name> <new name>: rename a remote connection.
git fetch : Updates the local git repository from the corresponding remote git repository. git fetch does not change your workspace; it keeps the fetched content separate until it is merged. git fetch <remote> <branch> | git checkout <remote>/<branch>: to view the changes | git fetch vs git pull: git pull = git fetch + git merge.
git stash : Takes your uncommitted changes (staged and unstaged) and saves them for later use.
fork : A copy of a repository (a hosting-platform concept rather than a git command). It allows you to freely experiment with changes without affecting the original project.
HEAD : A reference to the last commit in the currently checked-out branch.
git revert : Reverts some existing commits. Given one or more existing commits, revert the changes that the related patches introduce, and record new commits that undo them. This requires your working tree to be clean (no modifications from the HEAD commit).
git reset : Resets the current HEAD to the specified state. git reset HEAD~ --hard removes the last commit.
git cherry-pick : Sometimes you don't want to merge a whole branch into another and only need to pick one or two specific commits (cherry picking).
git diff : Shows changes between commits, or between a commit and the working tree.
MODULE I-3
96
Git command
git blame : Shows what revision and author last modified each line of a file.
git tag : Tags specific points in a repository's history as being important (v1.0, v2.0).
git rebase : Moves code to a new base commit or combines a sequence of commits.
squash : Squashes or regroups previous commits into one (for example with git rebase -i or git merge --squash). This is a great way to group certain changes together before sharing them with others.
.gitignore : A text file which tells Git which files and folders to ignore in a project. A local ".gitignore" file is usually placed in the root directory of a project. You can also create a global ".gitignore" file, and any entries in that file will be ignored in all of your Git repositories.
MODULE I-3
97
To copy a file from the working directory to the staging area, we use git
add.
To save the staging area in the git repository and create a new commit, we
use git commit.
To copy a file from the Git repository to the staging area, we use git reset.
To copy a file from the staging to the working directory (thus deleting the
current modifications), we use git checkout.
To view the changes between the working directory and the staging area,
we use git diff.
To see the changes between the staging area and the last commit, we use
git diff --cached.
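A short illustrative sequence of these commands (the file name is a placeholder):

echo "hello" > app.txt
git add app.txt              # working directory -> staging area
git commit -m "Add app.txt"  # staging area -> repository
git diff --cached            # staging area vs last commit (empty right after the commit)
git reset app.txt            # copy the file state from the repository back to the staging area
git checkout -- app.txt      # discard local modifications in the working directory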
MODULE I-3
98
MODULE I-3
99
Git branching
MODULE I-3
100
Git branching
MODULE I-3
101
MODULE I-3
102
MODULE I-3
103
MODULE I-3
104
Rebasing and merging are both designed to integrate changes from one branch into another
branch but in different ways.
Merge is the result of combining the commits of the feature branch
Rebase adds all the changes of the feature branch starting from the last commit of the master
branch
Rebasing a feature branch onto master moves the base of the feature branch to the master
branch's ending point.
Merging takes the contents of the feature branch and integrates them with the master branch. As a
result, only the master branch is changed. The feature branch history remains the same.
Merging adds a new commit to your history.
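In command form (branch names are illustrative):

# Merge: bring the feature branch into master with a merge commit
git checkout master
git merge feature

# Rebase: replay the feature branch commits on top of master
git checkout feature
git rebase master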
MODULE I-3
105
MODULE I-3
106
Quizz
1. Which of the following git commands is used to 2. Which of the following information is contained in the
manage files in a repository? (Choose Two correct output of “git status”? (Choose Three correct answers.)
answers.)
MODULE I-3
107
PLAN
- TRADITIONAL INTEGRATION
- CONTINUOUS INTEGRATION
- SOFTWARE TESTING
- CONTINUOUS DEPLOYMENT
- CI/CD DEPLOYMENT
- JENKINS: Build stages, Master & Slave architecture, Declarative Pipeline
MODULE I-4
109
Traditional Integration
In the traditional integration/software development cycle:
Each developer gets a copy of the code from the central repository
All developers begin from the same starting point and work on it
Each developer makes progress by working within their own team
Each developer adds methods and functions, shaping the code to meet their
needs
Meanwhile, the other developers and teams continue working on their own tasks,
solving the problems they have been assigned
MODULE I-4
111
Benefits of CI/CD
MODULE I-4
112
Continuous Integration
Continuous Integration :
Software development practice
Developers integrate code into a shared repository frequently
Each integration is verified by an automated build and automated tests to detect integration errors as
quickly as possible
This approach significantly helps teams develop cohesive software more rapidly
MODULE I-4
113
MODULE I-4
114
Software Testing
Software Testing is a method to check whether the actual software product matches expected requirements
and to ensure that the software product is defect free. Software testing is often categorized as White
Box, Black Box, or Grey Box testing.
Cost-Effective
Security
Product quality
Customer Satisfaction
MODULE I-4
115
In order to better understand the concepts of Continuous Delivery and Continuous Deployment,
we need to understand what Software Testing is and what its different types are:
MODULE I-4
116
MODULE I-4
117
MODULE I-4
118
Functional Testing : verify that there are no gaps between developed features/functions and required
features/functions.
MODULE I-4
119
Integration Testing : Individual units are grouped for testing. The aim is to
detect errors in the interaction between the integrated units.
MODULE I-4
120
MODULE I-4
121
Today, many software testing tools are of great importance, especially for automation testing.
MODULE I-4
122
Continuous Delivery
Continuous Delivery:
Software development practice where code changes are automatically prepared for a release to production.
Expands upon continuous integration by deploying all code changes to a testing environment after the build
stage.
Developers will always have a deployment-ready build artifact that has passed through a standardized test
process
MODULE I-4
123
MODULE I-4
124
Continuous Deployment
Continuous Deployment:
MODULE I-4
125
With Continuous Delivery, every code change is built, tested and then pushed to a non-production testing
or staging environment
There can be multiple, parallel test stages before a production deployment
The difference between Continuous Delivery and Continuous Deployment is the presence of a manual
approval to update to production.
MODULE I-4
126
MODULE I-4
127
CI/CD Deployment
Blue / Green Deployment is a technique for deployments where the existing running deployment is left in
place. A new version of the application is installed in parallel with the existing version.
When the new version is ready, cut over to the new version by changing the load balancer configuration.
MODULE I-4
128
CI/CD Deployment
Canary Deployments are like Blue/Green, although only a small number of servers are upgraded at first. Then,
using a cookie or a similar mechanism, a fraction of users are directed to the new version.
MODULE I-4
129
CI/CD Tools
MODULE I-4
130
Jenkins
Jenkins :
MODULE I-4
131
Why Jenkins?
Easy to install :
Download one file -> jenkins.war
Run one command -> java -jar jenkins.war
Easy to use :
Create a new job: check out and build a small project
Check in a change: watch it build
Create/fix a test: watch it build and run; check in and watch it pass
Multi-technology :
Build C, Java, C#, Python, Perl, SQL
Test with JUnit, NUnit, MSTest
Great extensibility :
Support for different VCS
Code quality metrics, build notifiers and UI customization
MODULE I-4
132
Jenkins Interface
Free style
Building a Maven Project
Pipeline and multibranch pipeline (most used for
Git projects)
MODULE I-4
133
MODULE I-4
134
MODULE I-4
135
Jenkins workflow
MODULE I-4
136
MODULE I-4
137
MODULE I-4
138
MODULE I-4
139
JavaDoc publication
MODULE I-4
140
MODULE I-4
141
Jenkins / Architecture
MODULE I-4
142
Jenkins / Architecture
Jenkins Master :
Schedules and executes build jobs directly
Dispatches builds to the slaves for the actual execution
Monitors the slaves (possibly taking them online and offline as required)
Records and presents the build results
Jenkins Slave :
It listens for requests from the Master instance
Slaves can run a variety of Operating Systems
The job of a slave is to do as it is told, which involves executing build jobs
dispatched by the Master
A project can be configured to always run on a particular Slave machine/type, or simply let
Jenkins pick the next available Slave
MODULE I-4
143
MODULE I-4
144
Go to the Manage Jenkins section and scroll down to the section of Manage Nodes
MODULE I-4
145
On New Node
Give a name for the Node, Choose the Permanent Agent option and click on OK
MODULE I-4
146
MODULE I-4
147
Declarative Pipeline is a relatively recent addition to Jenkins Pipeline, which features a more simplified and
customized syntax in addition to the Pipeline subsystems. Declarative "Section" blocks for common
configuration areas like :
Stages
Tools
Post-build actions
Notifications
Environment
Build agent
All wrapped up in a pipeline { } step,
with syntactic and semantic validation available.
MODULE I-4
148
Stages block look the same as the new block-scoped stage step
Think of each stage block as like an individual Build Step in a Freestyle job
There must be a stages section present in your pipeline block
Example
stages {
stage("build") {
timeout(time: 5, unit: 'MINUTES') {
sh './run-some-script.sh'
}
}
stage("deploy") {
sh "./deploy-something.sh"
}
}
MODULE I-4
149
MODULE I-4
150
MODULE I-4
151
Block of key=value pairs that will be added to the environment when the build runs in.
Example
environment {
  FOO = "bar"
  BAZ = "faz"
}
MODULE I-4
152
Post Build and notifications both contain blocks with one or more
build condition keys and related step blocks.
The steps for a particular build condition will be invoked if that build
condition is met.
Post Build checks its conditions and executes them, if satisfied,
after all stages have completed, in the same Node/Docker container
as the stages.
Notifications checks its conditions and executes them, if satisfied,
after Post Build, but doesn't run on a Node at all.
MODULE I-4
153
notifications {
  success { hipchatSend 'Build passed' }
  failure {
    hipchatSend 'Build failed'
    mail to: '[email protected]',
         subject: 'Build failed',
         body: 'Fix me please!'
  }
}
----------------------------------------------
postBuild {
  always {
    archive "target/**/*"
    junit 'path/to/*.xml'
  }
  failure {
    sh './cleanup-failure.sh'
  }
}
MODULE I-4
154
MODULE I-4
155
pipeline {
agent none
stages {
stage('distribute') {
parallel (
'windows': {
node('windows') {
bat 'print from windows'
}
},
'mac': {
node('osx') {
sh 'print from mac'
}
},
'linux': {
node('linux') {
sh 'print from linux'
}
}
)
}
}
}
MODULE I-4
156
Quizz
1. Which of the following statements are true about 2. Which of the following post condition exist in a Jenkins
Jenkins? (Choose Two correct answers.) Declarative Pipeline?
MODULE I-4
157
PART II.
Machine Deployment
158
PLAN
- VIRTUAL MACHINE
- VAGRANT: Features, Architecture, Workflow
- VAGRANTFILE: Configure, Options, Providers, Provisioners, Boxes
- VAGRANT COMMANDS
MODULE II-1
160
Virtual Machine
Virtual Machine (VM) is a software implementation of a machine (computer) that executes programs like a
physical machine.
The operating system creates the illusion of multiple processes, each executing on its own processor
with its own (virtual) memory.
MODULE II-1
161
MODULE II-1
162
MODULE II-1
163
Virtual Machine
Advantages Disadvantages
MODULE II-1
164
MODULE II-1
165
Vagrant features
Providers are the services that Vagrant uses to set up and create
virtual environments.
Example : VirtualBox, VMWare, Hyper-V, Docker, AWS...
MODULE II-1
166
Vagrant Architecture
MODULE II-1
167
Vagrant Workflow
MODULE II-1
168
Vagrantfile
Vagrantfile is a Ruby file that instructs Vagrant to create, depending on how it is executed, new Vagrant
machines or boxes.
Vagrant Box is considered as an image, a template from which we will deploy our future virtual machines.
Vagrant Box is a compiled Vagrantfile describing a type of Vagrant machines. A new Vagrant machines
can be created from a Vagrant Box
Vagrantfile can directly create one or more Vagrant machines
# Simple Vagrantfile example
Vagrant.configure("2") do |config|
  config.vm.box = "hashicorp/bionic64"
  config.vm.hostname = "node1"
  config.vm.network "private_network", ip: "192.168.33.10"
  config.vm.synced_folder "../data", "/vagrant_data"
  config.vm.provider "virtualbox" do |vb|
    vb.customize ["modifyvm", :id, "--memory", 1024 * 4]
  end
  config.vm.provision :shell, path: "bootstrap.sh"
end
MODULE II-1
169
Vagrantfile
Vagrant.configure("2"): returns the Vagrant configuration object for the new box. Config alias is
used to refer to this object. The version 2 of Vagrant API is used
Vm.box : is the base box that we are going to use. The schema for box names is the maintainer
account in Vagrant Cloud followed by the box name.
Vm.hostname : sets the hostname of the box
Vm.network : Configures network
vm.synced_folder : to configures the synced folders between the host and the guest
Vm.provider : Configures settings specific to a provider. Allows overriding options for the Virtual
Machine provider. For example: memory, CPU, ...
Vm.provision : to specify the name of the file that is going to be executed at the machine creation
MODULE II-1
170
Port forwarding : Map port 80 of the service running in the Vagrant virtual machine (guest) to port 8080
of the host machine, so that requests to the host on port 8080 reach the service in the VM.
config.vm.network "forwarded_port", guest: 80, host: 8080
By default, networks are private (only accessible from the host machine)
Use the "public_network" flag to make the guest network accessible from the LAN (Local Area Network)
MODULE II-1
171
Vagrantfile Options
MODULE II-1
172
Vagrant provisioners
Allows initial configuration of the VM, to easily set up your VM with everything it needs to run your
software
An important part of making VM creation repeatable
Scripts made for provisioning can typically be used to set up production machines quickly as well
Some available provisioners :
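One of them is the shell provisioner; for instance, it can be declared inline in the Vagrantfile (box name and script contents are illustrative):

Vagrant.configure("2") do |config|
  config.vm.box = "hashicorp/bionic64"
  # Inline shell provisioner, executed on the first "vagrant up"
  config.vm.provision "shell", inline: <<-SHELL
    apt-get update
    apt-get install -y nginx
  SHELL
end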
MODULE II-1
173
Vagrant Boxes
Box is the base image used to create a virtual environment with Vagrant
A box is a compressed file containing the following :
Vagrantfile : the information from this will be merged into your Vagrantfile that is created when you run
vagrant init boxname in a folder.
Box-disk.vmdk : The Virtual Machine image.
Box.ovf : Defines the virtual hardware for the box
Metadata.json : Informs Vagrant about the provider the box works with.
vagrant box list See a list of all installed boxes on your computer
MODULE II-1
174
Creating a VM : vagrant init : Initialize vagrant with a Vagrantfile and ./.vagrant directory
vagrant init -f : Create a new Vagrantfile, overwriting the one at the current path
vagrant init --box-version : Create a Vagrantfile, locking the box to a version constraint
vagrant init <boxpath> : Initialize Vagrant with a specific box. To find a box, go to the public Vagrant box catalog. For example,
vagrant init ubuntu/trusty64
Starting a VM
vagrant up Starts vagrant environment (also provisions only on the FIRST vagrant up command)
vagrant resume Resume a suspended machine (vagrant up works just fine for this as well)
vagrant reload --provision Restart the virtual machine and force provisioning
MODULE II-1
175
Getting into a VM
vagrant ssh <boxname> If you give your box a name in your Vagrantfile, you can ssh into it with. Boxname works from any directory
Stopping a VM
Saving Progress
Vagrant snapshot save [options][vm-name] <name> Allows us to save the VM so that we can roll back at a later time.
MODULE II-1
176
Other tips
vagrant global-status --prune Outputs the status of all vagrant machines, but prunes invalid entries
vagrant provision --debug Use the debug flag to increase the verbosity of the output
vagrant push Vagrant can be configured to deploy code to a remote central registry
vagrant up --provision | tee provision.log Runs vagrant up, forces provisioning and logs all output to a file
MODULE II-1
177
Quizz
1. Which of the following are default Vagrant 2. Which of the following elements are present in a Vagrant
providers? box file ?
MODULE II-1
178
MODULE II-2 :
Cloud deployment
179
PLAN
- CLOUD DEPLOYMENT MODELS: Private, Public, Hybrid, Community
- CLOUD PLATFORMS: Cloud Foundry, OpenShift, OpenStack
- CLOUD-INIT: Syntax, Example
MODULE II-2
180
Clouds can be classified in terms of who owns and manages them. Types of cloud (deployment models):
Public Cloud,
Private Cloud,
Hybrid Cloud,
Community Cloud,
MODULE II-2
181
Private Cloud
A private Cloud or internal Cloud is used when the Cloud infrastructure, a proprietary network or data
center, is operated solely for a business or organization, and serves customers within the business firewall.
Most private Clouds belong to large companies or government departments that prefer to keep their data in
a more controlled and secure environment.
The difference between a private Cloud and a public Cloud is that in a private Cloud-based service, data
and processes are managed within the organization, without the restrictions of network bandwidth, security
exposures and legal requirements.
MODULE II-2
182
Private Cloud
MODULE II-2
183
MODULE II-2
184
Cloud Foundry : provides a highly efficient, modern model for cloud native application
delivery on top of Kubernetes.
Application and services centric lifecycle API
Container-based architecture
External dependencies are considered services
Openstack :
Virtual servers and other resources are made available to customers
Interrelated components that control diverse, multi-vendor hardware pools of
processing, storage and networking resources throughout a data center
MODULE II-2
185
Cloud Deployment
Cloud-init :
MODULE II-2
186
Cloud-init Modules
Cloud-init has modules for handling a range of configuration tasks. Some of the things it configures are :
MODULE II-2
187
Import ssh keys for launchpad user 'smoser' and add his ppa
#cloud-config
ssh_import_id: [smoser]
apt_sources:
  - source: "ppa:smoser/ppa"
MODULE II-2
188
Cloud-init Example
Configuration of instance through "user-data" provided to cloud-init
The most popular formats for scripts user-data is the cloud-config.
Example of YAML file "cloud-init.yaml"
#cloud-config
package_update: true
packages:
- apt-transport-https
- ca-certificates
- curl
- gnupg-agent
- software-properties-common
runcmd:
- curl -fsSL https://download.docker.com/linux/ubuntu/gpg | apt-key add -
- add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"
- apt-get update -y
- apt-get install -y docker-ce docker-ce-cli containerd.io
- systemctl start docker
- systemctl enable docker
File compatible with Ubuntu instance. Necessary to adapt this file if you are using another operating
system
MODULE II-2
189
Cloud-init Example
Some explanations :
- package_update : Update of the apt database on first boot.
- packages : The list of packages to install
- runcmd : Contains a list of commands to be executed
- final_message : This message will be displayed at the end of the first boot (find it in the cloud-init log)
MODULE II-2
190
MODULE II-2
191
Quizz
1. What must be the first line of a plain text user-data configuration containing YAML configuration for cloud-init?
- Cloud-config
- --cloud-config
- #!/usr/bin/cloud-init
- [cloud-config]
- #cloud-config

2. How is cloud-init integrated with a managed system image?
- Provides the cloud-init-worker command which has to be invoked periodically within the running instance
- Provides its own startup mechanism which replaces the instance's original init system such as systemd
- Provides systemd units which must be included in several stages of the booting process of the instance
- Provides a Linux kernel module that must be included and loaded in the instance's initramfs
- Provides the cloud-init-daemon service which is launched during startup and keeps the instance in sync with the desired configuration
MODULE II-2
192
PLAN
- PACKER: Advantages, Use cases
- PACKER INSTALLATION
- PACKER WORKFLOW
- PACKER BUILD
- PACKER PROVISION
- PACKER COMMANDS
MODULE II-3
194
Packer
Packer is an open-source tool for creating identical machine images for multiple platforms from a single
source configuration
Packer is lightweight, runs on every major operating system
Packer does not replace configuration management like Chef/Puppet when building images, on the
contrary it uses them to install software onto the image.
MODULE II-3
195
Multi-provider portability
Packer creates identical images for multiple platforms : Run development in desktop virtualization
solutions like VMWare/VirtualBox , staging/QA in a private Cloud like Openstack and production in
AWS/Azure.
Improved stability
Packer installs and configures all the software for a machine at the time the image is built
If there are bugs in these scripts, they'll be caught early, rather than several minutes after a machine is
launched.
MODULE II-3
196
Continuous Delivery
Packer is lightweight, portable and command-line driven. This makes it the perfect tool to put in the middle of
your Continuous delivery pipeline.
Dev/Prod Parity
Packer helps keep development, staging and production as similar as possible
Appliance/Demo Creation
Packer is perfect for creating appliances and disposable product demos. As your software changes, you can
automatically create appliances with the software pre-installed.
MODULE II-3
197
Supported Platforms
You can add support to any platform by extending Packer using plugins
MODULE II-3
198
Packer Installation
macOS (Homebrew) :
$ brew tap hashicorp/tap
$ brew install hashicorp/tap/packer
MODULE II-3
199
Packer Workflow
MODULE II-3
200
Packer Build
packer.json
{
  "variables": {
    "aws_access_key": "{{env `AWS_ACCESS_KEY`}}",
    "aws_secret_key": "{{env `AWS_SECRET_KEY`}}"
  },
  "builders": [{
    "type": "amazon-ebs",
    "access_key": "{{user `aws_access_key`}}",
    "secret_key": "{{user `aws_secret_key`}}",
    "region": "us-east-1",
    "source_ami": "ami-fce3c696",
    "instance_type": "t2.micro",
    "ssh_username": "admin",
    "ami_name": "yourApp {{timestamp}}"
  }]
}
Packer can create multiple images for multiple platforms in parallel, all configured from a single template [8].
MODULE II-3
201
The -var-file flag can be specified multiple times, and variables from multiple files will be read and applied. Combining the -var and -var-file flags also works as you would expect. Flags
set later in the command override flags set earlier.
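For example (file and variable names are illustrative):

# Later flags override earlier ones; -var overrides values set in the files before it
packer build -var-file=common.json -var-file=prod.json -var 'region=us-east-1' packer.json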
MODULE II-3
202
Packer Provision
packer.json
{
  "variables": ["..."],
  "builders": ["..."],
  "provisioners": [{
    "type": "shell",
    "inline": [
      "sleep 30",
      "sudo apt-get update",
      "sudo apt-get install -y redis-server"
    ]
  },
  {
    "type": "shell",
    "script": "./scripts/install-java.sh"
  }]
}
The initial "sleep 30" waits for SSH to become available before installing packages.
Others – Remote shell, File uploads, Ansible (local&remote), Chef, Puppet, Salt, PowerShell etc.
MODULE II-3
203
Others – Amazon Import, CheckSum, Docker Push/Tag/Save, Google Compute Export, Vagrant, vSphere.
MODULE II-3
204
MODULE II-3
205
Packer command
CLI
packer plugins required : List plugins that will be installed by "packer init"
packer build : Build image(s) from a template. Takes a template and runs all the builds within it in order to generate a set of
artifacts. Use -force to force a builder to run
packer fmt : Format HCL2 configuration files to a canonical format and style
MODULE II-3
206
Quizz
1. What does the command packer validate template.json 2. Which of the following sections must exist in a Packer
do? template ?
MODULE II-3
207
PART III.
Container Management
…
208
PLAN
- VIRTUALIZATION
- CONTAINERIZATION OF APP: Dockerfile, Instructions, Example, Commands
- DOCKER: Functionality, Benefits, Architecture, Engine
- MULTI-STAGE BUILD
- CONTAINER RUNTIME
- DOCKER IMAGES: Registries, Naming and tagging, Layers, Commands
MODULE III-1
210
Virtualization
In computing, virtualization refers to the act of creating a virtual version of something, including virtual
hardware platforms, storage devices, and network resources.
Containerization is a form of virtualization where applications run in isolated user spaces, called containers,
while using the same shared operating system (OS).
MODULE III-1
211
MODULE III-1
212
Docker
Docker is software that runs on Linux and Windows environments
It creates, manages and orchestrates containers
The Docker project is open source and the upstream lives in the moby/moby repo on GitHub
Docker, Inc. is the overall maintainer of the open-source project and offers commercial versions of
Docker with support contracts
There are two main editions of Docker : Enterprise Edition (EE) and Community Edition (CE)
Docker version numbers follow the YY.MM-xx versioning scheme, e.g. "19.03.12 (25 June 2020)"
It is a tool designed to benefit both developers and IT operators, making it a part of many DevOps
toolchains
toolchains.
MODULE III-1
213
Docker Technologies
Docker technologies include at least three things to be aware :
The runtime
The daemon or Engine
The orchestrator
MODULE III-1
214
Docker Properties
MODULE III-1
215
Benefits of Docker
MODULE III-1
216
Docker Architecture
MODULE III-1
217
Docker Engine is the infrastructure plumbing software that runs and orchestrates containers (for a VMware
admin: comparable to ESXi).
Docker Engine is modular in design with many swappable components. Where possible, these are based
on open standards outlined by the Open Container Initiative (OCI).
Docker Engine is made up of many specialized tools: APIs, execution driver, runtime, shims, etc.
All other Docker, Inc. and 3rd-party products plug into the Docker Engine and build around it.
MODULE III-1
218
Containers
Container is the runtime instance of an image. In the same way that we can start a VM from Virtual Machine
template.
Run until the App they are executing exits and share the OS/kernel with the host they're running on.
MODULE III-1
219
Containers Commands
docker container run -it ubuntu /bin/bash : Start an Ubuntu container in the foreground, and tell it to run the Bash shell
[Ctrl + PQ] : Detach your shell from the terminal of a container and leave the container running (Up) in the background
docker container ls : Lists all containers in the running (Up) state. With the -a flag you will also see containers in the stopped (Exited) state
docker container exec -it <container-name or container-id> bash : Lets you run a new process inside of a running container. This command will start a new Bash shell inside of a running container and connect to it
docker container stop <container-name or container-id> : Stop a running container and put it in the Exited (0) state
docker container inspect <container-name or container-id> : Show detailed configuration and runtime information about a container
MODULE III-1
220
Docker Images
MODULE III-1
221
Docker images are stored in image registries. The most common registry is Docker Hub
(https://hub.docker.com)
The Docker client is opinionated and defaults to using Docker Hub.
Image registries contain multiple image repositories
MODULE III-1
222
Official repositories : contain images that have been vetted by Docker. Inc.
Examples : nginx (https://hub.docker.com/_/nginx/), mongodb (https://hub.docker.com/_/mongo/)
Unofficial repositories : you should not expect them to be safe, well-documented or built according to
best practices.
MODULE III-1
223
Pulling images from official repositories is as simple as giving the repository name and tag separated by a
colon (:) : docker image pull <repository>:<tag>
If you do not specify an image tag, Docker will assume you are referring to the image tagged as latest
An image tagged as latest is not guaranteed to be the most recent image in a repository
A single image can have as many tags as you want
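For example (the repository and tag shown are illustrative):

docker image pull nginx:1.25   # pull a specific tag
docker image pull nginx        # equivalent to nginx:latest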
MODULE III-1
224
To see the layers of an image, you can inspect the image with the docker image inspect command
MODULE III-1
225
All Docker images start with a base layer, and as changes are made and new content is added, new
layers are added on top.
Multiple images can, and do, share layers. This leads to efficiencies in space and performance.
These lines tell us that Docker is smart enough to recognize when it is being asked to pull an image layer that
it already has a copy of.
MODULE III-1
226
Multi-architecture images
A single image (repository:tag) can have an image for Linux on x64, Linux on PowerPC, Windows x64,
ARM etc.
To make this happen, the Registry API supports two important construct :
- Manifest lists : a list of architectures supported by a particular image tag. Each supported architecture then
has its own manifest detailing the layers it is composed from.
- Manifests : contain image config and layer data
MODULE III-1
227
docker image pull <image_name>:<image_tag> : The command to download images. By default, images will be pulled from repositories on Docker Hub
docker image ls : Lists all of the images stored in your Docker host's local cache
docker image inspect <image_name>:<image_tag> : Gives all the details of an image: layer data and metadata
docker rmi <image_name>:<image_tag> : Delete an image. It is impossible to delete an image associated with a container in the running (Up) or
stopped (Exited) state
MODULE III-1
228
Containerizing an App
The process of taking an application and configuring it to run as a container is called “Containerizing”,
Sometimes we call it “Dockerizing”.
Containers are all about apps. They're about making apps simple to build, ship, and run.
The process of containerizing an app looks like this :
MODULE III-1
229
Dockerfile
Dockerfile is the blueprint that describes the application and tells Docker how to build it into an image
The directory containing the application is referred to as the build context
It’s a common practice to keep your Dockerfile in the root directory of the build context
Dockerfile starts with a capital “D” and is all one word “Dockerfile”
It can help bridge the gap between development and operations
Should be treated as code, and checked into a source control system
If an instruction is adding new content such as files and programs to the image, it will create a new
layer. If it is adding instructions on how to build the image and run the application, it will create
metadata.
MODULE III-1
230
Dockerfile Instructions
INSTRUCTION DESCRIPTION
FROM First instruction in Dockerfile and it identifies the image to inherit from
ENTRYPOINT The final script or application used to bootstrap the container, making it an executable application
CMD Provide default arguments to the ENTRYPOINT using a JSON array format
WORKDIR Sets working directory for RUN, CMD, ENTRYPOINT, COPY, and/or ADD instructions
MODULE III-1
231
Dockerfile Example
Dockerfile is a text document that contains all the commands a user could call on the command line to
assemble an image
$ cat Dockerfile
FROM ubuntu
RUN apt-get update && apt-get install -y python3 python3-pip && \
    pip install flask
COPY app.py /opt/
ENTRYPOINT FLASK_APP=/opt/app.py flask run --host=0.0.0.0 --port=8080
MODULE III-1
232
The Docker build command builds Docker images from a Dockerfile and a “context”
FROM ubuntu
RUN apt-get update && apt-get install -y python3 python3-pip
RUN pip install flask
COPY app.py /opt/
ENTRYPOINT FLASK_APP=/opt/app.py flask run --host=0.0.0.0 --port=8080
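To build and run the image from the build context (the image name and port mapping are illustrative):

docker image build -t newflaskapp:latest .
docker container run -d -p 8080:8080 newflaskapp:latest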
MODULE III-1
233
Before we can push an image, we need to tag it in a special way (if we do not specify values for registry or
tag, Docker will assume Registry=docker.io and Tag=latest)
# docker image tag newflaskapp:latest gilbertfongan/newflaskapp:latest
MODULE III-1
234
MODULE III-1
235
Dockerfile/Multi-stage Builds
Docker images with complexity and big Instructions are bad => More potential vulnerabilities and
possibly a bigger attack surface
Multi-stage builds are all about optimizing builds without adding complexity.
With multi-stage builds, we have a single Dockerfile containing multiple FROM instructions. Each FROM
instruction is a new build stage that can easily COPY artefacts from previous stages.
MODULE III-1
236
WORKDIR /usr/src/atsea/app/react-app
COPY react-app .
WORKDIR /usr/src/atsea
COPY pom.xml .
COPY . .
FROM java:8-jdk-alpine
WORKDIR /static
WORKDIR /app
CMD ["--spring.profiles.active=postgres"]
MODULE III-1
237
Commands DESCRIPTION
docker image build -t <repository_name>:<tagname> <build_context> : Command that reads a Dockerfile and containerizes the application.
The -t flag tags the image; the -f flag lets you specify the name and location of the Dockerfile
docker image push <repository_name>:<tagname> : Push the containerized app to an image registry (by default Docker Hub)
docker tag SOURCE_IMAGE[:TAG] TARGET_IMAGE[:TAG] : Create a tag TARGET_IMAGE that refers to SOURCE_IMAGE
MODULE III-1
238
Docker workflow
MODULE III-1
239
Container Lifecycle:
docker create <IMAGE> : Create a container without starting it
docker rename <CONTAINER_NAME> <NEW_CONTAINER_NAME> : Rename a container
docker run <IMAGE> : Create and start a container
docker update <CONTAINER> : Update the configuration of a container

Image Lifecycle:
docker build <PATH|URL> : Create an image from a Dockerfile
docker build -t <TAG> <PATH|URL> : Build an image from a Dockerfile and tag it
docker pull <IMAGE> : Pull an image from a registry
docker push <IMAGE> : Push an image to a registry
docker load < <TAR_FILE> : Load an image from a tar archive or stdin
MODULE III-1
240
Information
docker wait <CONTAINER> : Block until the container stops, then print its exit code
docker events : List real-time events from the Docker daemon
docker attach <CONTAINER> : Attach local standard input, output, and error streams to a running container
docker top <CONTAINER> : Show the running processes in a container
MODULE III-1
241
Quizz
1. Which of the following instructions in a Dockerfile can 2. Which of the following mechanisms are used for service
download a file via HTTP into the container image? discovery in a container environment ?
MODULE III-1
242
PLAN
- DOCKER COMPOSE: Compose file, Deploy, Commands
- DOCKER SWARM: Swarm Cluster, Swarm services, Swarm network mode, Commands
- DOCKER NETWORKING: Service Discovery, Commands
- KUBERNETES: Architecture, Pods, Services, Deployment, Commands
MODULE III-2
244
Docker Compose
MODULE III-2
245
Docker Compose
Docker Compose define multi-container (multi-service) apps in a YAML file, pass the YAML file to the docker-
compose binary, and Compose deploys it through the Docker Engine API.
Docker Compose lets you describe an entire app in a single declarative configuration file
Deploy an entire app with a single command (docker-compose up)
Once the app is deployed, you can manage its entire life cycle with a simple set of commands
MODULE III-2
246
Compose File
The example below shows a simple Compose file that defines a small Flask app with two services, a network and a volume [a simple web server that counts the number of visits and stores the value in Redis].
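A Compose file along these lines would match that description (service, network and volume names are illustrative):
version: "3.8"
services:
  web-fe:
    build: .
    command: python app.py
    ports:
      - "5000:5000"
    networks:
      - counter-net
    volumes:
      - counter-vol:/code
  redis:
    image: redis:alpine
    networks:
      - counter-net
networks:
  counter-net:
volumes:
  counter-vol: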
Compose File
The structure of the Docker Compose File is as follows
Version : The version of the Compose file format (API) [Which can be depending on the Docker Engine
release]. This does not define the version of Docker Compose or the Docker Engine
Services : Define the different application services (Compose will deploy each of these services as its
own container).
- Build : Build a new image using the instructions in the Dockerfile in the current directory.
- Command : Run a command (For Example to run Python app as the main App in the container.)
- Ports : Map port 5000 inside the container (target) to port 5000 on the host (published)
- Networks : Which network to attach the service's container to. The network should already be defined in the networks top-level key.
- Volumes : Which volume to attach the service's container to. The volume should already be defined in the volumes top-level key.
MODULE III-2
248
Dockerfile : Describes how to build the image for the web-fe service
app.py : The Python Flask application code
requirements.txt : Lists the Python packages required by the app
docker-compose.yml : Describes how Docker should deploy the app
MODULE III-2
249
MODULE III-2
250
After running the docker-compose up command in our repository, we can see three images built or pulled as part of the deployment.
The list of running containers is as follows (note that the name of each container is prefixed with the project name, i.e. the name of the working directory).
With the scalability feature of Compose services, each container also carries a numeric suffix that indicates the instance number.
MODULE III-2
251
In addition to the services, Docker-compose also created the networks and volumes
To stop the Docker-Compose App without deleting the images and volumes created
MODULE III-2
252
COMMANDS DESCRIPTION
docker-compose up    Deploy a Compose app. It expects the Compose file to be called docker-compose.yml or docker-compose.yaml, but you can specify a custom filename with the -f flag. It's common to start the app in the background with the -d flag
docker-compose stop    Stop all containers in a Compose app without deleting them from the system
docker-compose restart    Restart a Compose app that has been stopped with docker-compose stop. If you have made changes to your Compose app since stopping it, these changes will not appear in the restarted app; you will need to re-deploy the app to get them
docker-compose ps    List each container in the Compose app, showing its current state, the command it is running, and its network ports
docker-compose down    Stop and delete a running Compose app. It deletes containers and networks, but not volumes and images
MODULE III-2
253
Docker Networking
MODULE III-2
254
Docker Networking
Docker runs applications inside of containers, and these need to communicate over lots of different
networks.
Docker has solutions for container-to-container networks, as well as connecting to existing networks
and VLANs.
Docker networking is based on an open-source pluggable architecture called the Container Network
Model (CNM)
Libnetwork is Docker’s real-world implementation of CNM, and it provides all of Docker’s core
networking capabilities.
Drivers plug in to libnetwork to provide specific network topologies such as VXLAN overlay networks.
MODULE III-2
255
MODULE III-2
256
MODULE III-2
257
$ docker container run -it \
    --network none \
    --name no-net-alpine \
    alpine:latest \
    ash
MODULE III-2
258
Single-host means it only exists on a single Docker host and can only connect containers that are on the
same host
Bridge means that it’s an implementation of an 802.1d bridge (layer 2 switch)
Docker on Linux creates single-host bridge networks with the built-in bridge driver, whereas Docker on
Windows creates them using the built-in nat driver.
MODULE III-2
259
MODULE III-2
260
For this example, illustrated by this diagram, the application running in the
container is operating on Port 80
All incoming traffic on the Host 10.0.0.15 with port 5000 is mapped to port
80 of the running container.
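As a sketch (image and container name are illustrative), such a mapping is created with the -p/--publish flag:
$ docker container run -d --name web --publish 5000:80 nginx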
MODULE III-2
261
MODULE III-2
262
MODULE III-2
263
MODULE III-2
264
MODULE III-2
265
Host Network
The Host Network driver option, as opposed to the bridge, eliminates the network isolation between the
container and the host system by allowing the container to directly access the host network.
Features :
The overlay network is used to manage Swarm and service-related traffic
The Docker daemon's host network and ports are used to send data for individual Swarm services
Advantages :
Optimizes performance (eliminating the need for NAT, since container ports are automatically published and available as host ports)
Handles a large range of ports
Does not require a "userland-proxy" for each port
Host Network
MODULE III-2
267
MODULE III-2
268
MODULE III-2
269
To attach a service to the Overlay Network : $ docker service create --name=test-devops --network devops -p 80:80 gilbertfongan/demo:v1
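The overlay network referenced here would typically be created beforehand on a manager node, for example:
$ docker network create -d overlay devops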
MODULE III-2
270
MODULE III-2
271
MODULE III-2
272
Advantages :
Precautionary measures :
Keep the number of unique MAC addresses low to avoid harming the network
Handle "promiscuous mode", which isn't allowed on most public cloud platforms
MODULE III-2
273
MODULE III-2
274
Create a new MACVLAN Network called “macvlan5” that will connect containers to VLAN5:
$ docker network create -d macvlan \
    --subnet=172.16.0.0/24 \
    --ip-range=172.16.0.0/25 \
    --gateway=172.16.0.1 \
    -o parent=eth0.5 \
    macvlan5
The MACVLAN5 Network is ready for containers. Create a container to deploy with the Network:
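For example (container name and image are illustrative):
$ docker container run -d --name macvlan-c1 --network macvlan5 alpine sleep 1d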
MODULE III-2
275
MODULE III-2
276
1. The "ping c2" command invokes the local DNS resolver to resolve the name "c2". All Docker containers have a local DNS resolver.
2. If the local resolver doesn't have an IP address for "c2" in its cache, it initiates a recursive query to the Docker DNS server. The local resolver is pre-configured to know how to reach the Docker DNS server.
3. The Docker DNS server holds name-to-IP mappings for all containers created with the --name or --net-alias flags. It knows the IP address of container "c2".
4. [Same network] The DNS server returns the IP address of "c2" to the local resolver in "c1".
5. The ping command is sent to the target IP address of "c2".
MODULE III-2
277
COMMANDS DESCRIPTION
docker network inspect    Provides detailed configuration information about a Docker network
MODULE III-2
278
Security : Nodes enforce mutual TLS authentication and encryption to secure communications between nodes
Scalability : Tasks can be added or removed automatically, allowing users to scale services up or down as needed
Decentralized design : An entire Swarm can be built from a single disk image
Integration : Cluster management is integrated with the Docker Engine, so users can manage Swarms without additional orchestration software
Rolling updates : Service updates can be rolled out to nodes incrementally. In case of a problem, you can roll back to a previous safe version of the service
Declarative service model : Lets you define the desired state of a service
Service discovery : An embedded DNS server can be used to query any container that runs within the Swarm
MODULE III-2
279
MODULE III-2
280
MODULE III-2
281
- Managers : Control plane of the cluster; they manage the state of the cluster and dispatch tasks to workers
- Workers : Accept tasks from Managers and execute them
Configuration and state of the Swarm is held in a distributed etcd database located on all managers
MODULE III-2
282
- Service is a higher-level construct that wraps some advanced features around containers.
- A task or replica is a container wrapped in a service
High-level view of Swarm cluster :
MODULE III-2
283
Swarm Mode
Replicated services (default) : This deploys a desired number of replicas and distributes them as evenly
as possible across the cluster.
Global services : This runs a single replica on every node in the Swarm.
MODULE III-2
284
MODULE III-2
285
The default port that Swarm mode operates on is 2377, for secured (HTTPS) client-to-swarm connections
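A Swarm is typically initialized from the first manager node, for example (the advertise address is illustrative and matches the manager address used in the join command below):
$ docker swarm init --advertise-addr 172.17.8.104:2377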
Join a worker to the Swarm (Copy the output of the previous command)
$ docker swarm join-token worker (Run from the Manager 1 node to extract the command and token required to add new workers)
$ docker swarm join --token XXXXXXXXXXXXXXX 172.17.8.104:2377 (Run from Worker)
MODULE III-2
286
MODULE III-2
287
Swarm Services
MODULE III-2
288
MODULE III-2
289
MODULE III-2
290
$ docker service ls
MODULE III-2
291
Ingress mode (default) : Services published (with --publish) can be accessed from any node in the Swarm
Host mode : Services published with mode=host added to --publish can only be accessed via nodes running service replicas
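As a sketch (the image name reuses the earlier example), a host-mode publication looks like this:
$ docker service create -d --name web-host --publish published=80,target=80,mode=host gilbertfongan/demo:v1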
MODULE III-2
292
The overlay network creates a new layer 2 container network on top of potentially multiple different underlying networks
MODULE III-2
293
MODULE III-2
294
MODULE III-2
295
docker swarm join-token    Reveals the commands and tokens needed to join workers and managers to an existing Swarm. docker swarm join-token manager exposes the command to join a new manager; docker swarm join-token worker exposes the command to join a new worker
docker node ls    List all nodes in the Swarm, including which are managers and which is the leader
docker service ls    List running services in the Swarm and give basic info on the state of each service and any replicas it's running
docker service ps <service>    Give more detailed information about individual service replicas
docker service scale    Scale the number of replicas in a service up and down
MODULE III-2
296
Docker Stacks
Help to define complex multi-service apps in a single declarative file.
While Docker Compose is well suited to development and testing, Docker Stacks are built for scale and production.
Provide a simple way to deploy the App and Manage its entire lifecycle :
- Health checks
- Scaling
- Updates and Rollbacks
The stack file describes the entire stack of services that make up the app, in the form of a Compose file :
- Services
- Volumes
- Networks
- Secrets
MODULE III-2
297
Docker Stacks
Stacks are often compared to Compose, the main difference being that a stack is deployed on a Swarm cluster
They sit at the top of the Docker application hierarchy
They build on top of services, which in turn build on top of containers
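Deploying a stack from a stack file is then a single command (the stack name is illustrative):
$ docker stack deploy -c docker-stack.yml atsea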
MODULE III-2
298
MODULE III-2
299
MODULE III-2
300
MODULE III-2
301
COMMANDS DESCRIPTION
docker stack deploy    Deploy and update stacks of services defined in a stack file, which is usually called docker-stack.yml
docker stack ls    List all stacks on the Swarm, including how many services they have
docker stack ps    Give detailed information about a deployed stack: which node each replica is running on, and its desired and current state
MODULE III-2
302
Kubernetes (K8s)
MODULE III-2
303
Kubernetes
MODULE III-2
304
Kubernetes Architecture
MODULE III-2
305
Kubernetes Architecture
Master :
Worker (node) :
MODULE III-2
306
Kube-dns
Heapster
Metrics collector for the Kubernetes cluster, used by some resources such as the Horizontal Pod Autoscaler
Kube-dashboard
MODULE III-2
309
Kubernetes Concept
Cluster : A collection of hosts that aggregate their available resources including CPU, RAM,
Disk, and their devices into a usable pool.
Master : A collection of components that make up the control plane of Kubernetes and are
responsible for all cluster decisions including both scheduling and responding to cluster
events.
Node/Worker : A single host, physical or virtual, capable of running Pods. It is managed by the Master(s) and, at a minimum, runs both Kubelet and Kube-proxy to be considered part of the Cluster.
Namespace : A logical cluster or environment. Primary method of dividing a cluster or
scoping access
Label : Key-value pairs that are used to identify, describe and group together related sets of
objects. Labels have a strict syntax and available character set.
Selector : Use labels to filter or select objects.
MODULE III-2
310
MODULE III-2
311
MODULE III-2
312
MODULE III-2
313
MODULE III-2
314
MODULE III-2
315
Kubernetes Pods
MODULE III-2
316
MODULE III-2
317
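The commands below reference a POD.yaml manifest; a minimal example (names and image are illustrative) could be:
apiVersion: v1
kind: Pod
metadata:
  name: nginx-pod
  labels:
    app: nginx
spec:
  containers:
  - name: nginx
    image: nginx:1.14.2
    ports:
    - containerPort: 80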
COMMANDS DESCRIPTION
kubectl create -f POD.yaml    Run a Pod based on the YAML file "POD.yaml"
kubectl logs POD_NAME [-c CONTAINER_NAME]    Get logs from a Pod, or specifically from a container running inside the Pod
kubectl exec POD_NAME [-c CONTAINER_NAME] -- COMMAND    Execute a command in an existing Pod, or specifically in a container running inside the Pod
kubectl port-forward POD_NAME HOST_PORT:CONTAINER_PORT    Forward a port of a Pod (publishes the Pod's port on the host machine)
MODULE III-2
318
Kubernetes Services
MODULE III-2
319
MODULE III-2
320
ClusterIP (default) : Exposes the Service on an internal IP in the cluster. This type makes the
Service only reachable from within the Cluster
NodePort : Exposes the Service on the same port of each selected Node in the Cluster using
NAT (Network Address Translation). Makes the Service accessible from outside the Cluster
using <NodeIP>:<NodePort>.
LoadBalancer : Creates an external load balancer in the current cloud provider (AWS, Azure,
GCP) and assigns a fixed, external IP to the Service.
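A NodePort Service selecting the Pods labelled app: nginx from the earlier Pod example might be declared as follows (a sketch; names and ports are illustrative):
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  type: NodePort
  selector:
    app: nginx
  ports:
  - port: 80
    targetPort: 80
    nodePort: 30080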
MODULE III-2
321
MODULE III-2
322
COMMANDS DESCRIPTION
kubectl create -f SERVICES.yaml    Create services based on the YAML file "SERVICES.yaml"
kubectl get services    List all services deployed in the namespace. Add [--all-namespaces] to list them in all namespaces
kubectl port-forward svc/SERVICE_NAME 5000    Listen on local port 5000 and forward to port 5000 on the Service backend
kubectl port-forward svc/SERVICE_NAME 5000:TARGET_PORT    Listen on local port 5000 and forward to the Service's target port
MODULE III-2
323
Kubernetes Deployment
A Kubernetes Deployment is a resource object that provides declarative updates to applications and lets you manage their life cycle.
Kubernetes Deployment
Kind : Specifies what kind of object you are creating with the file (Deployment, Pod, Service, etc.)
Metadata.name : Names the object (here, nginx-deployment)
Spec.selector : Defines how the Deployment finds which Pods to manage (here, the label app: nginx)
Spec.containers : Creates one container, names it, and specifies its image (nginx:1.14.2) and ports (containerPort: 80), as shown in the manifest below
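The complete manifest these fields come from is the standard nginx Deployment example (the replica count is added here for completeness):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.14.2
        ports:
        - containerPort: 80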
MODULE III-2
326
kubectl create deployment nginx --image=nginx    Create a Deployment named nginx running the nginx image
kubectl apply -f DEPLOYMENT.yaml    Create or update a Deployment based on the YAML file "DEPLOYMENT.yaml"
kubectl get deployments    List all Deployments in the namespace. Add [--all-namespaces] to list them in all namespaces
kubectl delete deploy DEPLOYMENT    Remove or delete a Deployment. Add [-l name=myLabel] to delete Deployments with the label name=myLabel
kubectl set image deployment/DEPLOYMENT www=image:v2    Rolling update of the "www" containers of "DEPLOYMENT", updating the image
kubectl rollout history deployment/DEPLOYMENT    Check the history of the Deployment, including the revisions
kubectl rollout status -w deployment/DEPLOYMENT    Watch the rolling update status of the "DEPLOYMENT" Deployment until completion
kubectl autoscale deployment DEPLOYMENT --min=2 --max=10    Autoscale the Deployment "DEPLOYMENT" between 2 and 10 replicas
MODULE III-2
327
Quizz (1/2)
1. Which of the following elements exist in a Docker Compose file (Choose THREE correct answers)?
2. In a Docker Swarm, what is the unique property of a replicated service (Choose TWO correct answers)?
MODULE III-2
328
Quizz (2/2)
1. Which of the following container properties are shared by all containers within a Kubernetes Pod (Choose TWO correct answers)?
The container base image
All labels assigned to the containers
All resource limits and requests
The IP address and networking ports
All storage volumes

2. Which element exists in the highest level of the definition of every Kubernetes Object?

3. The following command is issued on two Docker nodes: "docker network create --driver bridge isolated_new". Afterwards, one container is started on each node with the parameter --network=isolated_new. It turns out that the containers cannot interact with each other. What must be done to correct that (Choose TWO correct answers)?
Add the option "--inter-container" to the docker network create command
Start the containers on the same node
Change the "--network" parameter of docker create with "nofence" at the end
Use an overlay network instead of a bridged network
Use a host network instead of a bridged network
MODULE III-2
329
PLAN
DOCKER MACHINE
DOCKER DESKTOP
MODULE III-3
331
Docker Machine
Docker Machine lets you create Docker hosts on your computer (VirtualBox, VMWare), on
cloud providers (AWS, Azure), and inside your own data center.
It creates servers, installs Docker on them, and configures the Docker client to talk to them
It allows us to control the Docker engine of a VM created using docker-machine remotely
Docker Machine is another command-line utility used for managing one or more local or
remote machines
Local machines are often run in separate VirtualBox instances.
MODULE III-3
332
The driver concept acts as a connector to 3rd-party services such as Azure, AWS, etc.
Allows to create a complete set of resources around the VM to easily manage it from each
service's admin portal
Generic driver allows you to convert an actual(existing) VM into a Docker-machine
MODULE III-3
333
MODULE III-3
334
Verifying version
$ docker-machine version
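Creating a local machine and pointing the Docker client at it typically looks like this (machine name and driver are examples):
$ docker-machine create --driver virtualbox default
$ eval $(docker-machine env default)
$ docker ps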
MODULE III-3
335
MODULE III-3
336
- Run a container : Docker commands are no longer run locally but on the active Docker Machine
$ docker-machine active
MODULE III-3
337
docker-machine env    Display the commands to set up the environment for the Docker client
MODULE III-3
338
Docker Desktop
Docker Desktop is an easy-to-install application for your Mac, Linux, or Windows environment that
enables you to build and share containerized applications and microservices.
MODULE III-3
339
Quizz
1. Which docker-machine subcommand outputs a list of commands that set the environment variables required to make docker work with a Docker host managed by docker-machine?

2. Which of the following commands lists machines?
Docker engine ls
Docker node ls
Docker machine ls
Docker machine list
MODULE III-3
340
PART IV.
Configuration Management
…
341
PLAN
ANSIBLE ARCHITECTURE
ANSIBLE INSTALLATION
ANSIBLE CONFIGURATION
ANSIBLE INVENTORY
ANSIBLE COMMANDS
ANSIBLE PLAYBOOKS
- Structure
- Example
- Roles and variables
ANSIBLE GALAXY
ANSIBLE TOWER
MODULE IV-1
343
Ansible
MODULE IV-1
344
Why Ansible?
MODULE IV-1
345
Ansible features
MODULE IV-1
346
Management Machine : Machine on which Ansible is installed. Since Ansible is agentless, no software needs to be installed on the managed nodes
Playbook : A simple file in YAML format defining the target servers and the tasks to be performed
Task : A block defining a procedure to be executed (e.g. create a user or a group, install a software package, etc.)
Module : A group of similar Ansible functions that are executed on the managed (client) side
MODULE IV-1
347
Tag : Name assigned to a task, which can be used later to run only certain groups of tasks or specific tasks
Role : Allows organizing Playbooks and all the other necessary files (templates, scripts, etc.) in a reusable structure
Facts : Global variables containing information about the system (machine name, system version, network interfaces, etc.)
Notifier : Attribute of a task that calls a handler when the task's output changes
MODULE IV-1
348
MODULE IV-1
349
Modules : Ansible bundles common functions as module utilities to reduce duplication and ease maintenance
Plugins : Amplify Ansible's Core functionality. They are executed on the control node
Inventory : Depicts the machine that it shall handle in the file and gathers every machine in a
group which you have chosen
APIs : The Ansible APIs function as the bridge of Public and Private cloud services
CMDB : Kind of repository that acts as the data warehouse for IT installations
Hosts : Node systems that are automated using Ansible and machines like Linux, and
Windows.
Networking : Ansible is used for automating different networks and this uses the simple,
powerful, secure agentless automation framework for IT development and operations.
MODULE IV-1
350
Ansible in DevOps
The integration is a major factor for modern test-driven and application design. Ansible helps in
integrating it by providing a stable environment for both the Operations and Development and it
results in Continuous orchestration
MODULE IV-1
351
Ansible Installation
- Redhat/CentOS : $ sudo yum install epel-release && sudo yum install ansible
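- Debian/Ubuntu (equivalent command, assuming the distribution's repositories provide Ansible) : $ sudo apt update && sudo apt install ansible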
MODULE IV-1
352
Ansible Configuration
- Configuration settings
- Command-line options
- Playbook keywords
- Variables
Each category overrides any information from all lower-precedence categories
Last "defined" wins and overrides any previous definitions
MODULE IV-1
353
Configuration settings include both values from the ansible.cfg file and environment variables.
MODULE IV-1
354
MODULE IV-1
355
Connecting with an SSH key authorized on the target node, without being asked for a password (from the Controller)
$ ssh <user>@<target_node>
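The key is typically generated and installed on the target beforehand; a common sketch (user and host are placeholders):
$ ssh-keygen -t rsa
$ ssh-copy-id <user>@<target_node>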
MODULE IV-1
356
Ansible Inventory
The default location for inventory is a file called /etc/ansible/hosts
A different inventory file can be specified at the command line using the "-i <path>" option
Multiple inventory file can be used at the same time
Inventory can be pulled from dynamic or Cloud sources or different formats (YAML, ini)
Example (Basic INI & YAML)
Basic INI :
mail.example.com

[webservers]
foo.example.com
bar.example.com

[dbservers]
one.example.com
two.example.com
three.example.com

Same inventory in YAML :
all:
  hosts:
    mail.example.com:
  children:
    webservers:
      hosts:
        foo.example.com:
        bar.example.com:
    dbservers:
      hosts:
        one.example.com:
        two.example.com:
        three.example.com:
MODULE IV-1
357
- HOSTS : Entry in the inventory file. To specify all host, use "all" or "*"
- MODULE_NAME : Modules available in the Ansible such as file, copy, yum, shell and apt
- ARGUMENTS : Pass values required by the module and can change according to the module used
- USERNAME : Specify the user account in which Ansible can execute commands.
- Become : Specify when to run operations that need sudo privilege.
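Putting these pieces together, an ad hoc command generally takes the form:
$ ansible <HOSTS> -m <MODULE_NAME> -a "<ARGUMENTS>" -u <USERNAME> --become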
MODULE IV-1
358
Managing files for file transfer : For SCP (secure copy protocol) to transfer many files to multiple
machines in parallel
- Transferring a file to all machines in the webservers group
$ ansible webservers -m copy -a "src=/etc/ssh/sshd_config dest=/tmp/sshd_config"
MODULE IV-1
359
MODULE IV-1
360
Gathering facts
- Discovered variables about a system. Facts can be used to implement conditional execution of
tasks and get ad hoc information about the systems.
$ ansible all -m setup
MODULE IV-1
361
Ansible Playbooks
Ansible Playbooks offer a repeatable, reusable, simple configuration management and multi-machine deployment system
Playbooks are the files where Ansible code is written (in YAML format): variables, files, templates, etc.
A Playbook describes which hosts to configure and an ordered list of tasks to perform on those hosts
MODULE IV-1
362
PLAYBOOK
---
- name: Test connectivity to target servers
  hosts: all
  tasks:
    - name: Ping test
      ping:

Execution breakdown :
PLAY : Test connectivity to target servers
HOSTS : all (all machines in the inventory file)
TASK : Gathering Facts
TASK : Ping test
PLAY RECAP
MODULE IV-1
364
MODULE IV-1
365
MODULE IV-1
366
This playbook uses with_items to loop over a list of items, here the packages to install :
- name: install packages
  apt:
    name: "{{item}}"
    state: present
  with_items:
    - apache2
    - ufw
    - mysql
MODULE IV-1
367
Jinja2 expression
Ansible uses Jinja2 templating to enable dynamic expressions and access to variables and facts.
Example : Create a template for a configuration file and deploy it to multiple environments
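A minimal sketch of this (file names, paths and the env_name variable are illustrative; http_server_port reuses the inventory variable from the example that follows):
templates/app.conf.j2 :
listen_port = {{ http_server_port }}
environment = {{ env_name }}

Task using the template module :
- name: Deploy application configuration
  template:
    src: app.conf.j2
    dest: /etc/app/app.conf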
MODULE IV-1
368
Inventory :
[linuxservers]
ansibleclient2 ansible_host=172.17.11.5 webserver_pkg=httpd

[linuxservers:vars]
http_server_port=80

Playbook :
---
- hosts: linuxservers
  tasks:
    - name: Install Apache Web Server
      yum: name=httpd state=latest
    - name: upload index page
      get_url: url=http://www.purple.com/faq.html dest=/var/www/html/index.html
      notify:
        - openport80
        - startwebserver
  handlers:
    - name: openport80
      firewalld: port=80/tcp permanent=true state=enabled immediate=yes
    - name: startwebserver
      service: name=httpd state=started
MODULE IV-1
369
MODULE IV-1
371
MODULE IV-1
372
MODULE IV-1
373
Ansible variables are one of the ways that make playbooks more generic.
Variables can be defined in multiple locations
- Inventory file
- Playbook
- Role definition
- Runtime
MODULE IV-1
374
MODULE IV-1
375
Inventory :
[linuxservers]
ansibleclient2 ansible_host=172.17.11.5

[linuxservers:vars]
webserver_pkg=httpd
http_server_port=80

Playbook :
---
- hosts: linuxservers
  tasks:
    - name: Install Apache Web Server
      yum: name={{webserver_pkg}} state=latest
      notify:
        - openport80
        - startwebserver
    - name: upload index page
      get_url: url=http://www.purple.com/faq.html dest=/var/www/html/index.html
  handlers:
    - name: openport80
      firewalld: port={{http_server_port}}/tcp permanent=true state=enabled immediate=yes
    - name: startwebserver
      service: name=httpd state=started
MODULE IV-1
376
MODULE IV-1
377
MODULE IV-1
378
MODULE IV-1
379
Ansible Galaxy
MODULE IV-1
380
Ansible Tower
MODULE IV-1
381
Ansible Tower
Ansible Tower (AWX) is a web-based solution that makes Ansible even more easy to use
MODULE IV-1
382
Quizz
1. Which Ansible keyword allows tasks to be run only under specific conditions?

2. Which Ansible command is used to create and edit encrypted files which contain secrets used by Ansible when configuring remote systems?

5. An Ansible variable file contains the following content:
FlaskApp:
  Version: 2.6
Which of the following strings can be used to reference the defined variable?
MODULE IV-1
383
PLAN
CHEF
- Chef Expression
- Chef Workstation
- Chef Server
- Chef Nodes
- Chef Tools
PUPPET
- Architecture
- Commands
MODULE IV-2
385
Chef
Chef® is an open source, systems management and cloud infrastructure automation platform
Chef transforms infrastructure into code to automate server deployment and management.
Chef is a configuration management tool for dealing with machine setup on physical servers, virtual
machines and in the cloud
Used by several companies including Facebook, Yahoo, Etsy.
There are three major Chef components : Workstation, Server, and Nodes
MODULE IV-2
386
Data Bags
- Data Bags store global variables as JSON data
- They are indexed for searching and can be loaded by a Cookbook or accessed during a search
MODULE IV-2
388
Chef Workstation
The Workstation is the location from which all of Chef configurations are managed
Holds all the configuration data that can later be pushed to the central Chef Server
These configurations are tested in the workstation before pushing it into the Chef Server
A Workstation includes a command-line tool called Knife, which is used to interact with the Chef Server
There can be multiple Workstations that together manage the central Chef Server
Workstations are responsible for performing the below functions :
- Writing Cookbooks and Recipes that will later be pushed to the central Chef Server
- Managing Nodes on the central Chef Server
MODULE IV-2
389
Chef Workstation/Recipes
A Chef recipe is a file that groups related resources, such as everything needed to configure a web server, database, or load balancer.
Recipes describe a series of resources that should be in a particular state: packages that should be installed, services that should be running, or files that should be written.

package "httpd" do
  action :install
end

include_recipe "apache::fwrules"

service "httpd" do
  action [ :enable, :start ]
end
MODULE IV-2
390
Chef Workstation/Cookbook
- Recipes that specify which Chef Infra built-in resources to use (and in which order)
- Attribute values, which allow environment-based configurations such as dev or prod
- Custom resources for extending Chef Infra beyond the built-in resources
- Files and Templates for distributing information to systems
- Metadata (metadata.rb), which contains information about the cookbook such as its name, description and version

package "httpd" do
  action :install
end
MODULE IV-2
391
Knife Utility : Command line tool for communication with the central Chef Server (Adding,
removing, changing configurations of Nodes – Upload Cookbooks and roles)
Local Chef repository : Store every configuration component of central Chef Server (With
Knife utility)
MODULE IV-2
392
Chef Server
MODULE IV-2
393
Chef Nodes
Nodes can be a cloud-based virtual server or an on-premise physical server, that is managed
using central Chef Server.
Chef-client is the main component agent present on the Node that will establish communication
with the Central Chef Server
MODULE IV-2
394
Chef Tools
- Chef-solo is a command that executes Chef Client in a way that does not require the Chef
Server to converge Cookbooks
- Run as a daemon
- Uses Chef Client’s local mode and does not support some functionality (Centralized
distribution of Cookbooks and Authentication)
- Cookbook can be run from a local directory and a URL at which a “tar.gz” archive is located
Knife
- Interaction between a local chef-repo and the Chef Server
- Helps users to manage Nodes, Cookbooks and recipes, Roles, Environments, and Data Bags
MODULE IV-2
395
Chef / Workflow
MODULE IV-2
396
Puppet
Puppet uses a declarative language that models the infrastructure as a series of resources
Manifests are files, written in Puppet's declarative language, that pull together these resources and define the desired state of the nodes
Puppet stores manifests on the server and uses them to create compiled configuration catalogs for each node
Facter is a Puppet tool that discovers and reports facts about nodes, which are then used to compile the catalogs
Puppet architecture: master and agents (it can also run in a standalone, masterless mode driven from the command line)
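As an illustration of this declarative style, a minimal manifest might read (the Apache package/service names are the usual example):
# Ensure Apache is installed and its service is running
package { 'httpd':
  ensure => installed,
}
service { 'httpd':
  ensure  => running,
  enable  => true,
  require => Package['httpd'],
}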
MODULE IV-2
397
Puppet Architecture
MODULE IV-2
398
Puppet Architecture
Puppet Master
- Handles all the configuration related process in the form of puppet codes
- Create catalog and sends it to the targeted Puppet Agent
- Installed on a Linux based system
- SSL certificates are checked and signed
Puppet Node (Slave or Agent)
Puppet Architecture
Config Repository
Storage area where all the servers and nodes related configurations are stored, and these
configurations can be pulled as per requirements
Facts
Key-value data pair. It contains information about the node or Master Machine.
It represents a puppet client states such as Operating System, network interface, IP Address, etc.
Catalog
Compiled format resulting from the entire configuration and manifest files.
The catalog can be applied on the target machine.
MODULE IV-2
400
Puppet Command
COMMANDS DESCRIPTION
MODULE IV-2
401
Quizz
1. Which of the following commands lists the cookbooks available on a Chef Server?
2. What is the Puppet equivalent to an Ansible Playbook called?
MODULE IV-2
402
PART V.
Service Operations
…
403
PLAN
DEVOPS MONITORING
PROMETHEUS
- Architecture
- Configuration file
- Metric types
- Exporter
GRAFANA
- Data source
- Dashboard
MODULE V-1
405
DevOps Monitoring
DevOps Monitoring is the practice of tracking and measuring the performance and health of systems and applications
It collects data on everything from CPU utilization to disk space to application response times
MODULE V-1
406
Prometheus
MODULE V-1
407
Prometheus Architecture
MODULE V-1
408
Prometheus Architecture
MODULE V-1
409
Global : Contains global settings for controlling the Prometheus server’s behaviour
- Scrape Interval : Specifies the interval between scrapes of any application or service
- Evaluation Interval : Tells Prometheus how often to evaluate its rules
Alerting : Provided by an independent alert management tool called Alertmanager.
This section lists each Alertmanager used by the Prometheus server
MODULE V-1
410
- Recording rules : Allow you to precompute frequent and expensive expressions and
to save their result as derived time series data
- Alerting rules : Allow you to define alert conditions. Prometheus will re-evaluate these
rules every 15 seconds (#evaluation_interval).
Scrape Configuration : Specifies all the targets that Prometheus will scrape
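A minimal prometheus.yml tying these sections together might look like this (targets and file names are illustrative):
global:
  scrape_interval: 15s
  evaluation_interval: 15s
alerting:
  alertmanagers:
    - static_configs:
        - targets: ['localhost:9093']
rule_files:
  - "rules.yml"
scrape_configs:
  - job_name: 'prometheus'
    static_configs:
      - targets: ['localhost:9090']
  - job_name: 'node'
    static_configs:
      - targets: ['localhost:9100']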
MODULE V-1
411
MODULE V-1
412
Prometheus Dashboard
MODULE V-1
413
Exporter
There are several libraries and servers which help in exporting existing metrics from third-party
systems as Prometheus metrics and maintained as part of the official Prometheus GitHub
- Databases (e.g., MySQL server exporter-Official)
- Hardware (e.g., Node exporter-Official)
- Trackers and CI (e.g., Jenkins exporter)
- Messaging systems (e.g., Kafka exporter)
- HTTP (e.g., Apache exporter)
Software exposing Prometheus metrics
- Ansible Tower (AWX)
- Kubernetes (direct)
Other utilities
- Java/JVM (EclipseLink metrics collector)
- Node.js (swagger-stats)
MODULE V-1
414
Data Visualization
Prometheus scrapes metrics from instrumented jobs, either directly or via an intermediary
push gateway for short-lived jobs
It stores all scraped samples locally and runs rules over this data to either aggregate and
record new time series from existing data or generate alerts
An API consumers like Grafana can be used to visualize the collected data
- Used to query and visualize metrics
- Support multiple backend (Prometheus, MySQL, Datadog)
- Combine data from different sources
- Dashboard visualization fully customizable
- Each panel has a wide variety of styling and formatting options
- Supports templates and collection of add-ons and pre-built dashboards
MODULE V-1
415
Grafana
MODULE V-1
416
MODULE V-1
417
Grafana Dashboard
MODULE V-1
418
Grafana Dashboard
MODULE V-1
419
Quizz
1. How does Prometheus gather information about monitored hosts and services?
2. Which section of the Prometheus configuration defines which nodes are monitored?
MODULE V-1
420
PLAN
LOGS
ELASTICSEARCH
LOGSTASH
KIBANA
MODULE V-2
422
Logs
Most of applications, containers and Virtual Machines constantly generate information about
numerous events
Collecting and analyzing this log data become challenging in a dynamic architecture or
microservice environment
Lots of users, systems (routers, firewalls, servers) and log types (NetFlow, syslogs, access logs, etc.)
MODULE V-2
423
Helps DevOps teams to gain insights into their applications that allow them to better
Accelerating Releases
MODULE V-2
424
ELK Stack
ELK Stack is a collection of three (03) open-source products (Elasticsearch, Logstash, and
Kibana).
- Elasticsearch : used for storing logs
- Logstash : used for both shipping as well as processing and storing logs
- Kibana : used for visualization of data through a web interface
ELK stack provides centralized logging in order to identify problems with servers or
applications
Helps to find issues in multiple servers by connecting logs during a specific time frame
MODULE V-2
425
ELK Architecture
MODULE V-2
426
Elasticsearch
Elasticsearch enables extremely fast searches that power data discovery applications
MODULE V-2
427
MODULE V-2
428
Node
Single physical or virtual machine that holds full or part of your data and provides computing
power for indexing and searching your data
Based on the responsibilities, the following are different types of nodes that are supported
- Data Node
Node that has storage and compute capability. Participate in the CRUD, search, and aggregate
operations
- Master Node
Nodes reserved to perform administrative tasks. They track the availability/failure of the data nodes and are responsible for creating and deleting the indices
- Coordinating-Only Node
Acts as a smart load balancer. They are exposed to end-user requests and appropriately redirect requests between data nodes and master nodes
MODULE V-2
430
Index
It is a container to store data like a database in the relational databases. An index contains a
collection of documents that have similar characteristics or are logically related (e.g customer
data product catalog).
Document
Document is the piece indexed by Elasticsearch and it is represented in the JSON format
Mapping types
Mapping types are needed to create different types in an index and are specified during index creation
MODULE V-2
431
Shards
Shard is a full-featured subset of an index and help with enabling Elasticsearch to become
horizontally scalable (An index can store millions of documents and occupy terabytes of data,
this can cause problems with performance, scalability, and maintenance).
Replication
To ensure fault tolerance and high availability, Elasticsearch provides this feature to replicate the
data (Shards can be replicated)
- High Availability : Data can be available through the replica shard even if the node failed
- Performance : search queries will be executed parallelly across the replicas
MODULE V-2
432
Logstash
Logstash is a free and open-source, server-side data processing pipeline that can be used to
ingest data from multiple sources, transform it, and then send it to further processing or storage
Data collection pipeline tool that collect data inputs and feeds into Elasticsearch
Gathers all types of data from different sources and makes it available for further use
Unify data from disparate sources and normalize the data into a desired destination
MODULE V-2
433
Logstash Architecture
MODULE V-2
434
Inputs are the starting point of Logstash configuration. By default, Logstash will automatically
create a stdin input if there are no inputs defined.
File : Reads from a file on the filesystem, much like the UNIX command tail -0F
Syslog : Listens on the well-known port 514 for syslog messages and parses.
Redis : Reads from a redis server, using both redis channels and redis lists
MODULE V-2
435
Syslog
input {
  syslog {
    port => 514
    codec => cef
    syslog_field => "syslog"
    grok_pattern => "<%{POSINT:priority}>%{SYSLOGTIMESTAMP:timestamp} CUSTOM GROK HERE"
  }
}

Redis
input {
  redis {
    id => "my_plugin_id"
  }
}
MODULE V-2
436
Logstash Filters
Filters are intermediary processing devices in the Logstash pipeline. They can be combined with
conditionals to perform an action on an event if it meets certain criteria.
GROK : Parse unstructured log data into something structured and queryable
MUTATE : Perform general transformations on event fields. Help to rename, remove,
replace, and modify fields in your events
DROP : Drop everything that gets to the filter
CLONE : Make a copy of an event, possibly adding or removing fields
GEOIP : Add information about geographical location of IP addresses (Displays amazing
charts in Kibana)
MODULE V-2
437
GROK (from the HTTP request log 55.3.244.1 GET /index.html 15824 0.043)
filter {
  grok {
    match => { "message" => "%{IP:client} %{WORD:method} %{URIPATHPARAM:request} %{NUMBER:bytes} %{NUMBER:duration}" }
  }
}

MUTATE
filter {
  mutate {
    split => ["hostname", ","]
    add_field => { "shortHostname" => "%{hostname[0]}" }
  }
}
MODULE V-2
438
Logstash Outputs
Outputs are the final phase of the logstash pipeline. An event can pass through multiple outputs,
but once all output processing is complete, the event has finished its execution
Some useful outputs are :
MODULE V-2
439
FILE
output {
  file {
    path => "C:/devops-book-labs/logstash/bin/log/output.log"
    codec => line { format => "custom format: %{message}" }
  }
}

Statsd
output {
  statsd {
    host => "statsd.example.org"
    count => {
      "http.bytes" => "%{bytes}"
    }
  }
}
MODULE V-2
440
MODULE V-2
441
Kibana
Kibana is a data visualization and exploration tool used for log and time-series analytics,
application monitoring, and operational intelligence use cases
Dashboard offers various interactive diagrams, geospatial data, and graphs to visualize
complex queries
It can be used for search, view, and interact with data stored in Elasticsearch directories
It helps users to perform advanced data analysis and visualize their data in a variety of charts, tables, and maps
MODULE V-2
442
Kibana
MODULE V-2
443
ELK
Centralized logging can be useful when attempting to identify problems with servers or
applications
Elasticsearch is the NoSQL database, Logstash is the data collection pipeline tool, and Kibana is the data visualization layer
Netflix, LinkedIn, Tripwire, and Medium all use the ELK stack for their business
ELK works best when logs from various Apps of an enterprise converge into a single ELK
instance
MODULE V-2
444
Quizz
1. Which of the following tasks can Logstash fulfill without using other components of the Elastic Stack?
2. Which sections can exist in a Logstash configuration file?
MODULE V-2
445
References/ CI/CD
https://www.lpi.org/blog/2018/02/06/devops-tools-introduction-05-continuous-delivery
https://www.slideshare.net/stevemac/introduction-to-cicd
https://medium.com/edureka/continuous-integration-615325cfeeac
https://aws.amazon.com/devops/continuous-integration/
https://www.guru99.com/software-testing-introduction-importance.html
https://www.leewayhertz.com/software-testing-process/
https://quodem.com/en/blog/what-is-software-testing-outsourcing/
https://vivadifferences.com/difference-between-functional-and-non-functional-testing/
https://www.gcreddy.com/tag/types-of-test-tools
https://aws.amazon.com/devops/continuous-delivery/?nc1=h_ls
https://www.spiceworks.com/tech/devops/articles/cicd-vs-devops/
https://1902software.com/blog/automated-deployment/
https://developer.cisco.com/learning/labs/jenkins-lab-01-intro/
https://www.gocd.org/2017/07/25/blue-green-deployments.html
https://www.gocd.org/2017/08/15/canary-releases/
https://www.census.gov/fedcasic/fc2018/ppt/6CVazquez.pdf
https://formation.cloud-gfi-nord.fr/assets/integrationContinue/CI-Jenkins.pptx
https://subscription.packtpub.com/book/application-development/9781784390891/3/ch03lvl1sec20/the-jenkins-user-interface
https://blog.devgenius.io/what-is-jenkins-pipeline-and-jenkinsfile-96f30f3a29c
https://www.edureka.co/blog/jenkins-master-and-slave-architecture-a-complete-guide/
https://www.jenkins.io/doc/book/pipeline/syntax/
449
References/Container Usage
https://www.lpi.org/blog/2018/02/13/devops-tools-introduction-06-container-basics
https://www.slideteam.net/introduction-to-dockers-and-containers-powerpoint-presentation-slides.html#images-3
https://www.citrix.com/fr-fr/solutions/app-delivery-and-security/what-is-containerization.html
https://github.com/kaan-keskin/introduction-to-docker/blob/main/Introduction.md
https://kaushal28.github.io/Docker-Basics/
https://www.slideteam.net/introduction-to-dockers-and-containers-powerpoint-presentation-slides.html
https://medium.com/@mccode/using-semantic-versioning-for-docker-image-tags-dfde8be06699
https://github.com/kaan-keskin/introduction-to-docker/blob/main/ContainerizingAnApp.md
https://kapeli.com/cheat_sheets/Dockerfile.docset/Contents/Resources/Documents/index
https://iceburn.medium.com/dockerfile-cheat-sheet-9f52aa4a99b3
https://docs.docker.com/engine/reference/commandline/build/
https://aboullaite.me/multi-stage-docker-java/
https://www.padok.fr/en/blog/docker-image-multi-staging
https://github.com/dockersamples/atsea-sample-shop-app/blob/master/app/Dockerfile
https://blog.octo.com/en/docker-registry-first-steps/
https://phoenixnap.com/kb/list-of-docker-commands-cheat-sheet
453
References/Container Infrastructure
https://www.lpi.org/blog/2018/02/27/devops-tools-introduction-08-container-infrastructure
https://github.com/docker/machine
https://devopscube.com/docker-machine-tutorial-getting-started-guide/
https://medium.com/@cnadeau_/docker-machine-basic-examples-7d1ef640779b
https://lucanuscervus-notes.readthedocs.io/en/latest/Virtualization/Docker/Docker%20Machine/
https://docs.ionos.com/docker-machine-driver-1/v/docker-machine-driver/usage/commands
https://docs.docker.com/desktop/
455
References/ Ansible
https://www.lpi.org/blog/2018/03/13/devops-tools-introduction-10-ansible
https://datascientest.com/ansible
https://www.ansible.com/overview/how-ansible-works
https://spacelift.io/blog/ansible-vs-terraform
https://docs.rockylinux.org/books/learning_ansible/01-basic/
https://dev.to/rahulku48837211/ansible-architecture-and-setup-2355
https://www.fita.in/devops-tutorial/
https://www.openvirtualization.pro/red-hat-ansible-automation-architecture-and-features/
Installation Guide — Ansible Documentation
https://docs.ansible.com/ansible/latest/reference_appendices/general_precedence.html#general-precedence-rules
https://www.openvirtualization.pro/ansible-part-i-basics-and-inventory/
https://tekneed.com/creating-and-managing-ansible-configuration-file-linux/
https://www.c-sharpcorner.com/article/getting-started-with-ansible-part-3/
https://www.ssh.com/academy/ssh/copy-id
https://linuxhint.com/use-ssh-copy-id-command/
https://docs.ansible.com/ansible/latest/user_guide/intro_inventory.html#intro-inventory
https://docs.ansible.com/ansible/latest/user_guide/intro_adhoc.html
https://www.javatpoint.com/ansible-playbooks
https://docs.ansible.com/ansible/latest/user_guide/playbooks_intro.html
https://digitalvarys.com/how-to-write-an-ansible-playbook/
https://www.slideshare.net/knoldus/introduction-to-ansible-81369741
https://www.bogotobogo.com/DevOps/Ansible/Ansible_SettingUp_Webservers_Nginx_Install_Env_Configure_Deploy_App.php
https://docs.ansible.com/ansible/latest/playbook_guide/playbooks_conditionals.html
https://www.ansible.com/overview/how-ansible-works
https://linuxhint.com/ansible-with_item/
https://docs.ansible.com/ansible/latest/playbook_guide/playbooks_loops.html
https://ericsysmin.com/2019/06/20/how-to-loop-blocks-of-code-in-ansible/
https://www.toptechskills.com/ansible-tutorials-courses/ansible-include-import-variables-tutorial-examples/
https://www.dasblinkenlichten.com/ansible-roles-and-variables/
https://galaxy.ansible.com/docs/contributing/creating_role.html
https://www.redhat.com/sysadmin/intro-ansible-tower
https://developer.arubanetworks.com/aruba-aoscx/docs/ansible-tower-overview
456
https://www.lpi.org/blog/2018/03/27/devops-tools-introduction-12-it-operations-and-monitoring
https://www.crowdstrike.com/cybersecurity-101/observability/devops-monitoring/
https://www.tigera.io/learn/guides/prometheus-monitoring/
https://k21academy.com/docker-kubernetes/prometheus-grafana-monitoring/
https://samirbehara.com/2019/05/30/cloud-native-monitoring-with-prometheus/
https://sensu.io/blog/introduction-to-prometheus-monitoring
https://prometheus.io/docs/instrumenting/exporters/
458
THANKS!
Any questions?
You can find me at [email protected]