IBMCE - DevOpsFundamentals - Jul10
Training Module
Preface
December 2019
Table of Contents
Chapter 1: DevOps Fundamentals
Introduction to DevOps
What is DevOps?
A definition of DevOps
Benefits of DevOps approach
Drivers of DevOps
Understanding the Business Need for DevOps
How is DevOps different from Traditional IT?
Issues in traditional application
Recognizing the Business Value of DevOps
Return on Investment
When to adopt / not adopt DevOps?
How is DevOps different from Agile? DevOps vs Agile
DevOps Principles
DevOps Lifecycle
Test your knowledge
Introduction to Continuous Integration / Continuous Delivery / Continuous Deployment
Continuous Integration
Continuous Delivery
Continuous Deployment
Introduction to DevOps Tools
Version Control – Git
What is DevOps?
The screen capture depicts the meaning of DevOps in business. The term DevOps is a blend of processes, methods, and tools combined with a set of principles and practices that can be adopted by development and operations teams to achieve faster releases, better quality, and happier teams.
It is a modern way of delivering business value by bringing development and operations together to collaborate and co-create better outcomes.
Making any change in “business as usual” is always hard and usually
requires an investment. So whenever an organization adopts any new
technology, methodology, or approach, that adoption has to be driven by a
business need. To develop a business case for adopting DevOps, you must
understand the business need for it, including the challenges that it
addresses. In this chapter, we give you the foundation you need to start
building your case.
A definition of DevOps
DevOps (n.) –
DevOps is a philosophy, a cultural shift that merges operations with
development and demands a linked toolchain of technologies to facilitate
collaborative change. DevOps toolchains can include dozens of
noncollaborative tools, making the task of automation a technically
complex and arduous one.
Drivers of DevOps
Refer to the table for the key dimensions on which DevOps and traditional IT differ.
To explain in detail:
To explain in detail:
Systems of record
Traditional software applications are large systems that function as systems of record. They contain massive amounts of data and/or transactions and are designed to be highly reliable and stable. Because these applications don't need to change often, organizations can satisfy their customers and business needs by delivering only one or two large new releases a year.
Systems of engagement
With the advent of mobile communications and the maturation of web applications, systems of record are being supplemented by systems of engagement: applications that customers access directly and use to interact with the business. Such applications must be easy to use, high performing, and capable of rapid change to address customers' changing needs and evolving market forces.
Because systems of engagement are used directly by customers, they require an intense focus on user experience, speed of delivery, and agility — in other words, a DevOps approach.
Systems of engagement aren't isolated islands; they are often tied to systems of record, so rapid changes to systems of engagement result in changes to systems of record.
Indeed, any kind of system that needs rapid delivery of innovation requires DevOps.
DevOps is essential
In a cloud-native environment, applications are viewed as collections
of microservices with many deployable components that deploy at
various speeds and independently of each other.
Manually deploying anything becomes an unsustainable effort prone
to error, inconsistency, and delays.
Return on Investment
Many companies that incorporate DevOps practices get more done in a shorter span and derive many business benefits, but that does not mean DevOps is right for every situation or project. DevOps is not a silver bullet. There are scenarios where a DevOps approach will fail, and it is imperative to understand where and when it fits or misfits.
The table below provides a few pointers on when to adopt, and when not to adopt, DevOps:
When to adopt?
- The main benefit of microservice architecture is speed, and DevOps helps to achieve this through people, process, and technology change.
- When there is a demand for frequent production releases and faster time to market.

When not to adopt?
- Transitioning legacy applications to newer technology is a big challenge, and if the DevOps process is also to be included, the challenge escalates even further. A gradual phase-out should be planned.
- For smaller organizations that do not have many resources, or where there is no need for frequent production releases.
Agile and DevOps are not the same, though they have similar aims: release the product as quickly and efficiently as possible. That doesn't mean one should be adopted over the other; on the contrary, both methodologies can work in tandem. DevOps is not a replacement for Agile but an improvement on it. Let's see how.
Agile's aim is to bring agility to development, whereas DevOps aims to bring agility to both development and operations. Agile follows a set of best practices for creating quality software in a timely manner, but it can leave development and operations working in their own separate silos, from the initial design phase right through to product release. The intent of DevOps is to enable communication between the teams so that they can build, test, and release software more quickly and with greater efficiency. By combining these two distinct teams and processes, it promotes continuous integration, continuous deployment, automated testing, and transparency in code repositories.
Basically, DevOps brings two large siloed teams together to allow for quicker software releases, while Agile is focused on getting smaller teams to collaborate with each other so they can react quickly to ever-changing consumer needs.
Both DevOps and Agile can work in tandem since they can complement
each other. DevOps promotes a fully automated continuous integration and
deployment pipeline to enable frequent releases, while Agile provides the
ability to rapidly adapt to the changing requirements and better
collaboration between different smaller teams.
DevOps Principles
Everyone is responsible
In traditional software development methodology, developers and operations personnel had distinct roles. In DevOps, they work as a single team that is fully accountable for the application from beginning to end. This is one of the core principles of DevOps: the team controls and is responsible for all services, from the outset through to the end of life.
Continuous improvement
End-to-end responsibility also means that the team must continuously
adapt to changing circumstances like new technology, customer needs etc.
Embracing failure will foster a climate for learning which will positively
impact organizational culture.
Cross-functional independent teams
As DevOps teams are required to be involved at every stage of the software development lifecycle, a cross-functional team with a balanced set of skills is needed, in which each member is an all-rounder. As it is not easy to find IT professionals with such versatile skills, teams should be encouraged to share responsibility.
DevOps Lifecycle
Plan: The first phase in DevOps lifecycle is to plan the application. Create a
set of achievable targets that must be delivered by the application. Once
these targets are finalized, project development can commence.
Code: This phase consists of developing the code as per requirements. Since the DevOps lifecycle is continuous, development can also happen on already existing code.
Build: The application is built by integrating the various pieces of code developed in the previous phase.
Test: Test the application to ensure functional and performance
requirements are met. If not, fix the code, rebuild and test again.
Release: Once the testing phase is completed successfully, the code
should be packaged and released for deployment.
Deploy: Deploy the code in the respective environment for further usage. It is important to ensure that the deployment does not affect the functioning of the application for end users.
Operate: All DevOps operations are based on automation of release
process, applying patches etc. which will allow organizations to accelerate
the overall time to market on an ongoing basis.
Monitor: In the monitoring phase, key information about application usage
is recorded and carefully analyzed to find out trends and identify the
problem areas. Automatically monitor metrics so any change in code that
impacts the production environment can be identified quickly.
Continuous Integration
Because each integration is small and frequent, a failing build points directly to the change (and the person) that introduced it. Continuous Integration thus helps save costs in the long run, since defects in the architecture are far more expensive to fix when discovered later in the process.
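The integrate-early idea behind Continuous Integration can be sketched as a tiny script (purely illustrative; `run_build` and `run_tests` are stand-ins for a real build tool and test suite such as maven):

```shell
# Illustrative CI loop: every change is built and tested immediately,
# so a defect surfaces close to the commit that introduced it.
run_build() { echo "build ok"; }     # stand-in for e.g. 'mvn clean package'
run_tests() { echo "tests ok"; }     # stand-in for e.g. 'mvn test'

for commit in change1 change2; do    # pretend commits arriving on the mainline
  echo "integrating $commit"
  run_build
  run_tests
done
echo "mainline is green"
```

A real CI server such as Jenkins (covered later in this module) runs this loop for you on every push.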
Continuous Delivery
Continuous Deployment
The most widely used modern version control system in the world today is Git. It is a mature, open source project originally developed in 2005 by Linus Torvalds, the famous creator of the Linux operating system kernel. A large number of software projects, both commercial and open source, rely on Git for version control.
GitHub is a collaborative version control platform for developers. Built on Git, it stores and maintains the source code for each project and tracks the complete history of commits. Git provides tools to collaborate effectively on a project and to manage conflicting changes in commits from multiple developers, helping deliver a quality project by allowing developers to change, adapt, and improve the source code.
Command                                            Description
git --version                                      Displays the git version.
git init --bare                                    Creates a new bare local git repository.
git clone                                          Downloads the project and its entire version history.
git config --global user.name "[name]"             Sets the name you want attached to your commit transactions.
git config --global user.email "[email_address]"   Sets the email you want attached to your commit transactions.
git config --list                                  Displays all the configurations.
git status                                         Lists all the new or modified files to be committed.
git add [file]                                     Snapshots the file in preparation for versioning.
git branch [branch_name]                           Creates a new branch.
git checkout [branch_name]                         Switches to the specified branch and updates the working directory.
git commit -m "[commit_message]"                   Records file snapshots permanently in version history.
git push -u origin [branch_name]                   Uploads all the local branch commits to GitHub.
git merge [branch_name]                            Combines the specified branch's history into the current branch.
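Taken together, a first session with these commands might look like the following sketch (run in a scratch directory; the name and email are placeholders):

```shell
# Create a scratch repository and make a first commit,
# exercising the basic commands from the table above.
repo=$(mktemp -d)
cd "$repo"
git init -q                                # new local repository
git config user.name "Your Name"           # placeholder identity
git config user.email "you@example.com"
echo "hello" > sample.txt
git add sample.txt                         # stage the file for versioning
git commit -q -m "first commit"            # record the snapshot
git status                                 # working tree should now be clean
```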
Note: If you change the file location where you save your SSH key, make sure you give the same location in step 6.
Note: Set the password for the new git user; use the same password in steps 6 and 7.
8. Create a repository
4. The git client setup is complete, and we can access the server and commit the files.
In the list you can see that you have two branches and are currently on the master branch.
9. Create a sample file (sample.txt) in the repository and stage it for commit.
12. Now, check out the master branch and list all the contents of the directory. You can see that the master branch's content is in the state where you left it: no commit has been made to master.
13. Merge devops_branch into the master branch (if you have already merged, git will say so).
Now you can see that all the changes are merged into the master branch.
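Steps 9 through 13 above can be reproduced end to end in a scratch repository (a sketch; the branch name devops_branch follows the exercise, while the file names and identity are placeholders):

```shell
# Branch, commit on the branch, switch back, and merge -
# the same sequence as the exercise, in a throwaway repo.
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.name "Demo"
git config user.email "demo@example.com"
main=$(git symbolic-ref --short HEAD)     # 'master' or 'main', depending on git version
echo base > sample.txt
git add sample.txt
git commit -q -m "base commit"
git checkout -q -b devops_branch          # create and switch to the branch
echo feature > feature.txt
git add feature.txt
git commit -q -m "commit on devops_branch"
git checkout -q "$main"                   # back on the main branch: no feature.txt yet
git merge -q devops_branch                # fast-forward merge brings the branch in
ls                                        # feature.txt is now present here
```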
4) How do you check the state of your local git repository since your last
commit?
A. git check
B. git status
C. git commit
D. git diff
5) What's the git command that downloads your repository from GitHub to
your computer?
A. git push
B. git fork
C. git clone
D. git commit
2) Download Eclipse
Now that Java JDK 8 is installed, download the latest Eclipse IDE package for your system. The link below can be used to get it.
https://www.eclipse.org/downloads/
3) Install Eclipse
Use the commands below to extract the content in the ~/Downloads folder
$ tar xfz ~/Downloads/eclipse-inst-linux64.tar.gz
Now launch the Eclipse installer-
~/Downloads/eclipse-installer/eclipse-inst
Select the IDE package you want to install and continue. Follow the onscreen instructions to complete the installation. Accept the default installation directory and continue.
Next, accept the license terms and continue; the installer will install Eclipse along with all the packages.
[Desktop Entry]
Name=Eclipse JEE Oxygen
Type=Application
Exec=/home/administrator/eclipse/java-2019-03/eclipse/eclipse
Terminal=false
Icon=/home/administrator/eclipse/java-2019-03/eclipse/icon.xpm
Comment=Integrated Development Environment
NoDisplay=false
Categories=Development;IDE;
Name[en]=Eclipse
Now, provide suitable parameters (Group ID and Artifact ID) for the project
package com.devops.calcProject;

import org.apache.log4j.BasicConfigurator;
import org.apache.log4j.Logger;

public class CalcMain {

    static final Logger logger = Logger.getLogger(CalcMain.class);

    public static void main(String[] args) {
        BasicConfigurator.configure();
        int a, b, choice;
        float result = 0;
        a = Integer.parseInt(args[0]);
        logger.info("First number: " + a);
        b = Integer.parseInt(args[1]);
        logger.info("Second number: " + b);
        choice = Integer.parseInt(args[2]);
        logger.info("\nYour choice: " + choice);
        switch (choice) {
            case 1: result = addition(a, b); break;
            case 2: result = subtraction(a, b); break;
            case 3: result = multiplication(a, b); break;
            case 4: result = division(a, b); break;
            case 5: result = remainder(a, b); break;
            default: logger.info("An Invalid Choice!!!\n");
        }
        if (choice >= 1 && choice <= 5)
            logger.info("Result is: " + result);
    }

    // addition and subtraction were missing from the listing; their bodies
    // are assumed here, matching the JUnit tests later in this module.
    public static int addition(int a, int b) {
        return a + b;
    }
    public static int subtraction(int a, int b) {
        return a - b;
    }
    public static int multiplication(int a, int b) {
        return a * b;
    }
    public static float division(int a, int b) {
        return (float) a / b;
    }
    public static int remainder(int a, int b) {
        return a % b;
    }
}
2. Next, add the dependencies used in pom.xml file. Add the following
code in the pom.xml file.
<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0
                             http://maven.apache.org/xsd/maven-4.0.0.xsd">
  <modelVersion>4.0.0</modelVersion>
  <groupId>com.devops</groupId>
  <artifactId>calcProject</artifactId>
  <version>0.2</version>
  <packaging>jar</packaging>
  <name>CalcProject</name>
  <url>http://maven.apache.org</url>
  <properties>
    <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
  </properties>
  <dependencies>
    <dependency>
      <groupId>junit</groupId>
      <artifactId>junit</artifactId>
      <version>4.12</version>
      <scope>test</scope>
    </dependency>
    <dependency>
      <groupId>log4j</groupId>
      <artifactId>log4j</artifactId>
      <version>1.2.17</version>
    </dependency>
  </dependencies>
  <build>
    <plugins>
      <plugin>
        <artifactId>maven-assembly-plugin</artifactId>
        <executions>
          <execution>
            <phase>package</phase>
            <goals>
              <goal>single</goal>
            </goals>
          </execution>
        </executions>
        <configuration>
          <archive>
            <manifest>
              <addClasspath>true</addClasspath>
              <mainClass>com.devops.calcProject.CalcMain</mainClass>
            </manifest>
          </archive>
          <descriptorRefs>
            <descriptorRef>jar-with-dependencies</descriptorRef>
          </descriptorRefs>
        </configuration>
      </plugin>
    </plugins>
  </build>
</project>
3. You have created your maven project and added all the class and
dependencies. Now, you will run the project which will create an
executable jar.
For this, right click on your project from Project Explorer bar, select
“Run as” and then click on “maven clean”. This will clean your project
removing target folder, all class files, java docs, jars etc.
4. After this, perform "maven install", which adds all artifacts and dependencies specified in the pom to the local repository and creates an executable jar.
6. Now, you can take the jar from above location and paste it to a
directory and execute it using the below command.
$ java -jar {jarName} {commandLineArguments}
2) Which of the following command removes the target directory with all
the build data before starting the build process?
A. mvn clean
B. mvn build
C. mvn compile
D. mvn site
2. You need to install both the puppet master and the puppet agent on the same machine
$ sudo apt-get update
$ sudo apt-get install puppetmaster
$ sudo apt-get install puppet
3. Now, you will install puppet dependencies tools for the development
process
$ sudo apt-get install vim-puppet puppet-lint
4. The next step is to configure the puppet services. For this you need to use the puppet config file, which can be found at /etc/puppet/puppet.conf.
This file has 3 sections, namely:
[main] – These configurations are global and are used by all services.
[master] – These configurations are used by puppet master service.
[agent] - These configurations are used by puppet agent service.
[main]
confdir = /etc/puppet
logdir = /var/log/puppet
vardir = /var/lib/puppet
ssldir = /var/lib/puppet/ssl
rundir = /var/run/puppet
environmentpath = $confdir/environments
factpath = $vardir/lib/facter
pluginsource = puppet:///plugins
pluginsync = true
srv_domain = localhost
strict_variables = true
parser = future
[master]
vardir = /var/lib/puppet
cadir = /var/lib/puppet/ssl/ca
dns_alt_names = puppet
certname = localhost
report = true
reports = log
ssl_client_header = HTTP_X_CLIENT_DN
ssl_client_verify_header = HTTP_X_CLIENT_VERIFY
[agent]
certname = localhost
server = localhost
pluginsync = true
report = true
summarize = true
6. After saving the puppet config file, restart the puppet master.
$ sudo service puppetmaster restart
9. Finally, test the puppet installation. For this you need to create a puppet file (.pp file extension) under the manifests folder of your environment, as shown below.
If write permission is not there, go back to the environments folder and execute the below command:
$ sudo chmod 777 devops
file { '/home/administrator/Desktop/Wallpaper.png':
ensure => present
}
Before applying this configuration, check whether the file exists or not (it does not exist yet).
3. Now add the stable docker APT repository to your system’s software
repository list.
$ sudo add-apt-repository "deb [arch=amd64]
https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"
4. The above step enabled the docker repository; next, update your default apt package index.
You can check that your image is created using the below command.
$ sudo docker images
3) Which file should you use to create reproducible builds for Docker
images?
A. docker.yml
B. docker.config
C. Dockerfile
D. README.md
2. For Jenkins to work you need a web server; here you will install the nginx web server. To install nginx, use the step below.
$ sudo apt-get install nginx
Make sure the web server is running by entering the localhost IP in a browser.
6. Once Jenkins is installed, you can check the status using the below
command.
$ sudo systemctl status jenkins
7. The default Jenkins port is 8080. To start Jenkins, give the IP and port 8080 in the browser, e.g. http://localhost:8080.
After this you will be asked to enter the administrator password, which can be found in the file "/var/lib/jenkins/secrets/initialAdminPassword". Take the password and paste it to get started.
9. After all the plugins are installed, you will be prompted to set up the
administrative user. Create a user of your choice-
10. Next you will be prompted with Instance Configuration, where you need to provide the URL for your Jenkins instance; then click "Save and Finish".
11. After setting the URL, the Jenkins installation is complete, and you can start using your Jenkins instance.
2) Which command is used to run the Jenkins installation file in the war
format?
A. java -jar Jenkins.war
B. java -j Jenkins.war
C. javac Jenkins.war
D. java waenkins.war
In Chapter 2, you will learn more about how to set up a CI/CD pipeline from scratch and how Jenkins achieves Continuous Integration.
3. Place the following code in the CalcTest.java class. Here you have to
create both valid and invalid junit tests for all methods (which can be
individually tested).
package com.devops.calcProject;

import static org.junit.Assert.assertEquals;
import static org.junit.Assert.assertNotEquals;
import org.junit.*;
import com.devops.calcProject.CalcMain;

public class CalcTest {

    @Test
    public void testValidAddition() {
        int result = CalcMain.addition(4, 6);
        assertEquals(result, 10);
    }

    // The original listing had a stray @Test annotation with no method here;
    // a valid-subtraction test is assumed.
    @Test
    public void testValidSubtraction() {
        int result = CalcMain.subtraction(10, 4);
        assertEquals(result, 6);
    }

    @Test
    public void testValidMultiplication() {
        int result = CalcMain.multiplication(4, 6);
        assertEquals(result, 24);
    }

    @Test
    public void testValidDivision() {
        float result = CalcMain.division(10, 2);
        assertEquals(5.0f, result, 0.0001f);
    }

    @Test
    public void testValidRemainder() {
        int result = CalcMain.remainder(10, 3);
        assertEquals(result, 1);
    }

    @Test
    public void testInvalidAddition() {
        int result = CalcMain.addition(6, 2);
        assertNotEquals(result, 9);
    }

    @Test
    public void testInvalidSubtraction() {
        int result = CalcMain.subtraction(10, 4);
        assertNotEquals(result, 8);
    }

    // Another stray @Test in the original listing; an invalid-multiplication
    // test is assumed.
    @Test
    public void testInvalidMultiplication() {
        int result = CalcMain.multiplication(4, 6);
        assertNotEquals(result, 25);
    }

    @Test
    public void testInvalidDivision() {
        float result = CalcMain.division(20, 5);
        assertNotEquals(result, 5.0f);
    }

    @Test
    public void testInvalidRemainder() {
        int result = CalcMain.remainder(12, 6);
        assertNotEquals(result, 1);
    }
}
4. To test your Junit test cases, right click on your project from Project
Explorer bar, select “Run As” and then click on “maven test”.
5. When you perform "maven test", all the JUnit test cases in the project are executed, but you can also run your JUnit tests separately. To do so, right click on the "CalcTest.java" class and, under the "Run As" option, click on "JUnit Test".
2) JUnit is used for what type of software testing for the Java language?
A. Unit Testing
B. Integration Testing
C. Functional Testing
D. System Testing
4) JUnit test files are written in files with which file extension?
A. .junit
B. .test
C. .java
D. .unit
Create a group "nagcmd" for the Nagios setup and add the nagios user to it. Also add the Apache user (www-data) to the nagcmd group.
$ sudo groupadd nagcmd
$ sudo usermod -a -G nagcmd nagios
$ sudo usermod -a -G nagcmd www-data
$ cd /opt
$ sudo wget
https://assets.nagios.com/downloads/nagioscore/releases/nagios-
4.4.3.tar.gz
$ sudo tar xzf nagios-4.4.3.tar.gz
$ cd nagios-4.4.3
$ sudo ./configure --with-command-group=nagcmd
$ sudo make all
$ sudo make install
$ sudo make install-init
$ sudo make install-daemoninit
$ sudo make install-config
$ sudo make install-commandmode
$ sudo make install-exfoliation
In /etc/apache2/conf-available/nagios.conf
$ sudo vi /etc/apache2/conf-available/nagios.conf
<Directory "/usr/local/nagios/sbin">
Options ExecCGI
AllowOverride None
Order allow,deny
Allow from all
AuthName "Restricted Area"
AuthType Basic
AuthUserFile /usr/local/nagios/etc/htpasswd.users
Require valid-user
</Directory>
<Directory "/usr/local/nagios/share">
Options None
AllowOverride None
Order allow,deny
Allow from all
AuthName "Restricted Area"
AuthType Basic
AuthUserFile /usr/local/nagios/etc/htpasswd.users
Require valid-user
</Directory>
$ cd /opt
$ sudo wget http://www.nagios-plugins.org/download/nagios-plugins-
2.2.1.tar.gz
$ sudo tar xzf nagios-plugins-2.2.1.tar.gz
$ cd nagios-plugins-2.2.1
9. Verify Settings
Use the below Nagios commands to verify the Nagios installation and
configuration file. After successful installation start the Nagios core
service.
$ sudo /usr/local/nagios/bin/nagios -v /usr/local/nagios/etc/nagios.cfg
$ service nagios start
$ sudo curl -L https://github.com/docker/compose/releases/download/1.21.2/docker-compose-`uname -s`-`uname -m` -o /usr/local/bin/docker-compose
Create a directory:
version: "3"
services:
  grafana:
    image: grafana/grafana
    container_name: grafana
    restart: always
    ports:
      - 3000:3000
    networks:
      - grafana-net
    volumes:
      # mount the external volume at Grafana's default data path
      - grafana-volume:/var/lib/grafana
  graphite:
    image: graphiteapp/graphite-statsd
    container_name: graphite
    restart: always
    networks:
      - grafana-net
networks:
  grafana-net:
volumes:
  grafana-volume:
    external: true
This Compose file uses the official Docker images for both Graphite
and Grafana. It also specifies a network to connect the containers.
4. Use the below URL to navigate to Grafana console (use port 3000
which is configurable in docker-compose.yml file)
URL - http://localhost:3000
5. You need to create a data source. Click on Create your first data
source.
Do necessary configuration:
Name: Graphite
URL: http://graphite:8080
Access: Server (Default)
Version: Select the newest available. 1.1.3 in this example.
7. Graphite does not collect data by itself. To collect data for Graphite, we need to install a third-party utility like collectd.
Install collectd
$ sudo apt-get update
$ sudo apt-get install collectd collectd-utils
Configure Collectd
$ sudo nano /etc/collectd/collectd.conf
LoadPlugin apache
LoadPlugin cpu
LoadPlugin df
LoadPlugin entropy
LoadPlugin interface
LoadPlugin load
LoadPlugin memory
LoadPlugin processes
LoadPlugin rrdtool
LoadPlugin users
LoadPlugin write_graphite
9. For the df plugin, which tells us how full our disks are, we can uncomment the plugin configuration (if present) or add a simple configuration in the /etc/collectd/collectd.conf file ("sudo vim /etc/collectd/collectd.conf"), which looks like this:
<Plugin df>
Device "/dev/sda1"
MountPoint "/"
FSType "ext3"
</Plugin>
10. Point Device to the device name of the drive on your system. You can find this by typing the following command in the terminal:
$ df
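The device backing the root filesystem can also be pulled out of the df output directly (a convenience sketch; the device name will differ per system):

```shell
# Print the device that backs '/': second line of 'df /', first column.
root_device=$(df / | awk 'NR==2 {print $1}')
echo "Device for the df plugin: $root_device"
```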
<Plugin write_graphite>
<Node "example">
Host "172.18.0.3"
Port "2003"
Protocol "tcp"
LogSendErrors true
Prefix "collectd."
StoreRates true
AlwaysAppendDS false
EscapeCharacter "_"
</Node>
</Plugin>
12. Save and close the file when you are finished.
$ sudo service collectd stop
$ sudo service collectd start
All the steps discussed above are manual, and developers have to put a lot of effort into them. Using Jenkins, however, we can automate the build and deployment process. Jenkins, an open source Continuous Integration server, helps automate the software development process with continuous integration and continuous delivery. To automate the build and deployment process, developers just create a pipeline in Jenkins and configure it to perform all the functions: the source code repository can be integrated so that the pipeline picks up any new commits, the build process can be configured to perform the maven clean and build actions and generate the jar, and the post-build process executes the program. Thus the full DevOps process can be automated, easing the developers' effort.
For this exercise, we will deploy our calculator java project on the Jenkins server and integrate all the tools into the Jenkins project, including the source code repository (GitHub) and maven for the build process, which generates the executable jar.
1. First step is to add the required plugins for your project. For this click on
“Manage Jenkins” option from the left panel on Jenkins homepage. Now
click on the “Manage Plugins” from the main options. Now under the
“Available” tab search and install “Hudson Post build task” and
“PostBuildScript Plugin” plugins. These are essential for running
executable jar.
3. Now we need to push our maven project to the git server which we set up in Exercise 3. Copy the pom.xml and /src folder to the git client, then add these files and push the changes to the git server.
5. Now, configure your pipeline. First, provide a description for the Jenkins pipeline (this step is optional).
7. After this, add the build configuration, which tells Jenkins what tasks need to be performed during the build process.
Here, click on "Add build step" and select "Execute shell"; in this, give the "mvn clean package" command, which will clean the maven project and build the package (executable jar).
8. Because Jenkins runs on the host machine as the "jenkins" user, it doesn't have sudo access. You therefore have to tell the host machine not to ask for a password for the jenkins user. For this, add the entry below (or update it if already present) in the sudoers file, which can be found at "/etc/sudoers".
$ sudo vi /etc/sudoers
Add this entry – jenkins ALL=(ALL) NOPASSWD: ALL
9. Now, add the Post-build Actions configuration, which tells Jenkins what tasks will be performed after the build process is completed.
For this, under "Post-build Actions" click "Add post-build action" and select "Execute Scripts". Add a post-build step and select the "Execute shell" option under Add build step. Take the jar built in the previous step and copy it into a directory, then execute the jar.
sudo cp /var/lib/jenkins/workspace/devops-integration/target/calcProject-0.2-jar-with-dependencies.jar /home/administrator/Documents/devops/
10. You have configured the Jenkins pipeline; let's test it now. Go to the Jenkins home page and issue a build against your pipeline by clicking the build option shown below.
11. You can see the build running on the left panel.
12. Once the build has run, you can see the status on the homepage against the pipeline, where blue means the build passed and red means the build failed. You can also see the build log and status details; for this, click on your pipeline from the Jenkins homepage. On the left panel you can find all pipeline-related details as well as all the builds.
13. To see the log for your build, click on the build number from “Build
History” and then from the left panel click on “Console Output”.
The Console Output for your build will look like this –
DevOps in eCommerce
The next big step for DevOps is its evolution into the systems or embedded-
devices space where it’s often referred to as Continuous engineering. When
the Internet started, most of the data shared on it was human-generated.
Today, innumerable Internet-connected devices (such as sensors and
actuators) generate much more data than humans do. This network of inter-
connected devices on the Internet is commonly referred to as the Internet
of Things.
In this space, DevOps is potentially even more essential, because of the co-
dependence of the hardware and the embedded software that runs on it.
DevOps principles are reflected in continuous engineering to ensure that
the embedded software delivered to the devices is high-quality software
with the right engineering specifications.
A. Maven
B. Jenkins
C. Automation Anywhere
D. Puppet
E. Docker
Automatic Rollback
When there is a failure during the monitoring phase, someone has to verify it and roll back the failed release to the previously successful one. This is a time-consuming process which requires someone to watch the monitoring dashboard and react to it.
If the team follows DevOps way of working, then there will be an alert raised
when something goes wrong. But still, even after receiving the alert,
someone has to look at the issue if it is coming from the last release and
decide whether or not there is a need to roll back. So, there is a significant
amount of time wasted during the process, just because a release failed,
and nobody kept an eye on it. What if this process could be automated?
There are DevOps tools available that monitor the dashboard and if
something goes wrong, they initiate automatic rollback of current release
to the previously working release. Also, if there are new features rolled out,
there should be a tested rollback plan for every feature so if something goes
wrong, that feature can be turned off automatically.
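As a minimal illustration of the idea (the class and function names here are hypothetical, not taken from any specific tool), an automated-rollback monitor boils down to a health check driving a deploy decision:

```python
from dataclasses import dataclass


@dataclass
class Release:
    version: str
    healthy: bool  # outcome of the release's post-deploy health checks


def monitor_and_rollback(current: Release, previous: Release) -> Release:
    """Return the release that should remain deployed.

    If the current release fails its health check, fall back to the last
    known-good release automatically, instead of waiting for a person to
    notice the monitoring dashboard.
    """
    if current.healthy:
        return current
    # A real pipeline would redeploy `previous` through the CD tool here.
    return previous


active = monitor_and_rollback(Release("v2.0", False), Release("v1.9", True))
print(active.version)  # v1.9
```

In a real setup the `healthy` flag would come from the monitoring tool's alerting API rather than being passed in directly.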
Automatic Provisioning
7. Now save the configuration and run the build. After the build
completes, check the logs: you will find that the build failed, but it
then rolled back to the previous build and executed the JAR.
IaC dispenses with the traditional manual processes used by IT ops teams.
This is not simply writing ad-hoc scripts; you write code to provision and
manage the lifecycle of your cloud infrastructure from creation to
teardown. IaC describes the full infrastructure and its topology for
hosting an application. It encompasses every step required to go from a
blank sheet to the full stack needed to run applications, and it also
defines how to tear everything down again. These IaC patterns become
reusable building blocks for complex infrastructures.
count = 2
os_reference_code = "CENTOS_7_64"

count = 3
datacenter = "LON05"
flavor = "B1_1X2X25"
private_security_group_ids = ["${ibm_security_group.sg_private_lamp.id}"]
public_security_group_ids = ["${ibm_security_group.sg_public_lamp.id}"]
}

name = "${var.lb_name}"
protocols = [{
  frontend_protocol = "HTTPS"
  frontend_port = 443
  backend_protocol = "HTTP"
  backend_port = 80
  tls_certificate_id = "${ibm_compute_ssl_certificate.lb-web-cert.id}"
}]
Treating infrastructure as code has many benefits. Handling it the same
way that developers treat application code, and applying DevOps techniques
to it, brings mature practices that improve quality and reduce risk.
Existing best practices of version control, testing, small deployments,
and design patterns apply equally to cloud infrastructure. IaC is also a
key enabler of DevOps because dev and test environments can be created and
torn down as needed.
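To make the "created and torn down as needed" point concrete, here is a toy Python sketch (the class and resource names are invented for illustration, not from any IaC tool) of an environment whose whole lifecycle is driven by code:

```python
class Environment:
    """Toy model of an IaC-managed environment: resources are provisioned
    in order and torn down in reverse order, all from code."""

    def __init__(self, name):
        self.name = name
        self.resources = []

    def provision(self, resource):
        self.resources.append(resource)
        return self  # allow chained provisioning calls

    def teardown(self):
        # Destroy resources in reverse order of creation, since later
        # resources typically depend on earlier ones.
        released = list(reversed(self.resources))
        self.resources.clear()
        return released


env = Environment("test")
env.provision("vm").provision("load-balancer")
print(env.teardown())  # ['load-balancer', 'vm']
```

Real tools such as Terraform implement this same create/destroy lifecycle by computing a dependency graph from the declared configuration.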
Scalability
Scalability is one of the primary reasons that DevOps and cloud computing
are so integrally linked. Many factors affect scalability, and these can
in turn affect DevOps success. Here are a few things to remember to ensure
the environment is ready to scale:
Continuous optimization
Some tools and apps will run properly as the underlying infrastructure
scales up or down, but may not run optimally. Ensure that the tools and
apps used can continuously adjust to changes in server, bandwidth, and
storage capacity. They should always be able to take full advantage of the
available resources to deliver the best possible performance.
Storage
Most apps have a data component, which means there will be data storage
that must be scaled as well.
Cost
It may be easy to automate scalability on cloud platforms, but it is not
free. The DevOps tools used may also include licensing fees that increase
as usage scales.
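As a hypothetical sketch of the kind of rule behind automated scaling (the target utilization and instance bounds are made-up numbers, not from any product), a proportional autoscaler computes a desired instance count from current CPU utilization and clamps it to limits that cap cost:

```python
import math


def desired_instances(current_n, cpu_pct, target_pct=60, min_n=2, max_n=10):
    """Proportional autoscaling rule: resize the fleet so average CPU
    utilization moves toward target_pct, within min/max bounds that
    keep cost under control."""
    if current_n <= 0:
        raise ValueError("need at least one running instance")
    # If 4 instances run at 90% and the target is 60%, we need
    # ceil(4 * 90 / 60) = 6 instances to spread the load.
    wanted = math.ceil(current_n * cpu_pct / target_pct)
    return max(min_n, min(max_n, wanted))


print(desired_instances(4, 90))   # 6  -> scale out under load
print(desired_instances(4, 15))   # 2  -> scale in, but not below min_n
print(desired_instances(4, 300))  # 10 -> capped at max_n to limit cost
```

The `max_n` clamp is where the cost concern above shows up directly: without it, a traffic spike could scale spending without bound.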
Clustering
Cloud is a term that refers to a global network of servers hooked together
to operate as a single ecosystem. These servers can store and manage data,
run software, and deliver services such as webmail and video streaming.
Instead of accessing files or software from your local computer, you
access them online from any device that can connect to the internet.
Some of the reasons why businesses move to cloud are:
1. Reduced cost
2. Flexibility – work from anywhere
3. Scalability – add additional storage or features whenever needed
4. Built-in backup and disaster-recovery options
5. Security – cloud providers invest heavily in security controls that
are difficult to breach
IBM Cloud is a suite of cloud computing services from IBM that combines
platform as a service (PaaS) with infrastructure as a service (IaaS) to
provide an integrated experience. The platform scales and supports both
small development teams and organizations, and large enterprise
businesses. Globally deployed across data centers around the world, the
solution you build on IBM Cloud™ spins up fast and performs reliably in a
tested and supported environment you can trust.
A robust console serves as the front end for creating, viewing, and
managing your cloud resources.
Whether you have existing code that you want to modernize and bring to
the cloud or you're developing a brand-new application, you can tap into the
rapidly growing ecosystem of available services and runtime frameworks in
IBM Cloud.
For more details, refer to
https://cloud.ibm.com/docs/overview?topic=overview-whatis-platform
Toolchains
With the IBM Cloud DevOps service, you can create continuous development
and continuous integration toolchains that go from source code to a
running application in minutes. Toolchains let you develop, track, plan,
and deploy applications in one place, giving you access to everything you
need to build all types of applications.
A DevOps toolchain is a set of tools that automates the tasks of developing
and deploying your app. You can perform DevOps manually with simple
apps, but the need for automation increases quickly as app complexity
increases, and toolchain automation is a must-have for continuous delivery.
The core component of a DevOps toolchain is a version control repository
like GitHub. More tools might include backlog tracking, delivery pipelines,
an integrated development environment (IDE), and monitoring like IBM®
Cloud DevOps Insights.
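The idea of a toolchain as automated hand-offs between tools can be sketched in a few lines of Python (the stage names and lambdas are purely illustrative; a real toolchain wires together a Git repository, a delivery pipeline, an IDE, and monitoring):

```python
def run_toolchain(source, stages):
    """Pass an artifact through an ordered list of (name, tool) stages,
    each tool consuming the previous stage's output."""
    artifact = source
    for name, tool in stages:
        artifact = tool(artifact)
    return artifact


stages = [
    ("build",  lambda src: f"built({src})"),
    ("test",   lambda b:   f"tested({b})"),
    ("deploy", lambda t:   f"deployed({t})"),
]
print(run_toolchain("app-src", stages))  # deployed(tested(built(app-src)))
```

The point of automating the chain is exactly this composition: once a commit enters the first stage, no human hand-off is needed to reach the last one.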
When you create an app by using a starter kit, and then click Configure
continuous delivery on the App details page, a DevOps toolchain is created.
The toolchain has a code repository, delivery pipeline, and web IDE. You
can then build on this toolchain to collaboratively manage and deploy your
app to separate environments for development, test, and production.
DevOps Insights
DevOps Insights is a cloud-based solution that provides comprehensive
insights from popular continuous integration and continuous delivery
tools to increase the speed, quality, and control of your applications.
Features:
Deployment risk is like a continuous delivery safety net. It analyzes the
results from unit tests, functional tests, application scans and code
Under GitHub, click the Authorize button and log in to your GitHub
account. Once it is authorized, you will need to create an API key in the
pipeline option.
9) This displays the repository files that were created by the default
initial configuration.
10) Open the main.ejs file, which contains the default code.
<!DOCTYPE html>
<html lang="en-us">
<head>
  <title>Welcome to Devops Learning!!</title>
  <link rel="stylesheet" href="/built/css/default.css" />
  <script src="/bower_components/angular/angular.js"></script>
  <script src="/bower_components/angular-route/angular-route.js"></script>
  <script src="/js/controllers.js"></script>
  <style>
    img {
      display: block;
      margin-left: auto;
      margin-right: auto;
      padding-top: 10%;
    }
    marquee {
      text-align: center;
    }
  </style>
</head>
<body>
  <center>
    <img src="https://encrypted-tbn0.gstatic.com/images?q=tbn:ANd9GcSE65xZ9tUWsRzustJpOZ1rWiA-veUSfL9mNNZG7-Y6Rqb2j5nz"
         alt="ibm_logo" width="200" height="150">
    <br>
  </center>
</body>
<script>
  angular.element(document).ready(function() {
    angular.bootstrap(document, ['ConsoleModule']);
  });
</script>
</html>
Commit
13) Commit the changes with a proper message. Once the changes are
committed, click the Push button on the right-hand side to push the
outgoing changes.
Deploy
17) The project will be up and running; you can modify it as you wish in
the Eclipse IDE and commit the changes in Git.
18) If any changes are committed, the Delivery Pipeline will automatically
build and deploy them, and you can see those changes live.
Answer Keys
Chapter Name Question Number -Answer Keys
Introduction to DevOps 1-D; 2-B; 3-C; 4-C; 5-D
DevOps Tools - Git 1-C; 2-D; 3-A; 4-B; 5-C
DevOps Tools - Maven 1-A; 2-A; 3-D; 4-B; 5-D
DevOps Tools - Puppet 1-A; 2-A; 3-C; 4-A; 5-A
DevOps Tools - Docker 1-B; 2-C; 3-C; 4-D; 5-D
DevOps Tools - Jenkins 1-B; 2-A; 3-D
DevOps Tools - JUnit 1-C; 2-A; 3-D; 4-C; 5-D
DevOps Tools - Nagios 1-D; 2-D; 3-A; 4-D; 5-C
DevOps Tools – Graphite & Grafana 1-A,B,D; 2-E; 3-A; 4-C
DevOps Usecase & Setup 1-C; 2-A; 3-C
DevOps on IBM Cloud 1-A; 2-B; 3-B; 4-B; 5-B
NOTICES
This material is meant for IBM Academic Initiative use only. NOT FOR RESALE.
TRADEMARKS
IBM, the IBM logo, and ibm.com are trademarks or registered trademarks of
International Business Machines Corp., registered in many jurisdictions
worldwide. Other product and service names might be trademarks of IBM or
other companies. A current list of IBM trademarks is available on the web
at "Copyright and trademark information" at
www.ibm.com/legal/copytrade.html.
Adobe, and the Adobe logo are either registered trademarks or trademarks of Adobe
Systems Incorporated in the United States, and/or other countries.
Microsoft, Windows and the Windows logo are trademarks of Microsoft Corporation in
the United States, other countries or both.
This document may not be reproduced in whole or in part without prior permission of
IBM.