DevOps Lab Manual

BE - IT
Even Semester (VIII)
(2020-21)

By:
Prof. Ninad V Gaikwad
(Email: [email protected])
Website:
https://sites.google.com/a/mes.ac.in/ninad
List of Contents

Experiment 01: Introduction to DevOps

Experiment 02: Git and GitHub Basics (Distributed Version Control System)

Experiment 03: Git and GitHub Advanced

Experiment 04: SVN - Subversion (Centralized Version Control System)

Experiment 05: Docker Basics (Virtualization and Containerization)

Experiment 06: Docker Advanced

Experiment 07: Jenkins (Continuous Integration)

Experiment 08: Jenkins (Role Based Authorization Strategy)

Experiment 09: Ansible (Provisioning)

Experiment 10: Combining all tools (Git, Maven, Docker, Ansible and Jenkins)
Experiment 01: Introduction to DevOps

Sources
https://www.edureka.co/blog/devops-lifecycle/

Before DevOps

Waterfall Model:
● Sequential model
● Top-Down approach
● Phases such as Requirement Gathering, Software Design,
Implementation, Testing, Deployment, and Maintenance
● Requires a lot of time for software development
● Suitable only for projects with stable requirements
(requirements should not change over time)

AGILE Methodology:
● Promotes continuous iteration of development and testing
● Brought agility to development, but not to operations
● Lack of collaboration between developers and operations
engineers

What is DevOps ?

● The term DevOps is a combination of two words, namely
Development and Operations. DevOps is a practice that allows a
single team to manage the entire application development life
cycle, that is, development, testing, deployment and operations.
● DevOps is a software development approach through which
superior-quality software can be developed quickly and with
greater reliability.

DevOps Life Cycle

Phases/Stages in DevOps Lifecycle:


1. Continuous Development (Plan, Code and Build)
2. Continuous Testing (Test)
3. Continuous Integration
4. Continuous Deployment (Deploy and Operate)
5. Continuous Monitoring (Monitor)
These DevOps stages are carried out in a continuous loop until the
desired product quality is achieved.

DevOps Tools

1 - Continuous Development

● Involves 'planning', 'coding' and 'building' of the software
● No specific DevOps tools are required for planning
● Code is maintained by using version control tools
● Maintaining the code is referred to as Source Code Management
● The most popular tools used for Source Code Management are Git,
SVN, Mercurial and CVS (JIRA is primarily a planning and
issue-tracking tool rather than an SCM tool)
● Tools like Ant, Maven and Gradle can be used in this phase for
building/packaging the code into an executable file that can
be forwarded to any of the next phases

2 - Continuous Testing

● Stage where the developed software is continuously tested for
bugs
● Automation testing tools like Selenium, TestNG, JUnit, etc. are
used
● Selenium performs the automation testing, and the reports are
generated by TestNG
● The entire testing phase can be automated with the help of a
Continuous Integration tool called Jenkins
● The execution of the test cases can also be scheduled at
predefined times

3 - Continuous Integration

● This stage is the heart of the entire DevOps life cycle
● Software developers commit changes to the source code on a
daily or weekly basis
● Every commit is then built, which allows early detection of
problems
● Jenkins is a very popular tool used in this phase; other tools
used for integration are Bamboo and Hudson
● Whenever there is a change in the Git repository, Jenkins
fetches the updated code and prepares a build of that code as
an executable file (a WAR or a JAR). This build is then
forwarded to the test server or the production server.

4 - Continuous Deployment
● This is the stage where the code is deployed to the production
servers
● Configuration management and containerization tools are
involved in this stage
● Configuration management
○ It is the act of releasing deployments to servers,
scheduling updates on all servers and keeping the
configurations consistent across all the servers.
○ Some popular tools that are used here are Puppet, Chef,
SaltStack, and Ansible.
● Containerization
○ Docker and Vagrant are the popular tools used for this
purpose.
○ These tools help produce consistency across Development,
Test, Staging and Production environments. Besides this,
they also help in scaling-up and scaling-down of
instances swiftly.

5 - Continuous Monitoring

● Vital information about the use of the software is recorded
and then processed to verify that the application is
functioning properly. System errors such as low memory or an
unreachable server are resolved in this phase.
● The root cause of any issue is determined in this phase.
● This practice involves the participation of the Operations
team who will monitor the user activity for bugs or any
improper behavior of the system.
● The popular tools used for this are Splunk, ELK Stack, Nagios,
NewRelic and Sensu.
● These tools help you monitor the application’s performance and
the servers closely and also enable you to check the health of
the system proactively.
● Any major issues found are reported to the development team so
that they can be fixed in the continuous development phase,
leading to faster resolution of problems.
Experiment 02: Git and GitHub Basics (Distributed Version Control System)

Prerequisite
● Install Git on your local machine: https://git-scm.com/downloads
● Create/log in to an account on github.com: https://github.com/

Git - Distributed Version Control System. [https://git-scm.com/doc]

GitHub - GitHub is a code hosting platform (website) for
collaboration and version control that lets you (and others) work
together on projects. [https://help.github.com]

Online tool to learn git: https://learngitbranching.js.org/

Sources:
https://git-scm.com/book/en/v2/Getting-Started-About-Version-Control

What is Version Control System?

Version control is a system that records changes to a file or set
of files over time so that you can recall specific versions later.
Using a Version Control System (VCS) is a very wise choice. It
allows you to revert selected files to a previous state, revert the
entire project to a previous state, compare changes over time, see
who last modified something that might be causing a problem, see who
introduced an issue and when, and more. Using a VCS also generally
means that if you mess things up or lose files, you can easily
recover. In addition, you get all this for very little overhead.

Types of Version control systems:

● Local Version Control Systems
● Centralized Version Control Systems (discussed in Experiment 4)
● Distributed Version Control Systems

Local Version Control System

Many people’s version-control method of choice is to copy files
into another directory (perhaps a time-stamped directory, if
they’re clever). This approach is very common because it is so
simple, but it is also incredibly error prone. It is easy to forget
which directory you’re in and accidentally write to the wrong file
or copy over files you don’t mean to.
To deal with this issue, programmers long ago developed local VCSs
that had a simple database that kept all the changes to files under
revision control.

One of the most popular VCS tools was a system called RCS, which is
still distributed with many computers today. RCS works by keeping
patch sets (that is, the differences between files) in a special
format on disk; it can then re-create what any file looked like at
any point in time by adding up all the patches.
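The patch-set idea behind RCS can be illustrated with the standard diff and patch utilities (a minimal sketch, not RCS itself; the file names are arbitrary):

```shell
# Work in a throwaway directory
cd "$(mktemp -d)"

# Version 1 of a file
printf 'hello\n' > greeting.txt
cp greeting.txt greeting.v1.txt

# Version 2: edit the file
printf 'hello\nworld\n' > greeting.txt

# Store only the difference (the "patch set"), not a full copy.
# diff exits with status 1 when the files differ, hence "|| true".
diff -u greeting.v1.txt greeting.txt > v1-to-v2.patch || true

# Re-create version 2 from version 1 plus the patch
patch -o rebuilt.txt greeting.v1.txt < v1-to-v2.patch
diff rebuilt.txt greeting.txt && echo "versions match"
```

Any version can thus be reconstructed by starting from the oldest copy and applying the stored patches in order, which is exactly the space-saving trick the paragraph above describes.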

Distributed Version Control Systems

Whenever you have the entire history of the project in a single
place, you risk losing everything. This is where Distributed
Version Control Systems (DVCSs) step in.

In a DVCS (such as Git, Mercurial, Bazaar or Darcs), clients don’t
just check out the latest snapshot of the files; rather, they fully
mirror the repository, including its full history. Thus, if any
server dies, and these systems were collaborating via that server,
any of the client repositories can be copied back up to the server
to restore it. Every clone is really a full backup of all the data.

Furthermore, many of these systems deal pretty well with having
several remote repositories they can work with, so you can
collaborate with different groups of people in different ways
simultaneously within the same project. This allows you to set up
several types of workflows that aren’t possible in centralized
systems, such as hierarchical models.
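The "several remotes" idea can be sketched in a throwaway directory, simulating two servers as local bare repositories (the names server-a, server-b and the remote name "backup" are invented for illustration):

```shell
# Simulate two remote servers as local bare repositories
cd "$(mktemp -d)"
git init --bare server-a.git
git init --bare server-b.git

mkdir project && cd project
git init
git checkout -B master
git config user.name "demo"
git config user.email "demo@example.com"

echo "readme" > README.txt
git add README.txt
git commit -m "initial commit"

# One local clone can push the same full history to several remotes
git remote add origin ../server-a.git
git remote add backup ../server-b.git
git push origin master
git push backup master
git remote -v
```

After the two pushes, either "server" holds the complete history, so losing one of them loses nothing.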
Git Architecture
Video tutorial: The Coding Train Git and Github

Git cheat sheet: https://github.github.com/training-kit/downloads/github-git-cheat-sheet.pdf

Delete Saved Credentials (Previous User)

Credential manager → Windows Credentials → Generic Credentials → git (remove)

Git Commands (type following command in the git bash)

To check git version and list common git commands

git --version

To configure git for all repositories

git config --global user.name "xyz"

git config --global user.email "[email protected]"

To configure git for single repository

git config user.name "xyz"

git config user.email "[email protected]"

Creating local repository (Project Originating on your PC)

Right click on the folder for which you want to create repository and then select Git Bash
Here command and start typing the following commands:

git init

Untrack Directory
rm -rf .git

To link remote repository to local repository

git remote add origin "URL-of-Remote-Repository"

git pull origin master // ** If the remote repository is non-empty

To remove remote repository

git remote remove origin

To check remote

git remote -v

Make two text files (one to ignore other to track). Put some text in both

vi newfile.txt >> Press ‘Insert’ or ‘a’ >> insert some text >>
Press ‘Esc’ >> ‘:wq’

vi ignore.txt >> Press ‘Insert’ or ‘a’ >> insert some text >> Press
‘Esc’ >> ‘:wq’

OR

Use notepad to create files “newfile.txt” and “ignore.txt”

Add the file name to the ".gitignore" file to ignore that particular file
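A quick self-contained sketch of .gitignore in action, using the same file names as the exercise above:

```shell
cd "$(mktemp -d)"
git init demo && cd demo
git config user.name "demo"
git config user.email "demo@example.com"

echo "some text" > newfile.txt     # file we want to track
echo "scratch"   > ignore.txt      # file we want git to ignore

# Listing a name in .gitignore hides it from the untracked-files list
echo "ignore.txt" > .gitignore

git status --short                 # shows .gitignore and newfile.txt, not ignore.txt
```

`git check-ignore ignore.txt` can be used to confirm that a given path is matched by a .gitignore rule.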

Display status of files (Working Directory, Staging area, Repository)

git status

Add file to staging area

git add filename.txt // add specific file

git add -A   OR   git add . // add all files


Remove files from Staging Area

git reset filename.txt

git reset //Remove all files

***** Make changes in one file ******

View changes made after last commit / edit

git diff

Save changes to local repo

git commit -m "commit message"

Get Commit Details

git log

Pull remote files

git pull origin master

Push changes to the remote repo (always Pull first)

git push origin master

Restore previous version (Commit)

git checkout -b newbranch commit-hash // Will create a new branch on the commit
hash specified

OR

git checkout commit-hash // Commit you want to go to


git branch newbranch // Make a new branch on the commit

git checkout newbranch // Go to new branch

******* Work on Ongoing Project ******

Clone remote Repo

git clone "URL-of-Remote-Repository" // use current directory

git clone "URL-of-Remote-Repository" desktop/gitrepo1 // use specified directory
Experiment 03: Git and GitHub Advanced

Online tool to learn git: https://learngitbranching.js.org/

Clone remote Repo

git clone "URL-of-Remote-Repository" // use current directory

git clone "URL-of-Remote-Repository" desktop/gitrepo1 // use specified directory

View all branches

git branch

git branch -a // display all branches, remote and local

Create new branch

git branch mergebranch


git branch rebasebranch

Go to New branch (to make changes)

git checkout mergebranch

***** add one file and commit ******

Push branch to remote repo

git push origin mergebranch

git push --all origin // Push all branches to remote

git branch -a // Display all the branches


Merge Branches (into master - Use online tool to demonstrate)

git checkout master // Go to master branch

git pull origin master // Pull changes if any

git branch --merged // Display Merged branches if any

git merge mergebranch // Merge mergebranch into master

Rebase Branches (into master - Use online tool to demonstrate)

git checkout master


***** add one file and commit ******
git rebase rebasebranch
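The effect of rebase on history can be seen end to end in a throwaway repository (a sketch using the branch name from this exercise; assumes a reasonably recent Git):

```shell
cd "$(mktemp -d)"
git init demo && cd demo
git config user.name "demo"
git config user.email "demo@example.com"
git checkout -B master

echo base > base.txt
git add . && git commit -m "base"

git checkout -b rebasebranch        # branch off master
echo feature > feature.txt
git add . && git commit -m "feature work"

git checkout master                 # meanwhile master moves ahead
echo fix > fix.txt
git add . && git commit -m "fix on master"

# Rebase replays the branch commits on top of master's tip,
# giving a linear history instead of a merge commit
git checkout rebasebranch
git rebase master

git log --oneline                   # newest first: feature work, fix on master, base
```

Compare this with `git merge`, which would instead tie the two lines of history together with an extra merge commit.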

Deleting a Branch

git branch --merged // Display merged branches if any

git branch -d mergebranch // Delete branch in local repo

git branch -a // Display branches

git push origin --delete mergebranch // Delete branch in remote repo

Undo the changes made (in the working directory, not yet committed)

git checkout filename.txt

Edit the commit message (changes the hash; recommended only if the changes
have not been pushed to the central repo)

git log --stat // See the stats for recent commits

git commit --amend -m "Updated Message" // Change only the commit message

** Make changes in a file and add one file **


git commit --amend // to add files modified or newly added to previous commit

Move commit to a different branch (Use online tool to demonstrate)

git checkout newbranch // Go to that branch

git cherry-pick commit-hash

** delete commit from original branch **

git reset --soft commit-hash // Check the log to see changes (hash should be
one commit old)

git reset commit-hash // --mixed, the default (moves changes to the working directory)

git reset --hard commit-hash // (hash should be one commit old; discards the changes)

git clean -df // gets rid of untracked directories and files
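The three reset modes can be compared side by side in a throwaway repository (a sketch; `HEAD~1` is used here in place of an explicit commit hash):

```shell
cd "$(mktemp -d)"
git init demo && cd demo
git config user.name "demo"
git config user.email "demo@example.com"
git checkout -B master

echo one > a.txt; git add .; git commit -m "first"
echo two > b.txt; git add .; git commit -m "second"

# --soft: undo "second" but keep b.txt staged
git reset --soft HEAD~1
git status --short                  # "A  b.txt"

# --mixed (the default): also unstage; b.txt becomes untracked
git reset
git status --short                  # "?? b.txt"

# --hard: reset AND discard the change from the working tree
git add b.txt && git commit -m "second again"
git reset --hard HEAD~1
git status --short                  # prints nothing: b.txt is gone
```

In short: soft moves only HEAD, mixed also resets the staging area, and hard additionally overwrites the working directory.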

Get back files before being collected by the garbage collector

git reflog

git checkout commit-hash

git log

git branch backup

git checkout master

git branch

git checkout backup

Revert commit (undo changes that others have pulled)

git revert commit-hash // save and exit the message

git diff commit-hash revert-hash


Stash Command (when not ready to commit but save changes)

git stash save "Stash-message" // (newer Git versions use: git stash push -m "Stash-message")

git stash list

git stash apply stash-code // Restore changes but keep stash (applicable between
branches)

git checkout -- . // Go back to state before apply

git stash pop // Restore changes most recent and remove stash (applicable between
branches)

git stash drop stash-code // Delete specific stash

git stash clear // Clear all stashes
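A short stash walk-through in a throwaway repository (a sketch; note that newer Git versions prefer `git stash push -m` over the older `git stash save`):

```shell
cd "$(mktemp -d)"
git init demo && cd demo
git config user.name "demo"
git config user.email "demo@example.com"
git checkout -B master

echo v1 > notes.txt
git add . && git commit -m "first"

echo v2 > notes.txt                 # half-finished edit, not ready to commit

git stash push -m "wip on notes"    # shelve the edit; working tree is clean
git stash list
cat notes.txt                       # back to v1

git stash pop                       # restore the edit and drop the stash
cat notes.txt                       # v2 again
```

`apply` would restore the same change while keeping the stash entry around, which is what makes stashes reusable across branches.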

Use Git difftool (many other diff tools with better UIs are available online)

git difftool master // Changes made in local repository

git difftool commit1-hash commit2-hash // View exact changes between the two
commits

git difftool origin/master // Difference between local and remote repository


Experiment 04: SVN - Subversion (Centralized Version Control System)

Requirements:

Two Lubuntu / Ubuntu systems (Server and Client)


Download Link for Lubuntu Desktop: https://lubuntu.net/downloads/

Centralized Version Control System

To collaborate with developers on other systems, Centralized
Version Control Systems (CVCSs) were developed. These systems (such
as CVS, Subversion, and Perforce) have a single server that contains
all the versioned files, and a number of clients that check out
files from that central place. For many years, this has been the
standard for version control.

This setup offers many advantages, especially over local VCSs. For
example, everyone knows to a certain degree what everyone else on
the project is doing. Administrators have fine-grained control over
who can do what, and it’s far easier to administer a CVCS than it
is to deal with local databases on every client.

However, this setup also has some serious downsides. The most
obvious is the single point of failure that the centralized server
represents. If that server goes down for an hour, then during that
hour nobody can collaborate at all or save versioned changes to
anything they’re working on. If the hard disk the central database
is on becomes corrupted, and proper backups haven’t been kept, you
lose absolutely everything — the entire history of the project
except whatever single snapshots people happen to have on their
local machines. Local VCS systems suffer from this same
problem — whenever you have the entire history of the project in a
single place, you risk losing everything.

Difference between SVN and Git

● SVN is a centralized version control system; Git is a
distributed version control system.
● SVN is a revision control tool; Git is an SCM (source code
management) tool.
● SVN does not keep a cloned repository; Git has a cloned
repository.
● Branches in SVN are folders inside the repository, and some
special commands are required for merging them; Git branches
are easy to work with, and Git helps in merging files quickly
and in finding the unmerged ones.
● SVN has a global revision number; Git does not have a global
revision number.
● SVN does not contain cryptographically hashed contents; Git's
contents are cryptographically hashed, which protects them
from repository corruption due to network issues or disk
failures.
● SVN stores content as files; Git stores content as metadata.
● SVN's content is less secure than Git's.
● CollabNet, Inc. developed SVN; Linus Torvalds developed Git
for the Linux kernel.
● SVN is distributed under an open-source license; Git is
distributed under the GNU General Public License.

On Server side:

sudo apt-get install subversion // Will install SVN on server

sudo mkdir -p /svn/repos // Make Directory to store repositories

sudo svnadmin create /svn/repos/helloworld // Make SVN repository

sudo apt-get install nano


sudo nano /svn/repos/helloworld/conf/svnserve.conf // Open configuration file
to edit
● anon-access = none
● auth-access = write
● password-db = passwd

sudo nano /svn/repos/helloworld/conf/passwd // Open password file to add a new
username and password
● ninad = ninad // Username = Password

sudo svnserve -d -r /svn/repos // Start Server


sudo kill -9 TaskID // To stop server (TaskId available in Task Manager)

On Client Side:

sudo apt-get install subversion // Install SVN on client machine

mkdir helloworld // Making a folder to track

cd helloworld // go inside the local repository


mkdir trunk // Main branch
mkdir branches // Branches folder
mkdir tags // Name to stable releases
cd .. // Go outside helloworld repository
svn import helloworld/ svn://192.168.56.101/helloworld // Import local
repository files into the server and add a commit message.

SVN will prompt for Username and Password

rm -rf helloworld // Delete untracked helloworld folder

svn co svn://192.168.56.101/helloworld // Checkout the server folder. Will create


a local folder

cd helloworld

ls -a //Should contain a .svn file

cd trunk

sudo nano code.txt // Create code.txt with the following content

This is some code
// save by pressing “ctrl+x” → “y” → enter

svn status

svn add code.txt // add file to track

svn commit code.txt -m "added file code.txt to repository" // Commit
changes to the central repository

To delete saved Credentials:

rm -r ~/.subversion/ // Remove .subversion file from home folder


nano code.txt // Edit code.txt file by adding following line
# This is a comment // Add a comment to file and save the file
// save pressing “ctrl+x” → “y” enter → enter

svn commit -m "Comment added" // Will ask for login information

Creating Branches:

// Go to “helloworld” folder

svn copy svn://192.168.56.101/helloworld/trunk
svn://192.168.56.101/helloworld/branches/informal -m "Creating informal branch"
// Copy trunk data to a new branch named "informal" (a URL-to-URL copy commits
immediately on the server, so a commit message is required)

cd branches // go to branches folder


ls
svn up // Update working directory
ls
cd informal // go to informal folder
nano code.txt
// add to the end of file
# Comment

svn commit code.txt -m "Added Comment" // Commit new changes to the branch (go
to trunk and check that the new commit is not added to trunk)

Merge Branches (Does not copy commit info):

// Go to branch folder “informal”

svn up

svn merge ^/trunk // Merge branch in trunk

// Go to trunk folder

svn merge --reintegrate ^/branches/informal // Reintegrate the informal branch
into trunk

svn commit -m "Merged branch into Trunk" // Commit the merge into the trunk

svn up

// To delete branch “informal” → go to “branches” folder

svn rm informal

svn commit -m "deleted informal branch"

// go to “Documents” folder

rm -rf helloworld

svn co svn://192.168.56.101/helloworld

cd helloworld/branches

ls
// To Checkout to a previous revision

svn co -r 4 svn://192.168.56.101/helloworld
Experiment 05: Docker Basics (Virtualization and Containerization)

Requirements:
An Account with Docker Hub
Enable hypervisor for windows steps here
(Note: If you have Virtualbox running then you already have
hypervisor ON)
Docker for Windows (Link in Resources): Installation steps here

Installation on Ubuntu:
sudo apt install docker.io
systemctl start docker

Resources:
Download Docker for Windows:
https://hub.docker.com/editions/community/docker-ce-desktop-windows/
User manual for Docker for Windows:
https://docs.docker.com/docker-for-windows/
Docker tutorials: https://www.tutorialspoint.com/docker/index.htm

Introduction:

Docker is a container management service. The keywords of Docker
are develop, ship and run anywhere. The whole idea of Docker is for
developers to easily develop applications, ship them into
containers which can then be deployed anywhere.

The initial release of Docker was in March 2013 and since then, it
has become the buzzword for modern world development, especially in
the face of Agile-based projects.
Features of Docker:

● Docker has the ability to reduce the size of development by
providing a smaller footprint of the operating system via
containers.
● With containers, it becomes easier for teams across different
units, such as development, QA and Operations to work
seamlessly across applications.
● You can deploy Docker containers anywhere, on any physical and
virtual machines and even on the cloud.
● Since Docker containers are pretty lightweight, they are very
easily scalable.

Components of Docker:

1. Docker for Mac - It allows one to run Docker containers on the
Mac OS.
2. Docker for Linux - It allows one to run Docker containers on
the Linux OS.
3. Docker for Windows - It allows one to run Docker containers on
the Windows OS.
4. Docker Engine - It is used for building Docker images and
creating Docker containers.
5. Docker Hub - This is the registry which is used to host
various Docker images.
6. Docker Compose - This is used to define applications using
multiple Docker containers.

Docker Images:

In Docker, everything is based on images. An image is a combination
of a file system and parameters. A Docker container image is a
lightweight, standalone, executable package of software that
includes everything needed to run an application: code, runtime,
system tools, system libraries and settings.

Docker Container:

A container is a standard unit of software that packages up code
and all its dependencies so the application runs quickly and
reliably from one computing environment to another.
Container images become containers at runtime and in the case of
Docker containers - images become containers when they run on
Docker Engine. Available for both Linux and Windows-based
applications, containerized software will always run the same,
regardless of the infrastructure. Containers isolate software from
its environment and ensure that it works uniformly despite
differences for instance between development and staging.

Docker Hub:

Docker Hub is a registry service on the cloud that allows you to
download Docker images that are built by other communities. You can
also upload your own Docker-built images to Docker Hub.

Docker Commands:

docker --version
docker help

docker images // List images with image-id

docker pull ubuntu // pull ubuntu image from DockerHub

docker ps // List active containers with container-id

docker ps -a // List all containers with container-id

docker run -it -d image-id // Create a container from an image (will
pull if unavailable)

docker exec -it container-id bash // Run a command (application) in
a running container (try apt update and the nano text editor)

docker diff container-id // Inspect changes to files or directories
on a container's filesystem

docker commit container-id new-image-name // Create a new image from
a container's changes, preferably named username/imagename

docker restart container-id // Restart one or more containers

docker rename container-id new-container-name // Rename a container

docker stop container-id // Stop one or more running containers

docker kill container-id // Kill one or more running containers
(abruptly stop container)

docker rm container-id // Remove one or more containers

docker container prune // Remove all stopped containers
docker rmi image-id // Remove one or more images
docker login // Log in to a Docker registry
docker logout // Log out from a Docker registry

docker push image-name // Push an image or a repository to a registry

docker export container-id // Export a container's filesystem as a
tar archive

docker import file-path-with-filename // Import the contents from a
tarball to create a filesystem image

Creating python app image file

mkdir python-app
cd python-app

nano app.py
print("Hello from python file")
.. save file ctrl + x -> Y

nano Dockerfile

// Contents of Dockerfile

FROM python
COPY . /src
CMD ["python", "/src/app.py"]
.. save file ctrl + x -> Y

docker build . -t python_app

docker images
docker run python_app
Experiment 06: Docker Advanced

Docker Compose:

Compose is a tool for defining and running multi-container Docker
applications. With Compose, you use a YAML file to configure your
application’s services. Then, with a single command, you create and
start all the services from your configuration.

Contents of “docker-compose.yml” file:


version: '2'
services:

  postgres:
    image: "ninadg89/postgres"
    restart: always
    environment:
      POSTGRES_PASSWORD: postgres
    ports:
      - "5434:5432"

  pgadmin:
    image: "ninadg89/pgadmin4"
    restart: always
    environment:
      PGADMIN_DEFAULT_EMAIL: pgadmin
      PGADMIN_DEFAULT_PASSWORD: pgadmin
    ports:
      - "82:80"

  webapp:
    build: ./tomcat
    restart: always
    ports:
      - "8082:8080"

Contents of “./tomcat/Dockerfile”
FROM tomcat:8.0

EXPOSE 8080

Docker Swarm:

A Docker Swarm is a group of either physical or virtual machines
that are running the Docker application and that have been
configured to join together in a cluster. Once a group of machines
has been clustered together, you can still run the Docker commands
that you're used to, but they will now be carried out by the
machines in your cluster. The activities of the cluster are
controlled by a swarm manager, and machines that have joined the
cluster are referred to as nodes.

Initialize and join Swarm:

docker swarm init --advertise-addr 192.168.99.101 // Initializing the
swarm will create a <swarm-join-token>
docker node ls // List all nodes in the swarm
docker swarm join --token <swarm-join-token> // Join the swarm as a
worker node
docker swarm leave // Leave a swarm (a manager should add the
"--force" flag)

docker ps // List active containers

docker service ls // List services running in the swarm
docker service create -d --name "webapp_ninad" -p 8082:8080
ninadg89/pgadmin4 // Create new service
docker service ps webapp_ninad // List running instances of the service
docker service rm webapp_ninad // Remove service from the swarm

Replication of services in swarm:

docker service create -d --name "webapp_ninad" -p 8082:8080
--replicas 2 ninadg89/pgadmin4 // Create service in replicated mode

docker service create -d --name "webapp_ninad" -p 8082:8080 --mode
global ninadg89/pgadmin4 // Create service in global mode

Scaling services:

docker service scale webapp_ninad=5 // Scale Service

Draining nodes (generally done to managers):

docker node update --availability drain manager // Drain node

docker node inspect --pretty manager // Check node status
docker node update --availability active manager // Make node active
Experiment 07: Jenkins (Continuous Integration)

Prerequisites:
Java version 8 OR 11 installed

Jenkins installation procedure:

https://www.youtube.com/watch?v=1y8RsUbxtAw&t=512s // For Windows
https://www.youtube.com/watch?v=YyPMhKdBCxw // For Ubuntu (recommend
using Lubuntu)
Jenkins Introduction Tutorial // Link of the video

What is Jenkins ?

Jenkins is a powerful application that allows continuous
integration and continuous delivery of projects, regardless of
the platform you are working on. It is free and open source and
can handle any kind of build or continuous integration. You can
integrate Jenkins with a number of testing and deployment
technologies.

Continuous Integration:
Continuous Integration is a development practice that requires
developers to integrate code into a shared repository at
regular intervals. This practice was introduced to avoid
discovering issues late in the build lifecycle. Continuous
integration requires the developers to have frequent builds.
The common practice is that whenever a code commit occurs, a
build should be triggered.

Jenkins Dashboard: ( Default Location: localhost:8080 )

Prerequisite:
Create Java file “HelloHome.java” on your Desktop with code:

class HelloHome{
public static void main(String args[]){
System.out.println("Hello Home");
}
}

Creating a new job:

Job 1: JavaHomeBuild (Running at Periodic Intervals)

1. Select "New job" on the dashboard


2. Enter name as "JavaHomeBuild"
a. Select “Freestyle Project”
b. click "ok"
3. In "General" tab
a. Add "description" as “Build Project for Developer”
4. In "SCM" (source code management) tab
a. select "none" (default)
5. In "Triggers" tab
a. Select "Periodically"
b. Enter the value "* * * * *"
6. In "Build" tab
a. select "Execute Windows Batch Command" // “Execute
Shell” for Ubuntu users
b. Type "cd <path-of-java-file-saved>"
c. Type "javac HelloHome.java"
7. Click on "Save"

Chaining jobs together:

Job 2: HelloHomeRun (Job Configuration copied from other job)

1. Select "New job" on the dashboard


2. Enter name as "HelloHomeRun"
a. Select “Copy from”
b. In text box enter “HelloHomeBuild”
c. Click "ok"
3. In "General" tab
a. Edit "description" as “Run Project for Intern”
4. In "SCM" (source code management) tab
a. select "none" (default)
5. In "Triggers" tab
a. Select "After other projects"
b. In the text box "HelloHomeBuild"
6. In "Build" tab
a. select "Execute Windows Batch Command" // “Execute
Shell” for Ubuntu users
b. Type "cd <path-of-java-file-saved>"
c. Type "java HelloHome"
7. Click on "Save"

Job 3: TestHomeMsg (Simple job)

1. Select "New job" on the dashboard


2. Enter name as "TestHomeMsg"
a. Select “Freestyle Project”
b. Click "ok”
3. In "General" tab
a. Edit "description" as “Display Message for tester”
4. In "SCM" (source code management) tab
a. select "none" (default)
5. In "Triggers" tab
a. Select "After other projects"
b. In the text box "HelloHomeRun"
6. In "Build" tab
a. select "Execute Windows Batch Command" // “Execute
Shell” for Ubuntu users
b. Type "echo ----- Successful Execution -----"
7. Click on "Save"

Running Jenkins Project:

● Click on the arrow appearing in front of the project name on the
Dashboard
● Select "Build Now" (Execution will start automatically for
jobs scheduled to build periodically)

Viewing Console Output:

● Click on the project_name on Dashboard


● Click on the respective build number
● Click on “Console Output” in the left hand options pane

Renaming Project:

● Click on the arrow appearing in front of the project name on the Dashboard
● Select “Rename”
● Enter the “New_name” in text box and click on Rename
Experiment 08: Jenkins (Role Based Authorization Strategy)

Important Links:
Video 1 Jenkins Configuration:
https://drive.google.com/open?id=1lsSXVrS2237QWN3ug-mS-msOiKTDvI5M

Trigger Job Remotely:

1. Select "Configure" in the menu shown by the arrow next to the project name


2. In the "Build Triggers" section:
a. Check the "Trigger builds remotely" option
b. Give Authentication Token as 12345
3. Keeping the rest of values unchanged click on Save
4. Open a new tab and type
“localhost:8080/job/JavaHomeBuild/build?token=12345” in address
bar. Hit enter key.
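The same trigger can be fired from a script instead of the browser, which is how other tools usually chain into Jenkins. A minimal sketch that rebuilds the trigger URL used above, assuming Jenkins runs on localhost:8080; the actual curl call is left commented out so nothing is triggered accidentally:

```shell
JENKINS_URL="http://localhost:8080"   # assumes Jenkins on the default port
JOB="JavaHomeBuild"
TOKEN="12345"                         # the Authentication Token set above
TRIGGER_URL="${JENKINS_URL}/job/${JOB}/build?token=${TOKEN}"
echo "${TRIGGER_URL}"
# curl -X POST "${TRIGGER_URL}"       # would actually queue a build
```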

Add Plugins to Jenkins:

Method 1: Search Plugin in Available tab


1. Click on Manage Jenkins on Dashboard
2. Click on Manage Plugins
3. In Available tab search and install Role-based Authorization
Strategy

Method 2: Install “.hpi” file


1. Download the ".hpi" file of the required plugin
2. Go to the "Advanced" tab in Manage Plugins and upload the ".hpi" file

Add Users to Jenkins:

1. From Dashboard click on Manage Jenkins.


2. Scroll down to click on Manage Users.
3. From the left side menu select Create User.
4. Create Three new users:
a. Username: UserOne
Password: userone
Confirm password: userone
Full name: user one
E-mail address: [email protected]
Click on Create User
b. Username: UserTwo
Password: UserTwo
Confirm password: UserTwo
Full name: UserTwo
E-mail address: [email protected]
Click on Create User
c. Username: UserThree
Password: UserThree
Confirm password: UserThree
Full name: UserThree
E-mail address: [email protected]
Click on Create User

Create Roles in Jenkins

1. Click on Manage Jenkins on Dashboard


2. Select Configure Global Security
3. Check Enable Security checkbox
4. Under the Authorization section select "Role-Based Strategy"
5. Click on Save

Assign Roles to Users

1. Click on Manage Jenkins on Dashboard


2. Click on Manage and assign Roles
3. In Manage Roles Section:
a. In Global Roles add an "employee" role and check the boxes
to grant overall Read permission
4. In Item Roles Section add roles with patterns (Role - Pattern)
and grant job Read permission to each:
a. JavaDeveloper -> Java.*
b. Tester -> Test.*
c. Intern -> Hello.*
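The patterns in the Item Roles section are regular expressions matched against job names, which is why "Java.*" covers JavaHomeBuild but not HelloHomeRun or TestHomeMsg. A quick sketch approximating the match with grep (illustrative only; Jenkins does the real matching internally):

```shell
# Item-role patterns are regexes over job names; grep -E approximates the match
for job in JavaHomeBuild HelloHomeRun TestHomeMsg; do
    if echo "$job" | grep -qE '^Java.*'; then
        echo "JavaDeveloper role covers: $job"
    fi
done
# prints: JavaDeveloper role covers: JavaHomeBuild
```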

Assign Project Roles

1. Click on Manage Jenkins on Dashboard


2. Click on Assign Roles:
a. Add UserOne/UserTwo/UserThree and assign each the global
role "employee"
3. In Item Roles section:
a. Check box of JavaDeveloper for UserOne
b. Check box of Tester for UserTwo
c. Check box of Intern for UserThree
Experiment 09: Ansible (Provisioning)

Requirements:

● Ansible CONTROLLER should be Linux


● Go to Microsoft store & install “Ubuntu” - CONTROLLER (Windows
Subsystem for Linux):
https://www.youtube.com/watch?v=X-DHaQLrBi8&t=472s
● Install two Virtual Systems NODES (LUbuntu) on Virtual Box

On NODE Machines [VirtualBox - LUbuntu machines]:

Set network adapter type as bridged


sudo apt-get install openssh-server // install ssh server

Ansible installation (Ansible CONTROLLER has to be linux):

sudo apt-get install openssh-server // install ssh server


sudo apt-get install sshpass // Install sshpass to enable password passing
sudo apt-get install ansible // Install ansible
ansible --version

Edit ansible.cfg [ /etc/ansible/ansible.cfg ] file to add:

[defaults]
host_key_checking=false

Create Host inventory (Either add separate inventory file or add hosts to
/etc/ansible/hosts)

sudo leafpad inventory


OR
sudo leafpad /etc/ansible/hosts // add at the end
Add contents:

[developers]
local1 ansible_host=192.168.0.106 ansible_ssh_user=lubuntu1
ansible_ssh_pass=ninad ansible_sudo_pass=ninad

[integrators]
local2 ansible_host=192.168.0.107 ansible_ssh_user=lubuntu2
ansible_ssh_pass=ninad ansible_sudo_pass=ninad

[allsys]
local1 ansible_host=192.168.0.106 ansible_ssh_user=lubuntu1
ansible_ssh_pass=ninad ansible_sudo_pass=ninad
local2 ansible_host=192.168.0.107 ansible_ssh_user=lubuntu2
ansible_ssh_pass=ninad ansible_sudo_pass=ninad
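Each bracketed header defines a host group that ad-hoc commands and playbooks can target by name. A small sketch that writes a throwaway copy of the inventory (the example IPs from above; the /tmp/inventory-demo path is arbitrary) and lists its groups:

```shell
# Write a minimal copy of the inventory and list its group headers
cat > /tmp/inventory-demo <<'EOF'
[developers]
local1 ansible_host=192.168.0.106 ansible_ssh_user=lubuntu1

[integrators]
local2 ansible_host=192.168.0.107 ansible_ssh_user=lubuntu2

[allsys]
local1 ansible_host=192.168.0.106 ansible_ssh_user=lubuntu1
local2 ansible_host=192.168.0.107 ansible_ssh_user=lubuntu2
EOF
grep '^\[' /tmp/inventory-demo   # each group is targetable with "ansible <group> ..."
```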

Using Ad-hoc commands: (using PING module)

sudo ansible local1 -m ping // Will ping “local1” server


sudo ansible -i inventory developers -m ping // Will ping
“developers” group
sudo ansible -i inventory all -m ping // will ping all groups

Creating ansible playbooks: installjava.yml, uninstalljava.yml,
installjenkins.yml, uninstalljenkins.yml

sudo leafpad installjava.yml

---
- hosts: all

  become: yes
  become_method: sudo
  remote_user: lubuntu1

  tasks:
    - name: Update apt repository
      apt: update_cache=yes

    - name: install java
      command: apt-get install openjdk-8-jdk -y

sudo leafpad uninstalljava.yml

---
- hosts: all

  become: yes
  become_method: sudo
  remote_user: lubuntu1

  tasks:
    - name: uninstall java
      command: apt-get autoremove openjdk-8-jdk -y

sudo leafpad installjenkins.yml

---
- hosts: integrators

  become: yes
  become_method: sudo
  remote_user: lubuntu2

  tasks:
    - name: ensure jenkins apt repository key is installed
      apt_key: url=https://pkg.jenkins.io/debian-stable/jenkins.io.key state=present

    - name: ensure the repository is configured
      apt_repository: repo='deb https://pkg.jenkins.io/debian-stable binary/' state=present

    - name: ensure jenkins is installed
      apt: name=jenkins state=present update_cache=yes

    - name: ensure jenkins is running
      service: name=jenkins state=started

sudo leafpad uninstalljenkins.yml

---
- hosts: local2

  become: yes
  become_method: sudo
  remote_user: lubuntu2

  tasks:
    - name: Ensure jenkins is Stopped
      service: name=jenkins state=stopped

    - name: Ensure jenkins is Uninstalled
      apt: name=jenkins state=absent

Install Java on all systems using the default hosts file:

ansible-playbook installjava.yml

Uninstall java from group "developers":

ansible-playbook -l developers uninstalljava.yml

Install jenkins on integrators group specified in yml file

ansible-playbook -i inventory installjenkins.yml

Uninstall jenkins from local2 machine specified in yml file

ansible-playbook -i inventory -l local2 uninstalljenkins.yml


Experiment 10: Combining all tools (GIT, Maven, Docker, Ansible and Jenkins)

Resources:
Video Link : https://www.youtube.com/watch?v=13FpCxCClLY
Git Repository : https://github.com/gaikwadninad89/allinone

Remote Hosts [preferably Amazon EC2 instances]: two servers, an
Integration server and a Deployment / Production server

Ubuntu Server (free version) on amazon EC2:


https://aws.amazon.com/console/

Create Ubuntu server on Amazon EC2 [Video]:


https://www.youtube.com/watch?v=a8CBE_WN7rA

Go to amazon EC2 -> create Ubuntu server instance (Free) -> Keep
default options for “Instance type, Instance Details, Add Storage,
Add Tags” -> In “Configure security Groups” Add Rule “Type = Custom
TCP Rule”, “Port Range = 8080” and “Source as Custom ‘0.0.0.0/0,
::/0’” -> Create a new Key pair and name it “ubuntu-pass” ->
Download the private key “ubuntu-pass.pem”

Test the SSH connection with EC2 host:

sudo chmod 400 ubuntu-pass.pem // Gives read only permission

sudo ssh -i ubuntu-pass.pem ubuntu@<EC2-public-IP> // Log into remote server via SSH

OR for Windows

ssh -i ubuntu-pass.pem ubuntu@<EC2-public-IP>
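The chmod 400 step matters because ssh refuses private keys that are readable by anyone other than the owner. A minimal sketch with a dummy key file (assumes GNU coreutils stat on Linux; the /tmp path is arbitrary):

```shell
# ssh rejects private keys with loose permissions; 400 = owner read-only
touch /tmp/demo-key.pem
chmod 400 /tmp/demo-key.pem
stat -c '%a' /tmp/demo-key.pem   # prints the octal mode: 400
```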

On Integration Server:

Install Git -> Install jdk8 (Prerequisite for jenkins) -> Install
Jenkins -> Install Ansible -> Install Docker
Step 1: Create a Java Web Application in NetBeans

Create a Java NetBeans project (Web Application using Maven) on your
computer and upload it to your GitHub repository ("allinone")

Step 2: Create a New Jenkins Pipeline Project and Configure it to Poll SCM (GitHub):

In Jenkins:
1. Create “new Item” -> name it “AllInOneApp”
2. Select type as “Pipeline” -> Click “OK”
3. In Configurations give “Description”
4. Go to pipeline tab in script make following changes

pipeline{
agent any

stages{
stage('Polling SCM'){
steps{
// Code 1
}
}
}
}

For Code 1:
1. Click on ”Pipeline Syntax” link outside Script box
2. Click on “Snippet Generator”
3. In sample step select “git:Git”
4. Provide URL and Branch details
5. For credentials
a. Click on Add
b. Select kind as “Username and Password”
c. Provide Username and Password in the allotted Textbox
d. Give id as “GitHub”
e. Add Description as “GitHub”
f. Click on “Add”
6. Select created GitHub credentials from the list
7. Click on “Generate Pipeline Script”
8. Copy the generated script as “Code 1”

// End Result
pipeline{
    agent any

    stages{
        stage('Polling SCM'){
            steps{
                git branch: 'main', credentialsId: 'GitHub', url: 'https://github.com/gaikwadninad89/allinone.git'
            }
        }
    }
}

Step 3: Build the code using Maven

In Jenkins:
1. Go to “manage jenkins”
2. In “Global Tool Configuration” tab
3. Go to “Maven” section
4. Click “Maven Installation” -> “Add Maven”
5. Give name as “maven3”
6. Click on “Install Automatically”
7. Keep the rest as default
8. Click on ”Save”

On Integration Server shell:


Go to "/var/lib/jenkins/tools/" and list the files and folders; you
should see the Maven installation folder

Change pipeline Script to the following:

// End Result
pipeline{
    agent any
    tools{
        maven 'maven3'
    }

    stages{
        stage('Polling SCM'){
            steps{
                git branch: 'main', credentialsId: 'GitHub', url: 'https://github.com/gaikwadninad89/allinone.git'
            }
        }
        stage('Build using Maven'){
            steps{
                sh "mvn clean package"
            }
        }
    }
}

Step 4: Build a Tomcat8 container image with War file deployed in it

Create a "Dockerfile" in the GitHub repository with contents:

FROM tomcat:8.0

COPY target/*.war /usr/local/tomcat/webapps/webapp.war

In Shell of Integration Server:


1. Type "sudo usermod -a -G docker jenkins"
2. Type "sudo systemctl restart jenkins" OR "sudo service jenkins
restart"
3. Type "sudo systemctl enable docker"
4. Type "sudo service docker start"

Change the pipeline script as:

pipeline{
    agent any
    tools{
        maven 'maven3'
    }
    environment {
        DOCKER_TAG = getVersion()
    }

    stages{
        stage('Polling SCM'){
            steps{
                git branch: 'main', credentialsId: 'GitHub', url: 'https://github.com/gaikwadninad89/allinone.git'
            }
        }
        stage('Build using Maven'){
            steps{
                sh "mvn clean package"
            }
        }
        stage('Build Docker Image'){
            steps{
                sh "docker build . -t ninadg89/devopsapp:${DOCKER_TAG}"
            }
        }
    }
}

def getVersion(){
    def commitHash = // Code 2
    return commitHash
}

For Code 2:
1. Go to “Snippet Generator”
2. Select sample step as “sh:Shell Script”
3. In the text box type the command “git rev-parse --short HEAD”
4. Click on “Advanced”
5. Check the “Return Standard output” checkbox
6. Click “Generate Pipeline Script”
7. Copy the script generated in place of “Code 2”

// End Result
pipeline{
    agent any
    tools{
        maven 'maven3'
    }
    environment {
        DOCKER_TAG = getVersion()
    }

    stages{
        stage('Polling SCM'){
            steps{
                git branch: 'main', credentialsId: 'GitHub', url: 'https://github.com/gaikwadninad89/allinone.git'
            }
        }
        stage('Build using Maven'){
            steps{
                sh "mvn clean package"
            }
        }
        stage('Build Docker Image'){
            steps{
                sh "docker build . -t ninadg89/devopsapp:${DOCKER_TAG}"
            }
        }
    }
}

def getVersion(){
    // trim() removes the trailing newline from the shell output,
    // which would otherwise produce an invalid Docker tag
    def commitHash = sh(returnStdout: true, script: 'git rev-parse --short HEAD').trim()
    return commitHash
}
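getVersion() tags each Docker image with the abbreviated commit hash, so every image can be traced back to the exact commit it was built from. A sketch in a throwaway repository showing the value the pipeline captures (the repo path, user name, and commit message are illustrative):

```shell
# Demonstrate the value getVersion() captures, in a throwaway repo
repo=$(mktemp -d)
cd "$repo"
git init -q
git -c user.name=demo -c user.email=demo@example.com \
    commit -q --allow-empty -m "first commit"
TAG=$(git rev-parse --short HEAD)   # same command the pipeline runs
echo "image tag would be: devopsapp:${TAG}"
```

Note that the raw shell output ends with a newline, which is why the Groovy helper trims it before using it as a tag.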

Step 5: Push the built Docker image to DockerHub

For logging to DockerHub with credentials in Jenkins:


1. Go to “Snippet Generator”
2. Select sample step as “withcredentials”
3. In Bindings click on “Add” -> choose “secret text”
4. Name the variable as “dockerHubPwd”
5. In Credentials click on “Add”
a. Click on “Jenkins”
b. Select kind as “Secret Text”
c. In secret text box type password
d. Write id as “docker-hub”
e. Write description as “docker-hub”
f. Click on “Add”
6. In credentials select the docker-hub credential created
7. Click “Generate Pipeline Script”
8. Copy the script generated as below:

withCredentials([string(credentialsId: 'docker-hub', variable: 'dockerHubPwd')]) {
    // some block
}

Replace “// some block” to produce:

// End Result
pipeline{
    agent any
    tools{
        maven 'maven3'
    }
    environment {
        DOCKER_TAG = getVersion()
    }

    stages{
        stage('Polling SCM'){
            steps{
                git branch: 'main', credentialsId: 'GitHub', url: 'https://github.com/gaikwadninad89/allinone.git'
            }
        }
        stage('Build using Maven'){
            steps{
                sh "mvn clean package"
            }
        }
        stage('Build Docker Image'){
            steps{
                sh "docker build . -t ninadg89/devopsapp:${DOCKER_TAG}"
            }
        }
        stage('Push image to DockerHub'){
            steps{
                withCredentials([string(credentialsId: 'docker-hub', variable: 'dockerHubPwd')]) {
                    sh "docker login -u ninadg89 -p ${dockerHubPwd}"
                }
                sh "docker push ninadg89/devopsapp:${DOCKER_TAG}"
            }
        }
    }
}

def getVersion(){
    // trim() removes the trailing newline from the shell output
    def commitHash = sh(returnStdout: true, script: 'git rev-parse --short HEAD').trim()
    return commitHash
}

Step 6: Deploy Docker Container on Production server

In Integration server shell


1. Type “ansible --version”
2. Copy executable location “/usr/bin/”

In Jenkins:
1. Go to “Manage Jenkins” -> “Global Tool Configuration”
2. In ansible section click “Add”
3. Set name as “ansible”
4. Set path to executable as “/usr/bin/”
5. Click “Save”

Create “inventory” file on git repository with contents:

[dev]
54.146.13.210 ansible_user=ubuntu

Create ansible playbook "deploy-docker.yml" on the git repository with
contents:

---
- hosts: dev
  become: yes
  become_method: sudo

  tasks:
    - name: Update apt repository
      apt: update_cache=yes

    - name: Install python pip
      apt:
        name: python3-pip
        state: present

    - name: Install Docker
      apt:
        name: docker.io
        state: present

    - name: Start Docker
      service:
        name: docker
        state: started

    - name: Install docker-py python module
      pip:
        name: docker-py
        state: present

    - name: Start the Container
      docker_container:
        name: devopsapp
        image: "ninadg89/devopsapp:{{DOCKER_TAG}}"
        state: started
        restart: yes
        ports:
          - "8080:8080"
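Once the playbook has run, the WAR copied in as webapp.war should be served by Tomcat under the /webapp context path on port 8080. A small sketch that builds the URL to check, using the example production IP from the inventory above (the curl call is commented out since it needs the live server):

```shell
SERVER="54.146.13.210"                    # production host from the inventory above
APP_URL="http://${SERVER}:8080/webapp/"   # context path comes from webapp.war
echo "${APP_URL}"
# curl -fsS "${APP_URL}"                  # uncomment to check the deployed app
```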

Add Stage ‘Deploy webapp using ansible':

pipeline{
    agent any
    tools{
        maven 'maven3'
    }
    environment {
        DOCKER_TAG = getVersion()
    }

    stages{
        stage('Polling SCM'){
            steps{
                git branch: 'main', credentialsId: 'GitHub', url: 'https://github.com/gaikwadninad89/allinone.git'
            }
        }
        stage('Build using Maven'){
            steps{
                sh "mvn clean package"
            }
        }
        stage('Build Docker Image'){
            steps{
                sh "docker build . -t ninadg89/devopsapp:${DOCKER_TAG}"
            }
        }
        stage('Push image to DockerHub'){
            steps{
                withCredentials([string(credentialsId: 'docker-hub', variable: 'dockerHubPwd')]) {
                    sh "docker login -u ninadg89 -p ${dockerHubPwd}"
                }
                sh "docker push ninadg89/devopsapp:${DOCKER_TAG}"
            }
        }
        stage('Deploy webapp using ansible'){
            steps{
                // Code 3
            }
        }
    }
}

def getVersion(){
    // trim() removes the trailing newline from the shell output
    def commitHash = sh(returnStdout: true, script: 'git rev-parse --short HEAD').trim()
    return commitHash
}

For Code 3:
1. Go to “Snippet Generator”
2. Select sample step as “ansible Playbook:”
3. Write Playbook file path in workspace as “deploy-docker.yml”
4. Write Inventory file path in workspace as “inventory”
5. For Credentials click "Add" -> "Jenkins":
a. Select kind as "SSH username and private key"
b. Give id as "dev-server"
c. Description as "dev-server"
d. Username as "ubuntu"
e. In Private key select radio button "Enter directly"
f. Paste the RSA key present in "ubuntu-pass.pem" file
g. Click on "Add"
6. Select the SSH Credentials created
7. Select "Disable the host SSH key check" checkbox
8. Add extra parameters as "-e DOCKER_TAG=${DOCKER_TAG}"
9. Click "Generate Pipeline Script"
10. Copy the generated script as Code 3

// End Result
pipeline{
    agent any
    tools{
        maven 'maven3'
    }
    environment {
        DOCKER_TAG = getVersion()
    }

    stages{
        stage('Polling SCM'){
            steps{
                git branch: 'main', credentialsId: 'GitHub', url: 'https://github.com/gaikwadninad89/allinone.git'
            }
        }
        stage('Build using Maven'){
            steps{
                sh "mvn clean package"
            }
        }
        stage('Build Docker Image'){
            steps{
                sh "docker build . -t ninadg89/devopsapp:${DOCKER_TAG}"
            }
        }
        stage('Push image to DockerHub'){
            steps{
                withCredentials([string(credentialsId: 'docker-hub', variable: 'dockerHubPwd')]) {
                    sh "docker login -u ninadg89 -p ${dockerHubPwd}"
                }
                sh "docker push ninadg89/devopsapp:${DOCKER_TAG}"
            }
        }
        stage('Deploy webapp using ansible'){
            steps{
                ansiblePlaybook credentialsId: 'dev-server', disableHostKeyChecking: true, extras: '-e DOCKER_TAG=${DOCKER_TAG}', installation: 'ansible', inventory: 'inventory', playbook: 'deploy-docker.yml'
            }
        }
    }
}

def getVersion(){
    // trim() removes the trailing newline from the shell output
    def commitHash = sh(returnStdout: true, script: 'git rev-parse --short HEAD').trim()
    return commitHash
}

Final pipeline Script:

pipeline{
    agent any

    tools{
        maven 'maven3'
    }
    environment {
        DOCKER_TAG = getVersion()
    }
    stages{
        stage('Polling SCM'){
            steps{
                git branch: 'main', credentialsId: 'GitHub', url: 'https://github.com/gaikwadninad89/allinone.git'
            }
        }
        stage('Build using Maven'){
            steps{
                sh "mvn clean package"
            }
        }
        stage('Build Docker Image'){
            steps{
                sh "docker build . -t ninadg89/devopsapp:${DOCKER_TAG}"
            }
        }
        stage('Push image to DockerHub'){
            steps{
                withCredentials([string(credentialsId: 'docker-hub', variable: 'dockerHubPwd')]) {
                    sh "docker login -u ninadg89 -p ${dockerHubPwd}"
                }
                sh "docker push ninadg89/devopsapp:${DOCKER_TAG}"
            }
        }
        stage('Deploy webapp using ansible'){
            steps{
                ansiblePlaybook credentialsId: 'dev-server', disableHostKeyChecking: true, extras: '-e DOCKER_TAG=${DOCKER_TAG}', installation: 'ansible', inventory: 'inventory', playbook: 'deploy-docker.yml'
            }
        }
    }
}

def getVersion(){
    // trim() removes the trailing newline from the shell output
    def commitHash = sh(returnStdout: true, script: 'git rev-parse --short HEAD').trim()
    return commitHash
}

Final Repository Files:


Build the Jenkins Project:

Output on Production / Operations server:
