DevOps Lab Manual
Institute’s Mission
Department’s Mission
K. C. COLLEGE OF ENGINEERING
AND MANAGEMENT STUDIES AND
RESEARCH THANE (EAST).
Certificate
DATE :- .
_ _ _ _ _ _ _ _ _
Head of Department External Examiner
COLLEGE SEAL
SYLLABUS
2. Problem analysis: Identify, formulate, review research literature, and analyze complex
engineering problems reaching substantiated conclusions using first principles of mathematics,
natural sciences, and engineering sciences.
3. Design/development of solutions: Design solutions for complex engineering problems and design
system components or processes that meet the specified needs with appropriate consideration for the
public health and safety, and the cultural, societal, and environmental considerations.
6. The engineer and society: Apply reasoning informed by the contextual knowledge to assess
societal, health, safety, legal and cultural issues and the consequent responsibilities relevant to the
professional engineering practice.
7. Environment and sustainability: Understand the impact of the professional engineering
solutions in societal and environmental contexts, and demonstrate the knowledge of, and need for
sustainable development.
8. Ethics: Apply ethical principles and commit to professional ethics and responsibilities and
norms of the engineering practice.
9. Individual and team work: Function effectively as an individual, and as a member or leader
in diverse teams, and in multidisciplinary settings.
11. Project management and finance: Demonstrate knowledge and understanding of the
engineering and management principles and apply these to one’s own work, as a member and
leader in a team, to manage projects and in multidisciplinary environments.
12. Life-long learning: Recognize the need for, and have the preparation and ability to engage
in independent and life-long learning in the broadest context of technological change.
Department of Information Technology
Semester: VIII
Class : BEIT
Course Outcomes
At the end of the experiments, the student will be able to:
ITL803.1 Remember the importance of DevOps tools used in the software development life cycle.
ITL803.2 Understand the importance of Jenkins to build, deploy and test software applications.
ITL803.3 Examine the different version control strategies.
ITL803.4 Analyze and illustrate the containerization of OS images and deployment of applications over Docker.
ITL803.5 Summarize the importance of Software Configuration Management in DevOps.
ITL803.6 Synthesize the provisioning using Chef/Puppet/Ansible or Saltstack.
Rubrics of Practical
Implementation (5) | Understanding (5) | Punctuality & Discipline (5) | Total (15)
Practical Incharge
EXPERIMENT NO 1
1. Initialize
2. Add
3. Commit
4. Pull
5. Push
1. Branching
2. Merging
3. Rebasing
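A minimal command-line sketch of the operations listed above (the file name, remote name and branch names are placeholders):
git init                     (initialize a new repository)
git add index.html           (stage a file)
git commit -m "First commit" (record the staged changes)
git pull origin master       (fetch remote changes and merge them)
git push origin master       (upload local commits to the remote)
git branch feature           (create a new branch)
git checkout feature         (switch to the new branch)
git merge feature            (merge the branch, run from master)
git rebase master            (replay the current branch's commits on top of master)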
Installation of GIT
1. On Windows, download Git from https://git-scm.com/ and perform the straightforward installation.
2. On Ubuntu, install Git using $ sudo apt install git. Confirm the version after installation using the command $ git --version
Once installation is done, open the terminal in Ubuntu and perform the following steps; on Windows, right-click and select "Git Bash Here".
The output of Git Bash on Windows and the Git shell on Ubuntu is shown below.
By default, we can create a public repository on GitHub. We can copy the entire public repository of any other user into our own account using the "Fork" operation. Now fork the repository (sharing with other users who want to contribute).
Log in with another account → copy and paste the URL of the repository → then just click on Fork to clone it into the other account. Suppose we want to fork the public repository "timetracker". Search for the "timetracker" GitHub repository on Google and, once it is opened, click the "Fork" button at the top of the GitHub web page as shown below.
After forking, it will be added to your own account. GitHub also lists all the users who have forked the code.
To delete the repository, open the desired repository you want to delete and go to the settings option.
There you will see delete repository button to delete it.
Now, if you want to download a repository to the local machine, the git clone command is used, followed by the path to the repository. In GitHub, the path of a repository can be found through the Clone or download button, and the repository can be downloaded using the git clone command as shown below.
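For example, the command has the following form (the URL is a placeholder for the one copied from the Clone or download button):
git clone https://github.com/<username>/<repository>.git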
Conclusion:
Git is a powerful tool that can improve code development and documentation. Ultimately, a VCS not only gives users a well-documented "undo" button for their analyses, but it also allows for collaboration and sharing of code on a massive scale. Furthermore, it does not need to be learned in its entirety to be useful. Instead, you can derive tangible benefits from adopting version control in stages. With a few commands (git init, git add, git commit), you can start tracking your code development and avoid a file system full of copied files. Lastly, by forking
public repositories and sending pull requests, you can directly improve scientific software.
EXPERIMENT NO. - 02
Implementation (5) | Understanding (5) | Punctuality & Discipline (5) | Total (15)
Practical Incharge
EXPERIMENT NO 2
Aim: To perform version control on websites/software using Git with push and pull commands.
Theory:
First, open github.com and create a new account. After verifying the account through e-mail, create a repository on github.com.
Pull and Push Processes: The pull command is used to fetch changes from the GitHub repository into the local repository, while push is used to upload commits from the local repository to GitHub.
Push → push local changes to the web (remote) repository
Pull → pull remote changes into the local repository
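As a quick reference, the corresponding commands look like the following (assuming the remote is named origin and the branch is master):
git push origin master   (upload local commits to GitHub)
git pull origin master   (fetch remote changes and merge them into the local branch)
git fetch origin         (download remote changes without merging them)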
Push:
Pull:
Fetch:
Before committing changes in index.html
Conclusion:
By adding a few additional commands (git push, git clone, git pull) and a GitHub account, you can share your code online, transfer your changes across machines, and collaborate in small groups.
Lab outcome:
ITL803.1: Remember the importance of DevOps tools used in software development life cycle.
ITL803.3: Examine the different Version Control Strategies.
EXPERIMENT NO. - 03
Aim of the experiment :- To install and configure Jenkins to test and deploy Java or
Web Applications.
Implementation (5) | Understanding (5) | Punctuality & Discipline (5) | Total (15)
Practical Incharge
EXPERIMENT NO 3
Aim: To install and configure Jenkins to test and deploy Java or Web Applications.
Theory:
Output:
Step 1: Install GIT
Step 2: Install Notepad++
Course Outcome :- Understand the importance of Jenkins to build, deploy and test software applications.
Implementation (5) | Understanding (5) | Punctuality & Discipline (5) | Total (15)
Practical Incharge
EXPERIMENT NO 4
Theory:
“Continuous Integration is a software development practice where members of a team
integrate their work frequently, usually each person integrates at least daily - leading to
multiple integrations per day. Each integration is verified by an automated build (including
test) to detect integration errors as quickly as possible.” Put simply, continuous integration (CI) is the practice of automatically building and testing every change made to your code. Jenkins is a self-contained, open source automation server which can be
used to automate all sorts of tasks related to building, testing, and delivering or deploying
software. Our first job will execute the shell commands. The freestyle project provides
enough options and features to build the complex jobs that you will need in your projects.
OUTPUT:
1. Installation in a Docker container
docker pull jenkins/jenkins:lts
docker ps will not show a running container, as we have not yet started it. Start an instance of Jenkins with the following command:
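The exact command is shown in the snapshot; a typical invocation (the published ports and volume name below are assumptions) is:
docker run -d -p 8080:8080 -p 50000:50000 -v jenkins_home:/var/jenkins_home --name jenkins jenkins/jenkins:lts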
If we need other tools installed, we can create our own Dockerfile with the necessary tooling installed. For example, the following creates a new image based on Jenkins with Maven installed.
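The Dockerfile is not reproduced in the text; a minimal sketch that adds Maven on top of the official image could look like this:
FROM jenkins/jenkins:lts
USER root
RUN apt-get update && apt-get install -y maven
USER jenkins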
All the configuration and jobs will be stored in our user-defined directory.
2. Installation via the jenkins.war file
Download the jenkins.war file. From this file we can start Jenkins directly via the command line with java -jar jenkins*.war.
To run it in our Tomcat server, put the .war file into the webapps directory. If we start Tomcat, our Jenkins installation will be available under
http://localhost:8080/jenkins
If the jenkins.war is deployed in our webapps directory but cannot be started, and the Tomcat manager says FAIL - Application at context path /jenkins could not be started, we may need to grant the permissions for JENKINS_HOME.
sudo
3. Configure Jenkins
After installation, open a browser and connect to it. The default port of Jenkins is :8080,
therefore on our local machine we find it under the following URL:
http://localhost:8080/
We will need to copy the initial password from the file system of the server.
Afterwards we can select to install Plugins. Select the Install suggested Plugins to get a typical
configuration.
If we want to create a role-based authorization strategy, we first need to install the Role-based Authorization Strategy plugin. Go to Manage Jenkins → Manage Plugins → Available, enter Role-based Authorization Strategy in the filter box, and select and install the plugin. To see a list of commonly used plugins, go to Plugin management.
Now go to Manage Jenkins → Manage and Assign Roles → Assign Roles to grant users additional access rights.
Navigate to Manage Roles to define access restrictions in detail. Pattern is a regex value of the job name. The following grants unregistered users read-only access to our build jobs that start with L-, C-, I- or M-, and only those.
If we want to access a private Git repo, for example at GitHub, we need to generate an SSH key-pair. Create an SSH key with the following command.
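A typical key-generation command (the e-mail used as the key comment is a placeholder) is:
ssh-keygen -t rsa -b 4096 -C "[email protected]"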
The public key must be uploaded to the service we are using, e.g., GitHub.
4. Setting up a Jenkins build job
Enter a description for the job (project name) and configure how many builds should be retained and for how long.
Configure how the source code can be retrieved. If we are for example using Git, enter the URL to the Git
repository. If the repository is not public, we may also need to configure the credentials.
Specify when and how our build should be triggered. The following example polls the Git repository every 15 min. It triggers a build if something has changed in the repo.
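In the Poll SCM schedule field, a 15-minute polling interval is expressed in Jenkins cron syntax as:
H/15 * * * *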
To trigger a build after a commit on the Git repository, select GitHub hook trigger for GITScm
polling instead of Poll SCM.
Press Save to finish the job definition. Press Build Now on the job page to validate that the job
works as expected.
After a while the job should go to green or blue (depending on our configuration), if successful.
Click on the job and afterwards on Console Output to see the log file. Here we can analyze the
build errors.
5. Build Pipelines
Jenkins pipelines help us align the build process of a project. This is done by specifying tasks and the order in which they are executed. There are all kinds of possible tasks that a Jenkins pipeline can do for us. For example, build assets, send an email on error, send the build artifacts via SSH to our application server, etc.
Jenkins allows us to specify pipelines using a Jenkinsfile. This is just a text file that contains the necessary data for Jenkins to execute the pipeline.
Jenkins supports two different syntaxes.
1. Declarative (since Pipeline version 2.5)
2. Scripted
For this tutorial we will focus on the declarative approach.
The following example shows a pipeline with 2 stages:
pipeline {
    agent any
    stages {
        stage('Build Assets') {
            agent any
            steps {
                echo 'Building Assets...'
            }
        }
        stage('Test') {
            agent any
            steps {
                echo 'Testing stuff...'
            }
        }
    }
}
The agent directive tells Jenkins to allocate a workspace and an executor for the pipeline. Without it, the pipeline is not valid; the directive is therefore required.
Setup using the Blue Ocean Plugin
The above process can also be done using the Blue Ocean Jenkins Plugin.
To install the plugin go to Manage Jenkins → Manage Plugins → Available and select the Blue Ocean plugin.
After the installation is finished we have an additional menu entry called Open Blue Ocean in the main Jenkins navigation.
The Blue Ocean application will provide a link to the GitHub page we need to visit. The necessary
permissions that Blue Ocean needs to operate are already selected. Add a description and click
on Generate Token at the bottom of the page.
Copy the generated token and paste it in the Blue Ocean mask.
Select the account the repository belongs to and select the repository.
Adding steps to our Pipeline
In the next screen we will see a visual representation of our Pipeline. Here we can add or remove steps.
To create a new step click on + in the canvas. A menu will open on the right that lets us specify a name and what steps we want to perform.
After we have finished editing the Pipeline, Blue Ocean will offer to commit the newly created pipeline to our repository.
Under the hood, Blue Ocean simply creates a valid Jenkinsfile to be used by Jenkins.
After committing Jenkins will build the project using the newly modified Pipelines.
6. Restart Jenkins
After installing plugins we will need to restart Jenkins. We can do so by appending restart to the Jenkins URL, for example http://localhost:8080/restart.
CONCLUSION:
Here, we learned about Jenkins and how to install and configure it.
EXPERIMENT NO. - 05
Implementation (5) | Understanding (5) | Punctuality & Discipline (5) | Total (15)
Practical Incharge
EXPERIMENT NO 5
Theory:
1. First, we add the GPG key by entering the following command in the command line:
2. Once we run the command sudo apt-get install apt-transport-https ca-certificates we get
the following result.
3. The next step is to add the new GPG key. The following command will download the key
with the ID 58118E89F3A912897C070ADBF76221572C52609D from
the keyserver hkp://ha.pool.sks-keyservers.net:80 and adds it to the adv keychain.
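The command described above is typically entered as follows:
sudo apt-key adv --keyserver hkp://ha.pool.sks-keyservers.net:80 --recv-keys 58118E89F3A912897C070ADBF76221572C52609D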
4. Once we run the command echo "deb https://apt.dockerproject.org/repo ubuntu-trusty main" | sudo tee /etc/apt/sources.list.d/docker.list we get the following result -
5. Next, we issue the apt-get update command to update the packages on the Ubuntu system.
6. To verify that the package manager is pointing to the right repository, you can do it by issuing
the apt-cache command.
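For example (assuming the docker-engine package name used in this guide):
apt-cache policy docker-engine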
7. Issue the apt-get update command to ensure all the packages on the local system are up
to date.
8. We have to install the linux-image-extra-* kernel packages, which allows one to use
the aufs storage driver. It can be done by using the following command.
sudo apt-get install linux-image-extra-$(uname -r) linux-image-extra-virtual
9. The final step is to install Docker and we can do this with the following command −
sudo apt-get install -y docker-engine
10. To see the version of Docker running, you can issue the following command − docker version
11. To see more information on the Docker running on the system, you can issue the
following command − docker info
Conclusion: By following all the steps, we learned how to install Docker on Ubuntu and configure it.
EXPERIMENT NO. - 06
Aim of the experiment :- To perform Docker commands with push and pull commands.
Implementation (5) | Understanding (5) | Punctuality & Discipline (5) | Total (15)
Practical Incharge
EXPERIMENT NO 6
Aim: To perform Docker commands with push and pull commands.
Theory:
To Build your own Docker Image
1. Write a docker file
#mkdir mydockerbuild (create a directory named mydockerbuild)
#cd mydockerbuild (go to the directory created above)
#vi Dockerfile (create a new Dockerfile using the vi editor)
FROM docker/whalesay:latest
RUN apt-get -y update && apt-get install -y fortunes
CMD /usr/games/fortune -a | cowsay
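2. Build the image from the Dockerfile. The exact command appears in the snapshot; it typically looks like this (the tag docker-whale matches the name used in the following steps):
#docker build -t docker-whale .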
If you see a ‘no permission’ error, try prefixing the above command with ‘sudo’ as shown in the above snapshot.
3. Now you can see your latest build using the ‘docker images’ command
#docker images
If you are a programmer or in some way related to coding or the software world, I assume that you must be aware of GitHub. Similarly, there exists Docker Hub.
What is DockerHub?
People all over the world create Docker images. You can find these images by browsing the
Docker Hub. In this next section, you’ll search for and find the image you’ll use in the rest of
this getting started.
What is Docker Store?
The Docker Store contains images from individuals like you and official images from
organisations like RedHat, IBM, Google, Microsoft, and a whole lot more. Each image
repository contains information about an image.
We will push our docker-whale image to the Docker Hub account which you created above.
4.2 Tag the ‘docker-whale’ image using the ‘docker tag’ command and the image ID.
#docker tag <image-id of docker-whale> <your dockerhub username>/docker-whale:latest
Now if you use the ‘docker images’ command to list images, you shall see your Docker Hub username against the docker-whale image as shown in the above snapshot.
5. Push your tagged image to Docker Hub, using the ‘docker push’ command
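The push command has the following form (substitute your Docker Hub username):
#docker push <your dockerhub username>/docker-whale:latest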
7.2 Let’s remove all versions of docker-whale image on our local system
#docker rmi -f <Image ID of docker-whale>
Use the ‘docker images’ command to confirm that all instances of ‘docker-whale’ have been removed.
When you use ‘docker run’ it automatically downloads (pulls) images that don’t yet
exist locally, creates a container, and starts it. Use the following command to pull and
run the ‘docker-whale’ image, substituting your Docker Hub username.
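For example (again substituting your Docker Hub username):
#docker run <your dockerhub username>/docker-whale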
Output:
Step 1. Create a docker Hub account
Step 2: Build a demo application, my-python-app, by running the following commands in the command prompt.
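The exact commands are shown in the snapshot; a rough sketch, assuming a Dockerfile for the app already exists in the current directory and the app listens on port 5000, might be:
docker build -t my-python-app .
docker run -d -p 5000:5000 my-python-app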
Till now we have created the my-python-app image.
See if the application is working in the browser using the port number.
Step 3: See all the images present in Docker. Here you will see that my-python-app is also present.
Step 6: Check the images present; this should include the image name with the repository.
Step 8: Check that the image with the repository is now present in the list.
Conclusion:
We built a Docker image with the help of Python, learned how to use docker push to share our images to the Docker Hub registry, and also learned how to pull an image.
EXPERIMENT NO. - 07
Aim of the experiment :-To create Docker containers of different operating system images.
Implementation (5) | Understanding (5) | Punctuality & Discipline (5) | Total (15)
Practical Incharge
EXPERIMENT NO 7
Theory:
set GOOS=windows
go build -o quotes-windows-amd64.exe
Try out the Windows executable without using Docker:
./quotes-windows-amd64.exe
Open another “Windows PowerShell” terminal window and try out the service using curl:
Expected response:
StatusCode : 200
StatusDescription : OK
Content
{"hardwareArchitecture":"amd64","operatingSystem":"windows","ipAddress":"4d131b511ab9/f
e80::9846:5be3:c0bb:2d91%Ethernet172.24.224.172","quote":"In Go, the code does exactly
what it says on the page ."...
Ensure that your “Docker for Windows” runs Windows containers. Enter the command:
docker info
FROM microsoft/nanoserver
EXPOSE 8080
ADD quotes-windows-amd64.exe /
ENTRYPOINT ["./quotes-windows-amd64.exe"]
The current version of “Docker for Windows” has a limitation that prevents accessing ports published by containers via localhost, e.g. curl http://localhost:8080/api/quote does not work.
For details, see https://blog.sixeyed.com/published-ports-on-windows-containers-dont-do-
loopback/.
Instead, we can use the PC’s IP address. You can use ipconfig to get the IP address.
E.g.:
Expect a similar response as from the curl command above. Verify that operatingSystem field
has the value windows!
docker rm -f quotes
set GOOS=linux
go build -o quotes-linux-amd64
The Dockerfile for building the Linux Docker image, Dockerfile-linux-amd64, looks like:
FROM scratch
EXPOSE 8080
ADD quotes-linux-amd64 /
ENTRYPOINT ["./quotes-linux-amd64"]
Expected response:
StatusCode : 200
StatusDescription : OK
Content :
{"hardwareArchitecture":"amd64","operatingSystem":"linux","ipAddress":"0c4e0824f479/172.1
7.0.2","quote":"I like a lot of the design decisions they made in the [Go] language. Basically, I
like all oft...
Verify that the operatingSystem field now has the value linux!
docker rm -f quotes
Now, it’s time to combine the two platform specific Docker images into one common Docker
image.
We will use the standalone tool manifest-tool. Executables can be downloaded from:
https://github.com/estesp/manifest-tool/releases.
I used version v0.7.0 of the tool compiled for macOS (I got stuck on some strange authentication
problem when trying out the Windows version).
image: magnuslarsson/quotes:24-go
manifests:
  - image: magnuslarsson/quotes:24-go-linux-amd64
    platform:
      architecture: amd64
      os: linux
  - image: magnuslarsson/quotes:24-go-windows-amd64
    platform:
      architecture: amd64
      os: windows
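Assuming the manifest above is saved as, for example, quotes-manifest.yaml, the combined multi-platform image is pushed with:
manifest-tool push from-spec quotes-manifest.yaml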
Verify that we now have a Docker image for our Go service that supports both Linux and
Windows:
Expected response:
Image: magnuslarsson/quotes:24-go
* Manifest List: Yes
* Supported platforms:
- linux/amd64
- windows/amd64:10.0.14393.1944
You can also take a look into DockerHub to see the resulting three Docker images, e.g. in my
case: https://hub.docker.com/r/magnuslarsson/quotes/tags/
Expected result:
Now we should be able to run our Go service in both Windows and Linux containers using one
and the same Docker image: magnuslarsson/quotes:24-go!
Since we currently have “Docker for Windows” configured to run Linux containers, let’s start
with trying it out on Linux:
docker run -d -p 8080:8080 --name quotes magnuslarsson/quotes:24-go
Verify that the operatingSystem field in the response has the value linux!
docker rm -f quotes
Verify that the operatingSystem field in the response now has the value windows!
docker rm -f quotes
Conclusion:
I have learned how to create a Docker image that works on different hardware architectures and operating systems. I have also created a Docker image based on a service written in Go and packaged it for use in both Linux and Windows on 64-bit Intel-based hardware.
EXPERIMENT NO. - 08
Aim of the experiment :-To perform continuous testing of web applications using Selenium
Implementation (5) | Understanding (5) | Punctuality & Discipline (5) | Total (15)
Practical Incharge
EXPERIMENT NO. 08
Theory:
Continuous Testing is the process of executing automated tests as part of the software delivery
pipeline in order to obtain feedback on the business risks associated with a software release
candidate as rapidly as possible. It evolves and extends test automation to address the increased
complexity and pace of modern application development and delivery.
Continuous Testing in DevOps is a software testing type that involves testing the software at
every stage of the software development life cycle. The goal of Continuous testing is evaluating
the quality of software at every step of the Continuous Delivery Process by testing early and
testing often. The continuous testing process in DevOps involves stakeholders like developers, DevOps engineers, and QA.
The old way of testing was hand-off centric. The software was handed off from one team to another. A project would have definite Development and QA phases. QA teams always wanted more time to ensure quality. The goal was that quality should prevail over the project schedule. However, business wants faster delivery of software to the end user. The newer the software, the better it can be marketed, increasing the revenue potential of the company. Hence, a new way of testing evolved.
Selenium
Selenium is an open-source tool which automates web browser testing. It is mainly used for functional testing (unit testing and regression testing).
In this practical, we use Selenium for creating test cases, TestNG for getting a detailed report of those test cases, and Jenkins to run our test cases. Jenkins will check whether our test cases pass, and if a test case fails we will receive an email notification. This is how we complete the process of continuous testing in DevOps.
Test Case: Automate amazon.in search and Facebook login and, at the end, receive a mail notification if the test result fails. This whole process has to be automated in a continuous manner for continuous testing.
Steps for Demonstration:
Step 1: Create Test cases with Selenium on Eclipse IDE.
Step 1.1: Download Selenium standalone-3.9.0.jar and Selenium server-3.9.0.zip files from the
link below and unzip the file.
https://selenium-release.storage.googleapis.com/index.html?path=3.9/
Step 1.2: Download latest Java as a prerequisite for Eclipse followed by installation of Eclipse.
Step 1.3: Launch the Eclipse IDE and open Eclipse Marketplace to install TestNG plugin.
Step 1.4: Restart Eclipse IDE and create a new java project under New 🡪 Project 🡪 Java Project
and Select JRE 1.8 under “Use Execution Environment for JRE”
Step 1.5: Now add new package from File Tab.
Step 1.6: Select Project 🡪 Click New 🡪 Class and Click Finish and enter the below code
package <Package name>;

import java.util.concurrent.TimeUnit;

import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;
import org.testng.annotations.Test;

public class <File name> {

    @Test
    public void JenkinsTest() {
        // Path to the chromedriver executable downloaded for your Chrome version
        System.setProperty("webdriver.chrome.driver", "file_location");
        WebDriver driver = new ChromeDriver();

        // Search for "Nike shoes" on amazon.in and then navigate back
        driver.navigate().to("https://www.amazon.in");
        driver.manage().timeouts().implicitlyWait(5, TimeUnit.SECONDS);
        driver.findElement(By.id("twotabsearchtextbox")).sendKeys("Nike shoes");
        driver.findElement(By.xpath("/html/body/div[1]/header/div/div[1]/div[3]/div/form/div[2]/div/input")).click();
        driver.manage().timeouts().implicitlyWait(10, TimeUnit.SECONDS);
        driver.navigate().back();
        String title = driver.getTitle();
        System.out.println("Page Title is:-" + title);

        // Log in to facebook.com with the test credentials
        driver.navigate().to("https://www.facebook.com");
        driver.findElement(By.name("email")).sendKeys("[email protected]");
        driver.findElement(By.name("pass")).sendKeys("1234");
        driver.findElement(By.id("loginbutton")).click();
        driver.manage().timeouts().implicitlyWait(60, TimeUnit.SECONDS);
        driver.quit();
    }
}
Step 1.7: Select Project 🡪 Properties 🡪 Java Build Path 🡪 Select Add External JARs and Select
all the Jar files in the unzipped Selenium Jar files also in Lib folder and Selenium standalone jar
file. Apply changes
Step 1.8: Select and right click Project 🡪 Properties 🡪 Java Build Path 🡪 Select Add Library and
Select TestNG and click on Finish
Step 2: Converting into XML files for executing TestNG test suite.
Step 2.1: Select and right click on Class file 🡪 TestNG🡪 Convert to TestNG and type the below
code in Preview
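The generated testng.xml typically looks like the following (the suite/test names and the class reference are placeholders that should match your project):
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE suite SYSTEM "https://testng.org/testng-1.0.dtd">
<suite name="Suite">
  <test name="Test">
    <classes>
      <class name="<Package name>.<File name>"/>
    </classes>
  </test>
</suite>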
Test Result: (Explanation: A new Chrome browser opens, automatically opens amazon.in, enters "Nike shoes" in the search bar, navigates back, then opens facebook.com, logs in with email [email protected] and password 1234, and hits Login. This has been coded in the class file.)
Step 3: Create a Windows batch file
Create a text document, type the command below, and save it with a .bat extension (using Save As) to create a Windows batch file.
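A rough example of the batch file contents, assuming the compiled classes are in the project's bin folder and the Selenium/TestNG JARs are in a lib folder inside the project:
java -cp "<path to project>\bin;<path to project>\lib\*" org.testng.TestNG testng.xml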
Step 4.2: Create a new Freestyle project on Jenkins by browsing to localhost:8080 or whichever
port has been assigned to Jenkins during the installation.
Step 4.3: Under General Section 🡪 Tab on Advanced and Select the checkbox “Use custom
workspace” and in the Directory Form Fill enter the workspace location of Eclipse project
Step 4.4: Under Build Section 🡪 Select “Add build step” button 🡪 Select Execute Windows
batch command and in Post-build Actions Section 🡪 Select Add “post-build action” 🡪 Select
“Email notification” and add your email address in the Recipients field. Finally, click on Apply
and Save buttons.
Step 5: Schedule Jobs & configure email notification
Step 5.1: Go the Jenkins Dashboard 🡪 Manage Jenkins 🡪 Manage Plugins 🡪 Search Email
Extension Template 🡪 click on the checkbox and click on Install without restart.
Step 5.2: Turn On “Less Secure app access” of your google account, in order to receive mails
Step 5.5: Click on Test configuration by sending test e-mail, Enter mail address in Test e-mail
recipient and hit Test Configuration Button.
Step 5.6: Check the test mail in your Gmail inbox
Step 5.7: Go to the Dashboard 🡪 Select the project 🡪 Select Configure 🡪 Build Trigger
Section 🡪 Select Build periodically checkbox and Enter the Schedule details in cron format
H 10 * * *
Meaning: the job runs once every day during the 10 o'clock hour in the morning (the H symbol makes Jenkins pick a consistent minute within that hour).
Step 5.10: Verify email notification feature by creating an error by commenting any line in class
file under your project in Eclipse IDE
Step 5.11: Check your Gmail Inbox for failed Test Case mail
Conclusion:
We have successfully completed the task of executing continuous testing: Selenium was used to build test cases, TestNG to create the testng.xml file, and Jenkins was integrated to run the tests at specified time intervals and send an email notification if the test cases fail.
EXPERIMENT NO. - 09
Aim of the experiment :- To install and configure Software Configuration Management using
Puppet tool.
Implementation (5) | Understanding (5) | Punctuality & Discipline (5) | Total (15)
Practical Incharge
EXPERIMENT NO 9
Aim: To install and configure Software Configuration Management using Puppet tool.
Theory:
Puppet is a configuration management tool that simplifies system administration.
Puppet uses a client/server model in which your managed nodes, running a process called the
Puppet agent, talk to and pull down configuration profiles from a Puppet master.
Puppet deployments can range from small groups of servers up to enterprise-level
operations. This guide will demonstrate how to install Puppet 6.1 on three servers:
● A Puppet master running Ubuntu 18.04
● A managed Puppet Agent node running Ubuntu 18.04
STEPS:
1. First, install the Puppet master and Puppet agent (slave) on Ubuntu. In the master virtual machine there is no certificate yet; we will confirm that by running the following command. In the Puppet agent virtual machine there is a Puppet agent certificate, and we will fetch the Puppet master certificate.
We can see the certificate sent to the Puppet master by the Puppet agent and will sign that certificate.
Now we will securely establish the connection between the Puppet master and the Puppet agent.
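With Puppet 6, the pending certificate can be listed and signed on the master with commands of this form (the agent certname is a placeholder):
sudo /opt/puppetlabs/bin/puppetserver ca list
sudo /opt/puppetlabs/bin/puppetserver ca sign --certname <agent-node-certname>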
Now, here we’ll deploy php and mysql using Puppet. We will first download the predefined modules for php and mysql, which are available on Puppet Forge. We cannot deploy the modules directly; we need to declare their classes.
Now, we’ll define the file in the manifest directory.
Now we’ll write the code in order to deploy the apache in the same pp file.
Puppet agent automatically pulls from the puppet master.
After that, we will install and download the php module from the Forge.
We will check the downloaded modules from Puppet Forge by using the following command.
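For example, assuming the puppetlabs-mysql and puppet-php modules from Puppet Forge:
sudo puppet module install puppetlabs-mysql
sudo puppet module install puppet-php
puppet module list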
Now we can deploy the php and mysql modules on the agent by using the following command.
puppet agent -t
So, we have successfully deployed php and mysql using Puppet.
Conclusion:
Here, we learned how to do configuration management easily and precisely using the Puppet tool.
EXPERIMENT NO. - 10
Aim of the experiment :- To install and configure Software Configuration Management and
provisioning using Ansible tool
Implementation (5) | Understanding (5) | Punctuality & Discipline (5) | Total (15)
Practical Incharge
EXPERIMENT NO 10
Aim: To install and configure Software Configuration Management and provisioning using
Ansible tool.
Theory:
Ansible is a free and open source automation engine that removes the workload
associated with repetitive tasks. It allows you to comfortably simplify your workload by
automating different tasks in your IT environment. It makes use of SSH protocol to retrieve
information from remote machines and manage them.
How to install Ansible:
https://docs.ansible.com/ansible/latest/installation_guide/intro_installation.html
Ansible is successfully installed in the Ubuntu operating system. You can check this by running the following command.
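On Ubuntu, the installation and the check typically look like this:
sudo apt update
sudo apt install ansible
ansible --version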
Conclusion:
Ansible can easily run and configure Unix-like systems as well as Windows systems to provide
infrastructure as code.