UNIT 5 Part 1


Unit 5

DevOps Pipeline
• A DevOps delivery pipeline is a series of steps
and tools that facilitate the continuous
delivery of software applications. It
encompasses the entire process, from code
development to deployment and monitoring.
• Here is an example of a typical DevOps delivery
pipeline:
• 1. Code Development: Developers write and
test the code for new features or bug fixes.
They typically use version control systems like
Git to manage code changes.
• 2. Continuous Integration (CI): The code changes are
automatically merged with the existing codebase. CI
tools like Jenkins or Travis CI are commonly used to
build and test the application.
• 3. Automated Testing: Various types of tests, such as
unit tests, integration tests, and end-to-end tests,
are performed automatically to ensure the code
quality and functionality.
• 4. Artifact Generation: The CI system produces deployable artifacts,
such as executable files or container images, which are ready to be
deployed.
• 5. Deployment: The artifacts are deployed to the target environment,
whether it's a development, staging, or production environment.
Deployment tools like Ansible or Kubernetes may be used to
automate this process.
• 6. Continuous Delivery (CD): Once the code is
successfully deployed, additional tests, such as
smoke tests or performance tests, may be
performed to ensure the application works as
expected in the production environment.

• 7. Monitoring and Feedback: Continuous monitoring
tools, such as Prometheus or New Relic, are used to
track the application's performance and gather
feedback about any issues or bottlenecks.
• 8. Continuous Improvement: Based on the
monitoring data and user feedback, developers make
further improvements or bug fixes to continuously
enhance the application.
• The goal of a DevOps delivery pipeline is to automate
and streamline the software delivery process,
enabling teams to deliver high-quality software more
quickly and reliably. By continuously integrating code
changes, automating testing, and enabling rapid
deployment, DevOps pipelines help improve
collaboration, increase productivity, and ensure
faster time-to-market for software products.
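• To make the stages above concrete, the sketch below models a delivery pipeline as a small Python script that runs one command per stage. It is only an illustration: the commands (git, pytest, docker) and the image name app:latest are placeholder assumptions, and a real pipeline would normally be defined in a CI tool such as Jenkins rather than a hand-rolled script.

```python
# Illustrative sketch of a minimal delivery pipeline runner.
# The commands and image name below are placeholders, not a prescribed setup.
import subprocess
import sys

STAGES = [
    ("checkout", ["git", "pull"]),                               # fetch latest code
    ("test",     ["pytest", "-q"]),                              # CI + automated tests
    ("build",    ["docker", "build", "-t", "app:latest", "."]),  # artifact generation
    ("deploy",   ["docker", "run", "-d", "app:latest"]),         # deployment
]

def run_pipeline():
    for name, cmd in STAGES:
        print(f"--- stage: {name} ---")
        if subprocess.run(cmd).returncode != 0:
            # A failing stage stops the pipeline so the team can be notified.
            sys.exit(f"stage '{name}' failed, aborting pipeline")
    print("pipeline finished successfully")

if __name__ == "__main__":
    run_pipeline()
```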
What is Selenium
• Selenium is one of the most widely used open-source
Web UI (User Interface) automation testing suites. It
was originally developed by Jason Huggins in 2004 as
an internal tool at ThoughtWorks. Selenium supports
automation across different browsers, platforms and
programming languages.
What is Selenium
• Selenium can be easily deployed on platforms
such as Windows, Linux, Solaris and
Macintosh. Moreover, it supports mobile operating
systems such as iOS, Windows Mobile and Android.
What is Selenium
• Selenium supports a variety of programming
languages through the use of drivers specific to each
language. Languages supported by Selenium include
C#, Java, Perl, PHP, Python and Ruby. Currently,
Selenium WebDriver is most popular with Java and
C#. Selenium test scripts can be coded in any of the
supported programming languages and can be run
directly in most modern web browsers. Browsers
supported by Selenium include Internet Explorer,
Mozilla Firefox, Google Chrome and Safari.
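• As an illustration, a minimal Selenium test script in the Python binding might look like the sketch below. The target URL and the assertion are placeholders chosen only for the example.

```python
# Minimal Selenium WebDriver sketch (Python binding, Selenium 4+ assumed).
from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()            # start a Chrome browser session
try:
    driver.get("https://example.com")  # open the page under test
    heading = driver.find_element(By.TAG_NAME, "h1")
    assert "Example" in heading.text   # simple functional check
finally:
    driver.quit()                      # always release the browser
```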
What does Selenium software do?
• Automated Testing: In larger projects, without Selenium the tester would have
to manually test each and every piece of functionality. With Selenium, these
repetitive manual tasks are automated, reducing the burden and stress on the
testers.
• Cross Browsers Compatibility: Selenium supports a wide range of browsers such
as Chrome, Mozilla Firefox, Internet Explorer, Safari, and Opera.
• Increases Test Coverage: With the automation of tests, the overall testing time
gets reduced, freeing up time for the tester to cover more test scenarios in the
same amount of time.
• Reduces Test Execution Time: Since Selenium supports parallel test execution, it
greatly helps in reducing overall test execution time.
• Multiple OS Support: Selenium WebDriver provides support across multiple
Operating Systems like Windows, Linux, UNIX, Mac, etc. With
Selenium WebDriver you can create a test case on Windows OS and execute it
on Mac OS.
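• The cross-browser and cross-platform points above can be sketched as a single test routine executed against more than one browser. The snippet below is illustrative and assumes both Chrome and Firefox (with their drivers) are installed.

```python
# Illustrative cross-browser run: the same steps executed in Chrome and Firefox.
from selenium import webdriver

def smoke_test(driver):
    driver.get("https://example.com")   # placeholder URL
    assert "Example" in driver.title

for browser in (webdriver.Chrome, webdriver.Firefox):
    driver = browser()
    try:
        smoke_test(driver)
        print(browser.__name__, "passed")
    finally:
        driver.quit()
```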
Architecture
• In the Selenium RC architecture, Selenium Core is first injected into the web
browser. Selenium Core then receives instructions from the RC server and
converts them into JavaScript commands. This JavaScript code is responsible
for accessing and testing the web elements.
Architecture
• To overcome the limitations of RC, Selenium WebDriver was developed.
WebDriver is faster because it interacts directly with the browser
and there is no involvement of an external proxy server. The
architecture is also simpler, as the browser is controlled from the OS
level.
Architecture
• Another benefit of WebDriver is that it
supports the HtmlUnit driver, which is a
headless driver: the browser has no GUI.
RC, on the other hand, does not support the
HtmlUnit driver. These are some of the
reasons why WebDriver scores over RC.
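• HtmlUnit itself is a Java-only driver, but the same headless idea can be illustrated with a modern browser's headless mode. The sketch below uses Chrome's headless option from the Python binding purely as an analogy for a browser with no GUI.

```python
# Illustrative headless run with Selenium WebDriver (Python, Chrome headless).
# Note: HtmlUnitDriver is Java-only; Chrome's headless mode is used here only
# to demonstrate the "browser without a GUI" idea.
from selenium import webdriver

options = webdriver.ChromeOptions()
options.add_argument("--headless")        # run without opening a visible window

driver = webdriver.Chrome(options=options)
driver.get("https://example.com")
print(driver.title)                       # the page is still loaded and scriptable
driver.quit()
```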
Architecture
Features of Selenium
• Selenium is an open-source and portable web
testing framework.
• Selenium IDE provides a record and playback
feature for authoring tests without the need to
learn a test scripting language.
• It can be considered a leading testing platform
which helps testers record their actions and export
them as a reusable script through a simple-to-understand
and easy-to-use interface.
Features of Selenium
• It also supports parallel test execution which reduces
time and increases the efficiency of tests.
• Selenium can be integrated with frameworks like Ant
and Maven for source code compilation.
• Selenium can also be integrated with testing
frameworks like TestNG for application testing and
generating reports.
• Selenium requires fewer resources as compared to
other automation test tools.
• Selenium supports various operating systems,
browsers and programming languages.
Following is the list:
– Programming Languages: C#, Java, Python, PHP,
Ruby, Perl, and JavaScript
– Operating Systems: Android, iOS, Windows, Linux,
Mac, Solaris.
– Browsers: Google Chrome, Mozilla Firefox,
Internet Explorer, Edge, Opera, Safari, etc.
• The WebDriver API has been introduced in Selenium, which is one of
the most important modifications made to Selenium.
• Selenium WebDriver does not require server installation; test
scripts interact directly with the browser.
• Selenium commands are categorized into different
classes, which makes them easier to understand and implement.
• Selenium Remote Control (RC) in conjunction with the WebDriver
API is known as Selenium 2.0. This version was built to
support dynamic web pages and Ajax.
Selenium Limitations
• Selenium does not support automation testing for desktop
applications.
• Selenium requires high skill sets in order to automate tests
more effectively.
• Since Selenium is open source software, you have to rely on
community forums to get your technical issues resolved.
• We can't perform automation tests on web services like SOAP
or REST using Selenium.
• We should know at least one of the supported programming
languages to create test scripts in Selenium WebDriver.
• It does not have a built-in Object Repository like UFT/QTP to
maintain objects/elements in a centralized location. However,
we can overcome this limitation using the Page Object Model
(a minimal sketch follows this list).
• Selenium does not have any inbuilt reporting capability; you
have to rely on plug-ins like JUnit and TestNG for test reports.
• It is not possible to perform testing on images. We need to
integrate Selenium with Sikuli for image based testing.
• Creating a test environment in Selenium takes more time as
compared to vendor tools like UFT, RFT, Silk Test, etc.
• Since no vendor is responsible for new features, they may or
may not work reliably.
• Selenium does not provide any test tool integration for Test
Management.
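• As mentioned in the limitations above, Selenium has no built-in object repository, so the Page Object Model keeps locators in one class per page instead of scattering them across test scripts. A minimal Python sketch (the URL and locators are hypothetical placeholders):

```python
# Minimal Page Object Model sketch: locators live in one class per page.
from selenium import webdriver
from selenium.webdriver.common.by import By

class LoginPage:
    URL = "https://example.com/login"     # hypothetical page
    USERNAME = (By.ID, "username")        # hypothetical locators
    PASSWORD = (By.ID, "password")
    SUBMIT = (By.ID, "submit")

    def __init__(self, driver):
        self.driver = driver

    def load(self):
        self.driver.get(self.URL)

    def login(self, user, password):
        self.driver.find_element(*self.USERNAME).send_keys(user)
        self.driver.find_element(*self.PASSWORD).send_keys(password)
        self.driver.find_element(*self.SUBMIT).click()

# A test talks only to the page object, never to raw locators:
driver = webdriver.Chrome()
page = LoginPage(driver)
page.load()
page.login("demo", "secret")
driver.quit()
```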
Advantages of Containerization
over Virtualization:
• Containers on the same OS kernel are lighter
and smaller
• Better resource utilization compared to VMs
• The boot-up process is short and takes only a few
seconds
Docker
• Docker is an OS virtualized software platform
that allows IT organizations to quickly create,
deploy, and run applications in Docker
containers, which have all the dependencies
within them. The container itself is a very
lightweight package with all the instructions
and dependencies—such as frameworks,
libraries, and bins—within it.
Docker Benefits
• Docker has the ability to reduce the size of development environments by
providing a smaller footprint of the operating system via containers.
• With containers, it becomes easier for teams across different units,
such as development, QA and Operations to work seamlessly across
applications.
• You can deploy Docker containers anywhere, on any physical and
virtual machines and even on the cloud.
• Since Docker containers are pretty lightweight, they are very easily
scalable.
• The Docker container can be moved from
environment to environment very easily. In
a DevOps life cycle, Docker really shines when
used for deployment. When you deploy your
solution, you want to guarantee that the code
tested will actually work in the production
environment. In addition, when you're building
and testing the code, it's beneficial to have a
container running the solution at those stages
because you can validate your work in the same
environment used for production.
• You can use Docker throughout multiple
stages of your DevOps cycle, but it is
especially valuable in the deployment stage,
especially since it allows developers to use
rapid deployment. In addition, the
environment itself is highly portable and was
designed with efficiencies that will enable you
to run multiple Docker containers in a single
environment, unlike traditional virtual
machine environments.
• The virtual environment has a hypervisor layer, whereas Docker has a
Docker engine layer.
• There are additional layers of libraries within the virtual machine, each
of which compounds and creates very significant differences between
a Docker environment and a virtual machine environment.
• With a virtual machine, the memory usage is very high, whereas, in a
Docker environment, memory usage is very low.
• In terms of performance, when you start building out a virtual
machine, particularly when you have more than one virtual machine
on a server, the performance becomes poorer. With Docker, the
performance is always high because of the single Docker engine.
• In terms of portability, virtual machines just are not ideal. They’re still
dependent on the host operating system, and a lot of problems can
happen when you use virtual machines for portability. In contrast,
Docker was designed for portability. You can actually build solutions in
a Docker container, and the solution is guaranteed to work as you
have built it no matter where it’s hosted.
• The boot-up time for a virtual machine is fairly
slow in comparison to the boot-up time for a
Docker environment, in which boot-up is
almost instantaneous.

• One of the other challenges of using a virtual machine is that if you have
unused memory within the environment, you cannot reallocate it. If you
set up an environment that has 9 gigabytes of memory, and 6 of those
gigabytes are free, you cannot do anything with that unused memory.
With Docker, if you have free memory, you can reallocate and reuse it
across other containers used within the Docker environment.
• Another challenge of virtual machines is that running multiples of them
in a single environment can lead to instability and performance issues.
Docker, on the other hand, is designed to run multiple containers in the
same environment on a single hosted Docker engine.
• Virtual machines have portability issues; the software can work on one
machine, but if you move that virtual machine to another machine,
suddenly some of the software won’t work, because some dependencies
will not be inherited correctly. Docker is designed to be able to run across
multiple environments and to be deployed easily across systems.
• The boot-up time for a virtual machine is about a few minutes, in
contrast to the milliseconds it takes for a Docker environment to boot up.
How Does Docker Work?

• Docker works via a Docker engine that is
composed of two key elements: a server and a
client, with communication between the two via
a REST API. The client communicates instructions
to the server, which carries them out. On older
Windows and Mac systems, you can take
advantage of the Docker Toolbox, which
allows you to control the Docker engine using
Compose and Kitematic.
• A server, which is a type of long-running program called a
daemon process (the dockerd command).
• A REST API which specifies interfaces that programs can use
to talk to the daemon and instruct it what to do.
• A command line interface (CLI) client (the docker command).
• The CLI uses the Docker REST API to control or interact with
the Docker daemon through scripting or direct CLI
commands; many other Docker applications use the
underlying API and CLI. A programmatic sketch of driving
the daemon this way follows this list.
• REST stands for Representational State Transfer. This
means that when a client requests a resource using a
REST API, the server transfers back the current state of
the resource in a standardized representation.
• Kitematic is an open source project built to simplify
and streamline using Docker on a Mac or Windows PC.
Kitematic automates the Docker installation and setup
process and provides an intuitive graphical user
interface (GUI) for running Docker containers.
• Compose is a tool for defining and running multi-
container Docker applications. With Compose, you use
a YAML file to configure your application's services.
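• Because the daemon is driven through the REST API, it can also be controlled from code instead of the CLI. The sketch below is illustrative and assumes the Docker SDK for Python (pip install docker) and a running local Docker daemon; hello-world is just an example image.

```python
# Controlling the Docker daemon through its REST API via the Python SDK.
import docker

client = docker.from_env()                     # connect to the daemon's API socket

client.images.pull("hello-world")              # ask the daemon to pull an image
output = client.containers.run("hello-world")  # create and run a container
print(output.decode())                         # the container's stdout

for container in client.containers.list(all=True):
    print(container.name, container.status)    # inspect containers via the API
```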
What Is Kubernetes?

• Kubernetes is an open-source container
management (orchestration) tool. Its
container management responsibilities
include container deployment, scaling and
descaling of containers, and container load
balancing.
What is Kubernetes?

• Kubernetes is an open-source Container Management tool
that automates container deployment, container scaling, and
container load balancing (also called a container
orchestration tool). It is written in Golang and has a vast
community because it was first developed by Google and
later donated to the CNCF (Cloud Native Computing Foundation).
Kubernetes can group ‘n’ number of containers into one
logical unit for managing and deploying them easily. It works
brilliantly with all deployment models, i.e. public cloud, hybrid,
and on-premises.
Why Use Kubernetes?
• Companies out there may be using Docker,
Rocket, or simply Linux containers to
containerize their applications, and whichever
they use, they use it on a massive scale. They
don’t stop at one or two containers in production;
rather, they run tens or hundreds of containers
for load balancing traffic and ensuring high
availability.
• Keep in mind that, as traffic increases, they have
to scale up the number of containers to service
the requests that come in every second, and they
also have to scale down the containers when
demand is lower. Can all this be done natively?
Features Of Kubernetes
Kubernetes – Architecture
• Kubernetes Cluster mainly consists of Worker
Machines called Nodes and a Control Plane. In
a cluster, there is at least one worker node.
The Kubectl CLI communicates with the
Control Plane and Control Plane manages the
Worker Nodes.
• The Master controls the cluster and the nodes in
it. It ensures that execution happens only on the
nodes and coordinates it. The Nodes host the
containers; these containers are grouped logically
to form Pods. Each node can run multiple such
Pods, each a group of containers interacting with
each other, for a deployment.
Kubernetes – Cluster Architecture

• Kubernetes has a client-server architecture
with master and worker nodes; the master is
installed on a single Linux system and the
nodes on many Linux workstations.
Kubernetes Components

• Kubernetes is composed of a number of
components, each of which plays a specific
role in the overall system. These components
can be divided into two categories:
• Nodes: each Kubernetes cluster requires at
least one worker node; the worker machines
together make up the nodes on which our
containers are deployed.
• Control plane: it manages the worker nodes
and any pods contained within them.
Control Plane Components

• The control plane is basically a collection of
various components that help us manage the
overall health of a cluster, for example when we
want to set up new pods, destroy pods, scale
pods, etc. Basically, 4 services run on the Control
Plane:
• Kube-API server
• The API server is the component of the Kubernetes control plane
that exposes the Kubernetes API. It is like an initial gateway to
the cluster that listens to updates or queries via a CLI like Kubectl.
Kubectl communicates with the API Server to state what needs to
be done, like creating or deleting pods. It also works as
a gatekeeper: it generally validates the requests it receives and then
forwards them to other processes. No request can be passed to the
cluster directly; it has to go through the API Server (a short client
sketch follows the Node Components below).
• Kube-Scheduler
• When the API Server receives a request for scheduling pods, the
request is passed on to the Scheduler. It intelligently decides
on which node to schedule each pod for better efficiency of the
cluster.
• Kube-Controller-Manager
• The Controller Manager runs the controllers that continuously
watch the state of the cluster and move it toward the desired
state, for example replacing pods when a node fails or
maintaining the correct number of replicas.
• etcd
• etcd is a consistent, distributed key-value store that holds the
entire cluster state and configuration. The API Server is the
component that reads from and writes to etcd.
• Node Components
• These are the nodes where the actual work happens. Each Node
can have multiple pods and pods have containers running inside
them. There are 3 processes in every Node that are used to
Schedule and manage those pods.
• Container runtime
• A container runtime is needed to run the application containers
inside pods, for example Docker.
• kubelet
• kubelet interacts with both the container runtime as well as the
Node. It is the process responsible for starting a pod with a
container inside.
• kube-proxy
• It is the process responsible for forwarding the request from
Services to the pods. It has intelligent logic to forward the request
to the right pod in the worker node.
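• Since every request has to go through the Kube-API server, client libraries talk to it the same way Kubectl does. The sketch below uses the official Kubernetes Python client (pip install kubernetes) and assumes a kubeconfig pointing at a cluster; the deployment name my-app and namespace default are placeholders.

```python
# Talking to the cluster through the Kube-API server with the official
# Kubernetes Python client (kubeconfig assumed to be configured).
from kubernetes import client, config

config.load_kube_config()                 # read credentials, like kubectl does
core = client.CoreV1Api()
apps = client.AppsV1Api()

# List every pod the scheduler has placed on the worker nodes.
for pod in core.list_pod_for_all_namespaces().items:
    print(pod.metadata.namespace, pod.metadata.name, pod.status.phase)

# Scale a deployment: the API server validates the request, then the
# controller manager and scheduler create and place the extra pods.
apps.patch_namespaced_deployment_scale(
    name="my-app", namespace="default",
    body={"spec": {"replicas": 5}},
)
```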
Advantages
• 1. Scalability:
• Kubernetes makes it easy to scale applications up or down by adding or
removing containers as needed. This allows applications to automatically
adjust to changes in demand and ensures that they can handle large
volumes of traffic.
• 2. High availability:
• It automatically ensures that applications are highly available by
scheduling containers across multiple nodes in a cluster and automatically
replacing failed containers. This helps prevent downtime & ensures that
applications are always available to users.
• 3. Improved resource utilization:
• Kubernetes automatically schedules containers based on the available
resources, which helps to improve resource utilization and reduce waste.
This can help reduce costs and improve the efficiency of your applications.
• 4. Easy deployment & updates:
• It makes it easy to deploy & update applications by using a
declarative configuration approach. This allows developers to
specify the desired state of their applications & it will
automatically ensure that the actual state matches the
desired state.
• 5. Portable across cloud providers:
• It is portable across different cloud providers, which means
that you can use the same tools & processes to manage your
applications regardless of where they are deployed. This can
make it easier to move applications between cloud providers.
• Continuous integration is made possible with
Jenkins, a DevOps automation tool.
What is Continuous Integration?
• Under the development practice known as
continuous integration, developers must periodically
integrate new code into a shared repository.
• This idea was developed to solve the problem of
discovering issues late in the build lifecycle, after
they had already occurred.
• The developers must conduct frequent builds in
order to use continuous integration.
• It is standard procedure to launch a build whenever a
code commit takes place.
What Is Jenkins?
• Jenkins is an open-source solution comprising
an automation server to enable continuous
integration and continuous delivery (CI/CD),
automating the various stages of software
development such as build, test, and
deployment.
• Jenkins is a Java-based open-source automation
platform with plugins designed for continuous
integration. It is used to continually create and
test software projects, making it easier for
developers and DevOps engineers to integrate
changes to the project and for consumers to get
a new build. It also enables you to release your
software continuously by interacting with
various testing and deployment methods.
• Organizations may use Jenkins to automate
and speed up the software development
process. Jenkins incorporates a variety of
development life-cycle operations, such as
build, document, test, package, stage, deploy,
static analysis, and more.
What is Jenkins and why do we use it?
• Jenkins is an open-source automation tool
written in Java with plugins built for continuous
integration. Jenkins is used to build and test your
software projects continuously, making it easier
for developers to integrate changes to the
project, and making it easier for users to obtain a
fresh build. It also allows you to continuously
deliver your software by integrating with a large
number of testing and deployment technologies.
• With Jenkins, organizations can accelerate the software
development process through automation. Jenkins integrates
development life-cycle processes of all kinds, including build,
document, test, package, stage, deploy, static analysis, and
much more.
• Jenkins achieves Continuous Integration with the help of
plugins. Plugins allow the integration of various DevOps
stages. If you want to integrate a particular tool, you need to
install the plugins for that tool, for example Git, Maven 2
project, Amazon EC2, HTML Publisher, etc.
Continuous Integration With
Jenkins
• Before continuous integration, the development process had
several problems:
• Developers had to wait until the complete software was developed
to get the test results.
• There was a high possibility that the test results would show multiple
bugs, and it was tough for developers to locate those bugs because
they had to check the entire source code of the application.
• It slowed the software delivery process.
• Continuous feedback pertaining to things like coding or
architectural issues, build failures, test status and file release
uploads was missing, due to which the quality of software could go
down.
• The whole process was manual, which increased the risk of
frequent failure.
• It is evident from the above-stated problems that
not only did the software delivery process become
slow, but the quality of software also went down.
This led to customer dissatisfaction. To overcome
such chaos there was a dire need for a system
where developers could continuously trigger a
build and test for every change made in the source
code. This is what CI is all about. Jenkins is the
most mature CI tool available.
• First, a developer commits the code to the source code repository. Meanwhile, the
Jenkins server checks the repository at regular intervals for changes.
• Soon after a commit occurs, the Jenkins server detects the changes that have
occurred in the source code repository. Jenkins will pull those changes and will
start preparing a new build.
• If the build fails, then the concerned team will be notified.
• If the build is successful, then Jenkins deploys the build to the test server.
• After testing, Jenkins generates feedback and then notifies the developers about
the build and test results.
• It will continue to check the source code repository for changes made in the
source code and the whole process keeps on repeating.
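• The same cycle can also be driven from scripts through Jenkins' REST API. The sketch below assumes the python-jenkins package is installed and that a job named my-app-build exists; the server URL and credentials are placeholders.

```python
# Triggering and inspecting a Jenkins job over its REST API (python-jenkins).
import jenkins

server = jenkins.Jenkins("http://localhost:8080",
                         username="admin", password="api-token")

print(server.get_whoami()["fullName"])        # confirm the connection works

server.build_job("my-app-build")              # queue a new build of the job

info = server.get_job_info("my-app-build")    # look up the last completed build
last = info["lastCompletedBuild"]["number"]
build = server.get_build_info("my-app-build", last)
print(build["result"])                        # e.g. SUCCESS or FAILURE
```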
Jenkins Master
• Your main Jenkins server is the Master. The Master’s job is to
handle:
• Scheduling build jobs.
• Dispatching builds to the slaves for the actual execution.
• Monitoring the slaves (possibly taking them online and offline as
required).
• Recording and presenting the build results.
• A Master instance of Jenkins can also execute build jobs
directly.
Jenkins Slave
• A Slave is a Java executable that runs on a remote machine.
The following are the characteristics of Jenkins Slaves:
• It listens for requests from the Jenkins Master instance.
• Slaves can run on a variety of operating systems.
• The job of a Slave is to do as it is told, which involves
executing build jobs dispatched by the Master.
• You can configure a project to always run on a particular
Slave machine or a particular type of Slave machine, or
simply let Jenkins pick the next available Slave.
Advantages of Jenkins include:
• It is an open-source tool with great community
support.
• It is easy to install.
• It has 1000+ plugins to ease your work. If a
plugin does not exist, you can code it and share
it with the community.
• It is free of cost.
• It is built with Java and hence, it is portable to
all the major platforms.
Jenkins Features
• Adoption: Jenkins is widespread, with more
than 147,000 active installations and over 1
million users around the world.
• Plugins: Jenkins is interconnected with well
over 1,000 plugins that allow it to integrate
with most of the development, testing and
deployment tools.
What Is Jenkins Used For?
• Deploying code into production
• If all of the tests developed for a feature or release
branch are green, Jenkins or another CI system may
automatically publish code to staging or production.
This is often referred to as continuous deployment.
Changes made before a merge can also be previewed,
typically in a dynamic staging environment. Once
combined, the code is distributed to a central
staging system, a pre-production system, or even a
production environment.
• Enabling task automation
• Another instance in which one may use Jenkins is
to automate workflows and tasks. If a developer is
working on several environments, they will need to
install or upgrade an item on each of them. If the
installation or update requires more than 100 steps to
complete, it will be error-prone to do it manually.
Instead, you can write down all the steps needed to
complete the activity in Jenkins. It will take less time,
and you can complete the installation or update
without difficulty.
• Reducing the time it takes to review a code
• Jenkins is a CI system that may communicate with
other DevOps tools and notify users when a merge request is
ready to merge. This is typically the case when all tests have
been passed and all other conditions have been satisfied.
Furthermore, the merging request may indicate the
difference in code coverage. Jenkins cuts the time it takes to
examine a merge request in half. Code coverage is determined
by the number of lines of code in a component and how many
of them are executed. Jenkins supports a transparent
development process among team members by reducing the
time it takes to review a code.
• Driving continuous integration
• Before a change to the software can be released, it must go
through a series of complex processes. The Jenkins pipeline
enables the interconnection of many events and tasks in a
sequence to drive continuous integration. It has a collection of
plugins that make integrating and implementing continuous
integration and delivery pipelines a breeze. A Jenkins pipeline’s
main feature is that each assignment or job relies on another task
or job.
• On the other hand, continuous delivery pipelines have different
states: test, build, release, deploy, and so on. These states are
inextricably linked to one another. A CD pipeline is a series of
events that allow certain states to function.
• Increasing code coverage
• Jenkins and other CI servers may verify code to increase
test coverage. Code coverage improves as a result of tests.
This encourages team members to be open and
accountable. The results of the tests are presented on the
build pipeline, ensuring that team members adhere to the
guidelines. Like code review, comprehensive code
coverage guarantees that testing is a transparent process
for all team members.

• Enhancing coding efficiency
• Jenkins dramatically improves the efficiency of the
development process. For example, a command
prompt code may be converted into a GUI button
click using Jenkins. One may accomplish this by
encapsulating the script in a Jenkins task. One may
parameterize Jenkins tasks to allow for
customization or user input. Hundreds of lines of
code can be saved as a result.
Why Do We Need Continuous
Monitoring?
• Continuous Monitoring tools resolve any
system errors (low memory, unreachable
server, etc.) before they have any negative
impact on your business productivity.
Important reasons to use a
monitoring tool are:
• It detects any network or server problems
• It determines the root cause of any issues
• It maintains the security and availability of the service
• It monitors and troubleshoots server performance issues
• It allows us to plan for infrastructure upgrades before outdated
systems cause failures
• It can respond to issues at the first sign of a problem
• It can be used to automatically fix problems when they are
detected
• It ensures IT infrastructure outages have a minimal effect on your
organization’s bottom line
• It can monitor your entire infrastructure and business processes
Continuous Monitoring

• Continuous Monitoring is all about the ability of an
organization to detect, report, respond to, contain and
mitigate attacks that occur in its infrastructure. Continuous
Monitoring is actually not new; it has been around for some
time. For years, security professionals have performed
static analysis of system logs, firewall logs, IDS logs, IPS
logs, etc., but that did not provide proper analysis and response.
Today’s Continuous Monitoring approach gives us the ability
to aggregate all of these events, correlate
them, compare them and then estimate the organization’s
risk posture.
• We have various security tools, like firewalls, IDS and endpoint protection; they are
connected to a ‘Security Information and Event Management’ system.
• In order to achieve Continuous Monitoring, we need to have all the parts talking to
each other.
• So we have security tools and a series of ‘End Points’; these can include clients and
servers, routers, switches, mobile devices and so on.
• These two groups can then talk to a Security Information and Event Management
system (SIEM), through a common language and in a more automated fashion.
• Connected to this SIEM there are two important components; the first one is a Data
Warehouse. To this Data Warehouse we connect ‘Analytics’ and ‘Security
Intelligence’.
• Security intelligence (SI) is the information relevant to protecting an organization
from external and insider threats as well as the processes, policies and tools
designed to gather and analyze that information.
• This SIEM is also connected to a ‘Governance Risk and Compliance System’, which
basically provides dashboarding.
• To this ‘Governance Risk and Compliance System’ we attach a risk database. This
gives us ‘Actionable Intelligence’.
• Actionable Intelligence is nothing but information that can be acted upon, with the
further implication that actions should be taken.
• Nagios is used for Continuous monitoring of
systems, applications, services, and business
processes etc in a DevOps culture. In the event of
a failure, Nagios can alert technical staff of the
problem, allowing them to begin remediation
processes before outages affect business
processes, end-users, or customers. With Nagios,
you don’t have to explain why an unseen
infrastructure outage affected your organization’s
bottom line.
How Nagios works.
• Nagios runs on a server, usually as a daemon or a service.
• It periodically runs plugins residing on the same server; they
contact hosts or servers on your network or on the internet.
One can view the status information using the web interface,
and you can also receive email or SMS notifications if something
happens.
The Nagios daemon behaves like a scheduler that runs certain
scripts at certain moments. It stores the results of those
scripts and will run other scripts if these results change.
• Plugins: These are compiled executables or scripts (Perl
scripts, shell scripts, etc.) that can be run from a command
line to check the status of a host or service. Nagios uses the
results from the plugins to determine the current status of
the hosts and services on your network.
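• A plugin is simply a program that prints one status line and exits with a conventional code: 0 (OK), 1 (WARNING), 2 (CRITICAL) or 3 (UNKNOWN). A minimal custom plugin might look like the Python sketch below; the disk-space thresholds are arbitrary example values.

```python
#!/usr/bin/env python3
# Minimal custom Nagios plugin sketch: check free disk space on /.
# Nagios only looks at the printed status line and the exit code:
# 0 = OK, 1 = WARNING, 2 = CRITICAL, 3 = UNKNOWN.
import shutil
import sys

WARN_PERCENT = 20   # arbitrary example thresholds
CRIT_PERCENT = 10

usage = shutil.disk_usage("/")
free_pct = usage.free * 100 / usage.total

if free_pct < CRIT_PERCENT:
    print(f"DISK CRITICAL - {free_pct:.1f}% free")
    sys.exit(2)
elif free_pct < WARN_PERCENT:
    print(f"DISK WARNING - {free_pct:.1f}% free")
    sys.exit(1)
else:
    print(f"DISK OK - {free_pct:.1f}% free")
    sys.exit(0)
```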
Nagios Architecture:
• Nagios is built on a server/agents architecture.
• Usually, on a network, a Nagios server is running on a host,
and Plugins interact with local and all the remote hosts that
need to be monitored.
• These plugins will send information to the Scheduler, which
displays that in a GUI.
NRPE (Nagios Remote Plugin
Executor).
• The NRPE addon is designed to allow you to
execute Nagios plugins on remote Linux/Unix
machines. The main reason for doing this is to
allow Nagios to monitor “local” resources (like
CPU load, memory usage, etc.) on remote
machines. Since these public resources are
not usually exposed to external machines, an
agent like NRPE must be installed on the
remote Linux/Unix machines.
• The check_nrpe plugin resides on the local monitoring
machine.
• The NRPE daemon runs on the remote Linux/Unix
machine.
• There is an SSL (Secure Sockets Layer) connection between
the monitoring host and the remote host.
Why Nagios
• It can monitor database servers such as SQL
Server, Oracle, MySQL and Postgres.
• It gives application-level information (Apache,
Postfix, LDAP, Citrix, etc.).
• It is under active development.
• It has excellent support from a huge, active
community.
• Nagios runs on any operating system.
• It can ping to see if a host is reachable.
