Unit 05 Devops

The document discusses the importance of testing in DevOps, highlighting the two main types: Manual Testing and Automated Testing. It explains the roles of various testing tools, particularly Selenium and soapUI, and outlines the advantages and disadvantages of automated testing. Additionally, it covers methodologies like Test-Driven Development (TDD) and REPL-driven development, emphasizing their impact on software quality and testing efficiency.


UNIT - V

TESTING TOOLS & AUTOMATION


Balike Mahesh
7207030340

B.MAHESH (YOUTUBE CHANNEL :: SV TECH KNOWLEDGE )


• In DevOps, testing plays a crucial role in ensuring the quality and reliability of software
applications throughout the development and deployment process.
• Software testing is mainly divided into two parts, as follows:

• Manual Testing
• Automation Testing



What is Manual Testing?
• Testing any software or application according to the client's needs without using any automation tool is known as manual testing.
• In other words, it is a procedure of verification and validation. Manual testing is used to verify the behavior of an application or software against its requirements specification.
• We do not require precise knowledge of any testing tool to execute manual test cases, and we can easily prepare the test documents while performing manual testing on an application.
• Even if test automation has greater potential benefits for DevOps than manual testing, manual testing will always be an important part of software development. If nothing else, we will need to perform our tests manually at least once.
• Acceptance testing is a crucial type of testing that can be challenging to replace, even with attempts
to do so. Software requirement specifications can sometimes be difficult to understand, even for the
people developing the features based on those requirements. In these situations, quality assurance
(QA) personnel who are focused on the task at hand are extremely valuable and irreplaceable.
• The aspects that make manual testing easier are also beneficial for automated integration testing.
There is a synergy that can be achieved by combining different testing strategies.
• To keep QA personnel happy, there are a couple of important factors to consider:
1. Test Data Management: It's essential to manage test data, especially the contents of backend
databases, so that tests produce consistent results when run repeatedly. This ensures reliability in
the testing process.
2. Rapid Deployment of New Code: Being able to quickly deploy new code is crucial for verifying bug
fixes and ensuring smooth testing. However, this can be challenging in practice. For example, large
production databases may be difficult to copy to test environments, or they may contain sensitive
user data that needs protection under the law. In such cases, it becomes necessary to de-identify
and cleanse the data by removing any personal details before deploying it to test environments.
• It's important to remember that each organization is different, and there is no one-size-fits-all
solution in this area. However, following the principle of "Keep it simple, stupid" (KISS) can be
useful. Simplifying processes and strategies can often lead to better outcomes in managing test data
and deployments.
• Overall, acknowledging the value of acceptance testing and ensuring the right support and resources are in place for QA personnel can contribute to successful software testing and development.
Automated Testing
• Automated Testing is the technique of automating the manual testing process. In this process, manual testing is replaced by a collection of automated testing tools. Automated testing helps software testers check the quality of the software by automating the mechanical aspects of the testing task.
• Automated testing refers to the use of software tools and scripts to execute
tests and validate the behavior of a software application. It involves writing
test scripts that can be run automatically, without requiring manual
intervention, to check if the application meets the expected requirements
and functionalities.
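• As a minimal sketch of what such a test script looks like (the function under test, slugify, is a hypothetical example, not something from this unit), here is an automated test written with Python's built-in unittest framework:

```python
import unittest

def slugify(title):
    # Hypothetical function under test: turn a title into a URL slug.
    return "-".join(title.lower().split())

class TestSlugify(unittest.TestCase):
    """Automated checks: they run without manual intervention and verify
    that the function meets its expected behaviour."""

    def test_lowercases_and_joins_words(self):
        self.assertEqual(slugify("Automated Testing"), "automated-testing")

    def test_single_word(self):
        self.assertEqual(slugify("DevOps"), "devops")

if __name__ == "__main__":
    unittest.main(exit=False)  # exit=False lets the script continue afterwards
```

A CI server can run such a script on every commit, which is what enables the round-the-clock, 24*7 coverage discussed later in this unit.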



Automation of testing: Pros and cons
• Advantages of Automated Testing:
1. Automated testing improves the coverage of testing, as automated execution of test cases is faster than manual execution.
2. Automated testing reduces the dependency of testing on the availability of test engineers.
3. Automated testing provides round-the-clock coverage, as automated tests can be run at all times in a 24*7 environment.
4. Automated testing takes far fewer resources to execute compared to manual testing.
5. It helps train test engineers and increase their knowledge by producing a repository of different tests.
6. It helps in testing that is not possible without automation, such as reliability testing, stress testing, and load and performance testing.
7. It includes all other activities like selecting the right product build, generating the right test data, and analyzing the results.
8. It acts as a test data generator and produces maximum test data to cover a large number of inputs and expected outputs for result comparison.
9. Automated testing has fewer chances of error and is hence more reliable.
10. With automated testing, test engineers have free time and can focus on other creative tasks.
B.MAHESH (YOUTUBE CHANNEL :: SV TECH KNOWLEDGE )
Disadvantages of Automated Testing:
1. Automated testing is much more expensive than manual testing.
2. It can become inconvenient and burdensome to decide who should automate and who should train.
3. It is limited to some organizations, as many organizations do not prefer test automation.
4. Automated testing requires additionally trained and skilled people.
5. Automated testing only removes the mechanical execution of the testing process; the creation of test cases still requires testing professionals.
What is Selenium?

• Selenium is a free (open-source) automated testing framework used to validate web applications across different browsers and platforms. You can use multiple programming languages like Java, C#, Python, etc., to create Selenium test scripts. Testing done using the Selenium testing tool is usually referred to as Selenium Testing.
• Let us now understand each of the tools available in the Selenium suite and their usage.



Selenium IDE
Selenium Integrated Development Environment (IDE) is a Firefox plugin that lets testers record their actions as they follow the workflow that they need to test.
Selenium RC
Selenium Remote Control (RC) was the flagship testing framework that allowed
more than simple browser actions and linear execution. It makes use of the full power
of programming languages such as Java, C#, PHP, Python, Ruby and PERL to create
more complex tests.
Selenium WebDriver
Selenium WebDriver is the successor to Selenium RC which sends commands
directly to the browser and retrieves results.
Selenium Grid
Selenium Grid is a tool used to run parallel tests across different machines and
different browsers simultaneously which results in minimized execution time.
Features of Selenium (Webdriver)
• Open Source and Portable – Selenium is an open source and portable Web testing
Framework.
• Combination of tool and DSL – Selenium is combination of tools and DSL
(Domain Specific Language) in order to carry out various types of tests.
• Easier to understand and implement – Selenium commands are categorized in
terms of different classes which make it easier to understand and implement.
• Reduced test execution time – Selenium supports parallel test execution, which reduces the time taken to execute tests.
• Lesser resources required – Selenium requires lesser resources when compared to
its competitors like UFT, RFT, etc.
• Supports Multiple Programming Languages – C#, Java, Python, PHP, Ruby, Perl,
and JavaScript
• Supports Multiple Operating Systems – Android, iOS, Windows, Linux, Mac,
Solaris.
• Supports Multiple Browsers – Google Chrome, Mozilla Firefox, Internet Explorer,
Edge, Opera, Safari, etc.
• Parallel Test Execution – It also supports parallel test execution which reduces
time and increases the efficiency of tests.
• A flexible language – Once the test cases are prepared, they can be executed on
any operating system like Linux, Macintosh, etc.
• No installation Required – Selenium web driver does not require server
installation, test scripts interact directly with the browser.
• Selenium IDE provides a playback and record feature for authoring tests without the need to learn a test scripting language. It helps testers record their actions and export them as a reusable script through a simple-to-understand and easy-to-use interface.
• Selenium supports various operating systems, browsers and programming languages. Following is the list:
• Programming Languages: C#, Java, Python, PHP, Ruby, Perl, and JavaScript
• Operating Systems: Android, iOS, Windows, Linux, Mac, Solaris.
• Browsers: Google Chrome, Mozilla Firefox, Internet Explorer, Edge, Opera, Safari, etc.
• It also supports parallel test execution which reduces time and increases the efficiency of tests.
• Selenium can be integrated with frameworks like Ant and Maven for source code compilation.
• Selenium can also be integrated with testing frameworks like TestNG for application testing and generating reports.
• Selenium requires fewer resources as compared to other automation test tools.
• The WebDriver API has been incorporated into Selenium, which is one of the most important modifications made to Selenium.
• Selenium web driver does not require server installation, test scripts interact directly with the browser.
• Selenium commands are categorized in terms of different classes which make it easier to understand and
implement.
• Selenium Remote Control (RC) in conjunction with the WebDriver API is known as Selenium 2.0. This version was built to support dynamic web pages and Ajax.
JavaScript testing
• JavaScript testing frameworks play a significant role in testing web user interfaces (UIs)
of various products. Here are a few noteworthy frameworks:
1.Karma: Karma is a test runner specifically designed for running unit tests written in
JavaScript. It helps execute these tests in different browsers and environments, ensuring
consistent results across platforms.
2.Jasmine: Jasmine is a behavior testing framework that resembles Cucumber. It allows
developers to write tests in a more descriptive and readable format. Jasmine helps verify
the expected behavior of JavaScript code.
3.Protractor: Protractor is a testing framework primarily used for AngularJS applications.
While it utilizes the underlying Selenium web driver, Protractor is optimized for testing
AngularJS-specific features. It simplifies locating controllers within the testing code,
leveraging its understanding of the Angular framework.



• One might wonder why Protractor exists when Selenium can also test
AngularJS applications. The advantage of Protractor lies in its deep
integration with AngularJS, understanding its unique model/view setup. This
knowledge helps streamline the testing process and provides specialized
constructs to locate and interact with AngularJS components.
• It's worth noting that while Protractor allows writing tests in JavaScript,
Selenium supports various programming languages, including JavaScript.
Therefore, you have the flexibility to choose the language that suits your
preference and skill set for writing test cases.



Testing backend integration points
• Automated testing of backend functionality, such as SOAP and REST endpoints, is often
cost-effective. Backend interfaces are usually stable, requiring less maintenance effort
compared to GUI tests. Tools like soapUI simplify the process of writing and executing
tests for these interfaces.
• soapUI appeals to different roles involved in testing. Testers can use its well-structured
environment to create and run tests interactively, building them incrementally.
Developers can integrate test cases into their builds using command-line runners and
Maven plugins. This integration is particularly useful for maintaining the build server.
• An advantage of soapUI is its open-source licensing, with additional features available in
a separate, proprietary version. This open-source nature ensures reliability in builds,
preventing unexpected failures due to license limitations.
• Overall, automated testing of backend functionality with tools like soapUI brings
efficiency, reliability, and collaboration between testers, developers, and build server
maintainers.
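• The same idea can be sketched without soapUI itself. The snippet below, a simplified stand-in for a backend integration test, starts a throwaway HTTP endpoint (the /health route and its JSON payload are invented for the example) and asserts on the response using only the Python standard library:

```python
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

class StubBackend(BaseHTTPRequestHandler):
    """A throwaway endpoint standing in for a real REST backend."""
    def do_GET(self):
        body = json.dumps({"status": "ok"}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # keep the test output quiet

def check_health(url):
    """The backend test: call the endpoint and validate the JSON payload."""
    with urllib.request.urlopen(url) as resp:
        status = resp.status
        payload = json.load(resp)
    return status == 200 and payload.get("status") == "ok"

# Start the stub on a free port, run the test, and shut it down again.
server = HTTPServer(("127.0.0.1", 0), StubBackend)
threading.Thread(target=server.serve_forever, daemon=True).start()
ok = check_health(f"http://127.0.0.1:{server.server_port}/health")
server.shutdown()
print("backend check passed:", ok)
```

Because backend interfaces like this are stable, such a test needs far less maintenance than an equivalent GUI test.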



• The soapUI user interface is easy to navigate. On the left
side, there's a tree view that shows test cases. You can select
individual tests or entire test suites and run them. The results
are displayed on the right side of the interface.
• One notable aspect is that test cases in soapUI are defined in
XML format. This allows you to manage them as code in a
source code repository. It also means you can edit them using
a text editor when needed, such as when performing a global
search and replace to update identifiers that have changed
names. This flexibility aligns well with the principles of
DevOps.



Test-driven development
• Test-driven development (TDD) has an added focus on test automation. It was
made popular by the Extreme programming movement of the nineties.
• TDD is usually described as a sequence of events, as follows:
• Implement the test: As the name implies, you start out by writing the test and
write the code afterwards. One way to see it is that you implement the interface
specifications of the code to be developed and then progress by writing the code.
To be able to write the test, the developer must find all relevant requirement
specifications, use cases, and user stories.
• The shift in focus from coding to understanding the requirements can be
beneficial for implementing them correctly.
• Verify that the new test fails: The newly added test should fail because there is
nothing to implement the behavior properly yet, only the stubs and interfaces
needed to write the test. Run the test and verify that it fails.
• Write code that implements the tested feature: The code we write doesn't yet have to be particularly elegant or efficient. Initially, we just want to make the new test pass.
• Verify that the new test passes together with the old tests: When the new test
passes, we know that we have implemented the new feature correctly. Since the
old tests also pass, we haven't broken existing functionality.
• Refactor the code: The word "refactor" has mathematical roots. In
programming, it means cleaning up the code and, among other things, making it
easier to understand and maintain. We need to refactor since we cheated a bit
earlier in the development.
• TDD is a style of development that fits well with DevOps, but it's not necessarily
the only one. The primary benefit is that you get good test suites that can be used
in Continuous Integration tests.
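• The sequence of events above can be sketched in miniature. The feature under test, a hypothetical word_count function, is invented for the example; the script walks through the red and green phases in order:

```python
# Step 1: implement the test first. word_count is a hypothetical feature;
# at this point only a stub of its interface exists.
def word_count(text):
    raise NotImplementedError("feature not written yet")

def test_word_count():
    assert word_count("to be or not to be") == 6

# Step 2: verify that the new test fails (the "red" phase) -- there is
# nothing implementing the behaviour yet, only the stub.
try:
    test_word_count()
    test_failed = False
except NotImplementedError:
    test_failed = True
assert test_failed, "a new test should fail before the feature exists"

# Step 3: write code that implements the tested feature. It doesn't have
# to be elegant yet; it just has to make the test pass.
def word_count(text):
    return len(text.split())

# Step 4: verify that the test now passes (the "green" phase). Refactoring
# would follow, re-running the test after every change.
test_word_count()
print("TDD cycle complete: red -> green")
```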



REPL-driven development
• While REPL-driven development isn't a widely recognized term, it is my favored style of development and has a particular bearing on testing. This style of development is very common when working with interpreted languages, such as Lisp, Python, Ruby, and JavaScript.
• When you work with a Read Eval Print Loop (REPL), you write small functions that are independent and not dependent on a global state. The functions are tested even as you write them.
• This style of development differs a bit from TDD. The focus is on writing small functions with no or very few side effects, which makes the code easy to comprehend, rather than on writing test cases before functioning code is written, as in TDD.
• You can combine this style of development with unit testing. Since you can use REPL-driven development to develop your tests as well, this combination is a very effective strategy.
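• A small sketch of the style, assuming Python as the interpreted language: a pure function is tried out interactively, and the REPL transcript is kept as doctests so the exploration doubles as a unit test:

```python
def normalize(name):
    """Trim and lowercase a name: a small, pure function with no global state.

    The checks below were typed at the REPL while the function was being
    written, then kept as doctests so they keep running as tests:

    >>> normalize("  Alice ")
    'alice'
    >>> normalize("BOB")
    'bob'
    """
    return name.strip().lower()

if __name__ == "__main__":
    import doctest
    print(doctest.testmod())  # re-runs the REPL transcript as a test suite
```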



UNIT –V PART-2
DEPLOYMENT OF THE SYSTEM


Deployment systems :
• Deployment in DevOps is a process that enables you to retrieve important codes
from version control so that they can be made readily available to the public and
they can use the application in a ready-to-use and automated fashion.

• Why are there so many deployment systems?


• There is a bewildering abundance of options regarding the installation of packages and configuring
them on actual servers, not to mention all the ways to deploy client-side code.
• Let's first examine the basics of the problem we are trying to solve.
• We have a typical enterprise application, with a number of different high-level components. We
don't need to make the scenario overly complex in order to start reasoning about the challenges
that exist in this space.
• In our scenario, we have:
• A web server
• An application server
• A database server



• If we only have a single physical server and these few components to worry about, released once a year or so, we can install the software manually and be done with the task. It will be the most cost-effective way of dealing with the situation, even though manual work is boring and error-prone.
• It's not reasonable to expect a conformity to this simplified release cycle in reality
though. It is more likely that a large organization has hundreds of servers and
applications and that they are all deployed differently, with different requirements.
• Managing all the complexity that the real world displays is hard, so it starts to make
sense that there are a lot of different solutions that do basically the same thing in
different ways.
• Whatever the fundamental unit that executes our code is, be it a physical server, a
virtual machine, some form of container technology, or a combination of these, we
have several challenges to deal with. We will look at them now.
• Configuring the base OS
• The configuration of the base operating system must be dealt with somehow.
• Often, our application stack has subtle, or not so subtle, requirements on the base operating
system. Some application stacks, such as Java, Python, or Ruby, make these operating system
requirements less apparent, because these technologies go to a great length to offer cross-
platform functionality. At other times, the operating system requirements are apparent to a greater
degree, such as when you work with low-level mixed hardware and software integrations, which is
common in the telecom industry.
• There are many existing solutions that deal with this fundamental issue. Some systems
work with a bare metal (or bare virtual machine) approach, where they install the desired
operating system from scratch and then install all the base dependencies that the
organization needs for their servers. Such systems include, for example, Red Hat Satellite
and Cobbler, which works in a similar way but is more lightweight.
• Cobbler allows you to boot a physical or virtual machine over the network using dhcpd. The
DHCP server can then allow you to provide a netboot-compliant image. When the netboot
image is started, it contacts Cobbler to retrieve the packages that will be installed in order to
create the new operating system. Which packages are installed can be decided on the server
from the target machine's network MAC address for instance.
• Another method that is very popular today is to provide base operating system images that
can be reused between machines. Cloud systems such as AWS, Azure, or OpenStack work
this way. When you ask the cloud system for a new virtual machine, it is created using an
existing image as a base. Container systems such as Docker also work in a similar way,
where you declare your base container image and then describe the changes you want to
formulate for your own image.



Describing clusters
• There must be a way to describe clusters.
• If your organization only has a single machine with a single application, then you might not need to describe what a cluster deployment of your application would look like. Unfortunately (or fortunately, depending on your outlook), the reality is normally that your applications are spread out over a set of machines, virtual or physical.

• All the systems we work with in this chapter support this idea in different ways. Puppet has an
extensive system that allows machines to have different roles that in turn imply a set of packages and
configurations. Ansible and Salt have these systems as well. The container-based Docker system has an
emerging infrastructure for describing sets of containers connected together and Docker hosts that can
accept and deploy such cluster descriptors.
• Cloud systems such as AWS also have methods and descriptors for cluster deployments.
• Cluster descriptors are normally also used to describe the application layer.
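• As a tool-neutral sketch (the JSON shape below is invented for illustration and is not any specific tool's format), a cluster descriptor can map roles to packages and machines to roles:

```python
import json

# A hypothetical cluster descriptor: roles imply packages; hosts take roles.
DESCRIPTOR = json.loads("""
{
  "roles": {
    "web": ["nginx"],
    "app": ["openjdk", "myapp"],
    "db":  ["postgresql"]
  },
  "hosts": {
    "web01": ["web"],
    "app01": ["app"],
    "db01":  ["db"],
    "all01": ["web", "app"]
  }
}
""")

def packages_for(host, descriptor=DESCRIPTOR):
    """Resolve the package list a host should have from its roles."""
    packages = []
    for role in descriptor["hosts"][host]:
        packages.extend(descriptor["roles"][role])
    return packages

print(packages_for("all01"))
```

Role-based descriptions like this are what systems such as Puppet, Ansible, and Salt formalize in their own languages.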



Delivering packages to a system in devops
• In DevOps, delivering packages to a system typically involves a combination of automation, version control,
and deployment tools. Here's a high-level overview of the process:
1. Package Management: Before delivering packages, it's common to use package management systems like
npm, pip, Maven, or NuGet to manage dependencies and bundle the application or software into a package
format. These package managers handle versioning and dependency resolution.
2. Version Control: The application code, configuration files, and package specifications are stored in a version
control system like Git. Developers commit their changes, creating a version history that enables collaboration,
review, and tracking of modifications.
3. Continuous Integration (CI): Continuous Integration servers like Jenkins, GitLab CI/CD, or CircleCI
automatically build and test the codebase whenever changes are pushed to the version control system. This
step ensures that the application is in a deployable state and that any tests defined are passing.
4. Artifact Repositories: After successful build and test execution, the resulting artifacts (e.g., compiled binaries,
package files, or container images) are stored in artifact repositories. These repositories act as a centralized
storage location for the packages and allow for easy retrieval and sharing among different stages of the
deployment pipeline.
5. Continuous Deployment (CD): Continuous Deployment involves automating the process of delivering packages
to the target system(s). CD tools like Kubernetes, Docker Swarm, or deployment scripts utilize the artifacts from
the artifact repositories and orchestrate the deployment process. They handle tasks such as provisioning
infrastructure, configuring environments, deploying containers or packages, and managing the deployment lifecycle.
6. Infrastructure as Code (IaC): Infrastructure as Code tools like Terraform or
CloudFormation enable the definition and provisioning of infrastructure
resources programmatically. Infrastructure configurations are written as
code, versioned alongside the application code, and deployed together,
ensuring consistent and reproducible environments.
7. Release Management: Release management tools such as Spinnaker or
Octopus Deploy provide advanced features for managing the release
process. They offer capabilities like release pipelines, canary deployments,
rollout strategies, and rollback mechanisms, making it easier to control and
monitor the deployment across different environments.
8. Monitoring and Observability: Once the package is deployed, it's important
to have monitoring and observability tools in place. These tools, such as
Prometheus, Grafana, or ELK Stack, help track the performance, health, and
logs of the deployed system. They enable proactive detection of issues,
troubleshooting, and performance optimization.
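• One small, concrete piece of steps 4 and 5 can be sketched: before deploying, verify that the artifact retrieved from the repository matches its published checksum. The file name and contents below are invented for the example:

```python
import hashlib
import tempfile
from pathlib import Path

def sha256_of(path):
    """Compute the SHA-256 digest of an artifact, reading it in chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_artifact(path, expected_digest):
    """Refuse to deploy an artifact whose checksum does not match."""
    actual = sha256_of(path)
    if actual != expected_digest:
        raise ValueError(f"checksum mismatch: {actual} != {expected_digest}")
    return True

# Simulate an artifact downloaded from an artifact repository.
artifact = Path(tempfile.mkdtemp()) / "app-1.0.0.tar.gz"
artifact.write_bytes(b"pretend this is a release tarball")

published = sha256_of(artifact)  # what the repository would advertise
print(verify_artifact(artifact, published))
```

Checks like this are what make the handoff between the artifact repository and the deployment stage trustworthy.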



VIRTUALIZATION STACKS


• Virtualization is widely used in organizations with their own server farms to encapsulate different
components of applications. It provides virtual machines with virtual hardware, allowing you to
simulate different hardware configurations. This is useful for tasks like emulating mobile phone
hardware for testing mobile applications.
• When it comes to server virtualization, the main goal is to encapsulate application server
components. By creating virtual machines, you can isolate and control resource allocation. For
example, if a server component misbehaves and consumes excessive resources, it won't affect the
entire physical machine. Container-based techniques offer similar encapsulation and resource
control but without the emulation capabilities.
• The hypervisor is the component that abstracts the underlying hardware and manages resources
for virtual machines. It can run directly on the hardware (bare metal hypervisor) or within an
operating system using the operating system kernel. VMware, KVM, Xen, and VirtualBox are
examples of virtualization solutions.
• VMware is a widely used proprietary solution, available in desktop and server variants (VMware
ESX). KVM is an open-source virtualization solution for Linux that runs within a Linux host. Xen
offers features like paravirtualization, which improves efficiency by modifying the guest operating
system's kernel. VirtualBox is an open-source solution popular among developers, useful for
emulating different environments on their machines.
• All these virtualization technologies provide APIs to automate virtual machine management. The
libvirt API, for example, works with various hypervisors such as KVM, QEMU, Xen, and LXC,
allowing for streamlined management across different virtualization platforms.



• Here are some popular virtualization stacks:
1. VMware vSphere: VMware vSphere is a comprehensive virtualization stack that provides a
hypervisor (ESXi), management tools (vCenter Server), and additional features such as High
Availability (HA), Distributed Resource Scheduler (DRS), and vMotion for live VM migration.
2. Microsoft Hyper-V: Hyper-V is Microsoft's virtualization platform, available as part of Windows
Server. It includes the Hyper-V hypervisor, virtual machine management tools, and integration with
other Microsoft technologies such as System Center Virtual Machine Manager (SCVMM) for
centralized management.
3. KVM (Kernel-based Virtual Machine): KVM is an open-source virtualization stack for Linux. It
utilizes the Linux kernel as the hypervisor and provides support for running Linux and other
operating systems as guest VMs. KVM is integrated into the Linux kernel, making it a popular
choice for Linux-based virtualization.
4. Xen: Xen is an open-source hypervisor that allows for running multiple VMs on a single host
machine. It provides paravirtualization and hardware-assisted virtualization capabilities. Xen can be
used as a standalone hypervisor or as part of other virtualization platforms like Citrix XenServer.
5. Docker and Kubernetes: While not traditional virtualization stacks, Docker and Kubernetes form a
containerization stack that enables lightweight and efficient application virtualization. Docker
provides containerization capabilities, while Kubernetes is an orchestration platform for managing
containerized applications at scale.
6. Proxmox VE: Proxmox Virtual Environment (VE) is an open-source virtualization stack based on
KVM and LXC (Linux Containers). It offers a web-based management interface and supports both
VMs and containers, making it suitable for various virtualization needs.
7. Oracle VM VirtualBox: VirtualBox is a free and open-source virtualization software that allows running guest VMs on a host system. It supports a wide range of operating systems.
CODE EXECUTION AT THE CLIENT


• Some configuration management systems provide the ability to execute code on specific
nodes. In the Puppet ecosystem, this feature is called Marionette Collective or MCollective. It
allows you to run commands on matching nodes, which can be useful for tasks like running a
directory listing command on HTTP servers facing the public Internet for debugging
purposes.
• When experimenting with different deployment systems, using Docker to manage the base
operating system can be convenient and time-saving. Docker allows you to develop and
debug deployment code specific to a particular system, which can later be used on physical
or virtual machines.
• We will start by trying out the various deployment systems in local deployment modes.
Later, we will simulate the deployment of a system with multiple containers forming a
virtual cluster.
• While using official Docker images is preferred, sometimes they are not available or may
disappear. This is a common occurrence in the fast-paced world of DevOps.
• It's important to note that Docker has limitations when it comes to emulating a full
operating system. Some containers may require elevated privilege modes, which we will
handle as they arise.
• Although many people prefer using Vagrant for these tests, Docker is favored for its lightweight and fast nature, which is usually sufficient for most scenarios.
KNOWLEDGE
PUPPET MASTER AND AGENTS


• Puppet is a deployment solution that is very popular in larger organizations and is one of the first systems of its kind.
• Puppet consists of a client/server solution, where the client nodes check in regularly with the Puppet server to see if anything needs to be updated in the local configuration.
• The Puppet server is called a Puppet master, and there is a lot of similar wordplay in the names chosen for the various Puppet components.
• Puppet provides a lot of flexibility in handling the complexity of a server farm, and as such, the tool itself is pretty complex.

• This is an example scenario of a dialogue between a Puppet client and a Puppet master:
1. The Puppet client decides that it's time to check in with the Puppet master to discover any new configuration changes. This can be due to a timer or manual intervention by an operator at the client. The dialogue between the Puppet client and master is normally encrypted using SSL.
2. The Puppet client presents its credentials so that the Puppet master can know exactly which client is calling. Managing the client credentials is a separate issue.
3. The Puppet master figures out which configuration the client should have by compiling the Puppet catalogue and sending it to the client. This involves a number of mechanisms, and a particular setup doesn't need to utilize all the possibilities. It is pretty common to have both a role-based and a concrete configuration for a Puppet client. Role-based configurations can be inherited.
4. The Puppet client runs the necessary code on the client side such that the configuration matches the one decided on by the Puppet master.
• In this sense, a Puppet configuration is declarative. You declare what
configuration a machine should have, and Puppet figures out how to get from the
current to the desired client state.
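• The declarative model can be illustrated with a toy convergence function. This is not Puppet code, just a sketch of the idea: given the current and desired state, compute the actions needed to reconcile them:

```python
def converge(current, desired):
    """Toy version of declarative configuration: diff two states
    (package name -> version) into the actions that reconcile them."""
    actions = []
    for pkg, version in desired.items():
        if pkg not in current:
            actions.append(("install", pkg, version))
        elif current[pkg] != version:
            actions.append(("upgrade", pkg, version))
    for pkg in current:
        if pkg not in desired:
            actions.append(("remove", pkg, current[pkg]))
    return actions

# You declare only the desired state; the tool works out the steps.
current = {"nginx": "1.18", "php": "7.4"}
desired = {"nginx": "1.24", "postgresql": "15"}
print(converge(current, desired))
```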

There are both pros and cons of the Puppet ecosystem:
• Puppet has a large community, and there are a lot of resources on the Internet for Puppet. There are a lot of different modules, and unless you have a really strange component to deploy, there is in all likelihood an existing module written for your component that you can modify according to your needs.
• Puppet requires a number of dependencies on the Puppet client machines. Sometimes this gives rise to problems: the Puppet agent requires a Ruby runtime that sometimes needs to be ahead of the Ruby version available in your distribution's repositories, and enterprise distributions often lag behind in versions.
• Puppet configurations can be complex to write and test.

