
UNIT - V

Testing Tools and automation: Various types of testing, Automation of
testing: pros and cons, Selenium - Introduction, Selenium features,
JavaScript testing, Testing backend integration points, Test-driven
development, REPL-driven development
Deployment of the system: Deployment systems, Virtualization stacks,
code execution at the client, Puppet master and agents, Ansible,
Deployment tools: Chef, SaltStack and Docker
Deployment systems
In DevOps, deployment systems play a crucial role in automating the process
of releasing software applications and infrastructure changes. These systems
help ensure consistency, reliability, and efficiency in deploying code from
development to production environments. Here are some popular deployment
systems used in DevOps:
1. Jenkins: Jenkins is a widely adopted open-source automation server that
supports continuous integration and deployment. It provides a flexible
platform for building, testing, and deploying applications across various
platforms and technologies. Jenkins offers a vast ecosystem of plugins,
making it highly customizable and extensible.
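As an illustration, a minimal declarative Jenkinsfile might look like the sketch below; the stage layout is standard Jenkins pipeline syntax, but the shell scripts it calls are hypothetical placeholders for a project's own build commands:

```groovy
pipeline {
    agent any                        // run on any available Jenkins agent
    stages {
        stage('Build') {
            steps { sh './build.sh' }   // hypothetical build script
        }
        stage('Test') {
            steps { sh './test.sh' }    // hypothetical test script
        }
        stage('Deploy') {
            steps { sh './deploy.sh' }  // hypothetical deploy script
        }
    }
}
```

Checking a Jenkinsfile like this into the repository keeps the pipeline definition versioned alongside the code it builds.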
2. GitLab CI/CD: GitLab is a complete DevOps platform that includes a built-
in Continuous Integration/Continuous Deployment (CI/CD) system. It allows
teams to define and manage their CI/CD pipelines directly within the GitLab
repository, integrating code building, testing, and deployment in a single
platform.
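A minimal .gitlab-ci.yml sketch could look like the following; the job names and scripts are illustrative assumptions, not part of any particular project:

```yaml
stages:
  - build
  - test
  - deploy

build-job:
  stage: build
  script:
    - ./build.sh       # hypothetical build script

test-job:
  stage: test
  script:
    - ./test.sh        # hypothetical test script

deploy-job:
  stage: deploy
  script:
    - ./deploy.sh      # hypothetical deploy script
  only:
    - main             # deploy only from the main branch
```

GitLab picks this file up automatically from the repository root and runs the stages in order on each push.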
3. CircleCI: CircleCI is a cloud-based CI/CD platform that provides fast and
scalable automation for building, testing, and deploying applications. It
supports a wide range of programming languages and frameworks, and its
configuration can be defined using a YAML file within the code repository.
4. Travis CI: Travis CI is another cloud-based CI/CD platform that integrates
with popular version control systems like GitHub and Bitbucket. It supports
various programming languages and provides a simple YAML-based
configuration for defining CI/CD pipelines.
5. AWS CodePipeline: AWS CodePipeline is a fully managed CI/CD service
provided by Amazon Web Services (AWS). It allows you to define and automate
your software release processes using a visual interface. CodePipeline
integrates with other AWS services, making it convenient for deploying
applications to AWS infrastructure.
6. Azure DevOps: Azure DevOps (formerly known as Visual Studio Team
Services or VSTS) is a comprehensive set of development tools offered by
Microsoft Azure. It includes features for source control, CI/CD pipelines, and
release management, enabling end-to-end DevOps practices on the Azure
platform.
7. Spinnaker: Spinnaker is an open-source, multi-cloud continuous delivery
platform developed by Netflix. It supports deploying applications to various
cloud providers, including AWS, Google Cloud Platform, and Kubernetes.
Spinnaker offers advanced deployment strategies, canary releases, and blue-
green deployments.
These deployment systems provide automation, orchestration, and
monitoring capabilities, allowing teams to streamline the software delivery
process and achieve faster, more reliable deployments. The choice of
deployment system depends on factors such as the technology stack,
infrastructure environment, team preferences, and integration requirements
with other DevOps tools.
Virtualization stacks
Virtualization stacks play a significant role in enabling efficient and scalable
infrastructure management in DevOps. These stacks provide the foundation
for creating and managing virtualized environments, allowing organizations
to leverage the benefits of virtualization for their application deployments.
Here are some common virtualization stacks used in DevOps:
1. VMware vSphere: VMware vSphere is a comprehensive virtualization
platform that enables organizations to create, manage, and migrate virtual
machines (VMs) in data center environments. It provides features such as
high availability, resource management, and live migration capabilities,
making it popular for enterprise-level virtualization.
2. Microsoft Hyper-V: Hyper-V is a hypervisor-based virtualization
technology provided by Microsoft. It allows organizations to run multiple VMs
on a single physical server, offering features like live migration, high
availability, and integration with other Microsoft products and services.
3. KVM: Kernel-based Virtual Machine (KVM) is an open-source
virtualization solution built into the Linux kernel. It provides a hypervisor that
enables running multiple VMs on Linux servers. KVM offers good performance
and is commonly used in Linux-based DevOps environments.
4. Xen: Xen is an open-source virtualization platform that provides a
hypervisor for running multiple VMs. It offers paravirtualization and
hardware-assisted virtualization capabilities, making it suitable for both
Linux and Windows environments.
5. Docker: Docker is a popular containerization platform that utilizes
lightweight, isolated containers to run applications. While not a traditional
virtualization stack, Docker provides an alternative approach to virtualization
by allowing applications to be packaged with their dependencies, providing
consistency and portability across different environments.
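To illustrate the containerization approach, here is a minimal Dockerfile sketch that packages an application with its dependencies; the base image tag and file names are assumptions for the example:

```dockerfile
# Pin the base image to a specific tag for reproducible builds
FROM python:3.11-slim
WORKDIR /app
# Copy and install dependencies first so this layer is cached between builds
COPY requirements.txt .
RUN pip install -r requirements.txt
# Copy the application code last, since it changes most often
COPY app.py .
CMD ["python", "app.py"]
```

The same image built from this file runs identically on a developer laptop and in production, which is the portability benefit described above.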
6. Kubernetes: Kubernetes is an open-source container orchestration
platform that automates the deployment, scaling, and management of
containerized applications. It works in conjunction with container runtimes
such as Docker or containerd, and serves as a more feature-rich alternative
to simpler orchestrators like Docker Swarm.
These virtualization stacks provide the foundation for creating and managing
virtualized infrastructure, whether through traditional VM-based
virtualization or containerization. They enable organizations to achieve
scalability, flexibility, and efficient resource utilization in their DevOps
workflows. The choice of virtualization stack depends on factors such as the
specific infrastructure requirements, technology stack, and organization's
preferences and expertise.
Code execution at the client
In traditional DevOps practices, code execution primarily occurs on servers
or infrastructure managed by the organization. However, with the rise of
modern web applications and client-side technologies, there is a growing
trend towards executing code on the client side as well. Here are some key
aspects of code execution at the client in DevOps:
1. Client-Side Rendering (CSR): Client-Side Rendering involves sending raw
data from the server to the client and rendering the user interface using
JavaScript frameworks like React, Angular, or Vue.js. This approach shifts
some of the processing burden to the client's web browser, allowing for faster
and more interactive user experiences.
2. Progressive Web Apps (PWA): PWAs are web applications that can function
offline and provide app-like experiences to users. They leverage modern web
technologies, including service workers and client-side caching, to execute
code on the client side. PWAs can be deployed and managed as part of a
DevOps workflow, ensuring seamless updates and deployments.
3. Single-Page Applications (SPA): SPAs are web applications that load once
and dynamically update the content without requiring a full page refresh.
They rely on JavaScript frameworks and libraries to execute code on the client
side, handling user interactions and data fetching. DevOps practices ensure
smooth deployment and version control of SPA code.
4. Static Site Generators (SSG): SSGs generate static HTML files during the
build process, which can be served directly to clients without the need for
server-side processing. Popular SSGs like Jekyll, Hugo, or Gatsby execute
code during the build phase to generate the final static assets. DevOps helps
automate the build and deployment processes for SSG-generated sites.
5. Mobile Apps: With the proliferation of mobile applications, DevOps
practices extend to managing the code execution on client devices. Mobile app
development frameworks like React Native or Flutter enable the execution of
shared codebases across multiple platforms, combining server-side logic and
client-side rendering within the mobile app context.
When executing code on the client side in DevOps, it's important to consider
aspects like code versioning, testing, and continuous integration. Automation
tools, such as bundlers and task runners like Webpack or Grunt, can be
integrated into the DevOps pipeline to streamline the client-side build and
deployment processes. Additionally, monitoring and analytics tools can provide
insights into client-side performance and usage patterns to inform future
updates and improvements.
Puppet master and agents
Puppet is a deployment solution that is very popular in larger organizations
and is one of the first systems of its kind.
Puppet consists of a client/server solution, where the client nodes check in
regularly with the Puppet server to see if anything needs to be updated in the
local configuration.
The Puppet server is called a Puppet master, and there is a lot of similar
wordplay in the names chosen for the various Puppet components. Puppet
provides a lot of flexibility in handling the complexity of a server farm, and, as
such, the tool itself is pretty complex.
This is an example scenario of a dialogue between a Puppet client and a
Puppet master:

 The Puppet client decides that it's time to check in with the Puppet
master to discover any new configuration changes. This can be due to
a timer or manual intervention by an operator at the client. The dialogue
between the Puppet client and master is normally encrypted using SSL.
 The Puppet client presents its credentials so that the Puppet master
knows exactly which client is calling. Managing the client's credentials
is a separate issue.
 The Puppet master figures out which configuration the client should
have by compiling the Puppet catalogue and sending it to the client.
This involves a number of mechanisms, and a particular setup doesn't
need to utilize all possibilities:
 It is pretty common to have both a role-based configuration and a
concrete configuration for a Puppet client. Role-based configurations
can be inherited.
 The Puppet agent runs the necessary code on the client side so that
the configuration matches the one decided on by the Puppet master.
In this sense, a Puppet configuration is declarative. You declare what
configuration a machine should have, and Puppet figures out how to get from
the current to the desired client state.
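The declarative style can be sketched with a small Puppet manifest; the package and service names below are a common illustrative example, not taken from the text:

```puppet
# Declare the desired state: Apache installed and running.
# Puppet works out the steps needed to reach this state.
package { 'httpd':
  ensure => installed,
}

service { 'httpd':
  ensure  => running,
  enable  => true,
  require => Package['httpd'],  # manage the service only after the package
}
```

Running this manifest repeatedly is safe: if the system already matches the declared state, Puppet changes nothing.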
There are both pros and cons of the Puppet ecosystem, and they are as
follows:

 Puppet has a large community, and there are a lot of resources on the
internet for Puppet. There are a lot of different modules, and if you don't
have a really strange component to deploy, there already is, in all
likelihood, an existing module written for your component that you can
modify according to your needs.
 Puppet requires a number of dependencies on the Puppet client
machines. Sometimes, this gives rise to problems. The Puppet agent
requires a Ruby runtime that sometimes needs to be ahead of the Ruby
version available in your distribution's repositories. Enterprise
distributions often lag behind in versions.
 Puppet configurations can be complex to write and test.
Ansible
Ansible is a deployment solution that favours simplicity.
The Ansible architecture is agentless; it doesn't need a running daemon on
the client side like Puppet does. Instead, the Ansible server logs into the
Ansible node and issues commands over SSH in order to install the required
configuration.
While Ansible's agentless architecture does make things simpler, you need a
Python interpreter installed on the Ansible nodes. Ansible is somewhat more
lenient about the Python version required for its code to run than Puppet is
for its Ruby code to run, so this dependence on Python being available is not
a great hassle in practice.
Like Puppet and others, Ansible focuses on configuration descriptors that are
idempotent. This basically means that the descriptors are declarative and the
Ansible system figures out how to bring the server to the desired state. You
can rerun the configuration run, and it will be safe, which is not necessarily
the case for an imperative system.
Let's try out Ansible with the Docker method we discussed earlier.
We will use the williamyeh/ansible image, which has been developed for
this purpose, but it should be possible to use any Ansible Docker image, or
a different one altogether to which we just add Ansible later:
1. Create a Dockerfile with this statement:
FROM williamyeh/ansible:centos7
2. Build the Docker container with the following command:
docker build .
This will download the image and create an empty Docker container that we
can use. Normally, you would, of course, have a more complex Dockerfile that
can add the things we need, but in this case, we are going to use the image
interactively, so we will instead mount the directory with Ansible files from
the host so that we can change them on the host and rerun them easily.
3. Run the container
The following command can be used to run the container. You will need the
hash from the previous build command:
docker run -v `pwd`/ansible:/ansible -it <hash> bash
Now we have a prompt, and Ansible is available. The -v option makes parts
of the host file system visible to the Docker guest container. The files will
be visible in the /ansible directory in the container.
The playbook.yml file is as follows:
- hosts: localhost
  vars:
    http_port: 80
    max_clients: 200
  remote_user: root
  tasks:
    - name: ensure apache is at the latest version
      yum: name=httpd state=latest
This playbook doesn't do very much, but it demonstrates some concepts of
Ansible playbooks.
Now we can try to run our Ansible playbook:
cd /ansible
ansible-playbook -i inventory playbook.yml --connection=local --sudo
Ansible will print the status of each task as the play runs.
Deployment tools: Chef, SaltStack and Docker
Chef:
Deploying with Chef in DevOps involves integrating Chef into the overall
DevOps workflow to automate the deployment of infrastructure and
applications. Here's how Chef fits into the DevOps context:
1. Infrastructure as Code: Chef follows the Infrastructure as Code (IaC)
principle, allowing you to define your infrastructure and application
configurations as code using Chef cookbooks. This enables version control,
collaboration, and reproducibility, essential aspects of DevOps practices.
2. Continuous Integration and Continuous Deployment (CI/CD): Chef can
be integrated with CI/CD pipelines to automate the deployment process. As
part of the CI/CD workflow, you can include steps to build and test your
cookbooks, validate configurations, and deploy them to target environments.
3. Version Control and Collaboration: Chef cookbooks and associated files
can be stored in version control systems like Git. This enables collaboration
among team members, tracking changes, and rolling back to previous
versions if needed. DevOps teams can use branching and merging strategies
to manage cookbook versions and promote changes across different
environments.
4. Configuration Management: Chef provides a powerful configuration
management system that helps manage the desired state of your
infrastructure. By defining and applying cookbooks and recipes, you can
ensure that the configurations of your systems remain consistent and aligned
with your infrastructure requirements.
5. Infrastructure Orchestration: Chef can be used to orchestrate the
provisioning and configuration of infrastructure resources. It integrates with
various cloud providers and infrastructure-as-a-service platforms, allowing
you to provision virtual machines, configure networking, and manage other
resources required for your application deployments.
6. Automated Testing: Chef provides tools like Test Kitchen and ChefSpec
that enable automated testing of your cookbooks. You can write unit tests,
integration tests, and acceptance tests to validate the correctness and
behavior of your configurations. Testing helps ensure that your deployments
are reliable and free of errors.
7. Monitoring and Compliance: Chef allows you to define and enforce
compliance policies within your infrastructure. You can monitor the
configuration drift, apply security patches, and ensure that your systems
adhere to regulatory and organizational standards. Compliance automation is
an integral part of DevOps practices.
8. Infrastructure Monitoring and Logging: Integrating Chef with monitoring
and logging tools helps capture real-time data about your infrastructure and
applications. This data can be used to gain insights, troubleshoot issues, and
improve the overall performance and reliability of your deployments.
By leveraging Chef within the DevOps workflow, you can automate the
deployment and management of your infrastructure and applications,
improve collaboration among teams, ensure consistency and reliability, and
achieve faster and more predictable deployments.
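As a sketch of the cookbook style described above, here is a minimal Chef recipe; the package and service names are illustrative, not from a specific cookbook:

```ruby
# recipes/default.rb -- declare that Apache is installed and running.
# Chef converges the node to this state on each chef-client run.
package 'httpd'

service 'httpd' do
  action [:enable, :start]   # enable at boot and start now
end
```

Like Puppet manifests, the recipe is declarative and idempotent: rerunning it on a node that is already in the desired state makes no changes.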
SaltStack
Deploying with SaltStack in DevOps involves utilizing SaltStack's
configuration management and orchestration capabilities to automate the
deployment and management of infrastructure and applications. Here's an
overview of deploying with SaltStack in a DevOps context:
1. Infrastructure Orchestration: SaltStack enables infrastructure
orchestration by defining infrastructure configurations as code. Using
SaltStack's configuration files called "states," you can specify the desired state
of your infrastructure, including server configurations, networking settings,
package installations, and more.
2. Salt Master and Minions: SaltStack uses a master-minion architecture.
The Salt Master is the central control node that manages and controls the
Salt Minions, which are the managed systems. Minions communicate with the
Salt Master to receive instructions, retrieve configurations, and report their
current state.
3. Salt States: SaltStack's configuration management is based on Salt States.
States are written in YAML or Jinja, representing the desired configuration of
a system or a group of systems. States can be used to define package
installations, file configurations, service management, user management, and
other aspects of your infrastructure.
4. Pillars and Grains: SaltStack provides two key concepts for managing data:
Pillars and Grains. Pillars allow you to securely store sensitive data, such as
credentials and secrets, separate from the Salt State files. Grains, on the other
hand, are system-specific data that can be used for targeting specific
configurations or applying customizations based on system properties.
5. Orchestration and Reactors: SaltStack offers powerful orchestration
capabilities that allow you to automate complex workflows and coordination
between multiple systems. Orchestration modules enable you to define
sequences of tasks, parallel executions, event-driven actions, and more.
Reactors enable reacting to specific events and triggering actions based on
those events.
6. Remote Execution: SaltStack allows you to execute commands and scripts
remotely on targeted minions, making it convenient for managing and
deploying configurations across multiple systems simultaneously. Remote
execution capabilities provide efficiency and consistency in managing your
infrastructure.
7. High Availability and Scalability: SaltStack supports high availability
setups, where multiple Salt Masters can be configured for redundancy and
failover. It also supports scalability with the ability to manage thousands of
minions efficiently.
8. Testing and Continuous Integration: Salt States can be tested
automatically, for example by performing dry runs with test=True or by
using community tools such as kitchen-salt. Integrating
SaltStack into the continuous integration and delivery (CI/CD) pipeline
ensures that changes to infrastructure configurations are tested and validated
before deployment.
9. Monitoring and Logging: Integrating SaltStack with monitoring and
logging tools allows you to capture and analyze real-time data about your
infrastructure and applications. Monitoring helps detect issues and
performance bottlenecks, while logging provides visibility into the actions and
changes made by SaltStack during deployments.
By leveraging SaltStack within the DevOps workflow, you can automate
infrastructure configurations, enforce consistency, reduce manual effort, and
achieve faster and more reliable deployments. SaltStack's flexible and scalable
architecture makes it suitable for managing small to large-scale
infrastructures efficiently.
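A small Salt State sketch ties together the concepts above; the state ID, file path, and package name are illustrative assumptions:

```yaml
# /srv/salt/apache.sls -- desired state for an Apache web server
apache:
  pkg.installed:
    - name: httpd
  service.running:
    - name: httpd
    - require:
      - pkg: apache      # start the service only after the package state
```

Applying it with `salt 'webserver*' state.apply apache` from the Salt Master would converge all targeted minions to this state.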

Docker
Deploying with Docker in DevOps involves leveraging Docker containers to
streamline the deployment, scaling, and management of applications. Docker
provides a lightweight and portable runtime environment that encapsulates
applications and their dependencies, making them highly portable and
consistent across different environments. Here's an overview of deploying with
Docker in a DevOps context:
1. Containerization: Docker allows you to package applications, along with
their dependencies and configurations, into self-contained units called
containers. Containers are isolated and encapsulate the application and its
runtime environment, including libraries, binaries, and configuration files.
This enables consistent and reproducible deployments across different
environments.
2. Docker Images: Docker images are the building blocks of containers. An
image is a read-only template that contains the application, its dependencies,
and the instructions to run it. Images are created using Dockerfiles, which
define the steps to build the image. Docker images can be versioned, stored
in registries, and easily shared among team members.
3. Docker Engine: The Docker Engine is the core component of Docker that
runs and manages containers. It provides the runtime environment for
containers and handles tasks such as container creation, resource allocation,
networking, and lifecycle management.
4. Docker Compose: Docker Compose is a tool that allows you to define and
manage multi-container applications. It uses a YAML file to specify the
services, networks, and volumes required by your application. Docker
Compose simplifies the orchestration of multiple containers, allowing you to
define and manage the relationships between them.
5. Container Orchestration: In larger-scale deployments, container
orchestration platforms like Kubernetes or Docker Swarm can be used. These
platforms provide advanced features for managing containerized applications,
including scaling, load balancing, service discovery, rolling updates, and fault
tolerance.
6. Continuous Integration and Continuous Deployment (CI/CD): Docker
integrates well with CI/CD pipelines, enabling automated builds, tests, and
deployments. Docker images can be built, tested, and pushed to a registry as
part of the CI/CD workflow. Containers can then be deployed to different
environments, such as development, staging, and production, using the same
Docker image.
7. Infrastructure as Code: Docker, along with tools like Docker Compose or
Kubernetes, allows you to define your infrastructure as code. Infrastructure
configurations can be version-controlled, reviewed, and managed alongside
your application code, ensuring consistency and reproducibility in
deployments.
8. Monitoring and Logging: Docker provides logging mechanisms to capture
container logs, which can be aggregated and analyzed using logging
frameworks. Monitoring tools can be integrated to collect metrics from
containers, enabling visibility into resource usage, performance, and health
of the deployed applications.
By adopting Docker in the DevOps workflow, you can achieve faster and more
reliable deployments, improved scalability, simplified environment
management, and better isolation of applications. Docker's containerization
approach enhances consistency, portability, and flexibility, making it easier
to manage and scale applications across different environments.
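A minimal docker-compose.yml sketch illustrates how several of these ideas fit together; the service names, ports, and password are illustrative assumptions only:

```yaml
services:
  web:
    build: .                 # build the image from the local Dockerfile
    ports:
      - "8080:80"            # map host port 8080 to container port 80
    depends_on:
      - db                   # start the database before the web service
  db:
    image: postgres:16       # official database image from a registry
    environment:
      POSTGRES_PASSWORD: example   # illustrative only; use secrets in practice
```

Running `docker compose up` then builds and starts both containers with their declared relationships, which is the multi-container orchestration described in point 4.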
