Unit 5-Dev
Automated testing is a powerful tool that can help you and your team in many
ways, from writing better code to simplifying regression testing. Unfortunately,
automated testing is often misunderstood by developers who don't see any value
in it.
Many people confuse automated testing with automatic (or robotic) testing, a
form of automated testing that uses automation tools to execute tests without
human intervention. In this article, however, we will focus on the more common
definition of automated testing.
1. Increased accuracy
One of the main benefits of automated testing is that it can increase accuracy,
since automated tests are less likely to be affected by human error.
When tests are automated, they run more frequently and with greater
consistency than when tests are run manually. This is beneficial when dealing
with a large codebase or when new features are added. In addition, automated
testing helps ensure that any errors or defects in the code are identified and
fixed as quickly as possible.
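To make this concrete, here is a minimal sketch of an automated test using Python's built-in unittest framework. The discount function is an invented example for illustration, not something from the text:

```python
import unittest

def apply_discount(price, percent):
    """Apply a percentage discount to a price, rounded to 2 decimals."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

class TestApplyDiscount(unittest.TestCase):
    def test_typical_discount(self):
        self.assertEqual(apply_discount(100.0, 25), 75.0)

    def test_no_discount(self):
        self.assertEqual(apply_discount(50.0, 0), 50.0)

    def test_invalid_percent_rejected(self):
        with self.assertRaises(ValueError):
            apply_discount(100.0, 150)
```

Run with `python -m unittest`; the suite executes every check in exactly the same way on every run, which is where the consistency over manual testing comes from.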
2. Faster execution
Automated testing can also lead to faster test execution, because tests can run
concurrently instead of serially. Running tests concurrently means more tests
complete in a shorter amount of time.
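As a sketch of why concurrency helps, the following Python snippet (an illustrative stand-in, not tied to any particular test framework) compares serial and concurrent execution of slow, I/O-bound checks:

```python
import time
from concurrent.futures import ThreadPoolExecutor

def slow_check(n):
    """Stand-in for an I/O-bound test case (e.g. an HTTP call)."""
    time.sleep(0.1)
    return n % 2 == 0

def run_serially(cases):
    # one test at a time: total time is the sum of all the waits
    return [slow_check(c) for c in cases]

def run_concurrently(cases, workers=8):
    # all tests at once: total time is roughly one wait
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(slow_check, cases))

cases = list(range(8))

start = time.perf_counter()
serial_results = run_serially(cases)          # ~8 x 0.1 s
serial_time = time.perf_counter() - start

start = time.perf_counter()
concurrent_results = run_concurrently(cases)  # ~0.1 s with 8 workers
concurrent_time = time.perf_counter() - start

print(f"serial: {serial_time:.2f}s, concurrent: {concurrent_time:.2f}s")
```

The speedup is largest for I/O-bound tests; CPU-bound tests would need processes rather than threads in CPython.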
3. Reduced costs
Automated testing can also lead to reduced costs. When tests are automated, the
need for manual testers is reduced. In addition, the time needed to execute tests
is reduced, leading to savings in terms of both time and money.
Moreover, automated tests can help reduce the cost of software development by
detecting and fixing errors earlier in the process. They can also help reduce the
cost of supporting your application, as automated tests will need less time to find
and fix bugs.
4. Increased efficiency
Automated testing can help improve developer productivity by automating tasks
that would otherwise have to be done manually.
For example, you can configure your continuous integration (CI) system to
automatically execute and monitor the results of your automated tests each time
a new feature or change is introduced into your application. This will help
ensure that any issues in the recent changes are identified and fixed as quickly
as possible.
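As a sketch of what such a CI configuration can look like, here is a minimal workflow in GitHub Actions syntax. The file name, Python version and commands are illustrative assumptions, not details from the original text:

```yaml
# .github/workflows/tests.yml — runs the automated test suite on every push
name: tests
on: [push, pull_request]
jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: "3.12"
      - run: pip install -r requirements.txt
      - run: python -m pytest   # fail the build if any test fails
```

Any CI system (Jenkins, GitLab CI, etc.) supports the same pattern: trigger on change, run the suite, report failures immediately.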
5. Improved scalability
Automated tests can be run on many devices and configurations, making it
easier to test more things at once.
For example, automated tests can be written to measure the performance of your
application on different devices or browsers. This allows you to more easily test
the different variations in which your application is being served and ensure that
these are running as expected across a variety of end-user devices.
1. Complexity
Automated tests can take longer to develop than manual tests, especially if they
are not well designed. They can also be more challenging to integrate into your
development workflow.
If your tests are complex or hard to maintain, it could lead to a reduction in the
quality of your test suite. This can have negative consequences for achieving
continuous testing throughout the application lifecycle.
2. Difficult to design tests that are both reliable and maintainable
Designing a comprehensive suite of automated tests is no small task. They need
to be reliable enough that they can be run frequently and consistently without
giving you false positives or negatives. On the other hand, your test scripts must
be maintainable enough to adapt to changes in your application. This requires a
high level of developer expertise and careful design and implementation.
Selenium
- Introduction
Selenium is one of the most widely used open-source Web UI (User Interface)
automation testing suites. It was originally developed by Jason Huggins in 2004
as an internal tool at ThoughtWorks. Selenium supports automation across
different browsers, platforms and programming languages.
Selenium can be easily deployed on platforms such as Windows, Linux, Solaris
and Macintosh. Moreover, it supports mobile operating systems such as iOS,
Windows Mobile and Android.
Selenium can be used to automate functional tests and can be integrated with
automation tools such as Maven, Jenkins and Docker to achieve continuous
testing. It can also be integrated with tools such as TestNG and JUnit for
managing test cases and generating reports.
- Selenium features
• Selenium is an open source and portable Web testing Framework.
• Selenium IDE provides a playback and record feature for authoring tests
without the need to learn a test scripting language.
• Selenium IDE helps testers record their actions and export them as a reusable
script, with a simple-to-understand and easy-to-use interface.
• Selenium supports various operating systems, browsers and programming
languages. Following is the list:
• Programming Languages: C#, Java, Python, PHP, Ruby, Perl, and JavaScript
• Operating Systems: Android, iOS, Windows, Linux, Mac, Solaris.
• Browsers: Google Chrome, Mozilla Firefox, Internet Explorer, Edge, Opera,
Safari, etc.
• It also supports parallel test execution, which reduces time and increases the
efficiency of tests.
• Selenium can be integrated with frameworks like Ant and Maven for source
code compilation.
• Selenium can also be integrated with testing frameworks like TestNG for
application testing and generating reports.
• Selenium requires fewer resources as compared to other automation test tools.
• The WebDriver API has been integrated into Selenium, which is one of the most
important modifications made to Selenium.
• Selenium WebDriver does not require server installation; test scripts interact
directly with the browser.
• Selenium commands are categorized into different classes, which makes them
easier to understand and implement.
• Selenium Remote Control (RC) in conjunction with the WebDriver API is known
as Selenium 2.0. This version was built to support dynamic web pages and
Ajax.
- JavaScript testing
JavaScript Unit Testing is a method in which JavaScript test code is written for a
web page or application module.
It is then combined with HTML as an inline event handler and executed in the
browser to test if all functionalities work as desired. These unit tests are then
organized in the test suite.
The following JavaScript testing frameworks are helpful for unit testing in
JavaScript:
1. Unit.js
Unit.js is an assertion library for JavaScript that runs on Node.js and in the
browser. It works with any test runner and unit testing framework, such as
Mocha, Jasmine, Karma, Protractor (an E2E test framework for Angular apps),
QUnit, etc.
2. Mocha
Mocha is a test framework that runs both in Node.js and in the browser. Mocha
runs tests serially, which makes asynchronous testing simple and allows for
flexible and accurate reporting while mapping uncaught exceptions to the
correct test case. It supports all major browsers, including headless Chrome,
and is convenient for developers writing test cases.
3. Jest
Jest is an open-source testing framework built on JavaScript, designed mainly to
work with React and React Native-based web applications. Often, unit tests are
not very useful when run on the front end of any software, mostly because unit
tests for the front end require extensive, time-consuming configuration. This
complexity can be reduced to a great extent with the Jest framework.
4. Jasmine
Jasmine is a popular JavaScript behavior-driven development framework for unit
testing JavaScript applications. It provides utilities that run automated tests for
both synchronous and asynchronous code. It is also highly beneficial for front-
end testing.
5. Karma
Karma is a Node-based test tool that allows you to test your JavaScript code
across multiple browsers. It makes test-driven development fast, fun, and easy;
technically, it is termed a test runner.
6. Cypress
Cypress is a JavaScript-based end-to-end testing framework built on top of
Mocha, a feature-rich JavaScript test framework running on Node.js and in the
browser that makes asynchronous testing simple and convenient. Unit tests in
Cypress are executed without even having to run a web server, which makes
Cypress an ideal tool for testing a JS/TS library meant to be used in the
browser.
7. NightwatchJS
The Nightwatch.js framework is a Selenium-based test automation framework
written in Node.js that uses the W3C WebDriver API (formerly Selenium
WebDriver). It communicates over a RESTful HTTP API with a WebDriver server
(such as ChromeDriver or Selenium Server). The protocol is defined by the W3C
WebDriver spec, which is derived from the JSON Wire protocol.
Integration Testing
Integration Testing is defined as a type of testing where software modules are
integrated logically and tested as a group. A typical software project consists of
multiple software modules, coded by different programmers. The purpose of
this level of testing is to expose defects in the interaction between these
software modules when they are integrated.
Integration Testing focuses on checking data communication among these
modules. Hence it is also termed ‘I & T’ (Integration and Testing), ‘String
Testing’ and sometimes ‘Thread Testing’.
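The idea can be sketched with two tiny invented Python modules: a storage layer and a service layer that depends on it. The integration test exercises both together and checks the data passed between them (the class and method names here are illustrative, not from any real project):

```python
import unittest

class UserStore:
    """Module A: a tiny in-memory 'database' layer."""
    def __init__(self):
        self._users = {}
    def add(self, user_id, name):
        self._users[user_id] = name
    def get(self, user_id):
        return self._users.get(user_id)

class GreetingService:
    """Module B: a service layer that depends on the store."""
    def __init__(self, store):
        self.store = store
    def greet(self, user_id):
        name = self.store.get(user_id)
        if name is None:
            return "Hello, stranger!"
        return f"Hello, {name}!"

class TestGreetingIntegration(unittest.TestCase):
    """Integration tests: both modules wired together, no mocks."""
    def test_greets_known_user(self):
        store = UserStore()
        store.add(1, "Ada")
        self.assertEqual(GreetingService(store).greet(1), "Hello, Ada!")

    def test_greets_unknown_user(self):
        self.assertEqual(GreetingService(UserStore()).greet(42),
                         "Hello, stranger!")
```

A unit test would replace `UserStore` with a stub; the integration test deliberately uses the real one, so defects in the interaction surface here.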
- Test-driven development
Test-Driven Development (TDD) is a software development approach in which
test cases are developed to specify and validate what the code will do. In simple
terms, test cases for each functionality are created and run first; if a test fails,
new code is written to pass the test, keeping the code simple and bug-free.
Test-Driven Development starts with designing and developing tests for every
small functionality of an application. The TDD approach instructs developers to
write new code only if an automated test has failed. This avoids duplication of
code.
The simple concept of TDD is to write and correct the failed tests before writing
new code (before development). This helps to avoid duplication of code, as we
write a small amount of code at a time in order to pass tests. (Tests are nothing
but requirement conditions that we need to test and fulfill.)
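The red-green cycle can be sketched in a few lines of Python. FizzBuzz is used here purely as an invented stand-in for "a small piece of functionality":

```python
# Step 1 (red): write the test first. Run before fizzbuzz exists,
# this function would fail with a NameError — that is the failing test
# that licenses us to write new code.
def test_fizzbuzz():
    assert fizzbuzz(3) == "Fizz"
    assert fizzbuzz(5) == "Buzz"
    assert fizzbuzz(15) == "FizzBuzz"
    assert fizzbuzz(7) == "7"

# Step 2 (green): write just enough code to make the test pass.
def fizzbuzz(n):
    if n % 15 == 0:
        return "FizzBuzz"
    if n % 3 == 0:
        return "Fizz"
    if n % 5 == 0:
        return "Buzz"
    return str(n)

# Step 3: run the test again; it now passes, and we can refactor safely
# because the test guards against regressions.
test_fizzbuzz()
```

Each new requirement repeats the cycle: add a failing assertion, then the minimal code that satisfies it.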
- REPL-driven development
REPL is short for read-evaluate-print-loop. It means an interactive terminal such
as your Bash shell or DOS command line interface, where you type a command
and see an immediate response. A command or expression is read and then
evaluated. The result is then printed to screen.
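The loop itself is simple enough to sketch in a few lines of Python. This is an illustrative toy, not how a real shell or the Python prompt is implemented; the restricted `eval` only handles simple expressions:

```python
def tiny_repl(lines):
    """A toy read-evaluate-print loop over a list of input expressions."""
    outputs = []
    for line in lines:                                 # read
        try:
            result = eval(line, {"__builtins__": {}})  # evaluate (restricted)
        except Exception as exc:
            result = f"error: {exc}"
        outputs.append(str(result))                    # print
    return outputs

# Simulated session, like typing into an interactive prompt:
session = ["1 + 2", "2 ** 10", "'ab' * 3"]
print(tiny_repl(session))
```

A real REPL reads from the terminal instead of a list and loops forever, but the read-evaluate-print structure is the same.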
- Deployment systems
Deployment in DevOps is a process that enables you to retrieve important codes
from version control so that they can be made readily available to the public and
they can use the application in a ready-to-use and automated fashion.
Deployment tools DevOps comes into play when the developers of a particular
application are working on certain features that they need to build and
implement in the application. It is a very effective, reliable, and ef cient means
of testing and deploying organizational work.
• In the first phase of testing, the code is merged for internal testing.
• The next phase is staging, where the client's testing takes place as per their
requirements.
• Last but not least, the production phase makes sure that no other feature gets
impacted when these changes are updated on the server.
DevOps deployment tools make the functionality of the servers very convenient
and easy for the users. This differs from the traditional way of dealing with
applications, and the improvement has given positive results to companies and
users alike.
DevOps tools make the whole deployment process an easy-going one, and they
can help you with the following aspects:
• Increased development speed.
• Improvement in operational efficiency.
• Faster releases.
• Non-stop delivery.
• Quicker rate of innovation.
• Improvement in collaboration.
• Seamless flow in the process chain.
- Virtualization stack
1. Hardware Virtualization
This may be considered the most common type these days. The best example is
a virtual machine, which works and looks like a real system with the same or a
different operating system.
2. Network Virtualization
It is a process in which a combination of software and hardware network
resources form a single software network, which is commonly known as Virtual
Network. Also, the available bandwidth is divided into several independent
channels, which can be used by real devices and servers.
3. Desktop Virtualization
In this case, the logical or virtual desktop is separated from the physical
desktop. Instead of accessing the desktop through the computer's own hardware,
such as its keyboard and mouse, the desktop is accessed remotely from another
system over a network connection. The network can be a wired/wireless LAN or
the internet. The user can therefore access their files from any system without
physically operating the machine that contains the data.
4. Storage Virtualization
In this case, a combination of several storage disks forms a storage pool or
group. These groups are virtual storage units. These can then be assigned to
servers for use. Logical volumes are one of the examples of it, which represent
the storage as a coherent unit rather than a physical unit.
5. Application Virtualization
In this case, applications are virtualized and encapsulated. Virtual applications
are not installed like traditional applications, but behave as if they were
installed.
6. Server Virtualization
This type comes in handy when we need to run multiple operating systems
simultaneously on a single physical server. With this process, the performance,
capacity and efficiency of the server are increased, while management costs and
complexity are reduced.
2. Testing Environment
Virtualization lets us set up a local testing environment, which can be used for
various kinds of software testing. Even if a server crashes, there is no data loss,
so reliability is increased and the software can be tested in this virtual
environment until it is ready for live deployment.
3. Energy saving
Virtualization saves energy: instead of running on dedicated local servers,
workloads run on virtual machines, which lowers power consumption. The
money saved can then be used for other useful operations.
1. Time consumption
Even though development and testing time is saved, virtualization still consumes
considerable time of its own, as its configuration and deployment take time.
2. Security risk
There is a significant risk of data breach, as remote access to virtualized
desktops or applications is not a very secure option.
3. Infrastructure knowledge
To work with virtualization, the IT staff should have expertise in it. Hence,
either existing employees must be trained, or new employees are required if an
organization wants to start working with virtualization and DevOps. This takes
much time and costs much money.
Puppet follows the client-server model, where one machine in a cluster acts as
the server, known as the Puppet master, and the others act as clients, known as
agents, on nodes. Puppet has the capability to manage any system from scratch,
from initial configuration to the end of life of any particular machine.
1. Idempotency
Puppet supports idempotency, which makes it unique. Similar to Chef, in Puppet
one can safely run the same set of configuration multiple times on the same
machine. Puppet checks the current status of the target machine and only makes
changes when there is a specific change in the configuration.
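The pattern behind idempotency can be sketched in plain Python. This is an illustrative toy, not real Puppet code: the point is that the function checks the current state first and turns repeat runs into no-ops:

```python
def ensure_package(installed, name):
    """Ensure `name` is in the set of installed packages.

    Returns 'changed' on the first run and 'unchanged' on repeats,
    so applying the same configuration twice is safe.
    """
    if name in installed:
        return "unchanged"   # already in the desired state: do nothing
    installed.add(name)      # apply the change only when needed
    return "changed"

state = {"openssl"}
print(ensure_package(state, "nginx"))   # first run applies the change
print(ensure_package(state, "nginx"))   # second run is a no-op
```

Contrast this with a non-idempotent script such as `append line to config file`, which corrupts the file when run twice.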
2. Cross-platform
In Puppet, with the help of the Resource Abstraction Layer (RAL), which uses
Puppet resources, one can target the specified configuration of a system without
worrying about the implementation details or how the configuration commands
will work inside the system, as these are defined in the underlying configuration
file.
Puppet uses the following workflow to apply configuration on the system.
• The first thing the Puppet master does is to collect the details of the target
machine. Using Facter, which is present on all Puppet nodes (similar to Ohai
in Chef), it gets all the machine-level configuration details. These details are
collected and sent back to the Puppet master.
• The Puppet master then compares the retrieved configuration with the defined
configuration details, creates a catalog from the defined configuration, and
sends it to the targeted Puppet agents.
• The Puppet agent then applies those configurations to get the system into the
desired state.
• Finally, once the target node is in the desired state, the agent sends a report
back to the Puppet master, which helps the Puppet master understand the
current state of the system as defined in the catalog.
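The workflow above can be simulated with a toy Python sketch. The function names and the fact/catalog dictionaries are invented for illustration; real Puppet uses Facter, manifests and a compiled catalog format, not these Python structures:

```python
def collect_facts(node):
    """The agent side: report the machine's current details ('facts')."""
    return dict(node)

def compile_catalog(desired, facts):
    """The master side: keep only the settings that differ from the facts."""
    return {key: value for key, value in desired.items()
            if facts.get(key) != value}

def apply_catalog(node, catalog):
    """The agent applies the catalog and reports back what changed."""
    node.update(catalog)
    return sorted(catalog)

node = {"ntp": "absent", "sshd": "running"}          # current state
desired = {"ntp": "running", "sshd": "running"}      # defined configuration

facts = collect_facts(node)
catalog = compile_catalog(desired, facts)
changed = apply_catalog(node, catalog)
print(changed)              # only 'ntp' needed a change
print(node == desired)      # node is now in the desired state
```

Note that `sshd` never appears in the catalog: the compare step is what makes repeated runs cheap and idempotent.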
Puppet Architecture
Puppet Master
The Puppet master is the key mechanism that handles all configuration-related
work. It applies the configuration to nodes using the Puppet agent.
Puppet Agent
Puppet Agents are the actual working machines which are managed by the
Puppet master. They have the Puppet agent daemon service running inside
them.
Config Repository
This is the repository where all node- and server-related configurations are
saved and pulled when required.
Facts
Facts are the details related to the node or the master machine, which are
basically used for analyzing the current status of any node. On the basis of
facts, changes are made on any target machine. There are pre-defined and
custom facts in Puppet.
Catalog
All the manifest files or configurations written in Puppet are first converted to a
compiled format called a catalog; these catalogs are later applied on the target
machine.
- Ansible
Ansible is a simple open-source IT automation engine that automates application
deployment, intra-service orchestration, cloud provisioning and many other IT
processes.
Ansible is easy to deploy because it does not use any agents or custom security
infrastructure.
Ansible uses playbooks to describe automation jobs, and playbooks use a very
simple language, YAML (a human-readable data serialization language
commonly used for configuration files, though usable in many applications
where data is stored), which is very easy for humans to understand, read and
write. Hence the advantage is that even IT infrastructure support staff can read
and understand a playbook and debug it if needed.
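A minimal playbook sketch, to show what that YAML looks like (the host group and package names here are illustrative assumptions, not from the original text):

```yaml
---
- name: Ensure web servers are configured
  hosts: webservers        # group defined in the inventory file
  become: true             # escalate privileges on the target hosts
  tasks:
    - name: Install nginx
      ansible.builtin.package:
        name: nginx
        state: present
    - name: Ensure nginx is running
      ansible.builtin.service:
        name: nginx
        state: started
```

Each task names a desired state rather than a command to run, which is what keeps playbooks readable to non-developers.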
Ansible is designed for multi-tier deployment. Ansible does not manage one
system at a time; it models IT infrastructure by describing how all of your
systems are interrelated. Ansible is completely agentless, which means Ansible
works by connecting to your nodes through SSH (by default). If you want another
connection method, such as Kerberos, Ansible gives you that option.
The management node is the controlling node, which controls the entire
execution of the playbook. It is the node from which you run the installation.
The inventory file provides the list of hosts where the Ansible modules need to
run; the management node makes an SSH connection, executes the small
modules on the host machines and installs the product/software.
The beauty of Ansible is that it removes the modules once they have run:
effectively, it connects to the host machine, executes the instructions and, if the
installation is successful, removes the code that was copied to and executed on
the host machine.
Deployment tools:
Chef has its own conventions for the different building blocks that are required
to manage and automate infrastructure.
Why Chef?
Chef is a configuration management technology used to automate infrastructure
provisioning. It is built on a Ruby DSL (domain-specific language). It is used to
streamline the task of configuring and managing a company's servers, and it
can be integrated with any cloud technology.
In DevOps, we use Chef to deploy and manage servers and applications, both
in-house and on the cloud.
Features of Chef
Following are the most prominent features of Chef −
• Chef uses the popular Ruby language to create a domain-specific language.
• Chef does not make assumptions about the current status of a node. It uses its
own mechanisms to get the current status of the machine.
• Chef is ideal for deploying and managing the cloud server, storage, and
software.
Advantages of Chef
Chef offers the following advantages −
• Lower barrier to entry − As Chef uses the native Ruby language for
configuration, it can be easily picked up by anyone with some development
experience.
• Excellent integration with cloud − Using the knife utility, it can be easily
integrated with any of the cloud technologies. It is the best tool for an
organization that wishes to distribute its infrastructure on multi-cloud
environment.
Disadvantages of Chef
Some of the major drawbacks of Chef are as follows −
• One of the big disadvantages of Chef is the way cookbooks are controlled.
They need constant attention so that the people working on them do not
interfere with each other's cookbooks.
• Only Chef Solo is available.
• In the current situation, it is only a good fit for the AWS cloud.
• It is not very easy to learn if the person is not familiar with Ruby.
• Documentation is still lacking.
Architecture
Chef works on a three-tier client-server model wherein the working units, such
as cookbooks, are developed on the Chef workstation. Using command-line
utilities such as knife, they are uploaded to the Chef server, and all the nodes
present in the architecture are registered with the Chef server.
Chef Workstation
This is the location where all the configurations are developed. The Chef
workstation is installed on the local machine.
Chef Server
This works as the centralized working unit of a Chef setup, where all the
configuration files are uploaded after development. There are different kinds of
Chef server: some are hosted Chef servers, whereas others are on-premise.
Chef Nodes
They are the actual machines that are managed by the Chef server. All the
nodes can have different kinds of setup as per requirement. The Chef client is
the key component of all the nodes; it sets up the communication between the
Chef server and the Chef node. Another component of a Chef node is Ohai,
which gets the current state of a node at a given point in time.
SaltStack
Salt is a very powerful automation framework. Salt architecture is based on the
idea of executing commands remotely. All networking is designed around some
aspect of remote execution. This could be as simple as asking a Remote Web
Server to display a static Web page, or as complex as using a shell session to
interactively issue commands against a remote server. Salt is an example of one
of the more complex types of remote execution.
Imagine that you have multiple servers and want to do things to those servers.
You would need to log in to each one and do those things one at a time, and
then you might want to do complicated things like installing software and then
configuring that software based on some specific criteria.
Let us assume you have ten or maybe even 100 servers. Logging in to each
server individually, issuing the same commands on those 100 machines and
then editing the configuration files on all 100 machines becomes a very tedious
task. To overcome such issues, you would love to update all your servers at
once, just by typing one single command. SaltStack provides exactly that
solution.
Features of SaltStack
SaltStack is open-source configuration management software and a remote
execution engine. Salt is a command-line tool. While written in Python,
SaltStack configuration management is language-agnostic and simple. The Salt
platform uses the push model for executing commands via the SSH protocol.
The default configuration system uses YAML and Jinja templates. Salt primarily
competes with Puppet, Chef and Ansible.
Salt provides many features when compared to other competing tools. Some of
these important features are listed below.
• Fault tolerance − Salt minions can connect to multiple masters at one time by
configuring the master configuration parameter as a YAML list of all the
available masters. Any master can direct commands to the Salt infrastructure.
• Easy to Setup − Salt is easy to setup and provides a single remote execution
architecture that can manage the diverse requirements of any number of
servers.
• Language agnostic − Salt state configuration files, the templating engine and
file types support any type of language.
Benefits of SaltStack
Being simple as well as a feature-rich system, Salt provides many benefits,
which can be summarized as below −
• Robust − Salt is a powerful and robust configuration management framework
that works across tens of thousands of systems.
• Execution − Modules and ad hoc commands can be executed from the
command line against one or more minions. It performs real-time monitoring.
• Pillar − A pillar is an interface that generates and stores highly sensitive data
specific to a particular minion, such as cryptographic keys and passwords. It
stores data in key/value pairs, and the data is managed in a similar way to
the Salt State Tree.
• Top File − Matches Salt states and pillar data to Salt minions.
• Salt SSH − Runs Salt commands over SSH on systems without using a Salt
minion.