Types of Software Testing
Testing is the process of executing a program with the intent of finding errors. To make our
software perform well, it should be as free of errors as possible; however, even successful
testing cannot remove every error from the software. In this article, we will first discuss the
principles of testing and then the different types of testing.
Principles of Testing
All tests should be traceable to the customer’s requirements.
To make testing effective, it should be performed by an independent third party.
Exhaustive testing is not possible; we need an optimal amount of testing
based on the risk assessment of the application.
All the tests to be conducted should be planned before implementation begins.
Testing follows the Pareto rule (80/20 rule), which states that 80% of errors come from
20% of program components.
Start testing with small parts and extend it to large parts.
Types of Testing
There are basically 10 types of Testing.
Unit Testing
Integration Testing
System Testing
Functional Testing
Acceptance Testing
Smoke Testing
Regression Testing
Performance Testing
Security Testing
User Acceptance Testing
Unit Testing
Unit testing is a method of testing individual units or components of a software
application. It is typically done by developers and is used to ensure that the individual
units of the software are working as intended. Unit tests are usually automated and
are designed to test specific parts of the code, such as a particular function or
method. Unit testing is done at the lowest level of the software development process,
where individual units of code are tested in isolation.
Advantages of Unit Testing: Some of the advantages of Unit Testing are listed
below.
It helps to identify bugs early in the development process before they become
more difficult and expensive to fix.
It helps to ensure that changes to the code do not introduce new bugs.
It makes the code more modular and easier to understand and maintain.
It helps to improve the overall quality and reliability of the software.
Note: Some popular frameworks and tools that are used for unit testing
include JUnit, NUnit, and xUnit.
It’s important to keep in mind that Unit Testing is only one aspect of software testing
and it should be used in combination with other types of testing such as integration
testing, functional testing, and acceptance testing to ensure that the software meets
the needs of its users.
It focuses on the smallest unit of software design. In this, we test an individual unit or
group of interrelated units. It is often done by the programmer by using sample input
and observing its corresponding outputs.
Example:
a) In a program we are checking if the loop, method, or function is
working fine.
b) Misunderstood or incorrect arithmetic precedence.
c) Incorrect initialization.
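As an illustration of the points above, here is a minimal, hypothetical unit test written with Python's built-in unittest module (an xUnit-style framework, in the same family as the JUnit, NUnit, and xUnit tools mentioned in the note). The apply_discount function is invented purely for this sketch.

import unittest

def apply_discount(price, percent):
    # Hypothetical unit under test: return the price reduced by the given percent.
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

class TestApplyDiscount(unittest.TestCase):
    def test_typical_discount(self):
        # Sample input and its expected output, checked in isolation.
        self.assertEqual(apply_discount(200.0, 10), 180.0)

    def test_zero_discount_keeps_price(self):
        self.assertEqual(apply_discount(99.99, 0), 99.99)

    def test_invalid_percent_is_rejected(self):
        # Unit tests also pin down error handling, e.g. incorrect initialization
        # or a misunderstood precondition.
        with self.assertRaises(ValueError):
            apply_discount(100.0, 150)

if __name__ == "__main__":
    unittest.main()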
Integration Testing
Integration testing is a method of testing how different units or components of a
software application interact with each other. It is used to identify and resolve any
issues that may arise when different units of the software are combined. Integration
testing is typically done after unit testing and before functional testing and is used to
verify that the different units of the software work together as intended.
Different Ways of Performing Integration Testing: There are different ways of
Integration Testing that are discussed below.
Top-down integration testing: It starts with the highest-level modules and
progressively integrates them with lower-level modules.
Bottom-up integration testing: It starts with the lowest-level modules and
integrates them with higher-level modules.
Big-Bang integration testing: It combines all the modules and integrates them all
at once.
Incremental integration testing: It integrates the modules in small groups, testing
each group as it is added.
Advantages of Integration Testing
1. It helps to identify and resolve issues that may arise when different units of the
software are combined.
2. It helps to ensure that the different units of the software work together as
intended.
3. It helps to improve the overall reliability and stability of the software.
4. It is essential for complex systems where different components are integrated
together.
5. As with unit testing, integration testing is only one aspect of software testing and it
should be used in combination with other types of testing, such as functional
testing and acceptance testing, to ensure that the software meets the
needs of its users.
The objective is to take unit-tested components and build a program structure that
has been dictated by design. Integration testing is testing in which a group of
components is combined to produce output.
Integration testing is of four types: (i) Top-down (ii) Bottom-up (iii) Sandwich (iv) Big-
Bang
Example:
(a) Black Box testing:- It is used for validation. In this, we ignore
internal working mechanisms and focus on what the output is.
(b) White box testing:- It is used for verification. In this, we focus on
internal mechanisms, i.e. how the output is achieved.
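To make the idea concrete, here is a small, hypothetical integration test in Python's built-in unittest: two units that are assumed to have already passed unit testing (a repository and a formatter) are combined, and the test checks that they work together. All names are invented for this sketch.

import unittest

class UserRepository:
    # First unit: looks up name parts for a user id (in-memory for the sketch).
    def __init__(self):
        self._users = {1: ("Ada", "Lovelace")}

    def get_name_parts(self, user_id):
        return self._users[user_id]

def format_full_name(first, last):
    # Second unit: formats a display name.
    return f"{last}, {first}"

def full_name_for(user_id, repo):
    # The integrated behaviour: repository output feeding the formatter.
    first, last = repo.get_name_parts(user_id)
    return format_full_name(first, last)

class TestUserNameIntegration(unittest.TestCase):
    def test_repository_and_formatter_work_together(self):
        # Verifies the interface between the two units, not their internals.
        self.assertEqual(full_name_for(1, UserRepository()), "Lovelace, Ada")

if __name__ == "__main__":
    unittest.main()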
Regression Testing
Regression testing is a method of testing that is used to ensure that changes made to
the software do not introduce new bugs or cause existing functionality to break. It is
typically done after changes have been made to the code, such as bug fixes or new
features, and is used to verify that the software still works as intended.
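In practice a regression suite is simply the existing automated tests re-run after every change, often with one extra test pinned to each bug that has been fixed so that it cannot silently return. The sketch below is illustrative only; the function and the bug it refers to are invented.

import unittest

def parse_quantity(text):
    # Hypothetical function that, in this invented history, once crashed on
    # input with surrounding whitespace before the fix shown here.
    return int(text.strip())

class TestParseQuantityRegression(unittest.TestCase):
    def test_existing_behaviour_still_works(self):
        self.assertEqual(parse_quantity("42"), 42)

    def test_whitespace_bug_stays_fixed(self):
        # Pinned regression test, re-run after every change to the code.
        self.assertEqual(parse_quantity("  7 "), 7)

if __name__ == "__main__":
    unittest.main()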
Object-Oriented Testing
Object-Oriented Testing is a combination of various testing techniques that
help to verify and validate object-oriented software. This testing is done in the
following manner:
Testing of Requirements,
Design and Analysis of Testing,
Testing of Code,
Integration testing,
System testing,
User Testing.
Acceptance Testing
Acceptance testing is done by the customers to check whether the delivered product
performs the desired tasks, as stated in the requirements. Object-Oriented Testing is
also used for discussing test plans and for executing the projects.
Advantages of Software Testing
Improved software quality and reliability.
Early identification and fixing of defects.
Improved customer satisfaction.
Increased stakeholder confidence.
Reduced maintenance costs.
Disadvantages of Software Testing
Time-Consuming and adds to the project cost.
This can slow down the development process.
Not all defects can be found.
Can be difficult to fully test complex systems.
Potential for human error during the testing process.
Questions For Practice
1. With respect to Software Testing, consider a flow graph G with one connected
component. Let E be the number of edges, N be the number of nodes, and P be the
number of predicate nodes of G. Consider the following four expressions: [GATE IT -
2006]
I. E-N+P
II. E-N+2
III. P+2
IV. P+1
The cyclomatic complexity of G is given by
(A) I or III
(B) II or III
(C) II or IV
(D) I or IV
Solution: The correct answer is (C). For a flow graph with one connected component,
the cyclomatic complexity is V(G) = E - N + 2, which also equals P + 1, so expressions
II and IV are both correct.
Frequently Asked Questions
1. What is a Test Case?
Answer:
A test case can be simply described as a set of conditions and inputs under which a
tester checks whether the code behaves as expected.
2. What is the use of automation testing?
Answer:
Automation Testing is used to reduce the testing effort and to deliver testing results
faster.
Acceptance Testing:
It is a kind of testing conducted to ensure that the requirements of the users
are fulfilled prior to delivery and that the software works correctly in the user’s
working environment.
Testing can be conducted at various stages of software development. The
levels of testing, along with the corresponding software development phases, are
shown by a levels-of-testing diagram (omitted here).
While performing software testing, the following testing principles must be applied by
every software engineer:
All tests should be traceable to the customer’s requirements.
Planning of how the tests will be conducted should be done long before the
beginning of testing.
The Pareto principle can be applied to software testing: 80% of all errors identified
during testing will likely be traceable to 20% of all program modules.
Testing should begin “in the small” and progress toward testing “in the large”.
Exhaustive testing, which simply means testing all possible combinations of
data, is not possible.
To make testing most effective, it should be conducted by an independent
third party.
1. Black Box Testing: In this technique, the tester or the QA analyst will only check
the functionality of the particular module or particular method or sometimes the
entire application by providing the different test cases manually. Here, the tester
will give the input for the application and test it manually. If it returns the expected
output, then the tester will proceed with another set of inputs and report all the
results to the team. If a manually given input fails during testing, then the tester
will report the issue to the development team.
2. White Box Testing: In this technique, the person will check the internal structure
of the system like designs, coding, etc., manually. Here, the development team
will review the entire coding part line by line to ensure the correctness of the code.
If they find any discrepancies or errors in the code, they will correct or fix
the errors in the code or designs. Here, the process is carried out entirely
manually, and it is thorough since the code or design is checked directly by
humans.
3. Gray Box Testing: This technique is the combination of both white-box testing
and black-box testing. Here, the internal structure of the application is partially
known by the tester. The tester will check both the internal structure and the
functionality of the application manually. The tester will check the coding part as
well as test the application by providing different test cases manually. If the input
fails at some stage, the tester will then make the changes in the coding part.
Tools Used for Manual Testing
TestLink: It is a web-based test management system that facilitates software quality
assurance, and it is one of the most user-friendly programs. It is available through a
browser connected to the internet.
Features of TestLink
This manual software testing tool supports various programming languages.
It supports cross-browser testing across different platforms.
It provides record and playback functionality for test automation.
Bugzilla: It is a web-based bug-tracking tool that is developed by Mozilla. It has a
simple bug search that searches the complete text of the bug report.
Features of Bugzilla
Supports various operating systems like Mac, Windows, and Linux.
Facility to list software bugs in different formats.
Bugzilla has an advanced searching facility.
Jira: It is a manual testing tool that helps teams assign, track, report, and manage
work and bring teams together. This tool is compatible with agile software projects
also.
Features of Jira
Option to track and manage bugs and defects.
Prioritize and assign tasks.
Collaborate with team members.
Easily generate reports and track progress.
LoadRunner: It is one of the most widely used performance testing tools. The
primary purpose of this tool is to categorize the most prevalent causes of
performance problems.
Features of LoadRunner
Simulates real-world user behavior and load.
Identifies bottlenecks and performance issues.
Scalable architecture for large-scale testing.
Provides detailed reports and analysis.
Apache JMeter: It is an open-source load testing tool for analysing and measuring
the performance of a variety of services. It has an easy-to-use user interface.
Features of JMeter
Simulates various types of load (web, database, etc.).
Highly configurable and extensible.
Integrates with various plugins for additional features.
Provides comprehensive performance reports and graphs.
Manual Testing vs Automation Testing
The differences between manual testing and automation testing can be seen through
their respective advantages and disadvantages, discussed below.
Advantages of Manual Testing
1. Fast and accurate visual feedback: It detects almost every bug in the software
application and is used to test the dynamically changing GUI designs like layout,
text, etc.
2. Less expensive: It is less expensive as it does not require any high-level skill or a
specific type of tool.
3. No coding is required: No programming knowledge is required while using the
black box testing method. It is easy to learn for the new testers.
4. Efficient for unplanned changes: Manual testing is suitable in case of
unplanned changes to the application, as it can be adapted easily.
Disadvantages of Manual Testing
1. Less reliable: Manual testing is less reliable as it cannot guarantee coverage of all
aspects of testing.
2. Can not be reused: There is a need to develop separate test cases for each new
software.
3. Large human resources required: Manual testing requires numerous human
resources, and there are some tasks that can’t be performed manually.
4. Needs experience: The tester needs to know the application well. They develop
test cases based on their experience, and there is no guarantee that all the functions
are covered.
5. Time-consuming: If the project is large, then the testing process is time-
consuming.
Overview
Well, this is the end of the article. Here we have discussed detailed information
about manual testing as well as automated testing. You will also find the key
differences between manual and automated software testing, along with the popular
tools that are used for manual testing.
Manual Testing: FAQs
Does manual testing require coding skills?
No, coding skills are not required for manual testing. However, some say that a basic
understanding of programming can help manual testers grow professionally.
How can I practice manual testing?
Practicing with manual testing tools like Selenium, Bugzilla, and TestLink will give you
an overview of how these tools work. Along with this, use guides that cover manual
testing.
Is manual testing a good career?
Manual testing can be a good career choice because it is easy to learn, no coding is
required, and there are no prerequisites to learn manual software testing.
Performance testing: Performance testing is a type of software testing that is
carried out to determine how the system performs in terms of stability and
responsiveness under a particular load.
Regression testing: Regression testing is a type of software testing that confirms
that previously developed software still works fine after the change and that the
change has not adversely affected existing features.
Security testing: Security testing is a type of software testing that uncovers the
risks, and vulnerabilities in the security mechanism of the software application. It
helps an organization to identify the loopholes in the security mechanism and take
corrective measures to rectify the security gaps.
Acceptance testing: Acceptance testing is the last phase of software testing that
is performed after the system testing. It helps to determine to what degree the
application meets end users’ approval.
API testing: API testing is a type of software testing that validates the Application
Programming Interface(API) and checks the functionality, security, and reliability
of the programming interface.
UI Testing: UI testing is a type of software testing that helps testers ensure that
all the fields, buttons, and other items on the screen function as desired.
Test Automation Frameworks
Some of the most common types of automation frameworks are:
Linear framework: This is the most basic form of framework and is also known as
the record and playback framework. In this testers create and execute the test
scripts for each test case. It is mostly suitable for small teams that don’t have a lot
of test automation experience.
Modular-Based Framework: This framework organizes each test case into small
individual units known as modules. Each module is independent of the others,
covering different scenarios, but all modules are handled by a single master script.
This approach requires a lot of pre-planning and is best suited for testers who
have experience with test automation.
Library Architecture Framework: This framework is an expansion of the modular-
based framework with a few differences. Here, tasks are grouped within the test
script into functions according to a common objective. These functions are stored
in a library so that they can be accessed quickly when needed. This framework
allows for greater flexibility and reusability, but creating scripts takes a lot of time,
so testers with experience in automation testing can benefit from this framework.
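As a rough sketch of the modular and library-style frameworks described above, reusable steps can live in one shared place and be called from several test cases and a single master script. Everything below is invented for illustration and uses only Python's built-in unittest.

import unittest

# Shared "library" of reusable steps, grouped by a common objective.
def login(session, user, password):
    # Hypothetical step; a real suite would drive the application here.
    session["user"] = user if password == "secret" else None
    return session["user"] is not None

def add_to_cart(session, item):
    session.setdefault("cart", []).append(item)
    return len(session["cart"])

# Individual test modules reuse the shared steps rather than repeating them.
class TestLogin(unittest.TestCase):
    def test_valid_login(self):
        self.assertTrue(login({}, "alice", "secret"))

class TestCart(unittest.TestCase):
    def test_add_item_after_login(self):
        session = {}
        self.assertTrue(login(session, "alice", "secret"))
        self.assertEqual(add_to_cart(session, "book"), 1)

# Master script: one entry point that discovers and runs all the modules.
if __name__ == "__main__":
    unittest.main()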
Which Tests to Automate?
Below are some of the parameters to decide which tests to automate:
Monotonous test: Repeatable and monotonous tests can be automated for
further use in the future.
A test requiring multiple data sets: Extensive tests that require multiple data
sets can be automated.
Business critical tests: High-risk business critical test cases can be automated
and can be scheduled to run regularly.
Determinant test: Determinant test cases where it is easy for the computer to
decide whether the test has failed or not can be automated.
Tedious test: Test cases that involve repeatedly doing the same action can be
automated so that the computer handles the repetitive task, since humans are
poor at performing repetitive tasks efficiently, which increases the chances of
error.
Automation Testing Process
1. Test Tool Selection: There will be some criteria for the selection of the tool. The
majority of the criteria include: Do we have skilled resources to allocate for
automation tasks? Are there budget constraints? Does the tool satisfy our needs?
2. Define Scope of Automation: This includes a few basic points: the framework
should support the automation scripts, require less maintenance, offer a high
return on investment, and not involve too many complex test cases.
3. Planning, Design, and Development: For this, we need to install the particular
frameworks or libraries (such as NUnit, JUnit, or QUnit) or the required software
automation tools, and start designing and developing the test cases.
4. Test Execution: The final execution of test cases takes place in this phase, and
the framework depends on the language: for .NET we’ll be using NUnit, for Java
JUnit, for JavaScript QUnit or Jasmine, etc.
5. Maintenance: Reports generated after test runs should be documented so that
they can be referred to in future iterations.
Criteria to Select Automation Tool
Following are some of the criteria for selecting the automation tool:
Ease of use: Some tools have a steep learning curve, they may require users to
learn a completely new scripting language to create test cases and some may
require users to maintain a costly and large test infrastructure to run the test
cases.
Support for multiple browsers: Cross-browser testing is vital for acceptance
testing. Users must check how easy it is to run the tests on different browsers that
the application supports.
Flexibility: No single tool or framework can support all types of testing, so it is
advisable to carefully observe what each tool offers and then decide.
Ease of analysis: Not all tools provide the same sort of analysis. Some tools
have a nice dashboard feature that shows all the statistics of the test like which
test failed and which test passed. On the other hand, some tools will first require
users to generate and download the test analysis report, which is not very
user-friendly. It depends entirely on the tester, project requirements, and
budget to decide which tool to use.
Cost of tool: Some tools are free and some are commercial tools but many other
factors need to be considered before deciding whether to use free or paid tools. If
a tool takes a lot of time to develop test cases and it is a business-critical process
that is at stake then it is better to use a paid tool that can generate test cases
easily and at a faster rate.
Availability of support: Free tools mostly provide community support on the
other hand commercial tools provide customer support, and training material like
tutorials, videos, etc. Thus, it is very important to keep in mind the complexity of
the tests before selecting the appropriate tool.
Best Practices for Test Automation
Below are some of the best practices for test automation that can be followed:
Plan self-contained test cases: It is important to ensure that the test is clearly
defined and well-written. The test cases should be self-contained and easy to
understand.
Plan the order to execute tests: Planning tests so that one test creates the state
needed by the next test can be beneficial, as it helps run the test cases in order,
one after another.
Use tools with automatic scheduling: If possible use tools that can schedule
testing automatically according to a schedule.
Set up an alarm for test failure: If possible select a tool that can raise an alarm
when a test failure occurs. Then a decision needs to be made whether to continue
with the test or abort it.
Reassess test plans as the app develops and changes: It is important to
continuously reassess the test plan as there is no point in wasting resources in
testing the legacy features in the application under test.
Popular Automation Tools
Selenium: Selenium is an automated testing tool that is used for regression
testing and provides a playback and recording facility. It can be used with
frameworks like JUnit and TestNG. It provides a single interface and lets users
write test cases in languages like Ruby, Java, Python, etc. (a minimal usage
sketch follows this list).
QTP: Quick Test Professional (QTP) is an automated functional testing tool to test
both web and desktop applications. It is based on the VB scripting language and it
provides functional and regression test automation for software applications.
Sikuli: It is a GUI-based test automation tool that is used for interacting with
elements of web pages. It is used to search and automate graphical user
interfaces using screenshots.
Appium: Appium is an open-source test automation framework that allows QAs to
conduct automated app testing on different platforms like iOS, Android, and
Windows SDK.
Jmeter: Apache JMeter is an open-source Java application that is used to load
test the functional behavior of the application and measure the performance.
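Relating to the Selenium entry above, here is a minimal usage sketch in Python. It assumes the selenium package is installed and a matching browser driver (for example ChromeDriver) is available; the page and the assertion are illustrative only.

from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()  # assumes a Chrome driver is available
try:
    driver.get("https://example.com")                 # open the page under test
    heading = driver.find_element(By.TAG_NAME, "h1")
    # A very small functional check written from the user's point of view.
    assert "Example" in heading.text, "unexpected page heading"
finally:
    driver.quit()                                     # always release the browser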
Advantages of Automation Testing
Simplifies Test Case Execution: Automation testing can be left virtually
unattended, allowing the results to be monitored at the end of the process. This
simplifies the overall test execution and increases the efficiency of testing the
application.
Improves Reliability of Tests: Automation testing ensures that there is equal
focus on all the areas of the testing, thus ensuring the best quality end product.
Increases amount of test coverage: Using automation testing, more test cases
can be created and executed for the application under test, resulting in
higher test coverage and the detection of more bugs. This allows more complex
applications and more features to be tested.
Minimizing Human Interaction: In automation testing, everything is automated,
from test case creation to execution, so there are fewer chances of human error
due to neglect. This reduces the necessity for fixing glitches in the post-release
phase.
Saves Time and Money: The initial investment for automation testing is on the
higher side, but it is cost-efficient and time-efficient in the long run. This is due to
the reduction in the amount of time required for test case creation and execution,
which contributes to the high quality of work.
Earlier detection of defects: Automation testing documents the defects, thus
making it easier for the development team to fix the defects and deliver output
faster. The earlier a defect is identified, the easier and more cost-efficient it is to
fix.
Disadvantages of Automation Testing
High initial cost: Automation testing in the initial phases requires a lot of time and
money investment. It requires a lot of effort for selecting the tool and designing
customized software.
100% test automation is not possible: Generally, the effort is to automate all the
test cases, but in practical situations not all test cases can be automated; some
test cases require human intervention for careful observation. There is always a
human factor, i.e., automation cannot test everything the way humans can (design, usability, etc.).
Not possible to automate all testing types: It is not possible to automate tests
that verify the user-friendliness of the system. Similarly, if we talk about the
graphics or sound files, even their testing cannot be automated as automated
tests typically use textual descriptions to verify the output.
Programming knowledge is required: Every automation testing tool uses any
one of the programming languages to write test scripts. Thus, it is mandatory to
have programming knowledge for automation testing.
False positives and negatives: Automation tests may sometimes fail and indicate
that there is an issue in the system when no issue is present (a false positive). In
other cases, they may generate false negatives if tests are designed only to verify
that some functionality exists and not to verify that it works as expected.
Conclusion
Automated software testing is now a standard software development practice.
In this article we have explained what automated testing is and why it is often
preferred over manual testing, along with the pros and cons of automated testing.
Automated Testing: FAQs
What is automated software testing?
Automated software testing is a process that utilizes software tools to execute test
cases automatically. This method replaces the need for manual testing, which can be
time-consuming, inconsistent, and prone to human error.
What do automation testers do?
Automation testers are responsible for using automated tools and frameworks to test
software applications. They play a crucial role in ensuring the quality and functionality
of software before it is released to users.
2. Branch Coverage:
In this technique, test cases are designed so that each branch from all decision
points is traversed at least once. In a flowchart, all edges must be traversed at least
once.
3. Condition Coverage
In this technique, all individual conditions must be covered as shown in the following
example:
READ X, Y
IF(X == 0 || Y == 0)
PRINT ‘0’
#TC1 – X = 0, Y = 55
#TC2 – X = 5, Y = 0
4. Multiple Condition Coverage
In this technique, all the possible combinations of the possible outcomes of
conditions are tested at least once. Let’s consider the following example:
READ X, Y
IF(X == 0 || Y == 0)
PRINT ‘0’
#TC1: X = 0, Y = 0
#TC2: X = 0, Y = 5
#TC3: X = 55, Y = 0
#TC4: X = 55, Y = 5
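The four test cases above can be turned directly into an automated check. A minimal sketch with Python's built-in unittest follows; the report function is an invented stand-in for the pseudocode IF (X == 0 || Y == 0) PRINT '0'.

import unittest

def report(x, y):
    # Stand-in for the pseudocode: returns "0" when either input is zero.
    if x == 0 or y == 0:
        return "0"
    return ""

class TestMultipleConditionCoverage(unittest.TestCase):
    def test_all_condition_combinations(self):
        cases = [
            (0, 0, "0"),   # TC1: both conditions true
            (0, 5, "0"),   # TC2: first condition true, second false
            (55, 0, "0"),  # TC3: first condition false, second true
            (55, 5, ""),   # TC4: both conditions false, nothing printed
        ]
        for x, y, expected in cases:
            with self.subTest(x=x, y=y):
                self.assertEqual(report(x, y), expected)

if __name__ == "__main__":
    unittest.main()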
5. Basis Path Testing
In this technique, control flow graphs are made from code or flowchart and then
Cyclomatic complexity is calculated which defines the number of independent paths
so that the minimal number of test cases can be designed for each independent
path. Steps:
Make the corresponding control flow graph
Calculate the cyclomatic complexity
Find the independent paths
Design test cases corresponding to each independent path
V(G) = P + 1, where P is the number of predicate nodes in the flow graph
V(G) = E – N + 2, where E is the number of edges and N is the total number of
nodes
V(G) = Number of non-overlapping regions in the graph
#P1: 1 – 2 – 4 – 7 – 8
#P2: 1 – 2 – 3 – 5 – 7 – 8
#P3: 1 – 2 – 3 – 6 – 7 – 8
#P4: 1 – 2 – 4 – 7 – 1 – . . . – 7 – 8
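As a quick sanity check of the formulas above, the different expressions can be computed side by side; for a flow graph with a single connected component they must agree. The small helper below is purely illustrative and is not tied to the (omitted) flow graph figure.

def cyclomatic_complexity(edges, nodes, predicate_nodes):
    # V(G) computed two ways; both must agree for one connected component.
    by_edges = edges - nodes + 2          # V(G) = E - N + 2
    by_predicates = predicate_nodes + 1   # V(G) = P + 1
    assert by_edges == by_predicates, "inconsistent flow graph counts"
    return by_edges

# Example: a flow graph with 9 edges, 7 nodes and 3 predicate nodes has V(G) = 4,
# so at most 4 independent paths need dedicated test cases.
print(cyclomatic_complexity(9, 7, 3))  # prints 4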
6. Loop Testing
Loops are widely used and these are fundamental to many algorithms hence, their
testing is very important. Errors often occur at the beginnings and ends of loops.
Simple loops: For simple loops of size n, test cases are designed that:
1. Skip the loop entirely
2. Only one pass through the loop
3. 2 passes
4. m passes, where m < n
5. n-1 and n+1 passes
Nested loops: For nested loops, all the loops are set to their minimum count, and
we start from the innermost loop. Simple loop tests are conducted for the
innermost loop and this is worked outwards till all the loops have been tested.
Concatenated loops: Independent loops, one after another. Simple loop tests are
applied for each. If they’re not independent, treat them like nesting.
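For the simple-loop case, the recommended pass counts translate directly into test cases. Below is a minimal, hypothetical sketch using Python's built-in unittest; the summing function and the nominal loop size n = 5 are invented for illustration.

import unittest

def running_total(values):
    # Hypothetical loop under test: add up all the values.
    total = 0
    for v in values:
        total += v
    return total

class TestSimpleLoop(unittest.TestCase):
    def test_loop_pass_counts(self):
        n = 5  # nominal loop size used to pick the pass counts below
        # Skip the loop entirely, 1 pass, 2 passes, m < n passes, then n-1, n and n+1 passes.
        for passes in (0, 1, 2, 3, n - 1, n, n + 1):
            with self.subTest(passes=passes):
                data = list(range(passes))  # drives the loop body 'passes' times
                self.assertEqual(running_total(data), sum(data))

if __name__ == "__main__":
    unittest.main()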
White Box Testing is performed in 2 steps:
1. Tester should understand the code well
2. Tester should write some code for test cases and execute them
Tools required for White box testing:
PyUnit
Sqlmap
Nmap
Parasoft Jtest
Nunit
VeraUnit
CppUnit
Bugzilla
Fiddler
JSUnit.net
OpenGrok
Wireshark
HP Fortify
CSUnit
Features of White box Testing
1. Code coverage analysis: White box testing helps to analyze the code coverage
of an application, which helps to identify the areas of the code that are not being
tested.
2. Access to the source code: White box testing requires access to the
application’s source code, which makes it possible to test individual functions,
methods, and modules.
3. Knowledge of programming languages: Testers performing white box testing
must have knowledge of programming languages like Java, C++, Python, and
PHP to understand the code structure and write tests.
4. Identifying logical errors: White box testing helps to identify logical errors in the
code, such as infinite loops or incorrect conditional statements.
5. Integration testing: White box testing is useful for integration testing, as it allows
testers to verify that the different components of an application are working
together as expected.
6. Unit testing: White box testing is also used for unit testing, which involves testing
individual units of code to ensure that they are working correctly.
7. Optimization of code: White box testing can help to optimize the code by
identifying any performance issues, redundant code, or other areas that can be
improved.
8. Security testing: White box testing can also be used for security testing, as it
allows testers to identify any vulnerabilities in the application’s code.
9. Verification of Design: It verifies that the software’s internal design is
implemented in accordance with the designated design documents.
10. Check for Accurate Code: It verifies that the code operates in accordance
with the guidelines and specifications.
11. Identifying Coding Mistakes: It finds and fixes programming flaws in your code,
including syntactic and logical errors.
12. Path Examination: It ensures that each possible path of code execution is
explored and tests various iterations of the code.
13. Determining the Dead Code: It finds and removes any code that isn’t used
when the program is running normally (dead code).
Advantages of Whitebox Testing
1. Thorough Testing: White box testing is thorough as the entire code and
structures are tested.
2. Code Optimization: It results in the optimization of code removing errors and
helps in removing extra lines of code.
3. Early Detection of Defects: It can start at an earlier stage as it doesn’t require
any interface as in the case of black box testing.
4. Integration with SDLC: White box testing can be started early in the Software
Development Life Cycle.
5. Detection of Complex Defects: Testers can identify defects that cannot be
detected through other testing techniques.
6. Comprehensive Test Cases: Testers can create more comprehensive and
effective test cases that cover all code paths.
7. Testers can ensure that the code meets coding standards and is optimized for
performance.
Disadvantages of White box Testing
1. Programming Knowledge and Source Code Access: Testers need to have
programming knowledge and access to the source code to perform tests.
2. Overemphasis on Internal Workings: Testers may focus too much on the
internal workings of the software and may miss external issues.
3. Bias in Testing: Testers may have a biased view of the software since they are
familiar with its internal workings.
4. Test Case Overhead: Redesigning code and rewriting code needs test cases to
be written again.
5. Dependency on Tester Expertise: Testers are required to have in-depth
knowledge of the code and programming language as opposed to black-box
testing.
6. Inability to Detect Missing Functionalities: Missing functionalities cannot be
detected as the code that exists is tested.
7. Increased Production Errors: High chances of errors in production.
Each column corresponds to a rule which will become a test case for testing. So
there will be 4 test cases.
5. Requirement-based testing – It includes validating the requirements given in the
SRS of a software system.
6. Compatibility testing – The test case results depend not only on the product but
also on the infrastructure used for delivering its functionality. When the infrastructure
parameters are changed, the software is still expected to work properly. Some parameters that
generally affect the compatibility of software are:
1. Processor type (Pentium 3, Pentium 4) and the number of processors.
2. Architecture and characteristics of machine (32-bit or 64-bit).
3. Back-end components such as database servers.
4. Operating System (Windows, Linux, etc).
Black Box Testing Type
The following are the several categories of black box testing:
1. Functional Testing
2. Regression Testing
3. Nonfunctional Testing (NFT)
Functional Testing: It checks the system against its functional requirements and specifications.
Regression Testing: It ensures that the newly added code is compatible with the
existing code. In other words, a new software update has no impact on the
functionality of the software. This is carried out after a system maintenance operation
and upgrades.
Nonfunctional Testing: Nonfunctional testing is also known as NFT. This testing is
not functional testing of software. It focuses on the software’s performance, usability,
and scalability.
Tools Used for Black Box Testing:
1. Appium
2. Selenium
3. Microsoft Coded UI
4. Applitools
5. HP QTP.
What can be identified by Black Box Testing
1. Discovers missing functions, incorrect function & interface errors
2. Discover the errors faced in accessing the database
3. Discovers the errors that occur while initiating & terminating any functions.
4. Discovers the errors in performance or behaviour of software.
Features of black box testing:
1. Independent testing: Black box testing is performed by testers who are not involved
in the development of the application, which helps to ensure that testing is
unbiased and impartial.
2. Testing from a user’s perspective: Black box testing is conducted from the
perspective of an end user, which helps to ensure that the application meets user
requirements and is easy to use.
3. No knowledge of internal code: Testers performing black box testing do not have
access to the application’s internal code, which allows them to focus on testing the
application’s external behaviour and functionality.
4. Requirements-based testing: Black box testing is typically based on the application’s
requirements, which helps to ensure that the application meets the required
specifications.
5. Different testing techniques: Black box testing can be performed using various
testing techniques, such as functional testing, usability testing, acceptance testing,
and regression testing.
6. Easy to automate: Black box testing is easy to automate using various automation
tools, which helps to reduce the overall testing time and effort.
7. Scalability: Black box testing can be scaled up or down depending on the size and
complexity of the application being tested.
8. Limited knowledge of application: Testers performing black box testing have limited
knowledge of the application being tested, which helps to ensure that testing is
more representative of how the end users will interact with the application.
Advantages of Black Box Testing:
The tester does not need deep functional knowledge or programming
skills to implement Black Box Testing.
It is efficient for testing larger systems.
Tests are executed from the user’s or client’s point of view.
Test cases are easily reproducible.
It is used in finding the ambiguity and contradictions in the functional
specifications.
Disadvantages of Black Box Testing:
There is a possibility of repeating the same tests while implementing the testing
process.
Without clear functional specifications, test cases are difficult to implement.
It is difficult to execute the test cases because of complex inputs at different
stages of testing.
Sometimes, the reason for the test failure cannot be detected.
Some parts of the application may remain untested.
It does not reveal the errors in the control structure.
Working with a large sample space of inputs can be exhaustive and consumes a
lot of time.
Gray Box Testing – Software Testing
Prerequisite – Software Testing | Basics
The Gray Box Testing is a combination of Black Box and White Box Testing. This
article focuses on discussing the Gray Box Testing in detail.
What is Gray Box Testing?
Gray Box Testing is a software testing technique that is a combination of the Black
Box Testing technique and the White Box Testing technique.
1. In the Black Box Testing technique, the tester is unaware of the internal structure
of the item being tested and in White Box Testing the internal structure is known
to the tester.
2. The internal structure is partially known in Gray Box Testing.
3. This includes access to internal data structures and algorithms to design the test
cases.
4. Gray Box Testing is named so because the software program is like a
semitransparent or gray box inside which the tester can partially see.
5. It commonly focuses on context-specific errors related to web systems.
6. It is based on requirement test case generation because it has all the conditions
presented before the program is tested.
4. Regression Testing
Regression testing is testing the software after every change in the software to make
sure that the changes or the new functionalities are not affecting the existing
functioning of the system. Regression testing is also carried out to ensure that fixing
any defect has not impacted other functionality of the software.
5. State transition Testing
State transition testing is frequently applied to systems that display various states
while they are being operated. Testers who have just a limited understanding of the
internal states create test cases with the intention of making sure that state
transitions are handled correctly.
6. Testing Decision Tables
Decision tables are a useful tool for organizing and condensing complicated business
rules and reasoning. Decision tables are used by testers with limited understanding
to generate test cases covering multiple combinations of input conditions and
expected results.
7. Testing APIs
Even though the main code is not entirely known, gray box testing, also known as
API (Application Programming Interface) testing, focuses on testing the system’s
exposed interfaces. The main goal of testing is to make sure the API accepts various
input formats and operates as intended.
8. Data Flow Testing
Analyzing the flow of data through the system forms the basis of data flow testing.
Partial knowledge testers create test cases that examine the data’s pathways
throughout the application, assisting in the identification of possible problems with
handling and processing the data.
Advantages of Gray Box Testing
1. Clarity of goals: Users and developers have clear goals while doing testing.
2. Done from user perspective: Gray box testing is mostly done from the user’s
perspective.
3. High programming skills not required: Testers are not required to have high
programming skills for this testing.
4. Non-intrusive: Gray box testing is non-intrusive.
5. Improved product quality: Overall quality of the product is improved.
6. Defect fixing: In gray box testing, developers have more time for defect fixing.
7. Benefits of black box and white box testing: By doing gray box testing, the benefits
of both black box and white box testing are obtained.
8. Unbiased: Gray box testing is unbiased. It avoids conflicts between a tester and a
developer.
9. Effective testing: Gray box testing is much more effective in integration testing.
Disadvantages of Gray Box Testing
1. Difficulty in defect association: Defect association is difficult when gray box testing
is performed for distributed systems.
2. Limited access to internal structure: Limited access to internal structure leads
to limited access for code path traversal.
3. Source code not accessible: Because source code cannot be accessed, doing
complete white box testing is not possible.
4. Not suitable for algorithm testing: Gray box testing is not suitable for algorithm
testing.
5. Test cases difficult to design: Most of the test cases are difficult to design.
If a statement s is a loop or an if condition, then its DEF set is empty and its USE set is based
on the condition of statement s. Data Flow Testing uses the control flow graph to find
the situations that can interrupt the flow of the program. Reference or define
anomalies in the flow of the data are detected at the time of associations between
values and variables. These anomalies are:
A variable is defined but not used or referenced,
A variable is used but never defined,
A variable is defined twice before it is used
Types of Data Flow Testing:
1. Testing for All-Du-Paths: All-Du-Paths is an acronym for “All Definition-Use Paths.”
Using this technique, every possible path from a variable’s definition to every usage
point is tested.
2. All-Du-Path Predicate Node Testing: This technique focuses on predicate nodes, or
decision points, in the control flow graph.
3. All-Uses Testing: This type of testing checks every place a variable is used in the
application.
4. All-Defs Testing: This type of testing examines every place a variable is specified
within the application’s code.
5. Testing for All-P-Uses: All-P-Uses stands for “All Predicate Uses.” Using this method,
every use of a variable in a predicate (decision) is tested.
6. All-C-Uses Test: It stands for “All Computation Uses.” Testing every possible path
where a variable is used in calculations or computations is the main goal of this
technique.
7. Testing for All-I-Uses: All-I-Uses stands for “All Input Uses.” With this method, every
path that uses a variable obtained from outside inputs is tested.
8. Testing for All-O-Uses: It stands for “All Output Uses.” Using this method, every path
where a variable has been used to produce output must be tested.
9. Testing of Definition-Use Pairs: It concentrates on particular pairs of definitions and
uses for variables.
10. Testing of Use-Definition Paths: This type of testing examines the routes that lead
from a variable’s point of use to its definition.
Advantages of Data Flow Testing:
Data Flow Testing is used to find the following issues-
To find a variable that is used but never defined,
To find a variable that is defined but never used,
To find a variable that is defined multiple times before it is used,
Deallocating a variable before it is used.
Disadvantages of Data Flow Testing
Time consuming and costly process
Requires knowledge of programming languages
Example:
1. read x, y;
2. if(x>y)
3. a = x+1
else
4. a = y-1
5. print a;
Variable   Defined at statement   Used at statement
x          1                      2, 3
y          1                      2, 4
a          3, 4                   5
4. It provides a quantitative measure of code coverage.
5. Branch testing generally ignores branches inside Boolean expressions.
Advantages of Branch testing:
Branch testing helps software developers test a project. Some advantages are given below:
It is generally easy to implement.
It is very simple to apply.
It ensures that all the branches in the code are reached.
It helps guarantee that no branch leads to any irregularity in the
program’s operation.
It also resolves issues that happen with statement coverage testing.
Disadvantages of Branch testing:
There are some disadvantages of branch testing, which are given below:
It neglects branches inside Boolean expressions, which happens because of short-
circuit operators.
It is costly.
It takes more time to perform this task.
It is one type of white box testing technique that ensures that all the statements of the source
code are executed at least once. It covers all the paths, lines, and statements of the source code.
It is used to design test cases by finding out the total number of executed statements out of the
total statements present in the code.
Formula:
Statement coverage = (Number of executed statements / Total number of statements
in source code) * 100
Example 1:
Read A
Read B
if A > B
Print “A is greater than B”
else
Print “B is greater than A”
endif
Case 1:
If A = 7, B= 3
No of statements Executed= 5
Total statements= 7
Statement coverage= 5 / 7 * 100
= 71.43 %
Case 2:
If A = 4, B= 8
No of statements Executed= 6
Total statements= 7
Statement coverage= 6 / 7 * 100
= 85.71 %
Example 2:
print (int a, int b)
{
int sum = a + b;
if (sum > 0)
print (“Result is positive”)
else
print (“Result is negative”)
}
Case 1:
If A = 4, B= 8
No of statements Executed= 6
Total statements= 8
Statement coverage= 6 / 8 * 100
= 75.00 %
Case 2:
If A = 4, B= -8
No of statements Executed= 7
Total statements= 8
Statement coverage= 7 / 8 * 100
= 87.50 %
In the internal code structure, there are loops, arrays, methods, exceptions, and
control statements. Some code would be executed based on input while some may
not. Statement coverage aims to execute all the reachable statements and paths of the code.
Statement coverage helps uncover:
Dead code.
Unused statements.
Unused branches.
Missing statements.
Why is statement coverage used?
To check the quality of the code.
To determine the flow of different paths of the program.
To check whether the source code performs as expected or not.
To test the software’s internal coding and infrastructure.
Drawback of Statement Coverage:
It cannot check the false outcome of a condition on its own.
Different input values may be required to check all the conditions.
More than one test case may be required to cover all the paths with a coverage of
100%.
Code Coverage Testing in Software Testing
Prerequisite : Software Testing
Every software developer follows the Software Development Life Cycle (SDLC) for the
development of any software application, in which testing is one of the important phases,
performed to check whether the developed software application fulfills the requirements or
not. There are different types of software testing, performed based on various
metrics/testing parameters.
Code Coverage :
Code coverage is a software testing metric, also termed Code Coverage Testing, which
helps in determining how much of the source code is tested. This helps in assessing the
quality of the test suite and analyzing how comprehensively the software is verified. In
simple terms, code coverage refers to the degree to which the source code of the
software has been tested. Code coverage is considered one form of white box testing.
At the end of development, each client wants a quality software product, and the
development team is responsible for delivering a quality software product to the
customer/client. Here, quality refers to the product’s performance, functionality,
behavior, correctness, reliability, effectiveness, security, and maintainability. The code
coverage metric helps in determining the quality aspects of the software.
The formula to calculate code coverage is
Code Coverage = (Number of lines of code executed)/(Total Number of lines of code
in a system component) * 100
Code Coverage Criteria :
To perform code coverage analysis various criteria are taken into consideration.
These are the major methods/criteria which are considered.
1. Statement Coverage/Block coverage :
The number of statements that have been successfully executed in the program
source code.
Statement Coverage = (Number of statements executed)/(Total Number of
statements)*100.
2. Decision Coverage/Branch Coverage :
The number of decision control structures that have been successfully executed in
the program source code.
Decision Coverage = (Number of decision/branch outcomes exercised)/(Total
number of decision outcomes in the source code)*100.
3. Function coverage :
The number of functions that are called and executed at least once in the source
code.
Function Coverage = (Number of functions called)/(Total number of functions)*100.
4. Condition Coverage/Expression Coverage :
The number of Boolean condition/expression statements executed in the conditional
statement.
Condition Coverage =(Number of executed operands)/(Total Number of
Operands)*100.
Tools For Code Coverage :
Below are the few important code coverage tools
Cobertura
Clover
Gretel
Kalistick
JaCoCo
JTest
OpenCover
Emma
GCT
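For Python projects, metrics like these are commonly measured with coverage.py, a tool that is not part of the list above and is shown here only as a hedged illustration of how such a tool is typically driven (it assumes coverage.py has been installed, for example with pip install coverage).

import coverage
import unittest

def grade(score):
    # Two branches: both must run for 100% statement coverage of this function.
    if score >= 50:
        return "pass"
    return "fail"

class TestGrade(unittest.TestCase):
    def test_pass_and_fail(self):
        self.assertEqual(grade(80), "pass")
        self.assertEqual(grade(20), "fail")

if __name__ == "__main__":
    cov = coverage.Coverage()    # begin recording which lines execute
    cov.start()
    unittest.main(exit=False)    # run the tests without stopping the interpreter
    cov.stop()
    cov.report()                 # print statement coverage percentages per file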
Advantages of Using Code Coverage :
It helps in determining the performance and quality aspects of any software.
It helps in evaluating quantitative measure of code coverage.
It helps in easy maintenance of code base.
It helps in assessing the quality of the test suite and analyzing how comprehensively the
software is verified.
It helps in exposure of bad, dead, and unused code.
It helps in creating extra test cases to increase coverage.
It helps in developing the software product faster by increasing its productivity and
efficiency.
It helps in measuring the efficiency of test implementation.
It helps in finding new test cases which are uncovered.
Disadvantages of Using Code Coverage :
Sometimes it fails to cover the code completely and correctly.
It cannot guarantee that all possible values of a feature are tested with the help of
code coverage.
It does not, by itself, ensure how thoroughly the covered code has been exercised.
1. Condition Stubs : The conditions are listed in this first upper left part of the decision table
that is used to determine a particular action or set of actions.
2. Action Stubs : All the possible actions are given in the first lower left portion (i.e, below
condition stub) of the decision table.
3. Condition Entries : In the condition entry, the values are inputted in the upper right portion
of the decision table. In the condition entries part of the table, there are multiple rows and
columns which are known as Rule.
4. Action Entries : In the action entry, every entry has some associated action or set of
actions in the lower right portion of the decision table and these values are called outputs.
Types of Decision Tables :
The decision tables are categorized into two types and these are given below:
1. Limited Entry : In the limited entry decision tables, the condition entries are restricted to
binary values.
2. Extended Entry : In the extended entry decision table, the condition entries have more than
two values. The decision tables use multiple conditions where a condition may have many
possibilities instead of only ‘true’ and ‘false’ are known as extended entry decision tables.
Applicability of Decision Tables :
The order of rule evaluation has no effect on the resulting action.
The decision tables can be applied easily at the unit level only.
Once a rule is satisfied and the action selected, no other rule needs to be examined.
The restrictions do not eliminate many applications.
Example of Decision Table Based testing :
Below is the decision table of the program for determining the largest amongst three numbers,
in which the input is a triple of positive integers (x, y, and z) whose values are from the
interval [1, 300].
Table 1: Decision table for the largest amongst three numbers (reconstructed from the flattened original).
Conditions: c1: x >= 1?, c2: x <= 300?, c3: y >= 1?, c4: y <= 300?, c5: z >= 1?, c6: z <= 300?, c7: x > y?, c8: y > z?, c9: z > x?
Actions: a1: invalid input, a2: x is largest, a3: y is largest, a4: z is largest, a5: impossible
R1 (rule count 256): c1 = F, so a1: invalid input
R2 (rule count 128): c1 = T, c2 = F, so a1: invalid input
R3 (rule count 64): c1, c2 = T, c3 = F, so a1: invalid input
R4 (rule count 32): c1 to c3 = T, c4 = F, so a1: invalid input
R5 (rule count 16): c1 to c4 = T, c5 = F, so a1: invalid input
R6 (rule count 8): c1 to c5 = T, c6 = F, so a1: invalid input
R7 (rule count 1): c1 to c6 = T; c7 = T, c8 = T, c9 = T, so a5: impossible
R8 (rule count 1): c1 to c6 = T; c7 = T, c8 = T, c9 = F, so a2: x is largest
R9 (rule count 1): c1 to c6 = T; c7 = T, c8 = F, c9 = T, so a4: z is largest
R10 (rule count 1): c1 to c6 = T; c7 = T, c8 = F, c9 = F, so a2: x is largest
R11 (rule count 1): c1 to c6 = T; c7 = F, c8 = T, c9 = T, so a3: y is largest
R12 (rule count 1): c1 to c6 = T; c7 = F, c8 = T, c9 = F, so a3: y is largest
R13 (rule count 1): c1 to c6 = T; c7 = F, c8 = F, c9 = T, so a4: z is largest
R14 (rule count 1): c1 to c6 = T; c7 = F, c8 = F, c9 = F, so x, y, and z are all equal (no single largest)
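Each rule (column) of the decision table corresponds to one test case. The sketch below, using Python's built-in unittest, covers a few representative rules; the largest function is an invented stand-in for the program being specified.

import unittest

def largest(x, y, z):
    # Stand-in implementation of the program specified by the decision table.
    for value in (x, y, z):
        if not 1 <= value <= 300:
            return "invalid input"
    return max(x, y, z)

class TestLargestDecisionTable(unittest.TestCase):
    def test_selected_rules(self):
        cases = [
            ((0, 10, 10), "invalid input"),    # R1: c1 false (x < 1)
            ((10, 400, 10), "invalid input"),  # y out of range, invalid input
            ((100, 50, 20), 100),              # x is largest
            ((50, 100, 20), 100),              # y is largest
            ((50, 20, 100), 100),              # z is largest
        ]
        for args, expected in cases:
            with self.subTest(args=args):
                self.assertEqual(largest(*args), expected)

if __name__ == "__main__":
    unittest.main()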
Pairwise Software Testing
Pairwise Testing is a type of software testing in which permutation and combination
method is used to test the software. Pairwise testing is used to test all the possible
discrete combinations of the parameters involved.
Pairwise testing is a P&C based method, in which to test a system or an application,
for each pair of input parameters of a system, all possible discrete combinations of
the parameters are tested. By using the conventional or exhaustive testing approach
it may be hard to test the system but by using the permutation and combination
method it can be easily done.
Example:
Suppose there is software to be tested which has 20 inputs and 20 possible
settings for each input, so there are 20^20 possible input combinations in total.
In this case, exhaustive testing is practically impossible even if we try to test all
combinations; pairwise testing instead covers every pair of input values with far
fewer test cases.
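A small sketch of why this pays off: for three parameters with three values each, exhaustive testing needs 27 cases, while a pairwise set of 9 cases already exercises every pair of values from any two parameters at least once. The parameter names below are invented, and real pairwise sets are normally generated by dedicated tools rather than by hand.

from itertools import combinations, product

browsers = ["Chrome", "Firefox", "Edge"]
systems = ["Windows", "Linux", "macOS"]
languages = ["en", "de", "fr"]

exhaustive = list(product(browsers, systems, languages))
print(len(exhaustive))  # 27 combinations if every triple is tested

# A Latin-square style arrangement: 9 cases that still cover every value pair.
pairwise = [(browsers[i], systems[(i + j) % 3], languages[(i + 2 * j) % 3])
            for i in range(3) for j in range(3)]

def covers_all_pairs(cases):
    # Every pair of values from any two parameter positions must appear at least once.
    for p, q in combinations(range(3), 2):
        if len({(case[p], case[q]) for case in cases}) != 9:
            return False
    return True

print(len(pairwise), covers_all_pairs(pairwise))  # 9 True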
Graphical Representation of Pairwise Testing: (figure omitted)
Advantages of Pairwise Testing:
The advantages of pairwise testing are:
Pairwise testing increases the defect detection ratio.
Pairwise testing takes less time to complete the execution of the test suite.
Pairwise testing reduces the overall testing budget for a project.
Disadvantages of Pairwise Testing:
The disadvantages of pairwise testing are:
Pairwise testing is not beneficial if the values of the variables are inappropriate.
In pairwise testing it is possible to miss the highly probable combination while
selecting the test data.
In pairwise testing, defect yield ratio may be reduced if a combination is missed.
Pairwise testing is not useful if combinations of variables are not understood
correctly.
Cause Effect Graphing in Software Engineering
Cause Effect Graphing based technique is a technique in which a graph is used to
represent the situations of combinations of input conditions. The graph is then
converted to a decision table to obtain the test cases. Cause-effect graphing
technique is used because boundary value analysis and equivalence class
partitioning methods do not consider the combinations of input conditions. But since
there may be some critical behaviour to be tested when some combinations of input
conditions are considered, that is why cause-effect graphing technique is used.
Steps used in deriving test cases using this technique are:
1. Division of specification:
Since it is difficult to work with cause-effect graphs of large specifications as they
are complex, the specifications are divided into small workable pieces and then
converted into cause-effect graphs separately.
2. Identification of causes and effects:
This involves identifying the causes (distinct input conditions) and effects (output
conditions) in the specification.
3. Transforming the specifications into a cause-effect graph:
The causes and effects are linked together using Boolean expressions to obtain a
cause-effect graph. Constraints are also added between causes and effects if
possible.
4. Conversion into decision table:
The cause-effect graph is then converted into a limited entry decision table (see the
decision table based testing section above).
5. Deriving test cases:
Each column of the decision table is converted into a test case.
Basic Notations used in Cause-effect graph:
Here c represents cause and e represents effect.
The following notations are always used between a cause and an effect:
1. Identity Function: if c is 1, then e is 1. Else e is 0.
3. OR Function: if c1 or c2 or c3 is 1, then e is 1. Else e is 0.
3. One and Only One constraint or O-constraint: This constraint exists between
causes. It states that one and only one of c1 and c2 must be 1.
4. Requires constraint or R-constraint: This constraint exists between causes. It
states that for c1 to be 1, c2 must be 1. It is impossible for c1 to be 1 and c2 to be
0.
Time Set:
When this mode is activated, display mode changes from ALTER TIME to TIME.
Date Set:
When this mode is activated, display mode changes from ALTER DATE to DATE.
State Transition Diagram:
State Transition Diagram shows how the state of the system changes on certain
inputs.
It has four main components:
1. States
2. Transition
3. Events
4. Actions
Advantages of State Transition Testing:
State transition testing helps in understanding the behavior of the system.
State transition testing gives the proper representation of the system behavior.
State transition testing covers all the conditions.
Disadvantages of State Transition Testing:
State transition testing cannot be applied everywhere.
State transition testing is not always reliable.
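Below is a minimal, hypothetical Java sketch of state transition testing, loosely based on the Time Set / Date Set example above; the states, events, and transition table are assumptions for illustration only:

```java
import java.util.Map;

public class StateTransitionSketch {
    // Hypothetical display states and events for a time/date setting feature.
    enum State { TIME, ALTER_TIME, DATE, ALTER_DATE }
    enum Event { TIME_SET, DATE_SET }

    // Transition table: (current state, event) -> next state; anything missing is invalid.
    static final Map<State, Map<Event, State>> TRANSITIONS = Map.of(
            State.ALTER_TIME, Map.of(Event.TIME_SET, State.TIME),
            State.ALTER_DATE, Map.of(Event.DATE_SET, State.DATE));

    static State next(State current, Event event) {
        State next = TRANSITIONS.getOrDefault(current, Map.of()).get(event);
        if (next == null) {
            throw new IllegalStateException(event + " is not allowed in state " + current);
        }
        return next;
    }

    public static void main(String[] args) {
        // Test case 1: a valid transition must land in the expected state.
        System.out.println(next(State.ALTER_TIME, Event.TIME_SET) == State.TIME
                ? "PASS: ALTER TIME -> TIME on Time Set" : "FAIL: unexpected target state");

        // Test case 2: an invalid event in a state must be rejected.
        try {
            next(State.TIME, Event.DATE_SET);
            System.out.println("FAIL: invalid transition was accepted");
        } catch (IllegalStateException expected) {
            System.out.println("PASS: invalid transition rejected");
        }
    }
}
```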
What is Use Case Testing?
Use Case Testing is generally a part of black box testing that helps developers and testers identify test scenarios exercising the whole system, transaction by transaction, from start to finish. Business experts and developers must have a mutual understanding of the requirements, since such a shared understanding is otherwise very difficult to attain.
Use case testing is a functional testing technique that helps in identifying and
testing scenarios on the whole system or doing start-to-end transactions.
It helps to identify the gaps in software that might not be identified by testing
individual components.
It is used to develop test cases at the system level or acceptance level.
Features of Use Case Testing
Below are the features of the use case testing:
Use case testing is not testing that is performed to decide the quality of the
software.
Although it is a type of end-to-end testing, it won’t ensure the entire coverage of
the user application.
Use case testing will find out the defects in integration testing.
It is very effective in identifying the gaps in the software that won’t be identified by
testing individual components in isolation.
Benefits of Use Case Testing
Use case testing provides several benefits that help in developing a software project. These are given below:
Helps manage complexity: Use case-driven analysis helps manage complexity since it focuses on one specific usage aspect at a time.
Testing from the user’s perspective: Use cases are designed from the user’s
perspective. Thus, use case testing is done from the user’s perspective and helps
to uncover the issues related to the user experience.
Reduced complexity of test cases: The complexity of the test cases will be
reduced as the testing team will follow the path given in the use case document.
Test functional requirements: Use cases help to capture the functional
requirements of a system. Thus, use case testing tests the functional
requirements of the system.
Starts from a simple view of the system: Use cases start from the simple view
of the system and are used primarily for the users.
Drawbacks of Use Case Testing
Below are some of the limitations of the use case testing:
Missing use case: If there is a use case missing from the use case document,
then it will impact the testing process as there is a high possibility that the test
cases for the missing use case will also be left out.
Cover only functional requirements: Since use cases cover only functional
requirements so use case testing by default is functional requirements oriented.
Use cases are from the user’s perspective: 100% test coverage is not possible because use cases are written from the user’s perspective; scenarios that do not arise from the user’s perspective may not be included in the test document.
Types of Black Box Testing
Functional Testing – Software Testing
Functional Testing is a type of Software Testing in which the system is tested against
the functional requirements and specifications. Functional testing ensures that the
requirements or specifications are properly satisfied by the application. This type of testing is particularly concerned with the result of processing. It focuses on simulating actual system usage and does not make assumptions about the internal structure of the system. This article focuses on discussing functional testing.
What is Functional Testing?
Functional testing is basically defined as a type of testing that verifies that each
function of the software application works in conformance with the requirement and
specification. This testing is not concerned with the source code of the application.
Each functionality of the software application is tested by providing appropriate test
input, expecting the output, and comparing the actual output with the expected
output. This testing focuses on checking the user interface, APIs, database, security,
client or server application, and functionality of the Application Under Test. Functional
testing can be manual or automated.
Purpose of Functional Testing
Functional testing mainly involves black box testing and can be done manually or
using automation. The purpose of functional testing is to:
Test each function of the application: Functional testing tests each function of
the application by providing the appropriate input and verifying the output against
the functional requirements of the application.
Test primary entry function: In functional testing, the tester tests each entry
function of the application to check all the entry and exit points.
Test flow of the GUI screen: In functional testing, the flow of the GUI screen is
checked so that the user can navigate throughout the application.
What to Test in Functional Testing?
The goal of functional testing is to check the functionalities of the application under
test. It concentrates on:
Basic Usability: Functional testing involves basic usability testing to check
whether the user can freely navigate through the screens without any difficulty.
Mainline functions: This involves testing the main feature and functions of the
application.
Accessibility: This involves testing the accessibility of the system for the user.
Error Conditions: Functional testing involves checking whether the appropriate
error messages are being displayed or not in case of error conditions.
Functional Testing Process
Functional testing involves the following steps:
1. Identify test input: This step involves identifying the functionality that needs to be
tested. This can vary from testing the usability functions, and main functions to
error conditions.
2. Compute expected outcomes: Create input data based on the specifications of
the function and determine the output based on these specifications.
3. Execute test cases: This step involves executing the designed test cases and
recording the output.
4. Compare the actual and expected output: In this step, the actual output
obtained after executing the test cases is compared with the expected output to
determine the amount of deviation in the results. This step reveals if the system is
working as expected or not.
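The four steps above can be expressed directly as a test. The following sketch uses JUnit 5 (one of several possible frameworks) against a hypothetical priceAfterDiscount function; the function and its 10% discount rule are assumptions made purely for illustration:

```java
import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertEquals;

class DiscountFunctionalTest {
    // Hypothetical function under test: 10% discount on orders of 100 or more.
    static double priceAfterDiscount(double orderTotal) {
        return orderTotal >= 100 ? orderTotal * 0.9 : orderTotal;
    }

    @Test
    void discountAppliedAtThreshold() {
        double input = 100.0;                         // Step 1: identify test input
        double expected = 90.0;                       // Step 2: compute expected outcome from the specification
        double actual = priceAfterDiscount(input);    // Step 3: execute the test case
        assertEquals(expected, actual, 0.001);        // Step 4: compare actual and expected output
    }
}
```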
12. Database Testing: Database testing is a type of software testing that checks the schema, tables, etc. of the database under test.
13. Adhoc Testing: Adhoc testing, also known as monkey testing or random testing, is a type of software testing that does not follow any documentation or test plan to perform testing.
14. Recovery Testing: Recovery testing is a type of software testing that verifies the software’s ability to recover from failures like hardware failures, software failures, crashes, etc.
15. Static Testing: Static testing is a type of software testing that is performed to check for defects in the software without actually executing the code of the software application.
16. Greybox Testing: Grey box testing is a type of software testing that combines elements of black box and white box testing.
17. Component Testing: Component testing, also known as program testing or module testing, is a type of software testing that is done after unit testing. In this, the test objects can be tested independently as components without integrating with other components.
Functional Testing vs Non-Functional Testing
Below are the differences between functional testing and non-functional testing:
Objective: The objective of functional testing is to validate software actions, while the objective of non-functional testing is to validate the performance of the software system.
Requirements: Functional testing is carried out using the functional specification, while non-functional testing is carried out using the performance specifications.
Best Practices for Functional Testing Automation
Below are some best practices to follow when automating functional tests:
Dedicated automation team: Automation requires time, effort, and a special skill
set. It is considered best to allocate automation tasks to those who are equipped
to accomplish them.
Create test early: It is best to create test cases when the project is in its early
phases as the requirements are fresh and it is always possible to amend test
cases later in the project development cycle.
Pick the right tests: It is very important to pick the right test cases to automate.
Some tests require setup and configuration during and before execution, so it’s
best not to automate them. Automate tests that need to be executed repeatedly,
tests that are prone to human error.
Prioritize: Testers have finite time and budget, so it is not possible to test each
and every feature in the application. Consider high-priority functions first to create
test cases.
Test frequently: Prepare a basic test automation bucket and create a strategy for
frequent execution of this test bucket.
Benefits of Functional Testing
Bug-free product: Functional testing ensures the delivery of a bug-free and high-
quality product.
Customer satisfaction: It ensures that all requirements are met and ensures that
the customer is satisfied.
Testing focussed on specifications: Functional testing is focussed on
specifications as per customer usage.
Proper working of application: This ensures that the application works as
expected and ensures proper working of all the functionality of the application.
Improves quality of the product: Functional testing ensures the security and
safety of the product and improves the quality of the product.
Limitations of Functional Testing
Missed critical errors: There are chances while executing functional tests that
critical and logical errors are missed.
Redundant testing: There are high chances of performing redundant testing.
Incomplete requirements: If the requirement is not complete then performing this
testing becomes difficult.
Non-Functional Testing
Non-functional Testing is a type of Software Testing that is performed to verify the non-functional requirements of the application. It verifies whether the behavior of the system is as per the requirement or not, and it tests all the aspects that are not covered by functional testing. It checks the readiness of a system against non-functional parameters that are never addressed by functional testing. Non-functional testing is as important as functional testing.
Objectives of Non-functional Testing
The objectives of non-functional testing are:
Increased usability: To increase usability, efficiency, maintainability, and portability
of the product.
Reduction in production risk: To help in the reduction of production risk related to
non-functional aspects of the product.
Reduction in cost: To help in the reduction of costs related to non-functional
aspects of the product.
Optimize installation: To optimize the way the product is installed, executed, and monitored.
Collect metrics: To collect and produce measurements and metrics for internal
research and development.
Enhance knowledge of product: To improve and enhance knowledge of the product
behavior and technologies in use.
Non-Functional Testing Techniques
Compatibility testing: Compatibility testing is a type of testing to ensure that a
software program or system is compatible with other software programs or
systems. For example, in this, the tester checks that the software is compatible
with other software, operating systems, etc.
Compliance testing: Compliance testing is a type of testing to ensure that a software program or system meets a specific compliance standard, such as HIPAA or Sarbanes-Oxley. It is often the first type of testing that is performed when assessing the control environment.
Endurance testing: Endurance testing is a type of testing to ensure that a software
program or system can handle a long-term, continuous load. For example for the
banking application, the application is tested to know if the system can sustain
under the continuous expected load.
Load testing: Load testing is a type of testing to ensure that a software program or
system can handle a large number of users or transactions. For example, Running
multiple applications on the computer simultaneously.
Performance testing: Performance testing is a type of testing to ensure that a
software program or system meets specific performance goals, such as response
time or throughput. For example, organizations perform performance tests in order
to identify performance-related bottlenecks.
Recovery testing: Recovery testing is a type of testing to ensure that a software
program or system can be recovered from a failure or data loss. For example,
when the application is running and the computer is restarted, check the validity of
the application’s integrity.
Security testing: Security testing is a type of testing to ensure that a software
program or system is secure from unauthorized access or attack. For example,
Organizations perform security testing to reveal flaws in the security mechanism of
the information system.
Scalability testing: Scalability testing is a type of testing to ensure that a software
program or system can be scaled up or down to meet changing needs. For
example, to measure the application’s capability to scale up or scale out in terms
of non-functional capability.
Stress testing: Stress testing is a type of testing to ensure that a software program
or system can handle an unusually high load. For example, extremely large
numbers of concurrent users try to log into the application.
Usability testing: Usability testing is a type of testing to ensure that a software
program or system is easy to use. For example, on the e-commerce website, it
can be tested whether the users can easily locate the Buy Now button or not.
Volume testing: Volume testing is a type of testing to ensure that a software program or system can handle a large volume of data. For example, if a website is developed to handle traffic of 500 users, volume testing will check whether the site is able to handle 500 users or not.
Failover testing: Failover testing validates the system’s capability to allocate
sufficient resources toward recovery during a server failure.
Portability testing: Portability testing is testing the ease with which the application
can be moved from one environment to another.
Reliability testing: Reliability testing checks that the application can perform a
failure-free operation for the specified period of time in the given environmental
conditions.
Baseline testing: Baseline testing is used to make sure that the application
performance is not degraded over time with new changes.
Documentation testing: Documentation testing is a type of software testing that
involves testing the documented artifacts developed before or during the software
testing process.
Localization testing: Localization testing is a type of software testing that is
performed to verify the performance and quality of the software for a specific
culture and to make the product look more natural for the foreign target audience.
Internationalization testing: Internationalization testing is a type of software testing
that ensures the adaptability of software to different cultures and languages
around the world accordingly without any modifications in source code.
Non-functional Testing Parameters
1. Security: This parameter is tested during Security testing. This parameter defines
how the system is secure against sudden attacks from internal and external
sources.
2. Reliability: This parameter is tested during Reliability testing. This defines the
extent to which the system performs its intended functions without failure.
3. Survivability: This parameter is tested during Recovery testing. This parameter
checks that the software system is able to recover itself in the case of failure and
continuously performs the specified function without any failure.
4. Availability: This is tested during Stability testing. Availability here means the
availability percentage of the software system to the original service level
agreement. It means the degree to which the user can rely on the software during
its operation.
5. Efficiency: This parameter means the extent to which the software system can
handle the quantity and response time.
6. Integrity: This parameter measures how high the source code quality is when it is
passed on to the QA.
7. Usability: This is tested in usability testing. This parameter means how easily
usable the system is from the user’s perspective.
8. Flexibility: This parameter means how well the system can respond to uncertainty
in a way that allows it to function normally.
9. Scalability: This parameter is tested during scalability testing. This parameter
measures the degree to which the application can scale up or scale out its
processing capacity to meet an increase in demand.
10. Reusability: This means how many existing assets can be reused in some form
within the software product development process or in another application.
11. Interoperability: This parameter is tested during the Interoperability testing. This
checks that the application interfaces properly with its components or other
application or software.
12. Portability: This parameter checks the ease with which the software can be
moved from one environment to another.
Benefits of Non-functional Testing
Improved performance: Non-functional testing checks the performance of the
system and determines the performance bottlenecks that can affect the
performance.
Less time-consuming: Non-functional testing is overall less time-consuming than the
other testing process.
Improves user experience: Non-functional testing like usability testing checks how easily usable and user-friendly the software is for the users. Thus, it focuses on improving the overall user experience of the application.
More secure product: Non-functional testing specifically includes security testing, which checks the security bottlenecks of the application and how secure the application is against attacks from internal and external sources.
Limitations of Non-functional Testing
Non-functional tests are performed repeatedly: Whenever there is a change in the
application, non-functional testing is performed again. Thus, it is more resource
intensive.
Expensive in case of software update: In case of a software update, non-functional testing is performed again, thus incurring extra cost to re-examine the software and making the software more expensive.
Types of Functional Testing
Unit Testing – Software Testing
Unit testing is a type of software testing that focuses on individual units or
components of a software system. The purpose of unit testing is to validate that each
unit of the software works as intended and meets the requirements. Unit testing is
typically performed by developers, and it is performed early in the development
process before the code is integrated and tested as a whole system.
Unit tests are automated and are run each time the code is changed to ensure that
new code does not break existing functionality. Unit tests are designed to validate the
smallest possible unit of code, such as a function or a method, and test it in isolation
from the rest of the system. This allows developers to quickly identify and fix any
issues early in the development process, improving the overall quality of the software
and reducing the time required for later testing.
Prerequisite – Types of Software Testing
Unit Testing is a software testing technique in which individual units of software, i.e. groups of computer program modules, usage procedures, and operating procedures, are tested to determine whether they are suitable for use. Every independent module is tested by the developer to determine whether it has any issues, and unit testing is correlated with the functional correctness of the independent modules. An individual component may be an individual function or a procedure. Unit testing is carried out during the development of an application; in the SDLC or V-Model, it is the first level of testing, done before integration testing. It is typically performed by developers, although, due to the reluctance of developers to test, quality assurance engineers also do unit testing.
Objective of Unit Testing:
The objective of Unit Testing is:
1. To isolate a section of code.
2. To verify the correctness of the code.
3. To test every function and procedure.
4. To fix bugs early in the development cycle and to save costs.
5. To help the developers understand the code base and enable them to make
changes quickly.
6. To help with code reuse.
There are three types of unit testing techniques. They are:
1. Black Box Testing: This testing technique is used in covering the unit tests for
input, user interface, and output parts.
2. White Box Testing: This technique is used in testing the functional behavior of
the system by giving the input and checking the functionality output including the
internal design structure and code of the modules.
3. Gray Box Testing: This technique is used in executing the relevant test cases,
test methods, and test functions, and analyzing the code performance for the
modules.
Unit Testing Tools:
Here are some commonly used Unit Testing tools:
1. Jtest
2. Junit
3. NUnit
4. EMMA
5. PHPUnit
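As an illustration of what a unit test looks like with one of the frameworks listed above, here is a minimal JUnit 5 sketch; the isPalindrome method is a hypothetical unit under test, not something taken from this document:

```java
import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.*;

class StringUtilsTest {
    // Hypothetical unit under test: a single, isolated method.
    static boolean isPalindrome(String s) {
        String cleaned = s.toLowerCase().replaceAll("[^a-z0-9]", "");
        return new StringBuilder(cleaned).reverse().toString().equals(cleaned);
    }

    @Test
    void recognisesPalindrome() {
        // The unit is exercised in isolation with a known input and expected output.
        assertTrue(isPalindrome("Never odd or even"));
    }

    @Test
    void rejectsNonPalindrome() {
        assertFalse(isPalindrome("unit testing"));
    }
}
```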
Advantages of Unit Testing:
1. Unit Testing allows developers to learn what functionality is provided by a unit and
how to use it to gain a basic understanding of the unit API.
2. Unit testing allows the programmer to refine code and make sure the module
works properly.
3. Unit testing enables testing parts of the project without waiting for others to be
completed.
4. Early Detection of Issues: Unit testing allows developers to detect and fix issues
early in the development process before they become larger and more difficult to
fix.
5. Improved Code Quality: Unit testing helps to ensure that each unit of code works
as intended and meets the requirements, improving the overall quality of the
software.
6. Increased Confidence: Unit testing provides developers with confidence in their
code, as they can validate that each unit of the software is functioning as
expected.
7. Faster Development: Unit testing enables developers to work faster and more
efficiently, as they can validate changes to the code without having to wait for the
full system to be tested.
8. Better Documentation: Unit testing provides clear and concise documentation of
the code and its behavior, making it easier for other developers to understand and
maintain the software.
9. Facilitation of Refactoring: Unit testing enables developers to safely make
changes to the code, as they can validate that their changes do not break existing
functionality.
10. Reduced Time and Cost: Unit testing can reduce the time and cost required for
later testing, as it helps to identify and fix issues early in the development process.
Disadvantages of Unit Testing:
1. The process is time-consuming for writing the unit test cases.
2. Unit testing will not cover all the errors in a module, because some errors only surface when modules interact, i.e. during integration testing.
3. Unit Testing is not efficient for checking the errors in the UI(User Interface) part of
the module.
4. It requires more time for maintenance when the source code is changed
frequently.
5. It cannot cover the non-functional testing parameters such as scalability, the
performance of the system, etc.
6. Time and Effort: Unit testing requires a significant investment of time and effort to
create and maintain the test cases, especially for complex systems.
7. Dependence on Developers: The success of unit testing depends on the
developers, who must write clear, concise, and comprehensive test cases to
validate the code.
8. Difficulty in Testing Complex Units: Unit testing can be challenging when dealing
with complex units, as it can be difficult to isolate and test individual units in
isolation from the rest of the system.
9. Difficulty in Testing Interactions: Unit testing may not be sufficient for testing
interactions between units, as it only focuses on individual units.
10. Difficulty in Testing User Interfaces: Unit testing may not be suitable for testing
user interfaces, as it typically focuses on the functionality of individual units.
11. Over-reliance on Automation: Over-reliance on automated unit tests can lead
to a false sense of security, as automated tests may not uncover all possible
issues or bugs.
12. Maintenance Overhead: Unit testing requires ongoing maintenance and
updates, as the code and test cases must be kept up-to-date with changes to the
software.
Integration Testing – Software Engineering
Integration testing is the process of testing the interface between two software units
or modules. It focuses on determining the correctness of the interface. The purpose
of integration testing is to expose faults in the interaction between integrated units.
Once all the modules have been unit-tested, integration testing is performed.
Integration testing is a software testing technique that focuses on verifying the
interactions and data exchange between different components or modules of a
software application. The goal of integration testing is to identify any problems or
bugs that arise when different components are combined and interact with each
other. Integration testing is typically performed after unit testing and before system
testing. It helps to identify and resolve integration issues early in the development
cycle, reducing the risk of more severe and costly problems later on.
Integration testing can be done module by module, and a proper sequence should be followed so that no integration scenario is missed. The major focus of integration testing is exposing the defects that arise at the time of interaction between the integrated units.
Integration test approaches – There are four types of integration testing
approaches. Those approaches are the following:
1. Big-Bang Integration Testing – It is the simplest integration testing approach, where
all the modules are combined and the functionality is verified after the completion of
individual module testing. In simple words, all the modules of the system are simply
put together and tested. This approach is practicable only for very small systems. If
an error is found during the integration testing, it is very difficult to localize the error
as the error may potentially belong to any of the modules being integrated. So,
debugging errors reported during Big Bang integration testing is very expensive to fix.
Big-bang integration testing is a software testing approach in which all components or
modules of a software application are combined and tested at once. This approach is
typically used when the software components have a low degree of interdependence
or when there are constraints in the development environment that prevent testing
individual components. The goal of big-bang integration testing is to verify the overall
functionality of the system and to identify any integration problems that arise when
the components are combined. While big-bang integration testing can be useful in
some situations, it can also be a high-risk approach, as the complexity of the system
and the number of interactions between components can make it difficult to identify
and diagnose problems.
Advantages:
1. It is convenient for small systems.
2. Simple and straightforward approach.
3. Can be completed quickly.
4. Does not require a lot of planning or coordination.
5. May be suitable for small systems or projects with a low degree of
interdependence between components.
Disadvantages:
1. There will be quite a lot of delay because you would have to wait for all the
modules to be integrated.
2. High-risk critical modules are not isolated and tested on priority since all modules
are tested at once.
3. Not Good for long projects.
4. High risk of integration problems that are difficult to identify and diagnose.
5. This can result in long and complex debugging and troubleshooting efforts.
6. This can lead to system downtime and increased development costs.
7. May not provide enough visibility into the interactions and data exchange between
components.
8. This can result in a lack of confidence in the system’s stability and reliability.
9. This can lead to decreased efficiency and productivity.
10. This may result in a lack of confidence in the development team.
11. This can lead to system failure and decreased user satisfaction.
2. Bottom-Up Integration Testing – In bottom-up testing, each module at the lower levels is tested with the higher-level modules until all modules are tested. The primary purpose of this integration testing is that each subsystem tests the interfaces among the various modules making up the subsystem. This integration testing uses test drivers to drive and pass appropriate data to the lower-level modules.
Advantages:
In bottom-up testing, no stubs are required.
A principal advantage of this integration testing is that several disjoint subsystems
can be tested simultaneously.
It is easy to create the test conditions.
Best for applications that use a bottom-up design approach.
It is easy to observe the test results.
Disadvantages:
Driver modules must be produced.
Testing becomes complex when the system is made up of a large number of small subsystems.
Until the higher-level modules are created, there is no working model of the system that can be demonstrated.
3. Top-Down Integration Testing – In top-down integration testing, testing takes place from top to bottom. First, the high-level modules are tested, then the low-level modules, and finally the low-level modules are integrated with the high-level modules to ensure the system is working as intended. Stubs are used to simulate the behaviour of the lower-level modules that are not yet integrated.
Advantages:
Separately debugged module.
Few or no drivers needed.
It is more stable and accurate at the aggregate level.
Easier isolation of interface errors.
In this, design defects can be found in the early stages.
Disadvantages:
Needs many Stubs.
Modules at lower level are tested inadequately.
It is difficult to observe the test output.
Designing stubs can be difficult.
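To show why stubs matter in top-down integration, here is a small, hypothetical Java sketch (the PaymentGateway and OrderService names are illustrative only, not from the article) in which a high-level module is exercised against a stub standing in for a lower-level module that is not yet integrated:

```java
// Hypothetical lower-level dependency that is not yet integrated.
interface PaymentGateway {
    boolean charge(double amount);
}

// High-level module under test in a top-down integration step.
class OrderService {
    private final PaymentGateway gateway;
    OrderService(PaymentGateway gateway) { this.gateway = gateway; }

    String placeOrder(double amount) {
        return gateway.charge(amount) ? "CONFIRMED" : "REJECTED";
    }
}

public class TopDownStubDemo {
    public static void main(String[] args) {
        // Stub: a simplified stand-in for the real lower-level module,
        // accepting any charge up to an assumed limit of 500.
        PaymentGateway stub = amount -> amount <= 500;

        OrderService service = new OrderService(stub);
        System.out.println(service.placeOrder(100));  // expected: CONFIRMED
        System.out.println(service.placeOrder(900));  // expected: REJECTED
    }
}
```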
4. Mixed Integration Testing – Mixed integration testing is also called sandwich integration testing. It follows a combination of the top-down and bottom-up testing approaches. In the top-down approach, testing can start only after the top-level modules have been coded and unit tested; in the bottom-up approach, testing can start only after the bottom-level modules are ready. The sandwich or mixed approach overcomes this shortcoming of the top-down and bottom-up approaches. It is also called hybrid integration testing. Both stubs and drivers are used in mixed integration testing.
Advantages:
Mixed approach is useful for very large projects having several sub projects.
This Sandwich approach overcomes this shortcoming of the top-down and bottom-
up approaches.
Parallel test can be performed in top and bottom layer tests.
Disadvantages:
For mixed integration testing, it requires very high cost because one part has a
Top-down approach while another part has a bottom-up approach.
This integration testing cannot be used for smaller systems with huge
interdependence between different modules.
Applications:
1. Identify the components: Identify the individual components of your application that
need to be integrated. This could include the frontend, backend, database, and
any third-party services.
2. Create a test plan: Develop a test plan that outlines the scenarios and test cases
that need to be executed to validate the integration points between the different
components. This could include testing data flow, communication protocols, and
error handling.
3. Set up test environment: Set up a test environment that mirrors the production
environment as closely as possible. This will help ensure that the results of your
integration tests are accurate and reliable.
4. Execute the tests: Execute the tests outlined in your test plan, starting with the most
critical and complex scenarios. Be sure to log any defects or issues that you
encounter during testing.
5. Analyze the results: Analyze the results of your integration tests to identify any
defects or issues that need to be addressed. This may involve working with
developers to fix bugs or make changes to the application architecture.
6. Repeat testing: Once defects have been fixed, repeat the integration testing
process to ensure that the changes have been successful and that the application
still works as expected.
System Testing
INTRODUCTION:
System testing is a type of software testing that evaluates the overall functionality
and performance of a complete and fully integrated software solution. It tests if the
system meets the specified requirements and if it is suitable for delivery to the end-
users. This type of testing is performed after the integration testing and before the
acceptance testing.
System Testing is a type of software testing that is performed on a complete
integrated system to evaluate the compliance of the system with the corresponding
requirements. In system testing, integration testing passed components are taken as
input. The goal of integration testing is to detect any irregularity between the units
that are integrated together. System testing detects defects within both the integrated
units and the whole system. The result of system testing is the observed behavior of
a component or a system when it is tested. System Testing is carried out on the
whole system in the context of either system requirement specifications or functional
requirement specifications or in the context of both. System testing tests the design
and behavior of the system and also the expectations of the customer. It is performed
to test the system beyond the bounds mentioned in the software requirements
specification (SRS). System Testing is basically performed by a testing team that is independent of the development team, which helps to test the quality of the system impartially. It covers both functional and non-functional testing and is a form of black-box testing. System Testing is performed after the integration testing and
before the acceptance testing.
System Testing Process: System Testing is performed in the following steps:
Test Environment Setup: Create a testing environment for better-quality testing.
Create Test Case: Generate test cases for the testing process.
Create Test Data: Generate the data that is to be tested.
Execute Test Case: After the generation of the test cases and the test data, the test cases are executed.
Defect Reporting: Defects detected in the system are reported.
Regression Testing: It is carried out to test the side effects of the testing process.
Log Defects: Detected defects are logged and then fixed.
Retest: If a test is not successful, the test is performed again after the fix.
Types of Non-functional Testing
Performance Testing – Software Testing
Performance Testing is a type of software testing that ensures software applications
perform properly under their expected workload. It is a testing technique carried out
to determine system performance in terms of sensitivity, reactivity, and stability under
a particular workload.
Performance testing is a type of software testing that focuses on evaluating the
performance and scalability of a system or application. The goal of performance
testing is to identify bottlenecks, measure system performance under various loads
and conditions, and ensure that the system can handle the expected number of users
or transactions.
There are several types of performance testing, including:
Load testing: Load testing simulates a real-world load on the system to see how it
performs under stress. It helps identify bottlenecks and determine the maximum
number of users or transactions the system can handle.
Stress testing: Stress testing is a type of load testing that tests the system’s ability
to handle a high load above normal usage levels. It helps identify the breaking
point of the system and any potential issues that may occur under heavy load
conditions.
Spike testing: Spike testing is a type of load testing that tests the system’s ability to
handle sudden spikes in traffic. It helps identify any issues that may occur when
the system is suddenly hit with a high number of requests.
Soak testing: Soak testing is a type of load testing that tests the system’s ability to
handle a sustained load over a prolonged period. It helps identify any issues that
may occur after prolonged usage of the system.
Endurance testing: This type of testing is similar to soak testing, but it focuses on
the long-term behaviour of the system under a constant load.
Performance Testing is the process of analysing the quality and capability of a
product. It is a testing method performed to determine the system’s performance
in terms of speed, reliability, and stability under varying workloads. Performance
testing is also known as Perf Testing.
Performance Testing Attributes:
Speed:
It determines whether the software product responds rapidly.
Scalability:
It determines the amount of load the software product can handle at a time.
Stability:
It determines whether the software product is stable in case of varying workloads.
Reliability:
It determines whether the software product consistently performs its functions without failure.
Objective of Performance Testing:
1. The objective of performance testing is to eliminate performance congestion.
2. It uncovers what needs to be improved before the product is launched in the
market.
3. The objective of performance testing is to make software rapid.
4. The objective of performance testing is to make software stable and reliable.
5. The objective of performance testing is to evaluate the performance and
scalability of a system or application under various loads and conditions. It helps
identify bottlenecks, measure system performance, and ensure that the system
can handle the expected number of users or transactions. It also helps to ensure
that the system is reliable, stable, and can handle the expected load in a
production environment.
Types of Performance Testing:
1. Load testing:
It checks the product’s ability to perform under anticipated user loads. The
objective is to identify performance congestion before the software product is
launched in the market.
2. Stress testing:
It involves testing a product under extreme workloads to see whether it handles
high traffic or not. The objective is to identify the breaking point of a software
product.
3. Endurance testing:
It is performed to ensure the software can handle the expected load over a long
period.
4. Spike testing:
It tests the product’s reaction to sudden large spikes in the load generated by
users.
5. Volume testing:
In volume testing, a large volume of data is stored in a database and the overall software system’s behaviour is observed. The objective is to check the product’s performance under varying database volumes.
6. Scalability testing:
In scalability testing, the software application’s effectiveness is determined by
scaling up to support an increase in user load. It helps in planning capacity
additions to your software system.
Performance Testing Process:
Performance Testing Tools:
1. Jmeter
2. Open STA
3. Load Runner
4. Web Load
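Dedicated tools like the ones above are normally used, but the idea behind a load test can be sketched in plain Java. The example below is a hypothetical illustration only: it spins up a number of virtual users, calls a simulated operation (a real test would call the deployed system instead), and reports the median and 95th-percentile latency:

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.concurrent.*;

public class MiniLoadTest {
    // Hypothetical operation under test; sleeping simulates ~20 ms of work.
    static void operationUnderTest() throws InterruptedException {
        Thread.sleep(20);
    }

    public static void main(String[] args) throws Exception {
        int virtualUsers = 50;          // concurrent users to simulate
        int requestsPerUser = 10;
        ExecutorService pool = Executors.newFixedThreadPool(virtualUsers);
        List<Long> latencies = Collections.synchronizedList(new ArrayList<>());

        for (int u = 0; u < virtualUsers; u++) {
            pool.submit(() -> {
                for (int r = 0; r < requestsPerUser; r++) {
                    long start = System.nanoTime();
                    try { operationUnderTest(); } catch (InterruptedException ignored) { }
                    latencies.add((System.nanoTime() - start) / 1_000_000);  // milliseconds
                }
            });
        }
        pool.shutdown();
        pool.awaitTermination(1, TimeUnit.MINUTES);

        latencies.sort(null);
        System.out.println("Requests executed : " + latencies.size());
        System.out.println("Median latency ms : " + latencies.get(latencies.size() / 2));
        System.out.println("95th percentile ms: " + latencies.get((int) (latencies.size() * 0.95)));
    }
}
```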
Advantages of Performance Testing :
Performance testing ensures the speed, load capability, accuracy, and other
performances of the system.
It identifies, monitors, and resolves the issues if anything occurs.
It ensures the great optimization of the software and also allows many users to
use it at the same time.
It ensures the client’s as well as the end customer’s satisfaction.
Performance testing has several advantages that make it an important aspect of software testing:
Identifying bottlenecks: Performance testing helps identify bottlenecks in the
system such as slow database queries, insufficient memory, or network
congestion. This helps developers optimize the system and ensure that it can
handle the expected number of users or transactions.
Improved scalability: By identifying the system’s maximum capacity,
performance testing helps ensure that the system can handle an increasing
number of users or transactions over time. This is particularly important for web-
based systems and applications that are expected to handle a high volume of
traffic.
Improved reliability: Performance testing helps identify any potential issues that
may occur under heavy load conditions, such as increased error rates or slow
response times. This helps ensure that the system is reliable and stable when it is
deployed to production.
Reduced risk: By identifying potential issues before deployment, performance
testing helps reduce the risk of system failure or poor performance in production.
Cost-effective: Performance testing is more cost-effective than fixing problems
that occur in production. It is much cheaper to identify and fix issues during the
testing phase than after deployment.
Improved user experience: By identifying and addressing bottlenecks,
performance testing helps ensure that users have a positive experience when
using the system. This can help improve customer satisfaction and loyalty.
Better Preparation: Performance testing can also help organizations prepare for
unexpected traffic patterns or changes in usage that might occur in the future.
Compliance: Performance testing can help organizations meet regulatory and
industry standards.
Better understanding of the system: Performance testing provides a better
understanding of how the system behaves under different conditions, which can
help in identifying potential issue areas and improving the overall design of the
system.
Disadvantages of Performance Testing :
Sometimes, users may find performance issues in the real-time environment.
Team members who are writing test scripts or test cases in the automation tool
should have a high level of knowledge.
Team members should have high proficiency in debugging the test cases or test
scripts.
Low performance in the real environment may lead to the loss of a large number of users.
Performance testing also has some disadvantages, which include:
Resource-intensive: Performance testing can be resource-intensive, requiring
significant hardware and software resources to simulate many users or
transactions. This can make performance testing expensive and time-consuming.
Complexity: Performance testing can be complex, requiring specialized knowledge
and expertise to set up and execute effectively. This can make it difficult for teams
with limited resources or experience to perform performance testing.
Limited testing scope: Performance testing is focused on the performance of the
system under stress, and it may not be able to identify all types of issues or bugs.
It’s important to combine performance testing with other types of testing such as
functional testing, regression testing, and acceptance testing.
Inaccurate results: If the performance testing environment is not representative of
the production environment or the performance test scenarios do not accurately
simulate real-world usage, the results of the test may not be accurate.
Difficulty in simulating real-world usage: It’s difficult to simulate real-world usage,
and it’s hard to predict how users will interact with the system. This makes it
difficult to know if the system will handle the expected load.
Complexity in analysing the results: Performance testing generates a large
amount of data, and it can be difficult to analyse the results and determine the root
cause of performance issues.
Usability Testing
You design a product (say a refrigerator), and when it is completely ready, you need a potential customer to test it to check that it works. To understand whether the machine is ready to come to the market, potential customers test the machines. Likewise, the best example of usability testing is when software undergoes various testing processes performed by potential users before it is launched into the market. It is a part of the software development life cycle (SDLC).
Table of Contents
What is Usability Testing?
Phases of Usability Testing
Advantages and Disadvantages of Usability Testing
Why is Usability Testing Used?
Factors Affecting Cost of Usability Testing
Techniques and Methods of Usability Testing
Conclusion
What is Usability Testing?
Several tests are performed on a product before deploying it. You need to collect
qualitative and quantitative data and satisfy customers’ needs with the product. A
proper final report is made mentioning the changes required in the product
(software). Usability Testing in software testing is a type of testing, that is done
from an end user’s perspective to determine if the system is easily usable. Usability
testing is generally the practice of testing how easy a design is to use on a group of
representative users. A very common mistake in usability testing is conducting a study too late in the design process. If you wait until right before your product is released, you won’t have the time or money to fix any issues, and you’ll have wasted a lot of effort developing your product the wrong way.
This testing has a cycle wherein:
1. the product is ready,
2. customers are asked to test it,
3. if any further changes are required,
4. the product (software) is returned to the development team with feedback to make the changes,
5. the software has to run through usability testing again,
6. if there are no more changes required,
7. the software is launched in the market.
Steps 2 to 5 are repeated until the software is completely ready and there are no further changes required. This process helps you to meet customers’ needs and identify the problems faced by customers during the usage of the software. Usability testing is also referred to as User Experience (UX) testing.
Phases of Usability Testing:
There are five phases in usability testing which are followed by the system when
usability testing is performed. These are given below:
1. Prepare your product or design to test: The first phase of usability testing is choosing a product and then making it ready for usability testing, including the functions and operations that the test will require. Hence, this is one of the most significant phases in usability testing.
2. Find your participants: The second phase of usability testing is finding the participants who will help you perform the usability testing. Generally, the number of participants that you need is based on several case studies; mostly, five participants can find almost as many usability problems as you’d find using many more test participants.
3. Write a test plan: This is the third phase of usability testing. Developing a plan for the test is one of the first steps in each round of usability testing. The main purpose of the plan is to document what you are going to do, how you are going to conduct the test, what metrics you are going to capture, the number of participants you are going to test, and what scenarios you will use.
4. Take on the role of the moderator: This is the fourth phase of usability testing, and here the moderator plays a vital role that involves building a partnership with the participant. Most of the research findings are derived by observing the participant’s actions and gathering verbal feedback. To be an effective moderator, you need to be able to make instant decisions while simultaneously overseeing various aspects of the research session.
5. Present your findings/ final report: This phase generally involves combining
your results into an overall score and presenting it meaningfully to your audience.
An easy method to do this is to compare each data point to a target goal and
represent this as one single metric based on the percentage of users who
achieved this goal.
Advantages and Disadvantages of Usability Testing
Usability testing is preferred for evaluating a product or service by testing it with representative users. In usability testing, the development and design teams identify issues before they are coded into the product, so issues are found and solved earlier.
During a usability test, you can:
Learn if participants are able to complete the specific task completely.
Identify how long it takes to complete the specific task.
Gives excellent features and functionalities to the product
Improves user satisfaction and fulfils requirements based on user’s feedback
The product becomes more efficient and effective
The biggest cons of usability testing are the cost and time: the more usability testing is performed, the more cost and time are consumed.
Why is Usability Testing Used?
When software is ready, it is important to make sure that the user experience with the product is seamless. It should be easy to navigate and all the functions should work properly; otherwise, a competitor’s website will win the race. Therefore,
usability testing is performed. The objective of usability testing is to understand
customers’ needs and requirements and also how users interact with the product
(software). With the test, all the features, functions, and purposes of the software are
checked.
The primary goals of usability testing are – discovering problems (hidden
issues) and opportunities, comparing benchmarks, and comparison against
other websites. The parameters tested during usability testing are efficiency,
effectiveness, and satisfaction. It should be performed before any new design is
made. This test should be iterated until all the necessary changes have been made. Improving the site consistently by performing usability testing enhances its performance, which in turn makes it a better website.
Compatibility Testing
Testing the application in the same environment but with different versions. For example, to test the compatibility of the Facebook application on your Android mobile, first check the compatibility with Android 9.0 and then with Android 10.0 for the same version of the Facebook app.
Testing the application in the same version but with different environments. For example, to test the compatibility of the Facebook application on your Android mobile, first check the compatibility of a lower version of the Facebook application with Android 10.0 (or a version of your choice) and then a higher version of the Facebook application with the same version of Android.
Why is compatibility testing important?
1. It ensures complete customer satisfaction.
2. It provides service across multiple platforms.
3. It identifies bugs during the development process.
Compatibility testing defects:
1. Variety of user interface.
2. Changes with respect to font size.
3. Alignment issues.
4. Issues related to existence of broken frames.
5. Issues related to overlapping of content.
3. After Testing:
After testing, only the test summary remains, which is a collective analysis of all the test reports and logs. It summarizes and concludes whether the software is ready to launch; if it is, the software is released under the version control system.
How to write Test Cases – Software Testing
Software testing is known as a process for validating and verifying the working of a
software/application. It makes sure that the software is working without any errors,
bugs, or any other issues and gives the expected output to the user. The software
testing process isn’t limited to finding faults in the present software but also finding
measures to upgrade the software in various factors such as efficiency, usability, and
accuracy. So, to test software, software testing provides a particular format called a test case.
This article focuses on discussing the following topics in the Test Case:
1. What is a Test Case?
2. Test Case vs Test Scenario.
3. When do we Write Test Cases?
4. Why Write Test Cases?
5. Test Case Template.
6. Best Practice for Writing Test Cases.
7. Test Case Management Tools.
8. Formal and Informal Test Case.
9. Types of Test Cases.
10. Example.
What is a Test Case?
A test case is a defined format for software testing required to check if a particular
application/software is working or not. A test case consists of a certain set of
conditions that need to be checked to test an application or software i.e. in more
simple terms when conditions are checked it checks if the resultant output meets with
the expected output or not. A test case consists of various parameters such as ID,
condition, steps, input, expected result, result, status, and remarks.
Parameters of a Test Case:
Module Name: Subject or title that defines the functionality of the test.
Test Case Id: A unique identifier assigned to every single condition in a test
case.
Tester Name: The name of the person who would be carrying out the test.
Test scenario: The test scenario provides a brief description to the tester, as in
providing a small overview to know about what needs to be performed and the
small features, and components of the test.
Test Case Description: The condition required to be checked for a given software. For example, check whether only-numbers validation works for an age input box.
Test Steps: Steps to be performed for the checking of the condition.
Prerequisite: The conditions required to be fulfilled before the start of the test
process.
Test Priority: As the name suggests, this assigns priority to the test cases, indicating which have to be performed first and which, being less important, can be performed later.
Test Data: The inputs to be taken while checking for the conditions.
Test Expected Result: The output which should be expected at the end of the
test.
Test parameters: Parameters assigned to a particular test case.
Actual Result: The output that is displayed at the end.
Environment Information: The environment in which the test is being performed,
such as the operating system, security information, the software name, software
version, etc.
Status: The status of tests such as pass, fail, NA, etc.
Comments: Remarks on the test for the betterment of the software.
Test Case vs Test Scenario
Below are some of the points of difference between a test case and a test scenario:
Definition: A test case is a defined format for software testing required to check if a particular application/software/module is working or not; here, we check for different conditions regarding the same. A test scenario provides a small description of what needs to be performed based on the use case.
Objective: A test case focuses on “What to test” and “How to test”. A test scenario focuses more on “What to test”.
Inputs: A test case includes all positive and negative inputs, expected results, navigation steps, etc. Test scenarios are one-liner statements.
Test Case Template
Below are some of the fields in a test case template:
Test Case Description: Each test case should have a proper description to let testers know what the test case is about.
Test Steps: Mention all the test steps in detail, to be executed from the end-user’s perspective.
Test Data: The test data that could be used as input for the test cases.
Expected Result: The result that is expected after executing the test case.
Actual Result: The result that the system shows once the test case is executed.
Status: Set the status as Pass or Fail based on the expected result against the actual result.
Project Name: Name of the project to which the test case belongs.
Module Name: Name of the module to which the test case belongs.
Reference Document: Mention the path of the reference document.
In the given template below it’s identifiable that the section from module name to test
scenario is the header section while the table that lies below the test scenario (from
test case ID to comments) is the body of the test case template.
Here a test case template for login functionality has been created with its parameters
and values.
Types of Test Cases
Functionality Test Case: The functionality test case is to determine if the
interface of the software works smoothly with the rest of the system and its users
or not. Black box testing is used while checking for this test case, as we check
everything externally and not internally for this test case.
Unit Test Case: A unit test case is one where an individual part or a single unit of
the software is tested. Each unit/individual part is tested, and a different test case
is created for each unit.
User Interface Test Case: The UI or user interface test case is when every
component of the UI that the user would come in contact with is tested. It checks
whether the UI requirements specified by the user are fulfilled or not.
Integration Test Case: Integration testing is when all the units of the software
are combined and then they are tested. It is to check that each component and its
units work together without any issues.
Performance Test Case: The performance test case helps to determine
response time as well as the overall effectiveness of the system/software. It’s to
see if the application will handle real-world expectations.
Database Test Case: Also known as back-end testing or data testing, it checks that
everything works fine concerning the database. Test cases for tables, schemas,
triggers, etc. are covered here.
Security Test Case: The security test case helps to determine that the application
restricts actions as well as permissions wherever necessary. Encryption and
authentication are considered as main objectives of the security test case. The
security test case is done to protect and safeguard the data of the software.
Usability Test Case: Also known as a user experience test case, it checks how
user-friendly or easy to approach a software would be. Usability test cases are
designed by the User experience team and performed by the testing team.
User Acceptance Test Case: The user acceptance test case is prepared by the
testing team, but the user/client performs the testing and reviews whether the
software works in the real-world environment.
Example
Below is an example of preparing various test cases for a login page with a
username and password.
Unit Test case: Here we are only checking whether the username is validated for a
minimum length of eight characters.
Test Id | Test Condition | Test Steps | Test Input | Expected Result | Actual Result | Status | Remarks
1 | Check that the username field accepts an input of at least eight characters. | Enter the username. | geeksforgeeks | Accepts the input of thirteen characters. | Accepts the input of thirteen characters. | Pass | --
Here it is only checked whether an input of thirteen characters passes the validation.
Since the thirteen-character word 'geeksforgeeks' is entered, the test is successful;
it would have failed for a shorter input.
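As a rough illustration of how this unit-level check could be automated, here is a small Python unittest sketch; validate_username is a hypothetical helper assumed to return True when the username is at least eight characters long.

```python
import unittest

def validate_username(username: str) -> bool:
    # Hypothetical validation rule: the username must be at least eight characters.
    return len(username) >= 8

class TestUsernameValidation(unittest.TestCase):
    def test_thirteen_character_username_is_accepted(self):
        # 'geeksforgeeks' has thirteen characters, so it satisfies the rule.
        self.assertTrue(validate_username("geeksforgeeks"))

    def test_short_username_is_rejected(self):
        # A username shorter than eight characters should fail validation.
        self.assertFalse(validate_username("geek"))

if __name__ == "__main__":
    unittest.main()
```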
Functionality Test case: Here it is checked whether the username and password
work together when the login button is clicked.
Test Id | Test Condition | Test Steps | Test Input | Expected Result | Actual Result | Status | Remarks
1 | Check that with the correct username and password the user is able to log in. | 1. Enter the username 2. Enter the password 3. Click on the login button | username: geeksforgeeks, password: geeksforever | Login successful | Login successful | Pass | None
2 | Check that with an incorrect username and password the user is not able to log in. | 1. Enter the username 2. Enter the password 3. Click on the login button | username: geeksforgeeks, password: geekstogether | Login unsuccessful | Login unsuccessful | Pass | None
Here it is checked whether the login functionality works for both right and wrong
inputs: it shows login successful for the right credentials and unsuccessful for the
wrong ones. Hence both tests have passed; otherwise they would have failed.
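The same two functional checks could be scripted as follows; login() here is a hypothetical stand-in for the real login flow, and the credentials are the sample values from the table above.

```python
import unittest

# Hypothetical stand-in for the application's credential store.
VALID_CREDENTIALS = {"geeksforgeeks": "geeksforever"}

def login(username: str, password: str) -> str:
    """Return 'Login successful' for valid credentials, otherwise 'Login unsuccessful'."""
    if VALID_CREDENTIALS.get(username) == password:
        return "Login successful"
    return "Login unsuccessful"

class TestLoginFunctionality(unittest.TestCase):
    def test_correct_credentials_log_in(self):
        self.assertEqual(login("geeksforgeeks", "geeksforever"), "Login successful")

    def test_incorrect_credentials_are_rejected(self):
        self.assertEqual(login("geeksforgeeks", "geekstogether"), "Login unsuccessful")

if __name__ == "__main__":
    unittest.main()
```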
User Acceptance Test Case: Here user feedback is taken on whether the login page
loads properly or not.
Test Id | Test Condition | Test Steps | Test Input | Expected Result | Actual Result | Status | Remarks
1 | Check that the login page loads efficiently for the client. | Click on the login button. | -- | The login page loads and the 'Welcome to login page' message is displayed. | The page is not loaded due to a browser compatibility issue on the user's side. | Fail | --
Here it is checked, by clicking on the login button, whether the page loads and the
'Welcome to login page' message is displayed. The test has failed because the page
was not loaded due to a browser compatibility issue; it would have passed if the
page had loaded.
Testing Technique
Error Guessing in Software Testing
Software applications are part of our daily life. Whether on a laptop, a mobile phone,
or any other digital device or interface, our day starts and ends with the use of
various software applications. That is why software companies try their best to
deliver good-quality, error-free software applications to users.
When a company develops a software application, software testing plays a major role
in it. Testers do not only test the product with a set of specified test cases; they also
test the software by going beyond the testing documents. This is where the term
error guessing comes in: it is not specified in any testing instruction manual, yet it is
still performed. In this article we will discuss what an error is, what error guessing is,
where and how it is performed, and the benefits we get by performing it.
An error appears when there is a logical mistake in the code made by the developer,
and it is very hard for a developer to find such an error in a large system. The error
guessing technique is used to address this problem: it is a software testing technique
in which the test engineer guesses where errors may lie and tries to break the
software. The error guessing technique can also be applied alongside all of the other
testing techniques to produce more effective and workable tests.
What is the use of Error Guessing?
In software testing, error guessing is a method in which experience and skill play an
important role, as possible bugs and defects are guessed in areas where formal
testing would not work. That is why it is also called experience-based testing, which
has no specific method of testing. It is not a formal way of performing testing, yet it is
still important, since it sometimes resolves issues that would otherwise remain unresolved.
Where or how to use it?
Error guessing is a sort of black-box testing technique. It is best used alongside the
conditions where other black-box testing techniques, for instance boundary value
analysis and equivalence partitioning, are applied, since those techniques cannot
cover all of the conditions in the application that are prone to error.
Advantages and Disadvantages of Error Guessing Technique :
Advantages :
It is effective when used with other testing approaches.
It is helpful to solve some complex and problematic areas of application.
It figures out errors which may not be identified through other formal testing
techniques.
It helps in reducing testing times.
Disadvantages :
Only capable and skilled testers can perform it.
It is dependent on the tester's experience and skills.
It fails to guarantee the quality standard of the application.
It is not an efficient way of detecting errors when compared to the effort involved.
Drawbacks of Error Guessing technique:
It gives no assurance that the software has reached the expected quality.
It never provides full coverage of an application.
Factors used in error guessing :
1. Lessons learned from past releases.
2. Experience of testers.
3. Historical learning.
4. Test execution report.
5. Earlier defects.
6. Production tickets.
7. Normal testing rules.
8. Application UI.
9. Previous test results.
Error guessing is one of the popular testing techniques. Even though it is not a
precise approach to testing, it makes the testing work simpler and saves a lot of
time, and when it is combined with other testing techniques we get better results. In
this kind of testing, it is essential to have skilled and experienced testers.
Equivalence Partitioning Method
Equivalence Partitioning Method is also known as Equivalence Class Partitioning
(ECP). It is a black-box software testing technique that divides the input domain into
classes of data from which test cases can be derived. An ideal test case identifies a
class of errors that might otherwise require many arbitrary test cases to be executed
before a general error is observed.
In equivalence partitioning, equivalence classes are evaluated for the given input
conditions. Whenever an input is given, the type of input condition is checked, and
for this input condition the equivalence class represents or describes a set of valid or
invalid states.
Guidelines for Equivalence Partitioning :
If a range is given as an input condition, then one valid and two invalid
equivalence classes are defined.
If a specific value is given as input, then one valid and two invalid equivalence
classes are defined.
If a member of a set is given as an input, then one valid and one invalid equivalence
class is defined.
If a Boolean value is given as an input condition, then one valid and one invalid
equivalence class is defined.
Example-1:
Let us consider an example of any college admission process. There is a college that
gives admissions to students based upon their percentage.
Consider a percentage field that accepts percentages only between 50% and 90%;
anything more or less is not accepted, and the application redirects the user to an
error page. If the percentage entered by the user is less than 50% or more than
90%, the equivalence partitioning method will show an invalid percentage. If the
percentage entered is between 50% and 90%, the equivalence partitioning method
will show a valid percentage, as in the sketch below.
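A minimal sketch of how these partitions could be expressed in code, assuming the boundaries 50 and 90 from the example; is_valid_percentage is a hypothetical helper, not part of any real application.

```python
def is_valid_percentage(percentage: float) -> bool:
    # Valid equivalence class: 50 to 90 inclusive; everything else is invalid.
    return 50 <= percentage <= 90

# One representative value from each equivalence class is enough to cover the class.
representatives = {
    "below range (invalid class)": 30,
    "within range (valid class)": 70,
    "above range (invalid class)": 95,
}
for label, value in representatives.items():
    verdict = "valid percentage" if is_valid_percentage(value) else "invalid percentage"
    print(f"{value}% -> {label}: {verdict}")
```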
Example 2:
Let us consider an example of an online shopping site. In this site, each product
has a specific product ID and product name. We can search for a product either by
using the name of the product or by the product ID. Here, we consider a search field
that accepts only a valid product ID or product name.
Let us consider a set of products with product IDs where the user wants to search for
Mobiles. Below is a table of some products with their product IDs.
Product | Product ID
Mobiles | 45
Laptops | 54
Pen Drives | 67
Keyboard | 76
Headphones | 34
If the product ID entered by the user is invalid, the application will redirect the
customer or user to the error page. If the product ID entered by the user is valid,
i.e. 45 for Mobiles, the equivalence partitioning method will show a valid product ID.
Example-3 :
Let us consider an example of a software application. There is a function of the
software application that accepts only a particular number of digits, neither more nor
fewer than that particular number.
Consider an OTP field that accepts only a six-digit number; anything greater or less
than six digits will not be accepted, and the application will redirect the customer or
user to the error page. If the OTP entered by the user has fewer or more than six
digits, the equivalence partitioning method will show an invalid OTP. If the OTP
entered is exactly six digits, the equivalence partitioning method will show a valid OTP.
Boundary Value Analysis
Boundary Value Analysis is based on testing the boundary values of valid and invalid
partitions. The behavior at the edge of the equivalence partition is more likely to be
incorrect than the behavior within the partition, so boundaries are an area where
testing is likely to yield defects.
It checks for the input values near the boundary that have a higher chance of error.
Every partition has its maximum and minimum values and these maximum and
minimum values are the boundary values of a partition.
Note:
A boundary value for a valid partition is a valid boundary value.
A boundary value for an invalid partition is an invalid boundary value.
For each variable we check-
Minimum value.
Just above the minimum.
Nominal Value.
Just below Max value.
Max value.
Example: Consider a system that accepts ages from 18 to 56.
Boundary Value Analysis (Age accepts 18 to 56)
Invalid (min - 1) | Valid (min, min + 1, nominal, max - 1, max) | Invalid (max + 1)
17 | 18, 19, 37, 55, 56 | 57
Valid Test cases: Valid test cases for the above can be any value entered greater
than 17 and less than 57.
Enter the value- 18.
Enter the value- 19.
Enter the value- 37.
Enter the value- 55.
Enter the value- 56.
Invalid Test cases: When any value less than 18 or greater than 56 is entered.
Enter the value- 17.
Enter the value- 57.
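The boundary values for this example can be generated and checked mechanically; the sketch below assumes a hypothetical accepts_age function implementing the 18 to 56 rule described above.

```python
def accepts_age(age: int) -> bool:
    # The system accepts ages from 18 to 56 inclusive.
    return 18 <= age <= 56

minimum, maximum = 18, 56
nominal = (minimum + maximum) // 2  # 37

# Boundary value analysis: min-1, min, min+1, nominal, max-1, max, max+1.
boundary_values = [minimum - 1, minimum, minimum + 1, nominal,
                   maximum - 1, maximum, maximum + 1]
for age in boundary_values:
    print(age, "accepted" if accepts_age(age) else "rejected")
```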
Single Fault Assumption: When more than one variable for the same application is
checked, one can use the single fault assumption: hold all but one variable at their
nominal values and allow the remaining variable to take its extreme values. For
n variables to be checked, this gives a maximum of 4n + 1 test cases.
Problem: Consider a program for determining the previous date.
Input: Day, Month, Year with valid ranges as:
1 ≤ Month ≤ 12
1 ≤ Day ≤ 31
1900 ≤ Year ≤ 2000
Design Boundary Value Test Cases.
Solution: Taking the year under the single fault assumption, i.e. Year takes values
varying from 1900 to 2000 while the other variables have nominal values.
Test Case | Month | Day | Year | Output
Taking Day as Single Fault Assumption i.e. Day will be having values varying from 1
to 31 and others will have nominal values.
Test Case | Month | Day | Year | Output
Taking Month as Single Fault Assumption i.e. Month will be having values varying
from 1 to 12 and others will have nominal values.
Test Case | Month | Day | Year | Output
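The values for these tables can be generated mechanically. The sketch below applies the single fault assumption to each variable in turn; the nominal values (Month = 6, Day = 15, Year = 1950) are assumptions chosen for illustration, not values given in the source.

```python
# Single fault assumption: vary one variable over its boundary values
# (min, min+1, nominal, max-1, max) while holding the others at nominal values.
ranges = {"Month": (1, 12), "Day": (1, 31), "Year": (1900, 2000)}
nominal = {"Month": 6, "Day": 15, "Year": 1950}  # assumed nominal values

def boundary_values(lo: int, hi: int, nom: int) -> list:
    return [lo, lo + 1, nom, hi - 1, hi]

test_cases = []
for variable, (lo, hi) in ranges.items():
    for value in boundary_values(lo, hi, nominal[variable]):
        case = dict(nominal)
        case[variable] = value
        test_cases.append(case)

# 3 variables x 5 values = 15 cases; the all-nominal case appears once per variable,
# so there are 13 distinct cases, consistent with the 4n + 1 bound for n = 3.
for case in test_cases:
    print(case)
```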
The idea and motivation behind BVA are that errors tend to occur near the extremes
of the variables. The defect on the boundary value can be the result of countless
possibilities.
Typing of Languages: BVA is not well suited to free-form languages such as COBOL
and FORTRAN, which are known as weakly typed languages. This flexibility can be
useful, but it can also cause bugs. PASCAL and ADA are strongly typed languages
that require all constants and variables to be defined with an associated data type.
Limitations of Boundary Value Analysis:
It works well when the program under test is a function of several independent, bounded variables.
It cannot consider the nature of the functional dependencies of variables.
BVA is quite rudimentary.
Equivalence Partitioning
It is a type of black-box testing that can be applied to all levels of software testing. In
this technique, input data are divided into equivalent partitions that can be used
to derive test cases:
In this, input data are divided into different equivalence data classes.
It is applied when there is a range of input values.
Example: Below is the example to combine Equivalence Partitioning and Boundary
Value.
Consider a field that accepts a minimum of 6 characters and a maximum of 10
characters. Then the partitions of the test cases are the ranges 0 – 5, 6 – 10, and 11 – 14.
# | Test Scenario | Expected Outcome
1 | Enter a value of 0 to 5 characters | Not accepted
Why Combine Equivalence Partitioning and Boundary Value Analysis: Following are
some of the reasons to combine the two approaches (a small combined sketch follows this list):
Test cases are reduced into manageable chunks.
The effectiveness of the testing is not compromised.
It works well with a large number of variables.
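A small sketch combining the two techniques for the 6 to 10 character field above; accepts_input is a hypothetical validation function used only for illustration.

```python
def accepts_input(value: str) -> bool:
    # The field accepts a minimum of 6 and a maximum of 10 characters.
    return 6 <= len(value) <= 10

# Equivalence partitions: 0-5 (invalid), 6-10 (valid), 11-14 (invalid).
# Boundary values sit at the edges of the valid partition: 5, 6, 7, 9, 10, 11.
lengths_to_test = [0, 3, 5, 6, 7, 9, 10, 11, 14]
for n in lengths_to_test:
    value = "x" * n
    print(f"length {n:2d}: {'accepted' if accepts_input(value) else 'not accepted'}")
```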
Test Management
Test plan – Software Testing
In software testing, documentation is very important. Testing should be documented to
provide efficient resource control and monitoring. For successful testing, a test plan
plays a very important role. Here, we will discuss the following points:
1. What is a Test Plan?
2. Why is a Test Plan important?
3. Objectives of the Test Plan.
4. Components and Attributes of a Test Plan.
5. How to create a Test Plan.
6. Types of Test Plans.
What is a Test Plan:
A test plan is a document that consists of all future testing-related activities. It is
prepared at the project level and, in general, it defines the work products to be tested,
how they will be tested, and how test types are distributed among the testers. Before
testing starts, the test manager prepares the test plan; in any company, whenever a
new project is taken up, the test manager prepares the test plan before the testers
get involved in testing.
The test plan serves as the blueprint that changes according to the progressions
in the project and stays current at all times.
It serves as a base for conducting testing activities and coordinating activities
among a QA team.
It is shared with Business Analysts, Project Managers, and anyone associated
with the project.
Factors | Roles
Who writes Test Plans? | Test Lead, Test Manager, Test Engineer
4. Serves as a blueprint: The test plan serves as a blueprint for all the testing
activities, it has every detail from beginning to end.
5. Helps to identify solutions: A test plan helps the team members consider the
project's challenges and identify the solutions.
6. Serves as a rulebook: The test plan serves as a rulebook of rules to be followed
as the project is completed phase by phase.
Types of Test Plans:
The following are the three types of test plans:
Master Test Plan: This type of test plan includes multiple test strategies and
has multiple levels of testing. It goes into great depth on the planning and
management of testing at the various test levels and thus provides a bird’s eye
view of the important decisions made, tactics used, etc. It includes a list of tests
that must be executed, test coverage, the connection between various test levels,
etc.
Phase Test Plan: In this type of test plan, emphasis is on any one phase of
testing. It includes further information on the levels listed in the master testing
plan. Information like testing schedules, benchmarks, activities, templates, and
other information that is not included in the master test plan is included in the
phase test plan.
Specific Test Plan: This type of test plan is designed for specific types of testing,
especially non-functional testing, for example plans for conducting performance
tests or security tests.
Components and Attributes of Test Plan:
There is no hard and fast rule for preparing a test plan, but there are some standard
attributes that companies follow:
1. Objective: It describes the aim of the test plan, i.e. the processes and procedures
that will be followed to deliver quality software to customers. The overall objective of
the test is to find as many defects as possible and to make the software bug-free. The
test objective must be broken into components and sub-components, and in every
component the following activities should be performed:
List all the functionality and performance to be tested.
Make goals and targets based on the application features.
2. Scope: It consists of information that needs to be tested concerning an
application. The scope can be divided into two parts:
In-Scope: The modules that are to be tested rigorously.
Out of Scope: The modules that are not to be tested rigorously.
Example: In an application, features A, B, C, and D have to be developed, but feature
B has already been designed by another company. So the development team will
purchase B from that company and perform only integration testing of B with A, C,
and D.
3. Testing Methodology: The methods that are going to be used for testing vary from
application to application. The testing methodology is decided based on the features
and the application requirements.
Since testing terms are not standardized, the kind of testing to be used should be
defined in the testing methodology, so that everyone can understand it.
4. Approach: The approach of testing different software is different. It deals with the
flow of applications for future reference. It has two aspects:
High-Level Scenarios: For testing critical features high-level scenarios are
written. For Example, login to a website, and book from a website.
The Flow Graph: It is used when one wants to gain benefits such as easy converging
and merging of flows.
5. Assumption: In this phase, certain assumptions will be made.
Example:
The testing team will get proper support from the development team.
The tester will get proper knowledge transfer from the development team.
Proper resource allocation will be given by the company to the testing department.
6. Risk: All the risks that can happen if the assumptions are broken. For example, in the
case of a wrong budget estimation, the cost may overrun. Some reasons that may lead
to risk are:
Test Manager has poor management skills.
Hard to complete the project on time.
Lack of cooperation.
7. Mitigation Plan: If any risk is involved then the company must have a backup
plan, the purpose is to avoid errors. Some points to resolve/avoid risk:
Test priority is to be set for each test activity.
Managers should have leadership skills.
Training course for the testers.
8. Roles and Responsibilities: All the roles and responsibilities of every member of a
particular testing team have to be recorded.
Example:
Test Manager: Manages the project, takes appropriate resources, and gives
project direction.
Tester: Identify the testing technique, verify the test approach, and save project
costs.
9. Schedule: Under this, the start and end dates of every testing-related activity are
recorded, for example the dates for starting and finishing the writing of test cases.
10. Defect Tracking: It is an important process in software engineering, as lots of
issues arise when you develop a critical system for business. Any defect found while
testing must be passed to the developer team. The following methods are used for the
process of defect tracking:
Information Capture: In this, we take basic information to begin the process.
Prioritize: The task is prioritized based on severity and importance.
Communication: Communication between the identifier of the bug and the fixer
of the bug.
Environment: Test the application based on hardware and software.
Example: The bug can be identified using bug-tracking tools such as Jira, Mantis,
and Trac.
11. Test Environments: This is the environment that the testing team will use, i.e. the
list of hardware and software available while testing the application; the items to be
tested are written under this section. The installation of software is also checked
under this.
Example:
Software configuration on different operating systems, such as Windows, Linux,
Mac, etc.
Hardware Configuration depends on RAM, ROM, etc.
12. Entry and Exit Criteria: The set of conditions that should be met to start any
new type of testing or to end any kind of testing.
Entry Condition:
Necessary resources must be ready.
The application must be prepared.
Test data should be ready.
Exit Condition:
There should not be any major bugs.
Most test cases should be passed.
When all test cases are executed.
Example: If the team member reports that 45% of the test cases failed, then testing
will be suspended until the developer team fixes all defects.
13. Test Automation: It consists of the features that are to be automated and which
features are not to be automated.
If a feature has lots of bugs, it is categorized for manual testing.
If a feature is frequently tested, it can be automated.
14. Effort Estimation: This involves planning the effort that needs to be applied by
every team member.
15. Test Deliverables: It is the outcome from the testing team that is to be given to the
customers at the end of the project.
Before the testing phase:
Test plan document.
Test case document.
Test design specification.
During the testing phase:
Test scripts.
Test data.
Error logs.
After the testing phase:
Test Reports.
Defect Report.
Installation Report.
It contains a test plan, defect report, automation report, assumption report, tools, and
other components that have been used for developing and maintaining the testing
effort.
16. Template: This is followed by every kind of report that is going to be prepared by
the testing team. All the test engineers will only use these templates in the project to
maintain the consistency of the product.
How to create a Test Plan:
Below are the eight steps that can be followed to write a test plan:
1. Analyze the product: This phase focuses on analyzing the product, Interviewing
clients, designers, and developers, and performing a product walkthrough. This stage
focuses on answering the following questions:
What is the primary objective of the product?
Who will use the product?
What are the hardware and software specifications of the product?
How does the product work?
2. Design the test strategy: The test strategy document is prepared by the manager
and details the following information:
Scope of testing which means the components that will be tested and the ones
that will be skipped.
Type of testing which means different types of tests that will be used in the
project.
Risks and issues that will list all the possible risks that may occur during testing.
Test logistics mentions the names of the testers and the tests that will be run by
them.
3. Define test objectives: This phase defines the objectives and expected results of
the test execution. Objectives include:
A list of software features like functionality, GUI, performance standards, etc.
The ideal expected outcome for every aspect of the software that needs testing.
4. Define test criteria: Two main testing criteria determine all the activities in the
testing project:
Suspension criteria: Suspension criteria define the benchmarks for suspending
all the tests.
Exit criteria: Exit criteria define the benchmarks that signify the successful
completion of the test phase or project. These are expected results and must
match before moving to the next stage of development.
5. Resource planning: This phase aims to create a detailed list of all the resources
required for project completion. For example, human effort, hardware and software
requirements, all infrastructure needed, etc.
6. Plan test environment: This phase is very important as the test environment is
where the QAs run their tests. The test environments must be real devices, installed
with real browsers and operating systems so that testers can monitor software
behavior in real user conditions.
7. Schedule and Estimation: Break down the project into smaller tasks and allocate
time and effort for each task. This helps in efficient time estimation. Create a
schedule to complete these tasks in the designated time with a specific amount of
effort.
8. Determine test deliverables: Test deliverables refer to the list of documents,
tools, and other equipment that must be created, provided, and maintained to support
testing activities in the project.
Deliverables required before testing | Deliverables required during testing | Deliverables required after testing
Test Case Review
Every test case should be checked to ensure the success and thoroughness of the test.
Here, we will discuss the following points:
1. Why Review Test Cases?
2. What is Test Case Repository?
3. Benefits of Test Case Repository.
4. Test Case Review Process.
5. Techniques of Test Case Review.
6. Tips While Reviewing Test Cases.
7. Factors to Consider During Test Case Review.
8. Common Mistakes During Test Case Review.
9. Classifying Defects in Review of the Test Cases.
Peer review should be done on any work product that is deemed a deliverable. Test
cases, which are important deliverables for the testing team, are included in this
category. It is critical to write effective test cases that successfully uncover as many
faults as possible during the testing process. As a result, a check is required to
determine whether:
Test cases are developed with the intention of detecting faults, and the
requirement is correctly understood.
Areas of potential impact are identified and put to the test.
The test data is accurate and covers every possible domain class, and both positive
and negative scenarios are present.
The expected behavior is accurately documented.
The test coverage is sufficient.
Keeping test cases organized pays off handsomely, particularly in medium to large
projects. Furthermore, testing is a procedure that can be repeated. Everyone benefits
from reusing test cases since it saves time. Large elements of projects can be
repeated for testing. Maintaining a test case repository allows one to reuse past test
resources as needed, which helps to save time. The good news is that keeping a
well-organized test case repository isn’t difficult at all. The quantity and variety of test
cases that constitute the basis of testing cycles are often linked to the success of a
software testing team. The assimilation of test cases may take a significant amount of
time and effort, with the main focus being on creating a complete test case repository
for each application. Test cases in the repository cover all essential permutations and
combinations in workflow execution and transaction, ensuring that all system and
user interactions are covered.
A test case repository is a centralized storage area for all baseline test cases
(authored, reviewed, and authorized).
When the client provides the requirements, the developer begins constructing the
modules, and the test engineer begins writing the test cases.
The authorized test cases are stored in a test case repository.
If a test engineer wishes to test the application, he or she must use the test case
repository to get the test case.
We can remove test cases from the test case repository if we don’t need them.
We have a separate test case repository for each version.
Without the authorization of the test lead, test cases cannot be modified or
changed once they have been baselined or saved in the test case repository.
If there is a crash that affects the software, the testing team always has a
complete backup of the test case repository.
Furthermore, because a test case repository grows over time, testers should keep
it up to date with each new version of the business application or software product. If
this is not done, it will lose synchronization with the software's actual functionality and
behavior over time, and the findings of subsequent QA cycles will suffer as a result.
The following are the list of activities involved in the review process:
1. Planning: This is the first phase and begins with the author requesting a
moderator for the review process. The moderator is responsible for scheduling the
date, time, place, and invitation of the review. The entry check is done to make sure
that the document is ready for review and does not have a large number of defects.
Once the document clears the entry check the author and moderator decide which
part of the document to be reviewed.
2. Kick-Off: This is an optional step in the review process. The goal is to give a short
introduction on the objectives of the review and documents to everyone in the
meeting.
3. Preparation: The reviewers review the document using the related documents,
procedures, rules, and checklists provided. Each participant while reviewing identifies
the defects, questions, and comments according to their understanding of the
document.
4. Review Meeting: The review meeting consists of three phases-
Logging phase: The issues and defects identified during the preparation phase
are logged page by page. The logging is done by the author or a scribe, where a
scribe is the person who does the logging. Every defect and its severity should be
logged.
Discussion phase: If any defects need discussion then they will be logged and
handled in this phase. The outcome of the discussion is documented for future
purposes.
Decision phase: A decision on the document under review has to be made by the
participants.
5. Rework: If the number of defects found per page exceeds a certain level then the
document has to be reworked.
6. Follow-Up: The moderator checks to make sure that the author has taken action
on all known defects.
Below are some of the tips to be kept in mind while reviewing the test cases:
In the review process, it’s best to stick to version numbers. For example, if
reviewing a test case plan for the first time, make it v.1. Once the tester has
completed all of the changes, re-review it and make it v.1.1. One will be able to tell
which one is the most recent this way, and there will be a complete record of the
plan’s changes.
It is always preferable to meet with the tester face to face to ensure that he fully
comprehends all of the review input.
If at all possible, run test cases on the SUT (System Under Test) to gain a better
understanding of the outcomes and actions involved in their execution.
It is preferable to have a copy of SRS/FRD with you while reading the test case for
reference.
If you are unsure about a test case or expected outcome, consult with the client or
your supervisor before making a decision.
During the review, the reviewer looks for the following in the test cases:
1. Template: The reviewer determines if the template meets the product’s
requirements.
2. Header: The following aspects will be checked in the header:
Whether or not all of the attributes are captured.
Whether or not all of the attributes are relevant.
Whether or not all of the attributes are filled in.
3. Body: Look at the following components in the test case’s body:
The test case should be written in such a way that the execution procedure takes
as little time as possible.
Whether or not all feasible conditions are covered.
Look for a flow that gives the maximum amount of test coverage.
Whether or not an appropriate test case design technique is used.
The test case should be easy to comprehend.
Test Case Name | Step No | Defect | Severity | Status | Comments
ST200 SNNC-XD007 | 45 | Invalid Parameter | Major | Fixed | --
NJ120 BKKL-PP330 | 18 | Unused Variable | Minor | Fixed | --

Test Case Name | Step No | Reviewer Comments | Author Comments

Module | Total | Executed | Passed | Failed | Pass % | Fail %
Production | 70 | 70 | 70 | 0 | 100% | 0%
This report was written by the test lead, and the test engineer submitted the particular
features that he or she had tested and implemented.
This report is sent to the following addresses by the test lead:
Development Team.
Management.
Test manager.
Customer.
The list of failed test cases is required by the development team. There is a list
of test case names, related statuses, and comments, as shown in the table below. The
data from the Sales test case is shown below:
Step Number | Test Case Name | Test Case Status | Comments
1 | ST100 CNNB-ET001 | Pass | --
2 | ST200 SNNC-XD007 | Pass | --
3 | ST200 SNNC-XD007 | Failed | Bug
Below are some common mistakes checked during the test case review process:
1. Spelling errors: Spelling errors can sometimes cause a lot of confusion or make
a statement difficult to grasp.
2. Replication of Test Cases: It relates to the reduction of redundant test cases. It’s
possible that two or more test cases are testing the same item and can be
combined into one, saving time and space.
3. Standard/Guidelines: It’s critical to examine whether all of the standards and
guidelines are being followed correctly during the review process.
4. Redundancy: When a test case is rendered obsolete owing to a change in
requirements or certain adjustments, it is referred to as redundant. These types
of test cases must be eliminated.
5. The manner used: Test cases should be written in a basic, easy-to-understand
language.
6. Grammar: If the grammar is incorrect, the test case can be misinterpreted,
leading to incorrect findings.
7. Format of Template: When a suitable template is followed, it is simple to
add/modify test cases in the future, and the test case plan appears orderly.
When these checklists are utilized consistently and problems are discovered, it is
recommended that the defects be classified into one of the following categories:
Incomplete test cases.
Missing negative test cases.
No test data.
Inappropriate/Incorrect test data.
Incorrect Expected behavior.
Grammatical problems.
Typos.
Inconsistent tense/voice.
Incomplete results/number of test runs.
Defect information was not recorded in the test case.
Defects could sneak into production if test cases aren’t thoroughly reviewed. As a
result, production issues could be reported, thereby impacting the Software’s quality.
Resolving problems at this time would be much more expensive than fixing them if
they had been discovered during the testing phase.
Let’s start discussing each of these topics in detail.
What is Requirement Traceability Matrix (RTM)?
RTM stands for Requirement Traceability Matrix. RTM maps all the requirements to
the test cases. By using this document one can verify that the test cases cover all
the functionality of the application as per the requirements of the customer.
Requirements: Requirements of a particular project from the client.
Traceability: The ability to trace the tests.
Matrix: The data which can be stored in rows and columns form.
The main purpose of the requirement traceability matrix is to verify that the all
requirements of clients are covered in the test cases designed by the testers.
In simple words, one can say it is a pen-and-paper approach, i.e. analyzing two sets
of information against each other, but here an Excel sheet is used to verify the data
in a requirement traceability matrix.
Why is Requirement Traceability Matrix (RTM) Important?
When business analysis people get the requirements from clients, they prepare a
document called SRS (System/Software Requirement Specification) and these
requirements are stored in this document. If we are working in the Agile model, we
call this document Sprint Backlog, and requirements are present in it in the form of
user stories.
When QA gets the SRS/Sprint backlog document they first try to understand the
requirements thoroughly and then start writing test cases and reviewing them with the
entire project team. But sometimes it may happen that in these test cases, some
functionality of requirements is missing, so to avoid it we required a requirement
traceability matrix.
Each test case is traced back to each requirement in the RTM. Therefore, there is
less chance of missing any requirement in testing, and 100% test coverage can be
achieved.
RTM helps users discover any change that was made to the requirements as well
as the origin of the requirement.
Using RTM, requirements can be traced to determine a particular group or person
that wanted that requirement, and it can be used to prioritize the requirement.
It helps to keep a check between requirements and other development artifacts
like technical and other requirements.
The Traceability matrix can help the tester identify whether by adding any
requirement previous requirements are affected or not.
RTM helps the QA team evaluate the scope for reusing test cases and the effect this has on testing effort.
Parameters of Requirement Traceability Matrix (RTM):
The following are the parameters to be included in RTM:
1. Requirement ID: The requirement ID is assigned to every requirement of the
project.
2. Requirement Description: For every requirement, a detailed description is given in
the SRS (System/Software Requirement Specification) document.
3. Requirement Type: The type of requirement, i.e. banking, telecom, healthcare,
traveling, e-commerce, education, etc.
4. Test Case ID: The testing team designs the test cases, and each test case is also
assigned an ID.
Types of Traceability Matrix:
There are 3 types of traceability matrix:
1. Forward traceability matrix
2. Backward traceability matrix
3. Bi-directional traceability matrix
1. Forward traceability matrix:
In the forward traceability matrix, the requirements are mapped to the test cases.
Here we can verify that all requirements are covered in test cases and no
functionality is missing from the test cases. It helps to ensure that all the requirements
available in the SRS/Sprint backlog can be traced to test cases designed by the
testers. It is used to check whether the project progresses in the right direction.
Requirement Traceability Matrix (RTM) Template:
The below figure shows the basic template of RTM. Here the requirement IDs are
row-wise and test case IDs are column-wise which means it is a forward traceability
matrix.
From the figure, it can be seen that:
For verifying requirement number 1, there are test cases number 1 and 7.
For requirement number 2, there are test cases number 2 and 10, and similarly, for
all the other requirements, there are test cases to verify them.
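A forward traceability matrix can be represented very simply in code. The sketch below uses hypothetical requirement and test case IDs (following the example above, where requirement 1 is covered by test cases 1 and 7) and flags any requirement with no test case.

```python
# Forward traceability: each requirement ID maps to the test case IDs that verify it.
rtm = {
    "REQ-1": ["TC-1", "TC-7"],
    "REQ-2": ["TC-2", "TC-10"],
    "REQ-3": [],            # hypothetical uncovered requirement
}

for requirement, test_cases in rtm.items():
    if test_cases:
        print(f"{requirement} is covered by {', '.join(test_cases)}")
    else:
        print(f"{requirement} has NO test cases -- coverage gap")
```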
Defect Tracking
Bugs in Software Testing
Software testing is the process of testing and verifying that a software product or
application is doing what it is supposed to do. The benefits of testing include
preventing defects, reducing development costs, and improving performance.
There are many different types of software testing, each with specific goals and
strategies. Some of them are below:
1. Acceptance Testing: Ensuring that the whole system works as intended.
2. Integration Testing: Ensuring that software components or functions work
together.
3. Unit Testing: To ensure that each software unit is operating as expected. The
unit is a testable component of the application.
4. Functional Testing: Evaluating activities by imitating business conditions, based
on operational requirements. Checking the black box is a common way to confirm
tasks.
5. Performance Testing: A test of how the software works under various operating
loads. Load testing, for example, is used to assess performance under real-life
load conditions.
6. Regression Testing: To check whether new changes break or degrade existing
functionality. Sanity checks can be used to verify menus, functions, and commands at
a high level when there is no time for a full regression test.
What is a Bug?
A bug is a defect or error in the software that causes it to produce an incorrect or
unexpected result, or to behave in an unintended way.
Below are the steps in the lifecycle of a bug in software testing:
1. Open: The developer begins analyzing the bug here and, where possible, works on
   fixing it. If the developer feels the bug is not genuine, it can, depending on the
   reason, be moved to one of four states: Rejected, Deferred, Not a Bug, or Duplicate.
2. New: This is the first state of a defect in the bug life cycle. When a new defect is
   discovered it is logged in this state, and validation and testing are performed on it in
   the later stages of the life cycle.
3. Assigned: At this level, a newly logged bug is handed over to the development team.
   It is assigned to a developer by the project lead or team manager.
4. Pending Retest: After fixing the bug, the developer hands it to the tester for
   rechecking, and the bug's status remains 'Pending Retest' until the tester starts
   working on verifying the fix.
5. Fixed: If the developer completes the debugging task by making the necessary
   changes, the bug status is marked as 'Fixed'.
6. Verified: If the tester finds no problem with the bug after the developer's fix is
   available on the test build, and considers it properly fixed, the bug status is set
   to 'Verified'.
7. Reopen: If the bug still exists, the tester asks the developer to check it again and
   the bug status is changed to 'Reopened'.
8. Closed: If the bug no longer exists, the tester changes the status of the bug to
   'Closed'.
9. Retest: The tester then begins the process of retesting the bug to check that it has
   been fixed by the developer as required.
10. Duplicate: If the developer finds the bug to be similar to another bug, or if the
   description of the bug matches any other bug, the status of the bug is changed by
   the developer to 'Duplicate'.
Few more stages to add here are:
1. Rejected: If the developer does not consider the bug to be a genuine defect, the
   status is marked as 'Rejected'.
2. Duplicate: If the developer finds the bug similar to any other bug, or if the
   description of the bug is similar to any other bug, the status of the bug is changed
   to 'Duplicate'.
3. Postponed: If the developer feels that the bug is not of high priority and can be
   fixed in the next release, the status of the bug can be changed to 'Postponed'.
4. Not a Bug: If the bug does not affect the functionality of the application, its status
   is changed to 'Not a Bug'.
Bug Report
1. Defect/ Bug Name: A short headline describing the defect. It should be specific and
accurate.
2. Defect/Bug ID: Unique identification number for the defect.
3. Defect Description: Detailed description of the bug including the information of the
module in which it was detected. It contains a detailed summary including the
severity, priority, expected results vs actual output, etc.
4. Severity: This describes the impact of the defect on the application under test.
5. Priority: This is related to how urgent it is to fix the defect. Priority can be High/
Medium/ Low based on the impact urgency at which the defect should be fixed.
6. Reported By: Name/ ID of the tester who reported the bug.
7. Reported On: Date when the defect is raised.
8. Steps: These include detailed steps along with the screenshots with which the
developer can reproduce the same defect.
9. Status: New/ Open/ Active
10. Fixed By: Name/ ID of the developer who fixed the defect.
11. Date Closed: Date when the defect is closed.
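For illustration, the fields above could be captured as a structured record; the sketch below is a hypothetical layout in Python, not a format mandated by any particular bug tracking tool.

```python
from dataclasses import dataclass, field
from datetime import date
from typing import Optional

@dataclass
class BugReport:
    """A hypothetical bug report record mirroring the fields listed above."""
    bug_id: str
    name: str
    description: str
    severity: str                     # e.g. Critical / Major / Minor
    priority: str                     # High / Medium / Low
    reported_by: str
    reported_on: date
    steps_to_reproduce: list = field(default_factory=list)
    status: str = "New"               # New / Open / Active ...
    fixed_by: str = ""
    date_closed: Optional[date] = None

report = BugReport(
    bug_id="BUG-101",
    name="Login button unresponsive on Firefox",
    description="Clicking Login does nothing in the Login module on Firefox.",
    severity="Major",
    priority="High",
    reported_by="tester_01",
    reported_on=date(2024, 1, 10),
    steps_to_reproduce=["Open the login page in Firefox",
                        "Enter valid credentials",
                        "Click the Login button"],
)
print(report.bug_id, report.status)
```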
Factors to be Considered while Reporting a Bug:
1. The whole team should clearly understand the different defect states before starting
   work with the defect life cycle.
2. To prevent future confusion, the defect life cycle should be well documented.
3. Make sure everyone who has any task related to the defect life cycle understands
   his or her responsibilities very clearly.
4. Everyone who changes the status of a defect should provide sufficient information
   about the nature of the change and the reason for it, so that everyone working on
   that defect can easily see the reason for its current status.
5. The defect tracking tool should be handled carefully throughout the defect life
   cycle to ensure consistency among defects.
Bug Tracking Tools
Below are some of the bug tracking tools–
1. KATALON TESTOPS: Katalon TestOps is a free, powerful orchestration platform
that helps with your process of tracking bugs. TestOps provides testing teams and
DevOps teams with a clear, linked picture of their testing, resources, and locations to
launch the right test, in the right place, at the right time.
Features:
Available for cloud and desktop: Windows and Linux.
Compatible with almost all testing frameworks available (Jasmine, JUnit, Pytest,
Mocha, etc.), CI/CD tools (Jenkins, CircleCI), and management platforms (Jira, Slack).
Track real-time data for error correction, and for accuracy.
Live and complete performance test reports to determine the cause of any
problems.
Plan well with Smart Scheduling to prepare for the test cycle while maintaining
high quality.
Rate release readiness to improve release confidence.
Improve collaboration and enhance transparency with comments, dashboards,
KPI tracking, possible details – all in one place.
2. KUALITEE: Kualitee provides collection of test results and solid failure analysis in
any framework. It is meant for development and QA teams that want to look beyond the
mere allocation and tracking of bugs. It allows you to build high-quality software with
fewer bugs, fast QA cycles, and better control of your builds. The comprehensive suite
combines all the functions of a good defect management tool and has test case and
test workflow management built into it seamlessly. You do not need to combine and
match different tools; instead, you can manage all your tests in one place.
Features:
Create, assign, and track defects.
Traceability between defects, requirements, and tests.
Easy-to-use management of defects, test cases, and test cycles.
Custom permissions, fields, and reporting.
Interactive and informative dashboard.
Third-party integrations and REST API.
An intuitive and easy-to-use interface.
3. QACoverage: QACoverage is the place to go for successfully managing all your
testing processes so that you can produce high-quality and trouble-free products. It
has a defect management module that allows you to manage defects from the first
identification phase until closure. The defect tracking process can be customized and
tailored to the needs of each client. In addition to defect tracking, QACoverage has
the ability to track risks, issues, enhancements, suggestions, and recommendations.
It also has full capabilities for complex test management solutions, including
requirements management, test case design, test case execution, and reporting.
Features:
1. Control the overall workflow of a variety of tickets, including risk, issues, tasks,
   and development management.
2. Produce complete metrics to identify the causes and levels of severity.
3. Support a variety of information that supports the defect, with email attachments.
4. Create and set up a workflow for enhanced test visibility with automatic
   notifications.
5. Graphical reports based on severity, priority, defect type, defect category,
   expected fix date, and much more.
4. BUG HERD: BugHerd is an easy way to track bugs and to collect and manage webpage
feedback. Your team and clients pin feedback to web pages, so they can point at the
exact problem. BugHerd also captures the information you need to replicate and
resolve bugs quickly, such as the browser, CSS selector data, operating system, and
a screenshot. Bugs and feedback, along with the technical information, are
submitted to a Kanban-style task board, where bugs can be assigned and
managed until they are eliminated. BugHerd can also integrate with your existing
project management tools, helping to keep your team on the same page with bug
fixes.
States of Defects in the Defect Life Cycle
1. New: When any new defect is identified by the tester, it falls into the 'New' state. It is
the first state of the bug life cycle. The tester provides a proper defect document to
the development team so that the development team can refer to the defect document
and fix the bug accordingly.
2. Assigned: Defects that are in the status of ‘New’ will be approved and that newly
identified defect is assigned to the development team for working on the defect and
to resolve that. When the defect is assigned to the developer team the status of the
bug changes to the ‘Assigned’ state.
3. Open: In this ‘Open’ state the defect is being addressed by the developer team and
the developer team works on the defect for fixing the bug. Based on some specific
reason if the developer team feels that the defect is not appropriate then it is
transferred to either the ‘Rejected’ or ‘Deferred’ state.
4. Fixed: After necessary changes of codes or after fixing identified bug developer
team marks the state as ‘Fixed’.
5. Pending Retest: Once the fixing of the defect is completed, the developer team
passes the new code to the testing team for retesting. Since the code/application is
pending retest on the tester's side, the status is set to 'Pending Retest'.
6. Retest: At this stage, the tester starts work of retesting the defect to check whether
the defect is fixed by the developer or not, and the status is marked as ‘Retesting’.
7. Reopen: After ‘Retesting’ if the tester team found that the bug continues like
previously even after the developer team has fixed the bug, then the status of the bug
is again changed to ‘Reopened’. Once again bug goes to the ‘Open’ state and goes
through the life cycle again. This means it goes for Re-fixing by the developer team.
8. Verified: The tester re-tests the bug after it got fixed by the developer team and if
the tester does not find any kind of defect/bug then the bug is fixed and the status
assigned is ‘Verified’.
9. Closed: It is the final state of the Defect Cycle, after fixing the defect by the
developer team when testing found that the bug has been resolved and it does not
persist then they mark the defect as a ‘Closed’ state.
Few More States that also come under this Defect Life Cycle:
1. Rejected: The developer team rejects a defect if they feel that the defect is not a
genuine defect, and they then mark the status as 'Rejected'. The cause of rejection may
be any of these three, i.e. Duplicate Defect, Not a Defect, or Non-Reproducible.
2. Deferred: All defects have a bad impact on the developed software, and they are
ranked based on that impact. If the developer team feels that the identified defect is
not a prime priority and can be fixed in a later update or release, then the developer
team can mark the status as 'Deferred'. This means it is terminated from the current
defect life cycle.
3. Duplicate: Sometimes it may happen that the defect is repeated twice or the defect
is the same as any other defect then it is marked as a ‘Duplicate’ state and then the
defect is ‘Rejected’.
4. Not a Defect: If the defect has no impact or effect on other functions of the software
then it is marked as ‘NOT A DEFECT’ state and ‘Rejected’.
5. Non-Reproducible: If the defect is not reproduced due to platform mismatch, data
mismatch, build mismatch, or any other reason then the developer marks the defect
as in a ‘Non-Reproducible’ state.
6. Can’t be Fixed: If the developer team fails to fix the defect due to lack of technology
support, the cost of fixing the bug being too high, lack of required skills, or any other
reason, then the developer team marks the defect as being in the ‘Can’t be Fixed’ state.
7. Need more information: This state is very close to the ‘Non-reproducible’ state. But
it is different from that. When the developer team fails to reproduce the defect due to
the steps/document provided by the tester being insufficient or the Defect Document
is not so clear to reproduce the defect then the developer team can change the
status to “Need more information’. When the Tester team provides a good defect
document the developer team proceeds to fix the bug.
Defect Lifecycle
Consider the flow chart below to understand the defect lifecycle.
1. The tester finds a defect.
2. The defect status is changed to New.
3. The development manager will then analyze the defect.
4. The manager determines whether the defect is valid or not.
5. If the defect is not valid, the development manager assigns the status Rejected
to the defect.
6. If the defect is valid, it is checked whether the defect is in scope or not. If not,
the defect status is changed to Deferred.
7. If the defect is in scope, the manager checks whether a similar defect was
raised earlier. If yes, the defect is assigned the status Duplicate.
8. If the defect has not already been raised, it is assigned to a developer who starts
fixing the code, and the defect is assigned the status In-Progress.
9. Once the defect is fixed, the status is changed to Fixed.
10. The tester retests the code; if the test passes, the defect status is
changed to Closed.
11. If the test fails again then the defect is assigned status reopened and assigned
to the developer.
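The flow above can be sketched as a simple state transition table; the transitions mirror the steps listed, and the helper function is purely illustrative.

```python
# Allowed defect status transitions, following the flow described above.
TRANSITIONS = {
    "New": ["Rejected", "Deferred", "Duplicate", "In-Progress"],
    "In-Progress": ["Fixed"],
    "Fixed": ["Closed", "Reopened"],
    "Reopened": ["In-Progress"],
}

def move(defect: dict, new_status: str) -> None:
    """Change a defect's status only if the transition is allowed."""
    allowed = TRANSITIONS.get(defect["status"], [])
    if new_status not in allowed:
        raise ValueError(f"Cannot move from {defect['status']} to {new_status}")
    defect["status"] = new_status

defect = {"id": "DEF-7", "status": "New"}
for status in ["In-Progress", "Fixed", "Reopened", "In-Progress", "Fixed", "Closed"]:
    move(defect, status)
    print(defect["id"], "->", defect["status"])
```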
Limitations in Defect Lifecycle
Variations of the Bug Life Cycle
No Control on Test Environment
Features | Severity | Priority
Definition | Severity is a parameter to denote the impact of a particular defect on the software. | Priority is a parameter to decide the order in which defects should be fixed.
Relation | Severity is related to the quality standard. | Priority is related to scheduling to resolve the problem.
Value change | Its value doesn't change from time to time. | Its value changes from time to time.
Driving factor | It is driven by functionality. | It is driven by business value.
Defect triage is the process of prioritizing defects based on factors such as severity,
risk, and business impact, so the team can decide the order in which defects should be
fixed. It is mainly used in agile project management. The frequency of defect triage
meetings depends on a number of factors:
Project schedule.
Overall project health.
The number of bugs in the system.
Impact on schedules of team members’ availability.
The defect triage process can be summarized as:
1. Defect Review: This step involves reviewing all the defects including the defects
that were rejected by the team.
2. Defect Assessment: This step involves an initial assessment of the defects
based on the content and respective priority and severity settings.
3. Defect Assignment: This involves prioritizing the defects, and assigning the
defects to the correct release by the product manager.
The staging environment is a copy of the production environment for software testing.
This is used before the actual deployment of the software so that final tests can be
executed.
Importance of Test Environment
Knowing about the quality and functionality of applications in a test environment is
very important, because it provides a dedicated environment in which to isolate the
code and examine the application so that other activities have no impact on the output
of the tests running on the server. In addition, a test environment can mimic the
production environment. Below are some of the benefits of using a test environment:
Identify bugs: The test environment facilitates the tester to identify the bugs in the
application and find a solution to fix those bugs.
Provide a standardized environment: The test environment provides a standardized
environment that helps to validate the application behavior and the application can
be tested securely by the tester.
Helps to launch the secure application: Properly configured test environments help to
search for vulnerabilities and launch a secure and tested application.
Helps provide precise feedback: The test environments help to provide precise
feedback about the features and functionality of the application.
Types of Test Environment
Below are the different types of test environments:
1. Integration Test Environment: In this environment, software modules are integrated,
and integrated behavior is verified. In this environment, one, two, or many
modules can be integrated, and functional testing can be used to verify the
behavior and correctness of the application. It should imitate the production
environment closely.
2. Performance Test Environment: The performance environment tells how well a system
will perform based on goals like throughput, stability, response time, etc. The
setup here is quite complex as it requires very selective choice and infrastructure
configuration. Performance testing needs to be run in different environments with
distinct configurations by varying the size of RAM, the volume of data, etc.
Performance testing is time-consuming and expensive.
3. Security Test Environment: While using a security test environment, the testing team
tries to ensure that the software is free of security flaws in confidentiality, integrity,
and authenticity. Setting up a secure testing environment requires ensuring that the system is not left unattended, that the test environment is isolated, and that production data is not touched.
4. Chaos Test Environment: Here the main aim is to find a specific area that can cause
the application to fail before the application can lead to negative user feedback.
After identifying the area, the tester tries to fix it.
Key Areas to Set up Test Environment
A stable test environment allows the tester to conduct tests efficiently and results in
consistent performance from the application under test. Below are the key areas to
set up the test environment for the testers to execute the test cases:
Database: This is one of the most important software applications to be present in
the test environment. It can be client-server, mobile, application, or any other. The
database is needed at every part of the backend.
Operating System: This is the program that is loaded into the system and manages
every application on the system. This includes the Client operating system and
server operating system.
Network protocol: These are the network configurations required by the software,
that need to be set up according to the requirements of the application. Different
applications have different requirements like wireless networks, LAN, or private
networks for testing.
Test Data: This is also one of the most important elements. Complete, accurate,
and consistent test data is very important for testers to design effective test cases.
Manual testers: Manual testers will check the application quality and conduct the
test cases manually.
Automation testers: These are the developers working on programming, designing,
and testing any new or existing software. They use automated testing tools to
automate the test case generation and execution.
Documentation: Documentation like configuration guides, installation guides, and
user manuals are required to understand the system and design appropriate test
cases.
Process for Setup of Test Environment
System admins, developers, and testers are some of the people that are involved in
the testing of the application. The test environment involves setting up different areas
like:
1. Test Server: Not every application can be tested on a local machine; some may require setting up a test server, for example, a Fedora server setup for Java-based applications.
2. Network: A network setup like LAN, CAN, or any wireless medium to fulfill the
requirement of the internet. It ensures that the congestion during testing does not
affect other members of the team like developers, designers, etc.
3. PC setup: PC setup may include setting up different browsers for different testers
or different OS for different testers.
4. Bug Reporting: A bug reporting tool should be included in the test environment for
bug reporting.
5. Test Tool: A test tool setup to perform automation testing.
6. Test Data: The common approach is to copy the production data to test. This helps
the tester to detect the same issues without corrupting the production data.
Privacy is the main concern when using production data. To overcome it, use obfuscated and anonymized test data, as sketched below.
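A minimal sketch of such obfuscation is shown here; the field names and the masking scheme are illustrative assumptions only.

```python
import hashlib

def anonymize_record(record):
    """Return a copy of a production record with PII masked for test use."""
    masked = dict(record)
    # Replace the email with a stable pseudonym so joins across tables still work.
    digest = hashlib.sha256(record["email"].encode()).hexdigest()[:8]
    masked["email"] = f"user_{digest}@example.test"
    masked["name"] = "Test User"
    return masked

prod_row = {"id": 42, "name": "Jane Doe", "email": "jane.doe@example.com", "plan": "gold"}
print(anonymize_record(prod_row))
```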
Test Environment Management
Test Environment Management mainly deals with the maintenance and updating of
test beds. Some of the activity involved in the functioning of Test Environment
Management includes:
Always maintain the test environment with its recent version.
Assigning the test environment to respective teams as per their requirement.
Continuous monitoring of test environments.
Removing the outdated test environments and their tools, techniques, and other
details.
Identifying test environment issues and resolving those issues.
Frequent improvement to continuously and effectively evaluate the test
environments.
Enable automation to reduce manual activities for improving efficacy.
Challenges in Setting Up Test Environment
Below are some of the challenges faced during setting up the test environment:
1. Planning resource utilization: Effective planning of resource utilization is very important, as it may impact the results and can lead to conflicts between the teams. Inefficient management and use of test resources can derail the testing process.
2. Dependency on external environment: There are scenarios where the test
environment depends on the external environment. In such cases, the testing
team has to rely on the support team for various test assets like hardware,
software, etc.
3. Remote test environment: In cases where the test environment is located geographically far away, the testing team has to rely on the support team for the test assets.
4. Collaboration between teams: There is a possibility that the test results are not
accurate in the cases where the test environment is shared between the different
teams.
5. Setting up complex tests: Some of the tests require extensive test environment
configuration. The team may need to consider factors like time and resources to
conduct complex tests.
Best Practices to Set up Test Environment
Below are some of the best practices that can be followed for setting up the test
environment:
1. Software requirements: It is a good practice to recognize the software requirements
of the test environment carefully and make sure that all the software that is
already available is compatible with the test environment.
2. Hardware requirements: It is important to make a list of the required hardware
components and if any hardware installations are done then test them before
setting up the test environment.
3. Tools: Check for automation tools and their configurations. All the necessary tools
must be available for debugging, defect reporting, etc.
4. Availability of test data: It is vital to check the availability of the test data and to
ensure whether the test data is available in production or needs to be created.
3. Provide Valuable Metrics: DMP also provides valuable defect metrics along with
automation tools. These defect metrics help in reporting and continuous
improvements.
4. Improved software quality – By identifying and resolving defects, the software will
perform as intended and be of higher quality.
5. Increased efficiency – The Defect Management Process provides a systematic
approach to managing defects, leading to a more efficient use of resources and
faster resolution of defects.
6. Better collaboration – The Defect Management Process facilitates communication
and collaboration among different teams, such as development, testing, and
management, leading to a more cohesive and effective development process.
7. Improved visibility – The Defect Management Process provides regular reports on
the status of defects, giving stakeholders visibility into the development process
and helping to ensure that defects are being resolved in a timely manner.
8. Better tracking – The Defect Management Process provides a centralized system
for tracking and managing defects, making it easier to track the progress of defect
resolution and ensure that defects are not forgotten.
Disadvantages of DMP :
1. If DMP is not handled properly, there will be a huge creeping increase in cost, i.e., an increase in the price of the product.
2. If errors or defects are not managed properly at an early stage, then the defect might later cause greater damage, and the cost to fix or resolve it will also increase.
3. There are other disadvantages as well, such as loss of revenue, loss of customers, and damaged brand reputation, if DMP is not done properly.
4. Overhead – The Defect Management Process requires a significant amount of
overhead, including time spent logging and triaging defects, and managing the
defect tracking system.
5. Resource constraints – The Defect Management Process may require a significant
amount of resources, including personnel, hardware, and software, which may be
challenging for smaller organizations.
6. Resistance to change – Some stakeholders may resist the Defect Management
Process, particularly if they are used to a more informal approach to managing
defects.
7. Dependence on technology – The Defect Management Process relies on technology,
such as a defect tracking system, to manage defects. If the technology fails, the
process may be disrupted, leading to delays and inefficiencies.
8. Lack of standardization – Without a standard approach to Defect Management,
different organizations may have different processes, leading to confusion and
inefficiencies when working together on software development projects.
Tools for regression testing:
In regression testing, we generally select the test cases from the existing test suite itself and hence do not need to compute their expected output again; for this reason, regression testing can be easily automated. Automating the regression testing process is very effective and time-saving. The most commonly used tools for regression testing are:
Selenium
WATIR (Web Application Testing In Ruby)
QTP (Quick Test Professional)
RFT (Rational Functional Tester)
WinRunner
SilkTest
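As an illustration, a regression check can be automated with Selenium WebDriver (the first tool listed above). The sketch below re-runs a login scenario whose expected outcome is already known; the URL and element IDs are placeholders, not a real application.

```python
# A regression check automated with Selenium WebDriver (Python bindings).
# Assumes Chrome and a matching driver are installed; URL and IDs are illustrative.
from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()
try:
    driver.get("https://example.test/login")
    driver.find_element(By.ID, "username").send_keys("demo_user")
    driver.find_element(By.ID, "password").send_keys("demo_pass")
    driver.find_element(By.ID, "login-button").click()

    # The expected outcome is already known from the existing test suite.
    assert "Dashboard" in driver.title, "Regression: login no longer reaches the dashboard"
finally:
    driver.quit()
```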
Advantages of Regression Testing:
It ensures that no new bugs have been introduced after adding new functionalities to the system.
Most of the test cases used in regression testing are selected from the existing test suite, and we already know their expected outputs; hence, regression testing can be easily automated with automation tools.
It helps to maintain the quality of the source code.
Disadvantages of Regression Testing:
It can be time and resource consuming if automated tools are not used.
It is required even after very small changes in the code.
Hybrid smoke testing combines manual checks with tools: the tester writes the test cases and can also automate them using a tool. It increases the performance of the testing as it combines both manual checking and tools.
Applying Smoke Testing at different levels:
It is applicable at 3 levels of testing. They are
Acceptance Testing Level
System Testing Level
Integration testing Level
Tools used for Smoke Testing:
Selenium
PhantomJS
These tools are used while implementing the automated test cases.
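Independent of the tools above, the idea of a smoke test can be sketched in a few lines: hit the most critical endpoints of a fresh build and fail fast if any of them does not respond. The base URL and endpoints below are assumptions for illustration only.

```python
# Minimal smoke test: verify that the critical endpoints of a new build respond.
import requests

BASE_URL = "https://staging.example.test"   # hypothetical build under test
CRITICAL_ENDPOINTS = ["/", "/login", "/health"]

def run_smoke_test():
    for path in CRITICAL_ENDPOINTS:
        response = requests.get(BASE_URL + path, timeout=5)
        # Any critical endpoint failing means the build is not stable enough
        # to proceed with deeper testing.
        assert response.status_code == 200, f"Smoke test failed for {path}"
    print("Smoke test passed: build is stable enough for further testing")

if __name__ == "__main__":
    run_smoke_test()
```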
Advantages of Smoke Testing:
1. Smoke testing is easy to perform.
2. It helps in identifying defects in the early stages.
3. It improves the quality of the system.
4. Smoke testing reduces the risk of failure.
5. Smoke testing makes progress easier to assess.
6. It saves test effort and time.
7. It makes it easy to detect critical errors and helps in the correction of errors.
8. It runs quickly.
9. It minimizes integration risks.
Disadvantages of Smoke Testing:
1. Smoke Testing does not cover all the functionality in the application. Only a
certain part of the testing is done.
2. Errors may occur even after implementing all the smoke tests.
3. In the case of manual smoke testing, it takes a lot of time to execute the testing
process for larger projects.
4. It is not performed with negative tests or invalid input.
5. It usually consists of a minimal number of test cases and hence cannot find other issues that arise during the testing process.
Important Points:
1. Smoke testing is a type of software testing performed early in the development
process
2. The goal is to quickly identify and fix major issues with the software
3. It tests the most critical functions of the application
4. Helps to determine if the build is stable enough to proceed with further testing
5. It is also known as Build Verification Testing or Build Acceptance Testing.
References:
Several reference books provide information on smoke testing and software testing in
general. Some popular ones include:
1. “Effective Software Testing: 50 Specific Ways to Improve Your Testing” by Elfriede
Dustin
2. “Software Testing: A Guide to the TMap® Approach” by Joost Schouten
3. “Testing Computer Software” by Cem Kaner, Jack Falk, Hung Q. Nguyen
4. “A Practitioner’s Guide to Software Test Design” by Lee Copeland
5. “Agile Testing: A Practical Guide for Testers and Agile Teams” by Lisa Crispin,
Janet Gregory
These books provide detailed information on various testing methodologies,
techniques, and best practices and are considered good references for software
testing professionals and students.
Characteristics of Sanity Testing:
Narrow and deep: In software testing, sanity testing is a narrow and deep approach that covers a limited set of components in depth.
A subset of regression testing: Sanity testing is a subset of regression testing that mainly focuses on the less critical units of the application. It is used to check whether new features of the application match the requirements or not.
Unscripted: Sanity testing is commonly unscripted.
Not documented: Sanity testing is usually not documented.
Performed by testers: Sanity testing is done by the test engineers.
Sanity Testing Process:
1. Review: In static testing, a review is a process or technique that is performed to find the potential defects in the design of the software. It is a process to detect and remove errors and defects from the different supporting documents, such as software requirements specifications. People examine the documents and sort out errors, redundancies, and ambiguities. A review is of four types:
Informal: In an informal review, the creator of the documents puts the contents in front of an audience and everyone gives their opinion, and thus defects are identified at an early stage.
Walkthrough: It is basically performed by an experienced person or expert to check for defects so that there are no problems later in the development or testing phase.
Peer review: Peer review means checking one another’s documents to detect and fix defects. It is basically done within a team of colleagues.
Inspection: Inspection is basically the verification of a document by a higher authority, for example, the verification of the software requirements specification (SRS).
2. Static Analysis: Static analysis includes the evaluation of the quality of the code written by developers. Different tools are used to analyze the code and compare it with the standard. It also helps in the identification of the following defects:
(a) Unused variables
(b) Dead code
(c) Infinite loops
(d) Variable with undefined value
(e) Wrong syntax
Static Analysis is of three types:
Data Flow: Data flow is related to the stream processing.
Control Flow: Control flow is basically how the statements or instructions are
executed.
Cyclomatic Complexity: Cyclomatic complexity defines the number of independent paths in the control flow graph made from the code or flowchart, so that a minimum number of test cases can be designed to cover each independent path.
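The short function below deliberately contains the kinds of defects listed above (an unused variable, dead code after a return, and two decision points giving a cyclomatic complexity of 3); a static analysis tool or linter would flag these without ever executing the code. The function itself is purely illustrative.

```python
def apply_discount(price, is_member):
    unused_rate = 0.5              # unused variable: assigned but never read
    if price < 0:
        return 0
    if is_member:
        return price * 0.9
    return price
    print("done")                  # dead code: statement after return is unreachable

# Cyclomatic complexity here is 3 (two decision points + 1), so at least
# three test cases are needed to cover every independent path:
print(apply_discount(-5, False))   # path 1: negative price
print(apply_discount(100, True))   # path 2: member discount
print(apply_discount(100, False))  # path 3: no discount
```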
Objectives of Dynamic Testing
1. Find errors and bugs: Through comprehensive testing, find and expose flaws,
faults, or defects in the software code and its functionality so that they can be
fixed as soon as possible.
2. Verify the behavior of the system: Verify that the software operates as expected
and complies with company requirements, industry or regulatory standards, user
expectations, and any applicable business regulations.
3. Assessing Performance: To make sure the software satisfies performance
requirements, evaluate its performance by monitoring reaction times, throughput,
and use of resources under various scenarios.
4. Assure Trustworthiness: Examine the software’s dependability by determining
how well it performs regularly under typical operating conditions, free of
unexpected faults or crashes.
5. Accuracy of Test Data: Verify the precision and consistency of the data handled
by the software to guarantee reliable and uniform information handling.
6. Assess Scalability: Examine whether the application can grow to handle more
users, workloads, or data volumes without seeing an obvious decline in
performance.
Levels of Dynamic Testing
Several levels of dynamic testing are commonly used in the software development
process, including:
1. Unit testing: Unit testing is the process of testing individual software components
or “units” of code to ensure that they are working as intended. Unit tests are
typically small and focus on testing a specific feature or behavior of the software.
2. Integration testing: Integration testing is the process of testing how different
components of the software work together. This level of testing typically involves
testing the interactions between different units of code, and how they function
when integrated into the overall system.
3. System testing: System testing is the process of testing the entire software
system to ensure that it meets the specified requirements and is working as
intended. This level of testing typically involves testing the software’s functionality,
performance, and usability.
4. Acceptance testing: Acceptance testing is the final stage of dynamic testing,
which is done to ensure that the software meets the needs of the end-users and is
ready for release. This level of testing typically involves testing the software’s
functionality and usability from the perspective of the end-user.
5. Performance testing: Performance testing is a type of dynamic testing that is
focused on evaluating the performance of a software system under a specific
workload. This can include testing how the system behaves under heavy loads,
how it handles a large number of users, and how it responds to different inputs
and conditions.
6. Security testing: Security testing is a type of dynamic testing that is focused on
identifying and evaluating the security risks associated with a software system.
This can include testing how the system responds to different types of security
threats, such as hacking attempts, and evaluating the effectiveness of the
system’s security features.
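The first of these levels is the easiest to make concrete. The sketch below is a minimal unit-level dynamic test (pytest is assumed as the runner): the code is actually executed with sample inputs and the observed output is compared with the expected output.

```python
# test_discount.py -- a unit-level dynamic test: the code is executed with
# sample inputs and its observed output is compared to the expected output.
import pytest

def apply_discount(price, percent):
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

def test_typical_discount():
    assert apply_discount(200.0, 25) == 150.0

def test_zero_discount():
    assert apply_discount(99.99, 0) == 99.99

def test_invalid_percent():
    with pytest.raises(ValueError):
        apply_discount(50.0, 150)

# Run with: pytest test_discount.py
```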
Dynamic Testing Process Phase
1. Test Case Design: It defines the test objectives, scope, and criteria. It defines test data and expected outcomes and develops test cases based on requirements and specifications. It generates test cases that address various program features.
2. Test Environment Setup: It sets up the settings and infrastructure required for testing. It configures the network, hardware, and software in the test environment. Additionally, it makes sure that the test environment matches the production environment by installing and configuring the required test tools and test harnesses.
3. Test Case Execution: Using the specified test data, it runs the test cases in order
to verify the software’s behavior. It keeps track of and logs the actual outcomes,
comparing them with the predicted results to find any differences. It runs test
scenarios in both positive and negative modes.
4. Test Analysis: It evaluates the general behavior of the system and finds faults by
analyzing the test case outcomes. Any inconsistencies or flaws discovered during
test execution are documented and reported. It works along with development
teams to figure out and address concerns that are reported.
Advantages of Dynamic Testing
1. Disclosure of Difficult and Complex Defects: It discloses very difficult and
complex defects.
2. Improvement in Software Quality: It increases the quality of the software
product or application being tested.
3. Security Threat Detection: Dynamic testing detects security threats and ensures a more secure application.
4. Early-Stage Functionality Testing: It can be used to test the functionality of the
software at the early stages of development.
5. Ease of Implementation: It is easy to implement and does not require any
special tools or expertise.
6. Testing with Different Inputs, Data Sets, and User Profiles: It can be used to
test the software with different input values, data sets and user profiles.
7. Functionality and Performance Testing: It can be used to test the functionality
of the code and performance of the code.
Disadvantages of Dynamic Testing
1. Time-Consuming Process: It is a time-consuming process, as in dynamic testing the whole code is executed.
2. Increased Budget: It increases the budget of the software as dynamic testing is
costly.
3. Resource Intensive: Dynamic testing may require more resources than static
testing.
4. Less Effective in Some Cases: Dynamic testing may be less effective than static
testing in some cases.
5. Incomplete Test Scenario Coverage: It is difficult to cover all the test scenarios.
6. Difficulty in Root Cause Analysis: It is difficult to find out the root cause of the
defects.
Important Points:
Some important points to keep in mind when performing dynamic testing include:
1. Defining clear and comprehensive test cases: It is important to have a clear set
of test cases that cover a wide range of inputs and use cases. This will help to
ensure that the software is thoroughly tested and any issues are identified and
addressed.
2. Automation: Automated testing tools can be used to quickly and efficiently
execute test cases, making it easier to identify and fix any issues that are found.
3. Performance testing: It’s important to evaluate the software’s performance under
different loads and conditions to ensure that it can handle the expected usage and
the expected number of users.
4. Security testing: It is important to identify and evaluate the security risks
associated with a software system, and to ensure that the system is able to
withstand different types of security threats.
5. Defect tracking: A defect tracking system should be implemented to keep track of
any issues that are identified during dynamic testing, and to ensure that they are
addressed and resolved in a timely manner.
6. Regular testing: It’s important to regularly perform dynamic testing throughout
the software development process, to ensure that any issues are identified and
addressed as soon as they arise.
7. Test-Driven Development: It’s important to design and implement test cases
before the actual development starts, this approach ensures that the software
meets the requirements and is thoroughly tested.
Other related types of performance testing include:
Stress testing: Testing the system’s ability to handle a high load above normal usage levels
Spike testing: Testing the system’s ability to handle sudden spikes in traffic
Soak testing: Testing the system’s ability to handle a sustained load over a
prolonged period of time
Tools such as Apache JMeter, LoadRunner, Gatling, and Grinder can be used to
simulate load and measure system performance. It’s important to ensure that the
load testing is done in an environment that closely mirrors the production
environment to get accurate results.
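Real load tests are normally driven by tools such as JMeter or Gatling, but the underlying idea of concurrent virtual users can be sketched in plain Python. The URL and user counts below are placeholders for illustration only.

```python
# A very small load-generation sketch: N concurrent "virtual users" each send
# requests and record their response times. Real load tests use dedicated tools;
# the URL below is a placeholder.
import time
from concurrent.futures import ThreadPoolExecutor
import requests

URL = "https://staging.example.test/"
VIRTUAL_USERS = 20
REQUESTS_PER_USER = 10

def virtual_user(_):
    timings = []
    for _ in range(REQUESTS_PER_USER):
        start = time.perf_counter()
        ok = False
        try:
            ok = requests.get(URL, timeout=10).status_code == 200
        except requests.RequestException:
            pass
        timings.append((time.perf_counter() - start, ok))
    return timings

with ThreadPoolExecutor(max_workers=VIRTUAL_USERS) as pool:
    results = [t for user in pool.map(virtual_user, range(VIRTUAL_USERS)) for t in user]

errors = sum(1 for _, ok in results if not ok)
print(f"requests: {len(results)}, error rate: {errors / len(results):.1%}")
```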
Objectives of Load Testing: The objective of load testing is:
To maximize the operating capacity of a software application.
To determine whether the latest infrastructure is capable of running the software application or not.
To determine the sustainability of the application with respect to extreme user load.
To find out the total count of users that can access the application at the same
time.
To determine scalability of the application.
To allow more users to access the application.
Load Testing Process:
1. Test Environment Setup: Firstly create a dedicated test environment setup for
performing the load testing. It ensures that testing would be done in a proper way.
2. Load Test Scenario: In second step load test scenarios are created. Then load
testing transactions are determined for an application and data is prepared for
each transaction.
3. Test Scenario Execution: The load test scenarios that were created in the previous step are now executed. Different measurements and metrics are gathered to collect the information.
4. Test Result Analysis: The results of the testing performed are analyzed and various recommendations are made.
5. Re-test: If the test fails, then the test is performed again in order to get the correct result.
Metrics of Load Testing :
Metrics are used in knowing the performance of load testing under different
circumstances. It tells how accurately the load testing is working under different test
cases. It is usually carried out after the preparation of load test scripts/cases. There
are many metrics to evaluate the load testing. Some of them are listed below.
1. Average Response Time: It tells the average time taken to respond to requests generated by the clients, customers, or users. It also shows the speed of the application, depending upon the time taken to respond to all the requests generated.
2. Error Rate: The error rate, expressed as a percentage, denotes the number of errors that occurred during the requests relative to the total number of requests. These errors are usually raised when the application can no longer handle the request at the given time, or because of some other technical problem. The application becomes less efficient as the error rate keeps increasing.
3. Throughput: This metric is used to know the amount of bandwidth consumed during the load scripts or tests and the amount of data that flows between the user’s client and the application server while handling requests. It is measured in kilobytes per second.
4. Requests Per Second: It tells how many requests are being sent to the application server per second. The requests could be for anything, such as images, documents, web pages, articles, or any other resources.
5. Concurrent Users: This metric counts the users who are actively present at a particular time. It keeps track of those who are visiting the application at any given moment, even without raising any request in the application. From this, we can easily know at which times a high number of users are visiting the application or website.
6. Peak Response Time: Peak response time measures the longest time taken to handle a request. It also helps in finding the duration of the peak (longest) request-and-response cycle and in finding which resource is taking longer to respond to the request.
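Given the raw samples recorded during a run (one record per request with its response time, bytes received, and success flag), most of the metrics above can be computed directly. A minimal sketch with made-up sample values:

```python
# Compute common load-testing metrics from per-request samples.
# Each sample: (response_time_seconds, bytes_received, succeeded)
samples = [(0.21, 5_120, True), (0.35, 4_096, True), (1.80, 0, False), (0.27, 5_120, True)]
test_duration_seconds = 2.0   # wall-clock duration of the load test (example value)

response_times = [t for t, _, ok in samples if ok]
average_response_time = sum(response_times) / len(response_times)
peak_response_time = max(t for t, _, _ in samples)
error_rate = sum(1 for _, _, ok in samples if not ok) / len(samples)
requests_per_second = len(samples) / test_duration_seconds
throughput_kb_per_second = sum(b for _, b, _ in samples) / 1024 / test_duration_seconds

print(f"avg response time : {average_response_time:.2f} s")
print(f"peak response time: {peak_response_time:.2f} s")
print(f"error rate        : {error_rate:.0%}")
print(f"requests/second   : {requests_per_second:.1f}")
print(f"throughput        : {throughput_kb_per_second:.1f} KB/s")
```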
Load Testing Tools:
1. Apache JMeter
2. WebLoad
3. NeoLoad
4. LoadNinja
5. HP Performance Tester
6. LoadUI Pro
7. LoadView
Advantages of Load Testing:
Load testing enhances the sustainability of the system or software application.
It improves the scalability of the system or software application.
It helps in the minimization of the risks related to system downtime.
It reduces the costs of failure of the system.
It increases customer’s satisfaction.
Load testing has several advantages that make it an important aspect of software
testing:
Identifying bottlenecks: Load testing helps identify bottlenecks in the system such
as slow database queries, insufficient memory, or network congestion. This helps
developers optimize the system and ensure that it can handle the expected
number of users or transactions.
Improved scalability: By identifying the system’s maximum capacity, load testing
helps ensure that the system can handle an increasing number of users or
transactions over time. This is particularly important for web-based systems and
applications that are expected to handle a high volume of traffic.
Improved reliability: Load testing helps identify any potential issues that may occur
under heavy load conditions, such as increased error rates or slow response
times. This helps ensure that the system is reliable and stable when it is deployed
to production.
Reduced risk: By identifying potential issues before deployment, load testing helps
reduce the risk of system failure or poor performance in production.
Cost-effective: Load testing is more cost-effective than fixing problems that occur
in production. It is much cheaper to identify and fix issues during the testing phase
than after deployment.
Improved user experience: By identifying and addressing bottlenecks, load testing
helps ensure that users have a positive experience when using the system. This
can help improve customer satisfaction and loyalty.
Disadvantages of Load Testing:
To perform load testing, programming knowledge is needed.
Load testing tools can be costly. Load testing also has some disadvantages, which include:
Resource-intensive: Load testing can be resource-intensive, requiring significant
hardware and software resources to simulate a large number of users or
transactions. This can make load testing expensive and time-consuming.
Complexity: Load testing can be complex, requiring specialized knowledge and
expertise to set up and execute effectively. This can make it difficult for teams with
limited resources or experience to perform load testing.
Limited testing scope: Load testing is focused on the performance of the system
under stress, and it may not be able to identify all types of issues or bugs. It’s
important to combine load testing with other types of testing such as functional
testing, regression testing, and acceptance testing.
Inaccurate results: If the load testing environment is not representative of the
production environment or the load test scenarios do not accurately simulate real-
world usage, the results of the test may not be accurate.
Difficulty in simulating real-world usage: It’s difficult to simulate real-world usage,
and it’s hard to predict how users will interact with the system. This makes it
difficult to know if the system will handle the expected load.
Complexity in analyzing the results: Load testing generates a large amount of
data, and it can be difficult to analyze the results and determine the root cause of
performance issues.
It’s important to keep in mind that load testing is one aspect of software testing,
and it should be combined with other types of testing to ensure that the system is
thoroughly tested and that any issues are identified and addressed before
deployment.
Types of Stress Testing
1. Server-client Stress Testing: Server-client stress testing also known as distributed
stress testing is carried out across all clients from the server.
2. Product Stress Testing: Product stress testing concentrates on discovering defects
related to data locking and blocking, network issues, and performance congestion
in a software product.
3. Transactional Stress Testing: Transaction stress testing is performed on one or more
transactions between two or more applications. It is carried out for fine-tuning and
optimizing the system.
4. Systematic Stress Testing: Systematic stress testing is integrated testing that is
used to perform tests across multiple systems running on the same server. It is
used to discover defects where one application data blocks another application.
5. Analytical Stress Testing: Analytical or exploratory stress testing is performed to test
the system with abnormal parameters or conditions that are unlikely to happen in
a real scenario. It is carried out to find defects in unusual scenarios like a large
number of users logged at the same time or a database going offline when it is
accessed from a website.
6. Application Stress Testing: Application stress testing also known as product stress
testing is focused on identifying the performance bottleneck, and network issues
in a software product.
Stress Testing Tools
1. JMeter: Apache JMeter is an open-source, pure Java-based stress testing tool that is used to stress test websites. It is an Apache project and can be used for load testing, for analyzing and measuring the performance of a variety of services.
2. LoadNinja: LoadNinja is a stress testing tool developed by SmartBear that enables
users to develop codeless load tests, substitutes load emulators with actual
browsers, and helps to achieve high speed and efficiency with browser-based
metrics.
3. WebLoad: WebLoad is a stress testing tool that combines performance, stability,
and integrity as a single process for the verification of mobile and web
applications.
4. Neoload: Neoload is a powerful performance testing tool that simulates large
numbers of users and analyzes the server’s behavior. It is designed for both
mobile and web applications. Neoload supports API testing and integrates with
different CI/ CD applications.
5. SmartMeter: SmartMeter is a user-friendly tool that helps to create simple tests without coding. It has a graphical user interface and does not require any plugins.
This tool automatically generates advanced test reports with complete and
detailed test results.
Metrics of Stress Testing
Metrics are used to evaluate the performance of the stress and it is usually carried
out at the end of the stress scripts or tests. Some of the metrics are given below.
1. Pages Per Second: Number of pages requested per second and number of pages
loaded per second.
2. Pages Retrieved: The average time taken to retrieve all information from a particular page.
3. Byte Retrieved: The average time taken to retrieve the first byte of information from the page.
4. Transaction Response Time: The average time taken to load or perform transactions between the applications.
5. Transactions per Second: It takes count of the number of transactions loaded per
second successfully and it also counts the number of failures that occurred.
6. Failure of Connection: It takes count of the number of times that the client faced
connection failure in their system.
7. Failure of System Attempts: It takes count of the number of failed attempts in the
system.
8. Rounds: It takes count of the number of test or script conditions executed by the
clients successfully and it keeps track of the number of rounds failed.
Benefits of Stress Testing
Determines the behavior of the system: Stress testing determines the behavior of the
system after failure and ensures that the system recovers quickly.
Ensure failure does not cause security issues: Stress testing ensures that system
failure doesn’t cause security issues.
Makes system function in every situation: Stress testing makes the system work in
normal as well as abnormal conditions in an appropriate way.
Limitations of Stress Testing
1. Manual stress testing is complicated: The manual process of stress testing takes a
longer time to complete and it is a complicated process.
2. Good scripting knowledge required: Good scripting knowledge for implementing the
script test cases for the particular tool is required.
3. Need for external resources: There is a need for external resources to implement
stress testing. It leads to an extra amount of resources and time.
4. Cost of licensed tools: A licensed stress testing tool charges more than the average cost.
5. Additional tool required in case of open-source stress testing tool: In the case of some
open-source tools, there is a need for a load testing tool additionally for setting up
the stress testing environment.
6. Improper test script implementation results in wastage: If proper stress scripts or test
cases are not implemented then there will be a chance of failure of some
resources and wastage of time.
Recovery Testing in Software Testing
Recovery testing is a type of system testing which aims at testing whether a system
can recover from failures or not. The technique involves failing the system and then
verifying that the system recovery is performed properly.
To ensure that a system is fault-tolerant and can recover well from failures, recovery
testing is important to perform. A system is expected to recover from faults and
resume its work within a pre-specified time period. Recovery testing is essential for
any mission-critical system, for example, the defense systems, medical devices, etc.
In such systems, there is a strict protocol that is imposed on how and within what
time period the system should recover from failure and how the system should
behave during the failure.
A system or software should be recovery tested for failures like:
Power supply failure
The external server is unreachable
Wireless network signal loss
Physical conditions
The external device not responding
The external device is not responding as expected, etc.
Steps to be performed before executing a Recovery Test:
A tester must ensure that the following steps are performed before carrying out the
Recovery testing procedure :
1. Recovery Analysis –
It is important to analyze the system’s ability to allocate extra resources like
servers or additional CPUs. This would help to better understand the recovery-
related changes that can impact the working of the system. Also, each of the
possible failures, their possible impact, their severity, and how to perform them
should be studied.
2. Test Plan preparation –
Designing the test cases keeping in mind the environment and results obtained in
recovery analysis.
3. Test environment preparation –
Designing the test environment according to the recovery analysis results.
4. Maintaining Back-up –
Information related to the software, like various states of the software and
database should be backed up. Also, if the data is important, then the backing up
of the data at multiple locations is important.
5. Recovery Personnel Allocation –
For the recovery testing process, it is important to allocate recovery personnel who are aware and educated enough about the recovery testing being conducted.
6. Documentation –
This step emphasizes documenting all the steps performed before and during the recovery testing so that the system can be analyzed for its performance in case of a failure.
Example of Recovery Testing:
When a system is receiving some data over a network for processing purposes, we can simulate software failure by unplugging the system power. After a while, we can plug in the system again and test its ability to recover and continue receiving the data from where it stopped.
Another example could be when a browser is working on multiple sessions: we can simulate software failure by restarting the system. After restarting the system, we can check if it recovers from the failure and reloads all the sessions it was previously working on.
While downloading a movie over a Wi-Fi network, if we move to a place where there is no network, then the downloading process will be interrupted. Now, to check if the process recovers from the interruption and continues working as before, we move back to a place where there is a Wi-Fi network. If the downloading resumes, then the software has a good recovery rate.
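The download example can be sketched in code: on recovery, the client resumes from the last byte already written instead of starting over. This assumes the server supports HTTP Range requests; the URL is a placeholder.

```python
# Sketch of recovery behaviour for an interrupted download: resume from the
# last byte already written instead of starting over. Assumes the server
# honours HTTP Range requests; the URL is illustrative.
import os
import requests

def resume_download(url, dest):
    already = os.path.getsize(dest) if os.path.exists(dest) else 0
    headers = {"Range": f"bytes={already}-"} if already else {}
    with requests.get(url, headers=headers, stream=True, timeout=30) as resp:
        resp.raise_for_status()
        mode = "ab" if resp.status_code == 206 else "wb"   # 206 = partial content
        with open(dest, mode) as f:
            for chunk in resp.iter_content(chunk_size=8192):
                f.write(chunk)

# After the network comes back, calling this again continues where it stopped.
# resume_download("https://example.test/movie.mp4", "movie.mp4")
```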
Advantages of Recovery Testing:
Improves the quality of the system by eliminating the potential flaws in the
system so that the system works as expected.
Recovery testing is also referred to as Disaster Recovery Testing. A lot of
companies have disaster recovery centers to make sure that if any of the systems
is damaged or fails due to some reason, then there is back up to recover from the
failure.
Risk elimination is possible as the potential flaws are detected and removed from
the system.
Improved performance as faults are removed and the system becomes more
reliable and performs better in case a failure occurs.
Disadvantages of Recovery Testing:
Recovery testing is a time-consuming process as it involves multiple steps and
preparations before and during the process.
The recovery personnel must be trained, as the process of recovery testing takes place under their supervision. The tester therefore needs to be trained to ensure that recovery testing is performed in the proper way, and should have enough data and backup files to perform it.
The potential flaws or issues are unpredictable in a few cases. It is difficult to point out the exact reason for them; however, since the quality of the software must be maintained, random test cases are created and executed to ensure such potential flaws are removed.
Exploratory Testing
Exploratory Testing is a type of software testing in which the tester is free to select
any possible methodology to test the software. It is an unscripted approach to
software testing. In exploratory testing, testers use their learning, knowledge, skills, and abilities to test the software.
Exploratory testing checks the functionality and operations of the software as well as
identify the functional and technical faults in it. Exploratory testing aims to optimize
and improve the software in every possible way. The exploratory testing technique
combines the experience of testers with a structured approach to testing. It is often
performed as a black-box testing technique.
History of exploratory testing:
Exploratory testing was first called “ad-hoc testing”. The term “exploratory testing” was coined by the software testing expert Cem Kaner in the classic book Testing Computer Software.
No matter how many test cases you have created, you will eventually run out of formally planned test cases. You can keep on testing and run new test cases without wasting much time on preparing or explaining them; try to trust your instincts.
Why use Exploratory Testing?
Below are some of the reasons for using exploratory testing:
Random and unstructured testing: Exploratory testing is unstructured and thus can help to reveal bugs that would otherwise go undiscovered during structured phases of testing.
Testers can play around with user stories: With exploratory testing, testers can annotate defects and add assertions and voice memos, and in this way the user story is converted into a test case.
Facilitate agile workflow: Exploratory testing helps formalize the findings and
document them automatically. Everyone can participate in exploratory testing with
the help of visual feedback thus enabling the team to adapt to changes quickly
and facilitating agile workflow.
Reinforce traditional testing process: Using tools for automated test case
documentation testers can convert exploratory testing sequences into functional
test scripts.
Speeds up documentation: Exploratory testing speeds up documentation and
creates an instant feedback loop.
Export documentation to test cases: By integrating exploratory testing with tools like Jira, recorded documentation can be directly exported to test cases.
When should you use Exploratory Testing?
When need to learn quickly about the application: Exploratory testing is
beneficial for the scenarios when a new tester enters the team and needs to learn
quickly about the application and provide rapid feedback.
Review from a user perspective: It comes in handy when there is a need to
review products from a user perspective.
Early iteration required: Exploratory testing is helpful in scenarios when an early
iteration is required as the teams don’t have much time to structure the test cases.
Testing mission-critical applications: Exploratory testing ensures that the tester
doesn’t miss the edge cases that can lead to critical quality failures.
Aid unit test: Exploratory testing can be used to aid unit tests, document the test
cases, and use test cases to test extensively during the later sprints.
When to say no to exploratory testing:
Organizations must strike the proper balance between exploratory testing and scripted testing. Until a proper initial state is reached, exploratory testing alone will not work and will not deliver the results the team expects. In particular, for any type of testing that is regulated, compliance-based scripted testing is the better choice. In compliance testing there are checklists that are mandatory to follow for legal reasons, so it is best to use scripted testing where laws govern the testing protocol and specific standards must be met.
Importance of exploratory testing for CI/CD:
Exploratory testing is open to all stakeholders, not just to trained testers. Using these tests, we are able to capture screenshots, record voice during the session, and give feedback at the same time. This makes reviews much faster compared to traditional software testing.
The current test approach used by QA teams is enhanced by exploratory testing. It
consists of several unrecorded testing sessions to find issues or bugs that have not
yet been found. It improves the software product overall, finds edge cases, increases
test coverage, and may lead to the addition of new features when paired with
automated testing and other testing techniques. It promotes experimentation,
creativity, and discovery within the teams because it lacks structural rigidity.
The almost instantaneous nature of feedback helps close the gaps between testers
and developers. Above all, the results of exploratory testing provide a user-oriented
perspective and feedback to the development teams. The goal is to complement
traditional testing to find million-dollar defects that are generally hidden behind the
defined workflow.
Types of Exploratory Testing:
There are 3 types of exploratory testing:
1. Freestyle: In freestyle exploratory testing, the application is tested in an ad-hoc
way, there is no maximum coverage, and there are no rules to follow for testing. It
is done in the following cases:
1. When there is a need to get familiar with the application.
2. To check other test engineers’ work.
3. To perform smoke tests quickly.
2. Strategy Based: Strategy-based testing can be performed with the help of
multiple testing techniques like decision-table testing, cause-effect graphing,
boundary value analysis, equivalence partitioning, and error guessing. It is done
by an experienced tester who has known the application for the longest time.
1. Learn: This is the first phase of exploratory testing in which the tester learns
about the faults or issues that occur in the software. The tester uses his/her
knowledge, skill, and experience to observe and find what kind of problem the
software is suffering from. This is the initial phase of exploratory testing. It also
involves different new learning for the tester.
2. Test Case Creation: When the fault is identified i.e. tester comes to know what
kind of problem the software is suffering from then the tester creates test cases
according to defects to test the software. Test cases are designed by keeping in
mind the problems end users can face.
3. Test Case Execution: After the creation of test cases according to end user
problems, the tester executes the test cases. Execution of test cases is a
prominent phase of any testing process. This includes the computational and
operational tasks performed by the software to get the desired output.
4. Analysis: After the execution of the test cases, the result is analyzed and
observed whether the software is working properly or not. If the defects are found
then they are fixed and the above three steps are performed again. Hence this
whole process goes on in a cycle and software testing is performed.
Exploratory Testing vs Automated Testing:
Below are the differences between exploratory testing and automated testing:
Exploratory Testing vs. Automated Testing
Is testing reproducible: In exploratory testing, testing cannot be reproduced; only defects can be reproduced. In automated testing, testing can be reproduced.
Investment in documentation: In exploratory testing, there is no investment in preparing documentation and test scripts. In automated testing, there is a significant investment in preparing documentation and test scripts.
and other factors. Testers must be able to approach the software from all those
user perspectives.
The aim of testing should be clear: For effective exploratory testing, the testers
need to have a clear mindset and have clarity on the mission of testing. Testers
should maintain clear notes on what needs to be tested, and why it needs to be
tested.
Proper documentation: It is important to make proper notes, maintain documentation, and monitor test coverage, risks, the test execution log, issues, and queries.
Tracking of issues: The tester should maintain a proper record of questions and
issues raised during testing.
Challenges of Exploratory Testing:
Replication of failure: In exploratory testing replication of failure to identify the
cause is difficult.
Difficult to determine the best test case: In exploratory testing, determining the
best test case to execute or to determine the best tool to use can be challenging.
Difficult to document all events: During exploratory testing documentation of all
events is difficult.
Difficult reporting: Reporting test results is difficult in exploratory testing as the
report does not have well-planned test scripts to compare with the outcome.
Advantages of Exploratory Testing:
Less preparation required: It takes no preparation as it is an unscripted testing
technique.
Finds critical defects: Exploratory testing involves an investigation process that
helps to find critical defects very quickly.
Improves productivity: In exploratory testing, testers use their knowledge, skills,
and experience to test the software. It helps to expand the imagination of the
testers by executing more test cases, thus enhancing the overall quality of the
software.
Generation of new ideas: Exploratory testing encourages creativity and intuition
thus the generation of new ideas during test execution.
Catch defects missed in test cases: Exploratory testing helps to uncover bugs
that are normally ignored by other testing techniques.
Disadvantages of Exploratory Testing:
Tests cannot be reviewed in advance: In exploratory testing, Testing is
performed randomly so once testing is performed it cannot be reviewed.
Dependent on the tester’s knowledge: In exploratory testing, the testing is
dependent on the tester’s knowledge, experience, and skill. Thus, it is limited by
the tester’s domain knowledge.
Difficult to keep track of tests: In Exploratory testing, as testing is done in an
ad-hoc manner, keeping track of tests performed is difficult.
Not possible to repeat test methodology: Due to the ad-hoc nature of testing in
exploratory testing, tests are done randomly and thus it is not suitable for longer
execution time, and it is not possible to repeat the same test methodology.
Software Testing – Visual Testing
Visual Testing is also called Visual UI Testing. It validates whether the developed
software user interface (UI) is compatible with the user’s view. It ensures that the
developed web design is correctly following the spaces, sizes, shapes, and positions
143
of UI elements. It also ensures that the elements are working properly with various
devices and browsers. Visual testing validates how multiple devices, browsers,
operating systems, etc., affect the software.
Features of Visual Testing:
Deliver a consistent user interface.
Rapid and responsive testing.
Continuous visual regression testing.
Test on every commit.
No test scripts are needed.
This article focuses on discussing each of these topics-
1. Visual Inspection System.
2. Working of Visual Testing.
3. Why Visual Testing?
4. Why Functional Testing Can’t Cover Visual Testing?
5. Visual Testing Methods.
6. Types of Visual Testing.
7. Tools for Automated Visual Testing.
8. Advantages of Visual Testing.
9. Disadvantages of Visual Testing.
Let’s start discussing each of these topics in detail.
Visual tests generate, compare, and analyze browser snapshots to detect if any pixels have changed. These pixel differences indicate visual bugs.
Steps in Visual Testing:
The Quality Analyst or the tester runs the developed code to test the web
application’s user interface part.
Initially, it will record the screen as snapshots. It acts as a baseline with which the
further test results will get compared.
After that, the QA runs the code in the background and it will take or record the
snapshots of those running codes.
Now, it will start comparing with the baseline snapshots.
If changes are found among those snapshots then the test is considered failed.
If no changes are found then the test passes.
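This snapshot comparison can be sketched with the Pillow imaging library: compute a pixel-difference image and fail the test if any pixel changed. File names are placeholders, and both images must have the same dimensions.

```python
# Minimal pixel-comparison sketch using Pillow: compare a new screenshot against
# the stored baseline and fail the visual test if any pixel differs.
from PIL import Image, ImageChops

def visual_test(baseline_path, current_path):
    baseline = Image.open(baseline_path).convert("RGB")
    current = Image.open(current_path).convert("RGB")
    diff = ImageChops.difference(baseline, current)   # images must be the same size
    bbox = diff.getbbox()          # None means the images are pixel-identical
    if bbox is None:
        return "passed"
    diff.save("visual_diff.png")   # save the difference image for the report
    return f"failed: pixels changed inside region {bbox}"

# print(visual_test("baseline.png", "current.png"))
```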
Some visual testing tools will generate reports where the differences in the snapshots are captured, showing exactly where the snapshots differ. They also generate reports for successful test results.
If these image differences are caused by errors, developers can fix them and rerun the test to check whether the fixes actually worked. If the differences are caused by subsequent changes in the UI, developers will have to review the screenshots and update the baseline images against which visual tests will be run in the future.
Visual testing is done because visual errors happen more frequently than one might
realize. Some of the reasons for doing visual testing are-
It verifies or ensures that the developed product UI appears as expected to the
users.
It helps in evaluating the defects in the UI interface.
It correctly detects variations in the UI that deviate from the baseline snapshots.
It helps to create dedicated visual test cases and covers the functional points.
Visual testing allows the tester or Quality Analyst to evaluate the test cases
visually which is easier to carry out.
Visual bugs are rendering issues, and rendering validation is not caught by functional testing tools. Functional testing measures functional behavior: if there is a requirement to check the functionality of the website, functional testing works properly and ensures it. But if the visual layout of the website is messy and not as expected, that will not be detected by functional testing.
Example: While creating a website, the submit button is placed at the center, but by mistake it is later moved to the right side of the browser page. Functional testing will not catch the defect that the submit button is wrongly placed, because it only checks whether the submit button’s functionality is working properly. This is where functional testing cannot cover visual testing.
In the case of visual testing implementation, it compares the various snapshots with
the baseline snapshots and will detect the defect that the submit button is wrongly
placed. It helps the tester to find the defect with minimum test runs.
In automation testing, the scope is narrower unless screenshot testing is in place.
Also, there is a steep learning curve as the organizations take time to learn about
automation testing tools.
Going with automation is a good choice if you are required to perform visual regression testing to deal with frequent changes happening to a stable UI. Automated testing also greatly helps with visual screenshot comparison.
Automated screenshot comparison offers a great degree of precision in visual
testing and increases the ROI.
The automated screenshot comparison can capture those bugs that are
impossible to get detected with human eyes and manual comparison. It is also
helpful in the end-to-end testing for complex user stories.
The following are some of the tools for automated visual testing:
Code-Based automated visual testing (Open Source Tools):
1. Specter:
Specter is the automated visual regression testing framework.
After the web page is created, each individual component will be checked whether
it is rendered properly or not.
Specter will capture screenshots of the elements matching the selectors specified,
at all the screen dimensions desired.
2. Needle:
It checks that visuals like images, layouts, buttons, CSS, SVG, etc., are rendered
correctly by taking screenshots of portions of a website and comparing them
against known good screenshots.
It also provides tools for testing calculated CSS values and the position of HTML
elements.
3. Gemini:
It checks the visual appearance of the web page, testing each page separately.
It checks some of the CSS properties while verifying that page elements are correct
and correctly positioned.
It gathers CSS test statistics.
Some of the rendering features for images are not supported.
4. Pix-Diff:
It is developed to compare screenshots of the developed web pages.
The image comparison is carried out in three ways: pixel-by-pixel, perceptual, and
context.
It can detect when part of an image is missing.
It is used to check low-frequency images.
5. FBSnapshotTestCase:
It takes a UIView or layer and uses UIKit to generate automated image snapshots
of its content.
It creates a reference image and compares it with the actual image rendered by
the generated code.
A change of even one pixel will cause the test to fail.
Configuration – Based automated visual testing (Open Source Tools):
1. CSS visual test: It checks the correctness of the CSS properties with the image
generated.
2. VIFF: It finds visual differences between web pages in different environments,
such as development, staging, and production, and across browsers.
3. GreenOnion: It checks only the UI part of the website and ensures that the
designs, views, etc., are made correctly.
4. Galen Framework: It is used to test the layout of the web application from various
devices.
5. CSSCritic: It continuously checks the current layout of the web page against a
previously generated reference image.
6. BackstopJS: It checks the entire layout, or part of the layout, of the UI and
compares it with DOM screenshots.
Types of Acceptance Testing:
1. User Acceptance Testing (UAT): User acceptance testing is used to determine
whether the product is working for the user correctly. Specific requirements which
are quite often used by the customers are primarily picked for the testing purpose.
This is also termed as End-User Testing.
2. Business Acceptance Testing (BAT): BAT is used to determine whether the
product meets the business goals and purposes or not. BAT mainly focuses on
business profits, which is quite challenging due to changing market conditions
and new technologies, so the current implementation may have to be changed,
resulting in extra budget.
3. Contract Acceptance Testing (CAT): CAT is a contract that specifies that once
the product goes live, the acceptance test must be performed within a
predetermined period and it should pass all the acceptance use cases. Here, a
contract termed a Service Level Agreement (SLA) includes terms stating that
payment will be made only if the product's services are in line with all the
requirements, which means the contract is fulfilled. Sometimes, this contract
happens before the product goes live. There should be a well-defined contract in
terms of the period of testing, areas of testing, conditions on issues encountered
at later stages, payments, etc.
4. Regulations Acceptance Testing (RAT): RAT is used to determine whether the
product violates the rules and regulations that are defined by the government of
the country where it is being released. This may be unintentional but will impact
negatively on the business. Generally, the product or application that is to be
released in the market, has to go under RAT, as different countries or regions
have different rules and regulations defined by their governing bodies. If any rules
or regulations are violated for a country or a specific region, then the product will
not be released in that country or region. If the product is released even though
there is a violation, the vendors of the product will be held directly responsible.
5. Operational Acceptance Testing (OAT): OAT is used to determine the
operational readiness of the product and is non-functional testing. It mainly
includes testing of recovery, compatibility, maintainability, reliability, etc. OAT
assures the stability of the product before it is released to production.
6. Alpha Testing: Alpha testing is used to evaluate the product in the development
or testing environment by a specialized team of testers, usually called alpha testers.
7. Beta Testing: Beta testing is used to assess the product by exposing it to real
end-users, usually called beta testers, in their own environment. Feedback is
collected from the users and the defects are fixed. This also helps in enhancing the
product to give a rich user experience.
Use of Acceptance Testing:
To find the defects missed during the functional testing phase.
To verify how well the product has been developed.
To confirm that the product is actually what the customers need.
To gather feedback that helps in improving product performance and user experience.
To minimize or eliminate issues arising in production.
Advantages of Acceptance Testing:
This testing helps the project team learn further requirements directly from the
users, as it involves the users in testing.
Automated test execution.
It brings confidence and satisfaction to the clients as they are directly involved in
the testing process.
It is easier for the users to describe their requirements.
It covers the black-box testing process, and hence the entire functionality of
the product will be tested.
Disadvantages of Acceptance Testing:
Users should have basic knowledge about the product or application.
Sometimes, users don't want to participate in the testing process.
The feedback for the testing takes a long time, as it involves many users and
opinions may differ from one user to another.
The development team does not participate in this testing process.
Advantages of alpha testing include
1. Early identification of bugs and issues: Alpha testing allows for the early
identification of bugs and issues, providing an opportunity to fix them before they
reach end-users.
2. Improved quality: By identifying and fixing bugs and issues early in the
development process, alpha testing helps to improve the overall quality of the
software.
3. Increased user satisfaction: Alpha testing helps to ensure that the software meets
the needs of the target audience, leading to increased user satisfaction.
4. Faster resolution of problems: Alpha testing allows for the rapid resolution of
problems, reducing the likelihood of further issues down the line.
5. Cost savings: By identifying and fixing issues early in the development process,
alpha testing can help to save time and money by avoiding the need for more
extensive testing and bug fixing later on.
Objective of Alpha Testing
1. The objective of alpha testing is to refine the software product by finding the bugs
that were not discovered during the previous tests.
2. The objective of alpha testing is to refine the software product by fixing the bugs
that were not discovered during the previous tests.
3. The objective of alpha testing is to involve customers deeply in the process of
development.
4. The objective of alpha testing is to give better insight into the software’s reliability
at the early stages of development.
5. The main objective of alpha testing is to identify and resolve critical bugs and
issues in the software before it is released to the public. The goal is to assess the
software’s overall quality, functionality, usability, performance, and stability in a
controlled environment, and to ensure that it meets the needs and expectations of
the target audience.
6. During alpha testing, the software is evaluated against a set of predetermined
acceptance criteria, and any issues or bugs that are identified are documented
and reported back to the development team for resolution. The objective of alpha
testing is to provide an early opportunity to identify and fix bugs and issues,
reducing the likelihood of them affecting end-users and potentially causing
damage to the software’s reputation.
7. Overall, the objective of alpha testing is to improve the quality of the software,
ensure that it meets the needs of the target audience, and reduce the risk of
issues and bugs affecting end-users after the software has been released.
Alpha Testing Process
1. Review the design specification and functional requirements.
2. Develop comprehensive test cases and test plans.
3. Execute test plan
4. Log defects
5. Retest once the issues have been fixed
Phases of Alpha Testing
There are two phases in alpha testing:
1st Phase: The first phase of testing is done by in-house developers or software
engineers. They either use hardware-aided debuggers or debugger software. The
aim is to catch bugs quickly. Usually during alpha testing, a tester comes across lots
of bugs, crashes, missing features, and missing documentation.
2nd Phase: The second phase of alpha testing is done by software quality assurance
staff for additional testing in a test environment. It includes black-box as well as
white-box testing.
The phases of alpha testing typically include
1. Planning: This phase involves defining the scope, objectives, and schedule for the
alpha testing process. It also includes identifying the target audience, the test
environment, and the resources required for the testing.
2. Preparation: This phase involves setting up the test environment, configuring the
test cases, and preparing the test data. It also includes creating the test scripts
and building the test infrastructure.
3. Execution: This phase involves running the test cases and collecting the test
results. Testers will report any bugs or issues they encounter, and the
development team will work to fix them.
4. Evaluation: This phase involves analysing the test results and determining
whether the software meets the requirements and performs as expected. It also
includes identifying areas of improvement and making recommendations for
further testing.
5. Reporting: This phase involves documenting the test results and providing a report
to the development team and stakeholders. It also includes presenting the findings
and recommendations for future testing and development.
6. Closure: This phase involves wrapping up the testing process and releasing the
software for further testing or for release to the end-users.
Advantages of Alpha Testing
Better insight into the software’s reliability at its early stages.
Free up your team for other projects.
It reduces delivery time to market.
Early feedback helps to improve software quality.
Disadvantages of Alpha Testing
It will need a longer time for test plan execution if the project is large.
Some defects in the product may remain undetected during alpha testing.
It is difficult to test the entire product since it is still under development.
For smaller projects, the time spent on alpha testing may not be worthwhile.
It does not carry out reliability and security testing.
This test only covers the business requirements mentioned by the client; the
project team will not go through deep testing of each and every module.
It requires a separate lab environment for testing.
Benefits of Alpha Testing
The benefits of alpha testing include:
1. Early identification of bugs and issues: Alpha testing allows for the early
identification of bugs and issues that may not be discovered during development,
reducing the risk of these issues being found by end-users and causing problems
in the production environment.
2. Improved quality: Alpha testing helps ensure that the software is of high quality
and meets the requirements before it is released to the end-users.
3. Cost-effective: Alpha testing is a cost-effective way to identify and fix issues early
in the development process, which can save time and money in the long run.
4. User feedback: Alpha testing can provide valuable feedback from users, allowing
the development team to make improvements and enhance the user experience.
5. Increased confidence in the software: Alpha testing provides a level of confidence
that the software is ready for beta testing and release to the end-users.
6. Helps in stress testing: Alpha testing helps in identifying the limit of the software’s
performance and its ability to handle heavy load in terms of usage, this helps to
identify if the software can perform well in real-world scenarios.
In beta testing, a pre-release version of the software, whose feedback is needed, is
released to a limited number of
end-users of the product to obtain feedback on the product quality. Beta testing helps
in minimization of product failure risks and it provides increased quality of the product
through customer validation. It is the last test before shipping a product to the
customers. One of the major advantages of beta testing is direct feedback from
customers.
Why is Beta Testing Needed?
Beta testing is necessary for several reasons:
1. Identify and fix bugs: Beta testing helps to identify and fix bugs or errors in the
software. It allows developers to catch issues that were not detected during the
development process and resolve them before the official launch.
2. Ensure software quality: Beta testing helps to ensure that the software meets the
expected quality standards before it is released to the public. This helps to reduce
negative reviews, returns, and refunds that can affect the product’s reputation.
3. Evaluate performance: Beta testing enables developers to evaluate the software’s
performance in real-world scenarios, which can help identify issues with the
software’s functionality, speed, and responsiveness.
4. Get user feedback: Beta testing provides a platform for users to provide feedback
about the software, its features, and usability. This feedback can be used to
improve the software’s overall performance and user experience.
5. Improve user engagement: Beta testing can improve user engagement by
allowing users to test the software and provide feedback. This helps to build a
relationship between the developers and the users, leading to increased user
satisfaction.
Characteristics of Beta Testing:
1. Beta Testing is performed by clients or users who are not employees of the
company.
2. Reliability, security, and robustness are checked during beta testing.
3. Beta Testing commonly uses black-box testing.
4. Beta testing is carried out in the user’s location.
5. Beta testing doesn’t require a lab or testing environment.
4. Focused Beta Testing: Software product is released to the market for collecting
feedback on specific features of the program. For example, important functionality
of the software.
5. Post-release Beta Testing: Software product is released to the market and data is
collected to make improvements for the future release of the product.
Tools used for Beta Testing:
TestFairy
CenterCode
TryMyUI
UserTesting
TestRail
Usersnap
Zephyr
TestFlight
Uses of Beta Testing:
Some of the uses of beta testing are:
1. Identifying and fixing bugs: Beta testing helps developers identify and fix bugs in
the software before its official release. Beta testers can use the software in real-
world scenarios, identify any bugs or glitches, and provide feedback to the
developers. This feedback helps the developers fix the bugs and improve the
software’s overall performance.
2. Testing software compatibility: Beta testing is used to test the software’s
compatibility with different operating systems, hardware, and software
configurations. This helps ensure that the software will work correctly on a wide
range of devices and configurations.
3. Gathering user feedback: Beta testing allows developers to gather user feedback
and insights about the software’s features and functionalities. This feedback can
be used to improve the user experience and make the software more user-
friendly.
4. Evaluating performance: Beta testing helps developers evaluate the software’s
performance in real-world scenarios. This includes measuring the software’s
speed, responsiveness, and overall stability.
5. Building customer loyalty: Beta testing involves users in the development process,
making them feel valued and involved in the product’s creation. This can help
build customer loyalty and increase the chances of the software’s success after
launch.
Advantages of Beta Testing:
Database Testing
A database tester should be familiar with the database structure and should fully
understand the business rules of the application.
Database tests can be fully automated, fully manual, or a hybrid approach using a
combination of both manual and automated processes.
Why is Database Testing Important?
Below are some of the reasons to perform database testing:
Ensures database efficiency: Database testing helps to ensure the database’s
efficiency, maximum stability, performance, and security.
Ensures information validity: Database testing helps to ensure that the data values
and information received and stored in the database are valid.
Helps prevent data loss: Database testing helps prevent data loss and preserves
data from aborted transactions.
Differences between User-Interface Testing and Database Testing
Below are some of the differences between user interface testing and database
testing:
Testing includes: UI testing includes validating text boxes, buttons, select
dropdowns, and the look and feel of the application, etc., whereas database testing
includes validating schema, columns, database tables, etc.
For example, UI testing deals with on-screen elements such as dropdowns, whereas
database testing deals with columns.
reliability of the system. This test particularly evaluates the system’s robustness
and error handling under extremely heavy load conditions.
3. Security Testing : Security Testing is a type of Software Testing that uncovers
vulnerabilities in the system and determines that the data and resources of the
system are protected from possible intruders. It ensures that the software system
and application are free from any threats or risks that can cause a loss. Security
testing of any system is focused on finding all possible loopholes and weaknesses
of the system that might result in the loss of information or repute of the
organization. Security testing is a type of software testing that focuses on
evaluating the security of a system or application.
4. Usability Testing : Several tests are performed on a product before deploying it.
You need to collect qualitative and quantitative data and satisfy customers’ needs
with the product. A proper final report is made mentioning the changes required in
the product (software). Usability Testing in software testing is a type of testing,
that is done from an end user’s perspective to determine if the system is easily
usable.
5. Compatibility Testing: Compatibility testing is a type of software testing that comes
under the non-functional testing category. It is performed on an application to
check its compatibility (running capability) on different platforms and environments.
This testing is done only once the application becomes stable. Simply put, the
compatibility test aims to check the developed software application's functionality
on various software and hardware platforms, networks, browsers, etc.
Database Testing Process
1. Test Environment Setup: Database testing starts with setting up the testing
environment so that the testing process can be carried out with good quality.
2. Test Scenario Generation: After setting up the test environment test cases are
designed for conducting the test. Test scenarios involve the different inputs and
different transactions related to the database.
3. Test Execution: Execution is the core phase of the testing process in which the
testing is conducted. It is basically related to the execution of the test cases
designed for the testing process.
4. Analysis: Once the execution phase ends, the process and the output obtained
are analyzed. It is checked whether the testing process has been conducted
properly or not.
5. Log Defects: Log defects are also known as report submitting. In this last phase,
the tester informs the developer about the defects found in the database of the
system.
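As an illustration of this flow, here is a minimal sketch using Python's built-in
sqlite3 and unittest modules; the users table and its data are hypothetical, not taken
from any real application.

import sqlite3
import unittest

class UserTableTest(unittest.TestCase):
    def setUp(self):
        # Test environment setup: an isolated in-memory database with a known state
        self.conn = sqlite3.connect(":memory:")
        self.conn.execute(
            "CREATE TABLE users (id INTEGER PRIMARY KEY, email TEXT NOT NULL)"
        )
        self.conn.execute("INSERT INTO users (email) VALUES ('alice@example.com')")
        self.conn.commit()

    def test_inserted_row_is_retrievable(self):
        # Test execution and analysis: run the query and verify the output
        row = self.conn.execute("SELECT email FROM users WHERE id = 1").fetchone()
        self.assertEqual(row[0], "alice@example.com")  # a failure would be logged as a defect

    def tearDown(self):
        self.conn.close()

if __name__ == "__main__":
    unittest.main()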
Objectives of Database Testing
1. Data Mapping
It checks whether the fields in the user interface or front-end forms are mapped
consistently with the corresponding fields in the database table.
Verifies the data that passes through and out between the applications and the
backend database.
The test engineer verifies whether the correct CRUD (Create, Retrieve, Update,
and Delete) activity gets used at the backend when a specific action is done at the
front end and whether the user action is effective or not.
2. ACID Properties of Transactions
Every transaction a database performs has to stick to these four properties:
Atomicity, Consistency, Isolation, and Durability. (A minimal atomicity check is
sketched after this list of objectives.)
1. Atomicity: This means that the database transactions are atomic i.e. if a
transaction is performed on data, it should be performed entirely or should not be
implemented at all. Thus, a transaction can result in either success or failure. This
is also known as All-or-Nothing.
2. Consistency: This means that the database state should remain valid and
preserved after the transaction is completed.
3. Isolation: This means that multiple transactions can be implemented all at once
without impacting one another and altering the database state. The database
should remain consistent even if two or more transactions occur concurrently.
4. Durability: This means if a transaction is committed, it will keep the modifications
without any fail irrespective of the effect of the external factors.
3. Data Integrity
The updated and the most recent values of shared data should appear on all the
forms and screens.
The value should not be updated on one screen and display an older value on
another one.
The status should also be updated simultaneously.
This focuses on testing the consistency and accuracy of the data stored in the
database so that expected results are obtained.
4. Accuracy of Business Rules
Complex databases lead to complicated components like relational constraints,
triggers, and stored procedures.
Hence testers have to come up with appropriate SQL queries to validate these
complex objects.
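As a small illustration of the ACID objective mentioned above, the following Python
sketch checks atomicity (all-or-nothing behaviour) using the built-in sqlite3 module;
the accounts table is a made-up example, not from any real system.

import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (id INTEGER PRIMARY KEY, balance INTEGER NOT NULL)")
conn.execute("INSERT INTO accounts VALUES (1, 100), (2, 50)")
conn.commit()

try:
    with conn:  # commits on success, rolls back automatically on error
        conn.execute("UPDATE accounts SET balance = balance - 70 WHERE id = 1")
        raise RuntimeError("transfer interrupted")  # simulate a failure mid-transaction
except RuntimeError:
    pass

# Atomicity check: the partial debit must not be visible after the rollback
balance = conn.execute("SELECT balance FROM accounts WHERE id = 1").fetchone()[0]
assert balance == 100, f"atomicity violated: balance is {balance}"
print("atomicity preserved: the interrupted transaction left no partial update")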
Database Testing Components
1. Transactions: Transactions mean the access and retrieval of data. Hence, during
transaction processing, the ACID properties should be followed.
2. Database Schema: It is the design or the structure of the organization of the data
in the database. Tools like SchemaCrawler, a free database discovery and
comprehension tool, can be used; regular expressions are also a good approach
to follow.
3. Triggers: When a certain event occurs in a certain table, a trigger is auto-
instructed to be executed. White box testing and black box testing have their
procedures and set of rules which help to precisely test the triggers.
4. Stored Procedures: It is the collection of the statements or functions governing
the transactions in the database. The stored procedure systems are used for
multiple applications where data is kept in RDBMS. White box testing and Black
box testing can be used to test the stored procedures.
5. Field Constraints: Field constraints involve default values, unique values, and
foreign keys. Testing field constraints involves verifying the outcomes returned
by the SQL commands, as sketched below.
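For illustration only, here is a minimal field-constraint check in Python with sqlite3;
the employees table and its constraints are hypothetical assumptions.

import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE employees ("
    " id INTEGER PRIMARY KEY,"
    " email TEXT NOT NULL UNIQUE,"
    " dept TEXT DEFAULT 'unassigned')"
)
conn.execute("INSERT INTO employees (email) VALUES ('a@example.com')")

# Default value check: dept should fall back to 'unassigned'
dept = conn.execute("SELECT dept FROM employees WHERE id = 1").fetchone()[0]
assert dept == "unassigned"

# NOT NULL and UNIQUE checks: both inserts must be rejected by the database
for bad_insert in (
    "INSERT INTO employees (email) VALUES (NULL)",
    "INSERT INTO employees (email) VALUES ('a@example.com')",
):
    try:
        conn.execute(bad_insert)
        raise AssertionError("constraint not enforced for: " + bad_insert)
    except sqlite3.IntegrityError:
        pass  # expected: the constraint rejected the invalid row

print("default, NOT NULL, and UNIQUE constraints behave as expected")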
How Automation can Help in Database Testing?
Automation in software testing helps to automate repetitive tasks and thus reduce
manual work, thus helping test engineers to focus on more critical features. Below
are some of the scenarios where automation can be helpful in database testing for
test engineers:
1. Frequently altering applications: In the Agile methodology there is a new release
to production at the end of every sprint, while a full manual round of testing can
take at least 3 weeks. By automating the features that remained constant in the
recent sprint, test engineers can focus on the newly modified requirements.
2. Easier to monitor variations: With an automated monitoring process, it becomes
easier to find the variations where a set of data gets corrupted due to human error
or other issues and fix them as soon as possible.
3. Modification in database schema: Every time when database schema is
modified, in-depth testing is needed to make sure that everything is working
correctly. This is a time-consuming process if done manually.
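As an illustration of such an automated schema check, here is a minimal Python
sketch with sqlite3; the expected column list is a hypothetical specification for a
users table, not from any real application.

import sqlite3

EXPECTED_COLUMNS = {"id": "INTEGER", "email": "TEXT", "created_at": "TEXT"}

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, email TEXT, created_at TEXT)")

# PRAGMA table_info returns one row per column: (cid, name, type, notnull, default, pk)
actual = {row[1]: row[2] for row in conn.execute("PRAGMA table_info(users)")}

missing = set(EXPECTED_COLUMNS) - set(actual)
mismatched = {c for c in EXPECTED_COLUMNS if c in actual and actual[c] != EXPECTED_COLUMNS[c]}
assert not missing and not mismatched, f"schema drift: missing={missing}, mismatched={mismatched}"
print("schema matches the expected definition")

Such a check can be rerun automatically after every schema modification instead of
repeating the verification by hand.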
Most commonly occurring issues during database testing
Below are some of the challenges of database testing and their solutions:
1. Frequently changing database structure: The database tester needs to create
test cases from a structure that gets modified at the time of implementation, so
there is a need to catch the modification and the impact of the modification as
early as possible.
2. Time-consuming to determine transactions state: The overall planning and
timing should be organized so that no extra time and cost issues appear later.
3. Unwanted data modification: The best solution to this challenge is to implement
access control and provide access to modify data only to a limited number of
people. Access should be restricted for EDIT and DELETE operations.
4. Cost and Time-consuming to get data: It is very important to maintain a balance
between the project timelines, expected quality, and data load.
Myths or Misconceptions related to Database Testing
1. Requires expertise: Database testing requires experts to carry out testing which
makes the entire process efficient and gives long-term functional stability to the
application.
2. Time-consuming: The process of database testing is lengthy but it helps to
enhance the database application’s overall quality.
3. Adds extra work bottlenecks: Conducting database testing helps to enhance the
quality and value of the overall work.
4. Expensive Process: Database testing needs expenses but it is a long-term
investment that leads to the long-term robustness of the application.
Database Testing Tools
Below are 5 automation tools that can be used in database testing:
1. Apache JMeter
Apache JMeter is an open-source performance testing tool that is used to test the
performance of database and web applications. It can be used for load testing, stress
testing, and functional testing of databases thus making them a versatile tool for
database testing.
It supports distributed testing for load testing and scalability testing.
It supports multiple protocols like LDAP, JDBC, etc.
It is possible to integrate Apache JMeter with other testing tools and frameworks.
2. DbFit
DbFit is an open-source tool that helps to create and maintain automated database
tests. It can be integrated with delivery tools to help automate testing.
It supports features like version control, data-driven testing, etc.
It is lightweight and easy to install.
It provides a simple and easy-to-understand syntax for creating test cases.
3. SQLTest
SQLTest is a database testing tool that is designed specifically for SQL Server
databases. It allows one to easily create and run automated tests.
It supports automated testing of stored procedures, triggers, etc.
It allows for easy sharing of test suites among the team members.
It has a feature to get a comprehensive report after each test execution to identify
issues.
4. Orion
Orion is an open-source tool that is used for the performance and stress testing of
databases. It is primarily designed for Oracle databases.
It supports multiple databases like Oracle, MySQL, DB2, etc.
It offers an easy-to-use interface for test configuration and execution.
It supports multi-threaded test case execution.
5. DBUnit
DBUnit is an open-source tool and is a JUnit extension. It provides a framework to
create test data, insert test data into the database, and verify data is correct after
execution.
It is easy to set up and use and requires no special training or skills.
It supports multiple databases like MySQL, Oracle, PostgreSQL, SQL Server, etc.
It can be used for both unit testing and integration testing.
What is a Mainframe?
The mainframe is a high-performance, high-speed multi-user computer system. The
mainframe machine system is the most secure, scalable, and reliable machine
system available. In other words, these systems are utilized for larger-scale
computing, which requires a high level of availability and security. Mainframe
systems are commonly employed in industries such as retail, insurance, finance, and
other essential areas where large amounts of data must be processed several times.
One can perform millions of instructions per second [up to 569,632 MIPS] with the
help of the following factors:
Maximum input/output bandwidth: With very high input and output bandwidth,
the links between drives and processors have few choke points.
Reliability: Mainframes frequently allow graceful degradation and servicing
while the system is running.
Reliable single-thread performance: This is critical for realistic database operations.
Maximum Input/Output Connectivity: Maximum input/output connectivity
indicates that mainframes excel at delivering large disc farms.
Mainframe Testing Methodologies
Some of the most commonly used Mainframe testing commands are as follows:
SUBMIT: This command is used to submit the background job.
CANCEL: This command is used to cancel the background job.
ALLOCATE: This command allocates a dataset.
COPY: This command is used to copy a dataset.
RENAME: This command is used to rename the dataset.
DELETE: This command is used to delete the dataset.
JOB SCAN: This command is used to fix the JCL with libraries, program files, and
so on without implementing it.
Prerequisites For Mainframe Testing
Below are some of the prerequisites of mainframe testing:
A login ID and password are required to access the application.
A basic understanding of ISPF commands.
The file names, file qualifiers, and kinds are all listed.
The following points should be checked before beginning mainframe testing.
1. Job:
Before performing a job, do a job scan (Command – JOBSCAN) to check for
problems.
The test class should be specified in the CLASS parameter.
By utilizing the MSGCLASS argument, one can direct the task output to a spool, a
JHS, or wherever else one wants.
Redirect the job’s email to a spool or a test mail ID.
For initial testing, comment out the FTP steps and point the job to a test server.
If the job generates an IMR (Incident Management Record), just comment
“TESTING PURPOSE” on the job or param card.
All of the job’s production libraries should be switched to test libraries.
It is not a good idea to leave the job unattended.
TIME parameter should be added with a specified time to avoid the job from
running in an infinite loop if there is an error.
Save the job’s output, which includes the spool. XDC can be used to save the
spool.
2. File:
Only make a test file of the required size. When storing data into successive files
with the same name, use GDGs (Generation Data Groups – Files with the same
name but sequential version numbers– MYLIB.LIB.TEST.G0001V00,
MYLIB.LIB.TEST.G0002V00, and so on).
The files’ DISP (Disposition – defines the procedure for keeping or deleting the
dataset following a normal or abnormal step or task termination) parameter should
be coded correctly.
To avoid the job going into HOLD, make sure all of the files utilized for job
execution are saved and closed appropriately.
If you’re using GDGs to test, make sure you’re pointing at the correct version.
3. Database:
Ensure that no undesired data is inserted, changed, or deleted while running the
job or online program.
Also, make sure you’re testing in the correct DB2 region.
4. Test Case:
Always check for boundary conditions such as an empty file, the first record being
processed, the last record being processed, and so on.
Include both positive and negative test conditions whenever possible.
Include test cases to validate if the modules have been utilized correctly if
standard procedures are used in the software, such as Checkpoint restart, Abend
Modules, Control files, and so on.
5. Test Data:
Before you start testing, make sure the test data is ready.
Never make changes to the test region’s data without first informing the user.
Other teams may be working with the same data, and their tests may fail.
Before copying or accessing the production files, sufficient authorization should be
obtained.
Mainframe Attributes
The following are the various mainframe attributes:
Multiprogramming:
The multiprogramming feature helps us to make the most of the CPU.
The computer runs many programs at the same time.
Time-sharing:
Foreground processing refers to time-share processing, whilst
Background processing refers to batch job processing. As a result, it is
referred to as Interactive Processing since it allows the user to interact
directly with the computer.
In a time-sharing system, each user has terminal device access to the
system.
Virtual storage:
As an extension of physical storage, virtual storage makes use of disc
storage.
It is a method of efficiently using memory to store and accomplish a large
number of operations.
Spooling:
The Spool stands for Simultaneous Peripheral Operations Online, and
it’s used to collect a program’s or application’s output.
If necessary, the spooled output is sent to output devices such as a
printer.
Batch processing:
Batch processing is a technology that allows us to complete any task in
pieces called jobs.
One can run one or more applications in a specific order depending on
the tasks at hand.
The job scheduler comes to a conclusion regarding the order in which
the jobs are executed.
To maximize the average production, jobs are arranged according to
their priority and class.
With the help of JOB CONTROL LANGUAGE, batch processing provides
us with the necessary information (JCL).
To begin, the business or development team builds test plans based on the Business
requirement document, System requirement document, other project documents, and
inputs. It also dictates how a particular item or process will be modified during the
release cycle. In the meantime, the testing team will collaborate with the development
and project management teams to prepare test scenarios and test cases in advance.
Step 2: Make a Schedule
Once the requirement document has been appropriately written, it will be turned over
to the development and testing teams. In addition, the testing schedule should be
created in line with the precise project delivery plan.
Step 3: Deliverables
After receiving the paper, they will review the deliverables. The deliverables should
also be well-defined, with no ambiguity, and meet the scope of the test objectives.
Step 4: Implementation
The implementation should next proceed in accordance with the plan and
deliverables. In most cases, the modified requirement in a release will directly affect
15-25% of the application. The remaining 60-75 % of the release will rely on out-of-
the-box features such as application and process testing. As a result, there will be a
need to test the Mainframe application twice-
Testing Requirements: The application will be tested for the features or changes
specified in the requirement document.
Testing Integration: This testing activity focuses purely on integration. The
complete procedure will be put to the test, as well as any other applications that
receive data from or transmit data to the application under test.
Step 5: Reporting
The test results will be shared with the development team on a regular basis after
that. The testing team should connect with the development team to make fast
modifications in crucial instances to maintain consistency.
Mainframe Testing Procedures to Follow
When undertaking mainframe testing, keep the following steps in mind:
Step 1: Smoke Testing
Start with smoke testing to see if the code deployed is in the right test environment. It
also ensures that the code is free of important flaws, saving time and effort for testers
who would otherwise have to test a bad build.
Step 2: Testing/System Testing
Following the smoke testing, one round of functionality or system testing will be done
to evaluate the functionalities of several models independently and in relation to one
another. The sorts of testing that must be performed while implementing System
Testing are listed below-
Batch testing: Conduct batch testing to verify that the test results on the output
files and data changes made by the batch job comply with the testing
specifications.
Online testing: Evaluate the mainframe applications’ front-end functionality via
online testing. Online testing covers a variety of topics, including user-friendliness,
data input validations, look and feel, and screen navigation. Exact entry fields,
such as interest on the plan, an insurance plan, and so on, should be tested in the
application.
Online-batch integration testing: On systems with batch processes and online
applications, online-batch integration testing can be performed. The online
process’s integration features with the backend process will also be tested here.
Essentially, this testing verifies the accuracy of the data flow and the interactions
between the screens and the backend system. Furthermore, the batch task is
utilized to verify data flow and communication across the online screens.
Database testing: Database testing is performed to ensure that the data stored
by transactions meet the system’s requirements. The databases that hold data from
mainframe applications such as IMS, IDMS, DB2, VSAM/ISAM, sequential datasets,
and GDGs are validated for their layout and data storage. The data
integrity and other database parameters may also be validated for optimal
performance during database testing.
Step 3: System Integration Testing
System integration testing is used to verify the functionality of systems that are
related to the system under test. Because it’s vital to test the interface and various
types of messages like Job Successful, Job Failed, Database Updated, and so on,
it’s run after unit tests. The data flow between modules and apps will also be checked
for accuracy. System integration testing is carried out to ensure that the build is ready
for deployment. One can execute the following tests during system integration
testing-
Batch Testing
Online Testing
Online -Batch Integration Testing
Step 4: Regression Testing
Regression testing is the most crucial part of any testing. Regression testing ensures
that batch jobs and online screens that do not directly relate to the system under test
are not affected by the current project release. Regression testing also ensures that
changes made to a module do not have an impact on the overall functionality of the
parent application and its integrated applications. To achieve successful regression
testing, a specific collection of test cases should be selected depending on their
complexity, and a test case repository should be built. The test set should be updated
whenever a new feature is added to the release.
Step 5: Performance Testing
The next step in mainframe testing is performance testing. The aim is to uncover
bottlenecks in key areas like front-end data, upgrading online databases, and
protecting the application’s scalability during performance testing. One may face the
following performance problems in Mainframe applications-
The online response time may be slow, causing user dissatisfaction.
Batch jobs and backend processes can take longer than expected, limiting online
users’ access to the system.
Issues with scalability.
To fix the issues listed above, run the application through the following tests-
Parameters for system integration.
Coding for application and database design.
Parameters of the system and the database.
Back-end job scheduling.
Step 6: Security Testing
Threats, hazards, and vulnerabilities are evaluated, and remedial actions for
applications and networks are recommended. Use cases in identity and access
management, risk and compliance management, data protection, and privacy policy
adherence should all be included in the security testing. To put it another way,
security testing is done to see how well an application is designed and constructed to
withstand anti-security attacks. The two types of security systems that should be
tested are mainframe security and network security. One must test the following
factors during security testing-
Authorization
Integrity
Authentication
Confidentiality
Availability
Appium and Selenium are open-source frameworks that are used to test mobile and
web applications.
The web interface is simple to use, making it suitable for non-programmers.
2. LambdaTest
LambdaTest is a cross-browser testing platform that is scalable, cloud-based, and
designed for both manual and automated software testing. It lets you test your public
or locally hosted website or web app on more than 2000 different browsers, browser
versions, operating systems, and resolutions. It provides a rapid preview of how the
site will seem and allows one to test the layout on 36 various devices with just one
click. On top of that, the platform lets one run Appium and Selenium scripts on a
scalable online Selenium Grid across mobile browsers on both iOS and Android.
Features:
Fully automated and interactive in real-time.
The user interface is fantastic and simple to use.
For real-time testing, a large number of browsers and mobile devices are
available.
3. HeadSpin
Another popular Connected Intelligence Platform is HeadSpin, which provides
mobile, 5G, web, and IoT applications. It unifies network, application, and device
monitoring and testing by integrating with all automation and testing frameworks. One
can test, monitor, and analyze any application running on any device, on any
network, or anywhere in the world using HeadSpin.
Features:
Profiling and debugging of local code.
Debugging from afar.
Testing for localization.
There are almost 500 tests running in parallel.
On a shared cloud device, there is access to more than 300 devices in more than
30 countries.
4. QTP
Quick Test Professional (QTP) now called UFT is used to test functional regression
test cases of the web-based application. This is used to perform automated functional
testing seamlessly without monitoring the system.
The tool is used for functional, regression, and service testing.
It is used to test web, desktop applications, and client-server.
There is a UFT extension in Chrome Store.
Some of the newly supported technologies are JDK 1.8, and XenDesktop 7.
UFT supports Windows 8.1, Windows Server 2012, and the Safari browser.
5. REXX
REXX is an interactive programming language that can execute system commands
such as TSO, ISPF, etc. It is easy to use by experts and casual users.
It has the capability to issue commands to its host environment.
It has the capability to call programs and functions written in other languages.
It has convenient built-in functions.
It has the debugging capability.
Best Practices For Mainframe Testing
1. Dry run of job: Doing a dry run of the job under test is always a smart idea.
Empty input files are used for the dry run. This procedure should be undertaken
for any jobs that are affected by the test cycle changes.
2. Complete test task setup: The test task setup should be completed well in
advance of the start of the test cycle. This will aid in the early detection of any JCL
errors, saving time during execution.
3. Set auto-commit to NO: Always set auto-commit to “NO” when accessing DB2
tables through SPUFI (the emulator’s option for accessing DB2 tables) to avoid
unintentional updates.
4. Confirm technical inventory: Don’t underestimate the importance of effective
project management and solution architect support for your project. Typically,
these projects focus on applications that have been critical to the business for a
long time. Confirming technical inventory and obtaining test and use case data are
the two greatest time and expense drivers for mainframe migration initiatives.
Make sure that your expertise is available and invested in the project.
5. Create required data in advance: It is considered best practice to create test
data in advance of the test cycle; the data should also be checked for
completeness.
Mainframe Testing Challenges and Troubleshooting
Every type of testing is a succession of trials and errors until you find the best system
possible. Testing on mainframes is no different. Throughout the process, the testing
team will be confronted with problems or troubleshooting. Some concerns that have
been often reported by testers are discussed below, as well as a suggested approach
that might be used to discover a solution.
Although a user handbook or training guide may be available, these are not the same
as the stated requirements.
Solution: The testing team should be involved in the Software Development Life
Cycle from the moment the system’s requirements are defined. They will be able to
verify that the criteria being specified are testable and feasible if they are involved
early in the process. This saves the team time, money, and effort while also ensuring
that the Software Development Process does not stall during the testing phase.
There may be times when current data should be utilized to meet a specific need.
Identifying the required data from the available data can be difficult at times.
Solution: Homegrown tools can be used to set up data as needed. Queries should
be developed ahead of time to retrieve existing data. In the event of a problem, a
request for the creation or cloning of required data can be made to the data
management team.
3. No impact analysis
It’s possible that the code impact will completely alter the system’s appearance and
functionality. Changes to test cases, scripts, and data may be necessary.
Solution: Impact analysis and a scope change management strategy should be in
place.
4. Ad-hoc Request
It’s possible that faults with upstream or downstream applications will demand end-to-
end testing. These unanticipated demands have the ability to derail the testing
process’s pre-determined timetable by adding time, effort, and other resources to the
execution cycle.
Solution: To prepare for unforeseeable issues throughout the testing process,
automation scripts, regression scripts, skeleton scripts, and any other backup plans
should be ready to use as soon as a problem arises. This cuts down on the total
amount of time and work required to complete the project.
Benefits of Mainframe Testing
The following are some of the benefits of successfully completing the mainframe
testing:
1. Optimized resource usage: It makes the most of the resources available and
utilizes resources optimally.
2. Avoid duplicate rework: It assists in avoiding duplicate rework.
3. Improved user experience: It improves the overall user experience.
4. Reduced production time: It cuts down on production downtime.
5. Increased customer retention: It assists us in increasing customer retention.
6. Reduced IT operations cost: It also assists us in lowering the overall cost of IT
operations.
Adhoc Testing
Adhoc testing is an informal type of testing performed with:
No documentation.
No test cases.
No test design.
As it is not based on any test cases and does not require documentation or a test
design, resolving the issues that are identified at the end becomes very difficult for
developers.
Sometimes very interesting, unexpected, or uncommon errors are found which would
never have been found if only the written test cases existed. Adhoc testing is also
used in acceptance testing.
Adhoc testing saves a lot of time. A good example of adhoc testing is when the client
needs the product by 6 PM today but product development will only be completed at
4 PM the same day. With only about 2 hours in hand, the developer and tester team
can test the system as a whole by giving some random inputs and checking for any
errors.
Types of Adhoc Testing :
Adhoc testing is divided into three types as follows.
1. Buddy Testing –
Buddy testing is a type of adhoc testing in which two people are involved, one
from the developer team and one from the tester team. After a module is
completed and unit tested, the tester can test it by giving random inputs, and the
developer can fix the issues early, guided by the currently designed test cases.
2. Pair Testing –
Pair testing is a type of adhoc testing in which two testers from the testing team
are involved in testing the same module. One tester performs the random tests
while the other maintains a record of the findings. When two testers are paired
they exchange their ideas, opinions, and knowledge, so good testing is performed
on the module.
3. Monkey Testing –
Monkey testing is a type of adhoc testing in which the system is tested with
random inputs, without any test cases, while the behavior of the system is tracked
and it is monitored whether all the functionalities of the system are working. As a
randomness approach is followed and there is no constraint on the inputs, it is
called monkey testing; a minimal sketch follows below.
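Purely as an illustration, here is a tiny monkey-testing loop in Python; parse_age()
is a hypothetical function under test, not taken from any real system.

import random
import string

def parse_age(text):
    # Hypothetical function under test: convert free-form user input to an age
    value = int(text.strip())
    if not 0 <= value <= 150:
        raise ValueError("age out of range")
    return value

random.seed(42)  # make the random inputs reproducible
crashes = []
for _ in range(1000):
    length = random.randint(0, 8)
    candidate = "".join(random.choice(string.printable) for _ in range(length))
    try:
        parse_age(candidate)
    except ValueError:
        pass  # cleanly rejecting bad input is acceptable behaviour
    except Exception as exc:  # anything else is an unexpected crash worth logging
        crashes.append((candidate, exc))

print(len(crashes), "unexpected crashes out of 1000 random inputs")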
Characteristics of Adhoc Testing:
It is good for finding bugs and inconsistencies that are not mentioned in the test cases.
When to conduct Adhoc testing :
When there is limited time in hand to test the system.
The errors which can not be identified with written test cases can be identified by
Adhoc testing.
This test helps to build a strong product which is less prone towards future
problems.
This testing can be performed any time during Software Development Life Cycle
Process (SDLC)
Disadvantages of Adhoc testing :
It does not provide any assurance that the error will be definitely identified.
Globalization Testing
2. Pincode format: The application should be designed in such a way that it handles
the postal code functionality properly. For example, if the user selects the country
India, then the Pincode field should only accept a 6-digit PIN code.
In India —> 6-Digit Pincode
In US —> 5-Digit-4-Digit
3. Phone number and mobile number format: The application should be able to
handle phone numbers, mobile number formats, and ISD codes of all countries.
In India —> +91
In UK —> +44
4. Currency format: The application should support all types of currency formats as
every country has its own currency format.
In India —> INR
In Canada —> CAD
5. Date and Time Format: The time and date format vary from country to country. The
application should be able to handle multiple formats.
In India —> DD MMMM YYYY
In US —> MM-DD-YYYY
6. Address format: The application should be tested in such a way that it can access
the address format for multiple countries.
In India —> Address order is name, city, state and postal code.
In Japan —> Address order is postal code, state, city.
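As an illustration of validating one of these locale-specific formats, here is a minimal
Python sketch for postal codes; the patterns cover only India and the US and are
simplified for the example.

import re

POSTAL_CODE_PATTERNS = {
    "IN": r"\d{6}",            # India: 6-digit PIN code
    "US": r"\d{5}(-\d{4})?",   # US: 5-digit ZIP with an optional +4 extension
}

def is_valid_postal_code(country, code):
    pattern = POSTAL_CODE_PATTERNS.get(country)
    if pattern is None:
        raise ValueError("no postal-code rule defined for " + country)
    return re.fullmatch(pattern, code) is not None

# Globalization test cases: the same field must honour each country's format
assert is_valid_postal_code("IN", "560001")
assert not is_valid_postal_code("IN", "5600")       # too short for an Indian PIN code
assert is_valid_postal_code("US", "90210-1234")
assert not is_valid_postal_code("US", "902101234")  # missing the hyphen
print("postal-code formats validated for IN and US")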
Types of Globalization Testing
There are two types of globalization testing:
1. Localization testing: Localization testing is a process of modifying the software
product according to each locale (language, code page, territory, etc.) that is to be
supported. The objective here is to give the product the right look and feel for a
target market, in keeping with its culture, location, and language. It is also known
as L10N testing.
It involves translation of the software and its presentation to the end-user.
The translation covers icons, graphics, user manuals, documentation, etc.
2. Internationalization testing: Internationalization testing is the process of developing
and planning the software in a way that allows the application to be localized for any
given language, culture, or region without requiring any changes in the source code.
It is also known as I18N testing.
It checks whether the application is working uniformly around various global
regions and cultures.
The aim here is to verify if the code can deal with all the international support with
no breaking of functionality.
It focuses on language compatibility testing, which involves verifying whether the
product behaves correctly in a particular language environment.
It involves UI validation, where the aim is to identify visual problems like graphical
issues, text overlapping, etc.
It involves installation testing, which means trying to install the app in different
native languages and checking whether the installation messages are displayed
correctly in those languages.
Internationalization testing involves interoperability testing that involves testing the
software over targeted cross platforms, app versions, operating systems, etc.
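As a small illustrative sketch (the translation catalogue below is made up, not from
any real product), a localization-completeness check can flag UI strings that have
not yet been translated for a locale:

TRANSLATIONS = {
    "en": {"login": "Log in", "logout": "Log out", "welcome": "Welcome"},
    "hi": {"login": "लॉग इन करें", "logout": "लॉग आउट", "welcome": "स्वागत है"},
    "ja": {"login": "ログイン", "logout": "ログアウト"},  # "welcome" has no Japanese string yet
}

def missing_translations(catalogue, base_locale="en"):
    # Report UI strings that exist in the base locale but are missing elsewhere
    base_keys = set(catalogue[base_locale])
    return {
        locale: sorted(base_keys - set(strings))
        for locale, strings in catalogue.items()
        if base_keys - set(strings)
    }

print(missing_translations(TRANSLATIONS))  # {'ja': ['welcome']} -> would fail L10N review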
Globalization Testing Approach
Below are the steps that can be followed for creating a globalized product that can be
released in multiple markets simultaneously:
1. Test Strategy and Planning: This phase involves identifying the I18N and L10N areas
for testing and creating test strategies for both the types of globalization testing.
2. Test Case Design: In this phase, test cases are designed for I18N and L10N
requirements.
3. Test Environment Setup: This step involves setting up the environment with a common
server supporting multiple locales, or as per the client’s requirement.
4. Test Execution: This step involves executing the designed test cases in the
configured setup as per the user expectations.
5. Defect Reporting and Analysis: Critical bugs are detected, reported and further
analyzed to find a solution to fix them.
6. Test Summary Report: Test summary report is created listing all the detected
defects with possible fixes.
Globalization Testing vs Localization Testing
Focus area: Globalization testing focuses the application’s capabilities on a generic,
global user base, whereas localization testing focuses on a subset of users in a given
culture or locale.
Mutation Testing
Initially, its high cost reduced the use of mutation testing, but now it is widely used
for languages such as Java and XML.
Initial Code:
if(a < b)
c = 10;
else
c = 20;
Changed Code:
if(a > b)
c = 10;
else
c = 20;
3. Statement Mutations:
In statement mutations, a statement is deleted or replaced by some other
statement.
Example:
Initial Code:
if(a < b)
c = 10;
else
c = 20;
Changed Code:
if(a < b)
d = 10;
else
d = 20;
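To illustrate the idea (this sketch is not from the source article), here is a tiny
Python example of how a test suite "kills" a mutant; max_of_two() is a hypothetical
unit under test and mutant_max_of_two() is its mutated copy with the relational
operator flipped, as in the example above.

def max_of_two(a, b):
    return b if a < b else a           # original code

def mutant_max_of_two(a, b):
    return b if a > b else a           # mutant: the condition has been inverted

def run_suite(candidate):
    # Return True if the candidate implementation passes every test case
    cases = [((2, 5), 5), ((7, 3), 7), ((4, 4), 4)]
    return all(candidate(a, b) == expected for (a, b), expected in cases)

assert run_suite(max_of_two)             # the original implementation passes
assert not run_suite(mutant_max_of_two)  # the mutant fails, i.e. it is "killed"
print("the test suite is strong enough to kill this mutant")

A mutant that survives all the tests signals a gap in the test suite.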
Tools used for Mutation Testing :
Judy
Jester
Jumble
PIT
MuClipse.
Advantages of Mutation Testing:
It brings a good level of error detection in the program.
It discovers ambiguities in the source code.
It finds and solves the issues of loopholes in the program.
It helps the testers to write or automate the better test cases.
It provides more efficient programming source code.
Disadvantages of Mutation Testing:
It is highly costly and time-consuming.
It is not applicable for black-box testing.
Some mutations are complex, and hence it is difficult to implement or run them
against various test cases.
Here, the team members who are performing the tests should have good
programming knowledge.
Selection of correct automation tool is important to test the programs.
Security Testing – Software Testing
Security Testing is a type of Software Testing that uncovers vulnerabilities in the
system and determines that the data and resources of the system are protected from
possible intruders. It ensures that the software system and application are free from
any threats or risks that can cause a loss. Security testing of any system is focused
on finding all possible loopholes and weaknesses of the system that might result in
the loss of information or repute of the organization. Security testing is a type of
software testing that focuses on evaluating the security of a system or application.
The goal of security testing is to identify vulnerabilities and potential threats and to
ensure that the system is protected against unauthorized access, data breaches, and
other security-related issues.
The goal of Security Testing:
The goals of security testing are:
To identify the threats in the system.
To measure the potential vulnerabilities of the system.
To help in detecting every possible security risk in the system.
To help developers fix security problems through coding.
The goal of security testing is to identify vulnerabilities and potential threats in a
system or application and to ensure that the system is protected against
unauthorized access, data breaches, and other security-related issues. The main
objectives of security testing are to:
Identify vulnerabilities: Security testing helps identify vulnerabilities in the system,
such as weak passwords, unpatched software, and misconfigured systems, that
could be exploited by attackers.
Evaluate the system’s ability to withstand an attack: Security testing evaluates the
system’s ability to withstand different types of attacks, such as network attacks,
social engineering attacks, and application-level attacks.
Ensure compliance: Security testing helps ensure that the system meets relevant
security standards and regulations, such as HIPAA, PCI DSS, and SOC2.
Provide a comprehensive security assessment: Security testing provides a
comprehensive assessment of the system’s security posture, including the
identification of vulnerabilities, the evaluation of the system’s ability to withstand
an attack, and compliance with relevant security standards.
Help organizations prepare for potential security incidents: Security testing helps
organizations understand the potential risks and vulnerabilities that they face,
enabling them to prepare for and respond to potential security incidents.
Identify and fix potential security issues before deployment to production: Security
testing helps identify and fix security issues before the system is deployed to
production. This helps reduce the risk of a security incident occurring in a
production environment.
Principle of Security Testing:
Below are the six basic principles of security testing:
Confidentiality
Integrity
Authentication
Authorization
Availability
Non-repudiation
Major Focus Areas in Security Testing:
Network Security
System Software Security
Client-side Application Security
Server-side Application Security
Authentication and Authorization: Testing the system’s ability to properly
authenticate and authorize users and devices. This includes testing the strength
and effectiveness of passwords, usernames, and other forms of authentication, as
well as testing the system’s access controls and permission mechanisms.
Network and Infrastructure Security: Testing the security of the system’s network
and infrastructure, including firewalls, routers, and other network devices. This
includes testing the system’s ability to defend against common network attacks
such as denial of service (DoS) and man-in-the-middle (MitM) attacks.
Database Security: Testing the security of the system’s databases, including
testing for SQL injection, cross-site scripting, and other types of attacks.
Application Security: Testing the security of the system’s applications, including
testing for cross-site scripting, injection attacks, and other types of vulnerabilities.
Data Security: Testing the security of the system’s data, including testing for data
encryption, data integrity, and data leakage.
Compliance: Testing the system’s compliance with relevant security standards
and regulations, such as HIPAA, PCI DSS, and SOC2.
Cloud Security: Testing the security of cloud-based systems, services, and configurations.
Types of Security Testing:
1. Vulnerability Scanning: Vulnerability scanning is performed with the help of
automated software to scan a system to detect known vulnerability patterns.
2. Security Scanning: Security scanning is the identification of network and system
weaknesses. Later on, it provides solutions for reducing these defects or risks.
Security scanning can be carried out in both manual and automated ways.
3. Penetration Testing: Penetration testing is the simulation of the attack from a
malicious hacker. It includes an analysis of a particular system to examine for
potential vulnerabilities from a malicious hacker who attempts to hack the system.
4. Risk Assessment: In risk assessment testing security risks observed in the
organization are analyzed. Risks are classified into three categories i.e., low,
medium, and high. This testing endorses controls and measures to minimize the
risk.
5. Security Auditing: Security auditing is an internal inspection of applications and
operating systems for security defects. An audit can also be carried out via line-
by-line checking of code.
6. Ethical Hacking: Ethical hacking is different from malicious hacking. The purpose of
ethical hacking is to expose security flaws in the organization’s system.
7. Posture Assessment: It combines security scanning, ethical hacking, and risk
assessments to provide an overall security posture of an organization.
8. Application security testing: Application security testing is a type of testing that
focuses on identifying vulnerabilities in the application itself. It includes testing the
application’s code, configuration, and dependencies to identify any potential
vulnerabilities.
9. Network security testing: Network security testing is a type of testing that focuses on
identifying vulnerabilities in the network infrastructure. It includes testing firewalls,
routers, and other network devices to identify potential vulnerabilities.
10. Social engineering testing: Social engineering testing is a type of testing that
simulates phishing, baiting, and other types of social engineering attacks to
identify vulnerabilities in the system’s human element.
Note: Tools such as Nessus, OpenVAS, and Metasploit can be used to automate and
simplify the process of security testing. It’s important to ensure that security
testing is done regularly and that any vulnerabilities or threats identified during
testing are fixed immediately to protect the system from potential attacks.
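As a small illustration of application-level security testing, the sketch below checks
that a classic SQL-injection payload is rejected by an input validator before it can
reach the database. The class InputValidator and its rule are hypothetical and shown
only for illustration; real applications should rely on parameterized queries rather than
input filtering alone:
class InputValidator {
    // Illustrative rule: allow only letters, digits and a few safe punctuation characters.
    static boolean isSafeUsername(String s) {
        return s != null && s.matches("[A-Za-z0-9_.-]{1,32}");
    }
}
class SecurityTest {
    public static void main(String[] args) {
        // A typical SQL-injection payload must be rejected.
        check(!InputValidator.isSafeUsername("admin' OR '1'='1"), "injection payload rejected");
        // A normal username must still be accepted.
        check(InputValidator.isSafeUsername("alice_01"), "valid username accepted");
        System.out.println("Security checks passed");
    }
    static void check(boolean ok, String what) {
        if (!ok) throw new AssertionError("FAILED: " + what);
    }
}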
Advantages of Security Testing:
1. Identifying vulnerabilities: Security testing helps identify vulnerabilities in the
system that could be exploited by attackers, such as weak passwords, unpatched
software, and misconfigured systems.
2. Improving system security: Security testing helps improve the overall security of
the system by identifying and fixing vulnerabilities and potential threats.
3. Ensuring compliance: Security testing helps ensure that the system meets
relevant security standards and regulations, such as HIPAA, PCI DSS, and SOC2.
4. Reducing risk: By identifying and fixing vulnerabilities and potential threats before
the system is deployed to production, security testing helps reduce the risk of a
security incident occurring in a production environment.
5. Improving incident response: Security testing helps organizations understand the
potential risks and vulnerabilities that they face, enabling them to prepare for and
respond to potential security incidents.
Disadvantages of Security Testing:
1. Resource-intensive: Security testing can be resource-intensive, requiring
significant hardware and software resources to simulate different types of attacks.
2. Complexity: Security testing can be complex, requiring specialized knowledge and
expertise to set up and execute effectively.
3. Limited testing scope: Security testing may not be able to identify all types of
vulnerabilities and threats.
4. False positives and negatives: Security testing may produce false positives or
false negatives, which can lead to confusion and wasted effort.
5. Time-consuming: Security testing can be time-consuming, especially if the system
is large and complex.
6. Difficulty in simulating real-world attacks: It’s difficult to simulate real-world
attacks, and it’s hard to predict how attackers will interact with the system.
3. Hybrid Accessibility Testing:
The hybrid method combines manual and automated checks and is generally the best
way to ensure that a website is accessible. Some checks, such as navigating and
scrolling the page with only the keyboard, cannot be verified with the help of any
software and must be performed manually.
Example of Accessibility Testing (Sample Test Cases):
Below are some of the sample test cases for accessibility testing:
1. If the labels are correctly placed and written or not.
2. If the instructions are provided as a part of user documentation or manual.
3. If the instructions provided are easy to understand or not.
4. If the application has followed all the principles and guidelines or not.
5. Is a meaningful caption provided or not?
6. If the instructions are given or not.
7. If the audio and video-related content is properly heard by all disabled people.
8. If the content is clear, concise, or understandable or not.
9. If the training provided for users with disabilities enables them to become familiar
with the software.
10. If the highlighting is viewable with inverted colors or not.
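Some of the sample test cases above can be partially automated. The sketch below shows
a deliberately simple scan of an HTML string for <img> tags that lack an alt attribute,
one of the most common accessibility (WCAG) findings. The regular-expression approach is
only illustrative; a real project would use a dedicated tool such as WAVE:
import java.util.regex.Matcher;
import java.util.regex.Pattern;
class AltTextCheck {
    public static void main(String[] args) {
        String html = "<img src=\"logo.png\"><img src=\"chart.png\" alt=\"Sales chart\">";
        // Find every <img ...> tag, then flag the ones with no alt attribute.
        Matcher img = Pattern.compile("<img\\b[^>]*>", Pattern.CASE_INSENSITIVE).matcher(html);
        while (img.find()) {
            String tag = img.group();
            if (!tag.toLowerCase().contains("alt=")) {
                System.out.println("Accessibility issue: image without alt text -> " + tag);
            }
        }
    }
}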
Benefits of Accessibility Testing:
Efficient access: Accessibility testing makes sure that the product provides easy
and efficient access to users with disabilities or challenges.
Increase market share: It helps to increase the audience reach by making the
product disabled-friendly and increasing the target audience thus increasing the
market share.
Improves efficiency: Accessibility testing improves the maintainability and efficiency
of the product.
Legal compliance: Product companies can avoid a host of legal tangles and
penalties by implementing accessibility testing for their products and services.
Improve code quality: Accessibility testing increases the scope of usability testing
and creates a high-quality codebase for the finished product and services.
Improved SEO: Accessibility-friendly websites contain rich text content, thus
enabling search engines to locate them while looking up relevant content easily.
Accessibility Testing Tools:
Below are the top 5 accessibility testing tools:
1. Wave
The WAVE tool was developed by WebAIM to evaluate the accessibility of web
content. It evaluates the accessibility of web content by annotating the copy of the
web page.
It performs accessibility evaluation on the browser and does not save anything on
the server.
It can identify many accessibility and Web Content Accessibility Guidelines
(WCAG) errors.
It also facilitates human evaluation of the web content.
2. SortSite
SortSite is a one-click user experience testing tool for Mac, OS X, and Windows. This
tool is used for websites, inside or outside the firewall.
This tool is compatible with Mobile browsers, Desktop browsers, and Internet
Explorer.
It checks for HTTP error codes and script errors.
It scans the entire website for quality issues including browser compatibility,
accessibility, broken links, etc.
3. JAWS
Job Access With Speech (JAWS) is the world’s most popular screen reader. It is
developed for individuals whose vision loss prevents them from seeing the screen
content and navigating with a mouse.
It includes two multi-lingual synthesizers, Eloquence, and Vocalizer Expressive.
It works with IE, Firefox, and Microsoft Office and supports Windows and
touchscreen gestures.
It provides Braille input from the Braille Keyboard and also includes drivers for
Braille display.
4. QualityLogic
It provides a combination of automated and manual testing services to evaluate
website accessibility.
This tool is used by visually impaired QA engineers who know exactly what is
needed to make a website accessible.
It helps to discover issues like structural issues, contrast errors, etc.
It also creates a compliance report containing a summary of errors detected.
5. DYNO Mapper
It is one of the best website accessibility testing tools for testing site accessibility on
all online applications.
It includes daily keyword tracking, content inventory, and site auditing.
It evaluates the HTML content of the website and can create a sitemap for any
URL.
It also imports XML files to generate the sitemap.
Myths about Accessibility Testing:
Below are some of the myths associated with accessibility testing:
Myth: Accessibility testing is costly.
Fact: If accessibility issues are identified at the design phase, before extensive
testing begins, the cost and extra rework can be greatly reduced.
Myth: Converting an inaccessible website into an accessible one is too time-consuming.
Fact: It is not necessary to integrate all modifications at one time; prioritize things
and work on the basic needs first.
Myth: Accessibility testing is boring.
Fact: It is not necessary to include only text in the website to make it accessible;
images can also be included to make it more attractive. The major concern is to make
the website accessible for every category of user.
Myth: Accessibility testing is only for disabled persons.
Fact: Accessibility benefits all types of users and enhances the credibility of the
software.
Conclusion:
In software engineering, accessibility testing helps persons with disabilities as well
as other users. Because the accessibility guidelines for web applications are complex,
a common workaround is to develop one version of the web application or website for
general users and a separate one for users with disabilities.
Structural Software Testing
Structural testing is a type of software testing that uses the internal design of the
software for testing. In other words, software testing that is performed by a team
which knows the internal implementation of the software is known as structural
testing.
Structural testing is basically related to the internal design and implementation of the
software i.e. it involves the development team members in the testing team. It
basically tests different aspects of the software according to its types. Structural
testing is just the opposite of behavioral testing.
Types of Structural Testing:
There are 4 types of Structural Testing:
Volume Testing
Volume Testing is a type of software testing which is carried out to test a software
application with a certain amount of data. The amount used in volume testing could
be a database size or it could also be the size of an interface file that is the subject of
volume testing.
While testing the application with a specific database size, the database is extended
to that size and then the performance of the application is tested. When the
application needs to interact with an interface file, this could mean either reading
from or writing to the file. A sample file of the required size is created and the
functionality of the application is then tested with that file in order to test the
performance.
In volume testing a huge volume of data is acted upon the software. It is basically
performed to analyze the performance of the system by increasing the volume of
data in the database. Volume testing is performed to study the impact on response
time and behavior of the system when the volume of data is increased in the
database.
Volume Testing is also known as Flood Testing.
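One simple way to prepare such a test is to generate the large sample file (or the bulk
database rows) with a small script. The sketch below writes a CSV file with a
configurable number of logically valid rows; the file name, column layout, and row count
are made up for illustration:
import java.io.BufferedWriter;
import java.io.FileWriter;
import java.io.IOException;
class VolumeDataGenerator {
    public static void main(String[] args) throws IOException {
        int rows = 1_000_000;  // adjust to the data volume the test requires
        try (BufferedWriter out = new BufferedWriter(new FileWriter("volume_test_data.csv"))) {
            out.write("id,name,amount\n");
            for (int i = 1; i <= rows; i++) {
                // Synthetic but logically correct rows, as volume testing requires.
                out.write(i + ",customer_" + i + "," + (i % 1000) + ".00\n");
            }
        }
        System.out.println("Generated " + rows + " rows of test data");
    }
}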
Characteristics of Volume Testing:
Following are the characteristics of Volume Testing:
The performance of the software declines over time as a huge amount of data
accumulates.
The test data is usually created with a test data generator.
Only a small amount of data is tested during the development phase.
The test data needs to be logically correct.
The test data is used to assess the performance of the system.
Objectives of Volume Testing:
The objectives of volume testing are:
To recognize the problems that may arise with a large amount of data.
To check the system's performance as the volume of data in the database increases.
To find the point at which the stability of the system degrades.
To identify the capacity of the system or application.
Volume Testing Attributes:
Following are the important attributes that are checked during the volume testing:
System’s Response Time:
During volume testing, the response time of the system or application is tested. It
is also checked whether the system responds within a finite time or not. If the
response time is too large, the system is redesigned.
Data Loss:
During volume testing, it is also verified that there is no data loss. If there is
data loss, some key information might be missing.
Data Storage:
During volume testing, it is also checked whether the data is stored correctly or
not. If the data is not stored correctly, it is restored to the proper place.
Data Overwriting:
In volume testing, it is checked whether data is overwritten without prior
notification to the developer. If it is, the developer is notified.
Volume Testing is a type of Performance Testing.
Scalability Testing – Software Testing
Scalability Testing is a type of non-functional testing that measures the performance
of a network, system, application, product, or process in terms of its ability to
scale up or scale down the number of user requests or other such performance
attributes. It can be carried out at a hardware, software, or database level.
Scalability is defined as the ability of a network, system, application, product, or
process to perform its function correctly when changes are made in the size or volume
of the system to meet a growing need. Scalability testing ensures that a software
product can manage the scheduled increase in user traffic, data volume, transaction
count frequency, and many other things. It tests the ability of the system, processes,
or database to meet a growing need.
The aim of scalability testing is to measure the point at which the software product or
the system stops scaling and to identify the reason behind it. The parameters used for
this testing differ from one application to another. For example, scalability testing
of a web page depends on the number of users, CPU usage, and network usage, while
scalability testing of a web server depends on the number of requests processed.
Objective of Scalability Testing:
The objective of scalability testing is:
To determine how the application scales with increasing workload.
To determine the user limit for the software product.
To determine client-side degradation and end user experience under load.
To assess the system’s performance under various network circumstances, such as
latency and bandwidth fluctuations, in order to guarantee dependable operation in
a range of settings.
To determine whether the system is capable of withstanding scenarios of high usage,
making sure that unexpected spikes in traffic can be accommodated without
causing performance issues.
To guarantee that the system’s scalability prevents performance decline and
maintains acceptable response times, both of which improve user experience.
To determine server-side robustness and degradation.
To help developers improve the system design or code by pointing out locations that
could become bottlenecks when the load grows.
To evaluate the effective use of system resources, including CPU, memory and
network bandwidth, in relation to the system’s increasing load, in order to
guarantee resource management.
To make sure that the system satisfies performance criteria and offers a satisfying
user experience by assessing the system's response time under various loads.
Scalability Testing Attributes:
Response Time: Response time is the time consumed between the user’s request and
the application’s response. Response time may increase or decrease based on
different user loads on the application. Typically, the response time of an
application increases as the user load increases. An application with a lower
response time is considered a higher-performing application.
Throughput: Throughput is the measurement of number of requests processed in a
unit time by the application. It differs from one application to another as in web
application it is measured in number of user requests processed in a unit time
whereas in database application it is measured in number of queries processed in
a unit time.
Performance measurement with number of users: Depending on the application type, it is
always tested for the number of users that it can support without its breakdown or
busy standby situation.
Threshold load: Threshold load is the number of requests or transactions the
application can process with desired throughput.
CPU Usage: CPU Usage is the measurement of the CPU utilization while executing
application code instructions. It is basically measured in terms of the
unit Megahertz.
Memory Usage: Memory usage is the measurement of the memory consumed for
performing a task by an application. It is basically measured in terms of the
unit bytes.
Network Usage: Network usage is the measurement of the bandwidth consumed by an
application under test. It is measured in terms of bytes received per second,
frames received per second, segments received and sent per second etc.
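The two attributes most often reported, response time and throughput, can be measured
with a very small load driver. The sketch below fires a fixed number of requests at a
hypothetical URL from a pool of worker threads and prints the average response time and
requests per second; the endpoint, user count, and request count are assumptions, and
dedicated tools such as JMeter do the same job with far more control:
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicLong;
class MiniLoadDriver {
    public static void main(String[] args) throws Exception {
        String url = "http://localhost:8080/api/health";   // hypothetical endpoint
        int users = 20, requestsPerUser = 50;
        HttpClient client = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder(URI.create(url)).build();
        AtomicLong totalMillis = new AtomicLong();
        ExecutorService pool = Executors.newFixedThreadPool(users);
        long start = System.nanoTime();
        for (int u = 0; u < users; u++) {
            pool.submit(() -> {
                for (int i = 0; i < requestsPerUser; i++) {
                    long t0 = System.nanoTime();
                    try {
                        client.send(request, HttpResponse.BodyHandlers.discarding());
                    } catch (Exception e) {
                        // A real test would record failures separately.
                    }
                    totalMillis.addAndGet((System.nanoTime() - t0) / 1_000_000);
                }
            });
        }
        pool.shutdown();
        pool.awaitTermination(10, TimeUnit.MINUTES);
        long elapsedMillis = Math.max(1, (System.nanoTime() - start) / 1_000_000);
        int totalRequests = users * requestsPerUser;
        System.out.println("Average response time: " + (totalMillis.get() / totalRequests) + " ms");
        System.out.println("Throughput: " + (totalRequests * 1000L / elapsedMillis) + " requests/second");
    }
}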
Steps of Scalability Testing:
Following are the steps involved in scalability testing:
Define a process that is repeatable for executing scalability test.
Determine the criteria for scalability test.
Determine the software tools required to carry out the test.
Set the testing environment and configure the hardware required to execute a
scalability test.
Create and verify visual script.
Create and verify the load test scenarios.
Execute the test.
Evaluate the result.
Generate required report.
Advantages of Scalability Testing:
It provides more accessibility to the product.
It detects issues with web page loading and other performance issues.
It finds and fixes the issues earlier in the product which saves a lot of time.
It ensures the end user experience under the specific load. It provides customer
satisfaction.
It helps in effective tool utilization tracking.
Disadvantages of Scalability Testing:
Sometimes, it fails to find the functional errors or issues in the product.
Some automation tools used for scalability testing are costly, which ultimately
increases the budget of the product.
Team members involved in this testing technique should have a high level of testing
skills.
Testing some parts of the product may consume more time than expected.
Unexpected results may still arise after launching the product in the customer
environment.
Scalability testing is a type of software testing that verifies a system’s ability to scale
up or down as the workload increases or decreases. This testing is important for
ensuring that the system can handle increasing amounts of traffic, data, or users
without degrading performance or stability.
Key points of scalability testing:
Define the performance metrics: Before conducting scalability testing, it’s essential to
define the performance metrics that you will measure. These may include
response time, throughput, concurrency, and resource utilization.
Identify the scalability factors: The scalability factors are the elements of the system
that may impact its ability to scale, such as the number of users, the amount of
data, or the complexity of the system. Identify the scalability factors and determine
the maximum and minimum values for each.
Define the test scenarios: Define the test scenarios that you will use to measure the
system’s scalability. These scenarios should simulate different levels of workload
and traffic and should be designed to test the system’s ability to handle increasing
levels of demand.
Prepare the test environment: Set up the test environment to replicate the production
environment as closely as possible. This includes hardware, software, and
network configurations.
Conduct the scalability tests: Run the scalability tests and monitor the system’s
performance metrics. Use the test results to identify any bottlenecks or
performance issues.
Analyze the test results: Analyze the test results to identify the system’s performance
characteristics under different levels of workload and traffic. Use this information
to optimize the system’s performance and scalability.
Some tools that can be used for scalability testing include Apache JMeter, HP
LoadRunner, and Gatling. These tools can simulate different levels of traffic and
workload and measure the system’s performance metrics.
In summary, scalability testing is an important part of software testing that helps
ensure that the system can handle increasing levels of workload and traffic without
degrading performance or stability. By following best practices and using the right
tools, scalability testing can help optimize the system’s performance and scalability.
Stability Testing – Software Testing
Stability Testing Process:
Test Planning: Considering the expected usage patterns of the system and the need
for constant performance over an extended period of time, define the general
goals and aims of stability testing.
Test Case Design: To evaluate the stability of the system, provide detailed test
cases that match real-world usage patterns and circumstances.
Test Case Review: Examine the test cases for accuracy and completeness to
guarantee their effectiveness and quality.
Test Execution: Conduct stability tests to evaluate the system’s capacity to sustain
stable operation for a prolonged period of time and in a variety of circumstances.
Report Defects: Determine and record any flaws, irregularities, or problems found
during stability testing to help in their repair and in improving the system.
Effects of not performing Stability Testing:
If stability testing is not carried out, the system may slow down when handling large
amounts of data.
Without stability testing, the system may crash suddenly.
In the absence of stability testing, the system's behavior may be abnormal when it is
moved to a different environment.
In the absence of stability testing, the system's performance decreases, which in turn
can have a bad effect on the business.
Stability Testing covers the following Parameters:
Memory Usage
Efficiency of CPU performance.
Transaction responses and Transaction per second.
Throughput – The amount of data received from the server by the user at a time.
Hits Per Second – The number of requests received by the application per second.
Checking the disk space usage.
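One common stability check, heap growth over a long run, can be monitored with nothing
more than the JVM's own Runtime API. The sketch below drives a placeholder doWork()
operation repeatedly and logs memory usage at intervals so that a steady upward trend (a
possible leak) becomes visible; the workload, duration, and sampling interval are
assumptions made for illustration:
class StabilityMonitor {
    public static void main(String[] args) throws InterruptedException {
        Runtime rt = Runtime.getRuntime();
        long iterations = 0;
        long endTime = System.currentTimeMillis() + 8 * 60 * 60 * 1000L; // e.g. an 8-hour soak
        while (System.currentTimeMillis() < endTime) {
            doWork();                       // the operation under test (placeholder)
            iterations++;
            if (iterations % 10_000 == 0) { // sample memory usage periodically
                long usedMb = (rt.totalMemory() - rt.freeMemory()) / (1024 * 1024);
                System.out.println("iterations=" + iterations + " usedHeap=" + usedMb + " MB");
                Thread.sleep(100);          // brief pause so logging is not the bottleneck
            }
        }
    }
    // Placeholder for the real workload; here it just builds and discards a string.
    static void doWork() {
        StringBuilder sb = new StringBuilder();
        for (int i = 0; i < 100; i++) sb.append(i);
    }
}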
Testing Tools used in Stability Testing:
1. Apache JMeter
2. NeoLoad
3. WebLOAD
4. LoadRunner
5. HeavyLoad
6. IntelBurn Test
Advantages of Stability Testing:
It gives the limit of the data that a system can handle practically.
It provides the confidence on the performance of the system.
It determines the stability and robustness of the system under load.
Stability testing leads to a better end user experience.
Spike Testing Tools:
All performance testing tools can be used to perform spike testing, since spike
testing is a type of performance testing, but some specific tools are commonly
used for it. The commonly used spike testing tools are:
1. LoadRunner
2. Apache JMeter
Advantages of Spike Testing
Spike testing helps in maintaining the system under extreme load.
Spike testing saves the system or software application from crashing.
It reduces the chances of failure for the system or software application.
Disadvantages of Spike Testing
Spike testing requires experts to perform it.
Spike testing can be highly costly.
Negative Testing – Software Testing
Examples of Negative Testing:
Login page: Blank user ID with a blank password, correct user ID with an incorrect
password, incorrect user ID with an incorrect password, incorrect user ID with the
correct password, and so on.
Uploading images: Uploading image files of size out of permissible size limits,
uploading image files with an invalid image file type.
Uploading documents: Uploading documents with invalid file types. For example, if
only pdf is allowed and a .docx file is being uploaded.
Navigations in the application: Tester may test the invalid navigation route in the
application that is different from the standard path.
Negative Test Cases
Negative test cases are the test cases the team creates specifically to exercise
negative testing of the application. The team uses the following testing efforts:
Data Bound Test: The team tests all the upper and lower bounds of data fields.
Correspondence between data and field types: The team tests how the
application reacts when wrong data is entered into a control.
Field Size Test: It checks that the user cannot enter more characters than the field
allows and that an error message is shown when the limit is exceeded.
Necessary Data Test: The test ensures that all required data on the screen is
validated before critical data is submitted.
Numeric Bound Test: This test ensures that the negative test cases are accurate;
the team analyzes values at both the lower and upper bounds.
Implanted Quote Test: Software systems often face issues when end-users store
information containing a single quote. So, for all the screens that accept text, the
team should test inputs containing single quotes.
Modification in Performance: This test contains test cases that compare previous
and current release performance statistics which can help to identify potential
performance problems.
Web Session Testing: The testing team builds test cases to exercise web pages
within the application that do not require a user login.
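A few of the login-related negative cases from the beginning of this section can be
expressed directly as tests. The AuthService class and its authenticate() method below
are hypothetical stand-ins for the real login logic; the point of the sketch is that
every invalid combination of credentials must be rejected:
class AuthService {
    // Hypothetical authentication logic: a single hard-coded account, for illustration.
    static boolean authenticate(String user, String password) {
        return "alice".equals(user) && "s3cret".equals(password);
    }
}
class NegativeLoginTest {
    public static void main(String[] args) {
        expectRejected("", "");               // blank user ID with a blank password
        expectRejected("alice", "wrong");     // correct user ID with an incorrect password
        expectRejected("bob", "wrong");       // incorrect user ID with an incorrect password
        expectRejected("bob", "s3cret");      // incorrect user ID with the correct password
        System.out.println("All negative login cases were rejected as expected");
    }
    static void expectRejected(String user, String password) {
        if (AuthService.authenticate(user, password))
            throw new AssertionError("Invalid credentials were accepted: " + user);
    }
}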
How to Perform Negative Testing?
Consider all possible cases: Initially it is important to think about the possible
scenarios that could affect your application negatively.
Prioritize the inputs: While exploring scenarios, we need to prioritize the testing
parameters so that no time or money is wasted.
Design test cases: Now we build test cases that include the data inputs on which
the application may crash. That is exactly what we don't want to happen when a
client uses the product.
Eliminate security pitfalls: The security pitfalls should be eradicated while
prioritizing the test cases.
Benefits of Negative Testing
Helps identify incorrect bug processing: Negative testing helps to confirm if the
software code stopped managing a programmed use case, thus avoiding
application failures caused by faulty bug processing.
Covers all aspects: It covers all bases and enhances the possibility by covering
each type of error. To ensure all test cases are covered, one round of negative
tests is performed before positive testing.
Ensures good quality product: Implementation of negative testing ensures a
product is of good quality with zero or negligible vulnerabilities.
Helps to maintain a clean database: Negative testing increases the possibility that
only valid information is stored and displayed by the application and thus the
database will be in good condition as it will contain only valid data.
Positive Testing – Software Testing
Positive testing describes how the developed application behaves for a valid,
positive set of data. It is implemented to make sure that the developed application
meets the client's requirements. It also verifies whether all the inputs specified in
the application are working properly or not.
Example 1:
Enter Name in Capital Letters: (as Input)
GEEKSFORGEEKS
The name is given in capital letters. So, here the requirement meets as we expected.
Hence the testing is implemented correctly for Positive Testing.
Example 2:
Enter Name in Capital Letter: (as Input)
geeksforgeeks
The name is given in small letters. So, here the positive testing fails because the
entered input is against the requirement, and it is considered as negative testing.
Example 3:
<input type="file" accept="application/pdf" required>
Condition 1: The input accepts only a file.
Condition 2: It will accept only files of type PDF.
If we input the PDF file type, the requirement will get fulfilled. Hence, it is considered
as positive testing.
Execution of Positive Testing
Two different techniques can be used for Positive Testing Validation-
Boundary Value Analysis
Equivalence Partitioning
Let’s discuss these techniques in detail.
1. Boundary Value Analysis: It is related to the valid partition of the input data
range. It consists of 2 boundary values – the lower bound and the upper bound. It
checks the values at the lower boundary and the upper boundary to ensure positive
validation of test cases.
Example:
<input type="number" min="1" max="4">
Here,
1. The range is between 1-4.
2. The lower boundary value is 1 and the upper boundary value is 4.
3. It will accept input from this range only.
4. It will not accept numbers outside the given range.
5. Values in the range 1-4 are considered as positive test cases.
Let us consider an example
A= 1, B= 4.
The Test Cases that we designed are-
A, A+1, B-1, B.
Test Case 1: Accepts A (i.e 1)
Test Case 2: Accepts A+1 (i.e 1+1 =2)
Test Case 3: Accepts B-1 (i.e 4-1=3)
Test Case 4: Accepts B (i.e 4)
These are all Positive Test Cases.
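In code, the same four boundary cases (plus the two values just outside the range) might
look like the sketch below; RangeField.accepts() is a hypothetical stand-in for the
field's validation logic, mirroring min="1" and max="4":
class RangeField {
    static boolean accepts(int value) {
        return value >= 1 && value <= 4;   // mirrors min="1" max="4"
    }
}
class BoundaryValueTest {
    public static void main(String[] args) {
        int[] shouldPass = {1, 2, 3, 4};   // A, A+1, B-1, B  -> positive test cases
        int[] shouldFail = {0, 5};         // just outside the boundaries -> negative cases
        for (int v : shouldPass)
            if (!RangeField.accepts(v)) throw new AssertionError(v + " should be accepted");
        for (int v : shouldFail)
            if (RangeField.accepts(v)) throw new AssertionError(v + " should be rejected");
        System.out.println("Boundary value checks passed");
    }
}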
2. Equivalence Partitioning: In this, the test data is divided into ‘n’ partitions, and
the input data which satisfies the valid data will be considered as positive test cases
for the system.
Example: To appear for a particular exam, one should be above 18 years of age and
below 33 years of age. This is the condition to appear for the exam.
Here the valid partition is in the range of 18-32 and Invalid partitions are <=17 and
>=33.
Test Case 1:
if(age>=18 && age<=32)
Then “VALID”
Test Case 2:
if(age <=17)
Then “INVALID”
Test Case 3:
if(age >=33)
Then “INVALID”
Here, Test Case 1 is the only positive test case.
In case of positive testing, it will accept only the test case 1 for the system.
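The same partitions can be exercised in code with one representative value from each
partition; isEligible() below is a hypothetical helper implementing the age rule:
class ExamEligibility {
    static boolean isEligible(int age) { return age >= 18 && age <= 32; }
    public static void main(String[] args) {
        System.out.println(isEligible(25));   // valid partition 18-32  -> true (positive case)
        System.out.println(isEligible(15));   // invalid partition <=17 -> false
        System.out.println(isEligible(40));   // invalid partition >=33 -> false
    }
}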
Features of Positive Testing:
Evaluation of Functionality: Positive testing evaluates the functionality of the
software system to ensure that it meets the intended requirements.
Accurate Results: This testing methodology aims to provide accurate and expected
results by providing valid input data.
Detection of Errors: Positive testing helps detect any errors or defects in the
software system and ensures that they are identified and fixed during the testing
phase.
Improved Software Quality: Positive testing plays a crucial role in improving the
overall quality of the software system by identifying and fixing issues and defects
before the software is released to the end-users.
Increased Confidence: Positive testing increases the confidence of the development
team and the stakeholders that the software system is functioning correctly and as
per the intended design.
Time-Saving: Positive testing is a time-efficient method of testing as it focuses on
testing the expected functionalities and features of the software system.
Advantages of Positive Testing:
It takes less time than Negative testing since it covers only valid test cases.
It verifies that all the requirements are met.
It makes sure that the software is working perfectly as built.
Positive Testing saves the efforts of a tester by identifying the wrong build in the
initial stages.
Verifications of the product will be done with a known set of test cases/conditions.
It helps in improving the quality of the code.
Compared to other testing techniques, positive testing scenarios will have a lower
defect count.
It accurately tests or checks the expected behavior of the application.
Disadvantages of Positive Testing:
Positive Testing will not carry out all possible test cases.
It will not handle the unexpected error of the product/application.
A valid set of data must be provided properly for each test case, otherwise the test
cases will not be covered.
Specifying positive test cases for a large amount of data requires special attention
and learning.
It does not ensure the accuracy of the product completely.
It is less efficient as compared to Negative Testing.
The main difference between positive testing and negative testing is that positive
testing checks the application with valid input data to confirm that it works as
expected, while negative testing checks the application with invalid input data to
confirm that it handles errors gracefully.
Endurance Testing – Software Testing
Endurance testing is a type of software testing in which a system is tested with a
significant load extended over a long period of time to analyze its behavior and the
corresponding parameters of the software application. Endurance testing is also
known as Soak Testing.
Endurance testing is used to determine how well a system or application can handle
prolonged usage or a large number of users over a long period. The goal of
endurance testing is to identify any issues that may arise when the system is used
for an extended period, such as memory leaks, performance degradation, or other
problems that may not be immediately apparent during shorter testing periods.
During endurance testing, the system or application is subjected to a heavy workload
or a large number of users for an extended period, typically hours or days. The
system is then monitored for any performance issues or errors, and any problems
that are identified are reported and addressed by the development team.
Endurance testing is particularly important for systems that will be used in a
production environment, as it can help identify issues that may not be immediately
apparent during shorter testing periods. It is also useful for identifying
performance bottlenecks and capacity limits, which can help improve the overall
performance of the system.
Endurance testing can be automated by using specialized tools and scripts, which
can simulate the expected usage patterns and loads on the system, this way it can
run for an extended period, even days or weeks.
Endurance testing includes examining a system while it withstands a huge load for a
long period and measuring the reaction parameters of the system under such
conditions. Endurance testing includes the testing of the operating system and the
computer hardware up to or above their maximum loads for a long period.
Hence endurance testing can be defined as a software testing type where a system
or software application is tested with a load extended over a long period to observe
the behavior of the software under such conditions.
It is performed at the last stage of the performance run cycle. Endurance testing
ensures that the application is capable of handling the extended load without any
delay in response time.
Endurance testing is a long process and sometimes it may last for up to a year. In
endurance testing, external loads like internet traffic and user actions are used.
Endurance testing is different from load testing as load testing ends in some hours.
Endurance Testing Process:
Establish the test environment: Determine the test environment and configure it so
that it resembles the production environment. Make sure that every aspect
possible of the databases, network configurations, hardware, and software
matches the real production setup.
Creating the test plan: Create a thorough test plan that details the goals,
parameters, methodology, materials, timetable, and outputs of the endurance
evaluation procedure. Provide a clear definition of the test’s success criteria,
performance metrics, and workload model.
Test estimation: Calculate how much hardware, software, testing equipment, and
labor will be needed for the endurance testing procedure. Take into account the
test’s duration, the amount of data it contains, and any other elements that can
affect the amount of resources needed.
Risk Analysis: Determine any possible dangers that can arise from endurance
testing, such as corrupted data, device malfunctions, or network problems.
Evaluate each risk’s potential impact and probability, then create mitigation plans
to reduce any negative consequences on the testing procedure.
Test Schedule: Create a thorough test plan with deadlines for every stage of the
endurance testing procedure. The preparation of test data, creation of test scripts,
testing, monitoring, and analysis should all be taken into account in this timeline.
Test Execution: Follow the established test plan and timetable when conducting the
endurance test. This involves placing on the system an ongoing load over an
extended length of time to mimic real-world usage conditions. Track system
behavior, gather performance data, and spot obstacles.
Test Closure: After the endurance test is over, gather the data and contrast it with
the established criteria of success. A thorough test closure report should be
created, containing performance measurements, issues or bottlenecks found, and
suggestions for improvement. Then, Share the report with the appropriate parties.
Endurance Testing Tools:
1. WebLOAD
2. LoadComplete
3. Apache JMeter
4. LoadRunner
Advantages of Endurance Testing:
Simulating long-term usage: Endurance testing allows testing teams to test the
system for a prolonged period; this way it can help to identify bugs that may occur
only after a certain amount of time has passed.
Disadvantages of Endurance Testing:
It takes more time to complete this testing technique.
Manual endurance testing cannot practically be performed.
Selection of the correct automation tool is important; otherwise the test may lead to
unexpected results.
It is difficult to determine how much load needs to be applied.
Over-stressing can result in performance problems, performance degradation,
permanent loss of data, etc.
Time-consuming: Endurance testing can be time-consuming and may require
significant resources, as the system must be tested for an extended period.
Costly: Endurance testing can be costly, as it may require specialized testing
equipment or additional resources to simulate a heavy workload or a large number
of users.
Complex: Endurance testing can be complex, as it may require specialized tools and
scripts to automate the testing process.
Difficult to reproduce: If a problem is identified during endurance testing, it may be
difficult to reproduce the issue to fix it.
Limited coverage: Endurance testing may not be able to cover all possible scenarios
and usage patterns, leading to less complete testing results.
Reliability Testing – Software Testing
Different Ways to Perform Reliability Testing
Stress testing: This testing involves subjecting the system to high levels of load or
usage to identify performance bottlenecks or issues that can cause the system to
fail.
Endurance testing: Endurance testing involves running the system continuously for an
extended period to identify issues that may occur over time.
Recovery testing: Recovery testing is testing the system’s ability to recover from
failures or crashes.
Environmental Testing: Conducting tests on the product or system in various
environmental settings, such as temperature shifts, humidity levels, vibration
exposure or shock exposure, helps in evaluating its dependability in real-world
circumstances.
Performance Testing: It is possible to make sure that the system continuously satisfies
the necessary specifications and performance criteria by assessing its
performance at both peak and normal load levels.
Regression Testing: After every update or modification, the system should be tested
again using the same set of test cases to help find any potential problems caused
by code changes.
Fault Tree Analysis: Understanding the elements that lead to system failures can be
achieved by identifying probable failure modes and examining the connections
between them.
It is important to note that reliability testing may require specialized tools and test
environments, and that it’s often a costly and time-consuming process.
Objective of Reliability Testing
To find the perpetual structure of repeating failures.
To find the number of failures occurring in a specific period of time.
To discover the main cause of failure.
To conduct performance testing of various modules of software product after fixing
defects.
It builds confidence in the market, stakeholders and users by providing a dependable
product that meets quality criteria and operates as expected.
Understanding the dependability characteristics and potential mechanisms of failure
of the system helps companies plan and schedule maintenance actions more
efficiently.
It evaluates whether a system or product can be used continuously without
experiencing a major loss in dependability, performance or safety.
It confirms that in the absence of unexpected shutdown or degradation, the system or
product maintains constant performance levels under typical operating settings.
Types of Reliability Testing
1. Feature Testing
Following three steps are involved in this testing:
Each function in the software should be executed at least once.
Interaction between two or more functions should be reduced.
Each function should be properly executed.
2. Regression Testing
Regression testing is basically performed whenever any new functionality is added,
old functionalities are removed or the bugs are fixed in an application to make sure
with introduction of new functionality or with the fixing of previous bugs, no new bugs
are introduced in the application.
3. Load Testing
Load testing is carried out to determine whether the application is supporting the
required load without getting breakdown. It is performed to check the performance of
the software under maximum work load.
4. Stress Testing
This type of testing involves subjecting the system to high levels of usage or load in
order to identify performance bottlenecks or issues that can cause the system to fail.
5. Endurance Testing
This type of testing involves running the system continuously for an extended period
of time in order to identify issues that may occur over time, such as memory leaks or
other performance issues.
Recovery testing: This type of testing involves testing the system’s ability to recover
from failures or crashes, and to return to normal operation.
6. Volume Testing
This type of testing involves testing the system’s ability to handle large amounts of
data.
Soak testing: This type of testing is similar to endurance testing, but it focuses on the
stability of the system under a normal, expected load over a long period of time.
7. Spike Testing
This type of testing involves subjecting the system to sudden, unexpected increases
in load or usage in order to identify performance bottlenecks or issues that can cause
the system to fail.
Measurement of Reliability Testing
Mean Time To Failure (MTTF): The average time the system operates before a failure
occurs.
Mean Time To Repair (MTTR): The average time taken to fix a failure.
Mean Time Between Failures (MTBF): The average time between two consecutive
failures; the measurement of reliability testing is commonly expressed in terms of
MTBF.
MTBF = MTTF + MTTR
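For example, using illustrative numbers: if a system typically operates for 500 hours
before a failure occurs (MTTF = 500 hours) and each failure takes 5 hours to fix
(MTTR = 5 hours), then MTBF = 500 + 5 = 505 hours, i.e. on average one failure is
expected roughly every 505 hours of operation plus repair time.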
Monkey Testing – Software Testing
Monkey testing is a type of software testing in which the tester provides random
inputs to the system and observes its behavior, without following any predefined test
cases. Monkey Testing is also part of the standard testing tools for stress testing
in Android Studio.
Features of Monkey Testing
The features of Monkey Testing are as follows:
Monkey Testing needs testers with very good domain and technical knowledge.
It is so random that the reproduction of the defect is almost impossible.
Its efficiency is not 100% i.e. sometimes the result may not be correct.
There is no specification while performing monkey testing.
It is implemented when the defects are not detected at regular intervals.
Monkey testing helps to make sure the reliability and efficiency of the system.
Where can we use Monkey Testing?
Monkey testing can be used in the following cases:
It can be used for database testing for testing the application by beginning the
transaction and inserting the random data.
It can be used to test an application for OWASP issues where pre-compiled and
random data can be used.
It can be used for testing in cases to imitate the activities of monkeys who are
inserting random data.
Types of Monkey Testing
There are 3 types of Monkey Testing:
Dumb Monkey Test: In Dumb Monkey Test, the tester has no knowledge about the
application or system. Testers don’t know if their input or behavior is valid or
invalid. Tester also doesn’t know their or the system’s capabilities or the flow of
the application. Dumb Monkey Test can find fewer bugs than smart monkeys, but
can also find important bugs that are hard to catch by smart monkey tests.
Smart Monkey Test: In Smart Monkey Test, the tester has a brief idea about the
application or system. The tester knows its own location, where it can go, and
where it has been. Tester also knows their own capability and the system’s
capability. In smart monkey tests, the focus is to break the system and report bugs
if they are found.
Brilliant Monkey Test: In the brilliant monkey test, a tester who has knowledge of
the domain is assigned by the manager to test the application. The test engineer
knows the pattern of product usage and can perform testing from the user's
viewpoint.
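A dumb monkey test can be approximated in a few lines of code: feed a function random
inputs and only check that it never crashes. The formatName() method below is a
hypothetical method under test; the same pattern applies to any API:
import java.util.Random;
class MonkeyTest {
    // Hypothetical method under test.
    static String formatName(String input) {
        return input.trim().toUpperCase();
    }
    public static void main(String[] args) {
        Random rnd = new Random();
        for (int i = 0; i < 100_000; i++) {
            // Build a random string of printable characters (including empty strings).
            int len = rnd.nextInt(20);
            StringBuilder sb = new StringBuilder();
            for (int j = 0; j < len; j++) sb.append((char) (32 + rnd.nextInt(95)));
            try {
                formatName(sb.toString());
            } catch (RuntimeException e) {
                // Record the failing input so the otherwise hard-to-reproduce crash can be replayed.
                throw new AssertionError("Crashed on input: \"" + sb + "\"", e);
            }
        }
        System.out.println("No crashes in 100000 random inputs");
    }
}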
Monkey Testing vs Gorilla Testing
The main difference between monkey testing and gorilla testing is that in monkey
testing random inputs are applied across the whole application without predefined
test cases, whereas in gorilla testing one particular module or functionality is
tested repeatedly and heavily to check its robustness.
Monkey Testing vs Adhoc Testing
Knowledge about the application: In monkey testing, the tester has a brief idea about
the application; in adhoc testing, the tester has no knowledge of the application.
Understanding of system capability: In monkey testing, the tester knows their own and
the system's capability; in adhoc testing, the tester is not aware of either their
own or the system's capability.
Advantages of Monkey Testing
Testers are free to design tests according to their own understanding, beyond
previously stated scenarios, which may reveal various new types of bugs or defects
existing in the system.
Execution is easy in monkey testing as random data is used.
Monkey Testing can be performed without highly skilled testers because it is
randomized testing.
It requires less expenditure to set up and execute test cases because there is no
need for elaborate environment setup or test case generation.
Disadvantages of Monkey Testing
In monkey testing, the tester performs tests randomly with random data, so
reproducing defects is almost impossible.
The accuracy of monkey testing is low, and it does not always give the correct
result.
To make monkey testing more accurate, the testers involved must have good
technical and domain knowledge.
This testing can take longer since there are no predefined tests, and it may find
fewer bugs, which can leave loopholes in the system.
Agile Software Testing
Agile Testing is a type of software testing that follows the principles of agile software
development to test the software application. All members of the project team along
with the special experts and testers are involved in agile testing. Agile testing is not a
separate phase and it is carried out with all the development phases i.e.
requirements, design and coding, and test case generation. Agile testing takes place
simultaneously throughout the Development Life Cycle. Agile testers participate in
the entire development life cycle along with the development team members, helping
to build the software according to the customer requirements with a better design,
and thus better code becomes possible. The agile testing team works as a single team
towards the single objective of achieving quality. Agile Testing has shorter time
frames called iterations or loops. This methodology is also called the
delivery-driven approach because it provides a better prediction of workable
products in a shorter time.
Agile testing is an informal process that is specified as a dynamic type of testing.
It is performed regularly throughout every iteration of the Software Development
Lifecycle (SDLC).
Customer satisfaction is the primary concern for agile test engineers throughout the
agile testing process.
Features of Agile Testing
Some of the key features of agile software testing are:
Simplistic approach: In agile testing, testers perform only the necessary tests but at the
same time do not leave behind any essential tests. This approach delivers a
product that is simple and provides value.
Continuous improvement: In agile testing, agile testers depend mainly on feedback and
self-learning for improvement and they perform their activities efficiently
continuously.
Self-organized: Agile testers are highly efficient and tend to solve problems by bringing
teams together to resolve them.
Testers enjoy work: In agile testing, testers enjoy their work and thus will be able to
deliver a product with the greatest value to the consumer.
Encourage Constant communication: In agile testing, efficient communication channels
are set up with all the stakeholders of the project to reduce errors and
miscommunications.
Constant feedback: Agile testers need to constantly provide feedback to the
developers if necessary.
Agile Testing Principles
Shortening feedback iteration: In Agile Testing, the testing team gets to know the
product development and its quality for each and every iteration. Thus continuous
feedback minimizes the feedback response time and the fixing cost is also
reduced.
Testing is performed alongside development: Agile testing is not a separate phase; it
is performed alongside the development phase. It ensures that the features
implemented during that iteration are actually done. Testing is not kept pending for
a later phase.
Involvement of all members: Agile testing involves each and every member of the
development team and the testing team. It includes various developers and
experts.
Documentation is weightless: In place of global test documentation, agile testers use
reusable checklists to suggest tests and focus on the essence of the test rather
than the incidental details. Lightweight documentation tools are used.
Clean code: The defects that are detected are fixed within the same iteration. This
ensures clean code at any stage of development.
Constant response: Agile testing helps to deliver responses or feedback on an ongoing
basis. Thus, the product can meet the business needs.
Customer satisfaction: In agile testing, customers are exposed to the product
throughout the development process. Throughout the development process, the
customer can modify the requirements, and update the requirements and the tests
can also be changed as per the changed requirements.
Test-driven: In agile testing, testing is conducted alongside the development process
to shorten the development time, whereas in the traditional process testing is
implemented only after the implementation, once the software has been developed.
Agile Testing Methodologies
Some of the agile testing methodologies are:
Test-Driven Development (TDD): TDD is a software development process that relies on
creating unit test cases before developing the actual code of the software. It is an
iterative approach that combines three operations: programming, creation of unit
tests, and refactoring (a minimal sketch follows this list).
Behavior Driven Development (BDD): BDD is agile software testing that aims to
document and develop the application around the user behavior a user expects to
experience when interacting with the application. It encourages collaboration
among the developer, quality experts, and customer representatives.
Exploratory Testing: In exploratory testing, the tester has the freedom to explore the
code and create effective and efficient software. It helps to discover the unknown
risks and explore each aspect of the software functionality.
Acceptance Test-Driven Development (ATDD): ATDD is a collaborative process where
customer representatives, developers, and testers come together to discuss the
requirements, and potential pitfalls and thus reduce the chance of errors before
coding begins.
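As a minimal illustration of the Test-Driven Development cycle mentioned above, the
sketch below shows the idea: the test is written first and fails, then just enough code
is written to make it pass, and finally the code is refactored while keeping the test
green. The Cart class, its methods, and the expected values are all made up for the
example:
// Step 1 (red): write the test first. With no Cart class yet, this does not even compile.
class CartTest {
    public static void main(String[] args) {
        Cart cart = new Cart();
        cart.add(2, 10.0);    // 2 items at 10.0 each
        cart.add(1, 5.5);
        if (cart.total() != 25.5) throw new AssertionError("expected 25.5, got " + cart.total());
        System.out.println("Cart test passed");
    }
}
// Step 2 (green): write the simplest implementation that makes the test pass.
// Step 3 (refactor): clean up the code while keeping the test green.
class Cart {
    private double total = 0;
    void add(int quantity, double unitPrice) { total += quantity * unitPrice; }
    double total() { return total; }
}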
Agile Testing Life Cycle
1. Iteration 0
It is the first stage of the testing process and the initial setup is performed in this
stage. The testing environment is set in this iteration.
This stage involves executing the preliminary setup tasks such as finding people for
testing, preparing the usability testing lab, preparing resources, etc.
The business case for the project, boundary situations, and project scope are
verified.
Important requirements and use cases are summarized.
Initial project and cost valuation are planned.
Risks are identified.
Outline one or more candidate designs for the project.
2. Construction Iteration
It is the second phase of the testing process. It is the major phase of the testing and
most of the work is performed in this phase. It is a set of iterations to build an
increment of the solution. This process is divided into two types of testing:
Confirmatory testing: This type of testing concentrates on verifying that the system
meets the stakeholder’s requirements as described to the team to date and is
performed by the team. It is further divided into 2 types of testing:
Agile acceptance testing: It is the combination of acceptance testing and
functional testing. It can be executed by the development team and the
stakeholders.
Developer testing: It is the combination of unit testing and integration
testing and verifies both the application code and database schema.
Investigative testing: Investigative testing detects the problems that are skipped or
ignored during confirmatory testing. In this type of testing, the tester determines
the potential problems in the form of defect stories. It focuses on issues like
integration testing, load testing, security testing, and stress testing.
3. Release (Transition Phase)
This phase is also known as the transition phase. This phase includes the full system
testing and the acceptance testing. To finish the testing stage, the product is tested
more relentlessly while it is in construction iterations. In this phase, testers work on
the defect stories. This phase involves activities like:
Training end-users.
Support people and operational people.
Marketing of the product release.
Back-up and restoration.
Finalization of the system and user documentation.
4. Production
It is the last phase of agile testing. The product is finalized in this stage after the
removal of all defects and issues raised.
2. Quadrant 2 (Automated and Manual)
The testing that can be carried out in this quadrant includes:
Pair testing.
Testing scenarios and workflow.
Testing user stories and experiences like prototypes.
3. Quadrant 3 (Manual)
The third agile quadrant provides feedback to the first and the second quadrant. This
quadrant involves executing many iterations of testing, these reviews and responses
are then used to strengthen the code. The test cases in this quadrant are developed
to implement automation testing. The testing that can be carried out in this quadrant
are:
Usability testing.
Collaborative testing.
User acceptance testing.
Pair testing with customers.
4. Quadrant 4 (Tools)
The fourth agile quadrant focuses on the non-functional requirements of the product
like performance, security, stability, etc. Various types of testing are performed in this
quadrant to deliver non-functional qualities and the expected value. The testing
activities that can be performed in this quadrant are:
Non-functional testing such as stress testing, load testing, performance testing, etc.
Security testing.
Scalability testing.
Infrastructure testing.
Data migration testing.
Advantages of Agile Testing
Improve product quality: In agile testing, regular feedback is obtained from the user
and other stakeholders, which helps to enhance the software product quality.
Limitations of Agile Testing
Below are some of the limitations of agile software testing:
Project failure: In agile testing, if one or more members leave the job then there are
chances for the project failure.
Limited documentation: In agile testing, there is little or no documentation, which
makes it difficult to predict the expected results as there are no explicitly
documented conditions and requirements.
Introduce new bugs: In agile software testing, bug fixes, modifications, and releases
happen repeatedly which may sometimes result in the introduction of new bugs in
the system.
Poor planning: In agile testing, the team is not exactly aware of the end result from
day one, so it becomes challenging to predict factors like cost, time, and
resources required at the beginning of the project.
No finite end: Agile testing requires minimal planning at the beginning so it becomes
easy to get sidetracked while delivering the new product. There is no finite end
and there is no clear vision of what the final product will look like.
Challenges During Agile Testing
Below are some of the challenges that are faced during agile testing:
Changing requirements: Sometimes during product development changes in the
requirements or the specifications occur but when they occur near the end of the
sprint, the changes are moved to the next sprint and thus become the overhead
for developers and testers.
Inadequate test coverage: In agile testing, testers sometimes miss critical test
cases because of the continuously changing requirements and continuous
integration. This problem can be solved by keeping track of test coverage through
the agile test metrics (a minimal coverage-metric sketch follows after this list).
Tester’s availability: Sometimes the testers don’t have adequate skills to perform
API and Integration testing, which results in missing important test cases. One
solution to this problem is to provide training for the testers so that they can carry
out essential tests effectively.
Less Documentation: In agile testing, there is less or no documentation which
makes the task of the QA team more tedious.
Performance Bottlenecks: Sometimes developers build the product following only
the written specification, without understanding the end-user requirements, which
results in performance issues. Performance bottlenecks can be identified and fixed
using load-testing tools.
Early detection of defects: In agile testing, defects need to be detected early; when
they are found only at the testing or production stage, they become very difficult and costly to fix.
Skipping essential tests: In agile testing, sometimes agile testers due to time
constraints and the complexity of the test cases put some of the non-functional
tests on hold. This may cause some bugs later that may be difficult to fix.
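To make the coverage-tracking idea above concrete, here is a minimal sketch of one possible agile test metric: the percentage of requirements that have at least one executed test case. The requirement identifiers are hypothetical, and a real team would typically pull these numbers from its test management or coverage tooling.

def requirement_coverage(requirements, tested_requirements):
    # Percentage of requirements covered by at least one executed test case.
    if not requirements:
        return 100.0
    covered = sum(1 for req in requirements if req in tested_requirements)
    return 100.0 * covered / len(requirements)

if __name__ == "__main__":
    requirements = {"REQ-1", "REQ-2", "REQ-3", "REQ-4"}  # hypothetical backlog items
    tested = {"REQ-1", "REQ-3"}                          # requirements exercised this sprint
    print(f"requirement coverage: {requirement_coverage(requirements, tested):.0f}%")  # prints 50%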
Risks During Agile Testing
Automated UI slow to execute: Automated UI tests give confidence in the product,
but they are slow to execute and expensive to build.
Use a mix of testing types: To achieve the expected quality of the product, a
mixture of testing types and levels must be used.
Poor Automation test plan: Sometimes the automation test plan is poorly organized
or left unplanned to save time, which results in test failures.
Lack of expertise: Automated testing is not always the right solution, and the team
may lack the expertise needed to deliver effective automated tests.
Unreliable tests: Fixing failing tests and resolving issues of brittle tests should be
the top priority to avoid false positives.
Requirement Analysis:
User requirement related to each component is observed.
Test Planning:
Test is planned according to the analysis of the requirements of the user.
Test Specification:
This phase specifies which test cases must be run and which test cases
can be skipped.
Test Execution:
Once the test cases are specified according to the user requirements, test cases
are executed.
Test Recording:
Test recording is keeping a record of the defects that are detected (a minimal record structure is sketched after this list).
Test Verification:
Test verification is the process of determining whether the product meets the
specification.
Completion:
This is the last phase of the testing process in which the result is analyzed.
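As a minimal sketch of what the test recording phase might capture, the structure below stores a test case identifier, the requirement it covers, the outcome, and any defect identifiers raised by the run. The field names are illustrative only; real teams usually record this information in a test management or defect-tracking tool.

from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class TestRecord:
    # Illustrative record kept during the test recording phase.
    test_case_id: str
    requirement_id: str
    passed: bool
    defects: list = field(default_factory=list)  # defect identifiers raised by this run
    executed_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

if __name__ == "__main__":
    record = TestRecord("TC-101", "REQ-7", passed=False, defects=["BUG-42"])
    print(record)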
label text, what is the current value) reliably regardless of where that object is on
the screen.
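To illustrate the point above, namely that a GUI test should locate an object by its logical properties (name, label text, current value) rather than by its screen position, here is a minimal, purely hypothetical sketch in Python. Real GUI test tools expose their own locator APIs; the object model below is invented for illustration only.

from dataclasses import dataclass

@dataclass
class UiObject:
    # Hypothetical GUI object exposed by an instrumented application.
    name: str
    label: str
    value: str
    x: int
    y: int

def find_by_label(objects, label):
    # Locate an object by its label text, ignoring where it sits on the screen.
    return next(obj for obj in objects if obj.label == label)

if __name__ == "__main__":
    screen = [
        UiObject("txtUser", "User name", "alice", x=40, y=120),
        UiObject("txtPass", "Password", "", x=40, y=160),
    ]
    user_field = find_by_label(screen, "User name")
    assert user_field.value == "alice"  # check the current value, not the coordinates
    print(user_field.name)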
Challenges with Graphical User Interface Testing (GUI) Testing:
There are some challenges that occur during Graphical user interface testing. These
are given below.
Technology Support
Stability of Objects
Instrumentation
What is a Test Strategy Document?
A test strategy document is a well-described document that is derived from actual
business requirements that guide the whole team about the software testing
approach and objectives for each activity in the software testing process.
The test strategy document is approved and reviewed by the test team lead,
development manager, quality analyst manager, and product manager.
The test strategy document specifies the resources, scope, plan, and methodology
for different testing activities.
It answers all the questions like what needs to get done, how to accomplish it, etc.
Components of a Test Strategy
The test effort, test domain, test setups, and test tools used to verify and validate a
set of functions are all outlined in a Test Strategy. It also includes schedules,
resource allocations, and employee utilization information. This data is essential for
the test team (Test) to be as structured and efficient as possible. A Test Strategy
differs from a Test Plan, which is a document that gathers and organizes test cases
by functional areas and/or types of testing in a format that can be presented to other
teams and/or customers. Both are critical components of the Quality Assurance
process since they aid in communicating the breadth of the test method and ensuring
test coverage while increasing the testing effort’s efficiency.
Below diagram shows the components of the test strategy:
1. Scope and Overview: Scope and Overview is the first section of the test strategy
document. Any product's overview includes information about who should approve,
review, and use the document. The testing activities and phases that must be
approved are also described in this section.
An overview of the project, as well as information on who should utilize this page.
Include information such as who will evaluate and approve the document.
Define the testing activities and phases that will be performed, as well as the
timetables that will be followed in relation to the overall project timelines stated in
the test plan.
2. Testing Methodology: Testing methodology is the next module in the test strategy
document, and it is used to specify the degrees of testing, testing procedures, roles,
and duties of all team members. The change management process, which includes
the modification request submission, pattern to be utilized, and activity to manage the
request, is also included in the testing strategy. Above all, if the test plan document is
not properly established, it may result in future errors or blunders. This module is
used to specify the following information-
Define the testing process, testing level, roles, and duties of each team member.
Describe why each test type defined in the test plan (for example, unit, integration,
system, regression, installation/uninstallation, usability, load, performance, and
security testing) should be performed, as well as details such as when to begin,
the test owner, responsibilities, the testing approach, and details of the automation
strategy and tools (if applicable).
3. Testing Environment Specifications: Testing Environment Specification is another
section of the test strategy paper. The specification of the test data requirements, as
we know, is quite important. As a result, the testing environment specification in
the test strategy document includes clear instructions on how to produce test data.
This module contains information on the number of environments and the required
setup. The strategies for backup and restoration are equally important.
The information about the number of environments and the needed configuration for
each environment should be included in the test environment setup.
For example, the functional test team might have one test environment and the UAT
team might have another.
Define the number of users supported in each environment, as well as each user’s
access roles and software and hardware requirements, such as the operating
system, RAM, free disc space, and the number of systems.
It’s just as crucial to define the test data needs.
Give specific instructions on how to generate test data (either generate synthetic data
or use production data with privacy-sensitive fields masked); a minimal masking sketch follows after this list.
Define a backup and restoration strategy for test data.
Due to unhandled circumstances in the code, the test environment database may
encounter issues.
The backup and restoration method should state who will take backups, when
backups should be taken, what should be included in backups, when the database
should be restored, who will restore it, and what data masking procedures should
be followed if the database is restored.
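As a minimal sketch of the masking approach mentioned above, the snippet below copies a production-like record and replaces privacy-sensitive fields with irreversible placeholder values. The record layout and field names are hypothetical; a real project would follow its own data-protection rules and masking tooling.

import hashlib

def mask_record(record, fields_to_mask=("email", "phone")):
    # Return a copy of a production-like record with sensitive fields masked.
    masked = dict(record)
    for field_name in fields_to_mask:
        if masked.get(field_name) is not None:
            digest = hashlib.sha256(str(masked[field_name]).encode("utf-8")).hexdigest()[:8]
            masked[field_name] = f"masked-{digest}"
    return masked

if __name__ == "__main__":
    production_row = {"id": 17, "email": "carol@example.com", "phone": "+1-555-0100", "plan": "gold"}
    print(mask_record(production_row))  # id and plan survive, email and phone are masked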
4. Testing Tools: Testing tools are an important part of the test strategy document
since it contains all of the information on the test management and automation tools
that are required for test execution. The necessary approaches and tools for security,
performance and load testing are dictated by the details of the open-source or
commercial tool and the number of users it can support.
Define the tools for test management and automation that will be utilized to execute
the tests.
Describe the test approach and tools needed for performance, load, and security
testing.
Mention whether the product is open-source or commercial, as well as the number of
individuals it can accommodate, and make suitable planning.
5. Release Control: Release Control is a crucial component of the test strategy
document. It’s used to make sure that test execution and release management
strategies are established in a systematic way. It specifies the following information-
Different software versions in test and UAT environments can occur from unplanned
release cycles.
All adjustments in that release will be tested using the release management strategy,
which includes a proper version history.
Set up a build management process that answers questions like where the new build
should be made available, where it should be deployed, when to receive the new
build, where to acquire the production build, who will give the go signal for the
production release, and so on.
6. Risk Analysis: Risk Analysis is the next section of the test strategy paper. All
potential risks associated with the project that can become an issue during test
execution are described in the test strategy document. Furthermore, a defined
strategy is established for handling these risks to ensure that testing is carried out
appropriately, and a contingency plan is prepared in case the team is confronted
with these risks in real time. Make a list of all the potential risks.
Provide a detailed plan to manage these risks, as well as a backup plan in case the
hazards materialize.
7. Review and Approval: Review and Approval is the last section of the Testing
strategy paper.
When all of the testing activities are stated in the test strategy document, it is
evaluated by the persons that are involved, such as:
System Administration Team.
Project Management Team.
Development Team.
Business Team.
The document should begin with the correct date, approver name, comments, and a
summary of the reviewed modifications.
It should also be evaluated and updated on a regular basis as the testing procedure
improves.
Test Strategy vs Test Plan
Below are the differences between Test Strategy and Test Plan:
S.No.  Test Plan  Test Strategy
4.  After the requirements have been approved, the test plan is written.  The test strategy comes first, followed by the test plan.
Reactive strategy: Only when the real program is released are tests devised and
implemented. As a result, testing is based on faults discovered in the real system.
Consider the following scenario: you’re conducting exploratory testing. Test
charters are created based on the features and functionalities that already exist.
The outcomes of the testing by testers are used to update these test charters.
Agile development initiatives can also benefit from exploratory testing.
Consultative strategy: Like user-directed testing, this technique uses input from key
stakeholders to set the scope of test conditions. Consider a scenario in which the
browser compatibility of a web-based application is being evaluated. Here, the app's
owner would provide a list of browsers and their versions in order of preference.
They may also include a list of connection types, operating systems, anti-malware
software, and other requirements for the program to be tested against. Depending
on the priority of the items in the provided lists, the testers can use techniques such
as pairwise or equivalence partitioning (a minimal equivalence-partitioning sketch
follows after this list).
Regression-averse strategy: In this case, the testing procedures are aimed at lowering
the risk of regression for both functional and non-functional product aspects.
Using the web application as an example, if the program needs to be tested for
regression issues, the testing team can design test automation for both common
and unusual use cases. They can also employ GUI-based automation tools to
run tests every time the application is updated. No single strategy outlined above
has to be used exclusively for a given testing job; two or more strategies may
be combined depending on the needs of the product and the organization.
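As a minimal illustration of the equivalence partitioning mentioned above, the sketch below tests a hypothetical age-based pricing function with one representative value per equivalence class plus an invalid class; the function and the partitions are invented for illustration only.

def ticket_price(age: int) -> float:
    # Hypothetical function under test with age-based pricing rules.
    if age < 0:
        raise ValueError("age cannot be negative")
    if age < 12:
        return 5.0
    if age < 65:
        return 10.0
    return 7.0

# One representative value per equivalence class.
PARTITIONS = {
    "child (0-11)": (8, 5.0),
    "adult (12-64)": (30, 10.0),
    "senior (65+)": (70, 7.0),
}

if __name__ == "__main__":
    for name, (age, expected) in PARTITIONS.items():
        assert ticket_price(age) == expected, name
    try:
        ticket_price(-1)  # the invalid partition should be rejected
    except ValueError:
        pass
    print("all equivalence classes behave as expected")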
Test Strategy Selection
The following factors may influence the selection of a test strategy:
The nature and size of the organization.
The project requirements; for example, safety- and security-critical applications
necessitate a more rigorous, well-thought-out approach.
The product development model being followed.
Whether it is a short-term or a long-term strategy.
Details Included in Test Strategy Document
The test strategy document includes the following important details:
Overview and Scope.
Software and testing work products that can be reused.
Details about the various test levels, their relationships, and the technique for
integrating the various test levels.
Techniques for testing the environment.
Level of testing automation.
Various testing tools.
Risk Assessment.
Entry and exit criteria for each test level.
Reports on test results.
Each test’s degree of independence.
During testing, metrics and measurements will be analyzed.
Regression and confirmation testing.
Taking care of discovered flaws.
Configuring and managing test tools and infrastructure.
Members of the Test team’s roles and responsibilities.
Conclusion
The test strategy document presents a bright vision of what the test team will do for
the entire project. Because the test strategy document will drive the entire team, only
individuals with extensive experience in the product area should prepare it. Because it
is a static document, it cannot be edited or changed throughout the project life cycle.
The Test strategy document can be sent to the complete testing team before any
testing operations begin. If the test strategy document is properly created, it will result
in the development of a high-quality system and the expansion of the entire testing
process.
Software Testing – Test Maturity Model
The Test Maturity Model (TMM) in software testing is a framework for assessing the
software testing process with the intention of improving it. It is based on Capability
Maturity Model(CMM). It was first produced by the Illinois Institute of Technology with
an aim to assess the maturity of the test processes and to provide targets that
improve the maturity.
Currently, there is a Test Maturity Model Integration (TMMI) which has replaced the
Test Maturity Model. TMMI has a five-level model that provides a framework to
measure the maturity of the testing processes. The purpose of a Test maturity model
is to find the maturity and provide targets for enhancing the overall software testing
process.
The following topics will be discussed here:
1. Five Levels Of TMM
2. TMM vs CMM
3. Benefits Of TMM
4. Need For TMM
5. How To Achieve Highest Test Maturity With TMM
Let’s start discussing each of these topics in detail.
Five Levels of TMM
Below are the five different levels that help in achieving the Test Maturity:
Level 1: Initialization
At this level, we are able to run the software without any hindrances or blocks.
There are no exactly defined testing processes.
Quality checks are not done before the software release.
Ad hoc testing is performed (i.e., there is no defined testing process).
Level 2: Definition
This is the second level of the Test Maturity Model.
At this level, the requirements are defined
The test strategies, test plans, and test cases are created at this level.
All the test cases are executed against the requirements and hence the testing is
done.
Level 3: Integration
This is the third level of the Test Maturity Model.
Testing procedures are integrated with the SDLC process, and testing is performed
independently after the development phase is completed.
The objective of testing is to manage the risks.
Level 4: Measurement and Management
This is the fourth level of the Test Maturity Model.
All the testing procedures become part of the software life cycle.
These include reviews of requirement analysis, design documents, and Code
reviews.
Integration and Unit testing as a part of coding is done here.
All the Testing related activities are measured here.
Level 5: Optimization
This is the fifth level of the Test Maturity Model.
Testing processes are optimized.
The Testing process is verified and measures are taken for improvement.
There are proper measures taken for defect prevention, and care is taken so that
identified defects do not reoccur in the future.
This step is characterized by the usage of different tools for testing optimization.
TMM vs CMM
Benefits of TMM
The process is organized as each level is well defined and all the deliverables are
achieved.
As evident from level 4, all the code is reviewed, and test plans are properly
executed. This leads to no contradictions and therefore the requirements are
clear.
This model was created keeping in mind the minimization of defects. Hence,
maximum defects are identified and the final product is defect-free, therefore
prioritizing its defect prevention objective.
Quality of the software is assured as the testing procedures are integrated with all
phases of the Software lifecycle.
Risks are reduced considerably and time is saved.
TMM in software testing offers great help to the testing team which includes the
testers, managers, and key stakeholders for determining the required test cycles for
proceeding to the next stage. It starts with the QA operations team matching each of
the TMM stage’s elements for figuring out the exact level of the test cycle. Next,
proper steps are required for improving the test maturity model.
2. Document-Driven: The waterfall model relies heavily on documentation to ensure
that the project is well-defined and the project team is working towards a clear set
of goals.
3. Quality Control: The waterfall model places a high emphasis on quality control
and testing at each phase of the project, to ensure that the final product meets the
requirements and expectations of the stakeholders.
4. Rigorous Planning: The waterfall model involves a rigorous planning process,
where the project scope, timelines, and deliverables are carefully defined and
monitored throughout the project lifecycle.
Overall, the waterfall model is used in situations where there is a need for a highly
structured and systematic approach to software development. It can be effective in
ensuring that large, complex projects are completed on time and within budget, with a
high level of quality and customer satisfaction.
Phases of Classical Waterfall Model
Waterfall Model is a classical software development methodology that was first
introduced by Winston W. Royce in 1970. It is a linear and sequential approach to
software development that consists of several phases that must be completed in a
specific order. The phases include:
1. Requirements Gathering and Analysis: The first phase involves gathering
requirements from stakeholders and analyzing them to understand the scope and
objectives of the project.
2. Design: Once the requirements are understood, the design phase begins. This
involves creating a detailed design document that outlines the software
architecture, user interface, and system components.
3. Implementation: The implementation phase involves coding the software based
on the design specifications. This phase also includes unit testing to ensure that
each component of the software is working as expected.
4. Testing: In the testing phase, the software is tested as a whole to ensure that it
meets the requirements and is free from defects.
5. Deployment: Once the software has been tested and approved, it is deployed to
the production environment.
6. Maintenance: The final phase of the Waterfall Model is maintenance, which
involves fixing any issues that arise after the software has been deployed and
ensuring that it continues to meet the requirements over time.
The classical waterfall model divides the life cycle into a set of phases. This model
considers that one phase can be started after the completion of the previous phase.
That is the output of one phase will be the input to the next phase. Thus the
development process can be considered as a sequential flow in the waterfall. Here
the phases do not overlap with each other. The different sequential phases of the
classical waterfall model are shown in the below figure.
Let us now learn about each of these phases in detail.
1. Feasibility Study
The main goal of this phase is to determine whether it would be financially and
technically feasible to develop the software.
The feasibility study involves understanding the problem and then determining the
various possible strategies to solve the problem. These different identified solutions
are analyzed based on their benefits and drawbacks. The best solution is chosen and
all the other phases are carried out as per this solution strategy.
2. Requirements Analysis and Specification
The aim of the requirement analysis and specification phase is to understand the
exact requirements of the customer and document them properly. This phase
consists of two different activities.
Requirement gathering and analysis: Firstly all the requirements regarding the
software are gathered from the customer and then the gathered requirements are
analyzed. The goal of the analysis part is to remove incompleteness (an
incomplete requirement is one in which some parts of the actual requirements
have been omitted) and inconsistencies (an inconsistent requirement is one in
which some part of the requirement contradicts some other part).
Requirement specification: These analyzed requirements are documented in a
software requirement specification (SRS) document. SRS document serves as a
contract between the development team and customers. Any future dispute
between the customers and the developers can be settled by examining the SRS
document.
3. Design
The goal of this phase is to convert the requirements acquired in the SRS into a
format that can be coded in a programming language. It includes high-level and
detailed design as well as the overall software architecture. All of this effort is
documented in a Software Design Document (SDD).
4. Coding and Unit Testing
In the coding phase software design is translated into source code using any suitable
programming language. Thus each designed module is coded. The aim of the unit
testing phase is to check whether each module is working properly or not.
5. Integration and System testing
Integration of different modules is undertaken soon after they have been coded and
unit tested. Integration of various modules is carried out incrementally over a number
of steps. During each integration step, previously planned modules are added to the
partially integrated system and the resultant system is tested. Finally, after all the
modules have been successfully integrated and tested, the full working system is
obtained and system testing is carried out on this.
System testing consists of three different kinds of testing activities as described
below.
Alpha testing: Alpha testing is the system testing performed by the development
team.
Beta testing: Beta testing is the system testing performed by a friendly set of
customers.
Acceptance testing: After the software has been delivered, the customer
performs acceptance testing to determine whether to accept the delivered
software or reject it.
6. Maintenance
Maintenance is the most important phase of a software life cycle. The effort spent on
maintenance is typically around 60% of the total effort spent to develop the full software. There are
basically three types of maintenance.
Corrective Maintenance: This type of maintenance is carried out to correct errors
that were not discovered during the product development phase.
Perfective Maintenance: This type of maintenance is carried out to enhance the
functionalities of the system based on the customer’s request.
Adaptive Maintenance: Adaptive maintenance is usually required for porting the
software to work in a new environment such as working on a new computer
platform or with a new operating system.
Advantages of the Classical Waterfall Model
The classical waterfall model is an idealistic model for software development. It is
very simple, so it can be considered the basis for other software development life
cycle models. Below are some of the major advantages of this SDLC model.
Easy to Understand: Classical Waterfall Model is very simple and easy to
understand.
Individual Processing: Phases in the Classical Waterfall model are processed
one at a time.
Properly Defined: In the classical waterfall model, each stage in the model is
clearly defined.
Clear Milestones: Classical Waterfall model has very clear and well-understood
milestones.
Properly Documented: Processes, actions, and results are very well
documented.
Reinforces Good Habits: Classical Waterfall Model reinforces good habits like
define-before-design and design-before-code.
Working: Classical Waterfall Model works well for smaller projects and projects
where requirements are well understood.
Disadvantages of the Classical Waterfall Model
The Classical Waterfall Model suffers from various shortcomings; basically, we can't
use it in real projects, but we use other software development lifecycle models which
are based on the classical waterfall model. Below are some major drawbacks of this
model.
No Feedback Path: In the classical waterfall model evolution of software from
one phase to another phase is like a waterfall. It assumes that no error is ever
committed by developers during any phase. Therefore, it does not incorporate any
mechanism for error correction.
Difficult to accommodate Change Requests: This model assumes that all the
customer requirements can be completely and correctly defined at the beginning
of the project, but actually customer’s requirements keep on changing with time. It
is difficult to accommodate any change requests after the requirements
specification phase is complete.
No Overlapping of Phases: This model recommends that a new phase can start
only after the completion of the previous phase. But in real projects, this can’t be
maintained. To increase efficiency and reduce cost, phases may overlap.
Limited Flexibility: The Waterfall Model is a rigid and linear approach to software
development, which means that it is not well-suited for projects with changing or
uncertain requirements. Once a phase has been completed, it is difficult to make
changes or go back to a previous phase.
Limited Stakeholder Involvement: The Waterfall Model is a structured and
sequential approach, which means that stakeholders are typically involved in the
early phases of the project (requirements gathering and analysis) but may not be
involved in the later phases (implementation, testing, and deployment).
Late Defect Detection: In the Waterfall Model, testing is typically done toward the
end of the development process. This means that defects may not be discovered
until late in the development process, which can be expensive and time-
consuming to fix.
Lengthy Development Cycle: The Waterfall Model can result in a lengthy
development cycle, as each phase must be completed before moving on to the
next. This can result in delays and increased costs if requirements change or new
issues arise.
Not Suitable for Complex Projects: The Waterfall Model is not well-suited for
complex projects, as the linear and sequential nature of the model can make it
difficult to manage multiple dependencies and interrelated components.
When to Use the Classical Waterfall Model
The requirements are well defined, unambiguous, and fixed.
The product definition is stable.
The technology is well understood.
There are no ambiguous requirements.
Ample resources with the necessary expertise are readily available.
The project is short.
The Waterfall approach involves little client engagement in the product development
process. The product can only be shown to end consumers when it is ready.
Applications of Classical Waterfall Model
Large-scale Software Development Projects: The Waterfall Model is often used
for large-scale software development projects, where a structured and sequential
approach is necessary to ensure that the project is completed on time and within
budget.
Safety-Critical Systems: The Waterfall Model is often used in the development of
safety-critical systems, such as aerospace or medical systems, where the
consequences of errors or defects can be severe.
Government and Defense Projects: The Waterfall Model is also commonly used
in government and defense projects, where a rigorous and structured approach is
necessary to ensure that the project meets all requirements and is delivered on
time.
Projects with well-defined Requirements: The Waterfall Model is best suited for
projects with well-defined requirements, as the sequential nature of the model
requires a clear understanding of the project objectives and scope.
Projects with Stable Requirements: The Waterfall Model is also well-suited for
projects with stable requirements, as the linear nature of the model does not allow
for changes to be made once a phase has been completed.
For more, you can refer to the Uses of Waterfall Model .
FAQs
1. What is the difference between Waterfall Model and Agile Model?
Answer:
The main difference between the Waterfall Model and the Agile Model is that the
Waterfall Model relies on thorough up-front planning, whereas the Agile Model is
more flexible as it carries out these processes in repeating cycles.
2. What are the benefits of the Waterfall Model?
Answer:
The Waterfall Model has several benefits: it keeps the project well-defined and
predictable and helps keep it within budget.
For more Software Engineering Models, you can refer to Iterative Model, Agile
Model, Spiral Model, etc.
1. Planning
The first phase of the Spiral Model is the planning phase, where the scope of the
project is determined and a plan is created for the next iteration of the spiral.
2. Risk Analysis
In the risk analysis phase, the risks associated with the project are identified and
evaluated.
3. Engineering
In the engineering phase, the software is developed based on the requirements
gathered in the previous iteration.
4. Evaluation
In the evaluation phase, the software is evaluated to determine if it meets the
customer’s requirements and if it is of high quality.
5. Planning
The next iteration of the spiral begins with a new planning phase, based on the
results of the evaluation.
The Spiral Model is often used for complex and large software development projects,
as it allows for a more flexible and adaptable approach to software development. It is
also well-suited to projects with significant uncertainty or high levels of risk.
The radius of the spiral at any point represents the expenses (cost) of the project so
far, and the angular dimension represents the progress made so far in the current
phase.
Each phase of the Spiral Model is divided into four quadrants as shown in the above
figure. The functions of these four quadrants are discussed below:
1. Objectives determination and identify alternative solutions: Requirements are gathered
from the customers and the objectives are identified, elaborated, and analyzed at
the start of every phase. Then alternative solutions possible for the phase are
proposed in this quadrant.
2. Identify and resolve Risks: During the second quadrant, all the possible solutions
are evaluated to select the best possible solution. Then the risks associated with
that solution are identified and the risks are resolved using the best possible
strategy. At the end of this quadrant, the Prototype is built for the best possible
solution.
3. Develop the next version of the Product: During the third quadrant, the identified
features are developed and verified through testing. At the end of the third
quadrant, the next version of the software is available.
4. Review and plan for the next Phase: In the fourth quadrant, the Customers evaluate
the so-far developed version of the software. In the end, planning for the next
phase is started.
Risk Handling in Spiral Model
A risk is any adverse situation that might affect the successful completion of a
software project. The most important feature of the spiral model is handling these
unknown risks after the project has started. Such risk resolutions are easier done by
developing a prototype.
1. The spiral model supports coping with risks by providing the scope to build a
prototype at every phase of software development.
2. The Prototyping Model also supports risk handling, but the risks must be identified
completely before the start of the development work of the project.
3. But in real life, project risk may occur after the development work starts, in that
case, we cannot use the Prototyping Model.
4. In each phase of the Spiral Model, the features of the product are elaborated and analyzed,
and the risks at that point in time are identified and are resolved through
prototyping.
5. Thus, this model is much more flexible compared to other SDLC models.
Why Spiral Model is called Meta Model?
The Spiral model is called a Meta-Model because it subsumes all the other SDLC
models. For example, a single loop spiral actually represents the Iterative Waterfall
Model.
1. The spiral model incorporates the stepwise approach of the Classical Waterfall
Model.
2. The spiral model uses the approach of the Prototyping Model by building a
prototype at the start of each phase as a risk-handling technique.
3. Also, the spiral model can be considered as supporting the Evolutionary model –
the iterations along the spiral can be considered as evolutionary levels through
which the complete system is built.
Advantages of the Spiral Model
Below are some advantages of the Spiral Model.
1. Risk Handling: The projects with many unknown risks that occur as the
development proceeds, in that case, Spiral Model is the best development model
to follow due to the risk analysis and risk handling at every phase.
2. Good for large projects: It is recommended to use the Spiral Model in large and
complex projects.
3. Flexibility in Requirements: Change requests in the Requirements at a later phase
can be incorporated accurately by using this model.
4. Customer Satisfaction: Customers can see the development of the product at the
early phases of software development and thus they become habituated to the
system by using it before completion of the total product.
5. Iterative and Incremental Approach: The Spiral Model provides an iterative and
incremental approach to software development, allowing for flexibility and
adaptability in response to changing requirements or unexpected events.
6. Emphasis on Risk Management: The Spiral Model places a strong emphasis on risk
management, which helps to minimize the impact of uncertainty and risk on the
software development process.
7. Improved Communication: The Spiral Model provides for regular evaluations and
reviews, which can improve communication between the customer and the
development team.
8. Improved Quality: The Spiral Model allows for multiple iterations of the software
development process, which can result in improved software quality and reliability.
Disadvantages of the Spiral Model
Below are some main disadvantages of the spiral model.
1. Complex: The Spiral Model is much more complex than other SDLC models.
2. Expensive: Spiral Model is not suitable for small projects as it is expensive.
3. Too much dependence on Risk Analysis: The successful completion of the project is
very much dependent on Risk Analysis. Without very highly experienced experts,
it is going to be a failure to develop a project using this model.
4. Difficulty in time management: As the number of phases is unknown at the start of
the project, time estimation is very difficult.
5. Complexity: The Spiral Model can be complex, as it involves multiple iterations of
the software development process.
6. Time-Consuming: The Spiral Model can be time-consuming, as it requires multiple
evaluations and reviews.
7. Resource Intensive: The Spiral Model can be resource-intensive, as it requires a
significant investment in planning, risk analysis, and evaluations.
The most serious issue we face in the waterfall model is that it takes a long time to
finish the product, and the product may become obsolete by then. To tackle this
issue, we have another approach, known as the Spiral model, which is also called
the cyclic model.
When To Use the Spiral Model?
1. The Spiral Model is used when a project is large.
2. A spiral approach is utilized when frequent releases are necessary.
3. When it is appropriate to create a prototype
4. When evaluating risks and costs is crucial
5. The spiral approach is beneficial for projects with moderate to high risk.
6. The SDLC’s spiral model is helpful when requirements are complicated and
ambiguous.
7. If modifications are possible at any moment
8. When committing to a long-term project is impractical owing to shifting economic
priorities.
Questions For Practice
1. Match each software lifecycle model in List – I to its description in List – II: [UGC
NET CSE 2016]
Options (matching List-I items I to V with List-II items a to e):
(A) I-e, II-b, III-a, IV-c, V-d
(B) I-e, II-c, III-a, IV-b, V-d
(C) I-d, II-a, III-b, IV-c, V-e
(D) I-c, II-e, III-a, IV-b, V-d
The Prototyping Model is one of the most popularly used Software Development Life
Cycle Models (SDLC models) . This model is used when the customers do not know
the exact project requirements beforehand. In this model, a prototype of the end
product is first developed, tested, and refined as per customer feedback repeatedly
till a final acceptable prototype is achieved which forms the basis for developing the
final product.
In this process model, the system is partially implemented before or during the
analysis phase thereby giving the customers an opportunity to see the product early
in the life cycle. The process starts by interviewing the customers and developing the
incomplete high-level paper model. This document is used to build the initial
prototype supporting only the basic functionality as desired by the customer. Once
the customer figures out the problems, the prototype is further refined to eliminate
them. The process continues until the user approves the prototype and finds the
working model to be satisfactory.
Steps Prototyping Model
Step 1: Requirement Gathering and Analysis: This is the initial step in designing a
prototype model. In this phase, users are asked about what they expect or what they
want from the system.
Step 2: Quick Design: This is the second step in Prototyping Model. This model
covers the basic design of the requirement through which a quick overview can be
easily described.
Step 3: Build a Prototype: This step helps in building an actual prototype from the
knowledge gained from prototype design.
Step 4: Initial User Evaluation: In this step, the prototype is presented to the
customer for preliminary testing, and the customer identifies the strengths and
weaknesses of the design, which are sent back to the developer.
Step 5: Refining Prototype: If the user gives any feedback, the prototype is refined
according to the feedback and suggestions until the final system is approved.
Step 6: Implement Product and Maintain: This is the final step in the Prototyping
Model, where the final system is tested and deployed to production; the program is
then maintained regularly to prevent failures.
For more, you can refer to Software Prototyping Model Phases .
Types of Prototyping Models
There are four types of Prototyping Models, which are described below.
Rapid Throwaway Prototyping
Evolutionary Prototyping
Incremental Prototyping
Extreme Prototyping
1. Rapid Throwaway Prototyping
This technique offers a useful method of exploring ideas and getting customer
feedback for each of them. In this method, a developed prototype need not
necessarily be a part of the ultimately accepted prototype. Customer feedback helps
in preventing unnecessary design faults and hence, the final prototype developed is
of better quality.
2. Evolutionary Prototyping
In this method, the prototype developed initially is incrementally refined on the basis
of customer feedback till it finally gets accepted. In comparison to Rapid Throwaway
Prototyping, it offers a better approach that saves time as well as effort. This is
because developing a prototype from scratch for every iteration of the process can
sometimes be very frustrating for the developers.
3. Incremental Prototyping
In this type of incremental Prototyping, the final expected product is broken into
different small pieces of prototypes and developed individually. In the end, when all
individual pieces are properly developed, then the different prototypes are collectively
merged into a single final product in their predefined order. It’s a very efficient
approach that reduces the complexity of the development process, where the goal is
divided into sub-parts and each sub-part is developed individually. The time interval
between the project’s beginning and final delivery is substantially reduced because
all parts of the system are prototyped and tested simultaneously. Of course, there
is the possibility that the pieces just do not fit together due to inconsistencies
introduced during development – this can only be fixed by careful and complete
planning of the entire system before prototyping starts.
4. Extreme Prototyping
This method is mainly used for web development. It consists of three sequential
independent phases:
1. In this phase, a basic prototype with all the existing static pages is presented in
HTML format.
2. In the 2nd phase, Functional screens are made with a simulated data process
using a prototype services layer.
3. This is the final step where all the services are implemented and associated with
the final prototype.
This Extreme Prototyping method makes the project cycling and delivery robust and
fast and keeps the entire developer team focused and centralized on product
deliveries rather than discovering all possible needs and specifications and adding
unnecessary features.
Advantages of Prototyping Model
The customers get to see the partial product early in the life cycle. This ensures a
greater level of customer satisfaction and comfort.
New requirements can be easily accommodated as there is scope for refinement.
Missing functionalities can be easily figured out.
Errors can be detected much earlier thereby saving a lot of effort and cost,
besides enhancing the quality of the software.
The developed prototype can be reused by the developer for more complicated
projects in the future.
Flexibility in design.
Early feedback from customers and stakeholders can help guide the development
process and ensure that the final product meets their needs and expectations.
Prototyping can be used to test and validate design decisions, allowing for
adjustments to be made before significant resources are invested in development.
Prototyping can help reduce the risk of project failure by identifying potential
issues and addressing them early in the process.
Prototyping can facilitate communication and collaboration among team members
and stakeholders, improving overall project efficiency and effectiveness.
Prototyping can help bridge the gap between technical and non-technical
stakeholders by providing a tangible representation of the product.
Disadvantages of the Prototyping Model
Costly with respect to time as well as money.
There may be too much variation in requirements each time the prototype is
evaluated by the customer.
Poor Documentation due to continuously changing customer requirements.
It is very difficult for developers to accommodate all the changes demanded by the
customer.
There is uncertainty in determining the number of iterations that would be required
before the prototype is finally accepted by the customer.
After seeing an early prototype, the customers sometimes demand the actual
product to be delivered soon.
Developers in a hurry to build prototypes may end up with sub-optimal solutions.
The customer might lose interest in the product if he/she is not satisfied with the
initial prototype.
The prototype may not be scalable to meet the future needs of the customer.
The prototype may not accurately represent the final product due to limited
functionality or incomplete features.
The focus on prototype development may shift the focus away from the final
product, leading to delays in the development process.
The prototype may give a false sense of completion, leading to the premature
release of the product.
The prototype may not consider technical feasibility and scalability issues that can
arise during the final product development.
The prototype may be developed using different tools and technologies, leading to
additional training and maintenance costs.
The prototype may not reflect the actual business requirements of the customer,
leading to dissatisfaction with the final product.
Applications of Prototyping Model
The Prototyping Model should be used when the requirements of the product are
not clearly understood or are unstable.
The prototyping model can also be used if requirements are changing quickly.
This model can be successfully used for developing user interfaces, high-
technology software-intensive systems, and systems with complex algorithms and
interfaces.
The prototyping Model is also a very good choice to demonstrate the technical
feasibility of the product.
For more software engineering models, you can refer to Classical Waterfall
Model, Spiral Model, and Iterative Waterfall Model .
A, B, and C are modules of Software Products that are incrementally developed and
delivered.
Life cycle activities:
Requirements of Software are first broken down into several modules that can be
incrementally constructed and delivered. At any time, the plan is made just for the
next increment and not for any kind of long-term plan. Therefore, it is easier to modify
the version as per the need of the customer. The Development Team first undertakes
to develop core features (these do not need services from other features) of the
system.
Once the core features are fully developed, then these are refined to increase levels
of capabilities by adding new functions in Successive versions. Each incremental
version is usually developed using an iterative waterfall model of development.
As each successive version of the software is constructed and delivered, the
feedback of the customer is taken and incorporated into the next version. Each
version of the software has more additional features than the
previous ones.
After Requirements gathering and specification, requirements are then split into
several different versions starting with version 1, in each successive increment, the
next version is constructed and then deployed at the customer site. After the last
version (version n), it is now deployed at the client site.
Types of Incremental model:
1. Staged Delivery Model: Construction of only one part of the project at a time.
When to use this:
1. Funding Schedule, Risk, Program Complexity, or need for early realization of
benefits.
2. When Requirements are known up-front.
3. When Projects have lengthy development schedules.
4. Projects with new Technology.
Benefits of this approach:
Error reduction (core modules are used by the customer from the beginning of
the phase and are then tested thoroughly).
Uses divide and conquer for a breakdown of tasks.
Lowers initial delivery cost.
Incremental resource deployment.
Drawbacks of this approach:
Requires good planning and design.
The total cost is not lower.
Well-defined module interfaces are required.
Characteristics of an Incremental model –
System development is divided into several smaller projects.
To create a final complete system, partial systems are constructed one after the
other.
Priority requirements are addressed first.
The requirements for that increment are frozen once they are created.
Advantages-
1. Prepares the software fast.
2. Clients have a clear idea of the project.
3. Changes are easy to implement.
4. Provides risk handling support, because of its iterations.
5. Adjusting the criteria and scope is flexible and less costly.
6. Comparing this model to others, it is less expensive.
7. The identification of errors is simple.
Disadvantages-
1. A good team and proper planned execution are required.
2. Because of its continuous iterations the cost increases.
3. Issues may arise in the system design if all requirements are not gathered up front
for the entire program lifecycle.
4. Every iteration step is distinct and does not flow into the next.
5. It takes a lot of time and effort to fix an issue in one unit if it needs to be corrected
in all the units.
Software Engineering | Rapid application
development model (RAD)
The Rapid Application Development Model was first proposed by IBM in the 1980s.
The RAD model is a type of incremental process model in which there is an extremely
short development cycle. When the requirements are fully understood and the
component-based construction approach is adopted then the RAD model is used.
Various phases in RAD are Requirements Gathering , Analysis and Planning, Design,
Build or Construction, and finally Deployment.
The critical feature of this model is the use of powerful development tools and
techniques. A software project can be implemented using this model if the project can
be broken down into small modules wherein each module can be assigned
independently to separate teams. These modules can finally be combined to form the
final product. Development of each module involves the various basic steps as in the
waterfall model i.e. analyzing, designing, coding, and then testing, etc. as shown in
the figure. Another striking feature of this model is a short time span i.e. the time
frame for delivery (time-box) is generally 60-90 days.
Multiple teams work on developing the software system in parallel using the RAD
model.
The use of powerful developer tools such as JAVA, C++, Visual BASIC, XML, etc. is
also an integral part of the projects. This model consists of 4 basic phases:
1. Requirements Planning – It involves the use of various techniques used in
requirements elicitation like brainstorming, task analysis, form analysis, user
scenarios, FAST (Facilitated Application Development Technique), etc. It also
consists of the entire structured plan describing the critical data, methods to obtain
it, and then processing it to form a final refined model.
2. User Description – This phase consists of taking user feedback and building the
prototype using developer tools. In other words, it includes re-examination and
validation of the data collected in the first phase. The dataset attributes are also
identified and elucidated in this phase.
3. Construction – In this phase, refinement of the prototype and delivery takes place.
It includes the actual use of powerful automated tools to transform processes and
data models into the final working product. All the required modifications and
enhancements are too done in this phase.
4. Cutover – All the interfaces between the independent modules developed by
separate teams have to be tested properly. The use of powerfully automated tools
and subparts makes testing easier. This is followed by acceptance testing by the
user.
The process involves building a rapid prototype, delivering it to the customer, and
taking feedback. After validation by the customer, the SRS document is developed
and the design is finalized.
When to use RAD Model?
When the customer has well-known requirements, the user is involved throughout the
life cycle, the project can be time-boxed, the functionality delivered in increments,
high performance is not required, low technical risks are involved and the system can
be modularized. In these cases, we can use the RAD Model. It is also appropriate
when it is necessary to design a system that can be divided into smaller units within
two to three months, and when there is enough money in the budget to pay for both
the expense of automated tools for code creation and designers for modeling.
Advantages:
The use of reusable components helps to reduce the cycle time of the project.
Feedback from the customer is available at the initial stages.
Reduced costs as fewer developers are required.
The use of powerful development tools results in better quality products in
comparatively shorter time spans.
The progress and development of the project can be measured through the
various stages.
It is easier to accommodate changing requirements due to the short iteration time
spans.
Productivity may be quickly boosted with a lower number of employees.
Disadvantages:
The use of powerful and efficient tools requires highly skilled professionals.
The absence of reusable components can lead to the failure of the project.
The team leader must work closely with the developers and customers to close
the project on time.
The systems which cannot be modularized suitably cannot use this model.
Customer involvement is required throughout the life cycle.
It is not meant for small-scale projects as in such cases, the cost of using
automated tools and techniques may exceed the entire budget of the project.
Not every application can be used with RAD.
Applications:
1. This model should be used for a system with known requirements and requiring a
short development time.
2. It is also suitable for projects where requirements can be modularized and
reusable components are also available for development.
3. The model can also be used when already existing system components can be
used in developing a new system with minimum changes.
4. This model can only be used if the teams consist of domain experts. This is
because relevant knowledge and the ability to use powerful techniques are a
necessity.
5. The model should be chosen when the budget permits the use of automated tools
and techniques required.
Drawbacks of rapid application development:
It requires multiple teams or a large number of people to work on scalable projects.
This model requires heavily committed developers and customers. If commitment is
lacking, RAD projects will fail.
Projects using the RAD model require heavy resources.
If there is no appropriate modularization, RAD projects fail. Performance can be a
problem in such projects.
Projects using the RAD model find it difficult to adopt new technologies.
What is RAD Model?
Unlike the traditional SDLC model in which the end product is available in the end, in
the RAD model (Rapid Application Development) after each iteration the model is
shown to the client and based on the feedback of the client, necessary changes will
be done, hence in this, there is the total involvement of the client in every phase of
the model.
1. It represents a Radical shift in software development.
2. In this model, the product is continually demonstrated to the user to provide the
required input to help enhance it.
3. It is suited for developing software that is driven by user interface requirements.
4. It emphasizes incremental and iterative delivery of functioning models to the
client.
Various Phases of RAD Model
1. Planning: The initial phase is planning, which involves requirement gathering and discussing the timeline of the project.
2. Prototype: In this phase the prototype is constructed so that it can be shown to the client and necessary changes can be made quickly, unlike the traditional SDLC model, where the complete model is constructed first.
3. Feedback: Once the prototype is available, it is shown to the client, feedback is collected, and further actions are taken depending on their requirements. If the client requires any change, those changes are made until there are no further modifications from the client's side.
4. Deployment: Once the above three phases are completed, the application is deployed to the client.
Benefits of RAD Model
1. Better quality software: It produces better quality software that is more usable and more focused on business needs.
2. Better reusability: The RAD Model offers better reusability of components.
3. Flexible: The RAD Model is more flexible, as it allows easy adjustments.
4. Minimum failures: It helps in completing projects on time and within budget, so failures are minimal in the RAD Model.
Differences Between Traditional SDLC and RAD Model
Application Development Approach: In the RAD Model, the different stages of application development can be reviewed and repeated, as the approach is iterative. The traditional SDLC follows a predictive, inflexible, and rigid approach to application development.
Changes: In the RAD Model it is easier to accommodate changes. In the traditional SDLC it is difficult to accommodate changes due to the sequential nature of the model.
Customer Feedback: The RAD Model involves extensive customer feedback, leading to more customer satisfaction and better quality of the final software. The traditional SDLC allows only limited customer feedback.
Team Size: In the RAD Model, separate small teams can be assigned to individual modules. In the traditional SDLC, as there is no modularization, a larger team is required for the different stages, with strictly defined roles.
9. Never give up on excellence.
10. Take advantage of change to gain a competitive edge.
The Agile Software Development Process:
Step 1: In the first step, the concept and business opportunities for each possible project are identified, and the amount of time and work needed to complete the project is estimated. Based on their technical and financial viability, projects can then be prioritized and it can be determined which ones are worth pursuing.
Step 2: In the second phase, known as inception, the customer is consulted
regarding the initial requirements, team members are selected, and funding is
secured. Additionally, a schedule outlining each team’s responsibilities and the
precise time at which each sprint’s work is expected to be finished should be
developed.
Step 3: Teams begin building functional software in the third step,
iteration/construction, based on requirements and ongoing feedback. Iterations,
also known as single development cycles, are the foundation of the Agile software
development cycle.
Design Process of Agile Software Development:
In Agile development, Design and Implementation are considered to be the central
activities in the software process.
The design and Implementation phase also incorporates other activities such as
requirements elicitation and testing.
In an agile approach, iteration occurs across activities. Therefore, the
requirements and the design are developed together, rather than separately.
The allocation of requirements and the planning, design, and development are executed in a series of increments. In contrast with the conventional model, where requirements gathering needs to be completed before proceeding to the design and development phase, this gives Agile development an extra level of flexibility.
An agile process focuses more on code development rather than documentation.
Example of Agile Software Development:
Let’s go through an example to understand clearly how agile works. A Software
company named ABC wants to make a new web browser for the latest release of its
operating system. The deadline for the task is 10 months. The company’s head
assigned two teams named Team A and Team B for this task. To motivate the
teams, the company head says that the first team to develop the browser would be
given a salary hike and a one-week full-sponsored travel plan. With the dreams of
their wild travel fantasies, the two teams set out on the journey of the web browser.
Team A decided to play by the book and decided to choose the Waterfall model for
the development. Team B after a heavy discussion decided to take a leap of faith and
choose Agile as their development model. The Development Plan of the Team A is
as follows:
Requirement analysis and Gathering – 1.5 Months
Design of System – 2 Months
Coding phase – 4 Months
System Integration and Testing – 2 Months
User Acceptance Testing – 5 Weeks
The Development Plan for the Team B is as follows:
Since this was Agile, the project was broken up into several iterations.
The iterations are all of the same time duration.
At the end of each iteration, a working product with a new feature has to be
delivered.
Instead of Spending 1.5 months on requirements gathering, they will decide the
core features that are required in the product and decide which of these features
can be developed in the first iteration.
Any remaining features that cannot be delivered in the first iteration will be
delivered in the next subsequent iteration, based on the priority.
At the end of the first iteration, the team will deliver working software with the core basic features.
The team has put their best efforts into getting the product to a complete stage. But then, out of the blue, due to the rapidly changing environment, the company's head came up with an entirely new set of features that he wanted implemented as quickly as possible, and he wanted a working model pushed out in 2 days. Team A was now in a fix; they were still in their design phase and had not yet started coding, so they had no working model to display. Moreover, it was practically impossible for them to implement new features, since in the waterfall model there is no going back to an old phase once you proceed to the next stage, which means they would have to start from square one again. That would incur heavy costs and a lot of overtime. Team B was ahead of Team A in a lot of aspects, all thanks to Agile Development. They already had a working product with most of the core requirements since the first increment, and it was a piece of cake for them to add the new requirements. All they had to do was schedule these requirements for the next increment and then implement them.
Disadvantages of Agile Development:
Agile development is heavily dependent on the inputs of the customer. If the customer has ambiguity in their vision of the outcome, it is highly likely that the project will get off track.
Face-to-face communication is harder in large-scale organizations.
Only senior programmers are capable of making the kind of decisions required
during the development process. Hence, it’s a difficult situation for new
programmers to adapt to the environment.
Lack of predictability: Agile Development relies heavily on customer feedback
and continuous iteration, which can make it difficult to predict project outcomes,
timelines, and budgets.
Limited scope control: Agile Development is designed to be flexible and
adaptable, which means that scope changes can be easily accommodated.
However, this can also lead to scope creep and a lack of control over the project
scope.
Lack of emphasis on testing: Agile Development places a greater emphasis on
delivering working code quickly, which can lead to a lack of focus on testing and
quality assurance. This can result in bugs and other issues that may go
undetected until later stages of the project.
Risk of team burnout: Agile Development can be intense and fast-paced, with
frequent sprints and deadlines. This can put a lot of pressure on team members
and lead to burnout, especially if the team is not given adequate time for rest and
recovery.
Lack of structure and governance: Agile Development is often less formal and
structured than other development methodologies, which can lead to a lack of
governance and oversight. This can result in inconsistent processes and
practices, which can impact project quality and outcomes.
Agile is a framework that defines how software development needs to be carried out. Agile is not a single method; it represents a collection of methods and practices that follow the value statements provided in the manifesto. Agile methods and practices do not promise to solve every problem present in the software industry (no software model ever can), but they certainly help to establish a culture and environment where solutions emerge.
Agile software development is an iterative and incremental approach to software
development. It emphasizes collaboration between the development team and the
customer, flexibility, and adaptability in the face of changing requirements, and the
delivery of working software in short iterations.
The Agile Manifesto, which outlines the principles of agile development, values
individuals and interactions, working software, customer collaboration, and response
to change.
Practices of Agile Software Development:
Scrum: Scrum is a framework for agile software development that involves
iterative cycles called sprints, daily stand-up meetings, and a product backlog that
is prioritized by the customer.
Kanban: Kanban is a visual system that helps teams manage their work and
improve their processes. It involves using a board with columns to represent
different stages of the development process, and cards or sticky notes to
represent work items.
Continuous Integration: Continuous Integration is the practice of frequently
merging code changes into a shared repository, which helps to identify and
resolve conflicts early in the development process.
Test-Driven Development: Test-Driven Development (TDD) is a development practice that involves writing automated tests before writing the code. This helps to ensure that the code meets the requirements and reduces the likelihood of defects (a minimal sketch of this practice follows this list).
Pair Programming: Pair programming involves two developers working together
on the same code. This helps to improve code quality, share knowledge, and
reduce the likelihood of defects.
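To make the Test-Driven Development practice concrete, here is a minimal, hypothetical sketch in Python using the standard unittest module; the shopping-cart class and method names are invented for illustration and do not come from the text.

```python
import unittest

# TDD step 1: the test is written first, describing the behaviour we want.
class TestCartTotal(unittest.TestCase):
    def test_total_sums_item_prices(self):
        cart = Cart()
        cart.add_item("book", 12.50)
        cart.add_item("pen", 2.50)
        self.assertEqual(cart.total(), 15.00)

# TDD step 2: just enough implementation is written to make the test pass.
class Cart:
    def __init__(self):
        self._items = []

    def add_item(self, name, price):
        self._items.append((name, price))

    def total(self):
        return sum(price for _, price in self._items)

if __name__ == "__main__":
    unittest.main()
```

In a real TDD cycle the test would first fail (no Cart exists yet), the minimal implementation would then make it pass, and the code would finally be refactored while the test keeps guarding the behaviour.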
Advantages of Agile Software Development over traditional software development
approaches:
1. Increased customer satisfaction: Agile development involves close
collaboration with the customer, which helps to ensure that the software meets
their needs and expectations.
2. Faster time-to-market: Agile development emphasizes the delivery of working
software in short iterations, which helps to get the software to market faster.
3. Reduced risk: Agile development involves frequent testing and feedback, which
helps to identify and resolve issues early in the development process.
4. Improved team collaboration: Agile development emphasizes collaboration and
communication between team members, which helps to improve productivity and
morale.
5. Adaptability to change: Agile Development is designed to be flexible and
adaptable, which means that changes to the project scope, requirements, and
timeline can be accommodated easily. This can help the team to respond quickly
to changing business needs and market demands.
6. Better quality software: Agile Development emphasizes continuous testing and
feedback, which helps to identify and resolve issues early in the development
process. This can lead to higher-quality software that is more reliable and less
prone to errors.
7. Increased transparency: Agile Development involves frequent communication
and collaboration between the team and the customer, which helps to improve
transparency and visibility into the project status and progress. This can help to
build trust and confidence with the customer and other stakeholders.
8. Higher productivity: Agile Development emphasizes teamwork and
collaboration, which helps to improve productivity and reduce waste. This can lead
to faster delivery of working software with fewer defects and rework.
9. Improved project control: Agile Development emphasizes continuous monitoring
and measurement of project metrics, which helps to improve project control and
decision-making. This can help the team to stay on track and make data-driven
decisions throughout the development process.
In summary, Agile software development is a popular approach to software
development that emphasizes collaboration, flexibility, and the delivery of working
software in short iterations. It has several advantages over traditional software
development approaches, including increased customer satisfaction, faster time-to-
market, and reduced risk.
What is Extreme Programming (XP)?
Extreme programming (XP) is one of the most important software development
frameworks of Agile models. It is used to improve software quality and
responsiveness to customer requirements. The extreme programming model
recommends taking the best practices that have worked well in the past in program
development projects to extreme levels.
Good practices to follow in extreme programming: Some of the good practices that have been recognized in the extreme programming model, and which it suggests should be used to the maximum, are given below:
Code Review: Code review detects and corrects errors efficiently. XP suggests pair programming, in which coding and reviewing of the written code are carried out by a pair of programmers who switch roles frequently (for example, every hour).
Testing: Testing code helps to remove errors and improves its reliability. XP suggests test-driven development (TDD) to continually write and execute test cases. In the TDD approach, test cases are written even before any code is written.
Incremental development: Incremental development is very good because customer feedback is gained continuously, and based on this feedback the development team comes up with a new increment every few days, after each iteration.
Simplicity: Simplicity makes it easier to develop good-quality code as well as to
test and debug it.
Design: Good quality design is important to develop good quality software. So,
everybody should design daily.
Integration testing: It helps to identify bugs at the interfaces of different
functionalities. Extreme programming suggests that the developers should
achieve continuous integration by building and performing integration testing
several times a day.
Basic principles of Extreme programming: XP is based on the frequent iteration
through which the developers implement User Stories. User stories are simple and
informal statements of the customer about the functionalities needed. A User Story is
a conventional description by the user of a feature of the required system. It does not
mention finer details such as the different scenarios that can occur. Based on User
stories, the project team proposes Metaphors. Metaphors are a common vision of
how the system would work. The development team may decide to build a Spike for
some features. A Spike is a very simple program that is constructed to explore the
suitability of a solution being proposed. It can be considered similar to a prototype.
Some of the basic activities that are followed during software development by using
the XP model are given below:
Coding: The concept of coding which is used in the XP model is slightly different
from traditional coding. Here, the coding activity includes drawing diagrams
(modeling) that will be transformed into code, scripting a web-based system, and
choosing among several alternative solutions.
Testing: The XP model gives high importance to testing and considers it to be the
primary factor in developing fault-free software.
Listening: The developers need to listen carefully to the customers if they are to develop good-quality software. Sometimes programmers may not have in-depth knowledge of the system to be developed, so they should properly understand the functionality of the system, and for that they have to listen to the customers.
Designing: Without a proper design, a system implementation becomes too complex and the solution very difficult to understand, which makes maintenance expensive. A good design results in the elimination of complex dependencies within a system. So, effective use of suitable design is emphasized.
Feedback: One of the most important aspects of the XP model is to gain
feedback to understand the exact customer needs. Frequent contact with the
customer makes the development effective.
Simplicity: The main principle of the XP model is to develop a simple system that
will work efficiently in the present time, rather than trying to build something that
would take time and may never be used. It focuses on some specific features that
are immediately needed, rather than engaging time and effort on speculations of
future requirements.
Pair Programming: XP encourages pair programming where two developers
work together at the same workstation. This approach helps in knowledge sharing,
reduces errors, and improves code quality.
Continuous Integration: In XP, developers integrate their code into a shared
repository several times a day. This helps to detect and resolve integration issues
early on in the development process.
Refactoring: XP encourages refactoring, which is the process of restructuring existing code to make it more efficient and maintainable without changing its behavior (a small before-and-after sketch follows this list). Refactoring helps to keep the codebase clean, organized, and easy to understand.
Collective Code Ownership: In XP, there is no individual ownership of code.
Instead, the entire team is responsible for the codebase. This approach ensures
that all team members have a sense of ownership and responsibility towards the
code.
Planning Game: XP follows a planning game, where the customer and the
development team collaborate to prioritize and plan development tasks. This
approach helps to ensure that the team is working on the most important features
and delivers value to the customer.
On-site Customer: XP requires an on-site customer who works closely with the
development team throughout the project. This approach helps to ensure that the
customer’s needs are understood and met, and also facilitates communication and
feedback.
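As a small, hedged illustration of the refactoring activity mentioned above (the function names below are invented for the example, not taken from the text), the following Python snippet restructures a piece of code without changing its behaviour:

```python
# Before refactoring: unclear names and a manual accumulation loop.
def calc(values):
    t = 0
    for v in values:
        if v > 0:
            t = t + v
    return t

# After refactoring: same behaviour, a descriptive name and an idiomatic body.
def sum_of_positive_values(values):
    """Return the sum of the strictly positive numbers in `values`."""
    return sum(v for v in values if v > 0)

# Behaviour is unchanged, which is the defining property of a refactoring.
assert calc([3, -1, 4]) == sum_of_positive_values([3, -1, 4]) == 7
```

Because the externally visible behaviour stays the same, existing tests can simply be rerun after the change to confirm that nothing was broken.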
Applications of Extreme Programming (XP): Some of the projects that are suitable
to develop using the XP model are given below:
Small projects: The XP model is very useful in small projects consisting of small
teams as face-to-face meeting is easier to achieve.
Projects involving new technology or research projects: This type of project faces rapidly changing requirements and technical problems, so the XP model is used to complete this type of project.
Web development projects: The XP model is well-suited for web development
projects as the development process is iterative and requires frequent testing to
ensure the system meets the requirements.
Collaborative projects: The XP model is useful for collaborative projects that
require close collaboration between the development team and the customer.
Projects with tight deadlines: The XP model can be used in projects that have a
tight deadline, as it emphasizes simplicity and iterative development.
Projects with rapidly changing requirements: The XP model is designed to
handle rapidly changing requirements, making it suitable for projects where
requirements may change frequently.
Projects where quality is a high priority: The XP model places a strong emphasis on testing and quality assurance, making it a suitable approach for projects where quality is a high priority.
Extreme Programming (XP) is an Agile software development methodology that
focuses on delivering high-quality software through frequent and continuous
feedback, collaboration, and adaptation. XP emphasizes a close working relationship
between the development team, the customer, and stakeholders, with an emphasis
on rapid, iterative development and deployment.
Agile development approaches evolved in the 1990s as a reaction to documentation
and bureaucracy-based processes, particularly the waterfall approach. Agile
approaches are based on some common principles, some of which are:
1. Working software is the key measure of progress in a project.
2. For progress in a project, therefore software should be developed and delivered
rapidly in small increments.
3. Even late changes in the requirements should be entertained.
4. Face-to-face communication is preferred over documentation.
5. Continuous feedback and involvement of customers is necessary for developing
good-quality software.
6. A simple design, which evolves and improves with time, is a better approach than doing an elaborate design up front to handle all possible scenarios.
7. The delivery dates are decided by empowered teams of talented individuals.
Extreme programming is one of the most popular and well-known approaches in the family of agile methods. An XP project starts with user stories, which are short descriptions of the scenarios the customers and users would like the system to support. Each story is written on a separate card, so they can be flexibly grouped. XP, and other agile methods, are suitable for situations where the volume and pace of requirements change are high and where requirement risks are considerable.
XP includes the following practices:
1. Continuous Integration: Code is integrated and tested frequently, with all changes
reviewed by the development team.
2. Test-Driven Development: Tests are written before code is written, and the code is
developed to pass those tests.
3. Pair Programming: Developers work together in pairs to write code and review
each other’s work.
4. Continuous Feedback: Feedback is obtained from customers and stakeholders
through frequent demonstrations of working software.
5. Simplicity: XP prioritizes simplicity in design and implementation, to reduce
complexity and improve maintainability.
6. Collective Ownership: All team members are responsible for the code, and anyone
can make changes to any part of the codebase.
7. Coding Standards: Coding standards are established and followed to ensure
consistency and maintainability of the code.
8. Sustainable Pace: The pace of work is maintained at a sustainable level, with
regular breaks and opportunities for rest and rejuvenation.
9. Refactoring: Code is regularly refactored to improve its design and maintainability, without changing its functionality.
10. Small Releases: Software is released in small increments, allowing for frequent feedback and adjustments based on that feedback.
11. Customer Involvement: Customers are actively involved in the development process, providing feedback and clarifying requirements.
12. On-Site Customer: A representative from the customer’s organization is present with the development team to provide continuous feedback and answer questions.
13. Short Iterations: Work is broken down into short iterations, usually one to two weeks in length, to allow for rapid development and frequent feedback.
14. Planning Game: The team and customer work together to plan and prioritize the work for each iteration, to deliver the most valuable features first.
15. Metaphor: A shared metaphor is used to guide the design and implementation of the system.
XP is well-suited to projects with rapidly changing requirements, as it emphasizes flexibility and adaptability. It is also well-suited to projects with tight timelines, as it emphasizes rapid development and deployment.
Advantages of Extreme Programming (XP):
Slipped schedules − Timely delivery is ensured through achievable, doable development cycles.
Misunderstanding the business and/or domain − Constant contact and
explanations are ensured by including the client on the team.
Canceled projects − Focusing on ongoing customer engagement guarantees
open communication with the consumer and prompt problem-solving.
Staff turnover − Teamwork that is focused on cooperation provides excitement
and goodwill. Team spirit is fostered by multidisciplinary cohesion.
Costs incurred in changes − Extensive and continuing testing ensures that the
modifications do not impair the functioning of the system. A functioning system
always guarantees that there is enough time to accommodate changes without
impairing ongoing operations.
Business changes − Changes are accepted at any moment since they are seen
to be inevitable.
Production and post-delivery defects − Emphasis is on the unit tests to find and repair bugs as soon as possible.
What is the V-Model?
The V-Model is a software development life cycle (SDLC) model that provides a
systematic and visual representation of the software development process. It is
based on the idea of a “V” shape, with the two legs of the “V” representing the
progression of the software development process from requirements gathering and
analysis to design, implementation, testing, and maintenance.
V-Model Design:
1. Requirements Gathering and Analysis: The first phase of the V-Model is the
requirements gathering and analysis phase, where the customer’s requirements
for the software are gathered and analyzed to determine the scope of the project.
2. Design: In the design phase, the software architecture and design are developed,
including the high-level design and detailed design.
3. Implementation: In the implementation phase, the software is actually built based
on the design.
4. Testing: In the testing phase, the software is tested to ensure that it meets the
customer’s requirements and is of high quality.
5. Deployment: In the deployment phase, the software is deployed and put into use.
6. Maintenance: In the maintenance phase, the software is maintained to ensure
that it continues to meet the customer’s needs and expectations.
The V-Model is often used in safety-critical systems, such as aerospace and defence systems, because of its emphasis on thorough testing and its ability to clearly define the steps involved in the software development process.
Verification Phases:
Verification involves static analysis techniques (reviews) done without executing the code. It is the process of evaluating the product at each development phase to find out whether the specified requirements are met.
There are several verification phases in the V-Model:
Business Requirement Analysis:
This is the first step of the development cycle, where the product requirements are understood from the customer's perspective. This phase involves proper communication with the customer to understand their requirements. It is a very important activity that needs to be handled carefully, because most of the time customers do not know exactly what they want and are not sure about it. Acceptance test design planning is done at this stage, since the business requirements gathered here are used as an input for acceptance testing.
System Design:
The design of the system starts once we are completely clear about the product requirements; then the complete system needs to be designed. This understanding is developed at the beginning of the product development process, and it will be beneficial for the future execution of test cases.
Architectural Design:
In this stage, architectural specifications are comprehended and designed. Usually, a
number of technical approaches are put out, and the ultimate choice is made after
considering both the technical and financial viability. The system architecture is
further divided into modules that each handle a distinct function. Another name for
this is High Level Design (HLD).
At this point, the exchange of data and communication between the internal modules
and external systems are well understood and defined. During this phase, integration
tests can be created and documented using the information provided.
Module Design:
This phase, known as Low Level Design (LLD), specifies the comprehensive internal
design for each and every system module. Compatibility between the design and
other external systems as well as other modules in the system architecture is crucial.
Unit tests are a crucial component of any development process, since they help identify and eradicate the majority of mistakes and flaws at an early stage. Based on the internal module designs, these unit tests can now be created.
Coding Phase:
The Coding step involves actually writing the code for the system modules that were
created during the Design phase. The system and architectural requirements are
used to determine which programming language is most appropriate.
The coding standards and principles are followed when performing the coding. Before the final build is checked into the repository, the code undergoes many code reviews and is optimized for the best performance.
Validation Phases:
Validation involves dynamic analysis techniques (functional and non-functional testing) performed by executing the code. Validation is the process of evaluating the software after the completion of the development phase to determine whether it meets the customer's expectations and requirements.
So, the V-Model contains the verification phases on one side and the validation phases on the other side. The verification and validation phases are joined by the coding phase, forming the V shape. Thus, it is called the V-Model.
There are several Validation phases in the V-Model:
Unit Testing:
Unit test plans are developed during the module design phase. These unit test plans are executed to eliminate bugs at the code or unit level.
Integration Testing:
After completion of unit testing, integration testing is performed. In integration testing, the modules are integrated and the system is tested. Integration testing corresponds to the architectural design phase. This test verifies the communication of the modules among themselves (a minimal sketch contrasting a unit test with an integration test is shown after this list).
System Testing:
System testing tests the complete application with its functionality, interdependency, and communication. It tests the functional and non-functional requirements of the developed application.
User Acceptance Testing (UAT):
UAT is performed in a user environment that resembles the production environment. UAT verifies that the delivered system meets the user's requirements and that the system is ready for use in the real world.
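The following is a minimal sketch, using Python's standard unittest module, of how a unit-level test and an integration-level test from the V-Model test plans might look; the tax and invoice functions are hypothetical examples invented here, not taken from the text.

```python
import unittest

# A single module, tested in isolation at the unit level.
def compute_tax(amount, rate):
    """Return the tax owed on `amount` at the given `rate`."""
    return round(amount * rate, 2)

# A second module that depends on the first one.
def build_invoice(amount, rate):
    """Combine the net amount with the tax computed by compute_tax."""
    tax = compute_tax(amount, rate)
    return {"net": amount, "tax": tax, "total": amount + tax}

class UnitLevelTest(unittest.TestCase):
    # Corresponds to a unit test plan item (module design phase).
    def test_compute_tax(self):
        self.assertEqual(compute_tax(100.0, 0.2), 20.0)

class IntegrationLevelTest(unittest.TestCase):
    # Corresponds to an integration test plan item (architectural design
    # phase): it exercises the communication between the two modules.
    def test_invoice_uses_tax_module(self):
        self.assertEqual(build_invoice(100.0, 0.2)["total"], 120.0)

if __name__ == "__main__":
    unittest.main()
```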
Design Phase:
Requirement Analysis: This phase contains detailed communication with the
customer to understand their requirements and expectations. This stage is known
as Requirement Gathering.
System Design: This phase contains the system design and the complete hardware and communication setup for developing the product.
Architectural Design: System design is broken down further into modules taking up
different functionalities. The data transfer and communication between the internal
modules and with the outside world (other systems) is clearly understood.
Module Design: In this phase the system is broken down into small modules. The detailed design of the modules is specified; this is also known as Low-Level Design (LLD).
Industrial Challenge:
As the industry has evolved, the technologies have become more complex,
increasingly faster, and forever changing, however, there remains a set of basic
principles and concepts that are as applicable today as when IT was in its infancy.
Accurately define and refine user requirements.
Design and build an application according to the authorized user requirements.
Validate that the application they had built adhered to the authorized business
requirements.
Principles of V-Model:
Large to Small: In the V-Model, testing is done from a hierarchical perspective; for example, the requirements identified by the project team drive the High-Level Design and Detailed Design phases of the project. As each of these phases is completed, the requirements they define become more and more refined and detailed.
Data/Process Integrity: This principle states that the successful design of any project
requires the incorporation and cohesion of both data and processes. Process
elements must be identified at each and every requirement.
Scalability: This principle states that the V-Model concept has the flexibility to
accommodate any IT project irrespective of its size, complexity or duration.
Cross Referencing: Direct correlation between requirements and corresponding
testing activity is known as cross-referencing.
Tangible Documentation:
This principle states that every project needs to create documentation. This documentation is required and used by both the project development team and the support team. Documentation is used to maintain the application once it is available in a production environment.
Why preferred?
It is easy to manage due to the rigidity of the model. Each phase of V-Model has
specific deliverables and a review process.
Proactive defect tracking – that is, defects are found at an early stage.
When to use?
Where requirements are clearly defined and fixed.
The V-Model is used when ample technical resources are available with technical
expertise.
Small to medium-sized projects with set and clearly specified needs are
recommended to use the V-shaped model.
Since it is challenging to keep stable needs in large projects, the project should be
small.
Advantages:
This is a highly disciplined model and Phases are completed one at a time.
V-Model is used for small projects where project requirements are clear.
Simple and easy to understand and use.
This model focuses on verification and validation activities early in the life cycle
thereby enhancing the probability of building an error-free and good quality
product.
It enables project management to track progress accurately.
Clear and Structured Process: The V-Model provides a clear and structured
process for software development, making it easier to understand and follow.
Emphasis on Testing: The V-Model places a strong emphasis on testing, which
helps to ensure the quality and reliability of the software.
Improved Traceability: The V-Model provides a clear link between the
requirements and the final product, making it easier to trace and manage changes
to the software.
Better Communication: The clear structure of the V-Model helps to improve
communication between the customer and the development team.
Disadvantages:
High risk and uncertainty.
It is not a good model for complex and object-oriented projects.
It is not suitable for projects where the requirements are not clear or carry a high risk of changing.
This model does not support iteration of phases.
It does not easily handle concurrent events.
Inflexibility: The V-Model is a linear and sequential model, which can make it
difficult to adapt to changing requirements or unexpected events.
Time-Consuming: The V-Model can be time-consuming, as it requires a lot of
documentation and testing.
Overreliance on Documentation: The V-Model places a strong emphasis on
documentation, which can lead to an overreliance on documentation at the
expense of actual development work.
Maintenance of Software: This is the last and the final phase of the classic
waterfall model. In this phase the ongoing maintenance and the support of the
software occurs.
Advantages of the Classical Waterfall Model
Clear and Structured process: The model is very straightforward and it is very
easy to implement.
Documentation: Each phase requires documentation, which aids better understanding and knowledge transfer between the team members.
Well suited for small projects: The classic waterfall model works well for small projects.
Disadvantages of the Classical Waterfall Model
Inflexible: The sequential nature of the model makes it inflexible to changes. If the requirements change after the project has moved to the next stage, it becomes very time-consuming and costly to rework the changes.
Variable demands are hard to meet: This technique assumes that all client
needs can be precisely specified at the outset of the project, yet customers’ needs
change with time. After requirements definition, amendment requests are tough.
Late detection of Defects: Defects are not detected until the testing phase
comes into picture and after that resolving that particular defect becomes costly.
Risk management: The model’s structure can lead to a lack of proper risk
management.
The Iterative Waterfall Model
The iterative waterfall model is the modified version of the classical waterfall model.
The iterative waterfall model follows the sequential software development process. In the traditional Waterfall Model, each phase is finished before going on to the next one, and there is no scope to go back to phases that have already been completed. The iterative waterfall model, on the other hand, uses “iterations” to let feedback, changes, and improvements happen during the development process.
Gathering Requirements: Similar to the classical waterfall model, project
requirements are acquired from the client or stakeholders at this phase. The
requirements are analyzed for further scopes, scalability and potential risk.
Designing System: This phase includes the high level and the low level design
specification of the system’s architecture.
Implementation of Software: During this phase, the physical coding of the
software takes place. Programmers develop code in accordance with the design
specifications. This stage leads to the development of software modules and
components.
Testing of Software: The software is tested thoroughly for defects, bugs, and errors. Several kinds of testing are performed, such as unit testing, integration testing, and system testing.
Evaluation Phase: Iteration comes into play at this point. Instead of putting the
software into use right after testing, stakeholders look at it. Feedback is collected,
and any changes that need to be made are found.
Adjustment Phase: In the adjustment phase, changes are made to the software,
design, or requirements based on the comments and evaluations.
Reiteration: The cycle is repeated, allowing for incremental improvements based on stakeholder feedback and changing requirements.
Advantages of the Iterative Waterfall Model
Incorporating Feedback: In the traditional waterfall there was no option for feedback, but the iterative waterfall model allows the feedback provided in one phase to be worked back into the previous phase.
Continuous Improvement: As the software goes through iteration after iteration, it gets better and better over time.
More Flexibility: Compared to the traditional Waterfall Model, the model can
better adapt to changes in needs.
Disadvantages of the Iterative Waterfall Model
Increased Complexity: Keeping track of iterations and multiple rounds can make
the project management process more complicated.
Time and Cost: Iterations can take more time and cost more money if they are
not handled well.
Agile Model
The Agile model is an iterative and incremental approach to software development. This model is based on the Agile Manifesto, which emphasizes flexibility, collaboration, and rapid response to change. Agile development involves the continuous delivery of working software in short iterations, commonly lasting from one to four weeks. The Agile model is well suited for projects with rapidly changing requirements or for teams that value collaboration and communication. However, this model calls for a high degree of collaboration between team members, and it may be difficult to manage for large projects.
Advantages of the Agile Model
Flexibility: Agile projects are flexible, as they can easily adapt to meet new needs and goals and respond to market conditions.
Frequent Deliverables: Agile projects produce software in shorter iterations, which makes it possible to see real progress and to make changes easily.
Customer Satisfaction: Agile helps in delivering useful features early and often, which gives the customer more satisfaction.
Continuous Improvement: Agile teams continuously improve their processes and try to become more effective and efficient.
Disadvantages of the Agile Model
Lack of Predictability: Because of the flexibility of Agile, it is difficult to get exact estimates of project costs and timelines for some long-term projects.
Complex Project Management: Because Agile work is done in smaller steps, skilled project management is required to keep the overall project goals in mind.
Spiral Model
The Spiral model is a risk-driven model that combines elements of both the Waterfall and Agile models. This model involves continuous risk evaluation and mitigation throughout the software development process. The Spiral model consists of four phases: Planning, Risk Analysis, Engineering, and Evaluation. Each cycle includes a combination of planning, design, implementation, and testing. This model is useful when managing large or complicated projects where the requirements are not well understood. However, the Spiral model can be time-consuming, and it may be hard to decide when to move from one phase to another.
Selection of an Appropriate Life Cycle Model for a Project
Selection of the right life cycle model to complete a project is the most important task. It can be decided by keeping the advantages and disadvantages of the various models in mind. The particular issues that are analysed before selecting a suitable life cycle model are given below:
Characteristics of the software to be developed: The choice of the life cycle
model largely depends on the type of the software that is being developed. For
small services projects, the agile model is favoured. On the other hand, for
product and embedded development, the Iterative Waterfall model can be
preferred. The evolutionary model is suitable to develop an object-oriented
project. User interface part of the project is mainly developed through prototyping
model.
Characteristics of the development team: The team members' skill level is an important factor in deciding the life cycle model to use. If the development team is
experienced in developing similar software, then even an embedded software can
be developed using the Iterative Waterfall model. If the development team is
entirely novice, then even a simple data processing application may require a
prototyping model.
Risk associated with the project: If the risks are few and can be anticipated at
the start of the project, then prototyping model is useful. If the risks are difficult to
determine at the beginning of the project but are likely to increase as the
development proceeds, then the spiral model is the best model to use.
Characteristics of the customer: If the customer is not quite familiar with
computers, then the requirements are likely to change frequently as it would be
difficult to form complete, consistent and unambiguous requirements. Thus, a
prototyping model may be necessary to reduce later change requests from the
customers. Initially, the customer’s confidence is high on the development team.
During the lengthy development process, customer confidence normally drops off
as no working software is yet visible. So, the evolutionary model is useful as the
customer can experience a partially working software much earlier than whole
complete software. Another advantage of the evolutionary model is that it reduces
the customer’s trauma of getting used to an entirely new system.
Frequently Asked Questions
Q.1: What is the basic difference between the Spiral and the Iterative model?
Answer:
The Spiral model combines iterative development with risk analysis, whereas the
Iterative model involves repeating development phases.
Q.2: What is the Waterfall model, and how does it differ from Agile?
Answer:
Waterfall is a linear, sequential approach, while Agile is iterative and flexible,
accommodating changes during development.
User Interface Design
The analysis and design process of a user interface is iterative and can be represented by a spiral model. The analysis and design process of a user interface consists of four framework activities.
1. User, Task, Environmental Analysis, and Modeling
Initially, the focus is on the profile of the users who will interact with the system, i.e., their understanding, skill and knowledge, type of user, etc. Based on the user's profile, users are grouped into categories, and requirements are gathered from each category. Based on these requirements, the developer understands how to develop the interface.
Once all the requirements are gathered a detailed analysis is conducted. In the
analysis part, the tasks that the user performs to establish the goals of the system
are identified, described and elaborated. The analysis of the user environment
focuses on the physical work environment. Among the questions to be asked are:
1. Where will the interface be located physically?
2. Will the user be sitting, standing, or performing other tasks unrelated to the
interface?
3. Does the interface hardware accommodate space, light, or noise constraints?
4. Are there special human factors considerations driven by environmental factors?
2. Interface Design
The goal of this phase is to define the set of interface objects and actions i.e., control
mechanisms that enable the user to perform desired tasks. Indicate how these
control mechanisms affect the system. Specify the action sequence of tasks and
subtasks, also called a user scenario. Indicate the state of the system when the user
performs a particular task. Always follow the three golden rules stated by Theo
Mandel. Design issues such as response time, command and action structure, error
handling, and help facilities are considered as the design model is refined. This
phase serves as the foundation for the implementation phase.
3. Interface Construction and Implementation
The implementation activity begins with the creation of a prototype (model) that enables usage scenarios to be evaluated. As the iterative design process continues, a user interface toolkit that allows the creation of windows, menus, device interaction, error messages, commands, and many other elements of an interactive environment can be used to complete the construction of the interface.
4. Interface Validation
This phase focuses on testing the interface. The interface should be able to perform tasks correctly and should be able to handle a variety of tasks. It should meet all the user's requirements. It should be easy to use and easy to learn. Users should accept the interface as a useful tool in their work.
User Interface Design Golden Rules
The following are the golden rules stated by Theo Mandel that must be followed during the design of the interface.
Place the User in Control
1. Define the interaction modes in such a way that does not force the user into unnecessary
or undesired actions: The user should be able to easily enter and exit the mode with
little or no effort.
2. Provide for flexible interaction: Different people will use different interaction
mechanisms, some might use keyboard commands, some might use mouse,
some might use touch screen, etc., Hence all interaction mechanisms should be
provided.
3. Allow user interaction to be interruptible and undoable: When a user is doing a
sequence of actions the user must be able to interrupt the sequence to do some
other work without losing the work that had been done. The user should also be
able to do undo operation.
4. Streamline interaction as skill level advances and allow the interaction to be customized: Advanced or highly skilled users should be given the chance to customize the interface as they want, with different interaction mechanisms, so that they don't get bored using the same interaction mechanism.
5. Hide technical internals from casual users: The user should not be aware of the
internal technical details of the system. He should interact with the interface just to
do his work.
6. Design for direct interaction with objects that appear on-screen: The user should be able to use and manipulate the objects that are present on the screen to perform a necessary task. This makes the user feel in control of the screen.
Reduce the User’s Memory Load
1. Reduce demand on short-term memory: When users are involved in some complex
tasks the demand on short-term memory is significant. So the interface should be
designed in such a way to reduce the remembering of previously done actions,
given inputs and results.
2. Establish meaningful defaults: Always an initial set of defaults should be provided to
the average user, if a user needs to add some new features then he should be
able to add the required features.
3. Define shortcuts that are intuitive: Mnemonics, i.e., keyboard shortcuts for performing actions on the screen, should be intuitive so that the user can remember them easily.
4. The visual layout of the interface should be based on a real-world metaphor: If what is represented on the screen is a metaphor for a real-world entity, users will understand it easily.
5. Disclose information in a progressive fashion: The interface should be organized
hierarchically i.e., on the main screen the information about the task, an object or
some behavior should be presented first at a high level of abstraction. More detail
should be presented after the user indicates interest with a mouse pick.
Make the Interface Consistent
1. Allow the user to put the current task into a meaningful context: Many interfaces have dozens of screens, so it is important to provide indicators consistently so that the user knows the context of the work being done. The user should also know from which page they navigated to the current page, and where they can navigate from the current page.
2. Maintain consistency across a family of applications: In the development of a set of applications, all of them should follow and implement the same design rules so that consistency is maintained among the applications.
3. If past interactive models have created user expectations, do not make changes unless there is a compelling reason.
User interface design is a crucial aspect of software engineering, as it is the means
by which users interact with software applications. A well-designed user interface can
improve the usability and user experience of an application, making it easier to use
and more effective.
Key Principles for Designing User Interfaces
1. User-centered design: User interface design should be focused on the needs and
preferences of the user. This involves understanding the user’s goals, tasks, and
context of use, and designing interfaces that meet their needs and expectations.
2. Consistency: Consistency is important in user interface design, as it helps users to
understand and learn how to use an application. Consistent design elements such
as icons, color schemes, and navigation menus should be used throughout the
application.
3. Simplicity: User interfaces should be designed to be simple and easy to use, with
clear and concise language and intuitive navigation. Users should be able to
accomplish their tasks without being overwhelmed by unnecessary complexity.
4. Feedback: Feedback is significant in user interface design, as it helps users to
understand the results of their actions and confirms that they are making progress
towards their goals. Feedback can take the form of visual cues, messages, or
sounds.
5. Accessibility: User interfaces should be designed to be accessible to all users,
regardless of their abilities. This involves considering factors such as color
contrast, font size, and assistive technologies such as screen readers.
6. Flexibility: User interfaces should be designed to be flexible and customizable,
allowing users to tailor the interface to their own preferences and needs.
Overall, user interface design is a key component of software engineering, as it can
have a significant impact on the usability, effectiveness, and user experience of an
application. Software engineers should follow best practices and design principles to
create interfaces that are user-centered, consistent, simple, and accessible.
The input to the design phase is the SRS (Software Requirement Specification) document. The output of the design phase is the Software Design Document (SDD).
Coupling and Cohesion are two key concepts in software engineering that are used
to measure the quality of a software system’s design.
Coupling refers to the degree of interdependence between software modules. High
coupling means that modules are closely connected and changes in one module may
affect other modules. Low coupling means that modules are independent and
changes in one module have little impact on other modules.
Cohesion refers to the degree to which elements within a module work together to
fulfill a single, well-defined purpose. High cohesion means that elements are closely
related and focused on a single purpose, while low cohesion means that elements
are loosely related and serve multiple purposes.
Both coupling and cohesion are important factors in determining the maintainability,
scalability, and reliability of a software system. High coupling and low cohesion can
make a system difficult to change and test, while low coupling and high cohesion
make a system easier to maintain and improve.
Basically, design is a two-part iterative process. The first part is the Conceptual Design, which tells the customer what the system will do. The second is the Technical Design, which allows the system builders to understand the actual hardware and software needed to solve the customer's problem.
Types of Coupling:
Data Coupling: If the dependency between the modules is based on the fact that they communicate by passing only data, then the modules are said to be data coupled. In data coupling, the components are independent of each other and communicate through data. Module communications don't contain tramp data (a small sketch contrasting data coupling with control and common coupling follows this list). Example: a customer billing system.
Stamp Coupling: In stamp coupling, the complete data structure is passed from one module to another module. Therefore, it involves tramp data. It may be necessary due to efficiency factors; this choice is made by the insightful designer, not the lazy programmer.
Control Coupling: If the modules communicate by passing control information,
then they are said to be control coupled. It can be bad if parameters indicate
completely different behavior and good if parameters allow factoring and reuse of
functionality. Example- sort function that takes comparison function as an
argument.
External Coupling: In external coupling, the modules depend on other modules,
external to the software being developed or to a particular type of hardware. Ex-
protocol, external file, device format, etc.
Common Coupling: The modules have shared data such as global data
structures. The changes in global data mean tracing back to all modules which
access that data to evaluate the effect of the change. So it has got disadvantages
like difficulty in reusing modules, reduced ability to control data accesses, and
reduced maintainability.
Content Coupling: In a content coupling, one module can modify the data of
another module, or control flow is passed from one module to the other module.
This is the worst form of coupling and should be avoided.
Temporal Coupling: Temporal coupling occurs when two modules depend on the
timing or order of events, such as one module needing to execute before another.
This type of coupling can result in design issues and difficulties in testing and
maintenance.
Sequential Coupling: Sequential coupling occurs when the output of one module
is used as the input of another module, creating a chain or sequence of
dependencies. This type of coupling can be difficult to maintain and modify.
Communicational Coupling: Communicational coupling occurs when two or
more modules share a common communication mechanism, such as a shared
message queue or database. This type of coupling can lead to performance
issues and difficulty in debugging.
Functional Coupling: Functional coupling occurs when two modules depend on
each other’s functionality, such as one module calling a function from another
module. This type of coupling can result in tightly-coupled code that is difficult to
modify and maintain.
Data-Structured Coupling: Data-structured coupling occurs when two or more
modules share a common data structure, such as a database table or data file.
This type of coupling can lead to difficulty in maintaining the integrity of the data
structure and can result in performance issues.
Interaction Coupling: Interaction coupling occurs due to the methods of a class
invoking methods of other classes. Like with functions, the worst form of coupling
here is if methods directly access internal parts of other methods. Coupling is
lowest if methods communicate directly through parameters.
Component Coupling: Component coupling refers to the interaction between two classes where a class has variables of the other class. Three clear situations exist as to how this can happen. A class C can be component coupled with another class C1 if C has an instance variable of type C1, if C has a method whose parameter is of type C1, or if C has a method that has a local variable of type C1. It should be clear that whenever there is component coupling, there is likely to be interaction coupling.
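To make the distinction concrete, here is a minimal sketch in Python (the function and data names are hypothetical, chosen only for illustration). The first function is data coupled with its caller because only the needed values are passed; the second is control coupled because a flag selects its behavior; the third shows the sort-with-comparison-function case mentioned above, where the control parameter enables reuse.

# Data coupling: the caller passes only the data the callee needs.
def invoice_total(line_item_prices):
    return sum(line_item_prices)

# Control coupling: a flag from the caller selects the callee's behavior.
def format_amount(amount, as_cents):
    if as_cents:
        return str(round(amount * 100)) + " cents"
    return "$" + format(amount, ".2f")

# Acceptable control coupling: the passed-in function enables reuse,
# like a sort routine that accepts a key/comparison function.
def sort_names(names, key_function):
    return sorted(names, key=key_function)

print(invoice_total([20.0, 5.0]))                         # 25.0
print(format_amount(25.0, as_cents=True))                 # 2500 cents
print(sort_names(["Bea", "al"], key_function=str.lower))  # ['al', 'Bea']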
Cohesion: Cohesion is a measure of the degree to which the elements of the module
are functionally related. It is the degree to which all elements directed towards
performing a single task are contained in the component. Basically, cohesion is the
internal glue that keeps the module together. A good software design will have high
cohesion.
Types of Cohesion:
Functional Cohesion: Every element essential for a single computation is contained in the component. A functionally cohesive component performs a single task or function. It is the ideal situation.
Sequential Cohesion: An element outputs some data that becomes the input for another element, i.e., data flows between the parts. It occurs naturally in functional programming languages.
Communicational Cohesion: Two elements operate on the same input data or
contribute towards the same output data. Example- update record in the database
and send it to the printer.
Procedural Cohesion: Elements of procedural cohesion ensure the order of
execution. Actions are still weakly connected and unlikely to be reusable. Ex-
calculate student GPA, print student record, calculate cumulative GPA, print
cumulative GPA.
Temporal Cohesion: The elements are related by the timing of their execution. In a module with temporal cohesion, all the tasks must be executed in the same time span. This kind of cohesion is typical of the code that initializes all the parts of the system: many different activities occur, all at the same time.
Logical Cohesion: The elements are logically related and not functionally. Ex- A
component reads inputs from tape, disk, and network. All the code for these
functions is in the same component. Operations are related, but the functions are
significantly different.
Coincidental Cohesion: The elements are unrelated. The elements have no conceptual relationship other than their location in the source code. It is accidental and the worst form of cohesion. Example: printing the next line and reversing the characters of a string in a single component.
Procedural Cohesion: This type of cohesion occurs when elements or tasks are
grouped together in a module based on their sequence of execution, such as a
module that performs a set of related procedures in a specific order. Procedural
cohesion can be found in structured programming languages.
Communicational Cohesion: Communicational cohesion occurs when elements or
tasks are grouped together in a module based on their interactions with each
other, such as a module that handles all interactions with a specific external
system or module. This type of cohesion can be found in object-oriented
programming languages.
Temporal Cohesion: Temporal cohesion occurs when elements or tasks are
grouped together in a module based on their timing or frequency of execution,
such as a module that handles all periodic or scheduled tasks in a system.
Temporal cohesion is commonly used in real-time and embedded systems.
Informational Cohesion: Informational cohesion occurs when elements or tasks are
grouped together in a module based on their relationship to a specific data
structure or object, such as a module that operates on a specific data type or
object. Informational cohesion is commonly used in object-oriented programming.
Functional Cohesion: This type of cohesion occurs when all elements or tasks in a module contribute to a single well-defined function or purpose, and there is little or no coupling between the elements. Functional cohesion is considered the most desirable type of cohesion as it leads to more maintainable and reusable code (a short sketch contrasting it with coincidental cohesion follows this list).
Layer Cohesion: Layer cohesion occurs when elements or tasks in a module are
grouped together based on their level of abstraction or responsibility, such as a
module that handles only low-level hardware interactions or a module that handles
only high-level business logic. Layer cohesion is commonly used in large-scale
software systems to organize code into manageable layers.
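As an illustration only (Python, hypothetical names), the snippet below contrasts a functionally cohesive component, in which every line serves the single GPA computation, with a coincidentally cohesive one that lumps together the unrelated "print next line" and "reverse a string" operations from the example above.

# Functional cohesion: every statement contributes to one well-defined task.
def compute_gpa(grade_points, credit_hours):
    total = sum(g * c for g, c in zip(grade_points, credit_hours))
    return total / sum(credit_hours)

# Coincidental cohesion: unrelated operations grouped in one component.
class MiscUtilities:
    def print_next_line(self, lines, index):
        print(lines[index + 1])

    def reverse_string(self, text):
        return text[::-1]

print(compute_gpa([4.0, 3.0], [3, 4]))        # 3.4285714...
print(MiscUtilities().reverse_string("abc"))  # cba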
Advantages of low coupling:
Improved maintainability: Low coupling reduces the impact of changes in one
module on other modules, making it easier to modify or replace individual
components without affecting the entire system.
Enhanced modularity: Low coupling allows modules to be developed and tested in
isolation, improving the modularity and reusability of code.
Better scalability: Low coupling facilitates the addition of new modules and the
removal of existing ones, making it easier to scale the system as needed.
Advantages of high cohesion:
Improved readability and understandability: High cohesion results in clear,
focused modules with a single, well-defined purpose, making it easier for
developers to understand the code and make changes.
Better error isolation: High cohesion reduces the likelihood that a change in one part of a module will affect other parts, making it easier to isolate and fix errors.
Improved reliability: High cohesion leads to modules that are less prone to errors and that function more consistently, leading to an overall improvement in the reliability of the system.
Disadvantages of high coupling:
Increased complexity: High coupling increases the interdependence between
modules, making the system more complex and difficult to understand.
Reduced flexibility: High coupling makes it more difficult to modify or replace
individual components without affecting the entire system.
Decreased modularity: High coupling makes it more difficult to develop and test
modules in isolation, reducing the modularity and reusability of code.
Disadvantages of low cohesion:
Increased code duplication: Low cohesion can lead to the duplication of code, as
elements that belong together are split into separate modules.
Reduced functionality: Low cohesion can result in modules that lack a clear
purpose and contain elements that don’t belong together, reducing their
functionality and making them harder to maintain.
Difficulty in understanding the module: Low cohesion can make it harder for
developers to understand the purpose and behavior of a module, leading to errors
and a lack of clarity.
Types of Testing Tools
Software testing is of two types: static testing and dynamic testing. The tools used during testing are named accordingly after these two kinds of testing. Testing tools can be categorized into two types, which are as follows:
1. Static Test Tools: Static test tools are used to support the static testing process. These tools do not test the real execution of the software, so specific inputs and outputs are not required. Static test tools consist of the following:
Flow analyzers: Flow analyzers check the flow of data from input to output.
Path tests: Path tests find unused code and code with inconsistencies in the software.
Coverage analyzers: Coverage analyzers ensure that all logic paths in the software are exercised.
Interface analyzers: Interface analyzers check the consequences of passing variables and data between modules.
2. Dynamic Test Tools: The dynamic testing process is performed by the dynamic test tools. These tools test the software with existing or current data. Dynamic test tools comprise the following:
Test driver: The test driver provides the input data to a module-under-test (MUT); a minimal sketch follows this list.
Test Beds: A test bed displays the source code along with the program under execution at the same time.
Emulators: Emulators provide the response facilities which are used to imitate parts of the system not yet developed.
Mutation Analyzers: They are used for testing the fault tolerance of the system by deliberately introducing errors into the code of the software.
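The sketch below (Python, with a hypothetical module and values) shows the idea behind a test driver: it feeds prepared input data to a module-under-test and reports whether the observed output matches the expected output.

# Hypothetical module-under-test (MUT).
def apply_discount(price, percent):
    return round(price * (1 - percent / 100), 2)

# Minimal test driver: supplies inputs to the MUT and checks the outputs.
def run_driver():
    cases = [((100.0, 10), 90.0), ((80.0, 25), 60.0), ((19.99, 0), 19.99)]
    for args, expected in cases:
        actual = apply_discount(*args)
        status = "PASS" if actual == expected else "FAIL"
        print(status, "apply_discount", args, "->", actual, "expected", expected)

run_driver()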
There is one more categorization of software testing tools. According to this
classification, software testing tools are of 10 types:
Test Management Tools: Test management tools are used to store information on
how testing is to be done, help to plan test activities, and report the status of
quality assurance activities. For example, JIRA, Redmine, Selenium, etc.
Automated Testing Tools: Automated testing tools help to conduct testing activities without human intervention, with more accuracy and less time and effort. For example, Appium, Cucumber, Ranorex, etc.
Performance Testing Tools: Performance testing tools help to perform performance testing effectively and efficiently; performance testing is a type of non-functional testing that checks the application for parameters like stability, scalability, speed, etc. For example, WebLOAD, Apache JMeter, NeoLoad, etc.
Cross-browser Testing Tools: Cross-browser testing tools help to perform cross-browser testing, which lets the tester check whether the website works as intended when accessed through different browser-OS combinations. For example, Testsigma, Testim, Perfecto, etc.
Integration Testing Tools: Integration testing tools are used to test the interface
between the modules and detect the bugs. The main purpose here is to check
whether the specific modules are working as per the client’s needs or not. For
example, Citrus, FitNesse, TESSY, etc.
Unit Testing Tools: Unit testing tools are used to check the functionality of individual modules and to make sure that all independent modules work as expected (see the sketch after this list). For example, Jenkins, PHPUnit, JUnit, etc.
Mobile Testing Tools: Mobile testing tools are used to test the application for
compatibility on different mobile devices. For example, Appium, Robotium, Test
IO, etc.
GUI Testing Tools: GUI testing tools are used to test the graphical user interface of
the software. For example, EggPlant, Squish, AutoIT, etc.
Bug Tracking Tools: Bug tracking tools help to keep track of the various bugs that come up during application lifecycle management. They help to monitor and log all the bugs that are detected during software testing. For example, Trello, JIRA, GitHub, etc.
Security Testing Tools: Security testing tools are used to detect vulnerabilities and safeguard the application against malicious attacks. For example, NetSparker, Vega, ImmuniWeb, etc.
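As a small, hedged illustration of what a unit testing framework automates, here is a sketch using Python's standard unittest module (the function under test and its cases are hypothetical); JUnit, NUnit, and PHPUnit follow the same assert-and-report pattern in their own languages.

import unittest

# Hypothetical unit under test.
def celsius_to_fahrenheit(celsius):
    return celsius * 9 / 5 + 32

class TemperatureConversionTest(unittest.TestCase):
    def test_freezing_point(self):
        self.assertEqual(celsius_to_fahrenheit(0), 32)

    def test_boiling_point(self):
        self.assertEqual(celsius_to_fahrenheit(100), 212)

if __name__ == "__main__":
    unittest.main()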
Top 10 Software Testing Tools
1. TestComplete: TestComplete, developed by SmartBear Software, is a functional automated testing tool that helps ensure the quality of the application without sacrificing agility.
Features:
TestComplete has a built-in keyword-driven test editor that consists of keyword operations corresponding to automated testing actions.
It records the key actions that are necessary to replay the test and discards all unneeded actions.
It can run several automated tests across separate virtual machines.
It has a built-in code editor that helps testers write scripts manually.
It automatically captures screenshots during test recording and playback.
2. LambdaTest: LambdaTest is a cross-browser testing tool that helps to evaluate how a web application responds when accessed through a variety of different browsers.
Features:
It can run Selenium scripts on 3000+ browser and operating system environments, giving higher test coverage.
It can perform automated cross-browser testing of locally hosted web pages using
LambdaTest tunnel.
It can also help to run a single test across multiple browser/ OS configurations
simultaneously.
3. TestRail: TestRail is a test management tool that helps to streamline software testing processes and gain visibility into QA. This tool is used by testers, developers, and
team leads to manage, track, and organize software testing efforts.
Features:
It helps to manage test cases, plans, and runs.
It helps to increase test coverage.
It helps to get real-time insights into your QA progress.
It helps to document test plans and track real-time progress.
4. Xray: Xray is a test management app for Jira that helps to plan, execute, and track
quality assurance with requirements traceability.
Features:
It promotes Native Quality Management, where all the tools and tests used by QA are built natively into the development environment, such as Jira.
It integrates with leading automation frameworks like Cucumber, Selenium, and JUnit
to automate testing.
It allows easy integration with CI tools like Jenkins, Bamboo, and GitLab.
It helps to easily map stories using BDD.
5. Zephyr Scale: Zephyr Scale is a test management tool that provides a smarter and more structured way to plan, manage, and measure tests inside Jira.
Features:
It offers cross-project integration, traceability, and a structured design useful in large environments.
It helps to scale tests in Jira.
It helps to improve visibility, data analysis, and collaboration.
It provides detailed change history, test case versioning, and end-to-end traceability with Jira issues and challenges.
6. Selenium: Selenium provides a record-and-playback tool for authoring tests across most web browsers without the need to learn a test scripting language (a brief usage sketch follows the feature list below).
Features:
It provides multi-browser support.
It makes it easy to identify web elements on the web apps with the help of its several
locators.
It is able to execute test cases quicker than the other tools.
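A minimal usage sketch with Selenium's Python bindings is shown below; it assumes a locally installed Chrome/ChromeDriver setup and uses a placeholder URL, so treat it as an outline rather than a ready-made test.

from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()          # assumes Chrome and a matching driver are available
driver.get("https://example.com")    # placeholder URL

# Locate a web element with one of Selenium's locator strategies and read it.
heading = driver.find_element(By.TAG_NAME, "h1")
print(heading.text)

driver.quit()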
7. Ranorex: Ranorex Studio is a GUI test automation framework used for testing web-based, desktop, and mobile applications. It does not have its own scripting language to automate applications.
Features:
It helps to automate tests on Windows desktop, then execute locally or remotely on
real or virtual machines.
It runs tests in parallel to accelerate cross-browser testing for Chrome, Firefox,
Safari, etc.
It tests on real iOS or Android devices, simulators, emulators, etc.
8. TestProject: TestProject is a test automation tool that allows users to create
automated tests for mobile and web applications. It is built on top of popular
frameworks like Selenium and Appium.
Features:
It is a free end-to-end test automation platform for web, mobile, and API testing.
Tests are saved as local files directly on your machine with no cloud-footprint to get a
complete offline experience.
It helps to create reliable codeless tests powered by self-healing, adaptive wait, and
community add-ons.
It provides insights about release quality and a step-by-step detailed report with screenshots and logs.
9. Katalon Platform: Katalon Platform is a comprehensive quality management platform that enables teams to easily and efficiently test, launch, and optimize the best digital experiences.
Features:
It is designed to create and reuse automated test scripts for UI without coding.
It allows running automated tests of UI elements including pop-ups, iFrames, and
wait-time.
It eases deployment and allows a wider set of integrations compared to Selenium.
10. UFT/QTP: Micro Focus UFT is a software that provides functional and regression
tests automation for software applications and environments.
Features:
It helps to accelerate end-to-end testing.
It boasts AI-based machine learning and advanced OCR for advanced object
recognition.
It helps to test both front-end functionality and back-end service parts.
5. Zephyr Squad
Zephyr Squad is well suited for Agile teams that need a flexible and seamlessly integrated test management tool that works as a native Jira application. With Zephyr Squad,
one can create test cases, execute tests, and view test execution reports.
Features:
Seamless Integration with Jira.
Its easy-to-use look and feel makes it a good choice for Agile teams.
View test executions and results by story view.
The Jira dashboard allows you to view test executions by test cycle and testers.
Integrates with automation testing tools such as Selenium and CI/CD tools such as
Jenkins and Bamboo.
6. PractiTest
It is a SaaS-based end-to-end test case management tool. Using it, users can efficiently create and run tests, track bugs, and generate reports.
Features:
It provides the ability to import and export issues, tests, steps, and requirements.
Gives users the choice to perform Manual, Exploratory, and Automation testing
without integrating with 3rd party tools.
Create custom fields relevant to the project, which can be used with different tests and issues.
It supports multiple browsers such as Chrome, Firefox, IE, Edge, and Safari.
Better visibility for Manual and Automation test results supported by offering multiple
reporting options.
Provides seamless integration with JIRA, Pivotal, Azure DevOps, Jenkins, GitHub,
Bugzilla, and Slack.
Seamless integration with CI/CD tools such as Jenkins and Bamboo and their API
gives the flexibility to add your own custom integrations as well.
Integration with FireCracker tool to import XML test results to PractiTest without using
any API code.
7. TestLink
TestLink is a web-based open-source test management tool that allows users to
manage test cases, test suites, test projects, and user management.
Features:
Supports both manual and automated test execution.
Multiple users can access the functionality of the tool with their credentials and
assigned roles.
Generation of test execution reports in various formats such as Word, Excel, and
HTML formats.
Easy Import/Export of test cases.
Seamless integration with Bugzilla, and Mantis.
Linking of test cases with defects.
Filter and sort test cases based on Testcase ID, and version.
8. QTest
QTest is a test management tool developed by QASymphony and integrated with
Agile development. One can add the project requirements, create test cases, run the
tests, and store test results.
Features:
An easy-to-use interface enables users to track testing activities.
Integration of QTest with Bugzilla or JIRA.
Easy Import/Export of test cases from Excel spreadsheet.
Customizable reports to display the data useful to you using filter and sort options on
date or field.
Track changes to test cases and requirements.
Reuse test cases and test suites across multiple releases.
9. QMetry Test Management
The QMetry tool is one of the best test management tools. It integrates with Jira, CI/CD tools like Bamboo and Jenkins, and automation frameworks. Its main features include requirement tracking, test case management, test execution, reporting, user management, and issue management.
Features:
End to End test management.
Test Execution management.
Integration with CI/CD and Automation tools.
Helps Agile and DevOps teams to increase the quality of the product.
10. Kualitee
It is designed to organize all the testing efforts on one platform. You can maintain test
case repositories, execute test cycles, and log defects.
Features:
Easy to use and Simple UI.
One-stop shop for test case management and bug tracking.
Multiple users can work on the same item and all feedback is shared in one place.
Allows users to easily create projects, modules, test cases, and testing cycles, execute the test cases, log defects, and generate reports.
Easy import/export of test cases in various formats(Excel, Word, CSV).
11. TestCollab
TestCollab is a test management tool that helps agile teams to manage testing and supports user rights management, configurable test plans, custom fields, and integration with JIRA.
Features
It provides a centralized test repository for the team.
Helps to create collaborative test plans and checklists with ease.
It provides a flexible interface, thus allowing the workflow to be customized.
It provides modern features like @mention comments, in-app notifications, etc.
It supports reusability of the test suites, allowing them to be used across multiple projects.
12. Requirements and Test Management for Jira (RTM)
It is a test management application that helps to manage and track efforts within any Jira Cloud project. This helps to systematize the testing process and improve the quality of the final product.
Features
It provides built-in requirement management.
It supports easy test execution.
RTM displays requirements, test cases, test plans, test executions, and defects as a tree.
It provides detailed test case creation.
It supports reusability of test plans.
13. XQual
XQual delivers XStudio, one of the best test management tools, which helps to manage releases, requirements, risks, specifications, test cases, documents, etc.
Features
It includes an integrated bug tracker but it also allows integration with other bug
trackers with the help of connectors.
Each release is versioned and covered by requirements, tests, etc., with which bidirectional traceability is managed.
All testing types, such as exploratory, automated, and manual testing, are supported in XStudio.
XStudio supports nearly 90 test automation frameworks. One can also integrate with one's own proprietary processes.
It also allows integration with third-party systems.
XStudio is delivered as a service in the cloud but one can also install it on the system
if needed.
14. Tuskr
Tuskr is a test management tool that is easy on the pocket, providing a 30-day free trial and a generous free plan.
Features
It has a WYSIWYG editor that supports rich-text formatting.
It allows test runs to be conducted easily, including all the test cases in a project, specific ones, or ones matching a complex filter.
It helps to optimize resources by encouraging transparency.
It supports visual monitoring of progress through burndown charts, dashboards, activity streams, etc.
It supports a dark theme mode to help reduce eye strain and fatigue.
15. TestFLO for JIRA
TestFLO is a paid add-on for Jira that is used to write test cases. It helps to manage test cases in Jira.
Features
It helps to create a test flow and manage it smoothly.
It enables large-scale software testing in Jira.
It supports full traceability from requirements through test cases to bugs.
It is best suited for large-scale enterprises, highly regulated industries, requirements testing, and agile and DevOps testing.
It supports a test repository with a tree folder structure for reusable templates.
It supports importing tests from CSV and TestLink.
It provides support for Cucumber, JUnit, and TestNG.
16. Qase
Qase is a test management software that helps to deliver quality products to customers and provides a single workspace for manual and automated tests.
Features
It supports organizing test cases into logical groups called Test Suites.
The smart wizard guides you through creating test plans and helps to check every test case at once.
It supports 1-click integrations with major issue trackers.
It helps to create tasks and submit reports directly from the workspace.
17. Testiny
Testiny is a fast-growing web application built on the latest technologies that aims to make manual testing and QA management as seamless as possible. It is designed to be extremely easy to use and helps testers to perform tests without adding overhead to the testing process.
Features
It is beneficial for small to mid-size QA teams that are looking to integrate manual and automated testing into the development process.
It is free for open-source projects and small teams with up to 3 persons.
It helps in the easy creation of test cases, test runs, etc.
It helps to organize tests in a tree structure.
18. Testpad
Testpad is a test plan tool that is simple to use and helps to find bugs.
Features
No training is required to use this tool.
It provides secure hosting, secure communication, and reliable data.
It has a keyboard-driven editor with a responsive JavaScript user interface.
It also provides the facility to invite guest testers, for whom accounts are not required.
This tool is tablet and mobile friendly.
19. JunoOne
JunoOne is a test management tool designed to streamline test management and incident management. It offers a number of tools that make all the testing activities well arranged, help to organize work, and control the overall state of the projects.
Features
It protects data and solves issues.
It supports performing test analysis and campaign creation.
20. Panaya
Panaya is a smart test management tool for ERP and enterprise cloud applications that reduces the test cycle by 85% and accelerates digital transformation with zero risk.
Features
The recorder captures users’ interactions and screenshots for audit-ready test
documentation and compliance.
It helps to optimize the test cycle by 85% by pinpointing the impacted business scenarios and suggesting what to test and what not to test.
It supports tracking and sharing test progress metrics and charts with user-friendly customized dashboards and widgets.
It helps eliminate risks and uncertainty with AI-powered change analysis to know exactly what to test for and what not to test for.
1. Jira
One of the most essential bug tracking tools is Jira. Jira is a platform used in manual testing for bug tracking, project management, and problem tracking. Jira contains a variety of capabilities such as reporting, recording, and workflow. In Jira, we can monitor all types of faults and issues connected to the software that are raised by the test engineer.
Key Features:
Boards that are flexible and glossy.
Powerful project configuration.
Project-based independent configuration.
Roadmaps that are linked to real-world projects.
Including rules and guardrails for the team.
2. BugHerd
BugHerd is the simplest way to monitor issues, collect feedback, and manage web
page feedback. Your staff and clients can tie feedback to specific elements on a web
page to pinpoint problems. BugHerd also saves information like the browser, CSS
selector data, operating system, and even a screenshot to help you quickly recreate
and fix errors. This application saves all of the data you’ll need to reproduce and
quickly fix any fault, such as issues with your web browser or operating system.
BugHerd is the most user-friendly tool for tracking problems and managing website
feedback. Bugs and feedback should be pinned to elements on a website, and
technical information should be captured to help fix issues. With the Kanban-style
task board, you can track feedback tasks all the way to completion.
Key Features:
Bug and feedback capture with a simple point-and-click interface.
BugHerd collects technical data such as browser, operating system, and screen resolution.
Feedback is pinned to webpage elements rather than a specific location.
Commenting in real time.
Task specifications.
Task boards.
Integrates with GitLab, BitBucket, and GitHub.
3. Bugzilla
Bugzilla is another key bug tracking program that is extensively used by many
businesses to monitor issues. It is an open-source program that is used to assist the
customer and client in keeping track of issues. It is also used as a test management
tool since it allows us to quickly connect other test case management solutions such
as ALM, Quality Center, and so on. It runs on a number of operating systems,
including Windows, Linux, and Mac.
Key Features:
Capabilities for Advanced Search.
User Preferences Control Email Notifications.
Bug Lists in Various Formats (Atom, iCal, etc.)
Email reports on a regular basis (daily, weekly, hourly, etc.).
Reports and graphs.
Automatic Detection of Duplicate Bugs.
Bugs can be reported/modified through email.
Time management.
4. BugNet
It is a free and open-source defect tracking and project issue management solution designed in the ASP.NET and C# programming languages and compatible with the Microsoft SQL database. BugNet's goal is to limit the complexity of the code base and facilitate deployment. BugNet's advanced version is commercially licensed.
Key Features:
It will provide strong security while being simple to use and administer.
BugNet provides support for a variety of projects and databases.
We can receive email notifications using this tool.
It is capable of managing projects and milestones.
There is an online support community for this tool.
5. Axosoft
6. Redmine
7. Mantis
MantisBT is an open source issue tracker that strikes a fine balance between
strength and simplicity. Users can get up and running in minutes, manage their
projects, and efficiently collaborate with their peers and clients. If you’ve used
previous bug tracking software, you’ll find this one to be simple to use. Mantis is
accessible as a web application and a mobile application. It integrates with apps such
as chat, time tracking, wiki, RSS feeds, and many others, and works with multiple
databases such as MySQL, PostgreSQL, and MS SQL.
Key Features:
Support for projects, sub-projects, and categories.
User-based security.
Advanced search tools.
Graphing and reporting.
Support for e-mail and RSS feeds.
Customizable issue pages and workflows.
Revision control integration.
Document handling.
8. SpiraTeam
SpiraTeam offers a comprehensive Application Lifecycle Management (ALM) solution
that allows you to manage all of your requirements, tests, plans, activities, defects,
and issues in one place, with full traceability from start to finish. SpiraTeam is a
complete Application Lifecycle Management (ALM) solution that includes integrated
bug tracking. With built-in end-to-end traceability, SpiraTeam allows you to manage
your whole testing process, from requirements through tests, problems, and issues.
The following functionalities are included with SpiraTeam out of the box.
Key Features:
During the run of the test script, new incidents are automatically created.
Statuses, priorities, defect kinds, and severity levels are fully customizable event
fields.
The ability to associate occurrences (bugs) with other artefacts and incidents.
Reporting, searching, and sorting capabilities, as well as a change audit trail.
When the customized workflow status changes, email notifications are sent out.
Email-based bug and issue reporting.
9. Backlog
10. Trac
11. Monday
It’s possible to manage your team and do performance reviews using Monday, a bug-
tracking application. For simple data visualization, it offers a versatile dashboard.
Key Features:
Supported Platforms: Windows, Mac, iOS, Android, and Linux
It can automate your daily work.
It enables you to work remotely.
You can track your work progress.
Seamlessly integrates with Outlook, Microsoft Teams, Dropbox, Slack, Google
Calendar, Google Drive, Excel, Gmail, LinkedIn, OneDrive, Zapier, and Adobe
Creative Cloud
You can export your file in PDF, PNG, JPEG, SVG, and CSV formats
Set Scans to run hourly, daily, and weekly
Benefits of Defect Testing Tools: The following are some of the benefits of defect testing tools:
Delivery of high-quality products.
Reduces the cost of development thus increasing the Return On Investment (ROI).
Better communication and teamwork.
Understand the defect trends.
Better customer satisfaction.
Enhance software optimization capability: With performance testing, one can
optimize the software in such a way that helps the software withstand high
numbers of concurrent users.
Identifying issues in the software: Performance testing helps to identify issues that can be corrected before launching the app or site, so the developer can focus on improving the technology instead of resolving the issues after the release.
Measure software's performance metrics: Performance testing helps to measure metrics like speed, reliability, scalability, and other metrics that affect the performance of the software under a workload (see the sketch after this list). Testing and monitoring these factors helps to identify how the application will behave under pressure.
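To illustrate the kind of measurement a performance testing tool automates at much larger scale, here is a small sketch using only the Python standard library; the URL and the number of simulated users are placeholders, and a real tool such as JMeter or WebLOAD would add ramp-up control, richer protocols, and reporting.

import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor

URL = "https://example.com"    # placeholder endpoint
CONCURRENT_USERS = 10          # simulated simultaneous users

def timed_request(_):
    # Issue one request and return its response time in seconds.
    start = time.perf_counter()
    with urllib.request.urlopen(URL, timeout=10) as response:
        response.read()
    return time.perf_counter() - start

with ThreadPoolExecutor(max_workers=CONCURRENT_USERS) as pool:
    latencies = list(pool.map(timed_request, range(CONCURRENT_USERS)))

print("average response time: %.3f s" % (sum(latencies) / len(latencies)))
print("slowest response time: %.3f s" % max(latencies))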
Factors to Consider While Selecting a Performance Testing Tool
The following are some factors that should be considered before selecting a
performance testing tool:
License cost and type: Review and be aware of the license of the tool before using
it as commercial tools offer better protocol support but with certain restrictions.
Check and compare the prices of the paid tool with other tools in the market and
select the one that meets your requirements and falls within your budget.
Desired protocol support: Choose a tool based on the nature of the application
protocol you would like to utilize as different vendors offer different protocols such
as HTTP, HTTPS, etc.
Hardware/ Software requirements of the automation tool: It is important to
consider the hardware and software requirements of the automation tool to keep
the cost of the software testing within budget.
Languages for writing scripts: It is better to choose a tool that supports many common languages like Java and Python for writing the test scripts. Tools that require special skill development before use may appear less user-friendly in comparison to tools that use common languages for scripts.
Option to record or playback: The record and playback option helps to run the test
cases without programming knowledge.
Online forums and vendor assistance: Commercial vendors generally offer high-
quality support through various channels of communication.
Ease of use: The performance testing tool chosen should be easy enough to use for
the testers.
Test environment: The performance testing tool must access enough network and
hardware resources. If the selected tool cannot generate a test environment to
simulate an expected amount of network traffic then it might not be suitable to
meet the company’s requirements.
Tool efficiency: The tool efficiency is more if it needs fewer devices and produces
large-scale tests. It must be capable of generating an expected number of virtual
users on the current hardware and software setup.
Seamless Integration: The performance testing tools work best when integrated
with other tools like defect management. This will help the tester to get an idea of
how to track tests and find defects easily.
Performance Testing Tools:
WebLOAD
LoadNinja
LoadRunner
Apache JMeter
NeoLoad
LoadUI Pro
LoadView
StormForge
LoadComplete
Gatling
1. WebLOAD
Web application load and performance testing tool for enterprises. WebLOAD is the
tool of choice for organizations with high user traffic and sophisticated testing needs.
It enables you to load and stress test any internet application by simulating load from
the cloud and on-premises machines. WebLOAD’s benefits are its versatility and
ease of use, allowing you to quickly define the tests you require using capabilities
such as DOM-based recording/playback, automatic correlation, and the JavaScript
scripting language. The tool gives a detailed study of your web application’s
performance, identifying flaws and bottlenecks that may be impeding you from
meeting your load and response requirements. WebLOAD supports hundreds of technologies, ranging from web protocols to corporate applications, and offers built-in connections with Jenkins, Selenium, and many other tools to enable DevOps continuous load testing. The WebLOAD testing tool supports HTTP and HTTPS protocols as well as enterprise applications, network technologies, and server technologies.
Key Features:
Correlation: Automatically correlates dynamic data such as session IDs, allowing
scripts to run on different virtual clients.
Protocols supported: HTTPS, HTTP, and XML are all supported protocols.
Integration: Works with technologies like Selenium, Jenkins, and others.
Customer Service: You can contact them by phone, fax, or through a contact form.
2. LoadNinja
It enables you to construct advanced load tests without using scripts and cuts testing
time in half. It also replaces load emulators with real browsers and provides
actionable, browser-based analytics at ninja speed. LoadNinja enables teams to
expand test coverage without sacrificing quality by eliminating the time-consuming
tasks of dynamic correlation, script translation, and script scrubbing. It supports
HTTP, HTTPS, SAP GUI Web, WebSocket, Java-based protocol, Google Web
Toolkit, and Oracle forms. Engineers, testers, and product teams may use LoadNinja
to focus on designing apps that scale rather than load-testing scripts. Client-side
interactions can be easily captured and debugged in real time, and performance
issues can be identified quickly. LoadNinja enables teams to expand test coverage
without losing quality by automating dynamic correlation, script translation, and script
scrubbing.
Key Features:
Automated tests: Automate tests by utilizing bespoke CI/CD plugins or a REST API.
Customer Support: You can get answers from the LoadNinja user community or by
reading their extensive documentation and FAQs.
Protocol Supported: HTTP, HTTPS, SAP GUI Web, WebSocket, Java-based
protocol, Google Web Toolkit, Oracle forms.
3. LoadRunner
Customer Service: You can interact with a vast community of developers and
contributors. They’ve also kept tutorials up to date so you can get a better
understanding of the product.
Protocols Supported: HTTPS, HTTP, SOAP, XML, FTP, and Java-based protocols are supported.
5. NeoLoad
NeoLoad is the most automated performance testing platform for enterprises that
need to test applications and APIs on a regular basis. NeoLoad offers testers and
developers automated test design and management, the most realistic user behavior
simulation, quick root cause analysis, and built-in integrations with the whole SDLC
toolchain. From functional testing tools to analytics and metrics from APM tools,
NeoLoad allows you to reuse and share test files and results. To meet all testing
needs, NeoLoad supports a wide range of mobile, online, and packaged apps, such
as SAP. Schedule, manage and disseminate test resources and findings across the
organization on a regular basis to ensure application performance. System
Requirements: This tool is compatible with Microsoft Windows, Linux, and Solaris
operating systems.
Key Features:
It works with HTML, Angular, HTTP/2, WebSocket, and other web frameworks and
protocols, as well as packaged apps from Salesforce, SAP, Oracle, and IBM.
Dynamic parameters are handled automatically, and app-specific parameters are detected using established criteria such as .NET, Siebel, and JSF.
SOAP/REST support, plus integration with Selenium, Tricentis Tosca, Dynatrace, Azure, Jenkins, Git, and other DevOps tools.
Protocols Supported: HTTP, HTTPS, SOAP, REST, Flex Push, AJAX Push.
6. LoadUI Pro
LoadUI allows you to build and edit test cases as they run. The focus on usability through its visual interface and easy design, combined with the flexibility that comes with the ability to make changes during the test, is what makes LoadUI so strong. LoadUI Pro enables you to rapidly construct sophisticated load tests without using scripts, distribute them on the cloud using load agents, and track the performance of your servers as the demand on them increases. You can get thorough reports and automate your load tests rapidly.
Key Features:
Compatibility: Mac OS, Windows, and Linux are all supported.
Test Reuse: You can save time by reusing functional tests that already exist in your
pipelines.
Integration: Works with SoapUI, a functional testing tool.
Protocols Supported: HTTP, REST, SOAP, JSON, API Blueprint, JSON Schema, XML
Schema.
7. LoadView
real browsers. LoadView’s cloud network is managed by AWS and Azure, allowing
you to create multiple tests on even the most complicated projects. Utilizing load
injectors from 30 worldwide locations spanning the US, South America, Canada,
APAC, and Europe, you may define users, duration, and behavior using various
scenarios and realistically imitate people. To analyze traffic spikes, scalability, and
infrastructure restrictions, the tool includes three load curves: Load Step curve,
Dynamic Adjustable curve, and Goal-based curve.
Key Features:
LoadView has dedicated IPs that you may allow and control, so you can run tests
behind a firewall.
Video recording: Use video recording to capture the rendering of a website or app for
better inspection and assessment.
Reference servers, thorough waterfall charts, dynamic variables, and load injector
controls are among the other features.
Protocols Supported: Flash, Silverlight, Java, HTML5, PHP, Ruby.
8. StormForge
StormForge enables you to execute load testing for the speed and scalability of your
apps at a minimal cost, right within your CI/CD workflow. It enables you to boost
application uptime, throughput, latency, and application faults while also allowing you
to scale to additional users. The application provides all of these capabilities while
using fewer resources, requiring no manual processes, promoting environmental
sustainability, and assisting you in lowering your monthly cloud expenditures. To
ensure that the test reflects real-world traffic patterns, you can capture real-world
traffic. It operates on an open-workload model, accurately models real-world
scenarios, and solves error detection issues.
Key Features:
Use Performance Testing as Code in your CI/CD process to make it more repeatable.
Cloud-native: On Kubernetes, it works nicely.
Java, Nginx, Go, and Python are among the supported programming languages.
Integration: It works seamlessly with your ecosystem, including cloud providers (AWS, DigitalOcean, GCP, IBM, Azure), monitoring tools (Prometheus, Dynatrace, Datadog, New Relic, and Circonus), and DevOps tools (Jenkins, Puppet, Chef, and Rancher Labs).
9. LoadComplete
It’s yet another tool for performance (load) testing. It’s used to build and perform
automated tests for web services and servers. It works with all browsers and web
services. When we have a large load, it will examine the performance of our web
server. Throughout the test runs, we can use this program to monitor numerous
server metrics such as CPU utilization.
Key Features:
It will allow us to produce a big load for stress testing by providing load modeling for
performance testing.
We may record and playback our actions in the web browser using this.
It works on a variety of platforms, including Windows and UNIX.
During load testing, template-based criteria will be used to evaluate the server
message body, ensuring that the server is running properly.
It can test Flash, Flex, Silverlight, and Ajax apps, among others.
It will generate load test reports, which can be customized through the user interface.
Protocol Supported: AMP, SOAP(XML), WebSocket, Binary XML, JSON Format.
10. Gatling