
CSE4017

Software Testing

Module 1: Overview of Testing

School of Computing Science and Engineering


VIT Bhopal University
Overview of Testing
• Software Testing Definition
• Debugging
• Testing Vs Debugging
• Purpose of Testing
• Dichotomies
• Model for Testing
• Consequences of Bugs
• Taxonomy of Bugs
Software Testing Fundamentals
● Software Testing is the process of exercising or evaluating a system or
system components by manual or automated means to verify that it
satisfies specified requirements.
● Software testing can be stated as the process of verifying and validating
whether a software or application is
○ bug-free,
○ meets the technical requirements as guided by its design and development, and
○ meets the user requirements effectively and efficiently by handling all the
exceptional and boundary cases.
● The process of software testing aims not only at finding faults in the existing
software but also at finding measures to improve the software in terms of
efficiency, accuracy, and usability.
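The exceptional and boundary cases mentioned above can be sketched with a small example. The `grade` function, its 0–100 range, and its pass mark of 40 are all hypothetical, chosen only to illustrate normal, boundary, and exceptional inputs:

```python
def grade(score: int) -> str:
    """Map an exam score (0-100) to a pass/fail grade."""
    if not 0 <= score <= 100:
        raise ValueError("score must be between 0 and 100")
    return "pass" if score >= 40 else "fail"

# Normal cases
assert grade(75) == "pass"
assert grade(10) == "fail"

# Boundary cases: the edges of each partition
assert grade(40) == "pass"    # lowest passing score
assert grade(39) == "fail"    # highest failing score
assert grade(0) == "fail"
assert grade(100) == "pass"

# Exceptional case: invalid input must be rejected
try:
    grade(101)
    raise AssertionError("expected ValueError")
except ValueError:
    pass
```

Notice that the boundary cases sit on either side of each decision point (39/40, 0, 100); these are exactly the inputs most likely to expose off-by-one defects.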

Prepared & Compiled by: Prof. Bhupendra Panchal


Why Software Testing?
● Software Testing is a method to assess the functionality of the software
program. The process checks whether the actual software matches the
expected requirements and ensures the software is bug-free.
● The purpose of software testing is to identify the errors, faults, or missing
requirements in contrast to actual requirements. It mainly aims at measuring
the specification, functionality, and performance of a software program or
application.

Software testing can be divided into two steps:


● Verification: It refers to the set of tasks that ensure that the software correctly
implements a specific function. It means “Are we building the product
right?”.
● Validation: It refers to a different set of tasks that ensure that the software
that has been built is traceable to customer requirements. It means “Are we
building the right product?”.
Importance of Software Testing:
● Defects can be identified early: Bugs can be identified early and can be fixed
before the delivery of the software.
● Improves quality of software: Software Testing uncovers the defects in the
software, and fixing them improves the quality of the software.
● Increased customer satisfaction: Software testing ensures reliability, security,
and high performance, which results in saving time and costs and in increased
customer satisfaction.
● Helps with scalability: Non-functional testing, a type of software testing, helps to
identify scalability issues and the point where an application might stop
working.
● Saves time and money: After the application is launched, it becomes very
difficult to trace and resolve issues, and doing so incurs more cost and time.
Thus, it is better to conduct software testing at regular intervals during
software development.



Purpose of Testing
● To Catch Bugs
○ Bugs are due to imperfect communication among programmers (specs,
design, low-level functionality).
○ Statistics say: about 3 bugs / 100 statements.
● Productivity-Related Reasons
○ Insufficient effort in QA => high rejection ratio
○ Higher rework => higher net costs
● Bug Prevention
○ Bug reporting
○ Debugging
○ Correction
○ Retesting



Purpose of Testing
● Testing & Inspection
○ Inspection is also called static testing.
○ The methods and purposes of testing and inspection are different, but the
objective of both is to catch and prevent different kinds of bugs.
○ To prevent and catch most of the bugs, we must review, inspect, and read
the code, do walkthroughs on the code, and then do testing.



System Testing Examples and Use Cases

● Software Applications: Use cases for an online airline’s booking system –


customers browse flight schedules and prices, select dates and times, etc.
● Web Applications: An e-commerce company lets you search and filter items,
select an item, add it to the cart, purchase it, and more.
● Mobile Applications: A UPI app lets you do a mobile recharge or transfer money
securely. First, you have to select the mobile number, then the biller name,
recharge amount, and payment method, and then proceed to pay.
● Games: For a gaming app, check the animation, landscape-portrait orientation,
background music, sound on/off, score, leaderboard, etc.
● Operating Systems: Login to the system with your password, check your files,
folders, apps are well placed and working, battery percentage, time-zone, go to
the ‘settings’ for additional checkups, etc.
● Hardware: Test the mechanical parts – speed, temperature, etc., electronic parts –
voltage, currents, power input-output, communication parts- bandwidths, etc.





Different Types Of Software Testing
● Manual Testing:
○ Manual testing includes testing software manually, i.e., without using any
automation tool or script.
○ In this type, the tester takes over the role of an end-user and tests the
software to identify any unexpected behaviour or bug.
○ There are different stages for manual testing such as unit testing, integration
testing, system testing, and user acceptance testing. Testers use test plans, test
cases, or test scenarios to test software to ensure the completeness of testing.
● Automation Testing:
○ Automation testing, also known as Test Automation, is when the tester
writes scripts and uses other software to test the product.
○ This process involves the automation of a manual process. Automation testing
is used to re-run, quickly and repeatedly, the test scenarios that were
performed manually in manual testing.
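The re-running of manual scenarios described above can be sketched as a data-driven script. The `check_login` function and the credentials are hypothetical stand-ins for the application under test, not a real login API:

```python
# Hypothetical system under test: a login check against a user store.
VALID_USERS = {"ABC": "1234ABa"}

def check_login(username: str, password: str) -> bool:
    """Stand-in for the application's login function."""
    return VALID_USERS.get(username) == password

# Each tuple is one scenario a manual tester would otherwise repeat
# on every build: (username, password, expected outcome).
scenarios = [
    ("ABC", "1234ABa", True),    # correct credentials
    ("AB1", "1234AB",  False),   # wrong credentials
    ("ABC", "",        False),   # missing password
]

# Running the whole scenario list is one command, instead of hours of
# manual clicking; failures are collected for a report.
failures = [(u, p) for u, p, want in scenarios
            if check_login(u, p) != want]
assert not failures, f"failed scenarios: {failures}"
```

Adding a new regression scenario is then just one more tuple in the list, which is the core productivity gain of automation.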



Different Types Of Software Testing
Software testing techniques can be majorly classified into two categories:

● Black Box Testing: A technique in which the tester does not have access to the
source code of the software; testing is conducted at the software interface,
without any concern for the internal logical structure of the software.
● White Box Testing: A technique in which the tester is aware of the internal
workings of the product and has access to its source code; testing is conducted
by making sure that all internal operations are performed according to the
specifications.
● Grey Box Testing: A technique in which the testers have some knowledge of the
implementation, but need not be experts.



Different Types Of Software Testing

Types of testing:

1. White Box Testing

2. Black Box Testing



White Box Testing
● The White Box Test method is the one that looks at the code and structure of
the product to be tested and uses that knowledge to perform the tests.
● This method is used in the Unit Testing phase, although it can also occur in
other stages such as Integration Tests. For the execution of this method, the
tester or the person must have extensive knowledge of the technology used to
develop the program.
● White box testing is something that can be used by any organization to test
out their applications.
● This type of testing should be a part of any strong software development
program. It's always better for an ethical hacker to find a critical vulnerability in
your software systems than for a malicious hacker to exploit it to gain system
or network access.



Black Box Testing
● Black Box Testing is the method that does not consider the internal structure, design, or
implementation of the product being tested. In other words, the tester does not know its
internal functioning.
● The Black Box only evaluates the external behaviour of the system. The inputs received by the
system and the outputs or responses it produces are tested.

Black box testing can be done in the following ways:


● Syntax-Driven Testing – This type of testing is applied to systems that can be syntactically
represented by some language; for example, a language can be represented by a
context-free grammar.
● Equivalence partitioning – The idea is to partition the input domain of the system into
several equivalence classes such that each member of a class behaves similarly,
○ i.e., if a test case in one class results in some error, other members of the class would result in
the same error.
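Equivalence partitioning can be sketched in a few lines. The age-validation rule (18–60 accepted) is a hypothetical requirement invented for this example; the point is that one representative per class suffices:

```python
def is_valid_age(age: int) -> bool:
    """Hypothetical system under test: accept ages 18-60 inclusive."""
    return 18 <= age <= 60

# The input domain splits into three equivalence classes; every member
# of a class should behave the same way, so one representative each
# is enough.
partitions = {
    "below range (invalid)": 10,   # represents all ages < 18
    "in range (valid)":      35,   # represents 18..60
    "above range (invalid)": 75,   # represents all ages > 60
}

expected = {
    "below range (invalid)": False,
    "in range (valid)":      True,
    "above range (invalid)": False,
}

results = {name: is_valid_age(rep) for name, rep in partitions.items()}
assert results == expected
```

Three test cases thus stand in for the entire integer input domain; boundary-value analysis would additionally pick 17, 18, 60, and 61.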



Black Box and White Box Testing

Real life scenario


● Suppose there is a registration page on an e-commerce website to be tested.
If we are doing black box testing, we only check whether the registration
page works correctly or not. In the case of white box testing, all the
functions and classes that are called when that registration page executes
are checked.
● If you are using a calculator, then in the case of black box testing you are
concerned only with whether the output you get is correct. In the case of
white box testing, testers check the internal working of the calculator and
how the output was calculated.
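The two viewpoints above can be contrasted on a toy function. `safe_divide` is a hypothetical stand-in for the calculator's internals:

```python
def safe_divide(a: float, b: float):
    """Hypothetical unit: divide, guarding against a zero divisor."""
    if b == 0:            # branch 1: guard path
        return None
    return a / b          # branch 2: normal path

# Black-box view: feed inputs, check only the observable output
# against the specification.
assert safe_divide(10, 2) == 5.0

# White-box view: knowing the source has two branches, we add a test
# that forces the zero-divisor path, so that every internal branch is
# exercised at least once.
assert safe_divide(10, 0) is None
```

A pure black-box tester might never think to divide by zero; the white-box tester adds that case precisely because the guard branch is visible in the code.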





Different Types Of Software Testing
Software Testing can be broadly classified into 3 types:
● Functional Testing: Functional testing is a type of software testing that
validates the software systems against the functional requirements. It is
performed to check whether the application is working as per the software’s
functional requirements or not. Various types of functional testing are Unit
testing, Integration testing, System testing, Smoke testing, and so on.
● Non-functional Testing: Non-functional testing is a type of software testing
that checks the application for non-functional requirements like performance,
scalability, portability, stress, etc. Various types of non-functional testing are
Performance testing, Stress testing, Usability Testing, and so on.
● Maintenance Testing: Maintenance testing is the process of changing,
modifying, and updating the software to keep up with the customer’s needs. It
involves regression testing that verifies that recent changes to the code have
not adversely affected other previously working parts of the software.



Different Levels of Software Testing
(Unit, Integration, System & Acceptance Testing)



Different Levels of Software Testing
In software testing, we have four different levels of testing, which are discussed below:

1. Unit Testing
2. Integration Testing
3. System Testing
4. Acceptance Testing



Different Levels of Software Testing

1. Unit Testing:
● Unit testing is the first level of software testing, used to test whether software
modules satisfy the given requirements or not.
● Unit testing involves analysing each unit, or individual component, of the
software application. A unit is an individual function or the smallest
testable part of the software.
● The primary purpose of executing unit testing is to validate unit components
along with their performance.
● Unit testing is also the first level of functional testing.
● Unit testing helps test engineers and developers understand the code base,
making them able to change defect-causing code quickly. The developers
implement the unit tests.
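A minimal unit test with Python's built-in `unittest` module can illustrate the level. The `word_count` function is a hypothetical unit under test, chosen only because it is the "smallest testable part" the text describes:

```python
import unittest

def word_count(text: str) -> int:
    """The unit under test: count whitespace-separated words."""
    return len(text.split())

class WordCountTest(unittest.TestCase):
    """Each test method exercises the unit in isolation."""

    def test_simple_sentence(self):
        self.assertEqual(word_count("software testing is fun"), 4)

    def test_empty_string(self):
        self.assertEqual(word_count(""), 0)

# Run the tests programmatically (a developer would normally use
# `python -m unittest` from the command line).
suite = unittest.defaultTestLoader.loadTestsFromTestCase(WordCountTest)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```

Because nothing outside the single function is involved, a failure here points directly at the unit itself, which is what makes this level cheap to debug.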



Different Levels of Software Testing
Integration Testing:
● The second level of software testing is integration testing where individual
units are combined and tested as a group.
● The primary purpose of executing the integration testing is to identify the
defects at the interaction between integrated components or units.
● When each component or module works separately, we need to check the
data flow between the dependent modules, and this process is known as
integration testing.
● We only go for the integration testing when the functional testing has been
completed successfully on each application module.
● In simple words, we can say that integration testing aims to evaluate the
accuracy of communication among all the modules.
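The data flow between dependent modules can be sketched as follows. Both modules (`parse_amount` and `apply_discount`) are hypothetical; the integration test targets the interface where one module's output becomes the other's input:

```python
def parse_amount(text: str) -> int:
    """Module A (hypothetical): parse an amount like '1,500' to an int."""
    return int(text.replace(",", ""))

def apply_discount(amount: int, percent: int) -> int:
    """Module B (hypothetical): apply a percentage discount."""
    return amount - amount * percent // 100

def checkout_total(price_text: str, percent: int) -> int:
    """Integration point: A's output flows into B's input."""
    return apply_discount(parse_amount(price_text), percent)

# Each module may pass its own unit tests, but only an integration
# test exercises the data flow across the interface between them.
assert checkout_total("1,500", 10) == 1350
```

A classic integration defect this would catch: module A returning a string while module B expects an integer, something neither unit test alone would reveal.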



Different Levels of Software Testing
System Testing:
● The third level of software testing is system testing, where a complete,
integrated system/software is tested, i.e. we will test the application as a whole
system.
● The purpose of this test is to evaluate the software's functional and
nonfunctional requirements.
● It is end-to-end testing where the testing environment is parallel to the
production environment.
● In simple words, we can say that System testing is a sequence of different
types of tests to implement and examine the entire working of an integrated
software computer system against requirements.



Different Levels of Software Testing
Acceptance Testing:
● The last and fourth level of software testing is acceptance testing, where a
system is tested for acceptability.
● It is used to evaluate whether the specification and requirements are met by
the delivered system.
● The purpose of this test is to evaluate the system’s compliance with the
business requirements and assess whether it is acceptable for delivery.
● In simple words, we can say that acceptance testing is the culmination of all
the testing processes done previously.
● The acceptance testing is also known as User acceptance testing (UAT) and is
done by the customer before accepting the final product.
● Usually, UAT is done by the domain expert (customer) for their satisfaction and
checks whether the application is working according to given business
scenarios and real-time scenarios.
Software Testing Life Cycle
(STLC)

Software Testing Life Cycle (STLC)

● Software Testing Life Cycle (STLC) is a sequence of specific activities


conducted during the testing process to ensure software quality goals are
met.
● STLC involves both verification and validation activities.
● Software Testing is not just a single, isolated activity. It consists of a series of
activities carried out methodically to help certify your software product.





Software Testing Life Cycle (STLC) Phases
1. Requirement Phase Testing:
● Identify types of tests to be performed.
● Gather details about testing priorities and focus.
● Prepare Requirement Traceability Matrix (RTM).
● Identify test environment details where testing is supposed to be carried out.

2. Test Planning Activities


● Preparation of a test plan/strategy document for various types of testing
● Test tool selection
● Test effort estimation
● Resource planning and determining roles and responsibilities
● Training requirements



Software Testing Life Cycle (STLC) Phases
3. Test Case Development Activities
● Create test cases, automation scripts (if applicable)
● Review test cases and scripts
● Create test data (If Test Environment is available)

4. Test Environment Setup Activities


● Understand the required architecture and environment set-up
● Prepare the hardware and software requirements for the test environment
● Set up the test environment and test data
● Perform a smoke test on the build



Software Testing Life Cycle (STLC) Phases
5. Test Execution Activities
● Execute tests as per plan
● Document test results, and log defects for failed cases
● Retest the Defect fixes
● Track the defects to closure

6. Test Cycle Closure Activities


● Evaluate cycle completion criteria based on Time, Test coverage, Cost, Software, Critical
Business Objectives, Quality
● Prepare test metrics based on the above parameters.
● Document the learning out of the project
● Prepare Test closure report



How to write system test cases?

Writing system test cases involves approximately 10 steps:


1. Test case ID: Generate a unique ID for your test case.
2. Test case scenario: Create a one-line description for it.
3. Pre-condition: Your test case must start from some predefined conditions, e.g. a registered
email id and user name.
4. Test steps: Write your test steps with a proper description.
5. Test data: Include the data you need for your test cases. Suppose you’re testing an application
form: you must include test data like the applicant’s name, address, contact number, email id, etc.
6. Expected result: The predefined test result, e.g. successful submission of an application.
7. Post-conditions: Some unique data will be generated, e.g. an application ID.
8. Actual result: The result actually observed when the test is run; compare it against the
expectation.
9. Status: Passed or failed.
10. Review and revise: Finally, review and revise the test cases as necessary to ensure they are
accurate and effective in testing the system requirements.
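The fields above can be captured as a structured record. This is only a sketch using a Python dataclass; the field names simply mirror the steps listed above, and the login scenario is hypothetical:

```python
from dataclasses import dataclass

@dataclass
class SystemTestCase:
    """One test case, with fields mirroring the steps above."""
    test_id: str
    scenario: str
    precondition: str
    steps: list
    test_data: dict
    expected_result: str
    postcondition: str = ""
    actual_result: str = ""
    status: str = "Not Run"

tc = SystemTestCase(
    test_id="TC-001",
    scenario="Login with valid credentials",
    precondition="A registered username and password exist",
    steps=["Enter username", "Enter password", "Click on login"],
    test_data={"username": "ABC", "password": "1234ABa"},
    expected_result="Login successful",
)

# After execution, record the outcome and derive the status (step 8/9).
tc.actual_result = "Login successful"
tc.status = "Pass" if tc.actual_result == tc.expected_result else "Fail"
assert tc.status == "Pass"
```

Keeping test cases as structured records like this is what makes the metrics later in this module (executed %, failed %, etc.) straightforward to compute.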
Sample test case

Test Id: 1
Test Condition: Check that with the correct username and password the user is able to log in.
Test Steps: 1. Enter username 2. Enter password 3. Click on login
Input Test Data: username: ABC; password: 1234ABa
Expected Result: Login successful
Actual Result: Login successful
Status: Pass
Remarks: None

Test Id: 2
Test Condition: Check that with an incorrect username and password the user is not able to log in.
Test Steps: 1. Enter username 2. Enter password 3. Click on login
Input Test Data: username: AB1; password: 1234AB
Expected Result: Login unsuccessful
Actual Result: Login unsuccessful
Status: Pass
Remarks: None

Test Id: 3
Test Condition: Check that the login page loads efficiently for the client.
Test Steps: 1. Click on login button
Input Test Data: None
Expected Result: Welcome to login page.
Actual Result: The login page is not loaded.
Status: Fail
Remarks: Browser compatibility issue on the user’s side.
Phases in a Tester's Mental Life
There are 5 phases in a tester’s thinking:

Phase 0: (Until 1956: Debugging Oriented)


○ Sees no difference between debugging and testing.
○ Today, this view is a barrier to good testing and quality software.

Phase 1: (1957-1978: Demonstration Oriented)


○ The purpose of testing here is to show that the software works.
○ This failed because the probability of showing that the software works
decreases as testing increases, i.e., the more you test, the more likely you
are to find a bug.

Phase 2: (1979-1982: Destruction Oriented)


○ The purpose of testing is to show that the software doesn't work.
○ This also failed, because then the software would never get released, as you
will always find one bug or another.
○ Also, correcting a bug may lead to another bug.
Phase 3: (1983-1987: Evaluation Oriented)
○ The purpose of testing is to reduce the risks.
○ The product is released when the confidence in that product is high
enough. (Note: this applies to large software products with millions of lines
of code and years of use.)
○ We apply principles of statistical quality control.

Phase 4: (1988-2000: Prevention Oriented)


○ Testability is the factor considered here. One reason is to reduce the labour
of testing.
○ Another reason is to distinguish testable from non-testable code: testable
code has fewer bugs than code that is hard to test.
○ Identifying the testing techniques to test the code is the main key here.
Advantages and Disadvantages of System Testing

The advantages are-


● It checks the overall functionality of the system against the business
requirements.
● As it checks the whole system, it easily detects bugs and finds defects that
were skipped in unit and integration testing, and thus makes the product
ready for acceptance testing.
● You can perform the testing in a practical (production-like) environment,
which helps to detect real-time bugs.

The disadvantages are-


● This testing is time-consuming and expensive because it checks the full
system.
● System testing becomes challenging for large and complex systems.
Dichotomies in Software Testing

● Dichotomies: A division, contrast, or partition between two things.
● Software testing is a discipline that involves various types of tests, approaches,
techniques, and tools.
● Within this field, we very often encounter dichotomies, which can lead to confusion
when making decisions.
● Dichotomies are crucial for a QA engineer, as they broaden the thought process and
help in taking informed decisions during the design of a test strategy.
● Let's consider some of them:
a. Testing vs Debugging
b. Functional vs Structural Testing
c. Designer vs Tester
d. Modularity (Design) vs Efficiency
e. Programming in the Small vs in the Large
f. Builder vs Buyer
a. Testing & Debugging

● Many people consider the two to be the same.
○ Testing is to find bugs.
○ Debugging is to find the cause or misconception leading to the bugs.
○ Their roles are often confused to be the same, but there are differences in
the goals, methods, and psychology applied to each.



a. Testing & Debugging

1. Testing starts with known conditions, uses a predefined procedure, and has
predictable outcomes. Debugging starts with possibly unknown initial conditions,
and the end cannot be predicted.
2. Testing is planned, designed, and scheduled. Debugging procedures and duration
are not constrained.
3. Testing is a demonstration of an error or of apparent correctness. Debugging is a
deductive process (no demonstration).
4. Testing proves the programmer's success or failure. Debugging is the
programmer's justification.
5. Much of testing can be done without design knowledge. Debugging is impossible
without detailed design knowledge.
6. Testing can be done by an outsider to the development team. Debugging must be
done by an insider (development team).
7. A theory establishes what testing can or cannot do. In debugging, the time, effort,
method, etc. depend on human ability.
8. Test execution and design can be automated. Automation of debugging is still a
dream.


b. Functional versus Structural Testing

1. Functional testing treats a program as a black box. Structural testing treats a
program as a white box and looks at the implementation details: programming
style, control method, source language, database, and coding details.
2. In functional testing, inputs and outputs are verified for conformance to
specifications from the user's point of view.
3. Functional tests can detect all bugs but would take infinite time to do so
completely. Structural tests are inherently finite but cannot detect all errors,
even if completely executed.
4. Both functional and structural tests are useful, both have limitations, and both
target different kinds of bugs.


c. Designer versus Tester:

1. The test designer is the person who designs the tests, whereas the tester is the
one who actually runs them against the code.
2. During functional testing, the designer and tester are probably different
persons; during unit testing, the tester and the programmer merge into one
person.

d. Modularity versus Efficiency:

● A module is a discrete, well-defined, small component of a system.
● The smaller the modules, the more difficult they are to integrate; the larger the
modules, the more difficult they are to understand.
● Both tests and systems can be modular. Testing should be organized into modular
components.
● Small, independent test cases can be designed to test independent modules.
e. Programming in the Small versus in the Large:
● Programming in the large means constructing programs that consist of many components
written by many different programmers.
● Programming in the small is what we do for ourselves in the privacy of our own offices.

f. Builder versus Buyer:

● Most software is written and used by the same organization. Unfortunately, this
situation is dishonest because it clouds accountability.
● If there is no separation between builder and buyer, there can be no accountability.
● The different roles / users in a system include:
○ 1. Builder: who designs the system and is accountable to the buyer.
○ 2. Buyer: who pays for the system in the hope of profits from providing services.
○ 3. User: the ultimate beneficiary or victim of the system.
○ 4. Tester: who is dedicated to the builder's destruction.
○ 5. Operator: who has to live with the builder's mistakes.
A Model for Testing

[Figure: the testing model. The world contains the environment, the program, and
nature & human psychology; the model world contains the environment model, the
program model, and the bug model. Tests compare the program's behaviour against
the expected outcome, and unexpected results feed back into the models.]

Model For Testing:
It includes three models:
● a model of the environment,
● a model of the program, and
● a model of the expected bugs.

a. Environment:
• A program's environment is the hardware and software required to make
it run.
• For online systems, the environment may include communication lines,
other systems, terminals, and operators.
• The environment also includes all programs that interact with, and are
used to create, the program under test, such as the OS, linkage editor,
loader, compiler, and utility routines.



Model For Testing:
b. Program:
• The concept of the program must be simplified in order to test it; most
programs are too complicated to understand in detail.
• If a simple model of the program doesn't explain the unexpected behaviour,
we may have to modify that model to include more facts and details.

c. Bugs:
• Bugs are more harmful than we ever expect them to be.
• An unexpected test result may lead us to change our notion of what a bug is
and our model of bugs.
• Programmers and testers who hold optimistic notions about bugs are usually
unable to test effectively.



Model For Testing:
Optimistic notions about bugs:
1. Benign Bug Hypothesis: The belief that bugs are nice and logical. (Benign: not
dangerous.)
2. Bug Locality Hypothesis: The belief that a bug within a component affects only
that component's behaviour.
3. Control Bug Dominance: The belief that errors in the control structures (if, switch,
etc.) of programs dominate the bugs.
4. Code / Data Separation: The belief that bugs respect the separation of code and
data.
5. Language Syntax and Semantics: The belief that language syntax and semantics
(e.g. structured coding, strong typing) eliminate most bugs.
6. Corrections Abide: The mistaken belief that a corrected bug remains corrected.
7. Silver Bullets: The mistaken belief that a language, design method, representation,
or environment grants immunity from bugs.
8. Angelic Testers: The belief that testers are better at test design than programmers
are at code design.
System Testing Metrics

● A software testing metric indicates the degree to which a process,


component, or tool is efficient.

Importance of Metrics in Software Testing


● Software testing metrics are used to increase the overall productivity of the
development process.
● It helps to make more informed choices about the tools and technologies
being used.
● It helps to identify unique ways and techniques that are beneficial for their
system, hence increasing performance.
● Software testing metrics determine the health of a process, tool, and approach
used.



Metrics Life Cycle



• Example of Software Test Metrics Calculation

Data retrieved during test case development:

1. No. of requirements: 5
2. Average number of test cases written per requirement: 40
3. Total no. of test cases written for all requirements: 200
4. Total no. of test cases executed: 164
5. No. of test cases unexecuted: 36
6. No. of test cases passed: 100
7. No. of test cases failed: 60
8. No. of test cases blocked: 4
9. Total no. of defects identified: 20
10. Defects accepted as valid by the dev team: 15
11. Defects deferred for future releases: 5
12. Defects fixed: 12

Testing metrics computed from this data:

1. Percentage test cases executed = (No. of test cases executed / Total no. of test
cases written) x 100 = (164 / 200) x 100 = 82%
2. Test Case Effectiveness = (Number of defects detected / Number of test cases
run) x 100 = (20 / 164) x 100 = 12.2%
3. Failed Test Cases Percentage = (Total number of failed test cases / Total number
of tests executed) x 100 = (60 / 164) x 100 = 36.59%
4. Blocked Test Cases Percentage = (Total number of blocked tests / Total number
of tests executed) x 100 = (4 / 164) x 100 = 2.44%
5. Fixed Defects Percentage = (Total number of defects fixed / Number of defects
reported) x 100 = (12 / 20) x 100 = 60%
6. Accepted Defects Percentage = (Defects accepted as valid by the dev team /
Total defects reported) x 100 = (15 / 20) x 100 = 75%
7. Defects Deferred Percentage = (Defects deferred for future releases / Total
defects reported) x 100 = (5 / 20) x 100 = 25%
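The worked example above can be recomputed in a few lines of Python, making the formulas explicit. The figures come straight from the table; only the `pct` helper is introduced here:

```python
# Figures from the metrics table above.
written  = 200   # total test cases written
executed = 164   # total test cases executed
failed   = 60    # test cases failed
blocked  = 4     # test cases blocked
defects  = 20    # total defects identified
accepted = 15    # defects accepted as valid by the dev team
deferred = 5     # defects deferred for future releases
fixed    = 12    # defects fixed

def pct(part: int, whole: int) -> float:
    """Helper: (part / whole) x 100, rounded to 2 decimals."""
    return round(part / whole * 100, 2)

assert pct(executed, written) == 82.0    # % test cases executed
assert pct(defects, executed) == 12.2    # test case effectiveness
assert pct(failed, executed)  == 36.59   # failed test cases %
assert pct(blocked, executed) == 2.44    # blocked test cases %
assert pct(fixed, defects)    == 60.0    # fixed defects %
assert pct(accepted, defects) == 75.0    # accepted defects %
assert pct(deferred, defects) == 25.0    # defects deferred %
```

Note that the test-case metrics are normalized by cases written or executed, while the defect metrics are normalized by total defects reported; mixing up the denominators is a common mistake.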
Software Testing Tools

System Testing Tools

● The software testing tools can be categorized depending on the licensing
(paid/commercial or open-source), technology usage, type of testing, and so on.
● With the help of testing tools, we can improve our software performance,
deliver a high-quality product, and reduce the duration of testing that is
otherwise spent on manual efforts.
● The software testing tools can be divided into the following:
○ Test management tools
○ Bug tracking tools
○ Automated testing tools
○ Performance testing tools
○ Cross-browser testing tools
○ Integration testing tools
○ Unit testing tools
○ Mobile/Android testing tools
○ GUI testing tools
○ Security testing tools


1. Test management tool

Test management tools are used to keep track of all testing activity, support fast data analysis, manage manual and automated test cases across various environments, and plan and maintain manual testing as well.

2. Bug/defect tracking tool

A defect tracking tool is used to keep track of bug fixes and ensure the delivery of a quality product. It helps us find bugs during the testing stage so that defect-free builds reach the production server.
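As a rough sketch of what a defect tracking tool stores, here is a minimal defect record with status transitions. The field names and status values are illustrative assumptions, not taken from any particular tool.

```python
from dataclasses import dataclass, field

# Hypothetical status workflow for a tracked defect.
VALID_STATUSES = {"new", "assigned", "fixed", "verified", "closed", "deferred"}

@dataclass
class Defect:
    defect_id: int
    summary: str
    severity: str = "moderate"
    status: str = "new"
    history: list = field(default_factory=list)   # list of (old, new) pairs

    def move_to(self, new_status: str) -> None:
        """Record a status transition, rejecting unknown statuses."""
        if new_status not in VALID_STATUSES:
            raise ValueError(f"unknown status: {new_status}")
        self.history.append((self.status, new_status))
        self.status = new_status

bug = Defect(101, "Login button unresponsive on submit")
bug.move_to("assigned")
bug.move_to("fixed")
```

Keeping the transition history is what lets a tracking tool report metrics such as how many defects were fixed versus deferred, as in the metrics example earlier.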


3. Automation testing tool

This type of tool is used to enhance productivity and improve accuracy. By writing test scripts in a programming language, we can reduce the time and cost of testing the application.

4. Performance testing tool

Performance or load testing tools are used to check the load handling, stability, and scalability of the application. When n users use the application at the same time and the application crashes under the immense load, load testing tools help us uncover this type of issue before release.
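A toy illustration of the load-testing idea: fire many concurrent "requests" at the system under test and summarise the response times. This is a sketch only; `handle_request` is an invented stand-in for a real service call, which an actual tool would make against an HTTP endpoint.

```python
import time
from concurrent.futures import ThreadPoolExecutor

def handle_request(i):
    """Stand-in for one user's request against the system under test."""
    start = time.perf_counter()
    time.sleep(0.01)                      # simulated service work
    return time.perf_counter() - start

def run_load(n_users=50):
    """Fire n_users concurrent requests and summarise response times."""
    with ThreadPoolExecutor(max_workers=n_users) as pool:
        times = list(pool.map(handle_request, range(n_users)))
    return {
        "requests": len(times),
        "avg_ms": sum(times) / len(times) * 1000,
        "max_ms": max(times) * 1000,
    }

report = run_load(50)
```

Real performance tools add ramp-up schedules, percentile reporting, and failure-rate tracking on top of this basic concurrency-and-timing loop.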
5. Cross-browser testing tool

This type of tool is used when we need to compare a web application across various web browser platforms. It helps ensure consistent behaviour of the application across multiple devices, browsers, and platforms.

6. Integration testing tool

This type of tool is used to test the interfaces between modules, find the critical bugs caused by interactions between different modules, and ensure that all the modules work together as per the client requirements.
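A minimal illustration of what an integration test checks: unit tests would cover each function in isolation, while the integration test exercises the interface between them, where the output of one module feeds the other. Both functions here are invented for the example.

```python
def parse_amount(text: str) -> float:
    """Module A: parse a user-entered price like '1,250.50' into a float."""
    return float(text.replace(",", ""))

def apply_discount(amount: float, percent: float) -> float:
    """Module B: return `amount` reduced by `percent` per cent."""
    return round(amount * (1 - percent / 100), 2)

def test_parse_then_discount():
    # Integration test: module A's output must be usable by module B.
    assert apply_discount(parse_amount("1,250.50"), 10) == 1125.45

test_parse_then_discount()
```

Bugs caught here (e.g. A returning a string that B cannot multiply) would slip past unit tests that exercise each module alone.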


7. Unit testing tool

This type of tool helps programmers improve their code quality; with its help, they can reduce coding time and the overall cost of the software.

8. Mobile/Android testing tool

We can use this type of tool when testing a mobile application. Some of the tools are open-source, and some are licensed; each tool has its own functionality and features.
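A minimal unit-test sketch using Python's built-in `unittest` framework; the function under test is an invented example, not from the slides.

```python
import unittest

def is_leap_year(year: int) -> bool:
    """Function under test: leap years are divisible by 4,
    except century years, which must be divisible by 400."""
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

class TestIsLeapYear(unittest.TestCase):
    def test_divisible_by_four(self):
        self.assertTrue(is_leap_year(2024))

    def test_century_is_not_leap(self):
        self.assertFalse(is_leap_year(1900))

    def test_four_hundred_is_leap(self):
        self.assertTrue(is_leap_year(2000))

# Run the suite with: python -m unittest <this_module>
```

Each test method exercises one boundary of the specification, which is exactly the kind of case a unit testing tool makes cheap to write and re-run.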


9. GUI testing tool

A GUI testing tool is used to test the user interface of the application. These tools help find the loopholes in the application's design and make it better, so that the interface holds the user's attention.

10. Security testing tool

A security testing tool is used to ensure the security of the software. If any security loophole exists, it can be fixed at an early stage of the product. We need this type of tool to verify that security-sensitive code and data are not accessible to unauthorized users.
Taxonomy of Bugs:

● There is no universally correct way to categorize bugs.
● A given bug can be put into one or another category depending on its history and the programmer's state of mind.
● The major categories (with the share of bugs observed in each) are:

1) Requirements, Features, Functionality Bugs: 24.3%
2) Structural Bugs: 25.2%
3) Data Bugs: 22.3%
4) Coding Bugs: 9.0%
5) Interface, Integration and System Bugs: 10.7%
6) Testing & Test Design Bugs: 2.8%


Taxonomy of Bugs:
1) Requirements, Features, Functionality Bugs

I. Requirements & Specs
 Incomplete, ambiguous, or self-contradictory requirements
 Analyst's assumptions not known to the designer
 These are expensive: they are introduced early in the SDLC and removed last

II. Feature Bugs
 Specification problems create feature bugs
 A wrong-feature bug has design implications

III. Feature Interaction Bugs
 Arise due to unpredictable interactions between features
 Example: call forwarding combined with call waiting


Taxonomy of Bugs:
2. Structural Bugs

I. Control & sequence bugs: include unreachable code, improper nesting of loops, incorrect loop-termination criteria, and missing process steps

II. Logic bugs: misunderstanding how case statements and logic operators behave, e.g. in Boolean and logical expressions

III. Processing bugs: include arithmetic bugs, algebraic errors, mathematical function evaluation, and algorithm selection

IV. Initialization bugs: include forgetting to initialize a variable, or initializing it to the wrong format

V. Data-flow bugs & anomalies: include using an uninitialized variable, or modifying a value and then not storing the result
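Two of the structural bug types above, shown concretely; the code and the bugs are invented illustrations.

```python
# Initialization bug (IV): in the buggy version, `total` is first assigned
# inside the loop, so an empty input raises NameError:
#     for x in values:
#         total = total + x        # 'total' never initialized
def sum_values(values):
    total = 0                      # fix: initialize before the loop
    for x in values:
        total += x
    return total

# Loop-termination bug (I): a buggy `while i <= len(items)` condition
# would overrun the list by one index and raise IndexError.
def last_item(items):
    i = 0
    while i < len(items) - 1:      # fix: correct termination criterion
        i += 1
    return items[i]
```

Both fixes change only the control structure, not the intended computation, which is why such defects are classed as structural rather than functional.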


Taxonomy of Bugs:
3. Data Bugs
Include all bugs that arise from the specification of data objects: their formats, the number of objects, and their initial values.

i. Dynamic data vs. static data:
• Dynamic data bugs are due to leftover garbage in a shared resource
• Static data are fixed in form and content; they appear in the source code or database, directly or indirectly

ii. Information, parameter, and control

iii. Content, structure, and attributes:
• Content can be an actual bit pattern or character string
• Structure relates to the size, shape, and numbers that describe the data object
• Attributes relate to the specification of the data's meaning
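A concrete illustration of a dynamic data bug, i.e. leftover garbage in a shared resource: in Python, a mutable default argument is a single object shared across calls, so data from one call leaks into the next. The example is invented.

```python
def add_item_buggy(item, basket=[]):      # ONE list shared by all calls
    basket.append(item)
    return basket

def add_item_fixed(item, basket=None):    # fix: fresh list per call
    if basket is None:
        basket = []
    basket.append(item)
    return basket

first  = add_item_buggy("apple")
second = add_item_buggy("pear")           # leftover "apple" is still there
```

The "resource" here is the default list object; the fix ensures each call starts from a clean state instead of inheriting the previous call's contents.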
Taxonomy of Bugs:

4. Coding Bugs
• Coding errors of all kinds can create any of the other kinds of bugs.
• These include syntax errors, which may lead to many logic and coding bugs.
• Documentation bugs are also considered coding bugs, as they may mislead maintenance programmers.

5. Testing & Test Design Bugs
 Tests require complicated scenarios and databases, and code to execute them, so tests can themselves contain bugs.
 Test bugs may lead to an incorrect interpretation of the specs.
 The testing process may be correct, while the criterion for judging the software's response to tests is incorrect or impossible.


Taxonomy of Bugs:
6. Interface, Integration and Systems Bugs

i. External Interfaces
ii. Internal Interfaces
iii. Hardware Architecture Bugs
iv. Operating System Bugs
v. Software Architecture Bugs
vi. Control & Sequence Bugs
vii. Resource Management Bugs
viii. Integration Bugs
ix. System Bugs

[Slide figure: system layers (components, hardware, O.S., drivers, application software)]
6. Interface, Integration and Systems Bugs

• External interfaces: include devices, actuators, sensors, input terminals, printers, and communication lines.

• Internal interfaces: include communicating routines and procedures.

• Hardware architecture bugs: include address-generation errors, I/O device operation/instruction errors, waiting too long for a response, incorrect interrupt handling, etc.

• Operating system bugs: a combination of hardware-architecture and interface bugs, mostly caused by a misunderstanding of what the operating system does.

• Software architecture bugs: the kind of bugs often called "interactive", arising from the overall structure of the software.

• Control & sequence bugs: include missing, wrong, or redundant process steps; ignored timing; and assuming that events occur in a specified sequence.

• Resource management bugs: include using the wrong resource, using a resource already in use, resource deadlock, etc.

• Integration bugs: arise when integrating the interfaces between working, individually tested components.

• System bugs: bugs that affect the system as a whole and cannot be pinned to a single component or interface.
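Interface bugs often come down to mismatched assumptions between caller and callee. A hypothetical example: the callee documents its angle parameter in radians, but one caller passes degrees across the interface.

```python
import math

def heading_offset(angle_radians: float) -> float:
    """Callee's contract: the angle is in RADIANS."""
    return math.sin(angle_radians)

def buggy_caller():
    # Interface bug: 90 is meant as degrees, but the callee reads radians.
    return heading_offset(90)

def fixed_caller():
    # Fix: honour the contract by converting at the interface boundary.
    return heading_offset(math.radians(90))
```

Both functions compile and run without error; only the wrong numeric result reveals the mismatch, which is why interface bugs tend to surface during integration testing rather than unit testing.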


Examples of Bugs
● Unexpected program crashes
● Results that don't match expectations
● The program entering an infinite loop
● Incorrect calculations
● Data missing from a database
● Malfunctioning user interface elements
● Unresponsive APIs
● Security issues
● Errors with file permissions
● Compatibility issues


Consequences of Bugs
● Functional Issues:
○ Bugs can lead to incorrect calculations or wrong data processing, resulting in inaccurate results or unexpected behavior.
○ Bugs may cause the software to crash or become unresponsive, leading to data loss.
● Security Vulnerabilities:
○ Bugs can lead to unauthorized access and data breaches.
○ They may compromise the confidentiality of sensitive information, leading to privacy violations and legal consequences.
● Performance Issues:
○ Bugs can degrade the overall performance of the software, causing slow response times and inefficient use of system resources.
Consequences of Bugs
● Financial Impact:
○ Identifying and fixing bugs can be time-consuming and expensive.
○ Online services suffering degraded performance may experience a loss of revenue.
● Reputation Damage:
○ Users may become frustrated or lose trust in a software product, damaging the reputation of the software or the company behind it.
● Legal Consequences:
○ Bugs that lead to data breaches or privacy violations may result in legal action and regulatory penalties.
○ Breaching service-level agreements or other contractual commitments due to bugs may lead to legal disputes.
● Operational Disruptions:
○ Bugs may disrupt business processes, leading to operational inefficiencies.
Consequences of Bugs

Some consequences of a bug, on a scale of increasing severity, are:

● Mild: bugs may lead to misspelled output or a misaligned printout.
● Moderate: outputs are misleading or redundant.
● Annoying: until the bugs are fixed, operators must use unnatural command sequences to get a proper response.
● Disturbing: the system refuses to handle authorized/legal transactions; for example, an ATM may malfunction with a valid ATM card or credit card.
● Serious: the system loses track of its transactions; accountability is lost.
● Very Serious: the system performs a different transaction than the one requested, e.g. credits another account, or converts withdrawals to deposits.
● Extreme: the problems aren't limited to a few users or to a few transaction types.
● Intolerable: long-term, unrecoverable corruption of the database.
● Infectious: corrupts other systems, even when it may not fail itself.
Remedies for Test Bugs:

1. Test Debugging:
Testing and debugging of the tests themselves, test scripts, etc. This is simpler when tests have a localized effect.

2. Test Quality Assurance:
Monitoring quality through independent testing and test design.

3. Test Execution Automation:
Test execution bugs are largely eliminated by using test execution automation tools instead of manual testing.

4. Test Design Automation:
Test design is automated, in the same way parts of software development are automated.