The document discusses the evolution of software testing over time. It explains that in early days, testing was considered a debugging process after development to remove errors. However, by the 1970s, the need for separate testing techniques was realized. The document then covers key milestones in testing in the 1980s, 1990s, and characterizes different phases of testing evolution. It provides definitions of software testing and discusses concepts like effective vs exhaustive testing and testing as a process. Finally, it outlines some important testing terminology.


Module 1

Testing Methodology

Reference: Software Testing Principles and Practices, Naresh Chauhan, Oxford University
Pooja Malhotra
Evolution of Software Testing
• In the early days of software development, software testing was considered only a debugging process for removing errors after the development of the software.
• By 1970, the term software engineering was in common use, but software testing was only beginning to emerge as a discipline at that time.
• In 1978, G.J. Myers realized the need to discuss the techniques of software testing as a separate subject. He wrote the book “The Art of Software Testing”, which is a classic work on software testing.
• Myers discussed the psychology of testing and emphasized that testing should be done with the mindset of finding errors, not of demonstrating that errors are not present.
• By 1980, software professionals and organizations had started talking about quality in software. Organizations formed quality assurance teams for projects, which take care of all the testing activities for a project right from the beginning.

Evolution of Software Testing
• In the 1990s, testing tools finally came into their own. There was a flood of tools, which are vital to adequate testing of software systems; however, they do not solve all the problems and cannot replace a testing process.
• Gelperin and Hetzel [79] have characterized the growth of software testing over time. Based on this, we can divide the evolution of software testing into the following phases:

Evolution of Software Testing
(figure: phases in the evolution of software testing)
Psychology of Software Testing:
• Wrong mindset: “Testing is the process of demonstrating that there are no errors.”
• Right mindset: “Testing is the process of executing a program with the intent of finding errors.”

The Quality Revolution
The Shewhart cycle

• Deming introduced Shewhart’s PDCA cycle to Japanese researchers.
• It illustrates the following activity sequence:
– Setting goals
– Assigning them to measurable milestones
– Assessing progress against the milestones
– Taking action to improve the process in the next cycle
Software Testing Goals
• Testing produces reliability and quality.
• Quality leads to customer satisfaction.
• Testing controls risk factors.
Software Testing Definitions

• “Testing is the process of executing a program with the intent of finding errors.” - Myers [2]
• “A successful test is one that uncovers an as-yet-undiscovered error.” - Myers [2]
• “Testing can show the presence of bugs but never their absence.” - E.W. Dijkstra [125]

Software Testing Definitions

• “Testing is a concurrent lifecycle process of engineering, using, and maintaining testware (i.e. testing artifacts) in order to measure and improve the quality of the software being tested.” - Craig [117]
• “Software testing is a process that detects important bugs with the objective of having better quality software.”
Model for Software Testing

Effective Software Testing vs Exhaustive Software Testing

The domain of possible inputs to the software is too large to test:
• Valid inputs
• Invalid inputs
• Edited inputs
• Race conditions

There are too many possible paths through the program to test:

for (int i = 0; i < n; ++i)
{
    if (m >= 0)
        x[i] = x[i] + 10;
    else
        x[i] = x[i] - 2;
}
...

• The total number of paths will be 2^n + 1, where n is the number of times the loop is carried out.
• If n is 20, then the number of paths will be 2^20 + 1, i.e. 1048577.

• Not every design error can be found.


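The path-count formula above can be sketched in Python (a small illustration, not from the slides):

```python
def path_count(n):
    # Following the slide's formula 2^n + 1: each of the n loop
    # iterations takes one of two branches (if/else), giving 2**n
    # branch combinations, plus one more path as per the formula.
    return 2 ** n + 1

print(path_count(20))  # 1048577 -- already infeasible to test exhaustively
```

Even for a trivial 20-iteration loop, exhaustive path testing is out of reach, which is why effective (selective) testing is preferred.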
Software Testing as a Process


For better quality software, an organization must adopt a testing process and consider the following points:
• The testing process should be organized so that there is enough time for the important and critical features of the software.
• Testing techniques should be adopted such that they detect the maximum number of bugs.
• Quality factors should be quantified so that there is a clear understanding in running the testing process. In other words, the process should be driven by quantified quality goals; in this way, the process can be monitored and measured.
• Testing procedures and steps must be defined and documented.
• There must be scope for continuous process improvement.

Software Testing Terminology

• Failure
The inability of a system or component to perform a required
function according to its specification.

• Fault / Defect / Bug
A fault is the condition that actually causes a system to produce a failure. It can be said that failures are the manifestation of bugs.

Software Testing Terminology

Error
Whenever a member of the development team makes a mistake in any phase of the SDLC, errors are produced. It might be a typographical error, a misreading of a specification, a misunderstanding of what a subroutine does, and so on. Thus, error is a very general term for a human mistake.

Software Testing Terminology

Module A()
{
    ---
    while (a > n + 1);   /* the stray semicolon is the error: it gives the
                            loop an empty body, so the block below does not
                            repeat as intended */
    {
        ---
        print("The value of x is", x);
    }
    ---
}
Software Testing Terminology

• Test Case
A test case is a well-documented procedure designed to test the functionality of a feature in the system.

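As an illustration, a test case can be captured as a simple record; the field names below are assumptions for this sketch, not a standard format:

```python
# Hypothetical test case for a login feature (illustrative only)
test_case = {
    "id": "TC-01",
    "feature": "user login",
    "preconditions": "a registered user account exists",
    "steps": [
        "open the login page",
        "enter valid credentials",
        "click the Submit button",
    ],
    "expected_result": "user is redirected to the dashboard",
}

# The tester follows the documented steps in order
for step in test_case["steps"]:
    print(step)
```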
Software Testing Terminology

• Testware
The documents created during the testing activities are known as testware (test plan, test specifications, test case design, test reports, etc.).

• Incident
The symptom(s) associated with a failure that alerts the user to the
occurrence of a failure.

• Test Oracle
A test oracle is the means used to judge the success or failure of a test (the correctness of the system for some test), e.g., comparing actual results with expected results by hand.
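A minimal sketch of a test oracle (the `oracle` and `add` names are assumptions for illustration): it judges a test by comparing the actual result against the expected one:

```python
def oracle(actual, expected):
    # The oracle judges the success or failure of a test by comparison
    return "PASS" if actual == expected else "FAIL"

def add(a, b):          # hypothetical unit under test
    return a + b

print(oracle(add(2, 3), 5))   # PASS
print(oracle(add(2, 3), 6))   # FAIL
```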
Life Cycle of a Bug

States of a Bug

Bug Affects the Economics of Software Testing

Bug Classification based on Criticality

• Critical Bugs
This type of bug has the worst effect on the functioning of the software, such that it stops or hangs the normal functioning of the software.

• Major Bugs
This type of bug does not stop the functioning of the software, but it causes a functionality to fail to meet its requirements as expected.

• Medium Bugs
Medium bugs are less critical in nature as compared to critical and major bugs (e.g., output not according to standards, redundant or truncated output).

• Minor Bugs
This type of bug does not affect the functioning of the software (e.g., a typographical error or a misaligned printout).
Bug Classification based on SDLC

• Requirements and Specifications Bugs
• Design Bugs
– Control Flow Bugs
– Logic Bugs: improper layout of cases, missing cases
– Processing Bugs: arithmetic errors, incorrect data conversion
– Data Flow Bugs
– Error Handling Bugs
– Race Condition Bugs
– Boundary-Related Bugs
– User Interface Bugs
• Coding Bugs
• Interface and Integration Bugs
• System Bugs
• Testing Bugs

Testing Principles

• Effective testing, not exhaustive testing.
• Testing is not a single phase performed in the SDLC.
• Destructive approach for constructive testing.
• Early testing is the best policy.
• The probability of the existence of an error in a section of a program is proportional to the number of errors already found in that section.
• The testing strategy should start at the smallest module level and expand toward the whole program.
Testing Principles

• Testing should also be performed by an independent team.
• Everything must be recorded in software testing.
• Invalid inputs and unexpected behavior have a high probability of finding an error.
• Testers must participate in specification and design reviews.

Software Testing Life Cycle (STLC): a well-defined series of steps to ensure successful and effective testing.


• The major contribution of the STLC is to involve the testers at early stages of development.
• This has a significant benefit on the project schedule and cost.
• The STLC also helps the management in measuring specific milestones.

Test Planning

• Defining the test strategy.
• Estimating the number of test cases, their duration, and cost.
• Planning the resources, such as the manpower to test, the tools required, and the documents required.
• Identifying areas of risk.
• Defining the test completion criteria.
• Identifying methodologies, techniques, and tools for various test cases.
• Identifying reporting procedures, bug classification, databases for testing, bug severity levels, and project metrics.

Test Planning

The major output of test planning is the test plan document. Test
plans are developed for each level of testing. After analysing the
issues, the following activities are performed:
• Develop a test case format.
• Develop test case plans according to every phase of SDLC.
• Identify test cases to be automated.
• Prioritize the test cases according to their importance and
criticality.
• Define areas of stress and performance testing.
• Plan the test cycles required for regression testing.

Test Plan: a document describing the scope, approach, resources, and schedule of intended test activities; it serves as a roadmap.
Test Design

• Determining the test objectives and their prioritization (broad categories of things to test).
• Preparing a list of items to be tested under each objective.
• Mapping items to test cases.
• Selecting test case design techniques (black-box and white-box).
• Creating test cases and test data.
• Setting up the test environment and supporting tools.
• Creating the test procedure specification (a description of how the test case will be run, in the form of sequenced steps). This procedure is actually used by the tester at the time of execution of test cases.
Test Design

All the details specified in the test design phase are documented in
the test design specification. This document provides the details of the
input specifications, output specifications, environmental needs, and
other procedural requirements for the test case.

Test Execution: Verification and Validation

In this phase, all test cases are executed including verification and validation.
• Verification test cases are started at the end of each phase of SDLC.
• Validation test cases are started after the completion of a module.
• It is the decision of the test team to opt for automation or manual execution.
• Test results are documented in the test incident reports, test logs, testing status,
and test summary reports etc.

Post-Execution / Test Review

After test execution, bugs are reported to the concerned developers. This phase is for analysing bug-related issues and getting feedback so that the maximum number of bugs can be removed.
As soon as a developer gets the bug report, he performs the following activities:
– Understanding the bug
– Reproducing the bug
– Analyzing the nature and cause of the bug
Review process:
– Reliability analysis
– Coverage analysis
– Overall defect analysis (quality improvement)
Software Testing Methodology is the organization of software testing by
means of which the test strategy and test tactics are achieved.

Test Strategy

A test strategy is the planning of the whole testing process into a well-planned series of steps. The test strategy provides a roadmap that includes very specific activities that must be performed by the test team in order to achieve a specific goal.
• Test Factors: Test factors are risk factors or issues related to the system under development. Risk factors need to be selected and ranked according to the specific system under development.
• Test Phase: This is another component on which the testing strategy is based. It refers to the phases of the SDLC where testing will be performed. The testing strategy may differ for different SDLC models; e.g., strategies will be different for the waterfall and spiral models.
Test Strategy Matrix: A test strategy matrix identifies the concerns that will become the focus of test planning and execution. In this way, the matrix becomes an input to developing the testing strategy.
• Select and rank test factors.
• Identify the system development phases.
• Identify the risks (concerns) associated with the system under development.
• Plan the test strategy for every risk identified.

Let’s take a project as an example: suppose a new operating system has to be designed, which needs a test strategy.
Load Testing Objectives
• "To have a response time under XX seconds"
• "Error rate is under XX%"
• "The infrastructure can handle XXX users"

Development of Test Strategy

The testing strategy should start at the component level and finish at the
integration of the entire system. Thus, a test strategy includes testing the
components being built for the system, and slowly shifts towards testing the
whole system. This gives rise to two basic terms—Verification and
Validation—the basis for any type of testing. It can also be said that the
testing process is a combination of verification and validation.

Software Verification: the process of evaluating software to determine whether the products of a given development phase satisfy the conditions imposed at the start of that phase.
Software Validation: the process of evaluating software during or at the end of the development process to determine whether it satisfies specified requirements (in conformance with customer expectations).

Verification: “Are we building the product right?”
Validation: “Are we building the right product?”
Consider the following specification: “A clickable button with name Submet”.
Verification would check the design document and catch the spelling mistake (“Submet” instead of “Submit”).
Owing to validation testing, the development team will make the Submit button actually clickable.
V-Testing Life Cycle Model

Validation Activities

• Unit Testing
• Integration Testing
• System Testing
• Acceptance Testing

Testing Tactics
The ways to perform various types of testing under a specific test strategy:
• Manual Testing
• Automated Testing

Software testing techniques: methods for designing test cases.
• Static Testing
• Dynamic Testing
– White-Box Testing
– Black-Box Testing

Testing Tools: resources for performing a test process.

Considerations in Developing Testing Methodologies
• Determine project risks
• Determine the type of development project
• Identify test activities according to SDLC phase
• Build the test plan

Verification and Validation (V & V) Activities

VERIFICATION
Verification is a set of activities that ensures the correct implementation of specific functions in software.
Verification checks whether the software conforms to its specifications.
• If verification is not performed at early stages, there is always a chance of a mismatch between the required product and the delivered product.
• Verification exposes more errors.
• Early verification decreases the cost of fixing bugs.
• Early verification enhances the quality of software.
VERIFICATION ACTIVITIES: All the verification activities are performed in connection with the different phases of the SDLC. The following verification activities have been identified:
– Verification of Requirements and Objectives
– Verification of High-Level Design
– Verification of Low-Level Design
– Verification of Coding (Unit Verification)
Verification of Requirements

1. Verification of acceptance criteria
(An acceptance criterion defines the goals and requirements of the proposed system and the acceptance limits for each of the goals and requirements.)
2. Acceptance test plan
-----------------------------------------------------------
1. Verification of objectives/specifications (SRS): The purpose of this verification is to ensure that the user’s needs are properly understood before proceeding with the project.
2. System test plan

In verifying the requirements and objectives, the tester must consider both functional and non-functional requirements, checking for:
• Correctness
• Unambiguity (every requirement has only one interpretation)
• Consistency (no specification should contradict or conflict with another)
• Completeness
• Updation
• Traceability
– Backward traceability
– Forward traceability
Verification of High Level Design

1. The tester verifies the high-level design.
2. The tester also prepares a function test plan based on the SRS. This plan will be referenced at the time of function testing.
3. The tester also prepares an integration test plan, which will be referred to at the time of integration testing.
4. The tester verifies that all the components and their interfaces are in tune with the requirements of the user. Every requirement in the SRS should map to the design.

Verification of High Level Design

Data Design: It creates a model of data and/or information represented at a high level of abstraction (the customer/user’s view of data).

Verification of Data Design:
• Check whether the sizes of data structures have been estimated appropriately.
• Check the provisions for overflow in a data structure.
• Check the consistency of data formats with the requirements.
• Check whether data usage is consistent with its declaration.
• Check the relationships among data objects in the data dictionary.
• Check the consistency of databases and data warehouses with the requirements in the SRS.
Verification of High Level Design

Architectural Design: It focuses on the representation of the structure of software components, their properties, and interactions.

Verification of Architectural Design:
• Check that every functional requirement in the SRS has been taken care of in this design.
• Check whether all exception-handling conditions have been taken care of.
• Verify the process of transform mapping and transaction mapping used for the transition from the requirement model to the architectural design.
• Check the functionality of each module according to the requirements specified.
• Check the inter-dependence and interfaces between the modules (module coupling and cohesion).
Verification of High Level Design

Interface Design: It creates an effective communication medium between the interfaces of different software modules, between the software system and any other external entity, and between a user and the software system.

Verification of Interface Design:
• Check all the interfaces between modules according to the architectural design.
• Check all the interfaces between the software and other non-human producers and consumers of information.
• Check all the interfaces between human and computer.
• Check all the above interfaces for their consistency.
• Check that the response times for all the interfaces are within the required ranges.
• Check the help facility.
• Check error messages and warnings.
• Check the mapping between every menu option and its corresponding commands.
Verification of Low-Level Design
1. The tester verifies the LLD. The details and logic of each module are verified such that the high-level and low-level abstractions are consistent.
2. The tester also prepares the unit test plan, which will be referred to at the time of unit testing.

• Verify the SRS of each module.
• Verify the SDD of each module.
• In the LLD, data structures, interfaces, and algorithms are represented by design notations; verify the consistency of every item with its design notation.
• Prepare a traceability matrix between the SRS and the SDD.
How to Verify Code
Coding is the process of converting LLD specifications into a specific
language. This is the last phase when we get the operational software with
the source code.

• Check that every design specification in the HLD and LLD has been coded, using a traceability matrix.
• Examine the code against a language specification checklist.
• Verify every statement, control structure, loop, and piece of logic, looking for:
– Misunderstood or incorrect arithmetic precedence
– Mixed-mode operations
– Incorrect initialization
– Precision inaccuracy
– Incorrect symbolic representation of an expression
– Different data types
– Improper or nonexistent loop termination
– Failure to exit
How to Verify Code
Two kinds of techniques are used to verify the coding:
(a) static testing, and (b) dynamic testing.

Static testing techniques:
These techniques do not involve actual execution; they consider only static analysis of the code or some form of conceptual execution of the code.

Dynamic testing techniques:
These are complementary to static testing techniques. The code is executed on some test data. The developer is the key person in this process, as he can verify the code of his module using dynamic testing techniques.

How to Verify Code
UNIT VERIFICATION:
Verification of coding cannot be done for the whole system at once; the system is divided into modules, so verification of coding means verification of the code of the modules by their developers. This is also known as unit verification testing.
Listed below are the points to be considered while performing unit verification:
• Interfaces are verified to ensure that information properly flows in and out of the program unit under test.
• The local data structure is verified to maintain data integrity.
• Boundary conditions are checked to verify that the module also works correctly on boundaries.
• All independent paths through the control structure are exercised to ensure that all statements in a module have been executed at least once.
• All error-handling paths are tested.

Unit verification is largely white-box oriented.


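The boundary-condition point above can be illustrated with a small sketch (the `clamp` unit is an assumed example, not from the text):

```python
def clamp(x, low, high):
    # Unit under verification: restrict x to the range [low, high]
    if x < low:
        return low
    if x > high:
        return high
    return x

# White-box checks: both branches, plus values at and just beyond boundaries
assert clamp(5, 0, 10) == 5      # interior value
assert clamp(0, 0, 10) == 0      # lower boundary
assert clamp(-1, 0, 10) == 0     # just below lower boundary
assert clamp(10, 0, 10) == 10    # upper boundary
assert clamp(11, 0, 10) == 11 - 1  # just above upper boundary (clamped to 10)
print("boundary checks passed")
```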
Validation

• Validation is the next step after verification.
• Validation is performed largely by black-box testing techniques.
• It involves developing tests that will determine whether the product satisfies the users’ requirements, as stated in the requirement specification.
• It involves developing tests that will determine whether the product’s actual behavior matches the desired behavior, as described in the functional design specification.
• The bugs still existing in the software after coding need to be uncovered.
• Validation is the last chance to discover bugs; otherwise, these bugs will move into the final product released to the customer.
• Validation enhances the quality of software.


Validation Activities

Validation Test Plans
• Acceptance Test Plan
• System Test Plan
• Function Test Plan
• Integration Test Plan
• Unit Test Plan

Validation Test Execution
• Unit Validation Testing
• Integration Testing
• Function Testing
• System Testing
• Acceptance Testing
• Installation Testing

Concept of Unit Testing
• A unit may be a:
– Function
– Procedure
– Method
– Module
– Component
• Unit Testing
– Testing a program unit in isolation, i.e. in a stand-alone manner.
– Objective: the unit works as expected.
Unit Testing

(figure: test cases exercise the module to be tested via its interface, local data structures, boundary conditions, independent paths, and error-handling paths)
Unit Testing

Dynamic unit test environment


Unit Testing
• The environment of a unit is emulated, and the unit is tested in isolation.
• The caller unit is known as the test driver:
– A test driver is a program that invokes the unit under test (UUT).
– It provides input data to the unit under test and reports the test result.
• The emulations of the units called by the UUT are called stubs:
– A stub is a dummy program.
• The test driver and the stubs are together called scaffolding.
• The low-level design document provides guidance for the selection of input test data.
Unit Validation Testing

 Drivers

• A test driver is supporting code and data used to provide an environment for testing part of a system in isolation.
• A test driver may take inputs in the following form and call
the unit to be tested:
– It may hardcode the inputs as parameters of the calling unit.
– It may take the inputs from the user.
– It may read the inputs from a file.

Unit Validation Testing
Stubs
A stub can be defined as a piece of software that works similar to a unit
which is referenced by the unit being tested, but it is much simpler than
the actual unit.
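The driver/stub relationship can be sketched in a few lines of Python. This is an illustrative example only: the names compute_discount and fetch_price are hypothetical, not from the text.

```python
# Hypothetical scaffolding sketch: a stub stands in for a called unit,
# and a driver invokes the unit under test (UUT) with hardcoded inputs.

def fetch_price_stub(item_id):
    """Stub: a dummy, much simpler version of the real pricing unit."""
    return {"A1": 100.0, "B2": 250.0}[item_id]

def compute_discount(item_id, rate, fetch_price=fetch_price_stub):
    """Unit under test: calls another unit, emulated here by the stub."""
    return fetch_price(item_id) * rate

def test_driver():
    """Driver: hardcodes inputs as parameters of the UUT and reports the result."""
    result = compute_discount("A1", 0.1)
    print("PASS" if result == 10.0 else "FAIL")

test_driver()
```

Here the driver hardcodes its inputs; as noted above, a driver could equally take them from the user or read them from a file.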

Integration Testing

• Integration testing exposes inconsistency between the modules, such as improper call or return sequences.
• Data can be lost across an interface.
• One module when combined with another module may not give the
desired result.
• Data types and their valid ranges may mismatch between the
modules.

Integration Testing

Decomposition based Integration Testing

• Based on decomposition of design into functional components or modules.
• The integration testing effort is computed as the number of test
sessions. A test session is one set of test cases for a specific
configuration.
The total test sessions in decomposition based integration is computed
as:

Number of test sessions = nodes – leaves + edges
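The formula can be computed directly from a parent-to-children map of the decomposition tree. The tree below is a made-up example, not from the text:

```python
# Hypothetical functional decomposition tree: parent -> list of child modules.
tree = {
    "Main": ["A", "B"],
    "A":    ["A1", "A2"],
    "B":    [],
    "A1":   [],
    "A2":   [],
}

nodes = len(tree)                                      # 5 modules
leaves = sum(1 for kids in tree.values() if not kids)  # 3 (B, A1, A2)
edges = sum(len(kids) for kids in tree.values())       # 4 parent-child edges

# Number of test sessions = nodes - leaves + edges
sessions = nodes - leaves + edges
print(sessions)  # 6
```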

Decomposition based Integration Testing

1. Non-Incremental Integration Testing
– Big Bang Method

2. Incremental Integration Testing
• Top-down Integration Testing
• Bottom-up Integration Testing
• Practical approach for Integration Testing
– Sandwich Integration Testing

Incremental Integration Testing

Practical Approach for Integration Testing

 There is no single strategy adopted for industry practice.

 For integrating the modules, one cannot rely on a single strategy.


There are situations, depending upon the project in hand, which force the team to integrate the modules by combining top-down and bottom-up techniques.

 This combined approach is sometimes known as Sandwich Integration testing.

 The practical approach for adopting sandwich testing is driven by the following factors:
-Priority
-Availability

Call Graph Based Integration

If we refine the functional decomposition tree into the form of a module calling graph, we move towards behavioural testing at the integration level. This can be done with the help of a call graph.
A call graph is a directed graph wherein nodes are modules or units, and a directed edge from one node to another means one module has called the other. The call graph can be captured in a matrix form, known as the adjacency matrix.

Pair-wise Integration

Consider only one pair of calling and called modules at a time, and then make a set of such pairs for all the modules.

Number of test sessions = number of edges in the call graph = 19
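The idea can be sketched with a small hypothetical call graph (invented for illustration): every edge, i.e. one calling/called pair, is one test session.

```python
# Hypothetical call graph: caller -> list of called modules.
call_graph = {
    "main":      ["parse", "report"],
    "parse":     ["read_line"],
    "report":    ["format", "write"],
    "read_line": [], "format": [], "write": [],
}

# One pair per edge of the call graph.
pairs = [(caller, called)
         for caller, callees in call_graph.items()
         for called in callees]

print(len(pairs))  # number of test sessions = number of edges = 5 here
```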

Neighborhood Integration

The neighborhood of a node is the immediate predecessor as well as the immediate successor of the node.

The total test sessions in neighborhood integration can be calculated as:
Neighborhoods = nodes – sink nodes = 20 – 10 = 10
where a sink node is an instruction in a module at which execution terminates.

Path Based Integration

This passing of control from one unit to another is necessary for integration testing. Also, there should be information within the module regarding instructions that call the module or return to the module. This must be tested at the time of integration. It can be done with the help of path-based integration defined by Paul C.

Source Node: It is an instruction in the module at which the execution starts or resumes. The nodes where the control is transferred after calling the module are also source nodes.

Sink Node: It is an instruction in a module at which the execution terminates. The nodes from which the control is transferred are also sink nodes.

Path Based Integration

Module Execution Path (MEP): It is a path consisting of a set of executable statements within a module, as in a flow graph.

Message: When the control from one unit is transferred to another unit, then the programming language mechanism used to do this is known as a message.

MM-Path (Path consisting of MEPs and messages): MM-path is a set of MEPs and transfers of control among different units in the form of messages.

MM-Path Graph:
It can be defined as an extended flow graph where nodes are MEPs and
edges are messages. It returns from the last called unit to the first unit where
the call was made.
In this graph, messages are highlighted with thick lines.
Path Based Integration

Path Based Integration

MEP Graph

Function Testing

Functional Testing is a testing technique that is used to test the features/functionality of the system or software.
Every functionality of the system specified in the functions is tested
according to its external specifications. An external specification is a
precise description of the software behaviour from the viewpoint of the
outside world
•The process of attempting to detect discrepancies between the functional
specifications of a software and its actual behavior.
•The function test must determine if each component or business event:
– performs in accordance to the specifications,
– responds correctly to all conditions that may be presented by
incoming events / data,
– moves data correctly from one business event to the next
(including data stores), and
– business events are initiated in the order required to meet the
business objectives of the system.
Function Testing
The primary processes/deliverables for requirements based function test are
discussed below:
•Test Planning: During planning, the test leader with assistance from the test
team defines the scope, schedule, and deliverables for the function test cycle.
•Functional Decomposition
•Requirement Definition: The testing organization needs specified
requirements in the form of proper documents to proceed with the function
test.
•Test Case Design: A tester designs and implements a test case to validate
that the product performs in accordance with the requirements.
•Function Coverage Matrix

Functions    Priority    Test Cases
F1           3           T2, T4
F2           1           T1, T3
•Test Case Execution
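A function coverage matrix like the one above can be held as a simple data structure so that coverage gaps are flagged mechanically. The layout below is an illustrative sketch, not a prescribed format:

```python
# Function coverage matrix: function -> priority and test cases covering it.
coverage_matrix = {
    "F1": {"priority": 3, "test_cases": ["T2", "T4"]},
    "F2": {"priority": 1, "test_cases": ["T1", "T3"]},
}

# Any function with an empty test-case list is a coverage gap.
uncovered = [f for f, row in coverage_matrix.items() if not row["test_cases"]]
print(uncovered)  # [] -> every listed function has at least one test case
```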
System Testing

SYSTEM TESTING is a level of software testing where a complete and integrated software is tested. The purpose of this test is to evaluate the system’s compliance with the specified requirements.

Categories of System Tests

Recovery Testing

• Recovery is just like the exception handling feature in a programming language.
• Recovery is the ability of a system to restart operations after the
integrity of the application has been lost.
• It reverts to a point where the system was functioning correctly and
then reprocesses the transactions up until the point of failure .
• Recovery Testing is the activity of testing how well the software
is able to recover from crashes , hardware failures and other
similar problems.
• It is the forced failure of the software in various ways to verify
that the recovery is properly performed.
– Checkpoints
– Switchover

Security Testing
Security tests are designed to verify that the system meets the security
requirements. Security may include controlling access to data, encrypting data
in communication, ensuring secrecy of stored data, auditing security events, etc
• Confidentiality-It is the requirement that data and the processes be
protected from unauthorized disclosure
• Integrity-It is the requirement that data and process be protected from
unauthorized modification
• Availability-It is the requirement that data and processes be protected from
the denial of service to authorized users
• Authentication- A measure designed to establish the validity of a
transmission, message, or originator. It allows the receiver to have
confidence that the information it receives originates from a specific known
source.
• Authorization- It is the process of determining that a requester is allowed to
receive a service or perform an operation. Access control is an example of
authorization.
• Non-repudiation- A measure intended to prevent the later denial that an
action happened, or a communication took place, etc.
Security Testing
Security Testing is the process of attempting to devise
test cases to evaluate the adequacy of protective
procedures and countermeasures.
• Security test scenarios should include negative scenarios
such as misuse and abuse of the software system.
• Security requirements should be associated with each
functional requirement. For example, the log-on requirement
in a client-server system must specify the number of retries
allowed, the action to be taken if the log-on fails, and so on.
• A software project has security issues that are global in
nature, and are therefore, related to the application’s
architecture and overall implementation. For example, a
Web application may have a global requirement that all
private customer data of any kind is stored in encrypted form
in the database
Security Testing
– Useful types of security tests include the following:
• Verify that only authorized accesses to the system are
permitted
• Verify the correctness of both encryption and decryption
algorithms for systems where data/messages are encoded.
• Verify that illegal reading of files, to which the perpetrator
is not authorized, is not allowed
• Ensure that virus checkers prevent or curtail entry of
viruses into the system
• Try to identify any “backdoors” in the system usually left
open by the software developers
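For instance, verifying the correctness of encryption and decryption amounts to a round-trip test. The sketch below uses a toy XOR cipher purely for illustration; a real security test would exercise the product's actual, vetted cryptography rather than this:

```python
def xor_cipher(data: bytes, key: bytes) -> bytes:
    """Toy cipher for illustration only -- NOT secure."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

message = b"private customer data"
key = b"secret"

ciphertext = xor_cipher(message, key)
assert ciphertext != message                   # data was actually encoded
assert xor_cipher(ciphertext, key) == message  # decryption inverts encryption
print("round-trip ok")
```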

Performance Testing

Performance testing is to test the run-time performance of the system on the basis of various parameters.

• Performance testing requires that performance requirements must be clearly mentioned in the SRS and system test plans. The main thing is that these requirements must be quantified.

• For example, a requirement that the system return a response to a query in a reasonable amount of time is not an acceptable requirement; the time must be specified in a quantitative way.

Performance Testing
• Tests are designed to determine the performance of the
actual system compared to the expected one
• Tests are designed to verify response time, execution time,
throughput, resource utilization and traffic rate
• One needs to be clear about the specific data to be captured
in order to evaluate performance metrics.
• For example, if the objective is to evaluate the response
time, then one needs to capture
– End-to-end response time (as seen by external user)
– CPU time
– Network connection time
– Database access time
– Waiting time
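A minimal sketch of capturing one such metric with a high-resolution timer; do_query is a hypothetical stand-in for the operation being timed:

```python
import time

def do_query():
    """Hypothetical operation under test; sleeps to simulate a 50 ms access."""
    time.sleep(0.05)

start = time.perf_counter()
do_query()
elapsed = time.perf_counter() - start

print(f"response time: {elapsed * 1000:.1f} ms")
```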
Stress Tests
• The goal of stress testing is to evaluate and determine the behavior
of a software component while the offered load is in excess of its
designed capacity
• The system is deliberately stressed by pushing it to and beyond its
specified limits
• It ensures that the system can perform acceptably under worst-case
conditions, under an expected peak load. If the limit is exceeded and
the system does fail, then the recovery mechanism should be
invoked
• Stress tests are targeted to bring out the problems associated with
one or more of the following:
– Memory leak: A failure in a program to release discarded memory
– Buffer allocation: To control the allocation and freeing of buffers
– Memory carving: A useful tool for analyzing physical and virtual
memory dumps when the memory structures are unknown or
have been overwritten.
Load and Stability Tests
• Tests are designed to ensure that the system remains
stable for a long period of time under full load
• When a large number of users are introduced and
applications that run for months without restarting, a
number of problems are likely to occur:
– the system slows down
– the system encounters functionality problems
– the system crashes altogether
• Load and stability testing typically involves exercising
the system with virtual users and measuring the
performance to verify whether the system can support
the anticipated load
• This kind of testing help one to understand the ways the
system will fare in real-life situations
Usability Testing

Testing to determine the extent to which the software product is understood, easy to learn, easy to operate and attractive to the users under specified conditions.

• Ease of Use

• Interface steps

• Response Time

• Help System

• Error Messages
Usability Testing
Graphical User Interface Tests
– Tests are designed to check the look and feel of the interface presented
to the users of an application system
– Tests are designed to verify different components such as icons,
menu bars, dialog boxes, scroll bars, list boxes, and radio buttons
– The GUI can be utilized to test the functionality behind the
interface, such as accurate response to database queries
– Tests the usefulness of the on-line help, error messages, tutorials,
and user manuals
– The usability characteristics of the GUI is tested, which includes
the following
• Accessibility: Can users enter, navigate, and exit with relative ease?
• Responsiveness: Can users do what they want and when they want in a
way that is clear?
• Efficiency: Can users do what they want to with minimum number of
steps and time?
• Comprehensibility: Do users understand the product structure with a
minimum amount of effort?
Compatibility/Conversion/Configuration Testing

Compatibility Testing is a type of software testing to check whether your software is capable of running on different hardware, operating systems, applications, network environments or mobile devices.

• Operating systems: The specifications must state all the targeted end-
user operating systems on which the system being developed will be run.
• Software/ Hardware: The product may need to operate with certain
versions of web browsers, with hardware devices such as printers, or with
other software, such as virus scanners or word processors.
• Conversion Testing: Compatibility may also extend to upgrades from
previous versions of the software. Therefore, in this case, the system
must be upgraded properly and all the data and information from the
previous version should also be considered.
• Ranking of possible configurations(most to the least common, for the
target system)
• Testers must identify appropriate test cases and data for compatibility
testing.
Acceptance Testing

• Acceptance Testing is the formal testing conducted to determine whether a software system satisfies its acceptance criteria and to enable the buyer to determine whether to accept the system or not.
• Determine whether the software is fit for the user to use.
• Making users confident about product
• Determine whether a software system satisfies its acceptance
criteria.
• Enable the buyer to determine whether to accept the system.

• Alpha Testing and Beta Testing

Acceptance Testing

Alpha Testing:

Alpha testing is a type of acceptance testing, performed to
identify all possible issues/bugs before releasing the product
to everyday users or public. The focus of this testing is to
simulate real users by using black-box and white-box
techniques. The aim is to carry out the tasks that a typical
user might perform. Alpha testing is carried out in a lab
environment and usually the testers are internal employees
of the organization. To put it as simple as possible, this kind
of testing is called alpha only because it is done early on,
near the end of the development of the software, and before
beta testing.
Acceptance Testing

Beta Testing:

Beta Testing of a product is performed by "real users" of the software application in a "real environment" and can be
considered as a form of external user acceptance testing.
Beta version of the software is released to a limited number
of end-users of the product to obtain feedback on the product
quality. Beta testing reduces product failure risks and
provides increased quality of the product through customer
validation. It is the final test before shipping a product to the
customers. Direct feedback from customers is a major
advantage of Beta Testing. This testing helps to test the
product in a real-time environment.
Acceptance Testing

Entry Criteria for Alpha testing:


– Software requirements document or Business requirements specification
– Test Cases for all the requirements
– Testing Team with good knowledge about the software application
– Test Lab environment setup
– QA Build ready for execution
– Test Management tool for uploading test cases and logging defects
– Traceability Matrix to ensure that each design requirement has at least one test
case that verifies it

Exit Criteria for Alpha testing


– All the test cases have been executed and passed.
– All severity issues need to be fixed and closed
– Delivery of Test summary report
– Make sure that no more additional features can be included
– Sign off on Alpha testing

Acceptance Testing

Entrance criteria for Beta Testing:

– Positive responses from alpha sites


– Sign off document on Alpha testing
– Beta version of the software should be ready
– Environment ready to release the software application to the public
– Beta sites are ready for installation

Exit Criteria for Beta Testing:

– Feedback report should be prepared from the public (good feedback)


– Prepare Beta test summary report
– Notify bug fixing issues to developers

References:

1. Software Testing Principles and Practices, Naresh Chauhan, Second edition, Oxford Higher Education
2. https://www.guru99.com/software-testing.html

Module 2

Testing Techniques
Static Testing

Static Testing

Static testing is a complementary technique to the dynamic testing technique to acquire higher-quality software. Static testing techniques do not execute the software.

Static testing can be applied for most of the verification activities.

The objectives of static testing can be summarized as follows:


• To identify errors in any phase of SDLC as early as possible
• To verify that the components of software are in conformance with
its requirements
• To provide information for project monitoring
• To improve the software quality and increase productivity

Static Testing

• Static testing techniques do not demonstrate that the software is operational or that one function of the software is working;

• They check the software product at each SDLC stage for conformance with the required specifications or standards.
Requirements, design specifications, test plans, source code,
user’s manuals, maintenance procedures are some of the items
that can be statically tested.

• Static testing has proved to be a cost-effective technique of error detection.

• Another advantage of static testing is that a bug is found at its exact location, whereas a bug found in dynamic testing provides no indication of the exact source code location.

Static Testing
Types of Static Testing

• Software Inspections

• Walkthroughs

• Technical Reviews

Inspections

• Inspection process is an in-process manual examination of an item to detect bugs.

• Inspection process is carried out by a group of peers. The group of peers first inspects the product at individual level. After this, they discuss potential defects of the product observed in a formal meeting.

• It is a very formal process to verify a software product. The documents which can be inspected are SRS, SDD, code and test plan.

Inspections

Inspection process involves the interaction of the following elements:

• Inspection steps
• Roles for participants
• Item being inspected

Inspection Process

Steps in the Inspection

Steps in the Inspection
1. Planning: During this phase, the following is executed:
• The product to be inspected is identified.
• A moderator is assigned.
• The objective of the inspection is stated. If the objective is defect detection,
then the type of defect detection like design error, interface error, code
error must be specified.
During planning, the moderator performs the following activities:
• Assures that the product is ready for inspection
• Selects the inspection team and assigns their roles
• Schedules the meeting venue and time
• Distributes the inspection material like the item to be inspected, checklists,
etc.
Readiness Criteria
• Completeness, Minimal functionality
• Readability, Complexity, Requirements and design documents
Steps in the Inspection
Inspection Team:
• Moderator
• Author
• Presenter
• Record keeper
• Reviewers
• Observer
2. Overview: In this stage, the inspection team is provided with the
background information for inspection. The author presents the rationale for
the product, its relationship to the rest of the products being developed, its
function and intended use, and the approach used to develop it. This
information is necessary for the inspection team to perform a successful
inspection.
The opening meeting may also be called by the moderator. In this meeting, the
objective of inspection is explained to the team members. The idea is that
every member should be familiar with the overall purpose of the inspection.
Steps in the Inspection
3. Individual Preparation: After the overview, the reviewers individually prepare themselves for the inspection process by studying the documents provided to them in the overview session.
– List of questions
– Potential Change Request (CR)
– Suggested improvement opportunities
Completed preparation logs are submitted to the moderator prior to the
inspection meeting.
Inspection Meeting/Examination:
– The author makes a presentation
– The presenter reads the code
– The record keeper documents the CR
– Moderator ensures the review is on track
Steps in the Inspection
At the end, the moderator concludes the meeting and produces a
summary of the inspection meeting.

Change Request (CR) includes the following details:


– Give a brief description of the issue
– Assign a priority level (major or minor) to a CR
– Assign a person to follow it up
– Set a deadline for addressing a CR

Steps in the Inspection
4. Re-work: The summary list of the bugs that arise during the inspection
meeting needs to be reworked by the author.
– Make the list of all the CRs
– Make a list of improvements
– Record the minutes meeting
– Author works on the CRs to fix the issue

5. Validation and Follow-up: It is the responsibility of the moderator to check that all the bugs found in the last meeting have been addressed and fixed.
• Moderator prepares a report and ascertains that all issues have been
resolved. The document is then approved for release.
• If this is not the case, then the unresolved issues are mentioned in a report
and another inspection meeting is called by the moderator.
Benefits of Inspection Process

• Bug Reduction
• Bug Prevention
• Productivity
• Real-time Feedback to Software Engineers
• Reduction in Development Resource
• Quality Improvement
• Project Management
• Checking Coupling and Cohesion
• Learning through Inspection
• Process Improvement

Variants of Inspection process

Active Design Reviews

Formal Technical Asynchronous
review method (FTArm)

Gilb Inspection

Humphrey’s Inspection Process

N-Fold Inspection

Checklist
Structure:
❏ Does the code completely and correctly implement the design?
❏ Does the code conform to any applicable coding standards?
❏ Is the code well-structured, consistent in style, and consistently
formatted?
❏ Are there any uncalled or unneeded procedures or any
unreachable code?
❏ Are there any leftover stubs or test routines in the code?
❏ Can any code be replaced by calls to external reusable
components or library functions?
❏ Are there any blocks of repeated code that could be condensed
into a single procedure?
❏ Is storage use efficient?
❏ Are any modules excessively complex and should be restructured
or split into multiple routines?
Checklist
Arithmetic Operations:
❏ Does the code avoid comparing floating-point
numbers for equality?
❏ Does the code systematically prevent rounding
errors?
❏ Are divisors tested for zero or noise?
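Both arithmetic checks can be illustrated in a few lines; this is a sketch using Python's standard math.isclose, with safe_ratio as a made-up example function:

```python
import math

# Comparing floating-point numbers for equality is unreliable...
a = 0.1 + 0.2
assert a != 0.3                            # naive equality fails (rounding)
assert math.isclose(a, 0.3, rel_tol=1e-9)  # ...so compare with a tolerance

# Divisors should be tested before use.
def safe_ratio(num, den):
    if den == 0:
        raise ValueError("divisor is zero")
    return num / den

print(safe_ratio(10, 4))  # 2.5
```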

Checklist
Loops and Branches:
❏ Are all loops, branches, and logic constructs complete, correct,
and properly nested?
❏ Are all cases covered in an IF-ELSEIF or CASE block,
including ELSE or DEFAULT clauses?
❏ Does every case statement have a default?
❏ Are loop termination conditions obvious and always achievable?
❏ Are indexes or subscripts properly initialized, just prior to the
loop?
❏ Does the code in the loop avoid manipulating the index variable
or using it upon exit from the loop?
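A small sketch of two of these points; the code is illustrative only, not drawn from the checklist's source:

```python
def classify(code):
    # Every case is covered, including a DEFAULT (else) clause.
    if code == "I":
        return "inspection"
    elif code == "W":
        return "walkthrough"
    else:
        return "unknown"

total = 0
i = 0                  # index initialized just prior to the loop
while i < 3:           # termination condition is obvious and achievable
    total += i
    i += 1             # the index is changed in exactly one place

print(classify("X"), total)  # unknown 3
```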

Checklist
Documentation:
❏ Is the code clearly and adequately documented
with an easy-to-maintain commenting style?
❏ Are all comments consistent with the code?
Variables:
❏ Are all variables properly defined with
meaningful, consistent, and clear names?
❏ Do all assigned variables have proper type
consistency or casting?
❏ Are there any redundant or unused variables?
Checklist
Input / Output errors:
• If the file or peripheral is not ready, is that error
condition handled?
• Does the software handle the situation of the
external device being disconnected?
• Have all error messages been checked for
correctness, appropriateness, grammar, and
spelling?
• Are all exceptions handled by some part of the
code?

Scenario based Reading

Perspective based Reading


• The software item is inspected from the perspectives of different
stakeholders. Inspectors on the inspection team check the software
quality as well as the software quality factors of a software artifact
from these different perspectives.
Usage based Reading
• This method is applied in design inspections. Design
documentation is inspected based on use cases, which are
documented in requirements specification.
Abstraction driven Reading
• This method is designed for code inspections. In this method,
an inspector reads a sequence of statements in the code and
abstracts the functions these statements compute.

Scenario based Reading

Task driven Reading


• This method is also for code inspections. In this method, the
inspector has to create a data dictionary, a complete description of
the logic and a cross-reference between the code and the
specifications.
Function-point based Scenarios
• This is based on scenarios for defect detection in requirements
documents [103]. The scenarios, designed around function-points
are known as the Function Point Scenarios. A Function Point
Scenario consists of questions and directs the focus of an
inspector to a specific function-point item within the inspected
requirements document.

Structured Walkthroughs

It is a less formal and less rigorous technique as compared to inspection.


• Author presents their developed artefact to an audience of peers.
• Peers question and comment on the artefact to identify as many
defects as possible.
• It involves no prior preparation by the audience. Usually involves
minimal documentation of either the process or any arising issues. Defect
tracking in walkthroughs is inconsistent.
• A walk through is an evaluation process which is an informal
meeting, which does not require preparation.
• The product is described by the author and queries for the
comments of participants.
• The results are the information to the participants about the product
instead of correcting it.

Technical Reviews

A technical review is intended to evaluate the software in the light of


development standards, guidelines, and specifications and to provide the
management with evidence that the development process is being carried
out according to the stated objectives. A review is similar to an inspection or
walkthrough, except that the review team also includes management.
Therefore, it is considered a higher-level technique than inspection or
walkthrough.
A technical review team is generally comprised of management-level
representatives of the User and Project Management. Review agendas
should focus less on technical issues and more on oversight than an
inspection. The purpose is to evaluate the system relative to specifications
and standards, recording defects and deficiencies. The moderator should
gather and distribute the documentation to all team members for
examination before the review. He should also prepare a set of indicators to
measure the following points:
• Appropriateness of the problem definition and requirements
• Adequacy of all underlying assumptions
• Adherence to standards, consistency, completeness, documentation
Dynamic Testing:

Black Box Testing

Black-box technique is one of the major techniques in dynamic testing for


designing effective test cases. This technique considers only the functional
requirements of the software or module. In other words, the structure or logic of the
software is not considered. Therefore, this is also known as functional testing.
• The software system is considered as a black box, taking no notice of its internal
structure, so it is also called as black-box testing technique.
• It is obvious that in black-box technique, test cases are designed based on
functional specifications. Input test data is given to the system, which is a black
box to the tester, and results are checked against expected outputs after
executing the software.

Black Box Testing
• To test the modules independently.

• To test the functional validity of the software

• Interface errors are detected.

• To test the system behavior and check its performance.

• To test the maximum load or stress on the system.

• Customer accepts the system within defined acceptable limits.

Boundary Value Analysis (BVA)

‘Boundary value analysis’ testing technique is used


to identify errors at the boundaries of the input domain
rather than those that exist in its centre.

Boundary Value Analysis (BVA)

• The BVA technique is an extension and


refinement of the equivalence class
partitioning technique

• In the BVA technique, the boundary


conditions for each of the equivalence classes are
analyzed in order to generate test cases

Guidelines for Boundary Value Analysis
• The equivalence class specifies a range
– If an equivalence class specifies a range of values, then construct
test cases by considering the boundary points of the range and
points just beyond the boundaries of the range

• The equivalence class specifies a number of values


– If an equivalence class specifies a number of values, then
construct test cases for the minimum and the maximum value of
the number
– In addition, select a value smaller than the minimum and a value
larger than the maximum value.

• The equivalence class specifies an ordered set


– If the equivalence class specifies an ordered set, such as a linear
list, table, or a sequential file, then focus attention on the first
and last elements of the set.
BVA: “Single fault” assumption theory
“Single fault” assumption in reliability theory:
failures are only rarely the result of the
simultaneous occurrence of two (or more) faults.
The function f that computes the number of test cases
for a given number of variables n can be shown as:
f = 4n + 1
As there are four extreme values per variable, this accounts for the
4n. The addition of the constant one accounts for the instance where
all variables assume their nominal values.

BVA: “Single fault” assumption theory
The basic form of implementation is to maintain all but one
of the variables at their nominal (normal or average) values
and allowing the remaining variable to take on its extreme
values. The values used to test the extremities are:
• Min  : Minimum
• Min+ : Just above minimum
• Nom  : Average (nominal)
• Max- : Just below maximum
• Max  : Maximum

Boundary Value Checking

• Test cases are designed by holding one variable


at its extreme value and other variables at their
nominal values in the input domain. The variable
at its extreme value can be selected at:

– Minimum value (Min)


– Value just above the minimum value (Min+)
– Maximum value (Max)
– Value just below the maximum value (Max-)
Boundary Value Checking

• Anom, Bmin
• Anom, Bmin+
• Anom, Bmax
• Anom, Bmax-
• Amin, Bnom
• Amin+, Bnom
• Amax, Bnom
• Amax-, Bnom
• Anom, Bnom

• 4n+1 test cases can be designed with boundary value checking


method.

Robustness Testing Method

A value just greater than the Maximum value (Max+)


A value just less than Minimum value (Min-)
• When test cases are designed considering the above points in
addition to BVC, it is called robustness testing.

• Amax+, Bnom
• Amin-, Bnom
• Anom, Bmax+
• Anom, Bmin-

• It can be generalized that for n input variables in a module,
6n+1 test cases are designed with Robustness testing.
Worst Case Testing Method

• When more than one variable is at its extreme value, i.e. when more
than one variable lies on the boundary, it is called the worst-case
testing method.

• It can be generalized that for n input variables in a module, 5^n
test cases are designed with worst-case testing, since all
combinations of the five special values are taken.

Example

• A program reads an integer number within the range [1,100] and


determines whether the number is a prime number or not. Design all
test cases for this program using BVC, robust testing, and
worst-case testing methods.

• 1) Test cases using BVC

Example

• Test Cases Using Robust Testing

BVA- The triangle problem
The triangle problem accepts three integers (a, b and c)as its input,
each of which are taken to be sides of a triangle . The values of these
inputs are used to determine the type of the triangle (Equilateral,
Isosceles, Scalene or not a triangle).
For the inputs to be declared as being a triangle they must satisfy the
six conditions:
C1. 1 ≤ a ≤ 200. C2. 1 ≤ b ≤ 200.
C3. 1 ≤ c ≤ 200. C4. a < b + c.
C5. b < a + c. C6. c < a + b.
Otherwise this is declared not to be a triangle.
The type of the triangle, provided the conditions are met, is determined
as follows:
1. If all three sides are equal, the output is Equilateral.
2. If exactly one pair of sides is equal, the output is Isosceles.
3. If no pair of sides is equal, the output is Scalene.
Test Cases for the Triangle
Problem
Boundary Value Analysis Test Cases
(min = 1, min+ = 2, nom = 100, max- = 199, max = 200)

Case  a    b    c    Expected Output
1     100  100  1    Isosceles
2     100  100  2    Isosceles
3     100  100  100  Equilateral
4     100  100  199  Isosceles
5     100  100  200  Not a Triangle
6     100  1    100  Isosceles
7     100  2    100  Isosceles
8     100  199  100  Isosceles
9     100  200  100  Not a Triangle
10    1    100  100  Isosceles
11    2    100  100  Isosceles
12    199  100  100  Isosceles
13    200  100  100  Not a Triangle

Equivalence Class Testing
• An input domain may be too large for all its elements to be used as test
input
• The input domain is partitioned into a finite number of subdomains
• Each subdomain is known as an equivalence class, and it serves as a source
of at least one test input
• A valid input to a system is an element of the input domain that is expected
to return a non-error value
• An invalid input is an input that is expected to return an error value.

Figure: (a) Too many test inputs; (b) One input is selected from each subdomain
Equivalence Class Testing

Equivalence partitioning is a method for deriving test cases wherein classes of


input conditions called equivalence classes are identified such that each
member of the class causes the same kind of processing and output to occur.
Thus, instead of testing every input, only one test case from each partitioned
class can be executed.

Guidelines for Equivalence Class Partitioning
• An input condition specifies a range [a, b]
– one equivalence class for a < X < b, and
– two other classes for X < a and X > b to test the system with invalid
inputs
• An input condition specifies a set of values
– one equivalence class for each element of the set {M1}, {M2}, ....,
{MN}, and
– one equivalence class for elements outside the set {M1, M2, ..., MN}
• Input condition specifies for each individual value
– If the system handles each valid input differently then create one
equivalence class for each valid input
• An input condition specifies the number of valid values (Say N)
– Create one equivalence class for the correct number of inputs
– two equivalence classes for invalid inputs – one for zero values and one
for more than N values
• An input condition specifies a “must be” value
– Create one equivalence class for a “must be” value, and
– one equivalence class for something that is not a “must be” value

Identification of Test Cases
Test cases for each equivalence class can be identified by:

• Assign a unique number to each equivalence class

• For each equivalence class with valid input that has not
been covered by test cases yet, write a new test case
covering as many uncovered equivalence classes as possible

• For each equivalence class with invalid input that has not
been covered by test cases, write a new test case that covers
one and only one of the uncovered equivalence classes

Example

• A program reads three numbers A, B and C with range [1,50] and


prints largest number. Design all test cases for this program using
equivalence class testing technique.

I1 = {<A,B,C> : 1 ≤ A ≤ 50}
I2 = {<A,B,C> : 1 ≤ B ≤ 50}
I3 = {<A,B,C> : 1 ≤ C ≤ 50}
I4 = {<A,B,C> : A < 1}
I5 = {<A,B,C> : A > 50}
I6 = {<A,B,C> : B < 1}
I7 = {<A,B,C> : B > 50}
I8 = {<A,B,C> : C < 1}
I9 = {<A,B,C> : C > 50}

Example

• I1 = {<A,B,C> : A > B, A > C}


• I2 = {<A,B,C> : B > A, B > C}
• I3 = {<A,B,C> : C > A, C > B}
• I4 = {<A,B,C> : A = B, A ≠ C}
• I5 = {<A,B,C> : B = C, A ≠ B}
• I6 = {<A,B,C> : A = C, C ≠ B }
• I7 = {<A,B,C> : A = B = C}

Advantages of Equivalence Class Partitioning

• A small number of test cases are needed to adequately cover


a large input domain

• One gets a better idea about the input domain being covered
with the selected test cases

• The probability of uncovering defects with the selected test


cases based on equivalence class partitioning is higher than
that with a randomly chosen test suite of the same size

• The equivalence class partitioning approach is not restricted


to input conditions alone – the technique may also be used
for output domains
Decision Table Based Testing

A major limitation of EC-based testing is that it


only considers each input separately. The technique
does not consider combining conditions.

Different combinations of equivalent classes can be


tried by using a new technique based on the
decision table to handle multiple inputs.

Formation of Decision Table

• Condition Stub
• Action Stub
• Condition Entry
• Action Entry

Formation of Decision Table
• It comprises a set of conditions (or, causes) and a set of effects (or,
results) arranged in the form of a column on the left of the table
• In the second column, next to each condition, we have its possible
values: Yes (Y), No (N), and Don’t Care (Immaterial) state.
• To the right of the “Values” column, we have a set of rules. For each
combination of the three conditions {C1,C2,C3}, there exists a rule
from the set {R1,R2, ..}
• Each rule comprises a Yes (Y), No (N), or Don’t Care (“-”) response,
and contains an associated list of effects(actions) {E1,E2,E3}
• For each relevant effect, an effect sequence number specifies the order
in which the effect should be carried out, if the associated set of
conditions are satisfied
• Each rule of a decision table represents a test case

Test case design using decision table

The steps for developing test cases using decision table


technique:
• Step 1: The test designer needs to identify the conditions and
the actions/effects for each specification unit.
– A condition is a distinct input condition or an equivalence
class of input conditions
– An action/effect is an output condition. Determine the
logical relationship between the conditions and the effects
• Step 2: List all the conditions and actions in the form of a
decision table. Write down the values the condition can take.
• Step 3: Fill the columns with all possible combinations – each
column corresponds to one combination of values.
• Step 4: Define rules by indicating what action occurs for a set
of conditions.
Test case design using decision table

• Interpret condition stubs as the inputs for the test case.


• Interpret action stubs as the expected output for the test case.

• Rule, which is the combination of input conditions becomes the test


case itself.

• The columns in the decision table are transformed into test cases.

• If there are K rules over n binary conditions, there are at least K test
cases and at the most 2^n test cases.

Decision Table Based Testing

Example
• A program calculates the total salary of an employee
with the conditions that if the working hours are less than
or equal to 48, then give normal salary. The hours over
48 on normal working days are calculated at the rate of
1.25 of the salary. However, on holidays or Sundays, the
hours are calculated at the rate of 2.00 times of the
salary. Design the test cases using decision table
testing.

Decision Table Based Testing

The decision table for the program is shown below:

The test cases derived from the decision table are given below:

Dynamic Testing: White Box Testing Techniques
White-box testing (also known as clear box testing, glass box testing,
transparent box testing, and structural testing) is a method of testing
software that tests internal structures or workings of an application. White-
box testing can be applied at the unit, integration and system levels of the
software testing process.
• White box testing needs the full understanding of the logic/structure
of the program.
• Test case designing using white box testing techniques
– Control Flow testing method
• Basis Path testing method
• Loop testing
– Data Flow testing method
– Mutation testing method
• Control flow refers to flow of control from one instruction to another
• Data flow refers to propagation of values from one variable or constant to
another variable

Logic Coverage Criteria: Structural testing considers the program
code, and test cases are designed based on the logic of the
program such that every element of the logic is covered.
Statement Coverage: The first kind of logic coverage can be identified in
the form of statements. It is assumed that if all the statements of the
module are executed once, every bug will be notified.

Test case 1: x = y = n, where n is any number


Test case 2: x = n, y = n’, where n and n’ are different numbers.

Test case 3: x > y


Test case 4: x < y
Logic Coverage Criteria

Decision or Branch Coverage:Branch coverage states that each


decision takes on all possible outcomes (True or False) at least once.
In other words, each branch direction must be traversed at least once.
• Test case 1: x = y
• Test case 2: x != y
• Test case 3: x < y
• Test case 4: x > y

Condition Coverage:Condition coverage states that each condition in a


decision takes on all possible outcomes at least once.
while ((I <= 5) && (J < COUNT))
• Test case 1: I <= 5, J < COUNT
• Test case 2: I > 5, J > COUNT

Logic Coverage Criteria
Decision / Condition Coverage:Condition coverage in a decision does not
mean that the decision has been covered. It requires sufficient test cases
such that each condition in a decision takes on all possible outcomes at least
once.
If (A && B)
• Test Case 1: A is True, B is False.
• Test Case 2: A is False, B is True.

Multiple Condition Coverage: In case of multiple conditions, even decision/


condition coverage fails to exercise all outcomes of all conditions.
Therefore, multiple condition coverage requires that we should write
sufficient test cases such that all possible combinations of condition
outcomes in each decision and all points of entry are invoked at least
once.
• Test Case 1: A = TRUE, B = TRUE
• Test Case 2: A = TRUE, B = FALSE
• Test Case 3: A = FALSE, B = TRUE
• Test Case 4: A = FALSE, B = FALSE
Basis Path Testing

Basis path testing is the technique of selecting the paths that provide a basis
set of execution paths through the program.

• Path Testing is based on control structure of the program for which flow
graph is prepared.

• requires complete knowledge of the program’s structure.

• closer to the developer and used by him to test his module.

• The effectiveness of path testing is reduced with the increase in size of


software under test.

• Choose enough paths in a program such that maximum logic coverage


is achieved.
Control Flow Graph

The control flow graph is a graphical representation of control


structure of a program. Flow graphs can be prepared as a directed
graph. A directed graph (V, E) consists of a set of vertices V and a
set of edges E that are ordered pairs of elements of V. Based on the
concepts of directed graph, following notations are used for a flow
graph:
Node: It represents one or more procedural statements. The nodes are denoted by
a circle. These are numbered or labelled.
Edges or links: They represent the flow of control in a program. This is denoted
by an arrow on the edge. An edge must terminate at a node.
Decision node: A node with more than one arrow leaving it is called a decision
node.
Junction node: A node with more than one arrow entering it is called a junction.
Regions: Areas bounded by edges and nodes are called regions. When counting
the regions, the area outside the graph is also considered a region.
Control Flow Graph

Flow Graph Notations for Different Programming


Constructs

Path Testing Terminology
Path: A path through a program is a sequence of instructions or statements that starts
at an entry, junction, or decision and ends at another, or possibly the same, junction,
decision, or exit.
Segment: Paths consist of segments. The smallest segment is a link, that is, a single
process that lies between two nodes (e.g., junction-process-junction,
junction-process-decision, decision-process-junction,
decision-process-decision).

Path Segment: A path segment is a succession of consecutive links that belongs to


some path.

Length of a Path: The length of a path is measured by the number of links in it and
not by the number of instructions or statements executed along the path. An
alternative way to measure the length of a path is by the number of nodes traversed.

Independent Path: An independent path is any path through the graph that
introduces at least one new set of processing statements or new conditions. An
independent path must move along at least one edge that has not been traversed
before the path is defined.
Path Testing Terminology

Cyclomatic complexity is a software metric used to indicate the complexity


of a program. It is a quantitative measure of the number of linearly
independent paths through a program's source code. It was developed by
Thomas J. McCabe, Sr. in 1976.
• The testing strategy, called basis path testing by McCabe who first
proposed it, is to test each linearly independent path through the program;
in this case, the number of test cases will equal the cyclomatic complexity
of the program.
• Cyclomatic Complexity (logical complexity of program)

Cyclomatic complexity number can be derived through any of the following


three formulae
1. V(G) = e – n + 2p where e is number of edges, n is the number of nodes
in the graph, and p is number of components in the whole graph.
2. V(G) = d + p where d is the number of decision nodes in the graph.
3. V(G) = number of regions in the graph.
Path Testing Terminology
• Calculating the number of decision nodes for Switch-Case/Multiple
If-Else:
When a decision node has exactly two arrows leaving it, then we count it as
a single decision node. However, switch-case and multiple if-else statements
have more than two arrows leaving a decision node, and in these cases, the
formula to calculate the number of nodes is
d = k – 1, where k is the number of arrows leaving the node.
• Calculating the cyclomatic complexity number of the program
having many connected components:
Let us say that a program P has three components: X, Y, and Z. Then we
prepare the flow graph for P and for components, X, Y, and Z. The
complexity number of the whole program is
V(G) = V(P) + V(X) + V(Y) + V(Z)
We can also calculate the cyclomatic complexity number of the full program
with the first formula by counting the number of nodes and edges in all the
components of the program collectively and then applying the formula
V(G) = e – n + 2P
Guidelines for Basis Path Testing
We can use the cyclomatic complexity number in basis path testing.
Cyclomatic number, which defines the number of independent paths, can be
utilized as an upper bound for the number of tests that must be conducted to
ensure that all the statements have been executed at least once. Thus,
independent paths are prepared according to the upper limit of the
cyclomatic number. The set of independent paths becomes the basis set for
the flow graph of the program. Then test cases can be designed according to
this basis set.
The following steps should be followed for designing test cases using path
testing:
• Draw the flow graph using the code provided for which we have to write test
cases.
• Determine the cyclomatic complexity of the flow graph.
• Cyclomatic complexity provides the number of independent paths.
• Determine a basis set of independent paths through the program control
structure.
• The basis set is in fact the base for designing the test cases. Based on every
independent path, choose the data such that this path is executed.
Example: Consider the following program segment:

1. Draw the DD graph for the program.


2. Calculate the cyclomatic complexity of the program
using all the methods.
3. List all independent paths.
4. Design test cases from independent paths.
Example : DD graph

Example

Cyclomatic Complexity
V(G) = e – n + 2 * P
= 10 – 8 +2
= 4
V(G) = Number of predicate nodes + 1
= 3 (Nodes B,C and F) + 1
= 4

V(G) = No. of Regions


= 4 (R1, R2, R3, R4)

Example

Independent Paths

• A-B-F-H
• A-B-F-G-H
• A-B-C-E-B-F-G-H
• A-B-C-D-F-H

Loop Testing

Simple Loops:

• Check whether you can bypass the loop or not. If the test case for
bypassing the loop is executed and, still you enter inside the loop, it
means there is a bug.
• Check whether the loop control variable is negative.
• Write one test case that executes the statements inside the loop.
• Write test cases for a typical number of iterations through the loop.
• Write test cases for checking the boundary values of maximum and
minimum number of iterations defined (say min and max) in the loop. It
means we should test for the min, min+1, min-1, max-1, max and
max+1 number of iterations through the loop.
Loop Testing

Nested Loops: When two or more loops are embedded, it is called a
nested loop.

The strategy is to start with the innermost loop while holding the
outer loops at their minimum values, and continue outward in this
manner until all loops have been covered.

Loop Testing

Concatenated Loops:

Loops are concatenated if it is possible to reach one after exiting the


other while still on a path from entry to exit.

Mutation Testing
Mutation testing is a technique that focuses on
measuring the adequacy of test data (or test cases).
The original intention behind mutation testing was to
expose and locate weaknesses in test cases. Thus,
mutation testing is a way to measure the quality of test
cases.

Mutation Testing

• Mutation testing is the process of mutating some segment of


code (putting some error in the code) and then testing this mutated
code with some test data. If the test data is able to detect the
mutations in the code, then the test data is quite good.

• Mutation testing helps a user create test data by interacting with the
user to iteratively strengthen the quality of test data. During mutation
testing, faults are introduced into a program by creating many
versions of the program, each of which contains one fault. Test data
are used to execute these faulty programs with the goal of causing
each faulty program to fail.

• Faulty programs are called mutants of the original program and a


mutant is said to be killed when a test case causes it to fail. When
this happens, the mutant is considered dead

Mutation Testing
• Modify a program by introducing a single small change to the code.
• A modified program is called a mutant.
• A mutant is said to be killed when the execution of a test case causes
it to fail; the mutant is then considered dead.
• A mutant is equivalent to the given program if it always produces the
same output as the original program.
• A mutant is called killable or stubborn if the existing set of test
cases is insufficient to kill it.
Mutation Score
A mutation score for a set of test cases is the percentage of
non-equivalent mutants killed by the test suite:

Mutation score = 100 × D / (N − E), where
D = number of dead (killed) mutants
N = total number of mutants
E = number of equivalent mutants

The test suite is said to be mutation-adequate if its
mutation score is 100%.
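As a minimal sketch, the formula translates directly into code (the function name is ours, for illustration):

```c
#include <assert.h>

/* Mutation score = 100 * D / (N - E)
   dead       -> D, number of mutants killed by the test suite
   total      -> N, total number of mutants generated
   equivalent -> E, number of equivalent mutants */
double mutation_score(int dead, int total, int equivalent)
{
    return 100.0 * dead / (total - equivalent);
}
```

For example, 9 dead mutants out of 10 total, with 1 equivalent mutant, give 100 × 9 / (10 − 1) = 100%, a mutation-adequate suite.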
Mutation Testing
Primary mutants:
• Let us take an example of a C program fragment:

if (a > b)
    x = x + y;
else
    x = y;
printf("%d", x);
...

We can consider the following mutants for the above example:
• M1: x = x - y;
• M2: x = x / y;
• M3: x = x + 1;
• M4: printf("%d", y);
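To show how test data kills mutant M1, the fragment and the mutant can be wrapped as functions (a sketch; the wrapper names and the sample inputs are our assumptions, not the book's). The input (a=3, b=1, x=2, y=5) takes the if-branch in both versions and yields 7 for the original but -3 for M1, so it kills M1; an input with a <= b takes the else-branch in both versions and cannot kill M1:

```c
#include <assert.h>

/* Original fragment from the slide, wrapped as a function. */
int original(int a, int b, int x, int y)
{
    if (a > b)
        x = x + y;
    else
        x = y;
    return x;   /* the value that printf("%d", x) would display */
}

/* Mutant M1: the statement x = x + y is mutated to x = x - y. */
int mutant_m1(int a, int b, int x, int y)
{
    if (a > b)
        x = x - y;
    else
        x = y;
    return x;
}
```

A test case that never reaches the mutated statement can never kill the mutant, which is why test data quality matters.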
Mutation Testing
Secondary mutants:
Multiple levels of mutation are applied to the initial program.
Example program:
if (a < b)
    c = a;
A mutant of this code may be:
if (a == b)
    c = a + 1;
Mutation Testing Process
• Step 1: Begin with a program P and a set of test cases T known to be
correct.
• Step 2: Run each test case in T against the program P. If a test case
fails (the output is incorrect), P must be fixed and the process
restarted; otherwise, go to Step 3.
• Step 3: Create a set of mutants {Pi}, each differing from P by a
simple, syntactically correct modification of P.
Mutation Testing Process
• Step 4: Execute each test case in T against each mutant Pi.
– If the output differs, the mutant Pi is considered incorrect and is
said to be killed by the test case.
– If Pi produces exactly the same results as P, then either P and Pi
are equivalent, or Pi is killable and new test cases must be created
to kill it.
Mutation Testing Process
• Step 5: Calculate the mutation score for the set of test cases T:
Mutation score = 100 × D / (N − E)
• Step 6: If the mutation adequacy of T estimated in Step 5 is not
sufficiently high, then design a new test case that distinguishes Pi
from P, add the new test case to T, and go to Step 2.
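Steps 2 through 5 can be sketched as a small driver. The program P, the mutants, and the test suite below are illustrative only. Note that M1 (> mutated to >=) is an equivalent mutant: when a == b, both branches return the same value, so no test case can kill it. With D = 1 dead mutant, N = 2, and E = 1, the Step 5 score is 100 × 1 / (2 − 1) = 100%:

```c
#include <assert.h>

/* Step 1: a program P (returns the larger of a and b) and a test suite T. */
static int P(int a, int b)  { return a > b  ? a : b; }

/* Step 3: mutants, each a single syntactic change to P.
   M1 (> mutated to >=) is equivalent: when a == b both branches
   yield the same value, so no test case can ever kill it. */
static int M1(int a, int b) { return a >= b ? a : b; }
static int M2(int a, int b) { return a <  b ? a : b; }

typedef int (*version_fn)(int, int);

/* Step 4: run every test case against every mutant; a mutant whose
   output differs from P's on some test case is killed. Returns D,
   the number of dead mutants. */
int count_dead_mutants(void)
{
    version_fn mutants[2] = { M1, M2 };
    int tests[2][2] = { {1, 2}, {2, 1} };   /* test suite T */
    int dead = 0;

    for (int m = 0; m < 2; m++) {
        for (int t = 0; t < 2; t++) {
            int a = tests[t][0], b = tests[t][1];
            if (mutants[m](a, b) != P(a, b)) {
                dead++;                      /* mutant killed by this test case */
                break;
            }
        }
    }
    return dead;
}
```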
Regression Testing
Progressive Vs Regressive Testing
• Whatever test case design methods or testing techniques have been
discussed until now are referred to as progressive testing or
development testing. From verification to validation, the testing
process progresses towards the release of the product.
• To maintain the software, bug fixing may be required during any stage
of development, and therefore there is a need to check the software
again to validate that there has been no adverse effect on the already
working software.
• A system under test (SUT) is said to regress
– if a modified component fails, or
– if a new component, when used with unchanged components, causes
failures in the unchanged components by generating side-effects or
feature interactions.
Regression Testing
Therefore, now the following versions will be there in the system:
• Baseline version: the version of a component (system) that has passed
a test suite.
• Delta version: a changed version that has not passed a regression
test.
• Delta build: an executable configuration of the SUT that contains all
the delta and baseline components.
Thus, it can be said that most test cases begin as progressive test
cases and eventually become regression test cases.
Regression testing is not another testing activity. Rather, it is the
re-execution of some or all of the already developed test cases.
Regression Testing
• Regression testing is defined as a type of software testing performed
to confirm that a recent program or code change has not adversely
affected existing features.
• Regression testing is nothing but a full or partial selection of
already executed test cases which are re-executed to ensure that
existing functionalities work fine.
• This testing is done to make sure that new code changes do not have
side effects on the existing functionalities. It ensures that the old
code still works once the new code changes are made.
• Regression testing is necessary to maintain software whenever there is
an update to it.
• Regression testing increases the quality of the software.
Regression Testing produces Quality Software
The importance of regression testing is well understood for the
following reasons:
• It validates the parts of the software where changes occur.
• It validates the parts of the software which may be affected by some
changes, but are otherwise unrelated.
• It ensures proper functioning of the software, as it was before the
changes occurred.
• It enhances the quality of the software, as it reduces the risk of
high-risk bugs.
Objectives of Regression Testing
• Regression tests to check that the bug has been addressed
• Regression tests to find other related bugs
• Regression tests to check the effect on other parts of the program
Need / When to do regression testing?
• Software maintenance
– Corrective maintenance: diagnosing and fixing errors, possibly ones
found by users
– Adaptive maintenance: modifying the system to cope with changes in
the software environment (DBMS, OS)
– Perfective maintenance: implementing new or changed user
requirements which concern functional enhancements to the software
– Preventive maintenance: increasing software maintainability or
reliability to prevent problems in the future
• Rapid iterative development
• First step of integration
• Compatibility assessment and benchmarking (conformance with standards)
Regression Testing Types
Bug-fix regression:
This testing is performed after a bug has been reported and fixed. Its
goal is to repeat the test cases that exposed the problem in the first
place.
Side-effect regression (stability regression):
It involves retesting a substantial part of the product. The goal is to
prove that the change has no detrimental effect on something that was
earlier in order. It tests the overall integrity of the program, not the
success of software fixes.
Usability testing
• The testing that validates the ease of use, speed, and aesthetics of
the product from the user's point of view is called usability testing.
• Some of the characteristics of usability testing (or usability
validation) are as follows:
– Usability testing tests the product from the users' point of view.
It encompasses a range of techniques for identifying how users
actually interact with and use the product.
– Usability testing checks the product to see if it is easy to use
for the various categories of users.
– Usability testing is a process to identify discrepancies between
the user interface of the product and the human user requirements,
in terms of the pleasantness and aesthetics aspects.
Usability testing
From the above definition, it is easy to conclude that something that is
easy for one user may not be easy for another, because a product can
have different types of users. What is considered fast (in terms of,
say, response time) by one user may be slow for another, as the machines
used by them and their expectations of speed can differ. Something that
is considered beautiful by one person may look ugly to another. A view
expressed by one user of the product may not be the view of another.
Usability testing
Throughout the industry, usability testing is gaining momentum, as
sensitivity towards usability in products is increasing and it is very
difficult to sell a product that does not meet the usability
requirements of the users. There are several standards (for example,
accessibility guidelines), organizations, tools (for example, Microsoft
Magnifier), and processes that try to remove the subjectivity and
improve the objectivity of usability testing.
Usability testing
Usability testing is not only for product binaries or executables. It
also applies to documentation and other deliverables that are shipped
along with a product. The release media should also be verified for
usability. Let us take the example of a typical AUTORUN script that
automatically brings up the product setup when the release media is
inserted in the machine. Sometimes this script is written for a
particular operating system version and may not get auto-executed on a
different OS version. Even though the user can bring up the setup by
clicking on the setup executable manually, this extra click (and the
fact that the product is not automatically installed) may be considered
an irritant by the person performing the installation.
Who performs Usability testing
Generally, the people best suited to perform usability testing are
typical representatives of the actual user segments who would be using
the product, so that the typical usage patterns can be captured, and
people who are new to the product, so that they can start without any
bias and are able to identify usability problems. A person who has used
the product several times may not be able to see the usability problems
in the product, as he or she would have "got used" to the product's
(potentially inappropriate) usability. Hence, a part of the team
performing usability testing is selected from representatives outside
the testing team. Inviting customer-facing teams (for example, customer
support and product marketing), who know what the customers want and
their expectations, will increase the effectiveness of usability
testing.
Deliverables /Usability testing
A right approach for usability is to test every artifact that impacts
users, such as product binaries, documentation, messages, and media,
covering usage patterns through both graphical and command user
interfaces, as applicable.
Usability testing
Usability should not be confused with the graphical user interface
(GUI). Usability also applies to non-GUI interfaces such as command line
interfaces (CLIs). A large number of Unix/Linux users find CLIs more
usable than GUIs. The SQL command line is another example of a CLI, and
is found more usable by database users. Hence, usability testing should
also consider CLIs and other interfaces that are used by the users.
WHEN TO DO USABILITY TESTING?
The most appropriate way of ensuring usability is by performing
usability testing in two phases. The first is design validation, and the
second is usability testing done as a part of the component and
integration testing phases of a test cycle. When planning for testing,
the usability requirements should be planned in parallel, upfront in the
development cycle, similar to any other type of testing. Generally,
however, usability is an ignored subject (or at least given less
priority) and is not planned and executed from the beginning of the
project. When there are two defects, one related to functionality and
the other to usability, the functionality defect is usually given
precedence. This approach is not correct, as usability defects may
demotivate users from using the software (even if it performs the
desired function), and it may mean a huge financial loss to the product
organization if users reject the product. Also, postponing usability
testing in a testing cycle can prove to be very expensive, as a large
number of usability defects may end up needing changes in design and
fixes in more than one screen, affecting different code paths. All these
situations can be avoided if usability testing is planned upfront.
WHEN TO DO USABILITY TESTING?
Usability design is verified through several means. Some of them are as
follows:
• Style sheets: Style sheets are groupings of user interface design
elements. Use of style sheets ensures consistency of design elements
across several screens, and testing the style sheet ensures that the
basic usability design is tested. Style sheets also include frames,
where each frame is considered as a separate screen by the user. Style
sheets are reviewed to check whether they force font size, color
scheme, and so on, which may affect usability.
• Screen prototypes: A screen prototype is another way to test usability
design. The screens are designed as they will be shipped to the
customers, but are not integrated with other modules of the product.
Therefore, this user interface is tested independently, without
integrating with the functionality modules. The prototype has other
user interface functions simulated, such as screen navigation, message
display, and so on. The prototype gives an idea of how exactly the
screens will look and function when the product is released. The test
team and some real-life users test this prototype, and their ideas for
improvements are incorporated in the user interface. Once this
prototype is completely tested, it is integrated with the other
modules of the product.
WHEN TO DO USABILITY TESTING?
• Paper designs: Paper design explores the earliest opportunity to
validate the usability design, much before the actual design and
coding is done for the product. The design of the screens, layout, and
menus is drawn up on paper and sent to users for feedback. The users
visualize and relate the paper design with the operations and their
sequence to get a feel for usage, and provide feedback. Usage of style
sheets requires further coding, and prototypes need binaries and
resources to verify, but paper designs do not require any other
resources. Paper designs can be sent through email or as a printout,
and feedback can be collected.
• Layout design: Style sheets ensure that a set of user interface
elements are grouped and used repeatedly together. Layout helps in
arranging different elements on the screen dynamically. It ensures the
arrangement of elements, spacing, size of fonts, pictures,
justification, and so on, on the screen. This is another aspect that
needs to be tested as part of usability design.
WHEN TO DO USABILITY TESTING?
• If an existing product is redesigned or enhanced, usability issues can
be avoided by using the existing layout, as users who are already
familiar with the product will find it more usable. Making major
usability changes to an existing product (for example, reordering the
sequence of buttons on a screen) can end up confusing users and lead
to user errors.
• In the second phase, tests are run to test the product for usability.
Prior to performing the tests, some of the actual users are selected
(who are new to the product and features) and they are asked to use
the product. Feedback is obtained from them and the issues are
resolved. Sometimes it can be difficult to get the real users of the
product for usability testing. In such a case, representatives of the
users can be selected from teams outside the product development and
testing teams, for instance, from the support, marketing, and sales
teams. When to do usability testing also depends on the type of
product that is being developed.
QUALITY FACTORS FOR USABILITY
Some quality factors are very important when performing usability
testing. As explained earlier, usability is subjective and not all
requirements for usability can be documented clearly. However, focusing
on the quality factors given below helps in improving objectivity in
usability testing.
Comprehensibility: The product should have a simple and logical
structure of features and documentation. They should be grouped on the
basis of user scenarios and usage. The most frequent operations that are
performed early in a scenario should be presented first, using the user
interfaces. When features and components are grouped in a product, the
grouping should be based on user terminology, not on technology or
implementation.
Consistency: A product needs to be consistent with any applicable
standards, platform look-and-feel, base infrastructure, and earlier
versions of the same product. Also, if there are multiple products from
the same company, it would be worthwhile to have some consistency in the
look-and-feel of these multiple products. Following the same standards
for usability helps in meeting the consistency aspect of usability.
QUALITY FACTORS FOR USABILITY
Navigation: This helps in determining how easy it is to select the
different operations of the product. An option that is buried very deep
requires the user to travel through multiple screens or menu options to
perform the operation. The number of mouse clicks or menu navigations
required to perform an operation should be minimized to improve
usability. When users get stuck or lost, there should be an easy option
to abort or to go back to the previous screen or the main menu, so that
the user can try a different route.
Responsiveness: How fast the product responds to a user request is
another important aspect of usability. This should not be confused with
performance testing. Screen navigations and visual displays should be
almost immediate after the user selects an option, or else they could
give the user the impression that there is no progress and cause him or
her to keep retrying the operation. Whenever the product is processing
some information, the visual display should indicate the progress and
also the amount of time left, so that users can wait patiently till the
operation is completed. Adequate dialogs and popups to guide the users
also improve usability.
USABILITY TESTING : AESTHETICS TESTING
Aesthetics testing: Another important aspect of usability is making the
product "beautiful." Performing aesthetics testing helps in improving
usability further. This testing is important, as many aesthetics-related
problems in products from many organizations are ignored on the grounds
that they are not functional defects. All the aesthetic problems in the
product are generally mapped to a defect classification called
"cosmetic," which is of low priority. Having a separate cycle of testing
focusing on aesthetics helps in setting expectations and also in
focusing on improving the look and feel of the user interfaces.
Aesthetics is not in the external look alone. It is in all aspects such
as messages, screens, colors, and images. A pleasant look for menus,
pleasing colors, nice icons, and so on can improve aesthetics.
Accessibility testing
Accessibility testing is a subset of usability testing, performed to
ensure that the application being tested is usable by people with
disabilities such as hearing impairment or color blindness, by the
elderly, and by other disadvantaged groups.
• People with disabilities use assistive technology, which helps them in
operating a software product.
Accessibility testing involves testing these alternative methods of
using the product and testing the product along with accessibility
tools. Accessibility is a subset of usability and should be included as
part of usability test planning.
Verifying the product's usability for physically challenged users is
called accessibility testing.
Accessibility testing
Accessibility testing may be challenging for testers because they are
unfamiliar with disabilities. It is better to work with disabled people
who have specific needs in order to understand their challenges.
Accessibility to the product can be provided by two means:
• Making use of the accessibility features provided by the underlying
infrastructure (for example, the operating system), called basic
accessibility, and
• Providing accessibility in the product through standards and
guidelines, called product accessibility.
Accessibility testing
Basic accessibility: Basic accessibility is provided by the hardware and
operating system. All the input and output devices of the computer and
their accessibility options are categorized under basic accessibility.
Examples:
• Keyboard accessibility
• Screen accessibility
• Speech recognition software: converts the spoken word to text, which
serves as input to the computer.
• Screen reader software: used to read out the text that is displayed on
the screen.
• Screen magnification software: used to enlarge the display and make
reading easy for vision-impaired users.
• Special keyboards made for easy typing by users who have motion
control difficulties.
Accessibility testing
Product accessibility: A good understanding of the basic accessibility
features is needed while providing accessibility in the product. A
product should do everything possible to ensure that the basic
accessibility features are utilized by it. For example, providing
detailed text equivalents for multimedia files ensures that the captions
feature is utilized by the product.
Accessibility testing
Sample requirement #1: Text equivalents have to be provided for audio,
video, and picture images.
Sample requirement #2: Documents and fields should be organized so that
they can be read without requiring a particular screen resolution, using
templates (known as style sheets).
Sample requirement #3: User interfaces should be designed so that all
information conveyed with color is also available without color.
Sample requirement #4: Reduce the flicker rate and the speed of moving
text; avoid flashes and blinking text.
Sample requirement #5: Reduce the physical movement required of users
when designing the interface, and allow adequate time for user
responses.
Accessibility testing
The following points need to be checked to ensure the application can be
used by all users. This checklist is used for signing off accessibility
testing.
• Whether the application provides keyboard equivalents for all mouse
operations and windows.
• Whether instructions are provided as a part of the user documentation
or manual, and whether it is easy to understand and operate the
application using the documentation.
• Whether tabs are ordered logically to ensure smooth navigation.
• Whether shortcut keys are provided for menus.
• Whether the application supports all operating systems.
• Whether the color scheme of the application is flexible for all users.
• Whether images and icons are used appropriately, so that they are
easily understood by the end users.
Accessibility testing
• Whether the application has audio alerts.
• Whether the user can adjust or disable flashing, rotating, or moving
displays.
• Whether color coding is never used as the only means of conveying
information or indicating an action.
• Whether highlighting is viewable with inverted colors; test color in
the application by changing the contrast ratio.
• Whether audio and video content can be properly heard by people with
disabilities; test all multimedia pages with no speakers for websites.
• Whether training is provided for users with disabilities to enable
them to become familiar with the software or application.
References:
1. Software Testing Principles and Practices, Naresh Chauhan, Second
edition, Oxford Higher Education.
2. Software Testing: Principles and Practices, Srinivasan Desikan and
Gopalaswamy Ramesh.
Module 3:
Testing Metrics for Monitoring and Controlling the Testing Process
Software Metrics
Metrics can be defined as "standards of measurement."
Software metrics are used to measure the quality of a project. Simply
put, a metric is a unit used for describing an attribute; a metric is a
scale for measurement. For example, "kilogram" is a metric for measuring
the attribute "weight." Similarly, in software, a metric answers
questions such as "How many issues are found in a thousand lines of
code?"
Test metric examples:
• How many defects exist within the module?
• How many test cases are executed per person?
• What is the test coverage percentage?
Need of Software Measurement
• Understanding
• Control
• Improvement
Software Metrics
Product metrics: measures of the software product at any stage of its
development, from requirements to installed system. Examples:
– Complexity of the software design and code
– Size of the final program
– Number of pages of documentation produced
Process metrics: measures of the software development process. Examples:
– Overall development time
– Type of methodology used
– Average level of experience of the programming staff
Measurement Objectives for Testing
The objectives for assessing a test process should be well defined.
GQM (Goal Question Metric) framework:
• List the major goals of the test process.
• Derive from each goal the questions that must be answered to determine
whether the goals are being met.
• Decide what must be measured in order to answer the questions
adequately.
Attributes and Corresponding Metrics in Software Testing
Attributes: Progress
• Scope of testing: the overall amount of work involved (testing
effort).
• Tracking test progress: schedule, budget, resources.
– Major testing milestones: measurements need to be available for
comparing the planned and actual progress towards achieving the
testing milestones.
– NPT
– Test case escapes (TCE)
– Planned versus actual execution (PAE) rate
– Execution status of test (EST) cases (failed, passed, blocked,
invalid, and untested)
• Defect backlog: the number of defects that are unresolved or
outstanding. If the backlog increases, the bugs should be prioritized
according to their criticality.
• Staff productivity: time spent in test planning and design, and the
number of test cases developed (useful to estimate the cost and
duration of testing activities).
Attributes: Progress
Suspension criteria: describe the circumstances under which testing
would stop temporarily, for example, incomplete tasks on the critical
path, a large volume of bugs, critical bugs, or an incomplete test
environment.
Exit criteria: indicate the conditions that move the testing activities
forward from one level to the next. The exit criteria should also be
standardized. If there are any open faults, then the software
development manager, along with the project manager and the members of
the change control board, decides whether to fix the faults, defer them
to the next release, or take the risk of shipping the product to the
customer with the faults. Example exit-criteria measures: the rate of
fault discovery in regression tests, the frequency of failing fault
fixes, and the fault detection rate.
Attributes: Cost

Testing cost estimation: The metrics supporting the budget estimation of


testing need to be established early.
Duration of testing: There is also a need to estimate the testing schedule
during test planning. As a part of the testing schedule, the time required to
develop a test plan is also estimated. A testing schedule contains the
timelines of all testing milestones.
Resource requirements: Test planning activities have to estimate the number
of testers required for all testing activities with their assigned duties planned
in the test plan.
Training needs of testing groups and tool requirements: Since the test
planning also identifies the training needs of testing groups and their tool
requirements, we need to have metrics for training and tool requirements.
Cost-effectiveness of automated tools: When a tool is selected to be used for
testing, it is beneficial to evaluate its cost-effectiveness. The
cost-effectiveness is measured by taking into account the cost of tool evaluation,
tool training, tool acquisition, and tool update and maintenance.
Attributes: Quality
Effectiveness of Test Cases: The test cases produced should be effective so
that they uncover more and more bugs.
• Number of faults found in testing.
• Number of failures observed by the customer which can be used as a
reflection of the effectiveness of the test cases.
Defect age: A metric that can be used to measure test effectiveness, which
assigns a numerical value to a fault depending on the phase in which it is
discovered.
Defect spoilage: Defect age is used in another metric called defect spoilage to
measure the effectiveness of defect removal activities.
Spoilage = Sum of (Number of Defects × Defect Age) / Total Number of Defects

Spoilage is more suitable for measuring the long-term trend of test
effectiveness. Generally, low values of defect spoilage mean more effective
defect discovery processes. It is an in-process metric that is applicable when
the testing is actually done.
Spoilage Metric
• Defects are injected and removed at different phases of a software
development cycle
• The cost of each defect injected in phase X and removed in phase Y
increases with the increase in the distance between X and Y
• An effective testing method would find defects earlier than a less effective
testing method.
• A useful measure of test effectiveness is defect age, called PhAge.

Table: Scale for defect age
Spoilage Metric
Table: Defect injection phase versus discovery phase on project Boomerang

A new metric called spoilage is defined as

Spoilage = Σ (Number of Defects × Defect Age) / Total Number of Defects
Spoilage Metric
Table: Number of defects weighted by defect age on project Boomerang
Spoilage Metric
• The spoilage value for the Boomerang test
project is 2.2
• A spoilage value close to 1 is an indication of a
more effective defect discovery process
• This metric is useful in measuring the long-term
trend of test effectiveness in an
organization
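The spoilage calculation described above can be sketched in a few lines of code. The phase names and defect counts below are purely illustrative (they are not the Boomerang data from the tables):

```python
# Phases in development order; defect age (PhAge) is the distance between
# the phase where a defect was injected and the phase where it was found.
PHASES = ["Requirements", "Design", "Coding", "Unit Test", "System Test"]

# Hypothetical counts: (injected phase, discovered phase) -> number of defects.
defects = {
    ("Requirements", "Design"): 4,
    ("Requirements", "System Test"): 2,
    ("Design", "Coding"): 5,
    ("Coding", "Unit Test"): 10,
    ("Coding", "System Test"): 3,
}

def spoilage(defects):
    """Spoilage = sum(number of defects x defect age) / total defects."""
    total = sum(defects.values())
    weighted = sum(
        count * (PHASES.index(found) - PHASES.index(injected))
        for (injected, found), count in defects.items()
    )
    return weighted / total
```

For this sample data the spoilage works out to 1.375; a value close to 1 would indicate that most defects are caught in the phase immediately after the one where they were injected.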
Attributes: Quality
Effectiveness of Test Cases:
• Defect Removal Efficiency (DRE) metric defined as follows:

DRE = Number of Defects Found in Testing / (Number of Defects Found in Testing + Number of Defects Not Found)

There are potential issues that must be taken into account while
measuring the defect-removal efficiency. For example, the severity of
bugs and an estimate of time by which the customers would have
discovered most of the failures are to be established. This metric is
more helpful in establishing the test effectiveness in the long run as
compared to the current project.
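As a minimal sketch, the DRE formula above translates directly into code; "defects not found" is typically approximated by failures later reported by customers:

```python
def defect_removal_efficiency(found_in_testing, not_found):
    """DRE = defects found in testing / (defects found in testing +
    defects not found, i.e. those that escaped to the customer)."""
    return found_in_testing / (found_in_testing + not_found)
```

For example, 90 defects found in testing against 10 that escaped gives a DRE of 0.9, i.e. 90% of the known defects were removed before release.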
Attributes: Quality
Measuring Test Completeness: Refers to how much of the code and
requirements are covered by the test set. The advantage of
measuring test coverage is that it provides the ability to design
new test cases and improve existing ones.
The relationship between code coverage and the number of test
cases:
C(x) = 1 - e^(-(p/N)*x)
where C(x) is the coverage after executing x test cases, N is the
number of blocks in the program, and p is the average number of
blocks covered by a test case during the function test.
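The coverage-growth relationship above can be evaluated numerically; this is a small sketch of the stated formula, with N and p chosen arbitrarily for illustration:

```python
import math

def expected_coverage(x, N, p):
    """C(x) = 1 - e^(-(p/N)*x): expected fraction of the N blocks covered
    after executing x test cases, where a test covers p blocks on average."""
    return 1.0 - math.exp(-(p / N) * x)
```

Note the diminishing returns the formula predicts: with N = 100 blocks and p = 5 blocks per test, coverage is already about 92% after 50 tests, but never quite reaches 100%.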
• At the system testing level, we should measure whether all the
features of the product are being tested or not. A common
requirements coverage metric is the percentage of requirements
covered by at least one test. A requirements traceability matrix
can be used for this purpose.
Attributes: Quality
Effectiveness of smoke tests:
• establish confidence over stability of a system
SMOKE TESTING, also known as “Build Verification Testing”, is a type of
software testing that comprises a non-exhaustive set of tests aimed at
ensuring that the most important functions work. The result of this testing is
used to decide if a build is stable enough to proceed with further testing.
The tests that are included in smoke testing cover the basic operations that
are most frequently used, e.g. logging in, addition, and deletion of records.
Smoke tests need to be a subset of the regression testing suite.
Quality of Test Plan: The quality of a test plan is measured with respect to
the probable number of errors.
– To evaluate a test plan, Berger describes a multi-dimensional qualitative method
using rubrics
1. Theory of objective 2. Theory of scope 3. Theory of coverage 4. Theory
of risk 5. Theory of data 6. Theory of originality 7. Theory of
communication 8. Theory of usefulness 9. Theory of completeness
10. Theory of insightfulness
Attributes : Size
• Estimation of test cases: To fully exercise a system and to estimate its resources,
an initial estimate of the number of test cases is required.
• Number of regression tests: Regression testing is performed on a modified
program that establishes confidence that the changes and fixes against reported
faults are correct and have not affected the unchanged portions of the program.
However, the number of test cases in regression testing becomes too large to
test. Therefore, careful measures are required to select the test cases effectively.
• Some of the measurements to monitor regression testing are:
• Number of test cases re-used
• Number of test cases added to the tool repository or test database
• Number of test cases rerun when changes are made to the software
• Number of planned regression tests executed
• Number of planned regression tests executed and passed
• Tests to automate: Tasks that are repetitive in nature and tedious to perform
manually are prime candidates for an automated tool. The categories of tests that
come under repetitive tasks are: Regression tests, Smoke tests, Load tests,
Performance tests
Architectural Design Metric used for Testing
Card and Glass introduced three types of software design complexity that can also be used in
testing.
Structural Complexity
S(m) = fout(m)^2
where S is the structural complexity of a module m
and fout(m) is the fan-out of module m.
This metric gives us the number of stubs required for unit testing of module m.
Data Complexity
D(m) = v(m) / [fout(m) + 1]
where v(m) is the number of input and output variables that are passed to and from module m.
This metric measures the complexity in the internal interface for a module m and indicates the
probability of errors in module m.

System Complexity
SC(m) = S(m) + D(m)
It is defined as the sum of structural and data complexity.
Overall architectural complexity of system is the sum total of system complexities of all the
modules.
• The testing effort of a module is directly proportional to its system complexity; it will be
difficult to unit test a module with higher system complexity.
• The effort required for integration testing increases with the architectural complexity of the
system.
Information Flow Metrics used for Testing
1. Local direct flow exists if
– a module invokes a second module and passes information to it.
– the invoked module returns a result to the caller.
2. Local indirect flow exists if the invoked module returns information
that is subsequently passed to a second invoked module.
3. Global flow exists if information flows from one module to another
via a global data structure.
The two particular attributes of the information flow can be described as
follows:
(i) Fan-in of a module m is the number of local flows that terminates at
m, plus the number of data structures from which information is
retrieved by m.
(ii) Fan-out of a module m is the number of local flows that emanate
from m, plus the number of data structures that are updated by m.
Information Flow Metrics used for Testing:
Henry & Kafura Design Metric
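The fan-in and fan-out definitions above can be computed from a call graph. The call graph below is hypothetical, and it covers only local flows (the data-structure reads and updates in the full definitions are omitted); the Henry & Kafura formula used here, complexity = length × (fan-in × fan-out)², is the standard published form, since the slide itself shows only a figure:

```python
# Hypothetical call graph: module -> modules it invokes (local direct flows).
CALLS = {
    "main": ["parse", "report"],
    "parse": ["report"],
    "report": [],
}

def fan_out(m):
    """Local flows emanating from module m."""
    return len(CALLS[m])

def fan_in(m):
    """Local flows terminating at module m."""
    return sum(1 for callees in CALLS.values() if m in callees)

def henry_kafura(m, length):
    """Henry & Kafura: complexity(m) = length(m) x (fan-in x fan-out)^2."""
    return length * (fan_in(m) * fan_out(m)) ** 2
```

Here "report" has fan-in 2 (called from both "main" and "parse"), and a module with both high fan-in and high fan-out scores a sharply higher complexity because of the squared term.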
Cyclomatic Complexity Measures for Testing
• The cyclomatic number measures the number of linearly independent
paths through a flow graph; it can be used as the minimum
number of test cases.
• McCabe has suggested that, ideally, the cyclomatic number should be
less than or equal to 10. This number provides a quantitative measure
of testing difficulty. If the cyclomatic number is more than 10, then
testing effort increases because:
– the number of errors increases
– the time required to find and correct the errors increases
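A small sketch of this measure, using McCabe's standard formula V(G) = E - N + 2P (edges, nodes, and connected components of the control-flow graph), which the slide assumes but does not spell out:

```python
def cyclomatic_number(edges, nodes, components=1):
    """McCabe's V(G) = E - N + 2P for a control-flow graph; also the
    minimum number of test cases for basis-path coverage."""
    return edges - nodes + 2 * components

def needs_extra_testing_effort(v_g, threshold=10):
    """McCabe's guideline: V(G) above 10 signals harder-to-test code."""
    return v_g > threshold
```

For instance, a flow graph with 9 edges and 7 nodes gives V(G) = 4, well under the threshold of 10.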
Function Point Metrics for Testing
The function point (FP) metric is used effectively for measuring the size
of a software system.
Function-based metrics can be used as a predictor for the overall testing
effort.
Various project-level characteristics (e.g. testing effort and time, errors
uncovered, number of test cases produced) of past projects can be
collected and correlated with the number of FP produced by a project
team.
The team can then project the expected values of these characteristics
for the current project.
Listed below are a few FP measures:
1. Number of hours required for testing per FP.
2. Number of FPs tested per person-month.
3. Total cost of testing per FP.
Function Point Metrics for Testing
Defect density measures the number of defects identified across one or
more phases of the development project lifecycle and compares that
value with the total size of the system.

Defect density = Number of defects (by phase or in total) / Total number of FPs

Test case coverage measures the number of test cases that are
necessary to adequately support thorough testing of a development
project.

Test case coverage = Number of test cases / Total number of FPs

• Number of Test Cases = (Function Points)^1.2
• Number of Acceptance Test Cases = (Function Points) × 1.2
Testing Progress Metrics
• Everyone in the testing team wants to know when the testing should
stop.
• To know when the testing is complete, we need to track the
execution of testing. This is achieved by collecting data or metrics,
showing the progress of testing.
• Using these progress metrics, the release date of the project can be
determined.
• These metrics are collected iteratively during the stages of test
execution cycle.
Testing Progress Metrics
• Test Procedure Execution Status:
Test Proc. Exec. Status = Number of executed test cases / Total number of test cases
• Defect Aging: Turnaround time for a bug to be corrected.
Defect aging = Closing date of bug - Date when bug was opened
• Defect Fix Time to Retest : This metric provides a measure that
the test team is retesting all the modifications corresponding to bugs at an
adequate rate.
Defect Fix Time to Retest = Date of retesting the bug - Date of fixing the
bug and releasing it in a new build
• Defect Trend Analysis: It is defined as the trend in the number of
defects found as the testing life cycle progresses.
– Number of defects of each type detected in unit test per hour
– Number of defects of each type detected in integration test per hour
– Severity level for all defects
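The two date-based metrics above reduce to simple date arithmetic; the dates in the assertions are hypothetical examples:

```python
from datetime import date

def defect_aging(opened_on, closed_on):
    """Turnaround time in days from opening a bug to closing it."""
    return (closed_on - opened_on).days

def fix_time_to_retest(fix_released_on, retested_on):
    """Days elapsed between a fix being released in a new build and its retest."""
    return (retested_on - fix_released_on).days
```

Tracking these per bug lets the team spot fixes that sit unverified for too long, as well as slow overall turnaround.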
Testing Progress Metrics
• Recurrence Ratio: Indicates the quality of bug-fixes. The quality
of bug-fixes is good if it does not introduce any new bug in the
previous working functionality of the software and the bug does not
re-occur.
Number of bugs remaining per fix.
• Defect Density:
1. Defect Density = Total number of defects found for a requirement
/Total number of test cases executed for that requirement
2. Pre- ship defect density/Post-ship defect density
• Coverage Measures: Helps in identifying the work to be done.
White-box testing
Degree of statement, branch, data flow, and basis path coverage
Actual degree of coverage/Planned degree of coverage
Black-box Testing
Number of features or ECs actually covered / Total number of features or
ECs (equivalence classes)
Testing Progress Metrics
Tester Productivity:
1. Time spent in test planning
2. Time spent in test case design
3. Time spent in test execution
4. Time spent in test reporting
5. Number of test cases developed
6. Number of test cases executed
Testing Progress Metrics
• Budget and Resource Monitoring Measures:
Earned value tracking
For the planned earned values, we need the following measurement
data :
1. Total estimated time or cost for overall testing effort
2. Estimated time or cost for each testing activity
3. Actual time or cost for each testing activity
Estimated time or cost for testing activity / Actual time or cost of
testing activity
• Test Case Effectiveness Metric:
TCE = 100 × (Number of defects found by the test cases / Total number of defects)
Reference:
Software Testing Principles and Practices,
Naresh Chauhan, Second edition, Oxford
Higher Education