STQA ISE
Testing Methodology
Reference: Software Testing Principles and Practices, Naresh Chauhan, Oxford University
Pooja Malhotra 1
Evolution of Software Testing
• In the early days of software development, software testing was
considered only a debugging process for removing errors after the
development of software.
• By 1970, the term "software engineering" was in common use, but
software testing was only just beginning at that time.
• In 1978, G.J. Myers realized the need to discuss the techniques of
software testing as a separate subject. He wrote the book "The Art
of Software Testing", which is a classic work on software testing.
• Myers discussed the psychology of testing and emphasized that
testing should be done with the mindset of finding errors, not of
demonstrating that errors are not present.
• By 1980, software professionals and organizations started talking
about quality in software. Organizations formed quality assurance
teams for projects, which take care of all the testing activities for a
project right from the beginning.
Evolution of Software Testing
• In the 1990s, testing tools finally came into their own. There was a
flood of tools, which are vital to adequate testing of software
systems. However, tools do not solve all problems and cannot
replace a testing process.
• Gelperin and Hetzel [79] characterized the growth of software
testing over time. Based on this, we can divide the evolution of
software testing into the following phases:
Psychology of Software Testing:
• Wrong mindset: testing is the process of demonstrating that there are no errors.
• Right mindset (Myers): testing is the process of executing a program with the intent of finding errors.
The Quality Revolution
The Shewhart cycle
Testing produces Reliability and Quality
Quality leads to customer satisfaction
Testing controls risk factors
Software Testing Definitions
Model for Software Testing
Effective Software Testing vs Exhaustive Software Testing
Effective Software Testing vs Exhaustive Software Testing
• Valid Inputs
• Invalid Inputs
• Edited Inputs
• Race Conditions
Software Testing as a Process
Software Testing Terminology
• Failure
The inability of a system or component to perform a required
function according to its specification.
Software Testing Terminology
Error
Whenever a member of the development team makes a mistake in
any phase of the SDLC, errors are produced. It might be a typographical
error, a misreading of a specification, a misunderstanding of what a
subroutine does, and so on. Thus, error is a very general term for
human mistakes.
Software Testing Terminology
Module A()
{
   ---
   while (a > n+1);   /* note the stray semicolon: it gives the loop an empty body */
   {
      ---
      print("The value of x is", x);
   }
   ---
}
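The error/fault/failure chain can be made concrete with a short sketch (not from the slides; the function and its bug are made up for illustration). The developer's mistake (error) leaves a wrong loop bound in the code (fault), which produces a wrong result (failure) only for some inputs:

```python
def average(values):
    """Buggy unit: the developer's mistaken loop bound (the fault)
    sums only the first len(values) - 1 elements."""
    total = 0
    for i in range(len(values) - 1):   # fault: should be range(len(values))
        total += values[i]
    return total / len(values)

# The fault stays hidden when the last element happens to be 0:
print(average([3, 3, 0]))   # 2.0, which is also the correct answer here

# With another input the fault manifests as a failure:
print(average([3, 3, 3]))   # 2.0, but the correct answer is 3.0
```

This is why a single fault may survive many test runs: a failure is only observed when an input exercises the faulty path with data that exposes it.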
Software Testing Terminology
• Testware
The documents created during the testing activities are known as
Testware. (Test Plan, test specifications, test case design , test
reports etc.)
• Incident
The symptom(s) associated with a failure that alerts the user to the
occurrence of a failure.
• Test Oracle
A mechanism to judge the success or failure of a test (the
correctness of the system for some test), e.g., comparing actual
results with expected results by hand.
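A test oracle can also be programmatic. The sketch below is illustrative (the unit under test, `sort_numbers`, is hypothetical); Python's built-in `sorted()` plays the role of the oracle that supplies expected results:

```python
def sort_numbers(xs):
    """Hypothetical unit under test."""
    return sorted(xs)

def oracle_check(cases):
    """Judge each test by comparing the actual output
    with the expected output supplied by the oracle."""
    verdicts = []
    for case in cases:
        actual = sort_numbers(case)
        expected = sorted(case)          # the oracle
        verdicts.append("PASS" if actual == expected else "FAIL")
    return verdicts

print(oracle_check([[3, 1, 2], [], [5, 5, 1]]))
```

In practice the oracle is the hard part: a specification, a reference implementation, or, as the slide says, a human comparing results by hand.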
Life Cycle of a Bug
States of a Bug
Bugs affect the Economics of Software Testing
Software Testing Myths
Bug Classification based on Criticality
• Critical Bugs
This type of bug has the worst effect on the functioning of the
software, such that it stops or hangs its normal functioning.
• Major Bugs
This type of bug does not stop the functioning of the software, but
it causes a functionality to fail to meet its requirements as
expected.
• Medium Bugs
Medium bugs are less critical than critical and major bugs (e.g.,
output not according to standards: redundant or truncated output).
• Minor Bugs
This type of bug does not affect the functioning of the software
(e.g., a typographical error or a misaligned printout).
Bug Classification based on SDLC
Testing Principles
Software Testing Life Cycle (STLC): a well-defined series of steps to ensure successful and effective testing.
Software Testing Life Cycle (STLC): a well-defined series of steps to ensure successful and effective testing.
Test Planning
Test Planning
The major output of test planning is the test plan document. Test
plans are developed for each level of testing. After analysing the
issues, the following activities are performed:
• Develop a test case format.
• Develop test case plans according to every phase of SDLC.
• Identify test cases to be automated.
• Prioritize the test cases according to their importance and
criticality.
• Define areas of stress and performance testing.
• Plan the test cycles required for regression testing.
All the details specified in the test design phase are documented in
the test design specification. This document provides the input
specifications, output specifications, environmental needs, and other
procedural requirements for each test case.
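One plausible way to realize the test case format and test design specification described above is a structured record. This is a sketch; the field names are illustrative, not taken from the book:

```python
# A hypothetical test case format covering the planning activities above:
# priority, automation flag, input/output specs, environment, procedure.
test_case = {
    "id": "TC-001",
    "priority": "critical",      # prioritized by importance and criticality
    "automated": True,           # identified for automation
    "input_spec": {"username": "alice", "password": "secret"},
    "expected_output": "login accepted",
    "environment": "staging server with test database",
    "procedure": ["open login page", "submit credentials", "check message"],
}

# A simple completeness check against the format:
required = {"id", "priority", "input_spec", "expected_output",
            "environment", "procedure"}
missing = required - set(test_case)
print("missing fields:", missing or "none")
```

Keeping test cases in a uniform machine-readable format makes the later steps (prioritization, automation, regression cycles) straightforward to script.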
Test Execution: Verification and Validation
In this phase, all test cases, both verification and validation, are executed.
• Verification test cases are started at the end of each phase of the SDLC.
• Validation test cases are started after the completion of a module.
• It is the decision of the test team to opt for automated or manual execution.
• Test results are documented in test incident reports, test logs, testing status
reports, test summary reports, etc.
Post-Execution / Test Review
Test Strategy
Development of Test Strategy
The testing strategy should start at the component level and finish at the
integration of the entire system. Thus, a test strategy includes testing the
components being built for the system, and slowly shifts towards testing the
whole system. This gives rise to two basic terms—Verification and
Validation—the basis for any type of testing. It can also be said that the
testing process is a combination of verification and validation.
Validation Activities
• Unit Testing
• Integration Testing
• System Testing
• Acceptance Testing
Testing Tactics
The ways to perform various types of testing under a specific test strategy.
• Manual Testing
• Automated Testing
Testing Tools: a resource for performing a test process.
Considerations in Developing Testing Methodologies
Verification and Validation (V & V) Activities
VERIFICATION
Verification is a set of activities that ensures correct implementation of
specific functions in a software.
Verification is to check whether the software conforms to
specifications.
• If verification is not performed at early stages, there is always a
chance of a mismatch between the required product and the delivered
product.
• Verification exposes more errors.
• Early verification decreases the cost of fixing bugs.
• Early verification enhances the quality of software.
VERIFICATION ACTIVITIES :All the verification activities are performed
in connection with the different phases of SDLC. The following
verification activities have been identified:
– Verification of Requirements and Objectives
– Verification of High-Level Design
– Verification of Low-Level Design
– Verification of Coding (Unit Verification)
Verification of Requirements
• Correctness
• Unambiguous (every requirement has only one interpretation)
• Consistent (no specification should contradict or conflict with another)
• Completeness
• Updation
• Traceability
– Backward Traceability
– Forward Traceability
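Traceability checks can be automated over a requirement-to-design mapping. A minimal sketch, with made-up identifiers: forward traceability means every requirement maps to at least one design element, backward traceability means every design element traces back to some requirement.

```python
# Hypothetical mapping from requirements to design elements.
req_to_design = {
    "R1": ["D1"],
    "R2": ["D2", "D3"],
    "R3": ["D3"],
}
design_elements = {"D1", "D2", "D3"}

# Forward traceability: each requirement is covered by some design element.
forward_ok = all(len(designs) > 0 for designs in req_to_design.values())

# Backward traceability: each design element traces to some requirement.
traced = {d for designs in req_to_design.values() for d in designs}
backward_ok = design_elements <= traced

print("forward:", forward_ok, "backward:", backward_ok)
```

The same idea underlies the traceability matrix used later when verifying that every design specification has been coded.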
Verification of High Level Design
2. The tester prepares a Function Test Plan based on the SRS. This
plan will be referenced at the time of function testing.
3. The tester also prepares an Integration Test Plan, which will be
referred to at the time of integration testing.
4. The tester verifies that all components and their interfaces are in
tune with the requirements of the user. Every requirement in the SRS
should map to the design.
Verification of Coding
• Check that every design specification in HLD and LLD has been
coded using traceability matrix.
• Examine the code against a language specification checklist.
• Verify every statement, control structure, loop, and logic
• Misunderstood or incorrect Arithmetic precedence
• Mixed mode operations
• Incorrect initialization
• Precision Inaccuracy
• Incorrect symbolic representation of an expression
• Different data types
• Improper or nonexistent loop termination
• Failure to exit
How to Verify Code
Two kinds of techniques are used to verify the coding:
(a) static testing, and (b) dynamic testing.
How to Verify Code
UNIT VERIFICATION :
Verification of coding cannot be done for the whole system at once;
rather, the system is divided into modules, and verification of coding
means verification of the code of modules by their developers. This is
also known as unit verification testing.
Listed below are the points to be considered while performing unit
verification :
• Interfaces are verified to ensure that information properly flows in and
out of the program unit under test.
• The local data structure is verified to maintain data integrity.
• Boundary conditions are checked to verify that the module is working
fine on boundaries also.
• All independent paths through the control structure are exercised to
ensure that all statements in a module have been executed at least
once.
• All error handling paths are tested.
• Developing tests that will determine whether the product satisfies the
users’ requirements, as stated in the requirement specification.
• The bugs still existing in the software after coding need to be
uncovered.
• This is the last chance to discover bugs; otherwise, they will move
into the final product released to the customer.
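Several of the checklist points above (boundary conditions, independent paths, error-handling paths) can be shown on a tiny unit. This is a generic sketch, not an example from the book; `clamp` is a made-up unit under test:

```python
def clamp(x, low, high):
    """Unit under test: limit x to the range [low, high]."""
    if low > high:
        raise ValueError("low must not exceed high")  # error-handling path
    if x < low:        # independent path 1
        return low
    if x > high:       # independent path 2
        return high
    return x           # independent path 3

# Boundary conditions: exactly at and just beyond the limits.
assert clamp(0, 0, 10) == 0
assert clamp(10, 0, 10) == 10
assert clamp(-1, 0, 10) == 0
assert clamp(11, 0, 10) == 10
assert clamp(5, 0, 10) == 5

# The error-handling path is exercised too.
try:
    clamp(5, 10, 0)
except ValueError:
    print("all unit verification checks passed")
```

Together these inputs execute every statement and every independent path of the unit at least once.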
Concept of Unit Testing
• What is a unit?
– Function
– Procedure
– Method
– Module
– Component
Unit Testing
– Testing a program unit in isolation, i.e., in a stand-alone manner.
– Objective: the unit works as expected.
Unit Testing
(Figure) The module to be tested, with test cases exercising its interface, local data structures, boundary conditions, independent paths, and error-handling paths.
Unit Testing
Drivers
A driver is a dummy module that calls the unit under test, supplying it with test inputs in place of its real caller.
Unit Validation Testing
Stubs
A stub is a piece of software that works similarly to a unit referenced
by the unit being tested, but it is much simpler than the actual unit.
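A driver and a stub can be sketched together. The names below are invented for illustration: `compute_invoice` is the unit under test, `tax_rate_stub` stands in for a not-yet-built lower-level unit, and `driver` stands in for the unit's real caller:

```python
def tax_rate_stub(region):
    """Stub: far simpler than the real tax-rate unit it replaces."""
    return 0.10                      # canned answer

def compute_invoice(amount, region, get_tax_rate):
    """Unit under test: depends on a lower-level unit via get_tax_rate."""
    return round(amount * (1 + get_tax_rate(region)), 2)

def driver():
    """Driver: calls the unit under test in place of its real caller."""
    assert compute_invoice(100.0, "EU", tax_rate_stub) == 110.0
    assert compute_invoice(0.0, "US", tax_rate_stub) == 0.0
    return "driver: all checks passed"

print(driver())
```

The stub isolates the unit from modules it calls; the driver isolates it from modules that call it, so the unit can be tested stand-alone.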
Integration Testing
Decomposition-based Integration Testing
Decomposition-based Integration Testing
Incremental Integration Testing
Practical Approach for Integration Testing
Call Graph Based Integration
If we refine the functional decomposition tree into a module calling graph,
we move towards behavioural testing at the integration level.
This can be done with the help of a call graph.
A call graph is a directed graph wherein nodes are modules or units, and a
directed edge from one node to another means the first module calls the
second. The call graph can be captured in matrix form, which is known as
an adjacency matrix.
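A call graph and its adjacency matrix are easy to sketch in code. The module names below are made up; the point is that in pair-wise integration (next slide) the number of test sessions equals the number of edges, i.e. the number of 1s in the matrix:

```python
# Hypothetical call graph: module -> list of modules it calls.
call_graph = {
    "main":      ["parse", "compute", "report"],
    "parse":     ["read_file"],
    "compute":   ["read_file", "stats"],
    "report":    [],
    "read_file": [],
    "stats":     [],
}

modules = list(call_graph)
# Adjacency matrix: entry is 1 where the row module calls the column module.
adj = [[1 if callee in call_graph[caller] else 0 for callee in modules]
       for caller in modules]

edges = sum(sum(row) for row in adj)
print("pair-wise integration sessions =", edges)
```

For this made-up graph there are 6 edges, hence 6 pair-wise sessions; the slide's figure has 19 edges, hence 19 sessions.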
Pair-wise Integration
Number of test sessions = number of edges in the call graph = 19
Neighborhood Integration
Path Based Integration
The passing of control from one unit to another is necessary for
integration testing. There should also be information within the module
regarding the instructions that call the module or return to it. This must
be tested at the time of integration. It can be done with the help of
path-based integration, defined by Paul C. Jorgensen.
Source Node: an instruction in the module at which execution starts or
resumes. The nodes to which control is transferred after calling the
module are also source nodes.
Path Based Integration
Message: When control is transferred from one unit to another, the
programming-language mechanism used to do this is known as a
message.
MM-Path Graph:
It can be defined as an extended flow graph where nodes are module
execution paths (MEPs) and edges are messages. It returns from the last
called unit to the first unit where the call was made.
In this graph, messages are highlighted with thick lines.
Path Based Integration
Path Based Integration
MEP Graph
Function Testing
Categories of System Tests
Recovery Testing
Security Testing
Security tests are designed to verify that the system meets the security
requirements. Security may include controlling access to data, encrypting data
in communication, ensuring secrecy of stored data, auditing security events, etc
• Confidentiality: the requirement that data and processes be
protected from unauthorized disclosure.
• Integrity: the requirement that data and processes be protected from
unauthorized modification.
• Availability: the requirement that data and processes be protected from
denial of service to authorized users.
• Authentication: a measure designed to establish the validity of a
transmission, message, or originator. It allows the receiver to have
confidence that the information it receives originates from a specific known
source.
• Authorization: the process of determining that a requester is allowed to
receive a service or perform an operation. Access control is an example of
authorization.
• Non-repudiation: a measure intended to prevent the later denial that an
action happened or that a communication took place.
Security Testing
Security Testing is the process of attempting to devise
test cases to evaluate the adequacy of protective
procedures and countermeasures.
• Security test scenarios should include negative scenarios
such as misuse and abuse of the software system.
• Security requirements should be associated with each
functional requirement. For example, the log-on requirement
in a client-server system must specify the number of retries
allowed, the action to be taken if the log-on fails, and so on.
• A software project has security issues that are global in nature and are
therefore related to the application's architecture and overall
implementation. For example, a Web application may have a global
requirement that all private customer data of any kind be stored in
encrypted form in the database.
Security Testing
– Useful types of security tests include the following:
• Verify that only authorized accesses to the system are
permitted
• Verify the correctness of both encryption and decryption
algorithms for systems where data/messages are encoded.
• Verify that illegal reading of files, to which the perpetrator
is not authorized, is not allowed
• Ensure that virus checkers prevent or curtail entry of
viruses into the system
• Try to identify any “backdoors” in the system usually left
open by the software developers
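The first and third items above are negative tests: verify that something is *not* allowed. A minimal sketch, with a hypothetical access-control table and file-reading function standing in for the real system:

```python
# Hypothetical access-control table: user -> files the user may read.
PERMISSIONS = {"alice": {"report.txt"}, "bob": set()}

def read_record(user, filename):
    """Stand-in for the system under test: enforce the permission table."""
    if filename not in PERMISSIONS.get(user, set()):
        raise PermissionError("access denied")
    return f"contents of {filename}"

# Positive case: an authorized access is permitted.
assert read_record("alice", "report.txt") == "contents of report.txt"

# Negative case: illegal reading of a file must be rejected.
try:
    read_record("bob", "report.txt")
    raise AssertionError("unauthorized read was allowed")
except PermissionError:
    print("unauthorized access correctly rejected")
```

Note the shape of the negative test: it fails if the forbidden operation succeeds, which is exactly the misuse scenario the slide calls for.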
Performance Testing
Performance Testing
• Tests are designed to determine the performance of the
actual system compared to the expected one
• Tests are designed to verify response time, execution time,
throughput, resource utilization and traffic rate
• One needs to be clear about the specific data to be captured
in order to evaluate performance metrics.
• For example, if the objective is to evaluate the response
time, then one needs to capture
– End-to-end response time (as seen by the external user)
– CPU time
– Network connection time
– Database access time
– Waiting time
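Capturing end-to-end response time is a simple timing wrapper. This sketch uses a stand-in `handle_request` that simulates the work (network, CPU, database) with a short sleep:

```python
import time

def handle_request():
    """Stand-in for the operation being measured."""
    time.sleep(0.05)          # simulated work: network + CPU + DB access
    return "ok"

start = time.perf_counter()
result = handle_request()
elapsed = time.perf_counter() - start   # end-to-end response time

print(f"response: {result}, end-to-end time: {elapsed * 1000:.1f} ms")
```

Breaking `elapsed` down into CPU, network, database, and waiting time requires instrumentation inside each component; the end-to-end number alone only tells you that a target was or was not met.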
Stress Tests
• The goal of stress testing is to evaluate and determine the behavior
of a software component while the offered load is in excess of its
designed capacity
• The system is deliberately stressed by pushing it to and beyond its
specified limits
• It ensures that the system can perform acceptably under worst-case
conditions and at the expected peak load. If the limit is exceeded and
the system does fail, then the recovery mechanism should be
invoked.
• Stress tests are targeted to bring out the problems associated with
one or more of the following:
– Memory leak: A failure in a program to release discarded memory
– Buffer allocation: To control the allocation and freeing of buffers
– Memory carving: A useful tool for analyzing physical and virtual
memory dumps when the memory structures are unknown or
have been overwritten.
Load and Stability Tests
• Tests are designed to ensure that the system remains
stable for a long period of time under full load
• When a large number of users are introduced and
applications that run for months without restarting, a
number of problems are likely to occur:
– the system slows down
– the system encounters functionality problems
– the system crashes altogether
• Load and stability testing typically involves exercising
the system with virtual users and measuring the
performance to verify whether the system can support
the anticipated load
• This kind of testing helps one understand how the system will fare
in real-life situations.
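Exercising a system with virtual users can be sketched with a thread pool: each worker plays one virtual user, and throughput is measured over the whole run. The `service_call` function is a stand-in for the real system:

```python
import time
from concurrent.futures import ThreadPoolExecutor

def service_call(user_id):
    """Stand-in for one request by one virtual user."""
    time.sleep(0.01)          # simulated per-request latency
    return "ok"

VIRTUAL_USERS = 20
start = time.perf_counter()
with ThreadPoolExecutor(max_workers=VIRTUAL_USERS) as pool:
    results = list(pool.map(service_call, range(VIRTUAL_USERS)))
elapsed = time.perf_counter() - start

throughput = len(results) / elapsed
print(f"{len(results)} requests in {elapsed:.2f}s "
      f"-> {throughput:.0f} requests/s")
```

Real load tools run such virtual users for hours or days, which is what surfaces the slowdowns, functionality problems, and crashes the slide lists.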
Usability Testing
• Ease of Use
• Interface steps
• Response Time
• Help System
• Error Messages
Usability Testing
Graphical User Interface Tests
– Tests are designed to verify the look and feel of the interface
presented to the users of an application system
– Tests are designed to verify different components such as icons,
menu bars, dialog boxes, scroll bars, list boxes, and radio buttons
– The GUI can be utilized to test the functionality behind the
interface, such as accurate response to database queries
– Tests the usefulness of the on-line help, error messages, tutorials,
and user manuals
– The usability characteristics of the GUI are tested, which include
the following:
• Accessibility: Can users enter, navigate, and exit with relative ease?
• Responsiveness: Can users do what they want and when they want in a
way that is clear?
• Efficiency: Can users do what they want to with minimum number of
steps and time?
• Comprehensibility: Do users understand the product structure with a
minimum amount of effort?
Compatibility/Conversion/Configuration Testing
• Operating systems: The specifications must state all the targeted end-
user operating systems on which the system being developed will be run.
• Software/ Hardware: The product may need to operate with certain
versions of web browsers, with hardware devices such as printers, or with
other software, such as virus scanners or word processors.
• Conversion Testing: Compatibility may also extend to upgrades from
previous versions of the software. Therefore, in this case, the system
must be upgraded properly and all the data and information from the
previous version should also be considered.
• Ranking of possible configurations (from the most to the least common,
for the target system)
• Testers must identify appropriate test cases and data for compatibility
testing.
Acceptance Testing
Acceptance Testing
Alpha Testing: conducted at the developer's site by the customer, in the presence of the developers.
Beta Testing: conducted at the customer's (end users') site, with no developer present.
Module 2
Testing Techniques
Static Testing
Types of Static Testing
• Software Inspections
• Walkthroughs
• Technical Reviews
Inspections
Inspections
• Inspection steps
• Roles for participants
• Item being inspected
Inspection Process
Steps in the Inspection
Steps in the Inspection
1. Planning: During this phase, the following is executed:
• The product to be inspected is identified.
• A moderator is assigned.
• The objective of the inspection is stated. If the objective is defect detection,
then the type of defect detection like design error, interface error, code
error must be specified.
During planning, the moderator performs the following activities:
• Assures that the product is ready for inspection
• Selects the inspection team and assigns their roles
• Schedules the meeting venue and time
• Distributes the inspection material like the item to be inspected, checklists,
etc.
Readiness Criteria
• Completeness, minimal functionality
• Readability, complexity, requirements and design documents
Steps in the Inspection
Inspection Team:
• Moderator
• Author
• Presenter
• Record keeper
• Reviewers
• Observer
2. Overview: In this stage, the inspection team is provided with the
background information for inspection. The author presents the rationale for
the product, its relationship to the rest of the products being developed, its
function and intended use, and the approach used to develop it. This
information is necessary for the inspection team to perform a successful
inspection.
The opening meeting may also be called by the moderator. In this meeting, the
objective of inspection is explained to the team members. The idea is that
every member should be familiar with the overall purpose of the inspection.
Steps in the Inspection
3. Individual Preparation: After the overview, the reviewers individually
prepare themselves for the inspection process by studying the documents
provided to them in the overview session.
– List of questions
– Potential Change Request (CR)
– Suggested improvement opportunities
Completed preparation logs are submitted to the moderator prior to the
inspection meeting.
Inspection Meeting/Examination:
– The author makes a presentation
– The presenter reads the code
– The record keeper documents the CR
– Moderator ensures the review is on track
Steps in the Inspection
At the end, the moderator concludes the meeting and produces a
summary of the inspection meeting.
Steps in the Inspection
4. Rework: The summary list of bugs raised during the inspection
meeting needs to be reworked by the author.
– Make the list of all the CRs
– Make a list of improvements
– Record the minutes meeting
– Author works on the CRs to fix the issue
Benefits of the Inspection Process:
• Bug Reduction
• Bug Prevention
• Productivity
• Real-time Feedback to Software Engineers
• Reduction in Development Resource
• Quality Improvement
• Project Management
• Checking Coupling and Cohesion
• Learning through Inspection
• Process Improvement
Variants of Inspection process
Active Design Reviews
Formal Technical Asynchronous
review method (FTArm)
Gilb Inspection
Humphrey’s Inspection Process
N-Fold Inspection
Checklist
Structure:
❏ Does the code completely and correctly implement the design?
❏ Does the code conform to any applicable coding standards?
❏ Is the code well-structured, consistent in style, and consistently
formatted?
❏ Are there any uncalled or unneeded procedures or any
unreachable code?
❏ Are there any leftover stubs or test routines in the code?
❏ Can any code be replaced by calls to external reusable
components or library functions?
❏ Are there any blocks of repeated code that could be condensed
into a single procedure?
❏ Is storage use efficient?
❏ Are any modules excessively complex and should be restructured
or split into multiple routines?
Checklist
Arithmetic Operations:
❏ Does the code avoid comparing floating-point
numbers for equality?
❏ Does the code systematically prevent rounding
errors?
❏ Are divisors tested for zero or noise?
Checklist
Loops and Branches:
❏ Are all loops, branches, and logic constructs complete, correct,
and properly nested?
❏ Are all cases covered in an IF-ELSEIF or CASE block,
including ELSE or DEFAULT clauses?
❏ Does every case statement have a default?
❏ Are loop termination conditions obvious and always achievable?
❏ Are indexes or subscripts properly initialized, just prior to the
loop?
❏ Does the code in the loop avoid manipulating the index variable
or using it upon exit from the loop?
Checklist
Documentation:
❏ Is the code clearly and adequately documented
with an easy-to-maintain commenting style?
❏ Are all comments consistent with the code?
Variables:
❏ Are all variables properly defined with
meaningful, consistent, and clear names?
❏ Do all assigned variables have proper type
consistency or casting?
❏ Are there any redundant or unused variables?
Checklist
Input / Output errors:
• If the file or peripheral is not ready, is that error
condition handled?
• Does the software handle the situation of the
external device being disconnected?
• Have all error messages been checked for
correctness, appropriateness, grammar, and
spelling?
• Are all exceptions handled by some part of the
code?
Scenario based Reading
Structured Walkthroughs
Technical Reviews
Black Box Testing
• To test the modules independently.
Boundary Value Analysis (BVA)
Guidelines for Boundary Value Analysis
• The equivalence class specifies a range
– If an equivalence class specifies a range of values, then construct
test cases by considering the boundary points of the range and
points just beyond the boundaries of the range
BVA – The “Single-fault” Assumption Theory
The basic form of implementation is to hold all but one of the
variables at their nominal (normal or average) values and allow the
remaining variable to take on its extreme values. The values used to
test the extremities are:
• Min ------ Minimum
• Min+ ----- Just above minimum
• Nom ------ Nominal (average)
• Max- ----- Just below maximum
• Max ------ Maximum
Boundary Value Checking
For two variables A and B, boundary value checking yields the
following 4n + 1 = 9 test cases (n = 2 variables):
• Anom, Bmin
• Anom, Bmin+
• Anom, Bmax
• Anom, Bmax-
• Amin, Bnom
• Amin+, Bnom
• Amax, Bnom
• Amax-, Bnom
• Anom, Bnom
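The single-fault test cases above can be generated mechanically. The sketch below (helper names are illustrative, not from the text) produces the 4n + 1 boundary-value test tuples for n input variables:

```python
def bva_values(lo, hi):
    """Five boundary values for a range: min, min+, nom, max-, max."""
    nom = (lo + hi) // 2
    return lo, lo + 1, nom, hi - 1, hi

def bva_test_cases(ranges):
    """Single-fault BVA: vary one variable at a time, keep others nominal.

    Produces 4n + 1 distinct tuples for n variables."""
    noms = [(lo + hi) // 2 for lo, hi in ranges]
    cases = {tuple(noms)}                      # the all-nominal case
    for i, (lo, hi) in enumerate(ranges):
        for v in (lo, lo + 1, hi - 1, hi):     # min, min+, max-, max
            case = list(noms)
            case[i] = v
            cases.add(tuple(case))
    return sorted(cases)

# Two variables A and B, each in [1, 200]: 4*2 + 1 = 9 test cases
cases = bva_test_cases([(1, 200), (1, 200)])
```

For two variables in [1, 200] this reproduces the nine combinations listed above, e.g. (100, 1) for Anom, Bmin.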
Robustness Testing Method
• Amax+, Bnom
• Amin-, Bnom
• Anom, Bmax+
• Anom, Bmin-
• It can be generalized that for a module with n input variables,
6n+1 test cases are designed with robustness testing.
Worst Case Testing Method
• When more than one variable is at an extreme value, i.e. when more
than one variable lies on the boundary, it is called the worst-case
testing method. For n input variables, worst-case testing produces
5^n test cases.
BVA – The Triangle Problem
The triangle problem accepts three integers (a, b, and c) as its input,
each of which is taken to be a side of a triangle. The values of these
inputs are used to determine the type of the triangle (Equilateral,
Isosceles, Scalene, or Not a Triangle).
For the inputs to be declared as being a triangle they must satisfy the
six conditions:
C1. 1 ≤ a ≤ 200    C2. 1 ≤ b ≤ 200
C3. 1 ≤ c ≤ 200    C4. a < b + c
C5. b < a + c      C6. c < a + b
Otherwise this is declared not to be a triangle.
The type of the triangle, provided the conditions are met, is determined
as follows:
1. If all three sides are equal, the output is Equilateral.
2. If exactly one pair of sides is equal, the output is Isosceles.
3. If no pair of sides is equal, the output is Scalene.
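The decision logic above is small enough to implement directly. A sketch (function name hypothetical), applying conditions C1–C6 before classifying:

```python
def classify_triangle(a, b, c):
    """Classify per C1-C6: each side must be in [1, 200] and every
    side must be smaller than the sum of the other two."""
    in_range = all(1 <= s <= 200 for s in (a, b, c))
    is_triangle = a < b + c and b < a + c and c < a + b
    if not (in_range and is_triangle):
        return "Not a Triangle"
    if a == b == c:
        return "Equilateral"
    if a == b or b == c or a == c:
        return "Isosceles"   # at least one pair of sides equal
    return "Scalene"
```

Running the tabulated BVA cases against this sketch gives the expected outputs, e.g. (100, 100, 200) reports "Not a Triangle" because C6 (c < a + b) fails.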
Test Cases for the Triangle Problem
Boundary Value Analysis Test Cases
(min = 1, min+ = 2, nom = 100, max- = 199, max = 200)

Case   a     b     c     Expected Output
1      100   100   1     Isosceles
2      100   100   2     Isosceles
3      100   100   100   Equilateral
4      100   100   199   Isosceles
5      100   100   200   Not a Triangle
6      100   1     100   Isosceles
7      100   2     100   Isosceles
8      100   199   100   Isosceles
9      100   200   100   Not a Triangle
10     1     100   100   Isosceles
11     2     100   100   Isosceles
12     199   100   100   Isosceles
13     200   100   100   Not a Triangle
Equivalence Class Testing
• An input domain may be too large for all its elements to be used as test
input
• The input domain is partitioned into a finite number of subdomains
• Each subdomain is known as an equivalence class, and it serves as a source
of at least one test input
• A valid input to a system is an element of the input domain that is expected
to return a non-error value
• An invalid input is an input that is expected to return an error value.
Figure: (a) Too many test inputs; (b) One input is selected from each subdomain
Guidelines for Equivalence Class Partitioning
• An input condition specifies a range [a, b]
– one equivalence class for a < X < b, and
– two other classes for X < a and X > b to test the system with invalid
inputs
• An input condition specifies a set of values
– one equivalence class for each element of the set {M1}, {M2}, ...,
{MN}, and
– one equivalence class for elements outside the set {M1, M2, ..., MN}
• Input condition specifies for each individual value
– If the system handles each valid input differently then create one
equivalence class for each valid input
• An input condition specifies the number of valid values (Say N)
– Create one equivalence class for the correct number of inputs
– two equivalence classes for invalid inputs – one for zero values and one
for more than N values
• An input condition specifies a “must be” value
– Create one equivalence class for a “must be” value, and
– one equivalence class for something that is not a “must be” value
Identification of Test Cases
Test cases for each equivalence class can be identified by:
• For each equivalence class with valid input that has not
been covered by test cases yet, write a new test case
covering as many uncovered equivalence classes as possible
• For each equivalence class with invalid input that has not
been covered by test cases, write a new test case that covers
one and only one of the uncovered equivalence classes
Example
Consider a module with three inputs A, B, and C, each of which is valid
in the range [1, 50]. The equivalence classes are:
I1 = {<A,B,C> : 1 ≤ A ≤ 50}
I2 = {<A,B,C> : 1 ≤ B ≤ 50}
I3 = {<A,B,C> : 1 ≤ C ≤ 50}
I4 = {<A,B,C> : A < 1}
I5 = {<A,B,C> : A > 50}
I6 = {<A,B,C> : B < 1}
I7 = {<A,B,C> : B > 50}
I8 = {<A,B,C> : C < 1}
I9 = {<A,B,C> : C > 50}
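Following the identification rules above (the representative values below are hypothetical; assuming A, B, and C are each valid in [1, 50]), one test case can cover all three valid classes I1–I3, while each invalid class I4–I9 gets its own case:

```python
def equivalence_test_cases():
    """One case covers the valid classes I1-I3; one case per invalid class."""
    nominal = 25                                 # a representative valid value
    cases = [((nominal, nominal, nominal), "valid: I1, I2, I3")]
    invalid = {"I4": (0, nominal, nominal),      # A < 1
               "I5": (51, nominal, nominal),     # A > 50
               "I6": (nominal, 0, nominal),      # B < 1
               "I7": (nominal, 51, nominal),     # B > 50
               "I8": (nominal, nominal, 0),      # C < 1
               "I9": (nominal, nominal, 51)}     # C > 50
    cases += [(t, f"invalid: {name}") for name, t in invalid.items()]
    return cases
```

This yields 7 test cases: one valid tuple and six tuples that each violate exactly one class, so a failing case points at a single equivalence class.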
Advantages of Equivalence Class Partitioning
• One gets a better idea about the input domain being covered
with the selected test cases
Formation of Decision Table
• Condition Stub
• Action Stub
• Condition Entry
• Action Entry
Formation of Decision Table
• It comprises a set of conditions (or, causes) and a set of effects (or,
results) arranged in the form of a column on the left of the table
• In the second column, next to each condition, we have its possible
values: Yes (Y), No (N), and Don’t Care (Immaterial) state.
• To the right of the “Values” column, we have a set of rules. For each
combination of the three conditions {C1,C2,C3}, there exists a rule
from the set {R1,R2, ..}
• Each rule comprises a Yes (Y), No (N), or Don’t Care (“-”) response,
and contains an associated list of effects (actions) {E1, E2, E3}
• For each relevant effect, an effect sequence number specifies the order
in which the effect should be carried out, if the associated set of
conditions are satisfied
• Each rule of a decision table represents a test case
Test case design using decision table
• The columns in the decision table are transformed into test cases.
• If there are K rules over n binary conditions, there are at least K test
cases and at the most 2^n test cases.
Decision Table Based Testing
Example
• A program calculates the total salary of an employee
with the conditions that if the working hours are less than
or equal to 48, then give normal salary. The hours over
48 on normal working days are calculated at the rate of
1.25 of the salary. However, on holidays or Sundays, the
hours are calculated at the rate of 2.00 times of the
salary. Design the test cases using decision table
testing.
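A sketch of the salary rule with one branch per decision-table rule (function and parameter names are illustrative, and the holiday rate is interpreted as applying to all hours worked that day):

```python
def total_salary(hours, hourly_rate, holiday_or_sunday):
    """Conditions: holiday/Sunday pays 2.00x for the hours worked;
    otherwise hours <= 48 pay the normal rate and hours over 48
    pay 1.25x the rate."""
    if holiday_or_sunday:                      # Rule: holiday or Sunday
        return hours * hourly_rate * 2.0
    if hours <= 48:                            # Rule: normal day, <= 48 h
        return hours * hourly_rate
    overtime = hours - 48                      # Rule: normal day, > 48 h
    return 48 * hourly_rate + overtime * hourly_rate * 1.25

# One test case per decision-table rule (rate of 10 per hour assumed)
assert total_salary(40, 10, False) == 400
assert total_salary(50, 10, False) == 480 + 2 * 12.5   # 505.0
assert total_salary(8, 10, True) == 160.0
```

Each column (rule) of the decision table maps to exactly one branch, so the three assertions above exercise all the rules once.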
Decision Table Based Testing
The test cases derived from the decision table are given below:
Dynamic Testing: White Box Testing Techniques
White-box testing (also known as clear box testing, glass box testing,
transparent box testing, and structural testing) is a method of testing
software that tests internal structures or workings of an application. White-box
testing can be applied at the unit, integration, and system levels of the
software testing process.
• White box testing needs the full understanding of the logic/structure
of the program.
• Test case designing using white box testing techniques
– Control Flow testing method
• Basis Path testing method
• Loop testing
– Data Flow testing method
– Mutation testing method
• Control flow refers to flow of control from one instruction to another
• Data flow refers to propagation of values from one variable or constant to
another variable
Logic Coverage Criteria: Structural testing considers the program
code, and test cases are designed based on the logic of the
program such that every element of the logic is covered.
Statement Coverage: The first kind of logic coverage can be identified in
the form of statements. It is assumed that if all the statements of the
module are executed once, every bug will be revealed.
Logic Coverage Criteria
Decision/Condition Coverage: Condition coverage in a decision does not
mean that the decision itself has been covered. It requires sufficient test cases
such that each condition in a decision takes on all possible outcomes at least
once.
If (A && B)
• Test Case 1: A is True, B is False.
• Test Case 2: A is False, B is True.
These two cases cover each condition both ways, but the decision (A && B)
never evaluates to True; decision/condition coverage additionally needs a
case with both A and B True.
Basis path testing is the technique of selecting the paths that provide a basis
set of execution paths through the program.
• Path Testing is based on control structure of the program for which flow
graph is prepared.
Path Testing Terminology
Path: A path through a program is a sequence of instructions or statements that starts
at an entry, junction, or decision and ends at another, or possibly the same, junction,
decision, or exit.
Segment: Paths consist of segments. The smallest segment is a link, that is, a single
process that lies between two nodes (e.g., junction-process-junction, junction-
process-decision, decision-process-junction, decision-process-decision).
Length of a Path: The length of a path is measured by the number of links in it and
not by the number of instructions or statements executed along the path. An
alternative way to measure the length of a path is by the number of nodes traversed.
Independent Path: An independent path is any path through the graph that
introduces at least one new set of processing statements or new conditions. An
independent path must move along at least one edge that has not been traversed
before the path is defined.
Example
Cyclomatic Complexity
V(G) = e – n + 2P
     = 10 – 8 + 2(1)
     = 4
V(G) = Number of predicate nodes + 1
     = 3 (nodes B, C, and F) + 1
     = 4
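The two ways of computing V(G) can be cross-checked in a few lines (helper names are illustrative):

```python
def cyclomatic_edges(e, n, p=1):
    """V(G) = e - n + 2P, for e edges, n nodes, P connected components."""
    return e - n + 2 * p

def cyclomatic_predicates(predicate_nodes):
    """V(G) = number of predicate (decision) nodes + 1."""
    return predicate_nodes + 1

# Flow graph from the example: 10 edges, 8 nodes, predicates B, C, and F
assert cyclomatic_edges(10, 8) == 4
assert cyclomatic_predicates(3) == 4
```

Both forms agree on 4, which matches the number of independent paths listed for the example.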
Example
Independent Paths
• A-B-F-H
• A-B-F-G-H
• A-B-C-E-B-F-G-H
• A-B-C-D-F-H
Loop Testing
Simple Loops:
• Check whether you can bypass the loop or not. If the test case for
bypassing the loop is executed and, still you enter inside the loop, it
means there is a bug.
• Check whether the loop control variable is negative.
• Write one test case that executes the statements inside the loop.
• Write test cases for a typical number of iterations through the loop.
• Write test cases for checking the boundary values of maximum and
minimum number of iterations defined (say min and max) in the loop. It
means we should test for the min, min+1, min-1, max-1, max and
max+1 number of iterations through the loop.
Loop Testing
Nested Loops: When two or more loops are embedded one within another,
they are called nested loops.
The strategy is to start with the innermost loop while holding the outer
loops at their minimum values. Continue outward in this manner
until all loops have been covered.
Loop Testing
Concatenated Loops:
Mutation Testing
Mutation testing is a technique that focuses on
measuring the adequacy of test data (or test cases).
The original intention behind mutation testing was to
expose and locate weaknesses in test cases. Thus,
mutation testing is a way to measure the quality of test
cases.
Mutation Testing
• Mutation testing helps a user create test data by interacting with the
user to iteratively strengthen the quality of test data. During mutation
testing, faults are introduced into a program by creating many
versions of the program, each of which contains one fault. Test data
are used to execute these faulty programs with the goal of causing
each faulty program to fail.
Mutation Testing
• Modify a program by introducing a single small change to
the code
• A modified program is called a mutant
• A mutant is said to be killed when the execution of a test
case causes it to fail; the mutant is then considered dead
• A mutant is equivalent to the given program if it always
produces the same output as the original program
• A mutant is called stubborn if the existing set of test cases
is insufficient to kill it
Mutation Score
A mutation score for a set of test cases is the percentage
of non-equivalent mutants killed by the test suite:
Mutation score = 100 × D / (N – E), where
D = number of dead (killed) mutants
N = total number of mutants
E = number of equivalent mutants
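As a quick sketch of the formula (function name illustrative):

```python
def mutation_score(dead, total, equivalent):
    """Percentage of non-equivalent mutants killed: 100 * D / (N - E)."""
    return 100.0 * dead / (total - equivalent)

# e.g. 18 of 20 mutants killed, and 2 of the 20 known to be equivalent:
# every killable mutant died, so the score is 100%
assert mutation_score(18, 20, 2) == 100.0
```

Note that equivalent mutants are excluded from the denominator, since no test case can ever kill them.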
Primary Mutants:
• Let us take one example of a C program shown below:
…
if (a > b)
    x = x + y;
else
    x = y;
printf(“%d”, x);
…
We can consider the following mutants for the above example:
• M1: x = x – y;
• M2: x = x / y;
• M3: x = x+1;
• M4: printf(“%d”,y);
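To see how test data kills these mutants, the fragment and one of its mutants can be run side by side (a sketch; the C fragment is transcribed into Python for illustration, with the printed value returned instead):

```python
def original(a, b, x, y):
    if a > b:
        x = x + y
    else:
        x = y
    return x          # stands in for printf("%d", x)

def mutant_m1(a, b, x, y):   # mutant M1: x = x - y
    if a > b:
        x = x - y
    else:
        x = y
    return x

# The test case (a=2, b=1, x=3, y=4) kills M1:
# the original prints 7, the mutant prints -1, so the mutant fails (dead)
assert original(2, 1, 3, 4) == 7
assert mutant_m1(2, 1, 3, 4) == -1
```

A test case with a <= b would not kill M1 (both versions output y), which is why mutation testing drives us to add stronger test data.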
Mutation Testing
Secondary Mutants:
Multiple levels of mutation are applied on the initial program.
Example Program:
if (a < b)
    c = a;
A mutant for this code may be:
if (a == b)
    c = a + 1;
Mutation Testing Process
Regression Testing
Progressive Vs Regressive Testing
• Delta version: A changed version that has not passed a regression test.
• Delta build: An executable configuration of the SUT that contains all the
delta and baseline components.
Thus, it can be said that most test cases begin as progressive test cases and
eventually become regression test cases.
Regression Testing
• This testing is done to make sure that new code changes do not
have side effects on the existing functionalities. It ensures that the old
code still works once the new code changes are made.
Regression Testing produces Quality Software
Need / When to do regression testing?
• Software maintenance
Corrective maintenance
Adaptive maintenance
Perfective maintenance
Preventive maintenance
Bug-Fix regression:
This testing is performed after a bug has been reported and fixed. Its
goal is to repeat the test cases that exposed the problem in the first place.
Usability testing
WHEN TO DO USABILITY TESTING?
Usability design is verified through several means. Some of them are as follows:
• Style sheets: Style sheets are groupings of user interface design elements.
Use of style sheets ensures consistency of design elements across several
screens, and testing the style sheet ensures that the basic usability design is
tested. Style sheets also include frames, where each frame is considered as
a separate screen by the user. Style sheets are reviewed to check whether
they force font size, color scheme, and so on, which may affect usability.
• Screen prototypes: A screen prototype is another way to test usability design.
The screens are designed as they will be shipped to the customers, but are
not integrated with other modules of the product. Therefore, this user
interface is tested independently without integrating with the functionality
modules. This prototype will have other user interface functions simulated
such as screen navigation, message display, and so on. The prototype gives
an idea of how exactly the screens will look and function when the product is
released. The test team and some real-life users test this prototype and their
ideas for improvements are incorporated in the user interface. Once this
prototype is completely tested, it is integrated with other modules of the
product.
QUALITY FACTORS FOR USABILITY
Some quality factors are very important when performing usability testing. As
explained earlier, usability is subjective, and not all requirements for
usability can be documented clearly. However, focusing on the quality
factors given below helps improve objectivity in usability testing.
Comprehensibility: The product should have a simple and logical structure of
features and documentation. They should be grouped on the basis of user
scenarios and usage. The most frequent operations that are performed early in
a scenario should be presented first, using the user interfaces. When features
and components are grouped in a product, the grouping should be based on user
terminology, not technology or implementation.
Consistency: A product needs to be consistent with any applicable standards,
platform look-and-feel, base infrastructure, and earlier versions of the same
product. Also, if there are multiple products from the same company, it would
be worthwhile to have some consistency in the look-and-feel of these multiple
products. Following the same standards for usability helps in meeting the
consistency aspect of usability.
Accessibility testing
Module 3:
Testing Metrics for Monitoring and Controlling the
Testing Process
Software Metrics
Metrics can be defined as “standards of measurement”.
Software metrics are used to measure the quality of a project. Simply put,
a metric is a unit used for describing an attribute; it is a scale for
measurement.
Suppose, in general, “kilogram” is a metric for measuring the attribute
“weight”. Similarly, in software, “number of issues found per thousand
lines of code” is a metric for measuring quality.
Test metrics serve three purposes:
• Understanding
• Control
• Improvement
Software Metrics
Product Metrics
Measures of the software product at any stage of its development,
from requirements to installed system.
– Complexity of S/W design and code
– Size of the final program
– Number of pages of documentation produced
Process Metrics
Measures of the S/W development process
– Overall development time
– Type of methodology used
– Average level of experience of programming staff
Measurement Objectives for Testing
The objectives for assessing a test process should
be well defined.
GQM (Goal Question Metric) Framework:
• List the major goals of the test process.
• Derive from each goal the questions that must
be answered to determine if the goals are being
met.
• Decide what must be measured in order to
answer the questions adequately.
Attributes and Corresponding Metrics
in Software Testing
Attributes: Progress
Attributes: Cost
Attributes: Quality
Spoilage
[Table: Number of defects weighted by defect age on project Boomerang]
Spoilage Metric
• The spoilage value for the Boomerang test
project is 2.2
• A spoilage value close to 1 is an indication of a
more effective defect discovery process
• This metric is useful in measuring the long-
term trend of test effectiveness in an
organization
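A sketch of the spoilage computation, assuming the common definition in which each defect is weighted by its age (the number of phases between its introduction and its discovery); the numbers below are hypothetical, not the Boomerang data:

```python
def spoilage(defects_by_age):
    """Weighted average defect age: sum(count * age) / sum(count).

    `defects_by_age` maps a defect age (in phases) to the number of
    defects discovered at that age."""
    weighted = sum(count * age for age, count in defects_by_age.items())
    total = sum(defects_by_age.values())
    return weighted / total

# hypothetical project: 10 defects of age 1, 6 of age 2, 2 of age 4
score = spoilage({1: 10, 2: 6, 4: 2})   # (10 + 12 + 8) / 18 ≈ 1.67
```

When most defects are caught in the phase right after they are introduced, the weighted ages stay small and the value approaches 1, matching the interpretation above.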
Attributes: Quality
DRE = Number of Defects Found in Testing /
      (Number of Defects Found in Testing + Number of Defects Not Found)
There are potential issues that must be taken into account while
measuring the defect-removal efficiency. For example, the severity of
bugs and an estimate of time by which the customers would have
discovered most of the failures are to be established. This metric is
more helpful in establishing the test effectiveness in the long run as
compared to the current project.
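The DRE formula above is straightforward to compute (a sketch, expressed as a percentage):

```python
def defect_removal_efficiency(found_in_testing, not_found):
    """DRE = defects found in testing / (found in testing + not found),
    where `not_found` counts defects that escaped to the customers."""
    total = found_in_testing + not_found
    return 100.0 * found_in_testing / total

# 90 defects caught in testing, 10 escaped to the field
assert defect_removal_efficiency(90, 10) == 90.0
```

In practice the "not found" count is only an estimate until enough field time has passed, which is one reason the metric is more meaningful as a long-term trend.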
Attributes: Quality
Measuring Test Completeness: refers to how much of the code and
requirements is covered by the test set. The advantage of
measuring test coverage is that it provides the ability to design
new test cases and improve existing ones.
The relationship between code coverage and the number of test
cases:
C(x) = 1 – e^(–(p/N)·x)
C(x) is coverage after executing x number of test cases, N is the
number of blocks in the program and p is the average of number of
blocks covered by a test case during the function test.
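The coverage model can be evaluated directly. The sketch below (example parameter values are hypothetical) checks the two properties the formula implies: no coverage before any test runs, and coverage approaching 1 as tests accumulate:

```python
import math

def coverage(x, N, p):
    """C(x) = 1 - e^(-(p/N)*x): expected block coverage after x test
    cases, for N blocks and an average of p blocks covered per test."""
    return 1.0 - math.exp(-(p / N) * x)

# e.g. 1000 blocks, each test covering about 50 blocks on average
assert coverage(0, 1000, 50) == 0.0                      # no tests yet
assert coverage(20, 1000, 50) > coverage(10, 1000, 50)   # monotone
assert coverage(200, 1000, 50) > 0.99                    # saturates
```

The exponential shape captures diminishing returns: later test cases mostly re-execute blocks that earlier ones already covered.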
• At the system testing level, we should measure whether all the
features of the product are being tested or not. A common
requirements coverage metric is the percentage of requirements
covered by at least one test. A requirements traceability matrix
can be used for this purpose.
Attributes: Quality
Effectiveness of smoke tests:
• establish confidence over stability of a system
SMOKE TESTING, also known as “Build Verification Testing”, is a type of
software testing that comprises a non-exhaustive set of tests aimed at
ensuring that the most important functions work. The result of this testing is
used to decide if a build is stable enough to proceed with further testing.
The tests that are included in smoke testing cover the basic operations that
are most frequently used, e.g. logging in, addition, and deletion of records.
Smoke tests need to be a subset of the regression testing suite.
Quality of Test Plan: The quality of a test plan is measured with respect
to the probable number of errors it can expose.
– To evaluate a test plan, Berger describes a multi-dimensional qualitative method
using rubrics
1.Theory of objective 2. Theory of scope 3. Theory of coverage 4. Theory
of risk 5. Theory of data 6. Theory of originality 7. Theory of
communication 8. Theory of usefulness 9. Theory of completeness
Reference: Software Testing Principles and Practices, Naresh Chauhan , Oxford University 21
10.Theory of insightfulness Pooja Malhotra
Attributes : Size
• Estimation of test cases: To fully exercise a system and to estimate its resources,
an initial estimate of the number of test cases is required.
• Number of regression tests: Regression testing is performed on a modified
program that establishes confidence that the changes and fixes against reported
faults are correct and have not affected the unchanged portions of the program.
However, the number of test cases in regression testing becomes too large to
test. Therefore, careful measures are required to select the test cases effectively.
• Some of the measurements to monitor regression testing are:
• Number of test cases re-used
• Number of test cases added to the tool repository or test database
• Number of test cases rerun when changes are made to the software
• Number of planned regression tests executed
• Number of planned regression tests executed and passed
• Tests to automate: Tasks that are repetitive in nature and tedious to perform
manually are prime candidates for an automated tool. The categories of tests that
come under repetitive tasks are: Regression tests, Smoke tests, Load tests,
Performance tests
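The regression-monitoring measures listed above can be computed directly from a test-cycle log. A minimal sketch; the counts are hypothetical:

```python
# Hypothetical regression-testing log for one release cycle; the keys
# follow the monitoring measures listed above.
regression = {
    "reused": 120,          # test cases re-used
    "added_to_repo": 15,    # test cases added to the test database
    "rerun_on_change": 80,  # test cases rerun when code changed
    "planned": 100,         # planned regression tests
    "executed": 90,         # planned regression tests executed
    "passed": 85,           # planned regression tests executed and passed
}

execution_pct = 100.0 * regression["executed"] / regression["planned"]
pass_pct = 100.0 * regression["passed"] / regression["executed"]
print(f"Planned regression tests executed: {execution_pct:.0f}%")
print(f"Executed regression tests passed:  {pass_pct:.1f}%")
```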
Architectural Design Metric used for Testing
Card and Glass introduced three types of software design complexity that can also be used in
testing.
Structural Complexity
S(m) = fout(m)²
where S(m) is the structural complexity of a module m and fout(m) is the fan-out of module m.
This metric gives us the number of stubs required for unit testing of the module m. (Unit Testing)
Data Complexity
D(m) = v(m) / [fout(m) + 1]
where v(m) is the number of input and output variables that are passed to and from module m.
This metric measures the complexity in the internal interface for a module m and indicates the
probability of errors in module m.
System Complexity
SC(m) = S(m) + D(m)
It is defined as the sum of structural and data complexity.
Overall architectural complexity of system is the sum total of system complexities of all the
modules.
• The testing effort of a module is directly proportional to its system complexity; it will be difficult to unit test a module with higher system complexity.
• Efforts required for integration testing increase with the architectural complexity of the system.
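The three Card and Glass complexities can be computed per module and summed into the overall architectural complexity. A minimal sketch; the module names, variable counts, and fan-outs are hypothetical:

```python
def structural_complexity(fan_out):
    """Card & Glass: S(m) = fout(m)^2 -- also the number of stubs
    needed to unit-test module m."""
    return fan_out ** 2

def data_complexity(v, fan_out):
    """D(m) = v(m) / (fout(m) + 1), where v(m) is the number of
    input/output variables passed to and from module m."""
    return v / (fan_out + 1)

def system_complexity(v, fan_out):
    """SC(m) = S(m) + D(m)."""
    return structural_complexity(fan_out) + data_complexity(v, fan_out)

# Hypothetical modules: (name, v(m), fan_out(m)).
modules = [("parser", 6, 3), ("db_layer", 4, 1), ("report", 10, 4)]

# Overall architectural complexity = sum of all module SC values.
architectural = sum(system_complexity(v, f) for _, v, f in modules)
for name, v, f in modules:
    print(f"{name:9s} S={structural_complexity(f):2d} "
          f"D={data_complexity(v, f):.2f} SC={system_complexity(v, f):.2f}")
print(f"Overall architectural complexity = {architectural:.2f}")
```

In this sketch, "report" (high fan-out) dominates the total, matching the observation that such modules are the hardest to unit test.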
Information Flow Metrics used for Testing
Information Flow Metrics used for Testing :
Henry & Kafura Design Metric
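A hedged sketch of this metric, assuming the commonly cited Henry-Kafura formulation HK(m) = length(m) × (fan_in(m) × fan_out(m))², which is not spelled out in the slide itself; the module figures are hypothetical:

```python
def henry_kafura(length, fan_in, fan_out):
    """Henry-Kafura information-flow complexity (assumed standard form):
    HK(m) = length(m) * (fan_in(m) * fan_out(m))^2
    High values flag modules that deserve extra testing attention."""
    return length * (fan_in * fan_out) ** 2

# Hypothetical module: 100 lines, fan-in 3, fan-out 2.
print(henry_kafura(100, 3, 2))  # -> 3600
```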
Cyclomatic Complexity Measures for Testing
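A minimal sketch of McCabe's cyclomatic complexity, V(G) = E - N + 2P, which gives the number of linearly independent paths and hence an upper bound on the basis-path test cases needed; the graph counts are hypothetical:

```python
def cyclomatic_complexity(edges, nodes, components=1):
    """McCabe: V(G) = E - N + 2P for a control-flow graph with E edges,
    N nodes, and P connected components."""
    return edges - nodes + 2 * components

# Hypothetical control-flow graph of a function with one if/else and
# one loop: 10 edges, 8 nodes, a single connected component.
print(cyclomatic_complexity(10, 8))  # -> 4
```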
Function Point Metrics for Testing
The function point (FP) metric is used effectively for measuring the size of a software system.
Function-based metrics can be used as a predictor for the overall testing
effort.
Various project-level characteristics (e.g. testing effort and time, errors
uncovered, number of test cases produced) of past projects can be
collected and correlated with the number of FP produced by a project
team.
The team can then project the expected values of these characteristics
for the current project.
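The correlation step can be sketched with a simple ratio-based predictor over past projects; the historical FP and effort figures below are hypothetical:

```python
# Hypothetical history: (function points, testing effort in person-days).
history = [(250, 60), (400, 95), (310, 72)]

# Simple ratio predictor: average testing effort per FP across projects.
effort_per_fp = sum(e for _, e in history) / sum(fp for fp, _ in history)

# Project the expected testing effort for a new project of 350 FP.
new_project_fp = 350
predicted_effort = effort_per_fp * new_project_fp
print(f"Predicted testing effort ~ {predicted_effort:.0f} person-days")
```

A real estimate would use regression over more characteristics (errors uncovered, test cases produced), but the principle is the same: correlate past values with FP, then extrapolate.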
Function Point Metrics for Testing
Test case coverage measures the number of test cases that are
necessary to adequately support thorough testing of a development
project.
Testing Progress Metrics
• Everyone in the testing team wants to know when the testing should
stop.
• To know when the testing is complete, we need to track the
execution of testing. This is achieved by collecting data or metrics,
showing the progress of testing.
• Using these progress metrics, the release date of the project can be
determined.
• These metrics are collected iteratively during the stages of test
execution cycle.
Testing Progress Metrics
Tester Productivity:
Testing Progress Metrics
• Budget and Resource Monitoring Measures:
Earned value tracking
For the planned earned values, we need the following measurement data:
1. Total estimated time or cost for overall testing effort
2. Estimated time or cost for each testing activity
3. Actual time or cost for each testing activity
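The three measurements can be combined into planned earned values, each activity's share of the total estimated effort. A minimal sketch; the activity names and hours are hypothetical:

```python
# Hypothetical testing activities: (name, estimated hours, actual hours).
activities = [
    ("test planning",   40, 38),
    ("test design",     80, 90),
    ("test execution", 120, 60),   # still in progress
]

# 1. Total estimated effort for the overall testing effort.
total_estimate = sum(est for _, est, _ in activities)

# 2./3. Planned earned value per activity = its share of the estimate;
# actuals are tracked alongside for variance monitoring.
for name, est, actual in activities:
    planned_ev = 100.0 * est / total_estimate
    print(f"{name:15s} planned EV = {planned_ev:.1f}% "
          f"(estimated {est} h, actual so far {actual} h)")
```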
Reference:
Software Testing Principles and Practices, Naresh Chauhan, Oxford University