Software Testing
Maleekjan Choudhari, 23 January 2009
Contents
What is Software Testing?
Objectives of SW Testing
Testing vs. Debugging
Psychology of Testing
Basic Testing Strategies:
1. Black-box testing
2. White-box testing
What is Software Testing?
Testing is a verification and validation activity that is performed by
executing program code.
Which definition of SW Testing is most appropriate?
a) Testing is the process of demonstrating that errors are not present.
b) Testing is the process of demonstrating that a program performs its
intended functions.
c) Testing is the process of removing errors from a program and fixing them.
None of the above definitions sets the right goal for effective SW Testing.
A Good Definition
Testing is the process of executing a program with the intent of finding
errors.
Objectives of Software Testing
The main objective of Software Testing is to find errors.
Indirectly, testing provides assurance that the SW meets its
requirements.
Testing helps in assessing the quality and reliability of software.
What can testing not do?
Show the absence of errors.
Testing vs. Debugging
Debugging is not Testing.
Debugging always occurs as a consequence of testing.
Debugging attempts to find the cause of an error and correct
it.
Psychology of Testing
Testing is a destructive process -- show that a program does not
work by finding errors in it.
Start testing with the assumption that
the program contains errors.
A successful test case is one that finds an error.
It is difficult for a programmer to test his/her own program
effectively with the proper frame of mind required for testing.
Basic Testing Strategies
Black-box testing
White-box testing
Black-Box Testing
Tests that validate business requirements -- (what the system is
supposed to do)
Test cases are derived from the requirements specification of the
software. No knowledge of internal program structure is used.
Also known as -- functional, data-driven, or Input/Output testing
White-Box Testing
Tests that validate internal program logic (control flow, data
structures, data flow)
Test cases are derived by examination of the internal structure of
the program.
Also known as -- structural or logic-driven testing
Black-Box vs. White-Box Testing
Black-box testing can detect errors such as
incorrect functions and missing functions.
It cannot detect design errors, coding errors, unreachable code, or
hidden functions.
White-box testing can detect errors such as
logic errors and design errors.
It cannot detect whether the program performs its intended
functions, nor can it find missing functionality.
Both methods of testing are required.
Black-Box vs. White-Box Testing
Black-box testing: tests function; can find requirements specification errors; can find missing functions.
White-box testing: tests structure; can find design and coding errors; cannot find missing functions.
Is Complete Testing Possible?
Can testing prove that a program is
completely free of errors?
-- No
Complete testing in the sense of a proof is not theoretically
possible, and certainly not practically possible.
Example
Test a function that adds two 32-bit numbers and returns the result.
Assume we can execute 1000 test cases per second.
How long would it take to test this function exhaustively, over all 2^64 input combinations?
About 585 million years.
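A rough back-of-the-envelope check of that figure (a sketch in Python; the only assumption beyond the slide is the stated rate of 1000 tests per second):

    combinations = 2 ** 64                       # two independent 32-bit operands
    tests_per_second = 1000
    seconds = combinations / tests_per_second
    years = seconds / (60 * 60 * 24 * 365)
    print(round(years / 1e6), "million years")   # prints roughly 585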
Is Complete Testing Possible?
Exhaustive Black-box testing is generally not possible because
the input domain for a program may be infinite or incredibly large.
Exhaustive White-box testing is generally not possible because a
program usually has a very large number of paths.
Implications ...
Test-case design
careful selection of a subset of all possible test cases
The objective should be to maximize the number of errors
found by a small finite number of test cases.
Test-completion criteria
Black-Box Testing
Program viewed as a Black-box, which accepts some inputs and
produces some outputs
Test cases are derived solely from the specifications, without
knowledge of the internal structure of the program.
Functional Test-Case Design Techniques
Equivalence class partitioning
Boundary value analysis
Cause-effect graphing
Error guessing
Equivalence Class Partitioning
Partition the program input domain into equivalence classes
(classes of data which according to the specifications are treated
identically by the program).
The basis of this technique is that a test of a representative value of
each class is equivalent to a test of any other value of the same
class.
Identify valid as well as invalid equivalence classes.
For each equivalence class, generate a test case to exercise an
input representative of that class.
Example
Example: input condition 0 <= x <= max
valid equivalence class: 0 <= x <= max
invalid equivalence classes: x < 0, x > max
=> 3 test cases (one per equivalence class)
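A minimal pytest-style sketch of these three test cases, assuming a hypothetical function process(x) that accepts values in 0..MAX and rejects everything else (both names are invented for illustration):

    import pytest

    MAX = 200                      # hypothetical upper bound of the valid range

    def process(x):                # hypothetical function under test
        if not 0 <= x <= MAX:
            raise ValueError(x)
        return x * 2

    def test_valid_class():        # representative of 0 <= x <= max
        assert process(100) == 200

    def test_below_range():        # representative of x < 0
        with pytest.raises(ValueError):
            process(-5)

    def test_above_range():        # representative of x > max
        with pytest.raises(ValueError):
            process(MAX + 50)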
Guidelines for Identifying Equivalence Classes
Input condition: a range of values (e.g. 1-200)
  Valid equivalence classes: one (a value within the range)
  Invalid equivalence classes: two (one value outside each end of the range)
Input condition: a number N of valid values
  Valid equivalence classes: one
  Invalid equivalence classes: two (none, and more than N)
Input condition: a set of input values, each handled differently by the program (e.g. A, B, C)
  Valid equivalence classes: one for each value in the set
  Invalid equivalence classes: one (e.g. any value not in the valid input set)
Guidelines for Identifying Equivalence Classes
Input condition: a 'must be' condition (e.g. an identifier name must begin with a letter)
  Valid equivalence classes: one (e.g. it is a letter)
  Invalid equivalence classes: one (e.g. it is not a letter)
If you know that elements in an equivalence class are not handled
identically by the program, split the equivalence class into smaller
equivalence classes.
Identifying Test Cases for Equivalence Classes
Assign a unique number to each equivalence class
Until all valid equivalence classes have been covered by test
cases, write a new test case covering as many of the uncovered
valid equivalence classes as possible.
Cover each invalid equivalence class with a separate test case.
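The covering of valid classes can be sketched as a simple greedy loop (an illustrative sketch only, not part of the original slides; candidate_cases maps a candidate test case to the set of valid class numbers it exercises):

    def cover_valid_classes(candidate_cases, valid_classes):
        """Pick test cases until every valid equivalence class is covered."""
        uncovered = set(valid_classes)
        chosen = []
        while uncovered:
            # choose the candidate covering the most still-uncovered valid classes
            best = max(candidate_cases, key=lambda tc: len(candidate_cases[tc] & uncovered))
            if not candidate_cases[best] & uncovered:
                break                              # remaining classes need brand-new test cases
            chosen.append(best)
            uncovered -= candidate_cases[best]
        return chosen, uncovered

    # Invalid equivalence classes are not combined: each gets its own dedicated test case.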
Boundary Value Analysis
Design test cases that exercise values that lie at the boundaries of
an input equivalence class, and values just beyond the ends.
Also identify output equivalence classes, and write test cases to
generate output at the boundaries of the output equivalence classes,
and just beyond the ends.
Example: input condition 0 <= x <= max
Test values: 0, max (valid inputs)
             -1, max+1 (invalid inputs)
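Continuing the 0 <= x <= max example, a pytest-style sketch of the four boundary cases, reusing the hypothetical process() and MAX names from the equivalence-partitioning sketch above:

    import pytest

    @pytest.mark.parametrize("x", [0, MAX])           # values exactly on the boundaries
    def test_boundary_valid(x):
        process(x)                                    # must be accepted without error

    @pytest.mark.parametrize("x", [-1, MAX + 1])      # values just beyond the boundaries
    def test_boundary_invalid(x):
        with pytest.raises(ValueError):
            process(x)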
Error Guessing
From intuition and experience, enumerate a list of possible errors
or error prone situations and then write test cases to expose those
errors.
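For example, an error-guessing checklist for a routine that parses a numeric amount might translate into test cases like these (parse_amount is a hypothetical function, used only for illustration):

    import pytest

    # guessed trouble spots: empty input, whitespace only, zero, negative, very large values
    @pytest.mark.parametrize("text", ["", "   ", "0", "-1", "999999999999999999"])
    def test_guessed_error_prone_inputs(text):
        try:
            parse_amount(text)          # hypothetical routine under test
        except ValueError:
            pass                        # a clean rejection is fine; any other failure is an error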
Testing Principles
☛ A good test case is one likely to show an error.
☛ Description of expected output or result is an essential part of
test-case definition.
☛ A programmer should avoid attempting to test his/her own
program.
Testing is more effective and successful if performed by an
independent test team.
Testing Principles (contd)
☛ Avoid on-the-fly testing.
Document all test cases.
☛ Test valid as well as invalid cases.
☛ Thoroughly inspect all test results.
☛ The more errors already detected, the more errors are likely still present.
Testing Principles (contd)
☛ Decide in advance when to stop testing
☛ Do not plan testing effort under the tacit assumption that no
errors will be found.
☛ Testing is an extremely creative and intellectually challenging task
Some Terminology
Verification
“Are we building the product right?”
Does the software meet the specification?
Validation
“Are we building the right product?”
Does the software meet the user requirements?
Software Quality Assurance
Not the same as software testing …
Create and enforce standards and methods to improve the
development process and to prevent bugs from occurring.
Testing Activities in the SW Life Cycle
(figure: V-model of testing activities in the life cycle -- user requirements and the requirements specification (SRS) drive system and acceptance testing, system design and detailed design drive integration testing, and coding drives unit testing; unit-tested modules are integrated, integration-tested, and finally system- and acceptance-tested to yield the tested software)
Levels of Testing
Low-level testing
  Unit (module) testing -- performed by the programmer
  Integration testing -- performed by the development team
High-level testing
  Function testing -- performed by an independent test group
  System testing -- performed by an independent test group
  Acceptance testing -- performed by the customer
Unit Testing
done on individual modules
test module w.r.t module specification
largely white-box oriented
mostly done by programmer
Unit testing of several modules can be done in parallel
requires stubs and drivers
What are Stubs, Drivers ?
Stub eg. module eg. to unit test B
dummy module which call hierarchy in isolation
simulates the function of
a module called by a A Driver
given module under test
Driver
a module which
transmits test cases in
the form of input B B
arguments to the given
module under test and
either prints or interprets
the results produced by
it
C Stub for C
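A minimal Python sketch of both ideas, following the figure (all names are hypothetical): B normally calls C, a stub stands in for C, and a driver feeds test cases to B and reports the results.

    # Stub for C: a dummy module that simulates C with a canned result
    def stub_c(x):
        return 42

    # Module under test: B, which normally calls the real C
    def module_b(x, call_c=stub_c):        # the stub is injected in place of the real C
        return call_c(x) + 1

    # Driver: transmits test cases to B and prints/interprets the results
    def driver():
        for test_input, expected in [(0, 43), (10, 43)]:
            result = module_b(test_input)
            print(test_input, "->", result, "PASS" if result == expected else "FAIL")

    if __name__ == "__main__":
        driver()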
Integration Testing
tests a group of modules, or a subsystem
test subsystem structure w.r.t design, subsystem functions
focuses on module interfaces
largely structure-dependent
done by one developer or a group of developers
Integration Test Approaches
Non-incremental ( Big-Bang integration )
unit test each module independently
combine all the modules to form the system in one step, and
test the combination
Incremental
instead of testing each module in isolation, the next module to
be tested is first combined with the set of modules that have
already been tested
incremental testing approaches: top-down, bottom-up
Example: Module Hierarchy
(figure: a module call hierarchy -- a top-level module calls B, C, and D, which in turn call the lower-level modules E, F, and H)
Bottom-Up Integration Testing
Example:
(figure: modules E and F are each unit tested in isolation, exercised through Driver E and Driver F respectively)
Bottom-Up Integration Testing
Example:
(figure: the tested modules E and F are then combined with the module that calls them, and the combination is exercised through a driver)
Comparison
Top-down integration
  Advantage: a skeletal version of the program can exist early
  Disadvantage: the required stubs could be expensive
Bottom-up integration
  Disadvantage: the program as a whole does not exist until the last module is added
No clear winner.
An effective alternative is a hybrid of bottom-up and top-down:
  - prioritize the integration of modules based on risk
  - modules with the highest-risk functions are integration tested earlier than modules with low-risk functions
Function Testing
Test the complete system with regard to its functional
requirements
Test cases derived from system’s functional specification
all black-box methods for test-case design are applicable
System Testing
Different from Function testing
Process of attempting to demonstrate that the program or system
does not meet its original requirements and objectives as stated in
the requirements specification
Test cases derived from
requirements specification
system objectives, user documentation
Types of System Tests
Volume testing
to determine whether the program can handle the required
volumes of data, requests, etc.
Load/Stress testing
to identify peak load conditions at which the program will fail
to handle required processing loads within required time
spans
Usability (human factors) testing
to identify discrepancies between the user interfaces of a
product and the human engineering requirements of its
potential users.
Security Testing
to devise test cases that attempt to subvert the program's
security checks
Types of System Tests
Performance testing
to determine whether the program meets its performance
requirements (eg. response times, throughput rates, etc.)
Recovery testing
to determine whether the system or program meets its
requirements for recovery after a failure
Installability testing
to identify ways in which the installation procedures lead to
incorrect results
Configuration Testing
to determine whether the program operates properly when
the software or hardware is configured in a required
manner
Types of System Tests
Compatibility/conversion testing
to determine whether the compatibility objectives of the
program have been met and whether the conversion
procedures work
Reliability/availability testing
to determine whether the system meets its reliability and
availability requirements
Resource usage testing
to determine whether the program uses resources
(memory, disk space, etc.) at levels which exceed
requirements
Acceptance Testing
performed by the Customer or End user
compare the software to its initial requirements and the needs of
its end users
Alpha and Beta Testing
Tests performed on a SW product before it is released to a wide
user community.
Alpha testing
conducted at the developer’s site by a User
tests conducted in a controlled environment
Beta testing
conducted at one or more User sites by the end user of the
SW
it is a “live” use of the SW in an environment over which the
developer has no control
Regression Testing
Re-run of previous tests to ensure that SW already tested has not
regressed to an earlier error level after making changes to the
SW.
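In practice this usually means keeping every earlier test in an automated suite and re-running the whole suite after each change. A minimal sketch (hypothetical file and function names) that flags outputs differing from a saved baseline:

    import json, pathlib

    def find_regressions(function, inputs, baseline_file="baseline_outputs.json"):
        """Re-run previously passing inputs; return those whose output no longer matches the baseline."""
        baseline = json.loads(pathlib.Path(baseline_file).read_text())
        return [x for x in inputs if function(x) != baseline[str(x)]]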
When to Stop Testing?
Stop when the scheduled time for testing expires
Stop when all the test cases execute without detecting errors
-- neither of these criteria is a good one
Better Test Completion Criteria
Base completion on use of specific test-case design methods.
Example: test cases are derived from
1) multicondition coverage,
2) boundary-value analysis, and
3) cause-effect graphing,
and all resultant test cases are eventually unsuccessful (i.e., they no longer find errors).
Better Test Completion Criteria
State the completion criteria in terms of number of errors to be
found.
This requires:
an estimate of the total number of errors in the program
an estimate of the % of errors that can be found through testing
estimates of what fraction of errors originate in particular design
processes, and during what phases of testing they get detected.
Better Test Completion Criteria
Plot the number of errors found per unit time during the test
phase.
Stop testing when the rate of error detection falls below a specified threshold.
(figure: two example plots of the number of errors found per week over a six-week test phase)
Test Planning
One master test plan should be produced for the overall testing
effort
purpose is to provide an overview of the entire testing effort
It should identify the test units, features to be tested,
approach for testing, test deliverables, schedule, personnel
allocation, the overall training needs and the risks
One or more detailed test plans should be produced for each
activity - (unit testing, integration testing, system testing,
acceptance testing)
its purpose is to describe in detail how that testing activity will be
performed
Master Test Plan (outline)
(IEEE/ANSI, 1983 [Std 829-1983])
Purpose:
to prescribe the scope, approach, resources, and schedule
of the testing activities
Outline:
Test plan identifier
Introduction
Test Items
Features to be tested
Features not to be tested
Master Test Plan (outline)
Approach
Item pass / fail criteria
Suspension criteria and resumption requirements
Test deliverables
Testing tasks
Environment needs
Responsibilities
Staffing and training needs
Schedule
Risks and contingencies
Approvals
SW Test Documentation
Test Plan
Test design specification
Test cases specification
Test procedure specification
Test incident reports, test logs
Test summary report
SW Test Documentation
Test design specification
to specify refinements of the test approach and to identify
the features to be covered by the design and its associated
tests. It also identifies the test cases and test procedures, if
any, required to accomplish the testing and specifies the
feature pass/fail criteria
Test cases specification
to define a test case identified by a test design specification.
The test case spec documents the actual values used for the
input along with the anticipated outputs. It identifies any
constraints on the test procedures resulting from use of that
specific test case.
Test cases are separated from test designs to allow for use
in more than one design and to allow for reuse in other
situations.
SW Test Documentation
Test procedure specification
to identify all steps required to operate the system and
execute the specified test cases in order to implement the
associated test design.
The procedures are separated from test design specifications
as they are intended to be followed step by step and should
not have extraneous detail.
SW Test Documentation
Test Log
to provide a chronological record of relevant details about
the execution of tests.
Test incident report
to document any test execution event which requires
further investigation
Test summary report
to summarize the results of the testing activities
associated with one or more test design specs and to
provide evaluations based on these results
SW Testing Tools
Capture/playback tools
capture user operations including keystrokes, mouse
activity, and display output
these captured tests form a baseline for future testing of
product changes
the tool can automatically play back previously captured
tests whenever needed and validate the results by
comparing them to the previously saved baseline
this makes regression testing easier
Coverage analyzers
tell us which parts of the product under test have been
executed (covered) by the current tests
identify parts not covered
varieties of coverage -- statement coverage, decision coverage, etc.
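As one concrete illustration, Python's coverage.py analyzer can be driven from code roughly as follows (a sketch assuming coverage.py is installed; run_all_tests is a hypothetical entry point for the product's test suite):

    import coverage                        # the coverage.py analyzer

    cov = coverage.Coverage()
    cov.start()
    run_all_tests()                        # hypothetical: execute the product's test suite
    cov.stop()
    cov.report(show_missing=True)          # per-file statement coverage, listing unexecuted lines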
SW Testing Tools
Memory testing (bounds-checkers)
detect memory problems such as exceeding array bounds, memory
allocated but not freed, and reading or using uninitialized
memory
Test case management
provide a user interface for managing tests
organize tests for ease of use and maintenance
start and manage test execution sessions that run user-
selected tests
provide seamless integration with capture/playback and
coverage analysis tools
provide automated test reporting and documentation
Tools for performance testing of client/server applications
SW Testing Support Tools
Defect tracking tools
used to record, track, and generally assist with the
management of defects
submit and update defect reports
generate pre-defined or user-defined management reports
selectively notify users automatically of changes in defect
status
provide secured access to all data via user-defined queries
THANK YOU