Test Case & Debugging MK 1.2
Test Case Design
[Diagram: white-box methods and black-box methods feed test case design strategies, relating input events to outputs]
Requirements-based testing
• A general principle of requirements
engineering is that requirements should be
testable.
• Requirements-based testing is a validation
testing technique where you consider each
requirement and derive a set of tests for
that requirement.
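For illustration (not from the original slides), take a hypothetical requirement: "the search function shall return every record whose name field matches the query". A sketch of tests derived from that single requirement, in Python/pytest form, where search and the record layout are assumptions:

def search(records, query):
    # Hypothetical system under test: return all records whose
    # "name" field equals the query string.
    return [r for r in records if r["name"] == query]

def test_match_is_returned():
    # Derived from the requirement: a matching record must be returned.
    assert search([{"name": "ada"}, {"name": "grace"}], "ada") == [{"name": "ada"}]

def test_no_match_returns_nothing():
    # Also derived from the same requirement: no match, no records.
    assert search([{"name": "ada"}], "grace") == []

def test_every_match_is_returned():
    # "Every record" implies duplicates must all be returned.
    assert len(search([{"name": "ada"}, {"name": "ada"}], "ada")) == 2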
Partition testing
• Input data and output results often fall into
different classes where all members of a class are
related.
• Each of these classes is an equivalence partition
or domain where the program behaves in an
equivalent way for each class member.
• Test cases should be chosen from each partition.
Equivalence partitioning
[Diagram: system inputs (user queries, mouse picks, input data, prompts) pass through the system to outputs (output formats)]
Sample Equivalence Classes
Valid data
• user-supplied commands
• responses to system prompts
• file names
• computational data
• physical parameters
• bounding values
• initiation values
• output data formatting
• responses to error messages
• graphical data (e.g., mouse picks)
Invalid data
• data outside bounds of the program
• physically impossible data
• proper value supplied in wrong place
Equivalence partitions
[Diagram: input values split into equivalence partitions. One example uses test values 3, 4, 7, 10, 11; a five-digit input example has partitions "less than 10000", "between 10000 and 99999" and "more than 99999", with test values 9999, 10000, 50000, 99999, 100000.]
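The five-digit diagram translates directly into test cases. A minimal sketch in Python/pytest, assuming a hypothetical validator is_valid_id that accepts integers between 10000 and 99999:

def is_valid_id(n):
    # Hypothetical validator for the five-digit partition shown above.
    return 10000 <= n <= 99999

def test_partition_below():
    assert not is_valid_id(9999)       # "less than 10000"

def test_valid_partition():
    assert is_valid_id(10000)          # lower boundary
    assert is_valid_id(50000)          # mid-partition value
    assert is_valid_id(99999)          # upper boundary

def test_partition_above():
    assert not is_valid_id(100000)     # "more than 99999"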
Structural testing
• Sometimes called white-box testing.
• Derivation of test cases according to
program structure. Knowledge of the
program is used to identify additional test
cases.
• Objective is to exercise all program
statements (not all path combinations).
Structural testing
[Diagram: test data derives tests, which exercise the component code and produce test outputs]
Path testing
• The objective of path testing is to ensure that the
set of test cases is such that each path through the
program is executed at least once.
• The starting point for path testing is a program
flow graph that shows nodes representing
program decisions and arcs representing the flow
of control.
• Statements with conditions are therefore nodes in
the flow graph.
Flow graph
[Figure: program flow graph with nodes numbered 1-14]
Independent paths
• 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 14
• 1, 2, 3, 4, 5, 14
• 1, 2, 3, 4, 5, 6, 7, 11, 12, 5, …
• 1, 2, 3, 4, 6, 7, 2, 11, 13, 5, …
• Test cases should be derived so that all of
these paths are executed
• A dynamic program analyser may be used
to check that paths have been executed
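A minimal sketch of the dynamic-analysis idea (the function and node numbering are hypothetical, not the graph in the figure): instrument the component to record each flow-graph node it visits, then assert that the planned path was actually executed.

def classify(x, trace):
    trace.append(1)              # node 1: entry
    trace.append(2)              # node 2: decision
    if x < 0:
        trace.append(3)          # node 3: negative branch
        result = "negative"
    else:
        trace.append(4)          # node 4: non-negative branch
        result = "non-negative"
    trace.append(5)              # node 5: exit
    return result

def test_negative_path():
    trace = []
    assert classify(-1, trace) == "negative"
    assert trace == [1, 2, 3, 5]     # the path that actually ran

def test_non_negative_path():
    trace = []
    assert classify(7, trace) == "non-negative"
    assert trace == [1, 2, 4, 5]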
Selective Testing
[Figure: flow graph with one selected path highlighted, including a loop bounded by "loop < 20 X"]
Cyclomatic Complexity
A number of industry studies have indicated that the higher V(G), the higher the probability of errors. V(G) can be computed as E − N + 2 (edges minus nodes plus two) or, equivalently, as the number of predicate nodes plus one.
[Figure: plot of number of modules against V(G)]
Since V(G) = 4, there are four independent paths:
Path 1: 1,2,3,6,7,8
Path 2: 1,2,3,5,7,8
Path 3: 1,2,4,7,8
Path 4: 1,2,4,7,2,4,...,7,8
Finally, we derive test cases to exercise these paths.
[Figure: flow graph for this example, nodes numbered 1-8]
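A quick cross-check, reading the decisions off the paths above: nodes 2, 3 and 7 each branch, so there are three predicate nodes and V(G) = 3 + 1 = 4, in agreement with the four independent paths.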
Basis Path Testing
Notes
you don't need a flow chart,
but the picture will help when
you trace program paths
[Figure: the four classes of loops: simple, nested, concatenated, and unstructured]
Loop Testing: Simple
Loops
Minimum conditions—Simple Loops
1. skip the loop entirely
2. only one pass through the loop
3. two passes through the loop
4. m passes through the loop, where m < n
5. (n-1), n, and (n+1) passes through
the loop
where n is the maximum number
of allowable passes
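A sketch of these minimum conditions in Python/pytest, for a hypothetical function sum_first(values, limit) whose loop makes at most limit passes (limit plays the role of n):

def sum_first(values, limit):
    # Hypothetical loop under test: sum at most `limit` items.
    total = 0
    for i, v in enumerate(values):
        if i >= limit:
            break
        total += v
    return total

MAX_PASSES = 5  # assumed maximum number of allowable passes (n)

def test_skip_loop_entirely():
    assert sum_first([], MAX_PASSES) == 0            # zero passes

def test_one_and_two_passes():
    assert sum_first([3], MAX_PASSES) == 3           # one pass
    assert sum_first([3, 4], MAX_PASSES) == 7        # two passes

def test_m_passes_below_maximum():
    assert sum_first([1, 1, 1], MAX_PASSES) == 3     # m = 3 < n

def test_n_minus_one_n_and_n_plus_one():
    data = [1] * 10
    assert sum_first(data, MAX_PASSES - 1) == 4      # n-1 passes
    assert sum_first(data, MAX_PASSES) == 5          # n passes
    assert sum_first(data[:MAX_PASSES], MAX_PASSES + 1) == 5  # attempt n+1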
Loop Testing: Nested Loops
1. Start at the innermost loop. Set all outer loops to their minimum iteration parameter values.
2. Test the min+1, typical, max-1 and max values for the innermost loop, while holding the outer loops at their minimum values.
3. Move out one loop and set it up as in step 2, holding all other loops at typical values. Continue this step until the outermost loop has been tested.
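A sketch of step 2 for a hypothetical doubly nested loop: the inner bound is driven through min+1, typical, max-1 and max while the outer loop is pinned at its minimum (the bounds here are assumptions):

import pytest

def make_grid(rows, cols):
    # Hypothetical nested loop under test: builds a rows x cols grid.
    return [[0 for _ in range(cols)] for _ in range(rows)]

INNER_MIN, INNER_TYPICAL, INNER_MAX = 0, 10, 100   # assumed bounds
OUTER_MIN = 1                                      # outer loop held at minimum

@pytest.mark.parametrize(
    "cols", [INNER_MIN + 1, INNER_TYPICAL, INNER_MAX - 1, INNER_MAX])
def test_inner_loop_with_outer_at_minimum(cols):
    grid = make_grid(OUTER_MIN, cols)
    assert len(grid) == OUTER_MIN
    assert all(len(row) == cols for row in grid)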
Concatenated Loops
If the loops are independent of one another, treat each as a simple loop; otherwise*, treat them as nested loops.
* for example, when the final loop counter value of loop 1 is used to initialize loop 2.
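A minimal sketch of that dependence (hypothetical code): the counter where loop 1 stops becomes the starting index of loop 2, so the two concatenated loops must be tested as if they were nested.

def sum_from_first_negative(values):
    # Loop 1: scan until the first negative value.
    i = 0
    while i < len(values) and values[i] >= 0:
        i += 1
    # Loop 2: its start is initialized by loop 1's final counter value,
    # so the loops are dependent, not independent.
    total = 0
    for j in range(i, len(values)):
        total += values[j]
    return total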
Alpha, Beta and Acceptance Testing
• The term Acceptance Testing is used when the software is developed for a specific customer. A series of tests is conducted to enable the customer to validate all requirements. These tests are conducted by the end user / customer and may range from ad hoc tests to a well-planned, systematic series of tests.
• The terms alpha and beta testing are used when the software is developed as a product for anonymous customers.
• Alpha Tests are conducted at the developer's site by some potential customers. These tests are conducted in a controlled environment. Alpha testing may be started when the formal testing process is near completion.
• Beta Tests are conducted by the customers / end users at their sites. Unlike alpha testing, the developer is not present. Beta testing is conducted in a real environment that cannot be controlled by the developer.
Verification and Validation
• Verification is the process of evaluating a
system or component to determine whether the
products of a given development phase satisfy the
conditions imposed at the start of that phase.
• Validation is the process of evaluating a system or component during or at the end of the development process to determine whether it satisfies the specified requirements.
• Testing = Verification + Validation
Validation Testing
• It refers to testing the software as a complete product.
• This should be done after unit & integration testing.
• Alpha, beta & acceptance testing are nothing but various ways of involving the customer during testing.
• IEEE has developed a standard (IEEE Std 1059-1993) entitled "IEEE Guide for Software Verification and Validation Plans" to provide specific guidance about planning and documenting the tasks required by the standard, so that the customer may write an effective plan.
Validation testing improves the quality of the software product in terms of functional capabilities and quality attributes.
Static Testing
• Static testing is a form of software testing
where the software isn't actually used. This
is in contrast to dynamic testing. It is
generally not detailed testing, but checks
mainly for the sanity of the code, algorithm,
or document. It is primarily syntax checking
of the code and/or manually reviewing the
code or document to find errors. This type
of testing can be used by the developer who
wrote the code, in isolation.
Static Testing
• Code reviews, inspections and
walkthroughs are also used.
• From the black box testing point of view,
static testing involves reviewing
requirements and specifications. This is
done with an eye toward completeness or
appropriateness for the task at hand. This is
the verification portion of
Verification and Validation.
• Even static testing can be automated. A static test suite consists of programs to be analyzed by an interpreter or a compiler that asserts the program's syntactic validity (a sketch follows this list).
• Bugs discovered at this stage of development are less expensive to fix than those found later in the development cycle.
• The people involved in static testing are application developers, testers, and business analysts.
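A minimal sketch of that kind of automation in Python, using the standard library's ast module to assert syntactic validity without running the code (the file names are hypothetical):

import ast
import sys

def check_syntax(path):
    # Statically check one source file: parse it, never execute it.
    with open(path) as f:
        source = f.read()
    try:
        ast.parse(source, filename=path)   # raises SyntaxError if invalid
        return True
    except SyntaxError as e:
        print(f"{path}:{e.lineno}: {e.msg}", file=sys.stderr)
        return False

if __name__ == "__main__":
    # Exit non-zero if any file in the (hypothetical) suite fails to parse.
    files = sys.argv[1:] or ["example_module.py"]
    sys.exit(0 if all(check_syntax(p) for p in files) else 1)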
Code Review
• Code review is systematic examination
(often as peer review) of computer source
code intended to find and fix mistakes
overlooked in the initial development
phase, improving both the overall quality of
software and the developers' skills. Reviews
are done in various forms such as pair
programming, informal walkthroughs, and
formal inspections.
Code Inspection
• An inspection is one of the most common
sorts of review practices found in software
projects. The goal of the inspection is for all
of the inspectors to reach consensus on a
work product and approve it for use in the
project. Commonly inspected work
products include software requirements
specifications and test plans.
Debugging Techniques
• backtracking
• induction
• deduction
Induction approach
• Locate the pertinent data
• Organize the data
• Devise a hypothesis
• Prove the hypothesis
Deduction approach
• Enumerate the possible causes or hypotheses
• Use the data to eliminate possible causes
• Refine the remaining hypothesis
• Prove the remaining hypothesis
Debugging: Final Thoughts
1. Don't run off half-cocked; think about the symptom you're seeing. It may stem from:
• change in environment
• change in infrastructure/technology
• major change in requirements
• increase in complexity
• deterioration in the structure of the code, making it extremely difficult to maintain
• slow execution speed
• poor graphical user interfaces
SW Reliability
"Software reliability means operational reliability. Who cares how many bugs are in the program?"
As per the IEEE standard:
"Software reliability is defined as the ability of a system or component to perform its required functions under stated conditions for a specified period of time."