CS608
Lecture No. 1
Module 1
Introduction
Software Verification and Validation
Agenda
1. Introduction.
2. Motivation for software
testing
3. Sources of problems
4. Working definition of
reliability and software
testing
5. What is a software fault,
error, bug, failure or
debugging
Software Verification and Validation
Agenda
6. Software testing and
Software lifecycle
7. Software testing myths
8. Goals and Limitations of
Testing
Module 2
Motivation for Software
Testing
Motivation for Software Testing
1. Software today:
Software systems are
becoming increasingly
complex:
Size of the software
Time to market
Increasing emphasis on
the GUI component
They are also becoming defective
• How many defects?
• What kind?
Motivation for Software Testing
2. Several reasons
contribute to these defects,
e.g.,
Poor Requirements
elicitation: Erroneous,
incomplete,
inconsistent
requirements.
Inadequate Design:
Fundamental design flaws
in the software.
Motivation for Software Testing
2. Several reasons
contribute to these defects,
e.g.,
Improper Implementation:
Mistakes in chip
fabrication, wiring,
programming faults,
malicious code.
Defective
Support
Systems:
Poor programming
languages, faulty
compilers and debuggers,
misleading development
tools.
Motivation for Software Testing
2. Several reasons
contribute to these defects,
e.g.,
Inadequate Testing of
Software: Incomplete
testing, poor
verification, mistakes in
debugging.
Motivation for Software Testing
2. Several reasons
contribute to these defects,
e.g.,
Evolution: Sloppy
redevelopment or
maintenance,
introduction of new flaws
in attempts to fix old
flaws, incremental
escalation to inordinate
complexity.
Motivation for Software Testing
3. Defective software
contributes to several
problems; examples
include:
Faulty
Communications: Loss
or corruption of
communication media,
non-delivery of data.
Space Applications: Lost
lives, launch delays.
Defense systems:
Misidentification of friend
or foe.
Motivation for Software Testing
3. Defective software
contributes to several
problems; examples
include:
Transportation: Deaths,
delays, sudden
acceleration, inability to
brake.
Safety-critical
applications: Death,
injuries.
Health care applications:
Death, injuries, power
outages, long-term health
hazards (radiation).
Motivation for Software Testing
Money Management:
Fraud, violation of
privacy, shutdown of
stock exchanges and
banks, negative interest
rates.
Control of Elections:
Wrong results
(intentional or non-
intentional).
Motivation for Software Testing
Control of Jails:
Technology-aided escape
attempts and successes,
accidental release of
inmates, failures in
software controlled locks.
Law
Enforcement:
False arrests and
imprisonments.
Motivation for Software Testing
1. We consider some
examples of software
failures that resulted, or
could have resulted, in
human and/or
financial losses
2. Examples of human losses:
In Texas, 1986, a man
received between 16,500 and
25,000 rads in less than 1
sec, over an area of about 1
cm.
Motivation for Software Testing
He lost his left arm, and died
of complications 5 months
later.
In Texas, 1986, a man
received at least 4,000 rads
in the right temporal lobe of
his brain.
The patient eventually died
as a result of the overdose.
Motivation for Software Testing
3. Examples of financial
losses:
A group of hacker-
thieves hijacked the
Bangladesh Bank
system to steal funds.
The group successfully
transferred $81
million in four
transactions, before
making a spelling
error that tipped off
the bank, causing
another $870 million
in transfers to be
canceled.
Motivation for Software Testing
NASA Mars Polar Lander,
1999
On December 3, 1999,
NASA's Mars Polar
Lander disappeared
during its landing
attempt on the Mars
surface.
A Failure Review Board
investigated the failure
and determined that the
most likely reason for the
malfunction was the
unexpected setting of a
single data bit.
The problem wasn't caught
by internal tests.
Motivation for Software Testing
Malaysia Airlines jetliner, August 2005
As a Malaysia Airlines jetliner cruised from Perth,
Australia, to Kuala Lumpur, Malaysia, the autopilot system
malfunctioned.
The captain disconnected the autopilot, eventually regained
control, and manually flew the 177 passengers safely back
to Australia.
Investigators discovered that a defective software
program had provided incorrect data about the aircraft’s
speed and acceleration, confusing flight computers.
There are countless such examples…
Module 3
Sources of problems
Sources of Problems
1. The software does not do
something that the specification
says it should do.
2. The software does
something that the
specification says it
should not do.
3. The software does something
that the specification does not
mention.
Sources of Problems
4. The software does not do
something that the product
specification does not
mention but should.
5. The software is difficult to
understand, hard to use,
slow …
6. Failures result due to:
1. Lack of logic
2. Inadequate testing of
software under test
(SUT)
3. Unanticipated use of
application
Sources of Problems
1. We need to spend time and
financial resources to fix these
errors or bugs, in the following
manner:
Cost to fix a bug increases
exponentially (10x)
• i.e., it increases tenfold with
each later phase in which it is found
E.g., a bug found during
specification costs $1 to fix.
… if found in design cost is
$10
… if found in code cost is
$100
… if found in released
software cost is $1000
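The tenfold rule above can be written as a tiny calculation; the phase names and the $1 baseline come from the slide, while the helper function itself is only an illustration:

```python
# Illustrative only: cost to fix a bug grows ~10x per phase (baseline $1).
PHASES = ["specification", "design", "code", "released software"]

def fix_cost(phase, baseline=1):
    """Return the approximate cost of fixing a bug found in `phase`."""
    return baseline * 10 ** PHASES.index(phase)

print(fix_cost("specification"))      # 1
print(fix_cost("released software"))  # 1000
```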
Module 4
Working definition of software
reliability and software testing
Definition: Software Reliability
1. Is bug-free
software possible?
1. We have human factors
2. Specification-
implementation
mismatches
3. Discussed in detail under
failure reasons
2. We release
software that still contains
errors, even after seemingly
sufficient testing
Definition: Software Reliability
3. No software would ever be
released by its developers
if they are asked to certify
that the software is free of
errors
4. Software reliability is one of
the important factors of
software quality.
Definition: Software Reliability
5. Other factors are
understandability,
completeness,
portability, consistency,
maintainability, usability,
efficiency, …
6. These quality factors are
known as non-functional
requirements for a software
system.
Definition: Software Reliability
1. Some definitions:
Error: A measure of the
difference between the
actual and the ideal.
Fault: A condition that
causes a system to fail
in performing its
required function.
Fault, error, bug, failure …
1. Some definitions:
Failure: Inability of a
system or component
to perform a required
function according to
its specifications.
Debugging: Activity by
which faults are
identified and
rectified.
Fault, error, bug, failure …
1. Faults have different levels of severity:
Critical. A core functionality of the system fails or the system
doesn’t work at all.
Major. The defect impacts basic functionality and the system
is unable to function properly.
Moderate. The defect causes the system to generate
false, inconsistent, or incomplete results.
Minor. The defect impacts the business but only in very few cases.
Cosmetic. The defect is only related to the interface
and appearance of the application.
2. While testing, we attribute different outcomes to
different severities
Fault, error, bug, failure …
1. Test case: Inputs to test the program and the predicted
outcomes (according to the specification). Test cases are
formal procedures:
inputs are prepared
outcomes are predicted
tests are documented
commands are executed
results are observed and evaluated
Fault, error, bug, failure …
1. All of these steps are subject
to mistakes.
When does a test “succeed”?
“fail”?
2. Test suite: A collection of
test cases
Testing oracle: a program,
process, or body of data which
helps us determine whether the
program produced the correct
outcome.
Oracles are a set of
input/expected output
pairs.
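The oracle idea above can be sketched as a set of input/expected-output pairs consulted after each test run; the squaring program and the function names are assumptions for the example, not from the slides:

```python
# Illustrative sketch: an oracle as a set of input/expected-output pairs.
def program_under_test(x):
    return x * x  # hypothetical SUT: squares its input

# The oracle: for each test input, the outcome the specification predicts.
oracle = {0: 0, 3: 9, -4: 16}

def run_tests():
    """Compare actual outcomes against the oracle; return failing inputs."""
    return [x for x, expected in oracle.items()
            if program_under_test(x) != expected]

print(run_tests())  # [] means every test "succeeded" per the oracle
```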
Fault, error, bug, failure …
1. Test data: Inputs which have been devised to test the system.
2. Test cases: Inputs to test the system and the predicted
outputs from these inputs if the system operates according to
its specification
3. Outcome: What we expect to happen as a result of the test.
In practice, outcome and output may not be the same.
For example, the fact that the screen did not change as a
result of a test is a tangible outcome although there is no
output.
4. In testing we are concerned with outcomes, not just outputs.
5. If the predicted and actual outcome match, can we say that
the test has passed?
Fault, error, bug, failure …
1. Expected outcome: the
expectation that we
associate with the
response of a particular test
execution
2. Sometimes, specifying the
expected outcome for a
given test case can be a tricky
business!
Fault, error, bug, failure …
For some applications
we might not know what
the outcome should be.
For other applications
the developer might have
a misconception
Finally, the program may
produce too much output
to be able to analyze it in a
reasonable amount of time.
Fault, error, bug, failure …
In general, this is a fragile
part of the testing
activity, and can be very
time consuming.
In practice, this is an area
with a lot of hand-
waving.
When possible, automation
should be considered as a
way of specifying the
expected outcome, and
comparing it to the actual
outcome.
END
Module 6
Software testing and Software development
lifecycle
Software Testing - development lifecycle
Software Development
Lifecycle
Module 7
Software testing myths
Software testing myths
1. If we were really good at
programming, there would
be no bugs to catch. There
are bugs because we are bad
at what we do.
2. Testing implies an
admission of failure.
3. Tedium of testing is a
punishment for our
mistakes.
Software testing myths
4. All we need to do is:
concentrate
use structured programming
use OO methods
use a good
programming language
…
Software testing myths
1. Human beings make
mistakes, especially when
asked to create complex
artifacts such as software
systems.
2. Studies show that even
good programs have 1-3
bugs per 100 lines of code.
Software testing myths
1. Software testing:
A successful test is a
test which discovers one
or more faults.
The only validation
technique for non-
functional requirements.
Should be used in
conjunction with
static verification.
Software testing myths
2. Defect testing:
The objective of
defect testing is to
discover defects in
programs.
A successful defect test is
a test which causes a
program to behave in an
anomalous way.
Tests show the presence
not the absence of
defects.
Module 8
Goals and Limitations of Testing
Goals and Limitations of Testing
1. Discover and prevent bugs.
2. The act of designing tests
is one of the best bug
preventers known. (Test,
then code philosophy)
3. The thinking that must be
done to create a useful test
can discover and eliminate
bugs in all stages of
software development.
Goals and Limitations of Testing
4. However, bugs will always
slip by, as even our test
designs will sometimes be
buggy.
5. Most widely-used activity
for ensuring that software
systems satisfy the specified
requirements.
6. Consumes substantial
project resources. Some
estimates:
~50% of development costs
Goals and Limitations of Testing
• Testing cannot occur until
after the code is written.
• The problem is big!
• Perhaps the least
understood major SE
activity.
• Exhaustive testing is not
practical even for the
simplest programs. WHY?
Goals and Limitations of Testing
• Even if we “exhaustively” test
all execution paths of a
program, we cannot
guarantee its correctness.
• The best we can do is
increase our confidence
!
• “Testing can show the
presence of bugs, not their
absence.”
Goals and Limitations of Testing
1. Testers do not have
immunity to bugs.
2. Slight modifications – after a
program has been tested –
invalidate (some or even all
of) our previous testing
effort.
3. Automation is critically
important.
4. Unfortunately, there are only
a few good tools, and in
general, effective use of
these good tools is very
limited.
END
Software Verification
and Validation
Lecture No. 2
Module 9
Software Quality
Attributes
Software Quality Attributes
What is Quality
Conformance to explicitly
stated functional and
performance requirements,
explicitly documented
development standards, and
implicit characteristics that
are expected of all
professionally developed
software (Pressman)
The degree to which a
system, component, or
process meets specified
requirements.
Software Quality Attributes
The degree to which a
system, component, or
process meets customer
or user needs or
expectations (IEEE)
Why Quality
You would not like MS
Office to occasionally
fail to save your documents
Competitiveness of
International
Market
Software Quality Attributes
Cost of Quality
Software bugs, or errors,
are so prevalent and so
detrimental that they cost
the U.S. economy an
estimated $59.5 billion
annually, or about 0.6
percent of gross domestic
product. (US Dept of
Commerce, 2002)
Software Quality Attributes
Quality (Our working
definition)
The totality of features and
characteristics of a product,
process, or service that bear
on its ability to satisfy
stated or implied needs.
Software Quality Attributes
The default document for
quality:
• We need to know where
the stated or implied
needs are
• Against which we test
whether they are met
• We refer to these quality
features and
characteristics as Quality
Attributes
• They are part of Software
requirements specification
(SRS ) document
Software Quality Attributes
Non-functional Requirements
Product transition related
Portability
Interoperability
Reusability
Product Operations related
Reliability
Robustness
Efficiency
Usability
Software Quality Attributes
Safety
Security
Fault-tolerance
Product revision related
Maintainability
…
We need to gauge,
from our default
document for quality, the
requirements for
assurance of quality and
reliability
We need to plan accordingly
Software Quality Attributes
Quality attribute examples:
System should be capable
of processing 10
transactions per minute
The system should only
allow authorized
access.
Users should be forced
to change their
passwords every tenth
usage
System should maintain
logs of disk activity
…
END
Module 10:
Quality Control vs. Quality Assurance
Quality Control vs. Quality Assurance
Quality Assurance
• It is fault prevention
through process design
and auditing
Creating processes,
procedures, tools, jigs
etc., to prevent faults
from occurring
Prevent defect injection
as much as possible
Process oriented
Quality Control vs. Quality Assurance
Quality Control
• It is fault/failure detection
through static and/or
dynamic testing of
artefacts
• It is examining of
artefacts against pre-
determined criteria to
measure conformance
• Product oriented
Quality Control vs. Quality Assurance
Quality Assurance (QA) vs. Quality Control (QC):
QA is a procedure that focuses on providing assurance that the quality
requested will be achieved; QC is a procedure that focuses on fulfilling
the quality requested.
QA aims to prevent defects; QC aims to identify and fix defects.
QA is a method to manage quality (verification); QC is a method to
verify quality (validation).
QA does not involve executing the program; QC always involves
executing the program.
QA is a preventive technique; QC is a corrective technique.
QA is a proactive measure; QC is a reactive measure.
Quality Control vs. Quality Assurance
END
Module 11:
Product Quality and Process Quality
Product Quality and Process Quality
1. Product Quality
1. Product quality means
to incorporate features
that have a capacity to
meet consumer needs
(requirements)
2. gives customer
satisfaction by improving
products (goods) and
making them free from
any deficiencies or
defects.
Product Quality and Process Quality
3. There are various
important aspects
which define a product
quality like:
1. Storage, Quality
of Design
2. Quality of Conformance
3. Reliability, Safety
Product Quality and Process Quality
1. Process Quality
1. Process quality is defined
as the quality of all the steps
used in manufacturing the
final product.
2. Its focus is on all
activities and steps used
to achieve maximum
acceptance, regardless
of the final product.
Product Quality and Process Quality
1. Difference
1.Process quality is one of a
number of contributors
to product quality.
2.Product quality is the
overall quality of the
product in question:
how well it conforms to
the product
requirements,
specifications, and
ultimately customer
expectations.
Product Quality and Process Quality
1. Process quality focuses
on how well some part of
the process of developing
a particular product, and
getting it into the
customer’s hands, is
working.
2. The process being
analyzed in a particular
case may have a very
broad scope, or it may
focus in on minute
details of a single step.
Product Quality and Process Quality
1. Product Quality – Degree
of conformance to
specification as defined
by the Customer
2. Process Quality –
Degree of variability in
process execution and
the minimization of the
Cost of Quality
END
Module 12:
Software verification and validation
Software verification and validation
Verification:
Process of assessing a
software product or system
during a particular phase to
determine if it meets the
requirements/conditions
specified at beginning of
that phase.
It is static and includes
examination of necessary
artifacts: code, design, and
program documentation.
Software verification and validation
Validation:
Validation is the process
of testing a software
product after its
development process gets
completed
It is conducted to
determine whether a
particular product
meets the requirements
that were specified for
the product
Software verification and validation
It is a dynamic process
that evaluates a
product on parameters
that help assess
whether it meets the
expectations and
requirements of the
customers
Some common methods
of conducting validation
are white box testing,
black box testing, and
gray box testing.
END
Module 13:
Difference between Validation
and Verification
Difference between Validation & Verification
Validation:
"Are we building the right
product?"
The software should do
what the user really
requires.
V and V process
Verification:
"Are we building the
product right?"
The software should
conform to its
specification.
V and V process
Validation: process of evaluating a system or a component
during or at the end of development process to determine
whether it satisfies specified requirements. That is: are we
building the right product
[Diagram: the formal world (specification) vs. the concrete world (reality).]
Verification is only as good as the validity of the model on which it is based.
Testing can only show the presence of errors, not their absence.
V and V process
Verification and validation should establish confidence that
the software is fit for its purpose.
This does NOT mean completely free of defects.
Rather, it must be good enough for its intended use. The type
of use will determine the degree of confidence that is
needed.
Reliability validation
Does the measured
reliability of the
system meet its
specification?
Safety validation
Does the system
always operate in such
a way that accidents do
not occur or that
accident consequences
are minimised?
Critical System Validation
Security validation
Is the system and
its data secure
against external
attack?
END
Software Verification
and Validation
Lecture No. 3
Software Verification and Validation
Agenda
1. Reliability Validation, Safety
Assurance, Security assessment,
2. Testing philosophies
3. Testing strategies
4. top down testing strategy
5. bottom up testing strategy
6. Analytical, model based, process oriented
strategies
7. Testing Comprehensiveness
Module 16:
Reliability Validation, Safety Assurance,
Security assessment
Reliability Validation
Critical systems validation (as
discussed)
Lecture No. 4
Software Verification and Validation
Agenda
1. Testing phases
2. unit testing
3. integration testing
4. system testing
5. acceptance testing
6. alpha and beta testing
7. Focus of unit, integration and (sub-
)system
8. Artifacts involved in testing
9. The big picture
Module 21:
Testing
Phases
Testing Phases
Unit Testing
Individual units of the software
are tested.
The purpose is to validate that
each unit of the software
performs as designed.
Integration Testing
individual units are combined
and tested as a group.
Testing Phases
The purpose of this level of testing
is to expose faults in the
interaction between integrated
units.
System Testing
complete, integrated system is
tested.
purpose of this test is to
evaluate the system’s
compliance with the specified
requirements.
Testing Phases
Acceptance Testing
system is tested for acceptability.
purpose of this test is to evaluate the system’s compliance with the business
requirements and assess whether it is acceptable for delivery.
Important: note the direction of the arrows (in the testing phases diagram)
Testing Phases
We assume at each phase of
testing that:
Previous phase of
testing completed
As an example:
For integration testing,
Our assumption is:
Unit testing is done
and complete
End
Module 22:
Unit Testing
Unit Testing
Performed by each developer.
Scope: Ensure that each module (i.e., class, subprogram), or
collection of modules as appropriate, has been
implemented correctly.
‘White-box’ form of testing.
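A minimal unit test of the kind described above can be sketched with Python's unittest module; the `add` function stands in for a hypothetical unit under test and is not from the slides:

```python
# Illustrative unit test; `add` is a hypothetical unit under test.
import unittest

def add(a, b):
    """Hypothetical unit under test."""
    return a + b

class TestAdd(unittest.TestCase):
    def test_positive(self):
        self.assertEqual(add(2, 3), 5)

    def test_negative(self):
        self.assertEqual(add(-2, -3), -5)

if __name__ == "__main__":
    # exit=False so the script continues after reporting results.
    unittest.main(exit=False, argv=["unit-tests"])
```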
Unit Testing
[Diagram: the developer writes test cases, executes them against the code, and observes the test result.]
Unit Testing
End
Module 23:
Integration Testing
Integration testing
• System design is implemented
by group of developers.
• logic implemented by one
developer may differ from
that of another developer,
• Sometimes a data structure or its
implementation changes when it
travels from one module to
another. Some values are
appended or removed, which
causes issues in the later
modules
Integration testing
• Modules may interact with
some third party tools or APIs
• We need to test that data
accepted by that API / tool is
correct and that the response
generated is also as expected.
• Frequent requirement changes,
misunderstood design, etc.
Integration testing
Performed by a small team.
Scope: Ensure that the interfaces between components (which developers could
not test) have been implemented correctly.
Approaches:
• “big-bang”
• Incremental construction
Test cases have to be planned, documented, and reviewed.
Performed in a small time-frame
Integration testing
Top-down Integration
Bottom-up Integration
[Diagram: a module hierarchy with A at the top; B, F, G in the middle; C, D, E at the bottom.]
Top-down: the top module is tested with stubs; stubs are replaced one at
a time, "depth first"; as new modules are integrated, some subset of
tests is re-run.
Bottom-up: drivers are replaced one at a time; worker modules are
grouped into builds and integrated, cluster by cluster.
Integration testing
• Big-bang – Advantages:
– The whole system is available
• Big-bang – Disadvantages:
– Hard to guarantee that all
components will be ready at
the same time
– Focus is not on a specific
component
– Harder to locate errors
Integration testing
• Incremental – Advantages:
– Focus on each module leads
to better testing
– Easy to locate errors
• Incremental – Disadvantage:
– Need to develop special code
(stubs and/or drivers)…
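The "special code" mentioned above (stubs and drivers) can be sketched minimally in Python; the account/balance names are invented for the illustration:

```python
# Illustrative stubs and drivers for incremental integration.

# Stub (top-down): stands in for a lower-level module not yet integrated.
def fetch_balance_stub(account_id):
    """Returns a canned value instead of querying the real database."""
    return 100.0

# Module under integration, wired to the stub for now.
def can_withdraw(account_id, amount, fetch_balance=fetch_balance_stub):
    return fetch_balance(account_id) >= amount

# Driver (bottom-up): exercises a lower-level module before its callers exist.
def driver():
    assert can_withdraw("acct-1", 50.0)
    assert not can_withdraw("acct-1", 500.0)
    return "driver: all checks passed"

print(driver())
```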
End
Module 24:
System Testing
System Testing
Performed by a separate group
within the organization.
Scope: Pretend we are the end-users of
the product.
Focus is on functionality, but must
also perform many other types of
tests (e.g., recovery, performance).
System Testing
Black-box form of testing.
Test case specification driven by
use-cases.
The whole effort has to be planned
(System Test Plan).
Test cases have to be designed,
documented, and reviewed.
System Testing
Adequacy based on requirements
coverage.
• but must think beyond
stated requirements
Support tools have to be
[developed]/used for
preparing data, executing the test
cases, analyzing the results.
Group members must develop
expertise on specific system features
/ capabilities.
System Testing
Often, need to collect and
analyze project quality data.
The burden of proof is always on the
ST group.
Often, the ST group gets the initial
blame for “not seeing the
problem before the customer
did”.
End
Module 25:
Alpha and Beta Testing
Alpha and Beta Testing
Depending on the system being
developed, and the organization
developing the system, other
testing phases may be appropriate:
Alpha and Beta Testing
• Alpha Testing
• is a type of acceptance testing;
• performed to identify all
possible issues/bugs before
releasing the product to
everyday users or public
• The testers are internal
employees of the organization,
mainly in-house software QA and
testing teams
Alpha and Beta Testing
• Beta Testing
• is the second phase of
software testing
• a representative sample of
the intended audience
tries the product out.
• Beta Testing of a product is
performed by real users of
the software application in
a real environment.
End
Module 26:
Focus, Artifacts and The Big Picture
Focus, artifacts and the big picture
[Diagram: Developer 1…n and Tester 1…n, coordinated by a Development Manager, Quality Assurance, PM, and CM; some techniques sit on the boundary between adjacent phases.]
Focus
Unit Testing: every unit is working properly; all states are represented;
all functions are working properly.
Integration Testing: all units/modules are collaborating as per the given
design.
System Testing: all requirements are implemented.
Artifacts
Unit Testing: code.
Integration Testing: code + design.
System Testing: SRS.
The Big Picture
Unit Testing techniques: CFG, DFG, …; exit criteria: specific coverage
criteria.
Integration Testing techniques: MM Paths; exit criteria: specific
coverage criteria.
System Testing techniques: specification-based and model-based (e.g.,
state-machine) techniques; exit criteria: specific coverage criteria,
confidence.
Focus, artifacts and the big picture
• We have fixed:
• Our focus
• We have pinpointed:
• Artifacts of our interest
• We have painted:
• Our big picture
End
Software Verification
and Validation
Lecture No. 5
Software Verification and Validation
Agenda
1. Structural testing techniques
2. Parts of a graph: nodes, edges, cycles
3. Control flow based testing
4. Control flow graph (CFG)
5. Example of a CFG
6. Control flow graphs of common
control structures
7. From CFG to path selection
Module 27:
Structural testing techniques
Structural Testing Techniques
• We have defined:
• A mistake in coding is called an error; an error found by a tester is
called a defect; a defect accepted by the development team is called a bug
• A test case is inputs, pre-conditions, and expected output
• A test case can have positive or negative intent
• Structural or white-box techniques consider code
Structural Testing Techniques
Example
Consider the following code with a constraint that n < 5:
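The slide's code listing did not survive extraction. A plausible reconstruction, assuming a straightforward recursive factorial with the stated n < 5 constraint, is sketched below (hypothetical, not the original listing):

```python
# Hypothetical reconstruction of the slide's example (original listing lost).
def factorial(n):
    """Factorial of n, under the slide's constraint n < 5."""
    if n >= 5:
        raise ValueError("constraint: n < 5")
    if n <= 1:
        return 1
    return n * factorial(n - 1)

print(factorial(2))  # 2
print(factorial(3))  # 6
```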
Structural Testing Techniques
• Is the test 2 = factorial(2) equivalent to 6 = factorial(3)?
• Which one should be included, and why? Which one should be excluded?
• What part of the code was exercised? Why, and why not?
• How to select test cases to test our system under test (SUT)
• Once selected, can we run all possible test cases?
• Remember, we are working in a software house
• Economic activities must justify spending on testing effort
• Each test run costs money
• Imagine, SUT is a code fragment from:
• Safety critical system
• Lab assignment
Structural Testing Techniques
• Uranium enrichment gas handling system
• Radiation treatment equipment
• Can we have
• Same number of test cases
• Many test cases with lots of repetition of values
• What to do?
• We need to devise a strategy for selecting a subset of test cases from
all possible test cases
• We need to show that they are representative of all possible
test cases
• We need to show that we have tested all aspects of the SUT
Structural Testing Techniques
• When to test?
• At unit, integration, (sub-)
system, acceptance stages
• How and how much?
• We want to find out when to
stop testing
• We consider functional
aspects of our SUT while unit
and integration testing
• However, we need to define
• Test case selection
• Test adequacy
End
Module 28:
Test Selection and Adequacy Criteria
Test selection and adequacy criteria
• Test selection criteria,
i.e., conditions that must
be fulfilled by a test.
• For example, a criterion for
a numerical program whose
input domain is the integers
• Might specify that each test
contain one positive integer,
one negative integer, and
zero.
{ 3, 0, -7 }, { 122, 0, -11 }, { 1, 0,
-1 } are three of the tests
selected by this criterion.
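The criterion just described is easy to check mechanically; `satisfies_criterion` is an invented helper, shown only to make the idea concrete:

```python
# Illustrative check of the selection criterion from the text:
# each test must contain one positive integer, one negative integer, and zero.
def satisfies_criterion(test):
    return (any(x > 0 for x in test)
            and any(x < 0 for x in test)
            and 0 in test)

print(satisfies_criterion({3, 0, -7}))     # True
print(satisfies_criterion({122, 0, -11}))  # True
print(satisfies_criterion({1, 2, 3}))      # False: no negative, no zero
```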
Test selection and adequacy criteria
• Test adequacy criteria, i.e., the
properties of a program that
must be exercised to
constitute thorough testing.
• For example, we consider
that our software has been
adequately tested:
• If the test suite causes each
method to be executed at
least once. This provides a
measurable objective for
completeness.
Test selection and adequacy criteria
Adequacy
Given a program P written to
meet a set of functional
requirements R = {R1, R2, …,
Rn}.
Let T contain k tests for
determining whether or not
P meets all requirements in
R.
Assume that P produces correct
behavior for all tests in T.
Test selection and adequacy criteria
• We now ask:
• Has P been tested
thoroughly? Or: is T
adequate?
• In software testing, the terms
“thorough” and “adequate”
have the same meaning.
• Measurement of adequacy
• Adequacy is measured for
a given test set and a given
criterion.
• A test set is adequate with
respect to criterion C when
it satisfies C.
Test selection and adequacy criteria
Measurement of adequacy
Let
C be a test adequacy criterion
Test suite T be a set of test cases such that T = {t1, t2, t3, … tn}
Program P be the implementation of Specification S
R denotes set of requirements such that program P written to meet
functional requirements set R = {R1, R2, …, Rm}.
Test suite T is adequate with respect to C if for each r in R there is a
test case t in T that tests the correctness of P with respect to r.
Test selection and adequacy criteria
Given an adequacy criterion C, we derive a finite set Ce known as
the coverage domain.
A criterion C is a white-box test adequacy criterion if the corresponding Ce
depends solely on the program P under test.
A criterion C is a black-box test adequacy criterion if the corresponding Ce
depends solely on the requirements R for the program P under test.
Coverage gives us a measure for test selection criteria:
T covers Ce if, for each e' in Ce, there is a test case in T that tests e'. T is
adequate wrt C if it covers all elements of Ce.
T is inadequate with respect to C if it covers only k < n elements of Ce,
where n = |Ce|; k/n is the coverage of T wrt C.
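The k/n measure can be computed directly; the representation of tests and of Ce below (branch labels, a `covers` predicate) is an assumption for illustration:

```python
# Illustrative coverage measurement: k/n over a coverage domain Ce.
def coverage(ce, tests, covers):
    """Fraction of elements of Ce exercised by at least one test.

    `covers(t, e)` is an assumed predicate: does test t exercise element e?
    """
    k = sum(1 for e in ce if any(covers(t, e) for t in tests))
    return k / len(ce)

# Toy example: elements are branch labels; a test "covers" the branches it lists.
ce = {"b1", "b2", "b3", "b4"}
tests = [{"b1", "b2"}, {"b2", "b3"}]
print(coverage(ce, tests, lambda t, e: e in t))  # 0.75  (k = 3, n = 4)
```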
Test selection and adequacy criteria
Example 1: We need to write a program that takes two integer inputs
a and b and computes their sum if a > b and their difference if a < b.
Requirements specification:
Accept two integer inputs
Return the sum of the two integers if the first input is greater than the second
Return the difference of the two integers if the first input is less than the second
• Consider criterion: “A test T for program (P, R) is considered adequate
if each path in P is traversed at least once.”
Test selection and adequacy criteria
int myCompute(int a, int b) {
    if (a > b)
        return a + b;
    else
        return a - b;
}
[Figure: flow graph sketch — j = 1; test j <= limit with T/F edges; body S1 … ; j = j + 1; exit to Sn.]
Parts of a graph
• Nodes
• Statement
• Condition
• Edges
• Convention is: all edges representing “T” and “F” should be on the same
side throughout the graph, i.e., if on the left in one IF statement, then on
the left in all IF statements.
End
Module 30:
Control flow graph based testing
Control flow graph based testing
• Two kinds of basic program statements:
• Assignment statements (Ex. x = 2*y; )
• Conditional statements (Ex. if(), for(), while(), …)
• Control flow
• Successive execution of program statements is viewed as flow of control.
• Conditional statements alter the default flow.
Control flow graph based testing
• Program path
• A program path is a sequence of statements from entry to exit.
• There can be a large number of paths in a program.
• There is an (input, expected output) pair for each path.
Control flow graph based testing
• Executing a path requires invoking the program unit with the right test input.
• Paths are chosen by using the concepts of path selection criteria.
• Tools: Automatically generate test inputs from program paths
Control flow graph based testing
• A decision is a program point at which the control can diverge.
• (e.g., if and case statements).
• A junction is a program point where the control flow can merge.
• (e.g., end if, end loop, goto label)
Control flow graph based testing
• A process block is a sequence of program statements uninterrupted by
either decisions or junctions. (i.e., straight-line code).
• A process has one entry and one exit.
• A program does not jump into or out of a process.
End
Module 31:
Control flow graph (CFG)
Control flow graph
• Control-flow testing is a structural testing strategy that uses the
program’s control flow as a model.
• Control-flow testing techniques are based on judiciously selecting
a set of test paths through the program.
Control flow graph
• The set of paths chosen is used to achieve a certain measure
of testing thoroughness.
• E.g., pick enough paths to assure that every source statement is executed
at least once.
Control flow graph

scanf("%d, %d", &x, &y);
if (y < 0)
    pow = -y;
else
    pow = y;
z = 1.0;
while (pow != 0) {
    z = z * x;
    pow = pow - 1;
}
if (y < 0)
    z = 1.0 / z;
printf("%f", z);

[Figure: CFG of this fragment — decision y < 0 with T/F edges to pow = -y and pow = y; z = 1.0; loop on pow != 0 executing z = z*x and pow = pow-1; decision y < 0 leading to z = 1.0/z; printf.]
Control flow graph
• Test Requirement: test requirement is a specific element of
a software artifact that a test case must satisfy or cover.
• Coverage Criterion: A coverage criterion is a rule or collection of
rules that impose test requirements on a test set
Control flow graph
• Coverage: Given a set of test requirements TR for a coverage
criterion C, a test set T satisfies C if and only if for every test
requirement tr in TR, at least one test t in T exists such that t satisfies
tr .
• Coverage level: Given a set of test requirements TR and a test set
T, the coverage level is simply the ratio of the number of test
requirements satisfied by T to the size of TR.
End
Module 32:
Control flow graph examples
Control flow graph examples
• We consider flow graph examples for the following:
• For Loop
• Case Statement
• While Loop
Control flow graph examples
What does the CFG for the following
code fragment look like?

S0;
for ( j = 1; j <= limit; j = j+1 )
{
    S1;
}
Sn;

[Figure: CFG — S0; j = 1; decision j <= limit (T into the body S1, then j = j+1 and back to the test; F to Sn).]
Control flow graph examples
S0;
switch ( e )
{
    case v1: S1; break;
    case v2: S2; break;
    default: S3;
}
Sn;

[Figure: CFG — S0; decision on e with edges labeled v1, v2, default leading to S1, S2, S3; all merge into Sn.]
Control flow graph examples

scanf("%d, %d", &x, &y);
if (y < 0)
    pow = -y;
else
    pow = y;
z = 1.0;
while (pow != 0) {
    z = z * x;
    pow = pow - 1;
}
if (y < 0)
    z = 1.0 / z;
printf("%f", z);

[Figure: the CFG for this fragment, as drawn in the previous module.]
Control flow graph examples
• Control flow graph follows conventions followed for flow
graph development.
• They must be followed.
• If there are a number of sequential statements, we represent them
by a composite node.
End
Module 33:
From CFG to Path Selection
From CFG to Path Selection
• A path is a unique sequence of executable statements from one
point of the program to another point.
• In a graph, a path is a sequence (n1, n2, …, nt) of nodes such that
<ni,ni+1> is an edge in the graph, i=1, 2,.., t-1 (t > 0).
From CFG to Path Selection
• Complete path: starts with the first node and ends at the last node
of the graph.
• Execution path: a complete path that can be exercised by some
input data.
• Subpath: a subsequence of the sequence, e.g.,
• n1, n2, …, nt
• Elementary path: all nodes are unique.
• Simple path: all edges are unique.
From CFG to Path Selection
[Figure: CFG with nodes 1–10 — decision at 2 branching T to 3 and F to 4, merging at 5; loop between 6 and 7; decisions leading through 8 and 9 to exit node 10.]

Paths Example:
(1,2,3,5,6,7,6,8,10) is a complete path
(1,2,3,5,6,7,6,7,6,8,10) is a different complete path
(1,2,3,5) is a subpath
(1,2,3,5,6,8,10) is an elementary path
(1,2,4,5,6,7,6,8,10) is a simple path
From CFG to Path Selection
max = x      (S1)
if (y > x)   (P1)
    max = y  (S2)
return max   (S3)

The simple CFG shown here has 2 different paths:
(S1, P1, S2, S3) — when P1 is true
(S1, P1, S3) — when P1 is false
From CFG to Path Selection
A How many paths?
F
T B
T F if ((A||B) && (C||D) {
C S1;
F } else { S2; }
T D
There are: 2 statements + Sn
T F 4 branches,
S1 S2
7 paths (4T+3 F)
(A, C, S1), (A, C, D, S1)
Sn
(A, C, D, S2), (A, B, C, S1)
(A, B, C, D, S1), (A, B, C, D, S2),
From CFG to Path Selection
• Infeasible path
PP x<
11 • A path is said to be feasible if it
T 10 can be exercised by some
input data; otherwise the path
FF is said to be infeasible.
• Infeasible paths are the result
SS11 of contradictory predicates.
P2 x<
20
S4
T F
SS22 SS
33
•T P t feasible.
h 2 • Unless S1 changes the value of x
e , …
p
S
a
3
t
,
h
S
•(
4
P
)
1
, •i
S s
1 n
, o
Module 34:
From Path to Test Case
From Path to Test Case
• Every path corresponds to a succession of true or false values for
the predicates traversed on that path.
• A Path Predicate Expression is a Boolean expression that
characterizes the set of input values that will cause a path to be
traversed.
• Multiway branches (e.g., case/switch statements) are treated
as equivalent if then else statements.
From Path to Test Case
• Any set of input values that satisfies ALL of the conditions of the
path predicate expression will force the routine through that path.
• If there is no such set of inputs, the path is not achievable.
From Path to Test Case
• Process of creating path expression:
• Write down the predicates for the decisions you meet along a path.
• The result is a set of path predicate expressions.
• All of these expressions must be satisfied to achieve a selected path.
From Path to Test Case
• Path condition: the conjunction of the individual predicate
conditions that are generated at each branch point along the path.
• The path condition must be satisfied by the input data in order for
the path to be executed.
From Path to Test Case

Example:
[Figure: CFG with nodes 1–10 from the power example.]

PC(1,2,4,5,6,8,10)
= (y ≥ 0 ∧ pow == 0 ∧ y ≥ 0)
From Path to Test Case
• Path Sensitization
• The act of finding a set of solutions to the path predicate expression
is called path sensitization.
• This yields set of values when given as input allow us to traverse
that path
• We manually compute expected outputs
• Set of inputs together with expected outputs, they form test cases
From Path to Test Case
• Inputs to the test generation process
• Source code
• Path selection criteria: statement, branch, …
• Generation of control flow graph (CFG)
• A CFG is a graphical representation of a program unit.
• Compilers are modified to produce CFGs. (You can draw one by hand.)
From Path to Test Case
• Selection of paths
• Enough entry/exit paths are selected to satisfy path selection criteria.
• Generation of test input data
• Two kinds of paths
• Executable path: There exists input so that the path is executed.
• Infeasible path: There is no input to execute the path.
From Path to Test Case

scanf("%d, %d", &x, &y);
if (y < 0)
    pow = -y;
else
    pow = y;
z = 1.0;
while (pow != 0) {
    z = z * x;
    pow = pow - 1;
}
if (y < 0)
    z = 1.0 / z;
printf("%f", z);

We want to traverse the path where:
• the first IF evaluates to F
• the while loop is not executed
• the second IF is again evaluated to F

Path = 1,2,4,5,6,8,10
PC(1,2,4,5,6,8,10) = (y ≥ 0 ∧ pow == 0 ∧ y ≥ 0)
End
Software Verification
and Validation
Lecture No. 6
Software Verification and Validation
Agenda
1. Dataflow Testing
2. Dataflow Testing – anomalies
3. Dataflow Testing Coverage Criteria
4. Program Slice based Testing
5. Program Slice – examples
6. Program Slice – Uses in testing
Module 36:
Dataflow Testing
Dataflow Testing
• Data-flow testing is the name given to a family of test strategies
based on selecting paths through the program’s control flow in
order to explore sequences of events related to the status of data
objects.
• E.g., Pick enough paths to assure that:
• Every data object has been initialized prior to its use.
• All defined objects have been used at least once.
Dataflow Testing
• Data-flow testing uses the control flow graph to explore the
unreasonable things that can happen to data (i.e.,
anomalies).
• We annotate CFG such that
• (d) Defined, Created, Initialized
• (k) Killed, Undefined, Released
• (u) Used:
• (c) Used in a calculation
• (p) Used in a predicate
Dataflow Testing
• An object (e.g., variable) is defined (d) when it:
• appears in a data declaration
• is assigned a new value
• is a file that has been opened
• is dynamically allocated, etc.
• An object is killed when it is:
• released (e.g., free) or otherwise made unavailable (e.g., out of scope)
Dataflow Testing
• a loop control variable when the loop exits
• a file that has been closed
• An object is used when it is part of a computation or a predicate.
• A variable is used for a computation (c) when it appears on the
RHS (sometimes even the LHS in case of array indices) of an
assignment statement.
Dataflow Testing
• A variable is used in a predicate (p) when it appears directly in
that predicate.
• A data-flow anomaly is denoted by a two character sequence
of actions. E.g.,
• ku: Means that an object is killed and then used.
• dd: Means that an object is defined twice without an intervening usage.
Dataflow Testing
Line                      Def      C-use         P-use
1. read (x, y);           x, y
2. z = x + 2;             z        x
3. if (z < y)                                    z, y
4.     w = x + 1;         w        x
   else
5.     y = y + 1;         y        y
6. print (x, y, w, z);             x, y, w, z
Module 37:
Dataflow Testing - anomalies
Dataflow Testing - anomalies
• Two letter situations for d, k and u
• dd: Probably harmless, but suspicious.
• dk: Probably a bug.
• du: Normal situation.
• kd: Normal situation.
• kk: Harmless, probably a bug.
• ku: Definitely a bug.
• ud: Normal (reassignment).
• uk: Normal situation.
• uu: Normal situation.
Dataflow Testing - anomalies
• Single letter situations
• leading dash means nothing of interest (d, k, u) occurs prior to
action noted along entry-exit path of interest.
• trailing dash means nothing of interest happens after point of action
until exit.
• -k: Possibly anomalous as killing a variable that does not exist or
is global.
• -u, d-: Possibly anomalous, unless the variable is global.
END
Module 38:
Dataflow Testing Coverage Criteria
Dataflow Testing Coverage Criteria
• Data values are created at some point in the program and used
later on.
• A definition (def) is a location where a value of a variable is stored
into memory.
• A use is a location where the value of a variable is accessed.
• Values are carried from their defs to uses. We call them du-pairs.
• A du-pair is a pair of locations (li, lj) such that variable v is defined at
li and used at lj.
Dataflow Testing Coverage Criteria
• Let V be the set of variables that
are associated with the program
artifact being modeled as a
graph.
• The subset of V that each
node n (edge e) defines is
called def(n) (def(e)).
• Example

[Figure: graph — node 1: x = 30; branch to 2 or 3; merge at 4; node 5: z = x*5; node 6: z = x - 10; exit node 7.]

• Defs:
• Def (1) = {x}
• Def (5) = def (6) = {z}
• Uses:
• Use (5) = use (6) = {x}
Dataflow Testing Coverage Criteria
• If v is used in c-use, the use is referred to as:
• dcu(li, lj, v)
• If v is used in p-use, the use is referred to as set of two def-use
pairs as:
• dpu(li, (lj, lt), v) and
• dpu (li, (lj, lf), v)
• Since v is defined at li, used at lj having two flow directions (lj, lt) and (lj, lf)
Dataflow Testing Coverage Criteria
• Def-Use Pairs
• Def of a variable at line li and its use at line lj constitute a def-use pair. li and lj
can be the same.
• A path from li to lj is def-clear path with respect to variable v if v is
not given another value on any of the nodes or edges in the path
• Def-clear path from li to lj with respect to v means def of v at
li reaches use of lj.
Dataflow Testing Coverage Criteria
• A du-path with respect to variable v is a simple path that is def-
clear from a def of v to use of v where:
• Du-paths are parameterized by v
• They need to be simple paths
• There may be intervening uses on the path
• Du(ni, nj, v) is set of du-paths from ni to nj for v
• Du(ni, v) is set of du paths starting from ni for v
Dataflow Testing Coverage Criteria
• A def-pair set du(ni, nj, v) of
du-paths wrt v starts at ni
and ends at nj
• They represent set of all
paths from definition to use
• Test requirements
(TR) examples:
• TR: each def reaches at
least one use
• TR: each def reaches
all possible uses
• TR: each def reaches all
possible uses through
all possible du-paths
Dataflow Testing Coverage Criteria
• All-Defs Coverage: For each def-path set S = du(n, v), TR contains at
least one path d in S
• Every def reaches at least one use
• All-Uses Coverage: For each def-pair set S = du(ni, nj, v), TR contains
at least one path d in S
• Every def reaches every use
• Since a def-pair set du(ni, nj, v) represents all def-clear simple paths from a def of v
at ni to a use of v at nj, all-uses requires us to tour at least one path from every def-
use pair
• All-du-Paths Coverage: For each def-pair set S = du(ni, nj, v), TR
contains every path d in S
• Every def reaches every use through every possible du-path
Dataflow Testing Coverage Criteria
[Figure: the same graph — node 1: x = 30; branch to 2 or 3; merge at 4; node 5: z = x*5; node 6: z = x - 10; exit node 7.]

• All defs of x
= test path {1,2,4,6,7}
(one of the uses)
• All uses of x
= test path {1,2,4,5,7},
test path {1,2,4,6,7}
(all of the uses, 5 and 6)
• All du-paths of x
= test path {1,2,4,5,7},
test path {1,3,4,5,7},
test path {1,2,4,6,7},
test path {1,3,4,6,7}
Dataflow Testing Coverage Criteria
[Figure: subsumption hierarchy of coverage criteria, strongest at the top:]
ALL-PATHS
ALL-DU-PATHS
ALL-USES
ALL-P-USES/SOME-C-USES    ALL-C-USES/SOME-P-USES
ALL-P-USES    ALL-C-USES    ALL-DEFS
ALL-EDGES
ALL-NODES
Dataflow Testing Coverage Criteria
• Subsumption in Coverage Criteria
• Every use is preceded by a def
• Every def reaches at least one use
• For every node with multiple out-going edges, at least one variable is used
on each out edge, and the same variables are used on each out edge.
• All-Defs ⊆ All-Uses …
END
Module 39:
Program Slice based Testing
• Program slice
• Program slice is a decomposition technique that extracts
statements relevant to a particular computation from a program.
• Program slices, first introduced by Mark Weiser (1980), are
known as executable backward static slices.
Program Slice based Testing
• Program Slice
• A program slice is a subset of a program.
• Program slicing enables programmers to view subsets of a program
by filtering out code that is not relevant to the computation of
interest.
• E.g., if a program computes many things, including the average of a
set of numbers, slicing can be used to isolate the code that
computes the average.
Program Slice based Testing
• Definition:
• Assume that:
• P is a program.
• V is the set of variables at a program location (line number) n.
• A slice S(V,n) produces the portions of the program that contribute
to the value of V just before the statement at location n is executed.
• S(V,n) is called the slicing criteria.
Program Slice based Testing
• Conditions for Program slice
• Slice S(V,n) must be derived from P by deleting statements from P.
• Slice S(V,n) must be syntactically correct.
• For all executions of P, the value of V in the execution of S(V,n) just
before the location n must be the same value of V in the execution
of the program P just before location n.
Program Slice based Testing
• Types of Slices
• Static Slice
– Slice derived from source code
– No assumption about input values
– Contains all statements that may affect a variable
• Dynamic Slice
– Uses information about a particular execution
Program Slice based Testing
• Types of Slice
– Execution is monitored and the slice is computed with respect to the program's
execution history
– Relatively small as compared to a static slice; contains the statements actually
affecting a variable
– Conditioned (quasi-static) slicing acts as a bridge between the two
extremes, i.e., static and dynamic slicing
Program Slice based Testing
• Types of Slice
• Backward Slice
– Contains those statements of a program P that may have some effect on the slicing
criterion
• Forward Slice
– Contains those statements of program P that are affected by slicing criterion
S(V,n)
• Chopping, Interface Slice, …
Module 40:
Program Slice – examples
Program Slice – examples

main ( ) {
1.  int mx, mn, av;
2.  int tmp, sum, num;
3.
4.  tmp = readInt();
5.  mx = tmp;
6.  mn = tmp;
7.  sum = tmp;
8.  num = 1;
9.
10. while (tmp >= 0)
11. {
12.   if (mx < tmp)
13.     mx = tmp;
14.   if (mn > tmp)
15.     mn = tmp;
16.   sum += tmp;
17.   ++num;
18.   tmp = readInt();
19. }
20.
21. av = sum / num;
22. printf("\nMax=%d", mx);
23. printf("\nMin=%d", mn);
24. printf("\nAvg=%d", av);
25. printf("\nSum=%d", sum);
26. printf("\nNum=%d", num);
}

Slice S(num,27)
Program Slice – examples

For the same program, the slice is obtained by deleting the
statements that do not contribute:

main() {
1.  int mn;
2.  int tmp;
4.  tmp = readInt();
6.  mn = tmp;
10. while (tmp >= 0)
11. {
14.   if (mn > tmp)
15.     mn = tmp;
18.   tmp = readInt();
19. }
}

Slice S(av,26)
Program Slice – examples

Static Slice, criterion (12, i)

1  main( )
2  {
3    int i, sum;
4    sum = 0;
5    i = 1;
6    while (i <= 10)
7    {
8      sum = sum + 1;
9      ++i;
10   }
11   cout << sum;
12   cout << i;
13 }

The slice S(i,12) keeps the statements that contribute to the value of i at
line 12 (lines 1, 2, 3, 5, 6, 7, 9, 10, 12, 13) and deletes those that only
affect sum (lines 4, 8, 11).
Program Slice – examples
• Dynamic slice, n = 1, c1 and c2 are true.
• Slice criterion <1, 10, z>

1.  read(n)
2.  for I := 1 to n do
3.    a := 2
4.    if c1 == 1 then
5.      if c2 == 1 then
6.        a := 4
7.      else
8.        a := 6
9.  z := a
10. write(z)
Module 41:
Program Slice – Uses in Testing
Program Slice – Uses in Testing
• Observations
• Given a slice S(X,n) where variable X depends on variable Y
with respect to location n:
• All d-uses and p-uses of Y before n are included in S(X,n).
• The c-uses of Y will have no effect on X unless X is a d-use in that statement.
• Slices can be made on a variable at any location.
Program Slice based Testing
• Uses of Program Slice
• provide a mechanism to
filter out “irrelevant” code.
• Debugging:
• Helps visualize control and
data dependencies
• Highlights statements influencing
a particular slice
• Testing: Test cases can
be decomposed
Program Slice – Uses in Testing
• Testing: reduce cost of
regression testing after
modifications (only run the
tests that are needed)
• Software maintenance:
changing source code
without unwanted side
effects
• Software quality assurance:
• validate interactions between
safety-critical components
• Safety critical code can be
isolated
Program Slice – Uses in Testing
• Software support
• CodeSurfer: GUI Based,
has scripting language-Tk
• Spyder: A debugging tool
based on program
slicing.
• Unravel: program slicer
for ANSI C.
• Wisconsin: Slicer for C
• Bandera: Open source
version of Wisconsin
Software Verification
and Validation
Lecture No. 7
Software Verification and Validation
Agenda
1. Control-flow graph based testing –
Lab
2. Preparation for Lab on unit testing
3. Introduction to System Under
Test (SUT)
4. Control-flow graph
5. Test Cases
Module 42:
Control-flow graph based Testing - Lab
Control-flow graph based Testing - Lab
Goal of testing: maximize the number and severity of defects found
per dollar spent …
Limits of testing: Testing can only determine the presence of defects,
never their absence; use proofs of correctness to establish “absence”
Who should test: developer.
• Why?
Control-flow graph based Testing - Lab
[Figure: levels of testing]
3. System tests — include use-cases
2. Integration tests — module combinations (OO: packages of classes)
1. Unit tests — functions / methods
Control-flow graph based Testing - Lab

[Figure: unit-testing process]
Requirements → identify largest trouble spots
1. Plan for unit testing (detailed design) → unit test plan
2. Design test cases and acquire test I/O pairs (generate I/O pairs) → test results
Module 43:
Unit Testing Lab Preparation
Control-flow graph based Testing - Lab
• Plan for unit testing
• We select suitable unit testing technique
• We know two techniques so far, which ones?
• Control flow based technique
• Dataflow based technique
• We acquire code for testing
• We decide which coverage criteria would be useful
Unit Testing Lab Preparation

[Figure: unit-testing process]
Requirements → identify largest trouble spots
1. Plan for unit testing → unit test plan
2. Design test cases and acquire test I/O pairs (generate I/O pairs) → test set
3. Execute unit test (code under test) → test results
Control-flow graph based Testing - Lab
• Our plan as follows:
• We selected
• control-flow based testing
• We acquired/written code for testing
• This was assigned to the developer
• Following detailed design given by the designer
END
• We select
• statement coverage.
Module 44:
Unit Testing – Test Case Design
Unit Testing – Test Case Design
• We have finalized that:
• We would perform control-flow based testing
• We have written the code
• Statement coverage is:
• advised to us by the development manager (we are working as the developer)
• The test case design process is up next
Unit Testing – Test Case Design

[Figure: unit-testing process]
Requirements → identify largest trouble spots
1. Plan for unit testing (detailed design) → unit test plan
2. Design test cases and acquire test I/O pairs (generate I/O pairs) → test set
3. Execute unit test → test results
Unit Testing – Test Case Design

public int search(int key, int[] elemArray) {
    int bottom = 0; int top = elemArray.Length - 1; int mid;
    int index = -1; Boolean found = false;
    while (bottom <= top && found == false) {
        mid = (top + bottom) / 2;
        if (elemArray[mid] == key) {
            index = mid; found = true; return index;
        } else {
            if (elemArray[mid] < key)
                bottom = mid + 1;
            else top = mid - 1;
        }
    }
    return index;
}
Unit Testing – Test Case Design
• We take the following steps:
• Develop CFG
• Find paths through CFG
• Find path conditions
• Find appropriate inputs
Unit Testing – Test Case Design

[Figure: the search() code annotated with CFG node numbers — node 1: initialization; node 2: while (bottom <= top && found == false); node 3: if (elemArray[mid] == key); nodes 4/5/6: branches of if (elemArray[mid] < key); node 7: loop back; nodes 8, 9: exit and return index.]

Paths through the CFG (inputs and expected outputs to be filled in):
1, 2, 3, 4, 6, 7, 2 …    ?    ?
1, 2, 3, 4, 5, 7, 2 …    ?    ?
1, 2, 3, 4, 6, 7,
2, 8, 9    ?    ?
Module 45:
Unit Testing with Microsoft Visual Studio
Unit Testing with Microsoft Visual Studio
• We have the test machinery ready including
• Code
• Test stubs
• Execution machinery
• Coverage Report
• We select Microsoft Visual Studio
• We test our code such that:
• We add a separate test project in our solution
• Keeping code and test cases separate
U n i t T e s
R eq ui r em e n ts
ting with M i c r o so f t
1 . P l an fo r
Visual Studio
Identify largest
unit testing
trouble spots
Detailed design
Unit test plan
2. Design test
cases and acquire
test I/O pairs
Generate I/O pairs
Test set
3. Execute
Code
unit test Test results
under test
Unit Testing with Microsoft Visual Studio
• File -> New -> Project
• Right Click on Project Name
-> Add
-> New Item
Unit Testing with Microsoft Visual Studio
Code
Unit Testing with Microsoft Visual Studio
• File -> New -> Test
Unit Testing with Microsoft Visual Studio
Unit Testing with Microsoft Visual Studio
Unit Testing with Microsoft Visual Studio

using System;
using System.Text;
using System.Collections.Generic;
using System.Linq;
using Microsoft.VisualStudio.TestTools.UnitTesting;
using SQELab1;

namespace TestProjectBSearch {
    [TestClass]
    public class test1 {
        [TestMethod]
        public void TestMethod1() {
            int[] array = { 1, 2, 3, 4 }; int key = 1;
            myBSearch bs = new myBSearch();
            Assert.AreEqual(1, bs.search(key, array));
        }
    }
}
Unit Testing with Microsoft Visual Studio
Unit Testing with Microsoft Visual Studio
• Passed and Failed
Test behavior
• Test execution Log
Module 46:
Coverage with Microsoft Visual Studio
Coverage with Microsoft Visual Studio
• We used our test machinery to test our code
• We need to use the same machinery (i.e., Visual Studio)
for the coverage report
Code Coverage with M/S Visual Studio
• We have learnt how to do unit testing using Microsoft Visual Studio
• We have attained statement coverage.
END
Software Verification
and Validation
Lecture No.8
Software Verification and Validation
Agenda
1. Integration Testing
2. Functional decomposition based
Integration
3. Call Graph based Integration
4. Path based Integration
Module 47:
Integration Testing
Integration Testing
• While developing unit test strategy, we consider:
• Test Requirement
• Documentation requirement
• Tools / Test utilities?
• Individual units get combined into (sub-) modules
• We need to assure if:
• they are following design accurately
• they are collaborating properly
• We do integration testing
Integration Testing
• Unit assumptions
– All other units are correct
– Compiles correctly
• Integration assumptions
– Unit testing complete
• System assumptions
– Integration testing complete
– Tests occur at port boundary
Integration Testing
• Unit goals
– Correct unit function
– Coverage metrics satisfied
• Integration goals
– Interfaces correct
– Correct function across units
– Fault isolation support
• System goals
– Correct system functions
– Non-functional requirements tested
– Customer satisfaction
Integration Testing
• Imagine, developer X implemented and tested “Register” Class
• Imagine, developer Y implemented and tested “Sales” Class
Integration Testing
• Development manager wants to:
• Test if individual units
are working properly?
• To verify if adequate
testing at developer
level was performed
• NOT redoing the whole
testing effort again
• If units are
collaborating as
anticipated?
Integration Testing
At the unit level, our test case was a Single Step or a Test Step
At the integration level, it becomes a Test Sequence
Integration Testing
• Imagine,
• We were to test Order class
• We went step by step
• We wrote a unit test case for dispatch()
• We wrote a unit test case for close()
• Single step test case
• While writing integration test case,
• We test if Order and Customer both are working together
• We wrote a test case which is a series of invocations.
Integration Testing
• While integration testing,
• We want to see collaborative behavior
• Issue is that all classes are not ready at the same time
• Even if they are ready, we don’t want to put them together in one go
and run tests for the following reason:
• Fault localization
Integration Testing
• Integration testing techniques
• Functional Decomposition
• applies best to procedural code
• Call Graph
• applies to both procedural and object-oriented code
• MM-Paths
• apply to both procedural and object-oriented code
Module 48:
Functional decomposition based Integration
Integration Testing
• Top-down integration strategy
• focuses on testing the top layer or the controlling subsystem first
• (i.e., the main, or the root of the call tree)
• Sometimes, we have an early prototype ready
• Sometimes required to build an understanding (both SE team and
the client)
• We then naturally follow top-down integration of system and test
Integration Testing
• We gradually add more subsystems that are referenced/required
by the already tested subsystems
Functional decomposition based Integration

[Figure: functional decomposition tree — root A with subtrees B and C; internal nodes D, E, F; leaves numbered 2–27.]
• Top Subtree (Sessions 1-4)
• Top-Down Integration
• Bottom-up Integration
• Second Level Subtree (Sessions 25-28)
• Top Subtree (Sessions 29-32)
Functional decomposition based Integration
• Issues:
• Not optimal strategy for functionally decomposed systems:
• Tests the most important subsystem (UI) last
• More useful for integrating object-oriented systems
• Drivers may be more complicated than stubs
• Fewer drivers than stubs are typically required
Functional decomposition based Integration
• Corporate customer and
Personal customer classes are
ready
• Together with Customer class
• Order Class in not ready
• We use drivers
• The driver (Control
program) is used in the
bottom-up integration to
arrange test case input and
output
Functional decomposition based Integration
• Sandwich Integration
• Combines top-down strategy with bottom-up strategy
• Less stub and driver development effort
• Added difficulty in fault isolation
Functional decomposition based Integration
Sandwich Integration
Module 49:
Call Graph based Integration
Call Graph Based Integration Testing
• Call Graph of a program is a directed graph in which
• nodes are units
• edges correspond to actual program calls (or messages)
• Can still use the notions of stubs and drivers.
• The basic idea is to use the call graph instead of the decomposition tree
• Two sub-types
• Pair-wise Integration
• Neighborhood Integration
Call Graph Based Integration Testing

[Figure: call graph — nodes numbered 1–27 with edges representing actual program calls.]
Call Graph Based Integration Testing
• By definition, an edge in the Call Graph refers to an
interface between the units that are the endpoints of the
edge.
• Every edge represents a pair of units to test.
• We want to test all edges are:
• Essentially required
• Meaningfully implemented
Call Graph Based Integration Testing

• Pair-wise integration
• The result is that we have one integration test session for each edge

[Figure: the call graph with the edge under test highlighted for one session.]
Call Graph Based Integration Testing
• Neighborhood integration
• We define the neighborhood of a node in a graph to be the set of nodes that are
one edge away from the given node
• In a directed graph this means all the immediate predecessor and immediate
successor nodes of the given node
• Fewer sessions are needed
• Fault isolation is harder
[Figure: the numbered call graph with the neighborhood of one node highlighted]
Call Graph Based Integration Testing
• The neighborhood (or radius 1) of a node in a graph is the set
of nodes that are one edge away from the given node.
• This can be extended to larger sets by choosing larger values for
the radius.
• Stub and driver effort is reduced.
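The pair-wise and neighborhood session counts can be sketched in Python. The five-unit call graph below is a made-up example, not the graph from the slides:

```python
# Sketch: pair-wise and neighborhood integration sessions derived from a
# call graph. The graph is an illustrative example.
call_graph = {            # caller -> list of callees
    "A": ["B", "C", "D"],
    "B": ["E"],
    "C": [],
    "D": ["E"],
    "E": [],
}

# Pair-wise integration: one test session per edge (caller, callee).
edges = [(u, v) for u, vs in call_graph.items() for v in vs]

def neighborhood(node):
    """Neighborhood (radius 1): immediate predecessors and successors."""
    preds = {u for u, vs in call_graph.items() if node in vs}
    succs = set(call_graph[node])
    return preds | succs | {node}

# Number of neighborhoods = nodes - sink nodes (nodes with no outgoing calls).
sinks = [n for n, vs in call_graph.items() if not vs]
print(len(edges))                      # 5 pair-wise sessions
print(len(call_graph) - len(sinks))    # 3 neighborhood sessions
```

Note how the neighborhood count (3) is already smaller than the edge count (5); on a realistic call graph the reduction in sessions is much larger.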
Call Graph Based Integration Testing
Graph 1: A calls B, C, D, E
Total nodes = 5; Sink nodes = 4
Neighborhoods = 5 - 4 = 1
Graph 2: A calls B and E; B calls C and D
Total nodes = 5; Sink nodes = 3
Neighborhoods = 5 - 3 = 2
[Figure: the two graphs with neighborhood(1) and neighborhood(2) marked]
Call Graph Based Integration Testing
[Figure: exercise call graph over units A, B, C, D, E, F, G, H]
END
Software Verification
and Validation
Lecture No. 9
Module 51:
System Testing
System Testing
• Assumption
• Unit level testing is performed
• Integration level testing is performed
• Therefore, individual modules are working correctly
• System level focus
• Functionality based on Multi-modules
System Testing
• System level focus
• External interfaces
• Non-functional aspects, e.g.,
• Security
• Recovery
• Performance
• Operational and user business process requirements
[Figure: testing levels pipeline — Unit Testing, then Integration Testing, then System Testing]
Unit Testing Techniques:
1. CFG, DFG
2. …
Exit Criteria:
1. Specific Coverage Crit.
2. Confidence
Integration Testing Techniques:
1. MM Path
2. Techniques on the boundary (State-Machine)
Exit Criteria:
1. Specific Coverage Crit.
System Testing Techniques:
1. Specification-based
2. Model-based techniques
Exit Criteria:
1. Specific Coverage Crit.
[Figure: project organization chart — Development Manager, Quality Assurance,
PM, CM, Developer 1 … Developer n, Tester 1 … Tester n]
System Testing
• The process of testing of an integrated hardware and software
system to verify that the system meets its specified requirements
• verification: confirmation by examination and provision of objective
evidence that specified requirements have been fulfilled
• Definitions taken from IEEE
System Testing
• More definitions:
• Testing to confirm that all code modules work as specified and that
the system as a whole performs adequately on the platform on
which it will be deployed
Module 52:
System Testing Aspects
System Testing Aspects
• Functional aspects
• Business processes
• System goals
• Non-functional aspects
• Load, Stress, Volume
• Performance, Security, Usability
• Storage, Install-ability, Documentation,
• Recovery, …
• …
• There could be specific functional as well as non-functional
requirements
System Testing Aspects
• Functional Aspects Testing:
• Based on requirements / specifications
• System models
• Context diagrams
• ERD
• Design documents
• Our Focus while deriving test cases
• Data
• Action
• Device
• Event
System Testing Aspects
• Non-functional aspects testing
• We identify non-functional user requirements from the specifications
• User defined non-functional requirements
• Local regulations e.g.,
• Each computer in the UK must have a warning displayed for user awareness
• That the system usage could be harmful under certain conditions e.g., disconnect
power before opening
• Standard SOPs..
Module 53:
System Testing – Non-functional aspects
System Testing – Non functional aspects
• We study a couple of system testing techniques, considering
non-functional testing methods first
• We then move towards system testing considering functional aspects
System Testing – Non functional aspects
• Load Testing
• System is subjected to statistically calculated load of
• Transactions
• Processing
• Parallel connections
• Etc.
• Test Case format
• It is a setup
• Mimicking real life boundary situation
• Software example for load testing: Apache JMeter
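The load-test setup above can be sketched in plain Python: fire a statistically chosen number of concurrent "transactions" at the system and count successes. The `transaction` function here is a stand-in; a real JMeter-style test would issue requests to the actual system under test:

```python
# Minimal load-test sketch: n_transactions fired through a pool of
# parallel workers. The transaction body is a placeholder computation.
import concurrent.futures

def transaction(i):
    # stand-in for e.g. one HTTP request to the system under test
    return sum(range(1000)) >= 0   # always "succeeds" in this sketch

def run_load(n_transactions=100, parallel=10):
    with concurrent.futures.ThreadPoolExecutor(max_workers=parallel) as ex:
        results = list(ex.map(transaction, range(n_transactions)))
    return sum(results), n_transactions

ok, total = run_load()
print(f"{ok}/{total} transactions succeeded")
```

The test case is the setup itself (transaction mix, parallelism, duration) rather than a single input/output pair, which is exactly the "test case format" point made above.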
System Testing – Non-functional aspects
• Stress Testing
• Form of Load testing
• The resources are denied
• Finding out the boundary conditions where the system would crash
• Finding situations where system usage would become harmful
• We test the system behavior upon lack of resources
System Testing – Non-functional aspects
• Stress Testing (Contd.)
• Test Case format
• It is a setup
• Mimicking real life boundary situation
• Bugs found are not (normally) repaired but reported to the end user
for avoidance.
System Testing – Non-functional aspects
• Performance Testing
• We study the performance
requirements, e.g.,
• Response time
• Worst, best, average case time
to complete specified set of
operations, e.g.,
• Transactions per second
• Memory usage (wastage)
• Handling extra-ordinary situations
• Test Case format
• It is a setup
• Simulation
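Worst, best, and average response times over a set of operations can be measured with a small harness like the following. The operation body is a stand-in for whatever the performance requirement specifies (e.g., one transaction):

```python
# Sketch: collecting best / worst / average response time of an operation,
# as a performance-test report would. The operation is a placeholder.
import time

def operation():
    sum(range(10_000))        # stand-in for the measured operation

timings = []
for _ in range(50):
    start = time.perf_counter()
    operation()
    timings.append(time.perf_counter() - start)

best, worst = min(timings), max(timings)
average = sum(timings) / len(timings)
print(best <= average <= worst)   # True
```

These three numbers map directly onto the "worst, best, average case time" requirement named above.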
System Testing – Non-functional aspects
• Volume Testing:
• We test if the system is able to handle expected volume of data, e.g.,
• Backend e.g., PRAL
• Affiliated resources, e.g., Xerox printers
• Word document having 200,000 pages
• We need to measure and report to user:
• System boundaries with respect to the volume or capacity of its processing
System Testing – Non-functional aspects
• Security Testing:
• We want to see if
• Use cases are allowed
• Misuse cases are not allowed
• Aimed at breaking the system
• Test cases format
• Negative intent or penetration
• Specific situations
• SQL injections
• Network security features
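The SQL-injection case above can be demonstrated with a tiny sqlite3 example. The schema, data, and payload are made up for illustration; the point is the contrast between string-spliced and parameterized queries:

```python
# Sketch of a security test case for SQL injection, using sqlite3.
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE users (name TEXT, secret TEXT)")
db.execute("INSERT INTO users VALUES ('alice', 's3cret')")

def lookup_unsafe(name):
    # vulnerable: attacker-controlled string is spliced into the query text
    return db.execute(
        f"SELECT secret FROM users WHERE name = '{name}'").fetchall()

def lookup_safe(name):
    # parameterized query: the driver treats the value as data, not SQL
    return db.execute(
        "SELECT secret FROM users WHERE name = ?", (name,)).fetchall()

payload = "x' OR '1'='1"            # classic injection payload
print(lookup_unsafe(payload))       # leaks the secret row
print(lookup_safe(payload))         # [] -- the misuse case is rejected
```

In the use-case/misuse-case terms above: the safe lookup allows the use case (querying by name) while disallowing the misuse case (retrieving rows via an injected predicate).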
System Testing – Non-functional aspects
• GUI Testing
• Verifying if
• HCI principles are properly followed
• Documented and de facto standards are met
• Uses
• Scenarios
• Questionnaire
• User activity logs
• Inspections
• Examples of Test cases
• Lower part of each website
• Every windows OS based application
System Testing – Non-functional aspects
• Storage testing
• Install-ability testing
• Documentation testing
• Recovery testing
•…
• We follow:
• what was promised to the client,
• what is required of such systems, and
• what the regulatory requirements are.
END
Module 54:
System Testing – Functional Aspects
System Testing – Functional Aspects
• Functional aspects are represented as:
• User stories
• Use cases, descriptions
• Formal specifications
Steps:
• Identify the use case scenarios.
• For each scenario, identify one or more test cases.
• For each test case, identify the conditions that will cause it
to execute.
• Complete the test case by adding data values.
Use Case Analysis based System Testing
• Step – 1:
• Use simple matrix that can be implemented in a
spreadsheet, database or test management tool.
• Number the scenarios and define the combinations of basic
and alternative flows that leads to them.
• Many scenarios are possible for one use case.
Use Case Analysis based System Testing
• Step – 1:
• Not all scenarios may be documented. Use an iterative process.
• Not all documented scenarios may be tested.
• Use cases may be at a level that is insufficient for testing.
• Team’s review process may discover additional scenarios.
Use Case Analysis based System Testing
Step 2:
We Find parameters of a test case:
• Conditions
• Input (data values)
• Expected result
• Actual result
Use Case Analysis based System Testing
• Step 3:
• For each test case, identify the conditions that will cause it
to execute a specific event.
1. Use matrix with columns for the conditions and for each
condition state whether it is
• Valid (V): Must be true for the basic flow to execute.
• Invalid (I): This will invoke an alternative flow
• Not applicable (N/A): To the test case
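The V / I / N/A matrix of Step 3 can be sketched as a small table; the scenario names and conditions below are illustrative (they anticipate a microwave-style example), not prescribed by the method:

```python
# Sketch: Step 3 condition matrix. Each scenario maps to the state of
# each condition: V (valid), I (invalid), N/A (not applicable).
matrix = {
    #                 door closed  food placed  time > 0
    "S1 basic flow": ("V",         "V",         "V"),
    "S2 alt flow":   ("I",         "N/A",       "N/A"),
    "S4 alt flow":   ("V",         "V",         "I"),
}

# The basic flow executes only when every condition is Valid;
# any Invalid condition invokes an alternative flow.
for scenario, conds in matrix.items():
    basic = all(c == "V" for c in conds)
    print(scenario, "-> basic flow" if basic else "-> alternative flow")
```

Step 4 then replaces each V or I with a concrete data value that makes the condition hold or fail.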
Use Case Analysis based System Testing
• Step 4
• Design real input data values that will make such conditions to
be valid or invalid and hence the scenarios to happen.
• Options
• look at the use case constructs and branches.
• Consider category-partitioning
• Boundary value analysis
Use Case Analysis based System Testing
• Coverage analysis
• Model-based coverage i.e.,
• All MCAs (main courses of action) covered
• All ACAs (alternative courses of action) covered
• Question
• How can we relate or equate this to actual code coverage
• Do we need to make any assumptions?
Module 56:
Use case analysis - example
Use case analysis - example
• We take an example of a system
which is a control system of
Microwave oven
• We consider its specifications
• We extract test cases from
specifications
Use case analysis - example
Initiator Cook
Food
Reference: Gomaa, H., Designing Software Product Lines with UML : From Use Cases to Pattern-
Based Software Architectures, Addison-Wesley, Reading 2004.
Use case analysis - example
Brief description: This use case describes the user
interaction and operation of a microwave oven.
Cook invokes this use case in order to cook food
in the microwave.
Added value: Food is cooked.
Scope: A microwave oven
Primary actor: Cook
Supporting actors: Timer
Preconditions: The microwave oven is waiting to be used.
Use case analysis - example
Main Course of Action
1. Cook opens the microwave oven door, puts food into the oven, and
then closes the door.
2. Cook specifies a cooking time.
3. System displays entered cooking time to Cook.
4. Cook starts the system.
5. System cooks the food, and continuously displays the remaining cooking time.
6. Timer indicates that the specified cooking time has been reached,
and notifies the system.
7. System stops cooking and displays a visual and audio signal to indicate
that cooking is completed.
8. Cook opens the door, removes food, then closes the door.
9. System resets the display.
Use case analysis - example
• Alternative flows:
1a. If Cook does not close the door before starting the system (step 4), the
system will not start.
4a. If Cook starts the system without placing food inside the system, the
system will not start.
4b. If Cook enters a cooking time equal to zero, the system will not start.
5a. If Cook opens the door during cooking, the system will stop cooking.
Cook can either close the door and restart the system (continue at step
5), or Cook can cancel cooking.
5b. Cook cancels cooking. The system stops cooking. Cook may start the
system and resume at step 5. Alternatively, Cook may reset the
microwave to its initial state (cancel timer and clear displays).
Use case analysis - example
Use case analysis - example
• Test case generation:
1. For Each use case, generate a full set of use-case scenarios.
2. For each scenario, identify at least one test case and the
conditions that will make it "execute."
3. For each test case, identify the data values with which to test.
Use case analysis - example
• Step 1: Read use-case textual description and identify each
combination of main and alternate flows of scenarios and create a
scenario matrix.
U/C ID | S ID | Scenario Description | Starting Flow | Target Flow
…      | …    | …                    | …             | …
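For the microwave-oven use case above, Step 1 yields a matrix like the following. The scenario descriptions are drawn from the basic flow and alternatives 1a-5b in the slides; the IDs are illustrative:

```python
# Sketch: Step 1 scenario matrix for the microwave-oven use case,
# one row per combination of basic and alternative flows.
scenarios = [
    # (U/C ID, S ID, description,                  starting flow, target flow)
    ("UC1", "S1", "Food cooked normally",          "Basic flow", "Basic flow"),
    ("UC1", "S2", "Door open at start",            "Basic flow", "Alt 1a"),
    ("UC1", "S3", "Start without food",            "Basic flow", "Alt 4a"),
    ("UC1", "S4", "Zero cooking time",             "Basic flow", "Alt 4b"),
    ("UC1", "S5", "Door opened during cooking",    "Basic flow", "Alt 5a"),
    ("UC1", "S6", "Cooking cancelled",             "Basic flow", "Alt 5b"),
]

for uc, sid, desc, start, target in scenarios:
    print(f"{uc} {sid:3} {desc:28} {start} -> {target}")
```

Such a matrix lives naturally in a spreadsheet or test management tool, as the step suggests; new rows are added as the iterative review discovers further scenarios.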
Use case analysis - example
• What if
• We identify additional scenarios e.g.,
• Power failure and restoration during cooking
• Cooking started with too much load
• Cooking with metal plate in the food
Use case analysis - example
• We find
• scenarios implemented but not defined
• scenarios defined but not implemented
• We report anomalies on the basis of
• Mismatches between specification and implementation
• Next we extract test cases from an SRS written in plain English
END
Software Verification
and Validation
Lecture No. 10
Module 57:
Specification based Testing
Specification based testing
• Specification or model?
• Recall
• Models typically provide some abstract representation of the behavior of the
system.
• Typical notations are:
• Algebraic Specifications
• Control/Dataflow Graphs
• Logic-based Specifications
Specification based testing
• Specification or model?
• Typical notations are (Contd.):
• Finite State Machine Specifications
• Grammar-based Specifications
• Considering Specifications written using informal languages,
e.g., English
Specification based testing
• Independently Testable Feature (ITF):
• depends on the control and observation that is available in the interface to the
system
• Test case:
• inputs, environment conditions, expected results.
• Test case specification:
• a property of test cases that identifies a class of test cases.
Specification based testing
• Functional specification:
• This can be some kind of formal specification that claims to be comprehensive.
• Often it is much more informal comprising a short English description of the inputs and
their relationship with the outputs.
• For some classes of system the specification is hard to provide (e.g. a GUI, since many of
the important properties relate to hard to formalize issues like usability).
Specification based testing
Specification based testing
• We slice specification into features (that may spread across many code modules).
• Each feature should be independent of the other, i.e. we can concentrate on testing one
at a time.
• The design of the code will make this easier or more difficult depending on how
much attention has been given to testability in the systems design.
• called as Independently Testable Function (ITF)
Specification based testing
• For each of the inputs we consider classes of values that will all generate similar
behavior.
• This will result in a very large number of potential classes of input for non-trivial
programs.
• We then identify constraints that disallow certain combinations of classes. The goal is to
reduce the number of potential test cases by eliminating combinations that do not
make sense.
Specification based testing
• Command: find
• Function: The find command is used to locate one or more instances of the given pattern
in the named file. All matching lines in the named file are written to standard output. …
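From a short English specification like this one, we derive test-case specifications by classing the inputs and pruning meaningless combinations. The classes below are illustrative choices for `find`'s pattern and file inputs, not an exhaustive category-partition:

```python
# Sketch: test-case specifications for the `find` command, derived from
# equivalence classes of its two inputs. Class names are illustrative.
pattern_classes = ["empty", "single character", "multi-character", "with quotes"]
file_classes = ["missing file", "empty file",
                "file with matches", "file without matches"]

# Cross the classes, then apply a constraint: an empty pattern is
# rejected up front, so it needs only one representative combination.
specs = [(p, f) for p in pattern_classes for f in file_classes
         if p != "empty"]
specs.append(("empty", "empty file"))

print(len(specs))   # 13 test-case specifications instead of 16
```

Each remaining (pattern class, file class) pair becomes one test-case specification, to be filled in with concrete values and expected standard output.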
Specification based testing
• We study methods where we have specifications written in plain English and we extract
test cases out of them
• We also study how we can extract test cases from model of a system
• Then we can resume testing and go on until we find a new bug,
and so on.
• Scenario tests discover more design errors than coding errors. Thus,
they are not useful for, e.g., regression testing or testing a new fix.
Scenario-based Testing
• Scenario testing type 1:
• When we do scenario testing type 1, we use the scenarios to
write transactions as sequences of
• Input
• Expected output
Equivalence Class (with boundary value)
Example
• Since there is no further sub-intervals inside the valid inputs for the
3 sides a, b, and c, Strong Normal Equivalence is the same as the
Weak Normal Equivalence
Equivalence Class (with boundary value)
Example
• Weak robust equivalence test cases:
• Include 6 invalid test cases in addition to Weak Normal
<100,100,100>
• <101, 45, 50 >
• < -5, 76, 89 >
• <45, 104, 78 >
• < 56, -5, 89 >
• <50, 78, 108 >
• < 56, 89, 0 >
<1, 1, 1>
Equivalence Class (with boundary value)
Example
• Strong Robust equivalence test cases
• Similar to Weak Robust, but all combinations of "invalid" inputs must be included.
• Look at the "cube" figure and consider the corners (two diagonal ones)
• Consider one of the corners: there should be (2^3 - 1) = 7 cases of "invalids"
• < 101, 101, 101 > < 50 , 101, 50 >
• < 101, 101, 50 > < 50 , 101, 101 >
• < 101, 50 , 101 > < 50, 50 , 101 >
• < 101, 50 , 50 >
• There will be 7 more “invalids” when we consider the other corner , <0,0,0 >
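The (2³ - 1) = 7 count for one corner can be checked mechanically. Assuming the valid range [1, 100] for each side, the upper corner combines the valid boundary 100 with the invalid value 101:

```python
# Sketch: enumerating the strong-robust "invalid" combinations around the
# upper corner of the valid cube [1, 100]^3 for triangle sides <a, b, c>.
from itertools import product

valid, invalid_high = 100, 101
corner_cases = [c for c in product([valid, invalid_high], repeat=3)
                if c != (valid, valid, valid)]   # drop the all-valid triple

print(len(corner_cases))   # 2**3 - 1 = 7
```

Repeating the same enumeration with 1 and 0 around the lower corner <0, 0, 0> produces the other 7 invalid combinations mentioned above.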
Equivalence Class (with boundary value)
Example
• The following testing schemes are overlapping in their concepts:
• Equivalence classes based testing
• Boundary value analysis
• Category-partitioning
• We have covered equivalence class testing with few references
to other two.
Module 63:
Decision Table Based Testing
Decision Table Based Testing
• Decision table is based on logical relationships just as the truth table.
Conditions (values of conditions):
C1: T T T T F F F F
C2: T T F F T T F F
C3: T F T F T F T F
Actions (x = action taken):
a1: x x x x
a2: x x
a3: x x x x
a4: x x
a5: x x
Decision Table Based Testing
• The conditions in the decision table may take on any number of
values. When it is binary, then the decision table conditions are just
like a truth table set of conditions.
Rules:              1  2  3  4  5  6  7  8  9  10 11
1. a < b + c        F  T  T  T  T  T  T  T  T  T  T
2. b < a + c        -  F  T  T  T  T  T  T  T  T  T
3. c < a + b        -  -  F  T  T  T  T  T  T  T  T
4. a = b            -  -  -  T  T  T  T  F  F  F  F
5. a = c            -  -  -  T  T  F  F  T  T  F  F
6. b = c            -  -  -  T  F  T  F  T  F  T  F
1. Not triangle     x  x  x
2. Scalene                                        x
3. Isosceles                       x     x     x
4. Equilateral               x
5. "Impossible"                 x  x        x
Decision Table Based Testing Example
• There is the “invalid situation” --- not a triangle:
• There are 3 test conditions in the Decision table
• Note the “-” entries, which represents “don’t care,” when it is determined
that the input sides <a, b, c> do not form a triangle
• There is the "valid" triangle situation:
• There are 3 types of valid conditions; so there are 2^3 = 8 test conditions
• But there are 3 "impossible" situations
• So there are only 8 - 3 = 5 test conditions
• So, for values of a, b, and c, we need to come up with 8 sets of <a,
b, c> to test the (3 + 5) = 8 test conditions.
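The decision table can be made executable, which is a convenient way to label the <a, b, c> triples chosen for the 8 test conditions. This is a sketch of the table's logic, not code from the slides:

```python
# Sketch: the triangle decision table as executable rules. The three
# "impossible" rules (exactly two of the equalities true) can never
# occur, matching the 8 - 3 = 5 valid test conditions counted above.
def classify(a, b, c):
    # conditions c1-c3: the triangle inequalities
    if not (a < b + c and b < a + c and c < a + b):
        return "not a triangle"
    # conditions c4-c6: the pairwise equalities
    eq = (a == b, a == c, b == c)
    if all(eq):
        return "equilateral"
    if any(eq):
        return "isosceles"
    return "scalene"

print(classify(3, 4, 5))    # scalene
print(classify(5, 5, 5))    # equilateral
print(classify(5, 5, 8))    # isosceles
print(classify(1, 2, 9))    # not a triangle
```

Note that `eq` can never have exactly two True entries (two equalities force the third), which is precisely why three of the 2^3 = 8 combinations are "impossible."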
Decision Table Based Testing Example
• Advantages: (check completeness & consistency)
1. Allow us to start with a “complete” view, with no consideration of dependence
2. Allow us to look at and consider “dependence,” “impossible,” and “not relevant”
situations and eliminate some test cases.
3. Allow us to detect potential error in our Specifications
Decision Table Based Testing Example
• Disadvantages:
1. Need to decide (or know) what conditions are relevant for testing - - - this may
require Domain knowledge
• e.g. need to know leap year for “next date” problem in the book
2. Scaling up can be massive: 2^n rules for n conditions - - - that's if the conditions are
binary, and it gets worse if the values are more than binary