CS608

Module 1 Introduction

Module 2 Motivation for Software Testing


Module 3 Sources of problems
Module 4 Working definition of software reliability and software testing
Module 5: What is a software fault, error, bug, failure or debugging
Module 6 Software testing and Software development lifecycle
Module 7 Software testing myths
Module 8 Goals and Limitations of Testing
Module 9 Software Quality Attributes
Module 10: Quality Control vs. Quality Assurance
Module 11: Product Quality and Process Quality
Module 12: Software Verification and Validation
Module 13: Difference between Validation and Verification
Module 14: V and V Process
Module 15: Critical System Validation
Module 16: Reliability Validation, Safety Assurance, Security Assessment
Module 17: Testing Philosophies
Module 18: Testing Strategies
Module 19: Analytical, model based, process oriented…
Module 20: Testing Comprehensiveness
Module 21: Testing Phases
Module 22: Unit Testing
Module 23: Integration testing
Module 24: System Testing
Module 25: Alpha and Beta Testing
Module 26: Focus, artifacts and the big picture
Module 27: Structural Testing Techniques
Module 28: Test selection and adequacy criteria
Module 29: Parts of a graph
Module 30: Control flow graph based testing
Module 31: Control flow graph
Module 32: Control flow graph examples

Module 33: From CFG to Path Selection


Module 34: From Path to Test Case
Module 35: Coverage criteria for CFG based testing
Module 36: Dataflow Testing
Module 37: Dataflow Testing - anomalies
Module 38: Dataflow Testing Coverage Criteria
Module 39: Program Slice based Testing
Module 40: Program Slice – examples
Module 41: Program Slice – Uses in Testing
Module 42: Control-flow graph based Testing - Lab
Module 43: Unit Testing Lab Preparation
Module 44: Unit Testing – Test Case Design
Module 45: Unit Testing with Microsoft Visual Studio
Module 46: Coverage with Microsoft Visual Studio
Module 47: Integration Testing
Module 48: Functional decomposition based Integration
Module 49: Call Graph Based Integration Testing
Module 50: Path based Integration
Module 51: System Testing
Module 52: System Testing Aspects
Module 53: System Testing – Non functional aspects
Module 54: System Testing – Functional Aspects
Module 55: Use Case Analysis based System Testing
Module 56: Use case analysis - example
Module 57: Specification based testing
Module 58: Scenario-based Testing
Module 59: Scenario based testing - example
Module 60: Equivalence Class Testing
Module 61: Weak and Strong Normal and Robust Equivalence
Module 62: Equivalence Class (with boundary value) Example
Module 63: Decision Table Based Testing
Module 64: Decision Table Based Testing Example
Software Verification
and Validation

Lecture No. 1
Module 1
Introduction
Software Verification and Validation
Agenda
1. Introduction.
2. Motivation for software
testing
3. Sources of problems
4. Working definition of
reliability and software
testing
5. What is a software fault,
error, bug, failure or
debugging
Software Verification and Validation
Agenda
6. Software testing and
Software lifecycle
7. Software testing myths
8. Goals and Limitations of
Testing
Module 2
Motivation for Software
Testing
Motivation for Software Testing
1. Software today:
 Software systems are
increasingly getting
complex.
 Size of the software
 Time to market
 Increasing emphasis on
GUI component
 Are becoming defective
• How many?
• What kind?
Motivation for Software Testing
2. Several reasons
contribute to these defects,
e.g.,

 Poor Requirements
elicitation: Erroneous,
incomplete,
inconsistent
requirements.

 Inadequate Design:
Fundamental design flaws
in the software.
Motivation for Software Testing
2. Several reasons
contribute to these defects,
e.g.,

 Improper Implementation:
Mistakes in chip
fabrication, wiring,
programming faults,
malicious code.

 Defective
Support
Systems:
Poor programming
languages, faulty
compilers and debuggers,
misleading development
tools.
Motivation for Software Testing
2. Several reasons
contribute to these defects,
e.g.,

 Inadequate Testing of
Software: Incomplete
testing, poor
verification, mistakes in
debugging.
Motivation for Software Testing
2. Several reasons
contribute to these defects,
e.g.,

 Evolution: Sloppy
redevelopment or
maintenance,
introduction of new flaws
in attempts to fix old
flaws, incremental
escalation to inordinate
complexity.
Motivation for Software Testing
3. Defective software
contributes to several
issues; examples
include:

 Faulty
Communications: Loss
or corruption of
communication media,
non delivery of data.
 Space Applications: Lost
lives, launch delays.
 Defense systems:
Misidentification of friend
or foe.
Motivation for Software Testing
3. Defective software
contributes to several
issues; examples
include:
 Transportation: Deaths,
delays, sudden
acceleration, inability to
brake.
 Safety-critical
applications: Death,
injuries.
 Health care applications:
Death, injuries, power
outages, long-term health
hazards (radiation).
Motivation for Software Testing
 Money Management:
Fraud, violation of
privacy, shutdown of
stock exchanges and
banks, negative interest
rates.

 Control of Elections:
Wrong results
(intentional or unintentional).
Motivation for Software Testing
 Control of Jails:
Technology-aided escape
attempts and successes,
accidental release of
inmates, failures in
software controlled locks.

 Law
Enforcement:
False arrests and
imprisonments.
Motivation for Software Testing
1. We consider some
examples of software
failure that resulted or
could have resulted in
human and/or
financial losses
2. Examples of human losses:
 In Texas, 1986, a man
received between 16,500-
25,000 rads in less than 1
sec, over an area of about 1
cm.
Motivation for Software Testing
 He lost his left arm, and died
of complications 5 months
later.
 In Texas, 1986, a man
received at least 4,000 rads
in the right temporal lobe of
his brain.
 The patient eventually died
as a result of the overdose.
Motivation for Software Testing
3. Examples of financial
losses:
 A group of hacker-
thieves hijacked the
Bangladesh Bank
system to steal funds.
 The group successfully
transferred $81
million in four
transactions, before
making a spelling
error that tipped off
the bank, causing
another $870 million
in transfers to be
canceled.
Motivation for Software Testing
 NASA Mars Polar Lander,
1999
 On December 3, 1999,
NASA's Mars Polar
Lander disappeared
during its landing
attempt on the Mars
surface.
 A Failure Review Board
investigated the failure
and determined that the
most likely reason for the
malfunction was the
unexpected setting of a
single data bit.
 The problem wasn't caught
by internal tests.
Motivation for Software Testing
 Malaysia Airlines jetliner, August 2005
 As a Malaysia Airlines jetliner cruised from Perth,
Australia, to Kuala Lumpur, Malaysia, the autopilot system
malfunctioned.
 The captain disconnected the autopilot, eventually regained
control, and manually flew the 177 passengers safely back
to Australia.
 Investigators discovered that a defective software
program had provided incorrect data about the aircraft’s
speed and acceleration, confusing the flight computers.
 There are countless such examples…
Module 3
Sources of problems
Sources of Problems
1. Software does not do
something that specification
says it should do.
2. Software does
something that
specification says it
should not do.
3. Software does something
that specification does not
mention.
Sources of Problems
4. Software does not do
something that product
specification does not
mention but should.
5. The software is difficult to
understand, hard to use,
slow …
6. Failures result due to:
1. Lack of logic
2. Inadequate testing of
software under test
(SUT)
3. Unanticipated use of
application
Sources of Problems
1. Fixing these errors or bugs
costs time and money; the
cost escalates in the following
manner:
 Cost to fix a bug increases
exponentially (10x)
• i.e., it increases tenfold
from one development
phase to the next
 E.g., a bug found during
specification costs $1 to fix.
 … if found in design cost is
$10
 … if found in code cost is
$100
 … if found in released
software cost is $1000
Module 4
Working definition of software
reliability and software testing
Definition: Software Reliability
1. Is Bug Free
Software Possible?
1. We have human factors
2. Specification-
Implementation
mismatches
3. Discussed in detail under
failure reasons
2. We are releasing
software that is full of
errors, even after doing
sufficient testing
Definition: Software Reliability
3. No software would ever be
released by its developers
if they are asked to certify
that the software is free of
errors
4. Software reliability is one of
the important factors of
software quality.
Definition: Software Reliability
5. Other factors are
understandability,
completeness,
portability, consistency,
maintainability, usability,
efficiency, etc.
6. These quality factors are
known as non-functional
requirements for a software
system.
Definition: Software Reliability
1. Software reliability is defined as:
“The probability of failure-
free operation for a specified
time in a specified
environment”
Definition: Software Testing
1. Goal of software testing is:
1. … to find bugs
2. … as early in the software development processes as possible
3. … and make sure they get fixed.
2. We define software testing as: [Reference Book]
“Testing is the process of demonstrating that errors are not present”
OR
“The purpose of testing is to show that a program performs its intended
functions correctly”
OR
“Testing is the process of establishing confidence that a program does
what it is supposed to do”
3. Another definition [Myers, 2004]
“Testing is the process of executing a program with the intent of finding
faults”
Module 5: What is a software fault, error,
bug, failure or debugging
Fault, error, bug, failure …

1. Some definitions:
 Error: A measure of the
difference between the
actual and the ideal.
 Fault: A condition that
causes a system to fail
in performing its
required function.
Fault, error, bug, failure …

1. Some definitions:
 Error: A measure of the difference between the actual and the ideal.
 Fault: A condition that causes a system to fail in performing
its required function.
 Failure: Inability of a
system or component
to perform a required
function according to
its specifications.
 Debugging: Activity by
which faults are
identified and
rectified.
Fault, error, bug, failure …
1. Faults have different levels of severity:
 Critical. A core functionality of the system fails or the system
doesn’t work at all.
 Major. The defect impacts basic functionality and the system
is unable to function properly.
 Moderate. The defect causes the system to generate
false, inconsistent, or incomplete results.
 Minor. The defect impacts the business but only in very few cases.
 Cosmetic. The defect is only related to the interface
and appearance of the application.
2. While testing, we attribute different outcomes to
different severities
Fault, error, bug, failure …
1. Test case: Inputs to test the program and the predicted
outcomes (according to the specification). Test cases are
formal procedures:
 inputs are prepared
 outcomes are predicted
 tests are documented
 commands are executed
 results are observed and evaluated
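As a minimal C sketch of these steps (the routine under test, my_abs, and the test values are illustrative assumptions, not from the lecture), the test cases below pair prepared inputs with predicted outcomes; the commands are executed and the results observed and evaluated.

#include <assert.h>
#include <stdio.h>

/* Hypothetical unit under test: an absolute-value routine. */
static int my_abs(int x) { return x < 0 ? -x : x; }

int main(void) {
    /* Each test case pairs a prepared input with a predicted outcome. */
    struct { int input; int expected; } tests[] = {
        { 5, 5 }, { -5, 5 }, { 0, 0 }
    };
    int n = sizeof tests / sizeof tests[0];
    for (int i = 0; i < n; i++) {
        int actual = my_abs(tests[i].input);  /* command is executed           */
        assert(actual == tests[i].expected);  /* result observed and evaluated */
    }
    printf("all %d test cases passed\n", n);
    return 0;
}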
Fault, error, bug, failure …
1. All of these steps are subject
to mistakes.
When does a test “succeed”?
“fail”?
2. Test suite: A collection of
test cases
Testing oracle: a program,
process, or body of data which
helps us determine whether the
program produced the correct
outcome.
 Oracles are a set of
input/expected output
pairs.
Fault, error, bug, failure …
1. Test data: Inputs which have been devised to test the system.
2. Test cases: Inputs to test the system and the predicted
outputs from these inputs if the system operates according to
its specification
3. Outcome: What we expect to happen as a result of the test.
In practice, outcome and output may not be the same.
 For example, the fact that the screen did not change as a
result of a test is a tangible outcome although there is no
output.
4. In testing we are concerned with outcomes, not just outputs.
5. If the predicted and actual outcome match, can we say that
the test has passed?
Fault, error, bug, failure …
1. Expected Outcome: is the
expectation that we
associate with execution
response of a particular test
execution
2. Sometimes, specifying the
expected outcome for a
given test case can be a tricky
business!
Fault, error, bug, failure …
 For some applications
we might not know what
the outcome should be.
 For other applications
the developer might have
a misconception
 Finally, the program may
produce too much output
to be able to analyze it in a
reasonable amount of time.
Fault, error, bug, failure …
 In general, this is a fragile
part of the testing
activity, and can be very
time consuming.
 In practice, this is an area
with a lot of hand-
waving.
 When possible, automation
should be considered as a
way of specifying the
expected outcome, and
comparing it to the actual
outcome.
End
Module 6
Software testing and Software development
lifecycle
Software Testing - development lifecycle

[Figure: Software Development Lifecycle]
Software Testing - development lifecycle

 Code and Fix


 Waterfall
 Spiral
…
Software Testing - development lifecycle
1. Software testing is a critical element of software quality
assurance and represents the ultimate review of:
• specification
• design
• coding
2. Software life-cycle models (e.g., waterfall) frequently include
software testing as a separate phase that follows
implementation!
3. Contrary to life-cycle models, testing is an activity that must
be carried out throughout the life-cycle.
4. It is not enough to test the end product of each phase.
Ideally, testing occurs during each phase.
5. This gives rise to the concept of verification and validation
Software Testing - development lifecycle

[Figure: Software Development Lifecycle]
Module 7
Software testing myths
Software testing myths
1. If we were really good at
programming, there would
be no bugs to catch. There
are bugs because we are bad
at what we do.
2. Testing implies an
admission of failure.
3. Tedium of testing is a
punishment for our
mistakes.
Software testing myths
4. All we need to do is:
 concentrate
 use structured programming
 use OO methods
 use a good
programming language
…
Software testing myths
1. Human beings make
mistakes, especially when
asked to create complex
artifacts such as software
systems.
2. Studies show that even
good programmers produce 1-3
bugs per 100 lines of code.
Software testing myths
1. Software testing:
 A successful test is a
test which discovers one
or more faults.
 Only validation
technique for non-
functional requirements.
 Should be used in
conjunction with
static verification.
Software testing myths
2. Defect testing:
 The objective of
defect testing is to
discover defects in
programs.
 A successful defect test is
a test which causes a
program to behave in an
anomalous way.
 Tests show the presence
not the absence of
defects.
Module 8
Goals and Limitations of Testing
Goals and Limitations of Testing
1. Discover and prevent bugs.
2. The act of designing tests
is one of the best bug
preventers known. (Test,
then code philosophy)
3. The thinking that must be
done to create a useful test
can discover and eliminate
bugs in all stages of
software development.
Goals and Limitations of Testing
4. However, bugs will always
slip by, as even our test
designs will sometimes be
buggy.
5. Most widely-used activity
for ensuring that software
systems satisfy the specified
requirements.
6. Consumes substantial
project resources. Some
estimates:
~50% of development costs
Goals and Limitations of Testing
• Testing cannot occur until
after the code is written.
• The problem is big!
• Perhaps the least
understood major SE
activity.
• Exhaustive testing is not
practical even for the
simplest programs. WHY?
Goals and Limitations of Testing
• Even if we “exhaustively” test
all execution paths of a
program, we cannot
guarantee its correctness.
• The best we can do is
increase our confidence!
• “Testing can show the
presence of bugs, not their
absence.”
Goals and Limitations of Testing
1. Testers do not have
immunity to bugs.
2. Slight modifications – after a
program has been tested –
invalidate (some or even all
of) our previous testing
effort.
3. Automation is critically
important.
4. Unfortunately, there are only
a few good tools, and in
general, effective use of
these good tools is very
limited.
End
Software Verification
and Validation

Lecture No. 2
Module 9
Software Quality
Attributes
Software Quality Attributes
 What is Quality
 Conformance to explicitly
stated functional and
performance requirements,
explicitly documented
development standards,
implicit characteristics that
are expected of all
professionally developed
software (Pressman)
 The degree to which a
system, component, or
process meets specified
requirements.
Software Quality Attributes
 The degree to which a
system, component, or
process meets customer
or user needs or
expectations (IEEE)
 Why Quality
 You would not like MS
Office to only occasionally
save your documents
 Competitiveness in the
international market
Software Quality Attributes
 Cost of Quality
 Software bugs, or errors,
are so prevalent and so
detrimental that they cost
the U.S. economy an
estimated $59.5 billion
annually, or about 0.6
percent of gross domestic
product. (US Dept of
Commerce, 2002)
Software Quality Attributes
 Quality (Our working
definition)
 The totality of features and
characteristics of a product,
process, or service that bear
on its ability to satisfy
stated or implied needs.
Software Quality Attributes
 The default document for
quality:
• We need to know where
the stated or implied
needs are
• Against which we test
whether they are met
• We refer to these quality
features and
characteristics as Quality
Attributes
• They are part of the Software
Requirements Specification
(SRS) document
Software Quality Attributes
 Non-functional Requirements
 Product transition related
 Portability
 Interoperability
 Reusability
 Product Operations related
 Reliability
 Robustness
 Efficiency
 Usability
Software Quality Attributes
 Safety
 Security
 Fault-tolerance
 Product revision related
 Maintainability
…
 We need to know/gauge,
 from our default
document for quality,
 the requirements for
assurance of quality and
reliability
 We need to plan accordingly
Software Quality Attributes
 Quality attribute examples:
 System should be capable
of processing 10
transactions per minute
 The system should only
allow authorized
access.
 Users should be forced
to change their
passwords every tenth
usage
 System should maintain
logs of disk activity
 …
END
Module 10:
Quality Control vs. Quality Assurance
Quality Control vs. Quality Assurance
 Quality Assurance
• It is fault prevention
through process design
and auditing
 Creating processes,
procedures, tools, jigs
etc., to prevent faults
from occurring
 Prevent as much as
possible defect injection
 Process oriented
Quality Control vs. Quality Assurance
 Quality Control
• It is fault/failure detection
through static and/or
dynamic testing of
artefacts
• It is examining of
artefacts against pre-
determined criteria to
measure conformance
• Product oriented
Quality Control vs. Quality Assurance
Quality Assurance (QA) vs. Quality Control (QC):
• QA is a procedure that focuses on providing assurance that the quality requested will be achieved; QC is a procedure that focuses on fulfilling the quality requested.
• QA aims to prevent the defect; QC aims to identify and fix defects.
• QA is a method to manage the quality (verification); QC is a method to verify the quality (validation).
• QA does not involve executing the program; QC always involves executing a program.
• QA is a preventive technique; QC is a corrective technique.
• QA is a proactive measure; QC is a reactive measure.
Quality Control vs. Quality Assurance

• QA is the procedure to create the deliverables; QC is the procedure to verify the deliverables.
• QA is involved in the full software development life cycle; QC is involved in the full software testing life cycle.
• In order to meet the customer requirements, QA defines standards and methodologies; QC confirms that the standards are followed while working on the product.
• QA is performed before Quality Control; QC is performed only after the QA activity is done.
Quality Control vs. Quality Assurance
• QA's main motive is to prevent defects in the system; it is a less time-consuming activity. QC's main motive is to identify defects or bugs in the system; it is a more time-consuming activity.
• QA ensures that everything is executed in the right way, which is why it falls under verification activity; QC ensures that whatever we have done is as per the requirement, which is why it falls under validation activity.
• QA requires the involvement of the whole team; QC requires the involvement of the Testing team.

END
Module 11:
Product Quality and Process Quality
Product Quality and Process Quality
1. Product Quality
1. Product quality means
to incorporate features
that have a capacity to
meet consumer needs
(requirements)
2. It gives customer
satisfaction by improving
products (goods) and
making them free from
any deficiencies or
defects.
Product Quality and Process Quality
3. There are various
important aspects
which define a product
quality like:
1. Storage, Quality
of Design
2. Quality of Conformance
3. Reliability, Safety
Product Quality and Process Quality
1. Process Quality
1. Process quality is defined
as all the steps used in
manufacturing the
final product.
2. Its focus is on all
activities and steps used
to achieve maximum
acceptance regardless
of the final product.
Product Quality and Process Quality
1. Difference
1.Process quality is one of a
number of contributors
to product quality.
2.Product quality is the
overall quality of the
product in question:
how well it conforms to
the product
requirements,
specifications, and
ultimately customer
expectations.
Product Quality and Process Quality
1. Process quality focuses
on how well some part of
the process of developing
a particular product, and
getting it into the
customer’s hands, is
working.
2. The process being
analyzed in a particular
case may have a very
broad scope, or it may
focus in on minute
details of a single step.
Product Quality and Process Quality
1. Product Quality – Degree
of conformance to
specification as defined
by the Customer
2. Process Quality –
Degree of variability in
process execution and
the minimization of the
Cost of Quality

END
Module 12:
Software verification and validation
Software verification and validation

 Verification:
 Process of assessing a
software product or system
during a particular phase to
determine if it meets the
requirements/conditions
specified at beginning of
that phase.
 It is static and includes
substantiation of necessary
artifacts, code, design and
program.
Software verification and validation

 This phase includes all the
activities that ensure the
delivery of a high-quality
product, such as design
analysis, inspection and
specification analysis.
 Some common methods of
conducting verification are
meetings, inspection and
review.
Software verification and validation

 Validation:
 Validation is the process
of testing a software
product after its
development process gets
completed
 It is conducted to
determine whether a
particular product
meets the requirements
that were specified for
the product
Software verification and validation

 It is a dynamic process
that evaluates a
product on parameters
that help assess
whether it meets the
expectations and
requirements of the
customers
 Some common methods
of conducting validation
are white box testing,
black box testing and
gray box testing.
End
Module 13:
Difference between Validation
and Verification
Difference between Validation & Verification

 We have gone through


 Quality control
 Quality assurance
 We have covered
 Verification
 Validation
 Let us have a look at
difference between
validation and
verification
Difference between Validation & Verification
• Verification: Are we building the system right? Validation: Are we building the right system?
• Verification is the process of evaluating the products of a development phase to find out whether they meet the specified requirements. Validation is the process of evaluating software at the end of the development process to determine whether the software meets the customer expectations and requirements.
• Activities involved in Verification: reviews, meetings and inspections. Activities involved in Validation: testing such as black box testing, white box testing, gray box testing, etc.
Difference between Validation & Verification
• Verification: Are we building the system right? Validation: Are we building the right system?
• The objective of Verification is to make sure that the product being developed is as per the requirements and design specifications. The objective of Validation is to make sure that the product actually meets the user’s requirements, and to check whether the specifications were correct in the first place.
• Verification is carried out before Validation. The Validation activity is carried out just after Verification.
Difference between Validation & Verification
• Verification is carried out by the QA team to check whether the implemented software is as per the specification document or not. Validation is carried out by the testing team.
• Execution of code does not come under Verification. Execution of code comes under Validation.
• The Verification process explains whether the outputs are according to the inputs or not. The Validation process describes whether the software is accepted by the user or not.
Difference between Validation & Verification
• Items evaluated during Verification: plans, requirement specifications, design specifications, code, test cases, etc. Item evaluated during Validation: the actual product or software under test.
• The cost of errors caught in Verification is less than that of errors found in Validation; the cost of errors caught in Validation is more than that of errors found in Verification.
• Verification is basically manual checking of documents and files such as requirement specifications. Validation is basically checking of the developed program based on the requirement specification documents and files.
Module 14:
V and V process
V and V process
 Verification:
"Are we building the
product right?"
 The software should
conform to its
specification.

 Validation:
"Are we building the right
product?"
 The software should do
what the user really
requires.
V and V process
 Validation: process of evaluating a system or a component
during or at the end of development process to determine
whether it satisfies specified requirements. That is: are we
building the right product

 Verification: process of evaluating a system or a component to


determine whether products of a given development phase
satisfy the conditions imposed at the start of that phase. That
is: are we building the product right
Static Vs. Dynamic V and V
 Static vs. Dynamic V & V
 Code and document inspections - Concerned with the analysis of
the static system representation to discover problems (static v &
v)
 May be supplemented by tool-based document and code analysis

 Software testing - Concerned with exercising and


observing product behaviour (dynamic v & v)
 The system is executed with test data and its
operational behaviour is observed
Static Vs. Dynamic V and V
• Verification: formal manipulation; prove properties; performed on a model / artefacts; lives in the formal world. Verification is only as good as the validity of the model on which it is based.
• Validation: experimentation; show errors; performed on the concrete system; lives in the concrete world. Testing can only show the presence of errors, not their absence.
V and V process
 Verification and validation should establish confidence that
the software is fit for its purpose.
 This does NOT mean completely free of defects.
 Rather, it must be good enough for its intended use. The type
of use will determine the degree of confidence that is
needed.

 This leads us to a review of the definition of testing, which we have
already discussed in the previous lecture. That is:
 Software testing is the process of analyzing a software item to
detect the differences between existing and required
conditions (that is, bugs) and to evaluate the features of the
software item
V and V process
 Is a whole life-cycle process -
V & V must be applied at each
stage in the software process.
 Example: Peer document
reviews
 Has two principal objectives
 The discovery of defects in
a system
 The assessment of
whether or not the system
is usable in an operational
situation
End
Module 15:
Critical System
Validation
Critical System Validation

 Validating the reliability,


safety and security of
computer-based systems

 Reliability validation
 Does the measured
reliability of the
system meet its
specification?

 Is the reliability of the


system good enough
to satisfy users?
Critical System Validation

 Safety validation
 Does the system
always operate in such
a way that accidents do
not occur or that
accident consequences
are minimised?
Critical System Validation

 Security validation
 Is the system and
its data secure
against external
attack?
END
Software Verification
and Validation

Lecture No. 3
Software Verification and Validation
Agenda
1. Reliability Validation, Safety
Assurance, Security assessment,
2. Testing philosophies
3. Testing strategies
4. top down testing strategy
5. bottom up testing strategy
6. Analytical, model based, process oriented
strategies
7. Testing Comprehensiveness
Module 16:
Reliability Validation, Safety Assurance,
Security assessment
Reliability Validation
Critical systems validation (as
discussed)

 Validating the reliability, safety


and security of computer-based
systems
 Includes a list of
 Static techniques
 Design reviews
 program inspections
 Mathematical arguments
and proof
…
Reliability Validation
 Dynamic techniques
 Statistical testing
 Scenario-based testing
 Run-time checking
…
Reliability Validation
 Reliability Validation
 Comes under dynamic validation techniques
 We test the system while executing under test conditions
 We analyze system under test (SUT) outside its operational environment
 It is run-time checking of SUT where we test during execution that the
system is operating within a dependability ‘envelope’
 Reliability validation involves exercising the program to assess whether or
not it has reached the required level of reliability
 Where software reliability as already defined is “The probability of failure
free operation for a specified time in a specified environment”
Reliability Validation
 Cannot be included as part of
a normal defect testing
process because data for
defect testing is (usually)
atypical of actual usage data
 Statistical testing must be used
where a statistically significant
data sample based on
simulated usage is used to
assess the reliability
Reliability Validation
 Statistical Testing
 Testing software for reliability
rather than fault detection
 Measuring the number of
errors allows the reliability of
the software to be predicted.
 Test data from our test process
is used for the purpose
Reliability Validation
 Overall Process
 We construct an operational
profile which is a set of test
data whose frequency matches
the actual frequency of these
inputs from ‘normal’ usage of
the system.
 Construct test data reflecting the
operational profile
Reliability Validation
 Test the system and observe the
number of failures and the times
of these failures
 Compute the reliability after a
statistically significant number
of failures have been observed
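The computation in the final step can be sketched in C. This is a minimal sketch under loud assumptions: the five cumulative failure times are invented for illustration, and the exponential model R(t) = exp(-lambda*t) is one common way to turn an estimated failure rate into the "probability of failure-free operation for a specified time" defined earlier.

#include <stdio.h>
#include <math.h>

/* Minimal sketch: estimating reliability from observed failure times.
   The data below are hypothetical; real statistical testing requires a
   statistically significant sample drawn from an operational profile. */
int main(void) {
    double failure_times[] = { 3600, 9800, 21000, 40000, 62000 }; /* s */
    int n = sizeof failure_times / sizeof failure_times[0];

    double mttf = failure_times[n - 1] / n;  /* total time / failures  */
    double lambda = 1.0 / mttf;              /* estimated failure rate */

    /* Assumed exponential model: R(t) = exp(-lambda * t). */
    printf("MTTF ~ %.0f s, R(1 hour) ~ %.3f\n", mttf, exp(-lambda * 3600.0));
    return 0;
}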
Safety Assurance
 Design validation
 Checking the design to ensure that hazards do not arise or that they can be
handled without causing an accident.
 Code validation
 Testing the system to check the conformance of the code to its specification
and to check that the code is a true implementation of the design.
 Run-time validation
 Designing safety checks while the system is in operation to ensure that it
does not reach an unsafe state.
Security Assessment
 Security validation has something in common with safety validation
 It is intended to demonstrate that the system cannot enter some state (an
unsafe or an insecure state) rather than to demonstrate that the system can do
something
 However, there are differences
 Safety problems are accidental; security problems are deliberate
 Security problems are more generic; Safety problems are related to the
application domain
Module 17:
Testing Philosophies
Testing Philosophies
 White-box (Glass-box or
Structural) testing: Testing
techniques that use the source
code as the point of reference for
test selection and adequacy.
 Also known as program-based
testing, structural testing
Testing Philosophies
 Black-box (or functional) testing:
 Testing techniques that use
the specification as the point
of reference for test selection
and adequacy.
 Also known as specification-based
testing, functional testing

 Which approach is superior?


Testing Philosophies – White-box
 White-box testing
considers implementation
details.
 exercises different control and
data structures used in program.
 Requires programmer’s knowledge of
the source code.
 Criteria are quite precise as they are
based on program structures.
 Assumes that the source code fully
implements the specification.
Testing Philosophies – Black-box
 Characteristics of Black-box testing:
 Program is treated as a black
box.
 Implementation details do not
matter.
 Requires an end-user
perspective.
 Criteria are not precise.
 Test planning can begin early.
Testing Philosophies – Comparison
 The real difference between white-
box and black-box testing is how
test cases are generated and how
adequacy is determined.
 In both cases, usually, the
correctness of the program
being tested is done via
specifications.
 Neither can be exhaustive.
 Each technique has its strengths
and weaknesses.
 Should use both. In what order?
End
Module 18:
Testing
Strategies
Testing Strategies
 We need to understand /
formulate quality objectives
 We need to identify the
appropriate testing phases for the
given release, and the types of
testing that need to be performed.
 We then prioritize / organize the
testing activities.
 Plan how to deal with “cycles”.
 Top-down and bottom-up strategies
Top-down Integration strategies
 incremental technique of building
a program structure
 incorporates the modules while
moving downward, beginning
with the main control in the
hierarchy
 Sub-modules are then integrated
to the main module using either a
depth-first or breadth-first method
 top-down integration verifies
significant control and decision
points earlier in the test
process
Top-down Integration strategies
 Integration process involves the following steps in the top-down approach:
 Starting with the major control module, stubs are then replaced for the
components residing below the main modules.
 The replacement strategy of the subordinate stub relies on the type of
integration approach followed (i.e., depth and breadth first), but only one
stub is allowed to be replaced with actual components at a time.
 After the integration of the components, the tests are carried out.
 As a set of test is accomplished, the remaining stub is replaced with the
actual component.
 In the end, regression testing is conducted to assure the absence of
new errors.
Bottom-up Integration Strategies
 Bottom-up integration testing
 starts with the construction of the fundamental modules (i.e., lowest level
program elements).

 integrates the components residing at the lowest level, where the
required processing is already available, eliminating the need for stubs.

 As the integration goes towards the upper direction, the requirement of


the separate test drivers decreases.

 Therefore, the amount of overhead is also reduced as compared to the
top-down integration testing approach.
Bottom-up Integration Strategies
 Bottom-up integration includes the following steps:
 It merges the low-level elements into clusters (also known as builds) which execute
a certain software sub-function.
 The driver (Control program) is used in the bottom-up integration to arrange
test case input and output.
 Then the cluster is tested.
 Clusters are incorporated while going upwardly in the program structure
and drivers are eliminated.
Stubs and Harnesses

• Stubs and Drivers are two types of test harness
• A test harness is a collection of software and test data configured to test programs by
simulating different sets of conditions, while monitoring behavior and outputs
• Stubs are used in top-down testing, when you have major modules ready to
test, but sub-modules not ready yet.
• Stubs are "called" programs
Stubs and Harnesses

• Drivers are used in the bottom-up testing approach.
• Drivers are the ones which are the "calling" programs.
• Drivers are dummy code, which is used when the sub-modules are ready but the
main module is still not ready
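To make the stub/driver distinction concrete, here is a minimal C sketch; the module names and the canned discount value are hypothetical, not from the slides. The stub is the "called" program standing in for an unfinished sub-module (top-down testing), and main() plays the role of a driver, the "calling" program that feeds a ready sub-module with test input (bottom-up testing).

#include <stdio.h>

/* Stub: stands in for a sub-module that is not ready yet. */
double get_discount_rate(int customer_id) {
    (void)customer_id;   /* real lookup not implemented yet         */
    return 0.10;         /* canned value, enough to test the caller */
}

/* Upper-level module under test; calls the stub as if it were real. */
double net_price(double gross, int customer_id) {
    return gross * (1.0 - get_discount_rate(customer_id));
}

/* Driver: the "calling" program; prepares input and checks output. */
int main(void) {
    double result = net_price(100.0, 42);
    printf("expected 90.00, got %.2f\n", result);
    return (result > 89.99 && result < 90.01) ? 0 : 1;
}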
Testing Strategies – Comparison
• Basic: Top-down integration testing uses stubs as momentary replacements for the invoked modules and simulates the behavior of the separated lower-level modules. Bottom-up integration testing uses test drivers to initiate and pass the required data to the lower-level modules.
• Beneficial: Top-down, if the significant defect occurs toward the top of the program. Bottom-up, if the crucial flaws are encountered towards the bottom of the program.
Testing Strategies – Comparison
• Approach: In top-down, the main function is written first, then the subroutines are called from it. In bottom-up, modules are created first and then integrated with the main function.
• Implemented in: Top-down suits structure/procedure-oriented programming languages; bottom-up suits object-oriented languages.
Module 19:
Analytical, model based, process oriented
strategies
Analytical, model based, process oriented…

• We have discussed test strategies in a broad manner


• Let us discuss a couple of specific test strategies
Analytical, model based, process oriented…
 Analytical testing strategy
 Includes strategies e.g.,
 Requirements based testing
 Risk based testing
 in case of testing based on
requirements, requirements
are analyzed to derive the test
conditions.
 Then tests are designed,
implemented and executed to
meet those requirements.
Analytical, model based, process oriented…
 Model-based testing strategy
 Two important aspects:
 Considers model of SUT
 Set of test case generation
guidelines or principles
 We generate a model
representing system behavior
when stimuli are provided in the
form of:
 State machine
 Transition system
 Graph transformation system
 Etc.
Analytical, model based, process oriented…
 Methodical testing strategy
 test teams follow a pre-defined
quality standard
 Tests functions and status of
software according to checklist,
based on the user
requirements.
 used to test the functionality,
reliability, usability and
performance of the software.
 Use standard set of test
conditions, one of its benefits is
consistent testing of defined
attributes
Analytical, model based, process oriented…
• Process-oriented testing strategy
• Predefined processes, e.g., medical
systems following the US Food &
Drug Administration.
• Testers follow these process
guidelines, set by standards or panels
of industry experts, to identify test
conditions and define test cases.
• The strategy typically addresses
documentation, test basis and
test oracle, and the test team
Analytical, model based, process oriented…
• Strategy selection
• The testing strategy selection
may depend on these factors:
• Is the strategy a short term or
long term one?
• Organization type and size
• Project requirements – Safety
and security related
applications require rigorous
strategy
• Product development model.
End
Module 20:
Testing Comprehensiveness
Testing Comprehensiveness
 Is complete testing possible?
 NO.
 Complete testing is both
practically and
theoretically impossible for
non-trivial software.
 A complete functional test would
consist of subjecting a program to
all possible input streams.
 If a program has an input stream
of 10 characters (each one of 256
possible values), it would require
256^10 = 2^80 tests.
Testing Comprehensiveness
 At 1 microsecond/test, exhaustive
functional testing needs more
time than twice the current age of
universe
 Therefore, we select a subset “t” of
set of all possible test cases “T”
such that t ⊆ T
 We prove sufficiency of test cases
in t by:
 Establishing that the effect of
running t and T would be the same
 We study phases of testing in the
upcoming lecture.
End
Software Verification
and Validation

Lecture No. 4
Software Verification and Validation
Agenda
1. Testing phases
2. unit testing
3. integration testing
4. system testing
5. acceptance testing
6. alpha and beta testing
7. Focus of unit, integration and (sub-
)system
8. Artifacts involved in testing
9. The big picture
Module 21:
Testing
Phases
Testing Phases
 Testing Phases
 Unit Testing
 individual units of a software
are tested.
 purpose is to validate that
each unit of the software
performs as designed.
 Integration Testing
 individual units are combined
and tested as a group.
Testing Phases
 purpose of this level of testing
is to expose faults in the
interaction between integrated
units.
 System Testing
 complete, integrated system is
tested.
 purpose of this test is to
evaluate the system’s
compliance with the specified
requirements.
Testing Phases
 Acceptance Testing
 system is tested for acceptability.
 purpose of this test is to evaluate the system’s compliance with the business
requirements and assess whether it is acceptable for delivery.
 Important: note the direction of the arrow (in the accompanying figure)
Testing Phases
 We assume at each phase of
testing that:
 Previous phase of
testing completed
 As an example:
 For integration testing,
our assumption is:
 Unit testing is done
and complete
End
Module 22:
Unit Testing
Unit Testing
 Performed by each developer.
 Scope: Ensure that each module (i.e., class, subprogram), or
collection of modules as appropriate, has been
implemented correctly.
 ‘White-box’ form of testing.
Unit Testing

[Figure: the developer runs test cases against the code and obtains test results.]
Unit Testing

[Figure: a module under unit test. Test cases exercise its interface, local data structures, boundary conditions and error-handling paths; stubs stand in for the modules it calls.]
Unit Testing
 Need an effective adequacy criterion.
 Work with source-code analysis tools appropriate for the criterion and
development environment.
 May need to develop stubs and domain-specific automation tools.
 Although a critical testing phase, it is often performed poorly …

End
Module 23:
Integration Testing
Integration testing
• System design is implemented
by group of developers.
• The logic implemented by one
developer may differ from
that of another developer.
• Sometimes a data structure or its
implementation changes when it
travels from one module to
another. Some values are
appended or removed, which
causes issues in the later
modules
Integration testing
• Modules may interact with
some third party tools or APIs
• We need to test that data
accepted by that API / tool is
correct and that the response
generated is also as expected.
• Frequent requirement changes,
misunderstood designs, etc.
Integration testing
 Performed by a small team.
 Scope: Ensure that the interfaces between components (which developers could
not test) have been implemented correctly.
 Approaches:
• “big-bang”
• Incremental construction
 Test cases have to be planned, documented, and reviewed.
 Performed in a small time-frame
Integration testing
 Top-down Integration
 Bottom-up Integration
[Figure: a module hierarchy with A at the top, B, F and G below it, C below B, and D, E at the bottom.
Top-down: the top module is tested with stubs; stubs are replaced one at a time, "depth first"; as new modules are integrated, some subset of tests is re-run.
Bottom-up: drivers are replaced one at a time; worker modules are grouped into builds and integrated; the cluster is tested.]
Integration testing
• Big-bang – Advantages:
– The whole system is available
• Big-bang – Disadvantages:
– Hard to guarantee that all
components will be ready at
the same time
– Focus is not on a specific
component
– Harder to locate errors
Integration testing
• Incremental – Advantages:
– Focus on each module leads
to better testing(?)
– Easy to locate errors
• Incremental – Disadvantage:
– Need to develop special code
(stubs and/or drivers)…
End
Module 24:
System Testing
System Testing
 Performed by a separate group
within the organization.
 Scope: Pretend we are the end-users of
the product.
 Focus is on functionality, but must
also perform many other types of
tests (e.g., recovery, performance).
System Testing
 Black-box form of testing.
 Test case specification driven by
use-cases.
 The whole effort has to be planned
(System Test Plan).
 Test cases have to be designed,
documented, and reviewed.
System Testing
 Adequacy based on requirements
coverage.
• but must think beyond
stated requirements
 Support tools have to be
[developed]/used for
preparing data, executing the test
cases, analyzing the results.
 Group members must develop
expertise on specific system features
/ capabilities.
System Testing
 Often, need to collect and
analyze project quality data.
 The burden of proof is always on the
ST group.
 Often, the ST group gets the initial
blame for “not seeing the
problem before the customer
did”..

End
Module 25:
Alpha and Beta Testing
Alpha and Beta Testing
 Depending on the system being
developed, and the organization
developing the system, other
testing phases may be appropriate:
Alpha and Beta Testing
• Alpha Testing
• is a type of acceptance testing;
• performed to identify all
possible issues/bugs before
releasing the product to
everyday users or public
• The testers are internal
employees of the organization,
mainly in-house software QA and
testing teams
Alpha and Beta Testing
• Beta Testing
• is the second phase of
software testing
• a representative sample of
the intended audience
tries the product out.
• Beta Testing of a product is
performed by real users of
the software application in
a real environment.

End
Module 26:
Focus, Artifacts and The Big Picture
Focus, artifacts and the big picture

• We explain, with the help of a diagram


• Focus of testing at various levels
• Artifacts involved
• And “the big picture”
Focus
[Diagram, repeated across three slides, relating the focus of each testing level to the people involved:
• Unit Testing – every unit is working properly; all states are represented; all transitions are implemented; the design is implemented properly. Performed by developers (Developer 1 … Developer n).
• Integration Testing – all units/modules are collaborating as per the given design. Sits on the boundary between development and quality assurance; covers the functional aspects of the application: 1. units, 2. integration.
• System Testing – all requirements are implemented; all functions are working; all non-functional requirements are taken care of. Performed by testers (Tester 1 … Tester n).
Oversight roles: Development Manager, Quality Assurance, Project Manager (PM), Configuration Manager (CM). Testing techniques apply on the boundaries between levels.]
Artifacts
[Diagram, same layout, showing the artifacts each testing level works against:
• Unit Testing – Code.
• Integration Testing – Code + Design.
• System Testing – SRS.
Roles as before: developers and testers under the Development Manager, Quality Assurance, PM and CM; techniques on the boundaries.]
The Big Picture
[Diagram, same layout, showing techniques and exit criteria per level:
• Unit Testing – techniques: 1. CFG, DFG; 2. state machine; …; exit criteria: specific coverage criteria.
• Integration Testing – techniques: 1. MM Path; …; exit criteria: specific coverage criteria.
• System Testing – techniques: 1. specification-based; 2. model-based; …; exit criteria: specific coverage criteria, confidence.
Roles and boundaries as in the previous diagrams.]
Focus, artifacts and the big picture

• We have fixed:
• Our focus
• We have pinpointed:
• Artifacts of our interest
• We have painted:
• Our big picture

End
Software Verification
and Validation

Lecture No. 5
Software Verification and Validation
Agenda
1. Structural testing techniques
2. Parts of a graph: nodes, edges, cycles
3. Control flow based testing
4. Control flow graph (CFG)
5. Example of a CFG
6. Control flow graphs of common
control structures
7. From CFG to path selection
Module 27:
Structural testing techniques
Structural Testing Techniques
• We have defined:
• A mistake in coding is called an error; an error found by a tester is called a defect;
a defect accepted by the development team is called a bug
• A test case comprises inputs, pre-conditions and expected output
• A test case may have positive or negative intent
• Structural or white-box techniques consider the code
Structural Testing Techniques
 Example
 Consider the following code with a constraint that n < 5:
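The code itself is not reproduced in this transcript; the following factorial sketch is an assumption consistent with the discussion that follows (factorial(2) = 2, factorial(3) = 6, inputs constrained to n < 5):

/* Assumed unit under test; the original slide's code is not in the
   transcript. Constraint: n < 5. */
int factorial(int n) {
    int result = 1;
    for (int i = 2; i <= n; i++)
        result = result * i;
    return result;
}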
Structural Testing Techniques
• Is the test case 2 = factorial(2) equivalent to 6 = factorial(3)?
• Which one should be included, why and which one should be excluded
• What is part of the code that was exercised, why? and why not?
• How to select test cases to test our system under test (SUT)
• Once selected, can we run all possible test cases?
• Remember, we are working in a software house
• Economic activities must justify spending on testing effort
• Each test run costs money
• Imagine, SUT is a code fragment from:
• Safety critical system
• Lab assignment
Structural Testing Techniques
• Uranium enrichment gas handling system
• Radiation treatment equipment
• Can we have
• Same number of test cases
• Many test cases with lots of repetition of values
• What to do?
• We need to devise a strategy for selecting a subset of test cases from
all possible test cases
• We need to prove that they were representative of all possible
test cases possible
• We need to prove that we have tested all aspects of SUT
Structural Testing Techniques
• When to test?
• At unit, integration, (sub-)
system, acceptance stages
• How and how much?
• We want to find out when to
stop testing
• We consider functional
aspects of our SUT while unit
and integration testing
• However, we need to define
• Test case selection
• Test adequacy
End
Module 28:
Test Selection and Adequacy Criteria
Test selection and adequacy criteria
 Test Selection and
Adequacy Criteria
• Test selection criteria
i.e., conditions that must
be fulfilled by a test.
• For example, a criterion for
a numerical program whose
input domain is the integers
• Might specify that each test
contain one positive integer,
one negative integer, and
zero.
{ 3, 0, -7 }, { 122, 0, -11 }, { 1, 0,
-1 } are three of the tests
selected by this criterion.
Test selection and adequacy criteria
• Test adequacy criteria i.e., the
properties of a program that
must be exercised to
constitute a thorough testing.
• For example, we consider
that our software has been
adequately tested:
• If the test suite causes each
method to be executed at
least once. This provides a
measurable objective for
completeness.
Test selection and adequacy criteria
 Adequacy
 Given a program P written to
meet a set of functional
requirements R = {R1, R2, …,
Rn}.
 Let T contain k tests for
determining whether or not
P meets all requirements in
R.
 Assume that P produces correct
behavior for all tests in T.
Test selection and adequacy criteria
• We now ask:
• Has P been tested
thoroughly? Or: is T
adequate?
• In software testing, the term
“thorough” and “adequate”
have the same meaning.
• Measurement of adequacy
• Adequacy is measured for
a given test set and a given
criterion.
• A test set is adequate with
respect to criterion C when
it satisfies C.
Test selection and adequacy criteria
 Measurement of adequacy
 Let
 C be a test adequacy criterion
 Test suite T be a set of test cases such that T = {t1, t2, t3, … tn}
 Program P be the implementation of Specification S
 R denotes set of requirements such that program P written to meet
functional requirements set R = {R1, R2, …, Rm}.
 Test suite T is adequate with respect to C if for each r in R there is a
test case t in T that tests the correctness of P with respect to r.
Test selection and adequacy criteria
 Given an adequacy criterion C, we derive a finite set Ce known as
the coverage domain.
 A criterion C is a white-box test adequacy criterion if the corresponding Ce
depends solely on the program P under test.
 A criterion C is a black-box test adequacy criterion if the corresponding Ce
depends solely on the requirements R for the program P under test.
 Coverage gives us a measure for test selection criteria:
 T covers Ce if, for each e' in Ce, there is a test case in T that tests e'. T is
adequate wrt C if it covers all elements of Ce.
 T is inadequate with respect to C if it covers only k < n of the n elements of Ce.
 k/n is the coverage of T wrt C. For example, if Ce contains n = 4 paths and
the tests in T traverse k = 3 of them, the coverage is 3/4 = 75%.
Test selection and adequacy criteria
 Example1 : We need to write a program that takes two integer inputs
a, and b and computes sum if a > b and difference if a < b
 Requirements specification
 Accept integer inputs
 Return sum of the two integers if the first input is greater than the second
 Return difference of the two integers if the first input is less than the second
• Consider criterion: “A test T for program (P, R) is considered adequate
if each path in P is traversed at least once.”
Test selection and adequacy criteria
int myCompute(int a, int b) {
    if (a > b)
        return a + b;
    else
        return a - b;
}

T1 = [{(2, 1), 3}]
T2 = [{(2, 1), 3}, {(0, 1), -1}]
Criterion: white-box (each path traversed at least once)
T1 is inadequate with coverage = 50%
T2 is adequate with coverage = 100%
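A minimal driver (a sketch; the second test case of T2 is the corrected one above) makes the path argument visible: input (2, 1) exercises only the a > b path, so T1 covers 1 of 2 paths (50%), while adding (0, 1) also exercises the a <= b path, bringing T2 to 100%.

#include <stdio.h>

int myCompute(int a, int b) {
    if (a > b)
        return a + b;   /* path 1: taken when a > b  */
    else
        return a - b;   /* path 2: taken when a <= b */
}

int main(void) {
    /* T1 = [{(2, 1), 3}]: only path 1 is traversed (coverage 50%).    */
    printf("myCompute(2, 1) = %d (expected 3)\n", myCompute(2, 1));
    /* T2 adds {(0, 1), -1}: path 2 is also traversed (coverage 100%). */
    printf("myCompute(0, 1) = %d (expected -1)\n", myCompute(0, 1));
    return 0;
}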
Test selection and adequacy criteria
 Example 2: We need to write a program that takes two integer inputs a and b and computes their sum if a > b and their difference if a < b.
 Requirements specification
 Accept integer inputs
 Return the sum of the two integers if the first input is greater than the second
 Return the difference of the two integers if the first input is less than the second
Test selection and adequacy criteria
 Consider the criterion: "A test T for (P, R) is adequate if, for each requirement r in R, there is at least one test case in T that tests the correctness of P with respect to r."
 Criterion: black-box
 T1 = [{(2, 1), 3}]
 T2 = [{(2, 1), 3}, {(0, 1), -1}]
 T1 is inadequate with coverage of 50% (it tests only the sum requirement), whereas T2 is adequate with coverage of 100%..
End
Module 29:
Parts of a graph
Parts of a graph
• A Control Flow Graph (CFG) is a static, abstract representation of a program.
• A CFG is a directed graph G = (N, E).
• Each node in the set N is either a statement node or a predicate node.
• A statement node represents a simple statement. Alternatively, a statement node can be used to represent a basic block.
Parts of a graph
• A predicate node represents a conditional statement.
• Each edge in the set E represents the flow of control between statements.
• Optionally, we use circles to represent statement nodes and rectangles to represent predicate nodes.
Parts of a graph
[Figure: CFG of a loop, with the node and edge parts labelled — node S0; j = 1; predicate j <= limit with T and F edges; loop body S1; increment j = j + 1 looping back; exit node Sn]
Parts of a graph
• Nodes
• Statement
• Condition
• Edges
• Convention: all edges representing "T" and all edges representing "F" should be on the same side throughout the graph, i.e., if the "T" edge is on the left in one IF statement, then it is on the left in all IF statements..
End
Module 30:
Control flow graph based testing
Control flow graph based testing
• Two kinds of basic program statements:
• Assignment statements (Ex. x = 2*y; )
• Conditional statements (Ex. if(), for(), while(), …)
• Control flow
• Successive execution of program statements is viewed as flow of control.
• Conditional statements alter the default flow.
Control flow graph based testing
• Program path
• A program path is a sequence of statements from entry to exit.
• There can be a large number of paths in a program.
• There is an (input, expected output) pair for each path.
Control flow graph based testing
• Executing a path requires invoking the program unit with the right test input.
• Paths are chosen by using the concepts of path selection criteria.
• Tools: Automatically generate test inputs from program paths
Control flow graph based testing
• A decision is a program point at which the control can diverge.
• (e.g., if and case statements).
• A junction is a program point where the control flow can merge.
• (e.g., end if, end loop, goto label)
Control flow graph based testing
• A process block is a sequence of program statements uninterrupted by
either decisions or junctions. (i.e., straight-line code).
• A process has one entry and one exit.
• A program does not jump into or out of a process..

End
Module 31:
Control flow graph (CFG)
Control flow graph
• Control-flow testing is a structural testing strategy that uses the
program’s control flow as a model.
• Control-flow testing techniques are based on judiciously selecting
a set of test paths through the program.
Control flow graph
• The set of paths chosen is used to achieve a certain measure
of testing thoroughness.
• E.g., pick enough paths to assure that every source statement is executed
as least once.
Control flow graph
scanf("%d, %d", &x, &y);
if (y < 0)
    pow = -y;
else
    pow = y;
z = 1.0;
while (pow != 0) {
    z = z * x;
    pow = pow - 1;
}
if (y < 0)
    z = 1.0 / z;
printf("%f", z);
[Figure: the corresponding CFG — scanf node; predicate y < 0 with T edge to pow = -y and F edge to pow = y; merge at z = 1.0; loop predicate pow != 0 with T edge to z = z * x and pow = pow - 1, looping back, and F edge onward; predicate y < 0 with T edge to z = 1.0 / z; printf node]
Control flow graph
• Test requirement: a test requirement is a specific element of a software artifact that a test case must satisfy or cover.
• Coverage criterion: a coverage criterion is a rule or collection of rules that impose test requirements on a test set.
Control flow graph
• Coverage: given a set of test requirements TR for a coverage criterion C, a test set T satisfies C if and only if for every test requirement tr in TR, at least one test t in T exists such that t satisfies tr.
• Coverage level: given a set of test requirements TR and a test set T, the coverage level is simply the ratio of the number of test requirements satisfied by T to the size of TR. For example, if T satisfies 6 of 8 test requirements, the coverage level is 6/8 = 75%..

End
Module 32:
Control flow graph examples
Control flow graph examples
• We consider flow graph examples for the following:
• For Loop
• Case Statement
• While Loop
Control flow graph examples
What does the CFG for the following code fragment look like?
S0;
for (j = 1; j <= limit; j = j + 1)
{
    S1;
}
Sn;
[Figure: CFG — S0; j = 1; predicate j <= limit with T edge to S1 and F edge to Sn; S1 flows to j = j + 1, which loops back to the predicate]
Control flow graph examples
S0;
switch (e)
{
    case v1: S1; break;
    case v2: S2; break;
    default: S3;
}
Sn;
[Figure: CFG — S0; multiway predicate e with edges labelled v1, v2, and default leading to S1, S2, and S3; all three merge into Sn]
Control flow graph examples
scanf("%d, %d", &x, &y);
if (y < 0)
    pow = -y;
else
    pow = y;
z = 1.0;
while (pow != 0) {
    z = z * x;
    pow = pow - 1;
}
if (y < 0)
    z = 1.0 / z;
printf("%f", z);
[Figure: the corresponding CFG, as drawn in Module 31]
Control flow graph examples
• A control flow graph follows the conventions established for flow graph development.
• These conventions must be followed.
• If there is a run of sequential statements, we represent it by a single composite node..

End
Module 33:
From CFG to Path Selection
From CFG to Path Selection
• A path is a unique sequence of executable statements from one point of the program to another point.
• In a graph, a path is a sequence (n1, n2, …, nt) of nodes such that <ni, ni+1> is an edge in the graph, ∀ i = 1, 2, …, t-1 (t > 0).
From CFG to Path Selection
• Complete path: starts with the first node and ends at the last node of the graph.
• Execution path: a complete path that can be exercised by some input data.
• Subpath: a subsequence of a path, e.g., of n1, n2, …, nt.
• Elementary path: all nodes are unique.
• Simple path: all edges are unique.
From CFG to Path Selection
[Figure: CFG with nodes 1–10 — predicate 2 (T→3, F→4); 3 and 4 merge at 5; loop predicate 6 (T→7, F→8) with 7 looping back to 6; predicate 8 (T→9, F→10)]
Paths example:
• (1,2,3,5,6,7,6,8,10) is a complete path
• (1,2,3,5,6,7,6,7,6,8,10) is a different complete path
• (1,2,3,5) is a subpath
• (1,2,3,5,6,8,10) is an elementary path
• (1,2,4,5,6,7,6,8,10) is a simple path
From CFG to Path Selection
[Figure: simple CFG — S1: max = x; predicate P1: y > x with T edge to S2: max = y and F edge to S3: return max]
The simple CFG shown here has 2 different paths:
 (S1, P1, S2, S3) when P1 is true
 (S1, P1, S3) when P1 is false
From CFG to Path Selection
How many paths?
if ((A || B) && (C || D)) {
    S1;
} else {
    S2;
}
Sn;
[Figure: CFG with short-circuit predicates A, B, C, D leading to S1 or S2, then Sn]
There are: 2 statements + Sn, 4 branches, and 7 paths (4 reaching S1 + 3 reaching S2):
(A, C, S1), (A, C, D, S1), (A, C, D, S2), (A, B, C, S1),
(A, B, C, D, S1), (A, B, C, D, S2), (A, B, S2)
From CFG to Path Selection
• Infeasible path
• A path is said to be feasible if it can be exercised by some input data; otherwise the path is said to be infeasible.
• Infeasible paths are the result of contradictory predicates.
[Figure: CFG — predicate P1: x < 10, with its T branch leading to S1; S1 flows to predicate P2: x < 20, whose T branch leads to S2 and F branch to S3; the paths merge at S4]
• The path (P1, S1, P2, S3, S4) is not feasible: taking the T branch at P1 means x < 10, so the F branch at P2 (x >= 20) cannot be taken.
• Unless S1 changes the value of x, …
Module 34:
From Path to Test Case
From Path to Test Case
• Every path corresponds to a succession of true or false values for
the predicates traversed on that path.
• A Path Predicate Expression is a Boolean expression that
characterizes the set of input values that will cause a path to be
traversed.
• Multiway branches (e.g., case/switch statements) are treated as equivalent if-then-else statements.
From Path to Test Case
• Any set of input values that satisfies ALL of the conditions of the
path predicate expression will force the routine through that path.
• If there is no such set of inputs, the path is not achievable.
From Path to Test Case
• Process of creating path expression:
• Write down the predicates for the decisions you meet along a path.
• The result is a set of path predicate expressions.
• All of these expressions must be satisfied to achieve a selected path.
From Path to Test Case
• Path condition: the conjunction of the individual predicate
conditions that are generated at each branch point along the path.
• The path condition must be satisfied by the input data in order for
the path to be executed.
From Path to Test Case
Example:
[Figure: CFG with nodes 1–10 of the pow example; the selected path 1,2,4,5,6,8,10 is highlighted]
PC(1,2,4,5,6,8,10) = (y ≥ 0 ∧ pow == 0 ∧ y ≥ 0)
From Path to Test Case
• Path sensitization
• The act of finding a set of solutions to the path predicate expression is called path sensitization.
• This yields a set of values that, when given as input, allows us to traverse that path.
• We manually compute the expected outputs.
• The sets of inputs, together with their expected outputs, form test cases.
From Path to Test Case
• Inputs to the test generation process
• Source code
• Path selection criteria: statement, branch, …
• Generation of control flow graph (CFG)
• A CFG is a graphical representation of a program unit.
• Compilers are modified to produce CFGs. (You can draw one by hand.)
From Path to Test Case
• Selection of paths
• Enough entry/exit paths are selected to satisfy path selection criteria.
• Generation of test input data
• Two kinds of paths
• Executable path: There exists input so that the path is executed.
• Infeasible path: There is no input to execute the path.
From Path to Test Case
scanf("%d, %d", &x, &y);
if (y < 0)
    pow = -y;
else
    pow = y;
z = 1.0;
while (pow != 0) {
    z = z * x;
    pow = pow - 1;
}
if (y < 0)
    z = 1.0 / z;
printf("%f", z);
[Figure: CFG with nodes 1–10 for this code]
We want to traverse the path where:
• the first IF evaluates to F
• the while loop is not executed
• the second IF again evaluates to F
Path = 1,2,4,5,6,8,10
PC(1,2,4,5,6,8,10) = (y ≥ 0 ∧ pow == 0 ∧ y ≥ 0)
• Solving the path condition requires y = 0, so the loop body never runs and the expected output is z = 1.0.
• Test case: [(x = 2, y = 0), expected output = 1.0]
From Path to Test Case
• Solve the path conditions to produce test input for each path.
• We generate test input and expected output sets for all cases. (The program computes x raised to the power y.)
• Examples
• [(x=2, y=2), exp. output = 4]
• [(x=2, y=-1), exp. output = 0.5]
• [(x=-2, y=2), exp. output = 4]
END
Module 35:
Coverage Criteria for Control flow testing
Coverage criteria for CFG based testing
• Coverage:
• Node Coverage:
• Edge Coverage:
• Statement Coverage:
• Condition Coverage
• Decision Coverage
• MC/DC
•…
Coverage criteria for CFG based testing
• Node Coverage: all nodes executed
• Edge Coverage: all edges executed
• Statement Coverage: Achieved when all statements in a method
have been executed at least once
Coverage criteria for CFG based testing
• Example code:
if ((x || y) && z)
    a = b + c;
• Condition coverage:
• each boolean condition x, y, and z in the above statement should be evaluated to TRUE and FALSE at least once
• Test cases:
T1: x=T, y=T, z=T
T2: x=F, y=F, z=F
Coverage criteria for CFG based testing
• Example code:
if ((x || y) && z)
    a = b + c;
• Decision coverage:
• we need to ensure that the IF statement's decision evaluates to TRUE and FALSE at least once. So the test set will be:
T1: x=T, y=T, z=T
T2: x=F, y=F, z=F
Coverage criteria for CFG based testing
• Example code:
if ((x || y) && z)
    a = b + c;
• Modified Condition/Decision Coverage (MC/DC):
• each boolean condition should be evaluated to TRUE and FALSE at least once and should also be shown to independently affect the decision outcome. Test cases (see the sketch below):
T3: x=F, y=F, z=T
T4: x=F, y=T, z=T
T5: x=F, y=T, z=F
T6: x=T, y=F, z=T
Coverage criteria for CFG based testing
• We define subsumption:
• Criteria subsumption: a test criterion C1 subsumes C2 if and only if every set of test cases that satisfies criterion C1 also satisfies C2.
• In our case, edge coverage subsumes node coverage.
• Note, however, that coverage does not depend on the number of test cases..
Coverage criteria for CFG based testing
scanf("%d, %d", &x, &y);
if (y < 0)
    pow = -y;
else
    pow = y;
z = 1.0;
while (pow != 0) {
    z = z * x;
    pow = pow - 1;
}
if (y < 0)
    z = 1.0 / z;
printf("%f", z);
[Figure: CFG with nodes 1–10 for this code; predicate 8 (y < 0) has T edge to 9 and F edge to 10]
Test cases for the paths:
• 1,2,4,5,6,8,9,10
• 1,2,3,5,6,7,6,8,9,10
give complete node coverage, yet edge 8→10 is not covered.
For edge coverage we need the following additional test case:
• 1,2,3,5,6,8,10
Coverage criteria for CFG based testing
• Once we have test cases,
• We need software support to
• Execute test cases
• Find out coverage..

End
Software Verification
and Validation

Lecture No. 6
Software Verification and Validation
Agenda
1. Dataflow Testing
2. Dataflow Testing – anomalies
3. Dataflow Testing Coverage Criteria
4. Program Slice based Testing
5. Program Slice – examples
6. Program Slice – Uses in testing
Module 36:
Dataflow Testing
Dataflow Testing
• Data-flow testing is the name given to a family of test strategies
based on selecting paths through the program’s control flow in
order to explore sequences of events related to the status of data
objects.
• E.g., Pick enough paths to assure that:
• Every data object has been initialized prior to its use.
• All defined objects have been used at least once.
Dataflow Testing
• Data-flow testing uses the control flow graph to explore the
unreasonable things that can happen to data (i.e.,
anomalies).
• We annotate CFG such that
• (d) Defined, Created, Initialized
• (k) Killed, Undefined, Released
• (u) Used:
• (c) Used in a calculation
• (p) Used in a predicate
Dataflow Testing
• An object (e.g., variable) is defined (d) when it:
• appears in a data declaration
• is assigned a new value
• is a file that has been opened
• is dynamically allocated, etc.
• An object is killed when it is:
• released (e.g., free) or otherwise made unavailable (e.g., out of scope)
Dataflow Testing
• a loop control variable when the loop exits
• a file that has been closed
• An object is used when it is part of a computation or a predicate.
• A variable is used for a computation (c) when it appears on the
RHS (sometimes even the LHS in case of array indices) of an
assignment statement.
Dataflow Testing
• A variable is used in a predicate (p) when it appears directly in
that predicate.
• A data-flow anomaly is denoted by a two character sequence
of actions. E.g.,
• ku: Means that an object is killed and then used.
• dd: Means that an object is defined twice without an intervening usage.
Dataflow Testing
Program:
1. read(x, y);
2. z = x + 2;
3. if (z < y)
4.     w = x + 1;
   else
5.     y = y + 1;
6. print(x, y, w, z);

Line | Def  | C-use      | P-use
1    | x, y |            |
2    | z    | x          |
3    |      |            | z, y
4    | w    | x          |
5    | y    | y          |
6    |      | x, y, w, z |
Module 37:
Dataflow Testing - anomalies
Dataflow Testing - anomalies
• Two letter situations for d, k and u
• dd: Probably harmless, but suspicious.
• dk: Probably a bug.
• du: Normal situation.
• kd: Normal situation.
• kk: Harmless, probably a bug.
• ku: Definitely a bug.
• ud: Normal (reassignment).
• uk: Normal situation.
• uu: Normal situation.
Dataflow Testing - anomalies
• Single letter situations
• A leading dash means nothing of interest (d, k, u) occurs prior to the action noted along the entry–exit path of interest.
• A trailing dash means nothing of interest happens after the point of action until exit.
• -k: possibly anomalous, as it kills a variable that does not exist, or the variable is global.
• -u, d-: possibly anomalous, unless the variable is global..
END
Module 38:
Dataflow Testing Coverage Criteria
Dataflow Testing Coverage Criteria
• Data values are created at some point in the program and used later on.
• A definition (def) is a location where the value of a variable is stored into memory.
• A use is a location where the value is accessed.
• Values are carried from their defs to their uses. We call these du-pairs.
• A du-pair is a pair of locations (li, lj) such that variable v is defined at li and used at lj (see the sketch below).
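A small concrete sketch of defs, uses, and du-pairs (our own example; the labels l1–l4 are hypothetical location markers, not from the slides):

class DuPairExample {
    static int Scale(int a) {
        int x = a * 2;        // l1: def of x (and a c-use of a)
        if (x > 10)           // l2: p-use of x
            return x - 10;    // l3: c-use of x
        return x;             // l4: c-use of x
    }
    static void Main() {
        // du-pairs for x: (l1, l2), (l1, l3), (l1, l4).
        System.Console.WriteLine(Scale(8));  // a > 5: exercises (l1, l2) and (l1, l3)
        System.Console.WriteLine(Scale(3));  // a <= 5: exercises (l1, l2) and (l1, l4)
    }
}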
Dataflow Testing Coverage Criteria
• Let V be the set of variables that are associated with the program artifact being modeled as a graph.
• The subset of V that each node n (edge e) defines is called def(n) (def(e)).
[Figure: graph — node 1: x = 30; nodes 2 and 3 branch from 1 and merge at 4; node 5: z = x*5 and node 6: z = x - 10 branch from 4 and merge at 7]
• Example
• Defs:
• def(1) = {x}
• def(5) = def(6) = {z}
• Uses:
• use(5) = {x}
• use(6) = {x}
Dataflow Testing Coverage Criteria
• If v is used in a c-use, the use is referred to as:
• dcu(li, lj, v)
• If v is used in a p-use, the use is referred to as a set of two def-use pairs:
• dpu(li, (lj, lt), v) and
• dpu(li, (lj, lf), v)
• since v is defined at li and used at lj, which has two flow directions, (lj, lt) and (lj, lf).
Dataflow Testing Coverage Criteria
• Def-use pairs
• A def of a variable at line li and its use at line lj constitute a def-use pair. li and lj can be the same.
• A path from li to lj is a def-clear path with respect to variable v if v is not given another value on any of the nodes or edges in the path.
• A def-clear path from li to lj with respect to v means the def of v at li reaches the use at lj.
Dataflow Testing Coverage Criteria
• A du-path with respect to variable v is a simple path that is def-clear from a def of v to a use of v, where:
• du-paths are parameterized by v
• they need to be simple paths
• there may be intervening uses on the path
• du(ni, nj, v) is the set of du-paths from ni to nj for v
• du(ni, v) is the set of du-paths starting from ni for v
Dataflow Testing Coverage Criteria
• A def-pair set du(ni, nj, v) of du-paths wrt v starts at ni and ends at nj.
• They represent the set of all paths from a definition to a use.
• Test requirement (TR) examples:
• TR: each def reaches at least one use
• TR: each def reaches all possible uses
• TR: each def reaches all possible uses through all possible du-paths
Dataflow Testing Coverage Criteria
• All-Defs coverage: for each def-path set S = du(n, v), TR contains at least one path d in S.
• Every def reaches at least one use.

• All-Uses coverage: for each def-pair set S = du(ni, nj, v), TR contains at least one path d in S.
• Every def reaches every use.
• Since a def-pair set du(ni, nj, v) represents all def-clear simple paths from a def of v at ni to a use of v at nj, all-uses requires us to tour at least one path from every def-use pair.

• All-du-Paths coverage: for each def-pair set S = du(ni, nj, v), TR contains every path d in S.
• Every def reaches every use through every possible du-path.
Dataflow Testing Coverage Criteria
[Figure: the graph from before — node 1: x = 30; branch nodes 2 and 3 merging at 4; node 5: z = x*5 and node 6: z = x - 10 merging at 7]
• All defs of x
= test path {1,2,4,6,7} (one of the uses)
• All uses of x
= test path {1,2,4,5,7}, test path {1,2,4,6,7} (all of the uses, 5 and 6)
• All du-paths of x
= test path {1,2,4,5,7}, test path {1,3,4,5,7}, test path {1,2,4,6,7}, test path {1,3,4,6,7}
Dataflow Testing Coverage Criteria
[Figure: subsumption hierarchy of coverage criteria, strongest at the top]
ALL-PATHS → ALL-DU-PATHS → ALL-USES
ALL-USES → ALL-P-USES/SOME-C-USES and ALL-C-USES/SOME-P-USES
these in turn → ALL-P-USES, ALL-C-USES, and ALL-DEFS
ALL-P-USES → ALL-EDGES → ALL-NODES
Dataflow Testing Coverage Criteria
• Subsumption in Coverage Criteria
• Every use is preceded by a def
• Every def reaches at least one use
• For every node with multiple out-going edges, at least one variable is used
on each out edge, and the same variables are used on each out edge.
• All-Defs ⊆ All-Uses …

END
Module 39:
Program Slice based Testing
• Program slice
• Program slicing is a decomposition technique that extracts statements relevant to a particular computation from a program.
• Program slices, first introduced by Mark Weiser (1980), are known as executable backward static slices.
Program Slice based Testing
• Program Slice
• A program slice is a subset of a program.
• Program slicing enables programmers to view subsets of a program
by filtering out code that is not relevant to the computation of
interest.
• E.g., if a program computes many things, including the average of a set of numbers, slicing can be used to isolate the code that computes the average (see the sketch below).
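A minimal sketch of that averaging example in code (our own illustration; the method, names, and slice annotations are hypothetical):

class SliceDemo {
    // The method computes several things; the slicing criterion is (avg, last line).
    static void Stats(int[] v) {
        int sum = 0, max = int.MinValue;
        foreach (int e in v) {
            sum += e;                  // in the slice: avg depends on sum
            if (e > max) max = e;      // not in the slice: max never reaches avg
        }
        double avg = (double)sum / v.Length;   // in the slice
        System.Console.WriteLine(avg);         // the slicing criterion
    }
    static void Main() => Stats(new[] { 2, 4, 6 });   // prints 4
}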
Program Slice based Testing
• Definition:
• Assume that:
• P is a program.
• V is the set of variables at a program location (line number) n.
• A slice S(V,n) produces the portions of the program that contribute
to the value of V just before the statement at location n is executed.
• S(V,n) is called the slicing criteria.
Program Slice based Testing
• Conditions for Program slice
• Slice S(V,n) must be derived from P by deleting statements from P.
• Slice S(V,n) must be syntactically correct.
• For all executions of P, the value of V in the execution of S(V,n) just
before the location n must be the same value of V in the execution
of the program P just before location n.
Program Slice based Testing
• Types of Slices
• Static Slice
– Slice derived from source code
– No assumption about input values
– Contains all statements that may affect a variable
• Dynamic Slice
– Uses information about a particular execution
Program Slice based Testing
• Types of slice (contd.)
– Execution is monitored and the slice is computed with respect to the program history.
– A dynamic slice is relatively small compared to a static slice and contains the statements actually affecting a variable.
– Conditional or quasi-static slicing acts as a bridge between the two extremes, i.e., static and dynamic slicing.
Program Slice based Testing
• Types of slice
• Backward slice
– Contains the statements of a program P that may have some effect on the slicing criterion.
• Forward slice
– Contains those statements of program P that are affected by the slicing criterion S(V,n).
• Chopping, interface slice, …
Module 40:
Program Slice – examples
Program Slice – examples
main()
{
1.  int mx, mn, av;
2.  int tmp, sum, num;
3.
4.  tmp = readInt();
5.  mx = tmp;
6.  mn = tmp;
7.  sum = tmp;
8.  num = 1;
9.
10. while (tmp >= 0)
11. {
12.   if (mx < tmp)
13.     mx = tmp;
14.   if (mn > tmp)
15.     mn = tmp;
16.   sum += tmp;
17.   ++num;
18.   tmp = readInt();
19. }
20.
21. av = sum / num;
22. printf("\nMax=%d", mx);
23. printf("\nMin=%d", mn);
24. printf("\nAvg=%d", av);
25. printf("\nSum=%d", sum);
26. printf("\nNum=%d", num);
27. }

Slice S(num, 27)
Program Slice – examples
The statements that contribute to num at line 27 (slice S(num, 27)):
main()
{
    int tmp, num;
    tmp = readInt();
    num = 1;
    while (tmp >= 0)
    {
        ++num;
        tmp = readInt();
    }
}
Program Slice – examples
A second example on the same program: the slice with respect to mn (e.g., criterion S(mn, 23)):
main() {
1.  int mn;
2.  int tmp;
4.  tmp = readInt();
6.  mn = tmp;
10. while (tmp >= 0)
11. {
14.   if (mn > tmp)
15.     mn = tmp;
18.   tmp = readInt();
19. }
}
Program Slice – examples
 Static slice
• Criterion (12, i)
Program:
1  main( )
2  {
3    int i, sum;
4    sum = 0;
5    i = 1;
6    while (i <= 10)
7    {
8      sum = sum + 1;
9      ++i;
10   }
11   cout << sum;
12   cout << i;
13 }
Static slice on criterion (12, i):
1  main( )
2  {
3    int i;
5    i = 1;
6    while (i <= 10)
7    {
9      ++i;
10   }
12   cout << i;
13 }
Program Slice – examples
• Dynamic slice: n = 1, c1 and c2 are true
• Slice criterion <1, 10, z>
Program:
1.  read(n)
2.  for I := 1 to n do
3.    a := 2
4.    if c1 == 1 then
5.      if c2 == 1 then
6.        a := 4
7.      else
8.        a := 6
9.  z := a
10. write(z)
For this execution, only a := 4 (line 6) reaches z at line 9, so the dynamic slice contains lines 1, 2, 4, 5, 6, 9, and 10; lines 3 and 8 do not affect the value written.
Module 41:
Program Slice – Uses in Testing
Program Slice – Uses in Testing
• Observations
• Given a slice S(X,n) where variable X depends on variable Y
with respect to location n:
• All d-uses and p-uses of Y before n are included in S(X,n).
• The c-uses of Y will have no effect on X unless X is a d-use in that statement.
• Slices can be made on a variable at any location.
Program Slice based Testing
• Uses of Program Slice
• provide a mechanism to
filter out “irrelevant” code.
• Debugging:
• Helps visualize control and
data dependencies
• Highlights statements influencing
a particular slice
• Testing: Test cases can
be decomposed
Program Slice – Uses in Testing
• Testing: reduce the cost of regression testing after modifications (only run those tests that are needed)
• Software maintenance: changing source code without unwanted side effects
• Software quality assurance:
• validate interactions between
safety-critical components
• Safety critical code can be
isolated
Program Slice – Uses in Testing
• Software support
• CodeSurfer: GUI Based,
has scripting language-Tk
• Spyder: A debugging tool
based on program
slicing.
• Unravel: program slicer
for ANSI C.
• Wisconsin: Slicer for C
• Bandera: Open source
version of Wisconsin
Software Verification
and Validation

Lecture No. 7
Software Verification and Validation
Agenda
1. Control-flow graph based testing –
Lab
2. Preparation for Lab on unit testing
3. Introduction to System Under
Test (SUT)
4. Control-flow graph
5. Test Cases
Module 42:
Control-flow graph based Testing - Lab
Control-flow graph based Testing - Lab
Goal of testing: maximize the number and severity of defects found per dollar spent …
Limits of testing: testing can only determine the presence of defects, never their absence; use proofs of correctness to establish "absence".
Who should test: the developer.
• Why?
Control-flow graph based Testing - Lab
1. Unit tests
   • Module, function
   • OO: methods, combinations of methods in a class
2. Integration tests
   • Module combination
   • OO: packages of classes
3. System tests
   • Include use-cases
Control-flow graph based Testing - Lab
[Figure (IEEE, 1986): the unit testing process —
1. Plan for unit testing: inputs are the requirements (identify the largest trouble spots) and the detailed design; output is the unit test plan.
2. Design test cases and acquire test I/O pairs: generate I/O pairs; output is the test set.
3. Execute unit test: inputs are the code under test and the test set; output is the test results.]
Module 43:
Unit Testing Lab Preparation
Control-flow graph based Testing - Lab
• Plan for unit testing
• We select suitable unit testing technique
• We know two techniques so far, which ones?
• Control flow based technique
• Dataflow based technique
• We acquire code for testing
• We decide which coverage criteria would be useful
Unit Testing Lab Preparation
[Figure: the same three-step unit testing process — 1. Plan for unit testing (requirements, trouble spots → unit test plan); 2. Design test cases and acquire test I/O pairs (→ test set); 3. Execute unit test (code under test → test results)]
Control-flow graph based Testing - Lab
• Our plan is as follows:
• We selected
• control-flow based testing
• We acquired/wrote code for testing
• This was assigned to the developer
• Following the detailed design given by the designer
• We select
• statement coverage..
END
Module 44:
Unit Testing – Test Case Design
Unit Testing – Test Case Design
• We have finalized that:
• We would perform control-flow based testing
• We have written the code
• Statement coverage is advised to us by the development manager (we are working as developers)
• The test case design process is up next
Unit Testing – Test Case Design
[Figure: the same three-step unit testing process, with step 2 — design test cases and acquire test I/O pairs — highlighted]
Unit Testing – Test Case Design
public int search(int key, int[] elemArray) {
    int bottom = 0; int top = elemArray.Length - 1; int mid;
    int index = -1; Boolean found = false;
    while (bottom <= top && found == false) {
        mid = (top + bottom) / 2;
        if (elemArray[mid] == key) {
            index = mid; found = true; return index;
        } else {
            if (elemArray[mid] < key)
                bottom = mid + 1;
            else top = mid - 1;
        }
    }
    return index;
}
Unit Testing – Test Case Design
• We take the following steps:
• Develop CFG
• Find paths through CFG
• Find path conditions
• Find appropriate inputs
Unit Testing – Test Case Design
[Figure: the search method with its CFG overlaid — node 1: initialization; node 2: while (bottom <= top && found == false); node 3: if (elemArray[mid] == key); node 4: if (elemArray[mid] < key); node 5: bottom = mid + 1; node 6: top = mid - 1; node 7: loop join back to 2; node 8: return index; node 9: exit]
Unit Testing – Test Case Design
[Figure: CFG of the search method, nodes 1–9 as above]
• Find paths through the CFG, path predicates, and test data for test cases:
• 1, 2, 3, 8, 9
• 1, 2, 3, 4, 6, 7, 2
• 1, 2, 3, 4, 5, 7, 2
• 1, 2, 3, 4, 6, 7, 2, 8, 9
• Test cases should be derived so that all of these paths are executed.
• Example test case:
• {1,2,3,4,5}, 1
• Question: which path is sensitized with this set of inputs?
Unit Testing – Test Case Design
Path                      | Condition                                              | Test Case
1, 2, 3, 8, 9             | bottom <= top && found == false; elemArray[mid] == key | 2, {1,2,3}, 1
1, 2, 3, 4, 6, 7, 2       | ?                                                      | ?
1, 2, 3, 4, 5, 7, 2       | ?                                                      | ?
1, 2, 3, 4, 6, 7, 2, 8, 9 | ?                                                      | ?
Module 45:
Unit Testing with Microsoft Visual Studio
Unit Testing with Microsoft Visual Studio
• We have the test machinery ready including
• Code
• Test stubs
• Execution machinery
• Coverage Report
• We select Microsoft Visual Studio
• We test our code such that:
• We add a separate test project in our solution
• Keeping code and test cases separate
Unit Testing with Microsoft Visual Studio
[Figure: the three-step unit testing process — plan, design test cases, execute unit test — with execution now supported by the tool]
Unit Testing with Microsoft Visual Studio
• File -> New -> Project
[Screenshot: creating the project]
Unit Testing with Microsoft Visual Studio
• Right-click on the project name -> Add -> New Item
[Screenshot: adding a new item]
Unit Testing with Microsoft Visual Studio
[Screenshot: the code under test]
Unit Testing with Microsoft Visual Studio
• File -> New -> Test
[Screenshot: creating the test]
Unit Testing with Microsoft Visual Studio
[Screenshots: the generated test project]
Unit Testing with Microsoft Visual Studio
using System;
using System.Text;
using System.Collections.Generic;
using System.Linq;
using Microsoft.VisualStudio.TestTools.UnitTesting;
using SQELab1;

namespace TestProjectBSearch {
    [TestClass]
    public class test1 {
        [TestMethod]
        public void TestMethod1() {
            int[] array = { 1, 2, 3, 4 }; int key = 1;
            myBSearch bs = new myBSearch();
            Assert.AreEqual(1, bs.search(key, array));
        }
    }
}
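Following the same pattern, further test methods can be added to the same class, e.g. a sketch for the not-found path (the method name is our own; the expected value -1 follows from the search code shown earlier):

[TestMethod]
public void TestKeyNotFound() {
    int[] array = { 1, 2, 3, 4 };
    int key = 9;                       // not present in the array
    myBSearch bs = new myBSearch();
    // search() returns -1 when the key is absent
    Assert.AreEqual(-1, bs.search(key, array));
}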
Unit Testing with Microsoft Visual Studio
[Screenshot: Test Explorer results]
• Passed and failed test behavior
• Test execution log
Module 46:
Coverage with Microsoft Visual Studio
Coverage with Microsoft Visual Studio
• We used our test machinery to test our code.
• We need to use the same machinery (i.e., Visual Studio) for the coverage report.
Code Coverage with M/S Visual Studio
[Screenshots: enabling and viewing the code coverage report in Visual Studio]
• We have learnt how to do unit testing using Microsoft Visual Studio
• We have attained statement coverage..

END
Software Verification
and Validation

Lecture No.8
Software Verification and Validation
Agenda
1. Integration Testing
2. Functional decomposition based
Integration
3. Call Graph based Integration
4. Path based Integration
Module 47:
Integration Testing
Integration Testing
• While developing unit test strategy, we consider:
• Test Requirement
• Documentation requirement
• Tools / Test utilities?
• Individual units get combined into (sub-) modules
• We need to assure if:
• they are following design accurately
• they are collaborating properly
• We do integration testing
Integration Testing
• Unit assumptions
– All other units are correct
– Compiles correctly

• Integration assumptions
– Unit testing complete

• System assumptions
– Integration testing complete
– Tests occur at port boundary
Integration Testing
• Unit goals
– Correct unit function
– Coverage metrics satisfied
• Integration goals
– Interfaces correct
– Correct function across units
– Fault isolation support
• System goals
– Correct system functions
– Non-functional requirements tested
– Customer satisfaction
Integration Testing
• Imagine, developer X implemented and tested “Register” Class
• Imagine, developer Y implemented and tested “Sales” Class
Integration Testing
• The development manager wants to:
• test if individual units are working properly
• verify if adequate testing at the developer level was performed
• NOT redo the whole testing effort again
• check if units are collaborating as anticipated
Integration Testing
 At the unit level, our test case was a single step or a test step.
 At the integration level, it becomes a test sequence.
Integration Testing
• Imagine:
• We were to test the Order class
• We went step by step
• We wrote a unit test case for dispatch()
• We wrote a unit test case for close()
• These are single-step test cases
• While writing an integration test case,
• we test if Order and Customer are working together
• We write a test case which is a series of invocations.
Integration Testing
• While integration testing,
• We want to see collaborative behavior
• Issue is that all classes are not ready at the same time
• Even if they are ready, we don’t want to put them together in one go
and run tests for the following reason:
• Fault localization
Integration Testing
• Integration testing techniques
• Functional Decomposition
• applies best to procedural code
• Call Graph
• applies to both procedural and object-oriented code
• MM-Paths
• apply to both procedural and object-oriented code
Module 48:
Functional decomposition based Integration
Integration Testing
• Top-down integration strategy
• focuses on testing the top layer or the controlling subsystem first
• i.e. the main, or the root of the call tree)
• Sometimes, we have an early prototype ready
• Sometimes required to build an understanding (both SE team and
the client)
• We then naturally follow top-down integration of system and test
Integration Testing
• We gradually add more subsystems that are referenced/required
by the already tested subsystems

• Do this until all subsystems are incorporated into the test


Integration Testing
[Figure: functional decomposition tree of the system — root 1; internal nodes A, B, C, D, E, F, 10, 22; leaf units 2–9, 11–21, 23–27]
Functional decomposition based Integration
[Figure: Top-Down Integration —
Top subtree (Sessions 1–4)
Second-level subtree (Sessions 12–15)
Bottom-level subtree (Sessions 38–42)]
Functional decomposition based Integration
• Issues:
• Writing stubs can be difficult
• when parameter passing is complex.
• Stubs must allow all possible conditions to be tested
• number of stubs required may become high,
• In case lowest level of the system contains many functional units
Functional decomposition based Integration
• The top-level classes are ready
• Yet the CorporateCustomer and PersonalCustomer classes are not ready
• We use stubs
• stubs are "called" programs
• fake classes just to enable testing
• replaced with the actual classes
• either when ready
• or when desired (we keep the actual code out for fault localization)
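A stub can be as small as a class that honors the expected interface and returns canned values. A minimal sketch (the interface, class, and values here are hypothetical, for illustration only):

// The real class is not ready yet; the stub stands in during top-down integration.
interface ICorporateCustomer {
    decimal GetDiscount(decimal orderTotal);
}

class CorporateCustomerStub : ICorporateCustomer {
    // Canned behavior: just enough for the callers under test to proceed.
    public decimal GetDiscount(decimal orderTotal) => 0.10m;
}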
Functional decomposition based Integration
• Bottom-Up integration strategy
• We do testing of units at the lowest levels first
• Gradually include sub systems that reference / require previously tested subsystems
• This is done repeatedly until all subsystems are included in the testing
• A driver is required, which is a "fake" routine that calls a subsystem and passes a test case to it
Functional decomposition based Integration
[Figure: Bottom-up Integration —
Bottom-level subtree (Sessions 13–17)
Second-level subtree (Sessions 25–28)
Top subtree (Sessions 29–32)]
Functional decomposition based Integration
• Issues:
• Not an optimal strategy for functionally decomposed systems:
• tests the most important subsystem (UI) last
• More useful for integrating object-oriented systems
• Drivers may be more complicated than stubs
• Fewer drivers than stubs are typically required
Functional decomposition based Integration
• The CorporateCustomer and PersonalCustomer classes are ready,
• together with the Customer class
• The Order class is not ready
• We use drivers
• The driver (control program) is used in bottom-up integration to arrange test case input and output
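Conversely, a driver is a throwaway caller that feeds test inputs to an already-built lower-level unit and records the outputs. A minimal sketch (names and values are hypothetical):

class PersonalCustomer {
    public decimal GetDiscount(decimal orderTotal) =>
        orderTotal > 1000m ? 0.05m : 0m;
}

// Driver: exercises the finished lower-level class
// while the higher-level Order class is not yet ready.
class PersonalCustomerDriver {
    static void Main() {
        var c = new PersonalCustomer();
        System.Console.WriteLine(c.GetDiscount(1500m));   // expect 0.05
        System.Console.WriteLine(c.GetDiscount(500m));    // expect 0
    }
}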
Functional decomposition based Integration
• Sandwich Integration
• Combines top-down strategy with bottom-up strategy
• Less stub and driver development effort
• Added difficulty in fault isolation
Functional decomposition based Integration

Sandwich Integration
Module 49:
Call Graph based Integration
Call Graph Based Integration Testing
• Call Graph of a program is a directed graph in which
• nodes are units
• edges correspond to actual program calls (or messages)
• Can still use the notions of stubs and drivers.
• The basic idea is to use the call graph instead of the decomposition tree
• Two sub-types
• Pair-wise Integration
• Neighborhood Integration
Call Graph Based Integration Testing
[Figure: call graph of the system — nodes 1–27 with directed edges representing the actual program calls]
Call Graph Based Integration Testing
• By definition, an edge in the call graph refers to an interface between the units that are the endpoints of the edge.
• Every edge represents a pair of units to test.
• We want to test that all edges are:
• essentially required
• meaningfully implemented
Call Graph Based Integration Testing
• Pair-wise integration
• we eliminate the need for developing stubs/drivers
• The objective is to use actual code instead of stubs/drivers
• In order not to deteriorate this process into a big-bang strategy, we restrict a testing session to just one pair of units in the call graph
• The result is that we have one integration test session for each edge in the call graph
[Figure: the call graph, with one pair of units highlighted]
Call Graph Based Integration Testing
• Neighborhood integration
• We define the neighborhood of a node in a graph to be the set of nodes that are one edge away from the given node
• In a directed graph this means all the immediate predecessor nodes and all the immediate successor nodes of a given node
• Neighborhood integration testing reduces the number of test sessions
• Fault isolation is harder
[Figure: the call graph, with one node's neighborhood highlighted]
Call Graph Based Integration Testing
• The neighborhood (or radius 1) of a node in a graph is the set
of nodes that are one edge away from the given node.
• This can be extended to larger sets by choosing larger values for
the radius.
• Stub and driver effort is reduced.
Call Graph Based Integration Testing
[Figure: two call graphs —
Left: A calls B and E; B calls C and D. Total nodes = 5, sink nodes = 3, neighborhoods = 5 - 3 = 2.
Right: A calls B, C, D, and E. Total nodes = 5, sink nodes = 4, neighborhoods = 5 - 4 = 1.
Below: the corresponding neighborhoods drawn for each non-sink node.]
Call Graph Based Integration Testing
[Figure: call graph — A calls B, C, D; B calls E, F; E calls I, J; C calls G; D calls H]
(10 nodes – 5 sink nodes) = 5 neighborhoods:
- 1: A B C D
- 2: A B E F
- 3: B E I J
- 4: A C G
- 5: A D H
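Neighborhoods can be computed mechanically from the call graph's edge list. A small sketch (the edges are hard-coded from the figure above; it reproduces the five neighborhoods listed):

using System;
using System.Collections.Generic;
using System.Linq;

class NeighborhoodDemo {
    static void Main() {
        // Call graph edges from the figure (caller -> callee).
        var edges = new (char from, char to)[] {
            ('A','B'), ('A','C'), ('A','D'),
            ('B','E'), ('B','F'),
            ('E','I'), ('E','J'),
            ('C','G'), ('D','H')
        };
        var nodes = edges.SelectMany(e => new[] { e.from, e.to }).Distinct();
        foreach (char n in nodes) {
            var succ = edges.Where(e => e.from == n).Select(e => e.to).ToList();
            if (succ.Count == 0) continue;   // sink nodes get no session of their own
            var pred = edges.Where(e => e.to == n).Select(e => e.from);
            var nbhd = pred.Concat(succ).Concat(new[] { n }).Distinct().OrderBy(c => c);
            Console.WriteLine($"{n}: {{{string.Join(",", nbhd)}}}");
        }
    }
}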
Module 50:
Path based Integration
Path Based Integration
• The basic idea is to focus on interactions among system units
rather than merely to test interfaces among separately developed
and tested units
• In this respect, interface-based testing is structural while
interaction- based is behavioral (why?)
• Overall we want to express integration testing in terms of behavioral threads
Path Based Integration
• Source node:
• a program statement fragment at which program execution begins or
resumes.
• for example the first “begin” statement in a program.
• also, immediately after nodes that transfer control to other units.
• Sink node:
• a statement fragment at which program execution terminates.
• the final “end” in a program as well as statements that transfer control to
other units.
Path Based Integration
• Module execution path:
• a sequence of statements that begins with a source node and ends with a sink
node with no intervening sink nodes.
• Message:
• a programming language mechanism by which one unit transfers control to
another unit.
Path Based Integration
• Message:
• can be interpreted as subroutine invocations, procedure calls and function
references.
• convention: the unit which receives the message always eventually
returns control to the message source.
• messages can pass data to other units.
Path Based Integration
• MM-Path:
• an interleaved sequence of module execution paths and messages.
• we can describe sequences of module execution paths that include transfers
of control among separate units.
• MM-paths always represent feasible execution paths, and these paths cross
unit boundaries.
Path Based Integration
• There are three observable behavioral criteria that put endpoints
on MM-Paths:

• event quiescence: occurs when a system is nearly idle, waiting for


a port input event to trigger further processing.
• This is a system level property with an analog at the integration
level: message quiescence.
Path Based Integration
• message quiescence: occurs when a unit that sends no messages is
reached (i.e. module C in Figure 1).

• data quiescence: occurs when a sequence of processing culminates in the


creation of stored data that is not immediately used.
Path Based Integration
• Data quiescence occurs when a sequence of processing culminates
in the creation of stored data that is not immediately used.

• These criteria are “natural” endpoints for MM-Paths.


Path Based Integration
• A second guideline for MM-Paths serves to distinguish integration from system testing:
• atomic system function (ASF): is an action that is observable at the system level in
terms of port input and output events.
• It begins with a port input event,
• traverses one or more MM-Paths,
• and terminates with a port output event.
Path Based Integration
• When viewed from the system level, there is no compelling reason to decompose an ASF into lower levels of detail (hence the atomicity).
• For example, in the ATM case,
• an example of an ASF is card entry, cash dispensing, or session closing,
• while PIN entry would probably be too big, since it might entail a molecular system function.
Path Based Integration
• ASFs are an upper limit for MM-Paths:
• MM-Paths should not cross ASF boundaries.

• ASFs represent the seam between integration and system testing:


• they are the largest item to be tested during integration testing,
• and the smallest item for system testing.
Path Based Integration
[Figure: modules A, B, C connected by messages]
• MM-path: an interleaved sequence of module execution paths and messages
• Module execution path: an entry–exit path in the same module
• Atomic system function: port input, … {MM-paths}, … port output
• Test cases: exercise ASFs
Path Based Integration
• :Register calls :Sales
• We have message quiescence from :Register to :Sales
• We test this
• Imagine there is another class :DB that stores data into the database, and the method is store()
• We have data quiescence involving these three classes
• We have covered:
• Functional decomposition
• Call graph
• MM-paths

END
Software Verification
and Validation

Lecture No. 9
Module 51:
System Testing
System Testing
• Assumption
• Unit level testing is performed
• Integration level testing is performed
• Therefore, individual modules are working correctly
• System level focus
• Functionality based on Multi-modules
System Testing
• System level focus
• External interfaces
• Non-functional aspects, e.g.,
• Security
• Recovery
• Performance
• Operational and user business process requirements
System Testing
[Table: testing levels side by side —
Unit testing: techniques 1. CFG, DFG; 2. …; exit criteria: specific coverage criteria.
Integration testing: techniques 1. MM-Path; 2. State-Machine; plus techniques on the boundary; exit criteria: specific coverage criteria.
System testing: techniques 1. Specification-based; 2. Model-based; plus techniques on the boundary; exit criteria: specific coverage criteria, confidence.
Organization: the Development Manager oversees PM, Developer1 … Developer n, and CM; Quality Assurance oversees Tester1 … Tester n.]
System Testing
• The process of testing an integrated hardware and software system to verify that the system meets its specified requirements
• Verification: confirmation by examination and provision of objective evidence that specified requirements have been fulfilled
• Definitions taken from IEEE
System Testing
• More definitions:
• Testing to confirm that all code modules work as specified and that
the system as a whole performs adequately on the platform on
which it will be deployed

• Testing conducted on a complete, integrated system to


evaluate compliance with specified requirements.
System Testing
[Figure: where system testing sits in the V-model]
Module 52:
System Testing Aspects
System Testing Aspects
• Functional aspects
• Business processes
• System goals
• Non-functional aspects
• Load, Stress, Volume
• Performance, Security, Usability
• Storage, Install-ability, Documentation,
• Recovery, …
• …
• There could be specific functional as well as non-functional
requirements
System Testing Aspects
• Functional Aspects Testing:
• Based on requirements / specifications
• System models
• Context diagrams
• ERD
• Design documents
• Our Focus while deriving test cases
• Data
• Action
• Device
• Event
System Testing Aspects
• Non-functional aspects testing
• We identify non-functional user requirements from the specifications
• User-defined non-functional requirements
• Local regulations, e.g.,
• each computer in the UK must have a warning displayed for user awareness
• that the system usage could be harmful under certain conditions, e.g., disconnect power before opening
• Standard SOPs..
END
Module 53:
System Testing – Non-functional aspects
System Testing – Non functional aspects
• We study a couple of system testing techniques, considering non-functional testing methods first
• We then move towards system testing considering functional aspects
System Testing – Non functional aspects
• Load Testing
• System is subjected to statistically calculated load of
• Transactions
• Processing
• Parallel connections
• Etc.
• Test Case format
• It is a setup
• Mimicking real life boundary situation
• Software example for load testing: JMeter
System Testing – Non-functional aspects
• Stress Testing
• Form of Load testing
• The resources are denied
• Finding out the boundary conditions where the system would crash
• Finding situations where system usage would become harmful
• We test the system behavior upon lack of resources
System Testing – Non-functional aspects
• Stress Testing (Contd.)
• Test Case format
• It is a setup
• Mimicking real life boundary situation
• Bugs found are not (normally) repaired but are reported to the end user for avoidance.
System Testing – Non-functional aspects
• Performance Testing
• We study the performance
requirements, e.g.,
• Response time
• Worst, best, average case time
to complete specified set of
operations, e.g.,
• Transactions per second
• Memory usage (wastage)
• Handling extra-ordinary situations
• Test Case format
• It is a setup
• Simulation
System Testing – Non-functional aspects
• Volume Testing:
• We test if the system is able to handle expected volume of data, e.g.,
• Backend e.g., PRAL
• Affiliated resources, e.g. ZEROX printers
• Word document having 200,000 pages
• We need to measure and report to user:
• System boundaries with respect the volume or capacity of its processing
System Testing – Non-functional aspects
• Security Testing:
• We want to see if
• Use cases are allowed
• Misuse cases are not allowed
• Aimed at breaking the system
• Test cases format
• Negative intent or penetration
• Specific situations
• SQL injections
• Network security features
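For instance, one classic penetration test checks that string input is not concatenated into SQL. A hedged sketch (the connection string, table, and column names are hypothetical; only the parameterized form should pass the test):

// A classic injection probe the tester feeds to a login/search field:
string userName = "admin' OR '1'='1";

// Hypothetical connection string for illustration only.
string connString = "...";

using (var connection = new System.Data.SqlClient.SqlConnection(connString))
using (var cmd = new System.Data.SqlClient.SqlCommand(
        "SELECT COUNT(*) FROM Users WHERE Name = @name", connection)) {
    // Parameterized query: the payload is treated as data, not as SQL.
    cmd.Parameters.AddWithValue("@name", userName);
    connection.Open();
    int matches = (int)cmd.ExecuteScalar();   // expect 0: no user literally has that name
}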
System Testing – Non-functional aspects
• GUI Testing
• Verifying if
• HCI principles are properly followed
• Documented and de facto standards are met
• Uses
• Scenarios
• Questionnaire
• User activity logs
• Inspections
• Examples of Test cases
• Lower part of each website
• Every windows OS based application
System Testing – Non-functional aspects
• Storage testing
• Install-ability testing
• Documentation testing
• Recovery testing
•…
• We follow:
• what was promised to the client,
• what is required of such systems, and
• what the regulatory requirements are..
END
Module 54:
System Testing – Functional Aspects
System Testing – Functional Aspects
• Functional aspects are represented as:
• User stories
• Use cases, descriptions
• Formal specifications

• Functional aspects are part of SRS document


• We consider them for functional testing part of the system testing
System Testing – Functional Aspects
• The test phase must assure that the final software system
covers these functional requirements.
• There are two widely used techniques for extracting test cases from use cases:
• Scenario analysis, based on scenario identification.
• Test data analysis, based on the category-partition method
System Testing – Functional Aspects
• We need mechanisms to extract test cases from functional
aspects presented in SRS document
• We need to report coverage so that we may prove sufficiency
of testing.
• We discuss a couple of such techniques..
END
Module 55:
Use Case Analysis Based System Testing
Use Case Analysis based System Testing
• Use cases are a widely used technique to define functional requirements.
• Use case scenario
• In the case of a business scenario:
• a business scenario is written considering the user's view of the system
• it reflects a business interaction or a business process
Use Case Analysis based System Testing
• In case of extracting scenarios from use-case diagrams
• A scenario is an instance of a use case
• Execution view of a specific user
• How a user would execute it in a specific way.
Use Case Analysis based System Testing
• Deriving test cases from use cases

Steps:
• Identify the use case scenarios.
• For each scenario, identify one or more test cases.
• For each test case, identify the conditions that will cause it
to execute.
• Complete the test case by adding data values
Use Case Analysis based System Testing
• Step – 1:
• Use simple matrix that can be implemented in a
spreadsheet, database or test management tool.
• Number the scenarios and define the combinations of basic
and alternative flows that leads to them.
• Many scenarios are possible for one use case.
Use Case Analysis based System Testing
• Step – 1:
• Not all scenarios may be documented. Use an iterative process.
• Not all documented scenarios may be tested.
• Use cases may be at a level that is insufficient for testing.
• Team’s review process may discover additional scenarios.
Use Case Analysis based System Testing
Step 2:
We find the parameters of a test case:
• Conditions
• Input (data values)
• Expected result
• Actual result
Use Case Analysis based System Testing
• Step 3:
• For each test case, identify the conditions that will cause it to execute specific events.
1. Use a matrix with columns for the conditions, and for each condition state whether it is:
• Valid (V): must be true for the basic flow to execute.
• Invalid (I): this will invoke an alternative flow.
• Not applicable (N/A): to the test case.
Use Case Analysis based System Testing
• Step 4
• Design real input data values that will make such conditions valid or invalid and hence make the scenarios happen.
• Options:
• look at the use case constructs and branches
• consider category-partitioning
• boundary value analysis
Use Case Analysis based System Testing
• Coverage analysis
• Model-based coverage i.e.,
• All MCAs covered
• All ACAs covered
• Question
• How can we relate or equate this to actual code coverage
• Do we need to make any assumptions?
Module 56:
Use case analysis - example
Use case analysis - example
• We take an example of a system
which is a control system of
Microwave oven
• We consider its specifications
• We extract test cases from
specifications
Use case analysis - example

Initiator Cook
Food

Cook Microwave Oven System Timer

Reference: Gomaa, H., Designing Software Product Lines with UML : From Use Cases to Pattern-
Based Software Architectures, Addison-Wesley, Reading 2004.
Use case analysis - example
Brief description: This use case describes the user interaction and operation of a microwave oven. Cook invokes this use case in order to cook food in the microwave.
Added value: Food is cooked.
Scope: A microwave oven
Primary actor: Cook
Supporting actors: Timer
Preconditions: The microwave oven is waiting to be used.
Use case analysis - example
Main Course of Action
1. Cook opens the microwave oven door, puts food into the oven, and
then closes the door.
2. Cook specifies a cooking time.
3. System displays entered cooking time to Cook.
4. Cook starts the system.
5. System cooks the food, and continuously displays the remaining cooking time.
6. Timer indicates that the specified cooking time has been reached,
and notifies the system.
7. System stops cooking and displays a visual and audio signal to indicate
that cooking is completed.
8. Cook opens the door, removes food, then closes the door.
9. System resets the display.
Use case analysis - example
• Alternative flows:
1a. If Cook does not close the door before starting the system (step 4), the
system will not start.
4a. If Cook starts the system without placing food inside the system, the
system will not start.
4b. If Cook enters a cooking time equal to zero, the system will not start.
5a. If Cook opens the door during cooking, the system will stop cooking.
Cook can either close the door and restart the system (continue at step
5), or Cook can cancel cooking.
5b. Cook cancels cooking. The system stops cooking. Cook may start the
system and resume at step 5. Alternatively, Cook may reset the
microwave to its initial state (cancel timer and clear displays).
Use case analysis - example
• Test case generation:
1. For Each use case, generate a full set of use-case scenarios.

2. For each scenario, identify at least one test case and the
conditions that will make it "execute.“

3. For each test case, identify the data values with which to test.
Use case analysis - example
• Step 1: Read use-case textual description and identify each
combination of main and alternate flows of scenarios and create a
scenario matrix.
U/C ID | S ID | Scenario Description                  | Starting Flow | Target Flow
1      | 1-1  | Scenario 1 – Successful Cooking       | MCA           | MCA
1      | 1-2  | Scenario 2 – Door Open                | MCA           | ACA1
1      | 1-3  | Scenario 3 – No Food                  | MCA           | ACA2
1      | 1-4  | Scenario 4 – Door Open during cooking | MCA           | ACA3
…      | …    | Scenario 5 – Cooking cancelled        | MCA           | ACA4
Use case analysis - example
• Step 2: Identify test cases
• We do this by:
– analyzing scenarios
– reviewing the use case textual representation
• Step 2: Identify test cases
– There should be at least one test case per scenario
– E.g., test case for door open scenario identified previously
– We develop a test suite containing information e.g.,
– Test case id
– Scenario/condition reference
– Inputs
– Preconditions
– Expected output
– Post-conditions
– Pass/fail criteria
Use case analysis - example
• Step 3: Identify data — we need to identify suitable data to execute the identified test cases
TC ID | S ID | Input 1                 | Input 2                    | Input 3                      | Expected Output | Post-Condition | Pass/Fail Criteria
T1    | 1-1  | Food placed for cooking | Timer initialized to 1 min | Start cooking button pressed | Food cooked     | Alarm on       | Food cooked, alarm for 15 sec
…     | …    | …                       | …                          | …                            | …               | …              | …
Use case analysis - example
• What if
• We identify additional scenarios e.g.,
• Power failure and restoration during cooking
• Cooking started with too much load
• Cooking with metal plate in the food
Use case analysis - example
• We find
• scenarios implemented but not defined
• scenarios defined but not implemented
• We report anomalies on the basis of
• mismatches between specification and implementation
• Next we extract test cases from SRS written in plain English
END
Software Verification
and Validation

Lecture No. 10
Module 57:
Specification based Testing
Specification based testing
• Specification or model?
• Recall
• Models typically provide some abstract representation of the behavior of the
system.
• Typical notations are:
• Algebraic Specifications
• Control/Dataflow Graphs
• Logic-based Specifications
Specification based testing
• Specification or model?
• Typical notations are (Contd.):
• Finite State Machine Specifications
• Grammar-based Specifications
• Considering Specifications written using informal languages,
e.g., English
Specification based testing
• Independently Testable Feature (ITF):
• depends on the control and observation that is available in the interface to the
system
• Test case:
• inputs, environment conditions, expected results.
• Test case specification:
• a property of test cases that identifies a class of test cases.
Specification based testing
• Functional specification:
• This can be some kind of formal specification that claims to be comprehensive.
• Often it is much more informal comprising a short English description of the inputs and
their relationship with the outputs.
• For some classes of system the specification is hard to provide (e.g. a GUI, since many of the important properties relate to hard-to-formalize issues like usability).
Specification based testing
• We slice the specification into features (that may spread across many code modules).
• Each feature should be independent of the others, i.e. we can concentrate on testing one at a time.
• The design of the code will make this easier or more difficult depending on how much attention has been given to testability in the system's design.
• Each such feature is called an Independently Testable Feature (ITF)
Specification based testing
• For each of the inputs we consider classes of values that will all generate similar
behavior.
• This will result in a very large number of potential classes of input for non-trivial
programs.
• We then identify constraints that disallow certain combinations of classes. The goal is to
reduce the number of potential test cases by eliminating combinations that do not
make sense.
Specification based testing
• Command: find

• Syntax: find <pattern> <filename>

• Function: The find command is used to locate one or more instances of the given pattern
in the named file. All matching lines in the named file are written to standard output. …
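• As an illustration of slicing such a specification into input classes, the classes and expected outcomes can be recorded as a small test-specification table. Below is a minimal C sketch; the class names and expected-result strings are our own assumptions, not part of any standard find specification.

    #include <stdio.h>

    /* Hypothetical equivalence classes for the two inputs of "find".
       Pattern: empty, single character, multi-character;
       File: missing, empty, containing matches. */
    typedef struct {
        const char *pattern_class;  /* class of <pattern> values  */
        const char *file_class;     /* class of <filename> values */
        const char *expected;       /* expected observable result */
    } TestSpec;

    static const TestSpec specs[] = {
        { "empty pattern",    "file with matches", "error message, no output"   },
        { "single character", "file with matches", "all matching lines printed" },
        { "multi-character",  "empty file",        "no lines printed"           },
        { "multi-character",  "missing file",      "error message"              },
    };

    int main(void) {
        /* Each row is a test case specification; concrete data values
           are chosen later, in the data-identification step. */
        for (unsigned i = 0; i < sizeof specs / sizeof specs[0]; i++)
            printf("TC%u: pattern=%s, file=%s -> %s\n", i + 1,
                   specs[i].pattern_class, specs[i].file_class, specs[i].expected);
        return 0;
    }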
Specification based testing
• We study methods where we have specifications written in plain English and we extract test cases out of them
• We also study how we can extract test cases from a model of a system
• In upcoming lecture modules
Module 58:
Scenario-based Testing
Scenario-based Testing
• What is a scenario:
• A scenario is a hypothetical story, used to help a person think through a complex
problem or system.
• While doing scenario-based testing, a test case is based on a story about how the program is used, including information about the motivations of the people involved.
Scenario-based Testing
• What is scenario-based testing:
• The story is motivating. A stakeholder with influence would push to fix a program that failed this test. (Anyone affected by a program is a stakeholder; a person who can influence development decisions is also a stakeholder.)
Scenario-based Testing
• What is scenario-based testing:
• The story is credible. It not only could happen in the real world; stakeholders would
believe that something like it probably will happen.
• The story involves a complex use of the program or a complex environment or a
complex set of data.
• The test results are easy to evaluate. This is valuable for all tests, but is especially
important for scenarios because they are complex.
Scenario-based Testing
• In a nutshell:
• Learn the product
• Connect testing to documented requirements
• Expose failures to deliver desired benefits
• Explore expert use of the program
• Bring requirements-related issues to the surface, which might involve reopening old requirements discussions (with new data) or surfacing not-yet-identified requirements.
Scenario-based Testing
There are two types of scenario testing.
• Type 1 – scenarios used to define input/output sequences.
They have quite a lot in common with requirements elicitation.
• Type 2 – scenarios used as a script for a sequence of real actions in
a real or simulated environment
Scenario-based Testing
• Quite a lot of what is needed in order to write a good scenario is closely related to what is needed to write good requirements.
• One of the reasons why scenario testing is so efficient may be that we, in some sense, repeat the requirements process but with other people involved.
Scenario-based Testing
• Type – 1 scenario:
• Some rules for writing a good scenario:
• List possible users – analyze their interest and objectives
• List system events. How does the system handle them?
• List special events. What accommodations does the system
make for these?
Scenario-based Testing
• List benefits and create end-to-end tasks to achieve them
• Work alongside users to see how they work and what they do
• Read about what systems like this are supposed to do
• Create a mock business. Treat it as real and process its data
Scenario-based Testing
• Users
• When we later say users, we mean the prospective users – those who will later use the system that we are currently developing and testing.
• What we need to do is to understand how the system is used by its real users.
Scenario-based Testing
• List possible users:
• For each identified user, identify his interests. A user will value the system if it furthers his interests.
• Focus on one interest at a time. Identify the user's objectives.
• How can we test that each objective is easy to achieve?
Scenario-based Testing
• List system events:
• An event is any occurrence that the system is supposed to respond to.
• E.g. for a real time system – anything that generates an interrupt is an event.
• For each event we need to understand
• Its purpose
• What the system is supposed to do when it occurs
• Rules related to the event
Scenario-based Testing
• List special events:
• Special events are events that are predictable but unusual. They require special handling.
• They will also require special circumstances in order to be triggered. Make sure the scenario includes at least the most important of these circumstances.
Scenario-based Testing
• List benefits:
• What are the benefits that the system is supposed to provide to
the users?
• Ask the stakeholders about their opinions.
• Watch out for
• Misunderstandings
• Conflicts – e.g. between groups of users or between users and customers.
Scenario-based Testing
• Work alongside real users:
• Observe them at their work. What do they
• usually do during a working day
• have problems with
• How do they usually solve their problems
• These observations help us to understand the users and give us ideas for scenarios.
Scenario-based Testing
• Read about this type of system:
• Before writing a scenario it is important to have knowledge of
• What the new system is supposed to do – system requirements.
• What other, similar systems do – system expectations. This knowledge can be obtained through books, manuals etc.
Scenario-based Testing
• Create mock business:
• To create a mock business we first of all need good knowledge of how the business works. The mock business must be realistic even if it isn't real. It might be necessary to use external consultants.
• Creating a mock business takes a lot of resources but can give valuable results.
Scenario-based Testing
• Risks of scenario testing – 1:
• Scenario testing is not good for testing new code. If we hit a bug early in the scenario test, the rest of the test will most likely have to be postponed until the bug is fixed.
• Then we can resume the testing and go on until we find a new bug, and so on.
• Thus, scenario testing should mainly be used as an acceptance test.
Scenario-based Testing
• A scenario test aims to cover all or part of the functionality in a system. It does not consider code coverage of any kind.
• Scenario tests discover more design errors than coding errors. Thus, scenario testing is not useful for e.g. regression testing or testing a new fix.
Scenario-based Testing
• Scenario testing type 1:
• When we do scenario testing type 1, we use the scenarios to write transactions as sequences of
• Input
• Expected output
The result can e.g. be an extremely detailed textual use case.
Scenario-based Testing
• Scenario testing type-2:
• When it comes to realism, scenario testing type 2 is a good
testing method. The goal of scenario testing is to test how the
system will behave
• In real-world situations – described by scenarios
• With real users, supplied by the system owner
• With real customers – if necessary
Scenario-based Testing
• Scenario testing type-2:
• A scenario test is done as follows:
• The environment is arranged according to the scenario description. Customers are
instructed as needed.
• A person – the game master – reads out each step of the scenario.
Scenario-based Testing
• Scenario testing type-2:
• A scenario test is done as follows:
• Users and customers react to the situations created by the game master.
• The events resulting from each scenario are documented – e.g.
– by a video camera or by one or more observers
– for later assessment.
Scenario-based Testing
• The number of possible scenarios is large. Which scenarios we select
depends on the customer’s priorities. In addition, since scenario
tests are expensive, we can usually just afford to run a few.
• Scenarios are most efficient when we want to test requirements involving a strong interaction with the system's environment – users, customers, networks, file servers, a stressful work situation etc.
Module 59:
Scenario based testing - example
Scenario based testing - example
• Example:
Requirement – MTTR < 1 hr.
Scenario for order handling system:
• We have 50 MB of information on the system's hard disk.
• When we are registering a new order, we suddenly lose the electrical power.
• At the same time several customers call us on the phone to enquire about the status of their order.
Scenario based testing - example
• Example (Contd.)
When we run the scenario one or more times, we want to measure the time from the power loss until the system is fully operational again.
The test may identify problems pertaining to:
• Operational routines – e.g. back-up
• Operator training – stress handling
• Customer handling
• What about the half-filled-in order?
Scenario based testing - example
• Example – 2:
• Imagine we are working on a home heating system
• We have a specification at hand
• We would do two iterations
• First we do use case based testing
• Next we do scenario based testing
Scenario based testing - example
Home Heating Use-Cases
Use case: Power Up (Normal)
Actors: Home Owner (initiator)
Type: Primary and essential
Description: The Home Owner turns the power on. (Power fails)
Perform Adjust Temp.
Check temperature in all rooms, (Thermostat fails)
temperature OK, (Thermometer fails)
no actions taken. (Due to thermostat/meter failure)
Cross Ref.: Requirements XX, YY, and ZZ
Use-Cases: Perform Adjust Temp
Scenario based testing - example
Use case: Power Up (Temp Low)
Actors: Home Owner (initiator)
Type: Primary and essential
Description: Owner turns the power on.
Room temperature checked.
One of the rooms is below the desired temperature.
Valve for the room is opened,
water pump started.
Water temp falls below threshold,
the fuel valve is opened, burner ignited.
Cross Ref.: Requirements XX, YY, and ZZ
Use-Cases: None
Scenario based testing - example
Use case: Adjust Temp
Actors: System (initiator)
Type: Secondary and essential
Description: Check the temperature in each room. For each
room:
Below target: Perform Temp Low
Above target: Perform Temp High
Cross Ref.: Requirements XX, YY, and ZZ
Use-Cases: Temp Low, Temp High
Scenario based testing - example
Use case: Temp Low
Actors: System (initiator)
Type: Secondary and essential
Description: Open room valve, start pump if not started.
If water temp falls below threshold,
open fuel valve and ignite burner.
Cross Ref.: Requirements XX, YY, and ZZ
Use-Cases: None
Scenario based testing - example
From Use Cases to Test Cases

Use case ID  Scenario ID  Scenario Description            Starting Flow  Target Flow
1            1-1          Scenario 1 – Power up (Normal)  MCA            MCA
1            1-2          Scenario 1 – Power up (Normal)  MCA            ACA1
1            1-3          Scenario 1 – Power up (Normal)  MCA            ACA2
1            1-4          Scenario 1 – Power up (Normal)  MCA            ACA3
Scenario based testing - example
Test Cases

TC ID  Scenario ID  Input 1   Input 2  Input 3  Expected Output        Post-Condition                Pass/Fail Criteria
T1     1-2          Power on  –        –        System in ready state  Room temp not shown or wrong  Manual temp reading and system reading same
T2     1-1          Power on  –        –        System in ready state  Room temp not shown or wrong  Two readings different yet system is not starting heating
Scenario based testing - example
• Scenario – I
• Scenario development
• User switches the heating on
• All temperatures (current temperature and set temperatures) are shown
• System starts working

Flow: Power-up → All temps OK
Scenario based testing - example
• Scenario – II
• User switches the system on
• All temperatures (current temperature and set temperatures) are shown
• System starts working
• The room temperature is lower than the set temperature
• System opens valves for water movement and ignites burner
• System opens fuel valve.

Flow: Power-up → All temps too low → Open valves → Open fuel valve
Scenario based testing - example
• Scenario – III
• User switches the system on, temperatures (current temp./set temp. shown), system starts working
• The room temperature is lower than the set temperature
• System opens valves for water movement and ignites burner
• System opens fuel valve, burner fails and fuel supply is closed with error notification to user

Flow: Power-up → All temps too low → Open valves → Ignite burner → Open fuel valve → Start pump → Burner fails → Shut off fuel → Stop pump → Notify error
Scenario based testing - example
• Conclusion:
• Scenarios can be viewed differently from:
• System’s usage perspective
• Usability perspective
• Functionality validation perspective
• It is important to find out what you are planning to conduct or perform
Module 60:
Equivalence Class Testing
Equivalence Class Testing
• We start our discussion with an example.
• Let us have a program specification:
We require a routine that should take two integer values a and b as input and return a+b if a < b, and a/b if a > b.
• Let there be the following code against this specification:

    int myDiv(int a, int b) {
        /* a/b when a > b, otherwise a+b */
        return (a > b) ? (a / b) : (a + b);
    }
Equivalence Class Testing
• Considering our previous knowledge, we only know how to handle such situations w.r.t. a CFG or DFG.
• Test it with respect to MC/DC:
• a > b -> input (2, 1), output 2
• a < b -> input (1, 2), output 3
• With respect to the CFG we find that all paths are covered, yet (2, 0) would lead to a crash.
• We need alternatives
• We study equivalence class partitioning, applicable to all levels i.e., unit, integration and system levels
Equivalence Class Testing
• Equivalence class testing uses the mathematical concept of equivalence classes to generate test cases, largely for Functional (Black-box) testing
• However, it can be used equally at unit and integration level since it works on the inputs and classifies them into disjoint sets.
• The key goals for equivalence class testing are:
• completeness of test coverage
• reduced test effort
Equivalence Class Testing
• Let A be a set. We divide A into disjoint subsets A1, A2, …, An such that:
• A1 ∪ A2 ∪ … ∪ An = A
• for any i ≠ j, Ai ∩ Aj = Ø
• Equivalence relation R: let R be defined on the input variable's value set X; R is a relation that is reflexive, symmetric, and transitive
• reflexive: -2 ∈ X, then (-2, -2) ∈ R
• symmetric: (3, 5) ∈ R, then (5, 3) ∈ R
• transitive: (-5, -7) ∈ R and (-7, -200) ∈ R, then (-5, -200) ∈ R
Equivalence Class Testing
• Let us consider the code for our earlier example once again

    int myDiv(int a, int b) {
        return (a > b) ? (a / b) : (a + b);
    }

• We take values for inputs a and b from the set of integers I:
• I = { …, -3, -2, -1, 0, 1, 2, 3, … }
• I = A1 ∪ A2 ∪ A3 such that
• A1 = {…, -3, -2, -1}
• A2 = {0}
• A3 = {1, 2, 3, …}
• A1, A2 and A3 are pairwise disjoint: A1 ∩ A2 = A1 ∩ A3 = A2 ∩ A3 = Ø
Equivalence Class Testing
• A1 = {…, -3, -2, -1}
• A2 = {0}
• A3 = {1, 2, 3, …}
(Figure: valid and invalid input classes feeding the System, which produces outputs)
• Test cases drawn from class combinations: {(A1, A1), (A1, A2), (A1, A3), …, (A3, A3)}
• E.g. (1, 2) -> 3; (1, 1) -> 2; (1, 3) -> 4; all of them are equivalent
• So we need to devise a strategy
Equivalence Class Testing
We select test cases
1. Within range
2. At the boundaries of the range
3. Outside range ("illegal")
• A1 = {…, -3, -2, -1}
• A2 = {0}
• A3 = {1, 2, 3, …}
• Now suppose the range is an 8-bit wide positive integer value; the classes become:
Equivalence Class Testing
• A2 = {0}
• A3 = {1, 2, 3, …, 255}
• Test cases: {(1, 2), (2, 1), (1, 0), (0, 1), …, (2, 2)}
• Test cases (1, 0) (division by zero) and (2, 2) (a = b is not specified) would surface program defects
• We now have a mechanism to:
• select data from equivalence classes
• avoid a large number of equivalent test cases
• consider boundary values (boundary value analysis)
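• A minimal sketch of this selection strategy in C, reusing the myDiv routine above; the expected outputs follow the specification, while the Case structure and PASS/FAIL scaffolding are our own illustration:

    #include <stdio.h>

    /* Unit under test, as specified: a+b if a < b, a/b if a > b. */
    static int myDiv(int a, int b) {
        return (a > b) ? (a / b) : (a + b);
    }

    typedef struct { int a, b, expected; const char *cls; } Case;

    int main(void) {
        /* One representative per class combination, plus boundary values.
           (1, 0) lies on the boundary A2 = {0} and would crash (1/0);
           (2, 2) exposes the unspecified a == b case. */
        Case cases[] = {
            { 1, 2, 3, "a < b, both in A3"  },
            { 5, 2, 2, "a > b, both in A3"  },
            { 0, 1, 1, "boundary: a in A2"  },
            /* { 1, 0, ?, "boundary: b in A2 -- division by zero" }, */
        };
        for (unsigned i = 0; i < sizeof cases / sizeof cases[0]; i++) {
            int got = myDiv(cases[i].a, cases[i].b);
            printf("%s: myDiv(%d, %d) = %d, expected %d: %s\n",
                   cases[i].cls, cases[i].a, cases[i].b, got, cases[i].expected,
                   got == cases[i].expected ? "PASS" : "FAIL");
        }
        return 0;
    }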
Module 61:
Weak and Strong Normal and Robust
Equivalence
Weak and Strong Normal and Robust Equivalence
• Weak Normal equivalence testing:
• Assumes the “independence of input
variables.”
• e.g. If there are 2 input
variables, these input variables
are independent of each other.
• Partition the test cases of each
input variable separately into
different equivalence classes.
• Choose the test case from each of
the equivalence classes for each
input variable independently of the
other input variable
Weak and Strong Normal and Robust
Equivalence
• Strong Normal equivalence testing:
• This is the same as weak normal equivalence testing except that it assumes "dependence" among the inputs
• All the combinations of equivalence classes of the variables must be included.
Weak and Strong Normal and Robust
Equivalence
• Weak robust equivalence testing:
• Up to now we have only considered partitioning the valid input space.
• “Weak robust” is similar to “weak normal” equivalence test except
that the invalid input variables are now considered.
Weak and Strong Normal and Robust
Equivalence
• A note about considering invalid inputs: there may not be any definition specified for the different invalid inputs
• This makes using the output as the defining process for equivalence classes a bit more difficult.
Weak and Strong Normal and Robust
Equivalence
• Strong robust equivalence testing:
• assumes dependency of input variables
• "Strong robust" is similar to the "strong normal" equivalence test except that the invalid input variables are now considered.
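• To make the weak/strong distinction concrete, the sketch below (our own illustration) enumerates test tuples for two integer inputs over the classes A1, A2, A3 introduced earlier:

    #include <stdio.h>

    int main(void) {
        /* One representative value per class:
           A1 = negatives, A2 = {0}, A3 = positives. */
        int rep[3] = { -2, 0, 2 };
        const char *name[3] = { "A1", "A2", "A3" };

        /* Weak normal: inputs treated as independent, so classes are
           paired positionally -- max(3, 3) = 3 test cases suffice. */
        puts("Weak normal:");
        for (int i = 0; i < 3; i++)
            printf("  a = %d (%s), b = %d (%s)\n", rep[i], name[i], rep[i], name[i]);

        /* Strong normal: dependence assumed, so the full cross product
           of classes is required -- 3 x 3 = 9 test cases. */
        puts("Strong normal:");
        for (int i = 0; i < 3; i++)
            for (int j = 0; j < 3; j++)
                printf("  a = %d (%s), b = %d (%s)\n", rep[i], name[i], rep[j], name[j]);
        return 0;
    }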
Module 62:
Equivalence Class (with boundary value)
Example
Equivalence Class (with boundary value) Example
• We take a case:
• Specification
• We require a system that takes 3 inputs (a, b, c), and returns whether a triangle is possible by evaluating a<b+c, b<a+c and c<a+b.
• In case a triangle is possible, it returns whether the resulting triangle is
• scalene (no sides equal),
• isosceles (two sides equal) or
• equilateral (all sides equal)
• Input range is 1-100
Equivalence Class (with boundary value) Example
• "valid" inputs: 1 <= a <= 100, 1 <= b <= 100, 1 <= c <= 100
• and for a triangle: a < b + c, b < a + c, c < b + a

a    b    c     output
15   10   4     Not a triangle
15   15   15    Equilateral
15   15   7     Isosceles
15   18   14    Scalene
Equivalence Class (with boundary value) Example
• Since there are no further sub-intervals inside the valid inputs for the 3 sides a, b, and c, Strong Normal Equivalence is the same as Weak Normal Equivalence
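• For reference, a plain C sketch of the unit under test; the function name and the returned strings are our own assumptions, since the lecture gives only the specification:

    #include <stdio.h>

    /* Hypothetical implementation of the triangle specification:
       inputs must lie in 1..100; a triangle requires a<b+c, b<a+c, c<a+b. */
    static const char *triangle(int a, int b, int c) {
        if (a < 1 || a > 100 || b < 1 || b > 100 || c < 1 || c > 100)
            return "invalid input";
        if (!(a < b + c && b < a + c && c < a + b))
            return "not a triangle";
        if (a == b && b == c) return "equilateral";
        if (a == b || b == c || a == c) return "isosceles";
        return "scalene";
    }

    int main(void) {
        /* The four weak-normal cases from the table above. */
        printf("%s\n", triangle(15, 10, 4));   /* not a triangle */
        printf("%s\n", triangle(15, 15, 15));  /* equilateral    */
        printf("%s\n", triangle(15, 15, 7));   /* isosceles      */
        printf("%s\n", triangle(15, 18, 14));  /* scalene        */
        return 0;
    }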
Equivalence Class (with boundary value) Example
• Weak robust equivalence test cases:
• Include 6 invalid test cases in addition to Weak Normal:
• <101, 45, 50>
• <-5, 76, 89>
• <45, 104, 78>
• <56, -5, 89>
• <50, 78, 108>
• <56, 89, 0>
• Valid boundary corners: <1, 1, 1> and <100, 100, 100>
Equivalence Class (with boundary value) Example
• Strong robust equivalence test cases
• Similar to Weak robust, but all combinations of "invalid" inputs must be included.
• Look at the "cube" figure and consider the corners (two diagonally opposite ones)
• Consider one of the corners: there should be (2³ – 1) = 7 cases of "invalids"
• <101, 101, 101>   <50, 101, 50>
• <101, 101, 50>    <50, 101, 101>
• <101, 50, 101>    <50, 50, 101>
• <101, 50, 50>
• There will be 7 more "invalids" when we consider the other corner, <0, 0, 0>
Equivalence Class (with boundary value)
Example
• The following testing schemes overlap in their concepts:
• Equivalence class based testing
• Boundary value analysis
• Category-partitioning
• We have covered equivalence class testing with a few references to the other two.
Module 63:
Decision Table Based Testing
Decision Table Based Testing
• A decision table is based on logical relationships, just like a truth table.
• It is a tool that helps us look at the "completeness" and "consistency" of combinations of conditions
Decision Table Based Testing

rules:           R1  R2  R3  R4  R5  R6  R7  R8
conditions:
C1               T   T   T   T   F   F   F   F
C2               T   T   F   F   T   T   F   F
C3               T   F   T   F   T   F   T   F
actions taken:
a1               x   x   x   x
a2               x   x
a3                       x   x
a4                               x   x   x   x
a5                                       x   x
Decision Table Based Testing
• The conditions in the decision table may take on any number of values. When they are binary, the decision table conditions are just like a truth table set of conditions.
• The decision table allows the iteration of all the combinations of values of the conditions, thus it provides a "completeness check."
• The conditions in the decision table may be interpreted as the inputs, and the actions may be thought of as outputs.
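• One way to see the completeness check is to generate the rule columns mechanically. The sketch below (our own illustration) prints the eight rules for three binary conditions in the same order as the table above:

    #include <stdio.h>

    int main(void) {
        /* Completeness check: enumerate all 2^3 = 8 combinations of three
           binary conditions (one column per rule of the decision table). */
        for (int rule = 0; rule < 8; rule++) {
            int c1 = (rule & 4) == 0;  /* T for rules where the bit is 0 */
            int c2 = (rule & 2) == 0;
            int c3 = (rule & 1) == 0;
            printf("R%d: C1=%c C2=%c C3=%c\n", rule + 1,
                   c1 ? 'T' : 'F', c2 ? 'T' : 'F', c3 ? 'T' : 'F');
        }
        return 0;
    }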
Module 64:
Decision Table Based Testing Example
Decision Table Based Testing Example
• We take the same case:
• Specification
• We require a system that takes 3 inputs (a, b, c), and returns whether a triangle is possible by evaluating a<b+c, b<a+c and c<a+b.
• In case a triangle is possible, it returns whether the resulting triangle is
• scalene (no sides equal),
• isosceles (two sides equal) or
• equilateral (all sides equal)
• Input range is 1-100
Decision Table Based Testing Example
Assume a, b and c are all between 1 and 100

                   R1  R2  R3  R4  R5  R6  R7  R8  R9  R10 R11
1. a < b + c?      F   T   T   T   T   T   T   T   T   T   T
2. b < a + c?      -   F   T   T   T   T   T   T   T   T   T
3. c < a + b?      -   -   F   T   T   T   T   T   T   T   T
4. a = b?          -   -   -   T   T   T   T   F   F   F   F
5. a = c?          -   -   -   T   T   F   F   T   T   F   F
6. b = c?          -   -   -   T   F   T   F   T   F   T   F
1. Not a triangle  x   x   x
2. Scalene                                                 x
3. Isosceles                               x       x   x
4. Equilateral                 x
5. "Impossible"                    x   x       x
Decision Table Based Testing Example
• There is the "invalid situation" --- not a triangle:
• There are 3 test conditions in the decision table (R1 to R3)
• Note the "-" entries, which represent "don't care," when it is determined that the input sides <a, b, c> do not form a triangle
• There is the "valid" triangle situation:
• There are 3 equality conditions (a = b, a = c, b = c); so there are 2³ = 8 test conditions
• But 3 of them are "impossible" situations
• So there are only 8 – 3 = 5 test conditions
• So, for values of a, b, and c, we need to come up with 8 sets of <a, b, c> to test the (3 + 5) = 8 test conditions.
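• As an illustration, the 8 required <a, b, c> sets might be chosen as follows; the concrete values are our own picks, one per feasible rule of the table:

    #include <stdio.h>

    /* One concrete test triple per feasible rule of the triangle
       decision table (values are illustrative choices). */
    typedef struct { int a, b, c; const char *expected; } DTCase;

    int main(void) {
        DTCase t[] = {
            { 4, 1, 2, "not a triangle" },  /* R1:  a >= b + c     */
            { 1, 4, 2, "not a triangle" },  /* R2:  b >= a + c     */
            { 1, 2, 4, "not a triangle" },  /* R3:  c >= a + b     */
            { 5, 5, 5, "equilateral"    },  /* R4:  a = b = c      */
            { 5, 5, 3, "isosceles"      },  /* R7:  a = b only     */
            { 5, 3, 5, "isosceles"      },  /* R9:  a = c only     */
            { 3, 5, 5, "isosceles"      },  /* R10: b = c only     */
            { 3, 4, 5, "scalene"        },  /* R11: all different  */
        };
        for (unsigned i = 0; i < sizeof t / sizeof t[0]; i++)
            printf("<%d, %d, %d> -> %s\n", t[i].a, t[i].b, t[i].c, t[i].expected);
        return 0;
    }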
Decision Table Based Testing Example
• Advantages: (check completeness & consistency)
1. Allows us to start with a "complete" view, with no consideration of dependence
2. Allows us to look at and consider "dependence," "impossible," and "not relevant" situations and eliminate some test cases.
3. Allows us to detect potential errors in our specifications
Decision Table Based Testing Example
• Disadvantages:
1. Need to decide (or know) what conditions are relevant for testing - - - this may require domain knowledge
• e.g. need to know about leap years for the "next date" problem in the book
2. Scaling up can be massive: 2ⁿ rules for n conditions - - - that is if the conditions are binary, and it gets worse if the values are more than binary