Unit 3 SIA

Chapter 18 discusses software testing, emphasizing the importance of finding errors before software delivery. It covers testing methods such as white-box and black-box testing, detailing techniques like basis path testing and cyclomatic complexity to ensure comprehensive test coverage. The chapter also highlights the characteristics of good tests and the significance of testability in software engineering.

Chapter 18

 Testing Conventional Applications


Slide Set to accompany
Software Engineering: A Practitioner’s Approach, 7/e
by Roger S. Pressman

Slides copyright © 1996, 2001, 2005, 2009 by Roger S. Pressman

For non-profit educational use only


May be reproduced ONLY for student use at the university level when used in conjunction
with Software Engineering: A Practitioner's Approach, 7/e. Any other reproduction or use is
prohibited without the express written permission of the author.

All copyright information MUST appear if these slides are posted on a website for student
use.

1
Software Testing

Testing is the process of exercising a program
with the specific intent of finding errors prior
to delivery to the end user.

2
What Testing Shows

[Figure: testing reveals errors, requirements conformance, and performance, and gives an indication of quality.]

3
What is a “Good” Test?
 A good test has a high probability of
finding an error
 A good test is not redundant.
 A good test should be “best of breed”
 A good test should be neither too
simple nor too complex

4
Testability: characteristics that lead to testable software

 Operability—it operates cleanly


 Observability—the results of each test case are readily
observed
 Controllability—the degree to which testing can be
automated and optimized
 Decomposability—testing can be targeted
 Simplicity—reduce complex architecture and logic to
simplify tests
 Stability—few changes are requested during testing
 Understandability—of the design

5
Internal and External Views
 Any engineered product (and most other
things) can be tested in one of two ways:
 Knowing the specified function that a product
has been designed to perform, tests can be
conducted that demonstrate each function is
fully operational while at the same time
searching for errors in each function;
 Knowing the internal workings of a product, tests
can be conducted to ensure that "all gears
mesh," that is,
• Internal operations are performed according to
specifications and
• All internal components have been adequately
exercised.
6
Exhaustive Testing

[Flowchart: a program containing a loop executed up to 20 times.]

There are 10^14 possible paths! If we execute one
test per millisecond, it would take 3,170 years to
test this program!!
7
Selective Testing

[Flowchart: the same loop < 20X program, with one selected path highlighted.]

8
Software Testing

[Figure: software testing comprises methods (white-box methods and black-box methods) applied through strategies.]

9
White-Box Testing

... our goal is to ensure that all


statements and conditions have
been executed at least once ...
10
White-box testing,
 White-box testing, sometimes called glass-box testing, is a
test-case design philosophy that uses the control structure
described as part of component-level design to derive test
cases.
 Using white-box testing methods, you can derive test cases
that
 (1) guarantee that all independent paths within a module have been
exercised at least once
 (2) exercise all logical decisions on their true and false sides,
 (3) execute all loops at their boundaries and within their operational
bounds, and
 (4) exercise internal data structures to ensure their validity.

11
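The four criteria above can be made concrete with a small sketch. The function and test values below are hypothetical (not from the slides); they show a test set that exercises both sides of every decision and runs the loop zero times, once, and several times.

```python
# Hypothetical component under test (illustrative only).
def classify(scores):
    """Return 'pass' if the average of scores is >= 60, else 'fail'."""
    if not scores:                         # decision 1: guard for empty input
        return "fail"
    total = 0
    for s in scores:                       # loop: exercised at its boundary and within bounds
        total += s
    if total / len(scores) >= 60:          # decision 2
        return "pass"
    return "fail"

# A white-box test set: every statement, both sides of each decision,
# and the loop at 0, 1, and several iterations.
assert classify([]) == "fail"             # decision 1 true; loop skipped
assert classify([80]) == "pass"           # loop runs exactly once; decision 2 true
assert classify([50, 55, 45]) == "fail"   # loop within bounds; decision 2 false
```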
White-box testing,
 Two types
 Basis path testing

 Control structure Testing

These slides are designed to accompany Software Engineering: A Practitioner’s Approach, 7/e
(McGraw-Hill 2009). Slides copyright 2009 by Roger Pressman. 12
Basis path testing
 Basis path testing is a white-box testing
technique first proposed by Tom McCabe
 The basis path method enables the test-case
designer to derive a logical complexity
measure of a procedural design and use this
measure as a guide for defining a basis set of
execution paths.
 Test cases derived to exercise the basis set
are guaranteed to execute every statement in
the program at least one time during testing

Basis Path Testing

 Flow Graph Notation


 Independent Program Paths
 Deriving Test Cases
 Graph Matrices

1. Flow Graph Notation
 a simple notation for the representation of control
flow, called a flow graph (or program graph)
 The flow graph depicts logical control flow using the
notation illustrated in Figure below.
 Each structured construct has a corresponding flow
graph symbol.

15
Flow Graph Notation
 To illustrate the use of a flow graph, consider the
procedural design representation in Figure 18.2a. Here,
a flowchart is used to depict program control structure.
 Figure 18.2b maps the flowchart into a corresponding
flow graph (assuming that no compound conditions are
contained in the decision diamonds of the flowchart).

16
Flow Graph Notation
 Each circle, called a flow graph node, represents one or
more procedural statements.
 A sequence of process boxes and a decision diamond
can map into a single node.
 The arrows on the flow graph, called edges or links,
represent flow of control and are analogous to flowchart
arrows.
 An edge must terminate at a node, even if the node does
not represent any procedural statements (e.g., see the
flow graph symbol for the if-then-else construct).
 Areas bounded by edges and nodes are called regions.
When counting regions, we include the area outside the
graph as a region.
Compound logic
 When compound conditions are encountered in a
procedural design, the generation of a flow graph
becomes slightly more complicated.
 A compound condition occurs when one or more
Boolean operators (logical OR, AND, NAND, NOR) is
present in a conditional statement.

Compound logic

 Referring to Figure, the program design language


(PDL) segment translates into the flow graph shown.
 Note that a separate node is created for each of the
conditions a and b in the statement IF a OR b.
 Each node that contains a condition is called a
predicate node and is characterized by two or more
edges emanating from it. 19
2. Independent Program Paths
 An independent path is any path through the program that
introduces at least one new set of processing statements
or a new condition.
 When stated in terms of a flow graph, an independent
path must move along at least one edge that has not
been traversed before the path is defined.

20
2. Independent Program Paths
 For example, a set of independent paths for the flow graph
illustrated in Figure is
 Path 1: 1-11
 Path 2: 1-2-3-4-5-10-1-11
 Path 3: 1-2-3-6-8-9-10-1-11
 Path 4: 1-2-3-6-7-9-10-1-11

 Note that each new path introduces a new edge.


The path 1-2-3-4-5-10-1-2-3-6-8-9-10-1-11
is not considered to be an independent path because it is
simply a combination of already specified paths and does not
traverse any new edges.
21
Cyclomatic complexity
 How do you know how many paths to look for? The
computation of cyclomatic complexity provides the
answer.
 Cyclomatic complexity is a software metric that
provides a quantitative measure of the logical
complexity of a program.
 When used in the context of the basis path testing
method, the value computed for cyclomatic
complexity defines the number of independent paths
in the basis set of a program and provides you with
an upper bound for the number of tests that must be
conducted to ensure that all statements have been
executed at least once.
Cyclomatic Complexity
 Cyclomatic complexity provides the upper
bound on the number of test cases that will be
required to guarantee that every statement in
the program has been executed at least one
time.

Cyclomatic Complexity

A number of industry studies have indicated
that the higher V(G), the higher the probability
of errors.

[Chart: number of modules vs. V(G); modules in the high-V(G) range are more error prone.]

24
Cyclomatic Complexity
 Complexity is computed in one of three ways:
 1. The number of regions of the flow graph corresponds to the
cyclomatic complexity.
 2. V(G) = E - N + 2, where E is the number of flow graph edges
and N is the number of flow graph nodes.
 3. V(G) = P + 1, where P is the number of predicate nodes
contained in the flow graph.

25
 Cyclomatic complexity can be computed using each of the
algorithms just noted:
 the flow graph has four regions;
 V(G) = E - N + 2 = 4; and
 V(G) = 3 predicate nodes + 1 = 4.
 Therefore, the cyclomatic complexity of the flow graph in
Figure is 4.

26
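The computation can be sketched in code. The flow graph below is a reconstruction of the slide's example (the exact node numbering and edges are assumptions), encoded as adjacency lists; two of the three formulas are checked against each other.

```python
# Reconstructed flow graph (nodes 1-11): node -> list of successor nodes.
# Nodes 1, 3, and 6 are the predicate (decision) nodes.
flow_graph = {
    1: [2, 11], 2: [3], 3: [4, 6], 4: [5], 5: [10],
    6: [7, 8], 7: [9], 8: [9], 9: [10], 10: [1], 11: [],
}

def cyclomatic_complexity(graph):
    n = len(graph)                                          # number of nodes N
    e = sum(len(succ) for succ in graph.values())           # number of edges E
    p = sum(1 for succ in graph.values() if len(succ) > 1)  # predicate nodes P
    v_edges = e - n + 2   # V(G) = E - N + 2
    v_pred = p + 1        # V(G) = P + 1
    assert v_edges == v_pred, "the two formulas must agree"
    return v_edges

print(cyclomatic_complexity(flow_graph))  # -> 4
```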
3. Deriving Test Cases
 Summarizing:
 Using the design or code as a foundation,
draw a corresponding flow graph.
 Determine the cyclomatic complexity of the
resultant flow graph.
 Determine a basis set of linearly independent
paths.
 Prepare test cases that will force execution of
each path in the basis set.

27
4. Graph Matrices
 A graph matrix is a square matrix whose size (i.e., number of
rows and columns) is equal to the number of nodes on a flow
graph
 Each row and column corresponds to an identified node, and
matrix entries correspond to connections (an edge) between
nodes.
 By adding a link weight to each matrix entry, the graph matrix can
become a powerful tool for evaluating program control structure
during testing

28
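A minimal sketch of a graph matrix, using a small hypothetical graph. With link weight 1 meaning "a connection exists," the matrix serves as a connection matrix: summing (connections - 1) over each row and adding 1 yields the cyclomatic complexity.

```python
# Hypothetical flow graph: 5 nodes, 6 edges (node pairs are illustrative).
nodes = [1, 2, 3, 4, 5]
edges = [(1, 2), (1, 3), (3, 4), (3, 5), (4, 5), (5, 1)]

# Build the square graph matrix: entry [i][j] = 1 records an edge i -> j.
index = {n: i for i, n in enumerate(nodes)}
matrix = [[0] * len(nodes) for _ in nodes]
for a, b in edges:
    matrix[index[a]][index[b]] = 1   # link weight 1: a connection exists

# Rows with two or more connections correspond to predicate nodes;
# sum of (connections - 1) per row, plus 1, gives V(G).
v = sum(max(sum(row) - 1, 0) for row in matrix) + 1
print(v)   # -> 3  (matches V(G) = 6 edges - 5 nodes + 2)
```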
4. Graph Matrices

[Figure: a flow graph and its corresponding graph matrix with link weights.]

29
Control Structure Testing

 Condition Testing
 Data Flow Testing
 Loop Testing

Control Structure Testing
 Condition testing — a test case design method
that exercises the logical conditions contained
in a program module
 Data flow testing — selects test paths of a
program according to the locations of
definitions and uses of variables in the program

31
Condition testing
 Condition testing is a test-case design method that
exercises the logical conditions contained in a program
module.
A simple condition:
 A simple condition is a Boolean variable or a relational
expression, possibly preceded with one NOT (¬)
operator.
 A relational expression takes the form

E1 <relational-operator> E2

where E1 and E2 are arithmetic expressions and
<relational-operator> is one of the following: <, ≤, =, ≠, >, or ≥.
32
Compound condition
 A compound condition is composed of two or more simple
conditions, Boolean operators, and parentheses.
 Boolean operators allowed in a compound condition include
OR (|), AND (&), and NOT (¬).
 If a condition is incorrect, then at least one component of the
condition is incorrect.
 Therefore, types of errors in a condition include Boolean
operator errors (incorrect/missing/extra Boolean operators),
Boolean variable errors, Boolean parenthesis errors,
relational operator errors, and arithmetic expression errors.
 The condition testing method focuses on testing each
condition in the program to ensure that it does not contain
errors. 33
Data Flow Testing
 The data flow testing method selects test paths of a
program according to the locations of definitions
and uses of variables in the program.
 Assume that each statement in a program is
assigned a unique statement number and that
each function does not modify its parameters or
global variables. For a statement with S as its
statement number:
 • DEF(S) = {X | statement S contains a definition of X}
 • USE(S) = {X | statement S contains a use of X}
 A definition-use (DU) chain of variable X is of the
form [X, S, S'], where S and S' are statement
numbers, X is in DEF(S) and in USE(S'), and the
definition of X in statement S is live at statement S'.

34
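The DEF/USE definitions above can be sketched directly. The four-statement program is hypothetical; the function derives every [X, S, S'] chain in which the definition of X at S is still live at the use at S'.

```python
# DEF(S) and USE(S) sets for a tiny, hypothetical straight-line program.
program = {
    1: {"def": {"x"}, "use": set()},        # x = input()
    2: {"def": {"y"}, "use": {"x"}},        # y = x + 1
    3: {"def": {"x"}, "use": {"y"}},        # x = y * 2  (kills the def of x at 1)
    4: {"def": set(), "use": {"x", "y"}},   # print(x, y)
}

def du_chains(program):
    """Return [X, S, S'] chains: the def of X at S reaches the use at S'."""
    chains = []
    stmts = sorted(program)
    for i, s in enumerate(stmts):
        for x in program[s]["def"]:
            for s2 in stmts[i + 1:]:
                if x in program[s2]["use"]:
                    chains.append([x, s, s2])
                if x in program[s2]["def"]:   # a redefinition kills this def
                    break
    return chains

print(du_chains(program))  # -> [['x', 1, 2], ['y', 2, 3], ['y', 2, 4], ['x', 3, 4]]
```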
Loop testing
 Loop testing is a white-box testing technique
that focuses exclusively on the validity of loop
constructs.
 Four different classes of loops can be defined:
 Simple loops
 Concatenated loops
 Nested loop
 Unstructured loops

Loop Testing

[Figure: four classes of loops: simple loops, nested loops, concatenated loops, and unstructured loops.]

36
Loop Testing: Simple Loops
Minimum conditions—Simple Loops
1. skip the loop entirely
2. only one pass through the loop
3. two passes through the loop
4. m passes through the loop, where m < n
5. (n-1), n, and (n+1) passes through
the loop
where n is the maximum number
of allowable passes

37
Loop Testing: Nested Loops
Nested Loops
1. Start at the innermost loop. Set all outer loops to their
minimum iteration parameter values.
2. Test the min+1, typical, max-1 and max values for the
innermost loop, while holding the outer loops at their
minimum values.
3. Move out one loop and set it up as in step 2, holding all
other loops at typical values. Continue this step until
the outermost loop has been tested.
Concatenated Loops
If the loops are independent of one another,
then treat each as a simple loop;
else treat them as nested loops (for example, when the
final loop counter value of loop 1 is used to initialize
loop 2).
38
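A sketch of the simple-loop minimum conditions, applied to a hypothetical function that allows at most n = 5 passes; the n+1 case is expected to fail.

```python
# Hypothetical function under test: processes at most n items per batch.
def process_batch(items, n=5):
    """Double each item; raise if the batch exceeds the allowed passes."""
    if len(items) > n:
        raise ValueError("batch too large")
    return [i * 2 for i in items]

n = 5
# Simple-loop conditions: skip, one pass, two passes, m passes (m < n),
# and n-1, n, n+1 passes.
for k in [0, 1, 2, 3, n - 1, n, n + 1]:
    items = list(range(k))
    if k <= n:
        assert process_batch(items) == [i * 2 for i in items]
    else:
        try:
            process_batch(items)
            assert False, "expected ValueError for n+1 passes"
        except ValueError:
            pass   # the loop bound is enforced as expected
```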
Black-Box Testing

[Figure: black-box testing derives tests from requirements, exercising input events and observing output.]

39
Black-box testing
 Black-box testing, also called behavioral testing, focuses on the
functional requirements of the software.
- Black-box testing techniques enable you to derive sets of input conditions
that will fully exercise all functional requirements for a program.
 Black-box testing is not an alternative to white-box techniques.
Rather, it is a complementary approach that is likely to uncover a
different class of errors than white-box methods.
 Black-box testing attempts to find errors in the following
categories:
 (1) incorrect or missing functions, (2) interface errors, (3) errors
in data structures or external database access, (4) behavior or
performance errors, and (5) initialization and termination errors.

40
Black-Box Testing
 How is functional validity tested?
 How is system behavior and performance tested?
 What classes of input will make good test cases?
 Is the system particularly sensitive to certain input
values?
 How are the boundaries of a data class isolated?
 What data rates and data volume can the system
tolerate?
 What effect will specific combinations of data have on
system operation?

41
Graph-Based Methods
Step 1:
Understand the objects that are modeled in software and the
relationships that connect these objects.
Step 2:
Define a series of tests that verify that all objects have the expected
relationships to one another.

Software testing begins by creating a graph of important
objects and their relationships and then devising a series of
tests that will cover the graph so that each object and
relationship is exercised and errors are uncovered.

42
Black-Box Testing
 Graph-Based Testing Methods
 Equivalence Partitioning
 Boundary Value Analysis
 Orthogonal Array Testing

Graph-Based Methods
 Begin by creating a graph: a collection of
 nodes that represent objects,
 links that represent the relationships between objects,
 node weights that describe the properties of a node
(e.g., a specific data value or state behavior), and
 link weights that describe some characteristic of a link.

44
Graph-Based Methods
 Nodes are represented as circles connected by links
that take a number of different forms.
 A directed link (represented by an arrow) indicates that
a relationship moves in only one direction.
 A bidirectional link, also called a symmetric link, implies
that the relationship applies in both directions.
 Parallel links are used when a number of different
relationships are established between graph nodes.

45
Graph for a word-processing application

[Figure (a), notation: object #1 connected to object #2 by a directed link with a link weight; a node weight on object #2; an undirected link and parallel links to object #3.]

[Figure (b), example: a menu select on newFile generates a documentWindow (generation time < 1.0 sec); documentWindow contains a documentText with attributes background color: white and text color: default color or preferences; newFile is symmetrically related to documentText; parallel links connect documentWindow and documentText.]

46
Graph for a word-processing application
 a menu select on newFile generates a document window.
 The node weight of documentWindow provides a list of
the window attributes that are to be expected when the
window is generated.
 The link weight indicates that the window must be generated
in less than 1.0 second.
 An undirected link establishes a symmetric relationship
between the newFile menu selection and documentText
and
 parallel links indicate relationships between
documentWindow and documentText

47
Graphs are used in…
 Beizer [Bei95] describes a number of behavioral testing
methods that can make use of graphs:
 Transaction flow modeling (e.g., a flightInformationInput transaction)
 Finite state modeling
 Data flow modeling
 Timing modeling

48
Equivalence partitioning
 Equivalence partitioning is a black-box testing method
that divides the input domain of a program into classes
of data from which test cases can be derived.
 Test-case design for equivalence partitioning is based
on an evaluation of equivalence classes for an input
condition.
 if a set of objects can be linked by relationships that are
symmetric, transitive, and reflexive, an equivalence
class is present.
 An equivalence class represents a set of valid or invalid
states for input conditions.
49
Equivalence Partitioning

[Figure: the input domain of a program includes user queries, mouse picks, output formats, prompts, FK (function-key) input, and data. Example: an airline ticket booking system.]

50
Equivalence classes
 Equivalence classes may be defined according to the
following guidelines:
 1. If an input condition specifies a range, one valid
and two invalid equivalence classes are defined.
 2. If an input condition requires a specific value, one
valid and two invalid equivalence classes are defined.
 3. If an input condition specifies a member of a set,
one valid and one invalid equivalence class are defined.
 4. If an input condition is Boolean, one valid and one
invalid class are defined.

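Guideline 1 can be sketched as code. The range 18..65 is a hypothetical "age" input; each class gets one representative value, which is all this method requires.

```python
# Equivalence classes for a hypothetical range input "age must be 18..65":
# one valid and two invalid classes, per guideline 1.
def partitions_for_range(lo, hi):
    return {
        "valid":        lambda v: lo <= v <= hi,   # inside the range
        "invalid_low":  lambda v: v < lo,          # below the range
        "invalid_high": lambda v: v > hi,          # above the range
    }

classes = partitions_for_range(18, 65)

# One representative test value per equivalence class is enough.
representatives = {"valid": 40, "invalid_low": 10, "invalid_high": 99}
for name, value in representatives.items():
    assert classes[name](value), f"{value} should fall in class {name}"
```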
Sample Equivalence Classes
Valid data
user supplied commands
responses to system prompts
file names
computational data
physical parameters
bounding values
initiation values
output data formatting
responses to error messages
graphical data (e.g., mouse picks)

Invalid data
data outside bounds of the program
physically impossible data
proper value supplied in wrong place
52
Boundary Value Analysis

[Figure: the same program inputs, with test cases chosen at the edges of both the input domain and the output domain.]

53
Boundary value analysis
 Boundary value analysis leads to a selection of
test cases that exercise bounding values.
 Boundary value analysis is a test-case design
technique that complements equivalence
partitioning.
 Rather than selecting any element of an
equivalence class, BVA leads to the selection
of test cases at the “edges” of the class.
 Rather than focusing solely on input conditions,
BVA also derives test cases from the output domain.

54
Guidelines for BVA
 Guidelines for BVA are similar in many respects to
those provided for equivalence partitioning:
 1. If an input condition specifies a range bounded
by values a and b, test cases should be designed with
values a and b and just above and just below a and b.
 2. If an input condition specifies a number of
values, test cases should be developed that exercise
the minimum and maximum numbers. Values just
above and below minimum and maximum are also
tested.

55
Guidelines for BVA
 3. Apply guidelines 1 and 2 to output conditions. For
example, assume that a temperature versus pressure
table is required as output from an engineering analysis
program. Test cases should be designed to create an
output report that produces the maximum (and
minimum) allowable number of table entries.
 4. If internal program data structures have
prescribed boundaries (e.g., a table has a defined
limit of 100 entries), be certain to design a test case to
exercise the data structure at its boundary.

56
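Guideline 1 for BVA, sketched for an integer range bounded by a and b; the step of 1 is an assumption for integer inputs.

```python
# BVA test values for a range [a, b]: a and b themselves, plus the
# values just below and just above each bound.
def bva_values(a, b, step=1):
    return [a - step, a, a + step, b - step, b, b + step]

print(bva_values(18, 65))   # -> [17, 18, 19, 64, 65, 66]
```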
Orthogonal array testing
 Orthogonal array testing can be applied to problems in
which the input domain is relatively small but too large to
accommodate exhaustive testing.
 The orthogonal array testing method is particularly useful
in finding region faults—an error category associated with
faulty logic within a software component.

Orthogonal Array Testing
 Used when the number of input parameters is
small and the values that each of the
parameters may take are clearly bounded

[Figure: varying one input item at a time vs. an L9 orthogonal array, shown over three parameters X, Y, and Z.]

58
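A sketch of how an L9 orthogonal array can be constructed for four parameters with three levels each, using the standard Latin-square construction; the assertions check the pairwise-coverage property that makes the array orthogonal.

```python
from itertools import product

# OA(9, 4, 3, 2): 9 rows, 4 columns (parameters), 3 levels, strength 2.
# Row (i, j) -> [i, j, (i + j) % 3, (i + 2*j) % 3].
l9 = [[i, j, (i + j) % 3, (i + 2 * j) % 3] for i, j in product(range(3), repeat=2)]

# Orthogonality: any two columns together cover all 3 x 3 level pairs.
for c1 in range(4):
    for c2 in range(c1 + 1, 4):
        pairs = {(row[c1], row[c2]) for row in l9}
        assert len(pairs) == 9

print(len(l9))   # 9 test configurations instead of 3**4 = 81 exhaustive ones
```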
Model-Based Testing
 Analyze an existing behavioral model for the
software or create one.
 Recall that a behavioral model indicates how
software will respond to external events or stimuli.
 Traverse the behavioral model and specify the
inputs that will force the software to make the
transition from state to state.
 The inputs will trigger events that will cause the
transition to occur.
 Review the behavioral model and note the
expected outputs as the software makes the
transition from state to state.
 Execute the test cases.
 Compare actual and expected results and take
corrective action as required.

59
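The steps above can be sketched as follows. The ATM-like behavioral model is hypothetical, and the "system under test" is stood in for by the model itself; in practice the derived cases would drive the real software.

```python
# Hypothetical behavioral model: (state, event) -> next state.
model = {
    ("idle", "insert_card"):    "awaiting_pin",
    ("awaiting_pin", "pin_ok"): "menu",
    ("awaiting_pin", "pin_bad"): "idle",
    ("menu", "eject"):          "idle",
}

def derive_test_cases(model):
    """Traverse the model: one (state, event, expected next state) per transition."""
    return [(state, event, target) for (state, event), target in model.items()]

def run_case(model, case):
    state, event, expected = case
    actual = model[(state, event)]   # stand-in for driving the real system
    assert actual == expected, f"{state} --{event}--> {actual}, expected {expected}"

# Execute the test cases and compare actual vs. expected results.
for case in derive_test_cases(model):
    run_case(model, case)
print(f"{len(derive_test_cases(model))} transitions exercised")
```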
Software Testing Patterns
 Testing patterns are described in much
the same way as design patterns (Chapter
12).
 Example:
• Pattern name: ScenarioTesting
• Abstract: Once unit and integration tests have
been conducted, there is a need to determine
whether the software will perform in a manner
that satisfies users. The ScenarioTesting
pattern describes a technique for exercising the
software from the user’s point of view. A failure
at this level indicates that the software has failed
to meet a user visible requirement. [Kan01]

60
Strategic Approach
 To perform effective testing, you should conduct
effective technical reviews. By doing this, many
errors will be eliminated before testing commences.
 Testing begins at the component level and works
"outward" toward the integration of the entire
computer-based system.
 Different testing techniques are appropriate for
different software engineering approaches and at
different points in time.
 Testing is conducted by the developer of the
software and (for large projects) an independent
test group.
 Testing and debugging are different activities, but
debugging must be accommodated in any testing
strategy.

61
V&V
 Verification refers to the set of tasks that
ensure that software correctly implements
a specific function.
 Validation refers to a different set of tasks
that ensure that the software that has been
built is traceable to customer
requirements. Boehm [Boe81] states this
another way:
 Verification: "Are we building the product
right?"
 Validation: "Are we building the right product?"

62
Who Tests the Software?

developer: understands the system but will test "gently," and is driven by "delivery"
independent tester: must learn about the system, will attempt to break it, and is driven by quality

63
Testing Strategy

[Figure: the testing strategy as a spiral: code generation pairs with unit testing, design modeling with integration testing, analysis modeling with validation testing, and system engineering with system testing.]

64
Testing Strategy
 We begin by ‘testing-in-the-small’ and move
toward ‘testing-in-the-large’
 For conventional software
 The module (component) is our initial focus
 Integration of modules follows
 For OO software
 our focus when “testing in the small” changes from
an individual module (the conventional view) to an
OO class that encompasses attributes and
operations and implies communication and
collaboration

65
Strategic Issues
 Specify product requirements in a quantifiable manner
long before testing commences.
 State testing objectives explicitly.
 Understand the users of the software and develop a
profile for each user category.
 Develop a testing plan that emphasizes “rapid cycle
testing.”
 Build “robust” software that is designed to test itself
 Use effective technical reviews as a filter prior to testing
 Conduct technical reviews to assess the test strategy
and test cases themselves.
 Develop a continuous improvement approach for the
testing process.

66
Unit Testing

[Figure: the software engineer applies test cases to the module to be tested and evaluates the results.]

67
Integration Testing Strategies
Options:
• the “big bang” approach
• an incremental construction strategy

68
Top Down Integration

[Figure: module hierarchy with A at the top; B, F, and G below A; C below B; D and E below C.]

 the top module is tested with stubs
 stubs are replaced one at a time, "depth first"
 as new modules are integrated, some subset of tests is re-run

69
Bottom-Up Integration

[Figure: the same module hierarchy; D and E form a cluster under C.]

 drivers are replaced one at a time, "depth first"
 worker modules are grouped into builds and integrated

70
Regression Testing
 Regression testing is the re-execution of some
subset of tests that have already been conducted
to ensure that changes have not propagated
unintended side effects
 Whenever software is corrected, some aspect of
the software configuration (the program, its
documentation, or the data that support it) is
changed.
 Regression testing helps to ensure that changes
(due to testing or for other reasons) do not
introduce unintended behavior or additional
errors.
 Regression testing may be conducted manually,
by re-executing a subset of all test cases or using
automated capture/playback tools.
71
Smoke Testing
 A common approach for creating “daily builds” for product
software
 Smoke testing steps:
 Software components that have been translated into code are
integrated into a “build.”
• A build includes all data files, libraries, reusable modules, and engineered
components that are required to implement one or more product
functions.
 A series of tests is designed to expose errors that will keep the build
from properly performing its function.
• The intent should be to uncover “show stopper” errors that have the
highest likelihood of throwing the software project behind schedule.
 The build is integrated with other builds and the entire product (in its
current form) is smoke tested daily.
• The integration approach may be top down or bottom up.

72
Sandwich Testing

[Figure: the same module hierarchy; testing converges from the top and the bottom.]

 top modules are tested with stubs
 worker modules are grouped into builds and integrated

73
Object-Oriented Testing
 begins by evaluating the correctness and
consistency of the analysis and design models
 testing strategy changes
 the concept of the ‘unit’ broadens due to
encapsulation
 integration focuses on classes and their execution
across a ‘thread’ or in the context of a usage
scenario
 validation uses conventional black box methods
 test case design draws on conventional
methods, but also encompasses special
features

74
Testing the CRC Model
1. Revisit the CRC model and the object-relationship
model.
2. Inspect the description of each CRC index card to
determine if a delegated responsibility is part of the
collaborator’s definition.
3. Invert the connection to ensure that each collaborator
that is asked for service is receiving requests from a
reasonable source.
4. Using the inverted connections examined in step 3,
determine whether other classes might be required or
whether responsibilities are properly grouped among the
classes.
5. Determine whether widely requested responsibilities
might be combined into a single responsibility.
6. Steps 1 to 5 are applied iteratively to each class and
through each evolution of the analysis model.
75
OO Testing Strategy
 class testing is the equivalent of unit testing
 operations within the class are tested
 the state behavior of the class is examined
 integration applied three different strategies
 thread-based testing—integrates the set of
classes required to respond to one input or event
 use-based testing—integrates the set of classes
required to respond to one use case
 cluster testing—integrates the set of classes
required to demonstrate one collaboration

76
High Order Testing
 Validation testing
 Focus is on software requirements
 System testing
 Focus is on system integration
 Alpha/Beta testing
 Focus is on customer usage
 Recovery testing
 forces the software to fail in a variety of ways and verifies that recovery is
properly performed
 Security testing
 verifies that protection mechanisms built into a system will, in fact, protect it
from improper penetration
 Stress testing
 executes a system in a manner that demands resources in abnormal quantity,
frequency, or volume
 Performance Testing
 test the run-time performance of software within the context of an integrated
system

77
Debugging: A Diagnostic Process

78
The Debugging Process

79
Debugging Effort

[Chart: debugging effort splits between the time required to diagnose the symptom and determine the cause, and the time required to correct the error and conduct regression tests.]

80
Symptoms & Causes

 symptom and cause may be geographically separated
 symptom may disappear when another problem is fixed
 cause may be due to a combination of non-errors
 cause may be due to a system or compiler error
 cause may be due to assumptions that everyone believes
 symptom may be intermittent

81
Consequences of Bugs

[Chart: bug damage ranges from mild, annoying, disturbing, serious, extreme, and catastrophic to infectious, plotted against bug type.]

 Bug categories: function-related bugs,
system-related bugs, data bugs, coding bugs,
design bugs, documentation bugs, standards
violations, etc.
82
Debugging Techniques
 brute force / testing
 backtracking
 induction
 deduction
83
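The backtracking technique, working backward from the symptom toward the cause, is the idea behind change-bisection tools such as git bisect. The sketch below is a hypothetical illustration: the "changes" are just integers, and the invented predicate says a build is bad once the change that introduced the defect has been applied. Binary search then localizes the first bad change in O(log n) test runs:

```python
def first_bad_change(n_changes, is_bad):
    """Return the index of the first change for which is_bad(i) is True.

    Assumes the history is ordered: once a change is bad, all later
    ones are too (the defect stays in until it is found and fixed).
    """
    lo, hi = 0, n_changes - 1
    while lo < hi:
        mid = (lo + hi) // 2
        if is_bad(mid):
            hi = mid          # defect introduced at mid or earlier
        else:
            lo = mid + 1      # defect introduced after mid
    return lo

BUG_AT = 6  # hypothetical: change #6 introduced the defect
print(first_bad_change(10, lambda i: i >= BUG_AT))
```

Each call to `is_bad` corresponds to rebuilding and re-running the failing test at that point in the history.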
Correcting the Error
 Is the cause of the bug reproduced in another part of the
program? In many situations, a program defect is caused
by an erroneous pattern of logic that may be reproduced
elsewhere.
 What "next bug" might be introduced by the fix I'm about
to make? Before the correction is made, the source code
(or, better, the design) should be evaluated to assess
coupling of logic and data structures.
 What could we have done to prevent this bug in the first
place? This question is the first step toward establishing
a statistical software quality assurance approach. If you
correct the process as well as the product, the bug will
be removed from the current program and may be
eliminated from all future programs.

84
Chapter 14
 Quality Concepts
Slide Set to accompany
Software Engineering: A Practitioner’s Approach, 7/e
by Roger S. Pressman


85
Software Quality
 In 2005, ComputerWorld [Hil05] lamented that
 “bad software plagues nearly every organization that uses
computers, causing lost work hours during computer
downtime, lost or corrupted data, missed sales opportunities,
high IT support and maintenance costs, and low customer
satisfaction.”
 A year later, InfoWorld [Fos06] wrote about
 “the sorry state of software quality,” reporting that the quality
problem had not gotten any better.
 Today, software quality remains an issue, but who is to
blame?
 Customers blame developers, arguing that sloppy practices
lead to low-quality software.
 Developers blame customers (and other stakeholders),
arguing that irrational delivery dates and a continuing stream
of changes force them to deliver software before it has been
fully validated.

86
Quality
 The American Heritage Dictionary defines
quality as
 “a characteristic or attribute of something.”
 For software, two kinds of quality may be
encountered:
 Quality of design encompasses requirements,
specifications, and the design of the system.
 Quality of conformance is an issue focused primarily
on implementation.
 User satisfaction = compliant product + good quality
+ delivery within budget and schedule

87
Quality—A Philosophical View
 Robert Pirsig [Per74] commented on the thing
we call quality:
 Quality . . . you know what it is, yet you don't know what it is.
But that's self-contradictory. But some things are better than
others, that is, they have more quality. But when you try to say
what the quality is, apart from the things that have it, it all goes
poof! There's nothing to talk about. But if you can't say what
Quality is, how do you know what it is, or how do you know that
it even exists? If no one knows what it is, then for all practical
purposes it doesn't exist at all. But for all practical purposes it
really does exist. What else are the grades based on? Why else
would people pay fortunes for some things and throw others in
the trash pile? Obviously some things are better than others . . .
but what's the betterness? . . . So round and round you go,
spinning mental wheels and nowhere finding anyplace to get
traction. What the hell is Quality? What is it?

88
Quality—A Pragmatic View
 The transcendental view argues (like Pirsig) that
quality is something that you immediately recognize,
but cannot explicitly define.
 The user view sees quality in terms of an end-user’s
specific goals. If a product meets those goals, it
exhibits quality.
 The manufacturer’s view defines quality in terms of
the original specification of the product. If the
product conforms to the spec, it exhibits quality.
 The product view suggests that quality can be tied to
inherent characteristics (e.g., functions and features)
of a product.
 Finally, the value-based view measures quality based
on how much a customer is willing to pay for a
product. In reality, quality encompasses all of these
views and more.

89
Software Quality
 Software quality can be defined as:
 An effective software process applied in a
manner that creates a useful product that
provides measurable value for those who
produce it and those who use it.
 This definition has been adapted from [Bes04] and
replaces a more manufacturing-oriented view
presented in earlier editions of this book.

90
Low quality results in
 High maintenance costs
 Low user satisfaction

91
Quality
 User satisfaction = quality + cost + time +
correctness

The first step towards quality is defining it!

92
A high-quality system is necessarily
measurable!

93
Quality from different
viewpoints
 End-user
 Product
 Producer
 Maintainer
 Value-based view

94
Effective Software Process
 An effective software process establishes the
infrastructure that supports any effort at building a
high quality software product.
 The management aspects of process create the
checks and balances that help avoid project chaos—
a key contributor to poor quality.
 Software engineering practices allow the developer
to analyze the problem and design a solid solution—
both critical to building high quality software.
 Finally, umbrella activities such as change
management and technical reviews have as much to
do with quality as any other part of software
engineering practice.

95
Useful Product
 A useful product delivers the content,
functions, and features that the end-
user desires
 Just as important, it delivers these assets
in a reliable, error-free way.
 A useful product always satisfies those
requirements that have been explicitly
stated by stakeholders.
 In addition, it satisfies a set of implicit
requirements (e.g., ease of use) that are
expected of all high quality software.
96
Adding Value
 By adding value for both the producer and user of a
software product, high quality software provides
benefits for the software organization and the end-user
community.
 The software organization gains added value because
high quality software requires less maintenance effort,
fewer bug fixes, and reduced customer support.
 The user community gains added value because the
application provides a useful capability in a way that
expedites some business process.
 The end result is:
 (1) greater software product revenue,
 (2) better profitability when an application supports a
business process, and/or
 (3) improved availability of information that is crucial for
the business.

97
Quality Dimensions
 To determine quality in a system, the quality
dimensions of the system must be defined

98
Quality Dimensions
 David Garvin [Gar87]:
 Performance Quality. Does the software deliver all
content, functions, and features that are specified as
part of the requirements model in a way that provides
value to the end-user?
 Feature quality. Does the software provide features
that surprise and delight first-time end-users?
 Reliability. Does the software deliver all features and
capability without failure? Is it available when it is
needed? Does it deliver functionality that is error
free?
 Conformance. Does the software conform to local and
external software standards that are relevant to the
application? Does it conform to de facto design and
coding conventions? For example, does the user
interface conform to accepted design rules for menu
selection or data input?

99
Quality Dimensions
 Durability. Can the software be maintained
(changed) or corrected (debugged) without the
inadvertent generation of unintended side effects?
Will changes cause the error rate or reliability to
degrade with time?
 Serviceability. Can the software be maintained
(changed) or corrected (debugged) in an acceptably
short time period? Can support staff acquire all
information they need to make changes or correct
defects?
 Aesthetics. Most of us would agree that an aesthetic entity has a
certain elegance, a unique flow, and an obvious “presence” that are
hard to quantify but evident nonetheless.
 Perception. In some situations, you have a set of prejudices that
will influence your perception of quality.

100
Other Views
 McCall’s Quality Factors (SEPA,
Section 14.2.2)
 ISO 9126 Quality Factors (SEPA,
Section 14.2.3)
 Targeted Factors (SEPA, Section
14.2.4)

101
The Software Quality Dilemma
 If you produce a software system that has terrible
quality, you lose because no one will want to buy it.
 If on the other hand you spend infinite time,
extremely large effort, and huge sums of money to
build the absolutely perfect piece of software, then
it's going to take so long to complete and it will be
so expensive to produce that you'll be out of
business anyway.
 Either you missed the market window, or you simply
exhausted all your resources.
 So people in industry try to get to that magical
middle ground where the product is good enough
not to be rejected right away, such as during
evaluation, but also not the object of so much
perfectionism and so much work that it would take
too long or cost too much to complete. [Ven03]

102
“Good Enough” Software
 Good enough software delivers high quality functions and
features that end-users desire, but at the same time it
delivers other more obscure or specialized functions and
features that contain known bugs.
 Arguments against “good enough.”
 It is true that “good enough” may work in some application
domains and for a few major software companies. After all, if a
company has a large marketing budget and can convince enough
people to buy version 1.0, it has succeeded in locking them in.
 If you work for a small company be wary of this philosophy. If
you deliver a “good enough” (buggy) product, you risk
permanent damage to your company’s reputation.
 You may never get a chance to deliver version 2.0 because bad
buzz may cause your sales to plummet and your company to fold.
 If you work in certain application domains (e.g., real-time
embedded software, or application software that is integrated
with hardware), delivering “good enough” software can be
negligent and open your company to expensive litigation.

103
Cost of Quality
 Prevention costs include
 quality planning
 formal technical reviews
 test equipment
 Training
 Internal failure costs include
 rework
 repair
 failure mode analysis
 External failure costs are
 complaint resolution
 product return and replacement
 help line support
 warranty work

104
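The cost-of-quality categories above can be tallied to see where quality money actually goes. All figures in this sketch are invented for illustration:

```python
# Hypothetical cost-of-quality ledger for one project (figures invented).
costs = {
    "prevention": {"quality planning": 10_000,
                   "formal technical reviews": 25_000,
                   "training": 8_000},
    "internal failure": {"rework": 40_000,
                         "repair": 15_000},
    "external failure": {"complaint resolution": 30_000,
                         "warranty work": 60_000},
}

# Subtotal per category, then the total cost of quality.
totals = {category: sum(items.values()) for category, items in costs.items()}
total_coq = sum(totals.values())

print(totals)
print("total cost of quality:", total_coq)
```

In this invented ledger, external failure dominates, which matches the slide that follows: costs rise dramatically the later a defect is caught.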
Cost
 The relative costs to find and repair an error or defect
increase dramatically as we go from prevention to
detection to internal failure to external failure costs.

105
Quality and Decisions
 Quality depends on the decisions made while
developing the project
 Estimation decisions
 Scheduling decisions
 Risk-oriented decisions
 ,…

106
Quality and Risk
 “People bet their jobs, their comforts, their safety,
their entertainment, their decisions, and their very
lives on computer software. It better be right.”
SEPA, Chapter 1
 Example:
 Throughout the month of November 2000, at a
hospital in Panama, 28 patients received massive
overdoses of gamma rays during treatment for a
variety of cancers. In the months that followed, five
of these patients died from radiation poisoning and
15 others developed serious complications. What
caused this tragedy? A software package, developed
by a U.S. company, was modified by hospital
technicians to compute modified doses of radiation
for each patient.

107
Negligence and Liability
 The story is all too common. A governmental or
corporate entity hires a major software developer or
consulting company to analyze requirements and
then design and construct a software-based “system”
to support some major activity.
 The system might support a major corporate function
(e.g., pension management) or some governmental
function (e.g., healthcare administration or homeland
security).
 Work begins with the best of intentions on both sides,
but by the time the system is delivered, things have
gone bad.
 The system is late, fails to deliver desired features
and functions, is error-prone, and does not meet with
customer approval.
 Litigation ensues.

108
Quality and Security
 Gary McGraw comments [Wil05]:
 “Software security relates entirely and completely
to quality. You must think about security, reliability,
availability, dependability—at the beginning, in the
design, architecture, test, and coding phases, all
through the software life cycle [process]. Even
people aware of the software security problem have
focused on late life-cycle stuff. The earlier you find
the software problem, the better. And there are two
kinds of software problems. One is bugs, which are
implementation problems. The other is software
flaws—architectural problems in the design. People
pay too much attention to bugs and not enough on
flaws.”

109
Quality and Security
 Two kinds of software problems
 Bugs: implementation problems
 Flaws: architectural problems in the design

 Most attention goes to bugs (implementation),
not enough to flaws (design)
110
Achieving Software Quality
 Critical success factors:
 Software Engineering Methods
 Project Management Techniques
 Quality Control
 Quality Assurance

111
There is never enough time to
do it right, but always time to
do it again

112
Chapter 16
 Software Quality Assurance
Slide Set to accompany
Software Engineering: A Practitioner’s Approach, 7/e
by Roger S. Pressman


113
Comment on Quality
 Phil Crosby once said:
 The problem of quality management is not what people
don't know about it. The problem is what they think
they do know . . . In this regard, quality has much in
common with sex.
 Everybody is for it. (Under certain conditions, of
course.)
 Everyone feels they understand it. (Even though they
wouldn't want to explain it.)
 Everyone thinks execution is only a matter of following
natural inclinations. (After all, we do get along
somehow.)
 And, of course, most people feel that problems in these
areas are caused by other people. (If only they would
take the time to do things right.)

114
Elements of SQA
 Standards
 Reviews and Audits
 Testing
 Error/defect collection and analysis
 Change management
 Education
 Vendor management
 Security management
 Safety
 Risk management

115
Role of the SQA Group-I
 Prepares an SQA plan for a project.
 The plan identifies
• evaluations to be performed
• audits and reviews to be performed
• standards that are applicable to the project
• procedures for error reporting and tracking
• documents to be produced by the SQA group
• amount of feedback provided to the software project team
 Participates in the development of the project’s software
process description.
 The SQA group reviews the process description for compliance
with organizational policy, internal software standards, externally
imposed standards (e.g., ISO-9001), and other parts of the
software project plan.

116
Role of the SQA Group-II
 Reviews software engineering activities to verify
compliance with the defined software process.
 identifies, documents, and tracks deviations from the process and
verifies that corrections have been made.
 Audits designated software work products to verify
compliance with those defined as part of the software
process.
 reviews selected work products; identifies, documents, and tracks
deviations; verifies that corrections have been made
 periodically reports the results of its work to the project manager.
 Ensures that deviations in software work and work
products are documented and handled according to a
documented procedure.
 Records any noncompliance and reports to senior
management.
 Noncompliance items are tracked until they are resolved.

117
SQA Goals (see Figure 16.1)
 Requirements quality. The correctness,
completeness, and consistency of the requirements
model will have a strong influence on the quality of
all work products that follow.
 Design quality. Every element of the design model
should be assessed by the software team to ensure
that it exhibits high quality and that the design
itself conforms to requirements.
 Code quality. Source code and related work
products (e.g., other descriptive information) must
conform to local coding standards and exhibit
characteristics that will facilitate maintainability.
 Quality control effectiveness. A software team
should apply limited resources in a way that has the
highest likelihood of achieving a high quality result.

118
Statistical SQA
Product & process measurement:
 collect information on all defects
 find the causes of the defects
 move to provide fixes for the process
... leading to an understanding of how to improve quality
119
Statistical SQA
 Information about software errors and defects is
collected and categorized.
 An attempt is made to trace each error and defect
to its underlying cause (e.g., non-conformance to
specifications, design error, violation of standards,
poor communication with the customer).
 Using the Pareto principle (80 percent of the
defects can be traced to 20 percent of all possible
causes), isolate the 20 percent (the vital few).
 Once the vital few causes have been identified,
move to correct the problems that have caused the
errors and defects.

120
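The Pareto step above can be sketched in a few lines. The defect log and cause categories here are invented for illustration; the mechanism (rank causes by frequency, take the most frequent until ~80% of defects are covered) is the point:

```python
from collections import Counter

# Hypothetical defect log: each entry names the underlying cause category.
defects = (["incomplete spec"] * 48 +
           ["design error"] * 32 +
           ["standards violation"] * 10 +
           ["customer miscommunication"] * 6 +
           ["other"] * 4)

counts = Counter(defects)
total = sum(counts.values())

# Walk causes from most to least frequent until ~80% of defects are
# covered; those causes are the "vital few" to attack first.
vital_few, covered = [], 0
for cause, n in counts.most_common():
    if covered >= 0.8 * total:
        break
    vital_few.append(cause)
    covered += n

print(vital_few)
```

In this invented log, two of the five cause categories account for 80% of all defects, which is exactly the 80/20 shape the Pareto principle predicts.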
Six-Sigma for Software Engineering
 The term “six sigma” is derived from six standard deviations—
3.4 instances (defects) per million occurrences—implying an
extremely high quality standard.
 The Six Sigma methodology defines three core steps:
 Define customer requirements and deliverables and project goals
via well-defined methods of customer communication
 Measure the existing process and its output to determine current
quality performance (collect defect metrics)
 Analyze defect metrics and determine the vital few causes.
 When an existing process is being improved, two additional
steps complete DMAIC (define, measure, analyze, improve, control):
 Improve the process by eliminating the root causes of defects.
 Control the process to ensure that future work does not
reintroduce the causes of defects.

121
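The "3.4 defects per million" figure is usually stated as DPMO (defects per million opportunities). A small sketch with invented measurement figures:

```python
def dpmo(defects, units, opportunities_per_unit):
    """Defects per million opportunities, the Six Sigma yardstick."""
    return defects / (units * opportunities_per_unit) * 1_000_000

# Hypothetical measurement: 17 defects found across 500 delivered
# modules, each offering 10 opportunities for a defect.
result = dpmo(17, 500, 10)   # approximately 3400 DPMO
print(result)
```

A process at roughly 3400 DPMO is far from the 3.4 DPMO six-sigma target; the "measure" step produces numbers like this for the "analyze" step to act on.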
Software Reliability
 A simple measure of reliability is mean-time-
between-failure (MTBF), where
MTBF = MTTF + MTTR
 The acronyms MTTF and MTTR are mean-
time-to-failure and mean-time-to-repair,
respectively.
 Software availability is the probability that a
program is operating according to requirements
at a given point in time and is defined as
Availability = [MTTF/(MTTF + MTTR)] x 100%

122
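The two formulas above translate directly into code. The failure and repair figures below are invented for illustration:

```python
def mtbf(mttf_hours, mttr_hours):
    """MTBF = MTTF + MTTR, per the slide's definition."""
    return mttf_hours + mttr_hours

def availability(mttf_hours, mttr_hours):
    """Availability = [MTTF / (MTTF + MTTR)] x 100%."""
    return mttf_hours / (mttf_hours + mttr_hours) * 100

# Hypothetical figures: the system runs 980 hours between failures
# on average, and each failure takes 20 hours to repair.
print(mtbf(980, 20))            # 1000 hours between failures
print(availability(980, 20))    # approximately 98% available
```

Note that two systems with the same MTBF can have very different availability: shrinking MTTR (faster repair) raises availability even when MTTF is unchanged.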
Software Safety
 Software safety is a software quality assurance
activity that focuses on the identification and
assessment of potential hazards that may
affect software negatively and cause an entire
system to fail.
 If hazards can be identified early in the
software process, software design features can
be specified that will either eliminate or control
potential hazards.

123
ISO 9001:2000 Standard
 ISO 9001:2000 is the quality assurance standard that
applies to software engineering.
 The standard contains 20 requirements that must be
present for an effective quality assurance system.
 The requirements delineated by ISO 9001:2000 address
topics such as
 management responsibility, quality system, contract
review, design control, document and data control, product
identification and traceability, process control, inspection
and testing, corrective and preventive action, control of
quality records, internal quality audits, training, servicing,
and statistical techniques.

124

You might also like