
CSE1005: Software Engineering

Module 4
Software Product and Process Metrics

SCOPE, VIT-AP University


Introduction
“When you cannot measure, your knowledge is of a meagre and unsatisfactory kind.”
-- Lord Kelvin (1824 - 1907)
British mathematician & physicist

• Unfortunately, unlike other engineering disciplines, software engineering is not grounded in the basic quantitative laws of physics.
• Direct measures, such as voltage, mass, velocity, or temperature, are uncommon
in the software world. Because software measures and metrics are often indirect,
they are open to debate.
• The next few slides present measures that can be used to assess the quality of the product as it is being engineered.

2
Outline
• A Framework for Product Metrics
• Metrics for the Requirements Model
• Metrics for the Design Model
• Metrics for Source Code, Testing, and Maintenance*
• Metrics for Software Quality

4
A Framework for Product Metrics
• Measures, Metrics, and Indicators (in the current context)
• Measure – provides a quantitative indication of the extent, amount, dimension,
capacity, or size of some attribute of a product or process
• Measurement – the act of determining a measure
• Metric – “a quantitative measure of the degree to which a system, component, or process possesses a given attribute.” – [IEEE Standard Glossary of Software Engineering Terminology]

Illustration:
• When a single data point has been collected (e.g., the number of errors uncovered within a single
software component), a measure has been established
• Measurement occurs as the result of the collection of one or more data points (e.g., a number of
component reviews and unit tests are investigated to collect measures of the number of errors for
each).
• A software metric relates the individual measures in some way (e.g., the average number of errors
found per review or the average number of errors found per unit test)

5
A Framework for Product Metrics
• Measures, Metrics, and Indicators (in the current context)
• Measure – provides a quantitative indication of the extent, amount,
dimension, capacity, or size of some attribute of a product or process
• Measurement – the act of determining a measure
• Metric – “a quantitative measure of the degree to which a system, component, or process possesses a given attribute.” – [IEEE Standard Glossary of Software Engineering Terminology]

A software engineer collects measures and develops metrics so that indicators will be obtained.
→An indicator is a metric or combination of metrics that provides insight into the software
process, a software project, or the product itself
→An indicator provides insight that enables the project manager or software engineers to
adjust the process, the project, or the product to make things better

6
A Framework for Product Metrics
• A series of product metrics exist that
• assist in the evaluation of the analysis and design models
• provide an indication of the complexity of procedural designs and source code
• facilitate the design of more effective testing
• However, before going into their details, it is important to understand basic
measurement principles

7
A Framework for Product Metrics
• Five activities to characterize measurement process [Roche]
• Formulation. The derivation of software measures and metrics appropriate for
the representation of the software that is being considered.
• Collection. The mechanism used to accumulate data required to derive the
formulated metrics.
• Analysis. The computation of metrics and the application of mathematical
tools.
• Interpretation. The evaluation of metrics resulting in insight into the quality of
the representation.
• Feedback. Recommendations derived from the interpretation of product
metrics transmitted to the software team.
Software metrics will be useful only if they are characterized effectively and
validated so that their worth is proven
8
A Framework for Product Metrics
• Representative principles for metrics characterization and validation
• A metric should have desirable mathematical properties (binary, range, ratio, …)
• E.g., if a metric is a range from 0 to 1, where 0 truly means absence, 1 indicates the
maximum value, and 0.5 represents the “halfway point”
• When a metric represents a software characteristic that increases when
positive traits occur or decreases when undesirable traits are encountered, the
value of the metric should increase or decrease in the same manner.
• Each metric should be validated empirically in a wide variety of contexts
before being published or used to make decisions

9
A Framework for Product Metrics
• The Attributes of Effective Software Metrics
• Simple and computable
• Empirically and intuitively persuasive
• Consistent and objective
• Consistent in its use of units and dimensions
• Programming language independent
• An effective mechanism for high-quality feedback

10
Metrics for the Requirements Model
• These metrics examine the requirements model with the intent of
predicting the “size” of the resultant system.

[Size is sometimes (but not always) an indicator of design complexity and is almost always an indicator of increased coding, integration, and testing effort]

11
Metrics for the Requirements Model
• Function-Based Metrics
• The function point (FP) metrics are used effectively as a means for measuring
the functionality delivered by a system
• Using historical data, the FP metrics are used to
1. estimate the cost or effort required to design, code, and test the software
2. predict the number of errors that will be encountered during testing
3. forecast the number of components and/or the number of projected source lines in
the implemented system
• Function points are derived using an empirical relationship based on
countable (direct) measures of software’s information domain and qualitative
assessments of software complexity

12
Metrics for the Requirements Model
• Function-Based Metrics
• Information domain values are defined in the following manner:
• Number of external inputs (EIs)
• Number of external outputs (EOs)
• Number of external inquiries (EQs)
• Number of internal logical files (ILFs)
• Number of external interface files (EIFs)

To compute function points (FP), the following relationship is used:
FP = count total ˣ [0.65 + 0.01 ˣ Ʃ (Fi)]
where count total is the sum of all FP entries obtained from the weighting table in the source figure (each information domain value multiplied by a complexity weighting factor).

[Image Source: Roger Pressman, “Software Engineering: A Practitioner’s Approach”, McGraw-Hill, 7th Edition, 2016]
13
Metrics for the Requirements Model
• Function-Based Metrics
• The Fi (i = 1 to 14) are value adjustment factors (VAF) based on responses to the following 14
questions:
1. Does the system require reliable backup and recovery?
2. Are specialized data communications required to transfer information to or from the application?
3. Are there distributed processing functions?
4. Is performance critical?
5. Will the system run in an existing, heavily utilized operational environment?
6. Does the system require online data entry?
7. Does the online data entry require the input transaction to be built over multiple screens or operations?
8. Are the ILFs updated online?
9. Are the inputs, outputs, files, or inquiries complex?
10. Is the internal processing complex?
11. Is the code designed to be reusable?
12. Are conversion and installation included in the design?
13. Is the system designed for multiple installations in different organizations?
14. Is the application designed to facilitate change and ease of use by the user?
Each of these questions is answered using a scale that ranges from 0 (not important or applicable) to 5 (absolutely essential).

14
Metrics for the Requirements Model
• Function-Based Metrics
• Example:
Assume the information domain values for the SafeHome user interaction function (taken from the worked figure in the source) give a count total of 50.
Now, if we assume that Ʃ (Fi) is 46 (a moderately complex product),
FP = 50 ˣ [0.65 + (0.01 ˣ 46)] = 56

Based on the projected FP value derived from the requirements model, the project team can estimate the overall implemented size of the SafeHome user interaction function.

Note: Function points can also be computed from UML class and sequence diagrams [Uemura et al 1999].
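
A minimal C sketch of this calculation. The per-value weights in unadjusted_count are the commonly used "average" complexity weights and are an assumption here (the weighting figure is not reproduced); the Fi values below are hypothetical and merely sum to 46 as on the slide.

#include <stdio.h>

/* Unadjusted count total: each information domain value multiplied by a
   weighting factor; the "average" complexity weights are assumed here. */
int unadjusted_count(int ei, int eo, int eq, int ilf, int eif) {
    return 4 * ei + 5 * eo + 4 * eq + 10 * ilf + 7 * eif;
}

/* Adjusted FP = count total x [0.65 + 0.01 x sum(Fi)], each Fi in 0..5 */
double function_points(int count_total, const int f[14]) {
    int sum_fi = 0;
    for (int i = 0; i < 14; i++) {
        sum_fi += f[i];
    }
    return count_total * (0.65 + 0.01 * sum_fi);
}

int main(void) {
    /* Hypothetical value adjustment factors that sum to 46, as on the slide */
    int f[14] = {3, 4, 3, 4, 3, 3, 3, 3, 4, 4, 3, 3, 3, 3};
    /* Using the slide's count total of 50 directly */
    printf("FP = %.1f\n", function_points(50, f));   /* 50 x 1.11 = 55.5, ~56 */
    return 0;
}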

15
Metrics for the Requirements Model
• Metrics for Specification Quality
• The list of characteristics that can be used to assess the quality of the requirements
model and the corresponding requirements specification [Davis et al. 1993]:
• Specificity (lack of ambiguity)
• Completeness
• Correctness
• Understandability
• Verifiability
• Internal and external consistency
• Achievability
• Concision
• Traceability
• Modifiability
• Precision
• Reusability

Although many of these characteristics appear to be qualitative in nature, Davis et al. suggest that each can be represented using one or more metrics. An example is presented in the next slide for illustration.

16
Metrics for the Requirements Model
• Metrics for Specification Quality
• Example:
• Suppose we assume that there are nr requirements in a specification, such that
nr = nf + nnf
where nf is the number of functional requirements and nnf is the number of
nonfunctional (e.g., performance) requirements
• To determine the specificity (lack of ambiguity) of requirements, Davis et al. suggest a
metric that is based on the consistency of the reviewers’ interpretation of each
requirement:
Q1 = nui / nr
where nui is the number of requirements for which all reviewers had identical
interpretations. The closer the value of Q1 to 1, the lower is the ambiguity of the
specification.
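
For illustration (hypothetical numbers, not from the source): if a specification contains nr = 80 requirements and all reviewers agreed on the interpretation of nui = 60 of them, then Q1 = 60 / 80 = 0.75; the remaining 25% of the requirements are ambiguous to some degree.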

17
Metrics for the Design Model

• Too often, we start designing without defining design measures, without determining metrics for various aspects of design quality, and without following design guidelines.
18
Metrics for the Design Model
• The irony of this is that design metrics for software are available, but the
vast majority of software engineers continue to be unaware of their
existence
• Design metrics for computer software, like all other software metrics, are
not perfect; but they obviously help us to measure and improve the quality
of software
• Architectural Design Metrics
• Metrics for Object-Oriented Design
• Class-Oriented Metrics—The CK Metrics Suite
• Component-Level Design Metrics
• Operation-Oriented Metrics
• User Interface Design Metrics

19
Metrics for the Design Model
• Architectural Design Metrics
• Focus on characteristics of the program architecture with an emphasis on the
architectural structure and the effectiveness of modules or components
within the architecture
• “black box” - they do not require any knowledge of the inner workings of a
particular software component
• Three software design complexity measures (Card and Glass, 1990):
• Structural complexity
• Data complexity
• System complexity

20
Metrics for the Design Model
• Architectural Design Metrics (cont…)
• Structural complexity
• For hierarchical architectures (e.g., call-and-return architectures), structural complexity
of a module i is defined as S(i) = [fout(i)]², where fout(i) is the fan-out of module i.
• Data complexity
• Provides an indication of the complexity in the internal interface for a module i and is
defined as D(i) = v(i) / (fout(i)+1), where v(i) is the number of input and output variables
that are passed to and from module i.
• System complexity
• Defined as the sum of structural and data complexity, specified as C(i) = S(i) + D(i)

➢ As each of these complexity values increases, the overall architectural complexity of the
system also increases.
➢ This leads to a greater likelihood that integration and testing effort will also increase.
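
A minimal C sketch of these three measures for a single module; the fan-out and variable counts are hypothetical.

#include <stdio.h>

int main(void) {
    int fout = 3;   /* fan-out of module i (hypothetical) */
    int v    = 8;   /* input and output variables passed to and from module i */

    double S = (double)fout * fout;      /* structural complexity: [fout(i)]^2     */
    double D = (double)v / (fout + 1);   /* data complexity: v(i) / (fout(i) + 1)  */
    double C = S + D;                    /* system complexity                      */

    printf("S = %.1f  D = %.1f  C = %.1f\n", S, D, C);   /* 9.0, 2.0, 11.0 */
    return 0;
}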
21
Metrics for the Design Model
• Architectural Design Metrics (cont…)
• Fenton suggests a number of simple morphology (i.e., shape) metrics that enable different
program architectures to be compared using a set of straightforward dimensions
• Size = n + a, where n is the number of nodes and a is the number of arcs → 17 + 18 = 35
• Depth = longest path from the root (top) node to a leaf node → 3
• Width = maximum number of nodes at any one level of the architecture → 6
• The arc-to-node ratio, r = a/n, measures the connectivity density of the architecture and may
provide a simple indication of the coupling of the architecture → 18 / 17 ≈ 1.06
(The values after the arrows refer to the example architecture figure in the source, which has 17 nodes and 18 arcs.)

22
Metrics for the Design Model
• Architectural Design Metrics (cont…)
• The U.S. Air Force Systems Command [USA87] has developed a number of software
quality indicators that are based on measurable design characteristics of a computer
program
S1 = total number of modules defined in the program architecture
S2 = number of modules whose correct function depends on the source of data input or that
produce data to be used elsewhere (in general, control modules, among others, would not be
counted as part of S2)
S3 = number of modules whose correct function depends on prior processing
S4 = number of database items (includes data objects and all attributes that define objects)
S5 = total number of unique database items
S6 = number of database segments (different records or individual objects)
S7 = number of modules with a single entry and exit (exception processing is not considered to
be a multiple exit)

23
Metrics for the Design Model
• Architectural Design Metrics (cont…)
• The U.S. Air Force Systems Command [USA87] has developed a number of software
quality indicators that are based on measurable design characteristics of a computer
program
Once values S1 through S7 are determined for a computer program, some intermediate values (next slide) can be
computed.

24
Metrics for the Design Model
• Architectural Design Metrics (cont…)
• The U.S. Air Force Systems Command [USA87] has developed a number of software
quality indicators that are based on measurable design characteristics of a computer
program
Program structure: D1, where D1 is defined as follows:
If the architectural design was developed using a distinct method (e.g., data flow-oriented design or object-oriented design), then D1 = 1, otherwise D1 = 0.
Module independence: D2 = 1 – (S2 / S1)
Modules not dependent on prior processing: D3 = 1 – (S3 / S1)
Database size: D4 = 1 – (S5 / S4)
Database compartmentalization: D5 = 1 – (S6 / S4)
Module entrance/exit characteristic: D6 = 1 – (S7 / S1)

With these intermediate values determined, the DSQI is computed in the following manner:
DSQI = Ʃ wiDi, where i = 1 to 6, wi is the relative weighting of the importance of each of the intermediate values,
and Ʃwi = 1 (if all Di are weighted equally, then wi = 0.167)

The value of DSQI for past designs can be determined and compared to a design that is currently under development. If the DSQI is significantly lower than average, further design work and review are indicated. Similarly, if major changes are to be made to an existing design, the effect of those changes on DSQI can be calculated.
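
A minimal C sketch of the DSQI computation; the S1–S7 counts are hypothetical and all six weights are set equal (1/6 each, i.e., roughly the 0.167 mentioned above).

#include <stdio.h>

int main(void) {
    /* Illustrative structural counts for a hypothetical design */
    double S1 = 120, S2 = 45, S3 = 30, S4 = 200, S5 = 170, S6 = 25, S7 = 90;

    double D[6];
    D[0] = 1.0;               /* D1: a distinct design method was used           */
    D[1] = 1.0 - S2 / S1;     /* D2: module independence                         */
    D[2] = 1.0 - S3 / S1;     /* D3: modules not dependent on prior processing   */
    D[3] = 1.0 - S5 / S4;     /* D4: database size                               */
    D[4] = 1.0 - S6 / S4;     /* D5: database compartmentalization               */
    D[5] = 1.0 - S7 / S1;     /* D6: module entrance/exit characteristic         */

    double w = 1.0 / 6.0;     /* equal weighting, sum of wi = 1 */
    double dsqi = 0.0;
    for (int i = 0; i < 6; i++) dsqi += w * D[i];

    printf("DSQI = %.3f\n", dsqi);
    return 0;
}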
25
Metrics for the Design Model
• Metrics for Object-Oriented Design
• Size – defined in terms of four views:
• Population – measured by taking a static count of OO entities such as classes
• Volume – identical to population measures, but collected dynamically (at a given instant of time)
• Length – a measure of a chain of interconnected design elements (e.g., the depth of an
inheritance tree is a measure of length)
• Functionality – provide an indirect indication of the value delivered to the customer by
an OO application
• Complexity – Examines how classes of an OO design are interrelated
(structural and logical)
• Coupling – The physical connections between elements of the OO design

26
Metrics for the Design Model
• Metrics for Object-Oriented Design (cont)
• Sufficiency - “the degree to which an abstraction possesses the features
required of it, or the degree to which a design component possesses features
in its abstraction, from the point of view of the current application.”
• Completeness – similar to sufficiency, but has an indirect implication about
the degree to which the abstraction or design component can be reused
• Cohesion – a component should be designed in a manner that has all
operations working together to achieve a single, well-defined purpose
• Primitiveness - the degree to which an operation is atomic
• Similarity – the degree to which two or more classes are similar in terms of
their structure, function, behavior, or purpose
• Volatility – the likelihood that a change will occur

27
Metrics for the Design Model
• Class-Oriented Metrics—The CK Metrics Suite
• Measures and metrics for an individual class, the class hierarchy, and class
collaborations
• Weighted methods per class (WMC) - Assume that n methods of complexity c1, c2, . . . , cn
are defined for a class C. The specific complexity metric that is chosen (e.g., cyclomatic
complexity) should be normalized so that nominal complexity for a method takes on a
value of 1.0.
WMC = Ʃci, for i = 1 to n (a short worked example follows this list)
• Depth of the inheritance tree (DIT) - the maximum length from the node to the root of
the tree
• Number of children (NOC) - The subclasses that are immediately subordinate to a class in
the class hierarchy are termed its children
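
Worked example for WMC (hypothetical values, not from the source): a class with five methods whose normalized cyclomatic complexities are 3, 1, 2, 5, and 1 has WMC = 3 + 1 + 2 + 5 + 1 = 12; larger values suggest the class will require more effort to test and maintain.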

28
Metrics for the Design Model
• Class-Oriented Metrics—The CK Metrics Suite (cont…)
• Measures and metrics for an individual class, the class hierarchy, and class
collaborations
• Coupling between object classes (CBO) - The number of collaborations listed for a class
on its CRC index card
• Response for a class (RFC) - a set of methods that can potentially be executed in
response to a message received by an object of that class
• Lack of cohesion in methods (LCOM) - the number of methods that access one or more of
the same attributes

29
Metrics for the Design Model
• Component-Level Design Metrics (3 Cs - cohesion, coupling, and complexity)
• Cohesion metrics
• Coupling metrics
• Complexity metrics

30
Metrics for the Design Model
• Component-Level Design Metrics (3 Cs - cohesion, coupling, and complexity)
• Cohesion metrics - defined in terms of five concepts and measures
• Data slice – a backward walk through a module that looks for data values that affect the
module location at which the walk began
• Data tokens – the variables defined for a module can be defined as data tokens for the
module
• Glue tokens – the set of data tokens that lie on one or more data slices
• Superglue tokens – data tokens that are common to every data slice in a module
• Stickiness – the relative stickiness of a glue token is directly proportional to the number
of data slices that it binds

31
Metrics for the Design Model
• Component-Level Design Metrics (3 Cs - cohesion, coupling, and complexity)
• Coupling metrics
• For data and control flow coupling
di = number of input data parameters
ci = number of input control parameters
do = number of output data parameters
co = number of output control parameters
• For global coupling
gd = number of global variables used as data
gc = number of global variables used as control
• For environmental coupling
w = number of modules called (fan-out)
r = number of modules calling the module under consideration (fan-in)

Module coupling indicator: mc = k / M
where k is a proportionality constant and
M = di + (a x ci) + do + (b x co) + gd + (c x gc) + w + r
Values for k, a, b, and c must be derived empirically.
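
A minimal C sketch of the module coupling indicator; the counts are hypothetical, and k = 1 with a = b = c = 2 are assumed purely for illustration (the slide notes these constants must be derived empirically).

#include <stdio.h>

int main(void) {
    /* Hypothetical counts for one module ('do' is a C keyword, so the
       output-data count is named dout here) */
    int di = 4, ci = 1, dout = 2, co = 1;   /* data/control flow coupling */
    int gd = 1, gc = 0;                     /* global coupling            */
    int w = 3, r = 2;                       /* environmental coupling     */

    double k = 1.0, a = 2.0, b = 2.0, c = 2.0;   /* assumed constants */

    double M  = di + a * ci + dout + b * co + gd + c * gc + w + r;
    double mc = k / M;

    printf("M = %.1f  mc = %.3f\n", M, mc);   /* M = 16.0, mc = 0.063 for these counts */
    return 0;
}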

32
Metrics for the Design Model
• Component-Level Design Metrics (3 Cs - cohesion, coupling, and complexity)
• Complexity metrics
• Computation of “Cyclomatic Complexity” [already discussed in an earlier lecture in detail]

33
Metrics for the Design Model
• Operation-Oriented Metrics
• Average operation size (OSavg) - number of lines of code or the number of
messages sent by the operation within a class
• Operation complexity (OC) - can be computed using any of the complexity
metrics proposed for conventional software
• Average number of parameters per operation (NPavg)

34
Metrics for the Design Model
• User Interface Design Metrics
• Layout appropriateness (LA)
• Aesthetics
• Usability
• User effort
• Content acquisition time
• Memory Load
• Recognition time
• Page/Window waiting time

35
Metrics for Source Code, Testing, and
Maintenance*
• Metrics for Source Code
• Metrics for Testing
• Metrics for Maintenance

36
Metrics for Source Code, Testing, and
Maintenance*
• Metrics for Source Code
• Halstead’s metric (use primitive measures: n1, n2, N1, N2)
n1 = number of distinct operators that appear in a program
n2 = number of distinct operands that appear in a program
N1 = total number of operator occurrences
N2 = total number of operand occurrences
n = n1 + n2 and N = N1 + N2

• Estimated Length, Ne = n1 log2 n1 + n2 log2 n2
• Program Volume, V = N log2 (n1 + n2)
• Volume Ratio, L = 2/n1 x n2/N2

37
Metrics for Source Code, Testing, and
Maintenance*
• Metrics for Source Code
• Halstead’s metric (use primitive measures: n1, n2, N1, N2)
Example: Consider the following C program

main()
{
int a, b, c, avg;
scanf("%d %d %d", &a, &b, &c);
avg = (a+b+c)/3;
printf("avg = %d", avg);
}

The distinct operators are: main, (), {}, int, scanf, &, =, +, /, printf, ,, ;
The distinct operands are: a, b, c, avg, "%d %d %d", 3, "avg = %d"
n1 = 12, n2 = 7, N1 = 27, N2 = 15, n = 19, N = 42

• Estimated Length, Ne = n1 log2 n1 + n2 log2 n2 → 12 log2 12 + 7 log2 7 = 62.67
• Program Volume, V = N log2 (n1 + n2) → 42 log2 19 = 178.4
• Volume Ratio, L = 2/n1 x n2/N2 → 2/12 x 7/15 = 0.078
38
Metrics for Source Code, Testing, and
Maintenance*
• Metrics for Source Code
• Halstead’s metric (use primitive measures: n1, n2, N1, N2)
Example: n1 = 12, n2 = 7, N1= 27, N2=15 n = 19, N = 42
We can also measure:
• Difficulty, D = n1 / 2 x N2 / n2 → 12 / 2 x 15 / 7 = 12.85
• Effort, E = D x V → 12.85 x 178.4 = 2292.44

• Time required to program, T = E / 18 seconds → 2292.44 / 18 = 127.35 seconds


• Estimated bugs, B = E^(2/3) / 3000 → 2292.44^(2/3) / 3000 ≈ 0.058
or B = V / 3000 → 178.4 / 3000 ≈ 0.059
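
A minimal C sketch that recomputes the derived Halstead figures from the primitive counts above; small differences from the slide's numbers are due to the slide rounding intermediate results (compile with -lm).

#include <stdio.h>
#include <math.h>

int main(void) {
    /* Primitive counts from the example program on the previous slide */
    double n1 = 12, n2 = 7, N1 = 27, N2 = 15;

    double N  = N1 + N2;                           /* program length            */
    double Ne = n1 * log2(n1) + n2 * log2(n2);     /* estimated length ~ 62.67  */
    double V  = N * log2(n1 + n2);                 /* program volume  ~ 178.4   */
    double L  = (2.0 / n1) * (n2 / N2);            /* volume ratio    ~ 0.078   */
    double D  = (n1 / 2.0) * (N2 / n2);            /* difficulty      ~ 12.86   */
    double E  = D * V;                             /* effort                    */
    double T  = E / 18.0;                          /* time to program (seconds) */
    double B  = pow(E, 2.0 / 3.0) / 3000.0;        /* estimated bugs  ~ 0.058   */

    printf("Ne=%.2f V=%.2f L=%.3f D=%.2f E=%.2f T=%.2f B=%.3f\n",
           Ne, V, L, D, E, T, B);
    return 0;
}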

39
Metrics for Source Code, Testing, and
Maintenance*
• Metrics for Testing
• Halstead’s Metrics for Testing
• Testing effort, e = V / PL
where V is the volume of the program (as defined in slide number 37)
the program level, PL = 1 / (n1/2 x N2/n2)
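
Worked example, reusing the figures from the earlier Halstead example: PL = 1 / [(12/2) x (15/7)] ≈ 0.078, so e = V / PL ≈ 178.4 / 0.078 ≈ 2290, which is essentially the Halstead effort E computed earlier (since e = V x D; small differences are due to rounding).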

Metrics for Object-Oriented Testing:


▪ Lack of cohesion in methods (LCOM)
▪ Percent public and protected (PAP)
▪ Number of root classes (NOR)
▪ Number of children (NOC) and depth of the inheritance tree (DIT)
▪ Fan-in (FIN)

40
Metrics for Source Code, Testing, and
Maintenance*
• Metrics for Maintenance
• IEEE Std. 982.1-1988 suggests a software maturity index (SMI)
[provides an indication of the stability of a software product]

SMI = [MT – (Fa + Fc + Fd)] / MT

where
MT = number of modules in the current release
Fc = number of modules in the current release that have been changed
Fa = number of modules in the current release that have been added
Fd = number of modules from the preceding release that were deleted in the current
release
As SMI approaches 1.0, the product begins to stabilize.
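
A minimal C sketch of the SMI computation; the module counts are hypothetical.

#include <stdio.h>

int main(void) {
    double MT = 250;   /* modules in the current release (hypothetical)   */
    double Fc = 18;    /* modules changed in the current release          */
    double Fa = 12;    /* modules added in the current release            */
    double Fd = 5;     /* modules deleted from the preceding release      */

    double smi = (MT - (Fa + Fc + Fd)) / MT;
    printf("SMI = %.3f\n", smi);   /* values approaching 1.0 indicate a stabilizing product */
    return 0;
}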
41
Metrics for Software Quality
• Measurements in the physical world can be categorized in two ways:
• Direct measures (e.g., the length of a bolt)
• Indirect measures (e.g., the “quality” of bolts produced, measured by
counting rejects)
• Software can also be measured in these two ways
• Direct measures - LOC produced, execution speed, memory size, and defects
reported over a particular duration of time
• Indirect measures - functionality, quality, complexity, efficiency, reliability,
maintainability

42
Metrics for Software Quality
• Measuring Quality
• Correctness
• Maintainability
• Integrity
• Usability

44
Metrics for Software Quality
• Measuring Quality
• Correctness - the degree to which the software performs its required function
Example indirect measure:
• Defects (a verified lack of conformance to requirements) per KLOC
• Counted for a specific period of time – typically for 1 year or 6 months

Obviously, the more defects there are, the less correct the software is!

45
Metrics for Software Quality
• Measuring Quality
• Maintainability - the ease with which a program can be corrected if an error
is encountered, adapted if its environment changes, or enhanced if the
customer desires a change in requirements
Example indirect measure:
• Mean-Time-To-Change (MTTC) - the time it takes to analyze the change request, design
an appropriate modification, implement the change, test it, and distribute the change to
all users

Generally, on average, programs that are maintainable will have a lower MTTC

46
Metrics for Software Quality
• Measuring Quality
• Integrity - a system’s ability to withstand attacks (both accidental and
intentional) to its security
• Two additional attributes are defined to measure integrity:
• Threat - the probability (which can be estimated or derived from empirical evidence)
that an attack of a specific type will occur within a given time
• Security - the probability (which can be estimated or derived from empirical evidence)
that the attack of a specific type will be repelled

• The integrity of a system is measured using the following formula


Integrity = Ʃ [1 - (threat x (1 - security))]

47
Metrics for Software Quality
• Measuring Quality
• The integrity of a system is measured using the following formula
Integrity = Ʃ [1 - (threat x (1 - security))]
Examples:
• If threat (the probability that an attack will occur) is 0.25 and security (the likelihood of
repelling an attack) is 0.95, the integrity of the system is 0.99 → very high
• If the threat probability is 0.50 and the likelihood of repelling an attack is only 0.25, the
integrity of the system is 0.63 → unacceptably low

48
Metrics for Software Quality
• Measuring Quality
• Usability - an attempt to quantify ease of use and can be measured in terms
of the following characteristics:
• Effectiveness
• Efficiency
• Learnability
• Memorability
• Satisfaction
• Cognitive load requirement

49
Metrics for Software Quality
• Measuring Quality
• Correctness
• Maintainability
• Integrity
• Usability
These four are just samples of indirect measures of software quality - there are many more
A separate chapter (Ch 23) of the book cited on the reference slide covers them
A whole course (Software Quality and Reliability) is also offered at VIT-AP University
For ‘quality factors and quality assurance’, we will have separate session(s)
Besides these four metrics, there exists a special quality metric that provides benefit at both the project and
process level, called Defect Removal Efficiency (DRE) – described in the next slide

50
Metrics for Software Quality
• Defect Removal Efficiency (DRE)
• A measure of the filtering ability of quality assurance and control actions as they are
applied throughout all process framework activities
• When considered for a project as a whole
DRE = E / (E + D)
where, E = the number of errors found before delivery of the software to the end user
D = the number of defects found after delivery
• The ideal value for DRE is 1 → no defects are found in the software
• Realistically, D will be greater than 0, but the value of DRE can still approach 1 as E increases
for a given value of D. In fact, as E increases, it is likely that the final value of D will decrease
(errors are filtered out before they become defects)
• If used as a metric that provides an indicator of the filtering ability of quality control and
assurance activities, DRE encourages a software project team to institute techniques for
finding as many errors as possible before delivery

51
Metrics for Software Quality
• Defect Removal Efficiency (DRE)
• A measure of the filtering ability of quality assurance and control actions as
they are applied throughout all process framework activities
• When used within the project to assess a team’s ability to find errors
before they are passed to the next framework activity or software engineering action
DREi = Ei / (Ei + Ei+1)
where, Ei = the number of errors found during software engineering action i
Ei+1 = the number of errors found during software engineering action i + 1 that
are traceable to errors that were not discovered in software engineering action i

• A quality objective for a software team (or an individual software engineer) is to achieve
DREi that approaches 1 → errors should be filtered out before they are passed on to the
next activity or action
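
A minimal C sketch of both forms of DRE; the error and defect counts are hypothetical.

#include <stdio.h>

/* Project-level DRE: errors found before delivery vs. defects found after delivery */
double dre(double E, double D) {
    return E / (E + D);
}

/* Activity-level DRE for framework activity i: errors found during activity i vs.
   errors traceable to activity i but found only in activity i + 1 */
double dre_i(double Ei, double Ei_plus_1) {
    return Ei / (Ei + Ei_plus_1);
}

int main(void) {
    printf("DRE  = %.2f\n", dre(120, 8));     /* 120 errors before delivery, 8 defects after => 0.94 */
    printf("DREi = %.2f\n", dre_i(40, 10));   /* 40 found in activity i, 10 leaked to i + 1  => 0.80 */
    return 0;
}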

52
References
• Roger Pressman, “Software Engineering: A Practitioner’s Approach”,
McGraw-Hill, 7th Edition, 2016
• Chapter: 23; Section: 23.1 – 23.7
• Chapter: 25; Section: 25.2 - 25.3

53
Next
Module 5: Managing Software Projects

54
Thank You

55
