Chapter 6 Software Metrics

The document discusses software metrics, which are quantifiable measures used to assess various characteristics of software systems and development processes. It emphasizes the importance of measurement for effective management, quality determination, and process improvement, detailing different types of metrics such as product, process, and project metrics. Additionally, it outlines guidelines for using metrics effectively and the significance of normalization in comparing metrics across projects.


Software Metrics

Questions

• How big is the program?
  – Huge!!
• How close are you to finishing?
  – We are almost there!!
• Can you, as a manager, make any useful decisions from such subjective information?
• No – managers need information such as cost, effort, and the size of the project.
Metrics

• Quantifiable measures that can be used to measure characteristics of a software system or the software development process
• Required in all phases
• Required for effective management
• Managers need quantifiable information, not subjective information
  – Subjective information goes against the fundamental goal of engineering
Measurement

• Measurement is fundamental to any engineering discipline
• Software metrics – a broad range of measurements for computer software
• Software process – measurement can be applied to improve it on a continuous basis
• Software project – measurement can be applied in estimation, quality control, productivity assessment & project control
• Measurement can be used by software engineers in decision making.
Why Measure Software?

• Determine the quality of the current product or process

• Predict qualities of a product/process

• Improve quality of a product/process


Definitions

• Measure – Quantitative indication of the extent, amount, dimension, capacity or size of some attribute of a product or process.
  – E.g., number of errors
• Measurement – The act of determining a measure
• Metric – A quantitative measure of the degree to which a system, component, or process possesses a given attribute
  – E.g., number of errors found per person-hour expended
Definitions

• Indicator – A metric or combination of metrics that provides insight into the software process, a software project, or the product itself.
• Direct metrics: immediately measurable attributes (e.g., lines of code, execution speed, defects reported)
• Indirect metrics: aspects that are not immediately quantifiable (e.g., functionality, quality, reliability)
• Faults:
  – Errors: faults found by practitioners during software development
  – Defects: faults found by customers after release
Why Do We Measure?

• To indicate the quality of the product.


• To assess the productivity of the people who produce
the product
• To assess the benefits derived from new software
engineering methods and tools
• To form a baseline for estimation
• To help justify requests for new tools or additional
training
• Estimate the cost & schedule of future projects
• Forecast future staffing needs
• Anticipate and reduce future maintenance needs
Example Metrics

• Defect rates
• Error rates
• Measured by:
  – individual
  – module
  – during development
• Errors should be categorized by origin, type, and cost
A Good Manager Measures

process → process metrics, project metrics
product → product metrics
(both feed into measurement)

"Not everything that can be counted counts, and not everything that counts can be counted." – Einstein

What do we use as a basis?
• size?
• function?
Software Metrics

Pressman explained: "A measure provides a quantitative indication of the extent, amount, dimension, capacity, or size of some attribute of the product or process."
Measurement is the act of determining a measure.
A metric is a quantitative measure of the degree to which a system, component, or process possesses a given attribute.
Fenton defined measurement as "the process by which numbers or symbols are assigned to attributes of entities in the real world in such a way as to describe them according to clearly defined rules."
Software Metrics

• Definition
Software metrics can be defined as "the continuous application of measurement-based techniques to the software development process and its products to supply meaningful and timely management information, together with the use of those techniques to improve that process and its products."
Software Metrics

• Areas of Application
The most established area of software metrics is cost and size estimation techniques.
The prediction of quality levels for software, often in terms of reliability, is another area where software metrics have an important role to play.
The use of software metrics to provide quantitative checks on software design is also a well-established area.
Software Metrics

• Categories of Metrics
i. Product metrics: describe the characteristics of the product, such as size, complexity, design features, performance, efficiency, reliability, portability, etc.
ii. Process metrics: describe the effectiveness and quality of the processes that produce the software product. Examples are:
  • effort required in the process
  • time to produce the product
  • effectiveness of defect removal during development
  • number of defects found during testing
  • maturity of the process
Software Metrics

iii. Project metrics: describe the project characteristics and execution. Examples are:
• number of software developers
• staffing pattern over the life cycle of the software
• cost and schedule
• productivity
Process Metrics

• Process metrics are measures of the software development process, such as
  – Overall development time
  – Type of methodology used
• Process metrics are collected across all projects and over long periods of time.
• Their intent is to provide indicators that lead to long-term software process improvement.
Process Metrics & Software Process Improvement

• To improve any process, the rational way is:
  – Measure specific attributes of the process.
  – Derive meaningful metrics from these attributes.
  – Use these metrics to provide indicators.
  – The indicators lead to a strategy for improvement.
Process Metrics

• Focus on quality achieved as a consequence of a repeatable or managed process; strategic and long term.
• Statistical Software Process Improvement (SSPI) – error categorization and analysis:
   All errors and defects are categorized by origin
   The cost to correct each error and defect is recorded
   The number of errors and defects in each category is computed
   Data is analyzed to find categories that result in the highest cost to the organization
   Plans are developed to modify the process
• Defect Removal Efficiency (DRE): the relationship between errors (E) found before release and defects (D) found after release. The ideal is a DRE of 1:

DRE = E / (E + D)
Factors Affecting Software Quality
How to Measure Effectiveness of
a Software Process
• We measure the effectiveness of a software process
indirectly
• We derive a set of metrics based on the outcomes that
can be derived from the process.
• Outcomes include
– Errors uncovered before release of the software
– Defects delivered to and reported by end-users
– Work products delivered (productivity)
– Human effort
– Calendar time etc.
– Conformance to schedule
Project Metrics

• Project metrics are measures of a software project and are used to monitor and control the project. They enable a software project manager to:
 Minimize the development time by making the adjustments necessary to avoid delays and potential problems and risks.
 Assess product quality on an ongoing basis & modify the technical approach to improve quality.
Project Metrics

• Used in estimation techniques & other technical work.
• Metrics collected from past projects are used as a basis from which effort and time estimates are made for the current software project.
• As a project proceeds, actual values of human effort & calendar time expended are compared to the original estimates.
• This data is used by the project manager to monitor & control the project.
Project Metrics

• Used by a project manager and software team to adapt project work flow and technical activities.
• Metrics:
  - Effort or time per SE task
  - Errors uncovered per review hour
  - Scheduled vs. actual milestone dates
  - Number of changes and their characteristics
  - Distribution of effort on SE tasks
Product metrics

• Product metrics are measures of the software product at any stage of its development, from requirements to installed system. Product metrics may measure:
  – the complexity of the software design
  – the size of the final program
  – the number of pages of documentation produced
Types of Software
Measurements
• Direct measures
– Easy to collect
– E.g. Cost, Effort, Lines of codes (LOC), Execution
Speed, Memory size, Defects etc.
• Indirect measures
– More difficult to assess & can be measured
indirectly only.
– Quality, Functionality, Complexity, Reliability,
Efficiency, Maintainability etc.
An example

• Two different project teams are working to record errors in a software process.
• Team A finds 342 errors during the software process before release.
• Team B finds 184 errors.
• Which team do you think is more effective at finding errors?
Normalization of Metrics

• To answer this, we need to know the size & complexity of the projects.
• But if we normalize the measures, it is possible to compare the two.
• Normalization: compensate for complexity aspects particular to a product.
• For normalization we have 2 ways:
  – Size-oriented metrics
  – Function-oriented metrics
Metrics Guidelines

• Use common sense and organizational sensitivity when interpreting metrics data.
• Provide regular feedback to the individuals and teams who have worked to collect measures and metrics.
• Don't use metrics to appraise individuals.
• Work with practitioners and teams to set clear goals and metrics that will be used to achieve them.
• Never use metrics to threaten individuals or teams.
• Metrics data that indicate a problem area should not be considered "negative." These data are merely an indicator for process improvement.
• Don't obsess on a single metric to the exclusion of other important metrics.
Typical Normalized Metrics

Project | LOC   | FP  | Effort (P/M) | R(000) | Pp. doc | Errors | Defects | People
alpha   | 12100 | 189 | 24           | 168    | 365     | 134    | 29      | 3
beta    | 27200 | 388 | 62           | 440    | 1224    | 321    | 86      | 5
gamma   | 20200 | 631 | 43           | 314    | 1050    | 256    | 64      | 6

• Size-Oriented:
  - errors per KLOC (thousand lines of code), defects per KLOC, R per LOC, pages of documentation per KLOC, errors per person-month, LOC per person-month, R per page of documentation
• Function-Oriented:
  - errors per FP, defects per FP, R per FP, pages of documentation per FP, FP per person-month
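The normalized measures can be derived mechanically from such a table. This sketch uses the alpha and beta rows (LOC, FP, effort, and error counts copied from the table above):

```python
projects = {
    "alpha": {"loc": 12100, "fp": 189, "effort_pm": 24, "errors": 134},
    "beta":  {"loc": 27200, "fp": 388, "effort_pm": 62, "errors": 321},
}

def normalized(p):
    kloc = p["loc"] / 1000
    return {
        "errors_per_kloc": p["errors"] / kloc,      # size-oriented
        "errors_per_fp":   p["errors"] / p["fp"],   # function-oriented
        "loc_per_pm":      p["loc"] / p["effort_pm"],
    }

for name, data in projects.items():
    print(name, {k: round(v, 2) for k, v in normalized(data).items()})
```

Once normalized, projects of different sizes can be compared directly: alpha and beta turn out to have very similar error densities per KLOC despite their different raw error counts.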
Size-Oriented Metrics

• Based on the "size" of the software produced
• LOC – Lines of Code
• KLOC – 1000 Lines of Code
• SLOC – Statement Lines of Code (ignores whitespace)
• Typical measures:
  – Errors/KLOC, Defects/KLOC, Cost/LOC, Documentation Pages/KLOC
Size-Oriented Metrics

Project | Effort (person-month) | Cost ($) | LOC   | KLOC | Doc. (pgs) | Errors | People
A       | 24                    | 168,000  | 12100 | 12.1 | 365        | 29     | 3
B       | 62                    | 440,000  | 27200 | 27.2 | 1224       | 86     | 5

From the above data, simple size-oriented metrics can be developed for each project:

• Errors per KLOC
• $ per KLOC
• Pages of documentation per KLOC
• Errors per person-month
• LOC per person-month
• Advantages of size-oriented metrics
  – LOC can be easily counted
  – Many software estimation models use LOC or KLOC as input
• Disadvantages of size-oriented metrics
  – LOC measures are language dependent and programmer dependent
  – Their use in estimation requires a lot of detail, which can be difficult to achieve
• Useful for projects with similar environments
Complexity Metrics

• LOC – a function of complexity
• Language and programmer dependent
• Halstead's Software Science (entropy measures)
  – n1 – number of distinct operators
  – n2 – number of distinct operands
  – N1 – total number of operators
  – N2 – total number of operands
Example

if (k < 2)
{
  if (k > 3)
    x = x*k;
}

• Distinct operators: if ( ) { } > < = * ;
• Distinct operands: k 2 3 x
• n1 = 10
• n2 = 4
• N1 = 13
• N2 = 7
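The counts above can be reproduced directly from the token streams of the fragment (a small sketch; the token lists are transcribed by hand from the example):

```python
import math

# Token streams for:  if (k < 2) { if (k > 3) x = x*k; }
operators = ["if", "(", "<", ")", "{", "if", "(", ">", ")", "=", "*", ";", "}"]
operands = ["k", "2", "k", "3", "x", "x", "k"]

n1, n2 = len(set(operators)), len(set(operands))  # distinct operators/operands
N1, N2 = len(operators), len(operands)            # total occurrences

N = N1 + N2                                       # program length: 20
n = n1 + n2                                       # vocabulary: 14
N_hat = n1 * math.log2(n1) + n2 * math.log2(n2)   # estimated length
V = N * math.log2(n)                              # volume in bits

print(n1, n2, N1, N2)               # 10 4 13 7
print(round(N_hat, 2), round(V, 2))
```

Note that the estimated length N̂ (about 41.2 here) and the actual length N (20) can differ noticeably for tiny fragments; the estimate is meant for well-structured programs of realistic size.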
Halstead’s Metrics

• Amenable to experimental verification [1970s]

• Program length: N = N1 + N2
• Program vocabulary: n = n1 + n2

• Estimated length: N̂ = n1 log2 n1 + n2 log2 n2


– Close estimate of length for well structured programs

• Purity ratio: PR = N̂ /N
Program Complexity

• Volume: V = N log2 n
  – Number of bits needed to provide a unique designator for each of the n items in the program vocabulary.
• Difficulty: D = (n1/2) × (N2/n2)
• Program effort: E = D × V
  – This is a good measure of program understandability
McCabe's Complexity Measures

• McCabe's metrics are based on a control flow representation of the program.
• A program graph is used to depict control flow.
• Nodes represent processing tasks (one or more code statements)
• Edges represent control flow between nodes
Cyclomatic Complexity

• Set of independent paths through the graph (basis set)

• V(G) = E – N + 2
– E is the number of flow graph edges
– N is the number of nodes

• V(G) = P + 1
– P is the number of predicate nodes
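Both formulas can be checked on a small flow graph. This sketch models the nested-if fragment from the Halstead example; the basic-block numbering and edge layout are our own assumptions:

```python
# Adjacency list of the flow graph: node -> nodes it can branch to.
graph = {
    1: [2, 4],  # predicate node: if (k < 2)
    2: [3, 4],  # predicate node: if (k > 3)
    3: [4],     # x = x * k
    4: [],      # exit node
}

E = sum(len(targets) for targets in graph.values())  # number of edges: 5
N = len(graph)                                       # number of nodes: 4
predicates = 2                                       # the two if-nodes

print(E - N + 2)       # V(G) = 3
print(predicates + 1)  # V(G) = 3, same result by the second formula
```

The two formulas agree, as they must for any connected flow graph with single-condition predicates.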
Meaning

• V(G) is the number of (enclosed) regions/areas of the planar graph
• The number of regions increases with the number of decision paths and loops
• A quantitative measure of testing difficulty and an indication of ultimate reliability
• Experimental data shows the value of V(G) should be no more than 10 – testing is very difficult above this value
McClure's Complexity Metric

• Complexity = C + V
  – C is the number of comparisons in a module
  – V is the number of control variables referenced in the module
  – decisional complexity
• Similar to McCabe's, but with regard to control variables
Function-Oriented Metrics

• Based on the "functionality" delivered by the software
• Functionality is measured indirectly using a measure called the function point.
• Function points (FP) – derived using an empirical relationship based on countable measures of software & assessments of software complexity
Steps In Calculating FP

1. Count the measurement parameters.
2. Assess the complexity of the values.
3. Calculate the raw FP (see next table).
4. Rate the complexity factors to produce the complexity adjustment value (CAV).
5. Calculate the adjusted FP as follows:
   FP = raw FP x [0.65 + 0.01 x CAV]
Function Point Metrics

Parameter  | Count | Weight (Simple / Average / Complex)
Inputs     |  x    | 3 / 4 / 6
Outputs    |  x    | 4 / 5 / 7
Inquiries  |  x    | 3 / 4 / 6
Files      |  x    | 7 / 10 / 15
Interfaces |  x    | 5 / 7 / 10

Each parameter count is multiplied by the weight for its assessed complexity; the sum of all the products is the count total (raw FP).

Rate Complexity Factors

For each complexity adjustment factor, give a rating


on a scale of 0 to 5
0 - No influence
1 - Incidental
2 - Moderate
3 - Average
4 - Significant
5 - Essential
Complexity Adjustment Factors

1. Does the system require reliable backup and


recovery?
2. Are data communications required?
3. Are there distributed processing functions?
4. Is performance critical?
5. Will the system run in an existing, heavily utilized
operational environment?
6. Does the system require on-line data entry?
7. Does the on-line data entry require the input
transaction to be built over multiple screens or
operations?
Complexity Adjustment
Factors(Continue…)
8. Are the master files updated on-line?
9. Are the inputs, outputs, files, or inquiries complex?
10. Is the internal processing complex?
11. Is the code designed to be reusable?
12. Are conversion and installation included in the
design?
13. Is the system designed for multiple installations in
different organizations?
14. Is the application designed to facilitate change and
ease of use by the user?
Complexity Adjustment Value

• The ratings for all the factors, F1 to F14, are summed to produce the complexity adjustment value (CAV).
• The CAV is then used in the calculation of the function point (FP) of the software.
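Steps 3–5 of the FP calculation can be combined into one small routine. The weights come from the function point table earlier; the parameter counts and CAV below are invented for illustration:

```python
# (simple, average, complex) weights from the function point table
WEIGHTS = {
    "inputs": (3, 4, 6), "outputs": (4, 5, 7), "inquiries": (3, 4, 6),
    "files": (7, 10, 15), "interfaces": (5, 7, 10),
}
LEVEL = {"simple": 0, "average": 1, "complex": 2}

def adjusted_fp(counts, complexity, cav):
    # Raw FP: each count times the weight for its assessed complexity.
    raw = sum(n * WEIGHTS[p][LEVEL[complexity[p]]] for p, n in counts.items())
    # Adjusted FP = raw FP x [0.65 + 0.01 x CAV]
    return raw * (0.65 + 0.01 * cav)

# Hypothetical system: all parameters rated "average", CAV = 42.
counts = {"inputs": 10, "outputs": 5, "inquiries": 4, "files": 2, "interfaces": 1}
complexity = {p: "average" for p in counts}
print(round(adjusted_fp(counts, complexity, 42), 2))  # 115.56 (raw FP = 108)
```

Because CAV ranges from 0 to 70 (fourteen factors rated 0–5), the adjustment multiplier ranges from 0.65 to 1.35, so the adjusted FP can swing ±35% around the raw count.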
Example of Function-Oriented
Metrics
• Errors per FP
• Defects per FP
• $ per FP
• Pages of documentation per FP
• FP per person month
FP Characteristics

• Advantages: language independent, based on data known early in the project, good for estimation
• Disadvantages: calculation complexity, subjective assessments, FP has no physical meaning (just a number)
Software Metrics

Token Count
The size of the vocabulary of a program, which consists of the number of unique tokens used to build the program, is defined as:

η = η1 + η2

where
η : vocabulary of a program
η1 : number of unique operators
η2 : number of unique operands
Software Metrics

The length of the program, in terms of the total number of tokens used, is

N = N1 + N2

where
N : program length
N1 : total occurrences of operators
N2 : total occurrences of operands
Software Metrics

Volume

V = N * log2 η

The common unit of measurement for volume is "bits". It is the actual size of a program if a uniform binary encoding for the vocabulary is used.

Program Level
L = V* / V
where V* is the potential (minimal) volume. The value of L ranges between zero and one, with L = 1 representing a program written at the highest possible level (i.e., with minimum size).
Software Metrics

Program Difficulty
D = 1 / L
As the volume of an implementation of a program increases, the program level decreases and the difficulty increases. Thus, programming practices such as redundant usage of operands, or the failure to use higher-level control constructs, will tend to increase the volume as well as the difficulty.

Effort
E = V / L = D * V
The unit of measurement of E is elementary mental discriminations.
Software Metrics

• Estimated Program Length

N̂ = η1 log2 η1 + η2 log2 η2

For example, with η1 = 14 and η2 = 10:

N̂ = 14 log2 14 + 10 log2 10 = 53.30 + 33.22 = 86.52

The following alternate expressions have been published to estimate program length:

N_J = log2(η1!) + log2(η2!)
Software Metrics

N_B = η1 log2 η2 + η2 log2 η1

N_C = η1 √η2 + η2 √η1

N_S = (η log2 η) / 2

The definitions of unique operators, unique operands, total operators and total operands are not specifically delineated.
Software Metrics

• The Sharing of Data Among Modules
A program normally contains several modules, and coupling exists among the modules. However, it may be desirable to know the amount of data being shared among the modules.

Fig.: Three modules from an imaginary program
Fig.: "Pipes" of data shared among the modules
Fig.: The data shared in program bubble
Software Metrics

Information Flow Metrics

Component : Any element identified by decomposing a (software) system into its constituent parts.
Cohesion : The degree to which a component performs a single function.
Coupling : The term used to describe the degree of linkage between one component and others in the same system.
Software Metrics

• The Basic Information Flow Model
Information flow metrics are applied to the components of a system design. Fig. 13 shows a fragment of such a design; for component 'A' we can define three measures, but remember that these are the simplest models of IF.
1. 'FAN IN' is simply a count of the number of other components that can call, or pass control, to component A.
2. 'FAN OUT' is the number of components that are called by component A.
3. The third measure is derived from the first two; we will call it the INFORMATION FLOW measure of component A:
   IF(A) = [FAN IN(A) × FAN OUT(A)]²
Software Metrics

Fig.: Aspects of complexity


Software Metrics

The following is a step-by-step guide to deriving these most simple of IF metrics.

1. Note the level of each component in the system design.
2. For each component, count the number of calls to that component – this is the FAN IN of that component. Some organizations allow more than one component at the highest level in the design, so for components at the highest level, which would otherwise have a FAN IN of zero, assign a FAN IN of one. Also note that a simple model of FAN IN can penalize reused components.
3. For each component, count the number of calls from the component. For components that call no others, assign a FAN OUT value of one.
cont…
Software Metrics

4. Calculate the IF value for each component using the above formula.
5. Sum the IF values for all components within each level; this is called the LEVEL SUM.
6. Sum the IF values for the total system design; this is called the SYSTEM SUM.
7. For each level, rank the components in that level according to FAN IN, FAN OUT and IF values. Three histograms or line plots should be prepared for each level.
8. Plot the LEVEL SUM values for each level using a histogram or line plot.
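Steps 2–4 can be sketched for a small, hypothetical call graph. The component names are invented, and the squared-product IF formula follows the simple model described above:

```python
# calls[X] = components called by X (a made-up four-component design)
calls = {"main": ["A", "B", "C"], "A": ["C"], "B": ["C"], "C": []}

def if_values(calls):
    # Step 3: components that call no others get FAN OUT = 1.
    fan_out = {c: max(len(targets), 1) for c, targets in calls.items()}
    # Step 2: count calls to each component; top-level components get FAN IN = 1.
    fan_in = {c: 0 for c in calls}
    for targets in calls.values():
        for t in targets:
            fan_in[t] += 1
    for c in calls:
        fan_in[c] = max(fan_in[c], 1)
    # Step 4: IF = (FAN IN x FAN OUT)^2 for each component.
    return {c: (fan_in[c] * fan_out[c]) ** 2 for c in calls}

print(if_values(calls))  # {'main': 9, 'A': 1, 'B': 1, 'C': 9}
```

Squaring the product makes components that are both widely called and widely calling stand out sharply; here the dispatcher "main" and the shared utility "C" dominate.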
Software Metrics

• A More Sophisticated Information Flow Model
Let:
a = the number of components that call A;
b = the number of parameters passed to A from components higher in the hierarchy;
c = the number of parameters passed to A from components lower in the hierarchy;
d = the number of data elements read by component A.

Then:
FAN IN(A) = a + b + c + d
Software Metrics

Also let:
e = the number of components called by A;
f = the number of parameters passed from A to components higher
in the hierarchy;
g = the number of parameters passed from A to components lower
in the hierarchy;
h = the number of data elements written to by A.

Then:
FAN OUT(A)= e + f + g + h
Object Oriented Metrics

Terminologies
1. Object – An entity able to save a state (information) and offering a number of operations (behavior) to either examine or affect this state.
2. Message – A request that an object makes of another object to perform an operation.
3. Class – A set of objects that share a common structure and common behavior manifested by a set of methods; the set serves as a template from which objects can be created.
4. Method – An operation upon an object, defined as part of the declaration of a class.
5. Attribute – Defines the structural properties of a class; unique within a class.
6. Operation – An action performed by or on an object; available to all instances of the class; need not be unique.
Object Oriented Metrics

Terminologies
7. Instantiation – The process of creating an instance of an object and binding or adding the specific data.
8. Inheritance – A relationship among classes, wherein an object in a class acquires characteristics from one or more other classes.
9. Cohesion – The degree to which the methods within a class are related to one another.
10. Coupling – Object A is coupled to object B if and only if A sends a message to B.
Object Oriented Metrics

• Measuring on class level


– coupling
– inheritance
– methods
– attributes
– cohesion
• Measuring on system level
Object Oriented Metrics

Size Metrics:
• Number of Methods per Class (NOM)
• Number of Attributes per Class (NOA)
• Weighted Methods per Class (WMC)
  – the number of methods implemented within a class, or the sum of the complexities of all methods
Object Oriented Metrics

Coupling Metrics:
• Response for a Class (RFC )
– Number of methods (internal and external) in a class.

• Data Abstraction Coupling(DAC)


- Number of Abstract Data Types in a class.

• Coupling between Objects (CBO)


– Number of other classes to which it is coupled.
Object Oriented Metrics

• Message Passing Coupling (MPC)


– Number of send statements defined in a class.

• Coupling Factor (CF)


– Ratio of actual number of coupling in the system to
the max. possible coupling.
Object Oriented Metrics

Cohesion Metrics:
• LCOM: Lack of Cohesion in Methods
  – Consider a class C1 with n methods M1, M2, …, Mn. Let {Ii} = the set of instance variables used by method Mi. There are n such sets {I1}, …, {In}. Let
    P = {(Ii, Ij) | Ii ∩ Ij = ∅} and Q = {(Ii, Ij) | Ii ∩ Ij ≠ ∅}
    If all n sets {I1}, …, {In} are empty, then P = ∅.
    LCOM = |P| − |Q|, if |P| > |Q|
         = 0, otherwise
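The set-pair definition translates almost directly into code (a small sketch; the example instance-variable sets are invented):

```python
from itertools import combinations

def lcom(instance_vars_per_method):
    """LCOM = |P| - |Q| if |P| > |Q|, else 0.

    P counts method pairs with disjoint instance-variable sets,
    Q counts pairs whose sets overlap."""
    p = q = 0
    for a, b in combinations(instance_vars_per_method, 2):
        if set(a) & set(b):
            q += 1
        else:
            p += 1
    return p - q if p > q else 0

# Methods M1..M3 use instance-variable sets {x}, {x, y}, {z}:
print(lcom([{"x"}, {"x", "y"}, {"z"}]))            # 2 disjoint - 1 overlapping = 1
print(lcom([{"x", "y"}, {"y", "z"}, {"x", "z"}]))  # every pair overlaps -> 0
```

A high LCOM suggests the class bundles unrelated responsibilities and may be a candidate for splitting.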
Object Oriented Metrics

• Tight Class Cohesion (TCC)
  – Percentage of pairs of public methods of the class with common attribute usage.
• Loose Class Cohesion (LCC)
  – Same as TCC, except that this metric also considers indirectly connected methods.
• Information-based Cohesion (ICH)
  – Number of invocations of other methods of the same class, weighted by the number of parameters of the invoked method.
Object Oriented Metrics

Inheritance Metrics:

• DIT - Depth of inheritance tree

• NOC - Number of children


– only immediate subclasses are counted.
Object Oriented Metrics

Inheritance Metrics:
• AIF – Attribute Inheritance Factor
  – Ratio of the sum of inherited attributes in all classes of the system to the total number of attributes for all classes.

AIF = Σ(i=1..TC) Ai(Ci) / Σ(i=1..TC) Aa(Ci)

where
Aa(Ci) = Ai(Ci) + Ad(Ci)
TC = total number of classes
Ad(Ci) = number of attributes declared in class Ci
Ai(Ci) = number of attributes inherited in class Ci
Object Oriented Metrics

Inheritance Metrics:
• MIF – Method Inheritance Factor
  – Ratio of the sum of inherited methods in all classes of the system to the total number of methods for all classes.

MIF = Σ(i=1..TC) Mi(Ci) / Σ(i=1..TC) Ma(Ci)

where
Ma(Ci) = Mi(Ci) + Md(Ci)
TC = total number of classes
Md(Ci) = number of methods declared in class Ci
Mi(Ci) = number of methods inherited in class Ci
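Because MIF and AIF have the same shape (sum of inherited over sum of available), one helper covers both; the per-class counts below are invented for illustration:

```python
def inheritance_factor(classes):
    """classes: list of (declared, inherited) counts per class.

    Returns sum of inherited / sum of available (declared + inherited),
    which is MIF when counting methods and AIF when counting attributes."""
    inherited = sum(i for _, i in classes)
    available = sum(d + i for d, i in classes)
    return inherited / available

# Three hypothetical classes with (Md, Mi) = (5, 0), (3, 5), (2, 8):
methods = [(5, 0), (3, 5), (2, 8)]
print(round(inheritance_factor(methods), 3))  # 13 / 23 = 0.565
```

A value near 0 means little reuse through inheritance; a value near 1 means almost everything is inherited rather than declared locally.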
Use-Case Oriented Metrics

• Counting actors

Type    | Description                              | Factor
Simple  | Program interface                        | 1
Average | Interactive or protocol-driven interface | 2
Complex | Graphical interface                      | 3

Actor weighting factors

o Simple actor: represents another system with a defined interface.
o Average actor: another system that interacts through a text-based interface or a protocol such as TCP/IP.
o Complex actor: a person interacting through a GUI interface.
The actor weight can be calculated by adding these values together.
Use-Case Oriented Metrics

• Counting use cases

Type    | Description              | Factor
Simple  | 3 or fewer transactions  | 5
Average | 4 to 7 transactions      | 10
Complex | More than 7 transactions | 15

Transaction-based weighting factors

The number of each use case type is counted in the software, and then each number is multiplied by a weighting factor as shown in the table above.
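The two weighting tables combine into an unadjusted use-case points count (a sketch; the actor and use-case counts in the example are invented):

```python
# Weights copied from the actor and transaction-based weighting tables above.
ACTOR_WEIGHTS = {"simple": 1, "average": 2, "complex": 3}
USE_CASE_WEIGHTS = {"simple": 5, "average": 10, "complex": 15}

def unadjusted_ucp(actors, use_cases):
    """actors, use_cases: dicts mapping type -> count."""
    uaw = sum(ACTOR_WEIGHTS[t] * n for t, n in actors.items())        # actor weight
    uucw = sum(USE_CASE_WEIGHTS[t] * n for t, n in use_cases.items()) # use-case weight
    return uaw + uucw

# Hypothetical system: 2 simple + 1 complex actor, 3 simple + 2 average use cases.
print(unadjusted_ucp({"simple": 2, "complex": 1}, {"simple": 3, "average": 2}))  # 40
```

In the full use-case points method this unadjusted count is then scaled by technical and environmental factors, much as raw FP is scaled by the CAV.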
Web Engineering Project Metrics

• Number of static web pages
• Number of dynamic web pages
• Number of internal page links
• Word count
• Web page similarity
• Web page search and retrieval
• Number of static content objects
• Number of dynamic content objects
Metrics Analysis

Statistical Techniques
• Summary statistics such as mean, median, max. and min.
• Graphical representations such as histograms, pie charts and
box plots.
• Principal component analysis
• Regression and correlation analysis
• Reliability models for predicting future reliability.
Metrics Analysis

Problems with metric data:


• Normal Distribution
• Outliers
• Measurement Scale
• Multicollinearity
Metrics Analysis

Common pool of data:


• The selection of projects should be representative and not all
come from a single application domain or development styles.
• No single very large project should be allowed to dominate the
pool.
• For some projects, certain metrics may not have been collected.
Metrics Analysis

Pattern of Successful Applications:

• Any metric is better than none.
• Automation is essential.
• Empiricism is better than theory.
• Use multiple factors rather than single metrics.
• Don't confuse productivity metrics with complexity metrics.
• Let them mature.
• Maintain them.
• Let them die.
Qualities of a good metric

• simple, precisely definable – so that it is clear how the metric can be evaluated;
• objective, to the greatest extent possible;
• easily obtainable (i.e., at reasonable cost);
• valid – the metric should measure what it is intended to measure; and
• robust – relatively insensitive to (intuitively) insignificant changes in the process or product.
Reference

• K.K. Aggarwal and Yogesh Singh, Software Engineering.
• Roger S. Pressman, Software Engineering: A Practitioner's Approach.
