5 SW Metrics
Process Metrics
To improve any process, it is necessary to measure its specified attributes, develop a set of meaningful
metrics based on these attributes, and then use these metrics to obtain indicators in order to derive a
strategy for process improvement.
Using software process metrics, software engineers can assess the efficiency of the software process being performed, using the process itself as a framework. The process sits at the centre of a triangle connecting three factors (product, people, and technology), each of which has an important influence on software quality and organizational performance. The skill and motivation of the people, the complexity of the product, and the level of technology used in software development all influence quality and team performance. The process triangle exists within a circle of environmental conditions, which includes the development environment, business conditions, and customer/user characteristics.
To measure the efficiency and effectiveness of the software process, a set of metrics is formulated based on outcomes derived from the process, such as errors uncovered before release, defects reported by end users, effort expended, and calendar time consumed.
Note that process metrics can also be derived from the characteristics of a particular software engineering activity. For example, an organization may measure the effort and time spent specifically on user interface design.
Process metrics are of two types: private and public. Private metrics belong to an individual and serve as an indicator only for that individual. Defect rates by software module and defect rates by individual are examples of private process metrics. Note that some process metrics are private to the project team but public to all team members. These include errors detected while performing formal technical reviews and defects reported against various functions included in the software.
Public metrics include information that was originally private to individuals and teams. Project-level defect rates, effort, and related data are collected, analyzed, and assessed in order to obtain indicators that help in improving organizational process performance.
Process Metrics Etiquette
Process metrics can provide substantial benefits as the organization works to improve its process
maturity. However, these metrics can be misused and create problems for the organization. In order to
avoid this misuse, some guidelines have been defined, which can be used both by managers and
software engineers. These guidelines are listed below.
· Use common sense and organizational sensitivity when interpreting metrics data.
· Provide regular feedback to the individuals and teams involved in collecting measures and metrics.
· Do not use metrics to appraise or threaten individuals.
· Since metrics are used to indicate a need for process improvement, a metric that highlights a problem area should not be considered negative.
· Avoid relying on a single metric.
As an organization becomes familiar with process metrics, the derivation of simple indicators gives way to a more rigorous approach called Statistical Software Process Improvement (SSPI). SSPI uses software failure analysis to collect information about all errors (problems detected before the software is delivered) and defects (problems detected after the software is delivered to the user) encountered during the development of a product or system.
Product Metrics
In the software development process, a working product is developed at the end of each successful phase. Each product can be measured at any stage of its development. Metrics are developed for these products so that they can indicate whether a product is being developed according to user requirements. If a product does not meet user requirements, the necessary corrective actions are taken in the respective phase.
Product metrics help software engineers detect and correct potential problems before they result in catastrophic defects. In addition, product metrics assess internal product attributes in order to judge the quality of the following.
Metrics for analysis model: These address various aspects of the analysis model such as
system functionality, system size, and so on.
Metrics for design model: These allow software engineers to assess the quality of design
and include architectural design metrics, component-level design metrics, and so on.
Metrics for source code: These assess source code complexity, maintainability, and other
characteristics.
Metrics for testing: These help to design efficient and effective test cases and also
evaluate the effectiveness of testing.
Metrics for maintenance: These assess the stability of the software product.
Lines of code (LOC) is one of the most widely used measures for size estimation. LOC can be defined as the number of delivered lines of code, excluding comments and blank lines. It is highly dependent on the programming language used, as code density varies from one programming language to another. For example, a large program written in assembly language requires more lines of code than the same program written in C++.
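As a sketch, the LOC definition above can be implemented as a simple counter. The comment prefix and the sample snippet are hypothetical; a production counter would also have to handle block comments and string literals.

```python
def count_loc(source: str, comment_prefix: str = "#") -> int:
    """Count lines of code, excluding blank lines and full-line comments."""
    loc = 0
    for line in source.splitlines():
        stripped = line.strip()
        if stripped and not stripped.startswith(comment_prefix):
            loc += 1
    return loc

# Hypothetical source snippet: one comment, one blank line, three code lines.
sample = """
# compute a square
def square(x):
    return x * x

print(square(4))
"""
print(count_loc(sample))  # 3
```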
From LOC, simple size-oriented metrics can be derived, such as errors per KLOC (thousand lines of code), defects per KLOC, cost per KLOC, and so on. LOC has also been used to predict program complexity, development effort, programmer performance, and so on. For example, Halstead proposed a number of metrics that are used to calculate program length, program volume, program difficulty, and development effort.
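Normalizing a raw count by KLOC is a one-line calculation; the project figures below are hypothetical and serve only to illustrate the arithmetic.

```python
def per_kloc(count: int, loc: int) -> float:
    """Normalize a raw count (errors, defects, cost) per thousand lines of code."""
    return count / (loc / 1000)

loc = 12_500      # delivered lines of code (hypothetical project)
errors = 134      # errors found before delivery
defects = 29      # defects reported after delivery
print(round(per_kloc(errors, loc), 2))   # 10.72 errors per KLOC
print(round(per_kloc(defects, loc), 2))  # 2.32 defects per KLOC
```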
In addition, various other metrics, such as simple morphology metrics, are also used. These metrics allow comparison of different program architectures using a set of straightforward dimensions. One such metric can be developed for a call-and-return architecture and is defined by the following equation.
Size = n + a
Where
n = number of nodes
a = number of arcs.
For example, for an architecture with 11 nodes and 10 arcs, Size is calculated as follows.
Size = n + a = 11 + 10 = 21.
Depth is defined as the longest path from the top node (root) to a leaf node, and width is defined as the maximum number of nodes at any one level.
Coupling of the architecture is indicated by the arc-to-node ratio. This ratio also measures the connectivity density of the architecture and is calculated by the following equation.
r = a/n
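The morphology metrics above can be sketched in a few lines, given a structure chart represented as an adjacency list. The module names are hypothetical, and depth is counted here in arcs along the longest root-to-leaf path, which is one reasonable reading of the definition above.

```python
# Call-and-return architecture as an adjacency list (hypothetical structure chart).
arch = {
    "main":     ["input", "process", "output"],
    "input":    ["validate"],
    "process":  ["compute", "format"],
    "output":   [],
    "validate": [],
    "compute":  [],
    "format":   [],
}

n = len(arch)                           # number of nodes
a = sum(len(c) for c in arch.values())  # number of arcs
size = n + a                            # Size = n + a
r = a / n                               # arc-to-node ratio (connectivity density)

def depth(node):
    """Longest path, counted in arcs, from this node down to a leaf."""
    return 0 if not arch[node] else 1 + max(depth(c) for c in arch[node])

def width(root):
    """Maximum number of nodes at any single level of the hierarchy."""
    level, widest = [root], 0
    while level:
        widest = max(widest, len(level))
        level = [c for node in level for c in arch[node]]
    return widest

print(size, depth("main"), width("main"), round(r, 2))  # 13 2 3 0.86
```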
Quality of software design also plays an important role in determining the overall quality of the software.
Many software quality indicators that are based on measurable design characteristics of
a computer program have been proposed. One of them is Design Structural Quality Index (DSQI), which is
derived from the information obtained from data and architectural design. To calculate DSQI, a number of
steps are followed, which are listed below.
1. To calculate DSQI, the following values must be determined.
   · Measures defined for data and control flow coupling:
     di = total number of input data parameters
     ci = total number of input control parameters
     do = total number of output data parameters
     co = total number of output control parameters
   · Measures defined for global coupling:
     gd = number of global variables used as data
     gc = number of global variables used as control
   · Measures defined for environmental coupling:
     w = number of modules called
     r = number of modules calling the module under consideration
Using the above measures, the module-coupling indicator (mc) is calculated by the following equation.
mc = K/M
Where
K = proportionality constant
M = di + (a*ci) + do + (b*co) + gd + (c*gc) + w + r.
Note that K, a, b, and c are empirically derived constants. The value of mc is inversely proportional to overall module coupling: as mc increases, overall module coupling decreases.
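The mc equation can be sketched directly. The empirical constants K, a, b, and c are set to 1 here purely for illustration, and the parameter counts for the module are hypothetical.

```python
def module_coupling(di, ci, do, co, gd, gc, w, r,
                    k=1.0, a=1.0, b=1.0, c=1.0):
    """Module-coupling indicator mc = K / M, where
    M = di + a*ci + do + b*co + gd + c*gc + w + r.
    K, a, b, c are empirically derived; 1.0 here is an illustrative assumption."""
    m = di + a * ci + do + b * co + gd + c * gc + w + r
    return k / m

# Hypothetical module: 4 input data params, 1 input control param,
# 2 output data params, 1 output control param, 1 global data variable,
# no global control variables, calls 3 modules, called by 2 modules.
mc = module_coupling(di=4, ci=1, do=2, co=1, gd=1, gc=0, w=3, r=2)
print(round(mc, 3))  # M = 14, so mc = 0.071
```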
Complexity Metrics: Different types of software metrics can be calculated to ascertain the complexity of program control flow. One of the most widely used complexity metrics is cyclomatic complexity, which is computed from the program's control-flow graph.
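Cyclomatic complexity can be computed from flow-graph counts using the standard formula V(G) = E - N + 2P; the edge and node counts below are hypothetical.

```python
def cyclomatic_complexity(edges: int, nodes: int, components: int = 1) -> int:
    """V(G) = E - N + 2P for a control-flow graph with E edges,
    N nodes, and P connected components (usually 1 for a single routine)."""
    return edges - nodes + 2 * components

# Hypothetical flow graph with 9 edges and 8 nodes.
print(cyclomatic_complexity(9, 8))  # 3 independent paths
```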
Many metrics have been proposed for user interface design; the layout appropriateness metric and the cohesion metric for user interface design are the most commonly used. Layout Appropriateness (LA) is an important metric for user interface design. A typical Graphical User Interface (GUI) uses many layout entities such as icons, text, menus, windows, and so on. These layout entities help users complete their tasks easily. In order to complete a given task with the help of a GUI, the user moves from one layout entity to another.
Appropriateness of the interface is indicated by the absolute and relative positions of each layout entity, the frequency with which each layout entity is used, and the cost of transition from one layout entity to another.
Cohesion metric for user interface measures the connection among the onscreen contents. Cohesion for
user interface becomes high when content presented on the screen is from a single major data object
(defined in the analysis model). On the other hand, if content presented on the screen is from different
data objects, then cohesion for user interface is low.
In addition to these metrics, the direct measure of user interface interaction focuses on activities such as the time required to complete a specific activity, the time required to recover from an error condition, counts of specific operations, text density, and text size. Once all these measures are collected, they are organized to form meaningful user interface metrics, which can help in improving the quality of the user interface.
Halstead proposed the first analytic laws for computer science by using a set of primitive measures, which can be derived once the design phase is complete and code is generated. These measures are listed below.
n1 = number of distinct operators in a program
n2 = number of distinct operands in a program
N1 = total number of operators
N2 = total number of operands.
By using these measures, Halstead developed an expression for overall program length, program volume,
program difficulty, development effort, and so on.
Program length (N) can be calculated by using the following equation.
N = n1 log2 n1 + n2 log2 n2.
Program volume (V) can be calculated by using the following equation.
V = N log2 (n1+n2).
Note that program volume depends on the programming language used and represents the volume of information (in bits) required to specify a program. Volume ratio (L) can be calculated by using the following equation.
L = (volume of the most compact form of a program) / (volume of the actual program)
The value of L must be less than 1. Volume ratio can also be estimated by the following equation.
L = (2/n1) * (n2/N2).
Program difficulty level (D) and effort (E) can be calculated by using the following equations.
D = (n1/2)*(N2/n2).
E = D * V.
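The Halstead equations above translate directly into code. The operator/operand counts below are hypothetical, and the function follows the text's convention of using the computed length N in the volume equation.

```python
import math

def halstead(n1, n2, N1, N2):
    """Halstead measures, following the equations in the text above."""
    N = n1 * math.log2(n1) + n2 * math.log2(n2)  # program length
    V = N * math.log2(n1 + n2)                   # program volume
    L = (2 / n1) * (n2 / N2)                     # volume ratio (program level)
    D = (n1 / 2) * (N2 / n2)                     # program difficulty (D = 1/L)
    E = D * V                                    # development effort
    return N, V, L, D, E

# Hypothetical counts for a small module: 10 distinct operators, 12 distinct
# operands, 40 operator occurrences, 35 operand occurrences.
N, V, L, D, E = halstead(n1=10, n2=12, N1=40, N2=35)
print(round(D, 2))  # 14.58
```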
Metrics for Software Testing
The majority of metrics used for testing focus on the testing process rather than the technical characteristics of the tests themselves. Generally, testers use metrics for analysis, design, and coding to guide them in the design and execution of test cases.
Function points can be used effectively to estimate testing effort. Various characteristics such as errors discovered, number of test cases needed, and testing effort can be determined by estimating the number of function points in the current project and comparing them with previous projects.
Metrics used for architectural design can be used to indicate how integration testing can be carried out.
In addition, cyclomatic complexity can be used effectively as a metric in the basis-path testing to
determine the number of test cases needed.
Halstead measures can be used to derive metrics for testing effort. Using program volume (V) and program level (PL), Halstead effort (e) can be calculated by the following equations.
e = V/PL
Where
PL = 1/[(n1/2) * (N2/n2)] … (1)
For a particular module (z), the percentage of overall testing effort allocated to it can be calculated by the following equation.
Percentage of testing effort (z) = e(z)/∑e(i)
where e(z) is calculated for module z with the help of equation (1), and the summation in the denominator is the sum of Halstead effort (e) across all modules of the system.
For developing metrics for object-oriented (OO) testing, different types of design metrics that have a
direct impact on the testability of object-oriented system are considered. While developing metrics for OO
testing, inheritance and encapsulation are also considered. A set of metrics proposed for OO testing is
listed below.
Lack of cohesion in methods (LCOM): This indicates the number of states to be tested.
LCOM indicates the number of methods that access one or more of the same attributes.
The value of LCOM is 0 if no methods access the same attributes. As the value of LCOM
increases, more states need to be tested.
Percent public and protected (PAP): This shows the number of class attributes, which are
public or protected. Probability of adverse effects among classes increases with increase in
value of PAP as public and protected attributes lead to potentially higher coupling.
Public access to data members (PAD): This shows the number of classes that can access
attributes of another class. Adverse effects among classes increase as the value of PAD
increases.
Number of root classes (NOR): This specifies the number of different class hierarchies,
which are described in the design model. Testing effort increases with increase in NOR.
Fan-in (FIN): This indicates multiple inheritance. If the value of FIN is greater than
1, the class inherits its attributes and operations from more than one root class. Note
that this situation (FIN > 1) should be avoided.
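As a sketch, LCOM can be computed from a mapping of each method to the attributes it accesses. Several LCOM variants exist; the one below is the Chidamber-Kemerer formulation (method pairs sharing no attribute minus pairs sharing at least one, floored at zero), which differs slightly from the informal description above. The class and its methods are hypothetical.

```python
from itertools import combinations

def lcom(methods: dict) -> int:
    """Chidamber-Kemerer LCOM: count method pairs sharing no attribute (P)
    and pairs sharing at least one (Q); LCOM = max(P - Q, 0)."""
    p = q = 0
    for (_, attrs1), (_, attrs2) in combinations(methods.items(), 2):
        if set(attrs1) & set(attrs2):
            q += 1
        else:
            p += 1
    return max(p - q, 0)

# Hypothetical class: each method mapped to the attributes it accesses.
methods = {
    "open":  ["handle"],
    "read":  ["handle", "buffer"],
    "close": ["handle"],
    "log":   ["log_file"],
    "flush": ["buffer"],
}
print(lcom(methods))  # 2 (6 disjoint pairs, 4 sharing pairs)
```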