
SOFTWARE ENGINEERING – PRODUCT METRICS- UNIT – IV – PART - II

4.2.1 SOFTWARE QUALITY


❖ All software developers agree that high-quality software is an important goal.
❖ In the most general sense, software quality is conformance to explicitly stated functional and
performance requirements.
❖ The three important points related to quality are:
1. Software requirements are the foundation from which quality is measured. Lack of
conformance to requirements is lack of quality.
2. Specified standards define a set of development criteria that guide the manner in which
software is engineered. If the criteria are not followed, lack of quality will almost surely
result.
3. There is a set of implicit requirements that often goes unmentioned (e.g., the desire for ease
of use). If software conforms to its explicit requirements but fails to meet implicit
requirements, software quality is suspect.
❖ Software quality is a complex mix of factors that will vary across different applications and
customers.
❖ To achieve quality, the following software quality factors, and the human activities required
to achieve them, are described below.
4.2.1.1 McCall's Quality Factors
❖ The factors that affect software quality can be categorized in two broad groups:
1. Factors that can be measured directly (e.g., defects uncovered during testing).
2. Factors that can be measured only indirectly (e.g., usability or maintainability).
❖ McCall, Richards, and Walters propose a useful categorization of factors that affect software
quality.
❖ These software quality factors, shown in Figure 4.1, focus on three important aspects of a
software product:
i) Its operational characteristics.
ii) Its ability to undergo change.
iii) Its adaptability to new environments.

Figure 4.1 McCall's Software Quality Factors
❖ Referring to the factors noted in Figure 4.1, McCall and his colleagues provide the following
descriptions:
1) Correctness:- The extent to which a program satisfies its specification and fulfills the
customer's mission objectives.
2) Reliability:- The extent to which a program can be expected to perform its intended function
with required precision.
3) Efficiency:- The amount of computing resources and code required by a program to perform
its function.
4) Integrity:- The extent to which access to software or data by unauthorized persons can be
controlled.
5) Usability:- The effort required to learn, operate, prepare input, and interpret output of a
program.
6) Maintainability:- The effort required to locate and fix an error in a program.
7) Flexibility:- The effort required to modify an operational program.
8) Testability:- The effort required to test a program to ensure that it performs its intended function.
9) Portability:- The effort required to transfer the program from one hardware and/or software
system environment to another.
10) Reusability:- The extent to which a program or parts of a program can be reused in other
applications related to the packaging and scope of the functions that the program performs.
11) Interoperability:- The effort required to couple one system to another.
Note:- It is difficult, and in some cases impossible, to develop direct measures of these quality
factors.
4.2.1.2 ISO 9126 Quality Factors:
❖ The ISO 9126 standard was developed in an attempt to identify quality attributes for
computer software.
❖ The standard identifies six key quality attributes:
i) Functionality:- The degree to which the software satisfies stated needs as indicated by the
following sub-attributes: a) Suitability, b) Accuracy, c) Interoperability, d) Compliance,
and e) Security.
ii) Reliability:- The amount of time that the software is available for use as indicated by the
following sub-attributes: a) Maturity, b) Fault tolerance and c) Recoverability.
iii) Usability:- The degree to which the software is easy to use as indicated by the following
sub-attributes: a) Understandability, b) Learnability and c) Operability.
iv) Efficiency:- The degree to which the software makes optimal use of system resources as
indicated by the following sub-attributes: a) Time behavior and b) Resource behavior.
v) Maintainability:- The ease with which repair may be made to the software as indicated by
the following sub-attributes: a) Analyzability, b) Changeability, c) Stability, and
d) Testability.
vi) Portability:- The ease with which the software can be transposed from one environment
to another as indicated by the following sub-attributes: a) Adaptability, b) Installability,
c) Conformance and d) Replaceability.
4.2.2 METRICS FOR THE ANALYSIS MODEL
4.2.2.1 Function-Based Metrics
❖ The function point metric (FP) proposed by Albrecht, can be used effectively as a means for
measuring the functionality delivered by a system.
❖ Using historical data, the FP can then be used to
1) Estimate the cost or effort required to design, code, and test the software.
2) Predict the number of errors that will be encountered during testing.
3) Forecast the number of components and/or the number of projected source lines in the
implemented system.
❖ Function points are derived using an empirical relationship based on countable (direct)
measures of software's information domain and assessments of software complexity.
Figure 4.2 Computing Function Points
❖ Information domain values are defined in the following manner:
❖ Number of external inputs (EIs):- Each external input originates from a user or is
transmitted from another application.
❖ Number of external outputs (EOs):- Each external output is derived within the application
and provides information to the user. Ex: Reports, screens, error messages etc…
❖ Number of external inquiries (EQs):- An external inquiry is defined as an online input that
results in the generation of some immediate software response in the form of an online output.
❖ Number of internal logical files (ILFs):- Each internal logical file is a logical grouping of
data that resides within the application's boundary and is maintained via external inputs.
❖ Number of external interface files (EIFs):- Each external interface file is a logical grouping
of data that resides external to the application but provides data that may be of use to the
application.
❖ Once these data have been collected, the table in Figure 4.2 is completed and a complexity
value is associated with each count.
❖ Organizations that use function point methods develop criteria for determining whether a
particular entry is simple, average, or complex.
❖ To compute function points (FP), the following relationship is used:
FP = count total × [0.65 + 0.01 × Σ(Fi)]
where count total is the sum of all FP entries obtained from Figure 4.2.
❖ The Fi (i=1 to 14) are value adjustment factors (VAF) based on responses to the following
questions.
1. Does the system require reliable backup and recovery?
2. Are specialized data communications required to transfer information to or from the
application?
3. Are there distributed processing functions?
4. Is performance critical?
5. Will the system run in an existing, heavily utilized operational environment?
6. Does the system require on-line data entry?
7. Does the on-line data entry require the input transaction to be built over multiple screens
or operations?
8. Are the ILFs updated on-line?
9. Are the inputs, outputs, files, or inquiries complex?
10. Is the internal processing complex?
11. Is the code designed to be reusable?
12. Are conversion and installation included in the design?
13. Is the system designed for multiple installations in different organizations?
14. Is the application designed to facilitate change and for ease of use by the user?
❖ Each of these questions is answered using a scale that ranges from 0 (not important or
applicable) to 5 (absolutely essential).
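❖ As an illustration of the relationship above, the following minimal Python sketch computes FP from hypothetical information domain counts (using commonly cited "average" complexity weights) and hypothetical answers to the 14 VAF questions; every number below is an assumption, not a value from the text.

```python
# Hedged sketch of FP = count total x [0.65 + 0.01 x sum(Fi)].
# All counts, weights, and VAF answers are hypothetical.

domain_values = {
    # information domain value: (count, assumed average-complexity weight)
    "external_inputs":          (32, 4),
    "external_outputs":         (60, 5),
    "external_inquiries":       (24, 4),
    "internal_logical_files":   (8, 10),
    "external_interface_files": (2, 7),
}

# Hypothetical answers to the 14 value-adjustment questions, each on the 0..5 scale.
fi = [4, 2, 0, 4, 3, 4, 5, 3, 4, 5, 2, 3, 4, 5]

count_total = sum(count * weight for count, weight in domain_values.values())
fp = count_total * (0.65 + 0.01 * sum(fi))
print(f"count total = {count_total}, FP = {fp:.2f}")
```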
4.2.2.2 Metrics for Specification Quality:
❖ Davis and his colleagues propose a list of characteristics that can be used to assess the quality
of the analysis model and the corresponding requirements specification:
i) Specificity (lack of ambiguity or uncertainty), ii) Completeness, iii) Correctness,
iv) Understandability, v) Verifiability, vi) Internal and external consistency, vii) Achievability,
viii) Concision, ix) Traceability, x) Modifiability, xi) Precision, and xii) Reusability.
❖ In addition, the authors note that high-quality specifications are electronically stored and
executable.
❖ Although many of these characteristics appear to be qualitative in nature, Davis suggests that
each can be represented using one or more metrics.
❖ For example, assume that there are nr requirements in a specification, such that
nr = nf + nrf
where nf is the number of functional requirements and nrf is the number of non-functional
requirements (e.g., performance).
❖ To determine the specificity (lack of uncertainty) of requirements, Davis et al. suggest a
metric that is based on the consistency of the reviewers' interpretation of each requirement:
Q1 = nui / nr
where nui is the number of requirements for which all reviewers had identical interpretations.
The closer the value of Q1 is to 1, the lower the ambiguity of the specification.
❖ The completeness of functional requirements can be determined by computing the ratio
Q2 = nu / [ni × ns]
where nu is the number of unique functional requirements, ni is the number of inputs, and
ns is the number of states specified.
The Q2 ratio measures the percentage of necessary functions that have been specified for a
system.
❖ To incorporate these into an overall metric for completeness, we must consider the degree
to which requirements have been validated:
Q3 = nc / [nc + nnv]
where nc is the number of requirements that have been validated as correct and
nnv is the number of requirements that have not yet been validated.
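❖ The three ratios above are simple enough to compute directly; the following Python sketch evaluates Q1, Q2, and Q3 for a hypothetical specification (all counts are assumptions).

```python
# Hedged sketch of the Davis et al. specification-quality ratios.
n_r  = 50   # total requirements (functional + non-functional)
n_ui = 42   # requirements with identical interpretation by all reviewers
n_u  = 30   # unique functional requirements
n_i  = 4    # inputs (stimuli) specified
n_s  = 10   # states specified
n_c  = 35   # requirements validated as correct
n_nv = 15   # requirements not yet validated

q1 = n_ui / n_r          # specificity: closer to 1 => less ambiguity
q2 = n_u / (n_i * n_s)   # completeness of functional requirements
q3 = n_c / (n_c + n_nv)  # degree to which requirements are validated
print(f"Q1 = {q1:.2f}, Q2 = {q2:.2f}, Q3 = {q3:.2f}")
```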
4.2.3 METRICS FOR THE DESIGN MODEL
❖ Various metrics for design model are
i) Architectural Design Metrics
ii) Metrics for Object Oriented Design
iii) Class Oriented Metrics – The CK Metrics Suite
iv) Class Oriented Metrics – The MOOD Metrics Suite
v) OO Metrics Proposed by Lorenz and Kidd
vi) Component Level Design Metrics
vii) Operation Oriented Metrics
viii) User Interface Design Metrics
1) Architectural Design Metrics:-
❖ Architectural design metrics focus on characteristics of the program architecture, with an
emphasis on the architectural structure and the effectiveness of modules.
❖ These metrics are "black box" in the sense that they do not require any knowledge of the inner
workings of a particular software component.
❖ Card and Glass define three software design complexity measures: i) Structural complexity,
ii) Data complexity and iii) System complexity.
❖ For hierarchical architectures (ex: call and return architectures) structural complexity of a
module i is defined in the following manner:
S(i) = fout(i)
where fout(i) is the fan-out of module i (fan-out is the number of modules immediately
subordinate to module i, i.e., directly invoked by i; fan-in is the number of modules that
directly invoke module i).
❖ Data complexity provides an indication of the complexity in the internal interface for a
module and is defined as
D(i) = v(i) / [fout(i) + 1]
where v(i) is the number of input and output variables that are passed to and from module i.
❖ System complexity is defined as the sum of structural and data complexity, specified as
C(i) = S(i) + D(i)
Note: As each of these complexity values increases, the overall architectural complexity of the
system also increases and integration and testing effort will also increase.
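❖ A minimal Python sketch of the Card and Glass measures follows, assuming a hypothetical call graph (fan-out counts) and interface sizes (the v(i) values); all numbers are illustrative.

```python
# Hedged sketch of structural, data, and system complexity per module.
fan_out = {"a": 3, "b": 2, "c": 0}   # modules directly invoked by each module
io_vars = {"a": 8, "b": 4, "c": 2}   # v(i): input/output variables per module

def structural(i):                    # S(i) = fan-out(i)
    return fan_out[i]

def data(i):                          # D(i) = v(i) / [fan-out(i) + 1]
    return io_vars[i] / (fan_out[i] + 1)

def system(i):                        # C(i) = S(i) + D(i)
    return structural(i) + data(i)

for m in fan_out:
    print(f"{m}: S={structural(m)}, D={data(m):.2f}, C={system(m):.2f}")
```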
❖ Fenton suggests a number of simple morphology (i.e., shape) metrics that enable different
program architectures to be compared using a set of straightforward dimensions.

Figure 4.3 Morphology metrics

❖ Referring to the call-and-return architecture in Figure 4.3, the following metrics can be
defined:
❖ size = n + a, where n is the number of nodes and a is the number of arcs.
❖ For the architecture shown in Figure 4.3, size = 17 + 18 = 35.
❖ depth = 4: the longest path from the root (top) node to a leaf node.
❖ width = 6: the maximum number of nodes at any one level of the architecture.
❖ arc-to-node ratio, r = a/n, measures the connectivity density of the architecture and may
provide a simple indication of the coupling of the architecture.
❖ For the architecture shown in Figure 4.3, r = 18/17 = 1.06.
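❖ The morphology metrics can be derived mechanically from the node and arc structure; a small Python sketch over a hypothetical (smaller) module tree follows.

```python
# Hedged sketch of the morphology metrics for a call-and-return tree.
from collections import Counter

tree = {                    # hypothetical architecture: node -> children
    "root": ["a", "b", "c"],
    "a": ["d", "e"], "b": [], "c": ["f"],
    "d": [], "e": [], "f": [],
}

n = len(tree)                                   # number of nodes
a = sum(len(kids) for kids in tree.values())    # number of arcs
size = n + a

def depth(node):                                # nodes on the longest root-to-leaf path
    return 1 + max((depth(k) for k in tree[node]), default=0)

def count_levels(node, level, counts):          # nodes per level, for width
    counts[level] += 1
    for k in tree[node]:
        count_levels(k, level + 1, counts)

counts = Counter()
count_levels("root", 0, counts)
print(f"size={size}, depth={depth('root')}, width={max(counts.values())}, r={a/n:.2f}")
```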
2) Metrics for Object-Oriented Design
❖ Whitmire describes nine distinct and measurable characteristics of an OO design: i) Size
ii) Complexity iii) Coupling iv) Sufficiency v) Completeness vi) Cohesion
vii) Primitiveness viii) Similarity ix) Volatility
i) Size:- Size is defined in terms of four views: population, volume, length, and functionality.
❖ Population is measured by taking a static count of OO entities such as classes or operations.
❖ Volume measures are identical to population measures but are collected dynamically, at a
given instant of time.
❖ Length is a measure of a chain of interconnected design elements (e.g., the depth of an
inheritance tree is a measure of length).
❖ Functionality metrics provide an indirect indication of the value delivered to the customer by
an OO application.
ii) Complexity:- It indicates how the classes of an OO design are interrelated to one another.
iii) Coupling:- The physical connections between elements of the OO design.
iv) Sufficiency:- The degree to which an abstraction possesses the features required of it. A
design component (e.g., a class) is sufficient if it fully reflects all properties of the application
object that it is abstracting.
v) Completeness:- Completeness asks: what properties are required to fully represent the
problem domain object?
❖ Whereas sufficiency compares the abstraction from the point of view of the current application
only, completeness considers multiple points of view.
vi) Cohesion:- An OO component should be designed in a manner that has all operations working
together to achieve a single, well-defined purpose. The cohesiveness of a class is determined by
examining the degree to which "the set of properties it possesses is part of the problem or design
domain".
vii) Primitiveness:- A characteristic that is similar to simplicity.
❖ Primitiveness is the degree to which an operation is atomic.
viii) Similarity:- The degree to which two or more classes are similar in terms of their structure,
function, behavior, or purpose is indicated by this measure.
ix) Volatility:- The likelihood that a change will occur.
3) Class-Oriented Metrics-The CK Metrics Suite:-
❖ The class is the fundamental unit of an OO system.
❖ Therefore, measures and metrics for an individual class, the class hierarchy, and class
collaborations will be considered.
❖ The characteristics of a class can be used as the basis for measurement.
❖ One of the most widely referenced sets of OO software metrics has been proposed by Chidamber
and Kemerer.
❖ Often referred to as the CK metrics suite, it contains six class-based design metrics for OO
systems.
1) Weighted methods per class (WMC):- Assume that n methods of complexity c1, c2,… cn are
defined for a class C.
❖ The specific complexity metric that is chosen should be normalized so that nominal complexity
for a method takes on a value of 1.0.
WMC = Σ ci, for i = 1 to n.
2) Depth of the inheritance tree (DIT):- This metric is "the maximum length from the node to
the root of the tree"

Figure 4.4 A Class hierarchy

❖ Referring to Figure 4.4, the value of DIT for the class-hierarchy shown is 4.
❖ As DIT grows, it is likely that lower-level classes will inherit many methods.
❖ This leads to potential difficulties when attempting to predict the behavior of a class.
❖ A deep class hierarchy (DIT is large) also leads to greater design complexity.
❖ On the positive side, large DIT values imply that many methods may be reused
3) Number of children (NOC):- The subclasses that are immediately subordinate to a class in
the class hierarchy are termed its children.
❖ Referring to Figure 4.4, class C2 has three children: subclasses C21, C22, and C23.
❖ As the number of children grows, reuse increases, but the abstraction represented by the
parent class can be diluted.
❖ As NOC increases, the amount of testing required will also increase.
4) Coupling between object classes (CBO):- As CBO increases, it is likely that the reusability
of a class will decrease.
❖ High values of CBO also complicate modifications and the testing.
❖ In general, the CBO values for each class should be kept as low as is reasonable.
5) Response for a class (RFC):- The response set of a class is "a set of methods that can
potentially be executed in response to a message received by an object of that class".
❖ RFC is the number of methods in the response set.
❖ As RFC increases, the effort required for testing also increases.
❖ As RFC increases, the overall design complexity of the class also increases.
6) Lack of cohesion in methods (LCOM):- LCOM is the number of methods that access one or
more of the same attributes.
❖ If no methods access the same attributes, then LCOM = 0.
❖ If LCOM is high, methods may be coupled to one another via attributes.
❖ This increases the complexity of the class design.
❖ It is desirable to keep cohesion high; that is, keep LCOM low.
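❖ As a rough illustration, the Python sketch below computes WMC (with nominal method complexity 1.0) and a simple LCOM count for a hypothetical class, following the informal LCOM definition given above; the class, its methods, and their attribute usage are all assumptions.

```python
# Hedged sketch: WMC with all c_i = 1.0, and LCOM counted as the number of
# methods that share at least one attribute with another method of the class.
class_methods = {            # hypothetical class: method -> attributes it accesses
    "open":  {"path", "mode"},
    "read":  {"path", "buffer"},
    "write": {"buffer"},
    "close": set(),
}

wmc = sum(1.0 for _ in class_methods)   # WMC = sum of method complexities

lcom = sum(
    1
    for name, attrs in class_methods.items()
    if any(attrs & other for o, other in class_methods.items() if o != name)
)
print(f"WMC = {wmc}, LCOM = {lcom}")
```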
4) Class-Oriented Metrics-The MOOD Metrics Suite
❖ Harrison, Counsell, and Nithi (HAR98) propose a set of metrics for object-oriented design
that provide quantitative indicators for OO design characteristics.
❖ A small sampling of MOOD metrics follows:

a) Method inheritance factor (MIF):- The degree to which the class architecture of an OO
system makes use of inheritance for both methods (operations) and attributes is defined as
MIF = Σ Mi(Ci) / Σ Ma(Ci)
where the summations occur over i = 1 to Tc; Tc is defined as the total number of classes in
the architecture; Ci is a class within the architecture; and
Ma(Ci) = Md(Ci) + Mi(Ci)
where Ma(Ci) = the number of methods that can be invoked in association with Ci,
Md(Ci) = the number of methods declared in the class Ci, and
Mi(Ci) = the number of methods inherited in Ci.
b) Coupling factor (CF):- The MOOD metrics suite defines coupling in the following way:
CF = Σi Σj is_client(Ci, Cj) / (Tc² − Tc)
where the summations occur over i = 1 to Tc and j = 1 to Tc, and
is_client(Ci, Cj) = 1, if and only if a relationship exists between the client class Ci and the
server class Cj, and Ci ≠ Cj
= 0, otherwise.
Tc is defined as the total number of classes in the architecture.
Note:- 1. As the value for CF increases, the complexity of the OO software will also increase.
2. Understandability, maintainability, and the potential for reuse may suffer as a result.
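❖ A small Python sketch of MIF and CF follows, using a hypothetical three-class architecture; the method counts and client/server relationships are assumptions for illustration.

```python
# Hedged sketch of the two MOOD metrics defined above.
declared  = {"A": 5, "B": 3, "C": 2}   # Md(Ci): methods declared in Ci
inherited = {"A": 0, "B": 5, "C": 8}   # Mi(Ci): methods inherited by Ci

# MIF = sum Mi(Ci) / sum Ma(Ci), with Ma(Ci) = Md(Ci) + Mi(Ci)
mif = sum(inherited.values()) / sum(declared[c] + inherited[c] for c in declared)

classes = list(declared)
is_client = {("A", "B"), ("B", "C")}   # (client, server) pairs, client != server
tc = len(classes)
cf = sum(1 for i in classes for j in classes
         if i != j and (i, j) in is_client) / (tc**2 - tc)

print(f"MIF = {mif:.2f}, CF = {cf:.2f}")
```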
5) OO Metrics Proposed by Lorenz and Kidd:
❖ Lorenz and Kidd divide class-based metrics into four broad categories: i) size,
ii) inheritance, iii) internals, and iv) externals.
i) Size-oriented metrics for an OO design class focus on counts of attributes and operations
for an individual class and average values for the OO system as a whole.
ii) Inheritance-based metrics focus on the manner in which operations are reused through
the class hierarchy.
iii) Metrics for class internals look at cohesion and code-oriented issues.
iv) External metrics examine coupling and reuse.
❖ A sampling of metrics proposed by Lorenz and Kidd follows:
1) Class size (CS):- The overall size of a class can be determined with the following measures:
a) The total number of operations (both inherited and private instance operations) that are
encapsulated within the class.
b) The number of attributes (both inherited and private instance attributes) that are encapsulated
by the class.
Note:- i) Large values for CS indicate that a class has too much responsibility; this reduces the
reusability of the class and complicates implementation and testing.
ii) Lower average values for CS indicate that classes within the system can be reused widely.
2) Number of operations added by a subclass (NOA):- Subclasses are specialized by adding
operations and attributes.
❖ As the value for NOA increases, the subclass drifts away from the abstraction implied by the
superclass.
❖ In general, as the depth of the class hierarchy increases (DIT becomes large), the value for NOA
at lower levels in the hierarchy should go down.
6) Component-Level Design Metrics:-
❖ Component-level design metrics for conventional software components focus on internal
characteristics of a software component.
❖ These include measures of the "three Cs": module cohesion, coupling, and complexity.
❖ These measures can help a software engineer to judge the quality of a component-level design.
❖ The metrics presented in this section are "glass box" in the sense that they require knowledge
of the inner working of the module.
i) Cohesion metrics: Bieman and Ott define a collection of metrics that provide an indication of the
cohesiveness of a module.
❖ The metrics are defined in terms of five concepts and measures:
1. Data slice:- A data slice is a backward walk through a module that looks for data values that
affect the state of the module when the walk began.
❖ It should be noted that both program slices (which focus on statements and conditions) and
data slices can be defined.
2. Data tokens:- The variables defined for a module are its data tokens.
3. Glue tokens:- This set of data tokens lies on one or more data slices.
4. Superglue tokens:- These data tokens are common to every data slice in a module.

5. Stickiness:- The relative stickiness of a glue token is directly proportional to the number of data
slices that it binds.
ii) Coupling metrics:- Module coupling provides an indication of the "connectedness" of a module
to other modules, global data, and the outside environment.
❖ Dhama has proposed a metric for module coupling that encompasses data and control flow
coupling, global coupling, and environmental coupling.
❖ The measures required to compute module coupling are defined in terms of each of the three
coupling types noted previously.
❖ For data and control flow coupling,
di = number of input data parameters
ci = number of input control parameters
do = number of output data parameters
co = number of output control parameters
❖ For global coupling,
gd = number of global variables used as data
gc = number of global variables used as control
❖ For environmental coupling,
w = number of modules called (fan-out)
r = number of modules calling the module under consideration (fan-in)
❖ Now a module coupling indicator, mc, is defined in the following way:
mc = k / M
where k is a proportionality constant and
M = di + (a × ci) + do + (b × co) + gd + (c × gc) + w + r
Values for k, a, b, and c must be derived empirically.
❖ As the value of mc increases, the overall module coupling decreases.
❖ In order to have the coupling metric move upward as the degree of coupling increases, a
revised coupling metric may be defined as C = 1 − mc.
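❖ The following Python sketch evaluates mc and the revised metric for one hypothetical module; the constants k, a, b, and c and all counts below are assumed values (in practice the constants must be derived empirically, as noted above).

```python
# Hedged sketch of the module-coupling indicator defined above.
k, a, b, c = 1.0, 2.0, 2.0, 2.0   # assumed proportionality constants

di, ci = 4, 1    # input data / input control parameters
do, co = 2, 1    # output data / output control parameters
gd, gc = 1, 0    # global variables used as data / as control
w, r   = 3, 2    # fan-out / fan-in

m  = di + (a * ci) + do + (b * co) + gd + (c * gc) + w + r
mc = k / m                   # higher mc => lower overall coupling
revised = 1 - mc             # rises as the degree of coupling rises
print(f"mc = {mc:.3f}, revised coupling = {revised:.3f}")
```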
iii) Complexity metrics:- Many complexity metrics are based on the flow graph.
❖ A graph is a representation composed of nodes and links (also called edges).
❖ When the links (edges) are directed, the flow graph is a directed graph.
❖ Complexity metrics can be used to predict critical information about reliability and
maintainability of software systems.
7) Operation-Oriented Metrics:- These metrics have been proposed for operations that reside
within a class.
❖ Three simple metrics proposed by Lorenz and Kidd are:
1) Average operation size (OSavg):- The number of messages sent by an operation can be
used as a measure of operation size.
❖ As the number of messages sent by a single operation increases, it will become difficult to
allocate responsibilities within a class.
2) Operation complexity (OC): The complexity of an operation can be computed using any of the
complexity metrics.
❖ To keep OC low, operations should be limited to a specific responsibility.
3) Average number of parameters per operation (NPavg):- If operations have a large number of
parameters, then the collaboration between objects will be more complex.
❖ Generally, NPavg should be kept as low as possible.
8) User Interface Design Metrics:
❖ A typical GUI uses layout entities (graphic icons, text, menus, windows) that assist the
user in completing tasks.
❖ To accomplish a given task using a GUI, the user must move from one layout entity to the
next.
❖ A cohesion metric for UI screens measures the relative connection of on-screen content to
other on-screen content.
❖ If the data presented on a screen belongs to a single major data object, then UI cohesion for
that screen is high.
❖ If many different types of data or content are presented and these data are related to different
data objects, UI cohesion is low.
4.2.4 METRICS FOR SOURCE CODE
❖ Halstead's "theory of software science" [HAL77] proposed the first analytical "laws" for
computer software.
❖ Halstead assigned quantitative laws to the development of computer software, using a set of
primitive measures that may be derived after code is generated or estimated once design is
complete. The measures are:
n1 = the number of distinct operators that appear in a program
n2 = the number of distinct operands that appear in a program
N1 = the total number of operator occurrences
N2 = the total number of operand occurrences
❖ Halstead uses these primitive measures to develop expressions for the overall program length,
the potential minimum volume for an algorithm, the actual volume (number of bits required to
specify a program), the program level (a measure of software complexity), the language level
(a constant for a given language), and other features such as development effort, development
time, and even the projected number of faults in the software.

❖ Halstead shows that the length N can be estimated as
N = n1 log2 n1 + n2 log2 n2
and the program volume may be defined as
V = N log2 (n1 + n2)
❖ It should be noted that V will vary with programming language and represents the volume of
information (in bits) required to specify a program.

"The human brain follows a more rigid set of rules [for developing algorithms] than it has been
aware of."

Maurice Halstead

❖ Theoretically, a minimum volume must exist for a particular algorithm. Halstead defines a
volume ratio L as the ratio of the volume of the most compact form of a program to the volume
of the actual program. In actuality, L must always be less than 1. In terms of primitive
measures, the volume ratio may be expressed as
L = (2/n1) × (n2/N2)
❖ Halstead's work is amenable to experimental verification, and a large body of research has been
conducted to investigate software science. For further information, see [ZUS90], [FEN91], and
[ZUS97].
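❖ To make the measures concrete, the Python sketch below computes n1, n2, N1, N2 and the derived length and volume for a toy token stream; the operator/operand split shown is an assumption for illustration only.

```python
# Hedged sketch of Halstead's primitive measures and derived quantities.
import math

operators = ["=", "+", "=", "*", "print"]        # every operator occurrence
operands  = ["x", "a", "b", "y", "x", "x", "y"]  # every operand occurrence

n1, n2 = len(set(operators)), len(set(operands))  # distinct operators/operands
N1, N2 = len(operators), len(operands)            # total occurrences

est_length = n1 * math.log2(n1) + n2 * math.log2(n2)  # N = n1 log2 n1 + n2 log2 n2
volume     = (N1 + N2) * math.log2(n1 + n2)           # V = N log2(n1 + n2)
vol_ratio  = (2 / n1) * (n2 / N2)                     # L, always less than 1

print(f"n1={n1}, n2={n2}, N1={N1}, N2={N2}")
print(f"estimated N = {est_length:.1f}, V = {volume:.1f} bits, L = {vol_ratio:.2f}")
```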

4.2.5 METRICS FOR TESTING

❖ Although much has been written on software metrics for testing (e.g., [HET93]), the majority
of metrics proposed focus on the process of testing, not the technical characteristics of the tests
themselves. In general, testers must rely on analysis, design, and code metrics to guide them in
the design and execution of test cases.
❖ Function-based metrics (Section 4.2.2.1) can be used as a predictor for overall testing effort.
Various project-level characteristics (e.g., testing effort and time, errors uncovered, number of
test cases produced) for past projects can be collected and correlated with the number of
function points produced by a project team. The team can then project "expected values" of
these characteristics for the current project.

❖ Architectural design metrics provide information on the ease or difficulty associated with
integration testing and the need for specialized testing software (e.g., stubs and drivers).
❖ Cyclomatic complexity (a component-level design metric) lies at the core of basis path testing,
a test-case design method. In addition, cyclomatic complexity can be used to target modules as
candidates for extensive unit testing. Modules with high cyclomatic complexity are more likely
to be error prone than modules whose cyclomatic complexity is lower. For this reason, the tester
should expend above-average effort to uncover errors in such modules before they are
integrated in a system.


4.2.5.1 Halstead Metrics Applied to Testing

❖ Testing effort can also be estimated using metrics derived from Halstead measures
(Section 4.2.4). Using the definitions for program volume V and program level PL, Halstead
effort e can be computed as
PL = 1 / [(n1/2) × (N2/n2)]
e = V / PL
❖ The percentage of overall testing effort to be allocated to a module k can be estimated using
the following relationship:
percentage of testing effort(k) = e(k) / Σ e(i)
where e(k) is computed for module k using the equations above, and the summation in the
denominator is the sum of Halstead effort across all modules of the system.
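❖ A minimal Python sketch of this allocation follows, using hypothetical per-module Halstead measures; the module names and all values are assumptions.

```python
# Hedged sketch: allocate testing effort across modules via Halstead effort.
modules = {                      # module: (n1, n2, N2, V) -- hypothetical values
    "parser":    (18, 30, 220, 4200.0),
    "scheduler": (12, 20, 110, 1800.0),
    "ui":        (10, 15,  60,  900.0),
}

def effort(n1, n2, N2, V):
    pl = 1.0 / ((n1 / 2.0) * (N2 / n2))   # program level PL
    return V / pl                          # e = V / PL

e = {name: effort(*vals) for name, vals in modules.items()}
total = sum(e.values())
for name in modules:
    print(f"{name}: {100 * e[name] / total:.1f}% of testing effort")
```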

4.2.5.2 Metrics for Object-Oriented Testing

❖ The OO design metrics noted in Section 4.2.3 provide an indication of design quality. They
also provide a general indication of the amount of testing effort required to exercise an OO
system.

❖ Binder [BIN94] suggests a broad array of design metrics that have a direct influence on the
testability of an OO system. The metrics consider aspects of encapsulation and inheritance. A
sampling follows:
i) Lack of cohesion in methods (LCOM):- The higher the value of LCOM, the more states must
be tested to ensure that methods do not generate side effects.
ii) Percent public and protected (PAP):- This metric indicates the percentage of class attributes
that are public or protected. High values for PAP increase the likelihood of side effects among
classes because public and protected attributes lead to high potential for coupling. Tests must
be designed to ensure that such side effects are uncovered.
iii) Public access to data members (PAD):- This metric indicates the number of classes (or
methods) that can access another class's attributes, a violation of encapsulation. High values
for PAD lead to the potential for side effects among classes. Tests must be designed to ensure
that such side effects are uncovered.
iv) Number of root classes (NOR):- This metric is a count of the distinct class hierarchies that
are described in the design model. Test suites for each root class and the corresponding class
hierarchy must be developed. As NOR increases, testing effort also increases.
v) Fan-in (FIN):- When used in the OO context, fan-in for the inheritance hierarchy is an
indication of multiple inheritance. FIN > 1 indicates that a class inherits its attributes and
operations from more than one root class. FIN > 1 should be avoided when possible.
vi) Number of children (NOC) and depth of the inheritance tree (DIT):- Superclass methods
will have to be retested for each subclass.

4.2.6 METRICS FOR MAINTENANCE

❖ All of the software metrics introduced in this unit can be used for the development of new
software and the maintenance of existing software. However, metrics designed explicitly for
maintenance activities have been proposed.
❖ IEEE Std. 982.1-1988 [IEE94] suggests a software maturity index (SMI) that provides an
indication of the stability of a software product (based on changes that occur for each release
of the product). The following information is determined:
MT = the number of modules in the current release
Fc = the number of modules in the current release that have been changed
Fa = the number of modules in the current release that have been added
Fd = the number of modules from the preceding release that were deleted in the current release
❖ The software maturity index is computed in the following manner:
SMI = [MT − (Fa + Fc + Fd)] / MT
❖ As SMI approaches 1.0, the product begins to stabilize. SMI may also be used as a metric for
planning software maintenance activities. The mean time to produce a release of a software
product can be correlated with SMI, and empirical models for maintenance effort can be
developed.
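❖ Computing SMI is straightforward; a short Python sketch with hypothetical release data follows.

```python
# Hedged sketch of the software maturity index (SMI); all counts are hypothetical.
mt = 940   # MT: modules in the current release
fc = 90    # Fc: modules changed in the current release
fa = 40    # Fa: modules added in the current release
fd = 12    # Fd: modules from the preceding release deleted in this one

smi = (mt - (fa + fc + fd)) / mt
print(f"SMI = {smi:.2f}")   # approaches 1.0 as the product stabilizes
```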

