
Chp-6-Software Quality Metrics Methodology

6.1 Establish quality requirements


6.2 Identify Software quality metrics
6.3 Implement the software quality metrics
6.4 Analyze software metrics results
6.5 Validate the software quality metrics
6.6 Software quality indicators
6.7 Fundamentals in Measurement theory
=========================================================================
6.1 Establish quality requirements
 What group is empowered to define software quality requirements?
 How should customers provide input?
 How are requirements conflicts resolved?
The first step in the evaluation process is to establish the requirements of the evaluation.
Task 1.1: Establish the purpose of the evaluation
The goal of this task is to document the purpose for which the organization wants to evaluate
the quality of the software product (decide on the acceptance of the product, decide when to
release the product, compare the product with competitive products, select a product from
among alternative products, etc.).
Task 1.2: Obtain the software product quality requirements
The goal of this task is to identify the stakeholders of the software product (developer,
acquirer, independent evaluator, user, maintainer, supplier, etc.) and to specify the software
product quality requirements using a quality model.
Task 1.3: Identify product parts to be included in the evaluation
All product parts to be included in the evaluation shall be identified and documented. The type
of software product to be evaluated (e.g. requirements specification, design diagrams and test
documentation) depends on the stage in the life cycle and the purpose of the evaluation.
Task 1.4: Define the stringency of the evaluation
The evaluation stringency shall be defined in order to provide confidence in the software
product quality according to its intended use and the purpose of the evaluation. The evaluation
stringency should establish expected evaluation levels, which define the evaluation techniques
to be applied and the evaluation results to be achieved.
REQUIREMENTS TRACEABILITY MATRIX

Project Name: <optional>
National Center: <required>
Project Manager Name: <required>
Project Description: <required>

Columns: ID | Assoc ID | Technical Assumption(s) and/or Customer Need(s) | Functional Requirement | Status | Architectural/Design Document | Technical Specification | System Component(s) | Software Module(s) | Implemented In | Test Case Number | Tested In | Verification | Additional Comments

ID    Assoc ID
001   1.1.1
002   2.2.2
003   3.3.3
004   4.4.4
(the remaining columns are filled in per project)

[email protected] 9850979655 1
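A traceability matrix like the one above can also be kept in machine-readable form so that gaps are easy to query. A minimal sketch; the field names and helper function are illustrative, and only the ID / Assoc ID values come from the template:

```python
# Minimal sketch of a requirements traceability matrix (RTM) as records.
# Only "id" and "assoc_id" values come from the template above; the other
# fields and values are illustrative placeholders.
rtm = [
    {"id": "001", "assoc_id": "1.1.1", "test_case": "TC-01", "verified": False},
    {"id": "002", "assoc_id": "2.2.2", "test_case": "TC-02", "verified": True},
]

def unverified_requirements(matrix):
    """Return IDs of requirements with no verified test case yet."""
    return [row["id"] for row in matrix if not row["verified"]]
```

A query like this is typically how the matrix is used during test planning: any ID it returns still needs verification evidence.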

6.2 Identify Software quality metrics


“The Future of digital systems is complexity, and complexity is the worst enemy of security.”
Bruce Schneier, Crypto-Gram Newsletter, March 2000

 Specify important quality factors and subfactors


 Identify direct metrics
 Name
 Costs
 Target value
 Tools
 Application
 Data items
 Computation

Item: Description

Name: Number of defects detected in selected modules
Costs: Minimal; data can be obtained from a bug-tracking tool
Target Value: 5
Tools: Spreadsheet
Application: The metric is used for relative comparison to values obtained for other modules
Data Items: Count of defects detected at code inspections
Computation: Sum the number of defects reported against specific modules
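The direct metric defined above (data items: defect counts from inspections; computation: a sum per module) can be sketched directly in code. A minimal illustration with a hypothetical defect log:

```python
# Sketch of the direct metric above: number of defects detected in
# selected modules. The defect log below is illustrative data.
defect_log = [
    {"module": "parser", "defects": 3},
    {"module": "parser", "defects": 4},
    {"module": "ui",     "defects": 1},
]

def defects_per_module(log):
    """Computation step of the metric: sum defects reported per module."""
    totals = {}
    for record in log:
        totals[record["module"]] = totals.get(record["module"], 0) + record["defects"]
    return totals
```

With a target value of 5, a module whose total exceeds the target would be flagged for relative comparison against the other modules.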

Software quality is a multidimensional concept. The multiple professional views of product quality
may be very different from popular or nonspecialist views. Moreover, they have levels of abstraction
beyond even the viewpoints of the developer or user. Crosby, among many others, has defined
software quality as conformance to specification [2]. However, very few end users will agree that a
program that perfectly implements a flawed specification is a quality product. Of course, when we talk
about software architecture, we are talking about a design stage well upstream from the program's
specification. Years ago, Juran [3] proposed a generic definition of quality. He said products must
possess multiple elements of fitness for use. Two of his parameters of interest for software products
were quality of design and quality of conformance. These separate design from implementation and
may even accommodate the differing viewpoints of developer and user in each area.
Two leading firms that have placed a great deal of importance on software quality are IBM and
Hewlett-Packard. IBM measures user satisfaction in eight dimensions for quality as well as overall
user satisfaction: capability or functionality, usability, performance, reliability, installability,
maintainability, documentation, and availability (see Table 3.1). Some of these factors conflict with
each other, and some support each other. For example, usability and performance may conflict, as may
reliability and capability or performance and capability. IBM has user evaluations down to a science.
We recently participated in an IBM Middleware product study of only the usability dimension. It was
five pages of questions plus a two-hour interview with a specialist consultant. Similarly, Hewlett-
Packard uses five Juran quality parameters: functionality, usability, reliability, performance, and
serviceability. Other computer and software vendor firms may use more or fewer quality parameters
and may even weight them differently for different kinds of software or for the same software in
different vertical markets. Some firms focus on process quality rather than product quality. Although it
is true that a flawed process is unlikely to produce a quality product, our focus here is entirely on
software product quality, from architectural conception to end use.
IEEE Metric Set Description Paradigm

6.3 Implement the software quality metrics
• Many software developers do not collect measures.
• Without measurement it is impossible to tell whether a process is improving or not.
• Baseline metrics data should be collected from a large, representative sampling of past
software projects.

Item: Description

Name: Name given to a data item
Metrics: Metrics associated with the data item
Definition: Straightforward description of the data item
Source: Location where the data originates
Procedures: Procedures (manual or automated) for collecting the data
Representation: Manner in which the data is represented, for example precision, format, units, etc.
Storage: Location where the data is stored

Getting this historical project data is very difficult if the previous developers did not collect data in an
ongoing manner.
• It is important to determine whether the metrics collected are statistically valid and not the
result of noise in the data.
• Control charts provide a means for determining whether changes in the metrics data are
meaningful or not.
• Zone rules identify conditions that indicate out of control processes (expressed as distance
from mean in standard deviation units).
• Most software organizations have fewer than 20 software engineers.
• Best advice is to choose simple metrics that provide value to the organization and don’t require
a lot of effort to collect.
• Even small groups can expect a significant return on the investment required to collect metrics,
if this activity leads to process improvement.
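The control-chart idea above can be sketched with 3-sigma limits computed from baseline data. The zone rule shown here, a point more than three standard deviations from the mean, is the simplest of the out-of-control conditions; the baseline values are illustrative:

```python
# Sketch of a control-chart check: flag metric samples outside the
# 3-sigma control limits computed from a baseline of past samples.
# (A point beyond 3 sigma is the simplest "zone rule" for an
# out-of-control process.)
def control_limits(baseline):
    n = len(baseline)
    mean = sum(baseline) / n
    variance = sum((x - mean) ** 2 for x in baseline) / n
    sigma = variance ** 0.5
    return mean - 3 * sigma, mean + 3 * sigma

def out_of_control(baseline, new_samples):
    """Return new samples that fall outside the baseline control limits."""
    lcl, ucl = control_limits(baseline)
    return [x for x in new_samples if x < lcl or x > ucl]
```

Samples inside the limits are treated as ordinary noise; only samples the check returns warrant further analysis as meaningful changes.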
Establishing Software Quality Metrics
Method-1
• Identify business goal
• Identify what you want to know
• Identify subgoals
• Identify subgoal entities and attributes
• Formalize measurement goals
• Identify quantifiable questions and indicators related to subgoals
Method-2

• Identify data elements needed to be collected to construct the indicators
• Define measures to be used and create operational definitions for them
• Identify actions needed to implement the measures
• Prepare a plan to implement the measures
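Both methods above walk from a goal down to the data elements that must be collected. A minimal sketch of that chain as a data structure; the goal, question, and data-element texts are illustrative, not from the source:

```python
# Sketch of a goal -> question -> indicator -> data-element chain,
# following the two methods above. All the text values are illustrative.
measurement_plan = {
    "goal": "Improve delivered quality",
    "subgoals": [
        {
            "question": "Are fewer defects escaping to customers?",
            "indicator": "post-release defect trend",
            "data_elements": ["defects found in test",
                              "defects reported by customers"],
        }
    ],
}

def data_elements_needed(plan):
    """Collect every data element required to construct the indicators."""
    elements = []
    for subgoal in plan["subgoals"]:
        elements.extend(subgoal["data_elements"])
    return elements
```

Listing the data elements this way is the bridge between Method-1 (goals and questions) and Method-2 (measures and their operational definitions).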

6.4 Analyze software metrics results

 Results need to be analyzed within the context of the project’s overall software quality
requirements
 Any metrics that fall outside of their respective targets should be identified for further
analysis

Level 1 (Properties): Reliability | Complexity | Usability
Level 2 (Quantities: metrics/criteria): Mean time to failure | Information flow between modules | Time taken to learn how to use
Level 3 (Realization of metrics): Run and count crashes per hour | Count procedure calls | Minutes taken for some user task
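The Level-3 realization for reliability above (run and count crashes per hour) amounts to estimating mean time to failure from recorded crash times. A minimal sketch; the timestamps are illustrative:

```python
# Sketch of the Level-3 realization of the reliability metric above:
# record crash times during a run and estimate mean time to failure
# (MTTF) as the average interval between successive crashes.
def mean_time_to_failure(crash_times_hours):
    """crash_times_hours: crash timestamps in hours from run start."""
    intervals = [b - a for a, b in zip(crash_times_hours, crash_times_hours[1:])]
    return sum(intervals) / len(intervals)
```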
• Formulation
– The derivation (i.e., identification) of software measures and metrics appropriate for the
representation of the software that is being considered
• Collection
– The mechanism used to accumulate data required to derive the formulated metrics
• Analysis
– The computation of metrics and the application of mathematical tools
• Interpretation
– The evaluation of metrics in an effort to gain insight into the quality of the
representation
• Feedback

– Recommendations derived from the interpretation of product metrics and passed on to
the software development team
– A metric should have desirable mathematical properties
– It should have a meaningful range (e.g., zero to ten)
– It should not be computed on a ratio scale if it is composed of components measured on an
ordinal scale
– If a metric represents a software characteristic that increases when positive traits occur
or decreases when undesirable traits are encountered, the value of the metric should
increase or decrease in the same manner
– Each metric should be validated empirically in a wide variety of contexts before being
published or used to make decisions
– It should measure the factor of interest independently of other factors
– It should scale up to large systems
– It should work in a variety of programming languages and system domains
– Whenever possible, data collection and analysis should be automated
– Valid statistical techniques should be applied to establish relationships between internal
product attributes and external quality characteristics
– Interpretative guidelines and recommendations should be established for each metric
• Statement and branch coverage metrics: lead to the design of test cases that provide program coverage
• Defect-related metrics: focus on defects (i.e., bugs) found, rather than on the tests themselves
• Testing effectiveness metrics: provide a real-time indication of the effectiveness of tests that have been conducted
• In-process metrics: process-related metrics that can be determined as testing is conducted
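Statement coverage, the first of the metric groups above, can be sketched as the fraction of executable statements actually exercised by the tests. The line numbers below are illustrative:

```python
# Sketch of a statement-coverage metric: the fraction of executable
# statements exercised by the test runs. Line numbers are illustrative.
def statement_coverage(executable_lines, executed_lines):
    """Both arguments are sets of source line numbers."""
    covered = executable_lines & executed_lines
    return len(covered) / len(executable_lines)
```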

6.5 Validate the software quality metrics


 Assess the statistical significance of the metrics to the quality factors they represent
 See the IEEE Standard 1061-1998 for a thorough description of this process
 Complexity Metrics
 The McCabe Cyclomatic Complexity Metric
 Halstead’s Software Science Complexity Metric
 Defect Metrics
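McCabe's cyclomatic complexity, named above, is computed from the control-flow graph of a module as V(G) = E - N + 2P, where E is the number of edges, N the number of nodes, and P the number of connected components. A minimal sketch:

```python
# Sketch of McCabe's cyclomatic complexity from a control-flow graph:
# V(G) = E - N + 2P (E edges, N nodes, P connected components).
def cyclomatic_complexity(edges, nodes, components=1):
    return edges - nodes + 2 * components
```

For example, a single if/else has a graph with 4 nodes and 4 edges, giving V(G) = 2, matching its two independent paths.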

 Product Metrics

 Number and type of defects found during requirements, design, code, and test inspections
 Number of pages of documentation delivered
 Number of new source lines of code created
 Number of source lines of code delivered
 Total number of source lines of code delivered
 Average complexity of all modules delivered
 Average size of modules
 Total number of modules
 Total number of bugs found as a result of unit testing
 Total number of bugs found as a result of integration testing
 Total number of bugs found as a result of validation testing
 Productivity, as measured by KLOC per person-hour

 Process Metrics

 Average find-fix cycle time


 Number of person-hours per inspection
 Number of person-hours per KLOC
 Average number of defects found per inspection
 Number of defects found during inspections in each defect category
 Average amount of rework time
 Percentage of modules that were inspected
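Two of the process metrics above can be sketched from inspection records. The record fields and values are illustrative:

```python
# Sketch of two process metrics from the list above, computed from
# inspection records. The record fields and values are illustrative.
inspections = [
    {"person_hours": 6, "defects_found": 9},
    {"person_hours": 4, "defects_found": 3},
]

def avg_defects_per_inspection(records):
    return sum(r["defects_found"] for r in records) / len(records)

def person_hours_per_inspection(records):
    return sum(r["person_hours"] for r in records) / len(records)
```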
Attributes of a Measurement Program
 The measures should be robust
 The measures should suggest a norm
 The measures should relate to specific product and process properties
 The measures should suggest an improvement strategy
 The measures should be a natural result of the software development process
 The measures should be simple
 The measures should be predictable and trackable
 The measures should not be used as part of a person’s performance evaluation

Template for Software Quality Goal

 Purpose: To (characterize, evaluate, predict, monitor, etc.) the (process, product, model,
metric, etc.) in order to (understand, plan, assess, manage, control, engineer, learn,
improve, etc.) it.
Example: To evaluate the maintenance process in order to improve it.
 Perspective: Examine the (cost, effectiveness, correctness, defects, changes, product
measures, etc.) from the viewpoint of the (developer, manager, customer, etc.)
Example: Examine the effectiveness from the viewpoint of the customer
 Environment: The environment consists of the following: process factors, people factors,
methods, tools, constraints, etc.
Example: The maintenance staff are poorly motivated programmers who have limited access
to tools.

Goal: Evaluate
Questions: How fast are fixes to customer-reported problems made? What is the quality of fixes delivered?
Metrics: Average effort to fix a problem; percentage of incorrect fixes

In software measurement, each metric must itself be validated. There are two types of
validation for software metrics: "theoretical" and "empirical". These two types
of validation are shown in Figure 1.

The structural measurement model defines validations for software measurement. Two
basic methods for establishing the validity of measures are given in measurement models. Theoretical
validation confirms that the measurement does not violate any necessary properties of the
elements of measurement. Empirical validation confirms that measured values of attributes
are consistent with values predicted by models involving the attribute. Theoretical methods of
validation allow valid measurements with respect to certain defined criteria, while empirical
methods provide corroborating evidence of validity or invalidity [24, 30, 42].
These two validation methods fall under internal and external validation. Internal
validation is a theoretical methodology that ensures that the metric is a proper numerical
characterization of the property it claims to measure; demonstrating that a metric measures what
it claims to measure is the purpose of this internal validation.
The idea for the validation is to use static and dynamic (metric) analyses applied to the version history
of particular software systems, together with additional information sources such as bug databases, and even
human insights.
To avoid confusion, we distinguish model metrics from validation metrics. The former are automated
metrics in the new quality model mapped to sub-characteristics. The latter are metrics assessing the
(sub-)characteristics directly, but with much higher effort, e.g., with dynamic analyses or human
involvement, or a posteriori, i.e., by looking backward in the project history.

6.6 Software quality indicators


A Software Quality Indicator can be calculated to provide an indication of the quality of the system by
assessing system characteristics.
Assemble a quality indicator from factors that can be determined automatically with commercial or
custom code scanners, such as the following:
 cyclomatic complexity of code (e.g., McCabe's Complexity Measure),
 unused/unreferenced code segments (these should be eliminated over time),
 average number of application calls per module (complexity is directly proportional to the
number of calls),
 size of compilation units (reasonably sized units have approximately 20 functions (or
paragraphs), or about 2000 lines of code; these guidelines will vary greatly by environment),
 use of structured programming constructs (e.g., elimination of GOTOs, and circular procedure
calls).
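One way to assemble such an indicator is a weighted score over scanner outputs. A sketch in which the weights and penalty values are illustrative assumptions; the size guidelines (~20 functions, ~2000 lines per unit) come from the list above, and the complexity threshold of 10 is McCabe's commonly cited limit, not something stated here:

```python
# Sketch of a per-module quality indicator assembled from code-scanner
# outputs, following the factors listed above. The weights and penalty
# values are illustrative assumptions, not a prescribed formula.
def module_quality_indicator(cyclomatic, unused_segments, functions, lines):
    score = 100
    if cyclomatic > 10:           # McCabe's commonly cited threshold (assumption)
        score -= 25
    score -= 5 * unused_segments  # unused code should be eliminated over time
    if functions > 20:            # guideline: ~20 functions per compilation unit
        score -= 15
    if lines > 2000:              # guideline: ~2000 lines per compilation unit
        score -= 15
    return max(score, 0)
```

As the text notes, such thresholds vary greatly by environment, so in practice the cutoffs would be calibrated per project rather than fixed.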
These measures apply to traditional 3GL environments and are more difficult to determine in
environments using object-oriented languages, 4GLs, or code generators.
With existing software, the Software Quality Indicator could also include a measure of the reliability
of the code. This can be determined by keeping a record of how many times each module has to be
fixed in a given time period.
There are other factors which contribute to the quality of a system such as:
 procedure re-use,
 clarity of code and documentation,
 consistency in the application of naming conventions,
 adherence to standards,
 consistency between documentation and code,
 the presence of current unit test plans.
These factors are harder to determine automatically. However, with the introduction of CASE tools
and reverse-engineering tools, and as more of the design and documentation of a system is maintained
in structured repositories, these measures of quality will be easier to determine, and could be added to
the indicator.

Quality Measures and Indicators Prescribed by Software Testing Experts

Software development and testing experts rightly declare, "We cannot control what we cannot measure."
Thus, in order to successfully measure the quality of any software application, we first need to
understand the key differences between the elements of a chain of three interconnected quality terms:
measures, metrics, and indicators.

Quality Measure
A measure is to ascertain or appraise by comparing to a standard. A standard or unit of measurement
covers: the extent, dimensions, or capacity of anything, especially as determined by a standard; and an
act or process of measuring, or a result of measurement. A measure gives very little or no information
in the absence of a trend to follow or an expected value to compare against, and by itself does not
provide enough information to make meaningful decisions.

Quality Metric
A metric is a quantitative measure of the degree to which a system, component, or process possesses a
given attribute. It is a calculated or composite indicator based upon two or more measures; that is, a
metric is a comparison of two or more measures, such as defects per thousand source lines of code.
Software quality metrics are used throughout the development cycle to assess whether software quality
requirements are being met.

Quality Indicator
An indicator is a device or variable which can be set to a prescribed state based on the results of a
process or the occurrence of a specified condition. An indicator usually compares a metric with a
baseline or expected result. Indicators help decision-makers make a quick comparison that can provide
a perspective on the "health" of a particular aspect of the project. Software quality indicators act as a
set of tools to improve the management capabilities of personnel responsible for monitoring software
development projects.

The software quality indicators address management concerns, take advantage of data that is already
being collected, are independent of the software development methodology being used, are specific to
phases in the development cycle, and provide information on the status of a project.
Software Testing experts like ISTQB advanced certified Test Managers prescribe following quality
indicators for use during the software testing & development life cycle.
1) Progress: Measures the amount of work accomplished by the developer in each phase. This
measure flows through the development life cycle: the number of requirements defined and
baselined, then the amount of preliminary and detailed design completed, then the amount of code
completed, and the various levels of tests completed.

2) Stability: Assesses whether the products of each phase are sufficiently stable to allow the next
phase to proceed. This measures the number of changes to requirements, design, and implementation.
3) Process compliance: Measures the developer's compliance with the development procedures
approved at the beginning of the project. Captures the number of procedures identified for use on the
project versus those complied with on the project.
4) Quality evaluation effort: Measures the percentage of the developer's effort that is being spent on
internal quality evaluation activities. Percent of time developers are required to deal with quality
evaluations and related corrective actions.
5) Test coverage: Measures the amount of the software system covered by the developer's testing
process. For module testing, this counts the number of basis paths executed/covered, & for system
testing it measures the percentage of functions tested.
6) Defect detection efficiency: Measures how many of the defects detectable in a phase were actually
discovered during that phase. Starts at 100% and is reduced as defects are uncovered at a later
development phase.
7) Defect removal rate: Measures the number of defects detected and resolved over a period of time.
Number of opened and closed system problem reports (SPR) reported through the development
phases.
8) Defect age profile: Measures the number of defects that have remained unresolved for a long
period of time. Monthly reporting of SPRs remaining open for more than a month's time.
9) Defect density: Detects defect-prone components of the system. Provides measure of SPRs /
Computer Software Component (CSC) to determine which is the most defect-prone CSC.
10) Complexity: Measures the complexity of the code. Collects basis path counts (cyclomatic
complexity) of code modules to determine how complex each module is.
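Two of the ten indicators above reduce to simple ratios. A sketch with illustrative numbers:

```python
# Sketch of two of the indicators above: defect detection efficiency
# (indicator 6) and defect density (indicator 9). Input values are
# illustrative.
def defect_detection_efficiency(found_in_phase, found_later):
    """Starts at 100% and drops as defects surface in later phases."""
    total = found_in_phase + found_later
    return 100.0 * found_in_phase / total

def defect_density(spr_count, csc_size_kloc):
    """System problem reports per KLOC of a computer software component,
    used to find the most defect-prone CSC."""
    return spr_count / csc_size_kloc
```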

6.7 Fundamentals in Measurement theory


The goal of the theory of measurement is to allow the safe acquisition and reproducibility of measured
characteristics. One must show the necessary conditions for the cognitive requirements that make
scientifically relevant measurement predictions possible. After all, one has to put an end to the
unacceptable situation that the science of measurement upon which we rely is not always clear about
the basis and content of measurement predictions. The time is ripe for a presentation of the
rational fundamentals of metrology, in order to allow scientists to make judgements free from intuitive
principles, plausible facts, or authorities. Only by possessing a rational theory can one know the reason
why one knows and how sound our knowledge is.
1. The irrevocable cognitive starting position

 1.1. Criteria are settings of the logic and are, therefore, logical. Criteria are neither true, nor
untrue. Only their degree of logic is open to discussions.
 1.2. Characteristics are criteria transferred upon objects, in order to appropriate them mentally.
They are also neither true, nor untrue. It is also crucial, how logical and how suitable they are,
in order to handle the respective objects, both mentally and practically, in the desired way.
2. The object of the theory
 2.1. The objects of the theory are (physical) magnitudes/quantities.
 2.1.1. Physical magnitudes/quantities are quantitative features.
 2.2. In order to designate a quantity, the human intellect has first to acquire a notion (concept)
of the respective quantity. The concept, therefore, will depend first of all on human perception
and ability of knowing and second, on interests.
Clarification:
The concept of heat presupposes a sensitivity to heat, just like the concept of temporality relies upon
memory (i.e. the ability to remember). Duration is thus a quantity perceptible in a temporal
observation of things. Heat and duration are aspects of things perceived by us, while manipulating
them. These aspects do not allow us to conclude that things can exist as objects outside human
observation. The latter is, however, insignificant for the treatment of quantities and for the theory of
measurement. We are saying today that heat is an aspect of the molecular motion within a body, or a
system (for example, a gas). The concept of heat retains, nevertheless, its meaning. The same applies
to the quantity "mass": it is no thing in itself, but rather an aspect of a thing, namely the measure of its
mechanical resistance during interactions. The quantity "velocity", on the other hand, depends really
on the chosen distance, i.e. on the initial and final points of a path. The selected section of the path has
to be always specified, in order to convey reconstructable knowledge.
3. The method
 3.1. Knowledge about magnitudes/quantities is acquired through measurement.
 3.2. The method (mean) of measurement is a comparison.
 3.3. The aids of measurement are the standards and the scales. The units are not measured, but
rather fixed by definition.
Clarification:
A basic pattern of recognition is the comparison. To measure means to compare quantities. A
multiplicative comparison means comparing an unknown dimension with a known one, i.e. with a
unit, with the help of aids (scales). In this way, the unknown dimension is expressed as a
multiple, or as a fraction, of the defined unit. The quantity so obtained is a number. The concept of
measurement therefore involves two cognitively different measures, one unknown and one known, as
well as their comparison by a measurer (or a measuring device). The result is knowledge. This holds

for relative comparison - as in the case of hardness - where each time the harder material is chosen as a
reference.
4. Implementation and significance of the theory
 4.1. The unit of a measurable quantity (the comparison factor = measurement unit) has to be
appropriately defined and implemented through conventions.
 4.2. The quality of a definition of measurement units is given by the degree of mathematical
representation and by the accuracy, reliability and constancy of its realizations.
 4.2.1. Measurement units are not a question of truth, but rather of usefulness and of validity.
There is no way to determine the "real" magnitude of a measurement unit. One can, however,
for the sake of usefulness, choose the smallest unit, or the most significant phenomenon related
to the magnitude of interest, as the basis for a scale.
 4.3. The irrevocability of our level of knowledge makes the above procedure for valid
measurements compulsory. Theories and statements contradicting the necessary procedure are
false and have to be rejected. A theory of measurement based on the cognitive status and the
corresponding metrology is an unconditional basis for measurement statements made by any
(natural) science.
Clarification:
Without constant, permanent and everywhere-valid standards, no useful quantitative knowledge is
possible. In their absence, even the velocity of some "motion" cannot be responsibly judged. In the
relativistic physics of "moving systems" it is the "measurement" ("rests, or moves with some
velocity") which has to decide in which system the "true" standards are, a judgement which would
itself presuppose the availability of valid standards. Quite independently, one can anyway perform
measurements only in the system, under one's own conditions, in which one finds oneself at the moment
(even if signals from other systems are involved) and in which valid standards are available. The
"resting system" is, as a rule, always the one in which one finds oneself, provided one has not
deliberately chosen another reference system. A system at absolute rest with "true = resting" standards
could not exist anyhow, since for dynamical reasons all cosmological objects and systems move
relative to their centers of mass, these move in turn around further centers of mass, and so on; otherwise a
general collapse would have occurred long ago. Once these arguments are understood and taken seriously,
no freedom is left for "relativistic problems". Moreover, in a purely kinematic
analysis of "moving systems", the question of which one is "moving" and which "resting", in the absence
of objective, intrinsic differentiating labels, is a matter of habits of observation rather than a truth. It
is therefore important to think about the role of the observer, if one wishes to avoid chasing a
phantom and self-ridicule.

Conclusion: The quality indicators discussed above have certain characteristics related to quality
measures. Accordingly, software testing experts such as Test Managers have drawn the following conclusions.
1) Quality measures must be oriented toward management goals. One need not have extensive
familiarity with technical details of the project.
2) Quality measures should reveal problems as they develop and suggest corrective actions that
could be taken.
3) Quality measures must be easy to use. They must not be excessively time consuming, nor
depend heavily on extensive software training or experience. Measures that are clearly
specified, easy to calculate, and straightforward to interpret are needed.
4) Quality measures must be adequately flexible.
5) We must not use quality measurement solely for the purpose of quality assurance; software
testing managers and engineers must move beyond the narrow notions they may hold about
the constituents of quality.
