Software Product Quality Metrics


3. Software product quality metrics

The quality of a product:
- the "totality of characteristics that bear on its ability to satisfy stated or implied needs"

Metrics of the external quality attributes:

 producer's perspective: "conformance to requirements"
 customer's perspective: "fitness for use" - meeting the customer's expectations

Some of the external attributes are hard to measure directly: usability, maintainability, installability.

Two levels of software product quality metrics:

 Intrinsic product quality
 Customer oriented metrics:
   - overall satisfaction
   - satisfaction with specific attributes: capability (functionality), usability, performance, reliability, maintainability, others.
Intrinsic product quality metrics:

 Reliability: the number of hours the software can run before a failure
 Defect density (rate): the number of defects contained in the software, relative to its size.

Customer oriented metrics:

 Customer problems
 Customer satisfaction
3.1. Intrinsic product quality metrics

Reliability --- Defect density

• Correlated, but different!
• Both are predicted values.
• Estimated using static and dynamic models.

Defect: an anomaly in the product (a "bug")

Software failure: an execution whose effect does not conform to the software specification


3.1.1. Reliability

Software Reliability:
The probability that a program will perform its specified function, for a stated time period, under specified conditions.

Usually estimated during system tests, using statistical tests based on the software usage profile.
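A hedged illustration, not part of these notes: under the common simplifying assumption of a constant failure rate λ, reliability over execution time t follows the exponential model

R(t) = e^(-λt),  with MTTF = 1/λ

For example, a failure rate of λ = 0.001 failures/hour gives MTTF = 1000 hours and R(100) = e^(-0.1) ≈ 0.905, i.e. a 90.5% chance of running 100 hours without failure.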

Reliability metrics:

MTBF (Mean Time Between Failures)
MTTF (Mean Time To Failure)

MTBF (Mean Time Between Failures):
 the expected time between two successive failures of a system
 expressed in hours
 a key reliability metric for systems that can be repaired or restored (repairable systems)
 applicable when several system failures are expected.

For a hardware product, MTBF decreases with its age.
For a software product, MTBF is not a function of time!
It depends on the quality of debugging.

MTTF (Mean Time To Failure):
 the expected time to failure of a system
 in reliability engineering, a metric for non-repairable systems
 non-repairable systems can fail only once; for example, a satellite is not repairable.

Mean Time To Repair (MTTR): the average time to restore a system after a failure

When there are no delays in repair: MTBF = MTTF + MTTR

Software products are repairable systems!

Reliability models neglect the time needed to restore the system after a failure:
with MTTR = 0, MTBF = MTTF

Availability = MTTF / MTBF = MTTF / (MTTF + MTTR)
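A worked example with illustrative (hypothetical) numbers: if MTTF = 990 hours and MTTR = 10 hours, then MTBF = 990 + 10 = 1000 hours, and Availability = 990 / 1000 = 0.99, i.e. the system is expected to be up 99% of the time.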


Is software reliability important?

• company's reputation
• warranty (maintenance) costs
• future business
• contract requirements
• customer satisfaction
3.1.2. Defect rate (density)

 Number of defects per KLOC or per number of Function Points, in a given time unit

Example:
"The latent defect rate for this product, during the next four years, is 2.0 defects per KLOC"

A crude metric: a defect may involve one or more lines of code.

Lines Of Code (LOC):
- different counting tools exist
- the defect rate metric has to be accompanied by the counting method used for LOC!
- it is not recommended to compare defect rates of two products written in different languages
Defect rate for subsequent versions (releases) of a software product:

LOC counted: instruction statements (executable code + data definitions, not comments).

Size metrics:
- Shipped Source Instructions (SSI)
- New and Changed Source Instructions (CSI)

SSI (current release) = SSI (previous release)
                        + CSI (current release)
                        - deleted code
                        - changed code (to avoid double counting in both SSI and CSI)
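A minimal sketch of this size bookkeeping as a Python function (the numbers are illustrative, not from a real project):

```python
def shipped_source_instructions(prev_ssi, csi, deleted, changed):
    """SSI for the current release.

    Changed code is subtracted because it is already counted in CSI;
    counting it in both SSI and CSI would be double counting.
    """
    return prev_ssi + csi - deleted - changed

# Illustrative numbers only.
print(shipped_source_instructions(prev_ssi=500_000, csi=60_000,
                                  deleted=5_000, changed=10_000))  # 545000
```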
Defects after the release of a product:

- field defects: found by the customer (reported problems that required bug fixing)
- internal defects: found internally

Post-release defect rate metrics:

 Total defects per KSSI
 Field defects per KSSI
 Release-origin defects (field + internal) per KCSI
 Release-origin field defects per KCSI
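A minimal sketch of these four rates, continuing the hypothetical numbers above (KSSI and KCSI are SSI and CSI in thousands of instructions; the defect counts are invented for illustration):

```python
def post_release_rates(total_defects, field_defects,
                       rel_defects, rel_field_defects, ssi, csi):
    """Post-release defect rates; ssi/csi are in source instructions."""
    kssi, kcsi = ssi / 1000, csi / 1000
    return {
        "total defects per KSSI": total_defects / kssi,
        "field defects per KSSI": field_defects / kssi,
        "release-origin defects per KCSI": rel_defects / kcsi,
        "release-origin field defects per KCSI": rel_field_defects / kcsi,
    }

# Illustrative: 545 total defects (200 from the field), of which
# 300 (90 field) originated in the current release's new/changed code.
print(post_release_rates(545, 200, 300, 90, ssi=545_000, csi=60_000))
```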

Defect rate using the number of Function Points:

 If the number of defects per unit of function is low, then the software should have better quality, even though the defects per KLOC value could be higher - for instance, when the functions were implemented in fewer lines of code.
Reliability or Defect Rate?

Reliability:
 often used for safety-critical systems such as airline traffic control systems, avionics, and weapons
 (the usage profile and scenarios are better defined)

Defect density:
 used in many commercial systems (systems for commercial use), because:
• there is no typical user profile
• development organizations use the defect rate for maintenance cost and resource estimates
• MTTF is more difficult to implement and may not be representative of all customers.
3.2. Customer Oriented Metrics

3.2.1. Customer Problems Metric

Customer problems when using the product:
valid defects, usability problems, unclear documentation, user errors.

Problems per user month (PUM) metric:

PUM = TNP / TNM

TNP: total number of problems reported by customers for a time period
TNM: total number of license-months of the software during the period
    = number of installed licenses of the software × number of months in the period
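A minimal sketch of the PUM computation in Python (license and problem counts below are illustrative):

```python
def pum(total_problems, installed_licenses, months):
    """Problems per user-month: PUM = TNP / TNM.

    TNM = installed licenses x months in the measurement period.
    """
    tnm = installed_licenses * months
    return total_problems / tnm

print(pum(total_problems=120, installed_licenses=1_000, months=3))
# 120 / 3000 = 0.04 problems per user-month
```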
3.2.2. Customer satisfaction metrics

Often measured on a five-point scale:

1. Very satisfied
2. Satisfied
3. Neutral
4. Dissatisfied
5. Very dissatisfied

IBM: CUPRIMDSO
(capability/functionality, usability, performance, reliability, installability, maintainability, documentation/information, service, and overall)

Hewlett-Packard: FURPS
(functionality, usability, reliability, performance, and service)
Metrics:
- Percent of completely satisfied customers
- Percent of satisfied customers
- Percent of dissatisfied customers
- Percent of nonsatisfied customers

Net Satisfaction Index (NSI), with the following weighting scheme:

- Completely satisfied = 100% (all customers are completely satisfied)
- Satisfied = 75% (e.g., all customers are satisfied, or 50% are completely satisfied and 50% are neutral)
- Neutral = 50%
- Dissatisfied = 25%
- Completely dissatisfied = 0% (all customers are completely dissatisfied)
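A minimal sketch of how such an index could be computed, assuming NSI is the weighted average of the response fractions using the weights above (survey counts are invented for illustration):

```python
# Weights per response category, from the scale above.
NSI_WEIGHTS = {
    "completely satisfied": 1.00,
    "satisfied": 0.75,
    "neutral": 0.50,
    "dissatisfied": 0.25,
    "completely dissatisfied": 0.00,
}

def nsi(counts):
    """Net Satisfaction Index as the weighted average of responses."""
    total = sum(counts.values())
    return sum(NSI_WEIGHTS[k] * n for k, n in counts.items()) / total

# Illustrative survey of 100 customers.
print(nsi({"completely satisfied": 40, "satisfied": 30,
           "neutral": 20, "dissatisfied": 10,
           "completely dissatisfied": 0}))  # 0.75
```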
Calculation of sum(Fi)
 Total of the 0-5 ratings of the following 14 questions:
 Does the system require reliable back-up/recovery?
 Are specialized data communications required?
 Are there distributed processing functions?
 Is performance critical?
 Will the system run in a heavily utilized operating environment?
 Is on-line data entry required?
 For on-line data entry, will it require multiple screens?
 Are ILFs updated on-line?
 Are inputs, outputs, files, or inquiries complex?
 Is the internal processing complex?
 Is the code designed to be reusable?
 Are conversion and installation included?
 Is the system designed for installation in different organizations?
 Is the application designed to facilitate change and ease of use?

Value Adjustment Factor: 14 Characteristics
 Data Communications
 Distributed Data Processing
 Performance
 Heavily Used Configuration
 Transaction Rate
 Online Data Entry
 End-User Efficiency
 Online Update
 Complex Processing
 Reusability
 Installation Ease
 Operational Ease
 Multiple Sites
 Facilitate Change
Degrees of Influence
 0 Not present, or no influence
 1 Incidental influence
 2 Moderate influence
 3 Average influence
 4 Significant influence
 5 Strong influence throughout
Function-oriented Metrics
 Function-oriented metrics use a measure of the functionality delivered by the application as a normalization value.
 The most widely used metric of this type is the function point:

FP = count total × [0.65 + 0.01 × ∑(Fi)]
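A worked example with illustrative numbers: with a count total of 200 and the 14 ratings summing to ∑(Fi) = 42, FP = 200 × [0.65 + 0.01 × 42] = 200 × 1.07 = 214.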

Uses of Function Points
 Measure productivity (e.g., the number of function points achieved per work hour expended)
 Estimate development and support (cost-benefit analysis, staffing estimation)
 Monitor outsourcing agreements (ensure that the outsourcing entity delivers the level of support and productivity gains that it promises)
 Drive IS-related business decisions (allow decisions regarding the retaining, retiring, and redesign of applications to be made)
 Normalize other measures (other measures, such as defects, frequently require the size in function points)
Objectives of Function Point Counting
 Measure functionality that the user requests and receives
 Measure software development and maintenance independently of the technology used for implementation
 Simple enough to minimize the overhead of the measurement process
 A consistent measure among various projects and organizations
Function Point Counting Steps
 Determine the type of function point count
 Identify the counting scope and application boundary
 Determine the Unadjusted Function Point Count:
   - count Data Functions
   - count Transactional Functions
 Determine the Value Adjustment Factor
 Calculate the Adjusted Function Point Count
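A minimal sketch of these steps in Python. The complexity weight table is an assumption here (the standard IFPUG weights, which these notes do not list), and the counts are invented for illustration:

```python
# Standard IFPUG weights by function type and complexity
# (low, average, high) -- assumed, not given in these notes.
WEIGHTS = {
    "EI":  (3, 4, 6),    # External Inputs
    "EO":  (4, 5, 7),    # External Outputs
    "EQ":  (3, 4, 6),    # External Inquiries
    "ILF": (7, 10, 15),  # Internal Logical Files
    "EIF": (5, 7, 10),   # External Interface Files
}
COMPLEXITY = {"low": 0, "average": 1, "high": 2}

def unadjusted_fp(counts):
    """counts: {(function type, complexity): number of functions}."""
    return sum(WEIGHTS[t][COMPLEXITY[c]] * n for (t, c), n in counts.items())

def adjusted_fp(ufp, fi_sum):
    """Apply the value adjustment factor: VAF = 0.65 + 0.01 * sum(Fi)."""
    return ufp * (0.65 + 0.01 * fi_sum)

# Illustrative count: 10 average EIs, 6 low EOs, 4 average ILFs.
ufp = unadjusted_fp({("EI", "average"): 10, ("EO", "low"): 6,
                     ("ILF", "average"): 4})
print(ufp, adjusted_fp(ufp, fi_sum=42))  # 104 111.28
```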
Function Point Counting Diagram: [diagram not reproduced in these notes]
Metrics for S/W Maintenance
 When development of a software product is complete & it is released to the market, it enters the maintenance phase of its life cycle. Two metrics during this phase are:
 The defect arrivals by time interval
 Customer problem calls (which may or may not be defects) by time interval
 However, the number of defect arrivals is largely determined by the development process, before the maintenance phase; the quality of the product cannot be altered during the maintenance phase. Therefore, these two metrics do not reflect the quality of software maintenance.
Metrics for S/W Maintenance
 The main purpose of the maintenance phase is to fix defects as soon as possible, with excellent fix quality. Such actions do not improve the defect rate of the product, but they can improve customer satisfaction to a large extent.
 The following metrics are therefore very important:
 Fix backlog & backlog management index
 Fix response time & fix responsiveness
 Percent delinquent fixes
 Fix quality
Fix backlog & backlog management index
 The fix backlog is a workload statement for software maintenance.
 It is related both to the rate of defect arrivals & to the rate at which fixes for reported problems become available.
 It is a simple count of reported problems that remain unfixed at the end of each month or each week. This metric provides meaningful information for managing the maintenance process.
 Another metric to manage the backlog of open, unsolved problems is the backlog management index (BMI):
 BMI = (No. of problems closed during the month / No. of problem arrivals during the month) * 100%
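A minimal sketch of the BMI computation in Python (the counts are illustrative):

```python
def bmi(problems_closed, problem_arrivals):
    """Backlog Management Index, as a percentage.

    BMI > 100: the backlog shrank this month; BMI < 100: it grew.
    """
    return problems_closed / problem_arrivals * 100

print(bmi(problems_closed=60, problem_arrivals=50))  # 120.0 -> backlog reduced
```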
Metrics for S/W Maintenance
 Since BMI is the ratio of the number of closed (solved) problems to the number of problem arrivals during the month, a BMI larger than 100 means the backlog was reduced; a BMI less than 100 means the backlog increased.
 The goal is always to strive for a BMI larger than 100. A BMI trend chart or control chart should be examined together with trend charts of defect arrivals, defects fixed (closed), & the number of problems in the backlog.
Fix response time & fix responsiveness
 For many development organizations, guidelines are established on the time limit within which fixes should be available for reported defects. Usually the criteria are set in accordance with the severity of the problems.
 For critical situations, in which the customer's business is at risk due to defects in the software product, software developers work around the clock to fix the problems. For less severe defects, for which circumventions are available, the required fix response time is more relaxed.
 The fix response time metric is usually calculated as follows, for all problems as well as by severity level:
“Mean time of all problems from open to closed”
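A minimal sketch of this calculation in Python (the severity levels and turnaround times are illustrative):

```python
from statistics import mean

def mean_fix_response_time(problems):
    """Mean open-to-closed time, overall and per severity level.

    problems: list of (severity, open_to_closed_days) tuples.
    """
    overall = mean(days for _, days in problems)
    by_severity = {}
    for sev, days in problems:
        by_severity.setdefault(sev, []).append(days)
    return overall, {s: mean(d) for s, d in by_severity.items()}

print(mean_fix_response_time([(1, 2), (1, 4), (2, 10), (3, 30)]))
# (11.5, {1: 3, 2: 10, 3: 30})
```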
Fix response time & fix responsiveness
 In general, a short fix response time leads to customer satisfaction. However, there is a subtle difference between fix responsiveness & a short fix response time. The important elements of fix responsiveness are customer expectations, the agreed-to fix time, & the ability to meet one's commitment to the customer.
Percent delinquent fixes
 The mean response time metric is a central tendency measure. A more sensitive metric is the percentage of delinquent fixes. A fix whose turnaround time exceeds the required response time is classified as delinquent:
 % delinquent fixes = (No. of fixes that exceeded the response time criteria by severity level / No. of fixes delivered in a specified time) * 100%
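A minimal sketch of this metric in Python, assuming per-severity response time criteria (the criteria and turnaround times below are illustrative):

```python
def percent_delinquent(fixes, criteria_days):
    """Percent of delivered fixes that exceeded their response time
    criterion, where the criterion depends on severity.

    fixes: list of (severity, turnaround_days);
    criteria_days: severity -> allowed turnaround in days.
    """
    delinquent = sum(1 for sev, days in fixes if days > criteria_days[sev])
    return delinquent / len(fixes) * 100

print(percent_delinquent([(1, 2), (1, 5), (2, 12), (3, 25)],
                         criteria_days={1: 3, 2: 14, 3: 30}))  # 25.0
```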
 This metric, however, is not a metric for real-time delinquency management, because it covers closed problems only. Problems that are still open must be factored into the calculation for a real-time metric. Assuming the time unit is 1 week, we propose that the percent delinquent of problems in the active backlog be used.
Percent delinquent fixes
 The active backlog refers to all open problems for the week: the sum of the existing backlog at the beginning of the week & the new problem arrivals during the week. In other words, it contains the total number of problems to be processed for the week - the total workload. The number of delinquent problems is checked at the end of the week.
 The following diagram shows the real-time delinquency index:
[Diagram not reproduced: the backlog plus the week's arrivals form the active backlog; at the end of the week, check how many problems are delinquent.]
Fix Quality
 Fix quality, or the number of defective fixes, is another important quality metric for the maintenance phase. From the customer's perspective, it is bad enough to encounter defects when running a business on the software; it is even worse if the fixes turn out to be defective.
 A fix is defective if it did not fix the reported problem, or if it fixed the original problem but injected a new defect. Defective fixes are harmful to customer satisfaction.
 The metric of percent defective fixes is simply the percentage of all fixes in a time interval (e.g., 1 month) that are defective. The quality goal for the maintenance process, of course, is zero defective fixes without delinquency.
In-Process Quality Metrics
 Some of the in-process quality metrics are:
 Defect density during machine testing
 Defect arrival pattern during machine testing
 Phase-based defect removal pattern
 Defect removal effectiveness
Defect Density during Machine Testing
 The defect rate during formal machine testing (testing after the code is integrated into the system library) is positively correlated with the defect rate in the field.
 Higher defect rates found during testing indicate that the software has experienced higher error injection during its development process.
 Software defect density never follows a uniform distribution. If a piece of code or a product has more testing defects, it is either the result of more effective testing or the result of higher latent defects in the code.
Defect Density during Machine Testing
 One principle of defect density states that the more defects found during testing, the more defects will be found later. This principle is another expression of the positive correlation between:
 defect rates during testing & in the field, or
 defect rates between phases of testing
 This simple metric of defect rate, i.e., defects per KLOC or per FP, is especially useful for monitoring subsequent releases of a product in the same development organization.
Defect Density during Machine Testing
 Release-to-release comparisons are therefore not contaminated by irrelevant factors. The development team or the project manager can use the following scenarios to judge release quality:
 If the defect rate during testing is the same as or lower than that of the previous release (or a similar product), then ask: did testing for the current release deteriorate?
 If the answer is no, the quality perspective is positive.
 If the answer is yes, you need to do extra testing.
Defect Density during Machine Testing
 If the defect rate during testing is substantially higher than that of the previous release, then ask: did we plan for & actually improve testing effectiveness?
 If the answer is no, the quality perspective is negative. Ironically, the only remedial approach that can be taken at this stage of the life cycle is to do more testing, which will yield even higher defect rates.
 If the answer is yes, then the quality perspective is the same or positive.
Defect Arrival Pattern during Machine Testing
 Three metrics for the defect arrival pattern during testing are:
 The defect arrivals (defects reported) during the testing phase, by time interval (e.g., week). These are the raw numbers of arrivals, not all of which are valid defects.
 The pattern of valid defect arrivals, when problem determination is done on the reported problems. This is the true defect pattern.
 The pattern of defect backlog over time. This metric is needed because development organizations cannot investigate & fix all reported problems immediately. It is a workload statement as well as a quality statement. Regression testing is needed to ensure that integrated product quality levels are reached.
Phase-Based Defect Removal Pattern
 This is an extension of the test defect density metric.
 In addition to testing, the phase-based defect removal pattern tracks defects at all phases of the development cycle, including design reviews, code inspections, & formal verifications.
 Because a large percentage of programming defects is related to design problems, conducting formal reviews or functional verifications to enhance the defect removal capability at the front end reduces error injection.
 The pattern of phase-based defect removal reflects the overall defect removal ability of the development process.
Defect Removal Efficiency
 Defect removal efficiency provides benefits at both the project and process level.
 It is a measure of the filtering ability of QA activities as they are applied throughout all process framework activities.
 It indicates the percentage of software errors found before software release.
 It is defined as DRE = E / (E + D), where:
 E is the number of errors found before delivery of the software to the end user
 D is the number of defects found after delivery
 As D increases, DRE decreases (i.e., becomes a smaller and smaller fraction).
 The ideal value of DRE is 1, which means no defects are found after delivery.
 DRE encourages a software team to institute techniques for finding as many errors as possible before delivery.
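A minimal sketch of the DRE computation in Python (the counts are illustrative):

```python
def dre(errors_before, defects_after):
    """Defect Removal Efficiency: DRE = E / (E + D).

    E = errors found before delivery, D = defects found after.
    The ideal value is 1.0 (nothing escapes to the field).
    """
    return errors_before / (errors_before + defects_after)

print(dre(errors_before=95, defects_after=5))  # 0.95
```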

Metrics for Software Testing
 Test coverage
 Test execution productivity
 Defect density/rate
 Defect leakage
 Defects per size
 Test cost (in %)
 Cost to locate a defect
 Defects detected in testing
 Defects detected in production
 Quality of testing
 Source code analysis
 Effort productivity
University Questions

Sr. No. | Question | Year | Marks
1 | Define measurement scale & explain the Nominal, Ordinal, Interval & Ratio scales of measurement. | 2013 | 08
2 | Explain in short the following metrics used in software testing: 1) Test coverage 2) Test execution status 3) Defect density 4) Defect leakage | 2013 | 08
3 | Write short notes on: 1) Product Quality Metrics 2) In-process Quality Metrics | 2013 | 08
4 | Explain the GQM technique in detail. Draw a GQM tree for identifying software measures. | 2013 | 08
5 | Identify questions & metrics for the following goal: "Evaluate effectiveness of coding standard" | 2011 | 04
6 | Explain Product, Process & Resources with respect to their internal & external attributes. | 2010, 2011 | 08, 09
7 | Draw & explain a GQM tree for achieving better software quality. | 2009 | 08
8 | Draw & explain a GQM tree for identifying software measures. | 2010, 2011 | 10
9 | Explain metrics for software maintenance. | 2010 | 08
Topics of University Questions
 Measurement basics: representational theory of measurement, measurement scales
 GQM
 Classifying S/W measures
 Product quality metrics
 Metrics for S/W maintenance
 In-process quality metrics
 Metrics for S/W testing
