Software Product Quality Metrics
Software Reliability:
The probability that a program will perform its specified function,
for a stated time period, under specified conditions.
Reliability metrics:
Mean Time To Failure (MTTF): average operating time until a failure occurs
Mean Time To Repair (MTTR): average time to restore a system after a failure
Reliability models typically consider only the time to failure and neglect the time needed to restore the system after a failure.
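A minimal sketch of how these two metrics might be computed from an incident log (the timestamps below are hypothetical, measured in hours since the system went live):

# Hypothetical incident log: (failure_time, restored_time), in hours since go-live.
incidents = [(120.0, 122.5), (480.0, 481.0), (900.0, 904.0)]

total_uptime = 0.0
total_repair = 0.0
previous_restore = 0.0
for failed_at, restored_at in incidents:
    total_uptime += failed_at - previous_restore   # operating time before this failure
    total_repair += restored_at - failed_at        # time spent restoring the system
    previous_restore = restored_at

mttf = total_uptime / len(incidents)   # Mean Time To Failure
mttr = total_repair / len(incidents)   # Mean Time To Repair
print(f"MTTF = {mttf:.1f} h, MTTR = {mttr:.1f} h")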
Why reliability matters:
• company's reputation
• warranty (maintenance) costs
• future business
• contract requirements
• customer satisfaction
3.1.2. Defect rate (density)
Example:
“The latent defect rate for this product, during the next four years, is 2.0 defects per KLOC”
LOC (Lines Of Code):
-Different counting tools yield different counts
-The defect rate metric must therefore be reported together with the LOC counting method used
-It is not recommended to compare defect rates of two products written in different languages
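As a small illustration (with hypothetical numbers chosen to match the example above), the defect rate is simply the defect count divided by the size in thousands of lines of code:

# Hypothetical figures for one product.
latent_defects = 96        # latent defects expected over the tracking period
size_loc = 48_000          # size counted with one agreed LOC counting method

defects_per_kloc = latent_defects / (size_loc / 1000)
print(f"Defect rate: {defects_per_kloc:.1f} defects per KLOC")   # -> 2.0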
Defect rate for subsequent versions (releases) of a software product:
Example:
field defects – found by the customer (reported problems that required bug fixing)
internal defects – found internally
Reliability:
often used with safety-critical systems such as: airline traffic control systems,
avionics, weapons.
(usage profile and scenarios are better defined)
Defect density:
often used for commercial systems (systems for commercial use), because:
• there is no typical user profile, so usage scenarios are hard to define
• development organizations use the defect rate for maintenance cost and resource estimates
• MTTF is more difficult to implement and may not be representative of all customers
3.2. Customer-Oriented Metrics
IBM: CUPRIMDSO
(capability/functionality, usability, performance, reliability, installability, maintainability, documentation/information, service, and overall)
Hewlett-Packard: FURPS
(functionality, usability, reliability, performance, and service)
Metrics:
– Percent of completely satisfied customers
– Percent of satisfied customers
– Percent of dissatisfied customers
– Percent of nonsatisfied customers
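As an illustrative sketch, assuming a five-point survey scale and the common grouping in which “satisfied” includes completely satisfied customers and “nonsatisfied” includes neutral, dissatisfied & completely dissatisfied customers, the percentages could be derived like this:

from collections import Counter

# Hypothetical survey responses on a 5-point scale.
responses = ["completely satisfied", "satisfied", "neutral", "satisfied",
             "dissatisfied", "completely dissatisfied", "satisfied",
             "completely satisfied"]
counts = Counter(responses)
total = len(responses)

pct_completely_satisfied = counts["completely satisfied"] / total * 100
pct_satisfied = (counts["completely satisfied"] + counts["satisfied"]) / total * 100
pct_dissatisfied = (counts["dissatisfied"] + counts["completely dissatisfied"]) / total * 100
pct_nonsatisfied = (counts["neutral"] + counts["dissatisfied"]
                    + counts["completely dissatisfied"]) / total * 100

print(f"completely satisfied: {pct_completely_satisfied:.0f}%")
print(f"satisfied:            {pct_satisfied:.0f}%")
print(f"dissatisfied:         {pct_dissatisfied:.0f}%")
print(f"nonsatisfied:         {pct_nonsatisfied:.0f}%")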
Uses of Function Points
• Measure productivity (e.g. number of function points achieved per work hour expended)
• Estimate development and support (cost-benefit analysis, staffing estimation)
• Monitor outsourcing agreements (ensure that the outsourcing entity delivers the level of support and productivity gains that it promises)
• Drive IS-related business decisions (allow decisions regarding the retaining, retiring and redesign of applications to be made)
• Normalize other measures (other measures, such as defects, frequently require the size in function points)
Objectives of Function Point Counting
• Measure functionality that the user requests and receives
• Measure software development and maintenance independently of the technology used for implementation
• Simple enough to minimize the overhead of the measurement process
• A consistent measure among various projects and organizations
Function Point Counting Steps
1. Determine the type of function point count
2. Identify the counting scope and application boundary
3. Determine the Unadjusted Function Point Count
   - Count Data Functions
   - Count Transactional Functions
4. Determine the Value Adjustment Factor
5. Calculate the Adjusted Function Point Count
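A worked sketch of the last two steps, using the commonly cited IFPUG adjustment formula (VAF = 0.65 + 0.01 × sum of the 14 general system characteristic ratings); the counts and ratings below are hypothetical:

# Hypothetical unadjusted count (from data & transactional functions).
unadjusted_fp = 210
# Hypothetical ratings (degree of influence, 0-5) for the 14 general system characteristics.
gsc_ratings = [3, 2, 4, 1, 0, 5, 3, 2, 2, 1, 3, 4, 2, 1]

value_adjustment_factor = 0.65 + 0.01 * sum(gsc_ratings)     # VAF
adjusted_fp = unadjusted_fp * value_adjustment_factor        # Adjusted Function Point Count
print(f"VAF = {value_adjustment_factor:.2f}, Adjusted FP = {adjusted_fp:.1f}")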
Function Point Counting Diagram
Metrics for S/W Maintenance
When development of a software product is complete & it is released to the market, it enters the maintenance phase of its life cycle. Two metrics tracked during this phase are:
– The defect arrivals by time interval
– Customer problem calls (which may or may not be defects) by time interval
However, the number of defect arrivals is largely determined by the development process, before the maintenance phase; the intrinsic quality of the product cannot be altered during the maintenance phase. Therefore, these two metrics do not reflect the quality of software maintenance.
The main purpose of the maintenance phase is to fix the defects as soon as possible with excellent fix quality. Such actions do not improve the defect rate of the product, but they can improve customer satisfaction to a large extent.
The following metrics are therefore very important:
– Fix backlog & backlog management index
– Fix response time & fix responsiveness
– Fix quality
Fix backlog & backlog management index
The fix backlog is a workload statement for software maintenance. It is related to both the rate of defect arrivals & the rate at which fixes for reported problems become available. It is a simple count of reported problems that remain unfixed at the end of each month or each week. This metric provides meaningful information for managing the maintenance process.
Another metric used to manage the backlog of open, unsolved problems is the backlog management index (BMI):
BMI = (Number of problems closed during the month / Number of problem arrivals during the month) × 100%
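A minimal sketch of the BMI calculation with hypothetical monthly counts:

# Hypothetical counts for one month.
problems_closed = 130
problem_arrivals = 115

bmi = problems_closed / problem_arrivals * 100   # backlog management index, in percent
print(f"BMI = {bmi:.0f}%")   # > 100% means the backlog shrank this month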
Since BMI is the ratio of the number of problems closed (solved) to the number of problem arrivals during the month, a BMI larger than 100% means the backlog was reduced, while a BMI less than 100% means the backlog increased.
The goal is always to strive for a BMI larger than 100%. A BMI trend chart or control chart should be examined together with trend charts of defect arrivals, defects fixed (closed), & the number of problems in the backlog.
Fix response time & fix responsiveness
For many development organizations, guidelines are
established on the time limit within which the fixes
should be available for the reported defects. Usually the
criteria are set in accordance with the severity of the
problems.
For critical situations in which the customer’s businesses
are at risk due to defects in the software product, software
developers work around the clock to fix the problems. For
less severe defects for which circumventions are
available, the required fix response time is more relaxed.
The fix response time metric is usually calculated, for all problems as well as by severity level, as:
“Mean time of all problems from open to closed”
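A small sketch of this metric, assuming hypothetical problem records that carry a severity level and open/close dates:

from collections import defaultdict
from datetime import date

# Hypothetical problem records: (severity, opened, closed).
problems = [
    (1, date(2024, 3, 1), date(2024, 3, 2)),
    (2, date(2024, 3, 3), date(2024, 3, 10)),
    (3, date(2024, 3, 5), date(2024, 3, 25)),
    (2, date(2024, 3, 8), date(2024, 3, 15)),
]

durations = [(severity, (closed - opened).days) for severity, opened, closed in problems]

overall_mean = sum(days for _, days in durations) / len(durations)
by_severity = defaultdict(list)
for severity, days in durations:
    by_severity[severity].append(days)

print(f"Mean fix response time (all problems): {overall_mean:.1f} days")
for severity in sorted(by_severity):
    mean_days = sum(by_severity[severity]) / len(by_severity[severity])
    print(f"  Severity {severity}: {mean_days:.1f} days")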
In general, a short fix response time leads to customer satisfaction. However, there is a subtle difference between fix responsiveness & short fix response time. The important elements of fix responsiveness are customer expectations, the agreed-to fix time, & the ability to meet one’s commitment to the customer.
Percent delinquent fixes
The mean response time metric is a central tendency measure. A more sensitive metric is the percentage of delinquent fixes. For each fix, if the turnaround time greatly exceeds the required response time, then it is classified as delinquent:
% delinquent fixes = (Number of fixes that exceeded the response time criteria by severity level / Number of fixes delivered in a specified time) × 100%
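A minimal sketch of this calculation, assuming hypothetical response-time criteria (allowed days per severity level) and a list of fixes delivered in the period:

# Hypothetical response-time criteria: days allowed per severity level.
criteria_days = {1: 2, 2: 7, 3: 30}

# Fixes delivered this period: (severity, turnaround_days).
fixes = [(1, 1), (1, 4), (2, 6), (2, 10), (3, 12)]

delinquent = sum(1 for severity, days in fixes if days > criteria_days[severity])
pct_delinquent = delinquent / len(fixes) * 100
print(f"Percent delinquent fixes: {pct_delinquent:.0f}%")   # 2 of 5 -> 40%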
This metric, however, is not a metric for real-time delinquency management because it is for closed problems only. Problems that are still open must be factored into the calculation for a real-time metric. Assuming the time unit is 1 week, we propose that the percent delinquent of problems in the active backlog be used.
Active backlog refers to all open problems for the week, which is the sum of the existing backlog at the beginning of the week & the new problem arrivals during the week. In other words, it is the total number of problems to be processed for the week, i.e. the total workload. The number of delinquent problems is checked at the end of the week.
The following diagram shows the real-time delinquency index:
[Diagram: new arrivals add to the backlog; at the end of each week, check how many of the open problems are delinquent]
Fix Quality
Fix quality, or the number of defective fixes, is another important quality metric for the maintenance phase. From the customer’s perspective, it is bad enough to encounter defects when running a business on the software; it is even worse if the fixes turn out to be defective.
A fix is defective if it did not fix the reported problem, or if it fixed the original problem but injected a new defect. Defective fixes are harmful to customer satisfaction.
The metric of percent defective fixes is simply the percentage of all fixes in a time interval (e.g. 1 month) that are defective. The quality goal for the maintenance process, of course, is zero defective fixes without delinquency.
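A minimal sketch of the percent-defective-fixes metric, with hypothetical monthly counts:

# Hypothetical counts for one month.
fixes_delivered = 85
defective_fixes = 3      # fixes that did not solve the problem or injected a new defect

pct_defective = defective_fixes / fixes_delivered * 100
print(f"Percent defective fixes: {pct_defective:.1f}%")   # quality goal: 0%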
In-Process Quality Metrics
Some of the in-process quality metrics are:
– Defect density during machine testing
– Defect arrival pattern during machine testing
– Phase-based defect removal pattern
– Defect removal effectiveness
Defect Density during Machine Testing
The defect rate during formal machine testing (testing after code is integrated into the system library) is positively correlated with the defect rate in the field.
Higher defect rates found during testing indicate that the software has experienced higher error injection during its development process.
Software defect density never follows a uniform distribution. If a piece of code or a product has a higher defect rate during testing, it is either the result of more effective testing or the result of higher latent defects in the code.
One principle of defect density states that the more defects found during testing, the more defects will be found later. This principle is another expression of the positive correlation between:
– defect rates during testing & in the field, or
– defect rates between phases of testing
This simple metric of defect rate, i.e. defects per KLOC or per function point, is especially useful for monitoring subsequent releases of a product in the same development organization.
Release-to-release comparisons are therefore not contaminated by irrelevant factors. The development team or the project manager can use the following scenarios to judge the release quality:
If the defect rate during testing is the same as or lower than that of the previous release (or a similar product), then ask: Did the testing effectiveness for the current release deteriorate?
– If the answer is no, the quality perspective is positive.
– If the answer is yes, you need to do extra testing.
If the defect rate during testing is substantially higher than that of the previous release, then ask: Did we plan for & actually improve testing effectiveness?
– If the answer is no, the quality perspective is negative. Ironically, the only remedial approach that can be taken at this stage of the life cycle is to do more testing, which will yield even higher defect rates.
– If the answer is yes, then the quality perspective is the same or positive.
Defect Arrival Pattern during Machine Testing
Three metrics for the defect arrival pattern during testing are:
– The defect arrivals (defects reported) during the testing phase by time interval (e.g. week). These are the raw numbers of arrivals, not all of which are valid defects.
– The pattern of valid defect arrivals, obtained when problem determination is done on the reported problems. This is the true defect pattern.
– The pattern of the defect backlog over time. This metric is needed because development organizations cannot investigate & fix all reported problems immediately. It is a workload statement as well as a quality statement.
Regression testing is needed to ensure that integrated
product quality levels are reached.
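A small sketch showing how the three arrival-pattern views might be tabulated from hypothetical weekly data (reported problems, valid defects after problem determination, and problems closed):

# Hypothetical weekly data for a testing phase:
# (reported_problems, valid_defects, problems_closed) per week.
weeks = [(40, 30, 20), (55, 42, 35), (38, 30, 45), (22, 18, 30)]

backlog = 0
for week_no, (reported, valid, closed) in enumerate(weeks, start=1):
    backlog += reported - closed   # open problems carried into the next week
    print(f"Week {week_no}: arrivals={reported}, valid defects={valid}, backlog={backlog}")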
Phase-Based Defect Removal Pattern
It is an extension of the test defect density metric.
In addition to testing, the phase-based defect removal pattern tracks defect removal at all phases of the development cycle, including design reviews, code inspections, & formal verifications.
As a large percentage of programming defects is related to design problems, conducting formal reviews or functional verifications to enhance the defect removal capability at the front end reduces error injection.
The pattern of phase based defect removal reflects the
overall defect removal ability of the development process.
Defect Removal Efficiency
Defect removal efficiency provides benefits at both the project and
process level
It is a measure of the filtering ability of QA activities as they are applied
throughout all process framework activities
It indicates the percentage of software errors found before software
release
It is defined as DRE = E / (E + D), where:
– E is the number of errors found before delivery of the software to the end user
– D is the number of defects found after delivery
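A minimal sketch of the DRE calculation, with hypothetical counts:

# Hypothetical counts for one release.
errors_before_release = 450   # E: found by reviews, inspections & testing before delivery
defects_after_release = 50    # D: reported by users after delivery

dre = errors_before_release / (errors_before_release + defects_after_release)
print(f"DRE = {dre:.0%}")   # 90% of defects removed before release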
Metrics for Software Testing
– Test coverage
– Test execution productivity
– Defect density/rate
– Defect leakage
– Defects per size
– Test cost (in %)
– Cost to locate a defect
– Defects detected in testing
– Defects detected in production
– Quality of testing
– Source code analysis
– Effort productivity
University Questions
Sr. No. Question Year Marks