Whitepaper - Software Testing Metrics
Introduction
Testing Metrics
Conclusion
References
When you can measure what you are speaking about, and express it in numbers, you know something about it; but when you cannot measure it, when you cannot express it in numbers, your knowledge is of a meagre and unsatisfactory kind.
- Lord Kelvin (19th-century mathematical physicist and engineer)
This white paper discusses different software testing metrics, the key attributes of these metrics, and how any organization can set up a good measurement program to facilitate effective decision-making.
Introduction
[Figure: Software Testing Metrics - spanning Effectiveness, Efficiency, and Schedule; shaped by Process, Domain Expertise, and Requirement Analysis & Test Design]
As depicted above, a good set of software metrics can provide detailed insight into the quality of a product/service (i.e. the effectiveness) as well as the efficiency of the process delivering that product/service. It also provides historical data you can leverage to make your delivery process more predictable and your estimates more reliable.
A discussion of software testing metrics would be incomplete if we merely listed the different types of metrics without discussing the factors that influence, or are influenced by, those metrics.
For example, one of the biggest challenges IT organizations face is getting to predictable software delivery. Good metrics not only provide the team with solid heuristics it can rely on to develop estimates, but also let it compare its progress against benchmarks and course-correct the project at an early stage.
In the following sections we will first discuss the different usage categories for metrics. We will then cover the key software testing metrics that should be part of any testing organization's metrics program. Lastly, we will discuss how to go about building a good measurement program.
It is important to note that this paper does not detail every software testing metric. While it touches on a few commonly used and relevant testing metrics (based on the author's experience), it intends to give the reader a good starting point for measuring the quality of his/her organization's software.
Lead/Lag Indicators

More often than not, we try to use the number of defects raised or the number of test cases passed as a measure of application software quality. These measures help when we discuss the current status and progress of application development, but they offer minimal help in preventing defects or accelerating the testing process. This is where lead indicators come into play.

Lead indicators measure activities that have an impact on the future performance of a testing services organization. On projects, lead indicators focus on defect prevention rather than defect detection. Examples of lead indicators are:
− Testing effort spent in requirements and design review
− Effort spent in peer review of test cases

Understanding the tiered approach, and implementing it, is the best approach for testing metrics.

Point-in-Time/Trending Metrics

Point-in-time metrics enable organizations and project teams to take stock of where they are against the plan and make the adjustments necessary to meet the required objectives of the project/program. Typically, such metrics are used to communicate the status of a project or program.

Trending metrics, on the other hand, help us understand patterns of success or failure, strength or weakness, and enable the organization or project team to react accordingly. To measure the performance of the testing organization, we focus more on trending metrics.

Software Quality Assurance/Quality Control Metrics

To be clear: Quality Assurance focuses on having the right processes in place so that quality is built into the development of the software product. Quality Control focuses on ensuring that the developed product meets the expected quality objectives.

Organizations should typically focus on Quality Assurance metrics, which let them take steps to bring about process improvements across the SDLC, while project teams should focus on Quality Control metrics, which help them direct their efforts toward delivering a quality product.

Organization Level Metrics

These metrics cut across all the testing teams in the organization and offer senior executives a clear perspective on the quality of the deliverables output by their IT organization. In addition, trend metrics enable senior executives to recognize how effective their organization is.

• Software stability trends
The metrics within this category reflect the number of quality issues experienced within the organization's portfolio of applications. An upward trend, or a persistently high number, over a period of time is indicative of widespread issues within the testing organization and/or the IT organization. It is an error to treat these metrics merely as issues to be resolved; rather, it is important to understand the root cause(s) and develop a plan to address them. Some of the metrics included in this category are:
− No. of production defects reported
− System outages/downtime
− Post-release customer feedback

• Operating cost trends
It is important to review operating cost trends separately from CapEx, as a high operating budget might indicate non-optimal resource utilization, an increasing amount of effort being expended on Business As Usual (BAU) testing, or heavy expense on testing tools. Such trends call for an assessment of the BAU testing portfolio to identify efficiency opportunities.

• Schedule variance trends
These metrics help the testing organization determine whether there are challenges with the estimation of the testing effort, with the test planning exercise, or with other factors such as build planning, environment issues, etc. Metrics in this category include:
− Schedule variance
− Effort variance
− Mean time to repair

• Test automation metrics
It is challenging for departmental managers to see why they continue to incur automation costs for applications that have already been delivered to production or, in the case of new development, why the investment in automation is not yielding benefits.

Project Level Metrics

These metrics include:
• Defect aging
• Time distribution metrics

Schedule
• Effort variance
• Change request effort ratio

When we talk about project-level metrics, it is also important to align the metrics to the development methodology, such as Agile, Waterfall, or Iterative. The above list of metrics spans these methodologies and should be augmented with metrics specific to the project.
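Several of the metrics above reduce to simple arithmetic over data a testing organization already tracks. The sketch below is illustrative only (the field names, sample counts, and trend threshold are hypothetical, not taken from any specific tool): a least-squares slope for spotting an upward trend in monthly production defect counts, plus effort variance and defect aging for project-level reporting.

```python
from datetime import date

def trend_slope(counts):
    """Least-squares slope of a series of equally spaced period counts."""
    n = len(counts)
    mean_x = (n - 1) / 2
    mean_y = sum(counts) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in enumerate(counts))
    var = sum((x - mean_x) ** 2 for x in range(n))
    return cov / var

def is_upward_trend(counts, threshold=0.5):
    """Flag a stability series whose counts grow faster than `threshold` per period."""
    return trend_slope(counts) > threshold

def effort_variance_pct(actual_hours, planned_hours):
    """Effort variance as a percentage of the planned effort."""
    return (actual_hours - planned_hours) / planned_hours * 100

def defect_age_days(reported_on, closed_on):
    """Defect aging: number of days a defect stayed open."""
    return (closed_on - reported_on).days

# Hypothetical monthly production defect counts for one application:
print(is_upward_trend([12, 15, 14, 19, 22, 27]))                 # True
print(effort_variance_pct(actual_hours=460, planned_hours=400))  # 15.0
print(defect_age_days(date(2024, 3, 1), date(2024, 3, 18)))      # 17
```

A flagged upward trend is only the trigger for the root-cause analysis described above, not a verdict in itself.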
Driving testing services performance requires a formal approach to setting up a metrics program.
The figure below graphically represents an approach we recommend for setting up a metrics program for any testing organization.
[Figure: Metrics program setup cycle, including steps such as Identify Measures and Develop Metrics Program Implementation Plan]
At this point, it is important to take the testing goals created in the previous step and develop a list of metrics. Some examples of translating goals to metrics are:

Testing Goal – Reduce Defects in Production by 95% in 2 years
Metrics – Defect Detection Efficiency, Defect Removal Efficiency, Requirements Coverage

Testing Goal – Decrease testing cycle time by 50% for a release in 1 year
Metrics – Test Case Design/Execution Productivity, Automated Testing Test Coverage

Any metrics reporting is only as effective as the data that goes into the report. Therefore, a very critical step in establishing the metrics program is the data governance framework. The governance framework needs to include:

• Data Standardization – Is there a common taxonomy for the metrics data across the organization?
• Data Integrity – Is the data estimated or guesstimated? If so, how closely does it reflect reality across the organization?
• Data Freshness – Data reflected in the metrics reports should not be stale, so that change actions taken in light of the reports remain relevant to the organization.
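To make the first goal-to-metric translation concrete, Defect Detection Efficiency is commonly computed as the share of all known defects that testing caught before release. A minimal sketch, with hypothetical defect counts:

```python
def defect_detection_efficiency(found_in_test, found_in_production):
    """Percentage of all known defects that testing caught before release."""
    total = found_in_test + found_in_production
    return 100.0 * found_in_test / total if total else 0.0

# Hypothetical release: 190 defects caught in testing, 10 escaped to production.
print(defect_detection_efficiency(190, 10))  # 95.0
```

Tracked release over release, this single number shows whether the organization is converging on a goal such as the 95% production-defect reduction above.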
About Mphasis
Mphasis is a global Technology Services and Solutions company specializing in the areas of Digital and Governance, Risk & Compliance. Our solution
focus and superior human capital propels our partnership with large enterprise customers in their Digital Transformation journeys and with global
financial institutions in the conception and execution of their Governance, Risk and Compliance Strategies. We focus on next generation technologies for
differentiated solutions delivering optimized operations for clients.
www.mphasis.com
Copyright © Mphasis Corporation. All rights reserved.