
Software Testing Metrics

Driving Testing Services Performance


Whitepaper by:
Santosh Subramanian
Testing Services Solution Architect

A whitepaper on software testing metrics 1


Contents

Abstract
Introduction
Metrics Usage Patterns
Testing Metrics
Setting Up a Metrics Program
Conclusion
References



Abstract
The current decade is all about how we collect data and how we leverage it to better understand our customers,
our competitors, the geo-political and cultural environment, and more. This focus on data has also had a subtle
impact on IT organizations: we find an increasing focus on gathering information, mining it, analyzing it,
and leveraging it to make strategic decisions that improve service delivery performance. Consequently,
senior executives turn to their Testing Services organization to see how best they can leverage the data
collected on the quality of delivered software, on testing efficiency and effectiveness, and on the quality of the
software development process.

"When you can measure what you are speaking about, and express it in numbers, you know something about it; but when you
cannot measure it, when you cannot express it in numbers, your knowledge is of a meagre and unsatisfactory kind."
- Lord Kelvin (19th-century mathematical physicist and engineer)

This white paper discusses different software testing metrics, key attributes of these metrics, and how any
organization can set up a good measurement program to facilitate effective decision-making.

Introduction

[Figure: Types of Testing Metrics & Influencing Factors. Software testing metrics sit at the center of three dimensions: Efficiency (influenced by test automation, reusability, and effort distribution), Schedule (influenced by test management, test estimation, and test planning), and Effectiveness (influenced by process, domain expertise, and requirement analysis & test design).]

As depicted above, a good set of software metrics can provide detailed insight into the quality of a product or
service (its effectiveness) as well as the efficiency of the process delivering it. It also provides
historical data to leverage in making your delivery process more predictable and your estimates reliable.
A discussion of software testing metrics would be incomplete if we merely listed the different types of metrics
without discussing the factors that influence, or are influenced by, them.
For example, one of the biggest challenges IT organizations face is achieving predictable software delivery. Good
metrics not only provide the team with solid heuristics to rely on when developing estimates, but
also let it compare progress against benchmarks and course-correct a project at an early stage.
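As a minimal illustration of that heuristic-driven estimation, the sketch below sizes an upcoming test cycle from a historical productivity figure. Both numbers are invented for the example, not taken from this paper.

```python
# A minimal sketch: using a historical productivity heuristic (test cases
# executed per person-hour) to size an upcoming test cycle. Both the
# productivity figure and the case count are illustrative assumptions.
historical_productivity = 4.0   # test cases executed per person-hour
planned_test_cases = 600

estimated_effort_hours = planned_test_cases / historical_productivity
print(f"Estimated execution effort: {estimated_effort_hours:.0f} person-hours")
```

Comparing such an estimate against actuals release after release is exactly the benchmark-and-course-correct loop described above.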
In the following sections we will first discuss the different usage categories for metrics, then the key
software testing metrics that should be part of any testing organization's metrics program, and lastly how to
go about building a good measurement program.
It is important to note that this paper does not detail every software testing metric. While it
touches on a few commonly used and relevant testing metrics (based on the author's experience), its intent is to
give the reader a good starting point for measuring the quality of their organization's software.



Metrics Usage Patterns

Before we delve into the different software testing metrics, their calculation rules, and their purpose, it is important to understand the broad usage patterns that each of these metrics falls into.

Lead/Lag Indicators
More often than not, we use the number of defects raised or the number of test cases passed as a measure of application software quality. These measures help when we discuss the current status and progress of application development, but they offer minimal help in preventing defects or accelerating the testing process. This is where lead indicators come into play.
Lead indicators measure activities that have an impact on the future performance of a testing services organization. On projects, lead indicators focus on defect prevention rather than defect detection. Examples of lead indicators are:
− Testing effort spent in requirements and design review
− Effort spent in peer review of test cases

Point-in-Time/Trending Metrics
Point-in-time metrics enable organizations and project teams to take stock of where they are relative to the plan and make the adjustments needed to meet the objectives of the project or program. Typically, such metrics are used to communicate the status of a project or program.
Trending metrics, on the other hand, help us understand patterns of successes or failures, strengths or weaknesses, and enable the organization or project team to react accordingly. To measure the performance of the testing organization, we focus more on trending metrics.

Software Quality Assurance/Quality Control Metrics
To be clear: Quality Assurance focuses on having the right processes in place so that quality is built into the development of the software product, while Quality Control focuses on ensuring that the developed product meets the expected quality objectives.
Organizations should typically focus on Quality Assurance metrics so that they can take steps to bring about process improvements across the SDLC, while project teams focus on Quality Control metrics, which enable them to direct their efforts toward delivering a quality product.

Testing Metrics

When we discuss testing metrics, it is important to address the different tiers at which metrics get reported and the impact they have at each tier. Typically these tiers are Organization, Business Unit, and Project. This tiered approach to metrics development enables the testing organization to deliver metrics across the IT organization. Organizational-level metrics enable senior executives to take strategic decisions that improve the overall IT organization's software delivery capability. At the business unit level, metrics enable senior managers to identify risks and issues using a combination of status and trend metrics and take corrective action. And with the help of detailed project-level metrics, each testing team can not only find out where testing is lacking, but also how it can help the overall team identify defects early or even prevent defect injection.

Organization Level Metrics
These metrics cut across all the testing teams in the organization and offer senior executives a clear perspective on the quality of the deliverables output by their IT organization. In addition, trend metrics enable senior executives to gauge how effective their organization is.

• Software stability trends
The metrics in this category reflect the number of quality issues experienced across the organization's portfolio of applications. An upward trend, or a persistently high number over a period of time, is indicative of widespread issues within the testing organization and/or the wider IT organization. It is a mistake to treat these metrics themselves as issues to be resolved; rather, it is important to understand the root cause(s) and develop a plan to address them. Some of the metrics included in this category are:
− No. of production defects reported
− System outages/downtime
− Post-release customer feedback

• Operating cost trends
It is important to review operating cost trends separately from CapEx, as a high operating budget may indicate non-optimal resource utilization, an increasing amount of effort expended on Business As Usual (BAU) testing, or heavy spend on testing tools. Such trends call for an assessment of the BAU testing portfolio to identify efficiency opportunities. Some of the metrics included in this category are:



− Manpower cost trends (operational vs. CapEx)
− Manpower cost trends (full-time vs. temporary staff)
− Licensing cost trends
− Testing infrastructure cost trends
− Ratio of testing budget to overall IT budget
− Testing cost per defect

• Reusability metrics
These metrics typically do not get the attention they deserve at an organizational level, primarily because in most cases each business unit is at its own level of maturity, units are independently operated, and technology sharing is typically low unless mandated from the top down. Metrics included in this category are:
− Effort savings through asset reuse
− Time-to-market reduction

Business Unit Level Metrics
These are metrics that cut across different testing teams and enable executives and managers to see trends and take action that will have a mid-to-long-term impact on the testing organization.

• Test effectiveness trends
These trend metrics let program, business unit, and departmental managers know whether the testing organization is effective in defect identification. More importantly, they also help determine how late in the testing life cycle defects were detected. Some of the metrics included in this category are:
− Defect removal efficiency
− Test coverage
− Defect injection rate

• Test efficiency trends
These metrics help the testing organization determine whether it can optimize the cost of testing and time-to-market. Some of the metrics that help here are:
− Mean time to detect
− Defect density
− Test case design/execution productivity

• Schedule variance trends
These metrics help the testing organization determine whether there are challenges with the estimation of testing effort, the test planning exercise, or other factors such as build planning, environment issues, etc.
− Schedule variance
− Effort variance
− Mean time to repair

• Test automation metrics
It can be hard for departmental managers to see why they continue to incur automation costs for applications that have already been delivered to production or, in the case of new development, why the investment in automation is not yielding benefits in terms of reduced cost of testing. A combination of these metrics should enable the testing organization to make a strong case for continuous commitment to test automation.
− Automation test coverage
− Automated test development productivity
− Automation maintenance effort
− Automation ROI

• Capability metrics
It is important for any business unit to understand the true capabilities of its teams. It is worth giving thought to these metrics in terms of how well they align with the objectives and needs of the organization and business unit.
− Certifications metrics
− Resource skill index
− Resource fulfillment index

Project Level Metrics
Typically, individual testers within a team are so deep in the trenches that they fail to see the big picture. These metrics, when presented along with a snapshot of where the project stands against the plan, should give every tester a clear perspective on the state of the project.
Status metrics may hide the true state of the project, as they represent a point-in-time picture of how it tracks against plan. In combination with the metrics below, however, testing team members can truly assess the qualitative state of the project. Some of the metrics included in this category are:

Test Effectiveness
• Code coverage
• Defect detection efficiency
• Defect acceptance ratio
• Exploratory testing effectiveness

Test Efficiency
• Test case design/execution productivity
• Automated test development productivity
• Defect aging
• Time distribution metrics

Schedule
• Effort variance
• Change request effort ratio

When we talk about project-level metrics, it is also important to align the metrics to the development methodology, whether Agile, Waterfall, or Iterative. The above list spans these methodologies and should be augmented with metrics specific to the project.

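As a small sketch of one of the business-unit trend metrics above, schedule variance can be computed per release and watched as a trend. The release names and durations here are illustrative, not from the whitepaper.

```python
# Schedule variance per release: (actual - planned) / planned, as a
# percentage. A shrinking variance across releases suggests estimation
# and planning are improving. All figures are illustrative.
releases = {
    "R1": {"planned_days": 20, "actual_days": 24},
    "R2": {"planned_days": 30, "actual_days": 33},
    "R3": {"planned_days": 25, "actual_days": 26},
}

def schedule_variance_pct(planned, actual):
    return (actual - planned) / planned * 100

trend = {name: schedule_variance_pct(r["planned_days"], r["actual_days"])
         for name, r in releases.items()}
for name, variance in trend.items():
    print(f"{name}: {variance:+.1f}%")
```

Plotted over time, such a series is exactly the kind of trending metric that lets a business unit separate a one-off slip from a systemic estimation problem.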


Metrics Explained
Each of the metrics referred to above is briefly described below, along with an explanation of how it is
calculated:

No. of production defects reported
Description: Count of defects reported in production, by severity, month over month (MoM)
Calculation: Not applicable (direct count)

System outages/downtime
Description: No. of outages by application, MoM
Calculation: Not applicable (direct count)

Cost of system outage/downtime
Description: Cost to business due to downtime plus the cost of effort spent in restoring the application

Customer acceptance index
Description: A measure of feedback from the user community

Testing infrastructure cost trends
Description: Cost of all environments (dev, test, UAT, pre-prod, etc.) by application area, quarter over quarter (QoQ)
Calculation: Not applicable (direct cost figures)

Ratio of testing budget against overall IT budget
Description: A trend of the ratio of the overall testing budget to overall IT spend

Testing cost per defect
Description: Indicates how efficiently the testing team operates in terms of identifying defects. In conjunction with defect detection efficiency (DDE, discussed below), it indicates how effective the testing team is.
Calculation: Overall cost of testing during a given period / no. of defects identified by the testing team during that same period

Time-to-market reduction
Description: The length of time from a product being conceived until it is released to production
Calculation: Time in weeks/months saved on a project/program by using the asset

Defect removal efficiency
Description: Percentage indicating whether the testing team was efficient in controlling defect leakage within a testing phase
Calculation: (DFiP / (DFiP + DFaP)) * 100, where DFiP = defects detected and fixed in an SDLC phase, and DFaP = defects detected beyond that SDLC phase



Test coverage
Description: % of requirements tested within a testing phase
Calculation: (No. of requirements traced back from all test cases executed and passed / total no. of requirements specified) * 100

Defect injection rate
Description: Rate at which new defects are introduced into the system
Calculation: No. of defects accepted / (total development hours + test execution hours spent)

Mean time to detect
Description: The time it takes the testing team to identify a defect
Calculation: Total test execution hours spent / no. of defects accepted

Defect density
Description: How many defects were identified relative to the size of the application
Calculation: No. of accepted defects / size of the application

Test case design productivity
Description: Number of test cases developed per person-hour of effort
Calculation: No. of test cases developed / effort spent on test case development

Test case execution productivity
Description: Number of test cases executed per person-hour of effort
Calculation: No. of test cases executed / effort spent on test case execution

Automated test development productivity
Description: Number of test scripts developed per person-hour of effort
Calculation: No. of test scripts developed / effort spent on test script development

Automation maintenance effort
Description: Number of hours expended changing existing test scripts over a period
Calculation: Not applicable (direct count of hours)

Automation ROI
Description: Indicates whether the investment in automation is justified by the cost savings it achieves. ROI = 1 indicates break-even; ROI > 1 indicates benefit in automating the test suite; ROI < 1 indicates no value in automating the test suite.
Calculation: (Cost of manual test suite execution − cost of automated test suite execution) / (cost of automation suite development + cost of licenses + cost of automation suite maintenance + cost of automation suite execution)



Code coverage
Description: Typically captured as part of unit testing; a measure of how extensively the developed code is exercised by tests
Calculation: Not applicable

Defect detection efficiency
Description: % of the total defects reported that were found during the testing stage
Calculation: (No. of accepted defects found during the testing stage / (no. of accepted defects found during the testing stage + no. of accepted defects found after the testing stage)) * 100

Exploratory testing effectiveness
Description: The percentage of defects detected in exploratory testing relative to the total defects detected in the test phase
Calculation: Defects detected in exploratory testing / total defects detected in manual, automated, and exploratory testing

Time distribution metrics
Description: Gives insight into how changes in the testing process affect test projects. A breakdown of the total time in the testing cycle (e.g. as a pie chart) shows where most of the time is spent and where to correct course.

Defect acceptance ratio
Description: Percentage of defects reported that are accepted as valid defects
Calculation: (No. of accepted defects found during the testing stage / no. of defects reported during the testing stage) * 100

Effort variance
Description: % difference between actual testing effort and estimated testing effort, expressed as a % of actual testing effort
Calculation: ((Actual testing effort − estimated testing effort) / actual testing effort) * 100

Change request effort ratio
Description: Evaluates requirements stability so that actions can be taken to improve the quality of the product/requirements documentation
Calculation: Actual effort on changes / total actual effort in a testing project

Defect aging
Description: Duration, in days or weeks, for which a defect remains open, i.e. from the time the defect was reported until it is fixed
Calculation: Date defect fixed − date defect reported
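To make the calculation logic above concrete, here is a minimal sketch of four of the formulas (defect removal efficiency, defect detection efficiency, automation ROI, and defect aging) as plain functions. All input figures are illustrative, not drawn from the whitepaper.

```python
from datetime import date

def defect_removal_efficiency(fixed_in_phase, found_after_phase):
    """(DFiP / (DFiP + DFaP)) * 100: share of a phase's defects caught in-phase."""
    total = fixed_in_phase + found_after_phase
    return 100.0 * fixed_in_phase / total if total else 0.0

def defect_detection_efficiency(found_in_testing, found_after_testing):
    """Share of all accepted defects that were caught during the testing stage."""
    total = found_in_testing + found_after_testing
    return 100.0 * found_in_testing / total if total else 0.0

def automation_roi(manual_exec_cost, automated_exec_cost,
                   development_cost, license_cost,
                   maintenance_cost, execution_cost):
    """Execution savings divided by total automation spend; > 1 suggests the
    suite paid for itself over the measured period."""
    savings = manual_exec_cost - automated_exec_cost
    investment = development_cost + license_cost + maintenance_cost + execution_cost
    return savings / investment

def defect_aging_days(reported, fixed):
    """Days from the date a defect was reported to the date it was fixed."""
    return (fixed - reported).days

# Illustrative figures (costs in the same currency units throughout).
dre = defect_removal_efficiency(fixed_in_phase=90, found_after_phase=10)
dde = defect_detection_efficiency(found_in_testing=190, found_after_testing=10)
roi = automation_roi(manual_exec_cost=120_000, automated_exec_cost=20_000,
                     development_cost=50_000, license_cost=10_000,
                     maintenance_cost=15_000, execution_cost=5_000)
aging = defect_aging_days(date(2024, 3, 1), date(2024, 3, 8))
print(f"DRE {dre:.0f}%, DDE {dde:.0f}%, ROI {roi:.2f}, aging {aging} days")
```

Keeping the formulas in one place like this also nails down the definitions (what counts as "accepted", which costs go into ROI) so that teams report comparable numbers.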



Setting Up a Metrics Program
It is common industry practice to set up a metrics program based on the GQM (Goal-Question-Metric) paradigm by
Victor Basili. The objective of the methodology is first to state, at a conceptual level, the objectives the organization is trying to
achieve; then to develop a list of questions whose answers can be evaluated against those goals; these in turn lead to the
operational data sets (measures) that help answer those questions.
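As a sketch, the GQM decomposition can be captured as a small data structure. The goal, questions, and metric names below are illustrative (the goal echoes an example used elsewhere in this paper):

```python
# A minimal sketch of a GQM (Goal-Question-Metric) decomposition as a
# nested structure; goal, questions, and metrics are illustrative.
gqm = {
    "goal": "Reduce defects in production by 95% in 2 years",
    "questions": [
        {
            "question": "How many defects leak past testing?",
            "metrics": ["Defect Detection Efficiency",
                        "No. of production defects reported"],
        },
        {
            "question": "Are defects removed in the phase they are injected?",
            "metrics": ["Defect Removal Efficiency"],
        },
    ],
}

# Every metric collected should trace back to a question, and every
# question to the goal; a metric with no question is a candidate to drop.
all_metrics = [m for q in gqm["questions"] for m in q["metrics"]]
print(all_metrics)
```

The value of writing the tree down explicitly is the traceability check at the end: it keeps the program from accumulating metrics that answer no question anyone is asking.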

Driving testing services performance requires a formal approach to setting up a metrics program.

The figure below graphically represents the approach we recommend for setting up a metrics program in any testing
organization.

[Figure: Approach for Metrics Program Setup, a cyclical process: Analyze Current State → Establish Organizational Goals → Identify Measures → Develop Metrics Program Implementation Plan → Implement Metrics → Setup Review & Refine Process]



Analyze Current State

Any implementation program that does not take the current state into consideration is working outside of context and can expect significant challenges in shifting the organizational mindset required to operate a successful metrics program. The current state analysis would include:
• Understanding the current testing process
• Gathering information on the testing tools available and used
• Listing the current set of metrics data captured

Establish Organizational Goals

Understanding the business objectives behind establishing a metrics program enables the team to develop the right set of measures. These goals can be broad and generic, or specific to start with. To develop a software testing metrics program, it is important at this stage to translate them into a set of goals that the testing organization influences or controls. Some examples of such goals are:

Business Goal – Improve customer satisfaction rating by 15% in 2 years
Testing Goal – Reduce defects in production by 95% in 2 years
Testing Goal – Improve system availability to 99.9% in 18 months

Business Goal – Improve time-to-market by 40% in 1 year
Testing Goal – Decrease testing cycle time by 50% for a release in 1 year

Business Goal – Reduce IT operational cost by 10% in 20 months
Testing Goal – Reduce testing effort for Business As Usual releases by 20% in 18 months

Identify Measures

This stage consists of three individual steps.

Develop Metrics List
At this point, take the testing goals created in the previous step and develop a list of metrics from them. Some examples of translating goals to metrics are:

Testing Goal – Reduce defects in production by 95% in 2 years
Metrics – Defect Detection Efficiency, Defect Removal Efficiency, Requirements Coverage

Testing Goal – Decrease testing cycle time by 50% for a release in 1 year
Metrics – Test Case Design/Execution Productivity, Automation Test Coverage

Conduct Gap Analysis
Using the current state analysis, determine which of these metrics can be captured with the testing process and data capture mechanisms currently in place.

Capture Current State Measurements
For each of the identified metrics, including those that exist today, list the actual maximum and minimum limits based on the data available.

This three-step formal process of identifying measures will help immensely as we move towards developing an implementation plan for establishing the metrics program.

Develop Implementation Plan

It is critically important to understand that putting a metrics program in place without establishing a process to capture the required data is meaningless. The implementation plan therefore needs to take into account the three dimensions of a testing organization.

Process
The implementation plan should detail the changes required, if any, to the testing process currently in place so that the measures needed for the metrics program are captured.

Tools
The plan should take into consideration any tools that need to be implemented for the measures to be captured effectively, for the automation of data capture, and for the reporting of the metrics data.

People
The implementation plan should detail any training required for the testing organization to capture the required data and to use the new tools. The implementation should also build in some time for familiarization with the new tools and processes.

Data Governance Framework

Any metrics reporting is only as effective as the data that goes into the report, so a very critical step in establishing the metrics program is the data governance framework. The governance framework needs to cover:
• Data standardization – Is there a common taxonomy for the metrics data across the organization?
• Data integrity – Is the data estimated or guesstimated? If so, how closely does it reflect reality across the organization?
• Data freshness – Data reflected in the metrics reports should not be stale, so that change actions taken in light of a report remain relevant to the organization
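The data-freshness requirement of the governance framework can be sketched as a simple gate that flags any metric whose last data refresh is older than a reporting threshold. The metric names, dates, and 14-day threshold are illustrative assumptions.

```python
from datetime import date, timedelta

# Last known data refresh per metric; names and dates are illustrative.
last_refreshed = {
    "Defect Detection Efficiency": date(2024, 5, 28),
    "Test Coverage": date(2024, 4, 2),
    "Schedule Variance": date(2024, 5, 30),
}

def stale_metrics(refresh_dates, as_of, max_age_days=14):
    """Return metrics whose data is older than max_age_days as of a date."""
    cutoff = as_of - timedelta(days=max_age_days)
    return sorted(name for name, d in refresh_dates.items() if d < cutoff)

stale = stale_metrics(last_refreshed, as_of=date(2024, 6, 1))
print(stale)
```

Running a check like this before each reporting cycle keeps stale numbers from driving change actions, which is precisely the data-freshness concern raised above.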



Implement Metrics Program

With the pre-work done up to this point, this step becomes straightforward. The implementation is then a matter of reviewing the process for capturing measurements, the process for analyzing those measurements, and the reporting of the metrics.

The implementation process should include feedback from all the different stakeholders in the organization and constant review of the metrics to ensure that they align with the business objectives.

Setup Review & Refine Process

As with every other system, as the organization improves its performance, whether through the metrics program or otherwise, periodic review of the metrics system is required. Therefore, as part of the metrics program setup, it is essential to put both a short-term and a long-term review process in place.

The objective of the short-term review process is to ensure that the measurement, analysis, and reporting processes enable the organization to validate that the actions taken towards testing improvements are yielding the desired results.

The long-term review process is a retrospective on the implementation of the metrics program, used to refine the process of identifying new goals, measures, and metrics for a more mature metrics program.

Conclusion

A metrics program is essential to evaluating where we are on a project, how we are doing as a testing organization, and how well we are serving our business community.

Senior executives often start making changes to the testing organization based on their perception of the issues they hear from the business or development teams, or that they glean from the basic metrics available to them. Without a proper metrics program in place, it is not clear whether there are underlying issues, and there is no way to know whether the change strategy is actually having an impact until much energy and many resources have been expended.

At the other end of the spectrum is the approach of developing a raft of metrics and managing the testing organization on some or all of them. This approach has in many cases led to a disgruntled testing organization (e.g. when teams are judged on test case execution rate) whose members believe they are being judged unfairly, or has encouraged bad behavior within the testing team (e.g. when tracking total defects reported).

It is therefore critically important for any testing organization that is looking to improve its performance to:

1. Make a concerted effort to bring together all the stakeholders – business teams, IT management, development teams, testing teams, and other support teams – to develop a set of goals for measuring performance

2. Employ and/or leverage the right tools to automate as much of the data capture and reporting as possible

3. Develop a set of measures that are aligned with the business performance goals set for the organization

In short: implementing a robust metrics program is not a one-time event; it is a journey that requires experimentation and learning.

References
• "Economics of Software Quality" – Capers Jones & Olivier Bonsignour
• "Managing the Testing Process" – Rex Black
• "Metrics and Performance Measurement System for the Lean Enterprise" – Professor Deborah Nightingale
• Better Software Magazine – various articles
• Testing Experience – various articles



Santosh Subramanian
Testing Practice
Santosh has over 22 years of experience in IT consulting
with Fortune 500 corporations. He has successfully built
and operated multiple testing organizations across different
industries including Financial Services, Insurance, and
Logistics. Since joining Mphasis in August 2012, Santosh
has successfully led opportunities from pre-sales to
proposal creation, to contract negotiation, transition, and
testing services delivery. Currently, he is working with our
strategic accounts providing point testing solutions.

About Mphasis
Mphasis is a global Technology Services and Solutions company specializing in the areas of Digital and Governance, Risk & Compliance. Our solution
focus and superior human capital propels our partnership with large enterprise customers in their Digital Transformation journeys and with global
financial institutions in the conception and execution of their Governance, Risk and Compliance Strategies. We focus on next generation technologies for
differentiated solutions delivering optimized operations for clients.

For more information, contact: [email protected]


USA
460 Park Avenue South, Suite #1101
New York, NY 10016, USA
Tel.: +1 212 686 6655

UK
88 Wood Street
London EC2V 7RS, UK
Tel.: +44 20 8528 1000

INDIA
Bagmane World Technology Center, Marathahalli Ring Road
Doddanakundhi Village, Mahadevapura
Bangalore 560 048, India
Tel.: +91 80 3352 5000

www.mphasis.com
Copyright © Mphasis Corporation. All rights reserved.