

Metrics for Testing


What is a Metric?
‘Metric’ is a measure to quantify software, software development resources, and/or the software development
process. A Metric can quantify any of the following factors:
• Schedule,
• Work Effort,
• Product Size,
• Project Status, and
• Quality Performance
Measuring enables….
Metrics enable estimation of future work.
Consider the case of testing: deciding whether the product is fit for shipment or delivery depends on the
rate at which defects are found and fixed. The number of defects found and fixed is one such metric.
(www.processimpact.com)
As defined in the MISRA Report, it is beneficial to classify metrics according to their usage. IEEE 928.1 [4] identifies two classes:
i) Process – Activities performed in the production of the Software
ii) Product – An output of the Process, for example the software or its documentation.
Defects are analyzed to identify the major causes of defects and the phase that introduces the most defects.
This can be achieved by performing a Pareto analysis of defect causes and defect-introduction phases. The
main requirement for any of these analyses is software defect metrics.

A few of the defect metrics are:


Defect Density: (No. of defects reported by SQA + No. of defects reported by peer review) / Actual size.
The size can be in KLOC, SLOC, or Function Points, depending on the method used in the organization to
measure the size of the software product.
SQA is considered to be part of the software testing team.
Test Effectiveness: t / (t + Uat), where t = total no. of defects reported during testing and Uat = total no.
of defects reported during user acceptance testing.
User acceptance testing is generally carried out using the acceptance test criteria according to the
acceptance test plan.
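A minimal Python sketch of the two formulas above, using hypothetical counts for illustration (the numbers are assumptions, not data from any project):

# Hypothetical illustration of Defect Density and Test Effectiveness.
def defect_density(defects_sqa, defects_peer_review, size_kloc):
    # Defects found by SQA and peer review per unit of actual size (KLOC here).
    return (defects_sqa + defects_peer_review) / size_kloc

def test_effectiveness(t, uat):
    # t / (t + Uat): share of defects caught during testing rather than during UAT.
    return t / (t + uat)

print(defect_density(defects_sqa=40, defects_peer_review=10, size_kloc=25))  # 2.0 defects per KLOC
print(test_effectiveness(t=45, uat=5))                                       # 0.9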
Defect Removal Efficiency: (Total no. of defects removed / Total no. of defects injected) * 100, at various
stages of the SDLC. This metric is explained in detail under "Some Metrics Explained" below.

Defect Distribution: Percentage of Total defects Distributed across Requirements Analysis, Design
Reviews, Code Reviews, Unit Tests, Integration Tests, System Tests, User Acceptance Tests, Review by Project
Leads and Project Managers.

Software Process Metrics are measures which provide information about the performance of the
development process itself.
Purpose:
1. Provide an indicator of the ultimate quality of the software being produced
2. Assist the organization in improving its development process by highlighting inefficient or
error-prone areas of the process
Software Product Metrics are measures of some attribute of the software product (for example, the source code).
Purpose:
1. Used to assess the quality of the output
What are the most general metrics?
Requirements Management
Metrics Collected
1. Requirements by state – Accepted, Rejected, Postponed
2. No. of baselined requirements
3. Number of requirements modified after baselining
Derived Metrics
1. Requirements Stability Index (RSI)
2. Requirements to Design Traceability
Project Management
Metrics Collected
1. Planned no. of days and Actual no. of days
2. Estimated effort and Actual effort
3. Estimated cost and Actual cost
4. Estimated size and Actual size
Derived Metrics
1. Schedule Variance
2. Effort Variance
3. Cost Variance
4. Size Variance
Testing & Review
Metrics Collected
1. No. of defects found by Reviews
2. No. of defects found by Testing
3. No. of defects found by Client
4. Total No. of defects found by Reviews

Derived Metrics
1. Overall Review Effectiveness (ORE)
2. Overall Test Effectiveness
Peer Reviews
Metrics Collected
1. KLOC / FP per person hour (Language) for Preparation
2. KLOC / FP per person hour (Language) for Review Meeting
3. No. of pages / hour reviewed during preparation
4. Average number of defects found by Reviewer during Preparation
5. No. of pages / hour reviewed during Review Meeting
6. Average number of defects found by Reviewer during Review Meeting
7. Review Team Size Vs Defects
8. Review speed Vs Defects
9. Major defects found during Review Meeting
10. Defects Vs Review Effort
Derived Metrics
1. Review Effectiveness (Major)
2. Total number of defects found by reviews for a project
Other Metrics
Metrics Collected
1. No. of Requirements Designed
2. No. of Requirements not Designed
3. No. of Design elements matching Requirements
4. No. of Design elements not matching Requirements
5. No. of Requirements Tested
6. No. of Requirements not Tested
7. No. of Test Cases with matching Requirements
8. No. of Test Cases without matching Requirements
9. No. of Defects by Severity
10. No. of Defects by stage of - Origin, Detection, Removal
Derived Metrics
1. Defect Density
2. No. of Requirements Designed Vs not Designed
3. No. of Requirements Tested Vs not Tested
4. Defect Removal Efficiency (DRE)
Some Metrics Explained
Schedule Variance (SV)
Description
This metric gives the variation of the actual schedule vs. the planned schedule. It is calculated stage-wise
for each project
Formula
SV = [(Actual no. of days – Planned no. of days) / Planned no. of days] * 100
Metric Representation
Percentage
Calculated at
Stage completion
Calculated from
Software Project Plan for planned number of days for completing each stage and for actual number of days
taken to complete each stage
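As a minimal sketch of the SV formula above (the day counts are hypothetical):

def schedule_variance(actual_days, planned_days):
    # SV = ((actual - planned) / planned) * 100, expressed as a percentage.
    return (actual_days - planned_days) / planned_days * 100

print(schedule_variance(actual_days=55, planned_days=50))  # 10.0, i.e. the stage overran by 10%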
Defect Removal Efficiency (DRE)
Description
This metric indicates the effectiveness of defect identification and removal at each stage of a given
project
Formula
• Requirements: DRE = [(Requirement defects corrected during Requirements phase) / (Requirement
defects injected during Requirements phase)] * 100
• Design: DRE = [(Design defects corrected during Design phase) / (Defects identified during
Requirements phase + Defects injected during Design phase)] * 100
• Code: DRE = [(Code defects corrected during Coding phase) / (Defects identified during
Requirements phase + Defects identified during Design phase + Defects injected during coding
phase)] * 100
• Overall: DRE = [(Total defects corrected at all phases before delivery) / (Total defects detected at
all phases before and after delivery)] * 100
Metric Representation
Percentage
Calculated at
Stage completion or Project Completion
Calculated from
Bug Reports and Peer Review Reports
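To illustrate the phase-wise DRE formulas above, here is a minimal Python sketch; the defect counts are hypothetical:

def dre(defects_corrected, defects_in_scope):
    # DRE = (defects corrected in the phase / defects in scope for the phase) * 100.
    return defects_corrected / defects_in_scope * 100

# Requirements phase: corrected vs. injected in the same phase.
print(dre(defects_corrected=18, defects_in_scope=20))    # 90.0
# Overall: corrected before delivery vs. detected before and after delivery.
print(dre(defects_corrected=95, defects_in_scope=100))   # 95.0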

Overall Review Effectiveness


Description
This metric will indicate the effectiveness of the Review process in identifying the defects for a given
project
Formula
• Overall Review Effectiveness: ORE = [(Number of defects found by reviews) / (Total number of
defects found by reviews + Number of defects found during Testing + Number of defects found
during post-delivery)] * 100
Metric Representation
• Percentage
Calculated at
• Monthly
• Stage completion or Project Completion
Calculated from
• Peer reviews, Formal Reviews
• Test Reports
• Customer Identified Defects
Overall Test Effectiveness (OTE)
Description
This metric will indicate the effectiveness of the Testing process in identifying the defects for a given
project during the testing stage
Formula
• Overall Test Effectiveness: OTE = [(Number of defects found during testing) / (Total number of
defects found during Testing + Number of defects found during post delivery)] * 100
Metric Representation
• Percentage
Calculated at
• Monthly
• Build completion or Project Completion
Calculated from
• Test Reports
• Customer Identified Defects
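A minimal sketch of the ORE and OTE formulas above, using hypothetical defect counts:

def overall_review_effectiveness(review_defects, test_defects, post_delivery_defects):
    # ORE = review defects / (review + testing + post-delivery defects) * 100.
    return review_defects / (review_defects + test_defects + post_delivery_defects) * 100

def overall_test_effectiveness(test_defects, post_delivery_defects):
    # OTE = testing defects / (testing + post-delivery defects) * 100.
    return test_defects / (test_defects + post_delivery_defects) * 100

print(overall_review_effectiveness(60, 35, 5))  # 60.0
print(overall_test_effectiveness(35, 5))        # 87.5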
Effort Variance (EV)
Description
This metric gives the variation of the actual effort vs. the estimated effort. It is calculated stage-wise
for each project
Formula
• EV = [(Actual person hours – Estimated person hours) / Estimated person hours] * 100
Metric Representation
• Percentage
Calculated at
• Stage completion as identified in SPP
Calculated from
• Estimation sheets for estimated values in person hours, for each activity within a given stage and
Actual Worked Hours values in person hours.
Cost Variance (CV)
Description
This metric gives the variation of the actual cost vs. the estimated cost. It is calculated stage-wise
for each project
Formula
• CV = [(Actual Cost – Estimated Cost) / Estimated Cost] * 100
Metric Representation
• Percentage
Calculated at
• Stage completion
Calculated from
• Estimation sheets for estimated values in dollars or rupees, for each activity within a given stage
• Actual cost incurred

Size Variance
Description
This metric gives the variation of the actual size vs. the estimated size. It is calculated stage-wise
for each project
Formula
• Size Variance = [(Actual Size – Estimated Size) / Estimated Size] * 100
Metric Representation
• Percentage
Calculated at
• Stage completion
• Project Completion
Calculated from
• Estimation sheets for estimated values in Function Points or KLOC
• Actual size
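Effort Variance, Cost Variance, and Size Variance above all share the same form, so one helper can illustrate all three; the values below are hypothetical:

def variance_pct(actual, estimated):
    # (actual - estimated) / estimated * 100; used for effort, cost, and size variance alike.
    return (actual - estimated) / estimated * 100

print(variance_pct(actual=1250, estimated=1000))  # 25.0, e.g. effort overrun in person hours
print(variance_pct(actual=9.5, estimated=10.0))   # -5.0, e.g. size in KLOC slightly under estimate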

Productivity on Review Preparation – Technical

Description
This metric indicates the effort spent on preparation for review. Calculate it for each language used
in the project
Formula
For every language (such as C, C++, Java, XSL, etc…) used, calculate

• (KLOC or FP ) / hour (* Language)


*Language – C, C++, Java, XML, etc…
Metric Representation
• KLOC or FP per hour
Calculated at
• Monthly
• Build completion
Calculated from
• Peer Review Report
Number of defects found per Review Meeting
Description
This metric will indicate the number of defects found during the Review Meeting across various stages of
the Project
Formula
• Number of defects per Review Meeting
Metric Representation
• Defects / Review Meeting
Calculated at
• Monthly
• Completion of Review
Calculated from
• Peer Review Report
• Peer Review Defect List
Review Team Efficiency (Review Team Size Vs Defects Trend)
Description
This metric will indicate the Review Team size and the defects trend. This will help to determine the
efficiency of the Review Team
Formula
• Review Team Size to the Defects trend
Metric Representation
• Ratio
Calculated at
• Monthly
• Completion of Review
Calculated from
• Peer Review Report
• Peer Review Defect List
Review Effectiveness
Description
This metric will indicate the effectiveness of the Review process
Formula
Review Effectiveness = [(Number of defects found by reviews) / (Total number of defects found by
reviews + Number of defects found by testing)] * 100
Metric Representation
• Percentage
Calculated at
• Completion of Review or Completion of Testing stage
Calculated from
• Peer Review Report
• Peer Review Defect List
• Bugs Reported by Testing
Total number of defects found by Reviews
Description
This metric will indicate the total number of defects identified by the Review process. The defects are
further categorized as High, Medium or Low
Formula
Total number of defects identified by reviews in the project
Metric Representation
• Defects per Stage
Calculated at
• Completion of Reviews
Calculated from
• Peer Review Report
• Peer Review Defect List

Defects vs. Review effort – Review Yield


Description
This metric indicates the review effort expended in each stage relative to the defects found
Formula
• Defects / Review effort
Metric Representation
• Defects / Review effort
Calculated at
• Completion of Reviews
Calculated from
• Peer Review Report
• Peer Review Defect List
Requirements Stability Index (RSI)
Description
This metric gives the stability factor of the requirements over a period of time, after the requirements have
been mutually agreed and baselined between Ivesia Solutions and the Client
Formula
• RSI = 100 * [ (Number of baselined requirements) – (Number of changes in requirements after the
requirements are baselined) ] / (Number of baselined requirements)
Metric Representation
• Percentage
Calculated at
• Stage completion and Project completion
Calculated from
• Change Request
• Software Requirements Specification
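A minimal sketch of the RSI formula above, with hypothetical requirement counts:

def requirements_stability_index(baselined, changed_after_baseline):
    # RSI = 100 * (baselined - changes after baselining) / baselined.
    return 100 * (baselined - changed_after_baseline) / baselined

print(requirements_stability_index(baselined=120, changed_after_baseline=18))  # 85.0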

Change Requests by State


Description
This metric provides an analysis of the state of the requirements
Formula
• Number of accepted requirements
• Number of rejected requirements
• Number of postponed requirements
Metric Representation
• Number
Calculated at
• Stage completion
Calculated from
• Change Request
• Software Requirements Specification
Requirements to Design Traceability
Description
This metric provides an analysis of the number of requirements designed vs. the number of requirements
not designed
Formula
• Total Number of Requirements
• Number of Requirements Designed
• Number of Requirements not Designed
Metric Representation
• Number
Calculated at
• Stage completion
Calculated from
• SRS
• Detail Design
Design to Requirements Traceability
Description
This metric provides an analysis of the number of design elements matching requirements vs. the number
of design elements not matching requirements
Formula
• Number of Design elements
• Number of Design elements matching Requirements
• Number of Design elements not matching Requirements
Metric Representation
• Number
Calculated at
• Stage completion
Calculated from
• SRS
• Detail Design
Requirements to Test case Traceability
Description
This metric provides an analysis of the number of requirements tested vs. the number of requirements
not tested
Formula
• Number of Requirements
• Number of Requirements Tested
• Number of Requirements not Tested
Metric Representation
• Number
Calculated at
• Stage completion
Calculated from
• SRS
• Detail Design
• Test Case Specification
Test cases to Requirements traceability
Description
This metric provides an analysis of the number of test cases matching requirements vs. the number of
test cases not matching requirements
Formula
• Number of Requirements
• Number of Test cases with matching Requirements
• Number of Test cases not matching Requirements
Metric Representation
• Number
Calculated at
• Stage completion
Calculated from
• SRS
• Test Case Specification
Number of defects in coding found during testing by severity
Description
This metric provides an analysis of the number of defects by severity
Formula
• Number of Defects
• Number of defects of low severity
• Number of defects of medium severity
• Number of defects of high severity
Metric Representation
• Number
Calculated at
• Stage completion
Calculated from
• Bug Report
Defects – Stage of origin, detection, removal
Description
This metric provides an analysis of the number of defects by stage of origin, detection, and removal.
Formula
• Number of Defects
• Stage of origin
• Stage of detection
• Stage of removal
Metric Representation
• Number
Calculated at
• Stage completion
Calculated from
• Bug Report
Defect Density
Description
This metric provides an analysis of the number of defects relative to the size of the work product
Formula
Defect Density = [Total no. of Defects / Size (FP / KLOC)] * 100
Metric Representation
• Percentage
Calculated at
• Stage completion
Calculated from
• Defects List
• Bug Report
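A minimal sketch of this percentage form of defect density (note that the earlier definition under "A few of the defect metrics" omits the * 100 factor); the numbers are hypothetical:

def defect_density_pct(total_defects, size):
    # Total defects divided by size (FP or KLOC), expressed as a percentage per the formula above.
    return total_defects / size * 100

print(defect_density_pct(total_defects=12, size=400))  # 3.0, e.g. 12 defects in a 400 FP work product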
How do you determine metrics for your application?
The objective of metrics is not only to measure but also to understand progress toward the organizational goal.
The parameters for determining the metrics for an application are:
• Duration
• Complexity
• Technology Constraints
• Previous Experience in Same Technology
• Business Domain
• Clarity of the scope of the project
