Software Test Metrics QA
Test Metrics: Objectives
Why Measure?
Definition
Metrics Philosophy
Types of Metrics
Interpreting the Results
Metrics Case Study
Q&A
Slide 2
Why Measure?
Software bugs cost the U.S. economy an estimated $59.5 billion per year. An estimated $22.2 billion of that could be eliminated by improved testing that enables earlier and more effective identification and removal of defects.
Why Measure?
You cannot control what you cannot measure.
Slide 4
Definition
Test Metrics:
  Are a standard of measurement
  Gauge the effectiveness and efficiency of several software development activities
  Are gathered and interpreted throughout the test effort
  Provide an objective measurement of the success of a software project
Slide 5
Metrics Philosophy
Keep It Simple
Make It Meaningful
Track It
Use It
When tracked and used properly, test metrics can aid in software development process improvement by providing pragmatic and objective evidence for process change initiatives.
Slide 6
Metrics Philosophy
Keep It Simple
  Measure the basics first
  Clearly define each metric
  Get the most bang for your buck
Make It Meaningful
Track It
Use It
Slide 7
Metrics Philosophy
Keep It Simple
Make It Meaningful
  Metrics are useless if they are meaningless (use the Goal-Question-Metric (GQM) model)
  Must be able to interpret the results
  Metrics interpretation should be objective
Track It
Use It
Slide 8
Metrics Philosophy
Keep It Simple
Make It Meaningful
Track It
  Incorporate metrics tracking into the Run Log or defect tracking system
  Automate the tracking process to remove time burdens (see the sketch below)
Use It
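As an illustration of the automation bullet above, a run log exported from the defect tracking system could be tallied by script instead of by hand. This is a minimal sketch, assuming a CSV export with test_case_id and status columns and the status values shown; none of these names come from the deck.

import csv
from collections import Counter

# Tally base metrics from an exported run log (hypothetical format).
# Assumed columns: test_case_id, status
# Assumed statuses: Passed, Failed, Blocked, Under Investigation
def tally_run_log(path):
    latest = {}      # most recent status recorded per test case
    executions = 0   # every Passed/Failed row counts as one execution
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            if row["status"] in ("Passed", "Failed"):
                executions += 1
            latest[row["test_case_id"]] = row["status"]
    counts = Counter(latest.values())
    return {
        "# Executed": counts["Passed"] + counts["Failed"],
        "# Passed": counts["Passed"],
        "# Failed": counts["Failed"],
        "# Blocked": counts["Blocked"],
        "# Under Investigation": counts["Under Investigation"],
        "Total Executions": executions,
    }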
Slide 9
Metrics Philosophy
Keep It Simple
Make It Meaningful
Track It
Use It
  Interpret the results
Slide 10
Types of Metrics
Base Metrics
Raw data gathered by Test Analysts
Tracked throughout the test effort
Used to provide project status and evaluations/feedback
Examples: # Test Cases, # Executed, # Passed, # Failed, # Under Investigation, # Blocked, # 1st Run Failures, # Re-Executed, Total Executions, Total Passes, Total Failures (recorded in the sketch below)
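A minimal sketch of a record for the base metrics listed above; the field names mirror the slide, but the structure itself is an assumption, not something the deck prescribes.

from dataclasses import dataclass

@dataclass
class BaseMetrics:
    # Raw counts gathered by the Test Analysts during the test effort.
    total_test_cases: int = 0
    executed: int = 0
    passed: int = 0
    failed: int = 0
    under_investigation: int = 0
    blocked: int = 0
    first_run_failures: int = 0
    re_executed: int = 0
    total_executions: int = 0
    total_passes: int = 0
    total_failures: int = 0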
Slide 11
Types of Metrics
Base Metrics
# Blocked: the number of distinct test cases that cannot be executed during the test effort due to an application or environmental constraint. Defines the impact of known system defects on the ability to execute specific test cases.
Slide 12
Types of Metrics
Calculated Metrics
Tracked by the Test Lead/Manager
Convert base metrics into useful data
Combinations of metrics can be used to evaluate process changes
Examples: % Complete, % Test Coverage, % Test Cases Passed, % Test Cases Blocked, 1st Run Fail Rate, Overall Fail Rate, % Defects Corrected, % Rework, % Test Effectiveness, Defect Discovery Rate (formulas sketched below)
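The deck does not spell out the formulas, but each calculated metric is a ratio of two base counts. A sketch, using the BaseMetrics record from above; the denominators are assumptions inferred from, and checked against, the worked example on Slide 15.

def calculated_metrics(m):
    # m is a BaseMetrics record; percentages inferred from Slide 15.
    def pct(num, den):
        # Percentage helper; avoids division by zero early in the effort.
        return round(100.0 * num / den, 1) if den else 0.0
    return {
        "% Complete":           pct(m.passed, m.total_test_cases),
        "% Test Coverage":      pct(m.executed, m.total_test_cases),
        "% Test Cases Passed":  pct(m.passed, m.executed),
        "% Test Cases Blocked": pct(m.blocked, m.total_test_cases),
        "1st Run Fail Rate":    pct(m.first_run_failures, m.executed),
        "Overall Fail Rate":    pct(m.total_failures, m.total_executions),
    }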
Slide 15
Base Metrics
  Total # of TCs: 100
  # Executed: 13
  # Passed: 11
  # Failed: 1
  # Under Investigation: 1
  # Blocked: 2
  # Unexecuted: 87
  # Re-executed: 1
  Total Executions: 15
  Total Passes: 11
  Total Failures: 3
  1st Run Failures: 2
Calculated Metrics
  % Complete: 11.0%
  % Test Coverage: 13.0%
  % TCs Passed: 84.6%
  % TCs Blocked: 2.0%
  % 1st Run Failures: 15.4%
  % Failures: 20.0%
  % Defects Corrected: 66.7%
  % Rework: 100.0%
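Feeding the Slide 15 base counts through the inferred formulas reproduces the calculated column (11/100 = 11.0%, 13/100 = 13.0%, 11/13 = 84.6%, 2/100 = 2.0%, 2/13 = 15.4%, 3/15 = 20.0%); this is the check the sketch above was inferred from.

m = BaseMetrics(total_test_cases=100, executed=13, passed=11, failed=1,
                under_investigation=1, blocked=2, first_run_failures=2,
                re_executed=1, total_executions=15, total_passes=11,
                total_failures=3)
print(calculated_metrics(m))
# {'% Complete': 11.0, '% Test Coverage': 13.0, '% Test Cases Passed': 84.6,
#  '% Test Cases Blocked': 2.0, '1st Run Fail Rate': 15.4, 'Overall Fail Rate': 20.0}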
Slide 16
Result: Potential improvements are not implemented, leaving process gaps throughout the SDLC. This reduces the effectiveness of the project team and the quality of the applications.
Slide 17
Volvo IT of North America had little or no testing involvement in its IT projects. The organization's projects were primarily maintenance related and operated in a COBOL/CICS/Mainframe environment. The organization wanted to migrate to newer technologies and felt that testing involvement would assure and enhance this technological shift. While establishing a test team, we also instituted a metrics program to track the benefits of having a QA group.
Slide 19
Project V
  Introduced a test methodology and metrics program
  Project was 75% complete (development was nearly finished)
  Test team developed 355 test scenarios
  1st Run Fail Rate: 30.7%
  Overall Fail Rate: 31.4%
  Defect Repair Costs: $519,000
Slide 20
Project T
  Instituted requirements walkthroughs and design reviews with test team input
  Same resources comprised both project teams
  Test team developed 345 test scenarios
  1st Run Fail Rate: 17.9%
  Overall Fail Rate: 18.0%
  Defect Repair Costs: $346,000
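Straight arithmetic on the figures quoted for the two projects shows the size of the improvement; the snippet below only restates the slide numbers.

# Figures quoted on the Project V and Project T slides.
v_cost, t_cost = 519_000, 346_000
v_fail, t_fail = 30.7, 17.9  # 1st Run Fail Rates (%)

print(f"Repair cost savings: ${v_cost - t_cost:,} "
      f"({100 * (v_cost - t_cost) / v_cost:.1f}% lower)")
print(f"1st Run Fail Rate drop: {v_fail - t_fail:.1f} points")
# Repair cost savings: $173,000 (33.3% lower)
# 1st Run Fail Rate drop: 12.8 points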
Slide 21
Project T
Every project moving forward that applies the same QA principles can achieve the same type of savings.
Slide 22
Q&A