Deploying Defect Analysis
W1
Wednesday, February 17, 1999 11:00AM
International Conference On
Introduction
Background
Lockheed Martin GES develops the radar and combat systems for AEGIS guided missile cruisers and destroyers for the U.S. Navy and international customers. In early 1997 a GES goal was to achieve a Software Engineering Institute (SEI) Level 3 rating. This included:
An upgrade of the current peer review practice to include software inspection methodology
A defect analysis program
The peer review upgrade and defect analysis program were implemented in two months
Presentation Overview
Presentation Objectives
To share our experience initiating a defect analysis program
To describe what worked well and what problems were encountered
Agenda
Defect analysis implementation process description
Defect Analysis Goal Establishment
Process Definition
Measurement Determination
Reports
Tool Development
Training
Follow-up and Process Improvement
Summary of results
Process Definition
The next step: develop in-house expertise and define the software inspection and defect analysis procedure
Brought in outside expert in software inspections and inspection data analysis
Performed training, i.e., orientations and workshops
Reviewed other material available, e.g., books, articles, technical reports, the internet, and conference material
Measurement Determination
Predicted measurements that would be useful in diagnosing possible causes of process inefficiency, e.g., peer review preparation time
Used measurements in common use within the software industry
Easier to sell
Useful as benchmark data for comparing with our data
Would aid our data analysis and understanding of the data
Reports
[Peer Review Record form example: checklists used, reviewer roles and prep times, comments, and distribution list; cells with formulas are computed values]
Defect Log
Both a data entry form for the data for each defect found at a peer review and a record. Includes:
Defect type
Defect origin
Defect severity
Defect category
Defect location
Time to fix
Date closed
Other miscellaneous information
[Defect Log example]
Defect Category: Data, Documentation, Interface, Logic, Maintainability, Performance, Standards, Other
Defect Severity: Major, Minor
Defect Type: Missing, Wrong, Extra
Defect Origin: Reqs, Design, Code, Unit Test, EI&T, Maintenance
Peer Review Summary - provides feedback about the product and the types of defects that could be eliminated earlier next time. Includes:
General peer review information
Defect Type by Defect Category profile - a profile of the defects found by Defect Type, Defect Category, and Defect Severity
Defect Origin profile - the major and minor defects found plotted against the phase in which the defects were injected
Peer review efficiency and effectiveness measures - how efficient the review was, its defect-finding effectiveness, and whether the review was within normal expected data ranges
[Sample Peer Review Summary: Defect Type by Defect Category matrix, Defect Origin table, and peer review efficiency/effectiveness measurements]
Project Quality Report
Produced monthly from the peer review and defect data
Includes charts and tables depicting: peer review status, product quality, and process efficiency & effectiveness
Defect Severity, Category, and Type Profiles
Defect Analysis by Phase
Defect Density for Documents and for Code
Peer Review Status and Process Metrics
All project personnel are given access to the report
Data analysis recorded describing trends and anomalies observed in the data
[Charts: defect profiles by category and phase (major and minor defects, all elements), and Defect Analysis by Phase showing major defects injected, removed, and escaped with % removed]
Tool Development
Support tools needed to collect and store the data, and generate the reports
Decided to start with a simple Microsoft (MS) Excel spreadsheet system:
MS Excel could easily generate the required charts and tables
Simple two-dimensional database structures in Excel were sufficient
MS Excel expertise was more readily available
MS Excel databases easily exportable to any future DB application
MS Excel tools can be enhanced through the creation of macros
Key components of the spreadsheet system are three Excel workbook files, each comprised of multiple spreadsheets:
Peer Review Report
Contains the Peer Review Record, Defect Log, and Peer Review Summary worksheets
Is a template file designed for direct data entry during the peer review
Contains worksheets that organize the specific data for transfer to the two databases
Contains macros used to facilitate data entry and printing, to provide instructional help, and to audit the report for data entry errors
[Diagram: data collection and reporting flow among the Peer Review Report, Peer Review Database (Peer Review Metrics), and Defect Database (Defect Profiles); charts & tables selected and viewed online]
Project Databases
Peer Review Database
Includes worksheets for the Peer Review Status and Peer Review Process Metrics charts within the Project Quality Report
Defect Database
Includes worksheets for generating the defect profile charts within the Project Quality Report
Macros
For transferring data from the input data files to the databases
For controlling the pivot table page settings on all the pivot tables in the workbook at the same time
For printing the reports
Training
Training needed to roll out new practices to managers and engineers on a pilot project
Software inspection methodology training performed earlier by a consultant
Orientation course given for both the new peer review process and the defect analysis program. Covered:
The new written procedures for peer reviews and defect analysis
The rationale and use for each new measurement
The Peer Review and Project Quality Reports
The support tool system of Excel spreadsheets
Follow-up and Process Improvement
A focus group formed to identify issues
Primary issue: the annoyance of additional paperwork
Very few entering peer review data directly into the spreadsheets
None of the workrooms equipped with PCs
Filling out multiple forms for small changes, e.g., problem fixes
Summary
The defect analysis program was implemented successfully in a relatively short period of time. Critical Success Factors:
Clearly understood goals to focus the effort and prevent rework
Using the SEI's CMM as a framework to provide direction and focus
Utilizing resources and expertise from outside the organization
Starting with simple tools at first and then improving them later
Training, training, training
Continual monitoring and follow-up; using a focus group
Implementation Process
Defect Analysis Program Goal Establishment
The first steps in implementing a defect analysis program were to establish the purpose and goals of the program and its role in supporting the organization's software process improvement goals. This was essential to set the scope of the task and facilitate decision making during the design and implementation of the program. The primary goals of the program were:
1. To satisfy SEI Level 3 CMM criteria, particularly certain key practices within the Peer Review and Software Product Engineering Key Process Areas (KPAs). [1]
2. To set the groundwork for SEI Level 4 by collecting data for assessing process stability and to support the analysis associated with defect removal and defect prevention efforts.
In using the CMM to provide direction in our efforts to improve, it was determined that our peer review and defect analysis procedures must be documented, that training be provided for all involved personnel, and that the following data be collected and analyzed:
Data on the conduct and results of peer reviews
Measurements to determine the status of the peer review activities
Data on defects detected during peer reviews and testing
The value of peer reviews, especially software inspections, in improving product quality, reducing rework, improving productivity, reducing cycle time, and reducing cost is well documented. [2] [3] Therefore, it is very important to measure the results, status, and defect-removal efficiency of the peer review process and look for opportunities to improve it.
Defect data collected from peer reviews, testing, and operational use provide insight into the quality of the software development processes and products, insight that can be used to initiate process improvement.
Process Definition
The next steps in the implementation process were to develop in-house expertise and then define the defect analysis procedure. An essential requirement for this step was bringing in expertise from outside of the organization. This was accomplished in three ways:
1. By bringing in an outside consultant who was an expert in software inspections and defect analysis.
2. By utilizing the documentation made available for sharing within the corporation from other Lockheed Martin businesses.
3. By reviewing some of the voluminous material available on the subjects of software inspections and defect analysis.
The outside consultant conducted training including two software inspection orientations (one for software engineers and one for managers) and a software inspection workshop for the engineers. His training material provided excellent examples of how defect data and software inspection data can be utilized. Other miscellaneous material was readily available and helpful, including books, SEI Technical Reports [4] [5], and information from the Internet. Utilizing the shared process documentation, including procedural descriptions and guidebooks, from other Lockheed Martin businesses was especially important in expediting this step of research and process definition. Lockheed Martin is a large corporation with a significant number of SEI Level 3, 4, and 5 organizations. Several of these sites have their process documentation available over the corporate intranet. Subsequently, producing detailed procedure descriptions for peer reviews and defect analysis became an editing task to adapt the new procedures to the GES culture and add the best aspects of the other material used.
Measurement Determination
As part of the defect collection and analysis procedure definition, a determination was made of the specific measurements that were needed. These were derived from the defect analysis goals: to measure peer review status, to measure the efficiency of the peer review process, and to collect data on the defects being inserted into the software products to support future analysis. The measurable attributes of these goals were determined. For example, a measurable attribute of peer review effectiveness is the number of defects that escape through a peer review into later development phases, such as a code defect being found in testing. To determine defect leakage such as this, for each defect the development phase where the defect was inserted must be recorded, as well as the defect removal activity (e.g., testing) where the defect was found.

Another aspect of determining measurement requirements was to predict relevant data that could be useful in diagnosing possible causes of process inefficiency. Requiring that peer review preparation time be recorded is an example of this type of measurement. The amount of time individuals spend preparing for a peer review can be assumed to have a direct bearing on the effectiveness of each review. Since it can be anticipated that eventually an analysis will be made as to how the peer review process could be improved, data on preparation time would be considered important information. Therefore, it was included as a measurement requirement.

In choosing the measurements to make in support of the defect analysis goals, the reference material described earlier was used both for guidance and to ensure we used measurements that were in common use within the software industry. This was for two reasons:
It was felt it would be easier to sell the new measurement requirements to management and the software engineers if plenty of examples could be given that much of the industry makes the same measurements.
Using common measurements would give us the option to use industry data as benchmark data for comparing with our data. This would aid our data analysis and our understanding of what the data may indicate about our defect-removal processes and the quality of our work products.
The required measurements were of two types: collected measurements and derived measurements calculated from the collected data. Table 1 lists the measurement requirements that were defined. The derived measurements are indicated by a mark in the Calc column.
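The derived measurements are simple sums and ratios of the collected ones. The sketch below (in Python, not the Excel formulas the project used) shows how a few of them follow from the collected data; the function and argument names are illustrative assumptions.

```python
# Minimal sketch of how derived peer review measurements can be computed from
# the collected ones (formulas follow the Peer Review Summary definitions).
# Names are illustrative, not those of the GES tools.

def derived_measurements(num_reviewers, prep_times_hrs, meeting_time_hrs,
                         fix_time_hrs, defects, major_defects, sloc=0, pages=0):
    total_prep = sum(prep_times_hrs)                  # Total Prep Time (LH)
    total_mtg = num_reviewers * meeting_time_hrs      # # Reviewers * Mtg. Time
    detection = total_prep + total_mtg                # Total Detection Effort (LH)
    inspection = detection + fix_time_hrs             # Total Inspection Time (LH)
    m = {
        "Ave. Prep Time per Reviewer": total_prep / num_reviewers,
        "Ave. Defects Found/Detection Effort Hr.": defects / detection,
        "Ave. Major Defects Found/Detection Effort Hr.": major_defects / detection,
        "Defects Logged per Hour": defects / meeting_time_hrs,
        "Ave. Inspection Time per Defect": inspection / defects if defects else 0.0,
    }
    if sloc:
        m["Meeting Review Rate - SLOC/HR"] = sloc / meeting_time_hrs
        m["Ave. Major Defects Found per KSLOC"] = major_defects / (sloc / 1000)
    if pages:
        m["Meeting Review Rate - Pgs./HR"] = pages / meeting_time_hrs
        m["Ave. Defects Found per Page"] = defects / pages
    return m
```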
Reports
It was decided that three reports would be generated: a Peer Review Report for each individual peer review, a Test Defect Log, and a monthly Project Quality Report. Each of these reports is described below. How these reports are generated is discussed later under Tool Development.
Peer Review Report - The Peer Review Report is used to document the results of a single peer review. At the conclusion of a peer review, this report is distributed to the peer review participants, the cognizant project manager, and the Software Quality Assurance (SQA) representative for review. The Peer Review Report is comprised of three sections: the Peer Review Record, the Peer Review Defect Log, and the Peer Review Summary.

The Peer Review Record serves as both a form for entering required peer review information and as a report. The information collected and reported in the Peer Review Record includes program, program element, and function identifiers, the work product being reviewed, the type of review (i.e., software inspection or the less rigorous product review), who attended, how much time they spent preparing, how long the review meeting lasted, the disposition of the review, the checklists used, etc. An example of this record is shown in Figure 1.

The Defect Log also serves as both a form for entering the required data for each defect found at a peer review and a record. It includes defect type, defect origin, defect severity, defect category, defect location, time to fix, date closed, and other information. An example of the Defect Log is shown in Figure 2.

The Peer Review Summary is the third component of the Peer Review Report. It provides metrics that profile the conduct and results of a peer review. The information is useful in providing feedback to the product author and the peer review participants about the product and about the types of defects that could be eliminated earlier in the development process. Peer review efficiency and effectiveness measurements are provided to help determine if the peer review was within the normal expected ranges for the particular type of product reviewed. If not, further investigation may be warranted to determine why the peer review was an apparent anomaly. A sample of the Peer Review Summary is depicted in Figure 3. The main sections of the report are:
General peer review information - At the top of the report is the general information that identifies the date of the peer review and the associated work product.
Defect Type by Defect Category profile - The Defect Type by Defect Category matrix provides a profile of the defects found during the peer review by Defect Type, Defect Category, and Defect Severity (a counting sketch follows this list).
Defect Origin profile - The Defect Origin table in the report plots the major and minor defects found against the phase in which the defects were injected into the product.
Peer Review efficiency and effectiveness - The table at the bottom of the report provides measures that primarily indicate how efficient the review was for the time invested and how effective it was at finding defects.
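The Defect Type by Defect Category matrix is essentially a two-way count of the logged defects. The following is a minimal counting sketch, not the actual spreadsheet; the per-defect tuple layout is an assumption.

```python
# A small counting sketch (not the GES spreadsheet itself) showing how a
# Defect Type by Defect Category matrix with major-defect totals can be built
# from logged defects. The tuple layout is assumed for illustration.
from collections import Counter

TYPES = ["Missing", "Wrong", "Extra"]
CATEGORIES = ["Interface", "Data", "Logic", "Performance",
              "Standards", "Documentation", "Maintainability", "Other"]

def type_by_category(defects):
    """defects: iterable of (category, defect_type, severity) tuples."""
    counts = Counter((cat, typ) for cat, typ, _sev in defects)
    majors = Counter(cat for cat, _typ, sev in defects if sev == "Major")
    print(f"{'Category':<16}" + "".join(f"{t:>9}" for t in TYPES)
          + f"{'Total':>9}{'Major':>9}")
    for cat in CATEGORIES:
        row = [counts[(cat, t)] for t in TYPES]
        print(f"{cat:<16}" + "".join(f"{n:>9}" for n in row)
              + f"{sum(row):>9}{majors[cat]:>9}")

type_by_category([("Logic", "Wrong", "Major"), ("Data", "Missing", "Minor"),
                  ("Logic", "Missing", "Minor")])
```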
Test Defect Log - During unit testing and element integration and testing (EI&T), the software engineers are required to fill out a defect log containing data on each defect they detected. The content of the Test Defect Log is similar to the Peer Review Defect Log described earlier and illustrated in Figure 2.
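For illustration, the record that both defect logs capture might be modeled as below; the field names and types are assumptions, not the actual spreadsheet columns.

```python
# Illustrative record for a single logged defect, mirroring the fields the
# Peer Review and Test Defect Logs collect. Names and types are assumptions.
from dataclasses import dataclass
from datetime import date
from typing import Optional

@dataclass
class DefectRecord:
    defect_type: str        # Missing, Wrong, or Extra
    origin: str             # phase injected: Reqs, Design, Code, Unit Test, EI&T, Maintenance
    severity: str           # Major or Minor
    category: str           # Data, Documentation, Interface, Logic, ...
    location: str           # module, procedure, line number, etc.
    found_in: str           # peer review or test activity where the defect was found
    fix_time_hrs: float     # time to fix and reinspect or retest
    date_closed: Optional[date] = None
    comments: str = ""
```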
Project Quality Report - The data from the Peer Review Reports and Test Defect Logs are entered into two project databases: a peer review database and a defect database. A Project Quality Report is issued once a month using the information from these databases. The report is in two forms, as an online report and as a hard copy report. (How the Project Quality Report is generated is described later under Tool Development.) All project personnel are given access to the report and are encouraged to record their analysis of the data. The peer review/defect analysis process leader from the SEPG reviews the report each month and records an analysis of the data with recommendations of any possible actions to take. Recorded analysis describes trends and anomalies observed in the data. Any subsequent corrective actions and their consequences are also recorded.
[Table 1. Measurement requirements: collected and derived (Calc) measurements with the use and comments for each, covering information on each defect found, information on each peer review, and measurements accumulated by project]
Figure 1. Peer Review Record serves as a data entry form and a printable report

The Project Quality Report includes charts and tables depicting three categories of metrics: peer review status, product quality, and process efficiency. The hard copy version contains only project summary metrics combining all program elements. The on-line version provides interactive control for producing the charts and tables for any combination of the project's program versions and program components. The charts and tables in the Project Quality Report include:
Defect Severity, Category, and Type Profiles
Defect Analysis by Phase
Defect Density for Documents and for Code
Peer Review Status and Process Metrics
[Figure 2 content - legend: Defect Category: Data, Documentation, Interface, Logic, Maintainability, Performance, Standards, Other; Defect Severity: Major, Minor; Defect Type: Missing, Wrong, Extra; Defect Origin: Reqs, Design, Code, Unit Test, EI&T, Maintenance]
Figure 2. Peer Review Defect Log

The Defect Severity Summary depicts the number of major and minor defects found and fixed in the project's work products for each software development phase, i.e., requirements, design, code, unit test, and EI&T. Figure 4 is an example of this chart in the Project Quality Report. The chart can be used to draw some conclusions about the overall quality of a project's products.

The Defect Category Profile, Figure 5, contains a profile of the defects in each defect category for each phase. This defect profile supports a Pareto analysis for determining the most prevalent sources of defects.

The Project Defect Type Profiles, Figure 6, show the number of wrong, missing, and extra defect types for both major and minor defects by phase. Especially high numbers for a particular type of defect for a particular product, e.g., design documentation, may reveal issues to be addressed. For example, the defect type profile may reveal ambiguity in the requirements if the missing or extra counts are high in subsequent work products.

The Defect Analysis By Phase chart contains a profile of the injection, removal, and leakage of defects throughout the development life cycle. Injected defects equate to the recorded Defect Origin for each defect. Removed defects are determined by the peer review or test where they were found. As an example, Figure 7 shows the number of major defects injected and removed during each phase. The chart also depicts the percentage of all major defects removed in each phase. Escaped is also plotted for each phase and is the difference between the defects injected and removed, i.e., the defects that escaped the detection process and affect the next activity. This data provides insight into both process effectiveness and product quality. It is more useful when a software program has completed development and is in use by the customer, because a more accurate profile of the developed product's known defects throughout the development cycle can be plotted. After a product has been submitted for customer use, this data should be analyzed to determine which activities are the primary contributors of defects and which have inadequate detection processes. Corrective actions should be taken as a result of the analysis to reduce the number of defects injected and to improve the detection process so that the number of escaping defects is reduced.

The Defect Density For Documents and Defect Density For Code charts depict the density of defects found in each software product. For documents, the number of pages per defect is plotted. For code, the defect density is represented as defects per 1000 SLOC (KSLOC). These measurements provide insight into both process effectiveness and product quality. Defect density analysis throughout the development cycle provides a good quality measurement of each product, especially when sufficient historical data on similar products is available for comparison. It can aid in identifying the products and process steps with the most leverage for improvement. Continual comparison against historical defect density data should indicate the effectiveness of the improvement efforts.
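The Defect Analysis By Phase profile is straightforward to compute from the recorded Defect Origin and the phase in which each defect was found. The sketch below is illustrative only (not the Excel implementation the project used); it treats Escaped as the running difference between defects injected and removed, one reading of the description above, and the per-defect tuple layout is assumed.

```python
# Illustrative sketch of the Defect Analysis by Phase profile for major defects:
# injected = defects whose recorded origin is the phase,
# removed  = defects found by that phase's peer review or test,
# escaped  = running total of injected minus removed (leakage past detection).
PHASES = ["Reqs", "Design", "Code", "Unit Test", "EI&T"]

def phase_profile(defects):
    """defects: iterable of (origin_phase, found_in_phase, severity) tuples."""
    majors = [d for d in defects if d[2] == "Major"]
    total_major = len(majors) or 1
    escaped = 0
    for phase in PHASES:
        injected = sum(1 for origin, _found, _sev in majors if origin == phase)
        removed = sum(1 for _origin, found, _sev in majors if found == phase)
        escaped += injected - removed          # defects leaking into the next activity
        pct_removed = 100.0 * removed / total_major
        print(f"{phase:>10}: injected={injected:3d} removed={removed:3d} "
              f"escaped={escaped:3d} %removed={pct_removed:5.1f}%")

phase_profile([("Reqs", "Design", "Major"), ("Design", "Code", "Major"),
               ("Code", "Code", "Minor")])
```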
[Figure 3. Peer Review Summary (sample): Defect Type by Defect Category matrix, Defect Origin table, and peer review measurements such as number of reviewers, prep and meeting times, detection and fix effort, review rates, and defects found per page and per KSLOC]
The Peer Review Status in the Project Quality Report reports the disposition status of all peer reviews conducted to date. A table is used to report the total number of peer reviews completed, in progress, or designated for another review for each program element by product type and review type (inspection or review). Peer reviews are counted as "in progress" if all defects from the review are not yet fixed. A large number of "in progress" peer reviews could mean that there is a backlog of rework being done.

The Peer Review Process Metrics table is the most comprehensive and perhaps most informative part of the Project Quality Report. Table 2 presents a sample of the measurements comprising the Peer Review Process Metrics. Virtually all collected data from peer reviews is represented in the top half of the table. The lower half of the table is comprised of calculated measurements that provide valuable insight into the effectiveness and efficiency of the various types of peer reviews. The data is presented for each product/review type, e.g., code inspection, code review, or design inspection. The bottom of Table 2 contains a description of the product/review type codes used in the header of the table.

The Peer Review Metrics data can be analyzed a variety of ways. The following are examples of some of the guidelines that can be used in analyzing the data (a small screening sketch follows these guidelines):
Inspection Time per Defect - The time to find and fix defects in work products should increase over time due to a larger number of products affected by the defects found in the inspected product.
Defects Found per Detection Hour and Defects Logged per Hour - There should be an upward trend in these values as the inspectors' skills improve.
[Figures 4-7: Defect Severity Summary (major and minor defects by phase, all baselines and elements), Defect Category Profile, Defect Type Profiles, and Defect Analysis by Phase (major defects injected, removed, escaped, and % removed)]
[Table 2. Peer Review Process Metrics (sample): totals of defects, review hours, and fix time, plus calculated rates such as average reviewers per review, prep time rates, defects found per detection hour, meeting review rates, defects per page and per KSLOC, and review time per defect, broken out by product/review type]
Meeting Review Rate - This measurement should be used in conjunction with the other effectiveness metrics to determine if peer review meetings are covering the review material at an effective speed. Slow rates may be caused by unprepared participants or too much discussion taking place. Rates that are too fast may result in poor effectiveness in finding errors. Over time the optimum meeting review rates should be determined for the project.
Average Defects Found per Product Size - The higher the rates, especially in the earlier software development activities, the better the final product quality should be. If rates are low, the product may be of extremely high quality (usually when the software process is mature), or else the inspectors need to be more thorough.
Average Defects Found per Review - This metric is included because it can be compared to historical design and code review measurements on the AEGIS projects to determine if improvement from past peer review practice has occurred. This comparison is valid, however, only if the average product size per review is the same then as now.
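One way guidelines like these can be applied is to screen each review against expected ranges. The sketch below is a hedged illustration: the limits shown are placeholders, not GES or AEGIS norms, and a project would substitute ranges derived from its own historical data.

```python
# Hedged sketch of screening a peer review against expected ranges. The ranges
# below are placeholder assumptions for illustration only.
EXPECTED = {
    "meeting_rate_sloc_per_hr": (100, 300),    # assumed acceptable code review rate
    "prep_time_per_reviewer_hr": (0.5, None),  # assumed minimum preparation time
}

def screen_review(metrics):
    """metrics: dict of measurement name -> value for one peer review."""
    flags = []
    for name, (low, high) in EXPECTED.items():
        value = metrics.get(name)
        if value is None:
            continue
        if low is not None and value < low:
            flags.append(f"{name}={value} below expected minimum {low}")
        if high is not None and value > high:
            flags.append(f"{name}={value} above expected maximum {high}")
    return flags

print(screen_review({"meeting_rate_sloc_per_hr": 450, "prep_time_per_reviewer_hr": 0.2}))
```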
Tool Development
Once the defect analysis procedure, the required measurements, and reports were defined, it was necessary to consider how the data would be collected, processed, and reported. Obviously, some type of software tools would be needed to collect and store the data, and to generate the reports needed for review and analysis. With limited time and resources, it was decided to start with simple support tools. It was thought that there would be plenty of time later to evolve and enhance the tools when we were more knowledgeable about how the whole process could be improved. A Microsoft (MS) Excel spreadsheet system was developed for collecting and reporting the peer review and defect analysis data. This was done with the idea that the spreadsheet system initially implemented would serve as a prototype until there was time to develop a more sophisticated system. This proved to be a key factor in expediting the development of the defect analysis program for a number of reasons:
MS Excel could very easily generate the required charts and tables needed for data analysis.
Simple two-dimensional database structures in Excel were sufficient to support the database requirements for storing and retrieving the peer review and defect data.
MS Excel expertise was more readily available among the defect analysis process personnel than any other type of database expertise. Also, the use of MS Excel was very widespread among the target users of the defect analysis program.
As new tools are developed in the future, any data stored in MS Excel databases could most likely be very easily exported to another database application.
MS Excel tools can be continually enhanced and automated through the creation of macros.
The key components of the spreadsheet system are three Excel workbook files, each comprised of multiple spreadsheets. The workbook files are the Peer Review Report, the project's Peer Review Database, and the project's Defect Database. Figure 8 illustrates the relationship between these files within the data collection and reporting process.

The Peer Review Report contains the Peer Review Record, Defect Log, and Peer Review Summary worksheets. These worksheets were described earlier in the Reports section of this paper and are illustrated in Figures 1, 2, and 3, respectively. The Peer Review Report file is a template file designed to be used for direct data entry during the peer review (assuming a Personal Computer (PC) is available in the meeting room). However, forms for handwritten entry are available and are forwarded to data entry personnel for creating a Peer Review Report file. The Peer Review Report workbook also contains worksheets that organize the specific data to be transferred to the two databases in the database format, i.e., one row of data per data record. This facilitates the transfer process. Also contained in the file are a number of macros to facilitate data entry and printing, to provide instructional help, and to audit the report for data entry errors or omissions. (Not all of these features existed initially.)

The Peer Review Database MS Excel workbook is the repository of peer review data transferred from each Peer Review Report. One worksheet in the workbook contains the database. Each row in the database represents a single record for each peer review. Most of the database fields are described in Table 1 under Information on Each Peer Review (except for the calculated averages and ratios listed and the defect type and category information). Spreadsheets are included in the workbook for generating the Peer Review Status and Peer Review Process Metrics (Table 2) reports contained within the Project Quality Report described earlier.

A key capability of MS Excel utilized extensively for generating the charts in the Project Quality Report is the pivot table function. Pivot tables are based on the database worksheet in each database workbook and allow subsets of the data to be grouped in small tables for direct viewing, such as shown in Table 2, or for use in generating charts, such as shown in Figures 4 through 7. Pivot tables can be designed to be interactive by allowing any database fields to be set up as Page selectors for viewing subsets of the pivot table data. For example, you'll notice that Table 2 has All Baselines (i.e., AEGIS program versions) and All Elements (i.e., major program elements) in the header. Direct interaction with the pivot tables in the Peer Review Database file allows selection of combinations of any baseline with any program element for more selective viewing of the data.
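For readers unfamiliar with pivot tables, the sketch below shows the same slice-and-count idea outside Excel using pandas; it is an illustration only, and the column names and sample values are assumptions.

```python
# Not the Excel pivot tables the project used - a hedged illustration of the
# same grouping idea in pandas. Column names and sample values are assumptions.
import pandas as pd

defects = pd.DataFrame([
    {"baseline": "Baseline 1", "element": "Element A", "category": "Logic", "severity": "Major"},
    {"baseline": "Baseline 1", "element": "Element A", "category": "Data", "severity": "Minor"},
    {"baseline": "Baseline 2", "element": "Element B", "category": "Logic", "severity": "Minor"},
])

# Equivalent of the pivot table "Page" selectors: one baseline/element combination.
subset = defects[(defects["baseline"] == "Baseline 1") & (defects["element"] == "Element A")]

# Category-by-severity defect counts, analogous to a pivot table feeding a chart.
profile = pd.pivot_table(subset, index="category", columns="severity",
                         values="baseline", aggfunc="count", fill_value=0)
print(profile)
```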
Macros are utilized within the Peer Review Database file for transferring data from the Peer Review Report files into the database, for controlling the pivot table page settings on all the pivot tables in the workbook at the same time, and for printing the reports.

The Defect Database workbook is the repository of the individual defect data copied from each Peer Review Report and Test Defect Log. One worksheet in the workbook contains the defect database. Each row in the database represents a single defect from either a peer review or test. The database fields are primarily those listed in Table 1 under Information on Each Defect Found. Pivot table worksheets and charts are included in this workbook for all of the defect profile reports within the Project Quality Report as described earlier (see Figures 4 through 7). Macros are also included for the same purposes as for the Peer Review Database.
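The transfer itself was done with Excel macros. Purely as an illustration of the same row-per-record copy, here is a sketch in Python using openpyxl; the file and worksheet names are assumptions, not the actual GES filenames.

```python
# Illustration only: the project used Excel macros for this transfer. This
# sketch appends each defect row from a Peer Review Report workbook to the
# Defect Database workbook. File and sheet names are assumptions.
from openpyxl import load_workbook

def transfer_defects(report_path="PeerReviewReport.xlsx",
                     database_path="DefectDatabase.xlsx"):
    report = load_workbook(report_path, data_only=True)
    database = load_workbook(database_path)
    src = report["Defect Log"]        # assumed worksheet name
    dst = database["Defects"]         # assumed database worksheet, one row per defect
    for row in src.iter_rows(min_row=2, values_only=True):   # skip the header row
        if any(cell is not None for cell in row):             # ignore blank rows
            dst.append(list(row))
    database.save(database_path)
```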
[Figure 8. Data collection and reporting flow: Peer Review Report files feed the Peer Review Database (Peer Review Metrics) and the Defect Database (Defect Profiles); charts & tables are selected and viewed online]
Training
Once the defect analysis goals were established, the procedure defined, the measurement and reporting requirements formulated, and the support tools developed, it was time to roll out the new software inspection process and defect analysis program to the managers and engineers on the selected pilot project. Software inspection methodology training had been performed earlier by a consultant. Therefore, a single orientation course was developed that included both the new peer review process and the defect analysis program. The new written procedures for peer reviews and defect analysis were addressed in the training. The rationale and use for each new measurement was explained, as was each aspect of the Peer Review and Project Quality Reports and the system of Excel spreadsheets. At the completion of the training, direction was given to begin working in accordance with the new procedures.
Another major complaint from the engineers was the filling out of multiple forms for a small change. For example, a single source line code change to fix a problem required completing separate peer review forms for any design change, coding change, unit test procedure, or EI&T procedure changes, even though they were all reviewed in one meeting. In response to this, provisions were made to treat small problem fixes as a single product package.
Summary
The defect analysis program, in conjunction with an upgrade of the peer review process to include software inspections, was implemented at Lockheed Martin GES in the relatively short period of two months. The goals for the defect analysis program were successfully met. SEI Level 3 criteria for peer reviews and defect analysis were satisfied and a good baseline of data was established for SEI Level 4. More importantly, however, the data has provided improved insight into the effectiveness and efficiency of our defect-removal activities. The need for improvement is now apparent and the data has helped focus our process improvement efforts. The critical factors that led to the successful, rapid deployment of the process changes were as follows:
1. Starting with clearly understood goals to focus the effort and prevent rework. Using the SEI's CMM as a framework for our software process improvement also provided direction and focus.
2. Utilizing as many resources and as much expertise from outside the organization as possible, including consultants, conference material, technical reports, books, etc. The most significant gain was achieved by utilizing written procedures and guidebooks made available from other sites in the corporation, a true sharing of best practices. By purposefully utilizing standard measurements from across the software industry, time was not wasted trying to reinvent the wheel.
3. Starting with simple tools at first and then improving them after living through the new process for a while. If we had tried to develop or procure more elaborate tools, the implementation would have taken much longer.
4. Taking the time to prepare and deliver training for all personnel affected by the process changes was well worthwhile. Misunderstandings and errors did occur, but not as many as there would have been if the training wasn't given. Also, more training proved to be essential in responding to process problems identified by the focus group.
5. Continual monitoring and follow-up proved to be essential in correcting early mistakes and misunderstandings. Also, unanticipated areas of awkwardness in the new process needed to be addressed before they resulted in eventual process breakdowns. The focus group concept was very helpful in addressing the engineers' primary concerns.
A number of lessons were learned from this experience. We had underestimated the time required, after process roll out, to monitor the process and address process inefficiencies and misunderstandings. We learned that many aspects of a new process must be reiterated until it is apparent the engineering staff has internalized it. We learned that engineers' concerns need to be addressed and a continual effort made to improve automation of data collection tasks. When tasks are automated, hands-on training is needed to institutionalize the use of new tools. We also underestimated the time needed to respond to the opportunities for improvement indicated by the results of our defect data analysis. Since our organization is committed to achieving SEI Levels 4 and 5, this problem will be addressed. Plans are in place to develop the infrastructure, knowledge, and cultural mindset for continual process improvement.

References
1. Paulk, Mark C., Weber, Charles A., Chrissis, Mary Beth, et al., The Capability Maturity Model: Guidelines for Improving the Software Process. Reading, Mass.: Addison-Wesley Publishing, 1995.
2. Gilb, Tom, and Graham, Dorothy, Software Inspection. Reading, Mass.: Addison-Wesley Publishing, 1993.
3. Humphrey, Watts S., Managing the Software Process. Reading, Mass.: Addison-Wesley Publishing, 1990.
4. Florac, William A., Software Quality Measurement: A Framework for Counting Problems and Defects (CMU/SEI-92-TR-22). Pittsburgh, Pa.: Software Engineering Institute, Carnegie Mellon University, September 1992.
5. Baumert, John H., and McWhinney, Mark S., Software Measurements and the Capability Maturity Model (CMU/SEI-92-TR-25). Pittsburgh, Pa.: Software Engineering Institute, Carnegie Mellon University, September 1992.
Steve Lett
Mr. Lett has twenty-three years of software engineering experience as a software engineer and as a project manager developing real-time Command and Control software applications, primarily on the U.S. Navy's P-3C Anti-Submarine Warfare program and the AEGIS guided missile cruiser and destroyer programs. He has extensive experience in leading process improvement initiatives, especially in the areas of project management, software engineering processes, software metrics, training, group problem-solving, and employee empowerment. He presented a paper, "An Earned Value Tracking System for Self-Directed Software Teams," at the recent 1998 (U.S.) SEPG and European SEPG Conferences. Since early 1997 he has been a full-time member of the Lockheed Martin Government Electronic Systems (GES) Software Engineering Process Group (SEPG).