Learning Objectives
Definitions of Metrics
Attributes of Measures
Development Projects
Re-engineering
Maintenance
Testing Projects
Data Collection & Usage
Using Metrics for Decision Making
Summary
Case Study
Discussion Points
Chapter Three
Software Quality and Metrics
Learning Objectives
This chapter deals with metrics related to software projects. At the end of the chapter, students will have learnt:
The different metrics used in software projects
The need for metrics
The decisions that can be taken based on these metrics
As discussed earlier, defects are injected at all stages of the development life cycle and hence need to be detected and removed. In the requirements analysis and design stages, defects are removed through reviews of the deliverables, while in the coding stage both reviews and testing help in removing defects. A project is considered successful if defects are removed at all these stages and the application is delivered with few or no defects. The project management process has to plan activities so that defects are removed at the appropriate stage, because a defect injected in an early stage has a cascading effect downstream: defects at early stages give rise to new defects, and the time and cost involved in removing them, if they are not removed early, keep rising. It is therefore imperative that the project management process be mature enough to detect and remove defects immediately. This is, however, easier said than done. The project manager has to plan activities such as reviews of all deliverables from the requirements, design and coding stages, and also has to decide the testing strategy for code written during the life cycle stages.
Software metrics deal with the measurement of the software product and the process by which it is developed. The software product should be viewed as an abstract object that evolves from an initial statement of need to a finished software system, including source and object code and the different forms of documentation produced during development. Ordinarily, these measurements of the software process and product are studied and developed for use in modelling the software development process. The metrics and models are then used to estimate and predict product costs and schedules and to measure productivity and product quality. Information gained from the metrics and models can then be used in the management and control of the development process, leading, one hopes, to improved results.
Jalote (2000) feels that good metrics should facilitate the development of models that are capable of predicting process or product parameters, not just describing them. As per Jalote (2001), ideal metrics should be:
Simple and precisely definable, so that it is clear how the metric can be evaluated
Objective, to the greatest extent possible
Easily obtainable (i.e., at reasonable cost)
Valid – the metric should measure what it is intended to measure
Robust – relatively insensitive to insignificant changes in the process or product
In addition, for maximum utility in analytical studies and statistical analyses, metrics should have data values that belong to appropriate measurement scales.
Definitions of metrics
The Software Engineering Institute at Carnegie Mellon University (www.sei.cmu.edu) has stated that metrics can broadly be classified into basic and derived metrics. Basic metrics are collected as a result of direct measurement of process or product characteristics. The typical basic metrics collected are:
Effort – This is the amount of time spent on an activity and is measured in person hours or person days.
Defects – These represent non-compliance with requirements and are measured in numbers.
Size – This is the size of the application being developed and is measured in function points or lines of code.
Elapsed time – This is the time spent between the start and end of an activity and is measured in days.
Requirements count – This is the number of requirements given by the customer and is measured in numbers.
Number of requirement changes – This represents the number of times the customer has changed his mind, resulting in changes to the original requirements. It is measured in numbers.
Number of requests (maintenance projects) – This represents the number of requests received from the customer for fixes. It is measured in numbers.
Derived metrics are quality indicators calculated from the basic measures, to gain insight into process and product quality characteristics. Some of the derived metrics are listed below (a sketch of how the underlying basic measures might be recorded follows the list):
Productivity
Delivered Quality
Defect Injection Rate (DIR)
Defect Detection Rate
Review/Test effectiveness
Review/Test efficiency
Cost of Quality (COQ)
Average Turn Around Time
Age of bug fixes
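As an illustration of how these basic measures feed the derived metrics discussed in the rest of this chapter, the sketch below records them in a simple Python structure. It is a minimal sketch only; the field names, units and sample values are assumptions for illustration, not a prescribed data-collection format.

```python
from dataclasses import dataclass

@dataclass
class BasicMeasures:
    """Basic measures collected directly during project execution (assumed field names)."""
    effort_person_hours: float    # total effort booked through time sheets
    defects: int                  # defects found in reviews and testing
    size_fp: float                # size in function points
    elapsed_days: int             # calendar days between start and end
    initial_requirements: int     # requirements given at project start
    requirement_changes: int      # added, modified or deleted requirements

# Illustrative record for a small development project (hypothetical values)
project = BasicMeasures(
    effort_person_hours=3360,     # 20 person months at 168 hours each
    defects=45,
    size_fp=250,
    elapsed_days=120,
    initial_requirements=90,
    requirement_changes=35,
)
```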
Attributes of measures
Productivity in a software development project is substantially different from productivity in manufacturing industries. In a manufacturing industry, productivity is the result of capital, technology, human resources, and the competence and skill of management. In software development, capital and equipment play a very nominal role. In software development no raw material or bought-out component is used, whereas in a manufacturing industry these two components form a very significant part and determine, to a great extent, the productivity. Given this kind of difference, a study of productivity in the manufacturing industry cannot be applied straightaway to the software industry. If one takes a closer view of the software industry, one observes that the key factor determining productivity in this industry is the human resource. Hence, in this study, measuring software output in relation to the manpower deployed has been considered for measuring productivity. Further, while a few dozen studies are available for manufacturing industries, there is hardly any study on productivity in the software industry. Hence, this research is intended to fill the gap by studying productivity in this industry.
In the early 90s, the IT industry, which includes both hardware and software, was considered a sunrise industry. After the mid 90s it became clear that this industry was going to occupy a prominent place, and the impetus for its growth was coming from revolutionary changes in telecommunications, supply chain management, utilities, the insurance and banking sectors, greater use of satellites, etc. With increasing competition and the rising cost of skilled manpower, attention to productivity was becoming inevitable in such an industry. While in the early 90s hardware had pre-eminence over software, in the late 90s and early 2000s software took the pre-eminent place. In view of these changes, the need for efficiency has been increasingly felt in the software industry. This need has essentially become the driving force for the present study.
High productivity implies that, given the number of function points in a project, it has consumed fewer person months. However, it is possible that with fewer person months being spent, the project may be completed in a hurry, resulting in a high number of delivered defects, which can create customer dissatisfaction. It is therefore important that while less effort is spent, no relaxation should be made in the final quality of the deliverable; in other words, while a higher value of productivity is maintained, better quality software with few or no delivered defects should be guaranteed. Such a guarantee can help in increasing efficiency and eventually result in higher profitability and higher return on investment. In this chapter a detailed approach for measuring metrics is provided.
The following Table 3.2 gives the details of the basic measures, their typical attributes and the data capture mechanism.
The list of tools given in Table 3.2 is used to capture data during project execution. At the completion of a project, data related to all life cycle stages is summarized and captured. For an ongoing maintenance project, this data is usually captured at the end of a defined period of time (for example, data for a maintenance project can be captured at the end of every 3 months and the metrics then derived).
Table 3.2: Typical attributes of measures

| Sl. No | Measure | Unit | Typical attributes of measures | Mechanism for capturing data |
| --- | --- | --- | --- | --- |
| 1 | Effort | Person hours or person months | Actual and estimated task, review and rework effort for each life cycle stage; preparation effort for reviews | Time sheet |
| 2 | Size | Lines of Code (LOC)*/FP for source code; number of pages for documents; S/M/C count for programs; Feature Points/Use Case Points/Object Points in special cases where size is not captured in LOC; number of test cases for test plans | Size of modules; size of application | LOC counters, if available; FP count using the IFPUG (International Function Point User Group) method |
* LOC (Lines of Code) can be either non-commented, non-blank source statements (NCSS) or physical lines of code (PLOC). In the case of PLOC, it is necessary to include a note on how NCSS can be estimated, e.g. 70% of PLOC = NCSS.
The size can also be measured directly in terms of FP using the IFPUG (International Function Point User Group) method (www.ifpug.com (2006)).
Note:
A person month constitutes 168 person hours of effort. This is arrived at using the standard working hours per day (8.0 hours) and 21 working days a month.
DEVELOPMENT PROJECTS
Basic Metrics
Effort
Effort spent on all life cycle activities and management activities.
The typical life cycle activities are:
Requirements
Design
Coding
Testing
Typical management activities are:
Project management
Configuration Management
The estimated and actual effort is captured for the task, review and rework involved during
these activities.
Effort is measured in person hours.
Defect
Defects detected during reviews and testing; details such as the stage injected, the stage detected, severity and defect type are captured for all defects.
Size
Size of any software work product going through the life cycle activities is measured as follows:
LOC (lines of code) or Function Points
Wherever it is not feasible to capture size in LOC or FP, other measures such as Feature Points, Use Case Points or Object Points are used, as appropriate (for detailed definitions of Use Case Points and Object Points, please refer to www.ifpug.com (2006))
For documents (design documents, requirements documents) – number of pages
For test plans – number of test cases
In some cases the size is measured in terms of the number of simple, medium and complex programs
Schedule
Number of calendar days taken for a particular activity per life cycle stage, including holidays.
Requirements count
Number of initial requirements and number of total requirements
Derived Metrics
Productivity
Productivity = (Size of the delivered product in FP) divided by (Total effort in person months)
Delivered quality
Delivered quality (in terms of delivered defects per FP) = (Total number of delivered defects
found during acceptance testing and warranty) divided by (size of the software delivered
in FP)
Delivered quality (in terms of delivered defects per person hours) = (Total number of delivered
defects found during acceptance testing and warranty) divided by (Total effort for the
project in person hours)
Requirement Stability
A) Overall Requirement stability (%) = (number of initial requirements * 100) divided by total
number of requirements
Where, total number of requirements = Number of initial requirements + no. of added or
modified or deleted requirements.
For example, if the number of initial requirements = 90 and the number of total requirements = 125, then the overall requirement stability = 72%.
Overall Requirements Stability is computed at the end of the project.
B) Requirements Stability for a given period (%) = (number of added, modified or deleted
requirements in a given period) divided by (cumulative requirement changes up to that
period)
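The development-project derived metrics above translate directly into simple calculations. The sketch below mirrors the formulas as stated, including the 90/125 requirement-stability example; the function names and the illustrative input values are assumptions.

```python
PERSON_MONTH_HOURS = 168  # 8 hours x 21 working days, as noted earlier

def productivity_fp_per_person_month(size_fp: float, effort_person_hours: float) -> float:
    """Productivity = size of the delivered product in FP / total effort in person months."""
    return size_fp / (effort_person_hours / PERSON_MONTH_HOURS)

def delivered_quality_per_fp(delivered_defects: int, size_fp: float) -> float:
    """Delivered defects (found in acceptance testing and warranty) per FP delivered."""
    return delivered_defects / size_fp

def overall_requirement_stability_pct(initial_requirements: int, requirement_changes: int) -> float:
    """Overall requirement stability % = initial requirements * 100 / total requirements."""
    total_requirements = initial_requirements + requirement_changes
    return initial_requirements * 100 / total_requirements

print(productivity_fp_per_person_month(250, 3360))   # 12.5 FP per person month
print(delivered_quality_per_fp(5, 250))               # 0.02 delivered defects per FP
print(overall_requirement_stability_pct(90, 35))      # 72.0, matching the worked example
```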
RE-ENGINEERING
All metrics are the same as the development metrics except productivity, whose definition is given below:
Productivity (size in FP per person months) = (Total size added, modified or deleted from the
application) divided by (total effort in person months)
For re-engineering projects, the project has to capture LOC added or modified or deleted in the existing
application. This is converted to FP and used for calculating productivity.
MAINTENANCE
Maintenance projects are characterized by six kinds of requests:
o Bug fixes
o Minor enhancements
o Major enhancements
o Production Support
o Analysis/R&D
o Testing
For each Request type serviced, the actual and estimated effort, the actual number of programs added,
number of programs modified, the LOC added/modified/deleted and the equivalent FP are to be
captured.
Basic Metrics
Size
For enhancements:
The unit of measure of size is the number of LOC (as in most cases) that have been added,
deleted and modified. The size can also be measured in FP using the IFPUG (International
Function Point User Group) Method.
For bug-fixes:
Capture the number of bugs that are fixed.
Also, for maintenance projects, number of each type of request should be captured.
Effort
Estimated and actual effort is captured separately for each type of request.
For major enhancements capture the management and life cycle efforts as detailed under the
Development Project metrics described earlier.
For minor enhancements, bug fixes and analysis/R&D requests, the life cycle related effort should be captured.
Defects
Same as development project
Schedule
Number of calendar days taken for servicing the request, including holidays.
Derived Metrics
Productivity
For Bug-Fix
Productivity = Total number of Bug-Fixes divided by Total Effort spent in person months for
servicing bug-fix requests
For Analysis/R&D
Productivity = Size in FP analyzed divided by Total effort in person months or person days.
Note:
FP is calculated as {LOC of added programs + (LOC added, modified or deleted of modified programs)} divided by the conversion factor for LOC-to-FP conversion. This conversion factor is taken from the Capers Jones conversion factors. The factor is proprietary and cannot be obtained or published without permission from the publisher.
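Because the Capers Jones conversion factor is proprietary, the sketch below simply takes it as an input parameter; it mirrors the equivalent-FP and bug-fix productivity formulas quoted above, and the factor of 50 LOC per FP used in the example is a hypothetical placeholder, not the published value.

```python
def equivalent_fp(loc_added_programs: int,
                  loc_changed_in_modified_programs: int,
                  loc_per_fp: float) -> float:
    """FP = (LOC of added programs + LOC added/modified/deleted of modified programs)
    divided by the LOC-to-FP conversion factor for the language."""
    return (loc_added_programs + loc_changed_in_modified_programs) / loc_per_fp

def bug_fix_productivity(bug_fixes: int, effort_person_months: float) -> float:
    """Bug-fix productivity = number of bug-fixes / effort in person months."""
    return bug_fixes / effort_person_months

# Hypothetical example: 2,000 LOC in new programs, 1,500 LOC touched in modified
# programs, and an assumed factor of 50 LOC per FP gives 70 equivalent FP.
print(equivalent_fp(2000, 1500, 50))   # 70.0
print(bug_fix_productivity(42, 6))     # 7.0 bug-fixes per person month
```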
Delivered Quality
For Bug-Fix
Delivered quality = Bug-fixes rejected (during acceptance testing or warranty) divided by the total number of bug-fixes during that period
For Analysis/R&D
Requests requiring predominantly Analysis, and comparatively little code change:
Delivered quality = Delivered defects divided by FP
TESTING PROJECTS
Basic Metrics
Size
Size is measured in terms of the number of test cases; the unit of measure is the number of test cases.
Effort
Same as Development Project
Derived Metrics
Productivity
For Manual or Regression testing/automated testing
Productivity = (Number of test cases executed) divided by (Total Effort spent on testing in
person-hours)
Delivered Quality
For Manual or Regression testing
Delivered quality = (Number of defects raised by the customer) divided by (Total number of test
cases executed)
For Automated test cases
Delivered quality = (Number of defects raised by the customer) divided by (Total number of test
cases executed)
For Test Automation
Delivered quality = (Number of test cases (scripts) rejected by customer) divided by (Total
number of Test cases automated (scripted))
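A minimal sketch of the testing-project metrics above, assuming simple counts of test cases, effort and customer-reported defects are available; the names and figures are illustrative assumptions.

```python
def test_productivity(test_cases_executed: int, effort_person_hours: float) -> float:
    """Test cases executed per person hour of testing effort."""
    return test_cases_executed / effort_person_hours

def test_delivered_quality(customer_defects: int, test_cases_executed: int) -> float:
    """Defects raised by the customer per test case executed."""
    return customer_defects / test_cases_executed

print(test_productivity(480, 160))      # 3.0 test cases per person hour
print(test_delivered_quality(6, 480))   # 0.0125 customer defects per test case
```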
METRICS COMMON TO ALL PROCESSES
Review effectiveness:
Review effectiveness = (Number of defects detected in reviews at a given stage) divided by (Sum of defects injected in that stage + number of defects slipped from earlier stages). An example is given in Table 3.3.
Review Efficiency
Efficiency is measured to check whether the time taken to review a work product is worth the effort.
Review efficiency = (Number of defects detected in a review stage) divided by (Review effort)
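The review effectiveness and efficiency formulas can be sketched as follows; the defect and effort figures used are illustrative assumptions only.

```python
def review_effectiveness(defects_found_in_review: int,
                         defects_injected_in_stage: int,
                         defects_slipped_from_earlier: int) -> float:
    """Review effectiveness = defects found in the review /
    (defects injected in that stage + defects slipped from earlier stages)."""
    return defects_found_in_review / (defects_injected_in_stage + defects_slipped_from_earlier)

def review_efficiency(defects_found_in_review: int, review_effort_hours: float) -> float:
    """Review efficiency = defects found per hour of review effort."""
    return defects_found_in_review / review_effort_hours

print(review_effectiveness(18, 20, 4))   # 0.75, i.e. 75% of available defects caught
print(review_efficiency(18, 12))         # 1.5 defects per review hour
```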
Schedule Adherence
Schedule Adherence % = 100- Schedule deviation % (In case of positive deviation)
Schedule Deviation in % = {(Actual delivery date – Planned delivery date)*100} divided by
(Planned elapsed days)
A negative schedule deviation % indicates that the project was delivered within the planned date of delivery, and the schedule adherence is 100%.
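A small sketch of the schedule deviation and adherence calculation, assuming planned and actual delivery dates and the planned elapsed days are known; the dates used are illustrative.

```python
from datetime import date

def schedule_deviation_pct(actual_delivery: date, planned_delivery: date,
                           planned_elapsed_days: int) -> float:
    """Schedule deviation % = (actual delivery date - planned delivery date) * 100
    / planned elapsed days. Negative values mean delivery before the planned date."""
    return (actual_delivery - planned_delivery).days * 100 / planned_elapsed_days

def schedule_adherence_pct(deviation_pct: float) -> float:
    """Adherence is 100% for on-time or early delivery, else 100 - deviation %."""
    return 100.0 if deviation_pct <= 0 else 100.0 - deviation_pct

deviation = schedule_deviation_pct(date(2006, 7, 10), date(2006, 7, 3), 70)  # 10.0 %
print(schedule_adherence_pct(deviation))                                      # 90.0 %
```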
Deviations
A negative deviation in delivered quality and process performance parameters is indicative of better performance. In the % deviation calculation, the numerator is always the estimated value.
Cost of Quality (COQ)
The cost of rework, reviews, prevention and training can be considered directly proportional to effort. The effort data can be captured through time sheets and group review (inspection) summary reports. The cost of quality can then be expressed as the sum of appraisal, prevention and failure costs as a percentage of the total effort for life cycle activities.
Thus COQ % = ((Review effort + Test Effort + Training effort + Rework effort + Effort for
Prevention Activities) * 100) divided by (Total effort (for the Project for Development
projects; for major enhancement for Maintenance projects))
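The COQ % calculation can be sketched as below; the effort figures are illustrative person-hour values, not data from any real project.

```python
def cost_of_quality_pct(review_effort: float, test_effort: float,
                        training_effort: float, rework_effort: float,
                        prevention_effort: float, total_effort: float) -> float:
    """COQ % = (review + test + training + rework + prevention effort) * 100 / total effort."""
    quality_effort = (review_effort + test_effort + training_effort
                      + rework_effort + prevention_effort)
    return quality_effort * 100 / total_effort

# Illustrative person-hour figures for a small development project
print(cost_of_quality_pct(120, 400, 40, 90, 30, 2000))   # 34.0 %
```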
Software Reliability
Mean Time between Failure (MTBF) is a basic measure of reliability for repairable items. It can
be described as the number of hours that pass before a component, assembly, or system fails. It
is a commonly used variable in reliability and maintainability analyses.
MTBF can be calculated as the inverse of the failure rate for constant failure rate systems. For example, if a system has a failure rate of 2 failures per million hours of usage, the MTBF would be the inverse of that failure rate, i.e. 500,000 hours.
Mean Time to Failure (MTTF) is defined as the average elapsed time between the (i-1)th failure (for example the 4th, 5th, ...) and the i-th failure (for example the 5th, 6th, ...) of the system.
Mean time to repair (MTTR) is defined as average time to implement a change or to fix a bug
and restore the system to working order.
Mean time between failure (MTBF) = MTTF + MTTR
System availability
System availability is the probability that a program is operating according to requirements at a given point in time and is defined as:
System availability = MTTF divided by (MTTF + MTTR), usually expressed as a percentage.
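The reliability and availability measures above can be sketched as follows; the failure rate, MTTF and MTTR values are illustrative assumptions.

```python
def mtbf_from_failure_rate(failures_per_hour: float) -> float:
    """For a constant failure rate, MTBF is the inverse of the failure rate."""
    return 1.0 / failures_per_hour

def availability_pct(mttf_hours: float, mttr_hours: float) -> float:
    """Availability = MTTF / (MTTF + MTTR), i.e. MTTF / MTBF, as a percentage."""
    return mttf_hours * 100 / (mttf_hours + mttr_hours)

# 2 failures per million hours of usage -> MTBF of 500,000 hours
print(mtbf_from_failure_rate(2 / 1_000_000))   # 500000.0
# A system that runs 990 hours between failures and takes 10 hours to restore
print(availability_pct(990, 10))               # 99.0 %
```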
Software usability
Usability or User-friendliness of software is the extent to which the product is convenient and
practical to use. This means the probability that the operator of a system will not experience any
user interface problem during a given period of operation under a given operational profile.
Evidence of good usability includes:
Well-structured manuals or documentation or Help functions
Informative error messages, consistent interfaces and good use of menus and graphs
Note: For details of statistical techniques for analysis, refer to "Practical Software Measurement: Measuring for Process Management and Improvement" by Anita D. Carleton.
Summary
This chapter dealt with the different metrics that are used in software projects. For different types of software projects, different metrics are defined. These metrics are quantitative measures of project performance. Different tools are used for collecting data so that data integrity is maintained. By using well-defined guidelines for collecting data, metrics can be made robust, valid, easily and consistently obtainable, and reliable. The entire measurement system becomes sustainable if there is a proper mapping between business goals and metrics.
Case Study
ZFT PLC is a multinational company with more than 15,000 employees, operating in twenty-seven countries. It has an office in Chennai, India, where it develops software for export. ZFT specializes in developing software applications for the banking industry. It has provided banking software to more than 8,000 organizations in 27 countries, in both the private and public sectors, for managing customer information and driving improvements in banking services. It focuses on innovative banking software solutions using the latest technology and is a leader in the internet banking domain in the UK. Being a new development centre in India, the development centre in Chennai did not have past data for estimating the effort required for developing these applications. Besides, the technology used for developing the applications was new.
The current case is about an application to be developed for internet banking in the United Kingdom's banking industry. The size of the application was 14,000 Function Points, and the management team in Chennai wanted to benchmark productivity for its development team. Hence, to get a benchmark productivity figure, an external benchmarking practice was adopted. Similar applications had earlier been developed in the UK development centre of the same organization; however, the data available from those earlier development activities could not be used as there was a difference in the level of available skill. Besides, the domain knowledge of the resources and the technology were new.
(1) What are the business goals for an offshore-onsite project that is being developed in Java?
(2) What are the metrics that should be defined for the above new project which is being
developed in Java technology?
Discussion Points
1. Draw a mapping between business goals and metrics.
2. What do you mean by typical attributes of measures? Explain with respect to productivity.
3. What is the difference between basic and derived metrics? Take the example of a maintenance project and explain this difference.