Chapter Three

Software Quality and Metrics

LEARNING OBJECTIVES
UNDERSTANDING SOFTWARE METRICS
DEFINITIONS OF METRICS
ATTRIBUTES OF MEASURES
METRICS FOR DIFFERENT TYPES OF PROJECTS
    DEVELOPMENT PROJECTS
    RE-ENGINEERING
    MAINTENANCE
    TESTING PROJECTS
    DATA COLLECTION & USAGE
    USING METRICS FOR DECISION MAKING
SUMMARY
CASE STUDY
    DISCUSSION POINTS
Chapter Three
Software Quality and Metrics

Learning Objectives
This chapter deals with metrics related to software projects. At the end of the chapter, students will
have learned:
 Different metrics used in software projects
 The need for metrics
 Decisions that can be taken based on these metrics

Understanding software metrics


Software quality is primarily concerned with defects that are injected into different life cycle stages.
These defects can be injected anywhere from the requirements analysis stage through the testing stage,
for many reasons: lack of knowledge, incomplete understanding of customer requirements, errors in
design specifications, and so on. The defects are then detected and removed during the execution
stages. An application's quality is always measured in terms of its defect injection and defect removal
rates. This is an important concept in software project management and is one of the aspects that needs
to be controlled and monitored throughout the life cycle of project execution, because software
development is a highly people-intensive activity and hence 'to err is human'. Defects can be injected
and detected at any stage; thus defects can be injected in the requirements analysis, design, and coding
stages. These are the stages where requirements are converted into deliverables to the customer (such
as design specifications, code, etc.).

As discussed in the previous paragraph, defects are injected at all stages of the development life cycle
and hence need to be detected and removed. In the requirements analysis and design stages, removal is
done through review of the stage deliverables, while in the coding stage both review and testing help in
removing defects. A project is considered successful if defects are removed in all these stages and the
application is delivered with few or no defects. The project management process has to plan activities
so that defects are removed at the appropriate stage, because a defect injected in an initial stage has a
cascading effect downstream: early defects breed new defects, and the time and cost involved in
removing them, if they are not removed early, become higher and higher. It is therefore imperative that
the project management process be mature enough to detect and remove defects immediately. This is,
however, easier said than done. The project manager has to plan activities such as reviews of all
deliverables from the requirements, design, and coding stages, and also has to decide the testing
strategy for code written during the life cycle stages.

Software metrics deal with the measurement of the software product and the process by which it is
developed. The software product should be viewed as an abstract object that evolves from an initial
statement of need to a finished software system, including source and object code and the different
forms of documentation produced during development. Ordinarily, these measurements of the software
process and product are studied and developed for use in modelling the software development process.
The metrics and models are then used to estimate and predict product costs and schedules and to
measure productivity and product quality. Information gained from the metrics and models can then be
used in the management and control of the development process, leading, one hopes, to improved
results.

Jalote (2000) feels that good metrics should facilitate the development of models that are capable of
predicting process or product parameters, not just describing them. As per Jalote (2001), ideal metrics
should be:

 Simple and precisely definable, so that it is clear how the metric can be evaluated
 Objective, to the greatest extent possible
 Easily obtainable (i.e., at reasonable cost)
 Valid – the metric should measure what it is intended to measure
 Robust – relatively insensitive to insignificant changes in the process or product

In addition, for maximum utility in analytical studies and statistical analyses, metrics should have data
values that belong to appropriate measurement scales.

Hence the objectives of software project measurement are to:


 Ensure that software projects operate at the desired quality and productivity levels. Quality also
covers attributes such as reliability, usability, stability, and performance, as applicable
 Ensure that projects meet their Service Level Agreements (SLAs)
 Ensure that processes operate within the defined bounds, and look for opportunities for
improvement
 Ensure that projects meet the commitments made to the customer in terms of delivery schedule
and other parameters, as applicable

The mapping between business goals and measures is given in Table 3.1.

Table 3.1: Mapping between business goals and measures

| Business goal | Measurement objective | Measures |
|---|---|---|
| Increase in productivity | Ensure that the project operates at the desired quality and productivity levels; measure process performance | Size, effort, review effectiveness, rework effort percentage |
| Improvement in delivered quality | Ensure that the project operates at the desired quality and productivity levels; meet SLAs; ensure customer satisfaction | Delivered defects, defects detected at all stages, turnaround time, service levels (as applicable) |
| Adherence to schedule | On-time delivery; meeting SLAs | Elapsed days, schedule adherence percentage |

Definitions of metrics
The Software Engineering Institute at Carnegie Mellon University (www.sei.cmu.edu) has stated that,
broadly, metrics can be classified into basic and derived metrics. Basic metrics are collected as a result
of direct measurement of process or product characteristics. The typical basic metrics collected are:

Effort – the amount of time spent on an activity, measured in person hours or person days.
Defects – non-compliance with requirements, measured in numbers.
Size – the size of the application being developed, measured in function points or lines of code.
Elapsed time – the time between the start and end of an activity, measured in days.
Requirements count – the number of requirements given by the customer, measured in numbers.
Number of requirement changes – the number of times the customer has changed his mind, resulting in
changes to the original requirements; usually measured in numbers.
Number of requests (maintenance projects) – the number of requests received from the customer to be
fixed, measured in numbers.

Derived metrics are quality indicators, calculated from the basic measures, that give insight into
process and product quality characteristics. Some of the derived metrics are:
 Productivity
 Delivered Quality
 Defect Injection Rate (DIR)
 Defect Detection Rate
 Review/Test effectiveness
 Review/Test efficiency
 Cost of Quality (COQ)
 Average Turn Around Time
 Age of bug fixes

Attributes of measures
Productivity in a software development project is substantially different from productivity in the
manufacturing industries. In a manufacturing industry, productivity is the result of capital, technology,
human resources, and the competence and skill of management. In software development, capital and
equipment play a very nominal role, and no raw material or bought-out components are used, whereas
in a manufacturing industry these two components form a very significant part and determine, to a
great extent, the productivity. Given this kind of difference, a productivity study for the manufacturing
industry cannot be applied straightaway to the software industry. If one takes a precise view of the
software industry, one observes that the key factor determining productivity in this industry is the
human resource. Hence, in this study, measuring software output in relation to the manpower deployed
has been considered for measuring productivity. Further, while a few dozen studies are available for the
manufacturing industries, there is hardly a study on productivity in the software industry. This research
is therefore intended to fill the gap by studying productivity in this industry.

In the early 1990s the IT industry, which includes both hardware and software, was considered a
sunrise industry. After the mid-1990s it became clear that this industry was going to occupy a
prominent place, with the impetus for its growth coming from revolutionary changes in
telecommunications, supply chain management, utilities, the insurance and banking sectors, greater use
of satellites, and so on. With increasing competition and the rising cost of skilled manpower, attention
to productivity was becoming inevitable in such an industry. While in the early 1990s hardware had
pre-eminence over software, in the late 1990s and early 2000s software took the pre-eminent place. In
view of these changes, the need for efficiency has been increasingly felt in the software industry. This
need has become, essentially, the driving force for the present study.

High productivity implies that, given the number of function points in a project, it has consumed fewer
person months. However, it is possible that with fewer person months being spent, the project may be
completed in a hurry, resulting in a high number of delivered defects, which can create customer
dissatisfaction. It is therefore important that while less effort is spent, no relaxation is made on the final
quality of the deliverable; in other words, while a higher value of productivity is maintained, better
quality software with few or no delivered defects should be guaranteed. Such a guarantee can help in
increasing efficiency and eventually result in higher profitability and a higher return on investment. In
this chapter a detailed approach for measuring metrics is provided.

Table 3.2 gives the details of the basic measures, their typical attributes, and the data capture
mechanism. The tools listed in Table 3.2 are used to capture data during project execution. At the
completion of a project, data related to all life cycle stages is summarized and captured. For an ongoing
maintenance project, this data is usually captured at the end of a defined period of time (for example,
data for a maintenance project can be captured at the end of 3 months and the metrics then derived).
Table 3.2: Typical attributes of measures

| Sl. No | Measure | Unit | Typical attributes of measures | Mechanism for capturing data |
|---|---|---|---|---|
| 1 | Effort | Person hours or person months | Actual and estimated task, review, and rework effort for each LC stage; preparation effort for reviews | Time sheet |
| 2 | Size | Lines of code (LOC)* or FP for source code; number of pages for documents; S/M/C (simple/medium/complex) count for programs; feature points, use case points, or object points in special cases where size is not captured in LOC; number of test cases for test plans | Size of modules; size of application | LOC counters if available; FP count using the IFPUG (International Function Point Users Group) method |
| 3 | Defects | Number of defects | Stage injected; stage detected; defect type; severity; root cause; status | Defect tracking tool |
| 4 | Schedule | Elapsed days | | MSP or Excel |
| 5 | Requirements count | Number of requirements | Initial number of requirements; total number of requirements | None |
| 6 | Number of requirement changes | Number of changes | None | Requirement tracker |
| 7 | Number of requests | Number | Type of request | Project-specific tracker |

* LOC (lines of code) can be counted as either non-commented, non-blank source statements (NCSS) or
physical lines of code (PLOC). If PLOC is used, a note on how NCSS can be estimated should be
included, e.g. NCSS = 70% of PLOC.
The size can also be measured directly in FP using the IFPUG (International Function Point Users
Group) method (www.ifpug.com (2006)).

Note:
A person month constitutes 168 person hours of effort. This is arrived at using the standard working
hours per day (8.0 hours) and 21 working days a month.

Metrics for different types of projects


Different types of projects have different measurement approaches. Because the very natures of their
life cycles are different, the metrics will also be different (Demarco (1982), www.sei.cmu.edu).
Demarco (1982) classifies metrics into two categories: basic and derived metrics.

DEVELOPMENT PROJECTS
Basic Metrics
Effort
Effort spent on all life cycle activities and management activities.
The typical life cycle activities are:
Requirements
Design
Coding
Testing
Typical management activities are:
Project management
Configuration Management

The estimated and actual effort is captured for the task, review and rework involved during
these activities.
Effort is measured in person hours.
Defects
Defects detected during reviews and testing; details such as stage injected, stage detected, severity,
and defect type are captured for all defects.
Size
The size of any software work product going through the life cycle activities is measured as follows:
LOC (lines of code) or Function Points.
Wherever it is not feasible to capture size in LOC or FP, other measures such as feature points,
use case points, or object points are used, as appropriate (for detailed definitions of use case
points and object points, please refer to www.ifpug.com (2006)).
For documents (design docs, requirements docs): number of pages.
For test plans: number of test cases.

In some cases size is measured in terms of the number of simple, medium, and complex
programs.

Schedule
Number of calendar days taken for a particular activity per life cycle stage, including holidays.
Requirements count
Number of initial requirements and total number of requirements.

Number of requirement changes, and number of added, deleted, or modified requirements.

Derived Metrics
Productivity
Productivity = (Size of the delivered product in FP) divided by (total effort in person months)

Delivered quality
Delivered quality (in terms of delivered defects per FP) = (Total number of delivered defects
found during acceptance testing and warranty) divided by (size of the software delivered
in FP)
Delivered quality (in terms of delivered defects per person hour) = (Total number of delivered
defects found during acceptance testing and warranty) divided by (Total effort for the
project in person hours)

Requirement Stability
A) Overall requirement stability (%) = (number of initial requirements * 100) divided by (total
number of requirements)
where total number of requirements = number of initial requirements + number of added,
modified, or deleted requirements.
For example, if the number of initial requirements = 90
and the total number of requirements = 125,
then overall requirement stability = 72%.
Overall requirement stability is computed at the end of the project.

B) Requirement stability for a given period (%) = (number of added, modified, or deleted
requirements in the period) divided by (cumulative requirement changes up to that
period)
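
As a minimal illustration of how these derived metrics fall out of the basic measures, the following Python sketch computes productivity, delivered quality, and the requirement stability example above. The function names and figures are ours and purely hypothetical, not taken from any metrics tool:

```python
# Illustrative sketch of the development-project derived metrics.
# All names and figures are hypothetical.

def productivity(size_fp: float, effort_person_months: float) -> float:
    """Size of the delivered product in FP per person month of total effort."""
    return size_fp / effort_person_months

def delivered_quality_per_fp(delivered_defects: int, size_fp: float) -> float:
    """Delivered defects (acceptance testing + warranty) per FP."""
    return delivered_defects / size_fp

def overall_requirement_stability(initial_reqs: int, changed_reqs: int) -> float:
    """Initial requirements as a percentage of total requirements."""
    total = initial_reqs + changed_reqs  # total = initial + added/modified/deleted
    return initial_reqs * 100.0 / total

# Example from the text: 90 initial requirements, 35 added/modified/deleted.
print(overall_requirement_stability(90, 35))  # -> 72.0 (%)
# Hypothetical: 450 FP delivered with 15 person months of effort.
print(productivity(450.0, 15.0))              # -> 30.0 FP per person month
print(delivered_quality_per_fp(9, 450.0))     # -> 0.02 defects per FP
```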
RE-ENGINEERING
All metrics are the same as the development metrics except productivity, whose definition is given
below:

Productivity (size in FP per person month) = (Total size added, modified, or deleted in the
application) divided by (total effort in person months)

For re-engineering projects, the project has to capture the LOC added, modified, or deleted in the
existing application. This is converted to FP and used for calculating productivity.

MAINTENANCE
Maintenance projects are characterized by six kinds of requests:
o Bug fixes
o Minor enhancements
o Major enhancements
o Production Support
o Analysis/R&D
o Testing

For each Request type serviced, the actual and estimated effort, the actual number of programs added,
number of programs modified, the LOC added/modified/deleted and the equivalent FP are to be
captured.

Basic Metrics
Size
For enhancements:
The unit of measure of size is the number of LOC (as in most cases) that have been added,
deleted, or modified. The size can also be measured in FP using the IFPUG (International
Function Point Users Group) method.

For bug-fixes:
Capture the number of bugs that are fixed.
Also, for maintenance projects, the number of each type of request should be captured.
Effort
Estimated and actual effort is captured separately for each type of request.
For major enhancements, capture the management and life cycle efforts as detailed under the
development project metrics described earlier.
For minor enhancements, bug fixes, and analysis/R&D requests, the life cycle related effort
should be captured.

Defects
Same as development project

Schedule
Number of calendar days taken for servicing the request, including holidays.

Derived Metrics
Productivity
For Bug-Fix
Productivity = Total number of Bug-Fixes divided by Total Effort spent in person months for
servicing bug-fix requests

For Enhancements (major and minor)


Productivity = Size in FP divided by Total effort in person months

For System Appreciation


Productivity for system appreciation is collected at the end of the SA phase of the project.

Productivity = Size in FP analyzed divided by Total effort in person months or person days

For Analysis/R&D
Productivity = Size in FP analyzed divided by Total effort in person months or person days.
Note:
FP is calculated as {LOC of added programs + (LOC added, modified, or deleted of modified
programs)} divided by the LOC-to-FP conversion factor. This conversion factor is taken
from the Capers Jones conversion factors. The factor is a proprietary item and cannot be
obtained or published without permission from the publisher.
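
The conversion and the bug-fix productivity formula can be sketched as below. Since the Capers Jones conversion factors are proprietary, LOC_PER_FP is a placeholder value, and all other names and figures are illustrative assumptions of ours:

```python
# Sketch of maintenance productivity with a placeholder LOC-to-FP factor.
LOC_PER_FP = 50.0  # hypothetical; substitute the licensed factor for your language

def maintenance_fp(loc_of_added_programs: int, loc_changed_in_modified: int) -> float:
    """FP = {LOC of added programs + LOC added/modified/deleted of modified
    programs} / conversion factor."""
    return (loc_of_added_programs + loc_changed_in_modified) / LOC_PER_FP

def bug_fix_productivity(bug_fixes: int, effort_person_months: float) -> float:
    """Bug-fixes serviced per person month of bug-fix effort."""
    return bug_fixes / effort_person_months

print(maintenance_fp(2000, 500))      # -> 50.0 FP with the placeholder factor
print(bug_fix_productivity(42, 3.0))  # -> 14.0 fixes per person month
```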

Delivered Quality
For Bug-Fix
Delivered quality = (Bug-fixes rejected during acceptance testing or warranty) divided by (Total
number of bugs fixed during that period)

For Enhancements (major and minor)


Delivered quality = Delivered defects (i.e. the number of defects detected during acceptance
testing and warranty) divided by FP
= Delivered defects divided by {(Total LOC added/modified/deleted) divided by
(conversion factor)}

For Analysis/R&D
Requests requiring predominantly Analysis, and comparatively little code change:
Delivered quality = Delivered defects divided by FP

For a purely analysis type of request:


Delivered quality – not applicable

Turn Around Time for a request


The unit of measurement is hours.
TAT (hours) = Time and date of delivery of the request to the customer – Time and date of
receipt of the request from the customer

This is applicable for bug fixes and production support requests.

Age of open request


Age of open request (hours) = current time and date – time and date of receipt of request
Age of open request (days) = current date – date of receipt of request

Average Turn Around time


Average Turn Around time = (Total turn around time in hours) divided by (Total number of
requests serviced)

This is applicable for bug fixes and production support requests.
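
For illustration, a minimal sketch of the turnaround time calculations using Python's standard datetime module; the request timestamps below are hypothetical:

```python
from datetime import datetime

def tat_hours(received: datetime, delivered: datetime) -> float:
    """TAT = time/date of delivery - time/date of receipt, in hours."""
    return (delivered - received).total_seconds() / 3600.0

def average_tat(tats: list[float]) -> float:
    """Average TAT = total turnaround time / number of requests serviced."""
    return sum(tats) / len(tats)

r1 = tat_hours(datetime(2006, 3, 1, 9, 0), datetime(2006, 3, 1, 17, 30))  # 8.5 h
r2 = tat_hours(datetime(2006, 3, 2, 10, 0), datetime(2006, 3, 3, 10, 0))  # 24.0 h
print(average_tat([r1, r2]))  # -> 16.25 hours
```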

TESTING PROJECTS
Basic Metrics
Size
Size is measured in terms of the number of test cases.
Effort
Same as Development Project

Derived Metrics
Productivity
For Manual or Regression testing/automated testing
Productivity = (Number of test cases executed) divided by (Total Effort spent on testing in
person-hours)

For Test Automation


Productivity = (Number of scripted test cases) divided by (Total effort on automation of test
cases)

Delivered Quality
For Manual or Regression testing
Delivered quality = (Number of defects raised by the customer) divided by (Total number of test
cases executed)
For Automated test cases
Delivered quality = (Number of defects raised by the customer) divided by (Total number of test
cases executed)
For Test Automation
Delivered quality = (Number of test cases (scripts) rejected by customer) divided by (Total
number of Test cases automated (scripted))
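
A small sketch of the testing-project derived metrics defined above, with hypothetical figures:

```python
def test_productivity(test_cases_executed: int, effort_person_hours: float) -> float:
    """Test cases executed per person hour of testing effort."""
    return test_cases_executed / effort_person_hours

def test_delivered_quality(customer_defects: int, test_cases_executed: int) -> float:
    """Defects raised by the customer per test case executed."""
    return customer_defects / test_cases_executed

print(test_productivity(400, 160.0))   # -> 2.5 test cases per person hour
print(test_delivered_quality(8, 400))  # -> 0.02 defects per test case
```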
METRICS COMMON TO ALL PROCESSES
Review effectiveness:
Review effectiveness = (Number of defects detected in reviews at a given stage) divided by
(number of defects injected in that stage + number of defects slipped from earlier stages).
An example is given in Table 3.3.

Table 3.3: Calculation of Review Effectiveness

| Injected stage | # of bugs injected | Detected stage | # of bugs detected | Review effectiveness | Remarks |
|---|---|---|---|---|---|
| Requirements | 150 | Requirements review | 75 | 50% (requirements review effectiveness) | The remaining 75 bugs slip to the later detection stages (design review, code review, testing) |
| Design | 50 | Design review | 40 | 32% (design review effectiveness) | Design review: # of bugs injected in design = 50, plus # of bugs slipped from requirements = 75, so the total # of bugs present in the system is 125, of which only 40 are detected |
| Coding | 125 | Code review | 90 | 42.9% (code review effectiveness) | Code review: # of bugs injected in coding = 125, plus # of bugs slipped from requirements and design (125 - 40 = 85), totalling 210 defects, of which only 90 are detected |
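
The cascade in Table 3.3 can be computed mechanically: the defects a review fails to catch are added to the pool facing the next stage. A short sketch, using the figures from the table:

```python
# (stage, defects injected in the stage, defects detected by its review)
stages = [
    ("Requirements", 150, 75),
    ("Design", 50, 40),
    ("Coding", 125, 90),
]

slipped = 0  # defects that escaped earlier reviews
for name, injected, detected in stages:
    pool = injected + slipped           # defects present at this stage
    effectiveness = detected * 100.0 / pool
    print(f"{name}: {detected}/{pool} = {effectiveness:.1f}%")
    slipped = pool - detected           # undetected defects flow downstream

# Requirements: 75/150 = 50.0%
# Design:       40/125 = 32.0%
# Coding:       90/210 = 42.9%
```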

Review Efficiency
Efficiency is measured to check whether the time taken to review a work product is worth the
effort.
Review efficiency = (Number of defects detected in a review stage) divided by (Review effort)

Defect injection rate (DIR) in coding


DIR in coding = (Number of defects injected during coding) divided by (task and rework effort
of coding)

Schedule Adherence
Schedule adherence % = 100 - schedule deviation % (in case of positive deviation)
Schedule deviation % = {(Actual delivery date - Planned delivery date) * 100} divided by
(Planned elapsed days)

A negative schedule deviation % indicates that the project was delivered within the planned date
of delivery, in which case schedule adherence is 100%.
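
A brief sketch of the schedule deviation and adherence calculations; the dates and planned duration below are hypothetical:

```python
from datetime import date

def schedule_deviation_pct(actual: date, planned: date, planned_elapsed_days: int) -> float:
    """Deviation % = (actual delivery date - planned delivery date) * 100
    / planned elapsed days."""
    return (actual - planned).days * 100.0 / planned_elapsed_days

def schedule_adherence_pct(deviation_pct: float) -> float:
    """100% for on-time or early delivery; 100 - deviation % when late."""
    return 100.0 if deviation_pct <= 0 else 100.0 - deviation_pct

dev = schedule_deviation_pct(date(2006, 6, 15), date(2006, 6, 10), 50)
print(dev, schedule_adherence_pct(dev))  # -> 10.0 90.0
```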

Deviations
A negative deviation in delivered quality and process performance parameters is indicative of
better performance. In the % deviation calculation, the denominator is always the estimated
value.

Effort deviation (%) = (Actual – Estimated) divided by Estimated * 100


Delivered quality deviation = (Actual – Estimated)
COQ deviation = (Actual – Estimated)
Defect Injection Rate (DIR) deviation = (Actual – Estimated)
Defects deviation = (Actual – Estimated)

Productivity deviation = (Estimated – Actual)


Overall defect removal effectiveness deviation = (Estimated – Actual)
Overall Defect Detection Effectiveness
Overall defect detection effectiveness % = {(Number of defects detected in all internal reviews
and testing) * 100} divided by (Total number of defects detected in the system, including
acceptance defects)
Group Review Coverage Rate
Defined as pages per elapsed hour of review for documents, and LOC per elapsed hour of
review for code.

Cost of Quality (COQ)


The cost of quality comprises two factors: the cost of conformance (prevention and appraisal)
and the cost of non-conformance (failure).
COQ includes the following:
Cost of prevention – includes training, defect prevention, and process improvement activities
Cost of appraisal – includes review/inspection effort, testing, audits, etc.
Cost of failure – includes any rework caused by delivering bugs to the customer, plus any
rework after internal reviews and testing

The cost of rework, reviews, prevention, and training can be considered directly proportional to
effort. The effort data can be captured through time sheets and group review (inspection)
summary reports. The cost of quality can then be expressed as the sum of appraisal, prevention,
and failure costs as a percentage of the total effort for life cycle activities.

Thus COQ % = ((Review effort + Test effort + Training effort + Rework effort + Effort for
prevention activities) * 100) divided by (Total effort (for the project, for development
projects; for the major enhancement, for maintenance projects))
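
A minimal sketch of the COQ % formula; the effort figures (in person hours) are hypothetical:

```python
def coq_pct(review: float, test: float, training: float,
            rework: float, prevention: float, total_effort: float) -> float:
    """COQ % = (appraisal + prevention + failure effort) * 100 / total effort."""
    return (review + test + training + rework + prevention) * 100.0 / total_effort

# e.g. 120 h of reviews, 300 h of testing, 40 h of training, 150 h of rework,
# and 30 h of prevention activities against 2,000 h of total life cycle effort:
print(coq_pct(120, 300, 40, 150, 30, 2000))  # -> 32.0 (%)
```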
Software Reliability
Mean Time Between Failures (MTBF) is a basic measure of reliability for repairable items. It
can be described as the number of hours that pass before a component, assembly, or system fails,
and it is a commonly used variable in reliability and maintainability analyses.
For systems with a constant failure rate, MTBF can be calculated as the inverse of the failure
rate. For example, if a system has a failure rate of 2 failures per million hours of usage, the
MTBF is the inverse of that failure rate:

MTBF = (1,000,000 hours) divided by (2 failures) = 500,000 hours

Mean Time To Failure (MTTF) is defined as the average elapsed time between the (i-1)th and
the i-th failure of the system (for example, between the 4th and 5th, or the 5th and 6th, failures).

Mean Time To Repair (MTTR) is defined as the average time to implement a change or to fix a
bug and restore the system to working order.
Mean time between failures (MTBF) = MTTF + MTTR

System availability
System availability is the probability that a program is operating according to requirements at a
given point in time, expressed as a percentage:

System availability = MTTF / (MTTF + MTTR) * 100
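
A short worked sketch of these reliability formulas, reproducing the MTBF example above; the MTTF and MTTR values used for availability are hypothetical:

```python
def mtbf(mttf_hours: float, mttr_hours: float) -> float:
    """MTBF = MTTF + MTTR."""
    return mttf_hours + mttr_hours

def availability_pct(mttf_hours: float, mttr_hours: float) -> float:
    """System availability = MTTF / (MTTF + MTTR) * 100."""
    return mttf_hours / (mttf_hours + mttr_hours) * 100.0

# Constant failure rate of 2 failures per million hours:
print(1_000_000 / 2)                 # -> 500000.0 hours MTBF
# Hypothetical MTTF of 495 h and MTTR of 5 h:
print(mtbf(495.0, 5.0))              # -> 500.0 hours
print(availability_pct(495.0, 5.0))  # -> 99.0 (%)
```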

Other measures are:


Percentage of customisation required, or number of modules modified, to implement at the
customer site
This is mainly applicable to the product development scenario, where software products are
developed to cater to the requirements of a larger customer base rather than to service a
single customer.

Software usability
Usability, or user-friendliness, of software is the extent to which the product is convenient and
practical to use. It is the probability that the operator of a system will not experience any user
interface problems during a given period of operation under a given operational profile.
Evidence of good usability includes:
Well-structured manuals, documentation, and help functions
Informative error messages, consistent interfaces, and good use of menus and graphs

Some quantitative measures are:


Ease with which users can learn to use the system (in person days)
Number of defects raised during acceptance testing because of user misunderstandings
Number of usability defects recorded during testing and support

Data collection & usage


Data at the project level is used for process control and for process performance monitoring and
improvement. At the organization level, the data is used for organization-wide performance
measurement, identification of improvement areas, setting improvement goals, and defining the process
capability baseline. The following paragraphs define how data is used at the project and organization
levels.

At the Project Level


The data to be collected at the project level depends on many factors, such as:
Project goals and objectives
The project's defined software process
LC stages to be monitored and controlled
The project's Service Level Agreements with the customer

Using Metrics for decision making


In their research papers, Mohapatra & Mohanty (2003) have shown that the output from software
projects can meet customer satisfaction if quantitative project management is used. In quantitative
project management, metrics are used as the basis for taking decisions. Thus the results of decisions
made on the basis of metrics are predictable and can meet customer satisfaction, since the project
manager can take corrective actions accurately. As seen in the previous sections, there are a number of
metrics defined for measuring different parameters. These parameters are obtained at all stages of the
life cycle of a software project. Each metric has a different interpretation and is used at a different stage
of the project life cycle. Based on the goals that have been set for the project, the values of these metrics
indicate whether the project is in a good position or whether any improvement is needed. For example,
if the value of the review effectiveness metric at the requirements stage is 80% against a goal of 90%,
then the project manager has to carry out an analysis and take a decision on the improvements that
should be brought into the project. He can decide to review the requirements again and plug the gaps in
requirements analysis. In Table 3.4, based on prior experience, we have suggested some of these metrics
as well as statistical tools that can be used to measure them across the life cycle stages.
Table 3.4: Statistical tools for metrics measurement

| Sl # | Measurement | Statistical tools suggested | Applicable LC stage | Remarks |
|---|---|---|---|---|
| 1 | Delivered quality | Run chart; scatter diagram | After AT | More appropriate for maintenance projects |
| 2 | Productivity | Run chart; scatter diagram | After AT | More appropriate for maintenance projects |
| 3 | Requirement stability | Run chart | Requirements analysis | |
| 4 | Review effectiveness | Bar chart (for comparing review effectiveness of all stages) | Requirements, design, build | |
| 5 | Defect injection rate | Control chart | Requirements analysis, design, build | |
| 6 | Effort deviation | Control chart | Design, build | SPC tool available for build phase |
| 7 | Turnaround time | Run chart | NA | For maintenance projects |

Note: For details of the statistical techniques for analysis, refer to "Practical Software Measurement:
Measuring for Process Management and Improvement" by Anita D. Carleton.

Summary
This chapter dealt with the different metrics used in software projects. For different types of software
projects, different metrics are defined. These metrics are quantitative measures of project performance.
Different tools are used for collecting data so that data integrity is maintained. By using well-defined
guidelines for collecting data, metrics can be made robust, valid, easily and consistently obtainable, and
reliable. The entire measurement system becomes sustainable if there is a proper mapping between
business goals and metrics.

Case Study
ZFT PLC is a multinational company with more than 15,000 employees, operating in twenty-seven
countries. It has an office in Chennai, India, where it develops software for export. ZFT specializes in
developing software applications for the banking industry. It has provided banking software to more
than 8,000 organizations in 27 countries, in both the private and public sectors, for managing customer
information and driving improvements in banking services. It focuses on innovative banking software
solutions using the latest technology and is a leader in the internet banking domain in the UK. Being a
new development centre, the centre in Chennai did not have past data for estimating the effort required
to develop these applications. Besides, the technology used for developing the applications was new.
The current case is about an application to be developed for the United Kingdom's internet banking
industry. The size of the application was 14,000 function points, and the management team in Chennai
wanted to benchmark productivity for its development team. Hence, to get a productivity benchmark,
an external benchmarking practice was adopted. Similar applications had earlier been developed in the
UK development centre of the same organization; however, the data available from those earlier
development activities could not be used, as there was a difference in the level of available skill.
Besides, the domain knowledge of the resources and the technology were both new.

(1) What are the business goals for an offshore-onsite project that is being developed in Java?

(2) What are the metrics that should be defined for the above new project which is being
developed in Java technology?

Discussion Points
1. Draw a mapping between business goals and metrics.

2. What is meant by typical attributes of measures? Explain with respect to productivity.

3. What is the difference between basic and derived metrics? Take the example of a maintenance
project and explain the difference.

4. What are the metrics that are common to all processes?

5. How do you calculate review effectiveness for development projects?
