A Practical Approach to Software Quality
Gerard O'Regan
Springer-Science+Business Media, LLC
Gerard O'Regan
SQC Consulting
80 Upper Friars Rd.
Turners Cross
Cork
Ireland
[email protected]
http://sqc.netfirms.com
Overview
The aim of this book is to provide a practical introduction to software quality in
an industrial environment and is based on the author's experience in working in
software engineering and software quality improvement with leading indus-
trial companies. The book is written from a practitioner's viewpoint, and the
objective is to include both theory and practice. The reader will gain a grasp
of the fundamentals as well as guidance on the practical application of the
theory.
The principles of software quality management and software process im-
provement are discussed, and guidance on the implementation of maturity mod-
els such as the CMM, SPICE, or the ISO 9000:2000 standard is included.
Audience
This book is suitable for software engineers and managers in software compa-
nies as well as software quality professionals and practitioners.
It is an introductory textbook and is suitable for software engineering stu-
dents who are interested in the fundamentals of quality management as well as
software professionals. The book will also be of interest to the general reader
who is curious about software engineering.
Acknowledgments
I am deeply indebted to friends and colleagues in academia and in industry who
supported my efforts in this endeavor. My thanks to Rohit Dave of Motorola for
sharing his in-depth knowledge of the software quality field with me, for cama-
raderie, and for introducing me to many of the practical subtleties in quality
management. The staff of Motorola in Cork, Ireland, provided an excellent
working relationship.
John Murphy of DDSI, Ireland, supported my efforts in the implementation
of a sound quality system in DDSI, and my interests in the wider software proc-
ess improvement field. The formal methods group at Trinity College, Dublin,
were an inspiration, and my thanks to Mícheál Mac An Airchinnigh, and the
Trinity formal methods group.
My thanks to Richard Messnarz of ISCN for sharing his sound practical ap-
proach to assessment planning and execution, and his pragmatic approach to the
improvement of organizations.
Finally, I must thank my family and friends in the Cork area. I must express
a special thanks to Liam O'Driscoll for the unique Hop Island school of motiva-
tion and horse riding, and to my friends at the Hop Island Equestrian Centre.
Finally, my thanks to personal friends such as Kevin Crowley and others too
numerous to mention, and to the reviewers who sent many helpful comments
and suggestions.
Gerard O'Regan
Cork, Ireland
November 2001
1
Introduction to Software Quality

[Figure: Estimation accuracy range (chart not reproduced)]
prototyping or joint user reviews to ensure that they match the needs of the cus-
tomer.
The implementation of the requirements involves design, coding, and testing
activities. User manuals, technical documentation, and training materials may be
required also. Challenges to be faced include the technical activities of the proj-
ect, communication of changes to the project team, building quality into the
software product, verifying that the software is correct and corresponds to the
requirements, ensuring that the project is delivered on time, and, where appro-
priate, taking corrective action to recover if the project is behind schedule.
The challenges in software engineering are also faced in many other disci-
plines. Bridges have been constructed by engineers for several millennia and
bridge building is a mature engineering activity. However, civil engineering
projects occasionally fall behind schedule or suffer design flaws, for example,
the infamous Tacoma Narrows bridge (or Galloping Gertie as it was known)
collapsed in 1940 owing to a design flaw.
The Tacoma Narrows Bridge was known for its tendency to sway in wind-
storms. The shape of the bridge was like that of an aircraft wing and under
windy conditions it would generate sufficient lift to become unstable. On
November 7, 1940, a large windstorm caused severe and catastrophic failure. The
significance of the Tacoma bridge is derived from this collapse, the subsequent
investigation by engineers, and the realization that aero-dynamical forces in sus-
pension bridges were not sufficiently understood or addressed in the design of
the bridge. New research was needed and the recommendation from the investi-
gation was to use wind tunnel tests to aid in the design of the second Tacoma
Narrows bridge. New mathematical theories of bridge design also arose from
these studies.
Software engineering is a less mature field than civil engineering, and it is
only in more recent times that investigations and recommendations from soft-
ware projects have become part of the software development process. The study
of software engineering has led to new theories and understanding of software
development. This includes the use of mathematics to assist in the modeling or
understanding of the behavior or properties of a proposed software system. The
use of mathematics is an integral part of the engineer's work in other engineer-
ing disciplines. The software community has piloted the use of formal specifica-
tion of software systems, but to date formality has been mainly applied to safety
critical software. Currently, the industrial perception is that formal methods are
difficult to use, and their widespread deployment in industry is unlikely at this
time.
Software failures may cause major problems for the customer and adversely affect the
customer's business. This leads to potential credibility issues for the software
company, and damage to the customer relationship, with subsequent loss of
market share.
The Y2K bug is now a part of history and computer science folklore. The
event itself on January 1, 2000 had minimal impact on the world economy and
was, in effect, a non-event. Much has been written about the background to
the Y2K bug and the use of two digits for recording dates rather than four digits.
The solution to the Y2K problem involved finding and analyzing all code with a
Y2K impact, planning and making the necessary changes, and verifying the cor-
rectness of the changes. The cost in the UK alone is estimated to have been ap-
proximately $38 billion.
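As a minimal illustration of the underlying problem (the code below is a hypothetical sketch, not taken from any affected system), consider date arithmetic performed on two-digit years:

    def years_of_service(hired_yy, current_yy):
        # naive two-digit arithmetic: breaks once the century rolls over
        return current_yy - hired_yy

    print(years_of_service(85, 99))   # 14: correct for 1985 to 1999
    print(years_of_service(85, 0))    # -85: wrong for 1985 to 2000

    def expand(yy):
        # a common remediation ("windowing"): map two-digit years onto 1950-2049
        return 1900 + yy if yy >= 50 else 2000 + yy

    print(expand(0) - expand(85))     # 15: correct for 1985 to 2000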
The Intel response to the famous microprocessor mathematical bug back in
1994 inflicted damage on the company and its reputation. The Intel corporation
was slow to acknowledge the floating point problem in the Pentium microproc-
essor and to provide adequate information on the potential impact of the prob-
lem to its customers. This damaged its reputation and credibility at the time and
involved a large financial cost in replacing microprocessors.
The Ariane 5 failure caused major embarrassment and damage to the credi-
bility of the European Space Agency (ESA). The maiden flight of the Ariane
launcher ended in failure on June 4, 1996, after a flight time of 40 seconds. The
first 37 seconds of flight proceeded normally. The launcher then veered off its
flight path, broke up, and exploded. An independent inquiry board investigated
the cause of the failure, and the report and recommendations to prevent a future
failure are described in [Lio:96].
The inquiry noted that the failure of the inertial reference system was fol-
lowed immediately by a failure of the backup inertial reference system. The ori-
gin of the failure was narrowed down to this specific area quite quickly. The
problem was traced to a software failure owing to an operand error, specifically,
the conversion of a 64-bit floating point number to a 16-bit signed integer. The
floating point value was too large to be represented in 16 bits, and this
resulted in an operand error. The inertial reference system and
the backup reference system reported failure owing to the software exception.
The operand error occurred owing to an exceptionally high value related to the
horizontal velocity, and this was due to the fact that the early part of the trajec-
tory of the Ariane 5 was different from that of the earlier Ariane 4, and required
a higher horizontal velocity. The inquiry board made a series of recommenda-
tions to prevent a reoccurrence of similar problems.
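The flight software was written in Ada; the following Python fragment (with invented values) only sketches the arithmetic behind such an operand error:

    def to_int16(x):
        # convert a 64-bit float to a 16-bit signed integer; values outside the
        # representable range raise the equivalent of an operand error
        n = int(x)
        if not -2**15 <= n < 2**15:
            raise OverflowError("%r does not fit in a signed 16-bit integer" % x)
        return n

    print(to_int16(12345.6))    # 12345: within range, conversion succeeds
    try:
        to_int16(64000.0)       # analogous to the out-of-range horizontal velocity value
    except OverflowError as e:
        print("operand error:", e)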
These failures indicate that software quality needs to be a key driving force
in any organization. The effect of software failure may result in huge costs to
correct the software (e.g., Y2K), negative perception of a company and possible
loss of market share (e.g., Intel microprocessor problem), or the loss of a valu-
able communications satellite (e.g., Ariane 5).
In later sections the work of Deming, Shewhart, Juran, and Crosby will be dis-
cussed. The Crosby definition of quality is narrow and states that quality is sim-
ply "conformance to the requirements". This definition does not take the
intrinsic difference in quality of products into account in judging the quality of
the product or in deciding whether the defined requirements are actually appro-
priate for the product. Juran defines quality as "fitness for use" and this is a bet-
ter definition, although it does not provide a mechanism to judge better quality
when two products are equally fit to be used.
The ISO 9126 standard for information technology [ISO:91] provides a
framework for the evaluation of software quality. It defines six product quality
characteristics which indicate the extent to which a software product may be
judged to be of a high quality. These include:
Functionality: This characteristic indicates the extent to which the required functions are available in the software.

Reliability: This characteristic indicates the extent to which the software is reliable.

Usability: This indicates the usability of the software and the extent to which the users of the software judge it to be easy to use.

Efficiency: This characteristic indicates the efficiency of the software.

Maintainability: This characteristic indicates the extent to which the software product is easy to modify and maintain.

Portability: This characteristic indicates the ease of transferring the software to a different environment.
The extent to which the software product exhibits these quality characteris-
tics will judge the extent to which it will be rated as a high-quality product by
customers. The organization will need measurements to indicate the extent to
which the product satisfies these quality characteristics, and metrics for the or-
ganization are discussed in chapter 6.
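As an illustration only (the characteristic ratings, weights, and scale below are invented, not part of ISO 9126), such measurements might be combined into a single product score as follows:

    CHARACTERISTICS = ["functionality", "reliability", "usability",
                       "efficiency", "maintainability", "portability"]

    def quality_score(ratings, weights):
        # weighted average of the per-characteristic ratings (1..5 scale assumed)
        total = sum(weights[c] for c in CHARACTERISTICS)
        return sum(ratings[c] * weights[c] for c in CHARACTERISTICS) / total

    ratings = {"functionality": 4, "reliability": 3, "usability": 4,
               "efficiency": 5, "maintainability": 2, "portability": 3}
    weights = dict.fromkeys(CHARACTERISTICS, 1.0)   # equal weighting assumed
    print("overall score: %.2f" % quality_score(ratings, weights))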
In the middle ages a craftsman was responsible for the complete development of
a product from conception to delivery to the customer. This led to a very strong
sense of pride in the quality of the product by the craftsman. Apprentices joined
craftsmen to learn the trade and skills, and following a period of training and
working closely with the master they acquired the skills and knowledge to be
successful craftsmen themselves.
The industrial revolution involved a change to the traditional paradigm and
labor became highly organized with workers responsible for a particular part of
the development or manufacture of a product. The sense of ownership and the
pride of workmanship in the product were diluted as workers were now respon-
sible only for their portion of the product and not the product as a whole.
This led to a requirement for more stringent management practices, includ-
ing planning, organizing, implementation, and control. It inevitably led to a
hierarchy of labor with various functions identified, and a reporting structure for
the various functions. Supervisor controls were needed to ensure quality and
productivity issues were addressed.
Software quality control may involve extensive inspections and testing. Inspec-
tions typically consist of a formal review by experts who critically examine a
particular deliverable, for example, a requirements document, a design docu-
ment, source code, or test plans. The objective is to identify defects within the
work product and to provide confidence in its correctness. Inspections play a key
role in achieving process quality, and one well known inspection methodology is
the Fagan inspection methodology developed by Michael Fagan [Fag:76].
Inspections in a manufacturing environment are quite different in that they
take place at the end of the production cycle, and in effect, do not offer a
mechanism for quality assurance of the product; instead the defective products
are removed from the batch and reworked. There is a growing trend towards
quality sampling at the early phases of a manufacturing process to minimize
reworking of defective products.
Software testing consists of "white box" or "black box" testing techniques,
including unit testing, functional testing, system testing, performance testing,
and acceptance testing. The testing is quite methodical and includes a compre-
hensive set of test cases produced manually or by automated means. The valida-
tion of the product involves ensuring that all defined tests are executed, and that
any failed or blocked tests are corrected. In some cases, it may be impossible to
be fully comprehensive in real time testing, and only simulation testing may be
possible. In these cases, the simulated environment will need to resemble the
real time environment closely to ensure the validity of the testing.
The cost of correction of a defect is directly related to the phase in which the
defect is detected in the lifecycle. Errors detected in phase are the least expen-
sive to correct, and defects, i.e., errors detected out of phase, become increas-
ingly expensive to correct. The most expensive defect is that detected by the
customer. This is because a defect identified by a customer will require analysis
to determine the origin of the defect; it may affect requirements, design and im-
plementation. It will require testing and a fix release for the customer. There is
further overhead in project management, configuration management, and in
communication with the customer.
It is therefore highly desirable to capture defects as early as possible in the
software lifecycle, in order to minimize the effort required to re-work the defect.
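The escalation can be made concrete with a small calculation. The relative cost multipliers below are assumed purely for illustration; actual figures vary between organizations and studies:

    # assumed (illustrative) relative cost-to-fix multipliers by detection phase
    RELATIVE_COST = {"requirements": 1, "design": 3, "implementation": 10,
                     "test": 30, "customer": 100}

    def rework_cost(defects_by_phase, cost_unit=100.0):
        # total rework cost given defect counts per detection phase
        return sum(count * RELATIVE_COST[phase] * cost_unit
                   for phase, count in defects_by_phase.items())

    # ten defects cost far more when two of them escape to the customer
    print(rework_cost({"requirements": 5, "test": 3, "customer": 2}))   # 29500.0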
Modern software engineering places emphasis on defect prevention and in
learning lessons from the actual defects. This approach is inherited from manu-
facturing environments and consists of formal causal analysis meetings to brain-
storm and identify root causes and corrective actions necessary to prevent
reoccurrence. The actions are then implemented by an action team and tracked
to completion.
Next, some of the ideas of the key individuals who have had a major influ-
ence on the quality field are discussed. These include people such as Shewhart,
Deming, Juran, and Crosby.
1.4.1 Shewhart

[Figure: The Shewhart cycle (Plan, Do, Check, Act)]
The steps in the cycle are described below:

Plan: This step identifies an improvement opportunity and outlines the problem or process that will be addressed.
• Select the problem to be addressed.
• Describe the current process.
• Identify the possible causes of the problem.
• Find the root cause of problems.
• Develop an action plan to correct the root cause.

Do: This step involves carrying out the improved process and following the plan. This step may involve a pilot of the proposed changes to the process.

Check: This step involves reviewing and evaluating the result of the changes and determining the effectiveness of the changes to the process.

Act: This step involves acting on the analysis and recommended changes. It results in further plans for improvement.
1.4.2 Deming
W. Edwards Deming was one of the major figures in the quality movement. He
was influenced by the work of Shewhart, the pioneer of statistical process con-
trol. Deming's ideas on quality management were embraced by the industrial
community in post second world war Japan, and played a key role in achieving
the excellence in quality that Japanese manufactured output is internationally
famous for.
Deming argued that it is not sufficient for everyone in the organization to be
doing his best: instead, what is required is that there be a consistent purpose and
direction in the organization. That is, it is first necessary that people know what
to do, and there must be a constancy of purpose from all individuals in the orga-
nization to ensure success.
Deming argued that there is a very strong case for improving quality as costs
will decrease owing to less reworking of defective products and productivity
will increase as less time is spent in reworking. This will enable the company to
increase its market share with better quality and lower prices and to stay in busi-
ness. Conversely, companies which fail to address quality issues will lose mar-
ket share and go out of business. Deming was highly critical of the American
management approach to quality, and the lack of vision of American manage-
ment in quality. Deming also did pioneering work on consumer research and
sampling.
Deming's influential book Out of the Crisis [Dem:86], proposed 14 princi-
ples or points of action to transform the western style of management of an or-
ganization to a quality and customer focused organization. This transformation
will enable it to be in a position to be successful in producing high-quality prod-
ucts. The 14 points of action include:
• Constancy of purpose
• Quality built into the product
• Continuous improvement culture
Statistical process control is employed to minimize variability in process per-
formance as the quality of the product may be adversely affected by process
variability. This involves the analysis of statistical process control charts so that
the cause of variability can be identified and eliminated. All staff receive train-
ing on quality and barriers are removed. Deming's ideas are described in more
detail below:
Eliminate Slogans: Deming argued that slogans do not help anyone to do a better job. Slogans may potentially alienate staff or encourage cynicism. Deming criticized slogans such as "Zero defects" or "Do it right the first time" as inappropriate: how can it be made right the first time if the production machine is defective? The slogans take no account of the fact that most problems are due to the system rather than the person. A slogan is absolutely inappropriate unless there is a clearly defined strategy to attain it, and Deming argued that numerical goals set for people without a road map to reach the goals have the opposite effect to that intended, as they contribute to a loss of motivation.

Eliminate Numerical Quotas: Deming argued that quotas act as an impediment to improvement in quality, as quotas are normally based on what may be achieved by the average worker. People below the average cannot make the rate and the result is dissatisfaction and turnover. Thus, there is a fundamental conflict between quotas and pride of workmanship.

Pride of Work: The intention here is to remove barriers that rob people of pride of workmanship, for example, machines that are out of order and not repaired.

Self-Improvement: This involves encouraging education and self-improvement for everyone in the company, as an organization requires people who are improving all the time.

Take Action: This requires that management agree on direction using the 14 principles, communicate the reasons for changes to the staff, and train the staff on the 14 principles. Every job is part of a process, and the process consists of stages. There is a customer for each stage, and the customer has rights and expectations of quality. The objective is to improve the methods and procedures and thereby improve the output of the phase. The improvements may require a cross-functional team to analyze and improve the process.
Deming also identified a number of "diseases" of management which obstruct the transformation:

Lack of Constancy of Purpose: Management is too focused on short-term thinking rather than long-term improvements.

Emphasis on Short-Term Profit: A company should aim to become the world's most efficient provider of product/service. Profits will then follow.

Evaluation of Performance: Deming is against annual performance appraisal and rating.

Mobility of Management: Mobility of management frequently has a negative impact on quality.

Excessive Measurement: Excessive management by measurement.
Comment (Deming):
Deming's program has been quite influential and has many sound points. His
views on slogans in the workplace are in direct opposition to the use of slo-
gans like Crosby's "Zero defects". The key point for Deming is that a slogan
has no value unless there is a clear method to attain the particular goal de-
scribed by the slogan.
1.4.3 Juran
Joseph Juran is another giant in the quality movement and he argues for a top
down approach to quality. Juran defines quality as "fitness for use", and argues
that quality issues are the direct responsibility of management, and that man-
agement must ensure that quality is planned, controlled, and improved.
Juran's approach to quality planning involves the following steps:

Identify Customers: This includes the internal and external customers of an organization, e.g., the testing group is the internal customer of the development group, and the end user of the software is the external customer.

Determine Customer Needs: Customer needs are generally expressed in the language of the customer's organization. There may be a difference between the real customer needs and the needs as initially expressed by the customer. Thus there is a need to elicit and express the actual desired requirements, via further communication with the customer, and thinking through the consequences of the current definition of the requirements.

Translate: This involves translating the customer needs into the language of the supplier.

Establish Units of Measurement: This involves defining the measurement units to be used.

Establish Measurement: This involves setting up a measurement program in the organization and includes internal and external measurements of quality and process performance.

Develop Product: This step determines the product features to meet the needs of the customer.

Optimize Product Design: The intention is to optimize the design of the product to meet the needs of the customer and supplier.

Develop Process: This involves developing processes which can produce the products to satisfy the customer's needs.

Optimize Process Capability: This involves optimizing the capability of the process to ensure that products are of a high quality.

Transfer: This involves transferring the process to normal product development operations.
[Figure: Estimation accuracy by month (chart not reproduced)]
The following steps describe Juran's approach to achieving a breakthrough to new levels of quality performance:

Breakthrough in Attitude: This involves developing a favorable attitude to quality improvement.

Pareto: This involves concentrating on the key areas affecting quality performance.

Organization: This involves analyzing the problem and coordinating a solution.

Control: This involves ensuring that performance is controlled at the new level.

Repeat: This leads to continuous improvement with new performance levels set and breakthroughs made to achieve the new performance levels.
1.4.4 Crosby
Philip Crosby is one of the giants in the quality movement, and his ideas have
influenced the Capability Maturity Model (CMM), the maturity model devel-
oped by the Software Engineering Institute. His influential book Quality is Free
[Crs:80] outlines his philosophy of doing things right the first time, i.e., the zero
defects (ZD) program. Quality is defined as "conformance to the requirements",
and he argues that people have been conditioned to believe that error is inevita-
ble.
Crosby argues that people in their personal lives do not accept this: for ex-
ample, it would not be acceptable for nurses to drop a certain percentage of
newly born babies. He further argues that the term "Acceptable Quality Level"
(AQL) is a commitment to produce imperfect material. Crosby notes that defects
are due to two main reasons: lack of knowledge or a lack of attention of the in-
dividual.
He argues that lack of knowledge can be measured and addressed by train-
ing, but that lack of attention is a mindset that requires a change of attitude by
the individual. The net effect of a successful implementation of a zero defects
program is higher productivity due to less reworking of defective products.
Thus, quality, in effect, is free.
Crosby's approach to achieve the desired quality level of zero defects was to
put a quality improvement program in place. He outlined a 14 step quality im-
provement program. The program requires the commitment of management to
be successful and requires an organization-wide quality improvement team to be
set up. A measurement program is put in place to determine the status and cost
of quality within the organization. The cost of quality is then shared with the
staff and corrective actions are identified and implemented. The zero defect pro-
gram is communicated to the staff and one day every year is made a zero defects
day, and is used to emphasize the importance of zero defects to the organization.
The 14 steps in the Crosby quality improvement program are summarized below:

Management Commitment: Management commitment and participation is essential to ensure the success of the quality improvement program. The profile of quality is raised within the organization.

Quality Improvement Team: This involves the formation of an organization-wide cross-functional team consisting of representatives from each of the departments. The representative will ensure that actions for each department are completed.

Quality Measurement: The objective of quality measurements is to determine the status of quality in each area of the company and to identify areas where improvements are required.

Cost of Quality Evaluation: The cost of quality is an indication of the financial cost of quality to the organization. The cost is initially high, but as the quality improvement program is put in place and becomes effective there is a reduction in the cost of quality.

Quality Awareness: This involves sharing the cost of poor quality with the staff, and explaining what the quality problems are costing the organization. This helps to motivate staff on quality and on identifying corrective actions to address quality issues.

Corrective Action: This involves resolving any problems which have been identified, and bringing any problems which cannot be resolved to the attention of the management or supervisor level.

Zero Defect Program: The next step is to communicate the meaning of zero defects to the employees. The key point is that it is not a motivation program: instead, it means doing things right the first time, i.e., zero defects.

Supervisor Training: This requires that all supervisors and managers receive training on the 14 step quality improvement program.

Zero Defects Day: This involves setting aside one day each year to highlight zero defects, and its importance to the organization. Supervisors and managers will explain the importance of zero defects to the staff.

Goal Setting: This phase involves getting people to think in terms of goals and achieving the goals.

Error Cause Removal: This phase identifies any roadblocks or problems which prevent employees from performing error-free work. The list is produced from the list of problems or roadblocks for each employee.

Recognition: This involves recognizing employees who make outstanding contributions in meeting goals or quality improvement.

Quality Councils: This involves bringing quality professionals together on a regular basis to communicate with each other and to share ideas on action.

Do it over again: The principle of continuous improvement is a key part of quality improvement. Improvement does not end; it is continuous.
Comment (Crosby):
Crosby's program has been quite influential and his maturity grid has been
applied in the software CMM. The ZD part of the program is difficult to ap-
ply to the complex world of software development, where the complexity of
the systems to be developed is often the cause of defects rather than the
mindset of software professionals who are dedicated to quality. Slogans may
be dangerous and potentially unsuitable to some cultures and a zero defects
day may potentially have the effect of de-motivating staff.
There are other important figures in the quality movement including Shingo who
developed his own version of zero defects termed "Poka yoke" or defects = 0.
This involves identifying potential error sources in the process and monitoring
these sources for errors. Causal analysis is performed on any errors found, and
the root causes are eliminated. This approach leads to the elimination of all er-
rors likely to occur, and thus only exceptional errors should occur. These excep-
tional errors and their causes are then eliminated. The failure mode and effects
analysis (FMEA) methodology is a variant of this. Potential failures to the sys-
tem or sub-system are identified and analyzed, and the causes and effects and
probability of failure documented.
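A common way of documenting such an analysis (the scoring scheme and the failure modes below are illustrative assumptions, not prescribed by the text) is to rank each failure mode by a risk priority number, RPN = severity × occurrence × detection:

    # illustrative FMEA worksheet: each failure mode scored on 1..10 scales
    failure_modes = [
        {"mode": "message queue overflow",     "severity": 8, "occurrence": 3, "detection": 4},
        {"mode": "configuration file missing", "severity": 5, "occurrence": 2, "detection": 2},
        {"mode": "timer rollover",             "severity": 9, "occurrence": 1, "detection": 7},
    ]

    for fm in failure_modes:
        # risk priority number: severity * occurrence * detection
        fm["rpn"] = fm["severity"] * fm["occurrence"] * fm["detection"]

    # the highest-RPN modes are addressed first
    for fm in sorted(failure_modes, key=lambda f: f["rpn"], reverse=True):
        print("%-28s RPN=%d" % (fm["mode"], fm["rpn"]))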
Genichi Taguchi's definition of quality is quite different. Quality is defined
as "the loss a product causes to society after being shipped, other than losses
caused by its intrinsic function". Taguchi defines a loss function as a measure of
the cost of quality; L(x) = c(x - T)² + k. Taguchi also developed a method for de-
termining the optimum value of process variables which will minimize the
variation in a process while keeping a process mean on target.
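A short numerical sketch of the loss function given above (the target, coefficient, and sample values are invented for illustration):

    def taguchi_loss(x, target, c, k=0.0):
        # quadratic loss L(x) = c*(x - target)**2 + k: loss grows with the
        # square of the deviation of the characteristic x from its target
        return c * (x - target) ** 2 + k

    T, c = 10.0, 2.5            # assumed target value and cost coefficient
    for x in (10.0, 10.2, 11.0):
        print("x=%.1f loss=%.2f" % (x, taguchi_loss(x, T, c)))
    # output: 0.00, 0.10, 2.50; a value exactly on target incurs only the constant k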
Kaoru Ishikawa is well known for his work in quality control circles (QCC).
A quality control circle is a small group of employees who do similar work and
arrange to meet regularly to identify and analyze work-related problems, to
brainstorm, and to recommend and implement solutions. The problem solving
uses tools such as pareto analysis, fishbone diagrams, histograms, scatter dia-
grams, and control charts. A facilitator will train the quality circle team leaders
and a quality circle involves the following activities:
• Select problem
• State and re-state problem
• Collect facts
• Brain-storm
• Build on each other's ideas
• Choose course of action
• Presentation
Armand Feigenbaum is well known for his work in total quality control
which concerns quality assurance applied to all functions in the organization. It
is distinct from total quality management: total quality control is concerned with
controlling quality throughout, whereas TQM embodies a philosophy of quality
management and improvement involving all staff and functions throughout the
organization. Total quality management was discussed earlier.
The pre-70s approach to software development has been described as the "Mon-
golian Hordes Approach" by Ince and Andrews [InA:91]. The "method" or lack
of method is characterized by the following belief system:
• The completed code will always be full of defects.
• The coding should be finished quickly to correct these defects.
• Design as you code approach.

This is the "wave the white flag approach", i.e., accepting defeat in software
development, and suggests that irrespective of a scientific or engineering ap-
proach, the software will contain many defects, and that it makes sense to code
quickly and to identify the defects to ensure that the defects can be corrected as
soon as possible.
The motivation for software engineering came from what was termed the
"software crisis", a crisis with software being delivered over budget and much
later than its original delivery deadline, or not being delivered at all, or a product
being delivered on time but with significant quality problems. The term "soft-
ware engineering" arose out of a NATO conference held in Germany in 1968 to
discuss the critical issues in software [Bux:75].
The NATO conference led to the birth of software engineering and to new
theories and understanding of software development. Software development has
an associated lifecycle, for example, the waterfall model, which was developed
by Royce [Roy:70], or the spiral lifecycle model, developed by Boehm
[Boe:88]. These models detail the phases in the lifecycle for building a software
product.
[Fig. 1.5: The waterfall ("V") lifecycle model (diagram not reproduced)]
The waterfall model (Fig. 1.5) starts with requirements, followed by spe-
cification, design, implementation, and testing. It is typically used for projects
where the requirements can be identified early in the project lifecycle or are
known in advance. The waterfall model is also called the "V" life cycle model,
with the left-hand side of the "V" detailing requirements, specification, design,
and coding and the right-hand side detailing unit tests, integration tests, system
tests and acceptance testing. Each phase has entry and exit criteria which must
be satisfied before the next phase commences. There are many variations to the
waterfall model.
The spiral model is another lifecycle model and is useful where the require-
ments are not fully known at project initiation, and where the evolution of the
requirements is a part of the development lifecycle. The development proceeds
in a number of spirals where each spiral typically involves updates to the re-
quirements, design, code, testing, and a user review of the particular iteration or
spiral.
The spiral is, in effect, a re-usable prototype and the customer examines the
current iteration and provides feedback to the development team to be included
in the next spiral. This approach is often used in joint application development
for web-based software development. The approach is to partially implement the
system. This leads to a better understanding of the requirements of the system
and it then feeds into the next cycle in the spiral. The process repeats until the
requirements and product are fully complete. There are several variations of the
spiral model including the RAD / JAD models, DSDM models, etc. The spiral
model is shown in Fig. 1.6.
There are other life-cycle models, for example, the iterative development
process which combines the waterfall and spiral lifecycle models. The cleanroom
approach to software development includes a phase for formal specification and
its approach to testing is quite distinct from other models as it is based on the
predicted usage of the software product.
The requirements detail what the software system should do as distinct from
how this is to be done. The requirements are the foundation for the system. If
the requirements are incorrect, then irrespective of the best programmers in the
world the system will be incorrect. Prototyping may be employed to assist in the
definition of the requirements, and the prototype may be thrown away after the
prototyping phase is complete. In some cases the prototype will be kept and used
as the foundation for the system. The prototype will include key parts of the
system and is useful in determining the desired requirements of the system.
The proposed system will typically be composed of several sub-systems, and
the system requirements are composed of sub-system requirements. The sub-
system requirements are typically broken down into requirements for several
features or services. The specification of the requirements needs to be unambi-
guous to ensure that all parties involved in the development of the system under-
stand fully what is to be developed and tested. There are two categories of
requirements: namely, functional requirements and non-functional requirements.
The functional requirements are addressed via the algorithms, and non-
functional requirements may include hardware or timing requirements.
The implications of the proposed set of requirements need to be considered,
as the choice of a particular requirement may affect the choice of another re-
quirement. For example, one key problem in the telecommunications domain is
the problem of feature interaction. The problem is that two features may work
correctly in isolation, but when present together interact in an undesirable way.
Feature interactions should be identified and investigated at the requirements
phase to determine how the interaction should be resolved, and the problem of
feature interaction is discussed again in chapter 7.
• Requirement Gathering
This involves the collection of all relevant information for the
creation of the product.
• Requirement Consolidation
This involves the consolidation of the collected information into a
coherent set of requirements.
• Requirement Validation
This involves validation to ensure that the defined requirements are
actually those desired by the customer.
• Technical Analysis
This involves technical analysis to verify the feasibility of the
product.
• Developer/Client Contract
This involves a written contract between the client and the devel-
oper.
1.5.2 Specification
1.5.3 Design
Architectural Design
This involves a description of the architecture or structure of the product using
flow charts, sequence diagrams, state charts, or a similar methodology.
Functional Design
This describes the algorithms and operations to implement the specification.
Object-Oriented Reuse
This sub-phase identifies existing objects that may be reused in the implementa-
tion.
Verification of Design
This involves verification to ensure that the design is valid with respect to the
specification of the requirements. The verification may consist of a review of the
design using a methodology similar to Fagan inspections. Formal methods may
be employed for verification of the design and this involves a mathematical
proof that the design is a valid refinement of the specification.
1.5.4 Implementation
This phase involves translating the design into the target implementation lan-
guage. This involves writing or generating the actual code. The code is divided
among a development team with each programmer responsible for one or more
modules. The implementation may involve code reviews or walkthroughs to
verify the correctness of the software with respect to the design, and to ensure
that maintainability issues are addressed. The reviews generally include
verification that coding standards are followed and verification that the imple-
mentation satisfies the software design.
The implementation may use software components either developed inter-
nally or commercial off-the-shelf software (COTS). There may be some risks
from the COTS component as the supplier may decide to no longer support it,
and an organization needs to consider the issues with the use of COTS before
deciding to employ components. The issues with COTS are described in
[Voa:90], and research into COTS is being conducted by international research
groups such as the SEI and ESI.
The main benefits are increased productivity and a reduction in cycle time,
and as long as the issues with respect to COTS can be effectively managed there
is a good case for the use of COTS components.
1.5.5 Testing
Unit Test
Unit testing is performed by the programmer on the unit that has been com-
pleted, and prior to handover to an independent test group for verification. Tests
are restricted to the particular unit and interaction with other units is not consid-
ered in this type of testing. Unit tests are typically written to prove that the code
satisfies the design, and the test cases describe the purpose of the particular test.
Code coverage and branch coverage give an indication of the effectiveness of
the unit testing as it is desirable that the test cases execute as many lines of code
as possible and that each branch of a condition is covered by a test case. The tests
are executed and the results recorded by the developer, and any defects are corrected
prior to the handover to the test group. In some software development models,
e.g., the cleanroom model, the emphasis is on the correctness of the design, and
the use of unit testing is considered to be an unnecessary step in the lifecycle.
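A minimal unit-test sketch in Python (the module and function names below are hypothetical). The two test cases exercise both branches of the condition in classify(), which is what branch coverage measures; a coverage tool such as coverage.py can report the line and branch coverage achieved:

    import unittest

    def classify(defect_count):
        # unit under test: classify a build by its defect count
        if defect_count == 0:
            return "clean"
        return "needs rework"

    class ClassifyTests(unittest.TestCase):
        def test_zero_defects_is_clean(self):
            self.assertEqual(classify(0), "clean")

        def test_nonzero_defects_need_rework(self):
            self.assertEqual(classify(3), "needs rework")

    if __name__ == "__main__":
        unittest.main()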
Integration Test
This type of testing is performed by the development team on the integrated
system, and is performed after it has been demonstrated that the individual units
all work correctly in isolation. The problem is that often units may work cor-
rectly in isolation but may fail when integrated with other modules. Conse-
quently, the purpose of integration testing is to verify that the modules and their
interfaces work correctly together and to resolve any integration issues.
Sub-system Test
This testing is performed in some organizations prior to system testing, and in
large systems the objective is to verify that each large sub-system works cor-
rectly prior to the system test of the entire system. It is typically performed by a
dedicated test group independent of the development group. The purpose of this
testing is to verify the correctness of the sub-system with respect to the sub-
system requirements, and to identify any areas requiring correction, and to ver-
ify that corrections to defects are fully resolved and preserve the integrity of the
sub-system.
System Test
The purpose of this testing is to verify that the system requirements are satisfied,
and it is usually carried out by an independent test group. The system test cases
will need to be sufficient to verify that all of the requirements have been cor-
rectly implemented, and traceability of the requirements to the test cases will
usually be employed. Any requirements which have been incorrectly imple-
mented will be identified, and defects reported. The test group will verify that
the corrections to the defects are valid and that the integrity of the system is
maintained. The system testing may include security testing or performance
testing also, or they may be separate phases where appropriate.
Performance Test
The purpose of this testing is to ensure the performance of the system is within
the bounds specified in the requirements. This may include load performance
testing, where the system is subjected to heavy loads over a long period of time
(soak testing), and stress testing, where the system is subjected to heavy loads
during a short time interval. This testing generally involves the simulation of
many users using the system and measuring the various response times. Per-
formance requirements may refer to the future growth or evolution of the sys-
tem, and the performance of projected growth of the system will need to be
measured also. Test tools are essential for performance testing as often, for ex-
ample, soak performance testing will need to proceed for 24 to 48 hours, and
automated tests are therefore required to run the test cases and record the test
results.
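A minimal load-test sketch (the request() function below is a stand-in for a real call to the system under test; the user counts and sleep time are invented). It simulates a number of concurrent users, records each response time, and summarizes the results:

    import statistics
    import time
    from concurrent.futures import ThreadPoolExecutor

    def request():
        # stand-in for one call to the system under test; returns elapsed seconds
        start = time.perf_counter()
        time.sleep(0.01)                       # placeholder for real work
        return time.perf_counter() - start

    def load_test(users, requests_per_user):
        with ThreadPoolExecutor(max_workers=users) as pool:
            futures = [pool.submit(request) for _ in range(users * requests_per_user)]
            return [f.result() for f in futures]

    times = sorted(load_test(users=20, requests_per_user=10))
    print("mean=%.3fs p95=%.3fs max=%.3fs"
          % (statistics.mean(times), times[int(0.95 * len(times))], times[-1]))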
Acceptance Test
This testing is performed by a dedicated group with customer involvement in the
testing. The objective is to verify that the product exhibits the correct function-
ality and fully satisfies the customer requirements, and that the customer is
happy to accept the product. This testing is usually performed under controlled
conditions at the customer site, and the testing therefore matches the real life
behavior of the system. The customer is in a position to see the product in op-
eration and to verify that its behavior and performance is as agreed and meets
the customer requirements.
1.5.6 Maintenance
This phase continues after the release of the software product. The customer
reports problems which require investigation by the development organization.
These may include enhancements to the product or may consist of trivial or po-
tentially serious defects which negatively impact the customer's business. The
development community is required to identify the source of the defect, to correct
it, and to verify the correctness of the correction. Mature organizations will place
emphasis on post mortem analysis to learn lessons from the defect and to ensure
corrective action is taken to improve processes to prevent a repetition of the
defect.
The activities are similar to those in the waterfall lifecycle model. The start-
ing point is requirements analysis and specification, next is the design, followed
by implementation and testing. Often, requirements may change at a late stage in
a project in the software development cycle. This is often due to the fact that a
customer is unclear as to what the exact requirements should be, and sometimes
further desirable requirements only become apparent to the customer when the
system is implemented. Consequently, the customer may identify several en-
hancements to the product once the implemented product has been provided.
This highlights the need for prototyping to assist in the definition of the re-
quirements and to assist the customer in requirements elicitation.
The emphasis on testing and maintenance suggests an acceptance that the
software is unlikely to function correctly the first time, and that the emphasis on
inspections and testing is to minimize the defects that will be detected by the
customer. There seems to be a certain acceptance of defeat in current software
engineering where the assumption is that defects will be discovered by the test
department and customers, and that the goal of building a correct and reliable
software product the first time is not achievable.
The approach to software correctness almost seems to be a "brute force" ap-
proach, where quality is achieved by testing and re-testing, until the testing
group can say with confidence that all defects have been eliminated. Total qual-
ity management suggests that to have a good-quality product, quality will need to
be built into each step in the development of the product. The more effective the
in-phase inspections of deliverables, including reviews of requirements, design
and code, the higher the quality of the resulting implementation, with a corre-
sponding reduction in the number of defects detected by the test groups.
There is an inherent assumption in the approach to quality management. The
assumption is that formal inspections and testing are in some sense sufficient to
demonstrate the correctness of the software. This assumption has been chal-
lenged by the eminent computer scientist E. Dijkstra who argued in [Dij:72] that
"Testing a program demonstrates that it contains errors, never that it is
correct."
The implication of this statement, if it is correct, is that irrespective of the
amount of time spent on the testing of a program it can never be said with abso-
lute confidence that it is correct, and, at best all that may be done is to employ
statistical techniques as a measure of the confidence that the software is correct.
Instead, Dijkstra and C.A.R. Hoare argued that in order to produce correct soft-
ware the programs ought to be derived from their specifications using mathe-
matics, and that mathematical proof should be employed to demonstrate the
correctness of the program with respect to its specification. The formal methods
community have argued that the formal specification of the requirements and the
step-wise refinement of the requirements to produce the implementation accom-
panied by mathematical verification of each refinement step offers a rigorous
framework to develop programs adhering to the highest quality constraints.
Many mature organizations evaluate methods and tools, processes, and mod-
els regularly to determine their suitability for their business, and whether they
may positively impact quality, cycle time, or productivity. Studies in the US and
Europe have suggested that there are difficulties in scalability with formal meth-
ods and that some developers are uncomfortable with the mathematical notation.
Formal methods may be employed for a part or all of the lifecycle and one ap-
proach to the deployment of formal methods in an organization is to adopt a
phased approach to implementation.
An organization needs to consider an evaluation or pilot of formal methods to
determine whether there is any beneficial impact for its business. The safety-
critical area is one domain to which formal methods have been successfully ap-
plied: for example, formal methods may be used to prove the presence of safety
properties such as "when a train is in a level crossing, then the gate is closed".
In fact, limited testing may only be possible in some safety critical domains,
hence the need for further quality assurance and confidence in the correctness of
the software via simulation, or via mathematical proof of the presence or ab-
sence of certain desirable or undesirable properties for these domains. Formal
methods are discussed in more detail in chapter 7.
Many software companies may consider one defect per thousand lines of
code (KLOC) to be reasonable quality. However, if the system contains one
million lines of code this is equivalent to a thousand post-release defects, which
is unacceptable. Some mature organizations have a quality objective of three
defects per million lines of code.
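The arithmetic behind these defect-density figures is worth making explicit:

    system_size_loc = 1_000_000                  # a one-million-line system

    typical_rate = 1                             # defects per thousand lines of code (KLOC)
    print(typical_rate * system_size_loc // 1000)        # 1000 post-release defects

    six_sigma_rate = 3                           # defects per million lines of code
    print(six_sigma_rate * system_size_loc // 1_000_000) # 3 defects for the same system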
This goal is known as six sigma and is used as a quality objective by orga-
nizations such as Motorola. Six sigma (6σ) was originally applied by Motorola
to its manufacturing businesses, and subsequently applied to the software busi-
nesses. The intention was to reduce the variability in manufacturing processes,
and to therefore ensure that the manufacturing processes performed within strict
quantitative process control limits. It has since been applied to software organi-
zations outside of Motorola, and the challenge is to minimize and manage the
variability in software processes. There are six steps to six sigma:
• Identify the product (or service) you create.
• Identify your customer and your customer's requirements.
• Identify your needs to satisfy the customer.
• Define the process for doing the work.
• Mistake-proof the process and eliminate waste.
• Ensure continuous improvement by measuring, analyzing, and
controlling the improved process.
Motorola was awarded the first Malcolm Baldrige National Quality Award in 1988 for
its commitment to quality as exhibited by the six sigma initiative.
One very important measure of quality is customer satisfaction with the
company, and this feedback may be determined by defining an external cus-
tomer satisfaction survey and requesting customers to provide feedback in this
structured survey form. The information may be used to determine the overall
level of customer satisfaction with the company and the loyalty of the customer
to the company. It may also be employed to determine the perception of the
customer of the quality and reliability of the product, the usability of the prod-
uct, the ability of the company to correct defects in a timely manner, the percep-
tion of the testing capability of the organization, etc.
The Fagan inspection process typically includes the following steps:

• Planning
• Overview
• Prepare
• Inspect
• Process improvement
• Re-work
• Follow-up
The errors identified in an inspection are classified into various types as
defined by the Fagan methodology. There are other classification schemes of
defect types including the scheme defined by orthogonal defect classification
(ODC). A mature organization will record the inspection data in a database and
this will enable analysis to be performed on the most common types of errors.
The analysis will yield actions to be performed to minimize the re-occurrence of
the most common defect types. Also, the data will enable the effectiveness of
the organization in identifying errors in phase and detecting defects out of phase
to be determined and enhanced. Software inspections are described in more de-
tail in chapter 2.
Software testing has been described earlier in this chapter and two key types of
software testing are black box and white box testing. White box testing involves
checking that every path in a module has been tested and involves defining and
executing test cases to ensure code and branch coverage. The objective of black
box testing is to verify the functionality of a module or feature or the complete
system itself. Testing is both a constructive activity in that it is verifying the
correctness of functionality, and it may be a destructive activity in that the ob-
jective is to find defects in the implementation of the defined functionality. The
requirements are verified and the testing yields the presence or absence of
defects.
The various types of testing have been discussed previously and these typi-
cally include test cases which are reviewed by independent experts to ensure that
they are sufficient to verify the correctness of the software. There may also be a
need for usability type testing, i.e., the product should be easy to use with re-
spect to some usability model. One such model is SUMI developed by Jurek
Kirakowski [Kir:00]. The testing performed in the cleanroom approach is based
on a statistical analysis of the predicted usage of the system, and the emphasis is
on detecting the defects that the customer is most likely to encounter in daily
operations of the system. The cleanroom approach also provides a certificate of
reliability based on the mean time between failure.
The effectiveness of the testing is influenced by the maturity of the test proc-
ess in the organization. Testing is described in the software product engineering
key process area on the CMM. Statistics are typically maintained to determine
the effectiveness of the testing process and metrics are maintained, e.g., the
number of defects in the product, the number of defects detected in the testing
phase, the number of defects determined post testing phase, the time period be-
tween failure, etc. Testing is described in more detail in chapter 2.
The IEEE definition of software quality assurance is "the planned and system-
atic pattern of all actions necessary to provide adequate confidence that the
software performs to established technical requirements" [MaCo:96]. The soft-
ware quality assurance department provides visibility into the quality of the
work products being built, and the processes being used to create them. The
quality assurance group may be just one person operating part time or it may be
a team of quality engineers. The activities of the quality assurance group typi-
cally include software testing activities to verify the correctness of the software,
and also quality audits of the various groups involved in software development.
The testing activities have been discussed previously, and the focus here is to
discuss the role of an independent quality assurance group.
The quality group promotes quality in the organization and is independent of
the development group. It provides an independent assessment of the quality of
the product being built, and this viewpoint is quite independent of the project
manager and development viewpoint. The quality assurance group will act as the
voice of the customer and will ensure that quality is carefully considered at each
development step.
The quality group will perform audits of various projects, groups and de-
partments and will identify any deficiencies in processes and non-compliance to
the defined process. The quality group will usually have a reporting channel to
senior management, and any non-compliance issues which are not addressed at
the project level are escalated to senior management for resolution. Software
quality assurance is a level 2 key process area on the Capability Maturity Model
(CMM). The key responsibilities of the quality assurance group are summarized
as follows:
• Independent reporting to senior management
• Customer advocate
• Visibility to management
• Audits to verify compliance
• Promote quality awareness
• Promote process improvement
• Release sign-offs
The quality audit provides visibility into the work products and processes
used to develop the work products. The audit consists of interviews with several
members of the project team: the auditor determines the role and responsibilities
of each member, considers any issues which have arisen during the work, and
assesses whether there are any quality risks associated with the project, based
on the information provided by team members.
The auditor requires good written and verbal communication skills, and will
need to gather data via open and closed questions. The auditor will need to ob-
serve behavior and body language and be able to deal effectively with any po-
tential conflicts. The auditor will gather data with respect to each participant and
the role that the participant is performing, and relates this to the defined process
for their area. The entry and exit criteria to the defined processes are generally
examined to verify that the criteria have been satisfied at the various milestones.
The auditor writes a report detailing the findings from the audit and the recom-
mended corrective actions with respect to any identified non-compliance to the
defined procedures. The auditor will perform follow-up activity at a later stage
to verify that the corrective actions have been carried out by the actionees. The
audit activities include planning activities, the audit meeting, gathering data,
reporting the findings and assigning actions, and following the actions through
to closure. The audit process is described in more detail in chapter 3.
There is a relationship between the quality of the process and the quality of the
products built from the process. The defects identified during testing are very
valuable in that they enable the organization to learn and improve from the de-
fect. Defects are typically caused by a mis-execution of a process or a defect in
the process. Consequently, the lessons learned from a particular defect should be
used to correct systemic defects in the process.
Problem-solving teams are formed to analyze various problems and to iden-
tify corrective actions. The approach is basically to agree on the problem to be
solved, to collect and analyze the facts, and to choose an appropriate course of
action to resolve the problem. There are various tools to assist problem solving
and these include fishbone diagrams, histograms, trend charts, Pareto diagrams,
and bar charts. Problem solving is described in detail in chapter 6.
Fishbone Diagrams
This is the well-known cause-and-effect diagram and is in the shape of the
backbone of a fish. The approach is to identify the causes of some particular
quality effect. These may include people, materials, methods, and timing. Each
of the main causes may then be broken down into sub-causes. The root cause is
then identified, as often 80% of problems are due to 20% of causes (the 80:20
rule).
Histograms
A histogram is a way of representing data via a frequency distribution in a bar
chart format. It displays the spread of the data and illustrates the shape,
variation, and centering of the underlying distribution. The data is divided
into a number of buckets where a bucket is a particular range of data values, and
the relative frequency of each bucket is displayed in bar format. The shape of
the process and its spread from the mean is evident from the histogram.
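As a small illustration, the following Python sketch divides a set of measurements into fixed-width buckets and prints a simple text histogram. The data values and bucket width are invented for the example:

    # Sketch: divide data into fixed-width buckets and print a simple text histogram.
    def histogram(data, bucket_width):
        counts = {}
        for value in data:
            bucket = int(value // bucket_width) * bucket_width
            counts[bucket] = counts.get(bucket, 0) + 1
        for bucket in sorted(counts):
            label = "%d-%d" % (bucket, bucket + bucket_width)
            print(label, "*" * counts[bucket])

    # Example: defect repair times (in hours) for a set of problem reports.
    histogram([2, 3, 5, 7, 8, 8, 9, 12, 14, 15, 21, 22], bucket_width=5)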
Pareto Chart
The objective of a Pareto chart is to identify the key problems and to focus on
these. Problems are classified into various types or categories, and the frequency
of each category of problem is then determined. The chart is displayed in a de-
scending sequence of frequency, with the most significant category detailed first,
and the least significant category detailed last. The success in problem-solving
activities over a period of time may be judged by comparing the old and new
Pareto charts; if problem solving has been successful, the key problem categories
in the old chart should show a noticeable improvement in the new chart.
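A Pareto analysis can be sketched in a few lines of Python. The defect categories and counts below are purely illustrative; the point is the descending sort and the cumulative percentage, which highlight the vital few categories:

    # Sketch: order defect categories by frequency and show cumulative percentages.
    defects = {"requirements": 42, "coding": 18, "design": 11, "documentation": 6, "build": 3}
    total = sum(defects.values())
    cumulative = 0
    for category, count in sorted(defects.items(), key=lambda item: item[1], reverse=True):
        cumulative += count
        print("%-15s %3d  %5.1f%%  (cumulative %5.1f%%)"
              % (category, count, 100.0 * count / total, 100.0 * cumulative / total))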
Trend Graph
A trend graph is a graph of a variable over time and is a study of observed data
for trends or patterns over time.
Scatter Graphs
The scatter diagram is used to measure the relationship between variables, and
to determine whether there is a correlation between the variables. The results
may be a positive correlation, negative correlation, or no correlation between the
data. The scatter diagram provides a means to confirm a hypothesis that two
variables are related, and provides a visual means to illustrate the potential rela-
tionship.
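The strength of the relationship suggested by a scatter diagram can be quantified with a correlation coefficient. The sketch below computes the Pearson coefficient for two illustrative variables (module size and defect count); the data values are invented for the example:

    # Sketch: Pearson correlation coefficient between two variables.
    def correlation(xs, ys):
        n = len(xs)
        mean_x = sum(xs) / n
        mean_y = sum(ys) / n
        cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
        var_x = sum((x - mean_x) ** 2 for x in xs)
        var_y = sum((y - mean_y) ** 2 for y in ys)
        return cov / (var_x ** 0.5 * var_y ** 0.5)

    module_size = [120, 300, 450, 600, 900]    # lines of code
    defect_count = [2, 5, 6, 9, 14]            # defects found in test
    print("correlation = %.2f" % correlation(module_size, defect_count))

A value close to +1 or -1 suggests a strong relationship, while a value close to zero suggests no correlation.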
1.6.5 Modeling
model is good. The choice is generally influenced by the ability of the model to
explain the behavior, its simplicity, and its elegance.
The importance of models is that they serve to explain the behavior of a par-
ticular entity and may also be used to predict future behavior in other cases. Dif-
ferent models may differ in their ability to explain aspects of the entity under
study. Some models are good at explaining some parts of the behavior, other
models are good at explaining other aspects. The adequacy of a model is a key
concept in modeling, and the adequacy is determined by the effectiveness of the
model in representing the underlying behavior, and its ability to predict future
behavior. Model exploration consists of asking questions, and determining
whether the model is able to give an effective answer to the particular question.
A good model is chosen as a representation of the real world, and is referred to
whenever there are questions in relation to the aspect of the real world.
The model is a simplification or abstraction of the real world and will con-
tain only the essential details. For example, the model of an aircraft is hardly
likely to include the color of the aircraft and instead the objective may be to
model the aerodynamics of the aircraft. The principle of 'Ockham's Razor' is
used extensively in modeling and in model simplification. The objective is to
choose only those entities in the model which are absolutely necessary to ex-
plain the behavior of the world.
The software domain has applied models to assist with the complexities of
software development. These range from software maturity models such as the
Capability Maturity Model (CMM), which is employed as a framework to enhance
the capability of the organization in software development, to graphical
requirements notations such as UML, and to mathematical models derived from
formal specifications.
Crosby argues that the most meaningful measurement of quality is the cost of
quality, and the emphasis on the improvement activities in the organization is
therefore to reduce the cost of poor quality (COPQ). The cost of quality includes
the cost of external and internal failure, the cost of providing an infrastructure to
prevent the occurrence of problems and an infrastructure to verify the correct-
ness of the product. The cost of quality was divided into four subcategories (the
cost of prevention, the cost of appraisal, the cost of internal failure, and the cost
of external failure) by A.V. Feigenbaum in the 1950s, and was developed further
by James Harrington of IBM.
[Fig. 1.7: Cost of quality over time, plotted by month (Jan to Dec).]
The cost of quality graph (Fig. 1.7) will initially show high external and internal
failure costs and very low prevention costs, so the total cost of quality will be
high. However, as an effective quality system is put in place and becomes
fully operational, there will be a noticeable decrease in the external and internal
cost of quality and a gradual increase in the cost of prevention and appraisal.
The total cost of quality will substantially decrease, as the cost of provision of
the quality system is substantially below the savings gained from lower cost of
internal and external failure. The COPQ curve will indicate where the organiza-
tion is in relation to the cost of poor quality, and the organization will need to
derive a plan to achieve the desired results to minimize the cost of poor quality.
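The cost of quality can be approximated by classifying effort or expenditure into the prevention, appraisal, internal failure, and external failure categories, for example from a time sheet accountancy system. A minimal sketch is shown below; the cost figures are invented for illustration:

    # Sketch: summarize the cost of quality by category and as a share of the total.
    costs = {
        "prevention":       40000,   # training, process definition, planning
        "appraisal":        90000,   # inspections, testing, audits
        "internal failure": 70000,   # rework before release
        "external failure": 50000,   # field fixes, support, warranty
    }
    total_cost_of_quality = sum(costs.values())
    for category, cost in costs.items():
        print("%-17s %8d  %5.1f%%" % (category, cost, 100.0 * cost / total_cost_of_quality))
    print("total cost of quality:", total_cost_of_quality)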
1.6.8 Metrics
• Data gathering
• Presentation of charts
• Trends
• Action plans
Software metrics are discussed in chapter 6, and sample metrics for the vari-
ous functional areas in the organization are included. The metrics are only as
good as the underlying data, and data gathering is a key part of a metrics pro-
gram.
Customer Satisfaction Measurement
The objective is to ensure that customers are totally satisfied with the product and service and to develop loy-
alty in customers. A loyal customer will re-purchase and recommend the com-
pany to other potential customers. The customer satisfaction process is
summarized as follows:
• Define customer surveys
• Send customer surveys
• Analysis and ratings
• Customer meeting and key issues
• Action plans and follow-up
• Metrics for customer satisfaction
The definition of a customer satisfaction survey is dependent on the nature of
the business; however, the important thing is that the questionnaire employed in
the survey is usable and covers the questions that will enable the organization to
identify areas in which it is weak and in need of improvement, and also to iden-
tify areas where it is strong. The questions typically employ a rating scheme to
allow the customer to give quantitative feedback on satisfaction, and the survey
will also enable the customer to go into more detail on issues.
Software companies will be interested in the customer's perception of the
quality of software, reliability, usability, timeliness of delivery, value for money,
etc., and a sample customer satisfaction survey form is included in Table 1.10.
Table 1.10 includes 10 key questions and may be expanded to include other
relevant questions that the organization wishes to measure its performance on.
The survey form will typically include open-ended questions to enable the cus-
tomer to describe in more detail areas where the organization performed well,
and areas where the customer is unhappy with the performance.
Customer satisfaction metrics provide visibility into the level of customer
satisfaction with the software company. The objective of the software company
is to provide a very high level of customer satisfaction, and the feedback from
the customer satisfaction surveys provides an indication of the level of customer
satisfaction. Metrics are produced to provide visibility into the customer satis-
faction feedback and to identify trends in the customer satisfaction measure-
ments.
A sample customer satisfaction metric is included in Figure 1.9, and the met-
ric is derived from the data collected in Table 1.10. The metric provides a quan-
titative understanding of the level of customer satisfaction with the company,
and the company will need to analyze the measurements for trends. Customer
satisfaction is discussed again in chapter 3 as it is an important part of ISO
9000:2000.
Table 1.10. Sample customer satisfaction survey (each question is rated as
Unacceptable, Poor, Fair, Satisfied, Excellent, or N/A).

No.  Question
1.   Quality of software
2.   Ability to meet committed dates
3.   Timeliness of projects
4.   Effective testing of software
5.   Expertise of staff
6.   Value for money
7.   Quality of support
8.   Ease of installation of software
9.   Ease of use
10.  Timely resolution of problems
[Fig. 1.9: Customer satisfaction metric, showing the average rating per category:
quality of software, meeting committed dates, timeliness of projects, test
effectiveness, expertise of staff, value for money, support, ease of installation,
ease of use, timely problem resolution, accurate diagnostics, and intention to
recommend.]
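A customer satisfaction metric such as the one in Fig. 1.9 can be produced by averaging the survey ratings per question. The sketch below assumes the ratings have been mapped to a numeric scale (for example, 1 for unacceptable up to 5 for excellent) and that N/A responses are excluded; the responses shown are invented:

    # Sketch: average customer satisfaction rating per survey question (1-5 scale).
    responses = {
        "Quality of software":    [4, 5, 3, 4],
        "Meet committed dates":   [3, 4, 4, None],   # None represents an N/A answer
        "Timeliness of projects": [4, 4, 5, 5],
    }
    for question, ratings in responses.items():
        valid = [r for r in ratings if r is not None]
        average = sum(valid) / len(valid)
        print("%-25s %.2f" % (question, average))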
1.6.10 Assessments
TQM employs many of the ideas of the famous names in the quality move-
ment, including Deming, Juran, and Crosby, and promotes a culture and attitude
of delivering what is promised. Senior management are required to take charge
of the implementation of quality management, and all staff will need to be
trained in quality management and to take part in quality improvement activities.
Quality improvement is continuous.
There are four main parts of TQM (Table 1.11).
The implementation of TQM involves a focus on all areas within the organi-
zation, and in identifying areas for improvement. The problems in the particular
area are evaluated and data is collected and analyzed. An action plan is then de-
rived and the actions implemented and monitored. This is then repeated for con-
tinuous improvement. The implementation is summarized as follows:
1.7 Miscellaneous
Software quality management is, in essence, according to this author, the appli-
cation of common sense to software engineering. Clearly, it is sensible to plan
and track a project, to identify potential risks early and attempt to eliminate or
reduce their impact, to determine the requirements, to produce a design, and to
obtain feedback from customers and peers on the plans, requirements, design,
and implementation of the software. It is sensible to test the software against the
requirements, to record any problems identified, and to correct them. It is sensi-
ble to have objective criteria to determine if the software is ready to be released
to the customer, and sensible to have a post-mortem at the end of a project to
determine the lessons learned from the project and to do things slightly differ-
ently the next time for improvement, and to survey customers to obtain valuable
feedback.
Every organization has a distinct culture and this reflects the way in which
things are done in the organization. Organization culture includes the ethos of
the organization, its core values, its history, its success stories, its people,
amusing incidents, and so on.
The culture of the organization may be favorable or unfavorable to developing
high-quality software.
Occasionally, where the culture is such that it is a serious impediment to the
development of a high-quality software product, changes may be required to the
organization's culture. This may be difficult as it may involve changing core
values or changing in a fundamental way the approach to software development,
and this is subject to organization psychology, which often manifests as a resis-
tance to change from within an organization. Successful change management,
i.e., the successful implementation of a change to the organization culture, will
typically involve the following:
• Plan implementation
• Kick-off meeting
• Motivate changes
• Display plan
• Training
• Implement changes
• Monitor implementation
• Institutionalize
The culture of an organization is often illustrated by the well-known phrase
of its staff: "That's the way we do things around here". The evolution from one
level of the CMM to another often involves a change in organization culture: for
example, the introduction of a software quality assurance process at level 2, and
of a peer review process at level 3. The focus on prevention requires a change in
mindset within the organization to focus on problem solving and problem pre-
vention, rather than on fire fighting.
The explosive growth of the world wide web and electronic commerce in recent
years has made quality of web sites a key concern for any organization which
conducts part or all of its business on the world wide web. Software development
for the web is a relatively new technology area, and the web is rapidly becoming
ubiquitous in society. A web site is quite distinct from other software systems in
that:
• It may be accessed from anywhere in the world.
• It may be accessed by many different browsers.
• The usability and look and feel of the application is a key concern.
• The performance of the web site is a key concern.
• Security is a key concern.
• The web site must be capable of dealing with a large number of
transactions at any time.
• The web site has very strict availability constraints (typically
24x365 availability).
• The web site needs to be highly reliable.
Chapter 7 describes formal methods and design, and includes advanced topics
such as software configuration management, the unified modeling language
(UML), and software usability.
2
Software Inspections and Testing
Software inspections play a key role in building quality into a software product,
and testing plays a key role in verifying that the software is correct and corre-
sponds to the requirements. The objective of inspections is to build quality into
the software product, as there is clear evidence that the cost of correcting a
defect increases the later in the development cycle it is detected. Consequently,
there is an economic argument for employing software inspections, as there are
cost savings in investing in quality up front rather than
adding quality later in the cycle. The purpose of testing is to verify that quality
has been built into the product, and in a mature software company the majority
of defects (e.g., 80%) will be detected by software inspections with the remain-
der detected by the various forms of testing conducted in the organization.
There are several approaches to software inspections, and the degree of for-
mality employed in an inspection varies with the particular method adopted. The
simplest and most informal approach consists of a walkthrough of the document
or code by an individual other than the author. The informal walkthrough gener-
ally consists of a meeting of two people, namely, the author and a reviewer. The
meeting is informal and usually takes place at the author's desk or in a meeting
room, and the reviewer and author discuss the document or code, and the deliv-
erable is reviewed informally.
There are very formal software inspection methodologies and these include
the well-known Fagan inspection methodology [Fag:76] and the Gilb methodol-
ogy [Glb:94], and these typically include pre-inspection activity, an inspection
meeting, and post-inspection activity. Several inspection roles are typically em-
ployed, including an author role, an inspector role, a tester role, and a moderator
role. The Fagan inspection methodology was developed by Michael Fagan of
IBM, and the Gilb methodology was developed by Tom Gilb. The formality of
the inspection methodology used by an organization is dependent on the type of
organization and its particular business. For example, telecommunications com-
panies tend to employ a very formal inspection process, as it is possible for a
one-line software change to create a major telecommunications outage. Conse-
quently, a telecommunications company needs to assure the quality of its soft-
ware, and a key part of building the quality in is the use of software inspections.
The organization needs to devise an inspection process which is suitable for its
particular needs.
The quality of the delivered software product is only as good as the quality at
the end of each particular phase. Consequently, it is desirable to exit the phase only
when quality has been assured in the particular phase. Software inspections as-
sist in assuring that quality has been built into each phase, and thus assuring that
the quality of the delivered product is good. Software testing verifies the cor-
rectness of the software. Customer satisfaction is influenced by the quality of the
software and its timely delivery.
The cost of a requirements defect which is detected in the field includes the cost
of correcting the requirements, and the cost of design, coding, unit testing, sys-
tem testing, regression testing, and so on. There are other costs also: for exam-
ple, it may be necessary to send an engineer on site on short notice to implement
the corrections. There may also be hidden costs in the negative perception of the
company and a subsequent loss of sales. Thus there is a powerful economic ar-
gument to identify defects as early as possible, and software inspections serve as
a cost beneficial mechanism to achieve this.
There are various estimates of the cost of quality in an organization, and the
cost has been estimated to be between 20% and 40% of revenue. The exact cal-
culation may be determined by a time sheet accountancy system which details
the cost of internal and external failure and the cost of appraisal and prevention.
The precise return on investment from introducing software inspections into the
software development lifecycle needs to be calculated by the organization; how-
ever, the economic evidence available suggests that software inspections are a
very cost-effective way to improve quality and productivity.
tion rates for an inspection, and guidelines on the entry and exit criteria for an
inspection.
There are typically at least two roles in the inspection methodology. These
include the author role and the inspector role. The moderator, tester, and the
reader roles may not be present in the methodology. The next section describes
a very simple review methodology where there is no physical meeting of the
participants, and where instead the reviewers send comments directly to the
author. Then, a slightly more formal inspection process is described, and finally
the Fagan inspection process is described in detail.
The author is responsible for making sure that the review happens, and ad-
vising the participants that comments are due by a certain date. The author then
analyzes the comments, makes the required changes, and circulates the docu-
ment for approval.
COMMENT: The e-mail / fax review process may work for an organization.
It is dependent on the participants sending comments to the author, and the
author can only request the reviewer to send comments. There is no inde-
pendent monitoring of the author to ensure that the review actually happens
and that comments are requested, received, and implemented.
and investigation, and this is verified by the review leader. The document is then
circulated to the audience for sign-off.
COMMENT: The semi-formal review process may work well for an organi-
zation when there is a review leader other than the author to ensure that the
review is conducted effectively, and to verify that the follow up activity takes
place. It may work with the author acting as review leader provided the
author has received the right training on software inspections, and follows the
inspection process.
Table 2.3 in section 2.4.1 summarizes the process for semi-formal reviews.
Section 2.4.2 includes a template to record the issues identified during a semi-
formal review.
The moderator records the defects identified during the inspection, and the
defects are classified according to their type and severity. Mature organizations
typically enter defects into an inspection database to allow metrics to be gener-
ated, and to allow historical analysis. The severity of the defect is recorded, and
the major defects are classified according to the Fagan defect classification
scheme. Some organizations use other classification schemes, e.g., the orthogo-
nal defect classification scheme (ODC).
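A minimal sketch of such an inspection defect record is shown below. The field names and classification values are illustrative, loosely following the severity and type classification described in this chapter, and a real inspection database would record considerably more detail:

    # Sketch: a simple inspection defect record and a summary by severity.
    from collections import Counter
    from dataclasses import dataclass

    @dataclass
    class InspectionDefect:
        inspection_id: str
        deliverable: str      # e.g. "requirements", "design", "code"
        defect_type: str      # e.g. "LO" (logic), "RQ" (requirements)
        severity: str         # "major", "minor", "process improvement", "investigate"
        description: str

    defects = [
        InspectionDefect("INSP-12", "code", "LO", "major", "Off-by-one in loop bound"),
        InspectionDefect("INSP-12", "code", "ST", "minor", "Naming convention not followed"),
    ]
    print(Counter(d.severity for d in defects))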
The next section describes the Fagan inspection guidelines and these detail
the recommended time to be spent on the inspection activities, including prepa-
ration for the actual inspection meeting. Often, the organization may need to
tailor the Fagan inspection process to suit its needs, and the recommended times
in the Fagan process may need to be adjusted accordingly. However, the tailor-
ing will need empirical evidence to confirm that the tailored guidelines are ef-
fective in defect detection.
The Fagan inspection guidelines are based on studies by Michael Fagan and
provide recommendations for the appropriate time to spend on the various in-
spection activities. The aim is to assist the performance of an effective inspec-
tion and to thereby identify as many major defects as possible. There are two
tables presented here: the strict Fagan guidelines as required by the Fagan in-
spection methodology, and tailoring of the strict guidelines to more relaxed cri-
teria to meet the needs of organizations that cannot devote the effort demanded
by the strict guidelines.
The effort involved in a strict adherence to the Fagan guidelines is substan-
tial and the tailored guidelines presented here are based on observations by the
author of effective software inspection methodologies in leading industrial com-
panies. Empirical evidence of the effectiveness of the tailoring is not presented.
Tailoring any methodology requires care, and the effectiveness of the tailoring
should be proved by a pilot prior to its deployment in the organization. This
would generally involve quantitative data of the effectiveness of the inspection
and the number of escaped customer reported defects.
It is important to comply with the guidelines once they are deployed in the
organization, and trained moderators and inspectors will ensure awareness and
compliance. Audits can be employed to verify compliance.
The relaxed guidelines detailed in Table 2.6 do not conform to the strict Fa-
gan inspection methodology.
There are four inspector roles identified in a Fagan Inspection and these include:
Moderator: The moderator manages the inspection team through the seven-step
process. The moderator plans the inspection, chairs the meeting, keeps the
meeting focused, keeps to the Fagan guidelines, resolves any conflicts, and
ensures that the deliverables are ready to be inspected and that the inspectors
have done adequate preparation. The moderator records the defects on the
inspection sheet, and verifies that all agreed follow-up activity has been
successfully completed. The moderator is highly skilled in the inspection process
and is required to have received appropriate training in software inspections.
The moderator needs to be skillful, diplomatic, and occasionally forceful.

Reader: The reader paraphrases the deliverable, gives an independent view of the
product, and participates actively in the inspection.

Author: The author is the creator of the work product being inspected, and has an
interest in ensuring that the inspection finds all defects present in the
deliverable. The author ensures that the work product is ready to be inspected
and informs the moderator, and gives background or an overview to the team if
required. The author answers all questions and participates actively during the
inspection, and resolves all defects identified and any items which require
investigation.

Tester: The tester role focuses on how the product would be tested, and is
typically employed as part of a requirements inspection and as part of the
inspection of a test plan. The tester participates actively in the inspection.
There are explicit entry (Table 2.8) and exit criteria (Table 2.10) associated with
the various types of inspections. The entry and exit criteria need to be satisfied
to ensure that the inspection is effective. The entry criteria (Table 2.8) for the
various inspections include the following:
2.5.4 Preparation
Preparation is a key part of the inspection process, as the inspection will be inef-
fective if the inspectors are insufficiently prepared for it. The moderator is
required to cancel the inspection if any of the inspectors has been unable to do
appropriate preparation.
The inspection meeting (Table 2.9) consists of a formal meeting between the
author and at least one inspector. The inspection is concerned with finding major
defects in the particular deliverable, and verifying the correctness of the in-
spected material. The effectiveness of the inspection is influenced by
• The expertise and experience of the inspector(s)
• Preparation done by inspector(s)
• The speed of the inspection
These factors are quite clear since an inexperienced inspector will lack the
appropriate domain knowledge to understand the material in sufficient depth.
Second, an inspector who has inadequately prepared will be unable to make a
substantial contribution during the inspection. Third, the inspection is ineffective
if it tries to cover too much material in a short space of time.
The final part of the inspection is concerned with process improvement. The
inspector(s) and author examine the major defects, identify the main root causes
of the defect, and determine corrective action to address any systemic defects in
the software process. The moderator is responsible for completing the inspection
summary form and the defect log form, and for entering the inspection data into
the inspection database. The moderator will give the process improvement sug-
gestions directly to the process improvement team.
The severity of the issue identified in the Fagan inspection may be classified as
major, minor, a process improvement item, or an investigate item. The issue is
classified as major if non-detection of the issue would lead to a defect report
being raised later in the development cycle, whereas a minor issue would not
result in a defect report being raised. An issue classified as investigate requires
further study before it is classified, and an issue classified as process improve-
ment is used to improve the software development process.
Code Inspection Type        Design Type                Requirements Type
Logic (code)         LO     Usability           UY     Product objectives    PO
Design               DE     Requirements        RQ     Documentation         DS
Requirements         RQ     Logic               LO     Hardware interface    HI
Maintainability      MN     Systems interface   IS     Competition analysis  CO
Interface            IF     Portability         PY     Function              FU
Data usage           DA     Reliability         RY     Software interface    SI
Performance          PE     Maintainability     MN     Performance           PE
Standards            ST     Error handling      EH     Reliability           RL
Code comments        CC     Other               OT     Spelling              GS
The approach of ODC is to classify defects from the three orthogonal view-
points [Chi:95]. The defect impact provides a mechanism to relate the impact of
the software defect to customer satisfaction. The defect impact of a defect iden-
tified pre-release to the customer is viewed as the impact of the defect being
detected by an end-user, and for a customer-reported defect, the impact is the
actual information reported by the customer.
The inspection data is typically recorded in the inspection database; this will
enable analysis to be performed on the most common types of defects, and en-
able actions to be identified to minimize reoccurrence. The data will enable the
phase containment effectiveness to be determined, and will allow the company
to determine if the software is ready for release to the customer.
The use of the ODC classification scheme can give early warning on the
quality and reliability of the software, as experience with the ODC classification
scheme will enable an expected profile of defects to be predicted for the various
phases. The expected profile may then be compared to the actual profile. For
example, it is reasonable to expect problems if the actual defect profile at the
system test phase resembles the defect profile of the unit testing phase, since
unit testing is expected to identify a particular pool of defect types, and system
testing should receive higher-quality software with the unit testing defects
already corrected.
Consequently, ODC may be applied to make predictions of product quality and
performance.
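The comparison of an expected defect-type profile with the actual profile observed in a phase can be sketched as follows. The defect types, profile values, and warning threshold are invented for illustration, and a real ODC analysis would be considerably more sophisticated:

    # Sketch: compare expected and actual defect-type profiles (fraction per type).
    expected_system_test = {"function": 0.40, "interface": 0.25, "timing": 0.20, "assignment": 0.15}
    actual_system_test   = {"function": 0.20, "interface": 0.15, "timing": 0.10, "assignment": 0.55}

    THRESHOLD = 0.20  # flag defect types whose share deviates by more than 20 percentage points
    for defect_type, expected in expected_system_test.items():
        actual = actual_system_test.get(defect_type, 0.0)
        if abs(actual - expected) > THRESHOLD:
            print("warning: %s defects at %.0f%% vs expected %.0f%%"
                  % (defect_type, 100 * actual, 100 * expected))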
[Inspection summary form: the number of major issues, minor issues, and process
improvement items logged.]
advice of the test manager highlights any risks associated with the product, and
the risks are considered prior to its release. The test manager and test department
can be influential in an organization by providing strategic advice on product
quality and by encouraging organization change to improve the quality of the
software product by the use of best practices in software engineering.
The team of testers need to understand the system and software requirements
to be in a position to test the software. Test planning commences at the early
stages of the project and testers play a role in building quality into the software
product as well as verifying its correctness. The testers typically participate in a
review of the requirements and thus play a key role in ensuring that the require-
ments are correct and are testable. They need to develop the appropriate testing
environment to enable effective testing to take place and to identify the human
resources, hardware, and various testing tools required. The plan for testing the
project is typically documented and includes the resources required, the defini-
tion of the test environment and the test tools, and the test cases to validate the
requirements. The test cases need to be sufficient to verify the requirements and
generally include the purpose of the test case, the inputs and expected outputs,
and the test procedure for the particular test case.
The testing that is typically performed in a project generally consists of unit
testing, integration, system, regression, performance, and acceptance testing.
The unit testing is performed by the software developers, and the objective is to
verify the correctness of a module. This type of testing is termed "white box"
testing. White box testing involves checking that every path in a module has
been tested and involves defining and executing test cases to ensure code and
branch coverage. The objective of "black box" testing is to verify the function-
ality of a module or feature or the complete system itself. Testing is both a con-
structive activity in that it is verifying the correctness of the functionality, and it
also serves as a destructive activity in that another objective is to find defects in
the software. Test reporting is a key part of the project, as this enables all project
participants to understand the current quality of the software, and indicates what
needs to be done to ensure that the product meets the required quality criteria.
An organization may have an independent test group to carry out the various
types of testing. The test results are reported regularly throughout the project,
and once the test department discovers a defect, a problem report is opened, and
the problem is analyzed and corrected by the software development community.
The problem report may indicate a genuine defect, a misunderstanding by the
tester, or a request for an enhancement. An influential test department concerned
with quality improvement will ensure that the collection of defects identified
during the testing phase are analyzed at the end of the project to identify rec-
ommendations to prevent or minimize reoccurrence. The testers typically write a
test plan for the project, and the plan is reviewed by independent experts. This
ensures that it is of a high quality and that the test cases are sufficient to confirm
the correctness of the requirements. Effective testing requires sound test plan-
ning and execution, and a mature test process in the organization. Statistics are
typically maintained to determine the effectiveness of the software testing.
The testing effort is often complicated by real world issues such as late de-
livery of the software from the development community which may arise in
practice owing to challenging, deadline-driven software development. This
could potentially lead to the compression of the testing cycle as the project man-
ager may wish to stay with the original schedule. There are risks associated with
shortening the test cycle as it may mean that the test department is unable to
gather sufficient data to make an informed judgment as to whether the software
is ready for release with the obvious implication that a defect-laden product may
be shipped. Often, test departments may be understaffed, as management may
consider additional testers to be expensive and wish to minimize costs. Sound
guidelines on becoming an influential test manager are described in [Rot:00] and
include an explicit description of the problem context, the identification of what
the other person wants, the value that you as a test manager have and what you
want, how you will provide the value that the other person wants, and what to
ask for in return.
Effective testing requires good planning and execution. The IEEE 829 standard
includes a template for test planning. Testing is a sub-project of a project and
needs to be managed as a project. This involves detailing the scope of the work
to be performed; estimating the effort required to define the test cases and to
perform the testing; identifying the resources needed (including people, hard-
ware, software, and tools); assigning the resources to the tasks; defining the
schedule; identifying any risks to the schedule or quality and developing contin-
gency plans to address them; tracking progress and taking corrective action; re-
planning as appropriate where the scope of the project has changed; providing
visibility of the test status to the full project team, including the number of tests
planned, executed, passed, blocked, and failed; re-testing corrections to failed or
blocked test cases; taking corrective action to ensure quality and schedule are
achieved; and providing a final test report with a recommendation to go to ac-
ceptance testing.
• Identify the scope of testing to be done
• Estimates of time, resources, people, hardware, software and tools
• Provide resources needed
• Provide test environment
• Assign people to tasks
• Define the schedule
• Identify risks and contingency plans
• Track progress and take corrective action
• Provide regular test status of passed, blocked, failed tests
• Re-plan if scope of the project changes
• Conduct post mortem to learn any lessons
The test schedule above (Table 2.14) is a simple example of various possible
tasks in a testing project and a more detailed test plan will require finer granu-
larity. Tracking the plan to completion is essential; the actual and estimated
completion dates are tracked, and the testing project is rescheduled accordingly.
It is prudent to consider risk management early in test planning,
and the objective is to identify risks that could potentially materialize during the
project, estimate the probability and impact if a risk does materialize, and iden-
tify as far as is practical a contingency plan to address the risk.
The quality of the testing is dependent on the maturity of the test process, and a
good test process will include:
• Test planning and risk management
• Dedicated test environment and test tools
• Test case definition
• Test automation
• Formality in handover to test department
• Test execution
• Test result analysis
• Test reporting
• Measurements of test effectiveness
• Post mortem and test process improvement.
A simplified test process is sketched in Figure 2.1.
The test planning process has been described in detail in section 2.6.1 and
generally consists of a documented plan identifying the scope of testing to be
performed, the definition of the test environment, the sourcing of any required
hardware or software for the test environment, the estimation of effort and re-
sources for the various activities, risk management, the deliverables to be pro-
duced, the key milestones, the various types of testing to be performed, the
schedule, etc. The test plan is generally reviewed by the affected parties to en-
sure that it is of high quality, and that everyone understands and agrees to their
responsibilities. The test plan may be revised in a controlled manner during the
project.
The test environment varies according to the type of organization and the
business and project requirements. In safety critical domains where there are
high quality constraints, a dedicated test laboratory with lab engineers may be
employed, and booking of lab time by the software testers may be required. In a
small organization for a small project a single workstation may be sufficient to
act as the test environment. The test environment is there to support the project
in verifying the correctness of the software and a dedicated test environment
may require significant capital investment. However, a sound test environment
will pay for itself by providing an accurate assessment of the quality and reli-
ability of a software product.
The test environment includes the required hardware and software to verify
the correctness of the software. The test environment needs to be fully defined at
project initiation so as to ensure that any required hardware or software may be
sourced to ensure availability of the test environment for the scheduled start date
for testing. Testing tools for simulation of parts of the system may be required,
regression and performance test tools may be required, as well as tools for defect
reporting and tracking.
The development organization typically produces a software build under
software configuration management, and the software build is verified for integ-
rity to ensure that testing is ready to commence with the software provided. This
ensures that the content of the software build is known, that the build is formally
handed over to the test department, and that the testers are testing with the cor-
rect version of the software. The testers are required to have their test cases
defined and approved prior to the commencement of the testing. The test process
generally includes a formal handover meeting prior to the acceptance of the de-
velopment build for testing. The handover meeting typically includes objective
criteria to be satisfied prior to the acceptance of the build by the test department.
Effective testing requires a good test process, and the test process details the
various activities to be performed during testing, including the roles and respon-
sibilities, the inputs and outputs of each activity, and the entry and exit criteria.
The various types of testing employed to verify the correctness of the software
may include the following:
The definition of good test cases is essential for effective testing. The test
cases for testing a particular feature need to be complete in that the successful
execution of the test cases will provide confidence in the correctness of the
software. Hence, the test cases must relate or be traceable to the software re-
quirements, i.e., the test cases must cover the software requirements. The
mechanism for defining a traceability matrix is described in section 2.6.6 and the
trace matrix demonstrates the mapping between requirements and test cases. The
test cases will usually consist of a format similar to the following:
• Purpose of test case
• Setup required to execute the test case
• Inputs to the test case
• The test procedure
• Expected outputs or results
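A test case in this format can be represented as a simple record, for example as sketched below. The field names mirror the list above, the content is illustrative, and a requirement reference is included to support the traceability discussed in section 2.6.6:

    # Sketch: a test case record in the format described above.
    from dataclasses import dataclass, field

    @dataclass
    class TestCase:
        test_id: str
        purpose: str
        setup: str
        inputs: list
        procedure: list          # ordered steps to execute
        expected_results: str
        requirement_ids: list = field(default_factory=list)  # for traceability

    tc = TestCase(
        test_id="TC-042",
        purpose="Verify login with a valid username and password",
        setup="Test user account exists on the system",
        inputs=["username=alice", "password=secret"],
        procedure=["Open login screen", "Enter credentials", "Press OK"],
        expected_results="User is logged in and the home screen is displayed",
        requirement_ids=["REQ-7.2"],
    )
    print(tc.test_id, "-", tc.purpose)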
The test execution will follow the procedure outlined in the test cases, and
the tester will record the actual results obtained and compare this with the ex-
pected results. There may be a test results summary where each test case will
usually have a test completion status of pass, fail, or blocked. The test results
summary will indicate which test case could be executed, and whether the test
case was successful or not, and which test cases could not be executed.
Test results are generally maintained by the tester, and detailed information
relevant to the unsuccessful tests are recorded, as this will assist the developers
in identifying the precise causes of failure and will allow the team to identify the
appropriate corrective actions. The developers and tester will agree to open a
defect in the defect control system to track the successful correction of the de-
fect.
The test status (Fig. 2.2) consists of the number of tests planned, the number
of test cases run, the number that have passed, and the number of failed and
blocked tests. The test status is reported regularly to management during the
testing cycle. The test status and test results are analyzed and extra resources
provided where necessary to ensure that the product is of high quality with all
defects corrected prior to the acceptance of the product.
[Fig. 2.2: Test status as of 31.10.2000, showing tests planned, run, passed,
failed, and blocked.]
The test status is reported regularly throughout the project, and the project
team agrees the action that is to be taken to ensure that progress is made to as-
sure that the software is of high quality.
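The test status report can be produced directly from the test results summary. A minimal sketch is shown below; the test case statuses are invented for the example:

    # Sketch: summarize test status from per-test-case results.
    results = {
        "TC-001": "passed", "TC-002": "passed", "TC-003": "failed",
        "TC-004": "blocked", "TC-005": "not run", "TC-006": "passed",
    }
    planned = len(results)
    run = sum(1 for s in results.values() if s in ("passed", "failed"))
    passed = sum(1 for s in results.values() if s == "passed")
    failed = sum(1 for s in results.values() if s == "failed")
    blocked = sum(1 for s in results.values() if s == "blocked")
    print("planned=%d run=%d passed=%d failed=%d blocked=%d"
          % (planned, run, passed, failed, blocked))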
Test tools and test automation are described in detail in section 2.6.3 and test
tools are there to support the test process in quality, reduced cycle time, and
productivity. Tool selection needs to be performed in a controlled manner and it
is best to identify the requirements for the tool first and then to examine a selec-
tion of tools to determine which best meets the requirements for the tool. Tools
may be applied to test management and reporting, test results management, de-
fect management, and to the various types of testing.
A good test process will maintain measurements to determine its effective-
ness and will also include a post mortem after testing completion to ensure that
any lessons that need to be learned from the project are actually learned. Test
process improvement is important as continual improvement of the test process
will enhance the effectiveness of the testing group. The testing group will use
measures to answer questions similar to the following:
• What is the current quality of the software?
• How stable is the product at this time?
• Is the product ready to be released at this time?
• How good was the quality of the software that was handed over?
• How does the product quality compare to other products?
• How effective was the testing performed on the software?
• How many open problems are there?
• How much testing remains to be done?
The purpose of test tools is to support the test process in achieving its goals
more effectively. Test tools can enhance quality, reduce cycle time, and increase
productivity. The selection of a tool requires care to ensure that the appropriate
tool is selected to meet the requirements. Tool evaluation needs to be planned to
be effective and an evaluation plan typically includes the various activities in-
volved in the evaluation, the estimated and actual effort to complete, and the
individual carrying out the activity. The structured evaluation and selection of a
particular tool typically involves identifying the requirements for the proposed
tool, and identifying tools to evaluate against the requirements. Each tool is then
evaluated against the tool requirements to yield a tool evaluation profile, and the
results are analyzed to enable an informed decision on tool selection to be made.
The sample tool evaluation below (Table 2.16) lists all of the requirements
vertically that the tool is to satisfy, and the tools to be evaluated and rated
against each requirement are listed horizontally. Various rating schemes may be
employed and in the example presented here, a simple rating scheme of good,
fair, and bad is employed to rate the effectiveness of the tool under evaluation,
to indicate the extent to which the tool satisfies the particular requirement.
[Table 2.16: sample tool evaluation, with requirements such as test automation
and support for the various types of testing rated for each candidate tool.]
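The evaluation of candidate tools against the requirements can also be summarized numerically, for example by mapping the good, fair, and bad ratings to scores. The requirements, tool names, and ratings below are invented for illustration:

    # Sketch: score candidate tools against the tool requirements.
    SCORES = {"good": 2, "fair": 1, "bad": 0}
    ratings = {
        "Tool A": {"test automation": "good", "defect management": "fair", "reporting": "good"},
        "Tool B": {"test automation": "fair", "defect management": "good", "reporting": "bad"},
    }
    for tool, profile in ratings.items():
        total = sum(SCORES[rating] for rating in profile.values())
        print("%-7s total score = %d" % (tool, total))

A weighted variant, where more important requirements carry higher weights, is a straightforward extension.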
The purpose of stress testing and load testing is to verify that the perform-
ance of the system is within the defined quantitative limits whenever the system
is placed under heavy loads and that the system performance is acceptable for all
loads less than the defined threshold value. The WAS (Web application stress
tool) and WCAT (Web capability analysis tool) from Microsoft are useful in
performing load or stress testing. These tools simulate multiple clients on one
client machine and thus enable heavy loads to be simulated.
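The principle behind such load tools can be illustrated with a few lines of Python that spawn concurrent clients against a web server. This is only a sketch of simulating multiple clients from one machine, not a substitute for a dedicated load testing tool, and the URL is a placeholder:

    # Sketch: simulate a number of concurrent clients issuing HTTP requests.
    import time
    import urllib.request
    from concurrent.futures import ThreadPoolExecutor

    URL = "http://localhost:8080/"   # placeholder: the system under test

    def one_request(_):
        start = time.time()
        try:
            with urllib.request.urlopen(URL, timeout=10) as response:
                response.read()
            return time.time() - start
        except Exception:
            return None  # treat errors and timeouts as failed requests

    with ThreadPoolExecutor(max_workers=50) as pool:       # 50 simulated clients
        timings = list(pool.map(one_request, range(500)))  # 500 requests in total
    ok = [t for t in timings if t is not None]
    if ok:
        print("successful: %d/%d, average response time: %.3fs"
              % (len(ok), len(timings), sum(ok) / len(ok)))
    else:
        print("no successful requests out of", len(timings))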
The decisions on whether to automate, where to automate, and what to
automate are difficult and generally require the involvement of a test process
improvement team. The team will need objective data to guide them in making
the right decision, and a pilot may be considered. It tends to be difficult for a
small organization to make a major investment in test tools especially if the
projects are small. However, larger organizations will require a sophisticated
testing process to ensure that high-quality software is consistently produced.
tation as the waterfall model, except that the chronological sequence of delivery
of the documentation is more flexible. The joint application development is im-
portant as it allows early user feedback to be received on the look and feel and
correctness of the application, and thus the approach of design a little, imple-
ment a little, and test a little is quite suitable and valid for web development.
The various types of web testing include the following:
• Static testing
• Unit testing
• Functional Testing
• Browser compatibility testing
• Usability testing
• Security testing
• Load / performance / stress testing
• Availability testing
• Post deployment testing
Static testing generally involves inspections and reviews of documentation.
The purpose of static testing of web sites is to check the content of the web
pages for accuracy, consistency, correctness, and usability, and also to identify
any syntax errors or anomalies in the HTML. There are tools available (e.g.,
NetMechanic) for statically checking the HTML for syntax correctness and
anomalies. The purpose of unit testing is to verify that the content of the web
pages correspond to the design, that the content is correct, that all the links are
valid, and that the web navigation operates correctly. The purpose of functional
testing is to verify that the functional requirements are satisfied. Functional
testing may be extensive and complex as electronic commerce applications can
be quite complex, and may involve product catalogue searches, order process-
ing, credit checking and payment processing, and an electronic commerce appli-
cation may often liaise with legacy systems. Also, testing of cookies, whether
enabled or disabled, needs to be considered.
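A simple link checker illustrates the kind of automated check used when verifying that all the links on a page are valid. The sketch below uses only the Python standard library; the starting URL is a placeholder and only absolute http links are checked:

    # Sketch: extract links from a web page and check that each one loads.
    import urllib.request
    from html.parser import HTMLParser

    class LinkCollector(HTMLParser):
        def __init__(self):
            super().__init__()
            self.links = []
        def handle_starttag(self, tag, attrs):
            if tag == "a":
                for name, value in attrs:
                    if name == "href" and value and value.startswith("http"):
                        self.links.append(value)

    page = urllib.request.urlopen("http://localhost:8080/index.html").read().decode("utf-8", "replace")
    collector = LinkCollector()
    collector.feed(page)
    for link in collector.links:
        try:
            urllib.request.urlopen(link, timeout=10)
            print("OK    ", link)
        except Exception as error:
            print("BROKEN", link, error)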
The purpose of browser compatibility testing is to verify that the web brows-
ers that are to be supported are actually supported. Different browsers implement
HTML differently; for example, there are differences between the
implementation by Netscape and Microsoft. The purpose of usability testing is
to verify that the look and feel of the application is good. The purpose of secu-
rity testing is to ensure that the web site is secure. The purpose of load, perform-
ance and stress testing is to ensure that the performance of the system is within
the defined parameters. There are tools to measure web server and operating
system parameters and to maintain statistics. These tools allow simulation of a
large number of users at one time or simulation of the sustained use of a web site
over a long period of time. There is a relationship between the performance of a
system and its usability. Usability testing includes testing to verify that the look
and feel of the application are good, and that performance of loading of web
pages, graphics, etc., is good. There are automated browsing tools which go
through all of the links on a page, attempt to load each link, and produce a report
including the timing for loading an object or page at various modem speeds.
Good usability requires attention to usability in design, and usability engineering
is important for web-based or GUI applications.
The purpose of post-deployment testing is to ensure that the performance of
the web site remains good, and this is generally conducted as part of a service
level agreement (SLA). Service level agreements typically include a penalty
clause if the availability of the system or its performance falls below defined
parameters. Consequently, it is important to identify as early as possible poten-
tial performance and availability issues before they become a problem. Thus
post-deployment testing will include monitoring of web site availability, per-
formance, security, etc., and taking corrective action as appropriate. Most web
sites are operating 24 hours a day for 365 days a year, and there is the potential
for major financial loss in the case of an outage of the electronic commerce web
site. This is recognized in service level agreements with a major penalty clause
for outages. There is a very good account of e-business testing and all the asso-
ciated issues by Paul Gerrard of Systeme Evolutif Ltd., and it is described in
detail in [Ger:00].
[Figure: cumulative defect arrival curve, showing defects detected per week over
the testing period.]
The slope of the curve is steep at first as defects are detected; as testing pro-
ceeds and defects are corrected and retested, the slope of the curve levels off,
and indicates that the software has stabilized and is potentially ready to be re-
leased to the customer. However, it is important not to rush to conclusions based
on an individual measurement. For example, the above chart could possibly in-
dicate that testing halted on the 25.08.2000; as no testing has been performed
since then, and that would explain why the defect arrival rate per week is zero.
Careful investigation and analysis needs to be done before the interpretation of a
measurement is made, and usually several measurements rather than one are
employed in sound decision making.
Test defects are very valuable also, as they offer an organization an op-
portunity to strengthen its software development process to prevent the defect
from reoccurring in the future. Consequently, software testing plays an impor-
tant part in quality improvement. A mature development organization will per-
form internal reviews of requirements, design, and code prior to testing. The
effectiveness of the internal review process and the test process is demonstrated
in the phase containment metric.
Figure 2.4 indicates that the project had a phase containment effectiveness of
approximately 56%. That is, the developers identified 56% of the defects, the
system testing phase identified approximately 32% of the defects, acceptance
testing identified approximately 9% of the defects, and the customer identified
approximately 3% of the defects. Mature software organizations set goals with
respect to the phase containment effectiveness of their software. For example,
some mature organizations aim for their software development department to
have a phase defect effectiveness goal of 80%. This means that development has
a goal to identify 80% of the defects by software inspections.
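Phase containment effectiveness can be computed directly from the defect counts per detection phase. The counts below are invented, but they reproduce the kind of breakdown shown in Fig. 2.4:

    # Sketch: phase containment effectiveness from defects per detection phase.
    defects_by_phase = {
        "development (inspections/unit test)": 140,
        "system test": 80,
        "acceptance test": 22,
        "customer (post release)": 8,
    }
    total = sum(defects_by_phase.values())
    for phase, count in defects_by_phase.items():
        print("%-37s %4d  %5.1f%%" % (phase, count, 100.0 * count / total))

With these figures, development contains 56% of the defects, which matches the example described above.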
The chart above may be used to measure where the organization is with re-
spect to its phase containment, and progress on improvements of phase contain-
ment effectiveness may be tracked in this way. There is no point in setting a goal
for a particular group or area unless there is a clear mechanism to achieve the
goal. Thus in order to achieve the goal of 80% phase containment effectiveness
[Fig. 2.4: Phase containment effectiveness, showing the percentage of defects
detected in development, system test, and acceptance test.]

[Figure: problem report status for 2001, showing open problems by severity
(critical, urgent, medium).]
• The trace matrix (Table 2.17) provides the mapping between indi-
vidual requirement numbers (or sections) and the sections in the
design corresponding to the particular requirement number (or
section).
• This mapping will typically be one to many (i.e., a single require-
ment number will typically be implemented in several design sec-
tions).
• The trace matrix will be employed to demonstrate that all of the re-
quirements have been implemented and tested.
• For each requirement number, the associated test cases to verify
that the requirement has been correctly implemented will be de-
tailed.
• This mapping will typically be one to many (i.e., for a particular
requirement, several test cases will be employed to demonstrate
correctness).
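A traceability matrix can be represented as a simple mapping from requirement numbers to design sections and test cases, which also makes coverage checks easy to automate. The identifiers below are invented for illustration:

    # Sketch: requirements traceability to design sections and test cases.
    trace_matrix = {
        "REQ-1.1": {"design": ["D-2.3", "D-4.1"], "tests": ["TC-001", "TC-002"]},
        "REQ-1.2": {"design": ["D-2.4"],          "tests": ["TC-003"]},
        "REQ-1.3": {"design": [],                 "tests": []},   # not yet covered
    }
    for requirement, links in trace_matrix.items():
        if not links["design"] or not links["tests"]:
            print("warning: %s is not fully traced to design and test" % requirement)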
2.7 Summary
This chapter considered software inspections and testing in detail and discussed
how software inspections may be used to build quality into the software,
whereas testing may be used to verify that the software is of a high quality and
fit to be released to potential customers. The economic case for software inspec-
tions was discussed, and it was pointed out that it makes economic sense to de-
tect defects early as the cost of a late discovery of a defect can be quite
expensive. Various types of methodologies for reviews or inspections were dis-
cussed ranging from an informal type of inspection, to a semi-formal inspection,
and finally to the formal Fagan inspection methodology.
Software testing was considered in detail, including test planning, the test
environment setup, test case definition, test execution, defect reporting, and test
management and reporting. Various types of testing were discussed including
black and white box testing, unit and integration testing, system testing, per-
formance testing, security and usability testing. Testing in an e-commerce envi-
ronment was considered. Various tools to support the testing process were
discussed, and a methodology to assist in the selection and evaluation of tools
was considered. Metrics to provide visibility into test progress and into the
quality of the software were discussed, as was the role of testing in promoting
quality improvement.
3
The ISO 9000 Standard
3.1 Introduction
The ISO 9000 quality standard is a widely employed international standard, and
was developed by the International Standards Organization (ISO). The standard
was influenced by the British quality standard (BS 5750), and was originally
published as a standard in 1987 and revised in 1994. The standard is revised
approximately every five years to ensure that it continues to meet the needs of
the international community, and submissions for enhancements are invited. The
ISO 9000 standards may be applied to various types of organizations including
manufacturing, software and service organizations. The achievement of ISO
9000 by a company typically indicates that the company has a sound quality
system in place, and that quality and customer satisfaction are core values of the
company. ISO 9000 is regarded as a minimal standard that an organization
which takes quality seriously should satisfy, and many organizations require that
their subcontractors be ISO 9000 certified.
The ISO 9000 standard was developed owing to a need by organizations to
assess the capability or maturity of their subcontractors. The approach taken
prior to the definition of the standard was that organizations who wished to have
confidence in the capability of contractors developed their own internal quality
standard, and prior to the selection of a particular subcontractor, a quality repre-
sentative from the organization visited the proposed subcontractor, and assessed
the maturity of the subcontractor with respect to the prime contractor's or sub-
contractor's own quality system. This became expensive, especially if the orga-
nization had many subcontractors, as a visit to each individual subcontractor was
required to assess their maturity. Once the international standard became avail-
able, it allowed the organization to place the minimal requirement of satisfying
the ISO 9000 standard on the subcontractor, and thereby to expect certain mini-
mal quality standards from the subcontractor. ISO 9000 may thereby become a
discriminator in the selection of a contractor by a customer or the selection of a
subcontractor by the prime contractor. There is no longer any necessity for the
organization to assess the quality system of the subcontractor, as the certification
provides independent confirmation that the subcontractor has a sound quality
system in place.
The previous section discussed how ISO 9000 is useful as a discriminator in the
selection of a contractor by a customer, or in the selection of a sub-contractor by
a prime contractor. This section includes a more complete justification as to why
a company should consider implementing an ISO 9000 certified quality system.
The ISO 9000 standard places requirements on the quality management sys-
tem of the company, but it is flexible on the mechanism by which the company
may satisfy the requirements. The requirements include controls, processes and
procedures, and maintaining quality records as evidence.
ISO 9000 offers a structured way for a company to improve. The company
can choose the most critical clauses which will yield the greatest business gains
and focus on improvements to these first. Then as the company increases in
maturity, the other clauses in the standard may be addressed. A standard or
model is very useful as a way for an organization to know how good it actually
is and to prioritize further improvements.
The ISO 9000 standard places responsibilities on management and staff in the
company. This book is on software quality and therefore in this section the im-
pact of the standard on the software quality assurance (SQA) group is consid-
ered. The typical responsibilities of the quality group in an ISO 9000
environment are considered. The quality group will generally play a key role in
the implementation of the standard.
The quality group plays a key role in the implementation of the quality man-
agement system and in monitoring its effectiveness in the organization.
Standard and description:
ISO 8402: Quality Management and Quality Assurance Vocabulary
ISO 9000-1: Guidelines for selection and use of ISO 9000
ISO 9000-2: Guidelines for the application of ISO 9001/2/3
ISO 9000-3: Guidelines for the application of ISO 9001 to software
ISO 9000-4: Guidelines on planning, organizing and controlling resources to produce reliable products
ISO 9001: ISO standard for the design, development, test, installation and servicing of the product/service
ISO 9002: ISO standard for the production, installation and servicing (a subset of ISO 9001)
ISO 9003: Standard for final inspection and test (a subset of ISO 9001)
ISO 9004-1: Guidelines to implement a quality system
ISO 9004-4: Guidelines for continuous improvement
Clause and description:
Quality Management System: the documentation and implementation of the quality management system.
Resource Management: the provision of the resources needed to implement the quality management system.
Product or Service Realization: the provision of processes to implement the product or service.
Management Responsibility: the responsibility of management in the implementation of the quality management system.
Measurement, Analysis and Improvement: the establishment of a measurement program to measure the performance of the quality management system and to identify improvements.
(Figure: the ISO 9000:2000 process model, with customer requirements as the input and customer satisfaction as the output.)
Management Responsibility
This clause includes defining the responsibilities of management in the quality
system, and includes planning for quality and setting quality objectives, defining
a quality policy and reviewing the quality management system. It consists of
the following sub-clauses:
• Management commitment
• Customer focus
• Quality policy
• Planning
• Responsibility, authority, and communication
• Management review
The results of audits, customer feedback, process performance and product
conformity, status of preventive and corrective actions, follow up actions from
previous review, planned changes that could affect the QMS, and recommenda-
tions for improvement are all considered in the management review of the qual-
ity system. The sub-clauses are described in more detail in section 3.4.2.
Resource Management
This clause includes requirements to ensure that appropriate resources are in
place to deliver high quality software, and it includes the human and physical
infrastructure. This clause addresses human resource management, training, and
the work environment and physical infrastructure. It consists of the following
sub-clauses:
• Provision of resources
• Human resources
• Infrastructure
• Work environment
The sub-clauses are described in more detail in section 3.4.3.
Awareness Training
This involves briefing management on the ISO 9000:2000 standard and the steps
involved in the implementation of the standard. The quality manager or a repre-
sentative of the management team who is responsible for ISO 9000 implemen-
tation will typically attend a course on ISO 9000, and will then share the results
with the management team.
Establish a Team
Management will set up a team with responsibility for ISO 9000 implementa-
tion. The team will consist of management and employees, and the team chair-
person will provide regular progress reports to management. The team members
will champion ISO 9000 in the organization, and will receive more detailed
training on the standard. The team will work with the employees to implement
the standard effectively. Adequate resources and time are required for successful
implementation, and a realistic plan is defined. The plan details the activities and
the resources and scheduled completion date. The timeframe should be realistic,
and take into account that the members of the team have their normal jobs as
well as the task of ISO 9000 implementation. The team will need to be moti-
vated and have sufficient influence to deal with roadblocks effectively.
The above plan includes four activities which form part of the ISO 9000 im-
plementation program. These activities are mini projects and have associated
tasks. The estimates for the completion of each task are included. This enables a
judgement to be made on the effort required to complete the implementation of
ISO 9000 in the organization, and to determine a realistic completion date.
Continuous Improvement
The organization will use the feedback from the assessment to continuously
improve.
Celebrate
The award of ISO 9000 certification is a major achievement for the organization
and merits a celebration. The celebration demonstrates the importance attached
to quality and customer satisfaction.
There is more than one way to implement the specified ISO 9000:2000
requirements, and the organization needs to choose an implementation which is
tailored to its own needs. Generic guidelines on implementation are provided in
this section.
Control of Documents
The implementation of this clause requires that the organization define a proce-
dure, namely, the Document Control Procedure. This procedure specifies the
layout that the document must conform to, and that the document is under ver-
sion control. Controls need to be in place to ensure that the document is written
to high quality standards, and the procedure will specify that the document will
need to be reviewed by experts prior to its approval.
The review may be conducted according to the Review/Inspection process
employed in the organization, or via an electronic work flow where each re-
viewer makes comments on the deliverable before it is then passed to the next
reviewer. The approval of a document may take the form of physical signatures
confirming agreement or via electronic sign-off. The document control proce-
dure specifies how current revisions are identified, and the changes from one
version of a document to another are clearly identified. This is typically
achieved via revision bars in the document.
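A minimal sketch of the kind of record that a document control procedure might keep is shown below. The field names, status values, and methods are purely illustrative assumptions; the standard leaves the actual mechanism to the organization.

from dataclasses import dataclass, field
from typing import List

@dataclass
class Revision:
    version: str      # e.g. "1.2"
    summary: str      # the changes, also marked by revision bars in the document itself
    approved_by: str  # physical or electronic sign-off

@dataclass
class ControlledDocument:
    doc_id: str
    title: str
    status: str = "Draft"                               # e.g. Draft, Reviewed, Approved
    revisions: List[Revision] = field(default_factory=list)

    def approve(self, version: str, summary: str, approver: str) -> None:
        """Record a new approved revision of the document."""
        self.revisions.append(Revision(version, summary, approver))
        self.status = "Approved"

    def current_version(self) -> str:
        return self.revisions[-1].version if self.revisions else "0.1"

procedure = ControlledDocument("QP-001", "Document Control Procedure")
procedure.approve("1.0", "Initial release", "Quality Manager")
print(procedure.current_version(), procedure.status)   # 1.0 Approved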
Control of Records
Quality records provide evidence that the quality system conforms to require-
ments and indicate the effectiveness of the quality system. ISO 9000:2000 re-
quires that quality records remain legible and identifiable, and that controls be
defined for their identification, storage, protection, retrieval, retention time, and
disposition.
Management Commitment
The top management in the organization need to play a key role in the develop-
ment and implementation of the quality management system in the organization,
and in continuously improving the quality system. The management need to be
committed to establishing and communicating the quality policy, establishing
the quality objectives, and participating in the management reviews. The commitment of management needs to be visible throughout the organization.
Customer Focus
This clause is concerned with a focus on determining the right customer re-
quirements and achieving a high level of customer satisfaction. It is related to
the ISO 7.2.1 customer requirements clause, and the ISO 8.2.1 customer satis-
faction clause. It requires a focus on customers and end users, and identifying
their needs and expectations.
The organization needs to understand the needs of its customers, including
those of potential customers, and this may include the competition in the field.
The customer needs include price, performance, safety, functional requirements,
etc. The requirements gathering process needs to be effective in gathering the
correct requirements, and the software development process needs to be effec-
tive in delivering high quality software which matches or exceeds the customer
expectations.
Quality Policy
The quality policy (Fig. 3.3) expresses the core values that the company has on
software quality, and the company's commitment to quality. The employees
need to be familiar with the quality policy and are required to actively imple-
ment the policy. The quality policy is usually displayed in a prominent location
in the organization. It typically expresses the commitment of the company to
customer satisfaction and to continuous improvement.
The quality policy of the organization places responsibility on management
and employees, and the quality policy needs to be actively implemented. This
involves planning for quality, customer satisfaction, and continuous improve-
ment. The quality policy is periodically reviewed to ensure that it continues to
meet the needs of the organization.
QUALITY POLICY
It is the policy of Company X to provide software
products that match or exceed our customer expectations.
Customer satisfaction is a core value of our company.
We are dedicated to continuous improvement to serve
our customers better.
Planning
The quality policy and organization strategy are employed to set the quality ob-
jectives for the organization. It is important to have measurable objectives, as
this provides an objective status for management. The future needs of the orga-
nization need to be considered when setting quality objectives. The quality ob-
jectives include quantitative goals.
Management Reviews
The purpose of the management review of the quality system is to assess the
adequacy of the quality system and to address any weaknesses in the system.
The quality manager is responsible for the introduction of the quality review in
the organization. The review will typically include customer satisfaction, human
resources, training, finance, networks, project management and development,
process improvements, quality audits, preventive and corrective action status,
follow-up actions from previous reviews, etc (Fig. 3.4).
Each group in the organization will provide visibility into their area, e.g., via
metrics or objective facts, at the quality review. The quality manager is respon-
sible for working with the various groups to facilitate the introduction of met-
rics, as metrics enable trends and analysis to take place.
Each group will have an allocated period of time to provide visibility into
their area. The review involves active participation from management, and the
objective is to understand the current performance of the quality system, and to
identify any potential improvements. There are specific ISO 9000:2000 require-
ments on what needs to be covered at the quality review. This includes the re-
sults of audits and customer satisfaction feedback, process performance and pro-
duct conformity, and the status of corrective and preventive actions. Actions will
be noted and the output of the review is a set of actions to be completed by the
next review. A sample agenda for the quality review is provided in Figure 3.4.
ISO 9000 requires that records of the review be maintained and typically the
agenda, presentations, and action plans are stored either electronically or physi-
cally. The quality review is usually chaired by the quality manager, and attended
by management in the organization. The output of the review consists of actions
and the quality manager verifies that the actions are completed at the next qual-
ity review. The actions yield further improvements of the quality management
system.
The resource management clause is concerned with the provision of the resources
needed to implement the quality management system, as people need sufficient
training and education in order to fulfill their role ef-
fectively. It also includes the work environment and physical infrastructure. It
consists of the following sub-clauses:
Provision of Resources
The implementation of this clause requires that the organization determines the
resources needed to implement the quality management system and provides the
resources to meet current and future needs. The performance of the quality man-
agement system is considered at the management review, and resources needed
to enhance the performance of the quality system are discussed. The resources
include people, buildings, computers, etc. The organization needs to plan for
future resource needs, and to enhance the competence of people by education
and training, and to develop leadership skills for future managers. The organiza-
tion needs a process for identifying resource needs and for providing them.
Human Resources
The human resource function plays a strategic role in the organization. It is re-
sponsible for staff recruitment and retention, career planning for employees,
employee appraisals, health and safety, training in the organization, and a pleas-
ant working environment in the organization. The implementation of this clause
requires processes for employee appraisal and career planning, mentoring, edu-
cation and training, employee leave, health and safety, code of ethics, etc. The
HR function can play a key role in promoting a positive and pleasant work envi-
ronment in the organization, and in facilitating two-way communication be-
tween management and employees. The HR function will usually investigate the
reasons why people leave the organization, and will act on any relevant feed-
back.
The responsibilities and skills required for the various roles in the organiza-
tion need to be defined, and training identified to address any gaps in the current
qualification, skills and experience of the employees and the roles which they
are performing. An annual organization training plan is usually produced, and
the plan is updated throughout the year. There will normally be mandatory
training for employees on key areas, for example, on quality. The objectives for
individuals and teams are usually defined on an annual basis.
The training needs of the organization may change throughout the year ow-
ing to changes in tools and processes, or to a change in the strategic direction of
the organization.
Infrastructure
The implementation of this clause requires that the organization have a process
for defining the infrastructure for achieving effective and efficient product reali-
zation. The infrastructure includes buildings, furniture, office equipment, tech-
nologies, and tools, etc. The infrastructure may be planned one year in advance
with budget allocated to the planned infrastructure needs for the year ahead. The
infrastructure plan is then updated in a controlled manner throughout the year in
response to medium and short-term needs. The infrastructure is there to support
the organization in achieving its strategic goals and customer satisfaction.
The infrastructure needs to be maintained to ensure that it continues to meet the
needs of the organization, and the process for identifying the infrastructure
includes maintenance. The infrastructure for a software organization includes
computer hardware and software, and to be legally compliant the organization
will need to ensure that there is a license for the software installed on each
computer.
The organization will need a risk management plan to identify preventive
measures to prevent disasters from happening, and a disaster recovery procedure
to ensure disruption is kept to a minimum in the case of an actual disaster occur-
ring. The disaster prevention and recovery procedure is a risk management strat-
egy that identifies disaster threats to the infrastructure in the organization from
accidents or the environment itself, and includes a recovery procedure to re-
spond effectively to a disaster. The appropriate recovery procedure depends on
the scale of damage, and the important thing is that recovery is planned with
clearly defined roles and responsibilities. The damage assessment team and
damage recovery team will work together to formulate the appropriate response
to a disaster. The disaster recovery plan should be tested to ensure that it will be
effective in the case of a real disaster. The individuals with responsibility for
disaster prevention and recovery need to be trained in their roles.
Work Environment
The implementation of this clause requires the organization to develop a work
environment that will promote employee satisfaction, motivation and perform-
ance. This may include flexibility in work practices, for example, flexitime, a
state of the art building in a nice location which satisfies all the human require-
ments on noise, humidity, air quality, and cleanliness. It may include a sports
and social club.
The product or service realization clause is concerned with the provision of efficient processes for product or
service realization to ensure that the organization has the capability of satisfying
its customers. Management plays a key role in ensuring that best in class proc-
esses are defined and implemented. The implementation of this clause involves
the implementation of a number of sub-clauses.
A good project management process is needed to oversee the implementation of
the project, and the project plan will
typically include the scope of work for the project, the activities involved, the
schedule and key milestones, the resources to implement the project, and any
risks or dependencies.
A good software development process is needed to identify the customer re-
quirements and efficiently implement the requirements through design, the soft-
ware code and testing to ensure the correctness of the software product. The
software development process may follow the waterfall model or may be a RAD
lifecycle or whatever lifecycle that is appropriate to the business domain of the
organization. The formality of the software process is dependent on the type of
organization, as the safety critical requirements of an organization like NASA or
the European Space Agency (ESA) are very stringent compared to an organiza-
tion that produces computer games. Consequently, a very formal software de-
velopment process is appropriate in safety-critical software development.
The review of the requirements ensures that the requirements specified are
correct and correspond to what the customer
needs. The assumptions made are critically examined to ensure their validity and
any questions to be resolved are identified, and the answers are examined to en-
sure that there is sufficient information to complete the definition of the re-
quirements.
The contract will detail the work to be carried out by the subcontractor, and
this includes the requirements to be implemented, the schedule for implementa-
tion, the standards to be followed, and the acceptance criteria. The review of the
contract ensures that both parties understand the commitments which they are
making to one another, and the responsibilities of both parties throughout the
project. A contract will usually include a penalty clause to address the situation
where the supplier delivers the software later than the agreed schedule, or if the
software is of poor quality.
There is an ISO 9000 requirement to maintain records of the review of re-
quirements, and this protects both parties, as the review records will demonstrate
that the review happened, and document any follow-up action from the review.
The review record also enables independent verification to be performed by an
auditor to ensure that the agreed changes have been implemented in the next
revision of the requirements document. The organization needs to have a defined
review process to ensure that reviews are consistently performed.
Ryan and Stevens [Ryn:00] have identified six best practice themes for re-
quirements management: allowing sufficient time to the requirements process,
choosing the right requirements elicitation approach, communicating the re-
quirements, prioritizing the requirements, reusing the requirements, and tracking
the requirements across the lifecycle. The ISO 9000 standard does not provide
specific details to guide the implementation of a sound requirements manage-
ment process, but does provide details of what the requirements process must
satisfy.
There is an ISO 9000 requirement to implement an effective mechanism for
communicating with customers in relation to enquiries, customer feedback, etc.
Good communication channels are essential to delivering high-quality software
to the customer.
The implementation of design and development input and output requires the
software process to be documented, with each phase having inputs and then pro-
ducing outputs. The inputs to design and development include the approved re-
quirements document and the approved project plan for the project. The output
typically includes the approved high-level and low-level design, the software
code, the test plan and results, and inspection records. The software develop-
ment process will detail the inputs and outputs.
Table 3.14 includes a release checklist which may be applied at the handover
of the software to the system test group, or as criteria to be satisfied prior to the
release of the software to the customer. Tables 3.14 and 3.15 act as a control
check for verification to demonstrate that the system requirements have been
implemented correctly.
Verification may take the form of a check that the inputs and outputs of the phase are complete, or it may take the form of
verification at key milestones in the project, for example, the handover to system
testing, or to acceptance testing, etc. The handover checklist is usually in a form
similar to Table 3.14 above. Table 3.15 demonstrates that all of the requirements
are implemented and is the second part of the verification step.
The traceability matrix employs a one-to-many mapping where each re-
quirement is mapped to several parts of the design document which demon-
strates that the particular requirement has been implemented in the design. The
trace matrix employs a one-to-many mapping of each requirement to several test
cases, and this demonstrates that the implementation of the requirement has been
effectively tested. This is clear from the following sample trace matrix.
The columns of the sample trace matrix are the requirement section (or number), the corresponding sections in the design, the corresponding sections in the test plan, and any comments or risks:

R1.1: design sections D1.4, D1.5, D3.2; test cases T1.2, T1.7
R1.2: design sections D1.5, D5.3; test case T1.4
R1.3: design section D2.2; test case T1.3
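Because the trace matrix is simply a mapping from requirement numbers to design sections and test cases, it can be checked mechanically. The sketch below reuses the identifiers from the sample matrix above; the data structure and function name are illustrative rather than a prescribed format.

# Trace matrix keyed by requirement number, mirroring the sample above.
trace_matrix = {
    "R1.1": {"design": ["D1.4", "D1.5", "D3.2"], "tests": ["T1.2", "T1.7"]},
    "R1.2": {"design": ["D1.5", "D5.3"], "tests": ["T1.4"]},
    "R1.3": {"design": ["D2.2"], "tests": ["T1.3"]},
}

def untraced_requirements(matrix: dict) -> list:
    """Requirements with no design section or no test case traced to them."""
    return [req for req, links in matrix.items()
            if not links["design"] or not links["tests"]]

print(untraced_requirements(trace_matrix))   # [] (every requirement is covered)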
Purchasing Process
The implementation of this clause requires that the organization have an effec-
tive purchasing process in place for the procurement of high quality products.
The purchasing process will need to ensure that the purchased product conforms
to the purchase requirements. The purchasing process includes the purchasing
information to describe the requirements of the product being procured. The
organization is required to ensure that the purchasing requirements are appropri-
ate for its needs, and the purchasing information is then used to verify that the
purchased product satisfies the desired requirements.
The verification of the purchased product may take the form of an inspection
of the product to verify its suitability. The extent of control depends on the im-
portance of the supplied product to the final end product. The organization is
required to evaluate and select suppliers based on their ability to supply the
product, and criteria for selection and evaluation will need to be established.
There is an ISO 9000 requirement to maintain records of evaluations of suppli-
ers and any actions arising from the evaluation of the supplier.
Software subcontracting is common in the software industry, and the soft-
ware subcontractor may supply part or all of the software for a particular product
to the prime contractor. Software organizations which employ software sub-
contractors will need a subcontractor process in place to manage the subcon-
tractor effectively. This will usually include a statement of work which will de-
tail the requirements to be implemented by the subcontractor, the key
milestones, the schedule, the activities to be performed, the deliverables to be
produced, the roles and responsibilities, the standards to be followed, the staff
resources, and the acceptance criteria.
The measurement, analysis and improvement clause is concerned with the measurement of processes to improve the per-
formance of the quality management system. The sub-clauses include the meas-
urement of customer satisfaction, internal audits, measurement and monitoring
of processes, measuring and monitoring of products, control of non-conformity,
analysis of data, and continual improvement. The measurements provide an ob-
jective indication of the current performance of the quality management system.
Analysis will be used to determine the improvement actions to prevent the re-
occurrence of similar problems in the future. The objective is to continually im-
prove the organization.
Customer satisfaction questionnaire. Each question is rated on the scale: Unacceptable, Poor, Fair, Satisfied, Excellent, or N/A.

1. Quality of software
2. Ability to meet committed dates
3. Timeliness of projects
4. Effective testing of software
5. Expertise of staff
6. Value for money
7. Quality of support
8. Ease of installation of software
9. Ease of use of software
10. Timely resolution of problems
11. Accurate diagnostics
12. Intention to recommend
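The completed questionnaires can be summarized into a simple satisfaction index, for example by mapping each rating onto a numeric score and averaging over the answered questions. The scoring scheme below is an assumption made for illustration; the standard does not prescribe one.

SCORES = {"Unacceptable": 1, "Poor": 2, "Fair": 3, "Satisfied": 4, "Excellent": 5}

def satisfaction_index(responses: dict) -> float:
    """Average score (1 to 5) over the answered questions; N/A answers are excluded."""
    rated = [SCORES[answer] for answer in responses.values() if answer != "N/A"]
    return sum(rated) / len(rated) if rated else 0.0

survey = {
    "Quality of software": "Satisfied",
    "Ability to meet committed dates": "Fair",
    "Timely resolution of problems": "Excellent",
    "Accurate diagnostics": "N/A",
}
print(round(satisfaction_index(survey), 2))   # 4.0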
The auditor will need excellent verbal and written communication skills, and will
need to be tactful and diplomatic, as well as being thorough and forceful when
required. The auditor may need to reassure the group being audited, and will
usually explain that the purpose of the audit is to enable the organization to im-
prove. A sample audit process is outlined in Figure 3.5:
Audit Planning
Step 1. The quality manager produces an annual audit schedule
Step 2. The quality manager and group leader discuss the scope of the audit
Step 3. An auditor is assigned and a date for the audit agreed with the
group
Step 4. The auditor will interview the group and review appropriate docu-
mentation
Step 5. The auditor will advise the attendees of documentation to be
brought to the audit
Audit Meeting
Step 1. The auditor will interview project team members and review docu-
mentation
Step 2. The auditor will discuss preliminary findings at the end of the
meeting
Audit Reporting
Step 1. The auditor will publish the preliminary audit report and discuss
with group leader.
Step 2. The auditor makes agreed changes and publishes the final report.
Step 3. The audit report will be sent to all affected parties.
Step 4. The audit report will detail the actions to be addressed.
Audit Metrics
Step 1. Audit metrics provide visibility into the audit program.
Step 2. The audit metrics will be presented at the management review.
Process Performance
(Figure: a control chart of process performance by month, showing the upper control limit (UCL) and lower control limit (LCL).)
Control of Nonconformity
The control of nonconformity is concerned with the procedure for reporting de-
fects, and the responsibilities for taking action to eliminate the defect. The de-
fects are recorded either via a tool or a spreadsheet. The information recorded
about the defect typically includes the severity of the defect, the date that it oc-
curred, the originator of the defect, the technical person responsible for the cor-
rection of the defect, the type of defect, e.g., genuine defect, enhancement, or
misunderstanding, the current status of the defect, etc. There are many tools
available to record the defect data, and to show the current quality status of the
project with respect to the reported defects. Negative trends in defects should be
identified and improvement actions taken. The organization requires a procedure
for handling nonconformity.
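The defect information listed above maps naturally onto a simple record, whether it is held in a tool or a spreadsheet. The sketch below is illustrative; the field names and status values are assumptions rather than a prescribed schema.

from dataclasses import dataclass
from datetime import date
from typing import List

@dataclass
class DefectRecord:
    defect_id: int
    severity: str         # e.g. Critical, Urgent, Medium
    date_raised: date
    originator: str
    assigned_to: str      # technical person responsible for the correction
    defect_type: str      # genuine defect, enhancement, or misunderstanding
    status: str = "Open"  # e.g. Open, Fixed, Verified, Closed

def open_defects(records: List[DefectRecord]) -> List[DefectRecord]:
    """The defects not yet closed, i.e., the current nonconformities."""
    return [r for r in records if r.status != "Closed"]

log = [
    DefectRecord(1, "Critical", date(2001, 3, 5), "Test team", "J. Bloggs", "genuine defect"),
    DefectRecord(2, "Medium", date(2001, 3, 9), "Customer", "A. N. Other", "enhancement", "Closed"),
]
print(len(open_defects(log)))   # 1 (one defect is still open)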
Analysis of Data
The implementation of the analysis of data sub-clause requires that decision
making be based on the objective data obtained from the measurements, and that
the organization analyze the data to determine the appropriate actions for
improvement. The organization will analyze customer satisfaction measure-
ments, supplier data, etc.
Improvement
The objective of continual improvement is to improve the effectiveness of the
quality management system through the use of the quality policy, quality objec-
tives, audit results, customer satisfaction measurements, management review,
analysis of data, and corrective and preventive actions.
The implementation of the corrective action clause requires the organization to
use corrective action for improvement. The objective is to learn from defects or
issues to ensure that there is no reoccurrence. Corrective action is taken on cus-
tomer complaints, defect reports, audit reports, etc., and the effectiveness of the corrective actions taken is reviewed.
Rating legend: requires major attention (0-15% satisfied); requires attention to satisfy (16-50% satisfied); on track (51-85% satisfied).
The rating scheme employed here is to rate the clause as not qualified if it is
less than 15% satisfied, to rate it partially satisfied if it is 16% to 50% satisfied,
largely satisfied if it is 51% to 85% satisfied, and fully satisfied if it is 86% to
100% satisfied. The rating scheme is based on the ideas of SPICE which is dis-
cussed in chapter 5. The intention is to have an indication of the maturity, e.g., a
rating of 15% indicates that 15% of the projects have implemented the clause, or
it is approximately 15% satisfied at the organization level.
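The rating bands described above translate directly into a small scoring function. The sketch below follows those bands; the wording of the returned labels is illustrative.

def clause_rating(percent_satisfied: float) -> str:
    """Map the percentage satisfaction of a clause onto the rating bands above."""
    if percent_satisfied <= 15:
        return "Not satisfied: requires major attention"
    if percent_satisfied <= 50:
        return "Partially satisfied: requires attention"
    if percent_satisfied <= 85:
        return "Largely satisfied: on track"
    return "Fully satisfied"

print(clause_rating(15))   # Not satisfied: requires major attention
print(clause_rating(60))   # Largely satisfied: on track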
The approach suggested for rating the clauses in ISO 9004:2000 is to employ a
set of defined maturity levels; the reader is referred to the standard for more
detailed information.
(Table: ISO 9000 action plan following an internal ISO 9000 self-assessment.)
3.7 Summary
ISO 9000 is an international quality standard which enables an organization to
implement a sound quality system that is dedicated to customer satisfaction and
continual improvement. The independent certification of ISO 9000 indicates that
the company has a good quality management system in place, and that the com-
pany is committed to the core values of quality, customer satisfaction, and im-
provement.
The ISO 9000 standards may be applied to various types of organizations,
including manufacturing, software, and service organizations. ISO 9000 is re-
garded as a minimal quality standard that an organization that takes quality seri-
ously should satisfy. Many organizations require their subcontractors to be ISO
9000 certified, as this provides confidence in the subcontractor's quality system
and in the ability of the subcontractor to produce high-quality software.
The latest revision of ISO 9000 is termed ISO 9000:2000, and it is a sig-
nificant enhancement over the 1994 version of the standard. It places emphasis
on customer satisfaction and continual improvement, and includes a process
model. The older 1994 version of the standard placed emphasis on defining pro-
cedures for doing the work, whereas the new standard the emphasis is on proc-
esses. It is a simpler standard, and is effective from December 2000.
The standard includes ISO 9000, ISO 9001, and ISO 9004. The ISO
9004:2000 standard provides practical guidance on the implementation of ISO
9001 :2000 and guidelines for performance improvement of the quality system.
The implementation of ISO 9000 is discussed and this involves setting up a
steering group to manage the implementation and to provide the necessary re-
sources for implementation. A self-assessment to indicate the current ISO 9000
status is generally performed and an action plan to address the weaker areas is
defined. The implementation is managed and tracked like a normal project, and
the implementation generally involves defining processes and procedures,
maintaining records, and training. The quality group in the organization will
play a key role in the implementation of the standard and in ensuring compli-
ance with the ISO 9000:2000 requirements.
The award of ISO 9000 certification provides an indication that the company
is focused on quality and customer satisfaction.
4
The Capability Maturity Model
4.1 Introduction
The Capability Maturity Model (CMM©) is a process maturity model which
enables an organization to define and evolve its software processes. It is a
premise in software engineering that there is a close relationship between the
quality of the delivered software product and the quality and maturity of the un-
derlying software processes. Consequently, it is important for a software organi-
zation to devote attention to the software processes as well as to the product.
The CMM is a framework by which an organization may mature its software
processes. It has been influenced by the ideas of some of the leading figures in
the quality movement, such as Crosby, Deming, and Juran, etc.
Crosby's quality management maturity grid describes five evolutionary
stages in adopting quality practices [Crs:80]. Crosby's ideas were adopted,
refined, and applied to software organizations by Watt Humphrey in Managing
the Software Process [Hum:89] and in early work done at the Software Engi-
neering Institute. This led to the development of a maturity model termed the
Process Maturity Model (PMM) by the Software Engineering Institute
[Hum:87]. The PMM is a questionnaire-based approach to process maturity, and
subsequent work and refinement of the PMM led to the Capability Maturity
Model. The CMM is, in effect, the application of the process management con-
cepts of total quality management (TQM) to software. The main rationale for the
development of the CMM was the need of the Department of Defense (DOD) to
develop a mechanism to evaluate the capability of software contractors.
The CMM v1.0 was released in 1991, and following pilots it was revised and
released in 1993 as v1.1. The Software Engineering Institute has been working
on the CMMI project, and the objective of this project is to merge the software
CMM and the Systems CMM, and also to make the CMM compatible with
SPICE (ISO 15504), the emerging international standard for software process
assessment. The CMMI v1.0 was released in July 2000 and is expected to replace
the software CMM.
(Figure: the interaction of people, process, and technology.)
There are other maturity models apart from the software CMM. The People
Capability Maturity Model (P-CMM) is a maturity model that is focused on
maturing the people in the organization to become a world class workforce. It is
dedicated to improving the capability of individuals within the organization. The
CMM is a model for software process maturity and the objective is to under-
stand and mature the software process to improve the capability of the organiza-
tion. The software process is defined as "a set of activities, methods, practices
and transformations that people use to develop and maintain software and the
associated work products". The process is the glue that binds people, tools and
methods together, and as a process matures it is better defined with clearly
defined entry and exit criteria, inputs and outputs, and an explicit description of
the tasks, verification of process, and consistent implementation throughout the
organization.
The CMM model places responsibilities on management and staff in the com-
pany. The impact of the model on the Quality (SQA) group is considered in this
section. The typical responsibilities of the quality group in a CMM environment
are considered. The quality group will generally play a key role in the imple-
mentation of the model, and the responsibilities of SQA include providing
independent visibility into the software process and verifying that the project's
defined process is followed.
Initial Level
The first level is termed the initial level, and a characteristic feature of a level 1
organization is that processes are ad hoc, or poorly defined, or inconsistently
implemented. Software development is like a black art in a level 1 organization
with requirements flowing in and the product flowing out with little visibility
into the process. Often, level 1 organizations are reactionary and engaged in fire
fighting, and often there is an element of the "hero culture", i.e., where the or-
ganization depends on the heroic efforts of its staff to resolve its latest crises, for
example, a super programmer, or hero, resolves the latest crisis.
A level 1 organization may have many processes defined, but in a level 1 or-
ganization the processes are neither enforced nor consistently followed. Many
software organizations in the world are believed to be at level 1 on the CMM
scale. However, the fact that an organization is at level 1 does not mean that it is
necessarily unsuccessful in delivering high quality software. Clearly, as many
organizations are at level 1, and yet remain in business, they must be successful
to some degree. The main argument against level 1 organizations is that whereas
they may be successful in one particular project, the success of the present or
previous project is no guarantee of success in future projects. That is, a level 1
organization may lack the infrastructure to ensure repeatability of previous suc-
cesses. Further, there is a dependency on key players or "heroes" to ensure the
success of a particular project.
Repeatable Level
The second CMM level is termed the "repeatable level", and organizations at
level 2 on the CMM model have policies and procedures defined for imple-
menting software projects. The characteristic feature of level 2 organizations is
the emphasis on management processes. The software development is a series of
black boxes with defined milestones. The planning and management of projects
is based on the experience gained in managing previous projects.
The intention is that the organization is capable of repeating the success of
previous projects, and to continue to use practices that have been proven to be
successful in previous projects. Project commitments are made based on experi-
ence gained in managing previous similar projects. The project manager tracks
the schedule, functionality, quality, milestones, deliverables, and manages risk.
Subcontractors are managed by careful selection, agreeing to commitments, and
tracking the agreed upon deliverables to completion. Requirements management
and configuration management practices are in place, and there is independent
visibility into the software project provided by the quality assurance group. It is
not required that projects be managed in the same way, and different projects
may do things differently.
Defined Level
The third level is termed the "defined level" and organizations at level 3 have
defined a standard organizational software process (OSSP) for developing and
maintaining software. The organization standard software process is tailored to
individual projects. There is a group which is responsible for defining the orga-
nization's software process, and in managing improvements and changes to the
organization process. This group is typically termed the software engineering
process group (SEPG) in CMM terminology; however, the essential point is that
there is a group which is responsible for the organization software process and is
actively improving it.
There is an emphasis on training at this level, as all staff are required to
know the software process and need the appropriate expertise to perform their
roles effectively. Projects tailor the OSSP to yield the project's defined software
process, which is the software process employed for the particular project. The
tailoring means that projects do not need to do things exactly the same way, and
the tailoring will indicate how the project's defined software process is obtained
from the organization software process. The software engineering and manage-
ment activities for level 3 organizations are stable. This maturity level also ad-
dresses organization communication and building quality into the software pro-
ject via peer reviews.
There is a paradigm shift in a level 3 organization and the focus changes
from emphasis on product management to emphasis on both product manage-
ment and process management. There is a greater understanding of the software
development process in a level 3 organization, and increased visibility into the
tasks of the process. Processes are well defined with clearly defined entry and
exit criteria, inputs and outputs, tasks and verification.
Managed Level
Level 4 is termed the managed level and it is characterized by processes and
product performing within defined quantitative control limits. Quality goals are
set for both products and processes, and performance is monitored with correc-
tive action taken to ensure that the goals are met. It is required that projects
maintain measurements of quality and performance of the various processes to
satisfy this level. Control limits are set for the performance of the various proc-
esses, and the performance of the processes is monitored, with corrective action
taken should the performance of a process fall outside of its control limits. The
corrective actions triggered act to adjust process performance accordingly to
ensure that it behaves within the defined control limits. The causes of process
variation are identified and the problem in the process is corrected. New project
targets are based on quantified measurements of past performance.
The paradigm shift of a level 4 organization is the change from a focus on
the process definition to a focus on process measurement and analysis of meas-
urement. The fundamental premise is that product performance is linked with
process quality, and thus the emphasis is on improving the quality of the proc-
ess. A high-quality and mature process has low variability from its mean process
performance. The intention in quantitative process management is to narrow the
control limits via process improvements and measurement, thereby leading to
highly predictable processes with a corresponding effect on project quality. De-
cision making in a level 4 organization is based upon objective measurements.
The level 4 organization requires that an effective data gathering and meas-
urement program be set up and institutionalized within the organization. This
requires that there is already an organization standard software process defined.
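Control limits of the kind described here are typically derived from past process performance, for example as the mean plus or minus three standard deviations. The sketch below illustrates the idea with made-up defect density figures; it is not a description of any particular organization's measurement program.

from statistics import mean, pstdev

def control_limits(samples: list, sigmas: float = 3.0) -> tuple:
    """Centre line and lower/upper control limits from past process performance."""
    centre = mean(samples)
    spread = sigmas * pstdev(samples)
    return centre - spread, centre, centre + spread

# Illustrative history, e.g. defect density (defects per KLOC) on recent projects.
history = [4.1, 3.8, 4.5, 4.0, 3.9, 4.2]
lcl, centre, ucl = control_limits(history)
print(f"LCL={lcl:.2f}  mean={centre:.2f}  UCL={ucl:.2f}")

latest = 5.6
if not (lcl <= latest <= ucl):
    print("Process is performing outside its control limits; investigate the cause")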
Optimizing Level
Level 5 is termed the optimizing level and its focus is on continuous process
improvement. Defects are analyzed to determine their root cause and known
defects are eliminated from software processes. New processes and technologies
are evaluated and piloted and data gathered to measure the effectiveness of the
technologies or process. This enables an informed decision to be made as to
whether the technology or process should become part of the organization's standard process. There is
an emphasis on a periodic examination of the software process for continuous
(Figure: the CMM structure: each maturity level contains key process areas, each key process area contains key practices, and the key practices describe infrastructure and activities.)
Each KPA includes a set of goals which must be satisfied in order for the
KPA to be satisfied. The KPAs are organized by common features and these are
responsible for the implementation and institutionalization of the KPA. They
include the key practices such as the commitment to perform, the ability to per-
form, the activities performed, measurement and analysis, and verification. The
successful implementation of the key practices means that the KPA goals and
therefore the KPA are satisfied. There is a difference between the implementa-
tion and the institutionalization of the KPA. The KPA is implemented via the
activities defined in the KPA, and this indicates that the KPA is being per-
formed. The KPA is institutionalized via the policies, definition, and documen-
tation of the process, training, resources, verification, and measurement. The
institutionalization requires that the KPA becomes in effect part of the organiza-
tion's and project's way of doing business, and this requires training, monitor-
ing, and enforcement of the process.
The assessment of an organization yields a maturity rating for the organiza-
tion. The award of a rating at a particular level indicates that the organization
satisfies all of the goals for the key process areas at that maturity level, and the
goals for all key process areas at any lower levels. The CMM model is evolu-
tionary and it specifies a logical order or roadmap in which the organization may
mature its software processes, and the approach to organizational maturity is
step wise from one CMM level to the next. Each maturity level provides a firm
foundation by which the processes for the next level may be implemented, and
maturity levels are not skipped. The result is an organization with well-defined
and continuously improving software processes.
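The rating rule, that a maturity level is awarded only when every key process area at that level and at all lower levels is satisfied, can be sketched as follows. The KPA names shown are real CMM key process areas, but the satisfaction data and the function are purely illustrative.

def maturity_level(kpas_by_level: dict) -> int:
    """Rate the organization at level N only if every KPA at levels 2..N is
    satisfied; maturity levels cannot be skipped."""
    level = 1
    for candidate in sorted(kpas_by_level):
        if all(kpas_by_level[candidate].values()):
            level = candidate
        else:
            break
    return level

# Illustrative (made-up) assessment results.
kpa_satisfied = {
    2: {"Requirements Management": True, "Project Planning": True,
        "Project Tracking": True, "Subcontractor Management": True,
        "Configuration Management": True, "Software Quality Assurance": True},
    3: {"Organization Process Definition": True, "Training Program": False},
}
print(maturity_level(kpa_satisfied))   # 2, since a level 3 KPA is not yet satisfied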
Each maturity level consists of several key process areas, where each KPA has a
set of process goals to be satisfied. The architecture of a KPA is described in
Figure 4.4.
The satisfaction of the process goals for the KPA is via the key practices and
these detail the activities and infrastructure to address implementation and in-
stitutionalization. The key practices detail what is to be done, but not how it is to
be done. They are organized by common features and these indicate whether the
implementation and institutionalization of the key process area is effective.
There are five common features (see Table 4.3).
(Figure 4.4: the architecture of a key process area, showing the key process area goals, the common features, and the activities performed for implementation.)
Figure 4.5 summarizes how the common features apply to a process. An ac-
tivity may be an engineering stage, a management task, or the measurement or
verification of an activity.
The inputs to the activity are from prior steps in the process and the input
and output process execution artifacts provide evidence that the process has
been performed. The resources include trained staff, hardware and software,
tools, etc., and are evidence of ability to perform. Controls include policies, pro-
cedures, standards, reviews, measurements, audits, etc., and are evidence of the
commitment to perform.
The diagram is applied to the activities in the individual key process areas,
for example, the project planning activity in the subcontractor management key
process area. The inputs to this key process area include the statement of work
and allocated requirements; the resources include the project manager, the tools,
training, and schedule; the controls include the organization planning policy and
the planning procedures for a project and the standards for planning at the pro-
ject level; and the output includes the estimates and the plan.
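The mapping of the common features onto a single activity, as in the subcontract planning example just given, can be pictured as a simple record. The sketch below is illustrative only; the field contents restate the example from the text.

from dataclasses import dataclass
from typing import List

@dataclass
class ProcessActivity:
    name: str
    inputs: List[str]     # artifacts from prior steps in the process
    outputs: List[str]    # execution artifacts, evidence the activity was performed
    resources: List[str]  # trained staff, tools, schedule: ability to perform
    controls: List[str]   # policies, procedures, standards: commitment to perform

planning = ProcessActivity(
    name="Subcontract project planning",
    inputs=["Statement of work", "Allocated requirements"],
    outputs=["Estimates", "Project plan"],
    resources=["Project manager", "Tools", "Training", "Schedule"],
    controls=["Organization planning policy", "Planning procedures", "Planning standards"],
)
print(planning.name, "->", ", ".join(planning.outputs))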
The initial level (Fig. 4.6) is quite distinct from the four other maturity levels in
that it has no key process areas. Software development is like a black art in a
level 1 organization; requirements flow in and the product flows out, and hope-
fully, the product works.
A level 1 organization often lacks a stable environment for developing and
maintaining software. Often, there are problems with over-commitment, and the
effort to deliver what has been committed often results in a crisis, with the reali-
zation that functionality committed to cannot be delivered on time. Quality may
be compromised in an effort to deliver the commitments. Such organizations
often lack a sound basis to know what they are capable of delivering, and often
decisions are made more on intuition rather than by quantitative objective facts.
However, level 1 organizations may be successful, even though their success
and performance is often due to the heroic efforts of their staff. The success of a
particular project is no guarantee that a subsequent project will be successful.
The desire for repeatability of success brings us to the next maturity level.
The repeatable level differs from the initial level in that organization policies for
managing a software project are defined, and procedures for implementing these
policies are defined. The organization sets its expectations via policies. Level 2
organizations have developed project management controls for various projects
although project management may vary from project to project. Project com-
mitments are realistic and are made based on what was achieved with previous
similar projects, and the allocated requirements of the current project.
Project management tracks the schedule, cost, and functionality in the pro-
ject. Configuration management practices are in place and the requirements and
associated work products have a baseline, and any changes to the configuration
items are made in a controlled manner. Subcontractors are evaluated, selected,
and managed. A software assurance group is established, and this group plays a
key role in verifying that the defined process for the project is followed and in
identifying any issues that may adversely affect project or product quality and the
timeliness of delivery of the project. The level 2 organization is described as
"disciplined" as planning and tracking is stabilized and earlier success can be
repeated. The software for a project is delivered as a series of black boxes with
defined milestones (Fig. 4.7).
KPA Description
RM Requirements Management
PP Project Planning
PT Project Tracking
SSM Subcontractor Management
SCM Configuration Management
SQA Software Quality Assurance
Requirements Management
The purpose of requirements management is to ensure that the customer and the
project team share a common understanding of the requirements. The require-
ments are the foundation for the planning and management of the software pro-
ject. The requirements will be documented and reviewed, and the plans, work
products, and activities are kept up to date and consistent with the requirements.
The requirements will have a baseline, and any changes made to the baseline
will take place in a controlled manner and generally involve changes to the as-
sociated work products also to ensure consistency. Requirements management is
difficult as customers may not wish to document the requirements, and changes
to the requirements may occur during the project.
Project Planning
The customer's requirements are the basis for planning the software project. Re-
alistic plans are made based on the requirements and experience from similar
projects. The project plan documents the commitments made, the estimates of
the work to be performed, the schedule, and the resources required to implement
the project. It is the foundation for tracking the progress of the project against
estimates. The estimation is based on historical data if available and is otherwise
based on the judgment of the estimator. The plan includes risks that may affect
the quality or timeliness of the project and details how these risks will be man-
aged. The planning is based on commitments which various groups and indi-
viduals make, and changes to commitments can only take place with negotiation
with the project manager as such changes generally affect the cost, quality, or
timeliness of the project.
The project plan will usually include sub-plans such as the development
plan, quality assurance plan, configuration management plan, test plan, and
training plan. These sub-plans may be separate documents or there may be one
master plan.
Project Tracking
Project tracking provides visibility into the actual progress of the software pro-
ject. This involves reviewing the performance of the project against the project
plan and taking corrective action when performance deviates significantly from
the plan. The various milestones are tracked and the plans adjusted based on the
actual results achieved. The original and adjusted plans are kept to enable les-
sons to be learned from the project. The tracking of progress may involve inter-
nal and customer reviews of progress; and effort, cost, schedule, deliverables,
key milestones, risks, and size of deliverables are generally tracked.
Subcontractor Management
The purpose of subcontractor management is to assess and select a software
subcontractor, to agree on the commitments with the subcontractor, and to track
and review the results and performance of the subcontractor effectively. The
selection of the subcontractor involves considering the available subcontractors,
and generally the best qualified subcontractor capable of performing the work is
selected; however, other factors such as the knowledge of the particular domain,
strategic alliances, or agreements among subsidiaries of multinational corpora-
tions may influence the selection of the subcontractor. The software capability
evaluation of a subcontractor is described in [Byr:96].
The prime contractor is the organization responsible for building the system
and the prime contractor may decide to outsource part of the work to another
contractor, i.e., the subcontractor. The management of the subcontractor in-
cludes specifying the work to be performed and the standards and procedures to
be followed. This will usually include a statement of work and the requirements.
Configuration Management
The purpose of configuration management is to manage the configuration items
of the project. Configuration management involves identifying the configuration
items and systematically controlling change to maintain integrity and traceabil-
ity of the configuration throughout the lifecycle. There is a need for an infra-
structure to manage and control changes to documents and for source code
change control management. The configuration items include the project plan,
the requirements, design, code, and test plans.
A key concept in configuration management is a "baseline", and this is a
work product that has been formally reviewed and agreed upon, and serves as
the foundation for future work. It is changed by a formal change control proce-
dure which leads to a new baseline. A change to the baseline may involve
changes to several deliverables; for example, a change to the baseline of the
software requirements will generally require controlled changes to the design,
code, and test plans. Change control is formal and approval is via a change con-
trol board.
The organization is required to identify the configuration items that need to
be placed under formal change control as some work products (for example,
SQA plan) do not require a formal change control mechanism. Configuration
management also provides a history of the changes made to the baseline.
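By way of illustration, the short Python sketch below models a baseline as an approved snapshot of configuration items, with changes applied only through a change request approved by the change control board; each approved change produces a new baseline and the history of baselines is retained. The class and field names are illustrative assumptions and are not part of the CMM.

    from dataclasses import dataclass, field
    from typing import Dict, List

    @dataclass
    class Baseline:
        """An approved snapshot of the configuration items (item name -> version)."""
        label: str
        items: Dict[str, str]

    @dataclass
    class ChangeRequest:
        description: str
        changes: Dict[str, str]          # configuration item -> new version
        approved_by_ccb: bool = False    # approval by the change control board

    @dataclass
    class ConfigurationManagement:
        baselines: List[Baseline] = field(default_factory=list)

        def current(self) -> Baseline:
            return self.baselines[-1]

        def apply(self, request: ChangeRequest) -> Baseline:
            """Apply an approved change request, producing a new baseline."""
            if not request.approved_by_ccb:
                raise ValueError("changes must be approved by the change control board")
            items = {**self.current().items, **request.changes}
            new_baseline = Baseline(f"BL{len(self.baselines) + 1}", items)
            self.baselines.append(new_baseline)   # the history of baselines is retained
            return new_baseline

    cm = ConfigurationManagement([Baseline("BL1", {"requirements": "1.0", "design": "1.0"})])
    change = ChangeRequest("Clarify requirement R12",
                           {"requirements": "1.1", "design": "1.1"}, approved_by_ccb=True)
    cm.apply(change)
    print(cm.current())   # BL2, with the requirements and design at version 1.1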
The main difference between a level 2 and a level 3 organization is that the fo-
cus of the former is on projects, whereas the emphasis shifts to the organization
for a level 3 organization. The organization supports the projects by gathering
best practices, employing common processes and common measurements, and
providing tailoring guidelines and training to the projects. A level 3 organization
has a standard process for developing and maintaining software. This process is
termed the "organization software standard process" (OSSP) in CMM termi-
nology, and it is documented and communicated throughout the organization. It
includes software engineering and management processes. Projects tailor the
organization's standard process, and the tailored process is termed the "project's
defined software process".
A level 3 organization includes a group that is responsible for the organiza-
tion's software process. This is termed the "software engineering process
group" (SEPG) in CMM terminology. The group is responsible for changes and
improvements to the software process. There is a training program in place to
ensure that all new staff receive appropriate training on the software processes,
and that existing staff receive appropriate training on new processes and ade-
quate training to perform their roles effectively. The level 3 organization is de-
scribed as standard and consistent as software engineering and management
practices are stable and repeatable.
The fact that there are common processes at the organization level does not
mean that all projects do things exactly the same. The best practices in the orga-
nization are identified and used to define the common processes at the organiza-
tion level. It means that the projects may use the best practices available in the
organization, and tailor the common processes to their specific project needs to
yield the project's software process. Thus a project in a level 3 organization is
using the best that is available within the organization. The responsibilities are
divided between the organization and the project.
The level 3 organization has increased visibility into the tasks and activities
in the software process, as the engineering processes are defined.
The individual key process areas in a level 3 organization are described
as follows:
KPA Description
OPF Organization Process Focus
OPD Organization Process Definition
TP Training Program
IC Intergroup Coordination
ISM Integrated Software Management
SPE Software Product Engineering
PR Peer Reviews
Training
The objective of training is to ensure that staff receive appropriate training to
perform their roles effectively, and familiarization on the software process. The
training needs of the organization, the projects, and individuals need to be iden-
tified and training provided to address these needs. There will usually be a training plan in place to address these identified needs.
Intergroup Coordination
The purpose of intergroup coordination is to ensure that all engineering groups
participate effectively together. This requires that the groups communicate ef-
fectively. All groups should be aware of the status and plans of the other groups.
Communication is a major issue for large projects, as there are several groups
involved, and good communication is essential for the success of the project.
Commitments between groups are documented and agreed to by all groups and
tracked to completion. Any changes to the commitments need to be communi-
cated to and agreed with all the affected groups.
Peer Reviews
The purpose of peer reviews is to remove defects from software work products
as early as possible, and to understand and learn lessons from the defects to pre-
vent reoccurrence. One well-known implementation of peer reviews is the "Fa-
gan Inspection" methodology, and software inspections were discussed in
chapter 2.
A peer review involves a methodical examination of the work products by
the author's peers to identify defects. The process involves identifying which
work products require a peer review, planning and scheduling, training inspec-
tion leaders and participants, assigning inspection roles, and distributing the in-
spection material to the participants. Corrective action items are identified and
tracked to closure. Effective software inspections can find 50% to 90% of defects prior to testing, and since the cost of correcting a defect increases the later it is identified, there is a strong economic case for inspections.
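The economic argument can be made concrete with a rough calculation. The cost ratios, defect counts, and yields used below are hypothetical values chosen purely for illustration (the 60% inspection yield is simply a figure within the 50% to 90% range quoted above).

    # Hypothetical relative costs of correcting one defect, by the phase in which
    # it is found (illustrative values only, not taken from any study).
    cost_to_fix = {"inspection": 1, "testing": 10, "field": 50}

    defects_introduced = 100
    inspection_yield = 0.60   # assumed fraction of defects found by inspections
    testing_yield = 0.30      # assumed fraction found later during testing
    field_fraction = 1.0 - inspection_yield - testing_yield

    cost_with_inspections = (defects_introduced * inspection_yield * cost_to_fix["inspection"]
                             + defects_introduced * testing_yield * cost_to_fix["testing"]
                             + defects_introduced * field_fraction * cost_to_fix["field"])

    # Without inspections the same defects surface later, in testing or in the field
    # (the 75/25 split below is again an assumption).
    cost_without_inspections = (defects_introduced * 0.75 * cost_to_fix["testing"]
                                + defects_introduced * 0.25 * cost_to_fix["field"])

    print(f"Relative correction cost with inspections:    {cost_with_inspections:.0f}")
    print(f"Relative correction cost without inspections: {cost_without_inspections:.0f}")

Even with these crude assumptions, the later correction costs dominate, which is the essence of the economic case for inspections.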
The main difference between a level 3 and a level 4 organization is that the per-
formance of a level 4 organization (Fig. 4.9) is within strict quantitative limits,
i.e., the behavior is predictable. Measurements are defined and collected and
decision making is based upon quantitative data. The variation in process per-
formance is limited, as the process performance is between lower and upper
control limits. Quantitative quality goals are set for the projects to achieve, and
the project's software process is adjusted to ensure that the goals are achieved.
Consequently, software products in a level 4 organization are of a predictably
high quality. Decisions in a level 4 organization are based on the data collected,
and projects set control limits.
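One common way of deriving such control limits, sketched below in Python, is to compute the mean and standard deviation of the measured process performance and to place the limits three standard deviations either side of the mean. The three-sigma convention and the sample figures are assumptions made for illustration; the CMM itself does not prescribe a particular statistical technique.

    import statistics

    # Hypothetical monthly process performance measurements (e.g., defects per KLOC).
    performance = [4.1, 3.8, 4.4, 4.0, 3.9, 4.2, 4.3, 3.7, 4.1, 4.0, 4.2, 3.9]

    mean = statistics.mean(performance)
    sigma = statistics.pstdev(performance)

    # Control limits placed three standard deviations either side of the mean.
    upper_limit = mean + 3 * sigma
    lower_limit = mean - 3 * sigma
    print(f"Mean {mean:.2f}, control limits [{lower_limit:.2f}, {upper_limit:.2f}]")

    # Months falling outside the limits suggest the process is out of control and
    # that the project's software process should be adjusted.
    out_of_control = [(month, value) for month, value in enumerate(performance, start=1)
                      if not (lower_limit <= value <= upper_limit)]
    print("Out-of-control months:", out_of_control or "none")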
(Figure: process performance plotted by month, showing variation within the control limits.)
The main difference between a level 4 and a level 5 organization is that while a
level 4 has a quantitative understanding of the process, a level 5 organization is
focused on continuous improvement and uses the quantitative data to drive that
improvement. The organization has a mechanism to identify opportu-
nities for improvement to the software processes and to implement change in a
controlled manner. Quantitative data is gathered to measure the improvements to
the process in terms of productivity, quality, or cycle time. Innovations that ex-
ploit leading edge software engineering are identified, piloted, and deployed
where appropriate in the organization.
Level 5 organizations place a strong emphasis on analyzing defects to learn
from the defect and to take corrective action to prevent a reoccurrence. Level 5
organizations are described as continuously improving as the process capability
and performance is continuously being enhanced.
KPA Description
PCM Process Change Management
TCM Technology Change Management
DP Defect Prevention
There may be an initial dip in performance while people become familiar with the new process; however, the result of successful improvements is a process with enhanced capability.
Improvements tend to be incremental rather than revolutionary steps. The fo-
cus is on fire prevention rather than fire fighting. Process improvement is
planned and managed.
Defect Prevention
The purpose of defect prevention is to identify the causes of defects and prevent
them from recurring. The defects are identified, classified, analyzed; the root
causes identified; and corrective action taken to prevent a recurrence. The results
of the actions taken will need to be reviewed to ensure that they have been ef-
fective. Defects may be due to a defect within the software process, and this will
require a change to the software process to correct. The defect may alternately
be due to a misexecution of the software process and may need training or en-
forcement to resolve.
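The classification and root-cause analysis step can be illustrated with a small sketch that tallies defects by root cause, so that corrective action can be directed at the most frequent causes first. The defect records and category names below are invented for the example.

    from collections import Counter

    # Hypothetical defect records: (defect identifier, root cause).
    defects = [
        ("D1", "ambiguous requirement"),
        ("D2", "coding standard not followed"),
        ("D3", "ambiguous requirement"),
        ("D4", "design interface error"),
        ("D5", "ambiguous requirement"),
        ("D6", "coding standard not followed"),
    ]

    defects_by_cause = Counter(cause for _, cause in defects)

    # The most frequent root causes are the first candidates for corrective action,
    # for example a change to the software process or additional training.
    for cause, count in defects_by_cause.most_common():
        print(f"{cause}: {count}")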
Team Description
Management The management steering group (MSG) has overall
Steering Group responsibility for CMM implementation. It plans, sets
goals, and provides sufficient resources and training.
The MSG provides regular progress reports to senior
management.
The status of the CMM implementation is determined
by regular internal CMM assessments. The MSG is
responsible for coordinating an external assessment,
and in ensuring that the organization is sufficiently
prepared for an external assessment.
KPA Coordination There may be a separate KPA coordination team for
Team large organizations or this role may be provided by the
MSG. It is responsible for the day-to-day monitoring
of the progress with the implementation of the key
process areas. The KPA coordination team tracks the
progress of the KPA team action plans, and will fa-
cilitate regular internal assessments. The results will
be communicated to the MSG.
The KPA coordination team identifies any barriers that
may adversely affect the CMM program, and these
barriers are reported to the MSG, and are subsequently
resolved by the MSG.
KPA Teams There are 18 key process areas on the CMM, and dedi-
cated KPA teams perform the implementation. The
team may be responsible for more than one KPA, and
in a small organization there may be one team respon-
sible for all of the KPAs to be implemented. The
starting point is a self-assessment to identify the extent
to which the KPA is currently satisfied. Each goal and
each key practice is evaluated and an action plan
defined to address the weak areas in the KPA.
The actions may require resources, sponsorship from
management, training, and enforcement. The imple-
mentation of the KPA must satisfy the KPA goals for
the KPA to be judged satisfied.
Software Engi- The SEPG is the group with overall responsibility for
neering Process the organization's software process and for ensuring
Group that changes and improvements to the organization's
(SEPG Team) software process are carried out in a controlled manner.
The SEPG may define a mechanism to allow staff to
make suggestions to improve the software process.
The SEPG will consider each individual suggestion
and determine whether it is appropriate, and if so will
act on the suggestion. Often, the SEPG will forward
the suggestion to a particular KPA team.
Table 4.9 presents a sample internal assessment of the software quality man-
agement KPA and the associated action plan. Each goal and key practice of the
KPA is rated on a scale from 1 to 10, and the key practice is satisfied if a score
of 7 or above is achieved. Actions are required to address areas that score less
than 7.
Act | Ref | Score | Issue (<7) | Action | Owner | Due Date | Status
    | G1  | 7     |            |        |       |          |
001 | G2  | 4     | Limited measurable quality goals in place | Define quality goals to be achieved and deploy | Quality manager | 31.12.01 | Open
002 | G3  | 4     | Limited tracking of quality goals | Track quality goals at weekly project meeting | Quality manager | 31.10.01 | Closed
    | C1  | 7     |            |        |       |          |
    | AB1 | 7     |            |        |       |          |
003 | AB2 | 4     | Limited training provided in software quality management | Provide training on SQM to managers and engineers | Training manager | 31.12.01 | Open
004 | AB3 | 4     | Limited training in SQM | As in action 003 | Training manager | 31.12.01 | Open
    | AC1 | 7     |            |        |       |          |
    | AC2 | 7     |            |        |       |          |
005 | AC3 | 4     | Monitoring of quality goals | Monitor goals at project meetings | Quality manager | 31.12.01 | Open
006 | AC4 | 4     | Projects / products quality goals | As in action 005 | Quality manager | 31.12.01 | Open
    | AC5 | N/A   |            |        |       |          |
    | M1  | 7     |            |        |       |          |
007 | V1  | 5     | No management review of SQM activities | Implement a quality review | Quality manager | 31.12.01 | Open
008 | V2  | 5     |            | As in action 007 | Quality manager | 31.12.01 |
The overall score for the KPA is calculated as the average of the individual
scores achieved, and the following formula is used (goals and practices rated N/A are excluded):

KPA score = (Σ score_i) / n

where score_i is the score of the i-th rated goal or key practice and n is the number of rated items.
The score of the KPA is 5.4 (< 7), and this indicates that the KPA is not
satisfied. The parts of the KPA that have recorded a score of less than 7 are
identified and the corresponding corrective actions are defined. The symbols employed
represent the goals and key practices (Gi = Goal, Ci = Commitment, ABi = Ability,
ACi = Activity, Mi = Measurement, and Vi = Verification).
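The calculation can be expressed as a few lines of Python. The ratings are the sample data of Table 4.9, items rated N/A are excluded from the average, and the satisfaction threshold of 7 is applied both to the individual items and to the overall KPA score; the variable names are purely illustrative.

    # Sample ratings from the internal assessment of the SQM KPA (Table 4.9).
    # None represents an item rated N/A.
    ratings = {
        "G1": 7, "G2": 4, "G3": 4,
        "C1": 7,
        "AB1": 7, "AB2": 4, "AB3": 4,
        "AC1": 7, "AC2": 7, "AC3": 4, "AC4": 4, "AC5": None,
        "M1": 7,
        "V1": 5, "V2": 5,
    }

    THRESHOLD = 7   # a goal or key practice is satisfied with a score of 7 or above

    def kpa_score(ratings):
        """Average of the individual scores, excluding items rated N/A."""
        scored = [score for score in ratings.values() if score is not None]
        return sum(scored) / len(scored)

    def items_needing_action(ratings):
        """Items scoring below the threshold; each requires a corrective action."""
        return [ref for ref, score in ratings.items()
                if score is not None and score < THRESHOLD]

    score = kpa_score(ratings)
    print(f"KPA score: {score:.1f}")    # 5.4 for the sample data
    print("KPA satisfied" if score >= THRESHOLD else "KPA not satisfied")
    print("Actions required for:", ", ".join(items_needing_action(ratings)))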
Once the internal assessment of targeted KPAs is complete, the results are
reported to the MSG. Figure 4.12 provides a simple mechanism to report the
maturity profile of the organization, and it indicates the KPAs that the organiza-
tion needs to address to achieve its CMM implementation goals.
Figure 4.12 provides a rating of 17 of the 18 KPAs, and generally only orga-
nizations which have a major improvement program in place would report on
this number of KPAs. Typically, an organization would report on the level 2
KPAs only, or possibly on the level 2 and level 3 KPAs.
(Figure: the phases of the IDEAL model.)
Phase Description
Initiating This is concerned with initiating the improvement pro-
gram. It involves aligning the software improvement
goals with current and future business goals. SPI is
based on business needs and requires management
sponsorship and commitment to be successful.
Diagnosing The diagnosing phase involves an appraisal of the cur-
rent software practices in the organization, and its
findings include strengths and recommendations. The
SEI has defined an appraisal framework termed the
CMM Appraisal Framework (CAF). Both the CMM
Based Appraisal for Internal Process Improvement
(CBA IPI) [Dun:96], and the Software Capability
Evaluation (SCE) [Byr:96] are CAF compliant.
Establishing The establishing phase involves planning and setting
priorities following the CMM appraisal. This involves
defining medium- and short-term plans for improvement
to address the findings and recommendations.
This phase involves planning and preparation for the assessment and includes
identifying the KPAs that will be assessed; preparing an assessment plan; identi-
fying and training the assessment team and assigning roles; identifying, briefing,
and training the participants; administering a maturity questionnaire and exam-
ining the responses; and arranging logistics for the visit to the site, for example,
interview rooms and laptops. A successful assessment requires a competent as-
sessment team, good preparation and planning, and attention to detail.
This consists of an opening kick-off meeting which is attended by all the par-
ticipants and the senior manager. The senior manager is the sponsor of the as-
sessment and attends to demonstrate the importance of the assessment for
organization improvement. The assessment consists of interviews with project
leaders, managers, and functional area representatives (FARs), and a review of
relevant documentation.
The assessment team will need to be able to meet all relevant people and
groups to assess the KPAs within the scope of the assessment. The objective of
an interview is to gather data, to discover first hand how work is performed and
managed, and to identify strengths and opportunities. The assessment team re-
cords notes and observations during the interview and the information is con-
solidated after the interview. Documentation is reviewed to verify that the
process is actually performed as described.
The assessment team will need to identify missing information which is re-
quired to rate the particular KPA, and to request missing information from the
site coordinator. Often, the assessment team will use an assessment instrument
or tool to assist in recording KPA findings, and sometimes a manual KPA wall
chart is employed to record the KPA findings and to assist in identifying any
missing information that is required to rate the KPA.
The draft assessment findings are produced and presented by the assessment
team leader to the participants, and feedback is used to produce the final set of
findings. The final assessment findings include the CMM rating for the organi-
zation, and a rating of the individual key process areas. A KPA is satisfied if all
of the goals for the KPA are satisfied. The assessment report will detail the
KPAs which have been assessed and the strengths and weaknesses identified.
The final findings are presented to the organization and to the sponsor. The or-
ganization formulates an action plan to address the findings and recommenda-
tions from the assessment. The process improvement strategy is re-launched and
the process improvement cycle repeats.
(Figure: distribution of assessed organizations (US) across the five CMM maturity levels, in percent.)
model, as the distinction between the two disciplines [SEI:00a] is "at the level of
amplification of practices within otherwise identical process areas". The disci-
pline amplifications appear in the individual process areas in the model and are
essentially extra information specific to software engineering or to systems
engineering.
The CMMI for systems/software engineering consists of the same process
areas regardless of a staged or continuous representation. Each process area
contains goals, practices, and typical work products. The goals consist of generic
goals and specific goals, and the practices consist of generic practices or specific
practices. Specific goals apply to only one process area and address the unique
characteristics of the process area, whereas generic goals apply to all process
areas and the achievement of the generic goals indicates whether the implemen-
tation and institutionalization of the process area is effective. The practices map
onto the goals and the numbering scheme for goals and practices indicates which
goal the practice maps on to; for example, SG 1 indicates specific goal 1 in the
staged model, and SP 1.1 indicates that practice 1.1 maps onto SG 1. There are
differences in the numbering scheme employed in the staged version and the con-
tinuous version of the CMMI model; for example, SP1.1-1 indicates that the
practice is at capability level 1. The generic goals and practices are numbered in
the form GG2 and GP2.1, respectively. The numbering scheme employed for
each representation enables the corresponding practice in the staged or in the
continuous representation to be easily located.
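The numbering convention can be captured in a small helper function; the Python sketch below is only an illustration of the scheme just described and is not part of the CMMI itself.

    def goal_for_practice(practice_id: str) -> str:
        """Return the goal that a practice maps onto, derived from its identifier.

        For example, 'SP 1.1' maps onto 'SG 1' and 'GP 2.1' maps onto 'GG 2'.
        In the continuous representation a suffix gives the capability level,
        so 'SP 1.1-1' is a practice at capability level 1, still mapping to 'SG 1'.
        """
        prefix, numbers = practice_id.split()
        goal_prefix = {"SP": "SG", "GP": "GG"}[prefix]
        goal_number = numbers.split("-")[0].split(".")[0]
        return f"{goal_prefix} {goal_number}"

    print(goal_for_practice("SP 1.1"))     # SG 1
    print(goal_for_practice("SP 1.1-1"))   # SG 1
    print(goal_for_practice("GP 2.1"))     # GG 2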
The staged model representation of the CMMI model (Fig. 4.16) is closer to the
older software CMM [Pau:93]. It consists of maturity levels where each maturity
level consists of a number of process areas (known as key process areas in the
older software CMM), and specific and generic goals (known as just goals in the
older SW-CMM), specific and generic practices (known as just practices in the
older SW-CMM). The staged representation organizes processes into maturity
levels to guide process improvement, and each maturity level is the foundation
for improvements for the next level.
The specific goals and practices are listed first in the process area followed
by generic goals and generic practices. The generic practices are organized by
four common features, namely, commitment to perform, ability to perform, di-
recting implementation, and verifying implementation. These are similar but not
identical to the organization of key practices into five common features in the
older software CMM.
There are five maturity levels in the CMMI model and each maturity level
acts as a foundation for improvements in the next level. The maturity levels are
numbered one through five as in the older software CMM; however, the naming
of the levels is slightly different. The maturity levels are:
(Fig. 4.16: a maturity level consists of process areas 1 to n.)
• Initial
• Managed
• Defined
• Quantitatively managed
• Optimizing
Organization process maturity is achieved when the organization attains the
specific and generic goals for the process areas in a maturity level; organization
maturity indicates the expected results likely to be achieved by an organization
at that maturity level and is a means of predicting the most likely outcomes from
the next project. Maturity levels are a foundation to the next level, and so ma-
turity levels are rarely skipped as otherwise the foundation and stability for suc-
cessful improvements is not in place.
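The rule by which a maturity rating is derived can be sketched as follows: an organization is rated at the highest maturity level for which the goals of that level, and of all lower levels, are attained, and levels are never skipped. The dictionary of sample data is an assumption made for illustration.

    # Whether the specific and generic goals of the process areas at each maturity
    # level have been attained (illustrative sample data).
    goals_attained = {2: True, 3: True, 4: False, 5: False}

    def maturity_level(goals_attained: dict) -> int:
        """Highest maturity level whose goals, and those of all lower levels, are attained."""
        level = 1   # maturity level 1 (initial) has no associated process areas
        for candidate in sorted(goals_attained):
            if goals_attained[candidate]:
                level = candidate
            else:
                break   # maturity levels are not skipped
        return level

    print(maturity_level(goals_attained))   # 3 for the sample data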
The components of the CMMI model are grouped into three categories,
namely, required, expected, and informative components. The required category
is essential to achieving process improvement in a particular area and these in-
clude the specific and generic goals. The expected category includes specific and
generic practices that an organization will typically implement and are intended
to guide individuals or groups in implementing improvements or in performing
assessments. The informative category includes information to understand the
goals and practices and how they may be achieved and includes further elabora-
tion to assist in implementation of the process area. The implementation of the
process area will usually involve processes that carry out the specific or generic
practices of the process area.
There are no process areas associated with the initial level and the process
areas associated with maturity level 2 of the CMMI are similar to the key proc-
ess areas on level 2 of the software CMM (except that measurement and analysis
is a separate process area in the CMMI model) and include the following:
• Requirements management
• Project planning
• Project monitoring and control
• Supplier agreement management
• Measurement and analysis
• Process and product quality assurance
• Configuration management
The process areas at maturity level 3 of the CMMI are quite different from
level 3 of the software CMM and include the following:
• Requirements development
• Technical solution
• Product integration
• Verification
• Validation
• Organization process focus
• Organization process definition
• Organization training
• Integrated project management
• Risk management
• Decision analysis and resolution
The process areas at maturity level 4 include:
• Organization process performance
• Quantitative project management
The process areas at maturity level 5 include:
• Organization innovation and deployment
• Causal analysis and resolution
In the continuous representation of the CMMI, capability levels rather than maturity levels are employed, and a capability level rating applies to each individual process area.
The six capability levels are numbered from 0 to 5. Each capability level con-
sists of a set of specific and generic goals and practices, and the capability levels
provide a path for process improvement within the process area. The organiza-
tion will need to map its processes into the CMMI process areas as in SPICE
(15504).
The capability levels focus on improving the organization's ability to per-
form and improve its performance in a particular process area. Each capability
level indicates an increase in performance and capability of the process, and the
six capability levels are:
• Incomplete
• Performed
• Managed
• Defined
• Quantitatively managed
• Optimized
The process is rated at a particular capability level if it satisfies all of the
specific and generic goals of the capability level and if it satisfies all lower capa-
bility levels.
The judgment as to whether the goals of a particular capability level are
satisfied is made by examining the extent of the implementation of the specific
and generic practices of the particular capability level. The specific and generic
practices are expected model components and are there to guide the implemen-
tation of the specific and generic goals of the capability level.
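The corresponding rule for rating a single process area in the continuous representation can be sketched in the same way; the goal identifiers and the set of satisfied goals below are invented for the example.

    # Goals associated with each capability level of a process area, together with
    # the goals judged satisfied in an assessment (illustrative identifiers only).
    goals_by_level = {1: {"SG 1"}, 2: {"GG 2"}, 3: {"GG 3"}, 4: {"GG 4"}, 5: {"GG 5"}}
    satisfied = {"SG 1", "GG 2"}

    def capability_level(goals_by_level: dict, satisfied: set) -> int:
        """Highest capability level whose goals, and all lower levels' goals, are satisfied."""
        rating = 0   # capability level 0 is the incomplete process
        for level in sorted(goals_by_level):
            if goals_by_level[level] <= satisfied:
                rating = level
            else:
                break
        return rating

    print(capability_level(goals_by_level, satisfied))   # 2 for the sample data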
The capability levels are similar to SPICE (15504) and the following is a
very brief description of each level, and in depth information is available in
[SEI:00b] (Table 4.11).
The CMMI assessment yields a maturity profile of the organization per proc-
ess area. The process areas assessed are listed, and the associated capability
level of each assessed process area provided.
There are four defined categories of CMMI process areas:
• Process management processes
• Project management processes
• Engineering processes
• Support processes
Each of these categories contains various process areas and these are de-
scribed in detail in the continuous model. They are briefly summarized below:
4.10 Summary
This chapter provides an introduction to the CMM model, which is a process
maturity model that enables an organization to define and evolve its software
processes. Software engineering involves a multitude of software processes, and
the delivery of high-quality software requires a focus on the quality and maturity
of the underlying processes used to manage, develop, and test the software. The
CMM is based on the premise that there is a close relationship between the
quality of the delivered software product and the quality and maturity of the un-
derlying software processes. It is therefore important for a software organization
to devote attention to software process improvement as well as to the product, as
the quality of the product will improve as processes become mature. The CMM
is a vehicle or framework by which an organization may mature its software
processes.
The CMM describes an evolutionary path for an organization to evolve from
an immature or ad hoc software process, to a mature and disciplined software
process. It is a five-level maturity model where a move from one level to a
higher level indicates increased process maturity, and an enhanced capability of
the organization. The five levels in the model are the initial level, the repeatable
level, the defined level, the managed level, and the optimizing level. The organi-
zation moves from one maturity level to the next, with the current maturity level
providing the foundation for improvement for the next level, and therefore ma-
turity levels are not skipped. It allows the organization to follow a logical path in
improvement, and to evolve at its own pace.
The implementation of the CMM was discussed and the implementation
consists of defining a steering group for implementation and various dedicated
teams. The implementation is tracked by internal CMM assessments and the
steering group coordinates the external CMM assessment of the organization.
The IDEAL model includes the CMM in the diagnosing phase and assessments
form one part of the improvement program.
The Software Engineering Institute has developed a new version of the
CMM which merges the software and the system CMM and makes the CMM
compatible with the SPICE standard. The CMMI model (v1.0) has been pub-
lished in two representations, namely, the continuous CMMI and the staged
CMMI model.
5
The SPICE (15504) Standard
5.1 Introduction
The ISO SPICE (Software Process Improvement and Capability Determination)
is the emerging international standard for software process assessment. The need
for an international standard arose out of the multiple models for software proc-
ess assessment and improvement. These include the ISO 9000 standard, which
was developed by the International Standards Organization [ISO:00]; Bootstrap
[Kuv:93], which was developed in an EU ESPRIT research project; Trillium, a
telecom specific assessment model developed in Canada; and the CMM
[Pau:93], developed by the Software Engineering Institute in the US. Both Boot-
strap and the CMM models have been revised to be SPICE conformant.
The growth in the number of assessment approaches available was a key
motivating factor in the development of the SPICE standard, and the objective is
to provide an international standard for software process assessment. The stan-
dard is expected to allow comparability of results using different assessment
models and methods. The initial work commenced in 1990, version 1 was re-
leased in 1995, version 2 was released in 1996, and the ISO 15504 type 2 tech-
nical report [ISO:98] was published in 1998. A type 2 technical report indicates
a report which is close to acceptance by the standards body as an international
standard, but that full agreement has not yet been reached on the final definition
of the standard.
The early versions of the standard were piloted by organizations as part of
the SPICE trials, and feedback provided to the ISO working group on SPICE.
This led to revisions in the definition of SPICE and the subsequent technical
report in 1998. The changes in the design of SPICE to enable it to become an
international standard for software assessment are described in [Rou:00]. This is
likely to impact the contents of this chapter considerably, as it has been pro-
posed to remove the reference model from the standard, and the reader is ad-
vised to follow the progress of the ISO standards body.
This section explains why a company should consider SPICE to assist its im-
provement program. The justification for CMM implementation was provided in
chapter 4, and the reasons for SPICE implementation are similar and include the
following:
A SPICE assessment determines the maturity of each process within the scope of the assessment. The context of the process is taken into account, and the assessment provides a clear indication of the extent to which the processes are meeting their goals.
There are nine parts in the SPICE standard, and these include guidance mate-
rial and actual SPICE requirements. The standard includes a process model
which acts as the basis against which software process assessment can be made.
The process model includes a set of practices which are essential for good soft-
ware engineering. The process model is generic and describes what is to be done
rather than how it is to be done, and it is described in part 2 of the standard. It is
known as the "reference model", and it describes the processes and a rating
scheme to rate the capability of the software processes. Five process categories
are distinguished in the SPICE model, and these include customer-supplier
processes, engineering processes, management processes, support processes, and
organization processes. The SPICE reference model has been influenced by the
emergence of the ISO 12207 standard for software lifecycle processes (Table
5.3).
The ISO 12207 processes (Table 5.3):
Primary processes: Acquisition, Supply, Development, Operation, Maintenance
Supporting processes: Documentation, Quality assurance, Verification, Validation, Joint review, Audit, Problem resolution, Configuration management
Organization processes: Management, Infrastructure, Improvement, Training
The ISO 12207 standard has five primary processes, eight supporting proc-
esses, and four organization processes. The relationship between the SPICE
processes and ISO 12207 is clear from Table 5.4 in which the five categories of
SPICE processes are mapped to the ISO 12207 standard:
Table 5.4 mapping of SPICE process categories to ISO 12207:
ISO 12207 primary processes: Customer-supplier, Engineering
ISO 12207 supporting processes: Support
ISO 12207 organization processes: Management, Organization
(Fig. 5.2: the process triangle of people, process, and technology.)
There are three important factors which influence the quality of a delivered
software product: the expertise of the people employed by the organization, the
technology employed in the development of the software, and the processes
employed. The process triangle (Fig. 5.2) has been described previously in
chapter 4. The SPICE standard is focused on process assessment and process
improvement.
Processes may be immature or mature, and the extent of process maturity is
an indication of capability and the expected results by rigorously following the
process. Immature processes are ad hoc; they are rarely documented or defined,
and are not rigorously followed or enforced. Mature processes are consistent with the way in
which work is done, and a mature process is defined, documented, well con-
trolled via quality audits, measured, and continuously improving. The maturity
of a process indicates its potential for further growth. The main characteristic of
an institutionalized process is that it survives effectively long after the original
author has departed from the organization.
The defects in a software product are typically due to defects with a particu-
lar process used to build the software, or due to a mis-execution of a process.
The focus on fixing a defective process is a key part of a continuous improve-
ment culture, and the philosophy of process improvement is to fix the software
process and not the person. It is accepted that humans may occasionally err in
the execution of the process, but often this is an indication of insufficient train-
ing to perform the particular process.
The role of a process model is to define the design, development, and production
process for software unambiguously, and to provide guidance for all software
personnel involved. A process model specifies the tasks and activities to be per-
formed, and the sequence in which they are to be performed. It specifies the ac-
tors and roles involved in the production, and the methods, tools, standards, etc.,
used by the actors.
A process model has an associated life-cycle, for example, the "waterfall
model" or "V" life-cycle model. The waterfall model is a traditional software
development lifecycle model, and was developed by Royce [Roy:70]. One char-
acteristic feature of the waterfall model is that the requirements are determined at the start of the project, before design and development commence.
(Figure: the two dimensions of the reference model, the process dimension and the capability dimension.)
Category Description
Engineering Processes These processes are related to the engineering of
the software and include requirements, design, im-
plementation, and testing.
Customer-Supplier These are processes related to customer and sup-
Processes plier interface and include acquisition, supplier
selection, and the requirements elicitation proc-
esses.
Management Processes These processes are concerned with managing
projects and include project management, quality
management, and risk management.
Support Processes These processes are there to support other proc-
esses and may be used by other processes (includ-
ing other support processes) throughout the
lifecycle. They include quality assurance and
configuration management processes.
Organization Processes These processes manage and improve the organi-
zation and processes in the organization. They in-
clude process improvement and assessment and
human resources.
Each process includes a statement of its purpose and the outcome from exe-
cuting the process. It includes a set of base practices that are essential for good
software engineering. The capability dimension is organized by capability level,
and there are six capability levels in the standard. The measure of capability for
a particular level is based on a set of process attributes, and these measure a par-
ticular aspect of capability. There are nine process attributes and the capability
levels and process attributes are described below:
Capability
Level Description
Level 3: This capability level indicates that the process is now per-
Established formed using a standard process. It includes the process
Process definition attribute and the process resource attribute. Each
implementation of a process uses approved process defini-
tions tailored from standard documented processes. A
defined process is one in which the inputs, outputs, entry and
exit criteria, tasks, roles, and responsibilities are defined. A
standard process is typically a library of different procedures,
standards and controls with guidelines for tailoring them to
meet the requirements for different process implementations.
The process definition attribute indicates the extent to which
the process definition is based upon a standard process. The
process resource attribute indicates the extent to which the
process draws upon suitable resources to deploy the defined
process. This attribute recognizes that the effective imple-
mentation of a defined process, and institutionalization of the
standard process requires planning and training.
Level 4: This capability level indicates that the process performs con-
Predictable sistently within defined limits to achieve its process out-
Process comes. It includes the measurable attribute and the process
control attribute. Measurements of process performance and
work product quality are collected and analyzed. The per-
formance of the process is quantitatively managed and the
quality of the work products is quantitatively known.
The process measurement attribute indicates the extent to
which measurement is employed. Quantitative control limits
for the process are defined and this indicates the extent to
which the process is controlled through the collection, analy-
sis and use of process measurements. Corrective action is
taken from analysis and noting trends.
Level 5: This capability level indicates that the process dynamically
Optimizing changes to address current and predicted business goals. It
Process includes the process change attribute and the continuous im-
provement attribute. The process is continually monitored
against its quantitative process goals, and improvements
made by analyzing the results, and by optimizing the proc-
esses by piloting innovative ideas and technologies.
Changes to the process are made in a controlled manner. The
impact of the proposed changes is determined and quantita-
tive analysis of the effectiveness of the process change is
employed. The organization sets targets for process effec-
tiveness and identifies opportunities for improvement. The
process is continuously improving.
The process profile is obtained by grouping the nine process attributes to-
gether with the rating for each process attribute (Table 5.8). The process profile
for an assessed process is described in a format similar to the following where
each process attribute is rated as either not satisfied, partially satisfied, largely
satisfied, or fully satisfied.
The first letter of the rating is employed for brevity, e.g., "L" indicates largely satisfied.
The capability levels (Fig. 5.4) provide a structured path for the improve-
ment of each process. This allows an organization to focus on improvements to
those processes that are the key to its business success. The capability levels are
summarized in the following list:
Level 5: Optimizing
Level 4: Predictable
Level 3: Established
Level 2: Managed
Level 1: Performed
Level 0: Incomplete
The reference model is generic and is not directly employed in the assess-
ment. Instead, a model compatible with the reference model is employed. A
compatible model includes a non-empty subset of the processes from the refer-
ence model, and a continuous subset of the capability levels (starting from level
1). The compatible model may include processes that do not map onto the refer-
ence model, and in such a case comparability with other assessments will not be
possible.
The compatible model defines its scope against the reference model and in-
cludes a set of indicators for both dimensions in the reference model. It provides
a mapping to the reference model and provides a mechanism to convert the data
collected to ratings against the reference model. The exemplar model in part 5 of
SPICE and the CMMI model are compatible models.
This particular process category consists of processes that directly impact the
customer and includes processes for acquiring software, requirements elicitation,
supply of software, and operation and use of the software. The customer-
supplier processes include:
Process Description
Acquisition This includes acquisition preparation, supplier se-
lection, supplier monitoring, customer acceptance.
Supply This involves supplying the agreed product to the
customer.
Requirements Elicita- This involves gathering and tracking customer
tion needs throughout the life of the product.
Operation This includes operational use and customer support.
The purpose of the acquisition base practice is to obtain the product or serv-
ice that satisfies the needs expressed by the customer. This typically involves a
contract between the customer and the supplier and acceptance criteria. The ac-
quisition is monitored to ensure it meets schedule, cost and quality constraints.
The acquirer may be different from the customer, for example, when the ac-
quirer performs the purchasing for a third party that will be the final user of the
product.
The purpose of the supplier selection process is to select an organization to
implement the project. Supplier monitoring involves monitoring the technical
progress of the supplier regularly to ensure that cost, schedule and quality con-
straints are met. The purpose of the customer acceptance process is to specify
criteria to validate and accept the deliverables provided by the supplier, and to
ensure that they are of the right quality and satisfy the requirements of the
customer.
IAcquirer I...
The purpose of the supply process is to provide software to the customer that
meets the agreed requirements. It includes the preparation of a proposal in re-
sponse to a customer request and includes a contract to implement the agreed
customer requirements. The acquisition process and the supply process provide
both sides of the customer supplier relationship. The software product delivered
is installed in accordance with the agreed requirements.
The purpose of the requirements elicitation process is to gather and track the
evolving customer needs and requirements throughout the life of the software
product. A requirements baseline is established and this serves as a basis for
defining the software work products. The changes to the baseline are controlled,
and a formal mechanism is available for introducing new requirements into the
baseline.
The purpose of the operation process is to operate the software product in its
intended environment and to provide support to the customers of the software
product. The product must be tested in the operational environment prior to its
deployment. The purpose of the customer support process is to maintain an ac-
ceptable level of service to the customer and to ensure effective use of the soft-
ware. The supplier corrects any identified defects in the software product.
Process Description
System Requirements The purpose of system requirements analysis and
Analysis and Design design process is to establish the system require-
ments (functional and nonfunctional) and architec-
ture, and to identify which system requirements
should be allocated to which elements of the system.
The system requirements need to be approved by the
customer.
Software Require- The system requirements specific to software form
ments Analysis the basis of the software engineering activities. There
is consistency between software requirements and
designs, and the software requirements are commu-
nicated to affected parties and approved.
Process Description
Software Design The purpose of software design is to provide a de-
sign for the software that implements the require-
ments and can be tested against them. The structure
of software is defined into modules and interfaces
between the modules are defined. Data structures and
algorithms to be used by the software are defined.
Software Construc- The purpose of software construction is to provide
tion executable software units and to verify that they re-
flect the software design.
Software Integration The purpose of the software integration process is to
combine the software units producing integrated
software, and to verify that the integrated software
reflects the design.
Software Testing The purpose of the software testing process is to test
the integrated software product to verify that it
satisfies the software requirements. This involves
defining criteria to test against, recording test results,
defining and performing regression testing as appro-
priate, and recording results.
System Integration The purpose of system integration and testing is to
and Testing integrate the software component with the other
components, producing a complete system that will
satisfy the customer's expectations as expressed in
the system requirements. This involves defining cri-
teria to test against, and testing against the criteria,
and recording results. Regression testing is carried
out as appropriate.
System and software The purpose of system and software maintenance
maintenance process is to modify the system, i.e., the hardware,
network, software and associated documentation in
response to customer requests while preserving the
integrity of the system design. This includes the
management of modification, migration and retire-
ment of system components in response to customer
requests. It involves updating specifications, designs
and test criteria and performing testing to verify re-
quirements are satisfied.
The management process category is concerned with processes that contain ge-
neric practices that may be used in the management of projects or processes in a
software lifecycle. It includes the following processes:
Process Description
Management The purpose of the management process is to manage
performance of processes or functions within the organi-
zation to achieve their business goals. This process sup-
ports performance management and work product
management attributes for level 2 capability. It includes
effort estimation, tasks, resources, assignment of respon-
sibilities, work products to be generated, quality control
measures, and schedules.
Project The purpose of the project management process is to
Management identify, coordinate and monitor activities, tasks, and re-
sources necessary for the project. It supports the perform-
ance management attribute for level 2 capability and
defines the scope of work, estimating size and costs for
the various project tasks, defining the project plan and
tracking the plan. Corrective actions are taken to address
deviations from the plan.
Quality The purpose of the quality management process is to monitor
Management the quality of the project and to ensure that it satisfies the
customer. It involves setting quality goals for the project
and a strategy to achieve the goals. Quality control and
assurance activities will be performed, and the actual per-
formance against the quality goals is tracked, and correc-
tive action taken when quality goals are not achieved. It is
closely related to the process control attribute for level 4.
Risk The purpose of the risk management process is to identify and
Management mitigate risks throughout the lifecycle. There will be a
baseline of risks identified at project initiation. Further
reviews take place during the project to identify new risks.
A risk is characterized by the probability of its occurrence,
and its impact upon schedule, cost, or quality. Once a risk
is identified, a suitable mitigation strategy is defined, and
this may involve actions to reduce the likelihood of the
risk occurring, actions to reduce the impact if it should
occur, or finding an alternate solution that bypasses the
risk.
The support process category consists of processes that may be used by any of
the other processes, including other support processes. They include processes
for documentation, configuration management, quality assurance, audit,
verification, validation, joint review, and problem resolution. These are de-
scribed below.
Process Description
Documentation The purpose of the documentation process is to develop
and maintain documents that record information produced
by a process. Documents needed by engineers, managers,
users, and customers are developed and maintained. The
documents to be produced during the lifecycle are iden-
tified, and standards for the development of the documents
are available.
Configuration The purpose of the configuration management process is to
Management establish and maintain the integrity of the work products of
a process or project. It is employed to control and manage
all products of the lifecycle, including the tools used to
develop the software. All items generated by the process
or project will be identified, defined and baselined.
Modifications and releases of the items will be controlled,
and completeness and consistency of items will be
ensured.
Quality The purpose of the quality assurance process is to ensure
Assurance that work products and activities comply with their spe-
cified requirements and adhere to their established plans. It
provides confidence that the software conforms to re-
quirements and is related to verification, validation, joint
reviews, audits, and problem resolution processes. In
larger organizations there is usually an independent quality
assurance group. The result is software work products that
adhere to the applicable procedures and standards.
Verification The purpose of the verification process is to confirm that
each software work product of a process properly reflects
the specified requirements.
Validation The purpose of the validation process is to confirm that the
requirements specified for the intended use of the software
work product are fulfilled. This typically takes the form of
testing. Criteria for the validation of all required work
products are identified, the validation takes place, and any
problems resolved.
Joint review The purpose of the joint review process is to maintain a
common understanding between the supplier and the cus-
tomer on progress against the objectives. The reviews en-
sure that the product is being built according to the
specified requirements, and that it is technically correct
and consistent. The review results are made known to all
affected parties, and any action items are tracked to clo-
sure.
Process Description
Audit The purpose of the audit process is to provide independent
confirmation that selected products and processes comply
with the requirements, plans and contract. The auditor
gives an objective and independent account of the level of
compliance to the defined process. Any detected audit is-
sues are detailed in the audit report and actions are as-
signed to affected groups to correct, and tracked to
completion.
Problem The purpose of the problem resolution process is to ensure
Resolution that any discovered problems are promptly reported, ana-
lyzed, and removed.
The organization process category consists of processes that help to establish the
business goals of the organization. This process category is concerned with
building organization infrastructure, and the emphasis is on organization im-
provement. It includes organization alignment and improvement, process as-
sessment and improvement, human resource management, infrastructure and
reuse. These processes are described in more detail below:
Process Description
Human Resource The purpose of human resource management process is
Management to provide the organization and projects with the indi-
viduals who possess skills and knowledge to perform
their roles effectively. The necessary roles and skills for
operation will be identified, and training identified and
provided to ensure the individuals have the skill to per-
form their roles effectively.
Infrastructure The purpose of the infrastructure process is to maintain a
stable and reliable infrastructure that is needed to sup-
port the performance of any other process. The infra-
structure will need to meet functionality, safety,
performance, security, and availability requirements.
Measurement The purpose of the measurement process is to collect and
analyze data relating to processes and products. The
objective is to support the effective management of
processes and to objectively measure the quality of the
product. Measurements will be used to support decisions
and to provide an objective basis for communication.
This process is closely related to the process measure-
ment attribute at level 4 capability.
Reuse The purpose of the reuse process is to promote and fa-
cilitate reuse within the organization. Reusability yields
software elements that can be re-used in many different
applications, and supports a software development proc-
ess relying on preexisting software components.
The performance of a base practice, and the existence of work products that exhibit the expected work
product characteristics, is evidence of process implementation. Indicators are
recorded as part of the records of the assessment (Fig. 5.6) and the presence or
absence of characteristics will help the assessor in forming a judgment, and the
assessor will take the context in which the process is being used into account.
The indicators for the attributes at levels 2 to level 5 are the management
practices that are indicators of process capability, and the indicators for the at-
tribute at level 1 consist of the process performance attribute and the base prac-
tices, work products, and work product characteristics. The output of the
assessment consists of process attribute ratings for each process assessed, and
the ratings are based on the indicators included in the exemplar model or the
chosen compatible model.
The assessor takes the organization environment into account when forming
a judgment on the rating of process capability. The absence of some indicators
may not be significant, as indicators are there to guide the assessor in making a
judgment.
The compatible model is required to define its scope against the reference
model; it is required to contain a non-empty set of processes from the reference
model and a continuous set of capability levels starting from level 0. There is a
one-to-one correspondence between the processes in the exemplar model and the
reference model and also between the set of capability levels of both models.
There is a requirement to define a mechanism to convert the data collected
against the indicators in the compatible model to the attribute ratings against the
processes in the reference model.
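One possible shape for such a conversion mechanism is sketched below. The standard requires only that a defined mapping exists; the process identifiers and the simple pass-through of ratings used here are assumptions made for illustration.

    # Mapping from processes in the compatible (assessment) model to processes in
    # the reference model (illustrative identifiers only).
    process_mapping = {
        "ORG.QA": "Quality assurance",
        "ENG.TEST": "Software testing",
    }

    # Process attribute ratings collected against the compatible model.
    collected_ratings = {
        "ORG.QA": {"process performance": "F", "performance management": "L"},
        "ENG.TEST": {"process performance": "L", "performance management": "P"},
    }

    def convert(collected_ratings, process_mapping):
        """Express the collected attribute ratings against the reference model processes."""
        return {process_mapping[process]: ratings
                for process, ratings in collected_ratings.items()}

    for process, ratings in convert(collected_ratings, process_mapping).items():
        print(process, ratings)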
(Figure: the assessment activities of data collection, data validation, process rating, and reporting.)
A good assessment requires planning and preparation, and good planning en-
sures that everything will go smoothly. The assessment team is chosen with care
and includes a competent assessor who has the right skills for performing the
assessment. The competent assessor will verify that the other assessors have the
appropriate background and will ensure that the appropriate participants take
part in the assessment.
The assessment plan is produced and communicated to the assessment spon-
sor. The sponsor is responsible for approving the assessment plan and commit-
ting the resources, time and people to undertake the assessment. The sponsor is
required to ensure that the lead assessor has the required competence. A com-
petent assessor is knowledgeable regarding software engineering and SPICE and
has the desired personal attributes to perform effectively at the assessment.
These personal attributes include judgment, leadership, diplomacy, good com-
munication skills, and resistance handling ability.
The activities of planning and preparation include the following:
Data collection gathers the information required to judge whether a process is performed or not, and the extent to which the process attributes are achieved. The accuracy of the rating of the
processes is dependent on the accuracy and consistency of the data gathering, as
the data collected is used to rate the process. Tools may be employed to perform
the data collection effectively. The minimal requirements for the data collection
process are defined in part 3 of the SPICE standard. It requires the identification
of the data collection techniques to be used in the assessment, and that a map-
ping is defined between the processes in the organization unit and the processes
of the compatible model, and a mapping between the compatible model and the
reference model.
The mapping between the organization processes and the processes of the
compatible model is useful, as this will help the assessment team's understand-
ing of the software process in the organization, and thereby ensure more effec-
tive data collection.
Assessment instruments or tools are used to support the evaluation of the
existence or adequacy of the practices within the scope of the assessment. The
tools may be used to collect, record, analyze, store, retrieve, and present infor-
mation of an assessment. The assessment instruments may be paper-based or (semi-)automated computer-based tools. The main purpose of the assessment instrument is to support the assessor in performing the assessment in an objective
manner, and to enable an objective rating to be made based on the recorded
information.
The assessment team needs to ensure that the set of information collected is
sufficient, and that the observations recorded accurately reflect the practices of
the organization for each process. The validity of the data may be confirmed in
different ways, e.g., using different information sources for the same purpose,
using information gathered in previous assessments, having feedback sessions
to discuss findings, and obtaining first-hand information from the process
practitioners.
Once sufficient data has been gathered and validated, the processes within the
scope of the assessment are assigned a rating for each process attribute, and a
process profile is formed. The assessor's judgment is based on the indicators of
the compatible model. It is a SPICE requirement to record the decision-making
process to derive the rating judgments, and the decision process may be consen-
sus of the assessment team, majority voting, or an alternate mechanism.
The derivation of the ratings is non-trivial as the judgment of the adequacy
of performance and capability of a process depends on the context of the process
in the organization. Sufficient data must be available to enable the process to be
rated accurately, and in the case of an absence of information, the assessor
makes an informed judgment. The implementation of configuration management
for a small project may be totally inadequate for a larger project, and the context of the project must therefore be taken into account when the rating is assigned.
The results of the assessment are shared with the assessment sponsor and the
participants in the assessment. The records of the assessment are retained, and the confidentiality of the assessment data is preserved. The assessment report will
detail the scope of the assessment, the activities carried out, and the activities to
be carried out post-assessment. Information is preserved on the set of process
profiles for each process assessed in the scope of the assessment.
The assessment records include the date of the assessment; the names of the
assessors; the assessment input; the assessment approach which details how the
assessment was carried out; and the set of process profiles and any additional
information. This will be analyzed in detail by the organization, and used to
produce an action plan for improvement.
The gaps in capability may be due to missing practices, and the supplier will
generally be required to implement improvements to narrow the gap between the
desired capability rating and the actual capability rating. The target capability is
expressed as a process profile of the key processes. The assessed capability is
determined, and where a gap exists a proposed capability is agreed on between
the supplier and prime contractor, and the supplier is required to implement an
improvement plan to achieve the proposed capability.
The target profile is expressed in table format (Table 5.16) and consists of
the key processes and the target capability of each process.
The ratings of fully (F), largely (L), partially (P), and not satisfied (NS) have
been discussed previously. The target profile is dependent on the type of work
that the prime contractor wishes the subcontractor to perform.
An assessment of the key processes takes place to determine the capability of
the supplier, the actual ratings are then compared to the target ratings, and a pro-
posed profile will be agreed upon.
A significant gap between the target capability and the assessed capability is considered substantial and indicates a medium risk. The mechanism for cal-
culating risks is described in part 8 of the SPICE standard.
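The comparison of the target profile with the assessed profile can be automated in a simple way. The following sketch, written in Python, is illustrative only: the process names, the ratings, and the ordering of the rating scale are assumptions made for the purpose of the example rather than values prescribed by the standard.

RATING_ORDER = {"NS": 0, "P": 1, "L": 2, "F": 3}   # not satisfied < partially < largely < fully

def profile_gaps(target, assessed):
    # Return the processes whose assessed rating falls below the target rating.
    gaps = {}
    for process, wanted in target.items():
        actual = assessed.get(process, "NS")
        if RATING_ORDER[actual] < RATING_ORDER[wanted]:
            gaps[process] = (wanted, actual)
    return gaps

# Hypothetical target and assessed profiles for a supplier assessment.
target_profile = {"Requirements elicitation": "F", "Project management": "L", "Testing": "F"}
assessed_profile = {"Requirements elicitation": "L", "Project management": "L", "Testing": "P"}

for process, (wanted, actual) in profile_gaps(target_profile, assessed_profile).items():
    print(process, "- target:", wanted, "assessed:", actual)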
(Figure: the process improvement cycle of implementing improvements and monitoring performance.)
Business needs: Customer satisfaction, greater competitiveness, profitability.
Initiate process improvement: SPI requires senior management sponsorship to succeed. Business goals may include improving software quality, reducing maintenance, providing on-time delivery, staying within budget, etc.
Prepare/conduct assessment: The assessment of key processes in the organization is performed against the SPICE standard. The output is analyzed and an action plan prepared.
Analyze results/action plan: Software process improvement requires a considerable investment in time and people, and a plan of improvement actions for the changes is made.
Implement improvements: Involve staff in the improvement effort as changes affect staff; buy-in for changes is achieved by staff participating in the definition and implementation of the change.
Confirm improvements: Evaluate whether the improvement has achieved the desired target. Improvements may be piloted in restricted areas, and this requires planning and control. Measurements may be employed to verify the results, or the confirmation may be via an assessment.
The CMM recommends a specific group for the organization processes and for process improvement within the organization, namely the software engineering process group (SEPG). An organization-wide process improvement
suggestion database is set up within the organization to enable the software en-
gineering community to submit ideas on process improvement. The SEPG team
then meets regularly to discuss and implement valid improvement suggestions.
New or altered processes are introduced in a controlled manner and older proc-
esses retired in a controlled manner.
The SPICE process model may be employed to identify practices to improve
the capability of the process. Software process assessment is used to verify that
the improvements are successful. The organization category in the SPICE refer-
ence model includes processes concerned with improvement activities. The as-
sessment may be limited to several processes as the organization may wish to
focus on a small number of key processes, and an assessment of a small number
of processes requires less effort by the assessment team. The assessment will
yield strengths and weaknesses, for example, processes with very high capability
ratings, or processes with very low capability ratings, or processes with missing
or incomplete base practices. A profile of the assessed processes will be in-
cluded in the assessment report, and the process ratings and capability level rat-
ings are analyzed to derive the improvement plan.
The success of a software process improvement initiative is dependent on the
following:
• Alignment with the organization's business needs
• Senior management commitment
• A clear focus
• Clear goals
• An investment of time
• A team effort
• Continuous activity
• Quantitative measurement
Process improvement requires the support and leadership from senior and
middle management within the organization. The sponsorship of senior management is essential. The fact that the SEI has made the CMM SPICE compatible indicates the importance that is at-
tached to SPICE by one of the leading organizations for process improvement in
the world.
The SPICE standard has been applied or extended to many diverse fields; for
example, an ISO 15504-conformant method (S4S) for software process assess-
ment has been developed for the European Space Agency [Cas:00]. This includes an assessment model based on the exemplar model, four new processes, and 50 new base practices to incorporate space software practices, including safety and
dependability requirements. The ability to employ SPICE in many diverse fields
and to extend SPICE to reflect the domain to which it is being applied is a very
useful feature of the standard. The SPICE standard is also useful in that it allows
comparability of results and thus enables organizations to benchmark them-
selves and their processes against one another.
SPICE is a model-based approach and is subject to the limitations of models
in that models are simplifications of the real world and do not reflect all of the
attributes of the real world. There is the well-known quotation that "All models are wrong, but some are useful", and pragmatism is needed with models; a
model is judged useful if it assists in more effective software development.
The use of a model such as SPICE can be quite difficult initially as the ter-
minology is alien to people unfamiliar with the field. Also, its current definition
with 9 parts makes it a very large document, and this poses difficulties with its
usability. The CMM provides a clear roadmap as to how organizations may im-
prove; however, SPICE was designed to give the organization freedom in
choosing improvements to yield the greatest business benefit, but it requires that
the organization defines its own improvement roadmap.
SPICE currently exists as a technical report (type 2) and agreement on the
final definition of the standard is likely in the future. This will affect the contents
of this chapter.
5.12 Summary
SPICE (15504) is the emerging international standard for software process as-
sessment that arose owing to the multiple models for software process assess-
ment and improvement such as CMM, Trillium, Bootstrap, and ISO 9000. The
objective is to have an international standard to allow effective assessments to
take place and allow compatibility of results between different assessment
methods. A SPICE-conformant assessment provides the capability rating of key
processes within the scope of the assessment and this may be used in software
process improvement.
The SPICE standard includes the reference model, which includes a process
dimension and a capability dimension. There are five categories of SPICE proc-
esses: customer-supplier, engineering, organization, management, and support processes. The SPICE processes map onto the ISO 12207 standard for software
processes and contain key practices essential to good software engineering. The
model is applicable to software organizations and does not presume a particular
organization structure or software development or management philosophy.
The model offers a framework to develop sound software engineering proc-
esses and enables an organization to assess and prioritize improvements to its
processes. It enables an organization to understand its own capability and the
capability of third-party software suppliers, and to thereby determine if a par-
ticular project is within its own capability or within the capability of a proposed
third-party supplier, and the associated risks of awarding a contract to a third-
party supplier.
The standard is useful for internal process improvement and one of the ad-
vantages of the standard is that it allows the organization to focus on improve-
ments to selected processes related to its business goals rather than the step-wise
evolution approach of the standard CMM. The importance of the standard is
evident from the work of the CMMI project in which the continuous representa-
tion of the CMM is now SPICE compatible.
6
Metrics and Problem Solving
6.1 Introduction
Measurement is an essential part of mathematics and the physical sciences, and
has been successfully applied in recent years to the software engineering disci-
pline. The purpose of a measurement program is to establish and use quantita-
tive measurements to manage the software development environment in the
organization, to assist the organization in understanding its current software ca-
pability, and to provide an objective indication that improvements have been
successful. Measurements provide visibility into the various functional areas in
the organization, and the actual quantitative data allow trends to be seen over
time. The analysis of the trends and quantitative data allow action plans to be
derived for continuous improvement. Measurements may be employed to track
the quality, timeliness, cost, schedule, and effort of software projects. The terms
"metric" and "measurement" are used interchangeably in this book. The formal
definition of measurement given by Fenton [Fen:95] is the following:
"Measurement is the process by which numbers or symbols are assigned to
attributes or entities in the real world in such a way as to describe them ac-
cording to clearly defined rules."
Measurement has played a key role in the physical sciences and everyday
life, for example, the distance to the planets and stars, the mass of objects, the
speed of mechanical vehicles, the electric current flowing through a wire, the
rate of inflation, the unemployment rate, and many more. These measurements
provide a more precise understanding of the entity under study. Often several
measurements are used to provide a detailed understanding of the entity, for ex-
ample, the cockpit of an aeroplane contains measurements of altitude, speed,
temperature, fuel, latitude, longitude, and various devices essential to modern
navigation and flight, and clearly an airline offering to fly passengers using just
the altitude measurement would not be taken seriously.
Metrics also play a key role in problem solving. Various problem-solving techniques were discussed earlier in chapter 1, and good data is essential for
obtaining a precise objective understanding of the extent of a particular problem.
For example, an outage is measured as the elapsed time between down-time and subsequent up-time. For many organizations, e.g., telecommunications companies, it is essential to minimize outages and the impact of an outage should one
occur. Measurements provide this data, and the measurement data is used to
enable effective analysis to take place to enable the root cause of a particular
problem, e.g., an outage, to be identified, and to verify that the actions taken to
correct the problem have been effective.
Metrics may provide an internal view of the quality of the software product,
and care is needed before deducing the behavior that a product will exhibit ex-
ternally from the various internal measurements of the product. A leading meas-
ure is a software measure that usually precedes the attribute that is under
examination; for example, the arrival rate of software problems is a leading in-
dicator of the maintenance effort. Leading measures provide an indication of the
likely behavior of the product in the field and need to be examined closely. A
lagging indicator is a software measure that is likely to follow the attribute being
studied; for example, escaped customer defects is an indicator of the quality and
reliability of the software. It is important to learn from lagging indicators even if
the data can have little impact on the current project.
(Figure: GQM example. Goal: determine the effectiveness of programming language L. Metrics: the percentage of developers using L and their years of experience, the number of defects per KLOC, and the number of lines of code per month.)
The associated questions include: what percentage of programmers use L and what is their level of experience? What is the quality of software code produced with language L? And what is the productivity of language L? This leads naturally to the quality and productivity metrics below.
Goal
The focus on improvements in an organization should be closely related to the
business goals, and the first step is to identify the business goals the improve-
ment program is to address. The business goals are related to the strategic direc-
tion of the organization and particular problems that the organization is currently
facing. There is little sense in directing improvement activities to areas which do
not require improvement, or for which there is no business need to improve, or
from which there will be minimal return to the organization.
Question
These are the key questions which require answers to determine the extent to
which the goal is being satisfied, and for each business goal the set of pertinent
questions need to be identified. The questions are identified by a detailed exami-
nation of the goal and determining what information needs to be available to
determine the current status of the business goal and to help the business goal
to be achieved. Each question is then analyzed to determine the best approach
to obtain an objective answer to the question and to identify which metrics
are needed, and the data that needs to be gathered to answer the question
objectively.
Metrics
These are the objective measurements to give a quantitative answer to the par-
ticular question. The questions and measurements are thereby closely related to
the achievement of the goals, and provide an objective picture of the extent to
which the goal is currently being satisfied. The objective of measurement is to
improve the understanding of a specific process or product, and the GQM ap-
proach leads to focused measurements which are closely related to the goal,
rather than measurement for the sake of measurement. This approach helps to
ensure that the measurements will be used by the organizations to improve and
to satisfy the business goals more effectively. The successful improvement of
software development is impossible without knowing what the improvement
goals are and how they are related with the business goals.
The GQM methodology is a rigorous approach to focused software meas-
urement, and the measures may be from various viewpoints, e.g., manager
viewpoint, project team viewpoint, etc. The idea is always first to identify the
goals, and once the goals have been decided common-sense questions and
measurement are employed. There are two key approaches to software process
improvement: top-down or bottom-up improvement. Top-down approaches to
software process improvement are based on assessment methods and bench-
marking, for example, the CMM, SPICE, and ISO 9000:2000, whereas GQM is
a bottom-up approach to software process improvement, where the focus is to
target improvements related to certain specific goals. The two approaches are
often combined in practice.
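The decomposition from goals to questions to metrics can be recorded in a simple structure. The following Python sketch is illustrative; the goal, questions, and metric names are hypothetical examples and are not taken from any particular measurement program.

from dataclasses import dataclass, field

@dataclass
class Question:
    text: str
    metrics: list = field(default_factory=list)   # names of the metrics that answer the question

@dataclass
class Goal:
    statement: str
    questions: list = field(default_factory=list)

goal = Goal("Improve the reliability of released software")
goal.questions = [
    Question("What is the current post-release defect density?",
             ["customer-reported defects per KLOC"]),
    Question("How effective is testing at finding defects before release?",
             ["percentage of defects detected before release"]),
]

for question in goal.questions:
    print(question.text, "->", ", ".join(question.metrics))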
...
gets. The balanced scorecard includes financial and non-financial measures.
(Figure 6.2: The Balanced Scorecard. Vision and strategy are linked to the four perspectives of financial, customer, internal process, and learning and growth.)
The balanced scorecard is useful in selecting the key processes which the or-
ganization should focus its process improvement efforts on in order to achieve
its strategy (Fig. 6.3). Traditional improvement is based on improving quality,
reducing costs and improving productivity, whereas the balanced scorecard
takes the future needs of the organization into account and identifies the proc-
esses that the organization needs to excel at in the future to achieve its strategy.
This results in focused process improvement, and the intention is to yield the
greatest business benefit from the improvement program.
The starting point is for the organization to identify its vision and strategy
for the future. This often involves clarifying the vision and gaining consensus
among the senior management team. The vision and strategy are translated into
objectives for the organization or business unit. The next step is communication,
and the vision and strategy and objectives are communicated to all employees.
The critical objectives must be achieved for the strategy to succeed. All employ-
ees will need to determine their own local objectives to support the organization
strategy. Goals are set and rewards are linked to performance measures.
The financial and customer objectives are first identified from the strategy
and the key business processes to be improved are then identified. These are the
key processes that will lead to a breakthrough in performance for customers and
shareholders of the company. It may require new processes and this may require
re-training of employees on the new processes. The balanced scorecard is very
effective in driving organization change. The financial objectives require targets
to be set for customer, internal business process, and the learning and growth
perspective. The learning and growth perspective will examine competencies
and capabilities of employees and the level of employee satisfaction.
The organization metrics presented in the next section have been influenced
by the ideas in the balanced scorecard.
(Figure: planning and target setting with the balanced scorecard in an IT service organization.
Financial perspective: cost of provision of services, cost of hardware/software, increase revenue, reduce costs.
Customer perspective: quality service, accurate information, reliability of solution, rapid response time, timeliness of solution, 99.999% network availability, 24x7 customer support.
Internal business process perspective: requirements elicitation, software design, implementation, testing process, compliance to lifecycle, maintenance, problem resolution, project/risk management, help desk expertise, customer support and training, hardware and network provision, e-mail and monitoring, security/proprietary information, disaster prevention and recovery, legal issues (IT).
Learning and growth perspective: expertise of staff in software development, project management, and customer support; staff development; career structure; objectives for staff; employee satisfaction; leadership.)
Figure 6.4 shows the survey arrival rate per customer per month, and it indicates that there is a customer satisfaction process in place in the organization,
that the customers are surveyed, and the extent to which they are surveyed. It
does not provide any information as to whether the customers are satisfied,
whether any follow-up activity from the survey is required, or whether the fre-
quency of surveys is sufficient for the organization.
Figure 6.5 gives the overall customer satisfaction figures in 20 categories in-
cluding the overall satisfaction, ability of the company to meet committed dates
and to deliver the agreed content, the ease of use of the software, the quality of
documentation, and value for money. Examples of the kind of feedback that
Figure 6.5 provides are as follows: a score of 2.1 for reliability indicates that the
customers perceive the software to be unreliable, and a score of 2.5 for problem resolution indicates that the customers perceive the problem resolution effectiveness to be less than satisfactory.
(Figure 6.5: customer satisfaction ratings per category, including intention to recommend, delivery of agreed content, reliability, problem resolution, training, and flexibility, for Customers A, B, and C.)
The chart is interpreted as follows:
4 = Exceeds expectations
3 = Meets expectations
2 = Fair
1 = Below expectations
(Figure: the number of improvement suggestions raised, open, and closed per month.)
Figure 6.8 provides visibility into the age of the improvement suggestions,
and indicates the effectiveness of the organization in acting on the improvement
suggestions. It is a measure of the productivity of the improvement team and its
ability to do its assigned work.
Figure 6.9 gives an indication of the productivity of the improvement pro-
gram, and shows how often the team meets to discuss the improvement sugges-
tions and to act upon them. This chart is slightly naive as it just tracks the
number of improvement meetings which have taken place during the year, and
contains no information on the actual productivity of the meeting. The chart
could be considered with Figure 6.6 to get a more accurate idea of productivity
as the number of closed improvement suggestions per month.
There will usually be other charts associated with an improvement program,
for example, a metric to indicate the status of the CMM program is provided in
section 6.4.8. It includes a maturity rating per key process area, and it is shared
with management to ensure that sufficient resources are provided to remove any
roadblocks. Similarly, a measure of the current status of the ISO 9000 imple-
mentation in the organization could be derived from the number of actions
which are required to implement ISO 9000, the number implemented, and the number outstanding.
(Figure 6.8: the age profile of open improvement suggestions, in buckets of 0-3, 3-6, 6-9, 9-12, and over 12 months.)
(Figure 6.9: the number of improvement team meetings held during the year.)
These charts give visibility into the human resources and training areas of a
company. They provide visibility into the current headcount (Fig. 6.10) of the
organization per calendar month and the turnover of staff in the organization
(Table 6.2). The human resources department will typically maintain measure-
ments of the number of job openings to be filled per month, the arrival rate of
resumes per month, the average number of interviews to fill one position, the
percentage of employees that have received their annual appraisal, etc.
The key goals of the HR department are defined and the questions and met-
rics are associated with the key goals. For example, one of the key goals of the
HR department is to attract and retain the best employees, and this breaks down
into the two obvious sub-goals of attracting the best employees and retaining
them.
(Figure 6.10: organization headcount per month in 2001.)
The next chart gives visibility into the turnover of staff per calendar year and
enables the organization to benchmark itself against the industry average. It in-
dicates the effectiveness of staff retention in the organization.
(Figure: staff turnover percentage per year.)
(Figure: schedule estimation metric, planned versus actual schedule for the banking, e-commerce, and telecoms applications.)
(Figure 6.13: effort estimation metric, planned versus actual effort for the banking, e-commerce, and telecoms applications.)
The schedule metric indicates whether a project has been delivered on time or within schedule, or has been late, after the event. It is advisable that time-
liness issues be considered during the project post-mortem.
The on-time delivery of a project requires that the various milestones in the
project be carefully tracked and corrective actions taken to address slippage in
milestones during the project. Modern risk management practices help to mini-
mize the risks of schedule slippage and to achieve or improve upon the expecta-
tions of the agreed schedule.
The second metric provides visibility into the effort estimation accuracy of a
project (Fig. 6.13). Effort estimation is a key component in calculating the cost
of a project and in devising the agreed schedule, and accuracy is essential.
The effort estimation chart is similar to the schedule estimation chart, except
that the schedule metric is referring to time as recorded in elapsed calendar
months, whereas the effort estimation chart refers to the human effort estimated
to carry out the work, and the actual human effort which was employed to carry
out the work. Projects are required to have an estimation methodology to enable
them to be successful in project management, and historical data will usually be
employed.
(Figure 6.14: requirements delivered metric for the banking, e-commerce, and telecoms applications.)
The next metric is related to the commitments which are made to the cus-
tomer with respect to the content of a particular release, and indicates the effec-
tiveness of the projects in delivering the agreed requirements to the customer
(Fig. 6.14). This chart could be adapted to include enhancements or fixes prom-
ised to a customer for a particular release of a software product.
These charts give visibility into development and testing of the software prod-
uct. Testing metrics have been presented previously in chapter 2. The first chart
presented here (Fig. 6.15) provides an indication of the quality of the software
produced and the stability of the requirements. It gives the total number of de-
fects identified and the total number of change requests and provides details on
the severities of the defects or change requests. If the number of change requests
is quite high, this suggests that there is room for improvement in the require-
ments management process.
Figure 6.16 gives the status of open issues with the project, which provides an
indication of the current quality of the project and the effort required to achieve
the desired quality in the software. This chart is not used in isolation, as the
project manager will need to know the arrival rate of problems to determine the
stability of the software product.
The organization may intend to release a software product containing prob-
lems which have not yet been corrected, and it is therefore important to perform
a risk assessment of these problems to ensure that the product may operate ef-
fectively. A work-around for each problem is typically included in a set of re-
lease notes for the product.
The project manager will also need to know the age of the particular prob-
lems which have been raised, as this will indicate the effectiveness of the team
in resolving problems in a timely manner. Figure 6.17 presents the age of the
open problems in a particular project, and includes the severities.
(Figure 6.15: the number of defects, change requests, and total issues raised, broken down by severity: critical, urgent, medium, and minor.)
The chart below indicates that there is one urgent problem that has been open for over one
year, and a project manager would typically prevent this situation from arising,
as critical and urgent problems need to be addressed in a prompt and efficient
manner.
The problem arrival rate (Fig. 6.18) is a key metric for the project manager
to enable the stability of the project to be determined, and to enable an objective
decision as to whether the product should be released to be made. A sample
problem arrival chart is included here, and a preliminary analysis of the chart
indicates that the trend is positive, with the arrival rate of problems falling. The
project manager will need to do analysis to determine if there are other causes
that could contribute to the fall in the arrival rate; for example, it may be the
case that testing was completed in September, which would mean, in effect, that
no testing has been performed since then, with an inevitable fall in the number
of problems reported. The important point is not to jump to a conclusion based
on a particular chart, as the circumstances behind the chart must be fully known
and taken into account in order to draw valid inferences.
(Figure 6.17: the age of open problems by severity, in buckets of under 3 months, 3-6 months, 6-9 months, 9-12 months, and over 1 year.)
(Figure 6.18: the problem arrival rate per month.)
The next metric measures the effectiveness of the project in identifying de-
fects in the development phase (Fig. 6.19), and the effectiveness of the test
groups in detecting defects which are present in the software. The development
portion typically includes defects reported on inspection forms and in unit test-
ing; the system testing is typically performed by an independent test group and
may include performance testing; and acceptance testing is performed at the
customer site. The objective is that the number of defects reported at acceptance
test and after the product is officially released to the customer should be minimal.
These metrics provide visibility into the audit program in an organization, in-
cluding the number of audits performed (Fig. 6.20), and the status of the audit
actions (Fig. 6.21). The first chart presents visibility into the number of audits
performed in the organization and the number of audits which remain to be
done. It shows that the organization has an audit program, and provides infor-
mation on the number of audits performed in a particular time period. The chart
does not give a breakdown into the type of audits performed, e.g., supplier
audits, project audits, and audits of particular departments in the organization,
but the chart could be adapted to provide this information.
The next chart presented here gives an indication of the status of the various audit actions.
(Figure 6.20: audits in 2001, planned versus completed.)
(Figure 6.21: the status of audit actions for the Supplier A, Telecoms, Networks, and Applications projects.)
(Figure: customer support queries raised and closed per month.)
The chart shows that the arrival rate exceeds the closure rate of queries per month. This indicates an
increasing backlog which needs to be addressed.
The customer care department responds to any outages which occur and en-
sures that the outage time is kept to a minimum. Many of the top companies in
the world set ambitious goals on network availability, e.g., the "five nines" ini-
tiative on availability at Motorola in which the objective is to develop systems
which are available 99.999% of the time, i.e., approximately five minutes of
down time per year. The calculation of availability is from the formula:
Availability = MTBF / (MTBF + MTTR)
where the mean time between failure (MTBF) is the average length of time be-
tween outages.
MTBF = Sample Interval Time / # Outages
The formula for MTBF above is for a single system only, and the formula is
adjusted when there are multiple systems.
The mean time to repair (MTTR) is the average length of time that it takes to
correct the outage, i.e., the average duration of the outages that have occurred,
and it is calculated from the following formula:
MTTR = Total Outage Time / # Outages
(Figure: outages in 2001 per customer, Cust A to Cust E.)
(Figure: availability in 2001, monthly availability percentage.)
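The calculation above can be illustrated with a short Python sketch; the sample interval and outage durations used here are hypothetical.

def availability(sample_interval_hours, outage_durations_hours):
    # Compute MTBF, MTTR, and availability for a single system over one sample interval.
    num_outages = len(outage_durations_hours)
    if num_outages == 0:
        return float("inf"), 0.0, 1.0
    mtbf = sample_interval_hours / num_outages           # mean time between failures
    mttr = sum(outage_durations_hours) / num_outages     # mean time to repair
    return mtbf, mttr, mtbf / (mtbf + mttr)

# One year of operation (8760 hours) with two outages of three minutes (0.05 hours) each.
mtbf, mttr, avail = availability(8760, [0.05, 0.05])
print("MTBF =", round(mtbf, 1), "hours, MTTR =", mttr, "hours, availability =",
      format(avail, ".5%"))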
There are many other areas in the organization to which metrics may be applied.
This section includes metrics on CMM maturity in the organization and a
configuration management build metric.
(Figure 6.26: maturity rating per CMM key process area (KPA).)
(Figure 6.27: configuration management build metric, covering code freeze, build, tape cutting, and sanity test.)
The CMM maturity of the organization is provided by Figure 6.26, and its
current state of readiness for a formal CMM assessment can be quickly iden-
tified. A numeric score of 1 to 10 is applied to rate the KPA, and a score of 7 or
above indicates that the KPA is satisfied. Figure 6.27 gives visibility into the
effectiveness of configuration management in the organization.
The metrics are only as good as the underlying data, and good data gathering is essential. The typical steps in the implementation of a metrics program in an organization are described below.
The business goals are the starting point in the implementation of a metrics
program as there is no sense in measurement for the sake of measurement, and
the objective is to define and use metrics that are closely related to the business
goals. Various questions to indicate the extent to which the business goal is be-
ing achieved and to provide visibility into the actual status of the goal need to be
identified, and metrics provide an objective answer to these key questions.
The organization identifies the goals that need to be satisfied, and each de-
partment develops its specific goals to meet the organization's goals. Measure-
ment will indicate the extent to which specific goals are being met. Good data
are essential and this requires data to be recorded and gathered efficiently. First,
the organization will need to determine which data need to be gathered, and to
determine methods by which the data may be recorded. The analysis of what
information is needed to answer the questions related to the goals will assist in
determining the precise data to be recorded. A small organization may decide to
record the data manually, but usually automated or semi-automated tools will be
employed to assist in data recording and data extraction. Ultimately, unless an
efficient and usable mechanism is employed for data collection and extraction,
the metrics program is likely to fail. The data gathering is at the heart of the met-
rics program, and is described in more detail in section 6.5.1.
The roles and responsibilities of staff will need to be defined with respect to
the implementation and day-to-day operation of the metrics program. Training
will need to be provided to implement the roles effectively. Finally, a regular
management review will need to be implemented in the organization where the
metrics are presented and actions taken to ensure that the business goals are
achieved.
The metrics are only as good as the underlying data, and data gathering is there-
fore a key activity in the metrics program. The data to be recorded will be
closely related to the questions, and the intention is that the data may be used to
enable the question to be answered objectively. The following illustrates how
the data to be gathered is identified in a top-down manner. The starting point is
the business goal, and a good business goal will usually be quantitative for extra
precision.
A fault is identified in the phase in which it was created, whereas a defect is identified out of phase; for example, a fault with the requirements may be discovered in
the design phase, which is out of the phase in which it was created.
In the example table above, the effectiveness of the requirements phase is
judged by its success in identifying defects as early as possible, as the cost of
correction of a requirements defect increases the later in the cycle that it is iden-
tified. For example, the requirements PCE is calculated to be 40%, i.e., the total number of faults identified in phase divided by the total number of faults and defects identified. There were four faults identified at the inspection of the requirements, and six defects were identified out of phase: one defect at the design phase, one at the coding phase, two at the unit testing phase, and two at the system testing phase, i.e., 4/10 = 40%. Similarly, the code PCE is calculated to be 57%.
The overall PCE for the project is calculated to be the total number of faults
detected in phase in the project divided by the total number of faults and defects,
i.e., 27/52 = 52%. The table above is in effect a summary of collected data, and
the data is collected in a format similar to the following:
• Maintain inspection data of requirements, design and code inspec-
tions
• Identify phase of origin of defects
• Record the number of defects and phase of origin
There is a responsibility for staff performing inspections to record the prob-
lems identified, and to record whether it is a fault or a defect, and the phase of
origin of the defect. Staff will need to be trained and periodic enforcement per-
formed to verify institutionalization.
The above is just one example of data gathering, and in practice the organi-
zation will need to collect various data to enable it to give an objective answer to
the extent that the particular goal is being satisfied.
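The phase containment calculation described above is easily reproduced once the in-phase faults and the escaping defects have been recorded for each phase. The following Python sketch uses the requirements figures quoted in the text (four faults found in phase and six defects escaping) and the overall project totals (27 in-phase faults out of 52 faults and defects); it is an illustration rather than a prescribed tool.

def pce(in_phase_faults, escaping_defects):
    # PCE = faults found in phase / (faults found in phase + defects escaping the phase)
    return in_phase_faults / (in_phase_faults + escaping_defects)

requirements_pce = pce(4, 6)     # 4/10 = 40%
overall_pce = 27 / 52            # total in-phase faults / total faults and defects
print("Requirements PCE =", format(requirements_pce, ".0%"))
print("Overall project PCE =", format(overall_pce, ".0%"))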
This is a well-known tool in problem solving and consists of a cause and effect
diagram that is in the shape of the backbone of a fish. The objective is to identify
the various causes of some particular effect, and then these various causes are
broken down into a number of sub-causes. The various causes and sub-causes
are analyzed to determine the root cause of the particular effect, and actions to
address the root cause are then identified to prevent a reoccurrence of the mani-
fested effect. There are various categories of causes and these may include peo-
ple, methods and tools, and training.
The great advantage of the fishbone diagram is that it offers a crisp mecha-
nism to summarize the collective knowledge that a team has about a particular
problem, as it focuses the team on the causes of the problem and facilitates the
detailed exploration of the causes.
The construction of a fishbone diagram involves a clear statement of the par-
ticular effect and this is placed at the right-hand side of the diagram, the major
categories are then drawn on the backbone of the fishbone diagram, brainstorm-
ing is used to identify causes, and these are then placed in the appropriate cate-
gory. For each cause identified the various sub-causes may be identified by
asking the question why does this happen? This leads to a more detailed under-
standing of the causes for a particular effect.
Example 1
An organization wishes to determine the causes for a high number of customer-
reported defects.
There are various categories which may be employed in this example in-
cluding people, training, methods, tools, and environment.
In practice, the fishbone diagram would be more detailed than that presented
in Figure 6.28 as sub-causes would also be identified by a detailed examination
of the identified causes. The root cause(s) are determined from detailed analysis.
This example indicates that the particular organization has significant work
to do in several areas, and that a long-term change management program is re-
quired to implement the right improvements. Among the key areas to address in
this example would be to implement a software development process and a
software test process, to provide training to enable staff to do their jobs more
effectively, and to implement better management practices to motivate staff and
provide a supportive environment for software development.
(Figure 6.28: fishbone diagram for a high number of customer defects, with causes such as a blame culture and poor planning.)
The causes identified may be symptoms rather than actual root causes: for
example, high staff turnover may be the result of poor morale and a "blame
culture", rather than a cause in itself of poor quality software. The fishbone dia-
gram provides a more detailed understanding of the collection of possible causes
of the high number of customer defects; however, from the list of identified
causes and discussion and analysis, a small subset of causes are identified as the
root cause of the problem.
The organization then acts upon the identified root cause by defining an ap-
propriate software development lifecycle and test process and providing training
to all development staff on the lifecycle and test process. The management atti-
tude and organization culture will need to be corrected to enable a supportive
software development environment to be put in place.
6.6.2 Histograms
A histogram is a way of representing data in bar chart format and shows the
relative frequency of various data values or ranges of data values. It is typically
employed when there is a large number of data values, and its key use is that it
gives a very crisp picture of the spread of the data values and the centering and
variance from the mean. The histogram has an associated shape; for example, it
may be a normal distribution, a bimodal or multi-modal distribution, or be posi-
tively or negatively skewed. The variation and centering refer to the variation of
data and the relation of the center of the histogram to the customer requirements.
The variation or spread of the data is important as it indicates whether the proc-
ess is too variable or whether it is performing within the requirements. The his-
togram is termed process centered if its center coincides with the customer
requirements; otherwise the process is too high or too low. A histogram enables
predictions of future performance to be made, assuming that the future will re-
flect the past. The data is divided into a number of data buckets, where a bucket
is a particular range of data values, and the relative frequency of each bucket is
displayed in bar format.
The construction of a histogram first requires that a frequency table be con-
structed, and this requires that the range of data values be determined. The number of class intervals or buckets is then determined, and the class intervals are
defined. The class intervals are mutually disjoint and span the range of the data
values. Each data value belongs to exactly one class interval and the frequency
of each class interval is determined.
The histogram is a well-known statistical tool and its construction is made
more concrete with the following example:
Example 2
An organization wishes to characterize the behavior of the process for the
resolution of customer queries in order to achieve its customer satisfac-
tion goal.
Goal
Resolve all customer queries within 24 hours.
Question
How effective is the current customer query resolution process, and what
action is required (if any) to achieve this goal?
The histogram below includes a data table where the data classes are of size
6 hours. In standard histograms the data classes are of the same size, although
there are non-standard histograms that use data classes of unequal size. The
sample mean is 19 hours in this example.
This histogram (Fig. 6.29) is based on query resolution data from 35 sam-
ples. The organization goal of customer resolution within 24 hours is not met for
all queries, and the goal is satisfied in 71% (25/35) of cases for this particular sample. Further analysis is needed to determine the reasons why 29% of the queries are outside the target 24-hour time period. It may prove to be impossible to meet the
goal for all queries, and the organization may need to refine the goal to state that
instead all critical and urgent queries will be resolved within 24 hours, or alter-
nately, more resources may be required to provide the desired response.
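A frequency table of this kind is straightforward to compute. The following Python sketch groups hypothetical query resolution times into 6-hour classes and reports the percentage of queries resolved within the 24-hour goal; the sample values are invented for illustration.

def frequency_table(resolution_hours, class_width=6, num_classes=8):
    # Count how many samples fall into each class interval [k*width, (k+1)*width).
    counts = [0] * num_classes
    for value in resolution_hours:
        bucket = min(int(value // class_width), num_classes - 1)
        counts[bucket] += 1
    return counts

samples = [4, 7, 9, 11, 14, 15, 17, 18, 19, 20, 21, 22, 23, 25, 26, 28, 30, 36, 40, 45]
for k, count in enumerate(frequency_table(samples)):
    print(k * 6, "-", (k + 1) * 6, "hours:", count)

within_goal = sum(1 for value in samples if value <= 24) / len(samples)
print("Resolved within 24 hours:", format(within_goal, ".0%"))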
The objective of a pareto chart is to help the problem-solving team focus on the
problems of greatest impact, as often 20% of the causes are responsible for 80%
of the problems. The pareto chart helps to identify the key problems, and the
focus may then be placed on these. The problems are classified into various
cause categories, and the frequency of each category of problem is then deter-
mined. The chart is then displayed in a descending sequence of frequency, with
the most significant cause detailed first, and the least significant cause detailed
last.
The pareto chart is a key problem-solving tool, and a properly constructed
pareto chart will allow the organization to focus its improvement efforts to re-
solve the key causes of problems, and to verify the resolution of key causes of
problems. The progress and success of the improvement efforts can be deter-
mined at a later stage by analyzing the new problems and creating a new pareto
chart. If the improvement efforts have been successful, then the profile of the
pareto chart will indicate that the key cause categories have been significantly
improved.
The construction of the pareto chart first requires the organization to decide
on the problem to be investigated, then to identify the causes of the problem via
brainstorming, analyze either historical or real data, compute the frequency of
each cause, and finally display the frequency in descending order of each cause
category.
Example 3
An organization wishes to minimize the occurrences of outages and
wishes to understand the various causes of outages, and the relative im-
portance of each cause.
The pareto chart (Fig. 6.30) below includes the data obtained following an
analysis of historical data of outages and classifies the outages into various
causes. There are six cause categories defined: hardware, software, operator er-
ror, power failure, an act of nature, and unknown cause of outage.
The pareto chart indicates that the three key .causes of outages are hardware,
software, and operator error. Further analysis is needed to identify the actions
that are needed to address these three key causes.
(Figure 6.30: pareto chart of outage causes: hardware, software, operator error, power failure, act of nature, and unknown.)
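The ordering of the cause categories can be produced directly from the frequency data. The following Python sketch sorts hypothetical outage counts in descending order and reports the cumulative percentage, which is the essence of the pareto analysis.

outage_causes = {
    "Hardware": 18, "Software": 12, "Operator error": 8,
    "Power failure": 3, "Act of nature": 2, "Unknown": 1,
}

total = sum(outage_causes.values())
cumulative = 0
for cause, count in sorted(outage_causes.items(), key=lambda item: item[1], reverse=True):
    cumulative += count
    print(cause, count, "cumulative", format(cumulative / total, ".0%"))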
The hardware category may indicate that the reliability of the hardware of
the system is problematic, with some parts failing. The organization would need
to investigate solutions for existing systems and new systems to be deployed.
This may include discussions with other hardware vendors to alter the hardware
specification for new systems to address availability and reliability concerns, or
the replacement of existing hardware in existing systems to correct reliability
issues.
The analysis of software faults may be due to the release of poor-quality
software or to usability issues in the software, and this requires further investi-
gation. Finally, operator issues may be due to lack of knowledge or training of
operators.
A trend graph monitors the performance of a variable over time, allows trends in
performance to be identified, and enables predictions of future trends to be
made. The first step to the construction of a trend graph is to decide on the proc-
ess whose performance is to be measured, and then to gather the data points and
to plot the data.
Example 4
An organization wants to improve its estimation accuracy, has deployed
an enhanced estimation process in the organization, and wishes to deter-
mine if estimation is actually improving.
The estimation accuracy is computed from the quotient of the actual effort
by the estimated effort, and an estimation accuracy of 1.0 indicates that the es-
timated effort is equal to the actual effort. The trend chart (Fig. 6.31) indicates that initially estimation accuracy is very poor, but then there is a sudden
improvement coinciding with the successful deployment of the new estimation
process, and performance in estimation has improved. The graph indicates that
at the end of the year estimated effort is very close to the actual effort.
It is important, of course, to analyze the trend chart in detail; for example,
the estimation accuracy for August (1.5 in the chart) would need to be investi-
gated to determine the reasons why it occurred. It could potentially indicate that
a project is using the old estimation process, or that the project manager received
no training on the new process, etc. A trend graph is useful for noting positive or
negative trends in performance; and negative trends are analyzed and actions are
identified to correct performance.
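The data behind such a trend graph is simply the monthly quotient of actual effort by estimated effort. The following Python sketch uses hypothetical monthly figures; a value of 1.0 means that the estimate matched the actual effort.

monthly_effort = [
    ("Jan", 40, 66), ("Feb", 35, 52), ("Mar", 50, 68),
    ("Apr", 45, 50), ("May", 60, 63), ("Jun", 55, 56),
]   # (month, estimated person-days, actual person-days)

for month, estimated, actual in monthly_effort:
    accuracy = actual / estimated
    print(month, "estimation accuracy =", round(accuracy, 2))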
The scatter diagram is used to measure the relationship between variables and to
determine whether there is a relationship or correlation between the variables.
The results may be a positive correlation, negative correlation, or no correlation
between the data. Correlation has a precise statistical definition and provides a
mathematical understanding of the extent to which two variables are related or
unrelated.
The scatter graph provides a visual means to test the extent to which two particular variables are related, and may be useful to determine if there is a connection between identified causes in a fishbone diagram and the effect.
The construction of a scatter diagram requires the collection of paired sam-
ples of data, and the drawing of one variable as the x-axis, and the other as the
y-axis. The data is then plotted and interpreted.
Example 5
An organization wishes to determine if there is a relationship between the
inspection rate and the error density of defects identified.
The scatter graph (Fig. 6.32) provides evidence for the hypothesis that there
is a relationship between two variables, namely, lines of code inspected and the
error density recorded (per KLOC). The graph suggests that the error density of
defects identified during inspections is low if the speed of inspection is too fast,
and the error density is high if the speed of inspection is below 300 lines of code
per hour. A line can be drawn through the data which indicates a linear relationship.
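The strength of the relationship suggested by a scatter diagram can be quantified with the correlation coefficient. The following Python sketch computes the Pearson correlation between hypothetical paired samples of inspection rate and error density; the correlation function used here is available in the standard library from Python 3.10 onwards.

from statistics import correlation   # Pearson correlation, Python 3.10+

inspection_rate = [150, 200, 250, 300, 400, 500, 600, 800]   # lines of code inspected per hour
error_density = [22, 20, 18, 15, 10, 7, 5, 3]                # defects identified per KLOC

r = correlation(inspection_rate, error_density)
print("Pearson correlation =", round(r, 2))   # strongly negative for this decreasing data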
The principles of statistical process control have been described in earlier chap-
ters and form an important part of the achievement of level 4 of the CMM, i.e.,
having a process with predictable performance. Measurement plays a key role in
achieving a process which is under statistical control, and the following example
(Fig. 6.33) on achieving a breakthrough in performance of the estimation process is adapted from [Kee:00].
The initial upper and lower control limits for estimation accuracy are
set at ±50%, and the performance of the process is within the defined upper and lower control limits. However, the organization will wish to improve its estimation accuracy, and this leads to the organization's revising the upper and lower control limits to ±25%. The organization will need to analyze the slippage data to determine the reasons for the wide variance in the estimation, and part of the solution will be the use of enhanced estimation guidelines in the organization. In this chart, the organization succeeds in performing within the revised control limit of ±25%, and the limit is revised again to ±15%. This requires further
analysis to determine the causes for slippage and further improvement actions
are needed to ensure that the organization performs within the ±15% control
limit.
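Checking the estimation data against the control limits is a simple calculation. The following Python sketch flags hypothetical slippage percentages that fall outside a symmetric control limit, and shows how the check tightens as the limit is revised from ±50% to ±25% and then to ±15%.

def out_of_control(slippages, limit):
    # Return the slippage values that fall outside the symmetric control limit (in %).
    return [value for value in slippages if abs(value) > limit]

slippage = [30, -20, 45, 10, -5, 22, -18, 12, 8, -3]   # % deviation of actual from estimate

for limit in (50, 25, 15):
    outliers = out_of_control(slippage, limit)
    print("Control limit +/-", limit, "%:", len(outliers), "points out of control", outliers)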
6.7 Summary
Measurement is an essential part of mathematics and the physical sciences, and
has been successfully applied in recent years to the software engineering disci-
pline. The purpose of a measurement program is to establish and use quantita-
tive measurements to manage the software development environment in the
organization, to assist the organization in understanding its current software ca-
pability, and to provide an objective indication that improvements have been
successful. This chapter included comprehensive sample metrics to provide visi-
bility into the various functional areas in the organization, including customer
satisfaction metrics, process improvement metrics, project management metrics,
HR metrics, development and quality metrics, and customer care metrics. The
actual quantitative data allow trends to be seen over time. The analysis of the
trends and quantitative data allow action plans to be derived for continuous im-
provement. Measurements may be employed to track the quality, timeliness,
cost, schedule, and effort of software projects.
The balanced scorecard assists the organization in selecting appropriate
measurements to indicate the success or failure of the organization's strategy.
Each of the four scorecard perspectives includes objectives to be accomplished
for the strategy to succeed, and measurements to indicate the extent to which the
objectives are being met.
The Goal, Question, Metric (GQM) paradigm is a rigorous, goal-oriented
approach to measurement in which goals, questions, and measurements are
closely integrated. The business goals are first identified, and then questions that
relate to the achievement of the goal are identified, and for each question a met-
ric that gives an objective answer to the particular question is identified. The
statement of the goal is very precise and the goal is related to individuals or
groups.
Metrics play a key role in problem solving, and various problem solving
techniques have been discussed in this chapter. The measurement data are used
to assist the analysis and determine the root cause of a particular problem, and to
verify that the actions taken to correct the problem have been effective.
Metrics may provide an internal view of the quality of the software product,
but care is needed before deducing the behavior that a product will exhibit ex-
ternally from the various internal measurements of the product.
7
Formal Methods and Design
7.1 Introduction
This chapter discusses more advanced topics in the software quality field, in-
cluding software configuration management, the unified modeling language,
software usability, and formal methods.
Software configuration management is concerned with identifying the
configuration of a system and controlling changes to the configuration, and
maintaining integrity and traceability. The configuration items are generally documents in the early part of the development of the system, whereas the focus is on source code control management and software release management in the later parts of the development.
The consequences of poor configuration management are illustrated in the
following, and it is clear that configuration management is a key part of the
quality system.
Configuration items include the project plan, the requirements, design, code, and
test plans.
A key concept is a "baseline": this is a work product that has been formally
reviewed and agreed upon and serves as the foundation for future work. It is
changed by a formal change control procedure which leads to a new baseline.
The organization is required to identify the configuration items that need to be
placed under formal change control. Configuration management also maintains a
history of the changes made to the baseline.
The unified modeling language is a visual modeling language for software
systems. The advantage of a visual modeling language is that it facilitates the
understanding of the architecture of the system by visual means and assists in
the management of the complexity of large systems. It was developed by Jim
Rumbaugh, Grady Booch, and Ivar Jacobson [Rum:99] at Rational as a notation
for modeling object oriented systems and was published as a standard by the
Object Management Group (OMG) in 1997.
UML allows the same information to be presented in many different ways, and there are nine main diagrams in the standard; these provide different viewpoints of the system. The practitioner will need to make a judgment as to which parts of the notation are suitable to employ.
Software usability is an important aspect of the quality of the software and
has been discussed briefly in chapter 1. It is one of the characteristics of quality
defined in the ISO 9126 standard for information technology [ISO:91] and is the
user's perception of the ease of use of the software. It is essential to have guide-
lines for building a usable software product and to assess the usability of the
software product. There have been several standards developed for software
usability, and these include the product-oriented standards such as parts of ISO
9241 [ISO:98a] and the process-oriented standards such as ISO 13407 [ISO:99]
and parts of ISO 9241 (Table 7.3).
The ISO 13407 standard provides guidance on the usability design of com-
puter-based systems and is concerned with human-centered design processes.
The ISO 9241 standard is large and consists of 17 parts. These include guidance
on usability and requirements for workstation layout, keyboards, environment,
presentation of information, dialogue principles, and ergonomic requirements.
Usability, like quality, needs to be built into the software product, and there-
fore usability needs to be considered from the earliest stages in the software de-
velopment lifecycle. Usability requires an analysis of the user population and the
tasks that they perform in the targeted environment. This will help produce a
more precise user requirements specification. The objective is that the system should enable its users to perform their tasks more effectively and efficiently.
It is important to understand and specify the context of use and to specify
user and organizational requirements. There will often be a variety of different
viewpoints from different individuals and roles in the organization, and this
leads to multiple design solutions and an evaluation of designs against the re-
quirements.
An iterative software development lifecycle is generally employed
and there is active user involvement during the development of the software and
especially at the early stages. Prototyping is generally employed to give users a
flavor of the proposed system, thereby allowing early user feedback to be re-
ceived. User acceptance testing provides confidence that the software is correct
and matches the usability, reliability, and quality expectations of users.
Usability may be assessed via structured questionnaires and one well-known
questionnaire approach is the SUMI methodology. This was developed by Jurek
Kirakowski at the Human Factors Research Group (HFRG) as part of a European-funded research project [Kir:00]. The group has also developed the
WAMMI web-based tool for assessing usability. The SUMI questionnaire may
be completed on early prototypes of the software or on the completed software.
A small sample size of 10 to 12 users is recommended to obtain precise and
valid results.
Formal methods is a mathematical approach to the correctness of software.
The objective is to specify the program in a mathematical language and to dem-
onstrate that certain properties are satisfied by the specification using mathe-
matical proof. The ultimate objective is to provide confidence that the
implementation satisfies the requirements.
The mathematical techniques may also be applied to requirements validation,
in effect to "debug the requirements". This involves exploring the mathematical
consequences of the stated requirements, and ensuring that the implications are
considered and known and that the requirements are explicitly stated. Current software engineering is subject to a number of well-known limitations.
It would be misleading to state that the use of formal methods will eliminate
these problems. However, the mathematical techniques used in formal methods offer a precision and rigour which is not matched by conventional approaches such as peer reviews and testing. The use of formal methods cannot
provide an absolute guarantee of correctness, as they are applied by humans who
are prone to error, although tool support should reduce the incidence of errors.
Consequently, formal methods will never eliminate the need for testing or for
the various test departments in the organization.
The successful deployment of formal methods should lead to a shorter test
cycle, as the quality of the software entering the testing phase should be higher
as it has been subject to some degree of formal verification. The real benefits of
formal methods are the increased confidence in the correctness of the software.
Real-time testing may not be feasible, or may be subject to limitations, in several domains, and in such cases there is a need for an extra quality assurance step to provide additional confidence in the reliability and quality of the software; and
formal methods is one way of achieving this. The safety-critical domain is one
domain in which the use of formal methods is quite useful.
There are many examples of the applications of formal methods, including
the collection from Mike Hinchey and Jonathan Bowen in [HB:95].
Area Description

Configuration Identification: This requires planning in order to identify the configuration items, to define naming conventions for documents and a version numbering system, and to carry out baseline/release planning. The version and status of each configuration item should be known.

Configuration Control: This involves implementing effective controls for configuration management. It involves a controlled area/library where documents and source code are placed, and access to the library/area is controlled. It includes a mechanism for releasing documents or code, and changes to work products are controlled and authorized by a change control board or similar mechanism. Problems or defects are reported by the test groups and the customer, and following analysis any changes to be implemented are subject to change control. The version of the work product is known, and the constituents of a particular release are known and controlled. Previous versions of releases can be recovered, as the source code constituents are fully known.

Status Accounting: This involves data collection and report generation. These reports include the software baseline status, the summary of changes to the software baseline, problem report summaries, and change request summaries.

Configuration Auditing: This includes audits of the baselines to verify the integrity of the baseline, audits of the configuration management system itself, and verification that standards and procedures are followed. The audit reports are distributed to affected groups and any actions are tracked to completion.
Each of the R.1.0.0.k baselines is termed a release build, and these consist of functionality and fixes to problems. The content of each release build is known; i.e., the project team and manager will target specific functionality and fixes for each build, and the actual contents of the particular release baseline are documented. Each release build can be replicated, as the version of source code used to create the build is known and the source code is under source code control management.
There are various tools employed for software configuration management activities, and these include well-known tools such as ClearCase, PVCS, Continuus, and SourceSafe for source code control management, and the PVCS Tracker tool for tracking defects and change requests. A defect tracking tool will list all of the
open defects against the software and a defect may require several change re-
quests to correct the software, as a problem may affect different parts of the
software product and a change request may be necessary for each part. The tool
will generally link the change requests to the problem report. The current status
of the problem report can be determined, and the targeted release build for the
problem identified.
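The linkage between problem reports and change requests can be pictured with a small data model; this is a sketch only, and the field names are illustrative rather than those of any particular tool:

from dataclasses import dataclass, field
from typing import List

@dataclass
class ChangeRequest:
    cr_id: str
    affected_part: str                 # the part of the product to be changed
    status: str = "open"               # open / implemented / verified

@dataclass
class ProblemReport:
    pr_id: str
    description: str
    target_build: str                  # release build targeted for the fix
    change_requests: List[ChangeRequest] = field(default_factory=list)

    def status(self) -> str:
        # The problem is closed only when every linked change request is verified.
        if self.change_requests and all(cr.status == "verified" for cr in self.change_requests):
            return "closed"
        return "open"

pr = ProblemReport("PR-102", "Crash on login", "R.1.0.0.3")
pr.change_requests.append(ChangeRequest("CR-201", "user interface"))
pr.change_requests.append(ChangeRequest("CR-202", "server"))
print(pr.pr_id, pr.status())           # remains open until both change requests are verified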
The CMM includes a level 2 key process area on software configuration
management, and this provides guidance on the activities to be performed to implement configuration management effectively. It includes four goals for the SCM key process area:
Goal Description
Goal 1: Software configuration management activities are planned.
Goal 2: Selected software work products are identified, controlled, and available.
Goal 3: Changes to identified software work products are controlled.
Goal 4: Affected groups and individuals are informed of the status and content of software baselines.
Next, class and object diagrams are considered and the object diagram is re-
lated to the class diagram in that the object is an instance of the class. There will
generally be several objects associated with the class. The class diagram (Table 7.8) describes the data structure, and the valid operations on the data structure are part of the definition. The concepts of class and object are taken from object-oriented design.
In the ATM example the two key classes are customers and accounts, and
this includes the data structure for customers and accounts and also the opera-
tions on customers and accounts. These include operations to add or remove a
customer and operations to debit or credit an account or to transfer from one
account to another. There are several instances of customers and these are the
actual customers of the bank.
Customer                         Account
Name: String                     Balance: Real
Address: String                  Type: String
Add()                            Debit()
Remove()                         Credit()
                                 CheckBal()
                                 Transfer()
The objects of the class are the actual customers and their corresponding ac-
counts. Each customer can have several accounts. The names and addresses of
the customers are detailed as well as the corresponding balance in the cus-
tomer's accounts. There is one instance of the customer class below and two
instances of the account class (Fig. 7.2).
Customer (N.Ford): Name = "N.Ford", Address = "Cork"
(Fig. 7.2 also shows the two corresponding account objects.)
The next UML diagram (Fig. 7.3) considered is the sequence diagram; sequence diagrams show the interaction between objects/classes in the system for each use case. The example, adapted from [CSE:00], considers the sequence of interactions between objects for the "check balance" use case. This sequence diagram is specific to the case of a valid inquiry, and there are generally sequence diagrams to handle exception cases also.
The behavior of the "check balance" operation is evident from the diagram.
The customer inserts the card into the ATM machine and the PIN number is
requested by the ATM machine. The customer then enters the number and the
ATM machine contacts the bank for verification of the number. The bank
confirms the validity of the number and the customer then selects the balance
enquiry. The ATM contacts the bank to request the balance of the particular ac-
count and the bank sends the details to the ATM machine. The balance is dis-
played on the screen of the ATM machine. The customer then withdraws the
card.
The actual sequence of interactions is evident from the sequence diagram.
The next UML diagram concerns activity diagrams (Fig. 7.4) and these are
similar to flow charts. They are used to show the sequence of activities in a use
case and include the specification of decision branches and parallel activities.
(The diagram shows the steps: enter PIN, verify PIN, PIN OK, balance enquiry, get account, get balance, return balance, display balance.)
The final UML diagram (Fig. 7.5) that will be discussed here is state dia-
grams or state charts. These show the dynamic behavior of a class and how dif-
ferent operations result in a change of state. There is an initial state and a final state, and the different operations result in different states being entered and exited.
There are several other UML diagrams including the collaboration diagram
which is similar to the sequence diagram except that the sequencing is shown
via a number system. The reader is referred to [Rum:99].
UML offers a rich notation to model software systems and to understand the
proposed system from different viewpoints. The main advantages of UML are
described in Table 7.9.
Advantages of UML
State-of-the-art visual modeling language with a rich, expressive notation.
Study of the proposed system before implementation.
Visualization of the architecture design of the system.
Mechanism to manage the complexity of a large system.
Visualization of the system from different viewpoints; the different UML diagrams provide different views of the system.
Enhanced understanding of the implications of user behavior.
Use cases allow description of typical user behavior.
A mechanism to communicate the proposed behavior of the software system, i.e., what it will do and what to test against.
Usability is a multidimensional concept, with several properties associated with each dimension of usability. The SUMI methodology lists five dimensions of usability to be measured, and these dimensions are related to users' expectations of, and attitudes toward, the computer system being evaluated. They include the following:
Dimension Description
Helpfulness: This measures the degree to which the software is self-explanatory, as well as the adequacy of help facilities and documentation.
Control: This measures the extent to which the user feels in control of the software, as opposed to being controlled by the software.
Learnability: This measures the speed of learning and the ease of mastering the software system or new features.
Efficiency: This measures the extent to which the users feel the software assists them in their work.
Affect: This measures the degree to which users like the computer system, i.e., likeability or the general emotional reaction to the software.
There are three possible responses to the questions (Agree, Don't know, Dis-
agree) and the questionnaire should be completed rapidly. The questionnaire is
completed by a representative sample of users and the overall results are re-
ported by usability dimension with an overall global usability factor determined
for the computer system (Table 7.12). The global usability factor represents the
perceived quality of use of the software.
Category Mean
Global 22.1
Helpfulness 21.5
Control 21.6
Learnability 23.7
Efficiency 21.7
Affect 21.4
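The table can be reproduced mechanically from raw scores; the sketch below simply averages invented per-user scores by dimension and is not the actual SUMI scoring algorithm:

from statistics import mean

# Invented per-user scores for each usability dimension.
scores = {
    "Helpfulness": [20, 22, 23, 21],
    "Control": [22, 21, 22, 21],
    "Learnability": [24, 23, 24, 24],
    "Efficiency": [21, 22, 22, 22],
    "Affect": [21, 21, 22, 21],
}

for dimension, values in scores.items():
    print(f"{dimension:12s} {mean(values):.1f}")

# A global figure reported alongside the per-dimension means.
all_values = [v for values in scores.values() for v in values]
print(f"{'Global':12s} {mean(all_values):.1f}")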
The ISO 9241 standard is large and consists of 17 parts. It includes guidance and requirements for equipment, environment, and the human computer interface (HCI). The standard is summarized below:
Principle Description
User Involvement: Active user involvement in the project.
Human Skills: Appropriate allocation of human resources to tasks.
Iteration: Iterations planned in the project schedule, with user review of each iteration.
Multidisciplinary: Multidisciplinary design with user involvement in the design.
Activity Description
Context of Use: This involves an explicit description of the context of use of the software.
User/Organization Requirements: This involves specifying user and organization requirements and the different viewpoints of users.
Design Solutions: Produce multiple design solutions.
Evaluate: Evaluate the designs against the requirements with user involvement/testing.
There is significant industrial interest in the ISO 13407 standard and indica-
tions suggest that it will be a powerful tool for improving software usability.
Usability, like quality, needs to be built into the software product, and therefore
usability needs to be considered from the earliest stages in the software devel-
opment lifecycle. Usability requires an analysis of the user population and the
tasks that they perform in the targeted environment. This will help produce a
more precise user requirements specification. The objective is that the system should enable its users to perform their tasks more effectively and efficiently.
It is important to understand and specify the context of use and to specify
user and organizational requirements. There will often be a variety of different
viewpoints from different individuals in the organization and this leads to multi-
ple design solutions and an evaluation of designs against the requirements.
An iterative software development lifecycle is generally employed and there
is active user involvement during the development of the software, especially at
the early stages. Prototyping is generally employed to give users a flavor of the
proposed system and to provide early user feedback. User acceptance testing
provides confidence that the software is correct and matches the usability and
quality expectations of users.
Usability may be assessed via structured questionnaires such as the SUMI
methodology as discussed earlier. The questionnaire may be completed on early
prototypes or on the completed software. The usability lifecycle is described
below:
Phase Description
Requirements: This includes the standard requirements process and interviews with different categories of users.
Prototype: The initial prototype is developed and the users provide structured feedback, typically using a structured questionnaire.
Spiral Design/Development: The prototype is reused, and spiral software development proceeds with incremental changes to the design, code, and testing. The completed spiral is evaluated by the users prior to the commencement of the next spiral.
Acceptance: Final acceptance testing by the users.
The formal specification promotes a common understanding for all those concerned with the system.
The term formal methods is used to describe a formal specification language and
a method for the design and implementation of computer systems. The term has
its roots in mathematics and logic; for example, the word formal is derived from form as distinct from content. The terminology goes back to a movement in
mathematics in the early twentieth century which attempted to show that all
mathematics was reducible to symbols with rules governing the generation of
new symbols from existing terms. The movement arose out of the paradoxes in
set theory as identified by Russell and the objective of the formalists was to de-
velop a formal system which was both consistent and complete. Completeness
indicates that all true theorems may be proved within the formal system, and
consistency indicates that only true results may be proved. The objectives of the
formalist program were dealt a fatal blow in 1931 by the Austrian logician Kurt
Goedel [Goe:31] when he demonstrated that any formal system powerful
enough to include arithmetic would necessarily be incomplete, and in fact the
consistency of arithmetic is not provable within the formal system.
Formal software development was defined by Micheal Mac an Airchinnigh
in [Mac:90] as
A formal specification derived from requirements.
A formal method by which one proceeds from the specification
to the ultimate concrete reality of the software.
A related defense standard is Def Stan 00-56, Hazard Analysis and Safety Classification of the Computer and Programmable Electronic System Elements of Defense Equipment [MOD:91b]. The objective of this standard is to provide guidance to iden-
tify which systems or parts of systems being developed are safety-critical and
thereby require the use of formal methods. This is achieved by subjecting a pro-
posed system to an initial hazard analysis to determine whether there are safety-
critical parts. The reaction to these defense standards 00-55 and 00-56 was quite
hostile initially as most suppliers were unlikely to meet the technical and organi-
zation requirements of the standard, and this is described in [Tie:91]. The stan-
dard indicates how seriously the ministry of defense takes safety, and Brown in
[Bro:93] argues that
Missile systems must be presumed dangerous until shown to be safe, and that
the absence of evidence for the existence of dangerous errors does not
amount to evidence for the absence of danger.
It is quite possible that a software company may be sued for software which
injures a third party, and it is conjectured in [Mac:93] that the day is not far off
when
A system failure traced to a software fault and injurious to a third party, will
lead to a successful litigation against the developers of the said system soft-
ware.
This suggests that companies will need a quality assurance program that will
demonstrate that every reasonable practice was considered to prevent the occur-
rence of defects. One such practice for defect prevention is the use of formal
methods in the software development lifecycle, and in some domains, e.g., the safety-critical domain, it looks likely that the exclusion of formal methods from the software development cycle may need to be justified.
There is evidence to suggest that the use of formal methods provides savings in the cost of a project; for example, an independent audit of the large CICS transaction processing project at IBM demonstrated a 9% cost saving attributed
to the use of formal methods. An independent audit of the Inmos floating point
unit of the T800 transputer project confirmed that the use of formal methods led
to an estimated 12 month reduction in testing time. These savings are discussed
in more detail in chapter one of [HB:95].
The current approach to providing high-quality software on time and within
budget is to employ a mature software development process including inspec-
tions and testing; and models such as the CMM, Bootstrap, SPICE or ISO
9000:2000 are employed to assist the organization to mature its software proc-
ess. The process-based approach is also useful in that it demonstrates that rea-
sonable practices are employed to identify and prevent the occurrence of defects,
and an ISO 9000:2000-approved software development organization has been
independently verified to have reasonable software development practices in
place. A formal methods approach is complementary to these models, and for
example, it fits comfortably into the defect prevention key process area and the
Formal methods is used in academia and in industry, and the safety-critical and security-critical fields are two key areas to which formal methods has been successfully applied in industry. Several organizations have piloted formal methods with varying degrees of success. These include IBM, which developed VDM at the IBM laboratory in Vienna. IBM (Hursley) piloted the Z formal specification language in the UK, and it was employed for the CICS (Customer Information Control System) project. This is an on-line transaction processing
system with over 500,000 lines of code. This project generated valuable feed-
back to the formal methods community, and although it was very successful in
the sense that an independent audit verified that the use of formal methods gen-
erated a 9% cost saving, there was a resistance to the deployment of the formal
methods in the organization. This was attributed to the lack of education on for-
mal methods in computer science curricula, lack of adequate support tools for
formal methods, and the difficulty that the programmers had with mathematics
and logic.
Formal methods has been successfully applied to the hardware verification field; for example, parts of the Viper microprocessor were formally verified, and the FM9001 microprocessor was formally verified by the Boyer-Moore theorem
prover [HB:95]. There are many examples of the use of formal methods in the
railway domain, and examples dealing with the modeling and verification of a
railroad gate controller and railway signaling are described in [HB:95]. The
mandatory use of formal methods in some safety and security-critical fields has
led to formal methods being employed to verify correctness in the nuclear power
industry, in the aerospace industry, in the security technology area, and the rail-
road domain. These sectors are subject to stringent regulatory controls to ensure
safety and security.
Formal methods has been successfully applied to the telecommunications
domain, and has been useful in investigating the feature interaction problem as
described in [Bou:94]. Formal methods has been applied to domains which have
little to do with computer science, for example, to the problem of the formal
specification of the single transferable voting system in [Pop:97], and to various
organizations and structures in [ORg:97]. There is an extensive collection of
examples to which formal methods has been applied, and a selection of these are
described in detail in [HB:95]. Formal methods has also been applied to the
problem of reuse, and this is described in the following section.
One of the main criticisms of formal methods is the lack of available or usable
tools to support the engineer in writing the formal specification or in doing the
proof. Many of the early tools were criticized as being of academic use only and
not being of industrial strength, but in recent years better tools have become available.
There are two key approaches to formal methods: namely, the model-oriented approach of VDM or Z, and the algebraic, or axiomatic, approach, which includes the process calculi such as the calculus of communicating systems (CCS) or communicating sequential processes (CSP).
A model oriented approach to specification is based on mathematical mod-
els. A mathematical model is a mathematical representation or abstraction of a
physical entity or system. The representation or model aims to provide a
mathematical explanation of the behavior of the system or the physical world. A
model is considered suitable if its properties closely match the properties of the
system, and if its calculations match and simplify calculations in the real system,
and if predictions of future behavior may be made. The physical world is domi-
nated by models, e.g., models of the weather system, which enable predictions
or weather forecasting to be made, and economic models in which predictions
on the future performance of the economy may be made.
It is fundamental to explore the model and to consider the behavior of the
model and the behavior of the real world entity. The extent to which the model
explains the underlying physical behavior and allows predictions of future be-
havior to be made will determine its acceptability as a representation of the
physical world. Models that are ineffective at explaining the physical world are
replaced with new models which offer a better explanation of the manifested
physical behavior. There are many examples in science of the replacement of
one theory by a newer one: the replacement of the Ptolemaic model of the uni-
verse by the Copernican model or the replacement of Newtonian physics by Ein-
stein's theories on relativity. The revolutions that take place in science are
described in detail in Kuhn's famous work on scientific revolutions [Kuh:70].
A model is a foundation stone from which the theory is built, and from
which explanations and justification of behavior are made. It is not envisaged
that we should justify the model itself, and if the model explains the known be-
havior of the system, it is thus deemed adequate and suitable. Thus the model
may be viewed as the starting point of the system. Conversely, if inadequacies are identified with the model, we may view the theory and its foundations as collapsing, in a similar manner to a house of cards; alternatively, we may search for amendments to the theory to address the inadequacies.
The model-oriented approach to software development involves defining an
abstract model of the proposed software system. The model acts as a representa-
tion of the proposed system, and the model is then explored to assess its suit-
ability in representing the proposed system. The exploration of the model takes
the form of model interrogation, i.e., asking questions and determining the ef-
fectiveness of the model in answering the questions. The modeling in formal
methods is typically performed via elementary discrete mathematics, including
set theory, sequences, and functions. This approach includes the Vienna Devel-
opment Method (VDM) and Z. VDM arose from work done in the IBM labora-
tory in Vienna in formalizing the semantics for the PL/1 compiler, and was later
applied to the specification of software systems. The Z specification language
had its origins in the early 1980s at Oxford University.
VDM is a method for software development and includes a specification
language originally named Meta IV (a pun on metaphor), and later renamed
VDM-SL in the standardization of VDM. The approach to software develop-
ment is via step-wise refinement. There are several schools of VDM, including VDM++, the object-oriented extension to VDM, and what has become known as the "Irish school of VDM", i.e., VDM♣, which was developed at Trinity College, Dublin.
The axiomatic approach focuses on the properties that the proposed system is to
satisfy, and there is no intention to produce an abstract model of the system. The
required properties and underlying behavior of the system are stated in mathe-
matical notation. The difference between the axiomatic specification and a
model-based approach is illustrated by the example of a stack. The stack is a
well-known structure in computer science, and includes stack operators for
pushing an element onto the stack and popping an element from the stack. The
properties of pop and push are explicitly defined in the axiomatic approach,
whereas in the model-oriented approach, an explicit model of the stack and its
operations is constructed in terms of the effect the operations have on the
model. The specification of an abstract data type of a stack involves the spe-
cification of the properties of the abstract data type, but the abstract data type is
not explicitly defined; i.e., only the properties are defined. The specification of
the pop operation on a stack is given by axiomatic properties, for example,
pop(push(s,x)) = s. The "property oriented approach" has the advantage that the
implementer is not constrained to a particular choice of implementation, and the
only constraint is that the implementation must satisfy the stipulated properties.
The emphasis is on the identification and expression of the required properties of
the system and the actual representation or implementation issues are avoided,
and the focus is on the specification of the underlying behavior. Properties are
typically stated using mathematical logic or higher-order logics, and mechanized
theorem-proving techniques may be employed to prove results.
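The flavor of the axiomatic approach can be conveyed by checking the stated properties against one candidate implementation; the sketch below uses a plain Python list, which is just one of many implementations satisfying the axioms:

# Property-style check of the stack axioms pop(push(s, x)) = s and top(push(s, x)) = x.
def push(s, x):
    return s + [x]

def pop(s):
    return s[:-1]

def top(s):
    return s[-1]

for s in ([], [1], [1, 2, 3]):
    for x in (0, 7):
        assert pop(push(s, x)) == s
        assert top(push(s, x)) == x
print("Stack axioms hold for the sample cases")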
One potential problem with the axiomatic approach is that the properties
specified may not be satisfiable in any implementation. Thus whenever a "for-
mal theory" is developed a corresponding "model" of the theory must be iden-
tified, in order to ensure that the properties may be realized in practice. That is,
when proposing a system that is to satisfy some set of properties, there is a need
to prove that there is at least one system that will satisfy the set of properties.
The model-oriented approach has an explicit model to start with and so this
problem does not arise. The constructive approach is preferred by some groups of formal methodists; in this approach, whenever existence is stipulated, an explicit construction of the object in question is required.
As stated previously, VDM dates from work done by the IBM research labora-
tory in Vienna. Their aim was to specify the semantics of the PL/1 programming language. This was achieved by employing the Vienna Definition Language
(VDL), taking an operational semantic approach; i.e. (cf. chapter 1 of [BjJ:82])
the semantics of a language are determined in terms of a hypothetical machine
which interprets the programs of that language. Later work led to the Vienna
Development Method (VDM) with its specification language, Meta IV. This
concerned itself with the denotational semantics of programming languages; i.e.
(cf. chapter 1 of [BjJ:82]) a mathematical object (set, function, etc.) is associated
with each phrase of the language. The mathematical object is the denotation of
the phrase.
VDM is a model-oriented approach, and this means that an explicit model of
the state of an abstract machine is given, and operations are defined in terms of
this state. Operations may act on the system state, taking inputs, and producing
outputs and a new system state. Operations are defined in a precondition and
postcondition style. Each operation has an associated proof obligation to ensure
that if the precondition is true, then the operation preserves the system invariant.
The initial state itself is, of course, required to satisfy the system invariant.
VDM uses keywords to distinguish different parts of the specification; e.g., preconditions and postconditions are introduced by the keywords pre and post, respectively. In keeping with the philosophy that formal methods specifies what a system does as distinct from how, VDM employs postconditions to stipulate the effect of the operation on the state. The previous state is distinguished by employing hooked variables (e.g., ↼v), and the postcondition specifies the new state (defined by a logical predicate relating the pre-state to the post-state) in terms of the previous state.
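A loose Python analogue of this discipline, for intuition only (VDM is a specification notation, not executable code): the precondition is checked on entry, the previous state plays the role of the hooked variable, and the postcondition and invariant are re-checked after the operation.

class Counter:
    def __init__(self, value: int = 0):
        self.value = value
        assert self.invariant()              # the initial state must satisfy the invariant

    def invariant(self) -> bool:
        return self.value >= 0               # system invariant

    def decrement(self) -> None:
        assert self.value > 0                # precondition (pre)
        old_value = self.value               # the "hooked" previous state
        self.value -= 1
        assert self.value == old_value - 1   # postcondition (post)
        assert self.invariant()              # the invariant is preserved

c = Counter(2)
c.decrement()
print(c.value)                               # prints 1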
VDM is more than its specification language Meta IV (called VDM-SL in
the standardization of VDM) and is, in fact, a development method, with rules to
verify the steps of development. The rules enable the executable specification,
i.e., the detailed code, to be obtained from the initial specification via refinement
steps. Thus, we have a sequence S = S0, S1, ..., Sn = E of specifications, where S is the initial specification, and E is the final (executable) specification. Retrieval
functions enable a return from a more concrete specification, to the more ab-
stract specification. The initial specification consists of an initial state, a system
state, and a set of operations. The system state is a particular domain, where a
domain is built out of primitive domains such as the set of natural numbers, etc.
Example 1
The following is a very simple example of a VDM specification and is adapted
from [InA:91]. It is a simple library system which allows borrowing and returns
of books. The data types for the library system are first defined and the operation
to borrow a book is then defined. It is assumed that the state is made up of three
sets and these are the set of books on the shelf, the set of books which are bor-
rowed, and the set of missing books. These sets are mutually disjoint. The effect
of the operation to borrow a book is to remove the book from the set of books on
the shelf and to add it to the set of borrowed books. The reader is referred to
[InA:91] for a detailed explanation.
types
  Bks = Bkd-id set

state Library of
  on-shelf : Bks
  missing : Bks
  borrowed : Bks

borrow (b: Bkd-id)
ext wr on-shelf, borrowed : Bks
pre b ∈ on-shelf
VDM is a widely used formal method and has been used in industrial
strength projects as well as by the academic community. There is tool support
available, for example, the IFAD Toolbox. There are several variants of VDM, including VDM++, the object-oriented extension of VDM, and the Irish school of VDM (VDM♣), which is discussed in the next section.
Example 2
The following is the equivalent VDM♣ specification of the earlier example of a simple library presented in standard VDM.
Bks = ℙ Bkd-id
Library = Bks × Bks × Bks
Os ∈ Bks
Ms ∈ Bks
Bw ∈ Bks
There is, of course, a proof obligation to prove that the Borrow operation
preserves the invariant, i.e., that the three sets of borrowed, missing, or on the
shelf remain disjoint after the execution of the operation. Proof obligations re-
quire a mathematical proof by hand or a machine-assisted proof to verify that
the invariant remains satisfied after the operation.
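For intuition only, and emphatically not a substitute for the proof obligation, the borrow operation and the disjointness invariant can be exercised on sample data:

def invariant(on_shelf, missing, borrowed):
    # The three sets must be mutually disjoint.
    return (on_shelf.isdisjoint(missing)
            and on_shelf.isdisjoint(borrowed)
            and missing.isdisjoint(borrowed))

def borrow(on_shelf, borrowed, b):
    assert b in on_shelf                     # precondition of the operation
    return on_shelf - {b}, borrowed | {b}    # new on-shelf and borrowed sets

on_shelf, missing, borrowed = {"b1", "b2"}, {"b3"}, {"b4"}
assert invariant(on_shelf, missing, borrowed)
on_shelf, borrowed = borrow(on_shelf, borrowed, "b1")
assert invariant(on_shelf, missing, borrowed)
print(on_shelf, borrowed)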
Example 3
The following is the equivalent Z specification of the operation to borrow a book
in the library system. Z specifications are visually striking with the schema no-
tation.
Library
  on-shelf, missing, borrowed : ℙ Bkd-Id

Borrow
  Δ Library
  b? : Bkd-Id

  b? ∈ on-shelf
  on-shelf' = on-shelf \ {b?}
  borrowed' = borrowed ∪ {b?}
VDM and Z are the two most widely used formal methods; their similarities and differences are summarized below.
Every construct in the method has a set-theoretic counterpart, and the method is founded on Zermelo set theory. Each operation has an explicit precondition, and an immediate proof obligation is that the precondition is stronger than the weakest precondition for the operation.
One key purpose [McD:94] of the abstract machine in the B-Method is to
provide encapsulation of variables representing the state of the machine, and
operations which manipulate the state. Machines may refer to other machines,
and a machine may be introduced as a refinement of another machine. The abstract machines are specification machines, refinement machines, or implementable machines. The B-Method adopts a layered approach to design, where the design is gradually made more concrete by a sequence of design layers, and each design layer is a refinement that involves a more detailed implementation in terms of the abstract machines of the previous layer. The design refinement ends when the final layer is implemented purely in terms of library machines. Any refinement of a machine by another has associated proof obligations, and proof may be carried out to verify the validity of the refinement step.
Specification animation of the AMN specification is possible with the B-
Toolkit and this enables typical usage scenarios of the AMN specification to be
explored for requirements validation. This is, in effect, an early form of testing
and may be used to demonstrate the presence or absence of desirable or undesir-
able behavior. Verification takes the form of proof to demonstrate that the in-
variant is preserved when the operation is executed within its precondition, and
this is performed on the AMN specification with the B-Toolkit.
The B-Toolkit provides several tools which support the B-Method, and these include syntax and type checking, specification animation, a proof obligation generator, an auto prover, a proof assistant, and code generation. Thus, in theory, a com-
plete formal development from initial specification to final implementation may
be achieved, with every proof obligation justified, leading to a provably correct
program.
The B-Method and toolkit have been successfully applied in industrial appli-
cations and one of the projects to which they have been applied is the CICS
project at IBM Hursley in the UK. The B-Method and toolkit have been de-
signed to support the complete software development process from specification
to code. The application of B to the CICS project is described in [Hoa:951, and
the automated support provided has been cited as a major benefit of the applica-
tion of the B-Method and the B- Toolkit.
A table of the truth value of the functional operation may be constructed; truth values are normally the binary values of true and false, although there are other logics, for example, three-valued logics, which have more than the normal binary truth values for a proposition.
A formula in predicate calculus (cf. pp. 39-40 of [Gib:90]) is built up from
the basic symbols of the language; these symbols include variables; predicate
symbols, including equality; function symbols, including the constants; logical
symbols, e.g., ∃, ∧, ∨, ¬, etc.; and the punctuation symbols, e.g., brackets and
commas. The formulae of predicate calculus are then built from terms, where a
term is a key construct, and is defined recursively as a variable or individual
constant or as some function containing terms as arguments. A formula may be
an atomic formula or built from other formulae via the logical symbols. Other
logical symbols are then defined as abbreviations of the basic logical symbols.
An interpretation gives meaning to a formula. If the formula is a sentence
(i.e., does not contain any free variables) then the given interpretation is true or
false. If a formula has free variables, then the truth or falsity of the formula de-
pends on the values given to the free variables. A free formula essentially de-
scribes a relation, say R(x1, x2, ..., xn), such that R(x1, x2, ..., xn) is true if (x1, x2, ..., xn) is in the relation R. If a free formula is true irrespective of the values given to the free variables, then the formula is true in the interpretation.
A valuation (meaning) function is associated with the interpretation, and
gives meaning to the connectives. Thus associated with each constant c is a con-
stant c_L in some universe of values L; with each function symbol f, we have a function symbol f_L in L; and for each predicate symbol P a relation P_L in L. The valuation function in effect gives a semantics to the language of the predicate calculus L. The truth of a proposition P with respect to a model M is then defined in the natural way, in terms of the meanings of the terms, the meanings of the functions and predicate symbols, and the normal meanings of the connectives (cf. p. 43 of [Gib:90]).
Mendelson (cf. p. 48 of [Men:87]) provides a rigorous though technical
definition of truth in terms of satisfaction (with respect to an interpretation M).
Intuitively a formula F is satisfiable if it is true (in the intuitive sense) for some
assignment of the free variables in the formula F. If a formula F is satisfied for
every possible assignment to the free variables in F, then it is true (in the techni-
cal sense) for the interpretation M. An analogous definition is provided for false
in the interpretation M.
A formula is valid if it is true in every interpretation; however, as there may
be uncountably many interpretations, it may not be possible to check this re-
quirement in practice. M is said to be a model for a set of formulae if and only if every formula is true in M.
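As a small illustration of truth in an interpretation (an invented example over a finite domain), the formula ∀x ∀y (P(x, y) → P(y, x)) can be checked directly against a chosen relation for the predicate symbol P:

# A finite interpretation: a domain and a relation interpreting the predicate symbol P.
domain = {0, 1, 2}
P = {(0, 1), (1, 0), (2, 2)}

# The formula "for all x, y: P(x, y) implies P(y, x)" is checked by enumeration.
holds = all(((x, y) not in P) or ((y, x) in P) for x in domain for y in domain)
print("Formula true in this interpretation:", holds)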
There is a distinction between proof theoretic and model theoretic ap-
proaches in predicate calculus. Proof theoretic is essentially syntactic, and we
have a list of axioms with rules of inference, and the theorems of the calculus may be logically derived (i.e., ⊢ A). In essence, the logical truths are a result of the syntax or form of the formulae, rather than the meaning of the formulae. Model theoretic, in contrast, is essentially semantic: the truths derive from the meaning of the symbols and connectives, rather than from the logical structure of the formulae. This is written as ⊨_M A.
A calculus is sound if all the logically valid theorems are true in the interpretation, i.e., proof theoretic ⇒ model theoretic. A calculus is complete if all the truths in an interpretation are provable in the calculus, i.e., model theoretic ⇒ proof theoretic. A calculus is consistent if there is no formula A such that ⊢ A and ⊢ ¬A.
The objectives of the process calculi [Hor:85] are to provide mathematical mod-
els which provide insight into the diverse issues involved in the specification,
design, and implementation of computer systems which continuously act and
interact with their environment. These systems may be decomposed into sub-
systems which interact with each other and their environment. The basic build-
ing block is the process, which is a mathematical abstraction of the interactions
between a system and its environment. A process which lasts indefinitely may
be specified recursively. Processes may be assembled into systems, execute con-
currently, or communicate with each other. Process communication may be syn-
chronized, and generally takes the form of one process outputting a message simultaneously with another process inputting that message. Resources may be shared
among several processes. Process calculi enrich the understanding of communi-
cation and concurrency, and elegant formalisms such as CSP [Hor:85] and CCS
[Mil:89] which obey a rich collection of mathematical laws have been devel-
oped.
The expression (a → P) in CSP describes a process which first engages in event a, and then behaves as process P. A recursive definition is written as (μX)·F(X), and a simple chocolate vending machine may be defined as

    VMS = μX·(coin → (choc → X))

The simple vending machine has an alphabet of two symbols, namely, coin and choc, and the behavior of the machine is that a coin is entered into the machine and then a chocolate is selected and provided.
CSP processes use channels to communicate values with their environment; input on channel c is denoted by (c?x → P_x), which describes a process that accepts any value x on channel c and then behaves as process P_x. In contrast, (c!e → P) defines a process which outputs the expression e on channel c and then behaves as process P.
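A rough analogue of channel communication, for illustration only: a Queue stands in for the channel c, one thread plays the outputting process (c!42 → P) and another the inputting process (c?x → P_x). CSP communication is synchronous, whereas the Queue here is buffered, so the correspondence is loose.

import threading
import queue

c = queue.Queue()                    # the channel c

def producer():                      # behaves like (c!42 -> P)
    c.put(42)

def consumer():                      # behaves like (c?x -> P_x)
    x = c.get()
    print("received", x)

t1 = threading.Thread(target=producer)
t2 = threading.Thread(target=consumer)
t1.start(); t2.start()
t1.join(); t2.join()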
The π-calculus is a calculus based on names. Communication between processes takes place over named channels, and the name of a channel may itself be passed over a channel. Thus in the π-calculus, as distinct from CCS, there is no distinction between channel names and data values. The output of a value v on channel a is given by āv, i.e., output is a negative prefix. Input on channel a is given by a(x), and is a positive prefix. Private links or restrictions are given by (x)P in the π-calculus and by P\x in CCS.
The word proof has several connotations in various disciplines; for example, in a
court of law, the defendant is assumed innocent until proven guilty. The proof of
the guilt of the defendant may take the form of certain facts in relation to the
movements of the defendant, the defendant's circumstances, the defendant's al-
ibi, statements from witnesses, rebuttal arguments from the defense and certain
theories produced by the prosecution or defense. Ultimately, in the case of a trial
by jury, the defendant is judged guilty or not guilty depending on the extent to
which the jury has been convinced by the arguments proposed by prosecution
and defense.
A mathematical proof typically includes natural language and mathematical
symbols; often many of the tedious details of the proof are omitted. The strategy
of proof in proving a conjecture tends to be a divide and conquer technique, i.e.,
breaking the conjecture down into sub-goals and then attempting to prove the
sub-goals. Most proofs in formal methods are concerned with cross-checking on
the details of the specification or validity of refinement proofs, or proofs that
certain properties are satisfied by the specification. There are many tedious
lemmas to be proved and theorem provers assist and are essential. Machine
proof needs to be explicit and reliance on some brilliant insight is avoided.
Proofs by hand are notorious for containing errors or jumps in reasoning, as dis-
cussed in chapter 1 of [HB:95], while machine proofs are extremely lengthy and
unreadable, but generally help to avoid errors and jumps in proof as every step
needs to be justified.
One well-known theorem prover is the "Boyer/Moore theorem prover"
[BoM:79], and a mathematical proof consists of a sequence of formulae where
each element is either an axiom or derived from a previous element in the series
by applying a fixed set of mechanical rules. There is an interesting case in the
literature concerning the proof of correctness of the VIPER microprocessor
[Tie:91] and the actual machine proof consisted of several million formulae.
Theorem provers are invaluable in resolving many of the thousands of proof
obligations that arise from a formal specification, and it is not feasible to apply
formal methods in an industrial environment without the use of machine assisted
proof. Automated theorem proving is difficult, as often mathematicians prove a
theorem with an initial intuitive feeling that the theorem is true. Human inter-
vention to provide guidance or intuition improves the effectiveness of the theo-
rem prover.
The proof of various properties about the programs increases confidence in
the correctness of the program. However, an absolute proof of correctness is
unlikely except for the most trivial of programs. A program may consist of leg-
acy software which is assumed to work, or be created by compilers which are
assumed to work; theorem provers are programs which are assumed to function
correctly. In order to be absolutely certain one would also need to verify the
hardware, customized-off-the-shelf software, subcontractor software, and every
single execution path that the software system will be used for. The best that
formal methods can claim is increased confidence in correctness of the software.
7.6 Summary
This chapter considered advanced topics in the software quality field, including
software configuration management, the unified modeling language, and formal
methods.
Cas:00 SPiCE for Space. A Method of Process Assessment for Space Software Projects. Ann Cass in: SPICE 2000 Conference. Software Process Improvement and Capability Determination. Editor: T.P. Rout. Limerick. June 2000.
Chi:95 Software Triggers as a Function of Time. ODC on Field Faults. Ram Chillarege and Kathryn A. Bassin. Fifth IFIP Working Conference on Dependable Computing for Critical Applications. September 1995.
CJ:96 Software Systems Failure and Success. Capers Jones. Thomson Press, Boston, MA. 1996.
Crs:80 Quality is Free. The Art of Making Quality Certain. Philip Crosby. Penguin
Books. 1980.
CSE:00 Unified Modeling Language. Technical Briefing No. 8. Centre for Software Engineering. Dublin City University. Ireland. April 2000.
Dem:86 Out of Crisis. W. Edwards Deming. M.I.T. Press. 1986.
Dij:72 Structured Programming. E.W. Dijkstra. Academic Press. 1972.
Dun:96 CMM Based Appraisal for Internal Process Improvement (CBA IPI):
Method Description. Donna K. Dunaway and Steve Masters. Technical Re-
port CMU/SEI-96-TR-007. Software Engineering Institute. 1996.
Fag:76 Design and code inspections to reduce errors in software development. Michael Fagan. IBM Systems Journal 15(3). 1976.
Fen:95 Software Metrics: A Rigorous Approach. Norman Fenton. Thompson Computer Press. 1995.
Geo:91 The RAISE Specification language: A tutorial. Chris George. Lecture Notes
in Computer Science (552). Springer Verlag. 1991.
Ger:00 Risk-Based E-Business Testing. Paul Gerrard. Technical Report. Systeme Evolutif. London. 2000.
Gib:90 PhD Thesis. Department of Computer Science. Trinity College Dublin. 1990.
Glb:94 Software Inspections. Tom Gilb and Dorothy Graham. Addison Wesley.
1994.
Glb:76 Software Metrics. Tom Gilb. Winthrop Publishers, Inc. Cambridge. 1976.
Goe:31 Kurt Goedel. Undecidable Propositions in Arithmetic. 1931.
Gri:81 The Science of Programming. David Gries. Springer Verlag. Berlin. 1981.
HB:95 Applications of Formal Methods. Edited by Michael Hinchey and Jonathan
Bowen. Prentice Hall International Series in Computer Science. 1995.
Hoa:95 Application of the B-Method to CICS. Jonathan P. Hoare in: Applications of
Formal Methods. Editors: Michael Hinchey and Jonathan P. Bowen. Prentice
Hall International Series in Computer Science. 1995.
Hor:85 Communicating Sequential Processes. C.A.R. Hoare. Prentice Hall International Series in Computer Science. 1985.
Hum:89 Managing the Software Process. Watts Humphrey. Addison Wesley. 1989.
Hum:87 A Method for Assessing the Software Engineering Capability of Contractors. W. Humphrey and W. Sweet. Technical Report. CMU/SEI-87-TR-023. Software Engineering Institute. 1987.
IEEE:829 IEEE Standard for Software Test Documentation.
InA:91 Practical Formal Methods with VDM. Darrell Ince and Derek Andrews. McGraw Hill International Series in Software Engineering. 1991.
ISO:98 Information Technology. Software Process Assessment. ISO/IEC TR 15504
- SPICE - Parts 1 to 9. Technical Report (Type 2). 1998.
ISO:98a ISO 9241 (Parts 1 - 17). Ergonomic Requirements for Office Work involving
Visual Display Terminals. International Standards Organization. 1998.
ISO:99 ISO 13407:1999. Human Centred Design Processes for Interactive Systems.
International Standards Organization. 1999.
ISO:00 ISO 9000:2000. Quality Management Systems - Requirements. ISO 9004:2000. Quality Management Systems - Guidelines for Performance Improvements. December 2000.
ISO:91 ISO/IEC 9126: Information Technology - Software Product Evaluation: Quality Characteristics and Guidelines for their Use. 1991.
Jur:51 Quality Control Handbook. Joseph Juran. McGraw-Hill. New York. 1951.
Kee:00 The evolution of quality processes at Tata Consultancy Services. Gargi Keeni et al. IEEE Software 17(4). July 2000.
Kir:00 The SUMI Methodology for Software Usability. Jurek Kirakowski. Human Factors Research Group. University College Cork, Ireland.
KpN:96 The Balanced Scorecard. Translating Strategy into Action. Kaplan and Nor-
ton. Harvard Business School Press. 1996.
Kuh:70 The Structure of Scientific Revolutions. Thomas Kuhn. University of Chi-
cago Press. 1970.
Kuv:93 BOOTSTRAP: Europe's assessment method. P. Kuvaja et al. IEEE Software 10(3). May 1993.
Lak:76 Proofs and Refutations. The Logic of Mathematical Discovery. Imre Lakatos. Cambridge University Press. 1976.
Lio:96 ARIANE 5. Flight 501 Failure. Report by the Inquiry Board. Prof. J.L. Lions
(Chairman of the Board). 1996.
Mac:90 Conceptual Models and Computing. PhD Thesis. Micheal Mac An Airchin-
nigh. Department of Computer Science. University of Dublin. Trinity Col-
lege. Dublin. 1990.
Mac:93 Formal Methods and Testing. Micheal Mac an Airchinnigh. Tutorials of the 6th International Software Quality Week. Software Research Institute. 1993.
MaCo:96 Software Quality Assurance. Thomas Manns and Michael Coleman. Macmil-
lan Press Ltd. 1996.
Mag:00 The Role of the Improvement Manager. Giuseppe Magnani et al. in: Proceed-
ings of SPICE 2000. Software Process Improvement and Capability Deter-
mination. Editor: T. Rout. Limerick, Ireland.
Man:95 Taurus: How I lived to tell the tale. Elliot Manley. American Programmer:
Software Failures. July 1995.
McD:94 MSc Thesis. Eoin McDonnell. Department of Computer Science. Trinity
College, Dublin. 1994.
Men:87 Introduction to Mathematical Logic. Elliot Mendelson. Wadsworth and Brooks/Cole Advanced Books & Software. California. 1987.
Mil:89 A Calculus of Mobile Processes. Part 1. Robin Milner et al. LFCS Report
Series. ECS-LFCS-89-85. Department of Computer Science. University of
Edinburgh. 1989.
MOD:91a 00-55 (Part 1) / Issue 1. The Procurement of Safety Critical Software in Defence Equipment. Part 1: Requirements. Ministry of Defence, Interim Defence Standard, U.K. 1991.
MOD:91b 00-55 (Part 2) / Issue 1. The Procurement of Safety Critical Software in Defence Equipment. Part 2: Guidance. Ministry of Defence, Interim Defence Standard, U.K. 1991.
OHa:98 Peer Reviews - The Key to Cost Effective Quality. Fran O'Hara. European
SEPG. Amsterdam. 1998.
ORg:97 Applying Formal Methods to Model Organizations and Structures in the Real
World. PhD Thesis. Department of Computer Science. Trinity College, Dub-
lin. 1997.
Pau:93 Key Practices of the Capability Maturity Model, V1.1. Mark Paulk et al.
CMU/SEI-93-TR-25. 1993.
Pet:95 The IDEAL Model. Software Engineering Institute. Bill Peterson. Software
Process Improvement and Practice (Pilot Issue). August 1995.
Pol:57 How to Solve It. A New Aspect of Mathematical Method. George Polya.
Princeton University Press. 1957.
Pop:97 The single transferable voting system: Functional decomposition in formal
specification. Michael Poppleton in: 1st Irish Workshop on Formal Methods
(IWFM'97). Editors: Gerard O'Regan and Sharon Flynn. Springer Verlag
Electronic Workshops in Computing, Dublin, 1997.
Pre:94 Human Computer Interaction. Jenny Preece et al. Addison Wesley Publish-
ing Company. 1994.
Pri:00 The SPIRE Handbook. Better, Faster, Cheaper Software Development in
Small Organizations. The SPIRE Partners. Editor: Jill Pritchard. Centre for
Software Engineering, Dublin.
Rot:00 The influential test manager. Johanna Rothman. Software and Internet Qual-
ity Week. Software Research Institute. San Francisco. June 2000.
Roy:70 The Software Lifecycle Model (Waterfall Model). Managing the Develop-
ment of Large Software Systems: Concepts and Techniques. W. Royce in:
Proc. WESCON. August 1970.
Rou:00 Evolving SPICE - the future for ISO 15504. Terry Rout. SPICE 2000. Inter-
national Conference on Software Process Improvement and Capability De-
termination. Limerick, Ireland. June 2000.
Rum:99 The Unified Modeling Language: User Guide. James Rumbaugh, Ivar Jacob-
son, and Grady Booch. Addison Wesley, 1999.
Ryn:00 Managing requirements: A new focus for project success. Kevin Ryan and
Richard Stevens. Software in Focus (10) 2000. Centre for Software Engi-
neering, Dublin.
SEI:00a The CMM Integration Model (CMMI℠). CMMI-SE/SW v1.02. CMU/SEI-2000-TR-018. Staged Version of the CMMI. Technical Report. Software Engineering Institute, Carnegie Mellon University, Pittsburgh. July 2000.
SEI:00b The CMM Integration Model (CMMI℠). CMMI-SE/SW v1.02. CMU/SEI-2000-TR-019. Continuous Version of the CMMI. Technical Report. Software Engineering Institute, Carnegie Mellon University, Pittsburgh. July 2000.
Shw:31 Economic Control of Quality of Manufactured Product. Walter Shewhart. Van Nostrand. 1931.
Spi:92 The Z Notation. A Reference Manual. J.M. Spivey. Prentice Hall Interna-
tional Series in Computer Science. 1992.
Std:99 Estimating: Art or Science. Featuring Morotz Cost Expert. Standish Group
Research Note. 1999.
Sub:00 Performance testing. A methodical approach to E-commerce. B.M. Subraya
and S.V. Subrahmanya. Software and Internet Quality Week. Software Re-
search Institute. San Francisco. June 2000.
Tie:91 The Evolution of Def Stan 00-55 and 00-56: An Intensification of the 'For-
mal Methods debate' in the UK. Margaret Tierney. Research Centre for So-
cial Sciences. University of Edinburgh, 1991.
Voa:90 Prototyping. The Effective Use of CASE Technology. Roland Vonk. Prentice
Hall. 1990.
Wrd:92 Formal Methods with Z. A Practical Approach to Formal Methods in Engi-
neering. J.B. Wordsworth. Addison Wesley. 1992.
Index
A
action plan, 128
activity diagrams, 250
Ariane 5 disaster, 3
assessment, 41
audit, 122
axiomatic approach, 266

B
Balanced scorecard, 210
base practices, 190
baseline, 242
browser compatibility testing, 81
business goals, 227

C
Capability determination, 196
Capability Maturity Model, 131
class and object diagrams, 249
CMM appraisal (CBA IPI), 41
CMM appraisal framework, 159
CMMI model, 162
commercial off-the-shelf software, 24
common features, 142
compatible model, 191
competent assessor, 175
cost of poor quality, 14
Crosby, 16
customer care department, 225
customer satisfaction metric, 39
customer surveys, 38

D
defect prevention, 153
defect type classification, 66
defined level, 135
Deming, 10

E
engineering process category, 184
European Space Agency, 4
exemplar model, 190

F
Fagan inspection guidelines, 60
Fagan inspection methodology, 7, 58
fishbone diagram, 232
FMEA, 33
formal methods, 27
formal specifications, 258

G
Gilb methodology, 49
GQM, 208

H
histogram, 32, 232
HR department, 216, 217
Humphrey, 131

I
IDEAL model, 41
O
ODC classification scheme, 66
optimizing level, 135
organization process category, 188
organization process definition, 148
organization process focus, 148

S
scatter graph, 237
self assessment, 155
sequence diagram, 250
Shewhart, 8
six sigma, 27