Chapter 9
—Leonardo da Vinci
Back in the 1950s quality was seen merely as the screening or sorting of
products that had already been manufactured to separate the good ones
from the bad. But in the current competitive business environment, so the
thinking goes, you have to prevent defects and failures rather than perform
inspections. “You cannot inspect to make it right.” The modern emphasis
is on processes to ensure getting it right the first time, every time, and a cul-
ture of involving all project team members and other stakeholders in qual-
ity-focused processes.
Figure 9-1
London Tower Bridge. (Photo courtesy of Herman Steyn.)
But the quest to be competitive often pressures project teams to accelerate project
schedules and cut costs. This can lead to increased rework, required changes, and
greater workload on the project team. This in turn can result in a “quality melt-
down.” Once the project manager has committed to the budget, schedule, and deliv-
erables, the project cost, schedule, and performance requirements are somewhat
fixed and, often, cannot be changed without serious consequences and extensive
negotiation. However, “… the bitterness of poor quality lives long after the sweet-
ness of cheap price or timely delivery has been forgotten.”1
To illustrate the consequences of project quality—and the staying power of those
consequences long after project schedules or budgets have been forgotten—Michael
Carruthers uses the case of the London Tower Bridge (Figure 9-1).2 The bridge was
opened in 1894—4 years late and at a cost nearly twice the estimated £585,000. In
terms of time and cost the project was a failure, but in many other ways it was a suc-
cess. It has withstood the test of time: more than a century later the bridge’s design
and construction quality are still apparent. The original requirement was that it ena-
ble pedestrians and horse-drawn vehicles to cross the river; today it carries 10,000
vehicles per day and is a major tourist attraction. The bridge has survived floods,
pollution, and bombs (World War II)—problems never considered at its inception. It
is a true testament to project quality.
In contrast is the space shuttle Challenger. While engineers claimed to have
warned managers about a potentially serious quality problem, commitments on a
launch due date promised to politicians took precedence: in other words, quality was
compromised in order to meet a schedule. On January 28, 1986, defective seals on the
rocket boosters allowed hot gases to escape and ignite the main fuel tank shortly after
launch, causing a massive explosion and killing the seven astronauts onboard.
What Quality Is
Quality implies meeting specifications, but it is much more than that. While meeting
specifications will usually prevent a customer from taking the project contractor to
court, it alone does not ensure the customer is satisfied or that the contractor has
gained a good reputation or will win repeat business.
Ideally, a project should aim beyond specifications and try to meet customer
expectations—including those not articulated. It should aim at delighting the client.
A common shortfall of project managers is that they assume the needs, expectations,
For that to happen might require extensive training and motivation efforts,
although once everyone is contributing, attention to quality becomes automatic and
requires little influence from the project manager.
Quality Movements
What could be described as the “quality revolution” started in the 1950s in Japan
under the influence of an American, Dr. W. Edwards Deming. He proposed a new
philosophy of quality that included continuous improvement, skills training, leader-
ship at all levels, elimination of dependency on inspections, retaining single-source
suppliers rather than many sources, and use of sampling and statistical techniques.
Since then a number of quality movements have come and gone—some that could
be described as fads. The most lasting and popular movement since the 1980s is total
quality management (TQM). TQM is a set of techniques and more—it is a mindset,
an ambitious approach to improving the total effectiveness and competitiveness of
the organization. The key elements of TQM are identifying the mission of the organ-
ization, acting in ways consistent with the goals and objectives of the organization,
and focusing on customer satisfaction. TQM involves the total organization, includ-
ing teams of frontline workers and the visible support of top management. Quality
problems are systematically identified and resolved to continuously improve proc-
esses. In projects, this purpose is served by closeout sessions or post-mortems, dis-
cussed later.
Another management philosophy called just in time (JIT), or lean production,
complements TQM. In a production environment, JIT recognizes that quality prob-
lems are often hidden by excessive work-in-progress inventory. By continuously
reducing inventory and other sources of non-value added waste in processes, prob-
lems are exposed and solved as part of a continuous improvement drive. The JIT
approach includes relatively easy-to-implement measures that improve quality and
reduce costs and lead times.6
Another influential quality movement is Six Sigma, started in the 1980s at
Motorola and later popularized by General Electric. Advocates claim that Six Sigma
provides a more structured approach to quality than TQM. The term “six sigma”
refers to the fact that in a normal distribution, 99.99966 percent of the population
falls within −6σ to +6σ of the mean, where σ (sigma) is the standard deviation. If
the quality of a process is controlled to the Six Sigma standard, there would be less
than 3.4 parts per million scrap or defects in the process—near perfection!
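The arithmetic behind these figures is easy to check. The sketch below is a minimal Python illustration (not from the text) that computes the defect rates conventionally quoted for different sigma levels; the familiar 3.4 parts per million assumes the industry convention of a 1.5 sigma long-term shift of the process mean, which is why ±6σ is quoted as 3.4 ppm rather than the roughly 2 parts per billion implied by a perfectly centered process.

```python
# Illustrative only: approximate defect rates implied by sigma levels,
# using just the Python standard library (normal CDF via math.erf).
from math import erf, sqrt

def normal_cdf(x: float) -> float:
    """Cumulative probability of the standard normal distribution."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def defects_per_million(sigma_level: float, long_term_shift: float = 1.5) -> float:
    """One-sided defective parts per million for specification limits at
    +/- sigma_level, allowing the conventional 1.5-sigma long-term drift
    of the process mean (an industry convention, not a law of nature)."""
    z = sigma_level - long_term_shift
    return (1.0 - normal_cdf(z)) * 1_000_000

if __name__ == "__main__":
    for level in (3, 4, 5, 6):
        print(f"{level} sigma: about {defects_per_million(level):,.1f} defects per million")
    # The 6-sigma line prints roughly 3.4 defects per million, the figure quoted above.
```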
Figure 9-2
The project quality management process. The figure divides project quality management into quality control (QC), quality assurance (QA), the QA toolbox, and learning opportunities:
• Quality control (QC) control tools: planned QC work; configuration control; inspection; acceptance tests; ad hoc problem solving; cause-and-effect techniques.
• Quality assurance (QA) decisions regarding: standards and specifications to meet; metrics for meeting specifications/standards; criteria for authorizing project phases; tools and techniques for QA and QC; quality activities in the overall project plan.
• QA toolbox: training of project team members; configuration management system; configuration identification; design reviews and audits; quality function deployment; classification of characteristics; failure mode and effect analysis; modeling and prototyping; laboratory tests and other experiments; inspection; checklists.
• Learning opportunities: project close-out meeting; phase close-out meetings; close-out reports; report of non-quality costs.
Quality Planning
Quality planning should provide the confidence that all steps necessary to ensure
quality have been thought through. Quality planning has two aspects: (1) establish-
ing organization-wide project quality management procedures and policies and (2)
establishing a quality plan and including it in the project plan for each project.
Responsibility for establishing organization-wide policies and procedures to
improve project quality management typically falls on functional managers and
especially the quality manager. The ISO 9001 standard specifies the requirements
for such a quality management system.7 For design and development projects, the
ISO 9001 standard prescribes that an organization shall determine (a) the design and
development stages; (b) the necessary review, verification, and validation appropri-
ate to each design and development stage; and, (c) the responsibilities and authori-
ties for design and development.
Planning for quality for each project should be an integrated part of the project
planning process and make use of the principles discussed in Chapters 4 through 8.
Quality Assurance
Project quality assurance reduces the risks related to features or performance of
deliverables, and provides confidence that end-item requirements will be met. Since
quality assurance relies heavily on human resources, it usually involves consider-
able training of project team members.
As indicated in Figure 9-2, quality assurance comprises the following:
1. Activities done in a specific project to ensure that requirements are being met.
This includes demonstrating that the project is being executed according to the
quality plan.
2. Activities that contribute to the continuous improvement of current and future
projects, and to the project management maturity of the organization.
Quality assurance should provide confidence that everything necessary is being
done to ensure the appropriate quality of project deliverables.
Quality Control
Quality control is the ongoing process of monitoring and appraising work, and
taking corrective action so that planned-for quality outcomes are achieved. The
process verifies that quality assurance activities are being performed in accord-
ance with approved quality plans, and that project requirements and standards are
being met. Whenever nonconformities are uncovered, the causes are determined
and eliminated. In the same way that quality planning should be integrated with
other aspects of project planning, quality control should be integrated with the other
aspects of project control. Quality control cannot be performed in isolation; it must
be integrated with project scope control, cost control, and progress and risk control.
It is a responsibility of the project manager.
Quality control is a subset of scope verification, but whereas scope verification
refers to the acceptability of project deliverables or end-items by the customer, qual-
ity control refers to conformance to specifications set by the contractor. It is therefore
a narrower concept related to verifying adherence to specifications and standards
previously set, whereas scope verification also includes verifying the general accept-
ability of those specifications and standards.
which is Company C. (More than that, Company A actually nominated Company
C as a potential supplier to Company B.) Company B’s engineering division devel-
ops a functional specification for the transmission that includes functional char-
acteristics, maintenance requirements, interfaces with the rest of the vehicle,
and test requirements. Its vendor quality section then verifies that Company C’s
engineers will be using appropriate processes to ensure cost-effective compli-
ance with the specification, and that the transmissions will be tested according
to Company B’s functional specification to ensure compliance with Company B’s
performance criteria before any transmissions are shipped.
System developers employ a wide range of techniques to ensure the quality of the
project end-item or products. This section discusses a “toolbox” of these techniques.
Configuration Management10
During the design of a system, vast amounts of data and information are generated
for use in the design process and later for manufacturing (or construction), mainte-
nance, and support. The design can involve many hundreds or even thousands of
documents (specifications, schematics, drawings, etc.), each likely to be modified in
some way during the project. Keeping track of all the changes and knowing the most
current version of every item can be difficult. Thus, any project aimed at delivering a
technical product should include provision to keep up with and control all this infor-
mation; such is the purpose of configuration management. Configuration management
represents policies and procedures for monitoring and tracking design information
and changes, and ensuring that everyone involved with the project and, later on, the
operation of the end-item has the most current information possible. Policies and
procedures that form the configuration management system for a project should be
included as a section in the quality plan. As with all procedures, the best configura-
tion management system is whatever enables the desired level of control and is the
simplest to implement. Two aspects of configuration management are configuration
identification and configuration control.
Configuration Identification
Configuration identification is an inherent part of systems design that involves defining
the structure of the system, its subsystems, and components. As mentioned in Chapter 2,
any subsystem, component, or part that is to be tracked and controlled as an individual
entity throughout the life cycle of the system is identified as a configuration item (CI).
A CI can be a piece of hardware, a manual, a parts list, a software package, or even
a service. A subsystem that is procured is also treated as a CI. All physical and func-
tional characteristics that define or characterize and are important for controlling the
CI are identified and documented. Ultimately, every functional and physical element
of the end-item system should be associated in some way with a CI, either as a CI on
its own or as a component within a subsystem that has been identified as a CI. Ideally,
each CI is small enough to be designed, built, and tested individually by a small team.
Master copies (electronic or paper) of the configuration documents for every CI
are retained in a secure location (the “configuration center”) and managed by someone
not involved in the functions of design, construction, manufacture, or maintenance.
Configuration Control
Configuration control is the second aspect of configuration management; it is a tech-
nique more for quality control than quality assurance but is covered here for the
sake of continuity. The design of a system is normally specified by means of a large
number of documents such as performance specifications, testing procedures, draw-
ings, manuals, lists, and others that are generated during the design process. As the
design evolves, these documents are subject to change, and an orderly scheme is
needed to manage and keep track of all these changes. Such is the purpose of con-
figuration control.
Configuration control is based on the following principles:
1. Any organization or individual may request a change—a modification, waiver,
or deviation.
2. The proposed change and its motivation should be documented. Standard docu-
ments exist for this purpose; for modifications, the document is called a change
proposal or request, change order, or variation order.
3. The impact of the proposed change on system performance, safety, and the envi-
ronment is evaluated, as is its impact on all other physical items, manuals, soft-
ware, manufacturing or construction, and maintenance.
4. The change is assessed for feasibility, which includes estimating the resources
needed to implement the change and the change’s impact on schedules.
5. The change proposal is either rejected or accepted and implemented. The group
responsible for making the decision, called a configuration board (CB) or a
configuration control board (CCB), should include the chief designer as well as
representatives from manufacturing or construction, maintenance, and other
relevant stakeholders. Often the project manager or program manager chairs
the group.
6. When a proposed change is approved, the work required to implement the
change is planned. This includes actions regarding the disposition of items
that might be affected by the change such as items in the inventory, equipment
and processes used in manufacturing or construction, and manuals and other
documentation.
7. The implemented change is verified to ensure it complies with the proposed,
approved change proposal.
Change requests are sometimes classified as Class I or Class II. Class I requests
are for major changes that must gain the approval of the client; Class II changes
can be approved by the contractor or the developer. Configuration control is an aspect of
project control and, in particular, change control, both discussed in Chapter 11.
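The record keeping these principles imply can be pictured with a small data structure. The following Python sketch is hypothetical (none of the class or field names come from the text or from any configuration management standard); it simply shows a change request carrying its impact assessment and decision history through a CCB decision.

```python
# Hypothetical sketch of a change request record and a configuration control
# board (CCB) decision step; all names and fields are illustrative only.
from dataclasses import dataclass, field
from enum import Enum

class Status(Enum):
    PROPOSED = "proposed"
    APPROVED = "approved"
    REJECTED = "rejected"
    IMPLEMENTED = "implemented"
    VERIFIED = "verified"

@dataclass
class ChangeRequest:
    ci_id: str                   # configuration item affected
    description: str             # proposed modification, waiver, or deviation and its motivation
    impact_assessment: str = ""  # effect on performance, safety, cost, schedule, other items
    status: Status = Status.PROPOSED
    history: list = field(default_factory=list)

    def record(self, note: str) -> None:
        self.history.append(note)

def ccb_decision(request: ChangeRequest, approve: bool, chair: str) -> None:
    """Principle 5 above: the CCB either rejects or accepts the proposed change."""
    request.status = Status.APPROVED if approve else Status.REJECTED
    request.record(f"CCB chaired by {chair}: {request.status.value}")

# Example use
cr = ChangeRequest(ci_id="CI-042", description="Substitute gasket material")
ccb_decision(cr, approve=True, chair="project manager")
print(cr.status.value, cr.history)
```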
Design Reviews
Since the fate of an end-item is often sealed by its design, the project manager must
ensure that the proposed design is acceptable in all respects. This is the purpose of
design reviews—to ensure that the users’ requirements and inherent assumptions
have been correctly identified, and that the proposed design is able to meet those
requirements in an appropriate way. Design reviews (not to be confused with gen-
eral project reviews, described in Chapter 12) provide confirmation of the data used
during the design process, design assumptions (e.g., load conditions), and design
calculations. They should ensure that important life-cycle aspects of the product or
end-item have been addressed and pose no unacceptable risks; examples of these
aspects include:
1. Omissions or errors in the design
2. Compliance to regulations, codes, specifications, and standards
3. Cost of ownership
4. Safety and product liability
5. Reliability
6. Availability
7. Ability to be constructed or manufactured (manufacturability)
8. Shelf life
9. Operability
10. Maintainability
11. Patentability
12. Ergonomics
The reviews should involve representatives from all disciplines, functions, and
users who are or will be connected to the end-item throughout its life cycle and
include outside designers and subject matter experts. (This relates to the concurrent
Formal Reviews
Formal design reviews are planned events, preferably chaired by the project man-
ager or someone else who is not directly involved in designing the product. For
projects aimed at developing and delivering a product, the kinds of reviews com-
monly include:
1. Preliminary design review: The functional design is reviewed to determine
whether the concept and planned implementation fit the basic operational
requirements.
2. Critical design review: Details of the hardware and software design are reviewed
to ensure they conform to the preliminary design specifications.
3. Functional readiness review: For high-volume or mass-produced products, tests
are performed on early-produced items to evaluate the efficacy of the manufac-
turing process.
4. Product readiness review: Manufactured products are compared to specifications
to ensure that the controlling design documentation results in items that meet
requirements.
Formal reviews serve several purposes: minimize risk, identify uncertainties,
assure technical integrity, and assess alternative design and engineering approaches.
Unlike peer reviews, the actual oversight and conduct of formal reviews are handled
by a group of outsiders, although the project team accumulates information for the
reviewers. These outsiders are technical experts or experienced managers who are inti-
mately familiar with the end-item and workings of the project and the project organi-
zation, but are not formally associated with the project organization or its contractors.
Since a formal review may last for several days and involve considerable prepara-
tion and scrutiny of results, the tasks and time necessary to prepare and conduct the
review and to obtain approvals should be incorporated in the project schedule.
A prerequisite for the design review is thorough design documentation, so a
common practice is to convene a “pre-review meeting” at which the design team
gives the review team a brief overview of the design, documentation describing the
design premises, philosophy, assumptions, and calculations, and specifications and
drawings for the proposed design. The review team is then allowed sufficient time
(typically 14 days) to evaluate the design and prepare for the formal review meeting.
Sometimes the review team uses a checklist to ensure that everything important is
covered. In recent years the Internet has become an effective medium for conducting
design reviews and has reduced the cost of the reviews.12
The design review process is significant to quality, no matter how highly compe-
tent the design staff. There is always more than one means to an end, and a designer
can be expected to think of only some of them. Even the most competent people over-
look things. Mature designers appreciate the design review process in terms of the
networking experience, innovative ideas provided by others, knowledge gained, and
reduction of risks, but less mature designers tend to feel insulted or intimidated by
it. It is human nature for people to be less than enthusiastic about others’ ideas and
to resist suggested changes to their own. The design review process seeks to achieve
“appropriate quality” (a balanced compromise agreed upon by the stakeholders)
and refrains from perfecting minor features and faultfinding. Review meetings
are also discussed in Chapter 12.
Audits
Unlike design reviews, which relate only to the design of a product, audits have
broader scope and include a variety of investigations and inquiries. The purpose of
audits is to verify that management processes comply with prescribed processes, pro-
cedures, and specifications regarding, for example, system engineering procedures,
configuration management systems, contractor warehousing and inventory control
systems, and facility safety procedures. They are also performed to verify that techni-
cal processes such as welding adhere to prescribed procedures, and to determine the
status of a project whenever a thorough examination of certain critical aspects of the
work is required. Any senior stakeholder such as a customer, program manager, or
executive can call for an audit. Like formal design reviews, audits are relatively for-
mal and normally involve multifunctional teams. Unlike design reviews where some-
times innovative ideas originate, audits focus strictly on verifying that the work is
being performed as required. They are performed by internal staff or by independent,
external parties who are deemed credible and, ideally, unbiased, fair, and honest.
Preparation for an audit includes agreement between the auditor and stake-
holder requesting the audit as to the audit’s scope and schedule, and the responsibil-
ities of the audit team. The audit team prepares for the audit by compiling checklists
and sometimes attending training sessions to learn about the project. Each auditor is
required to prepare a report within a few days following the investigation detailing
any nonconformities found, rating the importance of the nonconformities, describ-
ing the circumstances under which they were found and the causes (if known or
determinable), and providing suggestions for corrective action. While the focus is
on uncovering nonconformities, commendable activities are sometimes also noted
in the audit report. A typical thorough audit will take 1 to 2 weeks.
Classification of Characteristics
A system (deliverable or end-item) is “specified” or described in terms of a number
of attributes or characteristics, including functional, geometrical, chemical, or physi-
cal properties and processes. Characteristics can be specified or described in terms
of numerical specifications, which often include tolerances of acceptability. In a com-
plex system there are typically a large number of characteristics defined on draw-
ings and other documents. As a result of the Pareto principle (which states that, in
general, the large majority of problems in any situation are caused by a relatively
small number of sources), the most cost-effective approach to quality assurance is to
attend to the system or component characteristics that have the most serious impact
on quality problems or failures. This does not mean to imply that other characteris-
tics should be ignored, but rather that limited resources and activities for inspection
and acceptance testing should be directed first at those items classified as most cru-
cial or problematic.
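A small calculation shows how the Pareto principle is applied in practice. The sketch below is illustrative Python, with problem counts invented so that the cumulative line echoes the percentages shown in Figure 9-3; it ranks problem sources and accumulates their share of the total, exposing the "vital few" that deserve attention first.

```python
# Pareto analysis sketch: rank problem sources and accumulate their share of
# the total. The counts are illustrative, chosen to echo Figure 9-3.
problem_counts = {
    "Assembly procedures": 43,
    "Assembly personnel": 27,
    "Components": 21,
    "Equipment": 5,
    "Design": 2,
    "Other": 2,
}

total = sum(problem_counts.values())
cumulative = 0
for source, count in sorted(problem_counts.items(), key=lambda kv: kv[1], reverse=True):
    cumulative += count
    print(f"{source:<20} {count:>3}  cumulative {100 * cumulative / total:5.1f}%")
# The first two sources account for 70 percent of all problems: the "vital few."
```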
Characteristics are typically classified into four categories: critical, major, minor, and
incidental (or, alternatively, critical, major A, major B, and minor). The critical classifica-
tion is reserved for characteristics where a nonconformance would pose safety risks or
lead to system failure. Quality plans often specify that items with critical characteris-
tics be subjected to 100 percent inspection. The major classification is for characteristics
where nonconformance would cause the loss of a major function of the deliverable.
The minor classification is for characteristics where nonconformance would lead to
small impairment of function or to problems with manufacturability or serviceability.
Characteristics classified as incidental would have minimal effect or relate to relatively
unimportant requirements. The classification assigned to a characteristic is determined
by the designer of the system in collaboration with others such as the designer of the
next higher-level system, designers of interfacing systems, or staff from manufactur-
ing or construction. Together they analyze the design characteristics regarding safety
and other requirements, and classify them using a set of ground rules.
Classification also applies to kinds of nonconformities or defects, but this should
not be confused with the classification of characteristics. In welded structures, for
example, the specified characteristics often include the “absence of any cracks or of
certain amounts and kinds of impurities” in the weld metal. A crack (a nonconform-
ity that could lead to a catastrophic failure) would be classified as “very serious,”
whereas a small amount of an impurity in the weld (a nonconformity that would
have no effect on the integrity of the structure) would be classified as “minor.”
Classification of characteristics serves as a basis for decisions regarding modi-
fications, waivers, and deviations at all levels of a system. For example, classifica-
tion of characteristics in a higher-level system provides guidance to designers of the
lower-level subsystems and components that comprise the system. Classifying the
braking performance of an automobile as critical (e.g., that the automobile when
traveling at 25 miles per hour should be able to stop within 40 feet on dry pave-
ment) tells the braking system designers that components of the brakes should be
classified critical as well. Failure mode and effect analysis (FMEA), discussed below,
sometimes plays an important role in this classification process.
Sometimes the characteristic classifications are listed in a separate document,
although it is more practical to indicate the classifications directly on drawings and
other specifications by means of symbols such as “C” for critical, “Ma” for major,
“Mi” for minor, and so on. Absence of a symbol normally indicates the lowest prior-
ity, although some organizations denote even the lowest classification with a sym-
bol as well. Only a small percentage of characteristics should be classified as critical.
Too large a number of characteristics classified as critical could be the sign of poor
design: if everything is critical, nothing in particular is critical!
Table 9-1 FMEA Table.
Columns of the FMEA worksheet: Item/Function; Potential Failure Mode(s); Potential Effect(s) of Failure; Severity (Sev); Potential Cause(s)/Mechanism(s) of Failure; Occurrence (Occ); Current Design Controls; Detection (Det); Risk Priority Number (RPN); Recommended Action(s); Responsibility and Target Completion Date; Action Results (Action Taken, New Sev, New Occ, New Det, New RPN).
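In an FMEA worksheet of this kind, the risk priority number (RPN) is conventionally the product of the severity, occurrence, and detection ratings, and the "new" columns record the re-rated values after a corrective action. The short Python sketch below illustrates that convention; the item and ratings are invented.

```python
# Illustrative FMEA risk priority number (RPN) calculation.
# Ratings are conventionally on a 1-10 scale; the values here are made up.

def rpn(severity: int, occurrence: int, detection: int) -> int:
    """RPN = severity x occurrence x detection (higher = higher priority)."""
    return severity * occurrence * detection

failure_mode = {
    "item": "Ceiling anchor",
    "failure_mode": "Bolt pulls out of epoxy",
    "severity": 9,      # consequence if the failure occurs
    "occurrence": 4,    # likelihood of the cause
    "detection": 6,     # 10 = almost impossible to detect before failure
}

before = rpn(failure_mode["severity"], failure_mode["occurrence"], failure_mode["detection"])
# After a recommended action (e.g., proof-load testing every anchor),
# occurrence and detection are re-rated and a new RPN computed.
after = rpn(9, 2, 2)
print(f"RPN before action: {before}, after action: {after}")
```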
Table 9-2 Phases of equipment development.
• Concept phase. Model built and tested: exploratory development model (XDM), or "breadboard" models; such models could be built for the entire system or for specific high-risk subsystems. Objective: proof that the concept would be feasible. Risk eliminated: the risk that the concept would not be feasible.
• Validation phase. Model built and tested: advanced development model (ADM). Objective: proof that the product would perform according to specifications and interface well with other systems (form, fit, and function). Risk eliminated: the risk that the performance of the system and its interfaces with other systems would not be acceptable.
• Development phase. Model built and tested: engineering development model (EDM), manufactured from the intended final materials. Objective: proof of reliability, availability, and maintainability. Risk eliminated: the risk of poor operational availability or reliability.
• Ramp-up phase. Model built and tested: pre-production models (PPM). Objective: proof that the product could be manufactured reliably in the production facility and could be deployed effectively. Risk eliminated: the risk of unforeseen problems in manufacturing.
a full-scale model that closely resembles the final product is probably cost-effective;
for a complex system with innumerable components, it usually is not and computer
simulation and mathematical models are normally more effective.
Example 4: Modeling the Form and Fit of Boeing 777 Components14
One of the most pervasive problems in the development of large aircraft is align-
ing vast numbers of parts and components so that during assembly there is no
interference or gaps between them. In the mid-1980s Boeing invested in three-
dimensional CAD/CAM (computer-aided design/computer-aided manufacture)
technology that would enable designers to see components as solid images and
simulate their assembly into subsystems and systems on a computer screen. By
1989 Boeing had concluded that “digital preassembling” of an airplane could sig-
nificantly reduce the time and cost of rework that usually accompanies introducing
a new airplane into the marketplace. In 1990 it launched the Boeing 777 twinjet
program and began involving stakeholders such as customers, design engineers,
tool makers, manufacturing representatives, and suppliers in the concurrent engi-
neering design process (see Example 7, Chapter 4). The physical geometry of the
airplane’s components was determined with CAD/CAM technology instead of with
physical mock-ups that are time-consuming and expensive to build. As a result, the
777 program exceeded its goal of reducing changes and rework by 50 percent.
Check Sheet
A check sheet is a sheet created especially for collecting data about a problem from
observations. The content and format of the sheet are uniquely designed by the team
investigating the problem. Data recorded on the sheet is subjected to analysis using
the other six tools. A check sheet should not be confused with a checklist, the latter
being a list of steps, issues, or pointers based upon prior experience to be considered
(e.g., in planning a project).
Flow Chart
Flowcharts show the steps in a procedure and their relationships. Process flowcharts
show the steps or tasks in a process. Project networks are a form of flowcharts that
show the sequence of activities in a project. For problem analysis, usually more
detailed flowcharts are needed to reveal the steps and tasks within the activities.
An example is the diagram showing material flow in Figure 3-5. Close scrutiny of a
flowchart often can reveal the steps or relationships that cause quality problems.
Run Chart
A run chart is a graph of observed results plotted versus time to reveal potential
trends or anomalies. The plot of schedule performance index versus cost perform-
ance index as illustrated in Figure 11-16 is a form of run chart that tracks project
performance and indicates whether a project is improving or worsening in terms of
schedule and cost.
Cause-and-Effect Diagram
Quality problems and risks are often best addressed through the collective experience
of project team members. Team members meet in brainstorming sessions to generate
ideas about problems or risks. These ideas can be recorded on a cause-and-effect (CE)
diagram (also called a fishbone or Ishikawa diagram), which is a scheme for arrang-
ing the causes for a specified effect in a logical way. Figure 9-4 shows a CE diagram
Figure 9-3
Pareto diagram. Bars show the percent of total problems attributable to each kind of problem (assembly procedures, assembly personnel, components, equipment, design, other), with a cumulative-percentage line rising through 43, 70, 91, 96, and 98 percent to 100 percent.
Figure 9-4
CE (fishbone or Ishikawa) diagram. The effect under study is control system malfunctions; the main branches are quality of design, quality of assembly (with sub-branches such as assembly procedures and assembly personnel), vendor quality assurance, equipment calibration, and quality of components.
to determine why a control system does not function correctly. As the team gener-
ates ideas about causes, each cause is assigned to a specific branch (e.g., “assembly
procedures” on the Quality of Assembly branch). CE diagrams and brainstorming
can be used in two ways: (1) given a specified or potential outcome (effect), to iden-
tify the potential causes and (2) given a cause (or a risk), to identify the outcomes that
might ensue (effects). CE diagrams do not solve problems but are nonetheless useful
for identifying problem sources and planning actions to resolve them.
The seven basic tools are relatively simple. Following are two techniques that
are more sophisticated.
Figure 9-5
Causal loop diagram for control system problem. The diagram links other projects, workload of the designer, attention to assembly procedures, quality of components, vendor QA, quality of assembling, and malfunctioning of the control system prototype, with plus and minus signs showing whether each influence is reinforcing or opposing.
Figure 9-6
Example of a CRT.
arrows indicates that these four entities are sufficient to have caused UDE 500. In
the same way, entities 500, 600, 700, and 800 are sufficient to have caused UDE 900.
The CRT approach requires more effort than simple CE analysis and, hence, would
be applied only to problems that are considered more severe (the first 20 percent of
the causes that lead to 80 percent of the problems). The CRT is one of several tech-
niques and tools described in the literature on theory of constraints, which include
the future reality tree (describes the desired future situation after a problem has been
resolved), the prerequisite tree (a process for overcoming barriers), and the transition
tree (a process for problem solving).20
9.5 SUMMARY
REVIEW QUESTIONS
1. Describe your understanding of “quality.”
2. A Rolls Royce is a high-quality vehicle. Is this always true? Consider different
users and uses.
3. How does compliance to specification differ from satisfying requirements?
4. Is there a difference between satisfying requirements and fitness for purpose?
Explain.
5. Explain the difference between quality and grade.
6. How does the role of the quality manager (a functional manager) regarding
quality planning differ from that of the project manager?
7. For each of the following, indicate whether you would apply for a modification,
a deviation, or a waiver:
(a) The supplier of oil filters to a motor car manufacturer indicates that it plans
to terminate the production of an oil filter that is specified on a car that is
being developed.
Case 9-1 Ceiling Panel Collapse
in the Big Dig Project
(For more about the Big Dig Project—Boston’s Central Artery/Tunnel Project, see Chapter 14, Example 4 and Case 14-3.)
Boston, July 11, 2006—Four concrete panels, each weighing about three tons, fell from the ceiling of a Big Dig tunnel, crushing a woman to death in a car. The accident occurred in a 200-foot section that connects the Massachusetts Turnpike to the Ted Williams Tunnel. Said the Modern Continental Company, the contractor for that section of the project, “We are confident that our work fully complied with the plans and specifications provided by the Central Artery Tunnel Project. In addition, the work was inspected and approved by the project manager.”21
The panels, which were installed in 1999, are held from metal trays secured to the tunnel ceiling with epoxy and bolts. The epoxy–bolt system is a tried-and-true method: holes are drilled into the concrete ceiling, cleaned, and filled with high-strength epoxy; a bolt is screwed into the hole, and as the epoxy cures it develops a secure bond. “That technique is used extensively,” said an engineering professor at the Massachusetts Institute of Technology (MIT).22 For a design like the Big Dig’s ceiling, he said, engineers often add safety “redundancies,” in other words, enough epoxy-and-bolt anchors to hold the ceiling panels even if a few of them failed. But for the connector tunnel, he contends, too few anchors were used. “They didn’t have enough to carry the load. There was no room for error.” He added, however, that the evidence was preliminary and to draw conclusions would be premature.
Some bolts from the ceiling wreckage showed indications of having very little epoxy, and three of them had none. State Attorney General Thomas Reilly’s investigation is focusing on whether the epoxy used in the tunnel failed or if construction workers who installed the bolts misused or omitted the epoxy. An accident caused by improper installation or errors in mixing the epoxy, he said, would implicate the tunnel’s design and designers. (Epoxy often requires on-site mixing before use.) However, he added that some documents reflected a “substantial dispute” among engineers over the anchor system’s adequacy to hold the weight of the ceiling panels.
Seven years before the accident, safety officer John Keaveney wrote a memo to one of his superiors at contractor Modern Continental Construction Co. saying he could not “comprehend how this structure can withhold the test of time.”23 He said his superiors at Modern Continental and representatives from Big Dig project manager Bechtel/Parsons Brinckerhoff (B/PB) assured him that the system had been tested and proven to work. Keaveney told the Boston Globe that he began to worry about the ceiling panels after a third-grade class visited the Big Dig for a tour in 1999. He showed the class some concrete ceiling panels and pointed to the bolts in the ceiling. A girl raised her hand and asked, “Will those things hold up the concrete?” “I said, ‘Yes, it would hold,’ but then I thought about it.”
Some have argued that the investigation should look at the tunnel’s design: Why were the concrete panels so heavy, weighing 2½ to 3 tons apiece? Why were they there at all? And why did the failure of a single steel hanger send 6 to 10 of the panels crashing down? Reports from eyewitnesses indicate the accident began with a loud snap as a steel hanger gave way, which set off a chain reaction that caused other hangers holding up a 40-foot steel bar to fail and send 12 tons of concrete smashing below. Were the 40-foot bars under-designed to handle the weight?
Investigators are also looking at whether the use of the wrong epoxy may have played a role.24 Invoices from 1999 show that at least one case of a quick-drying epoxy was used to secure ceiling bolts rather than the standard epoxy specified by the designers. The epoxy holds 25 percent less weight than standard epoxy and is not recommended for suspending heavy objects.
Additional issues raised during the early stages of the investigation include the following:25
• Design changes that resulted in the use of heavier concrete ceiling panels in the connector tunnel instead of lighter-weight panels as used in the Ted Williams Tunnel.
QUESTIONS
1. With 20-20 hindsight, draw a CE (fishbone, Ishikawa) diagram to illustrate possible causes and effects. Include the possible causes mentioned in the case. The diagram should have been developed before construction, therefore also indicate other possible failure modes and other causes you can think of. How would the diagram (developed after the accident) be of value during litigation?
2. List the characteristics that should have been classified as critical.
3. Propose guidelines for a process to ensure that the epoxy would provide sufficient bonding to the concrete ceiling.
4. Explain the role that configuration management should have played in preventing the accident.
5. What role could modeling/prototyping, laboratory tests, checklists, and training have played?
6. Explain how someone within B/PB would be accountable regardless of the findings of a forensic investigation. Would B/PB be off the hook if a subcontractor were found guilty?
7. What would the implications have been if the engineer who signed off a specific design or construction aspect was an engineer-in-training instead of a registered engineer?
8. Comment on the relationship between project quality management and project risk management. How could risk management have prevented the accident? How does project quality management relate to project cost management?
9. Comment on the contribution that inspection and audits could have made.
ENDNOTES
1. M.C. Carruthers, Principles of Management for Quality Projects (London: International Thompson Press, 1999).
2. Ibid.
3. See E. Yourdon, Rise and Resurrection of the American Programmer (Upper Saddle River, NJ: Yourdon Press/Prentice Hall, 1998): 157–181.
4. P.B. Crosby, Quality Is Free (McGraw-Hill, 1979).
5. James Bach, “The Challenge of ‘Good Enough’ Software,” American Programmer (October 1995).
6. John Nicholas and Avi Soni, The Portal to Lean Production: Principles and Practices for Doing More with Less (Boca Raton, FL: Auerbach, 2006).
7. International Organization for Standardization, ISO 9001, Quality Management Systems—Requirements, 3rd ed. (Geneva, Switzerland, 2000).
8. P.B. Crosby, ibid.
9. A. Kransdorff, “The Role of the Post-project Analysis,” The Learning Organization 3, no. 1 (1996): 11–15.
10. The ISO/CD 10007 standard offers guidelines on configuration management systems: International Organization for Standardization, ISO/CD 10007, Quality Management Systems—Guidelines for Configuration Management (Geneva, Switzerland, November 2001).
11. Mike Gray, Angle of Attack: Harrison Storms and the Race to the Moon (New York: W.W. Norton, 1992): 170–171.
12. E.W. East, J.G. Kirby, and G. Perez, “Improved Design Review through Web Collaboration,” Journal of Management in Engineering (April 2004).
13. Adapted from Brian Muirhead and William Simon, High Velocity Leadership: The Mars Pathfinder Approach to Faster, Better, Cheaper (New York: Harper Business, 1999): 86–89, 178–179.
14. http://www.boeing.com/commercial/777family/pf/pf_computing.html, accessed August 2006.
15. K. Ishikawa, What Is Quality Control? (Englewood Cliffs, NJ: Prentice Hall, 1982).
16. D.R. Bamford and R.W. Greatbanks, “The Use of Quality Management Tools and Techniques: A Study of Application in Everyday Situations,” International Journal of Quality and Reliability Management 22, no. 4 (2005).
17. J.M. Juran and F.M. Gryna, Juran’s Quality Control Handbook, 4th ed. (McGraw-Hill, 1988).
18. D. Sherwood, Seeing the Forest for the Trees—A Manager’s Guide to Applying Systems Thinking (London: Nicholas Brealey Publishing, 2002); J.D. Sterman, Business Dynamics: Systems Thinking and Modeling for a Complex World (McGraw-Hill, 2000).
19. L.J. Scheinkopf, Thinking for a Change—Putting the TOC Thinking Processes to Use (New York: St. Lucie Press, 1999).
20. E.M. Goldratt, What Is This Thing Called Theory of Constraints and How Should It Be Implemented? (New York: North River Press, 1990).
21. Pam Belluck and Katie Zezima, “Accident in Boston’s Big Dig Kills Woman in Car,” New York Times (July 12, 2006).
22. Matt Bradley, “Bolt Failure at Big Dig: An Anomaly?” The Christian Science Monitor (July 21, 2006).
23. Sean Murphy, “Memo Warned of Ceiling Collapse: Safety Officer Feared Deaths in ’99, Now Agonizes Over Tragedy,” Boston Globe (July 26, 2006).
24. Scott Allen and Sean Murphy, “Big Dig Job May Have Used Wrong Epoxy,” Boston Globe (May 3, 2007).
25. Bob Drake, “Investigators Probe Boston Tunnel Design,” CENews.com (September 1, 2006), www.cenews.com/article.asp?id=1108; accessed May 15, 2007.
26. Russ Waters (russ_waters), Physics Forums, www.physicsforums.com/showthread.php?t=126374, July 17, 2006, 9:30 P.M.; accessed May 20, 2007.
Life “looks just a little more mathematical and regular than it is;
its exactitude is obvious, but its inexactitude is hidden;
its wildness lies in wait.”
—G. K. Chesterton1
—Peter Bernstein2
10.1 RISK CONCEPTS
Risk is a function of the uniqueness of a project and the experience of the project
team. When activities are routine or have been performed many times before,
managers can anticipate the range of potential outcomes and manipulate aspects
of the system design and project plan to achieve the desired outcomes. But when
the project is unique or the team is inexperienced, the potential outcomes are more
uncertain, making it difficult to anticipate problems or know how to avoid them.
Even routine projects have risks because outcomes may be influenced by factors that
are new and emerging or beyond anyone’s control.
The notion of project risk involves two concepts:
1. The likelihood that some problematical event will occur.
2. The impact of the event if it does occur.
Risk is a joint function of the two; that is,
Risk = f(likelihood, impact)
Given that risk involves both likelihood and impact, a project will ordinarily be
considered risky whenever the combination of the likelihood and the impact is large.
For example, a project will be considered risky when the potential impact is human
fatality or massive financial loss even when the likelihood is small.
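One common way to make this function concrete is to rate likelihood and impact on simple ordinal scales and take their product. The sketch below is a minimal Python illustration of that choice (the scales, risks, and scores are invented for the example), ranking risks so the largest combined threats surface first.

```python
# Illustrative risk scoring: risk = f(likelihood, impact) instantiated as the
# product of ordinal ratings (1 = very low ... 5 = very high). Values are made up.

risks = [
    {"risk": "Key supplier fails to deliver", "likelihood": 2, "impact": 5},
    {"risk": "New technology does not scale", "likelihood": 3, "impact": 4},
    {"risk": "Minor schedule slip in testing", "likelihood": 4, "impact": 2},
]

for r in risks:
    r["score"] = r["likelihood"] * r["impact"]

# Rank from highest to lowest score so attention goes to the biggest combined threat.
for r in sorted(risks, key=lambda r: r["score"], reverse=True):
    print(f'{r["score"]:>3}  {r["risk"]}')
```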
Managers are accustomed to dealing with facts and figures; especially in techni-
cal projects, they work with hard numbers derived from rigorous procedures. Many
of them find the concept of risk hard to deal with; faced with uncertainty, they pre-
fer to ignore the fact that something might go wrong. Of course, ignoring a potential
problem will not make it go away.
Risk cannot be eliminated from projects, but it can be reduced and plans readied
in case things go wrong; this is the purpose of risk management. The main process
and aspects of risk management are shown in Figure 10-1: identify the risks, assess the risks, plan risk responses, and track and control the risks.
Figure 10-1
Risk management elements and process.
The figure shows four elements: identify risk (sources, methods); assess risk (likelihoods, impacts, consequences, priority); plan risk responses (strategy of accept, transfer, avoid, or reduce; responsibility; contingency); and track and control risks (status, revised assessment, new risks, close out).
Before you can manage something, you must first know about it. Thus, risk manage-
ment begins with identifying the risks and predicting their consequences. If a risk
and its consequences are significant, ways must be found to avoid or reduce the risk
to an acceptable level. What is considered “acceptable” depends on the risk tolerance
of project stakeholders and managers. Often, experienced managers and stakehold-
ers are somewhat more careful (and risk averse) because they understand the risks
and their consequences, whereas less experienced stakeholders tend to be risk-takers
(more risk tolerant) because they don’t know of the risks or are ignorant of the
consequences.
Risk in projects is sometimes referred to as the risk of failure, which implies that
a project might fall short of schedule, budget, or technical performance goals by a
significant margin. The methods to identify risk discussed in this chapter can also
be used to capitalize on opportunities, e.g., projects with high potential for additional
rewards, savings, or benefits; more typically, however, they are used to determine
the risk of failure.
Among the many ways to identify project risks, one is to proceed according to
project chronology—i.e., to look at the phases and stages in the life cycle (such as
project feasibility, contract negotiation, system concept, or definition, design, and
fabrication) and identify the risks in each separately. Each phase presents unique
hurdles and problems that could halt the project immediately or lead to later failure
(although each also might contribute to reducing the project risk as was illustrated in
Chapter 9, Table 9-2). In product development projects, for instance, the risk of fail-
ure tends to be high in the early stages of preliminary design and testing, but dimin-
ishes in later stages. Some risks that could lead to project failure persist throughout
the project, such as the loss of funding or management commitment.
Risk can also be classified according to type of work or technical function, such
as engineering risks associated with product reliability and maintainability, or pro-
duction risks associated with the manufacturability of a product, the availability of
raw materials, or the reliability of production equipment.
Identifying project risks starts early in the conception phase and focuses on dis-
covering the high-risk factors that would make the project difficult to execute or des-
tined to failure. High risks in projects typically stem from:
• Using an unusual approach.
• Attempting to further technology.
• Training for new tasks or applying new skills.
• Developing and testing of new equipment, systems, or procedures.
• Operating in an unpredictable or variable environment.
The areas and sources of high risk must be studied and well understood before
the project can be approved and funds committed to it. Risks identified in the concep-
tion phase are often broadly defined and subjectively assessed, but they might also be
subject to detailed quantitative analysis using methods discussed later. When multi-
ple, competing projects are under consideration, a comparative assessment should be
performed to enable managers to decide which of them—based upon tradeoffs of the
relative risks, benefits, and available funding—should be approved.4 Comparing and
selecting projects based upon criteria such as risk is discussed in Chapter 17.
Sources of Risk
Any uncertain factor that can influence the outcome of a project is a risk source or risk
hazard. Identifying hazards involves learning as much as possible about what things
could affect the project or go wrong, and what the outcome for each would be. The
purpose of such deliberation is to identify all sources of significant risk, including
sources yet unknown. (The most difficult part of risk identification is trying to dis-
cover things you don’t already know—the “unknown unknowns.”)
Risk in projects can be classified as internal risks and external risks.
Internal Risks. Internal risks originate inside the project. Project managers and
stakeholders usually have some measure of control over these. Three main catego-
ries of internal risks are market risk, assumptions risk, and technical risk.
Market risk is the risk of not fulfilling market needs or the requirements of par-
ticular customers. Sources of market risk include:
• Incompletely or inadequately defined market or customer needs and
requirements.
• Failure to identify changing needs and requirements.
• Failure to identify newly introduced products by competitors.
Market risk stems from the contractor or developer misreading the market envi-
ronment. It can be reduced by working closely with the customer; thoroughly and
accurately defining needs and requirements at the start of the project; closely moni-
toring trends and developments among markets, customers, and competitors; and
updating requirements as needed throughout the project.
Assumptions risk is risk associated with the numerous implicit or explicit
assumptions made in feasibility studies and project plans during project conception
and definition. Clearly, the risk of meeting time, cost, and technical requirements
depends on the accuracy and validity of these assumptions.
Technical risk is the risk of not meeting time, cost, or performance requirements
due to technical problems with the end-item or project activities. (Sometimes these
risks are listed in special categories—schedule risks being those that would cause
delays, cost risks those that would lead to overruns, and so on.) Technical risk tends
to be high in projects involving unfamiliar activities or requiring new ways of
integration. It is especially high in projects that involve new and untried technical
applications, but is low in projects that involve mostly familiar activities done in
customary ways.
One approach to expressing technical risk is to rate the risk of the project end-
item or primary process as being high, medium, or low according to the following
features:5
• Maturity: How ready is the end-item or process for production or use? An end-
item or process that is preexisting, installed and operational, or based on experi-
ence and preexisting knowledge has much less risk than one that is in the early
stages of development or is new, cutting edge, or trend-setting.
• Complexity: How many steps, elements, or components are in the product or proc-
ess, and what are their relationships? An end-item or process with numerous,
interrelated steps or components is riskier than one with few steps or components
and few, simple relationships.
External Risks. External risks originate from sources outside the project. Project
managers and stakeholders usually have little or no control over these. External risk
hazards include:
• Market conditions
• Competitors’ actions
• Government regulations
• Interest rates and exchange rates
• Decisions by senior management or the customer regarding project priorities, staffing, or budgets
• Customer needs and behavior
• Supplier relations and business failures
• Physical environment (weather, terrain)
• Labor availability (strikes and walkouts)
• Material or labor resources (shortages)
• External control by customers or subcontractors over project work and resources
Any of these can affect the success of a project, but the question is: To what extent?
A project where success depends heavily on external factors such as market con-
ditions or facilities controlled by the customer or vendor is beset with much more
external risk than one with fewer such dependencies.
Identification Techniques
Project risks are primarily identified from analysis of the numerous documents
prepared during project conception and definition; these include reports from past
projects, lists of user needs and requirements, WBSs, work package definitions, cost
estimates, schedules, and schematics and models of end-items. Among the tech-
niques for pinpointing risks are analogy, checklists, risk matrices, WBS analysis, proc-
ess flowcharts, project network diagrams, brainstorming, and the Delphi technique.
Analogy
The analogy technique involves looking at records, postcompletion summary reports,
and project team members’ notes and recollections from previous, similar projects to
identify risks in new, upcoming projects. The better the documentation (more com-
plete, accurate, and well catalogued) of past projects and the better peoples’ memo-
ries, the more useful these sources for identifying potential problems and hazards
in future, similar projects. Of course, the technique requires more than just informa-
tion about past projects; it also requires a history of past projects that are similar, in
significant ways, to the project for which risks are being assessed. Knowledge man-
agement methods, described in Chapter 16, promote learning from past projects and
anticipating risks in new ones.
Checklist
Documentation from prior projects is also used to create risk checklists—lists of
factors in projects that affect the risks. The checklist is created by managers and
Table 10-1 Risk checklist example.
updated with experience. Risk checklists can pertain to the project as a whole or to
specific phases, work packages, or tasks within the project. They might also spec-
ify the levels of risk associated with risk sources based upon personal judgment or
assessments of past projects.
To illustrate, the checklist in Table 10-1 shows the risk levels associated with three
categories of risk sources: (1) status of implementation plan, (2) number of module
interfaces, and (3) percentage of components that require testing. Suppose, for example,
that an upcoming project will use a standard completed plan, have eight modules, and
test 15 percent of the system components. According to the checklist, the project will
be rated as low, low, and medium, respectively, for the three risk sources.
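Applying such a checklist is mechanical once the thresholds are fixed. Because the body of Table 10-1 is not reproduced here, the cut-off values in the Python sketch below are placeholders chosen only so that the example project in the text (standard completed plan, eight module interfaces, 15 percent of components tested) comes out low, low, and medium.

```python
# Hypothetical checklist lookup; the actual thresholds of Table 10-1 are not
# reproduced in the text, so the cut-off values below are placeholders only.

def rate_plan_status(status: str) -> str:
    return {"standard completed plan": "low",
            "draft plan": "medium",
            "no plan": "high"}.get(status, "high")

def rate_module_interfaces(count: int) -> str:
    if count <= 10:      # placeholder threshold
        return "low"
    if count <= 25:      # placeholder threshold
        return "medium"
    return "high"

def rate_testing_fraction(percent_tested: float) -> str:
    if percent_tested >= 50:   # placeholder threshold
        return "low"
    if percent_tested >= 10:   # placeholder threshold
        return "medium"
    return "high"

# The example project from the text: standard completed plan, eight module
# interfaces, 15 percent of components tested -> low, low, medium.
print(rate_plan_status("standard completed plan"),
      rate_module_interfaces(8),
      rate_testing_fraction(15))
```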
The more experience a company or manager gains with projects, the more they
learn about the risks and the more comprehensive and valid the checklists become.
As experience grows with completed projects, the checklists are expanded and
updated. A good checklist is never considered complete: its composers acknowledge
that not all the risks are known and that more will be discovered with time. While a
checklist cannot guarantee that all significant risks in a project will be identified, it
does help ensure that the important sources won’t be overlooked.
A variation of the checklist is the risk matrix, a table wherein the columns are the
project phases, and the rows, the sources of risks. The cells of the matrix indicate the
presence, absence, or severity of a specified risk for a phase or stage of the project.
The disadvantage of risk checklists is that people might consider only the risks
listed and overlook those not on the list. Checklists should therefore be supple-
mented by other methods, as described next.
Process Flowchart
Project risks can also be identified from process flowcharts. A flowchart illustrates the
steps, procedures, and flows between tasks and activities in a process. Examination
of a flowchart can pinpoint potential trouble spots and areas of risk.
Cause-and-Effect Diagram
Risks can be identified from the collective experience of project team members who
meet in a brainstorming session to share opinions and generate ideas about possible
problems or hazards in the project, and then recorded on a cause-and-effect (CE) dia-
gram as shown in Figure 10-2. Brainstorming and CE diagrams are used in two ways:
(1) given an identified, potential outcome (effect), to identify the potential causes (hazards); (2) given a risk hazard (cause), to identify the outcomes that might ensue (effects). Figure 10-2 illustrates the first use: for the outcome “completion delay,” it shows the potential hazards that could lead to delay.

Figure 10-2 CE diagram.
The diagram in Figure 10-2 is divided into the generic risk categories (hazards)
of problems with software, hardware, and so on. (Other possible categories include
poor time and cost estimates, design errors or omissions, changes in requirements,
and unavailability or inadequacy of resources.) Each generic risk hazard might
be broken down into more fundamental sources of risk (i.e., factors leading to the
hazards). In Figure 10-2, e.g., the hazard “staff shortage” is shown as caused by the
inability to hire and train additional workers. Analysis techniques related to CE are
further discussed in Chapter 9.
Delphi Technique
According to Greek mythology the Oracle at Delphi was consulted to gauge the
risk of waging a war. In modern times, the term Delphi refers to a group survey
technique for combining the opinions of several people to develop a collective judg-
ment. The technique, developed by the Rand Corporation in 1950, comprises a series
of structured questions and feedback reports. Each respondent is given a series of
questions (e.g., what are the five most significant risks in this project?), to which he
writes his opinions and reasons. The opinions of everyone surveyed are summa-
rized in a report and returned to the respondents, who then have the opportunity to
modify their opinions. Because the written responses are kept anonymous, no one
feels pressured to conform to anyone else’s opinion. If people change their opinions,
they must explain the reasons why; if they don’t, they must also explain why. The
process continues until the group reaches a collective opinion. Studies have proven
the technique to be an effective way of reaching consensus.6
Risks are ubiquitous, but it is only the notable or significant ones that require atten-
tion. What is considered significant depends on the risk likelihood, the risk impact,
and the risk consequence.
Table 10-2 Qualitative ratings and associated likelihood values.

Qualitative Rating    Numerical Likelihood
Low                   0–0.20
Medium                0.21–0.50
High                  0.51–1.00
Risk Likelihood
Risk likelihood is the probability that a hazard or risk factor will actually material-
ize.7 It can be expressed as a numerical value between 1.0 (certain to happen) and 0
(impossible) or as a qualitative rating such as high, medium, or low. Numerical val-
ues and qualitative ratings are sometimes used interchangeably. Table 10-2 shows an
example of qualitative ratings and the associated percent values for each. When, for
example, someone says, “the likelihood of this or that risk is low,” the probability of
its happening according to the table is 20 percent or less. Alternatively, if someone
feels that the probability of risk is between 20 and 50 percent, then that is equivalent
to a “medium” risk.
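To make the mapping in Table 10-2 concrete, the short Python sketch below converts a numerical likelihood into the corresponding qualitative rating. The cutoff values are the illustrative ones from Table 10-2 and, as discussed next, would be tailored to each project.

def qualitative_rating(likelihood: float) -> str:
    # Map a numerical risk likelihood (0.0-1.0) to the qualitative rating
    # of Table 10-2; the cutoffs are illustrative, not universal.
    if not 0.0 <= likelihood <= 1.0:
        raise ValueError("likelihood must be between 0.0 and 1.0")
    if likelihood <= 0.20:
        return "low"
    if likelihood <= 0.50:
        return "medium"
    return "high"

print(qualitative_rating(0.35))  # prints "medium"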
But Table 10-2 is only an illustration. The association between qualitative ratings
and particular values is subjective and depends on the experience of the project team
and the risk tolerance of stakeholders. For example, Table 10-2 might be for a project
with high economic stakes and, therefore, a risk likelihood greater than 50 percent
equates to “high risk.” In another project (one with low economic stakes), “high risk”
might equate to a risk likelihood of 75 percent or higher. Often, people have diffi-
culty agreeing on the appropriate qualitative rating for a given likelihood value and
vice versa, even when they have common information or experience; this is described
later in Example 2.
Table 10-3 is a checklist for five potential sources of failure in computer systems
projects and associated numerical likelihood.8 It is interpreted as follows: looking just
at the MS column, the likelihood of failure is low for existing software, but high when
the software is state-of-the-art. Again, the likelihood values are illustrative and would
be tailored to each project depending on the prior project experience and opinion of
stakeholders. The values or ratings assigned to risk factors should be based upon col-
lective judgment, including as much knowledge and experience as possible. A like-
lihood estimate based upon the opinions of several individuals (assuming all have
relevant experience) is usually more valid than one based on only a few.
When a project has multiple, independent risk sources (as is common) they can
be combined and expressed as a single composite likelihood factor, or CLF. For example,
using the sources listed in Table 10-3 the CLF can be computed as a weighted average:

CLF = W1(MH) + W2(CH) + W3(MS) + W4(CS) + W5(D)    (1)

where W1, W2, W3, W4, and W5 each have values 0 through 1.0 and together total 1.0.
Table 10-3 Sources of failure and likelihood. Adapted from W. Roetzheim, Structured Computer Project Management (Upper Saddle River, NJ: Prentice Hall, 1988): 23–26.
Example 1: Computation of CLF
Suppose that in the ROSEBUD project there is uncertainty about, among other things, how well the system can be integrated into another, larger system. Therefore, from Table 10-3, MH = 0.1, CH = 0.3, MS = 0.5, CS = 0.3, and D = 0.5. Assuming all sources are weighted equally at 0.2, then

CLF = 0.2(0.1) + 0.2(0.3) + 0.2(0.5) + 0.2(0.3) + 0.2(0.5) = 0.34
Note that the computation in equation (1) assumes that the risk sources are inde-
pendent. If they are not (if, e.g., failure due to software complexity depends on the failure due to hardware complexity), then the individual likelihoods for each cannot
be summed. In such a situation, the sources would be subjectively combined into
one source (“failure due to a combination of software and hardware complexity”),
and a single likelihood value assigned.
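The weighted-average computation of equation (1) is easy to automate. The following sketch reproduces Example 1; the likelihoods and the equal weights of 0.2 are the example’s illustrative values, not fixed definitions.

# Composite likelihood factor (CLF) per equation (1):
# a weighted average of the likelihoods of the individual risk sources.
likelihoods = {"MH": 0.1, "CH": 0.3, "MS": 0.5, "CS": 0.3, "D": 0.5}
weights = {"MH": 0.2, "CH": 0.2, "MS": 0.2, "CS": 0.2, "D": 0.2}

assert abs(sum(weights.values()) - 1.0) < 1e-9, "weights must total 1.0"

clf = sum(weights[s] * likelihoods[s] for s in likelihoods)
print(f"CLF = {clf:.2f}")  # CLF = 0.34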
One way to show the interdependency of risk factors is with an influence dia-
gram (a variation of the causal-loop diagram described in Chapter 9). An example
is Figure 10-3.9 To construct the diagram, start with a list of previously identified
risks and space them apart as shown in Figure 10-3. Then look at each risk and ask
whether it is influenced by, or has influence on, any of the other risks. If so, draw
lines between the related risks with arrows to indicate the direction of influence
(e.g., S.1 influences S.2 and I.2). To minimize confusion, keep the number of risks on
the diagram to 15 or fewer.
Figure 10-3 Influence diagram.
Risks with the most connections are the most important. In Figure 10-3, risks I.2,
S.1, and S.2 are each influenced by other risks, which would increase their failure
likelihood.
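An influence diagram can also be captured as a short list of directed links, with risks then ranked by how many other risks influence them. In the sketch below, the links S.1 to S.2 and S.1 to I.2 come from the text; the remaining links are purely illustrative assumptions.

from collections import Counter

# Directed influence links: (influencing risk, influenced risk).
influences = [("S.1", "S.2"), ("S.1", "I.2"),
              ("T.3", "S.2"), ("F.1", "I.2")]  # the last two links are assumed

# Risks influenced by many other risks have a raised failure likelihood.
incoming = Counter(target for _, target in influences)
for risk, count in incoming.most_common():
    print(f"{risk}: influenced by {count} other risk(s)")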
Risk likelihood is also affected by the future: ceteris paribus, activities planned
further in the future are more risky (have greater likelihood of failure) than those
closer at hand.10 That is because activities planned for farther in the future have
greater chances of being influenced by unknowns. After the project has been exe-
cuted and activities progressively completed, the likelihood of failure diminishes.
As project completion approaches, the likelihood of failure becomes very small. That
doesn’t mean, however, that the risks disappear. There is a tradeoff: although like-
lihood of failure diminishes as the project progresses, the stake in the project—the
amount of human and financial capital sunk into it—increases, which means that
the loss suffered from a failure increases. Risk remains an important matter through-
out the project.
Risk Impact
What would happen if a risk hazard materialized? The result would be a risk impact.
A poorly marked highway intersection is a risk hazard; the risk impact posed is that
of collision resulting in injury or death. Risk impact in projects is specified in terms of
time, cost, and performance. For example, the impact of an insufficient number
of skilled laborers is an extended schedule or project end-item not meeting user
requirements.
Risk impact can be expressed as a qualitative rating such as high, medium, or
low, based upon the judgment of managers and experts about the magnitude of the
impact. For example, a risk leading to a schedule delay of 1 month or less might
be considered “medium impact,” whereas a delay of 3 months or more might be
deemed “high impact.”
Table 10-4 Impact values for different technical, cost, and time situations. Columns: Impact Value, Technical Impact (TI), Cost Impact (CI), Schedule Impact (SI). Adapted from W. Roetzheim, Structured Computer Project Management (Upper Saddle River, NJ: Prentice Hall, 1988): 23–26.
Risk impact can also be expressed as a numerical measure between 0 and 1.0,
where 0 is “not serious” and 1.0 is “catastrophic.” Again, the rating is subjective
and depends upon judgment. Table 10-4 is an example of project technical, cost, and
schedule impacts and suggested qualitative and numerical ratings associated with
them.11 The table represents judgments about the impacts associated with various
technical, cost, and schedule situations.
The value assigned to the risk impact is largely subjective—even when
derived from empirical data and numerical analysis—and, hence, is sometimes
problematical.
Example 2: Estimating Risk Likelihood and Risk Impact in New Technologies
Risk assessment in new technologies is, well, difficult. The risk of a serious
problem can stem from a chain of events (e.g., a machine malfunctions, a sen-
sor fails to detect it, an operator takes the wrong action), and to determine
the probability of the risk requires identifying all the events in the chain, esti-
mating the probability of each, and then multiplying the probabilities together.
Managers and designers can try to think of every event, but they can never be
sure that they haven’t missed some. When a project involves new technologies,
the estimates are largely guesses. In 1974, MIT released a report stating that
the likelihood of a reactor core meltdown is 1 every 17,000 years. According
to the report, a meltdown in a particular plant would not occur until after many
hundreds of years of operation, yet less than 5 years later a reactor at Three
Mile Island suffered a partial core meltdown and released radioactivity into the
atmosphere.12
The space shuttle is another case: NASA originally put the risk of a cata-
strophic accident at 1 in 100,000, but after the Challenger disaster revised it to
1 in 200 or 250 (the National Academy of Sciences put the risk at 1 in 145). With
the additional loss of Columbia (the second loss in 113 missions) the actual
known risk became 1 in 56. The shuttles originally were design-rated for 100
missions, yet Columbia broke up during its 26th.13 Few data points (five opera-
tional shuttles and 113 missions over 20 years) in combination with incredible
complexity make it impossible to accurately predict the risks for the shuttle sys-
tem, yet for many projects the data available for estimating probabilities is even
sparser.
Just as the likelihoods for multiple risks can be combined, so can the impacts
from multiple risk sources. A composite impact factor (CIF) can be computed using a
simple weighted average:

CIF = W1(TI) + W2(CI) + W3(SI)    (2)
where W1, W2, and W3 are valued 0 through 1.0, and together sum to 1.0. CIF will
have values between 0.0 and 1.0, where 0 means “no impact” and 1.0 means “the
most severe impact.”
Example 3: Computation of CIF
Failure in the ROSEBUD project to meet certain technical goals is expected
to have minimal impact on technical performance and be corrected within 2
months at an additional cost of 20 percent. Therefore, from Table 10-4,
TI = 0.1, CI = 0.5, SI = 0.5. With weights of, say, W1 = 0.7 for technical impact and W2 = W3 = 0.15 for cost and schedule impact,

CIF = 0.7(0.1) + 0.15(0.5) + 0.15(0.5) = 0.22
Equation (2) assumes that risk impacts are independent. If they are not, equa-
tion (2) does not apply and the impacts must be treated jointly as, e.g., “the impact
of both a 20 percent increase in cost and a 3-month schedule slip is rated as 0.6.”
Another way to express risk impact is in terms of what it would take to recover
from, or compensate for, resulting damages or undesirable outcomes. For exam-
ple, suppose that using a new, innovative technology poses a risk that the perform-
ance requirements will not be met. The plan is to apply the technology, but if early
tests reveal poor performance the technology will be abandoned and an alternative,
proven approach used instead. The risk impact is the cost of switching technologies
in terms of schedule delay and additional cost, e.g., by 4 months and for $300,000.
Risk impact should be assessed for the entire project and articulated with the
assumption that no response or preventive measures are taken. In the above instance,
$300,000 is the anticipated expense under the assumption that nothing special will be
done to avoid or prevent the failure of the new technology. This assessed impact will
be used as a measure to evaluate the effectiveness of possible ways to reduce or pre-
vent risk hazards.15 This is discussed later in the section on risk analysis methods.
Risk Consequence
Early in the chapter the notion of risk was defined as being a function of risk likeli-
hood and risk impact; the combined consideration of both is referred to as the risk
consequence or risk exposure.
There are two ways to express risk consequence. One way sometimes suggested
is to express it as a simple numerical rating with a value ranging between 0 and 1.0.
In that case, the risk consequence rating, RCR, is
RCR = CLF + CIF - CLF(CIF)    (3)
where CLF and CIF are as previously defined in equations (1) and (2). The risk con-
sequence derived from this equation measures how serious the risk is to project
performance. Small values represent unimportant risks that might be ignored; large
values represent important risks worth a serious look. In the previous examples, for
the ROSEBUD project the assessed CLF was 0.34 (Example 1) and the CIF was 0.22
(Example 3). Thus, the risk consequence rating is

RCR = 0.34 + 0.22 - (0.34)(0.22) ≈ 0.48
The RCR value is interpreted subjectively. In general, a value over 0.7 usu-
ally would be considered high risk, whereas a value under 0.2 would be low risk.
A value of 0.48 might be considered moderate-level risk and important enough to
merit attention. Management would thus take measures to reduce the risk or pre-
pare contingency plans, as discussed later.
The approach using formula (3), however, is problematical. The factors CLF and
CIF are fundamentally different entities, one representing a probability, the other
the magnitude of an impact, and like meters and grams they don’t add up; that is,
CLF + CIF in the formula doesn’t make sense. Neither does the product (CLF)(CIF),
which is supposed to represent the “intersection” of the sets CLF and CIF, although
in reality the intersection would be empty since CLF and CIF represent different
entities incapable of intersecting.16
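Despite this conceptual objection, formula (3) is often used as a quick screening index. A minimal sketch, using the CLF and CIF values assessed for the ROSEBUD project in Examples 1 and 3:

def risk_consequence_rating(clf: float, cif: float) -> float:
    # Equation (3): RCR = CLF + CIF - CLF*CIF
    return clf + cif - clf * cif

rcr = risk_consequence_rating(0.34, 0.22)
print(f"RCR = {rcr:.4f}")  # RCR = 0.4852, roughly the 0.48 cited in the text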
A second (more common) way to express risk consequence is as an expected
value. Expected value can be interpreted as the average outcome if an event is
repeated a large number of times (e.g., 100). Computed this way,

risk consequence = (risk likelihood) × (risk impact)
Figure 10-4 Risk consequence as a function of likelihood and impact, showing regions of medium and high consequence.
PERT
The PERT and Monte-Carlo simulation methods discussed in Chapter 7 are both
used to account for risk in project scheduling and to inform managers about the pos-
sible need to compensate for risks in meeting project deadlines.
The PERT method incorporates risk into project schedules by using three esti-
mates for each project activity: a, m, and b (optimistic, most likely, and pessimis-
tic times, respectively). Greater risk in an activity is reflected by a greater spread
between a and b, and especially between m and b. For an activity with no perceived
risk, a, m, and b would be identical; if, however, hazards are identified, the way to
account for them is to raise the values of b and m. In general, the greater the per-
ceived consequence of risk hazards, the further apart are b and m.
With PERT, recall, it is the average time, not m, that is the basis for scheduled
times, where the average time is the mean of the Beta distribution:
te = (a + 4m + b) / 6
Thus, for a particular activity with given optimistic and most-likely values (a
and m), using a larger value of b to account for greater risk will result in a larger
value of te; this logically allows more time to complete the activity and compen-
sate for things that might go wrong. In addition, however, the larger value of b also
results in a larger time variance for the activity because
V = [(b - a) / 6]²
This larger V will result in a larger variance for the project completion time,
which would spur the cautious project manager to add a time buffer or schedule
reserve to the project schedule.
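A brief sketch of the PERT arithmetic just described. The three time estimates are made-up figures for a single activity; in practice they would come from the project team.

def pert_estimates(a: float, m: float, b: float) -> tuple:
    # Expected time te = (a + 4m + b)/6 and variance V = ((b - a)/6)**2
    te = (a + 4 * m + b) / 6
    variance = ((b - a) / 6) ** 2
    return te, variance

# Illustrative activity: optimistic 4, most likely 6, pessimistic 14 weeks.
te, v = pert_estimates(4, 6, 14)
print(f"te = {te:.1f} weeks, variance = {v:.2f}")  # te = 7.0 weeks, variance = 2.78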
Risk Priority
Projects are subject to numerous risks, yet only a relatively few are important enough
to merit attention. Once the risk consequences for a project have been computed, the
risks are listed on a risk log or risk register, and those with moderate-to-high conse-
quences are given a second look. Project team members, managers, subcontractors,
and customers review them and plan the appropriate risk responses. To better assess
the risks, sometimes activities or work packages must be broken down further. For
example, before the full consequences and magnitude of the risk of a staffing shortage
can be comprehended, the risk must clearly be defined in terms of the specific skill
areas affected and the amount of the shortage.
To decide which risks to focus on, management might specify a level of expected
value risk consequence, and address only risks at that level or higher. For example, if
risks with expected consequences of a 2-day or longer delay deserve attention, then
in Table 10-5 only risks S1, F1, T1, T3, I1, I2, and I3 would be addressed.
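The screening rule just described is a simple filter over expected-value consequences (likelihood × impact). The sketch below uses the likelihoods and impacts from Table 10-5 and the illustrative 2-day threshold; it reports the same seven risks named above.

# (risk id, likelihood, impact in days late), taken from Table 10-5.
risks = [
    ("S1", 0.20, 10), ("S2", 0.30, 5), ("H1", 0.05, 5), ("H2", 0.05, 10),
    ("F1", 0.05, 40), ("F2", 0.05, 15), ("F3", 0.30, 5), ("T1", 0.10, 20),
    ("T2", 0.15, 10), ("T3", 0.10, 30), ("I1", 0.05, 60), ("I2", 0.20, 10),
    ("I3", 0.20, 20),
]

THRESHOLD = 2.0  # address only risks with an expected delay of 2 days or more

for risk_id, likelihood, impact in risks:
    consequence = likelihood * impact  # expected value of the delay
    if consequence >= THRESHOLD:
        print(f"{risk_id}: expected delay {consequence:.2f} days")
# Prints S1, F1, T1, T3, I1, I2, and I3.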
One drawback with specifying risk priority using expected value is that very
low likelihood risks are sometimes ignored even when they have potentially severe,
even catastrophic, impact. Suppose, e.g., that the impact of a project failure is 1,000
fatalities. If the risk likelihood is infinitesimal, then the expected value consequence
(tiny likelihood of many fatalities) will be very small and, hence, the risk relegated a
low priority.17
In a complex system with a large number of relationships where joint failures in
several would lead to system failure, it is common to ignore joint failures in the hope
they will not occur or will be of only insignificant consequence. Usually the likeli-
hood of joint failure is very low. Very low, however, is not the same as impossible.
Table 10-5 Risk likelihood, risk impact, and expected value consequence.

Risk                                                           Likelihood (%)   Impact: Days Late   Consequence
Software
  S1 Software design does not meet initial requirements              20                10                2
  S2 Change in user requirements                                     30                 5                1.5
Hardware
  H1 Hardware shipment delay                                          5                 5                0.25
  H2 Hardware design incompatible with software requirements          5                10                0.5
Funds
  F1 Hardware supplier goes bankrupt                                  5                40                2
  F2 Insufficient funds to pay staff and suppliers                    5                15                0.75
  F3 Revenue from current projects delayed                           30                 5                1.5
Staff
  T1 Staff shortage                                                  10                20                2
  T2 Inability to hire/train additional staff                        15                10                1.5
  T3 Insufficient technical skills                                   10                30                3
Installation
  I1 Hardware incompatible with existing user systems                 5                60                3
  I2 Inadequate customer site preparation                            20                10                2
  I3 Difficulty in teaching user new procedures                      20                20                4
Risk response planning addresses the matter of how to deal with risk. In general, the
ways of dealing with an identified risk are to transfer the risk, alter plans or proce-
dures to avoid or reduce the risk, prepare contingency plans, or accept the risk.
Insurance
The customer or contractor might purchase insurance as protection against a wide
range of risks, including risks associated with
• Property damage or personal injury suffered as a consequence of the project.
• Damage to materials while in transit or in storage.
• Breakdown or damage of equipment.
• Theft of equipment and materials.
• Sickness or injury of workers, managers, and staff.
• Forward cover: insure against exchange rate fluctuations.
Contract Type
A popular way to transfer or allocate risk is through the use of an appropriate con-
tract type, as discussed in the Appendix to Chapter 3. When the statement of work is
clear and little uncertainty foreseen, the contractor should be willing to quote a fixed
price. An example would be the building of a wall according to a well-defined draw-
ing and specifications, in which case the contractor perceives little risk and is willing
to accept it. However when the scope of the work is unclear and changes foreseen, it
is less likely a contractor will commit to a fixed price and accept the risk of an over-
run. In such cases a cost-plus contract would be more appropriate since the contrac-
tor is covered for all expenses incurred in the performance of the work.
Whereas in fixed-price contracts the contractor assumes most of the risk for cost
overruns, in fixed-price with incentive fee contracts the contractor accepts roughly
60 percent of the risk, and the customer 40 percent; in cost-plus incentive fee
contracts the contractor assumes roughly 40 percent, the customer 60 percent. With a
cost-plus fixed fee (CPFF) contract the customer assumes all or most of the risk of an
overrun because the contractor has no incentive to contain the costs.
In large projects, a variety of contracts are used depending on the risk associated
with individual work packages or deliverables. In the Chunnel, the most uncertain
part of the project was tunneling under the English Channel, so the tunneling work
was contracted on a CPFF basis. The electrical and mechanical works for the tunnels
and terminals, perceived as low risk, were done on a lump-sum basis. Procurement
of the rolling stock, perceived as slightly riskier, used a cost-plus-percentage-fee
contract.20
Not all risks can be transferred from one party or another. Even with a fixed-
price contract where ostensibly the contractor takes on the risk of overruns, the cus-
tomer will nonetheless incur damages and hardship should the project overrun the
target schedule or the contractor declare bankruptcy. The project still must be com-
pleted and someone has to pay for it. To avoid losses, a contractor might feel pres-
sured to cut corners, which of course increases the customer’s risk of receiving
an end-item of sub-par quality. To lessen such risks, the customer will stipulate in
the contract rigid quality inspections and penalties.
Subcontract Work
Risk often arises from uncertainty about how to approach a problem or situation.
One way to avoid such risk is to contract with a party who is experienced and
knows how to do it. For example, to minimize the financial risk associated with the
capital cost of tooling and equipment for production of a large, complex system, a
manufacturer might subcontract the production of the system’s major components
to suppliers familiar with those components. This relieves the manufacturer of the
financial risk associated with the tooling and equipment to produce these compo-
nents. But, as mentioned, transfer of one kind of risk often means inheriting another
kind. For example, subcontracting work for the components puts the manufacturer
in the position of relying on outsiders, which increases the risks associated with
quality control, scheduling, and the performance of the end-item system. But these
risks often can be reduced through careful management of the suppliers. If the man-
ufacturer feels capable of handling those management risks, it will happily accept
them to forego the financial risks.
Risk Responsibility
The individuals or groups responsible for all risks in a project should be specified.
Risks may be transferred, but they can never be simply “offloaded.” For instance,
when an item is procured and shipped from abroad, the risk of damage usually
remains with the seller as long as the item is onboard the ship; as soon as the item is
hoisted over the rail of the ship the risk is transferred to the buyer.
A party willing to accept responsibility for high risk in a project will usually
counter by demanding a high level of authority over the situation. For example, a
customer agreeing to accept the risk of poor quality or cost overrun will almost cer-
tainly require a large measure of management control over aspects of the project that
influence quality and cost. Furthermore, a party willing to bear high risk will usu-
ally insist on compensation to cover the risks. The CPFF contract illustrates: the con-
tractor’s risk is covered by compensation for all expenses, but the customer’s risk is covered by his management oversight of the contractor to prevent abuses.
Reduce Risk
Among the ways to reduce the technical risk (its likelihood, impact, or both)
are to:21
• Employ the best technical team.
• Base decisions on models and simulations of key technical parameters.
• Use mature, computer-aided system engineering tools.
• Use parallel development on high-risk tasks.
• Provide the technical team with adequate incentives for success.
• Hire outside specialists for critical review and assessment of work.
• Perform extensive tests and evaluations.
• Perform a risky task earlier in the project to allow time to reduce the impact of
the risk.
• Minimize system complexity.
• Use design margins.
The last two points deserve further explanation. In general, system risk and
unpredictability increase with system complexity: the more elements in a system and
the greater their interconnectedness, the more likely that something—an element or
interconnection—will go wrong. Thus, minimizing complexity through reorganiz-
ing and modifying elements in end-item design and the project tasks can reduce the
project risk. For example, by decoupling activities and subsystems, i.e., making them
independent of one another, the failure of any one activity or subsystem will be con-
tained and not spread to others.
Incorporating design margins into design goals is another way to reduce risk asso-
ciated with meeting technical requirements.22 A design margin is a quantified value
that serves as a safety buffer to be held in reserve and allocated by management.
In general, a design margin is incorporated into a requirement by setting the target
design value stiffer or more rigorous than the design requirement. In particular,

target design value = design requirement - design margin

(for a not-to-exceed requirement such as weight; for a minimum-level requirement, the margin is added instead). By aiming for the target value, a designer can miss the target by as much as the
margin amount and still satisfy the requirement. Striving to meet target values that
are stiffer than the requirements reduces the risk of not meeting the requirement.
Example 5: Design Margin Application for the Spaceship
Suppose the weight requirement for the spaceship navigation system is 90
pounds. To allow for the difficulty of reaching the requirement (and the risk of
not meeting it), the design margin is set at 10 percent, or 9 pounds. Thus, the
target weight for the navigation system becomes 81 pounds.
A design margin is also applied to each subsystem or component within the
system. If the navigation system is entirely composed of three major Subsystems,
A, B, and C, then the three together must weigh 81 pounds. Suppose C is an
OTS item with a weight of 1 pound that is fixed and cannot be reduced. But A
and B are being newly developed, and the design goals for them have been set
at 50 pounds for A and 30 pounds for B. Suppose a 12 percent design margin is
imposed on both subsystems; in that case, the target weights for A and B are 50
(1.0 0.12) 44 pounds, and 30 (1.0 0.12) 26.4 pounds, respectively.
Design margins provide managers and engineers a way to flexibly meet
problems in an evolving design. Should the target value for one subsystem
prove impossible to meet, then portions of the margin values from other sub-
systems or the overall system can be reallocated to the subsystem. Suppose
Subsystem B cannot possibly be designed to meet the 26.4-pound target, but
Subsystem A can be designed to meet its target value, then the target value for
B can initially be increased by as much as 3.6 pounds (its margin value) to 30
pounds; if that value also proves impossible to meet, the target can be increased
by another 6 pounds (the margin value originally allocated to Subsystem A) to
36 pounds; if even that value cannot be met, the target can be increased again
by as much as another 9 pounds (the margin value for the entire system) up to
45 pounds. Even with these incremental additions to B’s initial target value, the
overall system would still be able to meet the 90-pound weight requirement.
Design margins not only help reduce the risk in meeting requirements, they
encourage designers to exceed requirements—in the example to design a system
that weighs less than required. Of course, the design margins must be set carefully
so as to reduce the design risk yet not increase design cost.
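The margin arithmetic of Example 5 amounts to scaling each requirement down by its margin fraction. A small sketch, using the example’s illustrative weights and margins:

def target_value(requirement: float, margin_fraction: float) -> float:
    # Target design value for a not-to-exceed requirement (e.g., weight):
    # the target sits below the requirement by the margin amount.
    return requirement * (1.0 - margin_fraction)

system_target = target_value(90, 0.10)  # 81.0 lb for the navigation system
target_a = target_value(50, 0.12)       # 44.0 lb for Subsystem A
target_b = target_value(30, 0.12)       # 26.4 lb for Subsystem B
print(system_target, target_a, target_b)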
Design margins focus on technical requirements. Among ways to reduce risks
associated with meeting schedules are:23
• Create a master project schedule and strive to adhere to it.
• Schedule the riskiest tasks as early as possible to allow time for failure recovery.
• Maintain close focus on critical and near-critical activities.
• Put the best workers on time-critical tasks.
• Provide incentives for overtime work.
• Shift high-risk activities in the project network from series to parallel.
• Organize the project early and be careful to adequately staff it.
• Provide project and feeding buffers (contingency reserves), as discussed in
Chapter 7.
To reduce the risk associated with meeting project cost targets:24
• Identify and monitor the key cost drivers.
• Use low-cost design alternative reviews and assessments.
• Verify system design and performance through modeling and assessment.
• Maximize usage of proven technology and commercial off-the-shelf equipment.
• Provide contingency reserves in project budgets.
• Perform early breadboarding, prototyping, and testing on risky components and
modules.
Example 6: Managing Schedule and Cost Risk at the Vancouver Airport Expansion Project27
The expansion project at Vancouver International Airport involved constructing
a new international terminal building (ITB) and a parallel runway. The schedule
for the $355 million project called for full operation of the ITB less than 3.5 years
after the project was approved, and opening of the new runway only 5 months
after that. The project team identified the following as major risk areas in meet-
ing the tight budget and schedule constraints:
1. Risk in Structural Steel Delivery and Erection. Steel is the most critical
aspect of big construction projects in Canada. Long procurement lead times
from steel mills and difficulties in scheduling design, fabrication, and erec-
tion make big-steel projects problematic. Recognizing this, the project team
awarded the structural steel contract very early in the project so there would
be ample time to design, procure, fabricate, and erect the 10,000 tons of
steel required for the ITB. As a result, the ITB was completed on time.
2. Material Handling Risk. Excavation, moving earth, and material handling com-
prised the second critical area. Millions of cubic meters (cum) of earth had
to be moved, and over 4 million cum of sand were required for concrete run-
ways and taxiways. The project team developed an advance plan to enable
coordinated movement of earth from one locale to another, and used local
sand as a component in the concrete. This saved substantial time and money,
and resulted in the runway being completed a year ahead of schedule.
3. Environmental Risk. Excavations and transport of earth and sand by barges
threatened the ecology of the Fraser River estuary. These risks were miti-
gated by advance planning and constantly identifying and handling prob-
lems as they arose through cooperative efforts of all stakeholders.
4. Functionality Risk. Because new technology poses risk, the project team
adopted a policy of using only proven components and technology.
Whenever a new technology was in doubt, its usage and success at other
existing sites was evaluated. Consequently, all ITB systems were installed
and operational according to schedule with few problems.
One additional way to reduce the risk of not meeting budgets, schedules, and
technical performance is to do whatever is necessary to achieve the requirements,
and nothing more (excepting design margin).28 The project team might be aware of
many things that could be done beyond the stated requirements, but in most cases
these will consume additional resources and add time and cost. Unless the customer
approves the added time and cost, these things should be avoided. Avoiding “non-
essential” needs reduces the risk of failing to meet the essential needs.
Contingency Planning
Contingency planning implies identifying the risks, anticipating whatever might
happen, and then preparing a plan of action to cope with them. The initial project
plan is followed, yet throughout execution the risks are closely monitored. Should a
risk materialize as indicated by an undesired outcome or trigger symptom, the con-
tingency course of action is adopted. The contingency can be a post-hoc remedial
action to compensate for a risk impact, an action undertaken in parallel with the
original plan, or a preventive action initiated by a trigger symptom to mitigate the
risk impact. Multiple contingency plans can be developed based upon “what-if”
analyses of possible outcome scenarios for multiple risks.
The identified risks are documented and added to a list called a risk log or risk regis-
ter and rank ordered, greatest risk consequence first. For risks with the most serious
consequences, mitigation plans are prepared and strategies adopted (transfer, reduce,
avoid, or contingency); for the least important ones, nothing is done (accept).
The project should be continuously monitored for trigger symptoms of previously
identified risks, and for symptoms of risks newly emerging and not previously iden-
tified. Known risks may take a long time before they begin to produce problems.
Should a symptom reach the trigger point, a decision is made as to the course of
action. The action might be to institute an already prepared plan or to organize a
meeting to determine a solution. Sometimes the response is to do nothing; however,
nothing should be a conscious choice (not an oversight) closely tracked to ensure it
was the right choice and no further problems ensue.
All risks deemed critical or important are tracked throughout the project or the
phases to which they apply; to ensure this, someone is assigned responsibility to
track and monitor the symptoms of each important risk.
Altogether, the risk log, mitigation strategies, monitoring methods, people respon-
sible, contingency plans, and schedule and budget reserves constitute the project risk
management plan. The plan is continuously updated to account for changes in risk
status (old risks avoided, downgraded, or upgraded; existing risks reassessed; new
risks added). The project manager (and sometimes other stakeholders such as man-
agement and the customer) is alerted about emerging problems; ideally, the project
culture embodies candor and honesty, and people readily notify the project manager
whenever they detect a known risk materializing or a new one emerging.
• Create a risk management plan that specifies ways to identify all major project
risks. The plan should specify the person(s) responsible for managing risks as
well as methods for allocating time and funds from the risk reserve.
• Create a risk profile for each risk that includes the risk likelihood, cost and sched-
ule impact, and contingencies to be invoked. It should also specify the earliest
visible symptoms (trigger events) that would indicate when the risk is material-
izing. In general, high-risk areas should be visible and have lots of eyes watching
closely. Contingency plans should be kept up-to-date and reflect project progress
and emerging risks.
• Appoint a risk officer to the project, a person whose principal responsibility is the
project’s risk management. The risk officer should not be the same person as the
project manager because the role involves matters of psychology and politics.
He should not be a can-do person, but instead, to some extent, a devil’s advocate
identifying, assessing, and tracking all the reasons why something might not
work—even when everyone else believes it will.
• Include in the budget and schedule a calculated risk reserve, which is a buffer of
money, time, and other resources for dealing with risks as they materialize. The
risk reserve is used at the project manager’s discretion to cover risks not specifi-
cally detailed in the risk profile. The reserve may include the RT or RC values
(described later) or other amounts. It is usually not associated with a contingency
plan, and its use might be constrained to particular applications or areas of risk.
The size of the risk reserve should be held confidential by the project manager
(otherwise, projects have a tendency to consume whatever time–cost resources
are available, even if in reserve form).
• Establish communication channels (perhaps anonymous) within the project team
to ensure that bad news gets to the project manager quickly. Ensure that risks are
continually monitored, current status of risks is assessed and communicated, and
the risk management plan updated.
• Specify procedures to ensure that the project is accurately and comprehensively
documented. Documentation includes proposals, detailed project plans, change
requests, summary reports, and a postcompletion summary. In general, the better
the documentation of past projects, the more information available for planning
future, similar projects, estimating necessary time and resources, and identifying
possible risks.
In every project the identified risks are documented. Figure 10-5 illustrates a tem-
plate for the profile and management plan for an identified risk; it summarizes every-
thing known about the risk. Such a document would be retained in a binder or library,
to be updated as necessary until the risk is “closed out” (believed to no longer exist).
Figure 10-5 Document for the profile and management plan of an identified risk. The template includes fields for the risk description, risk sources, risk assessment, measures/symptoms, comments, and signoffs.
Much of the input to risk analysis is subjective; after all, a likelihood is just that—it does
not indicate what will happen, only what might happen. Data analysis and planning
gives people a sense of having power over events, even when the events are chancy.
Underestimating the risk likelihood or impact can make consequences seem insig-
nificant, leading some people to venture into dangerous territory that common sense
would disallow. For example, the security of seat belts and air bags encourages some
drivers to take risks such as driving too close behind the next car or accelerating
through yellow lights. The result is an actual increase in the overall number of acci-
dents (even though the seriousness of injury is reduced).
Repeated experience and good documentation are important ways to identify
risks, but they cannot guarantee that some important risks will not remain unknown.
The same or similar outcomes that have occurred repeatedly in past projects eventually deplete people’s capacity to imagine anything else happening. As a result, some
risks become unthinkable and are never considered. Even sophisticated computer
models are worthless when it comes to dealing with unthinkable risks because a
computer cannot be instructed to analyze events that have never occurred and are
beyond human imagination. Risk analysis models are based on the occurrence fre-
quency of past events in a finite number of cases. History provides a sample, not the
population of all possibilities.
Managing risk does not mean eliminating it, although the management of some
projects makes it seem as though that is the goal. The prime symptom of “trying
to eliminate risk” is management overkill or micromanagement: excessive con-
trols, unrealistic documentation requirements, and trivial demands for the authori-
zation of everything. By definition, projects inherently entail uncertainty and risk.
Micromanagement is seldom appropriate and may prove disastrous for some
projects, particularly those for product development and R&D. When management
tries to eliminate risk, it stifles innovation and, say Aronstein and Piccirillo, “forces
a company into a plodding, brute force approach to technology, which can be far
more costly in the long run than a more adventurous approach where some pro-
grams fail but others make significant leaps forward.”32 The appropriate risk man-
agement approach, particularly for development projects, is not to try to avoid or
eliminate risk altogether, but to accommodate and mitigate risk by reducing the cost
of failure.
10.7 SUMMARY
Project risk management involves identifying the risks, assessing them, planning the
appropriate responses, and taking action.
Identifying project risks starts early in the project conception phase. Areas of
high risk that can significantly influence project outcomes are hazards to be dealt
with. Risks in projects stem from many sources such as failure to define and sat-
isfy customer needs or market requirements, technical problems arising in the work,
extreme weather, labor and supplier problems, competitors’ actions, and changes
imposed by outside parties. Risk hazards are identified from experience with past
projects and careful scrutiny of current projects.
Projects have innumerable risks, but only the important ones need to be
addressed. Importance depends on the likelihood, impact, and overall consequence of
the risk. Likelihood is the probability a risk will occur as determined by knowledgea-
ble, experienced people. Risk impact is the effect of the risk, its seriousness or potential
Appendix: Risk Analysis Methods
Four common methods for risk analysis are expected value, decision trees, payoff
tables, and simulation.
Expected Value
Selection of the appropriate risk response is sometimes based on analysis of risk con-
sequences in terms of the expected value of project costs and schedules.
In general, expected value is the average or mean outcome of numerous repeated
circumstances. For risk assessment, expected value represents the average outcome
of a project, if it were repeated many times, accounting for the possible occurrence of
risk. Mathematically, it is the weighted average of all the possible outcomes, where
the respective likelihoods of the possible outcomes are the weights, that is

expected value = Σ (likelihood of outcome i) × (value of outcome i)
To account for risk, the project time and cost risk consequences are determined using expected value.
The risk consequence on project duration is called the risk time, RT. It is the
expected value of the estimated time required for risk correction, computed as

RT = (risk likelihood) × (time to correct the risk impact)    (5)
The risk consequence on project cost is called the risk cost, RC. It is the expected
value of the estimated cost of correcting for the risk, computed as

RC = (risk likelihood) × (cost to correct the risk impact)    (6)
For example, suppose the baseline time estimate (BTE) for project completion is
26 weeks and the baseline cost estimate (BCE) is $71,000. Assume that the risk likeli-
hood for the project as a whole is 0.3, and, should the risk materialize, it would delay
the project by 5 weeks and increase the cost by $10,000. Also, because the probability
of the risk materializing is 0.3, the probability of it not materializing is 0.7. If the risk
does not materialize, no corrective measures will be necessary, so the corrective time
and cost will be nil. Hence

RT = 0.3(5 weeks) + 0.7(0) = 1.5 weeks
RC = 0.3($10,000) + 0.7($0) = $3,000
These figures, RT and RC, would be included as reserve or buffer amounts in the
project schedule and budget to account for risk. RC and RT are the schedule reserve
and project contingency (budget reserve), respectively, as mentioned in Chapters 6
and 8.
Accounting for the risk time, the expected project completion time, ET, is

ET = BTE + RT = 26 + 1.5 = 27.5 weeks    (7)

and accounting for the risk cost, the expected project completion cost, EC, is

EC = BCE + RC = $71,000 + $3,000 = $74,000    (8)
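The whole-project calculation above is a one-line expected-value computation. A sketch using the example’s baseline figures (26 weeks, $71,000, likelihood 0.3, 5-week delay, $10,000 correction cost):

def expected_totals(bte, bce, likelihood, corrective_time, corrective_cost):
    # Equations (5) through (8): risk time and risk cost are expected values,
    # added to the baseline estimates as schedule and budget reserves.
    rt = likelihood * corrective_time
    rc = likelihood * corrective_cost
    return rt, rc, bte + rt, bce + rc

rt, rc, et, ec = expected_totals(bte=26, bce=71_000, likelihood=0.3,
                                 corrective_time=5, corrective_cost=10_000)
print(rt, rc, et, ec)  # 1.5 3000.0 27.5 74000.0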
These examples account for risk factors that affect the project as a whole. Another
way to determine risk consequence is to first disaggregate the project into work
packages or phases and then, for each element, estimate the risk likelihood and cor-
rective time and cost. These individual corrective estimates are then aggregated to
determine ET and EC for the entire project. This approach tends to give more cred-
ible RT and RC estimates than do equations (5) through (8) because risks so pin-
pointed to individual tasks or phases can be more accurately assessed. Also, it is
easier to identify the necessary corrective actions and estimate the time and costs
associated with particular tasks.
For example, a project has eight work packages, and for each the BCE, risk likeli-
hood, and corrective cost have been estimated. The following table lists the informa-
tion for each work package and gives EC, where each work package’s EC is computed as

EC = BCE + (risk likelihood) × (corrective cost)

and the project EC is the sum over all eight work packages.
Therefore, the project EC is $75,150. Because this is only 5.8 percent above the
project BCE of $71,000, the overall cost consequence of project risks is small.
Now, for the same eight work package project, assume the BTE, risk likelihood,
and corrective time have been estimated for each work package. These figures are
listed below along with ET, computed for each work package as

ET = BTE + (risk likelihood) × (corrective time)
Work Package    BTE (weeks)    Corrective Time (weeks)    Likelihood    ET (weeks)
J                    6                    1                  0.2            6.2
M                    4                    1                  0.3            4.3
V                    6                    2                  0.1            6.2
Y                    8                    3                  0.2            8.6
L                    2                    1                  0.3            2.3
Q                    8                    1                  0.1            8.1
W                    1                    1                  0.3            1.3
X                    1                    1                  0.3            1.3
The project network is used to determine ET for the overall project. Suppose the
network is as shown in Figure 10-6. Without considering the risk time, the critical
path would be J-M-V-Y-W-X, which gives a project BTE of 26 weeks. Accounting for
risk consequences, the critical path does not change but the duration is increased to
27.9 weeks. This is the project ET.33
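The 27.9-week figure can be reproduced by summing the risk-adjusted durations along the critical path J-M-V-Y-W-X; the per-package numbers are the ones tabulated above.

# (BTE in weeks, corrective time, likelihood) per work package, from the table above.
packages = {
    "J": (6, 1, 0.2), "M": (4, 1, 0.3), "V": (6, 2, 0.1), "Y": (8, 3, 0.2),
    "L": (2, 1, 0.3), "Q": (8, 1, 0.1), "W": (1, 1, 0.3), "X": (1, 1, 0.3),
}

def et(bte, corrective_time, likelihood):
    return bte + likelihood * corrective_time

critical_path = ["J", "M", "V", "Y", "W", "X"]
bte_total = sum(packages[p][0] for p in critical_path)
et_total = sum(et(*packages[p]) for p in critical_path)
print(bte_total, round(et_total, 1))  # 26 27.9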
Figure 10-6
Project network, accounting for risk time.
Although activities on critical and near-critical paths should be carefully moni-
tored, in general, all activities that pose a high-risk consequence (high likelihood
and/or high impact) should also be carefully monitored, even those not on the criti-
cal path.
Increasing the project schedule and budget to account for the expected risk time
and expected risk cost cannot guarantee adequate protection against risk. Expected
value is equivalent to the long-run average, which results from repeating something
many times. Project activities are never identical or repeated over and over, but even
if they were, that would not preclude a bad outcome in a particular instance. The
point: No attempt to prepare for risk using expected value criteria offers any guar-
antee. Such is the nature of risk.
Decision Trees34
A decision tree is a diagram wherein the “branches” represent different chance
events or decision strategies. Decision trees can be used to assess which risk
responses among alternatives yield the best-to-be-expected consequence.
One application of decision trees is in weighing the cost of potential project
failure against the benefit of project success. Assume a project has a BCE of $200,000
and a failure likelihood of 0.25. If the project is successful, it will yield a net profit of
$1,000,000.
The expected value concept can be used to compute the average value of the
project assuming it could be repeated a large number of times. If it were repeated
many times, then the project would lose $200,000 (BCE) 25 percent of the time, and
generate $1,000,000 profit the other 75 percent of the time. The average outcome or
expected value would be

0.75($1,000,000) - 0.25($200,000) = $700,000
Figure 10-7
Decision tree.
Payoff Tables
A project can follow any of several possible routes into the future, over which management has no control. These different routes are called states of nature.
Consider different possible strategies or actions, and then indicate the likely out-
come for each state of nature. The outcomes for different combinations of strategies
and states of nature are represented in a matrix called a payoff table.
For example, suppose the success of a project to develop “Product X” depends
on market demand that is a known function of particular performance features of
the product. The development effort can be directed in any of three possible direc-
tions, referred to as strategies A, B, and C, each of which will result in a product
with different performance features. Also, assume that another firm is developing
a competing product that will have performance features similar to those under
Strategy A. One of three future states of nature will exist when the product develop-
ment effort ends: N1 represents no competing products on the market for at least 6
months, N2 represents the product competing with Product X introduced between 0
and 6 months later; N3 represents the product competing with Product X introduced
first. The payoff table shown in Table 10-6 gives the likely profits in millions of dollars
for different combinations of strategies and states of nature.
The question is: Which strategy should be adopted? The answer: It depends!
If project sponsors are optimistic, they will choose the strategy that maximizes the
potential payoff. The maximum potential payoff indicated in the table is $90 million,
which happens for Strategy C and state of nature N1. Thus, optimistic project sponsors
will adopt Strategy C. In general, the strategy choice that has the potential of yield-
ing the largest payoff is called the maximax decision criterion.
Table 10-6 Payoff table: profit ($ millions) for each strategy and state of nature.

Strategy     N1     N2     N3
A            60     30    –20
B            60     50     60
C            90     70     40
Now, if project sponsors are pessimistic, they instead will be interested in
minimizing their potential losses, in which case they will use the maximin decision
criterion and adopt the strategy that gives the best outcome under the worst pos-
sible conditions. For the three strategies A, B, and C, the worst-case payoff scenar-
ios are –$20 million, $50 million, and $40 million, respectively. The best (least bad)
of the three is $50 million, or Strategy B. Thus, pessimistic sponsors would adopt
Strategy B.
Any choice of strategy other than the best one will cause the decision maker to
experience an opportunity loss called regret. If, for example, Strategy A is adopted,
and the state of nature turns out to be N2, the sponsor will regret not having chosen
Strategy C, which is the best for that state of nature. A measure of this regret will be
the difference between the unrealized payoff for Strategy C and the realized payoff
for Strategy A, or $70 – $30 = $40 million. This way of thinking suggests another criterion for choosing between strategies, the minimax regret decision criterion: adopt the strategy that minimizes the regret of not having made the best choice.
Regret for a given state of nature is the difference in the outcomes between the
best strategy and any other strategy. This is illustrated in a regret table, shown in
Table 10-7. For example, given the payoffs in Table 10-6, for state of nature (N1) the
highest payoff is $90 million. Had Strategy C, the optimal strategy, been selected,
the regret would have been zero, but had strategies A or B been selected instead, the
regrets would have been $30 million each (the difference between their outcomes,
$60 million, and the optimum, $90 million). The regret amounts for states of nature
N2 and N3 are determined in a similar fashion.
To understand how to minimize regret, first look in the regret table at the
largest regret for each strategy. The largest regrets are $80 million, $30 million, and
$20 million for strategies A, B, and C, respectively. Next, pick the smallest of these,
$20 million, which occurs for Strategy C. Thus, Strategy C is the choice to minimize
regret.
Another approach for selecting a strategy is to assume that every state of nature
has the same likelihood of occurring by using the maximum expected payoff criterion.
Referring back to the payoff table, Table 10-6, where the likelihood of each state of
nature is assumed to be one-third, the expected payoff for Strategy A given out-
comes from the payoff table is

(1/3)($60) + (1/3)($30) + (1/3)(–$20) = $23.33 million
The expected payoffs, computed similarly for strategies B and C, are $56.66 mil-
lion and $66.66 million, respectively. Thus, Strategy C would be chosen as giving the
maximum expected payoff. Notice in the previous examples that three of the four
selection criteria point to Strategy C. This in itself might further convince decision
makers about the appropriateness of selecting Strategy C.
Table 10-7 Regret table ($ millions).

Strategy     N1     N2     N3
A            30     40     80
B            30     20      0
C             0      0     20
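The four selection criteria walked through above can be checked mechanically. The sketch below applies them to the Table 10-6 payoffs (in $ millions) and, like the text, points to Strategy C under every criterion except maximin.

payoffs = {"A": [60, 30, -20], "B": [60, 50, 60], "C": [90, 70, 40]}

# Maximax: strategy with the largest best-case payoff.
maximax = max(payoffs, key=lambda s: max(payoffs[s]))

# Maximin: strategy with the largest worst-case payoff.
maximin = max(payoffs, key=lambda s: min(payoffs[s]))

# Minimax regret: minimize the largest regret across the states of nature.
best_per_state = [max(p[i] for p in payoffs.values()) for i in range(3)]
regret = {s: max(best_per_state[i] - payoffs[s][i] for i in range(3))
          for s in payoffs}
minimax_regret = min(regret, key=regret.get)

# Maximum expected payoff, with equally likely states of nature.
expected = {s: sum(payoffs[s]) / 3 for s in payoffs}
max_expected = max(expected, key=expected.get)

print(maximax, maximin, minimax_regret, max_expected)  # C B C C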
Subsystems, X, Y, and Z. Constraints on physical size indicate that the output
capacity of the overall system will be split among the three subsystems in the
approximate ratio of 5:3:2. Suppose a design margin of 3 percent is applied to
the system and the subsystems. Note that because the power requirement is
stated as minimum output, the design margin would set the target output at 3
percent higher than the requirement.
(a) What is the target requirement output for the overall system?
(b) What are the target requirement outputs for each of the subsystems?
(Remember, subsystem margins are in addition to the system margin.)
(c) Suppose that, at best, Subsystem X can be designed to meet only 47 per-
cent of the power output requirement for the overall system. Assuming the
Subsystems Y and Z can be designed to meet their respective design targets,
will the output requirement for the overall system be met?
17. List and review the principles of risk management.
18. How does risk planning serve to increase risk-taking behavior?
19. Risk management includes being prepared for the unexpected. Explain.
20. Can risk be eliminated from projects? Should management try to eliminate it?
21. How and where are risk time and risk cost considerations used in project
planning?
22. Where would criteria such as minimax, maximin, and minimax regret be used
during the project life cycle to manage project risk?
23. Below is the network for the Largesse Hydro Project:
[Network diagram showing activities S, T, U, L, V, J, R, and C with their durations in weeks.]
The following table gives the baseline cost and time estimates (BCE and BTE),
the cost and time estimates to correct for failure, and the likelihood of failure for
each work package in Largesse.
WBS Element     BCE        BTE (weeks)    Corrective Cost    Corrective Time (weeks)    Likelihood
L             $20,000          9              $4,000                   2                   0.2
V             $16,000          8              $4,000                   2                   0.3
T             $32,000          5              $8,000                   2                   0.1
U             $20,000          7             $12,000                   3                   0.2
S             $16,000          3              $4,000                   1                   0.3
J             $18,000          3              $4,000                   1                   0.1
R             $10,000          4              $4,000                   3                   0.3
C             $15,000          6              $5,000                   2                   0.3
In all cases, the profit (if the bid is won) will be the bid price minus the pro-
posal-preparation cost, or 0.02C; the loss (if the bid is not won) will be the pro-
posal-preparation cost.
Prepare a decision tree for the three options. If Bradford uses the maximum
expected profit as the criterion, which bid proposal would he select?
27. Iron Butterfly, Inc. submits proposals in response to RFPs and faces three pos-
sible outcomes: N1, Iron Butterfly gets a full contract; N2, Iron Butterfly gets a
partial contract (job is shared with other contractors); N3, Iron Butterfly gets no
contract. The company is currently assessing three RFPs, code-named P1, P2, and
P3. The customer for P3 will pay a fixed amount for proposal preparation; for P1
and P2 Iron Butterfly must absorb all of the proposal-preparation costs, which
are expected to be high. Based upon project revenues and proposal-preparation
costs, the expected profits ($1,000s) are as shown:
Proposal     N1      N2      N3
P1           500     200    –300
P2           300     100    –100
P3           100      50      25
To which RFPs would Iron Butterfly respond using the various uncertainty criteria?
28. Frank Wesley, the Iron Butterfly, Inc. project manager for the LOGON project, is concerned about the development time for the robotic transporter. Although the subcontractor, Creative Robotics, has promised a delivery time of 6 weeks, Frank knows that the actual delivery time will depend on the number of other projects Creative Robotics is working on at the time. As an incentive to speed up Creative Robotics' delivery of the transporter, Frank has three options:
S1: Do nothing.
S2: Promise Creative Robotics a future contract with Iron Butterfly.
S3: Threaten to never contract with Creative Robotics again.
He estimates the impact of these actions on delivery time would be as follows:
What strategy should Frank adopt based upon uncertainty criteria? Use criteria similar to maximax, maximin, minimax regret, and maximum expected payoff, but note that the criteria must be adapted because here the goal is to minimize the payoff (time), in contrast to the usual case of maximizing the payoff (profit). (A sketch of the adapted criteria, using assumed figures, follows.)
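When the table entries are times to be minimized rather than profits to be maximized, each criterion flips: the optimistic choice takes the smallest best-case time, the conservative choice takes the smallest worst-case time, regret becomes the excess over the best time in each state, and the expected-value rule picks the smallest expected time. The delivery-time figures in the sketch below are assumed for illustration only, since the actual table belongs to the problem statement.

    # Sketch: uncertainty criteria adapted for minimizing time (figures assumed).
    times = {                    # delivery time in weeks under states N1, N2, N3 (assumed)
        "S1: Do nothing":       [6, 9, 12],
        "S2: Promise contract": [5, 7, 10],
        "S3: Threaten":         [5, 8, 14],
    }
    n = 3

    # Minimin (optimistic): smallest best-case time.
    minimin = min(times, key=lambda s: min(times[s]))

    # Minimax (conservative): smallest worst-case time.
    minimax = min(times, key=lambda s: max(times[s]))

    # Minimax regret: regret = time minus the best (smallest) time in that state.
    col_min = [min(times[s][j] for s in times) for j in range(n)]
    regret = {s: [times[s][j] - col_min[j] for j in range(n)] for s in times}
    minimax_regret = min(regret, key=lambda s: max(regret[s]))

    # Minimum expected time: states assumed equally likely.
    expected = {s: sum(times[s]) / n for s in times}
    min_expected = min(expected, key=expected.get)

    print("Minimin (optimistic):   ", minimin)
    print("Minimax (conservative): ", minimax)
    print("Minimax regret:         ", minimax_regret)
    print("Minimum expected time:  ", min_expected)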
Figure 10-8
Sydney Opera House. (Photo courtesy of Australian Information Service.)
through profits raised from a series of state-run lotteries.
Engineers who reviewed the concept noted that the roof shells were much larger and wider than any shells seen so far. Further, because they stuck up so high, they would act like sails in the strong winds blowing up the harbor. Thus, the roof would have to be carefully designed and constructed to prevent the building from blowing away.
The government was worried that people scrutinizing the design might raise questions about potential problems that would stall the project. They thus quickly moved ahead and divided the work into three main contracts: the foundation and building except the roof, the roof, and the interior and equipment.
As many experts had feared, the SOH project became an engineering and financial debacle, lasting 15 years and costing $107 million ($100 million over the initial estimate). Hindsight is 20/20, yet from the beginning this should have been viewed as a risky project. Nonetheless, risks were either downplayed or ignored, and not much was done to mitigate or keep them under control.
QUESTIONS
1. Identify the obvious risks.
2. What early actions should have been taken to reduce the risks?
3. Discuss some principles of risk management that were ignored.
Figure 10-9
Nelson Mandela Bridge, Johannesburg. (Photo courtesy of Jorge Jung,
BKS (Pty) Ltd., Pretoria.)
Two mitigation strategies were considered: an additional pump on standby and, alternatively, completing the process by pouring concrete from the top of the pylon. The concrete mixture had to be transported by trucks to the site, which posed yet another risk: interruption of the concrete supply owing to traffic congestion in the city.
Despite the risks of working over a busy area with trains running back and forth, no serious accident occurred during the 420,000 man-hours spent on the project. The pump never failed, and construction finished on time. The stay cables (a total length of 81,000 meters, or 50 miles) were installed and the bridge deck lifted off temporary supports, all while the electrified railway lines underneath remained live. Upon completion of the bridge, some wondered whether the costs incurred to manage the risks were not excessive, while others held that the engineers had been too frugal and taken unacceptably high risks.
QUESTIONS
1. How would you have identified the risks? (Refer also to methods in Chapter 9.)
2. Using the table below, discuss how the risks were addressed (as described in the text) and/or how risks could have been addressed. Also indicate any additional risks you can think of.
3. Indicate whether the risks listed in the table above are internal or external.
4. Describe how you would determine the expected values of the risks listed in the table.
5. Compile a complete list of information that you would require in order to make an assessment of the risk of a pump failure.
6. How available do you think this information would have been early in the project, and where would you obtain it?
7. Draw a CE diagram to indicate how different factors could have contributed to delaying project completion.
ENDNOTES
1. Quoted in Peter Bernstein, Against the Gods: The Remarkable Story of Risk (New York: John Wiley & Sons, 1996): 331.
2. Ibid., 207–208.
3. Asked once to define certainty, John von Neumann, the principal theorist of game theory and mathematical models of uncertainty, answered with an example: To design a house so it is certain the living room floor never gives way, "calculate the weight of a grand piano with six men huddling over it to sing, then triple the weight" and design a floor to hold that weight. That will guarantee certainty! Source: Bernstein, Against the Gods, 233.
4. See Robert Argus and Norman Gunderson, Planning, Performing, and Controlling Projects (Upper Saddle River, NJ: Prentice Hall, 1997): 22–23.
5. Adapted from Jack Michaels, Technical Risk Management (Upper Saddle River, NJ: Prentice Hall, 1996): 208–250.
6. Murray Turoff and Harold Linstone (eds), The Delphi Method: Techniques and Applications, 2002; http://is.njit.edu/pubs/delphibook/
7. The term "likelihood" is sometimes distinguished from "probability." The latter refers to values based on frequency measures from historical data; the former to subjective estimates or gut feel. If two of three previous attempts met with success the first time, then, ceteris paribus, the probability of success on the next try is 2/3 or 0.67 (and the probability of failure is 1/3). However, even without numerical data, a person with experience can, upon reflection, come up with a similar estimate that "odds are two to one that it will succeed the first time." Although one estimate is objective and the other subjective, that does not imply one is better than the other. Frequency data will not necessarily give a more reliable estimate because of the multitude of factors that influence outcomes; a subjective estimate, in contrast, might be very reliable because humans often can do a pretty good job of assimilating lots of factors.
8. W. Roetzheim, Structured Computer Project Management (Upper Saddle River, NJ: Prentice Hall, 1988): 23–26; further examples of risk factors and methods of likelihood quantification are given in Michaels, Technical Risk Management.
9. See J. Dingle, Project Management: Orientation for Decision Makers (London: Arnold, 1997).
10. See Robert Gilbreath, Winning at Project Management: What Works, What Fails, and Why (New York: John Wiley & Sons, 1986).
11. Roetzheim, Structured Computer Project Management, 23–26.
12. Robert Pool, Beyond Engineering: How Society Shapes Technology (New York: Oxford University Press, 1997), 197–202.
13. Ronald Kotulak, "Key Differences Seen in Columbia, Challenger Disasters," Chicago Tribune (February 2, 2003): 5, Section 1.
14. Robert Pool, Beyond Engineering, 207–214.
15. Michaels, Technical Risk Management, 40.
16. Edmund Conrow, Effective Risk Management (Reston, VA: American Institute of Aeronautics and Astronautics, 2000), 135–140.
17. Statistics make it easy to ignore risks by depersonalizing the consequences. For example, it is less distressing to state that there is a 0.005 likelihood of someone being killed than to say that 5 people out of 1000 will be killed.
18. I. Mitroff and H. Linstone, The Unbounded Mind (New York: Oxford, 1993): 111–135.
19. Ibid.
20. F. T. Anbari (ed.), The Chunnel Project, Case Studies in Project Management, Project Management Institute, 2005.
21. Howard Eisner, Computer-Aided Systems Engineering (Upper Saddle River, NJ: Prentice Hall, 1988): 335.
22. See Jeffrey Grady, System Requirements Analysis (New York: McGraw-Hill, 1993): 106–111.
23. Eisner, Computer-Aided Systems Engineering, 336.
24. Ibid.
25. A breadboard is a working assembly of components. A prototype is an early working model of a complete system. The purpose of both is to demonstrate, validate, experiment, or prove feasibility in a concept or design.
26. Roetzheim, Structured Computer Project Management, 96.
27. Henry Wakabayashi and Bob Cowan, "Vancouver International Airport Expansion," PM Network (September 1998): 39–44.
28. Neal Whitten, "Meet Minimum Requirements: Anything More is Too Much," PM Network (September 1998): 19.
29. Tom DeMarco, The Deadline (New York: Dorset House, 1997): 83; Edward Yourdon, Rise and Resurrection of the American Programmer (Upper Saddle River, NJ: Prentice Hall, 1998): 133–136.
30. Dietrich Dorner, The Logic of Failure (Reading, MA: Addison-Wesley, 1997): 163.
31. D. Aronstein and A. Piccirillo, Have Blue and the F-117A: Evolution of the Stealth Fighter (Reston, VA: American Institute of Aeronautics and Astronautics, 1997): 79–80.
32. Ibid., 186–190.
33. For other approaches to risk time analysis, see Michaels, Technical Risk Management.
34. This section and the next address the more general topic of decision analysis, a broad topic that receives only cursory coverage here. Most textbooks on production/operations management and quantitative analysis for management cover the topic in depth. A classic book on the subject is R. D. Luce and H. Raiffa, Games and Decisions (New York: John Wiley & Sons, 1957).
35. Adapted from O. Kharbanda and J. Pinto, What Made Gertie Gallop: Learning from Project Failures (New York: Van Nostrand Reinhold, 1996): 177–191.
36. Source: Frans Kromhout, Divisional Director, Bridges, BKS (Pty) Ltd, Pretoria.