
Volume 3, No. 1

Corporate headquarters:
San Francisco, California, USA

Other offices include:
Brisbane, Australia
Calgary, Canada
Frederick, Maryland, USA
Houston, Texas, USA
London, England, UK
Montreal, Canada
New Delhi, India
Santiago, Chile
Shanghai, China
Taipei, Taiwan, ROC

www.bechtel.com
Bechtel Technology Journal
An Independent Analysis of Current Technology Issues
June 2011
Bechtel Fellows:
Prem Attanayake, PhD; Amos Avidan, PhD; August D. Benz; William Imrie; Joe Litehiser, PhD; Sanj Malushte, PhD; Cyrus B. Meher-Homji; Lee Schmoe; Craig A. Myler, PhD; Farhang Ostadan, PhD; Stew Taylor, PhD; Justin Zachary, PhD; Ping Wan; Fred Wettling; Benjamin Fultz; Angelos Findikakis, PhD; Peter Carrato, PhD; Siv Bhamra, PhD
Who We Are
Bechtel is the world's No. 1 choice for engineering, construction, and project management.
Our diverse portfolio encompasses energy, transportation, communications, mining, oil and gas, and government services. We currently have projects in dozens of locations worldwide, from Alaska to Australia. No matter how challenging a project or how remote its location, chances are Bechtel can handle it. That's because we bring an unmatched combination of knowledge, skill, experience, and customer commitment to every job. Engineering News-Record (ENR) has named Bechtel the top U.S. construction contractor for 13 consecutive years.
While we work for governments and commercial customers, our projects have helped grow local economies and improve the quality of life for communities and people around the world. Time and again our work has demonstrated that the only limits on human achievement are those that we place on ourselves.
Privately owned with headquarters in San Francisco, we have offices around the world and 52,000 employees.
What We Do
Airports and seaports
Communications networks
Defense and aerospace facilities
Environmental cleanup projects
Fossil and nuclear power plants
Mines and smelters
Oil and gas field development
Pipelines
Refineries and petrochemical facilities
Roads and rail systems
Ethics
Bechtel's culture is grounded in integrity and respect. This means adhering to the highest standards of ethics. Our reputation as an ethical company is one of our most valuable assets. We stand by everything we do.
Quality
At Bechtel, quality means doing the job right the first time. We've always delivered quality work, and we are continually striving to improve our performance through Six Sigma.
Safety
Bechtel has a world-class safety program, and it pays off. Nearly 80 percent of our projects complete each year without a lost-time accident. Our philosophy is simple: every accident, and therefore every injury, is preventable.
Bechtel Fellows
Chosen for their substantial technical achievement over the years,
the Bechtel Fellows advise senior management on questions related
to their areas of expertise, participate in strategic planning, and help
disseminate new technical ideas and findings throughout the organization.
Bechtel Technology Journal
Volume 3, No. 1
June 2011
Contents
Foreword v
Editorial vii
CENTRAL ENGINEERING
Construction Productivity and Performance Measurement 3
Steven Toon
Prediction of Ground Motion Attenuation in Low-Seismicity Regions 13
Behrooz Tavakoli, PhD, and Nick Gregor, PhD
CIVIL
Technology Trends in High-Speed Rail 29
Siv Bhamra, PhD, and Maximilian Fieguth
Management of Ground Engineering Data 41
Stephen Walthall
Enhancements to Airfield Simulation of London Gatwick Airport 49
Jim Denton-Brown and Farzam Mostoufi
MINING & METALS (M&M)
An Integrated Approach to Developing Safe and Efficient Plant Layouts 59
Rafael L. Pires; Robert F. Baxter; Laszlo Tikasz, PhD; and Robert I. McCulloch
Alumina Refinery Water Management Design in Tropical and 67
Subtropical Climates
Lucy Martin and Steven Howard
Options for Long-Distance, Large-Capacity Bulk Materials Conveying 79
Brett Wilson, Steve J. Davis, and Mark Wolton
OIL, GAS & CHEMICALS (OG&C)
Optimum Simulation of Flood Flow Rate: Comparing Combinations 97
of Precipitation Loss and Rainfall Excess-Runoff Transform Models
Zhiyong Duan, PhD
Energy Management System for the New Jamnagar Refinery in India 107
Rajesh Narayan Athiyarath
Munir Mujawar and Vinayak Shanbhag, Reliance Industries Ltd.
Dynamic 3D Visualization of Construction Activities 125
Shiva Krishnasamy
CO2 Compression and Dehydration for Carbon Capture 135
and Sequestration
Tomas Maramba III and Dipanjan Bhattacharya
POWER
Formed Suction Inlet Design for Mixed Flow Pump Applications 147
David Werth, PhD, Clemson Engineering Hydraulics, Inc., and Yifan Zheng
Climate Change Considerations in the Sustainable Development of 153
Nuclear Power Plants in the United States
Ping K. Wan; Alice C. Carson; and Desmond W. Chan, PhD
Concentrated Solar Thermal Plants Downstream of the Solar Field 161
Design/Optimization of the Associated Power Generation Cycle
Justin Zachary, PhD; Natasha Jones; and Aslan Golant
SYSTEMS & INFRASTRUCTURE
Probabilistic Site Seismic Response Analysis 173
Nan Deng, PhD, and Farhang Ostadan, PhD
Computation-Based Engineering of Multiphase Processes Using 181
Computational Fluid Dynamics
Christopher Kennedy, PhD; Philip Diwakar; Leonard J. Peltier, PhD; and
Brigette Rosendall, PhD
Water Security 193
Angelos N. Findikakis, PhD
© 2011 Bechtel Corporation. All rights reserved.
Bechtel Corporation welcomes inquiries concerning the BTJ. For further information or
for permission to reproduce any paper included in this publication in whole or in part,
please e-mail us at [email protected].
Although reasonable efforts have been made to check the papers included in the BTJ,
this publication should not be interpreted as a representation or warranty by
Bechtel Corporation of the accuracy of the information contained in any paper, and
readers should not rely on any paper for any particular application of any technology
without professional consultation as to the circumstances of that application. Similarly,
the authors and Bechtel Corporation disclaim any intent to endorse or disparage any
particular vendors of any technology.
The BTJ is available on the Web at www.bechtel.com/.
(Click on Services > Engineering & Technology > Technical Papers)
TRADEMARK ACKNOWLEDGMENTS
All brand, product, service, and feature names and trademarks mentioned in this Bechtel Technology Journal are the property of their respective owners. Specifically:
ANSYS and FLUENT are registered trademarks of ANSYS, Inc., and its subsidiaries in the United States and other countries.
Aspect Objects is a trademark of ABB.
Bentley, AutoPLANT, ConstructSim, and gINT are registered trademarks of Bentley Systems, Incorporated, or one of its direct or indirect wholly owned subsidiaries.
Cray and CX1 are trademarks of Cray Inc.
Doppelmayr and RopeCon are registered trademarks of Doppelmayr Transport Technology GmbH.
Dyform is a registered trademark of Bridon International Limited.
Esri and ArcGIS are registered trademarks of Esri in the United States, the European Community, and certain other jurisdictions.
Flexowell is a registered trademark of ContiTech AG.
Flexsim is a registered trademark of Flexsim Software Products, Inc.
Google is a trademark of Google Inc.
InfoWorks and SETROUTE are registered trademarks and TEAMWorks is a trademark of Bechtel Corporation.
Intergraph, PDS, and SmartPlant are registered trademarks of Intergraph Corporation in the United States and in other countries.
Mathematica is a registered trademark of Wolfram Research, Inc.
MATLAB is a registered trademark of The MathWorks, Inc.
Metso and Cable Belt are registered trademarks of Metso Corporation.
Microsoft, Excel, and Windows are registered trademarks of Microsoft Corporation in the United States and other countries.
PAC System is a registered trademark of GEA Power Cooling, Inc.
PDMS is a trademark of AVEVA Group plc.
Rail-Veyor is a registered trademark of Rail-Veyor Technologies Global Inc.
STAR-CCM+ is a registered trademark of CD-adapco.
EDITORIAL BOARD
Justin Zachary, PhD . . . . . . . . . . . . . . . . . . . . . . . . . Editor-in-Chief
Stew Taylor, PhD . . . . . . . . . . . . . . . . . Central Engineering Editor
Siv Bhamra, PhD . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Civil Editor
William Imrie . . . . . . . . . . . . . . . . . . . . . . . Mining & Metals Editor
Cyrus B. Meher-Homji . . . . . . . . . . . . Oil, Gas & Chemicals Editor
Sanj Malushte, PhD . . . . . . . . . . . . . . . . . . . . . . . . . . Power Editor
Farhang Ostadan, PhD . . . . . . . . . Systems & Infrastructure Editor
EDITORIAL TEAM
Richard Peters . . . . . . . . . . . . . . . . . . . . . . Senior Technical Editor
Teresa Baines . . . . . . . . . . . . . . . . . . . . . . . Senior Technical Editor
Ruthanne Evans . . . . . . . . . . . . . . . . . . . . . . . . . . . Technical Editor
Cindy Grant . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Technical Editor
Barbara Oldroyd . . . . . . . . . . . . . . . Coordinating Technical Editor
GRAPHICS/DESIGN TEAM
Keith Schools . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Graphic Design
Todd Berkhammer . . . . . . . . . . . . . . . . . . . . . . . . . Graphic Design
Barbara Rhoads . . . . . . . . . . . . . . . . . . . . . . . . . . . Graphic Design
Diane Cole . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Desktop Publishing
Bechtel
Technology Journal
Volume 3, Number 1
June 2011
Foreword

This is the third time the Bechtel Technology Journal has been published. For the first
two issues, printed copies were distributed and an electronic copy was posted to
Bechtel's website. Beginning with this issue, the print version has been eliminated,
so you will find us only on the Internet. Be assured, however, that the high caliber of the
papers has been maintained. The quality of the papers is indicative of the depth of our
expertise. The diversity of the topics is representative of the numerous disciplines and
technical specialties that exist within Bechtel.
There are many perceptions concerning innovation. Innovation can be adapting unique
new technologies into the design and construction of industrial facilities. Innovation can
also be modifying work processes either to be more efficient or to provide a work process
workaround that saves schedule without degrading the quality of a deliverable. The
papers in this BTJ help demonstrate some of Bechtel's innovative approaches.
The BTJ's objective is to share with our clients, our employees, and select industry and
university experts a sampling of our technical and operational experiences. The papers
included have been written by individuals from each of Bechtel's global business units.
In some cases, our customers have also made significant contributions as co-authors.
For this, we thank them!
The authors, editorial board, editorial team, and graphics/design team who have
made this publication possible can be truly proud of the outcome. To each go my
personal thanks. I, too, am proud to be a small part of this effort and am confident that
this BTJ provides a better understanding of how Bechtel applies our best practices
to our work.
Sincerely,
Benjamin Fultz
Global Chief, Materials Engineering Technology
Bechtel Oil, Gas & Chemicals
Chair, Bechtel Fellows
Editorial
Following our successful publication of the first two issues of the Bechtel Technology
Journal, and with much appreciation for the interest it has generated in various
industry sectors as well as in Bechtel, we are pleased to offer this, our third, issue.
This issue marks a noteworthy change in our approach to presenting our wide-ranging
readership with a full complement of inspiring topics: we have chosen to become an
electronic-only publication.
The BTJ provides an additional avenue for disseminating the innovative ideas of
Bechtel's leading specialists in response to the diverse technical, operational, regulatory,
and policy challenges significant to our Central Engineering organization as well as to
our five global business units: Bechtel Systems & Infrastructure; Civil; Mining & Metals;
Oil, Gas & Chemicals; and Power.
This collection of papers, selected by the Bechtel Fellows from a large number of worthy
submissions, opens a window on our capabilities to solve complex problems and offer
solutions to real field problems.
I would like to acknowledge the dedication shown by the BTJ's editorial board, by the
editorial and graphics/design teams, and by the authors throughout our global business
units. Their hard work is what makes this publication a success.
Your comments or questions germane to the BTJ's content are welcome (as is your
feedback about our decision to become solely an eJournal). Please send them to me at
[email protected].
Happy Reading!
Dr. Justin Zachary
Technology Manager, Bechtel Power
Bechtel Fellow
Editor-in-Chief
Central Engineering
Elm Road Generating Station
Oak Creek OK'd: The first of
two new generating units at a
Wisconsin power plant expansion exceeds
promised capacity in final testing.
Technology Papers
Construction Productivity and Performance Measurement 3
Steven Toon

Prediction of Ground Motion Attenuation in Low-Seismicity Regions 13
Behrooz Tavakoli, PhD
Nick Gregor, PhD
CONSTRUCTION PRODUCTIVITY AND PERFORMANCE MEASUREMENT

Steven Toon
[email protected]

Issue Date: June 2011

Abstract: By whatever name (work sampling, activity sampling, or productivity measurement), the process of analyzing the activities of a construction workforce to establish the percentage of time craft workers spend in various productive and nonproductive activities provides a project team with valuable information. Using work or activity sampling as a cornerstone of a productivity evaluation program, the project team obtains an objective measure of the success of the execution effort.

This paper describes a programmatic approach to evaluating productivity. In doing so, it discusses various methods of using activity sampling to gather project data, ways to present data for analysis, data interpretation, evaluation frequency, the statistical basis for activity sampling (including margin of error), and steps to minimize bias. The paper also discusses the desired characteristics of the personnel gathering the data, as well as the advantages and limitations of this programmatic approach to activity analysis and the results obtained.

Keywords: activity analysis, activity sampling, construction productivity, craft productivity, productivity measurement, work sampling

INTRODUCTION

Work or activity sampling has been used for years as a means of measuring time spent working during the work shift. By defining a set of productive and nonproductive work categories to use as measures, the process provides a statistically valid set of data about a work environment. Observers collect random data points or observations during randomly conducted workplace data collection tours. Once the data have been collected and percentages calculated, the project team can evaluate the data to identify any impediments to work completion. The team can then act to eliminate or ameliorate the impediments.

This programmatic approach to improving productivity, commonly referred to as activity analysis, incorporates activity sampling as one phase of the analysis cycle. Using a standard set of work categories, activity analysis addresses the methodology for determining sample size and collecting observations, as well as strategies to implement during the data collection process to facilitate data evaluation. Because activity analysis is dynamic, sample results change throughout a project's lifecycle. Barriers created during the various phases of the construction process can be identified, and the project team may take steps to resolve the issue, fulfilling the role of a continuous improvement process.

BACKGROUND

Work or activity sampling has been used as a process evaluation tool since the early part of the twentieth century. During the late 1920s, L.H. Tippett's application of statistical models to equipment used in the textile industry eventually led to improved production and utilization efficiency. [1] Over time, the application of statistical models became a standard technique of industrial engineering practice.

In the mid 1970s, it was determined that statistical methods could be used in the construction industry to aid in evaluating workforce activity and provide management with a tool to identify and remove productivity barriers.

The term work sampling is generally used in the industrial engineering field. Because the actual practice involves observing individuals or equipment and categorizing them by the activity they are performing at the time, the term activity sampling is used in this paper. [2]
The value of activity sampling lies in its flexibility.
Categories can be designated to measure specific
activities in the work operation. The results are
statistically accurate as long as the sampling
rules and criteria are consistently followed. One
item to note is that the flexibility of the process
has, over time, led to variations in work category
definitions among the different organizations
that have implemented the process. This
variation limits the ability to compare the results
of one sampling system to those of another.
Activity sampling is based on the laws of
probability. A sample taken at random from
a large population tends to have the same
distribution as the large population. If the
sample is of adequate size, the characteristics of
the sample mirror those of the large population.
Therefore, if the activities of a worker or machine
are observed and recorded at random times
during the work shift, the percentage of the time
recorded as spent in each activity during the
data collection tour is representative of the time
actually spent in each activity during the entire
work shift. At a summary level, the overall time
spent during a work shift is characterized as
either working versus not working or, as shown
in Figure 1, productive versus nonproductive.
Activity sampling lends itself to further
partitioning because the general activities
routinely engaged in by the population being
observed may be subdivided into various
categories of productive or nonproductive
work. These categories may then be applied
to the specifically observed activities. A
typical distribution of time spent on a set of
activities is shown in Figure 2. In this case,
the productive work activities consist of direct
work, preparatory work, tool and equipment
activity, and material handling, while the
nonproductive work activities consist of waiting,
travel, and personal.
ACTIVITY SAMPLING THEORY AND SAMPLE SIZE
Initially, it is necessary to determine the level
of confidence desired in the final activity
sampling results. Assuming normal distribution,
the most common level is 95%. As shown in
Figure 3, the area under the normal curve
encompassed by two standard deviations (2σ)
on either side of the mean (μ) is 95.45%, which
rounds to 95%. This means that there is a 95%
probability that the results from the random
observations (in this case, activity sampling)
are representative of the entire population, and
there is a 5% probability that they are not. [1]
Another major factor to be determined is
the desired confidence interval, or degree of
accuracy, of the activity sampling results because
it affects the number of observations that need
to be made. For illustrative purposes, consider
a confidence interval of 5% as reasonable, with
a 95% confidence level. Assuming that normal
distribution is used as the basis for determining
the sampling error, then the number of
observations required is based on the following
formula [1]:
$$ S\,p \;=\; 2\sqrt{\frac{p\,(1-p)}{N}} \qquad (1) $$
[Figure 1. Distribution of Productive Time vs. Nonproductive Time During a Typical Work Shift: Productive 60%, Nonproductive 40%]

[Figure 2. Distribution of Activities in a Work Shift]

[Figure 3. Area Under the Normal Curve]
where S = desired degree of accuracy, p =
percentage expressed as a decimal, and N =
number of random observations (sample size).
If 60% productive work versus 40% nonproductive
work is taken as the initial target, then, from
Equation 1, with p = 60% = 0.60 and S = 5% = 0.05,

$$ N \;=\; \frac{4\,(1-p)}{S^2\,p} \;=\; \frac{4\,(1-0.60)}{(0.05)^2\,(0.60)} \;\approx\; 1{,}067 \,. $$
Thus, the initial target number of observations
required is 1,067. As the study progresses, the
actual sample size needs to be validated based
on the actual results from the sampling, as
discussed later.
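To make the arithmetic concrete, here is a minimal sketch in Python (the helper name and structure are illustrative, not part of the paper) of Equation 1 rearranged into a sample-size calculator:

```python
import math

def required_sample_size(p, s):
    """Observations N needed so the observed proportion p has relative
    accuracy s at ~95% confidence (z ~ 2), from Equation 1:
    s*p = 2*sqrt(p*(1 - p)/N), i.e., N = 4*(1 - p)/(s**2 * p)."""
    return math.ceil(4.0 * (1.0 - p) / (s ** 2 * p))

# Worked example from the text: 60% productive target, 5% accuracy.
print(required_sample_size(p=0.60, s=0.05))  # -> 1067
```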
ACTIVITY ANALYSIS: A PROGRAMMATIC
APPROACH TO EVALUATING PRODUCTIVITY
In construction, work or activity sampling is
useful because the data help identify conditions
that may be hindering the ability of craft workers
to complete the work. Once the conditions are
identified, the project team can take action to
eliminate or ameliorate these hindrances to craft
worker productivity.
As illustrated in Figure 4, activity analysis,
as a programmatic approach to evaluating
productivity, incorporates a cyclical process
of measuring performance. Steps in this
process include planning a productivity study,
conducting activity sampling, analyzing results
to identify conditions affecting productivity,
identifying and implementing improvements or
changes to address these conditions, and then
following up with another study to evaluate the
effects of the implemented changes. [3]
Planning the Study
The first step in the activity analysis cycle,
planning the study, includes:
• Notifying the craft workers of the upcoming study
• Characterizing the workforce
• Defining the activity categories
• Formatting the data collection sheets
[Figure 4. Activity Analysis Cycle: Plan Study → Collect Data → Analyze Data → Identify Improvements → Implement Improvements]
Notifying the Craft Workers of the
Upcoming Study
Individuals generally react negatively to
surveillance, since it is sometimes perceived as an
issue of mistrust. Notification may be cascaded
down through normal communications means,
such as toolbox meetings. Cascading information
about the upcoming study, the anonymity of the
process, and the true purpose of the study serves
to allay misgivings regarding the study. When
workers understand that it is a measurement
of the project team's ability to manage the
project and identify conditions that hinder
work performance, and not a prosecutorial
attempt to weed out the population, their
apprehension dissipates.
Characterizing the Workforce
The makeup of the workforce needs to be
reviewed to determine the limits of the final
results. Items that need to be considered include:
• Worker population
• Craft mix
• Methods of differentiating craft workers by discipline
Because the target population is the pool
of individuals engaged in productive work
activities that support the completion of
commodities, individuals in nonmanual
supervisory roles are normally excluded.
Foremen, general foremen, superintendents,
field engineers, and other nonmanual personnel
fall into this exclusion group.
The worker population needs to be reviewed to
assist the study planner in defining the study
length. Craft mix and current work plans need
to be reviewed to determine primary areas for
data collection tours. A method of differentiating
craft workers by discipline is critical because
the results cannot be similarly separated (nor
can supervisory personnel be excluded) unless
the observer is able to identify craft types and
supervisory personnel. On some projects, craft
workers wear numeric identification or color
coding to differentiate the disciplines and
supervisory levels. Whatever means is used, the
observer needs to understand the methodology
before beginning the sampling effort.
Defining the Activity Categories
Because activity sampling is not a new process,
numerous organizations and companies
have implemented variations of it along with
variations in activity category definitions.
Regardless, as long as the process is consistently
applied by unbiased observers, the data provide
valid results.
In general, there are two main categories of
activities, productive work and nonproductive
work:
Productive work may be defined as
any activity on which an individual or a
machine expends effort to complete a task
or commodity. Examples of productive
work include gathering and positioning
tools or equipment in preparation for
completing a task, gathering or transporting
material, reviewing a drawing or receiving
instructions, completing a weld, and
performing a lift with a crane.
Nonproductive work may be defined as the
opposite of productive work, i.e., activities
that do not contribute to the completion
of a task or commodity. Examples of
nonproductive work include idleness
while waiting for coworkers or a piece
of equipment to arrive at the workface,
moving an unburdened truck around
the jobsite, driving a front-end loader
with an empty bucket to a new location,
walking around the jobsite without tools or
materials, and engaging in nonwork-related
conversations or eating or drinking during
the normal shift.
A significant amount of thought must be given
to defining the categories of productive work
because, once defined, they should not be
changed. Changing category definitions renders
previous sampling sets invalid since they can no
longer be compared to subsequent sampling sets.
This situation adversely affects the cyclic nature
of activity analysis.
Several sets of categories developed by
various companies subdivide productive and
nonproductive work into subcategories. Within
Bechtel, there are seven subcategories, as shown
in Table 1.
Direct work involves exerting physical effort
toward completing an activity or physically
assisting in completing the activity.
Table 1. Bechtel Definitions of Work
Productive Work: Direct; Preparatory; Tools & Equipment; Material Handling
Nonproductive Work: Waiting; Travel; Personal
Preparatory work pertains to receiving a work
assignment and determining its requirements
before performing the task. It is nonphysical
and includes safety-related activities such as
stretching, completing job hazards analyses, and
participating in safety and toolbox meetings.
Tools and equipment work refers to activities
related to gathering, transporting, repairing, or
setting up tools and equipment in preparation
for commodity completion.
Material handling work pertains to activities
related to moving materials from one location
to another or actively seeking materials in a
storage area. Normally, this subcategory does
not include moving materials into final position
for installation, since this action constitutes an
effort toward commodity completion and, thus,
is categorized as direct work.
Waiting, per the dictionary, is defined as
inactivity. For activity sampling purposes, this
subcategory is consistent with the dictionary
definition; waiting is viewed as a temporary
pause in work activity, such as waiting for
instructions, waiting for a truck to arrive, or
waiting for a coworker to complete a task.
Travel describes unburdened movement, such
as moving a forklift or crane around the jobsite
without a load, walking around the jobsite empty
handed, or riding in the passenger seat of a truck.
Personal is the subcategory that accounts for
time spent on individual private activities during
a work shift. Personal activities include such
things as engaging in a private conversation;
drinking and eating during the work shift; and
donning, doffing, or adjusting personal clothing.
Formatting the Data Collection Sheets
Once the work categories have been defined,
observation worksheets are prepared for use
by the observer(s) during data collection tours.
The sample sheet shown in Figure 5 is divided
by activity categories in the second column and
sorted by craft head count; craft; and, in this
case, color coding identifier, as shown in the
upper rows.
The observer fills in the date, the tour number,
and the tour start and end times (typically an
hour's duration). As the observer makes each
observation, he or she records it with a hash
mark in the box aligned with the appropriate
craft column and with the appropriate row for
the observed activity.
Collecting the Data
Desirable Characteristics of the Observer
Because activity sampling is a statistical method
of studying the work pattern of a group, the
quality of the sampling program results depends
on objective (unbiased) observations of categories
and a random pattern of observations.
To be able to understand and classify the
activities being observed, individuals performing
observations should have backgrounds and
experience in the industry. For example, an
individual with minimal construction
[Figure 5. Typical Observation Worksheet]
experience may have difficulty understanding
fire-watch activities or pipefitters witnessing
a hydro test and be tempted to classify them
as nonproductive.
During data collection tours, the observer must
record his or her instantaneous first impression
or snapshot of each activity observed. In some
cases, it may be necessary to wait a few seconds to
ensure that the first impression is correct before
recording it. The observer must not anticipate
when a worker will start or stop any category
and must record the activity as it is observed.
When it is not clear what category appropriately
classifies an activity, the observer may need to
attempt to clarify the observation by moving
to a new vantage point. However, the observer
must recall the initial activity observed and
avoid recategorizing it based on a change of
state or new activity observed. If the observer is
unsure about an activity (which can sometimes
occur due to the observer's level of experience in
the construction industry), then the observation
should not be recorded.
Basic criteria for observers to follow include:
• Be impartial and unbiased at all times. The validity of the results depends on the randomness and accuracy of the observations.
• Eliminate bias from the sampling process by selecting observation routes randomly and varying them during successive tours by entering the work area from different points or entrances.
• Move to a new vantage point if unsure of what is being observed.
• Observe craft workers and equipment wherever they are encountered on the jobsite. Do not attempt to take observations from a concealed position.
• Make intelligent decisions about categories as appropriate; however, if there is any doubt, do not record uncertain observations. Much of the validity of activity sampling depends on obtaining good data. While sufficient data are needed to make the study statistically valid, the data collected must be accurate.
• Adjust tours during unusual shift events (such as extended safety meetings, training exercises, morale and recognition luncheons) by either moving to another area away from the influence of the shift event or curtailing sampling.
• Suspend data collection tours during rain or other inclement weather and during emergency drills.
• Do not reveal information or personal opinions about the work habits, performance, etc., of any crew or individual to anyone. Management should never question observers about these items.
Data Collection
Using the initial sample size, which is based on
an assumed target productivity level, the sample
is divided by the number of hours in the work
shift to establish the number of observations
needed per hour; for the example presented
previously, for an 8-hour work shift, 1,067 ÷ 8 =
133.4 ≈ 133. The observer should be able to meet
or exceed the required minimum number of
observations per hour without too much effort
unless the population is small (i.e., fewer than
200 personnel). Throughout the day, the observer
should make notes about any events, conditions,
or concerns that occur during tours. These notes
help during the data analysis phase and assist
as references during the preparation of any
required final report.
Typical Data Collection Tours
Typical observations focus on individual worker
activity and are taken throughout the regular
shift on a per-hour basis. The observer walks
random routes or tours through the work areas,
categorizing the observed worker activities. The
observer uses the time between observation
rounds to summarize and analyze the
observations made up to that point in the day. To
provide sorting capability during data analysis,
each sheet should be used for only 1 hour.
Data Compilation
Once the tours for a given day are completed,
the collected data are compiled, either manually
or in a spreadsheet program. The observation
counts are totaled by hour, and the day is totaled
by the end of the work shift.
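As a purely illustrative sketch of this compilation step (the tally values and data layout are invented; the category names come from Table 1):

```python
from collections import Counter

CATEGORIES = ["Direct", "Preparatory", "Tools & Equipment",
              "Material Handling", "Waiting", "Travel", "Personal"]

# One Counter of observation tallies per hourly tour (values invented).
hourly_tours = [
    Counter({"Direct": 52, "Preparatory": 23, "Tools & Equipment": 18,
             "Material Handling": 11, "Waiting": 27, "Travel": 19,
             "Personal": 6}),
    # ...one Counter per remaining tour in the shift
]

shift_totals = sum(hourly_tours, Counter())    # category totals for the day
n = sum(shift_totals.values())                 # all observations in the shift
pct = {c: 100.0 * shift_totals[c] / n for c in CATEGORIES}
productive_pct = sum(pct[c] for c in CATEGORIES[:4])  # first four are productive
```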
An example of a completed observation
worksheet is shown in Figure 6. This
example shows the results from a data
collection tour conducted between the hours of
3:00 PM and 4:00 PM. In this case, the observer
was able to gather 156 observations, which
exceeds the minimum number of required
hourly observations.
Observers should be instructed not to exceed
the minimum hourly sample size by a large
margin. One of the goals of sampling by the
hour is to establish a representative sample
throughout the work shift. A particularly large
group of observations in one tour may skew the
results, and final data may not be representative,
since various categories of activity may be more
prevalent at different times during the work shift.
Observers gathering data should be advised to
limit observation collection rates to keep the
total tour count to less than 25% over the target
minimum number of observations per hour.
Regardless, once an overly large number of
observations have been taken in any given hour,
the options are to either discard that data and
re-sample, or increase the minimum sample
size required for the rest of the tours. Because
increasing the sample size increases the relative
accuracy of the final results, it is generally
advantageous to do so. For the example provided
in Figure 6, the new target would be about
150 observations per hour.
After 500 to 600 observations have been taken,
the sample size needed to achieve the desired
sampling accuracy should be checked in case
the sample size needs to be adjusted based
on the actual observed productivity level. In
this example, the actual cumulative productive
work percentage is 52%. Using Equation 1, with
p = 52% = 0.52 and S = 5% = 0.05, the required
sample size N can be calculated as

$$ N \;=\; \frac{4\,(1-p)}{S^2\,p} \;=\; \frac{4\,(1-0.52)}{(0.05)^2\,(0.52)} \;\approx\; 1{,}477 \,. $$
Based on the observed productivity, the new
target number of observations required is 1,477;
thus, the new hourly target for an 8-hour shift is
185 observations.
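The same illustrative helper from the earlier sketch reproduces this mid-study revalidation:

```python
import math

def required_sample_size(p, s):
    # Equation 1 rearranged: N = 4*(1 - p)/(s**2 * p), at ~95% confidence.
    return math.ceil(4.0 * (1.0 - p) / (s ** 2 * p))

n_new = required_sample_size(p=0.52, s=0.05)  # observed 52% productive -> 1477
hourly_target = math.ceil(n_new / 8)          # 8-hour shift -> 185 per hour
```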
Analyzing the Data
The work sampling study is continued until the
desired accuracy is obtained. At this point, the
information collected during the study and the
percentages of time spent in each category are
analyzed to identify problem areas. Initial results
should be compared to target benchmarks, as
shown in Table 2, with the objective of increasing
the direct work percentages by reducing the time
spent in other activities.
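A small hedged sketch of this comparison step, using the Table 2 values (the code structure and the sign convention, observed minus target, are ours):

```python
TARGET = {"Direct": 34.7, "Preparatory": 8.9, "Tools & Equipment": 9.8,
          "Material Handling": 6.1, "Waiting": 18.2, "Travel": 14.8,
          "Personal": 7.5}
OBSERVED = {"Direct": 31.5, "Preparatory": 15.2, "Tools & Equipment": 13.6,
            "Material Handling": 4.7, "Waiting": 13.8, "Travel": 15.6,
            "Personal": 5.6}

# Positive variance = more time observed in a category than targeted.
variance = {c: round(OBSERVED[c] - TARGET[c], 1) for c in TARGET}
elevated = [c for c, v in variance.items() if v > 0]  # e.g., Preparatory: +6.3
```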
[Figure 6. Observation Worksheet After a Completed Tour: tour of 10/19/10, 3:00 to 4:00 PM]
Table 2. Comparison of Observed Productivity Against Target Benchmarks

Activity           Target   Your Project   Variance
Direct             34.7%    31.5%          −3.2%
Preparatory         8.9%    15.2%          +6.3%
Tools & Equipment   9.8%    13.6%          +3.8%
Material Handling   6.1%     4.7%          −1.4%
Productive         59.5%    65.0%          +5.5%
Waiting            18.2%    13.8%          −4.4%
Travel             14.8%    15.6%          +0.8%
Personal            7.5%     5.6%          −1.9%
Nonproductive      40.5%    35.0%          −5.5%
In the example shown in Table 2, the preparatory
work percentage is elevated, and the tools and
equipment percentage is slightly elevated as well.
Because preparatory work involves receiving a
work assignment and determining requirements
before performing the task, an elevated level
could indicate confusion on the part of the worker
or that a particularly complicated task requires
additional discussion time with supervisory or
field engineering personnel. An elevated level of
tools and equipment activity could be indicative
of tool program or tool room issues; broken, lost,
or insufficient numbers of tools and equipment;
or tool or tool room location issues.
As with tools and equipment, a higher percentage
of time spent in material handling activities
could be related to material location, lost
materials, or material program issues.
Waiting could indicate overstaffing. If work
group size is too large for the task assigned,
one or more crew members may be unable to
participate fully. Other possible contributors to
waiting include poor supervision or a shortage
of supervisory personnel, complicated designs or
engineering issues, and poor planning.
Personal activities are closely related to waiting.
Idle workers tend to engage in personal activities
because they have nothing else to do.
Alternatively, higher levels of waiting and personal
activity could indicate poor management during
the critical times of the day (shift start, before
and after breaks, and shift end).
Travel activities are primarily related to jobsite
layout. The locations of break areas, entry control
points for badge access, tool rooms, and welding
rod dispensaries, for example, all affect the
percentage of time spent on travel. Travel could
also be affected by frequently changing work
assignment priorities because the craft workers
have to disengage from the current activity,
travel to a new location, and re-engage in a
new activity.
If it is possible to differentiate crafts and gather
data by craft discipline, it is useful to evaluate
individual craft results. Comparisons such as
those shown in Figure 7 may be made to identify
the challenges to a particular discipline. Further
research into the conditions causing a particular
category to be above or below a target may be
necessary. The example in Figure 7 indicates
that the carpenters are spending 21 percent of
their time in material handling. In this case, the
project team should review factors that could
be behind this result, such as material storage
locations relative to the workface.
For data collected on a per-hour basis, a graph
may be prepared that illustrates the variability of
productivity over the entire shift. A representative
stacked percentage graph of the results from a
set of data collection tours is shown in Figure 8.
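One way to produce such a chart is sketched below with matplotlib (the hour labels and percentages are placeholders, not study data):

```python
import matplotlib.pyplot as plt

hours = ["7-8", "8-9", "9-10"]          # tour time slots (placeholders)
data = {                                # percent of observations per category
    "Direct Work":       [30, 36, 33],
    "Preparatory Work":  [14,  9, 10],
    "Tools & Equipment": [10,  8,  9],
    "Material Handling": [ 8,  9, 10],
    "Waiting":           [18, 17, 16],
    "Travel":            [13, 14, 15],
    "Personal":          [ 7,  7,  7],
}

bottom = [0] * len(hours)
for category, values in data.items():
    plt.bar(hours, values, bottom=bottom, label=category)  # stack each band
    bottom = [b + v for b, v in zip(bottom, values)]

plt.ylabel("Time Spent, by Work Category, %")
plt.legend()
plt.show()
```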
Completing the Activity Analysis Cycle
Once the analysis is completed, the remaining
steps in the activity analysis cycle are
implemented. Changes may be identified to
correct or improve a particular issue or
condition. Once changes have been identified
and put into action, planning should begin for a
follow-up study. This study should be completed
for two reasons:
• To determine if the changes have had the desired effect
• To determine if the changes have possibly created other issues
EVALUATION FREQUENCY
Although site conditions, temporary
facilities, and site layout that can affect
worker productivity should be reviewed and
assessed before contract award and before site
mobilization, these reviews do not involve
activity sampling. Normally, the initial activity
analysis should be done fairly early in the
schedule, at around 10% to 15% of construction
complete. This initial analysis should focus on
site logistics and work site flow.
[Figure 7. Comparison of Craft Performance, by Discipline: percentage of time spent by work category for PF (Pipefitter), EL (Electrician), BM (Boilermaker), LB (Laborer), CP (Carpenter), IW (Ironworker), and OP (Operator)]
A second analysis should be performed before
30% complete, with a focus on improving work
processes in the field. It may be beneficial to
implement additional evaluation tools during
this analysis, such as a foremen delay survey. This
survey involves interviewing approximately 10%
to 20% of the foremen on the jobsite in an effort
to identify the types of issues they encounter and
the frequency with which the issues occur. The
list of possible delays is developed in advance
and may include items such as waiting for
inspections, drawing problems, work package
problems, material delays, material loss, tool
availability, equipment availability, breakdowns,
absenteeism, changing priorities, and rework.
Each foreman interviewed is asked to provide
feedback on the frequency of delay due to
each item on the list. The interviewer assigns a
numeric value (0, 1, 2, or 3) based on the response,
with delays that occur daily or weekly scored as
3; delays that occur a couple of times a month,
as 2; delays that occur a couple of times a year,
as 1; and delays that have never occurred, as
0. The numeric average of each delay category
indicates which category occurs with the
greatest frequency.
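A minimal sketch of this scoring scheme (the delay items and responses are invented for illustration):

```python
# Frequency-to-score mapping from the text: daily/weekly = 3,
# a couple of times a month = 2, a couple of times a year = 1, never = 0.
SCORE = {"daily": 3, "weekly": 3, "monthly": 2, "yearly": 1, "never": 0}

# One dict per interviewed foreman: delay item -> reported frequency.
responses = [
    {"waiting for inspections": "weekly", "material delays": "monthly"},
    {"waiting for inspections": "daily",  "material delays": "never"},
]

items = sorted({item for r in responses for item in r})
averages = {item: sum(SCORE[r[item]] for r in responses if item in r)
                  / sum(1 for r in responses if item in r)
            for item in items}
# The highest average identifies the most frequent delay category.
ranked = sorted(averages.items(), key=lambda kv: kv[1], reverse=True)
```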
A third analysis should occur before
50% complete, and a fourth and final should
be made before punch listing and system
completion begin. Both of these activity analyses
should be conducted in a format similar to that
of the second.
The recommendations here are based on a
minimum frequency of evaluation. Follow-up
activity analyses may be conducted throughout
a project's lifecycle as needed to review specific
processes and evaluate the effectiveness of
implemented improvements.
CONCLUSIONS
Implementing a programmatic approach to
evaluating productivity is not a substitute for
using standard project controls tools to measure
commodity completion. The activity analysis
process described here provides a measure of
the level of activity, not the effectiveness of
the project's commodity completion effort. For
example, a worker observed grinding out a
bad weld would most likely be credited by the
observer as performing direct work. However,
the actual time allocated for completing the
weld would be increased (jobhours required
for original weld plus jobhours for repair) and
would, therefore, negatively affect the unit rate
or jobhours allocated per weld.
In the early 1980s, work or activity sampling
was inappropriately promoted as a means
of correlating labor output or commodity
completion to the percentage of time spent in
direct work activity. This led to the imprudent
use of work sampling results in legal cases. [4]
Another pitfall often encountered in the use of
activity analysis is attempting to compare the
results of one study to those of another that
employed a different set of defined categories.
This leads to confusion and misunderstanding
with regard to a project's performance.
[Figure 8. Overall Hourly Productivity: stacked percentage of time spent in each work category, by hour from 7:00 AM to 5:30 PM]
The primary advantage of activity analysis is
that it helps identify factors affecting or hindering
a workforce's ability to engage in direct work.
Increasing the time available to the workforce to
complete commodities gives projects the chance
to improve performance; however, the
effectiveness of taking this action depends on proper
planning and execution by the project team.
Other advantages of activity analysis include:
• Overall project work activity is observed at randomly selected areas and times, not by specific area or worker.
• Activity sampling is less disruptive to work activity and, thus, causes less stress and anxiety to the workforce.
• The process may be adapted to various project execution types, such as direct hire or construction management.
• A project may be compared to itself in subsequent analyses. The effects of varying manpower levels and the project percentage complete may be taken into consideration when reviewing the results.
• Observers with construction experience can conduct observations with a minimum of specialized training.
• Activity sampling collects data not normally collected by any other means.
• Activity sampling is more cost effective than continuous monitoring techniques.
Implementing an activity analysis is an
extension of performing simple work or
activity sampling and is part of a continuous
improvement program. Using activity analysis
on a construction project enables the project
team to understand the barriers to productivity
so that steps may be taken to eliminate them.
The cyclical nature of the process aligns with the
changing environment of a construction project
so that barriers to productivity not present in the
initial studies can be identified and addressed in
a more timely manner as they evolve.
REFERENCES
[1] R.M. Barnes, Motion and Time Study: Design and Measurement of Work, 7th ed., John Wiley & Sons, 1980, pp. 406–440, see http://www.amazon.com/Motion-Time-Study-Design-Measurement/dp/0471059056#_.
[2] K.G.R. Heinze, "Performance Measures by Means of Activity Sampling," 1984 AACE Transactions, D.4, AACE International, Morgantown, WV, 1984, see http://www.aacei.org/cgi-bin/litsearch.pl.
[3] C.H. Caldas, D.D. Christian, P.M. Goodrum, M.C. Gouett, R. Granger, C.T. Haas, G.M. Stofega, Jr., and S.A. Toon, Guide to Activity Analysis, Craft Productivity Research Program Research Team, Construction Industry Institute, Implementation Resource IR252-2a, Austin, TX, July 2010, access via https://construction-institute.org/source/Orders/index.cfm?section=orders.
[4] H.R. Thomas, "Labor Productivity and Work Sampling: The Bottom Line," Journal of Construction Engineering and Management, Vol. 117, No. 3, ASCE, September 1991, pp. 423–444, access via http://scitation.aip.org/coo/.
BIOGRAPHY
Steven Toon leads the
productivity engineering
effort for Bechtel Construction
Operations as part of the
Construction Engineering
and Technologies group. He
supports Bechtel's various
business lines and projects
by performing productivity
studies, including data
collection, analysis, training, and interpretation.
Steve has over 30 years of management, supervision,
design, and construction engineering experience
related to direct hire work, subcontracting, quality
management, telecommunications, nuclear power,
and the federal government.
Steve began his construction engineering career with
Pullman Power Products at Diablo Canyon nuclear
power plant in California and then at Vogtle nuclear
power plant in Georgia. He joined Bechtel at the
Department of Energy's Savannah River Site in 1989
and, during his 11 years there, held a variety of
progressively more responsible positions, including
lead field engineer, area superintendent, field design
group lead, and project manager. In 2000, Steve moved
to the Communications business line as the project
field engineer and, subsequently, the project
construction manager for a major project with
Cingular. He joined the Bechtel Construction
Operations staff in 2007 to guide and foster the growth
of productivity engineering within the company.
Currently, Steve serves on a Construction Industry
Institute (CII) Research Team on Craft Productivity
Research, RT-252. In 2010, he spoke on craft
productivity at the Construction Business Forum
jointly sponsored by Engineering News-Record (ENR)
and the Construction Users Roundtable (CURT) and
at the CII annual conference. Steve is also a member
of the American Society for Quality and the American
Institute of Constructors.
Steve has a BS from California Polytechnic State
University, San Luis Obispo, and an AS in Agricultural
Engineering from Modesto Junior College. He is a
Six Sigma Yellow Belt.
2011 Bechtel Corporation. All rights reserved. 13
INTRODUCTION
T
his paper proposes a new methodology
for developing ground motion attenuation
relationships for given project sites in regions
of relatively low seismicity for use as input
to seismic hazard analysis. This methodology
is most useful for projects that lack a region-
specific ground motion attenuation model
and/or empirically recorded ground motions.
An appropriate ground motion attenuation model
is always one critical component of seismic hazard
analysis. Typically, ground motion attenuation
relationships have been developed for regions of
relatively high historical seismicity both because
the empirical strong ground motion data needed
to develop empirical attenuation relationships
are widely available and because the earthquake
damage potential is significant in these high-
seismicity regions. However, regions of relatively
low seismic activity also may require applicable
ground motion attenuation relationships for
seismic hazard analysis. Due to the paucity of
empirical ground motions for these regions,
Issue Date: June 2011
AbstractAny quantitative description of earthquake ground motion to be incorporated into a facilitys
design depends in part on a model that predicts the amplitude of ground shaking as a function, principally,
of earthquake size and distance. Models of this type are called strong ground motion attenuation equations or
attenuation relationships. The functional form and terms of an attenuation equation must be sufficient to match
the main features of the ground motions over the entire expected range of magnitudes, distances, and structural
response periods of engineering interest.
In regions with a history of damaging earthquakes in adequate number and adequately recorded on specialized
strong ground motion instruments, the most direct way to develop an appropriate attenuation relationship is to
use existing empirical data to fit a representative functional form.
In regions with no history of damaging shaking, it might be supposed that quantitatively precise, defensible
estimates of design earthquake ground motions would be of only secondary interest. In fact, the stringent
requirements of the design criteria for critical facilities and the relatively short historical record very often require
that attenuation relationships be developed even in low-seismicity regions. By definition, however, strong
ground motion data are sparse or nonexistent in such regions. Therefore, it is often necessary to use ground
motion simulation approaches, such as the stochastic point-source model, to develop a computer-generated,
strong ground motion data set that can, in turn, be used to develop an attenuation relationship. The
seismological input parameters for the simulations are typically based on regional studies. Given a data set
of simulated ground motions and a functional form for the attenuation equation, a maximum likelihood
estimation (MLE) procedure may then be used to determine regression coefficients for developing a new ground
motion attenuation relationship, much as is done with actual empirical data from seismically active regions.
The strong motion simulationmaximum likelihood regression (SMSIM-MLREG) toolbox described in this paper
comprises a set of programs that may be used to simulate ground motions and to develop appropriate ground
motion attenuation relationships in a given low-seismicity region based on these motions and the MLE approach.
This toolbox provides a streamlined procedure from start to finish for developing a region-specific ground motion
attenuation relationship that may be incorporated in a seismic design characterization of a given site.
Keywords: attenuation relationship, ground motion simulation, maximum likelihood estimation (MLE), maximum likelihood regression (MLREG), nonlinear regression, seismic hazard, source parameter randomization, stochastic point-source model, strong motion simulation (SMSIM)
PREDICTION OF
GROUND MOTION ATTENUATION
IN LOW-SEISMICITY REGIONS
Behrooz Tavakoli, PhD
[email protected]
Nick Gregor, PhD
[email protected]
Due to the paucity of empirical ground motions for these regions, these attenuation relationships are typically developed based on the numerical modeling of expected ground motions.
Probabilistic seismic hazard analysis (PSHA)
has become an important part of earthquake
design spectra criteria for projects ranging
from conventional structures under modern
building codes to critical facilities such as
liquefied natural gas (LNG) plants, petroleum
facilities, and nuclear power stations. PSHA
results depend significantly on the strong
ground motion attenuation models (and their
uncertainties) incorporated within the analysis.
Deterministic ground motion studies are
also influenced by the selection of applicable
ground motion attenuation models and their
associated uncertainties.
The strong motion simulation (SMSIM) approach
[1] to generating numerical ground motions,
which is based on the stochastic simulation
method, uses regionally determined source
and propagation path input parameters. The
resulting simulations form the synthetic data
set that is then used in the regression analysis
to develop an applicable region-specific ground
motion attenuation model. For the regression
analysis, the functional form of the ground
motion attenuation model is selected to be
relatively simple yet able to capture complicated
ground motion behavior in terms of magnitude,
distance, and other descriptive parameters. To
perform the regression, this paper presents a
new maximum likelihood estimation (MLE)
procedure, the maximum likelihood regression (MLREG), and demonstrates its validity and
ease of use in developing new empirical ground
motion attenuation relationships in regions
where no empirical and/or actual strong ground
motion data are available.
Taken together, the SMSIM approach and
MLREG constitute the SMSIM-MLREG toolbox,
a set of programs that may be used to develop
attenuation relationships in areas where there
are an insufficient number of strong ground
motion recordings from earthquakes.
BACKGROUND
To develop an attenuation relationship's functional form, one must know the
fundamental characteristics of earthquakes in
a given region. Campbell [2] has reviewed and
summarized significant factors that can affect
strong ground motion attenuation. In general,
magnitude, distance, and site conditions are
the principal variables used to predict future
ground motions. Additional parameters can be
incorporated into an attenuation model based on
the analysis of the residuals between the data set
and the functional model.
Once the functional model has been selected, the
next step is to develop a suite of ground motion
data for the specific region. With the SMSIM-
MLREG toolbox, the data set is developed based
on the numerical modeling (i.e., stochastic point-source simulations) of ground motions using seismological input parameters appropriate for a given region.
ABBREVIATIONS, ACRONYMS, AND TERMS
BLWN    band-limited, Gaussian white noise
ENA     eastern North America
FAS     Fourier amplitude spectrum
LNG     liquefied natural gas
MLE     maximum likelihood estimation
MLREG   maximum likelihood regression
OLS     ordinary least-squares
PGA     peak ground acceleration
PSA     peak spectral acceleration
PSHA    probabilistic seismic hazard analysis
RVT     random vibration theory
SMSIM   strong motion simulation
WNA     western North America

UNITS OF MEASURE AND MATHEMATICAL NOTATIONS
cm      centimeter
f       frequency
f_c     source corner frequency
f_max   site corner frequency
Hz      hertz
km      kilometer
LN      log normal (natural log)
M_0     seismic moment
Q_o     quality factor
R       distance
V_s     shear-wave velocity
Δσ      Brune stress drop (stress parameter)
ε       random error
κ       spectral decay parameter
ω       angular frequency
Although more complicated
kinematic and dynamic models have been used
to simulate ground motions (e.g., see [3]), the
stochastic method developed by Boore [1, 4] is
used for this analysis because of its simplicity
and appropriateness for the development of
engineering estimates of strong ground motions.
Having selected a functional form for the
attenuation relationship and developed a suite
of stochastic ground motion data, one must
choose a statistical procedure to determine
the period-dependent coefficients in the
functional model. Such a procedure is referred
to as regression analysis. If the selected
ground motion attenuation model is linear with
respect to the coefficients, then standard linear
regression procedures can be used. If not, a
nonlinear regression procedure must be used.
The MLREG program developed in this study
uses the statistical toolbox in the MATLAB®¹ environment to determine regression coefficients for either linear or nonlinear equations.

¹ MATLAB is a high-level language and interactive environment that enables computationally intensive tasks to be performed faster than with the more traditional programming languages.
GROUND MOTION ATTENUATION RELATIONSHIPS
A basic form for an attenuation relationship is the logarithmic equation

ln Y = c_1 + c_2·M − c_3·ln[f_1(R)] − c_4·f_2(R) + c_5·f_3(P) + ε ,   (1)
where Y is the ground motion parameter of
interest, such as peak ground acceleration
(PGA) or peak spectral acceleration (PSA) at
some defined period; M is magnitude; R is a
measure of the distance from the source to
the site being considered; P is a description of
the local conditions beneath the site; and ε is a random error term with a mean of zero and a standard deviation of σ_lnY, representing the uncertainty in Y. In some studies, an additional
term to account for the observed differences in
ground motions due to earthquake mechanism
(e.g., strike-slip, thrust, normal) is included in
the mathematical representation of the ground
motion parameter. Several more-complex ground
motion attenuation models are currently in use
as well to model empirical ground motion data.
The period-dependent coefficient parameters c_1 through c_5 for this model are obtained from the regression process for spectral period T. In more complicated forms of Equation 1, the coefficients c_2, c_3, c_4, and c_5 can be defined in terms of M and R.
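For illustration only, the basic form of Equation 1 translates directly into a few lines of Python. The identity choices for f_1 and f_2 and the omission of the site term c_5·f_3(P) are assumptions made here for brevity (the toolbox itself drops the site term because all simulations share one site condition):

import numpy as np

def ln_ground_motion(M, R, c, f1=lambda r: r, f2=lambda r: r):
    # Median ln(Y) from the basic form of Equation 1; c = (c1, c2, c3, c4).
    # f1 and f2 default to the identity, an assumption for illustration.
    return c[0] + c[1] * M - c[2] * np.log(f1(R)) - c[3] * f2(R)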
The physical basis for this simplified attenuation
relationship is seismological theory. In
Equation 1, the first term, c_1 + c_2·M, is called the magnitude scaling term, and is consistent with
the original definition of earthquake magnitude,
in which ground motions increase exponentially
with a linear increase in magnitude. Alternative
functional forms can be used for the magnitude
scaling term. For example, the piecewise linear
relationships (e.g., see [5]) or the quadratic
magnitude term (e.g., see [6]) have been proposed
where magnitude saturation is an issue for M ≥ 6.0.
Magnitude saturation occurs when incremental
increases in magnitude are associated with ever-
smaller increases in ground motion of a defined
period. In unconstrained and extreme cases, this
can lead to the unrealistic effect of predicting, for
a given distance and on firm foundations, smaller
ground motions with increasing magnitude. In
some cases, a purely statistical regression on a
given empirical data set can lead to a ground
motion attenuation model that contains this
unacceptable magnitude saturation result.
The terms c_3·ln[f_1(R)] and c_4·f_2(R) are called distance scaling terms. They are consistent,
respectively, with the geometrical spreading
attenuation of seismic waves as they propagate
away from the earthquake source and with
anelastic attenuation (i.e., material damping
and scattering). Both of these functional terms
are associated with the basic seismological
principles of seismic wave propagation in elastic
media. In some cases, these terms are varied as
a function of magnitude to accommodate the
observed distance saturation properties of strong
ground motion.
The last term, +c_5·f_3(P), is used to model the effect of the local site conditions on the ground motions. The coefficient c_5 can be a function of magnitude and distance if f_3(P) is found to correlate with these parameters. For the
to correlate with these parameters. For the
SMSIM-MLREG toolbox presented in this study,
simulated ground motions are all for the same
site conditions; hence, this term is not used in the
regression analysis.
Random error term ε is usually assumed to be
log-normally distributed. The error term is the
difference between a ground motion observation
(based on either numerical simulations or
empirical data) and its predicted value. Although
this total uncertainty is typically separated into
intra- (within) and inter- (between) earthquake
components, the MLREG program currently
only computes the total uncertainty between the
data set and functional model.
In general, the selection of a functional form
for the ground motion attenuation relationship
should be guided by the data set. If the
data set represents relatively long-distance
ground motions (i.e., distances greater than
100–200 km), then the distance saturation (and to a
lesser extent the magnitude saturation discussed
earlier in this section) can be ignored and a simple
functional model (i.e., a linear functional model)
is justified. If the data set represents a relatively
uniform subset of data, then a functional form
having only a few parameters is appropriate.
STOCHASTIC GROUND MOTION MODEL
The stochastic method assumes that the average
horizontal component of ground motion may
be modeled as band-limited, Gaussian white
noise (BLWN) and that the peak amplitude
may be approximated using random vibration
theory (RVT). [7, 8, 9, 10] This method assumes
that the seismic shear-wave energy represented
by the Fourier amplitude spectrum (FAS) is
band-limited by the source corner frequency (f_c) at low frequencies and by the site corner frequency (f_max) or the spectral decay parameter (κ) at high frequencies. This section gives a brief
overview of the seismological models used in the
stochastic ground motion model and, in turn,
in the SMSIM program.
Fourier Amplitude Spectrum
A point-source stochastic model in the frequency domain assumes that the total FAS of acceleration A(f) for horizontal ground motions due to shear waves may be modeled by the general relation [1]

A(f) = E(f, M_0) · D(f, R) · P(f) ,   (2)

where M_0 is the seismic moment (dyne-cm), R is the distance (km), and f is the frequency (Hz). E(f, M_0) is the point-source spectrum term, D(f, R) is a diminution factor accounting for both geometrical and anelastic attenuation, and P(f) is a low-pass filter to model the decrease of Fourier amplitude spectra at high frequencies (i.e., site amplification factor).
Point-Source Spectrum Model [E(f, M_0)]
The most commonly used point-source model is based on the Brune spectrum. [11, 12] This basic seismological model of a ground acceleration spectrum has a simple ω-squared shape, where ω is angular frequency (that is, 2πf). This model assumes that the earthquake source is a circular fault and that the ground acceleration spectrum from this simplified source increases as ω² for frequencies below source corner frequency f_c and is flat for frequencies greater than f_c but less than site corner frequency f_max. The amplitude spectrum level begins to drop at higher frequencies beyond f_max. The choices of f_c and f_max depend mainly on the earthquake size and the site condition, respectively.
The Brune stress drop (Δσ) is generally computed
from the high-frequency energy of the Fourier
amplitude spectra of measured earthquakes.
[13] Higher stress drops increase the corner
frequencies. Higher corner frequencies in
turn increase the amplitude levels at higher
frequencies. The stress drop value may be used
as a fitting parameter to adjust the source
spectrum model to adequately model observed
ground motions that may not fit a single-
corner source model. To avoid confusion, some
seismologists prefer to call the Brune stress
drop the stress parameter.
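As a concrete illustration of the single-corner source model, the short Python sketch below evaluates the relative shape of a Brune acceleration spectrum. The corner-frequency expression is the one commonly used in the stochastic method (with M_0 in dyne-cm, Δσ in bars, and the source shear-wave velocity in km/s); absolute scaling constants (radiation pattern, free-surface effect, etc.) are deliberately omitted, so the function returns a shape rather than absolute amplitudes:

import numpy as np

def brune_acceleration_shape(f, M0, delta_sigma, beta=3.6):
    # Source corner frequency: f_c = 4.9e6 * beta * (delta_sigma / M0)**(1/3)
    fc = 4.9e6 * beta * (delta_sigma / M0) ** (1.0 / 3.0)
    # Omega-squared shape: grows as f**2 below f_c and is flat above it
    return (2.0 * np.pi * f) ** 2 / (1.0 + (f / fc) ** 2)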
Filter Function of the Transfer Media [D(f, R)]
The loss of wave energy within a geological
medium (crustal attenuation) is calculated
by multiplying a point-source geometrical
attenuation factor by a deep crustal damping
factor. The geometrical attenuation factor is
modeled using the distance parameter and
depends mainly on the regional thickness of the Earth's crust. For example, in eastern North
America (ENA), the geometric attenuation of
seismic waves may be given by a three-part
expression (e.g., see [14]). The spherical spreading of body waves results in an R^-1 amplitude decay within a 70 km range. Beyond 70 km, the direct shear waves are superimposed on waves reflected from the Moho², offsetting any decay in the amplitude of seismic waves between 70 and 130 km (i.e., R^0 amplitude decay). The cylindrical spreading of surface waves results in an R^-0.5 amplitude decay beyond 130 km. In western North America (WNA), an example of the geometrical attenuation model is defined as spherical spreading of R^-1 to a distance of 40 km and a cylindrical spreading of R^-0.5
beyond 40 km. [15] Note that there are many other published geometrical attenuation models for both WNA and ENA. Any geometrical model can be used with the SMSIM-MLREG toolbox to develop ground motion attenuation relationships for a given region.

² The Mohorovičić discontinuity, first identified in 1909 by Andrija Mohorovičić, a Croatian seismologist, and usually referred to simply as the Moho, is the boundary between the Earth's crust and mantle.
The shallow crustal damping (diminution of ground motions) may be modeled as being proportional to the factor exp(−γR), where R is distance and γ is the coefficient of anelastic attenuation, given by Campbell [16] as

γ = πf / (Q·V_s) ,   (3)

where quality factor Q models anelastic attenuation and scattering within the deep crustal structure and V_s is the seismic shear-wave velocity used to determine Q. The quality factor model is considered as a function of frequency and may be modeled as the median of seismic wave attenuation within the lower and upper uncertainty levels. [14, 17]
Caution should be used in coupling a given
geometrical attenuation model and an anelastic
attenuation model because these two models are
highly correlated.
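A minimal Python sketch of the combined diminution factor D(f, R), pairing the three-part ENA geometrical attenuation described above with the anelastic damping of Equation 3, is shown below; the frequency-dependent form Q(f) = Q_o·f^η and the value η = 0.4 are one common parameterization and are assumptions of this sketch:

import numpy as np

def diminution(f, R, Qo=351.0, eta=0.4, Vs=3.6):
    # Three-part ENA geometrical attenuation (R and hinge distances in km)
    if R <= 70.0:
        G = 1.0 / R                            # spherical spreading, R**-1
    elif R <= 130.0:
        G = 1.0 / 70.0                         # Moho bounce, no decay (R**0)
    else:
        G = (1.0 / 70.0) * np.sqrt(130.0 / R)  # surface waves, R**-0.5
    Q = Qo * f ** eta                          # assumed Q(f) parameterization
    return G * np.exp(-np.pi * f * R / (Q * Vs))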
Filter Function of the Local Site Conditions [P(f)]
Anderson and Hough [18] proposed a low-pass filter based on the spectral decay parameter (κ), which produces a near-surface attenuation of high-frequency energy. This κ-filter (shallow crustal damping) is defined as the high-frequency slope of the Fourier amplitude spectra. Anderson and Hough [18] found κ approaching a constant value near the epicenter of a seismic event and assumed that it is dependent on the subsurface geology. At larger distances from the source, κ increases slightly due to path effects associated with wave propagation in the crust and quality factor Q_o.
The shallow crustal model beneath the site defines shear-wave velocity V_s and density as functions of depth. When seismic waves travel through the crust, the amplitude, frequency content, and duration of ground surface motions change. The extent of these changes depends mainly on the geometry and properties of the subsurface materials. Site amplification factors may be computed using the quarter-wavelength approximation method. [19] In this method, the site amplification factor at a specific frequency (or wavelength) is given by the square root of the ratio between the seismic impedance at the source and the seismic impedance (product of V_s and density) at the site averaged over a depth equal to one quarter of the wavelength.
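The quarter-wavelength recipe can be sketched in a few lines of Python for a layered profile; the depth-weighted impedance average used here is a simplification of the formal quarter-wavelength averaging, so this is an illustration rather than a production implementation:

import numpy as np

def qwl_amplification(f, thickness, vs, rho, vs_src=3.6, rho_src=2.8):
    # thickness (km), vs (km/s), rho (g/cm**3) from the surface down;
    # vs_src and rho_src are the source-region values.
    target = 1.0 / (4.0 * f)          # quarter-period travel time
    tt = z = imp = 0.0
    for h, v, r in zip(thickness, vs, rho):
        dt = h / v
        if tt + dt >= target:         # stop partway into this layer
            h = (target - tt) * v
            z += h
            imp += h * v * r
            break
        tt += dt
        z += h
        imp += h * v * r
    return np.sqrt((vs_src * rho_src) / (imp / z))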
Peak Ground Motion Parameters [ln Y ]
The stochastic model can provide the average
amplitude level of earthquakes for a wide
range of magnitudes and distances. [1, 10] First,
the stochastic model is used to generate an
acceleration time history as Gaussian white noise.
Then, the FAS of the time history is combined
with the seismological model of ground motion
to obtain the desired spectrum shape at the near-
source distance as a function of earthquake size.
Finally, RVT is used to determine maximum
ground motion parameters, such as PGA and
PSA, from root mean square parameters. [20]
These simulated ground motions are used to
develop a new attenuation relationship.
The ground motion values of Y from
Equation 1 are obtained by the Cartwright
and Longuet-Higgins [21] approach using the
maxima of a random function. This approach
assumes that the phases of a stochastic function
are random and uniformly distributed between
0 and 2π. The ground motion duration is given by T_gm = T_s + T_p, in which T_s is the source duration and T_p is the path duration. The source duration
can be defined as the time for the fault to rupture
and is proportional to the inverse of the source
corner frequency. [10] The path duration depends
on the epicentral distance and can be estimated
based on the method proposed by Atkinson
and Boore. [14]
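The RVT step can be approximated compactly, as in the Python sketch below, which forms spectral moments of the FAS and applies the asymptotic Cartwright and Longuet-Higgins [21] peak factor. SMSIM evaluates the underlying integral more exactly, so this should be read as an approximation for illustration:

import numpy as np

def rvt_peak(freqs, fas, duration):
    # freqs (Hz) and fas: one-sided Fourier amplitude spectrum of acceleration;
    # duration: T_gm = T_s + T_p in seconds.
    w = 2.0 * np.pi * freqs
    m0 = 2.0 * np.trapz(fas ** 2, freqs)          # spectral moments
    m2 = 2.0 * np.trapz(w ** 2 * fas ** 2, freqs)
    a_rms = np.sqrt(m0 / duration)                # root mean square motion
    n_z = (duration / np.pi) * np.sqrt(m2 / m0)   # expected zero crossings
    n_z = max(n_z, 2.0)
    # asymptotic Cartwright and Longuet-Higgins peak factor
    pf = np.sqrt(2.0 * np.log(n_z)) + 0.5772 / np.sqrt(2.0 * np.log(n_z))
    return pf * a_rms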
MAXIMUM LIKELIHOOD ESTIMATION METHOD
With a proposed functional form with
which to model the attenuation of strong
motion and a body of strong motion data
simulated using principles of wave generation
and propagation appropriate to a region of
interest, the final task becomes fitting the data
to the model.
Equation 1 can be recast in the form of a general
attenuation relationship,
y
i
= f(x
i
; ) + e
i
, (4)
where y
i
is a ground motion parameter, such
as PGA or PSA, from the i
th
earthquake and
x
i
is a vector of predictors such as earthquake
magnitude, source-to-site distance, or other
site and source conditions, also from the i
th
earthquake. The in this equation is a vector of
regression coefficients to be estimated, and f is a
known functional form. Error term e
i
is assumed
to be an independent zero-mean normal random
variable with a constant variance,
2
.
Initially, the functional form is assumed to be
linear with respect to the coefficients to be
estimated. (As is discussed later, we can modify
the linear procedure using a Taylor expansion to
accommodate a nonlinear regression model.) For
example, a suitable transformation of data can be
found to reduce a nonlinear attenuation model to
a linear attenuation model. Equation 4 can now
be replaced by a linear system:
be replaced by a linear system:

y_i = x_i·β + e_i .   (5)
Given the assumption that e_i in Equation 5 is normally distributed, then dependent variable y_i is normally distributed as well. This implies that y_i is a continuous and unbounded normal distribution function. Suppose that the data set consists of N ground motion records coming from E earthquakes (x_i, y_i), i = 1, 2, …, N. It follows that the logarithm of the likelihood function for the whole data set is given in matrix form by the following expression:
ln L(y | x, β, σ²) = −(N/2)·ln(2πσ²) − (y − Xβ)ᵀ(y − Xβ) / (2σ²) ,   (6)
where T denotes matrix transposition. The maximum likelihood estimates of the parameters (β, σ²) are found by maximizing the log-likelihood function above. Finding the values of β and σ² that maximize the log-likelihood is easily done by taking the derivative of Equation 6 with respect to β and σ², setting it equal to zero, and solving for the parameters to be estimated. Following this procedure, we can find a closed-form solution for β̂ by the following matrix expression:
β̂ = (XᵀX)⁻¹Xᵀy .   (7)
Equation 7 indicates that MLE and ordinary least-squares (OLS) both give the same estimator for the regression coefficients. Expected or predicted ground motion is calculated by substituting β̂ into Equation 4. Thus, we have the prediction of ground motion values by the expression

ŷ = Xβ̂ = X(XᵀX)⁻¹Xᵀy = Hy ,   (8)

where H = X(XᵀX)⁻¹Xᵀ is called the hat matrix because it transforms, or projects, observed ground motion y into predicted ground motion ŷ.
Because we have already obtained β̂, we can find a closed-form solution for σ² by taking the derivative of Equation 6 with respect to σ², setting the result equal to zero, replacing β with its estimate β̂, and solving for σ². The solution leads to the following matrix expression:

σ̂² = (y − Xβ̂)ᵀ(y − Xβ̂) / N .   (9)
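For the linear case, Equations 7 and 9 translate directly into Python (a sketch of the closed-form estimates, not the MLREG implementation itself):

import numpy as np

def mle_linear(X, y):
    # Equation 7: closed-form coefficient estimate (identical to OLS)
    beta = np.linalg.solve(X.T @ X, X.T @ y)
    resid = y - X @ beta
    # Equation 9: (biased) maximum likelihood variance estimate
    sigma2 = resid @ resid / len(y)
    return beta, sigma2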
To accommodate a nonlinear regression model, we modify the above procedure by using a Taylor expansion. We begin by first noting that in a small neighborhood of a trial value β°, the true value of β can be found using a truncated Taylor expansion. Thus, the general attenuation relationship in Equation 4 can be expressed in the following matrix form:

y = f(X, β) + ε ≈ f(X, β°) + X*(β − β°) + ε ,   (10)

where X* is the partial derivative matrix of f with respect to β, and β° is a trial value of β.

We can obtain an approximate linear attenuation model by the expression

y* = X*β + ε ,   (11)

where y* = X*β° + [y − f(X, β°)], X* is the N × p partial derivative matrix, and p is the number of regression coefficients to be estimated. Therefore, the process of maximizing the likelihood in Equation 6 involves calculating matrix X* and vector y* and then using them in place of X and y, respectively. There is no closed-form solution for Equation 11 because X* is now a function of β. Thus, in most cases, an iterative method must be employed to obtain β̂ and σ̂².
We employed the Gauss-Newton algorithm to solve this problem. The proposed computation algorithm used in the MLREG program is described as follows:

1. Start with an initial estimate β° for β.
2. Estimate β̂ in attenuation model Equation 7 by substituting X* for X and y* for y.
3. Treat the value β̂ as the initial value β° in the next approximated linear model.
4. Repeat steps 1 through 3 until the solution converges, that is, for successive iterations j, j + 1:

|(β̂_(j+1) − β̂_j) / β̂_j| < δ ,   (12)
where δ is a predetermined small amount (i.e., 1.0 × 10^-6) in the MLREG program.

5. The value of σ̂² given by Equation 9 is not unbiased. An unbiased estimate is

σ̂² = (y − Xβ̂)ᵀ(y − Xβ̂) / (N − p) .   (13)
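A minimal Python sketch of this Gauss-Newton loop is given below. Here the model function and its Jacobian are supplied by the caller, an assumption of the sketch (MLREG builds them from the chosen functional form within the MATLAB environment):

import numpy as np

def gauss_newton_mle(f, jac, X, y, beta0, tol=1.0e-6, max_iter=100):
    beta = np.asarray(beta0, dtype=float)
    for _ in range(max_iter):
        Xstar = jac(X, beta)                      # X*: N x p derivative matrix
        ystar = Xstar @ beta + (y - f(X, beta))   # y* of Equation 11
        beta_new = np.linalg.solve(Xstar.T @ Xstar, Xstar.T @ ystar)
        converged = np.all(np.abs((beta_new - beta) / beta) < tol)  # Eq. 12
        beta = beta_new
        if converged:
            break
    resid = y - f(X, beta)
    sigma2 = resid @ resid / (len(y) - len(beta))  # unbiased, Equation 13
    return beta, sigma2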
RANDOMIZATION OF SOURCE PARAMETERS
In the development of the ground motion simulations, certain seismological input parameters for the stochastic point-source model are
typically defined based on a median value
with an associated uncertainty. To capture
this parametric uncertainty, the simulation
procedure can generate suites of ground
motions based on the defined median and
uncertainty values for any or all input parameters.
As part of the SMSIM-MLREG toolbox, a
streamlined preprocessing program was created
that allows the user to easily generate suites
of necessary input parameter files based on
the user-defined median parameter values and
their associated uncertainties.
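One possible randomization scheme is sketched below in Python; redrawing out-of-range samples is an assumption made here, since the paper does not state how the toolbox enforces the minimum and maximum allowable values:

import numpy as np

def draw_lognormal(median, sigma_ln, lo, hi, n=100, seed=0):
    # Truncated log-normal draws: median and sigma_ln in natural-log units,
    # lo/hi the allowable bounds (cf. Table 1); out-of-range draws are redrawn.
    rng = np.random.default_rng(seed)
    out = []
    while len(out) < n:
        x = median * np.exp(sigma_ln * rng.standard_normal())
        if lo <= x <= hi:
            out.append(x)
    return np.array(out)

# e.g., the 100 stress-parameter realizations of Table 1:
# stress = draw_lognormal(120.0, 0.7, 14.0, 980.0, n=100)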
As an example, the SMSIM-MLREG toolbox was
used to generate a suite of 100 randomized input
cases based on the parameters given in Table 1.
The randomization parameters are given for the
four input parameters: quality factor (Q_o), stress parameter (Δσ), spectral decay parameter (κ), and hypocentral depth. The resulting statistical
values based on the randomized 100 realizations
for these four input parameters are also listed in
Table 1. In comparing the computed median and
uncertainty values with the user-defined values
listed in Table 1, the randomized selection of
each of the four input parameters is considered
acceptable. The statistical distribution of the
100 randomized parameter values for Q_o, Δσ, κ, and hypocentral depth are plotted in
Figures 1a, 1b, 1c, and 1d, respectively, along
with the expected log-normal distribution of each
parameter, given the median and uncertainty
values listed in Table 1. The randomized values
are shown by the red bars, and the expected
values are shown by the blue bars. The results
shown in Figures 1a through 1d and the statistical results
listed in Table 1 confirm the acceptability of the
randomized selection procedure.
Note that exact agreement between the numerical
median and uncertainty values and the graphical
distribution plots shown in Figures 1a through 1d is
not expected based on the limited number of
realizations (i.e., 100 samples). For this example,
the minimum and maximum values for three parameters (Q_o, Δσ, and κ) are selected with
a large enough range so that the randomized
values would not be expected to fall outside these
limits. For the hypocentral depth parameter, the
realistic upper and lower depth range values of
5.0 and 20.0 km, respectively, limit the observed
distribution of randomly selected hypocentral
depth values. In addition, this limited upper and
lower range leads to the smaller-than-expected
uncertainty value of 0.362 when compared to
the user-defined uncertainty value of 0.60 and
a skewed distribution of hypocentral values
greater than the given input median value of
8.0 km (see Figure 1d). Increasing these upper
and lower limits for the hypocentral depth would
result in a better agreement between the input
values and the resulting statistical values from
the 100 random samples. However, this would
lead to unrealistic seismological values for the
distribution of hypocentral depths.
Table 1. User-Defined Input Parameters (Median and Uncertainty Values) and Corresponding Output Statistical Values for Validation Example with 100 Realizations

Parameter                  Q_o        Δσ        κ         Hypocentral Depth, km
Base Median                351        120       0.006     8.0
Base Uncertainty (Sigma)   0.4        0.7       0.3       0.6
Computed Median            351.05     120.88    0.0061    9.97
Computed Sigma             0.310      0.648     0.304     0.362
Number of Values           100        100       100       100
Minimum Allowable Value    100.00     14.00     0.0024    5.0
Minimum Value              168.19     24.02     0.0026    5.01
Maximum Allowable Value    1,200.00   980.00    0.0150    20.0
Maximum Value              680.15     553.41    0.0117    19.22
Figure 1. Histogram Distributions of Seismological Input Parameters for 100 Randomized Realizations (Red Bars) and the Expected Distribution Values (Blue Bars) Based on the User-Defined Median Values and Associated Uncertainties

Figure 1a. Quality Factor, Q_o (User-Defined Median Value = 351; Uncertainty = 0.4 [LN Units]); randomized values: median = 351.05, sigma = 0.310 LN units
Figure 1b. Stress Parameter, Δσ (User-Defined Median Value = 120; Uncertainty = 0.7 [LN Units]); randomized values: median = 120.88, sigma = 0.648 LN units
Figure 1c. Spectral Decay Parameter, κ (User-Defined Median Value = 0.006; Uncertainty = 0.3 [LN Units]); randomized values: median = 0.0061, sigma = 0.304 LN units
Figure 1d. Hypocentral Depth Values, km (User-Defined Median Value = 8.0; Uncertainty = 0.6 [LN Units]); randomized values: median = 9.97, sigma = 0.362 LN units
REGION-SPECIFIC GROUND MOTION
ATTENUATION MODEL
In this example, the SMSIM-MLREG toolbox
was used to perform nonlinear regression on
a simulated ground motion data set to determine
regression coefficients for the median prediction
ground motion equations and the variability
about the median. Two nonlinear ground motion
attenuation models, specified below, were
considered for fit to a new simulated data set of
ground motions. The sample regional ground
motion attenuation model presented is based
on the programs and methodology introduced
in the earlier sections. This sample regional
ground motion attenuation model is developed
for illustrative purposes and is not representative
of any particular site.
Most regional ground motion input parameters
for this practical example were taken from
Silva et al. [22] Stochastic ground motions were
generated based on the Brune single-corner
source model for the suite of magnitudes 4.5,
5.5, 6.5, and 7.5 and distances 1, 5, 10, 15, 20, 30,
50, 75, 100, 150, 200, 300, and 500 km. For each
magnitude and distance value, the program
was used to randomly select 30 realizations,
given the median and uncertainty values
for Q_o, Δσ, κ, and hypocentral depth listed
in Table 2. In addition to the median and
uncertainty values for the four randomized
parameters, the minimum and maximum values
are also listed in Table 2. These bounding values
were selected to be large enough to span the
range of expected parameter values except for the
hypocentral depth parameter, which is limited
by the observed range in hypocentral depths.
The resulting statistical median and uncertainty
values for the four selected input parameters are
listed in Table 3.
Table 2. Input Parameter Values for Sample Regional Ground Motion Attenuation Model Case

Parameter                             Median/Base Value                      Sigma (LN Units)   Reference
Quality Factor (Q_o)                  351 (Min. = 100; Max. = 10,000)        0.40               Mid Continent; see [20, 22]
Stress Parameter (Δσ)                 120.0 (Min. = 10; Max. = 750)          0.70               see [20, 22]
Spectral Decay Parameter (κ)          0.006 (Min. = 0.0001; Max. = 0.100)    0.30               see [17, 20]
Hypocentral Depth (km)                0.84                                                      Mid Continent; see [20, 22]
Crustal Amplification Factors         ENA Hard Rock                                             see [23]
Path Duration                         Atkinson and Boore [14] model with hinge points at 75 and 100 km    see [14]
Shear-Wave Velocity (V_s) at Source   3.6 km/sec                                                see [23]
Density                               2.8 g/cm^3                                                see [23]
Geometrical Spreading                 R^-(1.0296 - 0.0422*(M - 6.5)) for R ≤ 80 km;
                                      R^-(1.0296 - 0.0422*(M - 6.5))/2 for R > 80 km            see [22]
Table 3. Statistical Results for the Four Selected Input Parameters Based on Random Sampling of 30 Realizations

Parameter/Magnitude            Median/Base Value   Sigma (LN Units)   Median Statistical Value   Sigma (LN Units)
Quality Factor (Q_o)
  M4.5                         351                 0.40               347.49                     0.391
  M5.5                         351                 0.40               347.49                     0.391
  M6.5                         351                 0.40               356.63                     0.301
  M7.5                         351                 0.40               369.46                     0.350
Stress Parameter (Δσ)
  M4.5                         120.0               0.70               128.0                      0.635
  M5.5                         120.0               0.70               128.0                      0.635
  M6.5                         120.0               0.70               130.3                      0.629
  M7.5                         120.0               0.70               149.0                      0.649
Spectral Decay Parameter (κ)
  M4.5                         0.006               0.30               0.0061                     0.267
  M5.5                         0.006               0.30               0.0061                     0.267
  M6.5                         0.006               0.30               0.0064                     0.318
  M7.5                         0.006               0.30               0.0057                     0.246
Hypocentral Depth (km)
  M4.5                         6.0                 0.6                5.93                       0.455
  M5.5                         6.0                 0.6                5.93                       0.455
  M6.5                         8.0                 0.6                9.08                       0.406
  M7.5                         10.0                0.6                10.79                      0.397
Creating 30 realizations each for the complete set
of four magnitudes and 13 distances produced
a total of 1,560 input files. Next, the toolbox
was used to generate ground motions for each
of the 1,560 input cases at the following seven
frequencies: 100, 25, 10, 5, 2.5, 1, and 0.5 Hz. This
resulted in a total of 1,560 summary case files of
ground motion estimates for each realization.
Finally, the ground motions from each of the
1,560 case files were compiled into a single
ground motion data set file. Given this sample
data set, the MLREG program within the SMSIM-
MLREG toolbox was implemented to fit the
data set to the following two ground motion
attenuation functional forms:
Model 1: ln Y = C_1 + C_2·M + C_3·(M − 6)² + (C_4 + C_5·M)·ln(R + e^C_6) + (C_7 + C_8·M)·R

and

Model 2: ln Y = C_1 + C_2·M + C_3·(M − 6)² + (C_4 + C_5·M)·ln(R + e^C_6) ,

where R is the epicentral distance (i.e., the Joyner-Boore distance for this case, in which the ground motions are simulated as point sources).
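For reference, the two candidate functional forms translate directly into Python; the coefficient rows (C_1, C_2, …) are those listed in Table 4 for the spectral frequency of interest:

import numpy as np

def model1(M, R, C):
    # Model 1: eight coefficients, with a magnitude-dependent distance term
    return (C[0] + C[1] * M + C[2] * (M - 6.0) ** 2
            + (C[3] + C[4] * M) * np.log(R + np.exp(C[5]))
            + (C[6] + C[7] * M) * R)

def model2(M, R, C):
    # Model 2: six coefficients, geometrical spreading term only
    return (C[0] + C[1] * M + C[2] * (M - 6.0) ** 2
            + (C[3] + C[4] * M) * np.log(R + np.exp(C[5])))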
The N × p matrix X* in Equation 10 is defined by the N × p matrix of partial derivatives of the model with respect to the regression coefficients, where N is the total number of data points and p is the total number of coefficients. Note that X* now is a function of the regression coefficients (i.e., it is nonlinear) and, hence, there is a need to employ an iterative algorithm. The regression coefficients found for the seven spectral frequencies are listed in Table 4 for both ground motion attenuation models.
The comparison between the simulated data
and the two regression models for 100 Hz is
plotted in Figures 2a and 2b for the two moment
magnitude values 5.5 and 7.5. Both ground
motion attenuation models show an acceptable
fit to the data set and predict similar median
ground motions. The companion results for 1 Hz
are shown in Figures 3a and 3b. Similar results
are observed for the same two magnitude values.
CONCLUSIONS
The key feature and product of this investigation is a new streamlined method
to develop an applicable ground motion
attenuation relationship for a given project site
that can be used in seismic hazard analysis. This
methodology is most useful for projects that lack a
region-specific ground motion attenuation model
and/or empirically recorded ground motions.
Table 4. Regression Coefficients for Both Model 1 and Model 2 Fits to Dataset

MODEL 1
Coefficient   100 Hz    25 Hz     10 Hz     5 Hz      2.5 Hz    1 Hz      0.5 Hz
C_1           3.8726    3.8741    2.8006    1.1903    1.3238    5.7465    8.5339
C_2           0.0281    0.0403    0.1661    0.3493    0.659     1.174     1.4056
C_3           0.0054    0.0176    0.0401    0.0781    0.1645    0.3286    0.3584
C_4           3.2738    3.1258    2.9472    2.7335    2.45      2.1858    2.3746
C_5           0.3014    0.2824    0.2553    0.223     0.1819    0.1541    0.1971
C_6           2.1127    2.1194    2.1642    2.216     2.2672    2.2032    2.0164
C_7           0.0048    0.0046    0.0043    0.0037    0.0029    0.0026    0.0042
C_8           0.001     0.001     0.0009    0.0007    0.0006    0.0005    0.0007
Sigma         0.548     0.5096    0.4953    0.4789    0.451     0.4037    0.3662

MODEL 2
Coefficient   100 Hz    25 Hz     10 Hz     5 Hz      2.5 Hz    1 Hz      0.5 Hz
C_1           3.3958    3.4084    2.0367    0.3721    2.0361    6.4266    9.7318
C_2           0.2317    0.2895    0.4059    0.5621    0.829     1.3234    1.6194
C_3           0.0054    0.0176    0.0401    0.0781    0.1645    -0.3286   0.3584
C_4           2.9594    2.8259    2.6063    2.4096    2.184     1.9387    1.9616
C_5           0.2049    0.1905    0.1708    0.1503    0.125     0.104     0.1245
C_6           2.449     2.4475    2.3856    2.3688    2.3727    2.2873    2.0495
Sigma         0.5577    0.5189    0.5012    0.4826    0.453     0.4053    0.3703
This new streamlined toolbox allows the user
to efficiently develop an applicable ground
motion attenuation model for use in the seismic
hazard assessment of a project, rather than being
forced to make design calculations based on
non-region-specific models that may not
be applicable. Typically, the required input
parameters necessary to implement this
methodology are available for regions of
low seismicity even though empirically based
ground motion attenuation models are not.
Thus, the SMSIM-MLREG toolbox may be used
for any future project for which region-specific
ground motion attenuation relationships for
seismic hazard analysis are needed but are not
otherwise available.
Figure 2. Comparisons Between Simulated Ground Motions and Two Regression Model Prediction Curves, as a Function of Epicentral Distance
Figure 2a. Magnitude 5.5 at 100 Hz
Figure 2b. Magnitude 7.5 at 100 Hz

Figure 3. Comparisons Between Simulated Ground Motions and Two Regression Model Prediction Curves, as a Function of Epicentral Distance
Figure 3a. Magnitude 5.5 at 1 Hz
Figure 3b. Magnitude 7.5 at 1 Hz

(Each panel plots PGA (g) against distance (km) on logarithmic axes, with the Model 1 and Model 2 median prediction curves overlaid on the simulated data.)
TRADEMARKS
MATLAB is a registered trademark of The
MathWorks, Inc.
REFERENCES
[1] D.M. Boore, "SMSIM: Fortran Programs for Simulating Ground Motions from Earthquakes, Version 2.3, A Revision of OFR 96-80-A," U.S. Geological Survey Open-File Report (a modified version of OFR 00-509), August 2005, http://www.daveboore.com/pubs_online/2005_smsim_manual_v2.3.pdf.
[2] K.W. Campbell, "Strong Motion Attenuation Relations: A Ten-Year Perspective," Earthquake Spectra, Vol. 1, No. 4, August 1985, pp. 759–804, access via http://scitation.aip.org/EarthquakeSpectra/.
[3] P. Somerville, N. Collins, N. Abrahamson, R. Graves, and C. Saikia, "Ground Motion Attenuation Relations for the Central and Eastern United States," Final Report to U.S. Geological Survey for Award 99HQGR0098, June 30, 2001, http://earthquake.usgs.gov/research/external/reports/99HQGR0098.pdf.
[4] D.M. Boore, "Simulation of Ground Motion Using the Stochastic Method," Pure and Applied Geophysics, Vol. 160, Nos. 3–4, March 2003, pp. 635–676, access via http://www.springerlink.com/content/8b1e1ekm2fr646ct/.
[5] K.W. Campbell and Y. Bozorgnia, "NGA Ground Motion Model for the Geometric Mean Horizontal Component of PGA, PGV, PGD and 5% Damped Linear Elastic Response Spectra for Periods Ranging from 0.01 to 10 s," Earthquake Spectra, Vol. 24, No. 1, February 2008, pp. 139–171, access via http://scitation.aip.org/EarthquakeSpectra/.
[6] G.M. Atkinson and D.M. Boore, "Some Comparisons Between Recent Ground-Motion Relations," Seismological Research Letters, Vol. 68, No. 1, 1997, pp. 24–40, access via http://www.seismosoc.org/publications/srl/.
[7] G.M. Atkinson and W. Silva, "Stochastic Modeling of California Ground Motions," Bulletin of the Seismological Society of America, Vol. 90, No. 2, April 2000, pp. 255–274, access via http://www.seismosoc.org/publications/bssa/.
[8] D.M. Boore, "Stochastic Simulation of High-Frequency Ground Motions Based on Seismological Models of the Radiated Spectra," Bulletin of the Seismological Society of America, Vol. 73, No. 6A, December 1983, pp. 1865–1894, access via http://www.seismosoc.org/publications/bssa/.
[9] D.M. Boore and W.B. Joyner, "A Note on the Use of Random Vibration Theory to Predict Peak Amplitudes of Transient Signals," Bulletin of the Seismological Society of America, Vol. 74, No. 5, October 1984, pp. 2035–2039, access via http://www.seismosoc.org/publications/bssa/.
[10] T.C. Hanks and R.K. McGuire, "The Character of High-Frequency Strong Ground Motion," Bulletin of the Seismological Society of America, Vol. 71, No. 6, December 1981, pp. 2071–2095, access via http://www.seismosoc.org/publications/bssa/.
[11] J.N. Brune, "Tectonic Stress and the Spectra of Seismic Shear Waves from Earthquakes," Journal of Geophysical Research, Vol. 75, No. 26, September 10, 1970, pp. 4997–5009, access via http://www.agu.org/journals/jb/v075/i026/.
[12] J.N. Brune, "Correction: Tectonic Stress and the Spectra of Seismic Shear Waves from Earthquakes," Journal of Geophysical Research, Vol. 76, No. 20, July 10, 1971, p. 5002, access via http://www.agu.org/journals/jb/v076/i020/.
[13] G.M. Atkinson and W. Silva, "An Empirical Study of Earthquake Source Spectra for California Earthquakes," Bulletin of the Seismological Society of America, Vol. 87, No. 1, February 1997, pp. 97–113, access via http://www.seismosoc.org/publications/bssa/.
[14] G.M. Atkinson and D.M. Boore, "Ground Motion Relations for Eastern North America," Bulletin of the Seismological Society of America, Vol. 85, No. 1, February 1995, pp. 17–30, access via http://www.seismosoc.org/publications/bssa/.
[15] M.R. Raoof, R.B. Herrmann, and L. Malagnini, "Attenuation and Excitation of Three-Component Ground Motion in Southern California," Bulletin of the Seismological Society of America, Vol. 89, No. 4, August 1999, pp. 888–902, access via http://www.seismosoc.org/publications/bssa/.
[16] K.W. Campbell, "Prediction of Strong Ground Motion Using the Hybrid Empirical Method and Its Use in the Development of Ground-Motion (Attenuation) Relations in Eastern North America," Bulletin of the Seismological Society of America, Vol. 93, No. 3, June 2003, pp. 1012–1033, access via http://www.seismosoc.org/publications/bssa/.
[17] Electric Power Research Institute (EPRI), "Methods and Guidelines for Estimating Earthquake Ground Motion in Eastern North America," in Guidelines for Determining Design Basis Ground Motions, Vol. 1, EPRI TR-102293, Palo Alto, CA, 1993, access via http://epri.com, Technical Report 102293.
[18] J.G. Anderson and S.E. Hough, "A Model for the Shape of the Fourier Amplitude Spectrum of Acceleration at High Frequencies," Bulletin of the Seismological Society of America, Vol. 74, No. 5, October 1984, pp. 1969–1993, access via http://www.seismosoc.org/publications/bssa/.
[19] W.B. Joyner, R.E. Warrick, and T.E. Fumal, "The Effect of Quaternary Alluvium on Strong Ground Motion in the Coyote Lake, California, Earthquake of 1979," Bulletin of the Seismological Society of America, Vol. 71, No. 4, August 1981, pp. 1333–1349, access via http://www.seismosoc.org/publications/bssa/.
[20] W.J. Silva, "Factors Controlling Strong Ground Motions and Their Associated Uncertainties," in Proceedings of the ASCE Symposium on Dynamic Analysis and Design Considerations for High-Level Nuclear Waste Repositories, San Francisco, CA, August 19–20, 1992, pp. 132–161, access via http://cedb.asce.org/cgi/WWWdisplay.cgi?80367.
[21] D.E. Cartwright and M.S. Longuet-Higgins, "The Statistical Distribution of the Maxima of a Random Function," Proceedings of the Royal Society of London, Series A, Mathematical and Physical Sciences, Vol. 237, No. 1209, October 1956, pp. 212–232, see http://www.eas.slu.edu/People/RBHerrmann/SeminarGndMotion/CartwrightLonguet.pdf and http://www.jstor.org/pss/100074.
[22] W.J. Silva, N. Gregor, and R. Darragh, "Development of Regional Hard Rock Attenuation Relations for Central and Eastern North America," Pacific Engineering and Analysis Report, 2002, http://www.pacificengineering.org/CEUS/Development%20of%20Regional%20Hard_ABC.pdf.
[23] S.Z. Chen and G.M. Atkinson, "Global Comparisons of Earthquake Source Spectra," Bulletin of the Seismological Society of America, Vol. 92, No. 3, April 2002, pp. 885–895, access via http://www.seismosoc.org/publications/bssa/.
BIOGRAPHIES
Behrooz Tavakoli, PhD,
has more than 15 years of
experience in engineering
seismology, geophysics, and
engineering geology. He has
worked at Bechtel for more than
3 years to develop and review
site-specific ground motion
design criteria for projects
ranging from conventional
structures under modern building codes to LNG and
nuclear power plants under foreign and domestic
regulatory provisions. His research interests focus
on stochastic modeling of earthquake events, site
hazard characterization, ground motion modeling,
earthquake damage and loss estimation, risk analysis
of structures, and synthetic earthquake records to
simulate strong ground motions.
Behrooz has published several technical papers on
topics related to earthquake seismology in peer-
reviewed journals or conference proceedings. For
example, he worked on the New Madrid Seismic Zone
(NMSZ), located southwest of New Madrid, Missouri,
to obtain the best estimation of earthquake occurrence
rates and strong ground motions where causative
faults of earthquakes are poorly understood. As a
result, Behrooz has proposed a new 3D mechanical
model of faulting to properly explain the occurrence
of earthquakes in the NMSZ, to illustrate the potential
rupture faults for the 1811–1812 earthquake sequences,
and to open new areas of research. He also developed
a new ground motion attenuation relationship for
eastern North America. Both of these recent research
results were published in the Bulletin of the Seismological
Society of America. Behrooz has also been the principal
investigator for the Global Seismic Hazard Analysis
Program (GSHAP) that is producing a new earthquake
hazard zonation map for the country of Iran.
Behrooz received his PhD in Seismology from Uppsala
University, Sweden, and an MS in Geophysics and two
BS degrees (Civil Engineering and Geology) from the
University of Tehran, Iran. He was granted a 2-year
post-doctoral research fellowship award in practical
seismology from the Department of Civil Engineering
at the University of Memphis, Tennessee, to pursue his
research interests in the field of earthquake seismology
and geotechnical earthquake engineering.
Nick Gregor, PhD, has been
involved in the seismological
analysis of various engineering
sites throughout his 15-year
career at Bechtel. He has
broad experience in applying
seismology to develop both
probabilistic and deterministic
ground motions (both seismic
source modeling and ground
motion attenuation models) and in applying building
codes and regulations. Nick also has extensive
experience in developing spectrum-compatible time
histories for engineering design analysis. During
his career, he has provided seismological consulting
services for numerous projects worldwide, including
hydroelectric dams, bridges, nuclear power plants,
nuclear waste repositories, water and gas pipelines, rail
lines, ports, landfills, hospitals, electric substations,
and office buildings.
Nick has published several papers in peer-reviewed
journals and has participated as a peer reviewer for
manuscripts submitted for publication.
Nick received his PhD and AB, both in Geophysics,
from the University of California at Berkeley. His
doctoral thesis was on the development of a Peak
Ground Motion Displacement attenuation model.
CIVIL
Technology Papers

29   Technology Trends in High-Speed Rail
     Siv Bhamra, PhD, and Maximilian Fieguth

41   Management of Ground Engineering Data
     Stephen Walthall

49   Enhancements to Airfield Simulation of London Gatwick Airport
     Jim Denton-Brown and Farzam Mostoufi

Albanian Motorway: The Albanian Motorway Project is a four-lane highway with 29 bridges and a pair of twin-bore tunnels. Photo shows Section 3 Bridge 088 (the tallest bridge on the alignment).
INTRODUCTION
Over the course of more than a century, steady
improvements in the mass transportation of
people and freight have made major advances in
global industrialization feasible. Today, as a result
of the rapid expansion of transportation modes
(e.g., shipping, aviation) and infrastructure
(e.g., highways, railways), people and goods are
efficiently transported between cities and across
international borders.
Especially significant over the past 40 years has
been the considerable increase in the number
of dedicated high-speed passenger rail lines
that offer safe, fast, comfortable, and reliable
journeys over long distances in many regions of
the world. In particular, Western Europe, South
Asia, the Middle East, and the Far East are seeing
large investments in rail transport infrastructure
to meet increasing demand. Concurrently,
railway technology in general and higher speed
applications in particular are emerging at an ever
faster rate to support the economic drivers and
to meet higher safety, reliability, performance,
and comfort requirements while achieving more
stringent environmental sustainability targets.
This paper discusses key economic factors
promoting demand for high-speed rail (HSR).
The paper specifically focuses on the technologies
that are emerging to give this form of rail
transportation a further market edge in terms
of economic and business benefits over the next
20 years. The paper also provides an overview
of how major rail networks worldwide are likely
to expand during the same time frame to meet
growing customer expectations.
BACKGROUND
Railways in Mass Transportation
Since their advent 200 years ago, railways
have emerged all over the world as a means to
transport people and freight using dedicated
infrastructure and rolling stock. The capital
investment and ongoing operating costs
associated with railway infrastructure are
offset by the much greater social, economic, and
environmental benefits that accrue over several
decades of operational service.
Early railway developments took place largely
in the UK and the US. Since then, railways
have taken many different forms to meet
specific operational requirements. Today's railways run the gamut from metropolitan
and suburban light rail systems to interurban
heavy-rail commuter systems, dedicated
long-distance freight lines, and high-speed
passenger rail lines that connect the major cities
of the predominantly developed economies.
High-Speed Rail Defined
Several definitions of modern HSR are in common
use. In particular, European Community (EC)
Directive 95/58 defines conventional HSR (i.e.,
trains with steel wheels rolling over fixed
steel rails) as consisting of rolling stock and
infrastructure that accommodate operation at
speeds of over 250 km/h (155 mph) on new tracks
or 200 km/h (125 mph) on existing upgraded
tracks. The Chinese, Japanese, and Korean
railway authorities concur with this definition.
Issue Date: June 2011
Abstract: Current economic drivers and increasing environmental awareness are raising the demand for
significant expansions of high-speed passenger rail networks globally to improve the connections to population
centers within countries and to support a closer integration of economies internationally. This paper describes
the emerging technological trends that promise to promote engineering advances in future high-speed rail
(HSR) lines. These advances will enable faster, safer, more comfortable and reliable passage to a wider range of
destinations while limiting environmental impact and minimizing capital and operating costs.
Keywords: high-speed rail (HSR), rail technology, requirements management, systems engineering
TECHNOLOGY TRENDS
IN HIGH-SPEED RAIL
Siv Bhamra, PhD
[email protected]
Maximilian Fieguth
[email protected]
Emergence and Growth of High-Speed Rail
Japan introduced the world's first high-volume, high-speed trains as early as the 1960s. Up to 12 cars in length, they traveled at 210 km/h (130 mph). Outside of Japan, the French Train à Grande Vitesse (TGV) has set record speeds and provided growing quality of service over
the past two decades. More recently, China
has successfully embarked on the engineering
of its major integrated HSR network. China's ambitious program involves developing
its overall rail network from the current
86,000 km (53,400 mi) to 120,000 km (74,600 mi).
When this expansion is completed, trains will
be able to offer scheduled passenger services
at 350 km/h (218 mph) between more than
70 principal Chinese cities. The engineering of
major high-speed lines is also underway in many
other parts of the world, particularly in Western
Europe, South Asia, and the Far East, with the
Middle East following very soon. The US, South
America, and South Africa are also considering
introducing new high-speed lines.
Figure 1 provides an overview of key historical
technological milestones in the development
of HSR. An overview of the anticipated future
expansion of HSR networks throughout the
world is provided later in this paper.
ECONOMIC DRIVERS AND BUSINESS
REQUIREMENTS
Three primary economic drivers are
promoting an increasingly wider consensus
to invest in HSR lines. These drivers can be
broadly categorized as:
• Political/economic
• Environmental
• Social/demographic
ABBREVIATIONS, ACRONYMS, AND TERMS
CBTC    communications-based train control
EC      European Community
ERTMS   European Rail Traffic Management System
ETCS    European Train Control System
EU      European Union
HSR     high-speed rail
km/h    kilometers per hour
LED     light-emitting diode
mph     miles per hour
MTBF    mean time between failures
RTO     remote train operation
SIL     safety integrity level
TGV     Train à Grande Vitesse
TSI     Technical Specification for Interoperability
UPS     uninterruptible power supply
VE      value engineering
WLC     whole-life cost
Figure 1. Timeline for High-Speed Rail Development (milestones from 1955 to 2010)
1958–1963: Construction of the world's first dedicated HSR line, Tokaido Shinkansen in Japan
Shinkansen HSR opens with 210 km/h maximum travel speed
One billion passenger journeys on HSR in Japan
1978: Direttissima opens in Italy
1981: TGV Paris–Dijon opens
1989: First roll-out of German ICE train
1992: Madrid–Seville HSR line opens
European Union defines HSR
Eschede HSR disaster in Germany
Acela Express opens in US
Korea Train eXpress (KTX) opens
JR-Maglev reaches 581 km/h
French test TGV reaches 574.8 km/h
403 km/h attained by unmodified Siemens Velaro train in Spain
High-speed line opens in Belgium
Taiwan High Speed Rail (THSR) opens
HSR opens in Turkey
High Speed 1 opens in UK
China opens 6,003 km of HSR lines
2004–2009: ERTMS routes commissioned
These economic drivers have led to the
specification of a number of top-level business
requirements for HSR lines. Specifically, future
HSR lines must:
• Reduce journey times
• Expand service capacity and choice of destinations
• Enhance service quality, comfort, and reliability
• Improve safety and security
• Achieve environmental sustainability
• Minimize costs
Figure 2 illustrates the integrated relationship
between the broad economic drivers, the business
requirements for future HSR lines, and the role
of emerging technologies in helping to deliver
these requirements.
TECHNOLOGY TO DELIVER BUSINESS
REQUIREMENTS
Overview
This section introduces the process for
assimilating the business requirements for
future HSR lines and discusses the emerging
technological developments that promise
to help deliver these requirements. However,
technology alone can go only so far. A holistic
approach is necessary, starting with leaders
who have the required technical competence to
understand the full spectrum of railway issues
and the possibilities offered by new technologies.
These leaders, in turn, have to select the right
people; introduce sound work processes and
their controls; and integrate cross-disciplinary
input from railway engineering, operational,
and commercial expertise throughout all project
phases, from specification development through
design, procurement, construction, testing,
and commissioning. Thereafter, the project
leaders must successfully hand the finished
product over to a long-term operation and
maintenance organization.
Throughout this entire process, it is essential to
use systems engineering to capture the stated
business requirements for complex HSR projects
and translate them into a set of tasks that can
be managed to completion under a carefully
controlled, evidence-based technical assurance
regime. The emerging technologies that can best
assist in satisfying the business requirements
must be clearly tracked into the design process
and then evaluated to test for their effectiveness
in delivering the desired project outcomes.
Figure 2. Integrated Relationship Between Economic Drivers, Business Requirements, and Technology Trends

Economic drivers for investment into high-speed rail:
• Political/Economic: political stability, policies, and laws supporting closer economic cooperation, removal of trade barriers, and movement of people across borders; sustained economic progression, creation of national transportation assets, and generation of local employment opportunities; transparency of policies, distribution of wealth, and efficient and accessible transportation networks; improved security concerning high-profile infrastructure like railways
• Environmental: climate change and rising pollution levels; growing energy consumption; increasing demand for sustainable transportation technologies; whole-lifecycle environmental impact assessments
• Social/Demographic: population growth in emerging economies; continued population drifts to cities; more affluent populations and rising expectations for travel, quality, and timing; aging populations in developed economies, with reduced mobility and agility; lack of highly skilled engineers and rising labor costs

Translation of drivers into business requirements:
• Reduce journey times: increase train speeds; reduce total times for full journeys
• Expand service capacity: maximize customer service volume; expand facilities at stations; provide passenger services 7 days a week; offer a wider selection of destinations
• Enhance service quality: provide greater comfort for passengers; enhance overall journey experience
• Improve safety and security: instill a "safety first" culture throughout the lifecycle of railway projects and operations; integrate security provisions to protect passengers, staff, and assets
• Environmental sustainability: assess environmental impact; maximize sustainability
• Minimize costs: minimize whole-life cost to construct, maintain, and operate; ensure commercial competitiveness and affordability during implementation

Technology trends (technologies developed to meet the requirements): traction propulsion technologies; wheel-to-rail adherence; maglev technology; longer, double-deck trains; enhanced signaling systems; improved interoperability; information communication technologies; improved on-board connectivity; improved track-laying precision; train interiors optimized for comfort; enhanced station facilities; provision for reduced mobility; safety integrity level method; surveillance technologies; dedicated corridors; improved energy efficiency; reuse and recycling of materials; specially developed materials; automation; prefabrication; reliability improvements
Reduce Journey Times
Train speeds have been increasing over the past
40 years, and this trend is expected to continue
over the next 20. The current speed record for a
conventional high-speed train, set by a French
TGV, is 574.8 km/h (357 mph).
Faster traction propulsion technologies and
improved wheel-to-rail adherence (resulting
from more precisely engineered track forms)
should enable train speeds to reach 700 km/h
(435 mph) by 2030. These speeds will bring the
journey times of future rail travel closer to those
of commercial air travel. Market data shows that
higher speeds resulting in shorter journey times
encourage passengers to switch from personal
road vehicles and short-haul flights to rail. For
example, following the opening of the TGV
line between Paris and Lyon in 1983, air service
between the two cities was discontinued because
of the large passenger swing from air travel to
the new HSR line. In the UK, the West Coast
Route Modernization and High Speed 1 projects
have resulted in a similar increase of market
share for rail operations over personal road
vehicles and short-haul flights. Parallel trends
have also emerged in other Western European
countries where new passenger rail lines have
recently entered service.
In Japan, China, and Germany, magnetic-
levitation (maglev) railway technology has
been under development for the past two
decades. Commercial maglev operation recently
commenced in Germany, the Middle East, and
China. However, although a Japanese maglev
test train has reached speeds of 581 km/h
(361 mph), the investment in dedicated
infrastructure for such technology has not yet
proved as commercially viable as for conventional
HSR technologies.
Overall rail journey times can be reduced further
by alleviating existing service bottlenecks.
For example, double platforms at stations,
supplemented by closer crew management,
can reduce unnecessary dwell times. Similarly,
good interchanges and quick, easy, and certain
connections under weatherproof covers
can improve the safety, comfort, and speed
of passenger movement between trains and to
other forms of transportation.
Looking more radically toward a major step
change in journey speeds in the distant future,
the US Department of Defense has tested manned
rocket sleds that ride on rails and have reached
speeds of 1,017 km/h (635 mph) under controlled
experimental conditions.
Expand Service Capacity and Choice of
Destinations
Train journeys over existing routes have
traditionally been constrained by the lack of
capacity to accommodate growth in demand. It
is now necessary in most countries to expand
the capacity of existing networks and increase
the range of destinations for passengers. This
can be accomplished in part by using advanced
technologies such as computer-based train control
(CBTC), which allows trains to operate safely at
closer, controlled headways and higher speeds,
thereby making the best use of the installed
trackside infrastructure. Such technologies have
become increasingly reliable in recent years.
Figure 3 provides a schematic overview of
a typical advanced signaling technology for
train control.
Each train using CBTC technology transmits
its identity, location, direction, and speed to
dedicated regional control computers. These
computers use this information to calculate
safe braking distance curves between trains
and then transmit the results, along with the
speed and distance of the preceding train, back
via secure radio transmissions. This process is repeated continuously so that fast-traveling trains always maintain sufficient separation to slow down and stop safely.
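To make the braking-curve idea concrete, the sketch below estimates the separation a following train needs before it could come to a stop, using simple kinematics. This is a minimal illustration, not the algorithm of any particular CBTC product; the reaction time, service deceleration, and safety margin are assumed values.

```python
def safe_separation_m(speed_kmh: float,
                      reaction_time_s: float = 2.0,
                      decel_ms2: float = 0.7,
                      margin_m: float = 200.0) -> float:
    """Distance a following train needs to stop from its current
    speed: reaction distance + braking distance + safety margin."""
    v = speed_kmh / 3.6                    # km/h -> m/s
    reaction = v * reaction_time_s         # distance covered before brakes bite
    braking = v ** 2 / (2.0 * decel_ms2)   # kinematic stopping distance
    return reaction + braking + margin_m

# With these assumptions, a train at 300 km/h needs roughly 5.3 km
print(f"{safe_separation_m(300.0):,.0f} m")
```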
Figure 3. Schematic of Advanced Train Control Technology
[Figure 3 shows Train 1 and Train 2 traveling in the same direction, each linked by radio transmissions to a central control computer, with a velocity-versus-distance braking curve computed for each train.]
CBTC also facilitates the implementation of other
advanced means of expanding service capacity
such as automatic reversing at sidings and
terminal stations as well as bi-directional running over whatever trackside infrastructure remains available for service operations during temporary restrictions.
Even more sophisticated forms of CBTC offer
the future possibility of remote train operation
(RTO), with minimal driver intervention, for
HSR service.
Greater political and economic cooperation
among nations will increasingly favor HSR as a
means of moving people between countries more
quickly. For example, the European Union (EU)
has established Interoperability Directives for
implementation under the European Rail Traffic
Management System (ERTMS) initiative. This
initiative requires the EU member countries to
engineer new HSR lines and upgrade existing
routes in a standardized manner to enable easier
and faster movement of trains across Western
Europe. The European Train Control System
(ETCS) is a new signaling, control, and train
protection system being developed to facilitate
this standardization. A series of Technical
Specifications for Interoperability (TSIs) are
already stimulating considerable technological
development from the railway equipment
suppliers. These TSIs encourage development
of technologies that can allow future high-
speed trains to travel over new high-speed
lines as well as existing upgraded tracks, albeit
at lower speeds dictated by historical route
layout constraints.
Train service over congested routes, at major
junctions, or close to large cities can be
somewhat constrained. Future developments in
computer-based traffic management and service
regulation technologies will allow optimum use
of existing railway infrastructure by enabling
train movements against predetermined,
timetabled priorities. Where funding, space, and
development planning permissions exist, high-
speed trains can also avoid existing congested
routes via dedicated bypass routes.
Looking further ahead, dedicated new
infrastructure can be built to accommodate
double-decker trains, as has already been done
successfully on Taiwan's high-speed lines. Such
trains can provide much higher passenger-
carrying capacity over long distances.
Because increases in total service capacity are
integrated among trains, infrastructure, and
stations, train service enhancements need to be
carefully modeled and taken into consideration
in providing station facilities and adequate space.
Enhance Service Quality, Comfort, and
Reliability
Technology promises to contribute significantly
to improving the total journey experience for
future HSR customers.
Advances in information communications
technology will enable travelers to receive
service updates and query the status of onward
connections (including with other transport
modes) directly on their personal hand-held
mobile phones before they leave home. These
same technological advances will improve the
connectivity of personal phones on trains within
dedicated compartments, as well as enable
on-board electronic mail and other Internet
transactions. Wireless repeater units aboard
trains will allow passengers to use their personal
mobile phones, handsets, and laptops to establish
external voice and data connections.
The overall ride quality of trains is set to be
improved by technological developments in
more-precise track-laying equipment as well as
further improvements to maintenance of the
wheel-to-rail interface.
Coupled with more reliable air conditioning and
lighting, transatlantic-flight-style seating layouts
will further enhance the overall comfort levels
of fast-traveling trains. Such layouts are already
undergoing successful trials in Japan and Korea.
Added to this are technological advances in
on-train entertainment and amenities that will
raise comfort levels on long train journeys to
the standards already provided by high-quality
airline operators.
Stations, as an integral part of the total journey
experience, will need to be friendly and provide
easy, convenient access to service information
and onward connections, including details
of potential weather impacts. In particular,
technology will cater to providing better
access for passengers with reduced mobility
and to making tickets easier to purchase from
home, at the workplace, at the airport, or
within the shopping mall, as well as at existing
railway stations.
Higher performance equipment, less prone to
unplanned failures, will improve the reliability
of HSR service. In addition, better regulation of
train service and faster recovery to timetabled
operations following service perturbations
will help railway operators and maintainers
minimize the impact of unforeseen delays
on passengers.
Improve Safety and Security
Railways are inherently one of the safer forms
of travel. Statistically, both railways and airlines
provide the safest forms of transportation in
terms of the number of injuries or fatalities
recorded over the distance traveled.
A combination of technological improvements
and better work processes will further improve
the safety of HSR travel. New design, construction,
and maintenance technologies are being applied
that allow railways to be engineered, built,
maintained, and operated with the highest safety
principles in mind. While improved protection
technologies actively prevent construction
hazards from affecting workers, advances in
automation technology will reduce the amount
of manual labor required, making both worksites
and maintenance activities safer. In addition,
better crash protection and response to other
critical safety hazards are at the core of modern
rolling stock, infrastructure, and station designs.
The application of high-integrity, safety-critical
software to control the movement of trains is
just one example of technological innovation
for improved HSR safety. Such software is
developed in a highly controlled programming
structure and rigorously validated and tested to
prove compliance with a defined Safety Integrity
Level (SIL) before it is released for operational
use to enhance the safe movement of passenger-
carrying trains.
Security measures against planned malicious
activity have increased significantly in recent
years. Both the global aviation and railway
industries have become high profile and,
regrettably, key publicity-attracting targets for
the wrong reasons. In response to this threat,
a new generation of security technologies has
started to emerge. These range from more
traditional remote surveillance and blast
prevention equipment to highly sophisticated
movement, facial, voice, and tactile recognition
systems. State-of-the-art threat anticipative
technologies, supported by improved
surveillance and containment through planned
engineering designs, will seek to mitigate the
identified risk exposures.
Advances in track trespass deterrent
technology also enable operators to protect
their infrastructure from theft, vandalism, and
sabotage. Greater use of uninterruptible power
supply (UPS) technologies will protect safety and
security critical systems from the eventualities of
power outages due to accidental, intentional, or
natural events.
Achieve Environmental Sustainability
Future high-speed lines will require the large-
scale construction of linear infrastructure
across long geographical areas. This use of
land and the environmental impact of future
service operations must be assessed carefully
and appropriate mitigation measures developed
and implemented to maximize the chances for
longer-term sustainability.
Advances from all technology fields will
contribute to the environmental sustainability
of HSR. For example, not only are light-emitting
diodes (LEDs) being installed in new rolling
stock, but they will also provide lighting in
stations, tunnels, and other infrastructure in
the future. This is just one example of how
new technologies will help minimize the whole-
life cost (WLC) of high-speed railway lines,
infrastructure, and rolling stock.
It is also important to carefully manage heritage
buildings to minimize damage to or loss of
historic properties. Failure to do so can cause
public concern and have a negative impact
on railway construction activities. When it is
necessary to work around sensitive buildings,
a series of advanced monitoring sensors have
been developed to monitor sound, vibration,
temperature, and settlement during construction.
Figure 4 illustrates how the use of rail travel
can reduce detrimental emissions, compared to
a sample of other forms of mass transportation.
Figure 4. Comparative Emissions, by Transportation Mode
[Figure 4 compares greenhouse gas emissions per passenger-kilometer for commercial airliners (long haul and short distance), trains (light rail and high-speed rail), buses and coaches (city bus and long-distance coach), and cars and vans (small car and van).]
The routes for future high-speed lines clearly
need to be planned in a manner that minimizes
adverse environmental impact. One way to
achieve this is by using dedicated corridors of
land next to existing railway lines, water canals,
or road highways, as depicted in Figure 5.
Technological developments on future HSR lines
include the use of more fuel-efficient, lower-
emission train propulsion equipment as well
as hybrid fuel technologies that combine diesel
engines and batteries to provide a potentially
more efficient means of powering a train. These
technologies can provide useful backups to
electricity as the base source of motive energy.
Energy efficiency can also be further improved by
making maximum practical use of regenerative
braking technologies.
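A rough energy balance shows why regenerative braking matters at high speed. The sketch below estimates the kinetic energy a trainset could return to the supply when braking from line speed; the train mass and recovery efficiency are illustrative assumptions, not figures from this paper.

```python
def regen_energy_kwh(mass_t: float, speed_kmh: float,
                     recovery_efficiency: float = 0.6) -> float:
    """Kinetic energy recoverable by regenerative braking, in kWh."""
    v = speed_kmh / 3.6                          # km/h -> m/s
    kinetic_j = 0.5 * mass_t * 1000.0 * v ** 2   # E = 1/2 m v^2
    return kinetic_j * recovery_efficiency / 3.6e6   # J -> kWh

# An assumed 400 t trainset braking from 300 km/h returns roughly 230 kWh
print(f"{regen_energy_kwh(400.0, 300.0):.0f} kWh")
```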
Technological sustainability developments
in railways also extend to the trackside
infrastructure. For example, high-performance
track sleepers made from recycled plastic are
now being tested and are becoming available
for use.
Minimize Costs
The cost to build and operate high-speed
lines needs to be kept to a minimum. A WLC
approach is necessary to obtain the best tradeoff
among the up-front capital cost solutions and
their implications for the longer-term impact on
railway operations and asset maintenance.
The simplified diagram in Figure 6 shows
how typically 85% of the WLC for major rail
projects is committed during the early stages
when requirements are
defined and concepts are
designed. It is critical at
these early stages to review
the requirements and
proposed design solutions
in detail. This review is
best conducted in the form
of well-structured, cross-
disciplinary workshops that
cover every aspect of the
project from development
through revenue service
operations. Such structured
workshops can be managed
by trained practitioners
using value engineering (VE)
techniques. It is important
to use a structured systems
engineering approach to
carefully trace the VE cost-
reduction opportunities back
Figure 5. Examples of Using Existing Transportation Corridors for Railways
[Above: Incheon International Airport Railroad, Korea (green), parallel to an existing motorway (yellow). Left: Construction work for High Speed 1, UK (red), underway along an established transportation corridor (blue).]
Figure 6. Simplified Diagram Showing Commitment of Costs at Each Stage of the Rail Project's Lifecycle
[Figure 6 plots cost commitment and actual spend, from 10% to 100%, across the project lifecycle: requirements definition, concept design, detail design, construction, and testing and commissioning.]
to the business requirements to confirm that the
overall project objectives are still being met. This
approach safeguards the project outputs from
being put at risk by cost-reduction measures that
do not take into account the total impact on the
project throughout its lifecycle.
Technology can help reduce capital costs
by automating routine engineering tasks,
introducing modularized construction
technologies, and incorporating the use of
precast and preassembled components that allow
maximum off-line testing. Similarly, technology
can assist in reducing the cost of operating
and maintaining railway assets by providing a
new generation of trackside assets that deliver
higher mean times between failures (MTBFs).
More reliable equipment, coupled with planned
maintenance regimes, requires less operator
intervention. Features such as remote condition
monitoring, improved ease of access to assets and
their renewal, and design solutions that promote
maximum energy efficiency during revenue
operations all represent further technological
cost-reducing contributions.
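The operational value of a higher MTBF can be expressed through the standard steady-state availability relationship. The sketch below uses illustrative failure and repair times, not project data, to show how doubling MTBF roughly halves the expected unavailability of a trackside asset.

```python
def availability(mtbf_h: float, mttr_h: float) -> float:
    """Steady-state availability = MTBF / (MTBF + MTTR)."""
    return mtbf_h / (mtbf_h + mttr_h)

# Doubling an assumed MTBF from 5,000 h to 10,000 h (4 h repairs)
# roughly halves the asset's unavailability
print(f"{availability(5_000, 4):.5f} -> {availability(10_000, 4):.5f}")
```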
The cost of designing and manufacturing rolling
stock can be minimized through a careful
platform design strategy that uses standardized
components compatible with a multitude of
other systems. New or custom-made products
can be designed by combining these standard
components, thereby minimizing design costs
and realizing manufacturing economies of scale.
FUTURE HIGH-SPEED RAIL NETWORKS
HSR networks are currently concentrated in
Europe and the Far East. The US also has
a single high-speed line, along its northeastern
seaboard. However, a larger pool of nations
is now engaged in planning for HSR. As
illustrated in Figure 7, by 2030, the number of
countries with HSR lines is predicted to have
grown significantly.
Although HSR networks currently dominate
in China, Japan, and Europe, it is worth noting
that lines outside the latter two regions only
started appearing in the past 10 years, indicating
that HSR is still an emerging technology when
considered worldwide. Indeed, most high-speed
line construction today is being undertaken
in regions where HSR is already established.
China is implementing its projects at a striking
pace, with plans to extend the current network
by another 34,000 km (21,100 mi), 10,000 km
(6,200 mi) of which will be high speed. Indeed,
the services on China's current network of
dedicated high-speed lines have proven popular
enough to drive domestic airlines out of business
on the routes concerned.
Figure 7. High-Speed Rail (>250 km/h [>155 mph]) Prediction by 2030
[Map legend: countries with high-speed lines in 2010; countries predicted to have high-speed lines by 2030.]
The second largest number of high-speed lines
under construction worldwide have the goal of
strengthening the trans-European HSR network,
as shown in Figure 8.
In addition to Western Europe, Turkey is
currently building high-speed lines planned
to extend the national network to 4,000 km
(2,500 mi) by 2025.
Among the numerous concepts for new high-
speed lines elsewhere are:
United States – Plans for regional networks such as California's proposal for HSR
between Anaheim and San Francisco and,
on the East Coast, plans to expand the
existing BostonWashington, DC, line into
an entire network
South America – Two nations with HSR plans: Brazil between Rio de Janeiro, São Paulo, and Campinas and Argentina between Buenos Aires and Córdoba
Africa – Feasibility studies for HSR in Egypt
and South Africa; more mature plans in
Morocco and Algeria
Technical solutions ensuring the viability of
these plans, such as high travel speeds and
interoperability, require further development
to mature. However, given the relative infancy
of worldwide HSR, it would appear that this
development will be a natural consequence of an
accelerated global investment in HSR.
Although many proposals plan to employ existing
HSR technology, the current diversity has set off
competition among national formats, especially
in Europe, and, at the same time, led to a focus on interoperability efforts. The anticipated
fast expansion of HSR worldwide is likely,
therefore, to be accompanied by technological
advancements in a competitive environment,
thus rapidly displacing conventional boundaries.
Though local political and economic drivers may
lead to the cancellation, delay, or acceleration
of each individual proposed scheme, it is clear
that HSR, as a technology and as a transportation
mode, is gaining momentum worldwide
and is entering an era of rapid development
and expansion.
Figure 8. Current and Planned High-Speed Lines in Western Europe
[Map legend: existing high-speed lines; high-speed lines being developed.]
CONCLUSIONS
In recent decades, HSR projects have
experienced significant growth in many parts
of the world. The next 20 years are expected
to see a major increase in the number of high-
speed lines globally as the broader economic
drivers in most regions favor an accelerating
shift toward this form of mass transportation.
The emerging new railway technologies will
help meet the growing expectations of railway
customers, which are captured in the form of a
structured set of business requirements. These
emerging technologies will also help stimulate
the economic case for future investments in HSR.
The considerable investment of capital funds
necessary to construct HSR lines and the longer-
term costs of operating and maintaining the
assets are normally offset by a combination of
factors: (1) a far greater return on investment
through revenues, (2) a less-adverse impact
on the environment compared to most other
forms of transport, and (3) the substantial social
benefits associated with connecting cities via a
high-quality transport infrastructure.
Technology will always play a key role in railways
by helping to meet the growing expectations
reflected in a number of the criteria discussed
in this paper. Technological developments will
help to cut costs, reduce risks, and make railways
more commercially competitive than most other
forms of mass transportation.
The broad economic drivers and supporting
business requirements for future HSR will be
addressed by a range of technologies that are
already in advanced development. As technology
becomes more central to the growth of HSR
networks in many parts of the world over the
next 20 years, so too does the need to invest
in the management and technical skills that
will help harness these opportunities to
successfully deliver the growing number of
emerging rail projects.
BIOGRAPHIES
Siv Bhamra, PhD, has 29 years
of experience in the project
management and engineering
of major rail projects. He has
worked on the full spectrum of rail projects, from light rail and urban metros to high-speed lines, engaging in
activities ranging from
conducting feasibility studies
to implementing full schemes. Siv has delivered
rail projects in Europe and the Far East and has
performed studies for rail operators in the US,
Middle East, and South Asia. His numerous technical
achievements include developing solid-state traction
inverting substations to save energy, implementing
advanced train control technologies to improve the
performance of existing and future railways, and
conducting research into state-of-the-art security
management systems.
Siv's understanding of risks and the formal assessment
of work processes and technologies that can contribute
to catastrophic failures, along with the consequential
implications, has put him at the forefront of
promoting robust operational safety-case regimes
and implementing safety-focused cultures in the
lifecycles of major projects.
Siv joined Bechtel in 1999 while on the Jubilee Line
Extension Project in London. Before that, he had
worked in senior roles with London Underground and
a number of other railway companies. He was also
the principal transportation advisor to the European
Bank for Reconstruction and Development for 2 years.
Currently, as the delivery director for Bechtel on the
Crossrail project in London, he manages technical
functions, supervising the work of some 2,000
engineers and overseeing the delivery of all systems
works on this landmark project.
Elected a Bechtel Fellow in 2004, Siv is a member
of five professional institutions and three technical
societies. He won the Enterprise Project Manager of
the Year Award in 2006, the London Transport Award
in 2004, and a Safety Management Award in 2004,
and has twice been accredited with further awards of
technical excellence (1984 and 1986). He is also a leader
in business process improvement initiatives and is an
accredited Six Sigma Champion.
Siv was recognized for his efforts in restoring the
Piccadilly Line to passenger service following the
terrorist attacks in London in 2005, commended for his
work to recover operational service on the Northern
Line following a derailment in 2003, and honored for
his courage during a major fire at Kennington Station
in 1990.
Siv is Bechtel's management representative to
Loughborough University, a guest lecturer to several
universities, and a respected transportation security
specialist and advisor. He has presented at several
conferences and has written numerous papers on
management and technical disciplines.
Siv has a PhD in Railway Systems Engineering from
the University of Sheffield, South Yorkshire; an
MBA in Project Management from the University of
Westminster, London; and an MSc in Engineering
Design from Loughborough University (all in the UK).
Maximilian Fieguth is a highly
regarded engineer in Bechtel
with a growing reputation. He
joined the company in 2008 and
participated in the final delivery
stages of High Speed 1, the UK's
first high-speed line connecting
London via rail to European
mainline destinations through
the Channel Tunnel.
Maximilian is currently a mechanical engineer on
the Crossrail project. He looks after the quality,
integration, and standardization of the output
of various design consultants and provides
design management leadership as part of the
Engineering Directorate.
Maximilian has also served in the Engineering section
of Bechtel's Civil Global Business Unit, where he supported the company's work internationally. He is a Six Sigma Yellow Belt and a member of Bechtel's and Crossrail's NextGen groups, which provide a forum for
engaging with emerging engineers of the future.
Maximilian received a master's degree in Mechanical
Engineering from Imperial College London and is an
Associate of the Institution of Mechanical Engineers
in the UK.
INTRODUCTION
Historically, ground engineering project
data, including geotechnical, geophysical,
environmental, and monitoring data, has been
processed on paper and presented in picture,
table, or graph formats. The introduction of
computers made data processing somewhat
simpler, and the use of spreadsheets and
graphical routines made implementation easier
and the output look more professional. However,
the real use of data has not developed at the same
rate. [1] While spreadsheet-based processes are
useful on a one-off basis, they are not efficient
for managing data over a project's lifecycle and
have hindered the development of an integrated
data management system.
Being able to manage project data from
start to finish is much more efficient, leads
to considerable cost savings, and allows data
to be used across disciplines and projects. [2]
This paper describes Bechtel's experiences
with managing geotechnical data and how the
lessons learned, the processes developed, and the
database structures are applicable across many
disciplines.
DATA
To set up a project-wide data management
system, it is necessary to understand the
nature of data, the principles of data management,
and the software systems available to process it.
For purposes of this paper, and to provide a
useful definition to remember, data is the record
of an observation that has attributes attached
to it that provide a unique identification of the
observation along with any useful information
about how the data was collected. Typically,
these attributes consist of location, date and
time, and type of reading, including the units
of measurement. Other information provided
may include the sample and/or test name and,
for environmental samples, data about how the
sample was collected, prepared, and stored. The
same principles apply to laboratory test results
and environmental data. Each data item, together
with each of its attributes, needs to be gathered in
a storage area, known as a data field. For a typical
geotechnical investigation, approximately 200
to 300 data fields are required to store all the
data and attributes collected. The actual fields
can vary, depending on the nature and type of
investigation.
Issue Date: June 2011
Abstract: Until recently, ground engineering data has been managed manually, primarily using paper reports and Microsoft® Excel® spreadsheets. However, these methods are time consuming, and it is difficult to transfer
information and methodologies from one project to another without considerable effort. A common database for
all ground engineering data permits ease of use and long-term accessibility to data in a common format with
adequate tools for collecting, validating, using, publishing, and storing data.
While this seems like a simple and straightforward concept, implementing a common database on a wide range of
projects throughout the world is a complicated process. This paper describes how each step has been accomplished,
the difficulties and successes in implementing the system, and the possibilities for future development. The
lessons learned are applicable across a wide range of disciplines that manage data, want to improve its quality
and accessibility, and realize the obvious economic advantages. The paper contains examples from recent Bechtel
projects in which an electronic ground engineering database has been successfully implemented.
Keywords: data, database, ground engineering, implementation, validation, verification
MANAGEMENT OF
GROUND ENGINEERING DATA
Stephen Walthall
[email protected]
"Data = 3.00" is an example of data recorded for
the depth to water in a borehole. Its attributes
include the location, the date and time, and the
units of measurement. Additional information
such as the name of the person taking the
reading, the weather conditions, and barometric
pressure may also be recorded simultaneously in
a similar format.
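The record-plus-attributes idea maps naturally onto a simple structured type. The sketch below is a hypothetical illustration, not Bechtel's database schema; the field names and example values are invented for clarity.

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class Reading:
    """One observation plus the attributes that uniquely identify it."""
    location_id: str      # unique location identifier
    timestamp: datetime   # date and time of the observation
    reading_type: str     # e.g., depth to water
    value: float
    units: str            # units of measurement
    observer: str = ""    # optional extra attribute

depth = Reading("NDIA_BH_0012", datetime(2010, 7, 16, 9, 30),
                "depth_to_water", 3.00, "m", observer="J. Smith")
```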
Environmental samples and test results have
many more attributes, including the quality
assurance/quality control (QA/QC) information
needed to validate the data.
Data needs to be uniquely defined, i.e., location
numbers have to be unique, not only to the
investigation but also to the project, to avoid
confusion in situations such as multiphase
investigations in which several boreholes are
designated as "1". The format should also be
consistent, since samples taken from the same
borehole but recorded as BH10, BH 10, and BH_10
will all be read differently by the computer.
To ensure uniqueness, the Bechtel Civil Global
Business Unit (GBU) developed a simple yet
consistent numbering system:
<PROJECT>_<LOCATION-TYPE>_<NUMBER>.
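A small helper can enforce that convention at the point of data entry. The sketch below assumes upper-case tokens and a zero-padded, four-digit number; the padding width is an assumption for illustration, not part of the stated convention.

```python
import re

# <PROJECT>_<LOCATION-TYPE>_<NUMBER>, e.g., NDIA_BH_0012 (width assumed)
ID_PATTERN = re.compile(r"^[A-Z0-9]+_[A-Z]+_\d{4}$")

def make_location_id(project: str, location_type: str, number: int) -> str:
    """Build an identifier that stays unique across the whole project."""
    loc_id = f"{project.upper()}_{location_type.upper()}_{number:04d}"
    if not ID_PATTERN.match(loc_id):
        raise ValueError(f"malformed location ID: {loc_id}")
    return loc_id

print(make_location_id("ndia", "bh", 12))   # NDIA_BH_0012
```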
Bechtel executes projects throughout the world,
often in accordance with local codes and
standards. Therefore, the units of measurement
need to be stated carefully, since various
countries and even regions within countries use
different units and methods of representing the
units. Computers, unlike humans, cannot spot
obvious errors or variations.
For example, it is preferable to use the
international date and time format as described in
International Organization for Standardization
(ISO) 8601:2004 [3]: yyyy-mm-ddThh:mm:ss.sssZ or yyyy-mm-ddThh:mm:ss.sss±hh:mm, where Z denotes UTC and ±hh:mm a local offset from it.
This format is flexible and can be used in full or
part according to user requirements (yyyy-mm-
dd or hh:mm:ss or yyyy). The need to define
length units carefully is further underscored
by the fact that metric units in the International
System of Units (SI) have not been consistently
adopted in the United Kingdom (UK), which
uses millimeters while continental Europe uses
centimeters. Similarly, in imperial units, feet and
inches and feet and decimal feet are often used
interchangeably.
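Most languages can emit this format directly, which avoids the ambiguity described above. A minimal sketch using Python's standard library (the site, offset, and timestamp are illustrative):

```python
from datetime import datetime, timedelta, timezone

# A reading taken on site in Qatar (UTC+03:00); values are illustrative
local_tz = timezone(timedelta(hours=3))
reading_time = datetime(2010, 7, 16, 9, 30, 0, tzinfo=local_tz)

print(reading_time.isoformat())          # 2010-07-16T09:30:00+03:00
print(reading_time.date().isoformat())   # partial form: 2010-07-16
```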
Information can be defined as a collection of
data items that have been presented in a usable
format such as a graph or a table or, in the case
of geotechnical data, a borehole log, where many
different sources are combined to produce a
complete record of the investigation that can be
easily understood by the reader.
Since data is used throughout a project, it is
useful to create a data management program for
the whole project using the following helpful
guidelines:
Enter the data only ONCE
Enter the data at the source
Have someone else enter the data
From a commercial standpoint, data entry can be expensive, but each time the data is used,
value is added. It follows that the guidelines may
be extended to:
Enter the data once but use it many times
A simple test involves looking at the project
lifecycle and asking how many times the data
is entered into a system, whether it is via paper,
keyboarding, scanning, or some other means. If
the answer is more than once, then consideration
should be given to reviewing the system.
ABBREVIATIONS, ACRONYMS, AND TERMS
AGS Association of Geotechnical and
Geoenvironmental Specialists
CAD computer-aided design
COSMOS Consortium of Organizations
for Strong-Motion Observation
Systems
CPT cone penetration testing
DIGGS Data Interchange for
Geotechnical and
GeoEnvironmental Specialists
Esri Environmental Systems Research Institute
GBU global business unit (Bechtel)
GIS geographic information system
ISO International Organization for
Standardization
NDIA New Doha International Airport
PDA personal digital assistant
QA quality assurance
QC quality control
SI International System of Units
SOE sequence of events
UAE United Arab Emirates
UK United Kingdom
XML extensible markup language
DATA MANAGEMENT
Data management includes a system that
consists of the following elements:
Collection
Validation and verification
Manipulation, data processing, and use
Transfer and onward use
Publication
Archiving
The way in which these elements are linked
together to allow data to flow through a project
is depicted in Figure 1.
Collection
Ground engineering data is usually collected by subcontractors that specialize in investigations. Typically, these subcontractors
include drilling contractors that excavate
boreholes, take field samples and conduct tests,
and log data on the strata; laboratories and
testing houses that conduct laboratory tests;
and monitoring contractors that conduct tests
and take measurements during the progress of
the work. The data to be collected and the format
of the data are specified in the subcontract.
Bechtel requires data to be in a standardized
electronic format.
For the 60-million-cubic-meter land reclamation
project at the New Doha International Airport
(NDIA), Bechtel specified that all geotechnical
data be collected, processed, and stored in a
database during construction. Because of this
requirement, the contractor implemented a single
system that recorded every piece of geotechnical
data from the laboratory and field. By comparing
this data with the specification, the contractor
was able to automatically produce the compliance
reports for the project, which not only saved
time and effort but enabled the completed land
reclamation site to be handed over on schedule
and with all the necessary documentation.
On projects in which Bechtel has been required
to field log investigation holes, BecLogger, a
specialized program developed by Bechtel for
use on a personal digital assistant (PDA) (see
Figure 2), records data at the source and transfers
it directly to the office. Using this program
reduces the amount of time required to log data
by more than 50%.
Commercial versions of programs similar to
BecLogger are now available outside of Bechtel,
and electronic handheld data collection devices
are commonly used. The recent introduction of
a "big button" device allows drillers in the field
to record drilling progress while wearing gloves
in the rain.
To monitor projects such as tunneling in sensitive
urban areas, the incoming data may be recorded in many different forms, ranging from electronic files [4] generated by total measurement stations that
continuously record data, to copies of water
levels and flows recorded in field notebooks.
A data management system that handles all of
these sources was implemented on the Athens
Metro project, and similar systems have been
used on other projects.
Figure 1. The Flow of Data Through a Project
[Figure 1 depicts data flowing from collection by others, through validation and verification, to data processing and use (analysis, calculations, graphs, plots), publishing (report-ready figures, tables, etc.), onward transmission of data to others, and an archive of transfer files feeding reuse of archived data; data is transferred in a standard format or in other formats.]
Figure 2. PDA with BecLogger Software for Logging Borehole Data in the Field
Validation and Verification
While data is valuable, for it to be of any use, it
must be validated and verified. These terms are
often used interchangeably; they both mean that
the data must be checked to demonstrate that
the reading is correct, e.g., that the water level
was taken in the reported borehole and that the
reading is the correct depth.
On the Athens Metro project, all monitoring
readings were checked against two criteria:
All the necessary readings had been taken
The readings did not breach any trigger levels
In the first check, a routine in the database
compared the readings taken to the schedule of
required readings. Any missing readings were
referred back to the surveyors for correction. The
second check compared the readings against
the trigger levels for absolute, relative, and rate-
of-change criteria. If the readings exceeded the
preset values, corrective actions were taken
based on a procedure used on the Dulles Metro
project, as shown in Figure 3.
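The second check lends itself to a compact implementation. The sketch below is a simplified stand-in for the project procedure, not the actual routine: the trigger thresholds and readings are invented, and readings are assumed to arrive in time order at a fixed interval.

```python
def check_triggers(readings_mm, absolute=25.0, relative=5.0, rate=2.0):
    """Flag readings that exceed absolute, relative (vs. the first
    baseline reading), or rate-of-change trigger levels."""
    alerts = []
    baseline = readings_mm[0]
    for i, value in enumerate(readings_mm):
        if abs(value) > absolute:
            alerts.append((i, "absolute", value))
        if i > 0 and abs(value - baseline) > relative:
            alerts.append((i, "relative", value))
        if i > 0 and abs(value - readings_mm[i - 1]) > rate:
            alerts.append((i, "rate-of-change", value))
    return alerts

# Settlement readings in mm; the jump at index 3 trips the rate check
print(check_triggers([0.0, 1.2, 2.0, 4.8, 7.5]))
```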
Manipulation
Data manipulation consists of processing data to
ensure that it is useful and can be presented as
information that can be consulted by designers,
engineers, and managers. Data is manipulated
using a specially structured database that can
input, store, process, present, and transfer data.
Bechtel uses the Bentley® gINT® program, which
can handle all geotechnical data and currently
has more than 1,200 data fields available for
storing data items. It also is capable of managing
environmental and monitoring data together
with construction data, if required. A common data structure allows all projects to use the same field definitions. It is possible to hide those data fields not in use to make the system more user friendly, since a typical project uses fewer than 200 of the 1,200 data fields available. gINT can
produce borehole logs, sections (fence diagrams),
graphs, histograms, statistics, tables, and custom
reports as required during project lifecycles.
Routines are available to produce customized
data structures and reports for nontypical
projects such as grouting a leaking dam or
monitoring settlement of a large oil tank during
first filling.
Transfer
The seamless transfer of data among the various
parties working on a project is essential if an
effective and efficient system is to be created.
Without a common transfer method, formats are
Figure 3. Validation and Verification of Ground Monitoring During Tunnel Construction on the Dulles Metro Project
[Figure 3 charts monitoring streams (water level, inclinometers, crack gauges, strain gauges, vibration, and STMP/SMP survey points, each owned by DTP geotechnical or survey staff) feeding InfoWorks in Excel and PDF formats; an exceedance check either files the data in the project folder or triggers an email alert, data interpretation by DTP IM engineering, geotechnical, and structures staff, and site inspection reports.]
confusing and systems are inefficient, as shown
in Figure 4.
Bechtel has been involved in both developing
and implementing a common standard transfer
format [5, 6], which increases ease of use and
enables the various software programs to
communicate with each other without the need
for intervention.
The Association of Geotechnical and
Geoenvironmental Specialists (AGS) data format
[7], which has been in use for the last 20 years,
provides a means of ensuring that project data
has to be entered into the system only once and
then transferred (see Figure 5). The latest version
of the transfer format has over 2,000 data fields.
Although it has been designed in accordance
with UK practices, it has been structured
specifically to be adapted for international use.
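The transfer files themselves are plain, quoted, comma-delimited text organized into named groups of headings, units, types, and data rows. The fragment below writes one such group in the style of the AGS4 convention; the group name, headings, and values are illustrative rather than taken from a project file.

```python
# Rows follow the AGS-style descriptor layout: GROUP, HEADING, UNIT,
# TYPE, then one DATA row per record (contents here are illustrative)
rows = [
    ["GROUP", "LOCA"],
    ["HEADING", "LOCA_ID", "LOCA_TYPE", "LOCA_GL"],
    ["UNIT", "", "", "m"],
    ["TYPE", "ID", "PA", "2DP"],
    ["DATA", "NDIA_BH_0012", "RC", "4.25"],
]
with open("transfer.ags", "w", encoding="ascii") as f:
    for row in rows:
        f.write(",".join(f'"{field}"' for field in row) + "\n")
```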
Of the other existing transfer formats, the Dutch
standard for cone penetration testing (CPT) [8]
is commonly used and Data Interchange for
Geotechnical and GeoEnvironmental Specialists
(DIGGS) [9], which is being developed in the US,
will be available shortly. Bechtel has used the
AGS transfer format on many projects and has
Figure 4. Communications Without a Common Transfer Format
[Figure 4 shows the data producer (investigator), data receiver (designer), data user (constructor), and data archive (consumer) exchanging data in five different formats, each requiring separate handling.]
Figure 5. Data Communications Using the AGS Data Transfer Format
[Figure 5 shows the same four parties exchanging the same five data sets through the single AGS data transfer format.]
introduced it to contractors in several countries,
including Singapore, China, United Arab
Emirates (UAE), Saudi Arabia, Qatar, Greece,
and Romania, where it has been implemented
rapidly and successfully.
Publication
Ground engineering data is usually published as
borehole logs, graphs, tables, sections, etc. (see
Figure 6), to provide engineers and managers
with the information as and when they need it
and in a format that they are used to seeing. This
is particularly important in a projects monitoring
phase during construction, when a large number
of people need to see and understand data
quickly to ensure that work progresses safely.
In the past, it has been noted [10] that problems
have arisen because the data was collected but
not examined and the trends and information contained within the readings were not acted upon. These publication methods promote the correct and timely use of the data through ease of presentation.
Repeated use of the data throughout a project
lifecycle is a major advantage of using a database.
Figure 6. Sample Plot Plan Showing Location of Monitoring Instruments and Plot of Typical Readings
[Figure 6 pairs a plan of SMP and STMP monitoring points around the Pier 44 SOE and pavement with a plot of movement, in inches (±1.00 in. scale), recorded from July 16 to October 4, 2010.]
Archiving
Data is a valuable asset only when it can be easily retrieved and is in a usable form.
By exporting the data from the database in a
standard text file that is independent of software
and version, it is possible to store the data in
standard document management systems such
as Bechtel's electronic document management system, InfoWorks®. While this system ensures
that the data is managed in accordance with
project requirements and can be made available
across projects if necessary, the data can also
be stored confidentially if specified. Since it
is stored along with all other project data, no
additional system is required.
FUTURE DEVELOPMENTS
Data management is an ongoing operation
that combines new techniques and existing
technologies. Software is already available
for a number of these new data management
techniques that:
Link geotechnical data to a geographic information system (GIS), including Google™ Earth, Esri® ArcGIS®, etc. (see Figure 7)
Link geotechnical data directly to computer-
aided design (CAD) systems to provide
online data to others (which allows boreholes
to be added directly into CAD drawings
from the geotechnical database)
Implement an enterprise database to contain
all geotechnical and geoenvironmental
data within one system (including client
management to ensure that the data is kept
confidential if necessary)
Encourage field capture of data at the rig and
in the laboratory
Integrate real-time monitoring data,
which is already available using proprietary
systems, into project procedures
Apply geotechnical engineering procedures
and techniques to other disciplines,
including hydrogeological, environmental,
and geoenvironmental [11]
A number of these techniques involve the use
of third-party software. Bechtel is working to
develop a cross-project operation so that they can
become available to all.
Future developments being investigated include a
new transfer file format using extensible markup
language (XML) technology [12] to facilitate the
production and publication of data. This format
will be published under the auspices of DIGGS, an
organization consisting of representatives from
US and UK government agencies, the University
of Florida, the Consortium of Organizations for
Strong-Motion Observation Systems (COSMOS),
and the geotechnical software industry.
Originally coordinated by the US Federal
Highway Administration, DIGGS is focused
on creating and maintaining an international
transfer standard for transportation-related
data, which will facilitate the use of Web-based
systems.
CONCLUSIONS
Since the first AGS data transfer format was
conceived 20 years ago, the availability of
computers has risen dramatically. While the
use of ground engineering data has not kept
pace with these changes, recent developments
have shown that once the fundamentals of data
management are grasped by the practitioners
in a particular discipline, the rewards are great.
The major stumbling block has been the learning curve: understanding the fundamental change needed to use data correctly rather than continuing with an uneconomic but comfortable system.
TRADEMARKS
Bentley and gINT are registered trademarks
of Bentley Systems, Incorporated, or one of its
direct or indirect wholly owned subsidiaries.
Esri and ArcGIS are registered trademarks of Esri in the United States, the European Community, and certain other jurisdictions.
Figure 7. Plot of Borehole Locations for New Panama Canal Locks in Google Earth
Google is a trademark of Google Inc.
InfoWorks is a registered trademark of Bechtel
Corporation.
Microsoft and Excel are registered trademarks
of Microsoft Corporation in the United States
and other countries.
REFERENCES
[1] L. Threadgold and R. Hutchison, The Electronic
Transfer of Geotechnical Data from Ground
Investigations, Colloque International Géotechnique et Informatique, Presses de l'École Nationale des Ponts et Chaussées, Paris, 1992, pp. 749–756,
see http://geotechml.com/Publications.aspx.
[2] P. Ramcharan, S. Walthall, and M. Zytynski,
The Use of a Geotechnical Data Base on a Large
Civil Engineering Project, Ground Engineering,
August 1997, access via http://subs.geplus.co.uk/.
[3] ISO 8601:2004, Data Elements and Interchange
Formats – Information Interchange – Representation of Dates and Times (developed/
published by International Organization for
Standardization), http://www.iso.org/iso/iso
_catalogue/catalogue_tc/catalogue_detail
.htm?csnumber=40874.
[4] D.J. Richards, R.J. Chandler, and A.C. Lock,
Electronic Data Transfer Systems for Field
Monitoring, Proceedings of the Institution of
Civil Engineers (ICE) Geotechnical Engineering,
Vol. 156, No. 1, 2003, pp. 47–55, access via
http://www.icevirtuallibrary.com/content/
article/10.1680/geng.2003.156.1.47.
[5] S. Walthall and M.J. Palmer, The Development,
Implementation and Future of the AGS Data
Formats for the Transfer of Geotechnical and
Geoenvironmental Data by Electronic Means,
Proceedings of the GeoCongress 2006: Geotechnical
Engineering in the Information Technology Age,
ASCE, Atlanta, GA, February 26 – March 1, 2006,
access via http://cedb.asce.org/cgi/WWWdisplay
.cgi?152030.
[6] S. Walthall, AGS XML Format Developments,
presented at the COSMOS-PEER Lifelines-FHWA
Workshop on Archiving and Web Dissemination
of Geotechnical Data, Newport Beach, CA,
June 22, 2004.
[7] Electronic Transfer of Geotechnical and
GeoEnvironmental Data, Edition 3.1,
Association of Geotechnical and Geoenviron-
mental Specialists (AGS), 2004, access via
http://www.ags.org.uk/site/datatransfer/
download.cfm.
[8] CUR Geotechnical Exchange Format for
CPT-Data, CUR, Gouda, The Netherlands,
September 2006, http://www.geffiles.org/
standaard/pdf/GEFCR112.pdf.
[9] Data Interchange for Geotechnical and
GeoEnvironmental Specialists (DIGGS),
http://www.diggsml.com.
[10] The Collapse of the NATM Tunnels at
Heathrow Airport, HSE Books, London, 2000,
access via http://products.ihs.com/cis/Doc
.aspx?AuthCode=&DocNum=251805.
[11] S. Walthall and M.K. Waterman, The
Collection and Management of Geotechnical
and Geoenvironmental Data, Proceedings of the
GeoCongress 2006: Geotechnical Engineering in the
Information Technology Age, ASCE, Atlanta, GA,
February 26 March 1, 2006, access via http://
cedb.asce.org/cgi/WWWdisplay.cgi?152069.
[12] R.J. Chandler, P.M. Quinn, A.J. Beaumont,
D.J. Evans, and D.G. Toll, Combining the Power
of AGS and XML: AGSML – the Data Format for
the Future, Proceedings of the GeoCongress 2006:
Geotechnical Engineering in the Information Tech-
nology Age, ASCE, Atlanta, GA, February 26 – March 1, 2006, access via http://cedb.asce.org/
cgi/WWWdisplay.cgi?152033.
BIOGRAPHY
Stephen Walthall is a technical
expert in all aspects of ground
engineering. He has 40 years
of working experience in
the engineering geology,
hydrogeology, geotechnics,
and construction fields in the
UK and internationally.
After starting his career
with Ground Investigation
Specialists based in the UK (working in the Middle
East and Africa), Stephen spent many years with
North West Water in the UK as a hydrogeologist
before leading its geotechnical section. He developed
considerable expertise in construction, particularly
in the areas of tunnels and building on poor
ground, along with a detailed knowledge of
embankment dams.
Since joining Bechtel, Stephen has worked extensively
in Eastern Europe and the former Soviet Union
(FSU), where he gained expertise in their codes and
standards. He has also worked on transportation and
oil and gas projects in the Middle East. Stephen was
jointly responsible for developing the geotechnical
data management systems for the Channel Tunnel
Rail Link (CTRL) project, where he was geotechnical
lead for the Thames Valley section. On the Athens
Metro project, he was an advisor responsible for all
geological aspects, including the development and
implementation of an electronic monitoring system for
controlling tunneling activities through the city.
Stephen is a member of the committee that is preparing
British Standard BS 8574, provisionally titled "The Management of Data in Ground Engineering." He
has been a member of the AGS working party for the
development of the electronic data transfer format
from its inception and chaired the committee from
2000 to 2010. He was a former committee member
of the Engineering and the Hydrogeological groups
of the Geological Society and was an advisor to the
DIGGS committee.
Stephen obtained a BSc in Physics and Geology from
the University of Manchester and an MSc from the
University of Durham. He is a Chartered Geologist,
a Chartered Scientist, and a Fellow of the Geological
Society of London.
INTRODUCTION
Bechtel is assisting Global Infrastructure
Partners (GIP) with developing and managing
London's Gatwick Airport (LGW), which is the
busiest single-runway commercial airport in the
world with about 130,000 operations per year
and 36 million passengers per annum (mppa).
Bechtel assisted in reviewing and revising the
airports capital development program and has
made a number of recommendations to adjust
the programs scope, schedule, and cost. As part
of this work, Bechtel, assisted by Greg Bradford
of Airport Tools, has used the US Federal
Aviation Administrations (FAAs) airport and
airspace simulation model (SIMMOD) to conduct
extensive airfield analysis to assess potential
projects and their impacts on the airfield.
SIMMOD is a widely recognized standard for
airfield and airspace simulation. Although
this program is quite powerful, SIMMOD was
not capable of adequately addressing several
specific issues at Gatwick. To resolve these gaps,
Bechtel added several enhancements to SIMMOD
that have made it possible to fully analyze the
airports unique conditions.
BACKGROUND
As shown in Figure 1, the airport consists
of two independent terminals, the South
terminal and the North terminal. The South
terminal includes Piers 1, 2, and 3 and 32 contact
gates, and the North terminal includes Piers 4,
5, and 6 and 32 contact gates. Several clusters of
remote stands add another 45 gates. The airport
currently handles a high volume of aircraft
operations and achieves a very high throughput
(more than 50 operations per hour) on its single
active runway. In contrast, other United Kingdom
(UK) airports typically achieve only about 40
operations per hour.
The airfield area is quite constrained for an
airport with such intensive aircraft activity, and
several limitations inhibit the flow of aircraft.
The most notable limitation is that, although
Gatwick technically has two parallel runways,
only one can be used at a time because the
separation distance between them is inadequate
to allow simultaneous operations. As a result,
northern runway 08L/26R is typically used only
as a taxiway or for emergency back-up. In other
parts of the airfield, particularly around Piers 1,
2, 3, and 4, single taxiways create long cul-de-sacs
that significantly limit the free flow of aircraft
in and out of the pier stands and contribute to
potential congestion and delays.
Another significant factor is that access to
the threshold on runway 26L, which handles
approximately 70% of all operations, is quite
constrained and can lead to long departure
queues during peak-hour operations. These
queues can sometimes become so long that
Issue Date: June 2011
Abstract: This paper describes four enhancements to the US Federal Aviation Administration's (FAA's)
airport and airspace simulation model (SIMMOD) program that have been recently developed for use on
Bechtel's project at London's Gatwick Airport (LGW). Gatwick is the busiest single-runway airport in the
world, and the tools typically available in SIMMOD do not generally address a number of significant airfield
issues. Bechtel Civil's airport planning team added capabilities to account for (1) busing/coaching operations,
(2) towing of aircraft to and from stands, (3) pushback operations, and (4) arrival/departure shifts to minimize
aircraft taxi delays. The simulation enhancements have been used to analyze potential expansion projects at the
airport under four different traffic scenarios and to assist the client in assessing and selecting projects based on
optimal airfield performance.
Keywords: airport, airfield, congestion, delay, Gatwick Airport (LGW), runway, simulation, SIMMOD, taxiway
ENHANCEMENTS TO
AIRFIELD SIMULATION OF
LONDON GATWICK AIRPORT
Jim Denton-Brown
[email protected]
Farzam Mostoufi
[email protected]
they back up onto the adjacent taxiways,
significantly affecting the flow of arriving and
departing aircraft.
As part of development efforts for the UK Civil Aviation Authority's (CAA's) fifth quinquennium
(Q5) (the fifth 5-year airport regulatory period,
from April 2008 to March 2013) and to improve
capacity, there are several ongoing airport
development programs involving:
Pier 5 reconfiguration
Pier 1 and Pier 2 redevelopment
Northwest apron (ramp)
In addition to the issues involving aircraft
congestion identified above, there are additional
issues related to the operation of the airfield.
First, the airport must rely on a fairly significant
number of remote passenger operations because
of the limited number of contact gates. Remote
operations entail moving large numbers of
passenger coaches and baggage trolleys to and
from the terminals and the remote aircraft
positions. These coaches and trolleys must, at
different points on the airfield, cross active
taxiways to reach their destinations. Each
crossing is a potential conflict point that could
significantly impede the free flow of arriving
and departing aircraft on the taxiways and create
aircraft departure delays due to late arriving
coaches or trolleys.
Second, because of the regulatory requirement
to achieve certain pier service level targets, it
has also been necessary to implement a towing
program to shift aircraft between the remote
Figure 1. Layout of Gatwick Airport (airfield plan showing the North Terminal with Piers 4, 5, and 6; the South Terminal with Piers 1, 2, and 3; runways 08L/26R and 08R/26L; the taxiway network; the Cargo Centre; and the maintenance area)
ABBREVIATIONS, ACRONYMS, AND TERMS
A-D-A      arrival-departure-arrival
A-D-D-A    arrival-departure-departure-arrival
A-T-D      arrival-tow-departure
A-T-O      arrival-tow-overnighting
CAA        UK Civil Aviation Authority
Code C     size of aircraft such as the Boeing 737
Code E     size of aircraft such as the Boeing 747
Code F     size of aircraft such as the Airbus 380
FAA        US Federal Aviation Administration
GIP        Global Infrastructure Partners
LGW        London Gatwick Airport
MARS       multiple aircraft ramp system
mppa       million passengers per annum
nm         nautical mile
O-T-D      overnighting-tow-departure
SID        standard instrument departure
SIMMOD     FAA's airport and airspace simulation model program
UK         United Kingdom
UK NATS    United Kingdom National Air Traffic Services
US         United States
08R/26L    runway 08 right/26 left
08L/26R    runway 08 left/26 right
stands and the piers. This program allows
pier service targets to be increased and use
of the limited number of contact gates to be
optimized. However, towing aircraft can also
disrupt airfield operations, since towed aircraft
are typically moved at much slower speeds than
piloted aircraft.
While SIMMOD is a very powerful and
sophisticated tool for analyzing airfield issues,
its ability to address the kinds of issues identified
above, especially those specific to Gatwick, is
limited. As a result, Bechtel developed several
SIMMOD enhancements to analyze these
specific concerns as part of the analysis of
Gatwick's airfield.
SIMMOD ENHANCEMENTS
Passenger Coaches and Baggage Trolleys
SIMMOD does not typically include the ability
to analyze coaching and baggage operations
or to integrate these operations with aircraft
movements on an airfield. The following actions
were taken to analyze these issues:
Quantify and simulate the number of coaches, trolleys, and tugs (special vehicles used to push aircraft back) required for each flight, using the same flight schedule used to drive the aircraft simulation (see the sketch following this list)
Assess the coach routes and their impacts on
taxiway crossing points
Calculate loading/unloading times for
coaches and trolleys to assess the impacts on
departure delays
Add a graphic element to SIMMOD to visually
demonstrate the interaction of aircraft and
ground service traffic (see Figure 2)
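To make the first of these actions concrete, the sketch below shows one way per-flight ground-service demand can be derived from the flight schedule. It is a minimal illustration in Python, not SIMMOD code; the coach and trolley capacities and the example flight are assumed values.

    import math

    # Assumed capacities, for illustration only (not Gatwick operational data)
    COACH_SEATS = 70      # passengers carried per coach trip
    TROLLEY_BAGS = 40     # bags carried per baggage trolley trip

    def ground_service_demand(passengers, bags):
        """Return (coaches, trolleys) needed to serve one remote-stand flight."""
        coaches = math.ceil(passengers / COACH_SEATS)
        trolleys = math.ceil(bags / TROLLEY_BAGS)
        return coaches, trolleys

    # Example: a full Code C narrow-body with 180 passengers and 150 bags
    # needs 3 coach trips and 4 trolley trips per turnaround.
    print(ground_service_demand(180, 150))  # -> (3, 4)

Demand computed this way for every flight in the schedule can then be released onto the modeled road network, so that each coach or trolley trip competes with taxiing aircraft at the crossing points.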
Aircraft Towing
SIMMOD typically addresses aircraft towing
(as shown in Figure 3) by sending aircraft to an
undefined virtual apron in the simulation and
retrieving them only when the aircraft are being
actively towed to a contact gate. Hence, towed
aircraft typically appear and disappear from the
virtual apron in the simulation and typically only
account for single one-way towing operations.
However, towing operations at Gatwick are more complicated, and a single aircraft may undergo multiple towing operations: upon arrival, to and from remote stands, upon departure, and to and from contact gates. Furthermore, the slow speed of the towing operation and its potential impact on the airfield are not typically a concern or focus. The enhancement includes the ability to:
Track towed aircraft to and from specific stands (not virtual aprons) at actual towing speeds
Account for all towing possibilities
such as arrival-tow-departure (A-T-D),
arrival-tow-overnighting (A-T-O), and
overnighting-tow-departure (O-T-D)
Automate the towing event in SIMMOD to
allow aircraft to be tracked interactively with
other airfield activity
Pushback Operations
SIMMOD typically assigns precedence to aircraft
operations on a first-in/first-served basis and does
not allow more than one aircraft to occupy a given
block of taxiway. However, given the number of
single taxiways at Gatwick, this frequently leads
to situations in which a single aircraft being
pushed back blocks access to an entire taxiway
for multiple aircraft and triggers major delays.
In addition, at times multiple aircraft at Gatwick
Figure 2. Coaches Integrated with Airfield (three coaches on their way to the departure aircraft)
Figure 3. Path of Towed Aircraft (Shown in Purple)
are pushed back from a single pier and moved as
a group to the runway threshold (see Figure 4).
Since SIMMOD typically does not allow
such operations to occur, the simulation was
adjusted to:
Provide rules of precedence other than first-
in/first-served to determine which aircraft
should have priority in operation
Allow multiple simultaneous pushbacks and
multiple departures using the same taxiway
Set a maximum time limit on delays in aircraft arrivals, triggering the use of push-and-hold stands for aircraft departures to minimize congestion
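As a rough illustration of the first rule above, the sketch below ranks grouped pushbacks ahead of single requests and breaks ties by request time. It is a hypothetical Python fragment, not SIMMOD code; the class, field, and flight names are invented for the example.

    from dataclasses import dataclass, field

    @dataclass(order=True)
    class TaxiRequest:
        priority: int                  # lower value is served first
        request_time: float            # minutes into the simulation
        flight: str = field(compare=False)

    def precedence(is_group_pushback, request_time, flight):
        """Grouped pushbacks outrank singles; ties fall back to request time."""
        return TaxiRequest(0 if is_group_pushback else 1, request_time, flight)

    queue = sorted([precedence(False, 10.0, "BA123"),
                    precedence(True, 12.0, "EZY45"),
                    precedence(True, 13.0, "EZY46")])
    print([r.flight for r in queue])   # -> ['EZY45', 'EZY46', 'BA123']

In an actual simulation, the same ordering key would also have to respect the relaxed taxiway block-occupancy rules described in the second bullet.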
Arrival/Departure Shifts
Gatwick's air traffic control tower is managed
by the UK National Air Traffic Services (NATS).
NATS has achieved unusually high throughput
on Gatwick's single runway by actively and
innovatively managing the runway and airfield.
Some of the NATS procedures are, in fact, unique
to Gatwick. One of these procedures involves
adjusting arrival and departure sequences
to increase the number of hourly operations
achieved while simultaneously decreasing
airfield congestion. SIMMOD does not typically
allow sequence shifting equal to the rapidity and flexibility of NATS's runway operation
adjustments. As a result, a NATS recommendation
was encoded into the simulation to:
Shift the departure/arrival sequence from
A-D-A to A-D-D-A to accelerate departures
(see Figure 5)
Trigger the shift dynamically when departing
aircraft taxi times reach 20 minutes and
runway holding queues begin to form
Reduce arriving aircraft delays by decreasing
the length of departing aircraft queues at
the threshold
In addition, to achieve this shift, it was also
necessary to add an arrival holding stack so that
arriving aircraft could be held virtually in the
air to allow the proper rules for sequencing
and wake vortex separation (minimum distance
between aircraft to prevent accidents due to
turbulence that forms behind an aircraft as it
passes through air) to be applied.
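The trigger logic can be summarized in a few lines. The sketch below is a simplified Python stand-in for the encoded recommendation; only the 20-minute taxi-time trigger comes from the description above, and the function and parameter names, plus the queue-length guard, are illustrative.

    TAXI_DELAY_TRIGGER_MIN = 20   # shift when departing taxi times reach 20 minutes

    def choose_sequence(max_departure_taxi_min, departure_queue_len):
        """Select the runway pattern for the next cycle.

        A-D-A   -- alternate one arrival and one departure (default)
        A-D-D-A -- insert a second departure to drain the departure queue
        """
        if max_departure_taxi_min >= TAXI_DELAY_TRIGGER_MIN and departure_queue_len > 0:
            return "A-D-D-A"   # arrivals wait in the virtual holding stack
        return "A-D-A"

    print(choose_sequence(22.0, 6))   # -> A-D-D-A
    print(choose_sequence(12.0, 0))   # -> A-D-A

Because each extra departure widens the gap between successive arrivals (roughly 6 nm for A-D-A versus 8 nm for A-D-D-A, per Figure 5), the holding stack is what absorbs the arrivals displaced by the shift.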
AIRFIELD SIMULATION
After the simulation model with
enhancements was developed, it was
possible to demonstrate to the NATS, the
airlines, and other stakeholders that the model
was credible and accurate in reflecting actual
conditions at the airport.
The next step was to apply the model to proposed
capital projects to determine the impacts on
airfield operations. In particular, the effort
focused on analyzing a proposed extension to
Pier 6. As shown in Figure 6, the Pier 6 extension
Figure 4. Multiple Pushback Operations and Queuing at Threshold (Departure Aircraft Shown in Blue)
will be located in the central core of the airport,
and there were concerns that this extension
could seriously affect airfield performance due
to increased congestion and delays.
Pier 6 Extension Project
The new Pier 6 extension will be located south of
the existing Pier 6 and linked to it by a connecting
corridor. The project has been divided into two
phases: Phase 1 coincides with development (by
Figure 5. Diagram Showing Logic of the Arrival/Departure Shift (demand: schedule, fleet mix, airport time; capacity: tactical capability, runway occupancy, separation rules, day/night and seasonal factors; arrivals subject to random delays, congestion, holding stacks, and arrival separation; departures subject to random delays, pushback delays, runway hold delays, congestion, departure separation, and SIDs; arrival sequence spacing in nm, e.g., A-A-A: 3, A-A-D-A-A: 3-6-3, A-D-A: 6, A-D-D-A: 8)
Figure 6. Location of Pier 6 Extension Project (showing Pier 1 and Alpha Hold and Pier 6 Extension Phase 2)
2014) of four aircraft stands consisting of two
gates for Code F aircraft and two gates for Code
E aircraft, which are located on the east and
south sides of the new pier. Phase 2 (shown in
Figure 7), which is planned to be completed by
2017, will add an additional three gates for Code
E aircraft along the west side. All of these new
stands will use the multiple aircraft ramp system
(MARS) to also accommodate Code C aircraft,
making the new pier very flexible.
The project has been laid out and integrated
with the existing airfield taxiway system. A new
Code F aircraft taxilane south of and parallel to
Taxiway J is planned to provide a through route
between Taxiways Q and P. The Phase 2 airfield
will add a Code E aircraft taxiway along the
western side of the pier to provide access to new
contact gates and several remote stands at the
edge of the apron.
Accommodating Code F aircraft such as the
Airbus A380 is a significant challenge to the
airfield, since Gatwick cannot currently handle
such large planes at contact positions. Hence,
the project also represents the potential to allow
Gatwick to serve new airlines and markets in
Asia and the Middle East.
Forecast Scenarios
The Pier 6 extension project was simulated
using four forecasts corresponding to different
expectations about the future growth rate at the
airport over the next several years:
2013 Low Growth
2013 High Growth
2017 Low Growth
2017 High Growth
The forecasts provided a range of traffic levels
from a low of 38 mppa to a high of 44 mppa, and
incorporated various expectations about aircraft
mixes, airline shifts, and possible changes to the
markets served by the airport.
Results of the Modeling
Table 1 shows the simulated taxi times for
the 2013 and 2017 high growth forecasts and
actual taxi times observed in 2007 to 2009. The
data has been sorted to provide a pier-by-pier
summary of the taxi times. This table shows that
the simulated taxi times for both arrivals and
departures compare quite favorably with historic
data, indicating roughly equivalent airfield
performance between the simulated times and
the actual historic times.
Figure 7. Pier 6 Extension Project at Completion of Phase 2
As a result of the modeling, the questions about
negative impacts on the airfield due to the new
project were largely answered and the decision
to proceed with further design development was
made. An additional future step in the process
will be to consider how best to stage the project
during construction. Further simulation will be
carried out to analyze how best to close taxiways
sequentially during construction and reroute
aircraft circulation to minimize further impacts.
CONCLUSIONS
Gatwick is the busiest single-runway
airport in the world. By adding several
enhancements to SIMMOD, it has been possible
to accurately model the airfield's complex
sometimes unique operations and fully analyze
a variety of specific conditions.
This credible and powerful simulation model
was used to simulate several proposed expansion
projects at the airport and assess their potential
impacts on aircraft delay and congestion. As part
of Bechtels continuing work at the airport, the
successful projects will be incorporated into the
airport's ongoing capital development program
and further analyzed as they move into the
design and construction stages.
BIOGRAPHIES
Jim Denton-Brown is
the manager of planning
for Bechtel Civil Aviation
and Infrastructure in San
Francisco, California. He has
been involved in planning
transportation infrastructure
megaprojects around the world,
including in North America,
Europe, Asia, and the Middle
East. He is responsible for industrial planning, port
planning, and aviation planning projects and has
also supported joint development efforts in Bechtel's
Oil, Gas & Chemicals (OG&C) and Mining & Metals
(M&M) Global Business Units for projects in Asia and
the Middle East.
Jim is a member of the American Planning Association
(APA) and is a registered professional planner through
the American Institute of Certified Planners (AICP).
He has authored 10 articles on airport free trade zones,
airport master planning, and other project types,
including seaports and industrial complexes. He has
been a guest speaker at the Department of Civil and
Environmental Engineering, University of California,
Berkeley, as well as a speaker at the American Society
of Civil Engineers (ASCE), San Francisco section.
Jim received an MBA in Management and Economics
from the University of Maryland, College Park,
Maryland, and a BA in Humanities and Sciences
from Stanford University, Palo Alto, California. He
also did BS/MS studies in Environmental Design
and Planning at California Polytechnic University,
San Luis Obispo, California. He has also received
a certificate in Airport Master Planning from the
Singapore Aviation Academy and a certificate in
Airport Systems Planning from the Massachusetts
Institute of Technology.
Farzam Mostoufi, a senior
planning and simulation
specialist with Bechtel Civil,
has more than 20 years of
experience at Bechtel in
planning and designing
transportation and material
handling facilities, including
international airport terminals,
railroads, transit systems,
bulk and container ports, and mining and metals
production complexes. He is highly experienced in
conducting technical simulation studies and economic
analyses as well as in designing, developing, and
using specialized transportation and logistics models.
Farzam has developed economic models and
participated in feasibility studies to test the impact
of projected operations and designed facilities on
revenues, capital expenditures, and maintenance
costs. He is currently supporting the Gatwick Airport
project with a focus on airfield simulation, terminal
throughput analysis, and ground access issues.
Table 1. Results of the Simulation (Taxi Times in Minutes)

Pier  Activity    Simulated 2013  Simulated 2017  Historic 2007  Historic 2008  Historic 2009
1     Arrival           8.3             8.6            10.0           N/A            N/A
1     Departure         5.3             5.5            10.0           8.2            6.7
2     Arrival           9.5             9.8            10.5           N/A            N/A
2     Departure         7.9             9.0            11.0           9.9            8.2
3     Arrival          11.1            12.5            11.3           N/A            N/A
3     Departure        10.8            11.8            12.0          11.1            9.9
4     Arrival          11.7            11.1            10.5           N/A            N/A
4     Departure        12.6            12.4            11.0          11.2           10.3
5     Arrival           9.7            10.4             8.5           N/A            N/A
5     Departure        12.6            13.2            12.0          10.5            9.3
6     Arrival           8.0             7.7             8.5           N/A            N/A
6     Departure         9.6            11.6            11.0          11.8           10.6
Farzam received an MBA in Finance from Golden
Gate University, San Francisco, California; has a BS in
Economics and Insurance from Tehran College, Tehran,
Iran; and has completed course requirements in the
Doctor of Business Administration (DBA) program at
Golden Gate University. As a lecturer at Golden Gate
University, he taught graduate and undergraduate
courses in computer modeling, simulation, and
quantitative analysis. Farzam also holds a certificate
in Airport Systems Planning from the Massachusetts
Institute of Technology, Cambridge, Massachusetts.
TECHNOLOGY PAPERS
Mining & Metals
An Integrated Approach to Developing Safe and Efficient Plant Layouts (page 59)
Rafael L. Pires, Robert F. Baxter, Laszlo Tikasz, PhD, and Robert I. McCulloch

Alumina Refinery Water Management Design in Tropical and Subtropical Climates (page 67)
Lucy Martin and Steven Howard

Options for Long-Distance, Large-Capacity Bulk Materials Conveying (page 79)
Brett Wilson, Steve J. Davis, and Mark Wolton
Los Pelambres Repower 2
The Andes at dawn frame the processing plant at Chile's Los Pelambres copper mine. The concentrator is located at an elevation of 5,200 feet (nearly 1,600 meters).
INTRODUCTION
Bechtel's Aluminum Center of Excellence (ACE) is the repository of the company's institutional knowledge, technical capability, historical information, and lessons learned on the design and construction of smelter projects. ACE's mandate is to deliver value to projects by applying this knowledge and these skills, focusing on sustainable design. The integrated approach to developing safe and efficient plant layouts presented in this paper was developed by ACE and funded by a Bechtel internal technical research grant.¹
A facility's layout can significantly affect its long-
term success in terms of both safety and its ability
to compete successfully in the marketplace. In
addition, the investment costs associated with
building a given layout are substantial. Early
layout development and finalization offer a clear
advantage to the project from both a cost and a
schedule perspective, compared to finalizing the
layout later, during project execution.
Layout development requires a complete
understanding of the operational aspects of a
particular plant. The operation of an aluminum
smelter, for example, relies on interactions
between customer and supplier facilities to convey
people, products, and materials (PPM), with
the road network directly affecting operational
safety and efficiency. As a consequence, a quick
and effective approach to assessing layout safety
and efficiency early during the design phase
is required.
Typically, laying out traffic-intense facilities
such as an aluminum smelter involves having
a detailed understanding of traffic interactions
and behavior. A known approach is to statically
map the layout's characteristics, identify roads
and intersections, assign routes to traffic, and
calculate average inter-arrival times.
ACE's use of this evaluation method revealed that
a greater awareness of safety and efficiency was
needed, and that it should be based on a better
understanding of the dynamic characteristics
of plant traffic flows. Discrete-event modeling
(DEM) was chosen as a natural fit for PPM
traffic flow problems. [1] Figure 1 illustrates the
application of DEM to envisaged traffic at a plant
shuttle bus station during a shift change.
Previous case studies have demonstrated
the complexity of the different issues arising
throughout the layout development process.
Based on DEM, ACE created a systematic
approach, referred to in this paper as the
Issue Date: June 2011
Abstract: A facility's layout can significantly affect the success of its envisaged operation in terms of safety,
lifecycle cost, and environmental impact. This paper presents an innovative approach to predicting and measuring
the traffic characteristics of a given layout in terms of these three factors. Derived from lessons that Bechtel has
learned in the process of developing numerous plant layouts and analyzing resources, this new approach allows the
quick comparative analysis of competing layouts to arrive at a safe, lean, and green plant configuration.
To illustrate how the approach works, this paper examines its application of safety by design and lean manufacturing
techniques to designing the layout of a fully integrated aluminum complex. Aluminum smelter layout development
usually involves dealing with an integrated mix of operations and services (smelting, carbon anode formation, metal
casting, material handling, etc.). Integral to this approach are the design of access roads, choice of transportation
modes, and planning of resources, as discussed throughout the paper.
Keywords: layout development, lean design, simulation, traffic
AN INTEGRATED APPROACH TO
DEVELOPING SAFE AND EFFICIENT
PLANT LAYOUTS
Rafael L. Pires
[email protected]
Robert F. Baxter
[email protected]
Laszlo Tikasz, PhD
[email protected]
Robert I. McCulloch
[email protected]
¹ Technical Approach for Safe and Efficient Plant Layout Development, a mini-grant study performed in 2010 by Rafael Pires and Laszlo Tikasz under the Bechtel Technical Grant Program.
"integrated approach," to automating data transfer and reducing the cycle time needed to assess a given layout's safety, efficiency, and CO2 equivalent (CO2e) emissions related to the traffic network.
Bechtel's adoption of the integrated approach
has delivered value by quickly simulating
and analyzing plant layouts and assessing the
effectiveness of proposed improvements.
LAYOUT DEVELOPMENT
It is well known that the layout of a traffic-intense plant affects not only the project's capital cost but, more importantly, decades of plant operations. The development
of a safe, efficient layout requires a collection of
tools and expertise to design out the potential
for accidents, waste, and uncertainties related
to PPM conveyance between customers
and suppliers.
Lean manufacturing and Six Sigma tools provide
guidance on streamlining layout development
initiatives. [1] Proper application of these tools
during initial project definition helps to freeze
the layout early, thereby avoiding rework,
minimizing risk, and providing certainty to the
design team and future plant operators that the
layout is safe, lean, and green (see Figure 2).
Safe: Safety by Design
Safety is an integral part of corporate social
responsibility. There is a clear connection
between operational safety and the level of
quality achieved. [2] A safe plant configuration
lays the early foundations for developing an
organizational culture that values behavior-
based safety.
The objective of designing safety into plant
layouts is to minimize the potential for accidents.
Safety by design reduces the likelihood of
accidents over a plant's lifecycle. As mentioned,
DEM complements both layout development and
optimization studies. It provides a way to mimic
and monitor the expected operation and to
collect data for further analyses. A failure mode
and effects analysis (FMEA) can then be applied
to estimate the risk of accidents at any particular
intersection in the plants road network.
A variety of operational factors influence the
overall safety of a plant layout. These factors
include intersection [3] and vehicle types,
work schedules, and transportation modes. An
FMEA is used to design and score individual
intersections by incident severity, occurrence,
and detection factors that are combined
into risk priority numbers (RPNs). The RPN of
any particular point of a layout is that point's
safety measure.
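For readers unfamiliar with FMEA scoring, the sketch below shows the conventional arithmetic: the RPN is the product of the severity, occurrence, and detection scores, each commonly rated on a 1-to-10 scale. The scales and example scores are generic FMEA conventions and assumed values, not project data.

    def rpn(severity, occurrence, detection):
        """Risk priority number for one failure mode at one intersection."""
        for factor in (severity, occurrence, detection):
            if not 1 <= factor <= 10:
                raise ValueError("FMEA factors are conventionally scored 1-10")
        return severity * occurrence * detection

    # Example: a crossing with high severity (8), moderate occurrence (5),
    # and poor detection (7) scores 280; competing layouts can then be
    # compared on their average RPN over all intersections.
    print(rpn(8, 5, 7))   # -> 280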
ABBREVIATIONS, ACRONYMS, AND TERMS
ACE     Aluminum Center of Excellence (Bechtel)
BDD     basic design data
CAD     computer-aided design
CCF     comparative cost function
CO2e    carbon dioxide equivalent
DEM     discrete-event modeling
FMEA    failure mode and effects analysis
IATA    International Air Transport Association
LOS     level of service
PPM     people, products, and materials
RPN     risk priority number
TAC     Treatment of Aluminum in Crucible
Figure 2. The Safe-Lean-Green Principle (Safe: safety by design; Lean: lower lifecycle cost; Green: lower environmental impact)
Figure 1. Example of Plant Shuttle Bus Station Traffic During Shift Change
Lean: Lowest Lifecycle Cost
Designing an efficient operation requires
applying lean principles and tools to identify
and eliminate waste and streamline PPM flow.
In theory, efficient operations can maximize
productivity with minimum waste, effort, and
expense. [4] However, variations generated by
transient operational conditions directly affect
and reduce efficiency.
DEM is used to predict the dynamic response
of a particular operation, including traffic and
material conveyance and storage, to ensure that
the proposed configuration can meet customer
needs during normal, maximum, and upset
operating conditions. [5] During overall layout
development, DEM is used to mimic traffic
flows and operations to validate the number of
vehicles, load/unload stations, and inspection
lanes; parking lot requirements; inventory
requirements; etc., to minimize the risk inherent
in the overall plant layout, thereby offering a
lower lifecycle cost.
Green: Environmental Impact
A layouts environmental performance is
influenced by PPM movement. The road network
design, PPM conveyance between customers
and suppliers, and traffic type segregation all
affect the distances driven by vehicles. As a
consequence, traffic-related emissions are also
affected. During DEM simulation, the number of vehicles required and the distances driven per vehicle are confirmed, not only to understand the operational cost related to traffic, but also to estimate CO2e emissions.
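As a minimal sketch of that estimate, the fragment below multiplies the simulated distance per vehicle class by a per-kilometer emission factor; the vehicle classes and factors are hypothetical placeholders, not values from the platform.

    # Assumed emission factors, kg CO2e per km, by vehicle class
    EMISSION_FACTOR_KG_PER_KM = {
        "metal_hauler": 1.30,
        "shuttle_bus": 0.90,
        "delivery_truck": 1.10,
    }

    def annual_co2e_tons(km_by_class):
        """Sum CO2e (metric tons) over simulated annual distance per class."""
        kg = sum(km * EMISSION_FACTOR_KG_PER_KM[cls]
                 for cls, km in km_by_class.items())
        return kg / 1000.0

    # Example with distances of the kind reported by DEM post-processing
    print(annual_co2e_tons({"metal_hauler": 250_000, "shuttle_bus": 80_000}))
    # -> 397.0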
THE INTEGRATED APPROACH
Bechtel's need to quickly and effectively develop safe, lean, and green layouts led to ACE's integration of the various concepts previously mentioned into an automated platform. This new
platform automates data transfer and speeds
layout assessment. Figure 3 outlines the structure
of the integrated approach and the connections
between the layout analysis phases.
The integrated approach was assembled by
coupling and partially integrating proven tools
such as computer-aided design (CAD) drawings,
the plant basic design data (BDD) model, PPM flow
mapping, dynamic modeling (using Flexsim
simulation software), FMEA, and a generalized
comparative cost function (CCF). The resulting
platform automatically collects projected plant
layout and operation parameters and links them
in a dedicated input data set. This data set is
then used to specify, construct, and run the plant
dynamic model. The simulated plant operation is
monitored, and data from simulated scenarios is
collected and automatically exported to the same
data set.
The quantitative analysis provided by the
platform takes into account the dynamically
changing conditions of the projected traffic,
Figure 3. Integrated Approach to Plant Layout Development [1] (flowchart: the layout designer's CAD drawing with intersection coordinates, the senior technology specialist's basic design data, and the process owner's flow maps, routes, quantities, capacities, and technology are linked by a data integrator into a shared database; a static traffic study and a model-based traffic analysis, run by area, dynamic modeling, and safety and risk specialists using a dynamic modeling tool and an FMEA study, exchange preliminary results and feedback to yield safety and efficiency measures and an optimized plant layout)
such as schedules and operational constraints.
Emphasis is given to PPM movement (distance
traveled per vehicle/person, intersection
crossings, unsafe conditions, and possible
bottlenecks on the road network).
The FMEA and CCF tools linked to the data set
score the projected layout in terms of safety and
efficiency. The main outputs of the platform
are factors representing the risk of accidents in
each specific intersection as well as an overall
layout safety score, efficiency measures (cost,
utilization of vehicles and load/unload stations,
etc.), and CO
2
e traffic-related emissions.
Additionally, the data set provides complete post-
processing of the data simulated and collected.
Some of the outputs include:
Layout Safety Score: Average RPN from
FMEA
Layout Efficiency Score: Operating cost
related to traffic
Layout Emissions Score: CO2e emissions related to traffic
RPN of each intersection
Number of vehicles required per flow type
Distance driven per vehicle
Number of trips executed per flow type
Utilization of facilities (e.g., load/unload
stations)
Utilization of vehicles
Optimum benefits result from implementing
the integrated approach early during project
development.
Figure 4 shows a sample of the overall score
the platform generates for a layout once a few
scenarios have been tested. A scenario represents
any modification affecting the road network,
such as plant arrangement, number of vehicles,
schedules, routes, and number of stations.
CASE STUDY
Background
The integrated approach was applied to an
aluminum complex consisting of an alumina
refinery, an aluminum smelter, and an integrated
rolling mill. The application focused on the
interaction of smelter traffic with traffic from all
facilities sharing the same external road network
to nearby cities and ports. Understanding the
traffic patterns was crucial to validating the
operation of the aluminum complex.
Due to the sensitivity of the information
about this project, the results presented in this
paper for demonstration purposes are based
on hypothetical data and a hypothetical plant
configuration. These results focus on the
approach used and the outcomes of the analyses.
Layout Analysis
The integrated approach was used to analyze the
proposed layout. The key tasks [5] that the model
was set to complete were:
Identify the potential for traffic-related
accidents
Identify any bottlenecks impeding
operational efficiency
Validate the number of load/unload
stations, lanes at the security gate, and
weigh stations
Validate the number of vehicles required
Figure 4. Overall Scoring (Sample)
Safe = average RPN; Lean = operating cost (US$) for transportation in 1 week; Green = CO2e emissions (tons) in 1 year, vehicles owned by the plant

Scenario      Safe (RPN)    Lean (US$)    Green (tons CO2e)
Scenario 5        420         210,000         20,400
Scenario 4        630         220,000         20,500
Scenario 3        850         210,000         19,800
Scenario 2        400         170,000         18,600
Scenario 1        510         190,000         19,200
Base Case         840         200,000         19,200
To cover this mandate, 1 week of simulated
operation was targeted. The study was started
by composing a list of flows expected during
this typical week. Then, particular routes were
defined and assigned to these flows. The platform
automatically plotted PPM trajectories on the
original CAD drawing, using multicolored lines
(see Figure 5).
Knowing the PPM flow helped to align the
project team (designers and future operators)
regarding expected vehicle movement. By
drawing attention to the mix of flows in any
particular layout, PPM flow identifies potential
areas of concern for analysis via the simulation.
Eventually, it becomes the FMEA severity factor.
Once the team agreed on the list of flows and
routes, the delivery schedule was defined for
each flow. As part of the scheduling activity,
operations such as metal tapping and anode
change were executed in accordance with
the expected potroom operational schedule.
Annual requirements for each material were
taken directly from the linked BDD model.
The communication between the platform and
the DEM shell was executed automatically. The
discrete-event model was created from the input
data with no user interaction required. Then, the
simulation was performed and predicted output
data recorded.
Figure 6 shows a typical vehicle intersection;
to keep the focus on operational problems,
visualization was kept to a minimum (e.g.,
colored boxes were used instead of 3D objects).
Figure 7 shows a snapshot of the smelter with
various flows active. To bridge the size difference
between the whole complex and the tiny vehicles,
marker strings were used to visually locate the
latter when the view was zoomed out.
Once the results were generated and post-
processing was executed, an analysis followed
to complete the key tasks listed above and to
identify safety and efficiency issues.
Safety Results
Once the FMEA analysis was performed, an RPN
was generated for each intersection; the highest
score for each was presented in a Pareto chart.
See Figure 8 for the RPN values and Figure 9 for
the locations of the corresponding intersections.
Figure 5. Mapping PPM Flow
Figure 6. Example of Discrete-Event Model
Figure 7. Smelter Model Snapshot
Figure 8. Safety: Intersection RPN Score
Figure 9. Intersection Identification
The safety result is comparative and allowed the
team to focus on intersections with the highest
accident risk. In the case study, intersections 15,
13, and 10 scored the highest RPNs. For example,
Figure 10 presents the risk at intersection 15 over
a period of time. The highest RPN score happens
during shift change, when the frequency of
crossings is significantly raised and cars and
trucks interact.
The following solutions were recommended:
Intersections 15 and 13 required redesign, and truck deliveries were not allowed during shift change.
Deliveries to the carbon plant were rerouted to avoid interference with alumina deliveries at intersection 10.
Efficiency Results
The utilization analyses focused on the following
aspects:
Number of lanes at the gates
Number of lanes on major roads
Utilization of load/unload stations
Utilization of truck weigh stations
Utilization of casthouse TAC²/skim stations
Operating cost related to traffic
The model validated the design of all load/
unload stations, weigh stations, and security
gates. Figures 11 and 12 provide examples of the
results generated by the model.
Figure 11 presents the utilization of the coke load/unload station during the 1-week simulation; predicted levels, including transient events, are within acceptable norms.
Figure 12 presents the utilization of security
gate 1 by various delivery trucks during the
1-week simulation. The peaks represent the
highly concentrated day-shift deliveries; the
model validated the number of lanes needed to
inspect trucks. This example demonstrates the
capabilities of DEM to predict peak performances.
As part of the validation process, a variety
of scenarios were also run to understand the
impact of an accident on the road between the
port and the complex or the effect of a late ship
arrival followed by a very early ship arrival.
Figure 13 presents the amount of alumina in
a storage silo at the port during a 1-month
simulation, assuming a late ship arrival followed
by a very early ship arrival.
The blue line represents the results using X
trucks, and the green line represents using X + 1
trucks. After the arrival of the second shipment,
the alumina trucks unload the silo as quickly as
possible, and ship unloading is limited only by
truck operation. In this case, one extra alumina
truck provides the flexibility needed to efficiently
Figure 10. Intersection 15: RPN vs. Time
Figure 11. Coke Load/Unload Station Utilization (utilization rate vs. time in minutes)
Figure 12. Security Gate 1 Utilization (utilization rate vs. time in minutes)
Figure 13. Hypothetical 1-Month Variation in Alumina Inventory at the Port (inventory in tons vs. time in days, showing the late and early ship arrivals, the maximum storage capacity at the port, and fleets of X and X + 1 trucks)
² TAC stands for Treatment of Aluminum in Crucible, a process patented in the 1980s by Arvida Laboratories, Alcan International Limited, to eliminate lithium and other alkali metal contaminants from primary aluminum.
overcome the fluctuation caused by the second
ship's early arrival. Thus, the extra truck reduces
excessive demurrage costs.
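The truck-fleet experiment can be pictured with a toy mass balance like the one below: a fixed fleet hauls alumina out of the port silo while ships deliver into it. This is a deliberately simplified, day-resolution Python sketch, not the DEM model; every number (silo capacity, haul rate, ship size and arrival days) is hypothetical.

    def silo_levels(n_trucks, days=28, ship_days=(6, 20), ship_load=45_000,
                    truck_tons_per_day=1_500, start=30_000, capacity=60_000):
        """Return the end-of-day silo inventory (tons) for each simulated day."""
        level, history = start, []
        for day in range(days):
            if day in ship_days:               # a ship unloads into the silo
                level = min(capacity, level + ship_load)
            level = max(0, level - n_trucks * truck_tons_per_day)  # trucks haul out
            history.append(level)
        return history

    # Comparing fleets of X and X + 1 trucks, as in Figure 13
    x = 8
    print(max(silo_levels(x)), max(silo_levels(x + 1)))

Re-running such a balance with the ship schedule perturbed (a late arrival followed by an early one) is what exposes whether the marginal truck pays for itself in avoided demurrage.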
POTENTIAL APPLICATIONS
ACE originally developed the integrated
approach to support layout development
activities for aluminum smelters and other
traffic-intense facilities. However, a variety of
operations could profit from the flexibility of
this platform, including open pit mines, airport
terminals, and construction logistics.
Open Pit Mines
The integrated approach could be used to
optimize the material flow, resources, and
crushed ore stockpiles of an open pit mine
operation. For example, the platform could be
applied to predict realistic truck arrivals and
queuing at the crushing station. This realistic
prediction of arrivals would help validate the
crushing station design as well as the sizes of any
intermediate stockpiles between the crushing
and grinding stations.
Airport Terminals
During airport design, DEM tools have been
used to analyze safety and efficiency aspects. The
integrated platform would support mimicking
the expected flow of passengers at an airport
terminal. This analysis would provide certainty
of outcome with respect to meeting International
Air Transport Association (IATA) level of service
(LOS) planning standards. Further, it would
support analyses related to utilization of airport
resources such as security scanners and customs.
Construction Logistics
The construction of large engineering projects
usually involves a fair amount of material
transport via trucks. The integrated approach
could be used to predict the logistical impact
of this massive flow on the access roads
through a neighboring village. It could also
enable the construction team to validate the
inventory available at a particular time in the
construction schedule.
CONCLUSIONS
Bechtel's ACE conceptualized, developed,
and tested the integrated approach to
developing safe and efficient plant layouts
and then applied the platform to a particular
project to validate the safety and efficiency of
the proposed layout. This innovative platform
delivered value by applying safety-by-design
and waste elimination techniques. Early findings
indicate that the integrated approach reduces
the cycle time of aluminum smelter layout
analyses by up to 50% compared to sequential,
nonintegrated approaches.
The demonstrated flexibility of the integrated
approach platform lends itself to a variety of
applications on any project faced with evaluating
traffic-intense operations during layout
development activities. This new approach
allows the quick comparative analysis of
competing layouts to arrive at a safe, lean, and
green plant configuration.
TRADEMARKS
Flexsim is a registered trademark of Flexsim
Software Products, Inc.
ACKNOWLEDGMENTS
The authors would like to thank Bechtel for the
funding to develop an integrated approach for
plant layout development and permission to
publish this paper.
REFERENCES
[1] L. Tikasz, C.M. Read, R. Baxter, R.L. Pires, and R.I. McCulloch, "Safe and Efficient Traffic Flow for Aluminum Smelters," Light Metals 2010: Proceedings of the Technical Sessions presented by the TMS Aluminum Committee at the TMS 2010 Annual Meeting and Exhibition, John Wiley & Sons, Inc., Hoboken, NJ, February 2010, pp. 427–432, access via http://www.wiley.com/WileyCDA/WileyTitle/productCd-0873397479.html.
[2] A. Veltri, M. Pagell, M. Behm, and A. Das, "Data-Based Evaluation of the Relationship Between Occupational Safety and Operating Performance," Journal of SH&E Research, The American Society of Safety Engineers, Vol. 6, No. 1, Spring 2009, http://www.asse.org/academicsjournal/archive/vol4no1/feature02.php.
[3] M. Kutz, Ed., Handbook of Transportation
Engineering, McGraw-Hill Companies, New York,
NY, November 2003.
[4] B. El-Haik and R. Al-Aomar, Simulation-Based
Lean Six-Sigma and Design for Six-Sigma, Wiley-
Interscience, John Wiley & Sons, Inc., Hoboken,
NJ, 2006, access via http://www.wiley.com/WileyCDA/WileyTitle/productCd-0471694908.html.
[5] J.A. Tompkins, J.A. White, Y.A. Bozer, and J.M.A.
Tanchoco, Facilities Planning, Chapter 10, p. 639,
John Wiley & Sons, Inc., Hoboken, NJ, 2003, access via http://www.wiley.com/WileyCDA/WileyTitle/productCd-EHEP000315.html.
ADDITIONAL READING
Additional information sources used to develop
this paper include:
C.M. Read, R.I. McCulloch, and R.F. Baxter, "Global Delivery of Solutions to the Aluminium Industry," Proceedings of the 45th International Conference of Metallurgists (Aluminium 2006), MetSoc of CIM, COM 2006, Montreal, Quebec, Canada, October 1–4, 2006, pp. 31–44, see http://www.metsoc.org/estore/info/contents/1-894475-65-8.pdf.
BIOGRAPHIES
Rafael L. Pires is a mechanical
engineer for Bechtel's Mining
& Metals Aluminum Center
of Excellence. He provides
expertise on multiple projects
in the development of advanced
simulations in areas such as
material handling, resource
optimization, operational
schedule, and work design.
Rafael's technical knowledge and skills have been
applied to the Kitimat aluminum smelter (Canada),
Ras Az Zawr aluminum smelter (Saudi Arabia), Ras
Az Zawr integrated infrastructure project (Saudi
Arabia), Quebrada Blanca copper concentrator (Chile),
and Antamina copper-zinc concentrator (Peru),
among others.
Rafael also assists on the application of safety by
design and lean manufacturing techniques during
the development of lean plant designs. He is currently
supporting mechanical engineering analyses for
various studies.
Rafael holds an MASc in Industrial Engineering from
École Polytechnique de Montréal, Canada, and a BSc
in Mechanical Engineering from Pontifical Catholic
University of Rio de Janeiro, Brazil. He has authored/
co-authored several technical papers, published by
The Minerals, Metals & Materials Society (TMS), the
Journal of Aluminium Production and Processing, and the
Conference on Modelling and Simulation (MOSIM).
Rafael is a licensed Engineer in Quebec, Canada, as
well as in Brazil.
Robert F. Baxter is a technology
manager and technical
specialist for Bechtel's Mining
and Metals Aluminum Center
of Excellence. He provides
expertise to the development
of lean plant designs, materials
handling, and environmental
air emissions control systems
for aluminum smelter
development projects, as well as to smelter expansion
and upgrade studies. He is currently one of Bechtel's
technology leads for the Ras Az Zawr and Kitimat
aluminum smelter studies.
Bob has 27 years of mining and metals experience,
including 22 years in the aluminum electrolysis
industry. He is a recognized specialist in smelter air
emissions controls and alumina handling systems.
Before joining Bechtel, Bob was senior technical
manager for Hoogovens Technical Services, where
he was responsible for the technical development
and execution of lump-sum, turnkey projects for the
carbon and reduction areas of aluminum smelters.
Bob holds an MAppSc in the Management of
Technology from the University of Waterloo in Ontario,
Canada, and a BS in Mechanical Engineering from
Lakehead University in Ontario, Canada.
Laszlo Tikasz, PhD, is the
senior specialist for Bechtel's
Mining & Metals Aluminum
Center of Excellence. He has
over 30 years of experience in
advanced aluminum process
modeling and is an expert
on aluminum production
and transformation, process
modeling, and simulation.
Laszlo has developed various process models and
conducted studies to provide information needed to
support engineering and managerial decisions on
aluminum smelter designs, upgrades, and expansions.
Before joining Bechtel, Laszlo worked in applied
research and industrial relations at the University of
Quebec and an aluminum R&D center in Hungary.
Laszlo's PhD, in Metallurgical Engineering, is from
the University of Miskolc, Hungary. His Doctor of
Technology in Process Control and MSc degrees in
Electrical Engineering as well as in Science Teaching are
from the Technical University of Budapest, Hungary.
Robert I. McCulloch is the
manager of Bechtel's Mining
and Metals Aluminum Center
of Excellence. He has global
responsibility for aluminum
smelter technology projects and
studies, including reduction
technology, carbon plants,
casting facilities, and related
infrastructure or systems. Bob
is also responsible for the execution of aluminum
industry projects and studies assigned to Bechtel's
Montreal office.
Bob has over 41 years of experience in engineering
and project management with Bechtel, primarily for
projects in the mining and metals industries in Canada.
His experience includes projects in the Canadian
Arctic and management assignments in Montreal;
Toronto; and Santiago, Chile. He recently returned to
Canada after several years in Australia, where he held project management roles on two major projects.
Bob is a member of the Association of Professional
Engineers of Ontario and was previously a member
of the Canadian Standards Association Committee
on Structural Steel and a corporate representative
supporting the Center for Cold Oceans Research and
Engineering in Newfoundland, Canada.
Bob holds a BEng in Civil Engineering from McGill
University, Montreal, Quebec, Canada.
INTRODUCTION
The increased global focus of governments,
international financing institutions, mining
companies, and communities on sustainable
development has brought a higher level of
environmental and sociological regulation
and scrutiny to new and expanding mining
operations than in the past.
The plant water management strategy is a
key consideration of regulators, investors,
and communities when they evaluate the
sustainability of a greenfield alumina refinery.
The diminishing availability of clean water
worldwide as a result of water pollution has
led many governments and international
financing institutions to introduce increasingly
stringent water discharge quality requirements
to minimize further impacts on water resources.
The issue of effective effluent and hazardous
waste management in the alumina industry was
further highlighted by the accidental release of
approximately 1 million cubic meters (1.3 million
cubic yards) of bauxite residue, also called "red mud," from the Ajka alumina refinery in Hungary
in October 2010.
Many legislative frameworks allow existing
operations to meet less stringent standards than
those expected of a greenfield facility. Therefore,
this paper pays the most attention to those
regions expected to become future alumina
sources and to the associated technical barriers
to developing sustainable and viable projects.
As commodity prices increase and accessible
bauxite deposits near coastal port facilities
(needed to ship alumina to smelters for aluminum
production) become depleted, the need to develop
inland ore bodies is increasing. The relationship
of a refinery's location to its bauxite deposit and transport infrastructure directly affects the project's material handling equipment costs
and resulting economic viability. Selecting the
optimum location for a refinery from a materials
transportation perspective involves, in most
cases, minimizing the distance that bauxite must
be transported from the mine to the refinery.
Therefore, there is a trend toward close-coupled
mines and refineries.
Another consideration is the use of seawater
neutralization, the established method of
effluent treatment at alumina refineries. For
inland locations, the cost of pumping seawater
over long distances can prove prohibitive. A
further consequence is that bauxite residue, a
solid refinery byproduct, would not be able to be
neutralized by seawater and would either:
Have to be stored as a hazardous waste, which
would require long-term environmental
management
Require neutralization by other means such
as adding acid, which could be costly if the
volume of acid needed exceeded the waste
acid produced by the refinery
Issue Date: June 2011
Abstract: How to manage and treat liquid effluents is a determining consideration in designing alumina
refineries. Rainfall, evaporation rate, proximity to the coast, process design and layout, ore mineralogy, local
environment, and potential impact on contiguous communities are all integral factors that must be taken into
account when developing an appropriate refinery water management strategy. The goal is to achieve zero discharge
of liquid effluent to the environment. However, this is not always the most feasible solution under the extreme
rainfall conditions often encountered in tropical and subtropical locations. This paper explores, for both inland
and coastal refineries, the following issues: methods to reduce and control refinery discharges, treatment design
criteria, and socioeconomic aspects related to surface water use in settlements adjacent to the refinery.
Keywords: alumina refinery, environmental design criteria, wastewater
ALUMINA REFINERY WATER
MANAGEMENT DESIGN IN TROPICAL
AND SUBTROPICAL CLIMATES
Lucy Martin
[email protected]
Steven Howard
[email protected]
In short, at inland locations, the water management strategy becomes a key consideration in
alumina refinery site selection studies because
water management facilities can significantly
affect project net present value (NPV).
BACKGROUND
Aluminum is used to make airplanes,
automotive engine blocks, beverage cans,
window mullions, cooking foils, and thousands
of other items. Only steel exceeds aluminum as
the world's most used metal. Even with the high
percentage of recycled aluminum, the demand
for primary aluminum is increasing at about
6% per year. To smelt 1.0 metric ton (1.1 tons)
of primary aluminum requires approximately
2.0 metric tons (2.2 tons) of alumina, and
approximately 5.0 metric tons (5.5 tons) of bauxite
are refined to make each metric ton of alumina.
Aluminum smelters are located where electricity
is inexpensive, and the alumina is shipped to
them from around the world.
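A worked example using the ratios quoted above (the smelter capacity is an assumed figure, chosen only for illustration):

    # Mass ratios as quoted in the text: ~2.0 t of alumina per metric ton of
    # primary aluminum; ~5.0 t of bauxite refined per metric ton of alumina
    ALUMINA_PER_ALUMINUM = 2.0
    BAUXITE_PER_ALUMINA = 5.0

    aluminum_t = 500_000   # hypothetical smelter output, metric tons per year
    alumina_t = aluminum_t * ALUMINA_PER_ALUMINUM    # 1,000,000 t/y of alumina
    bauxite_t = alumina_t * BAUXITE_PER_ALUMINA      # 5,000,000 t/y of bauxite
    print(f"{alumina_t:,.0f} t/y alumina, {bauxite_t:,.0f} t/y bauxite")

Under these ratios, several tons of bauxite stand behind each ton of aluminum, which is why mine-to-refinery and refinery-to-port transport distances weigh so heavily in refinery siting.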
Bauxite is mostly formed by the weathering
of lateritic rock caused by the high amount of
rainfall in tropical and subtropical environments.
Figure 1 shows regions currently undergoing
lateritic weathering. The generally well-defined
wet and dry seasons of these regions force
refinery designers to address the "feast or
famine" issue associated with water availability.
Zero effluent discharge, while ideal for
sustainability, is often impractical for water
impoundment in these regions due to the size
and consequent capital cost of the facilities
required to store enough water during the wet
season to provide sufficient water during the
dry season.

ABBREVIATIONS, ACRONYMS, AND TERMS
BOD    biochemical oxygen demand
BRDA   bauxite residue disposal area
DSP    desilication product
EHS    environmental, health, and safety
GIIP   good international industry practice
IFC    International Finance Corporation
NGO    nongovernmental organization
NPV    net present value
PRC    People's Republic of China
SRT    sump relay tank
SWP    stormwater pond

Figure 1. Lateritic Bauxite Sources (modified from Freyssinet et al. [1]); legend: present-day lateritic weathering; lateritic bauxite deposits
Bauxite contains organic and inorganic impurities
that increase the costs and environmental risks
associated with processing it into alumina. These
minerals and compounds are readily soluble in
the liquor used by the Bayer process (the principal
means of refining bauxite into alumina) and
tend to accumulate therein. These impurities are
discharged into water impoundments via runoff
from contaminated surfaces in the process areas
and are also discharged with bauxite residue into
storage areas. The demonstrated methodologies
used to treat the large-scale bauxite residue and
to neutralize effluent before it can be discharged
into surface or marine waters primarily employ
seawater, either alone or supplemented by spent
sulfuric acid, a byproduct of the refining process.
When refineries close-coupled to inland bauxite
deposits are being considered, a complex tradeoff
needs to be made among the following factors:
• The long-term sustainability of storing bauxite residue, in either a stable (neutralized) or a hazardous (raw) form
• The capital and operating costs of pumping seawater
• The capital and operating costs of materials handling and transportation, primarily of bauxite and alumina
• The cost of water treatment facilities to enable effluent to be discharged into streams and rivers, which may be used by local communities for drinking water (for both human and animal consumption), fishing, and irrigation
• The land availability for, and the capital and maintenance costs of, water and bauxite residue disposal areas
In summary, the key water management
considerations are:
• Water availability for refinery consumption is highly variable from season to season.
• The impoundments needed to store water during the wet season for use in the dry season may be impractical in high-rainfall areas due to the large capital costs associated with constructing facilities large enough to accommodate extreme events and due to the perception by contiguous communities that there is large-scale storage of liquid hazardous materials.
• In inland locations, water effluents may discharge into streams potentially used for drinking and primary industries (farming, animal husbandry, and fishing).
• Typical refinery wastewater treatment techniques, such as seawater neutralization, may not be feasible.
DEFINITION
This paper discusses the following topics related to alumina refinery water management system design:
• Methods to reduce and control refinery discharges
• Treatment design criteria
• Socioeconomic aspects related to surface water use in settlements adjacent to the refinery
These considerations are important both in the
site selection and preliminary and detailed design
of greenfield refineries and in the operational
management of installed facilities. Furthermore,
these considerations can play a major role in
both the financial viability and the long-term
sustainability of projects. Industry forecasts
suggest that global demand for aluminum will
continue to increase. This demand will likely
result in the development of new bauxite deposits
and, consequently, refineries in tropical and
subtropical locations.
REFINERY WATER BALANCE
Water Supply and Demand
An alumina refinery using the Bayer process
consumes 2.0–2.3 metric tons (2.2–2.5 tons) of
raw water per 1.0 metric ton (1.1 tons) of alumina
produced. The actual rate of consumption
depends on bauxite quality, process design,
demand for nonprocess applications (e.g., for
potable water), and the extent to which water is
recycled within the facility. About 10% of the total
water intake is accounted for by free moisture in
the bauxite feed and in the 50% caustic soda
solution used as the primary process reagent.
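As a rough sizing aid, the raw water intake implied by these figures can be estimated as follows. The sketch simply applies the 2.0–2.3 t/t consumption range and the ~10% feed-moisture credit stated above; the 2 Mt/yr capacity in the example is hypothetical:

def raw_water_intake_tpy(alumina_tpy, water_per_alumina=2.15, feed_moisture_frac=0.10):
    """Annual raw water intake after crediting free moisture arriving in the
    bauxite feed and caustic solution (~10% of total intake per the text)."""
    total_demand = alumina_tpy * water_per_alumina  # 2.0-2.3 t water per t alumina
    return total_demand * (1.0 - feed_moisture_frac)

# Example: a 2 Mt/yr refinery would draw roughly 3.9 Mt/yr of raw water
# from surface sources, subsurface sources, and rainfall.
print(f"{raw_water_intake_tpy(2_000_000):,.0f} t/yr")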
Figure 2 depicts water inputs and outputs
across the outer system boundary between
the refinery—including the bauxite residue
disposal area (BRDA)—and the external
environment. This water balance assumes
100% diversion of potential run-on to the site
and no water discharge from the BRDA to
groundwater (by seepage).
Only three sources supply the remaining 90%
of the total raw water demand: natural surface
sources, subsurface sources, and rainfall. This
means that a significant proportion of the water
intake is from rainfall onto the refinery, which is
uncontrollable and, under certain circumstances,
may far exceed the capability of the refinery to
manage the quantities involved.
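The seasonal character of that rainfall component can be illustrated with a toy monthly balance on the impounded-water inventory. Every number below (catchment area, runoff coefficient, pond surface fraction, storage capacity, recycle rate, and the rainfall series itself) is an assumption chosen for illustration, not a design value:

# Toy monthly balance on impounded water (volumes in cubic meters).
CATCHMENT_M2 = 4_000_000    # assumed refinery + BRDA catchment area
RUNOFF_COEFF = 0.8          # assumed fraction of rainfall reaching the ponds
POND_FRACTION = 0.05        # assumed pond surface share of the catchment
STORAGE_CAP_M3 = 1_500_000  # assumed combined SWP/BRDA buffer capacity

def simulate(rain_mm, evap_mm, recycle_m3):
    """Step the inventory through monthly rain/evaporation series and
    return (peak inventory, total volume forced out as releases)."""
    inventory, released, peak = 0.0, 0.0, 0.0
    for rain, evap in zip(rain_mm, evap_mm):
        inflow = rain / 1000.0 * CATCHMENT_M2 * RUNOFF_COEFF
        outflow = evap / 1000.0 * CATCHMENT_M2 * POND_FRACTION + recycle_m3
        inventory = max(0.0, inventory + inflow - outflow)
        if inventory > STORAGE_CAP_M3:        # storage full: forced release
            released += inventory - STORAGE_CAP_M3
            inventory = STORAGE_CAP_M3
        peak = max(peak, inventory)
    return peak, released

# Hypothetical wet-season-dominated year (mm/month):
rain = [450, 400, 350, 150, 50, 10, 5, 5, 20, 80, 200, 350]
evap = [120] * 12
peak, spilled = simulate(rain, evap, recycle_m3=300_000)
print(f"peak inventory: {peak:,.0f} m3, forced release: {spilled:,.0f} m3")

Even this crude model reproduces the pattern described above: a few wet months overwhelm a buffer sized for the average year.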
With the notable exceptions of the People's
Republic of China (PRC) and Russia, most
refineries have been established in tropical and
subtropical regions in proximity to the principal
lateritic bauxite provinces. These locations are
not only exposed to high seasonal rainfall, but are
also at risk of extreme flood events. New capacity
planned in Brazil, Guinea, and Southeast Asia
will likely face these issues.
Environmental Control Requirements
It has long been recognized that effective
environmental management is critical to the
viability of any project. Stringent environmental
control standards and industry best practices
with regard to operations and maintenance must
be reflected in the refinery design criteria.
The focus of this paper is on managing
liquid effluent, any release of which—treated
or untreated—is potentially harmful to the
environment and consequently deleterious to
the well-being of the community. The refinery's
Bayer plant itself handles a large volume of
process liquor, the bulk of which is an aqueous
Figure 2. Refinery Water Balance (schematic showing rainfall, raw water, and feed moisture entering; surface evaporation and calcination stack vapor leaving; and internal recycle, first-flush capture, spills, and releases among the process and nonprocess areas, SRTs, spill ponds, SWP, buffer, and BRDA within the refinery and outer system boundaries)
solution of caustic soda containing dissolved
aluminum, silica, and many other organic and
inorganic impurities, including trace metals such
as molybdenum and vanadium, which occur in
the bauxite.
The refinery is designed to contain the live
liquor inventory, with minor spills within
operating facilities being promptly returned to
the process. Provision is also made to intercept
larger, accidental spillages that could result from
equipment failure or abnormal events such as
the loss of electrical power. Spill ponds are
installed at strategic locations and have sufficient
capacity to handle the contents of one or more of
the largest tanks in the various facilities. These
measures minimize the probability of releasing
highly concentrated, potentially toxic material
that cannot be discharged except under the most
extreme circumstances.
Environmental control measures for an alumina
refinery must also include facilities to handle
contaminated runoff occurring as a result of
rainfall catchment within the facility perimeter.
One or more stormwater ponds (SWPs) are
provided and must be considerably larger than
the process spill ponds. An SWP is intended
to impound rainfall runoff collected from
nonprocess areas. During the initial onset of
rain, this runoff mobilizes relatively minor
amounts of surface contaminants. After a short
period, typically several hours, the runoff
quality is similar to that found in neighboring
areas outside the refinery perimeter. Although
impure, impounded SWP water is suitable for
recycling into the process for various duties,
thereby reducing the intake of raw water. Finally,
the BRDA, the impoundment area constructed
to permanently store bauxite residue and other
solid wastes, also accumulates contaminated
water that may be returned to the process to
recover the soda content.
Under normal operating conditions, the Bayer
plant liquor inventory is controlled within narrow
limits by varying the input rates of caustic soda,
raw water, and in-plant evaporation. Operating
procedures should be aimed at keeping the
process spill ponds empty and the water levels in
the SWP(s) and BRDA as low as possible during
periods of little or no rain. The onset of heavy or
sustained rainfall, typical in tropical locations,
gives rise to large volumes of site runoff, as well
as run-on if it is not effectively diverted. These
volumes can rapidly exceed the finite limits
of the available impoundments, despite the
risk analysis and major investment involved in
providing them. To handle situations where the
maximum impoundment capacity is exceeded,
the regulatory option often exists to impound
only the so-called first-flush runoff, after which
all subsequent catchment is allowed to bypass
the SWP(s) and be released to the environment.
Exceptional rainfall events could defeat even
this strategy (assuming that it is permissible),
and the refinery would then be forced to
release contaminated water to the environment.
Such a release may be diluted to mitigate
the alkalinity of the effluent but would still
result in contaminating receiving waters above
background levels, probably in violation of
license provisions.
Community water demand, including
agricultural, may become a significant component
of total consumption, and any shortage or
degradation of the community supply would
likely be a significant issue. These factors also
need to be considered in the overall conceptual
design of the water supply system.
The conclusion is that despite careful scenario
planning and investment in major infrastructure,
sooner or later an extreme event will cause an
unacceptable environmental incident. Therefore,
the refinery must be located where the receiving
waters are able to sustainably withstand the
impact. This explains why most major refineries
are located near the ocean, which also facilitates
applying seawater neutralization, currently the
only demonstrably effective liquid effluent
treatment technology.
Refinery Design Principles
The unavoidable accumulation of contaminated
water as a result of rainfall, reclamation
constraints, and severe limitations on release to
the environment has five major consequences for
the design of the refinery. These consequences
are discussed in this section.
1. A refinery environmental policy that
mandates compliance with relevant
regulatory standards becomes a major
driver in selecting the refinery and
BRDA locations.
This policy also has a profound impact on
refinery design features and the operational
procedures that enable the standards to
be met.
A policy mandating compliance with
liquid effluent standards is consistent
with one for particulate emissions or any
other form of environmental impact. The
difference lies in implementation: whereas particulate control can be achieved with discrete equipment (e.g., installing dust collectors), controlling the release of impounded water is considerably more complex and requires coordination of the total refinery design.
Facility design criteria must address the
(water) environmental policy, which ranks
along with safety and NPV considerations
in importance, and evaluate the effects
of complying with it before detailed
design can proceed. Some objectives may be compatible; for example, minimizing the caustic content of final-stage mud washer underflow also significantly reduces operating costs.
2. Departures from conventional refinery
and waste impoundment design become
necessary and can add to the cost
of mitigating potential effluent manage-
ment problems.
A refinery in a location that experiences
extreme rainfall events should incorporate
features to minimize both the volume
effects and the possible contamination of
rainfall catchment. These features facilitate
the segregation of concentrated process
liquor (which cannot be released) from
relatively innocuous nonprocess runoff that
can be impounded, recycled during dry
periods, and released subject to meeting
license conditions.
The waste impoundment design must
satisfy the following criteria:
• Effective, permanent sequestration of red mud waste, which will continue to generate alkaline leachate over the longer term
• Recovery of supernatant liquor and impoundment drainage back into the refinery process
• Sequestration of solid-phase organic and inorganic waste from process liquor purification facilities
• Disposal of other environmentally sensitive wastes (e.g., ash, scale, and waste acid)
3. Measures must be adopted to minimize
the controllable intake of raw water,
particularly during the wet season,
thereby reducing the total inventory
to be managed.
The probable excess of rainfall over
evaporation in the regions under
discussion dictates strict control of raw
water intake to minimize the total volume
under management. It follows that the
consumption of raw water in the Bayer
process should be minimized by using
as much water as possible that is already
within the outer system boundary.
Raw water is needed for a number of essential
process duties, for domestic (potable) water
(after treatment), and for the fire control
infrastructure. The number of water entry
points into the refinery must be minimized
and all volumes accurately metered. Strong
design discipline is necessary to ensure
that water control policies and practices are
consistently enforced. Some measures may
conflict with the customs and practices
employed elsewhere, but the capital and
operating costs associated with treating
this water may be excessive. For example,
using hoses for process area housekeeping
is perceived as improving employee safety
by minimizing the hazard of caustic liquor
exposure, but the same results may be
achieved by proper safety procedures and
training while minimizing water usage.
4. Introducing measures to enforce economic
use of controllable process and domestic
water can significantly reduce usage.
Water consumption may be allocated
to two broad categories: uncontrollable,
which is governed by the combined
requirements for essential services, and
controllable, which allows some degree
of flexibility as to the quantity used.
Most of the process requirements fall
into the uncontrollable category; for
example, boiler feedwater, flocculant, and
lime preparation. These examples of end
uses are directly related to the refinery
production rate and to the demand for raw
materials and additives. Domestic water
demand is closely related to the number
of employees. Effective control depends on
installing water-saving devices such as tap
restrictors, educating employees regarding
conservation, and prohibiting usage for
nondomestic purposes.
The final disposition of water in this
category is important in limiting the
potential environmental impact. Appli-
cations such as dust mitigation can help
dispose of excess impounded catchment
under dry season conditions. Other
applications, such as vehicle washing,
may create additional problems caused
by runoff turbidity or contamination by
hydrocarbons and other chemical agents.
The water supply for end users beyond
the outer system boundary should
be from an independent source. This
measure imposes a physical constraint
on consumption and avoids uncontrolled
influence on the infrastructure provided
for the refinery itself.
5. Several tiers of impoundments must be
established to segregate process liquor,
contaminated effluent, and rainfall
catchment.
• Process spill containment—A process
spill is any form of liquor, slurry, or solids
released during routine operational
tasks within the plot limit of any facility.
Spills of this nature are normally small
and are promptly collected and returned
to the process. The concentrations of
process chemicals are too high to permit
release to the environment.
There is a possibility that large process
spills may occur due to equipment
failure, such as a pipeline rupture. The
refinery design should incorporate
process-only spill ponds to intercept and
return material that may overtop the
limited facility containment capacity.
The contents of spill ponds cannot be
released and should be recycled to the
refinery via large sump relay tanks
(SRTs), which provide additional surge
capacity and operational flexibility.
Process design modeling should
incorporate provision for large
process spills to ensure adequate total
containment facilities and return rates
to the process and to prevent defined
events of this nature from compromising
the refinery's rated production and other
key parameters.
• Runoff management—Runoff is rainfall
catchment from the refinery plot limit
that must be monitored for contaminants
and handled accordingly. Runoff falls
into two categories:
– Catchment within process facility plot limits—This may be relatively
dilute but is still far too contaminated
for release. It must be returned to
the process via the SRT system, and
it negatively affects the process and
energy demand. These effects must
also be considered for the refinery
mass and energy balance. The impact
must be minimized by reducing the
process catchment footprint, i.e.,
by roofing large tanks and routing
rainwater catchment to surface drains
outside the facility plot limit.
– Catchment from nonprocess areas—Runoff mobilizes soluble
contaminants such as oil, dust, and
dirt, the concentrations of which may
be appreciable depending on such
factors as paving and drainage design,
control of fugitive dust from the
refinery, and general housekeeping
standards. Under tropical rainfall
conditions, the initial (or first-flush)
nonprocess runoff will exhibit short-
term contaminant loadings that must
be sampled, and the flow must be
directed to a large SWP.
In practice, the volume of runoff from
average rainfall intensities can be far
greater than SWP capacity because the
costs to construct these facilities large
enough to collect the rainfall volumes
experienced in tropical and subtropical
locations can be prohibitive. This can
mean that all but a small fraction of
nonprocess runoff must be released, and
the refinery must be designed with this
in mind. Water diverted to the SWP(s)
can be reclaimed for use as process
water, diverted to the BRDA buffer zone,
or (as a last resort) treated and released.
The tendency to work with average
rainfall data can be highly misleading
and could result in erroneous
assumptions underlying the planning
for the capacities of spill ponds, SRTs,
the SWP(s), and the BRDA. A credible environmental control strategy should be based on scenario planning for abnormal events, such as 1:100-year-return rainfall events (a simple sizing sketch follows this list).
• BRDA—The BRDA is one of only two
impoundments from which low-level
contaminated water may be released
(the other being the SWP[s]), provided
that the regulatory and/or best
environmental control practices can
be addressed. It may be possible to
optimize the total capital investment for
these impoundments by linking them
so that the BRDA provides the final
and somewhat larger capacity for
low-level effluent.
The BRDA must be designed to prevent
contamination of impounded runoff
outside the deposition areas for red mud
and the other solid wastes that have
special sequestration criteria. Liquor that
is either released from the residue slurry
or accumulates from rainfall directly
onto the mud deposition areas must be
recycled to the refinery.
A buffer zone should be provided
between the active solid waste deposition
areas and the outer BRDA perimeter to
allow for abnormal rainfall events that
may temporarily exceed the active areas'
capacity. Liquor and any other leachate
from other waste disposal areas should
not be allowed to mix with impounded
low-level water.
The remaining area of the BRDA
may be used to either store and
reclaim or treat and release excess water.
The final treatment and disposition
method is site-specific and depends
on a number of factors (alkalinity,
toxic ionic species, biochemical oxygen
demand [BOD], temperature, and
turbidity) associated with the effluent
itself and the background conditions in
the receiving waters.
• Final-stage mud washer underflow slurry handling—This residue slurry contains
solid-phase compounds that release
highly alkaline leachate over a long time
frame as a result of unavoidable contact
with supernatant liquor and rainfall.
This tendency cannot be controlled
at the source due to the presence of
sodium aluminum silicates and calcium
compounds that are formed in the
Bayer process. The only control is at the
margin, by minimizing the concentration
and quantity of liquor disengaged from
the residue and reducing direct contact
of the consolidated mud with liquor
or rainwater.
The major controllable variables for
residue disposal are the density of the
slurry and the alkaline concentration
of the liquor discharged with the slurry
into the BRDA disposal area. Paste
thickening is the most effective means
of preparing a very dense, immobile
slurry that releases little or no liquid
as it further consolidates under its
own weight.
The remaining design issue is to
minimize the total soda content (usually
measured as grams of Na₂CO₃ per liter) of the final-stage mud washer
underflow. A typical target is 5 g/L,
based on the number of washing stages
to be installed, the ratio of wash water
to mud, and other factors. The target is
established by considering the steady-
state refinery mass balance and by
making certain assumptions related
to equipment reliability, washer stage
efficiency, etc. Under actual conditions,
major departures from steady-state
operation and the other assumptions
cause excursions in the amount of last-
washer soda by as much as a factor of
10, resulting in values of 50 g/L or more.
This implies that the number of installed
washing stages and spare pumps should
go beyond that suggested by steady-state
modeling; otherwise, the environmental
control objectives become unachievable.
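As flagged in the runoff discussion above, the gap between average-rainfall thinking and extreme-event planning is easy to quantify. The sketch below compares a first-flush capture volume with a 1:100-year storm over the same catchment; the catchment area, runoff coefficient, and rainfall depths are hypothetical illustrations, not design values:

# Compare first-flush capture with an extreme storm (illustrative values).
CATCHMENT_M2 = 2_500_000  # assumed nonprocess catchment area
RUNOFF_COEFF = 0.85       # assumed for paved/compacted surfaces

def runoff_m3(rain_depth_mm):
    """Runoff volume generated by a given rainfall depth over the catchment."""
    return rain_depth_mm / 1000.0 * CATCHMENT_M2 * RUNOFF_COEFF

first_flush = runoff_m3(25)    # e.g., the first 25 mm is impounded
design_storm = runoff_m3(350)  # hypothetical 24-hour, 1:100-year depth
print(f"first flush:      {first_flush:,.0f} m3")   # ~53,000 m3
print(f"1:100-year storm: {design_storm:,.0f} m3")  # ~744,000 m3

The order-of-magnitude gap between the two volumes is why the first-flush-plus-bypass strategy described earlier exists at all.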
Discharge Criteria
In areas where bauxite mining and alumina
refining are emerging, local and national
legislation and associated compliance
monitoring are often not sufficiently developed
to address contaminants of interest associated
with these activities. Consequently, drinking
water, primary industries, and ecosystems may
not be adequately protected by the existing
environmental legislative framework.
Many international mining and mineral
processing companies have implemented
sustainable development policies that require
them not only to meet the requirements of
in-country legislation, but also to consider
the use of the best available technologies
and international guidelines. For projects
seeking external financing, the International
Finance Corporation (IFC) has developed
specific environmental, health, and safety
(EHS) guidelines that are technical reference
documents with general and industry-specific
examples of Good International Industry
Practice (GIIP). These guidelines are used by the
IFC for project appraisals. IFC EHS guidelines
have been developed for both mining and base
metal smelting and refining. [2, 3] However,
these may not include all contaminants of
interest relevant to alumina refinery effluent
discharge. For example, the base metal and
smelting guidelines state that toxicity should be
considered on a case-specific basis and do not
provide guidelines for discharge of heavy metals
associated with alumina refining.
Assuming that local legislation is not sufficiently
defined, guidelines from international
organizations and other countries with highly
developed legislative frameworks can also be
used to develop robust design criteria for effluent
discharges from the project site.
Design criteria for effluent disposal must
consider the environment to which the effluent
will be dischargeda primary consideration is
whether the discharge is to fresh water or to a
marine environment. Quality requirements for
discharge to marine water are often less stringent
than for discharge to fresh water because of
various factors such as higher naturally occurring
baseline concentrations of compounds in marine
water compared to fresh water, increased
dilution of effluent when discharged to open
water, and less sensitive receptors in saline water,
from both an ecosystem and a downstream
user perspective.
Surface water is used for drinking, recreation,
and primary industries in many countries. In
developing countries it is more likely to be
used for these purposes without treatment. To
control population influx and resulting water
usage, strict and enforced industrial zoning
legislation is required where industrial effluent
is discharged; however, this is often lacking in
the regions currently experiencing growth in
bauxite mining and associated refining. In these
situations, it must be assumed that human and
animal ingestion of surface water could take
place at any point outside the industrial fence
line. Therefore, as a minimum requirement,
drinking water quality standards must be
attained at inland refineries discharging to
streams and rivers.
An alternative is to provide a separate, secure
drinking water supply to the local populace
and educate the general public regarding
withdrawal of water; doing so may limit effluent
treatment costs. If this can be achieved and is
accepted by permitting authorities, then it may
be possible to discharge at a higher concentration
and allow for mixing until drinking water or
other international standards are achieved at a
compliance point downstream of the discharge
location. Mixing-zone calculations need to take
into account the lowest surface water flow rate
when calculating dilution, because this rate can
often mean that discharge criteria to achieve
compliance at the edge of a mixing zone need to
be more stringent during the dry season. This
applies to impurity concentrations, temperature,
and turbidity (visual impact).
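The governing effect of dry-season flow can be shown with a simple fully mixed mass balance at the edge of the mixing zone. The sketch below is a simplification (it ignores incomplete mixing and in-stream decay), and the standard, background, and flow figures are hypothetical:

def max_discharge_conc(standard_mg_l, background_mg_l, river_m3s, effluent_m3s):
    """Highest effluent concentration that still meets the standard after
    full mixing: C_mix = (C_bg*Q_river + C_eff*Q_eff) / (Q_river + Q_eff)."""
    total_flow = river_m3s + effluent_m3s
    return (standard_mg_l * total_flow - background_mg_l * river_m3s) / effluent_m3s

# Same 250 mg/L standard, wet- versus dry-season river flow:
wet = max_discharge_conc(250, 20, river_m3s=60.0, effluent_m3s=0.5)
dry = max_discharge_conc(250, 20, river_m3s=4.0, effluent_m3s=0.5)
print(f"wet-season limit: {wet:,.0f} mg/L")  # ~27,850 mg/L
print(f"dry-season limit: {dry:,.0f} mg/L")  # ~2,090 mg/L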
Nongovernmental organizations (NGOs) are
becoming increasingly prominent in enforcing
environmental and sustainable development
policies. The poor performance of some regulators
and companies has contributed to an increasing
public demand for independently verified
information and action. Therefore, in countries
without sufficient environmental legislation
to protect the health of local communities,
the livelihoods of the local populace, and the
surrounding flora and fauna, large corporations
are often under pressure to implement stringent
environmental and sustainable development
policies to protect their reputation.
In summary, developing project-specific
effluent discharge criteria to meet sustainable
development policies requires rigorous social,
health, and environmental impact assessments
of existing and potential future uses, and a
one-size-fits-all solution cannot be applied.
However, generally speaking, discharge quality
requirements to marine rather than freshwater
environments are less stringent and consequently
more achievable.
Effluent Neutralization Technologies
In this context, neutralization refers to reducing
the level of alkalinity in the stream to be treated.
Neutralization processes are required for
effluent that is to be permanently sequestered or,
where there is no other alternative, released to
the environment.
An alumina refinery generates large volumes of
liquid- and solid-phase alkaline effluent from
a variety of sources, predominantly waste mud
and other process byproducts discharged with
the mud, including:
• Diluted process liquor entrained with the bauxite residue (red mud)
• Insoluble fractions of the bauxite (iron and titanium oxides)
• Desilication product (DSP)—hydrated sodium aluminum silicates
• Calcium compounds, e.g., tricalcium aluminate and calcium oxalate
Other effluent streams are generated by
contamination of rainfall runoff from the
refinery site and the possible spillage and release
of process materials to impoundments.
Effluent released from the waste mud stream
invariably exhibits pH levels above 12. Both the
liquid- and the solid-phase sources of alkalinity
(hydroxide ions) must be brought into reaction
with a neutralizing agent. The rate at which
the neutralization reaction proceeds varies
greatly, depending on reactants involved, pH,
concentration, and temperature.
Principles of Effluent Treatment
The primary objective of treating effluent
is to reduce the immediate and longer-term
environmental risk of the (solid or liquid) waste
stream by reducing its alkalinity to the minimum
practicable level. However, low alkalinity is not
the only consideration. Potentially serious soluble
pollutants such as aluminum and other elements
(e.g., molybdates, vanadates, and arsenates) must
also be targeted.
Neutralization options include reaction with
seawater, dilute sulfuric acid, or a combination
of both. Another option, carbonation, is in the
early stages of development, and little process
performance data is available to suggest this as a
viable option at this point.
The application of either seawater or acid
neutralization must take into consideration
that a certain fraction of solid-phase alkalinity
is released over long periods of time—weeks
or months, depending on the prevailing
conditions and the composition of the residue.
In practical terms, there is no such thing as
complete neutralization, due to the limited
treatment time available.
Treatment with sulfuric acid invariably involves
reusing the spent dilute acid used initially to clean
refinery heat exchangers. During the cleaning
process, the acid dissolves scale deposits that
may contain additional pollutants. A corrosion
inhibitor is also employed, the nature of which
must be assessed if the reacted acid is to be
released into the environment.
The chemical and physical properties of the
particular bauxite to be processed have a
determinant impact on the refinery's red-side
process design, on the BRDA design, and on the
technology selected to treat effluent. Significant
effort and cost must be invested in characterizing
the bauxite, and the same attention must
be paid to establishing the environmental
control requirements.
Design Basis
The refinerys design basis must specify the
following fundamental criteria:
• The refinery's location
• The practicability of using seawater neutralization
• The climatic conditions—rainfall and evaporation
• The need to release effluent to the environment and the effluent's likely quantities
• The receiving waters into which effluent will be released
• The social, agrarian, and environmental factors associated with effluent release
• The applicable effluent discharge standards
• The critical parameters for the outfall point
• The waste streams to the BRDA
• The tonnages of solid and liquid waste to be discharged
• The composition of the red mud
• The variability expected in waste loadings and concentrations
• The probable toxicity profile and whether mitigation technology exists
Acid Neutralization
In acid neutralization, residue is mixed with
dilute acid, which reacts immediately with soluble
alkalinity to produce a rapid but temporary drop
in pH. Acid neutralization is commonly achieved
by adding (waste) acid after the last mud-washing
stage. The attack by residual acid (if any) on the
solid-phase alkaline content occurs over a much
longer time frame and may lead to a gradual
increase in pH. It is, therefore, impractical to
neutralize the solid-phase component before
residue disposal. Post-neutralization of waters
released from the BRDA may be necessary.
Acid neutralization produces a dilute sodium
sulfate solution, which, if released, may give
rise to environmental impacts in its own right,
such as algal blooms or local concentrations that
exceed background levels or the level specified
for potable water (<250 mg/L).
The availability of sufficient acid to treat primary/
secondary residue or effluent must be carefully
assessed. It is also necessary to establish that
probable peak effluent discharge rates can be
handled and that the potential heavy metal
contamination of the spent acid is acceptable.
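For scoping purposes, the acid demand and the sulfate byproduct can be estimated from the neutralization stoichiometry (2 NaOH + H2SO4 → Na2SO4 + 2 H2O). The sketch below is a simplified check based on NaOH-equivalent alkalinity; the 10 t/day load in the example is hypothetical:

# Molar masses (g/mol)
M_NAOH, M_H2SO4, M_NA2SO4 = 40.0, 98.1, 142.0

def acid_and_sulfate_t(caustic_t):
    """Stoichiometric H2SO4 consumed and Na2SO4 produced per given tonnage
    of NaOH-equivalent alkalinity neutralized."""
    acid = caustic_t * M_H2SO4 / (2 * M_NAOH)
    sulfate_salt = caustic_t * M_NA2SO4 / (2 * M_NAOH)
    return acid, sulfate_salt

# Example: 10 t/day of residual caustic needs ~12.3 t/day of acid and
# yields ~17.8 t/day of sodium sulfate, which must itself dilute to below
# the <250 mg/L potable guideline noted above.
print(acid_and_sulfate_t(10.0))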
Seawater Neutralization
The important reaction in seawater neutralization
is the precipitation of hydroxyl ions by their
reaction with the magnesium ions present in
the seawater. Again, the rate of reaction varies
greatly—rapid in the liquor phase, much slower
with calcium compounds and the DSP. The
presence of sulfate ions inhibits the reaction
with the DSP.
A major advantage of seawater neutralization
is the precipitation of aluminum ions during
the formation of hydrotalcite, the primary
reaction product from the soluble alkalis
and magnesium. Hydrotalcite formation has
also been shown to remove vanadium,
molybdenum, and phosphorus when the pH
is in the range of 8–10.
If turbidity criteria can be attained, seawater
neutralization of residue allows immediate
effluent discharge, eliminating the need to
separately contain and manage the liquor.
Additionally, using excess seawater enables
the neutralization and release of alkaline
runoff generated by the slow dissolution of
alkaline compounds.
The system design must ensure that the seawater
supply and discharge capacities always exceed
the magnesium demand of residue alkalinity
excursions resulting from process problems. If
the pH is allowed to rise, some of the trace
metals in the hydrotalcite revert to the soluble
phase and affect receiving water quality. If
excess alkalinity cannot be precipitated within
the BRDA, additional hydrotalcite precipitation
occurs at the outfall, creating a visible plume.
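A first-pass check on that seawater capacity follows from the Mg(OH)2 precipitation stoichiometry (Mg2+ + 2 OH- → Mg(OH)2). The sketch below assumes a typical open-ocean magnesium level of about 1.3 kg/m3 and a magnesium utilization factor; both figures and the example load are assumptions for illustration:

M_MG, M_OH = 24.3, 17.0  # molar masses, g/mol
SEAWATER_MG_KG_M3 = 1.3  # assumed Mg2+ content of seawater

def seawater_m3(hydroxide_kg, mg_utilization=0.7):
    """Seawater volume needed to precipitate a hydroxide load as Mg(OH)2,
    derated for incomplete magnesium utilization."""
    mg_demand_kg = hydroxide_kg * M_MG / (2 * M_OH)
    return mg_demand_kg / (SEAWATER_MG_KG_M3 * mg_utilization)

# Example: a 5,000 kg/day hydroxide excursion implies roughly
# 3,900 m3/day of seawater at 70% magnesium utilization.
print(f"{seawater_m3(5_000):,.0f} m3/day")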
Attempts have been made to augment
seawater neutralization with sulfuric acid. This
significantly alters the chemistry to the point
that a much lower pH is necessary to remove
aluminum from the solution. Oxalate removal
is favored at high pH values, so the addition of
acid may be counterproductive and costly.
Alternative Technologies
Other technologies for industrial water treatment,
such as membrane treatment and ion exchange,
are untested on alumina wastewater chemistry
and at the scale discussed in this paper. Even if these technologies were proven effective, the volumes of water requiring treatment in tropical and subtropical locations would entail significant increases in capital and operating costs for alumina refineries.
CONCLUSIONS
Alumina refineries, whether coastal or inland,
discharge effluent under both normal and
extreme circumstances. To minimize the amount
of effluent requiring treatment and discharge
in tropical and subtropical climates, the
environmental design features and operational
controls detailed in this paper for managing raw
water intake and the water balance should be
incorporated into the design.
Inland refineries pose additional challenges due
to the onerous discharge criteria applied to
inland waterways. Discharge criteria can be
imposed by government legislation or financial
institution requirements or prescribed by
company internal sustainable development and
environmental policies. In any of these circumstances,
effluent discharge requirements are often defined
according to the expected use of the receiving
water body by contiguous communities.
The most demonstrated treatment method for
large-scale refineries is seawater neutralization.
However, transporting seawater to an inland
refinery and then back for discharge can strain
a project's capital and operating costs and may
not be feasible.
Other water treatment methodologies are not
proven at large-scale refineries that are required
to meet stringent inland water requirements
and may require expensive reagents to treat
the effluent. These methodologies can increase
a projects risk profile, drive up capital
and operating costs, and result in breaching
discharge design criteria, with consequential
downstream impacts.
ACKNOWLEDGMENTS
The authors wish to thank Bob McCulloch, Bill
Imrie, and James Murray for their valuable
comments and Rosemary Dotlic, Daniel Hayes,
and Eric Tlozek from the Bechtel Communications
team for their editorial assistance.
REFERENCES
[1] Ph. Freyssinet, C.R.M. Butt, R.C. Morris, and P. Piantone, "Ore-Forming Processes Related to Lateritic Weathering," Economic Geology and the Bulletin of the Society of Economic Geologists, One Hundredth Anniversary Volume 1905–2005, Society of Economic Geologists Inc., 2005, pp. 681–722, see http://www.segweb.org/EG/100thAnniversaryVolume.htm.
[2] "Environmental, Health, and Safety Guidelines: Mining," International Finance Corporation (IFC), December 10, 2007, http://www.ifc.org/ifcext/enviro.nsf/AttachmentsByTitle/gui_EHSGuidelines2007_Mining/$FILE/Final+-+Mining.pdf.
[3] "Environmental, Health, and Safety Guidelines: Base Metal Smelting and Refining," International Finance Corporation (IFC), April 30, 2007, http://www.ifc.org/ifcext/enviro.nsf/AttachmentsByTitle/gui_EHSGuidelines2007_SmeltingandRefining/$FILE/Final+-+Smelting+and+Refining.pdf.
BIOGRAPHIES
Lucy Martin is currently
based in Brisbane, Australia,
as a project engineer on the
Frieda River project. She
previously held the position of
chief environmental engineer
for Bechtel's Mining & Metals
(M&M) Global Business Unit
(GBU). In this capacity, she was
functionally responsible for
the environmental engineering activities executed
from Bechtel's offices in Montreal, Canada; Brisbane,
Australia; and Santiago, Chile. Lucy began her
career with Bechtel as a process engineer with the
Bechtel Water organization and transitioned to the
Bechtel Oil, Gas & Chemicals GBU before joining
M&M in 2003.
Lucy holds a BS degree in Chemical Engineering
from the University of Sheffield, England, and is a
registered Professional Engineer in Ontario, Canada.
Steven Howard is a senior
process engineer with Bechtel's
Alumina & Bauxite Centre
of Excellence in Brisbane,
Australia. A chemical engineer
with over 40 years of experience
in major alumina refineries
in Australia and Jamaica, he
gained his technical knowledge
through direct involvement in
facilities employing both low- and high-temperature
bauxite processing technology and by assessing
and applying process design and environmental
control advances. Senior appointments in operations,
technical, and environmental management afforded
Steve the opportunity to devise and implement
policies and procedures to address the demanding
environmental standards facing the large-scale
chemical process industry.
Before joining Bechtel in 2008, Steve spent 15 years
in Jamaica, a location that typifies the challenges
of operating in an emerging economy beset by the
extreme climatic events alluded to in this paper.
Steve holds BSc App (Honours) and BE Chemicals
degrees from the University of Queensland, Australia.
He also has a Graduate Diploma in Management
from James Cook University in Queensland and has
completed the Management Development Program
of the Australian Administrative Staff College of
Mt. Eliza in Victoria.
INTRODUCTION
The conventional overland troughing belt
conveyor is a proven method for transporting
bulk materials at high capacities and over long
distances. Ore bodies are often located in remote
places, which can create difficulties in conveying
material overland due to rugged terrain, steep
topography, and poor geotechnical properties.
The conventional overland conveyor is limited
in its ability to traverse irregular terrain with
respect to gradient and alignment curvature.
Other technologies exist that might benefit a
project if a troughed overland conveyor proves to
be too expensive and time consuming to construct
or if geotechnical risks, route limitations, or both,
are factors.
Alternatives to the conventional overland
conveying systems include the Doppelmayr RopeCon conveying system and the cable-hauled conveying system—specifically, the Metso MRC¹ Cable Belt. These two alternatives warrant consideration when there is a need to traverse terrain for which the conventional conveyor is unsuited. Both offer certain advantages over the conventional system, but with their less developed extent of application, other considerations are also involved.
This paper provides an overview of all three
of these overland conveying technologies and
details some unique components of each. Major
factors requiring consideration when evaluating
overland conveying options are outlined, and
the technologies are compared with respect to
design, construction, operation, maintenance,
and the environment. Some of the advantages
and potential risks of each system are analyzed
and compared to determine when one system
might be considered favorably over the others.
Issue Date: June 2011
Abstract—Overland conveyors provide an efficient, cost-competitive method of transporting bulk materials at
high capacity. Current trends are for capacities well in excess of 10,000 metric tons/hour (about 11,600 tons/hour)
and distances in excess of 20 kilometers (12 miles). The conventional troughing overland conveyor is a proven
transport method for these duties; however, it is limited in ability with respect to gradient and alignment curvature.
These limitations can lead to added capital costs through the need for significant earthworks and support structures.
In some situations, other technologies can be considered that might benefit a project by reducing overall cost;
potentially making construction easier; and, in certain cases, improving environmental performance.
Two less-traditional conveying systems worthy of evaluation against conventional overland conveyors for
long-distance, large-capacity installations are:
• Metso MRC Cable Belt cable-hauled conveying system—An established process that has merit in certain circumstances when compared to other available systems
• Doppelmayr RopeCon conveying system—A developing process that transfers proven technology from the aerial ropeway and conventional conveying fields and offers potential cost-saving and construction benefits along with a much lower footprint than conventional systems
This paper addresses the similarities and differences among the three overland conveying systems, discusses why
each might be considered favorably compared to the others, and offers guidance on selection methodology.
Keywords—bulk materials, cable hauled, capacity, conventional, conveying, conveyor, Doppelmayr RopeCon,
footprint, long distance, Metso MRC Cable Belt, overland, topography
OPTIONS FOR LONG-DISTANCE,
LARGE-CAPACITY BULK
MATERIALS CONVEYING
Brett Wilson
[email protected]
Steve J. Davis
[email protected]
Mark Wolton
mwolton@xstratacopper.com
¹ MRC stands for Metso Rope Conveyor.
Economics plays a key role in selecting which
technology to use, and this paper makes a
simplified, generic cost comparison of the three
systems. Equipment, installation, operation, and
maintenance costs are important factors when
evaluating the three technologies. A guide on
selection methodology is also provided to enable
the reader to make informed decisions during
the project design phase. Finally, a case study is
presented to identify potential benefits that can
arise from selecting a less traditional method of
transporting bulk materials overland.
BACKGROUND
The conventional troughed overland conveyor
belt is a proven technology that has been
used to transport bulk materials for nearly a
century. The invention of the three-roll idler
belt configuration for material transport
dates back to the 1930s. [1] Since then, the
requirements for higher capacities and longer
runs have led to innovation and new tech-
nologies used in conventional overland
conveyors. Several conveyors have been installed
at lengths approaching 20 kilometers (km) (about
12 miles [mi]) or longer, and current trends
are for even longer lengths and capacities in
excess of 10,000 metric tons/hour (t/h) (about
11,600 tons/hour [T/h]). Capacities of 30,000 t/h
(about 33,000 T/h) and higher are possible.
Although a proven and widely used technology,
the conventional overland conveyor is limited in
its ability to traverse natural terrain. In regions
of unfavorable topography, this limitation
can result in indirect routes and longer, more
expensive conveying systems. Accommodating
this limitation may also involve large amounts
of earthworks, tunnels, and multiple transfer
stations, which all result in increased cost
and longer construction schedule. The effort
to overcome this limitation has led to new
technologies that offer more flexibility with
respect to gradient and alignment curvature.
One such technology is the cable-hauled
system. Although this technology has been
around since the 1950s, it has not been widely
used compared to the conventional overland
conveyor. To date, all large cable-hauled
systems are Cable Belt systems, as currently
manufactured by Metso Corporation. The
largest cable-hauled system holds the world
record for the longest single-flight conveyor,
with a center-to-center distance of 31 km
(about 19 mi); however, its capacity is limited to
3,000 t/h (about 3,300 T/h). There are a few
other examples of cable-hauled systems close to
20 km (about 12 mi) in length, all with relatively
low capacities.
Another technology, which is relatively early
in its development process compared to the
conventional overland conveyor, is RopeCon,
a proprietary system developed by Doppelmayr
Transport Technology GmbH. Doppelmayr
(founded in 1892 in Austria) has combined aerial
ropeway technology and proven conveyor belt
technology to create the RopeCon system. The
first system was installed in 2000, and the largest
RopeCon system to dateinstalled in 2007has
a capacity of 1,200 t/h (about 1,325 T/h) and
a horizontal length of 3.4 km (about 2 mi).
Pipe conveyors also have application as overland
conveyors but are limited in capacity. They
are comparable to cable-hauled belts in many
situations. Other technologies may eventually
be able to match the lengths and capacities being
considered here, such as the pneumatic capsule
pipelines developed by various companies in
the last half of the 20th century from much
earlier concepts, and the more recent proprietary
Rail-Veyor, developed in 1999 by Rail-Veyor
Systems, Inc. (now Rail-Veyor Technologies
Global Inc.) and in use in South Africa and
Canada. However, the development of these
options to date has not yielded information
sufficient to consider them for long-distance,
large-capacity bulk materials conveying projects.
CONVEYING OPTIONS OVERVIEW AND
KEY COMPONENTS
Options for overland conveying vary
according to numerous factors, including
capacity, length, terrain, geotechnical properties,
and the environment. Overland conveying
options include the conventional conveyor,
Cable Belt, and RopeCon systems. This section
provides a brief overview of these three
technologies, with an emphasis on the less
traditional Cable Belt and RopeCon systems.
Specific components that make each system
unique are discussed in more detail.
Conventional Troughing Overland Conveyor
Conventional overland conveyors are
predominantly ground or truss mounted, as
can be seen in Figure 1. Fixed rolling elements
known as idlers are used to support the belt. A
drive pulley provides driving tension to the belt,
which is usually of steel cord construction for
overland applications. The conventional overland conveyor is the most commonly used conveyor for overland applications and has a proven track record of delivering systems with high capacity over long distances.

Figure 1. Conventional Troughing Overland Conveyor (Boodarie, Western Australia)
Conventional overland conveyors are capable of
horizontal and vertical curves, but have higher
radius-of-curvature limitations compared to
Cable Belt and RopeCon conveyors. The larger
radii make route selection more difficult. Another
limitation is the inclination angle of the conveyor,
which can generally range from 10 to 20 degrees,
depending on the bulk material. The length of a
single flight is limited by the breaking strength
(in the US, the maximum operating strength)
of the belt. Higher strength belts are currently
in development, and a steel cord belt rating
of ST 10,000 (Newtons/millimeter [N/mm])
(between 11,400 and 7,140 pounds per inch of
width [PIW], depending on safety factor) should
be available within the next 2 years.
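The PIW figures quoted above can be reproduced from the ST rating, which expresses belt breaking strength in N/mm of width. The sketch below assumes safety factors of 5 and 8, which bracket the quoted values; the 5.71 constant is simply the unit conversion (0.2248 lbf/N x 25.4 mm/in):

N_PER_MM_TO_PIW = 5.71  # 1 N/mm of width = 5.71 lbf/in of width

def working_tension_piw(st_rating_n_mm, safety_factor):
    """Allowable working tension (PIW) from an ST breaking-strength rating."""
    return st_rating_n_mm * N_PER_MM_TO_PIW / safety_factor

# ST 10,000 at assumed safety factors of 5 and 8:
print(round(working_tension_piw(10_000, 5)))  # ~11,400 PIW
print(round(working_tension_piw(10_000, 8)))  # ~7,140 PIW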
Conventional troughing belt overland conveying
technology has been developing over the past
20 years and continues to do so. The technology
and components for these systems are readily
available from many suppliers. The baseline is
a solid foundation of many tens of thousands of
conveyors built and operated over many years
of experience. However, technical risks are still
associated with these systems at the ultimate
limits of existing knowledge and experience.
Belt
Conventional large-capacity, long-distance
overland conveyors use a steel cord belt
incorporating a single layer of parallel steel cords
to carry the belt tension. The steel cords serve
as the load-bearing member and are embedded
in an elastomeric compound.
Top and bottom covers are provided to protect
the steel cords or carcass of the belt. Cover
thickness and material depend on the application
and the bulk material to be carried. The elastomer
selected for a belt significantly affects the
amount of energy needed to drive the conveyor.
Current knowledge and recent developments in
rubber technology have helped reduce specific
power consumption.
Vulcanized splices are used to connect belt
segments, and splice design and application are
the limiting factors in a belt's carrying capacity.
Current finite element analysis and splice testing
methods have led to the acceptance that splice
safety factors can be reduced from previous
empirical numbers. This alone has had the most
significant impact on belt carrying capacity.
Idlers
The idlers of a conventional conveyor system
are used to support the carry and return sides
of the belt. They are mounted on shafts and
fitted with antifriction bearings and seals to
reduce friction losses. The carry idlers support
the loaded section of the belt and the return
idlers support the empty return run of the
belt. For troughed overland conveyors, the carry
idlers usually consist of three rolls, while the
return idlers are either horizontal or in a two-
roll V configuration. Typical spacing for the
carry and return idlers is 2 and 4 meters (m)
(about 6.5 and 13 feet [ft]), respectively, but can
vary greatly (up to 5 and 10 m [about 16.5 and
33 ft], respectively). Impact idlers are used at the
loading points of the conveyor, where the impact
of the material could seriously damage the belt.
The number of idlers installed directly affects
the cost of the conveyor, and idler spacing has
increased significantly for overland conveyors.
The selection of the idler bearing, seal, and
lubricant also has a significant effect on conveyor
life, noise emission, and energy consumption.
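The cost leverage of idler pitch is simple arithmetic. The sketch below counts idler sets for a single flight; the 20 km length is illustrative, and the pitches are the typical and upper values quoted above:

def idler_set_counts(flight_length_m, carry_pitch_m, return_pitch_m):
    """Approximate carry and return idler set counts for one flight."""
    return int(flight_length_m / carry_pitch_m), int(flight_length_m / return_pitch_m)

# A 20 km flight at the typical 2 m / 4 m pitches versus 5 m / 10 m:
print(idler_set_counts(20_000, 2.0, 4.0))   # (10000, 5000)
print(idler_set_counts(20_000, 5.0, 10.0))  # (4000, 2000)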
Tensioning System
There are numerous options, depending on
the application, for tensioning the belt on a
conventional overland conveyor. These options
include:
• Gravity take-up (vertical or horizontal)
• Fixed take-up (specific circumstances only)
• Powered winch take-up
• Combined take-ups
These options allow the system to be very
flexible and cater to numerous applications
and loading conditions. Current knowledge of
dynamic conveyor behavior and take-up control
has allowed overland conveyors to be designed
with sophisticated take-up management that
reduces belt tension and overall costs.
Conveyor Drives
Conventional conveyors are typically driven
and braked at one or more locations along the
conveyor. The designs, locations, and controls
of these drives, combined with take-up and
splice technology, have allowed these conveyors
to perform at capacities and lengths that were
inconceivable 25 years ago. Several types of
intermediate drives are available, from the
tripper style through the booster style (the
so-called T-T drive, originally from the German
term Trieb-Tragegurt [drive harness]) to the
power-strip style, which does not need trippers.
Cable-Hauled Conveyors
Cable-hauled conveyors such as the Cable
Belt system differ from conventional overland
conveyors in that the driving tension is
provided by steel cables that support the belt
longitudinally but are independent of it. The two
endless cables run on stationary sheaves (wire
rope pulleys) spaced along the entire length
of the conveyor. Typical spacing is 6 m (about
20 ft) for the carry side and 12 m (about 40 ft)
for the return side of the belt. [2] Driving power
tension is provided to the cables, which carry
the belt, which in turn carries the material. In
effect, the cable-hauled system separates the
material-carrying function from the power and
tension function. This allows longer flights
because the tension limitations of a cable are
higher than those of the splice in a steel cord belt.
The steel cables support both the carry and return
sides of the belt. Separation of the cables and belt
occurs at the conveyor discharge, drives, and
take-ups. The belt and cables wrap around their
respective return pulleys, or drums. A friction
drive wheel is used for the cables because they
are the driven component. Each cable requires
a drive, and an electronic differential is used to
balance the load and speed on each drive.
Cable Belt systems have the ability to
accommodate horizontal curves with radii down
to 400 m (about 1,300 ft). [3] Conveyor capacity
is limited to 6,000 t/h [3] (about 6,600 T/h) for
higher density applications (e.g., iron ore), and
4,500 t/h (about 5,000 T/h) for lower density
applications (e.g., coal). Presently, belt width is
limited to 1,600 mm (about 5.25 ft) and conveyor
speed is limited to 6 m/sec (about 20 ft/sec).
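These limits are consistent with the standard volumetric capacity relation Q = 3,600 x A x v x rho, where A is the load cross-section (m2), v the belt speed (m/s), and rho the loose bulk density (t/m3). In the sketch below, the 0.24 m2 cross-section for a 1,600 mm belt at a low troughing angle and the 0.85 t/m3 coal density are assumptions for illustration:

def capacity_tph(area_m2, speed_m_s, density_t_m3):
    """Volumetric conveyor capacity: Q [t/h] = 3600 * A * v * rho."""
    return 3600.0 * area_m2 * speed_m_s * density_t_m3

# Assumed 0.24 m2 cross-section at the 6 m/s speed limit, coal at 0.85 t/m3:
print(f"{capacity_tph(0.24, 6.0, 0.85):,.0f} t/h")  # ~4,400 t/h, near the
                                                    # 4,500 t/h coal limit quoted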
The coal-hauling Cable Belt system shown in
Figure 2 is installed at the Line Creek Mine
in British Columbia, Canada, now owned
by Teck Coal Ltd. The 10.3 km (6.4 mi) long
conveyor features horizontal curve radii down to
The cable-hauled
system separates
the material-carrying
function from
the power and
tension function,
which allows longer
flights to be used.
Figure 2. Cable Belt Overland Conveyor (British Columbia, Canada)
(Source: Metso Corporation)
430 m (about 1,400 ft). The mine site experiences
frequent heavy snowfalls and temperatures
as low as −40 °C (−40 °F). It has been operating
since 1996.
Belt
The belt used on a cable-hauled conveyor is
constructed for lateral strength to support the
load, rather than longitudinal strength to carry
drive tension. The belt in a Cable Belt system has
two longitudinal V grooves molded to both
the carrying surface and the underside to locate
the belt on the driving cables. To improve load
support and maximize material stability, the
grooves on the underside are located a distance
from the edge of the belt.
Inserted into the central core of the belt is a web
of steel cross strands that provides the carrying
strength. This flexible steel mesh also minimizes
risk of belt puncture. Vulcanized joint splices
are not required because the belt does not carry
driving tension. Instead, mechanical fasteners
are used, which offer technicians the ability to
quickly repair the belt if necessary.
Cable-hauled belting is designed to trough under
load, and the belting is constructed with pre-
troughing. The troughing angle of a cable-hauled
system is lower than that of a conventional
overland conveyor. This reduction of the material
cross-section requires a higher speed or wider
belt for a cable-hauled system to achieve capacity
similar to that of a conventional conveyor.
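The speed/width penalty of the lower troughing angle can be illustrated with the standard volumetric relation, capacity (t/h) = 3,600 × A × v × ρ. In this sketch the cross-sectional areas and bulk density are assumed round numbers, not vendor figures.

    # Illustrative check of why a shallower trough forces a wider or
    # faster belt for the same tonnage. Areas and density are assumed.
    def capacity_tph(area_m2, speed_mps, density_tpm3):
        """Volumetric conveying capacity in tonnes per hour."""
        return 3600.0 * area_m2 * speed_mps * density_tpm3

    rho = 0.85                     # coal bulk density, t/m^3 (assumed)
    a_conventional = 0.18          # m^2, deep-trough profile (assumed)
    a_cable_hauled = 0.13          # m^2, shallower profile (assumed)

    v = 5.0                        # belt speed, m/s
    print(capacity_tph(a_conventional, v, rho))   # ~2,754 t/h
    print(capacity_tph(a_cable_hauled, v, rho))   # ~1,989 t/h

    # Speed the shallower profile would need to match the deeper one:
    v_match = v * a_conventional / a_cable_hauled
    print(f"matching speed ≈ {v_match:.1f} m/s")  # ~6.9 m/s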
Cables and Sheaves
Twin endless Dyform® Zebra steel rope drive
cables, manufactured by Bridon International
Limited, are used to provide the driving force
for the Cable Belt cable-hauled system. These
steel rope cables incorporate integrally wound,
engineered, plastic inserts. The steel wires and
plastic inserts are profiled to create a smooth,
continuous cylinder that provides constant
surface contact with the support sheaves. This
reduces cable chatter, wear, and vibration and
provides more efficient drive load sharing as a
result of more positive contact with the sheaves.
There are reports that previous versions of the
cable resulted in damage and wear to both cable
and sheaves, but it appears that cable and sheave
development is ongoing and system performance
is improving.
The running sheaves are designed as split
half-units with polyurethane inserts. The
highest risk of wear on the sheaves occurs along
horizontal curved sections (due to the higher
horizontal cable loadings) because the sheaves
carry a tension component in these curves, as
well as the load component. In these sections,
the sheaves are tilted to direct the cable tension
to their centers (see Figure 3).
Tensioning System
The Cable Belt cable tensioning system consists
of a cable tension carriage, a winch rope, an
equalizer frame, a series of bridle pulleys, a
gravity weight, and a tension winch. Two of
these units are required, one for each of the
two driving cables. The carriage tension track
can be up to 300 m (almost 1,000 ft) long and is
designed to accept the initial stretch of the cable.
The belt tensioning system requires only a
nominal working tension to ensure a smooth
belt transition in the terminal areas and over the
intermediate drives. Tension is applied through a
belt tension carriage mounted in the structure at
one end of the rope tension track.
Drive System
The Cable Belt drive units consist of two bevel
helical reducers. Drive control incorporates
electronic differentials to maintain equal tensions
in both ropes. This ensures that the peripheral
speeds of the drive wheels are constant and
that the drive cable safety factor is preserved.
The intermediate drives for a cable-hauled
system do not require a tripper configuration,
and little material disturbance occurs. As with
conventional conveyors, the use of intermediate
drives reduces the maximum tension required
Figure 3. Cross-Section of Cable Belt Curved Conveyor [3]
(Source: Metso Corporation)
to drive the system, allowing for longer
systems using current technology. Figure 4
shows an example of an intermediate drive in a
Cable Belt conveyor.
RopeCon Conveyors
The RopeCon system is a bulk material handling
conveyor that combines ropeway technology
with that of a conventional conveyor belt. The
belt combines a flat conventional steel cord
belt with bonded corrugated rubber side walls
and integrated wheel sets. The corrugated
side walls keep the bulk material in position
and are identical to those of ContiTech AG's
proven Flexowell® belt system that has been
in operation for 50 years. The wheel sets run
on fixed, anchored track ropes guided over
tower structures. The conveyor operates off the
ground and is supported by the tower structures
(similar to ski lifts).
The hauling and support functions are performed
by a belt driven by a drive pulley at the head or
tail station, similar to the conventional overland
conveyor. After the material has been discharged,
the belt is turned over along the return length to
eliminate any dispersion of residual material
and dust and then turned back again before
it reaches the return station. Turnovers are an
option for conventional troughing belts for
similar reasons, but not for cable-hauled belts.
The wheel sets are integrated into the belt and are
supported by cross-members or axles mounted
along the length of the conveyor. The cross-
members act as cleats, allowing the conveyor to
work at angles steeper than those achievable by
conventional and cable-hauled conveyors. The
wheels are inspected in the loading station, and
the entire length of the system can be inspected
and maintained from a maintenance cart that
rides on the ropes.
It has been suggested that the capacity and
conveying distance limitations of the RopeCon
system are similar to those of the conventional
Figure 4. Cable Belt Intermediate Drive (Source: Metso Corporation)
Figure 5. RopeCon System (Simberi Island, Papua New Guinea) [5] (Source: Doppelmayr Transport Technology GmbH)
overland conveyor; however, this has not
been proven. Doppelmayr states the following
parameters [4]:
• Distance up to 20 km (about 12 mi)
• Capacities in excess of 20,000 t/h (about 22,000 T/h)
• Tower spacing up to 1,500 m (almost 1 mi)
The RopeCon system cannot turn horizontal
curves, but benefits by being able to connect in
a straight line over terrain that would require
extensive structural support and ground level
intrusion for the other two systems. The distance
between discrete tower supports and the small
footprint combine to reduce environmental
impact compared to the other systems. The
system is designed to need no maintenance from
ground level, so, in theory, there is no need for
access roads once the system is installed. An
example of a RopeCon system can be seen in
Figure 5.
Belt
The RopeCon belt is based on a conventional
steel cord belt but with some cords omitted to
allow the axles to be through-bolted, resulting in
fewer cords compared to a conventional conveyor
belt. It is therefore considered to be a derivative
of a higher strength belt. The belt does not have
to trough, so it can have textile (breaker) layers
on the top and bottom, increasing the resistance
to bolt pull-out and improving the resistance to
impact damage at loading points. The convoluted
sidewalls are the standard Flexowell product,
cold-bonded to the belt and to themselves at
joints, and carry no load. The belt and sidewalls
are available from several manufacturers.
Vulcanized belt splicing is identical in concept
to that of the conventional troughing belt and
limits capacity in the same manner. Design
indications are that the splice is more efficient
than that of a standard belt because the omitted
cords give more overlap space.
Tensioning System
The RopeCon system has used fixed tensioning
for all installations to date. The advantages are
that a fixed tensioning system is simple, robust,
and easy to analyze. The disadvantage is that the
system needs to be overdesigned to cope with
the wide range of possible system tensions due to
operating load and ambient temperature.
Doppelmayr is evaluating active and
hydraulic take-ups as better systems for higher
capacity conveyors.
Track Rope Support Frames
The spacing of the RopeCon track rope spans
is maintained by support frames, which also
maintain the spacing between the upper and
lower belts. The support frames are discrete
components spaced regularly along the length
of the system. In high winds, the frames prevent
lift-off or deropement and ensure that the
material being transported remains in place. [2]
The frame-to-rope connectors are similar in
design to the standard devices used in ropeways.
The frames are connected by an electrical sensor
wire that immediately shuts the system down
in the event of excess movement for any reason.
This mitigates the possibility of significant
damage from a minor and easily rectified issue.
Axles, Wheels, and Bolts
The wheel sets attached to a RopeCon belt
comprise an axle, two stub-axles that bolt to the
axle, and the wheels, which turn on the stub
axles using sealed rolling element bearings. The
axles are attached to the belt with proprietary
fasteners and sit on a bonded-on base of rubber
and steel.
The belt is effectively suspended from each
axle rather than being supported by it. This seems
counterintuitive but works well in practice.
Interfaces with other components prevent any
change to this arrangement. The result of this
suspension is that the axle has to penetrate
both sidewalls.
In some cases of inadvertent damage, a few
axles have been pulled through the belt. They
are relatively easy to repair/replace in position
with special temporary fasteners, followed by
permanent relocation by drilling new holes
in the belt and sidewalls and repairing the
original holes.
The wheels are made of high-grade polyamide,
which minimizes rolling resistance and resists
ultraviolet light. The high flanges on the wheels
ensure precise belt tracking. Wheel guidance
in the load and discharge stations and at belt
turnovers is on rigid rails, the wheels having
been diverted from the ropes at the entry points.
Wheels are also lifted from the ropes in the
support towers, and the loads are taken by a
series of large diameter rollers that carry the belt.
The design aims to not overload the wheels at
the higher load points in the system. The loading
station design is such that the belt is supported
on sliding impact bars and the wheels take
no impact loading.
Ropes and Accessories
The RopeCon system ropes are the standard
ropeway track ropes used in all fixed-rope cable
cars and aerial ropeways. The only difference
between them and common ropeway ropes is
that they are galvanized, not bright. The double
locked-coil construction presents a smooth
external face. All interior voids are filled with
grease. The RopeCon system uses ropes as
a structural component, and there are many
different devices for holding the ropes and
clamping to them.
Using the number of installations as a measure,
Doppelmayr is considered the world leader in
rope and support design and installation for
aerial ropeways.
Tower and Ground Structures
The RopeCon system towers (Figure 6) are very
similar to those used in other ropeways. Two
types are available in the set of standard designs:
tubular and lattice-framed. At the top of each
tower is a tower-head containing rope and belt
supports and maintenance platforms.
Both the head and tail of the RopeCon system
have solid anchor blocks where the ropes are
initially aligned and tensioned.
The robust nature of RopeCon tower and ground
structures was demonstrated at an installation
in Jamaica that was exposed to tornado winds
exceeding 249 km/h (155 mi/h). The RopeCon
system suffered only a handful of wheel
derailments in one location, which were fixed
using the maintenance carriage.
Wherever the natural ground profile has a
ridgeline that can support the RopeCon system
without the need for a tower, the tower-head
components are directly supported on a
concrete base (Figure 7). These are referred to
as ground supports.
Tower heads and ground supports contain
four elements:
• Rope supports: These supports provide
the vertical and lateral support for the rope
and take all vertical loads and wind loads.
• Rope anchors: The ropes do not slide over
the tower heads, but are fixed. The rope
anchors take all of the tension loads of
the incoming and outgoing ropes. These
substantial elements are in the center of the
tower head. Towers do not take any rope
tension loads and are hinged at the base so
that they can lean with the ropes as loads
change. Ground supports, on the other hand,
are fixed, and any difference in tension
between incoming and outgoing ropes is
supported by the concrete ground structure.
• Belt support idlers (Figure 8): The angle
of the belt changes as it approaches, passes
over, and departs the tower heads. This angle
change and the belt tension create a
substantial vertical load on the belt at the
tower heads. This load cannot be taken by
the axles because it would require very large
axles, wheel bearings, and bolts. Instead, the
belt is supported through the tower head by
a set of flat idlers. The number of idlers is
governed by the belt tension and angle change.
• Maintenance platforms: These platforms
enable some of the equipment to be
maintained. They are designed to allow the
maintenance trolley to pass through, and
they allow access to the top of the structure.
Figure 6. RopeCon Tower Support Structure (Mt. Olyphant, Jamaica) [5] (Source: Doppelmayr Transport Technology GmbH)
Figure 7. RopeCon Ground Support [5] (Source: Doppelmayr Transport Technology GmbH)
The idlers supporting the belt are the only rotating
components on the RopeCon system that are not
able to be maintained at the end structures.
These idlers are surrounded by substantial rope
support and anchoring devices, so access to them
is restricted.
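The magnitude of the tower-head load described above follows from simple statics: a belt under tension T deflected through a total angle Δθ presses on its supports with a resultant of about 2T sin(Δθ/2). The tension, angle, and idler count in this sketch are assumptions chosen only for illustration.

    # Rough statics behind the tower-head idler load. Values assumed.
    import math

    T = 400_000.0                 # belt tension at the tower, N (assumed)
    dtheta = math.radians(12.0)   # total angle change over the head (assumed)
    n_idlers = 8                  # flat idlers sharing the load (assumed)

    # Resultant of two tension vectors meeting at angle (180 - dtheta) deg:
    R = 2.0 * T * math.sin(dtheta / 2.0)
    print(f"resultant vertical load ≈ {R/1000:.0f} kN")          # ~84 kN
    print(f"per idler (even sharing) ≈ {R/n_idlers/1000:.1f} kN")

Even a modest angle change at a high-tension point produces tens of kilonewtons of load, which is why this load is carried by a dedicated set of flat idlers rather than by the axles and wheel bearings.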
Drive Systems
The RopeCon drive system (Figure 9) is virtually
identical to that used on a conventional conveyor
with a lagged drive pulley applying drive tension
to the belt. The drive layout includes brake
devices between the motor and gear reducer
and also between the gear reducer and the
drive pulley. The sidewalls and axles on the belt
prevent reverse bends, which constrains the
layout of the drives.
Dual drives are typically used, with each
drive incorporating a Doppelmayr-designed
high-speed brake and a Svendborg Brakes A/S
multi-caliper, low-speed brake on the pulley
shaft disc. The braking system and its control
far exceed any seen on conventional conveyors.
An over-speed-driven mechanical trip that
engages a dumb (purely mechanical) brake is
also incorporated in case of complete power and
control system failure.
No holdbacks are provided because provision
is made to reverse the system for maintenance.
This also means a consistent drive philosophy for
the machines, whether they are being driven or
working in regenerative mode. The drive speed
is fully variable to allow inching inspections.
Intermediate drives are possible; however, no
installations to date have required one.
Belt Cleaning
RopeCon belts are difficult to clean compared
to conventional and cable-hauled belts, and this
difficulty must be considered during detailed
design because a small percentage of carry-back
can be significant as spillage or as a circulating
load. Doppelmayr has developed mechanical
scrapers, but their practicality and efficiency
depend on belt speed. This is an area that
needs some development and could involve
rapping and washing. Spill belts under each
belt turning device are also recommended.
DESIGN CONSIDERATIONS
Environment
Environmental factors play a key role in selecting
the ideal conveying technology. These factors
include:
• Installation disturbance
• Dust
• Spillage
• Security
• Sustainability
• Community impact
• Noise
Installation disturbance can be as simple as
putting a road through for installation or as
significant as removing/filling in large amounts
of existing terrain and leaving changed water
flow paths, fauna trails, and so on. Lower impact
installations cost less to mitigate and maintain.
Excessive dust and spillage can occur in any poorly
designed overland conveying system. No single
technology eliminates all dust emissions and
material spillages. The cable-hauled conveying
system does have the benefit of proven longer
flight lengths, but the other two technologies are
likely to match or exceed this soon. Longer flight
lengths can reduce the number of transfer stations,
which is where most dust is generated and
Figure 8. RopeCon Idler Supports at Tower and Ground Structures [5] (Source: Doppelmayr Transport Technology GmbH)
Figure 9. Typical RopeCon Drive System [5] (Source: Doppelmayr Transport Technology GmbH)
the most material spillage is commonly found.
High-speed transit of materials can generate its
own dust trail, but this is related to the material
being carried and the cover systems. A well-
designed transfer station with dust suppression
or a collection system can mitigate the risk of
excessive dust emissions and material spillage
to the environment. Belt cleaning can be an issue
for the RopeCon system, with belt turnovers and
spillage conveyors being requirements to reduce
spillage. Belt cleaning is less of an issue with
cable-hauled and conventional systems, but still
needs to be considered in detail.
Security can be a concern for certain projects,
depending on their location. Both cable-hauled
and conventional systems are predominantly
ground mounted and require access roads along
the entire length of the conveyor for easy access
to the equipment. The RopeCon system has the
advantages of not requiring road access along
the conveyor and of being suspended between
towers. These features reduce the risk associated
with equipment vandalism. Conventional and
cable-hauled systems can also be elevated, with
maintenance car access, but this configuration
requires different support considerations and
does not match the RopeCon system for footprint,
since many more supports will be required.
Environmental sustainability and community
impact can also play a role when selecting
conveying technology. The RopeCon system has
the advantage in this respect due to its reduced
footprint and ability to traverse terrain that is not
navigable by the other systems.
Noise emissions vary depending on the
technology selected and the design of the
system. The RopeCon system emits less noise
compared to the other technologies, including
a well-specified conventional overland system.
The RopeCon system does not require sound
barriers, which, while they reduce noise,
can also restrict equipment maintenance access.
A cable-hauled or poorly specified conventional
overland system emits the most noise.
Construction
Construction cost and schedule are two
factors that can determine the feasibility of a
project. When comparing the three conveying
technologies, the following items should be
considered:
• Civil works
• River crossings
• Access roads
In an area of difficult terrain, it can be very costly
and time consuming to complete the civil works,
river crossings, and access roads needed for
cable-hauled and conventional conveyor systems.
Tunneling for conveyors is a possible solution
to reduce the amount of civil works; however,
depending on the geotechnical parameters
of the area, this may not be feasible and is likely
to add costs and safety issues.
The RopeCon solution can significantly reduce
the cost and time associated with civil works,
river crossings, and access roads if the terrain is
rugged. The Cable Belt conveyor can negotiate
tighter horizontal curves compared to the
conventional system, which may or may not be
beneficial, depending on terrain and proposed
conveyor alignments.
In ideal terrain, where all three systems could
be installed in a straight flight, the RopeCon
system loses its advantage.
Maintenance
Conventional System
The maintenance requirements of a conventional
conveyor are well-known within the industry.
Proper design of the conventional overland
conveyor significantly affects the system
maintenance requirements. There are numerous
suppliers and designers of conventional overland
systems with varying degrees of quality, and
this risk must be mitigated in the design phase
by developing a selection process to identify
appropriate designers and suppliers that can
deliver the desired results.
The major equipment components that require
maintenance in a conventional overland conveyor
are the idlers. The drives and belt need regular
inspection. The belt can be easily inspected at
either the loading or discharge stations. Belt rip
and other detectors can be installed to detect
belt damage. Replacing belt sections can be
time consuming and costly.
Because the idlers are stationary, maintenance
crews must travel the length of the conveyor to
check them. This process is time consuming for
large overland conveyors that have a significant
number of individual idlers, especially
where safety guards or noise barriers restrict
inspections. Detecting early idler failure is
difficult, and, for most installations, safety
practices require the conveyor to be stopped
during idler inspections.
Cable Belt System
A Cable Belt system is similar to a conventional
overland conveyor in that the rolling parts
are stationary and, therefore, require access
roads on both sides of the conveyor to maintain
them. There are varying reports regarding
the maintenance histories of installed Cable
Belt systems, and it is advisable for anyone
considering this system to confirm the latest
developments and the experiences of others in
this respect.
The risk associated with a damaged or broken
cable-hauled belt is not as severe as that
associated with RopeCon and conventional
overland conveyors. The belt does not provide
the driving power and tension and is easily
repaired using mechanical fastening, which is
significantly simpler and less time consuming
compared to vulcanized splicing.
The haul-rope splice life leads to regular splice
replacement and an increasing number of splices.
Near the end of the rope life, many splices exist,
all needing inspection and maintenance. The
increased number of splices also increases
the noise emitted by the Cable Belt conveyor.
The running sheaves are easily maintained by
removing a stainless steel clip and four bolts to
split the sheaves, which allows the polyurethane
inserts to be replaced.
Cable Belt system maintenance issues and
requirements are not as well known as those of
conventional systems, so operators and mainte-
nance crews typically require additional training.
RopeCon System
Doppelmayr offers several fixed-point condition
monitoring systems for the belt and components.
These systems include bolt head monitoring, rip
detection, belt overload (profile) detection, wheel
alignment monitoring, and others. Additional
condition monitoring possibilities include
the usual bearing temperature and vibration
monitors. To date, none of these systems have
been deployed on a RopeCon system, so their
performance and reliability are unconfirmed.
With every wheel and axle passing through the
loading and discharge stations, the ability to
install fixed-point condition monitoring systems
is a distinct advantage of a RopeCon conveyor
over a Cable Belt or conventional conveyor. Using
these systems eliminates the need to travel along
both sides of the conveyor looking and listening
for failed idlers or idler frame alignment issues.
Wheels are likely to be the most often replaced
components, although current installations show
that this is not a common issue. Being able
to replace wheels at the head or tail is much
simpler than lifting the belt in the field, as with
conventional conveyors.
The maintenance trolley provides access for
inspection and maintenance of all components
that are not belt-related, including frames,
towers, and instrumentation along the system.
The trolley runs along the top ropes and can be
self-propelled or winch-driven. Existing trolleys
have platforms on either side that facilitate
inspection and maintenance. These platforms
are self-leveling so that personnel can stand on
a level surface.
One possible issue with the RopeCon system is
the differential wheel diameters in service and
how this affects load sharing between adjacent
wheels. If several adjacent wheels are nearing
their wear limit and only one is replaced, the
significant increase in vertical load on the
replacement wheel could lead to premature wear
or reduced bearing life.
Another issue is the idler rolls at the towers,
which cannot be replaced quickly in existing
RopeCon systems. If there is a failure,
maintenance must be performed immediately.
General wear issues are fundamentally identical
to those of conventional conveyors, but RopeCon
does avoid the idler change issue and the
need for remote inspection of 20,000 or more
individual rollers.
The reliability of the RopeCon system and its
individual components is somewhat unproven,
reflecting the limited amount of time that they
have been in operation. The longest-operating
RopeCon system has only 50,000 hours of service,
which is insufficient for understanding the long-
term effects of operation on system reliability.
RopeCon system maintenance issues and
requirements are not as well known as
those of conventional systems, so operators
and maintenance crews typically require
additional training.
Operation
Safety is paramount in any operation involving
overland conveyors. The highest risk to operator
safety is nip points between moving parts. All
three technologies have safety issues related
to nip points. The moving nip points of the
RopeCon system (the wheel sets) can add a
higher degree of safety risk if precautions are not
taken; however, most of the system is suspended
high enough to avoid any inadvertent access,
so that there is some mitigation.
Table 1. Overland Conveyor Technology Comparison

Distance
  Conventional: Currently up to 20 km (12 mi); no reason why this cannot be longer.
  Cable Belt: Operates over long distances: 31 km (19 mi). Could be longer, but the manufacturer is cautious.
  RopeCon: Longest is 3.4 km (2 mi). Doppelmayr states that up to 20 km (12 mi) is achievable.

Capacity
  Conventional: In excess of 10,000 t/h (11,600 T/h); limited by current maximum belt width of 3,200 mm (10.5 ft). Systems at 40,000 t/h (44,100 T/h) exist in pit operations over reasonable distances. Speeds of over 9 m/sec (29.5 ft/sec) possible. Each system is evaluated for technology limitations.
  Cable Belt: Limited to 6,000 t/h (6,600 T/h) for higher density applications (e.g., iron ore) and 4,500 t/h (4,950 T/h) for lower density applications (e.g., coal). Belt width limited to 1,600 mm (5.25 ft). Conveyor speed limited to 6 m/sec (20 ft/sec).
  RopeCon: Largest is 1,200 t/h (1,320 T/h). Doppelmayr states capacities in excess of 20,000 t/h (22,050 T/h) are achievable.

Alignment
  Conventional: Horizontal curves down to 800 to 1,500 m (875 to 1,640 yd) radius. Inclination limited to 10 to 20 degrees.
  Cable Belt: Horizontal curves down to 400 m (437 yd) radius. Substantial route flexibility due to horizontal curve capability.
  RopeCon: Vertical terrain capability significantly better than others. Tall rubber cleats allow inclination angles up to 60 degrees heading into support structures. Currently not capable of horizontal curves.

Rugged Terrain Constructability
  Conventional: High civil works required.
  Cable Belt: Potentially reduced ground disturbance through better route capability.
  RopeCon: Significant reduction in civil costs and construction schedule. Can cover very rough terrain using towers. Towers can be airlifted into position.

River Crossings
  Conventional: High construction cost.
  Cable Belt: High construction cost.
  RopeCon: Non-issue, depending on river bank elevations; tower spacing up to 1,500 m (1,640 yd).

Security
  Conventional: More difficult to mitigate vandalism.
  Cable Belt: More difficult to mitigate vandalism.
  RopeCon: Reduces exposure to vandalism.

Landform
  Conventional: Significant landform change may be required. Could have high maintenance and legacy impact.
  Cable Belt: Less landform change may be a benefit. Could have high maintenance and legacy impact.
  RopeCon: Lowest impact on existing landform. Easier rehabilitation.

Environmental (Noise and Dust)
  Conventional: Belt cleaning possible. High noise emissions.
  Cable Belt: Belt cleaning possible. Highest noise emissions.
  RopeCon: Belt cleaning difficult; spillage conveyors required. Low noise emissions.

Environmental (Habitats)
  Conventional: High impact.
  Cable Belt: High impact.
  RopeCon: Reduced impact.

Community Impact
  Conventional: High impact.
  Cable Belt: High impact.
  RopeCon: Reduced, except for high visual impact.

Safety
  Conventional: Stationary nip points.
  Cable Belt: Stationary nip points.
  RopeCon: Moving nip points.

Capital Cost (Equipment Only)
  Conventional: Low; numerous suppliers.
  Cable Belt: Moderate; limited number of suppliers.
  RopeCon: High; single source of supply.

Reliability
  Conventional: Proven but still improving.
  Cable Belt: Proven but developing in details.
  RopeCon: Unproven; longest-running installation is 7 years; no major issues to date.

Maintenance
  Conventional: Needs access along entire conveyor. Highest number of moving parts. Requirements well known.
  Cable Belt: Fewer moving parts than conventional system. No risk of ripped belt; belt easier to repair/replace. Belt damage not as critical compared to others. Needs access along entire conveyor. Operates at higher speeds than others for similar capacity. Different technology requires additional training.
  RopeCon: Majority of rotating parts inspected at common station. Dislodging, cable wear, wheel wear, and spillage not issues. No continuous access required alongside conveyor. Different technology requires additional training.

Power Consumption
  Conventional: Good attention to design and components can lead to lower power consumption.
  Cable Belt: Lower power requirements due to overall lower frictional losses.
  RopeCon: Lower power requirements due to overall lower frictional losses.
Another factor in operating a conveying system
is the cost associated with running the conveyors.
During the technology selection process, the
operating costs associated with power and
consumables should be compared. The RopeCon
and Cable Belt systems should have lower power
requirements than the conventional conveyor
due to the differences in their technologies. The
cost benefit realized depends on the installation.
Capital Cost
The capital cost of the conventional conveyor
(excluding installation) is generally lower than
that of the Cable Belt and RopeCon systems
where the terrain allows all three to be installed
on similar alignments. In terrain that is not ideal
for a conventional conveyor, the RopeCon and
Cable Belt systems can yield cost savings when
the civil works, construction, and potentially
much shorter routes are considered. Costs
should be compared for each application to aid
in selecting the appropriate technology.
The Cable Belt system has a very limited number
of suppliers, and RopeCon's sole-source supplier
is Doppelmayr. This means that, effectively,
costs are only available from a single source for
each system.
ADVANTAGES AND POTENTIAL RISKS
The key differences in the three overland
conveyor technologies discussed in this
paper are summarized in Table 1.
SELECTION METHODOLOGY
A design process that can be used to determine
which technology is appropriate for a given
application is outlined in this section. A concept
study should be performed to evaluate the three
technologies. The following elements should be
conceptually evaluated for each technology:
• Capacity
• Topography
• Alignment
• Environment
• Distance
If each technology is found capable of achieving
the desired capacity and conveying distance
with a feasible alignment, then costs should be
compared. The cost comparison should take into
account the following:
• Capital cost
• Installation cost
• Operating cost
In parallel with the cost comparison, risk should
be analyzed, including for the following aspects:
• Schedule
• Reliability
• Environment
• Scale-up
The option with the lowest cost and most
acceptable risk characteristics should be selected,
and a risk mitigation plan should be developed.
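A minimal sketch of such a comparison follows; the costs, service life, and risk scores are entirely hypothetical and serve only to illustrate how cost ranking and risk acceptability interact in the final selection.

    # Hypothetical concept-study comparison. All figures (AU$ millions
    # and 1-5 risk scores) are invented for illustration.
    options = {
        "Conventional": {"capex": 340, "install": 120, "opex_pa": 18, "risk": 2},
        "Cable Belt":   {"capex": 360, "install": 100, "opex_pa": 15, "risk": 3},
        "RopeCon":      {"capex": 420, "install":  40, "opex_pa": 14, "risk": 4},
    }
    LIFE_YEARS = 20

    def total_cost(o):
        # Capital plus installation plus undiscounted operating cost.
        return o["capex"] + o["install"] + LIFE_YEARS * o["opex_pa"]

    for name, o in sorted(options.items(), key=lambda kv: total_cost(kv[1])):
        print(f"{name:12s} total ≈ {total_cost(o)}  risk score {o['risk']}/5")
    # The lowest-cost option is selected only if its risk is acceptable;
    # otherwise a risk mitigation plan must close the gap.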
CASE STUDY
This section presents a case study showing
the potential benefits of selecting a non-
traditional method of overland conveying. Two
options were considered during a feasibility
study: the conventional overland conveyor and
the RopeCon system. A Cable Belt system was not
considered because the capacity requirements
exceeded 10,000 t/h (11,600 T/h). The terrain
was very rugged, as seen in Figure 10, which
displays a section of the RopeCon system.
Figure 10. Example of a RopeCon Conveying System Profile
(Source: Doppelmayr Transport Technology GmbH [modified])
[Profile elevations shown in the figure range from 702.0 m (2,300 ft) to 1,320.0 m (4,331 ft).]
The RopeCon system's profile clearly showed
the potential to reduce capital costs and
schedule duration by reducing the amount
of civil works. By comparison, the equivalent
conventional overland conveyor required a
huge amount of civil works.
Capital costs, including for installation, were
compared (see Table 2). After 6 years of
operation, this particular application required
the installation of an additional overland
conveyor parallel to the initial system.
The capital cost required to supply and install
the initial system was far greater for the
conventional overland conveyor because the civil
costs were extremely high. The capital cost of the
conventional overland conveyor installed in the
sixth year was lower than that of the RopeCon
system because most of the necessary civil works
had already been completed in the first year, and
the equipment cost of the conventional system
was lower than that of the RopeCon system.
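The timing of the second installation matters as much as the totals. The following sketch discounts the staged outlays from Table 2 back to year 1; the 8% discount rate is an assumption chosen for illustration, not a figure from the study.

    # Present-value comparison of the staged capital outlays in Table 2.
    # The discount rate is an illustrative assumption.
    rate = 0.08
    conventional = {1: 340.0, 6: 145.0}   # AU$ millions by year installed
    ropecon      = {1: 180.0, 6: 170.0}

    def present_value(outlays, r):
        # Discount each outlay back to year 1.
        return sum(cost / (1.0 + r) ** (year - 1)
                   for year, cost in outlays.items())

    for name, outlays in (("Conventional", conventional), ("RopeCon", ropecon)):
        print(f"{name:12s} nominal {sum(outlays.values()):.0f}  "
              f"PV ≈ {present_value(outlays, rate):.0f}")
    # Conventional: nominal 485, PV ~439; RopeCon: nominal 350, PV ~296.

On this basis, the RopeCon option remains clearly cheaper in present-value terms even though its sixth-year outlay is the higher of the two.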
As a result of the feasibility study, the RopeCon
system was selected based on the significant
capital cost reduction that it offered. A risk
mitigation plan was developed because of
the scale-up risk involved with this system.
CONCLUSIONS
Conventional conveyors offer a proven,
cost-effective method to convey bulk
materials long distances at high capacities.
Their limitations with respect to gradient and
alignment curvature have resulted in other
technologies being developed, which include
the Cable Belt and RopeCon systems.
The RopeCon system is relatively new and has
not yet been proven to achieve the capacities and
distances of the conventional overland conveyor.
Its advantages are in its significant civil cost
savings and its reduced risk where the terrain
is rugged and not ideal for the conventional
overland conveyor.
Cable Belt conveyors are an established
technology that has not been used as widely as
others due to their niche market. The Cable Belt
conveyor is limited in capacity and is not cost
competitive for shorter applications. Cable Belt
conveyors are better suited to medium-capacity,
long-distance applications over difficult terrain that is not
suitable for a conventional overland conveyor.
Every project is site dependent and must be
analyzed on its own merits. A detailed analysis
is required before an informed decision can be
made, and this paper has summarized the key
factors influencing that decision. In each case,
cost and risk should be analyzed to determine
the optimal solution.
ACKNOWLEDGMENTS
The authors would like to acknowledge
Doppelmayr Transport Technology GmbH
and Metso Corporation for their contributions
to this paper.
TRADEMARKS
Doppelmayr and RopeCon are registered trade-
marks of Doppelmayr Transport Technology
GmbH.
Dyform is a registered trademark of Bridon
International Limited.
Flexowell is a registered trademark of
ContiTech AG.
Metso and Cable Belt are registered trademarks
of Metso Corporation.
Rail-Veyor is a registered trademark of
Rail-Veyor Technologies Global Inc.
REFERENCES
[1] Belt Conveyors for Bulk Materials, Conveyor
Equipment Manufacturers Association (CEMA),
Sixth Edition, 2nd Printing, Naples, FL, 2007,
p. XI, see http://www.cemanet.org/publications/
previews/Belt%20Book-6thEditionpv.pdf.
[2] Materials Handling Peer Review, report
prepared by P. Taggart & Associates Ltd.,
Canada, for the Tampakan Feasibility Study
Project, April 2, 2009.
[3] MRC Cable Belt Conveyors, Metso Corporation,
http://www.metso.com/miningandconstruction/
mm_bulk.nsf/WebWID/WTB-041125-2256F-
222FA, accessed November 4, 2010.
[4] Doppelmayr Transport Technology GmbH,
http://www.ropecon.com, accessed
November 2, 2010.
[5] M. Wolton and S.J. Davis, RopeCon Risk
Mitigation Project, report prepared for
the Tampakan Feasibility Study Project,
October 15, 2010.
Table 2. Capital Cost Comparison: Conventional System vs. RopeCon System, AU$ millions

System | Year 1 | Year 6
Conventional | 340 | 145
RopeCon | 180 | 170
Cost Difference | 160 | 25
BIOGRAPHIES
Brett Wilson is a mechanical
engineer with 6 years'
experience in the mining and
metals industry. He has had
a variety of engineering roles,
including mechanical design,
bulk materials handling
design, commissioning, field
engineering, and reliability
engineering. He joined Bechtel
in 2009 and is currently based in Brisbane, Queensland,
Australia. He worked on the feasibility studies for both
the Tampakan and Frieda River copper projects as part
of the Xstrata Bechtel Copper Alliance.
Before joining Bechtel, Brett worked for Alcan
Engineering, including on the third-stage expansion of
its Gove refinery. He wrote a paper on Gove's new rod
mills that was presented at the 2007 annual meeting of
The Minerals, Metals & Materials Society (TMS). At the
following annual TMS meeting, he was awarded the
2007 Bauxite and Alumina Technology award for the
paper. He was responsible for the design of mechanical
and bulk materials handling systems for the Gove
expansion and was part of the onsite field engineering
and commissioning teams.
Brett holds a B Eng Coop (Hons) degree in Mechanical
Engineering from Central Queensland University,
Rockhampton, Australia. He is a member of the
Institution of Engineers, Australia.
Steve J. Davis is chief materials
handling engineer in the
Bechtel Mining & Metals
Global Business Unit (GBU) and
is based in Brisbane, Australia.
He joined Bechtel in 2009 to
redefine the bulk materials
handling capability of the
GBU. Previously, Steve was a
materials handling consultant
for the Australian firm of Hatch Ltd., which supplies
engineering, project, and construction management
services; process and business consulting; and
operational services to the mining, metallurgical,
energy, and infrastructure industries. He has also been
a chief engineer, engineering manager, and principal
engineer for other organizations in Australia, South
Africa, Canada, Chile, England, and other countries.
Steve has been involved in many studies and projects
that consider different styles of overland conveying
and all associated bulk materials handling aspects and
has worked with many international specialist and
suppliers, including Doppelmayr, in this respect. He
regularly presents courses and workshops on various
aspects of the bulk materials handling industry and
has presented papers on similar topics at forums in
Australia and South Africa.
Steve holds a B Eng (Hons) degree in Mechanical
Engineering from Sheffield University in the UK and
a post-graduate qualification in Advanced Project
Management from the University of Pretoria in
South Africa. He has also completed many specialist
courses applicable to the materials handling field. His
professional memberships include Fellowships of the
Institution of Mechanical Engineers and of Engineers
Australia. Steve is a Chartered Engineer in the UK and
Australia and a registered Professional Engineer in
Queensland and is listed in the National Professional
Engineers Register.
Mark Wolton is a senior
mechanical engineer seconded
to the Xstrata Bechtel
Copper Alliance as study
manager for the Frieda River
project feasibility study. His
home company is Parsons
Brinckerhoff, where he has
been employed since 2004. He
has worked on many projects
that involve long overland conveyors, including the
Curragh North overland conveyor (the world's longest
single-flight, troughed-belt conveyor) and the Tarong
Transport Alliance project (which included design of a
three-flight system of 75 km [almost 47 mi] in length).
Before entering the materials handling field, Mark
spent 10 years in the heavy-haul and passenger
railway industry working in vehicle dynamics,
testing, and carriage design. He has also worked in
the building services field and the oil and gas industry
as a drilling services engineer. Mark has worked in
Australia, Mexico, and the UK.
Mark holds a BE Mech (Hons) degree in Mechanical
Engineering from the University of Queensland,
Australia, and is a registered Professional Engineer
in Queensland.
TECHNOLOGY PAPERS

Oil, Gas & Chemicals

Optimum Simulation of Flood Flow Rate: Comparing Combinations of Precipitation Loss and Rainfall Excess-Runoff Transform Models
Zhiyong Duan, PhD

Energy Management System for the New Jamnagar Refinery in India
Rajesh Narayan Athiyarath, Munir Mujawar, Vinayak Shanbhag

Dynamic 3D Visualization of Construction Activities
Shiva Krishnasamy

CO2 Compression and Dehydration for Carbon Capture and Sequestration
Tomas Maramba III, Dipanjan Bhattacharya

Angola LNG EPC Onshore: The liquefied natural gas processing plant in Angola includes LNG Tanks B (foreground) and A (background), with the lights from the Kwanda Dock area in the background.
INTRODUCTION
Predicting the flow discharge at a watershed
outlet is one of the most important objectives
of a hydrological model. The result provides the
basis for the subsequent hydraulic modeling.
The Hydrologic Engineering Center Hydrologic
Modeling System (HEC-HMS) is a popular
hydrological model developed by the US
Army Corps of Engineers. This precipitation-
runoff simulation program categorizes the
land and water in a watershed into directly
connected impervious and pervious surfaces.
The precipitation falling on impervious surfaces
runs off without loss. Processes are at work on
pervious surfaces that result in precipitation
excess flowing as runoff to the watershed outlet.
In the HEC-HMS model, these precipitation
processes fall into two classifications: loss and
transform. Precipitation interception by trees,
grass, and other land cover; infiltration under
land surface; evaporation and transpiration to
the air; etc., are all counted as precipitation losses
that do not run off. The precipitation excess
flowing to the watershed outlet is defined as
transform. [1]
Multiple models have been developed to describe
the precipitation losses. The initial and constant-
rate (IC) loss model, a simple loss model, uses an
initial loss to account for the interception and
depression storage of precipitation falling on a
land surface. It assumes that the precipitation
loss rate is a constant. [2] The Natural Resources
Conservation Service (NRCS) curve number
(CN) loss model uses the CN to represent the
land use/land cover, soil group, and antecedent
moisture. [3] The Green-Ampt (GA) loss model
is based on a conceptual model of precipitation
infiltration. The parameters Moisture Deficit,
Suction, and Conductivity are used to represent
the infiltration process of precipitation under
land surface. [4]
Unit hydrograph (UH) models are system
theoretical models that connect the precipitation
excess and runoff by ignoring the detailed inter-
nal processes. [3] These traditional UH models
are widely used in hydrological modeling. The
Snyder UH model estimates the UH parameters
with watershed characteristics. [4] The NRCS-
developed UH model expresses the UH discharge
as a function of the UH peak discharge and the
time to UH peak. [3] The Clark UH model uses a
linear reservoir model to represent the transform
of precipitation excess through a watershed. [1]
These models are based on different physical
models or empirical data. Their applications
require determination of different hydrological
Issue Date: June 2011
Abstract: Hydrologists can use many precipitation loss rate functions and rainfall excess-runoff transform
techniques to develop watershed runoff hydrographs. For the most part, they can apply the loss and transform
approaches in virtually any combination. To evaluate some of the many possible combinations, the author formed
a matrix of three commonly used precipitation loss rate models and three rainfall excess-runoff transform models
and compared the nine resulting simulated runoff hydrographs for a single large storm. The predicted peak flow
rate derived from the US Geological Survey (USGS) regression equation was compared with the predictions
obtained from these combination models. The estimates of the parameters of these precipitation loss models
and rainfall excess-runoff transform models were also analyzed. The combination of the Natural Resources
Conservation Service (NRCS) curve number (CN) loss model and the NRCS unit hydrograph (UH) model
was considered the best option, since it requires simple parameters that can sufficiently represent the watershed
characteristics. The other combinations also produce reasonable predictions. However, the estimates of some of
their parameters result in prediction uncertainty.
Keywords: precipitation loss, rainfall excess-runoff transform, watershed runoff hydrograph
OPTIMUM SIMULATION OF FLOOD FLOW
RATE: COMPARING COMBINATIONS OF
PRECIPITATION LOSS AND RAINFALL
EXCESS-RUNOFF TRANSFORM MODELS
Zhiyong Duan, PhD
[email protected]
parameters. Previous studies were conducted
that compared only two of these models. [5–7]
In this paper, a matrix of the combination of the
three loss rate models and the three excess-runoff
transform models was applied for the Linnville
Bayou watershed (located in southeast Texas)
to explore the advantages and disadvantages of
these models.
WATERSHED CHARACTERISTICS
The Linnville Bayou watershed includes the
main stream channel of Linnville Bayou
(commonly known as Big Linnville Bayou) and
a tributary channel known as Little Linnville
Bayou. The watershed was divided into eight
sub-basins, as shown in Figure 1, using the
BASINS software developed by the US
Environmental Protection Agency. [8] BASINS is
composed of a suite of interrelated components
for performing various aspects of environmental
analysis. Through the use of geographic
information system (GIS) techniques, BASINS
has the flexibility to display and integrate a wide
range of information (e.g., land use, point source
discharges, and water supply withdrawals) at
a scale chosen by the user. The BASINS data
downloading tools were used to obtain the
digital elevation model (DEM) dataset, the land
use/land cover dataset, and the hydrologic soil
dataset from online sources on the Internet.
BASINS includes GIS functions that enable
highly accurate calculations to be made for
the watershed boundary and delineation and
drainage area.
The sub-basins were delineated using a DEM
with 30 m by 30 m (98.4 ft by 98.4 ft) data
spacing. Streamflow lines were obtained from
the National Hydrography Dataset (NHD), a
comprehensive set of digital spatial data that
encodes information about bodies of water
(naturally occurring and constructed), paths
through which water flows, and related entities.
PRECIPITATION LOSS AND TRANSFORM
METHODS
Based on the delineation results, the HEC-HMS
program was applied to build a model of the
watershed by employing different combinations
of three loss rate models (NRCS CN, GA, and
IC) and three excess-runoff transform models
(Snyder UH, Clark UH, and NRCS UH).
Parameters of these models were determined
from the land use/land cover characteristics, soil
groups, and watershed delineation.
The loss models describe the precipitation loss as
a result of interception, depression, evaporation,
etc. A single storm (12.8 in. [32.5 cm] for a 100-
year, 24-hour flood) was set up in this hydrologic
model to explore the applicability of various loss
models. The precipitation excess-runoff transform
models describe the change of precipitation
excess into runoff from a catchment area. Various
transform models were explored by estimating
the model parameters from the watershed
characteristics using BASINS and ArcGIS® tools.
ArcGIS, developed by Environmental Systems
Research Institute (Esri®), is an integrated
collection of GIS software products that provides
a standards-based platform for spatial analysis,
data management, and mapping.
USGS REGRESSION EQUATIONS
The US Geological Survey (USGS) has recently
developed a set of alternative regression
equations for estimating peak streamflow
frequency Q_T for watersheds in Texas for
various recurrence intervals (T-year recurrence
intervals, ft³/sec). These equations are based on
ABBREVIATIONS, ACRONYMS, AND TERMS
BASINS Better Assessment Science
Integrating point and Non-point
Sources (software system)
CN curve number
DEM digital elevation model
Esri Environmental Systems Research Institute
GA Green-Ampt (loss model)
GIS geographic information system
HEC-HMS Hydrologic Engineering Center
Hydrologic Modeling System
IC initial and constant-rate (loss
model)
NHD National Hydrography Dataset
NRCS Natural Resources Conservation
Service
SI International System of Units
UH unit hydrograph
USGS US Geological Survey
a logarithmic transform of drainage area using
three predictor variables: A = drainage area,
mi²; P = mean annual precipitation, in.; and S =
main-channel slope, ft/mi. [9] For the 100-year
recurrence interval, the equation is:

Q_100 = 10^0.3879 A^0.7133 P^1.183 S^0.6660    (1)

The 100-year flood flow for Linnville Bayou
watershed calculated with this equation is
9,323 ft³/sec (264 m³/sec). This calculated flow
discharge from the USGS regression equation
was used as the basis for the subsequent
HEC-HMS rainfall-runoff modeling results.
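Equation (1) is straightforward to evaluate in code. In the sketch below, the drainage area is the 97.24 mi² cumulative value from Table 3, but the precipitation and slope inputs are placeholders (the study's actual values are not tabulated here), so the sketch illustrates the form of the equation rather than reproducing the reported 9,323 ft³/sec.

    # USGS 100-year peak-flow regression, equation (1). P and S below
    # are placeholder inputs, not the study's values.
    def q100(area_mi2, precip_in, slope_ft_per_mi):
        """100-year peak streamflow, ft^3/sec."""
        return (10 ** 0.3879 * area_mi2 ** 0.7133
                * precip_in ** 1.183 * slope_ft_per_mi ** 0.6660)

    print(q100(97.24, 50.0, 8.0))   # illustrative only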
HYDROLOGICAL PARAMETER ESTIMATES
Loss Models
IC Loss Model
One initial condition (initial loss) and one
parameter (the constant rate) need to be
determined for the IC loss model. The initial loss
will approach zero if the watershed is saturated;
otherwise, the initial loss will be greater than
zero to represent the maximum rainfall depth
before runoff occurs in a watershed. The constant
rate represents the infiltration speed of the soils
Figure 1. Sub-Basin Delineation for the Linnville Bayou Watershed
and is determined by the soil characteristics. The
parameters of the IC loss model applied for the
Linnville Bayou watershed are shown in Table 1.
GA Loss Model
The GA loss model is based on a conceptual
model of precipitation infiltration. In this model,
a wetting front is assumed to penetrate down
once infiltration begins. The soil is divided by
the wetting front into two layers, the upper
saturated soil with moisture content θ and the
lower unsaturated soil with moisture content
θi. The effective hydraulic conductivity is about
half of the saturated hydraulic conductivity, as a
rule of thumb. [10] The parameters of the GA loss
model applied for the Linnville Bayou watershed
are shown in Table 2.
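For reference, the Green-Ampt infiltration capacity follows f = K(1 + ψΔθ/F), where F is cumulative infiltration, ψ is the wetting-front suction head, and Δθ is the moisture deficit. The sketch below applies the sub-basin 1 parameters from Table 2 to assumed values of F, showing how capacity decays as the wetting front advances; it illustrates the mechanism and is not a model run.

    # Green-Ampt infiltration capacity, f = K * (1 + psi * dtheta / F).
    # Parameters are sub-basin 1 values from Table 2; F values assumed.
    K = 0.071        # hydraulic conductivity, in./hr
    psi = 12.0       # wetting-front suction head, in.
    dtheta = 0.386   # moisture deficit (fraction)

    for F in (0.1, 0.5, 1.0, 2.0):            # cumulative infiltration, in.
        f = K * (1.0 + psi * dtheta / F)
        print(f"F = {F:>4} in. -> capacity f = {f:.3f} in./hr")
    # Capacity falls from ~3.4 in./hr early in the storm toward K as
    # the wetting front advances.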
NRCS CN Loss Model
The NRCS CN loss model calculates the rainfall
runoff based on precipitation, land use/land
cover, and antecedent moisture. The initial
abstraction represents the precipitation depth
before precipitation excess can occur. The CN in
each sub-basin represents the combination of the
different land use/land cover and soil groups
in this sub-basin. The hydrologic characteristics
of soils within a watershed are a primary factor
influencing the runoff potential. The NRCS
classifies soils into four hydrologic groups (A
to D), based on runoff-producing potential. The
state soil dataset was downloaded using the
BASINS data downloading tool. Land use/land
cover characteristics influence the rainfall-runoff
relationship of a watershed due to interception
and retardance factors. In the NRCS methodology,
the land use/land cover characteristics are used,
in conjunction with the hydrologic soil groups,
to develop the runoff CN parameter relating
rainfall volume to runoff volume.
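The CN relation itself is the standard NRCS runoff equation, Q = (P − Ia)² / (P − Ia + S), with S = 1000/CN − 10. The sketch below uses the conventional default Ia = 0.2S; note that Table 4 lists the study's own initial abstraction values, which differ slightly from that default.

    # Standard NRCS curve number runoff relation (depths in inches).
    # Uses the conventional default Ia = 0.2*S, an assumption here.
    def cn_runoff(P_in, CN, ia_ratio=0.2):
        S = 1000.0 / CN - 10.0          # potential retention, in.
        Ia = ia_ratio * S               # initial abstraction, in.
        if P_in <= Ia:
            return 0.0
        return (P_in - Ia) ** 2 / (P_in - Ia + S)

    # 100-year, 24-hour storm from this study, CN 85 (sub-basin 1):
    print(cn_runoff(12.8, 85))   # ~10.9 in. of runoff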
The NRCS UHs and 24-hour design storms were
used to develop runoff hydrographs for each of
the eight sub-basins. [11] The US Department
of Agriculture NRCS (1986) presented CNs for
various hydrologic soil and land use/land cover
examples. For this paper, ArcGIS 9.2 was used to
map hydrologic soil and land use/land cover in
each sub-basin of the Linnville Bayou watershed.
The average CN for each sub-basin was calculated
based on the types of hydrologic soil and land
use/land cover. Stream channel lengths and
slopes were also calculated for each sub-basin
using the BASINS software. Parameters obtained
Table 1. IC Loss Model Parameters

Sub-Basin Number | Initial Loss (10% of Precipitation), in. | Initial Loss (15%), in. | Initial Loss (20%), in. | Constant Rate, in./hr
1 | 1.28 | 1.92 | 2.56 | 0.0350
2 | 1.28 | 1.92 | 2.56 | 0.0341
3 | 1.28 | 1.92 | 2.56 | 0.0366
4 | 1.28 | 1.92 | 2.56 | 0.0350
5 | 1.28 | 1.92 | 2.56 | 0.0350
6 | 1.28 | 1.92 | 2.56 | 0.0350
7 | 1.28 | 1.92 | 2.56 | 0.0350
8 | 1.28 | 1.92 | 2.56 | 0.0348

Table 2. GA Loss Model Parameters

Sub-Basin Number | Initial Loss, in. | Moisture Deficit, % | Suction Head, in. | Hydraulic Conductivity, in./hr
1 | 0 | 38.6 | 12.0 | 0.071
2 | 0 | 39.0 | 12.1 | 0.063
3 | 0 | 38.6 | 11.9 | 0.079
4 | 0 | 38.6 | 12.0 | 0.071
5 | 0 | 38.6 | 12.0 | 0.071
6 | 0 | 38.6 | 12.0 | 0.071
7 | 0 | 38.6 | 12.0 | 0.071
8 | 0 | 38.6 | 12.0 | 0.071
to model storm runoff from the sub-basins are
presented in Table 3. The calculated CNs for the
sub-basins of the Linnville Bayou watershed are
shown in Figure 2.
The HEC-HMS model uses standard UH
techniques to generate runoff hydrographs from
excess rainfall. For the Linnville Bayou analyses,
the empirical UH relationships developed by
NRCS were used. These same NRCS-developed
Figure 2. Runoff CNs of the Sub-Basins of the Linnville Bayou Watershed
Table 3. Linnville Bayou Watershed Hydrologic Parameters

Sub-Basin Number | Drainage Area (Sub-Basin), mi² | Drainage Area (Cumulative), mi² | Runoff CN | Stream Channel Length, ft | Average Slope, % | Lag Time, min.
1 | 18.81 | 18.81 | 85 | 7,260 | 0.60 | 707
2 | 15.75 | 34.56 | 83 | 4,980 | 1.01 | 467
3 | 15.14 | 49.70 | 75 | 10,130 | 0.93 | 855
4 | 7.26 | 56.96 | 81 | 10,340 | 0.57 | 410
5 | 22.02 | 78.98 | 84 | 7,290 | 0.95 | 445
6 | 4.92 | 83.90 | 78 | 9,120 | 0.91 | 322
7 | 2.50 | 86.40 | 86 | 6,340 | 1.58 | 141
8 | 10.84 | 97.24 | 84 | 12,340 | 0.65 | 401
UH relationships have been used extensively
by the US Army Corps of Engineers and other
public and private entities in the analysis of
watershed runoff characteristics. The parameters
of the NRCS CN loss model applied for the
Linnville Bayou watershed are shown in Table 4.
Transform Models
NRCS UH Transform Model
The primary UH parameter used in conjunction with the NRCS UH is the basin lag time t_L = 0.6 t_c, where t_c is the time of concentration of the catchment, namely the time for runoff to flow from the most hydraulically remote point of the sub-basin to its outlet. The lag time is a function of CN, watershed slope, and stream length. Lag time in hours is calculated based on watershed parameters as follows:

t_L = L^0.8 (S + 1)^0.7 / (1900 Y^0.5)    (2)

where L is the hydraulic length in feet, Y is the watershed slope in percent, and S is the potential maximum rainfall retention in inches [3], where

S = 1000/CN − 10    (3)
Calculated lag times for the Linnville Bayou sub-
basins are given in Table 5.
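For readers who want to trace the calculation, the sketch below implements Equations (2) and (3) directly in Python. The inputs shown are illustrative: Table 5 was computed from each sub-basin's hydraulic length, which may differ from the stream channel lengths in Table 3, so exact agreement with the tabulated lag times should not be expected.

```python
# Sketch of the NRCS lag time computation, Equations (2) and (3).

def potential_retention(cn):
    """Potential maximum rainfall retention S (in.), Equation (3)."""
    return 1000.0 / cn - 10.0

def nrcs_lag_hours(hydraulic_length_ft, slope_pct, cn):
    """Basin lag time t_L (hr), Equation (2)."""
    s = potential_retention(cn)
    return (hydraulic_length_ft ** 0.8) * (s + 1.0) ** 0.7 / (1900.0 * slope_pct ** 0.5)

# Illustrative inputs (sub-basin 1 channel length and slope from Table 3)
t_lag = nrcs_lag_hours(hydraulic_length_ft=7260.0, slope_pct=0.60, cn=85)
print(f"t_L = {t_lag:.2f} hr = {t_lag * 60:.0f} min")
```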
The standard lag time is the time from the centroid of precipitation mass to the hydrograph peak flow [1]:

t_p = C C_t (L L_c)^0.3    (4)
where C_t is the basin coefficient, L is the length of the main stream from the outlet to the divide, L_c is the length along the main stream from the outlet to a point nearest the watershed centroid, and C is a conversion constant (0.75 for the International System of Units [SI] and 1.00 for the foot-pound system). [12] The parameters of
the NRCS UH transform (standard lag times)
applied for the Linnville Bayou watershed are
shown in Table 6.
Table 4. NRCS CN Loss Model Parameters

Sub-Basin   Initial            CN   Impervious,
Number      Abstraction, in.        %
1           0.41               85   0
2           0.40               83   0
3           0.46               75   0
4           0.47               81   0
5           0.41               84   0
6           0.44               78   0
7           0.41               86   0
8           0.44               84   0

Table 5. Calculated Lag Times for Linnville Bayou Sub-Basins

Sub-Basin   Lag Time,
Number      min.
1           707
2           467
3           855
4           410
5           445
6           322
7           141
8           401

Table 6. NRCS UH Transform Parameters (Standard Lag Times)

Sub-Basin   Conversion    Basin             Stream        Stream         Standard Lag
Number      Constant, C   Coefficient, C_t  Length, L, m  Length, L_c, m Time, t_p, hr
1           0.75          4                 24,915        12,458         12.2
2           0.75          4                 18,847        13,193         11.1
3           0.75          4                 28,028        19,620         15.4
4           0.75          4                 10,343        6,206          11.0
5           0.75          4                 17,816        8,908          11.0
6           0.75          4                 9,116         7,293          11.1
7           0.75          4                 6,340         3,170          7.8
8           0.75          4                 12,335        8,635          12.8
Snyder UH Transform Model
The standard lag time in the Snyder UH transform model is a function of some measurable watershed characteristics, including L, the length of the main stream from the outlet to the divide, and L_c, the length along the main stream from the outlet to a point nearest the watershed centroid. Both L and L_c were obtained from the watershed delineation results for this paper. The parameters of the Snyder UH transform applied for the Linnville Bayou watershed are shown in Table 7.
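Equation (4) can be applied just as directly, as the short Python sketch below shows. It assumes the common convention that, with C = 0.75 (SI), the stream lengths L and L_c are expressed in kilometers; small differences from the tabulated lags may remain because the paper's length measurements and rounding are not fully documented here.

```python
# Sketch of the Snyder standard lag, Equation (4):
# t_p = C * C_t * (L * L_c)**0.3, with C = 0.75 for SI units and
# stream lengths in kilometers (an assumed convention).

def snyder_standard_lag(length_km, length_to_centroid_km, ct, c=0.75):
    """Standard lag t_p (hr) from stream geometry and basin coefficient."""
    return c * ct * (length_km * length_to_centroid_km) ** 0.3

# Sub-basin 7 geometry from Table 6 (6,340 m and 3,170 m), C_t = 4
t_p = snyder_standard_lag(length_km=6.34, length_to_centroid_km=3.17, ct=4.0)
print(f"t_p = {t_p:.1f} hr")   # roughly comparable to Table 6's 7.8 hr
```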
Clark UH Transform Model
In the Clark UH transform model, the storage
effects of the sub-basin are estimated by
assuming a linear reservoir at the sub-basin
outlet. The time of concentration can be estimated
through the lag time in the NRCS UH model.
The storage coefficient can be estimated based
on the time of concentration. Previous studies
have indicated that the storage coefficient divided by the time of concentration is constant. [1] The
parameters of the Clark UH transform applied
for the Linnville Bayou watershed are shown
in Table 8.
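Because every Clark parameter derives from the NRCS lag time, the whole estimation chain fits in a few lines, as sketched below. The ratio R/t_c of about 1.455 is inferred from Tables 5 and 8 of this paper; it is a watershed-specific constant, not a general recommendation.

```python
# Sketch of the Clark UH parameter estimation described in the text.

R_OVER_TC = 1.455   # storage coefficient / time of concentration,
                    # inferred from Table 8 (e.g., 28.57 / 19.64)

def clark_parameters(nrcs_lag_min):
    """Return (t_c, R) in hours from the NRCS lag time in minutes."""
    t_c = (nrcs_lag_min / 60.0) / 0.6    # since t_L = 0.6 * t_c
    return t_c, R_OVER_TC * t_c

t_c, storage = clark_parameters(707.0)   # sub-basin 1 lag time, Table 5
print(f"t_c = {t_c:.2f} hr, R = {storage:.2f} hr")   # ~19.64 hr, ~28.57 hr
```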
MODELING RESULTS AND DISCUSSION

The hydrographs of the nine combinations of precipitation loss models and transform models are shown in Figure 3. The predicted flow discharges of the various combinations of precipitation loss and transform models are shown in Figure 4.
Figure 3. Hydrographs of Nine Combinations of Precipitation Loss Models and Transform Models Applied for Linnville Bayou Watershed
[Figure 3: three panels (NRCS UH, Snyder UH, and Clark UH), each plotting flow rate (ft³/sec) versus time (hrs) for the GA, IC, and CN loss models]

Table 7. Snyder UH Transform Parameters

Sub-Basin   Calculated         Peaking
Number      Standard Lag, hr   Coefficient
1           12.2               0.47
2           11.1               0.47
3           15.4               0.47
4           11.0               0.47
5           11.0               0.47
6           11.1               0.47
7           7.8                0.47
8           12.8               0.47

Table 8. Clark UH Transform Parameters

Sub-Basin   Concentration   Storage
Number      Time, hr        Coefficient, hr
1           19.64           28.57
2           12.97           18.87
3           23.75           34.56
4           11.39           16.57
5           12.36           17.99
6           8.94            13.01
7           3.92            5.70
8           11.14           16.21
Compared with the USGS regression equation result, the combinations of the IC and CN loss models with the NRCS and Snyder UH transforms provide the best estimates. The other combinations also provide reasonable estimates.
The IC loss model is simple but useful. It assumes
that the rate of precipitation loss is constant
during a storm and uses the initial loss to account
for the interception and depression storage.
The precipitation absorbed by surface cover is accounted for by interception. The precipitation falling into depressions in the watershed topography is accounted for by depression storage. The initial loss ranges from 10% to 20% of the total precipitation. Accordingly, the flow discharge at the outlet ranges from 8,842 to 9,871 ft³/sec (250 to 279.5 m³/sec) when the NRCS UH model is used for the rainfall-runoff transform.
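A minimal sketch of how the IC loss model converts a hyetograph into precipitation excess is given below. The storm pulses are hypothetical; the initial loss and constant rate are the Table 1 values for sub-basin 1 with a 10% initial loss.

```python
# Minimal sketch of the initial-and-constant (IC) loss model: all rainfall
# is lost until the initial loss is satisfied, then losses continue at a
# constant rate. The hyetograph below is hypothetical.

def ic_excess(hyetograph_in, dt_hr, initial_loss_in, constant_rate_in_hr):
    """Return incremental precipitation excess (in.) per time step."""
    excess = []
    remaining_initial = initial_loss_in
    for depth in hyetograph_in:
        absorbed = min(depth, remaining_initial)
        remaining_initial -= absorbed
        depth -= absorbed
        # Once the initial loss is exhausted, infiltration proceeds at the
        # constant rate (capped by the rainfall actually available).
        loss = min(depth, constant_rate_in_hr * dt_hr)
        excess.append(depth - loss)
    return excess

storm = [0.2, 0.8, 1.5, 0.5, 0.1]            # in. per 1-hour step (hypothetical)
print(ic_excess(storm, 1.0, 1.28, 0.035))    # Table 1 values, sub-basin 1
```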
Conceptually, the GA loss model represents the infiltration process mechanism. However, one of its parameters, the wetting front soil suction head ψ, has a wide value range: from 0.38 in. to 9.98 in. for sand. As a result, the calculated flow discharges with these values range from 5,596 to 9,076 ft³/sec (158 to 257 m³/sec). Another parameter, the hydraulic conductivity K, varies along with the wetting front soil suction head. It is therefore advisable to specify these parameters carefully when applying this loss model.
The GA loss model accounts for surface
ponding by itself; therefore, the initial loss here
is additional canopy interception and, in this application, is simply taken as zero. For the GA loss model to be applicable, daily precipitation data must be disaggregated, which introduces difficulties. [5]
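The infiltration mechanism that the GA model represents can be sketched with a simple explicit time-stepping scheme, shown below. The soil parameters are the Table 2 values for sub-basin 1, but the rainfall pulse and the explicit update are illustrative assumptions, not the HEC-HMS implementation.

```python
# Explicit-step sketch of Green-Ampt infiltration: potential capacity
# f = K * (1 + psi * d_theta / F), where F is cumulative infiltration.

def green_ampt_excess(rain_rates_in_hr, dt_hr, k_in_hr, psi_in, d_theta):
    """Return incremental excess (in.) per step for given rain intensities."""
    cum_f = 1e-6    # cumulative infiltration (in.), small seed to avoid /0
    excess = []
    for rate in rain_rates_in_hr:
        capacity = k_in_hr * (1.0 + psi_in * d_theta / cum_f)
        infil = min(rate, capacity) * dt_hr   # cannot exceed rainfall supply
        cum_f += infil
        excess.append(max(0.0, rate * dt_hr - infil))
    return excess

storm = [0.2, 0.8, 1.5, 0.5, 0.1]   # in./hr over 1-hour steps (hypothetical)
# Table 2, sub-basin 1: K = 0.071 in./hr, psi = 12.0 in., deficit = 38.6%
print(green_ampt_excess(storm, 1.0, k_in_hr=0.071, psi_in=12.0, d_theta=0.386))
```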
A CN is assigned to a specific combination
of land use and soil groups. In this paper, the
Linnville Bayou watershed was delineated into
eight sub-basins. ArcGIS was used to generate
the land use and soil groups distribution for
each of these sub-basins. As a consequence, the
combined CN for each sub-basin was calculated
based on its values for each combination of land
use and soil groups and their distribution in
that sub-basin. The NRCS CN loss model took
advantage of the spatial analysis techniques
and the concept of CN, which represents the
combined effects of land use and soil groups in
a watershed.
The NRCS UH model was developed based on a
large number of small agricultural watersheds.
Figure 4. Predicted Peak Flow Rates for Various Transform/Loss Rate Method Combinations
[Figure 4: bar chart of peak flow rates, ft³/sec, by loss rate method and transform model]

Loss Rate Method   NRCS UH   Snyder UH   Clark UH
GA                 8,877     8,746       8,412
IC                 9,378     9,248       9,073
CN                 9,479     9,367       9,104
The Linnville Bayou watershed encompasses 97 mi² (251 km²), mostly covered by trees and grass. Thus, the NRCS UH model is applicable
grass. Thus, the NRCS UH model is applicable
for this watershed. This simple model has the lag time as its only parameter. As
a function of CN, watershed slope, and stream
length, the lag time incorporates all related
watershed characteristics (e.g., land use, soil
groups, and antecedent moisture).
The time of concentration in the Clark UH
model can be estimated from the lag time in the
NRCS UH model. The storage coefficient can be
estimated based on the time of concentration.
Thus, all parameters in the Clark UH model can
be estimated from the only parameter in the
NRCS UH model, the lag time. The Clark UH
model internally incorporates the advantages
of the NRCS UH model. However, the estimate
of the linear relationship between the storage
coefficient and the time of concentration
increases the difficulty of applying this model.
The basin coefficient C_t used in computing the standard lag time in the Snyder UH transform model ranges from 0.4 to 8.0 in the Gulf of Mexico area. [12] This wide range decreases the accuracy of the flow discharge estimate.
CONCLUSIONS

Various combinations of precipitation loss models and transform models have been formed and explored for predicting peak flow rate for the Linnville Bayou watershed. The following conclusions are drawn from this analysis:
1. Based on the predicted flood discharges, the
combination of the NRCS CN loss model and
the NRCS UH transform model is considered
the best option, since both models have the
simplest, least data-intensive parameters that
are easy to estimate. The CN and the major
parameter of each model, lag time, are widely
applied to represent the soil type, land use,
antecedent condition, and other measurable
watershed parameters.
2. The other combinations of precipitation
loss models and transform models can also
provide reasonable flood discharge estimates.
However, these combinations involve
more complex parameters, which present
difficulties for the estimates or introduce
uncertainties for the results. For instance,
the wetting front soil suction head in the GA
loss model has a wide value range. For the
Snyder UH transform model, it is difficult
to estimate the length along the main
stream from the outlet to a point nearest the
watershed centroid, L_c. Thus, the application of these models is subject to determining
of these models is subject to determining
these parameters.
3. The GA loss model may perform better
than the NRCS CN loss model, since the
former considers precipitation intensity and
duration. However, this advantage may be
diluted by the requirement to disaggregate
daily precipitation data for the application of
the GA loss model.
4. Development of geographic information system (GIS) techniques improves accuracy and efficiency for estimating the CN and thus
eventually for predicting flow discharge by
applying the NRCS CN loss model and the
NRCS UH transform model.
5. The IC loss model is simple to use, but its
initial loss estimate, ranging from 10% to 20%
of total precipitation, decreases the accuracy
of the flood discharge prediction.
TRADEMARKS

Esri and ArcGIS are registered trademarks of Esri in the United States, the European Community, and certain other jurisdictions.
REFERENCES
[1] A.D. Feldman (ed.), HEC-HMS Hydrologic
Modeling System, Technical Reference
Manual, Report No. CPD-74B, US Army Corps
of Engineers, Hydrologic Engineering Center,
Davis, CA, March 2000, http://www.hec.usace
.army.mil/software/hec-hms/documentation/
CPD-74B_2000Mar.pdf.
[2] W.H. Asquith and M.C. Roussel, An Initial-
Abstraction, Constant-Loss Model for
Unit Hydrograph Modeling for Applicable
Watersheds in Texas, Scientific Investigations
Report 2007-5243, U.S. Geological Survey, Austin,
TX, 2007, http://pubs.usgs.gov/sir/2007/5243/.
[3] L.W. Mays, Water Resources Engineering,
John Wiley & Sons, Inc., Hoboken, NJ, 2005,
see http://www.amazon.co.uk/Water-Resources
-Engineering-L-W-Mays/dp/0471373982.
[4] V.T. Chow, D.R. Maidment, and L.W. Mays,
Applied Hydrology, McGraw-Hill, New York,
NY, 1988, see http://www.amazon.com/
Applied-Hydrology-Civil-Engineering-Chow/
dp/0071001743/ref=sr_1_6?s=books&ie=UTF8&
qid=1294344304&sr=1-6.
The combination
of the NRCS
CN loss model
and the NRCS
UH transform model
is considered
the best option,
since both models
have the simplest,
least data-intensive
parameters
that are easy
to estimate.
Bechtel Technology Journal 106
[5] K.W. King, J.G. Arnold, and R.L. Bingner,
Comparison of Green-Ampt and Curve Number
Methods on Goodwin Creek Watershed Using
SWAT, Transactions of the American Society of
Agricultural and Biological Engineers, Vol. 42, No. 4,
pp. 919–926, July/August 1999, access via http://
asae.frymulti.com/abstract.asp?aid=13272&t=1.
[6] J.A. Van Mullem, Runoff and Peak Discharges
Using Green-Ampt Infiltration Model, Journal of
Hydraulic Engineering, Vol. 117, No. 3, pp. 354–370,
March 1991, access via http://cedb.asce.org/cgi/
WWWdisplay.cgi?69219.
[7] B. Yu, A Comparison of the Green-Ampt and a
Spatially Variable Infiltration Model for Natural
Storm Events, Transactions of the American Society
of Agricultural and Biological Engineers, Vol. 42,
No. 1, pp. 89–97, 1999, access via http://asae
.frymulti.com/abstract.asp?aid=13212&t=1.
[8] U.S. Environmental Protection Agency (USEPA)
Office of Water, Better Assessment Science
Integrating Point and Non-point Sources
(BASINS) Version 4.0 User's Manual, Report No.
EPA/823/C-07/001, Washington, DC, 2007, see
http://water.epa.gov/scitech/datait/models/
basins/index.cfm.
[9] W.H. Asquith and D.B. Thompson, Alternative
Regression Equations for Estimation of
Annual Peak-Streamflow Frequency for
Undeveloped Watersheds in Texas Using PRESS
Minimization, Scientific Investigations Report
2008-5084, U.S. Geological Survey, Austin,
TX, 2008, access via http://pubs.usgs.gov/
sir/2008/5084/.
[10] H. Bouwer, Groundwater Hydrology, McGraw-Hill,
New York, NY, 1978.
[11] W.A. Scharffenberg and M.A. Fleming, HEC-
HMS Hydrologic Modeling System, User's
Manual, Version 3.3, Report No. CPD-74A,
U.S. Army Corps of Engineers Hydrologic
Engineering Center, Davis, CA, September 2008,
http://www.hec.usace.army.mil/software/
hec-hms/documentation/HEC-HMS_Users
_Manual_3.3.pdf.
[12] P.B. Bedient and W.C. Huber, Hydrology and
Floodplain Analysis, 2nd edition, Addison-
Wesley Publishing Company, New York, NY,
June 1992, see http://www.amazon.com/
Hydrology-Floodplain-Analysis-Philip-Bedient/
dp/0201517116/ref.
BIOGRAPHY
Zhiyong Duan, PhD, a
hydraulics and hydrology
engineer in Bechtel's OG&C
Global Business Unit, has
extensive experience in envi-
ronmental and water resources
engineering. He has an
impressive record of delivering
key support for projects located
in the United States, Nigeria,
Thailand, Angola, and Australia. Zhiyong has diverse
experience in hydrological and hydraulic modeling,
hydraulic transient analysis, water quality analysis
and modeling, and water and wastewater treatment.
Zhiyong has authored many papers appearing in
peer-reviewed, top-rated journals and conference
publications, including Journal of Environmental
Engineering; Environmental Engineering Science; World
Environmental & Water Resources Congress; and
International Symposium on Fluid Control, Measurement
and Visualization. He is a committee member of the
Wastewater Engineering Technical Committee of
the American Society of Civil Engineers (ASCE).
Zhiyong received a PhD in Civil and Environmental
Engineering from Mississippi State University,
Starkville; an MS in Environmental Engineering
from Vanderbilt University, Nashville, Tennessee;
and a BS in Environmental Engineering from Tongji
University, Shanghai, China. He is a registered
Professional Engineer in Texas.
ENERGY MANAGEMENT SYSTEM FOR THE NEW JAMNAGAR REFINERY IN INDIA

Rajesh Narayan Athiyarath, [email protected]
Munir Mujawar, [email protected]
Vinayak Shanbhag, [email protected] (Reliance Industries)

Issue Date: June 2011

Abstract: Bechtel recently executed one of the world's largest refinery projects, the new Jamnagar refinery in India for Reliance Industries Limited. This major new refinery is located adjacent to the old refinery completed by Bechtel in 1999. The new refinery's captive 800 MW power generation and electrical distribution system operates in parallel with the old refinery's captive 450 MW system. Grid connections are at 132 kV. The new refinery's electrical network comprises a 220 kV switchyard, two 33 kV main receiving substations, and nearly 40 process substations. Power is distributed at 33 kV from the main substations to the process substations, which step it down to 11 kV, 6.6 kV, and 415 V for distribution to refinery loads.

To monitor and control this new large, complex electrical network, an ABB energy management system (EMS) was designed and installed. The specifications and functionalities of this EMS evolved out of a continuous process of Bechtel's detailed interaction with Reliance's project and operations teams. The resulting key design requirements were steady-state and dynamic performance commensurate with power system requirements, incorporation of the latest hardware and software technologies, redundancy, reliability of operation, seamless interface with the original refinery's EMS, and provision for expansion to match future network growth.

This paper discusses the development of the new refinery's EMS, including the challenges faced during its design, engineering, and execution and some of the innovative measures involved in resolving them.

Keywords: electrical distribution management system (EDMS), energy management system (EMS), Jamnagar, load management system (LMS), load shedding, substation automation

INTRODUCTION

The new Jamnagar refinery is among the
largest projects executed by Bechtel. Located
adjacent to Reliance Industries Limited's old refinery near Gujarat, India, the new facility increased the total refining capacity to over 1,200,000 barrels per stream day (bpsd) upon its full commissioning in 2010, making the combined refinery complex the world's largest.
The new refinery's power generation and distribution system is quite complex. The 800 MW captive power plant (CPP) and electrical network operate in parallel with the old refinery's
450 MW CPP and network. Grid connections are
at 132 kV. The new refinery's electrical
comprises a 220 kV switchyard, two 33 kV main
receiving substations, and nearly 40 process
substations. Power is distributed at 33 kV from
the main substations to the process substations,
which step it down to 11 kV, 6.6 kV, and 415 V for
distribution to refinery loads.
To monitor and control this new large, complex
electrical network, an ABB energy management
system (EMS) was designed and installed. The
main features of the new refinery's EMS are:
Electrical distribution management system
(EDMS) to acquire relevant data and to
control and monitor the electrical generation
and distribution system
Load management system (LMS) to carry out
load shedding across both refinery networks
Generation control (generator active and
reactive power control)
Tie-line control (inter-refinery tie-line active
and reactive power control)
Information storage and retrieval (IS&R)
system
Interfaces with other control and monitoring
systems, such as the switchyard automation
system (SAS) that controls and monitors the
220 kV/33 kV systems
Interface with third-party systems, e.g., a
large-screen display in the central control
room (CCR) and Web interface
The key EMS design requirements were
steady-state and dynamic performance
commensurate with power system requirements,
incorporation of the latest hardware and
software technologies, redundancy, reliability
of operation, seamless interface with the old
refinery's EMS, and provision for expansion to
match future network growth.
The new refinery's EMS is one of the largest
electrical network monitoring systems supplied
by ABB. This paper discusses its development,
including the challenges faced during its
design, engineering, and execution and
some of the innovative measures involved in
resolving them.
ABBREVIATIONS, ACRONYMS, AND TERMS
AVR automatic voltage regulator
bpsd barrels per stream day
CB circuit breaker
CCR central control room
CPP captive power plant
DCS distributed control system
DO digital output
DRS disturbance recording system
EDMS electrical distribution
management system
EMS energy management system
GIS gas-insulated switchgear
GPS global positioning system
GTG gas turbine generator
HMI human-machine interface
I/O input/output
ICT interconnecting transformer
IEC International Electrotechnical
Commission
IED intelligent electronic device
IS&R information storage and retrieval
LAN local area network
LMS load management system
LPG liquefied petroleum gas
LV low voltage
MIS management information system
MLS main load-shedding system
MRS main receiving station
MSA modular smart array
MV medium voltage
OPC industry standard that defines
methods for exchanging
real-time automation data
between PC-based clients using
Microsoft operating systems
OS operator station
PC personal computer
PDS program development and
testing system
RST refinery service transformer
RTU remote terminal unit
SAS switchyard automation system
SNTP simple network time protocol
SOE sequence of events
STG steam turbine generator
TPS thermal power station
UFLS under-frequency load-shedding
system
VIP virtual Internet protocol
OVERVIEW OF THE JAMNAGAR REFINERY

The old Reliance Industries Limited Jamnagar oil refinery and petrochemical complex processes 650,000 bpsd of crude oil and produces liquefied petroleum gas (LPG), naphtha, gasoline, kerosene, diesel, sulfur, coke, polypropylene, and numerous aromatic products, including paraxylene, orthoxylene, and
benzene, all primarily for domestic consumption.
At the time Bechtel designed and constructed
this project (1995 to 1999), it was the world's largest grassroots single-stream refinery. The old complex includes a CPP designed to produce 450 MW of power (backed up by a 132 kV grid supply) to meet the refinery's power demands.
The new export-oriented refinery almost
doubles the overall capacity to more than
1,200,000 bpsd; adds crude distillation,
associated secondary conversion facilities,
and an 800 MW CPP; and modifies the old
refinery to ensure the efficient operation of
both refineries.
The current Jamnagar complex is now the world's
largest refinery, surpassing the 940,000 bpsd
Paraguana refining complex.
JAMNAGAR REFINERY POWER GENERATION AND DISTRIBUTION

Figure 1 presents a simplified diagram of the power generation and distribution systems of
the new and old refineries.
New Refinery Power System
The CPP that is the new refinery's power
source consists of six 125 MW, 14.5 kV gas
turbine generators (GTGs), with provision for
three future GTGs. The GTGs are connected
to a 220 kV switchyard bus via dedicated
14.5 kV/231 kV, 161 MVA step-up transformers.
Eight 220 kV/34.5 kV, 174 MVA refinery service
transformers (RSTs) connected to the 220 kV
switchyard feed the refinery substations through
33 kV substations in two main receiving stations
(MRS 1 and MRS 2).
Two 11 kV, 25 MW steam turbine generators
(STGs) are connected to the 33 kV switchboards
Figure 1. Power Generation and Distribution Systems of the Old and New Refineries
[Figure 1: one-line diagram showing the new refinery's six GTGs and two STGs (800 MW total) feeding the 220 kV switchyard and the 33 kV GIS at MRS 1; the old refinery's 15 generators (450 MW total) and 132 kV switchyard feeding the 33 kV GIS at MRS 2; the 132 kV grid connection; and the two 220/132 kV, 107 MVA interconnecting transformers]
Bechtel Technology Journal 110
in MRS 1 via 11 kV/34.5 kV, 38 MVA step-up
transformers.
A pair of 220 kV/132 kV, 107 MVA auto-
transformers function as the interconnecting
transformers (ICTs) between the new and the old
electrical systems.
The available throughput from the expansion refinery's CPP is approximately 700 MW.
Old Refinery Power System
The old refinery's CPP consists of nine 30 MW GTGs and six 30 MW STGs that feed five 33 kV switchboards, from which point power is further distributed to the old refinery's substations. The installed base
is approximately 450 MW, and available
throughput is approximately 400 MW.
EVOLUTION OF THE EMS AS A CENTRAL MONITORING AND CONTROL SYSTEM

Energy management system is a broad term covering a wide array of functionalities encompassing electrical system control and monitoring, key parameter measurement and recording, automatic and manual report generation, contingency evaluation, emergency action execution, and more, under one umbrella. An EMS replaces a
host of legacy systems used to monitor, control,
troubleshoot, and record electrical network
parameters and components.
Legacy systems have typically comprised
annunciation panels with indication lamps and
horns, hardwired alarms/controls, transducers,
auxiliary relays, and log sheets. Plant electrical
networks were usually controlled and monitored
by systems confined to the associated substation.
Users had to necessarily prioritize the sections
of the network that required remote control and
monitoring. Other main drawbacks associated
with legacy systems include lengthy downtimes,
maintenance and troubleshooting difficulties,
and the operator-centric nature of the systems
with the unavoidable possibility of human error.
Several factors have converged to make the
modern EMS possible:
The rapid advancement in industrial
electronics
The invention of the personal computer
(PC)
A significant increase in performance
A dramatic drop in component prices
The earliest EMSs consisted of mainframe
computers that were costly, bulky, and hard
to program. External devices were initially
hardwired back to the computer. Then
a distributed format evolved that used
multiplexed signals over a common wire or
over the electrical distribution system (power
line carrier). The latest EMSs use intelligent
electronic devices (IEDs), which are relays
capable of communicating a wide range of data
over a dedicated line or network.
The evolution and widespread use of IEDs over
the last two decades have made it easier to
implement a comprehensive control and
monitoring system. IEDs can also monitor the
status of the associated electrical system. The
combination of digital (status monitoring and
control) and analog (real-time) measurements
gives the IED a snapshot of the state of the
associated part of the electrical network at
any given point in time. There has also been
considerable enhancement in the communication
interfaces available in IEDs that allows measured/
monitored values to be transferred to a remote
location within an acceptable duration, thus
allowing remote, near-real-time electrical system
monitoring. Hence, it is possible to provide
a central control and monitoring system by
building upon the capabilities of these IEDs.
Along with IEDs, other developments have
contributed to the efficiency and effectiveness
of current EMS technology. Software evolution
and communication platform standardization
ensure that remote/field IEDs can be
programmed, multifunctional displays created,
and software functions assigned quickly and
efficiently from the CCR, often with little more
than drag-and-drop effort. The evolution of
industrial Ethernet and wireless technologies
has made it possible to connect to automation
systems from remote locations and transmit
large amounts of data at high speeds. And
management information systems (MISs) are
now also built into most EMSs.
EMS REQUIREMENTS FOR THE NEW REFINERY
The switchyard, main substations, and
process substations of the new refinery are
geographically separated by distances varying
from 1 to 3 km. Hence, a central EMS was
needed to deliver a consistent, unified system of
operation, monitoring, and reporting.
June 2011 Volume 3, Number 1 111
The key advantages offered by any EMS are:
Centralized monitoring and control
Prompt fault reporting, easy trouble-
shooting, and faster decision making
Flexible configuration
Analysis on a common time domain
Interoperability among different devices
Distributed intelligence
It was also envisaged that the new refinery's
EMS would be designed to accomplish the
following specialized functions:
Interface with the earlier-generation EMS
of the old refinery to acquire selected
network information
Carry out selected control and contingency
actions in the old refinery's electrical
network when both networks are operating
in parallel
Interface with third-party control and
monitoring systems
Functionalities Required in the New Refinery's EMS
The new refinery's EMS (see Figure 2) was
envisaged as being the central point for
controlling and monitoring its electrical network.
To meet this objective, the EMS needed to
provide the following main functionalities:
Control and monitoring of the new refinery's medium voltage (MV) and low voltage (LV) switchgear
Control and monitoring of the new refinery's 220 kV switchyard and two 33 kV substations (which have gas-insulated switchgear [GIS]) via an SAS
Main and backup load shedding for the new refinery's electrical network
Unified load shedding when the electrical
networks of both refineries operate in
parallel
Tie-line control (active/reactive power
control over the inter-refinery tie lines)
The new refinery's EMS was also required to
possess the following features:
Interface with other refinery control systems
Interface with other monitoring systems
(video wall, corporate local area network
[LAN], and disturbance recording system
[DRS])
Key Design and Performance Aspects
Established for the New Refinery's EMS
The technical specification document for the
EMS evolved out of a continuous process of
interaction among the project engineering teams
(from Bechtel and Reliance) and the end user (new
refinery operations and maintenance teams).
Figure 2. New EMS Setup in Jamnagar CCR
Bechtel Technology Journal 112
Rather than build on the nearly decade-old
design and operation of the old refinery's EMS,
it was decided to start from the very basics in
preparing the technical specification document
for the new refinerys EMS. The following key
requisites were reflected:
EMS functionalities that incorporate valid
end-user requirements, which include the
functionalities available in the old EMS,
additional functionalities to overcome the
limitations of the old EMS, and specific
functionalities necessary to oversee
and control the extensive inter-refinery
electrical network
Use of the latest applicable technology and
hardware to ensure that the EMS is suitable
for the anticipated expansion of the electrical
system over the next 10 years without major
hardware or software modifications
Incorporation of lessons learned from the
old EMS in terms of equipment selection
and application engineering
The whole process of formulating, reviewing,
and finalizing the EMS technical specification
document took approximately 3 months. The
following key design points were emphasized
in this document and during the bid evaluation
process:
An open system: Open industry-standard communication protocols, programming languages, and graphics design systems were to be used insofar as possible. The operator workstations were to be based on the Microsoft Windows XP operating system and enable operators to interact with the system easily. This would ease the process of further EMS expansion, modification, and enhancement.
System redundancy: Redundancy was
specified at the various levels of the EMS (i.e.,
server redundancy, processor redundancy,
and communication network and associated
element redundancy) so that failure in any
part of the EMS would not affect overall
operation. All key network elements (server
and communication links) were to be
monitored for physical condition. The EMS
also had to incorporate data consistency
check and data validation packages to
detect failure in any part. Furthermore, the
EMS was to be built with hot-swappable
components insofar as possible to achieve
the highest system availability and lowest
downtime during repairs.
Separation of critical functions and distributed intelligence: Although the EMS
was to function as one single unit, it was
to be internally divided into independent
subsystems such as control, monitoring, load
shedding, IS&R, control room, and third-
party interface. With this configuration,
overload or failure of any one subsystem
would have minimal or no effect on the
operation of the other critical functionalities.
Enhanced human-machine interface (HMI) and help tools: HMI graphics were to use the "object" and "aspect" concept, where each piece of equipment (e.g., circuit breaker, generator, transformer) is identified as an object and each object has various aspects (e.g., control faceplates, alarm lists, trends).
System reliability and availability: To validate that the EMS was proven
and reliable, the vendor was to provide
calculations demonstrating that it was
designed to have a minimum availability
of 99.99%. The EMS was also to be equipped
with software to continuously monitor
server and communication link operation
and report any hardware/software errors.
Suitability for further expansion: A number of key features were specified to
ensure that the EMS could be augmented
to cope with the new electrical network:
the latest generation of controllers and
processors; scalable server, database, and
communications link architecture; sufficient
spare server/workstation memory and
communication link capacity; and an
adequate number of installed spare input/
output (I/O) ports. It was also specified
that the EMS should allow processors to
be replaced or upgraded to increase
computational power and expand the system
without requiring system or application
software changes.
Additionally, the following key performance
parameters were also specified:
Under transient conditions, it may be
necessary to shed loads to match operating
load to available power generation. Based
on the results of system studies, the load-
shedding process had to be completed within
80 to 120 ms. As a backup, an under-frequency
load-shedding system (UFLS) was also to be
provided that could complete load shedding
within 200 ms.
June 2011 Volume 3, Number 1 113
Limits were set on the controller and remote
terminal unit (RTU) loadings under steady-
state and abnormal (transient) conditions
to ensure that the EMS would continue
to function normally under all electrical
system operating conditions.
Performance limits were set for the
data update times at outstation nodes,
display update times at the CCR, command
execution times, automatic changeover times
between controllers/processors/servers
upon failure, etc.
TECHNOLOGY SELECTION FOR THE NEW EMS

Being a critical system aimed at avoiding power blackouts and optimizing electrical network operation, the new refinery's EMS was subject to a technology selection process carried out based on the dominant functional requirements vis-à-vis the latest industrial
practices/trends. During this process, the
EMS solutions offered by the four primary
EMS bidders were evaluated against the
following key criteria:
What EMS functions were key to the
proper functioning of the overall refinery?
What errors/failures within the EMS would
render any given EMS function unavailable?
What components/devices within the
EMS were required to support each
EMS function?
Based on the design and performance aspects
described in the previous section of this paper,
the following attributes were used to evaluate
the various EMS offers:
System architecture
Proven hardware and application
technology
Structurally developed and thoroughly
tested modular software
Reliability and availability
Ease of maintenance
Scalability for future expansion
Availability of integrated supervision
and diagnostic functions for modules and
systems
Competency of the supplier's onshore resources
to support the EMS
Compatibility with third-party systems
Compliance with codes and standards
Analysis of failures reported on similar
projects and the solutions deployed by the
supplier to overcome them
After a rigorous evaluation process, ABB
was selected as the supplier for the new
refinery's EMS. ABB's EMS was based on its advanced System 800xA technology and was one of the largest power management systems supplied by ABB in the world. Furthermore, ABB had also supplied the old refinery's EMS and was involved in the various
expansions of that refinery. Hence, selecting the
ABB EMS was also expected to ease the process
of interfacing the two systems.
DESIGN AND CONFIGURATION OF THE NEW REFINERY'S EMS

The new refinery's EMS is configured to provide data acquisition, control, and monitoring for the facility's electrical network
and to facilitate load shedding for both the new
and the old electrical networks (see Figure 3).
The EDMS caters to data acquisition, selected
circuit breaker control, and electrical generation
and distribution system monitoring. The LMS
Figure 3. Overall Configuration of New Refinery's EMS
[Figure 3: the EDMS and LMS connect through network switches to operator workstations, engineering stations, the IS&R system, and active/reactive power control, with interfaces to the old refinery EMS, other plant control systems (SAS and DCS), and monitoring systems (large-screen display, corporate LAN, and DRS)]
Bechtel Technology Journal 114
handles load-shedding functionality, generation
control, and tie-line control.
The EMS includes approximately 50,000 I/O
points (hardwired and soft).
Topology
The EMS consists of a group of CCR servers
connected to the EDMS and LMS nodes in the
new refinery's substations. Figure 4 shows the
CCR equipment arrangement.
The CCR servers include:
Domain servers
Aspect servers for object presentation
Connectivity servers that interface with
ABB's AC 800M controllers in each substation for further interface with hardwired I/O and intelligent meters
Connectivity servers that interface with ABB's COM 500 RTUs in each substation for further interface with the Siemens SIPROTEC IEDs used in the new refinery's EMS
IS&R servers with modular smart array
(MSA) units
The CCR servers are redundantly configured
to ensure continuous availability and reliability
under all conditions.
Each pair of connectivity servers (in redundant
configuration) can have a logical connection to
20 nodes. Since 36 AC 800M controller nodes
and 22 COM 500 RTU nodes are distributed
among the various substations, three pairs of
connectivity servers were provided to interface
with the controllers and two pairs were provided
to interface with the RTUs. This ensures that
sufficient spares remain available for the future
electrical network expansion.
In designing the HMI graphics, ABB used its Aspect Objects technology, wherein each piece
of equipment (e.g., circuit breaker, generator,
transformer) is identified as an object and each
object has various aspects (e.g., control faceplates,
alarm lists, trends). Aspects are picked from the
aspect servers, while the live data from these
aspects is picked from the connectivity servers.
Alarms and events are generated at the operator
stations (OSs) to provide alerts in case an object
is in an abnormal state.
A master clock in the CCR obtains global
positioning system (GPS) time references and
Figure 4. EMS Configuration in the CCR
[Figure 4: the CCR server/rack room (redundant connectivity servers for the AC 800M and RTU networks, aspect and application servers, information management server and Web server with smart array unit, PDS engineering stations, network management station, GPS clock, disturbance recorder, and firewall with OPC interface to third-party systems and corporate LAN access) and the operating area (EDMS operator stations, LMS operator stations for load shedding and generator control, supervisory OS, data logger, and printers), plus remote operator stations at the switchyard control building and the old refinery control room]
periodically synchronizes the clocks in the
various controller and RTU monitoring modules.
This provides a common time baseline across
the electrical network for analysis of events and
alarms after any disturbance.
The CCR also houses the operator monitoring
and control workstations, as well as engineering
stations for modifying the EMS applications,
incorporating new substations into the EMS
network, etc. All workstations that interface with
the EMS are connected with the EMS servers via
client/server technology on redundant networks.
Web servers and firewalls allow selected EMS
parameters, such as log files and reports, to be
viewed on the corporate office LAN.
A data logger collects and stores all events from
the EMS.
A standalone program development and testing
system (PDS) in the CCR server rack room is
used to develop, modify, test, and prepare future
programs offline before incorporating them into
the EMS.
Redundant Ethernet switches in the CCR provide
connectivity between the various components of
the EMS. A network monitoring station monitors
the complete Ethernet network and associated
elements of the EMS from the CCR. All diagnostic
features configured for communication interface
can be viewed from the CCR.
Four main networks within the EMS interconnect
the various EMS devices:
Control network for AC 800M controllers: This network interconnects all EDMS and
LMS controllers and is connected to AC 800M
connectivity servers. The network facilitates
the transfer of status data, control data, and
measurement data (current, voltage, MW,
and MVAr) for the EDMS.
Control network for COM 500 RTUs: This
network interconnects the EDMS RTUs
located at different substations and identified
for interfacing with the individual substation
IEDs. The network is connected to COM 500
RTU connectivity servers and facilitates the
transfer of status and measurement data
from the IEDs for the EDMS.
Refinery HMI network: This network
interconnects, via EMS network switches,
all operator workstations, servers, and
printers identified for the function of
providing operator and engineer interaction
with the EMS.
VIP network: This network communication
link is used to transfer time-critical
data related to load shedding from one
controller to another via a dedicated
network. The network is connected to
the AC 800M controllers to enable
communication between the selected
load-shedding controllers.
The EMS continuously monitors the operation
of the EDMS/LMS nodes and its own communi-
cation links and reports any hardware/software
errors to the EMS servers. These system events
can be viewed from the OSs. The EMS also checks
for errors/failures in processors, communication
links, converters, and I/Os. The EMS facilitates
failure isolation and correction and includes
features that promote rapid fault isolation and
online component replacement.
Electrical Distribution Management System
The EDMS nodes collect substation data from
IEDs (via serial link) and hardwired I/Os and
perform control commands issued from the
CCR OSs. EDMS nodes in main substations
also facilitate load-shedding functionality by
transferring relevant data to the load-shedding
controller.
Each EDMS node comprises two subsystems:
COM 500 RTU subsystem: Using International Electrotechnical Commission (IEC)-103 protocol, this subsystem provides
a serial communication interface with the
IEDs to receive selected data from them
and send time synchronization messages
to them. Sixteen loops are connected to a
single subsystem to achieve the necessary
performance specifications.
AC 800M controller subsystem: This subsystem interfaces with hardwired I/Os to
collect digital inputs (e.g., circuit breaker
status, circuit breaker service status,
protection alarms) and issue hardwired
control commands to circuit breakers based
on commands from the operator in the CCR.
Since some of the process substations house a
limited number of I/Os, these are provided with
remote I/Os with process field bus connectivity
over optical fiber cables to the AC 800M
controller in the upstream main substation.
Bechtel Technology Journal 116
The arrangement of EDMS nodes in typical process substations is shown in Figure 5.

Figure 5. Arrangement of EDMS Nodes in Typical Process Substations
[Figure 5: main and satellite substation EDMS nodes (COM 500 RTUs and AC 800M controllers) connected over single-mode fiber optic cable, through fiber optic/RS232, RS485/RS232, and Profibus converters, to intelligent relays and meters, with hardwired digital inputs and outputs to the 11 kV, 6.6 kV, and 415 V switchgear]
The EDMS nodes also carry out sequence-of-
events (SOE) reporting. In addition to the normal
status reporting of hardwired inputs, any status
input point may be assigned to SOE reporting.
The EDMS nodes detect changes in the state
of SOE points and record the date and time of
change with a resolution of 0.4 ms. The time
tagging of all SOE inputs within a substation
is synchronized to ensure that SOE inputs
connected to different control cabinets maintain
the same time resolution and accuracy.
The EDMS node located in the 220 kV switchyard
control room interfaces and communicates with
the switchyard's SAS via IEC-104 protocol. The SAS connects with the switchyard's IEDs via
IEC-61850 protocol. The EDMS communicates
with the switchyard SAS to obtain switchyard
bay monitoring and control data.
Displays in the EMS give the operator a total
overview of the new refinery's electrical
network. Network status, measurements, and
alarms are available as part of the display,
which includes the following main categories:
Single-line diagrams for overview and
control
Trend displays to monitor measured value
against history
Alarm displays
Event displays
Help displays
Report displays
The displays are automatically updated with the
latest tag data approximately every second.
Main Load-Shedding System
Overview
Load shedding becomes imperative when
electrical load demand exceeds available power
after power source loss or network disintegration
(i.e., formation of one or more islands within the
electrical network when network circuit breakers
open). The load-shedding system has to ensure
the availability of electrical power to all essential
and most critical refinery loads. This is achieved
by switching off nonessential loads when there
is insufficient power in the network or parts of it.
When the new refinery's main load-shedding
system (MLS) is triggered by the operation of a
critical input that results in power source loss
or network disintegration, the system calculates
network power balances by subtracting
electrical loads from available power. If the
power balance calculation
is negative, there is a power
deficit. The power deficit is
compared to a priority load
table, low priority loads are
identified, and load-shedding
commands are issued to them.
The MLS logic incorporated
into the EMS is shown in
Figure 6.
Operator Input
Each load feeder has been
prioritized on a scale from
1 to 199, with 1 being lowest
priority (first to trip) and
199 being highest priority
(last to trip). A graphic
display allows operators
to enter priorities for each
switchboard; Figure 7 shows
a typical example.
The load-shedding system
automatically groups loads
having the same priority.
Figure 6. Main Load-Shedding System Logic
[Figure 6: operator inputs (load priority definitions and inhibits), load data and generation data (active power measurements and circuit breaker statuses), and network breaker status feed the load-shedding module, which builds busbar-wise and contingency priority load tables, determines the network configuration, calculates the power balance and the load to be shed (with spinning reserve considered), and issues trip commands through the DO module]

Figure 7. Setting Load-Shedding Priorities
[Figure 7: operator display with a priority-entry button for each load feeder on a switchboard]
Operators are able to inhibit any feeder/group
from main load shedding if necessary for a given
operating scenario.
Processing
The main load-shedding controller builds a
priority load table for each load busbar. A load
busbar is any part of the electrical network that
has generation and loads connected. The table
also includes the total active power available
from the generators connected to a particular
load busbar and the accumulated value of the
loads pertaining to the same load busbar.
The network determination module determines
the connectivity of every load busbar based on
the status of critical breakers. The result is used
to integrate the load busbar priority load tables
into contingency priority load tables.
The values in the contingency tables are used
in the following formula to calculate the power
balances for all contingencies:
P_to-be-shed or Power Balance = Total Available Power − Total Load
where total available power equals the summa-
tion of the actual generator output values
and the spinning reserve connected in the
existing contingency.
To add a margin of safety and maintain electrical
network stability at all times, when the power
balance calculation is negative, the calculated
power deficit is multiplied by an engineer-
settable safety factor before it is compared to the
priority load table.
To guard against measuring circuit errors, the
megawatt values used for the power balance
calculation are obtained from two independent
sources, and a validation check is carried out on
the measured values before they are used in the
power balance calculation. The power balance
calculation is performed using 2-second-old data
to prevent the load-shedding system from using
analog data obtained while the electrical network
is in a transient condition immediately after the
occurrence of a critical event.
Output
The amount of load to be shed is calculated by
comparing power deficit to the priority load
table. The result of the comparison is used to
generate load-shedding commands. For faster
response time, these commands are distributed
via dedicated virtual links (the virtual Internet
protocol [VIP] network) to the various nodes
that house the digital output (DO) modules that
execute the commands and trip the load feeders.
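The power balance and priority-table comparison described above can be illustrated with a much-simplified sketch in Python. The feeder names, loads, and 1.05 safety factor below are hypothetical; the actual MLS applies an engineer-settable safety factor, validates measurements from two independent sources, and works from contingency tables omitted here.

```python
# Simplified sketch of priority-based load shedding after a power deficit.

def select_loads_to_shed(available_mw, priority_table, safety_factor=1.05):
    """priority_table: (priority, feeder, load_mw); priority 1 trips first."""
    total_load = sum(load for _, _, load in priority_table)
    deficit = total_load - available_mw
    if deficit <= 0:
        return []                       # no deficit, nothing to shed
    target = deficit * safety_factor    # margin to keep the network stable
    shed, shed_mw = [], 0.0
    for _, feeder, load in sorted(priority_table):   # lowest priority first
        if shed_mw >= target:
            break
        shed.append(feeder)
        shed_mw += load
    return shed

feeders = [(10, "Cooling tower fans", 12.0), (1, "Nonessential lighting", 4.0),
           (5, "Spare compressor", 8.0), (150, "Crude unit", 60.0)]
print(select_loads_to_shed(available_mw=70.0, priority_table=feeders))
```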
Under-Frequency Load-Shedding System
Overview
The new refinery electrical system incorporates a
UFLS as a backup to the MLS. When the MLS is
not in service or fails to operate, the UFLS sheds
loads based on drops in system frequency, which
indirectly reflect network power deficits.
The UFLS is active only when the new refinery's electrical network is working in standalone mode, i.e., not tied into the old refinery's
electrical network. When the tie lines between
the two refineries are connected, the UFLS is
automatically deactivated.
The under-frequency relays installed on the two
220 kV bus sections monitor under-frequency
condition and rate of frequency drop (df/dt).
Whenever these two parameters fall below their
set limits, the corresponding under-frequency
relay stage operates. This triggers the operation
of the UFLS, which sheds loads based on
comparing the amount of load identified for
shedding against individual under-frequency
relay stages and network conditions. The UFLS
logic incorporated into the EMS is shown in
Figure 8.
Processing
Based on validated load data, respective load
breaker status, and the positions of critical
breakers, cumulative load tables are generated
for the various contingencies that could arise
for the equipment connected to each 220 kV
bus section.
Various under-frequency relay stages are
programmed with the under-frequency, df/dt,
and time-delay settings. The load to be shed
upon operation of any under-frequency relay
stage is predetermined based on the results of
transient studies conducted for various network
contingencies that could occur.
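A minimal sketch of the stage-operation check follows. The frequency and df/dt settings and shed amounts shown are hypothetical; the actual settings were fixed by the transient studies mentioned above.

```python
# Sketch of under-frequency relay stage logic: a stage operates when the
# frequency is below its setting AND the frequency is falling faster than
# its df/dt setting. All numbers below are hypothetical.

def stage_operated(freq_hz, dfdt_hz_s, f_set_hz, dfdt_set_hz_s):
    """True if this under-frequency stage should trigger load shedding."""
    return freq_hz < f_set_hz and dfdt_hz_s <= -abs(dfdt_set_hz_s)

# (frequency setting Hz, df/dt setting Hz/s, MW to shed) per stage
stages = [(49.5, 0.3, 50.0), (49.0, 0.5, 120.0), (48.5, 0.8, 200.0)]

freq, dfdt = 48.9, -0.6    # measured values on a 50 Hz network
mw_to_shed = max((mw for f, r, mw in stages if stage_operated(freq, dfdt, f, r)),
                 default=0.0)
print(f"Shed {mw_to_shed} MW")   # -> 120.0 MW (stages 1 and 2 operated)
```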
Figure 8. Under-Frequency Load-Shedding System Logic
[Figure 8: under-frequency relay stage operation, network breaker status (open/close) from the MLS, load data, and operator priority settings feed the UFLS module (under-frequency detection, network determination, load data collection, and priority load table), which generates load-shedding commands executed by the DO module as trip commands]
Upon operation of any under-frequency relay
stage, the load feeders to be tripped are identified
by comparing the amount of load to be shed
against the corresponding priority load tables.
Output
The amount of load to be shed is calculated by
comparing power deficit to the priority load
table. The result of the comparison is used to
generate load-shedding commands. For faster
response time, these commands are distributed
via dedicated virtual links (the VIP network)
to the various nodes that house the DO
modules that execute the commands and trip the
load feeders.
Unified Load-Shedding Scheme
When both refinery electrical networks are
operating in parallel, it is vital to have an
integrated approach to identifying the existence
of power deficit conditions across them. It is also
important to maximize load-shedding efficiency
during parallel operation by shedding noncritical
loads across both complexes during a power
deficit condition.
Hence, a unified load-shedding scheme was
developed and is enabled whenever the inter-
refinery tie lines are closed.
This unified scheme enables
power deficits to be calculated
for any source loss throughout
both networks. When a
power deficit condition is
identified, noncritical loads
are shed in either network
based on predefined priorities
across both.
During this scenario, the load-
shedding system specific to
each network is barred from
shedding loads. However, if the
unified network disintegrates
(i.e., if the inter-refinery tie-
line circuit breakers open
or a section of a network
operates as an island), the
load-shedding requirements
for the power deficits in
the individual networks or
islands are handled by the
individual refinery load-
shedding systems based on
individual load priorities.
The unified load-shedding scheme includes the following main functions:
• Determining tie-line connectivity
• Calculating unified load-shedding priorities
• Generating unified dynamic load-shedding tables
• Generating unified load-shedding commands
The operating principle of the unified load-
shedding scheme (the power balance calculation)
is similar to that of the MLS. Upon the occurrence
of a contingency (loss of generation), the unified
load-shedding scheme calculates the power
balance in the unified network by subtracting
the electrical loads from the available electrical
power. If the calculation results call for load
shedding, the unified load-shedding scheme
calculates the load-shedding priority, which is
then compared to the unified priority load table
to generate signals to trip the noncritical loads.
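A schematic view of this power balance step, with assumed names and values (the actual calculation runs inside the EMS and also accounts for breaker status and validated load data), might look like:

    # Sketch only: source and load figures are invented for illustration.
    def unified_deficit(available_mw, load_mw):
        """Deficit (MW) to be shed across the unified network; zero if none."""
        return max(0.0, sum(load_mw) - sum(available_mw))

    deficit = unified_deficit(available_mw=[120.0, 95.0],  # new + old refinery sources
                              load_mw=[130.0, 100.0])      # new + old refinery loads
    # deficit = 15.0 MW, which is then compared against the unified priority
    # load table to generate trip signals for noncritical loads in either network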
Figure 9 shows a sample display template for
the unified load-shedding scheme indicating
the power outputs from the power sources of
both refineries.
Figure 9. Sample Display Template for Unified Load-Shedding Scheme
Generator Control (P-Q Control)
P-Q control functionality within the LMS
provides GTG output control. This functionality
is provided by two control modules for
each GTG:
• Active power control module
• Reactive power control module
These two modules operate independently of each other. The active power control module interfaces with the machine's local governor controller. The reactive power control module interfaces with the machine's local automatic voltage regulator (AVR).
Whether the LMS controls the machines depends
on which control mode the operator selects. The
machines can be controlled either locally (i.e.,
from the governor/AVR/generator control panel)
or remotely (via the LMS).
When the point of control is local, no corrective
actions are possible from the LMS P-Q controller.
The following conditions must be fulfilled before
generator control can be transferred to the LMS:
• The machine must be in speed-droop mode for governor control.
• The machine must be in voltage-droop mode for AVR control.
The P-Q control module governor/AVR set points
can be adjusted from an OS by means of either a
raise/lower command or a fixed load set point,
as determined by the operator. Figure 10 shows
the LMS display for setting the P-Q control set
points. The set points are displayed over the
generator capability curve to give the operator
a visually striking illustration of the operational
margins available at any given set point.
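The transfer conditions and set-point commands can be pictured with a short sketch (attribute names and the step size are assumptions; the real P-Q controller is part of the vendor's LMS):

    # Illustrative stand-in for the LMS P-Q control checks and commands.
    def can_transfer_to_lms(gtg):
        return (gtg["governor_mode"] == "speed_droop" and
                gtg["avr_mode"] == "voltage_droop")

    def adjust_setpoint(current, command, step=0.5, fixed=None):
        """Raise/lower by one step, or jump to an operator-entered fixed value."""
        if fixed is not None:
            return fixed
        return current + step if command == "raise" else current - step

    gtg = {"governor_mode": "speed_droop", "avr_mode": "voltage_droop"}
    if can_transfer_to_lms(gtg):
        p_set = adjust_setpoint(42.0, "raise")   # active power set point, MW
        q_set = adjust_setpoint(10.0, "lower")   # reactive power set point, MVAr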
Figure 10. EMS Display for P-Q Control

Tie-Line Control
When the inter-refinery tie-line circuit breakers are closed, the tie-line control scheme regulates to a preset level the amount of active-reactive power flow between the two electrical networks.

The new refinery's generators have a higher rating than the old refinery's, and the new refinery's electrical network currently has excess power under normal operating conditions. Hence, tie-line power flow is usually from the new network
June 2011 Volume 3, Number 1 121
to the old one. The tie-line control scheme controls the active-reactive power generation of the new refinery's GTGs to maintain the active-reactive power flow between the two electrical networks at the level preset by the operator.
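Conceptually, the scheme behaves like a correction loop on the tie-line flow; the proportional sketch below is illustrative only (the project-specific scheme and its gains are not published here):

    # Sketch: translate tie-line flow error into a GTG set-point change.
    def tieline_correction(p_measured, p_preset, gain=0.2):
        """MW set-point change to apply to the controlling GTGs this cycle."""
        return gain * (p_preset - p_measured)

    # Operator presets 25 MW of export; 31 MW is measured, so generation
    # set points are lowered by about 1.2 MW in this control cycle.
    delta = tieline_correction(p_measured=31.0, p_preset=25.0)   # -> -1.2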
Information Storage and Retrieval System
The IS&R system provides centralized
data collection, management, analysis, and
presentation services for the EDMS and LMS.
Because information is the key asset for these
two systems, the IS&R system is designed to
transform raw data into meaningful outputs,
generate reports, and keep the data secure for
future access.
The IS&R system provides automated capture
of any defined EDMS and LMS data points,
captures alarms and events generated in these
systems, and securely stores the data.
The IS&R system can generate reports based
on real-time data, historical values, or event
information. These reports are available in
two forms: user-defined and automated. The
IS&R system can also generate energy balance
reports for the new refinery's electrical network
based on available historical data.
Interfaces with Other Third-Party Systems
The Web interface license residing within the IS&R server provides interfaces with third-party systems; examples are the corporate LAN and the CCR large-screen video wall:
• Web interface with corporate LAN: The EMS Web interface with the corporate LAN provides authorized users access to key information such as important displays, reports, and alarm/event data.
• OPC interface with CCR large-screen video wall: The EDMS and LMS OPC interface with the large-screen video wall in the CCR enables critical data to be displayed. The information displayed is generally restricted to very high level data such as total consumed power, system frequency, various unit loads, tie-line power flow, 220 kV system overview, and generator status. A sample display template is shown in Figure 11.
MAJOR CHALLENGES ENCOUNTERED IN EXECUTING THE NEW REFINERY'S EMS
During project execution, a number of factory and onsite tests were performed to ensure the EMS's integrity and satisfactory operation.
Test feedback was used to fine-tune the design
to achieve the required system performance. The
issues reported during testing were resolved in
consultation with system experts and end users.
The EMS hardware and application programs
were also modified based on the results of these
validation tests. Two of these modifications are
discussed below.
Figure 11. Sample Template for Displaying EMS Data on CCR Large-Screen Display
Segregation of Noncritical I/O Cards in the
Main Load-Shedding System
During the design stage, the cards for both
critical and noncritical I/Os associated with
load-shedding were configured on a single AC
800M controller.
These I/O cards were all scanned at the same
fast scan-time rate. However, scanning the
analog (noncritical I/O) cards at such a high rate
was not necessary for load-shedding purposes.
Not all signals were time critical because the
load-shedding system uses 2-second-old data
to prevent load-shedding functions from using
analog data obtained while the electrical network
is in a transient condition immediately after the
occurrence of a critical event.
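The delayed-snapshot idea can be sketched as a small ring buffer (scan counts and values below are assumptions; the actual system implements this inside the AC 800M application):

    # Illustration: load-shedding logic reads values captured ~2 seconds earlier,
    # so a post-event transient never feeds the shed-amount calculation.
    from collections import deque

    class DelayedSnapshot:
        def __init__(self, delay_scans):
            self.buffer = deque(maxlen=delay_scans + 1)

        def update(self, analog_values):   # called every scan cycle
            self.buffer.append(analog_values)

        def read(self):                    # oldest retained scan
            return self.buffer[0]

    snap = DelayedSnapshot(delay_scans=4)  # e.g., 4 scans at 500 ms = 2 s
    for mw in ([100.2], [100.1], [55.0], [54.8], [54.7]):  # event at 3rd scan
        snap.update(mw)
    print(snap.read())                     # -> [100.2], the pre-event value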
During factory testing, it was observed that this all-inclusive fast-scan configuration prevented the performance targets (load-shedding operating time and processor loading) from being met. After further evaluation
and discussions with experts, it was decided
to introduce additional controllers to process
noncritical I/Os at a lower scan rate. A high-end
controller was introduced to process critical
I/Os. It was also verified that adopting this
solution would not significantly affect the overall
project completion schedule.
Since controller loading and load-shedding
performance are related to each other, various
tasks executed by the AC 800M controller
were also tuned to further optimize load-
shedding performance.
While carrying out these modifications, care
was taken to avoid changing the field cable
terminations. A significant effort was also made
to ensure that the changes caused minimal EMS
application retesting.
All requisite performance parameters were
achieved after the changes were implemented.
Sequencing the Under-Frequency
Load-Shedding Application
During the site acceptance test for the UFLS, it was observed that the system operated inadvertently during one of the simulated contingencies. This was a very serious condition,
because inadvertent load tripping could lead to
major production loss or even refinery shutdown.
To analyze the incident, the SOE and respective
UFLS displays were observed and recorded.
Based on this information, a similar condition
was simulated offline so that the UFLS's behavior under this condition could be studied. This offline test confirmed that the system's behavior in the offline setup matched that of the real-time incident.
Further analysis revealed that the incident
occurred due to an incorrect execution sequence
programmed into a program/control module
associated with the UFLS. This sequence was
corrected, and the execution sequence of all
programs and control modules connected to the
system was checked in detail before the modules
were redeployed in the EMS.
The EMS was subsequently successfully
commissioned.
IMPROVEMENTS RECOMMENDED FOR
FUTURE PROJECTS
The EMS is a very critical system for the entire electrical network because it interfaces with practically all major refinery electrical
equipment. In developing the EMS for the new
refinery, it was observed that EMS performance
improvements sometimes necessitated adding
or changing devices or modifying the design
of other electrical equipment, including, in
particular, critical items such as switchyard
equipment, GIS, and MV/LV switchgear. Such
adjustments are better coordinated when
the initial EMS philosophy development and
procurement activities are started sooner so that
EMS detailed design can proceed concurrently
with that of the other critical equipment.
Based on lessons learned, the following
aspects need to be considered to improve EMS
functionality on future projects:
• IEDs installed to monitor and control electrical networks should be based on IEC 61850, which facilitates interoperability, free configurability, and long-term stability.
• Optical fibers should be used for the IED network, even within substations, because they provide better reliability and reduce testing and commissioning efforts.
• The MLS and backup UFLS must be completely independent of each other in terms of hardware and application software.
• Detailed performance calculations for the processor that provides LMS functionality should be established in the initial project stages to avoid having to upgrade the processor and stretch the site acceptance test schedule during commissioning.
• After factory tests for a pilot or typical substation are completed, the test setup should be made available on site for repetitive engineering of EDMS functionalities (database population, display compilation, and testing). This helps optimize the engineering time cycle.
• Alarm and event management should be discussed in depth with the end user's operations and maintenance group and should be optimized during initial detailed engineering to avoid operational difficulties after commissioning.
ACKNOWLEDGMENTS
The authors wish to express their gratitude to
R.H. Buckle (Bechtel London Execution Unit
chief engineer), R.D. Hibbett (Bechtel London
Execution Unit lead electrical engineer for
the new Jamnagar refinery project), David
Hulme (Bechtel London Execution Unit project
engineering manager for the new refinery
project), Sanjay Mashruwala (Reliance Industries
project director for the new refinery project),
and S.K. Garyali (Reliance Industries engineering
director for the new refinery project) for their
support and guidance during execution of the
new Jamnagar refinery. The authors also wish
to express their thanks to Reliance Industries
engineers Ninad Date and Vipul Mehta, and the
Jamnagar Central Services and Operation teams
from Reliance Industries for their fine support
during EMS execution. The authors also wish
to acknowledge the efforts of ABB's task force
deployed during EMS implementation, from
design to commissioning.
TRADEMARKS
The term "Aspect Objects" is a trademark of ABB.
Microsoft and Windows are registered
trademarks of Microsoft Corporation in the
United States and other countries.
BIOGRAPHIES
Rajesh Narayan Athiyarath
is a specialist senior electrical
engineer in Bechtel's Oil,
Gas & Chemicals (OG&C)
Global Business Unit (GBU)
in London. He has 17 years
of experience in engineering
oil and gas, petrochemical,
and GTG power plant
projects worldwide. During
his 5 years with Bechtel OG&C in London,
Rajesh was the responsible engineer for the new Jamnagar refinery's project system studies and relay
coordination studies as well as Ruwais refinery
expansion project front-end engineering and
design (FEED) system studies. He was also the
responsible engineer for the Jamnagar EMS and
load-shedding systems. Rajesh received a performance
award for his Jamnagar system studies work.
For Bechtel's Mining & Metals GBU, Rajesh has also
provided specialist support for system studies and
relay coordination work on the Worsley efficiency and
growth project. He is currently working as lead on
various proposals and studies.
Rajesh holds a BE from Mumbai University, India, and
is a chartered electrical engineer (CEng, member of
Institution of Engineering and Technology [MIET],
UK). He is a Six Sigma Yellow Belt.
Munir Mujawar is a senior
vice president (Core Group,
Electrical discipline) at Reliance
Industries Ltd. He has 38 years
of design and engineering
experience on projects related
to oil, gas, petrochemical, and
utility power systems in India
and the Middle East. For the
last 15 years, Munir has been
responsible, from basic engineering to execution, for
various Reliance projects, including both Jamnagar
refineries, KGD6 onshore facilities, petroleum products
storage depots, East-West pipelines, and retail outlets.
Munir holds an MTech (Power Systems) from the
Indian Institute of Technology, Mumbai, India.
He is a chartered engineer and a senior member of
the Institution of Engineers (India).
Vinayak Shanbhag is a senior
lead design engineer in the
EPC Core Group at Reliance
Industries Ltd. He has 20 years
of engineering experience
in the refinery and petro-
chemical industry. Vinayak
has handled numerous large
system packages during his
17-year tenure with Reliance.
He has also been involved in the basic and detailed
engineering of various systems for Reliance's
petroleum refinery units at various locations in India.
Vinayak holds a BE from Mumbai University, India.
INTRODUCTION
Construction operations have been modeled
in three dimensions for many years. Three-
dimensional (3D) modeling is a powerful,
objective function evaluator well suited to
designing complex resource-driven construction
processes. The current state-of-the-art simulation
systems allow complex construction operations
to be modeled in great detail and with utmost
flexibility. 3D analysis reveals meticulous
details about planned operations before actual
field implementation commences. Such details
include resource utilization, resource idleness,
operational bottlenecks, production rates, and
the resulting costs.
The ability to create 3D visualizations of
construction operations can be of substantial
help in communicating the authenticity of a
simulation analysis to construction decision
makers. In addition, a visualization can provide
valuable insight into the subtleties of the
modeled construction operations that cannot
be otherwise quantified and presented. The
ability to realistically model construction
operations can thus enable planners to provide
more pragmatic and comprehensive feedback
to construction personnel.
MAJOR ISSUES RELATED TO CONSTRUCTION
PROJECT DELAYS
Construction project delays are frequent
recurring. The principal factors causing
these delays are comparable across developing
countries. Additional local factors pertaining to
industry, socio-economic, and cultural issues,
along with project-specific characteristics,
further contribute to delays. The most significant
factors causing construction project delays are as
follows (also see Figure 1):
• Poor communication among disciplines and functions
• Concurrent engineering: design and construction going on in parallel
• Unavailability of engineering drawings
• Material unavailable or unallocated
• Delayed delivery of items from fabricators
• Schedule pressure on fast-track projects
• Limited resources
• Aging skilled workforce: younger, less-experienced craft workers
• Many revisions during the project
• Scheduling by area, turnover by system
• Tendency to work with whatever is available
• Operation in scramble mode to try to achieve schedule
Issue Date: June 2011
Abstract: Construction companies large and small are being called upon to do more with less by building
ever-more-complex plants in shorter timeframes with fewer resources. In this context, the use of a robust 3D
visualization suite can contribute to project efficiency by helping to optimize materials and equipment, reduce
lost inventory, provide greater predictability, lower costs, and improve worker safety. This paper examines ways
in which Bechtel has been using 3D visualization in conjunction with other tools and techniques to improve
construction performance and productivity. Specific examples are provided of how Bechtel is integrating 3D
visualization with both in-house and externally developed software and work process improvement initiatives.
The results have brought the company closer to achieving its overall strategic vision of being able to identify,
integrate/aggregate, and ultimately deliver timely and validated information to projects in a manner that
consistently enables quality work to be performed safely, effectively, and efficiently.
Keywords: 3D model; 4D model; computer-aided design and drafting (CADD); Construction Owners
Association of Alberta (COAA) WorkFace Planning; engineering; engineering, procurement, and construction
(EPC); integration; Intergraph SmartPlant Review; lean construction; technology; visualization
DYNAMIC 3D VISUALIZATION OF
CONSTRUCTION ACTIVITIES
Shiva Krishnasamy
[email protected]
These delay factors pose the following questions:
• Is the planned schedule achievable?
• What is the exact progress to date?
• How do engineering changes affect work content, cost, and schedule?
• How is the schedule maintained in view of changing circumstances?
• What effort is needed to complete the project?
• What is the impact to the project if materials are delayed?
THE ROLE OF POWERFUL AND VERSATILE
3D VISUALIZATION
Some of the construction project delay factors and questions listed above can be avoided or controlled by implementing powerful and versatile 3D visualization solutions in conjunction with other tools and techniques developed to improve construction performance and productivity. This paper examines Bechtel's use of Intergraph® SmartPlant® Review, a robust, industry-leading 3D visualization suite, and provides examples of how Bechtel has been able to integrate its capabilities with both in-house and externally developed software and work process improvement initiatives.
ABBREVIATIONS, ACRONYMS, AND TERMS
3D three-dimensional
4D for the purposes of this paper,
3D plus time
4D CADD process used to compile 3D
CADD models in a time-phased
manner to depict a construction
sequence
AFC approved for construction
API application programming
interface
BPS Bechtel Procurement System
(Bechtels integrated supply
chain and materials management
system for E&C projects;
designed for use throughout a
projects life for procurement
of engineered equipment, bulk
materials, and services) (a BSAP)
BSAP Bechtel Standard Application
Program (Bechtel uses BSAPs
corporate-wide as the standard
software for specific functional
or corporate business processes.)
CADD computer-aided design and
drafting (a typical project
3D CADD model contains
piping; equipment; concrete;
structural; maintenance/safety;
heating, ventilating, and air
conditioning; and electrical and
instrumentation design elements)
COAA Construction Owners Association
of Alberta (Canada)
E&C engineering and construction
ELT erect line and tack
EPC engineering, procurement, and
construction
FIWP field installation work
package (a WorkFace Planner
element)
GBU (Bechtel) global business unit
IFC issued for construction
IT information technology
MRR material receiving report
PDS plant design system
SEC Software Engineering and
Construction (part of Bechtel
Information Systems &
Technology)
SETROUTE®
Bechtel-developed automated
engineering, design, and
construction management
system for electrical cable,
raceway, and wiring (a BSAP)
SQL structured query language
(used to manage data
in relational database
management systems)
SSRS SQL Server Reporting
Services (Microsoft® Corporation's server-based
reporting platform that
provides comprehensive
reporting functionality for
a variety of data sources)
TEAMWorks™
Bechtel-developed system
for tracking equipment and
reporting materials and
quantities (a BSAP)
WFP WorkFace Planning (a Best
Practice of the COAA)
Through
this approach, the company has moved closer
to achieving its strategic vision of being able
to identify, integrate/aggregate, and ultimately
deliver timely and validated information to
projects in a manner that consistently enables
quality work to be performed safely, effectively,
and efficiently.
3D Visualization Tools and Techniques
An animated construction sequence has proven
useful on projects in helping to avoid planning
overlaps, among other things. [1] As a deliverable
for work contractors, the underlying model
should be updated daily for the duration of
the project. In order for the model to remain
the central source of project information, it is
important for the engineering team to maintain
the model's integrity by using redline strikeout
and as-built drawings from start to finish. Tools
and techniques for capturing the information that
needs to be incorporated include the following.
Work Package Planning
Work package planning can assist in constructing
work packages. SmartPlant Review can highlight
the components of a group of isometrics within
a view (by color) and supports the planning
process by using colors to display procurement
information (delivery, fabrication status, etc.)
about various work package commodities. [2]
Progress Visualization
Progress visualization can be supplemented by
progress information taken from various Bechtel
project information systems (TEAMWorks

,
SETROUTE

, BPS, etc.) and can be graphically


displayed by color code in a tool such as
SmartPlant Review. The 3D visualization can
be populated with information from these
systems and can be viewed at SmartPlant
Review workstations. Progress visualizations
can be configured to be automatically updated
daily. [2]
Organization by System
The engineering model can be reorganized by
system for testing and turnover. Doing so enables
installation progress and remaining installation
tasks to be visually monitored on a system-by-
system basis. As the project transitions from
installation by areas to testing of systems, daily
installation activities can be reprioritized and
focused to prepare systems for testing in the
required sequence and timing. [2]
Installation
progress and
remaining
installation tasks
are visually
monitored.
Figure 1. Construction Project Delay Factors (delay factors 1–9 mapped onto the engineering, delivery and fabrication, installation, testing, and systems turnover phases)
(Image courtesy of Bentley Systems, Incorporated. Used by permission.)
Material Availability
The procurement and fabrication processes can
be monitored and tracked for materials released
for construction. The daily arrival of pipe spools,
for example, can be visualized so that crew
resources can be directed to target installation
work in areas where materials are available. [2]
Integration of 3D Visualization with
Bechtel Standard Application Programs
3D visualization, which is based on appli-
cation programming interface (API) modules,
automatically organizes the engineering design
model into constructable elements so that the
virtual model represents the way the facility
is built, not just the way it is designed. A data
flow diagram depicting how 3D visualization
is integrated with Bechtel Standard Application
Programs (BSAPs) is provided in Figure 2. Some
of the additional detail relates to information
that can be provided by the Engineering
and Procurement disciplines as deliverables.
The division of responsibility among all the
disciplines is shown in Table 1.
Table 1. 3D Visualization Division of Responsibility
Tasks:
• Establish ownership of 3D visualization
• Provide list of required files to create placeholder in SmartPlant Review model set
• Coordinate with Engineering and Procurement
• Create placeholder in SmartPlant Review model set
• Create interface between TEAMWorks and BPS
• Install and configure Microsoft® SQL Server
• Map the fields in TEAMWorks to the labels in SmartPlant Review output
• Schedule run time to immediately follow the project SmartPlant Review output routine
• Review modified output in the SmartPlant Review graphic user interface
• Validate the data
• Instruct the field team in creating and using display sets and how they apply to the new TEAMWorks data included in SmartPlant Review
• If the TEAMWorks/BPS report does not satisfy user needs, direct the user to the SSRS team to have new reports developed
Figure 2. Data Flow Diagram of 3D Visualization Integration with BSAPs (Engineering SP 3D and 3D model on the primary server; Construction TEAMWorks and 3D model on a staging server; Procurement BPS; Project Controls Primavera; linked by an SQL job feeding Construction 3D Visualization)
June 2011 Volume 3, Number 1 129
Throughout the ongoing process of integrating
3D visualization with BSAPs, engineering
coordination plays an important role. During
the engineering and procurement phases,
engineering coordination facilitates:
• Common naming conventions and standard attributes across the disciplines
• The use of BSAPs to produce standard design practices and procedures
During the construction phase, engineering coordination facilitates:
• The accommodation of work scope variations that may necessitate the incorporation of additional engineering deliverables and data content into the model
• The development and incorporation of the labels required for construction
Information exchanges are triggered manually
or automatically at the discretion of the plant
design system (PDS) coordinator for the project.
Depending on the phase, strategy, and scope
of work for the project, the interval of updates
varies and is also at the discretion of the
PDS coordinator.
The exchange of flat files is nonstandard in
that the mechanism, timing, and content
configuration vary from project to project.
Factors that affect these variables are network
connectivity and bandwidth, available
engineering resources for the project, and the
construction scope of work.
Information is exchanged via a database
connection between the TEAMWorks
visualization application, the SmartPlant
Review database (Microsoft Access), and the
TEAMWorks database server. The utility has no
user interface and, once configured, runs on a
job schedule. This step should be synchronized
with the planned SmartPlant Review dataset
synchronization and the planned quantity
reporting cycle in the field.
The exchange process works by finding tags in
TEAMWorks, gathering the configured status
data, and writing it to the existing labels in the
SmartPlant Review dataset based on matching
tags between the model and TEAMWorks.
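In outline, the tag-matching step resembles the following pure-Python sketch (dictionary stand-ins for the TEAMWorks server and the SmartPlant Review Access database; the tag values are invented, and the status field names follow Table 2):

    # Schematic only: the real utility moves data between databases on a schedule.
    def sync_labels(teamworks_rows, spr_labels, fields=("MRR", "ELT", "WELD")):
        """Copy configured status fields onto SmartPlant Review labels by tag."""
        by_tag = {row["tag"]: row for row in teamworks_rows}
        for label in spr_labels:
            row = by_tag.get(label["tag"])
            if row:                              # tag exists in both systems
                for f in fields:
                    label[f] = row.get(f, 0)
        return spr_labels

    tw = [{"tag": "SPOOL-0001", "MRR": 1, "ELT": 0, "WELD": 0}]
    spr = [{"tag": "SPOOL-0001"}, {"tag": "SPOOL-0002"}]   # model objects
    sync_labels(tw, spr)   # first label gains status fields; unmatched tag untouched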
Viewing TEAMWorks Object Data in
3D Visualization
Object data that includes TEAMWorks and BPS
information can be viewed for any tagged object
in the 3D model via SmartPlant Review. By
selecting the "auto-highlight" option on the
menu, pressing the mouse button, and pointing
to any component in the main view, a user can
see the object data in an adjacent text view
window (see Figure 3). Thus, information such
as isometric sheet and assorted TEAMWorks
data can be seen against the visible object.
(However, this interface does not support
input to TEAMWorks.) In the piping spool
example shown in Figure 3, the circled spool is
represented in the Object Data box to the right.
The material receiving report (MRR) attribute
shows "1," meaning that a value is present in
TEAMWorks and that the commodity has been
received by BPS. The referenced module has also
been toggled on to show a scale reference. All
pipe displayed here has been received.
Figure 3. Example of TEAMWorks Object Data in 3D Visualization
This spool, tied to Module 020R03,
shows as having been delivered
but no progress on installation,
per the information circled at right.
Data is collected from TEAMWorks for the
attributes listed in Table 2.
Figures 4, 5, and 6 respectively illustrate
how integrating 3D visualization with BSAPs
facilitates viewing TEAMWorks object data
to pinpoint material availability, progress,
and organization by system (three of the 3D
visualization tools and techniques discussed
earlier in this paper).
Table 2. Data Attributes
• MRR: Material Receiving Report*
• ELT: Erect Line and Tack*
• PUNCH: Punchlist Complete*
• WELD: Weld Complete*
• HYD TEST: Hydrostatic Test Complete*
• HYD REST: Hydrostatic Restore Complete*
• WK PKG: Work Package Number present in TEAMWorks for any object
• HYD PKG: Hydro Package Number present in TEAMWorks for any object
• SYSTEM: System Number present in TEAMWorks for any object
* A value of 1 indicates that there is data in TEAMWorks.
Figure 4. Material Availability (model color-coded by MRR, Punchlist Complete, Hydro Test Complete, and Hydro Restore Complete status)

Figure 5. Progress (model color-coded by MRR, Punchlist Complete, Hydro Test Complete, and Hydro Restore Complete status)

Figure 6. Organization by System (systems 411A-A001, 411A-M101, 411A-M301, and 411A-M401)
Integration of 3D Visualization with COAA's WorkFace Planning Process
WorkFace Planning (WFP) is an initiative being investigated by the Construction Owners Association of Alberta (COAA). As stated on the COAA's website, WFP is "the process of organizing and delivering all elements necessary before work is started, to enable craft persons to perform quality work in a safe, effective and efficient manner." [3] Bechtel is currently working on ways to integrate 3D visualization with WFP to improve planning and reduce overall construction cost.
WorkFace Planners are responsible for developing
field installation work packages (FIWPs) for
the construction contractor's workforce. FIWPs,
which are developed for the parts of the
project where engineering is complete, convert
construction work packages into lists of assigned
tasks for work crews and ensure that the
required resources and information are in place
before work starts. A typical WorkFace Planner
develops and maintains FIWPs in one discipline
for one general foreman or approximately 50
craft employees.
The WorkFace Planner typically uses 3D models
to design virtual plans based on the schedule
and the superintendent's execution plan. The
WorkFace Planner then identifies and removes
the constraints for each FIWP and enters
progress once the plans have been executed. The
3D model plays an important role in establishing
a knowledge center that allows WorkFace
Planners to access real-time project information.
A simplified work process diagram of WFP
integration with 3D visualization is shown in
Figure 7.
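The constraint-screening step lends itself to a simple data-structure sketch (the constraint names here are assumed examples, not COAA-prescribed fields):

    # Illustration: a package is released to the crew only when every
    # constraint has been cleared by the WorkFace Planner.
    def ready_for_release(fiwp):
        return all(fiwp["constraints"].values())

    fiwp = {
        "id": "FIWP-PIPE-0042",
        "constraints": {
            "drawings_ifc": True,        # issued for construction
            "materials_on_site": True,   # confirmed in the materials system
            "scaffolding_erected": False,
            "permits_in_place": True,
        },
    }
    print(ready_for_release(fiwp))       # -> False until scaffolding is cleared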
EPC BENEFITS
Using 3D visualization during the project engineering, procurement, and construction (EPC) phases:
• Increases workflow efficiency and reduces costly errors and rework
• Reduces the risk of project overruns by making accurate information available whenever and wherever it is needed
• Improves predictability
• Improves construction productivity
• Reduces schedule
• Enables data to be compared among various contractors
Figure 7. WFP Integration with 3D Visualization (the WorkFace Planner uses the 3D model to build the field installation work package, removes constraints, releases the work for execution, and feeds progress back via progress reports)
IMPLEMENTATION
It is becoming increasingly important to manage a project's entire lifecycle: engineering, project management, procurement, material management, fabrication, and installation. To this end, 3D visualization has already been implemented on several major Bechtel projects, including:
• Shell Scotford Upgrader Expansion 1 (SUEx1) in Alberta, Canada, which has increased the upgrader's capacity to process bitumen into synthetic crude oil
• Motiva oil refinery expansion at Port Arthur, Texas, which will increase production by 325,000 barrels per day when completed in 2011 (making Motiva the US's largest refinery)
• Angola liquefied natural gas facility, which will produce 1 billion cubic feet per day of natural gas upon completion in 2012
• Duke Edwardsport integrated gasification combined cycle (IGCC) 630 MW power plant in Edwardsport, Indiana, the first commercial-scale IGCC plant in the US since the 1990s, scheduled for completion in 2012
• Pueblo chemical agent-destruction pilot plant near Pueblo, Colorado, which will destroy the chemical weapons stockpile at the Pueblo Chemical Depot in Colorado, starting as soon as 2015
FUTURE OUTLOOK
Construction planning can be interpreted from different angles, including making technology choices, defining work tasks, estimating the required resources and durations for individual tasks, and identifying interactions among the different tasks. The use of 4D CADD models [4] (intelligent, time-phased 3D CADD models that can be used for various construction activities) to carry out true collaborative planning is only beginning to be realized. It has been postulated that 4D models are currently being used more as planning review tools, rather than being integral to the initial construction planning process [5], and have generally not yet been engaged to fully support multi-disciplinary collaborative construction planning by the teams involved.

As advancements continue, however, 4D models will eventually become the standard for all phases of facility design, engineering, and lifecycle engineering with regard to operation, maintenance, and retrofit. Bentley® ConstructSim® and Intergraph SmartPlant® Construction are two examples of 4D CADD tools currently available on the market. As these tools mature, they may become suitable alternatives for 3D visualization.
Bentley ConstructSim
Bentley ConstructSim derives schedule data
from 3D models instead of using schedule data
as input. It optimizes a project by planning,
sequencing, executing, and monitoring
construction activities from within a virtual
plant model. ConstructSim can be used to break
up an engineering model and reorganize it the
way it is actually built.
ConstructSim links project information technology (IT) systems, including the 3D CADD model (all major formats: Bentley AutoPLANT®, Intergraph PDS®, AVEVA PDMS™, etc.), schedule, materials management system, project controls databases, and others, to create an integrated virtual environment that is kept constantly up to date.
Intergraph SmartPlant Construction
Intergraph SmartPlant Construction, a new
SmartPlant Enterprise software module created
specifically for process, power, and marine
firms to more efficiently plan and manage
plant, ship, and offshore construction, is said to
foster increased productivity, accelerated project
completion, and reduced risk.
By seamlessly integrating engineering,
procurement, fabrication, and site materials,
SmartPlant Construction facilitates improved
work planning, efficient information exchange
for better communication, and enhanced
engineering and construction work processes.
SUMMARY AND CONCLUSIONS
The use of 3D visualization holds significant
potential for the construction industry.
Employing 3D visualization can positively affect
both the preconstruction and construction phases
by assisting planners in producing improved
projects and then allowing them to see how
their plans evolve. In addition, 3D technology
enables planners to anticipate potential problems
at the construction stage, which could have
considerable cost and time implications. [6] And
where 4D technology has been embraced, direct
savings and increased productivity have been
seen. The use of 3D visualization simulations
allows considerable savings to be realized on
construction projects by identifying problems
before construction and avoiding rework during
the project lifecycle.
In summary, the essence of using simulation
to model construction operations is to obtain
insights into the consequences of using
different techniques and strategies, thus helping
the planner to make the most advantageous
decisions. Construction simulation systems
provide users with detailed information such
as material status, resource usability, and
breakdown of the modeled system.
Bechtels experiences to date show that if project
requirements are accurately defined early and
the right applications and work processes are
put into place within Engineering, it is possible
to deliver high-quality 3D visualizations along
with the physical plants at a relatively small
additional cost.
Clearly, the use of 3D visualization as a
construction planning execution tool should
be further explored for future large-scale
construction projects.
ACKNOWLEDGMENTS
The author wishes to express his gratitude
to Jeremy Lee, Technical Support Specialist,
for his support and guidance in developing the
utility described in this paper. The author also
wishes to thank Chris Thumann and Edward
Ritchie for their support and encouragement.
TRADEMARKS
Bentley, AutoPLANT, and ConstructSim are
registered trademarks of Bentley Systems,
Incorporated, or of one of its direct or indirect
wholly owned subsidiaries.
Intergraph, PDS, and SmartPlant are registered
trademarks of Intergraph Corporation in the
United States and in other countries.
Microsoft is a registered trademark of Microsoft
Corporation in the United States and other
countries.
PDMS is a trademark of AVEVA Group plc.
SETROUTE is a registered trademark and
TEAMWorks is a trademark of Bechtel
Corporation.
REFERENCES
[1] Glenn Ballard, "Lean Construction and EPC Performance Improvement," Proceedings of the 1st Annual Meeting of the International Group for Lean Construction, Espoo, Finland, August 1993, pp. 79–92, http://www.leanconstruction.org/pdf/LEAN_EPC.pdf.
[2] "Lean Construction Fact Sheet," Constructing Excellence, October 28, 2004, pp. 1–4, http://www.constructingexcellence.org.uk/pdf/fact_sheet/lean.pdf.
[3] Construction Owners Association of Alberta (COAA), http://coaa.ab.ca/Productivity/WorkFacePlanning/InDepthLook.aspx.
[4] L.M. Sheppard, "Virtual Building for Construction Projects," IEEE Computer Graphics and Applications, Vol. 24, No. 1, January/February 2004, pp. 6–12, access via http://ieeexplore.ieee.org/xpl/freeabs_all.jsp?arnumber=1255800.
[5] F. Vaugn, "3D and 4D CAD Modeling on Commercial Design-Build Projects," Proceedings of the Third Congress on Computing in Civil Engineering, ASCE, Anaheim, CA, June 17–19, 1996, pp. 390–396, access via http://cedb.asce.org/cgi/WWWdisplay.cgi?101236.
[6] S. Staub-French and A. Khanzode, "3D and 4D Modeling for Design and Construction Coordination: Issues and Lessons Learned," Journal of Information Technology in Construction, ITcon Vol. 12, July 2007, pp. 381–407, http://www.itcon.org/cgi-bin/works/Show?2007_26.
ADDITIONAL READING
Additional information sources used to develop this paper include:
• Balfour Technologies, http://www.BAL4.com/.
• Bentley Systems, http://www.bentley.com/products/navigator.
• Common Point, http://www.commonpointinc.com/.
• Intergraph, http://ppo.intergraph.com/visualization.
• WorkFace Planning (WFP), http://www.coaa.ab.ca/productivity/workfaceplanning.aspx.
BIOGRAPHY
Shiva Krishnasamy is the
field IT manager for the
Shell Scotford Upgrader
Expansion 1 project. Since
joining the company in 1997,
he has worked on multiple
projects within Bechtel's Oil,
Gas & Chemicals (OG&C)
Global Business Unit, including
Thai Plastic & Chemicals
polyvinyl chloride facility expansion, BPC acrylics
complex, Optimal butanol and derivatives facility,
Goro nickel plant, CSPC Nanhai petrochemicals
complex, and Thai Oil TPX facility revamp.
Shiva has 14 years of IS&T experience supporting
EPC, with an emphasis on implementing technical
innovations that add value and improve project
work processes. While on the Shell project,
he managed the EPC application portfolios and
provided solutions for interoperability and data
integration among these applications.
Shiva has authored a Bechtel white paper, plus four Bechtel Refined Intellectual Capital papers: (1) "Automated Population of the Startup System Designator in Construction BSAPs," (2) "Articulate Online eLearning Solution for Project Office," (3) "Using WinPCS on EPC Projects as a Startup Tracking Tool," and (4) "iSCSI SAN for the IS&T Organization."
Shiva holds an MS from the University of Wales
in Cardiff and a BS from the University of Greenwich
in London, UK, both in Information Technology.
He is a Six Sigma Yellow Belt.
INTRODUCTION
The geologic sequestration of CO2 from large industrial point sources requires that the pressure of the CO2 stream be raised to at least 2,200 psig. The optimum configuration may involve compression only, or some combination of compression and liquid CO2 pumping, depending on various factors such as the composition of the CO2 stream and the CO2 pipeline pressure.
In order to use carbon steel pipelines to transport the CO2 from the point source to the sequestration site, the CO2 stream must be dehydrated to avoid carbonic acid corrosion. The optimum pressure in the compression system to perform this dehydration step is determined by a balance between the size of the dehydration unit and the cost of construction materials of the compression/dehydration system.
This paper summarizes the results of a study on CO2 compression for carbon capture and sequestration (CCS) performed under a Bechtel technical grant. The paper also shows that the study results were subsequently validated by a CO2 geologic sequestration study recently performed by Bechtel for a world-scale liquefied natural gas (LNG) facility.
BACKGROUND
Several options are currently feasible for the geologic sequestration of CO2:
• Enhanced oil recovery (EOR), where oil production rates from low-yield fields are increased by flooding them with CO2
• Storage in saline aquifers. Saline aquifers are deep sandstone formations containing briny water that can accept injection of CO2 for permanent storage.
• Storage in depleted oil and gas fields
These options dictate that the CO2 to be sequestered is injected at very high pressures. At these conditions, CO2 is in a supercritical dense phase, meaning that the supercritical CO2, like a gas, has no surface tension, but has a density similar to that of a liquid.
Issue Date: June 2011
Abstract: Increasing climate change legislation is driving a significant growth in the deployment of carbon capture and sequestration (CCS). This development has implications for several Bechtel industries, including fossil power (pre-combustion and post-combustion), natural gas production (domestic gas and liquefied natural gas [LNG]), and gasification (coal to chemicals, coal to liquids, and synthetic natural gas).

CO2 compression and dehydration are two of the more significant factors affecting the high capital cost and energy consumption of a CCS facility. Thus, developing the correct design criteria for CCS compression and dehydration is critical to the optimal balance of low capital cost and high reliability and availability. CO2 compression design must consider rotating equipment selection, material selection, CO2 hydrate formation, startup and shutdown, pipeline depressurization, and high-pressure CO2 relief. CO2 dehydration options include molecular sieve adsorption and triethylene glycol dehydration.

This paper summarizes the results of a study on CO2 compression for CCS performed under a Bechtel technical grant. The paper also shows that the study results were subsequently validated by a CO2 geologic sequestration study recently performed by Bechtel for a world-scale LNG facility.

Keywords: carbon capture, carbon capture and sequestration (CCS), carbon sequestration, climate change, CO2 hydrate, dehydration, depressurization, fossil fuels, gasification, global warming, liquefied natural gas (LNG), natural gas, power generation
CO2 COMPRESSION AND DEHYDRATION FOR CARBON CAPTURE AND SEQUESTRATION
Tomas Maramba III
[email protected]
Dipanjan Bhattacharya
[email protected]
Large industrial-scale point sources of CO2 are often located far from the sequestration site; thus, pipelines are used to transport the CO2 from the point source to the sequestration site.

The large industrial-scale point sources for CO2 that have demonstrated the feasibility of CO2 capture or are being considered for CCS include:
• Natural gas production and treatment facilities, including domestic gas plants and LNG facilities. These plants capture the CO2 that accompanies the natural gas from deep, high-pressure gas fields. The CO2 stream coming from a natural gas treatment facility is saturated with water and therefore requires dehydration.
• Gasification plants. These plants convert coal and other hydrocarbon feedstocks into a synthetic gas (syngas) consisting mainly of hydrogen and carbon monoxide, with CO2 as a by-product. Gasification plants capture the CO2 from the syngas at high pressure and relatively high CO2 concentrations. Depending on the choice of syngas acid gas removal technology, the CO2 from a gasification plant may be saturated with water. The syngas can be converted to chemicals or fuels (such as synthetic natural gas, gasoline, or diesel), or it can be used as fuel in an integrated gasification combined cycle (IGCC) plant to produce electric power.
• Pulverized-coal power plants. These facilities combust coal at atmospheric pressure and produce steam to generate electric power. The technology to capture CO2 emitted by these plants has not yet been proven on a commercial scale. The CO2 stream from these plants will likely be saturated with water.
CO2 COMPRESSION FOR SEQUESTRATION
CO2 can be brought to the required sequestration pressure by compression only, or by a combination of compression and pumping. Selection of a compression system is based on a variety of factors:
• The pressure of the CO2 source. Low starting pressures (close to atmospheric) require large machines for the initial stages due to enormous suction volume.
• A balance between the number of compression stages and power consumption to achieve the optimum configuration. A configuration with fewer compression stages (and thus a high compression ratio) may lead to lower capital costs, but also results in lower thermal efficiency and thus higher power consumption (see the sketch after this list).
• The feasibility of subcritical liquefaction of CO2 through refrigeration, or compression of CO2 to supercritical pressure, followed by pumping the liquid or dense-phase CO2 fluid.
• The composition of the CO2 stream, which can determine whether refrigeration and liquefaction of CO2 are feasible. The acid gas from an LNG facility contains an almost pure stream of CO2 and is a prime candidate for refrigerated liquefaction. On the other hand, the CO2 stream from a gasification facility is likely to include some amount of hydrogen, so refrigerated liquefaction may not be cost effective.
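The stage-count tradeoff noted in the second factor can be approximated with an ideal-gas estimate (a first-pass sketch only; CO2 departs strongly from ideal behavior near its critical point, so rigorous design relies on equation-of-state simulation):

    # Equal per-stage pressure ratios with perfect intercooling back to suction
    # temperature. All numbers below are illustrative assumptions.
    def compression_work_kj_per_kg(p_in, p_out, stages, t_in_k=310.0, k=1.28):
        r = (p_out / p_in) ** (1.0 / stages)   # per-stage pressure ratio
        r_gas = 0.1889                         # kJ/(kg*K), gas constant for CO2
        per_stage = (k / (k - 1.0)) * r_gas * t_in_k * (r ** ((k - 1.0) / k) - 1.0)
        return stages * per_stage              # total ideal work, kJ/kg

    for n in (2, 4, 8):                        # 20 psia suction, ~2,200 psig discharge
        print(n, round(compression_work_kj_per_kg(20.0, 2215.0, n), 1))
    # Roughly 360, 315, and 294 kJ/kg: more intercooled stages approach
    # isothermal compression and cut the power demand.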
The study explored four different thermodynamic paths for compressing CO2 to the required sequestration pressure: high compression ratio, semi-isothermal (using integral gear compressors), compression with supercritical pumping, and compression with refrigeration and CO2 pumping.
ABBREVIATIONS, ACRONYMS, AND TERMS
AGRU acid gas removal unit
CCS carbon capture and sequestration
EOR enhanced oil recovery
IGCC integrated gasification combined
cycle
LNG liquefied natural gas
VFD variable frequency drive
UNITS OF MEASURE
bara bar absolute
(unit of absolute pressure)
kJ/kg kilojoule per kilogram
(unit of enthalpy)
kJ/kg·K kilojoule per kilogram-Kelvin
(unit of entropy)
lb pound
MMscfd million standard cubic feet
per day
MPa megapascals
mtpa metric tons per annum
psia pound-force per square inch
absolute
psig pound-force per square inch
gauge
These paths, which have also been
investigated by other researchers [1, 2], are
briefly described below.
High Compression Ratio Path
Figure 1 depicts the thermodynamic path
of a high compression ratio machine. Fewer
compression stages lead to lower thermal
efficiency and higher power consumption.
Semi-Isothermal Path (Using Integral Gear Compressors)

Figure 1. High Compression Ratio Path (CO2 pressure-enthalpy diagram: pressure, MPa, vs. enthalpy, kJ/kg)

Figure 2. Semi-Isothermal Path (Using Integral Gear Compressors) (CO2 pressure-enthalpy diagram: pressure, MPa, vs. enthalpy, kJ/kg)

Figure 2 shows a semi-isothermal approach in which many compression stages are used with
aftercoolers to minimize heat of compression and power consumption. Capital costs are minimized by using integral gear centrifugal compressors, allowing up to eight compression stages to be installed in a single casing.

This semi-isothermal approach using integral gear compressors has been found to be superior to the high compression ratio approach in terms of both capital cost and energy efficiency. This option is feasible for all large-scale industrial sources of CO2.
Figure 3. Compression with Supercritical Pumping Path (CO2 pressure-enthalpy diagram: pressure, MPa, vs. enthalpy, kJ/kg)

Figure 4. Compression and Pumping with Refrigeration Path (CO2 pressure-enthalpy diagram: pressure, MPa, vs. enthalpy, kJ/kg)
Compression with Supercritical Pumping Path
The preceding two approaches involve compression only. Another feasible option is to use compression followed by supercritical pumping. The pumping suction pressure has to be significantly higher than the critical pressure because the dense-phase CO2 needs to attain a minimum density prior to pumping.

Figure 3 shows the thermodynamic path for supercritical compression and pumping. Here, the fluid is compressed above the critical pressure of CO2, so the CO2 becomes a dense-phase fluid. After compression, the fluid is pumped to reach the desired pressure. This approach is useful when a full-scale refrigeration system is not feasible and could offer an energy efficiency advantage when the final sequestration pressure is above 3,000 psia. Compression with supercritical pumping is feasible for all industrial sources of CO2, including gasification facilities, despite the possible presence of hydrogen in the CO2 stream.
Compression with Refrigeration and CO2 Pumping Path
Figure 4 shows the thermodynamic path for a compression and pumping scenario with the aid of CO2 refrigeration. This approach is particularly feasible for CO2 pumping in an LNG plant, which already has a cryogenic system, so the costs of CO2 refrigeration are reduced. The gas is compressed to a subcritical pressure and then condensed with an optimal refrigerant. The condensing pressure determines the levels of refrigeration required as well as the type of refrigerant. After condensation, the CO2 is liquid-pumped to the pipeline pressure.
CO2 DEHYDRATION
Factors To Be Considered
The water content of CO2 is depicted in the curves in Figure 5. The water content in CO2 decreases with pressure up to about 50 bara and then increases. As the CO2 is compressed, water is removed from the CO2 stream at the intercoolers only until a pressure of 50 bara is reached. While it may appear that CO2 dehydration at this pressure is optimal, there are other factors to be considered.
The dehydrators are normally designed for a specification of 30 lb H2O/MMscf of gas. The dehydrator system itself is not a primary cost element, but optimizing the dehydrator location is a very critical function because it directly affects the operating and capital costs of the entire compression system.
Dehydration at higher pressure requires a smaller and less expensive dehydration system, since upstream compression and intercooling remove some of the water from the CO2 stream. However, some of the cost savings from a smaller dehydration system are lost due to the cost of greater material thickness. If dehydration is to be performed to a tight specification and the CO2 stream needs to be bone dry, then a higher operating pressure (up to a maximum of 50 bara or 725 psia) is beneficial.
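The minimum in the water content curve can be picked out numerically, as in this sketch (the sample points are assumptions that only echo the trend of Figure 5; actual design values come from the GPSA data or a process simulator):

    # Assumed sample points: pressure (bara) -> saturated water content
    # (lb H2O/MMscf) for a CO2 stream at 100 deg F. Illustrative only.
    WATER_CONTENT = {10: 260.0, 20: 140.0, 35: 90.0, 50: 75.0, 70: 85.0, 100: 105.0}

    def best_dehydration_pressure(curve):
        """Pressure where the saturated water content is lowest (~50 bara)."""
        return min(curve, key=curve.get)

    print(best_dehydration_pressure(WATER_CONTENT))   # -> 50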
If the dehydration is performed at higher
pressure, many of the interstage compressors
may require a significantly higher aftercooler
outlet temperature to avoid hydrate formation,
which can occur at low aftercooler temperatures
in cold ambient conditions. Increasing the
outlet temperature reduces compression system
efficiency and results in higher operating costs
due to greater power consumption.
Perhaps the biggest cost factor affected by the pressure at which dehydration is performed is the metallurgy of the entire CO2 compression and dehydration system. The portion of the compression system upstream of and including the dehydration system needs to be constructed of expensive duplex stainless steel; a wet CO2 stream causes carbonic acid corrosion in less-expensive carbon steel. Downstream of dehydration, carbon steel can be used to construct the system.
Figure 5. Saturated Water Content of CO2 Mixtures at 100 °F (water content, lb/MMscf, vs. pressure, psia; curves for pure CO2, a 5.31% CH4/94.69% CO2 mixture, and pure CH4) (Source: GPSA Engineering Data Book [3])
Compressor Intercoolers and CO2 Hydrate Formation
Compressor intercoolers are a critical component, since they are the primary means of minimizing compressor horsepower. For a compression system upstream of the dehydration unit, the temperature at the discharge of each intercooler must be controlled to maintain the process stream above the hydrate formation temperature. Hydrate formation in the compression system would plug orifices and valves and damage the rotating equipment. The hydrate formation temperature of the CO2-H2O system is very sensitive to pressure and requires careful study.
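To illustrate the control scheme this implies, the minimal sketch below sets each intercooler outlet temperature to the hydrate formation temperature at that stage's pressure plus a margin (the 8-10 °F figure cited later in this paper). The hydrate-curve points are hypothetical placeholders, not data from this study; a real design would use a hydrate prediction package or measured equilibrium data.

    import numpy as np

    # Hypothetical CO2 hydrate-curve points (placeholders, NOT study data).
    HYDRATE_P_PSIA = np.array([100.0, 300.0, 500.0, 700.0, 900.0])
    HYDRATE_T_DEGF = np.array([35.0, 45.0, 52.0, 56.0, 58.0])

    def intercooler_outlet_setpoint(p_psia, margin_degf=9.0):
        """Cooler outlet setpoint = hydrate formation temperature + margin."""
        t_hydrate = np.interp(p_psia, HYDRATE_P_PSIA, HYDRATE_T_DEGF)
        return t_hydrate + margin_degf

    for p in (150.0, 450.0, 850.0):
        print(f"{p:5.0f} psia -> outlet setpoint {intercooler_outlet_setpoint(p):4.1f} degF")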
TECHNICAL GRANT STUDY: LNG FACILITY CASE
The design criteria and findings presented in this paper summarize the results of a study on CO2 compression and dehydration for CCS performed under a Bechtel technical grant.¹ The LNG facility chosen for the study incorporates a typical 4.5 mtpa LNG train with 10% CO2 in the feed gas. The destination pressure was set at 3,700 psia.
Detailed system analysis was carried out using in-house simulation and also by interfacing with equipment vendors to reach accurate cost estimates. The key optimization variables for this case were:

• Optimal parameters for each compressor stage to minimize power consumption. Power is the largest operating cost in a typical CO2 compression system. The number of stages is a tradeoff between fixed costs and efficiency (see the sketch after this list).

• Optimal location of the dehydrator. In view of CO2 hydrate formation upstream of dehydration, the compression intercoolers were designed to operate at a temperature about 8-10 °F above the hydrate formation temperature. The high hydrate formation temperature in some cases results in less intercooling in the compressor stages, which leads to lower compressor efficiency. Therefore, locating the dehydrator at a lower pressure helps to increase the overall compressor system efficiency.
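To make the stage-count tradeoff concrete, the following sketch estimates specific compression work with an idealized textbook model: equal pressure ratio per stage, perfect intercooling back to suction temperature, constant gas properties, and a lumped isentropic efficiency. It is illustrative only, not the in-house simulation used in the study, and every numerical input is an assumption.

    def specific_work_kj_per_kg(p_in_kpa, p_out_kpa, t_in_k, n_stages,
                                k=1.28, r=0.1889, eta=0.80):
        """Ideal intercooled compression work for CO2, kJ/kg (assumed k, r, eta)."""
        pr_stage = (p_out_kpa / p_in_kpa) ** (1.0 / n_stages)
        w_stage = (k / (k - 1.0)) * r * t_in_k * (pr_stage ** ((k - 1.0) / k) - 1.0)
        return n_stages * w_stage / eta

    # Assumed conditions: ~26 psia suction, ~1,600 psia discharge, 310 K suction.
    for n in (2, 4, 6, 8):
        print(f"{n} stages: ~{specific_work_kj_per_kg(180.0, 11000.0, 310.0, n):4.0f} kJ/kg")

More stages move the path closer to the isothermal limit, so the computed work falls as the stage count rises while the machine cost rises; that is the tradeoff noted above.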
Compressor System Optimization

Compression Only
Analysis was performed to conceptualize the design of the compression system, which is depicted in Figure 6. The costs for compressors in all cases include the costs for electric motors and control systems, including setup of anti-surge valves for all stages. Fixed-speed electric motors were assumed to be the base-case compressor drivers. Steam turbines, gas turbines, or variable-speed motors can provide more flexibility for the system, but they also entail additional costs and add significant operational complexity. The compression control system involves anti-surge systems, load sharing for multiple trains, suction pressure control using inlet guide vanes or a suction throttling valve, and possibly discharge back-pressure control, depending on startup and transient requirements.
Since the CO2 gas is flowing at low pressure from the acid gas removal unit (AGRU), the first-stage compressor suction pressure is also low. Therefore, the compression system cannot be located too far from the AGRU, since this would cause near-vacuum conditions in the first stage. Turndown is accomplished with the use of inlet guide vanes at the first stage, and the turndown range is from rated flow to about 75% of rated flow. Further turndown would necessitate a bypass loop.
Compression with Supercritical Pumping
In this option, CO2 is compressed above the critical pressure of CO2 and then pumped to the CO2 sequestration pipeline pressure. Even though the CO2 critical pressure is around 1,100 psia, the optimum pump suction pressure was found to be about 1,600 psia to allow the pump to operate at an optimal fluid density. For an LNG plant, this option might be preferred if the sequestration pressure is relatively high (above 3,000 psig), since the energy efficiency of the CO2 pump would be realized without the parasitic load from a refrigerated liquefaction system.
¹ "Design Criteria for CO2 Sequestration Technology and Configuration," a study performed in 2009 by the authors and other team members under the Bechtel Technical Grant Program.
Figure 6. Compression Only (a series of compressor stages with water-cooled intercoolers)
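As a rough illustration of why the pump suction was set well above the critical pressure, the short sketch below compares CO2 density at candidate suction pressures at 100 °F. It assumes the open-source CoolProp property library and is not part of the study's analysis.

    from CoolProp.CoolProp import PropsSI  # assumed dependency: pip install coolprop

    T_K = (100.0 - 32.0) * 5.0 / 9.0 + 273.15  # 100 degF in kelvin

    # CO2 density near and above the critical pressure (~1,070 psia):
    for p_psia in (1100.0, 1600.0, 2000.0):
        rho = PropsSI("D", "P", p_psia * 6894.76, "T", T_K, "CO2")  # kg/m^3
        print(f"{p_psia:6.0f} psia -> {rho:6.1f} kg/m^3")

Density rises steeply just above the critical pressure, so pumping from roughly 1,600 psia lets the machine handle a considerably denser fluid.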
Compression, Refrigeration, and CO2 Liquid Pumping
This option involves compression up to a certain pressure, followed by condensation and then pumping of the CO2 (see Figure 7). The critical component is optimization of the condensing temperature level to minimize the refrigeration and compression costs. Different possibilities were considered to find the optimized condensing temperature and the corresponding suitable refrigerant.

Since an LNG plant already contains a cryogenic section with propane handling facilities, the study considered a two-stage standalone propane refrigeration system to compress, dehydrate, and condense the CO2. Further optimization studies can be conducted to minimize power and capital costs by selecting refrigeration pressure and temperature levels.
CO2 Pumping and Variable Frequency Drives
One of the cost factors that may affect the feasibility of CO2 pumping is whether variable frequency drives (VFDs) are required. Without VFDs, there is less flexibility for turndown control. Some degree of turndown control can be provided by using multiple pumps in parallel. Another option is to use a minimum-flow bypass configuration.

The VFD requirement should be evaluated on a case-by-case basis. If the suction temperature and pressure are expected to vary more than 10%, with a constant discharge pressure required for pipeline or injection, VFDs may be needed.
Dehydration Location Optimization
For the specified design cases, simulation and analysis showed that dehydration at lower pressure (approximately 250-350 psia) is more cost effective for the compression/dehydration system. Therefore, even though the dehydration system has traditionally been designed to operate at pressures above 600 psia, for wet CO2 compression it may be advantageous to locate the dehydration system at lower pressures.
Both glycol dehydration and molecular-sieve-based adsorption dehydration were considered. Adsorption dehydration is preferred when a tight water content specification is desired. All other issues being equal, CO2 dehydration using glycol can meet the specifications at a comparatively lower cost.
Only air coolers were considered for the LNG
case, given the difficulty of providing cooling
water for most LNG facilities, which tend to be
located in remote areas. The aftercooler materials
of construction were selected based on gas
composition and process design conditions.
CASE STUDY: CO2 CAPTURE, COMPRESSION, AND DEHYDRATION FOR A WORLD-SCALE LNG PLANT
Subsequent to the technical grant study, the authors performed a study for the sequestration of CO2 captured from the feed gas of a 12 mtpa LNG plant. The feed gas contained 8-12 volume percent CO2, and the average amount of CO2 for sequestration was 3 million tonnes per annum.
A configuration with CO2 compression only was developed. The CO2 acid gas pressure from the AGRU was 26 psia. The CO2 was compressed to a pipeline pressure of 2,550 psig.
The results of this study confirmed the authors' previous findings that integral gear compressors were the optimal choice for CO2 compression. Four parallel trains of compression with inlet guide vanes were provided to allow for turndown control. The CO2 compressors were driven by fixed-speed electric motors for simplicity of design and operation. With approximately 90 MW of power required for the entire CO2 compression and dehydration system, four General Electric Frame 5E simple-cycle gas turbines were provided to supply the electrical load.
The study also confirmed that a molecular sieve desiccant system is a feasible method for dehydration of the CO2 stream. (However, a triethylene glycol dehydration system should be evaluated as a low-cost alternative.)
Figure 7. Compression with CO2 Refrigeration and Liquid Pumping (compressor stages, a CO2 condenser using refrigerant, and a liquid CO2 pump delivering 3,000-3,200 psia)
The CO2 compression unit was designed with louvers for temperature control on the compressor interstage air coolers, consistent with this study's confirmation of the previous findings that CO2 hydrate would form at temperatures slightly above ambient conditions.
Dense-phase CO2 depressurization must be carefully controlled to prevent solid dry ice formation. At atmospheric pressure, incipient solid formation occurs at around -78.5 °C. A detailed analysis was conducted to study the effect of depressurization at different operating pressures. The results indicate that a heater placed upstream of the depressurization valve can be an effective hedge against formation of solid dry ice during dense-phase CO2 depressurization. The operating temperature and duty of the heater are contingent on several parameters, such as the volume and minimum temperature of the depressurized system.
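The screening logic behind such an analysis can be approximated with an isenthalpic (Joule-Thomson) flash check, as in the minimal sketch below. It again assumes the CoolProp library, uses an assumed source pressure, and is not the detailed analysis performed in the study; CoolProp may raise an error where the flash lands in a solid region its equation of state cannot represent.

    from CoolProp.CoolProp import PropsSI  # assumed dependency

    P_ATM = 101325.0          # Pa
    T_SUBLIMATION_K = 194.65  # about -78.5 degC, incipient solids at 1 atm

    def flash_outlet_temp_k(p_in_pa, t_in_k):
        """Isenthalpic flash of CO2 from p_in to atmospheric pressure."""
        h = PropsSI("H", "P", p_in_pa, "T", t_in_k, "CO2")
        return PropsSI("T", "P", P_ATM, "H", h, "CO2")

    p_in = 2550.0 * 6894.76  # assumed dense-phase source pressure, Pa
    for t_c in (20.0, 60.0, 100.0):  # candidate heater outlet temperatures
        try:
            t_out = flash_outlet_temp_k(p_in, t_c + 273.15)
            verdict = "OK" if t_out > T_SUBLIMATION_K else "dry ice risk"
            print(f"heater outlet {t_c:5.1f} degC -> flash {t_out - 273.15:7.1f} degC ({verdict})")
        except ValueError as err:  # solver failure, e.g., a solid-region state
            print(f"heater outlet {t_c:5.1f} degC -> dry ice risk ({err})")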
Similarly, the discharge side of high-pressure CO2 relief valves is subject to CO2 solids formation. The study recommended that high-pressure CO2 relief valves discharge directly to atmosphere (without tailpipes) to prevent the relief valve discharge piping from becoming plugged with dry ice.
Because of the highly invasive nature of CO2 in dense phase, care must be taken in choosing materials of construction. Supercritical CO2 can render seal elastomers vulnerable to explosive decompression damage, particularly during depressurization. [4]
CONCLUSIONS
The options for raising the CO2 stream to sequestration pressure are straight compression (using a high compression ratio path or a semi-isothermal path with integral gear compressors), compression followed by supercritical pumping, and compression followed by refrigerated liquefaction and subcritical pumping. Integral gear compressors are the optimal choice for CO2 compression because they combine high efficiency (due to the low compression ratio for each stage) with competitive capital cost (since one integral gear machine can have up to eight compression stages).
Most CO2 streams for sequestration contain water, so the design of the compressor interstage coolers upstream of the dehydration section must consider the possibility of CO2 hydrate formation at lower temperatures. In addition, the optimum pressure for CO2 dehydration must reflect a balance between the size and cost of the dehydration unit and the cost of construction materials for the entire CO2 compression/dehydration system.
ACKNOWLEDGMENTS
The authors wish to acknowledge the vital role of fellow technical grant team members Ram Tekumalla, Lee Schmoe, and Dave Messersmith in contributing to the study that provided the foundation for this paper.
REFERENCES
[1] J.J. Moore, PhD; M. Nored; R. Gernentz; and K. Brun, PhD, "Novel Concepts for the Compression of Large Volumes of Carbon Dioxide," Final Technical Progress Report (DOE Award No. DE-FC26-05NT42650), September 28, 2007, access via http://www.osti.gov/bridge/purl.cover.jsp?purl=/918688-U23cWI/.
[2] M. Minotti, "CO2 Compression Capabilities," General Electric "Imagination at Work" presentation, 2009, access via http://www.nist.gov/pml/high_megawatt/upload/3-3-Minotti-Approved.pdf.
[3] Gas Processors Suppliers Association (GPSA) Engineering Data Book, 12th Edition (2004), Vol. II, Section 20, "Dehydration," Figure 20-9, see http://gpsa.gpaglobal.org/databook/.
[4] F. Eldevik, "Safe Pipeline Transmission of CO2," Pipeline and Gas Journal, Vol. 235, No. 11, November 2008, http://www.pipelineandgasjournal.com/safe-pipeline-transmission-co2.
ADDITIONAL READING
The following additional information source was used to develop this paper:
D. Bhattacharya, R. Tekumalla, D. Messersmith, L. Schmoe, and T. Maramba, "The Third C in CCS: Compression," Proceedings of the 2010 AIChE Spring Annual Meeting, San Antonio, TX, March 23, 2010, see http://aiche.confex.com/aiche/s10/webprogram/Paper175860.html.
BIOGRAPHIES
Tomas Maramba is a principal process manager for Bechtel Hydrocarbon Technology Solutions in Houston, Texas. Currently the principal process technologist for gasification and biomass technologies, he leads conceptual and feasibility studies as well as front-end engineering design work.

Lead process engineering and technology roles have been Tom's primary contribution to Bechtel projects and studies. To name two, he led the process group for the Edwardsport integrated gasification combined cycle project and was the responsible engineer for the CO2 capture and sequestration unit for an Australian LNG facility study.

Tom has more than 20 years of process engineering experience with Bechtel in gasification, petroleum refining, and petrochemicals. He has presented papers on gasification and CO2 capture and sequestration at various venues, including the Pittsburgh Coal Conference, the Annual CO2 Capture and Sequestration Conference, and the Canadian Clean Power Coalition. He received a Bechtel Outstanding Technical Paper award as a co-author of "Bechtel Pressure Swing Claus Sulfur Recovery Technology," a paper on an advanced gasification process.

Tom holds a BS in Chemical Engineering from the University of the Philippines, Diliman, Quezon City, Philippines.
Dipanjan Bhattacharya is a process engineering specialist with Bechtel Oil, Gas & Chemicals in Houston, Texas. In his current role as responsible engineer, he conducts dynamic simulation studies for multiple LNG projects. He has more than 13 years' experience in process technology, startup, and process control in refining and gas processing. Dipanjan has published several papers and has also presented at various conferences.

Dipanjan holds a BS from Jadavpur University, India, and an MS from the University of Oklahoma, both in Chemical Engineering.
TECHNOLOGY PAPERS

Power

147  Formed Suction Inlet Design for Mixed Flow Pump Applications
     David Werth, PhD
     Yifan Zheng

153  Climate Change Considerations in the Sustainable Development of Nuclear Power Plants in the United States
     Ping K. Wan
     Alice C. Carson
     Desmond W. Chan, PhD

161  Concentrated Solar Thermal Plants Downstream of the Solar Field: Design/Optimization of the Associated Power Generation Cycle
     Justin Zachary, PhD
     Natasha Jones
     Aslan Golant

Ivanpah
In each of Ivanpah's three facilities, tens of thousands of mirrored heliostats will reflect sunlight to a boiler atop a central tower. Photo shows BrightSource's demonstration plant in the Negev Desert in Israel.
INTRODUCTION
The authors have presented several recent papers regarding the design and performance characteristics of formed suction inlets (FSIs). These papers have discussed formed inlet submergence [1], optimized intake design for formed inlets [2], alternative FSI designs [3], and comparisons of different FSI configurations [4].

Formed inlets have often been used in large flood control applications to replace the typical pump bells found on large vertical turbine pumps. FSIs are used because they are relatively insensitive to high crossflow conditions, eliminate subsurface vortex activity, and may reduce the minimum pump submergence required to minimize surface vortex activity when appropriate surface vortex suppression measures are employed.

Numerous configurations of formed inlets exist, with design guidelines having been published for several of them. The most familiar of these designs was developed by the US Army Corps of Engineers (USACE) and is referred to as the Type 10 FSI. Design guidelines for the minimum size of the inlet were published by the USACE [5] and distributed widely in the 1998 Hydraulic Institute Standards [6]. Figures 1 and 2 show typical Type 10 FSIs.
Issue Date: June 2011
Abstract: Formed suction inlets (FSIs) have been in common use on large vertical pumps in the flood control field for many years. One example is the US Army Corps of Engineers' Type 10 FSI, which has been used extensively throughout the southeastern United States as well as on the Everglades Restoration Project. This
inlet is now gaining popularity in higher head applications such as power plant cooling water systems.
Extensive design development and research of the Type 10 FSI and its performance have been undertaken over
the years. However, nearly all of this research has focused on Type 10 FSIs for flood control applications, which
generally use axial flow pumps. More recently, the Type 10 FSI has been used or considered for a wider variety
of projects, including cooling water applications, which typically use radial or mixed flow pumps. These pumps
normally have smaller throat diameters than the equivalent-capacity axial flow pumps. Because the design of the
FSI is based on throat diameter, there is some question regarding the impact of the resulting smaller FSI design
and corresponding increase in velocity.
In several recent applications of Type 10 FSIs installed on mixed flow pumps, the inlets were designed for larger
pump impeller inlet diameters to increase the size of the FSI and reduce the inlet velocity. Larger-than-normal
reducing cones were then used to transition into the smaller pump throats. This paper presents a comparison of
inlet conditions for a variety of mixed flow and flood control applications that were evaluated during physical
model studies. Several different ratios of inlet eye diameter to flow rate are investigated in an attempt to identify
the ideal ratio for determining an appropriate diameter for FSI design. These results can be used by design
engineers when considering FSI for mixed flow pump applications, and they reduce the likelihood of inlet-related
problems associated with increased inlet velocities.
Keywords: formed suction inlet (FSI), inlet eye diameter, mixed flow pump, throat velocity
FORMED SUCTION INLET DESIGN FOR
MIXED FLOW PUMP APPLICATIONS
David Werth, PhD
[email protected]
Clemson Engineering Hydraulics, Inc.
Yifan Zheng
[email protected]
Figure 1. Typical Type 10 FSI
While the Type 10 FSI is relatively well known, other inlet variations are also in service. Reference [3] presents guidelines for an alternative inlet intended for use under existing pumps with suction bells already installed. However, this paper focuses on the more widely recognized Type 10 FSI.
TYPE 10 DESIGN
The design guidelines published by the USACE present a dimensionless design based on pump throat inlet diameter D, which refers to the diameter of the opening into the bowl at the eye of the impeller. The design ratios were developed through extensive model testing and are intended to prevent flow separation and vortex activity within the inlet, as well as streamline the approach flow so that velocities are uniform at the impeller. A plan view of the published design guidance is shown in Figure 3.
INTENDED TYPE 10 FSI APPLICATIONS
The Type 10 FSI design ratios were developed primarily for flood control applications, which generally use axial flow pumps that have high flow and low head characteristics. Another characteristic of this type of pump is a relatively large impeller eye opening (D diameter) compared to those of the mixed flow pumps typically used in cooling water, water supply, and water distribution applications. For example, a recent flood control project that used 100 ft³/sec (44,880 gal/min) axial flow pumps had a D diameter of approximately 36 inches. A recent cooling water application using 102 ft³/sec (46,000 gal/min) mixed flow pumps had a D diameter of only 25.5 inches. The mixed flow pump had an impeller eye opening only 70% the size of the axial flow pump's opening. This resulted in much higher inlet velocities than those present in an axial flow pump application. Given these higher velocities, the question has been raised as to the point at which the inlet may no longer be effective due to flow separation on the upstream curvature leading into the impeller.
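The velocity comparison above follows directly from continuity (V = Q/A). The minimal sketch below reproduces the two examples and also quantifies the oversized-inlet approach discussed later in this paper; the 10% enlargement is the example used there, not a universal rule.

    import math

    def d_velocity_ftps(flow_cfs, d_inches):
        """Average velocity (ft/sec) at the impeller eye from continuity."""
        area_ft2 = math.pi * (d_inches / 12.0) ** 2 / 4.0
        return flow_cfs / area_ft2

    print(d_velocity_ftps(100.0, 36.0))        # axial flow example, ~14.1 ft/sec
    print(d_velocity_ftps(102.0, 25.5))        # mixed flow example, ~28.8 ft/sec

    # Basing the FSI on a D 10% larger than the pump D cuts the inlet velocity
    # by a factor of 1/1.1**2, i.e., about 17%:
    print(d_velocity_ftps(102.0, 25.5 * 1.1))  # ~23.8 ft/sec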
The authors are aware of a recent instance in which a power contractor used Type 10 FSIs in a mixed flow application. Some of the significant problems present in the overall application were attributed to the FSIs because their removal remediated certain issues. Since becoming aware of this problematic application, the authors have reviewed a significant number of successful Type 10 FSI installations for both axial and mixed flow pumps, with the intention of evaluating the ratio of the inlet flow rate to the D diameter to see if the contractor's application fell within or outside the range of successful applications.
PROJECT EXPERIENCE
Thirteen Type 10 FSI applications were reviewed for a variety of axial and mixed flow pump installations. For each of these projects, a physical hydraulic model of the inlet was studied, making qualitative, quantitative, and visual data available for each application. It was decided that the easiest way to compare the relationship between flow rate and D diameter was to compare the flow rate at the impeller inlet or at the location of the D diameter. Table 1 shows the velocity for each inlet.
ABBREVIATIONS, ACRONYMS, AND TERMS
FSI formed suction inlet
NPSH net positive suction head
USACE US Army Corps of Engineers
Figure 3. Type 10 Design Ratios (plan view; all dimensions are expressed as multiples of the throat diameter D, e.g., 1.06 D, R0.78 D, 1.85 D, and an overall length of 3.30 D)
Figure 2. Type 10 FSIs
Figure 4 is a photo of uniform flow that stays
attached to the pump throat, as shown by the
dye tracers.
Figure 5 shows flow separation in the pump
throat. Although no photos of the separation
observed in Project 1 were available, Figure 5
shows how the dye detaches from the pump
throat when inlet conditions are unstable or too
fast for the transition, which in this photo is at the
pump bell. The separation is shown by the dye
detaching from the side wall and moving toward
the center of the pump column.
Flow separation is a significant problem because
it can result in large pressure and velocity
variations around the impeller, leading to
imbalanced loading, decreased performance,
localized cavitation, and vibration.
Out of the 13 models, flow separation was observed only in Project 1 which, coincidentally, is the only project that has reported operational issues. In an attempt to see if there were any differences in the inlet designs, each inlet's dimensional characteristics were reviewed. Each inlet adhered to the standard Type 10 FSI design guidelines with no variations.
Table 1. Inlet D Velocities

Project | D Diameter Velocity, ft/sec | Comment
1  | 32.7 | Significant flow separation on the upstream curvature into the impeller
2  | 23.2 | Uniform flow and no flow separation
3  | 30.0 | Uniform flow but higher-than-expected head loss
4  | 12.9 | Uniform flow and no flow separation
5  | 15.8 | Uniform flow and no flow separation
6  | 15.0 | Uniform flow and no flow separation
7  | 13.2 | Uniform flow and no flow separation
8  | 17.7 | Uniform flow and no flow separation
9  | 14.0 | Uniform flow and no flow separation
10 | 13.4 | Uniform flow and no flow separation
11 | 13.7 | Uniform flow and no flow separation
12 | 15.6 | Uniform flow and no flow separation
13 | 13.9 | Uniform flow and no flow separation
Figure 4. Uniform Flow with No Separation
Figure 5. Flow Separation in Pump Throat
The flow characteristics of the inlets were compared by plotting the velocity at the impeller eye (D location) for each project, as shown in Figure 6.
As noted in Table 1, most of the applications (Projects 4 to 13) had velocities between 12.9 and 17.7 feet/sec, while only three had a velocity higher than 20.0 feet/sec. Only Project 1 exhibited any flow separation within the inlet. This inlet also had the highest throat velocity, at approximately 32.7 feet/sec. It was also noted that while no flow separation was observed in Project 3 (D velocity of 30.0 feet/sec), there was higher-than-anticipated head loss, which could reduce the net positive suction head (NPSH) available at the pump and increase pumping cost over time. Given the vast number of applications with D velocities less than 20.0 feet/sec and the need to keep the FSI head loss in check, it is reasonable to assert that a D velocity of up to 20.0 feet/sec would be a safe application for an FSI on a mixed flow pump. Higher velocities of up to 30.0 feet/sec (such as in Project 3) may be technically acceptable, but there is only one data point available at this time; additional testing is required to determine if this is consistently applicable for mixed flow applications.
ADDITIONAL CONSIDERATIONS
Additional consideration should be given to the top cone, which transitions from the top of the FSI (diameter of 1.06D) to the pump throat (diameter of D). Several pump companies have started to require that inlet design be based on an FSI D diameter that is larger than the pump D. The authors have experience with inlet designs based on an FSI D value that is 10% larger than the actual pump D. For example, if a pump has a D of 50 inches, then the inlet design may actually be based on an FSI D of 55 inches. The top cone would then either converge at a steeper angle to reduce to the pump D, or maintain the same angle but become longer. This effectively enlarges the inlet by 10% and reduces the velocity through the turn into the impeller.
CONCLUSIONS
A brief review of numerous Type 10 FSI velocities revealed that an intake with known problems had the highest velocity at the entrance to the pump, at 32.7 feet/sec. The intake with the second highest velocity did not exhibit flow separation but did have higher-than-anticipated head loss, suggesting that conditions may be close to deteriorating near 30.0 feet/sec. This suggests that the upper limit of the D velocity may be 30.0 feet/sec; however, additional testing is required.
From the large set of data available, as well as from consideration of FSI hydraulic performance versus the higher costs associated with increased FSI head loss, it is safe to recommend a nominal D velocity of up to 20.0 feet/sec. The intake design of pumps with a higher D velocity may need to be based on a larger FSI D and use a modified reducing cone to transition back to the pump impeller. Additional work is also required to determine the optimum percentage by which D should be increased, which would serve as a design basis in conjunction with using a larger cone reduction to get back to the actual pump D.
Figure 6. Inlet D Velocity for Each Project (D velocity, ft/sec, plotted by project number, annotated with the velocity near which separation starts and the recommended maximum design velocity)
The data presented in this paper provides a valuable design guideline that can be used to show the range of appropriate D velocities for successful FSI application and to identify the point at which further consideration is needed as to the appropriateness of the standard Type 10 FSI design or whether modifications may be necessary.
REFERENCES
[1] D. Werth and Y. Zheng, "Reducing Pump Submergence with Formed Suction Inlets," Proceedings of the 2007 ASCE/EWRI World Environmental and Water Resources Congress, Tampa, FL, May 15-19, 2007, access via http://ascelibrary.org/proceedings/resource/2/ascecp/243/40927?isAuthorized=no.
[2] Y. Zheng and D. Werth, "Optimize Pump Intake Design with Formed Suction Inlets," Proceedings of the 2008 ASCE/EWRI World Environmental and Water Resources Congress, Honolulu, HI, May 12-16, 2008, access via http://ascelibrary.org/proceedings/resource/2/ascecp/316/40976/252_1?isAuthorized=no.
[3] D.E. Werth and D.E. Cheek, "Design Guidelines for Alternative Formed Suction Inlets," Proceedings of the Institution of Civil Engineers, Water Management, London, Vol. 157, Issue 3, September 2004, pp. 151-158, access via http://www.icevirtuallibrary.com/content/issue/wama/157/3.
[4] D. Werth, M. Allen, and B.W. Nicolas, "A Comparison of Formed Suction Inlets for Flood Control Applications," Proceedings of the 2009 ASCE/EWRI World Environmental and Water Resources Congress, Kansas City, MO, May 17-21, 2009, access via http://ascelibrary.org/proceedings/resource/2/ascecp/342/41036?isAuthorized=no.
[5] B.P. Fletcher and R. Oswalt, "Geometry Limitations for the Formed Suction Intake," Engineering Technical Letter No. 1110-2-327, US Army Corps of Engineers, Washington, DC, 1992.
[6] ANSI/HI 9.8, Pump Intake Design Standard, Hydraulic Institute, 1998.
BIOGRAPHIES
David Werth, PhD, is a principal engineer with Clemson Engineering Hydraulics, Inc. (CEH), a specialty hydraulic consulting and physical hydraulic modeling firm in South Carolina. He has 18 years of physical modeling experience and has conducted nearly 400 physical model and laboratory studies of a wide variety of hydraulic structures and systems, including pump intakes, discharge structures, energy dissipation systems, open channel and closed conduit flows, cavitation, valve hydraulics, siphons and siphon breakers, spillways, and other water-related systems. CEH conducts studies for projects in the water, wastewater, power, and flood control industries.

Dave began working in a physical model setting at Utah State University. Later, he was a senior project manager at ENSR International, including in the physical modeling group. In 2000, he left ENSR to join the faculty at Clemson University, where he taught a wide variety of graduate and undergraduate classes in fluid mechanics and hydraulics. There, he also started a pump intake physical modeling program that quickly became very active on both industrial and research projects. In early 2005, Dave (and others) spun the program off to form CEH as a standalone private entity entirely separate from the university. The company quickly expanded into a 20,000-square-foot facility with several acres of outdoor modeling space.

Most recently, Dave has conducted many physical model studies with Bechtel and has also published several papers with Bechtel staff on pump intake and FSI design.

Dave earned his master's degree and PhD in Civil Engineering, with an emphasis on applied hydraulics, from Utah State University in Logan, and holds a BS in Forest Engineering from the State University of New York College of Environmental Science & Forestry (SUNY-ESF) in Syracuse.
Yifan Zheng is a principal engineer in Bechtel's Geotechnical and Hydraulic Engineering Services, with over 20 years of experience in hydraulic and hydrological engineering. He works in Bechtel's Frederick, Maryland, office and supervises a team of hydraulic and hydrology specialists supporting all nuclear, fossil, and renewable energy projects. Yifan's engineering experience includes hydraulic design of various structures such as inland and coastal water intakes, pump stations, discharge outfalls, and energy dissipators; analysis of hydraulic transients in closed conduit systems; design of erosion and sediment control measures; analysis of open channel hydraulics and sediment transport; and thermal plume analysis and cooling pond thermal performance.

Yifan is a member of the American Society of Civil Engineers (ASCE) and was the chairperson of the ASCE's technical committee on hydraulic structures from 2008 to 2010. He is the author or co-author of more than 20 publications in technical magazines and conference proceedings.

Yifan received an MS in Civil Engineering from the University of Minnesota, Twin Cities, and a BS in Hydraulic Engineering from Hohai University, Nanjing, China. He is a registered Professional Civil Engineer in Maryland.
Originally Issued: May 2010
Updated: June 2011
BACKGROUND
Part 52 to Title 10 of the Code of Federal
Regulations (10 CFR 52) provides the
requirements for preparing and submitting
combined (construction/operating) license (COL)
and early site permit (ESP) applications for
nuclear power plants. Within it, 10 CFR 52.79(a)(1)(iii) requires that an ESP applicant identify
the most severe of the natural phenomena
historically reported for the site and surrounding
area and provide sufficient design margin
to take into account the limited accuracy,
quantity, and accumulation time of the historical
data. This regulation contains no quantitative
requirements but requires that margin exist
between the values for various meteorological
characteristics of concern and the corresponding
values relative to the characteristics for the
selected site and vicinity.
As stated in both the Nuclear Energy Institute
and US Department of Energy (DOE) 2008 ESP
Lessons Learned reports [1, 2], the Advisory
Committee on Reactor Safeguards and the US
Nuclear Regulatory Commission (NRC) were
specifically interested in the impacts of climate
change on design-basis hurricanes and design-
basis temperatures as they relate to nuclear
safety and the environment.
Abstract: Nuclear power generation has become more attractive globally due to growing demand for power,
increasing competition for fossil fuels, concern over greenhouse gases and climate change, and desire for energy
independence. Carbon-free nuclear energy supports sustainable development of global electric power generation.
Protecting people and the environment is a prime consideration in designing, constructing, and operating a
nuclear plant. Environmental and safety concerns must be carefully evaluated and addressed. To this end, the
US Nuclear Regulatory Commission (NRC) requires that applicants for early site permits (ESPs) and combined
(construction/operating) licenses (COLs) identify the most severe natural phenomena historically reported for
the site and surrounding area to ensure that the plant is designed with sufficient margin, considering the limited
accuracy, quantity, and accumulation time of the associated data. Because these permits are valid for up to
40 years, the potential impacts of climate change on the severity of natural phenomena as they relate to design
basis, nuclear safety, and environmental impact are of increasing interest.
Although no conclusive evidence or consensus of opinion is available about the long-term effect of human versus
natural causes, the NRC has requested that climate change forecasts be evaluated with respect to the potential
effect of such changes on the most severe natural phenomena. The specific areas of concern include extreme temperatures and precipitation (liquid and frozen), based on 100 years of data around the site; extreme/basic wind speeds, based on the previous 100 years of tropical cyclone data (including hurricanes) in the site vicinity; tornadoes, including frequency and intensity trends and forecasts; drought, including water availability/supply during drought conditions and the drought of record; and stagnation potential, i.e., conditions resulting in restrictive dispersion of greenhouse gas emissions.
This paper examines the challenges and constraints to identifying and developing appropriate design- and
operating-basis site/regional meteorological conditions while accounting for potential climate change when
preparing an ESP or COL. Because no regulatory guidance or quantitative acceptance criteria are currently
available, the methodology used to address climate change in a recently issued ESP is discussed as an example.
Keywords: climate change, design basis, new nuclear power plant, operating basis, permitting, regional meteorological conditions, sustainable development
CLIMATE CHANGE CONSIDERATIONS
IN THE SUSTAINABLE DEVELOPMENT
OF NUCLEAR POWER PLANTS
IN THE UNITED STATES
Ping K. Wan
[email protected]
Alice C. Carson
[email protected]
Desmond W.
Chan, PhD
[email protected]
In general, just analyzing past trends is not
sufficient. Forecasts of climate change need
to be examined.
CLIMATE CHARACTERISTICS OF CONCERN
ESP applicants should ensure that all physical attributes of the site that could affect the design basis of structures, systems, and components important to safety are reflected in the site characteristics and related design parameters acceptable to meet the requirements of 10 CFR 100.20(c)(2) and 100.21(d).

Meteorological conditions identified as (1) site characteristics for ESP applications; (2) design and operating bases for construction permit (CP), operating license (OL), and COL applications; and (3) site parameters for design certification (DC) applications are discussed in NRC Review Standard RS-002, "Processing Applications for Early Site Permits," and in NUREG-0800, "Standard Review Plan for the Review of Safety Analysis Reports for Nuclear Power Plants," Section 2.3.1, "Regional Climatology." [3, 4] This section identifies average and extreme climatic conditions that could affect the design and safe operation of a nuclear power plant. These climate conditions include wind speed, temperature, humidity, and precipitation, as well as the seasonal and annual frequencies of severe weather conditions such as tornadoes and hurricanes, ice and snow accumulation, hail, and lightning.
The regional meteorological conditions that are classified as climate site characteristics for consideration in evaluating the design and operation of a nuclear power plant include:

• The weight of the 100-year return period snowpack and the weight of the 48-hour probable maximum winter precipitation for the site vicinity, for use in determining the weight of snow and ice on the roof of each safety-related structure [5, 6]

• The minimum and maximum dry bulb and wet bulb temperatures for the site and vicinity, used to evaluate the performance of the ultimate heat sink (UHS) with respect to (1) maximum evaporation and drift loss, (2) minimum water cooling, and, if applicable, (3) the potential for water freezing in the UHS water storage facility [7]

• Site-characteristic tornado parameters, including translational speed, rotational speed, and maximum pressure drop and rate of pressure drop [8, 9]

• The 100-year return period 3-second gust wind speed [10]

• Ambient temperature and humidity statistics (e.g., 2% and 1% annual exceedance and 100-year maximum dry bulb temperature and coincident wet bulb temperature; 2% and 1% annual exceedance and 100-year maximum wet bulb temperature [non-coincident]; and 98% and 99% annual exceedance and 100-year minimum dry bulb temperature) for use in establishing heat loads for the design and operation of plant heating, ventilation, and air conditioning (HVAC) systems [11, 12, 13] (a sketch of how such return-period statistics can be estimated follows this list)
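As background on how such exceedance and return-period statistics are typically derived from long records, the sketch below estimates a 1% annual exceedance temperature from hourly data and a 100-year return value from annual maxima using a Gumbel (extreme value type I) fit. The data are synthetic and the method is one common choice, not necessarily that prescribed by the cited standards.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    hourly_db_degf = rng.normal(60.0, 15.0, size=30 * 8760)  # synthetic 30-yr hourly record
    annual_max_degf = rng.gumbel(98.0, 4.0, size=100)        # synthetic 100 annual maxima

    # 1% annual exceedance dry bulb temperature (exceeded ~88 hours per year):
    t_1pct = np.percentile(hourly_db_degf, 99.0)

    # 100-year return maximum from a Gumbel fit to the annual maxima:
    loc, scale = stats.gumbel_r.fit(annual_max_degf)
    t_100yr = stats.gumbel_r.ppf(1.0 - 1.0 / 100.0, loc=loc, scale=scale)

    print(f"1% annual exceedance dry bulb: {t_1pct:.1f} degF")
    print(f"100-year return maximum dry bulb: {t_100yr:.1f} degF")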
CLIMATE CHANGE CONSIDERATIONS IN
DEVELOPING DESIGN- AND OPERATING-BASES
REGIONAL METEOROLOGICAL CONDITIONS
It is understood that climatic conditions change over time and that such changes are cyclical in nature on various time and spatial scales. The timing, magnitude, relative contributions to, and implications of these changes are generally difficult to predict accurately.
ABBREVIATIONS, ACRONYMS, AND TERMS
CFR    Code of Federal Regulations
COL    combined (construction/operating) license
CP     construction permit
DC     design certification
DOE    Department of Energy
ESP    early site permit
HVAC   heating, ventilation, and air conditioning
IPCC   Intergovernmental Panel on Climate Change
NCDC   National Climatic Data Center
NRC    Nuclear Regulatory Commission
NUREG  NRC Regulation
OL     operating license
RS     (NRC) review standard
SNC    Southern Nuclear Operating Company
UHS    ultimate heat sink
VEGP   Vogtle Electric Generating Plant
Thus, in 2007 the NRC staff implemented an approach to the study of climate change impacts that considers current scientific thought, including that of the 2007 Intergovernmental Panel on Climate Change (IPCC) report [14].
The NRC asks applicants to review 100 years of
relevant climate data around the site versus the
historical 30-year period. Also, future applicants
should be prepared to provide data and/or
analyses that adequately address apparent trends
toward increased frequency and intensity of
relevant climate parameters as they relate to
the ESP/COL site.
The specific subject areas of concern related to potential impacts of climate change are:

• Extreme temperature statistics: review 100 years of data around the site versus the historical 30 years of data

• Extreme precipitation (liquid and frozen): review 100 years of data around the site versus the historical 30 years of data

• Extreme wind/basic wind speed: review 100 years of tropical cyclone data around the site (including hurricanes) versus the historical 30 years of data

• Hurricane: review frequency and intensity trends and forecasts

• Drought: review water availability/water supply during drought conditions and frequency/severity of droughts

• Stagnation potential: review conditions that would result in restrictive dispersion of greenhouse gas emissions
POTENTIAL CLIMATE CHANGES CASE STUDY
Although no conclusive evidence or consensus of opinion is available on the long-term climatic changes resulting from human or natural causes, the NRC has requested that climate change forecasts be analyzed for the potential of climatic changes to affect the most severe natural phenomena. During its review of Southern Nuclear Operating Company's (SNC's) ESP application for the Vogtle Electric Generating Plant (VEGP) (Units 3 and 4) [15], the NRC validated the representativeness of the data used to describe severe weather phenomena that may affect the proposed plant during the expected period of operation. As a part of that validation, the NRC considered the long-term environmental changes to the region that might affect the ability of historical data to accurately represent the future site climate characteristics. Although the NRC did acknowledge that there is no consensus regarding the issue of climate change, it also stated that "climate trends identified (specifically those related to temperature and precipitation) should be evaluated for their impact on the description of severe weather phenomena." [16]
Quantified Climate Change Trends
To analyze trends in temperature and rainfall, SNC reviewed normals and standard deviations over a 70-year period for successive 30-year intervals, updated every 10 years, beginning in 1931 (e.g., 1931-1960, 1941-1970) through the most recent normal period (i.e., 1971-2000), based on the National Climatic Data Center (NCDC) publication Climatography of the United States, No. 85. [12] The report summarizes these observations for the 344 climate divisions in the 48 contiguous states.

A climate division represents a region within a state that is as climatically homogeneous as possible. Division boundaries generally coincide with county boundaries except in the western United States. In Georgia, the VEGP site is located within Climate Division GA-06 (East Central). In South Carolina, Climate Division SC-05 (West Central), whose southern extent includes Aiken County, is nearly adjacent to the VEGP site. Summaries of successive annual temperature and rainfall normals, as well as the composite 70-year average, are provided for these climate divisions in Table 1. [12]
This data identifies that the average temperature increased only slightly (i.e., 0.2 °F to 0.3 °F) over the latest 30-year period and that rainfall, on average, increased 1.5 inches over that same period.
Table 1. Annual Temperature and Rainfall Summaries for Climate Divisions GA-06 and SC-05

Period    | Temperature, °F  | Rainfall, in.
          | GA-06 | SC-05    | GA-06 | SC-05
1931-2000 | 64.3  | 62.2     | 45.60 | 46.99
1931-1960 | 65.0  | 62.9     | 43.42 | 44.88
1941-1970 | 64.3  | 62.3     | 45.35 | 46.46
1951-1980 | 63.8  | 61.8     | 45.95 | 47.53
1961-1990 | 63.6  | 61.6     | 46.61 | 48.46
1971-2000 | 63.9  | 61.8     | 47.06 | 48.36
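For readers unfamiliar with overlapping climate normals, the short sketch below shows how successive 30-year normals, updated every 10 years, are formed from an annual series. The data are synthetic; this is not SNC's analysis or the NCDC computation.

    import numpy as np

    rng = np.random.default_rng(1)
    years = np.arange(1931, 2001)
    # Hypothetical annual mean temperatures (degF) with a slight upward trend:
    annual_t = 64.0 + 0.003 * (years - 1931) + rng.normal(0.0, 0.8, years.size)

    # Successive 30-year normals, updated every 10 years (1931-1960 ... 1971-2000):
    for start in range(1931, 1972, 10):
        window = (years >= start) & (years <= start + 29)
        print(f"{start}-{start + 29}: {annual_t[window].mean():.1f} degF")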
The NRC staff confirmed SNC's findings by reviewing 1-year, 10-year, and 20-year trends in the annual average daily maximum temperatures and annual extreme daily precipitation using data from Waynesboro and Augusta for the period from 1951 to 2004. As a result of this review, the staff concluded that "the trends over 20 years show that annual extreme minimum temperatures have increased 2 °F and average annual precipitation has increased about 1.5 to 2.5 inches over the period of record. All other meteorological parameters showed no discernible signs of climate change." [16]
The following paragraphs, extracted from [16] immediately after this quote, continue the NRC staff's discussion regarding the potential effect of climate change on the viability of the data and the conclusions provided by SNC in its VEGP ESP application:
The Intergovernmental Panel on Climate Change (IPCC) issued its Fourth Assessment Report on Climate Change in February 2007. The staff considered Chapter 11 in Climate Change 2007: The Physical Science Basis, Contribution of Working Group I to the 4th Assessment Report of the Intergovernmental Panel on Climate Change, regarding the southeastern portion of the United States. The IPCC models projecting potential future climate change depend on human activity and land use. To account for this, the IPCC uses different global scenarios as input to the models. Chapter 11 of the IPCC report discusses the following three scenarios:

• (A2) A more divided world with self-reliant, independently operating nations

• (A1B) A more integrated world with an emphasis on all energy sources

• (B1) A world more integrated and ecologically friendly (i.e., less energy consumption and more cooperating nations)
During the 100-year period under the A1B scenario (i.e., 1980-1999 as compared to 2080-2099), the IPCC projection estimates that the proposed VEGP site may see an increase in average annual temperature of 3 °C and an increase in precipitation of 0 to 5 percent. Under the more and less extreme scenarios, increases in annual average temperature may range from 2 °C to 7.5 °C. The projection also shows a general decrease in snow depth as a result of delayed autumn snowfall and earlier spring snow melt.
The staff also analyzed climate-change-induced hurricane trends within 100 nautical miles of the site and found no discernible trends in hurricane frequency or intensity. The Summary for Policymakers based on the February 2007 IPCC report makes the following statement concerning tropical cyclones: "Based on a range of models, it is likely that future tropical cyclones (typhoons and hurricanes) will become more intense, with larger peak wind speeds and more heavy precipitation associated with ongoing increases of tropical sea surface temperatures." (IPCC Sections 3.8, 9.5, and 10.3)
However, the question of whether hurricanes are becoming more destructive because of global warming is a contested issue in the scientific debate over climate change. A number of academic papers have been published either supporting or debunking the idea that warmer temperatures linked to human activity have created more intense storms, and the issue is currently unresolved (Dean; Eilperin; Kerr; Witze).* Based on the current amount of scientific uncertainty regarding this subject, the staff believes the applicant has adequately addressed the issue of hurricanes and provided conservative site characteristics.
The applicant stated that the number of recorded tornado events has increased, in general, since detailed records were routinely kept beginning around 1950. However, some of this increase is attributable to a growing population, greater public awareness and interest, and technological advances in detection. These changes are superimposed on normal year-to-year variations. Consequently, the number of observations recorded within a 2-degree latitude and longitude square centered on the VEGP site reflects these effects. The staff has confirmed and accepts the applicant's statements regarding tornadoes. The Summary for Policymakers based on the February 2007 IPCC report states, "there is insufficient evidence to determine whether trends exist in small scale phenomena such as tornadoes, hail, lightning, and dust storms." (IPCC Sections 3.8 and 5.3) [16]
The NRC staff ultimately drew the following conclusion: "The NRC staff has evaluated the relevant sections of the application, as supplemented by letters dated January 30, 2007, March 26, 2007, and March 30, 2007, pursuant to the acceptance criteria described [in] RS-002, Section 2.3.1 and applicable regulatory requirements of 10 CFR Part 52 and 10 CFR Part 100. The applicant has presented and substantiated information relative to the regional meteorological conditions. The staff has reviewed the information presented by the applicant and concludes that the identification and consideration of the regional and site meteorological characteristics meet the requirements of 10 CFR 52.17(a)(1), 10 CFR 100.20(c), and 10 CFR 100.21(d)." [16]
* Dean, C., "Will Warming Lead to a Rise in Hurricanes?" The New York Times, p. F01, May 29, 2007.
Eilperin, J., "Scientists Disagree on Link Between Storms, Warming," The Washington Post, p. A03, August 20, 2006.
Kerr, R., "A Tempestuous Birth for Hurricane Climatology," Science, Vol. 312, No. 5774, pp. 676-678, May 2006.
Witze, A., "Tempers Flare at Hurricane Meeting," Nature, Vol. 441, No. 7089, p. 11, May 4, 2006.
CONCLUSIONS
10 CFR 52.79(a)(iii) requires that a COL applicant identify the most severe of the natural phenomena that have been historically reported for the site and surrounding area, with sufficient margin to account for the limited accuracy, quantity, and accumulation time of the historical data.

A new approach to the study of climate change impacts that considers current scientific thought was implemented in 2007 by the NRC staff. The areas potentially most affected by this review are the design-basis hurricane and design-basis temperature. Both ESP and COL applicants are now required to review 100 years of data around the site, versus the historical 30-year look back, to ensure that sufficient margin has been incorporated in the design-basis consideration.
For its evaluation of the VEGP site, the NRC staff acknowledges that long-term climatic change resulting from human or natural causes may introduce changes into the most severe natural phenomena reported for the site. However, no conclusive evidence or consensus of opinion is available on the rapidity or nature of such changes. If, in the future, the ESP site is no longer in compliance with the terms and conditions of the ESP (e.g., if new information shows that the climate has changed and that the climatic site characteristics no longer represent extreme weather conditions), the staff may seek to modify the ESP or impose requirements on the site in accordance with the provisions of 10 CFR 52.39, "Finality of Early Site Permit Determinations." [16]
REFERENCES
[1] NEI 08-03, "Lessons Learned from Initial Early Site Permit Experience," Rev. 3, February 2008.
[2] "Report on Lessons Learned from the NP 2010 Early Site Permit Program," Final Report, US DOE, March 2008, http://www.ne.doe.gov/pdfFiles/FinalReportonESPLessonsLearned.pdf.
[3] NRR Review Standard RS-002, ML 040700099, "Processing Applications for Early Site Permits," NRC, Office of Nuclear Reactor Regulation, May 3, 2004.
[4] NUREG-0800, "Standard Review Plan for the Review of Safety Analysis Reports for Nuclear Power Plants" (Section 2.3.1, "Regional Climatology"), Rev. 3, March 2007, http://pbadupws.nrc.gov/docs/ML0707/ML070710496.pdf.
[5] NUREG/CR-1486, "Seasonal Variation of 10-Square-Mile Probable Maximum Precipitation Extremes, United States East of the 105th Meridian," NOAA Hydrometeorological Report No. 53 (Nonproprietary), April 1980, http://www.nws.noaa.gov/oh/hdsc/PMP_documents/HMR53.pdf.
[6] "Interim Staff Guidance on Assessment of Normal and Extreme Winter Precipitation Loads on the Roofs of Seismic Category Structures," Proposed Interim Staff Guidance DC/COL-ISG-07, issued for comment, NRC, November 2008.
[7] USNRC Regulatory Guide 1.27, Rev. 2, "Ultimate Heat Sink for Nuclear Power Plants," January 1976, http://pbadupws.nrc.gov/docs/ML0037/ML003739969.pdf.
[8] USNRC Regulatory Guide 1.76, "Design-Basis Tornado and Tornado Missiles for Nuclear Power Plants," Rev. 1, March 2007, http://www.nrc.gov/reading-rm/doc-collections/reg-guides/power-reactors/rg/01-076/01-076.pdf.
[9] NUREG/CR-4461, "Tornado Climatology of the Contiguous United States," Rev. 2, February 2007, http://www.nrc.gov/reading-rm/doc-collections/nuregs/contract/cr4461/cr4461-r2.pdf.
[10] ASCE Standard ASCE/SEI 7-05, "Minimum Design Loads for Buildings and Other Structures," Revision of ASCE 7-02 (Proprietary), ASCE and SEI, 2006, access via http://cedb.asce.org/cgi/WWWdisplay.cgi?144769.
[11] "Engineering Weather Data," 2000 Interactive Edition, Version 1.0 (CD-ROM: C 55.281/2-2:EN 3/CD), developed by the Air Force Combat Climatology Center and published by the National Climatic Data Center (NCDC), December 1999, see http://library.sc.edu/pubserv/gdcdnoaa.html and http://www.library.fau.edu/depts/govdocs/docslist.htm.
[12] "Climatography of the United States, No. 85, Divisional Normals and Standard Deviations of Temperature, Precipitation, and Heating and Cooling Degree Days 1971-2000 (and Previous Normal Periods)" (Section 1, "Temperature," and Section 2, "Precipitation") (Nonproprietary), NCDC, NESDIS, NOAA, June 15, 2002, see http://openlibrary.org/books/OL17627425M/Divisional_normals_and_standard_deviations_of_temperature_precipitation_and_heating_and_cooling_degree_days_1971-2000_(and_previous_normals_periods).
[13] 2005 ASHRAE Handbook: Fundamentals (Chapter 28, "Climatic Design Conditions"), CD-ROM (Proprietary), 2005.
[14] "IPCC Fourth Assessment Report: Climate Change 2007 (AR4)," Intergovernmental Panel on Climate Change (IPCC), 2007, see http://www.ipcc.ch/.
[15] "Vogtle Early Site Permit Application," Rev. 5, Southern Nuclear Operating Company (SNC), December 23, 2008.
[16] NUREG-1923, "Safety Evaluation Report for an Early Site Permit (ESP) at the Vogtle Electric Generating Plant (VEGP) ESP Site" (Sections 2.3.1.3.7, "Climate Change," and 2.3.1.4, "Conclusion"), NRC, Office of New Reactors, July 2009, see http://www.nrc.gov/reading-rm/doc-collections/nuregs/staff/sr1923/sr1923.pdf.
BIOGRAPHIES
Ping Wan has over 35 years
of environmental engineering
experience, specializing in
environmental studies for
permitting, constructing, and
operating power generation
plants, industrial facilities, and
highway projects worldwide.
She is a recognized industry
expert in environmental
engineering and air pollution
meteorology and has performed numerous first-of-
a-kind (FOAK) permit applications for site selection,
facility construction, and facility operation.
In her early years with Bechtel, Ping secured a large
number of prevention of significant deterioration
(PSD) air permits for fossil-fired power plants,
including those that were among the first few PSD
permits issued by the states of Delaware, Maryland,
New Jersey, and Pennsylvania under the New Source
Review, the Clean Air Act Amendments of 1977.
Her work went on to include permitting the first
Clean Air Act Title V, Operating Air Permit, for the
national laboratories at both the Nevada test site and
Idaho Falls site. Ping then led an environment team
that obtained the FOAK construction and operating
certifications for the Boston Central Artery/Third
Harbor Tunnel project (the largest highway project in
the United States). She also developed an innovative
and cost-effective engineering solution to comply
with the most stringent regulatory requirements
worldwide regarding continuous emission monitoring
of the project's vehicular tunnel emissions through
the ventilation systems. Her latest efforts include
acquisition of an ESP (North Anna), preparation
of a COL application for various new reactors at
existing nuclear and greenfield sites in the US, and
intensive involvement in preparing the safety analysis
report and nuclear environmental impact assessment
for the Baraka nuclear power plant in the United
Arab Emirates.
Ping is an active standard committee member for
the American National Standard, ANSI/ANS 2.15
(standard development), 2.21 (standard reviewer), and
3.11 (standard affirmation), regarding atmospheric
dispersion modeling, meteorological criteria for
cooling system design, and meteorological monitoring.
She has been a Bechtel Fellow since 1996.
Ping holds two MSc degrees, one in Physics from
Montana State University in Bozeman and one in
Natural Science from Pacific Lutheran University in
Tacoma, Washington. She received a BSc in Physics
from the Chinese University of Hong Kong (Chung
Chi College). Ping has also done doctorate-level
work in Physics at Montana State University and
in Meteorology at the University of Maryland,
College Park.
Alice C. Carson has worked in
nuclear licensing for 20 years
on a wide range of projects. She
has supported the licensing of
steam generator replacement
and other major modification
projects associated with
operating reactors, supported
the licensing of independent
spent storage facilities, authored
portions of applications for
plant license renewals, led the development of the
license termination plans for two decommissioning
nuclear power plants, and authored portions of
and supported the licensing of multiple new reactor
license applications.
Alice is currently employed in Bechtel Corporation's
NuGen group working on multiple COL and ESP
applications as well as providing support to a
reactor pressure vessel head/steam generator
replacement project.
Alice holds a BNE from The Georgia Institute of
Technology in Atlanta and a BA from Wesleyan
College in Macon, Georgia.
Desmond Chan, PhD, joined
Bechtel Power Corporation in
2009 as senior licensing
engineer and 6 months later
was promoted to manager of
licensing. Now a chief engineer,
he focuses his expertise on
nuclear power generation
and environmental projects,
where his responsibilities
encompass nuclear licensing,
regulatory affairs, environmental engineering, project
oversight, and staff development. Desmond supports
and contributes to both new and operating nuclear
plant work ranging from ESP applications, design
certification, and COL applications to current
licensing basis assessments, safety evaluations and
safety analysis report updates, steam generator
replacements, and extended power uprates.
Over the course of his 31-year career, Desmond has
managed numerous engineering groups (nuclear,
environmental, radiological, health physics,
chemistry, emergency preparedness, and training),
regional offices, field project offices (nuclear power
plants, DOE sites, and Department of Defense bases),
and laboratories (environmental and chemical
demilitarization). In addition, he has been project
director/manager of many major multidisciplinary,
multiyear technical projects involving reactor
safety, radiological engineering, decommissioning
and decontamination, environmental engineering,
installation restoration, low-level radioactive waste
management, remedial investigation, regulatory
compliance, environmental impact analysis, health
and cost benefit analysis, industrial hygiene
assessment, feasibility studies, risk analysis,
computer modeling, environmental permitting,
nuclear licensing, quality control programs, technical
training, and facility audits. Companies benefiting
from his expertise over the years have included
more than 40 nuclear power plants, DOE facilities,
DOD installations, NRC, EPA, and FEMA, as well as
his own company, of which he was managing partner
and executive vice president.
This paper was originally published in the Proceedings of the 18th International Conference on Nuclear Engineering, ICONE18, held May 17–21, 2010, in Xi'an, China. Paper number ICONE18 2933.1, © 2010 ASME; used with permission.

The paper has been edited and reformatted to conform to the style of the Bechtel Technology Journal.
Desmond was chief editor of the eight-volume
Advanced Health Physics for Nuclear Utilities and
has authored/coauthored more than 100 papers,
technical reports, and training programs. He is
currently a member of the Nuclear Energy Institute
Working Group and was a governor-appointed
member of the State of Maryland Radiation
Control Advisory Board, chair of the Nuclear
Utilities Committee on Below-Regulatory-Concern
Radioactive Waste, a member of the Electric
Power Research Institute's Steering Committee on
Below-Regulatory-Concern Radioactive Waste, and
alternate chair of Consolidated Edison Company's
Oversight Committee on Radiation Protection.
Desmond holds a PhD in Computational Nuclear
Physics, an MS in Nuclear Physics, and a BS in
Physics (magna cum laude), all from the University
of Massachusetts in Lowell.
CONCENTRATED SOLAR THERMAL PLANTS DOWNSTREAM OF THE SOLAR FIELD: DESIGN/OPTIMIZATION OF THE ASSOCIATED POWER GENERATION CYCLE

Justin Zachary, PhD, [email protected]
Natasha Jones, [email protected]
Aslan Golant, [email protected]

Issue Date: June 2011

Abstract: While major design efforts are dedicated to developing and improving solar energy collection technologies, the downstream power generation cycle is often considered a straightforward exercise. The diverse nature of the heat sources and their cyclic behavior make the design of the turbo-machinery and associated balance-of-plant equipment quite different from those used in conventional fired power plants. The high capital cost of these renewable facilities and the limited hours of operation are powerful drivers to increase equipment efficiency and reduce startup time.

This paper reviews state-of-the-art hardware selection and design considerations for several solar thermal technologies (tower, trough, and linear Fresnel lens) from an engineering, procurement, and construction (EPC) contractor's perspective. Topics addressed include the benefits and limitations of each method and the impact of flow and temperature on cycle efficiency. The turbine design challenges posed by repeated fast startups and plant size optimization are examined, with a special emphasis placed on heat sink design due to water scarcity. Finally, the authors offer recommendations for achieving a balance between the economics of generation, the cost of equipment, and the reliability of the downstream power generation system.

Keywords: concentrated solar thermal plant (CSP), heat sink, power generation cycle, solar thermal technology, steam cycle, thermal analysis, turbine
INTRODUCTION
In the design and development of concentrated
solar thermal plants (CSPs), major effort is
devoted to improving the solar energy conversion
to steam. Use of sophisticated tracking and
controls, better optics, and improved tube
coatings are just a few of the elements employed by designers of CSP technologies to achieve
their goals. However, the optimization of heat
input to the system is only half of the effort.
The remaining part, processing the heat into electric power using well-known conventional thermodynamic cycles such as Rankine, Brayton, or Stirling, is the subject of this paper. Special
emphasis is given to the main components of a
steam cycle: turbine and heat sink.
The nature of the solar heat source and its cyclic
behavior make the design of turbo-machinery
power generation equipment quite different from
that of steam turbines used in conventional
power plants. The high capital cost of renewable
facilities and the limited hours of operation are
powerful drivers to increase turbo-machinery
efficiency. Proven technology will therefore
be a key advantage in the current project
financing situation.
For high-temperature applications such as the power tower, or for medium-temperature solar trough collector fields, the paper addresses
the unique requirements for performance,
integration, and fast startup of the turbines,
including the effect of various thermal storage
options. Since most of the concentrated thermal
solar applications are in arid regions, heat sink
selection (air-cooled condenser [ACC], hybrid,
Heller tower, etc.) and how it affects plant
design and performance are emphasized in the
discussion. The paper also reviews the state-of-
the-art hardware designs for each application
from an engineering, procurement, and construction (EPC) contractor's perspective.
EXISTING SOLAR THERMAL TECHNOLOGY
CONCEPTS AND EFFECT ON STEAM
PRODUCTION
CSP systems require several components to produce electricity: (1) concentrator,
(2) receiver, (3) storage or transportation system,
and (4) power conversion device. Several types of
technologies are available, including:
Solar tower
Trough
Linear Fresnel lens
CSP technology type determines the options for
interface with a conventional fossil-fired plant.
Table 1 summarizes the types of technology and their thermal outputs.
CYCLE CONFIGURATION
Plant Size
Defining plant size is related to CSP technology
as well as the availability of appropriate steam
turbines and heat sinks. Size can sometimes also
be dictated by local legislation and permitting.
For example, Spain's maximum plant size is set
at 50 MW; no such legal limitations exist in the
United States. Plants capable of generating more
than 200 MW with trough or tower configurations
are being planned. The economy of scale for cost
and performance is expected to yield the most
suitable plant size based on land available for the
solar field, standardization to reduce capital cost,
and increased availability. Note, however, that
only a detailed analysis of each specific location
can provide a definitive answer to the question
of size.
Number of Feedwater Heaters
Increasing the number of feedwater heaters
improves plant efficiency but also increases
the cost. Figure 1 presents a typical plant
configuration including an auxiliary boiler.
Optimal plant size selection is a complex task requiring a detailed analysis of each specific location.
ABBREVIATIONS, ACRONYMS, AND TERMS
ACC air-cooled condenser
Btu British thermal unit
CSP concentrated solar thermal plant
DC direct contact
EPC engineering, procurement, and
construction
HP high pressure
HTF heat transfer fluid
LSB last-stage blade
LP low pressure
O&M operation and maintenance
TTD terminal temperature difference
Table 1. Summary of Concentrated Solar Technologies

  Technology Type       Working Fluid       Maximum Temperature, °C (°F)
  Solar Tower           Steam               545 (1,020)
  Trough                Synthetic Oil HTF   395 (745)
  Linear Fresnel Lens   Steam               270 (520) (or higher)
Figure 1. Typical Plant Configuration (solar field, boiler, feedwater heaters, steam turbine, and ACCs)
Figure 2 depicts the change in steam cycle
efficiency that results from reducing the number
of feedwater heaters. If four heaters are used
instead of seven, cycle efficiency is reduced by
0.8%. Depending on the solar multiplier and
the economics of the plant, shutting down or
throttling heaters could have a positive effect on
the plant output. It is imperative that the steam
turbine is designed in such a way that it can
receive the additional steam available when the
feedwater heaters are out of service.
Reheat Options
The decision to use a reheat cycle versus a
non-reheat cycle is a function of low-pressure
(LP) turbine exhaust moisture levels and the
desired throttle conditions that provide the
optimum plant-efficiency-to-capital-investment
ratio. The renewable technology that is used will
impose restrictions on the throttle and reheat
temperatures. For example, a power tower plant
can have main steam temperatures of around
538 °C (1,000 °F), while a parabolic trough plant using heat transfer fluid (HTF) will be limited to around 371 °C (700 °F).
With throttle temperatures relatively fixed, the
plant cycle design is reduced to two options:
Design with a higher throttle pressure that
provides a higher efficiency but requires a
reheat system to lower exhaust moisture levels
Design with a lower throttle pressure that
requires less initial capital investment, but
suffers from lower efficiency
The paper explores these two options using the
two throttle temperatures stated above. While
such discussion is also pertinent to fossil-fuel-
fired plants, the increased efficiency only equates
to lower capital cost for solar applications.
Condensing steam turbines commonly operate
with saturated steam exhaust conditions.
However, if there is too much moisture, the
turbine blades will suffer from erosion, causing
decreased efficiency and eventually leading to an
earlier than normal overhaul. It is common to see
reheat turbines designed to safely operate with
8% exhaust moisture content, while non-reheat
turbines are allowed to go to 11% moisture. This
analysis uses these values as design constraints.
For those steam turbines of interest to
designers of renewable resource power plants,
isentropic efficiencies can vary from 80% to
90%. Temperature-versus-entropy diagrams can illustrate, via available heat energy, the performance impacts of reheat versus non-reheat cycles. To this end, we arbitrarily choose an 85% isentropic efficiency as the basis for our comparisons. Furthermore, the LP exhaust pressure is kept constant to aid in comparison.
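For a rough feel for these moisture limits, the exhaust state can be estimated from standard steam properties by treating the turbine as a single expansion at the 85% isentropic efficiency adopted here. The sketch below assumes the open-source CoolProp property library is available; the 12.5 MPa throttle and 12 kPa condenser pressures are illustrative assumptions, not the design cases analyzed in this paper.

```python
from CoolProp.CoolProp import PropsSI  # assumes the CoolProp library is installed

def exhaust_moisture(t_throttle_c, p_throttle_pa, p_cond_pa, eta_isen=0.85):
    """Estimate LP exhaust moisture for a single expansion from throttle
    conditions to the condenser at a fixed isentropic efficiency."""
    t_in = t_throttle_c + 273.15
    h_in = PropsSI('H', 'T', t_in, 'P', p_throttle_pa, 'Water')
    s_in = PropsSI('S', 'T', t_in, 'P', p_throttle_pa, 'Water')
    h_out_ideal = PropsSI('H', 'S', s_in, 'P', p_cond_pa, 'Water')  # isentropic exit
    h_out = h_in - eta_isen * (h_in - h_out_ideal)                  # actual exit enthalpy
    x_out = PropsSI('Q', 'H', h_out, 'P', p_cond_pa, 'Water')       # vapor quality
    return 1.0 - x_out                                              # moisture fraction

# Hypothetical non-reheat case: 538 C (1,000 F) throttle, 12 kPa condenser
print(f"Estimated exhaust moisture: {exhaust_moisture(538.0, 12.5e6, 12.0e3):.1%}")
```

A reheat cycle would be modeled as two expansions with resuperheating in between, which is what pulls the exhaust state back toward the 8% moisture limit.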
The number of feedwater heaters used affects steam cycle efficiency.
Figure 2. Cycle Efficiency versus Number of Feedwater Heaters (cycle efficiency drop, %, versus number of heaters)
Figure 3 shows a typical reheat cycle versus a
non-reheat cycle designed at the same
combination of main steam temperature and
pressure. The reheat option has a moisture
content of 8%, while the non-reheat section has
a moisture content of 14.6%. The higher moisture
content in the LP section should be avoided.
Figure 4 compares a reheat cycle with a non-
reheat cycle, both designed with a throttle
temperature of 538 °C (1,000 °F) but with a
lower throttle pressure in the non-reheat case to
maintain an acceptable moisture level.
Reduction in the turbine throttle pressure has
protected the LP turbine blades from erosion
caused by high moisture levels. However, the
amount of recoverable energy has been reduced as
well. This is represented as the area encompassed
by the red, non-reheat line compared to the
area encompassed by the orange, reheat line.
For the cases used in this study, the amount of
heat available for conversion to power is 18%
lower in the non-reheat case. The magnitude of
the reduction in recoverable energy will vary
with the temperature and pressure constraints
imposed by the renewable resource. Figure 5
shows a comparison of a reheat cycle and a non-
reheat cycle designed with a throttle temperature
of 371 C (700 F) and a lower throttle pressure to
maintain an acceptable moisture level.
The ratio of recoverable energy has shifted (by a
magnitude of 9%) in favor of using the non-reheat
cycle by using a lower throttle temperature.
Therefore, the performance advantage that
the reheat cycle had in the 538 °C (1,000 °F) case has been reduced when using a throttle temperature of 371 °C (700 °F). This performance
is summarized in Table 2. This trend indicates
that there is a point where the performance gains
of using a reheat cycle are outweighed by the
additional capital investment required.
One further option is to borrow an idea from
the nuclear power industry, where moisture
removal is added near the final stage of the
LP turbine. Table 2 presents a summary of
this options performance for a 538 C (1,000 F)
throttle temperature case with a single-stage
moisture removal section that has a moisture
removal effectiveness of 40%. As shown in the
table, throttle pressure for this moisture removal
case can be maintained at the reheat level while
moisture is kept at a safe level; however, there
is a 12% reduction in available heat energy.
These performance losses are less than in the
case where throttle pressure was lowered. Note
that the data is qualitative due to the
Reheat or non-reheat: this is the challenge for plant cycle design.
Figure 3. Reheat versus Non-Reheat for Constant Main Steam Conditions (temperature, °F, versus entropy, Btu/lb·°R)

Figure 4. Reheat versus Non-Reheat for Constant Exhaust Conditions

Figure 5. Reheat versus Non-Reheat for Constant Exhaust Conditions with Moisture Reduction
theoretical basis of using this technology (which is derived for much larger applications) and viewing it solely from a thermodynamic point of view. A quantitative analysis would require further investigation by the steam turbine manufacturers.
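The derived rows of Table 2 follow directly from the first two: heat available is heat input minus heat rejected, and the tabulated LP efficiency is heat available divided by heat input. A minimal check of those relationships against the Table 2 values (last-digit differences are rounding):

```python
# (heat input, heat rejected) in Btu/lb, taken from Table 2
cases = {
    "538 C (1,000 F) reheat":                    (1706.2, 942.0),
    "538 C (1,000 F) non-reheat, constant P":    (1544.9, 874.1),
    "538 C (1,000 F) non-reheat, reduced P":     (1542.3, 911.3),
    "538 C (1,000 F) non-reheat, moisture rem.": (1581.9, 911.2),
    "371 C (700 F) reheat":                      (1471.4, 942.0),
    "371 C (700 F) non-reheat, reduced P":       (1392.7, 911.3),
}
for name, (q_in, q_rej) in cases.items():
    q_avail = q_in - q_rej  # "Heat Available" row
    print(f"{name:42s} {q_avail:6.1f} Btu/lb   LP eff = {q_avail / q_in:.1%}")
```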
The choice of whether to add reheat to the cycle
can hold serious consequences for performance
and the initial capital investment required for
construction of the plant. LP turbine exhaust
conditions must be maintained at sufficiently
low moisture levels to ensure long and reliable
operation. The previous cases have shown
performance reduction on the basis of available
energy, but it is important to note that the reheat
option's performance increase comes at the price
of increased heat transfer surface area. It is to
this end that plant efficiency must be evaluated
in addition to plant output.
Either additional initial capital must be invested
to keep turbine throttle pressure up and increase
performance with a reheat cycle, or throttle
pressure can be allowed to fall with
a lower initial capital investment
and lower performance. The
use of moisture removal stages
offers a compromise between
reheat and non-reheat throttle
pressures but requires further
quantitative analysis from turbine
manufacturers. In summation, in
today's renewable technologies
market, it is imperative that
a comprehensive engineering
analysis of various turbo-
machinery options be conducted
to ensure that capital investment
is optimized for the renewable
resource being used.
HEAT SINK CONSIDERATION
I
n this section, three heat sink options are
described. Since heat sink selection is dictated
not only by cycle design, but also by availability
of water, a detailed discussion is needed. More
often than not, solar plants are located in desert
regions where solar availability is desirable but
water is scarce. Therefore, heat sink designs
typically use either dry cooling or some type of
hybrid (combination of wet and dry) technology.
Air-Cooled Condenser
In an ACC (see Figure 6), heat is transferred
from the steam to the air using fin tube bundles.
The ACC tube bundles have a relatively large
tube side cross-section and are usually arranged
in an A-frame configuration, resulting in a high
ratio of heat exchange surface area to plot area.
The tubes are kept cool by the heat that is
conducted across the tube thickness to the finned
outer surface. Air is continuously circulated
over the (dry) outside surface of the tubes.
Heat sink selection is dictated not only by cycle design, but also by availability of water.
Table 2. Summary of Thermal Analysis Results (heat quantities in Btu/lb)

  Case                                              Heat Input   Heat Rejected   Heat Available   LP Efficiency   Exhaust Moisture
  538 °C (1,000 °F) Reheat                          1,706.2      942.0           764.2            44.8%           8.0%
  538 °C (1,000 °F) Non-Reheat, Constant Pressure   1,544.9      874.1           670.8            43.4%           14.6%
  538 °C (1,000 °F) Non-Reheat, Reduced Pressure    1,542.3      911.3           631.1            40.9%           11.0%
  538 °C (1,000 °F) Non-Reheat, Moisture Removal    1,581.9      911.2           670.7            42.4%           11.0%
  371 °C (700 °F) Reheat                            1,471.4      942.0           529.4            36.0%           8.0%
  371 °C (700 °F) Non-Reheat, Reduced Pressure      1,392.7      911.3           481.4            34.6%           11.0%
Figure 6. Air-Cooled Condenser
Heat transfer from this outside surface of the tubes
to the air takes place by forced convection heat
transfer (heating of the air). No evaporation of
water is involved.
Thus, for ACCs, the condenser performance with
regard to turbine exhaust pressure is directly
related to the ambient (dry bulb) air temperature
as well as to the condenser design and operating
conditions. This results in a higher turbine
backpressure for given ambient atmospheric
conditions, with a resultant decrease in turbine
generator output when compared to wet cooling
technologies, whose performance is dictated
by the lower wet bulb temperature. An ACC
eliminates the entire circulating water system,
circulating water pumps, and surface condenser.
Parallel Condensing Wet/Dry System (the GEA PAC System®)
In this type of hybrid system, exhaust steam
from the steam turbine is separated into
two streams. One stream flows into a
water-cooled surface condenser, while the
other is directed to an ACC. Condensate from
the surface condenser and the ACC can be
collected in a common hotwell. Water
consumption is controlled by the distribution
of the heat load between the two condensers.
The parallel condensing wet/dry system (PAC
System) developed by GEA Power Cooling, Inc.
(see Figure 7) should not be confused with a
hybrid plume-abated cooling tower, which is
used primarily to reduce the visible plume from
a wet cooling tower. A hybrid cooling tower has
practical limits to the amount of heat that can
be rejected in the dry section, since the latter
is sized for plume abatement only. With the
PAC System, there is complete flexibility in the
amount of heat rejected in the dry section.
The dry section of the PAC System employs
direct condensation in contrast with other
hybrid systems, which are indirect condensing
systems (i.e., water is cooled through both
the wet and dry sections and then pumped
through a common condenser). As a result, the
dry section of the PAC System can efficiently
reject a substantial amount of heat even on hot
days, thereby reducing peak water usage. During
cooler periods, the amount of heat rejected in
the dry section can be increased up to 100%
if so designed, further reducing the plants
water consumption. An additional benefit of the
PAC System is the reduction of plume. Plume
can be reduced or eliminated entirely when
danger of icing exists simply by shutting off the
wet section.
Heller System
The Heller system (see Figure 8) is an indirect
dry cooling technology that requires a separate
condenser and circulating water pump. The
heat is initially exchanged in a condenser to a
closed water circuit, where the heat is rejected to
A number of heat sink solutions using either dry cooling or hybrid (combination of wet and dry) technology offer optimization tools to designers.
Figure 7. Parallel Condensing System
ambient air utilizing a dry tower with water-to-
air heat exchangers, typically in a natural draft
configuration (although mechanical draft is also
available). The tower may be equipped with a
peak cooling system that, during hot ambient
conditions, sprays water on part of the heat
exchanger bundles for peak shaving purposes.
A direct contact (DC) jet condenser is typically
used, since it is characterized by low terminal
temperature difference (TTD) values, but surface
condensers have been utilized as well. Because
Heller systems are indirect, there is no need for
a large diameter steam duct between the steam
turbine and condenser.
There is no general solution for determining the
most suitable heat sink. As mentioned before,
many considerations should be weighed before
a final decision is made. Capital cost, scarcity of
water, and plant location are some of the many
determining factors. Experienced plant design
firms could also assist in the selection process.
STEAM TURBINE
Requirements
The steam turbine requirements for solar
applications are quite different from those
for conventional steam turbines used in fossil
applications. Equipment suppliers must provide
some important features, including:
Modular design
Capability to accommodate variable high-
pressure (HP) flows and high LP flows
Fast and easy assembly
Robust design for daily startup (low mass
rotors and casings, reduced seal leakages,
etc.)
Fast responding controls
Capability to operate at high backpressure
due to extensive use of ACC for solar
applications
High-quality materials of construction to
support cycling operation
It should be emphasized that steam turbine
startup and warming must be done as quickly
as possible. Designers should consider using a
conventional natural gas-firing system to ensure
that the warmup of the steam lines and turbine
casing is done before sunrise.
In terms of thermal performance, the turbo-
machinery should meet the following
requirements:
High efficiency to reduce solar field size
Low minimum load capability
Convenient steam extraction locations
Flexibility to cope with thermal transients
Proven technology
The main goal of a solar power plant is to
produce as many megawatt-hours per year as
possible. At the beginning and end of the day,
solar radiation is substantially lower. Therefore,
to maximize power production, the turbine
should be capable of operating at extremely
low loads. While a conventional steam turbines
minimum load is about 12% to 15% of the base
Solar applications require a dedicated steam turbine with features quite different from those of a conventional steam turbine used in fossil applications.
Figure 8. Heller System
load, turbo-machinery designers for these special
applications should find innovative solutions
to continuously operate at 3% to 5% of the base
load. This is not a trivial task due to the effects
of low flow on fixed exhaust geometry and high
ventilation losses.
The turbo-machinery for solar applications
should meet the following requirements:
Achieve environmental emissions
requirements
Offer design simplicity
Achieve high availability and reliability
Provide low operation and maintenance
(O&M) cost
Turbine Backpressure
During the steam turbine selection process, it
is important to consider exhaust backpressure
and last-stage blade (LSB) design. As evidenced
by Figure 9, a larger exhaust blade will not
necessarily provide the optimum solution for the
system. In reality, a 26-inch
blade yields a lower power
loss than a 33.5-inch blade as
exhaust pressure increases.
Startup
Another important factor in
selecting the most appropriate
turbine is the startup time. In
the absence of natural gas or
another heat source to facilitate the startup procedure by warming up the lines, valves, and turbine casing, the ability of the turbine to accept steam at lower temperatures becomes a significant consideration.
Figure 10 depicts such a scenario. It can be seen that although the heat input from the solar field reaches substantial levels much faster, the turbine requires almost 40 minutes to produce any power and almost 80 minutes to reach full power.
This behavior has a direct effect
on the number of kilowatt-
hours produced annually.
Efforts should be dedicated to
improving startup time, either by
use of conventional heat sources
or by use of thermal storage.
SUMMARY AND CONCLUSIONS
W
hile a significant effort has been
dedicated to solar field improvements, a
comprehensive understanding of the interaction
between the solar field and the heat energy
conversion system is required in order to develop
a successful project. Selection of the two major
componentsthe turbo-machinery and the heat
sinkmust be coordinated and integrated to
meet the specific site requirements.
The unique requirements of solar power plants
have created specific types of turbo-machinery.
The continuous demand for renewable energy
will lead to the development of more efficient
and reliable equipment.
Optimum equipment selection requires detailed
analysis of site-specific climate conditions,
commercial drivers, and equipment capability
to respond to intermittent heat source behavior.
Selection of experienced power plant design
and construction firms could certainly facilitate
the process.
Optimum equipment selection requires site-specific detailed analysis.
Figure 9. Power Loss versus Steam Turbine Backpressure (power loss, % of design case, versus backpressure, in. Hg, for 26 in. and 33.5 in. blade sizes)

Figure 10. Turbine Startup Curve (heat input and power output, MW, versus time, minutes)
TRADEMARKS
PAC System is a registered trademark of GEA
Power Cooling, Inc.
BIOGRAPHIES
Justin Zachary, PhD, a
Bechtel Fellow and technology
manager for Bechtel Power
Corporation, oversees the
technical assessment of major
equipment used in Bechtel's
power plants worldwide.
He is engaged in a number
of key activities, including
evaluation of integrated gasification combined cycle power island technologies; participation in Bechtel's CO2 capture and sequestration studies; and application of other advanced power
generation technologies, including renewables.
Justin has more than 32 years of experience with
electric power generation technologies, particularly
those involving the thermal design and testing
of gas and steam turbines. He has special expertise
in gas turbine performance, combustion, and
emissions for simple and combined cycle
plants worldwide, and is a widely respected
international specialist in turbo machinery. Before
coming to Bechtel, he designed, engineered,
and tested steam and gas turbine machinery
while employed with Siemens Power Corporation
and General Electric Company. Drawing on his
expertise as one of the foremost specialists
in turbo machinery, he has authored more than
72 technical papers on this and related topics.
He also owns patents in combustion control
and advanced thermodynamic cycles.
Justin is an ASME Fellow and a member of a
number of ASME Performance Test Code committees.
Justin holds a PhD in Thermodynamics and Fluid
Mechanics from Western University in Alberta,
Canada. His MS in Thermal and Fluid Dynamics
is from Tel-Aviv University, and his BS in
Mechanical Engineering is from Technion
Israel Institute of Technology, Haifa, both in Israel.
Natasha Jones is a mechanical
engineering staff specialist in
Bechtel's Power Global Business
Unit, located in Frederick,
Maryland. She has been with
Bechtel for 5 years and is
currently engaged in cooling
system and cooling tower
specification, evaluation, and
performance testing.
Natasha has co-authored and presented papers
at regional and national conferences and presently
serves on several Cooling Technology Institute
technical committees.
Natasha has a BS in Chemical Engineering from
Colorado State University, Fort Collins.
Aslan Golant is a mechanical
engineer in Bechtel's Power
Global Business Unit, located
in Frederick, Maryland. During
his 4 years with Bechtel, Aslan's
time has been split between
the Project Development Group
cycle analysis team and field
support for the Elm Road
project. On the cycle analysis
team, he analyzes thermodynamic cycles and
builds models to provide power plant
performance predictions and optimization. Earlier,
at the Elm Road jobsite (two supercritical coal
plants in Oak Creek, Wisconsin), he provided senior
engineering support and operational experience to
assist the project in obtaining substantial completion.
Aslan spent 5 years in the US Navy supervising
operation and maintenance of his ship's main
propulsion gas turbine engines and ancillary
equipment. After leaving the Navy, he moved into
the industrial field as a quality test technician
conducting first article static and dynamic testing of
gas turbine engines. During these earlier phases of
his career, Aslan gained 7 years of collective
experience in power plant design, operation, and
maintenance requirements.
Following the field experience described previously,
Aslan enrolled in the University of Illinois at
Chicago, graduating Summa Cum Laude with a
BS in Mechanical Engineering before being recruited
by the Bechtel team.
Systems & Infrastructure
Technology Papers

173  Probabilistic Site Seismic Response Analysis
     Nan Deng, PhD, and Farhang Ostadan, PhD

181  Computation-Based Engineering of Multiphase Processes Using Computational Fluid Dynamics
     Christopher Kennedy, PhD; Philip Diwakar; Leonard J. Peltier, PhD; and Brigette Rosendall, PhD

193  Water Security
     Angelos N. Findikakis, PhD

LLNS Base Contract
The National Ignition Facility's target chamber is assembled from 4-inch- (10-centimeter-) thick aluminum panels. Photo credit is given to Lawrence Livermore National Security, LLC, Lawrence Livermore National Laboratory, and the Department of Energy, under whose auspices this work was performed.
PROBABILISTIC SITE SEISMIC RESPONSE ANALYSIS

Nan Deng, PhD, [email protected]
Farhang Ostadan, PhD, [email protected]

Issue Date: June 2011

Abstract: Determination of seismic site response is the important first step in any earthquake-related engineering study. The most popular approach in current practice, adopted first in the SHAKE computer program in 1972, assumes that the ground profile consists of an assembly of horizontal soil/rock layers with different material properties. This approach requires selecting an acceleration time history as the representative motion for the design earthquake, usually through a spectral matching process from a recorded seed motion. Due to uncertainties at many steps of this procedure, the results of a site response analysis from a single time history record may scatter significantly. To get statistically stable results for a practical site such as a nuclear facility, it may be necessary to perform the same site response analysis many times using 30 to 60 different time histories. Thus, the procedure becomes cumbersome and very time consuming.

An alternative procedure is proposed in this paper to overcome the shortcomings. In this new procedure, the original SHAKE framework of site response analysis is preserved. However, instead of using an acceleration time history as the seismic input, a design spectrum is used as the input motion directly. Power spectrum densities (PSDs) are calculated in each step of the procedure. Extreme values of stress, strain, acceleration, and response spectra are derived directly from the PSDs based on relationships obtained from random vibration theory (RVT). The results represent statistical means of the quantities of interest from all possible input time histories fitting the same design spectra. This alternative procedure is coded in a new program, P-SHAKE. Numerical examples included in the paper demonstrate the compatibility of P-SHAKE results with the results of conventional SHAKE runs, and the efficiency and ease of this new procedure in generating statistically meaningful and stable results. This new approach has been used successfully in the site response analysis work for several large-scale Bechtel projects.

Keywords: P-SHAKE, random vibration theory (RVT), site seismic response analysis
INTRODUCTION
In current engineering practice, seismic
design motions at the sites of most critical
structures are developed by first generating the
rock motions using probabilistic seismic hazard
analysis (PSHA) and then by analyzing site
response through the soil column to incorporate
local soil effects. The most popular approach
used in site response analysis, adopted first
in SHAKE [1, 2] and its linear and nonlinear
variations, assumes that the ground profile
consists of an assembly of horizontal soil/
rock layers with different material properties.
One acceleration time history, which usually
starts with a recorded motion as the seed and
is modified to fit a given design response
spectrum, is specified at a certain elevation,
and the responses of the soil profiles, including
stresses, strains, maximum accelerations,
response spectra, etc., are computed. Soil
nonlinearity can be considered through
either equivalent linear or nonlinear numerical
iterative procedures. Figure 1 shows the
approach schematically.
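In code form, the equivalent linear option amounts to a fixed-point iteration on layer properties: solve the linear layered-column problem, convert each layer's peak strain to an effective strain, look up strain-compatible modulus and damping, and repeat until the properties stop changing. The sketch below shows only that outer loop; `solve_strains` stands in for the full wave-propagation solution, `curves` for the project-specific degradation curves, and the 0.65 effective-strain ratio is a commonly used default rather than a universal constant.

```python
import numpy as np

def equivalent_linear(g0, curves, solve_strains,
                      strain_ratio=0.65, tol=0.02, max_iter=15):
    """Iterate layer properties to strain compatibility (SHAKE-style).

    g0                   small-strain shear modulus per layer
    curves(strain)       -> (G/G0 ratio, damping ratio) per layer
    solve_strains(G, d)  -> peak shear strain per layer from the
                            linear wave-propagation solution
    """
    g = np.asarray(g0, dtype=float)
    damping = np.full(g.shape, 0.01)           # initial damping guess
    for _ in range(max_iter):
        gamma_max = solve_strains(g, damping)  # linear solution
        gamma_eff = strain_ratio * gamma_max   # effective strain
        ratio, damping = curves(gamma_eff)     # strain-compatible properties
        g_new = np.asarray(g0) * ratio
        if np.max(np.abs(g_new - g) / g) < tol:
            return g_new, damping, gamma_max   # converged
        g = g_new
    return g, damping, gamma_max               # return last iterate
```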
Uncertainties arise in using the above procedure
on practical engineering projects. One major
uncertainty is the appropriate selection of the
input time history. It is well known that two
acceleration time histories may fit the same design
response spectrum but differ in other important
characteristics such as velocity, displacement,
Arias intensity, and power spectrum density
(PSD). This is mainly due to the fact that the
phasing and energy characteristics of the time
history play significant roles in soil column
responses, particularly for site conditions where
soil nonlinearity becomes important (e.g., see [3]).
The most commonly used approach to reducing
this uncertainty is to (1) generate multiple time
histories, all fitted to the same design spectrum
and orthogonal to each other but originated
from different earthquake recordings; (2) run
the site response computation many times, each
time using a different time history; and (3) take
statistic measures and bonding values from the
multiple runs for the quantities of interest for the
engineering project. This approach can be very
cumbersome and time consuming, especially
for site response studies of critical structures,
e.g., a nuclear power plant, in which a group of
3060 time histories is usually required to obtain
statistically stable results. Selecting such a large
suite of time histories at sites where only limited
recorded motions are available (e.g., eastern
US) is very challenging and often involves
modifying the motions from other regions to fit
the project site.
This paper presents P-SHAKE, Bechtel's
alternative approach for conducting seismic site
response analysis that eliminates the need to
generate time histories. This approach follows
the SHAKE theoretical framework but uses
formulations based on random vibration theory
(RVT) for input motion and soil column analysis.
This new approach follows three basic steps:
First, the input target rock response spectrum
is converted to a PSD function.
Next, the PSDs of the site soil column
responses are computed based on the
input PSD and the transfer functions of
the soil column. The statistical means of
the maximum shear strains and effective
strains are obtained based on the PSD,
and the process is repeated until strain
compatibility is reached over the entire
soil column.
Finally, the PSDs and the statistical means of
the maximum responses of other required
quantities, such as the acceleration response
spectra and maximum accelerations, are
computed once convergence has been
reached on soil properties.
Figure 2 shows the new approach schematically.
The current SHAKE approach can be very cumbersome and time consuming, especially for site response studies of critical structures, such as a nuclear plant.
ABBREVIATIONS, ACRONYMS, AND TERMS
P-SHAKE Bechtel's alternative approach
for site seismic response analysis
based on RVT and working
within the theoretical
framework of SHAKE
PSD power spectrum density
PSHA probabilistic seismic hazard
analysis
RVT random vibration theory
SHAKE a computer program for
conducting equivalent linear
seismic response analyses of
horizontally layered soil deposits
UHS uniform hazard spectrum
Figure 1. Site Response Analysis: Time History Approach (a spectrum-matched/recorded input time history applied to the soil column; outputs include maximum stress, strain, and acceleration and the surface response spectrum)

Figure 2. Site Response Analysis: Alternative Approach (input motion defined as a PSD function; soil column responses computed as PSDs and statistical maxima)
THEORY
Converting Acceleration Response Spectra to
Power Spectrum Density Functions
It is well known from basic RVT theory (e.g., see [4]) that

$S_d(\omega) = H^2(\omega)\,S_a(\omega)$ ,  (1)

where $S_d(\omega)$ is the relative displacement PSD, $S_a(\omega)$ is the acceleration PSD, and $H(\omega)$ is the transfer function between the displacement response and the absolute acceleration input of a single-degree-of-freedom oscillator with frequency $\omega_o$ and damping $\zeta$:

$H^2(\omega) = \dfrac{1}{\left(\omega_o^2 - \omega^2\right)^2 + 4\zeta^2\omega_o^2\omega^2}$ .  (2)

The mean of the maximum relative displacement response of the oscillator (the definition of a mean relative displacement response spectrum) is given by

$D = p\sqrt{\lambda_0}$ ,  (3)

where $p$ is a peak factor and $\lambda_0$ is the zero moment of the response defined in Equation 6. Following References [5] and [6],

$p = \sqrt{2\ln\nu(0)} + \dfrac{0.5772}{\sqrt{2\ln\nu(0)}}$ ,  (4)

where $\nu(0)$ is the mean zero crossing of the response between 0 and $\tau$ (the strong motion duration of the earthquake), with

$\nu(0) = \dfrac{\tau}{\pi}\sqrt{\dfrac{\lambda_2}{\lambda_0}}$ .  (5)

The moments of the response are defined as

$\lambda_n = \displaystyle\int_0^\infty \omega^n\,S_d(\omega)\,d\omega$ ,  (6)

where $n$ = 0, 1, 2 for the zero ($\lambda_0$), first ($\lambda_1$), and second ($\lambda_2$) moments of the response, respectively.

Following References [7] and [8], $\nu(0)$ is necessarily adjusted with the bandwidth parameter $\delta$, where

$\delta = \sqrt{1 - \dfrac{\lambda_1^2}{\lambda_0\,\lambda_2}}$ .  (7)
In summary, the acceleration PSD function from a given acceleration response spectrum is calculated in the following steps (a code sketch follows the list):

1. Convert the acceleration response spectrum $RS_a(\omega)$ to a relative displacement response spectrum $RS_d(\omega)$.

2. Assume an initial acceleration PSD function $S_{a,0}(\omega)$. Usually a constant value of unity is assumed as the initial value over the frequency range.

3. With the assumed $S_{a,0}(\omega)$ and the relations given above, calculate the mean of the maximum relative displacement response for all of the frequencies defining the response spectrum. This will be a new relative displacement response spectrum $RS_{d,1}(\omega)$.

4. Calculate the ratio $R(\omega) = RS_d(\omega)/RS_{d,1}(\omega)$.

5. Correct the assumed acceleration PSD function $S_{a,0}(\omega)$ by $R^2(\omega)$ to calculate a new acceleration PSD function $S_{a,1}(\omega)$.

6. Iterate steps 3 to 5 until the desired accuracy is reached in the calculation of the displacement response spectrum.
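A minimal NumPy sketch of this iteration is shown below. The `peak_factor` helper implements Equations 4 through 7, with the bandwidth adjustment applied as a simple multiplier on the crossing count; the fixed iteration count, the small numerical guards, and the explicit trapezoidal moment integration are implementation choices for the sketch, not necessarily what P-SHAKE itself does.

```python
import numpy as np

def _trapz(y, x):
    """Trapezoidal integration, kept explicit to avoid NumPy version differences."""
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

def peak_factor(lam0, lam1, lam2, tau):
    """Peak factor p of Equations 4-7 from the response moments."""
    delta = np.sqrt(max(1.0 - lam1**2 / (lam0 * lam2), 1e-12))    # Eq. 7
    nu = max(delta * (tau / np.pi) * np.sqrt(lam2 / lam0), 1.01)  # Eq. 5, adjusted
    two_ln = 2.0 * np.log(nu)                                     # guard keeps log > 0
    return np.sqrt(two_ln) + 0.5772 / np.sqrt(two_ln)             # Eq. 4

def spectrum_to_psd(omega, rs_a, zeta, tau, n_iter=20):
    """Steps 1-6: iterate an acceleration PSD until the RVT-predicted
    displacement spectrum matches the target acceleration response
    spectrum rs_a, given on a strictly positive frequency grid omega (rad/s)."""
    rs_d_target = rs_a / omega**2          # step 1: pseudo-spectral conversion
    psd = np.ones_like(omega)              # step 2: initial PSD of unity
    for _ in range(n_iter):                # step 6: iterate to convergence
        rs_d_pred = np.empty_like(omega)
        for i, w0 in enumerate(omega):     # step 3: one oscillator per frequency
            h2 = 1.0 / ((w0**2 - omega**2)**2
                        + 4.0 * zeta**2 * w0**2 * omega**2)          # Eq. 2
            s_d = h2 * psd                                           # Eq. 1
            lam = [_trapz(omega**n * s_d, omega) for n in range(3)]  # Eq. 6
            rs_d_pred[i] = peak_factor(*lam, tau) * np.sqrt(lam[0])  # Eq. 3
        psd *= (rs_d_target / rs_d_pred)**2  # steps 4 and 5
    return psd
```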
Determining the Mean of Maximum Responses

Using the acceleration PSD $S_a(\omega)$ of the input motion and the transfer function between the input and any desired response, $H_r(\omega)$, which is calculated following the normal SHAKE procedure, the steps to calculate the mean of the maximum response are as follows (a companion code sketch follows the list):

1. Calculate the PSD of the desired response:

   $SR(\omega) = H_r^2(\omega)\,S_a(\omega)$ .  (8)

2. Calculate the moments $\lambda_0$, $\lambda_1$, and $\lambda_2$ of the response:

   $\lambda_n = \displaystyle\int_0^\infty \omega^n\,SR(\omega)\,d\omega$ .  (9)

3. Calculate the peak factor $p$ with these moments as described above.

4. Calculate the mean of the maximum response:

   $M_R = p\sqrt{\lambda_0}$ ,  (10)

   where $p$ is the peak factor for the desired response, following the same procedure outlined in Equations 4 through 7 but with the response PSD from Equation 8.
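With the input PSD in hand, the mean of any maximum response follows from the same peak-factor machinery. A short sketch, reusing `peak_factor` and `_trapz` from the previous listing and taking the squared transfer-function magnitude from the SHAKE-style soil-column solution as given:

```python
def mean_max_response(omega, h_r_sq, psd_a, tau):
    """Equations 8-10: mean of the maximum response, where h_r_sq is
    the squared transfer-function magnitude computed following the
    normal SHAKE procedure for the desired response quantity."""
    sr = h_r_sq * psd_a                                     # Eq. 8
    lam = [_trapz(omega**n * sr, omega) for n in range(3)]  # Eq. 9
    return peak_factor(*lam, tau) * np.sqrt(lam[0])         # Eq. 10
```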
This paper presents P-SHAKE as an alternative approach for conducting seismic site response analyses that eliminates the need to generate time histories.
NUMERICAL EXAMPLE
Bechtel has coded the above procedure in the
computer program P-SHAKE. [9] An earlier
study [10] demonstrated that P-SHAKE results
generally are in very good agreement with
SHAKE results for an individual earthquake
time history. The following numerical example
illustrates compatibility of the P-SHAKE results
with the SHAKE analysis results and the
efficiency of the new approach.
A 1,630-foot-deep soil profile consisting of
various sand, clay, and soft rock layers overlaying
a rock half-space was analyzed. The site shear
wave velocity profile is shown in Figure 3.
A group of strain-degradation curves for
shear moduli and damping ratios is assigned to
various soil layers but is not presented here due
to space limitations.
The uniform hazard spectrum (UHS) at the rock surface outcrop with 10⁻⁵ recurrence probability was developed through PSHA (Figure 4).
Thirty acceleration time histories were selected
as the seeds from historical recordings
around the site and from other earthquakes with
similar geological and seismological conditions.
Figure 5 shows the overall match of the 30 time
histories with the rock UHS; Figure 6 shows a
few of these matched rock time histories.
Figure 3. Shear Wave Velocity Profile of the Example Problem (depth from grade, ft, versus shear wave velocity, ft/sec)

Figure 4. Uniform Hazard Spectrum of Rock Motion (UHS target spectra, 10⁻⁵ recurrence probability)

Figure 5. Target Spectra-Matching of 30 Time Histories

Figure 6. Selected Plots of Spectra-Matched Time Histories (Number in Box Is the Sequential Number)
The P-SHAKE results generally are in very good agreement with SHAKE results for an individual earthquake time history.
Two parallel analyses were performed. The
first one used the SHAKE program, repeated
30 times using the matched time histories as
input motions, one time history at a time. The
second one used the P-SHAKE program, with
the 5%-damped UHS as the input motion. In both
cases, the input motion was specified as outcrop
motion at the top of the rock half-space.
Figure 7 shows the maximum shear strains
developed in the soil profile after convergence
was reached on soil properties. Figure 8 shows
the maximum acceleration profile of the analysis
results. Figure 9 shows the acceleration response
spectra at the ground surface. In all the figures,
the thin gray lines are from the 30 individual
SHAKE analyses, which show, as expected,
large variations from the results of the different
time histories. The thick red line is the average
of all SHAKE analyses, and the thick black line
is the P-SHAKE results. It can be observed quite
Figure 7. Computed Maximum Shear Strain Profile (depth, ft, versus maximum shear strain, %)

Figure 8. Computed Maximum Acceleration Profile (depth, ft, versus maximum acceleration, g)

Figure 9. Acceleration Response Spectra at Ground Surface (acceleration, g, versus frequency, Hz; individual SHAKE results, SHAKE average, and P-SHAKE result)
The SHAKE and P-SHAKE results are in very good to excellent agreement. However, SHAKE requires 30 time histories and 30 analyses for the same profile while P-SHAKE needs only one to achieve essentially the same results.
clearly that the two sets of results are in very
good to excellent agreement. However, SHAKE
requires 30 analyses for the same profile while
P-SHAKE needs only one to achieve essentially
the same results.
It is worth mentioning that we have tested many
cases in numerous different soil profiles with
multiple time histories to demonstrate the
compatibility and close agreement between
SHAKE and P-SHAKE results. These results are
not shown here due to space limitations. The
P-SHAKE program is now widely used for major
Bechtel projects.
CONCLUSIONS
An alternative approach for site seismic
response analysis is presented in this
paper. This approach is based on RVT and
works within the theoretical framework of the
computer program SHAKE. In this approach, the
design input motion is characterized directly by
the design response spectrum, all intermediate
computations are calculated through PSD
and transfer functions, and all responses of
interest are calculated as statistical averages.
The P-SHAKE approach avoids the difficulties
associated with generating multiple spectrum
matching input time histories and is most
suitable to use with a suite of randomized soil
profiles for soil amplification studies.
Numerical examples show that the results
computed by the new approach are in good
agreement in statistical average with the
results computed by SHAKE. Thus, all
practical engineering experiences and
empirical relationships built upon SHAKE
are still applicable. P-SHAKE has been used
successfully in the site response analysis work
for several major nuclear power plant sites
and has been accepted by the US Nuclear
Regulatory Commission.
REFERENCES
[1] P.B. Schnabel, J. Lysmer, and H.B. Seed, SHAKE:
A Computer Program for Earthquake Response
Analysis of Horizontally Layered Sites, Report
No. UCB/EERC-72/12, Earthquake Engineering
Research Center, University of California,
Berkeley, 1972, access via http://nisee.berkeley
.edu/elibrary/Text/21000829.
[2] I.M. Idriss and J.I. Sun, User's Manual for
SHAKE91, University of California, Davis,
November 1992, refer to http://www.ce
.memphis.edu/7137/PDFs/Notes/Shake91.pdf
(August 1992) and/or http://nisee.berkeley.edu/
elibrary/Text/360175 (1993).
[3] F. Ostadan, S. Mamoon, and I. Arango, Effect
of Input Motion Characteristics on Seismic
Ground Responses, Proceedings of the 11th World
Conference on Earthquake Engineering (11WCEE),
Acapulco, Mexico, June 23–28, 1996, http://
www.iitk.ac.in/nicee/wcee/article/11_1924.PDF.
[4] A. Der Kiureghian, Introduction to Random
Processes, Lecture Notes for Short Course on
Structural Reliability: Theory and Applications,
University of California, Berkeley, March 23–25,
1983.
[5] A.G. Davenport, Note on the Distribution of
the Largest Value of a Random Function with
Application to Gust Loading, Institution of
Civil Engineers (ICE) Proceedings, Vol. 28, No. 2,
June 1964, pp. 187–196, access via http://
www.icevirtuallibrary.com/content/issue/
iicep/28/2.
[6] A. Der Kiureghian, Structural Response to
Stationary Excitation, Journal of the Engineering
Mechanics Division, ASCE, Vol. 106, No. 6,
November/December 1980, pp. 1195–1213,
access via http://cedb.asce.org/cgi/
WWWdisplay.cgi?9906.
[7] T. Igusa and A. Der Kiureghian, Dynamic
Analysis of Multiply Tuned and Arbitrarily
Supported Secondary Systems, Report No. UCB/
EERC-83/07, Earthquake Engineering Research
Center, University of California, Berkeley, 1983,
access via http://nisee.berkeley.edu/elibrary/
Text/131000583.
[8] E.H. Vanmarcke, On the Distribution of the
First-Passage Time for Normal Stationary
Random Processes, Journal of Applied Mechanics,
Vol. 42, No. 1, March 1975, pp. 215–220, access via
http://asmedl.aip.org/dbt/dbt.jsp?KEY=JAMCAV
&Volume=42&Issue=1.
[9] User's Manual for P-SHAKE, Version 2.0, Bechtel
National Inc., September 2009.
[10] N. Deng and F. Ostadan, Random Vibration
Theory Based Seismic Site Response Analysis,
Paper No. 04-02-0024, Proceedings of the 14th
World Conference on Earthquake Engineering
(14WCEE), Beijing, China, October 12–17, 2008,
http://www.iitk.ac.in/nicee/wcee/article/
14_04-02-0024.PDF.
BIOGRAPHIES
Nan Deng, PhD, has more
than 20 years of experience in
geotechnical and geotechnical
earthquake engineering and
foundation design. As a senior
engineering specialist, he
has actively participated in
many of Bechtels global civil,
mining, and power projects
and research activities, ranging
from dams to nuclear sites to waste treatment plants.
Nan specializes in all aspects of geotechnical analysis
and design, especially seismic site characterization,
liquefaction, and soil structure interactions.
Nan has published more than 15 technical papers on
topics related to geotechnical earthquake engineering.
He is currently a member of the American Society of
Civil Engineers (ASCE), Geotechnical Division.
P-SHAKE has been used successfully in the site response analysis for several major nuclear power plant sites and has been accepted by the US Nuclear Regulatory Commission.
Nan received his MS and PhD in Civil Engineering
from the University of California, Berkeley, and
his BEng in Hydraulic Engineering from Tsinghua
University of Beijing, China. He is a registered
Professional Engineer in California.
Farhang Ostadan, PhD, a
Bechtel Fellow, has more
than 27 years of experience in
geotechnical and geotechnical
earthquake engineering and
foundation design. As chief
soils engineer for Bechtel, he
has overall responsibility for
this discipline and manages the
efforts of a large and diverse
group of geotechnical specialists in locations across
the US and around the globe. His project oversight
responsibilities range from major transportation
projects to petrochemical, nuclear, and power- and
energy-related projects.
Farhang has published more than 30 technical
papers on topics related to geotechnical earthquake
engineering. He co-developed a method for dynamic
soil structure interaction analysis currently in use by
the industry worldwide. Farhang is a frequent lecturer
at universities and research organizations.
Farhang is a Fellow of the American Society of Civil
Engineers (ASCE), Geotechnical Division; is a member
of the Earthquake Engineering Research Institute
(EERI); and was recently elected chairman of the
international conference on Structural Mechanics in
Reactor Technology (SMiRT 22) to be held in San
Francisco in August 2013.
Farhang received a PhD in Civil Engineering from
the University of California, Berkeley; an MS in
Civil Engineering from the University of Michigan,
Ann Arbor; and a BS in Civil Engineering from the
University of Tehran, Iran.
COMPUTATION-BASED ENGINEERING OF MULTIPHASE PROCESSES USING COMPUTATIONAL FLUID DYNAMICS

Christopher Kennedy, PhD, [email protected]
Philip Diwakar, [email protected]
Leonard J. Peltier, PhD, [email protected]
Brigette Rosendall, PhD, [email protected]

Issue Date: June 2011

Abstract: Computer-based engineering tools such as computational fluid dynamics (CFD) provide tremendous insight into flow details that are inaccessible using simpler approaches. Used in conjunction with a well-founded verification and validation (V&V) process, these details enable improved engineering designs with tighter margins and reduced costs. Wherever the potential for high risk/exposure exists and design uncertainty must be reduced, CFD is an enabling technology.

This paper presents three illustrations of how CFD models can be applied to gas-solid, gas-liquid, and solid-liquid multiphase flow problems. Two cases show how CFD can be applied to engineering projects and how validation data is used to add confidence to the model predictions. The third case specifically illustrates how validation is used to improve the process of building a CFD model. Together, these illustrations are indicative of how Bechtel is at the cutting edge of bringing multiphase CFD into mainstream engineering.

Keywords: computational fluid dynamics (CFD), gas-liquid, gas-solid, mixing, modeling, multiphase, numerical, scrubbing, simulation, slurry, solid-liquid, verification and validation (V&V)
INTRODUCTION
The use of computational fluid dynamics (CFD) to model multiphase transport phenomena
is becoming a commonplace engineering
practice. Many processes that Bechtel encounters
in its engineering projects include multiphase
flows. For example, gas-liquid flows are
encountered when a liquid flashes into vapor
behind a valve. Solid-liquid mixing occurs in
tanks when a radioactive slurry is processed
for vitrification. Gas-solid flows involving
particulate injection, mixing, and transport
occur in many mining operations.
Syamlal et al. [1] present an excellent review of the
importance and future of computational science
and the significant role it can play in furthering
technology development. The authors point
out four pathways along which computational
science, including CFD, can contribute:
Identifying the most promising concepts
Getting the design right the first time at
any scale
Reducing the amount of building and
testing at intermediate scales
Learning and disseminating information
when deploying a technology from the
first plant to subsequent plants
The broad range of examples this review uses to
demonstrate these four pathways is very relevant
to Bechtel projects.
Applying CFD to an engineering problem begins
by choosing a domain of interest and representing
this domain with a computational mesh. It is
upon this mesh that the equations representing
conservation of mass, momentum, and energy
are discretized and solved numerically. Along
with these conservation equations, there may be
additional equations such as those representing
the time-evolution of certain turbulence quantities
or those representing different chemical species
concentrations or material phase quantities.
Associated with each equation being solved
are numerous other mathematical relationships
that capture the specific details of fluid and
particle behavior. Examples of these constitutive
relations and submodels are expressions for
stress tensor, heat flux vector, chemical reaction
rate, momentum exchange between phases,
and production and dissipation of turbulence
quantities. The CFD approach has the benefit
of maximizing the application of first principle
relations and minimizing empirical relations.
Combined with a numerical procedure
that solves all coupled, nonlinear equations
simultaneously, CFD is remarkably general and
broadly applicable.
Using commercially available CFD codes,
today's engineering workstations and clusters
are sufficiently powerful to simulate very
complex flow phenomena, and can often do
so with reasonable turnaround times. Further,
available commercial codes such as ANSYS® FLUENT® and CD-adapco STAR-CCM+®
include the necessary models for the complex
physics present in multiphase problems. Current
customer focus on the enhanced performance
of new and existing technologies, coupled with
tighter industry requirements to meet stricter
codes and standards, makes it inevitable that
verified CFD models with validated solutions in
parameter ranges of interest will be integrated
into the design process.
Verifying CFD models and validating their
solutions are nontrivial tasks. In 2009, the
American Society of Mechanical Engineers
(ASME) released its verification and validation
(V&V) standard for CFD codes (ASME V&V
20: Standard for Verification and Validation
in Computational Fluid Dynamics and Heat
Transfer). Bechtel has already begun a V&V
effort to qualify FLUENT for gas-solid-liquid
mixing systems.
This paper presents three illustrations of
how CFD models can be applied to gas-solid,
gas-liquid, and solid-liquid multiphase flow
problems. Two cases show how CFD can be
applied to Bechtel projects and how validation
data is used to add confidence to the model
predictions. The third case specifically illustrates
how validation is used to improve the process of
building a CFD model.
MULTIPHASE FLOW MODELING STRATEGIES
Bechtel National, Inc.'s Advanced Simulation and Analysis group primarily uses two commercial solvers for multiphase simulations: the FLUENT and STAR-CCM+ software
programs. These industry-leading CFD packages
offer several different modeling approaches to
multiphase flow. No one approach is optimal for
all situations. The user must review the solution
strategies and, based on the particular problem
being confronted and the questions being asked,
select the one that makes the most sense. A brief
overview of the modeling strategies follows.
Eulerian-Lagrangian Approaches
When a multiphase system consists mainly of
a single continuous phase carrying a relatively
small volume of discrete particles, droplets,
or bubbles, a Eulerian-Lagrangian model can
be used. The carrier phase is modeled from
a Eulerian point of view, solving the mass,
momentum, and energy transport equations on
a mesh. Particle trajectories are modeled with
ballistic equations that use properties extracted
from the carrier phase solution to compute the
local forces acting on the transported particles.
By randomizing the injection location on the
particle inflow boundary and adding a stochastic
component to emulate turbulent diffusion, a large
number of unique particle trajectories can be
calculated that are all equally likely. A statistical
assessment of the expected particle distributions
allows particle effects to be impressed onto the
carrier phase (for a two-way coupled solution).
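To make the mechanics of this approach concrete, the following minimal Python sketch tracks one particle through a frozen carrier-phase field, with a Stokes-like drag closure and a random velocity kick standing in for turbulent dispersion. The drag law, field values, and constants are illustrative assumptions for this sketch, not the FLUENT or STAR-CCM+ implementations.

import numpy as np

# Illustrative constants (assumed for this sketch, not from the paper)
RHO_P = 3950.0   # alumina particle density, kg/m^3
D_P = 45e-6      # particle diameter, m
MU_G = 1.8e-5    # gas dynamic viscosity, Pa*s
TAU_P = RHO_P * D_P**2 / (18.0 * MU_G)  # Stokes relaxation time, s

def carrier_velocity(x):
    # Frozen, uniform upward gas flow; a real solver interpolates the
    # Eulerian solution at the particle location.
    return np.array([0.0, 10.0, 0.0])

def track_particle(x0, v0, dt=1.0e-4, steps=2000, sigma_turb=0.5, seed=0):
    # Integrate dx/dt = v and dv/dt = (u_g - v)/tau_p + g, adding a
    # stochastic velocity fluctuation each step to emulate dispersion.
    rng = np.random.default_rng(seed)
    g = np.array([0.0, -9.81, 0.0])
    x, v = np.array(x0, dtype=float), np.array(v0, dtype=float)
    path = [x.copy()]
    for _ in range(steps):
        u = carrier_velocity(x) + rng.normal(0.0, sigma_turb, 3)
        v += dt * ((u - v) / TAU_P + g)
        x += dt * v
        path.append(x.copy())
    return np.array(path)

# Downward injection into the upward gas stream, as in the scrubber example
trajectory = track_particle(x0=[0.0, 1.0, 0.0], v0=[0.0, -5.0, 0.0])

Repeating the loop over many seeds and injection points produces the ensemble of equally likely trajectories from which the particle statistics are assembled.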
FLUENT and STAR-CCM+ offer Eulerian-
Lagrangian modeling strategies with
extensions that allow relatively high-volume
fractions. Details of these Eulerian-Lagrangian
formulations are documented in the FLUENT
and STAR-CCM+ user manuals.
At Bechtel, Eulerian-Lagrangian models have
been used to simulate alumina injection and
transport in dry scrubbing systems for handling
the off-gas stream from aluminum smelting,
which is the example presented in this paper.
Other examples from the literature where
Eulerian-Lagrangian models have been applied
are cyclone dust separators, vehicle soiling, spray
coating, aerosol dispersion, spray cooling, and
liquid fuel combustion.
Eulerian-Eulerian Approaches
FLUENT and STAR-CCM+ also offer Eulerian-
Eulerian multiphase modeling options. For
example, FLUENT has three Eulerian-Eulerian
strategies for solving multiphase flows with
interpenetrating continua. The first strategy is
the volume-of-fluid (VOF) model, a front-tracking
method for the volume fraction of the immiscible
phase that uses a single momentum equation.
The second strategy, the mixture model, solves
mixture momentum equations and uses relative
velocities to describe the dispersed phases. The
third strategy, the Eulerian model, is the most
detailed of the three, solving continuity and
momentum equations for each phase present.
When particle volume fractions routinely
approach the packing limit, only the Eulerian
model is appropriate. Details of these three
strategies may be found in the FLUENT Theory
Guide. [2] The Eulerian-Eulerian approach is used
in the gas-liquid and solid-liquid simulations
presented in this paper.
GAS-SOLID FLOWS
Introduction
Gas-solid flows are common in Bechtel projects.
Recent simulations have modeled the incursion
of sand into buildings, the transport of flyash
in power plant ducts, and the suspension of
alumina within dry scrubbing units at aluminum
smelting facilities. Although different in scope,
these applications engage the same underlying
mathematical models.
In the gas-solid simulation example presented
here, a Eulerian-Lagrangian multiphase model
is applied to a dry scrubbing unit for a notional
aluminum smelting facility. This application
demonstrates how computation can enable a
system to be optimized for performance and
risk mitigation before equipment choices are
narrowed and purchases made. Performance
metrics are:
• Pressure losses across the stages of the unit (limiting the number of scrubbers that can be attached to an exhaust manifold with a fixed available vacuum capacity)
• Development of a well-mixed region of alumina and smelter off-gases for multiple alumina recycle rates (ensuring optimum toxin adsorption for removal)
• Distribution of alumina by particle size in the hopper (limiting scale formation on the bag filters and affecting bag filter pulse rate)
Each metric affects the production capacity of the
smelting facility and thus the bottom line.
The Mathematical Model
The Eulerian-Lagrangian multiphase model
solves the set of Reynolds-averaged equations
for mass, momentum, and energy conservation
and a two-equation model for turbulence for the
carrier phase. If an option for two-way coupling
with transported solids is selected, a drag model
for interphase momentum (and energy) transfer
is enabled.
Individual particle trajectories are computed
using a ballistic equation coupled with the carrier
phase fields for the expected local flow conditions.
If a model option for turbulence dispersion is
selected, a stochastic submodel is implemented
to jitter the particle direction. This submodel
depends on the integral time, determined from
the carrier phase turbulence model parameters.
If an option for two-way coupling is selected,
particle statistics are computed to facilitate
momentum interchange with the carrier phase.
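For reference, a representative form of this ballistic equation, written here with a simple Stokes-like drag closure as an assumption (the solvers' theory guides give the general drag laws), is

$$\frac{d\mathbf{v}_p}{dt} = \frac{\left(\mathbf{u}_g + \mathbf{u}'\right) - \mathbf{v}_p}{\tau_p} + \mathbf{g}, \qquad \tau_p = \frac{\rho_p d_p^2}{18\,\mu_g},$$

where v_p is the particle velocity, u_g the local carrier-phase velocity, u' the stochastic fluctuation applied over the integral time, g gravity, and τ_p the particle relaxation time formed from the particle density ρ_p, particle diameter d_p, and gas viscosity μ_g.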
A Notional Dry Scrubbing System
Because the features of specific dry scrubbing
units simulated by Bechtel are client-confidential
and cannot be published, a generic dry scrubbing
unit has been modeled for this paper that has
the salient features of real systems. STAR-CCM+
was used to discretize the flow volume and
provide the multiphase solution. An image of the
notional scrubbing unit with a visualization of
the computational surface mesh is presented in
the left image of Figure 1.
Figure 1. Notional Dry Scrubbing Unit: Surface Mesh (left), Streamlines (middle), and Total Solids Distribution (right); the disk injector is labeled in each view
The dry scrubber consists of an off-gas duct from
a supply manifold from the smelter. This duct
routes toxin-laden gas to a reaction zone, where
virgin alumina mixed with recycled alumina is
injected into the duct. There are many injection
schemes. This injector produces a downward
spray of alumina into the upward flow of gas.
The intent is to have the gas-solid mixture
homogenize across the duct within the reaction
zone to ensure adequate gas-solid contact for
adsorption. The solid-laden flow is then directed
into a hopper where an abrupt expansion reduces
the flow velocity, allowing larger particles to
settle out for removal. Smaller particles remain
suspended to follow the flow and deposit on
the bag filters, providing a final opportunity
for toxin adsorption. The bag filters are pulsed
periodically to remove deactivated solids and
scale, allowing fresh material to deposit. The
cleansed gas stream passes through the bag
filters and leaves the scrubber for venting to
the environment.
The computational mesh shown in Figure 1
uses a generalized polyhedral element meshing
approach. The mesh was generated using the
STAR-CCM+ development suite. Approximately
400,000 polyhedral cells resolve the flow and
the viscous boundary layers. The bag filters
were modeled as a porous zone that enforces
the experimentally observed pressure drop. The
turnaround time from submission to converged
solution using 20 processors on a Cray™ CX1™ supercomputer was approximately 5 hours.
Approximately 8 hours of preprocessing was
required to set up the initial case. The first
solution can be turned around in less than 1 day.
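The paper does not report the porous zone coefficients, but the standard way such a zone enforces a measured pressure drop in codes like FLUENT and STAR-CCM+ is a Darcy-Forchheimer relation of the form

$$\Delta p = \left(\frac{\mu}{\alpha} v + C_2 \frac{1}{2} \rho v^2\right) \Delta n,$$

where μ and ρ are the gas viscosity and density, v is the superficial velocity through the filter, Δn is the zone thickness, and the permeability α and inertial coefficient C₂ are fitted to the experimentally observed pressure drop.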
Examples from the flow solution showing
streamlines and total solids distribution are
provided in the middle and right images,
respectively, of Figure 1. The streamlines show
significant swirl in the duct after the elbow and
in the hopper, features difficult to predict with
simpler design codes. The injected solids mix
within the reactor, where high quality contact
between the gas and the injected alumina is
desired. CFD enables the injector geometry to
be optimized to improve the solids distribution
and, thus, the quality of the gas-solid contact.
The alumina recycle rate for this computation is
six parts used alumina to one part new alumina,
leading to a relatively high solids loading. The
gas-solid mixture exhausts to a hopper. Without
any turning vanes to redirect the flow, the
alumina-laden jet impinges on the opposite wall.
CFD could be used to optimize a set of turning
vanes to improve the distribution of alumina in
the hopper.
Figure 2 shows alumina distributions by particle
size. The 20 µm particles disperse throughout
the hopper. The larger particles deflect upward
to impinge on the filter bags. This notional
configuration would be expected to form scale
on the filter bags, to have significant wear on the
filter bags, and to exhibit poor performance.
Benchmarking
There is only limited CFD model benchmarking
for dry scrubber simulations. Those comparisons
show that CFD can accurately predict the total
pressure drop across the dry scrubbing unit and
the velocity distribution across the duct.
GAS-LIQUID FLOWS
Introduction
An example of a gas-liquid flow may be
considered by picturing a mischievous child
with a glass of water containing a drinking
straw. At some instant, the curious child takes a
deep breath and blows as hard as possible into
the straw. Initially, the water inside the straw
accelerates until all water is expelled from the
straw into the glass. Subsequently, air is released
into the glass. Bubbles form rapidly and rise
toward the free surface.
Moving from the kitchen to a large industrial
production facility, consider a large liquid-
filled process vessel containing a small pipe
used to remove the liquid from the vessel. Due
to the vertical extent of the pipe, any gases
generated or released within the liquid tend to
collect at the highest local point within the pipe
network. If these gases are combustible, a small
ignition source could initiate a deflagration.
Because the flame would be propagating within
a confined space, if the gas mixture is detonable,
a sufficiently long gas pocket could provide the
flame enough run-up distance for it to transition
to a detonation.
Figure 2. Alumina Distribution, by Particle Size (20 µm, 45 µm, 75 µm, and 150 µm, plus Total), in the Dry Scrubbing Unit
The gas-liquid flow illustrated here begins with
a gas-phase detonation inside a submerged pipe.
Sitting within a large liquid-filled vessel, the
pipe contains both a detonable gas mixture of
hydrogen and oxygen and the liquid. Figure 3
shows two piping configurations, one fitted with
a vertical pipe and one fitted with a compound,
semi-horizontal pipe. The detonation is
presumed to have initiated on the far side of the
semi-horizontal pipe network where the reactive
gas mixture resides. The detonation propagates
toward the pipe end submerged within the
large vessel, whereupon it collides with the
stationary liquid. Upon colliding with the liquid,
the reflected detonation causes an extremely
large, short-term pressure peak at the gas-liquid
interface that begins to accelerate the liquid into
the vessel.
Subsequent to this peak pressure and a rapid
exponential decay, longer-term sustained
pressures associated with the high temperature
of the combustion products continue to accelerate
the liquid from the pipe. High-velocity liquid
ejected from the pipe acts as an impinging jet
on the vessel bottom. Immediately after the
liquid is ejected, the high-pressure combusted
gases that accelerated the liquid are themselves
vented into the vessel. A gas-phase, radial-wall
jet is momentarily formed at the vessel bottom.
As the velocity of the radially propagating gases decays, buoyancy takes over and causes a brief but
strong bubble plume.
Loads are applied to the piping, supports, and
vessel internals not only by the high pressures
generated by the detonation, but also by the
stagnation pressures associated with high
velocity liquid impacting structures and high
amplitude acoustic waves caused by the gas
motion within the vessel. Additional forces, not
included in this discussion, are generated by
the drag and added mass associated with the
local flow velocities and accelerations adjacent
to internal items such as piping. Given the
potential for peak detonation reflection pressures
of hundreds of atmospheres, the destructive
potential from gas-phase detonation events in
liquid-filled piping networks is enormous.
This example demonstrates the importance
and utility of using multiphase CFD models
to help understand the forces caused by a
potential detonation. The hydrodynamic loading
mechanisms discussed here are of interest as
an aid to piping system and pressure vessel
designers in designing support structures, vessel
internal structures, and vessels that can safely
and adequately sustain the loads produced in
the event of a gas-phase detonation inside the
piping system.
CFD Model
For the example under consideration, rather
than computing the deflagration-to-detonation
transition (DDT) event, the application of the
CFD model begins with a time-dependent static
pressure boundary condition at the gas-liquid
interface within the pipe. The simulation is
accomplished using the VOF models in FLUENT
v6.3, coupled with a user-defined function (UDF)
containing an equation-of-state that permits
sound transmission through liquids. Based on
these configurations, two CFD geometries are
created, as shown in Figure 4.
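The exact equation of state coded in the UDF is not given here; a minimal sketch of one common choice, a linearly compressible liquid that carries acoustic waves at a constant speed of sound c, is

$$\rho(p) = \rho_0 + \frac{p - p_0}{c^2},$$

so that c = \sqrt{dp/d\rho}; for water, ρ₀ ≈ 998 kg/m³ and c ≈ 1,480 m/s at ambient conditions.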
Figure 3. Straight and Bent Routing Configurations (showing the liquid level, dipped lines, gas accumulation locations, and where the DDT and residual pressure are applied)
Figure 4. CFD Grid Used in a Study of a Detonation Aftermath in Straight Routing and Bent Routing
To avoid computing the detonation itself, the
pressure-time-history curve representing the
static pressure seen at the gas-liquid interface
during and subsequent to the detonation
reflection is found for an initial gas pressure of
2.38 atmospheres (atm) absolute and gas pocket
length of 408 inches (34 feet). The static pressure
is prescribed as a very brief period where an
extremely large pressure is felt by the liquid,
followed immediately by an exponential decay
toward the system's deflagration pressure. This
static pressure profile is imposed as a boundary
condition in the CFD model.
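A minimal Python sketch of such a boundary condition follows: a reflected-detonation spike relaxing exponentially toward the sustained deflagration pressure. The peak pressure, decay constant, and deflagration pressure used here are illustrative assumptions only; the study derived its actual curve from the stated 2.38 atm initial pressure and 34-foot gas pocket.

import numpy as np

ATM = 101325.0  # Pa

def interface_pressure(t, p_peak=100.0 * ATM, p_defl=19.0 * ATM, tau=2.0e-4):
    # Static pressure at the gas-liquid interface versus time (s): a brief
    # reflected-detonation spike followed by exponential decay toward the
    # deflagration pressure. All parameter values are illustrative.
    return p_defl + (p_peak - p_defl) * np.exp(-t / tau)

# Tabulated history of the kind a UDF would impose at the boundary
t = np.linspace(0.0, 5.0e-3, 501)
p = interface_pressure(t)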
Liquid Ejection
Based on the specified gas-liquid interface
pressure boundary condition above, the liquid
inside the pipe is rapidly accelerated until all
liquid is ejected from the pipe. Figure 5 shows
both the pipe exit velocity and the stagnation
point pressure seen on the impingement wall, as
derived from the CFD calculations.
Wolfram Research, Inc.'s computational application, Mathematica® (v5.2), is used to
validate the time-dependent pipe exit velocity
and stagnation point pressure results, using an
integration of the one-dimensional momentum
equation in time. The main parameters for
comparison are the pipe exit velocity and the
stagnation point pressure at the bottom of the
pressure vessel. Figure 6 shows both velocity and
stagnation point pressure from the Mathematica
runs as a function of time. It is observed that
there is reasonable agreement between the
Mathematica integration results and the CFD
simulations shown in Figure 5.
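The spirit of that one-dimensional check can be reproduced with any ODE integrator, not only Mathematica. The sketch below accelerates a liquid slug of shrinking length L(t) under an imposed interface pressure; the column length, density, and pressure curve are illustrative assumptions, not the study's actual inputs.

import numpy as np
from scipy.integrate import solve_ivp

ATM = 101325.0   # Pa
RHO_L = 1000.0   # liquid density, kg/m^3 (assumed)
L0 = 5.0         # initial liquid column length in the pipe, m (assumed)

def interface_pressure(t, p_peak=100.0 * ATM, p_defl=19.0 * ATM, tau=2.0e-4):
    # Same illustrative pressure history as in the previous sketch
    return p_defl + (p_peak - p_defl) * np.exp(-t / tau)

def rhs(t, y):
    # y = [x, v]: slug displacement and velocity;
    # rho * L(t) * dv/dt = p_interface(t) - p_ambient, with L(t) = L0 - x
    x, v = y
    return [v, (interface_pressure(t) - ATM) / (RHO_L * (L0 - x))]

def nearly_empty(t, y):
    return (L0 - y[0]) - 0.05  # stop when 5 cm of liquid remains
nearly_empty.terminal = True

sol = solve_ivp(rhs, (0.0, 0.25), [0.0, 0.0], max_step=1.0e-4,
                events=nearly_empty)
v_exit = sol.y[1]                       # compare with the CFD exit velocity
p_stag = ATM + 0.5 * RHO_L * v_exit**2  # Bernoulli stagnation pressure

The resulting velocity and stagnation pressure histories can then be overlaid on the CFD curves, which is essentially the comparison made in Figures 5 and 6.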
Figure 5. Pipe Exit Velocity and Stagnation Point Pressure on Vessel Bottom from CFD (velocity at pipe exit, m/s, and pressure at bottom head, Pa, versus time, seconds)
Figure 6. Solution of One-Dimensional Equations Using Mathematica: Pipe Exit Velocity (top) and Stagnation Point Pressure (bottom)
Gas Ejection
Stagnation Point and Bubble Plume Characterization
The exiting gas from the
nozzle creates a bubble that
rapidly floats to the surface
after the deflagration
event. Upon final ejection
of the gas-liquid interface
across the plane of the pipe
exit, the gas-phase ejection
begins. Due to the high gas
pressures, the gas-phase
initially vents into the vessel
with a choked condition
at the pipe exit. To better
understand the frequency
content of the stagnation
point pressure field, a fast
Fourier transform (FFT) is
performed on the pressure
field at the stagnation
point. The dominant
stagnation point frequency
is observed at about 425 Hz
(Figure 7). The frequency
of the bubble plume is
obtained by performing an
FFT of pressure forces on
an internal object at a later
moment, after the gases
have fully vented from the
pipe. The FFT of the bubble
plume gives a frequency of
about 40 Hz (not shown).
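The frequency identification step itself is straightforward to reproduce. The sketch below applies NumPy's real FFT to a pressure trace and reports the dominant peak; the synthetic 425 Hz signal here merely stands in for the solver's stagnation point monitor, and the sampling rate is an assumption.

import numpy as np

FS = 20000.0  # monitor sampling rate, Hz (assumed)
t = np.arange(0.0, 0.1, 1.0 / FS)
rng = np.random.default_rng(1)
# Synthetic stand-in for the monitored stagnation point pressure, psi
p = 50.0 + 20.0 * np.sin(2.0 * np.pi * 425.0 * t) + rng.normal(0.0, 2.0, t.size)

spectrum = np.abs(np.fft.rfft(p - p.mean()))
freqs = np.fft.rfftfreq(t.size, 1.0 / FS)
print(f"dominant frequency: {freqs[np.argmax(spectrum)]:.0f} Hz")  # ~425 Hz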
To validate these results,
a CFD simulation was
performed on a substantially different vessel
configuration where a large, high-pressure gas
venting event occurred. Because pressure sensor
data was available for this event, CFD results
could be compared to the experimental values.
Omitting details of the event, CFD matched
pressure frequencies quite well but appeared to
over-predict pressure amplitudes by ~20%.
Peak Loading
To get realistic estimates of the net loading to
internal components during this rapid transient
event, the peak pressure gradient magnitude is
considered as well as the peak spatial differences
in pressure. Each can be expressed in both a
spatially and temporally varying manner. The
acoustic waves emanating from the pipe exit in a
series of diminishing pressure wave amplitudes
is seen in Figure 8.
Subsequently, pressure differences on an internal object experience up to Δp_max = 1.5 psi during the bubble plume phase of the event. Acoustic forces applied to internal components within the vessel by the initial 425 Hz gas venting are computed using

F_Acoustic = S_Factor · A_X-Section · L_Thickness · |∇p|_max ,

where S_Factor is a safety factor, A_X-Section is a cross-sectional area of an internal component relative to the incoming disturbance, and L_Thickness is the component's thickness. Values of the maximum pressure gradient are significantly higher inside the venting gas than in the liquid. Once in the liquid, the maximum pressure gradient is seen to decay as approximately r^(-5/3). This implies that this loading mechanism is more relevant to objects at the bottom of the vessel.
Figure 7. FFT of Stagnation Point Pressure Signal (FFT amplitude, psi/Hz, versus frequency, Hz)
Figure 8. Acoustic Pressure Waves in the Vessel Emanating from the Explosion in the Nozzle (pressure, psi, versus time, seconds)
Forces imposed on vessel internals during the 40 Hz bubble plume phase of the event are given by

F_Bubble Plume = S_Factor · A_X-Section · Δp_max ,

where the force is taken to depend on the maximum observed pressure change on the surface of an internal object. These forces are largest in the upper portions of the vessel.
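A back-of-the-envelope evaluation of the two load formulas is shown below; every input value is hypothetical, chosen only to illustrate the bookkeeping, and none comes from the study.

# Hypothetical inputs for illustration only
S_FACTOR = 2.0               # safety factor
A_XSECTION = 0.05            # component cross-section facing the load, m^2
L_THICKNESS = 0.02           # component thickness, m
GRAD_P_MAX = 5.0e7           # peak |grad p| during the acoustic phase, Pa/m
DELTA_P_MAX = 1.5 * 6894.76  # 1.5 psi bubble plume pressure change, Pa

f_acoustic = S_FACTOR * A_XSECTION * L_THICKNESS * GRAD_P_MAX  # N
f_bubble_plume = S_FACTOR * A_XSECTION * DELTA_P_MAX           # N
print(f"acoustic: {f_acoustic:.0f} N, bubble plume: {f_bubble_plume:.0f} N")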
Using this approach, a realistic and
standardized load-time curve was obtained
that was implemented for all pressure vessels
having internal items in finite-element
stress calculations.
SOLID-LIQUID FLOWS
Introduction
Particle-laden liquids are a common occurrence
in nature and in engineering systems. Sediment
transport in rivers and unwanted soil erosion
are important contexts where particle-laden
liquids are of concern. In the chemical process
industries, slurry flows are common. However,
from the point of view of CFD, they have received
considerably less attention than particle-laden gas
flows. Consequently, there are fewer validation
data sets available with which to calibrate CFD
models. Among the many differences between
particle-laden liquid and gas flows is the
significance of freestream turbulence effects on
the ultimate flow behavior. To investigate this,
a stirred vessel is considered where freestream
turbulence levels are extremely high and several
experimental datasets are available.
Stirred Vessel with Solids
This section presents the results of a study
that compares CFD simulations to experiments
for solid-liquid flows in a lightly loaded pitch-
bladed turbine test case described in [3, 4].
Experiment Description
In this study, a 0.48-meter-diameter by
1.44-meter-high cylinder served as the testing
medium for the acquisition of experimental
data. The flow in the experiment was fully
turbulent and was lightly loaded with a value
of 4% average volume fraction. Tests were
was conducted in a laboratory by the Mixing
Research Group in the Chemical Engineering
Department at the University of Bologna in
Italy using 327 m spherical glass beads with
a specific gravity of 2.45. The test vessel,
seen in Figure 9, consisted of four baffles
and incorporated three pitch-bladed turbines
to provide mixing energy to the system.
Water was used for the working fluid in
this experiment.
The experiment was conducted with an impeller
rotational speed of 475 rotations per minute,
which corresponded to the just-suspended
velocity of the particles being studied. This
speed essentially defined the lowest power input
at which the system could operate without solids
settling at the bottom of the vessel. Particle
concentrations were measured using an optical
technique that employed a laser diode coupling
to measure the attenuation in light as it traversed
the vessel cross-section.
CFD Model
A series of parametric studies that varied the
CFD model settings was conducted to determine
which settings in FLUENT v12.1 yielded results
closest to experimental data. To summarize,
it was shown that the virtual mass term had
virtually no impact on the solution and that
choosing between cell- and node-based spatial
discretization of the gradient had minimal effect
on the final concentration profile. Moderate
sensitivity in the overall concentration profile
was observed by changing the turbulence model
Figure 9. CFD Model (0.48 m diameter by 1.44 m high baffled vessel with high-AOA impeller mixing blades on a central hub/rotational axis)
and the spatial derivative operator. The greatest
sensitivity with regard to solution outcome
was dependent on the interphase momentum
transfer treatment. In terms of physical models,
the standard k- turbulence model with
nonequilibrium wall functions yielded results
closest to the experimental data.
Further Studies on Interphase
Momentum Transfer
FLUENT v12.1 offers three choices for the
interphase momentum exchange term in solid-
liquid flows. These are described in [2]. While
each includes a Reynolds number and a particle
volume-fraction dependence on particle drag,
none accounts for particles traversing freestream
turbulence. To include this effect, two corrections
are needed. The first effectively corrects the
particle slip velocity, while the second accounts
for modifications to the boundary layer and
wake of the local particle flow field that are
caused by the freestream turbulence.
Using the text-user interface (TUI) in FLUENT
v12.1, the momentum dispersion force is
invoked to address the particle slip velocity
problem. [4] This term is only appropriate at low
particle volume fractions where the turbulence
is unaffected by the particles. To correct for
modifications to the drag coefficient caused by
freestream turbulence, the Gidaspow drag model
(including an improved stitching function) is
coded into a UDF and supplemented with a
model by Fajner et al. [5] Freestream turbulence
corrections in this model are conditioned on
the ratio of the particle diameter to the local
Kolmogorov length scale of the fluid turbulence.
When the Kolmogorov length scale is larger
than the particle, no correction is made to the
particle drag coefficient. As the Kolmogorov
length scale becomes smaller than the particle,
the drag coefficient is reduced initially. When
the Kolmogorov length scale becomes an order
of magnitude smaller than the particle, the drag
increases (following [5]).
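The qualitative shape of that correction can be sketched as a multiplier on the baseline drag coefficient, conditioned on the ratio of particle diameter to Kolmogorov scale. The piecewise form and its constants below only mimic the trend just described; they are assumptions for illustration, not the Fajner et al. correlation itself.

def drag_multiplier(d_p, eta):
    # Illustrative multiplier on the particle drag coefficient as a
    # function of d_p / eta (particle diameter over Kolmogorov scale).
    # Mimics the trend described in the text, not the published model.
    ratio = d_p / eta
    if ratio <= 1.0:
        return 1.0                              # eta larger: no correction
    if ratio <= 10.0:
        return 1.0 - 0.3 * (ratio - 1.0) / 9.0  # drag initially reduced
    return 0.7 + 0.1 * (ratio - 10.0)           # drag rises again

# Example: 327e-6 m beads with an assumed eta of 50e-6 m -> ratio ~ 6.5
cd_scale = drag_multiplier(327e-6, 50e-6)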
Figure 10 compares particle concentrations in
the stirred vessel measured experimentally
as a function of normalized height to those
computed by FLUENT for several different
model choices. The first step was to test the UDF
implementation of the Gidaspow model relative
to the one internally coded into FLUENT. At
higher particle volume fractions, the effect of the
improved stitching function may be seen. While
it appears that the UDF implementation of the
Gidaspow model matches the one in FLUENT,
the comparisons to the experimental results are
quite poor. This can be seen not only from visual
inspection but also by realizing that the area
under each curve should be the same.

Figure 10. Particle Concentrations (g/L) as a Function of Normalized Vessel Height (x/H) for Experiment and CFD, Depending on the CFD Treatment of Freestream Turbulence in the Interphase Momentum Transfer Term; the CFD variants shown are Base (Gidaspow), Base (Gidaspow)-UDF, Gidaspow & Simonin, Gidaspow & Fajner, and Gidaspow & Simonin & Fajner

Even after
correcting the particle slip velocity or the particle
drag coefficient while using the Gidaspow model,
the comparisons to the experimental results are
still poor, but the respective areas under the
curve are now closer to the experimental values. Finally,
by using the Gidaspow model in conjunction
with both the slip velocity and particle drag
coefficient corrections, the CFD model compares
very well to the experimental data.
CONCLUSIONS
Multiphase processes span many different industry applications. In 2010, Bechtel National, Inc.'s Advanced Simulation and Analysis group used multiphase simulation to service projects for four of Bechtel's five global
business units: Bechtel Systems & Infrastructure,
Inc.; Mining & Metals; Power; and Oil, Gas &
Chemicals. Those projects spanned gas-solid,
gas-liquid, and gas-solid-liquid systems. The
simulations were conducted within project
schedule and budget, showing that state-of-the-
art multiphase flow CFD is a viable engineering
design and forensics tool.
The three examples presented in this paper
represent three types of multiphase flows: solid-
gas, gas-liquid, and solid-liquid. The first example
demonstrates how CFD can be used to learn how
particles of differing size follow different flow
paths in the same geometry (very important to process requirements) and indicates where CFD
could be applied to design improvement. The
second example shows CFDs utility in difficult
environments with numerous unknowns that
would overwhelm many other methods. The
rapid expansion and bubble rise of a gas in a
liquid, where both the gas and the liquid need
to be simulated as compressible flow fields,
is ideally suited for CFD. The third example,
where solids mix in a liquid, demonstrates the
importance of knowing the relevant physics
among the many modeling choices available. The
comparison to experimental data confirms the
knowledge and experience of, and the need for,
the technical specialist.
Computer-based engineering tools such as CFD
provide tremendous insight into flow details
that are inaccessible using simpler approaches.
Used in conjunction with a well-founded
V&V process, these details enable improved
engineering designs with tighter margins and
reduced costs. One company reports a sixfold
return on investment as a result of using CFD
throughout various design phases. [6] Wherever
the potential for high risk/exposure exists
and design uncertainty must be reduced, CFD
is an enabling technology. Bechtel is at the
cutting edge of bringing multiphase CFD into
mainstream engineering.
ACKNOWLEDGMENTS
The authors express their appreciation to
YoChan Kim and Milorad Dordevich for
their assistance with the work presented in this
paper. The authors also thank Andre Bakker of
ANSYS for providing the case file representing
the stirred vessel and Bob Baxter from Bechtel
Mining & Metals, whose expert guidance on dry
scrubber flows made that work meaningful.
TRADEMARKS
ANSYS and FLUENT are registered trademarks
of ANSYS, Inc., and its subsidiaries in the United
States and other countries.
Cray and CX1 are trademarks of Cray Inc.
Mathematica is a registered trademark of
Wolfram Research, Inc.
STAR-CCM+ is a registered trademark of
CD-adapco.
REFERENCES
[1] M. Syamlal, C. Guenther, A. Cugini, W. Ge, W. Wang, N. Yang, and J. Li, "Computational Science: Enabling Technology Development," Chemical Engineering Progress, Vol. 107, No. 1, 2011, pp. 23–29, access via http://www.aiche.org/cep/issues/2011-01/index.aspx.
[2] ANSYS FLUENT 12.0: Theory Guide, April 2009,
http://www1.ansys.com/customer/content/
documentation/120/fluent/flth.pdf.
[3] G. Montante and A. Bakker, "Solid-Liquid Multiphase Flow Validation in Tall Stirred Vessels with Multiple Impeller Systems," FLUENT Inc. Technical Note TN253, 2004, p. 16, http://www.bakker.org/cfm/publications/tn253.pdf.
[4] G. Montante, D. Rondini, A. Bakker, and F. Magelli, "CFD Predictions of Solid Concentration Distributions in a Baffled Stirred Vessel Agitated With Multiple PBT Impellers," Proceedings of the 15th International Congress of Chemical and Process Engineering (CHISA 2002), Prague, Czech Republic, August 25–29, 2002, http://www.bakker.org/cfm/publications/mont2002.pdf.
[5] D. Fajner et al., "Solids Distribution and Rising Velocity of Buoyant Solid Particles in a Vessel Stirred With Multiple Impellers," Chemical Engineering Science, Vol. 63, No. 24, December 2008, pp. 5876–5882, access via http://www.sciencedirect.com/science/journal/00092509 (doi:10.1016/j.ces.2008.08.033).
[6] D. Davidson, "The Role of Computational Fluid Dynamics in Process Industries," The Bridge, Vol. 32, No. 4, Winter 2004, pp. 9–14, http://www.nae.edu/File.aspx?id=7309.
ADDITIONAL READING
The following additional information source was
used to develop this paper:
O. Simonin, E. Deutsch, and J.P. Minier, "Eulerian Prediction of the Fluid/Particle Correlated Motion in Turbulent Two-Phase Flows," Applied Scientific Research, Vol. 51, 1993, pp. 275–283, purchasable at http://www.springerlink.com/content/m178j517v8452624/.
BIOGRAPHIES
Christopher Kennedy, PhD, is
a senior engineering specialist
whose focus is computational
fluid dynamics. He works
with the CFD team in Bechtel
National, Inc.'s Advanced
Simulation and Analysis group.
Most of his nearly 5 years at
Bechtel have been concentrated
on the topics of gas-phase
detonations and multiphase mixing at the Waste
Treatment Plant being built in Richland, Washington,
to clean up one of the world's largest concentrations
of radioactive waste, a legacy of Cold War nuclear
weapons production.
Chris holds a PhD and an MS, both in Applied
Mechanics, from the University of California, San
Diego. His BA, in Natural Sciences, is from the Johns
Hopkins University, Baltimore, Maryland.
Philip Diwakar is a senior
engineering specialist for
Bechtel National, Inc.'s,
Advanced Simulation and
Analysis group. He employs
state-of-the-art technology to
resolve a wide range of complex
engineering problems on large-
scale projects. Philip has more
than 15 years of experience
in CFD and finite element analysis for structural
mechanics. His more recent experience includes work
on projects involving fluid-solid interaction.
During his 9-year tenure with Bechtel, Philip has
received two full technical grants and a mini-grant.
One full grant was used to determine the effects
of blast pressure on structures at LNG plants, for
designing less costly, safer, and more blast-resistant
buildings. The other was used to study transient
thermal fatigue stresses using fluid-structure
interaction in molecular sieve dehydrators, with
applicability to other components that may be subject
to fatigue failure. Philip has also received four Bechtel
Outstanding Technical Paper awards, as well as
two awards for his exhibit on the applications
of fluid-solid interaction technology at the 2006
Engineering Leadership Conference in Frederick,
Maryland. This paper is his third contribution to the
Bechtel Technology Journal.
Before joining Bechtel, Philip was a project engineer
with Caterpillar, Inc., where he was part of a Six Sigma
team. He applied his CFD expertise to determine the
best approach for solving issues involving the cooling
of Caterpillar heavy machinery.
Philip holds an MTech in Aerospace Engineering
from the Indian Institute of Science, Bengaluru; a
BTech in Aeronautics from the Madras Institute of
Technology, India; and a BS in Mathematics from
Loyola College, Baltimore, Maryland. He is a licensed
Professional Mechanical Engineer and is a Six
Sigma Yellow Belt. His current focus is on LEED and
sustainable development.
Leonard (Joel) Peltier, PhD,
is a principal engineer in
Bechtel's Advanced Simulation
and Analysis Group and an
adjunct assistant professor of
mechanical engineering at the
Pennsylvania State University.
He has 19 years of experience
in applied fluid mechanics,
computational fluid mechanics,
and numerical heat transfer. At Bechtel, Joel has
worked on projects for BSII, M&M, and Power and
has been elected to the ASME V&V 20 code committee
(Verification and Validation in Computational
Modeling and Simulation) as a contributing member.
Before joining Bechtel, Joel worked at the Pennsylvania
State University's Applied Research Laboratory, a
university-affiliated research center for the US
Navy, performing simulations and analyses of
turbomachinery and undersea vehicle flows.
His research work extended to passive heating of
architectural spaces, simulation of arterial stenoses and
glottal aerodynamics, and modeling of atmospheric
transport and dispersion. His expertise includes
steady and unsteady Reynolds-averaged, Navier-
Stokes (RANS) modeling; large-eddy simulation
(LES); and hybrid RANS/LES techniques, including
detached-eddy simulation (DES) and its zonal (ZDES)
and delayed (DDES) variants. At Pennsylvania
State University, he taught fluid mechanics at the
undergraduate and graduate levels, specializing in
turbulence theory and turbulence modeling.
Joel holds PhD and MS degrees, both in Aerospace
Engineering, from the University of Colorado at
Boulder and a BS degree in Chemical Engineering
from Princeton University, New Jersey. He spent
3 years as a post-doctoral research associate in
the Department of Meteorology at Pennsylvania
State University.
Brigette Rosendall, PhD, is a
principal engineer and the lead
computational fluid dynamics
specialist in Bechtel National,
Inc.'s, Advanced Simulation
and Analysis group. She
specializes in multiphase
transport phenomenon, heat
transfer, and reaction kinetics.
For the past 10 years, her focus
has been on modeling mixing within vessels for
the Waste Treatment Plant being built in Richland,
Washington, to clean up one of the world's largest
concentrations of radioactive waste, a legacy of Cold
War nuclear weapons production.
Brigette has been with Bechtel for 13 years and has
supported projects in the Power; Mining & Metals;
and Oil, Gas & Chemicals global business units,
in addition to Bechtel National. Some key projects
include National Missile Defense, the Tacoma
Narrows Bridge, Darwin LNG, Atlantic LNG, and
Cerro Matoso. She also participated in a technical
grant to determine the ability of CFD to model
pump intake flow. Brigette has received two Bechtel
Outstanding Technical Paper awards, as well as
an award for the best exhibit on the application of
advanced simulation at Bechtel's 2002 Engineering
Leadership Conference in Frederick, Maryland.
Brigette is a member of the board for the Computer
Aids for Chemical Engineering (CAChE) Corporation
and co-chair of the CAChE CFD Task Force, and
has been the industrial director of the National
Science Foundation-funded Industry & University
Cooperative Research Program (I/UCRC) on
Multiphase Transport Phenomenon. She is an adjunct
assistant professor of chemical engineering and
material science at Michigan State University.
Brigette holds a PhD and an MS in Chemical
Engineering from the University of Washington
and a BS in Chemical Engineering from Michigan
State University.
WATER SECURITY

Angelos N. Findikakis, PhD ([email protected])

Abstract: Water security is critical for the survival of communities and nations. It represents a major challenge, especially in parts of the world facing increasing water scarcity, such as arid and semi-arid regions or densely populated areas. Water security also directly affects food security, as most of the world's water is used in agriculture for food production. The water/food/energy security nexus was recently identified as one of three important risk clusters that may threaten economic growth and political and social stability in the future.

Water security hinges on balancing water supply with water demand, both of which are subject to change over time. Since there are limits to augmenting the water supply, demand management through economic, administrative, and technical measures is essential for achieving water security. Water management should aim at the sustainable use of renewable water resources and the planned depletion of nonrenewable water resources within well-defined timeframes. Where possible, this should be supplemented by the use of unconventional water sources such as treated wastewater and desalination. Serious threats to water security include population growth (which intensifies water scarcity), pollution (which renders some water resources unusable), and climate change (which may alter the amount and time distribution of precipitation in some parts of the world and increase the water requirements for food production due to higher temperatures).

Two powerful concepts, virtual water and the water footprint, make it easier to grasp the global nature of water security and the need for global initiatives to ensure it. These concepts can also help the public understand the impact of their everyday choices on water availability while offering policy and decision makers new ways of viewing water security and how to improve it.

Water security is critical for most industries and businesses because many processes and supplies depend directly or indirectly on water. Systematic accounting of the quantities involved can help businesses identify opportunities for more-efficient water use and understand the water-related risks that they may face. The development of strategies for dealing with such risks can help achieve water security.

Keywords: virtual water, water footprint, water management, water scarcity, water security

ABBREVIATIONS, ACRONYMS, AND TERMS
CDP – Carbon Disclosure Project
FTSE – global company jointly owned by The Financial Times and the London Stock Exchange
GDP – gross domestic product
GEMI – Global Environmental Management Initiative

INTRODUCTION
The term "water security" is often used to describe
the condition of an uninterrupted water
supply of sufficient quantity and adequate
quality to meet the domestic water needs of
a country or region and support its economic
activities that depend on water. A broader
definition of water security involves the
sustainable development of water resources,
protection of water systems, protection of water
supply from hazards associated with extreme
hydrologic events (such as floods and droughts),
and safeguarding of access to water and
services for people and the environment. [1]
According to the Global Water Partnership,
the term "water security" aims to capture the
complex concept of holistic water management
and the balance between resource protection
and resource use. [2] Over the last few years,
the term has been increasingly used, even in
countries not characterized by water scarcity,
like Canada [3] and the United Kingdom [4].
Depending on the prevailing actual or perceived
threats to the water supply, water security at
times becomes synonymous with a single aspect
of the overall problem. For example, in the United
States after 9/11, the Environmental Protection
Agency used the term almost exclusively to
indicate the measures required to protect against
terrorist acts that may contaminate or disrupt
the water supply. On the other hand, in Egypt,
where practically all water comes from the
Nile River, water security is synonymous with
the continuation of past agreements regarding
allocation of the river's water resources among the
countries sharing its basin. For many industries,
water security means the protection against
operation disruptions due to water shortages.
On March 22, 2000, at the Second World Water
Forum, the main challenges to achieving
water security were set forth in the Ministerial
Declaration of The Hague on Water Security in
the 21st Century. As agreed on by 114 ministers
and officials from 130 countries, the listed
challenges included: (a) meeting the needs of all
people to have access to safe and sufficient
water and sanitation; (b) equitably allocating
water resources to secure food production;
(c) protecting ecosystems; (d) engendering
peaceful cooperation in sharing water resources;
(e) managing risk in dealing with floods,
droughts, and pollution; and (f) managing water
in a manner that reflects its economic, social,
environmental, and cultural values for all its
uses and that accounts for the need for equity. [5]
WATER SCARCITY
Water scarcity is a relative concept that depends on the combination of available
depends on the combination of available
supply and level of demand for water. Fresh
water is inherently scarce in many parts of
the world because of its uneven distribution
around the globe. In most arid and semi-arid
areas, water scarcity has intensified over time
due to population and economic growth. Many
parts of the world face physical water scarcity
due to the lack of adequate water resources,
while parts of the developing world face
economic water scarcity because they do not
possess the economic means to develop their
water resources.
Although the accuracy of global statistics may
sometimes be questionable and their use has
been criticized by some experts (see [6], for
example), data compiled by international
organizations at least give an indication of the
order of magnitude of the problem. According
to the most recent global annual assessment
of sanitation and drinking water by the World
Health Organization, in 2008, nearly 900 million
people did not have access to an improved
drinking water source, and over 2.6 billion people
did not use improved sanitation facilities. [7]
Improved drinking water sources are considered
those that provide safe water and include
household connections, rainwater collections,
public standpipes, boreholes, protected dug
wells, and protected springs within 1 kilometer
of the household. [8] Use of unsafe drinking
water causes the death of 1.8 million people
every year from diarrheal diseases such as
cholera, with 90% of them being children under
5, mostly in developing countries. [9] Improved
sanitation is important because unsanitary
practices and discharge of untreated wastewater
in nearby water bodies or the ground directly
affect drinking water resources.
According to one estimate, the water required
for the drinking, cooking, sanitation, and
bathing needs of an individual is between 27
and 200 liters per day. Based on this estimate,
a minimum standard requirement for basic
human needs of 50 liters per person per day
has been proposed. [10] Actual domestic water
use varies from place to place above and below
this proposed minimum because of differences
in climate, standard of living, individual
lifestyle, water use efficiency, etc. For example,
based on data from the US Geological Survey,
average domestic water use in the United States
was 135 m³/yr per capita in 2005 and varied between approximately 75 and 263 m³/yr per capita in Maine and Nevada, respectively. [11] In the Middle East, a somewhat earlier per capita domestic water use figure varied between approximately 66 m³/yr in Egypt and approximately 23 m³/yr in Yemen. [12]
Total water use is much higher than domestic
use due to water needs for agriculture, industry,
urban uses, etc. The largest water user is irrigated
agriculture. More than three-quarters of all
water used around the world is for food
production. It is estimated that producing food
for the average US diet, which is rich in meat,
requires about 5.4 m³/day, or 1,971 m³/yr per person, while an equivalent vegetarian diet requires only 2.6 m³/day per person, less than half that amount. [13]
A widely accepted minimum level of renewable
water resources for satisfying the needs of a
country with self-sufficient food production
is 1,700 m³/yr per person. [14] Countries or regions with fewer water resources are said to be under water stress. If the available renewable water resources are less than 1,000 m³/yr per person, then the country or region is said to be characterized by water scarcity; if they fall below 500 m³/yr per person, the condition is described as absolute water scarcity. Based on population growth rates around 2005, it was estimated that by 2025, 1.8 billion people would be living in countries or regions with absolute water scarcity, and two-thirds of the world's population could be
under water stress conditions. [15] Water-scarce
regions include most arid and semi-arid areas
in the American Southwest, Australia, North
Africa, Middle East, Northern China, parts of
India and South Africa, and densely populated
cities and parts of Asia, such as Singapore.
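These thresholds amount to a simple classification. The short Python function below encodes them; the threshold values come straight from the text, while the function itself is just a convenience for illustration.

def water_status(resources_m3_per_person_yr):
    # Classify per capita renewable water resources using the
    # 1,700 / 1,000 / 500 m^3/yr thresholds cited in the text.
    r = resources_m3_per_person_yr
    if r >= 1700:
        return "above the water stress threshold"
    if r >= 1000:
        return "water stress"
    if r >= 500:
        return "water scarcity"
    return "absolute water scarcity"

print(water_status(900))  # -> "water scarcity"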
Figure 1 shows the per capita renewable
water resources for countries with less than
2,500 m³/yr per capita plotted against their
per capita gross domestic products (GDPs)
and population growth rates. As illustrated,
many very poor countries face severe water
scarcity, which is intensified by rapid population
growth. It should be noted that per capita
renewable water resources is a valid indicator
only for relatively small countries or when
applied to regions with a relatively uniform
climate. The average per capita resources for
large countries like the United States or China
can be misleading because average values
mask significant differences within these
countries. For example, the per capita available
water resources in the southwestern United
States or Western China are far less than the
corresponding national average.
Increasing water scarcity can create tensions and
become a destabilizing factor in international
relationships, as pointed out in a recent report
by the US Senate Foreign Relations Committee
that examined water issues in Central and South
Asia. [16] The report presented an analysis
of how foreign assistance could be used for
strategic investments to promote water security
in the region. In the words of the committee's chairman, Senator John Kerry, "Water security is vital in achieving our foreign policy and national security goals by recognizing the duality of water: as a tool for development or a means to exacerbate conflict." [17]
Figure 1. Total Renewable Water Resources Versus Per Capita GDP (per capita renewable water resources from [18]; per capita GDP data for 2003 from [19]); GDP per capita (US$/yr) is plotted against water resources per capita (m³/yr), symbols indicate population doubling time, and the water stress and water scarcity ranges are marked
THE WATER/FOOD/ENERGY SECURITY NEXUS
Food security is achieved when "all people, at all times, have physical and economic access to sufficient, safe, and nutritious food to meet their dietary needs and food preferences for an active and healthy life." [20] At a national level,
food security can be achieved through internal
food production, food imports, or a combination
of the two. Many countries pursue policies
aimed at self-sufficiency in food production.
Besides affirming a country's independence,
such policies are justified based on socio-
economic considerations because farming is the
only employment opportunity in many areas
(especially in developing countries). In this case,
food security depends on the availability of
water and becomes closely tied to water security.
Food and water security are also related to
energy security. Water used for food production
is obtained by either extracting groundwater or
by operating surface water collection, storage,
and distribution systems, both of which require
energy. Finally, energy generation requires
water, primarily for cooling. In the most recent
annual meeting of the World Economic Forum
in Davos, the water/food/energy security nexus
was discussed as one of three important clusters
of risks that may threaten economic growth and
political and social stability in the future. [21]
WATER MANAGEMENT
Water security hinges on balancing water supply with water demand, both of which
supply with water demand, both of which
change over time. Water supply can decrease due
to the depletion of nonrenewable water resources
or due to changes in the availability of renewable
water resources caused by degradation of quality,
climate change, or various human activities that
affect the hydrologic cycle, such as land use
changes. Water supply can increase through
the development of new water resources. Water
demand increases as the result of population and
economic growth but can be managed through
conservation, more efficient water use, economic
measures, and agricultural and trade policies.
The essence of water management is finding
and achieving the optimal equilibrium between
water supply and water demand without
compromising future water security.
In general, augmenting the supply is problematic.
Most countries and regions facing water scarcity
have already fully developed their water
resources beyond sustainable levels, resulting
in rivers that run out of water by the time they
reach the sea, lakes that shrink, and groundwater
wells that run dry. Use of nonrenewable
water resources is only a short-term solution
because, by definition, their use leads to their
depletion. Large water transfer schemes, like the
south-to-north water transfer under construction
in China, are usually prohibitively expensive
and have many negative environmental impacts.
Nonconventional ways to augment the water
supply include using treated wastewater,
desalination, and, on a smaller scale, water
harvesting. Theoretically, desalination offers
the potential of unlimited water supply to
areas near the ocean or other saline water
bodies, but its feasibility depends on the
availability and cost of energy. Technological
developments in membrane technologies based
on reverse osmosis have lowered the energy
requirements for desalination, though they
still remain quite high.
In view of these limitations, Gleick introduced
the concept of "peak water" analogously with the concept of "peak oil" and described the
implications of reaching peak water use and
what would need to be done to avoid irreversible
ecological damage to the environment. [22] In
this respect, nonrenewable water resources, such
as the fossil groundwater in the aquifers of North
Africa and the Middle East, are very similar to
oil, following a similar production curve over
time with a distinct peak at some point, followed
by declining production. On the other hand,
renewable water resources are limited by the
rate of their replenishment. They can be used
indefinitely at their peak limit, but obviously not
beyond that limit.
Where supply-side augmentation is not an
option, demand management offers the only
viable solution for survival and sustainable
development. Demand management aims at
influencing attitudes and consumption patterns
toward more efficient and cost-effective water
use and is often practiced through a combination
of economic, technical, and administrative
measures. Economic measures often include
pricing mechanisms and other incentives for
reducing water use. Technical measures include
conservation, more-efficient water use, and water
reuse. Adopting such measures is critical to the
viability and acceptability of new community
projects built in parts of the world where water
is scarce.
In the past, water management was viewed as a
technical subject in the domain of engineering.
Today, the many complex factors affecting water
supply and demand require water management
to integrate input from multiple disciplines and
be practiced in the context of a water strategy
consistent with, and supported by, government
policies and planning in critical sectors. These
sectors include demographic growth, agriculture,
industry, trade, and environmental protection.
To a great extent, successful water resources
management depends on the existence of
the proper political, institutional, and legal
frameworks and organizational structures that
form the foundation of water governance. In
many parts of the world, reform and improvement
of water governance are essential to achieving
water security.
THREATS TO WATER SECURITY
A recent study concluded that 80% of the world's population is exposed to high levels
of threat to water security. [23] Overall, the
greatest of these threats is population growth
because it increases the demand for water.
With the available renewable water resources
remaining the same (at best) and nonrenewable
water resources being depleted, more people
will have to share the same or a smaller amount
of water in the future. As can be seen in
Figure 1, some countries with very limited
renewable water resources have very high birth
rates, their populations doubling in less than
30 years. Also, in some areas with very limited
water resources, internal migration (the result
of policies that encourage the expansion of cities
and other developments in desert environments)
has resulted in rapid population growth,
seriously undermining the water security of
these areas. In addition, economic growth and
higher standards of living over time increase
water use per capita due to changing diets and
greater consumption of goods, many of which
need water for their production.
Another common threat to water security is
pollution, which may render entire water bodies
unsuitable for specific uses. Multiple sources of
chemical and biological contamination are among
the common causes of water quality degradation.
Such contamination arises both from discrete
point sources (such as industrial facilities and
the discharge of untreated municipal waste)
and from nonpoint pollution resulting from the
application of excess fertilizers, herbicides, and
livestock waste. China today is a great example
of a country where these problems are a major
source of concern for water security.
Water security also includes protecting the water
supply from terrorist acts intended to cause
harm. Besides measures for physically protecting
the water infrastructure, advanced methods and
technologies for the early detection of harmful
agents within the water distribution network
have been developed in response to this threat.
A factor that complicates the assessment of
future water resources availability is climate
change. It is predicted that climate change
will alter precipitation patterns around the
world, making many areas more prone to
drought and increasing water scarcity. Global
atmospheric circulation models suggest that
increased emissions of carbon dioxide and other
greenhouse gases contribute to global warming.
It is predicted that higher temperatures will
increase evapotranspiration, leading to an
increase in precipitation in latitudes greater than
30° North and 30° South, while less and more
erratic precipitation is expected in many tropical
and subtropical regions. [24] This will change
streamflow patterns, affecting the distribution of
water resources in both space and time.
Despite significant progress in global
atmospheric modeling, it is still difficult to
predict the impact of climate change on smaller
scales, such as the scale of a river basin. There
seems to be agreement, though, that changes in
precipitation distribution in space and time are
likely to produce greater extremes: more severe
floods and droughts. Besides their impact on
surface water resources, these precipitation and
temperature changes will affect groundwater
resources by modifying the recharge rate of
different aquifers.
Global warming will also affect demand by
increasing water requirements for agriculture
due to increased evapotranspiration at higher
temperatures. Water stress in many arid and semi-
arid areas is likely to increase, so countries like
Pakistan, India, Mexico, and northern China, and
the countries of the Middle East and sub-Saharan
Africa, are likely to be affected significantly.
The effects on some water systems may be quite
dramatic. For example, according to Barnett and
Pierce [25], there is a 50% probability that Lakes
Mead and Powell, the two largest reservoirs on
the Colorado River and the major sources of
water for the southwestern United States, will go
dry by the year 2021. Besides their effect on water
resources, higher temperatures are also likely
to raise sea levels, causing the inundation of
low-lying coastal areas and increasing saltwater
intrusion into coastal aquifers.
THE SCALE OF WATER SECURITY AND
VIRTUAL WATER TRADE
Through most of history, water security
was sought at a local scale, with people
choosing to live near a water source, a river, a
lake, or an aquifer. Early civilizations in Egypt,
Mesopotamia, and India flourished along rivers
where water was readily available. Over time,
advances in engineering made possible the
construction of large dams to store water and
aqueducts to transfer water over long distances.
This allowed the water resources of a river basin
to be shared throughout the entire basin and even
benefit people in nearby basins. As engineering
became more ambitious, water transfer schemes
made it possible to bring water to arid or semi-
arid areas across long distances, expanding the
scale of water security to the regional or national
level. Examples of such large-scale projects are
the water transfer scheme from northern to
southern California and the south-to-north water
transfer under construction in China.
The introduction of the virtual water concept
makes it possible to consider water security on a
global scale by examining trade among nations.
The term virtual water is used to describe the
amount of water required to produce food or
other products. [26] The virtual water contents
of various food products differ significantly.
For example, the production of 1 kilogram of
wheat requires a little more than 1 cubic meter of
water, while the production of 1 kilogram of beef
requires more than 10 times as much. The trade
of goods among countries or regions creates
a flow of virtual water from the commodity-
exporting regions to the commodity-importing
regions. The concept of virtual water shows how
water-scarce countries or regions can benefit
by importing agricultural products that require
large amounts of water, saving the water that
would have been required to produce those
products locally. This way, water-scarce countries
or regions can increase their water security by
reserving part of their limited water resources
for other uses.
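To make the arithmetic behind these savings concrete, the short Python sketch below estimates the virtual water embedded in a hypothetical import scenario. The per-kilogram water contents are indicative values consistent with the figures cited above, and the import tonnages are invented for illustration; neither is taken from the cited studies.

```python
# Illustrative estimate of the virtual water embedded in food imports.
# Per-kg water contents are indicative values only (cf. [26] and the
# figures in the text); the import quantities are hypothetical.

VIRTUAL_WATER_M3_PER_KG = {
    "wheat": 1.3,   # "a little more than 1 cubic meter" per kg
    "beef": 15.0,   # "more than 10 times as much" as wheat
}

def virtual_water_import_km3(imports_tonnes):
    """Total virtual water content of imported goods, in km3/yr."""
    total_m3 = sum(
        tonnes * 1000.0 * VIRTUAL_WATER_M3_PER_KG[good]  # tonnes -> kg
        for good, tonnes in imports_tonnes.items()
    )
    return total_m3 / 1e9  # 1 km3 = 1e9 m3

# A water-scarce country importing 5 million tonnes of wheat and
# 200,000 tonnes of beef per year avoids withdrawing roughly:
imports = {"wheat": 5_000_000, "beef": 200_000}
print(f"Virtual water imported: {virtual_water_import_km3(imports):.1f} km3/yr")
# -> Virtual water imported: 9.5 km3/yr
```

A result on this order, several cubic kilometers per year, is a substantial fraction of the total renewable water resources of many of the water-scarce countries in Figure 1, which is why food imports can function as an alternative water source.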
As noted by Hoekstra, "export of water-intensive commodities implies that the domestic water resources are more intensively used than they would have been in the case without such export. As a result, the pressure on, and the scarcity of, the domestic water resources will be increased. In reverse, countries that import water-intensive commodities relieve the pressure on their domestic water resource." [27]
Virtual water becomes, in essence, an alternative
source of water.
The volumes of traded virtual water represent
a significant percentage of the total water
consumed. Estimates of total virtual water traded
among nations in 1995–2000 are in the range of 1,040 to 1,340 km³ [28], which is about 50% to 64% of the 2,100 km³ estimated total volume of fresh water consumed in 2000. [29]
GREEN, BLUE, AND GRAY WATER
To better appreciate the implications of virtual
water trade, it is instructive to understand
the source of the water content of different
products. In this context, researchers use the
terms green, blue, and gray water. Green water
is defined as the rainwater stored in soil as soil
moisture. The water content of the products of
rain-fed agriculture is classified as 100% green
water. Blue water is the water that runs off
to rivers or percolates deep into the ground
to become groundwater. Irrigated agriculture
depends on blue water. The gray virtual water
content of products assesses the water pollution
caused by their production. Gray water is the
estimated volume of water required to dilute the
effluents from the production of a product and
bring the quality of the receiving water body to
acceptable standards.
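The dilution idea behind gray water can be stated compactly. The expression below sketches the common formulation used in the water footprint literature (e.g., the methodology behind [45]); the symbols are defined here for convenience rather than quoted from that source:

$$ V_{\text{gray}} = \frac{L}{c_{\max} - c_{\text{nat}}} $$

where L is the pollutant load discharged per unit time, c_max is the ambient water quality standard for that pollutant, and c_nat is its natural background concentration in the receiving water body. The quotient is the freshwater flow needed to assimilate the load while keeping the receiving water within acceptable standards.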
An important difference between green and
blue water in the context of the virtual water
discussion is that blue water has alternative uses,
while green water does not. For example, if blue
water is not used in agriculture, it can be used
for domestic or industrial supply, to enhance
environmental flows, etc. On the other hand,
green water can be used only by vegetation;
its use is limited by the variety of plants that
can be grown in a given area. For example, the
moisture in the soil (green water) could support
a forest, a pasture, or agricultural crops. As
countries trade goods that require the use of
water for their production, there is a flow of
virtual water among them. Strictly speaking,
this may not necessarily constitute virtual
water trade because it may not be the result of
conscious water policy decisions. A report by
the World Water Council noted that one can "only speak of virtual water trade if conscious choices are made in water and environmental management policies whether or not to make water available or to release pressure on the domestic water resources by importing goods that else would have consumed much of the domestic water resources available." [30]
However, the flow of virtual water among
countries or regions is a reality, whether it is the
result of conscious water trade or not.
THE WATER FOOTPRINT AS A TOOL FOR
RAISING AWARENESS ABOUT WATER SECURITY
Hoekstra introduced the concept of the
water footprint to account for all direct
and indirect water use by a nation. [31] It is
calculated as the sum of all direct water use
within a nation and the net virtual water import.
The estimated global average per capita water footprint for 1997–2001 was 1,243 m³/yr; for the United States, the average was 2,483 m³/yr; for the United Kingdom, 1,245 m³/yr; and for China, 702 m³/yr. [32] Figure 2 shows the water
footprints of different countries as estimated by
Hoekstra and Chapagain [32] versus their per
capita GDPs. This concept can be equally applied
to an individual, a community, or a business. The
water footprint of an individual or a community
is defined as the sum of the volume of fresh
water used and the virtual water content of
all services used and goods consumed by that
individual or community. The water footprint
of a business is defined as the volume of fresh
water used to make the products produced by
the business, plus the virtual water content of
all the supplies, parts, and all other services.
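Because the footprint is defined additively, the bookkeeping is simple once the component volumes are known. The Python sketch below applies the national definition given above; all input values are hypothetical, chosen only to show the accounting.

```python
# Water footprint of a nation = direct internal water use
#   + net virtual water import (imports minus exports),
# following the definition in the text [31]. All figures are hypothetical.

def national_water_footprint(direct_use_km3, vw_imports_km3, vw_exports_km3):
    """Return the national water footprint in km3/yr."""
    return direct_use_km3 + (vw_imports_km3 - vw_exports_km3)

wf = national_water_footprint(direct_use_km3=50.0,
                              vw_imports_km3=12.0,
                              vw_exports_km3=4.0)
print(f"National water footprint: {wf:.1f} km3/yr")     # -> 58.0 km3/yr

# Dividing by population gives a per capita figure comparable to
# the national averages reported in [32]:
population = 25_000_000
print(f"Per capita: {wf * 1e9 / population:.0f} m3/yr")  # -> 2320 m3/yr
```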
The virtual water content of specific products
and the water footprint are powerful tools for
raising awareness about the implications of
specific consumption patterns for water security.
For example, the total water needed to produce
a cup of coffee (125 mL) is 0.14 m³, equal to 1,120 cups of the same size. To produce a hamburger (150 g), 2.4 m³ of water is required; a cotton T-shirt weighing 200 g requires 2 m³ of water. [32] It is also important to know whether
the virtual water content of a product is green,
blue, or gray water. For example, most of the
virtual water content of coffee is the water for
growing the coffee plants. Because these plants
are grown in areas with high rainfall, the virtual
water content of coffee is mostly green water. On
the other hand, cotton production in some areas
is supported by rainwater, while in other areas
it relies totally on irrigation, or blue water. For
instance, it has been estimated that the virtual
blue water content of 1 kg of final cotton textile
(the weight of a pair of jeans) in Turkmenistan
is 14.122 m³; in Uzbekistan, 11.140 m³; in Egypt, 10.787 m³; in Pakistan, 9.884 m³; in India, 5.726 m³; in China, 2.345 m³; and in the United States, 1.897 m³. [33]
In some cases, irrigated cotton production has
led to wasteful use of water, with significant
negative impacts. A prime example of this is the
expansion of cotton production in Central Asia,
which led to one of the greatest environmental
catastrophes of the 20th century: the destruction
of the Aral Sea. It all started with the Soviet government's plan in the 1950s to boost cotton
production using the waters of the Amu Darya
[Figure 2. Water Footprint of Different Countries Versus the Per Capita GDPs (water footprint values from [32], based on data for 1997–2001; per capita GDP for 1999 from [34]). The scatter plot shows per capita GDP (US$/yr) against per capita water footprint (m³/yr) for selected countries.]
and the Syr Darya rivers, the main sources
of water for the Aral Sea. To implement this
plan, several new reservoirs and canals were
constructed in the area. These projects diverted
large volumes of water for irrigation and, at the
same time, introduced significant water loss
through evaporation from the reservoirs and
seepage through the canals, most of which, like
the 1,375 km long Karakum Canal through the
desert, were unlined. This cut off practically all
flow into the Aral Sea, which started shrinking.
Its volume was reduced by 90% over the last
50 years. Between 1950 and 2009, its volume
decreased from 1,058 km³ to 105 km³, its area decreased from 65,607 km² to 13,500 km², and its salinity increased tenfold, from 10.17 g/L to 102 g/L. [35]
As argued in a recent column [36], publicizing
the virtual water content of different products,
such as that of a T-shirt, can help raise awareness
about the dwindling water resources in arid
regions that sometimes are the source of the
materials for manufacturing those products.
This can help consumers understand the impact
of their choices on the water resources of distant
places. By supporting products from areas
using sustainable water management practices,
consumers can influence farmers and authorities
to adopt such practices and improve water use
efficiency. Among the suggestions to this end are
labeling water-intensive products and certifying
industries or retailers that meet specific criteria
of efficient, sustainable, and fair water use. [27]
BUSINESS WATER SECURITY
Water security for individual businesses
implies the avoidance of water-related risks
that may disrupt operations, affect profitability,
or both. Business water security assessments
are based on analyzing the vulnerability of a
business to disruptions in its water supply, as
well as its resilience. An indication of increased
concern and awareness about these risks can be
found in a recent guidance by the Securities and
Exchange Commission, which referred explicitly
to changes in the availability or quality of
water in the context of climate-related risks
and opportunities that public companies need
to disclose to their investors. [37] A recent study
that evaluated the quality, depth, and clarity
of water risk disclosure of 100 of the largest
publicly traded companies from eight different
sectors concluded that most of these companies
failed to reference specific at-risk operations or
supply chains and did not attempt to quantify
or monetize such risks. [38] It also found that
even though many companies face exposure
to water-related risks, only one-quarter of
them detail specific policies, standards, or
management systems to reduce these risks and
associated costs, and only one-fifth disclose
quantified targets to reduce water use.
Another survey aimed at collecting critical
water-related data from several large
corporations was recently undertaken by
the Carbon Disclosure Project (CDP). As part
of this initiative, the CDP Water Disclosure project, the CDP sent a questionnaire to 302 of the world's 500 largest companies
in the FTSE Global Equity Index Series,
focusing on sectors that are water intensive
or particularly exposed to water-related risks.
Feedback from the 150 companies that
responded showed that in two-thirds of them,
responsibility for water-related issues
lies at the Board or Executive Committee level;
89% have developed specific water policies,
strategies, and plans; and 60% have set
water-related performance targets. [39]
Assessing water security and related risks for
a business requires thorough quantitative and
qualitative analysis of water use in the business
itself and its suppliers and an understanding of
how water affects its customers. The analysis of
water use in the business itself accounts for all
inflows, outflows, and internal flows of water
through the different processes of the business.
These data are then used to develop the water
budget for the full business cycle. The recording
and analysis of these external and internal flows
of a business is sometimes referred to as water
auditing. A goal of water auditing is to identify
potential losses as well as opportunities for
more-efficient water use. Metrics can be very
useful for assessing progress toward the goal of
improved water security and can also be used
to compare various designs and concepts in
terms of sustainability. Examples of such metrics
are the volume of water used per unit output,
the wastewater produced/discharged per unit
output, and the ratio of fresh water usage to
recycled water usage. It is also essential to
assess the sustainability of the water supply that
supports the business, understand present and
future potential competing uses for that water,
and foresee potential regulatory changes that
may impose constraints on its use.
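Once a water audit has quantified these flows, the metrics named above reduce to simple ratios. The Python sketch below computes them for a single production site; the audit figures and variable names are hypothetical illustrations, not a reporting standard.

```python
# Example water-audit metrics for one production site, using the three
# metrics named in the text. All audit figures below are invented.

site = {
    "output_units": 120_000,       # units of product per year
    "fresh_water_m3": 480_000,     # fresh water withdrawn per year
    "recycled_water_m3": 160_000,  # internally recycled water per year
    "wastewater_m3": 300_000,      # wastewater discharged per year
}

water_per_unit = site["fresh_water_m3"] / site["output_units"]
wastewater_per_unit = site["wastewater_m3"] / site["output_units"]
fresh_to_recycled = site["fresh_water_m3"] / site["recycled_water_m3"]

print(f"Water use per unit output:  {water_per_unit:.2f} m3/unit")      # 4.00
print(f"Wastewater per unit output: {wastewater_per_unit:.2f} m3/unit") # 2.50
print(f"Fresh-to-recycled ratio:    {fresh_to_recycled:.1f}")           # 3.0
```

Tracked year over year, such ratios show progress toward water targets and allow alternative designs to be compared on a common basis.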
Recognizing the significance of water-related
risks, several think tanks and institutes serving
the business community and different nonprofit
organizations have worked on this subject
over the last few years to prepare tools and
methods to help businesses improve their water
security. An overview of major business-relevant
initiatives that provide guidance on sustainable
water management and tools, measurement
methodologies, standards, and reporting
indicators has been prepared by the World
Business Council for Sustainable Development
and the International Union for Conservation
of Nature. [40]
For example, the Global Environmental
Management Initiative (GEMI), a nonprofit
organization of several major companies
focusing on environmental, health, and safety
issues, produced the Water Sustainability Tool,
designed to help individual companies build
a business water strategy. [41] Similarly, the
Water Sustainability Planner was designed to
help convert the corporate sustainability strategy
into a site or unit strategy for water and assist
in understanding water use and impacts and
water risk assessment. [42]
The World Business Council for Sustainable
Development developed the Global Water Tool
to help companies and organizations map their
water use and assess risks relative to their global
operations and supply chains. [43]
Berger and Finkbeiner recently discussed how
to address water use in the lifecycle assessment
methodology, which measures the environ-
mental impacts of individual products from
cradle to grave. [44]
Another tool available to a business is its water
footprint, defined as the total volume of fresh
water used directly or indirectly to run and
support the business. The Water Footprint
Network, established in 2008 to coordinate
efforts to further develop and disseminate
knowledge on water footprint concepts, methods,
and tools, has recently prepared a new global
water footprint standard. [45]
Morrison and Schulte evaluated these four tools
and methodologies (GEMI tools, Global Water
Tool, lifecycle assessment, and water footprint)
and their use. [46] Based on this evaluation,
they made specific recommendations on how to
advance corporate water accounting practices,
such as placing greater emphasis on assessing
the local water resource context, supply chain,
and water quality; improving data collection;
developing consistent reporting criteria
across the industry; and encouraging greater
cooperation among companies in this area.
Another ongoing project related to business
water security is the effort under way by
Goldman Sachs, General Electric, and the World
Resources Institute to develop a water index to
measure water-related risks and opportunities
for companies and their investors. [47] This
effort began with a pilot project focused on the
thermal power industry in the Yellow River
basin in China, a water-stressed region due to
its rapid economic growth.
Finally, some corporations have developed
their own corporate water strategies to assess
their water use, analyze water-related risks
and vulnerabilities, identify opportunities to
use water more efficiently, set specific water
use targets, and employ metrics to assess
progress in reaching these targets. For example,
Nestlé published a water management report
presenting year-to-year data on water use and
wastewater generated as well as data on specific
indicators, such as the amount of water used and
wastewater generated per unit of product. [48]
CONCLUSIONS
Because growing populations have to share
limited water resources in many parts
of the world, water security is recognized as
an essential factor for the sustainability of
communities, cities, and countries and critical
for the success of most businesses. Uncertainty
about the impact of climate change on water
resources adds to the urgency of improving water
security. To achieve water security, a sustainable
equilibrium between water supply and demand
must be sought by augmenting supply and
managing demand. In general, the options for
augmenting supply are limited, because most
conventional water resources are already being
used. Unconventional water sources, such as the
reuse of treated wastewater and desalination,
can supplement available water resources in
water-scarce areas. Water savings through
conservation, more-efficient water use, and
elimination of leaks and other losses in water
distribution systems can contribute to the water
supply. Managing demand through economic
measures and well-thought-out development,
agricultural, trade, and other policies should be
an essential part of any strategy for achieving
water security. Good governance of the water
sector and adaptation of policies that discourage
wasteful and inefficient water use are also
essential preconditions for water security.
Virtual water and the water footprint are
powerful concepts that can help the public
understand how their everyday choices affect
water availability and that can expand the
options for improving water security available to policy and decision makers. These concepts
make it easier to realize the global nature of
water security and the need for global initiatives
to ensure it.
Achieving water security for industries and other
businesses requires a thorough understanding of
the myriad processes and supplies dependent on
water, along with a systematic accounting of the
quantities involved to identify opportunities for
more-efficient water use. Water-security-related
risks must be identified and a strategy developed
for dealing with such risks. Use of the proper
metrics can help assess progress toward the
goals of the water strategy.
Achieving water security also requires action
at all levels, from individual choices to the
development and implementation of appropriate
policies at local, regional, and national levels.
International coordination and collaboration
in water governance reform can help ease the
threats to global water security. Raising public
awareness about water's increasing scarcity can
help influence politicians to adopt agricultural,
economic, and other sustainable development
policies that do not promote growth at the
expense of security, because any threat to water
security is also a threat to growth.
REFERENCES
[1] B. Schultz and S. Uhlenbrook, Water Security:
What Does it Mean, What May it Imply? in
Water for a Changing World – Developing
Local Knowledge and Capacity, G.J. Alaerts
and N.L. Dickinson eds., Proceedings of the
International Symposium, June 13–15, 2007,
UNESCO-IHE Delft, the Netherlands, CRC
Press, Taylor and Francis Group, London, UK,
2009, see http://www.unesco-ihe.org/Users/
Prof.-S.-Uhlenbrook-PhD-MSc/Publications.
[2] Towards Water Security: A Framework for
Action, Global Water Partnership, Stockholm,
Sweden, and London, United Kingdom, 2000,
http://www.gwpsudamerica.org/docs/
publicacoes/doc_78_en.pdf.
[3] E. Norman, K. Bakker, C. Cook, G. Dunn, and
D. Allen, Water Security: A Primer, 2010,
policy report for Developing a Canadian Water
Security Framework as a Tool for Improved
Water Governance for Watersheds (2008–2012),
Program on Water Governance, University of
British Columbia, Vancouver, Canada, 2010,
http://www.watergovernance.ca/wp-content/
uploads/2010/04/WaterSecurityPrimer20101.pdf.
[4] Global Water Security – An Engineering
Perspective, The Royal Academy of
Engineering, London, UK, April 2010,
http://www.raeng.org.uk/news/publications/
list/reports/Global_Water_Security_report.pdf.
[5] Ministerial Declaration of The Hague on
Water Security in the 21st Century, agreed to
March 22, 2000, in The Hague, the Netherlands,
http://www.worldwatercouncil.org/fileadmin/
wwc/Library/Official_Declarations/The_
Hague_Declaration.pdf.
[6] A.K. Biswas, Water: Crisis Due to Scarcity or
Poor Governance?, closing keynote address,
Global Water Conference, Frankfurt, Germany,
May 20, 2010, http://www.malekigroup.com/
images/fgbw/h2o/Biswas-Asit-K_Global-H20-
Conference_20%20May-2010.pdf.
[7] UN-Water Global Annual Assessment of
Sanitation and Drinking-Water (GLAAS)
2010: Targeting Resources for Better Results,
World Health Organization, 2010,
http://whqlibdoc.who.int/publications/
2010/9789241599351_eng.pdf.
[8] Health through Safe Drinking Water and
Basic Sanitation, World Health Organization
website, accessed on February 15, 2011,
http://www.who.int/water_sanitation_health/
mdg1/en/index.html.
[9] Facts and Figures: Water, Sanitation and
Hygiene Links to Health, World Health
Organization website, accessed on February 15,
2011, http://www.who.int/water_sanitation_
health/publications/factsfigures04/en/.
[10] P.H. Gleick, Basic Water Requirements for
Human Activities: Meeting Basic Needs,
Water International, Vol. 21, No. 2, pp. 83–92,
http://www.bvsde.paho.org/bvsacd/cd17/
basic_wate.pdf.
[11] J.F. Kenny, N.L. Barber, S.S. Hutson, K.S. Linsey,
J.K. Lovelace, and M.A. Maupin, Estimated
Use of Water in the United States in 2005,
U.S. Geological Survey Circular 1344, USGS,
Reston, VA, 2009, http://pubs.usgs.gov/
circ/1344/pdf/c1344.pdf.
[12] P. Magiera, S. Taha, and L. Nolte, Water
Demand Management in the Middle East and
North Africa, presented at the International
Water Demand Management Conference,
Dead Sea, Jordan, May 30-June 3, 2004.
[13] S. Postel and A. Vickers, Boosting Water
Productivity, Chapter 3, State of the World 2004,
Special Focus: The Consumer Society, The
Worldwatch Institute, Washington, DC, 2004,
http://www.globalwaterpolicy.org/pubs/
SOW04water.pdf.
[14] F.R. Rijsberman, Water Scarcity: Fact or
Fiction?, Proceedings of the 4th International
Crop Science Congress – New Directions for a Diverse Planet, September 26 – October 1, 2004,
Brisbane, Australia, http://www.cropscience.org.
au/icsc2004/plenary/1/1994_rijsbermanf.htm.
[15] Coping with Water Scarcity, UN-Water
Thematic Initiatives, United Nations,
August 2006, http://www.unwater.org/
downloads/waterscarcity.pdf.
[16] Avoiding Water Wars: Water Scarcity and
Central Asia's Growing Importance for Stability
in Afghanistan and Pakistan, A Majority Staff
Report prepared for the use of the Committee
on Foreign Relations, United States Senate,
One Hundred Twelfth Congress, First Session,
February 22, 2011, http://foreign.senate.gov/
download/?id=738A9FCF-FA1B-4ECD-9814-
A1F6C5BE04D2.
[17] Senate Foreign Relations Committee Report
Warns Water Security in Central and South Asia
May Jeopardize Long-Term Regional Stability,
US Senate Committee on Foreign Relations
press release, February 22, 2011, http://foreign.
senate.gov/press/chair/release/?id=0b32e452-
9c4c-4417-82ee-d201bcefc8ae.
[18] Review of World Water Resources by Country,
Water Reports 23, Food and Agriculture
Organization of the United Nations, Rome, Italy,
2003, ftp://ftp.fao.org/agl/aglw/docs/wr23e.pdf.
[19] The World Factbook 2004, Central Intelligence
Agency, access via https://www.cia.gov/library/
publications/download/download-2004/index.
html.
[20] Rome Declaration on World Food Security,
World Food Summit, November 1317, 1996,
Rome, Italy, see http://www.fas.usda.gov/icd/
summit/1997/romefinal.html.
[21] Global Risks 2011, Sixth Edition, An Initiative
of the Risk Response Network, World Economic
Forum, January 2011, http://www3.weforum.
org/docs/WEF_GlobalRisks_Report_2011.pdf.
[22] P.H. Gleick and M. Palaniappan, Peak
Water Limits to Freshwater Withdrawal and
Use, Proceedings of the National Academy of
Sciences, Vol. 107, No. 25, pp. 11155–11162,
http://www.pnas.org/content/107/25/11155.full
.pdf+html and http://www.pacinst.org/press_
center/press_releases/peak_water_pnas.pdf.
[23] C.J. Vörösmarty, P.B. McIntyre, M.O. Gessner,
D. Dudgeon, A. Prusevich, P. Green, S. Glidden,
S.E. Bunn, C.A. Sullivan, C. Reidy Liermann,
and P.M. Davies, Global Threats to Human
Water Security and River Biodiversity,
Nature, Vol. 467, Issue 7315, September 30, 2010, pp. 555–561, http://www.nature.com/nature/
journal/v467/n7315/full/nature09440.html.
[24] The 1st UN World Water Development Report:
Water for People, Water for Life, UNESCO,
2003, http://www.unesco.org/water/wwap/
wwdr/wwdr1.
[25] T.P. Barnett and D.W. Pierce, When Will
Lake Mead Go Dry? Water Resources Research,
Vol. 44, W03201, March 29, 2008, access via
http://www.agu.org/pubs/crossref/2008/
2007WR006704.shtml.
[26] J.A. Allan, Fortunately There Are Substitutes
for Water – Otherwise Our Hydro-Political
Futures Would Be Impossible, Priorities for
Water Resources Allocation and Management,
Overseas Development Administration (ODA),
London, UK, 1993, pp. 13–26.
[27] A.Y. Hoekstra, The Global Dimension of Water
Governance: Why the River Basin Approach
Is No Longer Sufficient and Why Cooperative
Action at Global Level Is Needed, Water,
Vol. 3, No. 1, December 2010, pp. 21–46,
http://www.mdpi.com/2073-4441/3/1/21/pdf.
[28] A.Y. Hoekstra, Virtual Water: An Introduction,
Proceedings of the International Expert Meeting
on Virtual Water Trade, Value of Water
Research Report Series No. 12, IHE Delft,
the Netherlands, December 12–13, 2002 (published February 2003), Part 1, Chapter 1, pp. 13–23, http://www.waterfootprint.org/
Reports/Report12.pdf.
[29] W.J. Cosgrove and F. Rijsberman, World Water
Vision: Making Water Everybody's Business,
World Water Council, World Water Vision,
Earthscan Ltd., London, UK, March 2000,
see http://www.earthscan.co.uk/?tabid=433.
[30] E-Conference Synthesis: Virtual Water Trade – Conscious Choices, WWC Publication No. 2,
World Water Council, March 2004, http://
www.worldwatercouncil.org/fileadmin/
wwc/Library/Publications_and_reports/
virtual_water_final_synthesis.pdf.
[31] A.Y. Hoekstra and P.Q. Hung, Virtual Water
Trade: A Quantification of Virtual Water Flows
Between Nations in Relation to International
Crop Trade, Proceedings of the International
Expert Meeting on Virtual Water Trade, Value
of Water Research Report Series No. 12,
Part 1, Chapter 2, pp. 25–47, IHE Delft, the Netherlands, December 12–13, 2002
(published February 2003), http://www.
waterfootprint.org/Reports/Report12.pdf.
[32] A.Y. Hoekstra and A.K. Chapagain, Water
Footprints of Nations: Water Use by People
as a Function of Their Consumption Pattern,
Water Resources Management, Vol. 21, No. 1, 2007,
pp. 35–48, http://www.waterfootprint.org/
Reports/Hoekstra_and_Chapagain_2007.pdf.
[33] A.K. Chapagain, A.Y. Hoekstra, H.H.G. Savenije,
and R. Gautam, The Water Footprint of
Cotton Consumption, Value of Water Research
Report Series No. 18, UNESCO-IHE Delft, the
Netherlands, September 2005, http://www.
waterfootprint.org/Reports/Report18.pdf.
[34] The World Factbook 2000, Central Intelligence
Agency, 2000, access via https://www.cia.gov/
library/publications/download/download-2000/
index.html.
[35] Database of the Aral Sea, Bathymetric
Characteristics, CAWATERinfo, Portal of
Knowledge for Water and Environmental
Issues in Central Asia, funded by the Swiss
Agency for Development and Cooperation,
accessed on February 20, 2011, http://
www.cawater-info.net/aral/data/tabs_e.htm.
[36] I. Rudenko and J.P.A. Lamers, Virtual Water
Saves Prized Liquid Asset – Concept Raises Awareness About Dwindling Resource in Arid Region, VOICES – UNESCO in the Asia-Pacific, No. 23, July–September 2010,
p. 18, http://www.cawater-info.net/
library/eng/rudenko_lamers_e.pdf and
http://unesdoc.unesco.org/images/0018/
001888/188813E.pdf.
[37] Commission Guidance Regarding Disclosure
Related to Climate Change, Securities and
Exchange Commission 17 CFR PARTS 211, 231,
and 241 [Release Nos. 33-9106; 34-61469; FR-82],
February 2, 2010, http://www.sec.gov/rules/
interp/2010/33-9106.pdf.
[38] B. Barton, Murky Waters? Corporate Reporting
on Water Risk – A Benchmarking Study of
100 Companies, A Ceres Report, Boston, MA,
February 2010, http://www.waterfootprint.org/
Reports/Barton_2010.pdf.
[39] CDP Water Disclosure 2010, Global Report,
2010 Carbon Disclosure Project, https://
www.cdproject.net/CDPResults/CDP-2010-
Water-Disclosure-Global-Report.pdf.
[40] Water for Business – Initiatives Guiding
Sustainable Water Management in the Private
Sector, Version 2, World Business Council
for Sustainable Development (WBCSD) and
International Union for Conservation of Nature
(IUCN), March 2010, http://www.wbcsd.org/
web/water4business.pdf.
[41] Connecting the Drops Toward Creative Water Strategies – A Water Sustainability Tool,
Global Environmental Management Initiative
(GEMI), June 2002, http://www.gemi.org/
resources/ConnectingTheDrops.pdf.
[42] Collecting the Drops: A Water Sustainability
Planner, Global Environmental Management
Initiative (GEMI), January 2007, http://www.
gemi.org/resources/CollectingDrops.pdf.
[43] 2010 Global Water Tool™: Refreshed for
Risk Assessment, World Business Council
for Sustainable Development (WBCSD),
http://www.wbcsd.org/web/publications/
external/GWT/GWT%20brief.pdf.
[44] M. Berger and M. Finkbeiner, Water
Footprinting: How to Address Water Use
in Life Cycle Assessment?, Sustainability,
Vol. 2, April 2010, pp. 919–944, http://
www.mdpi.com/2071-1050/2/4/919/pdf.
[45] A.Y. Hoekstra, A.K. Chapagain, M.M. Aldaya,
and M.M. Mekonnen, The Water Footprint
Assessment Manual – Setting the Global
Standard, Earthscan, London, UK,
and Washington, DC, 2011, http://
www.waterfootprint.org/downloads/
TheWaterFootprintAssessmentManual.pdf.
[46] J. Morrison and P. Schulte, Corporate Water
Accounting – An Analysis of Methods and
Tools for Measuring Water Use and Its
Impacts, Pacific Institute, Oakland, CA,
2010, http://www.pacinst.org/reports/
corporate_water_accounting_analysis/
corporate_water_accounting_analysis.pdf.
[47] T. Wolstencroft, Transition to a Low-Carbon
Economy – Opportunities and Challenges
of the Emerging Clean Energy Industry,
The Goldman Sachs Group, Inc., 2010, http://
www2.goldmansachs.com/ideas/environment-
and-energy/goldman-sachs/transition-doc.pdf.
[48] The Nestlé Water Management Report, Nestlé
S.A., Vevey, Switzerland, March 2007, http://
www.nestle.com/Common/NestleDocuments/
Documents/Reports/CSV%20reports/Water/
Water_management_report_2006_English.pdf.
BIOGRAPHY
Angelos Findikakis, PhD,
is a Bechtel Fellow and
senior principal engineer
with the BSII Geotechnical
and Hydraulic Engineering
Services group, based in San
Francisco. His expertise lies
in environmental hydraulics,
hydrologic engineering,
groundwater, and water
resources management. In his more than 30 years
with Bechtel, he has supported numerous projects on
surface water and groundwater issues related to safety
analysis, performance assessment, environmental
restoration, and project design. These projects have
included low-level and high-level radioactive waste
repositories, landfills, power stations, mining facilities,
infrastructure, and others.
Since becoming a Bechtel Fellow in 1998, Angelos
has worked on various technical excellence activities
sponsored by the Fellows. He is the corporate
coordinator of the Bechtel Technical Information
Program and the Outstanding Technical Paper
Awards Program, and he led a Fellows initiative on
knowledge management.
Angelos is consulting professor in the Department
of Civil and Environmental Engineering at Stanford
University, where he teaches a class on water
resources management. He is also active in a variety of
professional organizations, including the International
Association of Hydro-Environment Engineering
and Research (IAHR), the American Society of Civil
Engineers (ASCE), and the American Geophysical
Union (AGU). Over the years, Angelos has served
on several professional organization committees and
is currently the vice chair of the Innovation and
Professional Development Division of IAHR and of the
IAHR Task Force on Global Water Security.
The recipient of several awards, including the Straub
Award in 1981 (awarded annually for the best doctoral
dissertation in hydraulic engineering), the ASCE
Hering Medal in 1981, and the ASCE Wesley W. Horner
Award in 2000, Angelos recently finished editing
a volume on groundwater management practices.
He has also authored or coauthored several papers
published in peer-reviewed journals and participated
in the proceedings of the technical conferences or
symposia where they were presented.
Angelos received his PhD and MSc, both in Civil
Engineering, from Stanford University in Palo Alto,
California. He also holds a Diploma in Civil
Engineering from the National Technical University
of Athens, Greece. Angelos is a Professional Engineer
in California.
Who We Are
Bechtel is the world's No. 1 choice for engineering, construction,
and project management.
Our diverse portfolio encompasses energy, transportation,
communications, mining, oil and gas, and government services.
We currently have projects in dozens of locations worldwide, from
Alaska to Australia. No matter how challenging a project or how
remote its location, chances are Bechtel can handle it. That's
because we bring an unmatched combination of knowledge, skill,
experience, and customer commitment to every job. Engineering
News-Record (ENR) has named Bechtel the top U.S. construction
contractor for 13 consecutive years.
While we work for governments and commercial customers, our
projects have helped grow local economies and improve the quality
of life for communities and people around the world. Time and
again our work has demonstrated that the only limits on human
achievement are those that we place on ourselves.
Privately owned with headquarters in San Francisco, we have
offices around the world and 52,000 employees.
What We Do
Airports and seaports
Communications networks
Defense and aerospace facilities
Environmental cleanup projects
Fossil and nuclear power plants
Mines and smelters
Oil and gas field development
Pipelines
Refineries and petrochemical facilities
Roads and rail systems
Ethics
Bechtels culture is grounded in integrity and respect. This means
adhering to the highest standards of ethics. Our reputation as an
ethical company is one of our most valuable assets. We stand by
everything we do.
Quality
At Bechtel, quality means doing the job right the first time. We've
always delivered quality work, and we are continually striving to
improve our performance through Six Sigma.
Safety
Bechtel has a world-class safety program, and it pays off. Nearly
80 percent of our projects complete each year without a lost-time
accident. Our philosophy is simple: every accident, and therefore
every injury, is preventable.
Bechtel Fellows
Chosen for their substantial technical achievement over the years,
the Bechtel Fellows advise senior management on questions related
to their areas of expertise, participate in strategic planning, and help
disseminate new technical ideas and findings throughout the organization.