Requirements for Advanced Simulation of Nuclear Reactor and
Chemical Separation Plants
Disclaimer
This report was prepared as an account of work sponsored by an agency of the United States Government. Neither the United States
Government nor any agency thereof, nor UChicago Argonne, LLC, nor any of their employees or officers, makes any warranty, express
or implied, or assumes any legal liability or responsibility for the accuracy, completeness, or usefulness of any information, apparatus,
product, or process disclosed, or represents that its use would not infringe privately owned rights. Reference herein to any specific
commercial product, process, or service by trade name, trademark, manufacturer, or otherwise, does not necessarily constitute or imply
its endorsement, recommendation, or favoring by the United States Government or any agency thereof. The views and opinions of
document authors expressed herein do not necessarily state or reflect those of the United States Government or any agency thereof,
Argonne National Laboratory, or UChicago Argonne, LLC.
ANL-AFCI-168
by
G. Palmiotti, J. Cahalan, P. Pfeiffer, T. Sofu, T. Taiwo, T. Wei, A. Yacout, W. Yang, A. Siegel, Z. Insepov, M. Anitescu,
P. Hovland, C. Pereira, M. Regalbuto, J. Copple, M. Williamson
Nuclear Engineering Division, Argonne National Laboratory
May 2005
work sponsored by
U. S. Department of Energy,
Office of Nuclear Energy, Science and Technology
Requirements for Advanced Simulation of Nuclear Reactor and
Chemical Separation Plants
Abstract
This report presents requirements for advanced simulation of nuclear reactor and
chemical processing plants that are of interest to the Global Nuclear Energy Partnership
(GNEP) initiative. Justification for advanced simulation and some examples of grand
challenges that will benefit from it are provided.
An integrated software tool whose main components are, whenever possible, based on
first principles is proposed as a possible future approach for dealing with the complex
problems linked to the simulation of nuclear reactor and chemical processing plants. The
main benefits that are associated with a better integrated simulation have been identified
as: a reduction of design margins, a decrease of the number of experiments in support of
the design process, a shortening of the developmental design cycle, and a better
understanding of the physical phenomena and the related underlying fundamental
processes.
For each component of the proposed integrated software tool, background information,
functional requirements, current tools and approach, and proposed future approaches
have been provided. Whenever possible, current uncertainties have been quoted and
existing limitations have been presented. Desired target accuracies with associated
benefits to the different aspects of the nuclear reactor and chemical processing plants
were also given. In many cases the possible gains associated with a better simulation
have been identified, quantified, and translated into economic benefits.
Results reported in the AFCI series of technical memoranda frequently are preliminary in nature and
subject to revision. Consequently, they should not be quoted or referenced without the author’s permission.
Table of Contents
Page
I Introduction................................................................................................................. 6
I.A Case for Advanced Simulation ................................................................................. 6
I.B Challenging Problems That Will Benefit from Advanced Simulation ..................... 8
I.B.1 Safety Case for Sodium Cooled Fast Reactor.................................................... 8
I.B.2 Fuel and Structural Materials in Reactor Operating Conditions............... 10
I.B.3 Chemical Separation and Reprocessing Plant........................................... 11
I.C Characteristics of a Future Advanced Simulation Tool.......................................... 14
II Material Properties.................................................................................................... 17
II.A Background ....................................................................................................... 17
II.B Functional Requirements .................................................................................. 21
II.C Current Tools and Approach............................................................................. 25
II.D Proposed Future Approach ............................................................................... 26
III Neutronics (Core and Fuel Cycle) ........................................................................ 28
III.A Background ...................................................................................................... 28
III.B Functional Requirements ................................................................................. 29
III.C Current Tools and Approach............................................................................. 31
III.D Proposed Future Approach ............................................................................... 36
IV Thermal Hydraulics .............................................................................................. 39
IV.A Background ................................................................................................... 39
IV.B Functional Requirements .................................................................................. 39
IV.C Current Tools and Approach............................................................................. 41
IV.D Proposed Future Approach ........................................................................... 42
V Structural Mechanics ................................................................................................ 44
V.A Background ....................................................................................................... 44
V.B Functional Requirements .................................................................................. 45
V.C Current Tools and Approach............................................................................. 47
V.D Proposed Future Approach ............................................................................... 49
VI Fuel Behavior........................................................................................................ 51
VI.A Background ................................................................................................... 51
VI.B Functional Requirements .................................................................................. 53
VI.D Proposed Future Approach ........................................................................... 56
VII Balance of Plant .................................................................................................... 60
VII.A Background......................................................................................................... 60
VII.B Functional Requirements ............................................................................. 61
VII.C Current Tools and Approach......................................................................... 61
VII.D Proposed Future Approach ........................................... 63
VIII Safety Analysis ..................................................................................................... 66
VIII.A Background ................................................................................................... 66
VIII.B Functional Requirements .............................................................................. 67
VIII.C Current Tools and Approaches ..................................................................... 69
VIII.D Proposed Future Approach ........................................................................... 70
IX Chemical Separation and Processing .................................................................... 72
IX.A Aqueous Processing........................................................................... 72
IX.A.1 Background ............................................................................................... 72
IX.A.2 Functional Requirements .......................................................................... 73
IX.A.3 Current Tools and Approach..................................................................... 74
IX.A.4 Proposed Future Approach ....................................................................... 76
IX.B Pyrochemical Processing ................................................................... 77
IX.B.1 Background ............................................................................................... 77
IX.B.2 Functional Requirements .............................................................................. 78
IX.B.3 Current Tools and Approach......................................................................... 79
IX.B.4 Future Tools and Approach........................................................................... 79
X Sensitivity and Uncertainty Analysis........................................................................ 81
X.A Background ....................................................................................................... 81
X.B Functional Requirements .................................................................................. 82
X.C Current Tools and Approach............................................................................. 83
X.D Proposed Future Approach ............................................................................... 85
XI High Performance Computing Enabling Technologies ........................................ 87
XI.A Background ................................................................................................... 87
XI.B Functional Requirements .................................................................................. 88
XI.C Current Tools and Approach............................................................................. 89
XI.D Proposed Future Approach ........................................................................... 89
XII Conclusions........................................................................................................... 91
References......................................................................................................................... 92
I Introduction
The current design process for nuclear energy systems leads to many inefficiencies that
translate to significant costs as advanced systems are developed. Benefits from the
improved design and performance of the nuclear fuel cycle can be readily identified: a
decrease of design margins, a reduction of lengthy testing programs in support of the
design process, and a shortening of the developmental design cycle. The decrease of
design margin can be achieved only by reducing the uncertainty of the key parameters
that characterize system performance through improved accuracy and validation of the
models used in the design. The reduction of testing programs will be possible only when
a solid and robust simulation tool can be used with confidence through experimental
validation. The shortening of the developmental cycle can be accomplished by the use of
more efficient and integrated design tools that would significantly reduce the engineering
processing steps.
Current nuclear system simulation represents a conglomerate of tools that are uneven in
terms of accuracy and validation and are only loosely coupled (often by human
intervention). Most of these tools were produced long ago (often thirty or more
years ago). The basic methodology was to rely on bench-top experiments leading to
prototype operation leading to full-scale demonstration. The role of costly testing was
essential because of the lack of confidence in the simulation tools and associated
parameters. Moreover, the approach lacked a rigorous, science-based methodology for
evaluating sensitivities and uncertainties of the key parameters and for validating the data
and models used in the design process. Conservative design margins were established a
posteriori (i.e., after some operation of the full-size system) or were defined through
“educated guess” or “expert elicitation,” rather than through a rigorous understanding of
the underlying science.
Today, though, with the advances in modeling and computing we believe that
technologies are mature enough to transition to a science-based approach in order to
make a breakthrough in the way nuclear systems are conceived, designed, and operated.
This could be fostered through a fundamental change in the way nuclear systems are
modeled. Historically, scientific research has been carried out in two main ways:
modeling based on theory and experimentation. With the advent of computers, simulation
has found a role as a third complement to the historical approach. The triad of modeling,
simulation, and experiment is quickly becoming the new backbone of the R&D process.
The progress achieved in computing power allows numerical simulations of complex
phenomena that were unimaginable not long ago. As a consequence, predictive science
has progressed as a complement to empiricism, with key experiments as the essential
instruments to validate the models and simulation tools. Because of the high cost and
long time associated with experimentation, simulation has gained more ground in the
scientific research process. In particular for nuclear systems, the improvement in
understanding of fundamental processes and the progress in simulation capabilities
through integration and multi-physics and multi-scale approaches, when linked to the
huge advances in computational power, make significant technological breakthroughs
achievable.
The ultimate goal is a design with uncertainties that are as low as possible. There are two
major sources of uncertainty. One is related to input physical data. Among these we can
list nuclear cross sections, physical characteristics of materials (e.g., heat capacity,
thermal conductivity, viscosity, etc.), fabrication data, chemical reaction rates, etc. In
general these quantities can be improved either by measurements or by a better industrial
process (fabrication data), but often a limit exists in the level of improvement that can be
achieved. The other source of uncertainty is related to modeling stemming from
approximations made in the computational methodology used in the design process. Here
is where advanced simulation can provide a major benefit. In principle one can hope to
minimize the impact of uncertainties coming from the modeling of
the physical processes. In this report, some examples of existing uncertainties will be
provided with indications of possible improvements thanks to a better, advanced,
simulation. On the other hand a consistent approach, based on a more rigorous and
scientific basis, which allows a reliable propagation of uncertainties among the different
components describing the multi-physics aspect of the phenomena to be simulated, will
ensure a correct evaluation of the impact of the first source of uncertainty coming from
the input data. In the past, a crude approach to uncertainty propagation has often
led to margins that are far too conservative.
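Such a rigorous propagation can be expressed, to first order, through the well-known "sandwich" rule, in which the variance of a response is obtained from the sensitivity vector S of the response to the input data and the covariance matrix C of those data. The following minimal sketch (Python, with purely illustrative numbers for the sensitivities and covariances) shows the mechanics:

    import numpy as np

    # Relative sensitivities dR/R per dp/p of a response (e.g., k-eff) to three
    # input data items (e.g., fission, capture, scattering) -- illustrative only.
    S = np.array([0.8, -0.3, 0.1])

    # Relative covariance matrix of the input data (here uncorrelated).
    C = np.diag([0.02**2, 0.04**2, 0.10**2])

    rel_var = S @ C @ S          # first-order "sandwich" rule: var(R) = S^T C S
    print(np.sqrt(rel_var))      # relative 1-sigma uncertainty of the response

Correlations between data items enter through the off-diagonal terms of C, which is where a careless, fully conservative treatment (adding all contributions in absolute value) inflates the margins mentioned above.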
or as a replacement when fundamental modeling is not available or too complex to be
adopted.
It is quite difficult to perform a precise cost/benefit analysis for advanced simulation, but
the final goal, as previously indicated, is to be able to design a system with highly
decreased margins of uncertainty, reduced number of supporting tests, and shorter design
development time. Of course, all these, if achieved, would translate into huge economic
savings; but, as the famous commercial says, there are things that are priceless, and in our
case this is the gain we will achieve in better understanding the physical
phenomena and the related underlying fundamental processes.
Today, several issues exist that need to be resolved in order to ensure the technical and
economical viability of nuclear energy systems and fuel cycles that are envisaged in the
GNEP program. We provide here a series of examples of challenging problems that will
benefit from advanced simulations and help toward the goal of the viability of those
nuclear energy systems. This is not an exhaustive list, but it provides a good illustration
of where action is needed to solve these problems.
As is well known, the proposed ABTR for the GNEP program, and subsequently the
ABR, are reactors with a fast neutron energy spectrum, cooled by liquid metal,
specifically sodium. The choice of the fuel has not been made between metal and oxide
types. In the following we will focus more on the metal fuel case, but it is
worth noting that similar simulation challenges exist for the safety analysis of oxide
fuel. For the metal fuel reactor it has been claimed that passive reactivity shutdown can
be achieved for any type of unprotected whole-core accident. But even if this
cannot be demonstrated, the safety behavior has to be benign in response to
several types of initiators, e.g., unprotected loss of flow (LOF), unprotected loss of heat
sink (LOHS), and unprotected rod runout transient overpower (TOP). Several
references [1-4] deal with this issue, and the material presented in the following is mostly
taken from these references.
The coolant mixed mean outlet temperature reached asymptotically upon passive
shutdown in each of these unprotected events is a useful figure of merit for assessing
reactivity shutdown effectiveness. These asymptotic temperatures are found to depend on
ratios of reactivity feedbacks and, for the TOP, on a ratio of burnup control swing to
reactivity feedbacks. It is important to observe that the reactivity feedback coefficients
are very small. In contrast to the multiple tens of dollars of shutdown reactivity
embedded in control rod scram, the passive shutdowns bring the core to zero power by
balancing off reactivities in the range of cents or several tens of cents. We now briefly
illustrate the sequence of physical phenomena involved in these types of accident
scenarios in order to show how complex their simulation can be.
An unprotected LOHS accident is postulated to start with a loss of heat rejection at all
steam generators with the primary and intermediate loop pumps continuing to run. It is
also assumed that control rods fail to insert so that the reactor power changes only in
response to thermal reactivity feedbacks. As the core inlet temperature rises in response
to the loss of heat sink, radial core expansion introduces a negative reactivity of several
tens of cents, causing the power to be reduced to near zero. The coolant temperature rise
collapses to a small value, and the final asymptotic state is achieved when the positive
reactivity introduced by bringing the power to zero is balanced by the negative reactivity
introduced by raising the core average (nearly isothermal) temperature.
In the case of an unprotected LOF accident the initiator is assumed to be the total loss of
offsite power in conjunction with a failure of the reactor scram. As the rate of flow
through the core drops, the outlet temperature of the coolant rises. With this temperature
increase, the thermal expansion of the above-core structure spreads the core, increasing
the axial neutron leakage. This makes the reactivity negative, reducing reactor power.
Feedbacks from decrease in coolant density, axial expansion of the fuel and control rod
drivelines, and Doppler effect in the fuel superimpose on the feedback from radial
expansion of the core. The net effect of all these passive feedbacks, none of which
exceeds a few tens of cents, is negative.
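The asymptotic states reached in these scenarios are often estimated with a quasi-static reactivity balance of the kind used in the metal-fuel safety literature. As a minimal sketch, assuming the textbook balance 0 = (P-1)A + (P/F-1)B + dT_in*C + drho_ext with entirely made-up feedback coefficients (this is not a calculation taken from references [1-4]):

    def asymptotic_power(A, B, C, F=1.0, dT_in=0.0, drho_ext=0.0):
        """Solve 0 = (P-1)*A + (P/F-1)*B + dT_in*C + drho_ext for the
        normalized asymptotic power P, given normalized flow F, inlet
        temperature change dT_in (K), and external reactivity (cents).
        A, B are integral feedback decrements (cents); C is in cents/K."""
        return (A + B - dT_in * C - drho_ext) / (A + B / F)

    A, B, C = -30.0, -50.0, -0.4                   # illustrative values, in cents
    print(asymptotic_power(A, B, C, F=0.05))       # unprotected LOF: flow down to 5%
    print(asymptotic_power(A, B, C, dT_in=150.0))  # unprotected LOHS: inlet +150 K

Note how the balance involves only feedbacks of a few tens of cents, which is exactly why their uncertainties matter so much.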
All these examples illustrate the complexity and interrelation of events that have to be
represented in these accident scenarios. Multi-physics representation must necessarily
include neutronics, thermal-hydraulics, and thermo-mechanics. Besides the geometrical
complexity, the time scales are different for the phenomena to be described, including
10^-7 s or longer for prompt neutron generations, 10^-4 s to minutes for thermal-
hydraulics, seconds to minutes for delayed neutrons and mechanical events (e.g., rod
ejections), and minutes to hours for the complete description of accidents (e.g., LOHS).
In addition to these complexities, one has to consider uncertainties. Table I.I (Reference
3) illustrates uncertainties on reactivity coefficients coming from the approximations and
data uncertainties of different fields (neutronics, thermal-hydraulics, thermo-
mechanics). Even though these uncertainties can often cancel out, one has to be very careful to
treat them correctly. In the end, all these uncertainties translate into ad hoc factors (e.g.,
hot channel factors for hot spot determinations) and safety operating margins that
penalize the overall economics of the plant. Typical hot channel factors for sodium
cooled reactors can add up to 1.5, and safety operating margins of 115% are often
adopted. It is indubitable that for the safety case, advanced simulation can on the one
hand help to better understand the phenomena in play, giving a more robust case before
the safety authority, and on the other hand help to reduce the safety margins. Even a 1%
reduction, when applied across the total fleet of reactors, can translate into savings of
billions of dollars.
TABLE I.I. Uncertainties on Reactivity Coefficients (Reference 3)
It will be essential for the ABTR (and ABR) project to qualify a new fuel form in a
reasonable amount of time, establishing a firm basis for fuel reliability with specifications
that are valid not only at the time of fabrication, but also during operation and off-normal
conditions. This is especially true for new fuel forms that have yet to be subjected to
irradiation tests. For these advanced reactors, this includes fuels that will contain a non-
negligible amount of minor actinides resulting from the multiple recycling in reactors.
The traditional design and implementation approach, of the so called “cook-and-look”
type, involves fabrication of samples or full pins of the new fuel, measurements of
physical characteristics under a few potential operating and off-normal conditions
(temperature, stress conditions, interactions with coolant and cladding, etc.), and finally
long-term, high-fluence neutron irradiation to study degradation and failure. This
approach requires a large amount of money and many years. When irradiation results are
finally available, the new fuel under study may already be obsolete because of
considerations that may be unrelated to the fuel design itself.
Using modern methods and powerful computing tools, it might be possible to screen
proposed fuels without component testing. For example, molecular dynamics (MD)
simulation has the potential to predict thermo-physical behavior, study defect problems,
and provide insights into local, microscopic-scale degradation mechanisms. Kinetic
Monte Carlo (KMC) and dislocation dynamics (DD) simulations can also be used to
supply insights into defect interaction, long-term defect stability, and irradiation
behavior. These simulations could provide the transition from the microscopic to the
mesoscopic scale. Fundamental modeling may also give insights into migration of fission
products and interaction of fuel with cladding and coolants, but would likely need to be
supplemented by phenomenological modeling at this time to describe these complex,
non-linear processes.
Advanced nuclear reactors, like the ABTR, will demand advanced materials for which
little physical, mechanical, and thermodynamic data exist. As with nuclear fuels,
structural materials under extreme temperatures and radiation fields represent a modeling
challenge that will be overcome only through a better understanding of their behavior.
Structural behavior of irradiated metallic alloys, for instance, depends non-intuitively on
temperature. Non-equilibrium, multi-component, multi-phase systems evolve over time
in ways that can promote creep and crack propagation. Dislocation microstructures are
poorly understood. Similarly, the interaction of vacancies and interstitials with minor
alloying ingredients and decay products is inadequately modeled today. Data are non-
existent for modern engineered materials such as nano-scale coatings and composites.
Fuel and structural materials designed with improved performance under normal and
transient conditions, combined with improved core designs, can reduce temperature
peaking in the reactor, which can allow higher average operating temperatures, improved
thermal efficiency for electricity production, and reduced safety margins. The modeling
and simulation approach does not eliminate the need for experimentation, but it allows for a
more judicious selection of expensive and time-consuming experiments that would need
to be performed.
Disposal of spent fuel, proliferation concerns, and lack of a closed fuel cycle are principal
impediments to the future viability of the nuclear energy option. The pyrochemical
technology or proliferation-resistant variants of aqueous solvent extraction options such
as UREX+ (the option proposed for the Engineering Scale Demonstration, ESD), can play
an important role in reducing the hazards of spent fuel by separating uranium and the
transuranic actinides, which in turn may be transmuted in a fast reactor. The most
efficient way to accelerate the development of the two processes to a commercial scale is
to formulate physical models of the underlying chemical and transport processes.
Complex, 3-D, time-dependent mixing and electric currents representing multi-
component fluid-dynamics, chemical reactions, and electromagnetic effects must be
assessed to confirm and optimize the treatment process. Again, an integrated multi-
physics simulation offers a radically different approach to designing, testing, and
implementing these processes.
Over the past decades, ANL has developed a pyrochemical process for treatment of spent
nuclear fuel and demonstrated its feasibility for treating metallic fuel from EBR-II. Over
this and the coming decades, the challenge will be to expand this technology for treatment of
commercial spent fuel at a much greater scale. Efforts in designing past and current
generation electrorefiners have been almost exclusively based on experiments. The
design of next generation continuous-throughput electrorefiners and oxide reduction
devices with greater capacity, improved economics, and better performance will require
sophisticated computational tools based on first principles. By bringing the advanced
computing capabilities together with historical chemical technology and analytical
process modeling expertise, the design cycle for such advanced systems can be reduced
to several months, as opposed to decades.
Figure I.1. Integrated 3-D transient analysis of ANL electrorefiners: (a) Mixing and
species transport under “mass transport limited” conditions, (b) corresponding
electric potential field and tertiary current distributions between the electrorefiner
components.
Under the Department of Energy's AFCI, ANL is leading development of the UREX+
aqueous separations, a multi-step process for separating out the high-risk elements of
spent nuclear fuel. ANL has successfully demonstrated the entire process in hot cells and
gloveboxes and is preparing for scale-up demonstration.
A separation plant, like the Engineering Scale Demonstration, includes several major
processing steps. Once sufficiently cooled, the fuel must be disassembled and the
elements chopped. In the case of aqueous reprocessing, the chopped fuel is dissolved in a
concentrated nitric acid solution. Cladding and any undissolved solids are separated from
the dissolved fuel solution for further treatment and eventual disposal. The dissolved fuel
is treated in a series of solvent extraction processes to separate different components of
the fuel. The products of the solvent extraction are streams containing specific
components of the fuel in acidic solutions. Each component in solution is concentrated
and solidified for further processing to generate a final product—either fuel or waste
forms. Each of these steps is currently poorly simulated, and in many cases recipes are
applied, with testing providing confirmation of chemical rates or losses. Advanced
simulation of these processes, based on a deeper understanding of their underlying
science, will benefit their optimization. Accurate modeling has the additional value of
detecting diversions, criticality hazards, or possible effluent composition deviations
outside specifications.
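As one illustration of the kind of model involved, a single solvent-extraction bank can be idealized as N countercurrent equilibrium stages characterized by a distribution ratio D and an organic-to-aqueous flow ratio. The sketch below (Python, with entirely hypothetical values for D, the flow ratio, and the stage count) solves the steady-state stage balances for the fraction of a solute leaving in the aqueous raffinate:

    import numpy as np

    def raffinate_fraction(D, OA, N, x_feed=1.0):
        """N countercurrent equilibrium stages; D = y/x at equilibrium,
        OA = organic/aqueous flow ratio. Aqueous feed enters stage 1,
        fresh solvent enters stage N. Stage i balance (scaled by the
        aqueous flow): x[i-1] + E*x[i+1] = (1 + E)*x[i], with E = D*OA."""
        E = D * OA
        M = np.zeros((N, N))
        b = np.zeros(N)
        for i in range(N):
            M[i, i] = 1.0 + E
            if i > 0:
                M[i, i - 1] = -1.0      # aqueous stream in from stage i-1
            if i < N - 1:
                M[i, i + 1] = -E        # organic stream in from stage i+1
        b[0] = x_feed                   # aqueous feed enters the first stage
        x = np.linalg.solve(M, b)
        return x[-1] / x_feed           # raffinate leaves the last stage

    print(raffinate_fraction(D=5.0, OA=0.5, N=4))   # ~1.6% of solute remains

Real flowsheet models must of course add stage efficiencies, speciation, and acid-dependent distribution ratios, which is precisely where the deeper understanding called for above comes in.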
make it possible to put together a new simulation system with extraordinary capabilities.
This will result in:
Figure I.2 shows how the future software tool should be structured, with the links
(couplings) among the different components (building blocks). The coupling could be
strong (e.g., the coupled calculation of the reactor core), weak, as in the case between the
core and the balance of plant, or very weak, as between the reactor and the reprocessing
plant. Given these characteristics for the future software tool, in the following chapters
we present the requirements that are needed for each individual building block that
include:
• Material Properties
• Neutronics (Core and Fuel Cycle)
• Thermal-Hydraulics
• Structural Mechanics
• Fuel Behavior
• Balance of Plant
• Safety Analysis
• Chemical Separation and Reprocessing
• Sensitivity and Uncertainty Analysis
• High Performance Computing Enabling Technologies
In general, for each item, background information is first provided, then functional
requirements are specified, current tools and approaches are listed, and finally proposed
future approaches are indicated. Whenever possible, current uncertainties are quoted and
existing limitations are presented. Desired target accuracies, with associated benefits to
the different aspects of the reactor and chemical processing plant design, are also given.
II Material Properties
II.A Background
Better understanding of basic physical properties represents the area where probably the
most progress can be made and where the biggest potential payoffs exist for advanced
simulation. Properties including thermophysical (e.g., thermal conductivity and phase
diagrams), mechanical (e.g., elastic moduli, ductility), and chemical (e.g., corrosion and
reaction rates) have to be determined under static and dynamic conditions.
Materials science of nuclear fuel elements studies structures, properties, and applications
of nuclear materials in future power plants. Nuclear materials include functional,
structural, and fuel materials. Materials science is based on physics, chemistry, and
mathematics; and for that reason, it has no general governing equations. Computational
materials science (CMS) studies materials at an electronic, atomic, and at macroscopic
levels. Molecular Dynamics (MD) and Monte Carlo (MC) are the most powerful methods
of CMS that are capable of studying the materials’ properties at a time scale of one
atomic vibration, 10^-12 s. MD and MC simulations of nuclear fuel materials properties
require an interatomic potential that represents the energy and forces associated with a
configuration of atoms. To be usable for complex geometries and/or statistical averages,
the potential needs to be computed rapidly.
Many interatomic potential functions have been developed and are reported in the
literature, but none is guaranteed to be adequate for simulating the wide range of properties
required for reactor materials. With pair-wise interactions, some properties are necessarily wrong.
With many-body potentials (such as glue, Finnis-Sinclair, embedded atom, modified
embedded atom and effective medium theory potentials) many can be fitted, provided
that “correct” values are available. These types of potentials have been the “state of the
art” for twenty years.
The most efficient potentials in materials research are based on the Effective Medium
Theory (EMT) which closely resembles the Density-Functional Theory (DFT) approach.
As DFT represents an exact solution of the quantum mechanical equations, EMT makes it possible
to derive a potential directly, as if from first principles. However, many potential functions
only obtain the functional form from EMT, while in practice they are constructed purely
by fitting to a vast set of experimental data.
♦ Metal EAM potentials
The most widely used method is called the Embedded-Atom Method (EAM) [5-9]. EAM
typically consists of two terms which are the attractive many-body part and a repulsive
pair-wise potential.
$$E_{tot} = \sum_i F_i\big(n_i(R_i)\big) + \frac{1}{2}\sum_{ij}\Phi_{ij}(R_i - R_j), \qquad n_i(R_i) = \sum_{j\neq i} n^{a}(R_i - R_j).$$
EAM is similar to EMT and DFT. Φ is typically obtained by fitting to a large set of
experimental data, such as lattice parameter, sublimation, vacancy-formation energies,
elastic constants, and energy difference between BCC and FCC lattices. F is a universal
function and it also works for alloys. Here n(R) is the total electron density, and is
obtained as a rigid superposition of the atomic densities of the neighboring atoms.
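A schematic implementation of this energy expression helps make the structure concrete. The sketch below (Python/NumPy) uses toy functional forms for F, Φ, and the atomic density n^a; a real EAM potential would use the tabulated, fitted functions of [5-9]:

    import numpy as np

    def eam_energy(positions, F, phi, na):
        """EAM total energy: E = sum_i F(n_i) + 1/2 sum_{i!=j} phi(r_ij),
        with host density n_i = sum_{j!=i} na(r_ij)."""
        diff = positions[:, None, :] - positions[None, :, :]
        r = np.linalg.norm(diff, axis=-1)
        np.fill_diagonal(r, np.inf)          # exclude self-interaction
        n_host = na(r).sum(axis=1)           # superposed atomic densities n_i
        e_pair = 0.5 * phi(r).sum()          # pair-wise repulsive term
        return F(n_host).sum() + e_pair

    # Toy forms, purely illustrative -- not a fitted potential:
    na  = lambda r: np.exp(-2.0 * r)         # atomic density tail
    phi = lambda r: np.exp(-3.0 * r) / r     # screened pair repulsion
    F   = lambda rho: -np.sqrt(rho)          # embedding function

    atoms = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
    print(eam_energy(atoms, F, phi, na))

Because evaluating the energy requires only pair sums, forms like this satisfy the rapid-computation requirement noted above.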
EAM potentials are excellent models of metallic bonding for simple (closed-shell)
metals, such as copper or gold, and have been used extensively since their creation in
1984. Excellent potentials exist for several FCC metals (e.g. Cu, Ag, Ni and Al).
1. FCC metals - Ni, Cu, Pd, Ag, Pt, Au, Al and all alloys; Cu, Ti and their alloys; Ni, Cu,
Rh, Pd, Ag, Ir, Pt, Au, Al and Pb. The big exception is surfaces, which require a
longer-range interaction.
2. HCP-metals – i) Hf, Ti, Mg and Co. ii) Be, Y, Zr, Cd and Zn. iii) Ti, Zr, Co, Cd, Zn
and Mg. iv) Mg, Ti and Zr.
3. BCC-metals - Fe, V, Nb, Ta, Mo and W. Li, Na, K, V, Nb, Ta, Cr, Mo, W and Fe.
4. Metal-hydrogen potentials: H-Ni
5. Covalent solids: Si, C, C-H, Oxygen
6. Actinides: U, Pu, Np, Am, Oxides, Nitrides
7. Lanthanides: La
A new Modified EAM (MEAM) potential approach has been recently developed by M.
Baskes and co-workers at Sandia [10, 11]. The main difference from EAM is that
MEAM depends on the orientation angles between two and three neighboring ions, and
that makes it similar to other “cluster” potentials, like Brenner or Tersoff potential
functions for silicon and carbon.
Oxides have more complicated potentials than metals. The most successful potential,
which contains partially ionic inter-ionic terms with additional covalent forces, was introduced
for a U, Pu dioxide model by Kawamura et al. [16].
The motivations to study properties of nuclear fuel and structural materials by classical
MD have increased in view of the fact that ab-initio MD methods failed to reproduce the
Pu phases, which instead were obtained by the Modified EAM potential developed for
plutonium by M. Baskes [10].
This is important since Pu will be present in expected fuel material to be used in future
generations of fast spectrum nuclear reactors.
In MD, the classical equations of motion are integrated to obtain dynamical evolution of
a system of atoms. Accurate integration requires time steps in the femto-second range,
limiting the total simulation time to less than a microsecond on today's processors. Direct
MD is a powerful tool, giving the exact dynamical picture of a many-body system with
an interatomic potential which should be known beforehand from other theoretical or
experimental sources.
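Such integrations almost universally employ a symplectic scheme such as velocity Verlet. A minimal, self-contained sketch (Python/NumPy; a simple harmonic tether stands in for a real interatomic potential):

    import numpy as np

    def velocity_verlet(pos, vel, force, mass, dt, steps):
        """Advance Newton's equations with the velocity-Verlet scheme;
        force(pos) returns the force array for a configuration."""
        f = force(pos)
        for _ in range(steps):
            vel += 0.5 * dt * f / mass       # half-kick
            pos += dt * vel                  # drift
            f = force(pos)
            vel += 0.5 * dt * f / mass       # half-kick with new forces
        return pos, vel

    force = lambda x: -1.0 * x               # toy force, not a real potential
    pos = np.array([[1.0, 0.0, 0.0]])
    vel = np.zeros_like(pos)
    pos, vel = velocity_verlet(pos, vel, force, mass=1.0, dt=1e-2, steps=1000)

The femtosecond step quoted above is set by the stiffest vibration in the system; the total step count, not the per-step cost, is what bounds the reachable simulation time.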
All fuel oxides have ionic crystalline structure of the CaF2 structure type. Typically
simulation is performed with N = 324 ions (108 cations, 216 anions). Large-scale
simulation is in high demand (M. Baskes, 10^5 ions [11]).
Table II.I summarizes the results of MD simulation studies of oxide nuclear fuels and
structural materials obtained in [10-19].
Table II.I Nuclear materials properties obtained by MD simulations

UO2: heat capacity; thermal conductivity (the lattice parameter was used to adjust the potential); thermal expansion coefficient; density and compressibility; viscosity; super-ionic conduction of oxygen ions; Bredig transition; peak in heat capacity.

Pu(U)O2: molar specific heat; thermal conductivity; Bredig transition (anomaly in specific heat); Vegard's law.

AmO2, NpO2 (minor actinides), T = 300-3500 K: lattice parameter; thermal expansion coefficient; compressibility; heat capacity; thermal conductivity.

Zr: HCP(α)-BCC(β) phase transition; defects; surface properties of α-Zr.

Radiation defects: vacancy formation energies (anionic, cationic); interstitial formation energy; displacement threshold energies for U, O; Frenkel pair formation energy (MEAM for Pu); Schottky energy.
Atomistic simulations of nuclear fuels have several advantageous features.
MD is capable of calculating many parameters with high accuracy. The following
parameters have been obtained: viscosity, thermal conductivity, compressibility, and heat
capacity of UO2. However, there are still unresolved problems and
limitations of this method. For example, MD still needs adjustment to a lattice parameter,
and MD simulations have been done mainly for single-crystalline materials, although
experiments use polycrystalline samples, which could be a source of simulation errors.
Grain boundaries (GB) are internal interfaces formed when two crystals that are misoriented
relative to each other are brought into close contact. Atomistic studies of GBs are very
important to materials science and engineering as in reality all materials are poly-
crystalline and their mechanical properties such as strength, ductility, fatigue, and
fracture, and their kinetic properties such as diffusion coefficients are mainly defined by
the GB structure.
As described earlier, advanced simulation could provide a screening tool for candidate
fuels and materials using techniques such as molecular dynamics, kinetic Monte Carlo,
and dislocation dynamics. Basic property simulation needs to focus on establishing the
degree of accuracy and the practical limitations among the possible levels of simulation
(e.g., fine grain, mesoscale, atomistic) and the impact that advanced simulation has on
predictive capabilities and reduction of uncertainties. An approach that combines
phenomenological and fundamental modeling will be necessary in cases where the basic
science needs to be better understood. Regardless, modeling of basic properties is
prerequisite to the development of an integrated software tool of the type proposed in this
report.
There are several phenomena that represent real grand challenges for simulation in
treating nuclear fuels and structural materials. These include:
o Create a database of nuclear fuel properties and a library of existing
atomistic codes (coupling with fuel performance tasks)
In Figure II.3, a roadmap indicates the short- and long-term goals for the material
properties simulation activity. The multiscale aspect of the material simulation problem
is reflected in the two major phases that can be identified for the atomistic simulation
tasks.
Fig.II.3. Materials properties roadmap shows short- and long-term goals for
materials research for advanced nuclear fuels, tasks for code development and the
ultimate goal of the project – prediction of new properties for fuel
performance/integrity.
First Phase
• Databases and libraries of nuclear materials (properties, potentials, codes), e.g.,
databases such as CALPHAD and FUELBASE
• Interatomic potential development (EAM, MEAM, ADP, ab-initio) – there is a
need for a database of materials properties: density, bulk and shear moduli,
elastic constants, lattice parameter, linear expansion coefficient, cohesion energy,
and vacancy formation energy. Among the first-principles MD codes (~10), Wien2k
may be helpful for solving actinide problems; the other existing codes cannot be
applied to actinides. Existing ab-initio, EAM, and rigid-ion interatomic forces need
to be improved – various methods exist that take into account polarization,
angular dependence, etc., enabling accurate potentials.
• Kinetic MC, meso-scale approach
• Build a large-scale MD code
Second Phase
Because a global minimization procedure for finding the velocity field requires inversion of a
large sparse matrix at each time step, the method is restricted to the study of small
systems. Therefore, to extend the system to a realistic size, a stochastic
approach that minimizes the variational functional, based on the Velocity Monte-Carlo method
developed by Cleri, can be used [21, 22].
The time scales of fuel irradiation phenomena include shock-wave (SW) generation, SW
propagation, amorphization, decay of amorphous material, thermal relaxation, and
crystallization; these processes would take at least 10 ns. Comparable to this is the time
for defect formation, defect accumulation, fission gas emission, void and cavity
formation, and bubble nucleation. After a single energetic impact, these processes are
followed by viscous flow, melting, diffusion, and re-crystallization of the
irradiated surface area.
However, the estimated simulation time of 1-10 ns for the grain-boundary evolution is
much longer. Therefore, the overall simulated time should be ~10 ns. For an MD time step
of 1 fs, such an atomistic MD simulation would need 10 million time steps.
Assuming that the execution time is linear in the number of atoms, we can estimate the
CPU execution time for 50 million atoms as ~40 sec per time step. If the number of processors in a
future petaflops machine could be increased by a factor of 1000, one would have 40
milliseconds per MD time step for a system of 50 million atoms. Such a simulation would
run 100 hours on a petaflop machine. The MD simulations are also realistic for existing
teraflop machines like Blue Gene, with its record performance of
280.6 teraflops; this task would take ~3000 hours, or about 4 months, on the Earth Simulator [25].
MD simulations of realistic advanced nuclear fuels and materials irradiated with
neutrons would require the development of better algorithms and optimization, and they are well
suited to the computing power of future large-scale supercomputers.
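The arithmetic behind such estimates is simple enough to script; the sketch below reproduces the order-of-magnitude numbers quoted above (all inputs are the assumptions stated in the text):

    sim_time = 10e-9                 # required simulated time: ~10 ns
    dt = 1e-15                       # MD time step: 1 fs
    steps = sim_time / dt            # ~1e7 time steps

    t_step_now = 40.0                # s/step for 5e7 atoms (assumed above)
    speedup = 1000.0                 # assumed petaflop-class gain

    wall_hours = steps * (t_step_now / speedup) / 3600.0
    print(f"{steps:.0e} steps, ~{wall_hours:.0f} hours")   # ~1e7 steps, ~100 hours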
Ab-Initio Tools
Numerous ab-initio tools that are applicable to solving various problems of crystalline
materials are available. Among the ab-initio packages we can mention VASP, FLAPW,
Paratec, PEscan, PEtot, PWscf, Siesta, WIEN2K, ABINIT, AL_CMD, CHARMM,
DL_POLY, NWChem, and TBMD. Most of them are commercial, such as Wien2k,
VASP.
The Wien2k software package can also be used for ab-initio calculations of the
interatomic EAM potentials and of the defect fine structure in actinides and other nuclear
materials. Testing and purchase of the VASP ab-initio software package, which has a
better interface than Wien2k, is planned. Ordering visualization packages
such as BALSAC and XCrysDen that work with Wien2k will improve the overall
productivity of simulations.
The calculations of the diffusion coefficients of a polycrystalline metal, for various types
of twist grain boundaries, with and without an open surface are useful for the fuel
performance and integrity codes.
Quantum chemistry codes use simulation methods for studying the diffusion-controlled
chemical reactions in dense media. This is essential for the high-temperatures and
pressure conditions in the fuel core. Moreover, these codes enable simulations of the
chemical reactions of nuclear fission gases with the cladding materials.
The hybrid MD code HyDyn consists of two main blocks and is applicable to studying
radiation effects in solids. The inner part relies on MD simulation, while the outer one,
using a finite-difference description of a continuum, correctly takes into account
boundary effects. This methodology achieves a very significant gain in total
computing time.
The simulation tasks for nuclear fuel material development are very
important and very broad. A plan can be defined to complete short- and long-term tasks,
to build a nuclear property roadmap, and to establish a feasible schedule for the tasks.
The reasons for utilizing large-scale computing are manifold. The aim is to gain better
understanding of the basic physical properties of single-, poly-crystalline, and liquid fuels
and surfaces under intense neutron and ion irradiation. It will be necessary to study the
effects of interfacial strains and microstructural constraints; the kinetics
versus the thermodynamics of amorphous vs. crystalline fuels; and the effects of interfacial strains,
thickness, microstructure, and composition on thermo-mechanical, transport, and electrical
properties.
• Radiation defects. These include defects, damage, fatigue, and
aging problems of nuclear fuels and high-temperature structural materials.
• Benchmarking
• Parallel implementation
• Incorporate dislocations into GB mesoscale model.
• Incorporate electronic properties into the classical MD: the potential functions
will be corrected “on the fly” during the run of the main MD code. Electronic
properties of GBs for polycrystalline materials [27-29]
III Neutronics (Core and Fuel Cycle)
III.A Background
Neutronics is the discipline devoted to the analysis of the main physics processes of a
nuclear reactor. The governing equation for neutronics is the integro-differential
Boltzmann equation for neutron transport, which is a linear equation requiring the
treatment of six independent variables, three in space, two in angle and one in energy, for
time-independent problems. The difficulty in obtaining accurate solutions for problems in
reactor core physics, shielding and related applications is further aggravated by a number
of factors. The nuclear data (i.e. the neutron cross sections) frequently fluctuate rapidly
over orders of magnitude in the energy variable. The neutron population is often sharply
peaked in a particular angular direction, and those directions may vary strongly in space
and energy. Finally, the geometric configurations that must be addressed are complex
three-dimensional configurations, with many intricate interfaces resulting from arrays of
fuel rods, coolant channels, and control rods, as well as reflectors and shielding
penetrated by ducting and other irregularities. A great deal of effort has been expended in
developing computational methods to deal with these problems. They fall into two
classes: Monte Carlo and deterministic. Each has its advantages and limitations.
Monte Carlo methods follow individual neutrons, using random numbers to generate
distances between collisions, and energy transfer and direction change at the collision
sites. These methods are able to utilize directly cross section data that is continuous in
energy, and they are able to treat complex three dimensional geometries. For geometrical
configurations that can be treated by deterministic methods, Monte Carlo calculations
tend to be substantially more expensive, however, and historically have been the method
of last resort for geometric configurations too complicated for deterministic methods to
treat. While Monte Carlo methods can calculate integral quantities (e.g., the
multiplication factor) in a reasonable amount of time, a major weakness is the difficulty
of obtaining large enough statistical samplings of neutron histories to calculate detailed
distributions of neutron populations in space and energy with adequate precision. Such
distributions are essential for determining detailed spatial distributions of power, fuel
depletion, actinide buildup, temperature feedback, and other phenomena that are essential
to the design and operation of power reactors. A similar argument also applies to
major safety-related reactivity coefficients (control rods, Doppler coefficient, local
coolant void, etc.), for which very small variations of the fundamental eigenvalue have to be
calculated. For similar reasons, burnup calculation is quite challenging (owing to the difficulty
of propagating local variances of density variations), and time-dependent calculation with
thermal feedback is impractical when performed with a stochastic methodology.
Furthermore, while the Monte Carlo codes allow treating the energy variable in a
continuous way (a clear advantage over the multigroup approach), they have the
drawback that they cannot calculate an adjoint solution (except in a multigroup mode),
which is needed for the sensitivity analyses that are today so important for the reactor designer in
performing uncertainty evaluations or optimizing design parameters. Finally, the major
disadvantage of the Monte Carlo method is associated with its stochastic approach that
disallows establishing any systematic extrapolation, while with deterministic codes a
hierarchical approach permits establishing trends and deducing theoretically correct
results.
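To make the statistical-precision point concrete, the following one-speed Monte Carlo sketch (Python; illustrative only, far removed from production codes that track continuous-energy physics in full 3-D geometry) estimates the transmission probability through a homogeneous slab with isotropic scattering:

    import math, random

    def slab_transmission(sigma_t, sigma_s, thickness, histories):
        """Follow neutron histories through a 1-D slab; returns the
        transmission probability and its 1-sigma statistical error."""
        leaked = 0
        for _ in range(histories):
            x, mu = 0.0, 1.0                   # beam source on the left face
            while True:
                # sample a free flight from the exponential distribution
                x += mu * (-math.log(1.0 - random.random()) / sigma_t)
                if x >= thickness:
                    leaked += 1                # transmitted
                    break
                if x <= 0.0:
                    break                      # reflected back out
                if random.random() < sigma_s / sigma_t:
                    mu = 2.0 * random.random() - 1.0   # isotropic scatter
                else:
                    break                      # absorbed
        p = leaked / histories
        return p, math.sqrt(p * (1.0 - p) / histories)

    print(slab_transmission(sigma_t=1.0, sigma_s=0.5, thickness=5.0, histories=100_000))

Halving the statistical error requires four times as many histories, which is precisely why finely resolved space-energy tallies, and small reactivity differences, are so expensive to obtain stochastically.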
The ad-hoc assumptions in such methods have been fine-tuned against experiments and
operating experience to obtain acceptable results for existing reactors. However, they are
prone to error, particularly when neighboring fuel rods or fuel assemblies have
significantly different compositions, and are sometimes even less reliable as new reactor
designs are considered.
The difficulty in obtaining solutions to this artificially simplified reactor problem with a
coarse treatment of energy and a quite limited spatial domain points to how far
current reactor physics computational methods are from the ideal. In the deterministic
case, that ideal is the elimination of the need for group collapse and spatial
homogenization approximations, and the treatment of the entire space-angle-energy phase
space with sufficiently fine grained levels of discretization to obtain accurate results.
The most important parameters that a neutronics code has to calculate are the main
eigenvalue, also called the multiplication factor, and the associated neutron flux
distribution, which is subsequently used in post-processing to evaluate several other
quantities of interest such as power distribution, damage rates, specific reaction rates,
etc. The required calculation types include:
• Core calculations
• Reactivity coefficients and kinetics parameters calculations
• Shielding calculations
• Burnup calculations
• Kinetics calculations
• Out-of-Pile and Fuel Cycle (Decay Heat) related calculations
The last three are time-dependent calculations. The burnup and out-of-pile calculations
could, however, rely on static calculations using different types of quasi-static
approximations. For the burnup and out-of-pile calculations the solution of the Bateman
equation is required. In general this equation, when considered over a specific domain
(averaged rates), is not challenging to solve.
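For region-averaged rates the Bateman equations reduce to a small linear ODE system dN/dt = AN, which can be advanced with a matrix exponential. A minimal sketch (Python/SciPy) for a hypothetical three-nuclide chain with made-up removal rates:

    import numpy as np
    from scipy.linalg import expm

    # dN/dt = A N for a chain N1 -> N2 -> N3; off-diagonal terms feed the
    # daughters, diagonal terms remove the parents (rates are illustrative).
    lam1, lam2 = 1e-9, 5e-10                 # effective removal rates, 1/s
    A = np.array([[-lam1, 0.0,   0.0],
                  [ lam1, -lam2, 0.0],
                  [ 0.0,   lam2, 0.0]])

    N0 = np.array([1.0e24, 0.0, 0.0])        # initial nuclide densities
    N = expm(A * 3.15e7) @ N0                # densities after about one year
    print(N)

The real difficulty lies not in this solution step but in supplying A with spatially resolved, self-shielded reaction rates from the flux calculation.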
Other particular needs are related to the calculation of reactor start-up configurations
where external sources are present. In this case the neutron transport equation to be
solved is inhomogeneous. Finally, adjoint solutions are needed, using both classical and
generalized perturbation theory, for sensitivity studies and reactivity coefficient
calculations.
Table III.I indicates a set of target accuracies (1σ) for different neutronic parameters of
interest for fast reactor design. Two separate cases are considered: the viability case, to be
used in a preconceptual phase, and the performance case, to be used in the final design
phase.
Table III.I Target Accuracies (1σ) for Fast Reactor Neutronic Design Parameters

Parameter                                        Viability        Performance
Multiplication factor, k-eff                     <0.5%            <0.2%
Relative power density: peak                     ~3%              ~1%
Relative power density: distribution             7%               3%
Control rod worth: element                       10%              5%
Control rod worth: total                         5%               2%
Burnup reactivity swing (of reactivity value)    3% or 0.5% Δk    <2% or 0.5% Δk
Breeding gain                                    0.05             0.02
Reactivity coefficients: large effects           10%              5%
Reactivity coefficients: small effects           20%              10%
Kinetics parameters                              5%               2%
Local nuclide densities: major constituents      5%               1%
Local nuclide densities: minor constituents      10-20%           2-5%
the case of a sodium cooled reactor [30] based on current cross section uncertainty
estimates. In the past, heavy use of integral experiments has led to the use of bias factors
or adjusted cross sections that drastically reduce the “a priori” uncertainties.
To considerably reduce the use of integral experiments and
rely on simulation to provide better estimates of the neutronic parameters, one has
to improve both the basic nuclear data and the methodology for calculating those
parameters. Cross section evaluations use both measurements and nuclear models to
produce their data. For this reason, it appears that there will be a limit to the maximum
improvement that can be achieved on cross section data (e.g., 1% on a fission cross
section). By contrast, there is hope of solving the neutron transport equation to attain
a very accurate solution.
A possible target could be an uncertainty, coming from the solution method, of 50 pcm
on the multiplication factor and 0.2% on the power distribution. For this latter quantity,
it is worthwhile to note that from Ref. 30 only 0.5% can be attributed to uncertainty on
cross sections; therefore a significant improvement in method accuracy can produce a
major gain for this very important parameter. Similar considerations apply to other
parameters, especially because they derive from neutron flux calculations
(reaction rate distributions, density variations due to burnup, etc.).
Regarding reactivity coefficients, which are essential for safety analysis, it is
noteworthy that the 50 pcm uncertainty on the multiplication factor will be systematic
for deterministic calculations, so that it cancels out when taking a difference
between two eigenvalues. On the other hand, for Monte Carlo calculations, due to their
stochastic nature, it will be very difficult to resolve such small reactivity differences.
There are two main aspects to be considered. One relates to the processing and generation
of cross sections and the other is related to solving the neutron transport equation. The
first aspect concerns the treatment of the energy variable in the governing equations. As
mentioned before, the neutron cross sections vary very rapidly over the energy domain (for
instance, see Fig. III.1 for the 238U capture cross section). Therefore, the treatment of the
energy variable leads to one of the most cumbersome calculational procedures in
neutronic computation: the generation of multigroup cross sections that have to be
used subsequently in whole-core calculations. The multigroup cross section generation
can be a source of uncertainty larger than that associated with the basic data. Two different
methodologies (suites of codes) for cross section processing can lead to differences ranging
from 0.5% to 1% on the multiplication factor of a fast reactor.
The generation of multigroup cross sections involves several steps (depending on the type
of neutron spectrum of the reactor: thermal, epithermal, or fast) that have to take into
account several calculational approximations, including resonance self-shielding (both in
energy and space), energy group collapsing, and spatial homogenization. Invariably the
first step involves generating so-called “multigroup libraries” by processing the differential
cross section data measured or evaluated for each individual isotope and reaction (e.g.,
fission, radiative capture, elastic/inelastic scattering, etc.) and putting them in multigroup
form by weighting the data with spectra typical of the reactor to be studied. The
number of groups of these libraries varies from several thousand for fast spectrum
reactors, where the presence of the structural materials and the treatment of unresolved
and resolved resonances require a large number of groups, to a few hundred for thermal
spectrum reactors.
Then, an accurate calculation (integral transport) is performed but only over a small
spatial subdomain – referred to as a pin cell – with approximate boundary conditions. For
such calculations, the energy dependence of the neutron population is reduced by
weighted averaging to produce a set of multigroup cross sections. The resulting group
cross section data for fuel, coolant and other materials are then employed in a lattice
calculation over a fuel assembly. An assembly typically consists of hundreds of fuel rods.
The two-dimensional lattice calculation is performed with a high-order angular
approximation, and an explicit treatment of the spatial interfaces between fuel coolant
and other materials. The boundary conditions at the edges of the fuel assemblies,
however, are approximate, assuming an infinite array of identical assemblies. From these
lattice calculations, a set of few energy group cross sections are obtained, which are
spatially homogenized over the fuel assembly. Finally a few energy group (typically of
the order of thirty for fast reactors) three-dimensional whole core calculation is
performed. Since the homogenization procedure wipes out much of the effect of the
angular variation in the neutron population, the whole core calculations are performed
using low-order angular approximations. Once the results for the homogenized
assemblies are obtained, then the local spatial distributions from the lattice calculations
are melded the whole-core results though the use of additional approximations to obtain
an estimate of the power distribution in each fuel rod. Clearly, all these procedures are
sources of approximations or possible misuse of methodology depending on the physical
phenomena that has to be taken into account.
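The flux-weighted condensation at the heart of this procedure can be written compactly: the collapsed cross section for coarse group G is σ_G = Σ_{g∈G} σ_g φ_g / Σ_{g∈G} φ_g. The following sketch shows the operation; the fine-group data and weighting spectrum are made up for illustration.

    import numpy as np

    def collapse(sigma_fine, phi_fine, group_map):
        """Flux-weighted condensation of fine-group cross sections.

        sigma_fine, phi_fine : 1-D arrays over fine energy groups
        group_map            : list of index arrays, one per coarse group
        """
        sigma_coarse = np.empty(len(group_map))
        for G, idx in enumerate(group_map):
            # sigma_G = sum_g sigma_g * phi_g / sum_g phi_g
            sigma_coarse[G] = np.dot(sigma_fine[idx], phi_fine[idx]) / phi_fine[idx].sum()
        return sigma_coarse

    # Made-up example: 2000 fine groups collapsed to 4 coarse groups.
    rng = np.random.default_rng(0)
    sigma = rng.uniform(0.1, 10.0, 2000)     # fine-group cross sections (barns)
    phi = rng.uniform(0.0, 1.0, 2000)        # weighting spectrum
    groups = np.array_split(np.arange(2000), 4)
    print(collapse(sigma, phi, groups))

The accuracy of the collapsed set depends entirely on how well the weighting spectrum represents the actual reactor spectrum, which is why the procedure is a significant source of uncertainty.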
Fig. III.1 238U capture cross section
Among the codes used in the fast reactor community for implementing this procedure, NJOY [31], from LANL, is widely used for processing the differential data from the basic data libraries (ENDF/B, JEF) into multigroup form; ETOE-2 [32] is used at ANL for this purpose. The most established cell (lattice) codes for fast reactor applications are MC2-2/SDX [32] at ANL and ECCO [33] in Europe.
For the whole-core calculation, the starting point for deterministic methods is a sufficiently fine-grained multigroup discretization to ensure that significant error is not introduced into the energy dependence of the cross sections. As a result, thousands of coupled equations, each of the form [34]

Ω̂·∇ψ(r,Ω̂) + σ(r)ψ(r,Ω̂) = ∫dΩ′ σ_s(r,Ω̂·Ω̂′)ψ(r,Ω̂′) + s(r,Ω̂),    (III.1)
must be solved over space, angle, and energy. Here r and Ω̂ are the space and angle variables, ψ(r,Ω̂) is the neutron flux, defined as the speed times the density distribution, and σ_s(r,Ω̂·Ω̂′) is the macroscopic differential scattering cross section. The equations are coupled through the group source given by

s_g(r,Ω̂) = Σ_{g′≠g} ∫dΩ′ σ_s,gg′(r,Ω̂·Ω̂′)ψ_g′(r,Ω̂′) + S_fg(r),

where g designates the group under consideration, σ_s,gg′ represents scattering from group g′ to g, and S_fg includes fission as well as known external sources.
Deterministic methods are classified by the treatment of the spatial variable; Eq. (III.1) is the first-order form. We may evaluate Eq. (III.1) at Ω̂ and −Ω̂ and combine the results to obtain a second-order even-parity equation,

−Ω̂·∇ G Ω̂·∇ψ⁺ + Cψ⁺ = S⁺ − Ω̂·∇ G S⁻.    (III.2)

Here the even- and odd-parity angular fluxes are defined by

ψ±(r,Ω̂) = ½[ψ(r,Ω̂) ± ψ(r,−Ω̂)],

and similarly for the cross sections and group sources. In the case of isotropic scattering the collision and scattering operators reduce to G → 1/σ, C → σ − σ_s ∫dΩ, and S⁻ → 0.
In integral methods, Eq. (III.1) is integrated back along the direction of neutron flight, giving

ψ(r,Ω̂) = ∫₀^∞ dR exp[−∫₀^R dR′ σ(r−R′Ω̂)] q(r−RΩ̂,Ω̂).    (III.3)
Equations (III.1), (III.2), and (III.3) have each been most closely associated with a particular form of angular approximation. Equation (III.1) is treated with discrete ordinates (or SN) methods, in which the angular variable is evaluated in a set of discrete directions that are the same as those used in quadrature formulas to evaluate the angular integrals. With appropriate spatial discretization, the operator on the left is reduced to a triangular matrix and the equations can be solved with so-called marching schemes. Equation (III.3) is also evaluated in discrete directions, while the integrand on the right is taken to be piecewise constant in space; the resulting algorithms are referred to as characteristics methods. The second-order form, Eq. (III.2), is most often expanded in spherical harmonics in angle, although discrete ordinates may also be applied, and the spatial variables are treated using finite element methods. Equation (III.2) gives rise to symmetric positive definite matrix equations.
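To make the "marching scheme" remark concrete, the following is a minimal one-group, 1-D slab discrete ordinates solver with diamond differencing and source iteration. It is a textbook sketch with made-up data, not any of the production codes named below.

    import numpy as np

    # One-group, 1-D slab with isotropic scattering and vacuum boundaries.
    nx, width = 100, 10.0
    dx = width / nx
    sigma_t, sigma_s = 1.0, 0.5          # total and scattering cross sections (1/cm)
    q_ext = np.ones(nx)                  # flat external source
    mu, w = np.polynomial.legendre.leggauss(8)   # S8 directions; weights sum to 2

    phi = np.zeros(nx)                   # scalar flux
    for it in range(500):                # source (scattering) iteration
        q = 0.5 * (sigma_s * phi + q_ext)        # isotropic emission density
        phi_new = np.zeros(nx)
        for n in range(mu.size):
            psi_edge = 0.0               # vacuum boundary: no incoming flux
            cells = range(nx) if mu[n] > 0 else range(nx - 1, -1, -1)
            for i in cells:
                # Diamond-difference cell balance solved for the cell-average
                # flux; the triangular structure lets us march across the slab.
                a = 2.0 * abs(mu[n]) / dx
                psi_cell = (q[i] + a * psi_edge) / (sigma_t + a)
                psi_edge = 2.0 * psi_cell - psi_edge
                phi_new[i] += w[n] * psi_cell
        change = np.max(np.abs(phi_new - phi))
        phi = phi_new
        if change < 1e-8:
            break
    print(f"converged after {it + 1} sweeps; midplane flux = {phi[nx // 2]:.4f}")

Each outer iteration performs one sweep per discrete direction; the scattering source is lagged, which is the simplest (and slowest-converging) of the iterative strategies used in practice.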
All three methods are capable of treating unstructured meshes, and discrete ordinates and spherical harmonics methods have been incorporated in three-dimensional production codes. The computational difficulties of extending characteristics methods from two dimensions to three have thus far impeded their use in three-dimensional production codes. For a number of reasons, discrete ordinates methods have been favored for deep penetration calculations, such as radiation shielding. However, in some problems they suffer from ray effects: unphysical wiggles in the spatial flux distributions that are attributable to the angular collocation. Among the most widely used three-dimensional SN codes are PARTISN (from LANL, with only structured grids) [35], TORT (from ORNL, used predominantly for shielding applications and without unstructured mesh capability) [36], and ATTILA (a commercial spin-off of a LANL unstructured mesh code) [37].
The Method of Characteristics (MOC) is very accurate but gives rise to quite dense nonsymmetric coefficient matrices, impeding its use over large spatial domains. Because of its accuracy, MOC is the preferred method in the lattice (cell) calculation step; because of the dense solution matrix, however, it has been widely used only in two dimensions, with some limited application to predetermined local (subassembly) three-dimensional geometries. This is the case of the Canadian DRAGON code [38], used for the treatment of specific CANDU problems. The French APOLLO [39] lattice code has recently been extended to subassembly three-dimensional geometries. Finally, the Korean DeCART [40] code combines a two-dimensional MOC solution with a 1-D axial solution.
Even-parity spherical harmonics methods are more widely used for reactor core calculations, with the lowest-order angular approximation – so-called diffusion theory – being the most widespread for whole-core calculations. The weak point of the second-order methods is the cross section in the denominator: it causes the operator to become poorly conditioned in low-density (e.g., gas) regions, and singular in a total vacuum.
The EVENT [41] and VARIANT [42] codes are among the most widely used second-order spherical harmonics codes for reactor physics calculations. They differ in that EVENT utilizes a fine-mesh finite element treatment of the spatial variables, while VARIANT utilizes hybrid finite elements to divide the spatial domain into subdomains; solutions are then obtained iteratively by tracking the passage of neutrons in and out of these subdomains.
Many of these flux solvers are integrated in code systems that provide full capability for neutronic design, including criticality searches, burnup calculations with equilibrium density evaluations, and time-dependent kinetics calculations. Among the systems developed for fast reactor applications we can mention REBUS-3/DIF3D [43, 44] at ANL and the French code system ERANOS [45]. The flexibility and ease of use of these code systems are very important for satisfying the needs of reactor designers.
Monte Carlo methods do not suffer, in principle, from the approximations related to the treatment of the energy variable, although the unresolved and resolved resonance treatments require appropriate methodologies; for this purpose, codes like NJOY are used to preprocess the basic data libraries for subsequent use in Monte Carlo codes. In theory, with unlimited computing power, the flexibility in treating complex geometries and the rigorous continuous-energy treatment of the energy variable should allow Monte Carlo codes to achieve an extremely accurate solution. While this is true for a fixed-source problem without multiplication, for an eigenvalue problem the stochastic nature of the algorithm appears to impose an intrinsic limit on accuracy, similar to the uncertainty on a measured keff. Moreover, when local quantities are of interest, an unreasonable number of neutron histories may be required to achieve very low standard deviations.
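The 1/√N convergence behind this remark is easy to demonstrate. The toy tally below is not a transport code; the scores are simply drawn from a skewed distribution standing in for a local reaction-rate estimator. It shows why driving a local standard deviation down by a factor of ten costs a hundred times the histories.

    import numpy as np

    rng = np.random.default_rng(42)

    def toy_tally(n_histories):
        """Toy Monte Carlo tally: mean and standard error of N history scores."""
        scores = rng.exponential(scale=1.0, size=n_histories)
        mean = scores.mean()
        err = scores.std(ddof=1) / np.sqrt(n_histories)  # ~ sigma / sqrt(N)
        return mean, err

    for n in (10**4, 10**5, 10**6, 10**7):
        mean, err = toy_tally(n)
        print(f"N = {n:>9,d}: estimate = {mean:.5f} +/- {err:.5f}")

Each decade of additional histories shrinks the statistical error by only a factor of about 3.2, which is what makes low-variance local tallies so expensive.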
In the realm of Monte Carlo codes, the Los Alamos MCNP [46] is the widespread reference. VIM [47], from ANL, and the French code TRIPOLI [48] are among the other well-known Monte Carlo codes, while KENO [49], from ORNL, is used predominantly in the criticality-safety community. Some of these codes have been linked to Bateman equation solvers (e.g., CINDER [50], ORIGEN [51]) to provide burnup calculations (e.g., MONTEBURNS [52], MOCUP [53], MCODE [54]), but in general they lack the flexibility needed for general design calculations (e.g., control rod movements, equilibrium densities, etc.). Also, the propagation of stochastic results through depletion calculations has not, up to now, been treated in a satisfactory way.
In general, both Monte Carlo and deterministic approaches need to be further developed for a better neutronic analysis of future fast reactor systems. Monte Carlo will be kept as the reference methodology that can provide extremely useful validation. The main areas of development for Monte Carlo codes would be eigenvalue (and associated flux distribution) calculations and time-dependent problems (burnup and kinetics calculations).
For the eigenvalue calculations, improvements are needed for nuclear reactor configurations with a high degree of decoupling. Better strategies for efficient eigenvalue convergence need to be devised, probably using information from an approximate deterministic solution. As mentioned above, a reliable technique for the propagation of stochastic values in burnup calculations is needed, as well as geometrical modification flexibility for following the operation of the reactor (e.g., control rod movements). Similar characteristics are needed for the development of a Monte Carlo kinetics capability. If developed, these features will contribute enormously toward adopting Monte Carlo for more systematic (parametric) design calculations. Finally, developing a continuous-energy adjoint Monte Carlo capability will make it possible to adopt this methodology for sensitivity analysis.
On the deterministic side, one of the major possible leaps forward is the elimination of the multistep calculational procedure for the treatment of the energy variable. If, with advances in numerical and algorithmic efficiency, in conjunction with the significant progress in computing power offered by the use of several thousands of processors, one can afford to solve the neutron transport equation in detailed three-dimensional geometry with thousands of energy groups, the deterministic solution will directly compete with Monte Carlo methods, with the clear advantages of a systematic approach and of providing the adjoint solution for the calculation of sensitivity coefficients.
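Whatever the discretization, the whole-core eigenvalue problem ultimately takes the form Mφ = (1/k)Fφ, conventionally solved by power (outer) iteration. The sketch below illustrates the iteration on a tiny made-up two-group operator pair; in a real code, M and F are the huge matrices assembled from the transport discretization, and the linear solve is itself iterative.

    import numpy as np

    def k_power_iteration(M, F, tol=1e-8, max_it=500):
        """Solve M phi = (1/k) F phi for the dominant eigenpair (k, phi)."""
        phi = np.ones(M.shape[0])
        k = 1.0
        for _ in range(max_it):
            src = F @ phi / k
            phi_new = np.linalg.solve(M, src)     # one "flux solve" per outer iteration
            k_new = k * np.sum(F @ phi_new) / np.sum(F @ phi)
            if abs(k_new - k) < tol:
                return k_new, phi_new
            k, phi = k_new, phi_new / np.linalg.norm(phi_new)
        return k, phi

    # Tiny illustrative two-group example (numbers are made up).
    M = np.array([[1.2, 0.0], [-0.4, 1.0]])   # loss + downscatter operator
    F = np.array([[0.0, 1.5], [0.0, 0.0]])    # fission source born in group 1
    k, phi = k_power_iteration(M, F)
    print(f"k-effective = {k:.6f}")

Power iteration converges slowly when the dominance ratio is close to one (loosely coupled cores), which is the same difficulty noted above for Monte Carlo eigenvalue convergence.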
A new code can be built in modular form such that differing combinations of approximations in space, angle, and energy can be explored. To give complete geometrical flexibility, unstructured meshing for three-dimensional geometry could be adopted. In a first phase, one could concentrate on creating an efficient massively parallel code based on the second-order form of the multigroup equations, using finite elements in space and spherical harmonics in angle. Techniques for coupling different space-angle formulations across interfaces between spatial subdomains also have to be developed; for example, one can couple second-order methods with characteristics or other forms of ray tracing across vacuum regions. Likewise, while the first implementation can include a very fine-grained form of the multigroup equations, other techniques may be attempted for dealing with the rapid fluctuations of the data in energy.
The huge CPU and memory resources required to carry out high-fidelity reactor computations point to the need for flexibility in the level of space-angle-energy discretization. For example, there is no reason why the same order of angular expansion is needed in every energy group or over the entire spatial domain, just as there is no rationale for having an equally refined finite element mesh over the entire spatial domain or in every energy group. Building multiresolution into a new code would allow economy of computing by letting the level of approximation vary over the phase space: it should not be uniform, but rather varied according to the physics of the problem.
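One simplistic way to express such multiresolution is to carry the discretization orders as per-group, per-region data rather than as global constants. The sketch below is purely notional: neither the data nor the solver interface comes from any existing code.

    # Hypothetical per-phase-space discretization map (all values assumed).
    angular_order = {          # spherical-harmonics order P_N per energy band
        "fast":       5,      # steeper angular variation near sources/interfaces
        "epithermal": 3,
        "thermal":    1,      # near-diffusive: P1 may suffice
    }
    mesh_level = {             # finite element refinement level per region
        "core":      3,
        "reflector": 2,
        "shield":    1,
    }

    def discretization(energy_band, region):
        """Return the (P_N order, mesh level) pair for one phase-space block."""
        return angular_order[energy_band], mesh_level[region]

    print(discretization("thermal", "reflector"))   # -> (1, 2)

In an actual implementation, such maps would be chosen adaptively from error indicators rather than fixed by hand.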
All the improvements indicated above should lead to a more accurate neutronic design, with perhaps a reduced need for the integral experiments that have been so heavily used in the past. Moreover, access to more accurate solutions in shorter response times will speed up the development process, with the additional benefit of possibly reduced margins. As an example, also mentioned in other sections, the reduction in the uncertainty on the power peak factor would lead to large economic benefits, while the uncertainty reduction on safety-related reactivity coefficients, besides its impact on operating costs, will have the added value of extra confidence in the safety characteristics of the new reactor plant.
Another domain that will benefit from improved neutronics codes is the estimation of the inventory that will be the input for the reprocessing plant. With better estimates of the nuclide inventory at the end of irradiation in the reactors, fewer measurements will be needed to establish the content of the spent nuclear fuel, with consequently reduced costs. There will also be a favorable impact on proliferation considerations for the plant, in addition to reducing the margins taken to compensate for the uncertainties on the heavy isotope contents.
Finally, one can expect that with high-fidelity advanced simulation, achievable with the proposed improvements in the solution of the neutron transport equations to be implemented in the next five years, it will be possible to attain a level of uncertainty, coming only from the methodology approximations, of 50 pcm on the eigenvalue and 0.5% on the power distribution or other reaction-rate distributions (e.g., damage, rates for burnup calculations, etc.). For the reactivity coefficients, two categories need to be considered. The first regards small variations where systematic approximations can be assumed (temperature, single control rod, mechanical expansion, etc.); in this case an absolute uncertainty of less than 10 pcm could be achievable. The second category relates to variations that result from large compensations of different components (e.g., coolant void); in this case, with exact perturbation theory used for their evaluation, a relative uncertainty of less than 2% should be targeted.
IV Thermal Hydraulics
IV.A Background
Recent nuclear energy system development activities in the context of the Gen-IV and GNEP initiatives indicate substantial interest and opportunity in the use of multi-dimensional CFD-based techniques, particularly for design optimization. Innovative design features intended to reduce investment and operating costs and increase safety margins will require demonstration of the concepts' viability by credible high-fidelity analyses verified with experimental data.
For the current generation nuclear reactors, systems analysis codes (like RELAP, TRAC,
RETRAN, SASSYS) have been used successfully but only after being validated
extensively by the code developers and the user community at large. Limitations of 1-D
thermal-hydraulic phenomenology embedded in these codes have been generally
recognized in addressing certain types of fluid flow and heat transfer issues that are
fundamentally multidimensional in nature. Also, the empiricism incorporated into these
systems analysis codes often limits their validity to specific applications.
The application of commercial CFD software to light water reactor systems by the vendors has shown reasonable success in characterizing the thermal-hydraulic performance of turbulent flow and heat transfer in complex rod bundle geometries (with or without spacer grids), flow distributions and thermal striping in the inlet and outlet plena, boron mixing in the downcomer, etc. However, the applicability of general-purpose CFD software to different types of coolants and to a wider range of flow and heat transfer phenomena, such as natural circulation, multiphase flow, free surface modeling, and moving boundaries, needs further verification.
An integrated high-fidelity scheme would explicitly represent the individual fuel pins within the assemblies and surrounding coolant channels in the core. Feasibility of such an approach
has recently been demonstrated by coupling commercial CFD software with a discrete integral transport model for neutronics calculations [55]. By representing local heterogeneity explicitly at the sub-pin level without any homogenization, a computationally intensive but high-fidelity capability can be developed to address the operational and safety characteristics of next-generation nuclear reactor designs.
As part of such an integrated scheme, the CFD techniques can be used to determine the
coolant and fuel temperatures throughout the core, obtain the flow field and pressure drop
in coolant channels, and resolve the effects of spacer grids and orifices on flow
distributions and cross-flow between the coolant subchannels to avoid hot-spots. Various
other nuclear plant simulations require thermal-hydraulic component models that extend
beyond the core, including the reactor vessel (particularly for pool type reactors), heat
exchangers, steam generators, steam dryers, shutdown heat removal systems, and spent
fuel pool.
The use of CFD-based simulation capabilities can also be extended to other aspects of the nuclear fuel cycle, from aqueous and pyro-process simulations for spent fuel treatment to waste-form analyses for safe disposal. Simultaneous solution of the Navier-Stokes equations in conjunction with energy and species conservation equations lends itself to easy integration with chemical kinetics models such as CHEMKIN for the optimization of spent fuel reprocessing. The feasibility of using CFD for electrochemical pyro-process modeling has also been demonstrated recently by solving special forms of the Maxwell equations for ionic mass transfer in conjunction with the Navier-Stokes equations for the flow field, to obtain electric field and current density distributions including the effects of concentration and surface overpotentials.
As a result, basic functions of a new simulation tool should include most standard capabilities of general-purpose CFD software, including:
- Conjugate heat transfer for solving the energy equation among the solid and fluid domains simultaneously
- Active and passive species conservation solutions linked with the transport processes
- Formulations to specify mass, momentum, turbulence, energy, and species sources and sinks
- A functional interface to incorporate user-defined algorithms for thermo-physical properties, source terms, and initial and boundary conditions via user functions
- Implicit, explicit, or hybrid solution methods, or algorithms with predictor-corrector stages
- Conjugate gradient or multigrid preconditioning schemes to accelerate convergence
- Various differencing schemes, including variants of upwind and central differencing, and advanced schemes to reduce numerical diffusion (QUICK, GAMMA, MARS)
- Restart capability
- Parallel computing
Standard Reynolds-averaged Navier-Stokes (RANS) turbulence models, such as the two-equation k-ε model, employ [60] constitutive formulations for the stress-strain relations. Other variations of RANS models include the renormalization group (RNG) version [61] and Chen's variant [62].
The more complex second-order closure models, such as the differential Reynolds Stress Model (RSM), are based on exact transport equations for the individual Reynolds stresses as derived from the Navier-Stokes equations [63]. Other higher-order turbulence modeling approaches, such as Large Eddy Simulation (LES) and its hybrid counterpart Detached Eddy Simulation (DES), attempt to actually resolve the large-scale eddies while modeling the small-scale ones; but, due to their transient nature, these techniques are very time consuming, and their implementations in existing CFD software are often not robust enough. While the empirical coefficients appearing in these closure models (some of which are themselves functions of other variables) are intended to be applicable to a fairly broad class of flow and heat transfer regimes, they are generally not validated for a wider range of fluids.
The design of next-generation nuclear systems with improved economics, safety, and performance will benefit from, and likely rely on, first-principles CFD simulations to provide accurate predictions of system performance. Application of CFD simulations to the evaluation of system transients and accident scenarios will likely prove computationally burdensome and will benefit, at least initially, from the use of one-dimensional approaches, either as acceleration schemes for CFD simulations or as simplified models of selected system components that focus the computational effort on the areas of greatest importance. Furthermore, the viability of innovative design features needs to be demonstrated by credible analyses validated with appropriate experimental data.
A CFD tool tailored for nuclear engineering applications, and especially for liquid-metal-cooled fast reactor applications, would likely require advanced capabilities beyond the standard formulations. As one example, many new reactor concepts rely on natural convection for heat removal under emergency conditions or even during normal operation. While most CFD tools include solvers capable of simulating natural convection with conventional fluids (water and air), the codes are largely not validated for these flow regimes. Because the experimental database for liquid metals in turbulent natural convection is extremely limited, evaluations of license applications for these concepts by the U.S. NRC will likely stall unless supported by an extensive CFD validation program similar to those conducted for systems analysis codes like RELAP in previous decades; NRC will likely seek DOE leadership in this area. For ABR-type reactor designs, it is expected that natural convection will involve mainly turbulent and transition-to-turbulence flow regimes. Therefore, the assessment of various turbulence methods for liquid metal coolants under the prototypical operating conditions encountered in nuclear energy systems will be an important first step. The outcome of such an assessment will help identify the need for new turbulence model improvements.
Another area where significant improvements over existing commercial tools could be realized is transient system analysis. Current-generation CFD tools with computational meshing conventions that allow accurate representation of realistic geometries typically use semi-implicit segregated solvers for transient simulation, and all current-generation commercial CFD tools are very sensitive to the user's specification of the time step size. Commercial CFD companies are investing significant effort in the development of improved fully implicit simultaneous transient solvers that provide more consistent transient simulation capabilities, but it is likely that these solvers will be optimized for larger markets such as the automotive industry, where single-phase air and water flows dominate. Solvers optimized for transient simulation of the working fluids and flow regimes expected in nuclear reactor systems are needed to extend the application of CFD beyond steady-state design to transient analysis.
Finally, since large-scale CFD models would generally lead to prohibitively long computing times, an effective parallelization scheme that can distribute the model over massively parallel computer platforms will be important. The recent experience [55] with a whole-core model for integrated simulation of neutronic and thermal-hydraulic phenomena, with explicit representation of individual fuel pins and surrounding coolant channels, indicates that the number of computational elements (cells) in even a coarse CFD model would be in the hundreds of thousands (much larger if the effects of the spacer grids are to be resolved). Using fine-mesh CFD solutions in each coolant channel will be important for an integrated whole-core analysis capability, in order to capture the important feedback effects between the first-principles-based multi-physics models. The conventional domain decomposition schemes as implemented in commercial CFD software have relatively poor scalability characteristics: again, experience shows that distributing a 60-million-cell CFD model onto more than 200 processors does not translate into significant speedup in real time.
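A simple fixed-overhead speedup model illustrates why such scaling plateaus are expected; the parameter values below are illustrative assumptions, not measurements from [55].

    def speedup(p, serial_frac=0.01, comm_cost=1.0e-4):
        """Amdahl-type speedup with a communication penalty growing with p.

        p           : number of processors
        serial_frac : fraction of work that cannot be parallelized (assumed)
        comm_cost   : per-processor communication overhead (assumed)
        """
        return 1.0 / (serial_frac + (1.0 - serial_frac) / p + comm_cost * p)

    for p in (1, 16, 64, 100, 200, 400):
        print(f"{p:4d} processors -> speedup {speedup(p):6.1f}x")

In this model the speedup peaks near sqrt((1 - serial_frac)/comm_cost) processors (about 100 here) and then degrades, qualitatively matching the observed behavior of domain-decomposed CFD models beyond a few hundred processors.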
V Structural Mechanics
V.A Background
The structural mechanics field is considered to be quite mature, and for its simulation there has been a widespread code development effort, mostly based on finite element techniques. In any case, the application of structural mechanics models to nuclear reactor design would require special treatment of thermal effects. Additionally, ANL has developed codes that cover the full spectrum of applications, from slow to intermediate and fast transients up to the case of large deformations; however, these capabilities have not been incorporated into a single code. Work has to be done to produce a general-purpose code that is applicable to the different cases, covering all types of reactors, and that has geometry capability compatible with the other components of the simulation tool.
The governing equations of motion for the finite element model are of the form

[M]{ü} + [D]{u̇} + {f^int} = {f^ext},
where {u} is the column matrix of nodal displacements, [M] the mass matrix, [D] the damping matrix, {f^int} the nodal forces obtained from the resistance of the finite elements to deformation, and {f^ext} the nodal forces arising from external loads. Superposed dots denote time derivatives, so {ü} are nodal accelerations and {u̇} are nodal velocities. Both lumped and consistent mass matrices can be employed; a lumped mass matrix possesses nonzero terms only on the principal diagonal, so it can be treated as a column matrix, {M}, and is often called a diagonal mass matrix. Damping forces usually arise from material response and are normally treated as part of the element internal forces {f^int}; here, however, they are written separately, because viscous damping forces can be treated in this way when the damping is not within the stress-strain material law or behavior. The internal force vector {f^int} is assembled from the element internal forces and transferred to the mesh nodes that comprise each element.
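With a lumped (diagonal) mass matrix, the equations of motion above can be advanced with an explicit central-difference scheme, since no linear system needs to be factored. The following minimal sketch uses a made-up two-degree-of-freedom linear system; it is illustrative only and is not drawn from any of the ANL codes discussed below.

    import numpy as np

    # Toy two-DOF spring-mass system (all data assumed for illustration).
    M = np.array([2.0, 1.0])             # lumped (diagonal) mass matrix as a vector
    D = np.array([5.0, 5.0])             # diagonal viscous damping (assumed)
    K = np.array([[300.0, -100.0],
                  [-100.0, 100.0]])      # stiffness: internal forces f_int = K @ u
    f_ext = np.array([0.0, 10.0])        # constant external load

    dt = 1.0e-3                          # below the explicit stability limit here
    u = np.zeros(2)                      # nodal displacements
    v = np.zeros(2)                      # nodal velocities

    for step in range(5000):             # advance 5 s of response
        f_int = K @ u                    # element internal forces (linear here)
        a = (f_ext - f_int - D * v) / M  # trivial "solve": lumped mass is diagonal
        v += dt * a                      # central-difference (leapfrog) updates
        u += dt * v

    print("displacements after 5 s:", u) # -> near the static solution (0.05, 0.15)

The price of the explicit scheme is a conditional stability limit on the time step, which is why it suits fast transients and impact problems, while quasi-static analyses favor implicit integration.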
The typical problems solved by structural mechanics simulations for the nuclear industry include core internals, reactor vessels, containment structures, confinement structures, and other non-nuclear-grade structures. These problems range from highly transient solutions for accident analyses to quasi-static solutions for design basis issues. In order to assure the structural integrity of nuclear structures, it is necessary to simulate their response to anticipated loadings, from both a design-basis and a beyond-design-basis viewpoint. To properly treat some of the important structures, it is necessary to perform three-dimensional numerical simulations, since two-dimensional models cannot properly capture the mechanics. This situation was recognized in the early seventies, and efforts were initiated to develop a three-dimensional finite element code.
V.B Functional Requirements
The application of specialized structural finite element method (FEM) software to reactor systems by researchers has shown reasonable success in characterizing the structural response to mechanical and thermal loadings. However, the applicability of general-purpose structural FEM software to different types of reactor coolants and their respective structural makeup needs further verification.
The basic functions of a new simulation tool should include most standard capabilities of general-purpose structural FEM software, including:
Seismic analysis can be one of the most important inputs into the design of reactor structures and components; thus the structural analysis capabilities must include seismic considerations. Seismic isolation is very useful to the nuclear industry, since it can reduce design loads, minimize the effects of specific site environments, and contribute to the reduction of materials needed for the major components of the primary system. When properly designed through analysis, seismic isolation greatly reduces the seismic loads transmitted to the structure. This is particularly important in advanced reactor designs, where components are designed as thin-walled structures with reduced inherent seismic resistance. The advantages of seismic isolation include the ability (1) to eliminate or significantly reduce structural and non-structural damage, (2) to enhance the safety of the structure's contents, and (3) to reduce seismic design forces.
Typical FEM models produce fairly accurate global strains under general analysis conditions. Local strain concentrations are difficult to obtain with a FEM model, owing to the level of analysis sophistication, unknown as-built conditions, material conditions, and tri-axial stress effects on the failure strain. The global strains produce either gross structural distortions or peak plastic strains that do not produce significant distortions. The actual strain can be considerably higher than the calculated strain, which is very important when assessing designs against allowable values. The relationship between the calculated strain and the failure strain is

ε_c ≤ ε_u / (K · F_T),

where ε_c is the calculated strain, ε_u the failure (ultimate) strain, K the combined knockdown factor (composed of the factors K1, K2, and K3 described below), and F_T the tri-axial ductility reduction factor. Typical values of these factors, for the current state of analysis, are as follows.
The K1 knockdown factor was developed to account for the level of sophistication of the finite element model: the detail and completeness of the geometry, the element refinement, the boundary conditions, and the assumptions made or implied by the model. Any differences between the finite element model and the actual structure are quantified, related to the calculated strain, and used to determine the value of K1. K1 ranges from 1 to 5, based on the refinement of the finite element model and on how well it addresses global strains as well as strain gradients and concentrations due to structural discontinuities. The upper limit of 5 is based on the ASME code criteria (Sections III and VIII), which state that 5 is the largest concentration factor to be used for any configuration designed and fabricated.
The K2 knockdown factor was developed to account for as-built configurations and is based on the difference between the structural information available to the analyst and the actual construction configuration. Typical values range from 1 to 1.25, based on the construction materials, weld quality, fabrication tolerances, post-weld heat treatment, fabrication residual stresses and details, and plate thicknesses or bar areas.
The K3 knockdown factor was developed to account for material degradation and is based on the effect of material property degradation on the strain at failure and on the structural loading of the component. Typical values range from 0.85 to 1.15, based on corrosion, pitting, cracking, aging, etc.; a factor of 1.0 represents mean material properties.
The F_T reduction factor was developed to account for the effect of multi-axial strain on the strain level at failure. The ductility reduction in the material – a decrease in the failure strain level due to multi-axial loading effects – is addressed by using the tri-axial factor approach. These reductions typically have values from 1 to 2, depending on the overall stress state of the material.
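As a minimal numeric illustration of the strain criterion above (all values assumed, and taking K as the product K1·K2·K3, which the text implies but does not state explicitly):

    # Illustrative check of a calculated strain against the knocked-down
    # failure strain; every number below is an assumption for the example.
    eps_u = 0.20                  # failure (ultimate) strain of the material
    K1, K2, K3 = 3.0, 1.1, 1.0    # model sophistication, as-built, degradation
    F_T = 1.5                     # tri-axial ductility reduction factor

    K = K1 * K2 * K3              # combined knockdown factor (assumed multiplicative)
    eps_allow = eps_u / (K * F_T)

    eps_calc = 0.03               # strain from the FEM analysis
    print(f"allowable = {eps_allow:.4f}, calculated = {eps_calc:.4f}, "
          f"acceptable = {eps_calc <= eps_allow}")

With these assumed values the allowable strain is about 0.04, a factor of five below the material's failure strain, showing how strongly the model-sophistication factor alone can penalize the design.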
The area of largest uncertainty is typically the sophistication of the analysis model. The other factors – as-built configurations, material degradation, and multi-axial strain effects – are important and need to be addressed, but the largest concern is the model sophistication and how to improve the accuracy of modeling techniques.
The information provided in the following paragraphs describes the current and past codes used in the structural mechanics area, mainly for accident analyses and some preliminary design assessments. These include both in-house codes at Argonne and commercial codes; the in-house codes are listed first.
TEMPOR2: A two-dimensional axisymmetric finite element code used to analyze concrete for moisture diffusion and heat conduction. The available boundary conditions are perfect moisture transfer from the surface to an environment of prescribed, possibly time-variable, relative vapor pressure; perfect sealing of the surface; perfect heat exchange with an environment of prescribed time-dependent temperature; and perfect thermal insulation. Boundary conditions for imperfect moisture or heat transmission at the surface can also be implemented. The finite element program utilizes a quadrilateral four-node element (with variable numerical integration capability), the unknowns being the values of temperature and pore pressure at the nodes. The finite element formulation is based on the Galerkin approach and utilizes a step-by-step algorithm for the integration in time, corresponding to the central-difference Crank-Nicolson algorithm for the diffusion equation.
ICEPEL: The computer code follows the propagation of pressure pulses through the
primary piping and calculates the permanent damage and deformation of the piping and
its components.
The commercial codes that have been used are listed next.
DYNA-3D: A finite element program used for fast, effective resolution of complicated engineering problems, such as large deformations, nonlinear material behavior, and multi-body contact, typically characterized by transient impact.
SAP 2000: A finite element program for linear and nonlinear analysis of both reinforced concrete and steel structures, used for general structural analyses of framed structures.
Many other commercial codes are available that have not been described here.
For any of the codes mentioned here, the largest uncertainty typically lies in model sophistication. Some of the commercial codes have recently incorporated automatic mesh refinement to help reduce the uncertainties arising from modeling techniques.
Designing next-generation nuclear systems with improved economics, safety, and performance will require an advanced simulation tool. This will be accomplished by integrating highly refined solution modules for the coupled neutronic, fuel behavior, thermal-hydraulic, and thermo-mechanical phenomena. Each solution module will employ methods and models formulated faithfully to the governing first principles, the real geometry, and the constituents. Reducing the uncertainties from modeling, as-built configurations, material degradation, and multi-axial strain effects needs to be addressed in future analysis tools. The major concern is to improve the modeling sophistication so as to reduce the uncertainty, which currently can be as much as a factor of 5 on strain values. This value can typically be reduced with proper care in modeling, but it remains a concern. Future approaches should concentrate on lowering this factor to no more than 1.5; this is a difficult task, however, because of the other uncertainties mentioned.
One of the most important aspects in modeling the structural mechanics is the movement of the core assemblies. This movement comes from the vibrations induced through fluid-structure interactions inside the core and from bowing mechanisms. The mechanical response for bowing deflection of core assemblies is a function of the location in the core, the core assembly supporting structures, and the type of reactor core. Bowing is typically caused by thermal gradients, swelling gradients, and irradiation creep. These mechanical responses can cause significant changes in reactivity during startup, long-term operation, transient overpower, and loss-of-flow-without-scram transients. The effects are generally larger for small cores: because the number of assemblies is small, their individual displacement reactivity worths are large, and bowing of a single assembly near the core boundary, where the gradients are most severe, moves a substantially greater proportion of the fuel in a small core. The manner in which the core assemblies are supported in the core support plate and within the core barrel is a major design contributor to these transient reactivity effects. The structural support of the core is termed the core restraint system and normally consists of several supports for the fuel rods; generally, a top nozzle, a bottom nozzle, and several (two to three) grid supports provide the axial and lateral support for the fuel rods.
Currently, the primary analysis tool at ANL for calculating the mechanical response (i.e., bowing) is the NUBOW-3D computer code, which was developed at ANL. NUBOW requires as input a structural description of the assemblies and the core restraint system, a description of the thermal and flux fields in the core, and displacement reactivity worths for the individual assemblies.
The specific technique used in the NUBOW computer code, however, does not lend itself to efficient coupling with the thermal-hydraulics/heat transfer and reactor physics modules. The NUBOW code is based on a finite difference formulation, which in structural analysis applications is an inefficient numerical tool; the main reason is that the boundary conditions and the structural grid have to be internally coded for each numerical model. NUBOW was originally coded for fast reactors (i.e., LMFBRs) and has been used in studies of EBR-II, CRBR, FFTF, PRISM, S-PRISM, SAFR, and others. A more efficient and problem-independent numerical approach is the finite element method (FEM). The proposed approach would be to modify the structural FEM analysis program NEPTUNE into an updated structural mechanics tool, incorporating the bowing features of the NUBOW code and the features outlined in the functional requirements above. Additionally, the vibrations arising from fluid-structure interactions between the coolant flow and the reactor structures would also have to be addressed. A brief description of the coupling of the structural mechanics code with the thermal-hydraulics (CFD), fuel behavior, and neutronics codes is given below.
Thus, the structural mechanics simulation would interact with the thermal-hydraulics/heat transfer (CFD) module via the input of the thermal field and the fluid pressure. The structural mechanics calculations would in turn provide feedback to the thermal-hydraulics/heat transfer modules through the movement of the fuel pins and assemblies, which alters the coolant flow paths; this coupling would also provide the transfer of information necessary to properly capture fluid-structure interactions. The structural mechanics simulation would interact with the fuel behavior code by providing the displacement field to the constitutive modeling of the fuel behavior code, while the fuel behavior code would provide the resulting internal forces of the fuel pins to the structural mechanics simulation.
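Operationally, the data exchange just described amounts to a fixed-point (Picard) iteration between modules. The toy sketch below uses invented scalar stand-ins for the thermal and structural solves (none of the module interfaces come from an actual code) to show the pattern: each module's output feeds the other, and the geometric feedback is relaxed until the interface fields stop changing.

    def th_solve(gap):
        """Toy thermal module: fuel temperature rises as the coolant gap closes."""
        return 900.0 + 200.0 * (1.0 - gap)                # K

    def sm_solve(temperature):
        """Toy structural module: thermal expansion closes the gap."""
        return max(0.0, 1.0 - 1.0e-3 * (temperature - 600.0))

    gap = 1.0                                             # normalized initial gap
    for it in range(100):                                 # Picard (fixed-point) loop
        temperature = th_solve(gap)
        new_gap = sm_solve(temperature)
        if abs(new_gap - gap) < 1e-8:                     # interface fields converged
            break
        gap = 0.5 * (gap + new_gap)                       # under-relaxation for stability
    print(f"converged in {it} iterations: T = {temperature:.1f} K, gap = {gap:.3f}")

In a real coupled tool, each "solve" is an expensive field calculation and the relaxation strategy (or a stronger Newton-type coupling) largely determines the robustness of the multiphysics iteration.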
VI Fuel Behavior
VI.A Background
The primary function of a nuclear fuel element is to generate and transfer heat to the reactor coolant. A second major function is to contain the fuel material and the fission products, providing a barrier against coolant contamination with fission products; this barrier is the outer shell of the fuel element, i.e., the cladding, which separates the fuel material from the coolant. Thus, the structural integrity of the fuel element must be maintained in compliance with applicable requirements, to prevent contact between the heat-generating fuel material and the coolant during normal and abnormal conditions.
The modeling and analysis of the thermo-mechanical behavior of a nuclear fuel element is important for predicting and ensuring this structural integrity during reactor operation. The behavior is a complex system of interacting and competing processes, a consequence of the reactor's high thermal power densities and neutron flux environment. There are different types of fuel materials, with differences in behavior under reactor operating conditions, including both ceramic (oxide) and metallic types available in different forms. Examples of such materials are UO2, (U,Pu)O2, (U,Pu)C, and (U,Pu)N in sintered pellet or sphere-pac form, or metallic slugs such as U-Fs (uranium-fissium), U-Zr, or U-Pu-Zr, with possible additions of minor actinides to some of those forms [66].
Over the life of the nuclear industry, fuel performance models and codes have played a role in the advancement of nuclear fuel design and in assuring fuel integrity during both normal and abnormal conditions. They have provided tools for understanding the basic phenomena taking place within the fuel and the complex interplay between those phenomena. Expertise from different fields was involved in the development of such models and codes, including nuclear, thermal, structural, materials, and chemical expertise. Because of the central role of the fuel in the nuclear steam supply system (NSSS), fuel element models of various types and levels of complexity are incorporated in the NSSS codes [67]. Those codes, arranged in ascending order of complexity of the fuel element models contained in them, include system thermal-hydraulics codes, core simulation codes, loss-of-coolant accident codes, and fuel performance codes. This shows the important role of a fuel behavior code within an advanced simulation initiative such as the one considered here, which aims at simulating all aspects of the NSSS.
Advanced fuel behavior codes [68] include both thermal and mechanical modeling, with different levels of coupling between the two types of analysis. Thermal analysis, which determines the temperature distribution within the fuel, is relatively straightforward due to the general applicability of the Fourier heat conduction law, although it is complicated by the changing thermal conductivity of the fuel during irradiation, changes in the fuel-cladding gap, and other factors such as phase transformations of the fuel and constituent redistribution. Even if those complications are set aside (or modeled through empirical correlations), the structural analysis of fuel elements represents a more challenging task. The difficulties encountered when handling the basic equations of structural analysis and material behavior of a fuel element can be summarized as follows:
1- Fuel elements are three-dimensional structures, and deformation can take place anywhere over the structure.
2- The constitutive laws (stress-strain) are nonlinear as a result of creep, plasticity, and other material behavior; time occurs as a fourth independent variable, and material behavior may be anisotropic.
3- Material behavior under irradiation, in both fuel and cladding, is a complex problem, and understanding of the detailed physical phenomena involved remains limited. The inclusion of time as an additional factor complicates the problem even further.
4- Chemical phenomena in the fuel element, such as fuel-cladding chemical interaction and corrosion of the cladding by the coolant, change the cladding load-bearing properties.
where
L = load parameter,
S = failure limit,
M = material quantities,
D = design parameters, and
O = operating conditions
This system of equations is to be solved numerically and represents the core of all mechanistic fuel element codes. Due to the extreme complexity of solving the system of equations in complete form, different levels of approximation have to be made to enable the modeling effort; those approximations range from simple 1-D thermal analysis models to full thermo-mechanical 2-D finite element representations [67].
Although there are wide variations in the complexity of the thermo-mechanical fuel codes, the physical models implemented within those codes do not have a comparable first-principles level of rigor. Most of those models are empirical or semi-empirical, based on simplistic phenomenological descriptions calibrated to a certain range of operating conditions, and their applicability outside this range, i.e., in extrapolation, might not be valid. This lack of first-principles models of the physical phenomena in the fuel performance codes, combined with the simplified thermo-mechanical modeling, is part of the motivation for the proposed advanced simulation initiative.
As mentioned above, the fuel behavior codes contain contributions from a wide range of modeling disciplines, including nuclear, thermal, mechanical, and chemical modeling, and there are different functional requirements corresponding to each aspect of the modeling activities. The main requirements of the fuel performance codes are to predict the thermal and mechanical response of the fuel elements during normal and abnormal operations of the reactor. Nuclear physics, chemical models, and materials behavior models are interconnected with the thermal and mechanical analysis in those codes. The analysis provides estimates of a number of integral parameters that are observed during the simulations and are usually used by the fuel designer to evaluate fuel integrity during reactor operations. For a cylindrical fuel slug enclosed in a steel cladding tube (the most common form of fast reactor fuel), those parameters include:
- Cumulative damage function (CDF), which relates the time under given stress and temperature conditions to the cladding time-to-rupture
- Stresses on the cladding
- Cladding diametral strain
- Fuel axial growth
- Magnitude of fuel-cladding chemical interaction (FCCI)
- Temperature distribution within the fuel and cladding
- Fission gas pressure and released fraction
For fast reactor licensing activities, limits are imposed on the values of those parameters that must not be exceeded during reactor operations, in order to keep the probability of pin failures below licensing agency requirements.
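The cumulative damage function in the first item of the list above is typically evaluated with a time-fraction (life-fraction) rule: damage increments Δt / t_rupture(σ, T) are summed over the operating history, and failure is assumed as the CDF approaches 1. The sketch below uses a made-up rupture-time correlation purely to illustrate the bookkeeping; it is not the correlation used in any actual fuel performance code.

    import math

    def time_to_rupture(stress_mpa, temp_k):
        """Made-up rupture-time correlation (illustrative shape only), in hours."""
        return 1.0e12 * math.exp(-temp_k / 150.0) / stress_mpa**2

    def cumulative_damage(history):
        """Life-fraction rule: CDF = sum of (time at condition) / t_rupture."""
        return sum(dt / time_to_rupture(s, T) for dt, s, T in history)

    # Operating history as (hours at condition, cladding stress MPa, temperature K).
    history = [(8000, 40.0, 800.0), (8000, 60.0, 820.0), (500, 120.0, 900.0)]
    print(f"CDF = {cumulative_damage(history):.4f}  (design limits keep this well below 1)")

The short, high-stress, high-temperature segment contributes disproportionately to the damage sum, which is why transient conditions dominate cladding lifetime assessments.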
models. Other uncertainties can be attributed to the type of thermal and mechanical analysis performed. For example, the use of a fuel performance code that divides a fuel pin into a number of axial segments that are not mechanically connected (through finite element modeling) can lead to errors in the prediction of axial fuel growth. In addition to errors in material properties and thermo-mechanical model approximations, errors in the operating conditions of the reactor can lead to further uncertainties in the calculated integral parameters: uncertainties in the pin power densities, in the magnitude and shape of the neutron flux (accumulation of fission products and actinides within the fuel), and in the coolant flow temperature all increase the uncertainties of the calculated parameters. Note that most cladding failures take place at a particular location on the pin geometry and are unlikely to occur uniformly over one axial location.
Uncertainties in other calculated parameters, such as peak fuel temperature and cladding stresses, can be as low as 10% during the initial stages of irradiation; however, those uncertainties increase with irradiation. The uncertainty in fuel thermal conductivity can be as high as 25% for unirradiated fuel of certain alloys, and it grows with irradiation through changes in fuel porosity, possible sodium logging (in sodium-cooled fast reactors), changes in fuel density, fuel restructuring, crack growth, changes in grain size, and possible phase transformations. All of these phenomena, which cannot be measured in most situations, contribute further uncertainty to the fuel thermal conductivity and hence to the fuel temperature predictions. As for the stresses on the cladding, they are mostly attributable to fission gas pressure during the early stages of irradiation and can be estimated with high accuracy during those stages. However, as the solid fission products accumulate within the fuel and the fuel starts to load the cladding, fuel-cladding mechanical interaction (FCMI) takes place, and the stresses on the cladding become less predictable.
A large number of computer codes that simulate fuel behavior are available. Generally, those codes can be divided into three major categories: simple correlation fuel performance codes, fuel model codes, and mechanistic fuel performance codes [67, 72]. The last category is of interest here, since the former categories provide simplistic representations of the fuel behavior that are limited to certain applications and cannot be used to extrapolate the fuel behavior. Among the mechanistic fuel codes there are codes that simulate the fuel in thermal reactors and there are fast reactor fuel performance codes; of interest to the current report are the codes related to fast reactor applications. In general, mechanistic codes contain both thermal and mechanical analysis of the fuel elements. Table VI.1 summarizes the mechanistic fast reactor fuel performance codes.
Table VI.1. Summary of mechanistic fast reactor fuel performance codes [67].
(a) Steady state (SS), transient (T), or steady-state-and-transient (SST).
(b) One-dimensional codes include a radial analysis model only; 1.5-D codes also include provisions for dividing the fuel into axial segments with different operating conditions. Coupling among axial segments varies from code to code.
As shown in the table, the codes are 1-D to 2-D codes, and each code has its own approximations for the structural analysis. Most of the codes use structural analysis approximations to avoid the use of finite element analysis, such as the LIFE series of codes [73] (the LIFE code remains the main U.S. fast reactor steady-state code, and the FPIN code [73, 74] is the U.S. code for transient behavior, especially for metallic fuel). Other codes that use finite element analysis limit it to 2-D analysis with assumptions regarding the constitutive laws; the Japanese ALFUS code (not listed in the table) [75] is an example of such a code (2½-D, as it does not account for the azimuthal variations around the periphery of the pin).
In addition to the structural analysis limitations of the existing codes, there are also limitations in the implementation of the physical models within the codes. For example, most of the existing codes use correlations to represent the fission gas release within the fuel, instead of implementing a physical model that predicts the release as a function of the neutronics parameters, the microstructure of the fuel, the temperature, and other factors. This highlights the need for implementing more detailed physical models that predict the phenomena of interest from first principles. These limitations restrict the ability of the codes to extrapolate designs beyond the boundaries determined by the validation database; for example, the lack of axial coupling between the different axial sections can lead to errors in the estimation of fuel axial growth during irradiation, as discussed before.
In order to go beyond the existing state of fuel behavior simulation codes, a number of requirements will need to be met by the new code, as follows:
Figure VI.1 shows the transition from the current fuel performance code methodology to the new paradigm that includes the above requirements. Note the feedback between the physical models, the materials database, and the other reactor simulation codes. The code's physical models both receive material properties from the properties database and/or first-principles modeling and provide input parameters to those models. For example, at the start of irradiation, the database will provide the unirradiated fuel thermal conductivity to the performance code. The code's physical models will estimate the evolution of phenomena that can affect the thermal conductivity, such as porosity and phase transformation, and feed that information back to the first-principles models to update the fuel thermal conductivity. Similarly, models that account for irradiation hardening of the cladding material will feed information into the detailed cladding tensile properties models that follow the dislocation and grain growth in the cladding and reside outside the fuel performance code. In addition, the figure shows feedback to and from the other codes simulating the NSSS, such as the thermal-hydraulics and neutronics codes providing the operating conditions of the fuel pin.
Figure VI.1. Basic Structure of a General Computer Code for Fuel Element Behavior and Proposed Approach
The construction of a fuel behavior code meeting the above criteria faces different types of challenges. Figure VI.2 shows the computational challenge posed by the use of a 3-D finite element framework [76]: it indicates the number of operations required for the simulation of one fuel rod over one year of irradiation, showing that teraflops-scale computing is required for such calculations. This computational burden results from the detailed structural analysis requirement alone, with the physical models still represented mostly by empirical correlations; the use of detailed physical models will further increase the burden. Another level of computational complexity can be expected if more than one element in a subassembly is to be simulated, especially if the effects of neighboring elements are thought to be important. In addition to the increased computational effort, the development of detailed physical models is itself a very complicated and challenging task: those models are interdependent, and some of the parameters involved cannot be measured experimentally, requiring the development of detailed first-principles models.
Figure VI.2. Computational requirements of fuel behavior modeling approaches (empirical 1 to 1½-D models, interpolation models, full 2-D finite element models, and 3-D finite element models)
Such a code will make it possible to extrapolate fuel designs beyond the validation database. Also, the new fuel performance code, with its detailed and complex physical models, can help in planning costly irradiation experiments, limiting their scope to a certain region of parameter space based on the code's predictions. This can ultimately lead to significant reductions in the R&D cost of new fuel designs and enhance the operating parameters of existing designs.
VII Balance of Plant
VII.A Background
There are ongoing efforts to expand the range of application of nuclear power plants to cogeneration functions, in particular the high-temperature production of hydrogen for transportation fuels with nuclear heat. However, the current application of nuclear power plants in the world's industrial utility fleets – electricity generation – is a very important and significant one, borne out by decades of optimized and well-balanced operation of the Gen-II light water reactor (LWR) units. These units have operated well in the industrially developed nations over the past decade. They have contributed to the diversification of energy supply, reduced the carbon burden on the environment, and conserved hydrocarbons, particularly oil, for future generations. But improvements in performance will be ever more important as the globalization of the demand curve and the ever-tightening supply curve make their effects felt. The energy conversion part of the nuclear power plant, which is responsible for the generation of the electrical power supplied to the grid from the nuclear heat, currently represents a large footprint, both in number of systems and in building area. Technical improvements in this part of the plant that can lead to higher thermodynamic efficiency, reduced capital cost, and higher reliability, coupled with less downtime and lower maintenance requirements in terms of cost and manpower, could have a significant impact on the performance of the plant as a whole. Improvements in energy conversion efficiency affect the total mills/kWh, while capital reduction on the nuclear island affects only part of the cost.
Proposals have been made that future generations of reactors be coupled to direct energy
conversion systems, such as thermoelectric- or thermionic-based systems, to
reduce the current number of thermal-hydraulic systems. In the specific case of LMRs,
MHD coupling has been proposed. For the purposes of this report, the discussion will be
limited to the classical rotating turbo-machine and the related auxiliary thermal-fluid
systems coupled to the electrical generator. The phenomena to be simulated are then
(1) fluid, (2) mechanical/materials, and (3) electrical. Fluid flow expanding through
the turbine, heat transfer and two phase generation in the heat exchangers, mechanical
stresses and high-temperature materials integrity of the turbo-machinery blades and
wheels, and the coupling of the stator and rotor magnetic fields in the electrical
generation are all phenomena whose more advanced simulation could lead to technical
improvements and hence to enhanced performance parameters such as higher energy
conversion efficiency. Improved simulation of these phenomena would
have to cover not only the static range but also the dynamic range inherent in the plant
duty cycle. The plant duty cycle inherently spans conditions from normal operation to
off-normal transients and beyond to the design basis and the beyond design basis
accidents. The transient performance would therefore be continuous from the mild
dynamic effects of operational load change to turbine deblading accidents at full power
and pressure. In addition to the inherent range of the various phenomena imposed by the
dynamics, the simulation codes must also meet computational performance
requirements. The two basic applications of the simulation tools would be (1)
Component Computer Aided Design (2) Plant Operation Diagnostics and Control Aid
(on-line and off-line). This would then cover the whole range of the plant duty cycle.
For (1) Component Computer Aided Design, the major need would be accuracy in the
design of component tolerances, such as turbine blade gap clearances of less than
fractions of a mil, to compute bypass leakage losses. For ultimate usefulness to the
component designer, computation turnaround would have to be on the order of minutes.
Component design
would not only be aimed at thermodynamic and mechanical efficiency, but would also
include issues regarding component reliability and lifetime. Vibration questions, fluid-
structure coupling and thermal stress would need to be addressed. These would then lead
to the simulation of materials response. Models of materials wear, fatigue, and failure
would also need to be coupled in.
For (2) Plant Operation Diagnostics and Control Aid, there would be both static and
dynamic applications. Off-line application would aid in the management of maintenance
schedules. Corrosion, fatigue, and wear and tear on bearings, seals and other
subcomponents, if accurately simulated, could allow plant management to avoid coolant
leaks which could lead to fires. Simulation tools for this function would not need
instantaneous turnaround but would require accuracy on the computing of parameters
such as crud deposition on rotating shafts leading to pump binding. On-line applications
would be operator aids in the computer control of plant integrated systems and require
prompt turnaround. Control decisions may have to be made in fractions of a second, with
simulation of many potential diagnostic scenarios combined with mitigative actions. This
would definitely be a dynamic application and accuracy would also be needed. In both
types of applications, interaction and feedback from the plant sensors would be required.
This would call for a series of AI-based diagnostic and control algorithms interacting with
the simulation of the plant BOP. Given that this report restricts itself to conventional
turbo-machinery, the simulation requirements will focus on two major lines of
turbo-machines and associated thermodynamic cycle equipment: (1) single-phase gas
turbines and (2) two-phase vapor turbines.
In the case of (1) single phase gas turbines, the future focus may be on helium (inert gas)
and supercritical carbon dioxide (around the critical point) working fluids. But there are
other possibilities such as nitrogen. The
thermodynamic cycle utilized would be the single phase Brayton cycle and variations
thereof [77]. The usage of and the number of stages for recuperation, inter-cooling and
recompression are all variations which need to be treated. In the case of (2) two phase
vapor turbines, the future focus may be on supercritical water but there are other
possibilities, in particular a mixture of fluids with different boiling points. The
thermodynamic cycle utilized would be the Rankine cycle and variations thereof [78]. As
with the Brayton cycle, there are potential variations in regenerative heating, pre- and re-
heating and superheating features. The discussion at this point will be illustrative, using
each of the two cycles with its associated equipment to present issues where advanced
simulation capabilities would be of utility. Table VII.I addresses the single phase cycle,
while Table VII.II addresses the two-phase cycle.

Table VII.I. Gas Cycle Major Components
(1) Rotating Turbomachinery
    1.1 Turbine
    1.2 Compressor/circulator
(2) Heat Transfer Equipment
    2.1 Recuperator
    2.2 Precooler
    2.3 Intercooler
    2.4 Intermediate Heat Exchanger

Table VII.II. Steam Cycle Major Components
(1) Rotating Turbomachinery
    1.1 Turbine
    1.2 Feedwater Pump
(2) Heat Transfer Equipment
    2.1 Steam Generator
    2.2 Condenser
    2.3 Feedwater Train
    2.4 Reheater
    2.5 Moisture Separator

Currently, the state-of-the-art aids in the design of the turbomachinery and the associated
component systems are CFD tools utilized in the simulation of the fluid flow. For the gas
turbine cycle, there are a number of specialty codes specifically for the design of
turbines, circulators, and compressors. Specific gases are treated on a case-by-case basis,
allowing for unique features in the cycle such as supercritical fluids operating in a
stability range around the critical point. There are also a number of general purpose CFD
codes which are
being applied to these design problems, but the application is more towards the associated
heat transfer equipment. One of the key pieces of equipment in this category would be the
intermediate sodium-to-gas heat exchanger. This heat exchanger would form part of the
primary system boundary, and its design for thermal stresses and general boundary
integrity would influence the reliability and availability of the plant.
Reducing computation times, currently measured in days, through improved stability of
the numerical schemes would be an advance. For the steam turbine cycle, the available CFD codes with
two phase capability are limited. A complete steam turbine design without accompanying
experiments is still to be achieved at this stage. Not only are there limitations on the
turbo-machinery side, but the CFD tools for the design of two-phase heat transfer
equipment, such as the steam generator, are also limited. Not only are the two-phase flow
equations numerically and computationally challenging, but the phenomenological
modeling equations themselves are the subject of research. The interfacial transfer terms
require experimental work and at this point cannot be resolved by simulation alone. Fluid
flow aside, there are issues regarding fluid-structure interaction and associated materials
behavior/design questions. There is some existing capability with CFD tools being
coupled to structural analysis tools, but the vast potential is still to be explored. The
coupling is essentially explicit. So there are possibilities for other numerical schemes.
In the case of aids for plant operation diagnostics and control, there are real-time operator
training simulators for two-phase and single-phase systems, but those are for
prescribed scenarios where the solutions are known a priori. In addition to these
limitations, the phenomenological models are zeroth order. One-
dimensional integrated plant system codes used for accident analyses are more detailed in
modeling, but as a consequence are currently not capable of real-time and faster-than-real-
time response. Sodium reactor system simulation codes such as SASSYS [79] and light
water reactor codes such as CATHARE [80] and RELAP5 [81] are examples of such one-
dimensional integrated plant system codes. These codes have stand-alone models for
specialized components such as turbines, circulators, and pumps, and much more generic
thermal-hydraulic components such as pipes, heat slabs, volumes, and junctions which
can be connected together to form a general thermo-fluid network. There is considerable
flexibility, but for a plant operator aid, a number of scenarios would have to be run faster
than real time, for the current AI diagnostic decision making systems to discriminate and
filter down the scenarios based on feedback from the real-time measurements. Not only
are there opportunities for improvements in the plant simulations but also in the
algorithms of the AI diagnostics and control. Current event-based decision making should
eventually be replaced by first-principles function-based decision making. There is a need
for R&D in this particular area in parallel with the R&D on the plant simulation models.
Coupling in the materials behavior simulation models would help in establishing decision
criteria. For the ABTR and eventually the ABR, this class of operational aids would
significantly improve the availability and reduce the down time of sodium plant systems.
Maintenance, inspectability and surveillance of under-sodium components could benefit
significantly. A core basis for these operational aids could be the simulation code
SASSYS which has been verified and validated for sodium systems and the IGENPRO
[82] suite of diagnostic/monitoring tools (MSET/PRODIAG [83, 84]) which have been
developed for generic thermal-hydraulic systems.
Computer hardware has made major advances in the past and will in all likelihood
continue to make significant advances in the future both in terms of speed and capacity.
In this approach, message passing may be the choke point which needs resolution for
both the component design aid and the operation diagnostics and control aid. Parallel
computing and modularity would appear to be the optimum approach to take: modules for
fluid flow physics interfacing with those for structural mechanics and those for materials
behavior. The diagnostic/control AI module could then play an overall supervisory role.
This calls for a focus on interfacing and in particular on coupling the different physics
and the identification and characterization of the key variables. Implicit coupling would
have its own special requirements in this approach. There may be some very large
property changes which need to be addressed in the stability of the coupling. Fluid flow
would have to be coupled to structural behavior and then with the other
phenomenological physics and eventually to AI diagnostic and control algorithms.
Stability will be a key feature. To determine whether or not progress has been achieved
in the R&D areas for this approach, it is recommended that a sequence of target tests be
specified in order of difficulty. These tests and their results would form the milestones in
this program. The comprehensiveness of the results would indicate the status of the
technology at each stage of the R&D phases. The key in evaluating progress against the
milestone tests would be the reduction in the need to perform experiments, both
developmental and confirmatory.
For the component design aid, certain illustrative future 5 year milestones for the gas
turbine cycle [85] are suggested here for initial discussion. Corresponding milestones
can be proposed for the steam turbine cycle. For the turbo-machine development, the
following specific data needs are to be obtained by simulation alone. These data form the
deliverables, and the acceptance criteria would include the target computation time.
• Determine flow velocity and temperature profile at the turbine inlet to
quantify potential flow misdistribution.
• Validate analytical performance maps by obtaining the precise configuration
of compressor blades and turbine blades derived from simulation.
• Demonstrate journal and thrust catcher bearing performance, electric control
system performance, and rotor dynamic stability with prototypical bearings.
• Determine electrical properties of generator windings in a gas environment
and obtain insulation data under simulated blow-down conditions to confirm
structural integrity.
• Demonstrate the ability of the turbo-compressor casing to contain missiles as
a result of turbine deblading.
• Determine turbine rotor vibration characteristics, including rotor natural
frequencies and deflection magnitudes.
• Determine static seal system (e.g., seal between the turbocompressor and inlet
ducts) performance and determine materials data for segmented piston seal
rings and the mating surfaces with which the seals are in contact. Obtain data
on seal coating materials, life expectancy of materials as a function of wear,
and the coefficient of friction in gas to be used for resistance to sliding
motion.
For the pre-cooler (PC) and intercooler (IC), the following specific data needs are to be
obtained by simulation alone.
• Determine the flow distribution and magnitude of hot/cold streaks at various
cross sections, including the inlet to the PC/IC tube bundles.
• Determine leak rates for PC/IC high pressure seal arrangement to assess coolant
bypass.
• Determine presence of PC/IC flow induced vibration characteristics, such as
flow-induced turbulent buffeting, vortex shedding, or fluid-elastic instability,
which can cause dynamic instability and tube damage.
• Confirm inspection capability and inspection equipment sensitivity for the
specific PC/IC tube circuit geometry.
• Confirm PC/IC shell- and tube-side heat transfer characteristics and shell-side
flow resistance and determine the effective flow resistance of the finned tube
bundle.
• Quantify the tube side erosion/corrosion rates as a function of the operating
parameters, water chemistry ranges, and tube geometry.
The benefits of being able to design without the need for testing and experimental
facilities have proved very attractive to aircraft companies. Large-scale test facilities and
thermo-fluid experiments can require tens of millions of dollars and can take years to
construct and perform. Improvements in generation efficiencies of a few percent for a
large reactor plant would be a significant contribution to the capital cost amortization
over the plant lifetime. For a fleet of plants it would be even larger.
For the plant operation diagnostics and control aid, the milestone test goal would be to, in
five years, implement the operator on-line aid on the plant computer of a test facility and
put the system through a limited number of duty cycle events. This would be a real-time
test. For the management of maintenance, data from past power plant system failure
events could be used. Determination of status in the R&D progress towards this goal for
this part of the program would include:
(1) Portability – can handle different T-H systems and
(2) Accommodation of unanticipated events – can handle events that were not used
in the development
with diagnostics (identification of malfunctioning component) of “good” accuracy and
transient management (recommendation of sequences of operator action) which are
“reasonably” optimal. Criteria for “good” accuracy diagnostics could be (a) no
misdiagnosis, (b) 95% of the cases with no more than two to three potential candidates
identified within a minute of transient time. Criteria for “reasonably” optimal control
could be (a) no component damage within the DBA envelope, (b) number of
recommended operator steps “fewer” than existing procedures generated by
vendor/utility system engineers for existing facilities. To demonstrate this, proof-of-
concept testing will be performed for a wide range of transients, but only three types will
be required: (a) mass imbalance, (b) momentum imbalance, and (c) energy imbalance.
Testing transients will vary in extent/severity and duration from mild/slow (0.01%/hr) to
severe/short (10%/minute). The tests will be performed in two stages (i) off-line, (ii) on-
line.
(i) Off-line: The test plan for the diagnostics combines the use of
synthesized/simulator signal data and past plant instrument data.
There are databases of full-scope operator training simulator system
transient data for commercial plants. For the transient control, the
plan is to test by comparing recommended sequences of operator
actions produced by the modules against existing alignment
procedures, AOPs, and alarm actions for the same event produced by
vendor/utility system engineers for the current fleet of power plants.
(ii) On-line: Accommodation of variations in signals during day-to-day,
hour-to-hour operations can be best tested out on-line. In a staged
approach, the milestone in five years could be to select an auxiliary
system to implement and test the operational aid.
At current rates for electric energy costs, a downtime of days would be worth a few
million dollars to a utility for a single large reactor plant. The benefits of a plant
operation aid which could make a significant contribution to improvements in plant
availability of days over an operating year would be considerable. System-wide, the cost-
benefit ratio would be even more significant.
VIII Safety Analysis
VIII.A Background
This discussion relates to developmental needs for new computational techniques applied
to research, development, and licensing of future generation nuclear power reactors, with
special emphasis on liquid sodium-cooled fast reactors.
For licensing analyses, reactor and plant simulation requirements are specifically targeted
to quantify performance of structures, systems, and components in the design. The level
of accuracy required must be sufficient to assure compliance with design guidelines and
standards for design performance, as specified by the designers and verified by the
regulators.
The traditional safety philosophy employed in the design of commercial nuclear power
reactors is based on the concept of defense in depth. In design, the defense-in-depth
concept is manifested by the use of multiple barriers or design features (structures,
systems, and components) to provide protection of the health and safety of the public and
the plant employees. In addition, the defense-in-depth barriers or systems must be
independent and sufficiently diverse to prevent the possibility of a single failure that
breaches all the barriers or fails all the systems. For example, the barriers for
containment of radioactivity are the fuel cladding, the primary coolant system, and the
reactor containment building. The reactor protection system consists of two independent
and diverse reactor shutdown systems. The reactor shutdown cooling system provides
two (or more) paths for removal of residual decay heat.
Safety analysis simulations for nuclear reactors address the physical phenomena and
conditions during at-power plant operation and following shutdown. Heat generation in
the fuel is quantified by models of decay heat generation, fission rate, and heat release.
The fission rate can be simulated with reactor point kinetics or reactor spatial kinetics,
depending on the assumed operating conditions and accuracy requirements. Heat release
from the fuel is limited by the heat conductivity of the fuel material, and the thermal
resistance presented by the fuel/cladding interface, the cladding conductivity, and the
convective heat transfer limits at the cladding/coolant interface. The fuel thermal
conductivity may depend on fuel structural and chemical changes caused by irradiation,
and the fuel/cladding thermal resistance may depend on geometrical and chemical
changes caused by irradiation. Consequently, basic heat generation and transfer models
for nuclear fuel are usually augmented by coupled chemical and structural/mechanical
models for fuel and cladding irradiation behavior. At the cladding/coolant interface, the
heat transfer capability is determined by the coolant properties and flow rate.
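As a minimal illustration of the point-kinetics option mentioned above, the sketch below integrates the standard one-delayed-group point kinetics equations for a small step reactivity insertion. All parameter values are generic illustrative numbers, not data for any reactor discussed in this report.

    # Minimal sketch: one-delayed-group point kinetics with a step reactivity
    # insertion. All parameter values are generic illustrative numbers.
    from scipy.integrate import solve_ivp

    BETA = 0.0065      # delayed neutron fraction (illustrative)
    LAMBDA_GEN = 1e-5  # neutron generation time, s (illustrative)
    DECAY = 0.08       # effective precursor decay constant, 1/s (illustrative)
    RHO = 0.1 * BETA   # step reactivity insertion, well below prompt critical

    def rhs(t, y):
        n, c = y  # relative power and delayed-neutron precursor population
        dn = (RHO - BETA) / LAMBDA_GEN * n + DECAY * c
        dc = BETA / LAMBDA_GEN * n - DECAY * c
        return [dn, dc]

    # Start from the steady state n = 1, C = beta * n / (lambda * Lambda).
    y0 = [1.0, BETA / (DECAY * LAMBDA_GEN)]
    # The equations are stiff (prompt and delayed time scales differ by
    # orders of magnitude), so an implicit integrator is used.
    sol = solve_ivp(rhs, (0.0, 10.0), y0, method="Radau")
    print(f"relative power after 10 s: {sol.y[0, -1]:.3f}")

Spatial kinetics replaces the single amplitude n(t) with a space- and energy-dependent flux, at correspondingly higher computational cost.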
In a reactor sized for commercial power generation, there will typically be tens of
thousands of cylindrical fuel elements. Safety analyses usually consider only a subset of
this number, and the limiting element, called the hot pin or hot channel, is normally
identified and analyzed with conservatisms to compensate for phenomenological,
manufacturing, and operational uncertainties. The technique of using the hot pin to
establish safety margins evolved during an earlier era when consideration of analysis
details was limited by computer hardware capabilities.
The complexities associated with functional requirements for advanced nuclear reactor
safety analysis simulations may be categorized according to the objectives of the
analysis. For research and development, the objective is to gain understanding of
physical phenomena and their interactions relevant to materials and equipment
performance in proposed reactor arrangements and operating conditions. In licensing
analyses, the objective is to provide a very high-confidence measure of the safety
margins, quantified in terms of material temperatures and strengths relative to failure
limits.
The essence of research and development is to understand the physical phenomena that
govern the performance of materials in engineered applications. The scientific method of
gaining such understanding usually consists of observation, model proposal, and
validation by testing. Computational analyses are employed to provide numerical
solutions of phenomenological models, producing quantitative predictions of material
behavior that may be compared to observations from testing. Once validated, scientific
models may be applied to investigate the impacts of various initial and boundary
conditions and input assumptions, within the validation range of the model. Therefore,
the overall functional requirement for the analysis method is to provide a reliable, “best
estimate” numerical solution of the proposed model. This function is most often fulfilled
in today’s research environment by computational software executing on computer
hardware. Such computational software usually provides not only a numerical solution
of the equations representing the phenomenological model, but also graphical displays of
solution results and condensed summaries of solution metrics for purposes of reporting.
Hence, the functional requirements for simulation are to provide a reliable model solution
with efficiency and clarity.
In the nuclear reactor licensing arena, modeling and analysis provide predictions of the
performance of reactor systems, components, and structures in relation to safety limits
established by regulatory requirements. The models and data employed in the analyses
are subject to regulatory review, and hence must present a consensus view by applicant
and regulator technical experts for reactor behavior. Uncertainties in the analysis must be
quantified to a degree that satisfies a level of confidence set by the regulator. Analyses
submitted by the applicant must be reproducible by independent reviewers representing
the regulator. The analyses must produce quantified metrics that measure margins to
safety limits according to formats and standards established by law, interpreted by the
regulator, and documented in published guidelines. The functional requirements for
safety analyses in the licensing process may therefore be strictly specified by regulations.
However, in every licensing application, there arise situations that require negotiations
between the applicant and the regulator because the particular analysis may not be
covered by a previous ruling or judgment.
Although they are not formally part of the licensing requirements, simulations of beyond-
design-basis accident scenarios have traditionally been performed to demonstrate the
margin of additional protection provided by the design beyond that required by the
normal regulations [4]. These sequences are initiated by assumption of an equipment
failure and failure of the safety system (double fault) designed to protect against the
consequences of the initial failure. By design, the probability of such double fault
accidents is less than one in a million years of reactor operation. Analyses of beyond
design basis accidents (BDBA) are usually performed with “best estimate” modeling
assumptions, that is, without consideration of uncertainties. For liquid metal fast
reactors, the three most notable BDBA scenarios are 1) the unprotected transient
overpower (UTOP) accident, in which it is assumed that one or more control rods
withdraw and the reactor scram system fails, 2) the unprotected loss-of-flow (ULOF)
accident, in which it is assumed that the coolant pumps cease operation and the reactor
scram system fails, and 3) the unprotected loss-of-heat-sink (ULOHS) accident, in which
normal heat rejection to the power cycle is lost and the reactor scram system fails.
Analyses and reactor testing in the U.S. and internationally have shown that liquid sodium
cooled fast reactors have sufficient inherent safety margins to mitigate the consequences
of low-probability, beyond-design-basis accidents, and to prevent the development of
conditions (coolant boiling, cladding failure, fuel melting) that could release harmful
radiation.
The limitations associated with existing models used in nuclear reactor safety analysis
simulations may be categorized into two areas: geometric and phenomenological.
Traditionally, safety analysis simulations have been limited in the number of fuel
elements that could be analyzed, and in the dimensionality of coolant flow directions.
There are typically tens of thousands of fuel elements in a commercial-sized nuclear
power reactor, and computer hardware limitations have restricted the number of fuel
elements that could be analyzed in detail to one, a few, or at most several dozen. The
principal computer hardware limitations have been processor speed and size of memory.
Modern computer hardware development has now progressed to make available
relatively fast processors at low cost, and large capacity memories. Further, multiple
processors and memories have been coupled to create parallel computer systems that are
many orders of magnitude more capable than the batch computers that served as the
developmental platforms for much of the existing safety analysis software. The modern
expansion of computer hardware capability now makes possible the reformulation of
safety analysis software to utilize the greatly expanded hardware capabilities. With this
reformulation, the level of geometric detail, in terms of the number of fuel elements
considered and the dimensionality of coolant flow directions, can be increased by many
orders of magnitude. The consequence of this greater geometric detail will be the
reduction of conservatism associated with the averaging inherent in “hot channel” and
“hot spot” modeling techniques. By eliminating unnecessary conservatisms, advanced
analysis techniques will produce a reduction in uncertainties and a real increase in
permitted operational upper limits.
The physical models in existing safety analysis simulation software have also been
limited in the phenomenological scope of the representational models by computer
hardware limitations. These models include descriptions of reactor kinetics, heat
conduction and convection, fluid dynamics, chemical interactions, and structural
mechanics. Within each of these areas, individual modeling aspects of physical behavior
have been simplified to meet computational limitations. In addition, the coupling of
individual models, representing the effects of non-linear dependencies, has been
simplified. For example, dependencies exist among the 1) reactor power and reactivity,
2) fuel, coolant, and structural temperatures, and 3) reactor heat removal capacity.
Simplifications of complex phenomenological models and dependencies have been made
in past safety analyses to accommodate computer hardware limitations. With the advent
of modern computer hardware, these limitations may be removed to permit greater
accuracy in representation of physical behavior of materials in design basis and beyond
design basis conditions, and hence more accurate assessment of the true safety margins.
The SAS4A/SASSYS-1 computer code system [87] is an example of the current state-of-
the-art in liquid sodium-cooled fast reactor safety analysis software. The SASSYS-1
computational path is optimized for analysis of design basis accidents (DBA) and
anticipated transients without scram (ATWS), and the SAS4A path is employed to assess
the consequences of severe accidents involving coolant boiling, cladding failure, and fuel
melting. Both computational paths provide single-pin thermal and hydraulic
subassembly models that can be used to represent as many reactor subassemblies as
needed. In addition, the SASSYS-1 code has a multiple-pin subassembly model in which
every fuel element and coolant sub-channel in a subassembly can be modeled explicitly.
The level of geometric detail in the reactor thermal-hydraulic model is limited only by the
computer hardware capability (speed and memory size). SAS4A has models for single
and two-phase coolant dynamics, fuel element transient behavior, and fuel/cladding
melting and relocation. SASSYS-1 adds models for primary and intermediate coolant
systems heat transfer and hydraulics, plant control systems, and balance-of-plant
components and hydraulic systems. Both SAS4A and SASSYS-1 are coupled to point
and spatial reactor kinetics models for prediction of reactor power in transient
simulations.
The best approach for future safety analysis simulation capabilities is a dual-path
program for research and development analysis on the one hand, and licensing analysis
on the other. The research and development path requires highly flexible and robust
computational software that can serve as a framework for integration of diverse
phenomenological models and databases, capable of simulating proposed and actual
experiments and tests. The licensing path requires a significant extension of the
phenomenological and geometric capabilities of existing reactor safety analysis software,
capable of detailed simulations that reduce the uncertainties inherent in current
capabilities and provide a basis for optimal reactor operating conditions. Both paths must
implement techniques that take advantage of modern parallel computing architectures.
The objective of the research and development path is to provide a tool that frees the
researcher from the burden of model integration and numerical solution, and thus speeds
the research and development process. Experience in nuclear reactor research and
development has shown that the majority (greater than 50%) of a researcher’s time is
spent not in creating models and interpreting results, but in obtaining reliable numerical
solutions. Development of dedicated software to reduce the effort needed to obtain
model solutions will make researchers more efficient, and shorten the research and
development time frame. The new software will integrate user-supplied models with
available heat transfer, fluid dynamics, structural mechanics, and chemical analysis
software. The integration function of the software will include a user interface that
combines maximum flexibility with ease of use. If it were possible to halve the research
time needed for analysis, the overall research and development time frame could be
reduced by up to 25%.
IX Chemical Separation and Processing
IX.A.1 Background
UREX+1a is the baseline solvent extraction process under development to achieve the
separations and product purities required for GNEP. The spent nuclear fuel is first
dissolved in nitric acid. The dissolved fuel is then contacted with a series of solvents that
sequentially extract key components, isolating them from the remaining mixture. The
UREX+1a process consists of a series of four solvent-extraction flowsheets that perform
the following operations: (1) recovery of U and Tc (UREX), (2) recovery of Cs and Sr
(CCD-PEG), (3) recovery of TRU and rare earth elements (TRUEX), and (4)
separation of TRU elements from the rare earths (TALSPEAK). By adjusting the process
feed compositions, the flow rates, and the number of contact stages, the effectiveness of
the extraction can be maximized.
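As an illustration of how stage count and flow rates drive extraction effectiveness, the sketch below evaluates the classical Kremser relation for an idealized countercurrent extraction cascade. The distribution ratio and flow values are arbitrary illustrative numbers, not UREX+1a process data.

    # Illustrative sketch: fraction of a solute left in the raffinate of an
    # ideal countercurrent solvent-extraction cascade (Kremser relation).
    # The distribution ratio, flows, and stage counts are arbitrary examples,
    # not UREX+1a design values.

    def raffinate_fraction(distribution_ratio, org_flow, aq_flow, stages):
        """Fraction of solute remaining unextracted after N ideal stages."""
        extraction_factor = distribution_ratio * org_flow / aq_flow
        if abs(extraction_factor - 1.0) < 1e-12:
            return 1.0 / (stages + 1)
        return (extraction_factor - 1.0) / (extraction_factor ** (stages + 1) - 1.0)

    if __name__ == "__main__":
        for n in (2, 4, 8):
            f = raffinate_fraction(distribution_ratio=5.0, org_flow=1.0,
                                   aq_flow=1.0, stages=n)
            print(f"{n} stages: {f:.2e} of the feed remains in the raffinate")

The exponential dependence on stage count is why adding contact stages is such an effective lever, and why accurate distribution-ratio data matter so much to the flowsheet calculations described below.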
and off-gases must be captured and treated for recycle or release. Solidified products
must be packaged for storage, transport, or disposal. Plant operations must be scaled
based on process stream volumes, processing times, and space requirements for specific
equipment.
The modeling of transient operations, both for single units and plant-wide, is a goal in
developing a detailed model of a reprocessing plant. Upstream deviations from steady
state will propagate through the plant, and therefore it is critical to understand the cause of
any observed transients and the appropriate real-time responses. The time dependent
response of the entire plant is important for process design, equipment design, defining
the plant layout, and for safeguards and security. Transient models will be implemented
in the development of process control software and in any software that is used to train
operators.
Existing models simulate ideal, properly functioning systems very well. As a result,
experimental data tend to diverge from model data where chemical systems begin to
diverge from the ideal. This divergence is generally at the margins, where concentrations
are low or multiple species contribute to the chemistry associated with a single element.
To some extent this divergence is associated with limitations in chemical analysis
techniques; however, increasing chemical complexity also contributes, particularly for
dilute species, where minor bulk species begin to affect the behavior. Therefore, greater
accuracy in these models, particularly as they apply to real systems, can be gained by
modeling the divergence from ideal systems as the number of species increases from one
to two, three, or many more.
IX.A.3 Current Tools and Approach
The Argonne Model for Universal Solvent Extraction (AMUSE) [90, 91] is used to
calculate flowsheets that achieve the product recoveries and purities required for
UREX+1a. The code consists of two sections, SASPE and SASSE. SASPE contains all
of the chemical property data and the algorithms that are used to calculate the chemical
speciation in a solvent extraction process for a given feed composition. SASSE performs
a mass balance for a unit operation given the process flow rates and the number of stages.
AMUSE iterates between SASPE and SASSE until the program converges to a solution.
AMUSE calculates steady state processes. The capability to model the transient approach
to steady state can be built into the model, and it can be modified to track transients
derived from process upsets.
A similar solvent extraction simulator, the PAREX code, has been developed by the CEA
for design of solvent extraction flowsheets based on PUREX, DIAMEX, SANEX, and
other processes. This flowsheet simulator is not commercially available, though modeling
results are similar to those obtained using AMUSE.
Several commercial chemical process simulators exist, though the number of suppliers is
severely limited. These programs are used to design the chemical processes that are the
core of typical chemical plants, though many of these programs are geared to the
petrochemical industries. Aspen Plus [92] from Aspen Tech is probably the most global
process simulator in terms of range of applicability. Rapid prototyping is available
because Aspen Plus has built-in unit operations that can model many processes in a
reprocessing plant. As the plant design is fine-tuned, built-in unit operations that do not
accurately model an operation can be replaced by customized models. However, Aspen
Plus is limited to steady-state models and therefore cannot be applied much beyond the
initial plant design. Other commercial products from Aspen Tech, such as Aspen
Dynamics and Aspen Custom Modeler will model transient systems. Aspen Custom
Modeler provides the most flexibility, but has no built-in models; rather, it is a tool to
solve simultaneous equations. The equations that define specific plant operations are
written into the program, and the solution algorithms built into ACM solve these
equations. Coupling unit operations within the code allows an entire plant to be modeled.
There are disadvantages to using these commercial packages for designing the plant.
Firstly, without a costly license from Aspen Tech, the simulations cannot be run.
Secondly, the solution methods are black boxes, and there is no way to examine the
underlying source code when convergence problems do occur. Thirdly, the Aspen Tech
packages are designed for the wide range of unit operations that are encountered in the
chemical process industries and so are not optimized for a unique application like a
reprocessing plant. It would be very beneficial to develop advanced solution algorithms
for this plant design problem to improve the efficiency of attaining a solution, particularly
as the plant model grows in complexity.
Generally, the accuracy of the chemical process simulators is limited by the quality of the
chemical properties database. Therefore a key facet of process design is developing
accurate chemical models and the algorithms that can model compounds for which good
data do not currently exist. These models must be developed through a combination of
experimentation and computation.
Experimental data tend to diverge from model data where chemical systems begin to
diverge from the ideal. This divergence is generally at the margins where concentrations
are low or a multiplicity of species contributes to the chemistry associated with a single
element. As an example, for species that are strongly extracted, AMUSE will predict
concentrations of 10^-12 M or lower. These values do demonstrate the major observed
behavior of the element. However, in experimental systems the actual measured values
may be 10^-6 M; the difference may be due to analytical limitations, to contributions
from species that are not modeled, or simply to the limits of the model being reached.
Therefore, greater accuracy in models, particularly as they apply to real systems, can help
explain the divergence from ideal systems as the number of species increases from one to
two, three, or many more, or where concentrations are low.
In modeling a complex plant, such small deviations, over time and across several
processes, may contribute to the accumulation of significant quantities of a material in
streams where these species are not desired. For example, the Cs content in the CCD-PEG
raffinate may be calculated as 10^-10 g/L; at a flow of 10 L/min, 1.44x10^-6 g/day is
accumulated in the raffinate tank. If the actual value is 10^-7 g/L, 1.44x10^-3 g/day
accumulates in the tank, which may result in an off-spec product. To compensate, the
raffinate would require additional processing. If this behavior is expected, adding
extraction stages may provide the required decontamination. Improved models
of chemical behavior at these margins would allow such additional process steps to be
built into the plant design, or allow processes to be altered to account for this behavior at
the design stage.
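A quick check of the accumulation arithmetic above (the concentrations and flow are the illustrative values from the example):

    # Accumulation in the raffinate tank: concentration (g/L) x flow (L/day).
    FLOW_L_PER_DAY = 10 * 60 * 24  # 10 L/min expressed per day = 14,400 L/day

    for conc_g_per_L in (1e-10, 1e-7):
        grams_per_day = conc_g_per_L * FLOW_L_PER_DAY
        print(f"{conc_g_per_L:.0e} g/L -> {grams_per_day:.2e} g/day")
    # 1e-10 g/L -> 1.44e-06 g/day; 1e-07 g/L -> 1.44e-03 g/day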
IX.A.4 Proposed Future Approach
An efficient plant model can be used to aid in the development of, or perhaps be
implemented in, process control or safeguards systems at the plant. For example, the
process control system and its responses to off-normal conditions will be developed from
simulations of the expected off-normal conditions. The deviations from normal modes of
operation must be discerned as soon as possible, with the magnitude, accuracy, and effect
of the appropriate response pre-determined. A highly efficient model may be directly
implemented into the process control system if the computational requirements are
reasonable. In this case, the mechanical response to an off-normal signal will be derived
directly from the simulated process response rather than the process control algorithm.
scale facility. On a more fundamental level, changes to the chemical structure of
extractant molecules can be computationally tailored to achieve higher selectivities for
specific elements. Such calculated structures can then be synthesized and tested in the
laboratory to confirm the desired improvement in selectivity and product quality, and
then incorporated into novel extraction processes.
IX.B.1 Background
A typical metallic fuel treatment process consists of chopping the spent fuel and placing
it into anode baskets for use in the electrorefiner. In the electrorefiner, uranium
anodically dissolves to form uranium chloride in the LiCl – KCl eutectic molten salt. The
uranium chloride serves as a transport mechanism to transfer the uranium ion to the cell
cathode where it is reduced and deposited as metallic uranium. Uranium collected on the
cathode is reused to fabricate fresh fuel after residual salt is removed from its surface by a
vaporization process. The transuranic and active fission product elements (e.g., Cs, Sr,
lanthanides) also anodically dissolve in the electrorefiner to form soluble chloride species
in the eutectic salt. Noble metal fission products (e.g., Zr, Mo, Tc, Ru) remain in the
anode baskets. These fission products are converted to a durable metallic waste form
designed for geologic storage. The transuranic elements are recovered from the molten
salt solution by electrolysis. During the electrolysis process, the transuranic elements
present as ions in the molten salt are reduced and deposited as metals at the cathode of
the electrolytic cell while chlorine gas or a salt soluble chloride is produced at the anode.
The transuranic metals are used to fabricate fresh fuel after the residual salt is removed.
Spent salt that contains the fission product chlorides is occluded in a zeolite, which is
blended with glass frit to produce a durable, leach resistant ceramic waste form.
Treatment of spent oxide fuel requires one additional unit operation that converts the
metal oxides to their base metals. In this electrolytic process, the spent fuel oxide is
placed in the cathode of the cell and as current is applied to the cell, the metal ions of the
metal oxide are reduced to form the base metal and oxide ions are liberated to the molten
salt, LiCl – Li2O. The oxide ions transport to the cell anode and are oxidized to produce
oxygen gas, which is released from the cell. The cathode containing the base metals is
transferred from the oxide reduction cell to the electrorefiner where it serves as the anode
of the electrochemical cell. The rest of the oxide fuel treatment process is identical to
that described for metallic fuel.
Numerous opportunities exist for the application of advanced simulation tools to the
pyrochemical treatment of spent advanced burner reactor fuel. These opportunities range
from fundamental electrochemical cell design and performance studies based on first
principles calculations to pyroprocessing facility design optimized for low cost fuel
treatment. Application of advanced simulation tools will result in decreased development
time and lead to enhanced resource utilization.
Beyond electrochemical cell design and evaluation, there exists a need for pyroprocess
flowsheet and plant design simulation tools. Flowsheet design tools should have the
capability to describe the thermodynamic, kinetic, and transport properties of each unit
operation. They should predict product yields and decontamination factors for the
process, and provide material balance data. They should be easily configured for evaluating
different and competing process options. Additionally, a design and simulation package
is needed to construct and evaluate a virtual reprocessing facility. An operational model
of the facility should be developed and used to verify the design with respect to
throughput requirements, identify process or plant shortcomings, determine bottlenecks,
test proposed changes to facility processes for effectiveness, and provide equipment
utilization data.
IX.B.3 Current Tools and Approach
Most of the modern pyrochemical processes proposed for treatment of fuel discharged
from ABRs are based on electrochemical cells. In some cases, such as electrorefining, the
cells are based on design data collected over the past decade, but two important
technological areas, transuranic element recovery and conversion of UREX+ product
from oxide to metal, would benefit greatly from the development of simulation tools.
These areas are technically less mature than electrorefining and represent key technology
needs for the GNEP. Traditionally, electrochemical cells are designed by an experimenter
or a group of experimenters; they are then fabricated, and their operation is evaluated over
a range of conditions. This experiment-driven approach can be expensive and can become
quite time-consuming. A simulation tool will be developed to aid in the design and
preliminary evaluation of electrochemical cells. The model will describe the
thermodynamic, transport, and electrodynamic phenomena of the electrochemical system,
solving the Navier-Stokes and Maxwell equations to determine the reaction kinetics, the
ion concentration in the electrolyte, and the current/potential distribution within the
electrochemical system. It will allow the performance of the cell to be evaluated prior to
experimental validation. Note that this approach requires an extensive knowledge of the
physicochemical properties of the system, which may require additional fundamental
studies to acquire the needed data. Parallel computing architecture will be used in place
of the more common multi-stage approach to arrive at a solution.
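As one concrete example of the reaction-kinetics piece of such a cell model, the sketch below evaluates the standard Butler-Volmer relation for electrode current density as a function of overpotential. The exchange current density, transfer coefficients, and temperature are illustrative values, not measured molten-salt data.

    # Sketch: Butler-Volmer electrode kinetics, the usual starting point for
    # the reaction-kinetics piece of an electrochemical cell model. The
    # exchange current density, transfer coefficients, and temperature below
    # are illustrative values, not measured LiCl-KCl data.
    import math

    F = 96485.0  # Faraday constant, C/mol
    R = 8.314    # gas constant, J/(mol K)

    def butler_volmer(eta, i0=100.0, alpha_a=0.5, alpha_c=0.5, T=773.0):
        """Current density (A/m^2) at overpotential eta (V), T in kelvin."""
        return i0 * (math.exp(alpha_a * F * eta / (R * T))
                     - math.exp(-alpha_c * F * eta / (R * T)))

    for eta in (0.01, 0.05, 0.10):
        print(f"eta = {eta:.2f} V -> i = {butler_volmer(eta):.1f} A/m^2")

In the full cell model described above, a relation of this kind supplies the boundary condition coupling the electrode reactions to the potential and concentration fields in the electrolyte.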
Commercial software exists for modeling process flowsheets but these packages focus
almost exclusively on the petroleum industry. Building from these commercial packages,
a module will be developed to simulate ABR spent fuel processing by pyrochemical
methods. The module will comprise thermodynamic, kinetic, and transport data for each
of the processes as well as data describing unique unit operations such as electrochemical
systems. This simulation tool will allow for design of advanced fuel treatment
flowsheets, provide guidance for completion of experimental flowsheet demonstration
activities, and ultimately lead to the development of optimized flowsheets for pilot-scale
evaluation.
In close coordination with process flowsheet simulation, a tool will be developed for a
virtual spent nuclear fuel treatment facility (e.g., AFCF) based on pyrochemical
processes. The plant design will be developed from data for each unit operation in the
flowsheet. Plant requirements will be derived from a comprehensive analysis of the
interfaces among individual unit operations, between the equipment and the facility, and
between the facility and the outside. Process, facility, mechanical and electrical
equipment design, operations, maintenance, and safeguarding considerations must be
incorporated in the requirements. An operational model will be developed and used to
verify the design with respect to throughput requirements, identify design shortcomings,
determine bottlenecks, test proposed changes to facility processes for effectiveness, and
provide equipment utilization data.
Implementation of these new tools will lead to shorter process development times,
requiring fewer experiments to validate optimum process parameters and process
efficiency, and to the design of more efficient and economical plant layouts that
optimize resource utilization.
X Sensitivity and Uncertainty Analysis
X.A Background
Sensitivity and uncertainty analyses are the main instruments for dealing with the
sometimes scarce knowledge of the input parameters used in simulation tools. For
sensitivity analysis, sensitivity coefficients are the key quantities that have to be
evaluated. They are determined and assembled, using different methodologies, in such a
way that, when multiplied by the variation of the corresponding input parameter, they
quantify the impact on the target quantities to which the sensitivity refers. Sensitivity
coefficients can be used for different objectives like uncertainty estimates, design
optimization, determination of target accuracy requirements, adjustment of input
parameters, and evaluations of the representativity of an experiment with respect to a
reference design configuration.
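In the usual first-order notation (a standard formulation supplied here for concreteness, not taken verbatim from this report), the relative sensitivity coefficient of a response Q to an input parameter p_i, and the resulting uncertainty propagation through the relative covariance matrix C of the input parameters, read:

\[
S_i = \frac{p_i}{Q}\,\frac{\partial Q}{\partial p_i},
\qquad
\left(\frac{\delta Q}{Q}\right)^{2} \simeq \sum_{i,j} S_i \, C_{ij} \, S_j .
\]

For uncorrelated inputs the double sum collapses to \(\sum_i S_i^{2}\,(\delta p_i/p_i)^{2}\).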
Design optimization [94] can take advantage of sensitivity coefficients by using them in
optimization algorithms. The main problem in this case is related to the fact that in most
cases the sensitivity coefficients are calculated in a linear approximation. Thus they need
to be recomputed repeatedly to take the nonlinear effects into account. Related to this
subject is also the problem of taking into account multi-physics effects. In general,
sensitivity coefficients have been evaluated only relative to one field (e.g. neutronics or
thermal-hydraulics).
Target accuracy assessments [95] are the inverse problem of the uncertainty evaluation.
To establish priorities and target accuracies for data uncertainty reduction, a formal
approach can be adopted by defining target accuracies on design parameters and
determining the required accuracies on the data. In fact, the unknown data uncertainty
requirements can be
obtained by solving a minimization problem where the sensitivity coefficients in
conjunction with the existing constraints provide the needed quantities to find the
solutions.
Sensitivity coefficients are also used in input parameter adjustments [96]. In this case, the
coefficients are used within a fitting methodology (e.g., least-squares fit, Lagrange
multipliers with a maximum likelihood function, etc.) in order to reduce the discrepancies
between measured and calculated results. The resulting adjusted input parameters can
be subsequently used, sometimes in conjunction with bias factors, to obtain calculational
results to which a reduced uncertainty will be associated.
A further use of sensitivity coefficients, in conjunction with a dispersion matrix [97], is
the representativity analysis of proposed or existing experiments [98]. In this case, the
calculation of correlations between the design and the experiments allows one to
determine how representative the latter are of the former and, consequently, to optimize
the experiments and to reduce their number. Formally, one can reduce the estimated
uncertainty on a design parameter by a quantity that represents the knowledge gained by
performing the experiment.
Uncertainty analysis can also be performed without the help of sensitivity coefficients. In
general, uncertainties on input parameters can be propagated either using a stochastic
approach (Monte Carlo-type methods) or by some regression techniques. In the case of
the Monte Carlo methodology [99], several runs of the same problem are performed
with different random input values, taken within the range of the specified uncertainty
and associated distribution law, and the final results are then statistically combined in
order to determine the average value and the associated standard deviation. Smarter
sampling techniques (e.g., Latin Hypercube [99]) for Monte Carlo simulations have been
developed in order to minimize the total number of direct calculations.
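A minimal sketch of this stochastic propagation, using Latin Hypercube sampling from scipy; the three-parameter model and the 5% uniform uncertainties are made-up stand-ins for a real simulation code and its input data.

    # Sketch: propagate input uncertainties through a model by Latin
    # Hypercube sampling and summarize the output statistically. The
    # "model" is a toy stand-in for a real simulation code.
    import numpy as np
    from scipy.stats import qmc

    def model(p):
        # Hypothetical response depending nonlinearly on three inputs.
        return p[:, 0] * np.exp(-p[:, 1]) + 0.5 * p[:, 2] ** 2

    sampler = qmc.LatinHypercube(d=3, seed=42)
    unit_samples = sampler.random(n=1000)

    # Nominal values +/- 5% uniform uncertainty on each input parameter.
    nominal = np.array([2.0, 1.0, 3.0])
    samples = qmc.scale(unit_samples, 0.95 * nominal, 1.05 * nominal)

    outputs = model(samples)
    print(f"mean = {outputs.mean():.4f}, std = {outputs.std(ddof=1):.4f}")

The stratification of the Latin Hypercube design is what reduces the number of direct calculations relative to plain random sampling at the same statistical accuracy.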
An essential attribute of the advanced simulation tool should be the capability to conduct
comprehensive sensitivity analyses and uncertainty evaluation. This sensitivity capability
has to be based on sound theoretical ground using deterministic and/or probabilistic
methodologies. This capability would achieve several goals, including identifying trends
and issues, designing a focused set of validation experiments, quantifying uncertainties,
and assessing the quality of data used in the design process.
Sensitivity analysis allows true system optimization. The proper use of sensitivity
analysis as an integral part of the simulation can lead to more robust systems that can
better withstand transients and off-normal conditions.
The sensitivity analysis capability has to be incorporated from the beginning in the case
of newly written codes, or added as an integral feature of the tool to existing codes that
are selected and adopted for the design of new plants.
• Chemical Processing
In addition, when needed (e.g., safety analysis), coupling among the different fields has
to be covered by the sensitivity analysis capability. This implies that nonlinear behaviors
due to feedback effects need to be taken into account. Moreover, not only static problems
but also time-dependent transient ones have to be treated.
Because of the nature of the sensitivity evaluation (performed in general in a first-order
approximation), the sensitivity coefficients can be calculated at a level of accuracy lower
than that of the high-fidelity calculation. However, some degree of
sophistication will be required in specific cases. For instance, it is very likely that for
certain configurations a three-dimensional description would be needed, even if in
conjunction with a low level of discretization.
Finally, particular care has to be devoted to assembling the uncertainty data of the input
values. Quality, consistency, and interrelationships have to be ensured. Correlations should
be provided whenever available and significant for their impact on the problem under
study. This implies a rigorous approach and a science-based methodology in their
evaluation.
There are two main methodologies developed for sensitivity and uncertainty analysis.
One is the forward (direct) calculation method, based on numerical differentiation; the
other is the adjoint method, based on perturbation theory, which employs adjoint
importance functions. In general, the forward approach is preferable when there are few
input parameters that can vary and many output parameters of interest. The contrary is
true for the adjoint methodology.
For the forward method, there are several different approaches that can be used. The first
one is stochastic (probabilistic, with the Monte Carlo method) and has been briefly
described in the background section, in the uncertainty analysis paragraph. The main
drawback of this approach, besides the large number of direct calculations, is the fact that
only uncertainties can be evaluated; sensitivity coefficients cannot be directly obtained.
The method has been widely used in fields other than nuclear, and it is very popular for
waste repository assessments, for instance with the GOLDSIM code [99].
Another forward method is automatic differentiation. In this case, codes are directly
modified in order to evaluate derivatives, through direct calculations, for all input
parameters that are deemed to vary. This translates into one direct calculation for each
input parameter of interest, and it can be very computationally intensive. Moreover, as
noted, it requires direct intervention within the code. Argonne has developed software
that directly modifies a code to add automatic differentiation if the programming
language used is FORTRAN or C [100]. Several other universities and laboratories have
software with similar capabilities, though Argonne is the only U.S. site with both
FORTRAN and C code transformation capabilities.
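As a minimal illustration of the idea behind such tools, the sketch below implements forward-mode automatic differentiation with dual numbers in Python; it mirrors the concept only, not the FORTRAN/C source-transformation software cited above.

    # Sketch of forward-mode automatic differentiation with dual numbers:
    # each value carries its derivative, so derivatives are exact (no
    # finite-difference truncation error). Illustrative only; the cited
    # Argonne tools transform FORTRAN/C source instead.
    import math

    class Dual:
        def __init__(self, val, der=0.0):
            self.val, self.der = val, der

        def __add__(self, other):
            other = other if isinstance(other, Dual) else Dual(other)
            return Dual(self.val + other.val, self.der + other.der)

        __radd__ = __add__

        def __mul__(self, other):
            other = other if isinstance(other, Dual) else Dual(other)
            # Product rule: (uv)' = u'v + uv'
            return Dual(self.val * other.val,
                        self.der * other.val + self.val * other.der)

        __rmul__ = __mul__

    def sin(x):
        # Chain rule: (sin u)' = cos(u) * u'
        return Dual(math.sin(x.val), math.cos(x.val) * x.der)

    # d/dx [x*sin(x) + 2x] at x = 1.5, seeded with dx/dx = 1.
    x = Dual(1.5, 1.0)
    y = x * sin(x) + 2 * x
    print(y.val, y.der)  # derivative equals sin(x) + x*cos(x) + 2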
Recently, P. Turinsky has proposed a new forward method based on random perturbation
of input parameters. It is claimed that the proposed Efficient Subspace Method (ESM)
[101, 102] can efficiently approximate the huge sensitivity (Jacobian) matrix resulting
from a large number of input and output parameters with a limited number of direct
calculations. The method relies on the singular value decomposition technique to
identify the important subspaces of the domain and range spaces of the Jacobian
matrix. A major advantage of this method is that no modifications to existing codes are
necessary; only pre- and post-processing of the input and output quantities are
needed.
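The flavor of such a subspace method can be conveyed with a small synthetic sketch (illustrative only, not the published ESM algorithm): random input perturbations play the role of the direct calculations, and an SVD of the collected responses exposes the dominant subspace.

    # Flavor of a subspace method: probe the Jacobian with a few random input
    # perturbations (pre-processing of inputs, post-processing of outputs
    # only), then use an SVD to extract the dominant directions.
    import numpy as np

    rng = np.random.default_rng(1)
    n_in, n_out, rank = 500, 300, 5

    # Synthetic "code": outputs depend on inputs through a low-rank Jacobian J.
    J = rng.standard_normal((n_out, rank)) @ rng.standard_normal((rank, n_in))
    code_run = lambda p: J @ p           # stand-in for one direct calculation

    k = 12                               # number of direct runs (k << n_in)
    probes = rng.standard_normal((n_in, k))
    responses = np.column_stack([code_run(probes[:, j]) for j in range(k)])

    # SVD of the responses reveals the dominant range space of the Jacobian.
    U, s, _ = np.linalg.svd(responses, full_matrices=False)
    print(np.round(s / s[0], 3))         # sharp drop after index `rank`
    # Verification only (J is unknown in a real application):
    J_approx = U[:, :rank] @ (U[:, :rank].T @ J)
    print(np.linalg.norm(J - J_approx) / np.linalg.norm(J))   # ~ 0 here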
The adjoint methodologies are based on the perturbation theory originally developed in
the field of quantum mechanics. Classical perturbation theory, which makes use of the
adjoint function (also called the importance), has been widely used in neutronics to
calculate the variation of the fundamental eigenvalue. Subsequently, generalized
perturbation theory was proposed by Usatchev [103], extended by Gandini [104], and
implemented in several neutronics codes around the world (e.g., [105, 106]). In this
case, a generalized importance is calculated for each output parameter of interest by
solving an inhomogeneous adjoint neutron transport equation whose source term depends
on the specific output parameter.
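For orientation, with the eigenvalue problem written as $A\phi = (1/k)F\phi$ and its adjoint as $A^{\dagger}\phi^{\dagger} = (1/k)F^{\dagger}\phi^{\dagger}$, the classical first-order result for the eigenvalue variation and the defining equation of the generalized importance take the standard textbook forms (quoted here for reference, not from any one of the cited codes):

\[
\delta\!\left(\frac{1}{k}\right) \simeq \frac{\left\langle \phi^{\dagger},\left(\delta A - \frac{1}{k}\,\delta F\right)\phi\right\rangle}{\left\langle \phi^{\dagger},\, F\phi\right\rangle},
\qquad
\left(A^{\dagger} - \frac{1}{k}\,F^{\dagger}\right)\Gamma^{\dagger} = \frac{1}{R}\,\frac{\partial R}{\partial \phi},
\]

where $R[\phi]$ is the output (response) of interest and the right-hand side of the second equation is the source term that depends on the specific output parameter.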
This type of approach has been extended to other fields, including nuclide depletion
calculations, where the adjoint solution of the Bateman equation is employed [107].
Depletion Perturbation Theory (DPT) [108] calculates the importance functions for the
coupled neutron and nuclide fields. Oblow [109] and others have extended the adjoint
methodology to the thermal-hydraulics field. Cacuci [110], Parks [111], and Gandini [112]
have developed adjoint methodologies for time-dependent transient problems for
application to safety analysis or reactor operation optimization. Automatic differentiation
tools employing the so-called reverse mode are able to compute a discrete adjoint; in
practice, the reverse mode requires more user intervention than forward sensitivity
computations.
The main drawback of the adjoint methodology is related, as pointed out before, to the
number of adjoint functions that have to be calculated if there is a large number of
objective parameters. In many cases, the memory requirements for the adjoint method are
significant, as many intermediate states must be recorded. Also inconvenient is the fact
that the adjoint solution has to be coded directly inside the code.
Among the existing codes that are widely used, mostly in neutronics, we can mention
VARI3D [113] and its DPT version [114] at ANL, the sensitivity capability of the
FORMOSA [94] system (mainly for thermal reactor applications) at North Carolina
State University, the TSUNAMI [115] (limited to Keff) and FORSS [116] systems at
ORNL, and the sensitivity and uncertainty modules that are part of the French fast
reactor code system ERANOS [45].
In the field of thermal-hydraulics, the only code that has an adjoint capability with
uncertainty analysis is the balance-of-plant code CATHARE [80, 117], where regression
is also used for uncertainty evaluation [118].
In cases where the uncertainty assessment involves average quantities, advanced
sampling techniques will be developed. Randomized Quasi-Monte Carlo (RQMC) [119]
techniques will be implemented for assessing the uncertainty in cases where there is a
large number of uncertain parameters (such as the cross sections) but the effective
dimension is low, as was indeed demonstrated in work by Turinsky [120]. The advantage
of RQMC methods is that they need only forward calculations, they are non-intrusive
with respect to the various software modules, and they exhibit a rate of convergence
that is far superior to that of the classical Monte Carlo method.
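A minimal non-intrusive sketch of RQMC sampling, using the scrambled Sobol generator available in SciPy (assumed here purely for illustration) and a stand-in model with many inputs but low effective dimension:

    # Illustrative randomized quasi-Monte Carlo (RQMC) sampling driving
    # forward runs of a black-box model (requires SciPy >= 1.7 for
    # scipy.stats.qmc).
    import numpy as np
    from scipy.stats import qmc

    def model(p):
        """Stand-in forward calculation; only a few inputs really matter."""
        return np.sum(p[:3]) + 0.01 * np.sum(p[3:] ** 2)

    dim = 50                                  # many uncertain parameters
    sampler = qmc.Sobol(d=dim, scramble=True, seed=0)
    points = sampler.random_base2(m=10)       # 2**10 points in [0, 1)^dim
    points = qmc.scale(points, -0.1 * np.ones(dim), 0.1 * np.ones(dim))

    values = np.array([model(p) for p in points])
    # Independent scramblings (replications) would supply an error estimate.
    print(f"estimated mean response: {values.mean():.6f}")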
The coupling of sensitivity models from the multiple physics models offers a broad
spectrum of challenges, not the least of which is a substantial increase in the number of
intermediate sensitivity objectives and variables. This would result in huge sensitivity
matrices that are generally fully dense and for which even the storage could be difficult
on the most advanced architectures. Nonetheless, sensitivity matrices that involve
pointwise quantities (such as pointwise temperatures versus pointwise heat generation
rates) tend to exhibit an exponential decay with respect to the distance between the
points, making them effectively sparse. The Bayesian approach currently in use for the
detection of sparsity patterns could be extended to the detection of these effective
sparsity patterns. In turn, this will result in sensitivity computations that can be carried
out and stored on current architectures. The resulting information can be used as the
primary input to design of experiments and numerical optimization. For uncertainty
assessment, the resulting approximate sensitivities can be used in importance sampling
approaches to correct for the error made in dropping the small entries of the sensitivity
matrices.
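A small sketch shows what exploiting this effective sparsity buys, with an assumed exponential-decay model standing in for a real sensitivity matrix:

    # Sketch of exploiting "effective sparsity": pointwise sensitivities that
    # decay exponentially with the distance between points are thresholded
    # and stored in a sparse format (illustrative decay model, not real data).
    import numpy as np
    from scipy import sparse

    n = 2000
    x = np.linspace(0.0, 1.0, n)
    L = 0.002                      # assumed decay length of the coupling
    S = np.exp(-np.abs(x[:, None] - x[None, :]) / L)

    S_sp = sparse.csr_matrix(np.where(S > 1e-3, S, 0.0))
    dense_MB = S.nbytes / 1e6
    sparse_MB = (S_sp.data.nbytes + S_sp.indices.nbytes
                 + S_sp.indptr.nbytes) / 1e6
    print(f"dense {dense_MB:.0f} MB -> thresholded sparse {sparse_MB:.1f} MB")
    # Entries dropped here are what importance sampling would compensate for.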
The proposed effort will benefit from the latest advances in computational technologies,
including high levels of abstraction that will make the addition of forward and adjoint
sensitivity calculations much easier to implement compared to previous efforts.
Nonetheless, the adaptation procedures in sophisticated modern simulations, such as
adaptive mesh refinement and finite-tolerance iterative linear algebra, make sensitivity
calculations far less accurate. To address this, one can use a modified PDE technique
[121] that gives the same sensitivity results in the limit but does not differentiate
through the adaptation procedures, resulting in much more robust sensitivity estimates.
XI High Performance Computing Enabling Technologies
XI.A Background
The majority of the simulation projects proposed in this document involve huge ranges of
temporal and spatial scales whose full resolution typically leads to systems of equations
with billions of unknowns and beyond. In many cases (e.g., thermal hydraulics and
neutronics), even this level of accuracy is not enough to resolve the full range of
scales of dynamical importance. One typically makes compromises in grid resolution,
physics modeled, number of particles, etc., by first considering the reality of available
computing resources (in terms of disk space, RAM, and billions of floating point
operations per second (GFlop/s)). These values both set limits on the largest
computation that can be performed (in terms of scales resolved, number of particles,
length of integration, etc.) and determine what physics must be further simplified
and modeled in order to make a computation feasible.
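A back-of-envelope estimate (illustrative numbers, not any specific code's requirements) makes the compromise concrete:

    # Rough memory estimate for a fully resolved coupled-field problem.
    unknowns = 1.0e9          # order of the resolved system described above
    bytes_per_value = 8       # double precision
    vectors = 20              # solution, residuals, Krylov workspace, history
    ram_GB = unknowns * bytes_per_value * vectors / 1.0e9
    print(f"~{ram_GB:.0f} GB of RAM")   # ~160 GB, versus ~8 GB per processor,
                                        # hence grid/physics compromises or
                                        # distributed-memory parallelism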
Over the past thirty years, CPU peak performance and memory have roughly followed
the so-called "Moore's law," which predicts a doubling of capability every eighteen
months. Current state-of-the-art peak single-processor speeds and RAM are in the range
of 4 GFlop/s and 8 GB, respectively. Fifteen to twenty years ago, when most of the
standard simulation tools in nuclear engineering were developed, typical values were
hundreds of times less than this. While one might expect that legacy codes could
automatically adapt to single-processor improvements, this is often not the case: often
the physics has already been greatly homogenized or physical effects excluded,
promising techniques have not been explored, and data limits have been at least implicitly
hardwired, precluding the tools' efficient use on more advanced architectures.
Single-processor performance is only a small piece of the story, though. Far more
significant is the trajectory of leading-edge, exotic computing platforms that enable
orders-of-magnitude increases in computing speed and memory over conventional
desktop tools. Traditionally these machines have been built around a huge range of
architectural and design concepts; the history is too involved to discuss here, but it is
worth mentioning two broad categories: the Cray vector architectures dominant in the
eighties and early nineties, and the currently dominant massively parallel architectures
built typically with cache-based non-vector chips. Massively parallel itself includes
dozens of distinctions, from shared to distributed memory, constellation architectures,
clusters, vector-parallel hybrids, etc. From the perspective of the (naive) application
programmer, though, we can lump together all distributed memory machines, for which
a single programming model has emerged as dominant and become the current de
facto standard for accessing these architectures: the Message Passing Interface (MPI).
The emergence of MPI/Fortran and MPI/C as the standard programming model has made
general HPC much easier and more accessible to application programmers.
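As a minimal illustration of this programming model, a halo exchange written against the mpi4py Python bindings (the file name and slab sizes are arbitrary):

    # Minimal distributed-memory sketch with MPI via mpi4py: each rank owns
    # a slab of a 1-D field and exchanges one halo value with its neighbor.
    # Launch with, e.g., "mpiexec -n 4 python halo.py".
    import numpy as np
    from mpi4py import MPI

    comm = MPI.COMM_WORLD
    rank, size = comm.Get_rank(), comm.Get_size()

    local = np.full(10, float(rank))   # this rank's slab of the field
    halo = np.empty(1)                 # ghost cell filled by the neighbor

    # Periodic halo exchange with the right-hand neighbor (for brevity).
    dest, src = (rank + 1) % size, (rank - 1) % size
    comm.Sendrecv(sendbuf=local[-1:], dest=dest, recvbuf=halo, source=src)
    print(f"rank {rank}: halo value {halo[0]:.0f} received from rank {src}")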
While MPI provides a standard for carrying out interprocessor communication, a major
advance of the past decade has been the emergence of a sophisticated layer of HPC
"middleware" that abstracts and shields from the application programmer much of the
complexity of designing efficient parallel algorithms, i/o strategies, mesh generation, etc.
Popular examples of such tools/standards are PETSc, HYPRE, VisIt, CUBIT, Aztec, HDF5,
etc. While the ultimate motivation and most of the headlines are reserved for scientific
and engineering results, some of the most complex problems and the greatest overall
gains in efficiency come from sophisticated implementations of these underlying
enabling technologies. Certainly, designing a class of codes that efficiently leverages
state-of-the-art computing resources (viz. 100,000+ processors) for first-principles
physics simulation requires a substantial investment in supporting HPC tools.
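XI.B Functional Requirements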
For development and deployment efforts in the multiscale fuel and materials properties
modeling, thermal hydraulics, and neutronics components, as outlined in the respective
sections, access to tens of millions of cpu-hours on a BG/P or Cray XT3 type leadership-
class system will be absolutely required to meet the scientific targets. Petabyte archival
and temporary storage systems, high-throughput i/o, advanced parallel visualization, and
fast data transfer will all be critical requirements of the computing system to achieve
scientific discovery. Detailed needs are outlined in the respective sections.
Additionally, the development projects here absolutely must have access to and continue
to push the forefronts of research in the following HPC tools/technologies:
• Scalable highly parallel solvers for sparse linear and non-linear systems
• Scalable performance analysis tools for both communication and single node
components
• Scalable parallel file systems and higher-level structured i/o libraries
• Efficient architecture-aware compilers (e.g. to achieve double-hummer
performance on BG/P)
• Efficient memory and scalable load balancing and data redistribution tools
• A wide range of parallel preconditioner and solver strategies (CG, SuperLU,
multigrid, etc.; a serial CG sketch follows below)
• Parallel grid generation for complex geometries
• Tools for language interoperability and component definition (e.g. Babel)
• Scalable debugging tools
• Assimilation/fusion of experimental data with simulation
• Standardized interface definitions
• Parallel multiphysics coupling
• Process workflow technologies/legacy code integration
The above list is not exhaustive, but it identifies many of the key areas that need to
continue to mature in order to meet the eventual goals of the current project. In most
areas substantial progress has been made to date (see next section), but the tools will
need to continue to evolve to meet the demands of new (petascale) platforms and of
increasing application requirements. More details are given below.
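As a point of reference for the solver item above, the kernel that such libraries parallelize and precondition is compact; a serial, unpreconditioned conjugate gradient sketch for a symmetric positive definite system A x = b:

    # Serial conjugate gradients (unpreconditioned), for illustration only;
    # production libraries add preconditioning, parallelism, and robustness.
    import numpy as np

    def cg(A, b, tol=1e-10, maxiter=1000):
        x = np.zeros_like(b)
        r = b - A @ x                  # initial residual
        p = r.copy()
        rs = r @ r
        for _ in range(maxiter):
            Ap = A @ p
            alpha = rs / (p @ Ap)      # step length along search direction
            x += alpha * p
            r -= alpha * Ap
            rs_new = r @ r
            if np.sqrt(rs_new) < tol:
                break
            p = r + (rs_new / rs) * p  # new A-conjugate search direction
            rs = rs_new
        return x

    A = np.array([[4.0, 1.0], [1.0, 3.0]])
    b = np.array([1.0, 2.0])
    print(cg(A, b))                    # [0.0909..., 0.6363...]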
XI.C Current Tools and Approach
We consider current tools/approaches both from the side of the applications and from the
side of the enabling technologies. From the application side, it is clear that current
approaches in the field of nuclear-related simulation have lagged behind the state of the
art in related disciplines (e.g., weapons simulation, aircraft design, etc.). Many examples
of such limitations have been provided in the individual sections above. From the
perspective of HPC tools and technologies, the current limitations can generally be
identified as:
• lack of use of parallel computing (and thus much less physics or less fidelity in
modeled physics)
• lack of use of modern software design principles (and thus code brittleness)
• lack of use of modern software development supporting tools/practices, such as
testing suites, repository management, bug tracking, coding standards, auto-
documentation, etc.
• failure to leverage existing efficient open source solver libraries
• weak and ad hoc coupling techniques, leading to inefficient workflows
From the side of HPC enabling technologies and what is actually available, the state of
the art has matured considerably over the past ten years (in large part as a result of the
ASC program), both in the commercial and especially in the DOE-funded space. Highly
flexible parallel linear and nonlinear solver libraries (e.g., PETSc, hypre, ESSL), parallel
mesh management frameworks (e.g., SAMRAI, Chombo, PARAMESH), efficient mesh
generation toolkits (e.g., CUBIT), parallel i/o (MPI-IO, HDF5, PnetCDF), performance
evaluation tools (e.g., PAPI, KOJAK, TAU), advanced visualization (e.g., VisIt), and fully
integrated application codes (e.g., FLASH, NWChem) have all demonstrated good
scalability (at least for certain problems) and are designed and distributed in a robust and
well-documented way. The state of maturity of these and other similar tools not
mentioned above has enabled a tremendous amount of research and is a major advance in
the use of advanced simulation as a tool for discovery. Many of the projects listed above
are continuing under SciDAC and similar DOE programs.
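XI.D Proposed Future Approach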
We articulate the proposed future approach both from the perspective of application
needs and from that of the enabling technologies themselves. The first-principles physics
approach advocated here almost without exception implies the need for leadership-class
computing resources; that is, as identified in the individual sections, the main physics
modules will need to run efficiently on single-processor machines (for quick parameter
studies) and on small to moderate clusters as well as on leadership-class platforms. To do
this, higher-level libraries, such as those mentioned above, should be leveraged whenever
possible. This has the effect of increasing the productivity of the application developers
and users by offloading much of the burden of scalability, portability, etc., as well as the
solver technology, to the enabling technology teams. This strategy is particularly
important for the cutting-edge machines that target more exotic architectures and
require a relatively high degree of sophistication to both port and optimize existing codes.
From the HPC tools perspective, significant ongoing work is required to both enhance the
existing tools and demonstrate their scalability on the newest class of ultra-parallel
machines (as well as maintain and add superior functionality). Furthermore, existing
areas of CS engineering/research in the open source arena that impact this area directly
and need to mature more rapidly are:
• General parallel coupling tools (what exists is very specific or not mature)
• Parallel mesh generation tools for finite element meshes
• Advanced parallel visualization for complicated geometries
• Data reduction techniques
• Fast data transfer
• Tools to close the peak/realized performance gap (e.g. better compiler technology)
XII Conclusions
This report has presented requirements for advanced simulation of nuclear reactor and
chemical processing plants that are of interest to the GNEP initiative. Justification for
advanced simulation and some examples of grand challenges that will benefit from it
have been provided.
In order to deal effectively with the complexity of the problem, an integrated software
tool that has its main components, whenever possible, based on first-principles
methodology is proposed. The main benefits that are associated with a better integrated
simulation have been identified as: a reduction of design margins, a decrease of the
number of experiments in support of the design process, and a shortening of the
developmental design cycle. These benefits translate into economic savings, but enhanced
simulation will also bring, as added value, a better understanding of the physical
phenomena and the related underlying fundamental processes, which, in turn, will make
it possible to pinpoint potential, and often unexpected, innovations.
For each component of the proposed integrated software tool, background information,
functional requirements, current tools and approach, and proposed future approaches
have been provided. Whenever possible, current uncertainties have been quoted and
existing limitations have been presented. Desired target accuracies, with the associated
benefits to the different aspects of reactor and chemical processing plant design, were
also given. In many cases the possible gains associated with a better simulation have been
identified, quantified, and translated into economic benefits. For example, a reduction of
2% in power distribution uncertainty would represent the equivalent of two 1000 MWe
nuclear plants when applied to a fleet of 100 reactor plants (2% of 100 x 1000 MWe =
2000 MWe).
While in the past the preferred approach has been to use integral (global) experiments for
a general validation of both simulation tools and evaluated parameters, a clever,
innovative validation methodology now has to be developed. A set of analytical
(differential) experiments has to be devised in order to provide a better physical
validation of the fundamental processes that govern the macroscopic behavior. Advanced
simulation can surely provide the tools and insights for this purpose, but this novel
validation constitutes, per se, a new grand challenge that will require ingenuity as well as
imagination for its development.
References
model, T=1000-3000K; λ is within 5% for T=1000-2000K, but 7 times lower than
experiment at 3000K.
15. Kurosaki et al. (2000, 2001) – UO2, PuO2, (U,Pu)O2; calculated variables: Cp, λ, a0,
β (compressibility); results are poor for PuO2; Bredig transition at 80% of Tm (UO2);
Vegard's law; a λ peak at high T; NPT, NVT; quantum corrections; no edge, GB, or
surface effects; T=300-2500K, P=0.1-1.5GPa; partially ionic model (with covalent
contribution); parameters were obtained by adjusting the lattice parameter and
pressure at various T. [For details see the following papers: J. Alloys Compd. 307
(2000) 1; J. Alloys Compd. 307 (2000) 10; J. Alloys Compd. 319 (2001) 253; J. Nucl.
Mater. 294 (2001) 160.]
16. K. Kawamura, Molecular Dynamics Simulations, Springer Series in Solid-State
Sciences, vol. 103, 1992, p. 88.
17. K. Kurosaki et al., J. Nucl. Sci. Technol. 41 (2004) 827-831; AmO2, NpO2 solid
(T=300-2500K, P=0.1-1.5GPa); a, Cp, λ, a0, β; parameters of the model were
adjusted to the lattice parameter (T) – solid only; λ for AmO2 is three orders of
magnitude higher than experiment; no experimental data for NpO2.
18. K. Kurosaki et al., J. Alloys Compd. 387 (2005) 9-14 – ThN, UN, NpN, PuN;
partially ionic models with the parameters adjusted to the lattice-parameter
temperature dependence; solid (T=300-2800K, P=0.1-1.5GPa); a, Cp, λ, a0, β
calculated.
19. M. Katahira, Y. Nagasaka, Proc. 15th Symp. Thermophys. Prop., June 22-27, 2003,
Boulder, Colorado, U.S.A.; η (viscosity), λ of liquid UO2 (partially ionic model) is 8
times higher than experiment, T<3400K; Zr – N-body model.
20. D. Moldovan, D. Wolf, S.R. Phillpot, and A.J. Haslam, “Mesoscopic simulation of
two-dimensional grain growth with anisotropic grain-boundary properties”, Phil.
Mag. A, 2002, Vol. 82, No. 7, 1271-1297.
21. F. Cleri, “A stochastic grain-growth model based on a variational principle for
dissipative systems”, Physica A, vol. 282 (2000) p.339.
22. A.C.F. Cocks and S.P.A. Gill, “A variational approach to two dimensional grain
growth – I. Theory”, Acta Mater. Vol. 44 (1996) pp. 4765-4775.
23. Grain-size for copper:
http://www.copper.org/resources/properties/microstructure/grain_size.html
24. S. J. Plimpton and B. A. Hendrickson, “Parallel Molecular Dynamics Algorithms for
Simulation of Molecular Systems”, In “Parallel Computing in Computational
Chemistry”, edited by T. G. Mattson, ACS Symposium Series 592, 114-132 (1995).
25. Top 500 List of supercomputers, http://www.top500.org/.
26. P. Blaha, K. Schwarz, G. Madsen, D. Kvasnicka, J. Luitz, WIEN2k, Vienna
University of Technology, Inst. of Physical and Theoretical Chemistry, Getreidemarkt
9/156, A-1060 Vienna, Austria.
27. A. F. Mayadas, "Electrical-Resistivity Model for Polycrystalline Films: the Case of
Arbitrary Reflection at External Surfaces," IBM Watson Research Center, Yorktown
Heights, New York 10598.
28. M. Shatzkes, IBM Components Division Laboratory, East Fishkill, New York
12533; received 31 July 1969.
29. C. Durkan and M. E. Welland, "Size effects in the electrical resistivity of
polycrystalline nanowires".
30. G. Aliberti et al., "Nuclear Data Sensitivity, Uncertainty and Target Accuracy
Assessment for Future Nuclear Systems," Annals of Nuclear Energy, 33, 700-733,
2006.
31. R. E. MacFarlane and D. W. Muir, The NJOY Nuclear Data Processing System,
Version 91, LA-12740-M, (UC-413), Los Alamos Natl. Lab., October 1994
32. B. J. Toppel et al., "ETOE-2/MC2-2/SDX Section Processing," RSIC Seminar on
Multigroup Cross Sections, Oak Ridge (March 14, 1978).
33. G. Rimpault, "Algorithmic Features of the ECCO Cell Code for Treating
Heterogeneous Fast Reactor Subassemblies," Intl. Conf. on Mathematics and
Computations, Reactor Physics, and Environmental Analyses, Portland, OR, April
30-May 4, 1995.
34. E. E. Lewis and W. F. Miller, Jr., "Computational Methods of Neutron Transport,"
Wiley, 1984.
35. R. E. Alcouffe, R. S. Baker, and S. A. Turner, “PARTISN” Technical Report LA-UR-
03-1987, Los Alamos National Laboratory, Los Alamos, NM, April 24, 2003.
36. W. A. Rhoades and D. B. Simpson, “The TORT three dimensional discrete ordinates
neutron/photon transport code”. Technical Report ORNL/TM-13221, Oak Ridge
National Laboratory, Oak Ridge, TN, 1997. Also available from ORNL/RSIC as
CCC-650/DOORS3.2.
37. T. A. Wareing, J. M. McGhee, and J. E. Morel, “ATTILA: A three-dimensional
unstructured tetrahedral mesh discrete ordinates transport code”. Transactions of the
American Nuclear Society, 75:146–147, 1996.
38. G. Marleau, A. Hebert, and R. Roy, “A User’s Guide for DRAGON” Ecole
Polytechnique de Montreal, 1997.
39. S. Loubiere, R. Sanchez, M. Coste, A. Hebert, and Z. Stankovski, “APOLLO 2
twelve years later” In Proceeding of the ANS Topical Meeting on Mathematics and
Computation, Reactor Physics and Environmental Analysis in Nuclear Applications,
1999. Madrid (Spain).
40. Han Gyu Joo et al., "Methods and performance of a three-dimensional whole-core
transport code DeCART," in Proceedings of PHYSOR 2004, Chicago, IL, 2004.
41. C. R. E. de Oliveira and A. J. H. Goddard, “EVENT a multidimensional finite
element-spherical harmonics radiation transport code”, In Proceedings of the OECD
International Seminar on 3D Deterministic Radiation Transport Codes, December 01–
02, 1996. Paris, France.
42. G. Palmiotti, E. E. Lewis, and C. B. Carrico, “VARIANT: Variational anisotropic
nodal transport for multidimensional cartesian and hexagonal geometry calculation”
Technical Report ANL-95/40, Argonne National Laboratory, 1995.
43. B. J. Toppel, "A User's Guide to the REBUS-3 Fuel Cycle Analysis Capability,"
ANL-83-2, Argonne National Laboratory (1983).
44. K. L. Derstine, “DIF3D: A Code to Solve One-, Two-, and Three-Dimensional Finite-
Difference Diffusion Theory Problems,” ANL-82-64, Argonne National Laboratory
(1984)
45. G. Rimpault, “The ERANOS code and data system for fast reactor neutronic
analyses”, In Proceedings of PHYSOR 2002, 2002. Seoul, Korea.
46. X-5 Monte Carlo Team, “MCNP—a general Monte Carlo n-particle transport code,
version 5, volume I: Overview and theory” Technical Report LA-UR-03-1987, Los
Alamos National Laboratory, Los Alamos, NM, April 24, 2003.
47. R. N. Blomquist. Status of the VIM Monte Carlo neutron/photon transport code. In
Proceedings of the 12th Biennial RPSD Topical Meeting, April 14–18, 2002. Santa
Fe, NM.
48. J. P. Both, H. Derriennic, B. Morillon, J. C. Nimal, "A Survey of TRIPOLI-4,"
Proceedings of the 8th International Conference on Radiation Shielding, Arlington,
Texas, USA, April 24-28, 1994, pp. 373-380.
49. SCALE: A Modular Code System for Performing Standardized Computer Analyses for
Licensing Evaluations, NUREG/CR-0200, Rev. 7 (ORNL/NUREG/CSD-2/R7), Vols.
I, II, and III, June 2004 (DRAFT). Available from the Radiation Safety Information
Computational Center at Oak Ridge National Laboratory as CCC-725.
50. W. B. Wilson, T. R. England and K. A. Van Riper “Status of CINDER’90 Codes and
Data”, Los Alamos National Laboratory, report LA-UR-99-361 (1999).
51. O. W. Hermann and R. M. Westfall, “ORIGEN-S: SCALE System Module to
Calculate Fuel Depletion, Actinide Transmutation, Fission Product Buildup and
Decay, and Associated Radiation Source Terms,” Vol. II, Section F7 of SCALE: A
Modular Code System for Performing Standardized Computer Analyses for Licensing
Evaluation, NUREG/CR-0200, Rev. 6 (ORNL/NUREG/CSD-2/R6), Vols. I, II, and
III, May 2000. Available from Radiation Safety Information Computational Center at
Oak Ridge National Laboratory as CCC-545.
52. H. R. Trellue, "Development of Monteburns: A Code That Links MCNP and
ORIGEN2 in an Automated Fashion for Burnup Calculations," Los Alamos National
Laboratory document LA-13514-T (November 1998).
53. R. L. Moore, B. G. Schnitzler, C. A. Wemple, et al., "MOCUP: MCNP-ORIGEN2
Coupled Utility Program," Idaho National Engineering Laboratory Report
INEL-95/0523, 1995.
54. Z. XU, P. HEJZLAR, M. DRISCOLL, and M. KAZIMI, “An Improved MCNP-
ORIGEN Depletion Program (MCODE) and its Verification for High-Burnup
Applications,” PHYSOR, Seoul, Korea, Oct. 2002.
55. T. Sofu et al., "Development of a Comprehensive Modeling Capability based on
Rigorous Treatment of Multi-Physics Phenomena Influencing Reactor Core Design,"
Proceedings of ICAPP'04, Pittsburgh, PA, June 2004.
56. B. E. Launder and D. B. Spalding, “The Numerical Computation of Turbulent
Flows,” Computational Methods in Applied Mech. and Engineering, 3, 269-289
(1974).
57. F. S. Lien, W. L. Chen, and M. A. Leschziner, “Low-Reynolds-Number Eddy-
Viscosity Modelling Based on Nonlinear Stress-Strain/Vorticity Relations,” Proc. 3rd
Symp. on Eng. Turbulence Modeling and Measurements, Crete, Greece (1996).
58. W. Rodi, "Experience with Two-Layer Models Combining the k-ε Model with a One-
Equation Model Near the Wall," AIAA Paper 91-0216 (1991).
59. T. H. Shih, J. Zhu and J. L. Lumley, “A Realizable Reynolds Stress Algebraic
Equation Model,” NASA Tech. Memo 105993 (1993).
60. T. J. Craft, B. E. Launder and K. Sugar, “Development and Application of a Cubic
Eddy-Viscosity Model of Turbulence,” Int. J. Heat and Fluid Flow, 17, pp. 108-115
(1996).
61. V. Yakhot and S. A. Orszag, "Renormalization Group Analysis of Turbulence—I:
Basic Theory," J. Scientific Computing, 1, pp. 3-51 (1986).
62. Y. S. Chen and S. W. Kim, "Computation of Turbulent Flows Using an Extended k-ε
Turbulence Closure Model," NASA CR-179204 (1987).
63. B. E. Launder, G. J. Reece and W. Rodi, “Progress in the Development of a Reynolds
Stress Turbulence Model,” J. of Fluid Mech., 68, pp. 537-566 (1975).
64. R. F. Kulak and C. Fiala, 1988, “NEPTUNE: A System of Finite Element Programs
for 3-D Nonlinear Analysis,” Nuclear Engineering and Design, 106, pp. 47-68.
65. A. H. Marchertas, J. M. Kennedy and P. A. Pfeiffer, "Reinforced Flexural Elements
for the TEMP-STRESS Program," Nuclear Engineering and Design, Vol. 106,
1988, pp. 87-102.
66. D. R. Olander, US Energy Research and Development Administration, Technical
Information Center, Oak Ridge, TN, TID-26711-P1 (1976).
67. M. C. Billone et al., "Advancements in the Behavioral Modeling of Fuel Elements
and Related Structures," Nucl. Eng. Des. 134 (1992) 23-36.
68. K. R. Kummerer and H. Eibel, “New Developments in Fuel-Pin Modeling,” ANS
International Conference, Washington DC (1977)173.
69. Y. R. Rashid, “Mathematical Modeling and Analysis of Fuel Rods,” Nucl. Eng. Des.
29(1974)22-32.
70. S. Murakami and M. Mizuno, “Elaborated Constitutive Equations for Structural
Analysis for Creep, Swelling, and Damage under Irradiation,” Nucl. Technol.
95(1991)219.
71. K. Lassmann, “The Structure of Fuel Element Codes,” Nucl. Eng. Des. 57(1980)17-
39.
72. R. W. Weeks, “Structural Analysis of Reactor Fuel Elements,” Nuc. Eng. Des.
46(1978)303-311.
73. M. C. Billone, Y. Y. Liu, E. E. Gruber, T. H. Hughes, and J. M. Kramer, "Status of
Fuel Element Modeling Codes for Metallic Fuels", Proc. ANS International
Conference on Reliable Fuels for Liquid Metal Reactors, September 7-11, 1986, p. 5-
77.
74. T. Sofu, J. M. Kramer, and J. E. Cahalan, "SASSYS/SAS4A-FPIN2 Liquid-Metal
Reactor Transient Analysis Code System for Mechanical Analysis of Metallic Fuel
Elements," Nucl. Technol. 113 (1996) 268.
75. T. Ogata and T. Yokoo, "Development and Validation of ALFUS: An Irradiation
Behavior Analysis Code for Metallic Fast Reactor Fuels," Nucl. Technol.
128 (1999) 113.
76. S. Oldberg and R. A. Christensen, "Dealing with Uncertainty in Fuel-Rod
Modeling," ANS International Conference, Washington DC (1977) 172.
77. V. Dostal, P. Hejzlar, M. J. Driscoll, N. E. Todreas, "A Supercritical CO2 Gas
Turbine Power Cycle for Next Generation Nuclear Reactors," ICONE 10-22192, 10th
International Conference on Nuclear Engineering, Arlington, VA, April 14-18, 2002.
78. T. Schulenberg, H. Wider, M. A. Futterer, "Electricity Production in Nuclear Power
Plants: Rankine vs. Brayton Cycles," Global 2003, New Orleans, LA, November
16-20, 2003.
79. F. E. Dunn et al., "The SASSYS-1 LMFBR Systems Analysis Code," Argonne
National Laboratory Report ANL/RAS 84-14, Revision 1, March 1987.
80. D. Bestion and G. Geffraye, "The CATHARE Code," CEA Grenoble Report
DTP/SMTH/LMDS/EM/22001-63, April 2002.
81. Code Development Team, RELAP5-3D Code Manual, Idaho National Laboratory
Report INEEL-EXT-98-00834 Revision 2.0, July 2002
82. J. A. Morman et al., "IGENPRO Knowledge-Based Digital System for Process
Transient Diagnostics and Management," International Atomic Energy Agency report
IAEA-TECDOC-1054, 213-224, November 1998.
83. J. Reifman and T. Y. C. Wei, "A Process-Independent Transient Diagnostic
System-I: Theoretical Concepts," Nuclear Science and Engineering, 131, 329-347
(1999).
84. J. P. Herzog, S. W. Wegerich, K. C. Gross, F. C. Bockhorst, "MSET Modeling of
Crystal River-3 Venturi Flow Meters," Proceedings, ICONE 6, 6th International
Conference on Nuclear Engineering, May 10-15, 1998, San Diego, CA.
85. Generation IV International Forum, Draft System Research Plan for the Gas-Cooled
Fast Reactor R&D Program, June 2005
86. USNRC Regulatory Guide 1.70, Standard Content and Format for Safety Analysis
Reports for Nuclear Power Plants, LWR Edition, November 1978.
87. J. E. Cahalan et al., "Advanced LMR Safety Analysis Capabilities in the SASSYS-1
and SAS4A Computer Codes", Proceedings of the International Topical Meeting on
Advanced Reactors Safety, Pittsburgh, PA, April 17-21, American Nuclear Society,
1994.
88. www.GNEP.gov
89. J. T. Long, Engineering for Nuclear Fuel Reprocessing, American Nuclear
Society Monograph, 1977.
90. G. F. Vandegrift and M.C. Regalbuto, “Validation of the Generic TRUEX Model
Using Data from TRUEX Demonstrations with Actual High-Level Waste,”
Proceedings of the Fifth International Conference on Radioactive Waste Management
and Environmental Remediation (ICEM’95) Vol.1, Cross-Cutting Issues and
Management of High-Level Waste and Spent Fuel, Berlin, Germany, September 3-7,
1995, 457.
91. G. F. Vandegrift, M. C. Regalbuto, S. Aase, A. Bakel, T. J. Battisti, D. Bowers,
J. P. Byrnes, M. A. Clark, J. W. Emery, J. R. Falkenberg, A. V. Gelis, C. Pereira, L.
Hafenrichter, Y. Tsai, K. J. Quigley, and M. H. Vander Pol, “Designing and
Demonstration of the UREX+ Process Using Spent Nuclear Fuel,” Proceedings of
Atalante 2004, Nimes, France, June 21-24, 2004.
92. Aspen Plus, Aspen Custom Modeler, and Aspen Dynamics are products of Aspen
Technology, Inc., Ten Canal Park, Cambridge, Massachusetts 02141.
93. G. Palmiotti, M. Salvatores, and R. N. Hill, “Sensitivity, Uncertainty Assessment, and
Target Accuracies Related to Radiotoxicity Evaluation,” Nucl. Sci. Eng. 117, 239
(1994)
94. B. R. MOORE, P. J. TURINSKY, and A. A. KARVE, “FORMOSA-B: A Boiling
Water Reactor In-Core Fuel Management Optimization Package,” Nucl. Technol.,
126, 153, 1999.
95. G. Aliberti, G. Palmiotti, M. Salvatores, C. G. Stenberg, “Transmutation Dedicated
Systems: An assessment of Nuclear Data Uncertainty Impact”, Nucl. Sci. and Eng.
146, 13-50, (2004).
96. G. Cecchini, U. Farinelli, A. Gandini, M. Salvatores, "Analysis of Integral Data for
Few Group Parameter Evaluation of Fast Reactors," A/CONF 28/P/627, Geneva (1964).
97. G. Palmiotti, M. Salvatores, “Proposal for Nuclear Data Covariance Matrix”,
JEFDOC 1063 Rev.1, January 2005
98. G. Palmiotti, and M. Salvatores, “Use of Integral Experiments in the Assessment of
Large Liquid-Metal Fast Breeder Reactor Basic Design Parameters,” Nuclear Science
Engineering 87, 333 (1984).
99. Golder Associates 2000. User's Guide - GoldSim Graphical Simulation Environment.
Version 6.02. Draft #3. Redmond, Washington.
100. "ADIFOR 2.0 User' s Guide (Revision D)," Christian Bischof, Alan Carle, Paul
Hovland, Peyvand Khademi, Andrew Mauer, March 1995. Revised: June, 1998.
101. H. S. ABDEL-KHALIK and P. J. TURINSKY, “Adaptive Core Simulation:
Efficient Sensitivity Analysis,” Proc. Advances in Nuclear Fuel Management III,
Hilton Head Island, South Carolina, October 5–8, 2003, American Nuclear Society
2003, CD-ROM.
102. H. S. ABDEL-KHALIK and P. J. TURINSKY, “Adaptive Core Simulation
Employing Discrete Inverse Theory—Part I: Theory,” Nucl. Technol., 151, 9, 2005.
103. L. N. Usatchev, J. Nucl. Energy A/B, 18, 571, 1964.
104. A. Gandini, J. Nuclear Energy 21, 755, (1967)
105. J. C. Estiot, G. Palmiotti, and M. Salvatores, "SAMPO: Un Système de Codes
pour les Analyses de Sensibilité et de Perturbation à Différents Ordres
d'Approximation," Specialists' Meeting on Nuclear Data and Benchmarks for
Reactor Shielding, Paris, October 1980.
106. G. Palmiotti and M. Salvatores, “Multidimensional Transport Sensitivity for
Shielding Analysis,” Seventh International Conference on Reactor Shielding
Bournemouth, U.K., September (1988)
107. J. M. Kallfelz, G. Bruna, G. Palmiotti, and M. Salvatores, "Burn-up Calculations
with Time-Dependent Generalized Perturbation Theory," Nuclear Science
Engineering 62, 304 (1977).
108. M. L. Williams, “Development of Depletion Perturbation Theory for Coupled
Neutron/Nuclide Fields”, Nucl. Sci. Eng. 70, 20, 1979.
109. E. Oblow, “Sensitivity Theory for Reactor Thermal-hydraulics Problems”, Nucl.
Sci. Eng. 68, 322, 1978
110. D. G. CACUCI and M. IONESCU-BUJOR, “Deterministic Local Sensitivity
Analysis of Augmented Systems—I: Theory,” Nucl. Sci. Eng., 151, 55, 2005.
111. C. V. Parks, “Adjoint Based Sensitivity Analysis for Reactor Safety
Applications”, ORNL/CSD/TM-231, August 1986
112. A. Gandini, “Perturbation Method for Fuel Evolution and Shuffling Analysis”,
Ann. Nucl. Energy, 14, 273, 1987
113. C. H. Adams, Personal Communication. VARI3D is an ANL 3D perturbation
theory code for which a user manual has not been issued (August 1997).
114. W. S. Yang and T. J. Downar, “Depletion Perturbation Theory for the
Constrained Equilibrium Cycle,” Nucl. Sci. Eng., 102, 365-380 (1989)
115. B. T. Rearden, C. M. Hopper, K. R. Elam, S. Goluoglu, and C. V. Parks,
“Applications of the TSUNAMI Sensitivity and Uncertainty Analysis Methodology,”
pp. 61–66 in Proceedings of the 7th International Conference on Nuclear Criticality
Safety (ICNC2003), October 20–24, 2003, Tokai-Mura, Japan (2003).
116. J. L. Lucius et al., A Users Manual for the FORSS Sensitivity and Uncertainty
Analysis Code System, ORNL-5316, Union Carbide Corp., Oak Ridge Natl. Lab.,
January 1981
117. A. Ounsy, F. de Crecy, B. Brun, "The Adjoint Sensitivity Method: A Contribution
to the Code Uncertainty Evaluation," Nucl. Engrg. Des. 149, 357, 1994.
118. A. de Crecy, "CIRCE: A Tool for Calculating the Uncertainties of the Constitutive
Relationships of CATHARE-2," NURETH-8, Kyoto, Japan, September 1997.
119. K.-T. Fang, F. J. Hickernell, and H. Niederreiter, editors, Monte Carlo and Quasi-
Monte Carlo Methods 2000, Springer-Verlag, Berlin, 2002.
120. H. S. ABDEL-KHALIK and P. J. TURINSKY, “Adaptive Core Simulation
Employing Discrete Inverse Theory—Part II: Numerical Experiments,” Nucl.
Technol., 151, 22, 2005
121. J. BORGGAARD, A. VERMA, “On Efficient Solutions to the Continuous
Sensitivity Equation Using Automatic Differentiation”, SIAM J. SCI. COMPUT.,
Vol. 22, No. 1, pp. 39–62, 2000
Nuclear Engineering Division
Argonne National Laboratory
9700 South Cass Avenue, Bldg. 208
Argonne, IL 60439-4842
www.anl.gov