
The Use of the Monte Carlo Method for Solving Large-Scale Problems in Neutronics

Author(s): J. B. Parker
Source: Journal of the Royal Statistical Society. Series A (General), Vol. 135, No. 1 (1972),
pp. 16-43
Published by: Wiley for the Royal Statistical Society
Stable URL: http://www.jstor.org/stable/2345038
Accessed: 25-06-2016 11:43 UTC


This content downloaded from 128.122.230.148 on Sat, 25 Jun 2016 11:43:11 UTC
All use subject to http://about.jstor.org/terms
J. R. Statist. Soc. A (1972), 135, Part 1, p. 16

The Use of the Monte Carlo Method for Solving Large-scale Problems in Neutronics

By J. B. PARKER
U.K. Atomic Energy Authority

[Read before the ROYAL STATISTICAL SOCIETY at a meeting organized by the RESEARCH SECTION on Wednesday, October 20th, 1971, Mr M. J. R. HEALY in the Chair]

SUMMARY
This paper reviews experiences based on the use of the Monte Carlo method
for solving large-scale problems in neutronics.

Keywords: MONTE CARLO; NEUTRONICS

1. INTRODUCTION
NEUTRONICS problems involve the transport of particles (neutrons) from one collision
with an atomic nucleus to another. The collision processes take place in accordance
with quite definite, though probabilistic, physical laws, whose structure is obtained
partly by theory but mainly by experimentation. The transport of neutrons is
expressible in terms of the Boltzmann transport equation, an integro-differential
equation expressing neutron conservation (Davison, 1957, p. 15). Because of the
complexity of the underlying physical laws a mathematical solution of the Boltzmann
equation is generally difficult. However, solutions of certain neutronics problems by
deterministic numerical methods, using electronic computers, have been developed
and these are generally quick, reliable and sufficiently accurate, provided that the
geometry of the assembly under consideration is simple. A particularly attractive
deterministic approach is described by Carlson and Bell (1958).
Neutrons travel with different speeds but have the same mass, so that the terms
"velocity" and (kinetic) "energy" can be, and are, used virtually synonymously. In a
practical problem, the range of energies considered may well extend over four, five
or more decades. Most (as far as I know all) deterministic methods are based on the
energy group concept; that is, neutrons are regarded as belonging to one of a finite
number of generally abutting groups inside which definite, suitably averaged, collision
laws apply. Because the collision laws are finely grained, and may vary rapidly not
only in detail but also in structure with energy, it is necessary either to use a large
number of energy groups, in which case computation time increases, or to establish
a very reliable procedure for averaging the finely grouped data to obtain (macroscopic)
group constants. There are classes of problems arising in the neutronics field where
this approach is rather dubious. It is in these general areas (complex geometries, or
a requirement for the utilization of nuclear data fine grained in energy) that an
alternative to deterministic methods is open to consideration.
The basic problem is to study the properties of neutrons which can be regarded
as moving from collision point to collision point inside a connected system of generally
known geometrical shape consisting of a series of different physical materials. For a

neutron travelling at a given speed through a given medium, the probability of collision per unit path length is constant. Each material may be composed of a
number of different elements, or isotopes of a single element. In respect of each
isotope there exists a whole complex of "nuclear data" which specify the probabilistic
laws referred to above. These data, which are mainly experimental, are subject, to a
greater or lesser extent, to uncertainty.
The concept of simulating the neutron paths, which are straight lines in Euclidean
space, by means of a Monte Carlo process was first put forward by Von Neumann
and Ulam, a short historical note being given by Marshall (1966). Conceptually the
method is a simple one. Given the state of a neutron, that is, an expression of the
neutron's spatial co-ordinates, its velocity and, where relevant, its time co-ordinate
measured from some datum time, the distance it travels before sustaining a collision
is a random variable from a negative exponential distribution whose parameter, the
mean free path, is defined by the composition of the material in the neighbourhood
of the neutron. Typically the mean free path is of order a few centimetres, while
the system under study may be a dozen or so mean free paths across. By sampling
from the (negative exponential) distribution, the distance travelled, and hence the
time, to the next collision is determined. Provided that the whole of the neutron's
flight lies in a region of space having the same nuclear composition (if not, a
"boundary crossing" has occurred) the next stage is to decide on the outcome of
the collision event. This consists of the number of neutrons produced at the collision
(which, in the case of fission, is an integer random variable), and the direction(s)
and energy(s) of the new neutron(s), if any, produced. Knowing the appropriate
frequency distributions associated with these events, random sampling may be carried
out and a particular outcome distinguished. The whole procedure may be repeated
(storing aside excess neutrons, as required, for subsequent processing) from collision
to collision until the "neutron" is lost from the system, either through physical escape,
physical absorption or because of some other reason deliberately invoked in the
simulation. A large number of separate neutron histories (chains or trees of tracks)
may be rapidly produced using an electronic computer.
The simulation described above is illustrated in Fig. 1.
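The collision-to-collision process just described can be sketched in a few lines of code. The following is a minimal illustration only, assuming a one-speed neutron in a homogeneous sphere with a fixed absorption probability per collision; the function and parameter names are hypothetical and the geometry is far simpler than any practical system.

```python
import math
import random

def track_neutron(mean_free_path, system_radius, absorption_prob, rng=random):
    """Follow one neutron from collision to collision until it escapes
    or is absorbed; return the number of collisions it sustained.
    One-speed, homogeneous-sphere model for illustration only."""
    x = y = z = 0.0          # start at the centre of the sphere
    collisions = 0
    while True:
        # Distance to the next collision: negative exponential whose
        # parameter is the mean free path.
        s = -mean_free_path * math.log(rng.random())
        # Isotropic flight direction.
        mu = 2.0 * rng.random() - 1.0        # cosine of polar angle
        phi = 2.0 * math.pi * rng.random()
        sin_theta = math.sqrt(1.0 - mu * mu)
        x += s * sin_theta * math.cos(phi)
        y += s * sin_theta * math.sin(phi)
        z += s * mu
        if x * x + y * y + z * z > system_radius ** 2:
            return collisions                # physical escape
        collisions += 1
        if rng.random() < absorption_prob:
            return collisions                # physical absorption
```

A real tracking program would, in addition, look up energy-dependent nuclear data at each collision, handle boundary crossings between materials and store aside any excess neutrons produced by fission.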
It seems right to us to distinguish two separate stages in solving neutronics
problems by Monte Carlo, or, for that matter, by any other method. The first is
the problem of assessing, organizing and controlling the nuclear data, and the second
is the problem of solving the particular example on hand. There are two reasons
for viewing Monte Carlo calculations in this way. First, by far the larger proportion
of published work in the field of neutronics applications of the Monte Carlo method
is devoted to the second stage, and a treatment of the first will, it is hoped, go some
way to redress this imbalance. Second, the way in which the data are controlled is
independent, to a great extent, of the actual Monte Carlo program itself. We discuss
this first stage in Section 2. In Section 3 some of the different classes of neutronics
problems, to which the Monte Carlo method may with profit be addressed, are
reviewed. In the following section some methods and techniques for studying these
different classes of problem will be described. Section 5 illustrates our outlook on
this matter by presenting two Monte Carlo case studies, and it will be apparent, in
due course, why the nomenclature "case study" is preferred to "calculation". A
discussion in Section 6 summarizes the problems we have encountered, and the
attitudes we have learned to adopt, in handling large-scale Monte Carlo neutronics
calculations.

[Fig. 1 near here. Recoverable box labels: routine associated with initial input of particles; input details of system geometry; input, from tape, nuclear data; fetch starting co-ordinates of particle track; find particle's energy group and velocity; find particle's path length; does particle cross a boundary before collision?; boundary-crossing calculation; find details of secondary particle(s) produced at collision; convert new track co-ordinates to program reference axes; particle analysis routines; is particle extinct?]

FIG. 1. Flow diagram showing interrelationship between main program and nuclear data.

2. THE BASIC DATA


While the ultimate source of the basic nuclear data used in all neutronics
calculations is mainly experimental, good compendia of these data, in a format
suitable for direct utilization by an electronic computer, are available in the form of
nuclear data files (Miller and Parker, 1965). The construction of a nuclear data file
involves what is called an evaluation, that is, an expert synthesis of measurements
that have been carried out by a number of different workers, using different
techniques, operating in different laboratories, and working in different countries.
The way in which these evaluations are carried out is of concern to the Monte
Carlo neutronics worker. For example, a very rough knowledge about the likely uncertainties in the basic data may be helpful in a subsequent decision about the
level of accuracy he should seek in his Monte Carlo simulation. He will want to
forecast the likely machine running time and will want to know how far, if at all, he
should employ special simulation techniques. We therefore review a typical evaluation
problem concerning a neutron cross-section σ (which is one of the constituents which
determine the probability of the outcome of a particular event at a collision) for a
particular reaction for a particular nuclide. Each experimenter usually measures
cross-sections over a more, or less, wide range of neutron incident energy E, and
quotes his result, together with its experimental error. This experimental error in
most cases measures the reproducibility of the experiments. Errors due to the
particular technique being used, the standards assumed in the experiment and in some
cases the calibration of the apparatus, are generally not included, through no fault
of the experimenter, in his quoted error estimate. These are errors that are likely to
affect all his determinations in a similar manner. It is not unusual to find that different
experimenters' determinations of a cross-section are apparently inconsistent in the
sense that different estimates of the (σ, E) plot are widely spaced by an amount that
is large when viewed against the quoted experimental errors of either. This state of
affairs of course merely confirms the diagnosis that the possibly large component of
error due to some, or all, of the causes we have instanced above had not been, and
indeed could not have been, included in the quoted error. Whether or not, and if so
how, statistical methods should be introduced into the evaluation procedure is an
arguable proposition. The complexity of the form of the (σ, E) plot, or plots, argues
convincingly, in our judgment, in favour of graduation by cubic splines (Ahlberg et al.,
1967; Powell, 1970) rather than by high order polynomial; the question of fitting
these splines by least squares is itself open, some norm other than the L2 norm possibly
being more appropriate. Fig. 2 (Parker, 1970) illustrates the complexity of the
problem, the evaluation in this case having been mechanically performed using a
program which was a forerunner of that described by Powell (1970). That the output
of any evaluation should include a statement about the errors (a composite of the
several experimental errors together with a contribution due to "apparent
inconsistencies"), including a specification of the likely degree of correlation
between different cross-sections that are adjacent in energy is perhaps a counsel of
perfection; what it is necessary to stress, however, is that the climate in which the
Monte Carlo worker dwells must be conditioned by at least a feel for the errors in
these pre-processing evaluations.
Given evaluated data for each nuclide, the entire set of data appropriate to the
problem under study has to be interpreted into a format suitable for rapid processing
by a computer. Since an appeal to nuclear data occurs every time a collision takes
place it is desirable to include the entire set of data in the fast (core) store of the
machine, and with machines of order 100,000 words storage capacity this is quite
practicable. First, there is the need to decide how to store the data; second, there is
the problem of where to store it most economically. The first is a problem of simula-
tion, the second a problem of organization.
It would be possible to present a curve, for instance the evaluated (σ, E) plot,
direct to a computer, using computer graphics, but in practice it is desirable to work
straight from the nuclear data files. The format of these files, whose use is intended
for all types of neutronics calculation, is not immediately suitable for Monte Carlo
purposes. The basic concept in the layout of the UKAEA Nuclear Data Library
(Miller and Parker, 1965) is that all cross-sections are represented by point pairs (σ, E) in such a manner that suitable interpolation between successive pairs provides a representation of the data which is at least as accurate as is warranted by the experimental errors of measurement. Distributions (for example, the frequency distribution
of secondary energy following a particular type of neutron scatter) are provided in

[Fig. 2 near here. Recoverable axis label: energy (MeV).]

FIG. 2. Cross-section evaluation. Each point is an experimental measurement and is included in a line of length of two standard deviations. X denotes selected spline knots.

a similar manner, the first element being the ordinate of the frequency distribution
and the second the appropriate value of the parameter of interest, for example,
secondary energy. Discrete probability distributions are grafted into this framework
as and where necessary; that for the number of neutrons emerging from a fission is
an interesting but not representative example. The true probability that i neutrons
are produced is π(i|E), where E is the initial energy and Σ_i π(i|E) = 1. However, the Nuclear Data Library only lists the mean ν̄ = Σ_i i π(i|E). If π(i|E), or rather estimates of it, were provided, Monte Carlo practitioners could (but would not) sample from

it by conventional methods. In practice, if direct simulation methods be used, the user would choose, with appropriate probabilities, either [ν̄], the integral part of ν̄, or [ν̄]+1. That such a procedure departs from exact physical verisimilitude is obvious;
it is also clear, however, that if, as is practically always the case, the solution to the
problem under consideration requires the use only of the expected value, ν̄, not of
the frequency distribution, an unnecessary source of statistical error is removed by
sampling in the way that we have indicated. We therefore glimpse the possibility
of deliberately outraging the physical process in order to reduce a source of sampling
error, in such a way that the results of our calculation are unbiased. The general
subject of variance-reducing techniques occupies a very large place in the Monte Carlo
literature (Hammersley and Handscomb, 1964, p. 50; Goertzel and Kalos, 1958;
Halton, 1970); because of this, we shall not pursue the matter in depth in this article.
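The sampling rule just described, which returns [ν̄] or [ν̄]+1 so that the expectation is exactly ν̄, is easily sketched; the function name below is illustrative, not the paper's.

```python
import math
import random

def sample_fission_multiplicity(nu_bar, rng=random):
    """Return [nu_bar] or [nu_bar] + 1 with probabilities chosen so that
    the expected number of fission neutrons is exactly nu_bar.  This
    deliberately departs from the true distribution pi(i|E) but removes
    an unnecessary source of statistical error when only the mean matters."""
    k = int(math.floor(nu_bar))
    # With probability equal to the fractional part, round up.
    return k + 1 if rng.random() < nu_bar - k else k
```

Averaged over many fissions the sample mean converges to ν̄, while the variance of this two-point distribution is smaller than that of the true multiplicity distribution, so the results remain unbiased with reduced sampling error.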
The basic nuclear data situation described above is complicated by the fact that,
for some types of collision law, emergent energy may be wholly or partly correlated
with scatter angle; by the fact that, for a given incident energy, nuclide and reaction
type there may be several distinct forms of scatter law, some continuous and some
discrete, specified by a series of probabilities P1,P2, ...; and by the fact that it is
sometimes more convenient to portray angular distributions in a co-ordinate system
relative to the centre of mass of the neutron and the nucleus than in the laboratory
system.
There are many choices available for the manner in which the data may be
transformed. At one extreme one could merely read in that part of the Nuclear Data
Library required for the solution of the problem under review and then process this
in the required manner whenever a collision takes place in the Monte Carlo tracking.
But it is far preferable to process the elements in such a way that the data can be
used in any and every Monte Carlo tracking program with minimum computing effort.
All cross-sections are represented in histogram form, being assumed constant over
each of a large number of fixed, carefully chosen, energy bands. Angular and energy
distributions are defined by a series of equiprobable scatter angles and energies,
well-known ways of automatically sampling from these distributions (Sobol, 1966)
being utilized. In a minority of cases the frequency distributions are of known
mathematical form (the negative exponential is an example) and well-known specific
sampling methods are available (Hammersley and Handscomb, 1964, p. 36). An
interesting special example is the fission spectrum (the frequency distribution of the
emergent energies of neutrons born at a fission). This spectrum is well approximated
to by a mathematical form

f(E) dE = A sinh √(CE) exp(-BE) dE,

where A, B, C are fitted constants. This spectrum is of fundamental importance and


deserves special consideration. As far as we are aware, no special method of sampling
from this analogous to that used, for example, for sampling from the negative
exponential, or the Gaussian distribution, has been studied, and it is treated just as
an empirical spectrum would be (this treatment effectively simulates the distribution
in histogram form, each cell having different widths but equal areas). But this con-
ventional treatment, amply adequate in most of the E range, is rather unsatisfactory
for large E. The form of the spectrum indicates that for large E a negative exponential
distribution gives a satisfactory fit, and suggests that beyond a certain Eo sampling
techniques be changed to exploit the negative exponential shape. This solution is in
fact in use in our laboratory.
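A hybrid sampler of the kind just described might look as follows. This is a sketch under stated assumptions: the equiprobable cell boundaries, the changeover energy E0, the tail mean and the tail probability mass are all invented illustrative numbers, not fitted spectrum constants.

```python
import math
import random

# Hypothetical equal-area cell boundaries (MeV) for the body of the
# fission spectrum: each adjacent pair encloses equal probability mass.
CELL_EDGES = [0.0, 0.5, 0.9, 1.4, 2.0, 2.8, 4.0]   # illustrative values
E0 = 4.0          # changeover energy (MeV), assumed
TAIL_MEAN = 1.3   # mean of the exponential tail beyond E0 (MeV), assumed
P_TAIL = 0.05     # probability mass lying beyond E0, assumed

def sample_fission_energy(rng=random):
    """Equal-area histogram sampling below E0; beyond E0 exploit the
    negative-exponential shape of the spectrum's tail."""
    if rng.random() < P_TAIL:
        # Tail: E0 plus a negative-exponential variate.
        return E0 - TAIL_MEAN * math.log(rng.random())
    # Body: choose one of the equiprobable cells, then a uniform
    # point within it.
    i = rng.randrange(len(CELL_EDGES) - 1)
    lo, hi = CELL_EDGES[i], CELL_EDGES[i + 1]
    return lo + (hi - lo) * rng.random()
```

The design choice is the one the text argues for: the conventional histogram treatment is adequate over most of the range, while the exponential tail sampling avoids the poor resolution of wide, sparsely populated cells at high energy.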


The first stage of our nuclear data processing plant is now complete, processed
data having been made available in permanent form (magnetic tape). The next stage
is problem dependent. With this tape as input, the fast store of the machine is fed
with those ingredients of the tape specific to the problem under consideration. Some-
times, as is the case where methods of simulation other than the direct ones we have
described are used in the tracking, it is more convenient to use, not the cross-sections
themselves, but various functions of them. The data also require to be pruned of
redundancies and to be ordered in a hierarchical manner so that the many-stage
sampling procedures can be effectively carried out. A full description of the entire
system of Monte Carlo data handling developed at Aldermaston is given by Parker
(1966). In Fig. 1 those parts of a Monte Carlo code in which this system is called
into play are distinguished by crosses.

3. TYPES OF NEUTRONICS PROBLEMS


Having solved the data problems, including those of sampling at a collision, we
are in a position to distinguish three broad classes of Monte Carlo neutronics problems.

3.1. Non-linear Problems†


It may be the case that the nuclear composition and even the geometry of the
system under study is a function of a preceding stage of the calculation. The study of
the history of a reactor in which burn-up is taken into account (Leshan et al., 1958)
is an obvious example, but there are others.

3.2. External Source Problems


These are problems where the objects of interest are the properties of an imposed
source of neutrons, generally but not necessarily initiated simultaneously. The
calculation of the fraction of a given pulse of neutrons that are transmitted through
a shield is a problem in this class.

3.3. Homogeneous Linear Problems


These are problems where there is no imposed source, and one is interested in
the behaviour of a system for which the objective is to calculate some measure of
the criticality or reactivity. A system is critical if the number of neutrons in it stays
invariant, the gain through fissioning being exactly matched by loss through capture
(absorption) and escape. A typical problem might be to calculate how much reflecting
material should surround a fissile core for it to be just critical. Associated with a
critical system there is some quite definite steady-state distribution of neutrons (in
space and velocity) which, apart from statistical variations that are in general, though
not always, of little physical consequence, will remain invariant. This steady state is
not known a priori, and has to be established by tracking particles.

4. EXPERIENCE WITH MONTE CARLO METHODS


Though it is right to distinguish different approaches, and indeed different
attitudes of mind, towards the three classes of problems distinguished in the last
section, there is a lot of common ground. We shall take for granted the solution to
the problem of speedily generating numbers, that, to all intents and purposes, may
be regarded as independent random samples from the rectangular distribution in

† There may also be problems in which the Boltzmann transport equation is non-linear, as
when neutron-neutron interactions are the object of study. Such problems are not discussed here.


(0, 1); the literature on this one subject alone is considerable, being well summarized
(Halton, 1970) already. An appropriate congruence scheme, based on the original
idea of Lehmer (1951), is adequate for practical needs.
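A minimal multiplicative congruential generator of the kind Lehmer proposed can be written in a few lines. The modulus and multiplier below are the well-known Park-Miller "minimal standard" constants, chosen here purely for illustration; they are not necessarily the constants used in the author's laboratory.

```python
class Lehmer:
    """Multiplicative congruential generator: x_{n+1} = A * x_n mod M,
    returned scaled into the open interval (0, 1)."""
    M = 2**31 - 1     # a Mersenne prime modulus (assumed constants)
    A = 16807         # primitive root multiplier

    def __init__(self, seed=1):
        self.state = seed % self.M or 1   # state must stay non-zero

    def next(self):
        self.state = (self.A * self.state) % self.M
        return self.state / self.M        # uniform in (0, 1)
```

Every value lies strictly between 0 and 1, and the sequence has full period M - 1 for these constants, which is what makes such schemes "adequate for practical needs" as the text puts it.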
Also taken for granted is the use made of these random numbers in the sampling
at collision points and during the tracking; some of these facets have been mentioned
in Section 2. Techniques for generating, for example, the sine and cosine of an angle
that is uniformly distributed in (0, 2π) are so well known (Hammersley and
Handscomb, 1964, p. 37; Cashwell and Everitt, 1959; Tocher, 1963, p. 37) as to
deserve no special mention.
At this stage, then, we have a mental picture of an amalgam of connected series
of straight line tracks, called "particle histories", with or without branching at some
of the nodes (nuclei) that can be rapidly generated by the methods we have described.
If the tracks are a microcosm of the real physical process the tracking is said to be
"Crude Monte Carlo"; however they need not be, and if they are not the tracking is
"Sophisticated Monte Carlo". We did not invent this terminology and do not attempt
to justify it. What remains is to analyse the results of the tracking in order that one
may achieve a sufficiently accurate answer to the problem under study. It is convenient
to distinguish the three classes of problem defined in Section 3 before doing this.

4.1. Non-linear Problems


For these it is necessary, from time to time, to aggregate the particles and calculate
some property, or properties of them, which define the new system geometry. Thus
the tracking should be done epoch by epoch, the result of each epoch's tracking being
analysed to provide the input to the (usually deterministic) calculation which
must be carried out before the next epoch's tracking. In this class of problem we have
found crude Monte Carlo adequate. That this is likely to be the case follows from
the fact that the commodity of interest in the calculation is not a single objective
(such as the proportion of neutrons transmitted over a distant surface) but rather a
global compendium of the behaviour of the neutrons. It is, we believe, in the (rather
wide) class of problems in which there is a simple and easily specified single objective
that great attention should be paid to organizing the tracking details in such a way
that histories are more liable to contribute to this objective, a desirable aim that it
is the object of sophisticated Monte Carlo to achieve.
Having said this, it is necessary to redress the balance by arguing in favour of
indirect methods of analysing particle histories. It will be helpful to consider an
example; a system may change its composition as a result of nuclear processes. Now
the nature of the composition may well depend on the occurrence of a fairly rare
type of collision event. The number of occurrences of this event during an epoch's
tracking will in this case be an integer random variable with appreciable relative
error. A much more reliable estimate of the number of occurrences is obtained by
scrutinizing all collisions which might have produced the event, and aggregating the
(small) probabilities that this event was achieved. This is the well-known "method
of expected values". Other similar, but more difficult, concepts are utilized; some
include utilizing the particle's track length rather than the collision event, as the basic
scoring concept; they will not be described here.
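The contrast between crude counting of a rare event and the method of expected values can be sketched as follows; the transmutation setting and all names are illustrative assumptions, not the paper's code.

```python
import random

def transmute_count_expected(collision_probs):
    """Method of expected values: instead of counting the rare event
    itself, sum over every collision the (small) probability that the
    event occurred there.  The estimate has the same expectation as
    direct counting but no Bernoulli sampling noise."""
    return sum(collision_probs)

def transmute_count_direct(collision_probs, rng=random):
    """Crude analogue: score 1 whenever the rare event actually
    happens in the simulation.  Same expectation, much larger variance."""
    return sum(1 for p in collision_probs if rng.random() < p)
```

For an epoch in which 1,000 collisions each carry a 1 per cent chance of the rare reaction, the expected-value estimator returns exactly 10 every time, while the direct count is an integer random variable with standard deviation of about 3, an appreciable relative error of the kind the text describes.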

4.2. External Source Problems


Much work has been devoted to the solution, by Monte Carlo, of external source
problems, and particularly advanced Monte Carlo skills are brought into play


(Goertzel and Kalos, 1958; Kalos, 1963b; Leimdorfer, 1964; Eriksson, 1965; Bendall
and McCracken, 1968). That something other than routine Monte Carlo tracking
is sometimes absolutely necessary is immediately obvious when the well-worn
example of calculating the proportion of neutrons transmitted through a thick
shield (Goertzel and Kalos, 1958; Hammersley and Handscomb, 1964, p. 100; Sobol,
1966) is presented. If the aim is to calculate a transmission which is of the order 10^-4 with
a relative error of about 10 per cent, about a million histories must be processed and
this is computationally unrealistic.
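The arithmetic behind that claim is easily checked. For crude sampling of a small probability p from N independent histories, the relative standard error is √((1-p)/(pN)); taking the transmission to be of order 10^-4 (an assumption consistent with the "million histories" figure), a 10 per cent relative error requires about a million histories. A one-line sketch:

```python
def histories_needed(p, rel_err):
    """Crude Monte Carlo estimate of a probability p from N Bernoulli
    trials has relative standard error sqrt((1-p)/(p*N)); solve for N."""
    return (1.0 - p) / (p * rel_err ** 2)

# histories_needed(1e-4, 0.1) is roughly one million, as the text states.
```

The inverse dependence on p is the whole case for sophisticated Monte Carlo: halving the target probability doubles the required number of crude histories.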

[Fig. 3 near here; "Source" labels the source region.]

FIG. 3. External source problem: schematic diagram. Broken lines denote importance region boundaries.

But in practice we might well pause to ask ourselves how realistic this test example
really is. For neutrons incident normally on a plane slab the transmission probability
is of the approximate form p = f exp(-σtA), where σ is the total cross-section, t the slab thickness, A a constant and f an unknown "build-up" factor. Even if f were assumed a constant, we have δp/p = -σtA (δσ/σ), σtA being the thickness of the slab in mean free paths, so that the relative error in p is about ten times that of σ. Even the most reliable
cross-section determinations have a relative error of a few per cent and when it is
remembered that neutrons are being degraded in energy as they traverse the slab,
so that σ itself changes, and that there is also a multiplicity of different types of
collision event, it is clear that insistence that the error in the Monte Carlo should
be as low as 10 per cent is physically rather idealistic in this example. Nevertheless
the point remains that there are occasions where departures from a direct simulation
are highly desirable. We would go further than this and say they were essential.
A very rough schematic diagram illustrating a hypothetical external source
problem is shown in Fig. 3. Three regions are recognized. First, there is a source
region where neutrons are born. Second, there is a "target" region, it being desired


to estimate some properties of the neutrons that get into this region from the source,
for example, the energy distribution of neutrons per unit volume as a function of
position. Between the two there exists a medium that could be anything from a thick
absorber to a vacuum that is completely transparent to neutrons. A problem relating
to the storage of uranium billets might come into this category, the object being to
examine the effect of a pulse of fission neutrons generated in one of the billets. The
intervening air is in this case virtually transparent to neutrons; but if the system were
flooded this intervening material would be neutronically thick. The whole system is
surrounded by the floor, walls and roof of a storage building.
In this type of example only a small fraction of the initial neutrons contributes to
the result, though even the most unlikely neutron may contribute. The main way in
which Monte Carlo simulation may be improved is to give expression to the idea
that greatest attention should be paid to those particles that are most likely to
contribute to the result. Thus in Fig. 3, if the true physical source be uniformly
distributed over the surface of the source volume, relatively few particles are started
in positions (and directions) from which a contribution is a priori unlikely, but a great
number of particles would be started in the region closest to the target with directions
in the general target direction. The imbalance with the true physical situation is
adjusted by according fictitious weights to the particles representing the neutrons:
the "unlikely" particles are accorded high weights and the "likely" particles low
weights. This is called "source biasing".
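Source biasing amounts to sampling start positions from a distribution other than the true one and compensating with weights. A minimal sketch, assuming a toy two-region source whose true and sampling probabilities are invented for illustration:

```python
import random

def biased_source_sample(regions, rng=random):
    """Source biasing: start more particles in 'likely' regions than
    their true probabilities warrant, and give each particle the weight
    true_prob / sampling_prob so that all estimates remain unbiased.
    `regions` maps a label to (true_prob, sampling_prob); each column
    must sum to one."""
    u, acc = rng.random(), 0.0
    for label, (p_true, p_samp) in regions.items():
        acc += p_samp
        if u < acc:
            return label, p_true / p_samp   # (start region, weight)
    # Guard against floating-point shortfall in the cumulative sum.
    label, (p_true, p_samp) = list(regions.items())[-1]
    return label, p_true / p_samp
```

Particles started in an a priori unlikely region carry high weight and those started near the target carry low weight, exactly the fictitious-weight adjustment the text describes.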
As a particle is tracked it may well approach the general target area and become
"more important". On the other hand it may become less important if it is back
scattered. The physical mischance that a promising particle is lost through neutron
capture may of course be precluded from the simulation by inhibiting absorption
altogether, making a corresponding weight reduction at collision, or during flight,
to balance this violation to the physics. It is clearly desirable that changes in a
neutron's "potential" should be reflected in the simulation, so what are called
"importance regions" are introduced in the intervening material. If a particle crosses
from one region to the next, travelling in the direction of interest, it is split into two
or more particles of lower weight; conversely if it travels in the reverse direction
"Russian Roulette" may be played-that is, the particle is absorbed with some
probability but if it survives this absorption its weight is augmented. By implementing
a system similar to this, one may guarantee good particle traffic all the way between
the source and the target, and so enhance statistical accuracy.
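A minimal sketch of these boundary-crossing rules follows; the split factor of two and the survival probability of one half are illustrative choices, not prescriptions from the text. The essential property is that both games conserve the expected total weight.

```python
import random

def cross_region(weight, towards_target, split=2, survive_p=0.5):
    """Splitting or Russian roulette at an importance-region boundary.
    Returns a list of daughter weights (possibly empty); in expectation
    the total weight carried forward equals the incoming weight."""
    if towards_target:
        # split into `split` particles of equal, reduced weight
        return [weight / split] * split
    # Russian roulette: absorb with probability 1 - survive_p,
    # otherwise augment the weight to keep the game fair
    if random.random() < survive_p:
        return [weight / survive_p]
    return []
```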
The way the importance regions are chosen depends, amongst other things, on
the structure of the intervening material; the thicker the material is to source neutrons,
the more the number of importance regions. Since faster neutrons have longer mean
free paths than slow ones, the neutronic thickness and therefore the attenuation is a
function of neutron energy. Thus importance region siting strategy is a function of
energy. If there is a wide range of source energy-and the fission spectrum covers a
wide range-the simplest procedure is first to stratify the calculation into a series of
perhaps half a dozen separate calculations in each of which the source is distributed
over a relatively narrow energy band. Each calculation has its own distinctive
strategy.
Ideally, we should like our Monte Carlo importance sampling mechanism to be
such that every starting particle produced some contribution in the target zone, and
further that the coefficient of variation of these contributions is small. With no
importance sampling at all, the vast majority of the contributions will be zero and the
remaining very small fraction unity, or thereabouts. As more and more importance
regions are introduced the proportion of starting particles that give rise to a non-zero
score (the score of a starting particle is the aggregate score of all the descendants of
this particle through splitting) increases and so of course does the computing time
per particle history. The number of contributing source particles for a pre-set
computation time increases rapidly (in most applications) as more importance regions
are introduced, attains a (flat) optimum, and then falls off. The optimum is achieved
when the particle traffic density as between source and target is roughly uniform.
This experience of ours is in line with other people's (Goertzel and Kalos, 1958).
A tally is kept of the particle transmitted weights into the target regions of interest
and from these the appropriate parameters required are estimated. The mean square
of the total transmitted contribution of the starting particles is calculated and in
principle this is sufficient to provide an estimate of the statistical accuracy of the
desired result. But even near optimum importance region choices the proportion of
non-zero contributors to the target is quite small (typically a few per cent) and it is
necessary to address the problem of securing meaningful error estimates in an
environment where the great majority of the scores are zeros. This is a statistical
problem not without interest, and we exploit ideas similar to those described by
Burrows and MacMillan (1965).
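The basic point can be illustrated by the naive per-history estimate below (this is only the elementary calculation, not the refined treatment of Burrows and MacMillan): the zero scores must be retained in the sample, since discarding them would bias both the mean and its standard error.

```python
import math

def transmission_estimate(scores):
    """Mean transmitted weight per source particle and its standard
    error, from per-history scores most of which are zero."""
    n = len(scores)
    mean = sum(scores) / n
    mean_sq = sum(s * s for s in scores) / n
    var = (mean_sq - mean * mean) * n / (n - 1)   # unbiased sample variance
    return mean, math.sqrt(var / n)
```

With 90 zero histories and 10 contributing unit weight, this returns a mean of 0.1 with a standard error of about 0.03.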
The strategies described above are fairly primitive ones, deliberately so. We need
methodologies that are easy to apply in a wide variety of problems and types of
system geometry. The importance of a neutron depends not only on its position but
also on its direction and its energy; we have recognized the first only crudely and
the second only through the initial stratification. Because neutrons generally lose
energy in their passage towards the target, the importance regions should be tightened
up; they are not. Even if we can guess with fair confidence some of the neutronic
phenomena occurring in the problem, we quite deliberately discount them; it is
cheaper to have a single, perhaps inefficient, general purposes code rather than a
medley of highly sophisticated special purpose programs. It is not without interest
to note that the "importance" of a neutron, defined as its probability of contributing
to the final result, is in many applications calculable in principle; it is the solution
of the integro-differential equation that is adjoint to the Boltzmann transport equation
(Goertzel and Kalos, 1958). An approximation to this solution could be obtained in
advance, possibly by Monte Carlo methods; the knowledge so obtained could be
used to institute a highly efficient importance sampling scheme which would result
in a low variance (in the limit, zero variance) solution to the original problem. Of
course there are sizeable practical difficulties in this approach.
The variance-reducing repertoire for external source problems is far from
exhausted. As just one example (there are many others) we could, if we wished to
do so, estimate a "transmission contribution" at every single collision point by
calculating the probability(ies) that the emergent particle will contribute to the
score(s). This is in fact a particularly instructive example-one of many-of an
approach that at first sight appears fruitful (particularly in idealized problems) but
is in general operationally sterile.
In many Monte Carlo neutronics problems, but especially in external problems,
one may be interested in the effect of a small change in system geometry. Studies
of these effects call for a different type of sophistication. Given the solution, by
Monte Carlo, to the transmission problem for a slab of given thickness, it is possible
to use the same particle histories to obtain the solution for other slabs (Berger and
Doggett, 1956; Morton, 1957). In practical work, however, we are interested in
perturbations to a much more complicated geometry, and moreover these
perturbations are not a matter of simple scaling. Where estimating the small effect due to a
geometry change is required, we have found it sufficient to use common tracks for
both problems simultaneously up to the point where the geometries differ, and thence
to bifurcate the Monte Carlo (still within the ambit of a single computer calculation)
along two or, even in some cases, three individual lines. Since a large part of each
particle's history is common to the two calculations, the effect of the geometry change
is calculated satisfactorily accurately. Similar studies have been carried out (by
perturbing the weights at collision points) to explore the effect of a variation in some
aspect or other of the nuclear data, but this is a matter that is best handled by
deterministic methods (Hemment et al., 1966).
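The common-track idea can be caricatured in one dimension. In the toy below (a purely absorbing slab with analogue flight sampling; the names and the geometry are invented for illustration, and none of this is the production machinery described above), both the unperturbed and the perturbed geometry are driven from a common random-number stream, so paired histories agree until the geometries differ and the small difference is estimated far more accurately than by two independent runs.

```python
import math
import random

def transmitted(thickness, mfp, rng):
    """Toy one-dimensional history: the particle crosses a purely
    absorbing slab if its sampled free flight exceeds the thickness."""
    flight = -mfp * math.log(1.0 - rng.random())   # exponential free path
    return flight > thickness

def effect_of_change(t1, t2, mfp, n, seed=0):
    """Estimate the small change in transmission due to a thickness
    perturbation, using common random numbers for the paired histories."""
    rng1, rng2 = random.Random(seed), random.Random(seed)
    diff = 0
    for _ in range(n):
        diff += int(transmitted(t2, mfp, rng2)) - int(transmitted(t1, mfp, rng1))
    return diff / n
```

Because each pair of histories shares its sampled flight, the paired difference is non-zero only for the few histories that the perturbation actually affects.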

4.3. Homogeneous Linear Problems


The individual feature of this type of Monte Carlo calculation, which might,
for example, be the calculation of the critical size of a more or less geometrically
complex assembly, is that one must first find that source which is self-preserving in
the sense described in Section 3.3. A natural method of attack is to start particles
with some arbitrary distribution in space and velocity over the system (or, preferably,
with a distribution believed to be representative of the true eigendistribution), track
them, and then to aggregate them at convenient stages to see whether they have
"settled" to a steady state. Methods for disentangling settling errors from statistical
errors are helpful in this connection (Parker and Woodcock, 1961; Benzi et al., 1966).
Since "settling" may take considerable time, particularly if the initial guess is a poor
one, methods for reducing its effect are worthy of consideration. These fall into two
categories. One approach consists of identifying the criticality measure required
with the eigenvalue of a matrix. This matrix is defined by imagining phase-space
divided into a number of sub-divisions. A Monte Carlo calculation then provides
estimates of kij, the expected number of particles in state j that have originated (after
an appropriate stage) from one particle in state i. The largest eigenvalue of the
matrix is used as the criticality measure (Morton, 1956; Pull, 1962). The point of
this procedure is that less tracking is required to achieve a suitable criticality measure
to a required accuracy, for it is no longer necessary to achieve settling as between
different sub-divisions of phase space. The problem as to how the available system
space should be most expeditiously divided is however not always easy to resolve.
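Once the k_ij estimates are in hand, extracting the largest eigenvalue is elementary; a power-iteration sketch (the matrix in the usage note is a made-up illustration, not data from any calculation) might run:

```python
def dominant_eigenvalue(k, iters=200):
    """Power iteration for the largest eigenvalue of a non-negative
    matrix k, k[i][j] being the expected number of next-stage particles
    in state j per particle in state i.  A value above 1 indicates a
    supercritical system."""
    n = len(k)
    v = [1.0 / n] * n
    lam = 0.0
    for _ in range(iters):
        w = [sum(v[i] * k[i][j] for i in range(n)) for j in range(n)]
        lam = sum(w) / sum(v)        # growth factor per stage
        total = sum(w)
        v = [x / total for x in w]   # renormalize the state distribution
    return lam
```

For instance, k = [[0.5, 0.3], [0.2, 0.4]] has eigenvalues 0.7 and 0.2, and the iteration returns 0.7.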
The other approach is quite different. It recognizes that the index used to define
criticality is itself at the Monte Carlo practitioner's choice. The objective is to
calculate the critical size, which could be achieved by doing two runs (possibly a
single run with scaled tracks)† with two different core sizes and then finding the
critical size by interpolation. When viewed in this way, we can choose any criticality
parameter we like, assuming we know it takes some definite value, generally 0 or 1,
when the system is exactly critical. Some parameter choices are good and some are
bad and to clarify matters we give an example of a parameter that is generally bad.
The time constant, λ, of a system is defined by the equation

N(t) = N0 exp(λt),


† As far as we know, this technique has not been used in homogeneous linear problems, as
distinct from external source problems (Section 4.2). This is almost certainly because of the
awkwardness of implementing it for complex geometry systems.

where N(t) is the neutron population at time t and N0 the (eigendistribution)
population at time zero. It would be possible to aggregate neutrons at successive instants
of time and then estimate λ from the data subsequent to that epoch where settling is
estimated to occur, and for simple systems such an approach may be a good one
(Davis, 1963). But in an environment where the nature of the criticality is heavily
dependent on the behaviour of very slow neutrons remote from the fissile core (a very
small fraction of which may return to it and subsequently cause fission) this is a poor
parameter choice for two reasons. First, the faster neutrons undergo a very large
number of collisions while the slow ones are gradually settling into their eigenstate,
and second, linear interpolation, and particularly extrapolation, on two systems with
different λ is an unreliable procedure. The whole matter is reviewed by Parker and
Woodcock (1961) and Parker (1962) where the concept of the "traverse technique"
is put forward.
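For the simple systems where the time-constant approach is serviceable, λ would be recovered by a log-linear fit to the aggregated populations; a sketch (the least-squares form is standard, the function name invented):

```python
import math

def time_constant(times, counts):
    """Least-squares slope of ln N(t) against t, i.e. lambda in
    N(t) = N0 exp(lambda*t).  Only tallies recorded after the estimated
    settling epoch should be supplied."""
    n = len(times)
    logs = [math.log(c) for c in counts]
    tbar = sum(times) / n
    lbar = sum(logs) / n
    num = sum((t - tbar) * (l - lbar) for t, l in zip(times, logs))
    den = sum((t - tbar) ** 2 for t in times)
    return num / den
```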
In this field also interest often arises in determining, as reliably as possible, the
small effect of a perturbation (for example, moving control rods in a reactor) and
methods based on what is described as a correlation technique (Gubbins, 1966), and
on adjoint calculations (Gubbins, 1969) have been put forward. Methods of estimat-
ing quantities such as reaction rates in small regions of a reactor where relatively
few particles are present have been developed, the most interesting ones being based
on the idea of subjecting particular collision points of interest, but outside the small
region of immediate concern, to close mathematical scrutiny, using the scatter laws
associated with the neutron's subsequent motion (Kalos, 1963a). Techniques for
speeding up the tracking where materials are a mere fraction of a mean free path
thick, so that in the ordinary course of events tracking would be slow computationally
because of the inordinate time required for boundary testing, are described by
Hemmings (1967). A general review of the problem of solving homogeneous linear
problems is given by Parker (1969).

5. CASE STUDIES
The object of this section is to illustrate, as far as we are able, the spirit of our
approach towards the design and execution of Monte Carlo investigations.

5.1. Monte Carlo Studies of Scintillator Efficiencies


A scintillator is a device for counting neutrons, and works on the principle that a
large volume of hydrogenous liquid slows down a neutron injected at the centre.
The neutron diffuses in the liquid until it is captured by what is called "a loading
element". This is a material with a high capture cross-section for low energy neutrons,
and is usually gadolinium or cadmium. The gamma rays produced at a capture event
cause scintillations which combine to form a single "capture" pulse. The object is
to calculate the "efficiency" of the scintillator-that is, the probability of the neutron
being counted-as a function of neutron energy. The main reason why a liquid
scintillation counter is not 100 per cent efficient is its finite size: some neutrons escape
from the counter before capture in the loading material. Because the higher energy
neutrons have the longer mean free paths, efficiency falls off as neutron energy
increases. A subsidiary factor is that neutrons may be absorbed in other materials
apart from the loading material.
It is first necessary to appreciate why the direct calculation of efficiency by Monte
Carlo methods is unsatisfactory. Efficiency is largely determined by neutrons that
are moderated to very low energies, and a direct Monte Carlo simulation would be
very time consuming. But there is more in it than that. In most Monte Carlo
neutronics studies the nuclei with which the neutrons collide are assumed static;
this is quite a satisfactory approximation for general work but when neutrons are
very slow the physical thermal motion of the protons and other nuclei in the scintillator
cannot be ignored. Even if a Monte Carlo code were written to achieve exact physical
verisimilitude at these very low energies, the results of the calculation would be no
more authoritative than the experimental errors of the nuclear data permit; the
absorption cross-section of gadolinium is only known to an estimated accuracy of
20 per cent.
For these reasons the Monte Carlo calculation was limited to the calculation of
the proportion of neutrons moderated to just less than some specific energy, say G.
The results of this Monte Carlo are now adjusted by applying a thermal group
diffusion treatment to these moderated neutrons. The Monte Carlo calculation would
be even more simplified if the small amount of loading material (typically about
0.02 per cent of the total number of nuclei in the scintillator) could be ignored in the
Monte Carlo tracking altogether. This would apparently merely defer the capture of
neutrons in the gadolinium to later times, when they would be treated by the diffusion
approximation. If this treatment can be justified it means that, for a particular
source energy, scintillator efficiency as a function of percentage loading may be
explored by means of one Monte Carlo calculation and a number of diffusion theory
applications.
The direction of our planning, and this is quite intentional, is to relegate the
Monte Carlo proper calculation to play the role of just one link in the computational
chain. To justify ourselves we have to ensure that the results of our calculations are
insensitive first to the choice of G and second to whether or not loading material is
included in the nuclear composition. This was done by running the appropriate
Monte Carlo calculations for one particular source (a fission neutron source) for
which some experimental results were available for comparison purposes. Having
verified the assumptions, ordinary crude Monte Carlo methods were deliberately
used for a series of monoenergetic neutron sources. Even the shape of the scintillator,
which is spherical apart from a small cylindrical neutron beam tube (Mather et al.,
1964) was (quite deliberately) not accurately simulated, though careful calculations
to provide an upper bound for the effect of this approximation were mounted. Many
sophisticated ways of carrying out the tracking, and modifying the scoring processes,
were considered briefly only to be rejected. Finally, a patient study of the effect of
the experimental error in the basic nuclear data was put in hand. Full details of the
whole procedure are given by Parker et al. (1968). The final results are shown in
Table 1.
The Monte Carlo calculations each took 30 minutes computing time; the figure
of 0.005 for methodological errors includes effects of deliberately simplifying the
geometry. The lesson to be drawn from Table 1 is a valuable one; a Monte Carlo
design of great naivete is amply satisfactory for our purposes, bearing in mind that
our calculational environment is clouded with experimental noise. The clear
conclusion is that, in practical as opposed to theoretical work, statistical errors in
the Monte Carlo, however crude, are frequently (but not always) of minor significance
when viewed against a broader backcloth; this is so seldom admitted in literature
relating to neutronics Monte Carlo procedures that a forceful counter-example is
necessary in an attempt to balance this false perspective.
TABLE 1

Calculated efficiency and errors (S.D.): liquid scintillation counter

                                           Errors
Source    Calculated    Monte Carlo    Other             Nuclear      Total
energy    efficiency    statistical    methodological    data         error
(MeV)                   errors         errors            errors

 0.3        0.963         0.003                           0.005        0.008
 1.0        0.955         0.003                           0.004        0.007
 2.0        0.950         0.003                           0.003        0.007
 3.0        0.923         0.004        <0.005             0.004        0.008
 5.0        0.942         0.006                           0.010        0.013
 7.0        0.766         0.007                           0.012        0.015
10.0        0.671         0.007                           0.015        0.017
14.0        0.590         0.005                           0.019        0.020

5.2. Multiple Scatter Corrections


When a neutron collides with a nucleus, the scattering is not necessarily isotropic.
The frequency distribution of μ, the cosine of the scattering angle, varies with the
type of scatter (viz. elastic or inelastic), the nuclide and the incident neutron energy
and, typically, for high energy neutrons impacting on a heavy nucleus, is strongly
anisotropic, peaking at forward scattering angles (μ = +1), falling off to a minimum
at lower μ and frequently having a lower peak at μ = −1 (backward scatter). These
frequency distributions, when processed, are used in Monte Carlo calculations, as
was seen in Section 2. Except in rare cases, their source is experimental.
The experiment is staged as follows. Monoenergetic neutrons from a source
(effectively a point) impinge on a cylindrical shell whose composition is that of the
material of interest, and emergent neutrons are counted by a detector, which can
be moved on a large circle centred on the cylinder, and which is well shielded from
neutrons coming direct from the source (Fig. 4). The number of scattered neutrons
is counted at each of a number of detector positions, Q.
This procedure provides a somewhat rough, biased, account of the true differential
cross-section.† It is biased for two reasons. First, the cylindrical target has finite
dimensions and the distribution of neutron flux is not uniform over it (there are
more collisions on that part of the cylinder nearer the source); second, again because
of the finite dimensions, the actual events picked up by the detector will consist not
only of singly scattered tracks but will be contaminated by multi-scattered contribu-
tions. Of course both these effects could be mitigated by making the cylindrical
target smaller but then experimental errors due to poor counting statistics would
become appreciable. What is wanted is some mechanism to correct for both these
effects, that is, flux attenuation and multiple scatter, for targets whose dimensions
are of the order of a few centimetres. The detector moves round a circle whose radius
is of the order of metres.

† The differential cross section is proportional to the frequency distribution of the scattering
angle, the constant of proportionality being the cross section for the particular type of scattering
event under consideration.

By tracing neutron histories by the Monte Carlo method, the effects of flux
attenuation and multiple scatter are automatically taken into account. Apart from
statistical fluctuations, a Monte Carlo calculation using the correct physical nuclear
data, will parallel the physical process so the problem is to find what data, when
put into the Monte Carlo calculation, give the experimental results.

[Figure: plan aspect showing the source S, the sample and the detector circle.]

FIG. 4. Scattering experiment: simplified geometry.

Here we will describe the solution to the multiple scatter problem; for fuller
details see the two papers by Parker et al. (1961 and 1964). Fig. 4 shows the geometry
in elevation and plan. Clearly a straightforward Monte Carlo simulation of the
problem would be very inefficient, since physically the detector is situated a long way
from the target, subtending a very small solid angle at it. Thus only a very small
proportion of particles would hit it. This situation may be alleviated to some extent
by using, instead of the detectors, a detection band which is an area centred on the
detector circle of Fig. 4. This band would subtend a much larger solid angle at the
expense of introducing some error. Multiple scatter correction programs based on
this idea exist (Cashwell and Everett, 1959) but even so the computing time is lengthy.

The crux of our method lies in the calculation of the density (number per unit
area) of neutron histories, in a given category, at the centre of each detector position.
This is done by examining each particle immediately before collision and calculating
the chance that it would leave the target in the prescribed direction without having
a further collision. If there are several categories of possible collision event, as
many separate scores as types of event are computed, and these scores are further
distinguished according to whether the collision point under study is the first collision
that occurred or not. Each collision is therefore employed to estimate a whole series
of scores at each of a number of artificial detectors. Conceptually the occurrence of
a collision switches on a torch, as it were, whose axis is coincident with the particle's
previous track. This torch produces light whose spectrum and intensity are governed
by the appropriate neutron cross-sections and scatter angle frequency distributions.
The whole sphere (centred on the target, and radius the target-detector distance) is
illuminated with light from the torch but in this application the only concern is to
measure the intensity at selected positions around the counter circle. These calcula-
tions are carried out quite independently of the actual type of collision sampled from
the Monte Carlo. Tracking is then resumed, scoring taking place at each further
collision within the target until the track leaves the target when it is annihilated.
For each collision point in the target, P, there will be some definite direction
d(P, Q) in which the particle must emerge if it is to strike the centre of the detector.
Because it is true that in practical applications the radius of the detector circle is
large when viewed against the target dimensions we may write d(P, Q) = d(Q). For
each collision the score for event type j is the product of three quantities:

(a) the chance, pj, that event j takes place at the forthcoming collision;
(b) the chance, which is exp{-r/m(E)}, that the emergent particle, whose
energy is E, will not be further scattered during its remaining flight (of length r) in
the target (m(E) is the neutron mean free path);
(c) the differential cross-section σ(φ), where φ is the angle between the particle
track and the direction d(Q).

The angle φ is obtained from a straightforward mathematical calculation, and so
are d and r. For the main type of collision event (which is elastic scatter) E is a
directly calculable function of φ and m(E) is obtained from the Monte Carlo tracking
data. Where E is not necessarily a function of φ (for example, inelastic scatter), it is
chosen by random sampling from the appropriate frequency distribution. σ(φ) is
obtainable from the Monte Carlo input data.
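The score at a single collision is then the product of the three quantities (a)-(c); a literal transcription follows (the angular distribution passed in below is a made-up example, and any solid-angle normalization is taken to be absorbed into the differential cross-section):

```python
import math

def collision_score(p_event, r_exit, mfp, sigma_diff, phi):
    """Tally contribution of one collision towards one detector:
    (a) p_event, the chance that the event of interest occurs;
    (b) exp(-r/m(E)), the chance of leaving the target uncollided
        along the remaining flight of length r_exit (mfp = m(E));
    (c) sigma_diff(phi), the differential cross-section at the angle
        phi between the particle track and the direction d(Q)."""
    return p_event * math.exp(-r_exit / mfp) * sigma_diff(phi)
```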
Scores are accumulated for about 2000 separate neutron histories, this relatively
small sample size having been shown to be adequate to achieve the necessary accuracy
(Parker et al., 1961). The scores are then examined as a function of detector position Q.
Of course the single scatter scores will simply mirror the appropriate Monte Carlo
differential scatter input data, apart from small effects due to the finite source distance
in Fig. 4 and the variation with scatter angle of average track length in the target.
The aggregate of all scores for a particular scattering event (the contributions from
different events can be distinguished experimentally since the detectors measure
counts as a function of energy) estimates what the experimenters actually measure
but the Monte Carlo score plot and the experimental plot will not agree (Fig. 5)
because the Monte Carlo calculation has the wrong data input. The problem now
is to calculate what perturbation must be made to the shape of the appropriate
Monte Carlo input data scatter angle frequency distribution to align the results of
the calculation with those obtained from experiment. This is an (actually automatic)
iterative procedure. Typically the whole procedure requires 8 minutes computing
time on an IBM 7030 computer.

[Figure: curves plotted against cos (scatter angle).]

FIG. 5. Differential elastic cross-section. (Sodium: 1.5 MeV neutrons.) Broken curve denotes Monte
Carlo input (≡ experimental result); dotted curve, Monte Carlo output, normalized; full curve, revised Monte
Carlo input for next run.

The Monte Carlo procedure is crudity itself; it is the scoring that is elaborate.
The computer program is amply efficient but it is right at least to consider the use of
special tracking techniques. There is indeed a more modern version of the program
which uses weighted tracking, but the motivation for doing this was not to increase
accuracy but to enable multiplicative neutron processes (for example, fission) to be
handled more expeditiously (Parker et al., 1964). Hitherto such examples had not
been contemplated. Of course we were aware of Monte Carlo methods for con-
straining particles to interact (that is, have a collision) in the target, this "forced first
collision" technique being well known (Sobol, 1966, p. 143; Cashwell and Everett,
1959); we have used it, with mixed success, in other applications. But it costs very
little computer time to process a history that has no collision, and we set our faces
against this needless sophistication. The computer code, called MAGGIE, is now
in extensive use.†

† In private discussions, Dr M. H. Kalos informed me that he had developed, quite
independently, a multiple scatter code closely paralleling the above philosophy. His work, which
is not only antecedent to ours, but superior in detail, was not published.

6. DISCUSSION
In this day, there appears a wide and growing gulf between the people who
describe techniques for doing problems and those who actually carry them out. It is
true in the subject of our Society (Sprent, 1970) and the genius whom we are com-
memorating would have deplored this gulf.
In the field of Monte Carlo neutronics the chasm is very wide indeed. To surveyors
of this field the attitudes we have consciously adopted, both here and elsewhere
(Parker, 1965 and 1969), will appear bizarre ones. Thus we have described at length
the problems involved in assessing and accommodating the data, while saying next
to nothing about generating random numbers.
In our field the two poles are exemplified by the expert intolerant of simple
approaches and direct methods and the unimaginative practical worker who scorns
(even if he can understand) the implications of important new work. "Crude" and
"sophisticated" are by now technical Monte Carlo terms. Too often the expert
uses "crude" in a deprecatory sense, while his opposite extreme uses "sophisticated"
in a derogatory sense.
To attempt to bridge the gap it might be helpful to rehearse the basic requirements
of a Monte Carlo neutronics practitioner. First, he would like a fully automatic,
fool-proof, method of processing and organizing his data input. Next he aims to use
a comparatively small repertoire of thoroughly tested computer programs; these
must be general enough to address a large class of problems for the luxury of writing
a special purpose Monte Carlo program to solve an isolated problem can seldom be
afforded. In general (though exceptions may well arise when comparative calculations
are envisaged) he should not strive for statistical accuracy which is unrealistically high
when viewed against the background of the data errors, and should be particularly
suspicious of assessing the efficacy of his methods in terms of a quantity that is the
product of the time spent in computing and the purely statistical variance of his
result or results. Finally, his quiver of computer programs should be as free as
possible from difficulties in setting up (it is, for example, dangerous to have less-
skilled personnel defining importance region siting strategy) and the results should
be capable of ready interpretation.
In this climate it is more understandable why so-called crude Monte Carlo is
favoured for a large proportion of problems. It is certainly not always the case that
the problem consists of calculating an efficient estimator of a single quantity. There
are many problems where designs which enhance the accuracy in some regions of
phase space (necessarily), degrading it in others, are not acceptable though fortunately
not all problems are in this class.
It would be false to imply that reducing variance is not important and that the
available repertoire for doing just this has only been explored to the extent that has
been described in Section 4.2. Halton (1970, p. 7) describes seven variance-reducing
techniques (admittedly in the context of evaluating sums and integrals, which is not
always our context) and it is right that these, and others, should be considered in the
context of neutronics work. Some we have considered in depth and have actually
tried out; a few we have dismissed (perhaps incorrectly) outright; and in other cases
we see (perhaps wrongly) no great gain in pursuing them at the moment. Our cautious
attitude is due to the interplay of early failures, disappointments, and conversely, to
our ability to inject, on a number of important occasions, new life into the old crude
Monte Carlo doctrine by subjecting the collision events to careful analysis (Section 5.2).
Three examples suffice. Although the idea of antithetic variation (Hammersley and
Morton, 1956; Hammersley and Handscomb, 1964, p. 60) has been exploited by us
in a non-neutronic context (Parker and Holford, 1968) we have been disappointed
when applying it to neutron studies, even in an application which at first sight seemed
remarkably encouraging. This was in a critical size problem where the main process
sustaining criticality was fissioning on the skin of a fissile core reflected by water.
Since the expected number of neutrons produced at thermal fission is 2.0, it occurred
to us that random errors could be substantially reduced by directing one secondary
at random into the reflector, while its antithetic mate took the opposite route into the
core. This idea is basically similar to that described by Hammersley and Handscomb
(1964, p. 109). Understanding why the application described there was successful
while ours was not is an essential stage towards the (often rather slow) self-education
of a Monte Carlo neutronics practitioner.
Another idea analysed and rejected is a variant of the forced first collision technique
(Section 5.2) whereby no particles are allowed to leave the system. This is done by
constraining the next collision to lie within the system, correspondingly multiplying
the particle weight by its probability of remaining in the system. The reasons for
our lack of success in this matter are clearly known to us but we cannot stop to
pursue this matter here.
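As an illustration only (and not the paper's actual scheme), the variant just described can be sketched under the simplifying assumptions of a single total cross-section and a known distance to the system boundary along the flight direction; the names and parameters are hypothetical:

```python
import math
import random

def forced_collision_step(sigma, d_boundary, weight, rng):
    """Force the next collision to occur inside the system.

    The exponential free-path density exp(-sigma * s) is truncated to
    [0, d_boundary), and the particle weight is multiplied by
    p_stay = 1 - exp(-sigma * d_boundary), the probability that an
    unbiased flight would have collided before the boundary; this
    weighting keeps the estimator unbiased."""
    p_stay = 1.0 - math.exp(-sigma * d_boundary)
    u = rng.random()
    # Invert the truncated exponential distribution function.
    s = -math.log(1.0 - u * p_stay) / sigma
    return s, weight * p_stay
```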
Lastly we noted, and rejected out of hand, the idea of quasirandom sequences
(Halton, 1970) (that is, sequences of "random numbers" that are really non-random
but good for certain Monte Carlo applications) instead of the straight sequences
obtained from the congruence scheme (Section 4). This is because, in neutronics
work, the number of random numbers used to process a particle's history (and for
that matter a single collision) is not invariant but is itself a random variable, so we
do not know in advance what purpose each random number will be used for.
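For readers unfamiliar with the congruence scheme, a minimal sketch in modern Python follows; the multiplier and modulus shown are a well-known illustrative choice for a Lehmer-style multiplicative congruential generator, not necessarily the constants of Section 4:

```python
def lehmer_stream(seed, n, a=16807, m=2**31 - 1):
    """Multiplicative congruential generator x -> a * x (mod m).

    Returns n pseudo-random numbers in (0, 1).  The constants a and m
    are an illustrative, well-known choice, not those of the paper."""
    x = seed
    out = []
    for _ in range(n):
        x = (a * x) % m
        out.append(x / m)
    return out
```

Each particle history consumes a variable number of such values, which is precisely the point made above about not knowing in advance what each random number will be used for.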
Our experience has taught us to be critical of variance-reducing methodology
but not, we hope, hypercritical. What is true, however, is that we view the need to
reduce variance as just one aspect of Monte Carlo methodology-and it is not, in
our view, the chief one. It is a matter of some surprise to us that the theory of birth
and death processes (Bartlett, 1955, p. 70; Harris, 1963, p. 103) is not considered
in the literature as a possible aid towards an understanding of Monte Carlo neutronics
matters, though the obvious relationship between the subjects is implicit in Rosesou
(1965). Some aspects of the theory have on occasion been helpful to us.
Perhaps the greatest drawback in the field of neutronics (and in some other fields
as well for that matter) is that the results of calculations, whether done Monte Carlo
or deterministically, are practically never accompanied by a statement about the
likely effect of errors in the data. The Monte Carlo practitioner who, in his very
tenacity, quotes his (statistical) errors, misleads the casual reader about the real
accuracy of his results, and paradoxically it is more often the case that due allowance
is made, or perhaps guessed, for the effect of data errors when the calculation is a
deterministic one. That this difficult problem has received little attention is under-
standable; that it should be ignored (or recognized and then swept under the carpet)
is intolerable.
We may view our subject as a branch of nuclear physics; as a branch of
mathematics; as an application of birth and death processes; or as a subject in
its own right, but embedded in a workaday environment. Depending on our perspec-
tive we are less, or more, interested in the theory than in the practice. It is right
that from time to time we should pause and ask ourselves, like children, some simple-
minded but awkward questions.


REFERENCES

AHLBERG, J. H., NILSON, E. N. and WALSH, J. L. (1967). The Theory of Splines and their
Applications. New York: Academic Press.
BARTLETT, M. S. (1955). An Introduction to Stochastic Processes. Cambridge University Press.
BENDALL, D. E. and MCCRACKEN, A. K. (1968). McBend-A prototype code utilizing both
removal-diffusion and Monte Carlo methods. AERE Rep. R-5773. London: H.M.S.O.
BENZI, V., CUPINI, E. and DE MATHEIS, A. (1966). A Monte Carlo analysis of some enriched
U235 fast critical assemblies. J. Nucl. Energy, 20, 17-24.
BERGER, M. J. and DOGGETT, J. (1956). Reflection and transmission of gamma radiation by
barriers: semianalytic Monte Carlo calculation. J. Res. Nat. Bur. Stand., 56, 89-98.
BURROWS, G. L. and MACMILLAN, D. B. (1965). Confidence limits for Monte Carlo calculations.
Nucl. Sci. and Engng, 22, 384-385.
CARLSON, B. G. and BELL, G. I. (1958). Solution of the Transport equation by the Sn method.
In Proc. 2nd Intern. Conf. Peaceful Uses Atomic Energy, Vol. 16. Nuclear Data and Reactor
Theory, pp. 535-549. Geneva: United Nations.
CASHWELL, E. D. and EVERITT, C. J. (1959). A Practical Manual on the Monte Carlo Method for
Random Walk Problems. New York: Pergamon.
DAVIS, D. H. (1963). Critical size calculations for neutron systems by the Monte Carlo method.
In Methods in Computational Physics, Vol. 1, pp. 67-88. London: Academic Press.
DAVISON, B. (1957). Neutron Transport Theory. Oxford University Press.
ERIKSSON, B. (1965). On the use of importance sampling in particle transport problems. Rep.
AE-190, Aktiebolaget Atomenergi, Stockholm, Sweden.
*GOERTZEL, G. and KALOS, M. H. (1958). Monte Carlo methods in transport problems. In
Progress in Nuclear Energy, Series 1: Physics and Mathematics, Vol. 2, pp. 315-369. New
York: Pergamon.
GUBBINS, M. E. (1966). Reactor perturbation calculations by Monte Carlo methods. AEEW Rep.
M 581. London: H.M.S.O.
(1969). Chase B-A Monte Carlo code for calculating reactor criticality, fluxes, and
perturbation worths. AEEW Rep. R 627. London: H.M.S.O.
*HALTON, J. H. (1970). A retrospective and prospective survey of the Monte Carlo method.
SIAM Review, 12, 1-63.
*HAMMERSLEY, J. M. and HANDSCOMB, D. C. (1964). Monte Carlo Methods. London:
Methuen.
HAMMERSLEY, J. M. and MORTON, K. W. (1956). A new Monte Carlo technique; antithetic
variates. Proc. Camb. Phil. Soc., 52, 449-475.
HARRIS, T. E. (1963). The Theory of Branching Processes. Berlin: Springer-Verlag.
HEMMENT, P. C. E., PENDLEBURY, E. D., ADAMS, M. J., BRETT, B. A. and SAMS, D. (1966). The
multigroup neutron transport perturbation program DUNDEE. AWRE Rep. 0-40/66.
London: H.M.S.O.
HEMMINGS, P. J. (1967). The GEM code. AHSB (S) Rep. R 105. London: H.M.S.O.
KALOS, M. H. (1963a). On the estimation of flux at a point by Monte Carlo. Nucl. Sci. and
Engng, 16, 111-117.
(1963b). Importance sampling in Monte Carlo shielding calculations. Nucl. Sci. and Engng,
16, 227-234.
LEHMER, D. H. (1951). Mathematical methods in large-scale computing units. Proc. 2nd Symp.
on Large-scale Digital Calculating Machinery, 1949, pp. 141-146. Cambridge Mass.: Harvard
University Press.
LEIMDÖRFER, M. (1964). On the transformation of the transport equation for solving deep
penetration problems by the Monte Carlo method. FOA Rapport A 4361-411. Stockholm,
Sweden: Research Institute of National Defence.
LESHAN, E. J., BURR, J. R., TEMME, M., et al. (1958). Calculation of reactor history including
the details of isotopic concentration. ASAE Rep. 34. Mountain View, California: American-
Standard.
MARSHALL, A. W. (1966). Introductory note. In Symposium on Monte Carlo Methods, pp. 1-14
(University of Florida, 1954), H. A. Meyer (ed.). New York: John Wiley.
MATHER, D. S., FIELDHOUSE, P. and MOAT, A. (1964). Average number of prompt neutrons from
U235 fission induced by neutrons from thermal to 8 MeV. Phys. Rev., 133B, 1403-1420.
MILLER, S. M. and PARKER, K. (1965). List of data files available in the UKAEA Nuclear data
library as at 15 April, 1965. AWRE Rep. 0-55/65. London: H.M.S.O.
MORTON, K. W. (1956). Criticality calculations by Monte Carlo methods. AERE Rep.
T/R 1902. London: H.M.S.O.
- (1957). Scaling neutron tracks in Monte Carlo shielding calculations. J. Nucl. Energy, 5,
320-324.
PARKER, J. B. (1962). Monte Carlo methods for neutronics problems. In Numerical Solution of
Ordinary and Partial Differential Equations (L. Fox, ed.), pp. 432-441. Oxford: Pergamon.
(1965). The technology of simulating neutronics problems by the Monte Carlo method.
Bull. Inst. Math. and its Applns, 1, 35-43.
- (1966). DICE Mk. V-The preparation of nuclear data into a form suitable for Monte
Carlo calculations using an electronic computer. AWRE Rep. 0-27/66. London: H.M.S.O.
(1969). Practical uses of the Monte Carlo method in neutronics calculations. In Studies in
Appl. Math. III, pp. 47-55.
PARKER, J. B., FIELDHOUSE, P., HARRISON, L. M. and MATHER, D. S. (1968). Monte Carlo studies
of scintillator efficiencies and scatter corrections for (n, 2n) cross section measurements.
Nucl. Inst. and Meth., 60, 7-23.
PARKER, J. B. and HOLFORD, A. (1968). Optimum test statistics with particular reference to a
forensic science problem. Appl. Statist., 17, 237-251.
PARKER, J. B., TOWLE, J. H., SAMS, D., GILBOY, W. B., PURNELL, A. D. and STEVENS, H. J. (1964).
Multiple scatter corrections using the Monte Carlo program MAGGIE. Nucl. Inst. and Meth.,
30, 77-87.
PARKER, J. B., TOWLE, J. H., SAMS, D. and JONES, P. G. (1961). Multiple scatter corrections by
Monte Carlo. Nucl. Inst. and Meth., 14, 1-12.
PARKER, J. B. and WOODCOCK, E. R. (1961). Monte Carlo criticality calculations. In Progress in
Nuclear Energy, Series 4: Technology, Engineering and Safety, Vol. 4, pp. 435-457. Oxford:
Pergamon.
PARKER, K. (1970). Experience with cubic splines in the graduation of neutron cross-section
data. In Numerical Approximation to Functions and Data (J. G. Hayes, ed.), pp. 107-110.
London: Athlone Press.
POWELL, M. J. D. (1970). Curve fitting by splines in one variable. In Numerical Approximation
to Functions and Data (J. G. Hayes, ed.), pp. 65-83. London: Athlone Press.
PULL, I. C. (1962). Special techniques of the Monte Carlo method. In Numerical Solution of
Ordinary and Partial Differential Equations (L. Fox, ed.), pp. 442-457. Oxford: Pergamon.
ROSESOU, T. (1965). Stochastic processes in nuclear reactor theory, a bibliography, IFA-FR-47.
Bucharest, CP-35: Institute of Atomic Physics of Rumania.
SOBOL, I. M. (1966). Application of the Monte Carlo method to neutron physics. In The Monte
Carlo Method (Yu. A. Shreider, ed.), pp. 137-183. Oxford: Pergamon.
SPRENT, P. (1970). Some problems of statistical consultancy. J. R. Statist. Soc. A, 133, 139-164.
TOCHER, K. D. (1963). The Art of Simulation. London: English Universities Press.

The more fundamental references are indicated by an asterisk.

DISCUSSION ON MR PARKER'S PAPER

Dr J. M. HAMMERSLEY (Oxford University): Mr Parker hits the nail on the head when
he says there is "a wide and growing gulf between the people who describe techniques for
doing problems and those who actually carry them out". And he reminds us too of
Dr Sprent's recent paper, dealing with the same gulf in statistics. It is instructive to read
the discussion to Dr Sprent's paper and to collate it with Mr Parker's contribution. The
dominant theme in both is a concern for understanding the data of a problem. The
good statistical consultant asks many simple, and apparently naive, questions about
the sources and the reliability and the limitations of the data; and in this way he gets
a feel for what is going on. He has a background knowledge of statistical theory, some
of it advanced; but more often than not, when he has appreciated the limitations of
the data, he will feel that quite simple statistical techniques are all that the data will
bear-perhaps even no more than a few rough diagrams and the calculation of one or
two means and standard deviations. This is equally true in Monte Carlo work, as
Mr Parker shows so well. For example his Table 1 is a valuable corrective against too
much technical sophistication, too much philosophical worrying about the validity of
pseudo-random numbers, and so on when the basic physical data are the prime cause
of imprecision. A theorist does not have to apologize for trying to make a theory work,
but he needs to be realistic enough to recognize that it may be largely a matter of luck
if theory is applicable. The basic trouble is that we are very short of applicable mathe-
matical expertise: we have to make drastic simplifications to get the mathematics off
the ground at all-typically we assume that the problem is one-dimensional, although
most Monte Carlo problems are multidimensional and as such behave quite differently.
In the preface to our book on Monte Carlo methods, Dr Handscomb and I said that
we were dealing with "ideas for variance reduction which have never been seriously
applied in practice. This is inevitable, and typical of a subject that has remained in its
infancy for twenty years or more." It is still in its infancy. On the other hand, we have
to countenance Monte Carlo methods, because so often we have no better alternative.
The other theme to appear both in Dr Sprent's discussion and Mr Parker's paper is the
suggestion that computers may be able to help us out. I think, however, there is a danger
that we may expect too much of them. It is all very well for an expert like Mr Parker
to ask for "a fully-automatic, fool-proof, method of processing and organizing the input
data". What is fool-proof to him is not, unfortunately, fool-proof to fools and amateurs.
I am sure it is always wise, wherever one can, to begin the planning of a Monte Carlo
calculation by playing with the data and with a few random numbers on a desk calculator
before committing them to mass-production on a large computer. It is indeed a very good
aid to getting a feel of the data, and to assessing such matters as the validity of replacing
a distribution by a mean like his ν = 2.0. This is not to pretend that such preliminary
tinkering resolves all difficulties, nor to deny that it can be confoundedly
awkward and tentative in a large neutronics problem with elaborate geometry.
Mr Parker has given us a very stimulating and thoughtful paper; and I have much
pleasure in proposing the vote of thanks.

Mr B. E. COOPER (ICL Dataskil Ltd): I am very grateful indeed to Mr Parker for provid-
ing me with this opportunity to renew my acquaintance with Monte Carlo methods. Many
years ago, as a rather green statistics graduate, from a well-known place in Gower Street,
I joined the Monte Carlo Section at the Atomic Energy Research Establishment at
Harwell. Incidentally, somebody on the Committee of this Section obviously has a good
memory. Mr Parker's paper has brought back to me memories of my work in this Section
and I cannot resist making one or two brief reminiscences. At that time we used a British
Tabulating Machine Company-now part of ICL-555 punch-card calculator and
various other punch-card equipment such as sorters, collators and tabulators. In our
Monte Carlo problems one card often represented one nuclear particle and one generation
of nuclear events consisted of a walk, literally, round the machine room. The calculator
calculated what happened at each event, the sorter sorted out the particles judged to have
been absorbed, the collator inserted extra cards for new particles which were created at
the events. The weight of cards moved about the machine room for one problem could be
measured in tons.
I remember one problem, again as a rather green statistics graduate, in which we
were interested in the passage of neutrons from a source. I was greatly intrigued by the
use of the splitting technique where at a boundary a neutron could suddenly become
two neutrons with, as Mr Parker described, half the weight it had before it crossed the
boundary if the crossing was in one direction and would be forced to play Russian roulette
with probability of 0.5 if the crossing was in the other direction. This did, in fact, work
very well; the aim was to make sure that the spatial distribution which we were seeking
was equally accurate over the whole range.
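The rule just described can be sketched, in modern Python rather than punched cards; the splitting factor of two and the survival probability of 0.5 are the case described here, though other ratios are possible:

```python
import random

def cross_boundary(weight, toward_region_of_interest, rng):
    """Particle splitting and Russian roulette at an importance boundary.

    Crossing toward the region of interest: split into two particles,
    each carrying half the weight.  Crossing away: play Russian
    roulette, killing the particle with probability 0.5 and doubling
    the weight of a survivor.  Either way the expected total weight
    is conserved, so the estimate remains unbiased."""
    if toward_region_of_interest:
        return [weight / 2.0, weight / 2.0]   # split into two halves
    if rng.random() < 0.5:
        return [weight * 2.0]                 # survives the roulette
    return []                                 # killed
```

The particle population is thereby concentrated where it contributes most, without biasing the answer.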
I am greatly impressed by the common-sense approach to Monte Carlo methods in
the paper. This obviously represents a wealth of experience in their application. The
paper is full of "quotes of the week" and I would draw your attention to some of them.
The first is "greatest attention should be paid to those particles that are most likely to
contribute to the result". Although this is obviously sound common sense it seems to
have been missed in other simulation areas. A second is "it is cheaper to have a single,
perhaps inefficient, general-purpose code rather than a medley of highly sophisticated
special-purpose programs". This reminds us that although a sophisticated technique
may allow reduction in the sample size its use will often increase rather than decrease the
total amount of work in the simulation. A third quote is "statistical errors in the Monte
Carlo, however crude, are frequently (but not always) of minor significance when viewed
against a broader backcloth". This quote, also emphasizing that Monte Carlo is often
just a stage in a larger calculation, points out that the sample size should be chosen so that
the error in the Monte Carlo is consistent with that in the original data and is sufficient
for that stage in the calculation. We are reminded not to seek a highly accurate simulation
when the data or the other stages do not warrant it.
There is a reference to the use of antithetic variates which I noticed the previous
speaker did not comment on. I can remember trying some of these at Harwell and
finding, as Mr Parker indicates in his paper, that the method worked impressively on
integrals, where there were dramatic reductions in variance. However, I cannot remember
them being tried on Monte Carlo problems at Harwell. Dr Hammersley was also associated
with that group and will correct me if I am wrong.

Dr HAMMERSLEY: Yes, they were tried, but did not give a very good result. A factor of
10 was about the best obtained by us.

Mr COOPER: I see I am corrected.

I have listed just three quotes from the paper. There are many more. The main feature
of the paper is the practical approach it brings to Monte Carlo and we could all benefit
from close attention to the many practical points discussed.
Dr Hammersley also drew attention to the first sentence of the discussion and I will
read this: "In this day, there appears a wide and growing gulf between the people who
describe techniques for doing problems and those who actually carry them out". As Dr
Hammersley has said, this is true of much of statistics. I would go a little further than
this and say that it seems to me that there is also a gulf between Monte Carlo and the rest
of simulation. Mr Parker's paper describes very fully the many techniques he uses and I
think the paper also shows the great insight which Mr Parker gains into the practicalities
of the problems which he has studied. Those of us who are concerned with simulation
in other areas should scan this paper in some detail and determine whether the methods
described can be usefully employed in their environment. I think Mr Parker's paper may
go a long way to bridging this gap.
I am delighted to second the vote of thanks.

The vote of thanks was carried with applause.

Professor J. W. TUKEY (Princeton University): I think it would be very wrong to
touch on the minor points I am going to mention before recognizing the great soundness
of the speaker's general philosophy and approach to this matter.
I shall now turn to the minor points if I may. I have had no direct experience with
spline fitting, but I cannot help wondering: first, what mixture of good and evil is done
by forcing high order contact to the adjacent cubics, and, second, why it pays to fit cubic
splines when the results are then used as step functions in the Monte Carlo calculations.
There must also clearly be a balance between vast store size and problems of complexity
beyond which optimum interval linear approximation, which is something people used
to know about 20 years ago, would be preferred to the use of step functions.
The use of the "nearest" energy to the average neutrons clearly helps us to keep from
studying fluctuation phenomena. Are these not going to be important in some non-linear
problems? Except for this it seems to me that this use of the average is the right thing
to do, even though, as one of my friends asked me during the lecture, "Was the speaker
not committing a sin by doing it?" I had to tell him that he was not.
As to references and terminology, we should note that statisticians also turn to Box
and Muller (1958) for generating sines and cosines and that some call crude Monte Carlo
just "experimental sampling", and sophisticated Monte Carlo, just "Monte Carlo".
The argument about appropriate accuracy for a slab problem seems a little unstatistical.
Surely-and I am talking about the details of the argument and not the overall strength-
the averaging of many different uncertainties of about the same size does not increase the
resulting uncertainty. It seems to me that if there are as many nuclei as mentioned in the
paper, and there are uncertainties about each, then the statistician should be happier
than if there were only one nucleus about which he was uncertain.
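Professor Tukey's point is readily checked numerically; the following sketch (with independent Gaussian errors assumed, purely for illustration) shows that averaging k uncertainties of common size sigma leaves a standard deviation of sigma/sqrt(k), not a larger one:

```python
import math
import random

def sd_of_mean(sigma, k, n_trials=20000, seed=2):
    """Empirical standard deviation of the mean of k independent
    Gaussian errors of common size sigma; theory gives sigma/sqrt(k),
    i.e. averaging many comparable uncertainties reduces, rather than
    inflates, the resulting uncertainty."""
    rng = random.Random(seed)
    means = [sum(rng.gauss(0.0, sigma) for _ in range(k)) / k
             for _ in range(n_trials)]
    mu = sum(means) / n_trials
    return math.sqrt(sum((m - mu) ** 2 for m in means) / (n_trials - 1))
```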
In the discussion of importance regions and external source problems, the flow is
usually in phase space as well as in geometry. Thus, it is not clear why importance regions
in phase space do not help. Should we not try for a constant traffic density in phase space
as well as in geometric space?
The multiple-scattering problem raises interesting questions. Presumably the case of
single scattering without allowance for second collisions can be done deterministically.
If so, should we not confine our Monte Carlo to the additions due to second or later
collisions and to the corresponding subtractions of single-collision neutrons?
I might prefer, with the speaker, to get my pasteurized numbers from a congruential
generator, though I would want for extra security to apply a permutation 500 or 1,000
long to them before use. I do not see immediately that having varying numbers of
random numbers used for comparable sequence of histories is in itself desirable, or is an
argument against other kinds of generators.
To the speaker's emphasis on the importance of all kinds of effects, it seems to me
that we can only shout "Hear, hear!" and I do so. This need not, however, preclude
measuring limited but meaningful internal aspects of quality by the product of internal
variance and computer time. Looking at external errors, as we must, will inevitably shorten
our desired computer runs, but only programming and debugging difficulties need inter-
fere with shortening them further by improving internal quality.
Mr Parker called attention in his introduction to the possibility of doing fitting with
the L_p norm. I call his attention to an unfortunately still unpublished Princeton thesis
by W. Morven Gentleman, which developed techniques of minimizing L_p norms for p
between 1 and 2 with successful results; values of 1 and 1.5 are sometimes quite successful.
One thing that I missed both in the discussion of the data reduction and in Table 1,
I think, was the question not only of the variances of the experimental determination of
physical constants, but of their covariances. It seems to me that, in the use of these
nuclear cross-section graphs, covariance questions must be relevant. It may be that when
you look for particular behaviour in the results as functions of a parameter, this may
have quite a lot to do with how much precision should be put into the Monte Carlo
pattern problem.
I would, I think, point out that in the statistical uses of Monte Carlo as opposed to
the neutronic uses, it is often, perhaps usually, necessary to be both a gentleman and a
player if you are going to get through. It seems to me that Mr Parker is to be con-
gratulated on being fortunate enough to work where a crude Monte Carlo works as often
as it does.
Finally, I would like to enquire why the technique of "what the particles might have
done" is crude Monte Carlo. I would regard that as sophisticated Monte Carlo of a
very interesting type.

The CHAIRMAN: The part of the paper which has lessons for all statisticians, not only
those concerned with simulation, is that concerned with the nuclear data, their reduction
and subsequent use. The plea which the author makes towards the end of the paper for
fool-proof methods of reducing such data automatically contrasts rather dramatically
with the very subtle modes of intervention which he permits himself during the more
technically Monte Carlo parts of the activity. I take it that this is because there is a very
great deal of nuclear data and the thought of intervening in its analysis conjures up a
picture of a lifetime's work-in fact, automation may be forced upon one by the sheer
volume of work to be got through. None the less, the thought of confronting an automatic
data-reduction module with the results in Fig. 2 in the hope that it will cope with the bottom
left-hand corner seems to me optimistic. Are we not forced to the conclusion that some
points do not belong to the population that we hoped they did, and that we may learn
something interesting by going back and asking ourselves "What on earth has gone
wrong ?"

PROFESSOR TUKEY: I would say that a moving median would do a lot for that bottom
left-hand corner.

Professor D. R. Cox (Imperial College): Mr Parker's paper is of much interest both
for its thorough survey of Monte Carlo problems in a particular field and also for the
broader implications of what it says. One of the central points, the importance of con-
sidering errors in the input parameters, is surely of wide relevance, in particular in appli-
cations to operational research. It would be valuable to have Mr Parker's comments on
the possible application of experimental design techniques in this connection. If one
wishes to estimate η = χ(a1, ..., aq), where the a's are parameters of the input, the use
of high-order fractional factorials, plus centre points, seems appropriate. From the
linearized form of η, an approximation to the standard error of η can be obtained; also
the effect on η of systematic errors in the a's can be assessed.
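A sketch of the linearized error calculation Professor Cox refers to (with hypothetical names, central finite differences for the sensitivities, and independent input errors assumed; covariances between the a's would add cross terms omitted here):

```python
import math

def linearized_std_error(chi, a, se, h=1e-5):
    """Approximate the standard error of eta = chi(a1, ..., aq) from
    standard errors se[i] of independent input parameters a[i].

    Uses central finite differences for the sensitivities
    d(chi)/d(a_i) and combines the contributions in quadrature,
    i.e. the first-order (linearized) propagation of errors."""
    var = 0.0
    for i in range(len(a)):
        a_plus, a_minus = list(a), list(a)
        a_plus[i] += h
        a_minus[i] -= h
        grad = (chi(a_plus) - chi(a_minus)) / (2.0 * h)
        var += (grad * se[i]) ** 2
    return math.sqrt(var)
```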

The following contribution was received in writing, after the meeting:

Mr E. R. WOODCOCK (U.K.A.E.A.): This paper gives a good example of the type of
calculation for which Monte Carlo methods are eminently suitable. Neutronic calculations,
whether to assess the performance of a nuclear reactor, the safety of a nuclear fuel-
processing plant, or the calibration of experimental instrumentation, generally involve
considerable geometric complexity in addition to the complexity of the laws governing
the interaction of neutrons with matter. On the one hand deterministic numerical methods
can give an answer of high precision but it is impractical to incorporate the nuclear data
and geometrical configuration in full detail. The necessary simplifications mean that in
many cases these methods are of low accuracy. On the other hand, Monte Carlo methods
have been developed (see, for example, Hemmings, 1967; Longworth, 1968; Woodcock
et al., 1965) to the stage in which the nuclear data and geometrical detail can readily
be included as accurately as it can be known. So these calculations are of high accuracy
but the inevitable statistical uncertainty means they are of low precision. The choice
thus lies between the high accuracy with low precision of the Monte Carlo method and the
low accuracy with high precision of deterministic methods. The former will often be
preferred.
Monte Carlo methods can be used in many other fields but they are only attractive,
because of their low precision, in problems for which deterministic methods require
approximations making them of low accuracy. Their application is thus limited but
where they are appropriate they are invaluable.

The author replied in writing as follows:

I am most grateful to the proposer, seconder and discussants for their thoughtful
contributions. They would, I am sure, have made a much better job of presenting some
aspects of my paper than I did myself.
Mr Cooper and Professor Tukey draw attention to the method of antithetic variates.
There is an amusing story about the missing reference, and since it is at the expense of
the Society perhaps I may be excused for telling it. The reference is titled "Optimum
Test Statistics with particular reference to a Forensic Science Problem". It is of course
the author's job to proof-read the text, but not the title sheet of the offprints. The off-
prints turned up impeccably produced, save that the title sheet read "Opium Test Statistics",
the setter having presumably been influenced by the word "Forensic" in the title. I am
reminded of the very eminent nineteenth-century sage who laid down the axiom that
statistics were the opium of the masses.
It is very difficult to do justice to Professor Tukey's many valuable contributions.
If it were true that the results of nuclear data evaluations were needed solely for Monte
Carlo neutronics calculations it would, as Professor Tukey says, be natural to graduate the
experimental data in histogram form. But the data are also required for other types of
neutronics studies, and the presentation of the experimental results is partly dictated by
these requirements. In this connection the President's remark of the dangers of a purely
automatic evaluation of data that are frequently of heterogeneous structure is particularly
valuable. The computer should be used up to a point, but should not be allowed to by-pass
expert assessments about particular sets of experimental results.
Dr Hammersley's remarks on the use of computers are apposite in this connection,
but also, of course, in a wider context. In the early days particularly, a lot of time was
spent trying to do finely grained problems on slow machines, an attitude which Dr
Hammersley himself fought as early as 1954 (Hammersley et al., 1954). With faster
machines, better (though still imperfect) expertise, and greater confidence, problems
of doing really large-scale calculations, accommodating the data in very refined form,
are much less severe than they were 15 years ago, when too often machines mastered us
rather than the other way around. Careful pre-production planning I find as important
as it ever has been. The obvious application here is the careful choice of sampling
strategies (e.g. choice of importance regions), but additionally it is of value to try to get a
feel, before doing the actual calculation, for how the particles are likely to behave and why.
Professor Tukey suggests optimum interval linear approximations to nuclear data.
This indeed is partially used in at least one other establishment concerned with Monte
Carlo calculations and the slight additional complexity in carrying out the sampling is
not a serious matter. It is indeed a preferable representation but in our special problems
our cruder representation is quite satisfactory.
I regret I had missed referring to Box and Muller's work on random number transformations.
In other connections I have had occasion to employ their excellent technique
for sampling from the Gaussian distribution (Box and Muller, 1958). It is tempting to
take the average of five or six members of the already available rectangular distribution,
murmur "Central Limit Theorem" and hope for the best. One should at least be aware
of when it is legitimate to get by like this and when it is not!
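For the modern reader, the contrast drawn above can be sketched in code. The following is an illustrative sketch, not code from the paper: the Box and Muller (1958) transformation produces exact Gaussian deviates from two uniform deviates, whereas the "Central Limit Theorem" shortcut of averaging a handful of rectangular deviates only approximates one, with tails truncated at a finite bound.

```python
import math
import random

def box_muller(u1, u2):
    """Transform two independent Uniform(0,1) variates into two
    independent standard normal variates (Box & Muller, 1958)."""
    r = math.sqrt(-2.0 * math.log(u1))
    theta = 2.0 * math.pi * u2
    return r * math.cos(theta), r * math.sin(theta)

def clt_approximation(n=6):
    """The 'murmur Central Limit Theorem' shortcut: sum n uniforms and
    standardize.  Uniform(0,1) has mean 1/2 and variance 1/12, so the
    sum has mean n/2 and variance n/12.  The result is only an
    approximate Gaussian: it can never exceed sqrt(3n) in magnitude."""
    s = sum(random.random() for _ in range(n))
    return (s - n / 2.0) / math.sqrt(n / 12.0)
```

With n = 6 the shortcut is bounded at plus or minus sqrt(18), a little over 4 standard deviations, which is precisely the sort of tail behaviour one must be aware of before "getting by like this".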
I agree in the main with Professor Tukey's remark that since all the nuclear data are
subject to uncertainty, my example about the slab penetration problem was overdrawn.
However, it is very often the case that substantial uncertainties in the data for one nuclide
are less influential than minor errors in those for another. This is particularly true in
problems involving a mixture of fissile and non-fissile material. Both Professor Tukey
and Professor Cox have valuable suggestions as to how to assess the sensitivity of the
results to nuclear data uncertainties, though I am worried about the magnitude of
Professor Cox's dimension p. In structuring the errors it is of course the whole of the
error matrix, rather than the diagonal, that is of concern, as Professor Tukey reminds
us. Thus for a particular cross-section, p may not need to be sizeable, but there are so
many different cross-sections! I think the best thing to do is to start by using first-order
perturbation theory to identify those phenomena likely to affect the answer most and
then simply ignore errors in the "unimportant" data; though even then one would have
a formidable task!
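The screening strategy just described, ranking the data by first-order influence and then neglecting errors in the "unimportant" items, might be caricatured in modern terms as follows. This is purely an illustrative sketch: a crude finite-difference derivative stands in for proper first-order perturbation theory, and the function and variable names are mine, not those of any production neutronics code.

```python
def first_order_sensitivities(response, data, rel_step=0.01):
    """Rank data items by a first-order (finite-difference) estimate of
    the change in the response per unit relative change in each item.
    'response' is any function of a dict of data values; each item is
    perturbed by a small relative step while the others are held fixed."""
    base = response(data)
    sens = {}
    for key, value in data.items():
        perturbed = dict(data)
        perturbed[key] = value * (1.0 + rel_step)
        # (delta response) / (relative perturbation) ~ x * dR/dx
        sens[key] = (response(perturbed) - base) / rel_step
    # Most influential item first; the tail of this list is what one
    # might then "simply ignore".
    return sorted(sens.items(), key=lambda kv: abs(kv[1]), reverse=True)
```

Even this toy version makes the difficulty plain: with many cross-sections, each tabulated over many energy intervals, the list of candidate perturbations is formidable.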

I am the first to admit that the importance region strategy used by us is very crude,
paying as it does incomplete attention to geometry and none at all to other aspects of
phase space.
On random number generation, I have not found a case where, in neutronic appli-
cations, the use of a congruential generator led one astray (we have carried out many
comparative calculations on problems solved by deterministic methods) but Professor
Tukey's warning is a salutary one. Since different types of collision event call for
different numbers of appeals to our random number generator, it is difficult (though not
impossible) to ensure that different histories each having the same number of collisions
will use the same number of random numbers.
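A congruential generator of the kind mentioned above has the recurrence x(n+1) = (a*x(n) + c) mod m. The following minimal sketch uses the well-known Numerical Recipes constants purely for illustration; it is not the generator our programs used.

```python
def lcg(seed, a=1664525, c=1013904223, m=2 ** 32):
    """A linear congruential generator: x_{n+1} = (a*x_n + c) mod m,
    yielded as deviates scaled to [0, 1).  The constants here are the
    Numerical Recipes choice, shown only as an example."""
    x = seed
    while True:
        x = (a * x + c) % m
        yield x / m
```

Because the whole stream is determined by the seed, two histories stay synchronized only if they make exactly the same number of appeals to the generator, which is just the difficulty noted above when different collision events consume different numbers of deviates.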
In the multiple-scatter program, we did indeed evaluate separately the effect of
doubly and multiply scattered tracks. Professor Tukey's suggestion is a little difficult
to take into account for even the singly scattered scores are dependent upon the intricacies
of the source angular distribution, which is incorrectly shown as monodirectional on
Fig. 4.
Mr Woodcock's contribution draws my attention to the fact that in my paper I said
next to nothing about the considerable problems involved in constructing modules enabling
really complicated geometries to be considered. That in our particular studies we can
usually manage with relatively simple geometries is no excuse for this scant coverage,
particularly since deterministic methods are non-competitive when the geometry is complex.
I can only in part rectify this lack of emphasis by drawing attention to Mr Woodcock's
references, which are required reading for the Monte Carlo practitioner when he moves
into really complicated geometry.

REFERENCES IN THE DISCUSSION


Box, G. E. P. and MULLER, M. E. (1958). A note on the generation of random normal deviates.
Ann. Math. Statist., 29, 610-611.
HAMMERSLEY, J. M., MORTON, K. W. and TOCHER, K. D. (1954). Symposium on Monte Carlo
methods. J. R. Statist. Soc. B, 16, 23-75.
HEMMINGS, P. J. (1967). The GEM code. AHSB(S) Report R. 105. London: H.M.S.O.
LONGWORTH, T. C. (1968). The GEM 4 code. AHSB(S) Report R. 146. London: H.M.S.O.
WOODCOCK, E. R., MURPHY, T., HEMMINGS, P. J. and LONGWORTH, T. C. (1965). Techniques
used in the GEM code for Monte Carlo neutronics calculations in reactors and other systems
of complex geometry. In Proceedings of the Conference on Applications of Computer Methods
to Reactor Problems. AEC Research and Development Report ANL 7050.

As a result of the ballot held during the meeting, the following were elected Fellows
of the Society:

DAVIDSON, Richard Leslie
DIXON, Robert Arthur, B.Sc.
COLES, James Michael
HARRIS, Reginald Guy
McLEAN, Fritz Charles, B.Sc.
ROBINSON, Peter
WEBB, Gilbert Ian, B.Sc.
WILSON, Susan Ruth, B.Sc.
WORSDALE, Graham John, B.A.
