Parker, J. B. (1972). Monte Carlo for problems in neutronics. Journal of the Royal Statistical Society, Series A (General), 135(1), 16-43. Published by Wiley for the Royal Statistical Society. Stable URL: http://www.jstor.org/stable/2345038
J. R. Statist. Soc. A (1972), 135, Part 1, p. 16

MONTE CARLO FOR PROBLEMS IN NEUTRONICS
By J. B. PARKER
U.K. Atomic Energy Authority
SUMMARY
This paper reviews experiences based on the use of the Monte Carlo method
for solving large-scale problems in neutronics.
1. INTRODUCTION
NEUTRONICS problems involve the transport of particles (neutrons) from one collision
with an atomic nucleus to another. The collision processes take place in accordance
with quite definite, though probabilistic, physical laws, whose structure is obtained
partly by theory but mainly by experimentation. The transport of neutrons is
expressible in terms of the Boltzmann transport equation, an integro-differential
equation expressing neutron conservation (Davison, 1957, p. 15). Because of the
complexity of the underlying physical laws a mathematical solution of the Boltzmann
equation is generally difficult. However, solutions of certain neutronics problems by
deterministic numerical methods, using electronic computers, have been developed
and these are generally quick, reliable and sufficiently accurate, provided that the
geometry of the assembly under consideration is simple. A particularly attractive
deterministic approach is described by Carlson and Bell (1958).
Neutrons travel with different speeds but have the same mass, so that the terms
"velocity" and (kinetic) "energy" can be, and are, used virtually synonymously. In a
practical problem, the range of energies considered may well extend over four, five
or more decades. Most (as far as I know all) deterministic methods are based on the
energy group concept; that is, neutrons are regarded as belonging to one of a finite
number of generally abutting groups inside which definite, suitably averaged, collision
laws apply. Because the collision laws are finely grained, and may vary rapidly not
only in detail but also in structure with energy, it is necessary either to use a large
number of energy groups, in which case computation time increases, or to establish
a very reliable procedure for averaging the finely grouped data to obtain (macroscopic)
group constants. There are classes of problems arising in the neutronics field where
this approach is rather dubious. It is in these general areas (complex geometries, or
a requirement for the utilization of nuclear data fine grained in energy) that an
alternative to deterministic methods is open to consideration.
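As a concrete illustration of the group-averaging step (a minimal sketch; the fine energy grid, the weighting flux and the group boundaries are our illustrative assumptions, not a prescription from the paper):

def group_constant(energies, sigmas, fluxes, e_lo, e_hi):
    """Flux-weighted average cross-section over one group [e_lo, e_hi):
    sum(sigma * phi) / sum(phi) taken over the fine energy grid."""
    pts = [(s, f) for e, s, f in zip(energies, sigmas, fluxes)
           if e_lo <= e < e_hi]
    return sum(s * f for s, f in pts) / sum(f for _, f in pts)

The reliability of the whole group approach turns on how well the assumed weighting flux inside each group represents the true one; this is the "very reliable procedure for averaging" referred to above.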
The basic problem is to study the properties of neutrons which can be regarded
as moving from collision point to collision point inside a connected system of generally
known geometrical shape consisting of a series of different physical materials. For a
[...]
[Fig. 1. Flow diagram showing interrelationship between main program and nuclear data.]
[...]
uncertainties in the basic data may be helpful in a subsequent decision about the
level of accuracy he should seek in his Monte Carlo simulation. He will want to
forecast the likely machine running time and will want to know how far, if at all, he
should employ special simulation techniques. We therefore review a typical evaluation
problem concerning a neutron cross-section σ (which is one of the constituents which
determine the probability of the outcome of a particular event at a collision) for a
particular reaction for a particular nuclide. Each experimenter usually measures
cross-sections over a more, or less, wide range of neutron incident energy E, and
quotes his result, together with its experimental error. This experimental error in
most cases measures the reproducibility of the experiments. Errors due to the
particular technique being used, the standards assumed in the experiment and in some
cases the calibration of the apparatus, are generally not included, through no fault
of the experimenter, in his quoted error estimate. These are errors that are likely to
affect all his determinations in a similar manner. It is not unusual to find that different
experimenters' determinations of a cross-section are apparently inconsistent in the
sense that different estimates of the (σ, E) plot are widely spaced by an amount that
is large when viewed against the quoted experimental errors of either. This state of
affairs of course merely confirms the diagnosis that the possibly large component of
error due to some, or all, of the causes we have instanced above had not been, and
indeed could not have been, included in the quoted error. Whether or not, and if so
how, statistical methods should be introduced into the evaluation procedure is an
arguable proposition. The complexity of the form of the (σ, E) plot, or plots, argues
convincingly, in our judgment, in favour of graduation by cubic splines (Ahlberg et al.,
1967; Powell, 1970) rather than by high order polynomial; the question of fitting
these splines by least squares is itself open, some norm other than the L2 norm possibly
being more appropriate. Fig. 2 (Parker, 1970) illustrates the complexity of the
problem, the evaluation in this case having been mechanically performed using a
program which was a forerunner of that described by Powell (1970). That the output
of any evaluation should include a statement about the errors (a composite of the
several experimental errors together with a contribution due to "apparent
inconsistencies"), including a specification of the likely degree of correlation
between different cross-sections that are adjacent in energy is perhaps a counsel of
perfection; what it is necessary to stress, however, is that the climate in which the
Monte Carlo worker dwells must be conditioned by at least a feel for the errors in
these pre-processing evaluations.
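To fix ideas, a minimal sketch of graduation by least-squares cubic splines using a modern library; the data, weights and knot placement are invented for illustration, and this is not the program of Powell (1970):

import numpy as np
from scipy.interpolate import LSQUnivariateSpline

rng = np.random.default_rng(0)
E = np.sort(rng.uniform(0.1, 10.0, 200))            # incident energies (MeV)
sigma = 3.0 / E + 0.05 * rng.standard_normal(200)   # noisy cross-sections (barns)
weights = np.full(E.size, 1.0 / 0.05**2)            # roughly 1/variance per point
knots = np.linspace(0.5, 9.5, 8)                    # interior knots (illustrative)
fit = LSQUnivariateSpline(E, sigma, knots, w=weights, k=3)
graduated = fit(E)                                  # smoothed (sigma, E) curve

The open question about the norm mentioned above would enter here: an L2 fit is what this routine provides, and any other choice of norm would require a different objective.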
Given evaluated data for each nuclide, the entire set of data appropriate to the problem under study has to be converted into a format suitable for rapid processing
by a computer. Since an appeal to nuclear data occurs every time a collision takes
place it is desirable to include the entire set of data in the fast (core) store of the
machine, and with machines of order 100,000 words storage capacity this is quite
practicable. First, there is the need to decide how to store the data; second, there is
the problem of where to store it most economically. The first is a problem of simula-
tion, the second a problem of organization.
It would be possible to present a curve, for instance the evaluated (σ, E) plot,
direct to a computer, using computer graphics, but in practice it is desirable to work
straight from the nuclear data files. The format of these files, whose use is intended
for all types of neutronics calculation, is not immediately suitable for Monte Carlo
purposes. The basic concept in the layout of the UKAEA Nuclear Data Library
(Miller and Parker, 1965) is that all cross-sections are represented by point pairs (σ, E)
[Fig. 2. An evaluated (σ, E) plot (Parker, 1970).]
[Frequency distributions are represented in] a similar manner, the first element being the ordinate of the frequency distribution and the second the appropriate value of the parameter of interest, for example, secondary energy. Discrete probability distributions are grafted into this framework as and where necessary; that for the number of neutrons emerging from a fission is an interesting but not representative example. The true probability that i neutrons are produced is π(i|E), where E is the initial energy and Σᵢ π(i|E) = 1. However, the Nuclear Data Library only lists ν̄(E) = Σᵢ i π(i|E). If π(i|E), or rather estimates of it, were provided, Monte Carlo practitioners could (but would not) sample from
[...]
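Given only the listed mean ν̄(E), one standard device (not necessarily the one used at Aldermaston) is to sample a multiplicity whose expectation reproduces the mean:

import math, random

def sample_multiplicity(nu_bar, rng=random):
    """Integer number of fission secondaries with expectation nu_bar,
    given only the mean: floor(nu_bar), plus one further neutron with
    the fractional part of nu_bar as its probability."""
    base = math.floor(nu_bar)
    return base + (1 if rng.random() < nu_bar - base else 0)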
The first stage of our nuclear data processing plant is now complete, processed
data having been made available in permanent form (magnetic tape). The next stage
is problem dependent. With this tape as input, the fast store of the machine is fed
with those ingredients of the tape specific to the problem under consideration. Some-
times, as is the case where methods of simulation other than the direct ones we have
described are used in the tracking, it is more convenient to use, not the cross-sections
themselves, but various functions of them. The data also require to be pruned of
redundancies and to be ordered in a hierarchical manner so that the many-stage
sampling procedures can be effectively carried out. A full description of the entire
system of Monte Carlo data handling developed at Aldermaston is given by Parker
(1966). In Fig. 1 those parts of a Monte Carlo code in which this system is called
into play are distinguished by crosses.
† There may also be problems in which the Boltzmann transport equation is non-linear, as
when neutron-neutron interactions are the object of study. Such problems are not discussed here.
[...]
(0, 1); the literature on this one subject alone is considerable, being well summarized
(Halton, 1970) already. An appropriate congruence scheme, based on the original
idea of Lehmer (1951), is adequate for practical needs.
Also taken for granted is the use made of these random numbers in the sampling
at collision points and during the tracking; some of these facets have been mentioned
in Section 2. Techniques for generating, for example, the sine and cosine of an angle
that is uniformly distributed in (0, 2π) are so well known (Hammersley and
Handscomb, 1964, p. 37; Cashwell and Everett, 1959; Tocher, 1963, p. 37) as to
deserve no special mention.
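For concreteness, a sketch of both ingredients; the multiplier and modulus are the familiar Park-Miller choices, given for illustration rather than as the constants actually used, and the angle routine is the standard rejection method that avoids trigonometric calls:

class Lehmer:
    """Multiplicative congruential generator: x <- a * x (mod m)."""
    def __init__(self, seed=1):                 # seed in 1..m-1
        self.m, self.a = 2**31 - 1, 16807       # illustrative constants
        self.x = seed

    def uniform(self):
        self.x = (self.a * self.x) % self.m
        return self.x / self.m                  # uniform on (0, 1)

def random_sin_cos(rng):
    """Cosine and sine of an angle uniformly distributed on (0, 2*pi),
    by rejection sampling in the unit disc (no trigonometric calls)."""
    while True:
        u = 2.0 * rng.uniform() - 1.0
        v = 2.0 * rng.uniform() - 1.0
        s = u * u + v * v
        if 0.0 < s <= 1.0:
            # If theta is the (uniform) angle of (u, v), this returns
            # (cos 2*theta, sin 2*theta); 2*theta is again uniform.
            return (u * u - v * v) / s, 2.0 * u * v / s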
At this stage, then, we have a mental picture of an amalgam of connected series
of straight line tracks, called "particle histories", with or without branching at some
of the nodes (nuclei) that can be rapidly generated by the methods we have described.
If the tracks are a microcosm of the real physical process the tracking is said to be
"Crude Monte Carlo"; however they need not be, and if they are not the tracking is
"Sophisticated Monte Carlo". We did not invent this terminology and do not attempt
to justify it. What remains is to analyse the results of the tracking in order that one
may achieve a sufficiently accurate answer to the problem under study. It is convenient
to distinguish the three classes of problem defined in Section 3 before doing this.
[...]
(Goertzel and Kalos, 1958; Kalos, 1963b; Leimdorfer, 1964; Eriksson, 1965; Bendall
and McCracken, 1968). That something other than routine Monte Carlo tracking
is sometimes absolutely necessary is immediately obvious when the well-worn
example of calculating the proportion of neutrons transmitted through a thick
shield (Goertzel and Kalos, 1958; Hammersley and Handscomb, 1964, p. 100; Sobol,
1966) is presented. If the aim is to calculate a transmission which is of the order 10⁻⁴ with
a relative error of about 10 per cent, about a million histories must be processed and
this is computationally unrealistic.
[Fig. 3. Schematic diagram of a hypothetical external source problem: source region, intervening medium and target region.]
But in practice we might well pause to ask ourselves how realistic this test example
really is. For neutrons incident normally on a plane slab the transmission probability
is of the approximate form p = f exp(−σtA), where σ is the total cross-section, t the
slab thickness, A a constant and f an unknown "build-up" factor. Even if f were
assumed a constant, we have δp/p = −(Atσ)(δσ/σ), where Atσ is the thickness of the
slab in mean free paths, so that the relative error in p is about ten times that of σ.
Even the most reliable cross-section determinations have a relative error of a few
per cent and when it is remembered that neutrons are being degraded in energy as
they traverse the slab, so that σ itself changes, and that there is also a multiplicity of different types of
collision event, it is clear that insistence that the error in the Monte Carlo should
be as low as 10 per cent is physically rather idealistic in this example. Nevertheless
the point remains that there are occasions where departures from a direct simulation
are highly desirable. We would go further than this and say they were essential.
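To put illustrative numbers on this (ours, not the paper's): for a slab some ten mean free paths thick and a 3 per cent uncertainty in the cross-section,

    δp/p = −(Atσ)(δσ/σ) ≈ −10 × 0.03 = −0.30,

that is, a 30 per cent uncertainty in the transmission before any Monte Carlo error is incurred.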
A very rough schematic diagram illustrating a hypothetical external source
problem is shown in Fig. 3. Three regions are recognized. First, there is a source
region where neutrons are born. Second, there is a "target" region, it being desired
to estimate some properties of the neutrons that get into this region from the source,
for example, the energy distribution of neutrons per unit volume as a function of
position. Between the two there exists a medium that could be anything from a thick
absorber to a vacuum that is completely transparent to neutrons. A problem relating
to the storage of uranium billets might come into this category, the object being to
examine the effect of a pulse of fission neutrons generated in one of the billets. The
intervening air is in this case virtually transparent to neutrons; but if the system were
flooded this intervening material would be neutronically thick. The whole system is
surrounded by the floor, walls and roof of a storage building.
In this type of example only a small fraction of the initial neutrons contributes to
the result, though even the most unlikely neutron may contribute. The main way in
which Monte Carlo simulation may be improved is to give expression to the idea
that greatest attention should be paid to those particles that are most likely to
contribute to the result. Thus in Fig. 3, if the true physical source be uniformly
distributed over the surface of the source volume, relatively few particles are started
in positions (and directions) from which a contribution is a priori unlikely, but a great
number of particles would be started in the region closest to the target with directions
in the general target direction. The imbalance with the true physical situation is
adjusted by according fictitious weights to the particles representing the neutrons:
the "unlikely" particles are accorded high weights and the "likely" particles low
weights. This is called "source biasing".
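A minimal sketch of the weight bookkeeping, with an invented one-dimensional source; positions are drawn from a biased density q rather than the physical density p, and each particle carries the weight p/q so that weighted tallies remain unbiased:

import random

def biased_source_sample(rng=random):
    """Source position on (0, 1], the target assumed to lie near x = 1.
    Physical density p(x) = 1 (uniform); biased density q(x) = 2x.
    Returns (position, weight = p/q)."""
    x = (1.0 - rng.random()) ** 0.5   # inverse-CDF sample from q(x) = 2x
    return x, 1.0 / (2.0 * x)         # "unlikely" starts (small x) get high weight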
As a particle is tracked it may well approach the general target area and become
"more important". On the other hand it may become less important if it is back
scattered. The physical mischance that a promising particle is lost through neutron
capture may of course be precluded from the simulation by inhibiting absorption
altogether, making a corresponding weight reduction at collision, or during flight,
to balance this violation to the physics. It is clearly desirable that changes in a
neutron's "potential" should be reflected in the simulation, so what are called
"importance regions" are introduced in the intervening material. If a particle crosses
from one region to the next, travelling in the direction of interest, it is split into two
or more particles of lower weight; conversely if it travels in the reverse direction
"Russian Roulette" may be played-that is, the particle is absorbed with some
probability but if it survives this absorption its weight is augmented. By implementing
a system similar to this, one may guarantee good particle traffic all the way between
the source and the target, and so enhance statistical accuracy.
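A sketch of the two boundary rules; the importance values and the integer splitting rule are illustrative assumptions:

import random

def cross_importance_boundary(weight, imp_from, imp_to, rng=random):
    """Particles crossing towards the target (imp_to > imp_from) are
    split into n copies of weight/n; crossing away, they survive
    Russian roulette with probability imp_to/imp_from, the survivor's
    weight being augmented so the expected weight is conserved."""
    ratio = imp_to / imp_from
    if ratio >= 1.0:
        n = max(1, round(ratio))
        return [weight / n] * n        # splitting
    if rng.random() < ratio:
        return [weight / ratio]        # roulette survivor
    return []                          # killed by the roulette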
The way the importance regions are chosen depends, amongst other things, on
the structure of the intervening material; the thicker the material is to source neutrons,
the greater the number of importance regions. Since faster neutrons have longer mean
free paths than slow ones, the neutronic thickness and therefore the attenuation is a
function of neutron energy. Thus importance region siting strategy is a function of
energy. If there is a wide range of source energy-and the fission spectrum covers a
wide range-the simplest procedure is first to stratify the calculation into a series of
perhaps half a dozen separate calculations in each of which the source is distributed
over a relatively narrow energy band. Each calculation has its own distinctive
strategy.
Ideally, we should like our Monte Carlo importance sampling mechanism to be
such that every starting particle produces some contribution in the target zone, and
further that the coefficient of variation of these contributions is small. With no
importance sampling at all, the vast majority of the contributions will be zero and the
remaining very small fraction unity, or thereabouts. As more and more importance
regions are introduced the proportion of starting particles that give rise to a non-zero
score (the score of a starting particle is the aggregate score of all the descendants of
this particle through splitting) increases and so of course does the computing time
per particle history. The number of contributing source particles for a pre-set
computation time increases rapidly (in most applications) as more importance regions
are introduced, attains a (flat) optimum, and then falls off. The optimum is achieved
when the particle traffic density as between source and target is roughly uniform.
This experience of ours is in line with other people's (Goertzel and Kalos, 1958).
A tally is kept of the particle transmitted weights into the target regions of interest
and from these the appropriate parameters required are estimated. The mean square
of the total transmitted contribution of the starting particles is calculated and in
principle this is sufficient to provide an estimate of the statistical accuracy of the
desired result. But even with near-optimum importance region choices the proportion of
non-zero contributors to the target is quite small (typically a few per cent) and it is
necessary to address the problem of securing meaningful error estimates in an
environment where the great majority of the scores are zeros. This is a statistical
problem not without interest, and we exploit ideas similar to those described by
Burrows and MacMillan (1965).
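A minimal version of the naive moment bookkeeping, assuming the aggregate score of every starting particle (zeros included) has been recorded; as the text warns, with only a few per cent of non-zero scores this normal-theory estimate is itself shaky, which is what analyses in the style of Burrows and MacMillan address:

import math

def mean_and_standard_error(scores):
    """Sample mean of the per-history transmitted scores and the
    standard error of that mean, from the first two moments."""
    n = len(scores)
    mean = sum(scores) / n
    second_moment = sum(s * s for s in scores) / n
    variance = (second_moment - mean * mean) * n / (n - 1)
    return mean, math.sqrt(variance / n)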
The strategies described above are fairly primitive ones, deliberately so. We need
methodologies that are easy to apply in a wide variety of problems and types of
system geometry. The importance of a neutron depends not only on its position but
also on its direction and its energy; we have recognized the first only crudely and
the second only through the initial stratification. Because neutrons generally lose
energy in their passage towards the target, the importance regions should be tightened
up; they are not. Even if we can guess with fair confidence some of the neutronic
phenomena occurring in the problem, we quite deliberately discount them; it is
cheaper to have a single, perhaps inefficient, general purpose code rather than a
medley of highly sophisticated special purpose programs. It is not without interest
to note that the "importance" of a neutron, defined as its probability of contributing
to the final result, is in many applications calculable in principle; it is the solution
of the integro-differential equation that is adjoint to the Boltzmann transport equation
(Goertzel and Kalos, 1958). An approximation to this solution could be obtained in
advance, possibly by Monte Carlo methods; the knowledge so obtained could be
used to institute a highly efficient importance sampling scheme which would result
in a low variance (in the limit, zero variance) solution to the original problem. Of
course there are sizeable practical difficulties in this approach.
The variance-reducing repertoire for external source problems is far from
exhausted. As just one example (there are many others) we could, if we wished to
do so, estimate a "transmission contribution" at every single collision point by
calculating the probability(ies) that the emergent particle will contribute to the
score(s). This is in fact a particularly instructive example-one of many-of an
approach that at first sight appears fruitful (particularly in idealized problems) but
is in general operationally sterile.
In many Monte Carlo neutronics problems, but especially in external problems,
one may be interested in the effect of a small change in system geometry. Studies
of these effects call for a different type of sophistication. Given the solution, by
Monte Carlo, to the transmission problem for a slab of given thickness, it is possible
to use the same particle histories to obtain the solution for other slabs (Berger and
Doggett, 1956).
[...]
N(t) = N₀ exp(λt),
where N(t) is the neutron population at time t and N₀ the (eigendistribution) population at time zero. It would be possible to aggregate neutrons at successive instants of time and then estimate λ from the data subsequent to that epoch where settling is
estimated to occur, and for simple systems such an approach may be a good one
(Davis, 1963). But in an environment where the nature of the criticality is heavily
dependent on the behaviour of very slow neutrons remote from the fissile core (a very
small fraction of which may return to it and subsequently cause fission) this is a poor
parameter choice for two reasons. First, the faster neutrons undergo a very large
number of collisions while the slow ones are gradually settling into their eigenstate,
and second, linear interpolation, and particularly extrapolation, on two systems with
different λ is an unreliable procedure. The whole matter is reviewed by Parker and
Woodcock (1961) and Parker (1962) where the concept of the "traverse technique"
is put forward.
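For the simple systems where the aggregate-and-fit approach is sensible, a sketch of the naive estimate (the settling epoch and the data layout are our assumptions): fit log N(t) linearly in t after settling, the slope being λ.

import math

def estimate_lambda(times, populations, settle_epoch):
    """Least-squares slope of log N(t) against t for t >= settle_epoch,
    i.e. lambda in N(t) = N0 * exp(lambda * t)."""
    pts = [(t, math.log(n)) for t, n in zip(times, populations)
           if t >= settle_epoch]
    t_bar = sum(t for t, _ in pts) / len(pts)
    y_bar = sum(y for _, y in pts) / len(pts)
    cov = sum((t - t_bar) * (y - y_bar) for t, y in pts)
    var = sum((t - t_bar) ** 2 for t, _ in pts)
    return cov / var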
In this field also interest often arises in determining, as reliably as possible, the
small effect of a perturbation (for example, moving control rods in a reactor) and
methods based on what is described as a correlation technique (Gubbins, 1966), and
on adjoint calculations (Gubbins, 1969) have been put forward. Methods of estimat-
ing quantities such as reaction rates in small regions of a reactor where relatively
few particles are present have been developed, the most interesting ones being based
on the idea of subjecting particular collision points of interest, but outside the small
region of immediate concern, to close mathematical scrutiny, using the scatter laws
associated with the neutron's subsequent motion (Kalos, 1963a). Techniques for
speeding up the tracking where materials are a mere fraction of a mean free path
thick, so that in the ordinary course of events tracking would be slow computationally
because of the inordinate time required for boundary testing, are described by
Hemmings (1967). A general review of the problem of solving homogeneous linear
problems is given by Parker (1969).
5. CASE STUDIES
The object of this section is to illustrate, as far as we are able, the spirit of our
approach towards the design and execution of Monte Carlo investigations.
[...]
are moderated to very low energies, and a direct Monte Carlo simulation would be
very time consuming. But there is more in it than that. In most Monte Carlo
neutronics studies the nuclei with which the neutrons collide are assumed static;
this is quite a satisfactory approximation for general work but when neutrons are
very slow the physical thermal motion of the protons and other nuclei in the scintillator
cannot be ignored. Even if a Monte Carlo code were written to achieve exact physical
verisimilitude at these very low energies, the results of the calculation would be no
more authoritative than the experimental errors of the nuclear data permit; the
absorption cross-section of gadolinium is only known to an estimated accuracy of
20 per cent.
For these reasons the Monte Carlo calculation was limited to the calculation of
the proportion of neutrons moderated to just less than some specific energy, say G.
The results of this Monte Carlo are now adjusted by applying a thermal group
diffusion treatment to these moderated neutrons. The Monte Carlo calculation would
be even more simplified if the small amount of loading material (typically about
0.02 per cent of the total number of nuclei in the scintillator) could be ignored in the
Monte Carlo tracking altogether. This would apparently merely defer the capture of
neutrons in the gadolinium to later times, when they would be treated by the diffusion
approximation. If this treatment can be justified it means that, for a particular
source energy, scintillator efficiency as a function of percentage loading may be
explored by means of one Monte Carlo calculation and a number of diffusion theory
applications.
The direction of our planning, and this is quite intentional, is to relegate the
Monte Carlo calculation proper to the role of just one link in the computational
chain. To justify ourselves we have to ensure that the results of our calculations are
insensitive first to the choice of G and second to whether or not loading material is
included in the nuclear composition. This was done by running the appropriate
Monte Carlo calculations for one particular source (a fission neutron source) for
which some experimental results were available for comparison purposes. Having
verified the assumptions, ordinary crude Monte Carlo methods were deliberately
used for a series of monoenergetic neutron sources. Even the shape of the scintillator,
which is spherical apart from a small cylindrical neutron beam tube (Mather et al.,
1964), was (quite deliberately) not accurately simulated, though careful calculations
to provide an upper bound for the effect of this approximation were mounted. Many
sophisticated ways of carrying out the tracking, and modifying the scoring processes,
were considered briefly only to be rejected. Finally, a patient study of the effect of
the experimental error in the basic nuclear data was put in hand. Full details of the
whole procedure are given by Parker et al. (1968). The final results are shown in
Table 1.
The Monte Carlo calculations each took 30 minutes computing time; the figure
of 0.005 for methodological errors includes effects of deliberately simplifying the
geometry. The lesson to be drawn from Table 1 is a valuable one; a Monte Carlo
design of great naivete is amply satisfactory for our purposes, bearing in mind that
our calculational environment is clouded with experimental noise. The clear
conclusion is that, in practical as opposed to theoretical work, statistical errors in
the Monte Carlo, however crude, are frequently (but not always) of minor significance
when viewed against a broader backcloth; this is so seldom admitted in the literature
relating to neutronics Monte Carlo procedures that a forceful counter-example is
necessary in an attempt to balance this false perspective.
TABLE 1

Source energy (MeV) | Calculated efficiency | Monte Carlo statistical errors | Other methodological errors | Nuclear data errors | Total error

[Table entries not reproduced in this scan.]
† The differential cross section is proportional to the frequency distribution of the scattering
angle, the constant of proportionality being the cross section for the particular type of scattering
event under consideration.
By tracing neutron histories by the Monte Carlo method, the effects of flux
attenuation and multiple scatter are automatically taken into account. Apart from
statistical fluctuations, a Monte Carlo calculation using the correct physical nuclear
data will parallel the physical process, so the problem is to find what data, when
put into the Monte Carlo calculation, give the experimental results.
[Fig. 4. Geometry of the experiment in elevation and plan aspect: the sample (target) and the detector circle.]
Here we will describe the solution to the multiple scatter problem; for fuller
details see the two papers by Parker et al. (1961 and 1964). Fig. 4 shows the geometry
in elevation and plan. Clearly a straightforward Monte Carlo simulation of the
problem would be very inefficient, since physically the detector is situated a long way
from the target, subtending a very small solid angle at it. Thus only a very small
proportion of particles would hit it. This situation may be alleviated to some extent
by using, instead of the detectors, a detection band which is an area centred on the
detector circle of Fig. 4. This band would subtend a much larger solid angle at the
expense of introducing some error. Multiple scatter correction programs based on
this idea exist (Cashwell and Everett, 1959) but even so the computing time is lengthy.
The crux of our method lies in the calculation of the density (number per unit
area) of neutron histories, in a given category, at the centre of each detector position.
This is done by examining each particle immediately before collision and calculating
the chance that it would leave the target in the prescribed direction without having
a further collision. If there are several categories of possible collision event, as
many separate scores as types of event are computed, and these scores are further
distinguished according to whether the collision point under study is the first collision
that occurred or not. Each collision is therefore employed to estimate a whole series
of scores at each of a number of artificial detectors. Conceptually the occurrence of
a collision switches on a torch, as it were, whose axis is coincident with the particle's
previous track. This torch produces light whose spectrum and intensity are governed
by the appropriate neutron cross-sections and scatter angle frequency distributions.
The whole sphere (centred on the target, and radius the target-detector distance) is
illuminated with light from the torch but in this application the only concern is to
measure the intensity at selected positions around the counter circle. These calcula-
tions are carried out quite independently of the actual type of collision sampled from
the Monte Carlo. Tracking is then resumed, scoring taking place at each further
collision within the target until the track leaves the target, when it is annihilated.
For each collision point in the target, P, there will be some definite direction
d(P, Q) in which the particle must emerge if it is to strike the centre of the detector.
Because it is true that in practical applications the radius of the detector circle is
large when viewed against the target dimensions we may write d(P, Q) = d(Q). For
each collision the score for event type j is the product of three quantities:
(a) the chance, p_j, that event j takes place at the forthcoming collision;
(b) the chance, which is exp{−r/m(E)}, that the emergent particle, whose
energy is E, will not be further scattered during its remaining flight (of length r) in
the target (m(E) is the neutron mean free path);
(c) the differential cross-section σ_j(θ), where θ is the angle between the particle
track and the direction d(Q).
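A literal transcription of the three-factor score, with illustrative names; any geometric normalization is taken to be absorbed into the angular frequency distribution, in the spirit of the footnote above:

import math

def detector_score(p_event, r_remaining, mfp, angular_pdf, theta):
    """Contribution of one collision to one point detector:
    (a) p_event, the chance that event type j occurs there;
    (b) exp(-r/m(E)), the chance of no further collision along the
        remaining flight of length r_remaining in the target;
    (c) angular_pdf(theta), the frequency density of scattering
        through the angle theta needed to reach the detector centre."""
    return p_event * math.exp(-r_remaining / mfp) * angular_pdf(theta)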
[...]
the calculation with those obtained from experiment. This is an (actually automatic)
iterative procedure. Typically the whole procedure requires 8 minutes computing
time on an IBM 7030 computer.
The Monte Carlo procedure is crudity itself; it is the scoring that is elaborate.
The computer program is amply efficient but it is right at least to consider the use of
special tracking techniques. There is indeed a more modern version of the program
which uses weighted tracking, but the motivation for doing this was not to increase
accuracy but to enable multiplicative neutron processes (for example, fission) to be
handled more expeditiously (Parker et al., 1964). Hitherto such examples had not
been contemplated. Of course we were aware of Monte Carlo methods for con-
straining particles to interact (that is, have a collision) in the target, this "forced first
collision" technique being well known (Sobol, 1966, P. 143; Cashwell and Everett,
1959); we have used it, with mixed success, in other applications. But it costs very
little computer time to process a history that has no collision, and we set our faces
against this needless sophistication. The computer code, called MAGGIE, is now
in extensive use.†
† In private discussions, Dr M. H. Kalos informed me that he had developed, quite
independently, a multiple scatter code closely paralleling the above philosophy. His work, which
is not only antecedent to ours, but superior in detail, was not published.
6. DISCUSSION
In this day, there appears a wide and growing gulf between the people who
describe techniques for doing problems and those who actually carry them out. It is
true in the subject of our Society (Sprent, 1970) and the genius whom we are com-
memorating would have deplored this gulf.
In the field of Monte Carlo neutronics the chasm is very wide indeed. To surveyors
of this field the attitudes we have consciously adopted, both here and elsewhere
(Parker, 1965 and 1969), will appear bizarre ones. Thus we have described at length
the problems involved in assessing and accommodating the data, while saying next
to nothing about generating random numbers.
In our field the two poles are exemplified by the expert intolerant of simple
approaches and direct methods and the unimaginative practical worker who scorns
(even if he can understand) the implications of important new work. "Crude" and
"sophisticated" are by now technical Monte Carlo terms. Too often the expert
uses "crude" in a deprecatory sense, while his opposite extreme uses "sophisticated"
in a derogatory sense.
To attempt to bridge the gap it might be helpful to rehearse the basic requirements
of a Monte Carlo neutronics practitioner. First, he would like a fully automatic,
fool-proof, method of processing and organizing his data input. Next he aims to use
a comparatively small repertoire of thoroughly tested computer programs; these
must be general enough to address a large class of problems, for the luxury of writing
a special purpose Monte Carlo program to solve an isolated problem can seldom be
afforded. In general (though exceptions may well arise when comparative calculations
are envisaged) he should not strive for statistical accuracy which is unrealistically high
when viewed against the background of the data errors, and should be particularly
suspicious of assessing the efficacy of his methods in terms of a quantity that is the
product of the time spent in computing and the purely statistical variance of his
result or results. Finally, his quiver of computer programs should be as free as
possible from difficulties in setting up (it is, for example, dangerous to have less-
skilled personnel defining importance region siting strategy) and the results should
be capable of ready interpretation.
In this climate it is more understandable why so-called crude Monte Carlo is
favoured for a large proportion of problems. It is certainly not always the case that
the problem consists of calculating an efficient estimator of a single quantity. There
are many problems where designs which enhance the accuracy in some regions of
phase space (necessarily), degrading it in others, are not acceptable though fortunately
not all problems are in this class.
It would be false to imply that reducing variance is not important and that the
available repertoire for doing just this has only been explored to the extent that has
been described in Section 4.2. Halton (1970, p. 7) describes seven variance-reducing
techniques (admittedly in the context of evaluating sums and integrals, which is not
always our context) and it is right that these, and others, should be considered in the
context of neutronics work. Some we have considered in depth and have actually
tried out; a few we have dismissed (perhaps incorrectly) outright; and in other cases
we see (perhaps wrongly) no great gain in pursuing them at the moment. Our cautious
attitude is due to the interplay of early failures, disappointments, and conversely, to
our ability to inject, on a number of important occasions, new life into the old crude
Monte Carlo doctrine by subjecting the collision events to careful analysis (Section 5.2).
Three examples suffice. Although the idea of antithetic variation (Hammersley and
Morton, 1956; Hammersley and Handscomb, 1964, p. 60) has been exploited by us
in a non-neutronic context (Parker and Holford, 1968) we have been disappointed
when applying it to neutron studies, even in an application which at first sight seemed
remarkably encouraging. This was in a critical size problem where the main process
sustaining criticality was fissioning on the skin of a fissile core reflected by water.
Since the expected number of neutrons produced at thermal fission is 2.0 it occurred
to us that random errors could be substantially reduced by directing one secondary
at random into the reflector, while its antithetic mate took the opposite route into the
core. This idea is basically similar to that described by Hammersley and Handscomb
(1964, p. 109). Understanding why the application described there was successful
while ours was not is an essential stage towards the (often rather slow) self-education
of a Monte Carlo neutronics practitioner.
Another idea analysed and rejected is a variant of the forced first collision technique
(Section 5.2) whereby no particles are allowed to leave the system. This is done by
constraining the next collision to lie within the system, correspondingly multiplying
the particle weight by its probability of remaining in the system. The reasons for
our lack of success in this matter are clearly known to us but we cannot stop to
pursue this matter here.
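For definiteness, a sketch of that rejected variant (names illustrative): the flight length is drawn from the exponential distribution truncated to the distance d remaining inside the system, and the weight is multiplied by the probability of colliding within d.

import math, random

def forced_internal_flight(d, mfp, weight, rng=random):
    """Sample a flight length constrained to lie within distance d of
    the current point (truncated exponential, mean free path mfp) and
    scale the weight by the chance 1 - exp(-d/mfp) of such a collision."""
    p_inside = 1.0 - math.exp(-d / mfp)
    flight = -mfp * math.log(1.0 - rng.random() * p_inside)
    return flight, weight * p_inside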
Lastly we noted, and rejected out of hand, the idea of quasirandom sequences
(Halton, 1970) (that is, sequences of "random numbers" that are really non-random
but good for certain Monte Carlo applications) instead of the straight sequences
obtained from the congruence scheme (Section 4). This is because, in neutronics
work, the number of random numbers used to process a particle's history (and for
that matter a single collision) is not invariant but is itself a random variable, so we
do not know in advance what purpose each random number will be used for.
Our experience has taught us to be critical of variance-reducing methodology
but not, we hope, hypercritical. What is true, however, is that we view the need to
reduce variance as just one aspect of Monte Carlo methodology-and it is not, in
our view, the chief one. It is a matter of some surprise to us that the theory of birth
and death processes (Bartlett, 1955, p. 70; Harris, 1963, p. 103) is not considered
in the literature as a possible aid towards an understanding of Monte Carlo neutronics
matters, though the obvious relationship between the subjects is implicit in Rosesou
(1965). Some aspects of the theory have on occasion been helpful to us.
Perhaps the greatest drawback in the field of neutronics (and in some other fields
as well for that matter) is that the results of calculations, whether done by Monte Carlo
or deterministically, are practically never accompanied by a statement about the
likely effect of errors in the data. The Monte Carlo practitioner who, in his very
tenacity, quotes his (statistical) errors, misleads the casual reader about the real
accuracy of his results, and paradoxically it is more often the case that due allowance
is made, or perhaps guessed, for the effect of data errors when the calculation is a
deterministic one. That this difficult problem has received little attention is under-
standable; that it should be ignored (or recognized and then swept under the carpet)
is intolerable.
We may view our subject as a branch of nuclear physics; as a branch of
mathematics; as an application of birth and death processes; or as a subject in
its own right, but embedded in a workaday environment. Depending on our perspec-
tive we are less, or more, interested in the theory than in the practice. It is right
that from time to time we should pause and ask ourselves, like children, some simple-
minded but awkward questions.
REFERENCES
AHLBERG, J. H., NILSON, E. N. and WALSH, J. L. (1967). The Theory of Splines and their
Applications. New York: Academic Press.
BARTLETT, M. S. (1955). An Introduction to Stochastic Processes. Cambridge University Press.
BENDALL, D. E. and MCCRACKEN, A. K. (1968). McBend-A prototype code utilizing both
removal-diffusion and Monte Carlo methods. AERE Rep. R-5773. London: H.M.S.O.
BENZI, V., CUPINI, E. and DE MATHEIS, A. (1966). A Monte Carlo analysis of some enriched
U235 fast critical assemblies. J. Nucl. Energy, 20, 17-24.
BERGER, M. J. and DOGGETT, J. (1956). Reflection and transmission of gamma radiation by
barriers: semianalytic Monte Carlo calculation. J. Res. Nat. Bur. Stand., 56, 89-98.
BURROWS, G. L. and MACMILLAN, D. B. (1965). Confidence limits for Monte Carlo calculations.
Nucl. Sci. and Engng, 22, 384-385.
CARLSON, B. G. and BELL, G. I. (1958). Solution of the transport equation by the Sn method.
In Proc. 2nd Intern. Conf. Peaceful Uses Atomic Energy, Vol. 16. Nuclear Data and Reactor
Theory, pp. 535-549. Geneva: United Nations.
CASHWELL, E. D. and EVERETT, C. J. (1959). A Practical Manual on the Monte Carlo Method for
Random Walk Problems. New York: Pergamon.
DAVIS, D. H. (1963). Critical size calculations for neutron systems by the Monte Carlo method.
In Methods in Computational Physics, Vol. 1, pp. 67-88. London: Academic Press.
DAVISON, B. (1957). Neutron Transport Theory. Oxford University Press.
ERIKSSON, B. (1965). On the use of importance sampling in particle transport problems. Rep.
AE-190, Aktiebolaget Atomenergi, Stockholm, Sweden.
*GOERTZEL, G. and KALOS, M. H. (1958). Monte Carlo methods in transport problems. In
Progress in Nuclear Energy, Series 1: Physics and Mathematics, Vol. 2, pp. 315-369. New
York: Pergamon.
GUBBINS, M. E. (1966). Reactor perturbation calculations by Monte Carlo methods. AEEW Rep.
M 581. London: H.M.S.O.
(1969). Chase B-A Monte Carlo code for calculating reactor criticality, fluxes, and
perturbation worths. AEEW Rep. R 627. London: H.M.S.O.
*HALTON, J. H. (1970). A retrospective and prospective survey of the Monte Carlo method.
SIAM Review, 12, 1-63.
*HAMMERSLEY, J. M. and HANDSCOMB, D. C. (1964). Monte Carlo Methods. London:
Methuen.
HAMMERSLEY, J. M. and MORTON, K. W. (1956). A new Monte Carlo technique: antithetic
variates. Proc. Camb. Phil. Soc., 52, 449-475.
HARRIS, T. E. (1963). The Theory of Branching Processes. Berlin: Springer-Verlag.
HEMMENT, P. C. E., PENDLEBURY, E. D., ADAMS, M. J., BRETT, B. A. and SAMS, D. (1966). The
multigroup neutron transport perturbation program DUNDEE. AWRE Rep. 0-40/66.
London: H.M.S.O.
HEMMINGS, P. J. (1967). The GEM code. AHSB (S) Rep. R 105. London: H.M.S.O.
KALOS, M. H. (1963a). On the estimation of flux at a point by Monte Carlo. Nucl. Sci. and
Engng, 16, 111-117.
(1963b). Importance sampling in Monte Carlo shielding calculations. Nucl. Sci. and Engng,
16, 227-234.
LEHMER, D. H. (1951). Mathematical methods in large-scale computing units. Proc. 2nd Symp.
on Large-scale Digital Calculating Machinery, 1949, pp. 141-146. Cambridge Mass.: Harvard
University Press.
LEIMDÖRFER, M. (1964). On the transformation of the transport equation for solving deep
penetration problems by the Monte Carlo method. FOA Rapport A 4361-411. Stockholm,
Sweden: Research Institute of National Defence.
LESHAN, E. J., BURR, J. R., TEMME, M., et al. (1958). Calculation of reactor history including
the details of isotopic concentration. ASAE Rep. 34. Mountain View, California: American-
Standard.
MARSHALL, A. W. (1966). Introductory note. In Symposium on Monte Carlo Methods, pp. 1-14
(University of Florida, 1954), H. A. Meyer (ed.). New York: John Wiley.
MATHER, D. S., FIELDHOUSE, P. and MOAT, A. (1964). Average number of prompt neutrons from
U235 fission induced by neutrons from thermal to 8 MeV. Phys. Rev., 133B, 1403-1420.
MILLER, S. M. and PARKER, K. (1965). List of data files available in the UKAEA Nuclear data
library as at 15 April, 1965. AWRE Rep. 0-55/65. London: H.M.S.O.
[...]

DISCUSSION ON MR PARKER'S PAPER
Dr J. M. HAMMERSLEY (Oxford University): Mr Parker hits the nail on the head when
he says there is "a wide and growing gulf between the people who describe techniques for
doing problems and those who actually carry them out". And he reminds us too of
Dr Sprent's recent paper, dealing with the same gulf in statistics. It is instructive to read
the discussion to Dr Sprent's paper and to collate it with Mr Parker's contribution. The
dominant theme in both is a concern for understanding the data of a problem. The
good statistical consultant asks many simple, and apparently naive, questions about
the sources and the reliability and the limitations of the data; and in this way he gets
a feel for what is going on. He has a background knowledge of statistical theory, some
of it advanced; but more often than not, when he has appreciated the limitations of
the data, he will feel that quite simple statistical techniques are all that the data will
bear-perhaps even no more than a few rough diagrams and the calculation of one or
two means and standard deviations. This is equally true in Monte Carlo work, as
Mr Parker shows so well. For example his Table 1 is a valuable corrective against too
much technical sophistication, too much philosophical worrying about the validity of
pseudo-random numbers, and so on when the basic physical data are the prime cause
of imprecision. A theorist does not have to apologize for trying to make a theory work,
but he needs to be realistic enough to recognize that it may be largely a matter of luck
if theory is applicable. The basic trouble is that we are very short of applicable mathe-
matical expertise: we have to make drastic simplifications to get the mathematics off
the ground at all-typically we assume that the problem is one-dimensional, although
most Monte Carlo problems are multidimensional and as such behave quite differently.
In the preface to our book on Monte Carlo methods, Dr Handscomb and I said that
we were dealing with "ideas for variance reduction which have never been seriously
applied in practice. This is inevitable, and typical of a subject that has remained in its
infancy for twenty years or more." It is still in its infancy. On the other hand, we have
to countenance Monte Carlo methods, because so often we have no better alternative.
The other theme to appear both in Dr Sprent's discussion and Mr Parker's paper is the
suggestion that computers may be able to help us out. I think, however, there is a danger
that we may expect too much of them. It is all very well for an expert like Mr Parker
to ask for "a fully-automatic, fool-proof, method of processing and organizing the input
data". What is fool-proof to him is not, unfortunately, fool-proof to fools and amateurs.
I am sure it is always wise, wherever one can, to begin the planning of a Monte Carlo
calculation by playing with the data and with a few random numbers on a desk calculator
before committing them to mass-production on a large computer. It is indeed a very good
aid to getting a feel of the data, and to assessing such matters as the validity of replacing
a distribution by a mean like his ν̄ = Σᵢ i π(i|E). This is not to pretend that such pre-
liminary tinkering resolves all difficulties, nor to deny that it can be confoundedly
awkward and tentative in a large neutronics problem with elaborate geometry.
Mr Parker has given us a very stimulating and thoughtful paper; and I have much
pleasure in proposing the vote of thanks.
Mr B. E. COOPER (ICL Dataskil Ltd): I am very grateful indeed to Mr Parker for provid-
ing me with this opportunity to renew my acquaintance with Monte Carlo methods. Many
years ago, as a rather green statistics graduate, from a well-known place in Gower Street,
I joined the Monte Carlo Section at the Atomic Energy Research Establishment at
Harwell. Incidentally, somebody on the Committee of this Section obviously has a good
memory. Mr Parker's paper has brought back to me memories of my work in this Section
and I cannot resist making one or two brief reminiscences. At that time we used a British
Tabulating Machine Company (now part of ICL) 555 punch-card calculator and
various other punch-card equipment such as sorters, collators and tabulators. In our
Monte Carlo problems one card often represented one nuclear particle and one generation
of nuclear events consisted of a walk, literally, round the machine room. The calculator
calculated what happened at each event, the sorter sorted out the particles judged to have
been absorbed, the collator inserted extra cards for new particles which were created at
the events. The weight of cards moved about the machine room for one problem could be
measured in tons.
I remember one problem, again as a rather green statistics graduate, in which we
were interested in the passage of neutrons from a source. I was greatly intrigued by the
use of the splitting technique where at a boundary a neutron could suddenly become
two neutrons with, as Mr Parker described, half the weight it had before it crossed the
boundary if the crossing was in one direction and would be forced to play Russian roulette
with probability of 0.5 if the crossing was in the other direction. This did, in fact, work
very well; the aim was to make sure that the spatial distribution which we were seeking
was equally accurate over the whole range.
I am greatly impressed by the common-sense approach to Monte Carlo methods in
the paper. This obviously represents a wealth of experience in their application. The
paper is full of "quotes of the week" and I would draw your attention to some of them.
The first is "greatest attention should be paid to those particles that are most likely to
contribute to the result". Although this is obviously sound common sense it seems to
have been missed in other simulation areas. A second is "it is cheaper to have a single,
perhaps inefficient, general-purpose code rather than a medley of highly sophisticated
special-purpose programs". This reminds us that although a sophisticated technique
may allow reduction in the sample size its use will often increase rather than decrease the
total amount of work in the simulation. A third quote is "statistical errors in the Monte
Carlo, however crude, are frequently (but not always) of minor significance when viewed
against a broader backcloth". This quote, also emphasizing that Monte Carlo is often
just a stage in a larger calculation, points out that the sample size should be chosen so that
the error in the Monte Carlo is consistent with that in the original data and is sufficient
for that stage in the calculation. We are reminded not to seek a highly accurate simulation
when the data or the other stages do not warrant it.
There is a reference to the use of antithetic variates which I noticed the previous
speaker did not comment on. I can remember trying some of these at Harwell and
finding, as Mr Parker indicates in his paper, that the method worked impressively on
integrals, where there were dramatic reductions in variance. However, I cannot remember
them being tried on Monte Carlo problems at Harwell. Dr Hammersley was also associated
with that group and will correct me if I am wrong.
Dr HAMMERSLEY: Yes, they were tried, but did not give a very good result. A factor of
10 was about the best obtained by us.
I have listed just three quotes from the paper. There are many more. The main feature
of the paper is the practical approach it brings to Monte Carlo and we could all benefit
from close attention to the many practical points discussed.
Dr Hammersley also drew attention to the first sentence of the discussion and I will
read this: "In this day, there appears a wide and growing gulf between the people who
describe techniques for doing problems and those who actually carry them out". As Dr
Hammersley has said, this is true of much of statistics. I would go a little further than
this and say that it seems to me that there is also a gulf between Monte Carlo and the rest
of simulation. Mr Parker's paper describes very fully the many techniques he uses and I
think the paper also shows the great insight which Mr Parker gains into the practicalities
of the problems which he has studied. Those of us who are concerned with simulation
in other areas should scan this paper in some detail and determine whether the methods
described can be usefully employed in their environment. I think Mr Parker's paper may
go a long way to bridging this gap.
I am delighted to second the vote of thanks.
problems? Except for this it seems to me that this use of the average is the right thing
to do, even though, as one of my friends asked me during the lecture, "Was the speaker
not committing a sin by doing it?" I had to tell him that he was not.
As to references and terminology, we should note that statisticians also turn to Box
and Muller (1958) for generating sines and cosines and that some call crude Monte Carlo
just "experimental sampling", and sophisticated Monte Carlo, just "Monte Carlo".
The argument about appropriate accuracy for a slab problem seems a little unstatistical.
Surely (and I am talking about the details of the argument and not the overall strength)
the averaging of many different uncertainties of about the same size does not increase the
resulting uncertainty. It seems to me that if there are as many nuclei as mentioned in the
paper, and there are uncertainties about each, then the statistician should be happier
than if there were only one nucleus about which he was uncertain.
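The point can be made in one line: if the n errors are independent, each with standard deviation sigma, then

    sd( (1/n) * (e_1 + e_2 + ... + e_n) ) = sigma / sqrt(n),

so averaging over many comparably uncertain nuclei shrinks, rather than inflates, the resulting uncertainty. Independence is of course the crux; correlated errors are taken up below.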
In the discussion of importance regions and external source problems, the flow is
usually in phase space as well as in geometry. Thus, it is not clear why importance regions
in phase space do not help. Should we not try for a constant traffic density in phase space
as well as in geometric space?
The multiple-scattering problem raises interesting questions. Presumably the case of
single scattering without allowance for second collisions can be done deterministically.
If so, should we not confine our Monte Carlo to the additions due to second or later
collisions and to the corresponding subtractions of single-collision neutrons?
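In outline this proposal is a control-variate split: evaluate the single-scatter term exactly and let the sampling carry only the correction. A hedged sketch, in which single_scatter_exact and simulate_history are placeholders for the deterministic result and for one sampled history respectively:

    def estimate_response(n_histories, single_scatter_exact, simulate_history):
        """Deterministic single-scatter term plus a Monte Carlo estimate
        of the contribution from second and later collisions only."""
        correction = 0.0
        for _ in range(n_histories):
            total_score, single_score = simulate_history()
            # Only the difference is estimated stochastically, so its
            # variance, not that of the full score, governs the error.
            correction += total_score - single_score
        return single_scatter_exact + correction / n_histories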
I might prefer, with the speaker, to get my pasteurized numbers from a congruential
generator, though I would want for extra security to apply a permutation 500 or 1,000
long to them before use. I do not see immediately that having varying numbers of
random numbers used for comparable sequences of histories is in itself desirable, or is an
argument against other kinds of generators.
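The permutation suggested here is close to what is now called a shuffle table: buffer several hundred outputs of the congruential generator and let each fresh draw choose which buffered value is released. An illustrative sketch (the 32-bit multiplier and increment are common textbook constants, not those of any particular machine):

    class ShuffledLCG:
        def __init__(self, seed=12345, table_size=512):
            # Illustrative 32-bit congruential constants.
            self.m, self.a, self.c = 2**32, 1664525, 1013904223
            self.state = seed
            self.table = [self._raw() for _ in range(table_size)]

        def _raw(self):
            # The underlying congruential generator, scaled to (0, 1).
            self.state = (self.a * self.state + self.c) % self.m
            return self.state / self.m

        def next(self):
            # A fresh raw value picks a slot; emit the slot's old content
            # and refill it, permuting the output order.
            u = self._raw()
            j = int(u * len(self.table))
            out, self.table[j] = self.table[j], u
            return out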
To the speaker's emphasis on the importance of all kinds of effects, it seems to me
that we can only shout "Hear, hear!" and I do so. This need not, however, preclude
measuring limited but meaningful internal aspects of quality by the product of internal
variance and computer time. Looking at external errors, as we must, will inevitably shorten
our desired computer runs, but only programming and debugging difficulties need interfere
with shortening them further by improving internal quality.
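In later terminology the product proposed here is the reciprocal of a figure of merit:

    quality is larger as  (variance of estimate) x (computer time)  is smaller,

so that halving the variance is no gain if it more than doubles the cost.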
Mr Parker called attention in his introduction to the possibility of doing fitting with
the L1 norm. I call his attention to an unfortunately still unpublished Princeton thesis
by W. Morven Gentleman, which developed techniques of minimizing Lp norms for p
between 1 and 2 with successful results; p = 1 and 1½ are sometimes quite successful.
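Minimizing an Lp norm for 1 < p < 2 is commonly done by iteratively reweighted least squares, each pass solving a weighted L2 problem with weights |r|**(p-2) from the current residuals. A self-contained sketch (plain numpy with illustrative tolerances; not Gentleman's own algorithm):

    import numpy as np

    def lp_line_fit(x, y, p=1.5, iters=50, eps=1e-8):
        """Fit y ~ a + b*x by minimizing sum |residual|**p, 1 < p <= 2."""
        A = np.column_stack([np.ones_like(x), x])
        beta = np.linalg.lstsq(A, y, rcond=None)[0]   # least-squares start
        for _ in range(iters):
            r = y - A @ beta
            # Residual weights |r|**(p-2) turn the Lp problem into
            # a weighted least-squares problem.
            w = np.maximum(np.abs(r), eps) ** (p - 2.0)
            AtW = A.T * w
            beta = np.linalg.solve(AtW @ A, AtW @ y)
        return beta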
One thing that I missed both in the discussion of the data reduction and in Table 1,
I think, was the question not only of the variances of the experimental determination of
physical constants, but of their covariances. It seems to me that, in the use of these
nuclear cross-section graphs, covariance questions must be relevant. It may be that when
you look for particular behaviour in the results as functions of a parameter, this may
have quite a lot to do with how much precision should be put into the Monte Carlo
pattern problem.
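One direct way to let covariances enter is to draw whole perturbed data sets from a multivariate normal with the evaluated covariance matrix and rerun the calculation on each; the Cholesky factor supplies the correlation. The numbers below are purely illustrative:

    import numpy as np

    rng = np.random.default_rng(1)
    mean = np.array([1.00, 0.30, 0.05])        # illustrative cross-sections
    cov = np.array([[0.010, 0.004, 0.000],     # illustrative covariance
                    [0.004, 0.009, 0.001],     # matrix: the off-diagonal
                    [0.000, 0.001, 0.002]])    # terms are what the
                                               # diagonal alone misses
    L = np.linalg.cholesky(cov)
    # Each column is one internally consistent perturbed data set.
    draws = mean[:, None] + L @ rng.standard_normal((3, 100))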
I would, I think, point out that in the statistical uses of Monte Carlo as opposed to
the neutronic uses, it is often, perhaps usually, necessary to be both a gentleman and a
player if you are going to get through. It seems to me that Mr Parker is to be
congratulated on being fortunate enough to work where a crude Monte Carlo works as often
as it does.
Finally, I would like to enquire why the technique of "what the particles might have
done" is crude Monte Carlo. I would regard that as sophisticated Monte Carlo of a
very interesting type.
The CHAIRMAN: The part of the paper which has lessons for all statisticians, not only
those concerned with simulation, is that concerned with the nuclear data, their reduction
and subsequent use. The plea which the author makes towards the end of the paper for
PROFESSOR TUKEY: I would say that a moving median would do a lot for that bottom
left-hand corner.
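A running median of the kind Professor Tukey has in mind replaces each point by the median of its neighbours; unlike a moving average it is not dragged about by a single wild value. A sketch with an illustrative window of five:

    def moving_median(values, window=5):
        """Running median smoother of odd window length; the end points
        are left unsmoothed for simplicity."""
        half = window // 2
        out = list(values)
        for i in range(half, len(values) - half):
            out[i] = sorted(values[i - half : i + half + 1])[half]
        return out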
I am most grateful to the proposer, seconder and discussants for their thoughtful
contributions. They would, I am sure, have made a much better job of presenting some
aspects of my paper than I did myself.
Mr Cooper and Professor Tukey draw attention to the method of antithetic variates.
There is an amusing story about the missing reference, and since it is at the expense of
the Society perhaps I may be excused for telling it. The reference is titled "Optimum
Test Statistics with particular reference to a Forensic Science Problem". It is of course
the author's job to proof-read the text, but not the title sheet of the offprints. The
offprints turned up impeccably produced, save that the title sheet read "Opium Test Statistics",
the setter having presumably been influenced by the word "Forensic" in the title. I am
reminded of the very eminent nineteenth-century sage who laid down the axiom that
statistics were the opium of the masses.
It is very difficult to do justice to Professor Tukey's many valuable contributions.
If it were true that the results of nuclear data evaluations were needed solely for Monte
Carlo neutronics calculations it would, as Professor Tukey says, be natural to graduate the
experimental data in histogram form. But the data are also required for other types of
neutronics studies, and the presentation of the experimental results is partly dictated by
these requirements. In this connection the President's remark on the dangers of a purely
automatic evaluation of data that are frequently of heterogeneous structure is particularly
valuable. The computer should be used up to a point, but should not be allowed to by-pass
expert assessments about particular sets of experimental results.
Dr Hammersley's remarks on the use of computers are apposite in this connection,
but also, of course, in a wider context. In the early days particularly, a lot of time was
spent trying to do finely grained problems on slow machines, an attitude which Dr
Hammersley himself fought as early as 1954 (Hammersley et al., 1954). With faster
machines, better (though still imperfect) expertise, and greater confidence, problems
of doing really large-scale calculations, accommodating the data in very refined form,
are much less severe than they were 15 years ago, when too often machines mastered us
rather than the other way around. Careful pre-production planning I find as important
as it ever has been. The obvious application here is the careful choice of sampling
strategies (e.g. choice of importance regions), but additionally it is of value to try to get a
feel, before doing the actual calculation, for how the particles are likely to behave and why.
Professor Tukey suggests optimum interval linear approximations to nuclear data.
This indeed is partially used in at least one other establishment concerned with Monte
Carlo calculations and the slight additional complexity in carrying out the sampling is
not a serious matter. It is indeed a preferable representation but in our special problems
our cruder representation is quite satisfactory.
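The slight additional complexity is only this: choose a segment with probability proportional to its trapezoidal area, then invert the linear density within the segment, which needs one square root. A sketch (breakpoints illustrative; two uniform deviates per sample):

    import math, random

    def sample_piecewise_linear(xs, ys):
        """Draw from the density proportional to the piecewise-linear
        curve through the points (xs[i], ys[i]), with ys[i] >= 0."""
        areas = [(xs[i + 1] - xs[i]) * (ys[i] + ys[i + 1]) / 2.0
                 for i in range(len(xs) - 1)]
        # Select a segment with probability proportional to its area.
        target = random.random() * sum(areas)
        i = 0
        while target > areas[i]:
            target -= areas[i]
            i += 1
        x0, x1, y0, y1 = xs[i], xs[i + 1], ys[i], ys[i + 1]
        u = random.random()
        slope = (y1 - y0) / (x1 - x0)
        if abs(slope) < 1e-12:
            return x0 + u * (x1 - x0)          # flat segment: uniform
        # Invert y0*t + slope*t**2/2 = u * areas[i] for the offset t.
        t = (-y0 + math.sqrt(y0 * y0 + 2.0 * slope * u * areas[i])) / slope
        return x0 + t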
I regret I had missed referring to Box and Muller's work on random number
transformations. In other connections I have had occasion to employ their excellent technique
for sampling from the Gaussian distribution (Box and Muller, 1958). It is tempting to
take the average of five or six members of the already available rectangular distribution,
murmur "Central Limit Theorem" and hope for the best. One should at least be aware
of when it is legitimate to get by like this and when it is not!
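Both routes are a few lines each; the Box and Muller transform is exact, while the averaged-uniforms shortcut has the mean and variance right but visibly thin tails for small n, which is precisely the caveat above. An illustrative sketch:

    import math, random

    def gaussian_box_muller():
        # Exact: two uniforms give one standard normal deviate.
        u1 = 1.0 - random.random()             # avoid log(0)
        u2 = random.random()
        return math.sqrt(-2.0 * math.log(u1)) * math.cos(2.0 * math.pi * u2)

    def gaussian_clt(n=6):
        # Approximate: standardized average of n uniforms; the average
        # has mean 1/2 and variance 1/(12 n), but the tails are cut off
        # at +/- sqrt(3 n) however many terms are used.
        s = sum(random.random() for _ in range(n)) / n
        return (s - 0.5) * math.sqrt(12.0 * n)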
I agree in the main with Professor Tukey's remark that since all the nuclear data are
subject to uncertainty, my example about the slab penetration problem was overdrawn.
However, it is very often the case that substantial uncertainties in the data for one nuclide
are less influential than minor errors in those for another. This is particularly true in
problems involving a mixture of fissile and non-fissile material. Both Professor Tukey
and Professor Cox have valuable suggestions as to how to assess the sensitivity of the
results to nuclear data uncertainties, though I am worried about the magnitude of
Professor Cox's dimension p. In structuring the errors it is of course the whole of the
error matrix, rather than the diagonal, that is of concern, as Professor Tukey reminds
us. Thus for a particular cross-section, p may not need to be sizeable, but there are so
many different cross-sections! I think the best thing to do is to start by using first-order
perturbation theory to identify those phenomena likely to affect the answer most and
then simply ignore errors in the "unimportant" data; though even then one would have
a formidable task!
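First-order perturbation theory supplies exactly the screening tool wanted: with sensitivities s_i (the first derivatives of the result R with respect to the data x_i) and error matrix C,

    var(R) ~ s^T C s  =  sum over i, j of  s_i C_ij s_j,

so data whose sensitivities are negligible can be dropped from C before the full, and otherwise formidable, error analysis is attempted.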
I am the first to admit that the importance region strategy used by us is very crude,
paying as it does incomplete attention to geometry and none at all to other aspects of
phase space.
On random number generation, I have not found a case where, in neutronic appli-
cations, the use of a congruential generator led one astray (we have carried out many
comparative calculations on problems solved by deterministic methods) but Professor
Tukey's warning is a salutary one. Since different types of collision event call for
different numbers of appeals to our random number generator, it is difficult (though not
impossible) to ensure that different histories each having the same number of collisions
will use the same number of random numbers.
In the multiple-scatter program, we did indeed evaluate separately the effect of
doubly and multiply scattered tracks. Professor Tukey's suggestion is a little difficult
to take into account for even the singly scattered scores are dependent upon the intricacies
of the source angular distribution, which is incorrectly shown as monodirectional on
Fig. 4.
Mr Woodcock's contribution draws my attention to the fact that in my paper I said
next to nothing about the considerable problems involved in constructing modules enabling
really complicated geometries to be considered. That in our particular studies we can
usually manage with relatively simple geometries is no excuse for this scant coverage,
particularly since deterministic methods are non-competitive when the geometry is complex.
I can only in part rectify this lack of emphasis by drawing attention to Mr Woodcock's
references, which are required reading for the Monte Carlo practitioner when he moves
into really complicated geometry.
As a result of the ballot held during the meeting, the following were elected Fellows
of the Society: