
Lecture I: Introduction to Monte Carlo Methods, Integration and Probability Distributions

Morten Hjorth-Jensen
1 Department of Physics and Center of Mathematics for Applications, University of Oslo, N-0316 Oslo, Norway
2 Department of Physics and Astronomy, Michigan State University, East Lansing, Michigan, USA

First National Winter School in eScience, January 28 - February 2, 2007


Outline

1 Introduction to Monte Carlo Methods
2 Probability Distribution Functions
3 Monte Carlo Integration

'Alea iacta est', the die is cast!

Plan for the lectures

1 January 28: Introduction to Monte Carlo methods, probability distributions and Monte Carlo integration.
2 January 29: Random numbers, Markov chains, diffusion and the Metropolis algorithm.
3 January 30: Applications in sociology, simulations of phase transitions in physics and quantum physics.
4 All material is taken from my text on Computational Physics, see http://www.uio.no/studier/emner/matnat/fys/FYS3150/h06/undervisningsmateriale/LectureNotes/.


http://www.iop.org/EJ/journal/CSD

New for 2006: uniquely driven by the aim to publish multidisciplinary scientific advances, together with details of their enabling technologies.

www.iop.org/journals/csd


What is Monte Carlo?

1 Monte Carlo methods are nowadays widely used, from the integration of multi-dimensional integrals to solving ab initio problems in chemistry, physics, medicine, biology, or even Dow-Jones forecasting. Computational finance is one of the novel fields where Monte Carlo methods have found a new field of applications, with financial engineering as an emerging field.

2 Numerical methods that are known as Monte Carlo methods can be loosely described as statistical simulation methods, where statistical simulation is defined in quite general terms to be any method that utilizes sequences of random numbers to perform the simulation.


Monte Carlo Keywords

Consider it a numerical experiment:

Be able to generate random variables following a given PDF.
Find a probability distribution function (PDF).
Sampling rule for accepting a move.
Compute standard deviation and other expectation values.
Techniques for improving errors.


The Plethora of Applications; from the Sciences to Social Studies

1 Quantum physics and chemistry: Variational, Diffusion and Path Integral Monte Carlo
2 Simulations of phase transitions, classical ones and quantal ones such as superfluidity (quantum liquids)
3 Lattice Quantum Chromodynamics (QCD), the only way to test the fundamental forces of Nature (with its own dedicated high-performance-computing machines)
4 Reconstruction of particle-collision paths at for example CERN
5 Solution of stochastic differential equations
6 Dow-Jones forecasting and financial engineering
7 Modelling electoral patterns
8 Ecological evolution models, percolation, wood fires, earthquakes... and so forth

Selected Texts

C. P. Robert and G. Casella, Monte Carlo Statistical Methods, Springer, 2nd edition, 2004.

M. E. J. Newman and G. T. Barkema, Monte Carlo Methods in Statistical Physics, Oxford University Press, 1999.

P. Glasserman, Monte Carlo Methods in Financial Engineering, Springer, 2003.

B. L. Hammond, W. A. Lester, Jr., and P. J. Reynolds, Monte Carlo Methods in Ab Initio Quantum Chemistry, World Scientific, 1994.

G. S. Fishman, Monte Carlo: Concepts, Algorithms and Applications, Springer, 1996.

Important Application: Monte Carlo Integration


Consider

I = \int_0^1 f(x)\,dx \approx \sum_{i=1}^{N} \omega_i f(x_i),

where the \omega_i are the weights determined by the specific integration method (like Simpson's or Taylor's methods), with x_i the given mesh points. To give you a feeling of how we are to evaluate the above integral using Monte Carlo, we employ here the crudest possible approach. Later on we will present more refined approaches. This crude approach consists in setting all weights equal to one, \omega_i = 1. Recall also that dx = h = (b - a)/N, where b = 1, a = 0 in our case and h is the step size. We can then rewrite the above integral as

I = \int_0^1 f(x)\,dx \approx \frac{1}{N} \sum_{i=1}^{N} f(x_i).

Introduce the concept of the average of the function f for a given probability distribution function p(x) as

E[f] = \langle f \rangle = \frac{1}{N} \sum_{i=1}^{N} f(x_i) p(x_i),

and identify p(x) with the uniform distribution, viz. p(x) = 1 when x ∈ [0, 1] and zero for all other values of x.

Monte Carlo Integration


The integral is then the average of f over the interval x ∈ [0, 1],

I = \int_0^1 f(x)\,dx \approx E[f] = \langle f \rangle.

In addition to the average value E[f], the other important quantity in a Monte Carlo calculation is the variance \sigma^2 and the standard deviation \sigma. We define first the variance of the integral with f for a uniform distribution in the interval x ∈ [0, 1] to be

\sigma_f^2 = \frac{1}{N} \sum_{i=1}^{N} (f(x_i) - \langle f \rangle)^2 p(x_i),

and inserting the uniform distribution this yields

\sigma_f^2 = \frac{1}{N} \sum_{i=1}^{N} f(x_i)^2 - \left( \frac{1}{N} \sum_{i=1}^{N} f(x_i) \right)^2,

or

\sigma_f^2 = E[f^2] - (E[f])^2,

which is nothing but a measure of the extent to which f deviates from its average over the region of integration.

But what do we gain by Monte Carlo Integration?

The trapezoidal rule carries a truncation error O(h^2), with h the step length.
In general, quadrature rules such as Newton-Cotes have a truncation error which goes like O(h^k), with k ≥ 1. Recalling that the step size is defined as h = (b - a)/N, we have an error which goes like N^{-k}.
Monte Carlo integration is more efficient in higher dimensions. Assume that our integration volume is a hypercube with side L and dimension d. This cube contains N = (L/h)^d points, and therefore the error in the result scales as N^{-k/d} for the traditional methods.
The error in Monte Carlo integration is however independent of d and scales as \sigma \sim 1/\sqrt{N}, always!
Comparing this with traditional methods shows that Monte Carlo integration is more efficient than an order-k quadrature when d > 2k.
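As a quick worked check (my own numbers, assuming error prefactors of order one): for the trapezoidal rule k = 2, so the two error estimates read

\epsilon_{\mathrm{quad}} \sim N^{-2/d}, \qquad \epsilon_{\mathrm{MC}} \sim N^{-1/2},

and Monte Carlo wins as soon as 2/d < 1/2, i.e., for d > 2k = 4.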


Why Monte Carlo Integration?

An example from quantum mechanics: most problems of interest in e.g., atomic, molecular, nuclear and solid state physics consist of a large number of interacting electrons and ions or nucleons. The total number of particles N is usually sufficiently large that an exact solution cannot be found. Typically, the expectation value for a chosen Hamiltonian for a system of N particles is

\langle H \rangle = \frac{\int dR_1 dR_2 \dots dR_N\, \Psi^*(R_1, R_2, \dots, R_N)\, H(R_1, R_2, \dots, R_N)\, \Psi(R_1, R_2, \dots, R_N)}{\int dR_1 dR_2 \dots dR_N\, \Psi^*(R_1, R_2, \dots, R_N)\, \Psi(R_1, R_2, \dots, R_N)},

an in general intractable problem. This integral is actually the starting point in a Variational Monte Carlo calculation.
Gaussian quadrature: forget it! Given 10 particles and 10 mesh points for each degree of freedom and an ideal 1 Tflops machine (all operations take the same time), how long will it take to compute the above integral? Lifetime of the universe: T ≈ 4.7 × 10^{17} s.
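A rough back-of-the-envelope estimate (my own, assuming three spatial dimensions per particle and one operation per mesh point): 10 particles give 30 degrees of freedom, so 10 mesh points per degree of freedom means 10^{30} integrand evaluations, and

t \sim \frac{10^{30}}{10^{12}\,\mathrm{ops/s}} = 10^{18}\,\mathrm{s},

which exceeds the lifetime of the universe quoted above.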


The Dimensionality Curse

As an example from the nuclear many-body problem, we have Schrödinger's equation as a differential equation

\hat{H} \Psi(r_1, \dots, r_A, \alpha_1, \dots, \alpha_A) = E \Psi(r_1, \dots, r_A, \alpha_1, \dots, \alpha_A),

where r_1, \dots, r_A are the coordinates and \alpha_1, \dots, \alpha_A are sets of relevant quantum numbers such as spin and isospin for a system of A nucleons (A = N + Z, N being the number of neutrons and Z the number of protons).


More on Dimensionality

There are

2^A \times \binom{A}{Z}

coupled second-order differential equations in 3A dimensions. For a nucleus like ^{10}Be this number is 215040. This is a truly challenging many-body problem.
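To see where that number comes from (a quick check of my own, using A = 10 and Z = 4 for ^{10}Be):

2^{10} \times \binom{10}{4} = 1024 \times 210 = 215040.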


Another classic: Radioactive Decay

Assume that at the time t = 0 we have N(0) nuclei of type X which can decay radioactively. At a time t > 0 we are left with N(t) nuclei. With a transition probability \omega, which expresses the probability that the system will make a transition to another state during a time step of one second, we have the following first-order differential equation

dN(t) = -\omega N(t)\,dt,

whose solution is

N(t) = N(0) e^{-\omega t},

where we have defined the mean lifetime \tau of X as

\tau = \frac{1}{\omega}.


Radioactive Decay

The probability for the decay of a particle during a time step \Delta t is

\frac{\Delta N(t)}{N(t) \Delta t} = -\lambda,

where \lambda is inversely proportional to the lifetime. The algorithm, implemented in the code on the next slide, is:

Choose the number of particles N(t = 0) = N_0.
Make a loop over the number of time steps, with maximum time bigger than the number of particles N_0.
At every time step there is a probability \lambda for decay. Compare this probability with a random number x.
If x ≤ \lambda, reduce the number of particles by one, i.e., N = N - 1. If not, keep the same number of particles till the next time step.
Increase by one the time step (the external loop).


Radioactive Decay

// Monte Carlo sampling of the radioactive decay; the argument list is
// inferred from the variables used below and may differ in the full code
void mc_sampling(int number_cycles, int max_time, int initial_n_particles,
                 double decay_probability, int *ncumulative)
{
  long idum = -1; // initialise random number generator
  // loop over Monte Carlo cycles; one Monte Carlo loop is one sample
  for (int cycles = 1; cycles <= number_cycles; cycles++){
    int n_unstable = initial_n_particles;
    // accumulate the number of particles per time step per trial
    ncumulative[0] += initial_n_particles;
    // loop over each time step
    for (int time = 1; time <= max_time; time++){
      // for each time step, we check each particle
      int particle_limit = n_unstable;
      for (int np = 1; np <= particle_limit; np++) {
        // the particle decays if the uniform deviate falls below lambda
        if (ran0(&idum) <= decay_probability) {
          n_unstable = n_unstable - 1;
        }
      } // end of loop over particles
      ncumulative[time] += n_unstable;
    } // end of loop over time steps
  } // end of loop over MC trials
} // end mc_sampling function


The MC Philosophy in a Nutshell

Choose the number of Monte Carlo samples N. Think of every sample as an experiment. Make a loop over N. These samples are often called Monte Carlo cycles or just samples.
Within one experiment you may study a given physical system, say the alpha decay of 100 nuclei every day.
You need a sampling rule. For this decay you choose a random variable x_i in the interval x_i ∈ [0, 1] by calling a random number generator. This number is compared with your decay probability. If smaller, diminish the number of particles; if bigger, keep the number. This is the sampling rule.
Every experiment has its mean and variance. Find the contribution to the mean value and the variance from every loop iteration.
After N samplings, compute the final mean value, variance, standard deviation and possibly the covariance.


Probability Distribution Functions PDF

Discrete PDF versus continuous PDF:

                 Discrete PDF                          Continuous PDF
Domain           {x_1, x_2, x_3, ..., x_N}             [a, b]
Probability      p(x_i)                                p(x)dx
Cumulative       P_i = \sum_{l=1}^{i} p(x_l)           P(x) = \int_a^x p(t)dt
Positivity       0 \le p(x_i) \le 1                    p(x) \ge 0
Positivity       0 \le P_i \le 1                       0 \le P(x) \le 1
Monotonic        P_i \ge P_j if x_i \ge x_j            P(x_i) \ge P(x_j) if x_i \ge x_j
Normalization    P_N = 1                               P(b) = 1

As an example, consider the tossing of two dice, which yields the following possible values

[2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12].

These values are called the domain. To this domain we have the corresponding probabilities

[1/36, 2/36, 3/36, 4/36, 5/36, 6/36, 5/36, 4/36, 3/36, 2/36, 1/36].
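As a quick sanity check (my own illustration, not from the slides), one can enumerate all 36 equally likely outcomes of two dice and recover these probabilities:

#include <iostream>

// count how many of the 36 equally likely outcomes of two dice
// give each sum from 2 to 12, reproducing the probabilities above
int main()
{
  int counts[13] = {0};
  for (int d1 = 1; d1 <= 6; d1++)
    for (int d2 = 1; d2 <= 6; d2++)
      counts[d1 + d2]++;
  for (int sum = 2; sum <= 12; sum++)
    std::cout << "P(" << sum << ") = " << counts[sum] << "/36\n";
  return 0;
}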


Expectation Values

Discrete PDF:

E[x^k] = \langle x^k \rangle = \frac{1}{N} \sum_{i=1}^{N} x_i^k p(x_i),

provided that the sums (or integrals) \sum_{i=1}^{N} p(x_i) converge absolutely (viz., \sum_{i=1}^{N} |p(x_i)| converges).

Continuous PDF:

E[x^k] = \langle x^k \rangle = \int_a^b x^k p(x)\,dx.

Function f(x):

E[f^k] = \langle f^k \rangle = \int_a^b f(x)^k p(x)\,dx.

Variance:

\sigma_f^2 = E[f^2] - (E[f])^2 = \langle f^2 \rangle - \langle f \rangle^2.


Uniform Distribution

The uniform PDF is

p(x) = \frac{1}{b-a} \Theta(x-a) \Theta(b-x).

For a = 0, b = 1 it gives p(x) = 1 for x ∈ [0, 1] and zero else. It forms the basis for all generation of random numbers.


Exponential Distribution

The exponential PDF is

p(x) = \alpha e^{-\alpha x},

yielding probabilities different from zero in the interval [0, ∞), with mean value

\mu = \int_0^\infty x p(x)\,dx = \int_0^\infty x \alpha e^{-\alpha x}\,dx = \frac{1}{\alpha}

and variance

\sigma^2 = \int_0^\infty x^2 p(x)\,dx - \mu^2 = \frac{1}{\alpha^2}.


Normal Distribution

The normal PDF is

p(x) = \frac{1}{b\sqrt{2\pi}} \exp\left(-\frac{(x-a)^2}{2b^2}\right),

with probabilities different from zero in the interval (-∞, ∞). The integral \int_{-\infty}^{\infty} \exp(-x^2)\,dx appears in many calculations; its value is \sqrt{\pi}, a result we will need when we compute the mean value and the variance. The mean value is

\mu = \int_{-\infty}^{\infty} x p(x)\,dx = \frac{1}{b\sqrt{2\pi}} \int_{-\infty}^{\infty} x \exp\left(-\frac{(x-a)^2}{2b^2}\right) dx,

which becomes with a suitable change of variables

\mu = \frac{1}{b\sqrt{2\pi}} \int_{-\infty}^{\infty} b\sqrt{2}\,(a + b\sqrt{2}y) \exp(-y^2)\,dy = a.


Normal Distribution, further Properties

Similarly, the variance becomes

\sigma^2 = \frac{1}{b\sqrt{2\pi}} \int_{-\infty}^{\infty} (x - \mu)^2 \exp\left(-\frac{(x-a)^2}{2b^2}\right) dx,

and inserting the mean value and performing a variable change we obtain

\sigma^2 = \frac{1}{b\sqrt{2\pi}} \int_{-\infty}^{\infty} b\sqrt{2}\,(b\sqrt{2}y)^2 \exp(-y^2)\,dy = \frac{2b^2}{\sqrt{\pi}} \int_{-\infty}^{\infty} y^2 \exp(-y^2)\,dy,

and performing a final integration by parts we obtain the well-known result \sigma^2 = b^2.


Normal Distribution, further Properties

It is useful to introduce the standard normal distribution as well, defined by \mu = a = 0, viz. a distribution centered around zero and with a variance \sigma^2 = 1, leading to

p(x) = \frac{1}{\sqrt{2\pi}} \exp\left(-\frac{x^2}{2}\right).

The exponential and uniform distributions have simple cumulative functions, whereas the normal distribution does not, being proportional to the so-called error function erf(x). Its cumulative distribution,

P(x) = \frac{1}{\sqrt{2\pi}} \int_{-\infty}^{x} \exp\left(-\frac{t^2}{2}\right) dt,

is difficult to evaluate in a quick way. Later we will present an algorithm by Box and Mueller which allows us to generate random variables following the normal distribution from random variables sampled from the uniform distribution.


Binomial Distribution

The binomial distribution reads

p(x) = \binom{n}{x} y^x (1-y)^{n-x}, \qquad x = 0, 1, \dots, n,

where y is the probability for a specific event, such as the tossing of a coin or moving left or right in case of a random walker. Note that x is a discrete stochastic variable.
The sequence of binomial trials is characterized by the following definitions:

Every experiment is thought to consist of N independent trials.
In every independent trial one registers if a specific situation happens or not, such as the jump to the left or right of a random walker.
The probability for every outcome in a single trial has the same value, for example the outcome of tossing a coin is always 1/2.

In Lecture 3 we will show that the probability distribution for a random walker approaches the binomial distribution.


Properties of the Binomial Distribution

In order to compute the mean and variance we need to recall Newton's binomial formula

(a + b)^m = \sum_{n=0}^{m} \binom{m}{n} a^n b^{m-n},

which can be used to show that

\sum_{x=0}^{n} \binom{n}{x} y^x (1-y)^{n-x} = (y + 1 - y)^n = 1,

i.e., the PDF is normalized to one.


Properties of the Binomial Distribution

The mean value is

\mu = \sum_{x=0}^{n} x \binom{n}{x} y^x (1-y)^{n-x} = \sum_{x=0}^{n} x \frac{n!}{x!(n-x)!} y^x (1-y)^{n-x},

resulting in

\mu = ny \sum_{x=1}^{n} \frac{(n-1)!}{(x-1)!(n-1-(x-1))!} y^{x-1} (1-y)^{n-1-(x-1)},

which we rewrite as

\mu = ny \sum_{\nu=0}^{n-1} \binom{n-1}{\nu} y^{\nu} (1-y)^{n-1-\nu} = ny (y + 1 - y)^{n-1} = ny.

The variance is slightly trickier to get. Exercise: show that it reads \sigma^2 = ny(1-y).
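A minimal numerical check of these two results (my own sketch; the values of n, y and the sample size are arbitrary choices):

#include <cstdlib>
#include <iostream>

// estimate the mean and variance of a binomial(n, y) variable by
// simulating n Bernoulli trials per sample; expect mu = n*y and
// sigma^2 = n*y*(1-y)
int main()
{
  const int n = 10, samples = 1000000;
  const double y = 0.3;
  double sum = 0., sum2 = 0.;
  std::srand(1);
  for (int s = 0; s < samples; s++) {
    int x = 0;
    for (int trial = 0; trial < n; trial++)
      if (std::rand() < y * RAND_MAX) x++;   // success with probability y
    sum += x; sum2 += x * (double) x;
  }
  double mean = sum / samples;
  std::cout << "mean = " << mean << " (exact " << n * y << ")\n"
            << "variance = " << sum2 / samples - mean * mean
            << " (exact " << n * y * (1 - y) << ")\n";
  return 0;
}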


Poisson Distribution
Another important distribution with discrete stochastic variables x is the Poisson model, which resembles the exponential distribution and reads

p(x) = \frac{\lambda^x}{x!} e^{-\lambda}, \qquad x = 0, 1, \dots; \quad \lambda > 0.

In this case both the mean value and the variance are easier to calculate,

\mu = \sum_{x=0}^{\infty} x \frac{\lambda^x}{x!} e^{-\lambda} = \lambda e^{-\lambda} \sum_{x=1}^{\infty} \frac{\lambda^{x-1}}{(x-1)!} = \lambda,

and the variance is \sigma^2 = \lambda. An example of an application of the Poisson distribution is the counting of the number of \alpha-particles emitted from a radioactive source in a given time interval. In the limit of n → ∞ and for small probabilities y, the binomial distribution approaches the Poisson distribution. Setting \lambda = ny, with y the probability for an event in the binomial distribution, we can show that

\lim_{n \to \infty} \binom{n}{x} y^x (1-y)^{n-x} = \frac{\lambda^x}{x!} e^{-\lambda}.


Multivariable Expectation Values

Let us recapitulate some of the above concepts using a discrete PDF (which is what we end up doing anyway on a computer). The mean value of a random variable X with range x_1, x_2, \dots, x_N is

\langle x \rangle = \mu = \frac{1}{N} \sum_{i=1}^{N} x_i p(x_i),

and the variance is

\sigma^2 = \frac{1}{N} \sum_{i=1}^{N} (x_i - \langle x \rangle)^2 p(x_i) = \frac{1}{N} \sum_{i=1}^{N} \langle (x_i - \mu_i)^2 \rangle.

Assume now that we have two independent sets of measurements X_1 and X_2 with corresponding mean and variance \mu_1 and \mu_2 and \sigma^2_{X_1} and \sigma^2_{X_2}.


Multivariable Expectation Values

It follows that if we define the new stochastic variable

Y = X_1 + X_2,

we have

\mu_Y = \mu_1 + \mu_2,

and

\sigma_Y^2 = \langle (Y - \mu_Y)^2 \rangle = \langle (X_1 - \mu_1)^2 \rangle + \langle (X_2 - \mu_2)^2 \rangle + 2 \langle (X_1 - \mu_1)(X_2 - \mu_2) \rangle.

It is useful to define the so-called covariance, given by

cov(X_1, X_2) = \langle (X_1 - \mu_1)(X_2 - \mu_2) \rangle,

where we consider the averages \mu_1 and \mu_2 as the outcome of two separate measurements. The covariance thus measures the degree of correlation between variables. We can then rewrite the variance of Y as

\sigma_Y^2 = \sum_{j=1}^{2} \langle (X_j - \mu_j)^2 \rangle + 2\,cov(X_1, X_2),

which in our notation becomes

\sigma_Y^2 = \sigma^2_{X_1} + \sigma^2_{X_2} + 2\,cov(X_1, X_2).

If X_1 and X_2 are two independent variables we can show that the covariance is zero, but one cannot deduce from a zero covariance whether the variables are independent or not. If the random variables we generate are truly random numbers, then the covariance should be zero.
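A small numerical illustration (my own sketch): two independently generated uniform sequences should show a covariance close to zero.

#include <cstdlib>
#include <iostream>

// estimate the covariance of two independent uniform deviates;
// for truly independent streams it should be close to zero
int main()
{
  const int n = 1000000;
  double sum1 = 0., sum2 = 0., sum12 = 0.;
  std::srand(1);
  for (int i = 0; i < n; i++) {
    double x1 = std::rand() / (double) RAND_MAX;
    double x2 = std::rand() / (double) RAND_MAX;
    sum1 += x1; sum2 += x2; sum12 += x1 * x2;
  }
  double mu1 = sum1 / n, mu2 = sum2 / n;
  // cov(X1, X2) = <X1 X2> - <X1><X2>
  std::cout << "cov = " << sum12 / n - mu1 * mu2 << std::endl;
  return 0;
}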


A way to measure the correlation between two sets of stochastic variables is the so-called correlation function \rho(X_1, X_2), defined as

\rho(X_1, X_2) = \frac{cov(X_1, X_2)}{\sqrt{\sigma^2_{X_1} \sigma^2_{X_2}}}.

Obviously, if the covariance is zero due to the fact that the variables are independent, then the correlation is zero. This quantity is often called the correlation coefficient between X_1 and X_2. We can extend this analysis to a sum of stochastic variables Y = X_1 + X_2 + \dots + X_N. We now assume that we have N different measurements of the mean and variance of a given variable. Each measurement consists again of N measurements, although we could have chosen the latter to be different from N. The total mean value is defined as

\mu_Y = \sum_{i=1}^{N} \mu_i.

The total variance is however now defined as

\sigma_Y^2 = \langle (Y - \mu_Y)^2 \rangle = \sum_{j=1}^{N} \sigma^2_{X_j} + 2 \sum_{j<k}^{N} \langle (X_j - \mu_j)(X_k - \mu_k) \rangle,

or

\sigma_Y^2 = \sum_{j=1}^{N} \sigma^2_{X_j} + 2 \sum_{j<k}^{N} cov(X_j, X_k).


Covariance

If the variables are independent, the covariance is zero and the variance is reduced to

\sigma_Y^2 = \sum_{j=1}^{N} \sigma^2_{X_j},

and if we assume that all sets of measurements produce the same variance \sigma^2, we end up with

\sigma_Y^2 = N \sigma^2.

In Lecture 5 we will discuss a very important class of correlation functions (another application of the covariance), the so-called time-correlation functions. These are important quantities in our studies of equilibrium properties,

\phi(t) = \int dt' \left[ M(t') - \langle M \rangle \right] \left[ M(t' + t) - \langle M \rangle \right].

From the Onsager regression hypothesis we have that in the long-time limit the variables M(t' + t) and M(t') eventually become uncorrelated from each other, so that the time correlation function becomes zero. The system has then reached its most likely state.


Central Limit Theorem

Suppose we have a PDF p(x) from which we generate a series of N averages \langle x_i \rangle. Each mean value \langle x_i \rangle is viewed as the average of a specific measurement, e.g., throwing dice 100 times and then taking the average value, or producing a certain amount of random numbers. For notational ease, we set \langle x_i \rangle = x_i in the discussion which follows.
If we compute the mean z of N such mean values x_i,

z = \frac{x_1 + x_2 + \dots + x_N}{N},

the question we pose is: which is the PDF of the new variable z?


Central Limit Theorem

The probability of obtaining an average value z is the product of the probabilities of obtaining arbitrary individual mean values x_i, but with the constraint that the average is z. We can express this through the following expression,

\tilde{p}(z) = \int dx_1 p(x_1) \int dx_2 p(x_2) \dots \int dx_N p(x_N)\, \delta\!\left(z - \frac{x_1 + x_2 + \dots + x_N}{N}\right),

where the \delta-function embodies the constraint that the mean is z. All measurements that lead to each individual x_i are expected to be independent, which in turn means that we can express \tilde{p} as the product of individual p(x_i).


Central Limit Theorem

If we use the integral expression for the \delta-function,

\delta\!\left(z - \frac{x_1 + x_2 + \dots + x_N}{N}\right) = \frac{1}{2\pi} \int_{-\infty}^{\infty} dq\, e^{iq(z - (x_1 + x_2 + \dots + x_N)/N)},

and insert e^{i\mu q - i\mu q}, where \mu is the mean value, we arrive at

\tilde{p}(z) = \frac{1}{2\pi} \int_{-\infty}^{\infty} dq\, e^{iq(z - \mu)} \left[ \int_{-\infty}^{\infty} dx\, p(x)\, e^{iq(\mu - x)/N} \right]^N,

with the integral over x resulting in

\int_{-\infty}^{\infty} dx\, p(x) \exp(iq(\mu - x)/N) = \int_{-\infty}^{\infty} dx\, p(x) \left[ 1 + \frac{iq(\mu - x)}{N} - \frac{q^2 (\mu - x)^2}{2N^2} + \dots \right].


Central Limit Theorem

The second term on the rhs disappears since this is just the mean, and employing the definition of \sigma^2 we have

\int_{-\infty}^{\infty} dx\, p(x)\, e^{iq(\mu - x)/N} = 1 - \frac{q^2 \sigma^2}{2N^2} + \dots,

resulting in

\left[ \int_{-\infty}^{\infty} dx\, p(x) \exp(iq(\mu - x)/N) \right]^N \approx \left[ 1 - \frac{q^2 \sigma^2}{2N^2} + \dots \right]^N,

and in the limit N → ∞ we obtain

\tilde{p}(z) = \frac{1}{\sqrt{2\pi}(\sigma/\sqrt{N})} \exp\!\left( -\frac{(z - \mu)^2}{2(\sigma/\sqrt{N})^2} \right),

which is the normal distribution with variance \sigma_N^2 = \sigma^2/N, where \sigma is the variance of the PDF p(x) and \mu is also the mean of the PDF p(x).


Central Limit Theorem

Thus, the central limit theorem states that the PDF \tilde{p}(z) of the average of N random values corresponding to a PDF p(x) is a normal distribution whose mean is the mean value of the PDF p(x) and whose variance is the variance of the PDF p(x) divided by N, the number of values used to compute z.
The theorem is satisfied by a large class of PDFs. Note however that for a finite N, it is not always possible to find a closed-form expression for \tilde{p}(z). The central limit theorem leads then to the well-known expression for the standard deviation,

\sigma_N = \frac{\sigma}{\sqrt{N}}.

The latter is true only if the average value is known exactly; this is obtained in the limit N → ∞ only.
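A quick numerical illustration of the theorem (my own sketch): averages of N uniform deviates should scatter around 1/2 with a spread of \sigma/\sqrt{N}, where \sigma^2 = 1/12 for the uniform distribution.

#include <cmath>
#include <cstdlib>
#include <iostream>

// sample many averages z of N uniform deviates and compare their
// spread with the CLT prediction sigma/sqrt(N), sigma^2 = 1/12
int main()
{
  const int N = 100, samples = 100000;
  double sum = 0., sum2 = 0.;
  std::srand(1);
  for (int s = 0; s < samples; s++) {
    double z = 0.;
    for (int i = 0; i < N; i++) z += std::rand() / (double) RAND_MAX;
    z /= N;
    sum += z; sum2 += z * z;
  }
  double mean = sum / samples;
  std::cout << "spread of averages = "
            << std::sqrt(sum2 / samples - mean * mean)
            << ", CLT prediction = " << std::sqrt(1./12./N) << std::endl;
  return 0;
}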


Monte Carlo Integration

With the uniform distribution p(x) = 1 for x ∈ [0, 1] and zero else,

I = \int_0^1 f(x)\,dx \approx \frac{1}{N} \sum_{i=1}^{N} f(x_i),

I = \int_0^1 f(x)\,dx \approx E[f] = \langle f \rangle,

\sigma_f^2 = \frac{1}{N} \sum_{i=1}^{N} f(x_i)^2 - \left( \frac{1}{N} \sum_{i=1}^{N} f(x_i) \right)^2,

or

\sigma_f^2 = E[f^2] - (E[f])^2 = \langle f^2 \rangle - \langle f \rangle^2.

Brute Force Algorithm for Monte Carlo Integration

Choose the number of Monte Carlo samples N.
Make a loop over N and for every step generate a random number x_i in the interval x_i ∈ [0, 1] by calling a random number generator.
Use this number to compute f(x_i).
Find the contribution to the mean value and the variance from every loop iteration.
After N samplings, compute the final mean value and the standard deviation.

Brute Force Integration

// crude Monte Carlo evaluation of an integral; with
// func(x) = 4/(1+x*x) this computes pi
int i, n;
long idum;
double crude_mc, x, sum_sigma, fx, variance;
cout << "Read in the number of Monte-Carlo samples" << endl;
cin >> n;
crude_mc = sum_sigma = 0.; idum = -1;
// evaluate the integral with a crude Monte Carlo method
for (i = 1; i <= n; i++){
  x = ran0(&idum);       // uniform deviate in [0,1]
  fx = func(x);          // integrand at the sampled point
  crude_mc += fx;
  sum_sigma += fx*fx;
}
crude_mc = crude_mc/((double) n);
sum_sigma = sum_sigma/((double) n);
variance = sum_sigma - crude_mc*crude_mc;

Code at http://folk.uio.no/mhjensen/fys3150/2005/programs/chapter8/example1.cpp.


Or: another Brute Force Integration

// crude Monte Carlo program to calculate pi
#include <cstdlib>   // rand, srand, RAND_MAX
#include <ctime>     // time
#include <iostream>
using namespace std;

int main()
{
  const int n = 1000000;
  double x, fx, pi, invers_period, pi2;
  int i;
  invers_period = 1./RAND_MAX;
  srand(time(NULL));                     // seed the standard C generator
  pi = pi2 = 0.;
  for (i = 0; i < n; i++)
  {
    x = double(rand())*invers_period;    // uniform deviate in [0,1]
    fx = 4./(1+x*x);                     // integrand, integrates to pi
    pi += fx;
    pi2 += fx*fx;
  }
  pi /= n; pi2 = pi2/n - pi*pi;
  cout << "pi=" << pi << " sigma^2=" << pi2 << endl;
  return 0;
}


Brute Force Integration

Note the call to a function which generates random numbers according to the uniform
distribution

long idum;
idum=-1 ;
.....
x=ran0(&idum);
....

or

...
invers_period = 1./RAND_MAX;
srand(time(NULL));
...
x = double(rand())*invers_period;


Results of Brute Force Integration

N           I               sigma_N
10          3.10263E+00     3.98802E-01
100         3.02933E+00     4.04822E-01
1000        3.13395E+00     4.22881E-01
10000       3.14195E+00     4.11195E-01
100000      3.14003E+00     4.14114E-01
1000000     3.14213E+00     4.13838E-01
10000000    3.14177E+00     4.13523E-01
10^9        3.14162E+00     4.13581E-01

We note that as N increases, the integral itself never reaches more than an agreement to the fourth or fifth digit. The variance also oscillates around its exact value 4.13581E-01. Note well that the variance need not be zero, but it can, with appropriate redefinitions of the integral, be made smaller. A smaller variance yields also a smaller standard deviation.


Acceptance-Rejection Method

This is a rather simple and appealing method after von Neumann. Assume that we are looking at an interval x ∈ [a, b], this being the domain of the PDF p(x). Suppose also that the largest value our distribution function takes in this interval is M, that is

p(x) \le M, \qquad x ∈ [a, b].

Then we generate a random number x from the uniform distribution for x ∈ [a, b] and a corresponding number s from the uniform distribution between [0, M]. If

p(x) \ge s,

we accept the new value of x; else we generate again two new random numbers x and s and perform the test again.


Acceptance-Rejection Method

As an example, consider the evaluation of the integral

I = \int_0^3 \exp(x)\,dx.

Obviously to derive it analytically is much easier; however, the integrand could pose some more difficult challenges. The aim here is simply to show how to implement the acceptance-rejection algorithm. The integral is the area below the curve f(x) = \exp(x). If we uniformly fill the rectangle spanned by x ∈ [0, 3] and y ∈ [0, \exp(3)], the fraction of points below the curve, obtained from a uniform distribution and multiplied by the area of the rectangle, should approximate the chosen integral. It is rather easy to implement this numerically, as shown in the following code.


Simple Plot of the Accept-Reject Method

[Figure not reproduced in this text version.]

Acceptance-Rejection Method

// loop over Monte Carlo trials n
integral = 0.; s = 0.;
for (int i = 1; i <= n; i++){
  // find a random value for x in the interval [0,3]
  x = 3*ran0(&idum);
  // find a y-value between [0, exp(3)]
  y = exp(3.0)*ran0(&idum);
  // if y is below the curve exp(x), we accept the point
  if (y < exp(x)) s = s + 1.0;
} // end of loop over trials
// the integral is the area enclosed below the line f(x) = exp(x):
// multiply the accepted fraction by the area of the rectangle
integral = 3.*exp(3.)*s/n;


Transformation of Variables

The starting point is always the uniform distribution

p(x)dx = \begin{cases} dx & 0 \le x \le 1 \\ 0 & \text{else} \end{cases}

with p(x) = 1, satisfying

\int_{-\infty}^{\infty} p(x)\,dx = 1.

All random number generators provided in the program library generate numbers in this domain.
When we attempt a transformation to a new variable x → y we have to conserve the probability,

p(y)dy = p(x)dx,

which for the uniform distribution implies

p(y)dy = dx.

Transformation of Variables

Let us assume that p(y) is a PDF different from the uniform PDF p(x) = 1 with x ∈ [0, 1]. If we integrate the last expression we arrive at

x(y) = \int_0^y p(y')\,dy',

which is nothing but the cumulative distribution of p(y), i.e.,

x(y) = P(y) = \int_0^y p(y')\,dy'.

This is an important result which has consequences for eventual improvements over the brute force Monte Carlo.


Example 1, a general Uniform Distribution

Suppose we have the general uniform distribution

p(y)dy = \begin{cases} \frac{dy}{b-a} & a \le y \le b \\ 0 & \text{else} \end{cases}

If we wish to relate this distribution to the one in the interval x ∈ [0, 1] we have

p(y)dy = \frac{dy}{b-a} = dx,

and integrating we obtain the cumulative function

x(y) = \int_a^y \frac{dy'}{b-a},

yielding

y = a + (b-a)x,

a well-known result!


Example 2, from Uniform to Exponential

Assume that

p(y) = e^{-y},

which is the exponential distribution, important for the analysis of e.g., radioactive decay. Again, p(x) is given by the uniform distribution with x ∈ [0, 1], and with the assumption that the probability is conserved we have

p(y)dy = e^{-y}\,dy = dx,

which yields after integration

x(y) = P(y) = \int_0^y \exp(-y')\,dy' = 1 - \exp(-y),

or

y(x) = -\ln(1 - x).

This gives us the new random variable y in the domain y ∈ [0, ∞), determined through the random variable x ∈ [0, 1] generated by our favorite random number generator.


Example 2, from Uniform to Exponential

This means that if we can factor out \exp(-y) from an integrand we may have

I = \int_0^\infty F(y)\,dy = \int_0^\infty \exp(-y) G(y)\,dy,

which we rewrite as

\int_0^\infty \exp(-y) G(y)\,dy = \int_0^\infty \frac{dx}{dy} G(y)\,dy \approx \frac{1}{N} \sum_{i=1}^{N} G(y(x_i)),

where x_i is a random number in the interval [0,1].

Note that in practical implementations our random number generators for the uniform distribution never return exactly 0 or 1, but we may come very close. We should thus in principle set x ∈ (0, 1).


Example 2, from Uniform to Exponential

The algorithm is rather simple. In the function which sets up the integral, we simply need the random number generator for the uniform distribution in order to obtain numbers in the interval [0,1]. We obtain y by taking the logarithm of (1 - x). Our calling function which sets up the new random variable y may then include statements like

.....
idum=-1;
x=ran0(&idum);
y=-log(1.-x);
.....


Example 3
Another function which provides an example for a PDF is

p(y)dy = \frac{dy}{(a + by)^n},

with n > 1. It is normalizable, positive definite, analytically integrable, and the integral is invertible, allowing thereby the expression of a new variable in terms of the old one. The integral

\int_0^\infty \frac{dy}{(a + by)^n} = \frac{1}{(n-1)ba^{n-1}}

gives the normalized PDF

p(y)dy = \frac{(n-1)ba^{n-1}}{(a + by)^n}\,dy,

which in turn gives the cumulative function

x(y) = P(y) = \int_0^y \frac{(n-1)ba^{n-1}}{(a + by')^n}\,dy' = 1 - \left( \frac{a}{a + by} \right)^{n-1},

resulting in

y = \frac{a}{b} \left( (1 - x)^{-1/(n-1)} - 1 \right).
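A minimal sketch of my own (with the arbitrary choices a = b = 1, n = 3, for which the analytic mean of p(y) is 1) of how to generate deviates from this PDF using the inverse above:

#include <cmath>
#include <cstdlib>
#include <iostream>

// draw samples from p(y) = (n-1) b a^(n-1) / (a + b y)^n via the
// inverse cumulative function y = (a/b)((1-x)^{-1/(n-1)} - 1)
int main()
{
  const double a = 1., b = 1., n = 3.;
  const int samples = 1000000;
  double sum = 0.;
  std::srand(1);
  for (int i = 0; i < samples; i++) {
    double x = std::rand() / (RAND_MAX + 1.0);   // uniform in [0,1)
    sum += (a/b) * (std::pow(1. - x, -1./(n - 1.)) - 1.);
  }
  std::cout << "sample mean = " << sum / samples << " (exact 1)\n";
  return 0;
}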


Example 4, from Uniform to Normal

For the normal distribution, expressed here as

g(x, y)\,dx\,dy = \exp(-(x^2 + y^2)/2)\,dx\,dy,

it is rather difficult to find an inverse since the cumulative distribution is given by the error function erf(x).
If we however switch to polar coordinates, we have for x and y

r = (x^2 + y^2)^{1/2}, \qquad \theta = \tan^{-1}\frac{y}{x},

resulting in

g(r, \theta)\,dr\,d\theta = r \exp(-r^2/2)\,dr\,d\theta,

where the angle \theta can be given by a uniform distribution in the region [0, 2\pi]. Following example 1 above, this simply means multiplying random numbers x ∈ [0, 1] by 2\pi.


Example 4, from Uniform to Normal


The variable r, defined for r ∈ [0, ∞), needs to be related to random numbers x' ∈ [0, 1]. To achieve that, we introduce a new variable

u = \frac{1}{2} r^2,

and define a PDF

\exp(-u)\,du,

with u ∈ [0, ∞). Using the results from example 2, we have that

u = -\ln(1 - x'),

where x' is a random number generated for x' ∈ [0, 1]. With

x = r\cos(\theta) = \sqrt{2u}\cos(\theta)

and

y = r\sin(\theta) = \sqrt{2u}\sin(\theta),

we can obtain new random numbers x, y through

x = \sqrt{-2\ln(1 - x')}\cos(\theta)

and

y = \sqrt{-2\ln(1 - x')}\sin(\theta),

with x' ∈ [0, 1] and \theta ∈ 2\pi[0, 1].



Example 4, from Uniform to Normal

A function which yields such random numbers for the normal distribution would include statements like

.....
idum=-1;
radius=sqrt(-2*log(1.-ran0(&idum)));
theta=2*pi*ran0(&idum);
x=radius*cos(theta);
y=radius*sin(theta);
.....


Box-Mueller Method for Normal Deviates

// random numbers with gaussian distribution
double gaussian_deviate(long * idum)
{
  static int iset = 0;
  static double gset;
  double fac, rsq, v1, v2;
  if (*idum < 0) iset = 0;   // reinitialise on a negative seed
  if (iset == 0) {
    do {
      // pick two uniform numbers in the square extending from -1 to +1
      // in each direction until they fall inside the unit circle
      v1 = 2.*ran0(idum) - 1.0;
      v2 = 2.*ran0(idum) - 1.0;
      rsq = v1*v1 + v2*v2;
    } while (rsq >= 1.0 || rsq == 0.);
    fac = sqrt(-2.*log(rsq)/rsq);
    // the transformation gives two normal deviates; save one for next call
    gset = v1*fac;
    iset = 1;
    return v2*fac;
  } else {
    iset = 0;
    return gset;
  }
} // end function gaussian_deviate

Importance Sampling

With the aid of the above variable transformations we address now one of the most widely used approaches to Monte Carlo integration, namely importance sampling.
Let us assume that p(y) is a PDF whose behavior resembles that of a function F defined in a certain interval [a, b]. The normalization condition is

\int_a^b p(y)\,dy = 1.

We can rewrite our integral as

I = \int_a^b F(y)\,dy = \int_a^b p(y) \frac{F(y)}{p(y)}\,dy.


Importance Sampling

Since random numbers are generated for the uniform distribution p(x) with x ∈ [0, 1], we need to perform a change of variables x → y through

x(y) = \int_a^y p(y')\,dy',

where we used

p(x)dx = dx = p(y)dy.

If we can invert x(y), we find y(x) as well.


Importance Sampling
With this change of variables we can express the integral above as

I = \int_a^b p(y) \frac{F(y)}{p(y)}\,dy = \int_a^b \frac{F(y(x))}{p(y(x))}\,dx,

meaning that a Monte Carlo evaluation of the above integral gives

\int_a^b \frac{F(y(x))}{p(y(x))}\,dx = \frac{1}{N} \sum_{i=1}^{N} \frac{F(y(x_i))}{p(y(x_i))}.

The advantage of such a change of variables in case p(y) follows F closely is that the integrand becomes smooth and we can sample over relevant values for the integrand. It is however not trivial to find such a function p. The conditions on p which allow us to perform these transformations are:
1 p is normalizable and positive definite,
2 it is analytically integrable and
3 the integral is invertible, allowing us thereby to express a new variable in terms of the old one.


Importance Sampling
The algorithm for this procedure is:
Use the uniform distribution to find the random variable y in the interval [0,1]. p(x) is a user-provided PDF.
Evaluate thereafter

I = \int_a^b F(x)\,dx = \int_a^b p(x) \frac{F(x)}{p(x)}\,dx,

by rewriting

\int_a^b p(x) \frac{F(x)}{p(x)}\,dx = \int_a^b \frac{F(x(y))}{p(x(y))}\,dy,

since

\frac{dy}{dx} = p(x).

Perform then a Monte Carlo sampling for

\int_a^b \frac{F(x(y))}{p(x(y))}\,dy \approx \frac{1}{N} \sum_{i=1}^{N} \frac{F(x(y_i))}{p(x(y_i))},

with y_i ∈ [0, 1].
Evaluate the variance.


Demonstration of Importance Sampling


Consider

I = \int_0^1 F(x)\,dx = \int_0^1 \frac{1}{1 + x^2}\,dx = \frac{\pi}{4}.

We choose the following PDF, which follows the function to integrate closely,

p(x) = \frac{1}{3}(4 - 2x), \qquad \int_0^1 p(x)\,dx = 1,

resulting in

\frac{F(0)}{p(0)} = \frac{F(1)}{p(1)} = \frac{3}{4}.

Check that it fulfils the requirements of a PDF. We then perform the change of variables (via the cumulative function)

y(x) = \int_0^x p(x')\,dx' = \frac{1}{3} x (4 - x),

or

x = 2 - (4 - 3y)^{1/2}.

We have that when y = 0 then x = 0, and when y = 1 we have x = 1.


Simple Code

// evaluate the integral with importance sampling


for ( int i = 1; i <= n; i++){
x = ran0(&idum); // random numbers in [0,1]
y = 2 - sqrt(4-3*x); // new random numbers
fy=3*func(y)/(4-2*y); // weighted function
int_mc += fy;
sum_sigma += fy*fy;
}
int_mc = int_mc/((double) n );
sum_sigma = sum_sigma/((double) n );
variance=(sum_sigma-int_mc*int_mc);

Code at http://folk.uio.no/mhjensen/fys3150/2005/programs/chapter8/example2.cpp.


Test Runs and Comparison with Brute Force for \pi = 3.14159

The suffix cr stands for the brute force approach while is stands for the use of importance sampling. All calculations use ran0 as function to generate the uniform distribution.

N           I_cr            sigma_cr        I_is            sigma_is
10000       3.13395E+00     4.22881E-01     3.14163E+00     6.49921E-03
100000      3.14195E+00     4.11195E-01     3.14163E+00     6.36837E-03
1000000     3.14003E+00     4.14114E-01     3.14128E+00     6.39217E-03
10000000    3.14213E+00     4.13838E-01     3.14160E+00     6.40784E-03


Multidimensional Integrals

When we deal with multidimensional integrals of the form

I = \int_0^1 dx_1 \int_0^1 dx_2 \dots \int_0^1 dx_d\, g(x_1, \dots, x_d),

with x_i defined in the interval [a_i, b_i], we would typically need a transformation of variables of the form

x_i = a_i + (b_i - a_i) t_i,

if we were to use the uniform distribution on the interval [0, 1]. In this case, we need a Jacobi determinant

\prod_{i=1}^{d} (b_i - a_i),

and to convert the function g(x_1, \dots, x_d) to

g(x_1, \dots, x_d) \to g(a_1 + (b_1 - a_1) t_1, \dots, a_d + (b_d - a_d) t_d).
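In code this change of variables is just a rescaling of each uniform deviate, with the Jacobi determinant multiplying the final average. A sketch of my own (ran0 and idum as in the slides; g is a user-supplied integrand, and the fixed-size buffer is an assumption):

// brute force MC for a d-dimensional integral over [a_i, b_i]
double brute_force_general(int d, int samples, double *a, double *b,
                           double (*g)(double *), long *idum)
{
  double jacobi = 1.;                      // prod_i (b_i - a_i)
  for (int i = 0; i < d; i++) jacobi *= b[i] - a[i];
  double sum = 0.;
  double x[100];                           // assumes d <= 100
  for (int n = 0; n < samples; n++) {
    for (int i = 0; i < d; i++)
      x[i] = a[i] + (b[i] - a[i])*ran0(idum);  // map uniform t_i to [a_i, b_i]
    sum += g(x);
  }
  return jacobi * sum / samples;           // Jacobi determinant times the mean
}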


Example: 6-dimensional Integral

As an example, consider the following six-dimensional integral

\int_{-\infty}^{\infty} d\mathbf{x}\,d\mathbf{y}\, g(\mathbf{x}, \mathbf{y}),

where

g(\mathbf{x}, \mathbf{y}) = \exp(-\mathbf{x}^2 - \mathbf{y}^2 - (\mathbf{x} - \mathbf{y})^2/2),

with \mathbf{x} and \mathbf{y} three-dimensional vectors, so that d = 6.


Example: 6-dimensional Integral

We can solve this integral by employing our brute force scheme, or using importance sampling and random variables distributed according to a gaussian PDF. For the latter, if we set the mean value \mu = 0 and the standard deviation \sigma = 1/\sqrt{2}, we have the weight

\frac{1}{\sqrt{\pi}} \exp(-x^2),

and through

\pi^3 \int \prod_{i=1}^{6} \left( \frac{1}{\sqrt{\pi}} \exp(-x_i^2) \right) \exp(-(\mathbf{x} - \mathbf{y})^2/2)\, dx_1 \dots dx_6,

we can rewrite our integral as

\int f(x_1, \dots, x_d) F(x_1, \dots, x_d) \prod_{i=1}^{6} dx_i,

where f is the gaussian distribution.


Brute Force I

.....
// evaluate the integral without importance sampling
// Loop over Monte Carlo Cycles
for ( int i = 1; i <= n; i++){
// x[] contains the random numbers for all dimensions
for (int j = 0; j< 6; j++) {
x[j]=-length+2*length*ran0(&idum);
}
fx=brute_force_MC(x);
int_mc += fx;
sum_sigma += fx*fx;
}
int_mc = int_mc/((double) n );
sum_sigma = sum_sigma/((double) n );
variance=sum_sigma-int_mc*int_mc;
......


Brute Force II

// the integrand for the brute force approach
double brute_force_MC(double *x)
{
  double a = 1.; double b = 0.5;
  // evaluate the different terms of the exponential
  double xx = x[0]*x[0] + x[1]*x[1] + x[2]*x[2];
  double yy = x[3]*x[3] + x[4]*x[4] + x[5]*x[5];
  double xy = pow((x[0]-x[3]),2) + pow((x[1]-x[4]),2) + pow((x[2]-x[5]),2);
  return exp(-a*xx - a*yy - b*xy);
} // end function for the integrand

Full code at http://folk.uio.no/mhjensen/fys3150/2005/programs/chapter8/example3.cpp.


Importance Sampling I

..........
// evaluate the integral with importance sampling
for ( int i = 1; i <= n; i++){
// x[] contains the random numbers for all dimensions
for (int j = 0; j < 6; j++) {
x[j] = gaussian_deviate(&idum)*sqrt2;
}
fx=gaussian_MC(x);
int_mc += fx;
sum_sigma += fx*fx;
}
int_mc = int_mc/((double) n );
sum_sigma = sum_sigma/((double) n );
variance=sum_sigma-int_mc*int_mc;
.............


Importance Sampling II

// this function defines the integrand to integrate

double gaussian_MC(double *x)


{
double a = 0.5;
// evaluate the different terms of the exponential
double xy=pow((x[0]-x[3]),2)+pow((x[1]-x[4]),2)+pow((x[2]-x[5]),2);
return exp(-a*xy);
} // end function for the integrand

Full code at http://folk.uio.no/mhjensen/fys3150/2005/programs/chapter8/example4.cpp.


Test Runs for six-dimensional Integral

Results as a function of the number of Monte Carlo samples N. The exact answer is I ≈ 10.9626 for the integral. The suffix cr stands for the brute force approach while gd stands for the use of a gaussian distribution function. All calculations use ran0 as function to generate the uniform distribution.

N           I_cr            I_gd
10000       1.15247E+01     1.09128E+01
100000      1.29650E+01     1.09522E+01
1000000     1.18226E+01     1.09673E+01
10000000    1.04925E+01     1.09612E+01


Going Parallel with MPI

Task parallelism: the work of a global problem can be divided into a number of independent tasks, which rarely need to synchronize. Monte Carlo simulation is one example. It is almost embarrassingly trivial to parallelize Monte Carlo codes.
MPI is a message-passing library where all the routines have corresponding C/C++ bindings,

MPI_Command_name

and Fortran bindings (routine names are in uppercase, but can also be in lower case),

MPI_COMMAND_NAME


Computing the 6-dimensional Integral in Parallel

#include "mpi.h"
#include <stdio.h>
int main (int nargs, char* args[])
{
  // declarations ....
  MPI_Init (&nargs, &args);
  MPI_Comm_size (MPI_COMM_WORLD, &size);
  MPI_Comm_rank (MPI_COMM_WORLD, &iam);
  ....
  // split the Monte Carlo cycles evenly among the processes
  no_intervalls = mcs/size;
  myloop_begin = iam*no_intervalls + 1;
  myloop_end = (iam+1)*no_intervalls;

Computing the 6-dimensional Integral in Parallel

for (int i = myloop_begin; i <= myloop_end; i++){
  // x[] contains the random numbers for all dimensions
  for (int j = 0; j < 6; j++) {
    x[j] = gaussian_deviate(&idum)*sqrt2;
  }
  fx = gaussian_MC(x);
  average[0] += fx;
  average[1] += fx*fx;
}
// collect the sums from all processes on the master node
MPI_Reduce(average, total_average, 2, MPI_DOUBLE,
           MPI_SUM, 0, MPI_COMM_WORLD);
// print results
MPI_Finalize ();

Full code at http://folk.uio.no/mhjensen/fys3150/2005/programs/chapter8/example6.cpp.

Exercise

(a) Calculate the integral

I = \int_0^1 e^{-x^2}\,dx,

using brute force Monte Carlo with p(x) = 1 and importance sampling with p(x) = a e^{-x}, where a is a constant.

(b) Calculate the integral

I = \int_0^\pi \frac{1}{x^2 + \cos^2(x)}\,dx,

with p(x) = a e^{-x}, where a is a constant. Determine the value of a which minimizes the variance.

(c) Try to parallelize the code as well.
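As a starting point for part (a), here is a minimal sketch of my own (not the official solution): on [0, 1], normalization of p(x) = a e^{-x} gives a = 1/(1 - e^{-1}), and inverting the cumulative function yields y = -ln(1 - x(1 - e^{-1})) for a uniform x.

#include <cmath>
#include <cstdlib>
#include <iostream>

// importance sampling for I = int_0^1 exp(-x^2) dx with
// p(y) = a exp(-y), a = 1/(1 - exp(-1)); exact I ~ 0.746824
int main()
{
  const int n = 1000000;
  const double a = 1./(1. - std::exp(-1.));
  double sum = 0.;
  std::srand(1);
  for (int i = 0; i < n; i++) {
    double x = std::rand() / (RAND_MAX + 1.0);          // uniform in [0,1)
    double y = -std::log(1. - x*(1. - std::exp(-1.)));  // p(y)-distributed on [0,1]
    sum += std::exp(-y*y) / (a*std::exp(-y));           // F(y)/p(y)
  }
  std::cout << "I = " << sum / n << std::endl;
  return 0;
}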
