Lect 15

The document provides an introduction to Monte Carlo methods, detailing key components of a Monte Carlo algorithm including random number generators, sampling rules, probability distribution functions, and error estimation. It emphasizes the importance of reliable pseudo random number generators and discusses techniques for generating random numbers, including the use of exponential and Gaussian distributions. Additionally, it covers Monte Carlo integration and the concept of importance sampling to enhance computational efficiency.

2024/12/26

Introduction to Monte Carlo Methods

Key Components of a Monte Carlo Algorithm
 Having examined some simple Monte Carlo applications, it
is timely to identify the key components that typically
compose a Monte Carlo algorithm.

They are:
 Random number generator

 Sampling rule

 Probability Distribution Functions (PDF)

 Error estimation


Random Number Generators


 A reliable random number generator is critical for the success
of a Monte Carlo program. This is particularly important for
Monte Carlo simulations which typically involve the use of
literally millions of random numbers. If the numbers used are
poorly chosen, i.e. if they show non-random behavior over a
relatively short interval, the integrity of the Monte Carlo
method is severely compromised.
 In real life, it is very easy to generate truly random numbers.
The lottery machine does this every night! Indeed, something
as simple as drawing numbers from a hat is an excellent way of
obtaining numbers that are truly random.
 In contrast, it is impossible to conceive of an algorithm that
produces purely random numbers, because by definition an
algorithm, and the computer that executes it, is deterministic.
That is, it is based on well-defined, reproducible operations.

Pseudo Random Number Generators

 The closest that can be obtained to a random number
generator is a pseudo random number generator.

 A good pseudo random number generator should have:

 A long period, which should be close to the range of
the integers on the computer.
 Good randomness.
 Speed.

 Most pseudo random number generators have two
things in common: (a) the use of very large prime
numbers and (b) the use of modular arithmetic.


Pseudo Random Number Generators

 Most language compilers typically supply a so-called
random number generator. Treat this with the
utmost suspicion, because they typically generate a
random number via a recurrence relation such as:

x_{n+1} = (a x_n + c) mod m

where a, c, and m are magic numbers. The choice of these
numbers determines the quality of the generator.
 One option, a = 7^5 = 16807, c = 0, and m = 2^31 − 1 =
2147483647, has been found to be an excellent choice
for 32-bit computers, whose integers span [−2^31, 2^31 − 1].

Pseudo Random Number Generators (contd)

 The above relation will generate a sequence of random
numbers x_1, x_2, … between 0 and m − 1. The a and c
terms are positive constants.

 This is an example of a linear congruential generator. The
advantage of such a random number generator is that it is
very fast because it only requires a few operations.
However, the major disadvantage is that the recurrence
relation will repeat itself with a period that is no
greater than m.


Pseudo Random Number Generators (contd)

 If a, c and m are chosen properly, then the repetition period
can be stretched to its maximum length of m. However, m is
typically not very large. For example, in many C applications
m (called RAND_MAX) is only 32767.

 A number like 32767 seems large, but a moment's thought
will quickly dispel this illusion. For example, a typical
Monte Carlo integration involves evaluating one million
different points, but if the random numbers used to
obtain these points repeat every 32767 draws, the result is
that the same 32767 points are evaluated about 30 times!

Pseudo Random Number Generators (contd)

      function ranf()
c     Park-Miller generator implemented via Schrage's algorithm
      data ia/16807/, ic/2147483647/, iq/127773/, ir/2836/
      common /cseed/ iseed
      ih = iseed/iq
      il = mod(iseed,iq)
      it = ia*il - ir*ih
      if (it.gt.0) then
        iseed = it
      else
        iseed = ic + it
      end if
      ranf = iseed/float(ic)
      return
      end


Pseudo Random Number Generators (contd)

 The function routine was given by Park and Miller (1988)
in Pascal.
 iq = ic/ia, ir = mod(ic,ia), and iseed ∈ [1, 2^31 − 1].

 In order to start the random number generator differently
every time, one needs a systematic way of obtaining a
different initial seed (the initial value of iseed).
Otherwise, one would end up with exactly the same result
if one starts the program with exactly the same initial
seed.

 It is common to use time as the initial seed.

Pseudo Random Number Generators (contd)

 For example, most computers would have
Year: 0 ≤ iy ≤ 99
Month: 1 ≤ im ≤ 12
Day: 1 ≤ id ≤ 31
Hour: 0 ≤ ih ≤ 23
Minute: 0 ≤ in ≤ 59
Second: 0 ≤ is ≤ 59
 Then one can choose

iseed = iy + 70*(im + 12*(id + 31*(ih + 23*(in + 59*is))))

as the initial seed, which is roughly in the region of [1, 2^31 − 1].
 The results should never be the same as long as the jobs
are started at least a second apart but within 100 years.


Pseudo Random Number Generators (contd)

 To implement the initial seed program in Fortran:

      integer*4 time, stime, t(9)
      stime = time(%ref(0))
      call gmtime_(stime, t)
      iseed = t(6) + 70*(t(5) + 12*(t(4) + 31*(t(3) + 23*(t(2) + 59*t(1)))))
      if (mod(iseed,2).eq.0) iseed = iseed - 1

 The intrinsic function time and subroutine gmtime_ are used
to obtain the system time and to transform it into the needed
integer array.

Pseudo Random Number Generators (contd)

 Another disadvantage is that successive random numbers
obtained from congruential generators are highly
correlated.

 The generation of random numbers is a specialist topic.
However, a practical solution is to use more than one
congruential generator and "shuffle" the results.


Pseudo Random Number Generators (contd)

The following C++ function (from Press et al.) illustrates the process of
shuffling.

double myRandom(int *idum)
{
    /* constants for random number generator */
    const long int M1 = 259200;
    const int IA1 = 7141;
    const long int IC1 = 54773;
    const long int M2 = 134456;
    const int IA2 = 8121;
    const int IC2 = 28411;
    const long int M3 = 243000;
    const int IA3 = 4561;
    const long int IC3 = 51349;
    int j;
    static int iff = 0;
    static long int ix1, ix2, ix3;
    double RM1, RM2, temp;
    static double r[97];

Pseudo Random Number Generators (contd)

    RM1 = 1.0/M1;
    RM2 = 1.0/M2;
    if (*idum < 0 || iff == 0)   /* initialise on first call */
    {
        iff = 1;
        ix1 = (IC1 - (*idum)) % M1;   /* seeding routines */
        ix1 = (IA1*ix1 + IC1) % M1;
        ix2 = ix1 % M2;
        ix1 = (IA1*ix1 + IC1) % M1;
        ix3 = ix1 % M3;
        for (j = 0; j < 97; j++)
        {
            ix1 = (IA1*ix1 + IC1) % M1;
            ix2 = (IA2*ix2 + IC2) % M2;
            r[j] = (ix1 + ix2*RM2)*RM1;
        }
        *idum = 1;
    }


Pseudo Random Number Generators (contd)

    /* generate next number for each sequence */
    ix1 = (IA1*ix1 + IC1) % M1;
    ix2 = (IA2*ix2 + IC2) % M2;
    ix3 = (IA3*ix3 + IC3) % M3;
    /* randomly sample the r vector */
    j = (97*ix3)/M3;
    if (j > 96 || j < 0)
        cout << "Error in random number generator\n";
    temp = r[j];
    /* replace r[j] with the next value in the sequence */
    r[j] = (ix1 + ix2*RM2)*RM1;
    return temp;
}

Pseudo Random Number Generators (contd)

Exponential distribution random numbers
 Simple form:

p(x) = e^{−x}

 If a system has energy levels E_0, E_1, …, E_n, …, the probability of
the system's being at energy level E_n at temperature T is given by

P(E_n) ∝ e^{−(E_n − E_0)/(k_B T)},

where k_B is the Boltzmann constant and E_0 is the zero-point energy.
 One way to generate the exponential distribution is to relate it to a
uniform distribution random number.


Pseudo Random Number Generators (contd)

 If we have a uniform distribution random number p(y) = 1
for y ∈ [0, 1], we can relate it to an exponential
distribution, p(x) = e^{−x} for x ∈ [0, ∞), by

p(x) dx = e^{−x} dx = p(y) dy = dy

which gives, upon integration from 0 to x,

y(x) − y(0) = 1 − e^{−x}

We can set y(0) = 0 and invert the above equation to obtain

x = −ln(1 − y)

which relates the exponential distribution of x to the uniform
distribution of y ∈ [0, 1].

[Histograms of the generated exponential deviates: 10000 trials (left) and 1000000 trials (right), each sorted into 100 intervals.]


Pseudo Random Number Generators (contd)

 The following function routine is an implementation of the
exponential random number generator:

      function eranf()
      common /cseed/ iseed
      eranf = -alog(1.0 - ranf())
      return
      end

Pseudo Random Number Generators (contd)

 Another useful distribution in physics is the
Gaussian distribution

p(x) = (1/√(2πσ²)) e^{−x²/(2σ²)}

where σ² is the variance of the distribution.
 We can take σ as 1 for the moment. The distribution with
σ ≠ 1 can be obtained by rescaling x by σ in the
generator with σ = 1.
 We can use a uniform distribution p(φ) = 1/(2π) for
φ ∈ [0, 2π] and an exponential distribution p(t) = e^{−t}
for t ∈ [0, ∞) to obtain two Gaussian distributions
p(x) and p(y).


Pseudo Random Number Generators (contd)

 We can relate the product of the uniform distribution and an
exponential distribution to the product of two Gaussian
distributions by

p(φ) dφ p(t) dt = (1/2π) e^{−t} dφ dt
               = (1/2π) e^{−(x²+y²)/2} dx dy = p(x) dx p(y) dy

 If we view the above equation as a coordinate transformation
from the polar system (r, φ), with r = (2t)^{1/2}, into the
rectangular system (x, y), then

x = (2t)^{1/2} cos φ
y = (2t)^{1/2} sin φ

Pseudo Random Number Generators (contd)

 Construct two Gaussian distributions:

      subroutine grnf(x,y)
      common /cseed/ iseed
      pi = 4.0*atan(1.0)
      r1 = -alog(1.0-ranf())
      r2 = 2.0*pi*ranf()
      r1 = sqrt(2.0*r1)
      x = r1*cos(r2)
      y = r1*sin(r2)
      return
      end


Monte Carlo Integration

 The calculation of π is actually an example of "hit and miss"
integration. Its success relies on generating random
numbers on an appropriate interval. Sample-mean integration is
a more general and accurate alternative to hit-and-miss
integration. The general integral

F = ∫_a^b f(x) dx

is rewritten:

F = ∫_a^b [f(x)/ρ(x)] ρ(x) dx

where ρ(x) is an arbitrary probability density function.


Monte Carlo Integration (contd)

If a number of trials N are performed by choosing a
random number x_i from the distribution ρ(x) in the
range (a, b), then

F = ⟨f(x)/ρ(x)⟩

where the brackets denote the average over all N trials
(this is standard notation for simulated quantities). If we
choose ρ(x) to be uniform,

ρ(x) = 1/(b − a),  a ≤ x ≤ b

Monte Carlo Integration (contd)

 The integral can be obtained from

F ≈ [(b − a)/N] ∑_{i=1}^{N} f(x_i)


Sampling
 Monte Carlo works by using random numbers to sample the
"solution space" of the problem to be solved. In our
examples of the calculation of π, simulation of radioactive
decay and Monte Carlo integration, we employed a "simple"
sampling technique in which all points were accepted with
equal probability.

 Such simple sampling is inefficient because we must obtain
many points to obtain an accurate solution. Indeed, the
statistical error of the solution decreases only as the
square root of the number of points sampled.

 However, not all points in the solution space contribute
equally to the solution. Some are of major importance,
whereas others could be safely ignored without adversely
affecting the accuracy of our calculation.

Sampling (contd)
 In view of this, rather than sampling the entire region
randomly, it would be computationally advantageous
to sample those regions which make the largest
contribution to the solution while avoiding low-
contributing regions. This type of sampling is referred
to as "importance sampling."

 To illustrate the difference between simple sampling and
importance sampling, consider the difference
between estimating an integral via simple sampling:

F = (1/N) ∑_{i=1}^{N} f(x_i)

Sampling (contd)
and using importance sampling:

F = (1/N) ∑_{i=1}^{N} f(x_i)/ρ(x_i)

In the latter case, each point is chosen in accordance with
the anticipated importance of the value to the function,
and the contribution it makes is weighted by the inverse of
the probability ρ(x_i) of choice. Unlike the simple sampling
scheme, the estimate is no longer a simple average of all
points sampled but a weighted average.
 It should be noted that this type of sampling introduces a
bias which must be eliminated. Importance sampling will
be discussed in greater detail later.

Importance Sampling
 Put more quadrature points in regions where the
integral receives its greatest contributions.
 The 1-dimensional example:

f(x) = 3x²

F = ∫_0^1 3x² dx = 1

ρ(x) = 2x

 Most contribution comes from the region near x = 1.
 Choose quadrature points not uniformly, but
according to the distribution ρ(x).
 A linear form is one possibility.

 How to revise the integral to remove the bias?


The Importance-Sampled Integral
 Consider a rectangle-rule quadrature with
unevenly spaced abscissas:

F ≈ ∑_{i=1}^{N} f(x_i) Δx_i;  Δx_i = 1/(N ρ(x_i))

 The spacing between points is the reciprocal of the
local number of points per unit length: greater ρ(x)
→ more points → smaller spacing, i.e. points are
chosen according to ρ(x).
 Importance-sampled rectangle rule:

F ≈ (1/N) ∑_{i=1}^{N} f(x_i)/ρ(x_i)

 The same formula holds for MC sampling.

The Importance-Sampled Integral
 Choose quadrature points not uniformly, but
according to the distribution ρ(x) = 2x.
 Transform from a uniform random number u into a
ρ(x)-distributed random number:

ρ(x) dx = 2x dx = p(u) du = du

u = ∫_0^x 2x′ dx′ = x²

⇒ x = √u


The Importance-Sampled Integral
 Consider the integral F = ∫_0^1 3x² dx:

F ≈ (1/N) ∑_{i=1}^{N} f(x_i)/ρ(x_i)

Normalize the linear PDF ρ(x) = cx:

∫_0^1 ρ(x) dx = 1 = ∫_0^1 cx dx = (c/2) x² |_0^1 = c/2

⇒ c = 2,  so ρ(x) = 2x


The Importance-Sampled Integral
 Transform from a uniform random number u into a
ρ(x)-distributed random number, now on a general
interval [a, b] with ρ(x) = 2x/(b² − a²):

ρ(x) dx = p(u) du = du

u = ∫_a^x [2x′/(b² − a²)] dx′ = (x² − a²)/(b² − a²)

⇒ x = √(u(b² − a²) + a²)

The Importance-Sampled Integral
 Exercise: what is the result if a different PDF ρ(x) is used?


Probability Distribution Functions
 As illustrated above, Monte Carlo methods work by
sampling according to a probability distribution function
(PDF).
 In the previous examples, using simple sampling, we
have in effect been sampling from a "uniform distribution
function" in which every selection is made with equal
probability.
 In statistics, the most important PDF is probably the
Gaussian or Normal distribution:

p(x) = (1/√(2πσ²)) e^{−(x − ⟨x⟩)²/(2σ²)}

where ⟨x⟩ is the mean of the distribution and σ² is the
variance.

Probability Distribution Functions (contd)
 This function has the well-known "bell shape" and gives
a good representation of such things as the distribution
of the intelligence quotient (IQ) in the community,
centered on a "normal" value of IQ = 100.

 In scientific applications, a well-known distribution
function is the Boltzmann distribution, which relates the
fraction of particles with energy E to the temperature
(T) and Boltzmann's constant (k_B):

P(E) ∝ e^{−E/(k_B T)}

 Indeed, this distribution will play a central role in many of
the Monte Carlo algorithms.
