Chapter 2 - Random Number Generation


Random Number Generation

For applying the Monte Carlo method we need an enormous amount of random samples from a specified distribution.
➢ How do we efficiently produce those?
➢ Needed: randomness (and unpredictability)

Real randomness could be generated by a real experiment:

➢ Throwing a die
➢ Observing radioactive decay (or other physical phenomena)

Comments
+ Absolutely unpredictable
− Not practical (time consuming, expensive, difficult to implement on a computer)
− Not reproducible
• Reproducibility is important: comparability of methods and implementations, debugging, variance reduction, etc.
• We need an algorithm that produces numbers that “appear random” (pass statistical tests) → pseudorandom numbers
Random Number Generators
Definition
A random number generator (RNG) is a deterministic algorithm that produces sequences of numbers that mimic randomness → “pseudorandom numbers”.
Remark
• RNGs usually have to be initialized with a seed (important!)
• RNGs produce a finite sequence of pseudorandom numbers which is then repeated. The length of this sequence, denoted by ℓ, is called the period length of the RNG.
• There are many different RNGs and their quality differs:
• Speed, memory requirements, precision, etc.
• One should test an RNG before using it → be aware which tests it fails
Uniformly distributed random numbers
• The computer usually provides uniformly distributed random numbers U_i ∼ U[0, 1]

• Why are uniform rvs important?

Let F : ℝ → [0, 1] be a distribution function and U ∼ U(0, 1); then

F^(−1)(U) ∼ F

• In general, the basic random ingredient for the generation of rvs is a uniformly distributed rv
Linear congruential generators (LCG) (1)
One of the first RNGs, introduced by Lehmer (1949)

Algorithm: LCG
• Set s_n ∈ {0, 1, …, m − 1} such that
s_(n+1) = (a s_n + c) mod m, ∀ n ≥ 0
where
➢ m ∈ ℕ \ {0} is the modulus
➢ a ∈ ℕ is the multiplier, a < m
➢ c ∈ ℕ is the increment, c < m
➢ s_0 ∈ ℕ is the seed (initial point), s_0 < m
• Numbers in (0, 1) are obtained via u_n = s_n / m

Note: if c = 0 and s_0 ≠ 0, the maximal possible period is m − 1. We get a (multiplicative) generator:
s_n = a^n s_0 mod m
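The recurrence and the scaling u_n = s_n / m can be sketched in a few lines of Python (an illustration, not a production generator; the parameters in the usage line are the classical “minimal standard” choice LCG(2^31 − 1, 16807, 0) of Lewis, Goodman and Miller):

```python
def lcg(m, a, c, seed):
    """Yield u_n = s_n / m for the LCG s_{n+1} = (a*s_n + c) mod m."""
    s = seed % m
    while True:
        s = (a * s + c) % m
        yield s / m

# Usage: the classical "minimal standard" multiplicative generator
gen = lcg(2**31 - 1, 16807, 0, seed=12345)
u = [next(gen) for _ in range(5)]   # five numbers in (0, 1)
```

The same seed always reproduces the same sequence, which is exactly the reproducibility property discussed below.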
Linear congruential generators (LCG) (2)
• Generator often denoted by LCG(m, a, c)

How do we choose the coefficients of an LCG to achieve a full period?

➢ m = 2^k: faster calculation
➢ c and m are relatively prime
➢ Every prime number that divides m divides a − 1
➢ a − 1 is divisible by 4 if m is

Property: if the period of such a generator is m (or m − 1 in the case of c = 0), we call this a full period (= maximum possible period length)
A long period is obtained by choosing a very large modulus
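For c ≠ 0, the last three bullet points are exactly the conditions of the Hull–Dobell theorem; they can be checked mechanically, as in this sketch:

```python
from math import gcd

def has_full_period(m, a, c):
    """Hull-Dobell conditions for LCG(m, a, c) with c != 0:
    gcd(c, m) = 1; every prime dividing m divides a - 1;
    if 4 divides m then 4 divides a - 1."""
    if c == 0 or gcd(c, m) != 1:
        return False
    n, p = m, 2
    while p * p <= n:                    # trial division over prime factors of m
        if n % p == 0:
            if (a - 1) % p != 0:
                return False
            while n % p == 0:
                n //= p
        p += 1
    if n > 1 and (a - 1) % n != 0:       # remaining large prime factor
        return False
    if m % 4 == 0 and (a - 1) % 4 != 0:
        return False
    return True
```

For example, LCG(8, 5, 1) satisfies all three conditions and indeed visits all 8 states, while LCG(8, 3, 1) fails the divisibility-by-4 condition.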
Linear congruential generators (LCG) (3)
Properties of an LCG
• Reproducibility → the sequence is easily reproduced by using the same seed
• Speed → choosing m to be a power of 2 speeds up the simulation
• Portability → can produce the same sequence of values on all computers

Disadvantages
• The period of an LCG is ≤ m (the period is relatively small)
• LCGs usually do not have good randomness properties → correlation between the random numbers

One should not use an LCG for simulation!

Extensions of LCG (1)
• Multiple recursive generators (MRG) are a generalization of linear generators
• Easy to implement
• With the same modulus m their periods are much longer and their structure is improved (with c = 0)
Algorithm: MRG
• Set s_n ∈ {0, 1, …, m − 1} such that
s_n = (a_1 s_(n−1) + ⋯ + a_k s_(n−k)) mod m, ∀ n ≥ k
where
➢ m ∈ ℕ \ {0} is the modulus
➢ a_i ∈ ℕ are the multipliers, i = 1, …, k, a_i < m
➢ k with a_k ≠ 0 is the order of the recursion, k ≥ 2
➢ s_0 = (s_0, s_1, …, s_(k−1)) ∈ ℕ^k is the seed (initial point), s_i < m, i = 0, …, k − 1
• Numbers in (0, 1) are obtained via u_n = s_(n+k−1) / m, n > 0
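The recursion can be sketched directly (illustrative; the tiny parameters in the test are arbitrary — practical MRGs such as L’Ecuyer’s MRG32k3a use carefully chosen large moduli and coefficients):

```python
def mrg(m, coeffs, seed):
    """MRG: s_n = (a_1*s_{n-1} + ... + a_k*s_{n-k}) mod m, yields s_n / m."""
    state = list(seed)                  # (s_0, ..., s_{k-1})
    while True:
        # pair a_1 with the most recent value s_{n-1}, a_k with s_{n-k}
        s = sum(a * x for a, x in zip(coeffs, reversed(state))) % m
        state = state[1:] + [s]         # slide the window of k past states
        yield s / m
```

With k = 1 this reduces to a multiplicative LCG; larger k buys the longer period m^k − 1 discussed on the next slide.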
Extensions of LCG (2)
Advantages of an MRG
• With modulus m, the maximal possible period length is m^k − 1, with k the order of the recurrence
• Hence, for k large, very long periods are possible even with a small modulus
• If m is a prime number, it is possible to choose the coefficients a_i so that the maximal period can be achieved
• If nearly all a_i are 0, then the algorithm is very fast, but the structure of the random numbers is concentrated in the same region, leaving many gaps in space
• If m and k are large with many coefficients a_i ≠ 0, then the structure is usually excellent, but the algorithm becomes rather slow
Testing and analyzing RNGs
Statistical tests: the RN sequences should be uniformly distributed on (0, 1) and appear to be independent
→ RNs u_n are tested against the null hypothesis (H_0):
The numbers u_1, …, u_n are realizations of i.i.d. U(0, 1)-distributed random variables.

Methods to check for uniformity

• χ² test
• Kolmogorov–Smirnov test

Method to check for independence

• Autocorrelation test
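As a sketch, the χ² uniformity check can be implemented with equal-width bins; the 5% critical value for 9 degrees of freedom (≈ 16.92) is taken from standard χ² tables, and Python’s built-in Mersenne Twister serves as the sequence under test:

```python
import random

def chi2_uniformity(us, bins=10):
    """Chi-square statistic for H0: the u_i are U(0,1).
    Compare against a chi-square critical value with (bins - 1) df."""
    n = len(us)
    counts = [0] * bins
    for u in us:
        counts[min(int(u * bins), bins - 1)] += 1   # equal-width bins on [0,1)
    expected = n / bins
    return sum((c - expected) ** 2 / expected for c in counts)

random.seed(42)
stat = chi2_uniformity([random.random() for _ in range(10000)])
# reject H0 at the 5% level if stat > 16.92 (chi-square table, 9 df)
```

A strongly non-uniform sample (all values in one bin) produces a huge statistic, while an evenly spread sample gives a statistic near zero.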
Generating non-uniform random variables

➢ We have seen how to generate random variables U_i ∼ U[0, 1]
➢ Let X be a non-uniform random variable. How can we generate iid samples from X?
Inversion Method
Theorem
Denote by F(x) := P(X ≤ x) the distribution function of X and by U ∼ U[0, 1] a uniform random variable. Then,
F^(−1)(U) ∼ X

Algorithm: Inversion method

1. Generate a random number u from U[0, 1]
2. Return x = F^(−1)(u)

Only practical when F^(−1) is explicitly available or a fast evaluation of F^(−1) is possible!

Inversion Method - Examples
• Exponential distribution: F(x) = 1 − e^(−λx)
X ∼ −(1/λ) log(1 − U) → X ∼ −(1/λ) log(U)
• Pareto distribution: F(x) = 1 − (1 + x/θ)^(−α)
• Burr distribution: F(x) = 1 − (1 + (x/θ)^γ)^(−α)
• Weibull distribution: F(x) = 1 − e^(−(x/θ)^τ)

• Discrete distribution: P(X = c_i) = p_i, i = 1, …, n
q_0 = 0, q_i = p_1 + ⋯ + p_i, i = 1, …, n
X = c_k, with k such that q_(k−1) < U ≤ q_k
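Both the continuous and the discrete case above translate into short sketches; the exponential quantile −log(1 − u)/λ is the one derived above, and the discrete case is exactly the search for k with q_(k−1) < u ≤ q_k:

```python
import bisect
import math
import random

def inversion_sample(F_inv):
    """Inversion method: draw u ~ U(0,1) and return F^{-1}(u)."""
    return F_inv(random.random())

def sample_discrete(values, probs, u):
    """Discrete inversion: return c_k with q_{k-1} < u <= q_k."""
    q, acc = [], 0.0
    for p in probs:
        acc += p
        q.append(acc)                       # cumulative sums q_i
    return values[bisect.bisect_left(q, u)]  # smallest k with q_k >= u

# Usage: exponential with rate lam, F^{-1}(u) = -log(1 - u) / lam
lam = 2.0
x = inversion_sample(lambda u: -math.log(1.0 - u) / lam)
```

`bisect_left` does the threshold search in O(log n), which matters when the discrete distribution has many atoms.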
Acceptance-rejection method (1)
• Some distributions are so complicated that the inversion of the cdf F is too difficult or only approximations exist → given the density, the acceptance-rejection method can be much faster and even easier
• We want to generate samples from a random variable X with density function f
• We can generate samples from a random variable Y with density function g
• There exists a constant M with f(x) ≤ M g(x) ∀ x

Algorithm: Acceptance-rejection method

1. Generate a uniformly distributed random number u on [0, 1]
2. Generate a random number y from the distribution of Y
3. If u ≤ f(y)/(M g(y)) then return y, else go to 1
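The three steps can be sketched generically; the usage line samples from the density f(x) = 2x on [0, 1] with a uniform proposal (so g ≡ 1 and M = 2), a toy example chosen purely for illustration:

```python
import random

def accept_reject(f, g_sampler, g_pdf, M):
    """Acceptance-rejection: sample from density f, where f(x) <= M*g(x)."""
    while True:
        y = g_sampler()                   # proposal from g (step 2)
        u = random.random()               # uniform on [0, 1] (step 1)
        if u <= f(y) / (M * g_pdf(y)):    # accept with probability f/(M*g) (step 3)
            return y

# Usage: f(x) = 2x on [0, 1], uniform proposal g = 1, bound M = 2
random.seed(0)
samples = [accept_reject(lambda x: 2 * x, random.random, lambda x: 1.0, 2.0)
           for _ in range(5000)]
```

Here 1/M = 1/2, so on average two proposals are needed per accepted sample, matching the acceptance-probability remark on the next slide.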
Acceptance-rejection method (2)
Remarks
• We need a fast method to generate from Y
• We need more than one uniform random variable to generate one random number from the distribution of X
• The acceptance probability is 1/M and should be close to 1

Proof – Acceptance probability

P(U ≤ f(Y)/(M g(Y))) = E[ P(U ≤ f(Y)/(M g(Y)) | Y) ] = E[ f(Y)/(M g(Y)) ] = ∫ f(y)/(M g(y)) · g(y) dy = 1/M
Proof - Acceptance-rejection method
Denote by X^s a random variable that is produced by the acceptance-rejection method. Then, for every x,
P(X^s ≤ x) = P(Y ≤ x | U ≤ f(Y)/(M g(Y))) = P(Y ≤ x, U ≤ f(Y)/(M g(Y))) / (1/M)
= M ∫_(−∞)^x [f(y)/(M g(y))] g(y) dy = ∫_(−∞)^x f(y) dy = F(x),
i.e. X^s is distributed according to F.
Generating standard normal random variables (1)
Example (Acceptance-rejection method)
• Set g(x) = e^(−x) (the Exp(1) density, x ≥ 0) and f(x) = √(2/π) e^(−x²/2) (the half-normal density).
• f(x)/g(x) = √(2/π) e^(x − x²/2). This is maximized for x = 1, hence

M = √(2e/π) and 1/M ≈ 0.76

Algorithm: Acceptance-rejection method for normal distribution

1. Generate an Exp(1) distributed random number y
2. Generate a uniform random number u
3. If u ≤ e^(y − y²/2 − 1/2) = e^(−(y−1)²/2) go to 4, else go to 1
4. Generate a uniform random number u_1
5. If u_1 ≤ 1/2 return −y, else return y
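The five steps can be sketched as follows, with the Exp(1) proposal itself generated by inversion and the acceptance condition written as e^(−(y−1)²/2):

```python
import math
import random

def sample_normal_ar():
    """Standard normal via acceptance-rejection with an Exp(1) proposal."""
    while True:
        y = -math.log(1.0 - random.random())      # Exp(1) proposal (step 1)
        u = random.random()                        # step 2
        if u <= math.exp(-(y - 1.0) ** 2 / 2.0):   # accept half-normal (step 3)
            u1 = random.random()                   # step 4
            return -y if u1 <= 0.5 else y          # attach a random sign (step 5)
```

Steps 3 and 5 together turn the accepted half-normal sample into a symmetric N(0, 1) sample.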
Box-Muller method (1)
• Classical alternative: the Box-Muller method samples from the two-dimensional standard normal distribution → the method belongs to the group of so-called polar methods
• Let X and Y be iid standard normally distributed rvs and define
R = √(X² + Y²), θ = arctan(Y/X), so that X = R cos θ, Y = R sin θ
Then,
θ ∼ U(0, 2π), R = √(2E_1) where E_1 ∼ Exp(1)

The proof of the above will be given in class.

Box-Muller method (2)

Algorithm: Box-Muller method

1. Generate two independent random numbers u_1, u_2 ∼ U(0, 1]
2. Obtain two independent standard normal random numbers via
y_1 = √(−2 log(u_1)) sin(2π u_2) ∼ N(0, 1)
y_2 = √(−2 log(u_1)) cos(2π u_2) ∼ N(0, 1)
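The two formulas, as a sketch (1 − random() is used so that u_1 lies in (0, 1] and log(0) is avoided):

```python
import math
import random

def box_muller():
    """Return two independent N(0,1) samples from two uniforms."""
    u1 = 1.0 - random.random()          # in (0, 1], avoids log(0)
    u2 = random.random()
    r = math.sqrt(-2.0 * math.log(u1))  # radius R = sqrt(2 * Exp(1))
    return (r * math.sin(2.0 * math.pi * u2),
            r * math.cos(2.0 * math.pi * u2))
```

Each call consumes exactly two uniforms and produces two normals, with no rejection step.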
Box-Muller method – second version
When U ∼ U(0, 1), then
(X, Y) = (sin(2πU), cos(2πU)) is uniform on the circle X² + Y² = 1
This allows avoiding the trigonometric functions: a uniformly distributed direction is obtained by rejection from the unit disk.
Algorithm: Box-Muller method – second version
1. Generate two independent random numbers u_1, u_2 ∼ U(0, 1] and set v_i = 2u_i − 1, i = 1, 2
2. If v_1² + v_2² > 1 return to step 1
3. Generate an independent random number u_3 ∼ U(0, 1]
4. Return two independent standard normal random numbers via
y_1 = √(−2 log(u_3)) · v_1 / √(v_1² + v_2²) ∼ N(0, 1)
y_2 = √(−2 log(u_3)) · v_2 / √(v_1² + v_2²) ∼ N(0, 1)
(The shift to v_i ∈ (−1, 1) makes the accepted direction uniform on the full circle; with u_1, u_2 ∈ (0, 1] alone, the outputs would be restricted to the first quadrant.)
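A sketch of this trigonometry-free variant; note that the direction is drawn from the full unit disk via v_i = 2u_i − 1 (drawing u_i from (0, 1] directly would restrict the direction to the first quadrant and the outputs would not be standard normal):

```python
import math
import random

def polar_normal():
    """Trig-free Box-Muller variant: rejection-sampled direction + Exp radius."""
    while True:
        v1 = 2.0 * random.random() - 1.0     # uniform on (-1, 1)
        v2 = 2.0 * random.random() - 1.0
        s = v1 * v1 + v2 * v2
        if 0.0 < s <= 1.0:                   # accept points inside the unit disk
            break
    r = math.sqrt(-2.0 * math.log(1.0 - random.random()))  # u_3 in (0, 1]
    d = math.sqrt(s)                         # normalize (v1, v2) to the circle
    return r * v1 / d, r * v2 / d
```

The disk rejection accepts with probability π/4 ≈ 0.785, trading a few extra uniforms for the removal of the sin/cos calls.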
Further Examples
