Module 07: Random Variate Generation
Alexopoulos and Goldsman
5/21/10

Outline
1 Introduction
2 Inverse Transform Method
3 Cutpoint Method
4 Convolution Method
5 Acceptance-Rejection Method
6 Composition Method
7 Special-Case Techniques
8 Multivariate Normal Distribution
9 Generating Stochastic Processes
Inverse Transform Method
Inverse Transform Theorem: Suppose X is a random variable with cdf F(x), and let U ∼ U(0, 1). Then the random variable F⁻¹(U) has the same distribution as X.
Example (discrete uniform on {a, a + 1, . . . , b}): Clearly,
X = a + ⌊(b − a + 1)U⌋,
where ⌊·⌋ is the floor function.
Example (Triangular(0, 1, 2)): The cdf is
F(x) = x²/2 if 0 ≤ x < 1, and F(x) = 1 − (x − 2)²/2 if 1 ≤ x ≤ 2.
(a) If U < 1/2, we solve X²/2 = U to get X = √(2U).
(b) If U ≥ 1/2, the only root of 1 − (X − 2)²/2 = U in [1, 2] is
X = 2 − √(2(1 − U)).
Thus, for example, if U = 0.4, we take X = √0.8.
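A minimal Python sketch of this inverse transform (the function name is illustrative):

import random

def triangular_inv(u):
    # F(x) = x^2/2 on [0,1); F(x) = 1 - (x-2)^2/2 on [1,2]
    if u < 0.5:
        return (2.0 * u) ** 0.5
    return 2.0 - (2.0 * (1.0 - u)) ** 0.5

x = triangular_inv(random.random())  # one Triangular(0, 1, 2) variate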
A crude approximation for the standard normal quantile:
Z = Φ⁻¹(U) ≈ [U^0.135 − (1 − U)^0.135]/0.1975.
A more accurate rational approximation:
Z = Φ⁻¹(U) ≈ sign(U − 1/2) [t − (c0 + c1 t + c2 t²)/(1 + d1 t + d2 t² + d3 t³)],
where t = √(−2 ℓn[min(U, 1 − U)]) and
c0 = 2.515517, c1 = 0.802853, c2 = 0.010328,
d1 = 1.432788, d2 = 0.189269, d3 = 0.001308.
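A Python sketch of this approximation (the definition of t above is the standard Hastings-type form, reconstructed from context):

import math

def norm_inv_approx(u):
    # rational approximation to Phi^{-1}(u), constants from the slide
    c0, c1, c2 = 2.515517, 0.802853, 0.010328
    d1, d2, d3 = 1.432788, 0.189269, 0.001308
    q = min(u, 1.0 - u)                  # work with the smaller tail
    t = math.sqrt(-2.0 * math.log(q))
    z = t - (c0 + c1*t + c2*t**2) / (1.0 + d1*t + d2*t**2 + d3*t**3)
    return math.copysign(z, u - 0.5)

For example, norm_inv_approx(0.975) returns about 1.96.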
Example (Geom(p)): The pmf is
P(X = k) = (1 − p)^{k−1} p, k = 1, 2, . . . .
Hence,
X = min{k : 1 − (1 − p)^k ≥ U} = ⌈ℓn(1 − U)/ℓn(1 − p)⌉.
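A one-line Python sketch of this generator:

import math, random

def geom(p):
    # closed-form inverse transform for Geom(p)
    return math.ceil(math.log(1.0 - random.random()) / math.log(1.0 - p))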
Cutpoint Method
Suppose X is a discrete random variable with pmf
P(X = k) = p_k, k = a, a + 1, . . . , b,
and cdf values
q_k = P(X ≤ k), k = a, a + 1, . . . , b.
For fixed m, the cutpoint method of Fishman and Moore computes and stores the cutpoints
I_j = min{k : q_k > (j − 1)/m}, j = 1, . . . , m.
Algorithm CMSET
j ← 0; k ← a − 1; A ← 0
While j < m:
    While A ≤ j:
        k ← k + 1
        A ← m q_k
    j ← j + 1
    I_j ← k
Once the cutpoints are computed, we can use the cutpoint method.
Algorithm CM
Generate U from U(0, 1)
L ← ⌊mU⌋ + 1
X ← I_L
While U > q_X: X ← X + 1
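A Python sketch of CMSET and CM, assuming the cdf values q_a, ..., q_b are stored 0-based in a list q (an illustrative layout):

def cmset(q, a, m):
    # cutpoints I_1..I_m, stored 1-based; q[k - a] holds q_k
    I = [0] * (m + 1)
    j, k, A = 0, a - 1, 0.0
    while j < m:
        while A <= j:
            k += 1
            A = m * q[k - a]
        j += 1
        I[j] = k
    return I

def cm(q, a, I, m, u):
    # one variate from one uniform u in [0, 1)
    L = int(m * u) + 1
    x = I[L]
    while u > q[x - a]:
        x += 1
    return x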
If C_m denotes the number of comparisons needed to generate one X, then
E(C_m) ≤ [(I_2 − I_1 + 1) + · · · + (I_{m+1} − I_m + 1)]/m = (b − I_1 + m)/m,
where I_{m+1} ≡ b.
Convolution Method
Example: If X_1, . . . , X_n are iid Exp(λ), then Y = Σ_{i=1}^n X_i ∼ Erlang_n(λ), and inverse transform gives Y = −(1/λ) ℓn(Π_{i=1}^n U_i). (This only takes one natural log evaluation, so it's pretty efficient.)
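A Python sketch of this Erlang generator:

import math, random

def erlang(n, lam):
    # -(1/lam) * ln(product of n uniforms): one log call total
    prod = 1.0
    for _ in range(n):
        prod *= random.random()
    return -math.log(prod) / lam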
Example: If U_1, . . . , U_n are iid U(0, 1), then the Central Limit Theorem gives
Y = Σ_{i=1}^n U_i ≈ Nor(n/2, n/12).
In particular, let's choose n = 12, and assume that it's "large." Then
Y − 6 = Σ_{i=1}^{12} U_i − 6 ≈ Nor(0, 1).
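The corresponding Python one-liner:

import random

def crude_normal():
    # approximate Nor(0,1): sum of 12 uniforms, minus 6
    return sum(random.random() for _ in range(12)) - 6.0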
U1 + U2 ∼ Triangular(0, 1, 2).
If X_1, . . . , X_n are iid Geom(p), then Σ_{i=1}^n X_i ∼ NegBin(n, p).
If Z_1, . . . , Z_n are iid Nor(0, 1), then Σ_{i=1}^n Z_i² ∼ χ²(n).
Acceptance-Rejection Method
Suppose f(x) is the pdf we want to sample from, and t(x) is a majorizing function with t(x) ≥ f(x) for all x. Set c ≡ ∫_{−∞}^{∞} t(x) dx, h(x) ≡ t(x)/c, and g(x) ≡ f(x)/t(x), so that
f(x) = c × [f(x)/t(x)] × [t(x)/c] = c g(x) h(x),
where
∫_{−∞}^{∞} h(x) dx = 1 (h is a density)
and
0 ≤ g(x) ≤ 1.
Algorithm A-R
Repeat
    Generate U from U(0, 1)
    Generate Y from h(y) (independent of U)
until U ≤ g(Y)
Return X ← Y
Why does it work? Let A denote the acceptance event {U ≤ g(Y)}. Then
P(X ≤ x) = P(Y ≤ x | A) = P(A, Y ≤ x)/P(A). (1)
Example: Generate X with pdf f(x) = 60x³(1 − x)², 0 ≤ x ≤ 1. If we use the constant majorizing function
t(x) = max_{0≤x≤1} f(x) = 2.0736, 0 ≤ x ≤ 1
(which isn't actually very efficient), we get c = ∫₀¹ t(x) dx = 2.0736, so h(x) = 1 for 0 ≤ x ≤ 1, and
g(x) = 60x³(1 − x)²/2.0736.
E.g., if we generate U = 0.13 and Y = 0.25, then it turns out that
U ≤ g(Y) = 60Y³(1 − Y)²/2.0736, so we take X ← 0.25.
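A Python sketch of this particular A-R scheme:

import random

def ar_example():
    # f(x) = 60 x^3 (1-x)^2 on [0,1]; t(x) = 2.0736, so h = U(0,1)
    while True:
        u = random.random()
        y = random.random()          # Y ~ h
        if u <= 60.0 * y**3 * (1.0 - y)**2 / 2.0736:
            return y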
The gamma pdf with shape parameter β and scale parameter α is
f(x) = x^{β−1} e^{−x/α}/(α^β Γ(β)), x > 0.
We'll split the task of generating gamma RV's via the A-R algorithm into two cases, depending on the magnitude of the shape parameter β: β < 1 and β ≥ 1.
Algorithm GAM1 (for β < 1)
b ← (e + β)/e (e is the base of the natural logarithm)
While (True):
    Generate U from U(0, 1); W ← bU
    If W < 1:
        Y ← W^{1/β}; Generate V from U(0, 1)
        If V ≤ e^{−Y}: Return X = αY
    Else:
        Y ← −ℓn[(b − W)/β]
        Generate V from U(0, 1)
        If V ≤ Y^{β−1}: Return X = αY
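A direct Python transcription of GAM1 (a sketch, assuming 0 < beta < 1):

import math, random

def gam1(alpha, beta):
    b = (math.e + beta) / math.e
    while True:
        w = b * random.random()
        if w < 1.0:
            y = w ** (1.0 / beta)
            if random.random() <= math.exp(-y):
                return alpha * y
        else:
            y = -math.log((b - w) / beta)
            if random.random() <= y ** (beta - 1.0):
                return alpha * y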
If β ≥ 1, the value of c for the following A-R algorithm decreases from 4/e = 1.47 to √(4/π) = 1.13 as β increases from 1 to ∞.
Algorithm GAM2 (for β ≥ 1)
a ← (2β − 1)^{−1/2}; b ← β − ℓn(4); c ← β + a^{−1}; d ← 1 + ℓn(4.5)
While (True):
    Generate U1, U2 from U(0, 1)
    V ← a ℓn[U1/(1 − U1)]
    Y ← β e^V; Z ← U1² U2
    W ← b + cV − Y
    If W + d − 4.5Z ≥ 0: Return X = αY
    Else if W ≥ ℓn(Z): Return X = αY
Algorithm POIS1
a ← e^{−λ}; p ← 1; X ← −1
Until p < a:
    Generate U from U(0, 1)
    p ← pU; X ← X + 1
Return X
Thus, we take X = 3.
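A Python sketch of POIS1:

import math, random

def pois1(lam):
    # multiply uniforms until the product drops below e^{-lam}
    a = math.exp(-lam)
    p, x = 1.0, -1
    while p >= a:
        p *= random.random()
        x += 1
    return x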
For large λ, we can instead use the normal approximation
(X − λ)/√λ ≈ Nor(0, 1).
Composition Method
Composition expresses the cdf as a mixture, F(x) = Σ_j p_j F_j(x): generate a positive integer J with P(J = j) = p_j, then generate X from F_J. For instance, with two components and p_1 = p_2 = 1/2, we have
F(x) = (1/2) F_1(x) + (1/2) F_2(x).
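A Python sketch for the 50/50 case (f1_inv and f2_inv are hypothetical inverse cdf's supplied by the caller):

import math, random

def mixture_5050(f1_inv, f2_inv):
    # pick a component with prob 1/2 each, then invert its cdf
    u1, u2 = random.random(), random.random()
    return f1_inv(u2) if u1 < 0.5 else f2_inv(u2)

# e.g., a 50/50 mixture of Exp(1) and Exp(2):
x = mixture_5050(lambda u: -math.log(1.0 - u),
                 lambda u: -math.log(1.0 - u) / 2.0)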
Special-Case Techniques
If Z_1, Z_2 are iid Nor(0, 1), then Z_1² + Z_2² ∼ χ²(2). But
χ²(2) ∼ Exp(1/2).
Recall the Box-Muller variates Z_1 = √(−2 ℓn(U_1)) cos(2πU_2) and Z_2 = √(−2 ℓn(U_1)) sin(2πU_2). But
Z_2/Z_1 = [√(−2 ℓn(U_1)) sin(2πU_2)]/[√(−2 ℓn(U_1)) cos(2πU_2)] = tan(2πU_2) ∼ Cauchy.
Similarly,
Z_2²/Z_1² = tan²(2πU_2) ∼ t²(1) ∼ F(1, 1).
(Did you know that?)
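A Python sketch of Box-Muller and its Cauchy by-product:

import math, random

def box_muller():
    # one pair Z1, Z2 of iid Nor(0,1) variates
    u1, u2 = random.random(), random.random()
    r = math.sqrt(-2.0 * math.log(u1))
    return r * math.cos(2 * math.pi * u2), r * math.sin(2 * math.pi * u2)

def cauchy():
    # Z2/Z1 collapses to tan(2*pi*U)
    return math.tan(2 * math.pi * random.random())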
Order Statistics
Example: If X_1, . . . , X_n are iid Exp(1) and Y = min_i X_i, then
P(Y ≤ y) = 1 − P(Y > y)
= 1 − P(min_i X_i > y)
= 1 − P(all X_i's > y)
= 1 − (e^{−y})^n,
so Y ∼ Exp(n), and we can generate Y directly as −(1/n) ℓn(U).
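In Python, one uniform replaces n of them:

import math, random

def min_of_n_exp1(n):
    # Y = min of n iid Exp(1) RVs has the Exp(n) distribution
    return -math.log(random.random()) / n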
Other Quickies
χ²(n) distribution: If Z_1, Z_2, . . . , Z_n are iid Nor(0, 1), then Σ_{i=1}^n Z_i² ∼ χ²(n).
Multivariate Normal Distribution
The multivariate normal distribution with mean vector µ and covariance matrix Σ has pdf
f(x) = (2π)^{−k/2} |Σ|^{−1/2} exp[−(x − µ)ᵀ Σ⁻¹ (x − µ)/2], x ∈ ℝ^k.
To generate X ∼ Nor_k(µ, Σ), first compute the lower-triangular Cholesky factor C = (c_ij) with CCᵀ = Σ.
Algorithm LTM
For i = 1, . . . , k:
    For j = 1, . . . , i − 1:
        c_ij ← (σ_ij − Σ_{ℓ=1}^{j−1} c_iℓ c_jℓ)/c_jj
        c_ji ← 0
    c_ii ← (σ_ii − Σ_{ℓ=1}^{i−1} c_iℓ²)^{1/2}
Algorithm MN
Generate Z_1, . . . , Z_k iid Nor(0, 1)
i ← 1
Until i > k:
    X_i ← µ_i
    j ← 1
    Until j > i:
        X_i ← X_i + c_ij Z_j
        j ← j + 1
    i ← i + 1
Return X = (X_1, . . . , X_k)
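A pure-Python sketch of LTM and MN together (0-based indexing):

import math, random

def ltm(sigma):
    # lower-triangular C with C C^T = sigma
    k = len(sigma)
    c = [[0.0] * k for _ in range(k)]
    for i in range(k):
        for j in range(i):
            c[i][j] = (sigma[i][j]
                       - sum(c[i][l] * c[j][l] for l in range(j))) / c[j][j]
        c[i][i] = math.sqrt(sigma[i][i] - sum(c[i][l] ** 2 for l in range(i)))
    return c

def mvn(mu, c):
    # X = mu + C Z, with Z iid Nor(0,1)
    k = len(mu)
    z = [random.gauss(0.0, 1.0) for _ in range(k)]
    return [mu[i] + sum(c[i][j] * z[j] for j in range(i + 1)) for i in range(k)]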
Generating Stochastic Processes
Poisson process with rate λ: the interarrival times are iid Exp(λ), so we can generate the arrival times via
T_0 ← 0, T_i ← T_{i−1} − (1/λ) ℓn(U_i), i ≥ 1.
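A Python sketch that collects all arrivals in [0, horizon] (the finite horizon is added so the sketch terminates):

import math, random

def poisson_arrivals(lam, horizon):
    t, times = 0.0, []
    while True:
        t -= math.log(random.random()) / lam   # next Exp(lam) interarrival
        if t > horizon:
            return times
        times.append(t)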
Nonhomogeneous Poisson Process (NHPP): Let
λ(t) = rate (intensity) function at time t,
N(t) = number of arrivals during [0, t].
Then
N(s + t) − N(s) ∼ Poisson(∫_s^{s+t} λ(u) du).
Incorrect NHPP Algorithm [it can "skip" intervals with large λ(t)]
T_0 ← 0; i ← 0
Repeat
    i ← i + 1
    Generate U from U(0, 1)
    T_i ← T_{i−1} − (1/λ(T_{i−1})) ℓn(U)
The following (good) algorithm assumes that λ∗ ≡ sup λ(t) < +∞,
generates potential arrivals with rate λ∗ , and accepts a potential arrival
at time t with probability λ(t)/λ∗ .
Thinning Algorithm
T_0 ← 0; i ← 0
Repeat
    t ← T_i
    Repeat
        Generate U, V from U(0, 1)
        t ← t − (1/λ*) ℓn(U)
    until V ≤ λ(t)/λ*
    i ← i + 1
    T_i ← t
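A Python sketch of thinning (lam is the intensity function λ(t), lam_star an upper bound; the finite horizon is added so the sketch terminates):

import math, random

def nhpp_thinning(lam, lam_star, horizon):
    t, times = 0.0, []
    while True:
        while True:
            u, v = random.random(), random.random()
            t -= math.log(u) / lam_star        # candidate at rate lam_star
            if v <= lam(t) / lam_star:         # accept w.p. lam(t)/lam_star
                break
        if t > horizon:
            return times
        times.append(t)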
A first-order moving average process, MA(1):
Y_i = ε_i + θε_{i−1}, for i = 1, 2, . . .,
where θ is a constant and ε_0, ε_1, ε_2, . . . are iid Nor(0, 1) RV's.
A first-order autoregressive process, AR(1):
Y_i = φY_{i−1} + ε_i, for i = 1, 2, . . .,
where −1 < φ < 1; Y_0 ∼ Nor(0, 1); and the ε_i's are iid Nor(0, 1 − φ²) RV's that are independent of Y_0.
The AR(1) has covariance function Cov(Y_i, Y_{i+k}) = φ^{|k|} for all
k = 0, ±1, ±2, . . ..
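A Python sketch of an AR(1) sample path (Nor(0, 1 − φ²) has variance 1 − φ², so the standard deviation is √(1 − φ²)):

import math, random

def ar1_path(phi, n):
    y = random.gauss(0.0, 1.0)               # Y_0
    path = [y]
    sd = math.sqrt(1.0 - phi * phi)
    for _ in range(n):
        y = phi * y + random.gauss(0.0, sd)  # eps_i ~ Nor(0, 1 - phi^2)
        path.append(y)
    return path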
M/M/1 Queue
Let I_{i+1} denote the interarrival time between the ith and (i + 1)st customers; let S_i be the ith customer's service time; and let W_i denote the ith customer's waiting time before service.
Lindley gives a very nice way to generate a series of waiting times for this simple example:
W_{i+1} = max{W_i + S_i − I_{i+1}, 0}, with W_1 = 0.
(You can model the time in the system with a similar equation.)
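A Python sketch of Lindley's recursion for the first n customers (lam and mu are illustrative names for the arrival and service rates):

import math, random

def mm1_waits(lam, mu, n):
    w, waits = 0.0, [0.0]                          # customer 1 waits 0
    for _ in range(n - 1):
        s = -math.log(random.random()) / mu        # S_i ~ Exp(mu)
        i_next = -math.log(random.random()) / lam  # I_{i+1} ~ Exp(lam)
        w = max(w + s - i_next, 0.0)
        waits.append(w)
    return waits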
Brownian Motion
Take Y_1, Y_2, . . . iid with mean 0 and variance 1, and set
W_n(t) ≡ (1/√n) Σ_{i=1}^{⌊nt⌋} Y_i →d W(t),
where →d denotes convergence in distribution as n gets big, and ⌊·⌋
is the floor function, e.g., ⌊3.7⌋ = 3.
One choice that works well is to take Yi = ±1, each with probability
1/2. Take n at least 100, t = 1/n, 2/n, . . . , n/n, and calculate
W(1/n), W(2/n), . . . , W(n/n).
It really works!
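A Python sketch of the random-walk construction:

import random

def brownian_path(n):
    # approximate W(i/n), i = 1..n, with Y_i = +/-1 each w.p. 1/2
    w, path, step = 0.0, [], n ** -0.5
    for _ in range(n):
        w += step if random.random() < 0.5 else -step
        path.append(w)
    return path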