Simulation
F (x) = P (X ≤ x) = 1 − e−λx , x ≥ 0.
(Recall that P (U ≥ y) = 1 − y, y ∈ (0, 1).) It turns out that this kind of clever "transformation" involves only the inverse function of a CDF. We now present this general method.
Let F (x), x ∈ IR, denote any cumulative distribution function (cdf) (continuous or not). In
other words, F (x) = P (X ≤ x), x ∈ IR, for some random variable X. Recall that F : IR −→
[0, 1] is thus a non-negative and non-decreasing (monotone) function that is continuous from
the right and has left hand limits, with values in [0, 1]; moreover F (∞) = limx→∞ F (x) = 1
and F (−∞) = limx→−∞ F (x) = 0. Our objective is to generate (simulate) rvs X distributed
as F ; that is, we want to simulate a rv X such that P (X ≤ x) = F (x), x ∈ IR.
Define the generalized inverse of F , F −1 : [0, 1] −→ IR, via

F −1 (y) = inf{x ∈ IR : F (x) ≥ y}, y ∈ [0, 1].    (1)
Proposition 1.1 (The Inverse Transform Method) Let F (x), x ∈ IR, denote any cumu-
lative distribution function (cdf ) (continuous or not). Let F −1 (y), y ∈ [0, 1] denote the inverse
function defined in (1). Define X = F −1 (U ), where U has the continuous uniform distribution
over the interval (0, 1). Then X is distributed as F , that is, P (X ≤ x) = F (x), x ∈ IR.
Proof : We must show that P (F −1 (U ) ≤ x) = F (x), x ∈ IR. First suppose that F is continuous.
Then we will show that (equality of events) {F −1 (U ) ≤ x} = {U ≤ F (x)}, so that taking
probabilities (and letting a = F (x) in P (U ≤ a) = a) yields the result: P (F −1 (U ) ≤ x) =
P (U ≤ F (x)) = F (x).
To this end: F (F −1 (y)) = y and so (by monotonicity of F ) if F −1 (U ) ≤ x, then U =
F (F −1 (U )) ≤ F (x), or U ≤ F (x). Similarly F −1 (F (x)) = x and so if U ≤ F (x), then
F −1 (U ) ≤ x. We conclude equality of the two events as was to be shown. In the general
(continuous or not) case, it is easily shown that

{U < F (x)} ⊆ {F −1 (U ) ≤ x} ⊆ {U ≤ F (x)},

which yields the same result after taking probabilities (since P (U = F (x)) = 0, U being a
continuous rv).
1.1 Examples
The inverse transform method can be used in practice as long as we are able to get an explicit
formula for F −1 (y) in closed form. We illustrate with some examples. We use the notation
U ∼ unif (0, 1) to denote that U is a rv with the continuous uniform distribution over the
interval (0, 1).
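As a quick illustration, the exponential cdf F (x) = 1 − e−λx from the opening of these notes inverts in closed form to F −1 (y) = − ln(1 − y)/λ, so X = − ln(1 − U )/λ is exponential with rate λ. Here is a minimal sketch in Python (the function name and its optional `u` argument are our own, added to make the code testable; they are not from the notes):

```python
import math
import random

def exponential_inverse_transform(lam, u=None):
    """Generate an exponential(lam) rv via X = F^{-1}(U) = -ln(1 - U)/lam."""
    if u is None:
        u = random.random()  # U ~ unif(0,1)
    return -math.log(1.0 - u) / lam
```

Passing an explicit `u` lets one check the inversion directly: with u = 1 − e−λx the function returns exactly x.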
This is known as the discrete inverse-transform method. It easily can be extended to cover
discrete random variables that are not necessarily non-negative. The algorithm is easily
verified directly by recalling that P (a < U ≤ b) = b − a, for 0 ≤ a < b ≤ 1; here we use
a = p(0) + · · · + p(k − 1) < b = p(0) + · · · + p(k), and so b − a = p(k).
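The discrete inverse-transform method described above can be sketched as follows (a Python sketch; the function name and the guard against floating-point round-off in the final line are our additions):

```python
import random

def discrete_inverse_transform(p, u=None):
    """Generate X with P(X = k) = p[k], k = 0, 1, ..., by returning the
    first k such that p(0) + ... + p(k) >= U (discrete inverse transform)."""
    if u is None:
        u = random.random()  # U ~ unif(0,1)
    cumulative = 0.0
    for k, pk in enumerate(p):
        cumulative += pk
        if u <= cumulative:
            return k
    return len(p) - 1  # guard: round-off may leave the total slightly below 1
```

Only one uniform is consumed per copy of X, as noted later when comparing algorithms.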
Since 1 − U is also unif (0, 1), we can re-do the above by replacing U by 1 − U and obtain
Another Algorithm for generating a Bernoulli (p) rv X: Generate U ∼ unif (0, 1); if
U ≤ p, set X = 1; otherwise set X = 0.
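The 1 − U trick for a Bernoulli (p) rv reduces to a one-line comparison; a sketch (function name ours):

```python
import random

def bernoulli(p, u=None):
    """Bernoulli(p) via the 1 - U trick: output 1 if U <= p, else 0."""
    if u is None:
        u = random.random()  # U ~ unif(0,1)
    return 1 if u <= p else 0
```

(Since U is continuous, whether the comparison is strict or not does not change the distribution.)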
One could, in principle, use the discrete inverse-transform method with these p(k), but
we also can note that X can be represented (in distribution) as the sum of n iid Bernoulli
(p) rvs, Y1 , . . . , Yn ;
X = Y1 + Y2 + · · · + Yn ,
The advantage of this algorithm is its simplicity: we do not need to do the various compu-
tations involving the p(k). On the other hand, this algorithm requires n uniforms for each
copy of X, versus only one uniform when using the discrete inverse-transform method.
Thus we might not want to use this algorithm when n is quite large.
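The sum-of-Bernoullis representation above can be sketched as follows (a Python sketch; the function name and the optional `uniforms` argument, which lets the n uniforms be supplied for testing, are our additions):

```python
import random

def binomial_as_sum(n, p, uniforms=None):
    """Binomial(n, p) as the sum of n iid Bernoulli(p) rvs:
    Y_i = 1 if U_i <= p else 0, and X = Y_1 + ... + Y_n."""
    if uniforms is None:
        uniforms = [random.random() for _ in range(n)]  # n uniforms per copy of X
    return sum(1 for u in uniforms if u <= p)
```

Note how the code makes the trade-off explicit: n uniforms are drawn for every copy of X.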
Poisson approximation to the binomial distribution
In fact, when n is very large and p is small, it can be proved (there is a theorem lurking
here, stated below) that the distribution of X is very approximately the Poisson distribu-
tion with mean np: For α > 0, consider a sequence of Binomial (n, p(n)) rvs Xn in which
p(n) = α/n, n ≥ 1. Note how E(Xn ) = np(n) = α, n ≥ 1, but the distribution of Xn
changes as n increases: more Bernoulli trials are performed but with a decreasing proba-
bility of success; p(n) → 0 as n → ∞, even though each Xn has the same expected number
of successes, α.
Then Xn converges in distribution to the Poisson distribution with mean α:

lim_{n→∞} P (Xn = k) = e−α α^k / k! , k ≥ 0.
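This limit can be checked numerically: for fixed α, the Binomial (n, α/n) pmf should approach the Poisson (α) pmf as n grows. A small sketch (all function names ours; the choice of α = 2 and the k-range are arbitrary illustrations):

```python
import math

def binomial_pmf(n, p, k):
    """P(X_n = k) for X_n ~ Binomial(n, p)."""
    return math.comb(n, k) * p**k * (1.0 - p)**(n - k)

def poisson_pmf(alpha, k):
    """e^{-alpha} alpha^k / k!"""
    return math.exp(-alpha) * alpha**k / math.factorial(k)

def max_pmf_gap(alpha, n, kmax=20):
    """Largest |Binomial(n, alpha/n) - Poisson(alpha)| pmf difference
    over k = 0, ..., kmax; should shrink as n grows."""
    p = alpha / n
    return max(abs(binomial_pmf(n, p, k) - poisson_pmf(alpha, k))
               for k in range(kmax + 1))
```

For α = 2, the gap is already a few parts in a thousand at n = 100 and far smaller at n = 10,000.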
required yields Y . Then we get X = N (1) = Y − 1 as our desired Poisson. Here then is
the resulting algorithm:
Alternative algorithm for generating a Poisson rv X with mean α:
i Set X = 0, P = 1
ii Generate U ∼ unif (0, 1), set P = U P
iii If P < e−α , then go to iv. Otherwise, if P ≥ e−α , set X = X + 1 and go back to ii.
iv Output X.
Note that, unlike the inverse transform method, which would require only one U , the
above algorithm requires a random number of Ui (Y = X + 1 uniforms, to be precise),
unknown in advance. On the other hand, this algorithm does not require computing pieces
like e−α α^k /k!. On average, the number of Ui required is E(X + 1) = α + 1. If α is not
too big, then this algorithm can be considered very efficient.
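The product-of-uniforms algorithm above can be sketched as follows (a Python sketch; the function name and the optional `rng` hook, which allows deterministic uniforms for testing, are our additions):

```python
import math
import random

def poisson_by_products(alpha, rng=random.random):
    """Generate a Poisson(alpha) rv: multiply uniforms together and count
    how many factors are needed before the product drops below e^{-alpha}."""
    threshold = math.exp(-alpha)  # e^{-alpha}, computed once
    x = 0
    prod = rng()                  # first uniform: P = U * 1
    while prod >= threshold:      # step iii: if P >= e^{-alpha}, continue
        x += 1
        prod *= rng()             # step ii: P = U * P
    return x                      # exactly X + 1 uniforms were consumed
```

The while-loop consumes X + 1 uniforms in total, matching the E(X + 1) = α + 1 average noted above.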