
EE5110: Probability Foundations for Electrical Engineers, July-November 2015

Lecture 30: The Central Limit Theorem


Lecturer: Dr. Krishna Jagannathan Scribes: Vishakh Hegde

30.1 Central Limit Theorem

In this section, we will state and prove the central limit theorem. Let $\{X_i\}$ be a sequence of i.i.d. random variables having a finite variance. From the law of large numbers we know that for large $n$, the sum $S_n$ is approximately as big as $nE[X]$, i.e.,

$$\frac{S_n}{n} \xrightarrow{i.p.} E[X],$$

$$\Rightarrow \quad \frac{S_n - nE[X]}{n} \xrightarrow{i.p.} 0.$$

Thus, whenever the variance of $X_i$ is finite, the difference $S_n - nE[X]$ grows more slowly than $n$. The Central Limit Theorem (CLT) says that this difference scales as $\sqrt{n}$, and that the distribution of $\frac{S_n - nE[X]}{\sqrt{n}}$ approaches a normal distribution as $n \to \infty$, irrespective of the distribution of $X_i$:

$$\frac{S_n - nE[X]}{\sqrt{n}} \sim N(0, \sigma_X^2).$$
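As a quick numerical illustration of this scaling, here is a minimal simulation sketch (assuming numpy is available; the Exponential(1) distribution, for which $E[X] = 1$ and $\sigma_X = 1$, is an arbitrary choice):

    import numpy as np

    rng = np.random.default_rng(0)
    mean = 1.0                                      # E[X] for Exponential(1)
    for n in [10**2, 10**3, 10**4]:
        X = rng.exponential(1.0, size=(200, n))     # 200 independent runs of length n
        Sn = X.sum(axis=1)
        print(n,
              np.std((Sn - n * mean) / n),          # shrinks roughly like 1/sqrt(n)
              np.std((Sn - n * mean) / np.sqrt(n))) # stays close to sigma_X = 1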

Theorem 30.1 (Central Limit Theorem) Let $\{X_i\}$ be a sequence of i.i.d. random variables with mean $E[X]$ and a non-zero variance $\sigma_X^2 < \infty$. Let $Z_n = \frac{S_n - nE[X]}{\sigma_X \sqrt{n}}$. Then, we have $Z_n \xrightarrow{D} N(0, 1)$, i.e.,

$$\lim_{n \to \infty} F_{Z_n}(z) = \int_{-\infty}^{z} \frac{1}{\sqrt{2\pi}} e^{-\frac{x^2}{2}} \, dx, \quad \forall z \in \mathbb{R}.$$

Proof: Let $Y_i = \frac{X_i - E[X]}{\sigma_X}$ and let $Z_n = \frac{\sum_{i=1}^{n} Y_i}{\sqrt{n}}$. It is easy to see that $Y_i$ has zero mean and unit variance, i.e., $E[Y_i] = 0$ and $\sigma_{Y_i}^2 = 1$. Expanding the characteristic function of $Y_i$ around $t = 0$,

$$C_{Y_i}(t) = 1 + i t E[Y_i] + \frac{i^2 t^2 E[Y_i^2]}{2} + o(t^2)$$

$$= 1 + i t (0) + \frac{i^2 t^2 (1)}{2} + o(t^2)$$

$$= 1 - \frac{t^2}{2} + o(t^2).$$

Since the $Y_i$ are i.i.d.,

$$C_{Z_n}(t) = \left[ C_{Y_i}\!\left( \frac{t}{\sqrt{n}} \right) \right]^n = \left[ 1 - \frac{t^2}{2n} + o\!\left( \frac{t^2}{n} \right) \right]^n \longrightarrow e^{-\frac{t^2}{2}} \quad \forall t.$$


From the theorem on convergence of characteristic functions, $Z_n$ converges to a standard Gaussian in distribution.
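As a sanity check on the last limit, one can compare $\left[ C_Y(t/\sqrt{n}) \right]^n$ with $e^{-t^2/2}$ for a concrete zero-mean, unit-variance distribution. The sketch below assumes numpy and uses Uniform$(-\sqrt{3}, \sqrt{3})$ summands, whose characteristic function is $\sin(\sqrt{3}t)/(\sqrt{3}t)$, as one convenient choice:

    import numpy as np

    def phi_uniform(t):
        # Characteristic function of Uniform(-sqrt(3), sqrt(3)): zero mean, unit variance.
        # sin(a t)/(a t) written via np.sinc, which handles t = 0 gracefully.
        return np.sinc(np.sqrt(3.0) * t / np.pi)

    t = np.linspace(-3.0, 3.0, 13)
    for n in [1, 10, 100, 1000]:
        cf_Zn = phi_uniform(t / np.sqrt(n)) ** n                # C_{Z_n}(t) = [C_Y(t/sqrt(n))]^n
        print(n, np.max(np.abs(cf_Zn - np.exp(-t**2 / 2))))     # gap shrinks as n grows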

For example, if the $X_i$'s are discrete random variables, the CDFs $F_{Z_n}$ will be step functions. As $n \to \infty$, these step functions gradually converge to the standard Gaussian CDF (i.e., the steps gradually shrink and the limit is a continuous distribution).
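A small simulation sketch of this effect (assuming numpy; Bernoulli(1/2) summands are used here only because they make $F_{Z_n}$ a step function):

    import numpy as np
    from math import erf, sqrt

    def Phi(z):
        # Standard Gaussian CDF, written via the error function.
        return 0.5 * (1.0 + erf(z / sqrt(2.0)))

    rng = np.random.default_rng(0)
    for n in [5, 50, 500]:
        S = rng.binomial(n, 0.5, size=100_000)       # samples of S_n with Bernoulli(1/2) summands
        Z = (S - 0.5 * n) / (0.5 * np.sqrt(n))       # standardized sum Z_n
        for z in [-1.0, 0.0, 1.0]:
            print(n, z, np.mean(Z <= z), Phi(z))     # empirical F_{Z_n}(z) approaches Phi(z)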
It is also important to understand what this theorem does not say. It does not say that the probability density function converges to $\frac{1}{\sqrt{2\pi}} e^{-\frac{x^2}{2}}$. Convergence of the density function requires more stringent conditions, which are stated in the Local Central Limit Theorem.

Theorem 30.2 (Local Central Limit Theorem) Let $X_1, X_2, \ldots$ be i.i.d. random variables with zero mean and unit variance. Suppose further that their common characteristic function $\phi$ satisfies

$$\int_{-\infty}^{\infty} |\phi(t)|^r \, dt < \infty$$

for some integer $r \geq 1$. The density function $g_n$ of $U_n = \frac{X_1 + X_2 + \cdots + X_n}{\sqrt{n}}$ exists for $n \geq r$, and furthermore we have

$$g_n(x) \to \frac{1}{\sqrt{2\pi}} e^{-\frac{x^2}{2}},$$

as $n \to \infty$, uniformly in $x \in \mathbb{R}$.

Proof: For a proof, refer to Section 5.10 in [1].
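To see the local CLT numerically, one can compare a histogram estimate of $g_n$ with the standard Gaussian density. The sketch below assumes numpy and again uses Uniform$(-\sqrt{3}, \sqrt{3})$ summands, whose characteristic function $\sin(\sqrt{3}t)/(\sqrt{3}t)$ satisfies the integrability condition with $r = 2$:

    import numpy as np

    rng = np.random.default_rng(0)
    n, runs = 100, 100_000
    X = rng.uniform(-np.sqrt(3.0), np.sqrt(3.0), size=(runs, n))
    U = X.sum(axis=1) / np.sqrt(n)                   # samples of U_n = (X_1 + ... + X_n)/sqrt(n)

    hist, edges = np.histogram(U, bins=60, range=(-3.0, 3.0), density=True)
    centers = 0.5 * (edges[:-1] + edges[1:])
    gauss = np.exp(-centers**2 / 2.0) / np.sqrt(2.0 * np.pi)
    print(np.max(np.abs(hist - gauss)))              # small, consistent with g_n -> Gaussian density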

Let $X_1, X_2, \ldots$ be i.i.d. random variables with zero mean and unit variance. From the CLT, we know that $U_n = \frac{\sum_{i=1}^{n} X_i}{\sqrt{n}}$ converges in distribution to a standard Gaussian. We now look at yet another interesting result, which deals with the largest value taken by $U_m$, $m \geq n$, for large $n$.

Theorem 30.3 (The Law of the Iterated Logarithm) Let $X_1, X_2, \ldots$ be i.i.d. random variables with zero mean and unit variance. Also, let $S_n = \sum_{i=1}^{n} X_i$. Then,

$$P\left( \limsup_{n \to \infty} \frac{S_n}{\sqrt{2n \log \log n}} = 1 \right) = 1.$$

Unlike the CLT, which talks about the distribution of $U_n$ for a large, fixed $n$, the law of the iterated logarithm talks about the largest fluctuation of $U_m$ over $m \geq n$. In particular, it bounds the largest value taken by $U_m$ beyond $n$. Formally, the subset of $\Omega$ for which this holds has probability measure 1.
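A rough simulation sketch of this normalization (assuming numpy; note that convergence in the law of the iterated logarithm is extremely slow, so a finite run only loosely hints at the limit):

    import numpy as np

    rng = np.random.default_rng(0)
    N = 10**6
    steps = rng.choice([-1.0, 1.0], size=N)          # zero-mean, unit-variance summands
    S = np.cumsum(steps)
    n = np.arange(3, N + 1)                          # start at n = 3 so that log log n > 0
    ratio = S[2:] / np.sqrt(2.0 * n * np.log(np.log(n)))
    print(ratio.max(), ratio.min())                  # extremes are of order 1, loosely consistent with the a.s. limsup of 1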

References
[1] G. R. Grimmett and D. R. Stirzaker, Probability and Random Processes, Oxford University Press, 2001.
