
An Introduction to Advanced

Probability and Statistics

Junhui Qian
© September 28, 2020
Preface

This booklet introduces advanced probability and statistics to first-year Ph.D. stu-
dents in economics.

In preparing this text, I borrowed heavily from the lecture notes of Yoosoon Chang and Joon Y. Park, who taught me econometrics at Rice University. All errors are mine.

Junhui Qian
Shanghai, China, December 2012
[email protected]

Contents

Preface

1 Introduction to Probability
  1.1 Probability Triple
  1.2 Conditional Probability and Independence
  1.3 Limits of Events
  1.4 Construction of Probability Measure
  1.5 Exercises

2 Random Variable
  2.1 Measurable Functions
  2.2 Random Variables
  2.3 Random Vectors
  2.4 Density
  2.5 Independence
  2.6 Exercises

3 Expectations
  3.1 Integration
  3.2 Expectation
  3.3 Moment Inequalities
  3.4 Conditional Expectation
  3.5 Conditional Distribution
  3.6 Exercises

4 Distributions and Transformations
  4.1 Alternative Characterizations of Distribution
    4.1.1 Moment Generating Function
    4.1.2 Characteristic Function
    4.1.3 Quantile Function
  4.2 Common Families of Distributions
  4.3 Transformed Random Variables
    4.3.1 Distribution Function Technique
    4.3.2 MGF Technique
    4.3.3 Change-of-Variable Transformation
  4.4 Multivariate Normal Distribution
    4.4.1 Introduction
    4.4.2 Marginals and Conditionals
    4.4.3 Quadratic Forms
  4.5 Exercises

5 Introduction to Statistics
  5.1 General Settings
  5.2 Statistic
  5.3 Estimation
    5.3.1 Method of Moment
    5.3.2 Maximum Likelihood
    5.3.3 Unbiasedness and Efficiency
    5.3.4 Lehmann-Scheffé Theorem
    5.3.5 Efficiency Bound
  5.4 Hypothesis Testing
    5.4.1 Basic Concepts
    5.4.2 Likelihood Ratio Tests
  5.5 Exercises

6 Asymptotic Theory
  6.1 Introduction
    6.1.1 Modes of Convergence
    6.1.2 Small o and Big O Notations
  6.2 Limit Theorems
    6.2.1 Law of Large Numbers
    6.2.2 Central Limit Theorem
    6.2.3 Delta Method
  6.3 Asymptotics for Maximum Likelihood Estimation
    6.3.1 Consistency of MLE
    6.3.2 Asymptotic Normality of MLE
    6.3.3 MLE-Based Tests
  6.4 Exercises

References
Chapter 1

Introduction to Probability

In this chapter we lay down the measure-theoretic foundation of probability.

1.1 Probability Triple


We first introduce the well-known probability triple, (Ω, F, P), where Ω is the sample space, F is a σ-field of subsets of Ω, and P is a probability measure. We define and characterize each element of the probability triple in the following.
The sample space Ω is a set of outcomes from a random experiment. For instance,
in a coin tossing experiment, the sample space is obviously {H, T }, where H denotes
head and T denotes tail. For another example, the sample space may be an interval,
say Ω = [0, 1], on the real line, and any outcome ω ∈ Ω is a real number randomly
selected from the interval.
To introduce the sigma-field, we first define

Definition 1.1.1 (Field (or Algebra)) A collection of subsets F is called a field, or an algebra, if the following holds:

(a) Ω ∈ F

(b) E ∈ F ⇒ E^c ∈ F

(c) E1, . . . , Em ∈ F ⇒ ⋃_{n=1}^m En ∈ F

Note that (c) says that a field is closed under finite union. In contrast, a sigma-field,
which is defined as follows, is closed under countable union.

1
Definition 1.1.2 (sigma-field (or sigma-algebra)) A collection of subsets F is called a σ-field or a σ-algebra, if the following holds:

(a) Ω ∈ F

(b) E ∈ F ⇒ E^c ∈ F

(c) E1, E2, . . . ∈ F ⇒ ⋃_{n=1}^∞ En ∈ F

Remarks:

• In both definitions, (a) and (b) imply that the empty set ∅ ∈ F.

• (b) and (c) imply that if E1, E2, . . . ∈ F, then ⋂_{n=1}^∞ En ∈ F, since ⋂n En = (⋃n En^c)^c.

• A σ-field is always a field; conversely, a field is a σ-field when Ω is finite.

• An arbitrary intersection of σ-fields is still a σ-field. (Exercise 1)

In the following, we write sigma-field and σ-field interchangeably. An element E of the σ-field F in the probability triple is called an event.

Example 1.1.3 If we toss a coin twice, then the sample space would be Ω =
{HH, HT, T H, T T }. A σ-field (or field) would be

F = {∅, Ω, {HH}, {HT }, {T H}, {T T },


{HH, HT }, {HH, T H}, {HH, T T }, {HT, T H}, {HT, T T }, {T H, T T },
{HH, HT, T H}, {HH, HT, T T }, {HH, T H, T T }, {HT, T H, T T }}.

The event {HH} would be described as “two heads in a row”. The event {HT, T T }
would be described as “the second throw obtains tail”.

F in the above example contains all subsets of Ω. It is often called the power set of Ω, denoted by 2^Ω.
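For a finite sample space these objects can be checked by brute force. The following sketch (an illustration added here, not part of the original notes) enumerates 2^Ω for the two-toss experiment and verifies the field axioms; since Ω is finite, the field is automatically a σ-field.

```python
from itertools import combinations

omega = frozenset({"HH", "HT", "TH", "TT"})

# The power set 2^Omega: all subsets of omega.
power_set = [frozenset(c) for r in range(len(omega) + 1)
             for c in combinations(sorted(omega), r)]
F = set(power_set)

assert len(F) == 2 ** len(omega)                  # 16 subsets
assert omega in F                                 # (a) Omega is in F
assert all(omega - E in F for E in F)             # (b) closed under complement
assert all(E | G in F for E in F for G in F)      # (c) closed under finite union
print(f"2^Omega has {len(F)} elements and satisfies the field axioms.")
```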

Example 1.1.4 For an example of an infinite sample space, we may consider a thought experiment of tossing a coin infinitely many times. The sample space would be Ω = {(r1, r2, . . .) | ri = 1 or 0}, where 1 stands for head and 0 for tail. One example of an event would be {r1 = 1, r2 = 1}, which says that the first two throws give heads in a row.

A sigma-field can be generated from a collection of subsets of Ω, a field for example.
We define

Definition 1.1.5 (Generated σ-field) Let S be a collection of subsets of Ω. The


σ-field generated by S, σ(S), is defined to be the intersection of all the σ-fields
containing S.

In other words, σ(S) is the smallest σ-field containing S.

Example 1.1.6 Let Ω = {1, 2, 3}. We have

σ ({1}) = {∅, Ω, {1}, {2, 3}}.

Now we introduce the axiomatic definition of probability measure.

Definition 1.1.7 (Probability Measure) A set function P on a σ-field F is a


probability measure if it satisfies:

(1) P(E) ≥ 0 ∀E ∈ F

(2) P(Ω) = 1
(3) If E1, E2, . . . ∈ F are disjoint, then P(⋃n En) = Σn P(En).

Properties of Probability Measure

(a) P(∅) = 0

(b) P(Ac ) = 1 − P(A)

(c) A ⊂ B ⇒ P(A) ≤ P(B)

(d) P(A ∪ B) ≤ P(A) + P(B)

(e) An ⊂ An+1 for n = 1, 2, . . . ⇒ P(An) ↑ P(⋃_{n=1}^∞ An)

(f) An ⊃ An+1 for n = 1, 2, . . . ⇒ P(An) ↓ P(⋂_{n=1}^∞ An)

(g) P(⋃_{n=1}^∞ An) ≤ Σ_{n=1}^∞ P(An)

Proof: (a)-(c) are trivial.

(d) Write A ∪ B = (A ∩ B^c) ∪ (A ∩ B) ∪ (A^c ∩ B), a union of disjoint sets. By adding and subtracting P(A ∩ B), we have P(A ∪ B) = P(A) + P(B) − P(A ∩ B), using the fact that A = (A ∩ B) ∪ (A ∩ B^c), also a disjoint union.

(e) Define B1 = A1 and Bn = An ∩ (An−1)^c for n ≥ 2. We have An = ⋃_{j=1}^n Bj and ⋃_{j=1}^∞ Aj = ⋃_{j=1}^∞ Bj. The claim then follows from

P(An) = Σ_{j=1}^n P(Bj) = Σ_{j=1}^∞ P(Bj) − Σ_{j=n+1}^∞ P(Bj) = P(⋃_{n=1}^∞ An) − Σ_{j=n+1}^∞ P(Bj),

since the tail sum Σ_{j=n+1}^∞ P(Bj) → 0 as n → ∞.

(f) Note that (An)^c ⊂ (An+1)^c, and use (e).

(g) Extend (d).

Note that we may write lim_{n→∞} An = ⋃_{n=1}^∞ An if An is monotone increasing, and lim_{n→∞} An = ⋂_{n=1}^∞ An if An is monotone decreasing.

1.2 Conditional Probability and Independence


Definition 1.2.1 (Conditional Probability) For an event F ∈ F that satisfies
P (F ) > 0, we define the conditional probability of another event E given F by

P(E|F) = P(E ∩ F) / P(F).

• For a fixed event F , the function Q(·) = P (·|F ) is a probability. All properties
of probability measure hold for Q.

• The probability of intersection can be defined via conditional probability:

P (E ∩ F ) = P (E|F ) P (F ) ,

and
P (E ∩ F ∩ G) = P (E|F ∩ G) P (F |G) P (G) .

• If {Fn} is a partition of Ω, ie, the Fn's are disjoint and ⋃n Fn = Ω, then the following theorem of total probability holds:

P(E) = Σn P(E|Fn) P(Fn), for every event E.

• The Bayes formula follows from P(E ∩ F) = P(E|F) P(F) = P(F|E) P(E):

P(F|E) = P(E|F) P(F) / P(E),

and, for a partition {Fn},

P(Fk|E) = P(E|Fk) P(Fk) / Σn P(E|Fn) P(Fn).

A small numeric sketch of these formulas follows.
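The sketch below applies the total-probability and Bayes formulas; the three-event partition and all probabilities in it are made up purely for illustration.

```python
# Partition {F1, F2, F3} of Omega with made-up probabilities (illustration only).
prior = [0.5, 0.3, 0.2]        # P(F_k)
likelihood = [0.9, 0.5, 0.1]   # P(E|F_k)

# Theorem of total probability: P(E) = sum_n P(E|F_n) P(F_n).
p_e = sum(l * p for l, p in zip(likelihood, prior))

# Bayes formula: P(F_k|E) = P(E|F_k) P(F_k) / P(E).
posterior = [l * p / p_e for l, p in zip(likelihood, prior)]

print(f"P(E) = {p_e:.3f}")                        # 0.62
print("P(F_k|E):", [round(q, 3) for q in posterior])
assert abs(sum(posterior) - 1.0) < 1e-12          # posterior sums to one
```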

Definition 1.2.2 (Independence of Events) Events E and F are called inde-


pendent if P (E ∩ F ) = P (E) P (F ).

• We may equivalently define independence by

P(E|F) = P(E), when P(F) > 0.

• E1, E2, . . . are said to be independent if, for any (i1, . . . , ik),

P(Ei1 ∩ Ei2 ∩ · · · ∩ Eik) = ∏_{j=1}^k P(Eij).

• Let E, E1, E2, . . . be independent events. Then E and σ(E1, E2, . . .) are independent, ie, for any S ∈ σ(E1, E2, . . .), P(E ∩ S) = P(E) P(S).

• Let E1, E2, . . . , F1, F2, . . . be independent events. If E ∈ σ(E1, E2, . . .), then E, F1, F2, . . . are independent; furthermore, σ(E1, E2, . . .) and σ(F1, F2, . . .) are independent.

1.3 Limits of Events


limsup and liminf First recall that for a sequence of real numbers {xn}, we define

lim sup_{n→∞} xn = inf_k sup_{n≥k} xn,
lim inf_{n→∞} xn = sup_k inf_{n≥k} xn.

It is obvious that lim inf xn ≤ lim sup xn. And we say that xn → x ∈ [−∞, ∞] if lim sup xn = lim inf xn = x.

Definition 1.3.1 (limsup of Events) For a sequence of events (En), we define

lim sup_{n→∞} En = ⋂_{k=1}^∞ ⋃_{n=k}^∞ En
= {ω | ∀k, ∃n(ω) ≥ k s.t. ω ∈ En}
= {ω | ω ∈ En for infinitely many n}
= {ω | En i.o.},

where i.o. denotes "infinitely often".

We may intuitively interpret lim supn→∞ En as the event that En occurs infinitely
often.

Definition 1.3.2 (liminf of Events) We define

lim inf_{n→∞} En = ⋃_{k=1}^∞ ⋂_{n=k}^∞ En
= {ω | ∃k(ω) s.t. ω ∈ En ∀n ≥ k}
= {ω | ω ∈ En for all large n}
= {ω | En e.v.},

where e.v. denotes "eventually".

It is obvious that (lim inf En)^c = lim sup En^c and (lim sup En)^c = lim inf En^c. When lim sup En = lim inf En, we say (En) has a limit lim En.

Lemma 1.3.3 (Fatou’s Lemma) We have

P(lim inf En ) ≤ lim inf P(En ) ≤ lim sup P(En ) ≤ P(lim sup En ).

Proof: Note that ⋂_{n=k}^∞ En is monotone increasing in k and ⋂_{n=k}^∞ En ↑ ⋃_{k=1}^∞ ⋂_{n=k}^∞ En = lim inf En. Hence P(Ek) ≥ P(⋂_{n=k}^∞ En) ↑ P(lim inf En), which establishes the first inequality. The third inequality, often known as the reverse Fatou lemma, can be similarly proved. And the second inequality is obvious.

Lemma 1.3.4 (Borel-Cantelli Lemma) Let E1 , E2 , . . . ∈ F, then


(i) Σ_{n=1}^∞ P(En) < ∞ ⇒ P(lim sup En) = 0;

(ii) if Σ_{n=1}^∞ P(En) = ∞ and {En} are independent, then P(lim sup En) = 1.

Proof: (i) P(lim sup En) ≤ P(⋃_{n≥k} En) ≤ Σ_{n=k}^∞ P(En) → 0 as k → ∞.

(ii) For k, m ∈ N, using 1 − x ≤ exp(−x), ∀x ∈ R, we have

P(⋂_{n=k}^∞ En^c) ≤ P(⋂_{n=k}^{k+m} En^c) = ∏_{n=k}^{k+m} P(En^c) = ∏_{n=k}^{k+m} (1 − P(En)) ≤ exp(−Σ_{n=k}^{k+m} P(En)) → 0,

as m → ∞. Since P(⋃_k ⋂_{n≥k} En^c) ≤ Σ_k P(⋂_{n≥k} En^c) = 0, we have P(lim sup En) = 1 − P(⋃_{k≥1} ⋂_{n≥k} En^c) = 1.

Remarks:

• (ii) does not hold if {En} are not independent. To give a counterexample, consider infinite coin tossing. Let E1 = E2 = · · · = {r1 = 1}, the event that the first toss is a head; then {En} are not independent and P(lim sup En) = P(r1 = 1) = 1/2.

• Let Hn be the event that the n-th toss comes up heads. We have P(Hn) = 1/2 and Σn P(Hn) = ∞. Hence P(Hn i.o.) = 1, and P(Hn e.v.) = 1 − P(Hn^c i.o.) = 0.

• Let Bn = H_{2^n+1} ∩ H_{2^n+2} ∩ · · · ∩ H_{2^n+log2 n}. The Bn are independent, and since P(Bn) = (1/2)^{log2 n} = 1/n, Σn P(Bn) = ∞. Hence P(Bn i.o.) = 1.

• But if Bn = H_{2^n+1} ∩ H_{2^n+2} ∩ · · · ∩ H_{2^n+2 log2 n}, then P(Bn) = 1/n^2 is summable, so P(Bn i.o.) = 0.

• Let Bn = Hn ∩ Hn+1; we also have P(Bn i.o.) = 1. To show this, consider the subsequence (B_{2k}), which consists of independent events. (A small simulation of the Borel-Cantelli dichotomy is sketched below.)
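The dichotomy can be seen in simulation. The sketch below (illustrative only) draws independent events along one sample path with P(En) = 1/n^2 and with P(En) = 1/n; the summable case produces only a few early occurrences, while the divergent case keeps producing them over the whole horizon.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 200_000
n = np.arange(1, N + 1)

for label, p in [("1/n^2 (summable) ", 1.0 / n**2), ("1/n  (divergent) ", 1.0 / n)]:
    occurred = rng.random(N) < p               # E_n occurs iff occurred[n-1]
    last = n[occurred][-1] if occurred.any() else 0
    print(f"P(E_n) = {label}: {occurred.sum():4d} occurrences, last at n = {last}")
# Typically the summable case stops occurring early (Borel-Cantelli (i)),
# while the divergent, independent case keeps occurring (Borel-Cantelli (ii)).
```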

Why σ-field? You may already see that events such as lim sup En and lim inf En are very interesting events. To make meaningful probabilistic statements about these events, we need to make sure that they are contained in F, on which P is defined. This is why we require F to be a σ-field, which is closed under countable unions and intersections.

Definition 1.3.5 (Tail Fields) For a sequence of events E1, E2, . . ., the tail field is given by

T = ⋂_{n=1}^∞ σ(En, En+1, . . .).

• For any n, an event E ∈ T depends only on En, En+1, . . .; any finite number of the events are irrelevant.

• In the infinite coin tossing experiment, the following are tail events:

– lim sup Hn, obtaining infinitely many heads;

– (lim sup Hn)^c, obtaining only finitely many heads;

– lim sup H_{2^n}, obtaining infinitely many heads on tosses 2, 4, 8, . . .;

– {lim_{n→∞} (1/n) Σ_{i=1}^n ri ≤ 1/3};

– {rn = rn+1 = · · · = rn+m i.o.}, m fixed.

Theorem 1.3.6 (Kolmogorov Zero-One Law) Let a sequence of events E1, E2, . . . be independent with tail field T. If an event E ∈ T, then P(E) = 0 or 1.

Proof: Since E ∈ T ⊂ σ(En, En+1, . . .), the events E, E1, E2, . . . , En−1 are independent. This is true for all n, so E, E1, E2, . . . are independent. Hence E and σ(E1, E2, . . .) are independent, ie, for all S ∈ σ(E1, E2, . . .), S and E are independent. On the other hand, E ∈ T ⊂ σ(E1, E2, . . .). It follows that E is independent of itself! So P(E ∩ E) = P(E)^2 = P(E), which implies P(E) = 0 or 1.

1.4 Construction of Probability Measure


σ-fields can be extremely complicated collections of sets, which makes it difficult to assign probabilities to their elements (events) directly. Instead, we work with simpler classes.

Definition 1.4.1 (π-system) A class of subsets of Ω, P, is a π-system if the fol-


lowing holds:
E, F ∈ P ⇒ E ∩ F ∈ P.

For example, the collection {(−∞, x] : x ∈ R} is a π-system.

Definition 1.4.2 (λ-system) A class of subsets of Ω, L, is a λ-system if

(a) Ω ∈ L,

(b) if E, F ∈ L and E ⊂ F, then F − E ∈ L, where F − E is defined as F ∩ E^c,

(c) if E1, E2, . . . ∈ L and En ↑ E, then E ∈ L.

• If E ∈ L, then E^c ∈ L. This follows from (a) and (b).

• L is closed under countable union only for monotone increasing events; note that E = ⋃_{n=1}^∞ En in (c).

Theorem 1.4.3 A class F of subsets of Ω is a σ-field if and only if F is both a


π-system and a λ-system.

Proof: "Only if" is trivial. To show "if", it suffices to show that for any E1, E2, . . . ∈ F, ⋃n En ∈ F. We indeed have

⋃_{k=1}^n Ek = (⋂_{k=1}^n Ek^c)^c ↑ ⋃_n En,

where each finite union is in F because F is a π-system closed under complement, and the monotone limit is in F by the λ-system property (c).

Notation: Let S be a class of subsets of Ω. σ(S) is the σ-field generated by S.


π(S) is the π-system generated by S, meaning that π(S) is the intersection of all π-systems that contain S. λ(S) is similarly defined as the λ-system generated by S.
We have
π(S) ⊂ σ(S) and λ(S) ⊂ σ(S).

Lemma 1.4.4 (Dynkin’s Lemma) Let P be a π-system, then λ(P) = σ(P).

Proof: It suffices to show that λ(P) is a π-system.

• For an arbitrary C ∈ P, define

DC = {B ∈ λ(P)|B ∩ C ∈ λ(P) } .

• We have P ⊂ DC , since for any E ∈ P ⊂ λ(P), E ∩ C ∈ P ⊂ λ(P), hence


E ∈ DC .

• For any C ∈ P, DC is a λ-system.



– Ω ∈ DC
– If B1 , B2 ∈ DC and B1 ⊂ B2 , then (B2 −B1 )∩C = B2 ∩C −B1 ∩C. Since
B1 ∩ C, B2 ∩ C ∈ λ(P) and (B1 ∩ C) ⊂ (B2 ∩ C), (B2 − B1 ) ∩ C ∈ λ(P).
Hence (B2 − B1 ) ∈ DC .
– If B1 , B2 , . . . ∈ DC , and Bn ↑ B, then (Bn ∩ C) ↑ (B ∩ C) ∈ λ(P).
Hence B ∈ DC .
• Thus, for any C ∈ P, DC is a λ-system containing P. And it is obvious that
λ(P) ⊂ DC .
• Now for any A ∈ λ(P) ⊂ DC , we define
DA = {B ∈ λ(P)|B ∩ A ∈ λ(P)} .
By definition, DA ⊂ λ(P).
• We have P ⊂ DA , since if E ∈ P, then E ∩ A ∈ λ(P), since A ∈ λ(P) ⊂ DC
for all C ∈ P.
• We can check that DA is a λ-system that contains P, hence λ(P) ⊂ DA . We
thus have DA = λ(P), which means that for any A, B ∈ λ(P), A ∩ B ∈ λ(P).
Thus λ(P) is a π-system. Q.E.D.

Remark: If P is a π-system, and L is a λ-system that contains P, then σ(P) ⊂ L.


To see why, note that λ(P) = σ(P) is the smallest λ-system that contains P.

Theorem 1.4.5 (Uniqueness of Extension) Let P be a π-system on Ω, and P1


and P2 be probability measures on σ(P). If P1 and P2 agree on P, then they agree
on σ(P).

Proof: Let D = {E ∈ σ(P)|P1 (E) = P2 (E)}. D is a λ-system, since

• Ω ∈ D,
• E, F ∈ D and E ⊂ F imply F − E ∈ D, since
P1 (F − E) = P1 (F ) − P1 (E) = P2 (F ) − P2 (E) = P2 (F − E).

• If E1 , E2 , . . . ∈ D and En ↑ E, then E ∈ D, since


P1 (E) = lim P1 (En ) = lim P2 (En ) = P2 (E).

The fact that P1 and P2 agree on P implies that P ⊂ D. The remark following
Dynkin’s lemma shows that σ(P) ⊂ D. On the other hand, by definition, D ⊂ σ(P).
Hence D = σ(P). Q.E.D.

Borel σ-field The Borel σ-field is the σ-field generated by the family of open subsets (of a topological space). For probability theory, the most important Borel σ-field is the one generated by the open subsets of the real line R, which we denote B(R).
Almost every subset of R that we can think of is in B(R), the elements of which may
be quite complicated. As it is difficult for economic agents to assign probabilities to
complicated sets, we often have to consider “simpler” systems of sets, π-system, for
example.
Define

P = {(−∞, x] : x ∈ R}.

It can be easily verified that P is a π-system. And we show in the following that P generates B(R).

Proof: It is clear from

(−∞, x] = ⋂_n (−∞, x + 1/n), ∀x ∈ R,

that σ(P) ⊂ B(R). To show σ(P) ⊃ B(R), note that every open set of R is a countable union of open intervals. It therefore suffices to show that open intervals of the form (a, b) are in σ(P). This is indeed the case, since

(a, b) = (−∞, a]^c ∩ (⋃_n (−∞, b − 1/n]).

Note that the above holds even when b ≤ a, in which case (a, b) = ∅.

Theorem 1.4.6 (Extension Theorem) Let F0 be a field on Ω, and let F =


σ(F0 ). If P0 is a countably additive set function P0 : F0 → [0, 1] with P0 (∅) = 0
and P0 (Ω) = 1, then there exists a probability measure on (Ω, F) such that

P = P0 on F0 .

Proof: We first define, for any E ⊂ Ω,

P(E) = inf_{ {An} } { Σn P0(An) : An ∈ F0, E ⊂ ⋃n An },

where the infimum is taken over all countable covers {An} of E by sets in F0.

We next prove that

(a) P is an outer measure.

(b) P is a probability measure on (Ω, M), where M is a σ-field of P-measurable
sets in F.

(c) F0 ⊂ M

(d) P = P0 on F0 .

Note that (c) immediately implies that F ⊂ M. If we restrict P to the domain F, we obtain a probability measure on (Ω, F) that coincides with P0 on F0. The theorem is then proved. In the following we prove (a)-(d).

(a) We first define outer measure. A set function µ on (Ω, F) is an outer measure
if

(i) µ(∅) = 0.
(ii) E ⊂ F implies µ(E) ≤ µ(F ). (monotonicity)
(iii) µ(⋃n En) ≤ Σn µ(En), where E1, E2, . . . ∈ F. (countable subadditivity)

• It is obvious that P(∅) = 0, since we may choose En = ∅ ∀n.

• For E ⊂ F, every countable cover of F by sets in F0 is also a cover of E, so P(E) ≤ P(F). Monotonicity is thus obvious.

• To show countable subadditivity, note that for each n we can find a collection {Cnk}_{k≥1} such that Cnk ∈ F0, En ⊂ ⋃k Cnk, and Σk P0(Cnk) ≤ P(En) + ε 2^{−n}, where ε > 0. Since ⋃n En ⊂ ⋃_{n,k} Cnk, we have P(⋃n En) ≤ Σ_{n,k} P0(Cnk) ≤ Σn P(En) + ε. Since ε is arbitrarily chosen, countable subadditivity is proved.

(b) Now we define M as

M = {A ⊂ Ω|P (A ∩ E) + P (Ac ∩ E) = P (E) , ∀E ⊂ Ω}.

M contains sets that “split” every set E ⊂ Ω well. We call these sets P-
measurable. M has an equivalent definition,

M = {A ⊂ Ω|P (A ∩ E) + P (Ac ∩ E) ≤ P (E) , ∀E ⊂ Ω},

since E = (A ∩ E) ∪ (A^c ∩ E) and the countable subadditivity of P dictates that P(A ∩ E) + P(A^c ∩ E) ≥ P(E). To prove that P is a probability measure on (Ω, M), where M is a σ-field of P-measurable sets, we first establish the following lemmas.

• Lemma 1. If A1, A2, . . . ∈ M are disjoint, then P(⋃n An) = Σn P(An).

Proof: First note that

P(A1 ∪ A2) = P(A1 ∩ (A1 ∪ A2)) + P(A1^c ∩ (A1 ∪ A2)) = P(A1) + P(A2).

Induction thus obtains finite additivity. Now for any m ∈ N, we have by monotonicity,

Σ_{n≤m} P(An) = P(⋃_{n≤m} An) ≤ P(⋃n An).

Since m is arbitrarily chosen, we have Σn P(An) ≤ P(⋃n An). Combining this with subadditivity, we obtain Lemma 1. Next we prove that M is a field.
• Lemma 2. M is a field on Ω.
Proof: It is trivial that Ω ∈ M and that A ∈ M ⇒ A^c ∈ M. It remains to prove that A, B ∈ M ⇒ A ∩ B ∈ M. We first write

(A ∩ B)^c = (A^c ∩ B) ∪ (A ∩ B^c) ∪ (A^c ∩ B^c).

Then

P((A ∩ B) ∩ E) + P((A ∩ B)^c ∩ E)
= P(A ∩ B ∩ E) + P{[(A^c ∩ B) ∩ E] ∪ [(A ∩ B^c) ∩ E] ∪ [(A^c ∩ B^c) ∩ E]}
≤ P(A ∩ (B ∩ E)) + P(A^c ∩ (B ∩ E)) + P(A ∩ (B^c ∩ E)) + P(A^c ∩ (B^c ∩ E))
= P(B ∩ E) + P(B^c ∩ E) = P(E).

Using the second definition of M, we have A ∩ B ∈ M. Hence M is a field.
Next we establish that M is a σ-field. To show this, we only need to show that M is closed under countable union. We first prove two technical lemmas.
• Lemma 3. Let A1, A2, . . . ∈ M be disjoint. For each m ∈ N, let Bm = ⋃_{n≤m} An. Then for all m and E ⊂ Ω, we have

P(E ∩ Bm) = Σ_{n≤m} P(E ∩ An).

Proof: We prove by induction. First, note that the lemma holds trivially when m = 1. Now suppose it holds for some m; we show that P(E ∩ Bm+1) = Σ_{n≤m+1} P(E ∩ An). Note that Bm ∩ Bm+1 = Bm and Bm^c ∩ Bm+1 = Am+1. So

P(E ∩ Bm+1) = P(Bm ∩ E ∩ Bm+1) + P(Bm^c ∩ E ∩ Bm+1)
= P(E ∩ Bm) + P(E ∩ Am+1)
= Σ_{n≤m+1} P(E ∩ An).

• Lemma 4. Let A1, A2, . . . ∈ M be disjoint; then ⋃n An ∈ M.

Proof: For any m ∈ N, we have

P(E) = P(E ∩ Bm) + P(E ∩ Bm^c)
= Σ_{n≤m} P(E ∩ An) + P(E ∩ Bm^c)
≥ Σ_{n≤m} P(E ∩ An) + P(E ∩ (⋃n An)^c),

since (⋃n An)^c ⊂ Bm^c. Since m is arbitrary, we have

P(E) ≥ Σn P(E ∩ An) + P(E ∩ (⋃n An)^c)
≥ P(E ∩ (⋃n An)) + P(E ∩ (⋃n An)^c).

Hence ⋃n An ∈ M. Now we are ready to prove:

• Lemma 5. M is a σ-field of subsets of Ω.

Proof: It suffices to show that if E1, E2, . . . ∈ M, then ⋃n En ∈ M. Define A1 = E1 and Ai = Ei ∩ E1^c ∩ E2^c ∩ · · · ∩ Ei−1^c for i ≥ 2. Then A1, A2, . . . ∈ M are disjoint and ⋃n En = ⋃n An ∈ M by Lemma 4.

(c) We now prove F0 ⊂ M.

Proof: Let A ∈ F0; we need to show that A ∈ M. For any E ⊂ Ω and any ε > 0, we can find a sequence E1, E2, . . . ∈ F0 such that E ⊂ ⋃n En and

Σn P0(En) ≤ P(E) + ε.

By countable additivity of P0 on F0, we have P0(En) = P0(En ∩ A) + P0(En ∩ A^c). Hence

Σn P0(En) = Σn P0(En ∩ A) + Σn P0(En ∩ A^c)
≥ P((⋃n En) ∩ A) + P((⋃n En) ∩ A^c)
≥ P(E ∩ A) + P(E ∩ A^c),

where the first inequality uses the definition of P as an infimum over covers and the second uses monotonicity. Since ε is arbitrarily chosen, we have P(E) ≥ P(E ∩ A) + P(E ∩ A^c). Hence A ∈ M.

(d) Finally, we prove that P = P0 on F0.

Proof: Let E ∈ F0. It is obvious from the definition of P that P(E) ≤ P0(E). For the reverse inequality, let A1, A2, . . . ∈ F0 with E ⊂ ⋃n An. Define a disjoint sequence of subsets {Bn} such that B1 = A1 and Bi = Ai ∩ A1^c ∩ A2^c ∩ · · · ∩ Ai−1^c for i ≥ 2. We have Bn ⊂ An for all n and ⋃n An = ⋃n Bn. Using countable additivity of P0,

P0(E) = P0(E ∩ (⋃n Bn)) = Σn P0(E ∩ Bn).

Hence

P0(E) ≤ Σn P0(Bn) ≤ Σn P0(An).

Taking the infimum over all such covers, it is now obvious that P(E) ≥ P0(E). The proof is now complete.

1.5 Exercises
1. Prove that an arbitrary intersection of σ-fields is a σ-field.

2. Show that

lim_{n→∞} [−1/n, 1 − 1/n] = [0, 1).

3. Let R be the sample space. We define a sequence En of subsets of R by

En = [−1/n, 1/2 − 1/n] if n is odd, and En = [1/3 − 1/n, 2/3 + 1/n] if n is even.

Find lim inf En and lim sup En. Let the probability P be given by the Lebesgue measure on the unit interval [0, 1] (that is, the length of an interval). Compare P(lim inf En), lim inf P(En), P(lim sup En), and lim sup P(En).

4. Prove the following:


(a) If the events E and F are independent, then so are E c and F c .
(b) The events Ω and ∅ are independent of any event E.
(c) In addition to Ω and ∅, is there any event that is independent of itself?

5. Show that σ({[a, b]|∀a ≤ b, a, b ∈ R}) = B(R).

Chapter 2

Random Variable

2.1 Measurable Functions


Random variables are measurable functions from Ω to R. We first define measurable
functions and examine their properties. Let (S, G) be a general measurable space,
where G is a σ-field on a set S. For example, (Ω, F) is a measurable space, on which
random variables are defined.

Definition 2.1.1 (Measurable function) A function f : S → R is G-measurable


if, for any A ∈ B(R),

f −1 (A) ≡ {s ∈ S|f (s) ∈ A} ∈ G.

We simply call a function measurable if there is no possibility for confusion.

Remarks:

• For a G-measurable function f, f^{−1} is a mapping from B(R) to G, while f is a mapping from S to R.

• Given a σ-field G, the inverse of a measurable function maps a Borel set into an element of G. Suppose we have two σ-fields G1 and G2 with G1 ⊂ G2. If a function is G1-measurable, then it is also G2-measurable; the converse is not true.

• For any set E ∈ G, the indicator function IE is G-measurable.

• The mapping f^{−1} preserves all set operations:

f^{−1}(⋃n An) = ⋃n f^{−1}(An), f^{−1}(A^c) = (f^{−1}(A))^c, etc.

{f^{−1}(A) | A ∈ B(R)} is thus a σ-field. It may be called the σ-field generated by f.

Properties:

(a) If C ⊂ B and σ(C) = B, then f −1 (A) ∈ G ∀A ∈ C implies that f is G-


measurable.
Proof: Let E = {B ∈ B|f −1 (B) ∈ G}. By definition E ⊂ B. Now it suffices
to show that B ⊂ E. First, E is a σ-field, since inverse mapping preserves
all set operations. And since f −1 (A) ∈ G ∀A ∈ C, we have C ⊂ E. Hence
σ(C) = B ⊂ E.
(b) f is G-measurable if
{s ∈ S|f (s) ≤ c} ∈ G ∀c ∈ R.
Proof: Let C = {(−∞, c] : c ∈ R} and apply (a).
(c) (b) also holds if we replace f (s) ≤ c by f (s) ≥ c, f (s) > c, etc.
(d) If f is measurable and a is a constant, then af and f + a are measurable.
(e) If both f and g are measurable, then f + g is also measurable.
Proof: Note that we can always find a rational number r ∈ (f(s), c − g(s)) if f(s) + g(s) < c. We can thus represent

{s | f(s) + g(s) < c} = ⋃_{r∈Q} ({s | f(s) < r} ∩ {s | g(s) < c − r}),

which is in G for all c ∈ R, since the set of rational numbers is countable. Measurability of f + g then follows from (b) and (c).


(f) If both f and g are measurable, then f g is also measurable.
Proof: It suffices to prove that if f is measurable, then f^2 is measurable, since fg = ((f + g)^2 − f^2 − g^2)/2. But {f(s)^2 ≤ c} = {f(s) ∈ [−√c, √c]} ∈ G for all c ≥ 0, and {f(s)^2 ≤ c} = ∅ ∈ G for c < 0.
(g) Let {fn} be a sequence of measurable functions. Then sup fn, inf fn, lim inf fn, and lim sup fn are all measurable (sup fn and inf fn may be infinite, though, hence we should consider Borel sets on the extended real line).

Proof: Note that {sup fn(s) ≤ c} = ⋂n {fn(s) ≤ c} ∈ G and {inf fn(s) ≥ c} = ⋂n {fn(s) ≥ c} ∈ G. Now the rest is obvious.

(h) If {fn } are measurable, then {lim fn exists in R} ∈ G.
Proof: Note that the set on which the limit exists is

{lim sup fn < ∞} ∩ {lim inf fn > −∞} ∩ g −1 (0),

where g = lim sup fn − lim inf fn is measurable.

(i) If {fn } are measurable and f = lim fn exists, then f is measurable.


Proof: Note that for all c ∈ R,

{f ≤ c} = ⋂_{m≥1} ⋃_k ⋂_{n≥k} {fn ≤ c + 1/m}.

(j) A simple function f, which takes the form f(s) = Σ_{i=1}^n ci I_{Ai}, where (Ai ∈ G) are disjoint and (ci) are constants, is measurable.

Proof: Use (d) and (e) and the fact that indicator functions are measurable.

Definition 2.1.2 (Borel Functions) If f is B(R)-measurable, it is called a Borel function.

Borel functions can be more general. For example, a B(S)-measurable function,


where S is a general topological space, may be referred to as a Borel function.

• If f is G-measurable and g is Borel, then the composition function g ◦ f is


G-measurable.

• If g is a continuous real function, then g is Borel. Recall that a real function is continuous if and only if the inverse image of every open set is open, and that every open set of R is a countable union of open intervals. Since the open sets generate B(R), it follows from Property (a) that g is Borel.

2.2 Random Variables


Definition 2.2.1 (Random Variable) Given a probability space (Ω, F, P), we define a random variable X as an F-measurable function from Ω to R, ie, X^{−1}(B) ∈ F for all B ∈ B(R).

Remarks:

• A random variable X is degenerate if X(ω) = c, a constant, for all ω. A degenerate X is measurable: for any B ∈ B(R), if c ∈ B, then X^{−1}(B) = Ω ∈ F, and if c ∉ B, then X^{−1}(B) = ∅ ∈ F.
• From Property (b) of measurable functions, if {ω ∈ Ω|X(ω) ≤ c} ∈ F ∀c ∈ R,
then X is a random variable.
• If X and Y are random variables defined on a same probability space, then
cX, X + c, X 2 , X + Y , and XY are all random variables.
• If {Xn } is a sequence of random variables, then sup Xn , inf Xn , lim sup Xn ,
lim inf Xn , and lim Xn (if it exists), are all random variables (possibly un-
bounded).
• If X is a random variable on (Ω, F, P) and f is a Borel function, then f (X) is
also a random variable on the same probability space.
• The concept of random variable may be more general. For example, X may
be a mapping from Ω to a separable Banach space with an appropriate σ-field.

Example 2.2.2 For the coin tossing experiments, we may define a random variable by X(H) = 1 and X(T) = 0, where H and T are the outcomes of the experiment, ie, head and tail, respectively. If we toss the coin n times, X̄n = (1/n) Σ_{i=1}^n Xi is also a random variable. As n → ∞, X̄n becomes a degenerate random variable, as we know by the law of large numbers. lim X̄n is still a random variable, since the following event is in F:

{number of heads / number of tosses → 1/2} = {lim sup X̄n = 1/2} ∩ {lim inf X̄n = 1/2}.
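A quick simulation sketch of this example (an illustration, not from the text): the running sample mean of fair coin tosses settles down to 1/2.

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.integers(0, 2, size=1_000_000)           # X_i = 1 for head, 0 for tail
xbar = np.cumsum(x) / np.arange(1, x.size + 1)   # running sample mean

for k in (10, 1_000, 100_000, 1_000_000):
    print(f"n = {k:9d}   X_bar_n = {xbar[k - 1]:.4f}")   # drifts toward 0.5
```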

Definition 2.2.3 (Distribution of Random Variable) The distribution PX of


a random variable X is the probability measure on (R, B(R)) induced by X. Specif-
ically,
PX (A) = P(X −1 (A)) for all A ∈ B(R).

• We may write the distribution as the composite function PX = P ◦ X^{−1}. When there is no ambiguity about the underlying random variable, we write P in place of PX for simplicity.
• P is indeed a probability measure (verify this). Hence all properties of the
probability measure apply to P . P is often called the law of a random variable
X.

Definition 2.2.4 (Distribution Function) The distribution function FX of a ran-
dom variable is defined by
FX (x) = PX {(−∞, x]} for all x ∈ R.

We may omit the subscript of FX for simplicity. Note that since {(−∞, x], x ∈ R}
is a π-system that generates B(R), F uniquely determines P .

Properties:

(a) limx→−∞ F (x) = 0 and limx→∞ F (x) = 1.


(b) F (x) ≤ F (y) if x ≤ y.
(c) F is right continuous.

Proof: (a) Let xn → −∞. Since (−∞, xn ] ↓ ∅, we have F (xn ) = P {(−∞, xn ]} →


P (∅) = 0. The other statement is similarly established. (b) It follows from (−∞, x] ⊂
(−∞, y] if x ≤ y. (c) Fix an x, it suffices to show that F (xn ) → F (x) for
any sequence {xn } such that xn ↓ x. It follows, however, from the fact that
(−∞, xn ] ↓ (−∞, x] and the monotone convergence of probability measure.

Remark: If P ({x}) = 0, we say that P does not have point probability mass at x,
in which case F is also left-continuous. For any sequence {xn } such that xn ↑ x, we
have
F (xn ) = P ((−∞, xn ]) → P ((−∞, x)) = F (x) − P ({x}) = F (x).
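The distribution function has an empirical counterpart sharing properties (a)-(c): the empirical distribution function of an i.i.d. sample, itself a right-continuous step function. A small sketch (the N(0,1) sample is an illustrative assumption):

```python
import numpy as np

def ecdf(sample):
    """F_hat(x) = (1/n) #{i : X_i <= x}, a right-continuous step function."""
    s = np.sort(sample)
    return lambda x: np.searchsorted(s, x, side="right") / s.size

rng = np.random.default_rng(2)
F_hat = ecdf(rng.normal(size=10_000))
for x in (-2.0, 0.0, 2.0):
    print(f"F_hat({x:+.1f}) = {F_hat(x):.4f}")   # roughly Phi(x) for N(0,1) data
```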

2.3 Random Vectors


An n-dimensional random vector is a measurable function from (Ω, F) to (R^n, B(R^n)). We may write a random vector X as X(ω) = (X1(ω), . . . , Xn(ω))′.

Example 2.3.1 Consider tossing the coin twice. Let X1 be a random variable that
takes 1 if the first toss gives Head and 0 otherwise, and let X2 be a random variable
that takes 1 if the second toss gives Head and 0 otherwise. Then the random vector
X = (X1, X2)′ is a function from Ω = {HH, HT, TH, TT} to R^2:

X(HH) = (1, 1)′, X(HT) = (1, 0)′, X(TH) = (0, 1)′, X(TT) = (0, 0)′.

Definition 2.3.2 (Distribution of Random Vector) The distribution of an n-
dimensional random vector X = (X1, . . . , Xn)′ is a probability measure on R^n,

PX(A) = P{ω | X(ω) ∈ A}, ∀A ∈ B(R^n).

The distribution of a random vector X = (X1, . . . , Xn)′ is conventionally called the joint distribution of X1, . . . , Xn. The distribution of a subvector of X is called the marginal distribution.

The marginal distribution is a projection of the joint distribution. Consider a random vector Z = (X′, Y′)′ with two subvectors X ∈ R^m and Y ∈ R^n, and let PX(A) be the marginal distribution of X for A ∈ B(R^m). We have

PX(A) = PZ(A × R^n) = P{ω | Z(ω) ∈ A × R^n},

where the cylinder set A × R^n is obviously an element of B(R^{m+n}).

Definition 2.3.3 (Joint Distribution Function) The distribution function of a


random vector X = (X1, . . . , Xn)′ is defined by

FX(x1, . . . , xn) = P{ω | X1(ω) ≤ x1, . . . , Xn(ω) ≤ xn}.

The n-dimensional real function FX is conventionally called the joint distribution function of X1, . . . , Xn.

2.4 Density
Let µ be a measure on (S, G). A measure is a countably additive function from a σ-field (e.g., G) to [0, ∞); countable additivity means that whenever {Ak} are disjoint, µ(⋃_{k≥1} Ak) = Σ_{k≥1} µ(Ak). A classic example of a measure is the length of intervals. Equipped with the measure µ, we now have a measure space (S, G, µ). On (S, G, µ), a statement holds almost everywhere (a.e.) if the set A ∈ G on which the statement is false is µ-null (µ(A) = 0). The probability triple (Ω, F, P) is of course a special measure space. A statement on (Ω, F, P) holds almost surely (a.s.) if the event E ∈ F on which the statement is false has zero probability (P(E) = 0).

We first introduce a more general concept of density in the Lebesgue integration theory. Let fn be a simple function of the form fn(s) = Σ_{k=1}^n ck I_{Ak}, where (Ak ∈ G) are disjoint and (ck) are real nonnegative constants. We have

Definition 2.4.1 The Lebesgue integral of a simple function fn with respect to µ is
defined by
∫ fn dµ = Σ_{k=1}^n ck µ(Ak).

For a general nonnegative function f , we have

Definition 2.4.2 The Lebesgue integral of a nonnegative function f with respect to µ is defined by

∫ f dµ = sup_{fn ≤ f} ∫ fn dµ,

where the supremum is taken over simple functions fn with fn ≤ f.

In words, the Lebesgue integral of a general nonnegative function f is the supremum of the integrals of simple functions that lie below f. For example, we may choose fn = αn ◦ f, where

αn(x) = 0 if x = 0,
αn(x) = 2^{−n}(k − 1) if 2^{−n}(k − 1) < x ≤ 2^{−n}k, for k = 1, . . . , n2^n,
αn(x) = n if x > n.
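The staircase construction can be made concrete numerically. The sketch below uses a floor-based variant of αn (right-open rather than left-open intervals) and approximates the Lebesgue measure on [0, 1] by a fine grid; both choices are illustrative assumptions, not part of the text.

```python
import numpy as np

def alpha(x, n):
    """Floor-based dyadic staircase below x, capped at n (variant of alpha_n)."""
    return np.where(x > n, n, np.floor(x * 2**n) / 2**n)

f = lambda t: t**2                        # integrand on [0, 1]; true integral 1/3
grid = np.linspace(0.0, 1.0, 1_000_001)   # grid stand-in for Lebesgue measure

for n in (1, 2, 4, 8):
    fn = alpha(f(grid), n)                # simple function lying below f
    # integral of a simple function = sum of (level) x (measure of level set),
    # which here equals the grid average of fn
    print(f"n = {n}:   integral of alpha_n(f) = {fn.mean():.6f}")
# The values increase monotonically toward 1/3, as in the construction above.
```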

For functions that are not necessarily nonnegative, we define

f^+(x) = max(f(x), 0),
f^−(x) = max(−f(x), 0).

Then we have f = f^+ − f^−. The Lebesgue integral of f is now defined by

∫ f dµ = ∫ f^+ dµ − ∫ f^− dµ.

If both ∫ f^+ dµ and ∫ f^− dµ are finite, then we call f integrable with respect to µ.

Remarks:

• The function f is called the integrand. The notation ∫ f dµ is a simplified form of ∫_S f(x) µ(dx).

• The summation Σn cn is a special case of the Lebesgue integral, taken with respect to the counting measure, which assigns measure 1 to each point of N, the set of natural numbers.

• The Lebesgue integral generalizes the Riemann integral. It exists and coincides with the Riemann integral whenever the latter exists.

Definition 2.4.3 (Absolute Continuity of Measures) Let µ and ν be two mea-


sures on (S, G). ν is absolutely continuous with respect to µ if

ν(A) = 0 whenever µ(A) = 0, A ∈ G.

For example, given µ, we may construct a measure ν by


ν(A) = ∫_A f dµ, A ∈ G,

where f is nonnegative. It is obvious that ν, so constructed, is absolutely continuous


with respect to µ.

Theorem 2.4.4 (Radon-Nikodym Theorem) Let µ and ν be two measures on


a measurable space (S, G). If ν is absolutely continuous with respect to µ, then there
exists a nonnegative measurable function f such that ν can be represented as
ν(A) = ∫_A f dµ, A ∈ G.

The function f is called the Radon-Nikodym derivative of ν with respect to µ. It is


uniquely determined up to µ-null sets. We may denote f = ∂ν/∂µ.

Density Recall that PX is a probability measure on (R, B(R)). If PX is absolutely


continuous with respect to another measure µ on (R, B(R)), then there exists a
nonnegative function pX such that
PX(A) = ∫_A pX dµ, ∀A ∈ B(R). (2.1)

• If the measure µ in (2.1) is the Lebesgue measure, which gives the length
of intervals, the function pX is conventionally called the probability density
function of X. If such a pdf exists, we say that X is a continuous random
variable.

• If PX is absolutely continuous with respect to the counting measure µ, then


pX is conventionally called the discrete probabilities and X is called a discrete
random variable.

2.5 Independence
The independence of random variables is defined in terms of σ-fields they generate.
We first define

Definition 2.5.1 (σ-field Generated by Random Variable) Let X be a ran-


dom variable. The σ-field generated by X, denoted by σ(X), is defined by

σ(X) = {X^{−1}(A) | A ∈ B(R)}.

• σ(X) is the smallest σ-field with respect to which X is measurable.

• The σ-field generated by a random vector X = (X1, . . . , Xn)′ is similarly defined: σ(X) = σ(X1, . . . , Xn) = {X^{−1}(A) | A ∈ B(R^n)}.

• σ(X) may be understood as the information that the random variable X contains about the state of the world. Put differently, σ(X) is the collection of events E such that, for a given outcome, we can tell whether E has happened based on the observation of X.

Definition 2.5.2 (Independence of Random Variables) Random variables X1 , . . . , Xn


are independent if the σ-fields, σ(X1 ), . . . , σ(Xn ), are independent.

Let p(xik ) be the Radon-Nikodym density of the distribution of Xik with respect to
Lebesgue or counting measure. And let, with some abuse of notation, p(xi1 , . . . , xin )
be the Radon-Nikodym density of the distribution of Xi1 , . . . , Xin , with respect to
the product of the measures to which the marginal densities p(xi1 ), . . . , p(xin ) are
defined. The density p may be pdf or discrete probabilities, depending on whether
the corresponding random variable is continuous or discrete. We have the following
theorem.

Theorem 2.5.3 The random variables X1, X2, . . . are independent if and only if for any (i1, . . . , in),

p(xi1, . . . , xin) = ∏_{k=1}^n p(xik)

almost everywhere with respect to the measure for which p is defined.

Proof: It suffices to prove the case of two random variables. Let Z = (X, Y)′ be a two-dimensional random vector, and let µ(dx) and µ(dy) be the measures with respect to which p(x) and p(y) are defined. The joint density p(x, y) is then defined with respect to the
measure µ(dx)µ(dy) on R2 . For any A, B ∈ B, we have

PZ (A × B) = P{Z −1 (A × B)} = P{X −1 (A) ∩ Y −1 (B)}.

X and Y are independent iff

PZ (A × B) = P{X −1 (A) ∩ Y −1 (B)} = P{X −1 (A)}P{Y −1 (B)} = PX (A)PY (B).

And PZ (A × B) = PX (A)PY (B) holds iff


∫∫_{A×B} p(x, y) µ(dx)µ(dy) = ∫_A p(x) µ(dx) ∫_B p(y) µ(dy) = ∫∫_{A×B} p(x) p(y) µ(dx)µ(dy),

where the second equality follows from Fubini’s theorem.

2.6 Exercises
1. Verify that PX (·) = P (X −1 (·)) is a probability measure on B(R).

2. Let E and F be two events with probabilities P(E) = 1/2, P(F ) = 2/3 and
P(E ∩ F ) = 1/3. Define random variables X = I(E) and Y = I(F ). Find the
joint distribution of X and Y . Also, obtain the conditional distribution of X
given Y .

3. If a random variable X is endowed with the following density function,

p(x) = (x^2/18) I{−3 < x < 3},

compute P{ω | |X(ω)| < 1}.

4. Suppose the joint probability density function of X and Y is given by

p(x, y) = 3(x + y) I{0 ≤ x + y ≤ 1, 0 ≤ x, y ≤ 1}.

(a) Find the marginal density of X.


(b) Find P{ω|X(ω) + Y (ω) < 1/2}.

Chapter 3

Expectations

3.1 Integration
Expectation is integration. Before studying expectation, therefore, we first dig
deeper into the theory of integration.

Notations Let (S, G, µ) be a measure space and f be a measurable function from


S to R.

• We denote µ(f) = ∫ f dµ and µ(f; A) = ∫_A f dµ = ∫ f IA dµ, where A ∈ G.

• We say that f is µ-integrable if µ(|f|) = µ(f^+) + µ(f^−) < ∞, in which case we write f ∈ L1(S, G, µ).

• If, in addition, f is nonnegative, then we write f ∈ L1(S, G, µ)^+.

Properties of Integration

• If f ∈ L1 (S, G, µ), then |µ(f )| ≤ µ(|f |).


• If f, g ∈ L1 (S, G, µ), then af + bg ∈ L1 (S, G, µ), where a, b ∈ R. Furthermore,
µ(af + bg) = aµ(f ) + bµ(g).
• µ(f ; A) is a measure on (S, G).

Theorem 3.1.1 (Monotone Convergence Theorem) If fn is a sequence of non-


negative measurable functions such that, except on a µ-null set, fn ↑ f , then
µ(fn ) ↑ µ(f ).

Note that the monotone convergence of probability is implied by the monotone
convergence theorem. Take fn = IAn and f = IA , where An is a monotone increasing
sequence of sets in G that converge to A, and let µ = P be a probability measure.
Then µ(fn ) = P(An ) ↑ P(A) = µ(f ).

Theorem 3.1.2 (Fatou’s Lemma) For a sequence of nonnegative measurable func-


tions fn , we have
µ(lim inf fn ) ≤ lim inf µ(fn ).

Proof: Note that inf n≥k fn is monotone increasing and inf n≥k fn ↑ lim inf fn . In
addition, since fk ≥ inf n≥k fn for all k, we have µ(fk ) ≥ µ(inf n≥k fn ) ↑ µ(lim inf fn )
by Monotone Convergence Theorem.

Theorem 3.1.3 (Reverse Fatou’s Lemma) If a sequence of nonnegative mea-


surable functions fn are bounded by a measurable nonnegative function g for all n
and µ(g) < ∞, then
µ(lim sup fn ) ≥ lim sup µ(fn ).

Proof: Apply Fatou's lemma to (g − fn).

Theorem 3.1.4 (Dominated Convergence Theorem) Suppose that fn and f


are measurable, that fn (s) → f (s) for every s ∈ S, and that (fn ) is dominated by
some g ∈ L1 (S, G, µ)+ , ie,
|fn (s)| ≤ g(s), ∀s ∈ S, ∀n,
then
µ(|fn − f |) → 0,
so that
µ(fn ) → µ(f ).
In addition, f ∈ L1(S, G, µ).

Proof: It is obvious that |f (s)| ≤ g(s) ∀s ∈ S. Hence |fn − f | ≤ 2g, where


µ(2g) < ∞. We apply the reverse Fatou Lemma to (|fn − f |) and obtain
lim sup µ(|fn − f |) ≤ µ(lim sup |fn − f |) = µ(0) = 0.
Since |µ(fn ) − µ(f )| = |µ(fn − f )| ≤ µ(|fn − f |), we have
lim |µ(fn ) − µ(f )| ≤ lim sup µ(|fn − f |) = 0.
n→∞

The theorem can be extended to the case where fn →a.e. f only. The condition of the
existence of a dominating function g can also be relaxed to the uniform integrability
of fn .

3.2 Expectation
Now we have

Definition 3.2.1 (Expectation) Let X be a random variable on the probability


space (Ω, F, P). The expectation of X, EX, is defined by
EX = ∫ X dP.

If E|X|p < ∞ with 0 < p < ∞, we say X ∈ Lp (Ω, F, P), or simply X ∈ Lp . By


convention, L∞ refers to the space of (essentially) bounded random variables.
More generally, for a Borel function f,

Ef(X) = ∫ f(X) dP.

EX is also called the mean of X, and Ef (X) can be called the f -moment of X.

Theorem 3.2.2 (Change of Variable) We have


Ef(X) = ∫ f dPX = ∫ f pX dµ, (3.1)

where pX is the density of X with respect to measure µ.

Proof: First consider indicator functions of the form f (X) = IA (X), where A ∈ B.
We have f (X)(ω) = IA ◦ X(ω) = IX −1 (A) (ω). Then

Ef (X) = EIA ◦ X = P(X −1 (A)) = PX (A).

And we have
PX(A) = ∫ IA dPX = ∫ f dPX and PX(A) = ∫ IA pX dµ = ∫ f pX dµ.

Hence the theorem holds for indicator functions. Similarly we can show that it is
true for simple functions. For a general nonnegative function f , we can choose a
sequence of simple functions (fn ) such that fn ↑ f . The monotone convergence
theorem is then applied to obtain the same result. For general functions, note that
f = f + − f −.

All properties of integration apply to the expectation. In addition, we have the


following convergence theorems.

• (Monotone Convergence Theorem) If 0 ≤ Xn ↑ X a.s., then E(Xn ) ↑ E(X).

• (Fatou’s Lemma) If Xn ≥ 0 a.s. for all n, then E(lim inf Xn ) ≤ lim inf E(Xn ).

• (Reverse Fatou’s Lemma) If there exists X ∈ L1 such that Xn ≤ X a.s. for


all n, then E lim sup Xn ≥ lim sup EXn .

• (Dominated Convergence Theorem) If there exists Y ∈ L1 such that |Xn | ≤ Y


a.s. for all n, and Xn → X a.s., then

E(|Xn − X|) → 0,

which implies that


EXn → EX.

• (Bounded Convergence Theorem) If there exists a constant K < ∞ such that


|Xn | ≤ K a.s. for all n, then

E(|Xn − X|) → 0.

3.3 Moment Inequalities


Definitions: Let X and Y be random variables defined on (Ω, F, P). Recall that we call Ef(X) the f-moment of X. In particular, if f(x) = x^k, then µk ≡ EX^k is called the k-th moment of X. If f(x) = (x − µ1)^k, we call E(X − µ1)^k the k-th central moment of X. In particular, the second central moment is called the variance.

The covariance of X and Y is defined as

cov(X, Y) = E(X − µx)(Y − µy),

where µx and µy are the means of X and Y, respectively. cov(X, X) is of course the variance of X. Letting σX^2 and σY^2 denote the variances of X and Y, respectively, we define the correlation of X and Y by

ρX,Y = cov(X, Y) / (σX σY).
For a random vector X = (X1, . . . , Xn)′, the second moment is given by EXX′, a symmetric matrix. Let µ = EX; then ΣX = E(X − µ)(X − µ)′ is called the variance-covariance matrix, or simply the covariance matrix. If Y = AX, where A is a conformable constant matrix, then ΣY = A ΣX A′. This relation reduces to σY^2 = a^2 σX^2 if X and Y are scalar random variables and Y = aX, where a is a constant.

The moments of a random variable X contain the same information as the distribution (or the law) does. We have

Theorem 3.3.1 Let X and Y be two random variables (possibly defined on different
probability spaces). Then PX = PY if and only if Ef (X) = Ef (Y ) for all Borel
functions whenever the expectation is finite.

Proof: If PX = PY , then we have Ef (X) = Ef (Y ) by (3.1). Conversely, set f = IB ,


where B is any Borel set. Then Ef (X) = Ef (Y ) implies that P(X ∈ B) = P(Y ∈
B), ie, PX = PY .

In the following, we prove a set of well-known inequalities.

Theorem 3.3.2 (Chebyshev Inequality) P{|X| ≥ ε} ≤ E|X|^k / ε^k, for any ε > 0 and k > 0.

Proof: It follows from the fact that ε^k I_{|X| ≥ ε} ≤ |X|^k.

Remarks:

• We have, as a special case of the Chebyshev inequality,

P{|X − µ| ≥ ε} ≤ σ^2/ε^2,

where µ and σ^2 are the mean and the variance of X, respectively. If a random variable has a finite variance, this inequality states that its tail probabilities are bounded.

• Another special case concerns nonnegative random variables. In this case, we have the Markov inequality, which states that for a nonnegative random variable X,

P(X ≥ a) ≤ EX / a, for all a > 0.

• There is also an exponential form of the Markov inequality:

P(X > ε) ≤ e^{−tε} E e^{tX}, t > 0.

A Monte Carlo sketch of the Chebyshev bound follows.

Theorem 3.3.3 (Cauchy-Schwarz Inequality) (EXY)^2 ≤ (EX^2)(EY^2)

Proof: Without loss of generality, we consider the case where X ≥ 0 and Y ≥ 0. Note first that if E(X^2) = 0, then X = 0 a.s., in which case the inequality holds with equality. Now consider the case where E(X^2) > 0 and E(Y^2) > 0. Let X∗ = X/(E(X^2))^{1/2} and Y∗ = Y/(E(Y^2))^{1/2}, so that EX∗^2 = EY∗^2 = 1. Then we have

0 ≤ E(X∗ − Y∗)^2 = E(X∗^2 + Y∗^2 − 2X∗Y∗) = 1 + 1 − 2E(X∗Y∗),

which results in E(X∗Y∗) ≤ 1. The Cauchy-Schwarz inequality then follows.

Remarks:

• It is obvious that equality holds only when Y is a linear function of X.

• If we apply Cauchy-Schwartz Inequality to X − µX and Y − µY , then we have

cov(X, Y )2 ≤ var(X)var(Y ).

To introduce Jensen's inequality, recall that f: R → R is convex if f(αx + (1 − α)y) ≤ αf(x) + (1 − α)f(y) for all x, y and α ∈ [0, 1]. If f is twice differentiable, then f is convex if and only if f″ ≥ 0. Finally, if f is convex, it is automatically continuous.

Theorem 3.3.4 (Jensen’s Inequality) If f is convex, then f (EX) ≤ Ef (X).

Proof: Since f is convex, there exists a linear function ℓ such that ℓ ≤ f and ℓ(EX) = f(EX). It follows that

Ef(X) ≥ Eℓ(X) = ℓ(EX) = f(EX).
Remarks:

• Functions such as |x|, x^2, and exp(θx) are all convex functions of x.

• The inequality is reversed for concave functions such as log(x), x^{1/2}, etc. (A quick numeric check is sketched below.)
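A quick numeric sketch of Jensen's inequality with the convex function f(x) = exp(x) and a standard normal X, both chosen purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(4)
x = rng.normal(size=1_000_000)

lhs = np.exp(x.mean())          # f(EX) ~ exp(0) = 1
rhs = np.exp(x).mean()          # Ef(X) ~ exp(1/2) ~ 1.6487 for N(0,1)
print(f"f(EX) ~ {lhs:.4f}  <=  Ef(X) ~ {rhs:.4f}")
```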

Definition 3.3.5 (Lp Norm) Let 1 ≤ p < ∞. The Lp norm of a random variable X is defined by

‖X‖p ≡ (E|X|^p)^{1/p}.

The L∞ norm is defined by

‖X‖∞ ≡ inf{M : |X| ≤ M a.s.},

which may be interpreted as the lowest almost-sure upper bound for |X|. (Lp(Ω, F, P), ‖·‖p) with 1 ≤ p ≤ ∞ is a complete normed (Banach) space of random variables. In particular, when p = 2, if we define the inner product

⟨X, Y⟩ = EXY,

then (L2(Ω, F, P), ⟨·, ·⟩) is a complete inner product (Hilbert) space.

Theorem 3.3.6 (Monotonicity of Lp Norms) If 1 ≤ p ≤ q < ∞ and X ∈ Lq, then X ∈ Lp, and

‖X‖p ≤ ‖X‖q.

Proof: Define Yn = {min(|X|, n)}^p. For any n ∈ N, Yn is bounded, hence both Yn and Yn^{q/p} are in L1. Since x^{q/p} is a convex function of x, we use Jensen's inequality to obtain

(EYn)^{q/p} ≤ E(Yn^{q/p}) = E({min(|X|, n)}^q) ≤ E(|X|^q).

Now the monotone convergence theorem obtains the desired result.

3.4 Conditional Expectation


Let X be a random variable on L1 (Ω, F, P) and let G ⊂ F be a sub-σ-field.

Definition 3.4.1 (Conditional Expectation) The conditional expectation of X


given G, denoted by E(X|G), is a G-measurable random variable such that for every
A ∈ G,

∫_A E(X|G) dP = ∫_A X dP. (3.2)

In particular, if G = σ(Y ), where Y is a random variable, we write E(X|σ(Y ))
simply as E(X|Y ).
The conditional expectation is a local average. To see this, let {Fk} be a partition of Ω with P(Fk) > 0 for all k, and let G = σ({Fk}). Note that we can write

E(X|G) = Σk ck IFk,

where {ck} are constants. We may determine {ck} from

∫_{Fk} X dP = ∫_{Fk} E(X|G) dP = ∫_{Fk} ck dP,

which obtains

ck = (∫_{Fk} X dP) / P(Fk).
P(Fk )
The conditional expectation E(X|G) may be viewed as a random variable that takes values that are local averages of X over the partition cells generated by G. If G1 ⊂ G, G is said to be "finer" than G1; E(X|G) is then more "random" than E(X|G1), since the former can take more values. The following example gives two extreme cases.

Example 3.4.2 If G = {∅, Ω}, then E(X|G) = EX, which is a degenerate random
variable. If G = F, then E(X|G) = X.

Example 3.4.3 Let E and F be two events that satisfy P(E) = P(F ) = 1/2 and
P(E ∩ F ) = 1/3. E and F are obviously not independent. We define two random
variables, X = IE and Y = IF . It is obvious that {F, F c } is a partition of Ω and
σ({F, F c }) = σ(Y ) = {∅, Ω, F, F c }. The conditional expectation of E(X|Y ) may be
written as
E(X|Y ) = c∗1 IF + c∗2 IF c ,
where c∗1 = P(F )−1 F XP = P(F )−1 P(F ∩ E) = 2/3, and c∗2 = P(F c )−1 F c XP =
R R

P(F c )−1 P(F c ∩ E) = 1/3.
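This example can be reproduced by exact enumeration over the four atoms E ∩ F, E ∩ F^c, E^c ∩ F, E^c ∩ F^c, whose probabilities are pinned down by the example. A sketch:

```python
# Atom probabilities implied by P(E) = P(F) = 1/2 and P(E ∩ F) = 1/3.
atoms = {
    ("E", "F"): 1/3,
    ("E", "Fc"): 1/6,
    ("Ec", "F"): 1/6,
    ("Ec", "Fc"): 1/3,
}

def local_average(y_cell):
    """E(X|Y) on the cell {Y = y}: (integral of X over the cell) / P(cell)."""
    cell = {k: p for k, p in atoms.items() if k[1] == y_cell}
    p_cell = sum(cell.values())
    x_integral = sum(p for k, p in cell.items() if k[0] == "E")  # X = I(E)
    return x_integral / p_cell

print(f"E(X|Y) on F   = {local_average('F'):.4f}")    # c1* = 2/3
print(f"E(X|Y) on F^c = {local_average('Fc'):.4f}")   # c2* = 1/3
```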

Existence of Conditional Expectation Note that, taking X ≥ 0 first (the general case follows by treating X^+ and X^− separately),

µ(A) = ∫_A X dP, A ∈ G,

defines a measure on (Ω, G), and µ is absolutely continuous with respect to P. By the Radon-Nikodym theorem, there exists a G-measurable random variable ξ such that

µ(A) = ∫_A ξ dP.

The random variable ξ is exactly E(X|G). It is unique up to P-null sets.

Definition 3.4.4 (Conditional Probability) The conditional probability may be
defined as a random variable P(E|G) such that, for all A ∈ G,

∫_A P(E|G) dP = P(A ∩ E).

Check that the conditional probability behaves like ordinary probabilities, in that
it satisfies the axioms of the probability, at least in a.s. sense.

Properties:

• (Linearity) E(aX + bY |G) = aE(X|G) + bE(Y |G).

• (Law of Iterative Expectation) The definition of conditional expectation di-


rectly implies EX = E [E(X|G)].

• If X is G-measurable, then E(XY |G) = XE(Y |G) with probability 1.

Proof: First, XE(Y |G) is G-measurable. Now let X = IF , where F ∈ G. For


any A ∈ G, we have
∫_A E(IF Y|G) dP = ∫_A IF Y dP = ∫_{A∩F} Y dP = ∫_{A∩F} E(Y|G) dP = ∫_A IF E(Y|G) dP.

Hence the statement holds for X = IF . For general random variables, use
linearity and monotone convergence theorem.

• Using the above two results, it is trivial to show that X and Y are independent if and only if E[f(X)g(Y)] = Ef(X) Eg(Y) for all Borel functions f and g.

• Let G1 and G2 be sub-σ-fields and G1 ⊂ G2 . Then, with probability 1,

E [E(X|G2 )|G1 ] = E(X|G1 ).

Proof: It follows from, for any A ∈ G1 ⊂ G2 ,


∫_A E[E(X|G2)|G1] dP = ∫_A E(X|G2) dP = ∫_A X dP = ∫_A E(X|G1) dP.

• (Doob-Dynkin) There exists a measurable function f such that E(X|Y ) =


f (Y ).

Conditional Expectation as Projection The last property implies that
E [E(X|G)|G] = E(X|G),
which suggests that the conditional expectation is a projection operator, projecting a random variable onto a sub-σ-field. This is indeed the case. It is well known that H = L2(Ω, F, P) is a Hilbert space with inner product defined by ⟨X, Y⟩ = EXY, where X, Y ∈ L2. Consider a subspace H0 = L2(Ω, G, P), where G ⊂ F. The projection theorem in functional analysis guarantees that for any random variable X ∈ H, there exists a G-measurable random variable Y such that

E(X − Y)W = 0 for all W ∈ H0. (3.3)

Y is called the (orthogonal) projection of X on H0. Writing W = IA for any A ∈ G, equation (3.3) implies that

∫_A X dP = ∫_A Y dP for all A ∈ G.

It follows that Y is indeed a version of E(X|G).

Conditional Expectation as the Best Predictor Consider the problem of


predicting Y given X. We call φ(X) a predictor, where φ is a Borel function. We
have the following theorem,

Theorem 3.4.5 If Y ∈ L2 , then E(Y |X) solves the following problem,


min_φ E(Y − φ(X))^2.

Proof: We have

E(Y − φ(X))^2 = E([Y − E(Y|X)] + [E(Y|X) − φ(X)])^2
= E{[Y − E(Y|X)]^2 + [E(Y|X) − φ(X)]^2 + 2[Y − E(Y|X)][E(Y|X) − φ(X)]}.

By the law of iterative expectation, E{[Y − E(Y|X)][E(Y|X) − φ(X)]} = 0. Hence

E(Y − φ(X))^2 = E[Y − E(Y|X)]^2 + E[E(Y|X) − φ(X)]^2.

Since φ only appears in the second term, which is minimized (at zero) when φ(X) = E(Y|X), it is now clear that E(Y|X) minimizes E(Y − φ(X))^2.

Hence the conditional expectation is the best predictor in the sense of minimizing
mean squared forecast error (MSFE). This fact is the basis of regression analysis
and time series forecasting.
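A simulation sketch of this optimality property; the data-generating process Y = X^2 + noise is made up for illustration, so that E(Y|X) = X^2 by construction.

```python
import numpy as np

rng = np.random.default_rng(5)
x = rng.normal(size=500_000)
y = x**2 + rng.normal(size=x.size)       # E(Y|X) = X^2 by construction

predictors = {
    "conditional mean  phi(X) = X^2": x**2,
    "best linear       phi(X) = 1  ": np.ones_like(x),   # EX^3 = 0, so constant EY
    "naive zero        phi(X) = 0  ": np.zeros_like(x),
}
for name, phi in predictors.items():
    print(f"{name}:  MSFE = {np.mean((y - phi) ** 2):.4f}")
# The conditional mean attains MSFE ~ 1 (the noise variance); the others pay
# the extra E[E(Y|X) - phi(X)]^2 penalty (~2 and ~3 here, respectively).
```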

3.5 Conditional Distribution
Suppose that X and Y are two random variables with joint density p(x, y).

Definition 3.5.1 (Conditional Density) The conditional density of X given Y =


y is obtained by

p(x|y) = p(x, y) / ∫ p(x, y) µ(dx).

The conditional expectation E(X|Y = y) may then be represented by


E(X|Y = y) = ∫ x p(x|y) µ(dx).

It is clear that E(X|Y = y) is a deterministic function of y. Thus we write g(y) =


E(X|Y = y). We have
g(Y ) = E(X|Y ).

To show this, first note that for all F ∈ σ(Y), there exists A ∈ B(R) such that F = Y^{−1}(A). We now have

∫_F g(Y) dP = ∫_A g(y) p(y) µ(dy)
= ∫_A (∫ x p(x|y) µ(dx)) p(y) µ(dy)
= ∫∫_{R×A} x p(x, y) µ(dx) µ(dy)
= ∫_F X dP
= ∫_F E(X|Y) dP.

Example 3.5.2 Let p(x, y) = (x + y)I{0≤x,y≤1} . To obtain E(X|Y ), we calculate

E(X|Y = y) = ∫ xp(x|y)dx = ∫_0^1 x · (x + y)/(1/2 + y) dx = (1/3 + y/2)/(1/2 + y).

Then E(X|Y ) = (1/3 + Y /2)/(1/2 + Y ).
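
The formula can be checked by simulation. The sketch below (illustrative code) draws from p(x, y) = x + y on the unit square by rejection sampling (the density is bounded by 2 there) and compares conditional sample means of X on thin slices of Y with (1/3 + y/2)/(1/2 + y).

```python
import numpy as np

rng = np.random.default_rng(1)

# Rejection sampling from p(x, y) = x + y on the unit square: accept a
# uniform point (x, y) with probability (x + y) / 2.
u = rng.uniform(size=(3, 2_000_000))
keep = 2.0 * u[2] < u[0] + u[1]
x, y = u[0][keep], u[1][keep]

# Compare E(X | Y = y0) = (1/3 + y0/2) / (1/2 + y0) with slice averages.
for y0 in (0.2, 0.5, 0.8):
    band = np.abs(y - y0) < 0.01
    print(y0, x[band].mean(), (1/3 + y0/2) / (1/2 + y0))
```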

3.6 Exercises
1. Let the sample space Ω = R and the probability P on Ω be given by
   
P ({1/3}) = 1/3 and P ({2/3}) = 2/3.
Define a sequence of random variables by

Xn = (3 − 1/n) I(An ) and X = 3 I( lim_{n→∞} An ),

where

An = (1/3 + 1/n, 2/3 + 1/n]
for n = 1, 2, . . ..
(a) Show that lim_{n→∞} An exists so that X is well defined.
(b) Compare lim_{n→∞} E(Xn ) with E(X).
(c) Is it true that lim_{n→∞} E(Xn − X)² = 0?

2. Let X1 and X2 be two zero-mean random variables with correlation ρ. Suppose


the variances of X1 and X2 are the same, say σ². Prove that

P (|X1 + X2 | ≥ kσ) ≤ 2(1 + ρ)/k².
3. Prove Cantelli’s inequality, which states that if a random variable X has mean
µ and variance σ 2 < ∞, then for all a > 0,
P(X − µ ≥ a) ≤ σ²/(σ² + a²).
[Hint: You may first show P(X − µ ≥ a) ≤ P ((X − µ + y)2 ≥ (a + y)2 ), use
Markov’s inequality, and then minimize the resulting bound over the choice of
y. ]
4. Let the sample space Ω = [0, 1] and the probability on Ω be given by the
density
p(x) = 2x
over [0, 1]. We define random variables X and Y by

X(ω) = 1 for 0 ≤ ω < 1/4; 0 for 1/4 ≤ ω < 1/2; −1 for 1/2 ≤ ω < 3/4; 0 for 3/4 ≤ ω ≤ 1,

and

Y (ω) = 1 for 0 ≤ ω < 1/2; 0 for 1/2 ≤ ω ≤ 1.

(a) Find the conditional expectation E(X²|Y ).
(b) Show that E(E(X 2 |Y )) = E(X 2 ).

Chapter 4

Distributions and Transformations

4.1 Alternative Characterizations of Distribution

4.1.1 Moment Generating Function

Let X be a random variable with density p. The moment generating function (MGF)
of X is given by
Z
m(t) = E exp(tX) = exp(tx)p(x)dµ(x).

Note that the moment generating function is the Laplace transform of the density.
The name of MGF is due to the fact that

d^k m/dt^k (0) = EX^k .
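
For instance, the following sympy sketch (illustrative code, using the Exponential(λ) MGF m(t) = λ/(λ − t) listed in Table 4.1 below) verifies that the k-th derivative of m at t = 0 equals EX^k = k!/λ^k .

```python
import sympy as sp

t, lam = sp.symbols('t lambda', positive=True)
m = lam / (lam - t)          # MGF of Exponential(lambda), valid for t < lambda

# d^k m / dt^k evaluated at t = 0 should give E X^k = k! / lambda^k.
for k in (1, 2, 3):
    print(k, sp.simplify(sp.diff(m, t, k).subs(t, 0)))
# prints 1/lambda, 2/lambda**2, 6/lambda**3
```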

4.1.2 Characteristic Function

The MGF may not exist, but we can always define the characteristic function, which is
given by
Z
φ(t) = E exp(itX) = exp(itx)p(x)dµ(x).

Note that the characteristic function is the Fourier transform of the density. Since
| exp(itx)| is bounded, φ(t) is always defined.

4.1.3 Quantile Function
We define the τ -quantile or fractile of X (with distribution function F ) by
Qτ = inf{x|F (x) ≥ τ }, 0 < τ < 1.
If F is strictly monotone, then Qτ is nothing but F −1 (τ ). If τ = 1/2, Q1/2 is
conventionally called the median of X.
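
A direct Python transcription of this definition for an empirical distribution function (the helper name `quantile` is our own illustrative choice):

```python
import numpy as np

def quantile(sample, tau):
    """Q_tau = inf{x : F(x) >= tau}, with F the empirical CDF of the sample."""
    xs = np.sort(np.asarray(sample))
    n = len(xs)
    # F(xs[k]) = (k + 1) / n, so the infimum is attained at the first order
    # statistic whose empirical CDF value reaches tau.
    k = int(np.ceil(tau * n)) - 1
    return xs[max(k, 0)]

sample = np.random.default_rng(2).normal(size=10_001)
print(quantile(sample, 0.5))   # close to 0, the N(0, 1) median
```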

4.2 Common Families of Distributions


In the following we get familiar with some families of distributions that are frequently
used in practice. Given a family of distributions {Pθ } indexed by θ, we call the
index θ parameter. If θ is finite dimensional, we call {Pθ } a parametric family of
distributions.

Uniform The uniform distribution is a continuous distribution with the following


density with respect to the Lebesgue measure,
p_{a,b}(x) = (1/(b − a)) I_[a,b] (x), a < b.
We denote the uniform distribution with parameters a and b by Uniform(a, b).

Bernoulli The Bernoulli distribution is a discrete distribution with the following


density with respect to the counting measure,
pθ (x) = θx (1 − θ)1−x , x ∈ {0, 1}, and θ ∈ [0, 1].
The Bernoulli distribution, denoted by Bernoulli(θ), usually describes random ex-
periments with binary outcomes such as success (x = 1) or failure (x = 0). The
parameter θ is then interpreted as the probability of success, P{x = 1}.

Binomial The Binomial distribution, corresponding to n consecutive coin tosses,
is a discrete distribution with the following density with respect to the counting measure,

p_{n,θ}(x) = C(n, x) θ^x (1 − θ)^{n−x} , x ∈ {0, 1, . . . , n},

where C(n, x) = n!/(x!(n − x)!) is the binomial coefficient.
We may use Binomial distribution, denoted by Binomial(n, θ), to describe the out-
comes of repeated trials, in which case n is the number of trials and θ is the proba-
bility of success for each trial.

Note that if X ∼ Binomial(n, θ), it can be represented by a sum of n i.i.d. (inde-
pendently and identically distributed) Bernoulli(θ) random variables.

Poisson The Poisson distribution is a discrete distribution with the following den-
sity,
p_λ (x) = e^{−λ} λ^x /x! , x ∈ {0, 1, 2, . . .}.
The Poisson distribution typically describes the probability of the number of events
occurring in a fixed period of time. For example, the number of phone calls in a given
time interval may be modeled by a Poisson(λ) distribution, where the parameter λ
is the expected number of calls. Note that the Poisson(λ) density is a limiting case
of the Binomial(n, λ/n) density:

C(n, x) (λ/n)^x (1 − λ/n)^{n−x} = [n!/((n − x)! n^x )] (1 − λ/n)^{−x} (1 − λ/n)^n λ^x /x! → e^{−λ} λ^x /x!.
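
This convergence is easy to check numerically; the sketch below (illustrative code, standard library only) evaluates the Binomial(n, λ/n) density at a fixed point for growing n.

```python
from math import comb, exp, factorial

lam, x = 3.0, 4
for n in (10, 100, 1_000, 10_000):
    p = lam / n
    print(n, comb(n, x) * p**x * (1 - p) ** (n - x))
print("Poisson limit:", exp(-lam) * lam**x / factorial(x))
```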

Normal The normal (or Gaussian) distribution, denoted by N (µ, σ 2 ) is a contin-


uous distribution with the following density with respect to Lebesgue measure,

p_{µ,σ²}(x) = (1/(√(2π) σ)) exp(−(x − µ)²/(2σ²)).

The parameters µ and σ² are the mean and the variance of the distribution, respec-
tively. In particular, N (0, 1) is called standard normal. The normal distribution
was invented for the modeling of observation error, and is now the most important
distribution in probability and statistics.

Exponential The exponential distribution, denoted by Exponential(λ) is a con-


tinuous distribution with the following density with respect to Lebesgue measure,

p_λ (x) = λe^{−λx} , x ≥ 0.

The cdf of the Exponential(λ) distribution is given by

F (x) = 1 − e^{−λx} , x ≥ 0.

The exponential distribution typically describes the waiting time before the arrival
of the next Poisson event.

Gamma The Gamma distribution, denoted by Gamma(k, λ), is a continuous distribution
with the following density,

p_{k,λ}(x) = (λ/Γ(k)) (λx)^{k−1} e^{−λx} , x ∈ [0, ∞),
where Γ(·) is the gamma function defined by

Γ(z) = ∫_0^∞ t^{z−1} e^{−t} dt.
The parameter k is called the shape parameter, and λ > 0 is the rate parameter (the inverse of the scale).

• Special cases
– Let k = 1, then Gamma(1, λ) reduces to Exponential(λ).
– If k is an integer, Gamma(k, λ) reduces to an Erlang distribution, i.e., the
distribution of the sum of k independent exponentially distributed random
variables, each with mean 1/λ.
– Let ` be an integer and λ = 1/2, then Gamma(`/2, 1/2) reduces to χ2` ,
chi-square distribution with ` degrees of freedom.
• The gamma function generalizes the factorial function. To see this, note that
Γ(1) = 1 and that by integration by parts, we have
Γ(z + 1) = zΓ(z).
Hence for positive integer n, we have Γ(n + 1) = n!.

Beta The Beta distribution, denoted by Beta(a, b), is a continuous distribution on


[0, 1] with the following density,
p_{a,b}(x) = x^{a−1} (1 − x)^{b−1} / B(a, b), x ∈ [0, 1],
where B(a, b) is the beta function defined by
B(a, b) = ∫_0^1 x^{a−1} (1 − x)^{b−1} dx, a, b > 0.
Both a > 0 and b > 0 are shape parameters. Since the support of Beta distributions
is [0, 1], it is often used to describe unknown probability value such as the probability
of success in a Bernoulli distribution.

• The beta function is related to the gamma function by

B(a, b) = Γ(a)Γ(b)/Γ(a + b).
• Beta(a, b) reduces to Uniform[0, 1] if a = b = 1.

Table 4.1: Mean, Variance, and Moment Generating Function

Distribution      Mean       Variance               MGF
Uniform[a, b]     (a+b)/2    (b−a)²/12              (e^{bt} − e^{at})/((b−a)t)
Bernoulli(θ)      θ          θ(1−θ)                 (1−θ) + θe^t
Poisson(λ)        λ          λ                      exp(λ(e^t − 1))
Normal(µ, σ²)     µ          σ²                     exp(µt + σ²t²/2)
Exponential(λ)    1/λ        1/λ²                   (1 − t/λ)^{−1}
Gamma(k, λ)       k/λ        k/λ²                   (λ/(λ−t))^k
Beta(a, b)        a/(a+b)    ab/((a+b)²(a+b+1))     1 + Σ_{k=1}^∞ (∏_{r=0}^{k−1} (a+r)/(a+b+r)) t^k /k!

Cauchy The Cauchy distribution, denoted by Cauchy(a, b), is a continuous dis-


tribution with the following density,
p_{a,b}(x) = 1 / (πb [1 + ((x − a)/b)²]), b > 0.

The parameter a is called the location parameter and b is called the scale parame-
ter. Cauchy(0, 1) is called the standard Cauchy distribution, which coincides with
Student’s t-distribution with one degree of freedom.

• The Cauchy distribution is a heavy-tailed distribution. It does not have any
finite moment, so in particular sample means of Cauchy draws do not converge
(see the simulation sketch after this list). Its mode and median are well defined
and are both equal to a.

• When U and V are two independent standard normal random variables, then
the ratio U/V has the standard Cauchy distribution.

• Like the normal distribution, the Cauchy distribution is stable, i.e., if X1 , X2 , X are i.i.d.
Cauchy, then for any constants a1 and a2 , the random variable a1 X1 + a2 X2
has the same distribution as cX for some constant c.
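
The simulation sketch referred to above (illustrative code) shows the practical consequence of the missing moments: running sample means of standard Cauchy draws never settle down. Indeed, by stability, the sample mean of n i.i.d. standard Cauchy variables is again standard Cauchy.

```python
import numpy as np

rng = np.random.default_rng(3)
z = rng.standard_cauchy(size=1_000_000)

# Running sample means do not converge (compare with the LLN for finite-mean
# distributions): the mean of n standard Cauchy draws is again standard Cauchy.
for n in (10**2, 10**4, 10**6):
    print(n, z[:n].mean())
```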

Multinomial The multinomial distribution generalizes the binomial distribution
to describe more than two categories. Let X = (X1 , . . . , Xm )′. For the experiment
of tossing a coin n times (m = 2), X would take the value (k, n − k)′, i.e., k heads and
n − k tails. For the experiment of rolling a die n times, X would take values
(x1 , . . . , xm ) with Σ_{k=1}^m x_k = n. The multinomial density is given by

p(x1 , . . . , xm ; p1 , . . . , pm ) = (n!/(x1 ! · · · xm !)) p1^{x1} · · · pm^{xm} , x_k ∈ {0, 1, . . . , n}, Σ_{k=1}^m x_k = n,

where the parameter p_k , k = 1, . . . , m, is the probability of obtaining the k-th outcome in
each coin toss or die roll. When m = 2, the multinomial distribution reduces
to the binomial distribution. The continuous analogue of the multinomial distribution is the
multivariate normal distribution.

4.3 Transformed Random Variables


In this section, we study three commonly used techniques to derive the distributions
of transformed random variables Y = g(X), given the distribution of X. We denote
by FX the distribution function of X.

4.3.1 Distribution Function Technique

By the definition of distribution function, we may directly calculate FY (y) = P(Y ≤


y) = P(g(X) ≤ y).

Example 4.3.1 Let X ∼ Uniform[0, 1] and Y = − log(1 − X). It is obvious that
Y ≥ 0 a.s. Thus, for y ≥ 0, the distribution function of Y is given by

FY (y) = P(− log(1 − X) ≤ y)
       = P(X ≤ 1 − e^{−y} )
       = 1 − e^{−y} ,

since FX (x) = x for x ∈ [0, 1]. FY (y) = 0 for y < 0. Note that Y ∼ Exponential(1).
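
This is the inverse-transform method in miniature. A quick Monte Carlo check (illustrative code) compares the empirical CDF of Y = − log(1 − X) with 1 − e^{−y}:

```python
import numpy as np

rng = np.random.default_rng(4)
u = rng.uniform(size=1_000_000)
y = -np.log(1.0 - u)                      # should be Exponential(1)

for y0 in (0.5, 1.0, 2.0):
    print(y0, (y <= y0).mean(), 1 - np.exp(-y0))   # empirical vs. exact CDF
```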

Example 4.3.2 Let Xi be independent random variables with distribution function


Fi , i = 1, . . . , n. Then the distribution of Y = max{X1 , . . . , Xn } is given by

FY (y) = P( ∩_{i=1}^n {Xi ≤ y} ) = ∏_{i=1}^n P(Xi ≤ y) = ∏_{i=1}^n Fi (y).

Example 4.3.3 Let X = (X1 , X2 )′ be a random vector with distribution P and
density p(x1 , x2 ) with respect to measure µ. Then the distribution of Y = X1 + X2
is given by

FY (y) = P{X1 + X2 ≤ y}
       = P{(x1 , x2 )|x1 + x2 ≤ y}
       = ∫_{−∞}^∞ ∫_{−∞}^{y−x2} p(x1 , x2 )µ(dx1 )µ(dx2 ).

4.3.2 MGF Technique


The moment generating function (MGF) uniquely determines distributions. When
MGF of Y = g(X) is easily obtained, we may identify the distribution of Y by
writing the MGF into a form that corresponds to some particular distribution. For
example, if (Xi ) are independent random variables with MGFs mi , then the MGF of
Y = Σ_{i=1}^n Xi is given by

m(t) = E e^{t(X1 +···+Xn )} = ∏_{i=1}^n mi (t).

Example 4.3.4 Let Xi ∼ Poisson(λi ) be independent over i. Then the MGF of
Y = Σ_{i=1}^n Xi is

m(t) = ∏_{i=1}^n exp(λi (e^t − 1)) = exp( (Σ_{i=1}^n λi )(e^t − 1) ).

This suggests that Y ∼ Poisson(Σ_i λi ).

Example 4.3.5 Let Xi ∼ N(µi , σi²) be independent over i. Then the MGF of
Y = Σ_{i=1}^n ci Xi is

m(t) = ∏_{i=1}^n exp(ci µi t + ci² σi² t²/2) = exp( t Σ_{i=1}^n ci µi + (t²/2) Σ_{i=1}^n ci² σi² ).

This suggests that Y ∼ N(Σ_i ci µi , Σ_{i=1}^n ci² σi²).

4.3.3 Change-of-Variable Transformation


If the transformation function g is one-to-one, we may find the density of Y = g(X)
from that of X by the change-of-variable transformation. Let g = (g1 , . . . , gn )′
and x = (x1 , . . . , xn )′. And let PX and PY denote the distributions of X and Y ,
respectively. Assume PX and PY admit density pX and pY with respect to µ, the
counting or the Lebesgue measure on Rn .
For any B ∈ B(Rⁿ), we define A = g^{−1}(B). We have A ∈ B(Rⁿ) since g is measurable.
It is clear that {X ∈ A} = {Y ∈ B}. We therefore have
Z
PY (B) = PX (A) = pX (x)µ(dx).
A

Discrete Variables If µ is counting measure, i.e., X and Y are discrete random


variables, we have
∫_A pX (x)µ(dx) = Σ_{x∈A} pX (x) = Σ_{y∈B} pX (g^{−1}(y)).

Hence the density pY of Y is given by

pY (y) = pX (g −1 (y)).

Continuous Variables If µ is the Lebesgue measure (i.e., X and Y are continuous
random variables) and g is differentiable, we use the change-of-variable formula to
obtain

∫_A pX (x)dx = ∫_B pX (g^{−1}(y)) |det ġ(g^{−1}(y))|^{−1} dy,

where ġ is the Jacobian matrix of g, i.e., the matrix of first partial derivatives
[∂gi /∂xj ]. Then we obtain the density of Y ,

pY (y) = pX (g^{−1}(y)) |det ġ(g^{−1}(y))|^{−1} .

Example 4.3.6 Suppose we have two random variables X1 and X2 with joint density

p(x1 , x2 ) = 4x1 x2 if 0 < x1 , x2 < 1, and p(x1 , x2 ) = 0 otherwise.

Define Y1 = X1 /X2 and Y2 = X1 X2 . The problem is to obtain the joint density of


(Y1 , Y2 ) from that of (X1 , X2 ). First note that the inverse transformation is

x1 = (y1 y2 )1/2 and x2 = (y2 /y1 )1/2 .

Let X = {(x1 , x2 )|0 < x1 , x2 < 1} denote the support of the joint density of (X1 , X2 ).
Then the support of the joint density of (Y1 , Y2 ) is given by Y = {(y1 , y2 )|y1 , y2 >
0, y1 y2 < 1, y2 < y1 }. Then
|det ġ(x)| = | det [ 1/x2   −x1 /x2² ; x2   x1 ] | = x1 /x2 + x1 /x2 = 2x1 /x2 = 2y1 .

Hence the joint density of (Y1 , Y2 ) is given by

p(y1 , y2 ) = 4(y1 y2 )^{1/2} (y2 /y1 )^{1/2} / (2y1 ) = 2y2 /y1 .
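
A Monte Carlo sanity check of this derivation (illustrative code; the evaluation point (y1 , y2 ) = (1.5, 0.5), which lies in the support Y, is arbitrary):

```python
import numpy as np

rng = np.random.default_rng(5)
n = 4_000_000

# If U ~ Uniform(0, 1), then sqrt(U) has density 2x on (0, 1); two independent
# copies give the joint density 4 * x1 * x2 of Example 4.3.6.
x1 = np.sqrt(rng.uniform(size=n))
x2 = np.sqrt(rng.uniform(size=n))
y1, y2 = x1 / x2, x1 * x2

# Estimate the density of (Y1, Y2) near (1.5, 0.5) from a small rectangle
# and compare it with the derived formula 2 * y2 / y1.
h = 0.02
cell = (np.abs(y1 - 1.5) < h) & (np.abs(y2 - 0.5) < h)
print(cell.mean() / (2 * h) ** 2, 2 * 0.5 / 1.5)
```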

4.4 Multivariate Normal Distribution

4.4.1 Introduction
Definition 4.4.1 (Multivariate Normal) A random vector X = (X1 , . . . , Xn )′ is
said to be multivariate normally distributed if for all a ∈ Rⁿ, a′X has a univariate
normal distribution.

Let Z = (Z1 , . . . , Zn )′ be an n-dimensional random vector, where (Zi ) are i.i.d.
N (0, 1). We have EZ = 0 and var(Z) = In . For all a ∈ Rⁿ, we have

E e^{it(a′Z)} = ∏_{k=1}^n E e^{ita_k Z_k} = ∏_{k=1}^n φZ (a_k t) = ∏_{k=1}^n e^{−a_k² t²/2} = e^{−(t²/2) Σ_{k=1}^n a_k²},

which is the characteristic function of an N (0, Σ_{k=1}^n a_k²) random variable. Hence Z is
multivariate normal. We may write Z ∼ N (0, In ), and call it standard multivariate
normal.
Using similar argument, we can show that X is multivariate normal if it can be
written as
X = µ + Σ1/2 Z,
where Z is standard multivariate normal, µ is an n-vector, and Σ is a symmetric
and positive definite matrix. It is easy to see that EX = µ and var(X) = Σ. We
write X ∼ N (µ, Σ).

Characteristic Function for Random Vectors For a random vector X, the
characteristic function may be defined as φX (t) = E exp(it′X), where t ∈ Rⁿ. The
characteristic function of Z (defined above) is obviously

φZ (t) = exp(−t′t/2).

Let X ∼ N (µ, Σ). It follows that

φX (t) = E e^{it′X} = e^{it′µ} φZ (Σ^{1/2} t) = exp(it′µ − t′Σt/2).

Joint Density The joint density of Z is given by

p(z) = ∏_{i=1}^n p(zi ) = (2π)^{−n/2} exp(−(1/2) Σ_{i=1}^n zi²) = (2π)^{−n/2} exp(−z′z/2).

The Jacobian matrix of the affine transformation X = µ + Σ^{1/2} Z is Σ^{1/2}, hence

p(x) = (2π)^{−n/2} |Σ|^{−1/2} exp(−(1/2)(x − µ)′Σ^{−1}(x − µ)).

Note that | · | denotes the determinant, and that, for a square matrix A, we have |A²| =
|A|².
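
This representation is also how N (µ, Σ) samples are generated in practice. The sketch below (illustrative code) uses a Cholesky factor L with LL′ = Σ in place of the symmetric square root Σ^{1/2}; this is equally valid since var(LZ) = LL′ = Σ.

```python
import numpy as np

rng = np.random.default_rng(6)
mu = np.array([1.0, -2.0])
Sigma = np.array([[2.0, 0.6],
                  [0.6, 1.0]])

L = np.linalg.cholesky(Sigma)          # L L' = Sigma
z = rng.normal(size=(2, 500_000))      # standard multivariate normal draws
x = mu[:, None] + L @ z                # X = mu + L Z ~ N(mu, Sigma)

print(x.mean(axis=1))                  # close to mu
print(np.cov(x))                       # close to Sigma
```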

Remarks:

• A vector of univariate normal random variables is not necessarily a multivariate
normal random vector. A counterexample is (X, Y )′, where X ∼ N (0, 1) and
Y = X if |X| > c and Y = −X if |X| < c, with c ≈ 1.54 chosen so that
cov(X, Y ) = 0.

• If Σ is singular, then there exists some a ∈ Rn such that var(a0 X) = a0 Σa = 0.


This implies that X is random only on a subspace of Rn . We may say that
the joint distribution of X is degenerate in this case.

4.4.2 Marginals and Conditionals


Throughout this section, let X ∼ N (µ, Σ).

Lemma 4.4.2 (Affine Transformation) If Y = AX + b, then Y ∼ N (Aµ + b, AΣA′).

Proof: Exercise. (Hint: use c.f. arguments.)

To introduce marginal distributions, we partition X conformably into

X = [X1 ; X2 ] ∼ N ( [µ1 ; µ2 ], [Σ11 Σ12 ; Σ21 Σ22 ] ),

where X1 ∈ R^{n1} and X2 ∈ R^{n2}.

Marginal Distribution Apply Lemma 4.4.2 with A = (In1 , 0) and b = 0, we


have X1 ∼ N (µ1 , Σ11 ). In other words, the marginal distributions of a multivariate
normal distribution are also multivariate normal.

Lemma 4.4.3 (Independence) X1 and X2 are independent if and only if Σ12 = 0.

Proof: The “only if” part is obvious. If Σ12 = 0, then Σ is block diagonal,

Σ = [Σ11 0 ; 0 Σ22 ].

Hence

Σ^{−1} = [Σ11^{−1} 0 ; 0 Σ22^{−1}], and |Σ| = |Σ11 | · |Σ22 |.

Then the joint density of x1 and x2 can be factored as

p(x) = p(x1 , x2 ) = (2π)^{−n/2} |Σ|^{−1/2} exp(−(1/2)(x − µ)′Σ^{−1}(x − µ))
     = (2π)^{−n1/2} |Σ11 |^{−1/2} exp(−(1/2)(x1 − µ1 )′Σ11^{−1}(x1 − µ1 ))
       · (2π)^{−n2/2} |Σ22 |^{−1/2} exp(−(1/2)(x2 − µ2 )′Σ22^{−1}(x2 − µ2 ))
     = p(x1 )p(x2 ).

Hence X1 and X2 are independent.

Theorem 4.4.4 (Conditional Distribution) The conditional distribution of X1
given X2 is N (µ1|2 , Σ11|2 ), where

µ1|2 = µ1 + Σ12 Σ22^{−1} (X2 − µ2 ),
Σ11|2 = Σ11 − Σ12 Σ22^{−1} Σ21 .

Proof: First note that

[X1 − Σ12 Σ22^{−1} X2 ; X2 ] = [I −Σ12 Σ22^{−1} ; 0 I] [X1 ; X2 ].

Since

[I −Σ12 Σ22^{−1} ; 0 I] [Σ11 Σ12 ; Σ21 Σ22 ] [I 0 ; −Σ22^{−1} Σ21 I] = [Σ11 − Σ12 Σ22^{−1} Σ21 0 ; 0 Σ22 ],

X1 − Σ12 Σ22^{−1} X2 and X2 are independent. We write

X1 = A1 + A2 , where A1 = X1 − Σ12 Σ22^{−1} X2 and A2 = Σ12 Σ22^{−1} X2 .

Since A1 is independent of X2 , the conditional distribution of A1 given X2 is the
unconditional distribution of A1 , which is

N ( µ1 − Σ12 Σ22^{−1} µ2 , Σ11 − Σ12 Σ22^{−1} Σ21 ).

A2 may be treated as a constant given X2 , which only shifts the mean of the
conditional distribution of X1 given X2 . We have thus obtained the desired result.

From the above result, we may see that the conditional mean of X1 given X2 is
linear in X2 , and that the conditional variance of X1 given X2 does not depend on
X2 . Moreover, the conditional variance of X1 given X2 is no greater than the unconditional
variance of X1 , in the sense that Σ11 − Σ11|2 is a positive semi-definite matrix.
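
Theorem 4.4.4 is easy to verify by simulation; the sketch below (illustrative code, with an arbitrary bivariate example and µ = 0) conditions on a thin band around X2 = 1 and compares the slice mean and variance of X1 with µ1|2 and Σ11|2 .

```python
import numpy as np

rng = np.random.default_rng(7)
Sigma = np.array([[2.0, 0.8],
                  [0.8, 1.0]])

L = np.linalg.cholesky(Sigma)
x1, x2 = L @ rng.normal(size=(2, 2_000_000))

x2_0 = 1.0
band = np.abs(x2 - x2_0) < 0.01
print(x1[band].mean(), 0.8 / 1.0 * x2_0)       # mu_{1|2} = S12 S22^{-1} x2
print(x1[band].var(), 2.0 - 0.8**2 / 1.0)      # S_{11|2} = S11 - S12 S22^{-1} S21
```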

4.4.3 Quadratic Forms


Let X be an n-by-1 random vector and A be an n-by-n deterministic matrix, the
quantity X 0 AX is called the quadratic form of X with respect to A. In this section
we consider the distribution of the quadratic forms of X when X is multivariate
normal. First we introduce a few important distributions that are related with the
quadratic forms of normal vectors.

chi-square distribution If Z = (Z1 , . . . , Zn )′ ∼ N (0, In ), it is well known that

Z′Z = Σ_{i=1}^n Zi² ∼ χ²_n ,

which is called the chi-square distribution with n degrees of freedom.

Student t distribution Let T = Z/√(V /m), where Z ∼ N (0, 1), V ∼ χ²_m , and Z
and V are independent; then T ∼ t_m , the Student t distribution with m degrees of
freedom.

F distribution Let F = (V1 /m1 )/(V2 /m2 ), where V1 and V2 are independent χ²_{m1} and χ²_{m2} ,
respectively. Then F ∼ F_{m1 ,m2} , the F distribution with degrees of freedom m1 and
m2 .

Theorem 4.4.5 Let X ∼ N (0, Σ), where Σ is nonsingular. Then

X′Σ^{−1}X ∼ χ²_n .

Proof: Note that Σ−1/2 X ∼ N (0, In ).

To get to the next theorem, recall that a square matrix P is a projection if and only if
P² = P.¹ If, in addition, P is symmetric, then P is an orthogonal projection.

Theorem 4.4.6 Let Z ∼ N (0, In ) and P be an m-dimensional orthogonal projec-


tion in Rn , then we have
Z′P Z ∼ χ²_m .

Proof: It is well known that P may be decomposed into

P = Hm Hm′ ,

where Hm is an n × m matrix with orthonormal columns, i.e., Hm′ Hm = Im . Note that Hm′ Z ∼
N (0, Im ) and Z′P Z = (Hm′ Z)′(Hm′ Z).

Theorem 4.4.7 Let Z ∼ N (0, In ), and let A and B be deterministic matrices; then
A′Z and B′Z are independent if and only if A′B = 0.

Proof: Let C = (A, B). Without loss of generality, we assume that C is of full rank
(if it is not, throw away the linearly dependent columns). We have

C′Z = [A′Z ; B′Z] ∼ N ( 0, [A′A A′B ; B′A B′B] ).

It is now clear that A′Z and B′Z are independent if and only if the covariance A′B
is null.

¹ Matrices that satisfy this property are said to be idempotent.

It is immediate that we have

Corollary 4.4.8 Let Z ∼ N (0, In ), and let P and Q be orthogonal projections such
that P Q = 0; then Z′P Z and Z′QZ are independent.

Proof: Since P Q = 0, P Z and QZ are independent. Hence the
independence of Z′P Z = (P Z)′(P Z) and Z′QZ = (QZ)′(QZ).

Using the above results, we can easily prove

Theorem 4.4.9 Let Z ∼ N (0, In ), and let P and Q be orthogonal projections of
dimensions m1 and m2 , respectively. If P Q = 0, then

(Z′P Z/m1 ) / (Z′QZ/m2 ) ∼ F_{m1 ,m2} .

Finally, we prove a useful theorem.

Theorem 4.4.10 Let (Xi ) be i.i.d. N (µ, σ²), and define

X̄n = (1/n) Σ_{i=1}^n Xi ,
Sn² = (1/(n − 1)) Σ_{i=1}^n (Xi − X̄n )².

We have

(a) X̄n ∼ N (µ, σ²/n);
(b) (n − 1)Sn²/σ² ∼ χ²_{n−1} ;
(c) X̄n and Sn² are independent;
(d) √n(X̄n − µ)/Sn ∼ t_{n−1} .

Proof: Let X = (X1 , . . . , Xn )′ and let ι be an n × 1 vector of ones; then X ∼ N (µι, σ²In ).
(a) follows from X̄n = (1/n)ι′X. Define Pι = ιι′/n = ι(ι′ι)^{−1}ι′, which is the orthogonal
projection on the span of ι. Then we have

Σ_{i=1}^n (Xi − X̄n )² = (X − ιι′X/n)′(X − ιι′X/n) = X′(I − Pι )X.

Hence

(n − 1)Sn²/σ² = ((X − µι)/σ)′ (In − Pι ) ((X − µι)/σ).

(b) follows from the fact that (X − µι)/σ ∼ N (0, In ) and that (In − Pι ) is an (n − 1)-
dimensional orthogonal projection. To prove (c), note that X̄n = (1/n)ι′Pι X and
Sn² = (1/(n − 1))((I − Pι )X)′((I − Pι )X), and that Pι X and (I − Pι )X are independent
by Theorem 4.4.7. Finally, (d) follows from

√n(X̄n − µ)/Sn = [√n(X̄n − µ)/σ] / √[ ((n − 1)Sn²/σ²) / (n − 1) ].
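
Parts (a)–(d) can be checked by simulation. The sketch below (illustrative code, requiring numpy and scipy) verifies the near-zero correlation between X̄n and Sn² and compares simulated quantiles of the t statistic with those of t_{n−1}.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(8)
mu, sigma, n, reps = 1.0, 2.0, 5, 200_000

x = rng.normal(mu, sigma, size=(reps, n))
xbar = x.mean(axis=1)
s2 = x.var(axis=1, ddof=1)
t = np.sqrt(n) * (xbar - mu) / np.sqrt(s2)

print(np.corrcoef(xbar, s2)[0, 1])              # ~0, consistent with (c)
print(np.quantile(t, [0.95, 0.975]))            # simulated quantiles, (d)
print(stats.t.ppf([0.95, 0.975], df=n - 1))     # exact t_{n-1} quantiles
```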

4.5 Exercises
1. Derive the characteristic function of the distribution with density

p(x) = exp(−|x|)/2.

2. Let X and Y be independent standard normal variables. Find the density of


a random variable defined by
U = X/Y .
[Hint: Let V = Y and first find the joint density of U and V .]
3. Let X and Y have a bivariate normal distribution with mean (1, 2)′ and variance
matrix [1 1 ; 1 2].
(a) Find a constant α∗ such that Y − α∗ X is independent of X. Show that
var(Y − αX) ≥ var(Y − α∗ X) for any constant α.
(b) Find the conditional distribution of X + Y given X − Y .
(c) Obtain E(X|X + Y ).
4. Let X = (X1 , . . . , Xn )′ be a random vector with mean µι and variance Σ,
where µ is a scalar, ι is the n-vector of ones, and Σ is an n by n symmetric
matrix. We define

X̄n = (1/n) Σ_{i=1}^n Xi and Sn² = (1/(n − 1)) Σ_{i=1}^n (Xi − X̄n )².
Consider the following assumptions:
(A1) X has multivariate normal distribution,

(A2) Σ = σ 2 I,
(A3) µ = 0.
We claim:
(a) X̄n and Sn² are uncorrelated.
(b) E(X̄n ) = µ.
(c) E(Sn² ) = σ².
(d) X̄n ∼ N (µ, σ²/n).
(e) (n − 1)Sn²/σ² ∼ χ²_{n−1} .
(f) √n(X̄n − µ)/Sn ∼ t_{n−1} .
What assumptions in (A1), (A2), and (A3) are needed for each of (a) – (f) to
hold. Prove (a) – (f) using the assumptions you specified.

Appendix: Projection Matrix


We first review some key concepts in linear algebra that are essential for under-
standing projections.

Vector space: A nonempty set X ⊂ Rn is a vector space if x + y ∈ X and cx ∈ X


for all x, y ∈ X and scalar c. Since Rn itself is a vector space, we may call X a
vector subspace. For example, in R3 , lines and planes are vector subspaces.

Span: The span of a set of vectors is the set of all linear combinations of the
vectors. For example, the x-y plane is spanned by (1, 0) and (0, 1).

Range: Given a matrix A, the range of A is defined as the span of its columns,

R(A) = {y|y = Ax, for some x}.

The orthogonal complement of R(A), denoted by R(A)⊥ , is defined by

R(A)⊥ = {x|x0 y = 0 for all y ∈ R(A)}.

Null space: The null space of A is the set of all column vectors x such that
Ax = 0,
N (A) = {x|Ax = 0}.
It can be easily shown that for any matrix A, R(A)⊥ = N (A0 ).

Basis: An independent subset of a vector space X that spans X is called a basis
of X . Independence here means that any vector in the set cannot be written as a
linear combination of other vectors in the set. The number of vectors in a basis of
X is the dimension of X .

Positive semidefiniteness: We denote A ≥ 0 if A is positive semidefinite, that
is, x′Ax ≥ 0 for all x. We denote A ≥ B if A − B is positive semidefinite, and A > 0
denotes positive definiteness.

Projection: In the space Rⁿ, an n-by-n matrix P is called a projection if P² = P ,


i.e., if P is idempotent. A projection matrix is a linear transformation that maps
any element in Rn to a vector subspace.

Example 4.5.1 Let n = 3, and

P = [1 0 0 ; 0 1 0 ; 0 0 0].

We can check that P is indeed a projection in R³. It projects any vector in R³ to a
two-dimensional subspace, the x-y plane, since

P (x, y, z)′ = (x, y, 0)′.

A projection P is orthogonal if P is symmetric. The symmetry ensures that for all
x, y ∈ Rⁿ,

(P x)′(I − P )y = 0.

The connection between orthogonality and symmetry of P is now clear.
The eigenvalues of an orthogonal projection P are either 1 or 0. To see this, suppose
that x is an eigenvector with eigenvalue λ, so that P x = λx. Then

λx = P x = P (P x) = P (λx) = λP x = λ²x,

which implies λ = λ².
An orthogonal projection P has the following eigen-decomposition,

P = QΛQ0 ,

where Λ is a diagonal matrix with the eigenvalues (1 or 0) on the diagonal, and Q is
orthonormal, i.e., Q′Q = I. The i-th column of Q is the eigenvector corresponding
to the i-th eigenvalue on the diagonal. Suppose there are n1 ones and n2 zeros on
the diagonal of Λ. We may conveniently order the eigenvalues and eigenvectors such
that

Λ = [I_{n1} 0 ; 0 0_{n2}], Q = (Q_{n1} Q_{n2}),

where the subscripts n1 and n2 denote the numbers of columns. Then we may represent
P by

P = Q_{n1} Q_{n1}′ .
It is now clear that the range of P has n1 dimensions. In other words, P is n1-
dimensional. And since I − P = Q_{n2} Q_{n2}′ , (I − P ) is an n2-dimensional orthogonal
projection. P is an orthogonal projection on the subspace spanned by the eigenvectors
corresponding to the unit eigenvalues, and I − P is an orthogonal projection on the
subspace spanned by the eigenvectors corresponding to the zero eigenvalues.
Since the eigenvalues of an orthogonal projection P are either 1 or 0, P is positive
semidefinite, i.e., P ≥ 0. We also have A′P A ≥ 0, since for any x, x′A′P Ax =
(Ax)′P (Ax) ≥ 0.

Chapter 5

Introduction to Statistics

5.1 General Settings


The fundamental postulate of statistical analysis is that the observed data are real-
ized values of a vector of random variables defined on a common probability space.
This postulate is not verifiable. It is a philosophical view of the world that we choose
to take, and we call it the probabilistic view. An alternative view would be that the
seemingly random data are generated from a deterministic but chaotic law. We only
consider the probabilistic view, which is the mainstream view among economists.
Let X = (X1 , . . . , Xn ) be variables of interest, where for each i, Xi may be a vector.
The objective of statistical inference is to study the joint distribution of X based
on the observed sample.

The First Example: For example, we may study the relationship between indi-
vidual income (income) and the characteristics of the individual such as education
level (edu), work experience (expr), gender, etc. The variables of interest may then
be Xi = (incomei , edui , expri , genderi ). We may reasonably postulate that (Xi )
are independently and identically distributed (i.i.d.). Hence the study of the joint
distribution of X reduces to that of the joint distribution of Xi . To achieve this,
we take a sample of the whole population, and observe (Xi , i = 1, . . . , n), where i
denotes individuals. In this example in particular, we may focus on the conditional
distribution of income given edu, expr, and gender.

The Second Example: For another example, in macroeconomics, we may be


interested in the relationship among government expenditure (gt ), GDP growth
(yt ), inflation (πt ), and unemployment (ut ). The variables of interest may be Xt =

(gt , yt , πt , ut ). One objective of empirical analysis, in this example, may be
to study the conditional distribution of unemployment given past observations on
government expenditure, GDP growth, inflation, and unemployment itself. The difficulty
with this example is, first, that the i.i.d. assumption on Xt is untenable, and second,
that we can observe each Xt only once. In other words, an economic data generating
process is nonindependent and time-irreversible. It is clear that the statistical study
would go nowhere unless we impose (sometimes strong) assumptions on the evolution
of Xt , stationarity for example.
In this chapter, for simplicity, we have the first example in mind. In most cases, we
assume that X1 , . . . , Xn are i.i.d. with a distribution Pθ that belongs to a family of
distributions {Pθ |θ ∈ Θ} where θ is called parameter and Θ a parameter set. In this
course we restrict θ to be finite-dimensional. This is called the parametric approach
to statistical analysis. The nonparametric approach refers to the case where we do
not restrict the distribution to any family of distributions, which is in a sense to
allow θ to be infinite-dimensional. In this course we mainly consider the parametric
approach.

5.2 Statistic
Recall that random variables are mappings from the sample space to real numbers.
We say that the random vector X = (X1 , . . . , Xn )′ is a mapping from the sample
space to a state space X , which is usually Rⁿ in this text. We may write X : Ω → X .
Now we introduce

Definition 5.2.1 (Statistic) A statistic τ : X → T is a real-valued (or vector-


valued) measurable function τ (X) of a random sample X = (X1 , . . . , Xn ).

Note that the statistic is a random variable (or vector) itself.


Statistical inference consists of two problems: estimation of and hypothesis testing
on θ. For the purpose of estimation, we need to construct a vector-valued statistic
called an estimator, τ : X → T , where T includes Θ. It is customary to denote an
estimator of θ by θ̂. The realized value of the estimator is called an estimate.
For the purpose of hypothesis testing, we need to construct a statistic called test
statistic, τ : X → T , where T is a subset of R. A hypothesis divides Θ into two
disjoint and exhaustive subsets. We rely on the value of τ to decide whether θ0 , the
true parameter, is in one of them.

Sufficient Statistic Let τ = τ (X) be a statistic, and P = {Pθ |θ ∈ Θ} be a family
of distributions of X.

Definition 5.2.2 (Sufficient Statistic) A statistic τ is sufficient for P (or more


precisely θ) if the conditional distribution of X given τ does not depend on θ.

The distribution of X can be any member of the family P. Therefore, the conditional
distribution of X given τ would depend on θ in general. τ is sufficient in the sense
that the distribution of X is uniquely determined by the value of τ . Bayesians may
interpret sufficiency as P(θ|X) = P(θ|τ (X)).
Sufficient statistics are useful in data reduction. It is less costly to infer θ from a
statistic τ than from X, since the former, being a function of the latter, is of lower
dimension. The sufficiency of τ guarantees that τ contains all information about θ
in X.

Example 5.2.3 Suppose that X ∼ N (0, σ 2 ) and τ = |X|. Conditional on τ = t,


X can take t or −t. Since the distribution of X is symmetric about the origin, each
has a conditional probability of 1/2, regardless of the value of σ 2 . The statistic τ is
thus sufficient.

Theorem 5.2.4 (Fisher-Neyman Factorization) A statistic τ = τ (X) is suffi-


cient if and only if there exist two functions f and g such that the density of X is
factorized as
pθ (x) = f (τ (x), θ)g(x).

This theorem implies that if two samples give the same value of a sufficient statistic,
then the MLEs based on the two samples yield the same estimate of the parameters.

Example 5.2.5 Let X1 , . . . , Xn be i.i.d. Poisson(λ). We may write the joint density
of X = (X1 , . . . , Xn ) as

p_λ (x) = e^{−nλ} λ^{x1 +···+xn} / ∏_{i=1}^n xi ! = f (τ (x), λ)g(x),

where τ (x) = Σ_{i=1}^n xi , f (t, λ) = exp(−nλ)λ^t , and g(x) = (∏_{i=1}^n xi !)^{−1}. Hence τ (x)
is sufficient for λ.

Example 5.2.6 Let X1 , . . . , Xn be i.i.d. N(µ, σ²). The joint density is

p_{µ,σ²}(x) = (2πσ²)^{−n/2} exp( −(1/2σ²) Σ_{i=1}^n (xi − µ)² )
            = (2πσ²)^{−n/2} exp( −(1/2σ²) Σ_{i=1}^n xi² + (µ/σ²) Σ_{i=1}^n xi − nµ²/(2σ²) ).

It is clear that τ (x) = (Σ_{i=1}^n xi , Σ_{i=1}^n xi²) is sufficient for (µ, σ²)′.

Minimal Sufficient Statistic A sufficient statistic is by no means unique. τ (x) =
(x1 , . . . , xn )′, for example, is always sufficient. Let τ and κ be two statistics, with τ
sufficient. It follows immediately from the Fisher-Neyman factorization theorem
that if τ = h(κ) for some function h, then κ is also sufficient. If h is a many-to-one
function, then τ provides further data reduction than κ. We call a sufficient statistic
minimal if it is a function of every sufficient statistic. A minimal sufficient statistic
thus achieves data reduction to the best extent.

Definition 5.2.7 (Exponential Family) The exponential family refers to the family of
distributions that have densities of the form

pθ (x) = exp[ Σ_{i=1}^m ai (θ)τi (x) + b(θ) ] g(x),

where m is a positive integer.

To emphasize the dependence on m, we may call the above family m-parameter expo-
nential family. By the factorization theorem, it is obvious that τ (x) = (τ1 (x), . . . , τm (x))0
is a sufficient statistic.
In the case of m = 1, let X1 , . . . , Xn be i.i.d. with density

pθ (xi ) = exp[ a(θ)τ (xi ) + b(θ) ] g(xi ).

Then the joint density of X = (X1 , . . . , Xn )′ is

pθ (x) = exp[ a(θ) Σ_{i=1}^n τ (xi ) + nb(θ) ] ∏_{i=1}^n g(xi ).

This implies that Σ_i τ (xi ) is a sufficient statistic.
The exponential family includes many distributions that are in frequent use. We
can easily deduce sufficient statistics for these distributions.

Example 5.2.8 (One-parameter exponential family) • Poisson(λ):

p_λ (x) = e^{−λ} λ^x /x! = e^{x log λ − λ} (1/x!).

• Bernoulli(θ):

pθ (x) = θ^x (1 − θ)^{1−x} = exp( x log(θ/(1 − θ)) + log(1 − θ) ).

Example 5.2.9 (Two-parameter exponential family) • N(µ, σ²):

p_{µ,σ²}(x) = (1/(√(2π)σ)) exp( −(x − µ)²/(2σ²) )
            = (1/√(2π)) exp( −x²/(2σ²) + µx/σ² − (µ²/(2σ²) + log σ) ).

• Gamma(α, β):

p_{α,β}(x) = (1/(Γ(α)β^α)) x^{α−1} e^{−x/β}
           = exp( (α − 1) log x − x/β − (log Γ(α) + α log β) ).

Remark on Bayesian Approach The Bayesian approach to probability is one of


the different interpretations of the concept of probability. Bayesians view probability
as an extension of logic that enables reasoning with uncertainty. Bayesians do not
reject or accept a hypothesis, but evaluate the probability of a hypothesis. To
achieve this, Bayesians specify some prior distribution p(θ), which is then updated
in the light of new relevant data by the Bayes’ rule,

p(θ|x) = p(θ) p(x|θ)/p(x),

where p(x) = ∫ p(x|θ)p(θ)dθ. Note that Bayesians treat θ as random; hence the
conditional-density notation p(θ|x), which is called the posterior density.

5.3 Estimation

5.3.1 Method of Moment


Let X1 , . . . , Xn be i.i.d. random variables with a common distribution Pθ , where
the parameter vector θ is to be estimated. And let x1 , . . . , xn be a realized sample.
We call the underlying distribution Pθ the population, the moments of which
we call population moments. Let f be a vector of measurable functions f(x) =
(f1 (x), . . . , fm (x))′; the f-population moments of Pθ are given by

Eθ f = ∫ f dPθ .

In contrast, we call the sample average of (f(xi )) the sample moments. Note that
the sample average may be regarded as the moment of the distribution that assigns
probability mass 1/n to each realization xi . This distribution is called the empir-
ical distribution, which we denote Pn . Obviously, the moments of the empirical
distribution equal the corresponding sample moments,

En f = ∫ f dPn = (1/n) Σ_{i=1}^n f(xi ).

The method of moments (MM) equates population moments to sample moments so
that the parameter vector θ may be solved for. In other words, the MM estimation
solves the following set of equations for the parameter vector θ,

Eθ f = En f. (5.1)

This set of equations are called the moment conditions.

Example 5.3.1 Let Xi be i.i.d. Poisson(λ). To estimate λ, we may solve the


following equation,
Eλ Xi = (1/n) Σ_{i=1}^n xi .

It is immediate that the MM estimator of λ is exactly x̄ = (1/n) Σ_{i=1}^n xi .

Example 5.3.2 Let Xi be i.i.d. N(µ, σ²). To estimate µ and σ², we may solve the
following system of equations,

E_{µ,σ²} X = (1/n) Σ_{i=1}^n xi ,
E_{µ,σ²} (X − µ)² = (1/n) Σ_{i=1}^n (xi − µ)².

This obtains

µ̂ = x̄ and σ̂² = (1/n) Σ_{i=1}^n (xi − x̄)².
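
A two-line numerical illustration of this example (hypothetical code; the true values µ = 3 and σ² = 4 are arbitrary):

```python
import numpy as np

x = np.random.default_rng(9).normal(3.0, 2.0, size=100_000)

mu_hat = x.mean()                        # first moment condition
sigma2_hat = np.mean((x - mu_hat) ** 2)  # second (central) moment condition
print(mu_hat, sigma2_hat)                # close to 3 and 4
```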

A Remark on GMM If the number of equations (moment conditions) in (5.1)


exceeds the number of parameters to be estimated, then the parameter θ is over-
identified. In such cases, we may use the generalized method of moments (GMM)

to estimate θ. The basic idea of GMM is to minimize some distance measure be-
tween the population moments and their corresponding sample moments. A popular
approach is to solve the following quadratic programming problem,

min d(θ; x)0 W d(θ; x),


θ∈Θ

where d(θ; x) = Eθ f − En f and W is a positive definite weighting matrix. The
detailed properties of GMM are beyond the scope of this text.

5.3.2 Maximum Likelihood


Let p(x, θ) be the density of the distribution Pθ . We use this notation, instead of
pθ (x), to emphasize that the density is a function of θ as well as that of x. We write
likelihood function as
p(θ, x) = p(x, θ).
The likelihood function and the density are essentially the same function. They
differ only in focus, which is evident in the ordering of the arguments. We may
interpret the likelihood function as a function of the parameter θ given a sample x.
It is intuitively appealing to assume that if θ = θ0 , the true parameter, then
the likelihood function p(θ, x) achieves the maximum. This is indeed the fundamen-
tal assumption of the maximum likelihood estimation (MLE), which is defined as
follows,

Definition 5.3.3 (MLE) The maximum likelihood estimator (MLE) of θ is given


by
θ̂M L = arg max p(θ, x).
θ∈Θ

Remark: Let τ be any sufficient statistic for the parameter θ. According to the
factorization theorem, we have p(x, θ) = f (τ (x), θ)g(x). Then θ̂M L maximizes
f (τ (x), θ) with respect to θ. Therefore, θ̂M L is always a function of τ (X). This
implies that if MLE is a sufficient statistic, then it is always minimal.

Log Likelihood It is often easier to maximize the logarithm of the likelihood


function,
`(θ; x) = log(p(θ, x)).
Since the log function is monotone increasing, maximizing log likelihood yields the
same estimates.

First Order Condition If the log likelihood function `(θ, x) is differentiable and
globally concave for all x, then the ML estimator can be obtained by solving the
first order condition (FOC),

∂`/∂θ (θ, x) = 0.

Note that s(θ, x) = ∂`/∂θ (θ, x) is called the score function.

Theorem 5.3.4 (Invariance Theorem) If θ̂ is an ML estimator of θ and π =


g(θ) be a function of θ, then g(θ̂) is an ML estimator of π.

Proof: If g is one-to-one, then

p(θ, x) = p(g^{−1}g(θ), x) = p*(g(θ), x).

Both θ̂ and π̂, the maximizer of p* over π = g(θ), maximize the likelihood function,
and it is obvious that θ̂ = g^{−1}(π̂). This implies g(θ̂) = π̂. If g is many-to-one,
π̂ = g(θ̂) still corresponds to the θ̂ that maximizes p(θ, x); any other value of π
would correspond to values of θ that result in a lower likelihood.

Example 5.3.5 (Bernoulli(θ)) Let (Xi , i = 1, . . . , n) be i.i.d. Bernoulli(θ), then


the log likelihood function is given by

`(θ, x) = (Σ_{i=1}^n xi ) log θ + (n − Σ_{i=1}^n xi ) log(1 − θ).

The FOC yields

θ̂^{−1} Σ_{i=1}^n xi − (1 − θ̂)^{−1} (n − Σ_{i=1}^n xi ) = 0,

which is solved to obtain θ̂ = x̄ = (1/n) Σ_{i=1}^n xi . Note that to estimate the variance of
Xi , we need to estimate v = θ(1 − θ), a function of θ. By the invariance theorem,
we obtain v̂ = θ̂(1 − θ̂).

Example 5.3.6 (N (µ, σ 2 )) Let Xi be i.i.d. N(µ, σ 2 ), then the log-likelihood func-
tion is given by

`(µ, σ², x) = −(n/2) log(2π) − (n/2) log σ² − (1/2σ²) Σ_{i=1}^n (xi − µ)².

Solving the FOC gives

µ̂ = x̄ and σ̂² = (1/n) Σ_{i=1}^n (xi − x̄)².

Note that the ML estimators are identical to the MM estimators.

Example 5.3.7 (Uniform[0, θ]) Let Xi be i.i.d. Uniform[0, θ]. Then

p(θ, x) = θ^{−n} ∏_{i=1}^n I_{0≤xi ≤θ} = θ^{−n} I{ min_{1≤i≤n} xi ≥ 0 } I{ max_{1≤i≤n} xi ≤ θ }.

It follows that θ̂ = max{x1 , . . . , xn }.
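
Numerically the MLE is just the sample maximum, as the following sketch illustrates (hypothetical code; note that max xi < θ always, so this estimator is biased downward — compare the UMVU correction in Example 5.3.17 below).

```python
import numpy as np

rng = np.random.default_rng(10)
theta = 2.5
x = rng.uniform(0, theta, size=1_000)

# The likelihood theta^{-n} I{max x_i <= theta} is decreasing in theta on
# [max x_i, infinity), so it is maximized at the sample maximum.
theta_ml = x.max()
print(theta_ml)        # slightly below the true theta = 2.5
```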

5.3.3 Unbiasedness and Efficiency


Let Pθ denote the probability measure in Ω corresponding to Pθ in X , and let Eθ
denote the expectation taken with respect to Pθ .

Definition 5.3.8 (Unbiasedness) An estimator T = τ (X) is unbiased if for all
θ ∈ Θ,

Eθ T = θ.

Unbiasedness is a desirable property. Loosely speaking, it means that “the estimator
is correct on average”. To describe how “varied” an estimator is,
we often use the mean squared error, which is defined as
MSE(T ) = Eθ (T − θ)2 .
We may decompose the MSE as
MSE(T ) = Eθ (T − Eθ T )2 + (Eθ T − θ)2 .
For an unbiased estimator T , the second term vanishes, then the MSE is equal to
the variance.
In general, MSE is a function of the unknown parameter θ and it is impossible to
find an estimator that has the smallest MSE for all θ ∈ Θ. However, if we restrict
our attention to the class of unbiased estimators, we may find an estimator that
enjoys the smallest variance (hence MSE) for all θ ∈ Θ. This property is known as
uniformly minimum variance unbiasedness (UMVU). More precisely, we have

Definition 5.3.9 (UMVU Estimator) An estimator T∗ = τ∗ (X) is called an
UMVU estimator if it satisfies

(1) T∗ is unbiased,

(2) Eθ (T∗ − θ)2 ≤ Eθ (T − θ)2 for any unbiased estimator T = τ (X).

5.3.4 Lehmann-Scheffé Theorem


The prominent Lehmann-Scheffé Theorem helps to find UMVU estimators. First,
we introduce some basic concepts in the decision-theoretic approach of statistical
estimation.
Let t be the realized value of T = τ (X), we define

Definition 5.3.10 (Loss Function) Loss function is any function `(t, θ) that as-
signs disutility to each pair of estimate t and parameter value θ.

Examples of Loss Function

• `(t, θ) = (t − θ)2 , squared error.

• `(t, θ) = |t − θ|, absolute error.

• `(t, θ) = cI{|t − θ| > }, fixed loss out of bound.

Definition 5.3.11 (Risk Function) For an estimator T = τ (X), the risk func-
tion is defined by
r(τ, θ) = Eθ `(T, θ).

It can be observed that the risk function is the expected loss of an estimator for each
value of θ. The risk functions corresponding to the loss functions in the above examples
are

Examples of Risk Function

• r(τ, θ) = Eθ (τ (X) − θ)2 , mean squared error.

• r(τ, θ) = Eθ |τ (X) − θ|, mean absolute error.

• r(τ, θ) = cPθ {|τ − θ| > }

In the decision-theoretic approach of statistical inference, estimators are constructed
by minimizing some appropriate loss or risk functions.

Definition 5.3.12 (Minimax Estimator) An estimator τ∗ is called minimax if


sup r(τ∗ , θ) ≤ sup r(τ, θ)
θ∈Θ θ∈Θ

for every other estimator τ .

Note that supθ∈Θ r(τ, θ) measures the maximum risk of an estimator τ .

Theorem 5.3.13 (Rao-Blackwell Theorem) Suppose that the loss function `(t, θ)
is convex in t and that S is a sufficient statistic. Let T = τ (X) be an estimator for
θ with finite mean and risk. If we define T∗ = Eθ (T |S) and write T∗ = τ∗ (X), then
we have
r(τ∗ , θ) ≤ r(τ, θ).

Proof: Since `(t, θ) is convex in t, Jensen’s inequality gives


`(T∗ , θ) = `(Eθ (T |S), θ) ≤ Eθ (`(T, θ)|S).
We conclude by taking expectations on both sides and applying the law of iterated
expectations.

Note that Eθ (T |S) is not a function of θ, since S is sufficient.

Definition 5.3.14 (Complete Statistic) A statistic T is complete if Eθ f (T ) = 0


for all θ ∈ Θ implies f = 0 a.s. Pθ .

Example 5.3.15 Let (Xi , i = 1, . . . , n) be i.i.d. Uniform(0, θ), and let S =
maxi Xi . S is sufficient and complete. To see the completeness, note that

Pθ (S ≤ s) = (Pθ (Xi ≤ s))^n = (s/θ)^n .

The density of S is thus

pθ (s) = (n s^{n−1} /θ^n ) I{0 ≤ s ≤ θ}.
Eθ f (S) = 0 for all θ implies

∫_0^θ s^{n−1} f (s)ds = 0 for all θ.

This is only possible when f = 0.

Theorem 5.3.16 (Lehmann-Scheffé Theorem) If S is complete and sufficient
and T = τ (X) is an unbiased estimator of g(θ), then f (S) = Eθ (T |S) is a UMVU
estimator.

The proof of Lehmann-Scheffé Theorem involves applying Rao-Blackwell Theorem


with the squared loss function `(t, θ) = (t − θ)2 .
Note that f (S) is a unique unbiased estimator. Suppose there exists another un-
biased estimator f˜(S), then Eθ (f (S) − f˜(S)) = 0. But the completeness of S
guarantees that f = f˜.
Given a complete and sufficient statistic, it is then straightforward to obtain a
UMVU estimator. What we have to do is to take any unbiased estimator T and
obtain the desired UMVU estimator as T ∗ = Eθ (T |S).

Example 5.3.17 We continue with the previous example and proceed to find a
UMVU estimator. Let T = 2X1 , which is an unbiased estimator for θ. Suppose
S = s, then X1 can take s with probability 1/n, since every member of (Xi , i =
1, . . . , n) is equally likely to be the maximum. When X1 6= s, which is of probability
(n − 1)/n, X1 is uniformly distributed on (0, s). Thus we have

Eθ (T |S = s) = 2Eθ (X1 |S = s) = 2( (1/n)s + ((n − 1)/n)(s/2) ) = ((n + 1)/n) s.

The UMVU estimator of θ is thus obtained as

T* = ((n + 1)/n) max_{1≤i≤n} Xi .
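
A simulation comparison of the two unbiased estimators in this example (illustrative code): both have mean close to θ, but the Rao-Blackwellized estimator has variance θ²/(n(n + 2)), far below the variance θ²/3 of T = 2X1 .

```python
import numpy as np

rng = np.random.default_rng(11)
theta, n, reps = 1.0, 10, 200_000

x = rng.uniform(0, theta, size=(reps, n))
t_umvu = (n + 1) / n * x.max(axis=1)   # E(T | S), the UMVU estimator
t_raw = 2 * x[:, 0]                    # the initial unbiased estimator 2 X_1

print(t_umvu.mean(), t_umvu.var())     # ~1.0 and ~1/120
print(t_raw.mean(), t_raw.var())       # ~1.0 and ~1/3
```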

5.3.5 Efficiency Bound


It is generally not possible to construct an UMVU estimator. However, we show in
this section that there exists a lower bound for the variance of unbiased estimators,
which we call efficiency bound. If an unbiased estimator achieves the efficiency
bound, we say that it is an efficient estimator.
Let `(θ, x) be the log-likelihood function. Recall that we have defined the score function
s(θ, x) = ∂`/∂θ (θ, x). We further define:

(a) Hessian: h(θ, x) = ∂²`/∂θ∂θ′ (θ, x).

(b) Fisher Information: I(θ) = Eθ s(θ, X)s(θ, X)′.

(c) Expected Hessian: H(θ) = Eθ h(θ, X).

We may call s(θ, X) random score and h(θ, X) random Hessian.


Note that for a vector of independent variables, the scores and Hessians are additive.
Specifically, let X1 and X2 be independent random vectors, and let X = (X1′ , X2′ )′. Denote
the scores and the Hessians of Xi , i = 1, 2, by s(θ, xi ) and h(θ, xi ), respectively,
and denote the score and the Hessian of X by s(θ, x) and h(θ, x), respectively. Then
it is clear that

s(θ, x) = s(θ, x1 ) + s(θ, x2 )


h(θ, x) = h(θ, x1 ) + h(θ, x2 ).

We can also show that

I(θ) = I1 (θ) + I2 (θ)


H(θ) = H1 (θ) + H2 (θ),

where I(θ), I1 (θ), and I2 (θ) denote the information matrix of X, X1 , X2 , respectively,
and the notations of H, H1 , and H2 are analogous.
From now on, we assume that the random vector X has joint density p(x, θ) with
respect to Lebesgue measure µ. Note that the notation p(x, θ) emphasizes the fact
that the joint density of X is a function of both x and θ. We let θ̂ (or more precisely,
τ (X)) be an unbiased estimator for θ. And we impose the following regularity
conditions on p(x, θ),

Regularity Conditions

(a) (∂/∂θ) ∫ p(x, θ)dµ(x) = ∫ (∂/∂θ) p(x, θ)dµ(x);
(b) (∂²/∂θ∂θ′) ∫ p(x, θ)dµ(x) = ∫ (∂²/∂θ∂θ′) p(x, θ)dµ(x);
(c) ∫ τ (x) (∂/∂θ′) p(x, θ)dµ(x) = (∂/∂θ′) ∫ τ (x)p(x, θ)dµ(x).

Under these regularity conditions, we have a few results that are both useful in
proving subsequent theorems and interesting in themselves.

Lemma 5.3.18 Suppose that Condition (a) holds, then

Eθ s(θ, X) = 0.

Proof: We have

Eθ s(θ, X) = ∫ s(θ, x)p(x, θ)dµ(x)
           = ∫ (∂/∂θ) `(θ, x) · p(x, θ)dµ(x)
           = ∫ [ (∂/∂θ) p(x, θ) / p(x, θ) ] p(x, θ)dµ(x)
           = (∂/∂θ) ∫ p(x, θ)dµ(x)
           = 0.

Lemma 5.3.19 Suppose that Condition (b) holds, then


I(θ) = −H(θ).

Proof: We have

∂²`/∂θ∂θ′ (θ, x) = (∂²p/∂θ∂θ′)(x, θ) / p(x, θ) − (∂/∂θ) log p(x, θ) · (∂/∂θ′) log p(x, θ).

Then

H(θ) = ∫ [ ∂²`/∂θ∂θ′ (θ, x) ] p(x, θ)dµ(x)
     = ∫ (∂²/∂θ∂θ′) p(x, θ)dµ(x) − I(θ)
     = −I(θ).

Lemma 5.3.20 Let θ̂(X) be an unbiased estimator of θ, and suppose that Condition
(c) holds; then

Eθ θ̂(X)s(θ, X)′ = I.

Proof: We have

Eθ θ̂(X)s(θ, X)′ = ∫ θ̂(x) [ (∂/∂θ′) p(x, θ) / p(x, θ) ] p(x, θ)dµ(x)
                = (∂/∂θ′) ∫ θ̂(x)p(x, θ)dµ(x)
                = I.

Since Eθ s(θ, X) = 0, the lemma implies that the covariance matrix between an
unbiased estimator and the random score is the identity for all θ.

Theorem 5.3.21 (Cramer-Rao Bound) Let θ̂(X) be an unbiased estimator of
θ, and if Conditions (a) and (c) hold, then,
 
varθ (θ̂(X)) ≥ I(θ)^{−1} .

Proof: Using the above lemmas, the variance matrix of the stacked vector (θ̂(X)′, s(θ, X)′)′ is

varθ [θ̂(X) ; s(θ, X)] = [ varθ (θ̂(X)) I ; I I(θ) ] ≡ A.

The covariance matrix A must be positive semidefinite. Choosing B′ = (I, −I(θ)^{−1}),
we must have B′AB = varθ (θ̂(X)) − I(θ)^{−1} ≥ 0. The conclusion follows.

Example 5.3.22 Let X1 , . . . , Xn be i.i.d. Poisson(λ). The log-likelihood, the
score, and the Fisher information of each Xi are given by

`(λ, xi ) = −λ + xi log λ − log xi ! ,
s(λ, xi ) = −1 + xi /λ,
Ii (λ) = 1/λ.

Then the information matrix I(λ) of X = (X1 , . . . , Xn )′ is I(λ) = nI1 (λ) = n/λ.
Recall that λ̂ = X̄ = (1/n) Σ_{i=1}^n Xi is an unbiased estimator for λ, and we have

varλ (X̄) = varλ (X1 )/n = λ/n.

Hence X̄ attains the Cramer-Rao bound and is an UMVU estimator.

5.4 Hypothesis Testing

5.4.1 Basic Concepts


Suppose a random sample X = (X1 , . . . , Xn )0 is drawn from a population charac-
terized by a parametric family P = {Pθ |θ ∈ Θ}. We partition the parameter set Θ
as
Θ = Θ0 ∪ Θ1 .
A statistical hypothesis is of the following form:

H0 : θ ∈ Θ0 H1 : θ ∈ Θ1 ,

where H0 is called the null hypothesis and H1 is called the alternative hypothesis.

A test statistic, say τ , is used to partition the state space X into the disjoint union
of the critical region C and the acceptance region A,
X = C ∪ A.
The critical region is conventionally given as
C = {x ∈ X |τ (x) ≥ c},
where c is a constant that is called critical value. If the observed sample is within
the critical region, we reject the null hypothesis. Otherwise, we say that we fail to
reject the null and thus accept the alternative hypothesis. Note that different tests
differ in their critical regions. In the following, we denote tests using their critical
regions.

For θ ∈ Θ0 , Pθ (C) is the probability of rejecting H0 when it is true. We thus define

Definition 5.4.1 (Size) The size of a test C is

max_{θ∈Θ0} Pθ (C).

Obviously, it is desirable to have a small size. For θ ∈ Θ1 , Pθ (C) is the probability


of rejecting H0 when it is false. If this probability is large, we say that the test
is powerful. Conventionally, we call π(θ) = Pθ (C) the power function. The power
function restricted to the domain Θ1 characterizes the power of the test. There is
a natural tradeoff between size and power. A test that rejects the null hypothesis
regardless of data has the maximum power, but at the cost of the maximum size.

Example 5.4.2 Let X be a random variable from Uniform(0, θ), and we want to
test
H0 : θ ≤ 1 H1 : θ > 1.
Consider first the following test,
C1 = {x|x ≥ 1}.
The power function of C1 is given by
π(θ) = ∫_1^θ (1/θ)dx = 1 − 1/θ.

Since π(θ) is monotone increasing, the size of C1 is π(1) = 0. Another test may be

C2 = {x|x ≥ 1/2}.

The power function of C2 is 1 − 1/(2θ), and the size is 1/2. Note that the power function
of C2 is higher than that of C1 on Θ1 , but at the cost of a higher size.

Given two tests with a same size, C1 and C2 , if Pθ (C1 ) > Pθ (C2 ) at θ ∈ Θ1 , we say
that C1 is more powerful than C2 . If there is a test C∗ that satisfies Pθ (C∗ ) ≥ Pθ (C)
at θ ∈ Θ1 for any test C of the same size, then we say that C∗ is the most powerful
test. Furthermore, if the test C∗ is such that Pθ (C∗ ) ≥ Pθ (C) for all θ ∈ Θ1 for any
test C of the same size, then we say that C∗ is the uniformly most powerful.
If Θ0 (or Θ1 ) is a singleton set, i.e., Θ0 = {θ0 }, we call the hypothesis H0 : θ = θ0
simple. Otherwise, we call it a composite hypothesis.
and H1 are simple hypotheses, say, Θ0 = {θ0 } and Θ1 = {θ1 }, P consists of two
distributions Pθ0 and Pθ1 , which we denote as P0 and P1 , respectively. It is clear that
P0 (C) and P1 (C) are the size and the power of the test C, respectively. Note that
both P0 (C) and P1 (A) are probabilities of making mistakes. P0 (C) is the probability
of rejecting the true null, and P1 (A) is the probability of accepting the false null.
Rejecting the true null is often called the type-I error, and accepting the false null
is called the type-II error.

5.4.2 Likelihood Ratio Tests


Assume that both the null and the alternative hypotheses are simple, Θ0 = {θ0 }
and Θ1 = {θ1 }. Let p(x, θ0 ) and p(x, θ1 ) be the densities of P0 and P1 , respectively.
We have

Theorem 5.4.3 (Neyman-Pearson Lemma) Let c be a constant. The test


 
C* = { x ∈ X | λ(x) = p(x, θ1 )/p(x, θ0 ) ≥ c }
is the most powerful test.

Proof: Suppose C is any test with the same size as C∗ . Assume without loss of
generality that C and C∗ are disjoint. It follows that
p(x, θ1 ) ≥ cp(x, θ0 ) on C∗
p(x, θ1 ) < cp(x, θ0 ) on C.
Hence we have
P1 (C*) = ∫_{C*} p(x, θ1 )dµ(x) ≥ c ∫_{C*} p(x, θ0 )dµ(x) = cP0 (C*),

and

P1 (C) = ∫_C p(x, θ1 )dµ(x) < c ∫_C p(x, θ0 )dµ(x) = cP0 (C).
Since P0 (C∗ ) = P0 (C) (the same size), we have P1 (C∗ ) ≥ P1 (C). Q.E.D.

Remarks:

• For obvious reasons, a test of the same form as C* is also called a likelihood ratio
(LR) test. The constant c is determined by pre-specifying a size, i.e., by
solving for c the equation P0 (C) = α, where α is a prescribed small number.

• We may view p(x, θ1 ) (or p(x, θ0 )) as the marginal increase of power (size) when
the point x is added to the critical region C. The Neyman-Pearson Lemma
shows that the points contributing more power increase per unit increase in
size should be included in C for an optimal test.

• For any monotone increasing function f , the test {x ∈ X |(f ◦ λ)(x) ≥ c′} is
identical to the one based on λ(x). It is hence also an LR test. Indeed, LR
tests are often based on monotone increasing transformations of λ whose null
distributions are easier to obtain.

Example 5.4.4 Let X be a random variable with density p(x, θ) = θxθ−1 , x ∈ (0, 1),
θ > 0. The most powerful test for the hypothesis H0 : θ = 2 versus H1 : θ = 1 is
given by

C = { p(x, 1)/p(x, 2) = 1/(2x) ≥ c }.

This is equivalent to

C = {x ≤ c′}

for a suitable constant c′. To determine c′ for a size-α test, we solve for c′

∫_0^{c′} p(x, 2)dx = α,

which obtains c′ = √α. Hence C = {x ≤ √α} is the most powerful test.

For composite hypotheses, we may use the generalized LR test based on the ratio

λ(x) = sup_{θ∈Θ1} p(x, θ) / sup_{θ∈Θ0} p(x, θ).

The Neyman-Pearson Lemma does not apply to the generalized LR test. However,
it is intuitively appealing and leads to satisfactory tests in many contexts.

Example 5.4.5 We continue with the previous example. Suppose we want to test
H0 : θ = 1 versus H1 : θ ≠ 1. The generalized LR statistic is given by

λ(x) = sup_{θ∈Θ1} p(x, θ) / sup_{θ∈Θ0} p(x, θ) = sup_{θ∈Θ1} p(x, θ),

since the denominator is p(x, 1) = 1. The sup in the numerator is attained at the ML
estimator θML = −1/ log(x). So we have

λ(x) = −(1/ log x) x^{−1/ log x − 1}.

Let t = log x; we have

λ(e^t ) = −(1/t) e^{−(t+1)} = f (t).

The generalized LR test is thus given by

C = {x|λ(x) ≥ c} = {x|f (t) ≥ c} = {x| t ≤ c1 or t ≥ c2 },

where c1 and c2 are constants that satisfy f (c1 ) = f (c2 ) = c. To determine c for a
size-α test, we solve P0 (C) = α.

Example 1: Simple Student-t Test

First consider a simple example. Let X1 , . . . , Xn be i.i.d. N(µ, 1), and we test

H0 : µ = 0 against H1 : µ = 1.

Since both the null and the alternative are simple, the Neyman-Pearson Lemma ensures
that the likelihood ratio test is the best test. The likelihood ratio is

λ(x) = p(x, 1)/p(x, 0)
     = (2π)^{−n/2} exp(−(1/2) Σ_{i=1}^n (xi − 1)²) / [ (2π)^{−n/2} exp(−(1/2) Σ_{i=1}^n (xi − 0)²) ]
     = exp( Σ_{i=1}^n xi − n/2 ).

We know that τ (X) = n^{−1/2} Σ_{i=1}^n Xi is distributed as N(0, 1) under the null. We
may use this to construct a test. Note that we can write τ (x) = f ◦ λ(x), where
f (z) = n^{−1/2}(log z + n/2) is a monotone increasing function. The test
C = {x|τ (x) ≥ c}
is then an LR test. It remains to determine c. Suppose we allow the probability
of a type-I error to be 5%, that is, a size of 0.05; we may solve for c the equation
P0 (C) = 0.05. Since τ (X) ∼ N (0, 1) under the null, we can look up the N (0, 1)
table and find that
P0 (x|τ (x) ≥ 1.645) = 0.05.
This implies c = 1.645.
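
Instead of a table, the critical value can be read off the standard normal quantile function. The sketch below (illustrative code, requiring scipy) also computes the power of the test against µ = 1 when n = 10, using the fact that τ (X) ∼ N (√n, 1) under that alternative.

```python
import numpy as np
from scipy import stats

c = stats.norm.ppf(0.95)    # solve P0(tau >= c) = 0.05; c = 1.6449
print(c)

# Under H1: mu = 1, tau(X) = n^{-1/2} sum X_i ~ N(sqrt(n), 1).
n = 10
print(1 - stats.norm.cdf(c - np.sqrt(n)))   # power, close to 1
```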

Example 2: One-Sided Student-t Test

Now we test
H0 : µ = 0 against H1 : µ > 0. (5.2)
The alternative hypothesis is now composite. From the preceding analysis, however,
it is clear that for any µ1 > 0, C is the most powerful test for

H0 : µ = 0 against H1 : µ = µ1 .

We conclude that C is the uniformly most powerful test.

Example 3: Two-Sided F Test

Next we let X1 , . . . , Xn be i.i.d. N (µ, σ 2 ), and test

H 0 : µ = µ0 against H1 : µ 6= µ0 .

Here we have two unknown parameters, µ and σ 2 , but the null and the alternative
hypotheses are concerned with the parameter µ only. We consider the generalized
LR test with the following generalized likelihood ratio
λ(x) = sup_{µ,σ²} (2πσ²)^{−n/2} exp(−(1/2σ²) Σ_{i=1}^n (xi − µ)²) / sup_{σ²} (2πσ²)^{−n/2} exp(−(1/2σ²) Σ_{i=1}^n (xi − µ0 )²).

Recall that the ML estimators of µ and σ² are

µ̂ = x̄, σ̂² = (1/n) Σ_{i=1}^n (xi − x̄)².

Hence µ̂ and σ̂² achieve the sup in the numerator. In the denominator,

σ̃² = (1/n) Σ_{i=1}^n (xi − µ0 )²

achieves the sup. Then we have

λ(x) = (2πσ̂²)^{−n/2} exp(−(1/2σ̂²) Σ_{i=1}^n (xi − µ̂)²) / [ (2πσ̃²)^{−n/2} exp(−(1/2σ̃²) Σ_{i=1}^n (xi − µ0 )²) ]
     = [ Σ_{i=1}^n (xi − µ0 )² / Σ_{i=1}^n (xi − x̄)² ]^{n/2}
     = [ 1 + n(x̄ − µ0 )² / Σ_{i=1}^n (xi − x̄)² ]^{n/2} .

We define

τ (x) = (n − 1) · n(x̄ − µ0 )² / Σ_{i=1}^n (xi − x̄)² .

It is clear that τ is a monotone increasing transformation of λ. Hence the generalized
LR test is given by C = {x|τ (x) ≥ c} for a constant c. Note that

τ (X) = (V1 /1)/(V2 /(n − 1)),

where

V1 = n(X̄ − µ0 )²/σ² and V2 = Σ_{i=1}^n (Xi − X̄)²/σ².
Under H0 , we can show that V1 ∼ χ21 , V2 ∼ χ2n−1 , and V1 and V2 are independent.
Hence, under H0 ,
τ (X) ∼ F1,n−1 .
To find the critical value c for a size-α test, we look up the F table and find constant
c such that
P0 {x|τ (x) ≥ c} = α.
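In practice the test is carried out numerically; the following sketch (assuming NumPy and SciPy, with simulated data for illustration only) computes τ and the size-0.05 critical value.

    import numpy as np
    from scipy.stats import f

    rng = np.random.default_rng(0)
    n, mu0 = 25, 0.0
    x = rng.normal(loc=mu0, scale=2.0, size=n)   # simulated data; here H0 is true

    tau = (n - 1) * n * (x.mean() - mu0) ** 2 / np.sum((x - x.mean()) ** 2)
    c = f.ppf(0.95, dfn=1, dfd=n - 1)            # size-0.05 critical value of F_{1,n-1}
    print(tau, c, tau >= c)                      # reject H0 iff tau >= c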
From the preceding examples, we may see that the hypothesis testing problem consists of three steps in practice: first, forming an appropriate test statistic; second, finding the distribution of this statistic under H0; and finally, making a decision. If the outcome of the test statistic is deemed unlikely under H0, the null hypothesis H0 is rejected, in which case we accept H1. The Neyman-Pearson Lemma gives important insights on how to form a test statistic that leads to a powerful test. In the following example, we illustrate a direct approach that is not built on the likelihood ratio.
Example 4: Two-Sided Student-t Test
For the testing problem of Example 3, we may construct a Student-t test statistic as follows,

τ̃(x) = √n(x̄ − µ0) / ( Σ_{i=1}^n (x_i − x̄)² / (n − 1) )^{1/2}.

However, τ̃ is not a monotone increasing transformation of λ. Hence the test based on τ̃ is no longer a generalized LR test. However, we can easily derive the distribution of τ̃ if the null hypothesis is true. Indeed, we have

τ̃(X) = Z / √( V/(n − 1) ),

where

Z = √n(X̄ − µ0) / σ  and  V = Σ_{i=1}^n (X_i − X̄)² / σ².
Under H0, we can show that Z ∼ N(0,1), V ∼ χ²_{n−1}, and Z and V are independent. Hence, under H0,

τ̃(X) ∼ t_{n−1}.

To find the critical value c for a size-α test, we look up the t table and find a constant c > 0 such that

P0{x | |τ̃(x)| ≥ c} = α.

Finally, to see the connection between this test and the F test in Example 3, note that F_{1,n−1} ≡ t²_{n−1}. In words, squaring a t_{n−1} random variable yields an F_{1,n−1} random variable.
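The identity F_{1,n−1} ≡ t²_{n−1} is easy to verify numerically; the sketch below (assuming SciPy) compares the size-0.05 critical values of the two tests.

    from scipy.stats import t, f

    n = 20
    c_t = t.ppf(0.975, df=n - 1)          # two-sided 5% critical value for t_{n-1}
    c_f = f.ppf(0.95, dfn=1, dfd=n - 1)   # 5% critical value for F_{1,n-1}
    print(c_t**2, c_f)                    # the two numbers coincide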
5.5 Exercises
1. Let X1 and X2 be independent Poisson(λ). Show that τ = X1 + X2 is a sufficient statistic.

2. Let (Xi, i = 1, ..., n) be a random sample from the underlying distribution given by the density

   p(x, θ) = (2x/θ²) I{0 ≤ x ≤ θ}.

   (a) Find the MLE of θ.
   (b) Show that T = max{X1, ..., Xn} is sufficient.
   (c) Let

   S1 = (max{X1, ..., Xm}, max{Xm+1, ..., Xn}),
   S2 = (max{X1, ..., Xm}, min{Xm+1, ..., Xn}),

   where 1 < m < n. Discuss the sufficiency of S1 and S2.

3. Let (Xi, i = 1, ..., n) be i.i.d. Uniform(α − β, α + β), where β > 0, and let θ = (α, β).

   (a) Find a minimal sufficient statistic τ for θ.
   (b) Find the ML estimator θ̂_ML of θ. (Hint: Graph the region for θ such that the joint density p(x, θ) > 0.)
   (c) Given the fact that τ in (a) is complete, find the UMVU estimator of α. (Hint: Note that Eθ(X1) = α.)
4. Let (Xi, i = 1, ..., n) be a random sample from a normal distribution with mean µ and variance σ². Define

   X̄n = Σ_{i=1}^n Xi / n  and  Sn² = Σ_{i=1}^n (Xi − X̄n)² / (n − 1).

   (a) Obtain the Cramer-Rao lower bound.
   (b) See whether X̄n and Sn² attain the lower bound.
   (c) Show that X̄n and Sn² are jointly sufficient for µ and σ².
   (d) Are X̄n and Sn² the UMVU estimators?

5. Let X1 and X2 be independent and uniformly distributed on (θ, θ+1). Consider


the two tests with critical regions C1 and C2 given by

C1 = {(x1 , x2 )|x1 ≥ 0.95} ,


C2 = {(x1 , x2 )|x1 + x2 ≥ c} ,

to test H0 : θ = 0 versus H1 : θ = 1/2.


(a) Find the value of c so that C2 has the same size as C1 .
(b) Find and compare the powers of C1 and C2 .
(c) Show how to get a test that has the same size, but is more powerful than
C2 .

81
82
Chapter 6

Asymptotic Theory
6.1 Introduction

Let X1, ..., Xn be a sequence of random variables, and let β̂n = β̂(X1, ..., Xn) be an estimator for the population parameter β. For β̂n to be a good estimator, it must be asymptotically consistent, i.e., β̂n must converge to β in some sense as n → ∞. Furthermore, it is desirable for β̂n, if properly standardized, to have an asymptotic distribution. That is, there may be a sequence of numbers (an) such that an(β̂n − β) converges in some sense to a random variable Z with a known distribution. If in particular Z is normal (or Gaussian), we say β̂n is asymptotically normal.

Asymptotic distributions are also important for hypothesis testing. If we can show that a test statistic has an asymptotic distribution, then we may relax assumptions on the finite sample distribution of X1, ..., Xn. This makes our tests more robust to mis-specifications of the model.

We study basic asymptotic theory in this chapter. It provides essential tools for proving asymptotic consistency and deriving asymptotic distributions. In this section we first study the convergence of a sequence of random variables. As a sequence of measurable functions, the convergence behavior of random variables is much richer than that of real numbers.
6.1.1 Modes of Convergence

Let (Xn) and X be random variables defined on a common probability space (Ω, F, P).

Definition 6.1.1 (a.s. Convergence) Xn converges almost surely (a.s.) to X, written as Xn →a.s. X, if

P{ω | Xn(ω) → X(ω)} = 1.

Equivalently, a.s. convergence can be defined as: for every ε > 0,

P{ω | |Xn(ω) − X(ω)| > ε i.o.} = 0,

or

P{ω | |Xn(ω) − X(ω)| < ε e.v.} = 1.
Definition 6.1.2 (Convergence in Probability) Xn converges in probability to X, written as Xn →p X, if for every ε > 0,

P{ω | |Xn(ω) − X(ω)| > ε} → 0.

Remarks:

• Convergence in probability may be equivalently defined as

  P{ω | |Xn(ω) − X(ω)| ≤ ε} → 1.
• Most commonly, X in the definition is a degenerate random variable (or simply, a constant).

• The definition carries over to the case where (Xn) is a sequence of random vectors. In this case the distance measure |·| should be replaced by the Euclidean norm.

Definition 6.1.3 (Lp Convergence) Xn converges in Lp to X, written as Xn →Lp X, if

E|Xn(ω) − X(ω)|^p → 0, where p > 0.

In particular, if p = 2, L2 convergence is also called mean squared error convergence.
Definition 6.1.4 (Convergence in Distribution) Let Fn and F be the distribution functions of Xn and X, respectively. Xn converges in distribution to X, written as Xn →d X, if

Fn(x) → F(x) at every continuity point x of F.
Remarks:

• Note that for convergence in distribution, (Xn) and X need not be defined on a common probability space. It is not a convergence of Xn itself, but of the probability measures induced by Xn, i.e., PXn(B) = P ∘ Xn^{−1}(B), B ∈ B(R).

• Recall that we may also call PXn the law of Xn. Thus convergence in distribution is also called convergence in law. More technically, we may call convergence in distribution weak convergence, as opposed to strong convergence in the set of probability measures. Strong convergence refers to convergence in a distance metric on probability measures (e.g., the total variation metric).
Without proof, we give the following three portmanteau theorems, each of which supplies an equivalent definition of convergence in distribution.

Lemma 6.1.5 Xn →d X if and only if for every function f that is bounded and continuous a.s. in PX,

Ef(Xn) → Ef(X).

The function f need not be continuous at every point. The requirement of a.s. continuity allows f to be discontinuous on a set S ⊂ R such that PX(S) = 0.

Lemma 6.1.6 Xn →d X if and only if Ef(Xn) → Ef(X) for every bounded and uniformly continuous function f.¹

¹ A function f: D → R is uniformly continuous on D if for every ε > 0, there exists δ > 0 such that |f(x1) − f(x2)| < ε for all x1, x2 ∈ D that satisfy |x1 − x2| < δ.
Lemma 6.1.7 Let φn and φ be the characteristic functions of Xn and X, respectively. Xn →d X if and only if

φn(t) → φ(t) for all t.

We have

Theorem 6.1.8 Both a.s. convergence and Lp convergence imply convergence in probability, which in turn implies convergence in distribution.
Proof: (a) To show that a.s. convergence implies convergence in probability, fix ε > 0 and let En = {|Xn − X| > ε}. By Fatou's lemma,

lim sup_{n→∞} P{En} ≤ P{lim sup_{n→∞} En} = P{En i.o.} = 0,

where the last equality follows from the a.s. convergence. The conclusion follows.
(b) The fact that Lp convergence implies convergence in probability follows from the Chebyshev inequality:

P{|Xn − X| > ε} ≤ E|Xn − X|^p / ε^p.
(c) To show that convergence in probability implies convergence in distribution, we first note that for any ε > 0, if X > z + ε and |Xn − X| < ε, then we must have Xn > z. That is to say, {Xn > z} ⊃ {X > z + ε} ∩ {|Xn − X| < ε}. Taking complements, we have

{Xn ≤ z} ⊂ {X ≤ z + ε} ∪ {|Xn − X| ≥ ε}.

Then we have

P{Xn ≤ z} ≤ P{X ≤ z + ε} + P{|Xn − X| ≥ ε}.

Since Xn →p X, lim sup P{Xn ≤ z} ≤ P{X ≤ z + ε}. Letting ε ↓ 0, we have

lim sup P{Xn ≤ z} ≤ P{X ≤ z}.

Similarly, using the fact that X < z − ε and |Xn − X| < ε imply Xn < z, we can show that

lim inf P{Xn ≤ z} ≥ P{X < z}.

If P{X = z} = 0, then P{X ≤ z} = P{X < z}. Hence

lim inf P{Xn ≤ z} = lim sup P{Xn ≤ z} = P{X ≤ z}.

This establishes

lim_{n→∞} Fn(z) = F(z) at every continuity point z of F.
The other directions of the theorem do not hold. In particular, a.s. convergence does not imply Lp convergence, nor does the latter imply the former. Here are a couple of counterexamples.

Counterexamples: Consider the probability space ([0,1], B([0,1]), µ), where µ is the Lebesgue measure and B([0,1]) is the Borel field on [0,1]. Define Xn by

Xn(ω) = n^{1/p} I_{0 ≤ ω ≤ 1/n}, p > 0,

and define Yn by

Yn(ω) = I_{(b−1)/a ≤ ω ≤ b/a}, n = a(a − 1)/2 + b, 1 ≤ b ≤ a, a = 1, 2, ...

It can be shown that Xn → 0 a.s., but EXn^p = 1 for all n. On the contrary, EYn^p = 1/a → 0, but Yn(ω) does not converge for any ω ∈ [0,1].
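A small simulation (assuming NumPy; purely illustrative) makes the first counterexample concrete: for a fixed ω the path Xn(ω) is eventually zero, yet EXn^p stays at 1.

    import numpy as np

    rng = np.random.default_rng(0)
    p = 2.0
    omega = rng.uniform()                       # a single draw of omega in [0,1]

    # X_n(omega) = n^{1/p} I{omega <= 1/n}: zero for all n > 1/omega, so X_n -> 0 a.s.
    path = [n ** (1 / p) * (omega <= 1 / n) for n in range(1, 51)]
    print(path[-5:])                            # eventually all zeros

    # yet E X_n^p = (n^{1/p})^p * P(omega <= 1/n) = n * (1/n) = 1 for every n
    n = 30
    draws = rng.uniform(size=200_000)
    print(np.mean((n ** (1 / p) * (draws <= 1 / n)) ** p))   # approximately 1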
It also follows from the above counterexamples that convergence in probability does not imply a.s. convergence: if it did, we would have →Lp ⇒ →p ⇒ →a.s., contradicting the example of (Yn). But we have

Theorem 6.1.9 If Xn →p X, then there exists a subsequence (Xnk) such that Xnk →a.s. X.
Proof: For any ε > 0, we may choose nk such that

P{|Xnk − X| > ε} ≤ 2^{−k}.

Since

Σ_{k=1}^∞ P{|Xnk − X| > ε} ≤ Σ_{k=1}^∞ 2^{−k} < ∞,

the Borel-Cantelli Lemma dictates that

P{lim sup_{k→∞} {|Xnk − X| > ε}} = P{|Xnk − X| > ε i.o.} = 0.
It is clear that convergence in distribution does not imply convergence in probability, since the former does not even require that the Xn be defined on a common probability space. However, we have

Theorem 6.1.10 Let (Xn) be defined on a common probability space and let c be a constant. If Xn →d c, then Xn →p c.

Proof: Let f(x) = I_{|x−c|>ε} for any ε > 0. The function f is bounded and continuous a.s. in the law of the constant c, since its discontinuity set {x : |x − c| = ε} has probability zero under a point mass at c. Since Xn →d c, we have

Ef(Xn) = P{|Xn − c| > ε} → Ef(c) = 0.
Theorem 6.1.11 Let f be a continuous function. We have:

(a) if Xn →a.s. X, then f(Xn) →a.s. f(X);

(b) if Xn →p X, then f(Xn) →p f(X);

(c) if Xn →d X, then f(Xn) →d f(X). (Continuous Mapping Theorem)

Proof: (a) Omitted.

(b) Suppose, for simplicity, that f is uniformly continuous. Then for any ε > 0, there exists δ > 0 such that |x − y| ≤ δ implies |f(x) − f(y)| ≤ ε. So we have

{|Xn − X| ≤ δ} ⊂ {|f(Xn) − f(X)| ≤ ε},

which implies, taking complements,

{|Xn − X| > δ} ⊃ {|f(Xn) − f(X)| > ε}.

Hence

P{|Xn − X| > δ} ≥ P{|f(Xn) − f(X)| > ε}.

The theorem follows.

(c) It suffices to show that for any bounded and continuous function g,

Eg(f(Xn)) → Eg(f(X)).

But this is guaranteed by Xn →d X, since g ∘ f is also bounded and continuous.
Using the above results, we easily obtain

Theorem 6.1.12 (Slutsky Theorem) If Xn →d c and Yn →p Y, where c is a constant, then

(a) Xn Yn →d cY;

(b) Xn + Yn →d c + Y.
6.1.2 Small o and Big O Notations

We first introduce small o and big O notations for sequences of real numbers.

Definition 6.1.13 (Small o and Big O) Let (an) and (bn) be sequences of real numbers. We write

(a) xn = o(an) if xn/an → 0, and

(b) yn = O(bn) if there exists a constant M > 0 such that |yn/bn| < M for all large n.
Remarks:

• In particular, if we take an = bn = 1 for all n, the sequence xn = o(1) converges to zero and the sequence yn = O(1) is bounded.

• We may write o(an) = an o(1) and O(bn) = bn O(1). However, these are not equalities in the usual sense. It is understood that o(1) = O(1) but O(1) ≠ o(1).

• For yn = O(1), it suffices to have |yn| < M for all large n. If |yn| < M for all n > N, then we would have |yn| < M* for all n, where M* = max{|y1|, ..., |yN|, M}.

• O(o(1)) = o(1)
  Proof: Let xn = o(1) and yn = O(xn). It follows from |yn/xn| < M that |yn| < M|xn| → 0.

• o(O(1)) = o(1)
  Proof: Let xn = O(1) and yn = o(xn). Since |xn| < M, we have |yn| = (|yn|/|xn|)|xn| ≤ M(|yn|/|xn|) → 0.

• o(1)O(1) = o(1)
  Proof: Let xn = o(1) and yn = O(1). It follows from |xn yn| < M|xn| → 0.

• In general, we have

  O(o(an)) = O(an o(1)) = an O(o(1)) = an o(1) = o(an).
In probability, we have

Definition 6.1.14 (Small op and Big Op) Let (Xn) and (Yn) be sequences of random variables. We say

(a) Xn = op(an) if Xn/an →p 0, and

(b) Yn = Op(bn) if for any ε > 0, there exist a constant M > 0 and n0(ε) such that P(|Yn/bn| > M) < ε for all n ≥ n0(ε).

If we take an = bn = 1 for all n, then Xn = op(1) means Xn →p 0, and Yn = Op(1) means that for any ε > 0, there exists M > 0 such that P(|Yn| > M) < ε for all large n. In the latter case, we say that Yn is stochastically bounded.

Analogous to the case of real sequences, we have the following results.
Lemma 6.1.15 We have

(a) Op(op(1)) = op(1),

(b) op(Op(1)) = op(1),

(c) op(1)Op(1) = op(1).

Proof: (a) Let Xn = op(1) and Yn = Op(Xn); we show that Yn = op(1). For any ε > 0, since |Yn|/|Xn| ≤ M and |Xn| ≤ M^{−1}ε imply |Yn| ≤ ε, we have {|Yn| ≤ ε} ⊃ {|Yn| ≤ |Xn|M} ∩ {|Xn| ≤ M^{−1}ε}. Taking complements, we have

{|Yn| > ε} ⊂ {|Yn| > |Xn|M} ∪ {|Xn| > M^{−1}ε}.

Thus

P{|Yn| > ε} ≤ P{|Yn|/|Xn| > M} + P{|Xn| > M^{−1}ε}.

This holds for any M > 0. We can choose M such that the first term on the right is made arbitrarily small for all large n. And since M is a constant, the second term goes to zero. Thus P{|Yn| > ε} → 0, i.e., Yn = op(1).

(b) Let Xn = Op(1) and Yn = op(Xn); we show that Yn = op(1). By a similar argument as above, we have for any ε > 0 and M > 0,

P{|Yn| > ε} ≤ P{|Yn|/|Xn| > ε/M} + P{|Xn| > M}.

The first term on the right goes to zero, and the second term can be made arbitrarily small by choosing a large M.

(c) Left as an exercise.
In addition, we have

Theorem 6.1.16 If Xn →d X, then

(a) Xn = Op(1), and

(b) Xn + op(1) →d X.

Proof: (a) For any ε > 0, there is a sufficiently large M such that P(|X| > M) < ε, since {|X| > M} ↓ ∅ as M ↑ ∞; we may also choose M such that P(|X| = M) = 0. Let f(x) = I_{|x|>M}. Since Xn →d X and f is bounded and continuous a.s. in PX, we have Ef(Xn) = P(|Xn| > M) → Ef(X) = P(|X| > M) < ε. Therefore, P(|Xn| > M) < ε for all large n.

(b) Let Yn = op(1), let f be any uniformly continuous and bounded function, and let M = sup_x |f(x)|. For any ε > 0, there exists a δ such that |Yn| ≤ δ implies |f(Xn + Yn) − f(Xn)| ≤ ε. Hence

|f(Xn + Yn) − f(Xn)| = |f(Xn + Yn) − f(Xn)| · I_{|Yn|≤δ} + |f(Xn + Yn) − f(Xn)| · I_{|Yn|>δ}
                     ≤ ε + 2M I_{|Yn|>δ}.

Hence

E|f(Xn + Yn) − f(Xn)| ≤ ε + 2M P{|Yn| > δ}.

Then we have

|Ef(Xn + Yn) − Ef(X)| = |E[f(Xn + Yn) − f(Xn) + f(Xn) − f(X)]|
                      ≤ E|f(Xn + Yn) − f(Xn)| + |Ef(Xn) − Ef(X)|
                      ≤ ε + 2M P{|Yn| > δ} + |Ef(Xn) − Ef(X)|.

The third term goes to zero since Xn →d X, the second term goes to zero since Yn = op(1), and ε > 0 is arbitrary. Hence Ef(Xn + Yn) → Ef(X).
Corollary 6.1.17 If Xn →d X and Yn →p c, then Xn Yn →d cX.

Proof: We have

Xn Yn = Xn(c + op(1)) = cXn + Op(1)op(1) = cXn + op(1).

Then the conclusion follows from the continuous mapping theorem and Theorem 6.1.16(b).
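The corollary is easy to see in simulation. The sketch below (assuming NumPy; an illustration of ours, not part of the text) takes Xn →d N(0,1) and Yn →p 1, so that XnYn →d N(0,1).

    import numpy as np

    rng = np.random.default_rng(0)
    n, reps = 500, 20_000
    z = rng.normal(size=(reps, n))

    xn = np.sqrt(n) * z.mean(axis=1)   # X_n = sqrt(n) * sample mean, exactly N(0,1) here
    yn = (z**2).mean(axis=1)           # Y_n = sample second moment ->p 1 by the LLN

    print(np.var(xn * yn), np.var(xn)) # both variances are close to 1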
6.2 Limit Theorems

6.2.1 Law of Large Numbers
The law of large numbers (LLN) states that the sample average converges in some sense to the population mean. In this section we state three LLNs for independent random variables. It is more difficult to establish LLNs for sequences of random variables with dependence. Intuitively, every additional observation of a dependent sequence brings less information to the sample mean than does that of an independent sequence.

Theorem 6.2.1 (Weak LLN (Khinchin)) If X1, ..., Xn are i.i.d. with mean µ < ∞, then

(1/n) Σ_{i=1}^n Xi →p µ.
Proof: We only prove the case where var(Xi) < ∞; the general proof is more involved. The theorem follows easily from

E( (1/n) Σ_{i=1}^n Xi − µ )² = E( (1/n) Σ_{i=1}^n (Xi − µ) )² = (1/n) E(X1 − µ)² → 0,

since L2 convergence implies convergence in probability.
Theorem 6.2.2 (Strong LLN) If X1, ..., Xn are i.i.d. with mean µ < ∞, then

(1/n) Σ_{i=1}^n Xi →a.s. µ.

Proof: Since the mean exists, we may assume µ = 0 and prove

(1/n) Σ_{i=1}^n Xi →a.s. 0.

The general proof is involved; here we prove the case where EXi^4 < ∞. We have

E( (1/n) Σ_{i=1}^n Xi )^4 = n^{−4} ( Σ_{i=1}^n EXi^4 + 6 Σ_{i<j} EXi² Xj² )
                         = n^{−3} EX1^4 + 3 (n(n−1)/n^4) (EX1²)²
                         = O(n^{−2}).

This implies E Σ_{n=1}^∞ ( (1/n) Σ_{i=1}^n Xi )^4 < ∞, which further implies Σ_{n=1}^∞ ( (1/n) Σ_{i=1}^n Xi )^4 < ∞ a.s. Then we have

(1/n) Σ_{i=1}^n Xi →a.s. 0.
Without proof, we also give a strong LLN that only requires independence.

Theorem 6.2.3 (Kolmogorov's Strong LLN) If X1, ..., Xn are independent with EXi = µi and var(Xi) = σi², and if Σ_{i=1}^∞ σi²/i² < ∞, then

(1/n) Σ_{i=1}^n Xi − (1/n) Σ_{i=1}^n µi →a.s. 0.
The first application of the LLN is in deducing the probability p of getting a head in the coin-tossing experiment. Define Xi = 0 when we get a tail in the i-th toss and Xi = 1 when we get a head. Then the LLN guarantees that (1/n) Σ_{i=1}^n Xi converges to EXi = p · 1 + (1 − p) · 0 = p. This convergence of the sample frequency to the probability is, indeed, the basis of the "frequentist" interpretation of probability.
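The following sketch (assuming NumPy; p = 0.3 is an arbitrary illustrative choice) shows the sample frequency settling down to p.

    import numpy as np

    rng = np.random.default_rng(0)
    p = 0.3
    flips = rng.random(100_000) < p                        # X_i = 1 (head) with probability p

    running_mean = np.cumsum(flips) / np.arange(1, flips.size + 1)
    print(running_mean[[99, 999, 9_999, 99_999]])          # approaches p = 0.3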
Sometimes we need an LLN for measurable functions of random variables, say g(Xi, θ), where θ is a non-random parameter vector taking values in Θ. Uniform LLNs establish that (1/n) Σ_{i=1}^n g(Xi, θ) converges in some sense uniformly in θ ∈ Θ. More precisely, we have

Theorem 6.2.4 (Uniform Weak LLN) Let X1, ..., Xn be i.i.d., Θ be compact, and g(x, θ) be measurable in x and continuous in θ for every x. If E sup_{θ∈Θ} |g(X1, θ)| < ∞, then

sup_{θ∈Θ} | (1/n) Σ_{i=1}^n g(Xi, θ) − Eg(X1, θ) | →p 0.
6.2.2 Central Limit Theorem

The central limit theorem states that the sample average, under suitable scaling, converges in distribution to a normal (Gaussian) random variable.

We consider the sequence {Xin}, i = 1, ..., n. Note that the sequence has the double subscript in, with n denoting the sample size and i the index within the sample. We call such a data structure a double array. We first state without proof the celebrated

Theorem 6.2.5 (Lindeberg-Feller CLT) Let X1n, ..., Xnn be independent with EXin = µi and var(Xin) = σi² < ∞. Define σn² = Σ_{i=1}^n σi². If for any ε > 0,

(1/σn²) Σ_{i=1}^n E(Xin − µi)² I_{|Xin − µi| > εσn} → 0,     (6.1)

then

Σ_{i=1}^n (Xin − µi) / σn →d N(0, 1).
The condition in (6.1) is called the Lindeberg condition. As it is often difficult to check, we often use the Liapounov condition, which implies the Lindeberg condition. The Liapounov condition states that for some δ > 0,

Σ_{i=1}^n E |(Xin − µi)/σn|^{2+δ} → 0.     (6.2)

To see that Liapounov is stronger than Lindeberg, let ξin = (Xin − µi)/σn. We have

Σ_{i=1}^n E ξin² I_{|ξin| > ε} ≤ Σ_{i=1}^n E|ξin|^{2+δ} / ε^δ.
Using the Lindeberg-Feller CLT, we obtain

Theorem 6.2.6 (Lindeberg-Levy CLT) If X1, ..., Xn are i.i.d. with mean zero and variance σ² < ∞, then

(1/√n) Σ_{i=1}^n Xi →d N(0, σ²).

Proof: Let Yin = Xi/√n. Then (Yin) is an independent double array with µi = 0, σi² = σ²/n, and σn² = σ². It suffices to check the Lindeberg condition:

(1/σn²) Σ_{i=1}^n EYin² I_{|Yin| > εσn} = (1/σ²) EX1² I_{|X1| > εσ√n} → 0,

which holds by the dominated convergence theorem. Note that Zn = X1² I_{|X1| > εσ√n} ≤ X1², EX1² < ∞, and Zn(ω) → 0 for all ω ∈ Ω.
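A quick simulation (assuming NumPy; the Exponential(1) population, with mean and variance both equal to 1, is an arbitrary illustrative choice) illustrates the theorem.

    import numpy as np

    rng = np.random.default_rng(0)
    n, reps = 400, 50_000

    x = rng.exponential(scale=1.0, size=(reps, n))
    stat = np.sqrt(n) * (x.mean(axis=1) - 1.0)       # n^{-1/2} sum (X_i - mu)

    print(np.quantile(stat, [0.05, 0.5, 0.95]))      # close to N(0,1): [-1.645, 0, 1.645]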
6.2.3 Delta Method

Let Tn be a sequence of statistics that converges in distribution to a multivariate normal vector,

√n (Tn − θ) →d N(0, Σ).

The delta method is used to derive the asymptotic distribution of f(Tn), where f is differentiable. Let ∇f(θ) = ∂f(θ)/∂θ denote the gradient. The Taylor expansion of f(Tn) around θ gives

f(Tn) = f(θ) + ∇f(θ)'(Tn − θ) + o(‖Tn − θ‖)
      = f(θ) + ∇f(θ)'(Tn − θ) + o( Op(1/√n) )
      = f(θ) + ∇f(θ)'(Tn − θ) + op(1/√n).

This implies

√n ( f(Tn) − f(θ) ) = ∇f(θ)' √n (Tn − θ) + op(1) →d N(0, ∇f(θ)' Σ ∇f(θ)).
Example 6.2.7 Let X1, ..., Xn be i.i.d. with mean µ and variance σ². By the central limit theorem, we have

√n (X̄ − µ) →d N(0, σ²).

Using the delta method, the limiting distribution of X̄² is given by

√n (X̄² − µ²) →d N(0, 4µ²σ²).
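The asymptotic variance 4µ²σ² can be checked by simulation; the sketch below (assuming NumPy; µ = 2 and σ = 1.5 are arbitrary illustrative values) does so.

    import numpy as np

    rng = np.random.default_rng(0)
    mu, sigma, n, reps = 2.0, 1.5, 400, 50_000

    x = rng.normal(mu, sigma, size=(reps, n))
    stat = np.sqrt(n) * (x.mean(axis=1) ** 2 - mu**2)   # sqrt(n)(Xbar^2 - mu^2)

    print(np.var(stat), 4 * mu**2 * sigma**2)           # both close to 36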
Example 6.2.8 Let X1, ..., Xn be i.i.d. with mean µ, variance σ², and EX^4 < ∞. Define

Tn = ( X̄, (1/n) Σ_i Xi² )'. We have θ = ( EX, EX² )' and Σ = [ var(X), cov(X, X²); cov(X, X²), var(X²) ].

Using f(u, v) = v − u², we may show that

√n ( (1/n) Σ_{i=1}^n Xi² − X̄² − σ² ) →d N(0, µ4 − σ^4),

where µ4 is the fourth central moment of X. Recall that the sample variance is given by sn² = (1/(n−1)) Σ_{i=1}^n (Xi − X̄)². We obviously have

√n (sn² − σ²) →d N(0, µ4 − σ^4).

Another application of the delta method yields

√n (sn − σ) →d N( 0, (µ4 − σ^4)/(4σ²) ).
6.3 Asymptotics for Maximum Likelihood Estimation

As an application of the asymptotic theory we have learned, we present in this section the asymptotic properties of the Maximum Likelihood Estimator (MLE). The tests based on the MLE, such as the likelihood ratio (LR) test, the Wald test, and the Lagrange multiplier (LM) test, are also discussed.

Throughout the section, we assume that X1, ..., Xn are i.i.d. random variables with a common distribution that belongs to a parametric family. We assume that each distribution in the parametric family admits a density p(x, θ) with respect to a measure µ. Let θ0 ∈ Θ denote the true value of θ, let P0 denote the distribution with density p(x, θ0), and let E0(·) ≡ ∫ · p(x, θ0) dµ(x), an integral operator with respect to P0.
6.3.1 Consistency of MLE

We first show that the expected log likelihood with respect to P0 is maximized at θ0. Let p(xi, θ) and ℓ(xi, θ) denote the likelihood and the log likelihood, respectively. We consider the function of θ,

E0 ℓ(·, θ) = ∫ ℓ(x, θ) p(x, θ0) dµ(x).

Lemma 6.3.1 We have for all θ ∈ Θ,

E0 ℓ(·, θ0) ≥ E0 ℓ(·, θ).

Proof: Note that log(·) is a concave function. Hence by Jensen's inequality,

E0 ℓ(·, θ) − E0 ℓ(·, θ0) = E0 log( p(·, θ)/p(·, θ0) )
                         ≤ log E0( p(·, θ)/p(·, θ0) )
                         = log ∫ ( p(x, θ)/p(x, θ0) ) p(x, θ0) dµ(x) = log 1 = 0.
Under our assumptions, the MLE of θ0 is defined by

θ̂ = argmax_θ (1/n) Σ_{i=1}^n ℓ(Xi, θ).

We have

Theorem 6.3.2 (Consistency of MLE) Under certain regularity conditions, we have

θ̂ →p θ0.

Proof: The regularity conditions ensure that the uniform weak LLN applies to ℓ(Xi, θ):

(1/n) Σ_{i=1}^n ℓ(Xi, θ) →p E0 ℓ(·, θ)

uniformly in θ ∈ Θ. The conclusion then follows.

Using the op(1) notation, if θ̂ is consistent, we may write

θ̂ = θ0 + op(1).
6.3.2 Asymptotic Normality of MLE

Theorem 6.3.3 Under certain regularity conditions, we have

√n(θ̂ − θ0) →d N(0, I(θ0)^{−1}),

where I(·) is the Fisher information.

The rate of the above convergence is √n. Using the Op notation, we may write

θ̂ = θ0 + Op(1/√n).
Proof: The regularity conditions are to ensure:

(a) n^{−1/2} Σ_{i=1}^n s(Xi, θ0) →d N(0, I(θ0)).

(b) n^{−1} Σ_{i=1}^n h(Xi, θ0) →p E0 h(·, θ0) = H(θ0) = −I(θ0).

(c) s̄(x, θ) ≡ n^{−1} Σ_{i=1}^n s(xi, θ) is differentiable at θ0 for all x.

(d) θ̂ = θ0 + Op(n^{−1/2}).

Here s and h denote the score function and the Hessian of the log likelihood, respectively. By Taylor's expansion,

s̄(x, θ) = s̄(x, θ0) + h̄(x, θ0)(θ − θ0) + o(‖θ − θ0‖).

Since θ̂ satisfies the first order condition n^{−1/2} Σ_{i=1}^n s(Xi, θ̂) = 0, we have

0 = n^{−1/2} Σ_{i=1}^n s(Xi, θ̂) = n^{−1/2} Σ_{i=1}^n s(Xi, θ0) + ( n^{−1} Σ_{i=1}^n h(Xi, θ0) ) √n(θ̂ − θ0) + op(1).

Then

√n(θ̂ − θ0) = −( n^{−1} Σ_{i=1}^n h(Xi, θ0) )^{−1} n^{−1/2} Σ_{i=1}^n s(Xi, θ0) + op(1)
            →d N(0, I(θ0)^{−1}).
6.3.3 MLE-Based Tests

Suppose θ ∈ R^m. For simplicity, let the hypotheses be

H0: θ = θ0,   H1: θ ≠ θ0.

We consider the following three celebrated test statistics:

LR   = 2( Σ_{i=1}^n ℓ(xi, θ̂) − Σ_{i=1}^n ℓ(xi, θ0) ),
Wald = √n(θ̂ − θ0)' I(θ̂) √n(θ̂ − θ0),
LM   = ( n^{−1/2} Σ_{i=1}^n s(xi, θ0) )' I(θ0)^{−1} ( n^{−1/2} Σ_{i=1}^n s(xi, θ0) ).

LR measures the difference between the restricted likelihood and the unrestricted likelihood. Wald measures the difference between the estimated and hypothesized values of the parameter. And LM measures the first derivative of the log likelihood at the hypothesized value of the parameter. Intuitively, if the null hypothesis holds, all three quantities should be small.

For the Wald statistic, we may replace I(θ̂) by (1/n) Σ_{i=1}^n s(Xi, θ̂) s(Xi, θ̂)', by −H(θ̂), or by −(1/n) Σ_{i=1}^n h(Xi, θ̂). The asymptotic distribution of Wald would not be affected.
Theorem 6.3.4 Suppose the conditions in Theorem 6.3.3 hold. We have

LR, Wald, LM →d χ²_m.

Proof: Using Taylor's expansion,

ℓ̄(x, θ) = ℓ̄(x, θ0) + s̄(x, θ0)'(θ − θ0) + (1/2)(θ − θ0)' h̄(x, θ0)(θ − θ0) + o(‖θ − θ0‖²),
s̄(x, θ) = s̄(x, θ0) + h̄(x, θ0)(θ − θ0) + o(‖θ − θ0‖),

where ℓ̄(x, θ) ≡ n^{−1} Σ_{i=1}^n ℓ(xi, θ). Plugging s̄(x, θ0) = s̄(x, θ) − h̄(x, θ0)(θ − θ0) − o(‖θ − θ0‖) into the first equation above, we obtain

ℓ̄(x, θ) = ℓ̄(x, θ0) + s̄(x, θ)'(θ − θ0) − (1/2)(θ − θ0)' h̄(x, θ0)(θ − θ0) + o(‖θ − θ0‖²).

We then have

Σ_{i=1}^n ℓ(Xi, θ̂) − Σ_{i=1}^n ℓ(Xi, θ0) = −(1/2) √n(θ̂ − θ0)' ( (1/n) Σ_{i=1}^n h(Xi, θ0) ) √n(θ̂ − θ0) + op(1),

since (1/n) Σ_{i=1}^n s(Xi, θ̂) = 0 by the first order condition. The asymptotic distribution of LR then follows.

For the Wald statistic, the regularity conditions ensure that I(θ) is continuous at θ = θ0, so that I(θ̂) = I(θ0) + op(1). The asymptotic distribution then follows from √n(θ̂ − θ0) →d N(0, I(θ0)^{−1}).

The asymptotic distribution of the LM statistic follows from n^{−1/2} Σ_{i=1}^n s(Xi, θ0) →d N(0, I(θ0)).
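To make the three statistics concrete, the sketch below (assuming NumPy and SciPy; the Exponential(θ) model with density θe^{−θx}, score s(x, θ) = 1/θ − x, and information I(θ) = 1/θ² is our illustrative choice, not taken from the text) computes LR, Wald, and LM for H0: θ = 1.

    import numpy as np
    from scipy.stats import chi2

    rng = np.random.default_rng(0)
    theta0, n = 1.0, 200
    x = rng.exponential(scale=1.0 / theta0, size=n)    # data generated under H0

    theta_hat = 1.0 / x.mean()                         # MLE of the exponential rate

    def loglik(th):
        return np.sum(np.log(th) - th * x)             # l(x_i, theta) = log(theta) - theta * x_i

    score0 = np.sum(1.0 / theta0 - x)                  # sum of s(x_i, theta0)
    info = lambda th: 1.0 / th**2                      # Fisher information I(theta)

    LR = 2.0 * (loglik(theta_hat) - loglik(theta0))
    Wald = n * (theta_hat - theta0) ** 2 * info(theta_hat)
    LM = score0**2 * theta0**2 / n                     # (n^{-1/2} score)^2 * I(theta0)^{-1}

    print(LR, Wald, LM, chi2.ppf(0.95, df=1))          # all three compared with the chi2_1 critical value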
6.4 Exercises

1. Suppose X1, ..., Xn are i.i.d. Exponential(1), and define X̄n = n^{−1} Σ_{i=1}^n Xi.
   (a) Find the characteristic function of X1.
   (b) Find the characteristic function of Yn = √n(X̄n − 1).
   (c) Find the limiting distribution of Yn.
2. Prove the following statements from the definition of convergence in probability:
   (a) op(1)op(1) = op(1);
   (b) op(1)Op(1) = op(1).
3. Let X1, ..., Xn be a random sample from a N(0, σ²) distribution. Let X̄n be the sample mean and let Sn be the second sample moment Σ_{i=1}^n Xi²/n. Using the asymptotic theory, find an approximation to the distribution of each of the following statistics:
   (a) Sn.
   (b) log Sn.
   (c) X̄n/Sn.
   (d) log(1 + X̄n).
   (e) X̄n²/Sn.
4. A random sample of size n is drawn from a normal population with mean θ and variance θ, i.e., the mean and variance are known to be equal but the common value is not known. Let X̄n = Σ_{i=1}^n Xi/n, Sn² = Σ_{i=1}^n (Xi − X̄n)²/(n − 1), and Tn = Σ_{i=1}^n Xi²/n.
   (a) Calculate π = plim_{n→∞} Tn.
   (b) Find the maximum-likelihood estimator of θ and show that it is a differentiable function of Tn.
   (c) Find the asymptotic distribution of Tn, i.e., find the limit distribution of √n(Tn − π).
   (d) Derive the asymptotic distribution of the ML estimator by using the delta method.
   (e) Check your answer to part (d) by using the information to calculate the asymptotic variance of the ML estimator.
   (f) Compare the asymptotic efficiencies of the ML estimator, the sample mean X̄n, and the sample variance Sn².
References

Bierens, Herman J. (2005), Introduction to the Mathematical and Statistical Foundations of Econometrics, Cambridge University Press.

Chang, Yoosoon & Park, Joon Y. (1997), Advanced Probability and Statistics for Economists, Lecture Notes.

Dudley, R.M. (2003), Real Analysis and Probability (2nd Ed.), Cambridge University Press.

Rosenthal, Jeffrey S. (2006), A First Look at Rigorous Probability Theory (2nd Ed.), World Scientific.

Williams, David (2001), Probability with Martingales, Cambridge University Press.

Su, Liangjun (2007), Advanced Mathematical Statistics (in Chinese), Peking University Press.