6 MLE Asymptotics A
Semester 1 20/21
Introduction
▶ The fact that many MLEs are consistent and asymptotically normal is of great importance. In particular, large-sample confidence intervals are feasible.
Theorem: Asymptotic normality of MLE
  θ̂ ∼ N( θ, I(θ)⁻¹/n )
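As a quick numerical illustration of the theorem (my own sketch, not from the slides): for an IID Exponential sample with rate θ, the Fisher information is I(θ) = 1/θ², so the sampling variance of the MLE θ̂ = 1/X̄ should be close to θ²/n for large n.

```python
# Simulation check of theta_hat ~ N(theta, I(theta)^{-1}/n) for the
# Exponential(theta) rate, where I(theta) = 1/theta^2.
import numpy as np

rng = np.random.default_rng(0)
theta, n, reps = 2.0, 500, 4000

# MLE of the rate is 1 / sample mean; repeat over many independent samples.
samples = rng.exponential(scale=1.0 / theta, size=(reps, n))
theta_hat = 1.0 / samples.mean(axis=1)

theoretical_var = theta**2 / n        # I(theta)^{-1} / n
empirical_var = theta_hat.var()
print(theoretical_var, empirical_var)  # the two should be close
```

The histogram of `theta_hat` would likewise look approximately normal around θ, which is what makes the large-sample intervals later in the deck work.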
Asymptotic normality of MLE: the vector version
  θ̂ ∼ N( θ, I(θ)⁻¹/n )

where θ̂ and θ are now vectors and I(θ) is the Fisher information matrix.
Interpretation
The Poisson
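The body of this slide did not survive extraction, so the following is my own sketch of the standard Poisson example: for IID Poisson(λ), the MLE is λ̂ = X̄ and I(λ) = 1/λ, so the theorem gives λ̂ ∼ N(λ, λ/n) approximately.

```python
# Simulation check that the Poisson MLE (the sample mean) has
# approximate variance lam / n = I(lam)^{-1} / n.
import numpy as np

rng = np.random.default_rng(1)
lam, n, reps = 3.0, 400, 5000

lam_hat = rng.poisson(lam, size=(reps, n)).mean(axis=1)

print(lam_hat.mean())  # close to lam
print(lam_hat.var())   # close to lam / n
```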
The normal case (a)
▶ With θ = (µ, σ), I(θ) = diag( 1/σ², 2/σ² ); with θ = (µ, σ²), I(θ) = diag( 1/σ², 1/(2σ⁴) ).
▶ If n is large,

  (X̄, σ̂²)′ ∼ N( (µ, σ²)′, diag( σ²/n, 2σ⁴/n ) )
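A quick numerical check (my own, not from the slides) of the joint large-n approximation for (X̄, σ̂²) under N(µ, σ²): the two components should have variances near σ²/n and 2σ⁴/n, and be nearly uncorrelated.

```python
# Simulate many samples from N(mu, sigma^2) and compare the sampling
# variances of (X_bar, sigma2_hat) with the diagonal of I(theta)^{-1}/n.
import numpy as np

rng = np.random.default_rng(2)
mu, sigma2, n, reps = 1.0, 4.0, 500, 4000

x = rng.normal(mu, np.sqrt(sigma2), size=(reps, n))
xbar = x.mean(axis=1)
s2_mle = x.var(axis=1)  # ML estimator divides by n, as np.var does by default

print(xbar.var(), sigma2 / n)            # ~ sigma^2 / n
print(s2_mle.var(), 2 * sigma2**2 / n)   # ~ 2 sigma^4 / n
print(np.cov(xbar, s2_mle)[0, 1])        # ~ 0: the off-diagonal is zero
```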
The HWE (Rice page 283)
▶ Let W₁, . . . , Wₙ be IID Multinomial(1, p), where p = ((1 − θ)², 2θ(1 − θ), θ²). Wᵢ takes values (1,0,0), (0,1,0) and (0,0,1) with these probabilities. W₁ + · · · + Wₙ = X ∼ Multinomial(n, p).
▶ θ̂ ∼ N( θ, I(θ)⁻¹ ), where I(θ) here denotes the Fisher information of the whole sample.
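In Rice's Hardy-Weinberg example the MLE has the closed form θ̂ = (X₂ + 2X₃)/(2n) (the observed allele frequency), and the per-observation information is 2/(θ(1 − θ)), so Var(θ̂) ≈ θ(1 − θ)/(2n). The following sketch (mine) checks this by simulation.

```python
# Simulate multinomial genotype counts under HWE and check the
# normal approximation for theta_hat = (X2 + 2*X3) / (2n).
import numpy as np

rng = np.random.default_rng(3)
theta, n, reps = 0.3, 1000, 4000
p = [(1 - theta)**2, 2 * theta * (1 - theta), theta**2]

counts = rng.multinomial(n, p, size=reps)   # each row is (X1, X2, X3)
theta_hat = (counts[:, 1] + 2 * counts[:, 2]) / (2 * n)

# The theorem predicts Var(theta_hat) ~ theta * (1 - theta) / (2 * n).
print(theta_hat.mean(), theta)
print(theta_hat.var(), theta * (1 - theta) / (2 * n))
```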
Sufficient conditions for theorem
▶ | ∂³/∂θ³ log f(x|θ) | < M(x) on (θ − δ, θ + δ), with E_θ(M) < K, a constant.
Sufficient conditions for theorem (continued)
▶ For the vector version, similar conditions are required, and I(θ) is assumed to be invertible.
Random interval
Confidence interval
▶ The approximate SE √( I(θ)⁻¹/n ) is estimated by √( I(θ̂)⁻¹/n ) (the bootstrap plug-in idea).
CI for Poisson rate λ
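The body of this slide is missing, so here is my own minimal sketch of the standard approximate 95% interval λ̂ ± 1.96 √(λ̂/n), which uses the estimated SE √(I(λ̂)⁻¹/n) with I(λ) = 1/λ.

```python
# Approximate large-sample CI for the Poisson rate from an IID sample.
import numpy as np

def poisson_ci(x, z=1.96):
    """Approximate 95% CI for the Poisson rate lambda."""
    lam_hat = np.mean(x)             # the MLE
    se = np.sqrt(lam_hat / len(x))   # estimated SE: sqrt(I(lam_hat)^{-1} / n)
    return lam_hat - z * se, lam_hat + z * se

rng = np.random.default_rng(4)
x = rng.poisson(3.0, size=400)
lo, hi = poisson_ci(x)
print(lo, hi)  # an interval that should usually cover the true rate 3.0
```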
CI for µ and σ from N(µ, σ 2 )
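Again the slide body is missing; based on the information matrix diag(1/σ², 2/σ²) for θ = (µ, σ) shown earlier, a sketch (mine) of the two approximate 95% intervals is µ̂ ± 1.96 σ̂/√n and σ̂ ± 1.96 σ̂/√(2n).

```python
# Approximate large-sample CIs for mu and sigma from an IID N(mu, sigma^2) sample,
# using the diagonal of I(theta)^{-1}/n with theta = (mu, sigma).
import numpy as np

def normal_cis(x, z=1.96):
    n = len(x)
    mu_hat = np.mean(x)
    sigma_hat = np.sqrt(np.var(x))   # ML estimate (np.var divides by n)
    mu_ci = (mu_hat - z * sigma_hat / np.sqrt(n),
             mu_hat + z * sigma_hat / np.sqrt(n))
    sigma_ci = (sigma_hat - z * sigma_hat / np.sqrt(2 * n),
                sigma_hat + z * sigma_hat / np.sqrt(2 * n))
    return mu_ci, sigma_ci

rng = np.random.default_rng(5)
mu_ci, sigma_ci = normal_cis(rng.normal(10.0, 2.0, size=500))
print(mu_ci, sigma_ci)
```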
CI for µ and σ 2 from N(µ, σ 2 )
The bivariate normal distribution
▶ The density on Rice page 81 can be written as

  f(x) = (1 / (2π|Σ|^(1/2))) exp( −(1/2)(x − µ)′ Σ⁻¹ (x − µ) )

  where x = (x₁, x₂)′, µ = (µ₁, µ₂)′, and

  Σ = ( σ₁²     ρσ₁σ₂
        ρσ₁σ₂   σ₂²  )

  We write X ∼ N(µ, Σ).
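A small sanity check of the matrix form (my own illustration): when ρ = 0 the quadratic form separates, so the bivariate density must reduce to the product of the two univariate normal densities.

```python
# Evaluate the matrix form of the bivariate normal density and compare it,
# in the rho = 0 case, with the product of the univariate marginal densities.
import numpy as np

def bvn_density(x, mu, Sigma):
    d = x - mu
    quad = d @ np.linalg.inv(Sigma) @ d
    return np.exp(-0.5 * quad) / (2 * np.pi * np.sqrt(np.linalg.det(Sigma)))

def univ_density(x, mu, sigma2):
    return np.exp(-0.5 * (x - mu)**2 / sigma2) / np.sqrt(2 * np.pi * sigma2)

mu = np.array([1.0, -2.0])
Sigma = np.diag([4.0, 9.0])   # rho = 0, so Sigma is diagonal
x = np.array([0.5, -1.0])

print(bvn_density(x, mu, Sigma))
print(univ_density(0.5, 1.0, 4.0) * univ_density(-1.0, -2.0, 9.0))
# the two printed values agree
```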
Linear regression
  Yᵢ = β₁xᵢ₁ + · · · + βₚxᵢₚ + εᵢ

where
▶ X is a fixed known n × p matrix.
▶ the p × 1 vector β is fixed unknown.
▶ the n × 1 error vector ε ∼ N(0, σ²Iₙ), with σ² fixed unknown.
▶ What is the joint distribution of the n × 1 vector Y? More compactly, we can write

  Y = Xβ + ε

▶ Given realisation y of Y, how can we get ML estimates of β and σ²?
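Since Y = Xβ + ε with ε ∼ N(0, σ²Iₙ), the joint distribution is Y ∼ N(Xβ, σ²Iₙ). The well-known closed forms for the ML estimates (standard results, not derived on this slide) are β̂ = (X′X)⁻¹X′y and σ̂² = ‖y − Xβ̂‖²/n; note the ML estimator of σ² divides by n, not n − p. A sketch under simulated data:

```python
# ML estimates for the linear model Y = X beta + eps, eps ~ N(0, sigma^2 I_n):
# beta_hat solves the normal equations; sigma2_hat = RSS / n.
import numpy as np

rng = np.random.default_rng(6)
n, p = 200, 3
X = rng.normal(size=(n, p))
beta = np.array([1.0, -2.0, 0.5])
y = X @ beta + rng.normal(0.0, 1.0, size=n)   # true sigma = 1

beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)  # least squares = ML for beta
resid = y - X @ beta_hat
sigma2_hat = resid @ resid / n                    # ML estimate of sigma^2

print(beta_hat)    # close to (1, -2, 0.5)
print(sigma2_hat)  # close to 1
```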