Slides Estimation PDF
Estimation
References: Bernard Lindgren, Statistical Theory, 8.1, 8.2, 7.7, 7.10, 2.8, 2.3, 8.7 and Arthur
Goldberger, A Course in Econometrics, 11.1-11.3, 12.1, 12.2
November 2019
Definition
[B.1] An estimator θ̂(X1 , ..., XN ) (denoted by θ̂) is a regular function of
(X1 , ..., XN ).
Definition
[B.2] An estimator θ̂ of θ is unbiased if

E[θ̂] − θ = 0,

where the difference E[θ̂] − θ is called the bias.
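As a quick numerical illustration (a Monte Carlo sketch with assumed values µ = 5, σ = 2, n = 10, not from the slides): the sample mean is unbiased, while the uncorrected sample variance has bias ≈ −σ²/n.

```python
import numpy as np

rng = np.random.default_rng(0)
mu, sigma, n, reps = 5.0, 2.0, 10, 200_000

# Draw many samples of size n and average each estimator across replications.
samples = rng.normal(mu, sigma, size=(reps, n))
mean_hat = samples.mean(axis=1)         # sample mean, unbiased for mu
var_hat = samples.var(axis=1, ddof=0)   # uncorrected variance, biased for sigma^2

bias_mean = mean_hat.mean() - mu        # ~ 0
bias_var = var_hat.mean() - sigma**2    # ~ -sigma^2 / n = -0.4
print(bias_mean, bias_var)
```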
Definition
[B.3] a) If θ̂ is an unbiased estimator of θ, we say that θ̂ is efficient if its variance is minimal among unbiased estimators.
b) We say that θ̂ is the BUE (best unbiased estimator) if it is unbiased and efficient.
c) We say that θ̂ is the BLUE (best linear unbiased estimator) if it is unbiased, linear in (X1, ..., XN), and of minimal variance among the linear unbiased estimators.
For the sample mean X̄n:

E[X̄n] = µ,   var[X̄n] = var[X]/n,

where in this case var[X̄n] is also the precision in terms of the mean squared error (MSE), since X̄n is unbiased. Similarly,

E[σ̃n²] = σ²,   var[σ̃n²] = (µ4 − σ⁴)/n,   where µ4 = E[(X − E[X])⁴].
Definition
[A.4] Corrected sample variance:

sn² = (n/(n − 1)) σ̃n²
Theorem
[A.1]
var[sn²] = 2σ⁴/(n − 1) + (µ4 − 3σ⁴)/n.
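For a normal population µ4 = 3σ⁴, so the formula reduces to var[sn²] = 2σ⁴/(n − 1); this can be checked by simulation (a sketch with assumed values σ = 1, n = 8):

```python
import numpy as np

rng = np.random.default_rng(1)
sigma, n, reps = 1.0, 8, 400_000

x = rng.normal(0.0, sigma, size=(reps, n))
s2 = x.var(axis=1, ddof=1)        # corrected sample variance s_n^2

# Normal population: mu4 = 3 sigma^4, so var[s_n^2] = 2 sigma^4 / (n - 1).
theory = 2 * sigma**4 / (n - 1)   # = 2/7 ≈ 0.2857
print(s2.var(), theory)
```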
Definition
[A.3] Let X1 , ..., Xn be a random sample from the density f with X1 ∈ Lr . We
define
1) The r-th sample moment
Mr = (1/n) ∑ᵢ₌₁ⁿ Xᵢʳ
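In code the r-th sample moment is a one-liner (a minimal sketch; the helper name `sample_moment` is ours):

```python
import numpy as np

def sample_moment(x, r):
    """r-th sample moment M_r = (1/n) * sum(x_i ** r)."""
    x = np.asarray(x, dtype=float)
    return (x ** r).mean()

x = [1.0, 2.0, 3.0, 4.0]
print(sample_moment(x, 1))  # 2.5  (the sample mean)
print(sample_moment(x, 2))  # 7.5
```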
Suppose that the distribution of X belongs to a known family except for the
parameter of interest θ.
Ex: For θ > 0, P(X = k, θ) = e^(−θ) θᵏ / k! (Poisson) or fX(x, θ) = θ e^(−θx) 1x≥0 (exponential).
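Both example families are straightforward to evaluate directly (a sketch; the function names are ours):

```python
import math

def poisson_pmf(k, theta):
    """P(X = k, theta) = e^(-theta) * theta^k / k!"""
    return math.exp(-theta) * theta**k / math.factorial(k)

def exp_pdf(x, theta):
    """f_X(x, theta) = theta * e^(-theta x) for x >= 0, else 0."""
    return theta * math.exp(-theta * x) if x >= 0 else 0.0

print(poisson_pmf(2, 1.0))  # e^-1 / 2 ≈ 0.1839
print(exp_pdf(0.5, 2.0))    # 2 e^-1 ≈ 0.7358
```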
We suppose the following is known:
Proposition
[B.1]a) The true parameter fulfills
Definition
[B.4] a) Likelihood function: LN(θ) = ∏ᵢ₌₁ᴺ f(Xi, θ).

Rk: We also write the log-likelihood LN(x, θ) = ∑ᵢ₌₁ᴺ log(f(xi, θ)).
Christophe Chorro ([email protected]), Proba/Stats QEM: B. Estimation, November 2019
Maximum Likelihood estimation
∑ᵢ₌₁ᴺ ∂log(f(Xi, θ))/∂θ = 0,

where the left-hand side is denoted z(X, θ).
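For instance, in the exponential model the first-order condition is ∑(1/θ − Xi) = 0, solvable in closed form (θ̂ = N/∑Xi) or numerically; a sketch with made-up data and a hand-rolled bisection:

```python
def score(theta, xs):
    """z(X, theta) for the exponential model: N/theta - sum(x_i)."""
    return len(xs) / theta - sum(xs)

def solve_bisect(f, lo, hi, tol=1e-12):
    # Assumes f(lo) > 0 > f(hi), true for the decreasing exponential score.
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if f(mid) > 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

xs = [0.2, 1.5, 0.7, 0.6]
theta_hat = solve_bisect(lambda t: score(t, xs), 1e-6, 100.0)
print(theta_hat, len(xs) / sum(xs))  # both ≈ 1.3333
```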
Definition
[B.5] We define the Fisher information of the model by

I(θ0) = Var[∂LN(θ0)/∂θ] = E[(∂LN(θ0)/∂θ)²],

where ∂LN(θ0)/∂θ is called the score; the two expressions coincide because the score has mean zero.
Proposition
[B.2]

I(θ0) = −E[∂²LN(θ0)/∂θ²]
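Proposition B.2 (the information equality) can be checked by simulation; a sketch for a single Poisson observation with an assumed θ0 = 3, where all three quantities should be close to 1/θ0:

```python
import numpy as np

rng = np.random.default_rng(2)
theta, reps = 3.0, 1_000_000
k = rng.poisson(theta, size=reps)

score = k / theta - 1.0   # d/dtheta log f(k, theta) for the Poisson pmf
hess = -k / theta**2      # second derivative of log f(k, theta)

# Var[score], E[score^2] and -E[hess] all estimate I(theta) = 1/theta.
print(score.var(), (score**2).mean(), -hess.mean(), 1 / theta)
```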
Theorem
[B.1] (Cramer-Rao Lower Bound) If T(X1, ..., XN) is an unbiased estimator of θ0, then

var[T(X1, ..., XN)] ≥ 1/I(θ0).
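As a worked example (ours, not from the slides): in the Poisson model the sample mean attains the bound.

```latex
% Poisson model: log f(k, theta) = -theta + k*log(theta) - log(k!)
\[
\frac{\partial^2 \log f(k,\theta)}{\partial\theta^2} = -\frac{k}{\theta^2},
\qquad
I(\theta_0) = \sum_{i=1}^{N}\frac{\mathbb{E}[X_i]}{\theta_0^2} = \frac{N}{\theta_0},
\qquad
\operatorname{var}[\bar{X}_N] = \frac{\theta_0}{N} = \frac{1}{I(\theta_0)},
\]
```

so X̄N is unbiased with variance equal to the Cramer-Rao bound: it is the BUE (and, consistently with Theorem B.2, it is the MLE).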
Theorem
[B.2] (Admitted) If a BUE exists, it is the MLE.
Theorem
[B.3] (Cramer-Rao Lower Bound) If T(X1, ..., XN) is an unbiased estimator of θ0 ∈ Rᵈ, then var[T(X1, ..., XN)] − I(θ0)⁻¹ is positive semidefinite, where

I(θ0) = −E[(∂²LN(θ0)/∂θi∂θj)1≤i,j≤d],

minus the expectation of the Hessian matrix of LN.
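As an example of the multivariate case (ours, not from the slides): for N i.i.d. observations from N(µ, σ²) with θ = (µ, σ²),

```latex
\[
I(\theta_0) = N
\begin{pmatrix}
\dfrac{1}{\sigma^2} & 0 \\[2pt]
0 & \dfrac{1}{2\sigma^4}
\end{pmatrix},
\qquad
I(\theta_0)^{-1} =
\begin{pmatrix}
\dfrac{\sigma^2}{N} & 0 \\[2pt]
0 & \dfrac{2\sigma^4}{N}
\end{pmatrix},
\]
```

so any unbiased estimator of µ has variance at least σ²/N, a bound attained by X̄N.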
• The likelihood function remains LN(x, θ) = ∏ᵢ₌₁ᴺ f(xi, θ) and may be seen as the density of (X1, ..., XN) given θ.