Introduction
Universidad de Piura
2020-II
Markov's inequality: for a nonnegative random variable $X$ and $t > 0$,
$$P(X > t) \le \frac{E[X]}{t}$$
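As a numerical sanity check (a sketch, not from the slides: it uses an Exponential(1) variable, for which $E[X]=1$ and $P(X>t)=e^{-t}$ are known in closed form):

```python
import math

# For X ~ Exponential(1): E[X] = 1 and P(X > t) = exp(-t), both exact.
# Markov's inequality says P(X > t) <= E[X] / t for every t > 0.
EX = 1.0

def exceed_prob(t):
    """Exact tail probability P(X > t) for Exponential(1)."""
    return math.exp(-t)

def markov_bound(t):
    """Markov upper bound E[X] / t."""
    return EX / t

# The bound holds at every t, though it is loose for large t.
for t in (0.5, 1.0, 2.0, 5.0, 10.0):
    assert exceed_prob(t) <= markov_bound(t)
```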
Chebyshev's inequality: let $\mu = E[X]$ and $\sigma^2 = V(X)$; then
$$P(|X - \mu| \ge t) \le \frac{\sigma^2}{t^2} \quad \text{and} \quad P(|Z| \ge k) \le \frac{1}{k^2},$$
where $Z = (X - \mu)/\sigma$.
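A numerical check of the same flavor (a sketch, not from the slides: Exponential(1) has mean and variance both equal to 1, so the exact two-sided tail is available in closed form):

```python
import math

# For X ~ Exponential(1): mu = 1 and sigma^2 = 1, so Chebyshev reads
# P(|X - 1| >= t) <= 1 / t^2.
def two_sided_tail(t):
    """Exact P(|X - 1| >= t) for Exponential(1)."""
    upper = math.exp(-(1.0 + t))                              # P(X >= 1 + t)
    lower = 1.0 - math.exp(-(1.0 - t)) if t < 1.0 else 0.0    # P(X <= 1 - t)
    return upper + lower

def chebyshev_bound(t):
    return 1.0 / t**2

for t in (0.5, 1.5, 2.0, 3.0):
    assert two_sided_tail(t) <= chebyshev_bound(t)
```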
Mill's inequality: let $Z \sim N(0, 1)$; then
$$P(|Z| > t) \le \sqrt{\frac{2}{\pi}}\,\frac{e^{-t^2/2}}{t}$$
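Mill's bound can be checked against the exact normal tail, since $P(|Z| > t) = \operatorname{erfc}(t/\sqrt{2})$ (a sketch, not from the slides):

```python
import math

def normal_two_sided_tail(t):
    """Exact P(|Z| > t) for Z ~ N(0,1), via the complementary error function."""
    return math.erfc(t / math.sqrt(2.0))

def mill_bound(t):
    """Mill's inequality: sqrt(2/pi) * exp(-t^2/2) / t."""
    return math.sqrt(2.0 / math.pi) * math.exp(-t**2 / 2.0) / t

# The bound dominates the exact tail for t > 0 and tightens as t grows.
for t in (0.5, 1.0, 2.0, 4.0):
    assert normal_two_sided_tail(t) <= mill_bound(t)
```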
$z_t \to_p z$, or equivalently $z_t - z \to_p 0$.
$z_t \sim x_t$ (asymptotic equivalence), i.e. $z_t = x_t + o_p(1)$.
$$\hat{\beta}_{ols} = (X'X)^{-1}X'y = \beta + (X'X)^{-1}X'u$$
$$E(\hat{\beta}_{ols}) = \beta + E\left[(X'X)^{-1}X'u\right] = \beta$$
so that
$$\widehat{Avar}(\hat{\beta}_{ols}) = \hat{\sigma}^2 (X'X)^{-1}$$
$$Avar(\hat{\beta}_{ols}) = \frac{A^{-1} B A^{-1}}{T}$$
Thus:
$$\widehat{Avar}(\hat{\beta}_{ols}) = \frac{\hat{A}^{-1} \hat{B} \hat{A}^{-1}}{T} = (X'X)^{-1} \left( \sum_{t=1}^{T} e_t^2\, x_t' x_t \right) (X'X)^{-1}$$
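The sandwich formula can be computed directly. A minimal numpy sketch (the data-generating process here is made up for illustration; $x_t$ is treated as a column vector, so the "meat" is $\sum_t e_t^2\, x_t x_t'$):

```python
import numpy as np

rng = np.random.default_rng(0)

T, K = 200, 3
X = np.column_stack([np.ones(T), rng.normal(size=(T, K - 1))])
beta = np.array([1.0, 2.0, -0.5])
# Heteroskedastic errors: variance grows with the second regressor.
u = rng.normal(size=T) * (1.0 + X[:, 1] ** 2)
y = X @ beta + u

XtX_inv = np.linalg.inv(X.T @ X)
beta_hat = XtX_inv @ X.T @ y
e = y - X @ beta_hat

# Heteroskedasticity-robust sandwich: (X'X)^{-1} (sum_t e_t^2 x_t x_t') (X'X)^{-1}
meat = (X * e[:, None] ** 2).T @ X
avar_robust = XtX_inv @ meat @ XtX_inv

# The classical estimate sigma_hat^2 (X'X)^{-1}, for comparison.
sigma2_hat = e @ e / (T - K)
avar_classic = sigma2_hat * XtX_inv
```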
H0 : R β = r
Student's t test
$$t = \frac{\hat{\beta}_{ols}}{\sqrt{\widehat{var}(\hat{\beta}_{ols})}} \sim t_{T-K}$$
Wald test
$$W = (R\hat{\beta}_{ols} - r)' \left( R\, \widehat{Avar}(\hat{\beta}_{ols})\, R' \right)^{-1} (R\hat{\beta}_{ols} - r) \sim \chi^2_Q$$
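A minimal worked Wald statistic for a single restriction (a sketch with made-up data; here $R$ selects the slope and $r = 0$, so $Q = 1$):

```python
import numpy as np

rng = np.random.default_rng(1)

T = 500
X = np.column_stack([np.ones(T), rng.normal(size=T)])
beta_true = np.array([1.0, 0.0])          # slope is truly zero under H0
y = X @ beta_true + rng.normal(size=T)

XtX_inv = np.linalg.inv(X.T @ X)
b = XtX_inv @ X.T @ y
e = y - X @ b
avar = (e @ e / (T - 2)) * XtX_inv        # classical covariance estimate

# H0: R beta = r, with R picking out the slope coefficient.
R = np.array([[0.0, 1.0]])
r = np.array([0.0])
diff = R @ b - r
W = float(diff @ np.linalg.inv(R @ avar @ R.T) @ diff)
# Under H0, W ~ chi^2 with Q = 1 degree of freedom.
```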
F test
$$F = \frac{\tilde{e}'\tilde{e} - e'e}{e'e} \times \frac{T - K_1 - K_2}{K_2} \sim F(K_2,\; T - (K_1 + K_2))$$
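The statistic compares restricted residuals $\tilde{e}$ (excluding the $K_2$ regressors under test) with unrestricted residuals $e$. A sketch with made-up data:

```python
import numpy as np

rng = np.random.default_rng(2)

T, K1, K2 = 300, 2, 2
X1 = np.column_stack([np.ones(T), rng.normal(size=T)])   # restricted regressors
X2 = rng.normal(size=(T, K2))                            # regressors under test
X = np.hstack([X1, X2])
y = X1 @ np.array([1.0, 0.5]) + rng.normal(size=T)       # X2 truly irrelevant

def ssr(Z, y):
    """Residual sum of squares from regressing y on Z."""
    b = np.linalg.lstsq(Z, y, rcond=None)[0]
    e = y - Z @ b
    return e @ e

ssr_restricted = ssr(X1, y)      # tilde-e' tilde-e
ssr_unrestricted = ssr(X, y)     # e'e
F = ((ssr_restricted - ssr_unrestricted) / ssr_unrestricted) \
    * (T - K1 - K2) / K2
# Under H0 (coefficients on X2 are zero), F ~ F(K2, T - (K1 + K2)).
```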
The likelihood function:
$$L(\beta, x, y) = \prod_{t=1}^{T} f(\beta, x_t, y_t)$$
The score, evaluated at the maximum:
$$s(\hat{\beta}) = \left. \frac{\partial l(\beta, x, y)}{\partial \beta} \right|_{\beta = \hat{\beta}}$$
First-order condition:
$$\left. \frac{\partial l(\beta, x, y)}{\partial \beta} \right|_{\beta = \hat{\beta}} = 0$$
Second-order condition:
$$\left. \frac{\partial^2 l(\beta, x, y)}{\partial \beta^2} \right|_{\beta = \hat{\beta}} = H(\hat{\beta}) < 0$$
$$\hat{\beta}_{MLE} = (x'x)^{-1}(x'y)$$
$$\hat{\sigma}^2_{MLE} = \frac{1}{T}(y - x\hat{\beta}_{MLE})'(y - x\hat{\beta}_{MLE}) = \frac{1}{T} e'e$$
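Under Gaussian errors the MLE of $\beta$ coincides with OLS, while $\hat{\sigma}^2_{MLE}$ divides by $T$ rather than $T - K$. A sketch with made-up data:

```python
import numpy as np

rng = np.random.default_rng(3)

T, K = 100, 2
x = np.column_stack([np.ones(T), rng.normal(size=T)])
y = x @ np.array([0.5, 1.5]) + rng.normal(size=T)

# MLE of beta under normality equals the OLS formula.
beta_mle = np.linalg.inv(x.T @ x) @ (x.T @ y)
e = y - x @ beta_mle

sigma2_mle = e @ e / T       # divides by T: biased downward
s2 = e @ e / (T - K)         # the unbiased variant divides by T - K
```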
Information matrix:
$$I_{11} = E\left[ -\frac{\partial^2 l(\beta, \sigma^2, x, y)}{\partial \beta\, \partial \beta'} \right] = \sigma^{-2} E[x'x]$$
$$I_{22} = E\left[ -\frac{\partial^2 l(\beta, \sigma^2, x, y)}{\partial (\sigma^2)^2} \right] = \frac{T}{2\sigma^4}$$
$$I_{12} = I_{21} = E\left[ -\frac{\partial^2 l(\beta, \sigma^2, x, y)}{\partial \beta\, \partial \sigma^2} \right] = 0$$
$$E(\hat{\beta}_{MLE}) = \beta$$
$$E(\hat{\sigma}^2_{MLE}) = \frac{T - K}{T} \sigma^2$$
$$\text{bias} = -\frac{K}{T} \sigma^2$$
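The $(T-K)/T$ factor comes from $e'e = u'Mu$ with $M = I - x(x'x)^{-1}x'$, so that $E(e'e) = \sigma^2\,\mathrm{tr}(M) = \sigma^2(T - K)$. The trace identity can be verified numerically (the regressor matrix here is made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(4)

T, K = 50, 3
x = np.column_stack([np.ones(T), rng.normal(size=(T, K - 1))])

# Annihilator matrix M = I - x (x'x)^{-1} x'; residuals are e = M u.
M = np.eye(T) - x @ np.linalg.inv(x.T @ x) @ x.T

# tr(M) = T - K, which is what makes E(sigma2_mle) = (T - K)/T * sigma^2.
assert abs(np.trace(M) - (T - K)) < 1e-8
# M is symmetric and idempotent, so e'e = u'M'Mu = u'Mu.
assert np.allclose(M @ M, M)
```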
The MLE of $\sigma^2$ is not efficient (it does not attain the minimum asymptotic variance):
$$V(\hat{\sigma}^2) = \frac{2(T - K)}{T^2} \sigma^4$$
$$MSE(\hat{\sigma}^2) = \frac{2(T - K) + K^2}{T^2} \sigma^4$$
The Cramér-Rao bound (adjusted for an estimator with this bias) is:
$$CRLB = \frac{2(T - K)^2 \sigma^4}{T^3}$$
The Cramér-Rao bound is not attained, since $V(\hat{\sigma}^2)/CRLB = T/(T - K) > 1$.
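Plugging illustrative numbers into the three formulas confirms that the variance exceeds the bound by exactly the factor $T/(T-K)$ (values of $T$, $K$, $\sigma^2$ chosen arbitrarily):

```python
# Compare V(sigma2_hat), MSE, and the bias-adjusted Cramer-Rao bound
# for illustrative values of T, K, sigma^2.
T, K, sigma2 = 100, 5, 2.0

var_mle = 2 * (T - K) / T**2 * sigma2**2
mse_mle = (2 * (T - K) + K**2) / T**2 * sigma2**2
crlb = 2 * (T - K) ** 2 * sigma2**2 / T**3

# The bound is not attained: the ratio is exactly T / (T - K) > 1.
ratio = var_mle / crlb
assert abs(ratio - T / (T - K)) < 1e-12
assert var_mle > crlb
assert mse_mle > var_mle      # MSE adds the squared bias K^2 sigma^4 / T^2
```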
Cristian Maravi (UdeP) E1EMA1 2020-II 21 / 21