2019 Comp Q4 - 5
Since $u_t \sim N(0, \sigma^2 x_t^2)$, the density of $u_t$ is

$$f(u_t) = (2\pi\sigma^2 x_t^2)^{-1/2} \exp\left[-\frac{1}{2\sigma^2}\left(\frac{u_t}{x_t}\right)^2\right].$$
The likelihood is the product of the densities:

$$L(\alpha, \sigma^2) = \prod_{t=1}^{T} f(u_t) = \prod_{t=1}^{T} (2\pi\sigma^2 x_t^2)^{-1/2} \exp\left[-\frac{1}{2\sigma^2}\left(\frac{u_t}{x_t}\right)^2\right].$$
Substituting $u_t = y_t - \alpha x_t$, we get:

$$L(\alpha, \sigma^2) = \prod_{t=1}^{T} (2\pi\sigma^2 x_t^2)^{-1/2} \exp\left[-\frac{1}{2\sigma^2}\left(\frac{y_t - \alpha x_t}{x_t}\right)^2\right].$$
Taking logs gives the log-likelihood:

$$\ln L(\alpha, \sigma^2) = -\frac{T}{2}\ln(2\pi) - \frac{T}{2}\ln(\sigma^2) - \frac{1}{2}\sum_{t=1}^{T}\ln(x_t^2) - \frac{1}{2\sigma^2}\sum_{t=1}^{T}\frac{(y_t - \alpha x_t)^2}{x_t^2}.$$
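As a quick sanity check (not part of the original solution), the log-likelihood above can be coded directly; the function and variable names here are my own:

```python
import numpy as np

def log_likelihood(alpha, sigma2, y, x):
    """ln L(alpha, sigma^2) for y_t = alpha*x_t + u_t with Var(u_t) = sigma^2 * x_t^2."""
    T = len(y)
    resid = (y - alpha * x) / x  # u_t / x_t, i.e. standardized residual scale
    return (-T / 2 * np.log(2 * np.pi)
            - T / 2 * np.log(sigma2)
            - 0.5 * np.sum(np.log(x**2))       # the -(1/2) * sum ln(x_t^2) term
            - np.sum(resid**2) / (2 * sigma2))  # the quadratic term
```

Evaluating this at the closed-form estimators derived below should give a larger value than at any perturbed parameters.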
To find the maximum likelihood estimators, we take the partial derivatives with respect to α and σ 2
and set them to zero.
$$\frac{\partial \ln L}{\partial \alpha} = \frac{1}{\sigma^2}\sum_{t=1}^{T}\frac{y_t - \alpha x_t}{x_t} = 0 \quad\Longrightarrow\quad \sum_{t=1}^{T}\frac{y_t - \alpha x_t}{x_t} = 0.$$
https://chatgpt.com/c/0e332593-feba-456f-a623-4a5a45bb8bf4 1/3
7/7/24, 2:57 AM Linear Regression Properties
$$\sum_{t=1}^{T}\frac{y_t}{x_t} = \alpha \sum_{t=1}^{T} 1 = \alpha T \quad\Longrightarrow\quad \hat{\alpha} = \frac{1}{T}\sum_{t=1}^{T}\frac{y_t}{x_t}.$$
$$\frac{\partial \ln L}{\partial \sigma^2} = -\frac{T}{2\sigma^2} + \frac{1}{2\sigma^4}\sum_{t=1}^{T}\frac{(y_t - \alpha x_t)^2}{x_t^2} = 0.$$
$$\frac{T}{2\sigma^2} = \frac{1}{2\sigma^4}\sum_{t=1}^{T}\frac{(y_t - \alpha x_t)^2}{x_t^2} \quad\Longrightarrow\quad \sigma^2 = \frac{1}{T}\sum_{t=1}^{T}\frac{(y_t - \alpha x_t)^2}{x_t^2}.$$
The maximum likelihood estimators are therefore:

$$\hat{\alpha} = \frac{1}{T}\sum_{t=1}^{T}\frac{y_t}{x_t}, \qquad \hat{\sigma}^2 = \frac{1}{T}\sum_{t=1}^{T}\frac{(y_t - \hat{\alpha} x_t)^2}{x_t^2}.$$
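A minimal simulation sketch (my own, not from the source) confirming that the closed-form estimators coincide with a direct numerical maximization of the log-likelihood; the data-generating values here are arbitrary:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(42)
T = 200
x = rng.uniform(0.5, 3.0, T)
alpha_true, sigma_true = 1.5, 0.8
y = alpha_true * x + x * rng.normal(0.0, sigma_true, T)  # Var(u_t) = sigma^2 * x_t^2

# Closed-form estimators from the derivation above
alpha_hat = np.mean(y / x)
sigma2_hat = np.mean(((y - alpha_hat * x) / x) ** 2)

# Negative log-likelihood (additive constants dropped)
def neg_ll(params):
    a, s2 = params
    r = (y - a * x) / x
    return T / 2 * np.log(s2) + np.sum(r**2) / (2 * s2)

# Numerical maximization should land on the same point
res = minimize(neg_ll, x0=[1.0, 1.0], bounds=[(None, None), (1e-6, None)])
```

The optimizer's solution `res.x` matches `(alpha_hat, sigma2_hat)` to numerical tolerance.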
where $\varepsilon_t$ is a white noise process. Assuming $-1 < \theta_2 < 0$ and without taking a first difference, answer the following:
For h = 1:
For h = 2:
Since $\hat{\varepsilon}_{t+1|t} = 0$:
For h = k:
$$\mathrm{Var}(e_{t+1}) = \mathrm{Var}(\varepsilon_{t+1}) = \sigma^2.$$
For a more general form, the variance would include terms related to all the ϕ and θ parameters,
capturing the structure of the process up to h steps ahead.
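The model equation itself is not shown in this excerpt, so as an illustration only, assume an MA(2) process $y_t = \varepsilon_t + \theta_1\varepsilon_{t-1} + \theta_2\varepsilon_{t-2}$ (a hypothetical choice consistent with the $\theta_2$ restriction above). The $h$-step forecast error variance is then $\sigma^2$ times the sum of squared $\psi$-weights up to lag $h-1$:

```python
def ma2_forecast_error_var(h, th1, th2, sigma2):
    """h-step forecast error variance for an assumed MA(2) process.

    h = 1 gives sigma^2; h = 2 adds th1^2; h >= 3 adds th2^2 as well,
    after which the variance is constant (the MA(2) has memory 2).
    """
    psi = [1.0, th1, th2]  # psi-weights of the MA(2)
    return sigma2 * sum(p**2 for p in psi[:min(h, 3)])
```

For example, `ma2_forecast_error_var(1, th1, th2, s2)` reproduces the one-step result $\mathrm{Var}(e_{t+1}) = \sigma^2$ stated above.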