2019 Comp Q4 - 5

The document discusses the derivation of maximum likelihood estimators for a linear regression model, specifically focusing on the parameters α and σ². It also addresses a data generating process for forecasting and calculating forecast errors and their variances. The document provides mathematical formulations and steps for obtaining these estimators and forecasts.



Let's address the questions in detail.

4. Consider the model

$$y_t = \alpha x_t + u_t, \qquad t = 1, 2, \dots, T$$

where $E(u_t) = 0$, $E(u_t^2) = \sigma^2 x_t^2$, and $E(u_s u_t) = 0$ if $s \neq t$, for all $s$ and $t$. The density function for $u_t$ is

$$f(u_t) = (2\pi \sigma^2 x_t^2)^{-1/2} \exp\left[-\frac{1}{2\sigma^2}\left(\frac{u_t}{x_t}\right)^2\right].$$

Derive the maximum likelihood estimators of $\alpha$ and $\sigma^2$.

Maximum Likelihood Estimation:

The likelihood function for the sample is:

$$L(\alpha, \sigma^2) = \prod_{t=1}^{T} f(u_t) = \prod_{t=1}^{T} (2\pi \sigma^2 x_t^2)^{-1/2} \exp\left[-\frac{1}{2\sigma^2}\left(\frac{u_t}{x_t}\right)^2\right].$$

Substituting $u_t = y_t - \alpha x_t$, we get:

$$L(\alpha, \sigma^2) = \prod_{t=1}^{T} (2\pi \sigma^2 x_t^2)^{-1/2} \exp\left[-\frac{1}{2\sigma^2}\left(\frac{y_t - \alpha x_t}{x_t}\right)^2\right].$$

Taking the natural logarithm to get the log-likelihood function:

$$\ln L(\alpha, \sigma^2) = -\frac{T}{2}\ln(2\pi) - \frac{T}{2}\ln(\sigma^2) - \frac{1}{2}\sum_{t=1}^{T}\ln(x_t^2) - \frac{1}{2\sigma^2}\sum_{t=1}^{T}\left(\frac{y_t - \alpha x_t}{x_t}\right)^2.$$

Simplifying the log-likelihood function:

$$\ln L(\alpha, \sigma^2) = -\frac{T}{2}\ln(2\pi) - \frac{T}{2}\ln(\sigma^2) - \sum_{t=1}^{T}\ln(x_t) - \frac{1}{2\sigma^2}\sum_{t=1}^{T}\frac{(y_t - \alpha x_t)^2}{x_t^2}.$$

To find the maximum likelihood estimators, we take the partial derivatives with respect to $\alpha$ and $\sigma^2$ and set them to zero.

Partial derivative with respect to $\alpha$:

$$\frac{\partial \ln L}{\partial \alpha} = -\frac{1}{\sigma^2}\sum_{t=1}^{T}\frac{(y_t - \alpha x_t)(-x_t)}{x_t^2} = \frac{1}{\sigma^2}\sum_{t=1}^{T}\frac{y_t - \alpha x_t}{x_t}.$$


Setting the partial derivative to zero:

$$\sum_{t=1}^{T}\frac{y_t - \alpha x_t}{x_t} = 0.$$

$$\sum_{t=1}^{T}\frac{y_t}{x_t} = \alpha \sum_{t=1}^{T} 1 = \alpha T.$$

$$\alpha = \frac{1}{T}\sum_{t=1}^{T}\frac{y_t}{x_t}.$$
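One way to interpret this estimator: dividing the model through by $x_t$ gives a homoskedastic regression whose intercept is $\alpha$,

$$\frac{y_t}{x_t} = \alpha + \frac{u_t}{x_t}, \qquad \text{Var}\!\left(\frac{u_t}{x_t}\right) = \frac{\sigma^2 x_t^2}{x_t^2} = \sigma^2,$$

so the MLE of $\alpha$ is simply the sample mean of the transformed observations $y_t/x_t$.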

Partial derivative with respect to $\sigma^2$:

$$\frac{\partial \ln L}{\partial \sigma^2} = -\frac{T}{2\sigma^2} + \frac{1}{2\sigma^4}\sum_{t=1}^{T}\frac{(y_t - \alpha x_t)^2}{x_t^2}.$$

Setting the partial derivative to zero:

$$-\frac{T}{2\sigma^2} + \frac{1}{2\sigma^4}\sum_{t=1}^{T}\frac{(y_t - \alpha x_t)^2}{x_t^2} = 0.$$

$$\frac{T}{2\sigma^2} = \frac{1}{2\sigma^4}\sum_{t=1}^{T}\frac{(y_t - \alpha x_t)^2}{x_t^2}.$$

$$\sigma^2 = \frac{1}{T}\sum_{t=1}^{T}\frac{(y_t - \alpha x_t)^2}{x_t^2}.$$

Thus, the maximum likelihood estimators are:


$$\hat{\alpha} = \frac{1}{T}\sum_{t=1}^{T}\frac{y_t}{x_t}, \qquad \hat{\sigma}^2 = \frac{1}{T}\sum_{t=1}^{T}\frac{(y_t - \hat{\alpha} x_t)^2}{x_t^2}.$$
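As a numerical sanity check, here is a minimal Python sketch (the synthetic data and variable names are made up for illustration; it assumes NumPy and SciPy are available) that computes the closed-form estimators and compares them with a direct numerical maximization of the log-likelihood:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

# Synthetic data from the model y_t = alpha * x_t + u_t with Var(u_t) = sigma^2 * x_t^2
T, alpha_true, sigma2_true = 500, 2.0, 0.5
x = rng.uniform(1.0, 3.0, size=T)              # keep x_t > 0 so ln(x_t) is defined
u = rng.normal(0.0, np.sqrt(sigma2_true) * x)  # sd of u_t is sigma * x_t
y = alpha_true * x + u

# Closed-form MLEs derived above
alpha_hat = np.mean(y / x)
sigma2_hat = np.mean((y - alpha_hat * x) ** 2 / x ** 2)

# Numerical check: minimize the negative log-likelihood over (alpha, log sigma^2)
def negloglik(params):
    a, log_s2 = params
    s2 = np.exp(log_s2)  # reparameterize to enforce sigma^2 > 0
    return 0.5 * np.sum(np.log(2 * np.pi * s2 * x**2)
                        + (y - a * x) ** 2 / (s2 * x**2))

res = minimize(negloglik, x0=[0.0, 0.0])
print(alpha_hat, sigma2_hat)       # closed-form estimates
print(res.x[0], np.exp(res.x[1]))  # numerical estimates; should agree closely
```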

5. Consider the following data generating process

$$Y_t = \phi_1 Y_{t-1} + \phi_2 Y_{t-2} + \varepsilon_t + \theta_1 \varepsilon_{t-1} + \theta_2 \varepsilon_{t-2} + \theta_3 \varepsilon_{t-3}$$

where $\varepsilon_t$ is a white noise process. Assuming $-1 < \theta_2 < 0$ and without taking a first difference, answer the following:

(a) Find the $h$-step-ahead forecast for $Y$ for $h = 1, 2, \dots, k$.


The h-step-ahead forecast is given by:

$$\hat{Y}_{t+h|t} = E[Y_{t+h} \mid \Omega_t]$$

where $\Omega_t$ is the information set at time $t$.

For $h = 1$:

$$\hat{Y}_{t+1|t} = \phi_1 Y_t + \phi_2 Y_{t-1} + \theta_1 \varepsilon_t + \theta_2 \varepsilon_{t-1} + \theta_3 \varepsilon_{t-2}$$

For $h = 2$:

$$\hat{Y}_{t+2|t} = \phi_1 \hat{Y}_{t+1|t} + \phi_2 Y_t + \theta_1 \hat{\varepsilon}_{t+1|t} + \theta_2 \varepsilon_t + \theta_3 \varepsilon_{t-1}$$

Since $\hat{\varepsilon}_{t+1|t} = 0$:

$$\hat{Y}_{t+2|t} = \phi_1 (\phi_1 Y_t + \phi_2 Y_{t-1} + \theta_1 \varepsilon_t + \theta_2 \varepsilon_{t-1} + \theta_3 \varepsilon_{t-2}) + \phi_2 Y_t + \theta_2 \varepsilon_t + \theta_3 \varepsilon_{t-1}$$

For general $h = k$:

$$\hat{Y}_{t+k|t} = \phi_1 \hat{Y}_{t+k-1|t} + \phi_2 \hat{Y}_{t+k-2|t} + \theta_1 \hat{\varepsilon}_{t+k-1|t} + \theta_2 \hat{\varepsilon}_{t+k-2|t} + \theta_3 \hat{\varepsilon}_{t+k-3|t}$$

with the conventions $\hat{Y}_{t+j|t} = Y_{t+j}$ and $\hat{\varepsilon}_{t+j|t} = \varepsilon_{t+j}$ for $j \leq 0$, and $\hat{\varepsilon}_{t+j|t} = 0$ for $j \geq 1$. For $k \geq 4$ all moving-average terms drop out and the recursion reduces to $\hat{Y}_{t+k|t} = \phi_1 \hat{Y}_{t+k-1|t} + \phi_2 \hat{Y}_{t+k-2|t}$. A code sketch of this recursion follows below.

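Here is a minimal Python sketch of the forecast recursion (hypothetical function and argument names; it assumes the most recent observations and shocks at time $t$ are known):

```python
import numpy as np

def arma23_forecast(y_hist, eps_hist, phi, theta, k):
    """k-step-ahead forecasts for
    Y_t = phi1 Y_{t-1} + phi2 Y_{t-2} + eps_t + theta1 eps_{t-1}
          + theta2 eps_{t-2} + theta3 eps_{t-3}.
    y_hist: at least the last 2 observed Y's (most recent last).
    eps_hist: at least the last 3 shocks (most recent last).
    """
    y = list(y_hist)    # known values, extended with forecasts
    e = list(eps_hist)  # known shocks, extended with zeros
    forecasts = []
    for _ in range(k):
        y_next = (phi[0] * y[-1] + phi[1] * y[-2]
                  + theta[0] * e[-1] + theta[1] * e[-2] + theta[2] * e[-3])
        y.append(y_next)
        e.append(0.0)   # E[eps_{t+j} | Omega_t] = 0 for j >= 1
        forecasts.append(y_next)
    return np.array(forecasts)

# Example with made-up values
print(arma23_forecast(y_hist=[1.2, 0.8], eps_hist=[0.1, -0.2, 0.05],
                      phi=[0.5, 0.3], theta=[0.4, -0.2, 0.1], k=5))
```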
(b) Find the $h$-step-ahead forecast error for $Y$ for $h = 1, 2, \dots, k$.


The forecast error for $h = 1$:

$$e_{t+1} = Y_{t+1} - \hat{Y}_{t+1|t} = \varepsilon_{t+1}$$

The forecast error for $h = 2$:

$$e_{t+2} = Y_{t+2} - \hat{Y}_{t+2|t} = \varepsilon_{t+2} + (\phi_1 + \theta_1)\varepsilon_{t+1}$$

since the shock $\varepsilon_{t+1}$ enters $Y_{t+2}$ both directly, through $\theta_1 \varepsilon_{t+1}$, and indirectly, through $\phi_1 Y_{t+1}$.

The forecast error for $h = k$:

$$e_{t+k} = Y_{t+k} - \hat{Y}_{t+k|t} = \sum_{j=0}^{k-1} \psi_j \varepsilon_{t+k-j}$$

where the $\psi_j$ are the moving-average (Wold) weights of the process: $\psi_0 = 1$, $\psi_1 = \phi_1 + \theta_1$, $\psi_2 = \phi_1 \psi_1 + \phi_2 + \theta_2$, $\psi_3 = \phi_1 \psi_2 + \phi_2 \psi_1 + \theta_3$, and $\psi_j = \phi_1 \psi_{j-1} + \phi_2 \psi_{j-2}$ for $j \geq 4$.

(c) Find the $h$-step-ahead forecast error variance for $Y$ for $h = 1, 2, \dots, k$.


The forecast error variance for $h = 1$:

$$\text{Var}(e_{t+1}) = \text{Var}(\varepsilon_{t+1}) = \sigma^2$$

The forecast error variance for $h = 2$:

$$\text{Var}(e_{t+2}) = \text{Var}\big(\varepsilon_{t+2} + (\phi_1 + \theta_1)\varepsilon_{t+1}\big) = \sigma^2\left(1 + (\phi_1 + \theta_1)^2\right)$$

The forecast error variance for $h = k$:

$$\text{Var}(e_{t+k}) = \sigma^2 \sum_{j=0}^{k-1} \psi_j^2$$

with the $\psi_j$ weights defined in part (b). Because the forecast error is a moving average of the mutually uncorrelated future shocks $\varepsilon_{t+1}, \dots, \varepsilon_{t+k}$, its variance is $\sigma^2$ times the sum of squared weights, which captures the contribution of all the $\phi$ and $\theta$ parameters up to $k$ steps ahead.
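As an illustrative sketch (function and parameter names are made up; it assumes the white-noise variance $\sigma^2$ is known), the $\psi$-weight recursion and the resulting forecast-error variances can be computed in Python:

```python
import numpy as np

def psi_weights(phi, theta, k):
    """First k moving-average (Wold) weights of
    Y_t = phi1 Y_{t-1} + phi2 Y_{t-2} + eps_t + theta1 eps_{t-1}
          + theta2 eps_{t-2} + theta3 eps_{t-3}.
    """
    psi = [1.0]  # psi_0 = 1
    for j in range(1, k):
        # AR part: phi_1 psi_{j-1} + phi_2 psi_{j-2} (where those weights exist)
        val = sum(phi[i] * psi[j - 1 - i] for i in range(min(j, 2)))
        # MA part: theta_j enters only for j = 1, 2, 3
        if j <= 3:
            val += theta[j - 1]
        psi.append(val)
    return np.array(psi)

def forecast_error_variances(phi, theta, sigma2, k):
    """Var(e_{t+h}) = sigma^2 * sum_{j=0}^{h-1} psi_j^2 for h = 1, ..., k."""
    psi = psi_weights(phi, theta, k)
    return sigma2 * np.cumsum(psi ** 2)

# Example with made-up parameter values (note -1 < theta2 < 0)
print(forecast_error_variances(phi=[0.5, 0.3], theta=[0.4, -0.2, 0.1],
                               sigma2=1.0, k=5))
```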

