2020HW6
1. Suppose that
   y_t = x_t + e_t
   x_t = φ x_{t−1} + u_t
where the errors e_t and u_t are mutually independent i.i.d. zero-mean processes. Show that y_t is an ARMA(1,1) process.
Proof. To show that y_t is an ARMA(1,1) process, we plug x_t = u_t/(1 − φL) into y_t:
   y_t = u_t/(1 − φL) + e_t
   ⇒ (1 − φL) y_t = u_t + (1 − φL) e_t ≡ w_t.
Since w_t = u_t + e_t − φ e_{t−1}, its autocovariances are
   γ_0 = E(w_t^2) = σ_u^2 + (1 + φ^2) σ_e^2,
and
   γ_1 = E(w_t w_{t−1}) = −φ σ_e^2,
while γ_j = 0 when j > 1. Thus w_t is an MA(1) process. Taking the AR term of y_t on the left-hand side together with this MA(1) error, we conclude that y_t is an ARMA(1,1) process.
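As a quick numerical sanity check (not part of the original solution; the values φ = 0.7 and unit error variances are assumptions for illustration), a minimal Python sketch can confirm that the sample autocovariances of w_t = (1 − φL) y_t are essentially zero beyond lag 1, while γ_0 ≈ σ_u^2 + (1 + φ^2)σ_e^2 and γ_1 ≈ −φσ_e^2 as derived above:

import numpy as np

# Illustrative parameters (assumed, not from the problem)
phi, T = 0.7, 200_000
rng = np.random.default_rng(0)
u = rng.standard_normal(T)   # u_t ~ i.i.d. (0, 1)
e = rng.standard_normal(T)   # e_t ~ i.i.d. (0, 1)

# x_t = phi * x_{t-1} + u_t and y_t = x_t + e_t
x = np.zeros(T)
for t in range(1, T):
    x[t] = phi * x[t - 1] + u[t]
y = x + e

# w_t = (1 - phi L) y_t = y_t - phi * y_{t-1}
w = y[1:] - phi * y[:-1]
w = w - w.mean()

# Sample autocovariances of w_t: nonzero at lags 0 and 1, roughly zero beyond
for j in range(4):
    gamma_j = np.mean(w[j:] * w[:len(w) - j])
    print(f"gamma_{j} ~ {gamma_j:.3f}")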
2. Suppose that
   y_t = u_t + e_t
   u_t = v_t + θ v_{t−1}
where the errors v_t and e_t are mutually independent i.i.d. zero-mean processes. Show that y_t can be expressed as an MA(1) process with an i.i.d. error η_t,
   y_t = η_t + ψ η_{t−1}.
Proof. Substituting u_t into y_t gives
   y_t = v_t + θ v_{t−1} + e_t.
The above representation is similar to the MA(1) part in Question 1. Similarly, we
can have
   σ_y^2 = Var(y_t) = E((e_t + v_t + θ v_{t−1})^2)
                    = E(e_t^2) + E(v_t^2) + θ^2 E(v_{t−1}^2)
                    = σ_e^2 + (1 + θ^2) σ_v^2,
and
   γ_1 = E(y_t y_{t−1}) = E((e_t + v_t + θ v_{t−1})(e_{t−1} + v_{t−1} + θ v_{t−2}))
                        = θ E(v_{t−1}^2)
                        = θ σ_v^2.
Assuming that y_t can be expressed as
   y_t = η_t + ψ η_{t−1},
we can obtain
   σ_y^2 = Var(y_t) = E((η_t + ψ η_{t−1})^2)
                    = E(η_t^2) + ψ^2 E(η_{t−1}^2)
                    = (1 + ψ^2) σ_η^2
and
   γ_1 = E(y_t y_{t−1}) = E((η_t + ψ η_{t−1})(η_{t−1} + ψ η_{t−2}))
                        = ψ E(η_{t−1}^2)
                        = ψ σ_η^2.
We can have the following two equalities,
   σ_e^2 + (1 + θ^2) σ_v^2 = (1 + ψ^2) σ_η^2,
   θ σ_v^2 = ψ σ_η^2  ⇒  σ_η^2 = (θ/ψ) σ_v^2.
Plugging the second result into the first one, we have
   σ_e^2 + (1 + θ^2) σ_v^2 = (1 + ψ^2) (θ/ψ) σ_v^2
   ⇒ [σ_e^2 + (1 + θ^2) σ_v^2] / (θ σ_v^2) · ψ = 1 + ψ^2
   ⇒ 1 − cψ + ψ^2 = 0,
where c = [σ_e^2 + (1 + θ^2) σ_v^2] / (θ σ_v^2). Then
   ψ = (c + √(c^2 − 4))/2  or  ψ = (c − √(c^2 − 4))/2.
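For example (illustrative numbers, not part of the problem), take θ = 0.5 and σ_v^2 = σ_e^2 = 1. Then c = (1 + 1.25)/0.5 = 4.5 and ψ = (4.5 ± √16.25)/2, i.e. ψ ≈ 4.266 or ψ ≈ 0.234; the two roots multiply to 1, and only the smaller one satisfies |ψ| < 1.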
At a minimum, you need to derive the above results. However, whenever c^2 − 4 > 0 we have either c > 2 or c < −2, and then
   (c + √(c^2 − 4))/2 ≥ 1 if c ≥ 2,
   (c − √(c^2 − 4))/2 ≤ −1 if c ≤ −2.
Thus, under these two scenarios (which should be ruled out), the MA process is noninvertible (its AR representation would need to use all future values of y_t), so it is not the usual case of interest. We can also obtain the result via simulation (see the separate file): when we implement OLS on y_t based on the MA(1) process, the coefficient can be uniquely identified.
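A minimal simulation sketch of this kind is below (in Python; the values θ = 0.5 and σ_v = σ_e = 1 are assumptions for illustration, this is not the separate file mentioned above, and it recovers ψ by matching the first two autocovariances rather than by OLS):

import numpy as np

# Illustrative parameters (assumed, not from the problem)
theta, T = 0.5, 500_000
rng = np.random.default_rng(1)
v = rng.standard_normal(T + 1)
e = rng.standard_normal(T)

# y_t = v_t + theta * v_{t-1} + e_t
y = v[1:] + theta * v[:-1] + e

# Sample moments and the implied c = gamma_0 / gamma_1
yc = y - y.mean()
gamma0 = np.mean(yc * yc)
gamma1 = np.mean(yc[1:] * yc[:-1])
c = gamma0 / gamma1

# psi solves psi^2 - c*psi + 1 = 0; keep the invertible root (|psi| < 1)
roots = np.roots([1.0, -c, 1.0])
psi = roots[np.argmin(np.abs(roots))]
print(f"c ~ {c:.3f} (theory 4.5), invertible psi ~ {psi:.3f} (theory 0.234)")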
3. Suppose that
   y_t = φ_y y_{t−1} + e_{yt}
   x_t = φ_x x_{t−1} + e_{xt}
where the errors e_{yt} and e_{xt} are mutually independent i.i.d. zero-mean processes. Show that w_t = y_t + x_t can be expressed as an ARMA(2,1) process.
Proof. Writing each process in lag-operator form, we have
   w_t = e_{yt}/(1 − φ_y L) + e_{xt}/(1 − φ_x L)
   ⇒ (1 − φ_y L)(1 − φ_x L) w_t = (1 − φ_x L) e_{yt} + (1 − φ_y L) e_{xt} ≡ u_t.
The left-hand side shows that we have an AR(2) structure, and the right-hand side u_t has the following properties:
   γ_0 = E[((1 − φ_x L) e_{yt} + (1 − φ_y L) e_{xt})^2] = (1 + φ_x^2) σ_{ey}^2 + (1 + φ_y^2) σ_{ex}^2,
and
   γ_1 = E[((1 − φ_x L) e_{yt} + (1 − φ_y L) e_{xt})((1 − φ_x L) e_{y,t−1} + (1 − φ_y L) e_{x,t−1})] = −φ_x σ_{ey}^2 − φ_y σ_{ex}^2,
and
   γ_2 = E[((1 − φ_x L) e_{yt} + (1 − φ_y L) e_{xt})((1 − φ_x L) e_{y,t−2} + (1 − φ_y L) e_{x,t−2})] = 0.
Since the autocovariances of u_t vanish beyond lag 1, u_t is an MA(1) process, and therefore w_t is an ARMA(2,1) process.
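As a numerical sanity check (a sketch only; the values φ_y = 0.5, φ_x = −0.3, and unit error variances are assumptions for illustration), one can simulate the two AR(1) processes and verify that u_t = (1 − φ_y L)(1 − φ_x L) w_t has sample autocovariances close to the expressions above, with γ_2 essentially zero:

import numpy as np

# Assumed illustrative parameters (both AR(1) processes stationary)
phi_y, phi_x, T = 0.5, -0.3, 200_000
rng = np.random.default_rng(2)
ey = rng.standard_normal(T)
ex = rng.standard_normal(T)

# Two independent AR(1) processes and their sum w_t = y_t + x_t
y = np.zeros(T)
x = np.zeros(T)
for t in range(1, T):
    y[t] = phi_y * y[t - 1] + ey[t]
    x[t] = phi_x * x[t - 1] + ex[t]
w = y + x

# u_t = w_t - (phi_y + phi_x) w_{t-1} + phi_y phi_x w_{t-2}
u = w[2:] - (phi_y + phi_x) * w[1:-1] + phi_y * phi_x * w[:-2]
u = u - u.mean()

# Theory: gamma_0 = 2.34, gamma_1 = -phi_x - phi_y = -0.2, gamma_2 = 0
for j in range(3):
    print(f"gamma_{j} ~ {np.mean(u[j:] * u[:len(u) - j]):.3f}")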
Proof. We first derive the variance of the AR(2) process y_t = c + φ_1 y_{t−1} + φ_2 y_{t−2} + ε_t. Replacing c by (1 − φ_1 − φ_2)μ, where μ = E(y_t), we can write
   y_t − μ = φ_1 (y_{t−1} − μ) + φ_2 (y_{t−2} − μ) + ε_t.
Then, multiplying this equation by (y_t − μ), (y_{t−1} − μ), and (y_{t−2} − μ) respectively and taking expectations, we have
   γ_0 = E[(y_t − μ)^2] = φ_1 E[(y_{t−1} − μ)(y_t − μ)] + φ_2 E[(y_{t−2} − μ)(y_t − μ)] + E[ε_t (y_t − μ)]
       = φ_1 γ_1 + φ_2 γ_2 + σ^2,
   γ_1 = E[(y_t − μ)(y_{t−1} − μ)] = φ_1 E[(y_{t−1} − μ)^2] + φ_2 E[(y_{t−2} − μ)(y_{t−1} − μ)] + E[ε_t (y_{t−1} − μ)]
       = φ_1 γ_0 + φ_2 γ_1,
   γ_2 = E[(y_t − μ)(y_{t−2} − μ)] = φ_1 E[(y_{t−1} − μ)(y_{t−2} − μ)] + φ_2 E[(y_{t−2} − μ)^2] + E[ε_t (y_{t−2} − μ)]
       = φ_1 γ_1 + φ_2 γ_0.
Collecting these results, we have the system
   γ_0 = φ_1 γ_1 + φ_2 γ_2 + σ^2,
   γ_1 = φ_1 γ_0 + φ_2 γ_1,
   γ_2 = φ_1 γ_1 + φ_2 γ_0.
These equations also imply that γ_j = φ_1 γ_{j−1} + φ_2 γ_{j−2} for j ≥ 2. For the AR(p) process, the results can be derived in the same manner.
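For completeness (this last step is not written out above), the three equations can be solved explicitly:
   γ_1 = φ_1 γ_0 / (1 − φ_2),
   γ_2 = [φ_1^2 / (1 − φ_2) + φ_2] γ_0,
   γ_0 = (1 − φ_2) σ^2 / [(1 + φ_2)((1 − φ_2)^2 − φ_1^2)].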