
Homework 6 (Due on 5/20 (Wed.))

All the following processes are stationary.


1. Suppose that

yt = xt + et
xt = φxt−1 + ut

where the errors et and ut are mutually independent i.i.d. zero mean processes.
Show that yt is an ARMA(1,1) process.

Proof. To show that yt is an ARMA(1, 1) process, we need to plug xt = ut/(1 − φL) into yt:

yt = ut/(1 − φL) + et
⇒ (1 − φL)yt = ut + (1 − φL)et ≡ wt.

Now we need to investigate the properties of wt. First,


σw² = Var(wt) = E((ut + et − φet−1)²)
    = E(ut²) + E(et²) + φ²E(et−1²)
    = σu² + (1 + φ²)σe²

and second

γ1 = E(wt wt−1) = E((ut + et − φet−1)(ut−1 + et−1 − φet−2))
   = −φE(et−1²)        (only the cross term (−φet−1) · et−1 is nonzero)
   = −φσe²

and

γ2 = E(wt wt−2) = E((ut + et − φet−1)(ut−2 + et−2 − φet−3)) = 0.

Accordingly, we can observe that wt has the property that its autocovariances γj are zero for j > 1. Thus wt is an MA(1) process. Combining this with the AR term in (1 − φL)yt, we can conclude that yt is an ARMA(1, 1) process. □
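As a quick check (not part of the original solution), the following minimal Python sketch simulates the system with arbitrary illustrative parameter values (φ = 0.6, σu = 1, σe = 0.8 are my own choices) and compares the sample autocovariances of wt = (1 − φL)yt with the formulas derived above.

```python
import numpy as np

rng = np.random.default_rng(0)
phi, sigma_u, sigma_e, T = 0.6, 1.0, 0.8, 200_000   # illustrative values

u = rng.normal(0.0, sigma_u, T)
e = rng.normal(0.0, sigma_e, T)

# x_t = phi * x_{t-1} + u_t, started at zero (burn-in effect is negligible at this T)
x = np.zeros(T)
for t in range(1, T):
    x[t] = phi * x[t - 1] + u[t]
y = x + e                      # y_t = x_t + e_t

# w_t = (1 - phi L) y_t = u_t + e_t - phi * e_{t-1}
w = y[1:] - phi * y[:-1]

def acov(z, j):
    """Sample autocovariance of z at lag j."""
    z = z - z.mean()
    return np.mean(z[j:] * z[: len(z) - j])

print("gamma_0:", acov(w, 0), " theory:", sigma_u**2 + (1 + phi**2) * sigma_e**2)
print("gamma_1:", acov(w, 1), " theory:", -phi * sigma_e**2)
print("gamma_2:", acov(w, 2), " theory: 0")
```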

2. Suppose that

yt = ut + et
ut = vt + θvt−1

where the errors vt and et are mutually independent i.i.d. zero mean processes.
Show that yt can be expressed as an MA(1) process using an i.i.d. error ηt:

yt = ηt + ψηt−1 .

Proof. By plugging ut into yt , we can observe that

yt = vt + θvt−1 + et .

The above representation is similar to the wt in Question 1. As before, we have
σy² = Var(yt) = E((et + vt + θvt−1)²)
    = E(et²) + E(vt²) + θ²E(vt−1²)
    = σe² + (1 + θ²)σv²

and second

γ1 = E(yt yt−1) = E((et + vt + θvt−1)(et−1 + vt−1 + θvt−2))
   = θE(vt−1²)
   = θσv².

Assuming that yt can be expressed as

yt = ηt + ψηt−1,

we can obtain that

σy² = Var(yt) = E((ηt + ψηt−1)²)
    = E(ηt²) + ψ²E(ηt−1²)
    = (1 + ψ²)ση²

and

γ1 = E(yt yt−1) = E((ηt + ψηt−1)(ηt−1 + ψηt−2))
   = ψE(ηt−1²)
   = ψση².
We then have the following two equalities:

σe² + (1 + θ²)σv² = (1 + ψ²)ση²,
θσv² = ψση² ⇒ ση² = (θ/ψ)σv².

Plugging the second result into the first, we have

σe² + (1 + θ²)σv² = (1 + ψ²)(θ/ψ)σv²
⇒ (σe² + (1 + θ²)σv²) / (θσv²) = (1 + ψ²)/ψ
⇒ 1 − cψ + ψ² = 0,

where c = (σe² + (1 + θ²)σv²) / (θσv²). Then

ψ ∈ { (c + √(c² − 4))/2 , (c − √(c² − 4))/2 }.
At a minimum, you need to derive the results above. However, under the condition that c² − 4 > 0, we have either c ≥ 2 or c ≤ −2, and then

(c + √(c² − 4))/2 ≥ 1 if c ≥ 2,
(c − √(c² − 4))/2 ≤ −1 if c ≤ −2.

Thus, these two roots should be ruled out: under either of them the MA process is noninvertible (the AR representation would need to use all future values of yt), which is not the usual case of interest. We can also obtain the result via simulation (see the accompanying file): when we implement OLS on yt based on the MA(1) representation, the coefficient can be uniquely identified. □
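The referenced simulation file is not reproduced here. As an illustration only, the following minimal Python sketch (with θ = 0.5, σv = σe = 1 chosen by me, not values given in the assignment) computes c, solves 1 − cψ + ψ² = 0, keeps the invertible root, and compares the implied lag-1 autocorrelation ψ/(1 + ψ²) with the sample autocorrelation of a simulated yt.

```python
import numpy as np

rng = np.random.default_rng(1)
theta, sigma_v, sigma_e, T = 0.5, 1.0, 1.0, 200_000   # illustrative values

v = rng.normal(0.0, sigma_v, T)
e = rng.normal(0.0, sigma_e, T)
v_lag = np.concatenate(([0.0], v[:-1]))
y = v + theta * v_lag + e          # y_t = v_t + theta * v_{t-1} + e_t

# c and the two roots of 1 - c*psi + psi^2 = 0 derived above
c = (sigma_e**2 + (1 + theta**2) * sigma_v**2) / (theta * sigma_v**2)
disc = np.sqrt(c**2 - 4.0)
root_big, root_small = (c + disc) / 2.0, (c - disc) / 2.0
psi = root_small if abs(root_small) < 1 else root_big   # keep the invertible root

y_c = y - y.mean()
rho1_sample = np.mean(y_c[1:] * y_c[:-1]) / np.mean(y_c**2)
print("psi (invertible root):", psi)
print("implied rho_1 = psi/(1+psi^2):", psi / (1 + psi**2))
print("sample rho_1 of y:", rho1_sample)
```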

3. Suppose that

yt = φy yt−1 + eyt
xt = φx xt−1 + ext

where the errors eyt and ext are mutually independent i.i.d. zero mean processes.
Show that wt = yt + xt can be expressed as an ARMA(2,1) process.

Proof. We can rewrite both yt and xt as

yt = eyt/(1 − φy L)
xt = ext/(1 − φx L).

Then

wt = eyt/(1 − φy L) + ext/(1 − φx L)
⇒ (1 − φy L)(1 − φx L)wt = (1 − φx L)eyt + (1 − φy L)ext ≡ ut.

The left-hand side shows that we have an AR(2) structure, and the right-hand side ut has the following properties:

Var(ut) = E(eyt²) + φx²E(eyt−1²) + E(ext²) + φy²E(ext−1²) = (1 + φx²)σey² + (1 + φy²)σex².

and
γ1 = E[((1 − φx L)eyt + (1 − φy L)ext)((1 − φx L)eyt−1 + (1 − φy L)ext−1)] = −φx σey² − φy σex²,

and

γ2 = E[((1 − φx L)eyt + (1 − φy L)ext)((1 − φx L)eyt−2 + (1 − φy L)ext−2)] = 0.

We can conclude that ut is an MA(1) process, so together with the AR(2) left-hand side, wt is an ARMA(2,1) process. □
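A minimal simulation sketch (again not part of the original solution, with parameter values of my own choosing) that checks the derived autocovariances of ut = (1 − φy L)(1 − φx L)wt:

```python
import numpy as np

rng = np.random.default_rng(2)
phi_y, phi_x, s_ey, s_ex, T = 0.5, -0.3, 1.0, 0.7, 200_000   # illustrative values

e_y = rng.normal(0.0, s_ey, T)
e_x = rng.normal(0.0, s_ex, T)
y = np.zeros(T)
x = np.zeros(T)
for t in range(1, T):
    y[t] = phi_y * y[t - 1] + e_y[t]
    x[t] = phi_x * x[t - 1] + e_x[t]
w = y + x

# u_t = (1 - phi_y L)(1 - phi_x L) w_t = w_t - (phi_y + phi_x) w_{t-1} + phi_y*phi_x*w_{t-2}
u = w[2:] - (phi_y + phi_x) * w[1:-1] + phi_y * phi_x * w[:-2]

def acov(z, j):
    """Sample autocovariance of z at lag j."""
    z = z - z.mean()
    return np.mean(z[j:] * z[: len(z) - j])

print("Var(u): ", acov(u, 0), " theory:", (1 + phi_x**2) * s_ey**2 + (1 + phi_y**2) * s_ex**2)
print("gamma_1:", acov(u, 1), " theory:", -phi_x * s_ey**2 - phi_y * s_ex**2)
print("gamma_2:", acov(u, 2), " theory: 0")
```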

4. Please show that the autocovariance of an AR(2) process satisfies


E[(yt − μ)²] = (1 − φ2)σ² / [(1 + φ2)((1 − φ2)² − φ1²)],
γj = φ1γj−1 + φ2γj−2,

and the autocovariance of an AR(p) process satisfies

γ0 = φ1γ1 + φ2γ2 + ... + φpγp + σ²,
γj = φ1γj−1 + φ2γj−2 + ... + φpγj−p.

Proof. We first discuss the variance of the AR(2) process by replacing the constant c with (1 − φ1 − φ2)μ:

yt = (1 − φ1 − φ2)μ + φ1yt−1 + φ2yt−2 + εt


(yt − μ) = φ1 (yt−1 − μ) + φ2(yt−2 − μ) + εt .

Then, multiplying the above equation by (yt − μ), (yt−1 − μ), and (yt−2 − μ) respectively and taking expectations, we have

γ0 = E((yt − μ)²) = φ1E((yt−1 − μ)(yt − μ)) + φ2E((yt−2 − μ)(yt − μ)) + E(εt(yt − μ))
   = φ1γ1 + φ2γ2 + σ²
γ1 = E((yt − μ)(yt−1 − μ)) = φ1E((yt−1 − μ)²) + φ2E((yt−2 − μ)(yt−1 − μ)) + E(εt(yt−1 − μ))
   = φ1γ0 + φ2γ1
γ2 = E((yt − μ)(yt−2 − μ)) = φ1E((yt−1 − μ)(yt−2 − μ)) + φ2E((yt−2 − μ)²) + E(εt(yt−2 − μ))
   = φ1γ1 + φ2γ0.

We can solve the following equations to obtain γ0 ,

γ0 = φ1γ1 + φ2γ2 + σ²
γ1 = φ1γ0 + φ2γ1
γ2 = φ1γ1 + φ2γ0.

More generally, multiplying the equation by (yt−j − μ) and taking expectations gives γj = φ1γj−1 + φ2γj−2 for j ≥ 2. As for the AR(p) process, the results can be derived in the same manner. □
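As a numerical illustration (values φ1 = 0.5, φ2 = 0.3, σ² = 1 are my own choices), the sketch below solves the three AR(2) equations above as a linear system in (γ0, γ1, γ2) and checks γ0 against the closed-form expression in the problem statement.

```python
import numpy as np

phi1, phi2, sigma2 = 0.5, 0.3, 1.0   # illustrative AR(2) coefficients and error variance

# gamma_0 - phi1*gamma_1 - phi2*gamma_2 = sigma2
# -phi1*gamma_0 + (1 - phi2)*gamma_1   = 0
# -phi2*gamma_0 - phi1*gamma_1 + gamma_2 = 0
A = np.array([
    [1.0,   -phi1,       -phi2],
    [-phi1,  1.0 - phi2,  0.0],
    [-phi2, -phi1,        1.0],
])
b = np.array([sigma2, 0.0, 0.0])
gamma0, gamma1, gamma2 = np.linalg.solve(A, b)

closed_form = (1 - phi2) * sigma2 / ((1 + phi2) * ((1 - phi2)**2 - phi1**2))
print("gamma_0 from the linear system:", gamma0)
print("gamma_0 from the closed form:  ", closed_form)
# Higher lags then follow the recursion gamma_j = phi1*gamma_{j-1} + phi2*gamma_{j-2}.
```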
