
ARFIMA PROCESSES

AR(1) processes
Let xt, t∈ℤ, be a zero-mean stochastic process. A roughly
linear relationship between successive observations may be
described by a simple autoregressive model of the form
xt=φxt-1+ut,
where ut, t∈ℤ, is zero-mean white noise.
A stationary process (xt)t∈Z satisfying this first-order
difference equation is called a first-order autoregressive
process (AR(1) process).
Substituting φxt-2+ut-1 for xt-1, φxt-3+ut-2 for xt-2, … gives
xt = φxt-1+ut = φ(φxt-2+ut-1)+ut
   = φ²xt-2 + φut-1 + ut
   = φ²(φxt-3+ut-2) + φut-1 + ut
   = φ³xt-3 + φ²ut-2 + φut-1 + ut
   ⋮
   = φ^k xt-k + ∑_{j=0}^{k-1} φ^j ut-j.
If k is large and |φ| < 1, then the first term of this expression is
negligible, hence
xt = ∑_{j=0}^{∞} φ^j ut-j.
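The point can be checked numerically. The following sketch is not part of the original notes; φ, n, and k are arbitrary illustrative values.
# Generate x_t = phi*x_{t-1} + u_t by recursion and compare x_t at one time
# point with the truncated sum of its MA representation.
set.seed(123)
phi <- 0.7; n <- 200; k <- 50
u <- rnorm(n)
x <- stats::filter(u, filter = phi, method = "recursive")  # the AR(1) recursion
sum(phi^(0:(k-1)) * u[n:(n-k+1)])  # truncated sum at time t = n
x[n]                               # essentially the same value, since phi^k is tiny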

The autocovariances of an AR(1) process
Suppose that |φ| < 1. Then the solution
xt = ∑_{j=0}^{∞} φ^j ut-j
of the difference equation xt=φxt-1+ut is stationary because
E(xt) = E(∑_{j=0}^{∞} φ^j ut-j) = ∑_{j=0}^{∞} φ^j E(ut-j) = ∑_{j=0}^{∞} φ^j·0 = 0,
Var(xt) = ∑_{j=0}^{∞} (φ^j)² Var(ut-j) = ∑_{j=0}^{∞} (φ^j)² σ² = σ² ∑_{j=0}^{∞} (φ²)^j = σ²/(1−φ²),
and Cov(xt,xt-k) = … = φ^k σ²/(1−φ²)
do not depend on t.
For example, Cov(xt,xt-1) is given by
E[(ut+φut-1+φ²ut-2+…)(ut-1+φut-2+φ²ut-3+…)]
= E[(ut+φut-1+φ²ut-2+…) ∑_{j=1}^{∞} φ^{j-1} ut-j]
= E[ut ∑_{j=1}^{∞} φ^{j-1} ut-j + φut-1 ∑_{j=1}^{∞} φ^{j-1} ut-j + φ²ut-2 ∑_{j=1}^{∞} φ^{j-1} ut-j + …]
= ∑_{j=1}^{∞} φ^{j-1} E(ut ut-j) + ∑_{j=1}^{∞} φ^j E(ut-1 ut-j) + ∑_{j=1}^{∞} φ^{j+1} E(ut-2 ut-j) + …
= (φ⁰·0 + φ¹·0 + …) + (φ¹E(ut-1²) + φ²·0 + …) + (φ²·0 + φ³E(ut-2²) + …) + …
= φ¹E(ut-1²) + φ³E(ut-2²) + … = σ²φ(φ⁰ + φ² + …) = σ²φ((φ²)⁰ + (φ²)¹ + …)
= σ²φ/(1−φ²).
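These formulas can be checked by simulation. The following sketch is not from the notes; the values of φ and σ are chosen only for illustration.
# Compare sample autocovariances of a long simulated AR(1) series with the
# theoretical values phi^k * sigma^2 / (1 - phi^2).
phi <- 0.7; sigma <- 1
x <- arima.sim(model = list(ar = phi), n = 1e5, sd = sigma)
emp <- acf(x, lag.max = 5, type = "covariance", plot = FALSE)$acf[, 1, 1]
theo <- phi^(0:5) * sigma^2 / (1 - phi^2)
round(cbind(empirical = emp, theoretical = theo), 3)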

Using the lag operator
Using the lag operator L we can write the equation
xt=φxt-1+ut
as
L⁰(xt)=φL(xt)+ut or as L⁰(xt)−φL(xt)=ut,
or as (L⁰−φL)(xt)=ut,
or as (1−φL)(xt)=ut
and the equation
xt = ∑_{j=0}^{∞} φ^j ut-j
as
xt = ∑_{j=0}^{∞} φ^j L^j(ut) or as xt = (∑_{j=0}^{∞} φ^j L^j)(ut)
or as xt = (∑_{j=0}^{∞} (φL)^j)(ut).
A comparison of the last versions of the two equations
suggests that the operator ∑_{j=0}^{∞} (φL)^j is the inverse operator of
1−φL and, more generally, that the lag operator follows the
usual algebraic rules. We may therefore write
(1−φL)⁻¹ = ∑_{j=0}^{∞} (φL)^j or 1/(1−φL) = ∑_{j=0}^{∞} (φL)^j.
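A small numerical illustration, not from the notes (φ is an arbitrary value): applying 1−φL after its inverse recovers the original series.
# filter(..., method = "recursive") implements x_t = phi*x_{t-1} + u_t,
# i.e. it applies the operator (1 - phi*L)^{-1} to u.
phi <- 0.7
u <- rnorm(1000)
x <- stats::filter(u, filter = phi, method = "recursive")  # (1 - phi*L)^{-1} applied to u
u.rec <- x - phi * c(0, x[-length(x)])                     # (1 - phi*L) applied to x
all.equal(as.numeric(u.rec), u)                            # TRUE (up to rounding)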

The spectral density of an AR(1) process
The spectral density of the AR(1) process
xt=φxt-1+ut, |φ| < 1,
is given by
f(ω) = (1/(2π)) [γ(0) + 2 ∑_{j=1}^{∞} γ(j) cos(ωj)]
     = (1/(2π)) [σ²/(1−φ²) + 2 ∑_{j=1}^{∞} (φ^j σ²/(1−φ²)) cos(ωj)]
     = (σ²/(2π(1−φ²))) [1 + 2 ∑_{j=1}^{∞} φ^j cos(ωj)].

Exercise: Fit an AR model of order 1 to the growth rates of the quarterly GDP and plot its spectral density.
d.dm <- d - mean(d)   # demean d, the growth-rate series
spec.ar(d.dm, order=1)
[Figure: Series d.dm, AR(1) spectrum, plotted by spec.ar; estimated spectrum versus frequency (0.0 to 0.5).]
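For comparison with the plot above, the formula for f(ω) can be evaluated directly by truncating the sum at a large J. This sketch is not from the notes; φ, σ², and J are illustrative values, and spec.ar uses a different normalization (frequency in cycles, no 1/(2π) factor), so only the shape is comparable.
# Evaluate the truncated series form of the AR(1) spectral density.
phi <- 0.2; sigma2 <- 1e-04; J <- 500
f <- function(omega)
  (sigma2 / (2 * pi * (1 - phi^2))) * (1 + 2 * sum(phi^(1:J) * cos(omega * (1:J))))
omega <- seq(0, pi, length.out = 200)
plot(omega, sapply(omega, f), type = "l", xlab = "omega", ylab = "f(omega)")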

ARIMA processes
A stationary process xt, t∈ℤ, satisfying the p'th-order difference equation
xt=φ1xt-1+…+φpxt-p+ut,
where ut is zero-mean white noise and φp≠0, is called an
autoregressive process of order p (AR(p) process).
A stationary process X satisfying
xt=φ1xt-1+…+φpxt-p+ut+θ1ut-1+…+θqut-q,
where ut is zero-mean white noise, φp≠0, and θq≠0, is called
an autoregressive moving average process of order (p,q)
(ARMA(p,q) process).
The equation
xt=φ1xt-1+…+φpxt-p+ut+θ1ut-1+…+θqut-q
can also be written as
xt-φ1xt-1-…-φpxt-p=ut+θ1ut-1+…+θqut-q
or as (1-φ1L-…-φpLp)(xt)=(1+θ1L+…+θqLq)(ut).
An ARMA(p,0) process is an AR(p) process. An ARMA(0,q) process
is called a moving average process of order q (MA(q) process).
A stochastic process X is called an autoregressive
integrated moving average process of order (p,d,q)
(ARIMA(p,d,q) process) if its d’th difference is an
ARMA(p,q) process, i.e.,
(1-φ1L-…-φpLp)(∆dxt)=(1+θ1L+…+θqLq)(ut).
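As an illustration (not from the notes; the series y is simulated for this purpose), an ARIMA(p,1,q) fit is an ARMA(p,q) fit to the first difference, so the two calls below should give essentially the same coefficient estimates.
set.seed(1)
y <- cumsum(arima.sim(model = list(ar = 0.5, ma = 0.3), n = 500))  # integrated ARMA(1,1), d = 1
arima(y, order = c(1, 1, 1))                               # ARIMA(1,1,1) fitted to y
arima(diff(y), order = c(1, 0, 1), include.mean = FALSE)   # ARMA(1,1) fitted to the differences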
Fractionally integrated processes

ARIMA(p,d,q) processes may be generalized by permitting the degree of differencing, d, to take fractional values.
Fractional differences can be defined with the help of the power series expansion of h(z) = (1−z)^d around 0:
h(z) = h(0) + h′(0)z + h″(0) z²/2 + h‴(0) z³/3! + …
     = 1 − dz + d(d−1) z²/2 − d(d−1)(d−2) z³/3! + …
The fractional differencing operator ∆^d = (1−L)^d is defined as a power series expansion in integer powers of L:
(1−L)^d = 1 − dL + d(d−1) L²/2 − d(d−1)(d−2) L³/3! + …
A stochastic process X is called an autoregressive
fractionally integrated moving average process
(ARFIMA(p,d,q) process) if the fractionally differenced
process is an ARMA process, i.e.,
(1-φ1L-…-φpLp)(∆dxt)=(1+θ1L+…+θqLq)(ut),
where d is not restricted to integral values.
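The coefficients of this expansion are easy to compute recursively. The following sketch is not from the notes; the function name and the choice d = 0.4 are illustrative.
# Coefficients c_j of (1 - L)^d = sum_j c_j L^j, computed via
# c_0 = 1 and c_j = c_{j-1} * (j - 1 - d) / j.
fracdiff.coef <- function(d, J) {
  co <- numeric(J + 1); co[1] <- 1
  for (j in 1:J) co[j + 1] <- co[j] * (j - 1 - d) / j
  co
}
fracdiff.coef(0.4, 5)   # 1, -0.4, -0.12, -0.064, -0.0416, -0.029952
# A truncated fractional difference of a series x could then be computed as
# stats::filter(x, fracdiff.coef(0.4, 100), method = "convolution", sides = 1).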

Causality and invertibility
We say that an ARFIMA(p,d,q) process
(1-φ1L-…-φpLp)(∆dxt)=(1+θ1L+…+θqLq)(ut),
is causal if xt can be expressed in terms of present and past
shocks, i.e.,
xt = ∑_{j=0}^{∞} ψj ut-j.
An ARFIMA(p,d,q) process
(1-φ1L-…-φpLp)(∆dxt)=(1+θ1L+…+θqLq)(ut),
is called invertible if ut has a representation of the form
ut = ∑_{j=0}^{∞} ξj xt-j.

Exercise: Show that the AR(1) process (1-φ1L)(xt)=ut is causal if the zero of the polynomial Φ(z)=1-φ1z is greater than one in absolute value, i.e., if Φ(z)≠0 for all |z| ≤ 1.

Criteria for causality and invertibility:
Causality of an ARMA(p,q) process
(1-φ1L-…-φpLp)(xt)=(1+θ1L+…+θqLq)(ut)
is equivalent to Φ(z)=1-φ1z-…-φpz^p≠0 for all |z| ≤ 1. Invertibility
is equivalent to Θ(z)=1+θ1z+…+θqz^q≠0 for all |z| ≤ 1.
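In practice the criteria can be checked numerically with polyroot(). This is a minimal sketch, not part of the notes; the ARMA(1,1) coefficients are arbitrary illustrative values.
phi <- 0.7; theta <- -0.2
all(Mod(polyroot(c(1, -phi))) > 1)   # causality: all roots of Phi(z) = 1 - 0.7z outside the unit circle
all(Mod(polyroot(c(1, theta))) > 1)  # invertibility: all roots of Theta(z) = 1 - 0.2z outside the unit circle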

Further exercises
Exercise: Determine which of the following ARMA
processes are causal and which are invertible.
(i) xt-0.7xt-1=ut
(ii) xt+0.4xt-1-1.5xt-2=ut
(iii) xt=ut-0.2ut-1
(iv) xt=ut+0.8ut-1+2.6ut-2
(v) xt=ut−√2 ut-1+ut-2
(vi) xt=ut+ut-2
(vii) xt-xt-1=ut-0.3ut-1
(viii) xt-0.2xt-1=ut+0.4ut-1-1.6ut-2
(ix) xt-1.2xt-1+0.1xt-2=ut+0.3ut-1-1.2ut-2

Hint: The roots z1 and z2 of the quadratic polynomial az²+bz+c are given by
z1,2 = (−b ± √(b² − 4ac)) / (2a).

Exercise: Show that the MA(∞) representation of the causal
ARMA(1,1) process
xt-φxt-1=ut+θut-1
is given by

xt=ut+(θ+φ) ∑ φ j −1u t − j .
j =1

Solution 1: We have
(1+θz)/(1−φz) = ∑_{j=0}^{∞} ψj z^j
or, equivalently,
(1+θz) = (1−φz)(ψ0+ψ1z+ψ2z²+…).
Equating coefficients of z^j we obtain:
j=0: 1 = ψ0,
j=1: θ = ψ1 − φψ0 ⇒ ψ1 = θ + φψ0 = θ + φ,
j≥2: 0 = ψj − φψj-1 ⇒ ψj = φψj-1, hence ψj = φ^{j-1}ψ1 = (θ+φ)φ^{j-1}.

Solution 2: We have
xt = ((1+θL)/(1−φL))(ut)
   = (1/(1−φL))(ut) + θL(1/(1−φL))(ut)
   = ∑_{j=0}^{∞} φ^j ut-j + θL ∑_{j=0}^{∞} φ^j ut-j
   = ut + φ ∑_{j=1}^{∞} φ^{j-1} ut-j + θ ∑_{j=1}^{∞} φ^{j-1} ut-j
   = ut + (θ+φ) ∑_{j=1}^{∞} φ^{j-1} ut-j.
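The result can be checked with ARMAtoMA(), which returns the ψ-weights of the MA(∞) representation. This check is not part of the notes; the coefficients below are arbitrary illustrative values.
phi <- 0.5; theta <- 0.3
ARMAtoMA(ar = phi, ma = theta, lag.max = 5)  # psi_1, ..., psi_5
(theta + phi) * phi^(0:4)                    # (theta + phi) * phi^(j-1) for j = 1, ..., 5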
Exercise: Show that the autocovariances
γ(k)=Cov(xt,xt-k)
of a causal ARMA(1,1) process
xt-φxt-1=ut+θut-1
satisfy
γ(k)=φγ(k-1), if k>1.

Solution: Multiplying the difference equation by xt-k and taking expectations, we obtain for k>1
γ(k) − φγ(k-1) = E(ut xt-k) + θE(ut-1 xt-k) = 0,
since, by causality, xt-k is uncorrelated with ut and ut-1 when k>1.
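A quick numerical check (not from the notes; illustrative coefficients): dividing the recursion by γ(0) shows that the autocorrelations satisfy ρ(k) = φρ(k-1) for k > 1.
phi <- 0.5; theta <- 0.3
rho <- ARMAacf(ar = phi, ma = theta, lag.max = 5)  # rho(0), ..., rho(5)
rho[4:6] / rho[3:5]                                # each ratio equals phi = 0.5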

Exercise: Fit an ARMA(1,2) model to the growth rates of the quarterly GDP.
p<-1; q<-2
arima(d.dm,order=c(p,0,q),include.mean=FALSE,
transform.pars=TRUE)
# If transform.pars=TRUE, the AR parameters are checked
# and, if necessary, transformed to ensure causality.
We obtain the following estimates of φ1, θ1, θ2, and σ2:
ar1 ma1 ma2
0.2004 0.1041 0.1720
sigma^2 estimated as 8.503e-05

