ARFIMA
AR(1) processes
Let xt, t∈ℤ, be a zero-mean stochastic process. A roughly
linear relationship between successive observations may be
described by a simple autoregressive model of the form
xt=φxt-1+ut,
where ut, t∈ℤ, is zero-mean white noise.
A stationary process (xt)t∈Z satisfying this first-order
difference equation is called a first-order autoregressive
process (AR(1) process).
Substituting φxt-2+ut-1 for xt-1, φxt-3+ut-2 for xt-2, … gives
xt = φxt-1 + ut = φ(φxt-2 + ut-1) + ut
= φ²xt-2 + φut-1 + ut
= φ²(φxt-3 + ut-2) + φut-1 + ut
= φ³xt-3 + φ²ut-2 + φut-1 + ut
⋮
= φ^k xt-k + ∑_{j=0}^{k−1} φ^j ut-j.
If k is large and |φ| < 1, then the first term of this expression is
negligible, hence
xt = ∑_{j=0}^{∞} φ^j ut-j.
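As a numerical illustration (a sketch, not part of the notes; the parameter values are arbitrary), the recursion and the truncated sum can be compared in Python:

```python
import random

# Compare the AR(1) recursion x_t = phi*x_{t-1} + u_t with the
# truncated moving-average sum sum_{j=0}^{k-1} phi^j u_{t-j}.
# phi, n, and k below are illustrative choices.
random.seed(0)
phi = 0.6
n = 500
u = [random.gauss(0.0, 1.0) for _ in range(n)]

# Exact recursion, started from zero initial condition.
x = [0.0] * n
x[0] = u[0]
for t in range(1, n):
    x[t] = phi * x[t - 1] + u[t]

# Truncated MA representation at the last time point.
t = n - 1
k = 60  # phi**60 is negligible since |phi| < 1
approx = sum(phi**j * u[t - j] for j in range(k))

print(abs(x[t] - approx))  # tiny, since the phi^k x_{t-k} term is negligible
```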
The autocovariances of an AR(1) process
Suppose that |φ| < 1. Then the solution
xt = ∑_{j=0}^{∞} φ^j ut-j
of the difference equation xt = φxt-1 + ut is stationary because
E(xt) = E(∑_{j=0}^{∞} φ^j ut-j) = ∑_{j=0}^{∞} φ^j E(ut-j) = ∑_{j=0}^{∞} φ^j · 0 = 0,
Var(xt) = ∑_{j=0}^{∞} (φ^j)² Var(ut-j) = ∑_{j=0}^{∞} (φ^j)² σ² = σ² ∑_{j=0}^{∞} (φ²)^j = σ²/(1−φ²),
and Cov(xt, xt-k) = … = φ^k σ²/(1−φ²)
do not depend on t.
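These closed forms can be checked against the truncated MA(∞) sums. A small Python sketch (the values of φ and σ² are illustrative):

```python
# Check Var(x_t) = sigma^2/(1-phi^2) and
# Cov(x_t, x_{t-k}) = phi^k sigma^2/(1-phi^2) against the truncated sums
# gamma(k) = sigma^2 * sum_{j>=0} phi^j * phi^{j+k}.
phi, sigma2 = 0.5, 2.0
N = 200  # truncation point; phi**N is negligible

def gamma_truncated(k):
    # Autocovariance at lag k from the MA(infinity) coefficients.
    return sigma2 * sum(phi**j * phi**(j + k) for j in range(N))

var_exact = sigma2 / (1 - phi**2)
cov1_exact = phi * sigma2 / (1 - phi**2)

print(gamma_truncated(0), var_exact)   # both ≈ 2.6667
print(gamma_truncated(1), cov1_exact)  # both ≈ 1.3333
```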
For example, Cov(xt, xt-1) is given by
E[(ut + φut-1 + φ²ut-2 + …)(ut-1 + φut-2 + φ²ut-3 + …)]
= E[(ut + φut-1 + φ²ut-2 + …) ∑_{j=1}^{∞} φ^{j−1} ut-j]
= E[ut ∑_{j=1}^{∞} φ^{j−1} ut-j + φut-1 ∑_{j=1}^{∞} φ^{j−1} ut-j + φ²ut-2 ∑_{j=1}^{∞} φ^{j−1} ut-j + …]
= ∑_{j=1}^{∞} φ^{j−1} E(ut ut-j) + ∑_{j=1}^{∞} φ^j E(ut-1 ut-j) + ∑_{j=1}^{∞} φ^{j+1} E(ut-2 ut-j) + …
= φσ² + φ³σ² + φ⁵σ² + … = φσ²/(1−φ²),
since E(us ut-j) equals σ² when s = t−j and 0 otherwise.
Using the lag operator
Using the lag operator L we can write the equation
xt=φxt-1+ut
as
L^0(xt) = φL(xt) + ut or as L^0(xt) − φL(xt) = ut,
or as (L^0 − φL)(xt) = ut,
or as (1−φL)(xt) = ut
and the equation
xt = ∑_{j=0}^{∞} φ^j ut-j
as
xt = ∑_{j=0}^{∞} φ^j L^j(ut) or as xt = (∑_{j=0}^{∞} φ^j L^j)(ut)
or as xt = (∑_{j=0}^{∞} (φL)^j)(ut).
A comparison of the last versions of the two equations
suggests that the operator ∑(φL)^j is the inverse operator of
1−φL and, more generally, that the lag operator follows the
usual algebraic rules. We may therefore write
(1−φL)^{−1} = ∑_{j=0}^{∞} (φL)^j or 1/(1−φL) = ∑_{j=0}^{∞} (φL)^j.
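The operator identity can be verified symbolically by treating lag polynomials as coefficient lists and multiplying them by convolution; the following sketch (names and parameter values are illustrative) shows that (1−φL) times the truncated series is the identity up to a remainder of size φ^k:

```python
# Represent lag polynomials by coefficient lists and compose them by
# convolution: (1 - phi*L) * (1 + phi*L + ... + (phi*L)^{k-1})
# should equal 1 up to a single -phi^k remainder term.
phi = 0.7
k = 40

def convolve(a, b):
    # Coefficients of the product of two lag polynomials.
    out = [0.0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] += ai * bj
    return out

one_minus_phiL = [1.0, -phi]
truncated_inverse = [phi**j for j in range(k)]  # 1, phi, phi^2, ...

product = convolve(one_minus_phiL, truncated_inverse)
# product is [1, 0, ..., 0, -phi^k]: the identity operator plus a
# remainder that vanishes as k grows, since |phi| < 1.
print(product[0], product[-1])
```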
The spectral density of an AR(1) process
The spectral density of the AR(1) process
xt = φxt-1 + ut, |φ| < 1,
is given by
f(ω) = (1/2π)[γ(0) + 2 ∑_{j=1}^{∞} γ(j) cos(ωj)]
= (1/2π)[σ²/(1−φ²) + 2 ∑_{j=1}^{∞} (φ^j σ²/(1−φ²)) cos(ωj)]
= (σ²/(2π(1−φ²)))[1 + 2 ∑_{j=1}^{∞} φ^j cos(ωj)].
[Figure: spectral density of an AR(1) process, plotted against frequency from 0.0 to 0.5.]
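As a numerical check (not part of the notes), the truncated cosine series can be compared with the standard AR(1) closed form f(ω) = σ²/(2π(1 − 2φcos ω + φ²)), which follows from |1 − φe^{−iω}|² and is assumed here rather than derived:

```python
import math

# Compare the truncated cosine series for f(omega) with the closed form
# f(omega) = sigma^2 / (2*pi*(1 - 2*phi*cos(omega) + phi^2)).
# phi, sigma2, and the truncation point J are illustrative.
phi, sigma2 = 0.6, 1.0
J = 200  # phi**J is negligible

def f_series(omega):
    s = 1.0 + 2.0 * sum(phi**j * math.cos(omega * j) for j in range(1, J))
    return sigma2 / (2 * math.pi * (1 - phi**2)) * s

def f_closed(omega):
    return sigma2 / (2 * math.pi * (1 - 2 * phi * math.cos(omega) + phi**2))

for omega in (0.1, 1.0, 2.5):
    print(f_series(omega), f_closed(omega))  # agree to high accuracy
```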
ARIMA processes
A stationary process xt, t∈ℤ, satisfying the p’th-order
difference equation
xt = φ1xt-1 + … + φpxt-p + ut,
where ut is zero-mean white noise and φp ≠ 0, is called an
autoregressive process of order p (AR(p) process).
A stationary process xt satisfying
xt = φ1xt-1 + … + φpxt-p + ut + θ1ut-1 + … + θqut-q,
where ut is zero-mean white noise, φp ≠ 0, and θq ≠ 0, is called
an autoregressive moving average process of order (p,q)
(ARMA(p,q) process).
The equation
xt=φ1xt-1+…+φpxt-p+ut+θ1ut-1+…+θqut-q
can also be written as
xt-φ1xt-1-…-φpxt-p=ut+θ1ut-1+…+θqut-q
or as (1−φ1L−…−φpL^p)(xt) = (1+θ1L+…+θqL^q)(ut).
An ARMA(p,0) process is an AR(p) process. An ARMA(0,q)
process is called a moving average process of order q (MA(q)
process).
A stochastic process xt is called an autoregressive
integrated moving average process of order (p,d,q)
(ARIMA(p,d,q) process) if its d’th difference is an
ARMA(p,q) process, i.e.,
(1−φ1L−…−φpL^p)(∆^d xt) = (1+θ1L+…+θqL^q)(ut).
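The lag-polynomial form of an ARMA equation can be verified numerically by simulating the recursion and checking the identity at every time point; a sketch with arbitrary illustrative parameters:

```python
import random

# Simulate an ARMA(2,1) recursion and verify pointwise that
# (1 - phi1*L - phi2*L^2) x_t equals (1 + theta1*L) u_t.
# The parameter values are illustrative (and give a causal process).
random.seed(1)
phi1, phi2, theta1 = 0.5, -0.3, 0.4
n = 300
u = [random.gauss(0.0, 1.0) for _ in range(n)]

x = [0.0] * n
for t in range(n):
    ar = (phi1 * (x[t - 1] if t >= 1 else 0.0)
          + phi2 * (x[t - 2] if t >= 2 else 0.0))
    ma = u[t] + theta1 * (u[t - 1] if t >= 1 else 0.0)
    x[t] = ar + ma

# Largest violation of the lag-polynomial identity for t >= 2.
worst = max(
    abs((x[t] - phi1 * x[t - 1] - phi2 * x[t - 2]) - (u[t] + theta1 * u[t - 1]))
    for t in range(2, n)
)
print(worst)  # zero up to floating-point rounding
```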
Fractionally integrated processes
An ARFIMA(p,d,q) process is obtained by allowing the
differencing order d of an ARIMA(p,d,q) process to take
non-integer values. The fractional difference operator
∆^d = (1−L)^d is defined through the binomial series
(1−z)^d = 1 − dz + (d(d−1)/2!) z² − (d(d−1)(d−2)/3!) z³ + …
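The binomial coefficients of (1−z)^d can be generated by the standard recursion c_k = −c_{k−1}(d − k + 1)/k; a sketch (the value of d is illustrative) comparing the first few coefficients with the expanded terms above:

```python
# Generate the coefficients of (1 - z)^d by the recursion
# c_k = -c_{k-1} * (d - k + 1) / k and compare the first few with
# the expanded terms 1, -d, d(d-1)/2!, -d(d-1)(d-2)/3!.
d = 0.3  # illustrative fractional differencing order

def frac_diff_coeffs(d, n):
    coeffs = [1.0]
    for k in range(1, n):
        coeffs.append(-coeffs[-1] * (d - k + 1) / k)
    return coeffs

c = frac_diff_coeffs(d, 4)
print(c[1], -d)                          # both ≈ -0.3
print(c[2], d * (d - 1) / 2)             # both ≈ -0.105
print(c[3], -d * (d - 1) * (d - 2) / 6)  # both ≈ -0.0595
```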
Causality and invertibility
We say that an ARFIMA(p,d,q) process
(1−φ1L−…−φpL^p)(∆^d xt) = (1+θ1L+…+θqL^q)(ut)
is causal if xt can be expressed in terms of present and past
shocks, i.e.,
xt = ∑_{j=0}^{∞} ψj ut-j.
An ARFIMA(p,d,q) process
(1−φ1L−…−φpL^p)(∆^d xt) = (1+θ1L+…+θqL^q)(ut)
is called invertible if ut has a representation of the form
ut = ∑_{j=0}^{∞} ξj xt-j.
Further exercises
Exercise: Determine which of the following ARMA
processes are causal and which are invertible.
(i) xt-0.7xt-1=ut
(ii) xt+0.4xt-1-1.5xt-2=ut
(iii) xt=ut-0.2ut-1
(iv) xt=ut+0.8ut-1+2.6ut-2
(v) xt=ut-√2ut-1+ut-2
(vi) xt=ut+ut-2
(vii) xt-xt-1=ut-0.3ut-1
(viii) xt-0.2xt-1=ut+0.4ut-1-1.6ut-2
(ix) xt-1.2xt-1+0.1xt-2=ut+0.3ut-1-1.2ut-2
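These questions reduce to checking whether the roots of the AR polynomial (for causality) and the MA polynomial (for invertibility) lie strictly outside the unit circle. Since the polynomials in the exercise have degree at most 2, the check can be sketched with the quadratic formula; the helper name is illustrative:

```python
import cmath

# Causality/invertibility check for lag polynomials of degree <= 2:
# all roots of c0 + c1*z (+ c2*z^2) must lie strictly outside the
# unit circle.
def roots_outside_unit_circle(coeffs):
    """coeffs = [c0, c1] or [c0, c1, c2] for c0 + c1*z + c2*z^2."""
    if len(coeffs) == 2:
        roots = [-coeffs[0] / coeffs[1]]
    else:
        c0, c1, c2 = coeffs
        disc = cmath.sqrt(c1 * c1 - 4 * c2 * c0)  # quadratic formula
        roots = [(-c1 + disc) / (2 * c2), (-c1 - disc) / (2 * c2)]
    return all(abs(r) > 1 for r in roots)

# (i) x_t - 0.7 x_{t-1} = u_t: AR polynomial 1 - 0.7z, root 1/0.7.
print(roots_outside_unit_circle([1.0, -0.7]))  # True -> causal
# (vi) x_t = u_t + u_{t-2}: MA polynomial 1 + z^2, roots ±i on the circle.
print(roots_outside_unit_circle([1.0, 0.0, 1.0]))  # False -> not invertible
```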
Exercise: Show that the MA(∞) representation of the causal
ARMA(1,1) process
xt-φxt-1=ut+θut-1
is given by
xt = ut + (θ+φ) ∑_{j=1}^{∞} φ^{j−1} ut-j.
Solution 1: We have
(1+θz)/(1−φz) = ∑_{j=0}^{∞} ψj z^j
or, equivalently,
(1+θz) = (1−φz)(ψ0 + ψ1z + ψ2z² + …).
Equating coefficients of z^j we obtain:
j=0: 1 = ψ0,
j=1: θ = ψ1 − φψ0 ⇒ ψ1 = θ + φψ0 = θ + φ,
j≥2: 0 = ψj − φψj-1 ⇒ ψj = φψj-1 = (θ+φ)φ^{j−1}.
Solution 2: We have
xt = ((1+θL)/(1−φL))(ut)
= (1/(1−φL))(ut) + θL(1/(1−φL))(ut)
= ∑_{j=0}^{∞} φ^j ut-j + θL ∑_{j=0}^{∞} φ^j ut-j
= ut + φ ∑_{j=1}^{∞} φ^{j−1} ut-j + θ ∑_{j=1}^{∞} φ^{j−1} ut-j
= ut + (θ+φ) ∑_{j=1}^{∞} φ^{j−1} ut-j.
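The MA(∞) representation can also be checked numerically by running the ARMA(1,1) recursion and comparing it with the truncated ψ-weight sum; a sketch with illustrative parameter values:

```python
import random

# Compare the ARMA(1,1) recursion x_t = phi*x_{t-1} + u_t + theta*u_{t-1}
# with the truncated MA(infinity) form
# x_t = u_t + (theta + phi) * sum_{j>=1} phi^{j-1} u_{t-j}.
random.seed(2)
phi, theta = 0.5, 0.3
n = 400
u = [random.gauss(0.0, 1.0) for _ in range(n)]

x = [0.0] * n
x[0] = u[0]
for t in range(1, n):
    x[t] = phi * x[t - 1] + u[t] + theta * u[t - 1]

t = n - 1
k = 80  # phi**80 is negligible
approx = u[t] + (theta + phi) * sum(phi ** (j - 1) * u[t - j]
                                    for j in range(1, k))
print(abs(x[t] - approx))  # tiny
```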
Exercise: Show that the autocovariances
γ(k)=Cov(xt,xt-k)
of a causal ARMA(1,1) process
xt-φxt-1=ut+θut-1
satisfy
γ(k)=φγ(k-1), if k>1.
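A numerical sanity check of this recursion (not a proof) can be built from the ψ-weights ψ0 = 1, ψj = (θ+φ)φ^{j−1} of the previous exercise, using γ(k) = σ² ∑_j ψj ψj+k; parameter values are illustrative:

```python
# Compute gamma(k) = sigma^2 * sum_j psi_j * psi_{j+k} from the truncated
# psi-weights of a causal ARMA(1,1) process and check that
# gamma(k) = phi * gamma(k-1) for k > 1.
phi, theta, sigma2 = 0.5, 0.3, 1.0
N = 200  # truncation point for the psi-weights

psi = [1.0] + [(theta + phi) * phi ** (j - 1) for j in range(1, N)]

def gamma(k):
    # Autocovariance at lag k from the MA(infinity) representation.
    return sigma2 * sum(psi[j] * psi[j + k] for j in range(N - k))

for k in (2, 3, 4):
    print(gamma(k), phi * gamma(k - 1))  # equal up to truncation error
```

Note that the recursion starts only at k > 1: γ(1) ≠ φγ(0), because ψ1 = θ+φ differs from φψ0.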