Chapter 13
In time series analysis, our goal is to predict a series that is typically not deterministic but contains a random component. If this random component is stationary, we can develop powerful techniques to forecast its future values. These techniques are developed and discussed in this chapter.
Definition 13.1.2 (Non-negative definite). A real-valued function κ(·) defined on the integers is non-negative definite if
\[
\sum_{i,j=1}^{n} a_i \, \kappa(i-j) \, a_j \ge 0
\]
for all positive integers n and vectors a = (a1, . . . , an)' with real-valued components ai.
Proposition 13.1.3. A real-valued function defined on the integers is the autocovariance
function of a stationary time series if and only if it is even and non-negative definite.
Proof. To show that the autocovariance function γ(·) of any stationary time series {Xt }
is non-negative definite, let a be any n × 1 vector with real components a1 , . . . , an and
let Xn = (X1, . . . , Xn)'. By the non-negativity of variances,
\[
\operatorname{Var}(a' X_n) = a' \Gamma_n a = \sum_{i,j=1}^{n} a_i \, \gamma(i-j) \, a_j \ge 0,
\]
where Γn is the covariance matrix of the random vector Xn. The last inequality, however, is precisely the statement that γ(·) is non-negative definite. The converse result, that there exists a stationary time series with autocovariance function κ if κ is even, real-valued, and non-negative definite, is more difficult to establish (for details, see Brockwell and Davis (1991)).
Example. Let us show that the real-valued function κ(·) defined on the integers by
\[
\kappa(h) =
\begin{cases}
1 & \text{if } h = 0, \\
\rho & \text{if } h = \pm 1, \\
0 & \text{otherwise},
\end{cases}
\]
is not an autocovariance function when |ρ| > 1/2.
If ρ > 1/2, let K = [κ(i − j)]_{i,j=1}^{n} and let a be the n-component vector a = (1, −1, 1, −1, . . .)'. Then
\[
a' K a = n - 2(n-1)\rho < 0 \quad \text{for } n > \frac{2\rho}{2\rho - 1},
\]
which shows that κ(·) is not non-negative definite and therefore is not an autocovariance function.
If ρ < −1/2, the same argument using the n-component vector a = (1, 1, 1, . . .)' again shows that κ(·) is not non-negative definite.
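This failure is easy to confirm numerically. The following sketch is our own illustration, not part of the notes; it assumes numpy, uses ρ = 0.6 as an arbitrary value above 1/2, and the name kappa is ours. It builds the matrix K = [κ(i − j)] and evaluates a'Ka with the alternating vector; a negative value, or equivalently a negative eigenvalue of K, confirms that κ(·) is not non-negative definite.

```python
import numpy as np

rho = 0.6  # illustrative value; any rho > 1/2 exhibits the failure

def kappa(h):
    """Candidate autocovariance: 1 at lag 0, rho at lags +-1, 0 elsewhere."""
    h = abs(h)
    return 1.0 if h == 0 else (rho if h == 1 else 0.0)

# Smallest n satisfying the bound n > 2*rho/(2*rho - 1) from the text.
n = int(2 * rho / (2 * rho - 1)) + 1
K = np.array([[kappa(i - j) for j in range(n)] for i in range(n)])
a = np.array([(-1.0) ** i for i in range(n)])  # a = (1, -1, 1, -1, ...)'

print(a @ K @ a)                    # n - 2*(n-1)*rho = -0.2 < 0 here
print(np.linalg.eigvalsh(K).min())  # K also has a negative eigenvalue
```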
Remark. If {Xt} is a stationary time series, then the vector (X1, . . . , Xn)' and the time-shifted vector (X1+h, . . . , Xn+h)' have the same mean vectors and covariance matrices for every integer h and positive integer n.
Remark. An iid sequence is strictly stationary.
Definition 13.1.6 (MA(q) process). {Xt} is a moving average process of order q (MA(q) process) if
\[
X_t = Z_t + \theta_1 Z_{t-1} + \cdots + \theta_q Z_{t-q}, \tag{13.1}
\]
where {Zt} ∼ WN(0, σ²) and θ1, . . . , θq are constants.
Remark. If {Zt} is iid noise, then (13.1) defines a strictly stationary time series. It follows also that {Xt} is q-dependent, i.e., that Xs and Xt are independent whenever |t − s| > q.
Remark. We say that a stationary time series is q-correlated if γ(h) = 0 whenever |h| > q.
A white noise sequence is then 0-correlated, while the MA(1) process is 1-correlated.
The importance of MA(q) processes derives from the fact that every q-correlated
process is an MA(q) process, i.e., if {Xt } is a stationary q-correlated time series with
mean 0, then it can be represented as the MA(q) process in (13.1).
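To see q-correlation in practice, here is a minimal simulation sketch (our own illustration, not from the notes; it assumes numpy, and the coefficients θ = (0.5, −0.3) and the sample size are arbitrary choices). The sample ACF of the simulated MA(2) series is close to zero beyond lag 2, as the theory predicts.

```python
import numpy as np

rng = np.random.default_rng(0)
theta = np.array([1.0, 0.5, -0.3])       # (theta_0, theta_1, theta_2), theta_0 = 1
q, n, sigma = len(theta) - 1, 10_000, 1.0

z = rng.normal(0.0, sigma, n + q)        # white noise {Z_t}
x = np.convolve(z, theta, mode="valid")  # X_t = Z_t + theta_1 Z_{t-1} + theta_2 Z_{t-2}

def sample_acf(x, h):
    """Sample autocorrelation at lag h."""
    xc = x - x.mean()
    return float((xc[h:] * xc[: len(xc) - h]).sum() / (xc * xc).sum())

for h in range(6):
    print(h, round(sample_acf(x, h), 3))  # near zero for every h > q = 2
```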
Definition 13.1.7 (AR(p) process). {Xt} is an autoregressive process of order p (AR(p) process) if
\[
X_t = \phi_1 X_{t-1} + \cdots + \phi_p X_{t-p} + Z_t, \tag{13.2}
\]
where {Zt} ∼ WN(0, σ²) and φ1, . . . , φp are constants.
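Paths like those shown in Figure 13.1 can be generated directly from the defining recursion. The sketch below is our illustration (numpy only; the coefficients φ = (0.6, −0.3) are an arbitrary stationary choice, and the burn-in period is added so that the zero starting values are forgotten).

```python
import numpy as np

rng = np.random.default_rng(1)
phi = np.array([0.6, -0.3])   # (phi_1, phi_2); this choice gives a stationary AR(2)
n, burn = 480, 500            # series length as in Figure 13.1, plus burn-in

z = rng.normal(size=n + burn)
x = np.zeros(n + burn)
p = len(phi)
for t in range(p, n + burn):
    # X_t = phi_1 X_{t-1} + ... + phi_p X_{t-p} + Z_t
    x[t] = phi @ x[t - p:t][::-1] + z[t]
x = x[burn:]                  # drop the burn-in
```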
[Figure 13.1 appears here: several rows of panels, one row per simulated process. The left panel of each row plots an excerpt of the series X(t) against t, the middle panel its sample autocorrelation function (ACF), and the right panel its partial autocorrelation function (PACF), for lags 0 to 25. The labeled panels are ARMA(0 / −0.7), ARMA(0 / 0.6), and ARMA(0 / 0.5, −0.3, 0.9, −0.2).]

Figure 13.1: Simulations of different MA(q) and AR(p) processes. The left column shows an excerpt (m = 96) of the whole time series (n = 480).
Definition 13.2.1 (Linear process). {Xt} is a linear process if it has the representation
\[
X_t = \sum_{j=-\infty}^{\infty} \psi_j Z_{t-j}, \tag{13.3}
\]
for all t, where {Zt} ∼ WN(0, σ²) and {ψj} is a sequence of constants with \(\sum_{j=-\infty}^{\infty} |\psi_j| < \infty\).

Remark. A linear process is called a moving average or MA(∞) if ψj = 0 for all j < 0, i.e., if
\[
X_t = \sum_{j=0}^{\infty} \psi_j Z_{t-j}.
\]
Proposition 13.2.2. Let {Yt} be a stationary time series with mean 0 and covariance function γY. If \(\sum_{j=-\infty}^{\infty} |\psi_j| < \infty\), then the time series
\[
X_t = \sum_{j=-\infty}^{\infty} \psi_j Y_{t-j} = \psi(B) Y_t
\]
is stationary with mean 0 and autocovariance function
\[
\gamma_X(h) = \sum_{j=-\infty}^{\infty} \sum_{k=-\infty}^{\infty} \psi_j \psi_k \, \gamma_Y(h - j + k). \tag{13.4}
\]
In the special case of the linear process (13.3),
\[
\gamma_X(h) = \sigma^2 \sum_{j=-\infty}^{\infty} \psi_j \psi_{j+h}. \tag{13.5}
\]

Proof. We have EXt = 0 and
\begin{align*}
E(X_{t+h} X_t) &= E\left[\left(\sum_{j=-\infty}^{\infty} \psi_j Y_{t+h-j}\right)\left(\sum_{k=-\infty}^{\infty} \psi_k Y_{t-k}\right)\right] \\
&= \sum_{j=-\infty}^{\infty} \sum_{k=-\infty}^{\infty} \psi_j \psi_k \, E(Y_{t+h-j} Y_{t-k}) \\
&= \sum_{j=-\infty}^{\infty} \sum_{k=-\infty}^{\infty} \psi_j \psi_k \, \gamma_Y(h - j + k),
\end{align*}
which shows that {Xt} is stationary with covariance function (13.4). Finally, if {Yt} is the white noise sequence {Zt} in (13.3), then γY(h − j + k) = σ² if k = j − h and 0 otherwise, from which (13.5) follows.
Example. Consider the MA(q) process in (13.1). We find EXt = 0 and \(EX_t^2 = \sigma^2 \sum_{j=0}^{q} \theta_j^2\) with θ0 = 1, and with Proposition 13.2.2 we get
\[
\gamma(h) =
\begin{cases}
\sigma^2 \sum_{j=0}^{q-|h|} \theta_j \theta_{j+|h|}, & \text{if } |h| \le q, \\
0, & \text{if } |h| > q.
\end{cases}
\]
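The piecewise formula translates directly into a few lines of code. This sketch is our own illustration (it assumes numpy, and the function name ma_acvf is ours); it computes γ(h) from θ1, . . . , θq and σ², and its output can be compared against sample autocovariances of a simulated series.

```python
import numpy as np

def ma_acvf(theta, sigma2, h):
    """gamma(h) for an MA(q) process; theta = (theta_1, ..., theta_q)."""
    th = np.concatenate(([1.0], np.asarray(theta, dtype=float)))  # prepend theta_0 = 1
    q, h = len(th) - 1, abs(h)
    if h > q:
        return 0.0                                  # q-correlated: zero beyond lag q
    return sigma2 * float(np.dot(th[: q - h + 1], th[h:]))  # sum_j theta_j theta_{j+|h|}

print([ma_acvf([0.5, -0.3], 1.0, h) for h in range(4)])
# [1.34, 0.35, -0.3, 0.0]
```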
Example. Consider the AR(1) equation Xt = φXt−1 + Zt (see page 12-12). Although the series is first observed at time t = 0, the process is regarded as having started at some time in the remote past. Substituting repeatedly for lagged values of Xt gives
\[
X_t = \sum_{j=0}^{J-1} \phi^j Z_{t-j} + \phi^J X_{t-J}. \tag{13.6}
\]
The right-hand side consists of two parts, the first of which is a moving average of lagged values of the white noise variable driving the process. The second part depends on the value of the process at time t − J. Taking expectations and treating Xt−J as a fixed number
yields
\[
E(X_t) = E\left(\sum_{j=0}^{J-1} \phi^j Z_{t-j}\right) + E(\phi^J X_{t-J}) = \phi^J X_{t-J}.
\]
If |φ| ≥ 1, the mean value of the process depends on the starting value Xt−J. Expression (13.6) therefore contains a deterministic component, and knowledge of Xt−J enables non-trivial predictions to be made for future values of the series. If, on the other hand,
|φ| < 1, this deterministic component is negligible if J is large. As J → ∞, it effectively
disappears and so if the process is regarded as having started at some point in the remote
past, it is quite legitimate to write (13.6) in the form
\[
X_t = \sum_{j=0}^{\infty} \phi^j Z_{t-j}, \qquad t = 0, \ldots, T.
\]
Since \(\sum_{j=0}^{\infty} |\phi|^j < \infty\), it follows from Proposition 13.2.2 that the AR(1) process is stationary with mean 0 if |φ| < 1, and its autocovariance function is given by
\[
\gamma_X(h) = \sigma^2 \sum_{j=0}^{\infty} \phi^j \phi^{j+h} = \sigma^2 \frac{\phi^h}{1 - \phi^2}
\]
for h ≥ 0.
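As a quick sanity check (our own illustration, assuming numpy; φ = 0.7, the seed, and the sample size are arbitrary choices), one can simulate a long AR(1) path and compare sample autocovariances with σ²φʰ/(1 − φ²).

```python
import numpy as np

rng = np.random.default_rng(2)
phi, sigma = 0.7, 1.0
n, burn = 200_000, 1_000

z = rng.normal(0.0, sigma, n + burn)
x = np.zeros(n + burn)
for t in range(1, n + burn):
    x[t] = phi * x[t - 1] + z[t]   # AR(1) recursion
x = x[burn:]                       # discard burn-in so the start is forgotten

for h in range(4):
    sample = float(np.mean(x[h:] * x[: n - h]))  # sample autocovariance at lag h
    theory = sigma**2 * phi**h / (1 - phi**2)    # sigma^2 phi^h / (1 - phi^2)
    print(h, round(sample, 3), round(theory, 3))
```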