
13 Stationary Processes

In time series analysis our goal is to predict a series that typically is not deterministic
but contains a random component. If this random component is stationary, then we
can develop powerful techniques to forecast its future values. These techniques will be
developed and discussed in this chapter.

13.1 Basic Properties


In Section 12.4 we introduced the concept of stationarity and defined the autocovariance
function (ACVF) of a stationary time series {Xt } at lag h as
γX (h) = Cov(Xt+h , Xt ), h = 0, ±1, ±2 . . .
and the autocorrelation function as
    ρX(h) := γX(h) / γX(0).
The autocovariance function and autocorrelation function provide a useful measure of
the degree of dependence among the values of a time series at different times and for this
reason play an important role when we consider the prediction of future values of the
series in terms of the past and present values. They can be estimated from observations
of X1 , . . . , Xn by computing the sample autocovariance function and autocorrelation
function as described in Definition 12.4.4.
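
Definition 12.4.4 is not reproduced here; the estimators below follow the usual convention γ̂X(h) = n⁻¹ Σ_{t=1}^{n−h} (Xt+h − X̄)(Xt − X̄) and ρ̂X(h) = γ̂X(h)/γ̂X(0), which may differ in detail from that definition. A minimal Python sketch of these estimators (the function names are illustrative, not from the notes):

    import numpy as np

    def sample_acvf(x, h):
        """Sample autocovariance at lag h >= 0 (divisor n, the usual convention)."""
        x = np.asarray(x, dtype=float)
        n = len(x)
        xbar = x.mean()
        return np.sum((x[h:] - xbar) * (x[:n - h] - xbar)) / n

    def sample_acf(x, max_lag):
        """Sample autocorrelations rho_hat(0), ..., rho_hat(max_lag)."""
        g0 = sample_acvf(x, 0)
        return np.array([sample_acvf(x, h) / g0 for h in range(max_lag + 1)])

    # Example: for white noise the sample ACF should be near zero at every lag h >= 1.
    rng = np.random.default_rng(0)
    z = rng.normal(size=500)
    print(np.round(sample_acf(z, 5), 3))
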
Proposition 13.1.1. Basic properties of the autocovariance function γ(·):
γ(0) ≥ 0,
|γ(h)| ≤ γ(0) for all h,
γ(h) = γ(−h) for all h, i.e., γ(·) is even.
Definition 13.1.2. A real-valued function κ defined on the integers is non-negative
definite if
    Σ_{i,j=1}^{n} ai κ(i − j) aj ≥ 0

for all positive integers n and vectors a = (a1, . . . , an)′ with real-valued components ai.
Proposition 13.1.3. A real-valued function defined on the integers is the autocovariance
function of a stationary time series if and only if it is even and non-negative definite.
Proof. To show that the autocovariance function γ(·) of any stationary time series {Xt }
is non-negative definite, let a be any n × 1 vector with real components a1 , . . . , an and
let Xn = (X1, . . . , Xn)′. By the non-negativity of variances,

    Var(a′ Xn) = a′ Γn a = Σ_{i,j=1}^{n} ai γ(i − j) aj ≥ 0,

where Γn is the covariance matrix of the random vector Xn. The last inequality, however, is precisely the statement that γ(·) is non-negative definite. The converse result, that there exists a stationary time series with autocovariance function κ if κ is even, real-valued, and non-negative definite, is more difficult to establish (for details see Brockwell and Davis (1991)).

Example. Let us show that the real-valued function κ(·) defined on the integers by

    κ(h) = 1 if h = 0,   ρ if h = ±1,   0 otherwise,

is an autocovariance function of a stationary time series if and only if |ρ| ≤ 1/2.

• If |ρ| ≤ 1/2, then κ(·) is the autocovariance function of an MA(1) process (see (12.2), p. 12-12) with σ² = (1 + θ²)⁻¹ and θ = (2ρ)⁻¹ (1 ± √(1 − 4ρ²)).

• If ρ > 1/2, K = [κ(i − j)]_{i,j=1}^{n} and a is the n-component vector a = (1, −1, 1, −1, . . .)′, then

    a′ K a = n − 2(n − 1)ρ < 0   for n > 2ρ/(2ρ − 1),

  which shows that κ(·) is not non-negative definite and therefore is not an autocovariance function.

• If ρ < −1/2, the same argument using the n-component vector a = (1, 1, 1, . . .)′ again shows that κ(·) is not non-negative definite (a numerical check of both cases is sketched below).
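
These cases are easy to confirm numerically: build the matrix K = [κ(i − j)] for a moderate n and inspect its smallest eigenvalue and the quadratic form a′Ka for the alternating vector. The following Python sketch is an illustration added here, not part of the original notes:

    import numpy as np

    def kappa_matrix(rho, n):
        """K[i, j] = kappa(i - j): 1 on the diagonal, rho on the first off-diagonals, 0 elsewhere."""
        idx = np.arange(n)
        lag = np.abs(idx[:, None] - idx[None, :])
        return np.where(lag == 0, 1.0, np.where(lag == 1, rho, 0.0))

    for rho in (0.4, 0.6):
        K = kappa_matrix(rho, n=50)
        a = np.tile([1.0, -1.0], 25)      # the alternating vector used in the argument above
        # For rho = 0.4 both quantities are positive; for rho = 0.6 both become negative.
        print(rho, np.linalg.eigvalsh(K).min(), a @ K @ a)
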

Remark. If {Xt} is a stationary time series, then the vector (X1, . . . , Xn)′ and the time-shifted vector (X1+h, . . . , Xn+h)′ have the same mean vectors and covariance matrices for every integer h and positive integer n.

Definition 13.1.4. {Xt} is a strictly stationary time series if

    (X1, . . . , Xn)′ =_d (X1+h, . . . , Xn+h)′

for all integers h and n ≥ 1. Here =_d is used to indicate that the two random vectors have the same joint distribution function.

Proposition 13.1.5. Properties of a strictly stationary time series {Xt}:

• The random variables Xt are identically distributed;

• (Xt, Xt+h)′ =_d (X1, X1+h)′ for all integers t and h;

• {Xt} is weakly stationary if E(Xt²) < ∞ for all t;

• Weak stationarity does not imply strict stationarity;

• An iid sequence is strictly stationary.
Definition 13.1.6 (MA(q) process). {Xt } is a moving average process of order q
(MA(q) process) if
Xt = Zt + θ1 Zt−1 + . . . + θq Zt−q , (13.1)
where {Zt} ∼ WN(0, σ²) and θ1, . . . , θq are constants.
Remark. If {Zt } is iid noise, then (13.1) defines a stationary time series that is strictly
stationary. It follows also that {Xt } is q-dependent, i.e., that Xs and Xt are independent
whenever |t − s| > q.
Remark. We say that a stationary time series is q-correlated if γ(h) = 0 whenever |h| > q.
A white noise sequence is then 0-correlated, while the MA(1) process is 1-correlated.
The importance of MA(q) processes derives from the fact that every q-correlated
process is an MA(q) process, i.e., if {Xt } is a stationary q-correlated time series with
mean 0, then it can be represented as the MA(q) process in (13.1).
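
The q-correlation cutoff is easy to see in a simulation. The sketch below is an added illustration (not from the notes): it generates an MA(2) path driven by iid Gaussian noise, so the sample ACF should be close to zero at every lag greater than 2.

    import numpy as np

    rng = np.random.default_rng(1)
    n, theta = 10_000, np.array([0.7, -0.4])   # MA(2): X_t = Z_t + 0.7 Z_{t-1} - 0.4 Z_{t-2}
    z = rng.normal(size=n + len(theta))
    x = z[len(theta):] + theta[0] * z[1:-1] + theta[1] * z[:-2]

    def acf(x, max_lag):
        x = x - x.mean()
        g = [np.dot(x[h:], x[:len(x) - h]) / len(x) for h in range(max_lag + 1)]
        return np.array(g) / g[0]

    print(np.round(acf(x, 6), 3))   # lags 3..6 should be near 0 (q-correlated with q = 2)
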
Definition 13.1.7 (AR(p) process). {Xt } is an autoregressive process of order p if

Xt = φ1 Xt−1 + . . . + φp Xt−p + Zt (13.2)

where {Zt} ∼ WN(0, σ²) and φ1, . . . , φp are constants.


Example. Figure 13.1 shows different MA(q) and AR(p) processes.

13.2 Linear Processes


The class of linear time series models, which includes the class of autoregressive moving
average (ARMA) models (see Chapter 14), provides a general framework for studying
stationary processes. In fact, every weakly stationary process is either a linear process or
can be transformed to a linear process by subtracting a deterministic component. This
result is known as Wold’s decomposition (see Brockwell and Davis (1991), pp. 187-191).
We therefore state some results from the theory of linear processes.
Definition 13.2.1. The time series {Xt} is a linear process if it has the representation

    Xt = Σ_{j=−∞}^{∞} ψj Zt−j,    (13.3)

for all t, where {Zt} ∼ WN(0, σ²) and {ψj} is a sequence of constants with Σ_{j=−∞}^{∞} |ψj| < ∞.
Remark. In terms of the backward shift operator B, the linear process (13.3) can be
written more compactly as
Xt = ψ(B)Zt ,
where ψ(B) = Σ_{j=−∞}^{∞} ψj B^j.

[Figure 13.1: six rows of panels; each row shows a simulated series X(t), its autocorrelation function (ACF), and its partial autocorrelation function (PACF) up to lag 25, for the processes ARMA(0 / −0.7), ARMA(0 / 0.6), ARMA(0.7 / 0), ARMA(−0.6 / 0), ARMA(0 / 0.5, −0.3, 0.9, −0.2), and ARMA(0.9, −0.6, 0.4, −0.2 / 0).]
Figure 13.1: Simulations of different MA(q) and AR(p) processes. The left column shows
an excerpt (m = 96) of the whole time series (n = 480).
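
The code behind Figure 13.1 is not part of the notes. A sketch of how one row of the figure could be reproduced in Python, assuming matplotlib and statsmodels are available and reading the panel titles as ARMA(AR parameters / MA parameters), so that ARMA(0.7 / 0) denotes an AR(1) process with φ = 0.7:

    import numpy as np
    import matplotlib.pyplot as plt
    from statsmodels.graphics.tsaplots import plot_acf, plot_pacf

    rng = np.random.default_rng(42)
    n, phi = 480, 0.7                      # series length as in the caption; AR(1) coefficient
    z = rng.normal(size=n)
    x = np.zeros(n)
    for t in range(1, n):                  # simulate X_t = phi * X_{t-1} + Z_t
        x[t] = phi * x[t - 1] + z[t]

    fig, axes = plt.subplots(1, 3, figsize=(12, 3))
    axes[0].plot(x[:96])                   # excerpt of m = 96 observations, as in the left column
    axes[0].set_xlabel("t")
    axes[0].set_title("Simulation ARMA(0.7 / 0)")
    plot_acf(x, lags=25, ax=axes[1])       # sample ACF up to lag 25
    plot_pacf(x, lags=25, ax=axes[2])      # sample PACF up to lag 25
    plt.tight_layout()
    plt.show()
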
Remark. A linear process is called a moving average or MA(∞) if ψj = 0 for all j < 0,
i.e., if
    Xt = Σ_{j=0}^{∞} ψj Zt−j.

Proposition 13.2.2. Let {Yt} be a stationary time series with mean 0 and covariance function γY. If Σ_{j=−∞}^{∞} |ψj| < ∞, then the time series

    Xt = Σ_{j=−∞}^{∞} ψj Yt−j = ψ(B) Yt

is stationary with mean 0 and autocovariance function

    γX(h) = Σ_{j=−∞}^{∞} Σ_{k=−∞}^{∞} ψj ψk γY(h + k − j).    (13.4)

In the special case where {Xt} is a linear process,

    γX(h) = σ² Σ_{j=−∞}^{∞} ψj ψj+h.    (13.5)

Proof. Since EYt = 0, we have EXt = 0 and

    E(Xt+h Xt) = E[(Σ_{j=−∞}^{∞} ψj Yt+h−j)(Σ_{k=−∞}^{∞} ψk Yt−k)]
               = Σ_{j=−∞}^{∞} Σ_{k=−∞}^{∞} ψj ψk E(Yt+h−j Yt−k)
               = Σ_{j=−∞}^{∞} Σ_{k=−∞}^{∞} ψj ψk γY(h − j + k),

which shows that {Xt} is stationary with covariance function (13.4). Finally, if {Yt} is the white noise sequence {Zt} in (13.3), then γY(h − j + k) = σ² if k = j − h and 0 otherwise, from which (13.5) follows.
Example. Consider the MA(q) process in (13.1). We find EXt = 0 and EXt² = σ² Σ_{j=0}^{q} θj², with θ0 = 1, and with Proposition 13.2.2 we get

    γ(h) = σ² Σ_{j=0}^{q−|h|} θj θj+|h|,   if |h| ≤ q,
    γ(h) = 0,                              if |h| > q.
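
This formula translates directly into a few lines of Python. The following sketch is an added illustration (with θ0 = 1 prepended as in the example) and evaluates γ(h) for an MA(2) with illustrative coefficients:

    import numpy as np

    def ma_acvf(theta, sigma2, max_lag):
        """gamma(h) = sigma^2 * sum_{j=0}^{q-|h|} theta_j theta_{j+|h|}, with theta_0 = 1."""
        th = np.concatenate(([1.0], np.asarray(theta, dtype=float)))
        q = len(th) - 1
        return np.array([sigma2 * np.dot(th[: q - h + 1], th[h:]) if h <= q else 0.0
                         for h in range(max_lag + 1)])

    print(ma_acvf(theta=[0.7, -0.4], sigma2=1.0, max_lag=4))
    # gamma(0) = 1 + 0.49 + 0.16 = 1.65, gamma(1) = 0.42, gamma(2) = -0.4, gamma(h) = 0 for h > 2
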

Example. Consider the AR(1) equation

Xt = φXt−1 + Zt , t = 0, ±1, ±2, . . .

(see page 12-12). Although the series is first observed at time t = 0, the process is
regarded as having started at some time in the remote past. Substituting for lagged
values of Xt gives
    Xt = Σ_{j=0}^{J−1} φ^j Zt−j + φ^J Xt−J.    (13.6)

The right-hand side consists of two parts, the first of which is a moving average of lagged values of the white noise variable driving the process. The second part depends on the value of the process at time t − J. Taking expectations and treating Xt−J as a fixed number yields

    E(Xt) = E(Σ_{j=0}^{J−1} φ^j Zt−j) + E(φ^J Xt−J) = φ^J Xt−J.

If |φ| ≥ 1, the mean value of the process depends on the starting value, Xt−J . Expression
(13.6) therefore contains a deterministic component and a knowledge of Xt−J enables
non-trivial prediction to be made for future values of the series. If, on the other hand,
|φ| < 1, this deterministic component is negligible if J is large. As J → ∞, it effectively
disappears and so if the process is regarded as having started at some point in the remote
past, it is quite legitimate to write (13.6) in the form

    Xt = Σ_{j=0}^{∞} φ^j Zt−j,    t = 0, . . . , T.

Since Σ_{j=0}^{∞} |φ|^j < ∞, it follows from Proposition 13.2.2 that the AR(1) process is stationary with mean 0 if |φ| < 1, and the autocovariance function is given by

    γX(h) = σ² Σ_{j=0}^{∞} φ^j φ^{j+h} = σ² φ^h / (1 − φ²)

for h ≥ 0.
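
As a quick numerical check (an added illustration, not from the notes), the closed form σ² φ^h / (1 − φ²) can be compared against a truncation of the MA(∞) sum σ² Σ_j φ^j φ^{j+h} for an illustrative choice of φ and σ²:

    import numpy as np

    sigma2, phi = 1.0, 0.6
    J = 200                                    # truncation point for the MA(infinity) sum

    for h in range(4):
        closed_form = sigma2 * phi**h / (1 - phi**2)
        truncated = sigma2 * sum(phi**j * phi**(j + h) for j in range(J))
        print(h, round(closed_form, 6), round(truncated, 6))   # the two columns should agree
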
