ARMA Autocorrelation Functions

For a moving average process, MA(q):

$$ x_t = w_t + \theta_1 w_{t-1} + \theta_2 w_{t-2} + \cdots + \theta_q w_{t-q}. $$
So (with $\theta_0 = 1$)

$$ \gamma(h) = \mathrm{cov}\left( x_{t+h}, x_t \right)
 = E\left[ \left( \sum_{j=0}^{q} \theta_j w_{t+h-j} \right) \left( \sum_{k=0}^{q} \theta_k w_{t-k} \right) \right]
 = \begin{cases} \sigma_w^2 \sum_{j=0}^{q-h} \theta_j \theta_{j+h}, & 0 \le h \le q, \\ 0, & h > q. \end{cases} $$
So the ACF is

$$ \rho(h) = \begin{cases} \dfrac{\sum_{j=0}^{q-h} \theta_j \theta_{j+h}}{\sum_{j=0}^{q} \theta_j^2}, & 0 \le h \le q, \\ 0, & h > q. \end{cases} $$
Notes:

- In these expressions, $\theta_0 = 1$ for convenience.
- $\rho(q) \neq 0$ but $\rho(h) = 0$ for $h > q$. This characterizes MA(q).
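As a numerical check, this formula can be evaluated directly in R and compared with the theoretical ACF from ARMAacf; the theta values below are arbitrary illustrations, not taken from these notes.

# MA(2) ACF from the formula above, with hypothetical theta values
theta <- c(0.6, -0.3)               # theta_1, theta_2
th    <- c(1, theta)                # prepend theta_0 = 1
q     <- length(theta)
rho   <- sapply(0:q, function(h)
  sum(th[1:(q - h + 1)] * th[(1 + h):(q + 1)]) / sum(th^2))
rho                                 # rho(0), rho(1), rho(2) from the formula
ARMAacf(ma = theta, lag.max = q)    # same values from stats::ARMAacf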
For an autoregressive process, AR(p):

$$ x_t = \phi_1 x_{t-1} + \phi_2 x_{t-2} + \cdots + \phi_p x_{t-p} + w_t. $$
So

$$ \gamma(h) = \mathrm{cov}\left( x_{t+h}, x_t \right)
 = E\left[ \left( \sum_{j=1}^{p} \phi_j x_{t+h-j} + w_{t+h} \right) x_t \right]
 = \sum_{j=1}^{p} \phi_j \gamma(h - j) + \mathrm{cov}\left( w_{t+h}, x_t \right). $$
Because $x_t$ is causal, $x_t$ is $w_t$ plus a linear combination of $w_{t-1}, w_{t-2}, \ldots$ So

$$ \mathrm{cov}\left( w_{t+h}, x_t \right) = \begin{cases} \sigma_w^2, & h = 0, \\ 0, & h > 0. \end{cases} $$
Hence

$$ \gamma(h) = \sum_{j=1}^{p} \phi_j \gamma(h - j), \quad h > 0, $$

and

$$ \gamma(0) = \sum_{j=1}^{p} \phi_j \gamma(j) + \sigma_w^2. $$
If we know the parameters $\phi_1, \phi_2, \ldots, \phi_p$ and $\sigma_w^2$, these equations for $h = 0$ and $h = 1, 2, \ldots, p$ form $p + 1$ linear equations in the $p + 1$ unknowns $\gamma(0), \gamma(1), \ldots, \gamma(p)$.

The other autocovariances can then be found recursively from the equation for $h > p$.
Alternatively, if we know (or have estimated) $\gamma(0), \gamma(1), \ldots, \gamma(p)$, they form $p + 1$ linear equations in the $p + 1$ parameters $\phi_1, \phi_2, \ldots, \phi_p$ and $\sigma_w^2$.

These are the Yule-Walker equations.
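As an illustration, here is a minimal R sketch that sets up and solves the Yule-Walker system for a hypothetical AR(2) (the phi values and $\sigma_w^2$ are arbitrary), then checks the implied ACF against ARMAacf:

# Yule-Walker equations for h = 0, 1, 2, using gamma(-h) = gamma(h);
# the unknowns are gamma(0), gamma(1), gamma(2).
phi    <- c(0.5, 0.25)
sigma2 <- 1
A <- rbind(c( 1,      -phi[1],    -phi[2]),   # h = 0 equation
           c(-phi[1], 1 - phi[2],  0     ),   # h = 1 equation
           c(-phi[2], -phi[1],     1     ))   # h = 2 equation
gam <- solve(A, c(sigma2, 0, 0))    # gamma(0), gamma(1), gamma(2)
gam / gam[1]                        # implied ACF: rho(0), rho(1), rho(2)
ARMAacf(ar = phi, lag.max = 2)      # agrees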
For the ARMA(p, q) model with p > 0 and q > 0:

$$ x_t = \phi_1 x_{t-1} + \phi_2 x_{t-2} + \cdots + \phi_p x_{t-p} + w_t + \theta_1 w_{t-1} + \theta_2 w_{t-2} + \cdots + \theta_q w_{t-q}, $$

a generalized set of Yule-Walker equations must be used.

The moving average models ARMA(0, q) = MA(q) are the only ones with a closed-form expression for $\gamma(h)$.

For AR(p) and ARMA(p, q) with p > 0, the recursive equation means that for $h > \max(p, q + 1)$, $\gamma(h)$ is a sum of geometrically decaying terms, possibly damped oscillations.
The recursive equation is

$$ \gamma(h) = \sum_{j=1}^{p} \phi_j \gamma(h - j), \quad h > q. $$
What kinds of sequences satisfy an equation like this? Try $\gamma(h) = z^{-h}$ for some constant $z$. The equation becomes

$$ 0 = z^{-h} - \sum_{j=1}^{p} \phi_j z^{-(h-j)} = z^{-h} \left( 1 - \sum_{j=1}^{p} \phi_j z^{j} \right) = z^{-h} \phi(z). $$
So if $\phi(z) = 0$, then $\gamma(h) = z^{-h}$ satisfies the equation. Since $\phi(z)$ is a polynomial of degree $p$, there are $p$ solutions, say $z_1, z_2, \ldots, z_p$.

So a more general solution is

$$ \gamma(h) = \sum_{l=1}^{p} c_l z_l^{-h}, $$

for any constants $c_1, c_2, \ldots, c_p$.

If $z_1, z_2, \ldots, z_p$ are distinct, this is the most general solution; if some roots are repeated, the general form is a little more complicated.
If all of $z_1, z_2, \ldots, z_p$ are real, this is a sum of geometrically decaying terms. If any root is complex, its complex conjugate must also be a root, and these two terms may be combined into geometrically decaying sine-cosine terms.

The constants $c_1, c_2, \ldots, c_p$ are determined by initial conditions; in the ARMA case, these are the Yule-Walker equations.

Note that the various rates of decay are determined by the zeros of $\phi(z)$, the autoregressive operator, and do not depend on $\theta(z)$, the moving average operator.
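Numerically, the zeros of $\phi(z)$ and the implied decay rates can be found with R's polyroot; the phi values here are arbitrary (a minimal sketch):

# Zeros of phi(z) = 1 - phi_1 z - phi_2 z^2 for a hypothetical AR(2)
phi <- c(0.5, 0.25)
z   <- polyroot(c(1, -phi))   # coefficients in increasing powers of z
Mod(z)                        # both moduli exceed 1 for a causal model
1 / Mod(z)                    # geometric decay rates |z_l|^{-1} of the ACF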
Example: ARMA(1, 1)

$$ x_t = \phi x_{t-1} + \theta w_{t-1} + w_t. $$

The recursion is

$$ \gamma(h) = \phi \gamma(h - 1), \quad h = 2, 3, \ldots $$

So $\rho(h) = c \phi^h$ for $h = 1, 2, \ldots$, but $c \neq 1$. Graphically, the ACF decays geometrically, but with a different value at $h = 0$.
[Figure: theoretical ACF of an ARMA(1,1), plotted with ARMAacf(ar = 0.9, ma = 0.5, 24): geometric decay from lag 1 onward.]
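The figure can be reproduced in R, assuming the parameter values recovered from the plot label:

# Theoretical ACF of the ARMA(1,1) with phi = 0.9, theta = 0.5
rho <- ARMAacf(ar = 0.9, ma = 0.5, lag.max = 24)
plot(0:24, rho, type = "h", xlab = "Lag", ylab = "ACF")
# Successive ratios rho(h)/rho(h-1) for h >= 2 all equal phi = 0.9,
# confirming geometric decay from lag 1 onward:
rho[-(1:2)] / rho[2:(length(rho) - 1)]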
The Partial Autocorrelation Function

An MA(q) can be identified from its ACF: non-zero to lag q, and zero afterwards. We need a similar tool for AR(p). The partial autocorrelation function (PACF) fills that role.
Recall: for multivariate random variables X, Y, Z, the partial correlations of X and Y given Z are the correlations of:

- the residuals of X from its regression on Z; and
- the residuals of Y from its regression on Z.

Here "regression" means conditional expectation, or best linear prediction, based on population distributions, not a sample calculation.

In a time series, the partial autocorrelations are defined as

$$ \phi_{h,h} = \text{the partial correlation of } x_{t+h} \text{ and } x_t \text{ given } x_{t+h-1}, x_{t+h-2}, \ldots, x_{t+1}. $$
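This definition can be checked by simulation: regress out the intermediate value and correlate the residuals. A sketch for lag 2 of a hypothetical AR(2) (the coefficients and sample size are arbitrary):

# Sample partial autocorrelation at lag 2 via the residual definition
set.seed(1)
n  <- 10000
x  <- arima.sim(model = list(ar = c(0.5, 0.25)), n = n)
x0 <- x[1:(n - 2)]         # x_t
x1 <- x[2:(n - 1)]         # x_{t+1}
x2 <- x[3:n]               # x_{t+2}
r2 <- resid(lm(x2 ~ x1))   # residual of x_{t+2} regressed on x_{t+1}
r0 <- resid(lm(x0 ~ x1))   # residual of x_t regressed on x_{t+1}
cor(r2, r0)                # close to phi_{2,2} = phi_2 = 0.25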
For an autoregressive process, AR(p):

$$ x_t = \phi_1 x_{t-1} + \phi_2 x_{t-2} + \cdots + \phi_p x_{t-p} + w_t. $$

If $h > p$, the regression of $x_{t+h}$ on $x_{t+h-1}, x_{t+h-2}, \ldots, x_{t+1}$ is

$$ \phi_1 x_{t+h-1} + \phi_2 x_{t+h-2} + \cdots + \phi_p x_{t+h-p}. $$

So the residual is just $w_{t+h}$, which is uncorrelated with $x_{t+h-1}, x_{t+h-2}, \ldots, x_{t+1}$ and $x_t$.
So the partial autocorrelation is zero for $h > p$:

$$ \phi_{h,h} = 0, \quad h > p. $$

We can also show that $\phi_{p,p} = \phi_p$, which is non-zero by assumption.

So $\phi_{p,p} \neq 0$ but $\phi_{h,h} = 0$ for $h > p$. This characterizes AR(p).
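ARMAacf can also return the theoretical PACF, which makes the cutoff visible; the phi values below are arbitrary:

# Theoretical PACF of an AR(2): nonzero at lags 1 and 2, zero beyond
ARMAacf(ar = c(0.5, 0.25), lag.max = 6, pacf = TRUE)
# The lag-2 value equals phi_2 = 0.25; lags 3 through 6 are zero.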
The Inverse Autocorrelation Function

SAS's proc arima also shows the Inverse Autocorrelation Function (IACF). The IACF of the ARMA(p, q) model

$$ \phi(B) x_t = \theta(B) w_t $$

is defined to be the ACF of the inverse (or dual) process

$$ \theta(B) x_t^{\text{(inverse)}} = \phi(B) w_t. $$

The IACF has the same property as the PACF: AR(p) is characterized by an IACF that is nonzero at lag p but zero for larger lags.
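Base R has no IACF function, but the definition above gives a direct recipe: compute the ACF of the dual process. A sketch for a hypothetical ARMA(1,1); the sign flips arise because R's parameterization writes the AR operator as $1 - \sum_j \phi_j B^j$ but the MA operator as $1 + \sum_j \theta_j B^j$:

# IACF of an ARMA(1,1) as the ACF of the dual process theta(B) x = phi(B) w;
# swapping the operators means ar = -theta and ma = -phi in ARMAacf's terms.
phi   <- 0.9
theta <- 0.5
ARMAacf(ar = -theta, ma = -phi, lag.max = 6)   # tails off, since p > 0 and q > 0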
Summary: Identification of ARMA processes

AR(p) is characterized by a PACF or IACF that is:

- nonzero at lag p;
- zero for lags larger than p.

MA(q) is characterized by an ACF that is:

- nonzero at lag q;
- zero for lags larger than q.

For anything else, try ARMA(p, q) with p > 0 and q > 0.
For p > 0 and q > 0:

            AR(p)                   MA(q)                   ARMA(p, q)
    ACF     Tails off               Cuts off after lag q    Tails off
    PACF    Cuts off after lag p    Tails off               Tails off
    IACF    Cuts off after lag p    Tails off               Tails off

Note: these characteristics are used to guide the initial choice of a model; estimation and model-checking will often lead to a different model.
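For instance, the ARMA(p, q) column can be checked numerically for a hypothetical ARMA(1,1), where both functions tail off rather than cutting off:

# ACF and PACF of a hypothetical ARMA(1,1): both tail off, matching the table
round(rbind(ACF  = ARMAacf(ar = 0.9, ma = 0.5, lag.max = 5)[-1],
            PACF = ARMAacf(ar = 0.9, ma = 0.5, lag.max = 5, pacf = TRUE)), 3)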
Other ARMA Identification Techniques

SAS's proc arima offers the MINIC option on the identify statement, which produces a table of SBC criteria for various values of p and q.

The identify statement has two other options: ESACF and SCAN. Both produce tables in which the pattern of zero and non-zero values characterizes p and q.

See Section 3.4.10 in Brocklebank and Dickey.
options linesize = 80;
ods html file = 'varve3.html';

data varve;
  infile '../data/varve.dat';     /* glacial varve thickness series */
  input varve;
  lv = log(varve);                /* work with log varve thicknesses */
run;

proc arima data = varve;
  title 'Use identify options to identify a good model';
  identify var = lv(1) minic esacf scan;   /* first difference of lv */
  estimate q = 1 method = ml;              /* MA(1), maximum likelihood */
  estimate q = 2 method = ml;              /* MA(2) */
  estimate p = 1 q = 1 method = ml;        /* ARMA(1,1) */
run;
proc arima output ...