LN LinearTSModels (Contd)

Damped sine waves.

3.2 Partial autocorrelation function (pacf)


It is difficult to distinguish between AR processes of different order using the correlogram.
Partial autocorrelations provide better discrimination between different AR processes.
In an AR(2) process, the parameter $\phi_2$ is the partial correlation between $y_t$ and $y_{t-2}$, holding $y_{t-1}$ constant.
Recall the definition of the partial correlation coefficient:
$$ r_{13.2} = \frac{r_{13} - r_{12} r_{23}}{\sqrt{1 - r_{12}^2}\,\sqrt{1 - r_{23}^2}} $$
Let 1, 2 and 3 denote $y$ and its first and second lags, respectively. Then,
$$ r_{12} = r_{23} = \rho_1, \qquad r_{13} = \rho_2 $$
Substituting these into the formula for the partial correlation gives
$$ r_{13.2} = \frac{\rho_2 - \rho_1^2}{1 - \rho_1^2} $$

Note that the Yule-Walker equations above can be solved for $\phi_2$ to get
$$ \phi_2 = \frac{\rho_2 - \rho_1^2}{1 - \rho_1^2} = r_{13.2} $$
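This equivalence is easy to verify numerically. The sketch below (plain Python; the AR(2) parameter values are hypothetical) solves the AR(2) Yule-Walker equations for $\rho_1$ and $\rho_2$ and confirms that the partial correlation formula recovers $\phi_2$.

```python
# Hypothetical AR(2) parameters, chosen only for illustration.
phi1, phi2 = 0.5, 0.3

# AR(2) Yule-Walker equations: rho_1 = phi1 + phi2*rho_1 and
# rho_2 = phi1*rho_1 + phi2.
rho1 = phi1 / (1 - phi2)
rho2 = phi1 * rho1 + phi2

# Partial correlation r_{13.2} with r12 = r23 = rho1 and r13 = rho2.
r13_2 = (rho2 - rho1**2) / (1 - rho1**2)

print(r13_2)  # equals phi2 = 0.3, up to floating point
```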
The Yule-Walker equations for an AR(3) process
$$ y_t = \phi_1 y_{t-1} + \phi_2 y_{t-2} + \phi_3 y_{t-3} + e_t $$
are
$$ \rho_1 = \phi_1 + \phi_2 \rho_1 + \phi_3 \rho_2 $$
$$ \rho_2 = \phi_1 \rho_1 + \phi_2 + \phi_3 \rho_1 $$
$$ \rho_3 = \phi_1 \rho_2 + \phi_2 \rho_1 + \phi_3 $$
$\phi_3$ is the partial correlation between $y_t$ and $y_{t-3}$. If, however, the process is only AR(2), the acf shows that
$$ \rho_3 = \phi_1 \rho_2 + \phi_2 \rho_1 $$
This implies that $\phi_3 = 0$. Similar results carry over to higher order AR processes. For example, the pacf for an AR(3) cuts off after the third lag.

4 The Moving Average Processes


In an MA process, the variable is expressed as a linear function of current and past white noise disturbances. An MA($q$) process is given by
$$ Y_t = e_t + \theta_1 e_{t-1} + \theta_2 e_{t-2} + \cdots + \theta_q e_{t-q} $$  (29)
For $q = 1$, we have the MA(1) process
$$ Y_t = e_t + \theta_1 e_{t-1} $$  (30)
Consider the variance and autocovariances of this process:
$$ E(Y_t^2) = E(e_t + \theta_1 e_{t-1})^2 = \sigma_e^2 (1 + \theta_1^2) $$
$$ E(Y_t Y_{t-1}) = E[(e_t + \theta_1 e_{t-1})(e_{t-1} + \theta_1 e_{t-2})] \;\Rightarrow\; \gamma_1 = \theta_1 \sigma_e^2 $$
$$ E(Y_t Y_{t-2}) = E[(e_t + \theta_1 e_{t-1})(e_{t-2} + \theta_1 e_{t-3})] \;\Rightarrow\; \gamma_2 = 0 $$
Similarly, higher order autocovariances all equal zero. The autocorrelations of an MA(1) process are therefore:
$$ \rho_1 = \frac{\theta_1}{1 + \theta_1^2}, \qquad \rho_2 = \rho_3 = \cdots = 0 $$
If $|\theta_1| < 1$, the MA(1) process may be expressed as an infinite autoregression in $Y_t$. Hence, its partial autocorrelations do not cut off but damp toward zero.
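As a quick check, the sketch below (assuming NumPy; $\theta_1 = 0.6$ is a hypothetical value) simulates an MA(1) and compares the sample autocorrelations with the theoretical values above.

```python
import numpy as np

# Simulate an MA(1) with a hypothetical theta_1 and compare the sample
# autocorrelations with rho_1 = theta_1/(1 + theta_1^2) and rho_2 = 0.
rng = np.random.default_rng(0)
theta1 = 0.6
e = rng.standard_normal(200_000)
y = e[1:] + theta1 * e[:-1]          # Y_t = e_t + theta_1 e_{t-1}

def sample_acf(x, k):
    """Sample autocorrelation of x at lag k."""
    x = x - x.mean()
    return float(np.dot(x[:-k], x[k:]) / np.dot(x, x))

print(sample_acf(y, 1))  # close to 0.6/1.36 = 0.441...
print(sample_acf(y, 2))  # close to 0
```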

5 The ARMA Processes


The ARMA($p,q$) process is defined as
$$ y_t = \phi_1 y_{t-1} + \cdots + \phi_p y_{t-p} + e_t + \theta_1 e_{t-1} + \cdots + \theta_q e_{t-q} $$  (31)
For $p = q = 1$ we have the ARMA(1,1) process:
$$ y_t = \phi_1 y_{t-1} + e_t + \theta_1 e_{t-1} $$  (32)
Squaring this and taking expectations, we can show that
$$ \gamma_0 = \frac{1 + 2\phi_1\theta_1 + \theta_1^2}{1 - \phi_1^2}\,\sigma_e^2 $$
Multiplying by $y_{t-1}$ and taking expectations yields
$$ \gamma_1 = \phi_1 \gamma_0 + \theta_1 \sigma_e^2 = \frac{(\phi_1 + \theta_1)(1 + \phi_1\theta_1)}{1 - \phi_1^2}\,\sigma_e^2 $$
Higher-order autocovariances are given by
$$ \gamma_k = \phi_1 \gamma_{k-1}, \qquad k = 2, 3, \ldots $$
The autocorrelation function of the ARMA(1,1) process is thus:
$$ \rho_1 = \frac{(\phi_1 + \theta_1)(1 + \phi_1\theta_1)}{1 + 2\phi_1\theta_1 + \theta_1^2}, \qquad \rho_k = \phi_1 \rho_{k-1}, \quad k = 2, 3, \ldots $$
The first coefficient depends on the parameters of both the AR and MA parts of the process. Subsequent coefficients decline exponentially at a rate that depends on the AR parameter.
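The ARMA(1,1) autocorrelations can likewise be checked by simulation; the sketch below assumes NumPy, with hypothetical parameter values.

```python
import numpy as np

# Simulate an ARMA(1,1) with hypothetical phi_1, theta_1 and check the
# stated ACF: rho_1 from the closed form, then rho_2 = phi_1 * rho_1.
rng = np.random.default_rng(1)
phi1, theta1 = 0.7, 0.4
n = 200_000
e = rng.standard_normal(n)
y = np.zeros(n)
for t in range(1, n):
    y[t] = phi1 * y[t - 1] + e[t] + theta1 * e[t - 1]

def sample_acf(x, k):
    """Sample autocorrelation of x at lag k."""
    x = x - x.mean()
    return float(np.dot(x[:-k], x[k:]) / np.dot(x, x))

rho1 = (phi1 + theta1) * (1 + phi1 * theta1) / (1 + 2 * phi1 * theta1 + theta1**2)
print(rho1, sample_acf(y, 1))                     # theory vs. sample: close
print(phi1 * sample_acf(y, 1), sample_acf(y, 2))  # rho_2 = phi_1 * rho_1
```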

5.1 Patterns of ARMA(p,q) ACF and PACF


General ARMA($p,q$) model:
$$ y_t = \phi_1 y_{t-1} + \cdots + \phi_p y_{t-p} + e_t + \theta_1 e_{t-1} + \cdots + \theta_q e_{t-q} $$
White noise, i.e. $p = q = 0$:
- All $\rho_s = 0$ and all $\phi_{ss} = 0$
AR(1), i.e. $p = 1$, $q = 0$, $0 < \phi_1 < 1$:
- ACF decays directly: $\rho_s = \phi_1^s$
- PACF cuts off after lag 1: $\phi_{11} = \phi_1$; $\phi_{ss} = 0$ for $s \geq 2$
AR(1), i.e. $p = 1$, $q = 0$, $-1 < \phi_1 < 0$:
- ACF exhibits oscillating decay: $\rho_s = \phi_1^s$
- PACF cuts off after lag 1: $\phi_{11} = \phi_1$; $\phi_{ss} = 0$ for $s \geq 2$
AR($p$), $p \geq 2$, $q = 0$:
- ACF decays toward zero. Decay may be direct, or may oscillate.
- PACF spikes through lag $p$; all $\phi_{ss} = 0$ for $s > p$
MA(1), i.e. $p = 0$, $q = 1$, $\theta_1 > 0$:
- ACF exhibits a positive spike at lag 1; $\rho_s = 0$ for $s > 1$
- PACF exhibits oscillating decay:
$$ \phi_{ss} = \frac{-(-\theta_1)^s}{1 + \theta_1^2 + \cdots + \theta_1^{2s}} $$
MA(1), i.e. $p = 0$, $q = 1$, $\theta_1 < 0$:
- ACF exhibits a negative spike at lag 1; $\rho_s = 0$ for $s > 1$
- PACF exhibits monotonic decay: $\phi_{ss} < 0$ for all $s$
ARMA(1,1), i.e. $p = q = 1$, $0 < \phi_1 < 1$:
- ACF exhibits monotonic decay beginning after lag 1.
- PACF exhibits oscillating decay beginning after lag 1; $\phi_{11} = \rho_1$.
ARMA(1,1), i.e. $p = q = 1$, $-1 < \phi_1 < 0$:
- ACF exhibits oscillating decay beginning after lag 1.
- PACF exhibits monotonic decay beginning after lag 1; $\phi_{11} = \rho_1$.
ARMA($p,q$):
- ACF decays (either monotonically or with oscillation) beginning after lag $q$
- PACF decays (either monotonically or with oscillation) beginning after lag $p$
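These patterns can be reproduced empirically. The sketch below (assuming NumPy; the AR(2) parameters are hypothetical) estimates $\phi_{ss}$ directly from its regression definition and shows the PACF of an AR(2) cutting off after lag 2.

```python
import numpy as np

# Simulate an AR(2) with hypothetical parameters.
rng = np.random.default_rng(2)
phi1, phi2 = 0.5, 0.3
n = 100_000
e = rng.standard_normal(n)
y = np.zeros(n)
for t in range(2, n):
    y[t] = phi1 * y[t - 1] + phi2 * y[t - 2] + e[t]

def sample_pacf(x, s):
    """phi_ss: last OLS coefficient when x_t is regressed on x_{t-1},...,x_{t-s}."""
    x = x - x.mean()
    X = np.column_stack([x[s - j:len(x) - j] for j in range(1, s + 1)])
    beta, *_ = np.linalg.lstsq(X, x[s:], rcond=None)
    return float(beta[-1])

for s in (1, 2, 3, 4):
    print(s, round(sample_pacf(y, s), 3))  # lag 2 near phi2; lags 3, 4 near 0
```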

6 Wold’s Theorem
Let $\{x_t\}$ be any zero-mean covariance stationary process. Then we can write it as
$$ x_t = B(L)\varepsilon_t = \sum_{i=0}^{\infty} b_i \varepsilon_{t-i} $$  (33)
where $\varepsilon_t \sim WN(0, \sigma_\varepsilon^2)$, $b_0 = 1$ and $\sum_{i=0}^{\infty} b_i^2 < \infty$.
It is clear that an MA($q$) attempts to approximate Wold's representation.
What about an AR($p$)? Can it be seen as an approximation of Wold's representation?
$$ x_t = \phi_1 x_{t-1} + \phi_2 x_{t-2} + \cdots + \phi_p x_{t-p} + \varepsilon_t $$
$$ x_t - \phi_1 x_{t-1} - \phi_2 x_{t-2} - \cdots - \phi_p x_{t-p} = \varepsilon_t $$
$$ \Phi(L) x_t = \varepsilon_t $$
where
$$ \Phi(L) = 1 - \phi_1 L - \phi_2 L^2 - \cdots - \phi_p L^p $$
If $x_t$ is covariance stationary, the roots of $\Phi(L)$ all lie outside the unit circle. Then $x_t$ can be expressed as a convergent infinite moving average of the innovations:
$$ x_t = \frac{1}{\Phi(L)}\,\varepsilon_t $$
For example, in the case of AR(1), $\Phi(L) = 1 - \phi_1 L$ and
$$ \frac{1}{\Phi(L)} = \frac{1}{1 - \phi_1 L} = 1 + \phi_1 L + \phi_1^2 L^2 + \cdots $$
and
$$ x_t = \frac{1}{\Phi(L)}\,\varepsilon_t = \varepsilon_t + \phi_1 \varepsilon_{t-1} + \phi_1^2 \varepsilon_{t-2} + \cdots $$
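One way to see the expansion concretely: the MA($\infty$) weights of an AR(1) are its impulse responses. A minimal sketch, with a hypothetical $\phi_1$:

```python
# Feed a unit shock at time 0 into the AR(1) recursion x_t = phi_1 x_{t-1} + e_t
# and record the responses; they equal the MA(inf) weights 1, phi_1, phi_1^2, ...
phi1 = 0.6  # hypothetical value
x, weights = 0.0, []
for i in range(5):
    x = phi1 * x + (1.0 if i == 0 else 0.0)
    weights.append(x)
print(weights)  # 1, phi_1, phi_1^2, phi_1^3, phi_1^4
```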
Both MA and AR processes impose restrictions in the estimation of the Wold representation. A more flexible form is provided by the ARMA processes.
$$ x_t = \phi_1 x_{t-1} + \cdots + \phi_p x_{t-p} + \varepsilon_t + \theta_1 \varepsilon_{t-1} + \cdots + \theta_q \varepsilon_{t-q} $$
$$ x_t - \phi_1 x_{t-1} - \cdots - \phi_p x_{t-p} = \varepsilon_t + \theta_1 \varepsilon_{t-1} + \cdots + \theta_q \varepsilon_{t-q} $$
$$ \Phi(L) x_t = \Theta(L) \varepsilon_t $$
$$ x_t = \frac{\Theta(L)}{\Phi(L)}\,\varepsilon_t $$
For example, an ARMA(1,1) gives
$$ x_t = \frac{1 + \theta_1 L}{1 - \phi_1 L}\,\varepsilon_t = (1 + \theta_1 L)(1 + \phi_1 L + \phi_1^2 L^2 + \cdots)\varepsilon_t = [1 + (\phi_1 + \theta_1)L + (\phi_1^2 + \phi_1\theta_1)L^2 + \cdots]\varepsilon_t $$
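The expansion can be checked by truncated polynomial multiplication; the sketch below (assuming NumPy, with hypothetical $\phi_1$, $\theta_1$) compares the product coefficients with the recursion $c_j = \phi_1 c_{j-1}$ for $j \geq 2$.

```python
import numpy as np

# Multiply (1 + theta_1 L) by the truncated geometric series for 1/(1 - phi_1 L)
# and compare with the coefficients 1, (phi_1 + theta_1), phi_1(phi_1 + theta_1), ...
phi1, theta1 = 0.7, 0.4  # hypothetical values
K = 8
geometric = phi1 ** np.arange(K)                 # 1, phi_1, phi_1^2, ...
product = np.convolve([1.0, theta1], geometric)[:K]

recursive = np.empty(K)
recursive[0] = 1.0
recursive[1] = phi1 + theta1
for j in range(2, K):
    recursive[j] = phi1 * recursive[j - 1]       # c_j = phi_1 * c_{j-1}

print(np.allclose(product, recursive))  # True
```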
Why do AR and MA processes have ACF and PACF that behave as described above?
Consider the MA(1) process: $x_t = \varepsilon_t + \theta_1 \varepsilon_{t-1}$. Innovations up to lag 1 affect $x_t$, so values more than one period apart have no common influences. Therefore the ACF cuts off abruptly after 1 period. Similar reasoning explains why the ACF of an MA($q$) process cuts off abruptly after $q$ lags.
All MA processes are covariance stationary, regardless of the values of the parameters.
An MA($q$) process $x_t = \varepsilon_t + \theta_1 \varepsilon_{t-1} + \cdots + \theta_q \varepsilon_{t-q}$ is said to be invertible if and only if the roots of the lag polynomial $1 + \theta_1 L + \cdots + \theta_q L^q = 0$ all lie outside the unit circle.
Invertibility means that the MA($q$) process can be expressed as an infinite order AR process.
40
For example, the M A(1) process xt = "t + 1"t 1 is
invertible if the root of 1 + 1L = 0 is greater than 1 in
absolute value, i.e. j 1j < 1; in which case
1
xt = "t
1 + 1L
2 2
(1 1L + 1L )xt = "t
2 3
xt = 1 xt 1 1 xt 2 + 1 xt 3 + "t
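A short numerical check of invertibility (assuming NumPy; $\theta_1 = 0.5$ is a hypothetical value): truncating the infinite autoregression at $J$ lags recovers the innovation $\varepsilon_t$ up to an error of order $\theta_1^J$.

```python
import numpy as np

# Simulate an invertible MA(1), then recover one innovation from the
# truncated AR(inf) representation: e_t ~= sum_j (-theta_1)^j x_{t-j}.
rng = np.random.default_rng(3)
theta1 = 0.5
e = rng.standard_normal(1_000)
x = e.copy()
x[1:] += theta1 * e[:-1]                 # x_t = e_t + theta_1 e_{t-1}

t, J = 500, 40
e_hat = sum((-theta1) ** j * x[t - j] for j in range(J))
print(abs(e_hat - e[t]))  # tiny: truncation error is O(theta_1^J)
```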
The PACF, $\phi_{ss}$, is defined as the regression coefficient of the last term in an autoregression:
$$ x_t = \phi_{s1} x_{t-1} + \phi_{s2} x_{t-2} + \cdots + \phi_{ss} x_{t-s} + v_t $$
Since an invertible MA process has an infinite order AR representation, its PACF will not cut off abruptly after some lag.
As shown above, a covariance stationary AR(1) process can be represented as an infinite order MA process:
$$ x_t = \frac{1}{\Phi(L)}\,\varepsilon_t = \varepsilon_t + \phi_1 \varepsilon_{t-1} + \phi_1^2 \varepsilon_{t-2} + \cdots $$
Values of the series at different points in time have common influences, no matter how far apart they are. Therefore, its ACF does not cut off abruptly.
Similar reasoning leads to persistent ACF for higher order AR processes.
The PACF of an AR($p$) process cuts off abruptly after $p$ lags because if the model generating the series is
$$ x_t = \phi_1 x_{t-1} + \phi_2 x_{t-2} + \cdots + \phi_p x_{t-p} + \varepsilon_t $$
then the parameter $\phi_{p+1}$ in the autoregression
$$ x_t = \phi_1 x_{t-1} + \phi_2 x_{t-2} + \cdots + \phi_p x_{t-p} + \phi_{p+1} x_{t-p-1} + \varepsilon_t $$
must be zero.
