Econometrics [EM2008]

Lecture 7
Stationary univariate time series

Irene Mammi

[email protected]

Academic Year 2018/2019

outline

- stationary univariate time series
  - AR, MA and ARMA processes
  - autocorrelations and partial autocorrelations
  - seasonality
- References:
  - Johnston, J. and J. DiNardo (1997), Econometric Methods, 4th Edition, McGraw-Hill, New York, Chapter 7
introduction

- a univariate time series models the outcome variable in terms of its own past values and some disturbance:

  x_t = f(x_{t-1}, x_{t-2}, \ldots, u_t)

- we need to specify three features: the functional form f(\cdot), the number of lags, and a structure for the error term
- autoregressive, AR(p), process:

  x_t = \alpha_1 x_{t-1} + \alpha_2 x_{t-2} + \cdots + \alpha_p x_{t-p} + u_t

- moving average, MA(q), process:

  u_t = e_t - \beta_1 e_{t-1} - \cdots - \beta_q e_{t-q}

- autoregressive moving average, ARMA(p, q), process:

  x_t = \alpha_1 x_{t-1} + \cdots + \alpha_p x_{t-p} + e_t - \beta_1 e_{t-1} - \cdots - \beta_q e_{t-q}
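a minimal numpy sketch simulating one sample path of each process above; the parameter values (α = 0.7, β = 0.4) and the sample size are illustrative choices, not taken from the lecture:

```python
# simulate AR(1), MA(1) and ARMA(1,1) paths from their defining recursions;
# alpha, beta and T are arbitrary illustrative choices
import numpy as np

rng = np.random.default_rng(0)
T = 500
e = rng.standard_normal(T)      # white-noise disturbances e_t
alpha, beta = 0.7, 0.4

# AR(1): x_t = alpha * x_{t-1} + e_t
ar1 = np.zeros(T)
for t in range(1, T):
    ar1[t] = alpha * ar1[t - 1] + e[t]

# MA(1): x_t = e_t - beta * e_{t-1}   (sign convention of the slides)
ma1 = e.copy()
ma1[1:] -= beta * e[:-1]

# ARMA(1,1): x_t = alpha * x_{t-1} + e_t - beta * e_{t-1}
arma11 = np.zeros(T)
for t in range(1, T):
    arma11[t] = alpha * arma11[t - 1] + e[t] - beta * e[t - 1]
```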

introduction (cont.)

- it might seem reductive to consider a univariate series but, e.g., take the equations

  C_t = \alpha_0 + \alpha_1 Y_t + \alpha_2 C_{t-1} + u_t
  Y_t \equiv C_t + I_t

  where we generally consider C and Y to be endogenous variables and I to be the exogenous variable
- some substitutions and simple algebra lead to

  C_t - \frac{\alpha_2}{1 - \alpha_1} C_{t-1} = \frac{\alpha_0}{1 - \alpha_1} + \frac{\alpha_1}{1 - \alpha_1} I_t + \frac{1}{1 - \alpha_1} u_t

  and

  Y_t - \frac{\alpha_2}{1 - \alpha_1} Y_{t-1} = \frac{\alpha_0}{1 - \alpha_1} + \frac{1}{1 - \alpha_1} (I_t - \alpha_2 I_{t-1}) + \frac{1}{1 - \alpha_1} u_t

  so that C and Y both have an AR(1) component with the same coefficient on the lag term, and on the RHS there is an error term whose properties depend on the behavior of I
lag operator

- the lag operator L is a linear operator which, applied to x_t, gives the previous values of the series:

  L(x_t) = x_{t-1}
  L^2(x_t) = L[L(x_t)] = L(x_{t-1}) = x_{t-2}
  L^s(x_t) = x_{t-s}
  (1 - L) x_t = x_t - x_{t-1} = \Delta x_t
  L(1 - L) x_t = x_{t-1} - x_{t-2} = \Delta x_{t-1}

  where \Delta is the first difference operator
- let A(L) = 1 - \alpha L denote a first-order polynomial in L and consider

  (1 - \alpha L)(1 + \alpha L + \alpha^2 L^2 + \alpha^3 L^3 + \cdots + \alpha^p L^p) = 1 - \alpha^{p+1} L^{p+1}

  where, as p \to \infty, \alpha^{p+1} L^{p+1} \to 0 provided that |α| < 1
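the telescoping identity above can be verified numerically by multiplying the coefficient arrays of the two polynomials in L; the values of α and p below are arbitrary:

```python
# check (1 - a L)(1 + a L + ... + a^p L^p) = 1 - a^(p+1) L^(p+1)
# via coefficient arrays ordered by increasing degree; a and p are arbitrary
import numpy as np
from numpy.polynomial import polynomial as P

a, p = 0.6, 8
left = np.array([1.0, -a])            # 1 - a L
geom = a ** np.arange(p + 1)          # 1 + a L + ... + a^p L^p
prod = P.polymul(left, geom)

expected = np.zeros(p + 2)
expected[0], expected[-1] = 1.0, -a ** (p + 1)
print(np.allclose(prod, expected))    # True: all the middle terms cancel
```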
lag operator (cont.)

- we may write the reciprocal, or inverse, of A(L) as

  A^{-1}(L) = \frac{1}{1 - \alpha L} = 1 + \alpha L + \alpha^2 L^2 + \alpha^3 L^3 + \cdots

- e.g., the economic model

  C_t = \alpha_0 + \alpha_1 Y_t + \alpha_2 C_{t-1} + u_t
  I_t = \beta_0 + \beta_1 (Y_{t-1} - Y_{t-2}) + \nu_t
  Y_t \equiv C_t + I_t + G_t

  may be rewritten as

  \begin{bmatrix} 1 - \alpha_2 L & 0 & -\alpha_1 \\ 0 & 1 & -\beta_1 L(1 - L) \\ -1 & -1 & 1 \end{bmatrix}
  \begin{bmatrix} C_t \\ I_t \\ Y_t \end{bmatrix} =
  \begin{bmatrix} \alpha_0 & 0 \\ \beta_0 & 0 \\ 0 & 1 \end{bmatrix}
  \begin{bmatrix} D_t \\ G_t \end{bmatrix} +
  \begin{bmatrix} u_t \\ \nu_t \\ 0 \end{bmatrix}

  where D_t is a dummy variable (equal to one in every period, so that it carries the intercepts \alpha_0 and \beta_0)
- in matrix form, the system becomes A(L) x_t = B z_t + w_t
stationarity

- a stochastic process is said to be strictly stationary if its properties are unaffected by a change of time origin: in other words, the joint probability distribution at t_1, t_2, \ldots, t_m must be the same as the joint probability distribution at t_{1+k}, t_{2+k}, \ldots, t_{m+k}, where k is an arbitrary shift in time
- a stochastic process is said to be weakly (or covariance) stationary if

  1. E(y_t) = E(y_{t-s}) = \mu
  2. E[(y_t - \mu)^2] = E[(y_{t-s} - \mu)^2], or var(y_t) = var(y_{t-s}) = \sigma^2
  3. E[(y_t - \mu)(y_{t-s} - \mu)] = E[(y_{t-j} - \mu)(y_{t-j-s} - \mu)] = \gamma_s, or cov(y_t, y_{t-s}) = cov(y_{t-j}, y_{t-j-s}) = \gamma_s
stationarity (cont.)

- stationary process: observations are regularly spread about the mean; mean and variance do not vary over time
- non-stationary process in the mean: considering different time intervals, the mean varies
- non-stationary process in the variance: considering different time intervals, the variance varies

[Figure: three illustrative sample paths, one for each case above.]
ARMA modeling

AR(1) process

- the model reads

  y_t = m + \alpha y_{t-1} + e_t

  where e_t is white noise
- we can rewrite the equation as

  (1 - \alpha L) y_t = m + e_t

  which gives

  y_t = (1 + \alpha L + \alpha^2 L^2 + \cdots)(m + e_t)

- since m is a constant, we can write

  y_t = (1 + \alpha + \alpha^2 + \cdots) m + (e_t + \alpha e_{t-1} + \alpha^2 e_{t-2} + \cdots)
ARMA modeling (cont.)

- provided |α| < 1, this gives

  E(y_t) = \frac{m}{1 - \alpha} = \mu

  ⇒ y has a constant unconditional mean, independent of time
- y also has a constant unconditional variance

  \sigma_y^2 = E(y_t - \mu)^2 = \frac{\sigma_e^2}{1 - \alpha^2}

- alternatively, write

  x_t = \alpha x_{t-1} + e_t

  where x_t = y_t - \mu
- then square both sides and take expectations to find

  E(x_t^2) = \alpha^2 E(x_{t-1}^2) + E(e_t^2) + 2\alpha E(x_{t-1} e_t)

  where the last term on the RHS vanishes
ARMA modeling (cont.)

- when |α| < 1,

  \sigma_y^2 = E(x_t^2) = E(x_{t-1}^2) = \cdots

  so that

  \sigma_y^2 = \alpha^2 \sigma_y^2 + \sigma_e^2

- multiplying both sides of x_t = \alpha x_{t-1} + e_t by x_{t-1} and taking expectations gives

  E(x_t x_{t-1}) = \alpha E(x_{t-1}^2) + E(x_{t-1} e_t)

- defining the autocovariance coefficients as \gamma_s = E(x_t x_{t-s}), we have

  \gamma_1 = \alpha \gamma_0, \quad \gamma_2 = \alpha \gamma_1

  and, in general,

  \gamma_k = \alpha \gamma_{k-1} = \alpha^k \gamma_0, \quad k = 1, 2, \ldots
ARMA modeling (cont.)

- the autocorrelation coefficients for a stationary series are defined by

  \rho_k = \frac{E(x_t x_{t-k})}{\sqrt{var(x_t)} \sqrt{var(x_{t-k})}} = \frac{\gamma_k}{\gamma_0}

- nb: the autocovariances and autocorrelation coefficients are symmetrical about lag zero
- the autocorrelation coefficients for the AR(1) process are

  \rho_k = \alpha \rho_{k-1} = \alpha^k, \quad k = 1, 2, \ldots

- the formula giving the autocorrelation coefficients is the autocorrelation function (acf); its graphical representation is the correlogram
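a quick numerical illustration of the geometric decay ρ_k = α^k: compare the theoretical acf of an AR(1) with the sample acf of a simulated path (α = 0.7 and the sample size are illustrative choices):

```python
# theoretical vs sample acf of a simulated AR(1); alpha and T are arbitrary
import numpy as np

rng = np.random.default_rng(0)
alpha, T, K = 0.7, 2000, 6

x = np.zeros(T)
for t in range(1, T):
    x[t] = alpha * x[t - 1] + rng.standard_normal()

def sample_acf(s, k):
    """Sample autocorrelation of series s at lag k (mean-adjusted)."""
    s = s - s.mean()
    return (s[k:] @ s[:-k]) / (s @ s) if k > 0 else 1.0

for k in range(K + 1):
    print(f"lag {k}: theory {alpha**k:6.3f}, sample {sample_acf(x, k):6.3f}")
```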
ARMA modeling (cont.)

Figure 1: correlograms for stationary AR(1) series

ARMA modeling (cont.)

AR(2) process

- the AR(2) process is defined as

  y_t = m + \alpha_1 y_{t-1} + \alpha_2 y_{t-2} + e_t

- assuming stationarity, the unconditional mean is

  \mu = m / (1 - \alpha_1 - \alpha_2)

- defining x_t = y_t - \mu, we may rewrite the previous equation as

  x_t = \alpha_1 x_{t-1} + \alpha_2 x_{t-2} + e_t

- multiplying by x_t and taking expectations, we have

  \gamma_0 = \alpha_1 \gamma_1 + \alpha_2 \gamma_2 + E(x_t e_t)

- since E(x_t e_t) = \sigma_e^2,

  \gamma_0 = \alpha_1 \gamma_1 + \alpha_2 \gamma_2 + \sigma_e^2
ARMA modeling (cont.)

- similarly,

  \gamma_1 = \alpha_1 \gamma_0 + \alpha_2 \gamma_1
  \gamma_2 = \alpha_1 \gamma_1 + \alpha_2 \gamma_0

- after some substitutions and simple algebra we get

  \gamma_0 = \frac{(1 - \alpha_2) \sigma_e^2}{(1 + \alpha_2)(1 - \alpha_1 - \alpha_2)(1 + \alpha_1 - \alpha_2)}

- under stationarity, this variance must be a constant, positive number
- the stationarity conditions for the AR(2) process therefore are:

  \alpha_2 + \alpha_1 < 1
  \alpha_2 - \alpha_1 < 1
  |\alpha_2| < 1
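a one-line check of the three inequalities; the example pairs below are arbitrary:

```python
# test whether a candidate (alpha1, alpha2) pair satisfies the AR(2)
# stationarity conditions listed above
def ar2_is_stationary(a1: float, a2: float) -> bool:
    return (a2 + a1 < 1) and (a2 - a1 < 1) and (abs(a2) < 1)

print(ar2_is_stationary(0.5, 0.3))   # True
print(ar2_is_stationary(0.8, 0.5))   # False: alpha1 + alpha2 >= 1
```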
ARMA modeling (cont.)

- we can also restate the autocorrelation coefficients in terms of the Yule-Walker equations for AR(2) processes:

  \rho_1 = \alpha_1 + \alpha_2 \rho_1
  \rho_2 = \alpha_1 \rho_1 + \alpha_2

- solving for the first two autocorrelation coefficients gives

  \rho_1 = \frac{\alpha_1}{1 - \alpha_2}, \quad \rho_2 = \frac{\alpha_1^2}{1 - \alpha_2} + \alpha_2

- the acf for the AR(2) process is

  \rho_k = \alpha_1 \rho_{k-1} + \alpha_2 \rho_{k-2}, \quad k = 3, 4, \ldots
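the Yule-Walker recursion delivers the whole acf from (α_1, α_2); a short sketch with an illustrative stationary pair:

```python
# theoretical AR(2) autocorrelations rho_0..rho_kmax from the recursion above
def ar2_acf(a1: float, a2: float, kmax: int) -> list:
    rho = [1.0, a1 / (1.0 - a2)]        # rho_0 and rho_1
    rho.append(a1 * rho[1] + a2)        # rho_2
    for _ in range(3, kmax + 1):
        rho.append(a1 * rho[-1] + a2 * rho[-2])
    return rho[: kmax + 1]

print([round(r, 3) for r in ar2_acf(0.5, 0.3, 8)])
```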
ARMA modeling (cont.)

roots of the polynomial in the lag operator

- rewrite x_t = \alpha_1 x_{t-1} + \alpha_2 x_{t-2} + e_t as

  A(L) x_t = e_t

  where

  A(L) = 1 - \alpha_1 L - \alpha_2 L^2

- express the quadratic form as the product of two factors:

  A(L) = 1 - \alpha_1 L - \alpha_2 L^2 = (1 - \lambda_1 L)(1 - \lambda_2 L)

- the link between the α and λ parameters is

  \lambda_1 + \lambda_2 = \alpha_1 \quad \text{and} \quad \lambda_1 \lambda_2 = -\alpha_2

- the λ's may be seen as the roots of \lambda^2 - \alpha_1 \lambda - \alpha_2 = 0, which is the characteristic equation of the second-order process
ARMA modeling (cont.)

- its roots are

  \lambda_1, \lambda_2 = \frac{\alpha_1 \pm \sqrt{\alpha_1^2 + 4\alpha_2}}{2}

  and need to satisfy \lambda_1 + \lambda_2 = \alpha_1 and \lambda_1 \lambda_2 = -\alpha_2
- we can write

  A^{-1}(L) = \frac{1}{(1 - \lambda_1 L)(1 - \lambda_2 L)} = \frac{c}{1 - \lambda_1 L} + \frac{d}{1 - \lambda_2 L}

  where c = -\lambda_1 / (\lambda_2 - \lambda_1) and d = \lambda_2 / (\lambda_2 - \lambda_1)
- then

  x_t = A^{-1}(L) e_t = \frac{c}{1 - \lambda_1 L} e_t + \frac{d}{1 - \lambda_2 L} e_t

- from the results on the AR(1) process, stationarity of the AR(2) process requires

  |\lambda_1| < 1 \quad \text{and} \quad |\lambda_2| < 1
ARMA modeling (cont.)

- the AR(2) case allows for the possibility of complex roots, which will occur if \alpha_1^2 + 4\alpha_2 < 0
- the roots may then be written as

  \lambda_1, \lambda_2 = h \pm vi

  where h and v are the real numbers h = \alpha_1 / 2 and v = \frac{1}{2}\sqrt{-(\alpha_1^2 + 4\alpha_2)}, and i is the imaginary number i = \sqrt{-1}, giving i^2 = -1
- the absolute value, or modulus, of each complex root is

  |\lambda_j| = \sqrt{h^2 + v^2} = \sqrt{-\alpha_2}, \quad j = 1, 2

  giving 0 < -\alpha_2 < 1 as the condition for the autocorrelations to approach zero as time increases
- nb: for the stationarity condition to be met, the roots of the relevant polynomial in the lag operator, A(z) = 1 - \alpha_1 z - \alpha_2 z^2, should lie outside the unit circle
ARMA modeling (cont.)

- the roots of A(z) are the values of z that solve the equation

  A(z) = 1 - \alpha_1 z - \alpha_2 z^2 = (1 - \lambda_1 z)(1 - \lambda_2 z) = 0

- the roots are z_j = 1/\lambda_j (j = 1, 2), so the condition |λ_j| < 1 is equivalent to |z_j| > 1, i.e. to the roots of A(z) lying outside the unit circle
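both equivalent criteria are easy to check numerically: the λ's must lie inside the unit circle, or the z's outside it; the pair (α_1, α_2) = (0.5, 0.3) below is illustrative:

```python
# stationarity of an AR(2) via the roots of the characteristic equation
# lambda^2 - a1*lambda - a2 = 0 and of A(z) = 1 - a1 z - a2 z^2;
# np.roots takes coefficients in order of decreasing power
import numpy as np

a1, a2 = 0.5, 0.3
lam = np.roots([1.0, -a1, -a2])     # characteristic roots lambda_1, lambda_2
z = np.roots([-a2, -a1, 1.0])       # roots of A(z)

print(np.abs(lam))                  # both moduli < 1  -> stationary
print(np.abs(z))                    # both moduli > 1  -> same conclusion
print(np.allclose(np.sort(np.abs(z)), np.sort(1 / np.abs(lam))))  # z_j = 1/lambda_j
```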
ARMA modeling (cont.)

Figure 2: correlograms for stationary AR(2) series

ARMA modeling (cont.)

partial autocorrelation function (pacf)

- in an AR(2) process, the \alpha_2 parameter is the partial correlation between x_t and x_{t-2} with x_{t-1} held constant
- recall partial correlations in the three-variable case:

  r_{13.2} = \frac{r_{13} - r_{12} r_{23}}{\sqrt{1 - r_{12}^2} \sqrt{1 - r_{23}^2}}

- for stationary time series, letting 1, 2, 3 denote x and its first and second lags, we have

  r_{12} = corr(x_t, x_{t-1}) = corr(x_{t-1}, x_{t-2}) = r_{23} = \rho_1
  r_{13} = corr(x_t, x_{t-2}) = \rho_2

- therefore

  r_{13.2} = \frac{\rho_2 - \rho_1^2}{1 - \rho_1^2}
ARMA modeling (cont.)

- we can obtain the partial correlations from the autocorrelations by exploiting the Yule-Walker equations to get

  \alpha_2 = \frac{\rho_2 - \rho_1^2}{1 - \rho_1^2} = r_{13.2}

- the pacf of an AR(2) process will cut off after the second lag
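a numerical illustration that the lag-2 partial autocorrelation of a simulated AR(2) recovers α_2, using the formula above; parameters and sample size are illustrative:

```python
# estimate the lag-2 pacf of a simulated AR(2) from sample autocorrelations
import numpy as np

rng = np.random.default_rng(1)
a1, a2, T = 0.5, 0.3, 20000

x = np.zeros(T)
for t in range(2, T):
    x[t] = a1 * x[t - 1] + a2 * x[t - 2] + rng.standard_normal()

def acf(s, k):
    s = s - s.mean()
    return (s[k:] @ s[:-k]) / (s @ s)

rho1, rho2 = acf(x, 1), acf(x, 2)
print(round((rho2 - rho1**2) / (1 - rho1**2), 3))   # close to alpha2 = 0.3
```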
ARMA modeling (cont.)

MA process

- the AR(1) process x_t = \alpha x_{t-1} + e_t may be inverted to give

  x_t = e_t + \alpha e_{t-1} + \alpha^2 e_{t-2} + \cdots

  which is an MA process of infinite order, MA(∞)
- the MA(1) process is

  x_t = e_t - \beta_1 e_{t-1}

- its autocovariances are

  \gamma_0 = (1 + \beta_1^2) \sigma_e^2
  \gamma_1 = -\beta_1 \sigma_e^2
  \gamma_2 = \gamma_3 = \cdots = 0
ARMA modeling (cont.)

- these give the autocorrelation coefficients as

  \rho_1 = \frac{-\beta_1}{1 + \beta_1^2}, \quad \rho_2 = \rho_3 = \cdots = 0

- the MA(1) process may be inverted to give e_t as an infinite series in x_t, x_{t-1}, \ldots, i.e.

  e_t = x_t + \beta_1 x_{t-1} + \beta_1^2 x_{t-2} + \cdots

  that is,

  x_t = -\beta_1 x_{t-1} - \beta_1^2 x_{t-2} - \cdots + e_t

- because this is an AR(∞) series, the partial autocorrelations do not cut off, but the autocorrelations are zero after the first lag
- the properties of a pure MA process are the converse of those of a pure AR process: the acf of an MA process cuts off after the order of the process, and the pacf damps towards zero
- invertibility condition: |β_1| < 1
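a quick check that the sample acf of a simulated MA(1) is close to −β_1/(1 + β_1^2) at lag 1 and close to zero afterwards; β_1 = 0.4 and the sample size are illustrative:

```python
# sample acf of a simulated MA(1) versus the theoretical values above
import numpy as np

rng = np.random.default_rng(2)
beta, T = 0.4, 20000

e = rng.standard_normal(T)
x = e.copy()
x[1:] -= beta * e[:-1]              # x_t = e_t - beta * e_{t-1}

xc = x - x.mean()
acf = [(xc[k:] @ xc[:-k]) / (xc @ xc) for k in range(1, 5)]
print([round(v, 3) for v in acf])            # ~rho_1, then ~0 at lags 2+
print(round(-beta / (1 + beta**2), 3))       # theoretical rho_1
```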
ARMA modeling (cont.)

ARMA process

- the general ARMA(p, q) process is

  A(L) x_t = B(L) e_t

  where

  A(L) = 1 - \alpha_1 L - \alpha_2 L^2 - \cdots - \alpha_p L^p
  B(L) = 1 - \beta_1 L - \beta_2 L^2 - \cdots - \beta_q L^q

- stationarity requires the roots of A(L) to lie outside the unit circle; invertibility places the same condition on the roots of B(L)
- given these conditions, the ARMA(p, q) may alternatively be expressed as a pure AR process of infinite order or as a pure MA process of infinite order, i.e.

  B^{-1}(L) A(L) x_t = e_t \quad \text{or} \quad x_t = A^{-1}(L) B(L) e_t
ARMA modeling (cont.)

- the simplest mixed process is the ARMA(1,1):

  x_t = \alpha x_{t-1} + e_t - \beta e_{t-1}

- squaring the previous equation and taking expectations, after some algebra we get

  \sigma_x^2 = \gamma_0 = \frac{1 - 2\alpha\beta + \beta^2}{1 - \alpha^2} \sigma_e^2

- multiplying the ARMA(1,1) by x_{t-1} and taking expectations leads to

  \gamma_1 = \alpha \gamma_0 - \beta \sigma_e^2 = \frac{(\alpha - \beta)(1 - \alpha\beta)}{1 - \alpha^2} \sigma_e^2

- higher-order autocovariances are given by

  \gamma_k = \alpha \gamma_{k-1}, \quad k = 2, 3, \ldots
ARMA modeling (cont.)

- the acf for the ARMA(1,1) is thus

  \rho_1 = \frac{(\alpha - \beta)(1 - \alpha\beta)}{1 - 2\alpha\beta + \beta^2}
  \rho_k = \alpha \rho_{k-1}, \quad k = 2, 3, \ldots

Table 1: correlation patterns

  process      | acf                          | pacf
  -------------|------------------------------|------------------------------
  AR(p)        | infinite: damps out          | finite: cuts off after lag p
  MA(q)        | finite: cuts off after lag q | infinite: damps out
  ARMA(p, q)   | infinite: damps out          | infinite: damps out
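a final sketch checking the ARMA(1,1) acf formulas against a long simulated path; (α, β) = (0.7, 0.4) are illustrative values:

```python
# sample vs theoretical acf of a simulated ARMA(1,1)
import numpy as np

rng = np.random.default_rng(3)
alpha, beta, T = 0.7, 0.4, 50000

e = rng.standard_normal(T)
x = np.zeros(T)
for t in range(1, T):
    x[t] = alpha * x[t - 1] + e[t] - beta * e[t - 1]

xc = x - x.mean()
sample = [(xc[k:] @ xc[:-k]) / (xc @ xc) for k in range(1, 4)]

rho1 = (alpha - beta) * (1 - alpha * beta) / (1 - 2 * alpha * beta + beta**2)
theory = [rho1 * alpha ** (k - 1) for k in range(1, 4)]
print([round(v, 3) for v in sample])   # sample acf at lags 1-3
print([round(v, 3) for v in theory])   # theoretical acf at lags 1-3
```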
