Econometrics (EM2008)
Lecture 7
Stationary univariate time series
Irene Mammi
outline
I References:
I Johnston, J. and J. DiNardo (1997), Econometric Methods, 4th Edition, McGraw-Hill, New York, Chapter 7
introduction
I a univariate time series models the outcome variable in terms of its own past values and a disturbance:
x_t = f(x_{t-1}, x_{t-2}, ..., u_t)
I we need to specify three features: the functional form f(·), the number of lags, and a structure for the error term
I autoregressive, AR(p), process:
x_t = α_1 x_{t-1} + α_2 x_{t-2} + ... + α_p x_{t-p} + u_t
I if the disturbance is itself a moving average, MA(q), process,
u_t = e_t - β_1 e_{t-1} - ... - β_q e_{t-q}
then combining the two gives the ARMA(p, q) process (simulated in the sketch below):
x_t = α_1 x_{t-1} + ... + α_p x_{t-p} + e_t - β_1 e_{t-1} - ... - β_q e_{t-q}
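As a numerical illustration of the definition above, the following sketch simulates the ARMA(1,1) special case directly from its defining recursion; the coefficient values, the seed and the sample size are illustrative choices, not taken from the slides.

```python
# Minimal sketch: simulate x_t = alpha1*x_{t-1} + e_t - beta1*e_{t-1}
# (an ARMA(1,1) process) from Gaussian white noise e_t.
import numpy as np

rng = np.random.default_rng(0)
alpha1, beta1, T = 0.7, 0.4, 500   # illustrative parameter values

e = rng.standard_normal(T)         # white-noise disturbances e_t
x = np.zeros(T)
for t in range(1, T):
    x[t] = alpha1 * x[t - 1] + e[t] - beta1 * e[t - 1]

print(x[:5])                       # first few simulated observations
```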
introduction (cont.)
I it might seem reductive to consider a univariate series, but take, e.g., the two-equation model
C_t = α_0 + α_1 Y_t + α_2 C_{t-1} + u_t
Y_t ≡ C_t + I_t
I solving such a system for a single variable delivers an equation in that variable's own lags and a composite disturbance, i.e. a univariate time-series representation
lag operator
I the lag operator L is a linear operator which, for any value of x_t, gives the previous value of the series (illustrated in the sketch below):
L(x_t) = x_{t-1}
L^2(x_t) = L[L(x_t)] = L(x_{t-1}) = x_{t-2}
L^s(x_t) = x_{t-s}
(1 - L)x_t = x_t - x_{t-1} = Δx_t
L(1 - L)x_t = x_{t-1} - x_{t-2} = Δx_{t-1}
(1 - αL)(1 + αL + α^2 L^2 + α^3 L^3 + ... + α^p L^p) = 1 - α^{p+1} L^{p+1}
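The lag-operator rules above can be checked on a toy series; the sketch below assumes pandas is available and uses shift() for L and diff() for (1 - L), with arbitrary illustrative numbers.

```python
# L(x_t) = x_{t-1} corresponds to shift(1); (1 - L)x_t = Δx_t corresponds to diff().
import pandas as pd

s = pd.Series([1.0, 3.0, 6.0, 10.0, 15.0])   # toy series
print(s.shift(1))          # L(x_t)   = x_{t-1}
print(s.shift(2))          # L^2(x_t) = x_{t-2}
print(s.diff())            # (1 - L)x_t = x_t - x_{t-1} = Δx_t
print(s.diff().shift(1))   # L(1 - L)x_t = Δx_{t-1}
```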
lag operator (cont.)
I we may write the reciprocal, or inverse, of A(L) = 1 - αL as
A^{-1}(L) = 1/(1 - αL) = 1 + αL + α^2 L^2 + α^3 L^3 + ...
I e.g., the economic model
C_t = α_0 + α_1 Y_t + α_2 C_{t-1} + u_t
I_t = β_0 + β_1 (Y_{t-1} - Y_{t-2}) + ν_t
Y_t ≡ C_t + I_t + G_t
may be rewritten in lag-operator form as
| 1 - α_2 L    0    -α_1          | | C_t |   | α_0 |   | 0 |       | u_t |
| 0            1    -β_1 L(1 - L) | | I_t | = | β_0 | + | 0 | G_t + | ν_t |
| -1          -1     1            | | Y_t |   | 0   |   | 1 |       | 0   |
stationarity
I a series y_t is (weakly or covariance) stationary if, for all t, s and j:
1. E(y_t) = E(y_{t-s}) = µ
2. E[(y_t - µ)^2] = E[(y_{t-s} - µ)^2], i.e. var(y_t) = var(y_{t-s}) = σ^2
3. E[(y_t - µ)(y_{t-s} - µ)] = E[(y_{t-j} - µ)(y_{t-j-s} - µ)] = γ_s, i.e. cov(y_t, y_{t-s}) = cov(y_{t-j}, y_{t-j-s}) = γ_s
stationarity (cont.)
stationary process:
I observations are spread regularly about the mean: the mean and variance do not vary over time
ARMA modeling
AR(1) process
(1 - αL)y_t = m + e_t
which gives (checked numerically below)
y_t = (1 + αL + α^2 L^2 + ...)(m + e_t)
y_t = (1 + α + α^2 + ...)m + (e_t + αe_{t-1} + α^2 e_{t-2} + ...)
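A quick numerical check of the expansion, under illustrative parameter values: generating y_t from the AR(1) recursion and comparing it with a long truncation of (1 + αL + α^2 L^2 + ...)(m + e_t) gives essentially the same number when |α| < 1.

```python
# Compare the AR(1) recursion with the truncated lag-polynomial expansion.
import numpy as np

rng = np.random.default_rng(1)
alpha, m, T, K = 0.6, 2.0, 400, 200   # K expansion terms; truncation error ~ alpha**K

e = rng.standard_normal(T)
y = np.zeros(T)
for t in range(1, T):
    y[t] = m + alpha * y[t - 1] + e[t]        # (1 - alpha*L) y_t = m + e_t

t = T - 1
y_expanded = sum(alpha**j * (m + e[t - j]) for j in range(K))
print(y[t], y_expanded)                        # nearly identical for |alpha| < 1
```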
ARMA modeling (cont.)
I provided |α| < 1, this gives
E(y_t) = m/(1 - α) = µ
⇒ y has a constant unconditional mean, independent of time
I y also has a constant unconditional variance
σ_y^2 = E(y_t - µ)^2 = σ_e^2/(1 - α^2)
I alternatively, write
x_t = αx_{t-1} + e_t
where x_t = y_t - µ
I squaring both sides and taking expectations confirms γ_0 = σ_e^2/(1 - α^2); multiplying instead by x_{t-1}, x_{t-2}, ... in turn and taking expectations gives
γ_1 = αγ_0,
γ_2 = αγ_1
and, in general,
γ_k = αγ_{k-1} = α^k γ_0,   k = 1, 2, ...
ARMA modeling (cont.)
I the autocorrelation coefficients are
ρ_k = E(x_t x_{t-k}) / [√(var(x_t)) √(var(x_{t-k}))] = γ_k / γ_0
so that, for the AR(1) process (compared with a sample acf in the sketch below),
ρ_k = αρ_{k-1} = α^k,   k = 1, 2, ...
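As a check of ρ_k = α^k, the sketch below simulates a long AR(1) series and compares its sample acf with the theoretical values; it assumes statsmodels is installed, and the value of α and the sample size are illustrative.

```python
# Sample acf of a simulated AR(1) versus the theoretical rho_k = alpha**k.
import numpy as np
from statsmodels.tsa.stattools import acf

rng = np.random.default_rng(2)
alpha, T = 0.8, 20_000

e = rng.standard_normal(T)
x = np.zeros(T)
for t in range(1, T):
    x[t] = alpha * x[t - 1] + e[t]

print(np.round(acf(x, nlags=5, fft=True), 3))   # sample acf, lags 0..5
print(np.round(alpha ** np.arange(6), 3))       # theoretical acf, lags 0..5
```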
ARMA modeling (cont.)
AR(2) process
y_t = m + α_1 y_{t-1} + α_2 y_{t-2} + e_t
or, in deviations from the mean,
x_t = α_1 x_{t-1} + α_2 x_{t-2} + e_t
I multiplying by x_t and taking expectations gives
γ_0 = α_1 γ_1 + α_2 γ_2 + E(x_t e_t)
γ_0 = α_1 γ_1 + α_2 γ_2 + σ_e^2
ARMA modeling (cont.)
I similarly, multiplying by x_{t-1} and x_{t-2},
γ_1 = α_1 γ_0 + α_2 γ_1
γ_2 = α_1 γ_1 + α_2 γ_0
I solving the three equations for the variance gives
γ_0 = (1 - α_2)σ_e^2 / [(1 + α_2)(1 - α_1 - α_2)(1 + α_1 - α_2)]
I under stationarity, this variance must be a constant, positive number
I the stationarity conditions for the AR(2) process therefore are (checked numerically in the sketch below):
α_2 + α_1 < 1
α_2 - α_1 < 1
|α_2| < 1
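A tiny helper (hypothetical, not from the slides) makes the three conditions easy to check for given coefficients:

```python
# Check the AR(2) stationarity conditions:
# alpha2 + alpha1 < 1,  alpha2 - alpha1 < 1,  |alpha2| < 1.
def ar2_is_stationary(alpha1: float, alpha2: float) -> bool:
    return (alpha2 + alpha1 < 1) and (alpha2 - alpha1 < 1) and abs(alpha2) < 1

print(ar2_is_stationary(0.5, 0.3))   # True: inside the stationarity region
print(ar2_is_stationary(0.9, 0.3))   # False: alpha1 + alpha2 = 1.2 >= 1
```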
ARMA modeling (cont.)
I dividing the autocovariance relations by γ_0 gives
ρ_1 = α_1 + α_2 ρ_1
ρ_2 = α_1 ρ_1 + α_2
I solving for the first two autocorrelations,
ρ_1 = α_1/(1 - α_2),    ρ_2 = α_1^2/(1 - α_2) + α_2
I the acf for the AR(2) process then follows the recursion (built explicitly in the sketch below)
ρ_k = α_1 ρ_{k-1} + α_2 ρ_{k-2},   k = 3, 4, ...
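The recursion is easy to implement; the sketch below builds the first few autocorrelations of an AR(2) with illustrative (stationary) coefficients.

```python
# Build the AR(2) acf from rho_1, rho_2 and the recursion
# rho_k = alpha1*rho_{k-1} + alpha2*rho_{k-2}.
alpha1, alpha2, K = 0.5, 0.3, 10      # illustrative, stationary coefficients

rho = [1.0,
       alpha1 / (1 - alpha2),                      # rho_1
       alpha1**2 / (1 - alpha2) + alpha2]          # rho_2
for k in range(3, K + 1):
    rho.append(alpha1 * rho[k - 1] + alpha2 * rho[k - 2])

print([round(r, 3) for r in rho])
```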
ARMA modeling (cont.)
roots of the polynomial in the lag operator
I rewrite x_t = α_1 x_{t-1} + α_2 x_{t-2} + e_t as
A(L)x_t = e_t
where
A(L) = 1 - α_1 L - α_2 L^2
I express the quadratic form as the product of two factors
A(L) = 1 - α_1 L - α_2 L^2 = (1 - λ_1 L)(1 - λ_2 L)
with λ_1 + λ_2 = α_1 and λ_1 λ_2 = -α_2
ARMA modeling (cont.)
I its roots are
λ_1, λ_2 = [α_1 ± √(α_1^2 + 4α_2)] / 2
and need to satisfy λ_1 + λ_2 = α_1 and λ_1 λ_2 = -α_2
I we can write
A^{-1}(L) = 1/[(1 - λ_1 L)(1 - λ_2 L)] = c/(1 - λ_1 L) + d/(1 - λ_2 L)
ARMA modeling (cont.)
I the AR(2) case allows for the possibility of complex roots, which will occur if α_1^2 + 4α_2 < 0
I the roots may then be written
λ_1, λ_2 = h ± vi
where h and v are the real numbers h = α_1/2 and v = (1/2)√(-(α_1^2 + 4α_2)), and i is the imaginary number i = √(-1), giving i^2 = -1
I the absolute value, or modulus, of each complex root is
|λ_j| = √(h^2 + v^2) = √(-α_2),   j = 1, 2
I the roots of A(z) are the values of z that solve the equation (found numerically in the sketch below)
A(z) = 1 - α_1 z - α_2 z^2 = (1 - λ_1 z)(1 - λ_2 z) = 0
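The sketch below uses numpy to find the roots of A(z) for illustrative coefficients with α_1^2 + 4α_2 < 0, and checks that the λ_j = 1/z_j form a complex conjugate pair with modulus √(-α_2) while the z_j lie outside the unit circle.

```python
# Roots of A(z) = 1 - alpha1*z - alpha2*z**2 and the corresponding lambdas.
import numpy as np

alpha1, alpha2 = 0.6, -0.5                  # alpha1**2 + 4*alpha2 = -1.64 < 0

z = np.roots([-alpha2, -alpha1, 1.0])       # solves -alpha2*z**2 - alpha1*z + 1 = 0
lam = 1.0 / z                               # lambda_j = 1 / z_j
print(lam)                                  # complex conjugate pair h ± vi
print(np.abs(lam), np.sqrt(-alpha2))        # moduli equal sqrt(-alpha2) ≈ 0.707
print(np.all(np.abs(z) > 1))                # True: roots outside the unit circle
```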
ARMA modeling (cont.)
partial autocorrelation function (pacf)
I for a stationary time series, letting 1, 2, 3 denote x_t and its first and second lags, we have
r_13 = corr(x_t, x_{t-2}) = ρ_2
and r_12 = r_23 = ρ_1
I therefore the partial correlation between x_t and x_{t-2}, controlling for x_{t-1}, is
r_13.2 = (ρ_2 - ρ_1^2)/(1 - ρ_1^2)
ARMA modeling (cont.)
I for an AR(2) process the second-order partial autocorrelation coefficient is
α_2 = (ρ_2 - ρ_1^2)/(1 - ρ_1^2) = r_13.2
I the pacf of an AR(2) process will cut off after the second lag, as the sketch below illustrates
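To see the cutoff numerically, the sketch below simulates a long AR(2) series and reports its sample pacf; it assumes statsmodels is installed and uses illustrative coefficients.

```python
# Sample pacf of a simulated AR(2): sizeable at lags 1-2, near zero afterwards.
import numpy as np
from statsmodels.tsa.stattools import pacf

rng = np.random.default_rng(3)
alpha1, alpha2, T = 0.5, 0.3, 20_000

e = rng.standard_normal(T)
x = np.zeros(T)
for t in range(2, T):
    x[t] = alpha1 * x[t - 1] + alpha2 * x[t - 2] + e[t]

print(np.round(pacf(x, nlags=5), 3))
```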
ARMA modeling (cont.)
MA process
I we have already met a moving-average process of infinite order,
x_t = e_t + αe_{t-1} + α^2 e_{t-2} + ...
I consider now the first-order MA(1) process
x_t = e_t - β_1 e_{t-1}
I its autocovariances are
γ_0 = (1 + β_1^2)σ_e^2
γ_1 = -β_1 σ_e^2
γ_2 = γ_3 = ... = 0
ARMA modeling (cont.)
which gives the autocorrelation coefficients as
ρ_1 = -β_1/(1 + β_1^2)
ρ_2 = ρ_3 = ... = 0
I the MA(1) process may be inverted to give e_t as an infinite series in x_t, x_{t-1}, ..., i.e.
e_t = x_t + β_1 x_{t-1} + β_1^2 x_{t-2} + ...
that is
x_t = -β_1 x_{t-1} - β_1^2 x_{t-2} - ... + e_t
I because this is an AR(∞) series, the partial autocorrelations do not cut off, but the autocorrelations are zero after the first
I the properties of a pure MA are thus the converse of those of a pure AR
I the acf of an MA process cuts off after the order of the process, and the pacf damps towards zero
I invertibility condition: |β_1| < 1 (the acf cutoff is checked by simulation in the sketch below)
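The sketch below simulates an MA(1) with an illustrative β_1 and compares its sample acf with ρ_1 = -β_1/(1 + β_1^2) and ρ_k = 0 for k ≥ 2; it assumes statsmodels is installed.

```python
# Sample acf of a simulated MA(1) x_t = e_t - beta1*e_{t-1}.
import numpy as np
from statsmodels.tsa.stattools import acf

rng = np.random.default_rng(4)
beta1, T = 0.6, 20_000

e = rng.standard_normal(T)
x = e.copy()
x[1:] -= beta1 * e[:-1]                        # x_t = e_t - beta1*e_{t-1}

print(np.round(acf(x, nlags=4, fft=True), 3))  # lag 1 ≈ -0.44, higher lags ≈ 0
print(round(-beta1 / (1 + beta1**2), 3))       # theoretical rho_1
```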
ARMA modeling (cont.)
ARMA process
A(L)x_t = B(L)e_t
where
A(L) = 1 - α_1 L - α_2 L^2 - ... - α_p L^p
B(L) = 1 - β_1 L - β_2 L^2 - ... - β_q L^q
I stationarity requires the roots of A(L) to lie outside the unit circle; invertibility places the same condition on the roots of B(L)
I given these conditions, the ARMA(p, q) process may alternatively be expressed as a pure AR process of infinite order or as a pure MA process of infinite order, i.e.
B^{-1}(L)A(L)x_t = e_t    or    x_t = A^{-1}(L)B(L)e_t
ARMA modeling (cont.)
I consider the ARMA(1,1) process
x_t = αx_{t-1} + e_t - βe_{t-1}
I its first autocovariance is
γ_1 = αγ_0 - βσ_e^2 = [(α - β)(1 - αβ)/(1 - α^2)] σ_e^2
I higher-order covariances are given by
γ_k = αγ_{k-1},   k = 2, 3, ...
ARMA modeling (cont.)
I hence the acf of the ARMA(1,1) process is (compared with a sample acf in the sketch below)
ρ_1 = (α - β)(1 - αβ)/(1 - 2αβ + β^2)
ρ_k = αρ_{k-1},   k = 2, 3, ...
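As a final check, the sketch below simulates a long ARMA(1,1) series with illustrative α and β and compares its sample acf with the theoretical ρ_1 and the recursion ρ_k = αρ_{k-1}; it assumes statsmodels is installed.

```python
# Sample acf of a simulated ARMA(1,1) versus the theoretical acf.
import numpy as np
from statsmodels.tsa.stattools import acf

rng = np.random.default_rng(5)
alpha, beta, T = 0.7, 0.4, 50_000

e = rng.standard_normal(T)
x = np.zeros(T)
for t in range(1, T):
    x[t] = alpha * x[t - 1] + e[t] - beta * e[t - 1]

rho1 = (alpha - beta) * (1 - alpha * beta) / (1 - 2 * alpha * beta + beta**2)
theory = rho1 * alpha ** np.arange(4)             # rho_1 .. rho_4
print(np.round(acf(x, nlags=4, fft=True)[1:], 3)) # sample acf, lags 1..4
print(np.round(theory, 3))
```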