Chapter 3: Some Time-Series Models
3.1. Stochastic Processes and Their Properties
Let X(t) be a time series. Then,
• the mean is
µ(t) = E[X(t)];
• the variance is
σ²(t) = Var[X(t)];
• the autocovariance is
γ(t1, t2) = Cov(X(t1), X(t2)) = E{[X(t1) − µ(t1)][X(t2) − µ(t2)]};
• the autocorrelation is
ρ(t1, t2) = Cor(X(t1), X(t2)) = γ(t1, t2) / (σ(t1)σ(t2)).
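These population quantities can be estimated from data by their sample analogues. A minimal sketch (my own illustration, not from the notes), using the common estimator that divides by n and centres the series at a single sample mean, which presumes stationarity:

```python
import random

def sample_acf(x, max_lag):
    """Sample autocovariance gamma(k) and autocorrelation rho(k), k = 0..max_lag."""
    n = len(x)
    mean = sum(x) / n                      # single mean: stationarity assumed
    xc = [v - mean for v in x]
    gamma = [sum(xc[i] * xc[i + k] for i in range(n - k)) / n
             for k in range(max_lag + 1)]
    rho = [g / gamma[0] for g in gamma]
    return gamma, rho

# White noise: rho(0) = 1 and rho(k) should be near 0 for k >= 1.
random.seed(0)
z = [random.gauss(0.0, 1.0) for _ in range(5000)]
gamma, rho = sample_acf(z, max_lag=3)
```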
3.2. Stationary Processes
Strictly stationary.
A time series is said to be strictly stationary if the joint distribution of X(t1), · · · , X(tk) is the same as the joint distribution of X(t1 + τ), · · · , X(tk + τ) for any choice of k time points and any shift τ.
Second-order stationary.
A time series is said to be second-order stationary (weakly stationary) if µ(t) does not depend on t and γ(t1, t2) depends only on |t1 − t2|.
Let X(t) be a time series. If the limiting distribution of X(t) exists, then this distribution is called the equilibrium distribution.
3.3. Some Properties of the Autocorrelation Function
(i) |ρ(τ)| ≤ 1;
(ii) ρ(0) = 1;
• Nugget effect: if
ρ(0+) = lim_{τ→0+} ρ(τ) < 1,
then the process is said to have a nugget effect.
3.4. Some Useful Models
3.4.1. Purely random processes
A purely random process {Zt} is a sequence of independent, identically distributed random variables with mean 0 and variance σ_Z². It has
• an autocovariance function
γ(k) = Cov(Zt, Zt+k) = σ_Z² when k = 0, and 0 when k ≠ 0;
• an autocorrelation function
ρ(k) = Corr(Zt, Zt+k) = 1 when k = 0, and 0 when k ≠ 0.
Sometimes we weaken the assumption from independence to uncorrelatedness. This is enough for any inference involving only linear operations.
3.4.2. Random walks:
• Xt = Xt−1 + Zt
• and X0 = 0,
where the steps Zt are purely random with mean µ and variance σ_Z². By the central limit theorem, we have
(Xt − tµ)/√t →_L N(0, σ_Z²).
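The limit result can be checked by simulation. A minimal sketch (the normal step distribution and the sample sizes are illustrative choices, not from the notes):

```python
import random

# Simulate many independent random walks Xt = Xt-1 + Zt with X0 = 0 and
# steps Zt ~ N(mu, sigma^2) (an illustrative choice), then standardise the
# endpoint as (Xt - t*mu)/sqrt(t); by the CLT this is close to N(0, sigma^2).
random.seed(1)
mu, sigma, t, n_walks = 0.3, 2.0, 100, 5000

standardised = []
for _ in range(n_walks):
    x_t = sum(random.gauss(mu, sigma) for _ in range(t))  # Xt is the sum of t steps
    standardised.append((x_t - t * mu) / t ** 0.5)

m = sum(standardised) / n_walks
v = sum((s - m) ** 2 for s in standardised) / (n_walks - 1)
```

With these sizes, m should be near 0 and v near σ² = 4.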
An example (gambler's ruin): suppose each step Zt equals +1 with probability p and −1 with probability q = 1 − p, and let f(a) be the probability that the walk, started at a, reaches n before −m.
Then, we have
(i) f(n) = 1;
(ii) f(−m) = 0;
(iii) f(a) = p f(a + 1) + q f(a − 1).
Thus, we have
f(a + 1) − f(a) = (q/p)[f(a) − f(a − 1)]
= (q/p)²[f(a − 1) − f(a − 2)]
= · · ·
= (q/p)^{a+m}[f(−m + 1) − f(−m)]
= (q/p)^{a+m} f(−m + 1).
Assume p ≠ q. Summing over a = −m, · · · , n − 1, the left-hand side telescopes to f(n) − f(−m) = 1, so
∑_{a=−m}^{n−1} [f(a + 1) − f(a)] = ∑_{a=−m}^{n−1} (q/p)^{a+m} f(−m + 1)
⇒ f(−m + 1) · (1 − (q/p)^{m+n}) / (1 − q/p) = 1
⇒ f(−m + 1) = (1 − q/p) / (1 − (q/p)^{m+n}).
Therefore, summing over a = −m, · · · , −1,
∑_{a=−m}^{−1} [f(a + 1) − f(a)] = ∑_{a=−m}^{−1} (q/p)^{a+m} f(−m + 1)
⇒ f(0) = f(−m + 1) · (1 − (q/p)^m) / (1 − q/p)
⇒ f(0) = (1 − (q/p)^m) / (1 − (q/p)^{m+n}).
When p = q = 1/2, we have
f(0) = m / (m + n),
obtained by taking the limit q/p → 1.
Thus, we have
f(0) = (1 − (q/p)^m) / (1 − (q/p)^{m+n}) when p ≠ q,
f(0) = m / (m + n) when p = q = 1/2.
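The closed form for f(0) can be checked against a direct numerical solution of the boundary-value recursion f(a) = p f(a + 1) + q f(a − 1). A minimal sketch (the particular p, m, n are illustrative choices):

```python
# Solve f(a) = p*f(a+1) + q*f(a-1) with f(-m) = 0, f(n) = 1 by fixed-point
# iteration, and compare f(0) with the closed form derived above.
p, q, m, n = 0.6, 0.4, 3, 5

f = {a: 0.0 for a in range(-m, n + 1)}
f[n] = 1.0
for _ in range(5000):                      # sweep until convergence
    for a in range(-m + 1, n):
        f[a] = p * f[a + 1] + q * f[a - 1]

closed = (1 - (q / p) ** m) / (1 - (q / p) ** (m + n))
```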
3.4.3. Moving average (MA) processes
A moving average process of order q is
Xt = β0Zt + β1Zt−1 + · · · + βqZt−q.
Usually, β0 is scaled to β0 = 1. We write
Zt ∼ WN(0, σ_Z²)
for such Zt and
Xt ∼ MA(q)
for such Xt.
Clearly, we have
E(Xt) = 0,
V(Xt) = σ_Z² ∑_{i=0}^{q} βi²,
γ(k) = γ(−k) = σ_Z² ∑_{i=0}^{q−k} βi βi+k when k = 0, · · · , q, and 0 when k > q,
and
ρ(k) = ρ(−k) = ∑_{i=0}^{q−k} βi βi+k / ∑_{i=0}^{q} βi² when k = 0, · · · , q, and 0 when k > q.
An MA process Xt as above is invertible if the innovation representation
Zt = ∑_{j=0}^∞ πj Xt−j
converges, that is, the coefficients are absolutely summable:
∑_{j=0}^∞ |πj| < ∞.
The backward shift operator B is defined by
B^j Xt = Xt−j.
With it, the MA(q) model can be written as Xt = θ(B)Zt, where
θ(B) = β0 + β1B + · · · + βqB^q.
Example: Consider the following MA(1) models:
(a) Xt = Zt − 0.5Zt−1;
(b) Xt = Zt − 2Zt−1.
The autocorrelation for (a) is
ρ(1) = −0.5 / (1 + 0.5²) = −0.4.
The autocorrelation for (b) is
ρ(1) = −2 / (1 + 2²) = −0.4.
Thus the MA(1) models in (a) and (b) have the same autocorrelation function. Since only (a) is invertible (| − 0.5| < 1), (a) is the more appropriate model.
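A small sketch of the identifiability point above: θ and its reciprocal give the same ρ(1), and invertibility picks out one of the two (the function names are mine):

```python
# rho(1) for an MA(1) model Xt = Zt + theta*Zt-1 is theta/(1 + theta^2).
# theta = -0.5 and theta = -2 (its reciprocal) give the same value, so the
# ACF alone cannot separate models (a) and (b); only |theta| < 1 is invertible.

def ma1_rho1(theta):
    return theta / (1 + theta ** 2)

def ma1_invertible(theta):
    return abs(theta) < 1

rho_a = ma1_rho1(-0.5)   # model (a)
rho_b = ma1_rho1(-2.0)   # model (b)
```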
Example: check whether the following MA processes are invertible:
(a) Xt = (1 + θB)Zt.
3.4.4. Autoregressive processes
First-order process:
Xt = αXt−1 + Zt.
This is equivalent to an infinite MA process,
Xt = ∑_{j=0}^∞ α^j Zt−j,
which is well defined when |α| < 1. For this process, we have
E(Xt) = 0,
V(Xt) = σ_Z² ∑_{j=0}^∞ α^{2j} = σ_Z² / (1 − α²),
γ(k) = σ_Z² ∑_{j=0}^∞ α^{k+2j} = σ_Z² α^k / (1 − α²),
and
ρ(k) = α^k
for k = 0, 1, · · ·.
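The infinite sums above collapse by the geometric series; a short numerical check (the values of α, σ_Z², the lag k, and the truncation point N are illustrative choices):

```python
# For AR(1) with |alpha| < 1, the MA(infinity) sums have closed forms:
#   V(Xt)    = sigma2 / (1 - alpha^2)
#   gamma(k) = sigma2 * alpha^k / (1 - alpha^2),  rho(k) = alpha^k.
# Compare truncated sums with the closed forms.
alpha, sigma2, N, k = 0.7, 1.5, 200, 3

var_sum = sigma2 * sum(alpha ** (2 * j) for j in range(N))
var_closed = sigma2 / (1 - alpha ** 2)

gamma_k_sum = sigma2 * sum(alpha ** (k + 2 * j) for j in range(N))
gamma_k_closed = sigma2 * alpha ** k / (1 - alpha ** 2)

rho_k = gamma_k_sum / var_sum            # should equal alpha**k
```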
General-order process:
Zt = (1 − α1B − · · · − αpB^p)Xt,
or equivalently,
Xt = Zt / (1 − α1B − · · · − αpB^p) = f(B)Zt,
where
f(B) = (1 − α1B − · · · − αpB^p)^{−1} = 1 + β1B + β2B² + · · ·.
If
∑_{j=1}^∞ |βj| < ∞,
then Xt is well defined and stationary.
Since finding β1, β2, · · · is usually not easy, we sometimes use stationarity to derive the following formulae (the Yule–Walker equations):
ρ(k) = α1ρ(k − 1) + · · · + αpρ(k − p), k > 0.
Their general solution is
ρ(k) = A1λ1^|k| + · · · + Apλp^|k|,
where λ1, · · · , λp are the roots of the auxiliary equation
y^p − α1y^{p−1} − · · · − αp = 0,
and A1, · · · , Ap can be solved from the first p linear equations.
Example: Consider an AR(2) process. The roots of the auxiliary equation y² − α1y − α2 = 0 are real when α1² + 4α2 ≥ 0 and complex otherwise.
Example 3.1. Consider the AR(2) process
Xt = Xt−1 − (1/2)Xt−2 + Zt.
Then,
φ(B) = 1 − B + (1/2)B².
The roots of the auxiliary equation y² − y + 1/2 = 0 are
(1 ± √(1 − 2))/2 = (1 ± √−1)/2 = 1/2 ± (1/2)i,
which have modulus 1/√2 < 1. Therefore, the process is stationary. By the Yule–Walker equations, we have
ρ(1) = ρ(0) − (1/2)ρ(−1)
⇒ ρ(1) = 1 − (1/2)ρ(1)
⇒ ρ(1) = 2/3.
For other ρ(k), we can use
ρ(k) = ρ(k − 1) − (1/2)ρ(k − 2).
Another expression is
ρ(k) = A1(1/2 + (1/2)i)^|k| + A2(1/2 − (1/2)i)^|k|
= (1/√2)^|k| (cos(kπ/4) + (1/3) sin(kπ/4)).
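The recursion for ρ(k) and the closed-form expression in Example 3.1 can be checked against each other numerically:

```python
import math

# Example 3.1: compare the Yule-Walker recursion
#   rho(k) = rho(k-1) - 0.5*rho(k-2),  rho(0) = 1, rho(1) = 2/3,
# with the closed form rho(k) = (1/sqrt(2))^k * (cos(k*pi/4) + sin(k*pi/4)/3).
rho = [1.0, 2.0 / 3.0]
for k in range(2, 11):
    rho.append(rho[k - 1] - 0.5 * rho[k - 2])

closed = [(1 / math.sqrt(2)) ** k
          * (math.cos(k * math.pi / 4) + math.sin(k * math.pi / 4) / 3)
          for k in range(11)]
```

For instance, both give ρ(2) = 1/6.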
3.4.5. Mixed ARMA models
An ARMA(p, q) process is
φ(B)Xt = θ(B)Zt,
where the roots of
φ(B) = 0
and
θ(B) = 0
lie outside the unit disc in the complex plane (for stationarity and invertibility, respectively).
Let ψ(B) = θ(B)/φ(B). Then, we have
Xt = ψ(B)Zt,
which is a pure MA representation. Similarly, let π(B) = φ(B)/θ(B). Then
π(B)Xt = Zt,
which is a pure AR representation.
Example 3.2: Find the ψi and πi weights for the ARMA(1,1) model given by
Xt = 0.5Xt−1 + Zt − 0.3Zt−1.
Solution: Let φ(B) = 1 − 0.5B and θ(B) = 1 − 0.3B. Then,
ψ(B) = θ(B)/φ(B)
= (1 − 0.3B)/(1 − 0.5B)
= (1 − 0.3B) ∑_{i=0}^∞ 0.5^i B^i
= 1 + ∑_{i=1}^∞ 0.2 × 0.5^{i−1} B^i.
Thus,
ψi = 0.2 × 0.5^{i−1}
for i = 1, 2, · · ·. Similarly, expanding π(B) = φ(B)/θ(B) gives
πi = −0.2 × 0.3^{i−1}
for i = 1, 2, · · ·. We always have ψ0 = π0 = 1.
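The ψ and π weights can also be obtained by expanding the two ratios as power series; a short check (the helper expand_ratio is mine, and the sign of the πi follows the convention Zt = ∑_j πj Xt−j with π0 = 1):

```python
# Example 3.2: expand psi(B) = (1 - 0.3B)/(1 - 0.5B) and
# pi(B) = (1 - 0.5B)/(1 - 0.3B) as power series in B, using
# 1/(1 - cB) = sum_i c^i B^i.

def expand_ratio(num_coef, den_coef, n_terms):
    """Coefficients of (1 + num_coef*B) / (1 - den_coef*B) up to B^(n_terms-1)."""
    coefs = [1.0]
    for i in range(1, n_terms):
        coefs.append(den_coef ** i + num_coef * den_coef ** (i - 1))
    return coefs

psi = expand_ratio(-0.3, 0.5, 8)   # theta(B) / phi(B)
pi_ = expand_ratio(-0.5, 0.3, 8)   # phi(B) / theta(B)
```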
3.4.6. Integrated ARMA (ARIMA) models
An ARIMA(p, d, q) model applies an ARMA model to the differenced series Wt = (1 − B)^d Xt:
φ(B)Wt = θ(B)Zt,
and Wt is stationary under some conditions. However, Xt itself is not stationary.
3.4.7. The general linear process
A general linear process is
Xt = ∑_{i=0}^∞ φi Zt−i.
If
∑_{i=0}^∞ |φi| < ∞,
then Xt is stationary.
3.4.8. Continuous processes
3.5. The Wold Decomposition Theorem
Let τq denote the variance of the error in linearly predicting Xt from the remote past Xt−q, Xt−q−1, · · ·.
• If
lim_{q→∞} τq = V(Xt),
then we call Xt purely indeterministic.
• If
lim_{q→∞} τq = 0,
then we call Xt purely deterministic.