
Chapter 3: Some Time-Series Models

This document summarizes key concepts related to time-series models. It introduces stochastic processes and their properties such as ensemble, realization, mean function, variance function, autocovariance, and autocorrelation. It then defines stationary processes as strictly stationary and second-order stationary. Autocorrelation functions are introduced along with their properties including being nonnegative-definite. Common time-series models are briefly described, including purely random processes (white noise), random walks, and moving average (MA) processes. The concepts of invertibility and the backward shift operator in relation to MA processes are also summarized.


Chapter 3: Some Time-Series Models

Section 3.1 Stochastic Processes and Their Properties

There are several key terms in this section:

• Stochastic process (random process): Xt for t ∈ R, where Xt is a random variable for each given t.
  – continuous: t ∈ (−∞, ∞).
  – discrete: t = 0, ±1, ±2, · · ·.

• Ensemble: the set of all possible realizations of the process.

• Realization: one element of the ensemble, i.e. a single observed series.

• We write X(t) or Xt if we treat the time series as random, and x(t) or xt if we treat it as observations.

Let Xt be the time series. Then,

• the mean function is

    μ(t) = E[X(t)];

• the variance function is

    σ²(t) = Var[X(t)];

• the autocovariance function is

    γ(t1, t2) = Cov(X(t1), X(t2))
              = E{[X(t1) − μ(t1)][X(t2) − μ(t2)]};

• the autocorrelation function is

    ρ(t1, t2) = Corr(X(t1), X(t2))
              = E{[X(t1) − μ(t1)][X(t2) − μ(t2)]} / [σ(t1)σ(t2)]
              = γ(t1, t2) / [σ(t1)σ(t2)].

Clearly, γ(t, t) = σ²(t) and ρ(t, t) = 1.
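As a quick illustration, the sample versions of these quantities can be computed from a single realization. The following sketch (the function name `sample_acf` is illustrative, not from the text) uses the usual biased estimator that divides by n:

```python
import numpy as np

def sample_acf(x, max_lag):
    """Sample autocovariance gamma(k) and autocorrelation rho(k), k = 0..max_lag."""
    x = np.asarray(x, dtype=float)
    n, mu = len(x), np.mean(x)
    # biased estimator: divide by n at every lag
    gamma = np.array([np.dot(x[: n - k] - mu, x[k:] - mu) / n
                      for k in range(max_lag + 1)])
    return gamma, gamma / gamma[0]

# gamma(0) equals the sample variance and rho(0) = 1, mirroring
# gamma(t, t) = sigma^2(t) and rho(t, t) = 1 above
x = np.sin(np.linspace(0, 20, 200))
gamma, rho = sample_acf(x, 5)
```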

3.2. Stationary Processes

Strictly stationary.
A time series is said to be strictly stationary if the joint distribution of X(t1), · · · , X(tk) is the same as the joint distribution of X(t1 + τ), · · · , X(tk + τ) for all possible values of τ and k.

• If X(t) is strictly stationary, then μ(t) is a constant and γ(t1, t2) depends only on t1 − t2.

• Then, we can write

    γ(t1, t2) = γ(|t1 − t2|)

  and

    ρ(t1, t2) = ρ(|t1 − t2|).

Second-order stationary.
A time series is said to be second-order stationary if μ(t) does not depend on t, and γ(t1, t2) depends only on |t1 − t2|.

• If the joint distribution of X(t1), · · · , X(tk) is always normal, then we call X(t) a normal time series (or simply normal).

• If X(t) is normal, then second-order stationarity and strict stationarity are equivalent.

Usually, we simply refer to a second-order stationary process as stationary.

Let X(t) be a time series. If the limiting distribution of X(t) exists, then this distribution is called the equilibrium distribution.

If the process is a Markov chain, i.e. X(t + 1) depends on the past only through X(t), and the conditional distribution of X(t + 1) given X(t) is time-invariant, then an equilibrium distribution exists under suitable regularity conditions.

3.3 Some Properties of the Autocorrelation Function

Let X(t) be a (second-order) stationary time series. Then we can define the following:

• Autocorrelation function:

    ρ(τ) = γ(τ) / γ(0).
• The correlation matrix of X(t1), · · · , X(tn) is

    [ 1            ρ(t1 − t2)   · · ·   ρ(t1 − tn) ]
    [ ρ(t2 − t1)   1            · · ·   ρ(t2 − tn) ]
    [ ...          ...          · · ·   ...        ]
    [ ρ(tn − t1)   ρ(tn − t2)   · · ·   1          ]

  which must be nonnegative-definite.
• Therefore, an autocorrelation function ρ(τ) satisfies

  (i) |ρ(τ)| ≤ 1;

  (ii) ρ(0) = 1;

  (iii) ρ(τ) is nonnegative-definite.

• Nugget effect: if

    ρ(0+) = lim_{τ→0+} ρ(τ) < 1,

  then there exists a nugget effect.
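Property (iii) can be checked numerically: the correlation matrix built from a candidate ρ(τ) must have no negative eigenvalues. A minimal sketch, using ρ(τ) = 0.5^|τ| as an arbitrary example of a valid stationary autocorrelation function:

```python
import numpy as np

def corr_matrix(rho, times):
    """Correlation matrix [rho(t_i - t_j)] for the given observation times."""
    t = np.asarray(times, dtype=float)
    return rho(np.abs(t[:, None] - t[None, :]))

rho = lambda tau: 0.5 ** np.abs(tau)   # a valid stationary ACF
R = corr_matrix(rho, range(6))
eigenvalues = np.linalg.eigvalsh(R)    # all >= 0 for a nonnegative-definite matrix
```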

3.4. Some Useful Models

3.4.1 Purely random processes

Let Zt be a discrete stationary time series. If Zt1 and Zt2 are independent whenever t1 ≠ t2, then Zt is called a purely random process, or white noise. It has:

• an autocovariance function

    γ(k) = Cov(Zt, Zt+k) = { σZ²,  k = 0
                           { 0,    k ≠ 0;

• an autocorrelation function

    ρ(k) = Corr(Zt, Zt+k) = { 1,  k = 0
                            { 0,  k ≠ 0.

Sometimes we weaken the assumption from independence to uncorrelatedness. This is sufficient for any inference involving linear operations.

3.4.2. Random walks

Let Xt be a discrete time series and Zt be white noise. Then, Xt is called a random walk if

• Xt = Xt−1 + Zt,

• and X0 = 0.

For a random walk, we have

    Xt = Σ_{i=1}^t Zi.

Suppose E(Zt) = μ and V(Zt) = σZ². Then, we have E(Xt) = tμ and V(Xt) = tσZ². By the CLT, we have

    (Xt − tμ)/√t →L N(0, σZ²).
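The moment formulas E(Xt) = tμ and V(Xt) = tσZ² are easy to confirm by simulating many independent walks; the parameter choices below are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(0)
mu, sigma, t, n_paths = 0.1, 1.0, 400, 5000

# each row is one realization of Z_1, ..., Z_t; cumulative sums give the walks
Z = rng.normal(mu, sigma, size=(n_paths, t))
X = Z.cumsum(axis=1)

mean_t = X[:, -1].mean()   # should be close to t * mu = 40
var_t = X[:, -1].var()     # should be close to t * sigma^2 = 400
```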

An example:

Assume Tom and Jerry are gambling. Tom has m units of money and Jerry has n units of money. Each round, either Tom or Jerry wins 1 unit of money. Suppose the probability that Tom wins a round is p. Compute the probability that Tom wins all of Jerry's money.

Solution: Let Zt be defined by

    P(Zt = 1) = p;  P(Zt = −1) = 1 − p.

Define X0 = 0 and

    Xt = Σ_{i=1}^t Zi.

Then, if Xt attains n before −m, Tom wins the game; otherwise Jerry wins the game.

Let f(a) be the probability that Tom wins the game when the walk is at a, i.e. Tom has m + a units of money and Jerry has n − a units.

Then, we have

(i) f(n) = 1;

(ii) f(−m) = 0;

(iii) if −m < a < n,

    f(a) = p f(a + 1) + q f(a − 1),  where q = 1 − p.

Thus, we have

    f(a + 1) − f(a) = (q/p)[f(a) − f(a − 1)]
                    = (q/p)²[f(a − 1) − f(a − 2)]
                    = · · ·
                    = (q/p)^(a+m)[f(−m + 1) − f(−m)]
                    = (q/p)^(a+m) f(−m + 1).

Assume p ≠ q. Summing and telescoping, we have

    Σ_{a=−m}^{n−1} [f(a + 1) − f(a)] = Σ_{a=−m}^{n−1} (q/p)^(a+m) f(−m + 1)

    ⇒ f(−m + 1) · (1 − (q/p)^(m+n)) / (1 − (q/p)) = 1

    ⇒ f(−m + 1) = (1 − (q/p)) / (1 − (q/p)^(m+n)).

Therefore,

    Σ_{a=−m}^{−1} [f(a + 1) − f(a)] = Σ_{a=−m}^{−1} (q/p)^(a+m) f(−m + 1)

    ⇒ f(0) = f(−m + 1) · (1 − (q/p)^m) / (1 − (q/p))

    ⇒ f(0) = (1 − (q/p)^m) / (1 − (q/p)^(m+n)).

When p = q, we have

    f(0) = m / (m + n)

by taking the limit q/p → 1.

Thus, the probability that Tom wins all of Jerry's money is

    f(0) = { (1 − (q/p)^m) / (1 − (q/p)^(m+n)),  when p ≠ q
           { m / (m + n),                         when p = q = 1/2.

Clearly, we have

    lim_{n→∞} f(0) = { 1 − (q/p)^m,  when p > 1/2
                     { 0,            when p ≤ 1/2.
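The closed-form answer can be sanity-checked by Monte Carlo simulation; the function names and parameter choices below are illustrative:

```python
import numpy as np

def ruin_formula(m, n, p):
    """P(Tom wins all of Jerry's money), starting with m vs n units."""
    if p == 0.5:
        return m / (m + n)
    r = (1 - p) / p
    return (1 - r ** m) / (1 - r ** (m + n))

def simulate(m, n, p, trials, seed=0):
    """Estimate the same probability by playing out random walks."""
    rng = np.random.default_rng(seed)
    wins = 0
    for _ in range(trials):
        a = 0
        while -m < a < n:                      # play until absorption at -m or n
            a += 1 if rng.random() < p else -1
        wins += a == n
    return wins / trials

est = simulate(3, 4, 0.55, 2000)
exact = ruin_formula(3, 4, 0.55)
```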

3.4.3. Moving average (MA) processes

A moving average process Xt has the form

    Xt = β0Zt + β1Zt−1 + · · · + βq Zt−q,    (1)

where Zt is white noise (a purely random process) with E(Zt) = 0 and V(Zt) = σZ².

Usually, β0 is scaled to β0 = 1.

We write

    Zt ∼ WN(0, σZ²)

for such Zt and

    Xt ∼ MA(q)

for such Xt.

Clearly, we have

    E(Xt) = 0,

    V(Xt) = σZ² Σ_{i=0}^q βi²,

    γ(k) = γ(−k)
         = { σZ² Σ_{i=0}^{q−k} βi βi+k,  when k = 0, · · · , q
           { 0,                          when k > q,

and

    ρ(k) = ρ(−k)
         = { Σ_{i=0}^{q−k} βi βi+k / Σ_{i=0}^q βi²,  when k = 0, · · · , q
           { 0,                                      when k > q.
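The autocorrelation formula translates directly into code; `ma_acf` below is an illustrative helper, with beta given as the coefficient vector (β0, · · · , βq):

```python
import numpy as np

def ma_acf(beta, k):
    """Theoretical rho(k) of an MA(q) process with coefficients beta = (beta_0..beta_q)."""
    beta = np.asarray(beta, dtype=float)
    q = len(beta) - 1
    k = abs(k)                       # rho(k) = rho(-k)
    if k > q:
        return 0.0
    return np.dot(beta[: q - k + 1], beta[k:]) / np.dot(beta, beta)

# MA(1) with beta = (1, -0.5): rho(1) = -0.5 / (1 + 0.25) = -0.4
r1 = ma_acf([1.0, -0.5], 1)
```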

An MA process Xt as above is invertible if the innovation representation

    Zt = Σ_{j=0}^∞ πj Xt−j

converges, with absolutely summable coefficients:

    Σ_{j=0}^∞ |πj| < ∞.

The backward shift operator B is defined by

    BXt = Xt−1;  B²Xt = Xt−2;  · · · ,

and in general

    B^j Xt = Xt−j.

Then, equation (1) can be expressed as

    Xt = (β0 + β1B + · · · + βq B^q)Zt = θ(B)Zt,

where

    θ(B) = β0 + β1B + · · · + βq B^q.

An MA(q) process is invertible if the roots of the equation

    θ(B) = 0

are all outside the unit circle in the complex plane.
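The root condition is easy to check numerically. Note that numpy's `roots` expects coefficients ordered from the highest power down, hence the reversal below:

```python
import numpy as np

def is_invertible(theta):
    """True if all roots of theta(B) = 0 lie outside the unit circle.
    theta = (beta_0, beta_1, ..., beta_q), coefficients of 1, B, ..., B^q."""
    roots = np.roots(list(reversed(theta)))
    return bool(np.all(np.abs(roots) > 1.0))

inv_a = is_invertible([1.0, -0.5])  # root B = 2, outside the unit circle
inv_b = is_invertible([1.0, -2.0])  # root B = 0.5, inside the unit circle
```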

Example: Consider the following MA(1) models.

(a): Xt = Zt − 0.5Zt−1 = (1 − 0.5B)Zt.

Then,

    Zt = Xt / (1 − 0.5B)
       = Σ_{k=0}^∞ 0.5^k B^k Xt
       = Xt + (1/2)Xt−1 + (1/4)Xt−2 + (1/8)Xt−3 + · · · .

Then, based on Xt, we can define Zt according to the above formula.

(b): Xt = Zt − 2Zt−1 = (1 − 2B)Zt.

Then,

    Zt = Xt / (1 − 2B)
       = Xt + 2Xt−1 + 4Xt−2 + 8Xt−3 + · · · ,

which diverges. Then, based on Xt, we cannot define Zt according to the above formula.

The autocorrelation for (a) is

    ρ(1) = −0.5 / (1 + 0.5²) = −0.4.

The autocorrelation for (b) is

    ρ(1) = −2 / (1 + 2²) = −0.4.

Thus, the MA(1) models in (a) and (b) have the same autocorrelation function, but only (a) is invertible, which makes (a) the more appropriate choice.

Example: Check whether the following MA processes are invertible:

(a) Xt = (1 + θB)Zt.

(b) Xt = (1 + 0.7B + 0.1B²)Zt.

3.4.4. Autoregressive processes

Suppose Zt ∼ WN(0, σZ²). A process Xt is said to be autoregressive of order p (AR(p)) if

    Xt = α1Xt−1 + · · · + αpXt−p + Zt.    (2)

This can also be expressed as

    Zt = Xt − (α1Xt−1 + · · · + αpXt−p).    (3)

Since we only observe Xt, equation (3) can be used to derive the white noise Zt.

First-order process

    Xt = αXt−1 + Zt.

This is equivalent to an infinite MA process

    Xt = Σ_{j=0}^∞ α^j Zt−j,

which is well defined when |α| < 1. For this process, we have

    E(Xt) = 0,

    V(Xt) = σZ² Σ_{j=0}^∞ α^(2j) = σZ² / (1 − α²),

    γ(k) = σZ² Σ_{j=0}^∞ α^(k+2j) = σZ² α^k / (1 − α²),

and

    ρ(k) = α^k

for k = 0, 1, · · ·.
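The relation ρ(k) = α^k can be observed in a simulated AR(1) series; α = 0.7, the series length, and the burn-in length are arbitrary choices:

```python
import numpy as np

rng = np.random.default_rng(1)
alpha, n, burn = 0.7, 100_000, 500

# simulate X_t = alpha * X_{t-1} + Z_t and discard a burn-in period
Z = rng.normal(size=n + burn)
X = np.empty(n + burn)
X[0] = Z[0]
for t in range(1, n + burn):
    X[t] = alpha * X[t - 1] + Z[t]
X = X[burn:]

Xc = X - X.mean()
# sample autocorrelations at lags 1, 2, 3; close to 0.7, 0.49, 0.343
rho_hat = [np.dot(Xc[:-k], Xc[k:]) / np.dot(Xc, Xc) for k in (1, 2, 3)]
```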

General-order process

    Zt = (1 − α1B − · · · − αpB^p)Xt,

or equivalently

    Xt = Zt / (1 − α1B − · · · − αpB^p) = f(B)Zt,

where

    f(B) = (1 − α1B − · · · − αpB^p)^(−1) = 1 + β1B + β2B² + · · · .

If

    Σ_{j=1}^∞ |βj| < ∞,

then Xt is well defined and stationary.

Equivalently, if the roots of

    φ(B) = 1 − α1B − · · · − αpB^p = 0

are all outside the unit circle in the complex plane, then Xt is well defined.

Since finding β1, β2, · · · is usually not easy, we sometimes use stationarity to derive the following formulae (the Yule-Walker equations):

    ρ(k) = α1ρ(k − 1) + · · · + αpρ(k − p)

for k > 0. Note that ρ(0) = 1 and ρ(k) = ρ(−k). We can solve these as linear equations.

The solution has the general form

    ρ(k) = A1π1^|k| + · · · + Apπp^|k|,

where the πi are the roots of the auxiliary equation

    y^p − α1y^(p−1) − · · · − αp = 0,

and A1, · · · , Ap can be found from the first p linear equations.
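The first p Yule-Walker equations are linear in ρ(1), · · · , ρ(p) and can be assembled and solved mechanically. `yule_walker_rho` below is an illustrative sketch that moves every ρ(0) = 1 term to the right-hand side:

```python
import numpy as np

def yule_walker_rho(alpha):
    """Solve the first p Yule-Walker equations for rho(1), ..., rho(p)."""
    alpha = np.asarray(alpha, dtype=float)
    p = len(alpha)
    A = np.eye(p)
    b = np.zeros(p)
    for k in range(1, p + 1):          # equation rho(k) = sum_j alpha_j rho(k - j)
        for j in range(1, p + 1):
            lag = abs(k - j)           # rho(k - j) = rho(|k - j|)
            if lag == 0:
                b[k - 1] += alpha[j - 1]       # rho(0) = 1 goes to the right side
            else:
                A[k - 1, lag - 1] -= alpha[j - 1]
    return np.linalg.solve(A, b)

# AR(2) process X_t = X_{t-1} - 0.5 X_{t-2} + Z_t: rho(1) = 2/3, rho(2) = 1/6
rho = yule_walker_rho([1.0, -0.5])
```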

Example: Consider the AR(2) process

    Xt = α1Xt−1 + α2Xt−2 + Zt.

Then, Xt is stationary if both roots of the auxiliary equation satisfy

    |(α1 ± √(α1² + 4α2)) / 2| < 1.

This requires

    α1 + α2 < 1;  α1 − α2 > −1;  α2 > −1.

The roots are real if

    α1² + 4α2 ≥ 0.

Suppose the conditions are satisfied. Then, the Yule-Walker equations are

    ρ(0) = 1,
    ρ(1) = α1ρ(0) + α2ρ(−1) = α1 + α2ρ(1),
    ρ(k) = α1ρ(k − 1) + α2ρ(k − 2).

Example 3.1. Consider the AR(2) process

    Xt = Xt−1 − (1/2)Xt−2 + Zt.

Then,

    φ(B) = 1 − B + (1/2)B².

The roots of the auxiliary equation y² − y + 1/2 = 0 are

    (1 ± √(1 − 2)) / 2 = (1 ± i)/2 = 1/2 ± (1/2)i,

with modulus 1/√2 < 1. Therefore, the process is stationary. By the Yule-Walker equations, we have

    ρ(1) = ρ(0) − (1/2)ρ(−1)
    ⇒ ρ(1) = 1 − (1/2)ρ(1)
    ⇒ ρ(1) = 2/3.

For other ρ(k), we can use

    ρ(k) = ρ(k − 1) − (1/2)ρ(k − 2).

Another expression is

    ρ(k) = A1(1/2 + (1/2)i)^|k| + A2(1/2 − (1/2)i)^|k|
         = (1/√2)^|k| (cos(kπ/4) + (1/3)sin(kπ/4)).
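The closed-form expression for ρ(k) can be verified against the recursion ρ(k) = ρ(k − 1) − (1/2)ρ(k − 2):

```python
import math

# recursion rho(k) = rho(k-1) - 0.5 * rho(k-2), with rho(0) = 1, rho(1) = 2/3
rho = [1.0, 2.0 / 3.0]
for k in range(2, 10):
    rho.append(rho[-1] - 0.5 * rho[-2])

# closed form (1/sqrt(2))^k * (cos(k*pi/4) + (1/3)*sin(k*pi/4))
closed = [((1 / math.sqrt(2)) ** k)
          * (math.cos(k * math.pi / 4) + math.sin(k * math.pi / 4) / 3)
          for k in range(10)]
```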
3.4.5. Mixed ARMA models

An ARMA(p, q) process is

    Xt = α1Xt−1 + · · · + αpXt−p + Zt + β1Zt−1 + · · · + βq Zt−q.

It can also be written as

    φ(B)Xt = θ(B)Zt,

where

    φ(B) = 1 − α1B − · · · − αpB^p

and

    θ(B) = 1 + β1B + · · · + βq B^q.

The process is stationary if the roots of φ(B) = 0 are all outside the unit circle in the complex plane, and invertible if the roots of θ(B) = 0 are all outside the unit circle.

Let ψ(B) = θ(B)/φ(B). Then, we have

    Xt = ψ(B)Zt,

which is a pure MA process.

Alternatively, let π(B) = 1/ψ(B) = φ(B)/θ(B). Then, we have

    π(B)Xt = Zt,

which is a pure AR process.

In general, these expressions are useful for theoretical derivations but are rarely used directly in applications.

Example 3.2: Find the ψi and πi weights for the ARMA(1,1) model given by

    Xt = 0.5Xt−1 + Zt − 0.3Zt−1.

Solution: Let φ(B) = 1 − 0.5B and θ(B) = 1 − 0.3B. Then,

    ψ(B) = θ(B)/φ(B)
         = (1 − 0.3B)/(1 − 0.5B)
         = (1 − 0.3B) Σ_{i=0}^∞ 0.5^i B^i
         = 1 + Σ_{i=1}^∞ 0.2 × 0.5^(i−1) B^i.

Thus,

    ψi = 0.2 × 0.5^(i−1)

for i = 1, 2, · · ·. Similarly, expanding π(B) = φ(B)/θ(B) gives

    πi = −0.2 × 0.3^(i−1)

for i = 1, 2, · · ·. We always have ψ0 = π0 = 1.
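Both weight sequences can be recovered numerically by expanding the ratio of the two polynomials as a power series (long division); `expand_ratio` is an illustrative helper, and the sign of the πi follows the convention π(B)Xt = Zt with π(B) = φ(B)/θ(B):

```python
import numpy as np

def expand_ratio(num, den, n_terms):
    """Coefficients of num(B)/den(B) = c0 + c1*B + ...; den[0] must be nonzero."""
    num = np.asarray(num, dtype=float)
    den = np.asarray(den, dtype=float)
    c = np.zeros(n_terms)
    for k in range(n_terms):
        # match the coefficient of B^k in den(B) * c(B) = num(B)
        s = num[k] if k < len(num) else 0.0
        for j in range(1, min(k, len(den) - 1) + 1):
            s -= den[j] * c[k - j]
        c[k] = s / den[0]
    return c

psi = expand_ratio([1.0, -0.3], [1.0, -0.5], 5)  # 1, 0.2, 0.1, 0.05, ...
pi_ = expand_ratio([1.0, -0.5], [1.0, -0.3], 5)  # 1, -0.2, -0.06, ...
```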

3.4.6. Integrated ARMA (ARIMA) models

An ARIMA(p, d, q) model is defined by

    φ(B)(1 − B)^d Xt = θ(B)Zt,

where Zt ∼ WN(0, σZ²).

If we write Wt = (1 − B)^d Xt, then we have

    φ(B)Wt = θ(B)Zt,

and Wt is stationary under the usual conditions on φ(B). However, Xt itself is not stationary.
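For example, differencing once (d = 1) turns a random walk, which is an ARIMA(0, 1, 0) process, back into white noise:

```python
import numpy as np

rng = np.random.default_rng(2)
Z = rng.normal(size=1000)
X = Z.cumsum()        # ARIMA(0,1,0): a random walk, not stationary
W = np.diff(X)        # W_t = (1 - B) X_t recovers the stationary white noise
```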

3.4.7. The general linear process

A general linear process is

    Xt = Σ_{i=0}^∞ φi Zt−i.

If

    Σ_{i=0}^∞ |φi| < ∞,

then Xt is stationary.

Clearly, MA(q), AR(p), ARMA(p, q), and ARIMA(p, d, q) processes can all be written in this linear-process form.

3.4.8. Continuous processes

Suppose Xt is a time series in continuous time. Then,

    ρ(τ) = Corr(Xt, Xt+τ)

is a function defined on (−∞, ∞).

A continuous time series can be approximated by a discrete time series.

3.5. The Wold Decomposition Theorem

Consider the regression of Xt on (Xt−q, Xt−q−1, · · ·) and denote the residual variance by τq².

• If

    lim_{q→∞} τq² = V(Xt),

  then we call Xt purely indeterministic.

• If

    lim_{q→∞} τq² = 0,

  then we call Xt purely deterministic.

The Wold Decomposition Theorem says: any discrete-time stationary series can be expressed as the sum of two uncorrelated processes, one purely deterministic and the other purely indeterministic.
