Parameter redundancy I
Parameter redundancy II
Motivating example
Consider a process $X_t = \varepsilon_t$, $\forall t$.
Then $X_{t-1} = \varepsilon_{t-1}$ and $0.5X_{t-1} = 0.5\varepsilon_{t-1}$.
Adding the two equations gives:
$X_t + 0.5X_{t-1} = \varepsilon_t + 0.5\varepsilon_{t-1}$.
Given this, can we recover the original process?
$\Phi(B)X_t = \Theta(B)\varepsilon_t$, with $\Phi(x) = 1 + 0.5x$, $\Theta(x) = 1 + 0.5x$.
Since both polynomials have degree 1, this looks like an ARMA(1,1).
Is it? $(I + 0.5B)X_t = (I + 0.5B)\varepsilon_t$, i.e. $(I + 0.5B)(X_t - \varepsilon_t) = 0$.
It suffices to take $X_t = \varepsilon_t$, so we can recover the original process:
white noise.
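A minimal numerical sketch of this cancellation (numpy only; series length, seed and variable names are my own): applying $(I + 0.5B)$ to white noise gives exactly $X_t + 0.5X_{t-1}$ when $X_t = \varepsilon_t$, so the ARMA(1,1)-looking equation carries no extra information.

```python
import numpy as np

rng = np.random.default_rng(0)
eps = rng.standard_normal(500)   # white noise
X = eps.copy()                   # the "process" X_t = eps_t

# Apply (I + 0.5 B) to both sides: X_t + 0.5 X_{t-1} vs eps_t + 0.5 eps_{t-1}
lhs = X[1:] + 0.5 * X[:-1]
rhs = eps[1:] + 0.5 * eps[:-1]
print(np.allclose(lhs, rhs))     # True: the common factor (1 + 0.5 z) cancels

# The AR and MA polynomials share the root z = -2 (coefficients listed
# highest degree first), so the model reduces to white noise.
print(np.roots([0.5, 1.0]))
```

Any ARMA(1,1) whose AR and MA polynomials share a root is over-parametrised in the same way.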
Parameter redundancy III
Causal process, linear model, stationary sol. to AR(1) I
Causal process, linear model, stationary sol. to AR(1) II
AR(1): $X_t = \varepsilon_t + \varphi_1 X_{t-1} = \varepsilon_t + \varphi_1 \varepsilon_{t-1} + \varphi_1^2 X_{t-2} = \dots$
$\quad = \varepsilon_t + \dots + \varphi_1^{n-1} \varepsilon_{t-n+1} + \varphi_1^n X_{t-n}$
Now let $n \to \infty$: the term $\sum_{j=0}^{n-1} \varphi_1^j B^j(\varepsilon_t)$ converges to $\sum_{j=0}^{\infty} \varphi_1^j B^j(\varepsilon_t)$, which is well defined for $|\varphi_1| < 1$.
The last term $\varphi_1^n X_{t-n}$ has mean square (by stationarity):
$E(\varphi_1^{2n} X_{t-n}^2) = \varphi_1^{2n} E(X_{t-n}^2) = \varphi_1^{2n} E(X_0^2)$
Using $|\varphi_1| < 1$ gives 0 as the limit of this last term as $n \to \infty$ (ARMA assumes finite variance, so $E(X_0^2) < \infty$).
We get the stationary solution: $X_t = \sum_{j=0}^{\infty} \varphi_1^j B^j(\varepsilon_t) = \sum_{j=0}^{\infty} \varphi_1^j \varepsilon_{t-j}$ a.s.
It has linear dependence; think MA(∞).
It is causal: it does not depend on any future value $\varepsilon_{t+j}$, $j > 0$.
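As a check of this MA(∞) representation, a short simulation sketch (truncation level J, seed and names are arbitrary choices of mine): the recursively generated AR(1) path agrees with the truncated sum $\sum_{j=0}^{J}\varphi_1^j \varepsilon_{t-j}$ up to terms of order $\varphi_1^J$.

```python
import numpy as np

rng = np.random.default_rng(1)
phi = 0.6                              # |phi| < 1: causal case
n, burn = 300, 200
eps = rng.standard_normal(n + burn)

# Recursive AR(1): X_t = phi * X_{t-1} + eps_t
X = np.zeros(n + burn)
for t in range(1, n + burn):
    X[t] = phi * X[t - 1] + eps[t]

# Truncated MA(inf) representation: X_t ~ sum_{j=0}^{J} phi^j eps_{t-j}
J = 50
X_ma = np.array([sum(phi**j * eps[t - j] for j in range(J + 1))
                 for t in range(burn, n + burn)])

print(np.max(np.abs(X[burn:] - X_ma)))  # tiny: the two agree up to phi^J terms
```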
AR(1) when |φ1| = 1
Cannot define $\frac{1}{1-z}$ as a power/Laurent series that is absolutely convergent on the unit disk.
No stationary solution exists.
AR(1) when |φ1| > 1
For $|\varphi_1| > 1$ one can define the Laurent series¹:
$$\frac{1}{1-\varphi_1 z} := \frac{-1}{\varphi_1 z}\Big(1 - \frac{1}{\varphi_1 z}\Big)^{-1} = -\frac{1}{\varphi_1 z} - \frac{1}{\varphi_1^2 z^2} - \frac{1}{\varphi_1^3 z^3} - \dots$$
¹ Laurent series include negative powers, unlike Taylor series.
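A quick numerical check of this expansion on the unit circle (a sketch; the point z and the truncation level are arbitrary choices of mine): the partial sums of $-\sum_{k\ge 1}(\varphi_1 z)^{-k}$ converge to $1/(1-\varphi_1 z)$. The corresponding stationary solution depends on future noise, i.e. it is non-causal.

```python
import numpy as np

phi = 1.5                      # |phi| > 1
z = np.exp(1j * 0.7)           # a point on the unit circle, |z| = 1

target = 1.0 / (1.0 - phi * z)

# Partial sums of the Laurent series  -sum_{k>=1} 1 / (phi^k z^k)
K = 60
approx = -sum(1.0 / (phi**k * z**k) for k in range(1, K + 1))

print(abs(target - approx))    # tiny: the series converges for |z| >= 1/|phi|
```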
Wold’s decomposition theorem
Estimation of parameters for the chosen model I
Estimation of parameters for the chosen model II
Method of moments I
Method of moments II
Method of moments III
Method of moments IV
Method of moments V
MA(1): $X_t = \varepsilon_t + \theta_1 \varepsilon_{t-1}$.
Recall the theoretical autocorrelation function had a single non-zero spike at lag 1, given by $\rho(1) = \frac{\theta_1}{1+\theta_1^2}$.
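A sketch of the resulting method-of-moments estimator (function name and simulation settings are mine): estimate the lag-1 sample autocorrelation $r_1$, solve $r_1\theta^2 - \theta + r_1 = 0$, and keep the invertible root $|\theta_1| < 1$; a real solution requires $|r_1| < 1/2$.

```python
import numpy as np

def ma1_mom(x):
    """Method-of-moments estimate of theta_1 in X_t = eps_t + theta_1 * eps_{t-1}."""
    x = np.asarray(x, dtype=float) - np.mean(x)
    r1 = np.sum(x[1:] * x[:-1]) / np.sum(x * x)      # lag-1 sample autocorrelation
    if abs(r1) >= 0.5:
        raise ValueError("no real solution: need |r1| < 1/2")
    if r1 == 0.0:
        return 0.0
    # Solve r1*theta^2 - theta + r1 = 0; the two roots are reciprocals,
    # so keep the invertible one (|theta| < 1).
    return (1 - np.sqrt(1 - 4 * r1**2)) / (2 * r1)

# Quick check on a simulated MA(1) with theta_1 = 0.5 (true rho(1) = 0.4)
rng = np.random.default_rng(2)
eps = rng.standard_normal(5001)
x = eps[1:] + 0.5 * eps[:-1]
print(ma1_mom(x))   # roughly 0.5
```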
Method of moments VI
Comparison: Yule-Walker vs OLS estimators I
Stacking the AR equations as a regression, with the last row involving $X_n$, $X_{n-1}$, $\varepsilon_n$:
$X = Z\beta + \varepsilon$
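A small simulation sketch comparing the two estimators on an AR(2) (coefficients, sample size and seed are arbitrary choices of mine): Yule-Walker solves the Toeplitz system built from sample autocovariances, OLS regresses $X_t$ on its lags; for a causal AR model the two are very close.

```python
import numpy as np
from scipy.linalg import toeplitz

rng = np.random.default_rng(3)
phi = np.array([0.5, -0.3])                  # true AR(2) coefficients
n = 2000
x = np.zeros(n)
eps = rng.standard_normal(n)
for t in range(2, n):
    x[t] = phi[0] * x[t - 1] + phi[1] * x[t - 2] + eps[t]

p = 2
xc = x - x.mean()
gamma = np.array([np.sum(xc[:n - h] * xc[h:]) / n for h in range(p + 1)])

# Yule-Walker: solve Gamma_p phi = gamma_p with sample autocovariances
phi_yw = np.linalg.solve(toeplitz(gamma[:p]), gamma[1:p + 1])

# OLS: regress X_t on (X_{t-1}, X_{t-2})
Z = np.column_stack([xc[p - 1:n - 1], xc[p - 2:n - 2]])
phi_ols, *_ = np.linalg.lstsq(Z, xc[p:], rcond=None)

print("YW :", phi_yw)    # both close to (0.5, -0.3)
print("OLS:", phi_ols)
```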
Comparison: Yule-Walker vs OLS estimators II
Comparison: Yule-Walker vs OLS estimators III
Prop
i) For AR(p), YW estimators are optimal, as are the OLS estimators.
ii) For MA(q) or ARMA(p,q), YW estimators are suboptimal (these models are not linear in the parameters).
Forecasting I
Theorem
Prediction error and prediction variables are orthogonal:
$$E[(X_{n+1} - X_{n+1}^n)\, X_i] = 0, \quad \forall i = 1, \dots, n,$$
where $X_{n+1}^n$ denotes the best linear predictor (BLP) of $X_{n+1}$ given $X_1, \dots, X_n$.
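A Monte Carlo sketch of the orthogonality property for a causal AR(1), where the one-step BLP given the past is $\varphi X_n$ (path length, number of replications and names are mine):

```python
import numpy as np

rng = np.random.default_rng(4)
phi, n, reps = 0.7, 20, 50_000

# Simulate many AR(1) paths X_1,...,X_{n+1}, started from the stationary law
X = np.zeros((reps, n + 1))
X[:, 0] = rng.standard_normal(reps) / np.sqrt(1 - phi**2)
for t in range(1, n + 1):
    X[:, t] = phi * X[:, t - 1] + rng.standard_normal(reps)

# For AR(1), the BLP of X_{n+1} given X_1,...,X_n is phi * X_n
err = X[:, n] - phi * X[:, n - 1]           # prediction error X_{n+1} - X^n_{n+1}
cov = (err[:, None] * X[:, :n]).mean(axis=0)
print(np.max(np.abs(cov)))                  # close to 0: error orthogonal to X_1..X_n
```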
Forecasting II
Corollary
The BLP coefficients $\varphi_{ni}$ satisfy:
$$\sum_{j=1}^{n} \varphi_{nj}\, \gamma(k - j) = \gamma(k), \quad \forall k = 1, \dots, n. \qquad (1)$$
Proof: From the Theorem, $(X_{n+1} - X_{n+1}^n) \perp X_i$, so $E[X_{n+1} X_i] = E\Big[\sum_{j=1}^{n} \varphi_{nj} X_{n+1-j} X_i\Big]$.
Thus $\gamma(n + 1 - i) = \sum_{j=1}^{n} \varphi_{nj}\, \gamma(n + 1 - j - i)$; denote $k := n + 1 - i$ to get the result.
Forecasting III
BLP coefficients
Matrix form: $\gamma_n = \Gamma_n \varphi_n$, where
$$\gamma_n = \begin{pmatrix} \gamma(1) \\ \gamma(2) \\ \vdots \\ \gamma(n) \end{pmatrix}, \qquad
\Gamma_n = \begin{pmatrix} \gamma(0) & \gamma(1) & \dots & \gamma(n-1) \\ \gamma(1) & \gamma(0) & \dots & \gamma(n-2) \\ \vdots & \vdots & & \vdots \\ \gamma(n-1) & \gamma(n-2) & \dots & \gamma(0) \end{pmatrix}, \qquad
\varphi_n = \begin{pmatrix} \varphi_{n1} \\ \varphi_{n2} \\ \vdots \\ \varphi_{nn} \end{pmatrix}$$
ARMA: $\hat{\varphi}_n = \Gamma_n^{-1} \gamma_n$, and $P_{n+1}^n = \gamma(0) - \gamma_n' \Gamma_n^{-1} \gamma_n$.
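A sketch of this computation (the helper name and the AR(1) autocovariance used as an example are mine): build the Toeplitz matrix $\Gamma_n$, solve for $\varphi_n$, and evaluate the one-step MSE $P_{n+1}^n$.

```python
import numpy as np
from scipy.linalg import toeplitz

def blp_coefficients(acvf, n):
    """Solve Gamma_n phi_n = gamma_n and return (phi_n, one-step prediction MSE)."""
    gamma = np.array([acvf(h) for h in range(n + 1)])
    Gamma_n = toeplitz(gamma[:n])                 # (gamma(|i-j|))_{i,j=1..n}
    gamma_n = gamma[1:n + 1]
    phi_n = np.linalg.solve(Gamma_n, gamma_n)
    mse = gamma[0] - gamma_n @ phi_n              # P^n_{n+1} = gamma(0) - gamma_n' Gamma_n^{-1} gamma_n
    return phi_n, mse

# Example: AR(1) with phi = 0.8, sigma^2 = 1, so gamma(h) = 0.8**h / (1 - 0.64)
phi_n, mse = blp_coefficients(lambda h: 0.8**h / (1 - 0.8**2), n=5)
print(phi_n)   # approx (0.8, 0, 0, 0, 0): only X_n matters for an AR(1)
print(mse)     # approx 1.0 = sigma^2, the one-step-ahead error variance
```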
Example: prediction for AR(2)
m-step ahead prediction
PACF via Durbin-Levinson algorithm I
PACF
Plotting $\varphi_{nn}$ for $n = 1, 2, \dots$ gives the PACF (partial autocorrelation function).
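A sketch implementation of the Durbin-Levinson recursion producing $\varphi_{nn}$ from a given autocovariance sequence (function name and example values are mine; the recursion used is the standard one: $\varphi_{nn} = [\gamma(n) - \sum_{j=1}^{n-1}\varphi_{n-1,j}\gamma(n-j)]/v_{n-1}$, $\varphi_{nj} = \varphi_{n-1,j} - \varphi_{nn}\varphi_{n-1,n-j}$, $v_n = v_{n-1}(1-\varphi_{nn}^2)$):

```python
import numpy as np

def durbin_levinson_pacf(gamma, max_lag):
    """Return (phi_{11}, ..., phi_{max_lag,max_lag}) from autocovariances gamma[0..max_lag]."""
    pacf = np.zeros(max_lag)
    phi_prev = np.zeros(0)
    v = gamma[0]                                   # prediction MSE v_0 = gamma(0)
    for n in range(1, max_lag + 1):
        # phi_{nn} = (gamma(n) - sum_j phi_{n-1,j} gamma(n-j)) / v_{n-1}
        num = gamma[n] - sum(phi_prev[j] * gamma[n - 1 - j] for j in range(n - 1))
        phi_nn = num / v
        phi_new = np.empty(n)
        phi_new[:n - 1] = phi_prev - phi_nn * phi_prev[::-1]   # phi_{nj} update
        phi_new[n - 1] = phi_nn
        v *= (1 - phi_nn**2)                       # v_n = v_{n-1}(1 - phi_{nn}^2)
        pacf[n - 1] = phi_nn
        phi_prev = phi_new
    return pacf

# Example: AR(1) with gamma(h) = 0.8**h / (1 - 0.64); PACF is 0.8 at lag 1, then ~0
gamma = np.array([0.8**h / (1 - 0.8**2) for h in range(6)])
print(durbin_levinson_pacf(gamma, 5))
```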
PACF via Durbin-Levinson algorithm II
PACF via Durbin-Levinson algorithm III
Interpretation:
The equations (1) satisfied by the BLP coefficients are similar to the normal equations of a regression.
Thus the coefficient $\varphi_{nn}$ is like the auto-regression coefficient of $X_{n+1}$ on $X_1$, measuring the impact of $X_1$ on $X_{n+1}$ while everything else is held constant (hence partial correlation).
For AR(p) we have $\varphi_{nn} \to 0$ for $n > p$ (with asymptotic variance the inverse of the number of observations), and $\varphi_{pp} \to \varphi_p$.
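A simulation sketch of this cut-off behaviour (this assumes statsmodels is available for the sample PACF; model coefficients, sample size and seed are my own choices):

```python
import numpy as np
from statsmodels.tsa.stattools import pacf

# Simulate an AR(2) and look at the sample PACF: it should cut off after lag 2,
# with values beyond lag p lying inside the approximate +-1.96/sqrt(n) band.
rng = np.random.default_rng(5)
n = 2000
x = np.zeros(n)
eps = rng.standard_normal(n)
for t in range(2, n):
    x[t] = 0.5 * x[t - 1] - 0.3 * x[t - 2] + eps[t]

sample_pacf = pacf(x, nlags=10)
band = 1.96 / np.sqrt(n)
for lag, value in enumerate(sample_pacf[1:], start=1):
    flag = " " if abs(value) < band else "*"
    print(f"lag {lag:2d}: {value:+.3f} {flag}")   # '*' marks lags outside the band
```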
ARMA(1,1) I
$$\rho(h) = \frac{(1 + \theta\varphi)(\varphi + \theta)}{1 + 2\theta\varphi + \theta^2}\, \varphi^{h-1}, \quad h \ge 1.$$
Note: ACF decreases to zero and does not cut off at any lag.
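A quick check of this formula against the sample ACF of a simulated ARMA(1,1) path (parameter values, path length and seed are arbitrary choices of mine):

```python
import numpy as np

phi, theta = 0.7, 0.4

# Closed-form ACF of ARMA(1,1):
# rho(h) = (1 + theta*phi)(phi + theta) / (1 + 2*theta*phi + theta^2) * phi^(h-1)
def rho(h):
    return (1 + theta * phi) * (phi + theta) / (1 + 2 * theta * phi + theta**2) * phi**(h - 1)

# Compare with the sample ACF of a long simulated path
rng = np.random.default_rng(6)
n = 100_000
eps = rng.standard_normal(n)
x = np.zeros(n)
for t in range(1, n):
    x[t] = phi * x[t - 1] + eps[t] + theta * eps[t - 1]

xc = x - x.mean()
gamma0 = np.mean(xc * xc)
for h in range(1, 6):
    sample = np.mean(xc[h:] * xc[:-h]) / gamma0
    print(f"h={h}: theory {rho(h):.3f}  sample {sample:.3f}")
```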
ARMA(1,1) II
ARMA(1,1) III
AR vs ARMA via PACF I
AR vs ARMA via PACF II
AR vs ARMA via PACF III