STA457 Week 5 Notes
Lecture 8
Lijia Wang
Last Time:
1 Autoregressive (AR) process
2 Moving average (MA) process
3 Autoregressive moving average (ARMA)
Today:
1 Autoregressive moving average (ARMA)
2 Partial Auto-correlation
3 Forecasting
4 Estimation
An ARMA(p, q) model can be written compactly in operator form as $\phi(B)x_t = \theta(B)w_t$.
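For concreteness (an added illustration, not on the original slide): with p = q = 1 the operator form expands to $(1 - \phi B)x_t = (1 + \theta B)w_t$, i.e. $x_t = \phi x_{t-1} + \theta w_{t-1} + w_t$, the familiar ARMA(1, 1) equation.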
For MA(q) models, the ACF will be zero for lags greater than q. Moreover, because $\theta_q \neq 0$, the ACF will not be zero at lag q. We can use this property to identify MA models.
If the process, however, is ARMA or AR, the ACF alone tells us little
about the orders of dependence.
The partial autocorrelation function (PACF) can be used to identify
AR models.
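As an illustration of this identification strategy (an added sketch, not from the lecture; the MA(2) coefficients below are arbitrary choices), the following Python snippet simulates an MA(2) process and checks that its sample ACF is near zero beyond lag q = 2:

import numpy as np
from statsmodels.tsa.arima_process import ArmaProcess
from statsmodels.tsa.stattools import acf

np.random.seed(457)
# ArmaProcess takes full lag polynomials, including the lag-0 coefficient.
ma2 = ArmaProcess(ar=[1.0], ma=[1.0, 0.6, -0.4])  # MA(2), theta1 = 0.6, theta2 = -0.4
x = ma2.generate_sample(nsample=5000)

for h, r in enumerate(acf(x, nlags=6)):
    print(f"lag {h}: sample ACF = {r: .3f}")      # lags 3-6 should be near 0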
The partial correlation between X and Y given Z is $\rho_{XY \mid Z} = \operatorname{corr}(X - \hat{X},\ Y - \hat{Y})$, where $\hat{X}$ and $\hat{Y}$ denote the regressions of X and Y on Z.
Example: consider the AR(1) model $x_t = \phi x_{t-1} + w_t$. We have
\begin{align*}
\gamma_x(2) &= \operatorname{cov}(x_t,\ x_{t-2}) \\
&= \operatorname{cov}(\phi x_{t-1} + w_t,\ x_{t-2}) = \operatorname{cov}(\phi(\phi x_{t-2} + w_{t-1}) + w_t,\ x_{t-2}) \\
&= \operatorname{cov}(\phi^2 x_{t-2} + \phi w_{t-1} + w_t,\ x_{t-2}) \\
&= \phi^2 \gamma_x(0),
\end{align*}
since $w_{t-1}$ and $w_t$ are uncorrelated with $x_{t-2}$. So the ACF of an AR(1) tails off rather than cutting off, which is why we need a different tool to identify AR order.
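A quick numerical check of this derivation (an added sketch; $\phi = 0.7$ and $\sigma_w^2 = 1$ are assumed illustrative values):

import numpy as np
from statsmodels.tsa.arima_process import ArmaProcess

np.random.seed(457)
phi = 0.7
# The AR lag polynomial is written as (1 - phi*B), hence the sign on phi.
x = ArmaProcess(ar=[1.0, -phi], ma=[1.0]).generate_sample(nsample=100_000)

gamma0 = np.var(x)                       # sample gamma_x(0)
gamma2 = np.cov(x[:-2], x[2:])[0, 1]     # sample gamma_x(2)
print(gamma2, phi**2 * gamma0)           # the two values should nearly agree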
Definition: The PACF, $\phi_{hh}$, is the correlation between $x_{t+h}$ and $x_t$ with the linear dependence of $\{x_{t+1}, x_{t+2}, \ldots, x_{t+h-1}\}$ on each removed.
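A companion sketch to the MA example above (again illustrative, with an assumed $\phi = 0.7$): for an AR(1), the sample PACF should be close to $\phi$ at lag 1 and near zero at all higher lags, mirroring how the ACF identifies MA order.

import numpy as np
from statsmodels.tsa.arima_process import ArmaProcess
from statsmodels.tsa.stattools import pacf

np.random.seed(457)
phi = 0.7
x = ArmaProcess(ar=[1.0, -phi], ma=[1.0]).generate_sample(nsample=5000)

for h, p in enumerate(pacf(x, nlags=5)):
    print(f"lag {h}: sample PACF = {p: .3f}")  # lag 1 near 0.7, lags 2-5 near 0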
Given data $x_1, \ldots, x_n$, the $m$-step-ahead linear forecast takes the form $x_{n+m}^{n} = \alpha_0 + \alpha_1 x_1 + \cdots + \alpha_n x_n$. We note that the $\alpha$'s depend on $n$ and $m$, but for now we drop the dependence from the notation.
For example, if n = m = 1, then $x_2^1$ is the one-step-ahead linear forecast of $x_2$ given $x_1$. That is, $x_2^1 = \alpha_0 + \alpha_1 x_1$.
But if n = 2, $x_3^2$ is the one-step-ahead linear forecast of $x_3$ given $x_1$ and $x_2$. That is, $x_3^2 = \alpha_0 + \alpha_1 x_1 + \alpha_2 x_2$.
Example: consider the AR(1) model
$x_t = \phi x_{t-1} + w_t,$
where $w_t$ is white noise with variance $\sigma_w^2$ and the model parameters are known. Suppose that we have observed $x_1$ and $x_2$, and we would like to estimate $x_3$. Find the best linear predictor of $x_3$.
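Solving the prediction (normal) equations $\operatorname{cov}(x_3 - \alpha_1 x_1 - \alpha_2 x_2,\ x_k) = 0$ for $k = 1, 2$ gives $\alpha_1 = 0$ and $\alpha_2 = \phi$ (the intercept is 0 for this mean-zero model), so the best linear predictor is $x_3^2 = \phi x_2$. A numerical sketch of that calculation, assuming the illustrative values $\phi = 0.7$ and $\sigma_w^2 = 1$:

import numpy as np

phi, sigma_w2 = 0.7, 1.0
gamma = lambda h: sigma_w2 / (1 - phi**2) * phi**abs(h)  # AR(1) autocovariance

# Normal equations: Gamma2 @ a = (gamma(2), gamma(1)), with a the
# coefficients on (x1, x2).
Gamma2 = np.array([[gamma(0), gamma(1)],
                   [gamma(1), gamma(0)]])
a = np.linalg.solve(Gamma2, np.array([gamma(2), gamma(1)]))
print(a)  # approximately [0, 0.7]: the BLP is x3 = phi * x2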
Prediction intervals take the form $x_{n+m}^{n} \pm c_{\alpha/2}\sqrt{P_{n+m}^{n}}$, where $P_{n+m}^{n}$ is the mean squared prediction error. For example, if the process is Gaussian, then choosing $c_{\alpha/2} = 2$ will yield an approximate 95% prediction interval for $x_{n+m}$.
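A closing sketch tying the forecast and its interval together (assumed values $\phi = 0.7$, $\sigma_w^2 = 1$, and last observation $x_n = 2$): for an AR(1), the m-step forecast is $\phi^m x_n$ with mean squared prediction error $P_{n+m}^{n} = \sigma_w^2 \sum_{j=0}^{m-1} \phi^{2j}$, and the approximate 95% interval uses $c_{\alpha/2} = 2$.

import numpy as np

phi, sigma_w2, x_n = 0.7, 1.0, 2.0
for m in (1, 2, 3):
    forecast = phi**m * x_n                              # m-step-ahead forecast
    mspe = sigma_w2 * sum(phi**(2*j) for j in range(m))  # P_{n+m}^n for AR(1)
    half = 2 * np.sqrt(mspe)                             # c_{alpha/2} = 2
    print(f"m={m}: forecast {forecast:.3f}, "
          f"approx 95% PI ({forecast - half:.3f}, {forecast + half:.3f})")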