
LECTURE 12: Sample ACF and PACF

7. Estimation of ACF and PACF. ([BD], §§1.4.1, 2.4)


In practice, time series analysis starts with a sample of consecutive observations, x1 , . . . , xn .
In order to choose a model, we gain information about the process from this sample.
(i) Plot the data and see whether the process is stationary (if not, take logarithms,
difference, etc.)
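As a minimal illustration of step (i) in Python (NumPy assumed; the toy series and the log-then-difference recipe are illustrative choices, not prescribed by the notes):

```python
import numpy as np

# Step (i): before estimating the ACF, transform toward stationarity.
# One common recipe: take logarithms to stabilise the variance,
# then difference once to remove a trend.
x = np.array([100.0, 110.0, 125.0, 140.0, 160.0, 180.0])  # toy trending series
y = np.diff(np.log(x))  # y_t = log(x_{t+1}) - log(x_t): log-returns, length n - 1
```

The transformed series `y` (not `x`) is what the estimators below would be applied to.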
(ii) For a stationary sample we write:
Sample mean estimator: X̄_n = (1/n) Σ_{t=1}^n X_t. Unbiased: E(X̄_n) = (1/n) Σ_{t=1}^n E(X_t) = µ.
Sample variance estimator: σ̂² ≡ γ̂(0) = (1/n) Σ_{t=1}^n (X_t − X̄_n)².
Sample autocovariance function at lag h: γ̂(h) = (1/n) Σ_{t=1}^{n−h} (X_t − X̄_n)(X_{t+h} − X̄_n).

Use n ≥ 50, h ≤ n/4 for these calculations (otherwise the sum has too few terms).
Sample ACF at lag h is defined as: ρ̂(h) = γ̂(h)/γ̂(0). Note: γ̂(0) = σ̂ 2 .
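The estimators above can be sketched in Python with NumPy (`sample_acf` is a hypothetical helper name; it follows the definitions given here, with the 1/n normalisation):

```python
import numpy as np

def sample_acf(x, max_lag):
    """Sample ACF rho_hat(h) = gamma_hat(h) / gamma_hat(0), h = 0..max_lag."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    d = x - x.mean()                      # X_t - X_bar_n
    gamma0 = np.sum(d * d) / n            # gamma_hat(0) = sigma_hat^2
    acf = np.empty(max_lag + 1)
    for h in range(max_lag + 1):
        # gamma_hat(h) = (1/n) sum_{t=1}^{n-h} (X_t - X_bar)(X_{t+h} - X_bar)
        acf[h] = np.sum(d[: n - h] * d[h:]) / n / gamma0
    return acf
```

Note the divisor is n (not n − h) at every lag, matching the definition above.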
(iii) Examples. Show pictures of the sample ACF for different models.
• For a nonstationary TS, |ρ̂(h)| usually remains large for many lags.
• For data with a strong deterministic periodic component, the sample ACF is also periodic.
(iv) Bartlett’s formula

Motivation: we saw that for the TS X_t = Z_t + θZ_{t−1} only the ACF at lag 1 is non-zero. Thus,
given a sample, we estimate its ACF (find ρ̂(h)), and if ρ̂(h), h ≥ 2, are almost zero,
then we suspect that we are dealing with the model X_t = Z_t + θZ_{t−1}. However, what does
“almost zero” mean? We need to be able to write a confidence interval for ρ̂(h), so we
need to know the distribution of ρ̂(h) and, in particular, its variance. The following theorem
says that the distribution of ρ̂(h) is Gaussian for large n, and thus “almost zero” means
|ρ̂(h)| < 1.96 √Var(ρ̂(h)).
Bartlett’s formula: Assume that n observations X_1, . . . , X_n come from a stationary
TS with IID innovations Z_t, and that n is large. Let ρ̂_n = (ρ̂(1), . . . , ρ̂(n))′ and ρ_n =
(ρ(1), . . . , ρ(n))′ (here ρ(i) = ρ_X(i)). Then ρ̂_n is distributed approximately N(ρ_n, n⁻¹W),
where W is a covariance matrix with elements

w_{ij} = Σ_{k=1}^∞ {ρ(k+i) + ρ(k−i) − 2ρ(i)ρ(k)} × {ρ(k+j) + ρ(k−j) − 2ρ(j)ρ(k)}.

What does it mean? CONCLUSIONS:


(a) IID noise: ρ(k) = 0 for all k ≥ 1, so w_{ij} = 0 for all i ≠ j and
Var(ρ̂(h)) ≈ (1/n) w_{hh} = 1/n, h = 1, 2, . . .

(b) MA(q): X_t = Z_t + θ_1 Z_{t−1} + . . . + θ_q Z_{t−q}, Z_t ∼ IID(0, σ_Z²). We know ρ(q+j) = 0,
j = 1, 2, . . ., i.e. the autocorrelation coefficients are zero beyond lag q; then
Var(ρ̂(h)) ≈ (1/n) w_{hh} = (algebra) = (1/n)(1 + 2 Σ_{k=1}^q ρ²(k)), h > q.

(c) Asymptotic distribution of ρ̂(h) is Normal with mean ρ(h) and variance given above.
Recommended (Box and Jenkins): n ≥ 50, h ≤ n/4.
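Under the assumptions of conclusion (b), the variance formula gives a standard error for ρ̂(h), h > q. A small sketch (`bartlett_se` is a hypothetical helper; `rho` holds ρ(1), . . . , ρ(q), theoretical or estimated):

```python
import numpy as np

def bartlett_se(rho, q, n):
    """Bartlett standard error of rho_hat(h) for h > q under an MA(q) model:
    sqrt((1/n)(1 + 2 * sum_{k=1}^q rho(k)^2))."""
    rho = np.asarray(rho, dtype=float)
    var = (1.0 + 2.0 * np.sum(rho[:q] ** 2)) / n
    return np.sqrt(var)
```

With q = 0 (IID noise) this collapses to the familiar 1/√n, conclusion (a).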
Examination of sample ACF:
(i) If |ρ̂(h)| < 1.96 n^{−1/2} for all h ≥ 1, then we assume MA(0), i.e. a WN sequence.
(ii) If |ρ̂(1)| > 1.96 n^{−1/2}, then we should compare the rest of the ρ̂(h) with
1.96 n^{−1/2}(1 + 2ρ(1)²)^{1/2}. However, ρ(1)² is unknown. Thus, there are two possibilities
to proceed:
• replace ρ(1) by its estimate, i.e. check whether |ρ̂(h)| < 1.96 n^{−1/2}(1 + 2ρ̂(1)²)^{1/2}, h ≥ 2.
If yes, an MA(1) model.
• note that 2ρ̂(1)²/n ≈ 0 for large n, and simply check whether |ρ̂(h)| < 1.96 n^{−1/2}, h ≥ 2. If yes,
an MA(1) model.
In general, if |ρ̂(h_0)| > 1.96 n^{−1/2} and |ρ̂(h)| < 1.96 n^{−1/2} for h > h_0, then assume an MA(q)
model with q = h_0.
P
Note: because we throw away the nonnegative (unknown to us) terms 2 Σ_{i=1}^q ρ²(i), if a value
of the sample ACF satisfies ρ̂(h) ≈ 1.96 n^{−1/2}, assume that ρ̂(h) is within the confidence interval.
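The identification rule above can be sketched as follows (`suggest_ma_order` is a hypothetical helper; it uses the simple band 1.96 n^{−1/2}, i.e. the second of the two possibilities):

```python
import numpy as np

def suggest_ma_order(acf, n):
    """Suggest an MA(q) order: the largest lag h0 with |rho_hat(h0)| > 1.96/sqrt(n);
    all sample autocorrelations at later lags then lie inside the band.
    acf[h] = rho_hat(h), with acf[0] = 1."""
    band = 1.96 / np.sqrt(n)
    significant = [h for h in range(1, len(acf)) if abs(acf[h]) > band]
    return max(significant) if significant else 0  # q = 0 means white noise
```

Taking the largest significant lag implements "|ρ̂(h)| inside the band for all h > h_0" directly.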
Transparency.

Sample PACF ([BD], p. 94): α̂(h) ≡ φ̂_{hh} is defined as the last component of the vector φ̂_h,
which is the solution of the system of equations φ̂_h = R̂_h^{−1} ρ̂_h.
[This is the solution of the Yule-Walker equations with the sample ACF ρ̂ substituted for the
theoretical ACF ρ.]
Result: For an AR(p) process, the sample PACF values at lags greater than p are approximately
independent Normal r.v.s with mean zero and variance Var(φ̂_{hh}) ≈ 1/n, where n is the (large)
sample size.
To decide whether a value of the PACF is zero, compare it with the standard deviation: if
|α̂(h)| > 1.96 n^{−1/2} for h = p and |α̂(h)| ≤ 1.96 n^{−1/2} for h > p, then the model is AR(p).
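The sample PACF via the Yule-Walker system can be sketched as follows (`sample_pacf` is a hypothetical helper; it solves R̂_h φ̂_h = ρ̂_h at each lag by a direct linear solve rather than the Durbin-Levinson recursion):

```python
import numpy as np

def sample_pacf(acf, max_lag):
    """Sample PACF alpha_hat(h) = phi_hat_{hh}: last component of the solution
    of R_h * phi_h = rho_h, where R_h[i, j] = rho_hat(|i - j|)."""
    acf = np.asarray(acf, dtype=float)  # acf[h] = rho_hat(h), acf[0] = 1
    pacf = np.empty(max_lag + 1)
    pacf[0] = 1.0
    for h in range(1, max_lag + 1):
        # Toeplitz autocorrelation matrix R_hat_h and right-hand side rho_hat_h
        R = np.array([[acf[abs(i - j)] for j in range(h)] for i in range(h)])
        phi = np.linalg.solve(R, acf[1 : h + 1])
        pacf[h] = phi[-1]  # phi_hat_{hh}
    return pacf
```

For an ACF of AR(1) shape, ρ(h) = φ^h, this returns φ at lag 1 and (numerically) zero beyond, as the Result above predicts.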
