
EIE3002/EII3002 Exercise 7 & 8

Exercise 7: Based on Lecture 7.

1. Describe the difference between autocorrelations and partial autocorrelations. How can autocorrelations at certain displacements be positive while the partial autocorrelations at those same displacements are negative?

The autocorrelation at displacement 𝜏 measures the linear association between 𝑦𝑡 and 𝑦𝑡−𝜏. The partial autocorrelation at displacement 𝜏 measures that association after controlling for the effects of the intermediate values 𝑦𝑡−1, …, 𝑦𝑡−𝜏+1. Hence the two types of correlation, although related, are nevertheless very different, and they may well be of different sign.
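
Although not part of the original answer, a concrete case makes this vivid: for an AR(2) with 𝜙1 = 0.9 and 𝜙2 = −0.2 (hypothetical values chosen purely for illustration), the autocorrelation at displacement 2 is positive while the partial autocorrelation there equals 𝜙2 < 0. A minimal Python sketch, assuming statsmodels is installed:

```python
from statsmodels.tsa.arima_process import ArmaProcess

# Hypothetical AR(2): (1 - 0.9L + 0.2L^2) y_t = eps_t, i.e. phi1 = 0.9, phi2 = -0.2.
# ArmaProcess takes lag-polynomial coefficients, hence the sign flips.
proc = ArmaProcess(ar=[1, -0.9, 0.2], ma=[1])

rho = proc.acf(lags=3)                      # theoretical rho(0), rho(1), rho(2)
# Partial autocorrelation at displacement 2, via the formula used in 3(d):
p2 = (rho[2] - rho[1] ** 2) / (1 - rho[1] ** 2)

print(rho[2])   # approx  0.475 -> autocorrelation positive
print(p2)       # approx -0.200 -> partial autocorrelation negative (= phi2)
```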

2. Given below is a list of four autocovariance functions:

a) 𝛾(𝑡, 𝜏) = 𝛼
b) 𝛾(𝑡, 𝜏) = 𝑒^(−𝛼𝜏)
c) 𝛾(𝑡, 𝜏) = 𝛼𝜏, and
d) 𝛾(𝑡, 𝜏) = 𝛼/𝜏

where 𝛼 is a positive constant. Which autocovariance function(s) are consistent with covariance stationarity, and which are not? Why?

The autocovariance function of a covariance stationary process must not depend on 𝑡 and must eventually decay toward zero as the displacement 𝜏 grows. Of the four given autocovariance functions, (b) and (d) are consistent with covariance stationarity, since both depend only on 𝜏 and decay as 𝜏 increases; (a) and (c) are not, since (a) never decays and (c) grows without bound.

3. Consider the following sample of time series data with 36 values.

Period, t Value Period, t Value Period, t Value
1 23 13 86 25 17
2 59 14 33 26 45
3 36 15 90 27 9
4 99 16 74 28 72
5 36 17 7 29 33
6 74 18 54 30 17
7 30 19 98 31 3
8 54 20 50 32 29
9 17 21 86 33 30
10 36 22 90 34 68
11 89 23 65 35 87
12 77 24 20 36 44

a) Produce a time series plot.

[Time series plot of 𝑦𝑡 against t = 1, …, 36.]

b) Compute sample autocorrelation, 𝜌̂(𝜏) at displacements 1, 2, 3, 4, 5 and 6.

∑_{t=1}^{T} (y_t − ȳ)² = 29478.97222

∑_{t=2}^{T} (y_t − ȳ)(y_{t−1} − ȳ) = 3032.00

∑_{t=3}^{T} (y_t − ȳ)(y_{t−2} − ȳ) = 2911.22

∑_{t=4}^{T} (y_t − ȳ)(y_{t−3} − ȳ) = −1260.72

∑_{t=5}^{T} (y_t − ȳ)(y_{t−4} − ȳ) = −915.98

∑_{t=6}^{T} (y_t − ȳ)(y_{t−5} − ȳ) = −5409.00

∑_{t=7}^{T} (y_t − ȳ)(y_{t−6} − ȳ) = 743.30

Therefore, the sample autocorrelations are:

𝜌̂(1) = 3032.00 / 29478.97 = 0.1029
𝜌̂(2) = 2911.22 / 29478.97 = 0.0988
𝜌̂(3) = −1260.72 / 29478.97 = −0.0428
𝜌̂(4) = −915.98 / 29478.97 = −0.0311
𝜌̂(5) = −5409.00 / 29478.97 = −0.1835
𝜌̂(6) = 743.30 / 29478.97 = 0.0252
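
These hand computations can be checked numerically; the sketch below (not part of the original solution) recomputes 𝜌̂(1), …, 𝜌̂(6) in Python from the data in part (a):

```python
import numpy as np

# The 36 observations from the table in question 3(a).
y = np.array([23, 59, 36, 99, 36, 74, 30, 54, 17, 36, 89, 77,
              86, 33, 90, 74,  7, 54, 98, 50, 86, 90, 65, 20,
              17, 45,  9, 72, 33, 17,  3, 29, 30, 68, 87, 44], dtype=float)

T = len(y)
ybar = y.mean()
denom = np.sum((y - ybar) ** 2)   # sum of squared deviations, = 29478.97222

def sample_acf(tau):
    """Sample autocorrelation rho-hat(tau), as defined in the lecture notes."""
    num = np.sum((y[tau:] - ybar) * (y[:-tau] - ybar))
    return num / denom

for tau in range(1, 7):
    print(tau, round(sample_acf(tau), 4))
# Expected: 0.1029, 0.0988, -0.0428, -0.0311, -0.1835, 0.0252
```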

c) Use the Ljung-Box Q-statistic to test the null hypothesis that the series is a
serially uncorrelated process up to displacement 6 (m = 6).
Joint hypothesis test:
𝐻0: 𝜌(1) = 𝜌(2) = 𝜌(3) = 𝜌(4) = 𝜌(5) = 𝜌(6) = 0
𝐻1: at least one 𝜌(𝜏) ≠ 0

Test statistic:
𝑄_LB = 𝑇(𝑇 + 2) ∑_{𝜏=1}^{𝑚} 𝜌̂(𝜏)²/(𝑇 − 𝜏)

𝑄_LB(6) = 36(36 + 2)[(0.1029)²/35 + (0.0988)²/34 + (−0.0428)²/33 + (−0.0311)²/32 + (−0.1835)²/31 + (0.0252)²/30] = 2.4388

Critical value:
𝜒²_{𝛼,𝑚} = 𝜒²_{0.05,6} = 12.592

Decision:
Since the test statistic is smaller than the critical value,
𝑄_LB = 2.4388 < 𝜒²_{0.05,6} = 12.592,
do not reject the null hypothesis 𝐻0: 𝜌(1) = 𝜌(2) = ⋯ = 𝜌(6) = 0.

Conclusion:
The series is consistent with a serially uncorrelated (white noise) process.
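
The same test can be sketched in Python, reusing y, T, and sample_acf from the snippet in part (b) (scipy is assumed available for the chi-squared critical value):

```python
from scipy.stats import chi2

m = 6
rho_hat = [sample_acf(tau) for tau in range(1, m + 1)]

# Ljung-Box statistic: Q = T(T+2) * sum_{tau=1..m} rho(tau)^2 / (T - tau)
Q = T * (T + 2) * sum(r ** 2 / (T - tau) for tau, r in enumerate(rho_hat, start=1))

crit = chi2.ppf(0.95, df=m)    # 5% critical value of chi-squared with 6 df
print(Q, crit, Q < crit)       # approx 2.4388, 12.592, True -> do not reject H0
```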

d) Compute sample partial autocorrelation, 𝑝̂ (𝜏) at displacements 1, 2.

𝑝̂(1) = 𝜌̂(1) = 0.1029

𝑝̂(2) = [𝜌̂(2) − 𝜌̂(1)²]/[1 − 𝜌̂(1)²] = (0.0988 − 0.1029²)/(1 − 0.1029²) = 0.0892
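
As a cross-check (not in the original solution), statsmodels can compute the sample PACF directly from the same array y; small-sample conventions differ slightly across methods, so the values should be close to, not exactly, the hand-computed 0.1029 and 0.0892:

```python
from statsmodels.tsa.stattools import pacf

# Sample PACF of the question 3 data (array y from the part (b) snippet).
p = pacf(y, nlags=2)
print(p[1], p[2])   # should be near 0.1029 and 0.0892
```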

4
e) Given the partial autocorrelations 𝑝̂(3) = −0.062, 𝑝̂(4) = −0.030, 𝑝̂(5) = −0.171, and 𝑝̂(6) = 0.065, draw a correlogram for the Autocorrelation Function (ACF) and the Partial Autocorrelation Function (PACF) up to displacement 6 along with their two-standard-error bands.

[Correlogram of the ACF and PACF up to displacement 6; inspect the graph.]

Two-standard-error bands: ±2/√𝑇 = ±2/√36 = ±0.3333

f) Based on your findings in parts (c) and (e), does the series have a strong cyclical component?
Since the series is a serially uncorrelated process, with all sample autocorrelations inside the two-standard-error bands, it does not have a strong cyclical component.

4. Consider another sample of time series data with 36 values.

Period, t Value Period, t Value Period, t Value
1 23.00 13 183.97 25 94.89
2 72.80 14 143.38 26 101.93
3 79.68 15 176.03 27 70.16
4 146.81 16 179.62 28 114.10
5 124.08 17 114.77 29 101.46
6 148.45 18 122.86 30 77.87
7 119.07 19 171.72 31 49.72
8 125.44 20 153.03 32 58.83
9 92.27 21 177.82 33 65.30
10 91.36 22 196.69 34 107.18
11 143.82 23 183.01 35 151.31
12 163.29 24 129.81 36 134.78

a) Produce a time series plot.

[Time series plot of 𝑦𝑡 against t = 1, …, 36.]

b) Compute sample autocorrelation, 𝜌̂(𝜏) at displacements 1, 2, 3, 4, 5 and 6.


∑_{t=1}^{T} (y_t − ȳ)² = 65787.088

∑_{t=2}^{T} (y_t − ȳ)(y_{t−1} − ȳ) = 42185.3528

∑_{t=3}^{T} (y_t − ȳ)(y_{t−2} − ȳ) = 25540.1764

∑_{t=4}^{T} (y_t − ȳ)(y_{t−3} − ȳ) = 9533.19543

∑_{t=5}^{T} (y_t − ȳ)(y_{t−4} − ȳ) = 4303.27449

∑_{t=6}^{T} (y_t − ȳ)(y_{t−5} − ȳ) = −181.37358

∑_{t=7}^{T} (y_t − ȳ)(y_{t−6} − ȳ) = 6863.61258

Sample autocorrelations:

𝜌̂(1) = 42185.3528 / 65787.088 = 0.641240616
𝜌̂(2) = 25540.1764 / 65787.088 = 0.38822476
𝜌̂(3) = 9533.19543 / 65787.088 = 0.144909825
𝜌̂(4) = 4303.27449 / 65787.088 = 0.065412144
𝜌̂(5) = −181.37358 / 65787.088 = −0.002756978
𝜌̂(6) = 6863.61258 / 65787.088 = 0.104330695

c) Use the Ljung-Box Q-statistic to test the null hypothesis that the series is a
serially uncorrelated process up to displacement 6 (m = 6).

Joint hypothesis test:
𝐻0: 𝜌(1) = 𝜌(2) = 𝜌(3) = 𝜌(4) = 𝜌(5) = 𝜌(6) = 0
𝐻1: at least one 𝜌(𝜏) ≠ 0

Test statistic:
𝑄_LB = 𝑇(𝑇 + 2) ∑_{𝜏=1}^{𝑚} 𝜌̂(𝜏)²/(𝑇 − 𝜏)

𝑄_LB(6) = 36(36 + 2)[(0.641241)²/35 + (0.388225)²/34 + (0.144910)²/33 + (0.065412)²/32 + (−0.002757)²/31 + (0.104331)²/30] = 23.686

(This matches the Q-Stat at lag 6 in the EViews correlogram in part (e).)

Critical value:
𝜒²_{𝛼,𝑚} = 𝜒²_{0.05,6} = 12.592

Decision:
Since the test statistic is greater than the critical value,
𝑄_LB = 23.686 > 𝜒²_{0.05,6} = 12.592,
reject the null hypothesis 𝐻0: 𝜌(1) = 𝜌(2) = ⋯ = 𝜌(6) = 0. At least one 𝜌(𝜏) ≠ 0.

Conclusion:
The series is not a serially uncorrelated process.

d) Compute sample partial autocorrelation, 𝑝̂ (𝜏) at displacements 1, 2.


𝑝̂(1) = 𝜌̂(1) = 0.6412

𝑝̂(2) = [𝜌̂(2) − 𝜌̂(1)²]/[1 − 𝜌̂(1)²] = (0.38822476 − 0.641240616²)/(1 − 0.641240616²) = −0.0390

e) Given the partial autocorrelations 𝑝̂(3) = −0.151, 𝑝̂(4) = 0.074, 𝑝̂(5) = −0.044, and 𝑝̂(6) = 0.209, draw a correlogram for the Autocorrelation Function (ACF) and the Partial Autocorrelation Function (PACF) up to displacement 6 along with their two-standard-error bands.

[Time series plot of the question 4 series (Q4).]

Two-standard-error bands: ±2/√𝑇 = ±2/√36 = ±0.3333
Date: 11/27/23 Time: 21:01
Sample: 1 36
Included observations: 36
Autocorrelation Partial Correlation AC PAC Q-Stat Prob

1 0.641 0.641 16.072 0.000
2 0.388 -0.039 22.136 0.000
3 0.145 -0.151 23.006 0.000
4 0.065 0.074 23.189 0.000
5 -0.003 -0.044 23.190 0.000
6 0.104 0.209 23.686 0.001
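
The EViews correlogram above could be reproduced outside EViews; a sketch in Python, with the data copied from the table in part (a):

```python
import numpy as np
from statsmodels.tsa.stattools import acf, pacf
from statsmodels.stats.diagnostic import acorr_ljungbox

# The 36 observations from the table in question 4(a).
y4 = np.array([ 23.00,  72.80,  79.68, 146.81, 124.08, 148.45, 119.07, 125.44,
                92.27,  91.36, 143.82, 163.29, 183.97, 143.38, 176.03, 179.62,
               114.77, 122.86, 171.72, 153.03, 177.82, 196.69, 183.01, 129.81,
                94.89, 101.93,  70.16, 114.10, 101.46,  77.87,  49.72,  58.83,
                65.30, 107.18, 151.31, 134.78])

print(acf(y4, nlags=6))            # sample ACF at displacements 0-6
print(pacf(y4, nlags=6))           # sample PACF at displacements 0-6
print(acorr_ljungbox(y4, lags=6))  # Q-statistics and p-values, lags 1-6
```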

f) Based on your findings in parts (c) and (e), does the series have a strong cyclical component?
The series is not a serially uncorrelated process: the autocorrelations at low displacements are large and decay only gradually, which indicates a strong cyclical component.

Exercise 8: Based on Lecture 8.

5. Explain the theoretical pattern of population ACF and PACF for the following models:

a) MA(1)
ACF: non-zero value at lag 1, then cuts off to zero.
PACF: decays gradually (exponentially, in absolute value).

b) MA(2)
ACF: non-zero values at lags 1 and 2, then cuts off to zero.
PACF: decays gradually.

c) AR(1)
ACF: decays gradually (exponentially).
PACF: non-zero value at lag 1, then cuts off to zero.

d) AR(2)
ACF: decays gradually.
PACF: non-zero values at lags 1 and 2, then cuts off to zero.
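
These theoretical shapes can be illustrated numerically. The sketch below uses arbitrary coefficient values (my choice, purely for illustration) and statsmodels' theoretical ACF/PACF:

```python
from statsmodels.tsa.arima_process import ArmaProcess

# Arbitrary illustrative coefficients; ar/ma are lag-polynomial coefficients.
models = {
    "MA(1)": ArmaProcess(ar=[1], ma=[1, 0.7]),
    "MA(2)": ArmaProcess(ar=[1], ma=[1, 0.7, 0.4]),
    "AR(1)": ArmaProcess(ar=[1, -0.7], ma=[1]),
    "AR(2)": ArmaProcess(ar=[1, -0.7, 0.2], ma=[1]),
}

for name, proc in models.items():
    # ACF cuts off after q for an MA(q); PACF cuts off after p for an AR(p).
    print(name, "ACF:", proc.acf(6).round(3), "PACF:", proc.pacf(6).round(3))
```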

6. Given below are sample ACF and PACF for four different series. Identify an ARMA
model that might be useful in describing each series.

Series 1:

Step 1: Inspect the correlogram of the series – decide on a model.

The sample ACF decays gradually, and the sample PACF has a non-zero value at lag 1 and then cuts off to zero. Series 1 is therefore consistent with an AR(1).

Step 2: Estimate the model and decide whether to include a constant.

Estimate the model and inspect the graph; if the series fluctuates around 0, there is no need to include a constant term in the estimation because the mean is equal to zero.
[Time series plot of S1, t = 1, …, 150.]
Dependent Variable: S1
Method: ARMA Maximum Likelihood (OPG - BHHH)
Date: 12/19/23 Time: 17:22
Sample: 1 150
Included observations: 150
Convergence achieved after 14 iterations
Coefficient covariance computed using outer product of gradients

Variable Coefficient Std. Error t-Statistic Prob.

C 0.065273 0.182033 0.358578 0.7204
AR(1) 0.550350 0.079480 6.924349 0.0000
SIGMASQ 0.990718 0.134004 7.393184 0.0000

R-squared 0.306531 Mean dependent var 0.062198
Adjusted R-squared 0.297097 S.D. dependent var 1.199262
S.E. of regression 1.005453 Akaike info criterion 2.870957
Sum squared resid 148.6076 Schwarz criterion 2.931169
Log likelihood -212.3218 Hannan-Quinn criter. 2.895419
F-statistic 32.48895 Durbin-Watson stat 2.070347
Prob(F-statistic) 0.000000

Inverted AR Roots .55

The constant's p-value = 0.7204 > 0.05, so there is no need to include a constant.

Step 3: Check the stability.

View – ARMA structure – graph.

[S1: Inverse roots of AR/MA polynomial(s); the single inverted AR root (0.55) lies inside the unit circle.]

The inverted root lies inside the unit circle, indicating that the estimated model is covariance stationary.

Step 4: Residual diagnostic.
View – residual diagnostic – correlogram Q

Date: 12/12/23 Time: 19:47
Sample (adjusted): 2 150
Q-statistic probabilities adjusted for 1 ARMA term

Autocorrelation Partial Correlation AC PAC Q-Stat Prob

1 -0.037 -0.037 0.2126
2 0.088 0.087 1.3941 0.238
3 -0.031 -0.025 1.5384 0.463
4 0.067 0.058 2.2424 0.524
5 -0.059 -0.051 2.7913 0.593
6 -0.099 -0.115 4.3304 0.503
7 0.022 0.029 4.4100 0.621
8 0.015 0.029 4.4467 0.727
9 0.026 0.026 4.5595 0.803
10 -0.113 -0.106 6.6270 0.676
11 0.144 0.123 9.9972 0.441
12 -0.024 -0.009 10.094 0.522
13 -0.079 -0.110 11.136 0.517
14 0.049 0.081 11.539 0.566
15 -0.025 -0.034 11.646 0.635
16 -0.013 -0.042 11.676 0.703
17 -0.105 -0.060 13.555 0.632
18 0.125 0.115 16.253 0.506
19 -0.007 -0.003 16.262 0.574
20 0.163 0.145 20.902 0.342
21 -0.057 -0.010 21.467 0.370
22 0.162 0.093 26.142 0.201
23 0.035 0.036 26.364 0.236
24 0.052 0.085 26.845 0.263
25 -0.007 -0.011 26.854 0.311
26 0.039 0.039 27.133 0.349
27 -0.119 -0.134 29.762 0.278
28 -0.007 0.037 29.773 0.324
29 -0.030 -0.045 29.942 0.366
30 -0.018 -0.001 30.006 0.414
31 0.032 0.026 30.205 0.455
32 -0.002 0.015 30.205 0.507
33 0.010 -0.023 30.223 0.557
34 0.007 -0.004 30.232 0.606
35 -0.015 0.021 30.274 0.651
36 0.058 0.068 30.954 0.664

H0: The residual is a white noise process.
H1: The residual is not a white noise process.

All the p-values are greater than 𝛼 = 0.05, so do not reject H0; the model is adequate.
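
The four-step workflow for Series 1 can be sketched end-to-end in Python. The raw S1 data are not reproduced in this document, so the snippet simulates a stand-in AR(1) series (using the coefficient EViews estimated) purely for illustration:

```python
import numpy as np
from statsmodels.tsa.arima_process import ArmaProcess
from statsmodels.tsa.arima.model import ARIMA
from statsmodels.stats.diagnostic import acorr_ljungbox

# Stand-in for the unpublished S1 series: simulate an AR(1) with phi = 0.55.
rng = np.random.default_rng(0)
s1 = ArmaProcess(ar=[1, -0.55], ma=[1]).generate_sample(
    nsample=150, distrvs=rng.standard_normal)

# Step 2: estimate AR(1) with no constant (trend="n"), as decided above.
fit = ARIMA(s1, order=(1, 0, 0), trend="n").fit()
print(fit.summary())

# Step 3: stability. statsmodels reports the AR polynomial roots, which must
# lie OUTSIDE the unit circle (equivalently, EViews' inverted roots inside).
print(fit.arroots, np.abs(fit.arroots) > 1)

# Step 4: residual diagnostics. One degree of freedom is absorbed by the
# estimated AR term, hence model_df=1.
print(acorr_ljungbox(fit.resid, lags=12, model_df=1))
```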

13
Series 2:

Step 1: Inspect the correlogram – choose the model.

An MA(1) looks suitable because the sample ACF has a non-zero value at lag 1 and then cuts off to zero, while the sample PACF decays gradually. However, an AR(2) also looks suitable, because the ACF could be read as decaying while the PACF has non-zero values at lags 1 and 2 and then cuts off to zero.

If unsure, estimate both models and compare the AIC and SIC; choose the model with the smallest AIC and SIC.

Example: MA(1) versus AR(2).

Step 2: Estimate the model – compare AIC and SIC.


Dependent Variable: S2
Method: ARMA Maximum Likelihood (OPG - BHHH)
Date: 12/19/23 Time: 17:47
Sample: 1 150
Included observations: 150
Convergence achieved after 14 iterations
Coefficient covariance computed using outer product of gradients

Variable Coefficient Std. Error t-Statistic Prob.

MA(1) 0.508759 0.079011 6.439113 0.0000
SIGMASQ 0.981534 0.131630 7.456767 0.0000

R-squared 0.213392 Mean dependent var 0.039574
Adjusted R-squared 0.208077 S.D. dependent var 1.120795
S.E. of regression 0.997396 Akaike info criterion 2.847902
Sum squared resid 147.2301 Schwarz criterion 2.888044
Log likelihood -211.5927 Hannan-Quinn criter. 2.864210
Durbin-Watson stat 1.972051

Inverted MA Roots -.51

Dependent Variable: S2
Method: ARMA Maximum Likelihood (OPG - BHHH)
Date: 12/19/23 Time: 17:48
Sample: 1 150
Included observations: 150
Convergence achieved after 12 iterations
Coefficient covariance computed using outer product of gradients

Variable Coefficient Std. Error t-Statistic Prob.

AR(1) 0.496493 0.090715 5.473137 0.0000
AR(2) -0.200015 0.075758 -2.640165 0.0092
SIGMASQ 0.992169 0.134461 7.378845 0.0000

R-squared 0.204869 Mean dependent var 0.039574
Adjusted R-squared 0.194051 S.D. dependent var 1.120795
S.E. of regression 1.006189 Akaike info criterion 2.871811
Sum squared resid 148.8253 Schwarz criterion 2.932024
Log likelihood -212.3858 Hannan-Quinn criter. 2.896274
Durbin-Watson stat 1.949603

Inverted AR Roots .25-.37i .25+.37i

The AIC and SIC produced by MA(1) (2.8479 and 2.8880) are smaller than the AIC and SIC produced by AR(2) (2.8718 and 2.9320). Therefore, use MA(1).
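
A sketch of the same AIC/SIC comparison in Python; as before, the raw Series 2 data are not reproduced here, so a simulated MA(1) stand-in (using the estimated θ ≈ 0.51) is used for illustration:

```python
import numpy as np
from statsmodels.tsa.arima_process import ArmaProcess
from statsmodels.tsa.arima.model import ARIMA

# Stand-in for the unpublished S2 series.
rng = np.random.default_rng(1)
s2 = ArmaProcess(ar=[1], ma=[1, 0.51]).generate_sample(
    nsample=150, distrvs=rng.standard_normal)

for name, order in {"MA(1)": (0, 0, 1), "AR(2)": (2, 0, 0)}.items():
    fit = ARIMA(s2, order=order, trend="n").fit()
    print(name, "AIC:", round(fit.aic, 3), "BIC:", round(fit.bic, 3))
# Choose the specification with the smaller AIC and BIC (SIC).
```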

Step 3: Check the stability.

[S2: Inverse roots of AR/MA polynomial(s); the single inverted MA root (−0.51) lies inside the unit circle.]

The inverted MA root lies inside the unit circle, indicating that the MA(1) model is invertible.
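
The invertibility check itself is a one-liner, given the estimated θ = 0.508759 (a sketch under statsmodels' lag-polynomial sign convention):

```python
from statsmodels.tsa.arima_process import ArmaProcess

# MA polynomial (1 + 0.508759L): invertible iff its root lies outside the
# unit circle, i.e. the inverted root (-0.51) lies inside it.
print(ArmaProcess(ar=[1], ma=[1, 0.508759]).isinvertible)   # True
```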

Step 4: Residual diagnostic.
Date: 12/19/23 Time: 17:53
Sample: 1 150
Q-statistic probabilities adjusted for 1 ARMA term

Autocorrelation Partial Correlation AC PAC Q-Stat Prob

1 0.012 0.012 0.0238
2 0.004 0.004 0.0266 0.870
3 -0.010 -0.010 0.0411 0.980
4 0.042 0.042 0.3139 0.957
5 -0.058 -0.059 0.8386 0.933
6 -0.112 -0.111 2.8183 0.728
7 0.014 0.019 2.8511 0.827
8 0.033 0.032 3.0265 0.883
9 0.011 0.013 3.0470 0.931
10 -0.111 -0.108 5.0603 0.829
11 0.149 0.142 8.7072 0.560
12 -0.021 -0.038 8.7774 0.642
13 -0.091 -0.093 10.149 0.603
14 0.045 0.074 10.483 0.654
15 -0.016 -0.040 10.524 0.723
16 -0.035 -0.050 10.733 0.771
17 -0.094 -0.054 12.235 0.728
18 0.101 0.098 13.993 0.668
19 0.014 -0.011 14.028 0.727
20 0.136 0.138 17.268 0.572
21 -0.043 -0.014 17.596 0.614
22 0.146 0.101 21.380 0.436
23 0.055 0.037 21.917 0.465
24 0.035 0.091 22.134 0.512
25 0.008 -0.000 22.145 0.571
26 0.024 0.035 22.250 0.621
27 -0.117 -0.127 24.782 0.531
28 -0.009 0.053 24.798 0.586
29 -0.027 -0.060 24.933 0.631
30 -0.022 0.008 25.021 0.677
31 0.033 0.015 25.228 0.714
32 -0.008 0.009 25.239 0.757
33 0.005 -0.029 25.244 0.796
34 0.002 -0.004 25.245 0.831
35 -0.010 0.025 25.264 0.861
36 0.055 0.064 25.876 0.869

H0: The residual is a white noise process.
H1: The residual is not a white noise process.

All the p-values are greater than 𝛼 = 0.05, so do not reject H0. Therefore, the residuals are a white noise process and the MA(1) model is adequate for this series.

Series 3:

Step 1: Inspect the correlogram – choose the model.

The sample ACF decays gradually, and the sample PACF has non-zero values at lags 1 and 2, then cuts off: use an AR(2). Alternatively, the sample ACF could be read as having non-zero values at lags 1 through 4 with the sample PACF decaying: use an MA(4).

To decide between AR(2) and MA(4), estimate both models.

Step 2: Estimate the model – compare AIC and SIC.
Dependent Variable: S3
Method: ARMA Maximum Likelihood (OPG - BHHH)
Date: 12/19/23 Time: 19:01
Sample: 1 150
Included observations: 150
Convergence achieved after 13 iterations
Coefficient covariance computed using outer product of gradients

Variable Coefficient Std. Error t-Statistic Prob.

AR(1) -0.276171 0.090416 -3.054460 0.0027
AR(2) 0.331088 0.083409 3.969471 0.0001
SIGMASQ 0.980954 0.131952 7.434160 0.0000

R-squared 0.263994 Mean dependent var 0.028988
Adjusted R-squared 0.253980 S.D. dependent var 1.158341
S.E. of regression 1.000487 Akaike info criterion 2.861442
Sum squared resid 147.1432 Schwarz criterion 2.921654
Log likelihood -211.6081 Hannan-Quinn criter. 2.885904
Durbin-Watson stat 2.004181

Inverted AR Roots .45 -.73

Dependent Variable: S3
Method: ARMA Maximum Likelihood (OPG - BHHH)
Date: 12/19/23 Time: 19:00
Sample: 1 150
Included observations: 150
Convergence achieved after 16 iterations
Coefficient covariance computed using outer product of gradients

Variable Coefficient Std. Error t-Statistic Prob.

MA(1) -0.244324 0.089935 -2.716671 0.0074
MA(2) 0.415394 0.086170 4.820654 0.0000
MA(3) -0.169839 0.089757 -1.892214 0.0605
MA(4) 0.209013 0.087260 2.395294 0.0179
SIGMASQ 0.984783 0.133753 7.362691 0.0000

R-squared 0.261122 Mean dependent var 0.028988
Adjusted R-squared 0.240739 S.D. dependent var 1.158341
S.E. of regression 1.009327 Akaike info criterion 2.892288
Sum squared resid 147.7174 Schwarz criterion 2.992643
Log likelihood -211.9216 Hannan-Quinn criter. 2.933059
Durbin-Watson stat 2.054697

Inverted MA Roots .42+.49i .42-.49i -.30-.64i -.30+.64i

The AIC and SIC produced by AR(2) (2.8614 and 2.9217) are smaller than the AIC and SIC produced by MA(4) (2.8923 and 2.9926). Therefore, choose AR(2).

Step 3: Check the stability.
[S3: Inverse roots of AR/MA polynomial(s); both inverted AR roots (0.45 and −0.73) lie inside the unit circle.]

All the inverted roots lie inside the unit circle, indicating that the estimated model is covariance stationary.

Step 4: Diagnostic checking.


Date: 12/12/23 Time: 20:41
Sample (adjusted): 3 150
Q-statistic probabilities adjusted for 2 ARMA terms

Autocorrelation Partial Correlation AC PAC Q-Stat Prob

1 -0.006 -0.006 0.0060
2 0.008 0.008 0.0148
3 -0.010 -0.010 0.0299 0.863
4 0.039 0.039 0.2618 0.877
5 -0.045 -0.044 0.5767 0.902
6 -0.131 -0.132 3.2532 0.516
7 0.030 0.030 3.3902 0.640
8 0.025 0.026 3.4907 0.745
9 0.011 0.012 3.5101 0.834
10 -0.110 -0.105 5.4734 0.706
11 0.151 0.140 9.1547 0.423
12 -0.030 -0.045 9.2994 0.504
13 -0.091 -0.092 10.651 0.473
14 0.042 0.066 10.944 0.534
15 -0.014 -0.031 10.978 0.613
16 -0.027 -0.051 11.103 0.678
17 -0.108 -0.064 13.089 0.595
18 0.101 0.092 14.844 0.536
19 0.007 -0.019 14.852 0.606
20 0.139 0.146 18.190 0.443
21 -0.043 -0.012 18.515 0.488
22 0.143 0.098 22.108 0.335
23 0.057 0.039 22.688 0.361
24 0.033 0.095 22.885 0.408
25 0.004 -0.012 22.888 0.467
26 0.024 0.049 22.997 0.520
27 -0.100 -0.120 24.833 0.472
28 -0.021 0.044 24.913 0.524
29 -0.024 -0.059 25.017 0.574
30 -0.031 0.001 25.193 0.617
31 0.029 0.004 25.354 0.660
32 -0.014 0.013 25.393 0.706
33 0.009 -0.027 25.408 0.749
34 0.008 0.002 25.419 0.789
35 -0.016 0.013 25.470 0.822
36 0.053 0.065 26.031 0.834

H0: The residual is a white noise process.
H1: The residual is not a white noise process.

All p-values are greater than 𝛼 = 0.05, so do not reject H0. Therefore, the residuals are a white noise process and the AR(2) model is adequate for this series.

Series 4:

Step 1: Inspect the correlogram – make a decision.

The sample ACF decays gradually and the sample PACF has spikes at lags 1 and 2, so an AR(2) could be used. However, it also looks as if the sample ACF spikes at lags 1 and 2 while the sample PACF decays, so an MA(2) could be used. Since it is unclear whether to choose AR(2) or MA(2), we estimate both models and choose the one that produces the lower AIC and SIC.

Step 2: Estimate the model – compare AIC and SIC.

Dependent Variable: S3
Method: ARMA Maximum Likelihood (OPG - BHHH)
Date: 12/19/23 Time: 19:08
Sample: 1 150
Included observations: 150
Convergence achieved after 13 iterations
Coefficient covariance computed using outer product of gradients

Variable Coefficient Std. Error t-Statistic Prob.

AR(1) -0.276171 0.090416 -3.054460 0.0027
AR(2) 0.331088 0.083409 3.969471 0.0001
SIGMASQ 0.980954 0.131952 7.434160 0.0000

R-squared 0.263994 Mean dependent var 0.028988
Adjusted R-squared 0.253980 S.D. dependent var 1.158341
S.E. of regression 1.000487 Akaike info criterion 2.861442
Sum squared resid 147.1432 Schwarz criterion 2.921654
Log likelihood -211.6081 Hannan-Quinn criter. 2.885904
Durbin-Watson stat 2.004181

Inverted AR Roots .45 -.73

Dependent Variable: S3
Method: ARMA Maximum Likelihood (OPG - BHHH)
Date: 12/19/23 Time: 19:09
Sample: 1 150
Included observations: 150
Convergence achieved after 11 iterations
Coefficient covariance computed using outer product of gradients

Variable Coefficient Std. Error t-Statistic Prob.

MA(1) -0.286761 0.082728 -3.466293 0.0007
MA(2) 0.349467 0.073349 4.764436 0.0000
SIGMASQ 1.034716 0.135676 7.626357 0.0000

R-squared 0.223657 Mean dependent var 0.028988
Adjusted R-squared 0.213095 S.D. dependent var 1.158341
S.E. of regression 1.027537 Akaike info criterion 2.914049
Sum squared resid 155.2074 Schwarz criterion 2.974262
Log likelihood -215.5537 Hannan-Quinn criter. 2.938511
Durbin-Watson stat 2.070190

Inverted MA Roots .14+.57i .14-.57i

The AIC and SIC produced by AR(2) (2.8614 and 2.9217) are smaller than the AIC and SIC produced by MA(2) (2.9140 and 2.9743). Therefore, we should use AR(2). However, we still need to check the residuals.

Step 3: Check the stability.

[S4: Inverse roots of AR/MA polynomial(s); both inverted AR roots lie inside the unit circle.]

All the inverted AR roots lie inside the unit circle, indicating that the estimated model is covariance stationary.

Step 4: Residual diagnostic.

Date: 12/19/23 Time: 19:15
Sample: 1 150
Q-statistic probabilities adjusted for 2 ARMA terms

Autocorrelation Partial Correlation AC PAC Q-Stat Prob

1 -0.025 -0.025 0.0923
2 0.036 0.036 0.2964
3 0.074 0.076 1.1441 0.285
4 -0.028 -0.025 1.2628 0.532
5 -0.051 -0.059 1.6763 0.642
6 -0.103 -0.111 3.3529 0.501
7 -0.000 0.002 3.3530 0.646
8 0.033 0.051 3.5278 0.740
9 0.017 0.035 3.5746 0.827
10 -0.126 -0.140 6.1437 0.631
11 0.165 0.141 10.588 0.305
12 -0.030 -0.026 10.736 0.378
13 -0.098 -0.089 12.326 0.340
14 0.053 0.034 12.795 0.384
15 -0.024 -0.011 12.890 0.456
16 -0.041 -0.050 13.176 0.513
17 -0.070 -0.055 14.023 0.524
18 0.088 0.097 15.373 0.497
19 0.005 -0.004 15.378 0.568
20 0.142 0.138 18.922 0.397
21 -0.037 -0.015 19.167 0.446
22 0.143 0.098 22.787 0.299
23 0.072 0.040 23.716 0.307
24 0.012 0.086 23.743 0.361
25 0.021 -0.008 23.826 0.413
26 0.027 0.039 23.955 0.464
27 -0.124 -0.137 26.784 0.367
28 0.001 0.059 26.784 0.421
29 -0.027 -0.055 26.923 0.468
30 -0.039 0.016 27.205 0.507
31 0.044 -0.001 27.579 0.541
32 -0.020 0.017 27.654 0.589
33 0.012 -0.025 27.685 0.637
34 0.006 -0.006 27.692 0.684
35 -0.022 0.013 27.788 0.724
36 0.061 0.081 28.534 0.732

H0: The residual is a white noise process.
H1: The residual is not a white noise process.

All p-values are greater than 𝛼 = 0.05, so do not reject H0, indicating that the AR(2) model chosen in Step 2 is adequate for series 4.

7. Write the following models (with zero mean) in terms of the backshift operator, and
then without the backshift operator.

a) MA (1):
Backshift operator: 𝑦𝑡 = (1 + 𝜃1 𝐿)𝜀𝑡
Without backshift operator: 𝑦𝑡 = 𝜀𝑡 + 𝜃1 𝜀𝑡−1

b) MA (2):
Backshift operator: 𝑦𝑡 = (1 + 𝜃1 𝐿 + 𝜃2 𝐿2 )𝜀𝑡
Without backshift operator: 𝑦𝑡 = 𝜀𝑡 + 𝜃1 𝜀𝑡−1 + 𝜃2 𝜀𝑡−2

c) AR (1):
Backshift operator: (1 − 𝜙1 𝐿)𝑦𝑡 = 𝜀𝑡
Without backshift operator: 𝑦𝑡 = 𝜙1 𝑦𝑡−1 + 𝜀𝑡

d) AR (2):
Backshift operator: (1 − 𝜙1𝐿 − 𝜙2𝐿²)𝑦𝑡 = 𝜀𝑡
Without backshift operator: 𝑦𝑡 = 𝜙1 𝑦𝑡−1 + 𝜙2 𝑦𝑡−2 + 𝜀𝑡

e) ARMA (1,1)
Backshift operator: (1 − 𝜙1 𝐿)𝑦𝑡 = (1 + 𝜃1 𝐿)𝜀𝑡
Without backshift operator: 𝑦𝑡 = 𝜙1 𝑦𝑡−1 + 𝜀𝑡 + 𝜃1 𝜀𝑡−1
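
One practical point worth a sketch: software conventions must match the backshift forms above. In statsmodels, for example, the AR and MA lag polynomials are entered exactly as written on each side of the ARMA(1,1) equation; the coefficient values below (𝜙1 = 0.5, 𝜃1 = 0.3) are arbitrary illustrations:

```python
from statsmodels.tsa.arima_process import ArmaProcess

phi1, theta1 = 0.5, 0.3    # arbitrary illustrative coefficients

# ARMA(1,1): (1 - phi1*L) y_t = (1 + theta1*L) eps_t
# -> AR polynomial [1, -phi1], MA polynomial [1, +theta1]
arma11 = ArmaProcess(ar=[1, -phi1], ma=[1, theta1])
print(arma11.isstationary, arma11.isinvertible)   # True True

# Equivalently, without the backshift operator:
# y_t = phi1*y_{t-1} + eps_t + theta1*eps_{t-1}
sample = arma11.generate_sample(nsample=200)
```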

8. Derive theoretical covariance and autocorrelation functions at displacements 1 and 2
for MA (2) with zero-mean process. What would you expect to be the value of
autocorrelations beyond displacement 2?

For 𝑦𝑡 = 𝜀𝑡 + 𝜃1𝜀𝑡−1 + 𝜃2𝜀𝑡−2 with white-noise shocks of variance 𝜎²:

𝛾(0) = var(𝑦𝑡) = 𝜎²(1 + 𝜃1² + 𝜃2²)

𝛾(1) = cov(𝑦𝑡, 𝑦𝑡−1) = 𝐸[(𝜀𝑡 + 𝜃1𝜀𝑡−1 + 𝜃2𝜀𝑡−2)(𝜀𝑡−1 + 𝜃1𝜀𝑡−2 + 𝜃2𝜀𝑡−3)] = 𝜎²(𝜃1 + 𝜃1𝜃2)

𝛾(2) = cov(𝑦𝑡, 𝑦𝑡−2) = 𝐸[(𝜀𝑡 + 𝜃1𝜀𝑡−1 + 𝜃2𝜀𝑡−2)(𝜀𝑡−2 + 𝜃1𝜀𝑡−3 + 𝜃2𝜀𝑡−4)] = 𝜎²𝜃2

𝛾(𝜏) = cov(𝑦𝑡, 𝑦𝑡−𝜏) = 0 for 𝜏 > 2, since 𝑦𝑡 and 𝑦𝑡−𝜏 then share no common shocks.

𝜌(1) = (𝜃1 + 𝜃1𝜃2)/(1 + 𝜃1² + 𝜃2²),  𝜌(2) = 𝜃2/(1 + 𝜃1² + 𝜃2²)

𝜌(𝜏) = 0, 𝜏 > 2
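
A numerical sketch verifying these formulas against statsmodels' theoretical ACF, with illustrative values 𝜃1 = 0.6 and 𝜃2 = 0.3:

```python
from statsmodels.tsa.arima_process import ArmaProcess

t1, t2 = 0.6, 0.3                    # illustrative MA(2) coefficients
denom = 1 + t1 ** 2 + t2 ** 2

rho1 = (t1 + t1 * t2) / denom        # formula derived above, approx 0.5379
rho2 = t2 / denom                    # approx 0.2069

acf = ArmaProcess(ar=[1], ma=[1, t1, t2]).acf(5)
print(rho1, rho2)
print(acf[1], acf[2])                # matches rho1, rho2
print(acf[3], acf[4])                # both 0: the ACF cuts off beyond lag 2
```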
