
ECON815 Research Methods and Modelling 2018/19

Extra Exercises with Answers


• This material is a complement to, not a substitute for, the Lecture Notes and other module materials.

• Attempt all the questions before and after the classes; you should always prepare well and revise
well

• The class does not go through solutions in detail, but rather highlights some key points and
steps, so it is vital that you try the exercises before the classes and revise afterwards

• Some sketched answers will be provided after the classes as rough guidance for you to check
against your solutions

• Ensure you check the VITAL site and your email box regularly.


1. Consider the following models and comment on their stationarity and invertibility:

Yt = 0.5 + 0.3εt−1 + εt (1)


Yt = 0.5 + 1.2εt−1 + εt (2)
Yt = 0.1 + 0.99Yt−1 + εt (3)
Yt = 0.1 + Yt−1 + εt (4)

Equation (1) is an MA(1) and hence stationary. It has characteristic equation 1 + 0.3L = 0.
The root of this is −3.33 and, since | − 3.33| > 1, it follows that the model is invertible (root of
characteristic equation lies outside the unit circle).
Equation (2) is an MA(1) and hence stationary. It has characteristic equation 1 + 1.2L = 0. The
root of this is −5/6 and, since | − 5/6| < 1, it follows that the model is not invertible (root of
characteristic equation lies inside the unit circle).
Equation (3) is an AR(1), the stationarity of which depends on the root of the characteristic
equation θ(L) = 0. In this case we must solve 1 − 0.99L = 0. The root is L = 1.01. Although this
is close to the unit circle, it is outside it, so the model is stationary.
Equation (4) is an AR(1) but the root of its characteristic equation lies on the unit circle, so the
model is nonstationary; the model is, of course, a Random Walk with Drift (RWD).
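These root calculations can be checked numerically. Below is a minimal sketch (the helper name is my own) using `numpy.roots` on the lag polynomial; all roots outside the unit circle means invertible for an MA polynomial and stationary for an AR polynomial:

```python
import numpy as np

def roots_outside_unit_circle(coeffs):
    """True if all roots of 1 + c1*L + c2*L^2 + ... lie outside the
    unit circle; `coeffs` is (c1, c2, ...), the leading 1 is implicit."""
    # numpy.roots expects coefficients from the highest power down
    poly = list(reversed(coeffs)) + [1.0]
    return all(abs(r) > 1.0 for r in np.roots(poly))

# (1): MA(1) lag polynomial 1 + 0.3L, root -1/0.3 (about -3.33) -> invertible
print(roots_outside_unit_circle([0.3]))    # True
# (2): 1 + 1.2L, root -5/6, inside the circle -> not invertible
print(roots_outside_unit_circle([1.2]))    # False
# (3): AR polynomial 1 - 0.99L, root 1/0.99 (about 1.01) -> stationary
print(roots_outside_unit_circle([-0.99]))  # True
```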

2. Comment on the invertibility and stationarity of the following models

yt = −0.8εt−1 − 0.2εt−2 + εt (5)


yt = 1.2yt−1 − 0.32yt−2 + εt (6)

• Model (5) is an MA(2) and can be written as

yt = (1 − L)(1 + 0.2L)εt

Thus the characteristic equation is

(1 − z)(1 + 0.2z) = 0

with characteristic roots 1 and -5, so the process is not invertible because one root is on the
unit circle (i.e. not all the characteristic roots are outside of the unit circle). The process is
still stationary.
• Model (6) is an AR(2) and can be written as

(1 − 0.8L)(1 − 0.4L)yt = εt

Thus the characteristic equation is

(1 − 0.8z)(1 − 0.4z) = 0

with characteristic roots 1.25 and 2.5, both of which are outside the unit circle. Therefore
the lag polynomial in this model is invertible, and the process is
stationary.


3. Consider the demeaned MA(1) model

yt = (1 + αL)εt = α(L)εt

where L is the lag operator. State the invertibility condition for this MA(1) model and derive the
inverse of polynomial α(L).
This MA(1) model is invertible if |α| < 1, i.e. there exists an inverse polynomial α−1 (L) such that
α−1 (L)α(L) = 1. Let the coefficients of the inverse polynomial α−1 (L) be labelled {α∗i }. Then it
requires

(1 + α∗1 L + α∗2 L² + α∗3 L³ + · · · )(1 + αL) = 1
(1 + α∗1 L + α∗2 L² + α∗3 L³ + · · · ) + αL (1 + α∗1 L + α∗2 L² + α∗3 L³ + · · · ) = 1

which is

1 + α∗1 L + α∗2 L² + α∗3 L³ + · · · = 1 − αL (1 + α∗1 L + α∗2 L² + α∗3 L³ + · · · )
1 + α∗1 L + α∗2 L² + α∗3 L³ + · · · = 1 − αL − αα∗1 L² − αα∗2 L³ − αα∗3 L⁴ − · · ·

The coefficients of L^i , i = 1, 2, 3, . . . on both sides of the equation must be matched, which means

α∗1 = −α
α∗2 = −αα∗1 = (−α)²
α∗3 = −αα∗2 = (−α)³
· · ·

When |α| < 1, the inverse polynomial α−1 (L) (a geometric series) is convergent:

1 + α∗1 L + α∗2 L² + α∗3 L³ + · · · = 1 + (−αL) + (−αL)² + (−αL)³ + · · ·
= 1/(1 − (−αL)) = 1/(1 + αL) = 1/α(L) ≡ α−1 (L)
As an extra exercise, derive the inverse polynomial for an MA(1) model

yt = εt + 0.5εt−1
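The extra exercise can be checked numerically: for α = 0.5 the inverse coefficients should be (−0.5)^i, and multiplying the truncated inverse polynomial by 1 + 0.5L should collapse to 1 apart from a single truncation term. A sketch, with variable names of my own choosing:

```python
import numpy as np

alpha = 0.5   # MA(1) coefficient in y_t = e_t + 0.5 e_{t-1}
n = 10        # truncation order for the inverse polynomial

# Inverse-polynomial coefficients: alpha*_i = (-alpha)^i, i = 0, 1, 2, ...
inv_coeffs = [(-alpha) ** i for i in range(n)]

# Multiply the truncated inverse by (1 + alpha L); apart from one
# order-L^n truncation term, the product should collapse to 1.
product = np.polymul(inv_coeffs[::-1], [alpha, 1.0])[::-1]
print(product[0])                         # 1.0
print(max(abs(c) for c in product[1:n]))  # 0.0  (middle terms cancel exactly)
```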

4. Show that the two MA(1) models yt = 0.5εt−1 +εt and yt = 2εt−1 +εt have the same autocorrelation
function. Comment on the result.
For an MA(1)
Yt = µ + εt + αεt−1
the ACF has

ρ(1) = γ(1)/γ(0) = α/(1 + α²)


but
γ(k) = E[(Yt − µ)(Yt−k − µ)] = E[(εt + αεt−1 )(εt−k + αεt−k−1 )] = 0
for all k > 1, so ρ(k) = 0, k = 2, 3, . . .
Hence, if α = 0.5,

ρ(1) = 0.5/(1 + 0.5²) = 0.5/1.25 = 0.4

whereas if α = 2,

ρ(1) = 2/(1 + 2²) = 2/5 = 0.4
Both specifications evidently have the same ACF. The comment is that all MA(1) models are
stationary, but when there is no unit root in the MA, there are two parameterizations that lead to
the same ACF: one of these is invertible (when α = 0.5) and one of them is not (when α = 2 in
this example).
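The observational equivalence can be confirmed with a one-line function (a sketch; the function name is my own):

```python
def ma1_rho1(alpha):
    """First autocorrelation of an MA(1): rho(1) = alpha / (1 + alpha^2)."""
    return alpha / (1.0 + alpha ** 2)

print(ma1_rho1(0.5))  # 0.4
print(ma1_rho1(2.0))  # 0.4
# alpha and 1/alpha always give the same rho(1): the invertible and
# non-invertible parameterizations are observationally equivalent.
```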
5. Consider the following model

yt − 1.1yt−1 + 0.3yt−2 = (1 − 0.6L)(1 − 0.5L)yt = εt

(a) Show that this model is stationary.


Extending Examples 3 and 4 above from AR(1) models to an AR(2), as here, we can assess
the stationarity of the model by considering the roots of the characteristic polynomial. The
relevant characteristic equation is given by

(1 − 0.6L)(1 − 0.5L) = 0

the roots of which can be seen to be 1.667 and 2. Since both of these are larger than unity
in absolute value (and hence lie outside the unit circle), the model is stationary.
(b) Obtain the first 4 coefficients of the equivalent MA(∞) process; and derive Ch , h =
1, 2, 3, 4. Comment on your results.
The equivalent MA(∞) process is found by inverting the two first order polynomials to
obtain

yt = (1 − 0.6L)−1 (1 − 0.5L)−1 εt
= (1 + 0.6L + 0.36L2 + 0.216L3 + 0.1296L4 + · · · )
×(1 + 0.5L + 0.25L2 + 0.125L3 + 0.0625L4 + · · · )εt

From this by collecting terms we see that the first 4 coefficients of the equivalent MA(∞)
are:

α1 = 0.6 + 0.5 = 1.1


α2 = 0.25 + 0.36 + 0.6 × 0.5 = 0.91
α3 = 0.216 + 0.125 + 0.6 × 0.25 + 0.36 × 0.5 = 0.671
α4 = 0.1296 + 0.0625 + 0.6 × 0.125 + 0.36 × 0.25 + 0.216 × 0.5 = 0.4651.

Using the coefficients to obtain Ch , h = 1, 2, 3, 4, we note that for a general MA(q) process

Yt = µ + α1 εt−1 + α2 εt−2 + · · · + αq εt−q + εt , εt ∼ IID(0, σ²)


it follows

Ch = (1 + α1² + α2² + · · · + αh−1²) σ²

for h = 2, 3, . . . , q (C1 = σ² as before) and

Ch = (1 + α1² + α2² + · · · + αq²) σ²

for h > q. Again, the upper bound to the variance of the prediction error is the unconditional
variance of the series. From this, setting σ² = 1 for simplicity (all values are simply
multiplied by σ² if some other value applies) we have:

C1 = 1
C2 = 1 + 1.21 = 2.21
C3 = 1 + 1.21 + 0.8281 = 3.0381
C4 = 3.0381 + 0.671 × 0.671 = 3.4883

Comment. Forecast error variances are increasing, but will be bounded by the unconditional
variance of the process as forecast horizon increases.
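The MA(∞) coefficients and the Ch values can be reproduced with the AR(2) recursion ψj = 1.1ψj−1 − 0.3ψj−2, which is equivalent to collecting terms in the expansion above. A sketch with names of my own:

```python
phi1, phi2 = 1.1, -0.3   # AR(2): y_t = 1.1 y_{t-1} - 0.3 y_{t-2} + e_t
psi = [1.0, phi1]        # MA(inf) weights: psi_0 = 1, psi_1 = phi1
for j in range(2, 5):
    psi.append(phi1 * psi[j - 1] + phi2 * psi[j - 2])
print([round(p, 4) for p in psi[1:]])   # [1.1, 0.91, 0.671, 0.4651]

# h-step forecast error variances C_h = (1 + psi_1^2 + ... + psi_{h-1}^2) s2
sigma2 = 1.0
C = [sigma2 * sum(p ** 2 for p in psi[:h]) for h in range(1, 5)]
print([round(c, 4) for c in C])         # [1.0, 2.21, 3.0381, 3.4883]
```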

6. Verify whether the following process is stationary:

Yt = µ + α1 εt−1 + α2 εt−2 + · · · + αq εt−q + εt , εt ∼ WN(0, σ²)

It can be verified that

E[Yt ] = µ

var(Yt ) = E[(Yt − E[Yt ])²] = E[(µ + α1 εt−1 + · · · + αq εt−q + εt − µ)²]
= E[(α1 εt−1 + · · · + αq εt−q + εt )²]
= α1² E[εt−1²] + · · · + αq² E[εt−q²] + E[εt²] + cross terms αi αj E[εt−i εt−j ] with i ≠ j
= (α1² + · · · + αq²) σ² + σ² + 0
= (1 + α1² + · · · + αq²) σ²

cov(Yt , Yt−k ) = (αk + αk+1 α1 + αk+2 α2 + · · · + αq αq−k ) σ² , k = 1, 2, . . . , q
= 0 for k > q


(Weak) stationarity of a stochastic process requires that variances and autocovariances be finite
and independent of time. Finite order MA processes are always stationary by construction,
because they are a weighted sum of a fixed number of stationary white noise random variables,
as this example shows.
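A small simulation sketch confirms the variance formula; the MA(3) coefficients below are illustrative choices of my own, not taken from the handout:

```python
import numpy as np

rng = np.random.default_rng(0)
alphas = np.array([0.5, -0.3, 0.2])    # illustrative MA(3) coefficients
sigma2 = 1.0
theory_var = (1.0 + np.sum(alphas ** 2)) * sigma2   # 1.38

e = rng.standard_normal(400_000)
# y_t = e_t + 0.5 e_{t-1} - 0.3 e_{t-2} + 0.2 e_{t-3}
y = e[3:] + alphas[0] * e[2:-1] + alphas[1] * e[1:-2] + alphas[2] * e[:-3]
print(theory_var, y.var())   # sample variance is close to 1.38
```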

7. Formulate a test that the true model is MA(0) versus the alternative that it is MA(1).
From the lecture notes, critical values of the standard normal distribution can be used to test
hypotheses like H0 : ρk = 0 against H1 : ρk ≠ 0 using ρ̂i , i = 1, 2, . . . , k (i.e. the null
hypothesis of MA(k−1) against the alternative of MA(k)), using the statistic

z = √T ρ̂k / (1 + 2(ρ̂1² + · · · + ρ̂k−1²))^(1/2)

Standard ‘root-T ’ asymptotics apply (see, e.g., Verbeek Chapter 2.6.2). The test required for this
exercise is simply the case k = 1, so the relevant test statistic is simply

z = √T ρ̂1

This statistic is asymptotically N(0, 1) under the null hypothesis that ρ1 = 0.
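The statistic can be sketched as a small function (helper name my own); for k = 1 the denominator collapses to 1, so z reduces to √T ρ̂1:

```python
import numpy as np

def bartlett_z(rho_hat, T, k):
    """z statistic for H0: rho_k = 0 (MA(k-1)) vs H1: rho_k != 0 (MA(k)),
    given the first k sample autocorrelations rho_hat[0..k-1]."""
    denom = np.sqrt(1.0 + 2.0 * np.sum(np.array(rho_hat[:k - 1]) ** 2))
    return np.sqrt(T) * rho_hat[k - 1] / denom

# k = 1: the denominator is 1, so z = sqrt(T) * rho_1
print(bartlett_z([0.15], 100, 1))   # 1.5
```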

8. Derive a test that the true model is AR(1) against the alternative that it is AR(2).
This is just a related version of the test procedure given above, only it is now based on the Partial
ACF (PACF). We can use partial autocorrelation functions θ̂kk to test the hypothesis H0 : θkk = 0
against the alternative that it is not zero. It can be shown that, if the observed process is in fact
AR(p), then the asymptotic distribution of θ̂kk is

θ̂kk ∼ N(0, T −1 ) for k > p

For the case of an AR(2), θ33 = 0 but θ22 ≠ 0, whereas for an AR(1), θ22 = 0 and θ11 ≠ 0. Hence
the test is that of H0 : θ22 = 0 against H1 : θ22 ≠ 0 and the relevant test statistic is

z = √T θ̂22 = √T (ρ̂2 − ρ̂1²)/(1 − ρ̂1²)

The asymptotic distribution of z is N(0, 1) under H0 .
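The sample value θ̂22 can be computed directly from the first two autocorrelations (helper name my own); under an AR(1), ρk = θ^k, so the numerator vanishes:

```python
def theta22(rho1, rho2):
    """Second partial autocorrelation from the first two autocorrelations."""
    return (rho2 - rho1 ** 2) / (1.0 - rho1 ** 2)

# Under an AR(1) with parameter theta, rho_k = theta^k, so theta22 = 0
theta = 0.6
print(theta22(theta, theta ** 2))        # 0.0
# When rho_2 exceeds rho_1^2 (as under an AR(2)) theta22 is non-zero:
print(round(theta22(0.6, 0.5), 5))       # 0.21875
```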

9. Derive the mean, variance and autocorrelations for the following three models:

Yt = µ + εt , εt ∼ IID(0, σ2 )
Yt = µ + εt + αεt−1 , εt ∼ IID(0, σ2 )
Yt = δ + θYt−1 + εt , εt ∼ IID(0, σ2 )

• The constant model is


Yt = µ + εt , εt ∼ IID(0, σ2 )


where errors are zero mean random variables with variance σ2 .

E[Yt ] = E[µ + εt ] = µ + E[εt ] = µ

var(Yt ) = E[(Yt − µ)²] = E[εt²] = σ²

γ(k) = E[(Yt − µ)(Yt−k − µ)] = E[εt εt−k ] = 0

for all k ≠ 0. Hence the constant model is an uncorrelated process because εt is.

• The MA(1) model is


Yt = µ + εt + αεt−1 , εt ∼ IID(0, σ2 )
It follows that the expectation is

E[Yt ] = E[µ + εt + αεt−1 ] = µ + E[εt ] + α E[εt−1 ] = µ

because the errors are zero mean random variables. The variance is
var(Yt ) = E[(Yt − µ)²] = E[(εt + αεt−1 )²] = E[εt² + 2αεt εt−1 + α²εt−1²] = σ²(1 + α²)

because εt is uncorrelated. For the MA(1) model, γ(1) is now non-zero because

γ(1) = E[(Yt − µ)(Yt−1 − µ)] = E[(εt + αεt−1 )(εt−1 + αεt−2 )] = ασ2

so that
ρ(1) = γ(1)/γ(0) = α/(1 + α²)
However

γ(k) = E[(Yt − µ)(Yt−k − µ)] = E[(εt + αεt−1 )(εt−k + αεt−k−1 )] = 0

for all k > 1, so


ρ(k) = 0, k = 2, 3, . . .
• The AR(1) model is
Yt = δ + θYt−1 + εt , εt ∼ IID(0, σ2 )
As with many of these questions, there is more than one way to do them. Here is probably
the simplest way. If it is assumed that |θ| < 1, the process is stationary (and weakly stationary).
This implies that E[Yt ] = µ for some constant µ that we are required to determine.
Then
Then

E[Yt ] = E[δ + θYt−1 + εt ]


µ = δ + θ E[Yt−1 ] + E[εt ]
= δ + θµ

since εt is a zero mean process and the expectation of a sum is the sum of the expectations.
From the above it is clear that
µ = δ/(1 − θ)


Again, the variance and autocovariance structure can be approached in a variety of ways.
One way is to use the MA(∞) representation of the AR(1). Putting θ(L) = 1 − θL (where L
is the lag operator) and yt = Yt − δ/(1 − θ), we have
θ(L)yt = εt
so that
θ−1 (L)θ(L)yt = yt = θ−1 (L)εt = (1 + θL + θ2 L2 + · · · )εt = εt + θεt−1 + θ2 εt−2 + · · ·
(Note, if you are asked to SHOW that the inverse lag operator polynomial is as shown here,
you need to demonstrate it by equating coefficients to show that θ−1 (L)θ(L) = 1.) Now
γ(0) = var(Yt ) = var(yt )
= E[(εt + θεt−1 + θ²εt−2 + · · · )²]
= E[εt²] + θ² E[εt−1²] + θ⁴ E[εt−2²] + · · ·
= σ²(1 + θ² + θ⁴ + · · · )
= σ²/(1 − θ²)
You need to explain the reasoning used here, viz. that the errors have zero mean, constant
variance and are uncorrelated plus that the sum of the geometric progression 1+θ2 +θ4 +· · ·
is given by 1/(1 − θ2 ). Similar reasoning can be used to find
γ(k) = E[yt yt−k ]
= E[(εt + θεt−1 + θ²εt−2 + · · · )(εt−k + θεt−k−1 + θ²εt−k−2 + · · · )]
= σ²(θ^k + θ^(k+2) + θ^(k+4) + · · · )
= θ^k σ²(1 + θ² + θ⁴ + · · · )
= θ^k γ(0)
Hence
ρ(k) = γ(k)/γ(0) = θ^k , k = 1, 2, . . .
Alternatively,

yt = θyt−1 + εt
= θ(θyt−2 + εt−1 ) + εt = θ²yt−2 + θεt−1 + εt
= θ²(θyt−3 + εt−2 ) + θεt−1 + εt = θ³yt−3 + θ²εt−2 + θεt−1 + εt
= · · ·
= θ^k yt−k + θ^(k−1) εt−(k−1) + θ^(k−2) εt−(k−2) + · · · + θεt−1 + εt
Therefore
γ(k) = cov(yt , yt−k )
= E[(θ^k yt−k + θ^(k−1) εt−(k−1) + θ^(k−2) εt−(k−2) + · · · + θεt−1 + εt ) yt−k ]
= E[θ^k yt−k yt−k ] + E[(θ^(k−1) εt−(k−1) + θ^(k−2) εt−(k−2) + · · · + θεt−1 + εt ) yt−k ]
= θ^k E[yt−k yt−k ] + 0
= θ^k γ(0)


where yt−k is uncorrelated with the white noise terms εj , j = t − k + 1, . . . , t. It then follows

ρ(k) = γ(k)/γ(0) = θ^k , k = 1, 2, . . .

10. Given YT = 1.32 and σ2 = 1 produce the optimal forecasts for YT +1 and YT +2 and calculate their
forecast error variances from the following model

Yt = 0.5 + 0.25Yt−1 + εt

The optimal forecast in this AR(1) model is the value of the conditional mean, conditional on the
information set FT , i.e. information available up to and including time T . So, we require

E[YT+j | FT ] = E[0.5 + 0.25YT+j−1 + εT+j | FT ], j = 1, 2

For j = 1, the optimal one-step ahead forecast is

ŶT +1|T = 0.5 + 0.25 × 1.32 = 0.83

whilst for j = 2 it is
ŶT +2|T = 0.5 + 0.25 × 0.83 = 0.7075
since future {εt } are not part of FT . Notice that the optimal forecast ŶT+1|T = 0.83 has been
used in the two-step forecast because YT+1 is not part of the information set FT , and future εT+j
are not part of this information set and are thus set to their unconditional mean of zero.
The associated forecast error variances, C1 and C2 are readily seen to be 1 and 1.0625, respec-
tively.
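The forecast recursion can be sketched as a small function (names my own); the h-step error variance is Ch = σ²(1 + θ² + · · · + θ^(2(h−1))):

```python
def ar1_forecasts(delta, theta, y_T, sigma2, H):
    """Optimal forecasts and forecast error variances, 1..H steps ahead."""
    forecasts, variances = [], []
    y_hat, C = y_T, 0.0
    for h in range(1, H + 1):
        y_hat = delta + theta * y_hat            # conditional-mean recursion
        C += theta ** (2 * (h - 1)) * sigma2     # C_h = s2 * sum_i theta^(2i)
        forecasts.append(y_hat)
        variances.append(C)
    return forecasts, variances

f, v = ar1_forecasts(0.5, 0.25, 1.32, 1.0, 2)
print([round(x, 4) for x in f])   # [0.83, 0.7075]
print(v)                          # [1.0, 1.0625]
```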

11. Given YT = 231.00, YT −1 = 221.40 and YT −2 = 218.00, produce the optimal forecasts for YT +h
where h = 1, 2 and ∞ from the following model

Yt = 201.5 + 0.7Yt−1 − 0.1Yt−2 + εt

This is a stationary AR(2) because the roots of the characteristic equation are 2 and 5. The
unconditional mean of the process will be the optimal predictor for h = ∞. By stationarity we
have E[Yt ] = E[Yt−1 ] = E[Yt−2 ], so it follows that the unconditional mean of the process is
µ = δ/(1 − θ1 − θ2 ) = 201.5/0.4 = 503.75
This is the infinite horizon optimal predictor.
For other horizons the best predictor is the conditional mean, conditional on the information set
FT , i.e. information available up to and including time T . Hence, for h = 1 the optimal predictor
is
ŶT +1|T = 201.50 + 0.7 × 231.00 − 0.1 × 221.40 = 341.06
and for h = 2,
ŶT +2|T = 201.50 + 0.7 × 341.06 − 0.1 × 231.00 = 417.14
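Iterating the conditional-mean recursion (future shocks set to zero) reproduces these forecasts and shows convergence to the unconditional mean; a minimal sketch:

```python
delta, t1, t2 = 201.5, 0.7, -0.1
y = [218.00, 221.40, 231.00]           # Y_{T-2}, Y_{T-1}, Y_T
for _ in range(200):                   # future shocks set to zero
    y.append(delta + t1 * y[-1] + t2 * y[-2])
print(round(y[3], 2), round(y[4], 2))  # 341.06 417.14
print(round(y[-1], 2))                 # 503.75  (the unconditional mean)
```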


12. State whether the unconditional variance, σ2 , is defined for each of the following processes. If
σ2 is well defined, calculate its value.

σ2t = 1 + 0.5ε2t−1 (7)


σ2t = 3 + 0.55ε2t−1 + 0.45σ2t−1 (8)
σ2t = 2 + 0.25ε2t−1 + 0.45σ2t−1 + 0.15σ2t−2 (9)

The unconditional variance of a (G)ARCH model is well defined if the sum of the ARCH and
GARCH coefficients defining the conditional variance is strictly less than unity and the constant
(intercept) is positive.
So, for (7) we have an ARCH(1) model with ω = 1, α1 = 0.5 (employing the notation used in
lectures). Hence the unconditional variance does exist and its value is σ2 = 1/(1 − 0.5) = 2.
Process (8): Since α1 = 0.55 and β1 = 0.45 and their sum is unity, the unconditional variance of
this GARCH(1,1) process is not defined.
Process (9): This GARCH(1,2) process has: α1 = 0.25; β1 = 0.45; β2 = 0.15 with ω = 2. Since
the sum of the ARCH and GARCH coefficients is 0.25 + 0.45 + 0.15 = 0.85 < 1, the unconditional
variance is well defined and is given by σ² = 2/(1 − 0.85) ≈ 13.33.

13. For the process (7) in the above question, assume εT = 0 and calculate the optimal forecast of
the conditional variance, σ²T+h , for h = 1, 2 and ∞ steps ahead.
The optimal predictor for an ARCH(1) model for one-step ahead is
σ̂²T+1|FT = E[ε²T+1 | FT ] = ω + αε²T

where FT denotes the information set at time T . Since εT = 0, ω = 1, α1 = 0.5 the optimal one
step ahead prediction is 1. To predict two periods ahead, we would wish to have
σ̂²T+2|FT = E[ε²T+2 | FT ] = ω + αε²T+1

But εT +1 (and its square) are not available as part of FT , though we do have its optimal forecast,
so the optimal forecast of the conditional variance two periods ahead is
σ̂²T+2|FT = E[ε²T+2 | FT ] = ω + α E[ε²T+1 | FT ] = ω + α(ω + αε²T ) = ω(1 + α) + α²ε²T

With the values we have in this case, the two-step ahead optimal prediction is 1 × 1.5 = 1.5.
Finally, the infinite horizon best predictor is just the unconditional variance of the process, viz.
2.
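Both answers can be checked by iterating the variance recursion σ²T+h|T = ω + α σ²T+h−1|T starting from αε²T = 0; a sketch:

```python
omega, alpha = 1.0, 0.5          # the ARCH(1) of process (7)
uncond = omega / (1.0 - alpha)   # unconditional variance = 2.0

# With eps_T = 0 the recursion starts from alpha * eps_T^2 = 0:
sig2, path = 0.0, []
for h in range(1, 6):
    sig2 = omega + alpha * sig2  # sigma^2_{T+h|T} = omega + alpha * sigma^2_{T+h-1|T}
    path.append(sig2)
print(path)    # [1.0, 1.5, 1.75, 1.875, 1.9375] -> approaching 2.0
```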

