
Time Series Analysis Final Exam, December 19, 2017, Solutions

Note: Unless indicated otherwise, {ε_t} represents a white noise process with variance σ_ε².

Question 1
Consider the AR(2) process given by

    y_t = 1 − 2φ + φ y_{t−1} + 2φ² y_{t−2} + ε_t,

where φ is a real-valued parameter.

a. [10] For which value(s) of φ is {yt } stationary? Determine the expected value E(yt ) for
those cases.

A: The AR polynomial is φ(L) = 1 − φL − 2φ²L². The inverted AR roots satisfy w² − φw − 2φ² = 0, which has solutions

    w_{1,2} = (φ ± √(φ² + 8φ²))/2 = (φ ± 3φ)/2,

or w₁ = 2φ and w₂ = −φ, so that φ(z) = (1 − 2φz)(1 + φz). The inverted AR roots are both stationary as long as both −1 < 2φ < 1 and −1 < −φ < 1, or if −1/2 < φ < 1/2. In those cases

    E(y_t) = (1 − 2φ)/φ(1) = (1 − 2φ)/((1 − 2φ)(1 + φ)) = 1/(1 + φ).
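As a quick numerical sanity check (not part of the exam solution), the factorisation of the AR polynomial and the stationarity region can be verified with numpy; the value φ = 0.3 below is just an illustrative choice inside (−1/2, 1/2):

```python
import numpy as np

phi = 0.3  # illustrative value inside the stationarity region (-1/2, 1/2)

# Roots of phi(z) = 1 - phi*z - 2*phi^2*z^2; np.roots expects the
# highest-degree coefficient first.
roots = np.roots([-2 * phi**2, -phi, 1.0])
inv_roots = np.sort(1.0 / roots)        # inverted AR roots

print(np.allclose(inv_roots, [-phi, 2 * phi]))   # True: w = -phi and w = 2*phi
print(np.all(np.abs(inv_roots) < 1))             # True: stationary

# The mean (1 - 2*phi)/phi(1) indeed simplifies to 1/(1 + phi).
print(np.isclose((1 - 2 * phi) / (1 - phi - 2 * phi**2), 1 / (1 + phi)))  # True
```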

b. [5] Determine those value(s) of φ for which {yt } is an I(1) process.

A: The process is stationary in first differences if we have a unit root together with a stationary root. A unit root combined with a stationary AR root occurs for φ = 1/2: the other inverted AR root in that case is w₂ = −1/2 (stationary). (For φ = −1 the inverted AR roots are w₁ = −2 [explosive] and w₂ = 1 [unit root], so this does not lead to an I(1) process.)

c. [10] Calculate, assuming that φ is restricted to values for which {y_t} is stationary, the autocorrelation function of the process in terms of φ. [Hint: use the Yule-Walker equations.]

A: Because the expectation of the process is irrelevant for the acf, we work with the process without the constant, so for convenience we write y_t for y_t − E(y_t) and obtain

    y_t = φ y_{t−1} + 2φ² y_{t−2} + ε_t.

Multiplying both sides by y_{t−1} and taking expectations gives

    γ₁ = φγ₀ + 2φ²γ₁,

© Cees Diks (UvA), 2017


which gives (upon dividing through by γ₀) (1 − 2φ²)ρ₁ = φρ₀ = φ, or ρ₁ = φ/(1 − 2φ²), apart from, as always, ρ₀ = 1. For general k ≥ 2 we find

    γ_k = φγ_{k−1} + 2φ²γ_{k−2},

and from this the recursion

    ρ_k = φρ_{k−1} + 2φ²ρ_{k−2}.

Two non-trivial solutions to this linear recursion have the form λ^k, where λ satisfies λ² − φλ − 2φ² = 0, that is, λ₁ = w₁ = 2φ and λ₂ = w₂ = −φ (equal to the inverted AR roots). Because the recursion is linear, linear combinations of these solutions also satisfy the recursion, so that we find the general solution

    ρ_k = c₁(2φ)^k + c₂(−φ)^k.
We are interested in the specific solution that satisfies ρ₀ = 1 and ρ₁ = φ/(1 − 2φ²), so we require

    [ 1    1  ] [ c₁ ]   [ 1            ]
    [ 2φ   −φ ] [ c₂ ] = [ φ/(1 − 2φ²) ],

which leads to (left-multiplying both sides by the inverse of the matrix on the left, whose determinant is −3φ)

    [ c₁ ]             [ −φ    −1 ] [ 1            ]   [ (1/3)(1 + 1/(1 − 2φ²)) ]
    [ c₂ ] = −1/(3φ) · [ −2φ    1 ] [ φ/(1 − 2φ²) ] = [ (1/3)(2 − 1/(1 − 2φ²)) ],

so that

    ρ_k = ((2 − 2φ²)/(3 − 6φ²)) (2φ)^k + ((1 − 4φ²)/(3 − 6φ²)) (−φ)^k.

Check: for k = 0 this gives

    ρ₀ = (2 − 2φ² + 1 − 4φ²)/(3 − 6φ²) = 1,

and for k = 1

    ρ₁ = (1/3)(2φ + 2φ/(1 − 2φ²) − 2φ + φ/(1 − 2φ²)) = φ/(1 − 2φ²).
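A minimal numerical check of the closed-form acf against the Yule-Walker recursion (φ = 0.3 below is an arbitrary value in the stationarity region):

```python
import numpy as np

phi = 0.3  # any value in (-1/2, 1/2)

def rho_closed(k):
    """Closed-form acf derived above."""
    c1 = (2 - 2 * phi**2) / (3 - 6 * phi**2)
    c2 = (1 - 4 * phi**2) / (3 - 6 * phi**2)
    return c1 * (2 * phi)**k + c2 * (-phi)**k

# Yule-Walker recursion rho_k = phi*rho_{k-1} + 2*phi^2*rho_{k-2},
# started from rho_0 = 1 and rho_1 = phi/(1 - 2*phi^2).
rho = [1.0, phi / (1 - 2 * phi**2)]
for _ in range(2, 10):
    rho.append(phi * rho[-1] + 2 * phi**2 * rho[-2])

print(np.allclose([rho_closed(k) for k in range(10)], rho))  # True
```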

Question 2
Let {y_t} follow an AR(2) process that, in error correction form, is given by

    ∆y_t = a + b y_{t−1} + c ∆y_{t−1} + ε_t.

a. [7] Give the null hypothesis and the alternative hypothesis of the Augmented Dickey-Fuller test for this model, in terms of the parameter vector (a, b, c). What is the corresponding test statistic? When does the test reject the null hypothesis (what can you say about the critical values at the 5% level of significance)?



A: The null hypothesis is H₀: b = 0 (unit root in the AR polynomial), and the alternative hypothesis is Hₐ: b < 0 (stationary AR polynomial). The test statistic is the t-statistic b̂/SE(b̂), and we reject H₀ if it is too small (one-sided test). Normally for a one-sided t-test the 5% critical value is (asymptotically) −1.645, but since under H₀ the time series is I(1), the t-statistic here isn't asymptotically normally distributed. As a result the critical value for the ADF test is smaller (−3.41 asymptotically; there is no need to mention the exact critical values, but it is important to note that the critical values are non-standard, and why).

b. [6] For b = −c − 1, find the conditions on a and/or c for which the process is stationary.

A: Upon rewriting the process in the form φ(L)y_t = a + ε_t, the AR polynomial can be seen to be φ(L) = 1 − (b + c + 1)L + cL². If b = −c − 1 the characteristic AR roots are solutions to φ(z) = 1 + cz² = 0, or z² = −1/c. These solutions lie outside the unit circle in the complex plane if and only if |c| < 1. Note that the process has a unit root for c = −1.
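A quick numerical confirmation (with the illustrative value c = 0.5): under b = −c − 1 the first-order lag term drops out, and both characteristic roots have modulus 1/√|c| > 1:

```python
import numpy as np

c = 0.5          # any |c| < 1
b = -c - 1.0     # the restriction from part b.

# phi(z) = 1 - (b + c + 1) z + c z^2; with b = -c - 1 this reduces to 1 + c z^2.
roots = np.roots([c, -(b + c + 1.0), 1.0])
print(np.allclose(np.abs(roots), 1 / np.sqrt(abs(c))))  # True: modulus sqrt(2) > 1
```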
c. [7] Now suppose that c = 0. Give the impulse response function g(h) = ∂y_{t+h}/∂ε_t.

A: For c = 0 the AR polynomial is 1 − (b + 1)L, so

    y_t − µ = (b + 1)(y_{t−1} − µ) + ε_t,

with µ = a/φ(1) = −a/b, and the MA(∞) representation

    y_t − µ = Σ_{i=0}^∞ (b + 1)^i ε_{t−i},

so ∂y_{t+h}/∂ε_t = (b + 1)^h.
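The impulse response g(h) = (b + 1)^h can be verified by propagating a unit shock through the recursion y_t = a + (b + 1)y_{t−1}; the values a = 0.4, b = −0.2 below are purely illustrative:

```python
import numpy as np

a, b = 0.4, -0.2   # illustrative parameters with -2 < b < 0 (stationary case)

def simulate(shock, h):
    """Value of y_{t+h} when eps_t = shock and all other shocks are zero."""
    y = -a / b + shock            # start from the mean mu = -a/b, then shock
    for _ in range(h):
        y = a + (b + 1) * y       # y_{t+1} = a + (b+1) y_t   (c = 0)
    return y

h = 5
g = simulate(1.0, h) - simulate(0.0, h)   # impulse response at horizon h
print(np.isclose(g, (b + 1) ** h))        # True
```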

Question 3
Suppose that {y_t} follows a variation on an ARCH(1) process, specified by y_t = σ_t ν_t with {ν_t} ∼ NID(0, 1), that is, a sequence of independent, standard normally distributed random variables, and conditional variance given by

    σ_t² = γ + (β y_{t−1} − α|y_{t−1}|) y_{t−1},

with α > 0, 0 < β < 1.

a. [4] Should, apart from the conditions on α and β already mentioned, additional restrictions
be imposed on the parameters? If not, why not? If so, which restrictions and why?

A: Yes. To make sure that the conditional variance is always positive, we need γ > 0, for the case where y²_{t−1} is close to zero, and α ≤ β, for the case where y²_{t−1} is very large.



b. [10] Express the conditional variance as a function of y²_{t−1} for the two cases y_{t−1} > 0 and y_{t−1} < 0. What is the difference? Which stylised fact is being modelled with this difference?

A: For y_{t−1} < 0 we have σ_t² = γ + (β + α)y²_{t−1}, and for y_{t−1} > 0, σ_t² = γ + (β − α)y²_{t−1}. Because β + α > β − α, a negative value of y_{t−1} gives rise to a larger conditional variance σ_t² than an equally large positive value of y_{t−1}. This is called the 'leverage effect', which has been established empirically many times.

c. [10] Calculate the unconditional variance Var(y_t).

A: Evidently we have

    σ_t² = γ + (β + α)y²_{t−1}   if y_{t−1} ≤ 0,
    σ_t² = γ + (β − α)y²_{t−1}   if y_{t−1} > 0.

Since, regardless of what happened before, y_{t−1} is distributed symmetrically around zero,

    E(y_t²) = E(σ_t²) = 0.5(γ + (β + α)E(y²_{t−1})) + 0.5(γ + (β − α)E(y²_{t−1})) = γ + βE(y²_{t−1}),

so that, since E(y_t) = 0, Var(y_t) = E(y_t²) = γ/(1 − β).
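A simulation check of Var(y_t) = γ/(1 − β); the parameter values below are arbitrary but satisfy the restrictions from part a.:

```python
import numpy as np

gamma, alpha, beta = 0.2, 0.1, 0.5   # gamma > 0, 0 < alpha <= beta < 1
rng = np.random.default_rng(1)

n = 200_000
y = np.zeros(n)
for t in range(1, n):
    # sigma_t^2 = gamma + (beta*y - alpha*|y|)*y = gamma + (beta - alpha*sgn(y))*y^2
    sig2 = gamma + (beta - alpha * np.sign(y[t - 1])) * y[t - 1] ** 2
    y[t] = np.sqrt(sig2) * rng.standard_normal()

print(gamma / (1 - beta))   # theoretical unconditional variance: 0.4
print(y.var())              # sample variance should be close to it
```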

d. [6] Give, for an observed time series {y_t}_{t=1}^n, the log-likelihood function corresponding to this model, conditional on the first observed value in the time series, y₁.

A:

    ℓ(α, β, γ) = Σ_{t=2}^{n} ( −(1/2) log(2π) − (1/2) log σ_t² − (1/2) y_t²/σ_t² ),

with σ_t² = γ + (β − α sgn(y_{t−1})) y²_{t−1}, or

    ℓ = −((n − 1)/2) log(2π)
        − (1/2) Σ_{t=2}^{n} log(γ + (β − α sgn(y_{t−1})) y²_{t−1})
        − (1/2) Σ_{t=2}^{n} y_t² / (γ + (β − α sgn(y_{t−1})) y²_{t−1}),

where sgn(·) represents the 'sign' of its argument.
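The log-likelihood translates directly into code; the helper `loglik` below is a hypothetical name, not part of the exam, and simply evaluates the expression above:

```python
import numpy as np

def loglik(alpha, beta, gamma, y):
    """Conditional log-likelihood of the model, conditioning on y[0]."""
    sig2 = gamma + (beta - alpha * np.sign(y[:-1])) * y[:-1] ** 2
    return np.sum(-0.5 * np.log(2 * np.pi) - 0.5 * np.log(sig2)
                  - 0.5 * y[1:] ** 2 / sig2)

# Tiny hand-checkable example: with y = (1, 0), sig2 = gamma + (beta - alpha),
# and the y_t^2/sig2 term vanishes.
y = np.array([1.0, 0.0])
print(np.isclose(loglik(0.1, 0.5, 0.2, y),
                 -0.5 * np.log(2 * np.pi) - 0.5 * np.log(0.6)))  # True
```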

Question 4
Consider the bivariate VAR(1) model given by

    Y_t = α + Φ₁ Y_{t−1} + ε_t,

where {ε_t} is a bivariate white noise process with variance-covariance matrix Ω,

    α = (1, −2.6)′   and   Φ₁ = [ 0.5    0.5 ]
                                [ 1.3   −0.3 ].



a. [5] Show that the process has a unit root and a stationary root.

A: The characteristic AR equation

    det(Φ(z)) = det(I − Φ₁z) = (1 − 0.5z)(1 + 0.3z) − 0.65z² = 1 − 0.2z − 0.8z² = 0

has solutions

    z_{1,2} = (0.2 ± √(0.04 + 3.2)) / (−1.6) = (0.2 ± 1.8) / (−1.6),

i.e. z₁ = 1 and z₂ = −2/1.6 = −1.25, and these indeed are a unit root and a stationary root (outside the unit circle).
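Numerically, the inverted roots are simply the eigenvalues of Φ₁, which makes the unit root easy to spot:

```python
import numpy as np

Phi1 = np.array([[0.5, 0.5],
                 [1.3, -0.3]])

# The roots z of det(I - Phi1 z) = 0 are the reciprocals of the eigenvalues of Phi1.
eig = np.sort(np.linalg.eigvals(Phi1))
print(np.allclose(eig, [-0.8, 1.0]))   # True: a unit root and a stationary one
print(1.0 / eig)                       # the roots z = -1.25 and z = 1
```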

b. [10] Give the individual ARIMA representations of {y_{1,t}} and {y_{2,t}}.

A:

    Φ(L) = [ 1 − 0.5L    −0.5L    ]
           [ −1.3L       1 + 0.3L ].

The inverse of this matrix, apart from the determinant (as in Cramer's rule), is

    C(L) = [ 1 + 0.3L    0.5L     ]
           [ 1.3L        1 − 0.5L ].

Multiplying Φ(L)Y_t = α + ε_t with C(L) on both sides gives, via

    C(L)Φ(L) = ((1 − 0.5L)(1 + 0.3L) − 0.65L²) I = (1 − L)(1 + 0.8L) I

and

    C(L) (1, −2.6)′ = (0, 0)′

(the lagged value of the constant 1 is 1, and that of −2.6 is −2.6),

    (1 − L)(1 + 0.8L) y_{1,t} = ε_{1,t} + 0.3ε_{1,t−1} + 0.5ε_{2,t−1}

and

    (1 − L)(1 + 0.8L) y_{2,t} = 1.3ε_{1,t−1} + ε_{2,t} − 0.5ε_{2,t−1}.

So y_{1,t} as well as y_{2,t} follows an ARIMA(1, 1, 1) process.
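The factorisation det(Φ(L)) = (1 − L)(1 + 0.8L) used above can be double-checked with numpy's polynomial helpers (coefficients ordered from the highest degree down):

```python
import numpy as np

# (1 - 0.5L)(1 + 0.3L) - 0.65 L^2, highest-degree coefficient first
det = np.polysub(np.polymul([-0.5, 1.0], [0.3, 1.0]), [0.65, 0.0, 0.0])
print(det)                                                    # -0.8L^2 - 0.2L + 1
print(np.allclose(det, np.polymul([-1.0, 1.0], [0.8, 1.0])))  # True: (1-L)(1+0.8L)
```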

c. [10] Rewrite the model in the VECM form. Give the cointegration vector, the vector of
adjustment coefficients and the relation describing the long-run equilibrium. [Hint: does
α give rise to a stochastic trend and/or a constant in the long-run equilibrium relation?]



A: The model can be written as

    ∆Y_t = α + [ −0.5    0.5 ] Y_{t−1} + ε_t.
               [  1.3   −1.3 ]

Check:

    Π = −Φ(1) = [ −0.5    0.5 ] = [ −0.5 ] (1  −1).
                [  1.3   −1.3 ]   [  1.3 ]

The CI vector is (1, −1), so y_{1,t} − y_{2,t} is stationary. The vector of adjustment coefficients is (−0.5, 1.3)′. The vector α satisfies α = −2 · (−0.5, 1.3)′, a multiple of the vector of adjustment coefficients, so that the model can be written as

    ∆Y_t = [ −0.5 ] ( (1  −1) Y_{t−1} − 2 ) + ε_t,
           [  1.3 ]

which shows that α is part of the long-run equilibrium relation, given by y_{1,t} − y_{2,t} − 2 = 0 (or an equivalent relation such as y_{2,t} = y_{1,t} − 2).
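The decomposition Π = (adjustment vector) × (CI vector)′, and the claim that α is a multiple of the adjustment vector, can be verified directly:

```python
import numpy as np

Phi1 = np.array([[0.5, 0.5],
                 [1.3, -0.3]])
alpha = np.array([1.0, -2.6])

Pi = Phi1 - np.eye(2)          # Pi = -Phi(1) = Phi1 - I
adj = np.array([-0.5, 1.3])    # adjustment coefficients
ci = np.array([1.0, -1.0])     # cointegration vector

print(np.allclose(Pi, np.outer(adj, ci)))   # True: Pi = adj * ci'
print(np.allclose(alpha, -2.0 * adj))       # True: the constant -2 enters the
                                            # long-run relation y1 - y2 - 2 = 0
```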
