
Quantitative Tools for Economic and Policy Research
13 September 2024

Time-Series Analysis II
Vector Autoregressive (VAR) and
Vector Error-Correction Model (VECM)
Siong Hook Law, Ph.D
School of Business and Economics
Universiti Putra Malaysia
Email: [email protected]

1
Preview
• Vector Autoregressive (VAR) Model
• Johansen Multivariate Cointegration
• Vector Error-Correction Model (VECM)
• Estimate the Johansen Cointegration and VECM using Stata
• Impulse Response Function and Variance Decomposition

2
Learning Objectives

1. Describe and estimate the VAR model

2. Illustrate the use of the Johansen approach, including the long-run and short-run effects

3. Explain and estimate the VECM and test for Granger causality within the vector error-correction model (VECM) setting

4. Estimate the impulse response function and variance decomposition
3
Vector Autoregressive (VAR) Model
• VAR models generalize univariate models (single-equation models) by allowing multivariate time series (multiple equations).
• A VAR is an n-variable, n-equation model that expresses each variable as a linear function of its own past values, the past values of all the other variables in the system, and a serially uncorrelated error term.
• Why do we need multiple series?
  – To understand the relationships between several components
  – To obtain better forecasts
• VAR models provide a coherent and credible approach to data description, forecasting, structural inference, and policy analysis.
• Main uses of VAR models include forecasting macroeconomic variables (e.g. GDP, inflation, unemployment, the interest rate) and policy analysis.
VAR Models – Formal Representation
• VARs are basically systems of equations in which the outcome variables depend on other outcome variables.
• Consider the following bivariate VAR(1) model in (y_t, x_t) with one lag:

  y_t = β_10 + β_11 y_{t−1} + β_12 x_{t−1} + ε_{y,t}
  x_t = β_20 + β_21 x_{t−1} + β_22 y_{t−1} + ε_{x,t}

• VARs are meant to get around the obvious concern with the above system of equations: what causes what?
• The focus is on predicting the endogenous variables as functions of their lags. The goal is to exploit useful information and make predictions.
5
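The bivariate VAR(1) above can be written compactly in matrix form; a small LaTeX sketch (the stacked vector z_t and the coefficient matrices B_0, B_1 are labels introduced here, not notation from the slides):

    \[
    z_t = B_0 + B_1 z_{t-1} + \varepsilon_t,
    \qquad
    z_t = \begin{pmatrix} y_t \\ x_t \end{pmatrix},\quad
    B_0 = \begin{pmatrix} \beta_{10} \\ \beta_{20} \end{pmatrix},\quad
    B_1 = \begin{pmatrix} \beta_{11} & \beta_{12} \\ \beta_{22} & \beta_{21} \end{pmatrix}
    \]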
VAR Models – Formal Representation

- Set up a bivariate VAR model, where the variables are output (y) and money supply (M3)
- Monthly data from 2015:M1 to 2024:M6
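A minimal Stata sketch of how such a bivariate VAR could be set up. The variable names y and m3 and the generated monthly time index are assumptions for illustration (the slides' actual data file is not reproduced here), and the observations are assumed to be already in time order:

    * declare the monthly time dimension, 2015m1-2024m6 (assumed variable names)
    generate t = tm(2015m1) + _n - 1
    format t %tm
    tsset t

    * estimate a bivariate VAR in output (y) and money supply (m3) with 2 lags
    var y m3, lags(1/2)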
Bivariate VAR Example

7
Bivariate VAR Example – Lag-length and Stability
• Importance of an appropriate lag length
  – If the lag length is too small, the model is misspecified
  – If the lag length is too large, degrees of freedom are wasted
• Lag-length criteria
  – Decision: Akaike, Schwarz, Hannan-Quinn
  – Note: there should be no autocorrelation / serial correlation problem at the selected lag
• VAR stability conditions and residual diagnostics
  – Stability of the VAR system implies stationarity.
  – If all inverse roots of the characteristic AR polynomial have modulus less than one (i.e. lie inside the unit circle), the estimated VAR is stable.
  – If the VAR is not stable, tests conducted on the VAR model may be invalid, and impulse response standard errors are not valid.
• If the model is stable, proceed to residual diagnostics (a Stata sketch follows below).
8
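A hedged Stata sketch of these checks, continuing the assumed y/m3 example (the chosen lag of 2 and the maximum lag of 8 are illustrative):

    * lag-order selection statistics (AIC, HQIC, SBIC)
    varsoc y m3, maxlag(8)

    * estimate the VAR at the chosen lag, then check stability and residual autocorrelation
    var y m3, lags(1/2)
    varstable, graph     // inverse roots of the AR characteristic polynomial
    varlmar, mlag(4)     // LM test for residual autocorrelation up to lag 4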
Bivariate VAR Example – Granger Causality
• Granger causality test – investigates whether lagged values of one variable help to predict the other variables in the model. The null and alternative hypotheses are as follows:
  – H0: X does not Granger-cause Y
  – Ha: X Granger-causes Y
• Decision: if the p-value is
  – < 0.05, reject H0: “X” Granger-causes “Y” at the 5% significance level
  – > 0.05, fail to reject H0: “X” does not Granger-cause “Y” at the 5% significance level

9
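After the VAR is estimated, Stata reports these Wald tests directly; a minimal sketch using the same assumed example:

    * Granger causality Wald tests for each equation in the VAR
    var y m3, lags(1/2)
    vargranger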
Impulse Response Function (IRF)
• IRFs allow us to trace out the time path (current and future values) of the variables in our model following a one-unit increase in the current value of one of the VAR errors: what is the effect of a one-unit shock in “X” on “Y”?
  – To identify the impulse responses, a restriction is required on the contemporaneous impact matrix – the Cholesky Decomposition.
  – The order of the variables plays a key role, as the restrictions imply that some shocks have no contemporaneous effect on some of the variables in the system.
  – The ordering is established by the econometrician, but economic theory and sensible assumptions are required to order the variables.

10
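A hedged Stata sketch of orthogonalised (Cholesky) impulse responses after the VAR; the Cholesky ordering is set by the order in which the variables are listed in the var command, and the file/IRF names are placeholders:

    * save impulse-response results and plot the response of y to a shock in m3
    var y m3, lags(1/2)
    irf set var_results, replace
    irf create order1, step(10) replace
    irf graph oirf, impulse(m3) response(y)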
Variance Decomposition (VD) /
Forecast Error Decomposition
• The VD displays the percentage of the forecast error of a variable over time that is due to a specific shock.
  – How much of the variability in the dependent variable is explained by its “own shocks” versus the “shocks to the other variables in the system”?
  – As with the IRF, the VD applies the Cholesky Decomposition for identification purposes.

11
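The same saved irf results can also report the forecast-error variance decomposition; a minimal sketch, assuming the irf file created in the previous sketch:

    * share of y's forecast-error variance due to shocks in m3, horizons 0-10
    irf table fevd, impulse(m3) response(y)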
Vector Error-Correction Model (VECM)

Multivariate Cointegration Test: Johansen Cointegration Test

12
More Practical Issues in Cointegration Testing
• Note: so far we have focussed on two variables, Y and X.
• In practice, you may have many more variables.
• Example: consider the three variables income (Y), consumption (C) and investment (I).
• Some macroeconomists claim that the ratios C/Y and I/Y are roughly stable in the long run.
• It is common to take logs, so
  ln(C) − ln(Y) ≈ constant
  and
  ln(I) − ln(Y) ≈ constant
• These are two separate candidate cointegrating relationships, and testing them separately can give different results.
More Practical Issues in Cointegration Testing
• If ln(C), ln(Y) and ln(I) all contain unit roots, the reasoning above suggests that two cointegrating relationships might occur.
• The variables in the model might form several equilibrium relationships governing the joint evolution of all the variables.
• In general, with n variables we can have at most n − 1 cointegrating vectors.
• When n = 2, if cointegration exists then the cointegrating vector is unique.
14
More Practical Issues in Cointegration Testing
• [Note: the Engle-Granger test (based on a cointegrating regression involving all three variables) would only find whether cointegration is or is not present; it would not tell you how many cointegrating relationships exist.]
• What should you do in this case? One option is to use the Johansen cointegration test for multiple equations.
• Using the Johansen Maximum Likelihood (ML) procedure, it is possible to obtain more than a single cointegrating relationship.
15
The Johansen ML Cointegration
• Given that Johansen cointegration is a maximum
likelihood based test (Engle-Granger is OLS
based), it requires a large sample.
• The Johansen multivariate cointegration test is
based on a VAR, not a single OLS estimation.
• All the variables are assumed to be endogenous
(although it is possible to include exogenous
variables)
• The test relies on the relationship between the rank
of a matrix and its eigenvalues or characteristic
roots.
16
The Johansen ML Cointegration
• The approach to testing for cointegration in a multivariate system is similar to the ADF test, but requires the use of a VAR approach:

  x_t = A_1 x_{t−1} + u_t
  Δx_t = (A_1 − I) x_{t−1} + u_t
  Δx_t = π x_{t−1} + u_t,   where π = (A_1 − I)

  where, in a system of g variables (g = n):
  x_t and u_t are g × 1 vectors,
  A_1 is a g × g matrix of parameters,
  I is a g × g identity matrix.
The Johansen ML Cointegration
• The rank of π equals the number of cointegrating vectors.
• If π consists of all zeros then, as with the ADF test, the rank of the matrix equals zero: all the x's are unit-root processes, implying the variables are not cointegrated.
• As with the ADF test, the equation can also include lagged dependent variables, although the number of lags included is important and can affect the result. This requires the use of the Akaike or Schwarz-Bayesian criteria to ensure an optimal lag length.
The π Matrix
• The number of cointegrating vectors (r), is the rank of π
and determines the cointegration relationship
• When r = 0 there are no cointegrating vectors
• If there are n variables in the system of equations, there
can be a maximum of n-1 cointegrating vectors.
• π is defined as the product of two matrices: α and β’, of
dimension (g x r) and (r x g) respectively.
• The β matrix gives the long-run coefficients of the cointegrating vectors; the α matrix contains the adjustment parameters and plays a role similar to the error-correction coefficient.
• The relationship can be expressed as:
π = αβ’
19
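A small LaTeX sketch of this decomposition for g = 3 variables and r = 1 cointegrating vector; the dimensions are purely illustrative and the component names x_{1}, x_{2}, x_{3} are introduced here:

    \[
    \pi x_{t-1} \;=\; \alpha \beta' x_{t-1}
    \;=\;
    \begin{pmatrix} \alpha_1 \\ \alpha_2 \\ \alpha_3 \end{pmatrix}
    \begin{pmatrix} \beta_1 & \beta_2 & \beta_3 \end{pmatrix}
    \begin{pmatrix} x_{1,t-1} \\ x_{2,t-1} \\ x_{3,t-1} \end{pmatrix}
    \]
    % alpha is (3 x 1) = (g x r); beta' is (1 x 3) = (r x g)
    % beta' x_{t-1} is the single error-correction term entering each equation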
The Johansen Test Statistics
• There are two test statistics produced by the Johansen ML procedure: the Trace test and the Maximal Eigenvalue test.
• Both can be used to determine the number of cointegrating vectors present, although they do not always indicate the same number of cointegrating vectors.
20
The Johansen Test Statistics
• The test statistics for cointegration are formulated as

  λ_trace(r) = −T Σ_{i=r+1}^{g} ln(1 − λ̂_i)

  and

  λ_max(r, r+1) = −T ln(1 − λ̂_{r+1})

  where λ̂_i is the estimated value of the ith ordered eigenvalue from the π matrix.

21
Differences Between the Two Test Statistics
• The Trace test is a joint test: the null hypothesis is that the number of cointegrating vectors is less than or equal to r, against a general alternative hypothesis that there are more than r.
• λ_trace = 0 when all the λ_i = 0, so it is a joint test.
• The Maximal Eigenvalue test conducts separate tests on each eigenvalue. The null hypothesis is that there are r cointegrating vectors present, against the alternative that there are (r + 1) present.

22
The Johansen Critical Values
▪ Johansen & Juselius (1990) provide critical values for the two statistics. The distribution of the test statistics is non-standard.
▪ The critical values depend on:
  1. the value of n − r, the number of non-stationary components
  2. whether a constant and/or trend are included in the regressions.
▪ If the test statistic is greater than the critical value from Johansen's tables, reject the null hypothesis that there are r cointegrating vectors in favour of the alternative that there are more than r.
23
The Johansen Testing Sequence
• The testing sequence under the null is r = 0, 1, ..., g−1, so that the hypotheses for λ_trace are

  H0: r = 0      vs  H1: 0 < r ≤ g
  H0: r = 1      vs  H1: 1 < r ≤ g
  H0: r = 2      vs  H1: 2 < r ≤ g
  ...                ...
  H0: r = g−1    vs  H1: r = g

• We keep increasing the value of r until we can no longer reject the null.
24
Interpretation of Johansen Test Results
• But how does this correspond to a test of the rank of the π matrix?
• r is the rank of π.
• π cannot be of full rank (n), since this would correspond to the original y_t being stationary.
• If π has zero rank then, by analogy with the univariate case, Δy_t depends only on Δy_{t−j} and not on y_{t−1}, so that there is no long-run relationship between the elements of y_{t−1}. Hence there is no cointegration.
• For 0 < rank(π) < n, the rank gives the number of cointegrating vectors, and there may be more than one.
25
Example
• Given the following model of stock prices (s) and income (y):

  s_t = α_0 + α_1 y_t + u_t,   u_t ~ i.i.d.(0, Σ)

  i.i.d. – independently and identically distributed;
  Σ – the variance-covariance matrix, indicating no heteroskedasticity.

Johansen ML Results (Trace Test)
  Null      Alternative   Trace   5% CV
  r = 0     r ≥ 1          40.3   20.2
  r ≤ 1     r = 2           7.6    9.16

Johansen ML Results (Maximum Eigenvalue Test)
  Null      Alternative   Max Eigenvalue   5% CV
  r = 0     r = 1          34.7            15.9
  r = 1     r = 2           8.6             9.2
26
Interpretation of Results
• Given that for both tests the test statistic exceeds its critical value (5%) when the null is r = 0, we can conclude that at least one cointegrating vector is present.
• For the null of (at most) one cointegrating vector, the test statistic is less than its critical value, so we conclude that only a single cointegrating vector is present.

27
Stata Output – Johansen Test

28
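A minimal Stata sketch of the command behind this kind of output; the variable names follow the log investment/income/consumption example used later in the slides, and the trend specification and lag length are illustrative assumptions:

    * Johansen trace and maximum-eigenvalue statistics for the cointegration rank
    vecrank linvestment lincome lconsumption, trend(constant) lags(2) max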
Normalised Cointegrating Vector
(Long-run β Coefficients)
• The long-run coefficients are normalised, such that we express the
relationship in terms of one of the variables as a dependent variable:

29
The α Adjustment Coefficients
• These can be interpreted in exactly the same way as the error correction term; asymptotic t-statistics are reported alongside (interpreted in the same way as t-statistics):

Vector error-correction model (the coefficient on L._ce1 is the ECT in each equation)

  Equation          ECT (L._ce1)   Std. Err.   z       P>|z|
  D_linvestment     -0.0123537     0.0071134   -1.74   0.082
  D_lincome          0.0025992     0.0018129    1.43   0.152
  D_lconsumption     0.0047339     0.0015379    3.08   0.002
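A hedged sketch of the Stata command that produces this type of output; the cointegration rank of 1 and the lag length are illustrative assumptions:

    * estimate the VECM with one cointegrating vector; the alpha (adjustment)
    * coefficients appear in the short-run equations, and the normalised
    * long-run beta coefficients appear in the cointegrating-equation block
    vec linvestment lincome lconsumption, rank(1) lags(2)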
Main Differences with the Bivariate Test for Cointegration
• Using the Johansen Maximum Likelihood (ML) procedure, it is possible to obtain more than a single cointegrating relationship, whereas only one can be obtained with the Engle-Granger test.
• With the Johansen procedure there are two separate tests for cointegration (Trace & Max Eigenvalue), which can give different results, but only one test with the Engle-Granger approach.
• Given that the Johansen is a maximum-likelihood-based test (Engle-Granger is OLS-based), it requires a large sample.
• The multivariate test is based on a VAR, not a single OLS estimation as with the Engle-Granger approach.

31
Criticisms of the Johansen Approach
• The result can be sensitive to the number of lags included in the test and to the presence of autocorrelation.
• If there are more than two cointegrating vectors present, how do we find the most appropriate vector for the subsequent tests?
• If the two test statistics differ, which one gives the correct result?
• This is a large-sample test.
• A further criticism is that the test often finds evidence of cointegration when none exists.
32
Multivariate Cointegration and Vector Error-Correction Models (VECM)
• Vector Error Correction Models (VECM) are the basic VAR with an error-correction term incorporated into the model. As with bivariate cointegration, multivariate cointegration implies that an appropriate VECM can be formed.
• The reason for the error-correction term is the same as in the standard error-correction model: it measures any movement away from the long-run equilibrium.
• These are often used as part of a multivariate test for cointegration, such as the Johansen ML test. Having found evidence of cointegration among some I(1) variables, we can then assess the short run and potential Granger causality with a VECM.
33
The Approach to Multivariate Cointegration and VECM
1) Test the variables for stationarity using the usual ADF tests.
2) If all the variables are I(1), include them in the cointegrating relationship.
3) Use the AIC or SBC to determine the number of lags in the cointegration test (order of the VAR).
4) Use the trace and maximal eigenvalue tests to determine the number of cointegrating vectors present.
5) Assess the long-run β coefficients and the adjustment α coefficients.
6) Produce the VECM for all the endogenous variables in the model and use it to carry out Granger causality tests over the short and long run.
(A Stata sketch of this sequence follows below.)
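A minimal Stata sketch of steps 1)–6), assuming the log investment/income/consumption variables from the example output; the ADF lag lengths, the VAR order, and the cointegration rank are illustrative choices, not results:

    * 1) ADF unit-root tests on each variable (levels and first differences)
    dfuller linvestment, lags(4) trend
    dfuller D.linvestment, lags(4)

    * 3) choose the VAR order with information criteria
    varsoc linvestment lincome lconsumption, maxlag(8)

    * 4) Johansen trace and maximum-eigenvalue tests for the cointegration rank
    vecrank linvestment lincome lconsumption, trend(constant) lags(2) max

    * 5)-6) estimate the VECM (beta and alpha are reported), then run diagnostics
    vec linvestment lincome lconsumption, rank(1) lags(2)
    vecstable, graph
    veclmar, mlag(4)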
Vector Error-correction Modeling (VECM)

Multi-Equation ECM
• It is termed vector error-correction modeling (VECM).
• The VECM is used to investigate dynamic causal interactions among the variables – among them is the notion of Granger causality.
• The interest is also in seeing which variable takes the burden of adjusting towards the long-run equilibrium.

35
Single Equation Derivation
Dependent variable: Y
Independent variables: x and w
Lagged dependent variable: at least lag 1 (cannot be 0); x and w enter at lag 0

Example: ARDL(1,0,0)

  Y_t = C + α_1 Y_{t−1} + β_1 x_t + β_2 w_t + ε_t    (1a)

Note: ΔY_t = Y_t − Y_{t−1}, so
  Y_t = ΔY_t + Y_{t−1};  x_t = Δx_t + x_{t−1};  w_t = Δw_t + w_{t−1}

Rewrite Equation (1a) so that the dependent variable is ΔY:

  ΔY_t + Y_{t−1} = C + α_1 Y_{t−1} + β_1 (Δx_t + x_{t−1}) + β_2 (Δw_t + w_{t−1}) + ε_t

  ΔY_t = C − Y_{t−1} + α_1 Y_{t−1} + β_1 Δx_t + β_1 x_{t−1} + β_2 Δw_t + β_2 w_{t−1} + ε_t

  ΔY_t = C − (1 − α_1) Y_{t−1} + β_1 x_{t−1} + β_2 w_{t−1} + β_1 Δx_t + β_2 Δw_t + ε_t

  ΔY_t = C − (1 − α_1) [ Y_{t−1} − (β_1 / (1 − α_1)) x_{t−1} − (β_2 / (1 − α_1)) w_{t−1} ] + β_1 Δx_t + β_2 Δw_t + ε_t
Single Equation Derivation…..

  ΔY_t = C − (1 − α_1) [ Y_{t−1} − (β_1 / (1 − α_1)) x_{t−1} − (β_2 / (1 − α_1)) w_{t−1} ] + β_1 Δx_t + β_2 Δw_t + ε_t

(the bracketed lag-1 level term is the error-correction component; the Δ terms are the short-run dynamics)

ECM equation:

  ΔY_t = C − λ ECT_{t−1} + β_1 Δx_t + β_2 Δw_t + ε_t

where
  ECT_{t−1} = Y_{t−1} − (β_1 / (1 − α_1)) x_{t−1} − (β_2 / (1 − α_1)) w_{t−1};   λ = (1 − α_1)

Based on the above, we can calculate the long-run coefficients:

  Y_t = θ_0 + θ_1 x_t + θ_2 w_t + v_t

where the long-run coefficients are
  θ_1 = β_1 / (1 − α_1);   θ_2 = β_2 / (1 − α_1);   θ_0 = C / (1 − α_1)
Single Equation Derivation (with the same lag = 1 for all variables)
Dependent variable: Y
Independent variables: x and w

Example: ARDL(1,1,1)

  Y_t = C + α_1 Y_{t−1} + β_1 x_t + β_2 x_{t−1} + β_3 w_t + β_4 w_{t−1} + ε_t    (2a)

Note: ΔY_t = Y_t − Y_{t−1}, so
  Y_t = ΔY_t + Y_{t−1};  x_t = Δx_t + x_{t−1};  w_t = Δw_t + w_{t−1}

Reparameterise Equation (2a):

  ΔY_t + Y_{t−1} = C + α_1 Y_{t−1} + β_1 (Δx_t + x_{t−1}) + β_2 x_{t−1} + β_3 (Δw_t + w_{t−1}) + β_4 w_{t−1} + ε_t

  ΔY_t = C − Y_{t−1} + α_1 Y_{t−1} + β_1 Δx_t + β_1 x_{t−1} + β_2 x_{t−1} + β_3 Δw_t + β_3 w_{t−1} + β_4 w_{t−1} + ε_t

  ΔY_t = C − (1 − α_1) Y_{t−1} + (β_1 + β_2) x_{t−1} + (β_3 + β_4) w_{t−1} + β_1 Δx_t + β_3 Δw_t + ε_t

  ΔY_t = C − (1 − α_1) [ Y_{t−1} − ((β_1 + β_2) / (1 − α_1)) x_{t−1} − ((β_3 + β_4) / (1 − α_1)) w_{t−1} ] + β_1 Δx_t + β_3 Δw_t + ε_t
Single Equation Derivation…..

  ΔY_t = C − (1 − α_1) [ Y_{t−1} − ((β_1 + β_2) / (1 − α_1)) x_{t−1} − ((β_3 + β_4) / (1 − α_1)) w_{t−1} ] + β_1 Δx_t + β_3 Δw_t + ε_t

(the bracketed lag-1 level term is the error-correction component; the Δ terms are the short-run dynamics)

ECM equation:

  ΔY_t = C − λ ECT_{t−1} + β_1 Δx_t + β_3 Δw_t + ε_t

where
  ECT_{t−1} = Y_{t−1} − ((β_1 + β_2) / (1 − α_1)) x_{t−1} − ((β_3 + β_4) / (1 − α_1)) w_{t−1};   λ = (1 − α_1)

Based on the above, we can calculate the long-run coefficients:

  Y_t = θ_0 + θ_1 x_t + θ_2 w_t + v_t

where the long-run coefficients are
  θ_1 = (β_1 + β_2) / (1 − α_1);   θ_2 = (β_3 + β_4) / (1 − α_1);   θ_0 = C / (1 − α_1)

A VECM consists of several such equations (one for each variable in the system), with the same lag length for all variables.
Vector Error-correction Modeling (VECM)
VECM (3-variable system):

  ΔY_t = α_1 + Σ_{i=1}^{k} β_{1i} ΔY_{t−i} + Σ_{i=1}^{k} γ_{1i} ΔX_{t−i} + Σ_{i=1}^{k} δ_{1i} ΔZ_{t−i} + λ_1 ε_{t−1} + u_{1t}

  ΔX_t = α_2 + Σ_{i=1}^{k} β_{2i} ΔY_{t−i} + Σ_{i=1}^{k} γ_{2i} ΔX_{t−i} + Σ_{i=1}^{k} δ_{2i} ΔZ_{t−i} + λ_2 ε_{t−1} + u_{2t}

  ΔZ_t = α_3 + Σ_{i=1}^{k} β_{3i} ΔY_{t−i} + Σ_{i=1}^{k} γ_{3i} ΔX_{t−i} + Σ_{i=1}^{k} δ_{3i} ΔZ_{t−i} + λ_3 ε_{t−1} + u_{3t}

  where ε_{t−1} = Y_{t−1} − [α + θ_1 X_{t−1} + θ_2 Z_{t−1}]

• What should be the signs of the speed-of-adjustment coefficients?

40
Vector Error-correction Modeling (VECM)
• Lag-length selection: information criteria and the requirement that the errors must be non-autocorrelated.
• Note: if the variables are not cointegrated, the error-correction terms must be dropped. In this case, we have a VAR in first differences.
• Note: in a system of equations like a VAR or VECM, the interest is not in the individual estimated coefficients [except the coefficients of the error-correction terms]. Accordingly, they are not normally reported.
41
Vector Error-correction Modeling (VECM)
INTERPRETATION I: GRANGER CAUSALITY

  ΔY_t = α_1 + Σ_{i=1}^{k} β_{1i} ΔY_{t−i} + Σ_{i=1}^{k} γ_{1i} ΔX_{t−i} + Σ_{i=1}^{k} δ_{1i} ΔZ_{t−i} + λ_1 ε_{t−1} + u_{1t}

  ΔX_t = α_2 + Σ_{i=1}^{k} β_{2i} ΔY_{t−i} + Σ_{i=1}^{k} γ_{2i} ΔX_{t−i} + Σ_{i=1}^{k} δ_{2i} ΔZ_{t−i} + λ_2 ε_{t−1} + u_{2t}

  ΔZ_t = α_3 + Σ_{i=1}^{k} β_{3i} ΔY_{t−i} + Σ_{i=1}^{k} γ_{3i} ΔX_{t−i} + Σ_{i=1}^{k} δ_{3i} ΔZ_{t−i} + λ_3 ε_{t−1} + u_{3t}

• The normal channel of causality is through the coefficients of the lagged first-differenced variables (“short-run” causality).
• The error-correction terms add an additional channel of causality (“long-run” causality).
• Thus, omitting the error-correction terms from the equation creates bias in the estimation due to omitting a relevant variable.
42
Granger Causality Test
• Granger causality can be further sub-divided into long-run and short-run causality.
• Long-run causality is determined by the error-correction term: if it is significant, it indicates evidence of long-run causality from the explanatory variables to the dependent variable.
• Short-run causality is determined by a test of the joint significance of the lagged explanatory variables, using an F-test or Wald test.
• Before the ECM can be formed, there first has to be evidence of cointegration. Given that cointegration implies a significant error-correction term, cointegration can be viewed as an indirect test of long-run causality.
• It is possible to have evidence of long-run causality but not short-run causality, and vice versa.
• In multivariate causality tests, the testing of long-run causality between two variables is more problematic, as it is impossible to tell which explanatory variable is driving the causality through the error-correction term.
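A hedged Stata sketch of these two tests after the VECM. The equation and coefficient names follow Stata's D_varname convention for vec output; the rank, lag length, and the specific restriction tested are illustrative assumptions:

    * estimate the VECM first
    vec linvestment lincome lconsumption, rank(1) lags(2)

    * short-run causality: joint Wald test on the lagged difference(s) of
    * investment in the income equation (add further lags if lags() > 2 was used)
    test [D_lincome]: LD.linvestment

    * long-run causality: significance of the error-correction term (L._ce1)
    * in each equation, read directly from the vec output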
Vector Error-correction Modeling (VECM)
• Another causality test is the strong exogeneity test, which imposes stronger restrictions by testing the joint significance of both the lagged dynamic terms and the ECT (Charemza & Deadman, 1992; Engle, Hendry, & Richard, 1983).
• Strong causality – a joint test of whether a variable bears the burden of short-run adjustment to re-establish the long-run equilibrium.
44
Example: Results of Granger Causality within the VECM

                       Δ Investment        Δ Income            Δ Consumption       ECT(t−1)
  Dependent variable   χ² stat (p-value)   χ² stat (p-value)   χ² stat (p-value)   coefficient
  Δ Investment         -                   2.34 (0.3108)       1.55 (0.4617)       -0.0123*
  Δ Income             7.09** (0.0289)     -                   5.96* (0.0509)       0.0026
  Δ Consumption        6.21** (0.0447)     4.93* (0.0850)      -                    0.0047***

Note: ***, ** and * denote significance at the 1%, 5% and 10% levels, respectively.

45
Example: Results of Vector Error-correction Modeling

46
Example: Results of Vector Error-correction Modeling

47
Example: Results of Vector Error-correction Modeling

48
Impulse Response Functions
• These trace out the effect on the dependent variables in the VAR of shocks to all the variables in the VAR.
• In a system of 2 variables there are 4 impulse response functions; with 3 variables there are 9.
• The shock occurs through the error term and affects the dependent variable over time.
• In effect, the VAR is expressed as a vector moving average (VMA) model; as in the univariate case previously, the shocks to the error terms can then be traced with regard to their impact on the dependent variables.
• If the time path of the impulse response function goes to 0 over time, the system of equations is stable; if it is unstable, the responses can explode.
An Impulse Response Function

[Figure: response of y to a one-unit shock traced over 10 periods (horizontal axis: time; vertical axis: response of y, with upper and lower confidence bounds).]

The response is significant because both the upper and lower bounds (red lines) fall in the same (positive) zone. (Note: if one bound falls in the positive zone and the other in the negative zone, the response is insignificant.)
Variance Decomposition
• Evaluates the percentage of the variation in the dependent variable that is explained by each variable in the system (including its own shocks).
• The total explanation is 100%.
• Let the dependent variable be Y; X1 and X2 are the other variables in the system.
• For example:

  Horizon     Y       X1      X2
  1Q        97.00    2.00    1.00
  100Q      25.00   60.00   15.00

• After 100 quarters (25 years), Y's own shocks explain only 25% of its forecast-error variance, and X1 explains more than X2.
Conclusion
• VARs have several important uses, particularly causality tests and forecasting.
• The VAR has several weaknesses, most importantly its lack of theoretical foundations.
• To assess the effects of any shock to the system, we need to use impulse response functions and variance decomposition.
• VECMs are an alternative, as they allow first-differenced variables and an error-correction term.
• If the variables are cointegrated, then short-run Granger causality should be tested within a VECM that includes the error-correction terms (ECT).
Summary: Steps in Time Series Econometrics

THEORETICAL UNDERPINNINGS
  Which variables should be included? What is the focus – a well-specified equation or system dynamics? Motivation?
      ↓
UNIT ROOT TESTS
      ↓
COINTEGRATION TESTS
      ↓
One-equation focus:                     System dynamics:
  Error Correction Modeling               VAR or VECM
  - Specification                         Granger causality
  - Estimation                            Variance decompositions &
  - Diagnostics                             impulse responses
  - Inferences
