irf create — Obtain IRFs, dynamic-multiplier functions, and FEVDs
Description
irf create estimates multiple sets of impulse–response functions (IRFs), dynamic-multiplier
functions, and forecast-error variance decompositions (FEVDs). All of these estimates and their
standard errors are known collectively as IRF results and are saved in an IRF file under a specified
filename. Once you have created a set of IRF results, you can use the other irf commands to analyze
them.
Quick start
Create impulse–response function myirf with 8 forecast periods in the active IRF file
irf create myirf
Same as above, and use IRF file myirfs.irf
irf create myirf, set(myirfs)
Same as above, but compute the IRF for 12 periods
irf create myirf, set(myirfs) step(12)
Note: irf commands can be used after var, svar, vec, arima, arfima, dsge, or dsgenl; see
[TS] var, [TS] var svar, [TS] vec, [TS] arima, [TS] arfima, [DSGE] dsge, or [DSGE] dsgenl.
Menu
Statistics > Postestimation
Syntax
After var
irf create irfname [, var_options]
After svar
irf create irfname [, svar_options]
After arima
irf create irfname [, arima_options]
After arfima
irf create irfname [, arfima_options]
Options
Main
set(filename[, replace]) specifies the IRF file to be used. If set() is not specified, the active IRF
file is used; see [TS] irf set.
If set() is specified, the specified file becomes the active file, just as if you had issued an irf
set command.
replace specifies that the results saved under irfname may be replaced, if they already exist. IRF
results are saved in files, and one file may contain multiple IRF results.
step(#) specifies the step (forecast) horizon; the default is eight periods.
order(varlist) is allowed only after estimation by var; it specifies the Cholesky ordering of the
endogenous variables to be used when estimating the orthogonalized IRFs. By default, the order
in which the variables were originally specified on the var command is used.
smemory is allowed only after estimation by arfima; it specifies that the IRFs are calculated based
on a short-memory model with the fractional difference parameter d set to zero.
estimates(estname) specifies that estimation results previously estimated by var, svar, or vec,
and stored by estimates, be used. This option is rarely specified; see [R] estimates.
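For example, a short session along these lines (the names myirfs, order1, order2, and myvar are chosen for illustration) uses set(), order(), and estimates() together:
. use https://www.stata-press.com/data/r18/lutkepohl2
. var dln_inv dln_inc dln_consump, lags(1/2)
. estimates store myvar
. irf create order1, set(myirfs) order(dln_inc dln_inv dln_consump)
. irf create order2, order(dln_consump dln_inc dln_inv) estimates(myvar)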
Std. errors
nose, bs, and bsp are alternatives that specify how (whether) standard errors are to be calculated. If
none of these options is specified, asymptotic standard errors are calculated, except in two cases:
after estimation by vec and after estimation by svar in which long-run constraints were applied.
In those two cases, the default is as if nose were specified, although in the second case, you could
specify bs or bsp. After estimation by vec, standard errors are simply not available.
nose specifies that no standard errors be calculated.
bs specifies that standard errors be calculated by bootstrapping the residuals. bs may not be
specified if there are gaps in the data.
bsp specifies that standard errors be calculated via a multivariate-normal parametric bootstrap.
bsp may not be specified if there are gaps in the data.
nodots, reps(#), and bsaving(filename[, replace]) are relevant only if bs or bsp is specified.
nodots specifies that dots not be displayed each time irf create performs a bootstrap replication.
reps(#), # > 50, specifies the number of bootstrap replications to be performed. reps(200) is
the default.
bsaving(filename[, replace]) specifies that file filename be created and that the bootstrap
replications be saved in it. New file filename is just a .dta dataset that can be loaded later
using use; see [D] use. If filename is specified without an extension, .dta is assumed.
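As a sketch of the bootstrap options (the names bsirf and bsreps are hypothetical), the following requests parametric-bootstrap standard errors after a VAR fit and saves the replications for later inspection:
. use https://www.stata-press.com/data/r18/lutkepohl2
. quietly var dln_inv dln_inc dln_consump, lags(1/2)
. set seed 12345
. irf create bsirf, step(8) bsp reps(500) bsaving(bsreps, replace) nodots
. use bsreps, clear
. describe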
                                           Estimation command
Saves                            arima  arfima  var  svar  vec  dsge/dsgenl
simple IRFs                        x      x      x    x     x        x
orthogonalized IRFs                x      x      x    x     x
dynamic multipliers                              x
cumulative IRFs                    x      x      x    x     x
cumulative orthogonalized IRFs     x      x      x    x     x
cumulative dynamic multipliers                   x
structural IRFs                                  x    x              x
Cholesky FEVDs                                   x    x     x
structural FEVDs                                      x
Introductory examples
. use https://www.stata-press.com/data/r18/lutkepohl2
(Quarterly SA West German macro data, Bil DM, from Lutkepohl 1993 Table E.1)
. var dln_inv dln_inc dln_consump if qtr>=tq(1961q2) & qtr<=tq(1978q4), lags(1/2)
(output omitted )
. irf create asymp, step(8) set(results1)
(file results1.irf created)
(file results1.irf now active)
(file results1.irf updated)
. set seed 123456
. irf create bs, step(8) bs reps(250) nodots
(file results1.irf updated)
. irf ctable (asymp dln_inc dln_consump fevd) (bs dln_inc dln_consump fevd),
> noci stderror
                 (1)        (1)        (2)        (2)
   step         fevd       S.E.       fevd       S.E.
   0            0          0          0          0
   1            .282135    .087373    .282135    .102756
   2            .278777    .083782    .278777    .098161
   3            .33855     .090006    .33855     .10586
   4            .339942    .089207    .339942    .104191
   5            .342813    .090494    .342813    .105351
   6            .343119    .090517    .343119    .105258
   7            .343079    .090499    .343079    .105266
   8            .34315     .090569    .34315     .105303
   (1) irfname = asymp, impulse = dln_inc, and response = dln_consump
   (2) irfname = bs, impulse = dln_inc, and response = dln_consump
Point estimates are, of course, the same. The bootstrap estimates of the standard errors, however,
are larger than the asymptotic estimates, which suggests that the sample size of 71 is not large
enough for the distribution of the estimator of the FEVD to be well approximated by the asymptotic
distribution. Here we would expect the bootstrap confidence interval to be more reliable than the
confidence interval that is based on the asymptotic standard error.
Technical note
The details of the bootstrap algorithms are given in Methods and formulas. These algorithms are
conditional on the first p observations, where p is the order of the fitted VAR. (In an SVAR model, p
is the order of the VAR that underlies the SVAR.) The bootstrapped estimates are conditional on the
first p observations, just as the estimators of the coefficients in VAR models are conditional on the
first p observations. With bootstrap standard errors (option bs), the p initial observations are used
together with resampled residuals to produce the bootstrap samples used for estimation. With the
parametric bootstrap (option bsp), the p initial observations are used together with draws from a
multivariate normal distribution with variance–covariance matrix Σ̂ to generate the bootstrap samples.
Technical note
For var and svar e() results, irf uses Σ̂, the estimated variance matrix of the disturbances, in
computing the asymptotic standard errors of all the functions. The point estimates of the orthogonalized
impulse–response functions, the structural impulse–response functions, and all the variance
decompositions also depend on Σ̂. As discussed in [TS] var, var and svar use the ML estimator of
this matrix by default, but they have option dfk, which will instead use an estimator that includes a
small-sample correction. Specifying dfk when the model is fit (when the var or svar command is
given) changes the estimate of Σ̂ and will change the IRF results that depend on it.
(graph omitted: OIRF plotted against Step, 0 to 50; graphs by irfname, impulse variable, and response variable)
The graph shows that the estimated OIRF converges to a positive asymptote, which indicates that
an orthogonalized innovation to the unemployment rate in Indiana has a permanent effect on the
unemployment rate in Missouri.
IRF files are just Stata datasets that have names ending in .irf instead of .dta. The dataset in
the file has a nested panel structure.
Variable irfname contains the irfname specified by the user. Variable impulse records the name
of the endogenous variable whose innovations are the impulse. Variable response records the name
of the endogenous variable that is responding to the innovations. In a model with K endogenous
variables, there are K² combinations of impulse and response. Variable step records the periods
for which these estimates were computed.
Below is a catalog of the statistics that irf create estimates and the variable names under which
they are saved in the IRF file.
Statistic Name
impulse–response functions irf
orthogonalized impulse–response functions oirf
dynamic-multiplier functions dm
cumulative impulse–response functions cirf
cumulative orthogonalized impulse–response functions coirf
cumulative dynamic-multiplier functions cdm
Cholesky forecast-error decomposition fevd
structural impulse–response functions sirf
structural forecast-error decomposition sfevd
standard error of the impulse–response functions stdirf
standard error of the orthogonalized impulse–response functions stdoirf
standard error of the cumulative impulse–response functions stdcirf
standard error of the cumulative orthogonalized impulse–response functions stdcoirf
standard error of the Cholesky forecast-error decomposition stdfevd
standard error of the structural impulse–response functions stdsirf
standard error of the structural forecast-error decomposition stdsfevd
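Because IRF files are just Stata datasets, the saved results can be inspected directly. A brief sketch, using the results1.irf file created in the examples above:
. use results1.irf, clear
. describe
. list irfname impulse response step oirf fevd in 1/5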
In addition to the variables, information is stored in _dta characteristics. Much of the following
information is also available in r() after irf describe, where it is often more convenient to obtain
the information. Characteristic _dta[version] contains the version number of the IRF file, which
is currently 1.1. Characteristic _dta[irfnames] contains a list of all the irfnames in the IRF file.
For each irfname, there are a series of additional characteristics:
Name                        Contents
_dta[irfname_model]         var, sr_var, lr_var, vec, arima, arfima, dsge, or dsgenl
_dta[irfname_order]         Cholesky order used in IRF estimates
_dta[irfname_exog]          exogenous variables, and their lags, in VAR
_dta[irfname_exogvars]      exogenous variables in VAR
_dta[irfname_constant]      constant or noconstant, depending on whether noconstant was
                            specified in var or svar
_dta[irfname_lags]          lags in model
_dta[irfname_exlags]        lags of exogenous variables in model
_dta[irfname_tmin]          minimum value of timevar in the estimation sample
_dta[irfname_tmax]          maximum value of timevar in the estimation sample
_dta[irfname_timevar]       name of tsset timevar
_dta[irfname_tsfmt]         format of timevar
_dta[irfname_varcns]        constrained or colon-separated list of constraints placed on
                            VAR coefficients
_dta[irfname_svarcns]       constrained or colon-separated list of constraints placed on
                            SVAR coefficients
_dta[irfname_step]          maximum step in IRF estimates
_dta[irfname_stderror]      asymptotic, bs, bsp, or none, depending on the type of
                            standard errors requested
_dta[irfname_reps]          number of bootstrap replications performed
_dta[irfname_version]       version of the IRF file that originally held the irfname IRF results
_dta[irfname_rank]          number of cointegrating equations
_dta[irfname_trend]         trend() specified in vec
_dta[irfname_veccns]        constraints placed on VECM parameters
_dta[irfname_sind]          normalized seasonal indicators included in vec
_dta[irfname_d]             fractional difference parameter d in arfima
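A convenient way to see much of this information is irf describe, which leaves results in r(); the characteristics themselves can be listed after loading the IRF file as data. A sketch, assuming the asymp results created earlier in results1.irf:
. irf describe asymp
. return list
. use results1.irf, clear
. char list _dta[]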
As discussed in [TS] varstable, a VAR can be rewritten in moving-average form only if it is stable.
Any exogenous variables are assumed to be covariance stationary. Because the functions of interest
in this section depend on the exogenous variables only through their effect on the estimated A_i, we
can simplify the notation by dropping them from the analysis. All the formulas given below still
apply, although the A_i are estimated jointly with the coefficients on the exogenous variables.
Below we discuss conditions under which the IRFs and forecast-error variance decompositions have a
causal interpretation. Although estimation requires only that the exogenous variables be predetermined,
that is, that E(x_{jt} u_{it}) = 0 for all i, j, and t, assigning a causal interpretation to IRFs and FEVDs
requires that the exogenous variables be strictly exogenous, that is, that E(x_{js} u_{it}) = 0 for all i, j,
s, and t.
IRFs describe how the innovations to one variable affect another variable after a given number of
periods. For an example of how IRFs are interpreted, see Stock and Watson (2001). They use IRFs to
investigate the effect of surprise shocks to the Federal Funds rate on inflation and unemployment. In
another example, Christiano, Eichenbaum, and Evans (1999) use IRFs to investigate how shocks to
monetary policy affect other macroeconomic variables.
Consider a VAR without exogenous variables:

    y_t = v + A_1 y_{t-1} + \cdots + A_p y_{t-p} + u_t    (1)

The VAR represents the variables in y_t as functions of their own lags and serially uncorrelated innovations
u_t. All the information about contemporaneous correlations among the K variables in y_t is contained
in Σ. In fact, as discussed in [TS] var svar, a VAR can be viewed as the reduced form of a dynamic
simultaneous-equation model.
To see how the innovations affect the variables in y_t after, say, i periods, rewrite the model in its
moving-average form

    y_t = \mu + \sum_{i=0}^{\infty} \Phi_i u_{t-i}    (2)

where \mu is the K × 1 time-invariant mean of y_t, and

    \Phi_i = \begin{cases} I_K & \text{if } i = 0 \\ \sum_{j=1}^{i} \Phi_{i-j} A_j & \text{if } i = 1, 2, \ldots \end{cases}
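As a quick check on the recursion (a worked case, not from the original text), when p = 1 the moving-average coefficients are simply powers of A_1:

    \Phi_0 = I_K, \qquad \Phi_1 = \Phi_0 A_1 = A_1, \qquad \Phi_2 = \Phi_1 A_1 = A_1^2, \qquad \Phi_i = A_1^i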
We can rewrite a VAR in the moving-average form only if it is stable. Essentially, a VAR is stable
if the variables are covariance stationary and none of the autocorrelations are too high (the issue of
stability is discussed in greater detail in [TS] varstable).
The Φ_i are the simple IRFs. The (j, k) element of Φ_i gives the effect of a one-time unit increase in
the kth element of u_t on the jth element of y_t after i periods, holding everything else constant.
Unfortunately, these effects have no causal interpretation, which would require us to be able to answer
the question, “How does an innovation to variable k, holding everything else constant, affect variable j
after i periods?” Because the u_t are contemporaneously correlated, we cannot assume that everything
else is held constant. Contemporaneous correlation among the ut implies that a shock to one variable
is likely to be accompanied by shocks to some of the other variables, so it does not make sense to
shock one variable and hold everything else constant. For this reason, (2) cannot provide a causal
interpretation.
This shortcoming may be overcome by rewriting (2) in terms of mutually uncorrelated innovations.
Suppose that we had a matrix P, such that Σ = PP'. If we had such a P, then P^{-1} Σ P'^{-1} = I_K,
and

    E\{P^{-1} u_t (P^{-1} u_t)'\} = P^{-1} E(u_t u_t') P'^{-1} = P^{-1} \Sigma P'^{-1} = I_K
We can thus use P^{-1} to orthogonalize the u_t and rewrite (2) as

    y_t = \mu + \sum_{i=0}^{\infty} \Phi_i P P^{-1} u_{t-i}
        = \mu + \sum_{i=0}^{\infty} \Theta_i P^{-1} u_{t-i}
        = \mu + \sum_{i=0}^{\infty} \Theta_i w_{t-i}

where \Theta_i = \Phi_i P and w_t = P^{-1} u_t.
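One such P, and the one Stata uses for the orthogonalized IRFs of a VAR (see the FEVD discussion below), is the Cholesky factor of Σ. As an added two-variable illustration, writing the elements of Σ as σ_{11}, σ_{12}, and σ_{22}, the lower-triangular Cholesky factor is

    P = \begin{pmatrix} \sqrt{\sigma_{11}} & 0 \\ \sigma_{12}/\sqrt{\sigma_{11}} & \sqrt{\sigma_{22} - \sigma_{12}^2/\sigma_{11}} \end{pmatrix}

so the first orthogonalized shock moves both variables on impact, whereas the second moves only the second variable.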
Rewriting (1) as

    y_t - v - A_1 y_{t-1} - \cdots - A_p y_{t-p} = u_t

isolates the reduced-form disturbances. Similarly, a short-run SVAR model can be written as

    A(y_t - v - A_1 y_{t-1} - \cdots - A_p y_{t-p}) = A u_t = B e_t
As discussed in [TS] var svar, the estimates Â and B̂ are obtained by maximizing the concentrated
log-likelihood function on the basis of the Σ̂ obtained from the underlying VAR. The short-run
SVAR approach chooses P = Â^{-1}B̂ to identify the causal IRFs. The long-run SVAR approach works
similarly, with P = Ĉ = Ā^{-1}B̂, where Ā^{-1} is the matrix of estimated long-run or accumulated
effects of the reduced-form VAR shocks.
There is one important difference between long-run and short-run SVAR models. As discussed by
Amisano and Giannini (1997, chap. 6), in the short-run model the constraints are applied directly to
the parameters in A and B. Then A and B interact with the estimated parameters of the underlying
VAR. In contrast, in a long-run model, the constraints are placed on functions of the estimated VAR
parameters. Although estimation and inference of the parameters in C is straightforward, obtaining
the asymptotic standard errors of the structural IRFs requires untenable assumptions. For this reason,
irf create does not estimate the asymptotic standard errors of the structural IRFs generated by
long-run SVAR models. However, bootstrap standard errors are still available.
A VAR with exogenous variables x_t can also be written in moving-average form,

    y_t = \sum_{i=0}^{\infty} D_i x_{t-i} + \sum_{i=0}^{\infty} \Phi_i u_{t-i}
where the Di are the dynamic-multiplier functions. (See Methods and formulas for details.) Some
authors refer to the dynamic-multiplier functions as transfer functions because they specify how a
unit change in an exogenous variable is “transferred” to the endogenous variables.
Technical note
irf create computes dynamic-multiplier functions only after var. After short-run SVAR models,
the dynamic multipliers from the VAR are the same as those from the SVAR. The dynamic multipliers
for long-run SVARs have not yet been worked out.
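For instance, a sketch of obtaining dynamic multipliers after a VAR with one exogenous variable (the variable names y1, y2, and x are hypothetical, and the data are assumed to be tsset):
. var y1 y2, lags(1/2) exog(x)
. irf create dmirf, step(12) set(dmfile)
. irf table dm cdm, impulse(x) response(y1)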
Lütkepohl (2005, sec. 2.2.2) shows that the h-step forecast error can be written as

    y_{t+h} - \hat{y}_t(h) = \sum_{i=0}^{h-1} \Phi_i u_{t+h-i}    (4)

where y_{t+h} is the value observed at time t + h and \hat{y}_t(h) is the h-step-ahead predicted value for
y_{t+h} that was made at time t.
Because the u_t are contemporaneously correlated, their distinct contributions to the forecast error
cannot be ascertained. However, if we choose a P such that Σ = PP', as above, we can orthogonalize
the u_t into w_t = P^{-1}u_t. We can then ascertain the relative contribution of the distinct elements of
w_t. Thus we can rewrite (4) as

    y_{t+h} - \hat{y}_t(h) = \sum_{i=0}^{h-1} \Phi_i P P^{-1} u_{t+h-i}
                           = \sum_{i=0}^{h-1} \Theta_i w_{t+h-i}
Because the forecast errors can be written in terms of the orthogonalized errors, the forecast-
error variance can be written in terms of the orthogonalized error variances. Forecast-error variance
decompositions measure the fraction of the total forecast-error variance that is attributable to each
orthogonalized shock.
Technical note
The details in this note are not critical to the discussion that follows. A forecast-error variance
decomposition is derived for a given P. Per Lütkepohl (2005, sec. 2.3.3), letting \theta_{mn,i} be the (m, n)th
element of \Theta_i, we can express the h-step forecast error of the jth component of y_t as

    y_{j,t+h} - \hat{y}_j(h) = \sum_{i=0}^{h-1} \left( \theta_{j1,i} w_{1,t+h-i} + \cdots + \theta_{jK,i} w_{K,t+h-i} \right)
                             = \sum_{k=1}^{K} \left( \theta_{jk,0} w_{k,t+h} + \cdots + \theta_{jk,h-1} w_{k,t+1} \right)
The w_t, which were constructed using P, are mutually orthogonal with unit variance. This allows
us to compute easily the mean squared error (MSE) of the forecast of variable j at horizon h in terms
of the contributions of the components of w_t. Specifically,

    E[\{y_{j,t+h} - \hat{y}_{j,t}(h)\}^2] = \sum_{k=1}^{K} \left( \theta_{jk,0}^2 + \cdots + \theta_{jk,h-1}^2 \right)
The kth term in the sum above is interpreted as the contribution of the orthogonalized innovations
in variable k to the h-step forecast error of variable j. Note that the kth element in the sum above
can be rewritten as

    \theta_{jk,0}^2 + \cdots + \theta_{jk,h-1}^2 = \sum_{i=0}^{h-1} \left( e_j' \Theta_i e_k \right)^2
where e_i is the ith column of I_K. Normalizing by the MSE of the forecast of variable j at horizon h yields

    \omega_{jk,h} = \frac{ \sum_{i=0}^{h-1} \left( e_j' \Theta_i e_k \right)^2 }{ \mathrm{MSE}\{ y_{j,t}(h) \} }

where MSE\{y_{j,t}(h)\} = \sum_{i=0}^{h-1} \sum_{k=1}^{K} \theta_{jk,i}^2.
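As a small added check, at the one-step horizon h = 1 we have \Theta_0 = P, so

    \omega_{jk,1} = \frac{(e_j' P e_k)^2}{\sum_{k=1}^{K} (e_j' P e_k)^2}, \qquad \sum_{k=1}^{K} \omega_{jk,1} = 1

that is, the one-step decomposition apportions the jth diagonal element of Σ = PP' across the K orthogonalized shocks.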
Because the FEVD depends on the choice of P, there are different forecast-error variance
decompositions associated with each distinct P. irf create can estimate the FEVD for a VAR or an
SVAR. For a VAR, P is the Cholesky decomposition of Σ̂. For an SVAR, P is the estimated structural
decomposition, P = Â^{-1}B̂ for short-run models and P = Ĉ for long-run SVAR models. Due to the
same complications that arose with the structural impulse–response functions, the asymptotic standard
errors of the structural FEVD are not available after long-run SVAR models, but bootstrap standard
errors are still available.
To see the intuition, consider a model with one lag and no constant,

    y_t = A y_{t-1} + u_t    (5)

and suppose that y_0 = 0 and that there is a single shock u_1 in period 1, with no further shocks. Then

    y_1 = u_1
    y_2 = A y_1 = A u_1
    y_3 = A y_2 = A^2 u_1
and so on. The ith-row element of the first column of A^s contains the effect of the unit shock to the
first variable after s periods. The first column of A^s contains the IRF of a unit impulse to the first
variable after s periods. We could deduce the IRFs of a unit impulse to any of the other variables by
administering the unit shock to one of them instead of to the first variable. Thus we can see that the
(i, j)th element of A^s contains the unit IRF from variable j to variable i after s periods. By starting
with orthogonalized shocks of the form P^{-1}u_t, we can use the same logic to derive the OIRFs to be
A^s P.

For the stationary VAR, stability implies that all the eigenvalues of A have moduli strictly less than
one, which in turn implies that all the elements of A^s → 0 as s → ∞. This implies that all the
IRFs from a stationary VAR taper off to zero as s → ∞. In contrast, in a cointegrating VAR, some of
the eigenvalues of A are 1, while the remaining eigenvalues have moduli strictly less than 1. This
implies that in cointegrating VARs some of the elements of A^s are not going to zero as s → ∞,
which in turn implies that some of the IRFs and OIRFs are not going to zero as s → ∞. The fact that
the IRFs and OIRFs taper off to zero for stationary VARs but not for cointegrating VARs is one of the
key differences between the two models.
When the IRF or OIRF from the innovation in one variable to another tapers off to zero as time
goes on, the innovation to the first variable is said to have a transitory effect on the second variable.
When the IRF or OIRF does not go to zero, the effect is said to be permanent.
Note that, because some of the IRFs and OIRFs do not taper off to zero, some of the cumulative
IRFs and OIRFs diverge over time.
The results from An introduction to impulse–response functions for VECMs can be used to show
that the interpretation of FEVDs for a finite number of steps in cointegrating VARs is essentially the
same as in the stationary case. Because the MSE of the forecast is diverging, this interpretation is valid
only for a finite number of steps. (See [TS] vec intro and [TS] fcast compute for more information
on this point.)
A covariance-stationary ARMA(p, q) model can be written as

    \rho(L^p)(y_t - x_t \beta) = \theta(L^q) \epsilon_t

where

    \rho(L^p) = 1 - \rho_1 L - \rho_2 L^2 - \cdots - \rho_p L^p
    \theta(L^q) = 1 + \theta_1 L + \theta_2 L^2 + \cdots + \theta_q L^q

and L^j y_t = y_{t-j}.
We can rewrite the above model as an infinite-order moving-average process

    y_t = x_t \beta + \psi(L) \epsilon_t

where

    \psi(L) = \frac{\theta(L)}{\rho(L)} = 1 + \psi_1 L + \psi_2 L^2 + \cdots    (6)
This representation shows the impact of the past innovations on the current y_t. The ith coefficient
describes the response of y_t to a one-time impulse in \epsilon_{t-i}, holding everything else constant. The \psi_i
coefficients are collectively referred to as the impulse–response function of the ARMA model. For a
covariance-stationary series, the \psi_i coefficients decay exponentially.
A covariance-stationary multiplicative seasonal ARMA model, often abbreviated SARMA, of order
(p, q) × (P, Q)_s can be written as

    \rho(L^p)\rho_s(L^P)(y_t - x_t \beta) = \theta(L^q)\theta_s(L^Q) \epsilon_t

where

    \rho_s(L^P) = 1 - \rho_{s,1} L^s - \rho_{s,2} L^{2s} - \cdots - \rho_{s,P} L^{Ps}
    \theta_s(L^Q) = 1 + \theta_{s,1} L^s + \theta_{s,2} L^{2s} + \cdots + \theta_{s,Q} L^{Qs}

with \rho(L^p) and \theta(L^q) defined as above.
We can express this model as an additive ARMA model by multiplying the terms and imposing
nonlinear constraints on the multiplied coefficients. For example, the SARMA model
(1 - \rho_1 L)(1 - \rho_{4,1} L^4)(y_t - x_t \beta) = \epsilon_t expands to an additive AR(5) model whose
coefficients on L, L^4, and L^5 are \rho_1, \rho_{4,1}, and -\rho_1\rho_{4,1}, respectively.
This makes it clear that the impulse–response function for an ARFIMA model corresponds to a
fractionally differenced impulse–response function for an ARIMA model. Because of the fractional
differentiation, the ψi coefficients decay very slowly; see Remarks and examples in [TS] arfima.
and

    \hat{\Theta}^o_i = \hat{\Phi}_i \hat{P}_c

where \hat{A}_j = 0_K for j > p.
    \hat{\Theta}^{sr}_i = \hat{\Phi}_i \hat{P}_{sr}

or

    \hat{\Theta}^{lr}_i = \hat{\Phi}_i \hat{P}_{lr}
The estimated structural IRFs stored in an IRF file with the variable name sirf may be from
either a short-run model or a long-run model, depending on the estimation results used to create the
IRFs. As discussed in [TS] irf describe, you can easily determine whether the structural IRFs were
generated from a short-run or a long-run SVAR model using irf describe.
Following Lütkepohl (2005, sec. 3.7), estimates of the cumulative IRFs and the cumulative
orthogonalized impulse–response functions (COIRFs) at period n are, respectively,

    \hat{\Psi}_n = \sum_{i=0}^{n} \hat{\Phi}_i

and

    \hat{\Xi}_n = \sum_{i=0}^{n} \hat{\Theta}_i
The asymptotic standard errors of the different impulse–response functions are obtained by
applications of the delta method. See Lütkepohl (2005, sec. 3.7) and Amisano and Giannini (1997,
chap. 4) for the derivations. See Serfling (1980, sec. 3.3) for a discussion of the delta method. In
presenting the variance–covariance matrix estimators, we make extensive use of the vec() operator,
where vec(X) is the vector obtained by stacking the columns of X.
Lütkepohl (2005, sec. 3.7) derives the asymptotic VCEs of vec(\hat{\Phi}_i), vec(\hat{\Theta}^o_i), vec(\hat{\Psi}_n), and
vec(\hat{\Xi}_n). Because vec(\hat{\Phi}_i) is K^2 × 1, the asymptotic VCE of vec(\hat{\Phi}_i) is K^2 × K^2, and it is given by

    G_i \hat{\Sigma}_{\hat{\alpha}} G_i'
where

    G_i = \sum_{m=0}^{i-1} J (\hat{M}')^{(i-1-m)} \otimes \hat{\Phi}_m          (G_i is K^2 × K^2 p)

    J = (I_K, 0_K, \ldots, 0_K)                                                  (J is K × Kp)

    \hat{M} = \begin{pmatrix}
        \hat{A}_1 & \hat{A}_2 & \cdots & \hat{A}_{p-1} & \hat{A}_p \\
        I_K       & 0_K       & \cdots & 0_K           & 0_K \\
        0_K       & I_K       & \cdots & 0_K           & 0_K \\
        \vdots    & \vdots    & \ddots & \vdots        & \vdots \\
        0_K       & 0_K       & \cdots & I_K           & 0_K
    \end{pmatrix}                                                                (\hat{M} is Kp × Kp)
The \hat{A}_i are the estimates of the coefficients on the lagged variables in the VAR, and \hat{\Sigma}_{\hat{\alpha}} is the VCE
matrix of \hat{\alpha} = vec(\hat{A}_1, \ldots, \hat{A}_p). \hat{\Sigma}_{\hat{\alpha}} is a K^2 p × K^2 p matrix whose elements come from the VCE
of the VAR coefficient estimator. As such, this VCE is the VCE of the constrained estimator if there
are any constraints placed on the VAR coefficients.
The K^2 × K^2 asymptotic VCE matrix for vec(\hat{\Psi}_n) after n periods is given by

    F_n \hat{\Sigma}_{\hat{\alpha}} F_n'

where

    F_n = \sum_{i=1}^{n} G_i
The K^2 × K^2 asymptotic VCE matrix of the vectorized, orthogonalized IRFs at horizon i, vec(\hat{\Theta}^o_i), is

    C_i \hat{\Sigma}_{\hat{\alpha}} C_i' + \bar{C}_i \hat{\Sigma}_{\hat{\sigma}} \bar{C}_i'
where

    C_0 = 0                                                           (C_0 is K^2 × K^2 p)
    C_i = (\hat{P}_c' \otimes I_K) G_i, \quad i = 1, 2, \ldots        (C_i is K^2 × K^2 p)
    \bar{C}_i = (I_K \otimes \Phi_i) H, \quad i = 0, 1, \ldots        (\bar{C}_i is K^2 × K(K+1)/2)
    H = L_K' \{ L_K N_K (\hat{P}_c \otimes I_K) L_K' \}^{-1}          (H is K^2 × K(K+1)/2)
    L_K \text{ solves } vech(F) = L_K vec(F) \text{ for } F \ (K × K) (L_K is K(K+1)/2 × K^2)
    K_K \text{ solves } K_K vec(G) = vec(G') \text{ for } G \ (K × K) (K_K is K^2 × K^2)
    N_K = \tfrac{1}{2}(I_{K^2} + K_K)                                 (N_K is K^2 × K^2)
    D_K \text{ solves } D_K vech(F) = vec(F) \text{ for symmetric } F (D_K is K^2 × K(K+1)/2)
    D_K^{+} = (D_K' D_K)^{-1} D_K'                                    (D_K^{+} is K(K+1)/2 × K^2)
    vech(X) = \begin{pmatrix} x_{11} \\ x_{21} \\ \vdots \\ x_{K1} \\ x_{22} \\ \vdots \\ x_{K2} \\ \vdots \\ x_{KK} \end{pmatrix} \text{ for } X \ (K × K)      (vech(X) is K(K+1)/2 × 1)
Note that \hat{\Sigma}_{\hat{\sigma}} is the VCE of vech(\hat{\Sigma}). More details about L_K, K_K, D_K, and vech() are available in
Lütkepohl (2005, sec. A.12). Finally, as Lütkepohl (2005, 113–114) discusses, D_K^{+} is the Moore–Penrose
inverse of D_K.
As discussed in Amisano and Giannini (1997, chap. 6), the asymptotic standard errors of the
structural IRFs are available for short-run SVAR models but not for long-run SVAR models. Following
Amisano and Giannini (1997, chap. 5), the asymptotic K 2 × K 2 VCE of the short-run structural IRFs
after i periods, when a maximum of h periods are estimated, is the i, i block of
    \hat{\Sigma}(h)_{ij} = \tilde{G}_i \hat{\Sigma}_{\hat{\alpha}} \tilde{G}_j' + \{ I_K \otimes (J \hat{M}^i J') \} \hat{\Sigma}(0) \{ I_K \otimes (J \hat{M}^j J') \}'
where

    \tilde{G}_0 = 0_K                                                                   (\tilde{G}_0 is K^2 × K^2 p)
    \tilde{G}_i = \sum_{k=0}^{i-1} \{ \hat{P}_{sr}' J (\hat{M}')^{i-1-k} \otimes J \hat{M}^k J' \}   (\tilde{G}_i is K^2 × K^2 p)
    \hat{\Sigma}(0) = Q_2 \hat{\Sigma}_W Q_2'                                           (\hat{\Sigma}(0) is K^2 × K^2)
    \hat{\Sigma}_W = Q_1 \hat{\Sigma}_{AB} Q_1'                                         (\hat{\Sigma}_W is K^2 × K^2)
    Q_2 = \hat{P}_{sr}' \otimes \hat{P}_{sr}                                            (Q_2 is K^2 × K^2)
    Q_1 = \{ (I_K \otimes \hat{B}^{-1}), \; (-\hat{P}_{sr}'^{-1} \otimes \hat{B}^{-1}) \}   (Q_1 is K^2 × 2K^2)

and \hat{\Sigma}_{AB} is the VCE of (vec(\hat{A})', vec(\hat{B})')'.
The dynamic-multiplier functions are computed as

    \hat{D}_i = J_x \tilde{A}_x^i \hat{B}_x, \qquad i \in \{0, 1, \ldots\}

where

    J_x = (I_K, 0_K, \ldots, 0_K)                                                 (J_x is K × (Kp + Rs))

    \tilde{A}_x = \begin{pmatrix} \hat{M} & \hat{B} \\ 0 & \tilde{I} \end{pmatrix}   (\tilde{A}_x is (Kp + Rs) × (Kp + Rs))

    \hat{B} = \begin{pmatrix} \hat{B}_1 & \hat{B}_2 & \cdots & \hat{B}_s \\ \ddot{0} & \ddot{0} & \cdots & \ddot{0} \\ \vdots & \vdots & & \vdots \\ \ddot{0} & \ddot{0} & \cdots & \ddot{0} \end{pmatrix}   (\hat{B} is Kp × Rs)

    \tilde{I} = \begin{pmatrix} 0_R & 0_R & \cdots & 0_R & 0_R \\ I_R & 0_R & \cdots & 0_R & 0_R \\ 0_R & I_R & \cdots & 0_R & 0_R \\ \vdots & & \ddots & & \vdots \\ 0_R & 0_R & \cdots & I_R & 0_R \end{pmatrix}   (\tilde{I} is Rs × Rs)

    \hat{B}_x' = ( \tilde{B}' \;\; \ddot{I}' )                                      (\hat{B}_x' is R × (Kp + Rs))

    \tilde{B}' = ( \hat{B}_0' \;\; \ddot{0}' \;\; \cdots \;\; \ddot{0}' )           (\tilde{B}' is R × Kp)

    \ddot{I}' = ( I_R \;\; 0_R \;\; \cdots \;\; 0_R )                               (\ddot{I}' is R × Rs)
The asymptotic VCE of the Cholesky FEVD estimator \hat{\omega}_{jk,h} is estimated by

    d_{jk,h} \hat{\Sigma}_{\hat{\alpha}} d_{jk,h}' + \bar{d}_{jk,h} \hat{\Sigma}_{\hat{\sigma}} \bar{d}_{jk,h}'

where

    d_{jk,h} = \frac{2}{\mathrm{MSE}_j(h)^2} \sum_{i=0}^{h-1} \Big\{ \mathrm{MSE}_j(h)\, (e_j' \hat{\Phi}_i \hat{P}_c e_k)(e_k' \hat{P}_c' \otimes e_j') G_i
               - (e_j' \hat{\Phi}_i \hat{P}_c e_k)^2 \sum_{m=0}^{h-1} (e_j' \hat{\Phi}_m \hat{\Sigma} \otimes e_j') G_m \Big\}          (d_{jk,h} is 1 × K^2 p)

    \bar{d}_{jk,h} = \frac{1}{\mathrm{MSE}_j(h)^2} \sum_{i=0}^{h-1} \Big\{ \mathrm{MSE}_j(h)\, (e_j' \hat{\Phi}_i \hat{P}_c e_k)(e_k' \otimes e_j' \hat{\Phi}_i) H
               - (e_j' \hat{\Phi}_i \hat{P}_c e_k)^2 \sum_{m=0}^{h-1} (e_j' \hat{\Phi}_m \otimes e_j' \hat{\Phi}_m) D_K \Big\}          (\bar{d}_{jk,h} is 1 × K(K+1)/2)

    G_0 = 0                                                                          (G_0 is K^2 × K^2 p)
For the structural forecast-error decompositions, we follow Amisano and Giannini (1997, sec. 5.2).
They define the matrix of structural forecast-error decompositions at horizon s, when a maximum of
h periods are estimated, as
    \hat{W}_s = \hat{F}_s^{-1} \tilde{M}_s  \qquad \text{for } s = 1, \ldots, h + 1

    \hat{F}_s = \left( \sum_{i=0}^{s-1} \hat{\Theta}^{sr}_i \hat{\Theta}^{sr\prime}_i \right) \odot I_K

    \tilde{M}_s = \sum_{i=0}^{s-1} \hat{\Theta}^{sr}_i \odot \hat{\Theta}^{sr}_i

where \odot denotes the Hadamard (elementwise) product.
The estimated VCE of vec(\hat{W}_s) is

    \tilde{Z}_s \hat{\Sigma}(h) \tilde{Z}_s'

where \tilde{Z}_s is built from the derivatives

    \frac{\partial vec(\hat{W}_s)}{\partial vec(\hat{\Theta}^{sr}_j)} = 2 \left\{ (I_K \otimes \hat{F}_s^{-1}) \tilde{D}(\hat{\Theta}^{sr}_j) - (\hat{W}_s' \otimes \hat{F}_s^{-1}) \tilde{D}(I_K) N_K (\hat{\Theta}^{sr}_j \otimes I_K) \right\}
Defining

    \Gamma = I_K - \sum_{i=1}^{p-1} \Gamma_i

and

    A_1 = \Pi + \Gamma_1 + I_K
Using these formulas, we can back out estimates of Ai from the estimates of the Γi and Π produced
by vec. Then we simply use the formulas for the IRFs and OIRFs presented in Impulse–response
function formulas for VARs.
The running sums of the IRFs and OIRFs over the steps within each impulse–response pair are the
cumulative IRFs and OIRFs.
Algorithms for bootstrapping the VAR IRF and FEVD standard errors
irf create offers two bootstrap algorithms for estimating the standard errors of the various IRFs
and FEVDs. Both var and svar contain estimators for the coefficients in a VAR that are conditional on
the first p observations. The two bootstrap algorithms are also conditional on the first p observations.
Specifying the bs option calculates the standard errors by bootstrapping the residuals. For a
bootstrap with R repetitions, this method uses the following algorithm:
1. Fit the model and save the estimated parameters.
2. Use the estimated coefficients to calculate the residuals.
3. Repeat steps 3a to 3d R times.
3a. Draw a simple random sample of size T with replacement from the residuals. The
random samples are drawn over the K × 1 vectors of residuals. When the tth vector is
drawn, all K residuals are selected. This preserves the contemporaneous correlations
among the residuals.
3b. Use the p initial observations, the sampled residuals, and the estimated coefficients to
construct a new sample dataset.
3c. Fit the model and calculate the different IRFs and FEVDs.
3d. Save these estimates as observation r in the bootstrapped dataset.
4. For each IRF and FEVD, the estimated standard deviation from the R bootstrapped estimates
is the estimated standard error of that impulse–response function or forecast-error variance
decomposition.
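To make step 3a concrete, here is a rough illustrative sketch (not the internal implementation) of jointly resampling the residual vectors from a fitted VAR so that the contemporaneous correlations are preserved; the residual variable names r1, r2, and r3 are hypothetical:
. use https://www.stata-press.com/data/r18/lutkepohl2, clear
. quietly var dln_inv dln_inc dln_consump, lags(1/2)
. predict double r1 if e(sample), residuals equation(dln_inv)
. predict double r2 if e(sample), residuals equation(dln_inc)
. predict double r3 if e(sample), residuals equation(dln_consump)
. preserve
. keep if e(sample)
. keep r1 r2 r3
. bsample
. restore
Because bsample draws whole observations, the K residuals for a given period stay together.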
Specifying the bsp option estimates the standard errors by a multivariate normal parametric
bootstrap. The algorithm for the multivariate normal parametric bootstrap is identical to the one
above, with the exception that 3a is replaced by 3a(bsp):
3a(bsp). Draw T pseudovariates from a multivariate normal distribution with covariance matrix
Σ.
b
Recall from (6) that

    \psi(L) = \frac{\theta(L)}{\rho(L)}

Expanding the above, we obtain

    \psi_0 + \psi_1 L + \psi_2 L^2 + \cdots = \frac{1 + \theta_1 L + \theta_2 L^2 + \cdots}{1 - \rho_1 L - \rho_2 L^2 - \cdots}
Given the estimates of the autoregressive terms \hat{\rho} and the moving-average terms \hat{\theta}, the IRF is
obtained by solving the above equation for the \psi weights. The \hat{\psi}_i are calculated using the recursion

    \hat{\psi}_i = \hat{\theta}_i + \sum_{j=1}^{p} \hat{\rho}_j \hat{\psi}_{i-j}
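As a small illustration (not in the original), for an ARMA(1,1) model the recursion gives

    \hat{\psi}_0 = 1, \qquad \hat{\psi}_1 = \hat{\theta}_1 + \hat{\rho}_1, \qquad \hat{\psi}_i = \hat{\rho}_1 \hat{\psi}_{i-1} \quad (i \ge 2)

so the IRF decays geometrically at rate \hat{\rho}_1 after the first step.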
The asymptotic standard errors for the ARMA IRF are calculated using the delta method. Let \hat{\Sigma} be
the estimate of the variance–covariance matrix for \hat{\rho} and \hat{\theta}, and let \Psi_i be the matrix of derivatives
of \hat{\psi}_i with respect to \hat{\rho} and \hat{\theta}. Then the standard errors for \hat{\psi}_i are calculated as

    \Psi_i \hat{\Sigma} \Psi_i'
The IRF for the ARFIMA(p, d, q) model is obtained by applying the filter (1 - L)^{-d} to \psi(L). The
filter is given by Hassler and Kokoszka (2010) as

    (1 - L)^{-d} = \sum_{i=0}^{\infty} b_i L^i

where b_0 = 1 and the remaining \hat{b}_i are calculated using the recursion

    \hat{b}_i = \frac{\hat{d} + i - 1}{i} \, \hat{b}_{i-1}
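For example (an added check of the recursion), the first few filter weights are

    b_0 = 1, \qquad b_1 = d, \qquad b_2 = \frac{d(d+1)}{2}, \qquad b_3 = \frac{d(d+1)(d+2)}{6}

which is the binomial-series expansion of (1 - L)^{-d}.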
The asymptotic standard errors for the IRF for ARFIMA are calculated using the delta method. Let
\hat{\Sigma} be the estimate of the variance–covariance matrix for \hat{\rho}, \hat{\theta}, and \hat{d}, and let \Phi_i be the matrix of
derivatives of \phi_i with respect to \hat{\rho}, \hat{\theta}, and \hat{d}. Then the standard errors for \hat{\phi}_i are calculated as

    \Phi_i \hat{\Sigma} \Phi_i'
References
Amisano, G., and C. Giannini. 1997. Topics in Structural VAR Econometrics. 2nd ed., revised and enlarged. Heidelberg:
Springer.
Christiano, L. J., M. Eichenbaum, and C. L. Evans. 1999. Monetary policy shocks: What have we learned and to
what end? In Handbook of Macroeconomics: Volume 1A, ed. J. B. Taylor and M. Woodford. New York: Elsevier.
https://doi.org/10.1016/S1574-0048(99)01005-8.
Hamilton, J. D. 1994. Time Series Analysis. Princeton, NJ: Princeton University Press.
Hassler, U., and P. Kokoszka. 2010. Impulse responses of fractionally integrated processes with long memory.
Econometric Theory 26: 1855–1861. https://doi.org/10.1017/S0266466610000216.
Johansen, S. 1995. Likelihood-Based Inference in Cointegrated Vector Autoregressive Models. Oxford: Oxford University
Press.
Lütkepohl, H. 1993. Introduction to Multiple Time Series Analysis. 2nd ed. New York: Springer.
———. 2005. New Introduction to Multiple Time Series Analysis. New York: Springer.
Serfling, R. J. 1980. Approximation Theorems of Mathematical Statistics. New York: Wiley.
Sims, C. A. 1980. Macroeconomics and reality. Econometrica 48: 1–48. https://doi.org/10.2307/1912017.
Stock, J. H., and M. W. Watson. 2001. Vector autoregressions. Journal of Economic Perspectives 15: 101–115.
https://doi.org/10.1257/jep.15.4.101.
Also see
[TS] irf — Create and analyze IRFs, dynamic-multiplier functions, and FEVDs
[TS] var intro — Introduction to vector autoregressive models
[TS] vec intro — Introduction to vector error-correction models
Stata, Stata Press, and Mata are registered trademarks of StataCorp LLC. Stata and
Stata Press are registered trademarks with the World Intellectual Property Organization
of the United Nations. Other brand and product names are registered trademarks or
trademarks of their respective companies. Copyright © 1985–2023 StataCorp LLC,
College Station, TX, USA. All rights reserved.