Time Series
REFERENCE MANUAL
RELEASE 13
Copyright © 1985–2013 StataCorp LP
All rights reserved
Version 13
Published by Stata Press, 4905 Lakeway Drive, College Station, Texas 77845
Typeset in TEX
ISBN-10: 1-59718-127-7
ISBN-13: 978-1-59718-127-3
This manual is protected by copyright. All rights are reserved. No part of this manual may be reproduced, stored in a retrieval system, or transcribed, in any form or by any means (electronic, mechanical, photocopy, recording, or otherwise) without the prior written permission of StataCorp LP unless permitted subject to the terms and conditions of a license granted to you by StataCorp LP to use the software and documentation. No license, express or implied, by estoppel or otherwise, to any intellectual property rights is granted by this document.
StataCorp provides this manual "as is" without warranty of any kind, either expressed or implied, including, but
not limited to, the implied warranties of merchantability and fitness for a particular purpose. StataCorp may make
improvements and/or changes in the product(s) and the program(s) described in this manual at any time and without
notice.
The software described in this manual is furnished under a license agreement or nondisclosure agreement. The software
may be copied only in accordance with the terms of the agreement. It is against the law to copy the software onto
DVD, CD, disk, diskette, tape, or any other medium for any purpose other than backup or archival purposes.
The automobile dataset appearing on the accompanying media is Copyright © 1979 by Consumers Union of U.S., Inc., Yonkers, NY 10703-1057 and is reproduced by permission from CONSUMER REPORTS, April 1979.
Stata, Stata Press, and Mata are registered trademarks of StataCorp LP. Stata and Stata Press are registered trademarks with the World Intellectual Property Organization of the United Nations.
NetCourseNow is a trademark of StataCorp LP.
Other brand and product names are registered trademarks or trademarks of their respective companies.
For copyright information about the software, type help copyright within Stata.
Contents
intro . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Introduction to time-series manual
time series . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Introduction to time-series commands
The first example is a reference to chapter 26, Overview of Stata estimation commands, in the User's Guide; the second is a reference to the regress entry in the Base Reference Manual; and the third is a reference to the reshape entry in the Data Management Reference Manual.
All the manuals in the Stata Documentation have a shorthand notation:
[GSM]    Getting Started with Stata for Mac
[GSU]    Getting Started with Stata for Unix
[GSW]    Getting Started with Stata for Windows
[U]      Stata User's Guide
[R]      Stata Base Reference Manual
[D]      Stata Data Management Reference Manual
[G]      Stata Graphics Reference Manual
[XT]     Stata Longitudinal-Data/Panel-Data Reference Manual
[ME]     Stata Multilevel Mixed-Effects Reference Manual
[MI]     Stata Multiple-Imputation Reference Manual
[MV]     Stata Multivariate Statistics Reference Manual
[PSS]    Stata Power and Sample-Size Reference Manual
[P]      Stata Programming Reference Manual
[SEM]    Stata Structural Equation Modeling Reference Manual
[SVY]    Stata Survey Data Reference Manual
[ST]     Stata Survival Analysis and Epidemiological Tables Reference Manual
[TS]     Stata Time-Series Reference Manual
[TE]     Stata Treatment-Effects Reference Manual
[I]      Stata Quick Reference and Index
[M]      Mata Reference Manual
Title
intro - Introduction to time-series manual

Description     Also see
Description
This entry describes this manual and what has changed since Stata 12.
Stata is continually being updated, and Stata users are always writing new commands. To ensure
that you have the latest features, you should install the most recent official update; see [R] update.
What's new
For a complete list of all the new features in Stata 13, see [U] 1.3 What's new.
Also see
[U] 1.3 What's new
Title
time series - Introduction to time-series commands

Description     References     Also see
Description
The Time-Series Reference Manual organizes the commands alphabetically, making it easy to find
individual command entries if you know the name of the command. This overview organizes and
presents the commands conceptually, that is, according to the similarities in the functions that they
perform. The table below lists the manual entries that you should see for additional information.
Data management tools and time-series operators.
These commands help you prepare your data for further analysis.
Univariate time series.
These commands are grouped together because they are either estimators or filters designed for
univariate time series or preestimation or postestimation commands that are conceptually related
to one or more univariate time-series estimators.
Multivariate time series.
These commands are similarly grouped together because they are either estimators designed for
use with multivariate time series or preestimation or postestimation commands conceptually related
to one or more multivariate time-series estimators.
Forecasting models.
These commands work as a group to provide the tools you need to create models by combining
estimation results, identities, and other objects and to solve those models to obtain forecasts.
Within these broad categories, similar commands have been grouped together.
Univariate time-series estimators
[TS] arfima                    Autoregressive fractionally integrated moving-average models
[TS] arfima postestimation     Postestimation tools for arfima
[TS] arima                     ARIMA, ARMAX, and other dynamic regression models
[TS] arima postestimation      Postestimation tools for arima
[TS] arch                      Autoregressive conditional heteroskedasticity (ARCH) family of estimators
[TS] arch postestimation       Postestimation tools for arch
[TS] newey                     Regression with Newey–West standard errors
[TS] newey postestimation      Postestimation tools for newey
[TS] prais                     Prais–Winsten and Cochrane–Orcutt regression
[TS] prais postestimation      Postestimation tools for prais
[TS] ucm                       Unobserved-components model
[TS] ucm postestimation        Postestimation tools for ucm
Diagnostic tools
[TS] corrgram                  Tabulate and graph autocorrelations
[TS] xcorr                     Cross-correlogram for bivariate time series
[TS] cumsp                     Cumulative spectral distribution
[TS] pergram                   Periodogram
[TS] psdensity                 Parametric spectral density estimation
[TS] estat acplot              Plot parametric autocorrelation and autocovariance functions
[TS] estat aroots              Check the stability condition of ARIMA estimates
[TS] dfgls                     DF-GLS unit-root test
[TS] dfuller                   Augmented Dickey–Fuller unit-root test
[TS] pperron                   Phillips–Perron unit-root test
[R] regress postestimation time series    Postestimation tools for regress with time series
[TS] wntestb                   Bartlett's periodogram-based test for white noise
[TS] wntestq                   Portmanteau (Q) test for white noise
Multivariate time-series estimators
[TS] dfactor                   Dynamic-factor models
[TS] dfactor postestimation    Postestimation tools for dfactor
[TS] mgarch ccc                Constant conditional correlation multivariate GARCH models
[TS] mgarch ccc postestimation    Postestimation tools for mgarch ccc
[TS] mgarch dcc                Dynamic conditional correlation multivariate GARCH models
[TS] mgarch dcc postestimation    Postestimation tools for mgarch dcc
[TS] mgarch dvech              Diagonal vech multivariate GARCH models
[TS] mgarch dvech postestimation    Postestimation tools for mgarch dvech
[TS] mgarch vcc                Varying conditional correlation multivariate GARCH models
[TS] mgarch vcc postestimation    Postestimation tools for mgarch vcc
[TS] sspace                    State-space models
[TS] sspace postestimation     Postestimation tools for sspace
[TS] var                       Vector autoregressive models
[TS] var postestimation        Postestimation tools for var
[TS] svar                      Structural vector autoregressive models
[TS] svar postestimation       Postestimation tools for svar
[TS] varbasic                  Fit a simple VAR and graph IRFs or FEVDs
[TS] varbasic postestimation   Postestimation tools for varbasic
[TS] vec                       Vector error-correction models
[TS] vec postestimation        Postestimation tools for vec
Diagnostic tools
[TS] varlmar                   Perform LM test for residual autocorrelation after var or svar
[TS] varnorm                   Test for normally distributed disturbances after var or svar
[TS] varsoc                    Obtain lag-order selection statistics for VARs and VECMs
[TS] varstable                 Check the stability condition of VAR or SVAR estimates
[TS] varwle                    Obtain Wald lag-exclusion statistics after var or svar
[TS] veclmar                   Perform LM test for residual autocorrelation after vec
[TS] vecnorm                   Test for normally distributed disturbances after vec
[TS] vecrank                   Estimate the cointegrating rank of a VECM
[TS] vecstable                 Check the stability condition of VECM estimates
Forecasting models
[TS] forecast                  Econometric model forecasting
[TS] forecast adjust           Adjust a variable by add factoring, replacing, etc.
[TS] forecast clear            Clear current model from memory
[TS] forecast coefvector       Specify an equation via a coefficient vector
[TS] forecast create           Create a new forecast model
[TS] forecast describe         Describe features of the forecast model
[TS] forecast drop             Drop forecast variables
[TS] forecast estimates        Add estimation results to a forecast model
[TS] forecast exogenous        Declare exogenous variables
[TS] forecast identity         Add an identity to a forecast model
[TS] forecast list             List forecast commands composing current model
[TS] forecast query            Check whether a forecast model has been started
[TS] forecast solve            Obtain static and dynamic forecasts
have made it possible to implement full information maximum likelihood (FIML) estimators, such as Stata's arima command. These estimators permit much greater flexibility when modeling the disturbances and are more efficient estimators.
arima provides the means to fit linear models with autoregressive moving-average (ARMA)
disturbances, or in the absence of linear predictors, autoregressive integrated moving-average (ARIMA)
models. This means that, whether you think that your data are best represented as a distributed-lag
model, a transfer-function model, or a stochastic difference equation, or you simply wish to apply
a Box–Jenkins filter to your data, the model can be fit using arima. arch, a conditional maximum
likelihood estimator, has similar modeling capabilities for the mean of the time series but can also model
autoregressive conditional heteroskedasticity in the disturbances with a wide variety of specifications
for the variance equation.
arfima estimates the parameters of autoregressive fractionally integrated moving-average (ARFIMA)
models, which handle higher degrees of dependence than ARIMA models. ARFIMA models allow the
autocorrelations to decay at the slower hyperbolic rate, whereas ARIMA models handle processes
whose autocorrelations decay at an exponential rate.
Unobserved-components models (UCMs) decompose a time series into trend, seasonal, cyclical,
and idiosyncratic components and allow for exogenous variables. ucm estimates the parameters of
UCMs by maximum likelihood. UCMs can also model the stationary cyclical component using the
stochastic-cycle parameterization that has an intuitive frequency-domain interpretation.
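As a quick orientation, the following is a minimal sketch of how these estimators are invoked; y is a hypothetical tsset series, and the lag orders and model choices shown are illustrative rather than recommendations:

. arima y, arima(1,1,1)
. arch y, arch(1) garch(1)
. arfima y, ar(1) ma(1)
. ucm y, model(lltrend) cycle(1)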
Time-series smoothers and filters
In addition to the estimators mentioned above, Stata also provides time-series filters and smoothers.
The Baxter–King and Christiano–Fitzgerald band-pass filters and the Butterworth and Hodrick–Prescott
high-pass filters are implemented in tsfilter; see [TS] tsfilter for an overview.
Also included are a simple, uniformly weighted, moving-average filter with unit weights; a
weighted moving-average filter in which you can specify the weights; single- and double-exponential
smoothers; Holt–Winters seasonal and nonseasonal smoothers; and a nonlinear smoother. Most of
these smoothers were originally developed as ad hoc procedures and are used for reducing the noise in
a time series (smoothing) or forecasting. Although they have limited application for signal extraction,
these smoothers have all been found to be optimal for some underlying modern time-series models;
see [TS] tssmooth.
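A minimal sketch of the filter and smoother syntax, assuming a tsset series y (hypothetical) and illustrative option choices:

. tsfilter hp y_hp = y, trend(y_trend)
. tssmooth ma y_ma = y, window(2 1 2)
. tssmooth hwinters y_hw = y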
Diagnostic tools
Stata's time-series commands also include several preestimation and postestimation diagnostic and
interpretation commands. corrgram estimates the autocorrelation function and partial autocorrelation
function of a univariate time series, as well as Q statistics. These functions and statistics are often used
to determine the appropriate model specification before fitting ARIMA models. corrgram can also be
used with wntestb and wntestq to examine the residuals after fitting a model for evidence of model
misspecification. Stata's time-series commands also include the commands pergram and cumsp,
which provide the log-standardized periodogram and the cumulative-sample spectral distribution,
respectively, for time-series analysts who prefer to estimate in the frequency domain rather than the
time domain.
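For instance, using the wpi1 dataset that also appears later in this manual, these tools might be run as follows; the lag choice is illustrative:

. use http://www.stata-press.com/data/r13/wpi1
. corrgram D.ln_wpi, lags(12)
. pergram D.ln_wpi
. wntestq D.ln_wpi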
psdensity computes the spectral density implied by the parameters estimated by arfima, arima,
or ucm. The estimated spectral density shows the relative importance of components at different
frequencies. estat acplot computes the autocorrelation and autocovariance functions implied by
the parameters estimated by arima. These functions provide a measure of the dependence structure
in the time domain.
xcorr estimates the cross-correlogram for bivariate time series and can similarly be used for both
preestimation and postestimation. For example, the cross-correlogram can be used before fitting a
transfer-function model to produce initial estimates of the IRF. This estimate can then be used to
determine the optimal lag length of the input series to include in the model specification. It can
also be used as a postestimation tool after fitting a transfer function. The cross-correlogram between
the residual from a transfer-function model and the prewhitened input series of the model can be
examined for evidence of model misspecification.
When you fit ARMA or ARIMA models, the dependent variable being modeled must be covariance
stationary (ARMA models), or the order of integration must be known (ARIMA models). Stata has three
commands that can test for the presence of a unit root in a time-series variable: dfuller performs
the augmented Dickey–Fuller test, pperron performs the Phillips–Perron test, and dfgls performs a modified Dickey–Fuller test. arfima can also be used to investigate the order of integration. After
estimation, you can use estat aroots to check the stationarity of an ARMA process.
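Continuing the sketch above, the unit-root tests might be typed as follows; the lag length and trend terms are illustrative assumptions:

. dfuller ln_wpi, lags(4) trend
. pperron ln_wpi, trend
. dfgls ln_wpi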
The remaining diagnostic tools for univariate time series are for use after fitting a linear model via
OLS with Statas regress command. They are documented collectively in [R] regress postestimation
time series. They include estat dwatson, estat durbinalt, estat bgodfrey, and estat archlm. estat dwatson computes the Durbin–Watson d statistic to test for the presence of first-order autocorrelation in the OLS residuals. estat durbinalt likewise tests for the presence of autocorrelation in the residuals. By comparison, however, Durbin's alternative test is more general and easier to use than the Durbin–Watson test. With estat durbinalt, you can test for higher orders of autocorrelation, the assumption that the covariates in the model are strictly exogenous is relaxed, and there is no need to consult tables to compute rejection regions, as you must with the Durbin–Watson test. estat bgodfrey computes the Breusch–Godfrey test for autocorrelation in the residuals, and although the computations are different, the test in estat bgodfrey is asymptotically equivalent to the test in estat durbinalt. Finally, estat archlm performs Engle's LM test for the presence of autoregressive conditional heteroskedasticity.
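As a sketch, after an OLS regression on tsset data (y and x are hypothetical variables), the tests are run as postestimation commands; the lag lists are illustrative:

. regress y L.y x
. estat dwatson
. estat durbinalt, lags(1/4)
. estat bgodfrey, lags(1/4)
. estat archlm, lags(1)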
Similarly, several postestimation commands perform the most common specification analysis on a
previously fitted VECM. You can use veclmar to check for serial correlation in the residuals, vecnorm
to test the null hypothesis that the disturbances come from a multivariate normal distribution, and
vecstable to analyze the stability of the previously fitted VECM.
VARs and VECMs are often fit to produce baseline forecasts. fcast produces dynamic forecasts
from previously fitted VARs and VECMs.
Many researchers fit VARs, SVARs, and VECMs because they want to analyze how unexpected
shocks affect the dynamic paths of the variables. Stata has a suite of irf commands for estimating IRFs and for interpreting, presenting, and managing these estimates; see [TS] irf.
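A minimal sketch of that workflow, assuming two hypothetical tsset variables y1 and y2:

. var y1 y2, lags(1/2)
. irf create order1, set(myirfs)
. irf graph oirf, impulse(y1) response(y2)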
Forecasting models
Stata provides a set of commands for obtaining forecasts by solving models, collections of equations
that jointly determine the outcomes of one or more variables. You use Stata estimation commands such
as regress, reg3, var, and vec to fit stochastic equations and store the results using estimates
store. Then you create a forecast model using forecast create and use commands, including
forecast estimates and forecast identity, to build models consisting of estimation results,
nonstochastic relationships (identities), and other model features. Models can be as simple as a single
linear regression for which you want to obtain dynamic forecasts, or they can be complicated systems
consisting of dozens of estimation results and identities representing a complete macroeconometric
model.
The forecast solve command allows you to obtain both stochastic and dynamic forecasts.
Confidence intervals for forecasts can be obtained via stochastic simulation incorporating both
parameter uncertainty and additive random shocks. By using forecast adjust, you can incorporate
outside information and specify different paths for some of the model's variables to obtain forecasts
under alternative scenarios.
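A minimal sketch of the workflow, with hypothetical variable and model names:

. regress y x1 x2
. estimates store yeq
. forecast create mymodel, replace
. forecast estimates yeq
. forecast identity z = y + x1
. forecast solve, prefix(f_)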
References
Baum, C. F. 2005. Stata: The language of choice for time-series analysis? Stata Journal 5: 46–63.
Becketti, S. 2013. Introduction to Time Series Using Stata. College Station, TX: Stata Press.
Hamilton, J. D. 1994. Time Series Analysis. Princeton: Princeton University Press.
Lütkepohl, H. 1993. Introduction to Multiple Time Series Analysis. 2nd ed. New York: Springer.
Lütkepohl, H. 2005. New Introduction to Multiple Time Series Analysis. New York: Springer.
Pisati, M. 2001. sg162: Tools for spatial data analysis. Stata Technical Bulletin 60: 21–37. Reprinted in Stata Technical Bulletin Reprints, vol. 10, pp. 277–298. College Station, TX: Stata Press.
Stock, J. H., and M. W. Watson. 2001. Vector autoregressions. Journal of Economic Perspectives 15: 101–115.
Also see
[U] 1.3 What's new
Title
arch - Autoregressive conditional heteroskedasticity (ARCH) family of estimators

Syntax     Menu     Description     Options     Remarks and examples
Stored results     Methods and formulas     References     Also see
Syntax

arch depvar [indepvars] [if] [in] [weight] [, options]
options                     Description
---------------------------------------------------------------------------
Model
  noconstant                suppress constant term
  arch(numlist)             ARCH terms
  garch(numlist)            GARCH terms
  saarch(numlist)           simple asymmetric ARCH terms
  tarch(numlist)            threshold ARCH terms
  aarch(numlist)            asymmetric ARCH terms
  narch(numlist)            nonlinear ARCH terms
  narchk(numlist)           nonlinear ARCH terms with single shift
  abarch(numlist)           absolute value ARCH terms
  atarch(numlist)           absolute threshold ARCH terms
  sdgarch(numlist)          lags of σ_t
  earch(numlist)            news terms in Nelson's (1991) EGARCH model
  egarch(numlist)           lags of ln(σ²_t)
  parch(numlist)            power ARCH terms
  tparch(numlist)           threshold power ARCH terms
  aparch(numlist)           asymmetric power ARCH terms
  nparch(numlist)           nonlinear power ARCH terms
  nparchk(numlist)          nonlinear power ARCH terms with single shift
  pgarch(numlist)           power GARCH terms
  constraints(constraints)  apply specified linear constraints
  collinear                 keep collinear variables

Model 2
  archm                     include ARCH-in-mean term in the mean-equation specification
  archmlags(numlist)        include specified lags of conditional variance in mean equation
  archmexp(exp)             apply transformation in exp to any ARCH-in-mean terms
  arima(#p,#d,#q)           specify ARIMA(p, d, q) model for dependent variable
  ar(numlist)               autoregressive terms of the structural model disturbance
  ma(numlist)               moving-average terms of the structural model disturbance

Model 3
  distribution(dist [#])    use dist distribution for errors (may be gaussian, normal, t, or ged; default is gaussian)
  het(varlist)              include varlist in the specification of the conditional variance
  savespace                 conserve memory during estimation

Priming
  arch0(xb)                 compute priming values on the basis of the expected unconditional variance; the default
  arch0(xb0)                compute priming values on the basis of the estimated variance of the residuals from OLS
  arch0(xbwt)               compute priming values on the basis of the weighted sum of squared residuals from the current conditional mean equation
  arch0(xb0wt)              compute priming values on the basis of the weighted sum of squared residuals from OLS
  arch0(zero)               set priming values of ARCH terms to zero
  arch0(#)                  set priming values of ARCH terms to #
  arma0(zero)               set all priming values of ARMA terms to zero; the default
  arma0(p)                  begin estimation after observation p, where p is the maximum AR lag in the model
  arma0(q)                  begin estimation after observation q, where q is the maximum MA lag in the model
  arma0(pq)                 begin estimation after observation (p + q)
  arma0(#)                  set priming values of ARMA terms to #
  condobs(#)                set conditioning observations at the start of the sample to #

SE/Robust
  vce(vcetype)              vcetype may be opg, robust, or oim

Reporting
  level(#)                  set confidence level; default is level(95)
  detail                    report list of gaps in time series
  nocnsreport               do not display constraints
  display_options           control spacing and display of omitted variables and base and empty cells

Maximization
  maximize_options          control the maximization process; seldom used
  coeflegend                display legend instead of statistics
---------------------------------------------------------------------------
You must tsset your data before using arch; see [TS] tsset.
depvar and varlist may contain time-series operators; see [U] 11.4.4 Time-series varlists.
by, fp, rolling, statsby, and xi are allowed; see [U] 11.1.10 Prefix commands.
iweights are allowed; see [U] 11.1.6 weight.
coeflegend does not appear in the dialog box.
See [U] 20 Estimation and postestimation commands for more capabilities of estimation commands.
To fit an ARCH(#m) model with Gaussian errors, type

. arch depvar ..., arch(1/#m)

To fit a GARCH(#m, #k) model assuming that the errors follow Student's t distribution with 7 degrees of freedom, type

. arch depvar ..., arch(1/#m) garch(1/#k) distribution(t 7)
Details of syntax

The basic model arch fits is

    y_t = x_t β + ε_t
    Var(ε_t) = σ²_t = γ₀ + A(σ, ε) + B(σ, ε)²    (1)

The y_t equation may optionally include ARCH-in-mean and ARMA terms:

    y_t = x_t β + Σ_i ψ_i g(σ²_{t−i}) + ARMA(p, q) + ε_t

If no options are specified, A() = B() = 0, and the model collapses to linear regression. The following options add to A() (α, γ, and κ represent parameters to be estimated):

Option      Terms added to A()
arch()      A() = A() + α_{1,1} ε²_{t−1} + α_{1,2} ε²_{t−2} + ...
garch()     A() = A() + α_{2,1} σ²_{t−1} + α_{2,2} σ²_{t−2} + ...
saarch()    A() = A() + α_{3,1} ε_{t−1} + α_{3,2} ε_{t−2} + ...
tarch()     A() = A() + α_{4,1} ε²_{t−1}(ε_{t−1} > 0) + α_{4,2} ε²_{t−2}(ε_{t−2} > 0) + ...
aarch()     A() = A() + α_{5,1}(|ε_{t−1}| + γ_{5,1} ε_{t−1})² + α_{5,2}(|ε_{t−2}| + γ_{5,2} ε_{t−2})² + ...
narch()     A() = A() + α_{6,1}(ε_{t−1} − κ_{6,1})² + α_{6,2}(ε_{t−2} − κ_{6,2})² + ...
narchk()    A() = A() + α_{7,1}(ε_{t−1} − κ₇)² + α_{7,2}(ε_{t−2} − κ₇)² + ...

The following options add to B():

Option      Terms added to B()
abarch()    B() = B() + α_{8,1}|ε_{t−1}| + α_{8,2}|ε_{t−2}| + ...
atarch()    B() = B() + α_{9,1}|ε_{t−1}|(ε_{t−1} > 0) + α_{9,2}|ε_{t−2}|(ε_{t−2} > 0) + ...
sdgarch()   B() = B() + α_{10,1} σ_{t−1} + α_{10,2} σ_{t−2} + ...

Each option requires a numlist argument (see [U] 11.1.8 numlist), which determines the lagged terms included. arch(1) specifies α_{1,1} ε²_{t−1}, arch(2) specifies α_{1,2} ε²_{t−2}, arch(1,2) specifies α_{1,1} ε²_{t−1} + α_{1,2} ε²_{t−2}, arch(1/3) specifies α_{1,1} ε²_{t−1} + α_{1,2} ε²_{t−2} + α_{1,3} ε²_{t−3}, etc.
If the earch() or egarch() option is specified, the basic model fit is

    y_t = x_t β + Σ_i ψ_i g(σ²_{t−i}) + ARMA(p, q) + ε_t
    ln(σ²_t) = γ₀ + C(ln σ, z) + A(σ, ε) + B(σ, ε)²    (2)

where z_t = ε_t/σ_t. A() and B() are given as above, but A() and B() now add to ln(σ²_t). The following options add to C():

Option      Terms added to C()
earch()     C() = C() + α_{11,1} z_{t−1} + γ_{11,1}(|z_{t−1}| − √(2/π)) + α_{11,2} z_{t−2} + γ_{11,2}(|z_{t−2}| − √(2/π)) + ...
egarch()    C() = C() + α_{12,1} ln(σ²_{t−1}) + α_{12,2} ln(σ²_{t−2}) + ...
Instead, if the parch(), tparch(), aparch(), nparch(), nparchk(), or pgarch() options are specified, the basic model fit is

    y_t = x_t β + Σ_i ψ_i g(σ²_{t−i}) + ARMA(p, q) + ε_t
    {Var(ε_t)}^{φ/2} = σ^φ_t = γ₀ + D(σ, ε) + A(σ, ε) + B(σ, ε)²    (3)

where φ is a parameter to be estimated. A() and B() are given as above, but A() and B() now add to σ^φ_t. (The options corresponding to A() and B() are rarely specified here.) D() is given by

Option      Terms added to D()
parch()     D() = D() + α_{13,1} |ε_{t−1}|^φ + α_{13,2} |ε_{t−2}|^φ + ...
tparch()    D() = D() + α_{14,1} |ε_{t−1}|^φ (ε_{t−1} > 0) + α_{14,2} |ε_{t−2}|^φ (ε_{t−2} > 0) + ...
aparch()    D() = D() + α_{15,1}(|ε_{t−1}| + γ_{15,1} ε_{t−1})^φ + α_{15,2}(|ε_{t−2}| + γ_{15,2} ε_{t−2})^φ + ...
nparch()    D() = D() + α_{16,1} |ε_{t−1} − κ_{16,1}|^φ + α_{16,2} |ε_{t−2} − κ_{16,2}|^φ + ...
nparchk()   D() = D() + α_{17,1} |ε_{t−1} − κ₁₇|^φ + α_{17,2} |ε_{t−2} − κ₁₇|^φ + ...
pgarch()    D() = D() + α_{18,1} σ^φ_{t−1} + α_{18,2} σ^φ_{t−2} + ...
Common models

Common term                                Options to specify
ARCH (Engle 1982)                          arch()
GARCH (Bollerslev 1986)                    arch() garch()
EGARCH (Nelson 1991)                       earch() egarch()
PARCH (Higgins and Bera 1992)              parch() [pgarch()]
NARCH                                      narch() [garch()]
NARCHK (NARCH with one shift)              narchk() [garch()]
A-PARCH (Ding, Granger, and Engle 1993)    aparch() [pgarch()]
NPARCH                                     nparch() [pgarch()]
In all cases, you type

. arch depvar [indepvars], options

where options are chosen from the table above. Each option requires that you specify as its argument a numlist that specifies the lags to be included. For most ARCH models, that value will be 1. For instance, to fit the classic first-order GARCH model on cpi, you would type

. arch cpi, arch(1) garch(1)

If you wanted to fit a first-order GARCH model of cpi on wage, you would type

. arch cpi wage, arch(1) garch(1)

If, for any of the options, you want first- and second-order terms, specify optionname(1/2). Specifying garch(1) arch(1/2) would fit a GARCH model with first- and second-order ARCH terms. If you specified arch(2), only the lag 2 term would be included.
(schematic of arch output: the coefficient table is divided into equations; the first, named [depvar], reports the mean equation, with rows for regressors x1, x2, their lags, and _cons; an [ARCHM] equation reports sigma2; an [ARMA] equation reports the ar and ma lags; a [HET] equation reports the multiplicative-heteroskedasticity variables z1, z2, and their lags; an [ARCH] equation reports the arch, garch, aparch, etc., terms and _cons; and a [POWER] equation reports the power parameter)
The first equation reports β, and the equation will be named [depvar]; if you fit a model on d.cpi, the first equation would be named [cpi]. In Stata, the coefficient on x1 in the above example could be referred to as [depvar]_b[x1]. The coefficient on the lag 2 value of x2 would be referred to as [depvar]_b[L2.x2]. Such notation would be used, for instance, in a later test command; see [R] test.
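For example, assuming the model above was fit with hypothetical regressors x1 and L2.x2, a joint test of their coefficients could be typed as

. arch d.cpi x1 L2.x2, arch(1) garch(1)
. test [cpi]x1 [cpi]L2.x2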
The [ARCHM] equation reports the ψ coefficients if your model includes ARCH-in-mean terms; see options discussed under the Model 2 tab below. Most ARCH-in-mean models include only a contemporaneous variance term, so the term Σ_i ψ_i g(σ²_{t−i}) becomes ψ σ²_t. The coefficient ψ will be [ARCHM]_b[sigma2]. If your model includes lags of σ²_t, the additional coefficients will be [ARCHM]_b[L1.sigma2], and so on. If you specify a transformation g() (option archmexp()), the coefficients will be [ARCHM]_b[sigma2ex], [ARCHM]_b[L1.sigma2ex], and so on. sigma2ex refers to g(σ²_t), the transformed value of the conditional variance.
The [ARMA] equation reports the ARMA coefficients if your model includes them; see options discussed under the Model 2 tab below. This equation includes one or two variables named ar and ma. In later test statements, you could refer to the coefficient on the first lag of the autoregressive term by typing [ARMA]_b[L1.ar] or simply [ARMA]_b[L.ar] (the L operator is assumed to be lag 1 if you do not specify otherwise). The second lag on the moving-average term, if there were one, could be referred to by typing [ARMA]_b[L2.ma].
The next one, two, or three equations report the variance model.
The [HET] equation reports the multiplicative heteroskedasticity if the model includes it. When you fit such a model, you specify the variables (and their lags) determining the multiplicative heteroskedasticity; after estimation, their coefficients are simply [HET]_b[op.varname].

The [ARCH] equation reports the ARCH, GARCH, etc., terms by referring to variables arch, garch, and so on. For instance, if you specified arch(1) garch(1) when you fit the model, the conditional variance is given by σ²_t = γ₀ + α_{1,1} ε²_{t−1} + α_{2,1} σ²_{t−1}. The coefficients would be named [ARCH]_b[_cons] (γ₀), [ARCH]_b[L.arch] (α_{1,1}), and [ARCH]_b[L.garch] (α_{2,1}).

The [POWER] equation appears only if you are fitting a variance model in the form of (3) above; the estimated φ is the coefficient [POWER]_b[power].
Also, if you use the distribution() option and specify either Student's t or the generalized
error distribution but do not specify the degree-of-freedom or shape parameter, then you will see
two additional rows in the table. The final row contains the estimated degree-of-freedom or shape
parameter. Immediately preceding the final row is a transformed version of the parameter that arch
used during estimation to ensure that the degree-of-freedom parameter is greater than two or that the
shape parameter is positive.
The naming convention for estimated ARCH, GARCH, etc., parameters is as follows (definitions for parameters α_i, γ_i, and κ_i can be found in the tables for A(), B(), C(), and D() above):

Option      1st parameter               2nd parameter                Common parameter
arch()      α₁ = [ARCH]_b[arch]
garch()     α₂ = [ARCH]_b[garch]
saarch()    α₃ = [ARCH]_b[saarch]
tarch()     α₄ = [ARCH]_b[tarch]
aarch()     α₅ = [ARCH]_b[aarch]        γ₅ = [ARCH]_b[aarch_e]
narch()     α₆ = [ARCH]_b[narch]        κ₆ = [ARCH]_b[narch_k]
narchk()    α₇ = [ARCH]_b[narch]        κ₇ = [ARCH]_b[narch_k]
abarch()    α₈ = [ARCH]_b[abarch]
atarch()    α₉ = [ARCH]_b[atarch]
sdgarch()   α₁₀ = [ARCH]_b[sdgarch]
earch()     α₁₁ = [ARCH]_b[earch]       γ₁₁ = [ARCH]_b[earch_a]
egarch()    α₁₂ = [ARCH]_b[egarch]
parch()     α₁₃ = [ARCH]_b[parch]                                    φ = [POWER]_b[power]
tparch()    α₁₄ = [ARCH]_b[tparch]                                   φ = [POWER]_b[power]
aparch()    α₁₅ = [ARCH]_b[aparch]      γ₁₅ = [ARCH]_b[aparch_e]     φ = [POWER]_b[power]
nparch()    α₁₆ = [ARCH]_b[nparch]      κ₁₆ = [ARCH]_b[nparch_k]     φ = [POWER]_b[power]
nparchk()   α₁₇ = [ARCH]_b[nparch]      κ₁₇ = [ARCH]_b[nparch_k]     φ = [POWER]_b[power]
pgarch()    α₁₈ = [ARCH]_b[pgarch]                                   φ = [POWER]_b[power]
Menu

ARCH/GARCH
    Statistics > Time series > ARCH/GARCH > ...
EARCH/EGARCH
    Statistics > Time series > ARCH/GARCH > ...
ABARCH/ATARCH/SDGARCH
    Statistics > Time series > ARCH/GARCH > ...
ARCH/TARCH/GARCH
    Statistics > Time series > ARCH/GARCH > ...
ARCH/SAARCH/GARCH
    Statistics > Time series > ARCH/GARCH > ...
PARCH/PGARCH
    Statistics > Time series > ARCH/GARCH > ...
NARCH/GARCH
    Statistics > Time series > ARCH/GARCH > ...
NARCHK/GARCH
    Statistics > Time series > ARCH/GARCH > ...
APARCH/PGARCH
    Statistics > Time series > ARCH/GARCH > ...
NPARCH/PGARCH
    Statistics > Time series > ARCH/GARCH > ...
Description
arch fits regression models in which the volatility of a series varies through time. Usually, periods
of high and low volatility are grouped together. ARCH models estimate future volatility as a function of
prior volatility. To accomplish this, arch fits models of autoregressive conditional heteroskedasticity
(ARCH) by using conditional maximum likelihood. In addition to ARCH terms, models may include
multiplicative heteroskedasticity. Gaussian (normal), Student's t, and generalized error distributions
are supported.
Concerning the regression equation itself, models may also contain ARCH-in-mean and ARMA
terms.
Options
Model
narch() may not be specified with arch(), saarch(), narchk(), nparchk(), or nparch(),
as this would result in collinear terms.
narchk(numlist) specifies the lags of the two-parameter term α_i(ε_t − κ)²; this is a variation of narch() with κ held constant for all lags.
narchk() may not be specified with arch(), saarch(), narch(), nparchk(), or nparch(),
as this would result in collinear terms.
abarch(numlist) specifies lags of the term |ε_t|.

atarch(numlist) specifies lags of |ε_t|(ε_t > 0), where (ε_t > 0) represents the indicator function returning 1 when true and 0 when false. Like the TARCH terms, these ATARCH terms allow the effect of unanticipated innovations to be asymmetric about zero.

sdgarch(numlist) specifies lags of σ_t. Combining atarch(), abarch(), and sdgarch() produces the model by Zakoian (1994) that the author called the TARCH model. The acronym TARCH, however, refers to any model using thresholding to obtain asymmetry.
earch(numlist) specifies lags of the two-parameter term α z_t + γ(|z_t| − √(2/π)). These terms represent the influence of news (lagged innovations) in Nelson's (1991) EGARCH model. For these terms, z_t = ε_t/σ_t, and arch assumes z_t ~ N(0, 1). Nelson derived the general form of an EGARCH model for any assumed distribution and performed estimation assuming a generalized error distribution (GED). See Hamilton (1994) for a derivation where z_t is assumed normal. The z_t terms can be parameterized in either of these two equivalent ways. arch uses Nelson's original parameterization; see Hamilton (1994) for an equivalent alternative.

egarch(numlist) specifies lags of ln(σ²_t).
For the following options, the model is parameterized in terms of h(ε_t) and σ^φ_t. One φ is estimated, even when more than one option is specified.

parch(numlist) specifies lags of |ε_t|^φ. parch() combined with pgarch() corresponds to the class of nonlinear models of conditional variance suggested by Higgins and Bera (1992).

tparch(numlist) specifies lags of (ε_t > 0)|ε_t|^φ, where (ε_t > 0) represents the indicator function returning 1 when true and 0 when false. As with tarch(), tparch() specifies terms that allow for a differential impact of "good" (positive innovations) and "bad" (negative innovations) news for lags specified by numlist.

tparch() may not be specified with tarch(), as this would result in collinear terms.

aparch(numlist) specifies lags of the two-parameter term α(|ε_t| + γ ε_t)^φ. This asymmetric power ARCH model, A-PARCH, was proposed by Ding, Granger, and Engle (1993) and corresponds to a Box–Cox function in the lagged innovations. The authors fit the original A-PARCH model on more than 16,000 daily observations of the Standard and Poor's 500, and for good reason. As the number of parameters and the flexibility of the specification increase, more data are required to estimate the parameters of the conditional heteroskedasticity. See Ding, Granger, and Engle (1993) for a discussion of how seven popular ARCH models nest within the A-PARCH model.

When γ goes to 1, the full term goes to zero for many observations and can then be numerically unstable.

nparch(numlist) specifies lags of the two-parameter term α|ε_t − κ_i|^φ.

nparch() may not be specified with arch(), saarch(), narch(), narchk(), or nparchk(), as this would result in collinear terms.

nparchk(numlist) specifies lags of the two-parameter term α|ε_t − κ|^φ; this is a variation of nparch() with κ held constant for all lags. This is the direct analog of narchk(), except for the power of φ. nparchk() corresponds to an extended form of the model of Higgins and Bera (1992) as presented by Bollerslev, Engle, and Nelson (1994). nparchk() would typically be combined with the pgarch() option.

nparchk() may not be specified with arch(), saarch(), narch(), narchk(), or nparch(), as this would result in collinear terms.
Model 2
archm specifies that an ARCH-in-mean term be included in the specification of the mean equation. This
term allows the expected value of depvar to depend on the conditional variance. ARCH-in-mean is
most commonly used in evaluating financial time series when a theory supports a tradeoff between
asset risk and return. By default, no ARCH-in-mean terms are included in the model.
archm specifies that the contemporaneous expected conditional variance be included in the mean equation. For example, typing

. arch y x, archm arch(1)

specifies the model

    y_t = β₀ + β₁ x_t + ψ σ²_t + ε_t
    σ²_t = γ₀ + γ ε²_{t−1}
archmlags(numlist) is an expansion of archm that includes lags of the conditional variance t2 in
the mean equation. To specify a contemporaneous and once-lagged variance, specify either archm
archmlags(1) or archmlags(0/1).
archmexp(exp) applies the transformation in exp to any ARCH-in-mean terms in the model. The
expression should contain an X wherever a value of the conditional variance is to enter the expression.
This option can be used to produce the commonly used ARCH-in-mean of the conditional standard
deviation. With the example from archm, typing
. arch y x, archm arch(1) archmexp(sqrt(X))
specifies y_t = β₀ + β₁ x_t + ψ σ_t + ε_t.
arima(#p,#d,#q) is an alternative, shorthand notation for specifying autoregressive models in the dependent variable. The dependent variable and any independent variables are differenced #d times, 1 through #p lags of autocorrelations are included, and 1 through #q lags of moving averages are included. For example, the specification
. arch y, arima(2,1,3)
is equivalent to
. arch D.y, ar(1/2) ma(1/3)
The former is easier to write for classic ARIMA models of the mean equation, but it is not nearly
as expressive as the latter. If gaps in the AR or MA lags are to be modeled, or if different operators
are to be applied to independent variables, the latter syntax is required.
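For example, to model an ARIMA process on the differenced series with AR lags 1 and 4 (a gap at lags 2 and 3), the explicit syntax is required; here y is a hypothetical variable, and the ARCH terms are illustrative:

. arch D.y, ar(1 4) ma(1) arch(1) garch(1)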
ar(numlist) specifies the autoregressive terms of the structural model disturbance to be included in
the model. For example, ar(1/3) specifies that lags 1, 2, and 3 of the structural disturbance be
included in the model. ar(1,4) specifies that lags 1 and 4 be included, possibly to account for
quarterly effects.
If the model does not contain regressors, these terms can also be considered autoregressive terms
for the dependent variable; see [TS] arima.
ma(numlist) specifies the moving-average terms to be included in the model. These are the terms for
the lagged innovations or white-noise disturbances.
Model 3
distribution(dist [#]) specifies the distribution to assume for the error term. dist may be gaussian, normal, t, or ged. gaussian and normal are synonyms, and # cannot be specified with them.

If distribution(t) is specified, arch assumes that the errors follow Student's t distribution, and the degree-of-freedom parameter is estimated along with the other parameters of the model. If distribution(t #) is specified, then arch uses Student's t distribution with # degrees of freedom. # must be greater than 2.
If distribution(ged) is specified, arch assumes that the errors have a generalized error
distribution, and the shape parameter is estimated along with the other parameters of the model.
If distribution(ged #) is specified, then arch uses the generalized error distribution with
shape parameter #. # must be positive. The generalized error distribution is identical to the normal
distribution when the shape parameter equals 2.
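As a sketch, with a hypothetical tsset series y, the first command estimates the t degree-of-freedom parameter, and the second fixes the GED shape parameter at 1.5:

. arch y, arch(1) garch(1) distribution(t)
. arch y, arch(1) garch(1) distribution(ged 1.5)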
het(varlist) specifies that varlist be included in the specification of the conditional variance. varlist
may contain time-series operators. This varlist enters the variance specification collectively as
multiplicative heteroskedasticity; see Judge et al. (1985). If het() is not specified, the model will
not contain multiplicative heteroskedasticity.
Assume that the conditional variance depends on variables x and w and has an ARCH(1) component. We request this specification by using the het(x w) arch(1) options, and this corresponds to the conditional-variance model

    σ²_t = exp(λ₀ + λ₁ x_t + λ₂ w_t) + α ε²_{t−1}

Multiplicative heteroskedasticity enters differently with an EGARCH model because the variance is already specified in logs. For the het(x w) earch(1) egarch(1) options, the variance model is

    ln(σ²_t) = λ₀ + λ₁ x_t + λ₂ w_t + α z_{t−1} + γ(|z_{t−1}| − √(2/π)) + δ ln(σ²_{t−1})
savespace conserves memory by retaining only those variables required for estimation. The original
dataset is restored after estimation. This option is rarely used and should be specified only if
there is insufficient memory to fit a model without the option. arch requires considerably more
temporary storage during estimation than most estimation commands in Stata.
Priming
arch0(cond_method) is a rarely used option that specifies how to compute the conditioning (presample or priming) values for σ²_t and ε²_t. In the presample period, it is assumed that σ²_t = ε²_t and that this value is constant. If arch0() is not specified, the priming values are computed as the expected unconditional variance given the current estimates of the β coefficients and any ARMA parameters.
arch0(xb), the default, specifies that the priming values are the expected unconditional variance of the model, which is Σ_{t=1}^{T} ε̂²_t / T, where ε̂_t is computed from the mean equation and any ARMA terms.

arch0(xb0) specifies that the priming values are the estimated variance of the residuals from an OLS estimate of the mean equation.

arch0(xbwt) specifies that the priming values are the weighted sum of the ε̂²_t from the current conditional mean equation (and ARMA terms) that places more weight on estimates of ε²_t at the beginning of the sample.

arch0(xb0wt) specifies that the priming values are the weighted sum of the ε̂²_t from an OLS estimate of the mean equation (and ARMA terms) that places more weight on estimates of ε²_t at the beginning of the sample.

arch0(zero) specifies that the priming values are 0. Unlike the priming values for ARIMA models, 0 is generally not a consistent estimate of the presample conditional variance or squared innovations.

arch0(#) specifies that σ²_t = ε²_t = # for any specified nonnegative #. Thus arch0(0) is equivalent to arch0(zero).
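As a sketch, with a hypothetical series y, the priming method is changed as follows:

. arch y, arch(1) garch(1) arch0(xb0)
. arch y, arch(1) garch(1) arch0(.001)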
arma0(cond_method) is a rarely used option that specifies how the ε_t values are initialized at the beginning of the sample for the ARMA component, if the model has one. This option has an effect only when AR or MA terms are included in the model (the ar(), ma(), or arima() options specified).

arma0(zero), the default, specifies that all priming values of ε_t be taken as 0. This fits the model over the entire requested sample and takes ε_t as its expected value of 0 for all lags required by the ARMA terms; see Judge et al. (1985).
arma0(p), arma0(q), and arma0(pq) specify that estimation begin after priming the recursions for a certain number of observations. p specifies that estimation begin after the pth observation in the sample, where p is the maximum AR lag in the model; q specifies that estimation begin after the qth observation in the sample, where q is the maximum MA lag in the model; and pq specifies that estimation begin after the (p + q)th observation in the sample.
During the priming period, the recursions necessary to generate predicted disturbances are performed, but results are used only to initialize preestimation values of ε_t. To understand the definition of preestimation, say that you fit a model in 10/100. If the model is specified with ar(1,2), preestimation refers to observations 10 and 11.

The ARCH terms σ²_t and ε²_t are also updated over these observations. Any required lags of ε_t before the priming period are taken to be their expected value of 0, and ε²_t and σ²_t take the values specified in arch0().

arma0(#) specifies that the presample values of ε_t are to be taken as # for all lags required by the ARMA terms. Thus arma0(0) is equivalent to arma0(zero).
condobs(#) is a rarely used option that specifies a fixed number of conditioning observations at the start of the sample. Over these priming observations, the recursions necessary to generate predicted disturbances are performed, but only to initialize preestimation values of ε_t, ε²_t, and σ²_t. Any required lags of ε_t before the initialization period are taken to be their expected value of 0 (or the value specified in arma0()), and required values of ε²_t and σ²_t assume the values specified by arch0(). condobs() can be used if conditioning observations are desired for the lags in the ARCH terms of the model. If arma() is also specified, the maximum number of conditioning observations required by arma() and condobs(#) is used.
SE/Robust
vce(vcetype) specifies the type of standard error reported, which includes types that are robust to
some kinds of misspecification (robust) and that are derived from asymptotic theory (oim, opg);
see [R] vce option.
For ARCH models, the robust or quasi-maximum likelihood estimates (QMLE) of variance are robust
to symmetric nonnormality in the disturbances. The robust variance estimates generally are not
robust to functional misspecification of the mean equation; see Bollerslev and Wooldridge (1992).
The robust variance estimates computed by arch are based on the full Huber/White/sandwich
formulation, as discussed in [P] robust. Many other software packages report robust estimates
that set some terms to their expectations of zero (Bollerslev and Wooldridge 1992), which saves
them from calculating second derivatives of the log-likelihood function.
Reporting

level(#); see [R] estimation options.

detail specifies that a detailed list of any gaps in the series be reported.

nocnsreport; see [R] estimation options.

display_options; see [R] estimation options.
Maximization
maximize_options: difficult, technique(algorithm_spec), iterate(#), [no]log, trace, gradient, showstep, hessian, showtolerance, tolerance(#), ltolerance(#), gtolerance(#), nrtolerance(#), nonrtolerance, and from(init_specs); see [R] maximize for all options except gtolerance(), and see below for information on gtolerance().

These options are often more important for ARCH models than for other maximum likelihood models because of convergence problems associated with ARCH models; ARCH model likelihoods are notoriously difficult to maximize.
Setting technique() to something other than the default or BHHH changes the vcetype to vce(oim).
The following options are all related to maximization and are either particularly important in fitting
ARCH models or not available for most other estimators.
gtolerance(#) specifies the tolerance for the gradient relative to the coefficients. When |g_i b_i| ≤ gtolerance() for all parameters b_i and the corresponding elements of the gradient g_i, the gradient tolerance criterion is met. The default gradient tolerance for arch is gtolerance(.05).
gtolerance(999) may be specified to disable the gradient criterion. If the optimizer becomes stuck with repeated "(backed up)" messages, the gradient probably still contains substantial values, but an uphill direction cannot be found for the likelihood. With this option, results can often be obtained, but whether the global maximum likelihood has been found is unclear.
When the maximization is not going well, it is also possible to set the maximum number of
iterations (see [R] maximize) to the point where the optimizer appears to be stuck and to inspect
the estimation results at that point.
from(init_specs) specifies the initial values of the coefficients. ARCH models may be sensitive to initial values and may have coefficient values that correspond to local maximums. The default starting values are obtained via a series of regressions, producing results that, on the basis of asymptotic theory, are consistent for the β and ARMA parameters and generally reasonable for the rest. Nevertheless, these values may not always be feasible in that the likelihood function cannot be evaluated at the initial values arch first chooses. In such cases, the estimation is restarted with ARCH and ARMA parameters initialized to zero. It is possible, but unlikely, that even these values will be infeasible and that you will have to supply initial values yourself.
The standard syntax for from() accepts a matrix, a list of values, or coefficient_name value pairs; see [R] maximize. arch also allows the following:

from(archb0) sets the starting value for all the ARCH/GARCH/... parameters in the conditional-variance equation to 0.
from(armab0) sets the starting value for all ARMA parameters in the model to 0.
from(archb0 armab0) sets the starting value for all ARCH/GARCH/. . . and ARMA parameters
to 0.
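As a sketch, with a hypothetical series y, both sets of starting values are zeroed by typing

. arch D.y, ar(1) arch(1) garch(1) from(archb0 armab0)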
The following option is available with arch but is not shown in the dialog box:
coeflegend; see [R] estimation options.
Remarks and examples

arch fits models of autoregressive conditional heteroskedasticity (ARCH, GARCH, etc.) using conditional maximum likelihood. By "conditional", we mean that the likelihood is computed based on an assumed or estimated set of priming values for the squared innovations ε²_t and variances σ²_t prior to the estimation sample; see Hamilton (1994) or Bollerslev (1986). Sometimes more conditioning is done on the first a, g, or a + g observations in the sample, where a is the maximum ARCH term lag and g is the maximum GARCH term lag (or the maximum lags from the other ARCH family terms).
The original ARCH model proposed by Engle (1982) modeled the variance of a regression model's disturbances as a linear function of lagged values of the squared regression disturbances. We can write an ARCH(m) model as

    y_t = x_t β + ε_t                                          (conditional mean)
    σ²_t = γ₀ + γ₁ ε²_{t−1} + γ₂ ε²_{t−2} + ... + γ_m ε²_{t−m}    (conditional variance)

where the ε²_{t−i} are the squared lagged residuals and the γ_i are the ARCH parameters.
The ARCH model has a specification for both the conditional mean and the conditional variance, and the variance is a function of the size of prior unanticipated innovations. This model was generalized by Bollerslev (1986) to include lagged values of the conditional variance, yielding a GARCH model. The GARCH(m, k) model is written as

    y_t = x_t β + ε_t
    σ²_t = γ₀ + γ₁ ε²_{t−1} + ... + γ_m ε²_{t−m} + δ₁ σ²_{t−1} + δ₂ σ²_{t−2} + ... + δ_k σ²_{t−k}

where the γ_i are the ARCH parameters and the δ_i are the GARCH parameters. The GARCH model can be rewritten as an ARMA process in ε²_t with w_t = ε²_t − σ²_t, where w_t is a white-noise process that is fundamental for ε²_t.
One of the primary benefits of the GARCH specification is its parsimony in identifying the conditional
variance. As with ARIMA models, the ARMA specification in GARCH allows the conditional variance
to be modeled with fewer parameters than with an ARCH specification alone. Empirically, many series
with a conditionally heteroskedastic disturbance have been adequately modeled with a GARCH(1,1)
specification.
An ARMA process in the disturbances can easily be added to the mean equation. For example, the mean equation can be written with an ARMA(1, 1) disturbance as

    y_t = x_t β + ρ(y_{t−1} − x_{t−1} β) + θ ε_{t−1} + ε_t
Much of the literature on ARCH models focuses on alternative specifications of the variance equation.
arch allows many of these specifications to be requested using the saarch() through pgarch()
options, which imply that one or more terms may be changed or added to the specification of the
variance equation.
These alternative specifications also address asymmetry. Both the ARCH and GARCH specifications imply a symmetric impact of innovations. Whether an innovation ε_t is positive or negative makes no difference to the expected variance σ²_t in the ensuing periods; only the size of the innovation matters: good news and bad news have the same effect. Many theories, however, suggest that positive and negative innovations should vary in their impact. For risk-averse investors, a large unanticipated drop in the market is more likely to lead to higher volatility than a large unanticipated increase (see Black [1976], Nelson [1991]). saarch(), tarch(), aarch(), abarch(), earch(), aparch(), and tparch() allow various specifications of asymmetric effects.
narch(), narchk(), nparch(), and nparchk() imply an asymmetric impact of a specific form.
All the models considered so far have a minimum conditional variance when the lagged innovations
are all zero. "No news is good news" when it comes to keeping the conditional variance small.
narch(), narchk(), nparch(), and nparchk() also have a symmetric response to innovations,
but they are not centered at zero. The entire news-response function (response to innovations) is
shifted horizontally so that minimum variance lies at some specific positive or negative value for prior
innovations.
ARCH-in-mean models allow the conditional variance of the series to influence the conditional mean. This is particularly convenient for modeling the risk–return relationship in financial series; the riskier an investment, with all else equal, the higher its expected return. ARCH-in-mean models modify the specification of the conditional mean equation to be

    y_t = x_t β + ψ σ²_t + ε_t    (ARCH-in-mean)
Although this linear form in the current conditional variance has dominated the literature, arch allows
the conditional variance to enter the mean equation through a nonlinear transformation g() and for
this transformed term to be included contemporaneously or lagged.
    y_t = x_t β + ψ₀ g(σ²_t) + ψ₁ g(σ²_{t−1}) + ψ₂ g(σ²_{t−2}) + ... + ε_t

Square root is the most commonly used g() transformation because researchers want to include a linear term for the conditional standard deviation, but any transform g() is allowed.
. use http://www.stata-press.com/data/r13/wpi1
. regress D.ln_wpi

      Source |       SS       df       MS              Number of obs =     123
-------------+------------------------------           F(  0,   122) =    0.00
       Model |           0     0           .           Prob > F      =       .
    Residual |   .02521709   122  .000206697           R-squared     =  0.0000
-------------+------------------------------           Adj R-squared =  0.0000
       Total |   .02521709   122  .000206697           Root MSE      =  .01438

------------------------------------------------------------------------------
    D.ln_wpi |      Coef.   Std. Err.      t    P>|t|     [95% Conf. Interval]
-------------+----------------------------------------------------------------
       _cons |   .0108215   .0012963     8.35   0.000     .0082553    .0133878
------------------------------------------------------------------------------

. estat archlm, lags(1)

LM test for autoregressive conditional heteroskedasticity (ARCH)

        lags(p)  |       chi2      df         Prob > chi2
        ---------+---------------------------------------
           1     |      8.366      1            0.0038
        ---------+---------------------------------------
            H0: no ARCH effects   vs.   H1: ARCH(p) disturbance

Because the LM test shows a p-value of 0.0038, which is well below 0.05, we reject the null hypothesis of no ARCH(1) effects. Thus we can further estimate the ARCH(1) parameter by specifying arch(1). See [R] regress postestimation time series for more information on Engle's LM test.
The first-order generalized ARCH model (GARCH, Bollerslev 1986) is the most commonly used specification for the conditional variance in empirical work and is typically written GARCH(1, 1). We can estimate a GARCH(1, 1) process for the log-differenced series by typing

. arch D.ln_wpi, arch(1) garch(1)
(setting optimization to BHHH)
Iteration 0:   log likelihood =  355.23458
Iteration 1:   log likelihood =  365.64586
 (output omitted )
Iteration 10:  log likelihood =  373.23397

                                                Number of obs   =        123
                                                Wald chi2(.)    =          .
                                                Prob > chi2     =          .

------------------------------------------------------------------------------
             |                 OPG
    D.ln_wpi |      Coef.   Std. Err.      z    P>|z|     [95% Conf. Interval]
-------------+----------------------------------------------------------------
ln_wpi       |
       _cons |   .0061167   .0010616     5.76   0.000     .0040361    .0081974
-------------+----------------------------------------------------------------
ARCH         |
        arch |
         L1. |   .4364123   .2437428     1.79   0.073    -.0413147    .9141394
       garch |
         L1. |   .4544606   .1866606     2.43   0.015     .0886127    .8203086
       _cons |   .0000269   .0000122     2.20   0.028     2.97e-06    .0000508
------------------------------------------------------------------------------

We have estimated the ARCH(1) parameter to be 0.436 and the GARCH(1) parameter to be 0.454, so our fitted GARCH(1, 1) model is

    y_t = 0.0061 + ε_t
    σ²_t = 0.436 ε²_{t−1} + 0.454 σ²_{t−1}
We now add an ARMA specification to the mean equation, typing

. arch D.ln_wpi, ar(1) ma(1 4) arch(1) garch(1)
 (output omitted )

                                                Number of obs   =        123
                                                Wald chi2(3)    =     153.56
                                                Prob > chi2     =     0.0000

------------------------------------------------------------------------------
             |                 OPG
    D.ln_wpi |      Coef.   Std. Err.      z    P>|z|     [95% Conf. Interval]
-------------+----------------------------------------------------------------
ln_wpi       |
       _cons |   .0069541   .0039517     1.76   0.078     -.000791    .0146992
-------------+----------------------------------------------------------------
ARMA         |
          ar |
         L1. |   .7922674   .1072225     7.39   0.000     .5821153     1.00242
          ma |
         L1. |   -.341774   .1499943    -2.28   0.023    -.6357574   -.0477905
         L4. |   .2451724   .1251131     1.96   0.050    -.0000447    .4903896
-------------+----------------------------------------------------------------
ARCH         |
        arch |
         L1. |   .2040449   .1244991     1.64   0.101    -.0399688    .4480586
       garch |
         L1. |   .6949687   .1892176     3.67   0.000     .3241091    1.065828
       _cons |   .0000119   .0000104     1.14   0.253    -8.52e-06    .0000324
------------------------------------------------------------------------------

We can test the joint significance of the ARCH and GARCH terms:

. test [ARCH]L.arch [ARCH]L.garch

 ( 1)  [ARCH]L.arch = 0
 ( 2)  [ARCH]L.garch = 0

           chi2(  2) =   84.92
         Prob > chi2 =    0.0000

(For comparison, we fit the model over the same sample used in example 1 of [TS] arima; Enders fits this GARCH model but over a slightly different sample.)
Technical note
The rather ugly iteration log on the previous result is typical, as difficulty in converging is common in ARCH models. This is actually a fairly well-behaved likelihood for an ARCH model. The "switching optimization to ..." messages are standard messages from the default optimization method for arch. The "backed up" messages are typical of BFGS stepping as the BFGS Hessian is often overoptimistic, particularly during early iterations. These messages are nothing to be concerned about.

Nevertheless, watch out for the messages "BFGS stepping has contracted, resetting BFGS Hessian" and "backed up", which can flag problems that may result in an iteration log that goes on and on. Stata will never report convergence and will never report final results. The question is, when do you give up and press Break, and if you do, what then?

If the "BFGS stepping has contracted" message occurs repeatedly (more than, say, five times), it often indicates that convergence will never be achieved. Literally, it means that the BFGS algorithm was stuck and reset its Hessian and took a steepest-descent step.

The "backed up" message, if it occurs repeatedly, also indicates problems, but only if the likelihood value is simultaneously not changing. If the message occurs repeatedly but the likelihood value is changing, as it did above, all is going well; it is just going slowly.
If you have convergence problems, you can specify options to assist the current maximization
method or try a different method. Or, your model specification and data may simply lead to a likelihood
that is not concave in the allowable region and thus cannot be maximized.
If you see the "backed up" message with no change in the likelihood, you can reset the gradient tolerance to a larger value. Specifying the gtolerance(999) option disables gradient checking,
allowing convergence to be declared more easily. This does not guarantee that convergence will be
declared, and even if it is, the global maximum likelihood may not have been found.
You can also try to specify initial values.
Finally, you can try a different maximization method; see options discussed under the Maximization
tab above.
ARCH models are notorious for having convergence difficulties. Unlike most estimators in Stata,
it is common for convergence to require many steps or even to fail. This is particularly true of the
explicitly nonlinear terms such as aarch(), narch(), aparch(), or archm (ARCH-in-mean), and of
any model with several lags in the ARCH terms. There is not always a solution. You can try other
maximization methods or different starting values, but if your data do not support your assumed ARCH
structure, convergence simply may not be possible.
ARCH models can be susceptible to irrelevant regressors or unnecessary lags, whether in the
specification of the conditional mean or in the conditional variance. In these situations, arch will
often continue to iterate, making little to no improvement in the likelihood. We view this conservative
approach as better than declaring convergence prematurely when the likelihood has not been fully
maximized. arch is estimating the conditional form of second sample moments, often with flexible
functions, and that is asking much of the data.
Technical note
if exp and in range are interpreted differently with commands accepting time-series operators.
The time-series operators are resolved before the conditions are tested, which may lead to some
confusion. Note the results of the following list commands:
. use http://www.stata-press.com/data/r13/archxmpl
. list t y l.y in 5/10

     +--------------------------+
     |      t      y        L.y |
     |--------------------------|
  5. | 1961q1   30.8       30.7 |
  6. | 1961q2   30.5       30.8 |
  7. | 1961q3   30.5       30.5 |
  8. | 1961q4   30.6       30.5 |
  9. | 1962q1   30.7       30.6 |
     |--------------------------|
 10. | 1962q2   30.6       30.7 |
     +--------------------------+
. keep in 5/10
(118 observations deleted)
. list t y l.y

     +--------------------------+
     |      t      y        L.y |
     |--------------------------|
  1. | 1961q1   30.8          . |
  2. | 1961q2   30.5       30.8 |
  3. | 1961q3   30.5       30.5 |
  4. | 1961q4   30.6       30.5 |
  5. | 1962q1   30.7       30.6 |
     |--------------------------|
  6. | 1962q2   30.6       30.7 |
     +--------------------------+
We have one more lagged observation for y in the first case: l.y was resolved before the in restriction was applied. In the second case, the dataset no longer contains the value of y to compute the first lag. This means that

. use http://www.stata-press.com/data/r13/archxmpl, clear
. arch y l.x if twithin(1962q2, 1990q3), arch(1)

is not the same as

. use http://www.stata-press.com/data/r13/archxmpl, clear
. keep if twithin(1962q2, 1990q3)
. arch y l.x, arch(1)
We now fit an EGARCH model of the WPI data, typing

. arch D.ln_wpi, ar(1) ma(1 4) earch(1) egarch(1)
 (output omitted )

                                                Number of obs   =        123
                                                Wald chi2(3)    =     156.02
                                                Prob > chi2     =     0.0000

------------------------------------------------------------------------------
             |                 OPG
    D.ln_wpi |      Coef.   Std. Err.      z    P>|z|     [95% Conf. Interval]
-------------+----------------------------------------------------------------
ln_wpi       |
       _cons |   .0087342   .0034004     2.57   0.010     .0020696    .0153989
-------------+----------------------------------------------------------------
ARMA         |
          ar |
         L1. |    .769212   .0968396     7.94   0.000     .5794099     .959014
          ma |
         L1. |  -.3554617   .1265725    -2.81   0.005    -.6035393   -.1073841
         L4. |    .241463   .0863832     2.80   0.005      .072155    .4107711
-------------+----------------------------------------------------------------
ARCH         |
       earch |
         L1. |   .4064007    .116351     3.49   0.000      .178357    .6344445
     earch_a |
         L1. |   .2467351   .1233365     2.00   0.045     .0049999    .4884702
      egarch |
         L1. |   .8417291   .0704079    11.96   0.000     .7037322    .9797261
       _cons |  -1.488402   .6604397    -2.25   0.024     -2.78284   -.1939643
------------------------------------------------------------------------------

Our fitted model is

    ln(σ²_t) = −1.49 + .406 z_{t−1} + .247 (|z_{t−1}| − √(2/π)) + .842 ln(σ²_{t−1})

where z_t = ε_t/σ_t, which is distributed as N(0, 1).
This is a strong indication for a leverage effect. The positive L1.earch coefficient implies that
positive innovations (unanticipated price increases) are more destabilizing than negative innovations.
The effect appears strong (0.406) and is substantially larger than the symmetric effect (0.247). In fact,
the relative scales of the two coefficients imply that the positive leverage completely dominates the
symmetric effect.
This can readily be seen if we plot what is often referred to as the news-response or news-impact function. This curve shows the resulting conditional variance as a function of unanticipated news, in the form of innovations, that is, the conditional variance σ²_t as a function of ε_t. Thus we must evaluate σ²_t for various values of ε_t, say, −4 to 4, and then graph the result.
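A minimal sketch of that evaluation, using the EGARCH estimates above, taking the lagged standardized innovation z_{t−1} as the horizontal axis and holding ln(σ²_{t−1}) at 0; the variable names are hypothetical:

. preserve
. drop _all
. range z -4 4 101
. generate sigma2 = exp(-1.488402 + .4064007*z + .2467351*(abs(z) - sqrt(2/_pi)))
. line sigma2 z
. restore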
(figure: daily changes in ln_dow, 01jan1950 to 01jan1990)
We formed this difference by referring to D.ln_dow, but only after playing a trick. The series is
daily, and each observation represents the Dow closing index for the day. Our data included a time
variable recorded as a daily date. We wanted, however, to model the log differences in the series,
and we wanted the span from Friday to Monday to appear as a single-period difference. That is, the
day before Monday is Friday. Because our dataset was tsset with date, the span from Friday to
Monday was 3 days. The solution was to create a second variable that sequentially numbered the
observations. By tsseting the data with this new variable, we obtained the desired differences.
. generate t = _n
. tsset t
. list date dayofwk t ln_dow D.ln_dow in 1/8

       +----------------------------------------------------+
       |      date   dayofwk      t     ln_dow     D.ln_dow |
       |----------------------------------------------------|
    1. | 02jan1953         5      1   5.677096            . |
    2. | 05jan1953         1      2   5.682899     .0058026 |
    3. | 06jan1953         2      3   5.677439    -.0054603 |
    4. | 07jan1953         3      4   5.672636    -.0048032 |
    5. | 08jan1953         4      5   5.671259    -.0013762 |
       |----------------------------------------------------|
    6. | 09jan1953         5      6   5.661223    -.0100365 |
    7. | 12jan1953         1      7   5.653191    -.0080323 |
    8. | 13jan1953         2      8   5.659134     .0059433 |
       +----------------------------------------------------+

. list date dayofwk t ln_dow D.ln_dow in -8/l

       +----------------------------------------------------+
       |      date   dayofwk      t     ln_dow     D.ln_dow |
       |----------------------------------------------------|
 9334. | 08feb1990         4   9334   7.880188     .0016198 |
 9335. | 09feb1990         5   9335   7.881635     .0014472 |
 9336. | 12feb1990         1   9336   7.870601     -.011034 |
 9337. | 13feb1990         2   9337   7.872665     .0020638 |
 9338. | 14feb1990         3   9338   7.872577    -.0000877 |
       |----------------------------------------------------|
 9339. | 15feb1990         4   9339    7.88213      .009553 |
 9340. | 16feb1990         5   9340   7.876863    -.0052676 |
 9341. | 20feb1990         2   9341   7.862054    -.0148082 |
       +----------------------------------------------------+
The difference operator D spans weekends because the specified time variable, t, is not a true date
and has a difference of 1 for all observations. We must leave this contrived time variable in place
during estimation, or arch will be convinced that our dataset has gaps. If we were using calendar
dates, we would indeed have gaps.
Ding, Granger, and Engle (1993) fit an A-PARCH model of daily returns of the Standard and Poor's 500 (S&P 500) for 3jan1928–30aug1991. We will fit the same model for the Dow data shown above. The model includes an AR(1) term as well as the A-PARCH specification of conditional variance.
We fit the model by typing

. arch D.ln_dow, ar(1) aparch(1) pgarch(1)
 (output omitted; the final iteration reports "backed up")

                                                Number of obs   =       9340
                                                Wald chi2(1)    =     175.46
                                                Prob > chi2     =     0.0000

------------------------------------------------------------------------------
             |                 OPG
    D.ln_dow |      Coef.   Std. Err.      z    P>|z|     [95% Conf. Interval]
-------------+----------------------------------------------------------------
ln_dow       |
       _cons |   .0001786   .0000875     2.04   0.041     7.15e-06      .00035
-------------+----------------------------------------------------------------
ARMA         |
          ar |
         L1. |   .1410944   .0106519    13.25   0.000     .1202171    .1619716
-------------+----------------------------------------------------------------
ARCH         |
      aparch |
         L1. |   .0626323   .0034307    18.26   0.000     .0559082    .0693564
    aparch_e |
         L1. |  -.3645093   .0378485    -9.63   0.000    -.4386909   -.2903277
      pgarch |
         L1. |   .9299015   .0030998   299.99   0.000      .923826     .935977
       _cons |   7.19e-06   2.53e-06     2.84   0.004     2.23e-06    .0000121
-------------+----------------------------------------------------------------
POWER        |
       power |   1.585187   .0629186    25.19   0.000     1.461868    1.708505
------------------------------------------------------------------------------
In the iteration log, the final iteration reports the message backed up. For most estimators,
ending on a backed up message would be a cause for great concern, but not with arch or, for that
matter, arima, as long as you do not specify the gtolerance() option. arch and arima, by default,
monitor the gradient and declare convergence only if, in addition to everything else, the gradient is
small enough.
The fitted model demonstrates substantial asymmetry, with the large negative L1.aparch_e
coefficient indicating that the market responds with much more volatility to unexpected drops in
returns (bad news) than it does to increases in returns (good news).
. arch D.ln_dow, ar(1) aparch(1) pgarch(1) distribution(ged)
  log likelihood = 31139.547
  log likelihood =  31348.13
  log likelihood = 32486.461

                                                Number of obs     =       9340
                                                Wald chi2(1)      =     178.22
                                                Prob > chi2       =     0.0000
------------------------------------------------------------------------------
             |                 OPG
    D.ln_dow |      Coef.   Std. Err.      z    P>|z|     [95% Conf. Interval]
-------------+----------------------------------------------------------------
ln_dow       |
       _cons |   .0002735    .000078     3.51   0.000     .0001207    .0004264
-------------+----------------------------------------------------------------
ARMA         |
          ar |
         L1. |   .1337473   .0100187    13.35   0.000     .1141109    .1533836
-------------+----------------------------------------------------------------
ARCH         |
      aparch |
         L1. |   .0641762   .0049401    12.99   0.000     .0544938    .0738587
    aparch_e |
         L1. |  -.4052109   .0573054    -7.07   0.000    -.5175273   -.2928944
      pgarch |
         L1. |   .9341738   .0045668   204.56   0.000      .925223    .9431246
       _cons |   .0000216   .0000117     1.84   0.066    -1.39e-06    .0000446
-------------+----------------------------------------------------------------
POWER        |
       power |   1.325313   .1030748    12.86   0.000      1.12329    1.527336
-------------+----------------------------------------------------------------
    /lnshape |   .3527009    .009482    37.20   0.000     .3341166    .3712853
-------------+----------------------------------------------------------------
       shape |   1.422906    .013492                      1.396706    1.449597
------------------------------------------------------------------------------
The ARMA and ARCH coefficients are similar to those we obtained when we assumed normally
distributed errors, though we do note that the power term is now closer to 1. The estimated shape
parameter for the generalized error distribution is shown at the bottom of the output. Here the shape
parameter is 1.42; because it is less than 2, the distribution of the errors has tails that are fatter than
they would be if the errors were normally distributed.
The original model was fit on U.K. inflation; we will again use the WPI data and retain our earlier
specification of the mean equation, which differs from Engle's U.K. inflation model. With our
constraints, we type
. arch D.ln_wpi, ar(1) ma(1 4) arch(1/4) constraints(1/3)
(setting optimization to BHHH)
Iteration 0:   log likelihood =  396.80198
Iteration 1:   log likelihood =  399.07809
 (output omitted)
Iteration 9:   log likelihood =  399.46243

                                                Number of obs     =        123
                                                Wald chi2(3)      =     123.32
                                                Prob > chi2       =     0.0000
 ( 1)  .75*[ARCH]L.arch - [ARCH]L2.arch = 0
 ( 2)  .5*[ARCH]L.arch - [ARCH]L3.arch = 0
 ( 3)  .25*[ARCH]L.arch - [ARCH]L4.arch = 0
------------------------------------------------------------------------------
             |                 OPG
    D.ln_wpi |      Coef.   Std. Err.      z    P>|z|     [95% Conf. Interval]
-------------+----------------------------------------------------------------
ln_wpi       |
       _cons |   .0077204   .0034531     2.24   0.025     .0009525    .0144883
-------------+----------------------------------------------------------------
ARMA         |
          ar |
         L1. |   .7388168   .1126811     6.56   0.000     .5179659    .9596676
          ma |
         L1. |  -.2559691   .1442861    -1.77   0.076    -.5387646    .0268264
         L4. |   .2528923   .1140185     2.22   0.027       .02942    .4763645
-------------+----------------------------------------------------------------
ARCH         |
        arch |
         L1. |   .2180138   .0737787     2.95   0.003     .0734101    .3626174
         L2. |   .1635103    .055334     2.95   0.003     .0550576    .2719631
         L3. |   .1090069   .0368894     2.95   0.003     .0367051    .1813087
         L4. |   .0545034   .0184447     2.95   0.003     .0183525    .0906544
       _cons |   .0000483   7.66e-06     6.30   0.000     .0000333    .0000633
------------------------------------------------------------------------------

The L1.arch, L2.arch, L3.arch, and L4.arch coefficients have the constrained relative sizes.
Stored results

arch stores the following in e():

Scalars
  e(N)               number of observations
  e(N_gaps)          number of gaps
  e(condobs)         number of conditioning observations
  e(k)               number of parameters
  e(k_eq)            number of equations in e(b)
  e(k_eq_model)      number of equations in overall model test
  e(k_dv)            number of dependent variables
  e(k_aux)           number of auxiliary parameters
  e(df_m)            model degrees of freedom
  e(ll)              log likelihood
  e(chi2)            χ²
  e(p)               significance
  e(archi)           σ₀² = ε₀², priming values
  e(archany)         1 if model contains ARCH terms, 0 otherwise
  e(tdf)             degrees of freedom for Student's t distribution
  e(shape)           shape parameter for generalized error distribution
  e(tmin)            minimum time
  e(tmax)            maximum time
  e(power)           φ for power ARCH terms
  e(rank)            rank of e(V)
  e(ic)              number of iterations
  e(rc)              return code
  e(converged)       1 if converged, 0 otherwise
Macros
  e(cmd)             arch
  e(cmdline)         command as typed
  e(depvar)          name of dependent variable
  e(covariates)      list of covariates
  e(eqnames)         names of equations
  e(wtype)           weight type
  e(wexp)            weight expression
  e(title)           title in estimation output
  e(tmins)           formatted minimum time
  e(tmaxs)           formatted maximum time
  e(dist)            distribution for error term: gaussian, t, or ged
  e(mhet)            1 if multiplicative heteroskedasticity
  e(dfopt)           yes if degrees of freedom for t distribution or shape
                       parameter for GED distribution was estimated; no otherwise
  e(chi2type)        Wald; type of model χ² test
  e(vce)             vcetype specified in vce()
  e(vcetype)         title used to label Std. Err.
  e(ma)              lags for moving-average terms
  e(ar)              lags for autoregressive terms
  e(arch)            lags for ARCH terms
  e(archm)           ARCH-in-mean lags
  e(archmexp)        ARCH-in-mean exp
  e(earch)           lags for EARCH terms
  e(egarch)          lags for EGARCH terms
  e(aarch)           lags for AARCH terms
  e(narch)           lags for NARCH terms
  e(aparch)          lags for A-PARCH terms
  e(nparch)          lags for NPARCH terms
  e(saarch)          lags for SAARCH terms
  e(parch)           lags for PARCH terms
  e(tparch)          lags for TPARCH terms
  e(abarch)          lags for ABARCH terms
  e(tarch)           lags for TARCH terms
  e(atarch)          lags for ATARCH terms
  e(sdgarch)         lags for SDGARCH terms
  e(pgarch)          lags for PGARCH terms
  e(garch)           lags for GARCH terms
  e(opt)             type of optimization
  e(ml_method)       type of ml method
  e(user)            name of likelihood-evaluator program
  e(technique)       maximization technique
  e(tech)            maximization technique, including number of iterations
  e(tech_steps)      number of iterations performed before switching techniques
  e(properties)      b V
  e(estat_cmd)       program used to implement estat
  e(predict)         program used to implement predict
  e(marginsok)       predictions allowed by margins
  e(marginsnotok)    predictions disallowed by margins
Matrices
  e(b)               coefficient vector
  e(Cns)             constraints matrix
  e(ilog)            iteration log (up to 20 iterations)
  e(gradient)        gradient vector
  e(V)               variance–covariance matrix of the estimators
  e(V_modelbased)    model-based variance

Functions
  e(sample)          marks estimation sample
Methods and formulas

The model fit by arch is

$$y_t = \mathbf{x}_t\boldsymbol{\beta} + \sum_{i} \psi_i\, g(\sigma^2_{t-i})
 + \sum_{j=1}^{p} \rho_j \Big\{ y_{t-j} - \mathbf{x}_{t-j}\boldsymbol{\beta}
 - \sum_{i} \psi_i\, g(\sigma^2_{t-j-i}) \Big\}
 + \sum_{k=1}^{q} \theta_k\, \epsilon_{t-k} + \epsilon_t
 \qquad \text{(conditional mean)}$$

where
  β are the regression parameters,
  ψ are the ARCH-in-mean parameters,
  ρ are the autoregression parameters,
  θ are the moving-average parameters, and
  g(·) is a general function of the conditional variance; see the archmexp() option.

The conditional variance takes one of the forms

$$\sigma_t^2 = \gamma_0 + A(\boldsymbol{\sigma}, \boldsymbol{\epsilon}) + B(\boldsymbol{\sigma}, \boldsymbol{\epsilon})^2 \tag{1}$$

$$\ln \sigma_t^2 = \gamma_0 + C(\ln \boldsymbol{\sigma}, \mathbf{z}) \tag{2}$$

$$\sigma_t^{\varphi} = \gamma_0 + D(\boldsymbol{\sigma}, \boldsymbol{\epsilon}) \tag{3}$$

where A(σ, ε), B(σ, ε), C(ln σ, z), and D(σ, ε) are linear sums of the appropriate ARCH terms; see
Details of syntax for more information. Equation (1) is used if no EGARCH or power ARCH terms
are included in the model, (2) if EGARCH terms are included, and (3) if any power ARCH terms are
included; see Details of syntax.
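As a rough guide to how the options map into these equations, a minimal sketch (the model
specifications are illustrative and echo the examples in this entry):

. arch D.y, arch(1) garch(1)
. arch D.y, earch(1) egarch(1)
. arch D.y, aparch(1) pgarch(1)

The first specification involves only linear-sum terms and is estimated in form (1); the second
includes EGARCH terms and so is estimated in form (2); the third includes power ARCH terms and
so is estimated in form (3).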
Methods and formulas are presented under the following headings:
Priming values
Likelihood from prediction error decomposition
Missing data
Priming values

The above model is recursive with potentially long memory. It is necessary to assume preestimation
sample values for ε_t, ε²_t, and σ²_t to begin the recursions, and the remaining computations are therefore
conditioned on these priming values, which can be controlled using the arch0() and arma0()
options. See options discussed under the Priming tab above.
The arch0(xb0wt) and arch0(xbwt) options compute a weighted sum of estimated disturbances
with more weight on the early observations. With either of these options,

$$\sigma^2_{t_0-i} = \epsilon^2_{t_0-i} = \sum_{t=0}^{T-1} .7^{\,T-t-1}\, \epsilon^2_{T-t}$$

where t₀ is the first observation for which the likelihood is computed; see options discussed under the
Priming tab above. The ε²_t are all computed from the conditional mean equation. If arch0(xb0wt)
is specified, β, ψ_i, ρ_j, and θ_k are taken from initial regression estimates and held constant during
optimization. If arch0(xbwt) is specified, the current estimates of β, ψ_i, ρ_j, and θ_k are used to
compute ε²_t on every iteration. If any ψ_i is in the mean equation (ARCH-in-mean is specified), the
estimates of ε²_t from the initial regression estimates are not consistent.
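A minimal sketch of selecting one of these priming rules (the model specification is illustrative):

. arch D.ln_wpi, arch(1) garch(1) arch0(xb0wt)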
Likelihood from prediction error decomposition

The likelihood is computed from the prediction error decomposition. For the Gaussian distribution,
the log likelihood for observation t is

$$\ln L_t = -\frac{1}{2}\left\{ \ln(2\pi\sigma_t^2) + \frac{\epsilon_t^2}{\sigma_t^2} \right\}$$

For the generalized error distribution, the scale factor appearing in the density is

$$\lambda = \left\{ \frac{\Gamma(s^{-1})}{2^{2/s}\,\Gamma(3s^{-1})} \right\}^{1/2}$$

where s is the shape parameter and Γ(·) is the gamma function.
This command supports the Huber/White/sandwich estimator of the variance using vce(robust).
See [P] robust, particularly Maximum likelihood estimators and Methods and formulas.
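For example, a minimal sketch (the specification is illustrative):

. arch D.ln_wpi, arch(1) garch(1) vce(robust)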
Missing data

arch allows missing data or missing observations but does not attempt to condition on the
surrounding data. If a dynamic component cannot be computed (ε_t, ε²_t, and/or σ²_t), its priming
value is substituted. If a covariate, the dependent variable, or the entire observation is missing, the
observation does not enter the likelihood, and its dynamic components are set to their priming values
for that observation. This is acceptable only asymptotically and should not be used with a great deal
of missing data.
Robert Fry Engle (1942– ) was born in Syracuse, New York. He earned degrees in physics
and economics at Williams College and Cornell and then worked at MIT and the University of
California, San Diego, before moving to New York University Stern School of Business in 2000.
He was awarded the 2003 Nobel Prize in Economics for research on autoregressive conditional
heteroskedasticity and is a leading expert in time-series analysis, especially the analysis of
financial markets.
References
Adkins, L. C., and R. C. Hill. 2011. Using Stata for Principles of Econometrics. 4th ed. Hoboken, NJ: Wiley.
Baum, C. F. 2000. sts15: Tests for stationarity of a time series. Stata Technical Bulletin 57: 36–39. Reprinted in
  Stata Technical Bulletin Reprints, vol. 10, pp. 356–360. College Station, TX: Stata Press.
Baum, C. F., and R. I. Sperling. 2000. sts15.1: Tests for stationarity of a time series: Update. Stata Technical Bulletin
  58: 35–36. Reprinted in Stata Technical Bulletin Reprints, vol. 10, pp. 360–362. College Station, TX: Stata Press.
Baum, C. F., and V. L. Wiggins. 2000. sts16: Tests for long memory in a time series. Stata Technical Bulletin 57:
  39–44. Reprinted in Stata Technical Bulletin Reprints, vol. 10, pp. 362–368. College Station, TX: Stata Press.
Becketti, S. 2013. Introduction to Time Series Using Stata. College Station, TX: Stata Press.
Berndt, E. K., B. H. Hall, R. E. Hall, and J. A. Hausman. 1974. Estimation and inference in nonlinear structural
  models. Annals of Economic and Social Measurement 3/4: 653–665.
Black, F. 1976. Studies of stock price volatility changes. Proceedings of the American Statistical Association, Business
  and Economics Statistics 177–181.
Bollerslev, T. 1986. Generalized autoregressive conditional heteroskedasticity. Journal of Econometrics 31: 307–327.
Bollerslev, T., R. Y. Chou, and K. F. Kroner. 1992. ARCH modeling in finance. Journal of Econometrics 52: 5–59.
Bollerslev, T., R. F. Engle, and D. B. Nelson. 1994. ARCH models. In Vol. 4 of Handbook of Econometrics, ed.
  R. F. Engle and D. L. McFadden. Amsterdam: Elsevier.
Bollerslev, T., and J. M. Wooldridge. 1992. Quasi-maximum likelihood estimation and inference in dynamic models
  with time-varying covariances. Econometric Reviews 11: 143–172.
Davidson, R., and J. G. MacKinnon. 1993. Estimation and Inference in Econometrics. New York: Oxford University
  Press.
———. 2004. Econometric Theory and Methods. New York: Oxford University Press.
Diebold, F. X. 2003. The ET Interview: Professor Robert F. Engle. Econometric Theory 19: 1159–1193.
Ding, Z., C. W. J. Granger, and R. F. Engle. 1993. A long memory property of stock market returns and a new
  model. Journal of Empirical Finance 1: 83–106.
Enders, W. 2004. Applied Econometric Time Series. 2nd ed. New York: Wiley.
Engle, R. F. 1982. Autoregressive conditional heteroscedasticity with estimates of the variance of United Kingdom
  inflation. Econometrica 50: 987–1007.
———. 1990. Discussion: Stock volatility and the crash of '87. Review of Financial Studies 3: 103–106.
Engle, R. F., D. M. Lilien, and R. P. Robins. 1987. Estimating time varying risk premia in the term structure: The
  ARCH-M model. Econometrica 55: 391–407.
Glosten, L. R., R. Jagannathan, and D. E. Runkle. 1993. On the relation between the expected value and the volatility
  of the nominal excess return on stocks. Journal of Finance 48: 1779–1801.
Greene, W. H. 2012. Econometric Analysis. 7th ed. Upper Saddle River, NJ: Prentice Hall.
Hamilton, J. D. 1994. Time Series Analysis. Princeton: Princeton University Press.
Harvey, A. C. 1989. Forecasting, Structural Time Series Models and the Kalman Filter. Cambridge: Cambridge
  University Press.
———. 1990. The Econometric Analysis of Time Series. 2nd ed. Cambridge, MA: MIT Press.
Higgins, M. L., and A. K. Bera. 1992. A class of nonlinear ARCH models. International Economic Review 33:
  137–158.
Hill, R. C., W. E. Griffiths, and G. C. Lim. 2011. Principles of Econometrics. 4th ed. Hoboken, NJ: Wiley.
Judge, G. G., W. E. Griffiths, R. C. Hill, H. Lütkepohl, and T.-C. Lee. 1985. The Theory and Practice of Econometrics.
  2nd ed. New York: Wiley.
Kmenta, J. 1997. Elements of Econometrics. 2nd ed. Ann Arbor: University of Michigan Press.
Mandelbrot, B. B. 1963. The variation of certain speculative prices. Journal of Business 36: 394–419.
Nelson, D. B. 1991. Conditional heteroskedasticity in asset returns: A new approach. Econometrica 59: 347–370.
Press, W. H., S. A. Teukolsky, W. T. Vetterling, and B. P. Flannery. 2007. Numerical Recipes: The Art of Scientific
  Computing. 3rd ed. New York: Cambridge University Press.
Stock, J. H., and M. W. Watson. 2011. Introduction to Econometrics. 3rd ed. Boston: Addison–Wesley.
Wooldridge, J. M. 2013. Introductory Econometrics: A Modern Approach. 5th ed. Mason, OH: South-Western.
Zakoian, J. M. 1994. Threshold heteroskedastic models. Journal of Economic Dynamics and Control 18: 931–955.
Also see
[TS] arch postestimation Postestimation tools for arch
[TS] tsset Declare data to be time-series data
[TS] arima ARIMA, ARMAX, and other dynamic regression models
[TS] mgarch Multivariate GARCH models
[R] regress Linear regression
[U] 20 Estimation and postestimation commands
Title
arch postestimation Postestimation tools for arch
Description
Remarks and examples
Description

The following postestimation commands are available after arch:

Command            Description
---------------------------------------------------------------------------
estat ic           Akaike's and Schwarz's Bayesian information criteria (AIC and BIC)
estat summarize    summary statistics for the estimation sample
estat vce          variance–covariance matrix of the estimators (VCE)
estimates          cataloging estimation results
forecast           dynamic forecasts and simulations
lincom             point estimates, standard errors, testing, and inference for
                     linear combinations of coefficients
lrtest             likelihood-ratio test
margins            marginal means, predictive margins, marginal effects, and
                     average marginal effects
marginsplot        graph the results from margins (profile plots, interaction
                     plots, etc.)
nlcom              point estimates, standard errors, testing, and inference for
                     nonlinear combinations of coefficients
predict            predictions, residuals, influence statistics, and other
                     diagnostic measures
predictnl          point estimates, standard errors, testing, and inference for
                     generalized predictions
test               Wald tests of simple and composite linear hypotheses
testnl             Wald tests of nonlinear hypotheses
---------------------------------------------------------------------------

Syntax for predict

predict [type] newvar [if] [in] [, statistic options]

statistic        Description
---------------------------------------------------------------------------
Main
  xb             predictions from the mean equation; the default
  y              predictions of depvar
  variance       predictions of the conditional variance
  het            predictions of the multiplicative heteroskedasticity
                   component of variance
  residuals      residuals or predicted innovations
  yresiduals     residuals or predicted innovations in terms of depvar
---------------------------------------------------------------------------
These statistics are available both in and out of sample; type
predict ... if e(sample) ... if wanted only for the estimation sample.
options                 Description
---------------------------------------------------------------------------
Options
  dynamic(time_constant)             how to handle the lags of y_t
  at(varname_ε | #_ε  varname_σ² | #_σ²)
                                     make static predictions
  t0(time_constant)                  set starting point for the recursions
  structural                         calculate considering the structural
                                       component only
---------------------------------------------------------------------------

Menu for predict

Statistics > Postestimation > Predictions, residuals, etc.

Options for predict

Main
xb, the default, calculates the predictions from the mean equation. If D.depvar is the dependent
variable, these predictions are of D.depvar and not of depvar itself.
y specifies that predictions of depvar are to be made even if the model was specified for, say,
D.depvar.
variance calculates predictions of the conditional variance σ̂²_t.
het calculates predictions of the multiplicative heteroskedasticity component of variance.
residuals calculates the residuals. If no other options are specified, these are the predicted innovations
ε_t; that is, they include any ARMA component. If the structural option is specified, these are
the residuals from the mean equation, ignoring any ARMA terms; see structural below. The
residuals are always from the estimated equation, which may have a differenced dependent variable;
if depvar is differenced, they are not the residuals of the undifferenced depvar.
yresiduals calculates the residuals for depvar, even if the model was specified for, say, D.depvar. As
with residuals, the yresiduals are computed from the model, including any ARMA component.
If the structural option is specified, any ARMA component is ignored and yresiduals are the
residuals from the structural equation; see structural below.
Options
dynamic(time constant) specifies how lags of yt in the model are to be handled. If dynamic()
is not specified, actual values are used everywhere lagged values of yt appear in the model to
produce one-step-ahead forecasts.
dynamic(time constant) produces dynamic (also known as recursive) forecasts. time constant
specifies when the forecast is to switch from one step ahead to dynamic. In dynamic forecasts,
references to yt evaluate to the prediction of yt for all periods at or after time constant; they
evaluate to the actual value of yt for all prior periods.
dynamic(10), for example, would calculate predictions in which any reference to y_t with t < 10
evaluates to the actual value of y_t and any reference to y_t with t ≥ 10 evaluates to the prediction
of y_t. This means that one-step-ahead predictions would be calculated for t < 10 and dynamic
predictions would be calculated thereafter. Depending on the lag structure of the model, the dynamic
predictions might still refer to some actual values of y_t.
You may also specify dynamic(.) to have predict automatically switch from one-step-ahead to
dynamic predictions at p + q , where p is the maximum AR lag and q is the maximum MA lag.
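A minimal sketch of the two behaviors (the new variable names are illustrative):

. predict y_onestep, xb
. predict y_dyn, xb dynamic(10)

The first line produces one-step-ahead predictions throughout; the second produces one-step-ahead
predictions for t < 10 and dynamic predictions thereafter.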
at(varname_ε | #_ε  varname_σ² | #_σ²) makes static predictions. at() and dynamic() may not be
specified together.

Specifying at() allows static evaluation of results for a given set of disturbances. This is useful,
for instance, in generating the news response function. at() specifies two sets of values to be
used for ε_t and σ²_t, the dynamic components in the model. These specified values are treated as
given. Also, any lagged values of depvar in the model are obtained from the real values of the
dependent variable. All computations are based on actual data and the given values.

at() requires that you specify two arguments, which can be either a variable name or a number.
The first argument supplies the values to be used for ε_t; the second supplies the values to be used
for σ²_t. If σ²_t plays no role in your model, the second argument may be specified as . to indicate
missing.
t0(time constant) specifies the starting point for the recursions to compute the predicted statistics;
disturbances are assumed to be 0 for t < t0(). The default is to set t0() to the minimum t
observed in the estimation sample, meaning that observations before that are assumed to have
disturbances of 0.
t0() is irrelevant if structural is specified because then all observations are assumed to have
disturbances of 0.
t0(5), for example, would begin recursions at t = 5. If your data were quarterly, you might
instead type t0(tq(1961q2)) to obtain the same result.
Any ARMA component in the mean equation or GARCH term in the conditional-variance equation
makes arch recursive and dependent on the starting point of the predictions. This includes
one-step-ahead predictions.
structural makes the calculation considering the structural component only, ignoring any ARMA
terms, and producing the steady-state equilibrium predictions.
epsilon and sigma2 are either variables or numbers. Using sigma2 is a little tricky because you specify
values of σ²_t, which predict is supposed to predict. predict does not simply copy variable sigma2
into newvar but uses the lagged values contained in sigma2 to produce the predicted value of σ²_t. It
does this for all t, and those results are saved in newvar. (If you are interested in dynamic predictions
of σ²_t, see Options for predict.)

We will generate predictions for σ²_t, assuming that the lagged values of σ²_t are 1, and we will
vary ε_t from -4 to 4. First, we will create variable et containing ε_t, and then we will create and
graph the predictions:
. generate et = (_n-64)/15
. predict sigma2, variance at(et 1)
. line sigma2 et in 2/l, m(i) c(l) title(News response function)
(figure: news response function; predicted conditional variance sigma2, 0 to 2.5, plotted against et)
The positive asymmetry does indeed dominate the shape of the news response function. In fact, the
response is a monotonically increasing function of news. The form of the response function shows
that, for our simple model, only positive, unanticipated price increases have the destabilizing effect
that we observe as larger conditional variances.
Example 2

Continuing with our ARCH model with constraints example (example 6) in [TS] arch, using lincom
we can recover the α parameter from the original specification.
. lincom [ARCH]l1.arch/.4
 ( 1)  2.5*[ARCH]L.arch = 0
------------------------------------------------------------------------------
    D.ln_wpi |      Coef.   Std. Err.      z    P>|z|     [95% Conf. Interval]
-------------+----------------------------------------------------------------
         (1) |   .5450344   .1844468     2.95   0.003     .1835253    .9065436
------------------------------------------------------------------------------
Also see
[TS] arch Autoregressive conditional heteroskedasticity (ARCH) family of estimators
[U] 20 Estimation and postestimation commands
Title
arfima Autoregressive fractionally integrated moving-average models
Syntax    Menu    Description    Options
Remarks and examples    Stored results    Methods and formulas    References
Also see
Syntax

arfima depvar [indepvars] [if] [in] [, options]

options                   Description
---------------------------------------------------------------------------
Model
  noconstant              suppress constant term
  ar(numlist)             autoregressive terms
  ma(numlist)             moving-average terms
  smemory                 estimate short-memory model without fractional
                            integration
  mle                     maximum likelihood estimates; the default
  mpl                     maximum modified-profile-likelihood estimates
  constraints(numlist)    apply specified linear constraints
  collinear               keep collinear variables
SE/Robust
  vce(vcetype)            vcetype may be oim or robust
Reporting
  level(#)                set confidence level; default is level(95)
  nocnsreport             do not display constraints
  display_options         control columns and column formats, row spacing,
                            and line width
Maximization
  maximize_options        control the maximization process; seldom used
  coeflegend              display legend instead of statistics
---------------------------------------------------------------------------
You must tsset your data before using arfima; see [TS] tsset.
indepvars may contain factor variables; see [U] 11.4.3 Factor variables.
depvar and indepvars may contain time-series operators; see [U] 11.4.4 Time-series varlists.
by, fp, rolling, and statsby are allowed; see [U] 11.1.10 Prefix commands.
coeflegend does not appear in the dialog box.
See [U] 20 Estimation and postestimation commands for more capabilities of estimation commands.

Menu

Statistics > Time series > ARFIMA models
Description
arfima estimates the parameters of autoregressive fractionally integrated moving-average (ARFIMA)
models.
Long-memory processes are stationary processes whose autocorrelation functions decay more
slowly than short-memory processes. The ARFIMA model provides a parsimonious parameterization of
long-memory processes that nests the autoregressive moving-average (ARMA) model, which is widely
used for short-memory processes. By allowing for fractional degrees of integration, the ARFIMA model
also generalizes the autoregressive integrated moving-average (ARIMA) model with integer degrees of
integration. See [TS] arima for ARMA and ARIMA parameter estimation.
Options
Model
SE/Robust
vce(vcetype) specifies the type of standard error reported, which includes types that are robust to
some kinds of misspecification (robust) and that are derived from asymptotic theory (oim); see
[R] vce option.
Options vce(robust) and mpl may not be combined.
Reporting
Maximization

maximize_options: difficult, technique(algorithm_spec), iterate(#), [no]log, trace,
gradient, showstep, hessian, showtolerance, tolerance(#), ltolerance(#),
nrtolerance(#), gtolerance(#), nonrtolerance, and from(init_specs); see [R] maximize for
all options.

Some special points for arfima's maximize_options are listed below.
technique(algorithm spec) sets the optimization algorithm. The default algorithm is BFGS and
BHHH is not allowed. See [R] maximize for a description of the available optimization algorithms.
You can specify multiple optimization methods. For example, technique(bfgs 10 nr) requests
that the optimizer perform 10 BFGS iterations and then switch to NewtonRaphson until convergence.
iterate(#) sets the maximum number of iterations. When the maximization is not going well,
set the maximum number of iterations to the point where the optimizer appears to be stuck and
inspect the estimation results at that point.
from(matname) allows you to specify starting values for the model parameters in a row vector.
We recommend that you use the iterate(0) option, retrieve the initial estimates from e(b),
and modify these elements.
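A minimal sketch of that workflow (the equation and coefficient names, ARFIMA and d here, are
illustrative; run arfima, coeflegend to see the actual names):

. arfima width, ar(1) iterate(0)
. matrix b0 = e(b)
. matrix b0[1, colnumb(b0, "ARFIMA:d")] = 0.3
. arfima width, ar(1) from(b0)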
The following option is available with arfima but is not shown in the dialog box:
coeflegend; see [R] estimation options.
Technical note

An ARIMA model for the series y_t is given by

$$\phi(L)(1-L)^d\, y_t = \theta(L)\,\epsilon_t \tag{1}$$

where φ(L) is the autoregressive lag polynomial, θ(L) is the moving-average lag polynomial, and d
is an integer degree of integration.
Because long-memory processes are stationary, one might be tempted to approximate the processes
with many terms in an ARMA model. But these approximate models are difficult to fit and to interpret
because ARMA models with many terms are difficult to estimate and the ARMA parameterization has
an inherent short-run nature. In contrast, the ARFIMA model has the d parameter for the long-run
dependence and ARMA parameters for short-run dependence. Using different parameters for different
types of dependence facilitates estimation and interpretation, as discussed by Sowell (1992a).
Technical note

An ARFIMA model specifies a fractionally integrated ARMA process. Formally, the ARFIMA model
specifies that

$$\phi(L)(1-L)^d\,(y_t - \mathbf{x}_t\boldsymbol{\beta}) = \theta(L)\,\epsilon_t$$

where the fractional-difference parameter d is now allowed to be a real number.
ARFIMA models have been useful in fields as diverse as hydrology and economics. Long-memory
processes were first introduced in hydrology by Hurst (1951). Hosking (1981), in hydrology, and
Granger and Joyeux (1980), in economics, independently discovered the ARFIMA representation of
long-memory processes. Beran (1994), Baillie (1996), and Palma (2007) provide good introductions
to long-memory processes and ARFIMA models.
. use http://www.stata-press.com/data/r13/campito
(Campito Mnt. tree ring data from 3435BC to 1969AD)
. tsline width, xlabel(-3435(500)1969) ysize(2)

(figure: time-series plot of width against year, -3435 to 1969)
Good years and bad years seem to run together, causing the appearance of local trends. The local
trends are evidence of dependence, but they are not as pronounced as those in a nonstationary series.
We plot the autocorrelations for another view:
. ac width, ysize(2)

(figure: autocorrelations of width, lags 1 to 40)
. arima width, ar(1/2) ma(1)
  (output omitted)
                                                Number of obs     =       5405
                                                Wald chi2(3)      =  133686.46
                                                Prob > chi2       =     0.0000
------------------------------------------------------------------------------
             |                 OPG
       width |      Coef.   Std. Err.      z    P>|z|     [95% Conf. Interval]
-------------+----------------------------------------------------------------
width        |
       _cons |   42.45055    1.02142    41.56   0.000     40.44861     44.4525
-------------+----------------------------------------------------------------
ARMA         |
          ar |
         L1. |   1.264367   .0253199    49.94   0.000     1.214741    1.313994
         L2. |  -.2848827   .0227534   -12.52   0.000    -.3294785   -.2402869
          ma |
         L1. |  -.8066007   .0189699   -42.52   0.000    -.8437811   -.7694204
-------------+----------------------------------------------------------------
      /sigma |   8.005814   .0770004   103.97   0.000     7.854896    8.156732
------------------------------------------------------------------------------
Note: The test of the variance against zero is one sided, and the two-sided
      confidence interval is truncated at zero.
The roots of the AR polynomial are 0.971 and 0.293, and the root of the MA polynomial is 0.807;
all of these are less than one in magnitude, indicating that the series is stationary and invertible
but has a high level of persistence. See Hamilton (1994, 59) for how to compute the roots of the
polynomials from the estimated coefficients.
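As a quick check, the AR roots solve the quadratic characteristic equation z² − φ₁z − φ₂ = 0; a
minimal sketch using the point estimates above:

. display (1.264367 + sqrt(1.264367^2 + 4*(-.2848827)))/2
. display (1.264367 - sqrt(1.264367^2 + 4*(-.2848827)))/2

These display 0.971 and 0.293; the magnitude of the MA root equals the absolute value of the MA(1)
coefficient, about 0.807.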
Below we estimate the parameters of an ARFIMA model with only the fractional difference parameter
and a constant.
. arfima width
Iteration 0:   log likelihood = -18918.219
Iteration 1:   log likelihood =  -18916.84
Iteration 2:   log likelihood = -18908.508
Iteration 3:   log likelihood = -18908.508
Iteration 4:   log likelihood = -18907.379
Iteration 5:   log likelihood = -18907.318
Iteration 6:   log likelihood = -18907.279
Iteration 7:   log likelihood = -18907.279  (backed up)
Refining estimates:
Iteration 0:   log likelihood = -18907.279
Iteration 1:   log likelihood = -18907.279

                                                Number of obs     =       5405
                                                Wald chi2(1)      =    1864.44
                                                Prob > chi2       =     0.0000
------------------------------------------------------------------------------
             |                 OIM
       width |      Coef.   Std. Err.      z    P>|z|     [95% Conf. Interval]
-------------+----------------------------------------------------------------
width        |
       _cons |   44.01432   9.174319     4.80   0.000     26.03299    61.99566
-------------+----------------------------------------------------------------
ARFIMA       |
           d |   .4468888   .0103496    43.18   0.000     .4266038    .4671737
-------------+----------------------------------------------------------------
     /sigma2 |   63.92927   1.229754    51.99   0.000       61.519    66.33955
------------------------------------------------------------------------------
Note: The test of the variance against zero is one sided, and the two-sided
      confidence interval is truncated at zero.
The estimate of d is large and statistically significant. The relative parsimony of the ARFIMA model
is illustrated by the fact that the estimates of the standard deviation of the idiosyncratic errors are
about the same in the 5-parameter ARMA model and the 3-parameter ARFIMA model.
. arfima width, ar(1)
Iteration 0:   log likelihood = -18910.997
Iteration 1:   log likelihood = -18910.949
Iteration 2:   log likelihood = -18908.158
Iteration 3:   log likelihood = -18907.248
Iteration 4:   log likelihood = -18907.233
Iteration 5:   log likelihood = -18907.233
Iteration 6:   log likelihood = -18907.233
Refining estimates:
Iteration 0:   log likelihood = -18907.233  (backed up)
Iteration 1:   log likelihood = -18907.233  (backed up)

                                                Number of obs     =       5405
                                                Wald chi2(2)      =    1875.35
                                                Prob > chi2       =     0.0000
------------------------------------------------------------------------------
             |                 OIM
       width |      Coef.   Std. Err.      z    P>|z|     [95% Conf. Interval]
-------------+----------------------------------------------------------------
width        |
       _cons |   43.98774    8.68516     5.06   0.000     26.96513    61.01034
-------------+----------------------------------------------------------------
ARFIMA       |
          ar |
         L1. |   .0063323    .020731     0.31   0.760    -.0342997    .0469642
           d |   .4432471   .0157775    28.09   0.000     .4123238    .4741704
-------------+----------------------------------------------------------------
     /sigma2 |   63.92915   1.229754    51.99   0.000     61.51888    66.33942
------------------------------------------------------------------------------
Note: The test of the variance against zero is one sided, and the two-sided
      confidence interval is truncated at zero.
That the estimated AR term is tiny and statistically insignificant indicates that the d parameter has
accounted for all the dependence in the series.
As mentioned above, there is a sense in which the main advantages of an ARFIMA model over an
ARMA model for long-memory processes are the relative parsimony of the ARFIMA parameterization
and the ability of the ARFIMA parameterization to separate out the long-run effects from the short-run
effects. If the true process was generated from an ARFIMA model, an ARMA model with many terms
can approximate the process, but the terms make estimation difficult and the lack of separate long-run
and short-run parameters complicates interpretation.
This example highlights the relative parsimony of the ARFIMA model. In the examples below, we
illustrate the advantages of having separate parameters for long-run and short-run effects.
Technical note
You may be wondering what long-run effects can be produced by a model for stationary processes.
Because the autocorrelations of a long-memory process die out so slowly, the spectral density becomes
infinite as the frequency goes to 0 and the impulse–response functions die out at a much slower rate.
The spectral density of a process describes the relative contributions of random components at
different frequencies to the variance of the process, with the low-frequency components corresponding
to long-run effects. See [TS] psdensity for an introduction to estimating and interpreting spectral
densities implied by the estimated parameters of parametric models.
Granger and Joyeux (1980) motivate ARFIMA models by noting that their implied spectral densities
are finite except at frequency 0 with 0 < d < 0.5, whereas stationary ARMA models have finite spectral
densities at all frequencies. Granger and Joyeux (1980) argue that the ability of ARFIMA models to
capture this long-range dependence, which cannot be captured by stationary ARMA models, is an
important advantage of ARFIMA models over ARMA models when modeling long-memory processes.
Impulse–response functions are the coefficients on the infinite-order MA representation of a process,
and they describe how a shock feeds through the dynamic system. If the process is stationary, the
coefficients decay to 0 and they sum to a finite constant. As expected, the coefficients from an ARFIMA
model die out at a slower rate than those from an ARMA model. Because the ARMA terms model
the short-run effects and the d parameter models the long-run effects, an ARFIMA model specifies
both a short-run impulse–response function and a long-run impulse–response function. When an
ARMA model is used to approximate a long-memory model, the ARMA impulse–response-function
coefficients confound the two effects.
Example 2
In this example, we model the log of the monthly levels of carbon dioxide above Mauna Loa,
Hawaii. To remove the seasonality, we model the twelfth seasonal difference of the log of the series.
This example illustrates that the ARFIMA model parameterizes long-run and short-run effects, whereas
the ARMA model confounds the two effects. (Sowell [1992a] discusses this point in greater depth.)
We begin by fitting the series to an ARMA model with an AR(1) term and an MA(2).
. use http://www.stata-press.com/data/r13/mloa
. arima S12.log, ar(1) ma(2)
(setting optimization to BHHH)
Iteration 0:   log likelihood = 2000.9262
Iteration 1:   log likelihood = 2001.5484
Iteration 2:   log likelihood = 2001.5637
Iteration 3:   log likelihood = 2001.5641
Iteration 4:   log likelihood = 2001.5641
ARIMA regression
Sample: 1960m1 - 1990m12                        Number of obs     =        372
                                                Wald chi2(2)      =     500.41
Log likelihood = 2001.564                       Prob > chi2       =     0.0000
------------------------------------------------------------------------------
             |                 OPG
     S12.log |      Coef.   Std. Err.      z    P>|z|     [95% Conf. Interval]
-------------+----------------------------------------------------------------
log          |
       _cons |   .0036754   .0002475    14.85   0.000     .0031903    .0041605
-------------+----------------------------------------------------------------
ARMA         |
          ar |
         L1. |   .7354346   .0357715    20.56   0.000     .6653237    .8055456
          ma |
         L2. |   .1353086   .0513156     2.64   0.008     .0347319    .2358853
-------------+----------------------------------------------------------------
      /sigma |   .0011129   .0000401    27.77   0.000     .0010344    .0011914
------------------------------------------------------------------------------
Note: The test of the variance against zero is one sided, and the two-sided
      confidence interval is truncated at zero.
All the parameters are statistically significant, and they indicate a high degree of dependence.
Below we nest the previously fit ARMA model into an ARFIMA model.
. arfima S12.log, ar(1) ma(2)
Iteration 0:   log likelihood = 2006.0757
Iteration 1:   log likelihood = 2006.0774
Iteration 2:   log likelihood = 2006.0775
Iteration 3:   log likelihood = 2006.0804
Iteration 4:   log likelihood = 2006.0805
Refining estimates:
Iteration 0:   log likelihood = 2006.0805  (backed up)
Iteration 1:   log likelihood = 2006.0805  (backed up)
ARFIMA regression
Sample: 1960m1 - 1990m12                        Number of obs     =        372
                                                Wald chi2(3)      =     248.88
Log likelihood = 2006.0805                      Prob > chi2       =     0.0000
------------------------------------------------------------------------------
             |                 OIM
     S12.log |      Coef.   Std. Err.      z    P>|z|     [95% Conf. Interval]
-------------+----------------------------------------------------------------
S12.log      |
       _cons |    .003616   .0012968     2.79   0.005     .0010743    .0061578
-------------+----------------------------------------------------------------
ARFIMA       |
          ar |
         L1. |   .2160894   .1015575     2.13   0.033     .0170403    .4151385
          ma |
         L2. |   .1633916    .051691     3.16   0.002     .0620791    .2647041
           d |   .4042573   .0805442     5.02   0.000     .2463935    .5621211
-------------+----------------------------------------------------------------
     /sigma2 |   1.20e-06   8.84e-08    13.63   0.000     1.03e-06    1.38e-06
------------------------------------------------------------------------------
Note: The test of the variance against zero is one sided, and the two-sided
      confidence interval is truncated at zero.
All the parameters are statistically significant at the 5% level. That the confidence interval for the
fractional-difference parameter d includes numbers greater than 0.5 is evidence that the series may be
nonstationary. Alternatively, we proceed as if the series is stationary, and the wide confidence interval
for d reflects the difficulty of fitting a complicated dynamic model with only 372 observations.
With the above caveat, we can now proceed to compare the interpretations of the ARMA and ARFIMA
estimates. We compare these estimates in terms of their implied spectral densities. The spectral density
of a stationary time series describes the relative importance of components at different frequencies.
See [TS] psdensity for an introduction to spectral densities.
Below we quietly refit the ARMA model and use psdensity to estimate the parametric spectral
density implied by the ARMA parameter estimates.
. quietly arima S12.log, ar(1) ma(2)
. psdensity d_arma omega1
The psdensity command above put the estimated ARMA spectral density into the new variable
d arma at the frequencies stored in the new variable omega1.
Below we quietly refit the ARFIMA model and use psdensity to estimate the long-run parametric
spectral density and then the short-run parametric spectral density implied by the ARFIMA parameter
estimates. The long-run estimates use the estimated d, and the short-run estimates set d to 0 (as is
implied by specifying the smemory option). The long-run estimates describe the fractionally integrated
series, and the short-run estimates describe the fractionally differenced series.
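A minimal sketch of those two calls (the new variable names d_arfima, omega2, ds_arfima, and
omega3 are illustrative):

. quietly arfima S12.log, ar(1) ma(2)
. psdensity d_arfima omega2
. psdensity ds_arfima omega3, smemory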
Now that we have the ARMA estimates, the long-run ARFIMA estimates, and the short-run ARFIMA
estimates, we graph them below.
. line d_arma d_arfima
(figure: two panels plotted against frequency; panel titles: ARFIMA long-memory spectral density;
ARMA spectral density)
The top graph contains a plot of the spectral densities implied by the ARMA parameter estimates
and by the long-run ARFIMA parameter estimates. As discussed by Granger and Joyeux (1980), the
two models imply different spectral densities for frequencies close to 0 when d > 0. When d > 0,
the spectral density implied by the ARFIMA estimates diverges to infinity, whereas the spectral density
implied by the ARMA estimates remains finite at frequency 0 for stable ARMA processes. This difference
reflects the ability of ARFIMA models to capture long-run effects that ARMA models only capture as
the parameters approach those of an unstable model.
The bottom graph contains a plot of the spectral densities implied by the ARMA parameter estimates
and by the short-run ARFIMA parameter estimates, which are the ARMA parameters for the fractionally
differenced process. Comparing the two plots illustrates the ability of the short-run ARFIMA parameters
to capture both low-frequency and high-frequency components in the fractionally differenced series. In
contrast, the ARMA parameters captured only low-frequency components in the fractionally integrated
series.
Comparing the ARFIMA and ARMA spectral densities in the two graphs illustrates that the additional
fractional-difference parameter allows the ARFIMA model to identify both long-run and short-run
effects, which the ARMA model confounds.
Technical note
As noted above, the spectral density of an ARFIMA process with d > 0 diverges to infinity as
the frequency goes to 0. In contrast, the spectral density of an ARFIMA process with d < 0 is 0 at
frequency 0.
The autocorrelation function of an ARFIMA process with d < 0 also decays at the slower hyperbolic
rate. ARFIMA processes with d < 0 are sometimes called antipersistent because all the autocorrelations
for lags greater than 0 are negative.
Hosking (1981), Baillie (1996), and others refer to ARFIMA processes with d < 0 as intermediate
memory processes and ARFIMA processes with d > 0 as long-memory processes. Box, Jenkins, and
Reinsel (2008, 429) define long-memory processes as those with the slower hyperbolic rate of decay,
which includes ARFIMA processes with d < 0. We follow Box, Jenkins, and Reinsel (2008) and thus
call ARFIMA processes for -0.5 < d < 0 and 0 < d < 0.5 long-memory processes.
Sowell (1992a) uses the properties of ARFIMA processes with d < 0 to derive tests for whether a
series was generated by an I(1) process or an I(d) process with d < 1.
Example 3
In this example, we use arfima to test whether a series is nonstationary. More specifically, we
test whether the series was generated by an I(1) process by testing whether the first difference of
the series is overdifferenced.
We have monthly data on the log of the number of reported cases of mumps in New York City
between January 1928 and December 1972. We believe that the series is stationary, after accounting
for the monthly seasonal effects. We use an ARFIMA model for differenced series to test the null
hypothesis of nonstationarity. We use the confidence interval for the d parameter from an ARFIMA
model for the first difference of the log of the series to perform the test. If the right-hand end of the
95% CI is less than 0, we conclude that the differenced series was overdifferenced, which implies
that the original series was not nonstationary.
More formally, if y_t is I(1), then Δy_t = y_t − y_{t−1} must be I(0). If Δy_t is I(d) with d < 0,
then Δy_t is overdifferenced, and y_t is I(1 + d) with 1 + d < 1.
We use seasonal indicators to account for the seasonal effects. In the output below, we specify the
mpl option to use the MPL estimator, which is less biased in the presence of covariates.

arfima computes the maximum likelihood estimates (MLE) for the parameters of this stationary
and invertible Gaussian process. Alternatively, maximum modified-profile-likelihood (MPL) estimates
may be computed. See Methods and formulas for a description of these two estimation techniques,
but suffice it to say that the MLE of d is biased in the presence of exogenous variables, even the
constant term, for small samples. The MPL estimator reduces this bias; see Hauser (1999) and
Doornik and Ooms (2004).
. use http://www.stata-press.com/data/r13/mumps2, clear
(Hipel and Mcleod (1994), http://robjhyndman.com/tsdldata/epi/mumps.dat)
. arfima D.log i.month, ma(1 2) mpl
Iteration 0:   log modified profile likelihood = 53.766763
Iteration 1:   log modified profile likelihood = 54.388641
Iteration 2:   log modified profile likelihood = 54.934726
Iteration 3:   log modified profile likelihood = 54.937524
Iteration 4:   log modified profile likelihood = 55.002186
Iteration 5:   log modified profile likelihood =  55.20462
Iteration 6:   log modified profile likelihood = 55.205939
Iteration 7:   log modified profile likelihood = 55.205949
Iteration 8:   log modified profile likelihood = 55.205949
Refining estimates:
Iteration 0:   log modified profile likelihood = 55.205949  (backed up)
Iteration 1:   log modified profile likelihood = 55.205949  (backed up)
ARFIMA regression
Log modified profile likelihood = 55.205949
                                                Number of obs     =        533
                                                Wald chi2(14)     =    1360.28
                                                Prob > chi2       =     0.0000
------------------------------------------------------------------------------
             |                 OIM
       D.log |      Coef.   Std. Err.      z    P>|z|     [95% Conf. Interval]
-------------+----------------------------------------------------------------
D.log        |
       month |
    February |   -.220719   .0428112    -5.16   0.000    -.3046275   -.1368105
       March |   .0314683   .0424718     0.74   0.459    -.0517749    .1147115
       April |  -.2800296   .0460084    -6.09   0.000    -.3702043   -.1898548
         May |  -.3703179   .0449932    -8.23   0.000    -.4585029   -.2821329
        June |  -.4722035   .0446764   -10.57   0.000    -.5597676   -.3846394
        July |  -.9613239   .0448375   -21.44   0.000    -1.049204    -.873444
      August |  -1.063042   .0449272   -23.66   0.000    -1.151098   -.9749868
   September |  -.7577301   .0452529   -16.74   0.000    -.8464242    -.669036
     October |  -.3024251   .0462887    -6.53   0.000    -.3931494   -.2117009
    November |  -.0115317   .0426911    -0.27   0.787    -.0952046    .0721413
    December |   .0247135   .0430401     0.57   0.566    -.0596435    .1090705
             |
       _cons |   .3656807   .0303215    12.06   0.000     .3062517    .4251096
-------------+----------------------------------------------------------------
ARFIMA       |
          ma |
         L1. |    .258056   .0684414     3.77   0.000     .1239132    .3921988
         L2. |   .1972011   .0506439     3.89   0.000     .0979409    .2964613
           d |  -.2329426   .0673361    -3.46   0.001    -.3649188   -.1009663
------------------------------------------------------------------------------
We interpret the fact that the estimated 95% CI is strictly less than 0 to mean that the differenced
series is overdifferenced, which implies that the original series is stationary.
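The right-hand endpoint can be recomputed from the stored results; a minimal sketch, assuming the
fractional-difference parameter is stored under equation ARFIMA with name d (verify with arfima,
coeflegend):

. display _b[ARFIMA:d] + invnormal(.975)*_se[ARFIMA:d]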
Stored results

arfima stores the following in e():

Scalars
  e(N)               number of observations
  e(k)               number of parameters
  e(k_eq)            number of equations in e(b)
  e(k_dv)            number of dependent variables
  e(k_aux)           number of auxiliary parameters
  e(df_m)            model degrees of freedom
  e(ll)              log likelihood
  e(chi2)            χ²
  e(p)               significance
  e(s2)              idiosyncratic error variance estimate, if e(method) = mpl
  e(tmin)            minimum time
  e(tmax)            maximum time
  e(ar_max)          maximum AR lag
  e(ma_max)          maximum MA lag
  e(rank)            rank of e(V)
  e(ic)              number of iterations
  e(rc)              return code
  e(converged)       1 if converged, 0 otherwise
  e(constant)        0 if noconstant, 1 otherwise

Macros
  e(cmd)             arfima
  e(cmdline)         command as typed
  e(depvar)          name of dependent variable
  e(covariates)      list of covariates
  e(eqnames)         names of equations
  e(title)           title in estimation output
  e(tmins)           formatted minimum time
  e(tmaxs)           formatted maximum time
  e(chi2type)        Wald; type of model χ² test
  e(vce)             vcetype specified in vce()
  e(vcetype)         title used to label Std. Err.
  e(ma)              lags for MA terms
  e(ar)              lags for AR terms
  e(technique)       maximization technique
  e(tech_steps)      number of iterations performed before switching techniques
  e(properties)      b V
  e(estat_cmd)       program used to implement estat
  e(predict)         program used to implement predict
  e(marginsok)       predictions allowed by margins
  e(marginsnotok)    predictions disallowed by margins

Matrices
  e(b)               coefficient vector
  e(Cns)             constraints matrix
  e(ilog)            iteration log (up to 20 iterations)
  e(gradient)        gradient vector
  e(V)               variance–covariance matrix of the estimators
  e(V_modelbased)    model-based variance

Functions
  e(sample)          marks estimation sample
Methods and formulas

Introduction

We model an observed second-order stationary time series y_t, t = 1, ..., T, using the
ARFIMA(p, d, q) model defined as

$$\phi(L^p)(1-L)^d\,(y_t - \mathbf{x}_t\boldsymbol{\beta}) = \theta(L^q)\,\epsilon_t$$

where

$$\phi(L^p) = 1 - \phi_1 L - \phi_2 L^2 - \cdots - \phi_p L^p$$

$$\theta(L^q) = 1 + \theta_1 L + \theta_2 L^2 + \cdots + \theta_q L^q$$

$$(1-L)^d = \sum_{j=0}^{\infty} (-1)^j \binom{d}{j} L^j
 = \sum_{j=0}^{\infty} \frac{\Gamma(j-d)}{\Gamma(j+1)\,\Gamma(-d)}\, L^j$$
The log likelihood of the sample is

$$\ell(\mathbf{y}\,|\,\widehat{\boldsymbol{\eta}}) = -\frac{1}{2}\Big\{ T\log(2\pi) + \log|\widehat{\mathbf{V}}| + (\mathbf{y}-\mathbf{X}\widehat{\boldsymbol{\beta}})'\,\widehat{\mathbf{V}}^{-1}(\mathbf{y}-\mathbf{X}\widehat{\boldsymbol{\beta}}) \Big\} \tag{2}$$

where V is the T × T Toeplitz matrix of autocovariances,

$$\mathbf{V} = \begin{pmatrix}
\gamma_0 & \gamma_1 & \gamma_2 & \cdots & \gamma_{T-1} \\
\gamma_1 & \gamma_0 & \gamma_1 & \cdots & \gamma_{T-2} \\
\vdots & & \ddots & & \vdots \\
\gamma_{T-1} & \gamma_{T-2} & \gamma_{T-3} & \cdots & \gamma_0
\end{pmatrix}$$

For the ARFIMA(0, d, 0) process, the autocovariances are

$$\gamma_h = \sigma^2\,\frac{\Gamma(1-2d)\,\Gamma(h+d)}{\Gamma(1-d)\,\Gamma(d)\,\Gamma(1+h-d)}$$
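These autocovariances can be evaluated directly with Stata's lngamma() function; a minimal sketch
for d = 0.4 and σ² = 1 (values illustrative):

. scalar d = .4
. display exp(lngamma(1-2*d) - 2*lngamma(1-d))
. display exp(lngamma(1-2*d) + lngamma(1+d) - lngamma(1-d) - lngamma(d) - lngamma(2-d))

The first line evaluates γ₀ = Γ(1−2d)/Γ(1−d)², about 2.07 here; the second evaluates γ₁.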
For the general ARFIMA(p, d, q) process, the autocovariances are (Sowell 1992b)

$$\gamma_h = \sigma^2 \sum_{i=-q}^{q}\;\sum_{j=1}^{p} \psi(i)\,\xi_j\, C(d,\, p+i-h,\, \rho_j) \tag{3}$$

where

$$\psi(i) = \sum_{k=\max(0,i)}^{\min(q,\,q+i)} \theta_k\,\theta_{k-i}$$

$$\xi_j = \Big[\, \rho_j \prod_{i=1}^{p} (1-\rho_i\rho_j) \prod_{m\neq j} (\rho_j-\rho_m) \,\Big]^{-1}$$

and

$$C(d,h,\rho) = \frac{\gamma_h}{\sigma^2}\Big\{ \rho^{2p}\, F(d+h,\,1,\,1-d+h,\,\rho) + F(d-h,\,1,\,1-d-h,\,\rho) - 1 \Big\}$$

where γ_h here denotes the ARFIMA(0, d, 0) autocovariance given above and F is the hypergeometric
series

$$F(a,b,c,x) = 1 + \frac{ab}{c\cdot 1}\,x + \frac{a(a+1)b(b+1)}{c(c+1)\cdot 1\cdot 2}\,x^2 + \frac{a(a+1)(a+2)b(b+1)(b+2)}{c(c+1)(c+2)\cdot 1\cdot 2\cdot 3}\,x^3 + \cdots$$
The series recursions are evaluated backward as Doornik and Ooms (2003) emphasize. Doornik and
Ooms (2003) also provide other computational enhancements, such as not dividing by j in (3).
The estimator of σ² is

$$\widehat{\sigma}^2 = \frac{1}{T}\,(\mathbf{y}-\mathbf{X}\widehat{\boldsymbol{\beta}})'\,\widehat{\mathbf{R}}^{-1}(\mathbf{y}-\mathbf{X}\widehat{\boldsymbol{\beta}}) \tag{4}$$

where R = V/σ², and

$$\widehat{\boldsymbol{\beta}} = (\mathbf{X}'\widehat{\mathbf{R}}^{-1}\mathbf{X})^{-1}\,\mathbf{X}'\widehat{\mathbf{R}}^{-1}\mathbf{y} \tag{5}$$

is the weighted least-squares estimate for β. Substituting (4) into (2) results in the profile likelihood

$$\ell_p(\mathbf{y}\,|\,\widehat{\boldsymbol{\eta}}_r) = -\frac{T}{2}\left\{ 1 + \log(2\pi) + \frac{1}{T}\log|\widehat{\mathbf{R}}| + \log\widehat{\sigma}^2 \right\}$$
We compute the MLEs using the profile likelihood for the reduced parameter set η_r = (φ', θ', d).
Equations (4) and (5) provide MLEs for σ² and β to create the full parameter vector
η = (β', φ', θ', d, σ²). We follow with the Refining estimates step, optimizing on the log likelihood
(1). The refining step does not change the estimates; it produces the coefficient variance–covariance
matrix from the observed information matrix.
Using this profile likelihood prevents the use of the BHHH optimization method because there are
no observation-level scores.
The MPL

The small-sample MLE for d can be biased when there are exogenous variables in the model. The
MPL reduces this bias (Hauser 1999; Doornik and Ooms 2004). The mpl option will direct arfima
to use this optimization criterion. The MPL is expressed as

$$\ell_m(\mathbf{y}\,|\,\widehat{\boldsymbol{\eta}}_r) = -\frac{T}{2}\{1+\log(2\pi)\} - \frac{1}{2}\left(1-\frac{1}{T}\right)\log|\widehat{\mathbf{R}}| - \left(\frac{T-k-2}{2}\right)\log\widehat{\sigma}^2 - \frac{1}{2}\log|\mathbf{X}'\widehat{\mathbf{R}}^{-1}\mathbf{X}|$$

where k is the number of columns of X. The variance estimate

$$\widetilde{\sigma}^2 = \frac{(\mathbf{y}-\mathbf{X}\widehat{\boldsymbol{\beta}})'\,\widehat{\mathbf{R}}^{-1}(\mathbf{y}-\mathbf{X}\widehat{\boldsymbol{\beta}})}{T-k}$$

is stored in e() for postestimation computation of the forecast and residual root mean squared errors.
References
An, S., and P. Bloomfield. 1993. Cox and Reid's modification in regression models with correlated errors. Technical
  report, Department of Statistics, North Carolina State University, Raleigh, NC.
Baillie, R. T. 1996. Long memory processes and fractional integration in econometrics. Journal of Econometrics 73:
  5–59.
Beran, J. 1994. Statistics for Long-Memory Processes. Boca Raton: Chapman & Hall/CRC.
Box, G. E. P., G. M. Jenkins, and G. C. Reinsel. 2008. Time Series Analysis: Forecasting and Control. 4th ed.
  Hoboken, NJ: Wiley.
Doornik, J. A., and M. Ooms. 2003. Computational aspects of maximum likelihood estimation of autoregressive
  fractionally integrated moving average models. Computational Statistics & Data Analysis 42: 333–348.
———. 2004. Inference and forecasting for ARFIMA models with an application to US and UK inflation. Studies in
  Nonlinear Dynamics & Econometrics 8: 1–23.
Golub, G. H., and C. F. Van Loan. 1996. Matrix Computations. 3rd ed. Baltimore: Johns Hopkins University Press.
Gradshteyn, I. S., and I. M. Ryzhik. 2007. Table of Integrals, Series, and Products. 7th ed. San Diego: Elsevier.
Granger, C. W. J., and R. Joyeux. 1980. An introduction to long-memory time series models and fractional differencing.
  Journal of Time Series Analysis 1: 15–29.
Hamilton, J. D. 1994. Time Series Analysis. Princeton: Princeton University Press.
Hauser, M. A. 1999. Maximum likelihood estimators for ARMA and ARFIMA models: A Monte Carlo study. Journal
  of Statistical Planning and Inference 80: 229–255.
Hosking, J. R. M. 1981. Fractional differencing. Biometrika 68: 165–176.
Hurst, H. E. 1951. Long-term storage capacity of reservoirs. Transactions of the American Society of Civil Engineers
  116: 770–779.
Palma, W. 2007. Long-Memory Time Series: Theory and Methods. Hoboken, NJ: Wiley.
Sowell, F. 1992a. Modeling long-run behavior with the fractional ARIMA model. Journal of Monetary Economics
  29: 277–302.
———. 1992b. Maximum likelihood estimation of stationary univariate fractionally integrated time series models. Journal
  of Econometrics 53: 165–188.
Also see
[TS] arfima postestimation Postestimation tools for arfima
[TS] tsset Declare data to be time-series data
[TS] arima ARIMA, ARMAX, and other dynamic regression models
[TS] sspace State-space models
[U] 20 Estimation and postestimation commands
Title
arfima postestimation Postestimation tools for arfima
Description
Remarks and examples
Description

The following postestimation commands are of special interest after arfima:

Command         Description
---------------------------------------------------------------------------
estat acplot    plot parametric autocorrelation and autocovariance functions
irf             create and analyze IRFs
psdensity       parametric spectral density estimation
---------------------------------------------------------------------------

The following standard postestimation commands are also available:

Command            Description
---------------------------------------------------------------------------
contrast           contrasts and ANOVA-style joint tests of estimates
estat ic           Akaike's and Schwarz's Bayesian information criteria (AIC and BIC)
estat summarize    summary statistics for the estimation sample
estat vce          variance–covariance matrix of the estimators (VCE)
estimates          cataloging estimation results
forecast           dynamic forecasts and simulations
lincom             point estimates, standard errors, testing, and inference for
                     linear combinations of coefficients
lrtest             likelihood-ratio test
margins            marginal means, predictive margins, marginal effects, and
                     average marginal effects
marginsplot        graph the results from margins (profile plots, interaction
                     plots, etc.)
nlcom              point estimates, standard errors, testing, and inference for
                     nonlinear combinations of coefficients
predict            predictions, residuals, influence statistics, and other
                     diagnostic measures
predictnl          point estimates, standard errors, testing, and inference for
                     generalized predictions
pwcompare          pairwise comparisons of estimates
test               Wald tests of simple and composite linear hypotheses
testnl             Wald tests of nonlinear hypotheses
---------------------------------------------------------------------------
estat ic, margins, marginsplot, nlcom, and predictnl are not appropriate after arfima, mpl.
Syntax for predict

predict [type] newvar [if] [in] [, statistic options]

statistic        Description
---------------------------------------------------------------------------
Main
  xb             predictions for the level of depvar; the default
  residuals      predicted innovations
  rstandard      standardized innovations
  fdifference    fractionally differenced predictions of depvar
---------------------------------------------------------------------------
These statistics are available both in and out of sample; type
predict ... if e(sample) ... if wanted only for the estimation sample.

options              Description
---------------------------------------------------------------------------
Options
  rmse([type] newvar)    put the estimated root mean squared error of the
                           predicted statistic in a new variable; only
                           permitted with options xb and residuals
  dynamic(datetime)      forecast the time series starting at datetime; only
                           permitted with option xb
---------------------------------------------------------------------------

Menu for predict

Statistics > Postestimation > Predictions, residuals, etc.

Options for predict

Main
xb, the default, calculates the predictions for the level of depvar.
residuals calculates the predicted innovations.
rstandard calculates the standardized innovations.
fdifference calculates the fractionally differenced predictions of depvar.
Options
rmse( type newvar) puts the root mean squared errors of the predicted statistics into the specified
new variables. The root mean squared errors measure the variances due to the disturbances but do
not account for estimation error. rmse() is only permitted with the xb and residuals options.
dynamic(datetime) specifies when predict starts producing dynamic forecasts. The specified datetime must be in the scale of the time variable specified in tsset, and the datetime must be
inside a sample for which observations on the dependent variables are available. For example, dynamic(tq(2008q4)) causes dynamic predictions to begin in the fourth quarter of 2008, assuming
that your time variable is quarterly; see [D] datetime. If the model contains exogenous variables,
they must be present for the whole predicted sample. dynamic() may only be specified with xb.
Example 1

We have monthly data on the one-year Treasury bill secondary market rate imported from the
Federal Reserve Economic Data (FRED) database using freduse; see Drukker (2006) and the Stata
YouTube video Using freduse to download time-series data from the Federal Reserve for an
introduction to freduse. Below we fit an ARFIMA model with two autoregressive terms and one
moving-average term to the data.
. use http://www.stata-press.com/data/r13/tb1yr
(FRED, 1-year treasury bill; secondary market rate, monthly 1959-2001)
. arfima tb1yr, ar(1/2) ma(1)
Iteration 0:   log likelihood = -235.31856
Iteration 1:   log likelihood = -235.26104  (backed up)
Iteration 2:   log likelihood = -235.25974  (backed up)
Iteration 3:   log likelihood =  -235.2544  (backed up)
Iteration 4:   log likelihood = -235.13353
Iteration 5:   log likelihood = -235.13063
Iteration 6:   log likelihood = -235.12108
Iteration 7:   log likelihood = -235.11917
Iteration 8:   log likelihood = -235.11869
Iteration 9:   log likelihood = -235.11868
Refining estimates:
Iteration 0:   log likelihood = -235.11868
Iteration 1:   log likelihood = -235.11868
ARFIMA regression
Sample: 1959m7 - 2001m8                         Number of obs     =        506
                                                Wald chi2(4)      =    1864.15
Log likelihood = -235.11868                     Prob > chi2       =     0.0000
------------------------------------------------------------------------------
             |                 OIM
       tb1yr |      Coef.   Std. Err.      z    P>|z|     [95% Conf. Interval]
-------------+----------------------------------------------------------------
tb1yr        |
       _cons |   5.496709   2.920357     1.88   0.060    -.2270864     11.2205
-------------+----------------------------------------------------------------
ARFIMA       |
          ar |
         L1. |   .2326107   .1136655     2.05   0.041     .0098304    .4553911
         L2. |   .3885212   .0835665     4.65   0.000     .2247337    .5523086
          ma |
         L1. |   .7755848   .0669562    11.58   0.000     .6443531    .9068166
           d |   .4606489   .0646542     7.12   0.000      .333929    .5873688
-------------+----------------------------------------------------------------
     /sigma2 |   .1466495    .009232    15.88   0.000     .1285551    .1647439
------------------------------------------------------------------------------
Note: The test of the variance against zero is one sided, and the two-sided
      confidence interval is truncated at zero.
All the parameters are statistically significant at the 5% level, and they indicate a high degree of
dependence in the series. In fact, the confidence interval for the fractional-difference parameter d
indicates that the series may be nonstationary. We will proceed as if the series is stationary and
suppose that it is fractionally integrated of order 0.46.
We begin our postestimation analysis by predicting the series in sample:
. predict ptb
(option xb assumed)
(figure: tb1yr with its in-sample predictions, and the fractionally differenced series, plotted against
month, 1960m1 to 2000m1)
The above graph shows that the in-sample predictions appear to track the original series well and
that the fractionally differenced series looks much more like a stationary series than does the original.
Example 2
In this example, we use the above estimates to produce a dynamic forecast and a confidence
interval for the forecast for the one-year treasury bill rate and plot them.
We begin by extending the dataset and using predict to put the dynamic forecast in the new
ftb variable and the root mean squared error of the forecast in the new rtb variable. (As discussed
in Methods and formulas, the root mean squared error of the forecast accounts for the idiosyncratic
error but not for the estimation error.)
. tsappend, add(12)
. predict ftb, xb dynamic(tm(2001m9)) rmse(rtb)
Now we compute a 90% confidence interval around the dynamic forecast and plot the original
series, the in-sample forecast, the dynamic forecast, and the confidence interval of the dynamic
forecast.
. scalar z = invnormal(0.95)
. generate lb = ftb - z*rtb if month>=tm(2001m9)
(506 missing values generated)
. generate ub = ftb + z*rtb if month>=tm(2001m9)
(506 missing values generated)
. twoway tsline tb1yr ftb if month>tm(1998m12) ||
>
tsrline lb ub if month>=tm(2001m9),
>
legend(cols(1) label(3 "90% prediction interval"))
(figure: tb1yr, the in-sample and dynamic forecasts, and the 90% prediction interval, plotted against
month, 1999m1 to 2002m1)
Example 3
Here we use the estimates obtained in example 1 to calculate the IRF of the ARFIMA model; see
[TS] irf and [TS] irf create for more details about IRFs.
. irf
(file ...)
(file ...)
(file ...)

(figure: impulse-response function of tb1yr, steps 0 to 50, with 95% CI)
The figure shows that a shock to tb1yr causes an initial spike in tb1yr, after which the impact
of the shock starts decaying slowly. This behavior is characteristic of long-memory processes.
Methods and formulas

The T × T autocovariance matrix of the process is

$$\mathbf{V} = \begin{pmatrix}
\gamma_0 & \gamma_1 & \gamma_2 & \cdots & \gamma_{T-1} \\
\gamma_1 & \gamma_0 & \gamma_1 & \cdots & \gamma_{T-2} \\
\vdots & & \ddots & & \vdots \\
\gamma_{T-1} & \gamma_{T-2} & \gamma_{T-3} & \cdots & \gamma_0
\end{pmatrix}$$

where the process variance is γ₀ = Var(y_t). We factor V = LDL', where L is lower triangular and
D = Diag(ν_t). The structure of L⁻¹ is of importance.
$$\mathbf{L}^{-1} = \begin{pmatrix}
1 & 0 & 0 & \cdots & 0 & 0 \\
-\tau_{1,1} & 1 & 0 & \cdots & 0 & 0 \\
-\tau_{2,2} & -\tau_{2,1} & 1 & \cdots & 0 & 0 \\
\vdots & \vdots & \vdots & \ddots & \vdots & \vdots \\
-\tau_{T-1,T-1} & -\tau_{T-1,T-2} & -\tau_{T-1,T-3} & \cdots & -\tau_{T-1,1} & 1
\end{pmatrix}$$
The predicted values are

$$\widehat{\mathbf{y}} = \widehat{\mathbf{L}}^{-1}(\mathbf{y} - \mathbf{X}\widehat{\boldsymbol{\beta}}) + \mathbf{X}\widehat{\boldsymbol{\beta}}$$

where L̂ and V̂ are computed from the maximum likelihood estimates. We use the Durbin–Levinson
algorithm (Palma 2007; Golub and Van Loan 1996) to factor V̂, invert L̂, and scale y − Xβ̂ using
only the vector of estimated autocovariances γ̂.

The prediction error variances of the one-step predictions are computed recursively in the
Durbin–Levinson algorithm. They are the ν_t elements in the diagonal matrix D computed from the
Cholesky factorization of V. The recursive formula is ν₀ = γ₀ and ν_t = ν_{t−1}(1 − τ²_{t,t}).

Forecasting is carried out as described by Beran (1994, sec. 8.7),
ẑ_{T+k} = γ̃'_k V̂⁻¹ ẑ, where γ̃'_k = (γ̂_{T+k−1}, γ̂_{T+k−2}, ..., γ̂_k). The forecast mean squared
error is computed as MSE(ẑ_{T+k}) = γ̂₀ − γ̃'_k V̂⁻¹ γ̃_k. Computation of V̂⁻¹ γ̃_k is carried out
efficiently using algorithm 4.7.2 of Golub and Van Loan (1996).
References
Beran, J. 1994. Statistics for Long-Memory Processes. Boca Raton: Chapman & Hall/CRC.
Drukker, D. M. 2006. Importing Federal Reserve economic data. Stata Journal 6: 384–386.
Golub, G. H., and C. F. Van Loan. 1996. Matrix Computations. 3rd ed. Baltimore: Johns Hopkins University Press.
Palma, W. 2007. Long-Memory Time Series: Theory and Methods. Hoboken, NJ: Wiley.
Also see
[TS] arfima Autoregressive fractionally integrated moving-average models
[TS] estat acplot Plot parametric autocorrelation and autocovariance functions
[TS] irf Create and analyze IRFs, dynamic-multiplier functions, and FEVDs
[TS] psdensity Parametric spectral density estimation after arima, arfima, and ucm
[U] 20 Estimation and postestimation commands
Title
arima ARIMA, ARMAX, and other dynamic regression models
Syntax    Menu    Description    Options
Remarks and examples    Stored results    Methods and formulas    References
Also see
Syntax

Basic syntax for a regression model with ARMA disturbances
  arima depvar [indepvars] [, ar(numlist) ma(numlist)]

Basic syntax for an ARIMA(p, d, q) model
  arima depvar [, arima(#p,#d,#q)]

Basic syntax for a multiplicative seasonal ARIMA(p, d, q) × (P, D, Q)_s model
  arima depvar [, arima(#p,#d,#q) sarima(#P,#D,#Q,#s)]

Full syntax
  arima depvar [indepvars] [if] [in] [weight] [, options]
options                     Description
---------------------------------------------------------------------------
Model
  noconstant                suppress constant term
  arima(#p,#d,#q)           specify ARIMA(p, d, q) model for dependent variable
  ar(numlist)               autoregressive terms of the structural model
                              disturbance
  ma(numlist)               moving-average terms of the structural model
                              disturbance
  constraints(constraints)  apply specified linear constraints
  collinear                 keep collinear variables
Model 2
  sarima(#P,#D,#Q,#s)       specify period-#s multiplicative seasonal ARIMA term
  mar(numlist, #s)          multiplicative seasonal autoregressive terms
  mma(numlist, #s)          multiplicative seasonal moving-average terms
Model 3
  condition                 use conditional MLE instead of full MLE
  savespace                 conserve memory during estimation
  diffuse                   use diffuse prior for starting Kalman filter
                              recursions
  p0(# | matname)           use alternate prior for starting Kalman recursions
  state0(# | matname)       use alternate state vector for starting Kalman
                              filter recursions
SE/Robust
  vce(vcetype)              vcetype may be opg, robust, or oim
Reporting
  level(#)                  set confidence level; default is level(95)
  detail                    report list of gaps in time series
  nocnsreport               do not display constraints
  display_options           control columns and column formats, row spacing,
                              and line width
Maximization
  maximize_options          control the maximization process; seldom used
  coeflegend                display legend instead of statistics
---------------------------------------------------------------------------
You must tsset your data before using arima; see [TS] tsset.
depvar and indepvars may contain time-series operators; see [U] 11.4.4 Time-series varlists.
by, fp, rolling, statsby, and xi are allowed; see [U] 11.1.10 Prefix commands.
iweights are allowed; see [U] 11.1.6 weight.
coeflegend does not appear in the dialog box.
See [U] 20 Estimation and postestimation commands for more capabilities of estimation commands.
Menu

Statistics > Time series > ARIMA and ARMAX models
Description
arima fits univariate models with time-dependent disturbances. arima fits a model of depvar on
indepvars where the disturbances are allowed to follow a linear autoregressive moving-average (ARMA)
specification. The dependent and independent variables may be differenced or seasonally differenced
to any degree. When independent variables are included in the specification, such models are often
called ARMAX models; and when independent variables are not specified, they reduce to Box–Jenkins
autoregressive integrated moving-average (ARIMA) models in the dependent variable. Multiplicative
seasonal ARMAX and ARIMA models can also be fit. Missing data are allowed and are handled using
the Kalman filter and methods suggested by Harvey (1989 and 1993); see Methods and formulas.
In the full syntax, depvar is the variable being modeled, and the structural or regression part of
the model is specified in indepvars. ar() and ma() specify the lags of autoregressive and moving-
average terms, respectively; and mar() and mma() specify the multiplicative seasonal autoregressive
and moving-average terms, respectively.
arima allows time-series operators in the dependent variable and independent variable lists, and
making extensive use of these operators is often convenient; see [U] 11.4.4 Time-series varlists and
[U] 13.9 Time-series operators for an extended discussion of time-series operators.
arima typed without arguments redisplays the previous estimates.
Options
Model

noconstant; see [R] estimation options.

arima(#p,#d,#q) is an alternative, shorthand notation for specifying models that are autoregressive in the dependent variable. The dependent variable and any independent variables are differenced #d times, and 1 through #p lags of autoregressions and 1 through #q lags of moving averages are included in the model. For example, the specification

. arima D.y, ar(1/2) ma(1/3)

is equivalent to

. arima y, arima(2,1,3)
The latter is easier to write for simple ARMAX and ARIMA models, but if gaps in the AR or MA
lags are to be modeled, or if different operators are to be applied to independent variables, the
first syntax is required.
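For instance, an additive quarterly model with AR terms at lags 1 and 4 only cannot be expressed with arima(); it requires the first syntax. A minimal sketch, assuming a tsset series y (hypothetical):

. arima D.y, ar(1 4) ma(1)    // AR lags 1 and 4 only; lags 2 and 3 are excluded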
ar(numlist) specifies the autoregressive terms of the structural model disturbance to be included in the model. For example, ar(1/3) specifies that lags 1, 2, and 3 of the structural disturbance be included in the model; ar(1 4) specifies that lags 1 and 4 be included, perhaps to account for additive quarterly effects.
If the model does not contain regressors, these terms can also be considered autoregressive terms
for the dependent variable.
ma(numlist) specifies the moving-average terms to be included in the model. These are the terms for
the lagged innovations (white-noise disturbances).
constraints(constraints), collinear; see [R] estimation options.
If constraints are placed between structural model parameters and ARMA terms, the first few
iterations may attempt steps into nonstationary areas. This process can be ignored if the final
solution is well within the bounds of stationary solutions.
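A hedged sketch of placing such a constraint (the series y is hypothetical; verify the exact coefficient names with the coeflegend option before defining the constraint):

. arima y, ar(1) ma(1) coeflegend     // list coefficient names, e.g., [ARMA]L1.ar
. constraint 1 [ARMA]L1.ar = 0.5      // assumed name taken from the legend
. arima y, ar(1) ma(1) constraints(1)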
Model 2
sarima(#P,#D,#Q,#s) is an alternative, shorthand notation for specifying the multiplicative seasonal components of models with ARMA disturbances. The dependent variable and any independent variables are lag-#s seasonally differenced #D times, and 1 through #P seasonal lags of autoregressive terms and 1 through #Q seasonal lags of moving-average terms are included in the model. For example, the specification
. arima DS12.y, ar(1/2) ma(1/3) mar(1/2,12) mma(1/2,12)
is equivalent to
. arima y, arima(2,1,3) sarima(2,1,2,12)
mar(numlist, #s) specifies the lag-#s multiplicative seasonal autoregressive terms. For example, mar(1/2,12) requests that the first two lag-12 multiplicative seasonal autoregressive terms be included in the model.
mma(numlist, #s) specifies the lag-#s multiplicative seasonal moving-average terms. For example, mma(1 3,12) requests that the first and third (but not the second) lag-12 multiplicative seasonal moving-average terms be included in the model.
Model 3
condition specifies that conditional, rather than full, maximum likelihood estimates be produced. The presample values for $\epsilon_t$ and $\mu_t$ are taken to be their expected value of zero, and the estimate of the variance of $\epsilon_t$ is taken to be constant over the entire sample; see Hamilton (1994, 132). This estimation method is not appropriate for nonstationary series but may be preferable for long series or for models that have one or more long AR or MA lags. diffuse, p0(), and state0() have no meaning for models fit from the conditional likelihood and may not be specified with condition.
If the series is long and stationary and the underlying data-generating process does not have a long
memory, estimates will be similar, whether estimated by unconditional maximum likelihood (the
default), conditional maximum likelihood (condition), or maximum likelihood from a diffuse
prior (diffuse).
In small samples, however, results of conditional and unconditional maximum likelihood may
differ substantially; see Ansley and Newbold (1980). Whereas the default unconditional maximum
likelihood estimates make the most use of sample information when all the assumptions of the model
are met, Harvey (1989) and Ansley and Kohn (1985) argue for diffuse priors often, particularly in
ARIMA models corresponding to an underlying structural model.
The condition or diffuse options may also be preferred when the model contains one or more
long AR or MA lags; this avoids inverting potentially large matrices (see diffuse below).
When condition is specified, estimation is performed by the arch command (see [TS] arch),
and more control of the estimation process can be obtained using arch directly.
condition cannot be specified if the model contains any multiplicative seasonal terms.
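For example, conditional maximum likelihood estimates of a nonseasonal ARMA(1,1) model might be requested as follows (y is a hypothetical tsset series):

. arima y, ar(1) ma(1) condition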
savespace specifies that memory use be conserved by retaining only those variables required for
estimation. The original dataset is restored after estimation. This option is rarely used and should
be used only if there is not enough space to fit a model without the option. However, arima
requires considerably more temporary storage during estimation than most estimation commands
in Stata.
diffuse specifies that a diffuse prior (see Harvey 1989 or 1993) be used as a starting point for the Kalman filter recursions. Using diffuse, nonstationary models may be fit with arima (see the p0() option below; diffuse is equivalent to specifying p0(1e9)).

By default, arima uses the unconditional expected value of the state vector $\xi_t$ (see Methods and formulas) and the mean squared error (MSE) of the state vector to initialize the filter. When the process is stationary, this corresponds to the expected value and expected variance of a random draw from the state vector and produces unconditional maximum likelihood estimates of the parameters.
When the process is not stationary, however, this default is not appropriate, and the unconditional
MSE cannot be computed. For a nonstationary process, another starting point must be used for the
recursions.
In the absence of nonsample or presample information, diffuse may be specified to start the
recursions from a state vector of zero and a state MSE matrix corresponding to an effectively
infinite variance on this initial state. This method amounts to an uninformative and improper prior
that is updated to a proper MSE as data from the sample become available; see Harvey (1989).
Nonstationary models may also correspond to models with infinite variance given a particular
specification. This and other problems with nonstationary series make convergence difficult and
sometimes impossible.
diffuse can also be useful if a model contains one or more long AR or MA lags. Computation of the unconditional MSE of the state vector (see Methods and formulas) requires construction and inversion of a square matrix that is of dimension $\{\max(p, q+1)\}^2$, where p and q are the maximum AR and MA lags, respectively. If q = 27, for example, we would require a 784-by-784 matrix. Estimation with diffuse does not require this matrix.
For large samples, there is little difference between using the default starting point and the diffuse
starting point. Unless the series has a long memory, the initial conditions affect the likelihood of
only the first few observations.
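A sketch under these assumptions (y hypothetical): a model with an MA term at lag 27 makes the default unconditional initialization expensive, so a diffuse prior may be used instead.

. arima y, ar(1) ma(1 27) diffuse    // avoids constructing and inverting the 784-by-784 matrix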
p0(# | matname) is a rarely specified option that can be used for nonstationary series or when an
alternate prior for starting the Kalman recursions is desired (see diffuse above for a discussion
of the default starting point and Methods and formulas for background).
matname specifies a matrix to be used as the MSE of the state vector for starting the Kalman filter recursions, $P_{1|0}$. Instead, one number, #, may be supplied, and the MSE of the initial state vector $P_{1|0}$ will have this number on its diagonal and all off-diagonal values set to zero.

This option may be used with nonstationary series to specify a larger or smaller diagonal for $P_{1|0}$ than that supplied by diffuse. It may also be used with state0() when you believe that you have a better prior for the initial state vector and its MSE.
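A minimal sketch, assuming a hypothetical series y, of supplying a scalar prior for the initial state MSE:

. arima y, ar(1) ma(1) p0(100)    // diagonal of the initial state MSE set to 100; off-diagonals 0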
state0(# | matname) is a rarely used option that specifies an alternate initial state vector, $\xi_{1|0}$ (see Methods and formulas), for starting the Kalman filter recursions. If # is specified, all elements of the vector are taken to be #. The default initial state vector is state0(0).
SE/Robust
vce(vcetype) specifies the type of standard error reported, which includes types that are robust to
some kinds of misspecification (robust) and that are derived from asymptotic theory (oim, opg);
see [R] vce option.
For state-space models in general and ARMAX and ARIMA models in particular, the robust or
quasi-maximum likelihood estimates (QMLEs) of variance are robust to symmetric nonnormality
in the disturbances, including, as a special case, heteroskedasticity. The robust variance estimates
are not generally robust to functional misspecification of the structural or ARMA components of
the model; see Hamilton (1994, 389) for a brief discussion.
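For example (y hypothetical):

. arima y, ar(1) ma(1) vce(robust)    // report QML (robust) standard errors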
Reporting

level(#); see [R] estimation options.

detail specifies that a detailed list of any gaps in the series be reported.

nocnsreport; see [R] estimation options.

display options; see [R] estimation options.
Maximization
maximize options: difficult, technique(algorithm spec), iterate(#), [no]log, trace, gradient, showstep, hessian, showtolerance, tolerance(#), ltolerance(#), nrtolerance(#), gtolerance(#), nonrtolerance, and from(init specs); see [R] maximize for all options except gtolerance(), and see below for information on gtolerance().
These options are sometimes more important for ARIMA models than most maximum likelihood
models because of potential convergence problems with ARIMA models, particularly if the specified
model and the sample data imply a nonstationary model.
Several alternate optimization methods, such as Berndt–Hall–Hall–Hausman (BHHH) and Broyden–Fletcher–Goldfarb–Shanno (BFGS), are provided for ARIMA models. Although ARIMA models are
not as difficult to optimize as ARCH models, their likelihoods are nevertheless generally not quadratic
and often pose optimization difficulties; this is particularly true if a model is nonstationary or
nearly nonstationary. Because each method approaches optimization differently, some problems
can be successfully optimized by an alternate method when one method fails.
Setting technique() to something other than the default or BHHH changes the vcetype to vce(oim).
The following options are all related to maximization and are either particularly important in fitting
ARIMA models or not available for most other estimators.
technique(algorithm spec) specifies the optimization technique to use to maximize the
likelihood function.
technique(bhhh) specifies the Berndt–Hall–Hall–Hausman (BHHH) algorithm.
technique(dfp) specifies the Davidon–Fletcher–Powell (DFP) algorithm.
technique(bfgs) specifies the Broyden–Fletcher–Goldfarb–Shanno (BFGS) algorithm.
technique(nr) specifies Stata's modified Newton–Raphson (NR) algorithm.
You can specify multiple optimization methods. For example,
technique(bhhh 10 nr 20)
requests that the optimizer perform 10 BHHH iterations, switch to Newton–Raphson for 20
iterations, switch back to BHHH for 10 more iterations, and so on.
The default for arima is technique(bhhh 5 bfgs 10).
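Putting this together, a hedged example of overriding the default switching pattern (y hypothetical):

. arima y, arima(1,1,1) technique(bhhh 10 nr 20)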
gtolerance(#) specifies the tolerance for the gradient relative to the coefficients. When $|g_i\,b_i| \le$ gtolerance() for all parameters $b_i$ and the corresponding elements of the gradient $g_i$, the gradient tolerance criterion is met. The default gradient tolerance for arima is gtolerance(.05).
gtolerance(999) may be specified to disable the gradient criterion. If the optimizer becomes stuck with repeated "(backed up)" messages, the gradient probably still contains substantial values, but an uphill direction cannot be found for the likelihood. With this option, results can often be obtained, but whether the global maximum likelihood has been found is unclear.
When the maximization is not going well, it is also possible to set the maximum number of
iterations (see [R] maximize) to the point where the optimizer appears to be stuck and to inspect
the estimation results at that point.
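A sketch combining these suggestions (y hypothetical):

. arima y, arima(1,1,1) gtolerance(999) iterate(50)    // disable gradient criterion; stop after 50 iterations to inspect results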
from(init specs) allows you to set the starting values of the model coefficients; see [R] maximize
for a general discussion and syntax options.
The standard syntax for from() accepts a matrix, a list of values, or coefficient name value
pairs; see [R] maximize. arima also accepts from(armab0), which sets the starting value for
all ARMA parameters in the model to zero prior to optimization.
ARIMA models may be sensitive to initial conditions and may have coefficient values that
correspond to local maximums. The default starting values for arima are generally good,
particularly in large samples for stationary series.
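For example, to restart optimization with all ARMA coefficients initialized to zero (y hypothetical):

. arima y, ar(1) ma(1) from(armab0)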
The following option is available with arima but is not shown in the dialog box:
coeflegend; see [R] estimation options.
Remarks and examples
Introduction
arima fits both standard ARIMA models that are autoregressive in the dependent variable and
structural models with ARMA disturbances. Good introductions to the former models can be found in
Box, Jenkins, and Reinsel (2008); Hamilton (1994); Harvey (1993); Newton (1988); Diggle (1990);
and many others. The latter models are developed fully in Hamilton (1994) and Harvey (1989), both of
which provide extensive treatment of the Kalman filter (Kalman 1960) and the state-space form used
by arima to fit the models. Becketti (2013) discusses ARIMA models and Stata's arima command and devotes an entire chapter to explaining how the principles of ARIMA models are applied to real datasets in practice.
Consider a first-order autoregressive moving-average process. Then arima estimates all the parameters in the model

$$y_t = x_t\beta + \mu_t \qquad\qquad\qquad\qquad\ \text{structural equation}$$
$$\mu_t = \rho\,\mu_{t-1} + \theta\,\epsilon_{t-1} + \epsilon_t \qquad\quad \text{disturbance, ARMA(1,1)}$$

where $\rho$ is the first-order autocorrelation parameter, $\theta$ is the first-order moving-average parameter, and $\epsilon_t \sim \text{i.i.d.}\ N(0, \sigma^2)$, meaning that $\epsilon_t$ is a white-noise disturbance.

You can combine the two equations and write a general ARMA(p, q) in the disturbances process in lag-operator notation as

$$\rho(L^p)(y_t - x_t\beta) = \theta(L^q)\,\epsilon_t$$

where

$$\rho(L^p) = 1 - \rho_1 L - \rho_2 L^2 - \cdots - \rho_p L^p$$
$$\theta(L^q) = 1 + \theta_1 L + \theta_2 L^2 + \cdots + \theta_q L^q$$

and $L^j y_t = y_{t-j}$.
For stationary series, full or unconditional maximum likelihood estimates are obtained via the
Kalman filter. For nonstationary series, if some prior information is available, you can specify initial
values for the filter by using state0() and p0() as suggested by Hamilton (1994) or assume an
uninformative prior by using the diffuse option as suggested by Harvey (1989).
ARIMA models
Pure ARIMA models without a structural component do not have regressors and are often written
as autoregressions in the dependent variable, rather than autoregressions in the disturbances from a
structural equation. For example, an ARMA(1, 1) model can be written as
$$y_t = \mu + \rho\,y_{t-1} + \theta\,\epsilon_{t-1} + \epsilon_t \tag{1a}$$

Other than a scale factor for the constant term $\mu$, these models are equivalent to the ARMA in the disturbances formulation estimated by arima, though the latter are more flexible and allow a wider class of models.

To see this effect, replace $x_t\beta$ in the structural equation above with a constant term $\beta_0$ so that

$$\begin{aligned}
y_t &= \beta_0 + \mu_t \\
    &= \beta_0 + \rho\,\mu_{t-1} + \theta\,\epsilon_{t-1} + \epsilon_t \\
    &= \beta_0 + \rho\,(y_{t-1} - \beta_0) + \theta\,\epsilon_{t-1} + \epsilon_t \\
    &= (1-\rho)\beta_0 + \rho\,y_{t-1} + \theta\,\epsilon_{t-1} + \epsilon_t
\end{aligned} \tag{1b}$$

Equations (1a) and (1b) are equivalent, with $\mu = (1-\rho)\beta_0$, so whether we consider an ARIMA model as autoregressive in the dependent variable or disturbances is immaterial. Our illustration can easily be extended from the ARMA(1,1) case to the general ARIMA(p, d, q) case.
ARIMA regression

Sample: 1960q2 - 1990q4                  Number of obs     =        123
                                         Wald chi2(2)      =     310.64
                                         Prob > chi2       =     0.0000

------------------------------------------------------------------------------
             |                 OPG
       D.wpi |      Coef.   Std. Err.      z    P>|z|     [95% Conf. Interval]
-------------+----------------------------------------------------------------
wpi          |
       _cons |   .7498197   .3340968     2.24   0.025     .0950019    1.404637
-------------+----------------------------------------------------------------
ARMA         |
          ar |
         L1. |   .8742288   .0545435    16.03   0.000     .7673256     .981132
          ma |
         L1. |  -.4120458   .1000284    -4.12   0.000    -.6080979   -.2159938
-------------+----------------------------------------------------------------
      /sigma |   .7250436   .0368065    19.70   0.000     .6529042    .7971829
------------------------------------------------------------------------------
Note: The test of the variance against zero is one sided, and the two-sided
      confidence interval is truncated at zero.
Examining the estimation results, we see that the AR(1) coefficient is 0.874, the MA(1) coefficient is -0.412, and both are highly significant. The estimated standard deviation of the white-noise disturbance is 0.725.
This model also could have been fit by typing
. arima D.wpi, ar(1) ma(1)
The D. placed in front of the dependent variable wpi is the Stata time-series operator for differencing.
Thus we would be modeling the first difference in WPI from the second quarter of 1960 through the fourth quarter of 1990, the first observation being lost to differencing. This second
syntax allows a richer choice of models.
[Graphs omitted: quarterly time-series plots of wpi and of D.ln_wpi, 1960q1-1990q4]
On the basis of the autocorrelations, partial autocorrelations (see graphs below), and the results of
preliminary estimations, Enders identified an ARMA model in the log-differenced series.
. ac D.ln_wpi, ylabels(-.4(.2).6)
. pac D.ln_wpi, ylabels(-.4(.2).6)

[Correlogram and partial correlogram of D.ln_wpi, lags 1-40, omitted]
In addition to an autoregressive term and an MA(1) term, an MA(4) term is included to account
for a remaining quarterly effect. Thus the model to be fit is

$$\Delta\ln(\text{wpi}_t) = \beta_0 + \rho_1\{\Delta\ln(\text{wpi}_{t-1}) - \beta_0\} + \theta_1\epsilon_{t-1} + \theta_4\epsilon_{t-4} + \epsilon_t$$
We can fit this model with arima and Stata's standard difference operator:

. arima D.ln_wpi, ar(1) ma(1 4)
(setting optimization to BHHH)
Iteration 0:   log likelihood =  382.67447
Iteration 1:   log likelihood =  384.80754
Iteration 2:   log likelihood =  384.84749
Iteration 3:   log likelihood =  385.39213
Iteration 4:   log likelihood =  385.40983
(switching optimization to BFGS)
Iteration 5:   log likelihood =   385.9021
Iteration 6:   log likelihood =  385.95646
Iteration 7:   log likelihood =  386.02979
Iteration 8:   log likelihood =  386.03326
Iteration 9:   log likelihood =  386.03354
Iteration 10:  log likelihood =  386.03357
ARIMA regression

Sample: 1960q2 - 1990q4                  Number of obs     =        123
                                         Wald chi2(3)      =     333.60
Log likelihood =  386.0336               Prob > chi2       =     0.0000

------------------------------------------------------------------------------
             |                 OPG
    D.ln_wpi |      Coef.   Std. Err.      z    P>|z|     [95% Conf. Interval]
-------------+----------------------------------------------------------------
ln_wpi       |
       _cons |   .0110493   .0048349     2.29   0.022     .0015731    .0205255
-------------+----------------------------------------------------------------
ARMA         |
          ar |
         L1. |   .7806991   .0944946     8.26   0.000     .5954931     .965905
          ma |
         L1. |  -.3990039   .1258753    -3.17   0.002    -.6457149   -.1522928
         L4. |   .3090813   .1200945     2.57   0.010     .0737003    .5444622
-------------+----------------------------------------------------------------
      /sigma |   .0104394   .0004702    22.20   0.000     .0095178    .0113609
------------------------------------------------------------------------------
Note: The test of the variance against zero is one sided, and the two-sided
      confidence interval is truncated at zero.
In this final specification, the log-differenced series is still highly autocorrelated at a level of 0.781, though innovations have a negative impact in the ensuing quarter (-0.399) and a positive seasonal impact of 0.309 in the following year.
Technical note
In one way, the results differ from most of Statas estimation commands: the standard error of
the coefficients is reported as OPG Std. Err. The default standard errors and covariance matrix
for arima estimates are derived from the outer product of gradients (OPG). This is one of three
asymptotically equivalent methods of estimating the covariance matrix of the coefficients (only two of
which are usually tractable to derive). Discussions and derivations of all three estimates can be found
in Davidson and MacKinnon (1993), Greene (2012), and Hamilton (1994). Bollerslev, Engle, and
Nelson (1994) suggest that the OPG estimates are more numerically stable in time-series regressions
when the likelihood and its derivatives depend on recursive computations, which is certainly the case
for the Kalman filter. To date, we have found no numerical instabilities in either estimate of the
covariance matrix, subject to the stability and convergence of the overall model.
Most of Statas estimation commands provide covariance estimates derived from the Hessian of
the likelihood function. These alternate estimates can also be obtained from arima by specifying the
vce(oim) option.
$$(1 - \rho_1 L)(\Delta\ln\text{wpi}_t - \beta_0) = (1 + \theta_1 L)(1 + \theta_{4,1}L^4)\,\epsilon_t \tag{2}$$

In the notation (1, 1, 1) × (0, 0, 1)_4, the (1, 1, 1) means that there is one nonseasonal autoregressive term $(1 - \rho_1 L)$ and one nonseasonal moving-average term $(1 + \theta_1 L)$ and that the time series is first-differenced one time. The (0, 0, 1)_4 indicates that there is no lag-4 seasonal autoregressive term, that there is one lag-4 seasonal moving-average term $(1 + \theta_{4,1}L^4)$, and that the series is seasonally differenced zero times. This is known as a multiplicative SARIMA model because the nonseasonal and seasonal factors work multiplicatively: $(1 + \theta_1 L)(1 + \theta_{4,1}L^4)$. Multiplying the terms imposes nonlinear constraints on the parameters of the fifth-order lagged values; arima imposes these constraints automatically.
To further clarify the notation, consider a (2, 1, 1) × (1, 1, 2)_4 multiplicative SARIMA model:

$$(1 - \rho_1 L - \rho_2 L^2)(1 - \rho_{4,1}L^4)\,\Delta\Delta_4\,y_t = (1 + \theta_1 L)(1 + \theta_{4,1}L^4 + \theta_{4,2}L^8)\,\epsilon_t \tag{3}$$

where $\Delta$ denotes the difference operator, $\Delta y_t = y_t - y_{t-1}$, and $\Delta_s$ denotes the lag-s seasonal difference operator, $\Delta_s y_t = y_t - y_{t-s}$.

More generally, a multiplicative SARIMA(p, d, q) × (P, D, Q)_s model may be written as

$$\rho(L^p)\,\rho_s(L^P)\,\Delta^d\Delta_s^D\,y_t = \theta(L^q)\,\theta_s(L^Q)\,\epsilon_t$$

where $\rho(L^p)$ and $\theta(L^q)$ were defined previously, $\rho_s(L^P)$ and $\theta_s(L^Q)$ are the analogous polynomials in the lag-s operator, $\Delta^d$ means apply the $\Delta$ operator d times, and similarly for $\Delta_s^D$. Typically, d and D will be 0 or 1; and p, q, P, and Q will seldom be more than 2 or 3. s will typically be 4 for quarterly data and 12 for monthly data. In fact, the model can be extended to include both monthly and quarterly seasonal factors, as we explain below.
If a plot of the data suggests that the seasonal effect is proportional to the mean of the series, then
the seasonal effect is probably multiplicative and a multiplicative SARIMA model may be appropriate.
Box, Jenkins, and Reinsel (2008, sec. 9.3.1) suggest starting with a multiplicative SARIMA model with
any data that exhibit seasonal patterns and then exploring nonmultiplicative SARIMA models if the
multiplicative models do not fit the data well. On the other hand, Chatfield (2004, 14) suggests that
taking the logarithm of the series will make the seasonal effect additive, in which case an additive
SARIMA model as fit in the previous example would be appropriate. In short, the analyst should
probably try both additive and multiplicative SARIMA models to see which provides better fits and
forecasts.
Unless diffuse is used, arima must create square matrices of dimension $\{\max(p, q+1)\}^2$, where p and q are the maximum AR and MA lags, respectively; and the inclusion of long seasonal terms can make this dimension rather large. For example, with monthly data, you might fit a (0, 1, 1) × (0, 1, 2)_12 SARIMA model. The maximum MA lag is 2 × 12 + 1 = 25, requiring a matrix with 26² = 676 rows and columns.
The model fit to the airline data is

$$\Delta\Delta_{12}\ln(\text{air}_t) = (1 + \theta_1 L)(1 + \theta_{12,1}L^{12})\,\epsilon_t$$

                                         Number of obs     =        131
                                         Wald chi2(2)      =      84.53
                                         Prob > chi2       =     0.0000

------------------------------------------------------------------------------
             |                 OPG
  DS12.lnair |      Coef.   Std. Err.      z    P>|z|     [95% Conf. Interval]
-------------+----------------------------------------------------------------
ARMA         |
          ma |
         L1. |  -.4018324   .0730307    -5.50   0.000    -.5449698   -.2586949
-------------+----------------------------------------------------------------
ARMA12       |
          ma |
         L1. |  -.5569342   .0963129    -5.78   0.000     -.745704   -.3681644
-------------+----------------------------------------------------------------
      /sigma |   .0367167   .0020132    18.24   0.000     .0327708    .0406625
------------------------------------------------------------------------------
Note: The test of the variance against zero is one sided, and the two-sided
      confidence interval is truncated at zero.

The estimated standard deviation of the white-noise disturbance is $\widehat{\sigma} = 0.037$.
In this model, for example, the coefficient on $\epsilon_{t-13}$ is the product of the coefficients on the $\epsilon_{t-1}$ and $\epsilon_{t-12}$ terms $(0.224 \approx -0.402 \times -0.557)$. arima labeled the dependent variable DS12.lnair to indicate that it has applied the difference operator $\Delta$ and the lag-12 seasonal difference operator $\Delta_{12}$ to lnair; see [U] 11.4.4 Time-series varlists for more information.
We could have fit this model by typing
. arima DS12.lnair, ma(1) mma(1, 12) noconstant
For simple multiplicative models, using the sarima() option is easier, though this second syntax
allows us to incorporate more complicated seasonal terms.
The mar() and mma() options can be repeated, allowing us to control for multiple seasonal
patterns. For example, we may have monthly sales data that exhibit a quarterly pattern as businesses
purchase our product at the beginning of calendar quarters when new funds are budgeted, and our
product is purchased more frequently in a few months of the year than in most others, even after we
control for quarterly fluctuations. Thus we might choose to fit the model
88
Although this model looks rather complicated, estimating it using arima is straightforward:
. arima DS4S12.sales, ar(1) mar(1, 4) mar(1, 12) ma(1) mma(1, 4) mma(1, 12)
If we instead wanted to include two lags in the lag-4 seasonal AR term and the first and third (but
not the second) term in the lag-12 seasonal MA term, we would type
. arima DS4S12.sales, ar(1) mar(1 2, 4) mar(1, 12) ma(1) mma(1, 4) mma(1 3, 12)
However, models with multiple seasonal terms can be difficult to fit. Usually, one seasonal factor
with just one or two AR or MA terms is adequate.
ARMAX models
Thus far all our examples have been pure ARIMA models in which the dependent variable was
modeled solely as a function of its past values and disturbances. arima can also fit ARMAX models,
which model the dependent variable in terms of a linear combination of independent variables, as
well as an ARMA disturbance process. The prais command (see [TS] prais), for example, allows
you to control for only AR(1) disturbances, whereas arima allows you to control for a much richer
dynamic error structure. arima allows for both nonseasonal and seasonal ARMA components in the
disturbances.
                                         Number of obs     =         92
                                         Wald chi2(3)      =    4394.80
                                         Prob > chi2       =     0.0000

------------------------------------------------------------------------------
             |                 OPG
     consump |      Coef.   Std. Err.      z    P>|z|     [95% Conf. Interval]
-------------+----------------------------------------------------------------
consump      |
          m2 |   1.122029   .0363563    30.86   0.000     1.050772    1.193286
       _cons |  -36.09872   56.56703    -0.64   0.523    -146.9681    74.77062
-------------+----------------------------------------------------------------
ARMA         |
          ar |
         L1. |   .9348486   .0411323    22.73   0.000     .8542308    1.015467
          ma |
         L1. |   .3090592   .0885883     3.49   0.000     .1354293    .4826891
-------------+----------------------------------------------------------------
      /sigma |   9.655308   .5635157    17.13   0.000     8.550837    10.75978
------------------------------------------------------------------------------
Note: The test of the variance against zero is one sided, and the two-sided
      confidence interval is truncated at zero.
We find a relatively small money velocity with respect to consumption (1.122) over this period,
although consumption is only one facet of the income velocity. We also note a very large first-order
autocorrelation in the disturbances, as well as a statistically significant first-order moving average.
We might be concerned that our specification has led to disturbances that are heteroskedastic or
non-Gaussian. We refit the model by using the vce(robust) option.
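A sketch of the refitted command; any estimation-sample restriction used in the original fit is assumed to carry over:

. arima consump m2, ar(1) ma(1) vce(robust)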
                                         Number of obs     =         92
                                         Wald chi2(3)      =    1176.26
                                         Prob > chi2       =     0.0000

------------------------------------------------------------------------------
             |               Semirobust
     consump |      Coef.   Std. Err.      z    P>|z|     [95% Conf. Interval]
-------------+----------------------------------------------------------------
consump      |
          m2 |   1.122029   .0433302    25.89   0.000     1.037103    1.206954
       _cons |  -36.09872   28.10477    -1.28   0.199    -91.18306    18.98561
-------------+----------------------------------------------------------------
ARMA         |
          ar |
         L1. |   .9348486   .0493428    18.95   0.000     .8381385    1.031559
          ma |
         L1. |   .3090592   .1605359     1.93   0.054    -.0055854    .6237038
-------------+----------------------------------------------------------------
      /sigma |   9.655308   1.082639     8.92   0.000     7.533375    11.77724
------------------------------------------------------------------------------
Note: The test of the variance against zero is one sided, and the two-sided
      confidence interval is truncated at zero.
We note a substantial increase in the estimated standard errors, and our once clearly significant
moving-average term is now only marginally significant.
Dynamic forecasting
Another feature of the arima command is the ability to use predict afterward to make dynamic
forecasts. Suppose that we wish to fit the regression model
$$y_t = \beta_0 + \beta_1 x_t + \rho\,y_{t-1} + \epsilon_t$$

by using a sample of data from t = 1, ..., T and make forecasts beginning at time f.

If we use regress or prais to fit the model, then we can use predict to make one-step-ahead forecasts. That is, predict will compute

$$\widehat{y}_f = \widehat{\beta}_0 + \widehat{\beta}_1 x_f + \widehat{\rho}\,y_{f-1}$$

Most importantly, here predict will use the actual value of y at period f - 1 in computing the forecast for time f. Thus, if we use regress or prais, we cannot make forecasts for any periods beyond f = T + 1 unless we have observed values for y for those periods.

If we instead fit our model with arima, then predict can produce dynamic forecasts by using the Kalman filter. If we use the dynamic(f) option, then for period f predict will compute

$$\widehat{y}_f = \widehat{\beta}_0 + \widehat{\beta}_1 x_f + \widehat{\rho}\,y_{f-1}$$

by using the observed value of $y_{f-1}$ just as predict after regress or prais. However, for period f + 1 predict newvar, dynamic(f) will compute

$$\widehat{y}_{f+1} = \widehat{\beta}_0 + \widehat{\beta}_1 x_{f+1} + \widehat{\rho}\,\widehat{y}_f$$

using the predicted value of $y_f$ instead of the observed value. Similarly, the period f + 2 forecast will be

$$\widehat{y}_{f+2} = \widehat{\beta}_0 + \widehat{\beta}_1 x_{f+2} + \widehat{\rho}\,\widehat{y}_{f+1}$$
Of course, because our model includes the regressor xt , we can make forecasts only through periods
for which we have observations on xt . However, for pure ARIMA models, we can compute dynamic
forecasts as far beyond the final period of our dataset as desired.
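One hedged approach for a pure ARIMA model is to extend the dataset with empty periods and then request dynamic predictions into them (the variable name and date used here are illustrative):

. tsappend, add(8)                       // append 8 empty periods beyond the sample
. predict yhat, dynamic(tq(1991q1)) y    // dynamic forecasts run through the new periods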
For more information on predict after arima, see [TS] arima postestimation.
Video example
Time series, part 5: Introduction to ARMA/ARIMA models
Stored results
arima stores the following in e():
Scalars
  e(N)              number of observations
  e(N_gaps)         number of gaps
  e(k)              number of parameters
  e(k_eq)           number of equations in e(b)
  e(k_eq_model)     number of equations in overall model test
  e(k_dv)           number of dependent variables
  e(k1)             number of variables in first equation
  e(df_m)           model degrees of freedom
  e(ll)             log likelihood
  e(sigma)          sigma
  e(chi2)           chi-squared
  e(p)              significance
  e(tmin)           minimum time
  e(tmax)           maximum time
  e(ar_max)         maximum AR lag
  e(ma_max)         maximum MA lag
  e(rank)           rank of e(V)
  e(ic)             number of iterations
  e(rc)             return code
  e(converged)      1 if converged, 0 otherwise
Macros
  e(cmd)            arima
  e(cmdline)        command as typed
  e(depvar)         name of dependent variable
  e(covariates)     list of covariates
  e(eqnames)        names of equations
  e(wtype)          weight type
  e(wexp)           weight expression
  e(title)          title in estimation output
  e(tmins)          formatted minimum time
  e(tmaxs)          formatted maximum time
  e(chi2type)       Wald; type of model chi-squared test
  e(vce)            vcetype specified in vce()
  e(vcetype)        title used to label Std. Err.
  e(ma)             lags for moving-average terms
  e(ar)             lags for autoregressive terms
  e(mari)           multiplicative AR terms and lag i=1... (# seasonal AR terms)
  e(mmai)           multiplicative MA terms and lag i=1... (# seasonal MA terms)
  e(seasons)        seasonal lags in model
  e(unsta)          unstationary or blank
  e(opt)            type of optimization
  e(ml_method)      type of ml method
  e(user)           name of likelihood-evaluator program
  e(technique)      maximization technique
  e(tech_steps)     number of iterations performed before switching techniques
  e(properties)     b V
  e(estat_cmd)      program used to implement estat
  e(predict)        program used to implement predict
  e(marginsok)      predictions allowed by margins
  e(marginsnotok)   predictions disallowed by margins

Matrices
  e(b)              coefficient vector
  e(Cns)            constraints matrix
  e(ilog)           iteration log (up to 20 iterations)
  e(gradient)       gradient vector
  e(V)              variance-covariance matrix of the estimators
  e(V_modelbased)   model-based variance

Functions
  e(sample)         marks estimation sample
Methods and formulas
ARIMA model
The model to be fit is

$$y_t = x_t\beta + \mu_t$$
$$\mu_t = \sum_{i=1}^{p}\rho_i\,\mu_{t-i} + \sum_{j=1}^{q}\theta_j\,\epsilon_{t-j} + \epsilon_t$$

which can be written as the single equation

$$y_t = x_t\beta + \sum_{i=1}^{p}\rho_i\,(y_{t-i} - x_{t-i}\beta) + \sum_{j=1}^{q}\theta_j\,\epsilon_{t-j} + \epsilon_t$$

Some of the $\rho$s and $\theta$s may be constrained to zero or, for multiplicative seasonal models, the products of other parameters.
In state-space notation, the model may be written as

$$\xi_t = F\,\xi_{t-1} + v_t \qquad\qquad \text{(state equation)}$$
$$y_t = A'x_t + H'\xi_t + w_t \qquad \text{(observation equation)}$$

and

$$\begin{bmatrix} v_t \\ w_t \end{bmatrix} \sim N\!\left(0,\; \begin{bmatrix} Q & 0 \\ 0 & R \end{bmatrix}\right)$$
We maintain the standard Kalman filter matrix and vector notation, although for univariate models
yt , wt , and R are scalars.
For the ARMA(p, q) disturbance model, the matrices of the state-space form are

$$F = \begin{bmatrix} \rho_1 & \rho_2 & \cdots & \rho_{p-1} & \rho_p \\ 1 & 0 & \cdots & 0 & 0 \\ 0 & 1 & \cdots & 0 & 0 \\ \vdots & \vdots & \ddots & \vdots & \vdots \\ 0 & 0 & \cdots & 1 & 0 \end{bmatrix} \qquad v_t = \begin{bmatrix} \epsilon_t \\ 0 \\ \vdots \\ 0 \end{bmatrix}$$

$$A' = \beta \qquad H' = [\,1 \quad \theta_1 \quad \theta_2 \quad \cdots \quad \theta_q\,] \qquad w_t = 0$$
The Kalman filter representation does not require the moving-average terms to be invertible.
The Kalman filter recursions predict the state vector and its MSE:

$$\xi_{t|t-1} = F\,\xi_{t-1} \tag{4}$$
$$P_{t|t-1} = F\,P_{t-1}F' + Q \tag{5}$$

The estimator of $y_t$ is

$$\widehat{y}_{t|t-1} = x_t\beta + H'\xi_{t|t-1}$$

which implies the innovation or prediction error

$$\widehat{\iota}_t = y_t - \widehat{y}_{t|t-1}$$

This value or vector has mean squared error (MSE)

$$M_t = H'P_{t|t-1}H + R$$

Now the expected value of $\xi_t$ conditional on a realization of $y_t$ is

$$\xi_t = \xi_{t|t-1} + P_{t|t-1}H\,M_t^{-1}\,\widehat{\iota}_t \tag{6}$$

with MSE

$$P_t = P_{t|t-1} - P_{t|t-1}H\,M_t^{-1}H'P_{t|t-1} \tag{7}$$

The log likelihood for observation t, assuming normality, is

$$\ln L_t = -\frac{1}{2}\left\{\ln(2\pi) + \ln(|M_t|) + \widehat{\iota}_t'\,M_t^{-1}\,\widehat{\iota}_t\right\}$$
This command supports the Huber/White/sandwich estimator of the variance using vce(robust).
See [P] robust, particularly Maximum likelihood estimators and Methods and formulas.
Missing data
Missing data, whether a missing dependent variable yt , one or more missing covariates xt , or
completely missing observations, are handled by continuing the state-updating equations without any
contribution from the data; see Harvey (1989 and 1993). That is, (4) and (5) are iterated for every
missing observation, whereas (6) and (7) are ignored. Thus, for observations with missing data,
$\xi_t = \xi_{t|t-1}$ and $P_t = P_{t|t-1}$. Without any information from the sample, this effectively assumes that the prediction error for the missing observations is 0. Other methods of handling missing data on the basis of the EM algorithm have been suggested, for example, Shumway (1984, 1988).
George Edward Pelham Box (1919–2013) was born in Kent, England, and earned degrees in statistics at the University of London. After work in the chemical industry, he taught and researched at Princeton and the University of Wisconsin. His many major contributions to statistics include papers and books in Bayesian inference, robustness (a term he introduced to statistics), modeling strategy, experimental design and response surfaces, time-series analysis, distribution theory, transformations, and nonlinear estimation.

Gwilym Meirion Jenkins (1933–1982) was a British mathematician and statistician who spent his career in industry and academia, working for extended periods at Imperial College London and the University of Lancaster before running his own company. His interests were centered on time series, and he collaborated with G. E. P. Box on what are often called Box–Jenkins models. The last years of Jenkins's life were marked by a slowly losing battle against Hodgkin's disease.
References
Ansley, C. F., and R. J. Kohn. 1985. Estimation, filtering, and smoothing in state space models with incompletely specified initial conditions. Annals of Statistics 13: 1286–1316.

Ansley, C. F., and P. Newbold. 1980. Finite sample properties of estimators for autoregressive moving average models. Journal of Econometrics 13: 159–183.

Baum, C. F. 2000. sts15: Tests for stationarity of a time series. Stata Technical Bulletin 57: 36–39. Reprinted in Stata Technical Bulletin Reprints, vol. 10, pp. 356–360. College Station, TX: Stata Press.

Baum, C. F., and T. Rõõm. 2001. sts18: A test for long-range dependence in a time series. Stata Technical Bulletin 60: 37–39. Reprinted in Stata Technical Bulletin Reprints, vol. 10, pp. 370–373. College Station, TX: Stata Press.

Baum, C. F., and R. I. Sperling. 2000. sts15.1: Tests for stationarity of a time series: Update. Stata Technical Bulletin 58: 35–36. Reprinted in Stata Technical Bulletin Reprints, vol. 10, pp. 360–362. College Station, TX: Stata Press.

Baum, C. F., and V. L. Wiggins. 2000. sts16: Tests for long memory in a time series. Stata Technical Bulletin 57: 39–44. Reprinted in Stata Technical Bulletin Reprints, vol. 10, pp. 362–368. College Station, TX: Stata Press.

Becketti, S. 2013. Introduction to Time Series Using Stata. College Station, TX: Stata Press.

Berndt, E. K., B. H. Hall, R. E. Hall, and J. A. Hausman. 1974. Estimation and inference in nonlinear structural models. Annals of Economic and Social Measurement 3/4: 653–665.

Bollerslev, T., R. F. Engle, and D. B. Nelson. 1994. ARCH models. In Vol. 4 of Handbook of Econometrics, ed. R. F. Engle and D. L. McFadden. Amsterdam: Elsevier.

Box, G. E. P. 1983. Obituary: G. M. Jenkins, 1933–1982. Journal of the Royal Statistical Society, Series A 146: 205–206.

Box, G. E. P., G. M. Jenkins, and G. C. Reinsel. 2008. Time Series Analysis: Forecasting and Control. 4th ed. Hoboken, NJ: Wiley.

Chatfield, C. 2004. The Analysis of Time Series: An Introduction. 6th ed. Boca Raton, FL: Chapman & Hall/CRC.

David, J. S. 1999. sts14: Bivariate Granger causality test. Stata Technical Bulletin 51: 40–41. Reprinted in Stata Technical Bulletin Reprints, vol. 9, pp. 350–351. College Station, TX: Stata Press.

Davidson, R., and J. G. MacKinnon. 1993. Estimation and Inference in Econometrics. New York: Oxford University Press.

DeGroot, M. H. 1987. A conversation with George Box. Statistical Science 2: 239–258.

Diggle, P. J. 1990. Time Series: A Biostatistical Introduction. Oxford: Oxford University Press.

Enders, W. 2004. Applied Econometric Time Series. 2nd ed. New York: Wiley.

Friedman, M., and D. Meiselman. 1963. The relative stability of monetary velocity and the investment multiplier in the United States, 1897–1958. In Stabilization Policies, Commission on Money and Credit, 123–126. Englewood Cliffs, NJ: Prentice Hall.

Gourieroux, C. S., and A. Monfort. 1997. Time Series and Dynamic Models. Trans. ed. G. M. Gallo. Cambridge: Cambridge University Press.

Greene, W. H. 2012. Econometric Analysis. 7th ed. Upper Saddle River, NJ: Prentice Hall.

Hamilton, J. D. 1994. Time Series Analysis. Princeton: Princeton University Press.

Harvey, A. C. 1989. Forecasting, Structural Time Series Models and the Kalman Filter. Cambridge: Cambridge University Press.

———. 1993. Time Series Models. 2nd ed. Cambridge, MA: MIT Press.

Hipel, K. W., and A. I. McLeod. 1994. Time Series Modelling of Water Resources and Environmental Systems. Amsterdam: Elsevier.

Holan, S. H., R. Lund, and G. Davis. 2010. The ARMA alphabet soup: A tour of ARMA model variants. Statistics Surveys 4: 232–274.

Kalman, R. E. 1960. A new approach to linear filtering and prediction problems. Transactions of the ASME–Journal of Basic Engineering, Series D 82: 35–45.

McDowell, A. W. 2002. From the help desk: Transfer functions. Stata Journal 2: 71–85.

———. 2004. From the help desk: Polynomial distributed lag models. Stata Journal 4: 180–189.

Newton, H. J. 1988. TIMESLAB: A Time Series Analysis Laboratory. Belmont, CA: Wadsworth.

Press, W. H., S. A. Teukolsky, W. T. Vetterling, and B. P. Flannery. 2007. Numerical Recipes: The Art of Scientific Computing. 3rd ed. New York: Cambridge University Press.

Sanchez, G. 2012. Comparing predictions after arima with manual computations. The Stata Blog: Not Elsewhere Classified. http://blog.stata.com/2012/02/16/comparing-predictions-after-arima-with-manual-computations/.

Shumway, R. H. 1984. Some applications of the EM algorithm to analyzing incomplete time series data. In Time Series Analysis of Irregularly Observed Data, ed. E. Parzen, 290–324. New York: Springer.

———. 1988. Applied Statistical Time Series Analysis. Upper Saddle River, NJ: Prentice Hall.

Wang, Q., and N. Wu. 2012. Menu-driven X-12-ARIMA seasonal adjustment in Stata. Stata Journal 12: 214–241.
Also see
[TS] arima postestimation Postestimation tools for arima
[TS] tsset Declare data to be time-series data
[TS] arch Autoregressive conditional heteroskedasticity (ARCH) family of estimators
[TS] dfactor Dynamic-factor models
[TS] forecast Econometric model forecasting
[TS] mgarch Multivariate GARCH models
[TS] prais Prais–Winsten and Cochrane–Orcutt regression
[TS] sspace State-space models
[TS] ucm Unobserved-components model
[R] regress Linear regression
[U] 20 Estimation and postestimation commands
Title
arima postestimation Postestimation tools for arima
Description   Syntax for predict   Menu for predict   Options for predict
Remarks and examples   Also see

Description

The following postestimation commands are of special interest after arima:

Command        Description
---------------------------------------------------------------------------
estat acplot   estimate autocorrelations and autocovariances
estat aroots   check stability condition of estimates
irf            create and analyze IRFs, dynamic-multiplier functions, and FEVDs
psdensity      estimate the spectral density
The following standard postestimation commands are also available:

Command           Description
---------------------------------------------------------------------------
estat ic          Akaike's and Schwarz's Bayesian information criteria (AIC and BIC)
estat summarize   summary statistics for the estimation sample
estat vce         variance-covariance matrix of the estimators (VCE)
estimates         cataloging estimation results
forecast          dynamic forecasts and simulations
lincom            point estimates, standard errors, testing, and inference for linear combinations of coefficients
lrtest            likelihood-ratio test
margins           marginal means, predictive margins, marginal effects, and average marginal effects
marginsplot       graph the results from margins (profile plots, interaction plots, etc.)
nlcom             point estimates, standard errors, testing, and inference for nonlinear combinations of coefficients
predict           predictions, residuals, influence statistics, and other diagnostic measures
predictnl         point estimates, standard errors, testing, and inference for generalized predictions
test              Wald tests of simple and composite linear hypotheses
testnl            Wald tests of nonlinear hypotheses
Syntax for predict

predict [type] newvar [if] [in] [, statistic options]

statistic      Description
---------------------------------------------------------------------------
Main
  xb           predicted values for mean equation (the differenced series); the default
  stdp         standard error of the linear prediction
  y            predicted values for the mean equation in y (the undifferenced series)
  mse          mean squared error of the predicted values
  residuals    residuals or predicted innovations
  yresiduals   residuals or predicted innovations in y, reversing any time-series operators

These statistics are available both in and out of sample; type predict ... if e(sample) ... if wanted only for the estimation sample.

Predictions are not available for conditional ARIMA models fit to panel data.

options                    Description
---------------------------------------------------------------------------
Options
  dynamic(time constant)   how to handle the lags of y_t
  t0(time constant)        set starting point for the recursions to time constant
  structural               calculate considering the structural component only

Menu for predict

Statistics > Postestimation > Predictions, residuals, etc.

Options for predict
Main
xb, the default, calculates the predictions from the model. If D.depvar is the dependent variable,
these predictions are of D.depvar and not of depvar itself.
stdp calculates the standard error of the linear prediction xb. stdp does not include the variation
arising from the disturbance equation; use mse to calculate standard errors and confidence bands
around the predicted values.
y specifies that predictions of depvar be made, even if the model was specified in terms of, say,
D.depvar.
mse calculates the MSE of the predictions.
residuals calculates the residuals. If no other options are specified, these are the predicted innovations $\epsilon_t$; that is, they include the ARMA component. If structural is specified, these are the residuals $\mu_t$ from the structural equation; see structural below.
yresiduals calculates the residuals in terms of depvar, even if the model was specified in terms of,
say, D.depvar. As with residuals, the yresiduals are computed from the model, including any
ARMA component. If structural is specified, any ARMA component is ignored, and yresiduals
are the residuals from the structural equation; see structural below.
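For example, both kinds of residuals could be computed side by side (the new variable names are hypothetical):

. predict eps, residuals              // predicted innovations, including the ARMA component
. predict mu, residuals structural    // residuals from the structural equation only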
Options
dynamic(time constant) specifies how lags of yt in the model are to be handled. If dynamic() is
not specified, actual values are used everywhere that lagged values of yt appear in the model to
produce one-step-ahead forecasts.
dynamic(time constant) produces dynamic (also known as recursive) forecasts. time constant
specifies when the forecast is to switch from one step ahead to dynamic. In dynamic forecasts,
references to yt evaluate to the prediction of yt for all periods at or after time constant; they
evaluate to the actual value of yt for all prior periods.
For example, dynamic(10) would calculate predictions in which any reference to $y_t$ with t < 10 evaluates to the actual value of $y_t$ and any reference to $y_t$ with t ≥ 10 evaluates to the prediction of $y_t$. This means that one-step-ahead predictions are calculated for t < 10 and dynamic predictions thereafter. Depending on the lag structure of the model, the dynamic predictions might still refer to some actual values of $y_t$.
You may also specify dynamic(.) to have predict automatically switch from one-step-ahead to
dynamic predictions at p + q , where p is the maximum AR lag and q is the maximum MA lag.
t0(time constant) specifies the starting point for the recursions to compute the predicted statistics;
disturbances are assumed to be 0 for t < t0(). The default is to set t0() to the minimum t
observed in the estimation sample, meaning that observations before that are assumed to have
disturbances of 0.
t0() is irrelevant if structural is specified because then all observations are assumed to have
disturbances of 0.
t0(5) would begin recursions at t = 5. If the data were quarterly, you might instead type
t0(tq(1961q2)) to obtain the same result.
The ARMA component of ARIMA models is recursive and depends on the starting point of the
predictions. This includes one-step-ahead predictions.
structural specifies that the calculation be made considering the structural component only, ignoring
the ARMA terms, producing the steady-state equilibrium predictions.
$$\texttt{xb}_t = \widehat{\beta}_0 + \widehat{\rho}_1\{\Delta\ln(\text{wpi}_{t-1}) - \widehat{\beta}_0\} + \widehat{\theta}_1\widehat{\epsilon}_{t-1} + \widehat{\theta}_4\widehat{\epsilon}_{t-4}$$

where $\widehat{\epsilon}_{t-j} = \Delta\ln(\text{wpi}_{t-j}) - \texttt{xb}_{t-j}$.

If we instead use

. predict y, y

Stata computes $\widehat{y}_t$ as $\widehat{y}_t = \texttt{xb}_t + \ln(\text{wpi}_{t-1})$ so that $\widehat{y}_t$ represents the predicted levels of $\ln(\text{wpi}_t)$. In general, predict newvar, y will reverse any time-series operators applied to the dependent variable during estimation.

If we want to ignore the ARMA error components when making predictions, we use the structural option,

. predict xbs, xb structural

which generates $\texttt{xbs}_t = \widehat{\beta}_0$ because there are no regressors in this model, and

. predict ys, y structural
$$\text{consump}_t = \beta_0 + \beta_1\,\text{m2}_t + \mu_t$$
$$\mu_t = \rho\,\mu_{t-1} + \theta\,\epsilon_{t-1} + \epsilon_t$$
First, we refit the model by using data up through the first quarter of 1978, and then we will evaluate
the one-step-ahead and dynamic forecasts.
. use http://www.stata-press.com/data/r13/friedman2
. keep if time<=tq(1981q4)
(67 observations deleted)
. arima consump m2 if tin(, 1978q1), ar(1) ma(1)
(output omitted)
. predict chat, y

(Because our dependent variable contained no time-series operators, we could have instead used predict chat, xb and accomplished the same thing.) We will also make dynamic forecasts, switching from observed values of consump to forecasted values at the first quarter of 1978:
. predict chatdy, dynamic(tq(1978q1)) y
(52 missing values generated)
The following graph compares the forecasted values to the observed values for the first few years
following the estimation sample:
[Graph omitted: personal consumption (billions of dollars), 1977q1-1982q1, showing observed values, one-step-ahead forecasts, and dynamic forecasts beginning in 1978q1]
The one-step-ahead forecasts never deviate far from the observed values, though over time the
dynamic forecasts have larger errors. To understand why that is the case, rewrite the model as
$$\text{consump}_t = \beta_0 + \beta_1\text{m2}_t + \rho(\text{consump}_{t-1} - \beta_0 - \beta_1\text{m2}_{t-1}) + \theta\,\epsilon_{t-1} + \epsilon_t$$

This form shows that the forecasted value of consumption at time t depends on the value of consumption at time t - 1. When making the one-step-ahead forecast for period t, we know the actual value of consumption at time t - 1. On the other hand, with the dynamic(tq(1978q1)) option, the forecasted value of consumption for period 1978q1 is based on the observed value of consumption in period 1977q4, but the forecast for 1978q2 is based on the forecast value for 1978q1, the forecast for 1978q3 is based on the forecast value for 1978q2, and so on. Thus, with dynamic forecasts, prior forecast errors accumulate over time. The following graph illustrates this effect.
[Graph omitted: forecast error (forecast minus actual) by quarter, 1978q1-1982q1, for the one-step-ahead and dynamic forecasts]
Example 2
Consider a model of the quarterly U.S. money supply, as measured by M1, from Enders (2004).
Enders (2004, 9397) discusses why seasonal shopping patterns cause seasonal effects in M1. The
variable lnm1 contains data on the natural log of the money supply. We fit seasonal and nonseasonal
ARIMA models and compare the IRFs calculated from both models.
We fit the following nonseasonal ARIMA model:

$$\Delta\Delta_4\,\text{lnm1}_t = \rho_1(\Delta\Delta_4\,\text{lnm1}_{t-1}) + \rho_4(\Delta\Delta_4\,\text{lnm1}_{t-4}) + \epsilon_t$$
The code below fits the above model and saves a set of IRF results to a file called myirf.irf.
. use http://www.stata-press.com/data/r13/m1nsa, clear
(U.S. money supply (M1) from Enders (2004), 95-99.)
. arima DS4.lnm1, ar(1 4) noconstant nolog
ARIMA regression

Sample: 1961q2 - 2008q2                  Number of obs     =        189
                                         Wald chi2(2)      =      78.34
Log likelihood =  579.3036               Prob > chi2       =     0.0000

------------------------------------------------------------------------------
             |                 OPG
    DS4.lnm1 |      Coef.   Std. Err.      z    P>|z|     [95% Conf. Interval]
-------------+----------------------------------------------------------------
ARMA         |
          ar |
         L1. |   .3551862   .0503011     7.06   0.000     .2565979    .4537745
         L4. |  -.3275808   .0594953    -5.51   0.000    -.4441895    -.210972
-------------+----------------------------------------------------------------
      /sigma |   .0112678   .0004882    23.08   0.000     .0103109    .0122246
------------------------------------------------------------------------------
Note: The test of the variance against zero is one sided, and the two-sided
      confidence interval is truncated at zero.
. irf create nonseasonal, set(myirf) step(30)
(file myirf.irf created)
(file myirf.irf now active)
(file myirf.irf updated)

We now fit a multiplicative SARIMA model in which the fourth-order dependence enters through a lag-4 seasonal autoregressive factor, and we save a second set of IRF results:

. arima DS4.lnm1, ar(1) mar(1, 4) noconstant nolog
ARIMA regression

Sample: 1961q2 - 2008q2                  Number of obs     =        189
                                         Wald chi2(2)      =     119.78
Log likelihood =  588.6689               Prob > chi2       =     0.0000

------------------------------------------------------------------------------
             |                 OPG
    DS4.lnm1 |      Coef.   Std. Err.      z    P>|z|     [95% Conf. Interval]
-------------+----------------------------------------------------------------
ARMA         |
          ar |
         L1. |    .489277   .0538033     9.09   0.000     .3838245    .5947296
-------------+----------------------------------------------------------------
ARMA4        |
          ar |
         L1. |  -.4688653   .0601248    -7.80   0.000    -.5867076   -.3510229
-------------+----------------------------------------------------------------
      /sigma |   .0107075   .0004747    22.56   0.000     .0097771    .0116379
------------------------------------------------------------------------------
Note: The test of the variance against zero is one sided, and the two-sided
      confidence interval is truncated at zero.

. irf create seasonal, step(30)
(file myirf.irf updated)
We now have two sets of IRF results in the file myirf.irf. We can graph both IRF functions side
by side by calling irf graph.
. irf graph irf

[Graphs omitted: impulse-response functions for the nonseasonal and seasonal models (impulse DS4.lnm1, response DS4.lnm1), steps 0-30, with 95% confidence intervals]
The trajectories of the IRFs are similar: each figure shows that a shock to lnm1 causes a
temporary oscillation in lnm1 that dies out after about 15 time periods. This behavior is characteristic
of short-memory processes.
See [TS] psdensity for an introduction to estimating spectral densities using the parameters estimated
by arima.
Reference
Enders, W. 2004. Applied Econometric Time Series. 2nd ed. New York: Wiley.
Also see
[TS] arima ARIMA, ARMAX, and other dynamic regression models
[TS] estat acplot Plot parametric autocorrelation and autocovariance functions
[TS] estat aroots Check the stability condition of ARIMA estimates
[TS] irf Create and analyze IRFs, dynamic-multiplier functions, and FEVDs
[TS] psdensity Parametric spectral density estimation after arima, arfima, and ucm
[U] 20 Estimation and postestimation commands
Title
corrgram Tabulate and graph autocorrelations
Syntax   Menu   Description   Options for corrgram   Options for ac and pac
Remarks and examples   Stored results   Methods and formulas   Acknowledgment
References   Also see
Syntax
Autocorrelations, partial autocorrelations, and portmanteau (Q) statistics
    corrgram varname [if] [in] [, corrgram options]

Graph autocorrelations with confidence intervals

    ac varname [if] [in] [, ac options]

Graph partial autocorrelations with confidence intervals

    pac varname [if] [in] [, pac options]
corrgram options          Description
---------------------------------------------------------------------------
Main
  lags(#)                 calculate # autocorrelations
  noplot                  suppress character-based plots
  yw                      calculate partial autocorrelations by using Yule–Walker equations
ac options                Description
---------------------------------------------------------------------------
Main
  lags(#)                 calculate # autocorrelations
  generate(newvar)        generate a variable to hold the autocorrelations
  level(#)                set confidence level; default is level(95)
  fft                     calculate autocorrelation by using Fourier transforms

Plot
  line options            change look of dropped lines
  marker options          change look of markers (color, size, etc.)
  marker label options    add marker labels; change look or position

CI plot
  ciopts(area options)    affect rendition of the confidence bands

Add plots
  addplot(plot)           add other plots to the generated graph
  twoway options          any options other than by() documented in [G-3] twoway options
pac options               Description
---------------------------------------------------------------------------
Main
  lags(#)                 calculate # partial autocorrelations
  generate(newvar)        generate a variable to hold the partial autocorrelations
  yw                      calculate partial autocorrelations by using Yule–Walker equations
  level(#)                set confidence level; default is level(95)

Plot
  line options            change look of dropped lines
  marker options          change look of markers (color, size, etc.)
  marker label options    add marker labels; change look or position

CI plot
  ciopts(area options)    affect rendition of the confidence bands

SRV plot
  srv                     include standardized residual variances in graph
  srvopts(marker options) affect rendition of the plotted standardized residual variances

Add plots
  addplot(plot)           add other plots to the generated graph
  twoway options          any options other than by() documented in [G-3] twoway options
You must tsset your data before using corrgram, ac, or pac; see [TS] tsset. Also, the time series
must be dense (nonmissing and no gaps in the time variable) in the sample if you specify the fft option.
varname may contain time-series operators; see [U] 11.4.4 Time-series varlists.
Menu

corrgram
    Statistics > Time series > Graphs > Autocorrelations & partial autocorrelations

ac
    Statistics > Time series > Graphs > Correlogram (ac)

pac
    Statistics > Time series > Graphs > Partial correlogram (pac)
Description
corrgram produces a table of the autocorrelations, partial autocorrelations, and portmanteau (Q)
statistics. It also displays a character-based plot of the autocorrelations and partial autocorrelations.
See [TS] wntestq for more information on the Q statistic.
ac produces a correlogram (a graph of autocorrelations) with pointwise confidence intervals that is based on Bartlett's formula for MA(q) processes.

pac produces a partial correlogram (a graph of partial autocorrelations) with confidence intervals calculated using a standard error of $1/\sqrt{n}$. The residual variances for each lag may optionally be included on the graph.
Options for corrgram

Main

lags(#) specifies the number of autocorrelations to calculate. The default is to use $\min(\lfloor n/2\rfloor - 2,\ 40)$, where $\lfloor n/2\rfloor$ is the greatest integer less than or equal to n/2.
noplot prevents the character-based plots from being in the listed table of autocorrelations and partial
autocorrelations.
yw specifies that the partial autocorrelations be calculated using the Yule–Walker equations instead of using the default regression-based technique. yw cannot be used if srv is used.
Options for ac and pac

Main

lags(#) specifies the number of autocorrelations to calculate. The default is to use $\min(\lfloor n/2\rfloor - 2,\ 40)$, where $\lfloor n/2\rfloor$ is the greatest integer less than or equal to n/2.
generate(newvar) specifies a new variable to contain the autocorrelation (ac command) or partial
autocorrelation (pac command) values. This option is required if the nograph option is used.
nograph (implied when using generate() in the dialog box) prevents ac and pac from constructing
a graph. This option requires the generate() option.
yw (pac only) specifies that the partial autocorrelations be calculated using the Yule–Walker equations instead of using the default regression-based technique. yw cannot be used if srv is used.
level(#) specifies the confidence level, as a percentage, for the confidence bands in the ac or pac
graph. The default is level(95) or as set by set level; see [R] level.
fft (ac only) specifies that the autocorrelations be calculated using two Fourier transforms. This
technique can be faster than simply iterating over the requested number of lags.
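For example, the autocorrelations could be computed by FFT and stored without drawing the graph (the new variable name is hypothetical):

. ac air, fft generate(acvals) nograph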
Plot
line options, marker options, and marker label options affect the rendition of the plotted autocorrelations (with ac) or partial autocorrelations (with pac).
line options specify the look of the dropped lines, including pattern, width, and color; see
[G-3] line options.
marker options specify the look of markers. This look includes the marker symbol, the marker
size, and its color and outline; see [G-3] marker options.
marker label options specify if and how the markers are to be labeled; see
[G-3] marker label options.
CI plot
ciopts(area options) affects the rendition of the confidence bands; see [G-3] area options.
SRV plot
srv (pac only) specifies that the standardized residual variances be plotted with the partial autocorrelations. srv cannot be used if yw is used.
srvopts(marker options) (pac only) affects the rendition of the plotted standardized residual
variances; see [G-3] marker options. This option implies the srv option.
Add plots
addplot(plot) adds specified plots to the generated graph; see [G-3] addplot option.
twoway options are any of the options documented in [G-3] twoway options, excluding by(). These
include options for titling the graph (see [G-3] title options) and for saving the graph to disk (see
[G-3] saving option).
Remarks and examples

Basic examples
corrgram tabulates autocorrelations, partial autocorrelations, and portmanteau (Q) statistics and
plots the autocorrelations and partial autocorrelations. The Q statistics are the same as those produced
by [TS] wntestq. ac produces graphs of the autocorrelations, and pac produces graphs of the partial
autocorrelations. See Becketti (2013) for additional examples of how these commands are used in
practice.
Example 1
Here we use the international airline passengers dataset (Box, Jenkins, and Reinsel 2008, Series G).
This dataset has 144 observations on the monthly number of international airline passengers from
1949 through 1960. We can list the autocorrelations and partial autocorrelations by using corrgram.
. use http://www.stata-press.com/data/r13/air2
(TIMESLAB: Airline passengers)
. corrgram air, lags(20)

 LAG       AC       PAC        Q     Prob>Q
-----------------------------------------------
   1     0.9480   0.9589    132.14   0.0000
   2     0.8756  -0.3298    245.65   0.0000
   3     0.8067   0.2018    342.67   0.0000
   4     0.7526   0.1450    427.74   0.0000
   5     0.7138   0.2585     504.8   0.0000
   6     0.6817  -0.0269     575.6   0.0000
   7     0.6629   0.2043    643.04   0.0000
   8     0.6556   0.1561    709.48   0.0000
   9     0.6709   0.5686    779.59   0.0000
  10     0.7027   0.2926    857.07   0.0000
  11     0.7432   0.8402    944.39   0.0000
  12     0.7604   0.6127    1036.5   0.0000
  13     0.7127  -0.6660      1118   0.0000
  14     0.6463  -0.3846    1185.6   0.0000
  15     0.5859   0.0787    1241.5   0.0000
  16     0.5380  -0.0266      1289   0.0000
  17     0.4997  -0.0581    1330.4   0.0000
  18     0.4687  -0.0435      1367   0.0000
  19     0.4499   0.2773    1401.1   0.0000
  20     0.4416  -0.0405    1434.1   0.0000

(character-based plots of the autocorrelations [-1, 1] and partial autocorrelations [-1, 1] omitted)
. ac air, lags(20)

[Correlogram of air, lags 1-20, with 95% confidence bands, omitted]
The data probably have a trend component as well as a seasonal component. First-differencing will mitigate the effects of the trend, and seasonal differencing will help control for seasonality. To accomplish this goal, we can use Stata's time-series operators. Here we graph the partial autocorrelations after controlling for trends and seasonality. We also use srv to include the standardized residual variances.
. pac DS12.air, lags(20) srv

[Partial correlogram of DS12.air, lags 1-20, with standardized residual variances and 95% confidence bands (se = 1/sqrt(n)), omitted]
See [U] 11.4.4 Time-series varlists for more information about time-series operators.
Video example
Time series, part 4: Correlograms and partial correlograms
Stored results
corrgram stores the following in r():
Scalars
  r(lags)    number of lags
  r(ac#)     AC for lag #
  r(pac#)    PAC for lag #
  r(q#)      Q for lag #

Matrices
  r(AC)      vector of autocorrelations
  r(PAC)     vector of partial autocorrelations
  r(Q)       vector of Q statistics
Methods and formulas

The autocovariance function for a time series $x_1, x_2, \ldots, x_n$ is defined for $|v| < n$ as

$$\widehat{R}(v) = \frac{1}{n}\sum_{i=1}^{n-|v|}(x_i - \bar{x})(x_{i+v} - \bar{x})$$

where $\bar{x}$ is the sample mean, and the autocorrelation function is then defined as

$$\widehat{\rho}_v = \frac{\widehat{R}(v)}{\widehat{R}(0)}$$

The variance of $\widehat{\rho}_v$ is given by Bartlett's formula for MA(q) processes. From Brockwell and Davis (2002, 94), we have

$$\operatorname{Var}(\widehat{\rho}_v) = \begin{cases} 1/n & v = 1 \\[1ex] \dfrac{1}{n}\left\{1 + 2\displaystyle\sum_{i=1}^{v-1}\widehat{\rho}^{\,2}(i)\right\} & v > 1 \end{cases}$$
The partial autocorrelation at lag v measures the correlation between $x_t$ and $x_{t+v}$ after the effects of $x_{t+1}, \ldots, x_{t+v-1}$ have been removed. By default, corrgram and pac use a regression-based method to estimate it. We run an OLS regression of $x_t$ on $x_{t-1}, \ldots, x_{t-v}$ and a constant term. The estimated coefficient on $x_{t-v}$ is our estimate of the vth partial autocorrelation. The residual variance is the estimated variance of that regression, which we then standardize by dividing by $\widehat{R}(0)$.
If the yw option is specified, corrgram and pac use the Yule–Walker equations to estimate the partial autocorrelations. Per Enders (2010, 66–67), let $\phi_{vv}$ denote the vth partial autocorrelation coefficient. We then have

$$\widehat{\phi}_{11} = \widehat{\rho}_1$$

and for $v > 1$,

$$\widehat{\phi}_{vv} = \frac{\widehat{\rho}_v - \sum_{j=1}^{v-1}\widehat{\phi}_{v-1,j}\,\widehat{\rho}_{v-j}}{1 - \sum_{j=1}^{v-1}\widehat{\phi}_{v-1,j}\,\widehat{\rho}_j}$$

and

$$\widehat{\phi}_{vj} = \widehat{\phi}_{v-1,j} - \widehat{\phi}_{vv}\,\widehat{\phi}_{v-1,v-j}, \qquad j = 1, 2, \ldots, v-1$$
Unlike the regression-based method, the Yule–Walker equations-based method ensures that the first sample partial autocorrelation equals the first sample autocorrelation coefficient, as must be true in the population; see Greene (2008, 725).
McCullough (1998) discusses other methods of estimating $\phi_{vv}$; he finds that relative to other methods, such as linear regression, the Yule–Walker equations-based method performs poorly, in part because it is susceptible to numerical error. Box, Jenkins, and Reinsel (2008, 69) also caution against using the Yule–Walker equations-based method, especially with data that are nearly nonstationary.
Acknowledgment
The ac and pac commands are based on the ac and pac commands written by Sean Becketti (1992),
a past editor of the Stata Technical Bulletin and author of the Stata Press book Introduction to Time
Series Using Stata.
References
Becketti, S. 1992. sts1: Autocorrelation and partial autocorrelation graphs. Stata Technical Bulletin 5: 27–28. Reprinted in Stata Technical Bulletin Reprints, vol. 1, pp. 221–223. College Station, TX: Stata Press.

———. 2013. Introduction to Time Series Using Stata. College Station, TX: Stata Press.

Box, G. E. P., G. M. Jenkins, and G. C. Reinsel. 2008. Time Series Analysis: Forecasting and Control. 4th ed. Hoboken, NJ: Wiley.

Brockwell, P. J., and R. A. Davis. 2002. Introduction to Time Series and Forecasting. 2nd ed. New York: Springer.

Chatfield, C. 2004. The Analysis of Time Series: An Introduction. 6th ed. Boca Raton, FL: Chapman & Hall/CRC.

Enders, W. 2010. Applied Econometric Time Series. 3rd ed. New York: Wiley.

Greene, W. H. 2008. Econometric Analysis. 6th ed. Upper Saddle River, NJ: Prentice Hall.

Hamilton, J. D. 1994. Time Series Analysis. Princeton: Princeton University Press.

McCullough, B. D. 1998. Algorithm choice for (partial) autocorrelation functions. Journal of Economic and Social Measurement 24: 265–278.

Newton, H. J. 1988. TIMESLAB: A Time Series Analysis Laboratory. Belmont, CA: Wadsworth.
Also see
[TS] tsset Declare data to be time-series data
[TS] pergram Periodogram
[TS] wntestq Portmanteau (Q) test for white noise
Title
cumsp Cumulative spectral distribution
Syntax   Menu   Description   Options   Remarks and examples
Methods and formulas   References   Also see
Syntax
cumsp varname [if] [in] [, options]
options                  Description
----------------------------------------------------------------------------
Main
  generate(newvar)       generate newvar to contain the cumulative spectral-
                           distribution values
Plot
  cline options          affect rendition of the plotted points connected by lines
  marker options         change look of markers (color, size, etc.)
  marker label options   add marker labels; change look or position
Add plots
  addplot(plot)          add other plots to the generated graph
Y axis, X axis, Titles, Legend, Overall
  twoway options         any options other than by() documented in
                           [G-3] twoway options
----------------------------------------------------------------------------
You must tsset your data before using cumsp; see [TS] tsset. Also, the time series must be dense
(nonmissing with no gaps in the time variable) in the sample specified.
varname may contain time-series operators; see [U] 11.4.4 Time-series varlists.
Menu
Statistics > Time series > Graphs > Cumulative spectral distribution
Description
cumsp plots the cumulative sample spectral-distribution function evaluated at the natural frequencies
for a (dense) time series.
Options

Main
generate(newvar) specifies a new variable to contain the estimated cumulative spectral-distribution values.

Plot
cline options affect the rendition of the plotted points connected by lines; see [G-3] cline options.
marker options specify the look of markers. This look includes the marker symbol, the marker size, and its color and outline; see [G-3] marker options.
marker label options specify if and how the markers are to be labeled; see [G-3] marker label options.

Add plots
addplot(plot) provides a way to add other plots to the generated graph; see [G-3] addplot option.

Y axis, X axis, Titles, Legend, Overall
twoway options are any of the options documented in [G-3] twoway options, excluding by(). These include options for titling the graph (see [G-3] title options) and for saving the graph to disk (see [G-3] saving option).
(figure omitted: cumulative sample spectral distribution; y axis from 0.00 to 1.00, x axis Frequency from 0.00 to 0.50)
The cumulative sample spectral-distribution function clearly illustrates the annual cycle.
Methods and formulas

For each natural frequency $\omega_k$, the sample spectral measure is based on

$$\hat{C}_k^2 = \frac{1}{n^2}\left|\sum_{t=1}^{n} x(t)\, e^{2\pi i (t-1)\omega_k}\right|^2$$

The cumulative sample spectral-distribution function is then

$$\hat{F}(\omega_k) = \frac{\sum_{j=1}^{k} \hat{f}(\omega_j)}{\sum_{j=1}^{q} \hat{f}(\omega_j)}$$

where $\hat{f}(\omega_j)$ is the sample spectral density at natural frequency $\omega_j$ and the denominator sums over all $q$ natural frequencies.
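As a usage sketch (the dataset is an assumption; any dense, tsset series works), the distribution values can also be saved with the generate() option:

. use http://www.stata-press.com/data/r13/air2
. cumsp air, generate(Fhat)
. list t air Fhat in 1/5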
References
Box, G. E. P., G. M. Jenkins, and G. C. Reinsel. 2008. Time Series Analysis: Forecasting and Control. 4th ed.
Hoboken, NJ: Wiley.
Newton, H. J. 1988. TIMESLAB: A Time Series Analysis Laboratory. Belmont, CA: Wadsworth.
Also see
[TS] tsset Declare data to be time-series data
[TS] corrgram Tabulate and graph autocorrelations
[TS] pergram Periodogram
Title
dfactor Dynamic-factor models
Syntax
Remarks and examples
Also see
Menu
Stored results
Description
Methods and formulas
Options
References
Syntax

dfactor obs_eq [fac_eq] [if] [in] [, options]

obs_eq specifies the equation for the observed dependent variables, and it has the form

    (depvars = [exog_d] [, sopts])

fac_eq specifies the equation for the unobserved factors, and it has the form

    (facvars = [exog_f] [, sopts])

depvars are the observed dependent variables. exog_d are the exogenous variables that enter into the equations for the observed dependent variables. (All factors are automatically entered into the equations for the observed dependent variables.) facvars are the names for the unobserved factors in the model. You may specify the names of existing variables in facvars, but dfactor treats them only as names and takes no notice that they are also variables. exog_f are the exogenous variables that enter into the equations for the factors.
options                      Description
----------------------------------------------------------------------------
Model
  constraints(constraints)   apply specified linear constraints
SE/Robust
  vce(vcetype)               vcetype may be oim or robust
Reporting
  level(#)                   set confidence level; default is level(95)
  nocnsreport                do not display constraints
  display_options            control display of output
Maximization
  maximize_options           control the maximization process; seldom used
  from(matname)              specify initial values for the maximization
                               process; seldom used
Advanced
  method(method)             specify the method for calculating the log
                               likelihood; seldom used

coeflegend                   display legend instead of statistics
----------------------------------------------------------------------------
sopts                        Description
----------------------------------------------------------------------------
Model
  noconstant                 suppress constant term from the equation
  ar(numlist)                autoregressive terms
  arstructure(arstructure)   structure of autoregressive coefficient matrices
  covstructure(covstructure) covariance structure
----------------------------------------------------------------------------

arstructure     Description
----------------------------------------------------------------------------
  diagonal      diagonal matrices; the default
  ltriangular   lower triangular matrices
  general       general matrices
----------------------------------------------------------------------------

covstructure    Description
----------------------------------------------------------------------------
  identity      identity matrix
  dscalar       diagonal scalar matrix
  diagonal      diagonal matrix
  unstructured  symmetric, positive-definite matrix
----------------------------------------------------------------------------

method          Description
----------------------------------------------------------------------------
  hybrid        use the stationary Kalman filter and the De Jong diffuse
                  Kalman filter; the default
  dejong        use the stationary De Jong method and the De Jong diffuse
                  Kalman filter
----------------------------------------------------------------------------
You must tsset your data before using dfactor; see [TS] tsset.
exog d and exog f may contain factor variables; see [U] 11.4.3 Factor variables.
depvars, exog d, and exog f may contain time-series operators; see [U] 11.4.4 Time-series varlists.
by, fp, rolling, and statsby are allowed; see [U] 11.1.10 Prefix commands.
coeflegend does not appear in the dialog box.
See [U] 20 Estimation and postestimation commands for more capabilities of estimation commands.
Menu
Statistics > Multivariate time series > Dynamic-factor models
Description
dfactor estimates the parameters of dynamic-factor models by maximum likelihood. Dynamic-factor models are flexible models for multivariate time series in which unobserved factors have a vector autoregressive structure, exogenous covariates are permitted in both the equations for the latent factors and the equations for observable dependent variables, and the disturbances in the equations for the dependent variables may be autocorrelated.
Options
Model
constraints(constraints) apply linear constraints. Some specifications require linear constraints for
parameter identification.
noconstant suppresses the constant term.
ar(numlist) specifies the vector autoregressive lag structure in the equation. By default, no lags are
included in either the observable or the factor equations.
arstructure(diagonal|ltriangular|general) specifies the structure of the matrices in the vector autoregressive lag structure.
arstructure(diagonal) specifies the matrices to be diagonal, with separate parameters for each lag but no cross-equation autocorrelations. arstructure(diagonal) is the default for both the observable and the factor equations.
arstructure(ltriangular) specifies the matrices to be lower triangular, parameterizing a recursive, or Wold causal, structure.
arstructure(general) specifies the matrices to be general matrices, with separate parameters for each possible autocorrelation and cross-correlation.
covstructure(identity | dscalar | diagonal | unstructured) specifies the covariance structure
of the errors.
covstructure(identity) specifies a covariance matrix equal to an identity matrix, and it is the
default for the errors in the factor equations.
covstructure(dscalar) specifies a covariance matrix equal to $\sigma^2$ times an identity matrix.
covstructure(diagonal) specifies a diagonal covariance matrix, and it is the default for the
errors in the observable variables.
covstructure(unstructured) specifies a symmetric, positive-definite covariance matrix with
parameters for all variances and covariances.
SE/Robust
vce(vcetype) specifies the estimator for the variancecovariance matrix of the estimator.
vce(oim), the default, causes dfactor to use the observed information matrix estimator.
vce(robust) causes dfactor to use the Huber/White/sandwich estimator.
Reporting
level(#), nocnsreport, display_options; see [R] estimation options.

Maximization
maximize_options: difficult, technique(algorithm_spec), iterate(#), [no]log, trace, gradient, showstep, hessian, showtolerance, tolerance(#), ltolerance(#), nrtolerance(#), and from(matname); see [R] maximize for all options except from(), and see below for information on from(). These options are seldom used.
from(matname) specifies initial values for the maximization process. from(b0) causes dfactor
to begin the maximization algorithm with the values in b0. b0 must be a row vector; the number
of columns must equal the number of parameters in the model; and the values in b0 must be
in the same order as the parameters in e(b). This option is seldom used.
Advanced
method(method) specifies how to compute the log likelihood. dfactor writes the model in state-space form and uses sspace to estimate the parameters; see [TS] sspace. method() offers two
methods for dealing with some of the technical aspects of the state-space likelihood. This option
is seldom used.
method(hybrid), the default, uses the Kalman filter with model-based initial values when the
model is stationary and uses the De Jong (1988, 1991) diffuse Kalman filter when the model
is nonstationary.
method(dejong) uses the De Jong (1988) method for estimating the initial values for the Kalman
filter when the model is stationary and uses the De Jong (1988, 1991) diffuse Kalman filter
when the model is nonstationary.
The following option is available with dfactor but is not shown in the dialog box:
coeflegend; see [R] estimation options.
$$y_t = P f_t + Q x_t + u_t$$
$$f_t = R w_t + A_1 f_{t-1} + A_2 f_{t-2} + \cdots + A_p f_{t-p} + \nu_t$$
$$u_t = C_1 u_{t-1} + C_2 u_{t-2} + \cdots + C_q u_{t-q} + \epsilon_t$$

where the definitions are given in the following table:

Item        Dimension    Definition
----------------------------------------------------------------------------
y_t         k × 1        vector of dependent variables
P           k × nf       matrix of parameters
f_t         nf × 1       vector of unobservable factors
Q           k × nx       matrix of parameters
x_t         nx × 1       vector of exogenous variables
u_t         k × 1        vector of disturbances
R           nf × nw      matrix of parameters
w_t         nw × 1       vector of exogenous variables
A_i         nf × nf      matrix of autocorrelation parameters for i ∈ {1, 2, ..., p}
ν_t         nf × 1       vector of disturbances
C_i         k × k        matrix of autocorrelation parameters for i ∈ {1, 2, ..., q}
ε_t         k × 1        vector of disturbances
----------------------------------------------------------------------------
By selecting different numbers of factors and lags, the dynamic-factor model encompasses the six models in the table below:

Model                                                        nf     p      q
----------------------------------------------------------------------------
Dynamic factors with vector autoregressive errors (DFAR)    > 0    > 0    > 0
Dynamic factors (DF)                                        > 0    > 0    = 0
Static factors with vector autoregressive errors (SFAR)     > 0    = 0    > 0
Static factors (SF)                                         > 0    = 0    = 0
Vector autoregressive errors (VAR)                          = 0    = 0    > 0
Seemingly unrelated regression (SUR)                        = 0    = 0    = 0
----------------------------------------------------------------------------
In addition to the time-series models, dfactor can estimate the parameters of SF models and SUR
models. dfactor can place equality constraints on the disturbance covariances, which sureg and
var do not allow.
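To make the table concrete, here is how a few of these cases might be requested (a sketch with hypothetical variables y1 and y2; each line is an independent specification):

. dfactor (D.(y1 y2) = , noconstant) (f = , ar(1/2))    // DF:   nf>0, p>0, q=0
. dfactor (D.(y1 y2) = , noconstant ar(1)) (f = )       // SFAR: nf>0, p=0, q>0
. dfactor (D.(y1 y2) = , noconstant ar(1))              // VAR errors: nf=0, q>0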
Some examples
Example 1: Dynamic-factor model
Stock and Watson (1989, 1991) wrote a simple macroeconomic model as a DF model, estimated the
parameters by ML, and extracted an economic indicator. In this example, we estimate the parameters
of a DF model. In [TS] dfactor postestimation, we extend this example and extract an economic
indicator for the differenced series.
We have data on an industrial-production index, ipman; real disposable income, income; an
aggregate weekly hours index, hours; and aggregate unemployment, unemp. We believe that these
variables are first-difference stationary. We model their first-differences as linear functions of an
unobserved factor that follows a second-order autoregressive process.
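The estimation command itself is omitted in this excerpt; a specification consistent with this description (a sketch using the dfex dataset that appears in the postestimation entry) would be

. use http://www.stata-press.com/data/r13/dfex
. dfactor (D.(ipman income hours unemp) = , noconstant) (f = , ar(1/2))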
                                                Number of obs   =       442
                                                Wald chi2(6)    =    751.95
                                                Prob > chi2     =    0.0000

------------------------------------------------------------------------------
              |               OIM
              |      Coef.   Std. Err.      z    P>|z|    [95% Conf. Interval]
--------------+----------------------------------------------------------------
f             |
            f |
          L1. |   .2651932   .0568663     4.66   0.000     .1537372   .3766491
          L2. |   .4820398   .0624635     7.72   0.000     .3596136    .604466
--------------+----------------------------------------------------------------
D.ipman       |
            f |   .3502249   .0287389    12.19   0.000     .2938976   .4065522
D.income      |
            f |   .0746338   .0217319     3.43   0.001     .0320401   .1172276
D.hours       |
            f |   .2177469   .0186769    11.66   0.000     .1811407    .254353
D.unemp       |
            f |  -.0676016   .0071022    -9.52   0.000    -.0815217  -.0536816
--------------+----------------------------------------------------------------
 var(De.ipman)|   .1383158   .0167086     8.28   0.000     .1055675   .1710641
 var(De.inc~e)|   .2773808   .0188302    14.73   0.000     .2404743   .3142873
 var(De.hours)|   .0911446   .0080847    11.27   0.000     .0752988   .1069903
 var(De.unemp)|   .0237232   .0017932    13.23   0.000     .0202086   .0272378
------------------------------------------------------------------------------
Note: Tests of variances against zero are one sided, and the two-sided
      confidence intervals are truncated at zero.
For a discussion of the atypical iteration log, see example 1 in [TS] sspace.
The header in the output describes the estimation sample, reports the log-likelihood function at the
maximum, and gives the results of a Wald test against the null hypothesis that the coefficients on the
independent variables, the factors, and the autoregressive components are all zero. In this example,
the null hypothesis that all parameters except for the variance parameters are zero is rejected at all
conventional levels.
The results in the estimation table indicate that the unobserved factor is quite persistent and that
it is a significant predictor for each of the observed variables.
dfactor writes the DF model as a state-space model and uses the same methods as sspace to
estimate the parameters. Example 5 in [TS] sspace writes the model considered here in state-space
form and uses sspace to estimate the parameters.
Technical note
The signs of the coefficients on the unobserved factors are not identified. They are not identified
because we can multiply the unobserved factors and the coefficients on the unobserved factors by
negative one without changing the log likelihood or any of the model predictions.
Altering either the starting values for the maximization process, the maximization technique()
used, or the platform on which the command is run can cause the signs of the estimated coefficients
on the unobserved factors to change.
Changes in the signs of the estimated coefficients on the unobserved factors do not alter the
implications of the model or the model predictions.
------------------------------------------------------------------------------
              |               OIM
              |      Coef.   Std. Err.      z    P>|z|    [95% Conf. Interval]
--------------+----------------------------------------------------------------
f             |
            f |
          L1. |   .4058457   .0906183     4.48   0.000     .2282371   .5834544
          L2. |   .3663499   .0849584     4.31   0.000     .1998344   .5328654
--------------+----------------------------------------------------------------
De.ipman      |
      e.ipman |
          LD. |  -.2772149    .068808    -4.03   0.000    -.4120761  -.1423538
De.income     |
     e.income |
          LD. |  -.2213824   .0470578    -4.70   0.000    -.3136141  -.1291508
De.hours      |
      e.hours |
          LD. |  -.3969317   .0504256    -7.87   0.000     -.495764  -.2980994
De.unemp      |
      e.unemp |
          LD. |  -.1736835   .0532071    -3.26   0.001    -.2779675  -.0693995
--------------+----------------------------------------------------------------
D.ipman       |
            f |   .3214972    .027982    11.49   0.000     .2666535   .3763408
D.income      |
            f |   .0760412   .0173844     4.37   0.000     .0419684    .110114
D.hours       |
            f |   .1933165   .0172969    11.18   0.000     .1594151   .2272179
D.unemp       |
            f |  -.0711994   .0066553   -10.70   0.000    -.0842435  -.0581553
--------------+----------------------------------------------------------------
 var(De.ipman)|   .1387909   .0154558     8.98   0.000     .1084981   .1690837
 var(De.inc~e)|   .2636239   .0179043    14.72   0.000     .2285322   .2987157
 var(De.hours)|   .0822919   .0071096    11.57   0.000     .0683574   .0962265
 var(De.unemp)|   .0218056   .0016658    13.09   0.000     .0185407   .0250704
------------------------------------------------------------------------------
Note: Tests of variances against zero are one sided, and the two-sided
      confidence intervals are truncated at zero.
The autoregressive (AR) terms are displayed in error notation. e.varname stands for the error in the equation for varname. The estimate of the pth AR term from y1 on y2 is reported as Lpe.y1 in equation e.y2. In the above output, the estimated first-order AR term of D.ipman on D.ipman is −0.277 and is labeled as LDe.ipman in equation De.ipman.
The previous two examples illustrate how to use dfactor to estimate the parameters of DF models.
Although the previous example indicates that the more general DFAR model fits the data well, we use
these data to illustrate how to estimate the parameters of more restrictive models.
                                                Number of obs   =       441
                                                Wald chi2(6)    =     88.32
                                                Prob > chi2     =    0.0000

------------------------------------------------------------------------------
                         |               OIM
                         |      Coef.   Std. Err.      z    P>|z|    [95% Conf. Interval]
-------------------------+------------------------------------------------------
D.ipman                  |
                   ipman |
                     LD. |    .206276   .0471654     4.37   0.000    .1138335   .2987185
                  income |
                     LD. |   .1867384   .0512139     3.65   0.000     .086361   .2871158
-------------------------+------------------------------------------------------
D.income                 |
                   ipman |
                     LD. |   .1043733   .0434048     2.40   0.016    .0193015   .1894451
                  income |
                     LD. |  -.1957893   .0471305    -4.15   0.000   -.2881634  -.1034153
-------------------------+------------------------------------------------------
D.unemp                  |
                   ipman |
                     LD. |  -.0865823   .0140747    -6.15   0.000   -.1141681  -.0589964
                  income |
                     LD. |  -.0200749   .0152828    -1.31   0.189   -.0500285   .0098788
-------------------------+------------------------------------------------------
 var(De.ipman)           |   .3243902   .0218533    14.84   0.000    .2815584   .3672219
 cov(De.ipman,De.income) |   .0445794    .013696     3.25   0.001    .0177358    .071423
 cov(De.ipman,De.unemp)  |  -.0298076   .0047755    -6.24   0.000   -.0391674  -.0204478
 var(De.inc~e)           |   .2747234   .0185008    14.85   0.000    .2384624   .3109844
 cov(De.inc~e,De.unemp)  |          0  (constrained)
 var(De.unemp)           |   .0288866   .0019453    14.85   0.000    .0250738   .0326994
------------------------------------------------------------------------------
Note: Tests of variances against zero are one sided, and the two-sided
      confidence intervals are truncated at zero.
The output indicates that the model fits well, except that the lag of first-differenced income is not
a significant predictor of first-differenced unemployment.
Technical note
The previous example shows how to use dfactor to estimate the parameters of a SUR model
with constraints on the error-covariance matrix. Neither sureg nor var allows for constraints on the
error-covariance matrix. Without the constraints on the error-covariance matrix and including the lag
of D.unemp,
. dfactor (D.(ipman income unemp) = LD.(ipman income unemp),
> noconstant covstructure(unstructured))
(output omitted )
. var D.(ipman income unemp), lags(1) noconstant
(output omitted )
and
. sureg (D.ipman LD.(ipman income unemp), noconstant)
>
(D.income LD.(ipman income unemp), noconstant)
>
(D.unemp LD.(ipman income unemp), noconstant)
(output omitted )
produce the same estimates after allowing for small numerical differences.
                                                Number of obs   =       442
                                                Wald chi2(6)    =     75.48
                                                Prob > chi2     =    0.0000

------------------------------------------------------------------------------
                         |               OIM
                         |      Coef.   Std. Err.      z    P>|z|    [95% Conf. Interval]
-------------------------+------------------------------------------------------
De.ipman                 |
                 e.ipman |
                     LD. |   .2297308   .0473147     4.86   0.000    .1369957   .3224659
-------------------------+------------------------------------------------------
De.income                |
                 e.ipman |
                     LD. |   .1075441   .0433357     2.48   0.013    .0226077   .1924805
                e.income |
                     LD. |  -.2209485    .047116    -4.69   0.000   -.3132943  -.1286028
-------------------------+------------------------------------------------------
De.unemp                 |
                 e.ipman |
                     LD. |  -.0975759   .0151301    -6.45   0.000   -.1272304  -.0679215
                e.income |
                     LD. |  -.0000467   .0147848    -0.00   0.997   -.0290244   .0289309
                 e.unemp |
                     LD. |  -.0795348   .0482213    -1.65   0.099   -.1740469   .0149773
-------------------------+------------------------------------------------------
 var(De.ipman)           |   .3335286   .0224282    14.87   0.000    .2895702    .377487
 cov(De.ipman,De.income) |   .0457804   .0139123     3.29   0.001    .0185127   .0730481
 cov(De.ipman,De.unemp)  |  -.0329438   .0051423    -6.41   0.000   -.0430226   -.022865
 var(De.inc~e)           |   .2743375   .0184657    14.86   0.000    .2381454   .3105296
 cov(De.inc~e,De.unemp)  |          0  (constrained)
 var(De.unemp)           |   .0292088     .00199    14.68   0.000    .0253083   .0331092
------------------------------------------------------------------------------
Note: Tests of variances against zero are one sided, and the two-sided
      confidence intervals are truncated at zero.
The estimated AR terms of D.income and D.unemp on D.unemp are −0.000047 and −0.079535, and they are not significant at the 1% or 5% levels. The estimated AR term of D.ipman on D.income is 0.107544 and is significant at the 5% level but not at the 1% level.
Technical note
We obtained the unconditional estimator in example 4 by specifying the ar() option instead of
including the lags of the endogenous variables as exogenous variables, as we did in example 3. The
unconditional estimator has an additional observation and is more efficient. This change is analogous
to estimating an AR coefficient by arima instead of using regress on the lagged endogenous variable.
For example, to obtain the unconditional estimator in a univariate model, typing
. arima D.ipman, ar(1) noconstant technique(nr)
(output omitted )
or
. dfactor (D.ipman = LD.ipman, noconstant)
(output omitted )
                                                Number of obs   =       227
                                                Wald chi2(4)    =    342.56
                                                Prob > chi2     =    0.0000

------------------------------------------------------------------------------
              |               OIM
              |      Coef.   Std. Err.      z    P>|z|    [95% Conf. Interval]
--------------+----------------------------------------------------------------
D.west        |
            z |   .0978324   .0065644    14.90   0.000     .0849664   .1106983
D.south       |
            z |   .0859494   .0061762    13.92   0.000     .0738442   .0980546
D.ne          |
            z |   .0918607   .0072814    12.62   0.000     .0775893    .106132
D.midwest     |
            z |   .0861102   .0074652    11.53   0.000     .0714787   .1007417
--------------+----------------------------------------------------------------
 var(De.west) |   .0036887   .0005834     6.32   0.000     .0025453   .0048322
 var(De.south)|   .0038902   .0005228     7.44   0.000     .0028656   .0049149
 var(De.ne)   |   .0064074   .0007558     8.48   0.000     .0049261   .0078887
 var(De.mid~t)|   .0074749   .0008271     9.04   0.000     .0058538    .009096
------------------------------------------------------------------------------
Note: Tests of variances against zero are one sided, and the two-sided
      confidence intervals are truncated at zero.
The estimates indicate that we could reasonably suppose that the unobserved factor has the same
effect on the changes in unemployment in all four regions. The output below shows that we cannot
reject the null hypothesis that these coefficients are the same.
. test
(output omitted )
                                                Number of obs   =       227
                                                Wald chi2(5)    =    363.34
                                                Prob > chi2     =    0.0000

------------------------------------------------------------------------------
              |               OIM
              |      Coef.   Std. Err.      z    P>|z|    [95% Conf. Interval]
--------------+----------------------------------------------------------------
De.west       |
       e.west |
          LD. |   .1297198   .0992663     1.31   0.191    -.0648386   .3242781
De.south      |
      e.south |
          LD. |  -.2829014   .0909205    -3.11   0.002    -.4611023  -.1047004
De.ne         |
         e.ne |
          LD. |   .2866958   .0847851     3.38   0.001       .12052   .4528715
De.midwest    |
    e.midwest |
          LD. |   .0049427   .0782188     0.06   0.950    -.1483634   .1582488
--------------+----------------------------------------------------------------
D.west        |
            z |   .0904724   .0049326    18.34   0.000     .0808047   .1001401
D.south       |
            z |   .0904724   .0049326    18.34   0.000     .0808047   .1001401
D.ne          |
            z |   .0904724   .0049326    18.34   0.000     .0808047   .1001401
D.midwest     |
            z |   .0904724   .0049326    18.34   0.000     .0808047   .1001401
--------------+----------------------------------------------------------------
 var(De.west) |   .0038959   .0005111     7.62   0.000     .0028941   .0048977
 var(De.south)|   .0035518   .0005097     6.97   0.000     .0025528   .0045507
 var(De.ne)   |   .0058173   .0006983     8.33   0.000     .0044488   .0071859
 var(De.mid~t)|   .0075444   .0008268     9.12   0.000     .0059239    .009165
------------------------------------------------------------------------------
Note: Tests of variances against zero are one sided, and the two-sided
      confidence intervals are truncated at zero.
The results indicate that the model might not fit well. Two of the four AR coefficients are statistically
insignificant, while the two significant coefficients have opposite signs and sum to about zero. We
suspect that a DF model might fit these data better than an SF model with autocorrelated disturbances.
Stored results
dfactor stores the following in e():

Scalars
  e(N)                   number of observations
  e(k)                   number of parameters
  e(k_aux)               number of auxiliary parameters
  e(k_eq)                number of equations in e(b)
  e(k_eq_model)          number of equations in overall model test
  e(k_dv)                number of dependent variables
  e(k_obser)             number of observation equations
  e(k_factor)            number of factors specified
  e(o_ar_max)            number of AR terms for the disturbances
  e(f_ar_max)            number of AR terms for the factors
  e(df_m)                model degrees of freedom
  e(ll)                  log likelihood
  e(chi2)                chi-squared
  e(p)                   significance
  e(tmin)                minimum time in sample
  e(tmax)                maximum time in sample
  e(stationary)          1 if the estimated parameters indicate a stationary model, 0 otherwise
  e(rank)                rank of VCE
  e(ic)                  number of iterations
  e(rc)                  return code
  e(converged)           1 if converged, 0 otherwise

Macros
  e(cmd)                 dfactor
  e(cmdline)             command as typed
  e(depvar)              unoperated names of dependent variables in observation equations
  e(obser_deps)          names of dependent variables in observation equations
  e(covariates)          list of covariates
  e(indeps)              independent variables
  e(factor_deps)         names of unobserved factors in model
  e(tvar)                variable denoting time within groups
  e(eqnames)             names of equations
  e(model)               type of dynamic-factor model specified
  e(title)               title in estimation output
  e(tmins)               formatted minimum time
  e(tmaxs)               formatted maximum time
  e(o_ar)                list of AR terms for disturbances
  e(f_ar)                list of AR terms for factors
  e(observ_cov)          structure of observation-error covariance matrix
  e(factor_cov)          structure of factor-error covariance matrix
  e(chi2type)            Wald; type of model chi-squared test
  e(vce)                 vcetype specified in vce()
  e(vcetype)             title used to label Std. Err.
  e(opt)                 type of optimization
  e(method)              likelihood method
  e(initial_values)      type of initial values
  e(technique)           maximization technique
  e(tech_steps)          iterations taken in maximization technique(s)
  e(datasignature)       the checksum
  e(datasignaturevars)   variables used in calculation of checksum
  e(properties)          b V
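A short sketch of retrieving a few of these results after estimation (the displayed values depend on the fitted model):

. display e(k_factor)          // number of factors specified
. display %10.4f e(ll)         // log likelihood
. display "`e(model)'"         // type of dynamic-factor model
. estimates store dfactor1     // catalog the results for later use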
References
De Jong, P. 1988. The likelihood for a state space model. Biometrika 75: 165–169.
———. 1991. The diffuse Kalman filter. Annals of Statistics 19: 1073–1083.
Geweke, J. 1977. The dynamic factor analysis of economic time series models. In Latent Variables in Socioeconomic Models, ed. D. J. Aigner and A. S. Goldberger, 365–383. Amsterdam: North-Holland.
Lütkepohl, H. 2005. New Introduction to Multiple Time Series Analysis. New York: Springer.
Sargent, T. J., and C. A. Sims. 1977. Business cycle modeling without pretending to have too much a priori economic theory. In New Methods in Business Cycle Research: Proceedings from a Conference, ed. C. A. Sims, 45–109. Minneapolis: Federal Reserve Bank of Minneapolis.
Stock, J. H., and M. W. Watson. 1989. New indexes of coincident and leading economic indicators. In NBER Macroeconomics Annual 1989, ed. O. J. Blanchard and S. Fischer, vol. 4, 351–394. Cambridge, MA: MIT Press.
———. 1991. A probability model of the coincident economic indicators. In Leading Economic Indicators: New Approaches and Forecasting Records, ed. K. Lahiri and G. H. Moore, 63–89. Cambridge: Cambridge University Press.
Watson, M. W., and R. F. Engle. 1983. Alternative algorithms for the estimation of dynamic factor, MIMIC and varying coefficient regression models. Journal of Econometrics 23: 385–400.
Also see
[TS] dfactor postestimation Postestimation tools for dfactor
[TS] arima ARIMA, ARMAX, and other dynamic regression models
[TS] sspace State-space models
[TS] tsset Declare data to be time-series data
[TS] var Vector autoregressive models
[R] regress Linear regression
[R] sureg Zellners seemingly unrelated regression
[U] 20 Estimation and postestimation commands
Title
dfactor postestimation Postestimation tools for dfactor
Description
Remarks and examples
Description
The following standard postestimation commands are available after dfactor:
Command            Description
----------------------------------------------------------------------------
estat ic           Akaike's and Schwarz's Bayesian information criteria (AIC and BIC)
estat summarize    summary statistics for the estimation sample
estat vce          variance-covariance matrix of the estimators (VCE)
estimates          cataloging estimation results
forecast           dynamic forecasts and simulations
lincom             point estimates, standard errors, testing, and inference for
                     linear combinations of coefficients
lrtest             likelihood-ratio test
nlcom              point estimates, standard errors, testing, and inference for
                     nonlinear combinations of coefficients
predict            predictions, residuals, influence statistics, and other
                     diagnostic measures
predictnl          point estimates, standard errors, testing, and inference for
                     generalized predictions
test               Wald tests of simple and composite linear hypotheses
testnl             Wald tests of nonlinear hypotheses
----------------------------------------------------------------------------
Syntax for predict

predict [type] {stub* | newvarlist} [if] [in] [, statistic options]

statistic          Description
----------------------------------------------------------------------------
Main
  y                dependent variables, the default
  xb               linear predictions using the observable independent variables
  xbf              linear predictions using the observable independent variables
                     plus the factor contributions
  factors          unobserved factor variables
  residuals        autocorrelated disturbances
  innovations      innovations
----------------------------------------------------------------------------
These statistics are available both in and out of sample; type predict ... if e(sample) ... if wanted only for the estimation sample.
options                     Description
----------------------------------------------------------------------------
Options
  equation(eqnames)         name(s) of equation(s) for which predictions are to
                              be made
  rmse(stub* | newvarlist)  put estimated root mean squared errors of predicted
                              objects into new variables
  dynamic(time_constant)    begin dynamic forecast at specified time
Advanced
  smethod(method)           method for predicting unobserved states
----------------------------------------------------------------------------

method      Description
----------------------------------------------------------------------------
  onestep   predict using past information
  smooth    predict using all sample information
  filter    predict using past and contemporaneous information
----------------------------------------------------------------------------

Menu for predict
Statistics > Postestimation > Predictions, residuals, etc.

Options for predict

Main
y, xb, xbf, factors, residuals, and innovations specify the statistic to be predicted.
y, the default, predicts the dependent variables. The predictions include the contributions of the unobserved factors, the linear predictions by using the observable independent variables, and any autocorrelation, $\hat{P}\hat{f}_t + \hat{Q}x_t + \hat{u}_t$.

xb calculates the linear prediction by using the observable independent variables, $\hat{Q}x_t$.

xbf calculates the contributions of the unobserved factors plus the linear prediction by using the observable independent variables, $\hat{P}\hat{f}_t + \hat{Q}x_t$.

factors estimates the unobserved factors, $\hat{f}_t = \hat{R}w_t + \hat{A}_1\hat{f}_{t-1} + \hat{A}_2\hat{f}_{t-2} + \cdots + \hat{A}_p\hat{f}_{t-p}$.

residuals calculates the autocorrelated residuals, $\hat{u}_t = \hat{C}_1\hat{u}_{t-1} + \hat{C}_2\hat{u}_{t-2} + \cdots + \hat{C}_q\hat{u}_{t-q}$.

innovations calculates the innovations, $\hat{\epsilon}_t = y_t - \hat{P}\hat{f}_t - \hat{Q}x_t - \hat{u}_t$.
Options
equation(eqnames) specifies the equation(s) for which the predictions are to be calculated.
You specify equation names, such as equation(income consumption) or equation(factor1
factor2), to identify the equations. For the factors statistic, you must specify names of equations
for factors; for all other statistics, you must specify names of equations for observable variables.
If you do not specify equation() and do not specify stub*, the results are the same as if you
had specified the name of the first equation for the predicted statistic.
equation() may not be specified with stub*.
rmse(stub* | newvarlist) puts the root mean squared errors of the predicted objects into the specified
new variables. The root mean squared errors measure the variances due to the disturbances but do
not account for estimation error.
dynamic(time constant) specifies when predict starts producing dynamic forecasts. The specified
time constant must be in the scale of the time variable specified in tsset, and the time constant
must be inside a sample for which observations on the dependent variables are available. For
example, dynamic(tq(2008q4)) causes dynamic predictions to begin in the fourth quarter of
2008, assuming that your time variable is quarterly, see [D] datetime. If the model contains
exogenous variables, they must be present for the whole predicted sample. dynamic() may not
be specified with xb, xbf, innovations, smethod(filter), or smethod(smooth).
Advanced
smethod(method) specifies the method used to predict the unobserved states in the model. smethod()
may not be specified with xb.
smethod(onestep), the default, causes predict to use previous information on the dependent
variables. The Kalman filter is performed on previous periods, but only the one-step predictions
are made for the current period.
smethod(smooth) causes predict to estimate the states at each time period using all the sample
data by the Kalman smoother.
smethod(filter) causes predict to estimate the states at each time period using previous
and contemporaneous data by the Kalman filter. The Kalman filter is performed on previous
periods and the current period. smethod(filter) may be specified only with factors and
residuals.
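For example, smoothed and filtered estimates of an unobserved factor named f could be obtained as follows (a sketch; the equation name matches the factor name used at estimation):

. predict fsm, factors smethod(smooth) equation(f)
. predict ffl, factors smethod(filter) equation(f)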
. use http://www.stata-press.com/data/r13/dfex
(St. Louis Fed (FRED) macro data)
. dfactor (D.(ipman income hours unemp) = , noconstant ar(1)) (f = , ar(1/2))
(output omitted )
While several of the six statistics computed by predict might be of interest, we will look only at
a few of these statistics for D.ipman. We begin by obtaining one-step predictions in the estimation
sample and a six-month dynamic forecast for D.ipman. The graph of the in-sample predictions
indicates that our model accounts only for a small fraction of the variability in D.ipman.
. tsappend, add(6)
. predict Dipman_f, dynamic(tm(2008m12)) equation(D.ipman)
(option y assumed; fitted values)
(figure omitted: Dipman and its y prediction, Dipman, dynamic(tm(2008m12)), plotted monthly from 1970m1 through 2010m1)
Graphing the last year of the sample and the six-month out-of-sample forecast yields
(figure omitted: Dipman and its y prediction, Dipman, dynamic(tm(2008m12)), plotted from 2008m1 through 2009m4)
(figure omitted: one-step estimates of the unobserved factor, factors, f, onestep, plotted monthly from 1970m1 through 2010m1, along with Dipman)
Also see
[TS] dfactor Dynamic-factor models
[TS] sspace State-space models
[TS] sspace postestimation Postestimation tools for sspace
[U] 20 Estimation and postestimation commands
Title
dfgls DF-GLS unit-root test
Syntax
Remarks and examples
References
Menu
Stored results
Also see
Description
Methods and formulas
Options
Acknowledgments
Syntax

dfgls varname [if] [in] [, options]

options        Description
----------------------------------------------------------------------------
Main
  maxlag(#)    use # as the highest lag order in the Dickey–Fuller regressions
  notrend      series is stationary around a mean instead of around a linear
                 time trend
  ers          present interpolated critical values from Elliott, Rothenberg,
                 and Stock (1996)
----------------------------------------------------------------------------
You must tsset your data before using dfgls; see [TS] tsset.
varname may contain time-series operators; see [U] 11.4.4 Time-series varlists.
Menu
Statistics > Time series > Tests > DF-GLS test for a unit root
Description
dfgls performs a modified Dickey–Fuller t test for a unit root in which the series has been transformed by a generalized least-squares regression.
Options
Main
maxlag(#) sets the value of $k$, the highest lag order for the first-differenced, detrended variable in the Dickey–Fuller regression. By default, dfgls sets $k$ according to the method proposed by Schwert (1989); that is, dfgls sets $k_{\max} = \mathrm{floor}[12\{(T+1)/100\}^{0.25}]$.
notrend specifies that the alternative hypothesis be that the series is stationary around a mean instead
of around a linear time trend. By default, a trend is included.
ers specifies that dfgls should present interpolated critical values from tables presented by Elliott,
Rothenberg, and Stock (1996), which they obtained from simulations. See Critical values under
Methods and formulas for details.
Example 1
Here we use the German macroeconomic dataset and test whether the natural log of investment
exhibits a unit root. We use the default options with dfgls.
. use http://www.stata-press.com/data/r13/lutkepohl2
(Quarterly SA West German macro data, Bil DM, from Lütkepohl 1993 Table E.1)
. dfgls ln_inv
DF-GLS for ln_inv                                        Number of obs =    80
Maxlag = 11 chosen by Schwert criterion

               DF-GLS tau      1% Critical    5% Critical    10% Critical
  [lags]     Test Statistic       Value          Value           Value
------------------------------------------------------------------------------
    11          -2.925           -3.610         -2.763          -2.489
    10          -2.671           -3.610         -2.798          -2.523
     9          -2.766           -3.610         -2.832          -2.555
     8          -3.259           -3.610         -2.865          -2.587
     7          -3.536           -3.610         -2.898          -2.617
     6          -3.115           -3.610         -2.929          -2.646
     5          -3.054           -3.610         -2.958          -2.674
     4          -3.016           -3.610         -2.986          -2.699
     3          -2.071           -3.610         -3.012          -2.723
     2          -1.675           -3.610         -3.035          -2.744
     1          -1.752           -3.610         -3.055          -2.762

Opt Lag (Ng-Perron seq t) =  7 with RMSE  .0388771
Min SC   = -6.169137 at lag  4 with RMSE  .0398949
Min MAIC = -6.136371 at lag  1 with RMSE  .0440319
The null hypothesis of a unit root is not rejected for lags 1–3, it is rejected at the 10% level for lags 9–10, and it is rejected at the 5% level for lags 4–8 and 11. For comparison, we also test for a unit root in the log of investment by using dfuller with two different lag specifications. We need to use the trend option with dfuller because it is not included by default.
. dfuller ln_inv, lags(4) trend
(output omitted: test statistic -3.133; 1%, 5%, and 10% interpolated critical values -4.069, -3.463, and -3.158)

. dfuller ln_inv, lags(7) trend
(output omitted: test statistic -3.994; 1%, 5%, and 10% interpolated critical values -4.075, -3.466, and -3.160)
The critical values and the test statistic produced by dfuller with 4 lags do not support rejecting
the null hypothesis, although the MacKinnon approximate p-value is less than 0.1. With 7 lags, the
critical values and the test statistic reject the null hypothesis at the 5% level, and the MacKinnon
approximate p-value is less than 0.01.
That the dfuller results are not as strong as those produced by dfgls is not surprising because
the DF-GLS test with a trend has been shown to be more powerful than the standard augmented
Dickey–Fuller test.
Stored results
If maxlag(0) is specified, dfgls stores the following in r():
Scalars
  r(rmse0)   RMSE
  r(dft0)    DF-GLS statistic
$$\tilde{y}_1 = y_1, \qquad \tilde{y}_t = y_t - \alpha y_{t-1}, \quad t = 2, \dots, T$$
$$x_1 = 1, \qquad x_t = 1 - \alpha, \quad t = 2, \dots, T$$
$$z_1 = 1, \qquad z_t = t - \alpha(t-1)$$

and $\alpha = 1 - (13.5/T)$. An OLS regression is then estimated for the equation

$$\tilde{y}_t = \delta_0 x_t + \delta_1 z_t + \epsilon_t$$

The OLS estimators $\hat{\delta}_0$ and $\hat{\delta}_1$ are then used to remove the trend from $y_t$; that is, we generate

$$y^* = y_t - (\hat{\delta}_0 + \hat{\delta}_1 t)$$

Finally, we perform an augmented Dickey–Fuller test on the transformed variable by fitting the OLS regression

$$\Delta y_t^* = \alpha + \beta y_{t-1}^* + \sum_{j=1}^{k} \zeta_j\, \Delta y_{t-j}^* + \epsilon_t \qquad (1)$$

and then test the null hypothesis $H_0\colon \beta = 0$ by using tabulated critical values.

To perform the DF-GLS test under the second alternative hypothesis, we proceed as before but define $\alpha = 1 - (7/T)$, eliminate $z$ from the GLS regression, compute $y^* = y_t - \hat{\delta}_0$, fit the augmented Dickey–Fuller regression by using the newly transformed variable, and perform a test of the null hypothesis that $\beta = 0$ by using the tabulated critical values.
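A minimal Stata sketch of the GLS detrending step, assuming a tsset series y with consecutive observations (all variable names here are hypothetical):

. local T = _N
. local alpha = 1 - 13.5/`T'
. generate double ytilde = y - `alpha'*L.y
. replace ytilde = y in 1                          // ytilde_1 = y_1
. generate double xt = cond(_n==1, 1, 1 - `alpha')
. generate double zt = _n - `alpha'*(_n - 1)       // z_1 = 1 automatically
. regress ytilde xt zt, noconstant
. generate double ystar = y - (_b[xt] + _b[zt]*_n) // detrended series
* augmented Dickey-Fuller regression on the detrended series, k = 4 lags
. regress D.ystar L.ystar L(1/4)D.ystar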
dfgls reports the DF-GLS statistic and its critical values obtained from the regression in (1) for $k \in \{1, 2, \dots, k_{\max}\}$. By default, dfgls sets $k_{\max} = \mathrm{floor}[12\{(T+1)/100\}^{0.25}]$ as proposed by Schwert (1989), although you can override this choice with another value. The sample size available with $k_{\max}$ lags is used in all the regressions. Because there are $k_{\max}$ lags of the first-differenced series, $k_{\max} + 1$ observations are lost, leaving $T - k_{\max}$ observations. dfgls requires that the sample of $T + 1$ observations on $y_t = (y_0, y_1, \dots, y_T)$ have no gaps.
dfgls reports the results of three different methods for choosing which value of $k$ to use. These are (1) the Ng–Perron sequential t, (2) the minimum Schwarz information criterion (SIC), and (3) the Ng–Perron modified Akaike information criterion (MAIC). Although the SIC has a long history in time-series modeling, the Ng–Perron sequential t was developed by Ng and Perron (1995), and the MAIC was developed by Ng and Perron (2000).
The SIC can be calculated using either the log likelihood or the sum-of-squared errors from a regression; dfgls uses the latter definition. Specifically, for each $k$

$$\mathrm{SIC} = \ln(\widehat{\mathrm{rmse}}^{\,2}) + (k+1)\,\frac{\ln(T - k_{\max})}{T - k_{\max}}$$

where

$$\widehat{\mathrm{rmse}}^{\,2} = \frac{1}{T - k_{\max}} \sum_{t=k_{\max}+1}^{T} \hat{e}_t^{\,2}$$
dfgls reports the value of the smallest SIC and the k that produced it.
Ng and Perron (1995) derived a sequential-t algorithm for choosing $k$:

i. Set $n = 0$ and run the regression in (1) with all $k_{\max} - n$ lags. If the coefficient on the $k_{\max}$th lag is significantly different from zero at level $\alpha$, choose $k^* = k_{\max}$. Otherwise, continue to ii.

ii. If $n < k_{\max}$, set $n = n + 1$ and continue to iii. Otherwise, set $k^* = 0$ and stop.

iii. Run the regression in (1) with $k_{\max} - n$ lags. If the coefficient on the $(k_{\max} - n)$th lag is significantly different from zero at level $\alpha$, choose $k^* = k_{\max} - n$. Otherwise, return to ii.

Per Ng and Perron (1995), dfgls uses $\alpha = 10\%$. dfgls reports the $k^*$ selected by this sequential-t algorithm and the $\widehat{\mathrm{rmse}}$ from the regression.
Method (3) is based on choosing $k$ to minimize the MAIC. The MAIC is calculated as

$$\mathrm{MAIC}(k) = \ln(\widehat{\mathrm{rmse}}^{\,2}) + \frac{2\{\tau(k) + k\}}{T - k_{\max}}$$

where

$$\tau(k) = \frac{1}{\widehat{\mathrm{rmse}}^{\,2}}\,\hat{\beta}_0^{\,2} \sum_{t=k_{\max}+1}^{T} \tilde{y}_t^{\,2}$$
The interpolated critical value

$$\mathrm{cv} = \mathrm{CV}_1 + \frac{N - N_1}{N_2 - N_1}\,(\mathrm{CV}_2 - \mathrm{CV}_1)$$

where $\mathrm{CV}_1$ and $\mathrm{CV}_2$ are the tabulated critical values for the sample sizes $N_1$ and $N_2$ that bracket the actual sample size $N$, is presented. The same method is used for the mean-only case, except that interpolation is possible for values between 50 and 500.
Acknowledgments
We thank Christopher F. Baum of the Department of Economics at Boston College and author of
the Stata Press books An Introduction to Modern Econometrics Using Stata and An Introduction to
Stata Programming and Richard Sperling for a previous version of dfgls.
References
Cheung, Y.-W., and K. S. Lai. 1995. Lag order and critical values of a modified Dickey–Fuller test. Oxford Bulletin of Economics and Statistics 57: 411–419.
Dickey, D. A., and W. A. Fuller. 1979. Distribution of the estimators for autoregressive time series with a unit root. Journal of the American Statistical Association 74: 427–431.
Elliott, G. R., T. J. Rothenberg, and J. H. Stock. 1996. Efficient tests for an autoregressive unit root. Econometrica 64: 813–836.
Ng, S., and P. Perron. 1995. Unit root tests in ARMA models with data-dependent methods for the selection of the truncation lag. Journal of the American Statistical Association 90: 268–281.
———. 2000. Lag length selection and the construction of unit root tests with good size and power. Econometrica 69: 1519–1554.
Schwert, G. W. 1989. Tests for unit roots: A Monte Carlo investigation. Journal of Business and Economic Statistics 7: 147–159.
Stock, J. H., and M. W. Watson. 2011. Introduction to Econometrics. 3rd ed. Boston: Addison-Wesley.
Also see
[TS] dfuller Augmented DickeyFuller unit-root test
[TS] pperron PhillipsPerron unit-root test
[TS] tsset Declare data to be time-series data
[XT] xtunitroot Panel-data unit-root tests
Title
dfuller Augmented DickeyFuller unit-root test
Syntax
Remarks and examples
Also see
Menu
Stored results
Description
Methods and formulas
Options
References
Syntax

dfuller varname [if] [in] [, options]

options        Description
----------------------------------------------------------------------------
Main
  noconstant   suppress constant term in regression
  trend        include trend term in regression
  drift        include drift term in regression
  regress      display regression table
  lags(#)      include # lagged differences
----------------------------------------------------------------------------
You must tsset your data before using dfuller; see [TS] tsset.
varname may contain time-series operators; see [U] 11.4.4 Time-series varlists.
Menu
Statistics > Time series > Tests > Augmented Dickey-Fuller unit-root test
Description
dfuller performs the augmented DickeyFuller test that a variable follows a unit-root process.
The null hypothesis is that the variable contains a unit root, and the alternative is that the variable
was generated by a stationary process. You may optionally exclude the constant, include a trend term,
and include lagged values of the difference of the variable in the regression.
Options
Main
noconstant suppresses the constant term (intercept) in the model and indicates that the process
under the null hypothesis is a random walk without drift. noconstant cannot be used with the
trend or drift option.
trend specifies that a trend term be included in the associated regression and that the process under
the null hypothesis is a random walk, perhaps with drift. This option may not be used with the
noconstant or drift option.
drift indicates that the process under the null hypothesis is a random walk with nonzero drift. This
option may not be used with the noconstant or trend option.
regress specifies that the associated regression table appear in the output. By default, the regression
table is not produced.
lags(#) specifies the number of lagged difference terms to include in the covariate list.
The model under consideration is

$$y_t = \alpha + \rho y_{t-1} + u_t$$

where $u_t$ is an independently and identically distributed zero-mean error term. In cases one and two, presumably $\alpha = 0$, which is a random walk without drift. In cases three and four, we allow for a drift term by letting $\alpha$ be unrestricted.
The Dickey–Fuller test involves fitting the model

$$y_t = \alpha + \rho y_{t-1} + \delta t + u_t$$

by ordinary least squares (OLS), perhaps setting $\alpha = 0$ or $\delta = 0$. However, such a regression is likely to be plagued by serial correlation. To control for that, the augmented Dickey–Fuller test instead fits a model of the form

$$\Delta y_t = \alpha + \beta y_{t-1} + \delta t + \zeta_1 \Delta y_{t-1} + \zeta_2 \Delta y_{t-2} + \cdots + \zeta_k \Delta y_{t-k} + \epsilon_t$$

where $k$ is the number of lags specified in the lags() option. The four cases are as follows:

Case   Process under null hypothesis       Regression restrictions   dfuller option
------------------------------------------------------------------------------------
 1     Random walk without drift           alpha = 0, delta = 0      noconstant
 2     Random walk without drift           delta = 0                 (default)
 3     Random walk with drift              delta = 0                 drift
 4     Random walk with or without drift   (none)                    trend
------------------------------------------------------------------------------------
Except in the third case, the t statistic used to test $H_0\colon \beta = 0$ does not have a standard distribution. Hamilton (1994, chap. 17) derives the limiting distributions, which are different for each of the three other cases. The critical values reported by dfuller are interpolated based on the tables in Fuller (1996). MacKinnon (1994) shows how to approximate the p-values on the basis of a regression surface, and dfuller also reports that p-value. In the third case, where the regression includes a constant term and under the null hypothesis the series has a nonzero drift parameter $\alpha$, the t statistic has the usual t distribution; dfuller reports the one-sided critical values and p-value for the test of $H_0$ against the alternative $H_a\colon \beta < 0$, which is equivalent to $\rho < 1$.
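In command form, the four cases correspond to the following (a sketch with a hypothetical tsset variable y):

. dfuller y, noconstant    // case 1
. dfuller y                // case 2
. dfuller y, drift         // case 3
. dfuller y, trend         // case 4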
Deciding which case to use involves a combination of theory and visual inspection of the data.
If economic theory favors a particular null hypothesis, the appropriate case can be chosen based on
that. If a graph of the data shows an upward trend over time, then case four may be preferred. If the
data do not show a trend but do have a nonzero mean, then case two would be a valid alternative.
Example 1
In this example, we examine the international airline passengers dataset from Box, Jenkins, and
Reinsel (2008, Series G). This dataset has 144 observations on the monthly number of international
airline passengers from 1949 through 1960. Because the data show a clear upward trend, we use the
trend option with dfuller to include a constant and time trend in the augmented DickeyFuller
regression.
. use http://www.stata-press.com/data/r13/air2
(TIMESLAB: Airline passengers)
. dfuller air, lags(3) trend regress
Augmented Dickey-Fuller test for unit root          Number of obs   =   140

                               ---- Interpolated Dickey-Fuller ----
              Test         1% Critical    5% Critical    10% Critical
            Statistic         Value          Value           Value
------------------------------------------------------------------------
 Z(t)         -6.936          -4.027         -3.445          -3.145
------------------------------------------------------------------------
MacKinnon approximate p-value for Z(t) = 0.0000

------------------------------------------------------------------------------
       D.air |      Coef.   Std. Err.      t    P>|t|    [95% Conf. Interval]
-------------+----------------------------------------------------------------
         air |
         L1. |  -.5217089   .0752195    -6.94   0.000      -.67048  -.3729379
         LD. |   .5572871   .0799894     6.97   0.000      .399082   .7154923
        L2D. |    .095912   .0876692     1.09   0.276    -.0774825   .2693065
        L3D. |     .14511   .0879922     1.65   0.101    -.0289232   .3191433
      _trend |   1.407534   .2098378     6.71   0.000     .9925118   1.822557
       _cons |   44.49164    7.78335     5.72   0.000     29.09753   59.88575
------------------------------------------------------------------------------
Here we can overwhelmingly reject the null hypothesis of a unit root at all common significance levels. From the regression output, the estimated $\beta$ of $-0.522$ implies that $\rho = (1 - 0.522) = 0.478$. Experiments with fewer or more lags in the augmented regression yield the same conclusion.
Example 2
In this example, we use the German macroeconomic dataset to determine whether the log of
consumption follows a unit root. We will again use the trend option, because consumption grows
over time.
(output omitted: dfuller reports a test statistic of -1.318 against 1%, 5%, and 10% interpolated critical values of -4.069, -3.463, and -3.158)
As we might expect from economic theory, here we cannot reject the null hypothesis that log consumption exhibits a unit root. Again, using different numbers of lag terms yields the same conclusion.
Stored results
dfuller stores the following in r():
Scalars
  r(N)      number of observations
  r(lags)   number of lagged differences
  r(Zt)     Dickey–Fuller test statistic
  r(p)      MacKinnon approximate p-value (if there is a constant or trend in
              associated regression)
Methods and formulas

In the OLS estimation of an AR(1) process with Gaussian errors,

$$y_t = \rho y_{t-1} + \epsilon_t$$

where $\epsilon_t$ are independently and identically distributed as $N(0, \sigma^2)$ and $y_0 = 0$, the OLS estimate (based on an $n$-observation time series) of the autocorrelation parameter is given by

$$\hat{\rho}_n = \frac{\sum_{t=1}^{n} y_{t-1}\, y_t}{\sum_{t=1}^{n} y_t^2}$$

If $|\rho| < 1$, then

$$\sqrt{n}\,(\hat{\rho}_n - \rho) \rightarrow N(0,\, 1 - \rho^2)$$

If this result were valid when $\rho = 1$, the resulting distribution would have a variance of zero. When $\rho = 1$, the OLS estimate $\hat{\rho}$ still converges in probability to one, though we need to find a suitable nondegenerate distribution so that we can perform hypothesis tests of $H_0\colon \rho = 1$. Hamilton (1994, chap. 17) provides a superb exposition of the requisite theory.
To compute the test statistic, we fit the augmented Dickey–Fuller regression

$$\Delta y_t = \alpha + \beta y_{t-1} + \delta t + \sum_{j=1}^{k} \zeta_j\, \Delta y_{t-j} + e_t$$

via OLS where, depending on the options specified, the constant term or time trend is omitted and $k$ is the number of lags specified in the lags() option. The test statistic for $H_0\colon \beta = 0$ is $Z_t = \hat{\beta}/\hat{\sigma}_\beta$, where $\hat{\sigma}_\beta$ is the standard error of $\hat{\beta}$.

The critical values included in the output are linearly interpolated from the table of values that appears in Fuller (1996), and the MacKinnon approximate p-values use the regression surface published in MacKinnon (1994).
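Because, apart from its critical values, $Z_t$ is simply the OLS t statistic on $y_{t-1}$, the statistic from example 1 can be reproduced with regress; a sketch, assuming the air2 dataset and its time variable t:

. use http://www.stata-press.com/data/r13/air2
. regress D.air L.air t L(1/3)D.air
. display _b[L.air]/_se[L.air]    // -6.936, matching Z(t) in example 1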
David Alan Dickey (1945– ) was born in Ohio and obtained degrees in mathematics at Miami University and a PhD in statistics at Iowa State University in 1976 as a student of Wayne Fuller. He works at North Carolina State University and specializes in time-series analysis.

Wayne Arthur Fuller (1931– ) was born in Iowa, obtained three degrees at Iowa State University, and then served on the faculty between 1959 and 2001. He has made many distinguished contributions to time series, measurement-error models, survey sampling, and econometrics.
References
Becketti, S. 2013. Introduction to Time Series Using Stata. College Station, TX: Stata Press.
Box, G. E. P., G. M. Jenkins, and G. C. Reinsel. 2008. Time Series Analysis: Forecasting and Control. 4th ed. Hoboken, NJ: Wiley.
Dickey, D. A., and W. A. Fuller. 1979. Distribution of the estimators for autoregressive time series with a unit root. Journal of the American Statistical Association 74: 427–431.
Fuller, W. A. 1996. Introduction to Statistical Time Series. 2nd ed. New York: Wiley.
Hamilton, J. D. 1994. Time Series Analysis. Princeton: Princeton University Press.
MacKinnon, J. G. 1994. Approximate asymptotic distribution functions for unit-root and cointegration tests. Journal of Business and Economic Statistics 12: 167–176.
Newton, H. J. 1988. TIMESLAB: A Time Series Analysis Laboratory. Belmont, CA: Wadsworth.
Also see
[TS] tsset Declare data to be time-series data
[TS] dfgls DF-GLS unit-root test
[TS] pperron PhillipsPerron unit-root test
[XT] xtunitroot Panel-data unit-root tests
Title
estat acplot Plot parametric autocorrelation and autocovariance functions
Syntax
Remarks and examples
Description
References
Options
Also see
Syntax

estat acplot [, options]

options                  Description
----------------------------------------------------------------------------
  saving(filename, ...)  save results to filename
  level(#)               set confidence level; default is level(95)
  lags(#)                use # autocorrelations
  covariance             calculate autocovariances instead of autocorrelations
  smemory                calculate autocorrelations for the short-memory ARMA
                           component (arfima only)
CI plot
  ciopts(rcap options)   affect rendition of the confidence bands
Plot
  marker options         change look of markers (color, size, etc.)
  marker label options   add marker labels; change look or position
  cline options          affect rendition of lines connecting the plotted points
Y axis, X axis, Titles, Legend, Overall
  twoway options         any options other than by() documented in
                           [G-3] twoway options
----------------------------------------------------------------------------

Menu
Statistics > Postestimation > Reports and statistics
Description
estat acplot plots the estimated autocorrelation and autocovariance functions of a stationary
process using the parameters of a previously fit parametric model.
estat acplot is available after arima and arfima; see [TS] arima and [TS] arfima.
Options
saving( filename , suboptions ) creates a Stata data file (.dta file) consisting of the autocorrelation
estimates, standard errors, and confidence bounds.
Five variables are saved: lag (lag number), ac (autocorrelation estimate), se (standard error),
ci l (lower confidence bound), and ci u (upper confidence bound).
double specifies that the variables be saved as doubles, meaning 8-byte reals. By default, they
are saved as floats, meaning 4-byte reals.
name(stubname) specifies that variables be saved with prefix stubname.
replace indicates that filename be overwritten if it exists.
level(#) specifies the confidence level, as a percentage, for confidence intervals. The default is
level(95) or as set by set level; see [R] level.
lags(#) specifies the number of autocorrelations to calculate. The default is to use
min{floor(n/2) 2, 40}, where floor(n/2) is the greatest integer less than or equal to n/2 and
n is the number of observations.
covariance specifies the calculation of autocovariances instead of the default autocorrelations.
smemory specifies that the ARFIMA fractional integration parameter be ignored. The computed autocorrelations are for the short-memory ARMA component of the model. This option is allowed only
after arfima.
CI plot
ciopts(rcap options) affects the rendition of the confidence bands; see [G-3] rcap options.
Plot
marker options affect the rendition of markers drawn at the plotted points, including their shape,
size, color, and outline; see [G-3] marker options.
marker label options specify if and how the markers are to be labeled; see [G-3] marker label options.
cline options affect whether lines connect the plotted points and the rendition of those lines; see
[G-3] cline options.
twoway options are any of the options documented in [G-3] twoway options, except by(). These
include options for titling the graph (see [G-3] title options) and options for saving the graph to
disk (see [G-3] saving option).
Example 1
In example 1 of [TS] arima, we fit an ARIMA(1,1,1) model of the U.S. Wholesale Price Index
(WPI) using quarterly data over the period 1960q1 through 1990q4.
. use http://www.stata-press.com/data/r13/wpi1
. arima wpi, arima(1,1,1)
(setting optimization to BHHH)
Iteration 0:
log likelihood = -139.80133
Iteration 1:
log likelihood = -135.6278
Iteration 2:
log likelihood = -135.41838
Iteration 3:
log likelihood = -135.36691
Iteration 4:
log likelihood = -135.35892
(switching optimization to BFGS)
Iteration 5:
log likelihood = -135.35471
Iteration 6:
log likelihood = -135.35135
Iteration 7:
log likelihood = -135.35132
Iteration 8:
log likelihood = -135.35131
ARIMA regression

Sample: 1960q2 - 1990q4                         Number of obs   =       123
                                                Wald chi2(2)    =    310.64
Log likelihood = -135.3513                      Prob > chi2     =    0.0000

------------------------------------------------------------------------------
             |               OPG
       D.wpi |      Coef.   Std. Err.      z    P>|z|    [95% Conf. Interval]
-------------+----------------------------------------------------------------
wpi          |
       _cons |   .7498197   .3340968     2.24   0.025     .0950019   1.404637
-------------+----------------------------------------------------------------
ARMA         |
          ar |
         L1. |   .8742288   .0545435    16.03   0.000     .7673256    .981132
          ma |
         L1. |  -.4120458   .1000284    -4.12   0.000    -.6080979  -.2159938
-------------+----------------------------------------------------------------
      /sigma |   .7250436   .0368065    19.70   0.000     .6529042   .7971829
------------------------------------------------------------------------------
Note: The test of the variance against zero is one sided, and the two-sided
      confidence interval is truncated at zero.
Now we use estat acplot to estimate the autocorrelations implied by the estimated ARMA
parameters. We include lags(50) to indicate that autocorrelations be computed for 50 lags. By
default, a 95% confidence interval is provided for each autocorrelation.
. estat acplot, lags(50)

(figure omitted: estimated autocorrelations with 95% confidence bands; y axis Autocorrelations from 0.2 to 0.8, x axis quarterly lag from 0 to 50)
The graph is similar to a typical autocorrelation function of an AR(1) process with a positive
coefficient. The autocorrelations of a stationary AR(1) process decay exponentially toward zero.
References
Box, G. E. P., G. M. Jenkins, and G. C. Reinsel. 2008. Time Series Analysis: Forecasting and Control. 4th ed.
Hoboken, NJ: Wiley.
Palma, W. 2007. Long-Memory Time Series: Theory and Methods. Hoboken, NJ: Wiley.
Also see
[TS] arfima Autoregressive fractionally integrated moving-average models
[TS] arima ARIMA, ARMAX, and other dynamic regression models
Title
estat aroots Check the stability condition of ARIMA estimates
Syntax
Remarks and examples
Also see
Description
Methods and formulas
Options
Reference
Syntax

estat aroots [, options]

options                  Description
----------------------------------------------------------------------------
  nograph                suppress graph of the eigenvalues of the companion
                           matrices
  dlabel                 label eigenvalues with the distance from the unit circle
  modlabel               label eigenvalues with the modulus
Grid
  nogrid                 suppress polar grid circles
  pgrid(...)             specify radii and appearance of polar grid circles
Plot
  marker options         change look of markers (color, size, etc.)
Reference unit circle
  rlopts(cline options)  affect rendition of reference unit circle
Y axis, X axis, Titles, Legend, Overall
  twoway options         any options other than by() documented in
                           [G-3] twoway options
----------------------------------------------------------------------------

Menu
Statistics > Postestimation > Reports and statistics
Description
estat aroots checks the eigenvalue stability condition after estimating the parameters of an
ARIMA model using arima. A graph of the eigenvalues of the companion matrices for the AR and
MA polynomials is also produced.
Options
nograph specifies that no graph of the eigenvalues of the companion matrices be drawn.
dlabel labels each eigenvalue with its distance from the unit circle. dlabel cannot be specified
with modlabel.
modlabel labels the eigenvalues with their moduli. modlabel cannot be specified with dlabel.
Grid
nogrid specifies that no polar grid circles be drawn.
pgrid(...) specifies the radii and the appearance of the polar grid circles.

Plot
marker options specify the look of markers. This look includes the marker symbol, the marker size,
and its color and outline; see [G-3] marker options.
rlopts(cline options) affect the rendition of the reference unit circle; see [G-3] cline options.
twoway options are any of the options documented in [G-3] twoway options, except by(). These
include options for titling the graph (see [G-3] title options) and for saving the graph to disk (see
[G-3] saving option).
Example 1
In this example, we check the stability condition of the SARIMA model that we fit in example 3
of [TS] arima. We begin by reestimating the parameters of the model.
. use http://www.stata-press.com/data/r13/air2
(TIMESLAB: Airline passengers)
. generate lnair = ln(air)
. arima DS12.lnair, noconstant ma(1) mma(1, 12)
(output omitted )

                                                Number of obs   =       131
                                                Wald chi2(2)    =     84.53
                                                Prob > chi2     =    0.0000

------------------------------------------------------------------------------
             |               OPG
  DS12.lnair |      Coef.   Std. Err.      z    P>|z|    [95% Conf. Interval]
-------------+----------------------------------------------------------------
ARMA         |
          ma |
         L1. |  -.4018324   .0730307    -5.50   0.000    -.5449698  -.2586949
-------------+----------------------------------------------------------------
ARMA12       |
          ma |
         L1. |  -.5569342   .0963129    -5.78   0.000     -.745704  -.3681644
-------------+----------------------------------------------------------------
      /sigma |   .0367167   .0020132    18.24   0.000     .0327708   .0406625
------------------------------------------------------------------------------
Note: The test of the variance against zero is one sided, and the two-sided
      confidence interval is truncated at zero.
We can now use estat aroots to check the stability condition of the MA part of the model.
. estat aroots
Eigenvalue stability condition
--------------------------------------------
          Eigenvalue            |   Modulus
--------------------------------+-----------
   .824798  + .4761974i         |   .952395
   .824798  - .4761974i         |   .952395
   .9523947                     |   .952395
  -.824798  + .4761974i         |   .952395
  -.824798  - .4761974i         |   .952395
  -.4761974 + .824798i          |   .952395
  -.4761974 - .824798i          |   .952395
  2.776e-16 + .9523947i         |   .952395
  2.776e-16 - .9523947i         |   .952395
   .4761974 + .824798i          |   .952395
   .4761974 - .824798i          |   .952395
  -.9523947                     |   .952395
   .4018324                     |   .401832
--------------------------------------------
(figure omitted: eigenvalues plotted in the complex plane, Imaginary versus Real, together with the reference unit circle)
Because the modulus of each eigenvalue is strictly less than 1, the MA process is invertible and
can be represented as an infinite-order AR process.
The graph produced by estat aroots displays the eigenvalues with the real components on the x
axis and the imaginary components on the y axis. The graph indicates visually that these eigenvalues
are just inside the unit circle.
Stored results
aroots stores the following in r():

Matrices
  r(Re_ar)        real part of the eigenvalues of F(φ)
  r(Im_ar)        imaginary part of the eigenvalues of F(φ)
  r(Modulus_ar)   modulus of the eigenvalues of F(φ)
  r(ar)           F(φ), the AR companion matrix
  r(Re_ma)        real part of the eigenvalues of F(θ)
  r(Im_ma)        imaginary part of the eigenvalues of F(θ)
  r(Modulus_ma)   modulus of the eigenvalues of F(θ)
  r(ma)           F(θ), the MA companion matrix
Methods and formulas

The AR and MA polynomials are

$$\phi(L^p) = 1 - \phi_1 L - \phi_2 L^2 - \cdots - \phi_p L^p$$
$$\theta(L^q) = 1 + \theta_1 L + \theta_2 L^2 + \cdots + \theta_q L^q$$

and $L^j y_t = y_{t-j}$.
The companion matrix is

$$F(\gamma) = \begin{pmatrix} \gamma_1 & \gamma_2 & \cdots & \gamma_{r-1} & \gamma_r \\ 1 & 0 & \cdots & 0 & 0 \\ 0 & 1 & \cdots & 0 & 0 \\ \vdots & \vdots & \ddots & \vdots & \vdots \\ 0 & 0 & \cdots & 1 & 0 \end{pmatrix}$$

where $\gamma = \phi$ and $r = p$ for the AR part of ARMA, and $\gamma = -\theta$ and $r = q$ for the MA part of ARMA. aroots obtains the eigenvalues of $F$ by using matrix eigenvalues. The modulus of the complex eigenvalue $a + bi$ is $\sqrt{a^2 + b^2}$.
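The same computation can be done directly with matrix eigenvalues; a sketch for a hypothetical AR(2) with $\phi_1 = 1.2$ and $\phi_2 = -0.35$:

. matrix F = (1.2, -0.35 \ 1, 0)    // companion matrix, gamma = phi
. matrix eigenvalues re im = F
. matrix list re                    // real parts: .7 and .5
. matrix list im                    // imaginary parts: 0 and 0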
Reference
Hamilton, J. D. 1994. Time Series Analysis. Princeton: Princeton University Press.
Also see
[TS] arima ARIMA, ARMAX, and other dynamic regression models
Title
fcast compute Compute dynamic forecasts after var, svar, or vec
Syntax
Remarks and examples
Menu
Methods and formulas
Description
References
Options
Also see
Syntax

After var and svar

    fcast compute prefix [, options1]

After vec

    fcast compute prefix [, options2]
prefix is the prefix appended to the names of the dependent variables to create the names of the
variables holding the dynamic forecasts.
options1                       Description
----------------------------------------------------------------------------
Main
  step(#)                      set # periods to forecast ahead; default is step(1)
  dynamic(time_constant)       begin dynamic forecasts at time_constant
  estimates(estname)           use previously stored results estname
  replace                      replace existing forecast variables that have
                                 the same prefix
Std. Errors
  nose                         suppress asymptotic standard errors
  bs                           obtain standard errors from bootstrapped residuals
  bsp                          obtain standard errors from parametric bootstrap
  bscentile                    estimate bounds by using centiles of
                                 bootstrapped dataset
  reps(#)                      number of bootstrap repetitions; default is reps(200)
  nodots                       suppress the usual dot after each bootstrap replication
  saving(filename[, replace])  save bootstrap results as filename; use replace
                                 to overwrite existing filename
Reporting
  level(#)                     set confidence level; default is level(95)
----------------------------------------------------------------------------

options2                       Description
----------------------------------------------------------------------------
Main
  step(#)                      set # periods to forecast ahead; default is step(1)
  dynamic(time_constant)       begin dynamic forecasts at time_constant
  estimates(estname)           use previously stored results estname
  replace                      replace existing forecast variables that have
                                 the same prefix
  differences                  save dynamic predictions of the first differences
Std. Errors
  nose                         suppress asymptotic standard errors
Reporting
  level(#)                     set confidence level; default is level(95)
----------------------------------------------------------------------------
Menu
Statistics > Multivariate time series > VEC/VAR forecasts > Compute forecasts (required for graph)
Description
fcast compute produces dynamic forecasts of the dependent variables in a model previously fit
by var, svar, or vec. fcast compute creates new variables and, if necessary, extends the time
frame of the dataset to contain the prediction horizon.
Options

Main
step(#) specifies the number of periods to be forecast. The default is step(1).
dynamic(time_constant) specifies the period at which the dynamic forecasts are to begin. The time_constant must be in the scale of the time variable specified in tsset.
estimates(estname) specifies that fcast compute use the estimation results previously stored as estname. By default, fcast compute uses the active estimation results.
replace causes fcast compute to replace existing forecast variables that have the specified prefix.
differences specifies that fcast compute also save dynamic predictions of the first differences of the dependent variables; it may be specified only after vec.
Std. Errors
nose specifies that the asymptotic standard errors of the forecasted levels and, thus the asymptotic
confidence intervals for the levels, not be calculated. By default, the asymptotic standard errors
and the asymptotic confidence intervals of the forecasted levels are calculated.
bs specifies that fcast compute use confidence bounds estimated by a simulation method based on
bootstrapping the residuals.
bsp specifies that fcast compute use confidence bounds estimated via simulation in which the
innovations are drawn from a multivariate normal distribution.
bscentile specifies that fcast compute use centiles of the bootstrapped dataset to estimate the
bounds of the confidence intervals. By default, fcast compute uses the estimated standard errors
and the quantiles of the standard normal distribution determined by level().
reps(#) gives the number of repetitions used in the simulations. The default is 200.
nodots specifies that no dots be displayed while obtaining the simulation-based standard errors. By
default, for each replication, a dot is displayed.
saving(filename , replace ) specifies the name of the file to hold the dataset that contains the
bootstrap replications. The replace option overwrites any file with this name.
replace specifies that filename be overwritten if it exists. This option is not shown in the dialog
box.
Reporting
level(#) specifies the confidence level, as a percentage, for confidence intervals. The default is
level(95) or as set by set level; see [U] 20.7 Specifying the width of confidence intervals.
Example 1

Typing

. use http://www.stata-press.com/data/r13/lutkepohl2
. var dln_inc dln_consump dln_inv if qtr<tq(1979q1)
. fcast compute m2_, step(8)
. fcast graph m2_dln_inc m2_dln_inv m2_dln_consump, observed

fits a VAR with two lags, computes eight-step dynamic predictions for each endogenous variable, and produces the graph
(figure omitted: eight-step dynamic forecasts of dln_inc, dln_inv, and dln_consump; legend: 95% CI, forecast, observed)
The graph shows that the model is better at predicting changes in income and investment than in
consumption. The graph also shows how quickly the predictions from the two-lag model settle down
to their mean values.
fcast compute creates new variables in the dataset. If there are K dependent variables in the
previously fitted model, fcast compute generates 4K new variables:
    K new variables that hold the forecasted levels, named by appending the specified prefix to
    the name of the original variable
    K estimated lower bounds for the forecast interval, named by appending the specified prefix
    and the suffix _LB to the name of the original variable
    K estimated upper bounds for the forecast interval, named by appending the specified prefix
    and the suffix _UB to the name of the original variable
    K estimated standard errors of the forecast, named by appending the specified prefix and the
    suffix _SE to the name of the original variable
If you specify options so that fcast compute does not calculate standard errors, the 3K variables
that hold them and the bounds of the confidence intervals are not generated.
If the model previously fit is a VECM, specifying differences generates another K variables
that hold the forecasts of the first differences of the dependent variables, named by appending the
prefix prefix_D to the name of the original variable.
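For instance, after the two-lag VAR in example 1 above, fcast compute m2_, step(8) adds 4K = 12
new variables. A quick way to inspect one set of them (a sketch, assuming example 1 has been run) is

. describe m2_dln_inc m2_dln_inc_LB m2_dln_inc_UB m2_dln_inc_SE

and likewise for dln_consump and dln_inv.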
Example 2
Plots of the forecasts from different models along with the observations from a holdout sample
can provide insights into their relative forecasting performance. Continuing the previous example, we
fit a second VAR with six lags, compute its dynamic forecasts beginning in 1979q1, and plot both
models' forecasts against the observed series from the holdout sample.
(figure: m6_dln_inv, dyn(1979q1) and m2_dln_inv, dyn(1979q1) plotted against the first difference
of ln_inv, 1978q4-1980q4)
The model with six lags predicts changes in investment better than the two-lag model in some periods
but markedly worse in other periods.
Methods and formulas

fcast compute after var and svar is based on the VAR(p) model

$$y_t = v + A_1 y_{t-1} + \cdots + A_p y_{t-p} + B x_t + u_t, \qquad t = 1, \ldots, T$$

where
$y_t = (y_{1t}, \ldots, y_{Kt})'$ is a $(K \times 1)$ random vector,
the $A_i$ are fixed $(K \times K)$ matrices of parameters,
$x_t$ is an $(M \times 1)$ vector of exogenous variables,
$B$ is a $(K \times M)$ matrix of coefficients,
$v$ is a $(K \times 1)$ vector of fixed parameters, and
$u_t$ is assumed to be white noise; that is,

$$E(u_t) = 0_K, \qquad E(u_t u_t') = \Sigma, \qquad E(u_t u_s') = 0_K \text{ for } t \neq s$$

fcast compute will dynamically predict the variables in the vector $y_t$ conditional on $p$ initial values
of the endogenous variables and any exogenous $x_t$. Adopting the notation from Lütkepohl (2005,
402) to fit the case at hand, the optimal $h$-step-ahead forecast of $y_{t+h}$ conditional on $x_t$ is
$$\hat y_t(h) = \hat v + \hat A_1 \hat y_t(h-1) + \cdots + \hat A_p \hat y_t(h-p) + \hat B x_t \tag{1}$$

With no exogenous variables, (1) reduces to

$$\hat y_t(h) = \hat v + \hat A_1 \hat y_t(h-1) + \cdots + \hat A_p \hat y_t(h-p)$$
When there are no exogenous variables, fcast compute can compute the asymptotic confidence
bounds.
As shown by Lütkepohl (2005, 204-205), the asymptotic estimator of the covariance matrix of
the prediction error is given by

$$\hat\Sigma_{\hat y}(h) = \hat\Sigma_y(h) + \frac{1}{T}\hat\Omega(h) \tag{2}$$
where

$$\hat\Sigma_y(h) = \sum_{i=0}^{h-1} \hat\Phi_i \hat\Sigma \hat\Phi_i'$$

$$\hat\Omega(h) = \frac{1}{T}\sum_{t=0}^{T}\left(\sum_{i=0}^{h-1} Z_t'\,(\hat{\mathbf B}')^{h-1-i} \otimes \hat\Phi_i\right) \hat\Sigma_{\hat\beta} \left(\sum_{i=0}^{h-1} Z_t'\,(\hat{\mathbf B}')^{h-1-i} \otimes \hat\Phi_i\right)'$$
$$\hat{\mathbf B} = \begin{pmatrix}
1 & 0 & 0 & \cdots & 0 & 0 \\
\hat v & \hat A_1 & \hat A_2 & \cdots & \hat A_{p-1} & \hat A_p \\
0 & I_K & 0 & \cdots & 0 & 0 \\
0 & 0 & I_K & \cdots & 0 & 0 \\
\vdots & \vdots & \vdots & \ddots & \vdots & \vdots \\
0 & 0 & 0 & \cdots & I_K & 0
\end{pmatrix}$$

$$Z_t = \left(1, y_t', \ldots, y_{t-p+1}'\right)'$$
$$\hat\Phi_0 = I_K$$

$$\hat\Phi_i = \sum_{j=1}^{i} \hat\Phi_{i-j} \hat A_j, \qquad i = 1, 2, \ldots$$

$$\hat A_j = 0 \qquad \text{for } j > p$$
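For example, unrolling the recursion gives

$$\hat\Phi_1 = \hat\Phi_0 \hat A_1 = \hat A_1 \qquad \text{and} \qquad \hat\Phi_2 = \hat\Phi_1 \hat A_1 + \hat\Phi_0 \hat A_2 = \hat A_1^2 + \hat A_2$$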
$\hat\Sigma$ is the estimate of the covariance matrix of the innovations, and $\hat\Sigma_{\hat\beta}$ is the estimated VCE of the
coefficients in the VAR. The formula in (2) is general enough to handle the case in which constraints
are placed on the coefficients in the VAR(p).
Equation (2) is made up of two terms. $\hat\Sigma_y(h)$ is the estimated mean squared error (MSE) of the
forecast. $\hat\Sigma_y(h)$ estimates the error in the forecast arising from the unseen innovations. $T^{-1}\hat\Omega(h)$
estimates the error in the forecast that is due to using estimated coefficients instead of the true
coefficients. As the sample size grows, uncertainty with respect to the coefficient estimates decreases,
and $T^{-1}\hat\Omega(h)$ goes to zero.
If $y_t$ is normally distributed, the bounds for the asymptotic $(1-\alpha) \times 100\%$ interval around the
forecast for the $k$th component of $y_t$, $h$ periods ahead, are

$$\hat y_{k,t}(h) \pm z_{(\alpha/2)}\,\hat\sigma_k(h) \tag{3}$$

where $\hat\sigma_k(h)$ is the $k$th diagonal element of $\hat\Sigma_{\hat y}(h)$.
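For the default level(95), $\alpha = 0.05$ and $z_{(\alpha/2)} \approx 1.96$, so the reported bounds are $\hat y_{k,t}(h) \pm 1.96\,\hat\sigma_k(h)$.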
Specifying the bs option causes the standard errors to be computed via simulation, using bootstrapped
residuals. Both var and svar estimate the coefficients of a VAR conditional on the first p observations
of the endogenous variables in the data, and the simulation algorithms are likewise conditional on
those first p observations. However, the simulation-based estimates of the standard errors are also
conditional on the estimated coefficients. The asymptotic standard errors are not conditional on the
coefficient estimates because the second term on the right-hand side of (2) accounts for the uncertainty
arising from using estimated parameters.
For a simulation with R repetitions, this method uses the following algorithm:
1. Fit the model and save the estimated coefficients.
2. Use the estimated coefficients to calculate the residuals.
3. Repeat steps 3a-3c R times.
3a. Draw a simple random sample with replacement of size T + h from the residuals.
When the tth observation is drawn, all K residuals are selected, preserving any
contemporaneous correlation among the residuals.
3b. Use the sampled residuals, p initial values of the endogenous variables, any
exogenous variables, and the estimated coefficients to construct a new sample
dataset.
3c. Save the simulated endogenous variables for the h forecast periods in the bootstrapped
dataset.
4. For each endogenous variable and each forecast period, the simulated standard error is the
estimated standard error of the R simulated forecasts. By default, the upper and lower bounds
of the (1 − α)100% confidence intervals are estimated using the simulation-based estimates of the standard errors
and the normality assumption, as in (3). If the bscentile option is specified, the sample
centiles for the upper and lower bounds of the R simulated forecasts are used for the upper
and lower bounds of the confidence intervals.
If the bsp option is specified, a parametric simulation algorithm is used. Specifically, everything
is as above except that 3a is replaced by 3a(bsp) as follows:
    3a(bsp). Draw T + h observations from a multivariate normal distribution with covariance
    matrix $\hat\Sigma$.
The algorithm above assumes that h forecast periods come after the original sample of T
observations. If the h forecast periods lie within the original sample, smaller simulated datasets are
sufficient.
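A sketch of requesting each simulation method after the VAR in example 1 (prefixes and the
replication count are illustrative):

. fcast compute bn_, step(8) bs reps(500) saving(bsfcast, replace)
. fcast compute bc_, step(8) bs bscentile
. fcast compute bp_, step(8) bsp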
Dynamic forecasts after vec
Methods and formulas of [TS] vec discusses how to obtain the one-step predicted differences and
levels. fcast compute uses the previous dynamic predictions as inputs for later dynamic predictions.
The estimated covariance matrix of the prediction error is

$$\hat\Sigma_{\hat y}(h) = \frac{T}{T-d}\sum_{i=0}^{h-1} \hat\Phi_i \hat\Sigma \hat\Phi_i'$$
References
Hamilton, J. D. 1994. Time Series Analysis. Princeton: Princeton University Press.
Lütkepohl, H. 2005. New Introduction to Multiple Time Series Analysis. New York: Springer.
Also see
[TS] fcast graph Graph forecasts after fcast compute
[TS] var intro Introduction to vector autoregressive models
[TS] vec intro Introduction to vector error-correction models
Title
fcast graph Graph forecasts after fcast compute
Syntax
Menu
Description
Options
Remarks and examples
Also see

Syntax

        fcast graph varlist [if] [in] [, options]
where varlist contains one or more forecasted variables generated by fcast compute.
options                     Description

Main
  differences               graph forecasts of the first-differenced variables (vec only)
  noci                      suppress confidence bands
  observed                  include observed values of the predicted variables

Forecast plot
  cline_options             affect rendition of the forecast lines

CI plot
  ciopts(area_options)      affect rendition of the confidence bands

Observed plot
  obopts(cline_options)     affect rendition of the observed values

  twoway_options            any options other than by() documented in [G-3] twoway_options
  byopts(by_option)         affect appearance of the combined graph; see [G-3] by_option
Menu
    Statistics > Multivariate time series > VEC/VAR forecasts > Graph forecasts
Description
fcast graph graphs dynamic forecasts of the endogenous variables from a VAR(p) or VECM that
has already been obtained from fcast compute; see [TS] fcast compute.
Options
Main
differences specifies that the forecasts of the first-differenced variables be graphed. This option is
available only with forecasts computed by fcast compute after vec. The differences option
implies noci.
noci specifies that the confidence intervals be suppressed. By default, the confidence intervals are
included.
observed specifies that observed values of the predicted variables be included in the graph. By
default, observed values are not graphed.
Forecast plot
cline_options affect the rendition of the plotted lines corresponding to the forecast; see
[G-3] cline_options.
CI plot
ciopts(area_options) affects the rendition of the confidence bands for the forecasts; see
[G-3] area_options.
Observed plot
obopts(cline_options) affects the rendition of the observed values of the predicted variables; see
[G-3] cline_options. This option implies the observed option.
twoway_options are any of the options documented in [G-3] twoway_options, excluding by().
byopts(by_option) are documented in [G-3] by_option. These options affect the appearance of the
combined graph.
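A sketch of combining several of these options with the forecasts computed in example 1 of
[TS] fcast compute (option values illustrative):

. fcast graph m2_dln_inc m2_dln_inv, observed ciopts(fintensity(30))
>     byopts(title("Two-lag VAR forecasts"))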
Example 1
In this example, we use a cointegrating VECM to model the state-level unemployment rates in
Missouri, Indiana, Kentucky, and Illinois, and we graph the forecasts against a 6-month holdout
sample.
. use http://www.stata-press.com/data/r13/urates
. vec missouri indiana kentucky illinois if t < tm(2003m7), trend(rconstant)
> rank(2) lags(4)
(output omitted )
. fcast compute m1_, step(6)
. fcast graph m1_missouri m1_indiana m1_kentucky m1_illinois, observed
(figure: 6-step dynamic forecasts of the four state unemployment rates with 95% CIs and observed
values, 2003m6-2003m12)
Because the 95% confidence bands for the predicted unemployment rates in Missouri and Indiana do
not contain all their observed values, the model does not reliably predict these unemployment rates.
Also see
[TS] fcast compute Compute dynamic forecasts after var, svar, or vec
[TS] var intro Introduction to vector autoregressive models
[TS] vec intro Introduction to vector error-correction models
Title
forecast Econometric model forecasting
Syntax
Description
Remarks and examples
References
Also see
Syntax
        forecast subcommand . . . [, options]
subcommand      Description

create          create a new model
estimates       add estimation results to current model
identity        specify an identity (nonstochastic equation)
coefvector      specify an equation via a coefficient vector
exogenous       declare exogenous variables
solve           obtain one-step-ahead or dynamic forecasts
adjust          adjust a variable by add factoring, replacing, etc.
describe        describe the current model
list            list all forecast commands composing current model
clear           clear current model from memory
drop            drop forecast variables
query           check whether a forecast model has been started
See [TS] forecast create, [TS] forecast estimates, [TS] forecast identity, [TS] forecast coefvector,
[TS] forecast exogenous, [TS] forecast solve, [TS] forecast adjust, [TS] forecast describe,
[TS] forecast list, [TS] forecast clear, [TS] forecast drop, and [TS] forecast query for details about
these subcommands.
Description
forecast is a suite of commands for obtaining forecasts by solving models, collections of
equations that jointly determine the outcomes of one or more variables. Equations can be stochastic
relationships fit using estimation commands such as regress, ivregress, var, or reg3; or they can
be nonstochastic relationships, called identities, that express one variable as a deterministic function
of other variables. Forecasting models may also include exogenous variables whose values are already
known or determined by factors outside the purview of the system being examined. The forecast
commands can also be used to obtain dynamic forecasts in single-equation models.
The forecast suite lets you incorporate outside information into your forecasts through the use
of add factors and similar devices, and you can specify the future path for some model variables
and obtain forecasts for other variables conditional on that path. Each set of forecast variables has its
own name prefix or suffix, so you can compare forecasts based on alternative scenarios. Confidence
intervals for forecasts can be obtained via stochastic simulation and can incorporate both parameter
uncertainty and additive error terms.
forecast works with both time-series and panel datasets. Time-series datasets may not contain
any gaps, and panel datasets must be strongly balanced.
This manual entry provides an overview of forecasting models and several examples showing how
the forecast commands are used together. See the individual subcommands' manual entries for
detailed discussions of the various options available and specific remarks about those subcommands.
The following is Klein's (1950) model of the U.S. economy:

$$\begin{aligned}
c_t &= \beta_0 + \beta_1 p_t + \beta_2 p_{t-1} + \beta_3 w_t + \epsilon_{1t} && (1)\\
i_t &= \beta_4 + \beta_5 p_t + \beta_6 p_{t-1} + \beta_7 k_{t-1} + \epsilon_{2t} && (2)\\
wp_t &= \beta_8 + \beta_9 y_t + \beta_{10} y_{t-1} + \beta_{11} yr_t + \epsilon_{3t} && (3)\\
y_t &= c_t + i_t + g_t && (4)\\
p_t &= y_t - t_t - wp_t && (5)\\
k_t &= k_{t-1} + i_t && (6)\\
w_t &= wg_t + wp_t && (7)
\end{aligned}$$
The variables in the model are the following:

Variable    Description                            Type

c           Consumption                            endogenous
p           Private-sector profits                 endogenous
wp          Private-sector wages                   endogenous
wg          Government-sector wages                exogenous
w           Total wages                            endogenous
i           Investment                             endogenous
k           Capital stock                          endogenous
y           National income                        endogenous
g           Government spending                    exogenous
t           Indirect bus. taxes + net exports      exogenous
yr          Time trend = Year - 1931               exogenous
Our model has four exogenous variables: government-sector wages (wg), government spending (g),
a time-trend variable (yr), and, for simplicity, a variable that lumps indirect business taxes and net
exports together (t). To make out-of-sample forecasts, we must populate those variables over the
entire forecast horizon before solving our model. (We use the phrases "solve our model" and "obtain
forecasts from our model" interchangeably.)
We will illustrate the entire process of fitting and forecasting our model, though our focus will be
on the latter task. See [R] reg3 for a more in-depth look at fitting models like this one. Before we
solve our model, we first estimate the parameters of the stochastic equations by loading the dataset
and calling reg3:
. use http://www.stata-press.com/data/r13/klein2
. reg3 (c p L.p w) (i p L.p L.k) (wp y L.y yr), endog(w p y) exog(t wg g)
Three-stage least-squares regression

Equation          Obs  Parms        RMSE    "R-sq"       chi2        P

c                  21      3    .9443305    0.9801     864.59   0.0000
i                  21      3    1.446736    0.8258     162.98   0.0000
wp                 21      3    .7211282    0.9863    1594.75   0.0000

                    Coef.   Std. Err.       z    P>|z|     [95% Conf. Interval]

c
          p
        --.      .1248904   .1081291     1.16   0.248    -.0870387    .3368194
        L1.      .1631439   .1004382     1.62   0.104    -.0337113    .3599992
          w       .790081   .0379379    20.83   0.000      .715724    .8644379
      _cons      16.44079   1.304549    12.60   0.000     13.88392    18.99766

i
          p
        --.     -.0130791   .1618962    -0.08   0.936    -.3303898    .3042316
        L1.      .7557238   .1529331     4.94   0.000     .4559805    1.055467
          k
        L1.     -.1948482   .0325307    -5.99   0.000    -.2586072   -.1310893
      _cons      28.17785   6.793768     4.15   0.000     14.86231    41.49339

wp
          y
        --.      .4004919   .0318134    12.59   0.000     .3381388     .462845
        L1.       .181291   .0341588     5.31   0.000     .1143411    .2482409
         yr       .149674   .0279352     5.36   0.000      .094922    .2044261
      _cons      1.797216   1.115854     1.61   0.107    -.3898181    3.984251

Endogenous variables:  c i wp w p y
Exogenous variables:   L.p L.k L.y yr t wg g
The output from reg3 indicates that we have a total of six endogenous variables even though our
model in fact has seven. The discrepancy stems from (6) of our model. The capital stock variable (k)
is a function of the endogenous investment variable and is therefore itself endogenous. However, kt
does not appear in any of our model's stochastic equations, so we did not declare it in the endog()
option of reg3; from a purely estimation perspective, the contemporaneous value of the capital stock
variable is irrelevant, though it does play a role in terms of solving our model. We next store the
estimation results using estimates store:
. estimates store klein
Now we are ready to define our model using the forecast commands. We first tell Stata to
initialize a new model; we will call our model kleinmodel:
. forecast create kleinmodel
Forecast model kleinmodel started.
The name you give the model mainly controls how output from forecast commands is labeled.
More importantly, forecast create creates the internal data structures Stata uses to keep track of
your model.
The next step is to add all the equations to the model. To add the three stochastic equations we
fit using reg3, we use forecast estimates:
. forecast estimates klein
Added estimation results from reg3.
Forecast model kleinmodel now contains 3 endogenous variables.
That command tells Stata to find the estimates stored as klein and add them to our model. forecast
estimates uses those estimation results to determine that there are three endogenous variables (c, i,
and wp), and it will save the estimated parameters and other information that forecast solve will
later need to obtain predictions for those variables. forecast estimates confirmed our request by
reporting that the estimation results added were from reg3.
forecast estimates reports that our forecast model has three endogenous variables because our
reg3 command included three left-hand-side variables. The fact that we specified three additional
endogenous variables in the endog() option of reg3 so that reg3 reports a total of six endogenous
variables is irrelevant to forecast. All that matters is the number of left-hand-side variables in the
model.
We also need to specify the four identities, equations (4) through (7), that determine the other four
endogenous variables in our model. To do that, we use forecast identity:
. forecast identity y = c + i + g
Forecast model kleinmodel now contains 4 endogenous variables.
. forecast identity p = y - t - wp
Forecast model kleinmodel now contains 5 endogenous variables.
. forecast identity k = L.k + i
Forecast model kleinmodel now contains 6 endogenous variables.
. forecast identity w = wg + wp
Forecast model kleinmodel now contains 7 endogenous variables.
You specify identities similarly to how you use the generate command, except that the left-hand-side
variable is an endogenous variable in your model rather than a new variable you want to create in your
dataset. Time-series operators often come in handy when specifying identities; here we expressed
capital, a stock variable, as its previous value plus current-period investment, a flow variable. An
identity defines an endogenous variable, so each time we use forecast identity, the number of
endogenous variables in our forecast model increases by one.
Finally, we will tell Stata about the four exogenous variables. We do that with the forecast
exogenous command:
. forecast exogenous wg
Forecast model kleinmodel now contains 1 declared exogenous variable.
. forecast exogenous g
Forecast model kleinmodel now contains 2 declared exogenous variables.
. forecast exogenous t
Forecast model kleinmodel now contains 3 declared exogenous variables.
. forecast exogenous yr
Forecast model kleinmodel now contains 4 declared exogenous variables.
forecast keeps track of the exogenous variables that you declare using the forecast exogenous
command and reports the number currently in the model. When you later use forecast solve,
forecast verifies that these variables contain nonmissing data over the forecast horizon. In fact, we
could have instead typed
. forecast exogenous wg g t yr
but to avoid confusing ourselves, we prefer to issue one command for each variable in our model.
Now Stata knows everything it needs to know about the structure of our model. klein2.dta in
memory contains annual observations from 1920 to 1941. Before we make out-of-sample forecasts,
we should first see how well our model works by comparing its forecasts with actual data. There
are a couple of ways to do that. The first is to produce static forecasts. In static forecasts, actual
values of all lagged variables that appear in the model are used. Because actual values will be missing
beyond the last historical time period in the dataset, static forecasts can only forecast one period
into the future (assuming only first lags appear in the model); for that reason, they are often called
one-step-ahead forecasts. To obtain these one-step-ahead forecasts, we type
. forecast solve, prefix(s_) begin(1921) static
Computing static forecasts for model kleinmodel.
Starting period: 1921
Ending period:    1941
Forecast prefix: s_
1921: ............................................
1922: ..............................................
1923: .............................................
(output omitted )
1940: .............................................
1941: ..............................................
Forecast 7 variables spanning 21 periods.
We specified begin(1921) to request that the first year for which forecasts are produced be 1921. Our
model includes variables that are lagged one period; because our data start in 1920, 1921 is the first
year in which we can evaluate all the equations of the model. If we did not specify the begin(1921)
option, forecast solve would have started forecasting in 1941. By default, forecast solve looks
for the earliest time period in which any of the endogenous variables contains a missing value and
begins forecasting in that period. In klein2.dta, k is missing in 1941.
The header of the output confirms that we requested static forecasts for our model, and it indicates
that it will produce forecasts from 1921 through 1941, the last year in our dataset. By default,
forecast solve produces a status report in which the time period being forecast is displayed along
with a dot for each iteration the equation solver performs. The footer of the output confirms that we
forecast seven endogenous variables for 21 years.
The command we just typed will create seven new variables in our dataset, one for each endogenous
variable, containing the static forecasts. Because we specified prefix(s_), the seven new variables
will be named s_c, s_i, s_wp, s_y, s_p, s_k, and s_w. Here we graph a subset of the variables
and their forecasts:
176
40
50
60
70
40 50 60 70 80 90
Total Income
1920
1925
1930
year
1935
1940
1920
1925
1935
1940
Private Wages
10
20 30 40 50 60
Investment
1930
year
1920
1925
1930
year
1935
1940
1920
1925
1930
year
1935
1940
Our static forecasts appear to fit the data relatively well. Had they not fit well, we would have to go
back and reexamine the specification of our model. If the static forecasts are poor, then the dynamic
forecasts that use previous periods' forecast values are unlikely to work well either. On the other
hand, even if the model produces good static forecasts, it may not produce accurate dynamic forecasts
more than one or two periods into the future.
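One way to quantify how well the static forecasts fit is to summarize the one-step forecast errors,
here for consumption (the error variable name is illustrative):

. generate double c_err = s_c - c
. summarize c_err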
Another way to check how well a model forecasts is to produce dynamic forecasts for time periods
in which observed values are available. Here we begin dynamic forecasts in 1936, giving us six years
of data with which to compare actual and forecast values, and then graph our results:
. forecast solve, prefix(d_) begin(1936)
Computing dynamic forecasts for model kleinmodel.
Starting period: 1936
Ending period:    1941
Forecast prefix: d_
1936: ............................................
1937: ..........................................
1938: .............................................
1939: .............................................
1940: ............................................
1941: ..............................................
Forecast 7 variables spanning 6 periods.
(figure: Dynamic Forecasts: consumption, total income, investment, and private wages versus actual
values, 1920-1941)
Most of the in-sample forecasts look okay, though our model was unable to predict the outsized
increase in investment in 1936 and the sharp drop in 1938.
Our first example was particularly easy because all the endogenous variables appeared in levels.
However, oftentimes the endogenous variables are better modeled using mathematical transformations
such as logarithms, first differences, or percentage changes; transformations of the endogenous
variables may appear as explanatory variables in other equations. The next few examples illustrate
these complications.
We fit the three equations using regress with the data through the end of 2009. We use quietly
to suppress the output from regress to save space, and we store each set of estimation results as
we go. In Stata, we type
. use http://www.stata-press.com/data/r13/hardware, clear
(Hughes Hardware sales data)
. generate lndim = ln(dim)
. generate lngdp = ln(gdp)
. generate lnstarts = ln(starts)
. quietly regress D.lndim D.lngdp D.lnstarts
. estimates store dim
. quietly regress sheet D.lngdp D.lnstarts
. estimates store sheet
. quietly regress misc D.lngdp D.lnstarts
. estimates store misc
The equations for sheet goods and miscellaneous items do not present any challenges for forecast,
so we proceed by creating a new forecast model named salesfcast and adding those two equations:
. forecast create salesfcast, replace
(Forecast model kleinmodel ended.)
Forecast model salesfcast started.
. forecast estimates sheet
Added estimation results from regress.
Forecast model salesfcast now contains 1 endogenous variable.
. forecast estimates misc
Added estimation results from regress.
Forecast model salesfcast now contains 2 endogenous variables.
The equation for dimensional lumber requires more finesse. First, because our dependent variable
contains a time-series operator, we must use the names() option of forecast estimates to specify
a valid name for the endogenous variable being added:
. forecast estimates dim, names(dlndim)
Added estimation results from regress.
Forecast model salesfcast now contains 3 endogenous variables.
We have entered the endogenous variable dlndim into our model, but it represents the left-hand-side
variable of the regression equation we just added. That is, dlndim is the first-difference of the
logarithm of dim, the sales variable we ultimately want to forecast. We can specify an identity to
reverse the first-differencing, providing us with a variable containing the logarithm of dim:
. forecast identity lndim = L.lndim + dlndim
Forecast model salesfcast now contains 4 endogenous variables.
Finally, we use another identity to obtain dim itself:
. forecast identity dim = exp(lndim)
Forecast model salesfcast now contains 5 endogenous variables.
Now we can solve the model. We will obtain dynamic forecasts starting in the first quarter of
2010, and we will use the log(off) option to suppress the iteration log:
. forecast solve, begin(tq(2010q1)) log(off)
Computing dynamic forecasts for model salesfcast.
Starting period: 2010q1
Ending period:    2012q3
Forecast prefix: f_
Forecast 5 variables spanning 11 periods.
We did not specify the prefix() or suffix() option, so by default, forecast prefixed our forecast
variables with f_. The following graph illustrates our forecasts:
(figure: Hughes Hardware Sales ($mil.): dimensional lumber, sheet goods, and miscellany, forecast
versus actual, 2008q1-2012q3)
Our model performed well in 2010, but it did not forecast the pickup in sales that occurred in 2011
and 2012.
Technical note
For more information about working with log-transformed variables, see the second technical note
in [TS] forecast estimates.
The forecast commands can also be used to make forecasts for strongly balanced panel datasets.
A panel dataset is strongly balanced when all the panels have the same number of observations, and
the observations for different panels were all made at the same times. Our next example illustrates
how to produce a forecast with panel data and highlights a couple of key assumptions one must make.
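Before solving a model with panel data, a quick way to confirm that the dataset is xtset and
strongly balanced is xtdescribe; a sketch, with panel and time variables matching the example
below:

. xtset state quarter
. xtdescribe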
To make our state-level forecasts, we will use essentially the same model that we did for the
company-wide forecast, though we will also include state-specific effects, fitting each equation with
xtreg.
Having fit the model, we are almost ready to make forecasts. First, though, we need to consider
how to handle the state-level error terms. If we simply created a forecast model, added our three
estimation results, then called forecast solve, Stata would forecast misc_it, for example, as a
function of dim_it, rgspgrowth_it, unrate_it, and the estimate of the constant term β_30. However,
our model implies that misc_it also depends on the state-level error u_3i and the idiosyncratic error
term ε_3it. We will ignore the idiosyncratic error for now (but see the discussion of simulations in
[TS] forecast solve). By construction, u_3i has a mean of zero when averaged across all panels, but
in general, u_3i is nonzero for any individual panel. Therefore, we should include it in our forecasts.
After you fit a model with xtreg, you can predict the panel-specific error component for the
subset of observations in the estimation sample. Typically, xtreg is used in situations where the
number of observations per panel T is modest. In those cases, the estimates of the panel-specific
error components are likely to be noisy (analogous to estimating a sample mean with just a few
observations). Often asymptotic analyses of panel-data estimators assume T is fixed, and in those
cases, the estimators of the panel-specific errors are inconsistent.
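After xtreg, the panel-level effects can be predicted with predict's u statistic. A sketch of how
the error-component variables used below might be obtained, assuming the three sets of results were
stored as dim, sheet, and misc as before:

. estimates restore dim
. predict dlndim_u, u
. estimates restore sheet
. predict sheet_u, u
. estimates restore misc
. predict misc_u, u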
However, in forecasting applications, the number of observations per panel is usually larger than
in most other panel-data applications. With enough observations, we can have more confidence in
the estimated panel-specific errors. If we are willing to assume that we have decent estimates of the
panel-specific errors and that those panel-level effects will remain constant over the forecast horizon,
then we can incorporate them into our forecasts. Because predict only provided us with estimates
of the panel-level effects for the estimation sample, we need to extend them into the forecast horizon.
An easy way to do that is to use egen to create a new set of variables:
. by state: egen dlndim_u2 = mean(dlndim_u)
. by state: egen sheet_u2 = mean(sheet_u)
. by state: egen misc_u2 = mean(misc_u)
We can use forecast adjust to incorporate these terms into our forecasts. The following commands
define our forecast model, including the estimated panel-specific terms:
. forecast create statemodel, replace
(Forecast model salesfcast ended.)
Forecast model statemodel started.
. forecast estimates dim, name(dlndim)
Added estimation results from xtreg.
Forecast model statemodel now contains 1 endogenous variable.
. forecast adjust dlndim = dlndim + dlndim_u2
Endogenous variable dlndim now has 1 adjustment.
. forecast identity lndim = L.lndim + dlndim
Forecast model statemodel now contains 2 endogenous variables.
. forecast identity dim = exp(lndim)
Forecast model statemodel now contains 3 endogenous variables.
. forecast estimates sheet
Added estimation results from xtreg.
Forecast model statemodel now contains 4 endogenous variables.
. forecast adjust sheet = sheet + sheet_u2
Endogenous variable sheet now has 1 adjustment.
. forecast estimates misc
Added estimation results from xtreg.
Forecast model statemodel now contains 5 endogenous variables.
. forecast adjust misc = misc + misc_u2
Endogenous variable misc now has 1 adjustment.
We used forecast adjust to perform our adjustment to dlndim immediately after we added those
estimation results so that we would not forget to do so and before we used identities to obtain the
actual dim variable. However, we could have specified the adjustment at any time. Regardless of
when you specify an adjustment, forecast solve performs those adjustments immediately after the
variable being adjusted is computed.
Now we can solve our model. Here we obtain dynamic forecasts beginning in the first quarter of
2010:
. forecast solve, begin(tq(2010q1))
Computing dynamic forecasts for model statemodel.
Starting period: 2010q1
Ending period:    2011q4
Number of panels: 5
Forecast prefix: f_
Solving panel 1
Solving panel 2
Solving panel 3
Solving panel 4
Solving panel 5
Forecast 5 variables spanning 8 periods for 5 panels.
(figure: forecast versus actual sales by state, 2008-2012; panels shown include AR, LA, OK, and TX)
Similar to our company-wide forecast, our state-level forecast failed to call the bottom in sales that
occurred in 2011. Because our model missed the shift in sales momentum in every one of the five
states, we would be inclined to go back and try respecifying one or more of the equations in our
model. On the other hand, if our model forecasted most of the states well but performed poorly in
just a few states, then we would first want to investigate whether any events in those states could
account for the unexpected results.
Technical note
Stata also provides the areg command for fitting a linear regression with a large dummy-variable
set; it is designed for situations where the number of groups (panels) is fixed, while the number of
observations per panel increases with the sample size. When the goal is to create a forecast model
for panel data, you should nevertheless use xtreg rather than areg. The forecast commands
require knowledge of the panel-data settings declared using xtset as well as panel-related estimation
information saved by the other panel-data commands in order to produce forecasts with panel datasets.
In the previous example, none of our equations contained lagged dependent variables as regressors.
If an equation did contain a lagged dependent variable, then one could use a dynamic panel-data
(DPD) estimator such as xtabond, xtdpd, or xtdpdsys. DPD estimators are designed for cases
where the number of observations per panel T is small. As shown by Nickell (1981), the bias
of the standard fixed- and random-effects estimators in the presence of lagged dependent variables
is of order 1/T and is thus particularly severe when each panel has relatively few observations.
Judson and Owen (1999) perform Monte Carlo experiments to examine the relative performance of
different panel-data estimators in the presence of lagged dependent variables when used with panel
datasets having dimensions more commonly encountered in macroeconomic applications. Based on
their results, the bias of the standard fixed-effects estimator (LSDV in their notation) is not
inconsequential even when T = 20, but for T = 30, the fixed-effects estimator works as well as most
alternatives. The only estimator that appreciably outperformed the standard fixed-effects estimator
when T = 30 is the least-squares dummy variable corrected estimator (LSDVC in their notation).
Bruno (2005) provides a Stata implementation of that estimator. Many datasets used in forecasting
situations contain even more observations per panel, so the Nickell bias is unlikely to be a major
concern.
In this manual entry, we have provided an overview of the forecast commands and provided
several examples to get you started. The command-specific entries fill in the details.
References
Bruno, G. S. F. 2005. Estimation and inference in dynamic unbalanced panel-data models with a small number of
individuals. Stata Journal 5: 473-500.
Judson, R. A., and A. L. Owen. 1999. Estimating dynamic panel data models: a guide for macroeconomists. Economics
Letters 65: 9-15.
Klein, L. R. 1950. Economic Fluctuations in the United States 1921-1941. New York: Wiley.
Nickell, S. J. 1981. Biases in dynamic models with fixed effects. Econometrica 49: 1417-1426.
Also see
[TS] var Vector autoregressive models
[TS] tsset Declare data to be time-series data
[R] ivregress Single-equation instrumental-variables regression
[R] reg3 Three-stage estimation for systems of simultaneous equations
[R] regress Linear regression
[XT] xtreg Fixed-, between-, and random-effects and population-averaged linear models
[XT] xtset Declare data to be panel data
Title
forecast adjust Adjust a variable by add factoring, replacing, etc.
Syntax
Description
Remarks and examples
Stored results
Reference
Also see

Syntax

        forecast adjust varname = exp [if] [in]
varname is the name of an endogenous variable that has been previously added to the model using
forecast estimates, forecast coefvector, or forecast identity.
exp represents a Stata expression; see [U] 13 Functions and expressions.
Description
forecast adjust specifies an adjustment to be applied to an endogenous variable in the model.
Adjustments are typically used to produce alternative forecast scenarios or to incorporate outside
information into a model. For example, you could use forecast adjust with a macroeconomic
model to simulate the effect of an oil price shock whereby the price of oil spikes $50 higher than
your model otherwise predicts in a given quarter.
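For instance, with a hypothetical endogenous variable oilprice in a quarterly model, such a shock
might be specified as (variable and date illustrative):

. forecast adjust oilprice = oilprice + 50 if qtr == tq(2015q3)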
Example 1

We build on example 1 of [TS] forecast, this time fitting Klein's model on data through 1938 so
that we can produce forecasts for 1939 through 1941. We first fit and define the model, then obtain
a baseline forecast with the prefix bl_:

. use http://www.stata-press.com/data/r13/klein2
. quietly reg3 (c p L.p w) (i p L.p L.k) (wp y L.y yr) if year < 1939,
> endog(w p y) exog(t wg g)
. estimates store klein
. forecast create kleinmodel
Forecast model kleinmodel started.
. forecast estimates klein
Added estimation results from reg3.
Forecast model kleinmodel now contains 3 endogenous variables.
. forecast identity y = c + i + g
Forecast model kleinmodel now contains 4 endogenous variables.
. forecast identity p = y - t - wp
Forecast model kleinmodel now contains 5 endogenous variables.
. forecast identity k = L.k + i
Forecast model kleinmodel now contains 6 endogenous variables.
. forecast identity w = wg + wp
Forecast model kleinmodel now contains 7 endogenous variables.
. forecast exogenous wg
Forecast model kleinmodel now contains 1 declared exogenous variable.
. forecast exogenous g
Forecast model kleinmodel now contains 2 declared exogenous variables.
. forecast exogenous t
Forecast model kleinmodel now contains 3 declared exogenous variables.
. forecast exogenous yr
Forecast model kleinmodel now contains 4 declared exogenous variables.
. forecast solve, prefix(bl_) begin(1939)
(output omitted )
Suppose we want to see the effect of a $1 billion investment tax credit that increases investment
by 1 in 1939. We specify the adjustment:

. forecast adjust i = i + 1 if year == 1939
Endogenous variable i now has 1 adjustment.

While computing the forecasts for 1939, whenever forecast evaluates the equation for i, it will set
i to be higher than it would otherwise be by 1. Now we re-solve our model using the prefix alt_
to indicate this is an alternative forecast:

. forecast solve, prefix(alt_) begin(1939)
(output omitted )
The following graph shows how investment and total income respond to this policy shock.
(figure: Effect of $1 billion investment tax credit: investment and total income in $ billion, baseline
versus alternative forecasts, 1938-1941)
Both investment and total income would be higher not just in 1939 but also in 1940; the higher
capital stock implied by the additional investment raises total output (and hence income) even after
the tax credit expires. Let's look at these two variables in more detail:
. list year bl_i alt_i bl_y alt_y if year >= 1938, sep(0)

           year       bl_i      alt_i       bl_y      alt_y

    19.    1938       -1.9       -1.9       60.9       60.9
    20.    1939   3.757227   6.276423   75.57685   80.71709
    21.    1940   7.971523   9.501909   89.67435   94.08473
    22.    1941   16.16375   16.20362   123.0809    124.238
It is possible to construct models in which each equation appears to be well specified, but the model
nevertheless forecasts poorly or suggests unlikely behavior in response to policy shocks.
To make adjustments lasting more than one or two periods, creating an adjustment variable makes
more sense. One way to simulate a two-year version of our tax credit is
. generate i_adj = 0
. replace i_adj = 1 if year == 1939 | year == 1940
. forecast adjust i = i + i_adj
So far in our discussion of forecast adjust, we have always shown an endogenous variable
being adjusted by adding a number or variable to it. However, any valid expression is allowed on the
right-hand side of the equals sign. If you want to explore the effects of a policy that will increase
investment by 10% in 1939, you could type
. forecast adjust i = 1.1*i if year == 1939
Stored results
forecast adjust stores the following in r():
Macros
  r(lhs)           name of endogenous variable adjusted
  r(rhs)           right-hand side of adjustment
  r(basenames)     base names of variables used on right-hand side
  r(fullnames)     full names of variables used on right-hand side
Reference
Klein, L. R. 1950. Economic Fluctuations in the United States 1921-1941. New York: Wiley.
Also see
[TS] forecast Econometric model forecasting
[TS] forecast solve Obtain static and dynamic forecasts
Title
forecast clear Clear current model from memory
Syntax
Description
Also see
Syntax
forecast clear
Description
forecast clear removes the current forecast model from memory.
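Because only one model may reside in memory at a time, a typical use is to clear the current model
before building another (model name illustrative):

. forecast clear
. forecast create newmodel
Forecast model newmodel started.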
Also see
[TS] forecast Econometric model forecasting
[TS] forecast create Create a new forecast model
Title
forecast coefvector Specify an equation via a coefficient vector
Syntax
Description
Options
Remarks and examples
Also see

Syntax

        forecast coefvector cname [, options]

options                         Description

variance(vname)                 specify parameter variance matrix
errorvariance(ename)            specify additive error variance matrix
names(namelist[, replace])      use namelist for names of left-hand-side variables
Description
forecast coefvector adds equations that are stored as coefficient vectors to your forecast model.
Typically, equations are added using forecast estimates and forecast identity. forecast
coefvector is used in less-common situations where you have a vector of parameters that represent
a linear equation.
Most users of the forecast commands will not need to use forecast coefvector. We recommend skipping this manual entry until you are familiar with the other features of forecast.
Options
variance(vname) specifies that Stata matrix vname contains the variance matrix of the estimated
parameters. This option only has an effect if you specify the simulate() option when calling
forecast solve and request sim techniques betas or residuals. See [TS] forecast solve.
errorvariance(ename) specifies that the equations being added include an additive error term with
variance ename, where ename is the name of a Stata matrix. The number of rows and columns in
ename must match the number of equations represented by coefficient vector cname. This option
only has an effect if you specify the simulate() option when calling forecast solve and
request sim techniques errors or residuals. See [TS] forecast solve.
names(namelist[, replace]) instructs forecast coefvector to use namelist as the names of the
left-hand-side variables in the coefficient vector being added. By default, forecast coefvector
uses the equation names on the column stripe of cname. You must use this option if any of the
equation names stored with cname contains time-series operators.
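A sketch of supplying a parameter variance matrix for a coefficient vector with three columns, such
as the vector y in the first example below (matrix values illustrative):

. matrix V = (.01, 0, 0 \ 0, .01, 0 \ 0, 0, .01)
. forecast coefvector y, variance(V)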
Introduction
forecast coefvector can be used to add equations that you obtained elsewhere to your model.
For example, you might see the estimated coefficients for an equation in an article and want to add
that equation to your model. User-written estimators that do not implement a predict command can
also be included in forecast models via forecast coefvector. forecast coefvector can also
be useful in situations where you want to simulate time-series data, as the next example illustrates.
We first create a dataset with 20 time periods and initialize y to 0:

. set obs 20
obs was 0, now 20
. generate t = _n
. tsset t
        time variable:  t, 1 to 20
                delta:  1 unit
. generate y = 0

Now let's think about our coefficient vector. The only tricky part is in labeling the columns. We can
represent the lagged values of yt using time-series operators; there is just one equation, corresponding
to variable y. We can use matrix coleq to apply both variable and equation names to the columns
of our matrix. In Stata, we type
. matrix y = (.9, -.6, 0.3)
. matrix coleq y = y:L.y y:L2.y y:L3.y
. matrix list y
y[1,3]
             y:     y:     y:
             L.    L2.    L3.
              y      y      y
    r1       .9    -.6     .3
forecast coefvector ignores the row name of the vector being added (r1 here), so we can leave
it as is. Next we create a forecast model and add y:
. forecast create
Forecast model started.
. forecast coefvector y
Forecast model now contains 1 endogenous variable.
Now we can solve our model. Because our y variable is filled in for the entire dataset, forecast
solve will not be able to automatically determine when forecasting should commence. We have three
lags in our process, so we will start at t = 4. To reduce the amount of output, we specify log(off):
. forecast solve, begin(4) log(off)
Computing dynamic forecasts for current model.
Starting period:   4
Ending period:     20
Forecast prefix:   f_
Forecast 1 variable spanning 17 periods.
(figure: impulse-response function: response of y over t = 1, ..., 20)
The graph shows our shock causing y to jump to 1 at t = 5. At t = 6, we can see that y = 0.9, and
at t = 7, we can see that y = 0.9 × 0.9 − 0.6 × 1 = 0.21.
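Carrying the recursion one period further, at t = 8 we expect y = 0.9 × 0.21 − 0.6 × 0.9 + 0.3 × 1 =
−0.051.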
The previous example used a coefficient vector representing a single equation. However, coefficient
vectors can contain multiple equations. For example, say we read an article and saw the following
results displayed:

$$x_t = 0.2 + 0.3\,x_{t-1} - 0.8\,y_t$$
$$y_t = 0.1 + 0.7\,y_{t-1} + 0.3\,x_t - 0.2\,x_{t-1}$$
We can add both equations at once to our forecast model. Again the key is in labeling the columns.
forecast coefvector understands _cons to mean a constant term, and it looks at the equation
names on the vectors columns to determine how many equations there are and to what endogenous
variables they correspond:
. matrix eqvector = (0.2, 0.3, -0.8, 0.1, 0.7, 0.3, -0.2)
. matrix coleq eqvector = x:_cons x:L.x x:y y:_cons y:L.y y:x y:L.x
. matrix list eqvector
eqvector[1,7]
             x:      x:     x:      y:      y:     y:      y:
                     L.                     L.             L.
          _cons       x      y   _cons       y      x       x
    r1       .2      .3    -.8      .1      .7     .3     -.2
4. You specify both variance() and errorvariance(). The equations represented by this
coefficient vector are stochastic, and forecast solve treats the coefficient vector just like
an estimation result. sim_techniques betas, residuals, and errors all work as expected.
The matrix specified in the errorvariance() option must be m × m, where m is the number of
equations represented by the coefficient vector. Row and column names for that matrix are ignored.
Also see
[TS] forecast Econometric model forecasting
[TS] forecast solve Obtain static and dynamic forecasts
[P] matrix Introduction to matrix commands
[P] matrix rownames Name rows and columns
Title
forecast create Create a new forecast model
Syntax
Description
Option
Remarks and examples
Also see

Syntax

        forecast create [name] [, replace]
name is an optional name that can be given to the model. name must follow the naming conventions
described in [U] 11.3 Naming conventions.
Description
forecast create creates a new forecast model in Stata.
Option
replace causes Stata to clear the existing model from memory before creating name. You may have
only one model in memory at a time. By default, forecast create issues an error message if
another model is already in memory.
Example 1
Here we create a model named salesfcast:
. forecast create salesfcast
Forecast model salesfcast started.
Technical note
Warning: Do not type clear all, clear mata, or clear results after creating a forecast
model with forecast create unless you intend to remove your forecast model. Typing clear all
or clear mata eliminates the internal structures used to store your forecast model. Typing clear
results clears all estimation results from memory. If your forecast model includes estimation results
that rely on the ability to call predict, you will not be able to solve your model.
Also see
[TS] forecast Econometric model forecasting
[TS] forecast clear Clear current model from memory
Title
forecast describe Describe features of the forecast model
Syntax
Description
Options
Remarks and examples
Stored results
Reference
Also see
Syntax
Describe the current forecast model

        forecast describe [, options]

Describe particular aspects of the current forecast model

        forecast describe aspect [, options]
aspect          Description

estimates       estimation results
coefvector      coefficient vectors
identity        identities
exogenous       declared exogenous variables
adjust          adjustments to endogenous variables
solve           forecast solution information
endogenous      all endogenous variables

options         Description

brief           provide a one-sentence summary of the aspect
detail          provide more-detailed information about the aspect

Specifying detail provides no additional information with aspects exogenous, endogenous, and solve.
Description
forecast describe displays information about the forecast model currently in memory. For
example, you can type forecast describe endogenous to obtain information regarding all the
endogenous variables in the model. Typing forecast describe without specifying a particular
aspect of the model is equivalent to typing forecast describe aspect for every aspect in the table
above and can result in more output than you want, particularly if you specify the detail option.
Options
brief requests that forecast describe produce a one-sentence summary of the aspect specified.
For example, forecast describe exogenous, brief will tell you just the current forecast
model's name and the number of exogenous variables in the model.
detail requests a more-detailed description of the aspect specified. For example, typing forecast
describe estimates lists all the estimation results added to the model using forecast estimates,
the estimation commands used, and the number of left-hand-side variables in each estimation
result. When you specify forecast describe estimates, detail, the output includes a list
of all the left-hand-side variables entered with forecast estimates.
Example 1
In example 1 of [TS] forecast, we created and forecasted Klein's (1950) model of the U.S. economy.
Here we obtain information about all the endogenous variables in the model:
. forecast describe endogenous
Forecast model kleinmodel contains 7 endogenous variables:

       Variable   Source      # adjustments

    1. c          estimates   0
    2. i          estimates   0
    3. wp         estimates   0
    4. y          identity    0
    5. p          identity    0
    6. k          identity    0
    7. w          identity    0
As we mentioned in [TS] forecast, there are seven endogenous variables in this model. Three of those
variables (c, i, and wp) were left-hand-side variables in equations we fitted and added to our forecast
model with forecast estimates. The other four variables were defined by identities added with
forecast identity. The right-hand column of the table indicates that none of our endogenous
variables contains adjustments specified using forecast adjust.
We can obtain more information about the estimated equations in our model using forecast
describe estimates:
. forecast describe estimates, detail
Forecast model kleinmodel contains 1 estimation result:

       Estimation
       result       Command   LHS variables

    1. klein        reg3      c
                              i
                              wp
Our model has one estimation result, klein, containing results produced by the reg3 command. If
we had not specified the detail option, forecast describe estimates would have simply stated
the number of left-hand-side variables (3) rather than listing them.
At the end of example 1 in [TS] forecast, we obtained dynamic forecasts beginning in 1936. Here
we obtain information about the solution:
. forecast describe solve
Forecast model kleinmodel has been solved:

    Forecast horizon
      Begin                             1936
      End                               1941
      Number of periods                 6
    Forecast variables
      Prefix                            d_
      Number of variables               7
      Storage type                      float
      Type of forecast                  Dynamic
    Solution
      Technique                         Damped Gauss-Seidel (0.200)
      Maximum iterations                500
      Tolerance for function values     1.0e-09
      Tolerance for function zero       (not applicable)
We obtain information about the forecast horizon, how the variables holding our forecasts were
created and stored, and the solution technique used. If we had used the simulate() option with
forecast solve, we would have obtained information about the types of simulations performed and
the variables used to hold the results.
Stored results
When you specify option brief, only a limited number of results are stored. In the tables
below, a superscript B indicates results that are available even after brief is specified. forecast
describe coefvector stores certain results only if detail is specified; these are indicated by superscript D.
Typing forecast describe without specifying an aspect does not return any results.
forecast describe estimates stores the following in r():
Scalars
  r(n_estimates)B   number of estimation results
  r(n_lhs)          number of left-hand-side variables
Macros
  r(model)B         name of forecast model, if named
  r(lhs)            left-hand-side variables
  r(estimates)      names of estimation results

forecast describe identity stores the following in r():
Scalars
  r(n_identities)B  number of identities
Macros
  r(model)B         name of forecast model, if named
  r(identity_list)  list of identities

forecast describe endogenous stores the following in r():
Macros
  r(model)B         name of forecast model, if named
  r(varlist)        list of endogenous variables
  r(source_list)    sources of endogenous variables
  r(adjust_cnt)     numbers of adjustments per endogenous variable
Reference
Klein, L. R. 1950. Economic Fluctuations in the United States 19211941. New York: Wiley.
Also see
[TS] forecast Econometric model forecasting
[TS] forecast list List forecast commands composing current model
Title
forecast drop Drop forecast variables
Syntax
Description
Options
Remarks and examples
Stored results
Also see

Syntax

        forecast drop [, options]

options             Description

prefix(string)      drop forecast variables whose names begin with string
suffix(string)      drop forecast variables whose names end with string
Description
forecast drop drops variables previously created by forecast solve.
Options
prefix(string) and suffix(string) specify either a name prefix or a name suffix that will be used to
identify forecast variables to be dropped. You may specify prefix() or suffix() but not both.
By default, forecast drop removes all forecast variables produced by the previous invocation
of forecast solve.
Suppose, however, that you previously specified the simulate() option with forecast solve
and wish to remove variables containing simulation results but retain the variables containing the
point forecasts. Then you can use the prefix() or suffix() option to identify the simulation
variables you want dropped.
Dropping forecast variables by using drop with a wildcard is dangerous: Suppose you were given
the dataset and asked to produce the forecast. The person who previously worked with the dataset
created other variables that ended with f. Using drop would remove those variables as well. forecast
drop removes only those variables that were previously created by forecast solve based on the
model in memory.
If you do not specify any options, forecast drop removes all the forecast variables created by
the current model, including the variables that contain the point forecasts as well as any variables
that contain simulation results specified by the simulate() option with forecast solve. Suppose
you had typed
. forecast solve, prefix(s_) simulate(betas, statistic(stddev, prefix(sd_)))
Typing
. forecast drop, prefix(sd_)
removes the variables containing the standard deviations of the forecasts and
leaves the variables containing the point forecasts (prefixed with s_) untouched.
forecast drop does not exit with an error if a variable it intends to drop does not exist in the
dataset.
Stored results
forecast drop stores the following in r():
Scalars
  r(n_dropped)     number of variables dropped
Also see
[TS] forecast Econometric model forecasting
[TS] forecast solve Obtain static and dynamic forecasts
Title
forecast estimates Add estimation results to a forecast model
Syntax
Description
Options
Remarks and examples
References
Also see

Syntax

Add estimation result currently in memory to model

        forecast estimates name [, options]

name is the name of a stored estimation result; see [R] estimates store.

Add estimation result currently saved on disk to model

        forecast estimates using filename [, number(#) options]

options                         Description

predict(p_options)              call predict using p_options
names(namelist[, replace])      use namelist for names of left-hand-side variables
advise                          report whether estimation results can be removed from memory

Description
forecast estimates adds estimation results to the forecast model currently in memory. You
must first create a new model using forecast create before you can add estimation results with
forecast estimates. After estimating the parameters of an equation or set of equations, you must
use estimates store to store the estimation results in memory or use estimates save to save
them on disk before adding them to the model.
Options
predict(p options) specifies the predict options to use when predicting the dependent variables.
For a single-equation estimation command, you simply specify the appropriate options to pass to
predict. If multiple options are required, enclose them in quotation marks:
. forecast estimates ..., predict("pr outcome(#1)")
For a multiple-equation estimation command, you can either specify one set of options that will
be applied to all equations or specify p options, where p is the number of endogenous variables
being added. If multiple options are required for each equation, enclose each equations options
in quotes:
. forecast estimates ..., predict("pr eq(#1)" "pr eq(#2)")
If you do not specify the eq() option for any of the equations, forecast automatically includes
it for you.
If you are adding results from a linear estimation command that forecast recognizes as one
whose predictions can be calculated as $x_t'\hat\beta$, do not specify the predict() option, because this
will slow forecast's computation time substantially. Use the advise option to determine whether
forecast needs to call predict.
If you do not specify any predict options, forecast uses the default type of prediction for the
command whose results are being added.
names(namelist , replace ) instructs forecast estimates to use namelist as the names of the
left-hand-side variables in the estimation result being added. You must use this option if any of
the left-hand-side variables contains time-series operators. By default, forecast estimates uses
the names stored in the e(depvar) macro of the results being added.
forecast estimates creates a new variable in the dataset for each element of namelist. If a
variable of the same name already exists in your dataset, forecast estimates exits with an
error unless you specify the replace option, in which case existing variables are overwritten.
advise requests that forecast estimates report a message indicating whether the estimation
results being added can be removed from memory. This option is useful if you expect your model
to contain more than 300 sets of estimation results, the maximum number that Stata allows you to
store in memory; see [R] limits. This option also provides an indication of the speed with which
the model can be solved: forecast executes much more slowly with estimation results that must
remain in memory.
number(#), for use with forecast estimates using, specifies that the #th set of estimation results
from filename be loaded. This assumes that multiple sets of estimation results have been saved
in filename. The default is number(1). See [R] estimates save for more information on saving
multiple sets of estimation results in a single file.
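For example, if several sets of estimation results were saved to a single file with estimates save,
the second set could be added with (filename illustrative):

. forecast estimates using myresults, number(2)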
Introduction
After you fit an equation that will become a part of your model, you must use either estimates
store to store the estimation results in memory or estimates save to save the estimation results
to disk. Then you can use forecast estimates to add that equation to your model.
We usually refer to equation in the singular, but of course, you can also use a multiple-equation
estimation command to fit several equations at once and add them to the model. When we discuss
adding a stochastic equation to a model, we really mean adding a single estimation result.
In this discussion, we also need to make a distinction between making a forecast and obtaining a
prediction. We use the word predict to refer to the process of obtaining a fitted value for a single
equation, just as you can use the predict command to obtain fitted values, residuals, or other statistics
after fitting a model with an estimation command. We use the word forecast to mean finding a
solution to the complete set of equations that compose the forecast model. The iterative techniques
we use to solve the model and produce forecasts require that we be able to obtain predictions from
each of the equations in the model.
. use http://www.stata-press.com/data/r13/klein2
. quietly reg3 (c p L.p w) (i p L.p L.k) (wp y L.y yr), endog(w p y) exog(t wg g)
. estimates store klein
. forecast create kleinmodel
Forecast model kleinmodel started.
. forecast estimates klein
Added estimation results from reg3.
Forecast model kleinmodel now contains 3 endogenous variables.
forecast estimates indicated that three endogenous variables were added to the forecast model.
That is because we specified three equations in our call to reg3. As we mentioned in example 1 in
[TS] forecast, the endog() option of reg3 has no bearing on forecast. All that matters are the
three left-hand-side variables.
Technical note
When you add an estimation result to your forecast model, forecast looks at the macro e(depvar)
to determine the endogenous variables being added. If that macro is empty, forecast tries a few
other macros to account for nonstandard commands. The number of endogenous variables being added
to the model is based on the number of words found in the macro containing the dependent variables.
You can fit equations with the D. and S. first- and seasonal-difference time-series operators
adorning the left-hand-side variables, but in those cases, when you add the equations to the model,
you must use the names() option of forecast estimates. When you specify names(namelist),
forecast estimates uses namelist as the names of the newly declared endogenous variables and
ignores what is in e(depvar). Moreover, forecast does not automatically undo the operators on
left-hand-side variables. For example, you might fit a regression with D.x as the regressand and then
add it to the model using forecast estimates . . ., names(Dx). In that case, forecast will solve
the model in terms of Dx. You must add an identity to convert Dx to the corresponding level variable
x, as the next example illustrates.
Of course, you are free to use the D., S., and L. time-series operators on endogenous variables
when they appear on the right-hand sides of equations. It is only when D. or S. appears on the
left-hand side that you must use the names() option to provide alternative names for them. You
cannot add equations to models for which the L. operator appears on left-hand-side variables. You
cannot use the F. forward operator anywhere in forecast models.
Suppose we want to add two equations to the model: a stochastic equation (1) whose left-hand-side
variable is D.logC, the change in the logarithm of C, and a stochastic equation (2) whose
left-hand-side variable is logW, the logarithm of W.
Here logY and M are exogenous variables, so we will assume they are filled in over the forecast
horizon before solving the model. Ultimately, we are interested in forecasting C and W. However,
the first equation is specified in terms of changes in the logarithm of C, and the second equation is
specified in terms of the logarithm of W.
We will refer to variables and transformations like logC, D.logC, and C as related variables
because they are related to one another by simple mathematical functions. Including the related
variables, we in fact have a five-equation model with two stochastic equations and three identities.
Because the left-hand-side variable in (1) contains a time-series operator, we had to use the names()
option of forecast estimates when adding that equation's estimation results to our forecast model.
Here we named this endogenous variable dlogC. We then added the other four equations to our
model. In general, when we have a set of related variables, we prefer to specify the identities right
after we add the stochastic equation so that we do not forget about them.
Technical note
In the previous example, we undid the log-transformations by simply exponentiating the logarithmic variable. However, that is only an approximation that does not work well in many applications.
Suppose we fit the linear regression model

    ln y_t = x_t'β + u_t

where u_t is a zero-mean regression error term. Then E(y_t | x_t) = exp(x_t'β) E{exp(u_t)}. Although
E(u_t) = 0, Jensen's inequality suggests that E{exp(u_t)} ≠ 1, implying that we cannot predict y_t
by simply taking the exponential of the linear prediction x_t'β.

If we assume that u_t ~ N(0, σ²), then E{exp(u_t)} = exp(σ²/2). Moreover, many estimation
commands like regress provide an estimate σ̂² of σ², so for regression models that contain a
logarithmic dependent variable, we can obtain better forecasts for the dependent variable in levels if
we approximate E{exp(u_t)} as exp(σ̂²/2). Suppose we run the regression
. regress lny x1 x2 x3
. estimates store myreg
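To complete the model, we would then add the stored equation and an identity converting lny to y
in levels; a sketch along these lines (the exact form of the identity expression is our reconstruction
from the discussion that follows):

. forecast estimates myreg
. forecast identity y = exp(lny)*`=exp(e(rmse)^2/2)'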
In the second command, Stata will first evaluate the expression `=exp(e(rmse)^2/2)' and replace it
with its numerical value. After regress, the macro e(rmse) contains the square root of the estimated
error variance, so the value of this expression will be our estimate of E{exp(u_t)}. Then forecast
will forecast y as the product of this number and exp(lny). Here we had to use a macro expression
including an equals sign to force Stata to evaluate the expression immediately and obtain the
expression's value. Identities are not associated with estimation results, so as soon as we used another
estimation command or restored some other estimation results (perhaps unknowingly by invoking
forecast solve), our reference to e(rmse) would no longer be meaningful. See [U] 18.3.8 Macro
expressions for more information on macro evaluation.
Another alternative would be to use Duan's (1983) smearing technique. Stata code for this is
provided in Cameron and Trivedi (2010).
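For reference, a minimal sketch of the smearing estimator after the regression above (one simple
implementation, not the code from Cameron and Trivedi):

. regress lny x1 x2 x3
. predict double uhat, residuals
. generate double expuhat = exp(uhat)
. summarize expuhat, meanonly
. display "smearing estimate of E{exp(u_t)} = " r(mean)

The sample mean of exp(û_t) can then replace exp(σ̂²/2) in the identity for y, avoiding the
normality assumption.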
A third alternative is to use the generalized linear model (GLM) as implemented by the glm
command with a log-link function. In a GLM framework, we would be modeling ln{E(y_t)} rather
than E{ln(y_t)}, as we would be if we took logarithms of y_t and used regress, but oftentimes,
the two quantities are similar. Moreover, obtaining predicted values for y_t in the GLM does not
present the transformation problem that arises with linear regression. The forecast commands
contain special code to handle estimation
results obtained by using glm with the link(log) option, and you do not need to specify an identity
to obtain y as a function of lny. All you would need to do is
. glm y x1 x2 x3, link(log)
. estimates store myglm
. forecast estimates myglm
After we typed forecast estimates, Stata advised us that "[t]hese estimation results are no longer
needed; you can drop them." That means forecast includes code to obtain predictions from reg3
without having to call predict. forecast has recorded all the information it needs about the
estimation results stored in klein, and we could type
. estimates drop klein
For relatively small models, there is no need to use estimates drop to remove estimation results
from memory. However, Stata allows no more than 300 sets of estimation results to be in memory
at once, and forecast solve requires estimation results to be in memory (and not merely saved
on disk) before it can produce forecasts. For very large models in which that limit may bind, you
can use the advise option to determine which estimation results are needed to solve the model and
which can be dropped.
Suppose we had estimation results from a command for which forecast must call predict to
obtain predictions. Then instead of obtaining the note saying the estimation results were no longer
needed, we would obtain a note stating
. forecast estimates IUsePredict
(These estimation results are needed to solve the model.)
In that case, the estimation results would need to be in memory before calling forecast solve.
The advise option also provides an indication of how quickly forecasts can be produced from
the model. Models for which forecast never needs to call predict can be solved much more
quickly than models that include equations for which forecast must restore estimation results and
call predict to obtain predictions.
If you do not specify a file extension, forecast estimates assumes the file ends in .ster. You
are more likely to save your estimation results on your computer's disk drive rather than a web server,
but in either case, this example shows that you can fit equations in one session of Stata, save the
results to disk, and then build your forecast model later.
The estimates save command allows you to save multiple estimation results to the same file and
numbers them sequentially starting at 1. You can use the number() option of forecast estimates
using to specify which set of estimation results from the specified file you wish to add to the forecast
model. If you do not specify number(), forecast estimates using uses the first set of results.
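For instance, a sketch with hypothetical equations saved to a file named eqs.ster:

. regress y1 x1
. estimates save eqs, replace
. regress y2 x2
. estimates save eqs, append
. forecast estimates using eqs, number(2)

The final command adds the equation for y2, the second set of results in eqs.ster, to the forecast
model.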
When you use forecast estimates using, forecast loads the estimation results from disk
and stores them in memory using a temporary name. Later, when you proceed to solve your model,
forecast checks to see whether those estimation results are still in memory. If not, it will attempt
to reload them from the file you had specified. You should therefore not move or rename estimation
result files between the time you add them to your model and the time you solve the model.
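Suppose, for example, that one equation of the model is a tobit regression of y2 on x with censoring
below at zero and that we want forecast to use the expected value of y2 conditional on its being
positive; a sketch with hypothetical names:

. tobit y2 x, ll(0)
. estimates store y2eq
. forecast estimates y2eq, predict(e(0,.))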
Now, whenever forecast calls predict with those estimation results, it will pass the option e(0,.)
so that we obtain the appropriate predictions. If you are adding results from a multiple-equation
estimation command with k dependent variables, then you must specify k predict options within
the predict() option, separated by spaces.
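The forecast below was produced by solving a small model whose single stochastic equation was fit
by arima to the wpi series; the solve call looked along these lines (a sketch, with begin() inferred
from the output):

. forecast solve, prefix(f_) begin(tq(1988q1))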
Starting period:   1988q1
Ending period:     1990q4
Forecast prefix:   f_
1988q1: .............
1988q2: ...............
1988q3: ...............
(output omitted )
1990q4: ............
Forecast 2 variables spanning 12 periods.
Because our entire forecast model consists of a single equation fit by arima, we can also call predict
to obtain forecasts:
. predict a_wpi, y dynamic(tq(1988q1))
(5 missing values generated)
. list t f_wpi a_wpi in -5/l
            t      f_wpi      a_wpi

120.   1989q4   110.2182   110.2182
121.   1990q1   111.6782   111.6782
122.   1990q2   112.9945   112.9945
123.   1990q3   114.3281   114.3281
124.   1990q4   115.5142   115.5142
Looking at the last few observations in the dataset, we see that the forecasts produced by forecast
(f_wpi) match those produced by predict (a_wpi). Of course, the advantage of forecast is that
we can combine multiple sets of estimation results and obtain forecasts for an entire system of
equations.
Technical note
Do not add estimation results to your forecast model that you have stored after calling an estimation
command with the by: prefix. The stored estimation results will contain information from only the
last group on which the estimation command was executed. forecast will then use those results for
all observations in the forecast horizon regardless of the value of the group variable you specified
with by:.
References
Cameron, A. C., and P. K. Trivedi. 2010. Microeconometrics Using Stata. Rev. ed. College Station, TX: Stata Press.
Duan, N. 1983. Smearing estimate: A nonparametric retransformation method. Journal of the American Statistical
Association 78: 605–610.
Klein, L. R. 1950. Economic Fluctuations in the United States 1921–1941. New York: Wiley.
Also see
[TS] forecast Econometric model forecasting
[R] estimates Save and manipulate estimation results
[R] predict Obtain predictions, residuals, etc., after estimation
Title
forecast exogenous Declare exogenous variables
Syntax
Description
Also see
Syntax
forecast exogenous varlist
Description
forecast exogenous declares exogenous variables in the current forecast model.
Example 1
Here we fit a simple single-equation dynamic model with two exogenous variables, x1 and x2:
. use http://www.stata-press.com/data/r13/forecastex1
. quietly regress y L.y x1 x2
. estimates store exregression
. forecast create myexample
Forecast model myexample started.
. forecast estimates exregression
Added estimation results from regress.
Forecast model myexample now contains 1 endogenous variable.
. forecast exogenous x1
Forecast model myexample now contains 1 declared exogenous variable.
. forecast exogenous x2
Forecast model myexample now contains 2 declared exogenous variables.
Also see
[TS] forecast Econometric model forecasting
Title
forecast identity Add an identity to a forecast model
Syntax
Also see
Description
Options
Stored results
Syntax
forecast identity varname = exp [, options]

options       Description
  generate    create new variable varname
  double      store new variable as a double instead of as a float
Description
forecast identity adds an identity to the forecast model currently in memory. You must
first create a new model using forecast create before you can add an identity with forecast
identity. An identity is a nonstochastic equation that expresses an endogenous variable in the model
as a function of other variables in the model. Identities often describe the behavior of endogenous
variables that are based on accounting identities or adding-up conditions.
Options
generate specifies that the new variable varname be created equal to exp for all observations in the
current dataset. By default, forecast identity exits with an error if varname does not exist.
double, for use in conjunction with the generate option, requests that the new variable be created
as a double. By default, the new variable is created as a float. See [D] data types.
forecast identity registers varname as an endogenous variable in your forecast model that is
equal to exp, where exp is a valid Stata expression that is typically a function of other endogenous
variables and exogenous variables in your model and perhaps lagged values of varname as well.
forecast identity was used in all the examples in [TS] forecast.
Typically, you use forecast identity to define the relationship that determines an endogenous
variable that is already in your dataset. For example, in example 1 of [TS] forecast, we used forecast
identity to define total wages as the sum of government and private-sector wages, and the total
wage variable already existed in our dataset.
The generate option of forecast identity is useful when you wish to use a transformation of
one or more endogenous variables as a right-hand-side variable in a stochastic equation that describes
another endogenous variable. For example, say you want to use regress to model variable y as
a function of the ratio of two endogenous variables, u and w, as well as other covariates. Without
the generate option of forecast identity, you would have to define the ratio variable
twice: first, you would have to use the generate command to create the variable before fitting your
regression model, and then you would have to use forecast identity to add an identity to your
forecast model defining that variable in terms of u and w. Assuming you have already created your forecast
model, the generate option allows you to define the ratio variable just once, before you fit the
regression equation. In this example, the ratio variable is easy enough to specify twice, but it is very
easy to forget to include identities that define regressors used in estimation results while building
large forecast models. In other cases, an endogenous variable may be a more complicated function of
other endogenous variables, so having to specify the function only once reduces the chance for error.
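A minimal sketch of this workflow (variable, equation, and model names are hypothetical):

. forecast create mymodel
. forecast identity uwratio = u/w, generate
. regress y uwratio x1
. estimates store yeq
. forecast estimates yeq

The generate option creates uwratio in the dataset so that it can appear as a regressor and, at the
same time, registers it as an endogenous variable of the model.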
Stored results
forecast identity stores the following in r():
Macros
  r(lhs)          left-hand-side (endogenous) variable
  r(rhs)          right-hand side of the identity
  r(basenames)    base names of variables appearing in exp
  r(fullnames)    full names of variables appearing in exp, including any time-series operators
Also see
[TS] forecast Econometric model forecasting
Title
forecast list List forecast commands composing current model
Syntax
Description
Options
Reference
Also see
Syntax
forecast list [, options]

options                        Description
  saving(filename[, replace])  save list of commands to file
  notrim                       do not remove extraneous white space

Description
forecast list produces a list of forecast commands issued since the current model was
started.
Options
saving(filename[, replace]) requests that forecast list write the list of commands to a do-file
named filename. If no extension is specified, .do is assumed. If filename already exists, an error is
issued unless you specify replace, in which case the file is overwritten.
notrim requests that forecast list not remove any extraneous spaces and that commands be
shown exactly as they were originally entered. By default, superfluous white space is removed.
Suppose you use forecast adjust to perform some policy simulations and then decide you want
to remove those adjustments from the model. forecast list makes this easy to do. You simply
call forecast list with the saving() option to produce a do-file that contains all the forecast
commands issued since the model was created. Then you can edit the do-file to remove the forecast
adjust command, type forecast clear, and run the do-file.
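In outline, the workflow looks like this (the filename is hypothetical):

. forecast list, saving(mymodel)
. forecast clear
. * edit mymodel.do to delete the forecast adjust command, then
. do mymodel.do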
The forecast solve command is not included in output produced by forecast list because
solving the model does not add any features to the model.
Technical note
To prevent you from accidentally destroying the model in memory, forecast list does not add
the replace option to forecast create even if you specified replace when you originally called
forecast create.
Reference
Klein, L. R. 1950. Economic Fluctuations in the United States 1921–1941. New York: Wiley.
Also see
[TS] forecast Econometric model forecasting
Title
forecast query Check whether a forecast model has been started
Syntax
Description
Stored results
Also see
Syntax
forecast query
Description
forecast query issues a message indicating whether a forecast model has been started.
. forecast create fcmodel
Forecast model fcmodel started.
. forecast query
Forecast model fcmodel exists.
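Because the results are stored in r(), a do-file can branch on them; continuing from the state above
(a sketch):

. display r(found)
1
. display "`r(name)'"
fcmodel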
Stored results
forecast query stores the following in r():
Scalars
  r(found)    1 if a forecast model has been started, 0 otherwise
Macros
  r(name)     name of the forecast model, if one has been started
Also see
[TS] forecast Econometric model forecasting
[TS] forecast describe Describe features of the forecast model
Title
forecast solve Obtain static and dynamic forecasts
Syntax
Stored results
Description
Methods and formulas
Options
References
Syntax
forecast solve
options
Model
prefix(string)
suffix(string)
begin(time constant)
end(time constant)
periods(#)
double
static
actuals
prefix(stub) | suffix(stub)
options
Description
specify prefix for forecast variables
specify suffix for forecast variables
specify period to begin forecasting
specify period to end forecasting
specify number of periods to forecast
store forecast variables as doubles instead of as floats
produce static forecasts instead of dynamic forecasts
use actual values if available instead of forecasts
Simulation
log(log level)
Solver
vtolerance(#)
ztolerance(#)
iterate(#)
technique(technique)
Description
betas
errors
residuals
You can specify one or two sim methods separated by a space, though you cannot specify both errors and residuals.
221
222
sim statistic is
statistic(statistic,
prefix(string) | suffix(string) )
Description
mean
variance
stddev
sim options
Description
saving(filename, . . .)
nodots
reps(#)
Description
forecast solve computes static or dynamic forecasts based on the model currently in memory.
Before you can solve a model, you must first create a new model using forecast create and add
equations and variables to it using the commands summarized in [TS] forecast.
Options
Model
prefix(string) and suffix(string) specify a name prefix or suffix that will be used to name the
variables holding the forecast values of the variables in the model. You may specify prefix() or
suffix() but not both. Sometimes, it is more convenient to have all forecast variables start with
the same set of characters, while other times, it is more convenient to have all forecast variables
end with the same set of characters.
If you specify prefix(f_), then the forecast values of endogenous variables x, y, and z will be
stored in new variables f_x, f_y, and f_z.
If you specify suffix(_g), then the forecast values of endogenous variables x, y, and z will be
stored in new variables x_g, y_g, and z_g.
begin(time constant) requests that forecast begin forecasting at period time constant. By default,
forecast determines when to begin forecasting automatically.
end(time constant) requests that forecast end forecasting at period time constant. By default,
forecast produces forecasts for all periods on or after begin() in the dataset.
periods(#) specifies the number of periods after begin() to forecast. By default, forecast
produces forecasts for all periods on or after begin() in the dataset.
double requests that the forecast and simulation variables be stored in double precision. The default
is to use single-precision floats. See [D] data types for more information.
static requests that static forecasts be produced. Actual values of variables are used wherever
lagged values of the endogenous variables appear in the model. By default, dynamic forecasts are
produced, which use the forecast values of variables wherever lagged values of the endogenous
variables appear in the model. Static forecasts are also called one-step-ahead forecasts.
actuals specifies how nonmissing values of endogenous variables in the forecast horizon are treated.
By default, nonmissing values are ignored, and forecasts are produced for all endogenous variables.
When you specify actuals, forecast sets the forecast values equal to the actual values if they
are nonmissing. The forecasts for the other endogenous variables are then conditional on the known
values of the endogenous variables with nonmissing data.
Simulation
simulate(sim technique, sim statistic sim options) allows you to simulate your model to obtain
measures of uncertainty surrounding the point forecasts produced by the model. Simulating a
model involves repeatedly solving the model, each time accounting for the uncertainty associated
with the error terms and the estimated coefficient vectors.
sim technique can be betas, errors, or residuals, or you can specify both betas and one of
errors or residuals separated by a space. You cannot specify both errors and residuals.
The sim technique controls how uncertainty is introduced into the model.
sim statistic specifies a summary statistic to summarize the forecasts over all the simulations.
sim statistic takes the form
statistic(statistic, { prefix(string) | suffix(string) })
where statistic may be mean, variance, or stddev. You may specify either the prefix or the
suffix that will be used to name the variables that will contain the requested statistic. You
may specify up to three sim statistics, allowing you to track the mean, variance, and standard
deviations of your forecasts.
sim options include saving(filename[, suboptions]), nodots, and reps(#).
saving(filename[, suboptions]) creates a Stata data file (.dta file) consisting of (for each
endogenous variable in the model) a variable containing the simulated values.
double specifies that the results for each replication be saved as doubles, meaning 8-byte reals.
By default, they are saved as floats, meaning 4-byte reals.
replace specifies that filename be overwritten if it exists.
every(#) specifies that results be written to disk every #th replication. every() should be
specified only in conjunction with saving() when the command takes a long time for each
replication. This will allow recovery of partial results should some other software crash your
computer. See [P] postfile.
nodots suppresses display of the replication dots. By default, one dot character is displayed for
each successful replication. If during a replication convergence is not achieved, forecast
solve exits with an error message.
reps(#) requests that forecast solve perform # replications; the default is reps(50).
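For example, a call along these lines records both the mean and the standard deviation of the
simulated forecasts (the prefixes are arbitrary):

. forecast solve, prefix(f_) simulate(betas, statistic(mean, prefix(m_)) statistic(stddev, prefix(sd_)) reps(200))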
Reporting
log(log level) specifies the level of logging provided while solving the model. log level may be
detail, on, brief, or off.
log(detail) provides a detailed iteration log including the current values of the convergence
criteria for each period in each panel (in the case of panel data) for which the model is being
solved.
log(on), the default, provides an iteration log showing the current panel and period for which
the model is being solved as well as a sequence of dots for each period indicating the number of
iterations.
log(brief), when used with a time-series dataset, is equivalent to log(on). When used with a
panel dataset, log(brief) produces an iteration log showing the current panel being solved but
does not show which period within the current panel is being solved.
log(off) requests that no iteration log be produced.
Solver
vtolerance(#), ztolerance(#), and iterate(#) control when the solver of the system of
equations stops. ztolerance() is ignored if either technique(dampedgaussseidel #) or
technique(gaussseidel) is specified. These options are seldom used. See [M-5] solvenl( ).
technique(technique) specifies the technique to use to solve the system of equations. technique
may be dampedgaussseidel #, gaussseidel, broydenpowell, or newtonraphson, where
0 < # < 1 specifies the amount of damping with smaller numbers indicating less damping.
The default is technique(dampedgaussseidel 0.2), which works well in most situations.
If you have convergence issues, first try continuing to use dampedgaussseidel # but with a
larger damping factor. Techniques broydenpowell and newtonraphson usually work well, but
because they require the computation of numerical derivatives, they tend to be much slower. See
[M-5] solvenl( ).
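For instance, if the default does not converge, a sketch of a call with a larger damping factor:

. forecast solve, prefix(f_) technique(dampedgaussseidel 0.5)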
Equation           Parms      RMSE     R-sq       chi2     P>chi2
------------------------------------------------------------------
gdp                    5    1.88516   0.1820   18.91318   0.0008
oil                    5    11.8776   0.1140   10.93614   0.0273
------------------------------------------------------------------

No. of obs =       85
AIC        = 12.00176
HQIC       = 12.11735
SBIC       = 12.28913

------------------------------------------------------------------------------
             |      Coef.   Std. Err.      z    P>|z|     [95% Conf. Interval]
-------------+----------------------------------------------------------------
gdp          |
         gdp |
         L1. |   .1498285   .1015076     1.48   0.140    -.0491227    .3487797
         L2. |   .3465238   .1022446     3.39   0.001      .146128    .5469196
             |
         oil |
         L1. |  -.0374609   .0167968    -2.23   0.026     -.070382   -.0045399
         L2. |   .0119564   .0164599     0.73   0.468    -.0203043    .0442172
             |
       _cons |   1.519983   .4288145     3.54   0.000     .6795226    2.360444
-------------+----------------------------------------------------------------
oil          |
         gdp |
         L1. |   .8102233   .6395579     1.27   0.205    -.4432871    2.063734
         L2. |   1.090244   .6442017     1.69   0.091    -.1723684    2.352856
             |
         oil |
         L1. |   .0995271   .1058295     0.94   0.347    -.1078949    .3069491
         L2. |  -.1870052    .103707    -1.80   0.071    -.3902672    .0162568
             |
       _cons |  -4.041859   2.701785    -1.50   0.135     -9.33726    1.253543
------------------------------------------------------------------------------
The dataset ends in the fourth quarter of 2007, so before we can produce forecasts for 2008 and
beyond, we need to extend our dataset. We can do that using the tsappend command. Here we
extend our dataset three years:
. tsappend, add(12)
To see how GDP evolves if oil prices increase 10% in each of the first three quarters of 2008
and then remain flat, we need to obtain a forecast for gdp conditional on a specified path for
oil. The actuals option of forecast solve will do that for us. With the actuals option, if an
endogenous variable contains a nonmissing value for the period currently being forecast, forecast
solve will use that value as the forecast, overriding whatever value might be produced by that
variable's underlying estimation result or identity. Then the endogenous variables with missing values
will be forecast conditional on the endogenous variables that do have valid data. Here we fill in oil
with our hypothesized price path:
. replace oil = 10 if qdate == tq(2008q1)
(1 real change made)
. replace oil = 10 if qdate == tq(2008q2)
(1 real change made)
. replace oil = 10 if qdate == tq(2008q3)
(1 real change made)
. replace oil = 0 if qdate > tq(2008q3)
(9 real changes made)
Now we obtain forecasts conditional on our oil variable. We will use the prefix alt_ for these
forecast variables:
. forecast solve, prefix(alt_) actuals
Computing dynamic forecasts for model oilmodel.
Starting period:   2008q1
Ending period:     2010q4
Forecast prefix:   alt_
2008q1: ...............
(output omitted )
2010q4: ...........
Forecast 2 variables spanning 12 periods.
Forecasts used actual values if available.
Finally, we make a variable containing the difference between our alternative and our baseline gdp
forecasts and graph it:
. generate diff_gdp = alt_gdp - bl_gdp
(graph omitted: diff_gdp over the forecast horizon)
Our model indicates GDP growth would be about 0.4% less in the second through fourth quarters of
2008 than it would otherwise be, but would be mostly unaffected thereafter if oil prices followed our
hypothetical path. The one-quarter lag in the response of GDP is due to our using a VAR model. In
our VAR model, lagged values of oil predict the current value of gdp, but the current value of oil
does not.
Technical note
The previous example allowed us to demonstrate forecast solve's actuals option, but in fact
measuring the economy's response to oil shocks is much more difficult than our simple VAR analysis
would suggest. One obvious complication is that positive and negative oil price shocks do not have
symmetric effects on the economy. In our simple model, if a 50% increase in oil prices lowers GDP
by x%, then a 50% decrease in oil prices must raise GDP by x%. However, a 50% decrease in oil
prices is perhaps more likely to portend weakness in the economy rather than an imminent growth
spurt. See, for example, Hamilton (2003) and Kilian and Vigfusson (2013).
Another way to specify alternative scenarios for your forecasts is to use the forecast adjust
command. That command is more flexible in the types of manipulations you can perform on endogenous
variables but, depending on the task at hand, may involve more effort. The actuals option of
forecast solve and the forecast adjust command are complementary. There is much overlap
in what you can achieve; in some situations, specifying the actuals option will be easier, while in
other situations, using adjustments via forecast adjust will prove to be easier.
Consider a simple model in which y_t depends on x_t:

    y_t = α + β x_t + ε_t    (1)

Our forecast of y for period T+1 is then

    ŷ_{T+1} = α̂ + β̂ x_{T+1}    (2)
However, there are several factors that prevent us from guaranteeing ex ante that y_{T+1} will indeed
equal ŷ_{T+1}. We must assume that (1) specifies the correct relationship between y and x. Even if that
relationship held for times 1 through T, are we sure it will hold at time T+1? Uncertainty due to
issues like those is inherent to the type of forecasting that the forecast commands are designed for.
Here we discuss two additional sources of uncertainty that forecast solve can help you measure.

First, we estimated α and β by OLS to obtain α̂ and β̂, but we must emphasize the word estimated.
Our estimates are subject to sampling error. When you fit a regression using regress or any other
estimation command, Stata presents not just the point estimates of the parameters but also the standard
errors and confidence intervals representing the level of uncertainty surrounding those point estimates.
Uncertainty surrounding the true values of α and β means that there is some level of uncertainty
surrounding our predicted value ŷ_{T+1} as well.

Second, (1) states that y_t depends not just on α, β, and x_t but also on an unobserved error term
ε_t. When we make our forecast using (2), we assume that the error term will equal its expected value
of zero. Saying a random error has an expected value of zero is clearly not the same as saying it
will be zero every time. If a positive outside shock occurs at T+1, y_{T+1} will be higher than our
estimate based on (2) would lead us to believe.
Fortunately, quantifying both these sources of uncertainty is straightforward using simulation. First,
we solve our model as usual, providing us with our point forecasts. To see how uncertainty surrounding
our estimated parameters affects our forecasts, we can take random draws from a multivariate normal
distribution whose mean is (α̂, β̂)′ and whose variance is the covariance matrix produced by regress.
We then solve our model using these randomly drawn parameters rather than the original point
We then solve our model using these randomly drawn parameters rather than the original point
estimates. If we repeat the process of drawing random parameters and solving the model many times,
we can use the variance or standard deviation across replications for each time period as a measure
of uncertainty.
To account for uncertainty surrounding the error term, we can also use simulation. Here, at each
replication, we add a random noise term to our forecast for y_{T+1}, where we draw our random errors
such that they have the same characteristics as ε_t. There are two ways we can do that. First, all the
estimation commands commonly used in forecasting provide us with an estimate of the variance or
standard deviation of the error term. For example, regress labels the estimated standard deviation
of the error term Root MSE and conveniently saves it in a macro that forecast can access. If
we are willing to assume that all the errors in the equations in our model are normally distributed,
then we can use random-normal errors drawn with means equal to zero and variances as reported by
the estimation command used to fit each equation.
Sometimes the assumption of normality is unpalatable. In those cases, an alternative is to solve the
model to obtain static forecasts and then compute the sample residuals based on the observations for
which we have nonmissing values of the endogenous variables. Then in our simulations, we randomly
choose one of the residuals observed for that equation.
At each replication, whether we draw errors based on the normal errors or from the pool of
static-forecast residuals, we add the drawn value to our estimate of ybT +1 to provide a simulated value
for our forecast. Then, just like when simulating parameter uncertainty, we can use the variance or
standard deviation across replications to measure uncertainty. In fact, we can perform simulations that
draw both random parameters and random errors to account for both sources of uncertainty at once.
. forecast identity p = y - t - wp
Forecast model kleinmodel now contains 5 endogenous variables.
. forecast identity k = L.k + i
Forecast model kleinmodel now contains 6 endogenous variables.
. forecast identity w = wg + wp
Forecast model kleinmodel now contains 7 endogenous variables.
. forecast exogenous wg
Forecast model kleinmodel now contains 1 declared exogenous variable.
. forecast exogenous g
Forecast model kleinmodel now contains 2 declared exogenous variables.
. forecast exogenous t
Forecast model kleinmodel now contains 3 declared exogenous variables.
. forecast exogenous yr
Forecast model kleinmodel now contains 4 declared exogenous variables.
Now we are ready to solve our model. We are going to begin dynamic forecasts in 1936, and we
are going to perform 100 replications. We will store the point forecasts in variables prefixed with d ,
and we will store the standard deviations of our forecasts in variables prefixed with sd . Because
the simulations involve the use of random numbers, we must remember to set the random-number
seed if we want to be able to replicate our results; see [R] set seed. We type
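A call along the following lines does this (a sketch reconstructed from the description below; the
seed value is arbitrary):

. set seed 1
. forecast solve, prefix(d_) begin(1936) simulate(betas, statistic(stddev, prefix(sd_)) reps(100))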
The key here is the simulate() option. We requested that forecast solve perform 100 simulations
by taking random draws for the parameters (betas), and we requested that it record the standard
deviation (stddev) of each endogenous variable in new variables that begin with sd . Next we
compute the upper and lower bounds of a 95% prediction interval for our forecast of total income y:
. gen d_y_up = d_y + invnormal(0.975)*sd_y
(16 missing values generated)
. gen d_y_dn = d_y + invnormal(0.025)*sd_y
(16 missing values generated)
We obtained 16 missing values after each generate because the simulation summary variables only
contain nonmissing data for the periods in which forecasts were made. The point-forecast variables
that begin with d_ in this example are filled in with the corresponding actual values of the endogenous
variables for periods before the beginning of the forecast horizon; in our experience, having both the
historical data and forecasts in one set of variables simplifies many tasks. Here we graph our forecast
of total income along with the 95% prediction interval:
(graph omitted: forecast of Total Income, 1935–1941, with 95% prediction interval)
Our next example will use the same forecast model, but we will not need the forecast variables
we just created. forecast drop makes removing those variables easy:
. forecast drop
(dropped 14 variables)
forecast drop drops all variables created by the previous invocation of forecast solve, including
both the point-forecast variables and any variables that contain simulation results. In this case,
forecast drop will remove all the variables that begin with sd_ as well as d_y, d_c, d_i, and
so on. However, we are not done yet. We created the variables d y dn and d y up ourselves, and
they were not part of the forecast model. Therefore, they are not removed by forecast drop, and
we need to do that ourselves:
. drop d_y_dn d_y_up
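This time we solve the model accounting for both sources of uncertainty; a sketch (again, the seed
value is arbitrary):

. set seed 1
. forecast solve, prefix(d_) begin(1936) simulate(betas errors, statistic(stddev, prefix(sd_)) reps(100))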
The only difference between this call to forecast solve and the one in the previous example is that
here we specified betas errors in the simulate() option rather than just betas. Had we wanted
to perform simulations involving the parameters and random draws from the pool of static-forecast
residuals rather than random-normal errors, we would have specified betas residuals. After we
re-create the variables containing the bounds on our prediction interval, we obtain the following graph:
(graph omitted: forecast of Total Income, 1935–1941, with the wider 95% prediction interval)
Notice that by accounting for both parameter and additive error uncertainty, our prediction interval
became much wider.
Stored results
forecast solve stores the following in r():
Scalars
r(first_obs)
r(last_obs)
r(Npanels)
r(Nvar)
r(vtolerance)
r(ztolerance)
r(iterate)
r(sim_nreps)
r(damping)
Macros
r(prefix)
r(suffix)
r(actuals)
r(static)
r(double)
r(sim_technique)
r(logtype)
    y_{it} = f_i(y_{−i,t}, x_t) + u_{it}    (3)

where i = 1, . . . , k and y_{−i,t} refers to the (k − 1) × 1 vector of endogenous variables other than y_i
at time t. If equation j is an identity, we take u_{jt} = 0 for all t; for stochastic equations, the errors
correspond to the usual regression error terms. Equation (3) does not include subscripts indexing
panels for notational simplicity, but the extension is obvious. A model is solvable if k ≥ 1; m may
be zero.
Endogenous variables are added to the forecast model via forecast estimates, forecast
identity, and forecast coefvector. Equations added via forecast estimates are always
stochastic, while equations added via forecast identity are always nonstochastic. Equations added
via forecast coefvector are treated as stochastic if options variance() or errorvariance()
(or both) are specified and nonstochastic if neither is specified.
Exogenous variables are declared using forecast exogenous, but the model may contain additional
exogenous variables. For example, the right-hand side of an equation may contain exogenous variables
that are not declared using forecast exogenous. Before solving the model, forecast solve
determines whether the declared exogenous variables contain missing values over the forecast horizon
and issues an informative error message if any do. Undeclared exogenous variables that contain
missing values within the forecast horizon will cause forecast solve to exit with a less-informative
error message and require the user to do more work to pinpoint the problem.
Adjustments added via forecast adjust easily fit within the framework of (3). Simply let f_i(·)
represent the value of y_{it} obtained by first evaluating the appropriate estimation result, coefficient
vector, or identity and then performing the adjustments based on that intermediate result. Endogenous
variables may have multiple adjustments; adjustments are made in the order in which they were
specified via forecast adjust. For single-equation estimation results and coefficient vectors as well
as identities, adjustments are performed right after the equation is evaluated. For multiple-equation
estimation results and coefficient vectors, adjustments are made after all the equations within that set
of results are evaluated. Suppose an estimation result that uses predict includes two left-hand-side
variables, y_{1t} and y_{2t}, and you have added two adjustments to y_{1t} and one adjustment to y_{2t}. Here
forecast solve first calls predict twice to obtain candidate values for y_{1t} and y_{2t}; then it performs
the two adjustments to y_{1t}, and finally it adjusts y_{2t}.
forecast solve offers four solution techniques: Gauss–Seidel, damped Gauss–Seidel, Broyden–Powell,
and Newton–Raphson. The Gauss–Seidel techniques are simple iterative techniques that are
often fast and typically work well, particularly when a damping factor is used. Gauss–Seidel is simply
damped Gauss–Seidel without damping (a damping factor of 0). By default, damped Gauss–Seidel
with a damping factor of 0.2 is used, representing a small amount of damping. As Fair (1984, 250)
notes, while these techniques often work well, there is no guarantee that they will converge. Technique
Newton–Raphson typically works well but is slow because it requires the use of numerical derivatives at
every iteration to obtain a Jacobian matrix. The Broyden–Powell (Broyden 1970; Powell 1970) method
is analogous to quasi-Newton methods used for function optimization in that an updating method is
used at each iteration to update an estimate of the Jacobian matrix rather than actually recalculating
it. For additional details as well as a discussion of the convergence criteria, see [M-5] solvenl( ).
If you do not specify the begin() option, forecast solve uses the following algorithm to select
the starting time period. Suppose the time variable t runs from 1 to T . If, at time T , none of the
endogenous variables contains missing values, forecast solve exits with an error message: there
are no periods in which the endogenous variables are not known; therefore, there are no periods
where a forecast is obviously required. Otherwise, consider period T − 1. If none of the endogenous
variables contains missing values in that period, then the only period to forecast is T . Otherwise,
work back through time to find the latest period in which all of the endogenous variables contain
nonmissing values and then begin forecasting in the subsequent period. In the case of panel datasets,
the same algorithm is applied to each panel, and forecasts for all panels begin on the earliest period
selected.
When you specify the simulate() option with sim technique betas, forecast solve draws
random vectors from the multivariate normal distribution for each estimation result individually.
The mean and variance are based on the estimation result's e(b) and e(V) macros, respectively.
If the estimation result is from a multiple-equation estimator, the corresponding Stata command
stores in e(b) and e(V) the full parameter vector and covariance matrix for all equations so that
forecast solve's simulations will account for covariances among parameters in that estimation
result's equations. However, covariances among parameters that appear in different estimation results
are taken to be zero.
If you specify a coefficient vector using forecast coefvector and specify a variance matrix in
the variance() option, then those coefficient vectors are simulated just like the parameter vectors
from estimation results. If you do not specify the variance() option, then the coefficient vector is
assumed to be nonstochastic and therefore is not simulated.
When you specify the simulate() option with sim technique residuals, forecast solve
first obtains static forecasts from your model for all possible periods. For each endogenous variable
defined by a stochastic equation, it then computes residuals as the forecast value minus the actual
value for all observations with nonmissing data. At each replication and for each period in the forecast
horizon, forecast solve randomly selects one element from each stochastic equation's pool of
residuals before solving the model for that replication and period. Then whenever forecast solve
evaluates a stochastic equation, it adds the chosen element to the predicted value for that equation.
Suppose an estimation result represents a multiple-equation estimator with m equations, and suppose
that there are n time periods for which sample residuals are available. Arrange the residuals into the
n × m matrix R. Then when forecast solve is randomly selecting residuals for this estimation
result, it will choose a random number j between 1 and n and select the entire jth row from R.
That preserves the correlation structure among the error terms of the estimation result's equations.
If you specify a coefficient vector using forecast coefvector and specify either the variance()
option or the errorvariance() option (or both), sim technique residuals considers the equation
represented by the coefficient vector to be stochastic and resamples residuals for that equation.
When you specify the simulate() option with sim technique errors, forecast solve, for
each stochastic equation, replication, and period, takes a random draw from a multivariate normal
distribution with zero mean before solving the model for that replication and period. Then whenever
forecast solve evaluates a stochastic equation, it adds that random draw to the predicted value
for that equation. The variance of the distribution from which errors are drawn is based on the
estimation results for that equation. The forecast commands look in e(rmse), e(sigma), and
e(Sigma) to find the estimated variance. If you add an estimation result that does not set any of those
three macros and you request sim technique errors, forecast solve exits with an error message.
Multiple-equation commands typically set e(Sigma) so that the randomly drawn errors reflect the
estimated error correlation structure.
If you specify a coefficient vector using forecast coefvector and specify the errorvariance()
option, sim technique errors simulates errors for that equation. Otherwise, the equation is treated
like an identity and no errors are added.
forecast solve solves panel-data models by solving for all periods in the forecast horizon for
the first panel in the dataset, then the second panel, and so on. When you perform simulations with
panel datasets, one replication is completed for all panels in the dataset before moving to the next
replication. Simulations that include residual resampling select residuals from the pool containing
residuals for all panels; forecast solve does not restrict itself to the static-forecast residuals for a
single panel when simulating that panel.
References
Broyden, C. G. 1970. Recent developments in solving nonlinear algebraic systems. In Numerical Methods for Nonlinear
Algebraic Equations, ed. P. Rabinowitz, 61–73. London: Gordon and Breach Science Publishers.
Fair, R. C. 1984. Specification, Estimation, and Analysis of Macroeconometric Models. Cambridge, MA: Harvard
University Press.
Hamilton, J. D. 2003. What is an oil shock? Journal of Econometrics 113: 363–398.
Kilian, L., and R. J. Vigfusson. 2013. Do oil prices help forecast U.S. real GDP? The role of nonlinearities and
asymmetries. Journal of Business and Economic Statistics 31: 78–93.
Klein, L. R. 1950. Economic Fluctuations in the United States 1921–1941. New York: Wiley.
Powell, M. J. D. 1970. A hybrid method for nonlinear equations. In Numerical Methods for Nonlinear Algebraic
Equations, ed. P. Rabinowitz, 87–114. London: Gordon and Breach Science Publishers.
Also see
[TS] forecast Econometric model forecasting
[TS] forecast adjust Adjust a variable by add factoring, replacing, etc.
[TS] forecast drop Drop forecast variables
[R] set seed Specify initial value of random-number seed
Title
irf Create and analyze IRFs, dynamic-multiplier functions, and FEVDs
Syntax
Description
References
Also see
Syntax
irf subcommand . . .
, ...
subcommand
Description
create
set
graph
cgraph
ograph
table
ctable
describe
add
drop
rename
IRF stands for impulse–response function; FEVD stands for forecast-error variance decomposition.
irf can be used only after var, svar, vec, arima, or arfima; see [TS] var, [TS] var svar, [TS] vec,
[TS] arima, and [TS] arfima.
See [TS] irf create, [TS] irf set, [TS] irf graph, [TS] irf cgraph, [TS] irf ograph, [TS] irf table,
[TS] irf ctable, [TS] irf describe, [TS] irf add, [TS] irf drop, and [TS] irf rename for details about
subcommands.
Description
irf creates and manipulates IRF files that contain estimates of the IRFs, dynamic-multiplier
functions, and forecast-error variance decompositions (FEVDs) created after estimation by var, svar,
or vec; see [TS] var, [TS] var svar, or [TS] vec.
irf creates and manipulates IRF files that contain estimates of the IRFs created after estimation
by arima or arfima; see [TS] arima or [TS] arfima.
IRFs and FEVDs are described below, and the process of analyzing them is outlined. After reading
this entry, please see [TS] irf create.
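The graph below came from commands along these lines, a sketch using the Lütkepohl dataset that
also appears in [TS] irf add (the model options are assumptions):

. use http://www.stata-press.com/data/r13/lutkepohl2
. var dln_inv dln_inc dln_consump if qtr<=tq(1978q4), lags(1/2) dfk
. irf create order1, step(10) set(myirf1)
. irf graph oirf, impulse(dln_inc) response(dln_consump)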
(graph omitted: orthogonalized IRF of dln_consump to an impulse in dln_inc, with 95% CI, steps 0–10)
Multiple sets of IRFs and FEVDs can be placed in the same file, with each set of results in a
file bearing a distinct name. The irf create command above created file myirf1.irf and put
one set of results in it, named order1. The order1 results include estimates of the simple IRFs,
orthogonalized IRFs, cumulative IRFs, cumulative orthogonalized IRFs, and Cholesky FEVDs.
Below we use the same estimated var but use a different Cholesky ordering to create a second set
of IRF results, which we will save as order2 in the same file, and then we will graph both results:
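In sketch form (the alternative ordering shown is an assumption):

. irf create order2, order(dln_inc dln_inv dln_consump) step(10)
. irf graph oirf, irf(order1 order2) impulse(dln_inc) response(dln_consump)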
(graph omitted: orthogonalized IRFs for order1 and order2, each with 95% CI, steps 0–10)
We have compared results for one model under two different identification schemes. We could just
as well have compared results of two different models. We now use irf table to display the results
tabularly:
. irf table oirf, irf(order1 order2) impulse(dln_inc) response(dln_consump)
Results from order1 order2

                (1)       (1)       (1)       (2)       (2)       (2)
step           oirf     Lower     Upper      oirf     Lower     Upper
----------------------------------------------------------------------
0           .004934   .003016   .006852   .005244   .003252   .007237
1           .001309  -.000931   .003549   .001235  -.001011   .003482
2           .003573   .001285   .005862    .00391   .001542   .006278
3          -.000692  -.002333    .00095  -.000677  -.002347   .000993
4           .000905  -.000541   .002351    .00094  -.000576   .002456
5           .000328    -.0005   .001156   .000341  -.000518   .001201
6           .000021  -.000675   .000717   .000042  -.000693   .000777
7           .000154  -.000206   .000515   .000161  -.000218    .00054
8           .000026  -.000248     .0003   .000027  -.000261   .000315
9           .000026  -.000121   .000174    .00003  -.000125   .000184
10          .000026  -.000061   .000113   .000027  -.000065    .00012
----------------------------------------------------------------------
(1) irfname = order1, impulse = dln_inc, and response = dln_consump
(2) irfname = order2, impulse = dln_inc, and response = dln_consump
Both the table and the graph show that the two orthogonalized IRFs are essentially the same. In both
functions, an increase in the orthogonalized shock to dln inc causes a short series of increases in
dln consump that dies out after four or five periods.
References
Becketti, S. 2013. Introduction to Time Series Using Stata. College Station, TX: Stata Press.
Hamilton, J. D. 1994. Time Series Analysis. Princeton: Princeton University Press.
Lütkepohl, H. 1993. Introduction to Multiple Time Series Analysis. 2nd ed. New York: Springer.
Lütkepohl, H. 2005. New Introduction to Multiple Time Series Analysis. New York: Springer.
Also see
[TS] arfima Autoregressive fractionally integrated moving-average models
[TS] arima ARIMA, ARMAX, and other dynamic regression models
[TS] var Vector autoregressive models
[TS] var svar Structural vector autoregressive models
[TS] varbasic Fit a simple VAR and graph IRFs or FEVDs
[TS] vec Vector error-correction models
[TS] var intro Introduction to vector autoregressive models
[TS] vec intro Introduction to vector error-correction models
Title
irf add Add results from an IRF file to the active IRF file
Syntax
Remarks and examples
Menu
Also see
Description
Option
Syntax
irf add { _all | [newname =] oldname . . . } , using(irf filename)
Menu
Statistics > Multivariate time series > ...
Description
irf add copies results from one IRF file to anotherfrom the specified using() file to the active
IRF file, set by irf set; see [TS] irf set.
Option
using(irf filename) specifies the file from which results are to be obtained and is required. If
irf filename is specified without an extension, .irf is assumed.
Example 1
After fitting a VAR model, we create two separate IRF files:
. use http://www.stata-press.com/data/r13/lutkepohl2
(Quarterly SA West German macro data, Bil DM, from Lutkepohl 1993 Table E.1)
. var dln_inv dln_inc dln_consump if qtr<=tq(1978q4), lags(1/2) dfk
(output omitted )
. irf create original, set(irf1, replace)
(file irf1.irf created)
(file irf1.irf now active)
(file irf1.irf updated)
. irf create order2, order(dln_inc dln_inv dln_consump) set(irf2, replace)
(file irf2.irf created)
(file irf2.irf now active)
(file irf2.irf updated)
We copy the IRF results original to the active file, giving them the name order1.
. irf add order1 = original, using(irf1)
(file irf2.irf updated)
Here we create new IRF results and save them in the new file irf3.
. irf create ..., set(irf3)
(file irf3.irf created)
(file irf3.irf now active)
(file irf3.irf updated)
Now we copy all the IRF results in file irf2 into the active file.
. irf add _all, using(irf2)
(file irf3.irf updated)
Also see
[TS] irf Create and analyze IRFs, dynamic-multiplier functions, and FEVDs
[TS] var intro Introduction to vector autoregressive models
[TS] vec intro Introduction to vector error-correction models
Title
irf cgraph Combined graphs of IRFs, dynamic-multiplier functions, and FEVDs
Syntax
Remarks and examples
Menu
Stored results
Description
Also see
Options
Syntax
irf cgraph (spec1) [(spec2) ... (specN)] [, options]

where (speck) is

  (irfname impulsevar responsevar stat [, spec options])

irfname is the name of a set of IRF results in the active IRF file. impulsevar should be specified as an
endogenous variable for all statistics except dm and cdm; for those, specify an exogenous variable.
responsevar is an endogenous variable name. stat is one or more statistics from the list below:

stat        Description
Main
  irf       impulse–response function
  oirf      orthogonalized impulse–response function
  dm        dynamic-multiplier function
  cirf      cumulative impulse–response function
  coirf     cumulative orthogonalized impulse–response function
  cdm       cumulative dynamic-multiplier function
  fevd      Cholesky forecast-error variance decomposition
  sirf      structural impulse–response function
  sfevd     structural forecast-error variance decomposition

options             Description
Main
  set(filename)     make filename active
Options
  combine options   affect appearance of combined graph
  twoway options    any options other than by() documented in [G-3] twoway options
  individual        graph each specification individually

spec options        Description
Main
  noci              suppress graphing the confidence interval
Options
  level(#)          set confidence level; default is level(95)
  lstep(#)          use # for first step
  ustep(#)          use # for maximum step
Plots
  plot#opts(cline options)   affect rendition of the plotted statistic
CI plots
  ci#opts(area options)      affect rendition of the confidence interval
spec options may be specified within a graph specification, globally, or in both. When specified in a graph
specification, the spec options affect only the specification in which they are used. When supplied globally, the
spec options affect all graph specifications. When supplied in both places, options in the graph specification take
precedence.
Menu
Statistics > Multivariate time series > IRF and FEVD analysis > Combined graphs
Description
irf cgraph makes a graph or a combined graph of IRF results. Each block within a pair of
matching parentheseseach (speck )specifies the information for a specific graph. irf cgraph
combines these graphs into one image, unless the individual option is also specified, in which case
separate graphs for each block are created.
To become familiar with this command, we recommend that you type db irf cgraph.
Options
Main
noci suppresses graphing the confidence interval for each statistic. noci is assumed when the model
was fit by vec because no confidence intervals were estimated.
set(filename) specifies the file to be made active; see [TS] irf set. If set() is not specified, the
active file is used.
Options
level(#) specifies the default confidence level, as a percentage, for confidence intervals, when they
are reported. The default is level(95) or as set by set level; see [U] 20.7 Specifying the
width of confidence intervals. The value of an overall level() can be overridden by the
level() inside a (speck).
lstep(#) specifies the first step, or period, to be included in the graph. lstep(0) is the default.
ustep(#), # ≥ 1, specifies the maximum step, or period, to be included in the graph.
combine options affect the appearance of the combined graph; see [G-2] graph combine.
Plots
plot1opts(cline options), . . . , plot4opts(cline options) affect the rendition of the plotted statistics. plot1opts() affects the rendition of the first statistic; plot2opts(), the second; and so
on. cline options are as described in [G-3] cline options.
CI plots
ci1opts(area options) and ci2opts(area options) affect the rendition of the confidence intervals
for the first (ci1opts()) and second (ci2opts()) statistics. See [TS] irf graph for a description
of this option and [G-3] area options for the suboptions that change the look of the CI.
twoway options are any of the options documented in [G-3] twoway options, excluding by(). These
include options for titling the graph (see [G-3] title options) and for saving the graph to disk (see
[G-3] saving option).
The following option is available with irf cgraph but is not shown in the dialog box:
individual specifies that each graph be displayed individually. By default, irf cgraph combines
the subgraphs into one image.
Example 1
You have previously issued the commands
. use http://www.stata-press.com/data/r13/lutkepohl2
. mat a = (., 0, 0\0,.,0\.,.,.)
. mat b = I(3)
. svar dln_inv dln_inc dln_consump, aeq(a) beq(b)
. irf create modela, set(results3) step(8)
. svar dln_inc dln_inv dln_consump, aeq(a) beq(b)
. irf create modelb, step(8)
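The combined graph below was then produced by a single irf cgraph call along these lines (a
sketch; display options are omitted):

. irf cgraph (modela dln_inc dln_consump oirf sirf)
>            (modelb dln_inc dln_consump oirf sirf)
>            (modela dln_inc dln_consump fevd sfevd)
>            (modelb dln_inc dln_consump fevd sfevd)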
(combined graph omitted: oirf and sirf panels and fevd and sfevd panels for modela and modelb, steps 0–8)
Stored results
irf cgraph stores the following in r():
Scalars
r(k)
Macros
r(individual)
r(save)
r(name)
r(title)
r(save#)
r(name#)
r(title#)
r(ci#)
r(response#)
r(impulse#)
r(irfname#)
r(stats#)
Also see
[TS] irf Create and analyze IRFs, dynamic-multiplier functions, and FEVDs
[TS] var intro Introduction to vector autoregressive models
[TS] vec intro Introduction to vector error-correction models
Title
irf create – Obtain IRFs, dynamic-multiplier functions, and FEVDs

Syntax    Menu    Description    Options    Remarks and examples    Methods and formulas    References    Also see
Syntax
After var
    irf create irfname [, var_options]

After svar
    irf create irfname [, svar_options]

After vec
    irf create irfname [, vec_options]

After arima
    irf create irfname [, arima_options]

After arfima
    irf create irfname [, arfima_options]

var_options                       Description
Main
    set(filename[, replace])      make filename active
    replace                       replace irfname if it already exists
    step(#)                       set forecast horizon; default is step(8)
    order(varlist)                specify Cholesky ordering of endogenous variables
    estimates(estname)            use stored results estname
Std. errors
    nose                          do not calculate standard errors
    bs                            obtain standard errors from bootstrapped residuals
    bsp                           obtain standard errors from parametric bootstrap
    nodots                        do not display dots as bootstrap replications are performed
    reps(#)                       use # bootstrap replications; default is reps(200)
    bsaving(filename[, replace])  save bootstrap replications in filename

svar_options                      Description
Main
    set(filename[, replace])      make filename active
    replace                       replace irfname if it already exists
    step(#)                       set forecast horizon; default is step(8)
    estimates(estname)            use stored results estname
Std. errors
    nose                          do not calculate standard errors
    bs                            obtain standard errors from bootstrapped residuals
    bsp                           obtain standard errors from parametric bootstrap
    nodots                        do not display dots as bootstrap replications are performed
    reps(#)                       use # bootstrap replications; default is reps(200)
    bsaving(filename[, replace])  save bootstrap replications in filename

vec_options                       Description
Main
    set(filename[, replace])      make filename active
    replace                       replace irfname if it already exists
    step(#)                       set forecast horizon; default is step(8)
    estimates(estname)            use stored results estname

arima_options                     Description
Main
    set(filename[, replace])      make filename active
    replace                       replace irfname if it already exists
    step(#)                       set forecast horizon; default is step(8)
    estimates(estname)            use stored results estname
Std. errors
    nose                          do not calculate standard errors

arfima_options                    Description
Main
    set(filename[, replace])      make filename active
    replace                       replace irfname if it already exists
    step(#)                       set forecast horizon; default is step(8)
    smemory                       calculate IRFs on the basis of a short-memory model with d = 0
    estimates(estname)            use stored results estname
Std. errors
    nose                          do not calculate standard errors
Menu
Statistics > Multivariate time series > IRF and FEVD analysis > Obtain IRFs, dynamic multipliers, and FEVDs
Description
irf create estimates multiple sets of impulseresponse functions (IRFs), dynamic-multiplier
functions, and forecast-error variance decompositions (FEVDs) after estimation by var, svar, or vec;
see [TS] var, [TS] var svar, or [TS] vec. irf create also estimates multiple sets of IRFs after
estimation by arima or arfima; see [TS] arima or [TS] arfima. All of these estimates and their
standard errors are known collectively as IRF results and are saved in an IRF file under the specified
irfname.
The following types of IRFs and dynamic-multiplier functions are saved:

simple IRFs                        after var, svar, vec, arima, or arfima
orthogonalized IRFs                after var, svar, vec, arima, or arfima
dynamic multipliers                after var
cumulative IRFs                    after var, svar, vec, arima, or arfima
cumulative orthogonalized IRFs     after var, svar, vec, arima, or arfima
cumulative dynamic multipliers     after var
structural IRFs                    after svar or arfima
Once you have created a set of IRF results, use the other irf commands to analyze them.
Options
Main
set(filename[, replace]) specifies the IRF file to be used. If set() is not specified, the active IRF
file is used; see [TS] irf set.
If set() is specified, the specified file becomes the active file, just as if you had issued an irf
set command.
replace specifies that the results saved under irfname may be replaced, if they already exist. IRF
results are saved in files, and one file may contain multiple IRF results.
step(#) specifies the step (forecast) horizon; the default is eight periods.
order(varlist) is allowed only after estimation by var; it specifies the Cholesky ordering of the
endogenous variables to be used when estimating the orthogonalized IRFs. By default, the order
in which the variables were originally specified on the var command is used.
smemory is allowed only after estimation by arfima; it specifies that the IRFs are calculated based
on a short-memory model with the fractional difference parameter d set to zero.
estimates(estname) specifies that estimation results previously estimated by var, svar, or vec,
and stored by estimates, be used. This option is rarely specified; see [R] estimates.
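For instance, stored estimation results can be fed to irf create as follows (a minimal sketch; the stored-results name myvar and the irfname fromstored are hypothetical):

. var dln_inv dln_inc dln_consump
. estimates store myvar
. irf create fromstored, step(8) estimates(myvar)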
Std. errors
nose, bs, and bsp are alternatives that specify how (whether) standard errors are to be calculated. If
none of these options is specified, asymptotic standard errors are calculated, except in two cases:
after estimation by vec and after estimation by svar in which long-run constraints were applied.
In those two cases, the default is as if nose were specified, although in the second case, you could
specify bs or bsp. After estimation by vec, standard errors are simply not available.
nose specifies that no standard errors be calculated.
bs specifies that standard errors be calculated by bootstrapping the residuals. bs may not be
specified if there are gaps in the data.
bsp specifies that standard errors be calculated via a multivariate-normal parametric bootstrap.
bsp may not be specified if there are gaps in the data.
nodots, reps(#), and bsaving(filename[, replace]) are relevant only if bs or bsp is specified.
nodots specifies that dots not be displayed each time irf create performs a bootstrap replication.
reps(#), # > 50, specifies the number of bootstrap replications to be performed. reps(200) is the default.
bsaving(filename[, replace]) specifies that file filename be created and that the bootstrap replications be saved in it. New file filename is just a .dta dataset that can be loaded later using use; see [D] use. If filename is specified without an extension, .dta is assumed.
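Putting these options together, a bootstrap run that saves its replications for later inspection might look like this (a sketch; the irfname bsirf and file name bsreps are hypothetical):

. set seed 123456
. irf create bsirf, step(8) bs reps(200) nodots bsaving(bsreps, replace)
. use bsreps, clear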
Introductory examples
Example 1: After var
Below we compare bootstrap and asymptotic standard errors for a specific FEVD. We begin by
fitting a VAR(2) model to the Lutkepohl data (we use the var command). We next use the irf create
command twice, first to create results with asymptotic standard errors (saved under the name asymp)
and then to re-create the same results, this time with bootstrap standard errors (saved under the name
bs). Because bootstrapping is a random process, we set the random-number seed (set seed 123456)
before using irf create the second time; this makes our results reproducible. Finally, we compare
results by using the IRF analysis command irf ctable.
. use http://www.stata-press.com/data/r13/lutkepohl2
(Quarterly SA West German macro data, Bil DM, from Lutkepohl 1993 Table E.1)
. var dln_inv dln_inc dln_consump if qtr>=tq(1961q2) & qtr<=tq(1978q4), lags(1/2)
(output omitted )
. irf create asymp, step(8) set(results1)
(file results1.irf created)
(file results1.irf now active)
(file results1.irf updated)
. set seed 123456
. irf create bs, step(8) bs reps(250) nodots
(file results1.irf updated)
. irf ctable (asymp dln_inc dln_consump fevd)
> (bs dln_inc dln_consump fevd), noci stderror
step         (1) fevd    (1) S.E.    (2) fevd    (2) S.E.
0            0           0           0           0
1            .282135     .087373     .282135     .104073
2            .278777     .083782     .278777     .096954
3            .33855      .090006     .33855      .100452
4            .339942     .089207     .339942     .099085
5            .342813     .090494     .342813     .099326
6            .343119     .090517     .343119     .09934
7            .343079     .090499     .343079     .099325
8            .34315      .090569     .34315      .099368

(1) irfname = asymp, impulse = dln_inc, and response = dln_consump
(2) irfname = bs, impulse = dln_inc, and response = dln_consump
Point estimates are, of course, the same. The bootstrap estimates of the standard errors, however,
are larger than the asymptotic estimates, which suggests that the sample size of 71 is not large
enough for the distribution of the estimator of the FEVD to be well approximated by the asymptotic
distribution. Here we would expect the bootstrap confidence interval to be more reliable than the
confidence interval that is based on the asymptotic standard error.
Technical note
The details of the bootstrap algorithms are given in Methods and formulas. These algorithms are conditional on the first p observations, where p is the order of the fitted VAR. (In an SVAR model, p is the order of the VAR that underlies the SVAR.) The bootstrapped estimates are conditional on the first p observations, just as the estimators of the coefficients in VAR models are conditional on the first p observations. With bootstrap standard errors (option bs), the p initial observations are used with resampling of the residuals to produce the bootstrap samples used for estimation. With the more parametric bootstrap (option bsp), the p initial observations are used with draws from a multivariate normal distribution with variance–covariance matrix $\widehat{\Sigma}$ to generate the bootstrap samples.
Technical note
For var and svar e() results, irf uses $\widehat{\Sigma}$, the estimated variance matrix of the disturbances, in computing the asymptotic standard errors of all the functions. The point estimates of the orthogonalized impulse–response functions, the structural impulse–response functions, and all the variance decompositions also depend on $\widehat{\Sigma}$. As discussed in [TS] var, var and svar use the ML estimator of this matrix by default, but they have option dfk, which will instead use an estimator that includes a small-sample correction. Specifying dfk when the model is fit, that is, when the var or svar command is given, changes the estimate of $\widehat{\Sigma}$ and will change the IRF results that depend on it.
step         (1) cdm     (1) Lower   (1) Upper
0            .032164     -.027215    .091544
1            .096568     .003479     .189656
2            .140107     .022897     .257317
3            .150527     .032116     .268938
4            .148979     .031939     .26602
5            .151247     .033011     .269482
6            .150267     .033202     .267331
7            .150336     .032858     .267813
8            .150525     .033103     .267948

step         (2) cdm     (2) Lower   (2) Upper
0            .058681     .012529     .104832
1            .062723     -.005058    .130504
2            .126167     .032497     .219837
3            .136583     .038691     .234476
4            .146482     .04442      .248543
5            .146075     .045201     .24695
6            .145542     .044988     .246096
7            .146309     .045315     .247304
8            .145786     .045206     .246365
[Graph omitted: estimated OIRF plotted against step (0 to 50); graphs by irfname, impulse variable, and response variable]
The graph shows that the estimated OIRF converges to a positive asymptote, which indicates that
an orthogonalized innovation to the unemployment rate in Indiana has a permanent effect on the
unemployment rate in Missouri.
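A graph of this kind would be produced by a sequence along these lines (a sketch only; the dataset name, lag choice, and irfname are assumptions standing in for the actual example):

. use http://www.stata-press.com/data/r13/urates
. vec missouri indiana, lags(4)
. irf create vecirf, set(vecirfs) step(50)
. irf graph oirf, impulse(indiana) response(missouri)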
Variable irfname contains the irfname specified by the user. Variable impulse records the name
of the endogenous variable whose innovations are the impulse. Variable response records the name
of the endogenous variable that is responding to the innovations. In a model with K endogenous
variables, there are $K^2$ combinations of impulse and response. Variable step records the periods
for which these estimates were computed.
Below is a catalog of the statistics that irf create estimates and the variable names under which
they are saved in the IRF file.
Name        Statistic
irf         impulse–response functions
oirf        orthogonalized impulse–response functions
dm          dynamic-multiplier functions
cirf        cumulative impulse–response functions
coirf       cumulative orthogonalized impulse–response functions
cdm         cumulative dynamic-multiplier functions
fevd        Cholesky forecast-error decomposition
sirf        structural impulse–response functions
sfevd       structural forecast-error decomposition
stdirf      standard error of the impulse–response functions
stdoirf     standard error of the orthogonalized impulse–response functions
stdcirf     standard error of the cumulative impulse–response functions
stdcoirf    standard error of the cumulative orthogonalized impulse–response functions
stdfevd     standard error of the Cholesky forecast-error decomposition
stdsirf     standard error of the structural impulse–response functions
stdsfevd    standard error of the structural forecast-error decomposition
In addition to the variables, information is stored in _dta characteristics. Much of the following information is also available in r() after irf describe, where it is often more convenient to obtain the information. Characteristic _dta[version] contains the version number of the IRF file, which is currently 1.1. Characteristic _dta[irfnames] contains a list of all the irfnames in the IRF file. For each irfname, there are a series of additional characteristics:

Name
_dta[irfname model]
_dta[irfname order]
_dta[irfname exog]
_dta[irfname exogvars]
_dta[irfname constant]
_dta[irfname lags]
_dta[irfname exlags]
_dta[irfname tmin]
_dta[irfname tmax]
_dta[irfname timevar]
_dta[irfname tsfmt]
_dta[irfname varcns]
_dta[irfname svarcns]
_dta[irfname step]
_dta[irfname stderror]
_dta[irfname reps]
_dta[irfname version]
_dta[irfname rank]
_dta[irfname trend]
_dta[irfname veccns]
_dta[irfname sind]
_dta[irfname d]
A pth-order vector autoregressive model (VAR) can be written as

$$y_t = v + A_1 y_{t-1} + \cdots + A_p y_{t-p} + u_t \tag{1}$$

The VAR represents the variables in $y_t$ as functions of its own lags and serially uncorrelated innovations $u_t$, where $E(u_t u_t') = \Sigma$. All the information about contemporaneous correlations among the $K$ variables in $y_t$ is contained in $\Sigma$. In fact, as discussed in [TS] var svar, a VAR can be viewed as the reduced form of a dynamic simultaneous-equation model.
To see how the innovations affect the variables in $y_t$ after, say, $i$ periods, rewrite the model in its moving-average form

$$y_t = \mu + \sum_{i=0}^{\infty} \Phi_i u_{t-i} \tag{2}$$

where

$$\Phi_i = \begin{cases} I_K & \text{if } i = 0 \\ \sum_{j=1}^{i} \Phi_{i-j} A_j & \text{if } i = 1, 2, \ldots \end{cases}$$
We can rewrite a VAR in the moving-average form only if it is stable. Essentially, a VAR is stable
if the variables are covariance stationary and none of the autocorrelations are too high (the issue of
stability is discussed in greater detail in [TS] varstable).
The $\Phi_i$ are the simple IRFs. The $(j, k)$ element of $\Phi_i$ gives the effect of a one-time unit increase in the $k$th element of $u_t$ on the $j$th element of $y_t$ after $i$ periods, holding everything else constant. Unfortunately, these effects have no causal interpretation, which would require us to be able to answer the question, "How does an innovation to variable $k$, holding everything else constant, affect variable $j$ after $i$ periods?" Because the $u_t$ are contemporaneously correlated, we cannot assume that everything else is held constant. Contemporaneous correlation among the $u_t$ implies that a shock to one variable is likely to be accompanied by shocks to some of the other variables, so it does not make sense to shock one variable and hold everything else constant. For this reason, (2) cannot provide a causal interpretation.
This shortcoming may be overcome by rewriting (2) in terms of mutually uncorrelated innovations. Suppose that we had a matrix $P$ such that $\Sigma = PP'$. If we had such a $P$, then $P^{-1}\Sigma (P^{-1})' = I_K$, and

$$E\{P^{-1} u_t (P^{-1} u_t)'\} = P^{-1} E(u_t u_t') (P^{-1})' = P^{-1} \Sigma (P^{-1})' = I_K$$

We can thus use $P^{-1}$ to orthogonalize the $u_t$ and rewrite (2) as

$$y_t = \mu + \sum_{i=0}^{\infty} \Phi_i P P^{-1} u_{t-i} = \mu + \sum_{i=0}^{\infty} \Theta_i P^{-1} u_{t-i} = \mu + \sum_{i=0}^{\infty} \Theta_i w_{t-i}$$

where $\Theta_i = \Phi_i P$ and $w_t = P^{-1} u_t$.
The SVAR approach integrates the need to identify the causal IRFs into the model specification and estimation process. Sufficient identification restrictions can be obtained by placing either short-run or long-run restrictions on the model. The VAR in (1) can be rewritten as

$$y_t - v - A_1 y_{t-1} - \cdots - A_p y_{t-p} = u_t$$

Similarly, a short-run SVAR model can be written as

$$\mathbf{A}(y_t - v - A_1 y_{t-1} - \cdots - A_p y_{t-p}) = \mathbf{A} u_t = \mathbf{B} e_t \tag{3}$$

As discussed in [TS] var svar, the estimates $\widehat{\mathbf{A}}$ and $\widehat{\mathbf{B}}$ are obtained by maximizing the concentrated log-likelihood function on the basis of the $\widehat{\Sigma}$ obtained from the underlying VAR. The short-run SVAR approach chooses $P = \widehat{\mathbf{A}}^{-1}\widehat{\mathbf{B}}$ to identify the causal IRFs. The long-run SVAR approach works similarly, with $P = \widehat{C} = \widehat{\bar{\mathbf{A}}}^{-1}\widehat{\mathbf{B}}$, where $\widehat{\bar{\mathbf{A}}}^{-1}$ is the matrix of estimated long-run or accumulated effects of the reduced-form VAR shocks.
There is one important difference between long-run and short-run SVAR models. As discussed by
Amisano and Giannini (1997, chap. 6), in the short-run model the constraints are applied directly to
the parameters in A and B. Then A and B interact with the estimated parameters of the underlying
VAR. In contrast, in a long-run model, the constraints are placed on functions of the estimated VAR
parameters. Although estimation and inference of the parameters in C is straightforward, obtaining
the asymptotic standard errors of the structural IRFs requires untenable assumptions. For this reason,
irf create does not estimate the asymptotic standard errors of the structural IRFs generated by
long-run SVAR models. However, bootstrap standard errors are still available.
An introduction to dynamic-multiplier functions for VARs
A dynamic-multiplier function measures the effect of a unit change in an exogenous variable on the endogenous variables over time. Per Lütkepohl (2005, chap. 10), if the VAR with exogenous variables is stable, it can be rewritten as

$$y_t = \sum_{i=0}^{\infty} D_i x_{t-i} + \sum_{i=0}^{\infty} \Phi_i u_{t-i}$$

where the $D_i$ are the dynamic-multiplier functions. (See Methods and formulas for details.) Some authors refer to the dynamic-multiplier functions as transfer functions because they specify how a unit change in an exogenous variable is transferred to the endogenous variables.
Technical note
irf create computes dynamic-multiplier functions only after var. After short-run SVAR models,
the dynamic multipliers from the VAR are the same as those from the SVAR. The dynamic multipliers
for long-run SVARs have not yet been worked out.
$$y_{t+h} - \widehat{y}_t(h) = \sum_{i=0}^{h-1} \Phi_i u_{t+h-i} \tag{4}$$

where $y_{t+h}$ is the value observed at time $t + h$ and $\widehat{y}_t(h)$ is the $h$-step-ahead predicted value for $y_{t+h}$ that was made at time $t$.

Because the $u_t$ are contemporaneously correlated, their distinct contributions to the forecast error cannot be ascertained. However, if we choose a $P$ such that $\Sigma = PP'$, as above, we can orthogonalize the $u_t$ into $w_t = P^{-1} u_t$. We can then ascertain the relative contribution of the distinct elements of $w_t$. Thus we can rewrite (4) as

$$y_{t+h} - \widehat{y}_t(h) = \sum_{i=0}^{h-1} \Phi_i P P^{-1} u_{t+h-i} = \sum_{i=0}^{h-1} \Theta_i w_{t+h-i}$$
Because the forecast errors can be written in terms of the orthogonalized errors, the forecast-error variance can be written in terms of the orthogonalized error variances. Forecast-error variance decompositions measure the fraction of the total forecast-error variance that is attributable to each orthogonalized shock.
Technical note
The details in this note are not critical to the discussion that follows. A forecast-error variance decomposition is derived for a given $P$. Per Lütkepohl (2005, sec. 2.3.3), letting $\theta_{mn,i}$ be the $(m, n)$th element of $\Theta_i$, we can express the $h$-step forecast error of the $j$th component of $y_t$ as

$$y_{j,t+h} - \widehat{y}_j(h) = \sum_{i=0}^{h-1} \sum_{k=1}^{K} \theta_{jk,i}\, w_{k,t+h-i}$$

The $w_t$, which were constructed using $P$, are mutually orthogonal with unit variance. This allows us to compute easily the mean squared error (MSE) of the forecast of variable $j$ at horizon $h$ in terms of the contributions of the components of $w_t$. Specifically,

$$\mathrm{MSE}\{y_{j,t}(h)\} = \sum_{k=1}^{K} (\theta_{jk,0}^2 + \cdots + \theta_{jk,h-1}^2)$$

The $k$th term in the sum above is interpreted as the contribution of the orthogonalized innovations in variable $k$ to the $h$-step forecast error of variable $j$. Note that the $k$th element in the sum above can be rewritten as

$$\theta_{jk,0}^2 + \cdots + \theta_{jk,h-1}^2 = \sum_{i=0}^{h-1} (e_j' \Theta_i e_k)^2$$

where $e_i$ is the $i$th column of $I_K$. Normalizing by the forecast error for variable $j$ at horizon $h$ yields

$$\omega_{jk,h} = \frac{\sum_{i=0}^{h-1} (e_j' \Theta_i e_k)^2}{\mathrm{MSE}\{y_{j,t}(h)\}}$$

where $\mathrm{MSE}\{y_{j,t}(h)\} = \sum_{i=0}^{h-1} \sum_{k=1}^{K} \theta_{jk,i}^2$.
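To make the normalization concrete, consider the smallest nontrivial case, $K = 2$ and $h = 2$ (a worked illustration, not part of the derivation above): the fraction of the two-step forecast-error variance of variable 1 attributable to orthogonalized shocks in variable 2 is

$$\omega_{12,2} = \frac{(e_1' \Theta_0 e_2)^2 + (e_1' \Theta_1 e_2)^2}{\theta_{11,0}^2 + \theta_{12,0}^2 + \theta_{11,1}^2 + \theta_{12,1}^2}$$

and $\omega_{11,2} + \omega_{12,2} = 1$ by construction.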
Because the FEVD depends on the choice of $P$, there are different forecast-error variance decompositions associated with each distinct $P$. irf create can estimate the FEVD for a VAR or an SVAR. For a VAR, $P$ is the Cholesky decomposition of $\widehat{\Sigma}$. For an SVAR, $P$ is the estimated structural decomposition, $P = \widehat{\mathbf{A}}^{-1}\widehat{\mathbf{B}}$ for short-run models and $P = \widehat{C}$ for long-run SVAR models. Because of the same complications that arose with the structural impulse–response functions, the asymptotic standard errors of the structural FEVD are not available after long-run SVAR models, but bootstrap standard errors are still available.
$$y_t = A y_{t-1} + u_t \tag{5}$$
We developed intuition for the IRFs from a stationary VAR by rewriting the VAR as an infinite-order vector moving-average (VMA) process. While the Granger representation theorem establishes the existence of a VMA formulation of this model, because the cointegrating VAR is not stable, the inversion is not nearly so intuitive. (See Johansen [1995, chapters 3 and 4] for more details.) For this reason, we use (5) to develop intuition for the IRFs from a cointegrating VAR.

Suppose that $K$ is 3, that $u_1 = (1, 0, 0)'$, and that we want to analyze the time paths of the variables in $y$ conditional on the initial values $y_0 = 0$, $A$, and the condition that there are no more shocks to the system, that is, $0 = u_2 = u_3 = \cdots$. These assumptions and (5) imply that

$$y_1 = u_1, \qquad y_2 = A y_1 = A u_1, \qquad y_3 = A y_2 = A^2 u_1$$

and so on. The $i$th-row element of the first column of $A^s$ contains the effect of the unit shock to the first variable after $s$ periods. The first column of $A^s$ contains the IRF of a unit impulse to the first variable after $s$ periods. We could deduce the IRFs of a unit impulse to any of the other variables by administering the unit shock to one of them instead of to the first variable. Thus we can see that the $(i, j)$th element of $A^s$ contains the unit IRF from variable $j$ to variable $i$ after $s$ periods. By starting with orthogonalized shocks of the form $P^{-1} u_t$, we can use the same logic to derive the OIRFs to be $A^s P$.

For the stationary VAR, stability implies that all the eigenvalues of $A$ have moduli strictly less than one, which in turn implies that all the elements of $A^s \to 0$ as $s \to \infty$. This implies that all the IRFs from a stationary VAR taper off to zero as $s \to \infty$. In contrast, in a cointegrating VAR, some of the eigenvalues of $A$ are 1, while the remaining eigenvalues have moduli strictly less than 1. This implies that in cointegrating VARs some of the elements of $A^s$ are not going to zero as $s \to \infty$, which in turn implies that some of the IRFs and OIRFs are not going to zero as $s \to \infty$. The fact that the IRFs and OIRFs taper off to zero for stationary VARs but not for cointegrating VARs is one of the key differences between the two models.

When the IRF or OIRF from the innovation in one variable to another tapers off to zero as time goes on, the innovation to the first variable is said to have a transitory effect on the second variable. When the IRF or OIRF does not go to zero, the effect is said to be permanent.

Note that, because some of the IRFs and OIRFs do not taper off to zero, some of the cumulative IRFs and OIRFs diverge over time.
A covariance-stationary ARMA($p, q$) model can be written as

$$\rho(L^p)(y_t - x_t \beta) = \theta(L^q) \epsilon_t$$

where

$$\rho(L^p) = 1 - \rho_1 L - \rho_2 L^2 - \cdots - \rho_p L^p$$
$$\theta(L^q) = 1 + \theta_1 L + \theta_2 L^2 + \cdots + \theta_q L^q$$

and $L^j y_t = y_{t-j}$.

We can rewrite the above model as an infinite-order moving-average process

$$y_t = x_t \beta + \psi(L) \epsilon_t$$

where

$$\psi(L) = \frac{\theta(L)}{\rho(L)} = 1 + \psi_1 L + \psi_2 L^2 + \cdots \tag{6}$$

This representation shows the impact of the past innovations on the current $y_t$. The $i$th coefficient describes the response of $y_t$ to a one-time impulse in $\epsilon_{t-i}$, holding everything else constant. The $\psi_i$ coefficients are collectively referred to as the impulse–response function of the ARMA model. For a covariance-stationary series, the $\psi_i$ coefficients decay exponentially.

A covariance-stationary multiplicative seasonal ARMA model, often abbreviated SARMA, of order $(p, q) \times (P, Q)_s$ can be written as

$$\rho(L^p)\,\rho_s(L^P)(y_t - x_t \beta) = \theta(L^q)\,\theta_s(L^Q)\, \epsilon_t$$

An ARFIMA model can be written as

$$y_t = x_t \beta + (1 - L)^{-d} \psi(L) \epsilon_t$$

This makes it clear that the impulse–response function for an ARFIMA model corresponds to a fractionally differenced impulse–response function for an ARIMA model. Because of the fractional differentiation, the $\psi_i$ coefficients decay very slowly; see Remarks and examples in [TS] arfima.
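In practice, these IRFs are obtained with the same irf machinery; for example, after fitting a fractionally integrated model, one might type the following (a sketch; the variable y, the irfnames, and the file name are hypothetical):

. arfima y
. irf create longmem, set(arfirfs) step(40)
. irf create shortmem, step(40) smemory
. irf graph irf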
The estimates of the simple and orthogonalized IRFs are obtained, with $\widehat{\Phi}_0 = I_K$, via

$$\widehat{\Phi}_i = \sum_{j=1}^{i} \widehat{\Phi}_{i-j} \widehat{A}_j$$

and

$$\widehat{\Theta}_i^{o} = \widehat{\Phi}_i \widehat{P}_c$$

where $\widehat{A}_j = 0_K$ for $j > p$ and $\widehat{P}_c$ is the Cholesky decomposition of $\widehat{\Sigma}$.

Given the estimates $\widehat{\mathbf{A}}$ and $\widehat{\mathbf{B}}$, or $\widehat{C}$, from svar, the estimates of the structural IRFs are either

$$\widehat{\Theta}_i^{sr} = \widehat{\Phi}_i \widehat{P}_{sr}$$

or

$$\widehat{\Theta}_i^{lr} = \widehat{\Phi}_i \widehat{P}_{lr}$$

where $\widehat{P}_{sr} = \widehat{\mathbf{A}}^{-1}\widehat{\mathbf{B}}$ and $\widehat{P}_{lr} = \widehat{C}$.
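The recursion for $\widehat{\Phi}_i$ is simple enough to compute directly. The following Mata sketch (our own illustration, not Stata's internal implementation; the function name and the layout of the coefficient matrix are assumptions) builds the simple IRFs from a K x Kp matrix A = (A_1, ..., A_p) of VAR coefficient estimates:

mata:
// Compute Phi_0, ..., Phi_h via Phi_0 = I(K) and
// Phi_i = sum_{j=1}^{min(i,p)} Phi_{i-j} * A_j.
real matrix simple_irf(real matrix A, real scalar p, real scalar h)
{
    real scalar K, i, j
    real matrix Phi, Phi_i

    K   = rows(A)
    Phi = I(K)                                    // Phi_0
    for (i = 1; i <= h; i++) {
        Phi_i = J(K, K, 0)
        for (j = 1; j <= min((i, p)); j++) {
            // Phi_{i-j} occupies columns (i-j)*K+1 .. (i-j+1)*K of Phi
            Phi_i = Phi_i + Phi[., ((i-j)*K+1)..((i-j)*K+K)] * A[., ((j-1)*K+1)..((j-1)*K+K)]
        }
        Phi = Phi, Phi_i                          // append Phi_i on the right
    }
    return(Phi)                                   // K x K(h+1): (Phi_0, ..., Phi_h)
}
end

For example, simple_irf((.5, .1 \ 0, .4), 1, 8) returns the 2 x 18 matrix (Phi_0, ..., Phi_8) for a hypothetical bivariate VAR(1).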
The estimated structural IRFs stored in an IRF file with the variable name sirf may be from either a short-run model or a long-run model, depending on the estimation results used to create the IRFs. As discussed in [TS] irf describe, you can use irf describe to determine easily whether the structural IRFs were generated from a short-run or a long-run SVAR model.

Following Lütkepohl (2005, sec. 3.7), estimates of the cumulative IRFs and the cumulative orthogonalized impulse–response functions (COIRFs) at period $n$ are, respectively,

$$\widehat{\overline{\Phi}}_n = \sum_{i=0}^{n} \widehat{\Phi}_i \qquad \text{and} \qquad \widehat{\overline{\Theta}}_n = \sum_{i=0}^{n} \widehat{\Theta}_i$$
The asymptotic standard errors of the different impulse–response functions are obtained by applications of the delta method. See Lütkepohl (2005, sec. 3.7) and Amisano and Giannini (1997, chap. 4) for the derivations. See Serfling (1980, sec. 3.3) for a discussion of the delta method. In presenting the variance–covariance matrix estimators, we make extensive use of the vec() operator, where vec(X) is the vector obtained by stacking the columns of X.
Lütkepohl (2005, sec. 3.7) derives the asymptotic VCEs of $\mathrm{vec}(\widehat{\Phi}_i)$, $\mathrm{vec}(\widehat{\Theta}_i^{o})$, $\mathrm{vec}(\widehat{\overline{\Phi}}_n)$, and $\mathrm{vec}(\widehat{\overline{\Theta}}_n)$. Because $\mathrm{vec}(\widehat{\Phi}_i)$ is $K^2 \times 1$, the asymptotic VCE of $\mathrm{vec}(\widehat{\Phi}_i)$ is $K^2 \times K^2$, and it is given by

$$G_i \widehat{\Sigma}_{\widehat{\alpha}} G_i'$$

where

$$G_i = \sum_{m=0}^{i-1} J (\widehat{M}')^{(i-1-m)} \otimes \widehat{\Phi}_m \qquad\qquad G_i \text{ is } K^2 \times K^2 p$$

$$J = (I_K, 0_K, \ldots, 0_K) \qquad\qquad J \text{ is } K \times Kp$$

$$\widehat{M} = \begin{pmatrix} \widehat{A}_1 & \widehat{A}_2 & \cdots & \widehat{A}_{p-1} & \widehat{A}_p \\ I_K & 0_K & \cdots & 0_K & 0_K \\ 0_K & I_K & \cdots & 0_K & 0_K \\ \vdots & & \ddots & & \vdots \\ 0_K & 0_K & \cdots & I_K & 0_K \end{pmatrix} \qquad\qquad \widehat{M} \text{ is } Kp \times Kp$$

The $\widehat{A}_i$ are the estimates of the coefficients on the lagged variables in the VAR, and $\widehat{\Sigma}_{\widehat{\alpha}}$ is the VCE matrix of $\widehat{\alpha} = \mathrm{vec}(\widehat{A}_1, \ldots, \widehat{A}_p)$. $\widehat{\Sigma}_{\widehat{\alpha}}$ is a $K^2 p \times K^2 p$ matrix whose elements come from the VCE of the VAR coefficient estimator. As such, this VCE is the VCE of the constrained estimator if there are any constraints placed on the VAR coefficients.

The $K^2 \times K^2$ asymptotic VCE matrix for $\mathrm{vec}(\widehat{\overline{\Phi}}_n)$ after $n$ periods is given by

$$F_n \widehat{\Sigma}_{\widehat{\alpha}} F_n' \qquad \text{where} \qquad F_n = \sum_{i=1}^{n} G_i$$

The $K^2 \times K^2$ asymptotic VCE matrix of the vectorized, orthogonalized IRFs at horizon $i$, $\mathrm{vec}(\widehat{\Theta}_i^{o})$, is

$$C_i \widehat{\Sigma}_{\widehat{\alpha}} C_i' + \overline{C}_i \widehat{\Sigma}_{\widehat{\sigma}} \overline{C}_i'$$
where

$$C_0 = 0 \qquad\qquad C_0 \text{ is } K^2 \times K^2 p$$

$$C_i = (\widehat{P}_c' \otimes I_K) G_i, \quad i = 1, 2, \ldots \qquad\qquad C_i \text{ is } K^2 \times K^2 p$$

$$\overline{C}_i = (I_K \otimes \widehat{\Phi}_i) H, \quad i = 0, 1, \ldots \qquad\qquad \overline{C}_i \text{ is } K^2 \times K(K+1)/2$$

$$H = L_K' \left\{ L_K N_K (\widehat{P}_c \otimes I_K) L_K' \right\}^{-1} \qquad\qquad H \text{ is } K^2 \times K(K+1)/2$$

$$L_K \text{ solves } \mathrm{vech}(F) = L_K \mathrm{vec}(F) \text{ for } F \; K \times K \text{ and symmetric} \qquad\qquad L_K \text{ is } K(K+1)/2 \times K^2$$

$$K_K \text{ solves } K_K \mathrm{vec}(G) = \mathrm{vec}(G') \text{ for any } K \times K \text{ matrix } G \qquad\qquad K_K \text{ is } K^2 \times K^2$$

$$N_K = \tfrac{1}{2}\,(I_{K^2} + K_K) \qquad\qquad N_K \text{ is } K^2 \times K^2$$

$$\widehat{\Sigma}_{\widehat{\sigma}} = 2 D_K^{+} (\widehat{\Sigma} \otimes \widehat{\Sigma}) D_K^{+\prime} \qquad\qquad \widehat{\Sigma}_{\widehat{\sigma}} \text{ is } K(K+1)/2 \times K(K+1)/2$$

$$D_K^{+} = (D_K' D_K)^{-1} D_K' \qquad\qquad D_K^{+} \text{ is } K(K+1)/2 \times K^2$$

$$D_K \text{ solves } D_K \mathrm{vech}(F) = \mathrm{vec}(F) \text{ for } F \; K \times K \text{ and symmetric} \qquad\qquad D_K \text{ is } K^2 \times K(K+1)/2$$

$$\mathrm{vech}(X) = \begin{pmatrix} x_{11} \\ x_{21} \\ \vdots \\ x_{K1} \\ x_{22} \\ \vdots \\ x_{K2} \\ \vdots \\ x_{KK} \end{pmatrix} \text{ for } X \; K \times K \qquad\qquad \mathrm{vech}(X) \text{ is } K(K+1)/2 \times 1$$

Note that $\widehat{\Sigma}_{\widehat{\sigma}}$ is the VCE of $\mathrm{vech}(\widehat{\Sigma})$. More details about $L_K$, $K_K$, $D_K$, and vech() are available in Lütkepohl (2005).
For the structural IRFs from a short-run SVAR, the asymptotic VCE at horizons $i$ and $j$ has blocks

$$\widehat{\Sigma}(h)_{ij} = \widetilde{G}_i \widehat{\Sigma}_{\widehat{\alpha}} \widetilde{G}_j' + \left\{ I_K \otimes (J \widehat{M}^i J') \right\} \widehat{\Sigma}(0) \left\{ I_K \otimes (J \widehat{M}^j J') \right\}'$$
where

$$\widetilde{G}_0 = 0_K \qquad\qquad \widetilde{G}_0 \text{ is } K^2 \times K^2 p$$

$$\widetilde{G}_i = \sum_{k=0}^{i-1} \left\{ \widehat{P}_{sr}' J (\widehat{M}')^{i-1-k} \otimes (J \widehat{M}^k J') \right\} \qquad\qquad \widetilde{G}_i \text{ is } K^2 \times K^2 p$$

$$\widehat{\Sigma}(0) = Q_2 \widehat{\Sigma}_W Q_2' \qquad\qquad \widehat{\Sigma}(0) \text{ is } K^2 \times K^2$$

$$\widehat{\Sigma}_W = Q_1 \widehat{\Sigma}_{AB} Q_1' \qquad\qquad \widehat{\Sigma}_W \text{ is } K^2 \times K^2$$

$$Q_2 = \widehat{P}_{sr}' \otimes \widehat{P}_{sr} \qquad\qquad Q_2 \text{ is } K^2 \times K^2$$

$$Q_1 = \left\{ (I_K \otimes \widehat{\mathbf{B}}^{-1}), \; -(\widehat{P}_{sr}' \otimes \widehat{\mathbf{B}}^{-1}) \right\} \qquad\qquad Q_1 \text{ is } K^2 \times 2K^2$$

and $\widehat{\Sigma}_{AB}$ is the VCE of the estimated parameters in $\mathbf{A}$ and $\mathbf{B}$.
Dynamic multipliers are computed as

$$\widehat{D}_i = J_x \widetilde{A}_x^{\,i} \widetilde{B}_x \qquad i \in \{0, 1, \ldots\}$$

where

$$J_x = (I_K, 0_K, \ldots, 0_K) \qquad\qquad J_x \text{ is } K \times (Kp + Rs)$$

$$\widetilde{A}_x = \begin{pmatrix} \widehat{M} & \widetilde{B} \\ \widetilde{0} & \widetilde{I} \end{pmatrix} \qquad\qquad \widetilde{A}_x \text{ is } (Kp + Rs) \times (Kp + Rs)$$

$$\widetilde{B} = \begin{pmatrix} \widehat{B}_1 & \widehat{B}_2 & \cdots & \widehat{B}_s \\ 0 & 0 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & 0 \end{pmatrix} \qquad\qquad \widetilde{B} \text{ is } Kp \times Rs$$

$$\widetilde{I} = \begin{pmatrix} 0_R & 0_R & \cdots & 0_R & 0_R \\ I_R & 0_R & \cdots & 0_R & 0_R \\ 0_R & I_R & \cdots & 0_R & 0_R \\ \vdots & & \ddots & & \vdots \\ 0_R & 0_R & \cdots & I_R & 0_R \end{pmatrix} \qquad\qquad \widetilde{I} \text{ is } Rs \times Rs$$

$$\widetilde{B}_x = \begin{pmatrix} \widetilde{B}_0' & I_0' \end{pmatrix}' \qquad\qquad \widetilde{B}_x \text{ is } (Kp + Rs) \times R$$

$$\widetilde{B}_0 = (\widehat{B}_0', 0, \ldots, 0) \qquad\qquad \widetilde{B}_0 \text{ is } R \times Kp$$

$$I_0 = (I_R, 0_R, \ldots, 0_R) \qquad\qquad I_0 \text{ is } R \times Rs$$

$\mathbf{0}$ is a $K \times R$ matrix of 0s and $\widetilde{0}$ is an $Rs \times Kp$ matrix of 0s.
The cumulative dynamic multipliers are

$$\overline{D}_i = \sum_{j=0}^{i} \widehat{D}_j$$

An asymptotic VCE of the dynamic multipliers is given by $\widetilde{G}_i \widehat{\Sigma}_{\widehat{\alpha}_x} \widetilde{G}_i'$, where

$$\widetilde{G}_i = \left[\, \sum_{j=0}^{i-1} \left\{ \widetilde{B}_x' (\widetilde{A}_x')^{\,i-1-j} J_x' \otimes J_x \widetilde{A}_x^{\,j} J_x' \right\},\;\; I_R \otimes J_x \widetilde{A}_x^{\,i} J_x' \,\right]$$

and $\widehat{\Sigma}_{\widehat{\alpha}_x}$ is the VCE of the estimated coefficients on the lagged endogenous and the exogenous variables. Similarly, an asymptotic VCE of $\overline{D}_i$ is

$$\left( \sum_{j=0}^{i} \widetilde{G}_j \right) \widehat{\Sigma}_{\widehat{\alpha}_x} \left( \sum_{j=0}^{i} \widetilde{G}_j \right)'$$
The estimated forecast-error variance decompositions are

$$\widehat{\omega}_{jk,h} = \frac{\sum_{i=0}^{h-1} (e_j' \widehat{\Theta}_i e_k)^2}{\widehat{\mathrm{MSE}}_j(h)}$$

where $\widehat{\mathrm{MSE}}_j(h)$ is the $j$th diagonal element of

$$\sum_{i=0}^{h-1} \widehat{\Phi}_i \widehat{\Sigma} \widehat{\Phi}_i'$$

The asymptotic VCE of $\widehat{\omega}_{jk,h}$ is

$$d_{jk,h} \widehat{\Sigma}_{\widehat{\alpha}} d_{jk,h}' + \overline{d}_{jk,h} \widehat{\Sigma}_{\widehat{\sigma}} \overline{d}_{jk,h}'$$

where

$$d_{jk,h} = \sum_{i=0}^{h-1} \left\{ \mathrm{MSE}_j(h)\,(e_j' \widehat{\Phi}_i \widehat{P}_c e_k)(e_k' \widehat{P}_c' \otimes e_j') G_i - (e_j' \widehat{\Phi}_i \widehat{P}_c e_k)^2 \sum_{m=0}^{h-1} (e_j' \widehat{\Phi}_m \widehat{\Sigma} \otimes e_j') G_m \right\} \frac{1}{\mathrm{MSE}_j(h)^2}$$

with $d_{jk,h}$ being $1 \times K^2 p$,

$$\overline{d}_{jk,h} = \sum_{i=0}^{h-1} \left\{ \mathrm{MSE}_j(h)\,(e_j' \widehat{\Phi}_i \widehat{P}_c e_k)(e_k' \otimes e_j' \widehat{\Phi}_i) H - (e_j' \widehat{\Phi}_i \widehat{P}_c e_k)^2 \sum_{m=0}^{h-1} (e_j' \widehat{\Phi}_m \otimes e_j' \widehat{\Phi}_m) D_K \right\} \frac{1}{\mathrm{MSE}_j(h)^2}$$

with $\overline{d}_{jk,h}$ being $1 \times K(K+1)/2$,

$$G_0 = 0 \qquad\qquad G_0 \text{ is } K^2 \times K^2 p$$

and $D_K$ is the $K^2 \times K(K+1)/2$ duplication matrix defined previously.
For the structural forecast-error decompositions, we follow Amisano and Giannini (1997, sec. 5.2). They define the matrix of structural forecast-error decompositions at horizon $s$, when a maximum of $h$ periods are estimated, as

$$\widehat{W}_s = \widehat{F}_s^{-1} \widehat{M}_s \qquad \text{for } s = 1, \ldots, h + 1$$

where

$$\widehat{F}_s = \left( \sum_{i=0}^{s-1} \widehat{\Theta}_i^{sr} \widehat{\Theta}_i^{sr\prime} \right) \odot I_K$$

$$\widehat{M}_s = \sum_{i=0}^{s-1} \widehat{\Theta}_i^{sr} \odot \widehat{\Theta}_i^{sr}$$

and $\odot$ denotes the element-by-element (Hadamard) product.

The $K^2 \times K^2$ asymptotic VCE of $\mathrm{vec}(\widehat{W}_s)$ is given by

$$\widetilde{Z}_s \widehat{\Sigma}(h) \widetilde{Z}_s'$$

where $\widehat{\Sigma}(h)$ is as derived previously, and

$$\widetilde{Z}_s = \left\{ \frac{\partial\, \mathrm{vec}(\widehat{W}_s)}{\partial\, \mathrm{vec}(\widehat{\Theta}_0^{sr})},\; \frac{\partial\, \mathrm{vec}(\widehat{W}_s)}{\partial\, \mathrm{vec}(\widehat{\Theta}_1^{sr})},\; \ldots,\; \frac{\partial\, \mathrm{vec}(\widehat{W}_s)}{\partial\, \mathrm{vec}(\widehat{\Theta}_h^{sr})} \right\}$$

$$\frac{\partial\, \mathrm{vec}(\widehat{W}_s)}{\partial\, \mathrm{vec}(\widehat{\Theta}_j^{sr})} = 2 \left\{ (I_K \otimes \widehat{F}_s^{-1}) \widetilde{D}(\widehat{\Theta}_j^{sr}) - (\widehat{W}_s' \otimes \widehat{F}_s^{-1}) \widetilde{D}(I_K) N_K (\widehat{\Theta}_j^{sr} \otimes I_K) \right\}$$

If $X$ is an $n \times n$ matrix, then $\widetilde{D}(X)$ is the $n^2 \times n^2$ matrix with $\mathrm{vec}(X)$ on the diagonal and zeros in all the off-diagonal elements, and $N_K$ is as defined previously.
The parameters $\Pi$ and $\Gamma_i$ estimated by vec are related to the VAR coefficients $A_i$ by

$$\Pi = \sum_{i=1}^{p} A_i - I_K \tag{6}$$

$$\Gamma_i = -\sum_{j=i+1}^{p} A_j \tag{7}$$

Defining

$$\overline{\Gamma} = I_K - \sum_{i=1}^{p-1} \Gamma_i$$

and rearranging (6) and (7) yields

$$A_1 = \Pi + \Gamma_1 + I_K$$
$$A_i = \Gamma_i - \Gamma_{i-1} \qquad \text{for } i = \{2, \ldots, p - 1\}$$

and

$$A_p = -\Gamma_{p-1}$$

Using these formulas, we can back out estimates of $A_i$ from the estimates of $\Pi$ and the $\Gamma_i$ produced by vec. Then we simply use the formulas for the IRFs and OIRFs presented in Impulse–response function formulas for VARs.

The running sums of the IRFs and OIRFs over the steps within each impulse–response pair are the cumulative IRFs and OIRFs.
Algorithms for bootstrapping the VAR IRF and FEVD standard errors
irf create offers two bootstrap algorithms for estimating the standard errors of the various IRFs and FEVDs. Both var and svar contain estimators for the coefficients in a VAR that are conditional on the first p observations. The two bootstrap algorithms are also conditional on the first p observations.

Specifying the bs option calculates the standard errors by bootstrapping the residuals. For a bootstrap with R repetitions, this method uses the following algorithm:
1. Fit the model and save the estimated parameters.
2. Use the estimated coefficients to calculate the residuals.
3. Repeat steps 3a to 3d R times.
   3a. Draw a simple random sample of size T with replacement from the residuals. The random samples are drawn over the K x 1 vectors of residuals. When the tth vector is drawn, all K residuals are selected. This preserves the contemporaneous correlations among the residuals.
   3b. Use the p initial observations, the sampled residuals, and the estimated coefficients to construct a new sample dataset.
   3c. Fit the model and calculate the different IRFs and FEVDs.
   3d. Save these estimates as observation r in the bootstrapped dataset.
4. For each IRF and FEVD, the estimated standard deviation from the R bootstrapped estimates is the estimated standard error of that impulse–response function or forecast-error variance decomposition.

Specifying the bsp option estimates the standard errors by a multivariate normal parametric bootstrap. The algorithm for the multivariate normal parametric bootstrap is identical to the one above, with the exception that 3a is replaced by 3a(bsp):
   3a(bsp). Draw T pseudovariates from a multivariate normal distribution with covariance matrix $\widehat{\Sigma}$.
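In command form, the two algorithms correspond to the bs and bsp options; a minimal sketch (dataset and model from the earlier examples; the irfnames and file name are hypothetical) is:

. use http://www.stata-press.com/data/r13/lutkepohl2
. var dln_inv dln_inc dln_consump, lags(1/2)
. set seed 123456
. irf create resid_bs, step(8) bs reps(200) nodots set(bsfile)
. irf create param_bs, step(8) bsp reps(200) nodots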
The IRF for an ARMA model is computed from

$$\psi(L) = \frac{\theta(L)}{\rho(L)}$$

Expanding the above, we obtain

$$\psi_0 + \psi_1 L + \psi_2 L^2 + \cdots = \frac{1 + \theta_1 L + \theta_2 L^2 + \cdots}{1 - \rho_1 L - \rho_2 L^2 - \cdots}$$

Equating the coefficients on like powers of $L$ yields the recursion

$$\widehat{\psi}_i = \widehat{\theta}_i + \sum_{j=1}^{p} \widehat{\rho}_j \widehat{\psi}_{i-j}$$

The asymptotic standard errors for the IRF for ARMA are calculated using the delta method. Let $\widehat{\Sigma}$ be the estimate of the variance–covariance matrix for $\widehat{\rho}$ and $\widehat{\theta}$, and let $\nabla \psi_i$ be the vector of derivatives of $\widehat{\psi}_i$ with respect to $\widehat{\rho}$ and $\widehat{\theta}$. Then the standard errors for $\widehat{\psi}_i$ are calculated as

$$\nabla \psi_i \, \widehat{\Sigma} \, \nabla \psi_i'$$

The IRF for the ARFIMA($p, d, q$) model is obtained by applying the filter $(1 - L)^{-d}$ to $\psi(L)$. The filter is given by Hassler and Kokoszka (2010) as

$$(1 - L)^{-d} = \sum_{i=0}^{\infty} b_i L^i$$

with $b_0 = 1$ and subsequent $\widehat{b}_i$ given by the recursion

$$\widehat{b}_i = \frac{\widehat{d} + i - 1}{i}\, \widehat{b}_{i-1}$$

The resulting IRF is then given by

$$\widehat{\psi}_i = \sum_{j=0}^{i} \widehat{\psi}_j \widehat{b}_{i-j}$$

The asymptotic standard errors for the IRF for ARFIMA are calculated using the delta method. Let $\widehat{\Sigma}$ be the estimate of the variance–covariance matrix for $\widehat{\rho}$, $\widehat{\theta}$, and $\widehat{d}$, and let $\nabla \psi_i$ be a matrix of derivatives of $\widehat{\psi}_i$ with respect to $\widehat{\rho}$, $\widehat{\theta}$, and $\widehat{d}$. Then the standard errors for $\widehat{\psi}_i$ are calculated as

$$\nabla \psi_i \, \widehat{\Sigma} \, \nabla \psi_i'$$
References
Amisano, G., and C. Giannini. 1997. Topics in Structural VAR Econometrics. 2nd ed. Heidelberg: Springer.
Christiano, L. J., M. Eichenbaum, and C. L. Evans. 1999. Monetary policy shocks: What have we learned and to what end? In Handbook of Macroeconomics: Volume 1A, ed. J. B. Taylor and M. Woodford. New York: Elsevier.
Hamilton, J. D. 1994. Time Series Analysis. Princeton: Princeton University Press.
Hassler, U., and P. Kokoszka. 2010. Impulse responses of fractionally integrated processes with long memory. Econometric Theory 26: 1855–1861.
Johansen, S. 1995. Likelihood-Based Inference in Cointegrated Vector Autoregressive Models. Oxford: Oxford University Press.
Lütkepohl, H. 1993. Introduction to Multiple Time Series Analysis. 2nd ed. New York: Springer.
———. 2005. New Introduction to Multiple Time Series Analysis. New York: Springer.
Serfling, R. J. 1980. Approximation Theorems of Mathematical Statistics. New York: Wiley.
Sims, C. A. 1980. Macroeconomics and reality. Econometrica 48: 1–48.
Stock, J. H., and M. W. Watson. 2001. Vector autoregressions. Journal of Economic Perspectives 15: 101–115.
Also see
[TS] irf Create and analyze IRFs, dynamic-multiplier functions, and FEVDs
[TS] var intro Introduction to vector autoregressive models
[TS] vec intro Introduction to vector error-correction models
Title
irf ctable – Combined tables of IRFs, dynamic-multiplier functions, and FEVDs

Syntax    Menu    Description    Options    Remarks and examples    Stored results    Also see

Syntax
irf ctable (spec_1) [(spec_2) ... (spec_N)] [, options]

where (spec_k) is

(irfname impulsevar responsevar stat [, spec options])

irfname is the name of a set of IRF results in the active IRF file. impulsevar should be specified as an endogenous variable for all statistics except dm and cdm; for those, specify as an exogenous variable. responsevar is an endogenous variable name. stat is one or more statistics from the list below:
stat       Description
irf        impulse–response function
oirf       orthogonalized impulse–response function
dm         dynamic-multiplier function
cirf       cumulative impulse–response function
coirf      cumulative orthogonalized impulse–response function
cdm        cumulative dynamic-multiplier function
fevd       Cholesky forecast-error variance decomposition
sirf       structural impulse–response function
sfevd      structural forecast-error variance decomposition
options            Description
set(filename)      make filename active
noci               do not report confidence intervals
stderror           include standard errors for each statistic
individual         place each block, or (spec_k), in its own table
title("text")      use text as overall table title
step(#)            set common maximum step for all tables
level(#)           set confidence level; default is level(95)

spec options       Description
noci               do not report confidence intervals
stderror           include standard errors for each statistic
level(#)           set confidence level; default is level(95)
ititle("text")     use text as individual subtitle for a specific table

spec options may be specified within a table specification, globally, or both. When specified in a table specification, the spec options affect only the specification in which they are used. When supplied globally, the spec options affect all table specifications. When specified in both places, options for the table specification take precedence.
ititle() does not appear in the dialog box.
Menu
Statistics > Multivariate time series > IRF and FEVD analysis > Combined tables
Description
irf ctable makes a table or a combined table of IRF results. Each block within a pair of matching parentheses, that is, each (spec_k), specifies the information for a specific table. irf ctable combines these tables into one table, unless the individual option is specified, in which case separate tables for each block are created.
irf ctable operates on the active IRF file; see [TS] irf set.
Options
set(filename) specifies the file to be made active; see [TS] irf set. If set() is not specified, the
active file is used.
noci suppresses reporting of the confidence intervals for each statistic. noci is assumed when the
model was fit by vec because no confidence intervals were estimated.
stderror specifies that standard errors for each statistic also be included in the table.
individual places each block, or (speck ), in its own table. By default, irf ctable combines all
the blocks into one table.
title("text") specifies a title for the table or the set of tables.
step(#) specifies the maximum number of steps to use for all tables. By default, each table is
constructed using all steps available.
level(#) specifies the default confidence level, as a percentage, for confidence intervals, when they
are reported. The default is level(95) or as set by set level; see [U] 20.7 Specifying the
width of confidence intervals.
The following option is available with irf ctable but is not shown in the dialog box:
ititle("text") specifies an individual subtitle for a specific table. ititle() may be specified only
when the individual option is also specified.
Each specific table displays the requested statistics corresponding to the specified combination of
irfname, impulsevar, and responsevar over the step horizon. By default, all the individual tables are
combined into one table. Also by default, all the steps, or periods, available are included in the table.
You can use the step() option to impose a common maximum for all tables.
Example 1
In example 1 of [TS] irf table, we fit a model using var and we saved the IRFs for two different
orderings. The commands we used were
. use http://www.stata-press.com/data/r13/lutkepohl2
. var dln_inv dln_inc dln_consump
. irf set results4
. irf create ordera, step(8)
. irf create orderb, order(dln_inc dln_inv dln_consump) step(8)
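The first combined table below was presumably produced by a call of roughly this form (a sketch reconstructed from the table's column headers, not a verbatim reproduction):

. irf ctable (ordera dln_inc dln_consump oirf fevd)
>            (orderb dln_inc dln_consump oirf fevd), noci stderror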
step         (1) oirf    (1) S.E.    (1) fevd    (1) S.E.
0            .005123     .000878     0           0
1            .001635     .000984     .288494     .077483
2            .002948     .000993     .294288     .073722
3            -.000221    .000662     .322454     .075562
4            .000811     .000586     .319227     .074063
5            .000462     .000333     .322579     .075019
6            .000044     .000275     .323552     .075371
7            .000151     .000162     .323383     .075314
8            .000091     .000114     .323499     .075386

step         (2) oirf    (2) S.E.    (2) fevd    (2) S.E.
0            .005461     .000925     0           0
1            .001578     .000988     .327807     .08159
2            .003307     .001042     .328795     .077519
3            -.00019     .000676     .370775     .080604
4            .000846     .000617     .366896     .079019
5            .000491     .000349     .370399     .079941
6            .000069     .000292     .371487     .080323
7            .000158     .000172     .371315     .080287
8            .000096     .000122     .371438     .080366
The output is displayed in one table. Because the table did not fit horizontally, it automatically
wrapped. At the bottom of the table is a list of keys that appear at the top of each column. The
results in the table above indicate that the orthogonalized IRFs do not change by much. Because the
estimated forecast-error variances do change, we might want to produce two tables that contain the
estimated forecast-error variance decompositions and their 95% confidence intervals:
. irf ctable (ordera dln_inc dln_consump fevd)
>
(orderb dln_inc dln_consump fevd), individual
Table 1
step         (1) fevd    (1) Lower   (1) Upper
0            0           0           0
1            .288494     .13663      .440357
2            .294288     .149797     .43878
3            .322454     .174356     .470552
4            .319227     .174066     .464389
5            .322579     .175544     .469613
6            .323552     .175826     .471277
7            .323383     .17577      .470995
8            .323499     .175744     .471253

Table 2
step         (2) fevd    (2) Lower   (2) Upper
0            0           0           0
1            .327807     .167893     .487721
2            .328795     .17686      .48073
3            .370775     .212794     .528757
4            .366896     .212022     .52177
5            .370399     .213718     .52708
6            .371487     .214058     .528917
7            .371315     .213956     .528674
8            .371438     .213923     .528953
Because we specified the individual option, the output contains two tables, one for each specific
table command. At the bottom of each table is a list of the keys used in that table and a note indicating
the level of the confidence intervals that we requested. The results from table 1 and table 2 indicate
that each estimated function is well within the confidence interval of the other, so we conclude that
the functions are not significantly different.
Stored results
irf ctable stores the following in r():
Scalars
    r(ncols)
    r(k_umax)
    r(k)
Macros
    r(key#)
    r(tnotes)
Also see
[TS] irf Create and analyze IRFs, dynamic-multiplier functions, and FEVDs
[TS] var intro Introduction to vector autoregressive models
[TS] vec intro Introduction to vector error-correction models
Title
irf describe – Describe an IRF file

Syntax    Menu    Description    Options    Remarks and examples    Stored results    Also see

Syntax
irf describe [irf_resultslist] [, options]

options                Description
set(filename)          make filename active
using(irf_filename)    describe irf_filename without making it active
detail                 show additional detail about each set of IRF results
variables              show the underlying structure of the IRF dataset

Menu
Statistics > Multivariate time series > IRF and FEVD analysis > Describe IRF file
Description
irf describe describes the IRF results saved in an IRF file.
If set() or using() is not specified, the IRF results of the active IRF file are described.
Options
set(filename) specifies the IRF file to be described and set; see [TS] irf set. If filename is specified
without an extension, .irf is assumed.
using(irf filename) specifies the IRF file to be described. The active IRF file, if any, remains
unchanged. If irf filename is specified without an extension, .irf is assumed.
detail specifies that irf describe display detailed information about each set of IRF results.
detail is implied when irf resultslist is specified.
variables is a programmer's option; it additionally displays the output produced by the describe command.
Example 1
. use http://www.stata-press.com/data/r13/lutkepohl2
(Quarterly SA West German macro data, Bil DM, from Lutkepohl 1993 Table E.1)
. var dln_inv dln_inc dln_consump if qtr<=tq(1978q4), lags(1/2) dfk
(output omitted )
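Before the file can be described, some IRF results must exist; a sketch of the intervening commands (the irfname and file name here are hypothetical) is:

. irf create order1, step(10) set(myirfs, new)
. irf describe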
[irf describe output omitted: each IRF result in the file was created after var]
Here we see a summary of the model we fit as well as the specification of the IRFs.
Stored results
irf describe stores the following in r():
Scalars
    r(N)
    r(k)
    r(width)
    r(N_max)
    r(k_max)
    r(widthmax)
    r(changed)
Macros
    r(_version)
    r(irfnames)
    r(irfname_model)
    r(irfname_order)
    r(irfname_exog)
    r(irfname_exogvar)
    r(irfname_constant)
    r(irfname_lags)
    r(irfname_exlags)
    r(irfname_tmin)
    r(irfname_tmax)
    r(irfname_timevar)
    r(irfname_tsfmt)
    r(irfname_varcns)
    r(irfname_svarcns)
    r(irfname_step)
    r(irfname_stderror)
    r(irfname_reps)
    r(irfname_version)
    r(irfname_rank)
    r(irfname_trend)
    r(irfname_veccns)
    r(irfname_sind)
Also see
[TS] irf Create and analyze IRFs, dynamic-multiplier functions, and FEVDs
[TS] var intro Introduction to vector autoregressive models
[TS] vec intro Introduction to vector error-correction models
Title
irf drop – Drop IRF results from the active IRF file

Syntax    Menu    Description    Option    Remarks and examples    Also see

Syntax
irf drop irf_resultslist [, set(filename)]

Menu
Statistics > Multivariate time series > IRF and FEVD analysis > Drop IRF results
Description
irf drop removes IRF results from the active IRF file.
Option
set(filename) specifies the file to be made active; see [TS] irf set. If set() is not specified, the
active file is used.
Example 1
. use http://www.stata-press.com/data/r13/lutkepohl2
(Quarterly SA West German macro data, Bil DM, from Lutkepohl 1993 Table E.1)
. var dln_inv dln_inc dln_consump if qtr<=tq(1978q4), lags(1/2) dfk
(output omitted )
. irf
(file
. irf
(file
. irf describe
Contains irf results from myirfs.irf (dated 4 Apr 2013 12:59)

irfname    model    endogenous variables and order (*)
order1     var
order2     var
order3     var
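Results can then be removed by listing their names; for instance (using the irfnames shown in the listing above, a sketch):

. irf drop order1 order2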
Also see
[TS] irf Create and analyze IRFs, dynamic-multiplier functions, and FEVDs
[TS] var intro Introduction to vector autoregressive models
[TS] vec intro Introduction to vector error-correction models
Title
irf graph – Graphs of IRFs, dynamic-multiplier functions, and FEVDs

Syntax    Menu    Description    Options    Remarks and examples    Stored results    Also see

Syntax
irf graph stat [, options]
stat       Description
irf        impulse–response function
oirf       orthogonalized impulse–response function
dm         dynamic-multiplier function
cirf       cumulative impulse–response function
coirf      cumulative orthogonalized impulse–response function
cdm        cumulative dynamic-multiplier function
fevd       Cholesky forecast-error variance decomposition
sirf       structural impulse–response function
sfevd      structural forecast-error variance decomposition
options                               Description
Main
    set(filename)                     make filename active
    irf(irfnames)                     use irfnames IRF result sets
    impulse(impulsevar)               use impulsevar as impulse variable
    response(endogvars)               use endogenous variables as response variables
    noci                              suppress confidence bands
    level(#)                          set confidence level; default is level(95)
    lstep(#)                          use # for first step
    ustep(#)                          use # for maximum step
Advanced
    individual                        graph each combination individually
    iname(namestub[, replace])        stub for naming the individual graphs
    isaving(filenamestub[, replace])  stub for saving the individual graphs to files
Plots
    plot#opts(cline options)          affect rendition of the #th statistic
CI plots
    ci#opts(area options)             affect rendition of the #th confidence interval
    twoway options                    any options other than by() documented in [G-3] twoway options
    byopts(by option)                 how subgraphs are combined, labeled, etc.

Menu
Statistics > Multivariate time series > IRF and FEVD analysis > Graphs by impulse or response
Description
irf graph graphs impulse–response functions (IRFs), dynamic-multiplier functions, and forecast-error variance decompositions (FEVDs) over time.
Options
Main
set(filename) specifies the file to be made active; see [TS] irf set. If set() is not specified, the
active file is used.
irf(irfnames) specifies the IRF result sets to be used. If irf() is not specified, each of the results in
the active IRF file is used. (Files often contain just one set of IRF results saved under one irfname;
in that case, those results are used.)
impulse(impulsevar) and response(endogvars) specify the impulse and response variables. Usually one of each is specified, and one graph is drawn. If multiple variables are specified, a separate subgraph is drawn for each impulse–response combination. If impulse() and response() are not specified, subgraphs are drawn for all combinations of impulse and response variables.
impulsevar should be specified as an endogenous variable for all statistics except dm or cdm; for those, specify as an exogenous variable.
noci suppresses graphing the confidence interval for each statistic. noci is assumed when the model
was fit by vec because no confidence intervals were estimated.
level(#) specifies the default confidence level, as a percentage, for confidence intervals, when they are reported. The default is level(95) or as set by set level; see [U] 20.7 Specifying the width of confidence intervals. Also see [TS] irf cgraph for a graph command that allows the confidence level to vary over the graphs.
lstep(#) specifies the first step, or period, to be included in the graphs. lstep(0) is the default.
ustep(#), # ≥ 1, specifies the maximum step, or period, to be included in the graphs.
Advanced
individual specifies that each graph be displayed individually. By default, irf graph combines
the subgraphs into one image. When individual is specified, byopts() may not be specified,
but the isaving() and iname() options may be specified.
iname(namestub[, replace]) specifies that the ith individual graph be stored in memory under the name namestubi, which must be a valid Stata name of 24 characters or fewer. iname() may be specified only with the individual option.
isaving(filenamestub[, replace]) specifies that the ith individual graph should be saved to disk in the current working directory under the name filenamestubi.gph. isaving() may be specified only when the individual option is also specified.
Plots
plot1opts(cline options), ..., plot4opts(cline options) affect the rendition of the plotted statistics (the stat). plot1opts() affects the rendition of the first statistic; plot2opts(), the second; and so on. cline options are as described in [G-3] cline options.
CI plots
ci1opts(area options) and ci2opts(area options) affect the rendition of the confidence intervals for the first (ci1opts()) and second (ci2opts()) statistics in stat. area options are as described in [G-3] area options.
twoway options are any of the options documented in [G-3] twoway options, excluding by(). These
include options for titling the graph (see [G-3] title options) and for saving the graph to disk (see
[G-3] saving option). Note that the saving() and name() options may not be combined with the
individual option.
byopts(by option) is as documented in [G-3] by option and may not be specified when individual
is specified. byopts() affects how the subgraphs are combined, labeled, etc.
Example 1
Suppose that we have results generated from two different SVAR models. We want to know whether
the shapes of the structural IRFs and the structural FEVDs are similar in the two models. We are also
interested in knowing whether the structural IRFs and the structural FEVDs differ significantly from
their Cholesky counterparts.
Filling in the background, we have previously issued the commands
. use http://www.stata-press.com/data/r13/lutkepohl2
. mat a = (., 0, 0\0,.,0\.,.,.)
. mat b = I(3)
. svar dln_inv dln_inc dln_consump, aeq(a) beq(b)
. irf create modela, set(results3) step(8)
. svar dln_inc dln_inv dln_consump, aeq(a) beq(b)
. irf create modelb, step(8)
To see whether the shapes of the structural IRFs and the structural FEVDs are similar in the two
models, we type
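The command was presumably of this form (a sketch consistent with the legend of the graph summarized below):

. irf graph oirf sirf, impulse(dln_inc) response(dln_consump)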
[Graph omitted: oirf and sirf for modela and modelb plotted against step; legend: 95% CI for oirf; orthogonalized irf]
The graph reveals that the oirf and the sirf estimates are essentially the same for both models and
that the shapes of the functions are very similar for the two models.
To see whether the structural IRFs and the structural FEVDs differ significantly from their Cholesky
counterparts, we type
. irf graph fevd sfevd, impulse(dln_inc) response(dln_consump) lstep(1)
> legend(cols(1))
[Graph omitted: fevd and sfevd for modela and modelb plotted against step; legend: 95% CI for fevd; 95% CI for sfevd; fraction of mse due to impulse; (structural) fraction of mse due to impulse. Graphs by irfname, impulse variable, and response variable]
This combined graph reveals that the shapes of these functions are also similar for the two models.
However, the graph illuminates one minor difference between them: In modela, the estimated structural
FEVD is slightly larger than the Cholesky-based estimates, whereas in modelb the Cholesky-based
estimates are slightly larger than the structural estimates. For both models, however, the structural
estimates are close to the center of the wide confidence intervals for the two estimates.
Example 2
Lets focus on the results from modela. Suppose that we were interested in examining how
dln consump responded to impulses in its own structural innovations, structural innovations to
dln inc, and structural innovations to dln inv. We type
. irf graph sirf, irf(modela) response(dln_consump)
[Graph omitted: structural IRFs of innovations in dln_consump, dln_inc, and dln_inv on dln_consump, plotted against step; legend: 95% CI; structural irf]
The upper-left graph shows the structural IRF of an innovation in dln consump on dln consump. It
indicates that the identification restrictions used in modela imply that a positive shock to dln consump
causes an increase in dln consump, followed by a decrease, followed by an increase, and so on,
until the effect dies out after roughly 5 periods.
The upper-right graph shows the structural IRF of an innovation in dln inc on dln consump,
indicating that a positive shock to dln inc causes an increase in dln consump, which dies out after
4 or 5 periods.
Technical note
[TS] irf table contains a technical note warning you to be careful in naming variables when you
fit models. What is said there applies equally here.
Stored results
irf graph stores the following in r():
Scalars
    r(k)             number of graphs
Macros
    r(stats)         statlist
    r(irfname)       resultslist
    r(impulse)       impulselist
    r(response)      responselist
    r(plot#)         contents of plot#opts()
    r(ci)            level applied to confidence intervals or noci
    r(ciopts#)       contents of ci#opts()
    r(byopts)        contents of byopts()
    r(saving)        supplied saving() option
    r(name)          supplied name() option
    r(individual)    individual or blank
    r(isaving)       contents of saving()
    r(iname)         contents of name()
    r(subtitle#)     subtitle for individual graph #
Also see
[TS] irf Create and analyze IRFs, dynamic-multiplier functions, and FEVDs
[TS] var intro Introduction to vector autoregressive models
[TS] vec intro Introduction to vector error-correction models
Title
irf ograph – Overlaid graphs of IRFs, dynamic-multiplier functions, and FEVDs

Syntax    Menu    Description    Options    Remarks and examples    Stored results    Also see

Syntax
irf ograph (spec_1) [(spec_2) ... (spec_15)] [, options]

where (spec_k) is

(irfname impulsevar responsevar stat [, spec options])

irfname is the name of a set of IRF results in the active IRF file or ".", which means the first named result in the active IRF file. impulsevar should be specified as an endogenous variable for all statistics except dm and cdm; for those, specify as an exogenous variable. responsevar is an endogenous variable name. stat is one or more statistics from the list below:
stat       Description
irf        impulse–response function
oirf       orthogonalized impulse–response function
dm         dynamic-multiplier function
cirf       cumulative impulse–response function
coirf      cumulative orthogonalized impulse–response function
cdm        cumulative dynamic-multiplier function
fevd       Cholesky forecast-error variance decomposition
sirf       structural impulse–response function
sfevd      structural forecast-error variance decomposition
options            Description
Plots
    plot options          define the IRF plots
    set(filename)         make filename active
Options
    common options        level and steps
    twoway options        any options other than by() documented in [G-3] twoway options

plot options       Description
Main
    set(filename)         make filename active
    irf(irfnames)         use irfnames IRF result sets
    impulse(impulsevar)   use impulsevar as impulse variable
    response(endogvars)   use endogenous variables as response variables
    ci                    add confidence bands to the graph

spec options       Description
Options
    common options        level and steps
Plot
    cline options         affect rendition of the plotted lines
CI plot
    ciopts(area options)  affect rendition of the confidence bands

common options     Description
Options
    level(#)              set confidence level; default is level(95)
    lstep(#)              use # for first step
    ustep(#)              use # for maximum step

common options may be specified within a plot specification, globally, or in both. When specified in a plot specification, the common options affect only the specification in which they are used. When supplied globally, the common options affect all plot specifications. When supplied in both places, options in the plot specification take precedence.
Menu
Statistics > Multivariate time series > IRF and FEVD analysis > Overlaid graph
Description
irf ograph displays plots of IRF results on one graph (one pair of axes).
To become familiar with this command, type db irf ograph.
Options
Plots
plot options define the IRF plots and are found under the Main, Plot, and CI plot tabs.
set(filename) specifies the file to be made active; see [TS] irf set. If set() is not specified, the active file is used.
Main
set(filename) specifies the file to be made active; see [TS] irf set. If set() is not specified, the
active file is used.
irf(irfnames) specifies the IRF result sets to be used. If irf() is not specified, each of the results in
the active IRF file is used. (Files often contain just one set of IRF results saved under one irfname;
in that case, those results are used.)
impulse(varlist) and response(endogvars) specify the impulse and response variables. Usually
one of each is specified, and one graph is drawn. If multiple variables are specified, a separate
subgraph is drawn for each impulseresponse combination. If impulse() and response() are
not specified, subgraphs are drawn for all combinations of impulse and response variables.
ci adds confidence bands to the graph. The noci option may be used within a plot specification to
suppress its confidence bands when the ci option is supplied globally.
Plot
cline options affect the rendition of the plotted lines; see [G-3] cline options.
CI plot
ciopts(area options) affects the rendition of the confidence bands for the plotted statistic; see
[G-3] area options. ciopts() implies ci.
Options
level(#) specifies the confidence level, as a percentage, for confidence bands; see [U] 20.7 Specifying
the width of confidence intervals.
lstep(#) specifies the first step, or period, to be included in the graph. lstep(0) is the default.
ustep(#), # ≥ 1, specifies the maximum step, or period, to be included.
twoway options are any of the options documented in [G-3] twoway options, excluding by(). These
include options for titling the graph (see [G-3] title options) and for saving the graph to disk (see
[G-3] saving option).
Example 1
We have previously issued the commands
. use http://www.stata-press.com/data/r13/lutkepohl2
. var dln_inv dln_inc dln_consump if qtr<=tq(1978q4), lags(1/2) dfk
. irf create order1, step(10) set(myirf1, new)
. irf create order2, step(10) order(dln_inc dln_inv dln_consump)
We now wish to compare the oirf for impulse dln inc and response dln consump for two different
Cholesky orderings:
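The command was presumably of this form (a sketch reconstructed from the legend of the graph summarized below):

. irf ograph (order1 dln_inc dln_consump oirf) (order2 dln_inc dln_consump oirf)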
[Graph omitted: the two OIRFs plotted against step (0 to 10); legend: order1: oirf of dln_inc -> dln_consump; order2: oirf of dln_inc -> dln_consump]
Technical note
Graph options allow you to change the appearance of each plot. The following graph contains the plots of the forecast-error variance decompositions (FEVDs) for impulse dln_inc and each response, using the results from the first collection of results in the active IRF file (using the "." shortcut). In the second plot, we supply the clpat(dash) option (an abbreviation for clpattern(dash)) to give the line a dashed pattern. In the third plot, we supply the m(o) clpat(dash_dot) recast(connected) options to get small circles connected by a line with a dash-dot pattern; the cilines option plots the confidence bands by using lines instead of areas. We use the title() option to add a descriptive title to the graph and supply the ci option globally to add confidence bands to all the plots.
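A command matching that description would look roughly like this (a sketch; the title text is invented):

. irf ograph (. dln_inc dln_inc fevd)
>            (. dln_inc dln_consump fevd, clpat(dash))
>            (. dln_inc dln_inv fevd, m(o) clpat(dash_dot) recast(connected) cilines),
>            ci title("FEVDs of dln_inc")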
[Graph omitted: FEVD of dln_inc on each response plotted against step (0 to 10); legend: 95% CI of fevd of dln_inc -> dln_inc; 95% CI of fevd of dln_inc -> dln_consump; 95% CI of fevd of dln_inc -> dln_inv; fevd of dln_inc -> dln_inc; fevd of dln_inc -> dln_consump; fevd of dln_inc -> dln_inv]
Stored results
irf ograph stores the following in r():
Scalars
    r(plots)
    r(ciplots)
Macros
    r(irfname#)
    r(impulse#)
    r(response#)
    r(stat#)
    r(ci#)
Also see
[TS] irf Create and analyze IRFs, dynamic-multiplier functions, and FEVDs
[TS] var intro Introduction to vector autoregressive models
[TS] vec intro Introduction to vector error-correction models
Title
irf rename – Rename an IRF result in an IRF file

Syntax    Menu    Description    Option    Remarks and examples    Stored results    Also see

Syntax
irf rename oldname newname [, set(filename)]

Menu
Statistics > Multivariate time series > IRF and FEVD analysis > Rename IRF results
Description
irf rename changes the name of a set of IRF results saved in the active IRF file.
Option
set(filename) specifies the file to be made active; see [TS] irf set. If set() is not specified, the
active file is used.
Example 1
. use http://www.stata-press.com/data/r13/lutkepohl2
(Quarterly SA West German macro data, Bil DM, from Lutkepohl 1993 Table E.1)
. var dln_inv dln_inc dln_consump if qtr<=tq(1978q4), lags(1/2) dfk
(output omitted )
. irf ...
(file ...)
. irf ...
(file ...)
[irf describe output omitted: listings of the IRF results, each created after var]
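The rename itself takes the form (a sketch; the result names here are hypothetical):

. irf rename order1 original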
Stored results
irf rename stores the following in r():
Macros
    r(irfnames)
    r(oldnew)
Also see
[TS] irf Create and analyze IRFs, dynamic-multiplier functions, and FEVDs
[TS] var intro Introduction to vector autoregressive models
[TS] vec intro Introduction to vector error-correction models
Title
irf set – Set the active IRF file

Syntax    Menu    Description    Options    Remarks and examples    Stored results    Also see
Syntax
Report identity of active file
irf set
Set, and if necessary create, active file
irf set irf filename
Create, and if necessary replace, active file
irf set irf filename, replace
Clear any active IRF file
irf set, clear
Menu
Statistics > Multivariate time series > IRF and FEVD analysis > Set active IRF file
Description
In the first syntax, irf set reports the identity of the active file, if there is one. Also see [TS] irf
describe for obtaining reports on the contents of an IRF file.
In the second syntax, irf set irf filename specifies that the file be set as the active file and, if
the file does not exist, that it be created as well.
In the third syntax, irf set irf filename, replace specifies that even if file irf filename exists,
a new, empty file is to be created and set.
In the rarely used fourth syntax, irf set, clear specifies that, if any IRF file is set, it be unset
and that there be no active IRF file.
IRF files are just files: they can be erased by erase, listed by dir, and copied by copy; see
[D] erase, [D] dir, and [D] copy.
Options
replace specifies that if irf_filename already exists, the file is to be erased and a new, empty IRF
file is to be created in its place. If it does not already exist, a new, empty file is created.
clear unsets the active IRF file.
Remarks and examples

. irf set results1
(file results1.irf created)
(file results1.irf now active)

We specified the name results1, and results1.irf became the active file. The suffix .irf was
added for us.
irf set irf_filename can also be used to create a new file:
. use http://www.stata-press.com/data/r13/lutkepohl2
(Quarterly SA West German macro data, Bil DM, from Lutkepohl 1993 Table E.1)
. var dln_inc dln_consump, exog(l.dln_inv)
(output omitted )
. irf set results2
(file results2.irf created)
(file results2.irf now active)
. irf create order1
(file results2.irf updated)
Stored results
irf set stores the following in r():
Macros
  r(irffile)          name of active IRF file
Also see
[TS] irf Create and analyze IRFs, dynamic-multiplier functions, and FEVDs
[TS] var intro Introduction to vector autoregressive models
[TS] vec intro Introduction to vector error-correction models
Title
irf table Tables of IRFs, dynamic-multiplier functions, and FEVDs
Syntax      Menu      Description      Options      Remarks and examples      Stored results      Also see
Syntax
irf table [ stat ] [ , options ]

stat          Description
Main
  irf           impulse-response function
  oirf          orthogonalized impulse-response function
  dm            dynamic-multiplier function
  cirf          cumulative impulse-response function
  coirf         cumulative orthogonalized impulse-response function
  cdm           cumulative dynamic-multiplier function
  fevd          Cholesky forecast-error variance decomposition
  sirf          structural impulse-response function
  sfevd         structural forecast-error variance decomposition
If stat is not specified, all statistics are included, unless option nostructural is also specified, in which case
sirf and sfevd are excluded. You may specify more than one stat.
options                    Description
Main
  set(filename)               make filename active
  irf(irfnames)               use irfnames IRF result sets
  impulse(impulsevar)         use impulsevar as impulse variable
  response(endogvars)         use endogenous variables as response variables
  individual                  make an individual table for each result set
  title("text")               use text as overall table title
Options
  level(#)                    set confidence level; default is level(95)
  noci                        suppress confidence intervals
  stderror                    include standard errors in the tables
  nostructural                suppress sirf and sfevd if stat is not specified
  step(#)                     use common maximum step horizon # for all tables
Menu
Statistics > Multivariate time series > IRF and FEVD analysis > Tables by impulse or response
Description
irf table makes a table from the specified IRF results.
The rows of the tables are the time since impulse. Each column represents a combination of
impulse() variable and response() variable for a stat from the irf() results.
Options
Main
set(filename) specifies the file to be made active; see [TS] irf set. If set() is not specified, the
active file is used.
All results are obtained from one IRF file. If you have results in different files that you want in
one table, use irf add to copy results into one file; see [TS] irf add.
irf(irfnames) specifies the IRF result sets to be used. If irf() is not specified, all the results in the
active IRF file are used. (Files often contain just one set of IRF results, saved under one irfname;
in that case, those results are used. When there are multiple IRF results, you may also wish to
specify the individual option.)
impulse(impulsevar) specifies the impulse variables for which the statistics are to be reported. If
impulse() is not specified, each model variable, in turn, is used. impulsevar should be specified
as an endogenous variable for all statistics except dm or cdm; for those, specify as an exogenous
variable.
response(endogvars) specifies the response variables for which the statistics are to be reported. If
response() is not specified, each endogenous variable, in turn, is used.
individual specifies that each set of IRF results be placed in its own table, with its own title and
footer. By default, irf table places all the IRF results in one table with one title and one footer.
individual may not be combined with title().
title("text") specifies a title for the overall table.
Options
level(#) specifies the default confidence level, as a percentage, for confidence intervals, when they
are reported. The default is level(95) or as set by set level; see [U] 20.7 Specifying the
width of confidence intervals.
noci suppresses reporting of the confidence intervals for each statistic. noci is assumed when the
model was fit by vec because no confidence intervals were estimated.
stderror specifies that standard errors for each statistic also be included in the table.
nostructural specifies that stat, when not specified, exclude sirf and sfevd.
step(#) specifies the maximum step horizon for all tables. If step() is not specified, each table is
constructed using all steps available.
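For instance, a sketch combining several of these options (ordera is the result set created in
example 1 below, and the variable names follow that example):

. irf table fevd oirf, irf(ordera) impulse(dln_inc) response(dln_consump) level(90) stderror step(4)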
Example 1
We have fit a model with var, and we saved the IRFs from two different orderings. The commands
we previously used were
. use http://www.stata-press.com/data/r13/lutkepohl2
. var dln_inv dln_inc dln_consump
. irf set results4
. irf create ordera, step(8)
. irf create orderb, order(dln_inc dln_inv dln_consump) step(8)
. irf table oirf fevd, impulse(dln_inc) response(dln_consump) noci stderror

                 (1)        (1)        (1)        (1)
  step          oirf       S.E.       fevd       S.E.

  0          .005123    .000878          0          0
  1          .001635    .000984    .288494    .077483
  2          .002948    .000993    .294288    .073722
  3         -.000221    .000662    .322454    .075562
  4          .000811    .000586    .319227    .074063
  5          .000462    .000333    .322579    .075019
  6          .000044    .000275    .323552    .075371
  7          .000151    .000162    .323383    .075314
  8          .000091    .000114    .323499    .075386

                 (2)        (2)        (2)        (2)
  step          oirf       S.E.       fevd       S.E.

  0          .005461    .000925          0          0
  1          .001578    .000988    .327807     .08159
  2          .003307    .001042    .328795    .077519
  3          -.00019    .000676    .370775    .080604
  4          .000846    .000617    .366896    .079019
  5          .000491    .000349    .370399    .079941
  6          .000069    .000292    .371487    .080323
  7          .000158    .000172    .371315    .080287
  8          .000096    .000122    .371438    .080366

(1) irfname = ordera, impulse = dln_inc, and response = dln_consump
(2) irfname = orderb, impulse = dln_inc, and response = dln_consump
The output is displayed as a single table; because the table did not fit horizontally, it wrapped
automatically. At the bottom of the table is a definition of the keys that appear at the top of each
column. The results in the table above indicate that the orthogonalized IRFs do not change by much.
Example 2
Because the estimated FEVDs do change noticeably, we might want to produce two tables that
contain the estimated FEVDs and their 95% confidence intervals:
. irf table fevd, impulse(dln_inc) response(dln_consump) individual
Results from ordera
                 (1)        (1)        (1)
  step          fevd      Lower      Upper

  0                0          0          0
  1          .288494     .13663    .440357
  2          .294288    .149797     .43878
  3          .322454    .174356    .470552
  4          .319227    .174066    .464389
  5          .322579    .175544    .469613
  6          .323552    .175826    .471277
  7          .323383     .17577    .470995
  8          .323499    .175744    .471253

95% lower and upper bounds reported
(1) irfname = ordera, impulse = dln_inc, and response = dln_consump
Results from orderb

                 (1)        (1)        (1)
  step          fevd      Lower      Upper

  0                0          0          0
  1          .327807    .167893    .487721
  2          .328795     .17686     .48073
  3          .370775    .212794    .528757
  4          .366896    .212022     .52177
  5          .370399    .213718     .52708
  6          .371487    .214058    .528917
  7          .371315    .213956    .528674
  8          .371438    .213923    .528953

95% lower and upper bounds reported
(1) irfname = orderb, impulse = dln_inc, and response = dln_consump
Because we specified the individual option, the output contains two tables, one for each set of
IRF results. Examining the results in the tables indicates that each of the estimated functions is well
within the confidence interval of the other, so we conclude that the functions are not significantly
different.
Technical note
Be careful in how you name variables when you fit models. Say that you fit one model with var
and used time-series operators to form one of the endogenous variables
. var d.ln_inv
...
Say that you saved IRF results from both (perhaps they differ in the number of lags). Now you
wish to use irf table to compare them. You would not be able to specify response(d.ln_inv)
or response(dln_inv) because neither variable is in both models. Similarly, you could not specify
impulse(d.ln_inv) or impulse(dln_inv) for the same reason.
All is not lost; if impulse() is not specified, all endogenous variables are used, and similarly if
response() is not specified, so you could obtain the result you desired by simply not specifying
the options, but you will also obtain a lot more, besides. If you want to specify the impulse() or
response() options, be sure to name variables consistently.
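One way to keep names consistent, sketched here with the variable from this note, is to generate the
transformed variable once and use the same name in both fits:

. generate dln_inv = D.ln_inv
. var dln_inv ..., lags(1)
. var dln_inv ..., lags(1/2)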
Also, you may forget how the endogenous variables were named. If so, irf describe, detail
can provide the answer. In the output of irf describe, the endogenous variables are listed next to
endog.
Stored results
If the individual option is not specified, irf table stores the following in r():
Scalars
  r(ncols)
  r(k_umax)
  r(k)
Macros
  r(key#)
  r(tnotes)
If the individual option is specified, then for each irfname, irf table stores the following in
r():
Scalars
  r(irfname_ncols)
  r(irfname_k_umax)
  r(irfname_k)
Macros
  r(irfname_key#)
  r(irfname_tnotes)
Also see
[TS] irf Create and analyze IRFs, dynamic-multiplier functions, and FEVDs
[TS] var intro Introduction to vector autoregressive models
[TS] vec intro Introduction to vector error-correction models
Title
mgarch Multivariate GARCH models
Syntax      Description      Remarks and examples      References      Also see
Syntax
mgarch model eq [ eq ... eq ] [ if ] [ in ] [ , ... ]

Family                                     model
Vech
  diagonal vech                            dvech
Conditional correlation
  constant conditional correlation         ccc
  dynamic conditional correlation          dcc
  varying conditional correlation          vcc
See [TS] mgarch dvech, [TS] mgarch ccc, [TS] mgarch dcc, and [TS] mgarch vcc for details.
Description
mgarch estimates the parameters of multivariate generalized autoregressive conditional-heteroskedasticity (MGARCH) models. MGARCH models allow both the conditional mean and the
conditional covariance to be dynamic.
The general MGARCH model is so flexible that not all the parameters can be estimated. For this
reason, there are many MGARCH models that parameterize the problem more parsimoniously.
mgarch implements four commonly used parameterizations: the diagonal vech model, the constant
conditional correlation model, the dynamic conditional correlation model, and the time-varying
conditional correlation model.
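As a schematic illustration only (y1 and y2 are hypothetical return series, and each specification is
minimal rather than recommended), the four parameterizations are requested as follows:

. mgarch dvech (y1 y2 = ), arch(1) garch(1)
. mgarch ccc (y1 y2 = ), arch(1) garch(1)
. mgarch dcc (y1 y2 = ), arch(1) garch(1)
. mgarch vcc (y1 y2 = ), arch(1) garch(1)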
Remarks and examples
The general MGARCH model can be written as

    y_t = C x_t + \epsilon_t
    \epsilon_t = H_t^{1/2} \nu_t

where y_t is an m x 1 vector of dependent variables; C is a matrix of parameters; x_t is a vector of
independent variables, which may contain lags of y_t; H_t^{1/2} is the Cholesky factor of the
time-varying conditional covariance matrix H_t; and \nu_t is an m x 1 vector of independent and
identically distributed innovations. In the diagonal vech model,

    vech(H_t) = s + A vech(\epsilon_{t-1} \epsilon_{t-1}') + B vech(H_{t-1})        (1)
where the vech() function stacks the unique elements that lie on or below the main diagonal in a
symmetric matrix into a vector, s is a vector of parameters, and A and B are conformable matrices
of parameters. Because this model uses the vech() function to extract and model the unique elements
of Ht , it is also known as the VECH model.
Because it is a conditional covariance matrix, Ht must be positive definite. Equation (1) can be
used to show that the parameters in s, A, and B are not uniquely identified and that further restrictions
must be placed on s, A, and B to ensure that Ht is positive definite for all t.
The various MGARCH models proposed in the literature differ in how they trade off flexibility
and parsimony in their specifications for Ht . Increased flexibility allows a model to capture more
complex Ht processes. Increased parsimony makes parameter estimation feasible for more datasets.
An important measure of the flexibility–parsimony trade-off is how fast the number of model parameters
increases with the number of time series m, because many applied models use multiple time series.
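For instance, in the general vech model (1), s has m(m + 1)/2 elements and A and B each have
{m(m + 1)/2}^2 elements, so even m = 3 implies 6 + 36 + 36 = 78 parameters in the covariance
equation alone; the parameterizations below grow far more slowly in m.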
In the conditional correlation (CC) family of MGARCH models,

    H_t = D_t^{1/2} R_t D_t^{1/2}        (2)

where D_t is a diagonal matrix of conditional variances, each of which follows a univariate GARCH
process, and R_t is a matrix of conditional correlations; the parameterizations of R_t vary across
models.

Equation (2) implies that

    h_{ij,t} = \rho_{ij,t} \sigma_{i,t} \sigma_{j,t}        (3)

where \sigma_{i,t}^2 is modeled by a univariate GARCH process. Equation (3) highlights that CC models use
nonlinear combinations of univariate GARCH models to represent the conditional covariances and that
the parameters in the model for \rho_{ij,t} describe the extent to which the errors from equations i and j
move together.
Comparing (1) and (2) shows that the number of parameters increases more slowly with the number
of time series in a CC model than in a DVECH model.
The three CC models implemented in mgarch differ in how they parameterize Rt .
Bollerslev (1990) proposed a CC MGARCH model in which the correlation matrix is time invariant.
It is for this reason that the model is known as a constant conditional correlation (CCC) MGARCH
model. Restricting Rt to a constant matrix reduces the number of parameters and simplifies the
estimation but may be too strict in many empirical applications.
See [TS] mgarch ccc for more details about this model.
Engle (2002) introduced a dynamic conditional correlation (DCC) MGARCH model in which the
conditional quasicorrelations Rt follow a GARCH(1,1)-like process. (As described by Engle [2009]
and Aielli [2009], the parameters in Rt are not standardized to be correlations and are thus known
as quasicorrelations.) To preserve parsimony, all the conditional quasicorrelations are restricted to
follow the same dynamics. The DCC model is significantly more flexible than the CCC model without
introducing an unestimable number of parameters for a reasonable number of series.
See [TS] mgarch dcc for more details about this model.
Tse and Tsui (2002) derived the varying conditional correlation (VCC) MGARCH model in which the
conditional correlations at each period are a weighted sum of a time-invariant component, a measure
of recent correlations among the residuals, and last period's conditional correlations. For parsimony,
all the conditional correlations are restricted to follow the same dynamics.
See [TS] mgarch vcc for more details about this model.
It is generally accepted that there could be more low-level theoretical work done to substantiate
the claims that the ML and QML estimators are consistent and asymptotically normally distributed.
These widely applied estimators have been subjected to many Monte Carlo studies that show that the
large-sample theory performs well in finite samples.
The distribution(t) option causes the mgarch commands to estimate the parameters of the
corresponding model by ML assuming that the errors come from a multivariate Student t distribution.
The choice between the multivariate normal and the multivariate t distributions is one between
robustness and efficiency. If the disturbances come from a multivariate Student t, then the ML
estimates based on the multivariate Student t assumption will be consistent and efficient, while the
QML estimates based on the multivariate normal assumption will be consistent but not efficient. In
contrast, if the disturbances come from a well-behaved distribution that is neither multivariate Student
t nor multivariate normal, then the ML estimates based on the multivariate Student t assumption
will not be consistent, while the QML estimates based on the multivariate normal assumption will be
consistent but not efficient.
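In practice, this choice is made through the distribution() option; a sketch with hypothetical series
y1 and y2:

. mgarch ccc (y1 y2 = ), arch(1) garch(1)                  // QML under multivariate normality
. mgarch ccc (y1 y2 = ), arch(1) garch(1) distribution(t)  // ML under multivariate Student t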
Fiorentini and Sentana (2007) compare the ML and QML estimators implemented in mgarch and
provide many useful technical results pertaining to the estimators.
References
Aielli, G. P. 2009. Dynamic Conditional Correlations: On Properties and Estimation. Working paper, Dipartimento di
Statistica, University of Florence, Florence, Italy.
Bauwens, L., S. Laurent, and J. V. K. Rombouts. 2006. Multivariate GARCH models: A survey. Journal of Applied
Econometrics 21: 79–109.
Berkes, I., and L. Horváth. 2003. The rate of consistency of the quasi-maximum likelihood estimator. Statistics and
Probability Letters 61: 133–143.
Bollerslev, T. 1990. Modelling the coherence in short-run nominal exchange rates: A multivariate generalized ARCH
model. Review of Economics and Statistics 72: 498–505.
Bollerslev, T., R. F. Engle, and D. B. Nelson. 1994. ARCH models. In Vol. 4 of Handbook of Econometrics, ed.
R. F. Engle and D. L. McFadden. Amsterdam: Elsevier.
Bollerslev, T., R. F. Engle, and J. M. Wooldridge. 1988. A capital asset pricing model with time-varying covariances.
Journal of Political Economy 96: 116–131.
Comte, F., and O. Lieberman. 2003. Asymptotic theory for multivariate GARCH processes. Journal of Multivariate
Analysis 84: 61–84.
Engle, R. F. 2002. Dynamic conditional correlation: A simple class of multivariate generalized autoregressive conditional
heteroskedasticity models. Journal of Business & Economic Statistics 20: 339–350.
———. 2009. Anticipating Correlations: A New Paradigm for Risk Management. Princeton, NJ: Princeton University
Press.
Fiorentini, G., and E. Sentana. 2007. On the efficiency and consistency of likelihood estimation in multivariate
conditionally heteroskedastic dynamic regression models. Working paper 0713, CEMFI, Madrid, Spain.
ftp://ftp.cemfi.es/wp/07/0713.pdf.
Jeantheau, T. 1998. Strong consistency of estimators for multivariate ARCH models. Econometric Theory 14: 70–86.
Ling, S., and M. McAleer. 2003. Asymptotic theory for a vector ARMA–GARCH model. Econometric Theory 19:
280–310.
Silvennoinen, A., and T. Teräsvirta. 2009. Multivariate GARCH models. In Handbook of Financial Time Series, ed.
T. G. Andersen, R. A. Davis, J.-P. Kreiß, and T. Mikosch, 201–229. Berlin: Springer.
Tse, Y. K., and A. K. C. Tsui. 2002. A multivariate generalized autoregressive conditional heteroscedasticity model
with time-varying correlations. Journal of Business & Economic Statistics 20: 351–362.
Also see
[TS] arch Autoregressive conditional heteroskedasticity (ARCH) family of estimators
[TS] var Vector autoregressive models
[U] 20 Estimation and postestimation commands
Title
mgarch ccc Constant conditional correlation multivariate GARCH models
Syntax      Menu      Description      Options      Remarks and examples      Stored results      Methods and formulas      References      Also see
Syntax
mgarch ccc eq [ eq ... eq ] [ if ] [ in ] [ , options ]

where each eq has the form

    (depvars = [ indepvars ] [ , eqoptions ])

options                    Description
Model
  arch(numlist)               ARCH terms for all equations
  garch(numlist)              GARCH terms for all equations
  het(varlist)                include varlist in the specification of the conditional variance for all equations
  distribution(dist [#])      use the specified distribution for the errors; dist may be gaussian, normal, or t
  unconcentrated              perform optimization on unconcentrated log likelihood
  constraints(numlist)        apply specified linear constraints
SE/Robust
  vce(vcetype)                vcetype may be oim or robust
Reporting
  level(#)                    set confidence level; default is level(95)
  nocnsreport                 do not display constraints
  display_options             control display of output
Maximization
  maximize_options            control the maximization process; seldom used
  from(matname)               initial values for the coefficients; seldom used
  coeflegend                  display legend instead of statistics
eqoptions                  Description
  noconstant                  suppress constant term in the mean equation
  arch(numlist)               ARCH terms
  garch(numlist)              GARCH terms
  het(varlist)                include varlist in the specification of the conditional variance
You must tsset your data before using mgarch ccc; see [TS] tsset.
indepvars and varlist may contain factor variables; see [U] 11.4.3 Factor variables.
depvars, indepvars, and varlist may contain time-series operators; see [U] 11.4.4 Time-series varlists.
by, fp, rolling, and statsby are allowed; see [U] 11.1.10 Prefix commands.
coeflegend does not appear in the dialog box.
See [U] 20 Estimation and postestimation commands for more capabilities of estimation commands.
Menu
Statistics > Multivariate time series > Multivariate GARCH
Description
mgarch ccc estimates the parameters of constant conditional correlation (CCC) multivariate generalized autoregressive conditionally heteroskedastic (MGARCH) models in which the conditional variances
are modeled as univariate generalized autoregressive conditionally heteroskedastic (GARCH) models
and the conditional covariances are modeled as nonlinear functions of the conditional variances. The
conditional correlation parameters that weight the nonlinear combinations of the conditional variances
are constant in the CCC MGARCH model.
The CCC MGARCH model is less flexible than the dynamic conditional correlation MGARCH model
(see [TS] mgarch dcc) and varying conditional correlation MGARCH model (see [TS] mgarch vcc),
which specify GARCH-like processes for the conditional correlations. The conditional correlation
MGARCH models are more parsimonious than the diagonal vech MGARCH model (see [TS] mgarch
dvech).
Options
Model
arch(numlist) specifies the ARCH terms for all equations in the model. By default, no ARCH terms
are specified.
garch(numlist) specifies the GARCH terms for all equations in the model. By default, no GARCH
terms are specified.
het(varlist) specifies that varlist be included in the model in the specification of the conditional
variance for all equations. This varlist enters the variance specification collectively as multiplicative
heteroskedasticity.
distribution(dist [#]) specifies the assumed distribution for the errors. dist may be gaussian,
normal, or t.
gaussian and normal are synonyms; each causes mgarch ccc to assume that the errors come
from a multivariate normal distribution. # cannot be specified with either of them.
t causes mgarch ccc to assume that the errors follow a multivariate Student t distribution, and
the degree-of-freedom parameter is estimated along with the other parameters of the model. If
distribution(t #) is specified, then mgarch ccc uses a multivariate Student t distribution
with # degrees of freedom. # must be greater than 2.
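For example (with hypothetical series y1 and y2), distribution(t) estimates the degree-of-freedom
parameter, whereas distribution(t 5) fixes it at 5:

. mgarch ccc (y1 y2 = ), arch(1) garch(1) distribution(t)
. mgarch ccc (y1 y2 = ), arch(1) garch(1) distribution(t 5)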
unconcentrated specifies that optimization be performed on the unconcentrated log likelihood. The
default is to start with the concentrated log likelihood.
constraints(numlist) specifies linear constraints to apply to the parameter estimates.
SE/Robust
vce(vcetype) specifies the estimator for the variance–covariance matrix of the estimator.
vce(oim), the default, specifies to use the observed information matrix (OIM) estimator.
vce(robust) specifies to use the Huber/White/sandwich estimator.
Reporting
level(#), nocnsreport, and display_options; see [R] estimation options.
Maximization
maximize_options: difficult, technique(algorithm_spec), iterate(#), [no]log, trace,
gradient, showstep, hessian, showtolerance, tolerance(#), ltolerance(#),
nrtolerance(#), nonrtolerance, and from(matname); see [R] maximize for all options except
from(), and see below for information on from(). These options are seldom used.
from(matname) specifies initial values for the coefficients. from(b0) causes mgarch ccc to begin
the optimization algorithm with the values in b0. b0 must be a row vector, and the number of
columns must equal the number of parameters in the model.
The following option is available with mgarch ccc but is not shown in the dialog box:
coeflegend; see [R] estimation options.
Eqoptions
noconstant suppresses the constant term in the mean equation.
arch(numlist) specifies the ARCH terms in the equation. By default, no ARCH terms are specified.
This option may not be specified with model-level arch().
garch(numlist) specifies the GARCH terms in the equation. By default, no GARCH terms are specified.
This option may not be specified with model-level garch().
het(varlist) specifies that varlist be included in the specification of the conditional variance. This
varlist enters the variance specification collectively as multiplicative heteroskedasticity. This option
may not be specified with model-level het().
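A sketch of equation-level options in use (variable names are hypothetical): the first equation
suppresses its constant, and the second adds a second ARCH term and a multiplicative-heteroskedasticity
term:

. mgarch ccc (y1 = x1, noconstant arch(1) garch(1)) (y2 = x2, arch(1/2) garch(1) het(L.z))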
Remarks and examples
As discussed in [TS] mgarch, MGARCH models differ in the parsimony and flexibility of their
specifications for a time-varying conditional covariance matrix of the disturbances, denoted by Ht .
In the conditional correlation family of MGARCH models, the diagonal elements of Ht are modeled
as univariate GARCH models, whereas the off-diagonal elements are modeled as nonlinear functions
of the diagonal terms. In the CCC MGARCH model,
    h_{ij,t} = \rho_{ij} \sqrt{h_{ii,t} h_{jj,t}}

where the diagonal elements h_{ii,t} and h_{jj,t} follow univariate GARCH processes and \rho_{ij} is a
time-invariant weight interpreted as a conditional correlation.
In the dynamic conditional correlation (DCC) and varying conditional correlation (VCC) MGARCH
models discussed in [TS] mgarch dcc and [TS] mgarch vcc, the \rho_{ij} are allowed to vary over
time. Although the conditional-correlation structure provides a useful trade-off between parsimony
and flexibility in the DCC MGARCH and VCC MGARCH models, the time-invariant parameterization
used in the CCC MGARCH model is generally viewed as too restrictive for many applications; see
Silvennoinen and Teräsvirta (2009). The baseline CCC MGARCH estimates are frequently compared
with DCC MGARCH and VCC MGARCH estimates.
Technical note
Formally, the CCC MGARCH model derived by Bollerslev (1990) can be written as

    y_t = C x_t + \epsilon_t
    \epsilon_t = H_t^{1/2} \nu_t
    H_t = D_t^{1/2} R D_t^{1/2}

where D_t is a diagonal matrix of conditional variances,

    D_t = diag(\sigma_{1,t}^2, \sigma_{2,t}^2, ..., \sigma_{m,t}^2)

in which each \sigma_{i,t}^2 evolves according to a univariate GARCH model of the form

    \sigma_{i,t}^2 = s_i + \sum_{j=1}^{p_i} \alpha_j \epsilon_{i,t-j}^2 + \sum_{j=1}^{q_i} \beta_j \sigma_{i,t-j}^2

by default, or

    \sigma_{i,t}^2 = exp(\gamma_i z_{i,t}) + \sum_{j=1}^{p_i} \alpha_j \epsilon_{i,t-j}^2 + \sum_{j=1}^{q_i} \beta_j \sigma_{i,t-j}^2

when the het() option is specified, and R is the matrix of time-invariant conditional correlations,

    R = ( 1           \rho_{12}   ...   \rho_{1m}
          \rho_{12}   1           ...   \rho_{2m}
          ...         ...         ...   ...
          \rho_{1m}   \rho_{2m}   ...   1         )

This model is known as the constant conditional correlation MGARCH model because R is time
invariant.
Some examples
Example 1: Model with common covariates
We have daily data on the stock returns of three car manufacturers (Toyota, Nissan, and Honda)
from January 2, 2003, to December 31, 2010, in the variables toyota, nissan, and honda. We
model the conditional means of the returns as a first-order vector autoregressive process and the
conditional covariances as a CCC MGARCH process in which the variance of each disturbance term
follows a GARCH(1,1) process. We specify the noconstant option, because the returns have mean
zero. The estimated constants in the variance equations are near zero in this example because of how
the data are scaled.
. use http://www.stata-press.com/data/r13/stocks
(Data from Yahoo! Finance)
. mgarch ccc (toyota nissan honda = L.toyota L.nissan L.honda, noconstant), arch(1) garch(1)
(output omitted )
                                                Number of obs   =      2014
                                                Wald chi2(9)    =     17.46
                                                Prob > chi2     =    0.0420

                    Coef.   Std. Err.      z    P>|z|     [95% Conf. Interval]
toyota
      toyota
         L1.    -.0537817   .0353211    -1.52   0.128    -.1230098    .0154463
      nissan
         L1.      .026686    .024841     1.07   0.283    -.0220015    .0753734
      honda
         L1.    -.0043073   .0302761    -0.14   0.887    -.0636473    .0550327

ARCH_toyota
        arch
         L1.     .0615321   .0087313     7.05   0.000     .0444191    .0786452
       garch
         L1.     .9213798   .0110412    83.45   0.000     .8997395    .9430201
       _cons     4.42e-06   1.12e-06     3.93   0.000     2.21e-06    6.62e-06

nissan
      toyota
         L1.    -.0232321   .0400563    -0.58   0.562    -.1017411    .0552769
      nissan
         L1.    -.0299552   .0309362    -0.97   0.333    -.0905891    .0306787
      honda
         L1.     .0369229   .0360532     1.02   0.306    -.0337402    .1075859

ARCH_nissan
        arch
         L1.     .0740294   .0119353     6.20   0.000     .0506366    .0974222
       garch
         L1.     .9102547   .0142328    63.95   0.000     .8823589    .9381506
       _cons     6.36e-06   1.76e-06     3.61   0.000     2.91e-06    9.81e-06

honda
      toyota
         L1.    -.0378616    .036792    -1.03   0.303    -.1099727    .0342495
      nissan
         L1.     .0551649   .0272559     2.02   0.043     .0017444    .1085855
      honda
         L1.    -.0431919   .0331268    -1.30   0.192    -.1081193    .0217354

ARCH_honda
        arch
         L1.     .0433036   .0070224     6.17   0.000     .0295399    .0570674
       garch
         L1.      .939117    .010131    92.70   0.000     .9192605    .9589735
       _cons     5.02e-06   1.31e-06     3.83   0.000     2.45e-06    7.59e-06

corr(toyota,
     nissan)     .6532264   .0128035    51.02   0.000      .628132    .6783208
corr(toyota,
      honda)     .7185412   .0108132    66.45   0.000     .6973477    .7397347
corr(nissan,
      honda)     .6298972   .0135336    46.54   0.000     .6033717    .6564226
The iteration log has three parts: the dots from the search for initial values, the iteration log from
optimizing the concentrated log likelihood, and the iteration log from maximizing the unconcentrated
log likelihood. A detailed discussion of the optimization methods can be found in Methods and
formulas.
The header describes the estimation sample and reports a Wald test against the null hypothesis
that all the coefficients on the independent variables in the mean equations are zero. Here the null
hypothesis is rejected at the 5% level.
The output table first presents results for the mean or variance parameters used to model each
dependent variable. Subsequently, the output table presents results for the conditional correlation
parameters. For example, the conditional correlation between the standardized residuals for Toyota
and Nissan is estimated to be 0.65.
The output above indicates that we may not need all the vector autoregressive parameters, but that
each of the univariate ARCH, univariate GARCH, and conditional correlation parameters are statistically
significant. That the estimated conditional correlation parameters are positive and significant indicates
that the returns on these stocks rise or fall together.
That the conditional correlations are time invariant is a restrictive assumption. The DCC MGARCH
model and the VCC MGARCH model nest the CCC MGARCH model. When we test the time-invariance
assumption with Wald tests on the parameters of these more general models in [TS] mgarch dcc and
[TS] mgarch vcc, we reject the null hypothesis that these conditional correlations are time invariant.
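A sketch of that test (the DCC fit shown here abbreviates the one reported in [TS] mgarch dcc):

. quietly mgarch dcc (toyota nissan honda = L.toyota L.nissan L.honda, noconstant), arch(1) garch(1)
. test _b[Adjustment:lambda1] = _b[Adjustment:lambda2] = 0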
. mgarch ccc (toyota nissan = , noconstant) (honda = L.nissan, noconstant), arch(1) garch(1)
(output omitted )
                                                Number of obs   =      2014
                                                Wald chi2(1)    =      1.81
                                                Prob > chi2     =    0.1781

                    Coef.   Std. Err.      z    P>|z|     [95% Conf. Interval]
ARCH_toyota
        arch
         L1.     .0619604   .0087942     7.05   0.000      .044724    .0791968
       garch
         L1.     .9208961   .0110995    82.97   0.000     .8991414    .9426508
       _cons     4.43e-06   1.13e-06     3.94   0.000     2.23e-06    6.64e-06

ARCH_nissan
        arch
         L1.     .0773095    .012328     6.27   0.000     .0531471    .1014719
       garch
         L1.      .906088   .0147303    61.51   0.000     .8772171    .9349589
       _cons     6.77e-06   1.85e-06     3.66   0.000     3.14e-06    .0000104

honda
      nissan
         L1.     .0186628   .0138575     1.35   0.178    -.0084975    .0458231

ARCH_honda
        arch
         L1.     .0433741    .006996     6.20   0.000     .0296622    .0570861
       garch
         L1.     .9391094   .0100707    93.25   0.000     .9193712    .9588477
       _cons     5.02e-06   1.31e-06     3.83   0.000     2.45e-06    7.60e-06

corr(toyota,
     nissan)      .652299   .0128271    50.85   0.000     .6271583    .6774396
corr(toyota,
      honda)     .7189531   .0108005    66.57   0.000     .6977845    .7401218
corr(nissan,
      honda)      .628435   .0135653    46.33   0.000     .6018475    .6550225
It turns out that the coefficient on L1.nissan in the honda equation is now statistically insignificant.
We could further improve the model by removing L1.nissan from the model.
As expected, removing the insignificant parameters from conditional mean equations had almost
no effect on the estimated conditional variance parameters.
There is no mean equation for Toyota or Nissan. In [TS] mgarch ccc postestimation, we discuss
prediction from models without covariates.
. constraint 1 [ARCH_toyota]L.arch = [ARCH_nissan]L.arch
. constraint 2 [ARCH_toyota]L.garch = [ARCH_nissan]L.garch
. mgarch ccc (toyota nissan = , noconstant), arch(1) garch(1) constraints(1 2)
(iteration log omitted )
                                                Number of obs   =      2015
                                                Wald chi2(.)    =         .
                                                Prob > chi2     =         .
 ( 1)  [ARCH_toyota]L.arch - [ARCH_nissan]L.arch = 0
 ( 2)  [ARCH_toyota]L.garch - [ARCH_nissan]L.garch = 0

                    Coef.   Std. Err.      z    P>|z|     [95% Conf. Interval]
ARCH_toyota
        arch
         L1.     .0742678   .0095464     7.78   0.000     .0555572    .0929785
       garch
         L1.     .9131674   .0111558    81.86   0.000     .8913024    .9350323
       _cons     3.77e-06   1.02e-06     3.71   0.000     1.78e-06    5.77e-06

ARCH_nissan
        arch
         L1.     .0742678   .0095464     7.78   0.000     .0555572    .0929785
       garch
         L1.     .9131674   .0111558    81.86   0.000     .8913024    .9350323
       _cons     5.30e-06   1.36e-06     3.89   0.000     2.63e-06    7.97e-06

corr(toyota,
     nissan)      .651389   .0128482    50.70   0.000     .6262071    .6765709
We could test our constraints by fitting the unconstrained model and performing a likelihood-ratio
test. The results indicate that the restricted model is preferable.
. use http://www.stata-press.com/data/r13/acme
. mgarch ccc (acme = afrelated, noconstant arch(1) garch(1))
> (anvil = afinputs, arch(1/2) het(L.apex))
(output omitted )
                                                Number of obs   =      2499
                                                Wald chi2(2)    =   2212.30
                                                Prob > chi2     =    0.0000

                    Coef.   Std. Err.      z    P>|z|     [95% Conf. Interval]
acme
   afrelated     .9175148   .0651088    14.09   0.000     .7899039    1.045126

ARCH_acme
        arch
         L1.     .0798719   .0169526     4.71   0.000     .0466455    .1130983
       garch
         L1.     .7336823    .060157    12.20   0.000     .6157768    .8515877
       _cons     2.880836   .7602061     3.79   0.000     1.390859    4.370812

anvil
    afinputs    -1.015561   .0226437   -44.85   0.000    -1.059942     -.97118
       _cons     .0703606   .0211689     3.32   0.001     .0288703    .1118508

ARCH_anvil
        arch
         L1.     .4893288   .0286012    17.11   0.000     .4332714    .5453862
         L2.     .2782296   .0208172    13.37   0.000     .2374287    .3190305
        apex
         L1.     1.894972   .0616293    30.75   0.000     1.774181    2.015763
       _cons     .1034111   .0735512     1.41   0.160    -.0407466    .2475688

corr(acme,
      anvil)    -.5354047   .0143275   -37.37   0.000     -.563486   -.5073234
The results indicate that increases in the futures prices for related products lead to higher returns on
the Acme stock, and increased input prices lead to lower returns on the Anvil stock. In the conditional
variance equation for Anvil, the coefficient on L1.apex is positive and significant, which indicates
that an increase in the return on the Apex stock leads to more variability in the return on the Anvil
stock. That the estimated conditional correlation between the two returns is -0.54 indicates that these
returns tend to move in opposite directions; in other words, an increase in the return for the Acme
stock tends to be associated with a decrease in the return for the Anvil stock, and vice versa.
Stored results
mgarch ccc stores the following in e():
Scalars
  e(N)                 number of observations
  e(k)                 number of parameters
  e(k_aux)             number of auxiliary parameters
  e(k_extra)           number of extra estimates added to _b
  e(k_eq)              number of equations in e(b)
  e(k_dv)              number of dependent variables
  e(df_m)              model degrees of freedom
  e(ll)                log likelihood
  e(chi2)              χ2
  e(p)                 significance
  e(estdf)             1 if distribution parameter was estimated, 0 otherwise
  e(usr)               user-provided distribution parameter
  e(tmin)              minimum time in sample
  e(tmax)              maximum time in sample
  e(N_gaps)            number of gaps
  e(rank)              rank of e(V)
  e(ic)                number of iterations
  e(rc)                return code
  e(converged)         1 if converged, 0 otherwise

Macros
  e(cmd)               mgarch
  e(model)             ccc
  e(cmdline)           command as typed
  e(depvar)            names of dependent variables
  e(covariates)        list of covariates
  e(dv_eqs)            dependent variables with mean equations
  e(indeps)            independent variables in each equation
  e(tvar)              time variable
  e(title)             title in estimation output
  e(chi2type)          Wald; type of model χ2 test
  e(vce)               vcetype specified in vce()
  e(vcetype)           title used to label Std. Err.
  e(tmins)             formatted minimum time
  e(tmaxs)             formatted maximum time
  e(dist)              distribution for error term: gaussian or t
  e(arch)              specified ARCH terms
  e(garch)             specified GARCH terms
  e(technique)         maximization technique
  e(properties)        b V
  e(estat_cmd)         program used to implement estat
  e(predict)           program used to implement predict
  e(marginsok)         predictions allowed by margins
  e(marginsnotok)      predictions disallowed by margins

Matrices
  e(b)                 coefficient vector
  e(Cns)               constraints matrix
  e(ilog)              iteration log (up to 20 iterations)
  e(gradient)          gradient vector
  e(hessian)           Hessian matrix
  e(V)                 variance–covariance matrix of the estimators
  e(pinfo)             parameter information, used by predict

Functions
  e(sample)            marks estimation sample
Methods and formulas

mgarch ccc estimates the parameters by maximum likelihood. If the errors are assumed to be
multivariate normal, the log-likelihood contribution of observation t is

    l_t = -0.5 m log(2\pi) - 0.5 log{det(R)} - log{det(D_t^{1/2})} - 0.5 \tilde\epsilon_t R^{-1} \tilde\epsilon_t'        (1)

where \tilde\epsilon_t = D_t^{-1/2} \epsilon_t is an m x 1 vector of standardized residuals and \epsilon_t = y_t - C x_t. The
log-likelihood function is \sum_{t=1}^{T} l_t.

If we assume that \nu_t follow a multivariate t distribution with degrees of freedom (df) greater than
2, then the unconcentrated log-likelihood function for observation t is

    l_t = log \Gamma{(df + m)/2} - log \Gamma(df/2) - (m/2) log{(df - 2)\pi}
          - 0.5 log{det(R)} - log{det(D_t^{1/2})}
          - {(df + m)/2} log{1 + \tilde\epsilon_t R^{-1} \tilde\epsilon_t' / (df - 2)}        (2)

The correlation matrix R can be concentrated out of (1) and (2) by defining the (i, j)th element
of R as

    \hat\rho_{ij} = ( \sum_{t=1}^{T} \tilde\epsilon_{it} \tilde\epsilon_{jt} )
                ( \sum_{t=1}^{T} \tilde\epsilon_{it}^2 )^{-1/2} ( \sum_{t=1}^{T} \tilde\epsilon_{jt}^2 )^{-1/2}

mgarch ccc starts the optimization process with the concentrated log-likelihood function.

The starting values for the parameters in the mean equations and the initial residuals \hat\epsilon_t are
obtained by least-squares regression. The starting values for the parameters in the variance equations
are obtained by a procedure proposed by Gourieroux and Monfort (1997, sec. 6.2.2). If the optimization
is started with the unconcentrated log likelihood, then the initial values for the parameters in R are
calculated from the standardized residuals \tilde\epsilon_t.

GARCH estimators require initial values that can be plugged in for \epsilon_{t-i} \epsilon_{t-i}' and H_{t-j} when
t - i < 1 and t - j < 1. mgarch ccc substitutes an estimator of the unconditional covariance of the
disturbances,

    \hat\Sigma = T^{-1} \sum_{t=1}^{T} \hat{\hat\epsilon}_t \hat{\hat\epsilon}_t'        (3)

for \epsilon_{t-i} \epsilon_{t-i}' when t - i < 1 and for H_{t-j} when t - j < 1, where \hat{\hat\epsilon}_t is the vector of residuals
calculated using the estimated parameters.
mgarch ccc requires a sample size that at the minimum is equal to the number of parameters in
the model plus twice the number of equations.
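For example, a bivariate CCC model with constant-only mean equations, one ARCH and one GARCH
term in each variance equation, and one correlation parameter has 2 + 6 + 1 = 9 parameters, so it
requires at least 9 + 2 × 2 = 13 observations.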
mgarch ccc uses numerical derivatives in maximizing the log-likelihood function.
References
Bollerslev, T. 1990. Modelling the coherence in short-run nominal exchange rates: A multivariate generalized ARCH
model. Review of Economics and Statistics 72: 498–505.
Gourieroux, C. S., and A. Monfort. 1997. Time Series and Dynamic Models. Trans. ed. G. M. Gallo. Cambridge:
Cambridge University Press.
Silvennoinen, A., and T. Teräsvirta. 2009. Multivariate GARCH models. In Handbook of Financial Time Series, ed.
T. G. Andersen, R. A. Davis, J.-P. Kreiß, and T. Mikosch, 201–229. Berlin: Springer.
Also see
[TS] mgarch ccc postestimation Postestimation tools for mgarch ccc
[TS] mgarch Multivariate GARCH models
[TS] tsset Declare data to be time-series data
[TS] arch Autoregressive conditional heteroskedasticity (ARCH) family of estimators
[TS] var Vector autoregressive models
[U] 20 Estimation and postestimation commands
Title
mgarch ccc postestimation Postestimation tools for mgarch ccc
Description      Syntax for predict      Menu for predict      Options for predict      Remarks and examples      Methods and formulas      Also see
Description
The following standard postestimation commands are available after mgarch ccc:
Command            Description
contrast           contrasts and ANOVA-style joint tests of estimates
estat ic           Akaike's and Schwarz's Bayesian information criteria (AIC and BIC)
estat summarize    summary statistics for the estimation sample
estat vce          variance–covariance matrix of the estimators (VCE)
estimates          cataloging estimation results
forecast           dynamic forecasts and simulations
lincom             point estimates, standard errors, testing, and inference for linear combinations of coefficients
lrtest             likelihood-ratio test
margins            marginal means, predictive margins, marginal effects, and average marginal effects
marginsplot        graph the results from margins (profile plots, interaction plots, etc.)
nlcom              point estimates, standard errors, testing, and inference for nonlinear combinations of coefficients
predict            predictions, residuals, influence statistics, and other diagnostic measures
predictnl          point estimates, standard errors, testing, and inference for generalized predictions
pwcompare          pairwise comparisons of estimates
test               Wald tests of simple and composite linear hypotheses
testnl             Wald tests of nonlinear hypotheses
Syntax for predict

    predict [ type ] { stub* | newvarlist } [ if ] [ in ] [ , statistic options ]

statistic          Description
Main
  xb                 linear prediction; the default
  residuals          residuals
  variance           conditional variances and covariances
  correlation        conditional correlations

These statistics are available both in and out of sample; type predict ... if e(sample) ... if wanted
only for the estimation sample.
options                    Description
Options
  equation(eqnames)          names of equations for which predictions are made
  dynamic(time_constant)     begin dynamic forecast at specified time

Menu for predict
Statistics > Postestimation > Predictions, residuals, etc.

Options for predict
Main
xb, the default, calculates the linear predictions of the dependent variables.
residuals calculates the residuals.
variance predicts the conditional variances and conditional covariances.
correlation predicts the conditional correlations.
Options
equation(eqnames) specifies the equation for which the predictions are calculated. Use this option
to predict a statistic for a particular equation. Equation names, such as equation(income), are
used to identify equations.
One equation name may be specified when predicting the dependent variable, the residuals, or
the conditional variance. For example, specifying equation(income) causes predict to predict
income, and specifying variance equation(income) causes predict to predict the conditional
variance of income.
Two equations may be specified when predicting a conditional variance or covariance. For example, specifying equation(income, consumption) variance causes predict to predict the
conditional covariance of income and consumption.
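A sketch using this entry's illustrative equation names income and consumption:

. predict xbinc, equation(income)                        // linear prediction for income
. predict vinc, variance equation(income)                // conditional variance of income
. predict covic, variance equation(income, consumption)  // conditional covariance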
dynamic(time_constant) specifies when predict starts producing dynamic forecasts. The specified
time_constant must be in the scale of the time variable specified in tsset, and the time_constant
must be inside a sample for which observations on the dependent variables are available. For
example, dynamic(tq(2008q4)) causes dynamic predictions to begin in the fourth quarter of
2008, assuming that your time variable is quarterly; see [D] datetime. If the model contains
exogenous variables, they must be present for the whole predicted sample. dynamic() may not
be specified with residuals.
Remarks and examples
. use http://www.stata-press.com/data/r13/stocks
(Data from Yahoo! Finance)
. quietly mgarch ccc (toyota nissan = , noconstant)
> (honda = L.nissan, noconstant), arch(1) garch(1)
. tsappend, add(50)
. predict H*, variance dynamic(2016)
(figure omitted: predicted conditional variances of the returns, in sample and dynamic, plotted
against Date from 01jan2009 to 01jan2011)
Recent in-sample one-step-ahead forecasts are plotted to the left of the vertical line in the above
graph, and the dynamic out-of-sample forecasts appear to the right of the vertical line. The graph
shows the tail end of the huge increase in return volatility that took place in 2008 and 2009. It also
shows that the dynamic forecasts quickly converge.
Also see
[TS] mgarch ccc Constant conditional correlation multivariate GARCH models
[U] 20 Estimation and postestimation commands
Title
mgarch dcc Dynamic conditional correlation multivariate GARCH models
Syntax      Menu      Description      Options      Remarks and examples      Stored results      Methods and formulas      References      Also see
Syntax
mgarch dcc eq [ eq ... eq ] [ if ] [ in ] [ , options ]

where each eq has the form

    (depvars = [ indepvars ] [ , eqoptions ])

options                    Description
Model
  arch(numlist)               ARCH terms for all equations
  garch(numlist)              GARCH terms for all equations
  het(varlist)                include varlist in the specification of the conditional variance for all equations
  distribution(dist [#])      use the specified distribution for the errors; dist may be gaussian, normal, or t
  constraints(numlist)        apply specified linear constraints
SE/Robust
  vce(vcetype)                vcetype may be oim or robust
Reporting
  level(#)                    set confidence level; default is level(95)
  nocnsreport                 do not display constraints
  display_options             control display of output
Maximization
  maximize_options            control the maximization process; seldom used
  from(matname)               initial values for the coefficients; seldom used
  coeflegend                  display legend instead of statistics
eqoptions                  Description
  noconstant                  suppress constant term in the mean equation
  arch(numlist)               ARCH terms
  garch(numlist)              GARCH terms
  het(varlist)                include varlist in the specification of the conditional variance
You must tsset your data before using mgarch dcc; see [TS] tsset.
indepvars and varlist may contain factor variables; see [U] 11.4.3 Factor variables.
depvars, indepvars, and varlist may contain time-series operators; see [U] 11.4.4 Time-series varlists.
by, fp, rolling, and statsby are allowed; see [U] 11.1.10 Prefix commands.
coeflegend does not appear in the dialog box.
See [U] 20 Estimation and postestimation commands for more capabilities of estimation commands.
Menu
Statistics > Multivariate time series > Multivariate GARCH
Description
mgarch dcc estimates the parameters of dynamic conditional correlation (DCC) multivariate
generalized autoregressive conditionally heteroskedastic (MGARCH) models in which the conditional
variances are modeled as univariate generalized autoregressive conditionally heteroskedastic (GARCH)
models and the conditional covariances are modeled as nonlinear functions of the conditional variances.
The conditional quasicorrelation parameters that weight the nonlinear combinations of the conditional
variances follow the GARCH-like process specified in Engle (2002).
The DCC MGARCH model is about as flexible as the closely related varying conditional correlation
MGARCH model (see [TS] mgarch vcc), more flexible than the constant conditional correlation
MGARCH model (see [TS] mgarch ccc), and more parsimonious than the diagonal vech MGARCH
model (see [TS] mgarch dvech).
Options
Model
arch(numlist) specifies the ARCH terms for all equations in the model. By default, no ARCH terms
are specified.
garch(numlist) specifies the GARCH terms for all equations in the model. By default, no GARCH
terms are specified.
het(varlist) specifies that varlist be included in the specification of the conditional variance for all
equations. This varlist enters the variance specification collectively as multiplicative heteroskedasticity.
distribution(dist [#]) specifies the assumed distribution for the errors. dist may be gaussian,
normal, or t.
gaussian and normal are synonyms; each causes mgarch dcc to assume that the errors come
from a multivariate normal distribution. # may not be specified with either of them.
t causes mgarch dcc to assume that the errors follow a multivariate Student t distribution, and
the degree-of-freedom parameter is estimated along with the other parameters of the model. If
distribution(t #) is specified, then mgarch dcc uses a multivariate Student t distribution
with # degrees of freedom. # must be greater than 2.
constraints(numlist) specifies linear constraints to apply to the parameter estimates.
SE/Robust
vce(vcetype) specifies the estimator for the variance–covariance matrix of the estimator.
vce(oim), the default, specifies to use the observed information matrix (OIM) estimator.
vce(robust) specifies to use the Huber/White/sandwich estimator.
Reporting
level(#), nocnsreport, and display_options; see [R] estimation options.
Maximization
maximize_options: difficult, technique(algorithm_spec), iterate(#), [no]log, trace,
gradient, showstep, hessian, showtolerance, tolerance(#), ltolerance(#),
nrtolerance(#), nonrtolerance, and from(matname); see [R] maximize for all options except
from(), and see below for information on from(). These options are seldom used.
from(matname) specifies initial values for the coefficients. from(b0) causes mgarch dcc to begin
the optimization algorithm with the values in b0. b0 must be a row vector, and the number of
columns must equal the number of parameters in the model.
The following option is available with mgarch dcc but is not shown in the dialog box:
coeflegend; see [R] estimation options.
Eqoptions
noconstant suppresses the constant term in the mean equation.
arch(numlist) specifies the ARCH terms in the equation. By default, no ARCH terms are specified.
This option may not be specified with model-level arch().
garch(numlist) specifies the GARCH terms in the equation. By default, no GARCH terms are specified.
This option may not be specified with model-level garch().
het(varlist) specifies that varlist be included in the specification of the conditional variance. This
varlist enters the variance specification collectively as multiplicative heteroskedasticity. This option
may not be specified with model-level het().
Remarks and examples
MGARCH models are dynamic multivariate regression models in which the conditional variances
and covariances of the errors follow an autoregressive-moving-average structure. The DCC MGARCH
model uses a nonlinear combination of univariate GARCH models with time-varying cross-equation
weights to model the conditional covariance matrix of the errors.
As discussed in [TS] mgarch, MGARCH models differ in the parsimony and flexibility of their
specifications for a time-varying conditional covariance matrix of the disturbances, denoted by Ht .
In the conditional correlation family of MGARCH models, the diagonal elements of Ht are modeled
as univariate GARCH models, whereas the off-diagonal elements are modeled as nonlinear functions
of the diagonal terms. In the DCC MGARCH model,
    h_{ij,t} = \rho_{ij,t} \sqrt{h_{ii,t} h_{jj,t}}

where the diagonal elements h_{ii,t} and h_{jj,t} follow univariate GARCH processes and \rho_{ij,t} follows
the dynamic process specified in Engle (2002) and discussed below.

Because \rho_{ij,t} varies with time, this model is known as the DCC GARCH model.
Technical note
The DCC GARCH model proposed by Engle (2002) can be written as

    y_t = C x_t + \epsilon_t
    \epsilon_t = H_t^{1/2} \nu_t
    H_t = D_t^{1/2} R_t D_t^{1/2}
    R_t = diag(Q_t)^{-1/2} Q_t diag(Q_t)^{-1/2}
    Q_t = (1 - \lambda_1 - \lambda_2) R + \lambda_1 \tilde\epsilon_{t-1} \tilde\epsilon_{t-1}' + \lambda_2 Q_{t-1}        (1)

where D_t is a diagonal matrix of conditional variances,

    D_t = diag(\sigma_{1,t}^2, \sigma_{2,t}^2, ..., \sigma_{m,t}^2)

in which each \sigma_{i,t}^2 evolves according to a univariate GARCH model of the form

    \sigma_{i,t}^2 = s_i + \sum_{j=1}^{p_i} \alpha_j \epsilon_{i,t-j}^2 + \sum_{j=1}^{q_i} \beta_j \sigma_{i,t-j}^2

by default, or

    \sigma_{i,t}^2 = exp(\gamma_i z_{i,t}) + \sum_{j=1}^{p_i} \alpha_j \epsilon_{i,t-j}^2 + \sum_{j=1}^{q_i} \beta_j \sigma_{i,t-j}^2

when the het() option is specified;

    R_t = ( 1             \rho_{12,t}   ...   \rho_{1m,t}
            \rho_{12,t}   1             ...   \rho_{2m,t}
            ...           ...           ...   ...
            \rho_{1m,t}   \rho_{2m,t}   ...   1           )

is the matrix of conditional quasicorrelations; \tilde\epsilon_t = D_t^{-1/2} \epsilon_t is an m x 1 vector of standardized
residuals; and \lambda_1 and \lambda_2 are parameters that govern the dynamics of the conditional quasicorrelations.
\lambda_1 and \lambda_2 are nonnegative and satisfy 0 ≤ \lambda_1 + \lambda_2 < 1.

When Q_t is stationary, the R matrix in (1) is a weighted average of the unconditional covariance
matrix of the standardized residuals \tilde\epsilon_t, denoted by R̄, and the unconditional mean of Q_t, denoted by
Q̄. Because R̄ ≠ Q̄, as shown by Aielli (2009), R is neither the unconditional correlation matrix nor
the unconditional mean of Q_t. For this reason, the parameters in R are known as quasicorrelations;
see Aielli (2009) and Engle (2009) for discussions.
Some examples
Example 1: Model with common covariates
We have daily data on the stock returns of three car manufacturers (Toyota, Nissan, and Honda)
from January 2, 2003, to December 31, 2010, in the variables toyota, nissan, and honda. We
model the conditional means of the returns as a first-order vector autoregressive process and the
conditional covariances as a DCC MGARCH process in which the variance of each disturbance term
follows a GARCH(1,1) process.
. use http://www.stata-press.com/data/r13/stocks
(Data from Yahoo! Finance)
. mgarch dcc (toyota nissan honda = L.toyota L.nissan L.honda, noconstant), arch(1) garch(1)
(output omitted )
                                                Number of obs   =      2014
                                                Wald chi2(9)    =     19.54
                                                Prob > chi2     =    0.0210

                    Coef.   Std. Err.      z    P>|z|     [95% Conf. Interval]
toyota
      toyota
         L1.    -.0510866   .0339824    -1.50   0.133     -.117691    .0155177
      nissan
         L1.     .0297834   .0247455     1.20   0.229    -.0187169    .0782837
      honda
         L1.    -.0162826   .0300323    -0.54   0.588    -.0751449    .0425797

ARCH_toyota
        arch
         L1.     .0608223   .0086686     7.02   0.000     .0438321    .0778124
       garch
         L1.     .9222207   .0111053    83.04   0.000     .9004547    .9439868
       _cons     4.47e-06   1.15e-06     3.90   0.000     2.22e-06    6.72e-06

nissan
      toyota
         L1.     -.005672   .0389348    -0.15   0.884    -.0819828    .0706387
      nissan
         L1.    -.0287095   .0309379    -0.93   0.353    -.0893466    .0319276
      honda
         L1.     .0154979   .0358802     0.43   0.666     -.054826    .0858218

ARCH_nissan
        arch
         L1.      .084424   .0128192     6.59   0.000     .0592989    .1095492
       garch
         L1.     .8994206   .0151125    59.52   0.000     .8698007    .9290406
       _cons     7.21e-06   1.93e-06     3.74   0.000     3.43e-06     .000011

honda
      toyota
         L1.     -.027242   .0361819    -0.75   0.451    -.0981572    .0436732
      nissan
         L1.     .0617495   .0271378     2.28   0.023     .0085603    .1149386
      honda
         L1.     -.063507   .0332918    -1.91   0.056    -.1287578    .0017438

ARCH_honda
        arch
         L1.     .0490135   .0073695     6.65   0.000     .0345696    .0634573
       garch
         L1.     .9331126   .0103685    90.00   0.000     .9127907    .9534344
       _cons     5.35e-06   1.35e-06     3.95   0.000     2.69e-06    8.00e-06

corr(toyota,
     nissan)     .6689543   .0168021    39.81   0.000     .6360228    .7018858
corr(toyota,
      honda)     .7259625   .0140156    51.80   0.000     .6984923    .7534326
corr(nissan,
      honda)     .6335659   .0180412    35.12   0.000     .5982058     .668926

Adjustment
     lambda1     .0315274   .0088386     3.57   0.000     .0142041    .0488506
     lambda2     .8704193   .0613329    14.19   0.000      .750209    .9906295
The iteration log has three parts: the dots from the search for initial values, the iteration log from
optimizing the log likelihood, and the iteration log from the refining step. A detailed discussion of
the optimization methods is in Methods and formulas.
The header describes the estimation sample and reports a Wald test against the null hypothesis
that all the coefficients on the independent variables in the mean equations are zero. Here the null
hypothesis is rejected at the 5% level.
The output table first presents results for the mean or variance parameters used to model each
dependent variable. Subsequently, the output table presents results for the conditional quasicorrelations.
For example, the conditional quasicorrelation between the standardized residuals for Toyota and Nissan
is estimated to be 0.67. Finally, the output table presents results for the adjustment parameters 1
and 2 . In the example at hand, the estimates for both 1 and 2 are statistically significant.
The DCC MGARCH model reduces to the CCC MGARCH model when 1 = 2 = 0. The output
below shows that a Wald test rejects the null hypothesis that 1 = 2 = 0 at all conventional levels.
. test _b[Adjustment:lambda1] = _b[Adjustment:lambda2] = 0
( 1) [Adjustment]lambda1 - [Adjustment]lambda2 = 0
( 2) [Adjustment]lambda1 = 0
           chi2(  2) =  1102.45
         Prob > chi2 =   0.0000
These results indicate that the assumption of time-invariant conditional correlations maintained in
the CCC MGARCH model is too restrictive for these data.
. mgarch dcc (toyota nissan = , noconstant) (honda = L.nissan, noconstant), arch(1) garch(1)
(iteration log omitted )
Refining estimates
Iteration 0:   log likelihood = 17476.35
Iteration 1:   log likelihood = 17476.35

                                                Number of obs   =      2014
                                                Wald chi2(1)    =      2.21
                                                Prob > chi2     =    0.1374

                    Coef.   Std. Err.      z    P>|z|     [95% Conf. Interval]
ARCH_toyota
        arch
         L1.     .0608188   .0086675     7.02   0.000     .0438308    .0778067
       garch
         L1.     .9219957   .0111066    83.01   0.000     .9002271    .9437643
       _cons     4.49e-06   1.14e-06     3.95   0.000     2.27e-06    6.72e-06

ARCH_nissan
        arch
         L1.     .0876161     .01302     6.73   0.000     .0620974    .1131348
       garch
         L1.     .8950964   .0152908    58.54   0.000      .865127    .9250658
       _cons     7.69e-06   1.99e-06     3.86   0.000     3.79e-06    .0000116

honda
      nissan
         L1.      .019978   .0134488     1.49   0.137    -.0063811    .0463371

ARCH_honda
        arch
         L1.     .0488799   .0073767     6.63   0.000     .0344218     .063338
       garch
         L1.     .9330047   .0103944    89.76   0.000      .912632    .9533774
       _cons     5.42e-06   1.36e-06     3.98   0.000     2.75e-06    8.08e-06

corr(toyota,
     nissan)     .6668433   .0163209    40.86   0.000     .6348548    .6988317
corr(toyota,
      honda)     .7258101   .0137072    52.95   0.000     .6989446    .7526757
corr(nissan,
      honda)     .6313515   .0175454    35.98   0.000     .5969631    .6657399

Adjustment
     lambda1     .0324493   .0074013     4.38   0.000     .0179429    .0469556
     lambda2     .8574681   .0476274    18.00   0.000     .7641202    .9508161
It turns out that the coefficient on L1.nissan in the honda equation is now statistically insignificant.
We could further improve the model by removing L1.nissan from the model.
There is no mean equation for Toyota or Nissan. In [TS] mgarch dcc postestimation, we discuss
prediction from models without covariates.
. constraint 1 [ARCH_toyota]L.arch = [ARCH_nissan]L.arch
. constraint 2 [ARCH_toyota]L.garch = [ARCH_nissan]L.garch
. mgarch dcc (toyota nissan = , noconstant), arch(1) garch(1) constraints(1 2)
(output omitted )
                                                Number of obs   =      2015
                                                Wald chi2(.)    =         .
                                                Prob > chi2     =         .
 ( 1)  [ARCH_toyota]L.arch - [ARCH_nissan]L.arch = 0
 ( 2)  [ARCH_toyota]L.garch - [ARCH_nissan]L.garch = 0

                    Coef.   Std. Err.      z    P>|z|     [95% Conf. Interval]
ARCH_toyota
        arch
         L1.      .080889   .0103227     7.84   0.000      .060657    .1011211
       garch
         L1.     .9060711   .0119107    76.07   0.000     .8827267    .9294156
       _cons     4.21e-06   1.10e-06     3.83   0.000     2.05e-06    6.36e-06

ARCH_nissan
        arch
         L1.      .080889   .0103227     7.84   0.000      .060657    .1011211
       garch
         L1.     .9060711   .0119107    76.07   0.000     .8827267    .9294156
       _cons     5.92e-06   1.47e-06     4.03   0.000     3.04e-06    8.80e-06

corr(toyota,
     nissan)     .6646283   .0187793    35.39   0.000     .6278215    .7014351

Adjustment
     lambda1     .0446559   .0123017     3.63   0.000      .020545    .0687668
     lambda2     .8686054   .0510885    17.00   0.000     .7684739     .968737
We could test our constraints by fitting the unconstrained model and performing a likelihood-ratio
test. The results indicate that the restricted model is preferable.
. use http://www.stata-press.com/data/r13/acmeh
. mgarch dcc (acme = afrelated, noconstant arch(1) garch(1))
> (anvil = afinputs, arch(1/2) het(L.apex))
Calculating starting values....
Optimizing log likelihood
(setting technique to bhhh)
Iteration 0:   log likelihood = -13260.522
(output omitted )
Iteration 9:   log likelihood = -12362.876
(switching technique to nr)
Iteration 10: log likelihood = -12362.876
Refining estimates
Iteration 0:   log likelihood = -12362.876
Iteration 1:   log likelihood = -12362.876

                                                Number of obs   =      2499
                                                Wald chi2(2)    =   2596.18
                                                Prob > chi2     =    0.0000

                    Coef.   Std. Err.      z    P>|z|     [95% Conf. Interval]
acme
   afrelated      .950805   .0557082    17.07   0.000      .841619    1.059991

ARCH_acme
        arch
         L1.     .1063295   .0157161     6.77   0.000     .0755266    .1371324
       garch
         L1.     .7556294   .0391568    19.30   0.000     .6788836    .8323753
       _cons     2.197566    .458343     4.79   0.000      1.29923    3.095901

anvil
    afinputs    -1.015657   .0209959   -48.37   0.000    -1.056808   -.9745054
       _cons     .0808653    .019445     4.16   0.000     .0427538    .1189767

ARCH_anvil
        arch
         L1.     .5261675   .0281586    18.69   0.000     .4709777    .5813572
         L2.     .2866454   .0196504    14.59   0.000     .2481314    .3251595
        apex
         L1.     1.953173   .0594862    32.83   0.000     1.836582    2.069764
       _cons    -.0062964   .0710842    -0.09   0.929    -.1456188    .1330261

corr(acme,
      anvil)    -.5600358   .0326358   -17.16   0.000    -.6240008   -.4960708

Adjustment
     lambda1     .1904321   .0154449    12.33   0.000     .1601607    .2207035
     lambda2     .7147267   .0226204    31.60   0.000     .6703916    .7590618
The results indicate that increases in the futures prices for related products lead to higher returns on
the Acme stock, and increased input prices lead to lower returns on the Anvil stock. In the conditional
variance equation for Anvil, the coefficient on L1.apex is positive and significant, which indicates
that an increase in the return on the Apex stock leads to more variability in the return on the Anvil
stock.
Stored results
mgarch dcc stores the following in e():
Scalars
  e(N)                 number of observations
  e(k)                 number of parameters
  e(k_aux)             number of auxiliary parameters
  e(k_extra)           number of extra estimates added to _b
  e(k_eq)              number of equations in e(b)
  e(k_dv)              number of dependent variables
  e(df_m)              model degrees of freedom
  e(ll)                log likelihood
  e(chi2)              χ2
  e(p)                 significance
  e(estdf)             1 if distribution parameter was estimated, 0 otherwise
  e(usr)               user-provided distribution parameter
  e(tmin)              minimum time in sample
  e(tmax)              maximum time in sample
  e(N_gaps)            number of gaps
  e(rank)              rank of e(V)
  e(ic)                number of iterations
  e(rc)                return code
  e(converged)         1 if converged, 0 otherwise

Macros
  e(cmd)               mgarch
  e(model)             dcc
  e(cmdline)           command as typed
  e(depvar)            names of dependent variables
  e(covariates)        list of covariates
  e(dv_eqs)            dependent variables with mean equations
  e(indeps)            independent variables in each equation
  e(tvar)              time variable
  e(title)             title in estimation output
  e(chi2type)          Wald; type of model χ2 test
  e(vce)               vcetype specified in vce()
  e(vcetype)           title used to label Std. Err.
  e(tmins)             formatted minimum time
  e(tmaxs)             formatted maximum time
  e(dist)              distribution for error term: gaussian or t
  e(arch)              specified ARCH terms
  e(garch)             specified GARCH terms
  e(technique)         maximization technique
  e(properties)        b V
  e(estat_cmd)         program used to implement estat
  e(predict)           program used to implement predict
  e(marginsok)         predictions allowed by margins
  e(marginsnotok)      predictions disallowed by margins

Matrices
  e(b)                 coefficient vector
  e(Cns)               constraints matrix
  e(ilog)              iteration log (up to 20 iterations)
  e(gradient)          gradient vector
  e(hessian)           Hessian matrix
  e(V)                 variance–covariance matrix of the estimators
  e(pinfo)             parameter information, used by predict

Functions
  e(sample)            marks estimation sample
Methods and formulas
Assuming multivariate normal errors, the log-likelihood contribution of observation t is

    l_t = -0.5 m log(2\pi) - 0.5 log{det(R_t)} - log{det(D_t^{1/2})} - 0.5 \tilde\epsilon_t R_t^{-1} \tilde\epsilon_t'

where \tilde\epsilon_t = D_t^{-1/2} \epsilon_t is an m x 1 vector of standardized residuals and \epsilon_t = y_t - C x_t. The
log-likelihood function is \sum_{t=1}^{T} l_t.

If we assume that \nu_t follow a multivariate t distribution with degrees of freedom (df) greater than
2, then the log-likelihood function for observation t is

    l_t = log \Gamma{(df + m)/2} - log \Gamma(df/2) - (m/2) log{(df - 2)\pi}
          - 0.5 log{det(R_t)} - log{det(D_t^{1/2})}
          - {(df + m)/2} log{1 + \tilde\epsilon_t R_t^{-1} \tilde\epsilon_t' / (df - 2)}

The starting values for the parameters in the mean equations and the initial residuals \hat\epsilon_t are
obtained by least-squares regression. The starting values for the parameters in the variance equations
are obtained by a procedure proposed by Gourieroux and Monfort (1997, sec. 6.2.2). The starting
values for the quasicorrelation parameters are calculated from the standardized residuals \tilde\epsilon_t. Given
the starting values for the mean and variance equations, the starting values for the parameters \lambda_1 and
\lambda_2 are obtained from a grid search performed on the log likelihood.

The initial optimization step is performed in the unconstrained space. Once the maximum is found,
we impose the constraints \lambda_1 ≥ 0, \lambda_2 ≥ 0, and 0 ≤ \lambda_1 + \lambda_2 < 1, and maximize the log likelihood
in the constrained space. This step is reported in the iteration log as the refining step.

GARCH estimators require initial values that can be plugged in for \epsilon_{t-i} \epsilon_{t-i}' and H_{t-j} when
t - i < 1 and t - j < 1. mgarch dcc substitutes an estimator of the unconditional covariance of the
disturbances,

    \hat\Sigma = T^{-1} \sum_{t=1}^{T} \hat{\hat\epsilon}_t \hat{\hat\epsilon}_t'        (2)

for \epsilon_{t-i} \epsilon_{t-i}' when t - i < 1 and for H_{t-j} when t - j < 1, where \hat{\hat\epsilon}_t is the vector of residuals
calculated using the estimated parameters.
mgarch dcc uses numerical derivatives in maximizing the log-likelihood function.
References
Aielli, G. P. 2009. Dynamic Conditional Correlations: On Properties and Estimation. Working paper, Dipartimento di
Statistica, University of Florence, Florence, Italy.
Engle, R. F. 2002. Dynamic conditional correlation: A simple class of multivariate generalized autoregressive conditional
heteroskedasticity models. Journal of Business & Economic Statistics 20: 339–350.
. 2009. Anticipating Correlations: A New Paradigm for Risk Management. Princeton, NJ: Princeton University
Press.
Gourieroux, C. S., and A. Monfort. 1997. Time Series and Dynamic Models. Trans. ed. G. M. Gallo. Cambridge:
Cambridge University Press.
Also see
[TS] mgarch dcc postestimation Postestimation tools for mgarch dcc
[TS] mgarch Multivariate GARCH models
[TS] tsset Declare data to be time-series data
[TS] arch Autoregressive conditional heteroskedasticity (ARCH) family of estimators
[TS] var Vector autoregressive models
[U] 20 Estimation and postestimation commands
Title
mgarch dcc postestimation Postestimation tools for mgarch dcc
Description
Remarks and examples
Description
The following standard postestimation commands are available after mgarch dcc:
Command             Description
-------------------------------------------------------------------------
contrast            contrasts and ANOVA-style joint tests of estimates
estat ic            Akaike's and Schwarz's Bayesian information criteria (AIC and BIC)
estat summarize     summary statistics for the estimation sample
estat vce           variance-covariance matrix of the estimators (VCE)
estimates           cataloging estimation results
forecast            dynamic forecasts and simulations
lincom              point estimates, standard errors, testing, and inference for linear combinations of coefficients
lrtest              likelihood-ratio test
margins             marginal means, predictive margins, marginal effects, and average marginal effects
marginsplot         graph the results from margins (profile plots, interaction plots, etc.)
nlcom               point estimates, standard errors, testing, and inference for nonlinear combinations of coefficients
predict             predictions, residuals, influence statistics, and other diagnostic measures
predictnl           point estimates, standard errors, testing, and inference for generalized predictions
pwcompare           pairwise comparisons of estimates
test                Wald tests of simple and composite linear hypotheses
testnl              Wald tests of nonlinear hypotheses
-------------------------------------------------------------------------
Syntax for predict

    predict [type] {stub*|newvarlist} [if] [in] [, statistic options]

statistic           Description
-------------------------------------------------------------------------
Main
  xb                linear prediction; the default
  residuals         residuals
  variance          conditional variances and covariances
  correlation       conditional correlations
-------------------------------------------------------------------------
These statistics are available both in and out of sample; type predict ... if e(sample) ... if wanted only for the estimation sample.

options                    Description
-------------------------------------------------------------------------
Options
  equation(eqnames)        names of equations for which predictions are made
  dynamic(time_constant)   begin dynamic forecast at specified time
-------------------------------------------------------------------------
Menu for predict

    Statistics > Postestimation > Predictions, residuals, etc.

Options for predict

Main
xb, the default, calculates the linear predictions of the dependent variables.
residuals calculates the residuals.
variance predicts the conditional variances and conditional covariances.
correlation predicts the conditional correlations.
Options
equation(eqnames) specifies the equation for which the predictions are calculated. Use this option
to predict a statistic for a particular equation. Equation names, such as equation(income), are
used to identify equations.
One equation name may be specified when predicting the dependent variable, the residuals, or
the conditional variance. For example, specifying equation(income) causes predict to predict
income, and specifying variance equation(income) causes predict to predict the conditional
variance of income.
Two equations may be specified when predicting a conditional variance or covariance. For example, specifying equation(income, consumption) variance causes predict to predict the
conditional covariance of income and consumption.
343
dynamic(time constant) specifies when predict starts producing dynamic forecasts. The specified
time constant must be in the scale of the time variable specified in tsset, and the time constant
must be inside a sample for which observations on the dependent variables are available. For
example, dynamic(tq(2008q4)) causes dynamic predictions to begin in the fourth quarter of
2008, assuming that your time variable is quarterly; see [D] datetime. If the model contains
exogenous variables, they must be present for the whole predicted sample. dynamic() may not
be specified with residuals.
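For instance, assuming the three-equation model for toyota, nissan, and honda fit in [TS] mgarch dcc is in memory, a minimal sketch of typical uses of these options is

. predict double v_toyota, variance equation(toyota)
. predict double cov_tn, variance equation(toyota, nissan)
. predict double rho_tn, correlation equation(toyota, nissan)
. predict double res_honda, residuals equation(honda)

The variable names here are arbitrary illustrations.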
Remarks and examples

. use http://www.stata-press.com/data/r13/stocks
(Data from Yahoo! Finance)
. quietly mgarch dcc (toyota nissan = , noconstant)
> (honda = L.nissan, noconstant), arch(1) garch(1)
. tsappend, add(50)
. predict H*, variance dynamic(2016)

(figure: predicted conditional variances plotted over 01jan2009-01jan2011)
Recent in-sample one-step-ahead forecasts are plotted to the left of the vertical line in the above
graph, and the dynamic out-of-sample forecasts appear to the right of the vertical line. The graph
shows the tail end of the huge increase in return volatility that took place in 2008 and 2009. It also
shows that the dynamic forecasts quickly converge.
Also see
[TS] mgarch dcc Dynamic conditional correlation multivariate GARCH models
[U] 20 Estimation and postestimation commands
Title
mgarch dvech Diagonal vech multivariate GARCH models
Syntax                  Menu                    Description
Options                 Remarks and examples    Stored results
Methods and formulas    References              Also see
Syntax
    mgarch dvech eq [eq ... eq] [if] [in] [, options]
options                       Description
-------------------------------------------------------------------------
Model
  arch(numlist)               ARCH terms
  garch(numlist)              GARCH terms
  distribution(dist [#])      use the specified distribution for the errors; default is gaussian
  constraints(numlist)        apply linear constraints
SE/Robust
  vce(vcetype)                vcetype may be oim or robust
Reporting
  level(#)                    set confidence level; default is level(95)
  nocnsreport                 do not display constraints
  display_options             control display of output; see [R] estimation options
Maximization
  maximize_options            control the maximization process; seldom used
  from(matname)               initial values for the coefficients; seldom used
  svtechnique(algorithm_spec) starting-value maximization algorithm
  sviterate(#)                number of starting-value iterations; default is sviterate(25)
  coeflegend                  display legend instead of statistics
-------------------------------------------------------------------------
You must tsset your data before using mgarch dvech; see [TS] tsset.
indepvars may contain factor variables; see [U] 11.4.3 Factor variables.
depvars and indepvars may contain time-series operators; see [U] 11.4.4 Time-series varlists.
by, fp, rolling, and statsby are allowed; see [U] 11.1.10 Prefix commands.
coeflegend does not appear in the dialog box.
See [U] 20 Estimation and postestimation commands for more capabilities of estimation commands.
Menu

    Statistics > Multivariate time series > Multivariate GARCH
Description

mgarch dvech estimates the parameters of diagonal vech (DVECH) multivariate generalized autoregressive conditionally heteroskedastic (MGARCH) models in which each element of the conditional covariance matrix is parameterized as a linear function of its own past and past shocks.

DVECH MGARCH models are less parsimonious than the conditional correlation models discussed in [TS] mgarch ccc, [TS] mgarch dcc, and [TS] mgarch vcc because the number of parameters in DVECH MGARCH models increases more rapidly with the number of series modeled.
Options

Model

arch(numlist) specifies the ARCH terms for all equations in the model. By default, no ARCH terms are specified.

garch(numlist) specifies the GARCH terms for all equations in the model. By default, no GARCH terms are specified.

distribution(dist [#]) specifies the assumed distribution for the errors. dist may be gaussian, normal, or t. gaussian and normal are synonyms; each causes mgarch dvech to assume that the errors come from a multivariate normal distribution. # may not be specified with either of them. t causes mgarch dvech to assume that the errors follow a multivariate Student t distribution, and the degree-of-freedom parameter is estimated along with the other parameters of the model. If distribution(t #) is specified, then mgarch dvech uses a multivariate Student t distribution with # degrees of freedom. # must be greater than 2.

constraints(numlist) specifies linear constraints to apply to the parameter estimates.
SE/Robust
vce(vcetype) specifies the estimator for the variancecovariance matrix of the estimator.
vce(oim), the default, specifies to use the observed information matrix (OIM) estimator.
vce(robust) specifies to use the Huber/White/sandwich estimator.
Reporting

level(#), nocnsreport, and display_options; see [R] estimation options.
Maximization
maximize options: difficult, technique(algorithm spec), iterate(#), no log, trace,
gradient, showstep, hessian, showtolerance, tolerance(#), ltolerance(#),
nrtolerance(#), nonrtolerance, and from(matname); see [R] maximize for all options except
from(), and see below for information on from(). These options are seldom used.
from(matname) specifies initial values for the coefficients. from(b0) causes mgarch dvech to begin
the optimization algorithm with the values in b0. b0 must be a row vector, and the number of
columns must equal the number of parameters in the model.
svtechnique(algorithm spec) and sviterate(#) specify options for the starting-value search
process.
svtechnique(algorithm spec) specifies the algorithm used to search for initial values. The
syntax for algorithm spec is the same as for the technique() option; see [R] maximize.
svtechnique(bhhh 5 nr 16000) is the default. This option may not be specified with
from().
sviterate(#) specifies the maximum number of iterations that the search algorithm may
perform. The default is sviterate(25). This option may not be specified with from().
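For example, a sketch of a call that tightens the starting-value search (the dependent variables y1 and y2 and the covariate x are hypothetical placeholders, and the svtechnique() and sviterate() values are arbitrary illustrations):

. mgarch dvech (y1 y2 = x), arch(1) svtechnique(nr) sviterate(50)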
The following option is available with mgarch dvech but is not shown in the dialog box:
coeflegend; see [R] estimation options.
Remarks and examples

As discussed in [TS] mgarch, MGARCH models differ in the parsimony and flexibility of their specifications for a time-varying conditional covariance matrix of the disturbances, denoted by H_t. In a DVECH MGARCH model with one ARCH term and one GARCH term, the (i, j)th element of the conditional covariance matrix is modeled by

    h_{ij,t} = s_{ij} + a_{ij}\,\epsilon_{i,t-1}\epsilon_{j,t-1} + b_{ij}\,h_{ij,t-1}

where s_{ij}, a_{ij}, and b_{ij} are parameters.
Technical note
The general vech MGARCH model developed by Bollerslev, Engle, and Wooldridge (1988) can be written as

    y_t = C x_t + \epsilon_t                                                            (1)

    \epsilon_t = H_t^{1/2} \nu_t                                                        (2)

    h_t = s + \sum_{i=1}^{p} A_i\, vech(\epsilon_{t-i} \epsilon_{t-i}') + \sum_{j=1}^{q} B_j\, h_{t-j}      (3)
where

    h_t = vech(H_t);

    the vech() function stacks the lower diagonal elements of a symmetric matrix into a column vector; for example,

        vech( [1 2 \ 2 3] ) = (1, 2, 3)'
Restricting the A_i and B_j to be diagonal yields the DVECH model, which can be written in terms of H_t as

    H_t = S + \sum_{i=1}^{p} A_i \odot \epsilon_{t-i} \epsilon_{t-i}' + \sum_{j=1}^{q} B_j \odot H_{t-j}      (3')

where \odot denotes the elementwise (Hadamard) product.
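To make the recursion in (3') concrete, here is a minimal Mata sketch of one step of the DVECH update for m = 2; all matrix and vector values below are hypothetical and chosen only for illustration.

. mata:
:     S = (.005, .004 \ .004, .012)      // hypothetical Sigma0
:     A = (.45, .25 \ .25, .84)          // hypothetical ARCH parameter matrix
:     B = (.15, .09 \ .09, .15)          // hypothetical GARCH parameter matrix
:     e = (.01 \ -.02)                   // hypothetical epsilon_{t-1}
:     H = (.006, .004 \ .004, .013)      // hypothetical H_{t-1}
:     Ht = S + A :* (e*e') + B :* H      // :* is the elementwise (Hadamard) product
:     Ht
: end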
Some examples
Example 1: Model with common covariates
We have data on a secondary market rate of a six-month U.S. Treasury bill, tbill, and on
Moody's seasoned AAA corporate bond yield, bond. We model the first-differences of tbill and the
first-differences of bond as a VAR(1) with an ARCH(1) term.
. mgarch dvech (D.bond D.tbill = LD.bond LD.tbill), arch(1)
(iteration log omitted)

                                                Number of obs   =      2454
                                                Wald chi2(4)    =   1183.52
                                                Prob > chi2     =    0.0000

                   Coef.   Std. Err.      z    P>|z|     [95% Conf. Interval]
------------------------------------------------------------------------------
D.bond
  bond
    LD.       .2967674   .0247149   12.01   0.000      .2483271    .3452077
  tbill
    LD.       .0947949   .0098683    9.61   0.000      .0754533    .1141364
  _cons       .0003991     .00143    0.28   0.780     -.0024036    .0032019
------------------------------------------------------------------------------
D.tbill
  bond
    LD.       .0108373   .0301501    0.36   0.719     -.0482558    .0699304
  tbill
    LD.       .4344747   .0176497   24.62   0.000      .3998819    .4690675
  _cons       .0011611   .0021033    0.55   0.581     -.0029612    .0052835
------------------------------------------------------------------------------
Sigma0
  1_1          .004894   .0002006   24.40   0.000      .0045008    .0052871
  2_1         .0040986   .0002396   17.10   0.000      .0036289    .0045683
  2_2         .0115149   .0005227   22.03   0.000      .0104904    .0125395
------------------------------------------------------------------------------
L.ARCH
  1_1         .4514942   .0456835    9.88   0.000      .3619562    .5410323
  2_1         .2518879    .036736    6.86   0.000      .1798866    .3238893
  2_2          .843368   .0608055   13.87   0.000      .7241914    .9625446
------------------------------------------------------------------------------
The output has three parts: an iteration log, a header, and an output table. The iteration log has
two parts: the first part reports the iterations from the process of searching for starting values, and
the second part reports the iterations from maximizing the log-likelihood function.
The header describes the estimation sample and reports a Wald test against the null hypothesis that
all the coefficients on the independent variables in each equation are zero. Here the null hypothesis
is rejected at all conventional levels.
The output table reports point estimates, standard errors, tests against zero, and confidence intervals
for the estimated coefficients, the estimated elements of S, and any estimated elements of A or B.
Here the output indicates that in the equation for D.tbill, neither the coefficient on LD.bond nor the constant is statistically significant. The elements of S are reported in the Sigma0 equation. The estimate of S[1, 1] is 0.005, and the estimate of S[2, 1] is 0.004. The ARCH term results are reported in the L.ARCH equation. In the L.ARCH equation, 1_1 is the coefficient on the ARCH term for the conditional variance of the first dependent variable, 2_1 is the coefficient on the ARCH term for the conditional covariance between the first and second dependent variables, and 2_2 is the coefficient on the ARCH term for the conditional variance of the second dependent variable.
. mgarch dvech (D.bond = LD.bond LD.tbill, noconstant)
> (D.tbill = LD.tbill, noconstant), arch(1)
(iteration log omitted; the log likelihood increases from 3566.8824 to 4221.433)

                                                Number of obs   =      2454
                                                Wald chi2(3)    =   1197.76
                                                Prob > chi2     =    0.0000

                   Coef.   Std. Err.      z    P>|z|     [95% Conf. Interval]
------------------------------------------------------------------------------
D.bond
  bond
    LD.       .2941649   .0234734   12.53   0.000      .2481579    .3401718
  tbill
    LD.       .0953158   .0098077    9.72   0.000       .076093    .1145386
------------------------------------------------------------------------------
D.tbill
  tbill
    LD.       .4385945   .0136672   32.09   0.000      .4118072    .4653817
------------------------------------------------------------------------------
Sigma0
  1_1         .0048922   .0002005   24.40   0.000      .0044993    .0052851
  2_1         .0040949   .0002394   17.10   0.000      .0036256    .0045641
  2_2         .0115043   .0005184   22.19   0.000      .0104883    .0125203
------------------------------------------------------------------------------
L.ARCH
  1_1         .4519233    .045671    9.90   0.000      .3624099    .5414368
  2_1         .2515474   .0366701    6.86   0.000      .1796752    .3234195
  2_2         .8437212   .0600839   14.04   0.000      .7259589    .9614836
------------------------------------------------------------------------------
We specified each equation separately to remove the insignificant parameters. All the parameter
estimates are statistically significant.
(starting-value iterations omitted; the log likelihood increases from -6087.0665 to -5974.4035)

Estimating parameters
(setting technique to bhhh)
Iteration 0:   log likelihood = -5974.4035  (not concave)
Iteration 1:   log likelihood =  -5973.812
Iteration 2:   log likelihood = -5973.8004
Iteration 3:   log likelihood = -5973.7999
Iteration 4:   log likelihood = -5973.7999

                                                Number of obs   =      1499
                                                Wald chi2(2)    =    272.47
                                                Prob > chi2     =    0.0000

 ( 1)  [L.ARCH]1_1 - [L.ARCH]2_2 = 0
 ( 2)  [L.GARCH]1_1 - [L.GARCH]2_2 = 0

                   Coef.   Std. Err.      z    P>|z|     [95% Conf. Interval]
------------------------------------------------------------------------------
acme
  acme
    L1.       .3365278   .0255134   13.19   0.000      .2865225    .3865331
  _cons       1.124611    .060085   18.72   0.000      1.006847    1.242376
------------------------------------------------------------------------------
anvil
  anvil
    L1.       .3151955   .0263287   11.97   0.000      .2635922    .3667988
  _cons       1.215786   .0642052   18.94   0.000      1.089947    1.341626
------------------------------------------------------------------------------
Sigma0
  1_1         1.889237   .2168733    8.71   0.000      1.464173    2.314301
  2_1         .4599576   .1139843    4.04   0.000      .2365525    .6833626
  2_2         2.063113   .2454633    8.40   0.000      1.582014    2.544213
------------------------------------------------------------------------------
L.ARCH
  1_1         .2813443   .0299124    9.41   0.000       .222717    .3399716
  2_1          .181877   .0335393    5.42   0.000      .1161412    .2476128
  2_2         .2813443   .0299124    9.41   0.000       .222717    .3399716
------------------------------------------------------------------------------
L.GARCH
  1_1         .1487581   .0697531    2.13   0.033      .0120445    .2854716
  2_1          .085404   .1446524    0.59   0.555     -.1981094    .3689175
  2_2         .1487581   .0697531    2.13   0.033      .0120445    .2854716
------------------------------------------------------------------------------
We could test our constraints by fitting the unconstrained model and performing either a Wald or a
likelihood-ratio test. The results indicate that we might further restrict the time-invariant components
of the conditional variances to be the same across companies.
                                                Number of obs   =      5000
                                                Wald chi2(.)    =         .
                                                Prob > chi2     =         .

                   Coef.   Std. Err.      z    P>|z|     [95% Conf. Interval]
------------------------------------------------------------------------------
Sigma0
  1_1         1.026283   .0823348   12.46   0.000      .8649096    1.187656
  2_1         .4300997   .0590294    7.29   0.000      .3144042    .5457952
  2_2         1.019753   .0837146   12.18   0.000      .8556751     1.18383
------------------------------------------------------------------------------
L.ARCH
  1_1         .2878739     .02157   13.35   0.000      .2455975    .3301504
  2_1         .1036685   .0161446    6.42   0.000      .0720256    .1353114
  2_2         .2034196    .019855   10.25   0.000      .1645044    .2423347
------------------------------------------------------------------------------
L2.ARCH
  1_1         .1837825   .0274555    6.69   0.000      .1299706    .2375943
  2_1         .0884425     .02208    4.01   0.000      .0451665    .1317185
  2_2         .2025718   .0272639    7.43   0.000      .1491355     .256008
------------------------------------------------------------------------------
L.GARCH
  1_1         .0782467    .053944    1.45   0.147     -.0274816     .183975
  2_1         .2888104   .0818303    3.53   0.000      .1284261    .4491948
  2_2          .201618   .0470584    4.28   0.000      .1093853    .2938508
------------------------------------------------------------------------------
The model test is omitted from the output because there are no covariates in the model. The univariate tests indicate that the included parameters fit the data well. In [TS] mgarch dvech postestimation, we discuss prediction from models without covariates.
Stored results
mgarch dvech stores the following in e():
Scalars
  e(N)                number of observations
  e(k)                number of parameters
  e(k_extra)          number of extra estimates added to _b
  e(k_eq)             number of equations in e(b)
  e(k_dv)             number of dependent variables
  e(df_m)             model degrees of freedom
  e(ll)               log likelihood
  e(chi2)             χ2
  e(p)                significance
  e(estdf)            1 if distribution parameter was estimated, 0 otherwise
  e(usr)              user-provided distribution parameter
  e(tmin)             minimum time in sample
  e(tmax)             maximum time in sample
  e(N_gaps)           number of gaps
  e(rank)             rank of e(V)
  e(ic)               number of iterations
  e(rc)               return code
  e(converged)        1 if converged, 0 otherwise
Macros
  e(cmd)              mgarch
  e(model)            dvech
  e(cmdline)          command as typed
  e(depvar)           names of dependent variables
  e(covariates)       list of covariates
  e(dv_eqs)           dependent variables with mean equations
  e(indeps)           independent variables in each equation
  e(tvar)             time variable
  e(title)            title in estimation output
  e(chi2type)         Wald; type of model χ2 test
  e(vce)              vcetype specified in vce()
  e(vcetype)          title used to label Std. Err.
  e(tmins)            formatted minimum time
  e(tmaxs)            formatted maximum time
  e(dist)             distribution for error term: gaussian or t
  e(arch)             specified ARCH terms
  e(garch)            specified GARCH terms
  e(svtechnique)      maximization technique(s) for starting values
  e(technique)        maximization technique
  e(properties)       b V
  e(estat_cmd)        program used to implement estat
  e(predict)          program used to implement predict
  e(marginsok)        predictions allowed by margins
  e(marginsnotok)     predictions disallowed by margins
Matrices
  e(b)                coefficient vector
  e(Cns)              constraints matrix
  e(ilog)             iteration log (up to 20 iterations)
  e(gradient)         gradient vector
  e(hessian)          Hessian matrix
  e(A)                estimates of A matrices
  e(B)                estimates of B matrices
  e(S)                estimates of Sigma0 matrix
  e(Sigma)            Sigma hat
  e(V)                variance-covariance matrix of the estimators
  e(pinfo)            parameter information, used by predict
Functions
  e(sample)           marks estimation sample
Methods and formulas

mgarch dvech estimates the parameters of the model

    y_t = C x_t + \epsilon_t

    \epsilon_t = H_t^{1/2} \nu_t

    H_t = S + \sum_{i=1}^{p} A_i \odot \epsilon_{t-i} \epsilon_{t-i}' + \sum_{j=1}^{q} B_j \odot H_{t-j}

where H_t is the conditional covariance matrix. Assuming the errors are multivariate normal, the log-likelihood contribution for observation t is

    l_t = -0.5\,m\log(2\pi) - 0.5\log\{\det(H_t)\} - 0.5\,\epsilon_t' H_t^{-1} \epsilon_t

and the log-likelihood function is \sum_{t=1}^{T} l_t.
If we assume that the ν_t follow a multivariate t distribution with degrees of freedom (df) greater than 2, then the log-likelihood contribution for observation t is

    l_t = \log\Gamma\left(\frac{df+m}{2}\right) - \log\Gamma\left(\frac{df}{2}\right) - \frac{m}{2}\log\{(df-2)\pi\}
          - 0.5\log\{\det(H_t)\} - \frac{df+m}{2}\log\left(1 + \frac{\epsilon_t' H_t^{-1} \epsilon_t}{df-2}\right)
GARCH estimators require initial values that can be plugged in for ε_{t-i} ε'_{t-i} and H_{t-j} when t-i < 1 and t-j < 1. mgarch dvech substitutes an estimator of the unconditional covariance of the disturbances,

    \widehat{\Sigma} = T^{-1} \sum_{t=1}^{T} \hat{\hat{\epsilon}}_t \hat{\hat{\epsilon}}_t'        (4)

for ε_{t-i} ε'_{t-i} when t-i < 1 and for H_{t-j} when t-j < 1, where \hat{\hat{\epsilon}}_t is the vector of residuals calculated using the estimated parameters.
mgarch dvech uses analytic first and second derivatives in maximizing the log-likelihood function
based on the multivariate normal distribution. mgarch dvech uses numerical derivatives in maximizing
the log-likelihood function based on the multivariate t distribution.
References
Bollerslev, T., R. F. Engle, and J. M. Wooldridge. 1988. A capital asset pricing model with time-varying covariances.
Journal of Political Economy 96: 116-131.
Lütkepohl, H. 2005. New Introduction to Multiple Time Series Analysis. New York: Springer.
Also see
[TS] mgarch dvech postestimation Postestimation tools for mgarch dvech
[TS] mgarch Multivariate GARCH models
[TS] tsset Declare data to be time-series data
[TS] arch Autoregressive conditional heteroskedasticity (ARCH) family of estimators
[TS] var Vector autoregressive models
[U] 20 Estimation and postestimation commands
Title
mgarch dvech postestimation Postestimation tools for mgarch dvech
Description
Remarks and examples
Description
The following standard postestimation commands are available after mgarch dvech:
Command             Description
-------------------------------------------------------------------------
contrast            contrasts and ANOVA-style joint tests of estimates
estat ic            Akaike's and Schwarz's Bayesian information criteria (AIC and BIC)
estat summarize     summary statistics for the estimation sample
estat vce           variance-covariance matrix of the estimators (VCE)
estimates           cataloging estimation results
forecast            dynamic forecasts and simulations
lincom              point estimates, standard errors, testing, and inference for linear combinations of coefficients
lrtest              likelihood-ratio test
margins             marginal means, predictive margins, marginal effects, and average marginal effects
marginsplot         graph the results from margins (profile plots, interaction plots, etc.)
nlcom               point estimates, standard errors, testing, and inference for nonlinear combinations of coefficients
predict             predictions, residuals, influence statistics, and other diagnostic measures
predictnl           point estimates, standard errors, testing, and inference for generalized predictions
pwcompare           pairwise comparisons of estimates
test                Wald tests of simple and composite linear hypotheses
testnl              Wald tests of nonlinear hypotheses
-------------------------------------------------------------------------
Syntax for predict

    predict [type] {stub*|newvarlist} [if] [in] [, statistic options]

statistic           Description
-------------------------------------------------------------------------
Main
  xb                linear prediction; the default
  residuals         residuals
  variance          conditional variances and covariances
-------------------------------------------------------------------------
These statistics are available both in and out of sample; type predict ... if e(sample) ... if wanted only for the estimation sample.

options                    Description
-------------------------------------------------------------------------
Options
  equation(eqnames)        names of equations for which predictions are made
  dynamic(time_constant)   begin dynamic forecast at specified time
-------------------------------------------------------------------------
Menu for predict

    Statistics > Postestimation > Predictions, residuals, etc.

Options for predict

Main
xb, the default, calculates the linear predictions of the dependent variables.
residuals calculates the residuals.
variance predicts the conditional variances and conditional covariances.
Options
equation(eqnames) specifies the equation for which the predictions are calculated. Use this option
to predict a statistic for a particular equation. Equation names, such as equation(income), are
used to identify equations.
One equation name may be specified when predicting the dependent variable, the residuals, or
the conditional variance. For example, specifying equation(income) causes predict to predict
income, and specifying variance equation(income) causes predict to predict the conditional
variance of income.
Two equations may be specified when predicting a conditional variance or covariance. For example, specifying equation(income, consumption) variance causes predict to predict the
conditional covariance of income and consumption.
359
dynamic(time constant) specifies when predict starts producing dynamic forecasts. The specified
time constant must be in the scale of the time variable specified in tsset, and the time constant
must be inside a sample for which observations on the dependent variables are available. For
example, dynamic(tq(2008q4)) causes dynamic predictions to begin in the fourth quarter of
2008, assuming that your time variable is quarterly; see [D] datetime. If the model contains
exogenous variables, they must be present for the whole predicted sample. dynamic() may not
be specified with residuals.
Remarks and examples

(starting-value iterations omitted; the log likelihood increases from -6087.0665 to -5974.4035)

Estimating parameters
(setting technique to bhhh)
Iteration 0:   log likelihood = -5974.4035  (not concave)
Iteration 1:   log likelihood =  -5973.812
Iteration 2:   log likelihood = -5973.8004
Iteration 3:   log likelihood = -5973.7999
Iteration 4:   log likelihood = -5973.7999

                                                Number of obs   =      1499
                                                Wald chi2(2)    =    272.47
                                                Prob > chi2     =    0.0000

 ( 1)  [L.ARCH]1_1 - [L.ARCH]2_2 = 0
 ( 2)  [L.GARCH]1_1 - [L.GARCH]2_2 = 0

                   Coef.   Std. Err.      z    P>|z|     [95% Conf. Interval]
------------------------------------------------------------------------------
acme
  acme
    L1.       .3365278   .0255134   13.19   0.000      .2865225    .3865331
  _cons       1.124611    .060085   18.72   0.000      1.006847    1.242376
------------------------------------------------------------------------------
anvil
  anvil
    L1.       .3151955   .0263287   11.97   0.000      .2635922    .3667988
  _cons       1.215786   .0642052   18.94   0.000      1.089947    1.341626
------------------------------------------------------------------------------
Sigma0
  1_1         1.889237   .2168733    8.71   0.000      1.464173    2.314301
  2_1         .4599576   .1139843    4.04   0.000      .2365525    .6833626
  2_2         2.063113   .2454633    8.40   0.000      1.582014    2.544213
------------------------------------------------------------------------------
L.ARCH
  1_1         .2813443   .0299124    9.41   0.000       .222717    .3399716
  2_1          .181877   .0335393    5.42   0.000      .1161412    .2476128
  2_2         .2813443   .0299124    9.41   0.000       .222717    .3399716
------------------------------------------------------------------------------
L.GARCH
  1_1         .1487581   .0697531    2.13   0.033      .0120445    .2854716
  2_1          .085404   .1446524    0.59   0.555     -.1981094    .3689175
  2_2         .1487581   .0697531    2.13   0.033      .0120445    .2854716
------------------------------------------------------------------------------
Now we use tsappend (see [TS] tsappend) to extend the data, use predict to obtain the dynamic
predictions, and graph the predictions.
. tsappend, add(12)
. predict H*, variance dynamic(tw(1998w26))
. tsline H_acme_acme H_anvil_anvil if t>=tw(1995w25), legend(rows(2))

(figure: variance prediction (acme, acme) dynamic(tw(1998w26)) and variance prediction (anvil, anvil) dynamic(tw(1998w26)), plotted over 1995w26-1998w26)
The graph shows that the in-sample predictions are similar for the conditional variances of Acme
Inc. and Anvil Inc. and that the dynamic forecasts converge to similar levels. It also shows that
the ARCH and GARCH parameters cause substantial time-varying volatility. The predicted conditional
variance of acme ranges from lows of just over 2 to highs above 10.
                                                Number of obs   =      5000
                                                Wald chi2(.)    =         .
                                                Prob > chi2     =         .

                   Coef.   Std. Err.      z    P>|z|     [95% Conf. Interval]
------------------------------------------------------------------------------
Sigma0
  1_1         1.026283   .0823348   12.46   0.000      .8649096    1.187656
  2_1         .4300997   .0590294    7.29   0.000      .3144042    .5457952
  2_2         1.019753   .0837146   12.18   0.000      .8556751     1.18383
------------------------------------------------------------------------------
L.ARCH
  1_1         .2878739     .02157   13.35   0.000      .2455975    .3301504
  2_1         .1036685   .0161446    6.42   0.000      .0720256    .1353114
  2_2         .2034196    .019855   10.25   0.000      .1645044    .2423347
------------------------------------------------------------------------------
L2.ARCH
  1_1         .1837825   .0274555    6.69   0.000      .1299706    .2375943
  2_1         .0884425     .02208    4.01   0.000      .0451665    .1317185
  2_2         .2025718   .0272639    7.43   0.000      .1491355     .256008
------------------------------------------------------------------------------
L.GARCH
  1_1         .0782467    .053944    1.45   0.147     -.0274816     .183975
  2_1         .2888104   .0818303    3.53   0.000      .1284261    .4491948
  2_2          .201618   .0470584    4.28   0.000      .1093853    .2938508
------------------------------------------------------------------------------
Now we use predict to obtain the in-sample conditional variances of acme and use tsline (see [TS] tsline) to graph the results.

(figure: in-sample conditional-variance predictions for acme plotted against time)
The graph shows that the predicted conditional variances vary substantially over time, as the
parameter estimates indicated.
Because there are no covariates in the model for acme, specifying xb puts a prediction of 0 in each
observation, and specifying residuals puts the value of the dependent variable into the prediction.
Also see
[TS] mgarch dvech Diagonal vech multivariate GARCH models
[U] 20 Estimation and postestimation commands
Title
mgarch vcc Varying conditional correlation multivariate GARCH models
Syntax                  Menu                    Description
Options                 Remarks and examples    Stored results
Methods and formulas    References              Also see
Syntax
    mgarch vcc eq [eq ... eq] [if] [in] [, options]
options                       Description
-------------------------------------------------------------------------
Model
  arch(numlist)               ARCH terms for all equations
  garch(numlist)              GARCH terms for all equations
  het(varlist)                include varlist in the specification of the conditional variance for all equations
  distribution(dist [#])      use the specified distribution for the errors; default is gaussian
  constraints(numlist)        apply linear constraints
SE/Robust
  vce(vcetype)                vcetype may be oim or robust
Reporting
  level(#)                    set confidence level; default is level(95)
  nocnsreport                 do not display constraints
  display_options             control display of output; see [R] estimation options
Maximization
  maximize_options            control the maximization process; seldom used
  from(matname)               initial values for the coefficients; seldom used
  coeflegend                  display legend instead of statistics
-------------------------------------------------------------------------

eqoptions                     Description
-------------------------------------------------------------------------
  noconstant                  suppress constant term in the mean equation
  arch(numlist)               ARCH terms
  garch(numlist)              GARCH terms
  het(varlist)                include varlist in the specification of the conditional variance
-------------------------------------------------------------------------
You must tsset your data before using mgarch vcc; see [TS] tsset.
indepvars and varlist may contain factor variables; see [U] 11.4.3 Factor variables.
depvars, indepvars, and varlist may contain time-series operators; see [U] 11.4.4 Time-series varlists.
by, fp, rolling, and statsby are allowed; see [U] 11.1.10 Prefix commands.
coeflegend does not appear in the dialog box.
See [U] 20 Estimation and postestimation commands for more capabilities of estimation commands.
Menu

    Statistics > Multivariate time series > Multivariate GARCH
Description
mgarch vcc estimates the parameters of varying conditional correlation (VCC) multivariate generalized autoregressive conditionally heteroskedastic (MGARCH) models in which the conditional variances
are modeled as univariate generalized autoregressive conditionally heteroskedastic (GARCH) models
and the conditional covariances are modeled as nonlinear functions of the conditional variances. The
conditional correlation parameters that weight the nonlinear combinations of the conditional variance
follow the GARCH-like process specified in Tse and Tsui (2002).
The VCC MGARCH model is about as flexible as the closely related dynamic conditional correlation MGARCH model (see [TS] mgarch dcc), more flexible than the constant conditional correlation MGARCH model (see [TS] mgarch ccc), and more parsimonious than the diagonal vech model (see [TS] mgarch dvech).
Options
Model
arch(numlist) specifies the ARCH terms for all equations in the model. By default, no ARCH terms
are specified.
garch(numlist) specifies the GARCH terms for all equations in the model. By default, no GARCH
terms are specified.
het(varlist) specifies that varlist be included in the model in the specification of the conditional
variance for all equations. This varlist enters the variance specification collectively as multiplicative
heteroskedasticity.
distribution(dist # ) specifies the assumed distribution for the errors. dist may be gaussian,
normal, or t.
gaussian and normal are synonyms; each causes mgarch vcc to assume that the errors come
from a multivariate normal distribution. # may not be specified with either of them.
t causes mgarch vcc to assume that the errors follow a multivariate Student t distribution, and
the degree-of-freedom parameter is estimated along with the other parameters of the model. If
distribution(t #) is specified, then mgarch vcc uses a multivariate Student t distribution
with # degrees of freedom. # must be greater than 2.
constraints(numlist) specifies linear constraints to apply to the parameter estimates.
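For instance, a minimal sketch of a call combining these model options (the variable names are those of the stocks dataset used in the examples below; the choice of a t distribution with estimated degrees of freedom is purely illustrative):

. mgarch vcc (toyota nissan = , noconstant), arch(1) garch(1) distribution(t)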
SE/Robust
vce(vcetype) specifies the estimator for the variancecovariance matrix of the estimator.
vce(oim), the default, specifies to use the observed information matrix (OIM) estimator.
vce(robust) specifies to use the Huber/White/sandwich estimator.
Reporting
Maximization
maximize options: difficult, technique(algorithm spec), iterate(#), no log, trace,
gradient, showstep, hessian, showtolerance, tolerance(#), ltolerance(#),
nrtolerance(#), nonrtolerance, and from(matname); see [R] maximize for all options except
from(), and see below for information on from(). These options are seldom used.
from(matname) specifies initial values for the coefficients. from(b0) causes mgarch vcc to begin
the optimization algorithm with the values in b0. b0 must be a row vector, and the number of
columns must equal the number of parameters in the model.
The following option is available with mgarch vcc but is not shown in the dialog box:
coeflegend; see [R] estimation options.
Eqoptions
noconstant suppresses the constant term in the mean equation.
arch(numlist) specifies the ARCH terms in the equation. By default, no ARCH terms are specified.
This option may not be specified with model-level arch().
garch(numlist) specifies the GARCH terms in the equation. By default, no GARCH terms are specified.
This option may not be specified with model-level garch().
het(varlist) specifies that varlist be included in the specification of the conditional variance. This
varlist enters the variance specification collectively as multiplicative heteroskedasticity. This option
may not be specified with model-level het().
MGARCH models are dynamic multivariate regression models in which the conditional variances
and covariances of the errors follow an autoregressive-moving-average structure. The VCC MGARCH
model uses a nonlinear combination of univariate GARCH models with time-varying cross-equation
weights to model the conditional covariance matrix of the errors.
As discussed in [TS] mgarch, MGARCH models differ in the parsimony and flexibility of their
specifications for a time-varying conditional covariance matrix of the disturbances, denoted by Ht .
In the conditional correlation family of MGARCH models, the diagonal elements of Ht are modeled
as univariate GARCH models, whereas the off-diagonal elements are modeled as nonlinear functions
of the diagonal terms. In the VCC MGARCH model,
    h_{ij,t} = \rho_{ij,t} \sqrt{h_{ii,t} h_{jj,t}}

where the diagonal elements h_{ii,t} and h_{jj,t} follow univariate GARCH processes and ρ_{ij,t} follows the dynamic process specified in Tse and Tsui (2002) and discussed below.

Because the ρ_{ij,t} varies with time, this model is known as the VCC GARCH model.
Technical note
The VCC GARCH model proposed by Tse and Tsui (2002) can be written as

    y_t = C x_t + \epsilon_t

    \epsilon_t = H_t^{1/2} \nu_t

    H_t = D_t^{1/2} R_t D_t^{1/2}

    R_t = (1 - \lambda_1 - \lambda_2) R + \lambda_1 \Psi_{t-1} + \lambda_2 R_{t-1}

where

    D_t = diag(\sigma_{1,t}^2, \sigma_{2,t}^2, \ldots, \sigma_{m,t}^2)

is a diagonal matrix of conditional variances in which each \sigma_{i,t}^2 evolves according to a univariate GARCH model of the form

    \sigma_{i,t}^2 = s_i + \sum_{j=1}^{p_i} \alpha_j \epsilon_{i,t-j}^2 + \sum_{j=1}^{q_i} \beta_j \sigma_{i,t-j}^2

by default, or

    \sigma_{i,t}^2 = \exp(\gamma_i z_{i,t}) + \sum_{j=1}^{p_i} \alpha_j \epsilon_{i,t-j}^2 + \sum_{j=1}^{q_i} \beta_j \sigma_{i,t-j}^2      (1)

when the het() option is specified;
R_t is a matrix of conditional correlations,

    R_t = \begin{pmatrix} 1 & \rho_{12,t} & \cdots & \rho_{1m,t} \\
                          \rho_{12,t} & 1 & \cdots & \rho_{2m,t} \\
                          \vdots & \vdots & \ddots & \vdots \\
                          \rho_{1m,t} & \rho_{2m,t} & \cdots & 1 \end{pmatrix}

Ψ_{t-1} is a rolling estimator of the correlation matrix of the standardized residuals (see Tse and Tsui 2002), and λ1 and λ2 are parameters that govern the dynamics of conditional correlations. λ1 and λ2 are nonnegative and satisfy 0 ≤ λ1 + λ2 < 1.
To differentiate this model from Engle (2002), Tse and Tsui (2002) call their model a VCC MGARCH
model.
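The recursion for R_t is easy to trace by hand. Below is a minimal Mata sketch of one update step for m = 2; the parameter values and matrices are hypothetical and serve only to illustrate the weighting in the recursion.

. mata:
:     lambda1 = .03                       // hypothetical adjustment parameter
:     lambda2 = .80                       // hypothetical adjustment parameter
:     R    = (1, .66 \ .66, 1)            // long-run correlation matrix R
:     Psi  = (1, .40 \ .40, 1)            // rolling correlation estimate Psi_{t-1}
:     Rlag = (1, .70 \ .70, 1)            // R_{t-1}
:     Rt = (1-lambda1-lambda2)*R + lambda1*Psi + lambda2*Rlag
:     Rt
: end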
Some examples
Example 1: Model with common covariates
We have daily data on the stock returns of three car manufacturers (Toyota, Nissan, and Honda) from January 2, 2003, to December 31, 2010, in the variables toyota, nissan, and honda. We
model the conditional means of the returns as a first-order vector autoregressive process and the
conditional covariances as a VCC MGARCH process in which the variance of each disturbance term
follows a GARCH(1,1) process.
. use http://www.stata-press.com/data/r13/stocks
(Data from Yahoo! Finance)
. mgarch vcc (toyota nissan honda = L.toyota L.nissan L.honda, noconstant),
> arch(1) garch(1)
Calculating starting values....
Optimizing log likelihood
(setting technique to bhhh)
Iteration 0:   log likelihood =    16901.2
Iteration 1:   log likelihood =  17028.644
Iteration 2:   log likelihood =  17145.905
Iteration 3:   log likelihood =  17251.485
Iteration 4:   log likelihood =  17306.115
Iteration 5:   log likelihood =   17332.59
Iteration 6:   log likelihood =  17353.617
Iteration 7:   log likelihood =   17374.86
Iteration 8:   log likelihood =  17398.526
Iteration 9:   log likelihood =  17418.748
(switching technique to nr)
Iteration 10:  log likelihood =  17442.552
Iteration 11:  log likelihood =  17455.702
Iteration 12:  log likelihood =  17463.605
Iteration 13:  log likelihood =  17463.922
Iteration 14:  log likelihood =  17463.925
Iteration 15:  log likelihood =  17463.925
(refining step omitted)
                                                Number of obs   =      2014
                                                Wald chi2(9)    =     17.67
                                                Prob > chi2     =    0.0392

                   Coef.   Std. Err.      z    P>|z|     [95% Conf. Interval]
------------------------------------------------------------------------------
toyota
  toyota
    L1.      -.0565645   .0335696   -1.68   0.092     -.1223597    .0092307
  nissan
    L1.       .0248101   .0252701    0.98   0.326     -.0247184    .0743385
  honda
    L1.       .0035836   .0298895    0.12   0.905     -.0549986    .0621659
------------------------------------------------------------------------------
ARCH_toyota
  arch
    L1.       .0602805   .0086798    6.94   0.000      .0432683    .0772926
  garch
    L1.       .9224692   .0110316   83.62   0.000      .9008477    .9440907
  _cons       4.38e-06   1.12e-06    3.91   0.000      2.18e-06    6.58e-06
------------------------------------------------------------------------------
nissan
  toyota
    L1.      -.0196399   .0387112   -0.51   0.612     -.0955124    .0562325
  nissan
    L1.      -.0306663    .031051   -0.99   0.323      -.091525    .0301925
  honda
    L1.       .0383151   .0354691    1.08   0.280     -.0312031    .1078332
------------------------------------------------------------------------------
ARCH_nissan
  arch
    L1.       .0774227   .0119642    6.47   0.000      .0539733    .1008722
  garch
    L1.       .9076856   .0139339   65.14   0.000      .8803756    .9349956
  _cons       6.20e-06   1.70e-06    3.65   0.000      2.87e-06    9.53e-06
------------------------------------------------------------------------------
honda
  toyota
    L1.      -.0358293   .0340492   -1.05   0.293     -.1025645     .030906
  nissan
    L1.       .0544071   .0276156    1.97   0.049      .0002814    .1085327
  honda
    L1.      -.0424383   .0326249   -1.30   0.193     -.1063819    .0215054
------------------------------------------------------------------------------
ARCH_honda
  arch
    L1.       .0458673   .0072714    6.31   0.000      .0316157    .0601189
  garch
    L1.       .9369252   .0101756   92.08   0.000      .9169815    .9568689
  _cons       4.99e-06   1.29e-06    3.85   0.000      2.45e-06    7.52e-06
------------------------------------------------------------------------------
corr(toyota,
      nissan)  .6643028   .0151086   43.97   0.000      .6346905    .6939151
corr(toyota,
       honda)  .7302092   .0126361   57.79   0.000       .705443    .7549755
corr(nissan,
       honda)   .634732   .0159738   39.74   0.000      .6034239    .6660401
------------------------------------------------------------------------------
Adjustment
  lambda1      .0277374   .0086942    3.19   0.001       .010697    .0447778
  lambda2      .8255524   .0755882   10.92   0.000      .6774023    .9737025
------------------------------------------------------------------------------
The output has three parts: an iteration log, a header, and an output table.
The iteration log has three parts: the dots from the search for initial values, the iteration log from
optimizing the log likelihood, and the iteration log from the refining step. A detailed discussion of
the optimization methods is in Methods and formulas.
The header describes the estimation sample and reports a Wald test against the null hypothesis
that all the coefficients on the independent variables in the mean equations are zero. Here the null
hypothesis is rejected at the 5% level.
The output table first presents results for the mean or variance parameters used to model each
dependent variable. Subsequently, the output table presents results for the parameters in R. For
example, the estimate of the mean of the process that associates Toyota and Nissan is 0.66. Finally,
the output table presents results for the adjustment parameters λ1 and λ2. In the example at hand, the estimates for both λ1 and λ2 are statistically significant.

The VCC MGARCH model reduces to the CCC MGARCH model when λ1 = λ2 = 0. The output below shows that a Wald test rejects the null hypothesis that λ1 = λ2 = 0 at all conventional levels.
. test _b[Adjustment:lambda1] = _b[Adjustment:lambda2] = 0
( 1) [Adjustment]lambda1 - [Adjustment]lambda2 = 0
( 2) [Adjustment]lambda1 = 0
           chi2(  2) =  482.80
         Prob > chi2 =  0.0000
These results indicate that the assumption of time-invariant conditional correlations maintained in
the CCC MGARCH model is too restrictive for these data.
. mgarch vcc (toyota nissan = , noconstant) (honda = L.nissan, noconstant),
> arch(1) garch(1)
(iteration log omitted; the final log likelihood is 17455.98)

                                                Number of obs   =      2014
                                                Wald chi2(1)    =      1.62
                                                Prob > chi2     =    0.2032

                   Coef.   Std. Err.      z    P>|z|     [95% Conf. Interval]
------------------------------------------------------------------------------
ARCH_toyota
  arch
    L1.       .0609064   .0087784    6.94   0.000       .043701    .0781117
  garch
    L1.        .921703   .0111493   82.67   0.000      .8998509    .9435552
  _cons       4.42e-06   1.13e-06    3.91   0.000      2.20e-06    6.64e-06
------------------------------------------------------------------------------
ARCH_nissan
  arch
    L1.       .0806598   .0123529    6.53   0.000      .0564486     .104871
  garch
    L1.       .9035239    .014421   62.65   0.000      .8752592    .9317886
  _cons       6.61e-06   1.79e-06    3.70   0.000      3.11e-06    .0000101
------------------------------------------------------------------------------
honda
  nissan
    L1.       .0175566   .0137982    1.27   0.203     -.0094874    .0446005
------------------------------------------------------------------------------
ARCH_honda
  arch
    L1.       .0461398   .0073048    6.32   0.000      .0318226     .060457
  garch
    L1.       .9366096   .0102021   91.81   0.000      .9166139    .9566053
  _cons       5.03e-06   1.31e-06    3.85   0.000      2.47e-06    7.59e-06
------------------------------------------------------------------------------
corr(toyota,
      nissan)  .6635251   .0150293   44.15   0.000      .6340682     .692982
corr(toyota,
       honda)  .7299703   .0124828   58.48   0.000      .7055045     .754436
corr(nissan,
       honda)  .6338207   .0158681   39.94   0.000      .6027198    .6649217
------------------------------------------------------------------------------
Adjustment
  lambda1      .0285319   .0092448    3.09   0.002      .0104124    .0466514
  lambda2      .8113924   .0854955    9.49   0.000      .6438243    .9789604
------------------------------------------------------------------------------
It turns out that the coefficient on L1.nissan in the honda equation is now statistically insignificant.
We could further improve the model by removing L1.nissan from the model.
There is no mean equation for Toyota or Nissan. In [TS] mgarch vcc postestimation, we discuss
prediction from models without covariates.
                                                Number of obs   =      2015
                                                Wald chi2(.)    =         .
                                                Prob > chi2     =         .

 ( 1)  [ARCH_toyota]L.arch - [ARCH_nissan]L.arch = 0
 ( 2)  [ARCH_toyota]L.garch - [ARCH_nissan]L.garch = 0

                   Coef.   Std. Err.      z    P>|z|     [95% Conf. Interval]
------------------------------------------------------------------------------
ARCH_toyota
  arch
    L1.       .0797459   .0101634    7.85   0.000       .059826    .0996659
  garch
    L1.       .9063808   .0118211   76.67   0.000       .883212    .9295497
  _cons       4.24e-06   1.10e-06    3.85   0.000      2.08e-06    6.40e-06
------------------------------------------------------------------------------
ARCH_nissan
  arch
    L1.       .0797459   .0101634    7.85   0.000       .059826    .0996659
  garch
    L1.       .9063808   .0118211   76.67   0.000       .883212    .9295497
  _cons       5.91e-06   1.47e-06    4.03   0.000      3.03e-06    8.79e-06
------------------------------------------------------------------------------
corr(toyota,
      nissan)  .6720056   .0162585   41.33   0.000      .6401394    .7038718
------------------------------------------------------------------------------
Adjustment
  lambda1      .0343012   .0128097    2.68   0.007      .0091945    .0594078
  lambda2      .7945548    .101067    7.86   0.000       .596467    .9926425
------------------------------------------------------------------------------
We could test our constraints by fitting the unconstrained model and performing a likelihood-ratio
test. The results indicate that the restricted model is preferable.
(iteration log omitted; the log likelihood increases from -13252.793 to -12254.478)
                                                Number of obs   =      2499
                                                Wald chi2(2)    =   5226.19
                                                Prob > chi2     =    0.0000

                   Coef.   Std. Err.      z    P>|z|     [95% Conf. Interval]
------------------------------------------------------------------------------
acme
  afrelated   .9672465   .0510066   18.96   0.000      .8672753    1.067218
------------------------------------------------------------------------------
ARCH_acme
  arch
    L1.       .0949142   .0147302    6.44   0.000      .0660435    .1237849
  garch
    L1.       .7689442    .038885   19.77   0.000      .6927309    .8451574
  _cons       2.129468    .464916    4.58   0.000      1.218249    3.040687
------------------------------------------------------------------------------
anvil
  afinputs   -1.018629   .0145027  -70.24   0.000     -1.047053   -.9902037
  _cons       .1015986   .0177952    5.71   0.000      .0667205    .1364766
------------------------------------------------------------------------------
ARCH_anvil
  arch
    L1.       .4990272   .0243531   20.49   0.000      .4512959    .5467584
    L2.       .2839812   .0181966   15.61   0.000      .2483165    .3196459
  apex
    L1.       1.897144   .0558791   33.95   0.000      1.787623    2.006665
  _cons       .0682724   .0662257    1.03   0.303     -.0615276    .1980724
------------------------------------------------------------------------------
corr(acme,
       anvil) -.6574256   .0294259  -22.34   0.000     -.7150994   -.5997518
------------------------------------------------------------------------------
Adjustment
  lambda1      .2375029   .0179114   13.26   0.000      .2023971    .2726086
  lambda2      .6492072   .0254493   25.51   0.000      .5993274    .6990869
------------------------------------------------------------------------------
The results indicate that increases in the futures prices for related products lead to higher returns on
the Acme stock, and increased input prices lead to lower returns on the Anvil stock. In the conditional
variance equation for Anvil, the coefficient on L1.apex is positive and significant, which indicates
that an increase in the return on the Apex stock leads to more variability in the return on the Anvil
stock.
Stored results
mgarch vcc stores the following in e():
Scalars
  e(N)                number of observations
  e(k)                number of parameters
  e(k_aux)            number of auxiliary parameters
  e(k_extra)          number of extra estimates added to _b
  e(k_eq)             number of equations in e(b)
  e(k_dv)             number of dependent variables
  e(df_m)             model degrees of freedom
  e(ll)               log likelihood
  e(chi2)             χ2
  e(p)                significance
  e(estdf)            1 if distribution parameter was estimated, 0 otherwise
  e(usr)              user-provided distribution parameter
  e(tmin)             minimum time in sample
  e(tmax)             maximum time in sample
  e(N_gaps)           number of gaps
  e(rank)             rank of e(V)
  e(ic)               number of iterations
  e(rc)               return code
  e(converged)        1 if converged, 0 otherwise
Macros
  e(cmd)              mgarch
  e(model)            vcc
  e(cmdline)          command as typed
  e(depvar)           names of dependent variables
  e(covariates)       list of covariates
  e(dv_eqs)           dependent variables with mean equations
  e(indeps)           independent variables in each equation
  e(tvar)             time variable
  e(title)            title in estimation output
  e(chi2type)         Wald; type of model χ2 test
  e(vce)              vcetype specified in vce()
  e(vcetype)          title used to label Std. Err.
  e(tmins)            formatted minimum time
  e(tmaxs)            formatted maximum time
  e(dist)             distribution for error term: gaussian or t
  e(arch)             specified ARCH terms
  e(garch)            specified GARCH terms
  e(technique)        maximization technique
  e(properties)       b V
  e(estat_cmd)        program used to implement estat
  e(predict)          program used to implement predict
  e(marginsok)        predictions allowed by margins
  e(marginsnotok)     predictions disallowed by margins
Matrices
  e(b)                coefficient vector
  e(Cns)              constraints matrix
  e(ilog)             iteration log (up to 20 iterations)
  e(gradient)         gradient vector
  e(hessian)          Hessian matrix
  e(V)                variance-covariance matrix of the estimators
  e(pinfo)            parameter information, used by predict
Functions
  e(sample)           marks estimation sample
Methods and formulas

Assuming the errors are multivariate normal, the log-likelihood contribution for observation t is

    l_t = -0.5\,m\log(2\pi) - 0.5\log\{\det(R_t)\} - \log\{\det(D_t^{1/2})\} - 0.5\,\tilde{\epsilon}_t' R_t^{-1} \tilde{\epsilon}_t

where ε̃_t = D_t^{-1/2} ε_t is an m × 1 vector of standardized residuals and ε_t = y_t - C x_t. The log-likelihood function is \sum_{t=1}^{T} l_t.
If we assume that the ν_t follow a multivariate t distribution with degrees of freedom (df) greater than 2, then the log-likelihood contribution for observation t is

    l_t = \log\Gamma\left(\frac{df+m}{2}\right) - \log\Gamma\left(\frac{df}{2}\right) - \frac{m}{2}\log\{(df-2)\pi\}
          - 0.5\log\{\det(R_t)\} - \log\{\det(D_t^{1/2})\}
          - \frac{df+m}{2}\log\left(1 + \frac{\tilde{\epsilon}_t' R_t^{-1} \tilde{\epsilon}_t}{df-2}\right)
The starting values for the parameters in the mean equations and the initial residuals ε̂_t are obtained by least-squares regression. The starting values for the parameters in the variance equations are obtained by a procedure proposed by Gourieroux and Monfort (1997, sec. 6.2.2). The starting values for the parameters in R are calculated from the standardized residuals ε̃_t. Given the starting values for the mean and variance equations, the starting values for the parameters λ1 and λ2 are obtained from a grid search performed on the log likelihood.

The initial optimization step is performed in the unconstrained space. Once the maximum is found, we impose the constraints λ1 ≥ 0, λ2 ≥ 0, and 0 ≤ λ1 + λ2 < 1, and maximize the log likelihood in the constrained space. This step is reported in the iteration log as the refining step.
GARCH estimators require initial values that can be plugged in for ε_{t-i} ε'_{t-i} and H_{t-j} when t-i < 1 and t-j < 1. mgarch vcc substitutes an estimator of the unconditional covariance of the disturbances,

    \widehat{\Sigma} = T^{-1} \sum_{t=1}^{T} \hat{\hat{\epsilon}}_t \hat{\hat{\epsilon}}_t'        (2)

for ε_{t-i} ε'_{t-i} when t-i < 1 and for H_{t-j} when t-j < 1, where \hat{\hat{\epsilon}}_t is the vector of residuals calculated using the estimated parameters.
mgarch vcc uses numerical derivatives in maximizing the log-likelihood function.
References
Engle, R. F. 2002. Dynamic conditional correlation: A simple class of multivariate generalized autoregressive conditional
heteroskedasticity models. Journal of Business & Economic Statistics 20: 339-350.
Gourieroux, C. S., and A. Monfort. 1997. Time Series and Dynamic Models. Trans. ed. G. M. Gallo. Cambridge:
Cambridge University Press.
Tse, Y. K., and A. K. C. Tsui. 2002. A multivariate generalized autoregressive conditional heteroscedasticity model
with time-varying correlations. Journal of Business & Economic Statistics 20: 351-362.
Also see
[TS] mgarch vcc postestimation Postestimation tools for mgarch vcc
[TS] mgarch Multivariate GARCH models
[TS] tsset Declare data to be time-series data
[TS] arch Autoregressive conditional heteroskedasticity (ARCH) family of estimators
[TS] var Vector autoregressive models
[U] 20 Estimation and postestimation commands
Title
mgarch vcc postestimation Postestimation tools for mgarch vcc
Description
Remarks and examples
Description
The following standard postestimation commands are available after mgarch vcc:
Command             Description
-------------------------------------------------------------------------
contrast            contrasts and ANOVA-style joint tests of estimates
estat ic            Akaike's and Schwarz's Bayesian information criteria (AIC and BIC)
estat summarize     summary statistics for the estimation sample
estat vce           variance-covariance matrix of the estimators (VCE)
estimates           cataloging estimation results
forecast            dynamic forecasts and simulations
lincom              point estimates, standard errors, testing, and inference for linear combinations of coefficients
lrtest              likelihood-ratio test
margins             marginal means, predictive margins, marginal effects, and average marginal effects
marginsplot         graph the results from margins (profile plots, interaction plots, etc.)
nlcom               point estimates, standard errors, testing, and inference for nonlinear combinations of coefficients
predict             predictions, residuals, influence statistics, and other diagnostic measures
predictnl           point estimates, standard errors, testing, and inference for generalized predictions
pwcompare           pairwise comparisons of estimates
test                Wald tests of simple and composite linear hypotheses
testnl              Wald tests of nonlinear hypotheses
-------------------------------------------------------------------------
Syntax for predict

    predict [type] {stub*|newvarlist} [if] [in] [, statistic options]

statistic           Description
-------------------------------------------------------------------------
Main
  xb                linear prediction; the default
  residuals         residuals
  variance          conditional variances and covariances
  correlation       conditional correlations
-------------------------------------------------------------------------
These statistics are available both in and out of sample; type predict ... if e(sample) ... if wanted only for the estimation sample.

options                    Description
-------------------------------------------------------------------------
Options
  equation(eqnames)        names of equations for which predictions are made
  dynamic(time_constant)   begin dynamic forecast at specified time
-------------------------------------------------------------------------
Menu for predict

    Statistics > Postestimation > Predictions, residuals, etc.

Options for predict

Main
xb, the default, calculates the linear predictions of the dependent variables.
residuals calculates the residuals.
variance predicts the conditional variances and conditional covariances.
correlation predicts the conditional correlations.
Options
equation(eqnames) specifies the equation for which the predictions are calculated. Use this option
to predict a statistic for a particular equation. Equation names, such as equation(income), are
used to identify equations.
One equation name may be specified when predicting the dependent variable, the residuals, or
the conditional variance. For example, specifying equation(income) causes predict to predict
income, and specifying variance equation(income) causes predict to predict the conditional
variance of income.
Two equations may be specified when predicting a conditional variance or covariance. For example, specifying equation(income, consumption) variance causes predict to predict the
conditional covariance of income and consumption.
381
dynamic(time constant) specifies when predict starts producing dynamic forecasts. The specified
time constant must be in the scale of the time variable specified in tsset, and the time constant
must be inside a sample for which observations on the dependent variables are available. For
example, dynamic(tq(2008q4)) causes dynamic predictions to begin in the fourth quarter of
2008, assuming that your time variable is quarterly; see [D] datetime. If the model contains
exogenous variables, they must be present for the whole predicted sample. dynamic() may not
be specified with residuals.
Remarks and examples

. use http://www.stata-press.com/data/r13/stocks
(Data from Yahoo! Finance)
. quietly mgarch vcc (toyota nissan = , noconstant)
> (honda = L.nissan, noconstant), arch(1) garch(1)
. tsappend, add(50)
. predict H*, variance dynamic(2016)

(figure: predicted conditional variances plotted over 01jan2009-01jan2011)
Recent in-sample one-step-ahead forecasts are plotted to the left of the vertical line in the above
graph, and the dynamic out-of-sample forecasts appear to the right of the vertical line. The graph
shows the tail end of the huge increase in return volatility that took place in 2008 and 2009. It also
shows that the dynamic forecasts quickly converge.
Also see
[TS] mgarch vcc Varying conditional correlation multivariate GARCH models
[U] 20 Estimation and postestimation commands
Title
newey Regression with Newey-West standard errors
Syntax                  Menu                    Description
Options                 Remarks and examples    Stored results
Methods and formulas    References              Also see
Syntax
    newey depvar [indepvars] [if] [in] [weight], lag(#) [options]
options             Description
-------------------------------------------------------------------------
Model
* lag(#)            set maximum lag order of autocorrelation
  noconstant        suppress constant term
Reporting
  level(#)          set confidence level; default is level(95)
  display_options   control display of output; see [R] estimation options
  coeflegend        display legend instead of statistics
-------------------------------------------------------------------------
*
lag(#) is required.
You must tsset your data before using newey; see [TS] tsset.
indepvars may contain factor variables; see [U] 11.4.3 Factor variables.
depvar and indepvars may contain time-series operators; see [U] 11.4.4 Time-series varlists.
by, rolling, and statsby are allowed; see [U] 11.1.10 Prefix commands.
aweights are allowed; see [U] 11.1.6 weight.
coeflegend does not appear in the dialog box.
See [U] 20 Estimation and postestimation commands for more capabilities of estimation commands.
Menu

    Statistics > Time series > Regression with Newey-West std. errors
Description

newey produces Newey-West standard errors for coefficients estimated by OLS regression. The error structure is assumed to be heteroskedastic and possibly autocorrelated up to some lag.
Options
Model
lag(#) specifies the maximum lag to be considered in the autocorrelation structure. If you specify
lag(0), the output is the same as regress, vce(robust). lag() is required.
noconstant; see [R] estimation options.
Reporting

level(#) and display_options; see [R] estimation options.

The following option is available with newey but is not shown in the dialog box:

coeflegend; see [R] estimation options.

Remarks and examples
Example 1
newey, lag(0) is equivalent to regress, vce(robust):
. use http://www.stata-press.com/data/r13/auto
(1978 Automobile Data)
. regress price weight displ, vce(robust)

Linear regression                               Number of obs   =        74
                                                F(  2,    71)   =     14.44
                                                Prob > F        =    0.0000
                                                R-squared       =    0.2909
                                                Root MSE        =    2518.4

                             Robust
       price       Coef.   Std. Err.      t    P>|t|     [95% Conf. Interval]
------------------------------------------------------------------------------
      weight    1.823366   .7808755    2.34   0.022      .2663450    3.380387
displacement    2.087054   7.436967    0.28   0.780     -12.74184    16.91595
       _cons     247.907   1129.602    0.22   0.827     -2004.455    2500.269
------------------------------------------------------------------------------

. generate t = _n
. tsset t
        time variable:  t, 1 to 74
                delta:  1 unit
. newey price weight displ, lag(0)

Regression with Newey-West standard errors      Number of obs   =        74
maximum lag: 0                                  F(  2,    71)   =     14.44
                                                Prob > F        =    0.0000

                           Newey-West
       price       Coef.   Std. Err.      t    P>|t|     [95% Conf. Interval]
------------------------------------------------------------------------------
      weight    1.823366   .7808755    2.34   0.022      .2663450    3.380387
displacement    2.087054   7.436967    0.28   0.780     -12.74184    16.91595
       _cons     247.907   1129.602    0.22   0.827     -2004.455    2500.269
------------------------------------------------------------------------------
Because newey requires the dataset to be tsset, we generated a dummy time variable t, which in
this example played no role in the estimation.
Example 2
Say that we have time-series measurements on variables usr and idle and now wish to fit an OLS model but obtain Newey-West standard errors allowing for a lag of up to 3:

. use http://www.stata-press.com/data/r13/idle2, clear
. tsset time
        time variable:  time, 1 to 30
                delta:  1 unit
. newey usr idle, lag(3)

Regression with Newey-West standard errors      Number of obs   =        30
maximum lag: 3                                  F(  1,    28)   =     10.90
                                                Prob > F        =    0.0026

                           Newey-West
         usr       Coef.   Std. Err.      t    P>|t|     [95% Conf. Interval]
------------------------------------------------------------------------------
        idle   -.2281501   .0690927   -3.30   0.003     -.3696802     -.08662
       _cons    23.13483   6.327031    3.66   0.001       10.1745    36.09516
------------------------------------------------------------------------------
Stored results
newey stores the following in e():
Scalars
  e(N)                number of observations
  e(df_m)             model degrees of freedom
  e(df_r)             residual degrees of freedom
  e(F)                F statistic
  e(lag)              maximum lag
  e(rank)             rank of e(V)
Macros
  e(cmd)              newey
  e(cmdline)          command as typed
  e(depvar)           name of dependent variable
  e(wtype)            weight type
  e(wexp)             weight expression
  e(title)            title in estimation output
  e(vcetype)          title used to label Std. Err.
  e(properties)       b V
  e(estat_cmd)        program used to implement estat
  e(predict)          program used to implement predict
  e(asbalanced)       factor variables fvset as asbalanced
  e(asobserved)       factor variables fvset as asobserved
Matrices
  e(b)                coefficient vector
  e(Cns)              constraints matrix
  e(V)                variance-covariance matrix of the estimators
Functions
  e(sample)           marks estimation sample
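The stored results are accessed in the usual way. For instance, after the regression in example 2 above, e(lag) returns the lag used and e(V) the Newey-West VCE; this short sketch is illustrative only:

. quietly newey usr idle, lag(3)
. display e(lag)
. matrix list e(V)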
Methods and formulas

newey calculates

    \widehat{Var}(\hat{\beta}_{OLS}) = (X'X)^{-1} X' \widehat{\Omega} X (X'X)^{-1}

That is, the coefficient estimates are simply those of OLS linear regression.

For lag(0) (no autocorrelation), the variance estimates are calculated using the White formulation:

    X' \widehat{\Omega} X = X' \widehat{\Omega}_0 X = \frac{n}{n-k} \sum_{i} \hat{e}_i^2\, x_i' x_i

Here ê_i = y_i - x_i β̂_OLS, where x_i is the ith row of the X matrix, n is the number of observations, and k is the number of predictors in the model, including the constant if there is one. The above formula is the same as that used by regress, vce(robust) with the regression-like formula (the default) for the multiplier q_c; see Methods and formulas of [R] regress.
For lag(m), m > 0, the variance estimates are calculated using the Newey-West (1987) formulation

    X' \widehat{\Omega} X = X' \widehat{\Omega}_0 X
        + \frac{n}{n-k} \sum_{l=1}^{m} \left(1 - \frac{l}{m+1}\right) \sum_{t=l+1}^{n} \hat{e}_t \hat{e}_{t-l} \left(x_t' x_{t-l} + x_{t-l}' x_t\right)

where x_t is the row of the X matrix observed at time t.
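The formulas above are straightforward to verify by hand. The following Mata sketch is illustrative only; it assumes the idle2 data from example 2 and recomputes the lag(3) Newey-West variance directly from the formulas, so the result can be compared with e(V) from newey:

. use http://www.stata-press.com/data/r13/idle2, clear
. mata:
:     y = st_data(., "usr")
:     X = st_data(., "idle"), J(rows(y), 1, 1)    // regressors plus constant
:     n = rows(y); k = cols(X); m = 3
:     b = invsym(X'X)*X'y                         // OLS coefficients
:     e = y - X*b                                 // OLS residuals
:     M = X'((e:^2):*X)                           // lag-0 (White) middle term
:     for (l=1; l<=m; l++) {
:         w  = 1 - l/(m+1)                        // Bartlett kernel weight
:         Z  = X[|l+1,1 \ n,k|]                   // x_t,     t = l+1,...,n
:         Zl = X[|1,1 \ n-l,k|]                   // x_{t-l}
:         el = e[|l+1,1 \ n,1|] :* e[|1,1 \ n-l,1|]
:         M  = M + w*(Z'(el:*Zl) + Zl'(el:*Z))
:     }
:     V = invsym(X'X) * (n/(n-k))*M * invsym(X'X)
:     V
: end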
Whitney K. Newey (1954- ) earned degrees in economics at Brigham Young University and MIT. After a period at Princeton, he returned to MIT as a professor in 1990. His interests in theoretical and applied econometrics include bootstrapping, nonparametric estimation of models, semiparametric models, and choosing the number of instrumental variables.

Kenneth D. West (1953- ) earned a bachelor's degree in economics and mathematics at Wesleyan University and then a PhD in economics at MIT. After a period at Princeton, he joined the University of Wisconsin in 1988. His interests include empirical macroeconomics and time-series econometrics.
References
Hardin, J. W. 1997. sg72: Newey-West standard errors for probit, logit, and poisson models. Stata Technical Bulletin 39: 32-35. Reprinted in Stata Technical Bulletin Reprints, vol. 7, pp. 182-186. College Station, TX: Stata Press.
Newey, W. K., and K. D. West. 1987. A simple, positive semi-definite, heteroskedasticity and autocorrelation consistent covariance matrix. Econometrica 55: 703-708.
Wang, Q., and N. Wu. 2012. Long-run covariance and its applications in cointegration regression. Stata Journal 12: 515-542.
White, H. L., Jr. 1980. A heteroskedasticity-consistent covariance matrix estimator and a direct test for heteroskedasticity. Econometrica 48: 817-838.
Also see
[TS] newey postestimation Postestimation tools for newey
[TS] arima ARIMA, ARMAX, and other dynamic regression models
[TS] forecast Econometric model forecasting
[TS] tsset Declare data to be time-series data
[R] regress Linear regression
[U] 20 Estimation and postestimation commands
Title
newey postestimation Postestimation tools for newey
Description
Remarks and examples
Description
The following postestimation commands are available after newey:
Command             Description
-------------------------------------------------------------------------
contrast            contrasts and ANOVA-style joint tests of estimates
estat summarize     summary statistics for the estimation sample
estat vce           variance-covariance matrix of the estimators (VCE)
estimates           cataloging estimation results
forecast            dynamic forecasts and simulations
lincom              point estimates, standard errors, testing, and inference for linear combinations of coefficients
linktest            link test for model specification
margins             marginal means, predictive margins, marginal effects, and average marginal effects
marginsplot         graph the results from margins (profile plots, interaction plots, etc.)
nlcom               point estimates, standard errors, testing, and inference for nonlinear combinations of coefficients
predict             predictions, residuals, influence statistics, and other diagnostic measures
predictnl           point estimates, standard errors, testing, and inference for generalized predictions
pwcompare           pairwise comparisons of estimates
test                Wald tests of simple and composite linear hypotheses
testnl              Wald tests of nonlinear hypotheses
-------------------------------------------------------------------------

Syntax for predict

    predict [type] newvar [if] [in] [, statistic]

statistic           Description
-------------------------------------------------------------------------
Main
  xb                linear prediction; the default
  stdp              standard error of the linear prediction
  residuals         residuals
-------------------------------------------------------------------------
These statistics are available both in and out of sample; type predict ... if e(sample) ... if wanted only for the estimation sample.
Menu for predict

    Statistics > Postestimation > Predictions, residuals, etc.

Options for predict

Main

xb, the default, calculates the linear prediction.

stdp calculates the standard error of the linear prediction.

residuals calculates the residuals.

Remarks and examples
Breusch-Godfrey LM test for autocorrelation

    lags(p)          chi2               df         Prob > chi2
       1            6.415                1            0.0113
The result implies that the error term is serially correlated; therefore, we should rather fit the model with newey. But let's use the outcome from regress to conduct a test for the statistical significance of a particular linear combination of the two coefficients in the regression:
. test 1.15*honda+toyota = 1
 ( 1)  1.15*honda + toyota = 1
       F(  1,  2012) =    5.52
            Prob > F =    0.0189
We reject the null hypothesis that the linear combination is valid. Let's see if the conclusion remains the same when we fit the model with newey, obtaining the Newey-West standard errors for the OLS coefficient estimates.
. test 1.15*honda+toyota = 1
 ( 1)  1.15*honda + toyota = 1
       F(  1,  2012) =    2.57
            Prob > F =    0.1088
The conclusion would be the opposite, which illustrates the importance of using the proper estimator
for the standard errors.
Example 2
We want to produce forecasts based on dynamic regressions for each of the three stocks. We will
treat the stock returns for toyota as a leading indicator for the two other stocks. We also check for
autocorrelation with the BreuschGodfrey test.
. use http://www.stata-press.com/data/r13/stocks
(Data from Yahoo! Finance)
. regress toyota l(1/2).toyota
(output omitted )
. estat bgodfrey
Breusch-Godfrey LM test for autocorrelation

    lags(p)          chi2               df         Prob > chi2
       1            4.373                1            0.0365

. regress nissan l(1/2).nissan l.toyota
(output omitted )
. estat bgodfrey
Breusch-Godfrey LM test for autocorrelation

    lags(p)          chi2               df         Prob > chi2
       1            0.099                1            0.7536

. regress honda l(1/2).honda l.toyota
(output omitted )
. estat bgodfrey
Breusch-Godfrey LM test for autocorrelation

    lags(p)          chi2               df         Prob > chi2
       1            0.923                1            0.3367
The first result indicates that we should consider using newey to fit the model for toyota. The
point forecasts would not be actually affected because newey produces the same OLS coefficient
estimates reported by regress. However, if we were interested in obtaining measures of uncertainty
surrounding the point forecasts, we should then use the results from newey for that first equation.
Lets illustrate the use of forecast with newey for the first equation and regress for the two
other equations. We first declare the forecast model:
. forecast create stocksmodel
Forecast model stocksmodel started.
Then we refit the equations and add them to the forecast model:
. quietly newey toyota l(1/2).toyota, lag(1)
. estimates store eq_toyota
. forecast estimates eq_toyota
Added estimation results from newey.
Forecast model stocksmodel now contains 1 endogenous variable.
. quietly regress nissan l(1/2).nissan l.toyota
. estimates store eq_nissan
. forecast estimates eq_nissan
Added estimation results from regress.
Forecast model stocksmodel now contains 2 endogenous variables.
. quietly regress honda l(1/2).honda l.toyota
. estimates store eq_honda
. forecast estimates eq_honda
Added estimation results from regress.
Forecast model stocksmodel now contains 3 endogenous variables.
We use tsappend to add the number of periods for the forecast, and then we obtain the predicted
values with forecast solve:
. tsappend, add(7)
. forecast solve, prefix(stk_)
Computing dynamic forecasts for model stocksmodel.
       Starting period:  2016
         Ending period:  2022
       Forecast prefix:  stk_
2016: ............
2017: ...........
2018: ...........
2019: ..........
2020: .........
2021: ........
2022: ........
Forecast 3 variables spanning 7 periods.
The graph below shows several interesting results. First, the stock returns of the competitor (toyota) do not seem to be a leading indicator for the stock returns of the two other companies (otherwise, the patterns for the movements in nissan and honda would be following the recent past movements in toyota). You can actually fit the models above for nissan and honda to confirm that the coefficient estimate for the first lag of toyota is not significant in either equation. Second, immediately after the second forecasted period, there is basically no variation in the predictions, which indicates the very short-run predicting influence of past history on the forecasts of the three stock returns.
(figure: forecasts of the stock returns of Honda, Nissan, and Toyota over December 2010 and early January 2011)
Also see
[TS] newey Regression with NeweyWest standard errors
[U] 20 Estimation and postestimation commands
Title
pergram Periodogram
Syntax                  Menu                    Description
Options                 Remarks and examples    Methods and formulas
References              Also see
Syntax
    pergram varname [if] [in] [, options]

options                   Description
-------------------------------------------------------------------------
Main
  generate(newvar)        generate newvar to contain the raw periodogram values
Plot
  cline_options           affect rendition of the plotted points connected by lines
  marker_options          change look of markers (color, size, etc.)
  marker_label_options    add marker labels; change look or position
Add plots
  addplot(plot)           add other plots to the generated graph
  twoway_options          any options other than by() documented in [G-3] twoway_options
  nograph                 suppress the graph
-------------------------------------------------------------------------
You must tsset your data before using pergram; see [TS] tsset. Also, the time series must be dense
(nonmissing with no gaps in the time variable) in the specified sample.
varname may contain time-series operators; see [U] 11.4.4 Time-series varlists.
nograph does not appear in the dialog box.
Menu

    Statistics > Time series > Graphs > Periodogram
Description
pergram plots the log-standardized periodogram for a dense time series.
Options
Main
generate(newvar) specifies a new variable to contain the raw periodogram values. The generated graph log-transforms and scales the values by the sample variance and then truncates them to the [-6, 6] interval before graphing them.
Plot
cline options affect the rendition of the plotted points connected by lines; see [G-3] cline options.
marker options specify the look of markers. This look includes the marker symbol, the marker size,
and its color and outline; see [G-3] marker options.
marker label options specify if and how the markers are to be labeled; see [G-3] marker label options.
Add plots
addplot(plot) adds specified plots to the generated graph; see [G-3] addplot option.
twoway options are any of the options documented in [G-3] twoway options, excluding by(). These
include options for titling the graph (see [G-3] title options) and for saving the graph to disk (see
[G-3] saving option).
The following option is available with pergram but is not shown in the dialog box:
nograph prevents pergram from constructing a graph.
Example 1
We have time-series data consisting of 144 observations on the monthly number of international
airline passengers (in thousands) between 1949 and 1960 (Box, Jenkins, and Reinsel 2008, Series G).
We can graph the raw series and the log periodogram for these data by typing
. use http://www.stata-press.com/data/r13/air2
(TIMESLAB: Airline passengers)
. scatter air time, m(o) c(l)

[Figure: monthly number of international airline passengers (in thousands) plotted against time in months, 1950-1960]
. pergram air

[Figure: log periodogram of air; y axis from -6.00 to 6.00, x axis frequency from 0.00 to 0.50]
The periodogram highlights the annual cycle together with the harmonics. Notice the peak at a
frequency of about 0.08 cycles per month (cpm). The period is the reciprocal of frequency, and the
reciprocal of 0.08 cpm is approximately 12 months per cycle. The similarity in shape of each group
of 12 observations reveals the annual cycle. The magnitude of the cycle is increasing, resulting in
the peaks in the periodogram at the harmonics of the principal annual cycle.
Example 2
This example uses 215 observations on the annual number of sunspots from 1749 to 1963 (Box
and Jenkins 1976, Series E). The graphs of the raw series and the log periodogram for these data are given below.
. use http://www.stata-press.com/data/r13/sunspot
(TIMESLAB: Wolfer sunspot data)
. scatter spot time, m(o) c(l)

[Figure: annual number of sunspots (0 to 200) plotted against year, 1750-1950]
. pergram spot

[Figure: log periodogram of spot; y axis from -6.00 to 6.00, x axis frequency from 0.00 to 0.50]
The periodogram peaks at a frequency of slightly less than 0.10 cycles per year, indicating a 10- to 12-year cycle in sunspot activity.
Example 3
Here we examine the number of trapped Canadian lynx from 1821 through 1934 (Newton 1988,
587). The raw series and the log periodogram are given as
. use http://www.stata-press.com/data/r13/lynx2
(TIMESLAB: Canadian lynx)
. scatter lynx time, m(o) c(l)

[Figure: number of trapped Canadian lynx (up to 8,000) plotted against time]
. pergram lynx

[Figure: log periodogram of lynx; y axis from -6.00 to 6.00, x axis frequency from 0.00 to 0.50]
The periodogram indicates that there is a cycle with a duration of about 10 years for these data but
that it is otherwise random.
Example 4
To more clearly highlight what the periodogram depicts, we present the result of analyzing a
time series of the sum of four sinusoids (of different periods). The periodogram should be able to
decompose the time series into four different sinusoids whose periods may be determined from the
plot.
. use http://www.stata-press.com/data/r13/cos4
(TIMESLAB: Sum of 4 Cosines)
. scatter sumfc time, m(o) c(l)

[Figure: sum of 4 cosines plotted against time]
. pergram sumfc

[Figure: log periodogram of sumfc; y axis from -6.00 to 6.00, x axis frequency from 0.00 to 0.50]
The periodogram clearly shows the four contributions to the original time series. From the plot, we can see that the periods of the summands were 3, 6, 12, and 36, which you can confirm by listing the periods and frequencies at which the periodogram peaks:
       +---------------------+
       | period      omega   |
       |---------------------|
   5.  |     36   .02777778  |
  13.  |     12   .08333333  |
  25.  |      6   .16666667  |
  49.  |      3   .33333333  |
       +---------------------+
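A listing like the one above can be produced from the natural frequencies (k-1)/n defined in Methods and formulas below; a minimal sketch, assuming the 144-observation cos4 dataset:

. generate double omega = (_n-1)/144
. generate double period = 144/(_n-1)
(1 missing value generated)
. list period omega if inlist(period, 36, 12, 6, 3)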
Methods and formulas

Let x(1), ..., x(n) be a time series, and let $\omega_k = (k-1)/n$ denote the natural frequencies. Define

$$C_k^2 = \frac{1}{n^2}\left|\,\sum_{t=1}^{n} x(t)\, e^{2\pi i (t-1)\omega_k}\right|^2$$

A plot of $nC_k^2$ versus $\omega_k$ is then called the periodogram. The sample spectral density is defined for a continuous frequency $\omega$ as

$$\widehat{f}(\omega) =
\begin{cases}
\dfrac{1}{n}\left|\,\displaystyle\sum_{t=1}^{n} x(t)\, e^{2\pi i (t-1)\omega}\right|^2 & \text{if } \omega \in [\,0, .5\,]\\[2ex]
\widehat{f}(1-\omega) & \text{if } \omega \in [\,.5, 1\,]
\end{cases}$$

The periodogram (and sample spectral density) is symmetric about $\omega = 0.5$. Further standardize the periodogram such that

$$\frac{1}{n}\sum_{k=2}^{n}\frac{nC_k^2}{\widehat{\sigma}^2} = 1$$

where $\widehat{\sigma}^2$ is the sample variance of the time series, so that the average value of the ordinate is one.

Once the amplitudes are standardized, we may then take the natural log of the values and produce the log periodogram. In doing so, we truncate the graph at ±6. We drop the word "log" and simply refer to the "log periodogram" as the "periodogram" in text.
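As a sketch of that standardization, the raw ordinates saved by generate() can be log-transformed by hand; the use of _N for n and of summarize's r(Var) for the sample variance are assumptions of this sketch:

. pergram air, generate(rawpgram) nograph
. quietly summarize air
. generate double logpgram = ln(_N*rawpgram/r(Var))
. replace logpgram = min(max(logpgram, -6), 6)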
References
Box, G. E. P., and G. M. Jenkins. 1976. Time Series Analysis: Forecasting and Control. Oakland, CA: Holden-Day.
Box, G. E. P., G. M. Jenkins, and G. C. Reinsel. 2008. Time Series Analysis: Forecasting and Control. 4th ed.
Hoboken, NJ: Wiley.
Chatfield, C. 2004. The Analysis of Time Series: An Introduction. 6th ed. Boca Raton, FL: Chapman & Hall/CRC.
Hamilton, J. D. 1994. Time Series Analysis. Princeton: Princeton University Press.
Newton, H. J. 1988. TIMESLAB: A Time Series Analysis Laboratory. Belmont, CA: Wadsworth.
Also see
[TS] tsset Declare data to be time-series data
[TS] corrgram Tabulate and graph autocorrelations
[TS] cumsp Cumulative spectral distribution
[TS] wntestb Bartlett's periodogram-based test for white noise
Title
pperron Phillips-Perron unit-root test

Syntax
Menu
Description
Options
Remarks and examples
Stored results
Methods and formulas
References
Also see

Syntax

pperron varname [if] [in] [, options]

options         Description
-----------------------------------------------------------
Main
  noconstant    suppress constant term in regression
  trend         include trend term in regression
  regress       display regression table
  lags(#)       use # Newey-West lags
-----------------------------------------------------------
You must tsset your data before using pperron; see [TS] tsset.
varname may contain time-series operators; see [U] 11.4.4 Time-series varlists.
Menu
Statistics > Time series > Tests > Phillips-Perron unit-root test
Description
pperron performs the PhillipsPerron (1988) test that a variable has a unit root. The null hypothesis
is that the variable contains a unit root, and the alternative is that the variable was generated by a
stationary process. pperron uses NeweyWest (1987) standard errors to account for serial correlation,
whereas the augmented DickeyFuller test implemented in dfuller (see [TS] dfuller) uses additional
lags of the first-differenced variable.
Options

Main

noconstant suppresses the constant term (intercept) in the model.

trend specifies that a trend term be included in the associated regression.

regress specifies that the associated regression table appear in the output. By default, the regression table is not produced.

lags(#) specifies the number of Newey-West lags to use in calculating the standard error.

Remarks and examples

As noted in [TS] dfuller, the Dickey-Fuller test involves fitting the regression model

$$y_t = \alpha + \rho y_{t-1} + \delta t + u_t \tag{1}$$
by ordinary least squares (OLS), but serial correlation will present a problem. To account for this, the augmented Dickey-Fuller test's regression includes lags of the first differences of $y_t$.

The Phillips-Perron test involves fitting (1), and the results are used to calculate the test statistics. Phillips and Perron (1988) proposed two alternative statistics, which pperron presents. Phillips and Perron's test statistics can be viewed as Dickey-Fuller statistics that have been made robust to serial correlation by using the Newey-West (1987) heteroskedasticity- and autocorrelation-consistent covariance matrix estimator.

Hamilton (1994, chap. 17) and [TS] dfuller discuss four different cases into which unit-root tests can be classified. The Phillips-Perron test applies to cases one, two, and four but not to case three. Cases one and two assume that the variable has a unit root without drift under the null hypothesis, the only difference being whether the constant term is included in regression (1). Case four assumes that the variable has a random walk, with or without drift, under the null hypothesis. Case three, which assumes that the variable has a random walk with drift under the null hypothesis, is just a special case of case four, so the fact that the Phillips-Perron test does not apply is not restrictive.
The table below summarizes the relevant cases:

Case   Process under null hypothesis        Regression restrictions   dfuller option
------------------------------------------------------------------------------------
 1     random walk without drift            α = 0, δ = 0              noconstant
 2     random walk without drift            δ = 0                     (default)
 4     random walk with or without drift    (none)                    trend
------------------------------------------------------------------------------------
The critical values for the Phillips-Perron test are the same as those for the augmented Dickey-Fuller test. See Hamilton (1994, chap. 17) for more information.
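The three applicable cases map directly onto pperron's options; a minimal sketch with a hypothetical variable y:

. pperron y, noconstant    // case one: no constant, no trend
. pperron y                // case two: constant only (the default)
. pperron y, trend         // case four: constant and trend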
Example 1
Here we use the international airline passengers dataset (Box, Jenkins, and Reinsel 2008, Series G).
This dataset has 144 observations on the monthly number of international airline passengers from
1949 through 1960. Because the data exhibit a clear upward trend over time, we will use the trend
option.
. use http://www.stata-press.com/data/r13/air2
(TIMESLAB: Airline passengers)
. pperron air, lags(4) trend regress

Phillips-Perron test for unit root                 Number of obs   =       143
                                                   Newey-West lags =         4

                   ---------- Interpolated Dickey-Fuller ----------
          Test        1% Critical     5% Critical     10% Critical
        Statistic         Value           Value           Value
--------------------------------------------------------------------
Z(rho)   -46.405        -27.687         -20.872         -17.643
Z(t)      -5.049         -4.026          -3.444          -3.144
--------------------------------------------------------------------

------------------------------------------------------------------------------
         air |      Coef.   Std. Err.      t    P>|t|     [95% Conf. Interval]
-------------+----------------------------------------------------------------
         air |
         L1. |   .7318116   .0578092    12.66   0.000     .6175196    .8461035
      _trend |   .7107559   .1670563     4.25   0.000     .3804767    1.041035
       _cons |   25.95168   7.325951     3.54   0.001     11.46788    40.43547
------------------------------------------------------------------------------
Just as in the example in [TS] dfuller, we reject the null hypothesis of a unit root at all common significance levels. The interpolated critical values for Z(t) differ slightly from those shown in the example in [TS] dfuller because the sample sizes are different: with the augmented Dickey-Fuller regression we lose observations because of the inclusion of lagged difference terms as regressors.
Stored results

pperron stores the following in r():

Scalars
  r(N)       number of observations
  r(lags)    number of lagged differences used
  r(pval)    MacKinnon approximate p-value (not included if noconstant specified)
  r(Zt)      Phillips-Perron τ test statistic
  r(Zrho)    Phillips-Perron ρ test statistic
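Because the results are returned in r(), they can be used programmatically; for example:

. pperron air, lags(4) trend
. display "Z(t) = " r(Zt) "  MacKinnon p-value = " r(pval)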
Methods and formulas

In the OLS estimation of an AR(1) process with Gaussian errors,

$$y_i = \rho y_{i-1} + \epsilon_i$$

where $\epsilon_i$ are independently and identically distributed as $N(0,\sigma^2)$ and $y_0 = 0$, the OLS estimate (based on an n-observation time series) of the autocorrelation parameter is given by

$$\widehat{\rho}_n = \frac{\sum_{i=1}^{n} y_{i-1}\, y_i}{\sum_{i=1}^{n} y_i^2}$$
To compute the test statistics, pperron fits the regression

$$y_i = \alpha + \rho y_{i-1} + \epsilon_i$$

where we may exclude the constant or include a trend term. There are two statistics, $Z_\rho$ and $Z_\tau$, calculated as

$$Z_\rho = n(\widehat{\rho}_n - 1) - \frac{1}{2}\,\frac{n^2\widehat{\sigma}^2}{s_n^2}\left(\widehat{\lambda}^2 - \widehat{\gamma}_{0,n}\right)$$

$$Z_\tau = \sqrt{\frac{\widehat{\gamma}_{0,n}}{\widehat{\lambda}^2}}\;\frac{\widehat{\rho}_n - 1}{\widehat{\sigma}} - \frac{1}{2}\left(\widehat{\lambda}^2 - \widehat{\gamma}_{0,n}\right)\frac{1}{\widehat{\lambda}}\,\frac{n\,\widehat{\sigma}}{s_n}$$

$$\widehat{\gamma}_{j,n} = \frac{1}{n}\sum_{i=j+1}^{n}\widehat{u}_i\,\widehat{u}_{i-j}$$

$$\widehat{\lambda}^2 = \widehat{\gamma}_{0,n} + 2\sum_{j=1}^{q}\left(1 - \frac{j}{q+1}\right)\widehat{\gamma}_{j,n}$$

$$s_n^2 = \frac{1}{n-k}\sum_{i=1}^{n}\widehat{u}_i^2$$

where $u_i$ is the OLS residual, $k$ is the number of covariates in the regression, $q$ is the number of Newey-West lags used in calculating $\widehat{\lambda}^2$, and $\widehat{\sigma}$ is the OLS standard error of $\widehat{\rho}_n$.
The critical values, which have the same distribution as the Dickey-Fuller statistic (see Dickey and Fuller 1979) included in the output, are linearly interpolated from the table of values that appear in Fuller (1996), and the MacKinnon approximate p-values use the regression surface published in MacKinnon (1994).
Peter Charles Bonest Phillips (1948- ) was born in Weymouth, England, and earned degrees in economics at the University of Auckland in New Zealand and the London School of Economics. After periods at the Universities of Essex and Birmingham, Phillips moved to Yale in 1979. He also holds appointments at the University of Auckland and the University of York. His main research interests are in econometric theory, financial econometrics, time-series and panel-data econometrics, and applied macroeconomics.

Pierre Perron (1959- ) was born in Quebec, Canada, and earned degrees in economics at McGill, Queen's, and Yale. After posts at Princeton and the Université de Montréal, he joined Boston University in 1997. His research interests include time-series analysis, econometrics, and applied macroeconomics.
References
Box, G. E. P., G. M. Jenkins, and G. C. Reinsel. 2008. Time Series Analysis: Forecasting and Control. 4th ed.
Hoboken, NJ: Wiley.
Dickey, D. A., and W. A. Fuller. 1979. Distribution of the estimators for autoregressive time series with a unit root. Journal of the American Statistical Association 74: 427-431.
Fuller, W. A. 1996. Introduction to Statistical Time Series. 2nd ed. New York: Wiley.
Hamilton, J. D. 1994. Time Series Analysis. Princeton: Princeton University Press.
MacKinnon, J. G. 1994. Approximate asymptotic distribution functions for unit-root and cointegration tests. Journal of Business and Economic Statistics 12: 167-176.
Newey, W. K., and K. D. West. 1987. A simple, positive semi-definite, heteroskedasticity and autocorrelation consistent covariance matrix. Econometrica 55: 703-708.
Phillips, P. C. B., and P. Perron. 1988. Testing for a unit root in time series regression. Biometrika 75: 335-346.
Also see
[TS] tsset Declare data to be time-series data
[TS] dfgls DF-GLS unit-root test
[TS] dfuller Augmented Dickey-Fuller unit-root test
[XT] xtunitroot Panel-data unit-root tests
Title
prais Prais-Winsten and Cochrane-Orcutt regression

Syntax
Menu
Description
Options
Remarks and examples
Stored results
Methods and formulas
Acknowledgment
References
Also see

Syntax

prais depvar [indepvars] [if] [in] [, options]

options                Description
-----------------------------------------------------------------------------
Model
  rhotype(regress)     base ρ on single-lag OLS of residuals; the default
  rhotype(freg)        base ρ on single-lead OLS of residuals
  rhotype(tscorr)      base ρ on autocorrelation of residuals
  rhotype(dw)          base ρ on autocorrelation based on Durbin-Watson
  rhotype(theil)       base ρ on adjusted autocorrelation
  rhotype(nagar)       base ρ on adjusted Durbin-Watson
  corc                 use Cochrane-Orcutt transformation
  ssesearch            search for ρ that minimizes the sum of squared errors
  twostep              stop after the first iteration
  noconstant           suppress constant term
  hascons              has user-supplied constant
  savespace            conserve memory during estimation

SE/Robust
  vce(vcetype)         vcetype may be ols, robust, cluster clustvar, hc2, or hc3

Reporting
  level(#)             set confidence level; default is level(95)
  nodw                 do not report the Durbin-Watson statistic
  display_options      control spacing and display of omitted variables and
                         base and empty cells

Optimization
  optimize_options     control the optimization process; seldom used

  coeflegend           display legend instead of statistics
-----------------------------------------------------------------------------
You must tsset your data before using prais; see [TS] tsset.
indepvars may contain factor variables; see [U] 11.4.3 Factor variables.
depvar and indepvars may contain time-series operators; see [U] 11.4.4 Time-series varlists.
by, fp, rolling, and statsby are allowed; see [U] 11.1.10 Prefix commands.
coeflegend does not appear in the dialog box.
See [U] 20 Estimation and postestimation commands for more capabilities of estimation commands.
Menu
Statistics > Time series > Prais-Winsten regression
Description
prais uses the generalized least-squares method to estimate the parameters in a linear regression
model in which the errors are serially correlated. Specifically, the errors are assumed to follow a
first-order autoregressive process.
Options

Model

rhotype(rhomethod) selects a specific computation for the autocorrelation parameter ρ, where rhomethod may be regress, freg, tscorr, dw, theil, or nagar. The prais estimator can use any consistent estimate of ρ to transform the equation, and each of these estimates meets that requirement. The default is regress, which produces the minimum sum-of-squares solution (ssesearch option) for the Cochrane-Orcutt transformation; none of these computations will produce the minimum sum-of-squares solution for the full Prais-Winsten transformation. See Judge et al. (1985) for a discussion of each estimate of ρ.
corc specifies that the Cochrane-Orcutt transformation be used to estimate the equation. With this option, the Prais-Winsten transformation of the first observation is not performed, and the first observation is dropped when estimating the transformed equation; see Methods and formulas below.

ssesearch specifies that a search be performed for the value of ρ that minimizes the sum-of-squared errors of the transformed equation (Cochrane-Orcutt or Prais-Winsten transformation). The search method is a combination of quadratic and modified bisection searches using golden sections.

twostep specifies that prais stop on the first iteration after the equation is transformed by the two-step efficient estimator. Although iterating these estimators to convergence is customary, they are efficient at each step.

noconstant; see [R] estimation options.

hascons indicates that a user-defined constant, or a set of variables that in linear combination forms a constant, has been included in the regression. For some computational concerns, see the discussion in [R] regress.

savespace specifies that prais attempt to save as much space as possible by retaining only those variables required for estimation. The original data are restored after estimation. This option is rarely used and should be used only if there is insufficient space to fit a model without the option.
SE/Robust

vce(vcetype) specifies the estimator for the variance-covariance matrix of the estimator; see [R] vce_option.

vce(ols), the default, uses the standard variance estimator for ordinary least-squares regression.

vce(robust) specifies to use the Huber/White/sandwich estimator.

vce(cluster clustvar) specifies to use the intragroup correlation estimator.

vce(hc2) and vce(hc3) specify an alternative bias correction for the vce(robust) variance calculation; for more information, see [R] regress. You may specify only one of vce(hc2), vce(hc3), or vce(robust).

All estimates from prais are conditional on the estimated value of ρ. Robust variance estimates here are robust only to heteroskedasticity and are not generally robust to misspecification of the functional form or omitted variables. The estimation of the functional form is intertwined with the estimation of ρ, and all estimates are conditional on ρ. Thus estimates cannot be robust to misspecification of functional form. For these reasons, it is probably best to interpret vce(robust) in the spirit of White's (1980) original paper on estimation of heteroskedastic-consistent covariance matrices.
Reporting

level(#); see [R] estimation options.

nodw suppresses reporting of the Durbin-Watson d statistic.

display_options; see [R] estimation options.

Optimization

optimize_options: iterate(#), [no]log, tolerance(#). iterate() specifies the maximum number of iterations. log/nolog specifies whether to show the iteration log. tolerance() specifies the tolerance for the coefficient vector; tolerance(1e-6) is the default. These options are seldom used.

The following option is available with prais but is not shown in the dialog box:

coeflegend; see [R] estimation options.
Remarks and examples

prais fits a linear regression of depvar on indepvars that is corrected for first-order serially correlated residuals. The model is

$$y_t = \mathbf{x}_t\boldsymbol\beta + u_t$$

$$u_t = \rho u_{t-1} + e_t$$

and the $e_t$ are independently and identically distributed as $N(0,\sigma^2)$. The covariance matrix $\Psi$ of the error term $u$ can then be written as

$$\Psi = \frac{\sigma^2}{1-\rho^2}
\begin{pmatrix}
1 & \rho & \rho^2 & \cdots & \rho^{T-1}\\
\rho & 1 & \rho & \cdots & \rho^{T-2}\\
\rho^2 & \rho & 1 & \cdots & \rho^{T-3}\\
\vdots & \vdots & \vdots & \ddots & \vdots\\
\rho^{T-1} & \rho^{T-2} & \rho^{T-3} & \cdots & 1
\end{pmatrix}$$
The Prais-Winsten estimator is a generalized least-squares (GLS) estimator. The Prais-Winsten method (as described in Judge et al. 1985) is derived from the AR(1) model for the error term described above. Whereas the Cochrane-Orcutt method uses a lag definition and loses the first observation in the iterative method, the Prais-Winsten method preserves that first observation. In small samples, this can be a significant advantage.
Technical note
To fit a model with autocorrelated errors, you must specify your data as time series and have (or
create) a variable denoting the time at which an observation was collected. The data for the regression
should be equally spaced in time.
Example 1
Say that we wish to fit a time-series model of usr on idle but are concerned that the residuals
may be serially correlated. We will declare the variable t to represent time by typing
. use http://www.stata-press.com/data/r13/idle
. tsset t
time variable: t, 1 to 30
delta: 1 unit
. prais usr idle, corc
(iteration log omitted)

Cochrane-Orcutt AR(1) regression -- iterated estimates

                                                Number of obs =        29
                                                F(  1,    27) =      6.49
                                                Prob > F      =    0.0168
                                                R-squared     =    0.1938
                                                Adj R-squared =    0.1640
                                                Root MSE      =    2.4862

------------------------------------------------------------------------------
         usr |      Coef.   Std. Err.      t    P>|t|     [95% Conf. Interval]
-------------+----------------------------------------------------------------
        idle |  -.1254511   .0492356    -2.55   0.017                 -.024428
       _cons |   14.54641   4.272299     3.40   0.002                 23.31245
-------------+----------------------------------------------------------------
         rho |   .5707918
------------------------------------------------------------------------------

The fitted model is

usr_t = -0.1255 idle_t + 14.55 + u_t    and    u_t = 0.5708 u_{t-1} + e_t
We can also fit the model with the Prais-Winsten method,

. prais usr idle
Iteration 0:  rho = 0.0000
Iteration 1:  rho = 0.3518
(output omitted)
Iteration 14: rho = 0.5535

Prais-Winsten AR(1) regression -- iterated estimates

      Source |       SS       df       MS              Number of obs =      30
-------------+------------------------------           F(  1,    28) =    7.12
       Model |  43.0076941     1  43.0076941           Prob > F      =  0.0125
    Residual |  169.165739    28  6.04163354           R-squared     =  0.2027
-------------+------------------------------           Adj R-squared =  0.1742
       Total |  212.173433    29  7.31632528           Root MSE      =   2.458

------------------------------------------------------------------------------
         usr |      Coef.   Std. Err.      t    P>|t|     [95% Conf. Interval]
-------------+----------------------------------------------------------------
        idle |  -.1356522   .0472195    -2.87   0.008                -.0389275
       _cons |   15.20415   4.160391     3.65   0.001                 23.72633
-------------+----------------------------------------------------------------
         rho |   .5535476
------------------------------------------------------------------------------
The fitted model is now

usr_t = -0.1357 idle_t + 15.20 + u_t    and    u_t = 0.5535 u_{t-1} + e_t
As the results indicate, for these data there is little difference between the Cochrane-Orcutt and Prais-Winsten estimators, whereas the OLS estimate of the slope parameter is substantially different.
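For comparison, the OLS estimates can be obtained with regress; a quick sketch:

. regress usr idle     // OLS slope differs noticeably from the GLS estimates
. estat dwatson        // check the residuals for first-order serial correlation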
Example 2
We have data on quarterly sales, in millions of dollars, for 5 years, and we would like to use
this information to model sales for company X. First, we fit a linear model by OLS and obtain the
Durbin-Watson statistic by using estat dwatson; see [R] regress postestimation time series.
. use http://www.stata-press.com/data/r13/qsales
. regress csales isales

      Source |       SS       df       MS              Number of obs =      20
-------------+------------------------------           F(  1,    18) =14888.15
       Model |  110.256901     1  110.256901           Prob > F      =  0.0000
    Residual |  .133302302    18  .007405683           R-squared     =  0.9988
-------------+------------------------------           Adj R-squared =  0.9987
       Total |  110.390204    19  5.81001072           Root MSE      =  .08606

------------------------------------------------------------------------------
      csales |      Coef.   Std. Err.      t    P>|t|     [95% Conf. Interval]
-------------+----------------------------------------------------------------
      isales |   .1762828   .0014447   122.02   0.000                 .1793181
       _cons |  -1.454753   .2141461    -6.79   0.000                -1.004849
------------------------------------------------------------------------------

. estat dwatson
Durbin-Watson d-statistic(  2,    20) =  .7347276
Because the Durbin-Watson statistic is far from 2 (the expected value under the null hypothesis of no serial correlation) and well below the 5% lower limit of 1.2, we conclude that the disturbances are serially correlated. (Upper and lower bounds for the d statistic can be found in most econometrics texts; for example, Harvey [1990]. The bounds have been derived for only a limited combination of regressors and observations.) To reinforce this conclusion, we use two other tests to test for serial correlation in the error distribution.
. estat bgodfrey, lags(1)

Breusch-Godfrey LM test for autocorrelation
---------------------------------------------------------------------------
    lags(p)  |          chi2               df                 Prob > chi2
-------------+-------------------------------------------------------------
       1     |          7.998               1                   0.0047
---------------------------------------------------------------------------
                        H0: no serial correlation

. estat durbinalt

Durbin's alternative test for autocorrelation
---------------------------------------------------------------------------
    lags(p)  |          chi2               df                 Prob > chi2
-------------+-------------------------------------------------------------
       1     |         11.329               1                   0.0008
---------------------------------------------------------------------------
                        H0: no serial correlation
estat bgodfrey reports the Breusch-Godfrey Lagrange multiplier test statistic, and estat durbinalt reports Durbin's alternative test statistic. Both tests give a small p-value and thus reject the null hypothesis of no serial correlation. These two tests are asymptotically equivalent when testing for an AR(1) process. See [R] regress postestimation time series if you are not familiar with these two tests.
We correct for autocorrelation with the ssesearch option of prais to search for the value of ρ that minimizes the sum-of-squared residuals of the Cochrane-Orcutt transformed equation. Normally, the default Prais-Winsten transformation would be used with such a small dataset, but the less-efficient Cochrane-Orcutt transformation allows us to demonstrate an aspect of the estimator's convergence.
. prais csales isales, corc ssesearch
Iteration 1:  rho = 0.8944 , criterion = -.07298558
Iteration 2:  rho = 0.8944 , criterion = -.07298558
(output omitted)
Iteration 15: rho = 0.9588 , criterion = -.07167037

Cochrane-Orcutt AR(1) regression -- SSE search estimates

      Source |       SS       df       MS              Number of obs =      19
-------------+------------------------------           F(  1,    17) =  553.14
       Model |  2.33199178     1  2.33199178           Prob > F      =  0.0000
    Residual |  .071670369    17  .004215904           R-squared     =  0.9702
-------------+------------------------------           Adj R-squared =  0.9684
       Total |  2.40366215    18  .133536786           Root MSE      =  .06493

------------------------------------------------------------------------------
      csales |      Coef.   Std. Err.      t    P>|t|     [95% Conf. Interval]
-------------+----------------------------------------------------------------
      isales |   .1605233   .0068253    23.52   0.000                 .1749234
       _cons |   1.738946   1.432674     1.21   0.241                 4.761624
-------------+----------------------------------------------------------------
         rho |   .9588209
------------------------------------------------------------------------------
We noted in Options that, with the default computation of ρ, the Cochrane-Orcutt method produces an estimate of ρ that minimizes the sum-of-squared residuals, the same criterion as the ssesearch option. Given that the two methods produce the same results, why would the search method ever be preferred? It turns out that the back-and-forth iterations used by Cochrane-Orcutt may have difficulty converging if the value of ρ is large. Using the same data, the Cochrane-Orcutt iterative procedure requires more than 350 iterations to converge, and a higher tolerance must be specified to prevent premature convergence:
. prais csales isales, corc tol(1e-9) iterate(500)
Iteration 0:   rho = 0.0000
Iteration 1:   rho = 0.6312
Iteration 2:   rho = 0.6866
(output omitted)
Iteration 377: rho = 0.9588
Iteration 378: rho = 0.9588
Iteration 379: rho = 0.9588

Cochrane-Orcutt AR(1) regression -- iterated estimates

      Source |       SS       df       MS              Number of obs =      19
-------------+------------------------------           F(  1,    17) =  553.14
       Model |  2.33199171     1  2.33199171           Prob > F      =  0.0000
    Residual |  .071670369    17  .004215904           R-squared     =  0.9702
-------------+------------------------------           Adj R-squared =  0.9684
       Total |  2.40366208    18  .133536782           Root MSE      =  .06493

------------------------------------------------------------------------------
      csales |      Coef.   Std. Err.      t    P>|t|     [95% Conf. Interval]
-------------+----------------------------------------------------------------
      isales |   .1605233   .0068253    23.52   0.000                 .1749234
       _cons |   1.738946   1.432674     1.21   0.241                 4.761625
-------------+----------------------------------------------------------------
         rho |   .9588209
------------------------------------------------------------------------------
Stored results

prais stores the following in e():

Scalars
  e(N)              number of observations
  e(N_gaps)         number of gaps
  e(mss)            model sum of squares
  e(df_m)           model degrees of freedom
  e(rss)            residual sum of squares
  e(df_r)           residual degrees of freedom
  e(r2)             R-squared
  e(r2_a)           adjusted R-squared
  e(F)              F statistic
  e(rmse)           root mean squared error
  e(ll)             log likelihood
  e(N_clust)        number of clusters
  e(rho)            autocorrelation parameter
  e(dw)             Durbin-Watson d statistic for transformed regression
  e(dw_0)           Durbin-Watson d statistic of untransformed regression
  e(rank)           rank of e(V)
  e(tol)            target tolerance
  e(max_ic)         maximum number of iterations
  e(ic)             number of iterations

Macros
  e(cmd)            prais
  e(cmdline)        command as typed
  e(depvar)         name of dependent variable
  e(title)          title in estimation output
  e(clustvar)       name of cluster variable
  e(cons)           noconstant or not reported
  e(method)         twostep, iterated, or SSE search
  e(tranmeth)       corc or prais
  e(rhotype)        method specified in rhotype() option
  e(vce)            vcetype specified in vce()
  e(vcetype)        title used to label Std. Err.
  e(properties)     b V
  e(predict)        program used to implement predict
  e(marginsok)      predictions allowed by margins
  e(asbalanced)     factor variables fvset as asbalanced
  e(asobserved)     factor variables fvset as asobserved

Matrices
  e(b)              coefficient vector
  e(V)              variance-covariance matrix of the estimators
  e(V_modelbased)   model-based variance

Functions
  e(sample)         marks estimation sample
Methods and formulas

Consider the model with two regressors x and z. The zeroth iteration is obtained by estimating a, b, and c from the standard linear regression

$$y_t = a x_t + b z_t + c + u_t$$

An estimate of the correlation in the residuals is then obtained. By default, prais uses the auxiliary regression

$$u_t = \rho u_{t-1} + e_t$$

This can be changed to any computation noted in the rhotype() option.
Next we apply a Cochrane-Orcutt transformation (1) for observations t = 2, ..., n

$$y_t - \rho y_{t-1} = a(x_t - \rho x_{t-1}) + b(z_t - \rho z_{t-1}) + c(1-\rho) + v_t \tag{1}$$

and the Prais-Winsten transformation (1') for t = 1

$$\sqrt{1-\rho^2}\,y_1 = a\sqrt{1-\rho^2}\,x_1 + b\sqrt{1-\rho^2}\,z_1 + c\sqrt{1-\rho^2} + \sqrt{1-\rho^2}\,v_1 \tag{1'}$$
Thus the differences between the Cochrane-Orcutt and the Prais-Winsten methods are that the latter uses (1') in addition to (1), whereas the former uses only (1), necessarily decreasing the sample size by one.

Equations (1) and (1') are used to transform the data and obtain new estimates of a, b, and c. When the twostep option is specified, the estimation process stops at this point and reports these estimates. Under the default behavior of iterating to convergence, this process is repeated until the change in the estimate of ρ is within a specified tolerance.
The new estimates are used to produce fitted values

$$\widehat{y}_t = \widehat{a}x_t + \widehat{b}z_t + \widehat{c}$$

and then ρ is reestimated using, by default, the regression defined by

$$y_t - \widehat{y}_t = \rho\,(y_{t-1} - \widehat{y}_{t-1}) + u_t \tag{2}$$

We then reestimate (1) by using the new estimate of ρ and continue to iterate between (1) and (2) until the estimate of ρ converges.

Convergence is declared after iterate() iterations or when the absolute difference in the estimated correlation between two iterations is less than tol(); see [R] maximize. Sargan (1964) has shown that this process will always converge.
Under the ssesearch option, a combined quadratic and bisection search using golden sections searches for the value of ρ that minimizes the sum-of-squared residuals from the transformed equation. The transformation may be either the Cochrane-Orcutt ((1) only) or the Prais-Winsten ((1) and (1')).

All reported statistics are based on the ρ-transformed variables, and ρ is assumed to be estimated without error. See Judge et al. (1985) for details.
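One way to see how these transformations and stopping rules differ is to fit the same model under each; a minimal sketch using the dataset from Example 1:

. prais usr idle, twostep      // stop after the first iteration
. prais usr idle, corc         // iterated Cochrane-Orcutt
. prais usr idle, ssesearch    // search for the rho minimizing the SSE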
The Durbin-Watson d statistic reported by prais and estat dwatson is

$$d = \frac{\sum_{j=1}^{n-1}\left(u_{j+1} - u_j\right)^2}{\sum_{j=1}^{n} u_j^2}$$
All estimates from prais are conditional on the estimated value of ρ. Robust variance estimates here are robust only to heteroskedasticity and are not generally robust to misspecification of the functional form or omitted variables. The estimation of the functional form is intertwined with the estimation of ρ, and all estimates are conditional on ρ. Thus estimates cannot be robust to misspecification of functional form. For these reasons, it is probably best to interpret vce(robust) in the spirit of White's original paper on estimation of heteroskedastic-consistent covariance matrices.
Acknowledgment
We thank Richard Dickens of the Centre for Economic Performance at the London School of
Economics and Political Science for testing and assistance with an early version of this command.
Sigbert Jon Prais (1928-2014) was born in Frankfurt and moved to Britain in 1934 as a refugee. After earning degrees at the universities of Birmingham and Cambridge and serving in various posts in research and industry, he settled at the National Institute of Economic and Social Research. Prais's interests extended widely across economics, including studies of the influence of education on economic progress.

Christopher Blake Winsten (1923-2005) was born in Welwyn Garden City, England, the son of the writer Stephen Winsten and the painter and sculptress Clare Blake. He was educated at the University of Cambridge and worked with the Cowles Commission at the University of Chicago and at the universities of Oxford, London (Imperial College), and Essex, making many contributions to economics and statistics, including the Prais-Winsten transformation and joint authorship of a celebrated monograph on transportation economics.

Donald Cochrane (1917-1983) was an Australian economist and econometrician. He was born in Melbourne and earned degrees at Melbourne and Cambridge. After wartime service in the Royal Australian Air Force, he held chairs at Melbourne and Monash, being active also in work for various international organizations and national committees.

Guy Henderson Orcutt (1917- ) was born in Michigan and earned degrees in physics and economics at the University of Michigan. He worked at Harvard, the University of Wisconsin, and Yale. He has contributed to econometrics and economics in several fields, most distinctively in developing microanalytical models of economic behavior.
References
Becketti, S. 2013. Introduction to Time Series Using Stata. College Station, TX: Stata Press.
Cochrane, D., and G. H. Orcutt. 1949. Application of least squares regression to relationships containing auto-correlated error terms. Journal of the American Statistical Association 44: 32-61.
Davidson, R., and J. G. MacKinnon. 1993. Estimation and Inference in Econometrics. New York: Oxford University Press.
Durbin, J., and G. S. Watson. 1950. Testing for serial correlation in least squares regression. I. Biometrika 37: 409-428.
Durbin, J., and G. S. Watson. 1951. Testing for serial correlation in least squares regression. II. Biometrika 38: 159-177.
Hardin, J. W. 1995. sts10: Prais-Winsten regression. Stata Technical Bulletin 25: 26-29. Reprinted in Stata Technical Bulletin Reprints, vol. 5, pp. 234-237. College Station, TX: Stata Press.
Harvey, A. C. 1990. The Econometric Analysis of Time Series. 2nd ed. Cambridge, MA: MIT Press.
Hildreth, C., and J. Y. Lu. 1960. Demand relations with autocorrelated disturbances. Reprinted in Agricultural Experiment Station Technical Bulletin, No. 276. East Lansing, MI: Michigan State University Press.
Judge, G. G., W. E. Griffiths, R. C. Hill, H. Lütkepohl, and T.-C. Lee. 1985. The Theory and Practice of Econometrics. 2nd ed. New York: Wiley.
King, M. L., and D. E. A. Giles, ed. 1987. Specification Analysis in the Linear Model: Essays in Honor of Donald Cochrane. London: Routledge & Kegan Paul.
Kmenta, J. 1997. Elements of Econometrics. 2nd ed. Ann Arbor: University of Michigan Press.
Prais, S. J., and C. B. Winsten. 1954. Trend estimators and serial correlation. Working paper 383, Cowles Commission. http://cowles.econ.yale.edu/P/ccdp/st/s-0383.pdf.
Sargan, J. D. 1964. Wages and prices in the United Kingdom: A study in econometric methodology. In Econometric Analysis for National Economic Planning, ed. P. E. Hart, G. Mills, and J. K. Whitaker, 25-64. London: Butterworths.
Theil, H. 1971. Principles of Econometrics. New York: Wiley.
White, H. L., Jr. 1980. A heteroskedasticity-consistent covariance matrix estimator and a direct test for heteroskedasticity. Econometrica 48: 817-838.
Wooldridge, J. M. 2013. Introductory Econometrics: A Modern Approach. 5th ed. Mason, OH: South-Western.
Zellner, A. 1990. Guy H. Orcutt: Contributions to economic statistics. Journal of Economic Behavior and Organization 14: 43-51.
Also see
[TS] prais postestimation Postestimation tools for prais
[TS] tsset Declare data to be time-series data
[TS] arima ARIMA, ARMAX, and other dynamic regression models
[R] regress Linear regression
[R] regress postestimation time series Postestimation tools for regress with time series
[U] 20 Estimation and postestimation commands
Title
prais postestimation Postestimation tools for prais
Description
Also see
Description

The following standard postestimation commands are available after prais:

Command            Description
---------------------------------------------------------------------------
contrast           contrasts and ANOVA-style joint tests of estimates
estat ic           Akaike's and Schwarz's Bayesian information criteria
                     (AIC and BIC)
estat summarize    summary statistics for the estimation sample
estat vce          variance-covariance matrix of the estimators (VCE)
estimates          cataloging estimation results
forecast           dynamic forecasts and simulations
lincom             point estimates, standard errors, testing, and inference
                     for linear combinations of coefficients
linktest           link test for model specification
margins            marginal means, predictive margins, marginal effects,
                     and average marginal effects
marginsplot        graph the results from margins (profile plots,
                     interaction plots, etc.)
nlcom              point estimates, standard errors, testing, and inference
                     for nonlinear combinations of coefficients
predict            predictions, residuals, influence statistics, and other
                     diagnostic measures
predictnl          point estimates, standard errors, testing, and inference
                     for generalized predictions
pwcompare          pairwise comparisons of estimates
test               Wald tests of simple and composite linear hypotheses
testnl             Wald tests of nonlinear hypotheses
---------------------------------------------------------------------------
Syntax for predict

predict [type] newvar [if] [in] [, statistic]
statistic     Description
------------------------------------------------------
Main
  xb          fitted values; the default
  stdp        standard error of the prediction
  residuals   residuals
------------------------------------------------------
These statistics are available both in and out of sample; type predict ... if e(sample) ... if wanted only for the estimation sample.

Menu for predict

Statistics > Postestimation > Predictions, residuals, etc.

Options for predict

Main
xb, the default, calculates the fitted values, the prediction of x_j b for the specified equation. This is the linear predictor from the fitted regression model; it does not apply the estimate of ρ to prior residuals.

stdp calculates the standard error of the prediction for the specified equation, that is, the standard error of the predicted expected value or mean for the observation's covariate pattern. The standard error of the prediction is also referred to as the standard error of the fitted value. As computed for prais, this is strictly the standard error from the variance in the estimates of the parameters of the linear model and assumes that ρ is estimated without error.

residuals calculates the residuals from the linear prediction.
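For example, after fitting the model from Example 1 of [TS] prais, all three statistics can be requested; the new variable names below are illustrative:

. prais usr idle
. predict double xbhat, xb           // linear prediction
. predict double sehat, stdp         // standard error of the prediction
. predict double uhat, residuals     // residuals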
Also see
[TS] prais Prais-Winsten and Cochrane-Orcutt regression
[U] 20 Estimation and postestimation commands
Title
psdensity Parametric spectral density estimation after arima, arfima, and ucm
Syntax
Menu
Description
Options
Remarks and examples
Methods and formulas
References
Also see

Syntax

psdensity [type] newvarsd newvarf [if] [in] [, options]

where newvarsd is the name of the new variable that will contain the estimated spectral density and newvarf is the name of the new variable that will contain the frequencies at which the spectral density estimate is computed.
options        Description
---------------------------------------------------------------------------
pspectrum      estimate the power spectrum rather than the spectral density
range(a b)     limit the frequency range to [a, b)
cycle(#)       estimate the spectral density from the specified stochastic
                 cycle; only allowed after ucm
smemory        estimate the spectral density of the short-memory component
                 of the ARFIMA process; only allowed after arfima
---------------------------------------------------------------------------
Menu
Statistics > Time series > Postestimation
Description
psdensity estimates the spectral density of a stationary process using the parameters of a previously
estimated parametric model.
psdensity works after arfima, arima, and ucm.
Options
pspectrum causes psdensity to estimate the power spectrum rather than the spectral density. The
power spectrum is equal to the spectral density times the variance of the process.
range(a b) limits the frequency range. By default, the spectral density is computed over [0, π). Specifying range(a b) causes the spectral density to be computed over [a, b). We require that 0 ≤ a < b < π.
cycle(#) causes psdensity to estimate the spectral density from the specified stochastic cycle after
ucm. By default, the spectral density from the first stochastic cycle is estimated. cycle(#) must
specify an integer that corresponds to a cycle in the model fit by ucm.
smemory causes psdensity to ignore the ARFIMA fractional integration parameter. The spectral
density computed is for the short-memory ARMA component of the model.
Technical note
A more formal specification of the spectral density allows us to be more specific about how the
spectral density specifies the relative importance of the random components.
If yt is a covariance-stationary process with absolutely summable autocovariances, its spectrum is
given by
If $y_t$ is a covariance-stationary process with absolutely summable autocovariances, its spectrum is given by

$$g_y(\omega) = \frac{1}{2\pi}\gamma_0 + \frac{1}{\pi}\sum_{k=1}^{\infty}\gamma_k\cos(k\omega) \tag{1}$$

where $g_y(\omega)$ is the spectrum of $y_t$ at frequency $\omega$ and $\gamma_k$ is the $k$th autocovariance of $y_t$. The autocovariances can be recovered from the spectrum by

$$\gamma_k = \int_{-\pi}^{\pi} g_y(\omega)\, e^{i\omega k}\, d\omega \tag{2}$$

Setting $k = 0$ in (2) yields

$$\gamma_0 = \int_{-\pi}^{\pi} g_y(\omega)\, d\omega$$

which means that the variance of $y_t$ can be decomposed in terms of the spectrum $g_y(\omega)$. In particular, $g_y(\omega)d\omega$ is the contribution to the variance of $y_t$ attributable to the random components in the interval $(\omega, \omega + d\omega)$.

The spectrum depends on the units in which $y_t$ is measured, because it depends on $\gamma_0$. Dividing both sides of (1) by $\gamma_0$ gives us the scale-free spectral density of $y_t$:

$$f_y(\omega) = \frac{1}{2\pi} + \frac{1}{\pi}\sum_{k=1}^{\infty}\rho_k\cos(k\omega)$$

where $\rho_k = \gamma_k/\gamma_0$ is the $k$th autocorrelation of $y_t$. By construction,

$$\int_{-\pi}^{\pi} f_y(\omega)\, d\omega = 1$$

so $f_y(\omega)d\omega$ is the fraction of the variance of $y_t$ attributable to the random components in the interval $(\omega, \omega + d\omega)$.
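As a worked check of these formulas, consider an AR(1) process with coefficient ρ, for which ρ_k = ρ^k. Substituting into the expression for f_y(ω) and summing the geometric series gives

$$f_y(\omega) = \frac{1}{2\pi}\left(1 + 2\sum_{k=1}^{\infty}\rho^k\cos k\omega\right) = \frac{1}{2\pi}\,\frac{1-\rho^2}{1-2\rho\cos\omega+\rho^2}$$

which is largest at ω = 0 when ρ > 0 and largest at ω = π when ρ < 0, exactly the patterns illustrated in the examples that follow.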
Example 1
Let's consider the changes in the number of manufacturing employees in the United States, which we plot below.
. use http://www.stata-press.com/data/r13/manemp2
(FRED data: Number of manufacturing employees in U.S.)
. tsline D.manemp, yline(-0.206)
[Figure: monthly changes in manufacturing employment, 1950m1-2010m1, with a horizontal line at the sample mean]

We added a horizontal line at the sample mean of -0.206 to highlight that there appear to be more runs above or below the mean than we would expect in data generated by an i.i.d. process.
As a first pass at modeling this dependence, we use arima to estimate the parameters of a first-order autoregressive (AR(1)) model. Formally, the AR(1) model is given by

$$y_t = \rho y_{t-1} + \epsilon_t$$

where $y_t$ is the dependent variable, $\rho$ is the autoregressive coefficient, and $\epsilon_t$ is an i.i.d. error term. See [TS] arima for an introduction to ARMA modeling and the arima command.
. arima D.manemp, ar(1) noconstant
(setting optimization to BHHH)
Iteration 0:
log likelihood = -870.64844
Iteration 1:
log likelihood = -870.64794
Iteration 2:
log likelihood = -870.64789
Iteration 3:
log likelihood = -870.64787
Iteration 4:
log likelihood = -870.64786
(switching optimization to BFGS)
Iteration 5:
log likelihood = -870.64786
Iteration 6:
log likelihood = -870.64786
ARIMA regression

Sample: 1950m2 - 2011m2                         Number of obs =       733
                                                Wald chi2(1)  =    730.51
                                                Prob > chi2   =    0.0000

------------------------------------------------------------------------------
             |                 OPG
    D.manemp |      Coef.   Std. Err.      z    P>|z|     [95% Conf. Interval]
-------------+----------------------------------------------------------------
ARMA         |
          ar |
         L1. |   .5179561   .0191638    27.03   0.000     .4803959    .5555164
-------------+----------------------------------------------------------------
      /sigma |   .7934554   .0080636    98.40   0.000      .777651    .8092598
------------------------------------------------------------------------------
Note: The test of the variance against zero is one sided, and the two-sided
confidence interval is truncated at zero.
The statistically significant estimate of 0.518 for the autoregressive coefficient indicates that there is an important amount of positive autocorrelation in this series.

The spectral density of a covariance-stationary process is symmetric around 0. Following convention, psdensity estimates the spectral density over the interval [0, π) at the points given in Methods and formulas.

Now we use psdensity to estimate the spectral density of the process implied by the estimated ARMA parameters. We specify the names of two new variables in the call to psdensity. The first new variable will contain the estimated spectral density. The second new variable will contain the frequencies at which the spectral density is estimated.
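A minimal sketch of that call; the variable names psden and omega are illustrative:

. psdensity psden omega
. line psden omega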
[Figure: estimated AR(1) spectral density plotted against frequency on [0, π)]
The above graph is typical of a spectral density of an AR(1) process with a positive coefficient. The
curve is highest at frequency 0, and it tapers off toward zero or a positive asymptote. The estimated
spectral density is telling us that the low-frequency random components are the most important random
components of an AR(1) process with a positive autoregressive coefficient.
The closer ρ is to 1, the more important are the low-frequency components relative to the high-frequency components. To illustrate this point, we plot the spectral densities implied by AR(1) models with ρ = 0.1 and ρ = 0.9.

[Figure: two panels showing the spectral densities of AR(1) processes with ρ = 0.1 and ρ = 0.9, plotted against frequency]
As ρ gets closer to 1, the plot of the spectral density gets closer to being a spike at frequency 0, implying that only the lowest-frequency components are important.
Example 2
Now let's consider a dataset for which the estimated coefficient from an AR(1) model is negative. Below we plot the changes in initial claims for unemployment insurance in the United States.

[Figure: changes in initial claims for unemployment insurance plotted against date, 01jan1970-01jan2010]
The plot looks a little more jagged than we would expect from an i.i.d. process, but it is hard to
tell. Below we estimate the AR(1) coefficient.
. arima D.icsa, ar(1) noconstant
(setting optimization to BHHH)
Iteration 0:
log likelihood = -9934.0659
Iteration 1:
log likelihood = -9934.0657
Iteration 2:
log likelihood = -9934.0657
ARIMA regression

Sample: 14jan1967 - 19feb2011                   Number of obs =      2302
                                                Wald chi2(1)  =    666.06
                                                Prob > chi2   =    0.0000

------------------------------------------------------------------------------
             |                 OPG
      D.icsa |      Coef.   Std. Err.      z    P>|z|     [95% Conf. Interval]
-------------+----------------------------------------------------------------
ARMA         |
          ar |
         L1. |  -.2756024   .0106789   -25.81   0.000    -.2965326   -.2546722
-------------+----------------------------------------------------------------
      /sigma |   18.10988   .1176556   153.92   0.000     17.87928    18.34048
------------------------------------------------------------------------------
Note: The test of the variance against zero is one sided, and the two-sided
confidence interval is truncated at zero.
[Figure: estimated spectral density of the AR(1) process for D.icsa plotted against frequency]

The above graph is typical of a spectral density of an AR(1) process with a negative coefficient. The curve is lowest at frequency 0, and it monotonically increases to its highest point, which occurs when the frequency is π.
When the coefficient of an AR(1) model is negative, the high-frequency random components are the most important random components of the process. The closer ρ is to -1, the more important are the high-frequency components relative to the low-frequency components. To illustrate this point, we plot the spectral densities implied by AR(1) models with ρ = -0.1 and ρ = -0.9.

[Figure: two panels showing the spectral densities of AR(1) processes with ρ = -0.1 and ρ = -0.9, plotted against frequency]

As ρ gets closer to -1, the plot of the spectral density shifts toward becoming a spike at frequency π, implying that only the highest-frequency components are important.
For examples of psdensity after arfima and ucm, see [TS] arfima and [TS] ucm.
Methods and formulas

Introduction

The spectral density $f(\omega)$ is estimated at the values $\omega \in \{\omega_1, \omega_2, \ldots, \omega_N\}$ using one of the formulas given below. Given a sample of size $N$, after accounting for any if or in restrictions, the $N$ values of $\omega$ are given by $\omega_i = \pi(i-1)/(N-1)$ for $i \in \{1, 2, \ldots, N\}$.
In the rare case in which the dataset in memory has insufficient observations for the desired
resolution of the estimated spectral density, you may use tsappend or set obs (see [TS] tsappend
or [D] obs) to increase the number of observations in the current dataset.
You may use an if restriction or an in restriction to restrict the observations to handle panel data
or to compute the estimates for a subset of the observations.
The spectral density of an ARMA process is

$$f_{\rm ARMA}(\omega;\, \boldsymbol\phi, \boldsymbol\theta, \sigma^2, \gamma_0) =
\frac{\sigma^2\left|1 + \theta_1 e^{-i\omega} + \theta_2 e^{-2i\omega} + \cdots + \theta_q e^{-qi\omega}\right|^2}
{2\pi\gamma_0\left|1 - \phi_1 e^{-i\omega} - \phi_2 e^{-2i\omega} - \cdots - \phi_p e^{-pi\omega}\right|^2}$$

where $\omega \in [0, \pi]$, $\sigma^2$ is the variance of the idiosyncratic error, and $\gamma_0$ is the variance of the dependent variable. We estimate $\gamma_0$ using the arima parameter estimates.
The spectral density for the ARFIMA model multiplies the ARMA spectral density by the long-memory factor $\left|1 - e^{-i\omega}\right|^{-2d}$, where $d$ is the fractional-integration parameter; specifying the smemory option drops this factor and reports the density of the short-memory ARMA component. The spectral density of the $k$th-order stochastic cycle estimated by ucm is a ratio of polynomials in the damping parameter $\rho$ and $\cos(\omega)$ involving binomial coefficients and the factor $(1-\rho^2)^{2k-1}$; see Trimbur (2006) for the full expression.
References
Box, G. E. P., G. M. Jenkins, and G. C. Reinsel. 2008. Time Series Analysis: Forecasting and Control. 4th ed.
Hoboken, NJ: Wiley.
Fuller, W. A. 1996. Introduction to Statistical Time Series. 2nd ed. New York: Wiley.
Hamilton, J. D. 1994. Time Series Analysis. Princeton: Princeton University Press.
Harvey, A. C. 1989. Forecasting, Structural Time Series Models and the Kalman Filter. Cambridge: Cambridge
University Press.
. 1993. Time Series Models. 2nd ed. Cambridge, MA: MIT Press.
Priestley, M. B. 1981. Spectral Analysis and Time Series. London: Academic Press.
Trimbur, T. M. 2006. Properties of higher order stochastic cycles. Journal of Time Series Analysis 27: 1-17.
Wei, W. W. S. 2006. Time Series Analysis: Univariate and Multivariate Methods. 2nd ed. Boston: Pearson.
Also see
[TS] arfima Autoregressive fractionally integrated moving-average models
[TS] arima ARIMA, ARMAX, and other dynamic regression models
[TS] ucm Unobserved-components model
Title
rolling Rolling-window and recursive estimation
Syntax
Menu
Description
Options
Remarks and examples
Stored results
Acknowledgment
References
Also see
Syntax
rolling
exp list
if
in
, options
: command
options                     Description
---------------------------------------------------------------------------
Main
  window(#)                 number of consecutive data points in each sample
  recursive                 use recursive samples
  rrecursive                use reverse recursive samples

Options
  clear                     replace data in memory with results
  saving(filename, ...)     save results to filename
  stepsize(#)               number of periods to advance window
  start(time_constant)      period at which window is to start
  end(time_constant)        period at which window is to end
  keep(varname[, start])    save varname along with results

Reporting
  nodots                    suppress replication dots
  noisily                   display any output from command
  trace                     trace command's execution

Advanced
  reject(exp)               identify invalid results
---------------------------------------------------------------------------
window(#) is required.
You must tsset your data before using rolling; see [TS] tsset.
aweights are allowed in command if command accepts aweights; see [U] 11.1.6 weight.
exp_list contains    (name: elist)
                     elist
                     eexp

elist contains       newvar = (exp)
                     (exp)

eexp is              specname
                     [eqno]specname

specname is          _b
                     _b[]
                     _se
                     _se[]

eqno is              ##
                     name
Menu
Statistics > Time series > Rolling-window and recursive estimation
Description
rolling is a moving sampler that collects statistics from command after executing command on subsets of the data in memory. Typing

. rolling exp_list, window(50) clear: command

executes command on sample windows of span 50. That is, rolling will first execute command by using periods 1-50 of the dataset, and then using periods 2-51, 3-52, and so on. rolling can also perform recursive and reverse recursive analyses, in which the starting or ending period is held fixed and the window size grows.

command defines the statistical command to be executed. Most Stata commands and user-written programs can be used with rolling, as long as they follow standard Stata syntax and allow the if qualifier; see [U] 11 Language syntax. The by prefix cannot be part of command.

exp_list specifies the statistics to be collected from the execution of command. If no expressions are given, exp_list assumes a default of _b if command stores results in e() and of all the scalars if command stores results in r() and not in e(). Otherwise, not specifying an expression in exp_list is an error.
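For instance, named expressions mix freely with the shorthand statistics; a small sketch with hypothetical variables y and x:

. rolling _b _se, window(50) clear: regress y x
. rolling mean=r(mean) sd=r(sd), window(50) clear: summarize y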
Options
Main
window(#) defines the window size used each time command is executed. The window size refers to
calendar periods, not the number of observations. If there are missing data (for example, because
of weekends), the actual number of observations used by command may be less than window(#).
window(#) is required.
recursive specifies that a recursive analysis be done. The starting period is held fixed, the ending
period advances, and the window size grows.
rrecursive specifies that a reverse recursive analysis be done. Here the ending period is held fixed,
the starting period advances, and the window size shrinks.
Options
clear specifies that Stata replace the data in memory with the collected statistics even though the
current data in memory have not been saved to disk.
saving(filename[, suboptions]) creates a Stata data file (.dta file) consisting of (for each statistic in exp_list) a variable containing the window replicates.
double specifies that the results for each replication be saved as doubles, meaning 8-byte reals.
By default, they are saved as floats, meaning 4-byte reals.
every(#) specifies that results be written to disk every #th replication. every() should be specified
in conjunction only with saving() when command takes a long time for each replication. This
will allow recovery of partial results should your computer crash. See [P] postfile.
stepsize(#) specifies the number of periods the window is to be advanced each time command is
executed.
start(time constant) specifies the date on which rolling is to start. start() may be specified
as an integer or as a date literal.
end(time constant) specifies the date on which rolling is to end. end() may be specified as an
integer or as a date literal.
keep(varname , start ) specifies a variable to be posted along with the results. The value posted
is the value that corresponds to the right edge of the window. Specifying the start() option
requests that the value corresponding to the left edge of the window be posted instead. This option
is often used to record calendar dates.
Reporting
nodots suppresses display of the replication dot for each window on which command is executed.
By default, one dot character is printed for each window. A red x is printed if command returns
with an error or if any value in exp list is missing.
noisily causes the output of command to be displayed for each window on which command is
executed. This option implies the nodots option.
trace causes a trace of the execution of command to be displayed. This option implies the noisily
and nodots options.
Advanced
reject(exp) identifies an expression that indicates when results should be rejected. When exp is
true, the saved statistics are set to missing values.
Remarks and examples

rolling executes a command on each of a series of windows of observations and stores the results. Suppose that you have data collected at 100 consecutive points in time, and you type

. rolling _b, window(20) clear: regress depvar indepvar

This causes Stata to regress depvar on indepvar using periods 1-20, store the regression coefficients (_b), run the regression using periods 2-21, and so on, finishing with a regression using periods 81-100 (the last 20 periods).
The stepsize() option specifies how far ahead the window is moved each time. For example, if you specify step(2), then command is executed on periods 1-20, and then 3-22, 5-24, etc. By default, rolling replaces the dataset in memory with the computed statistics unless the saving() option is specified, in which case the computed statistics are saved in the filename specified. If the dataset in memory has been changed since it was last saved and you do not specify saving(), you must use clear.
rolling can also perform recursive and reverse recursive analyses. In a recursive analysis, the
starting date is held fixed, and the window size grows as the ending date is advanced. In a reverse
recursive analysis, the ending date is held fixed, and the window size shrinks as the starting date is
advanced.
Example 1
We have data on the daily returns to IBM stock (ibm), the S&P 500 (spx), and short-term interest
rates (irx), and we want to create a series containing the beta of IBM by using the previous 200 trading
days at each date. We will also record the standard errors, so that we can obtain 95% confidence
intervals for the betas. See, for example, Stock and Watson (2011, 118) for more information on
estimating betas. We type
. use http://www.stata-press.com/data/r13/ibm
(Source: Yahoo! Finance)
. tsset t
time variable: t, 1 to 494
delta: 1 unit
. generate ibmadj = ibm - irx
(1 missing value generated)
. generate spxadj = spx - irx
(1 missing value generated)
. rolling _b _se, window(200) saving(betas, replace) keep(date): regress ibmadj
> spxadj
(running regress on estimation sample)
(note: file betas.dta not found)
Rolling replications (295)
----+--- 1 ---+--- 2 ---+--- 3 ---+--- 4 ---+--- 5
..................................................    50
..................................................   100
..................................................   150
..................................................   200
..................................................   250
.............................................
file betas.dta saved
Our dataset has both a time variable t that runs consecutively and a date variable date that
measures the calendar date and therefore has gaps at weekends and holidays. Had we used the date
variable as our time variable, rolling would have used windows consisting of 200 calendar days
instead of 200 trading days, and each window would not have exactly 200 observations. We used
the keep(date) option so that we could refer to the date variable when working with the results
dataset.
We can list a portion of the dataset created by rolling to see what it contains:

. use betas, clear
(rolling: regress)
. sort date
. list in 1/3, abbrev(10)

     +-------------------------------------------------------------------------+
     | start   end        date   _b_spxadj     _b_cons   _se_spxadj   _se_cons |
     |-------------------------------------------------------------------------|
  1. |     1   200   16oct2003    1.043422   -.0181504     .0658531   .0748295 |
  2. |     2   201   17oct2003    1.039024   -.0126876     .0656893    .074609 |
  3. |     3   202   20oct2003    1.038371   -.0235616     .0654591   .0743851 |
     +-------------------------------------------------------------------------+
The variables start and end indicate the first and last observations used each time that rolling called regress, and the date variable contains the calendar date corresponding to the period represented by end. The remaining variables are the estimated coefficients and standard errors from the regression. In our example, _b_spxadj contains the estimated betas, and _b_cons contains the estimated alphas. The variables _se_spxadj and _se_cons have the corresponding standard errors.
Finally, we compute the confidence intervals for the betas and examine how they have changed
over time:
. generate lower = _b_spxadj - 1.96*_se_spxadj
. generate upper = _b_spxadj + 1.96*_se_spxadj
[Figure: rolling estimates of IBM's beta (_b[spxadj]) with 95% confidence bands (lower/upper) plotted against date, 01oct2003-01jan2005]
As 2004 progressed, IBM's stock returns were less influenced by returns in the broader market. Beginning in June of 2004, IBM's beta became significantly different from unity at the 95% confidence level, as indicated by the fact that the confidence interval does not contain one from then onward.
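The confidence-band graph can be drawn from the collected results; a minimal sketch:

. twoway (rarea lower upper date) (line _b_spxadj date), yline(1) ytitle(Beta)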
In addition to rolling-window analyses, rolling can also perform recursive ones. Suppose again
that you have data collected at 100 consecutive points in time, and now you type
. rolling _b, window(20) recursive clear: regress depvar indepvar
Stata will first regress depvar on indepvar by using observations 1-20, store the coefficients, run the regression using observations 1-21, then 1-22, and so on, finishing with a regression using all 100 observations. Unlike a rolling regression, in which the number of observations is held constant and the starting and ending points are shifted, a recursive regression holds the starting point fixed and increases the number of observations. Recursive analyses are often used in forecasting situations. As time goes by, more information becomes available that can be used in making forecasts. See Kmenta (1997, 423-424).
Example 2
Using the same dataset, we type
. use http://www.stata-press.com/data/r13/ibm, clear
(Source: Yahoo! Finance)
. tsset t
time variable: t, 1 to 494
delta: 1 unit
. generate ibmadj = ibm - irx
(1 missing value generated)
. generate spxadj = spx - irx
(1 missing value generated)
. rolling _b _se, recursive window(200) clear: regress ibmadj spxadj
(output omitted )
. list in 1/3, abbrev(10)

     +--------------------------------------------------------------+
     | start   end   _b_spxadj     _b_cons   _se_spxadj   _se_cons |
     |--------------------------------------------------------------|
  1. |     1   200    1.043422   -.0181504     .0658531   .0748295 |
  2. |     1   201    1.039024   -.0126876     .0656893    .074609 |
  3. |     1   202    1.037687    -.016475     .0655896   .0743481 |
     +--------------------------------------------------------------+
Here the starting period remains fixed and the window grows larger.

In a reverse recursive analysis, the ending date is held fixed, and the window size becomes smaller as the starting date is advanced. For example, with a dataset that has observations numbered 1-100, typing

. rolling _b, window(20) rrecursive clear: regress depvar indepvar

creates a dataset in which the first observation has the results based on periods 1-100, the second observation has the results based on 2-100, the third having 3-100, and so on, up to the last observation having results based on periods 81-100 (the last 20 observations).
Example 3
Using the data on stock returns, we want to build a model in which we predict today's IBM stock return on the basis of yesterday's returns on IBM and the S&P 500. That is, letting i_t and s_t denote the returns to IBM and the S&P 500 on date t, we want to fit the regression model
$$i_t = \beta_0 + \beta_1 i_{t-1} + \beta_2 s_{t-1} + \epsilon_t$$

where $\epsilon_t$ is a regression error term, and then compute

$$\widehat{i}_{t+1} = \widehat{\beta}_0 + \widehat{\beta}_1 i_t + \widehat{\beta}_2 s_t$$
We will use recursive regression because we suspect that the more data we have to fit the regression
model, the better the model will predict returns. We will use at least 20 periods in fitting the regression.
. use http://www.stata-press.com/data/r13/ibm, clear
(Source: Yahoo! Finance)
. tsset t
time variable: t, 1 to 494
delta: 1 unit
One alternative would be to use rolling with the recursive option to fit the regressions, collect
the coefficients, and then compute the predicted values afterward. However, we will instead write a
short program that computes the forecasts automatically and then use rolling, recursive on that
program. The program must accept an if expression so that rolling can indicate to the program
which observations are to be used. Our program is
program myforecast, rclass
        syntax [if]
        regress ibm L.ibm L.spx `if'
        // Find last time period of estimation sample and
        // make forecast for period just after that
        summ t if e(sample)
        local last = r(max)
        local fcast = _b[_cons] + _b[L.ibm]*ibm[`last'] + ///
                _b[L.spx]*spx[`last']
        return scalar forecast = `fcast'
        // Next period's actual return
        // Will return missing value for final period
        return scalar actual = ibm[`last'+1]
end
. rolling actual=r(actual) forecast=r(forecast), recursive window(20): myforecast
(output omitted)
. correlate actual forecast

             |   actual forecast
-------------+------------------
      actual |   1.0000
    forecast |  -0.0957   1.0000

Our model does not work too well; the correlation between actual returns and our forecasts is negative.
Stored results
rolling sets no r- or e-class macros. The results from the command used with rolling, depending
on the last window of data used, are available after rolling has finished.
Acknowledgment
We thank Christopher F. Baum of the Department of Economics at Boston College and author of
the Stata Press books An Introduction to Modern Econometrics Using Stata and An Introduction to
Stata Programming for an earlier rolling regression command.
References
Kmenta, J. 1997. Elements of Econometrics. 2nd ed. Ann Arbor: University of Michigan Press.
Stock, J. H., and M. W. Watson. 2011. Introduction to Econometrics. 3rd ed. Boston: Addison-Wesley.
Also see
[D] statsby Collect statistics for a command across a by list
[R] stored results Stored results
Title
sspace State-space models
Syntax
Menu
Description
Options
Remarks and examples
Stored results
Methods and formulas
References
Also see
Syntax
Covariance-form syntax

sspace state_ceq [state_ceq ... state_ceq] obs_ceq [obs_ceq ... obs_ceq] [if] [in] [, options]

where each state_ceq is of the form

    (statevar [lagged_statevars] [indepvars], state [noerror noconstant])

and each obs_ceq is of the form

    (depvar [statevars] [indepvars] [, noerror noconstant])

Error-form syntax

sspace state_efeq [state_efeq ... state_efeq] obs_efeq [obs_efeq ... obs_efeq] [if] [in] [, options]

where each state_efeq is of the form

    (statevar [lagged_statevars] [indepvars] [state_errors], state [noconstant])

and each obs_efeq is of the form

    (depvar [statevars] [indepvars] [obs_errors] [, noconstant])
statevar is the name of an unobserved state, not a variable. If there happens to be a variable of the
same name, the variable is ignored and plays no role in the estimation.
lagged statevars is a list of lagged statevars. Only first lags are allowed.
state errors is a list of state-equation errors that enter a state equation. Each state error has the form
e.statevar, where statevar is the name of a state in the model.
obs errors is a list of observation-equation errors that enter an equation for an observed variable.
Each error has the form e.depvar, where depvar is an observed dependent variable in the model.
equation-level options   Description
---------------------------------------------------------------------
Model
  state                  specifies that the equation is a state equation
  noerror                specifies that there is no error term in the equation
  noconstant             suppresses the constant term from the equation
---------------------------------------------------------------------
options                      Description
---------------------------------------------------------------------------
Model
  covstate(covform)          specifies the covariance structure for the
                               errors in the state variables
  covobserved(covform)       specifies the covariance structure for the
                               errors in the observed dependent variables
  constraints(constraints)   apply specified linear constraints

SE/Robust
  vce(vcetype)               vcetype may be oim or robust

Reporting
  level(#)                   set confidence level; default is level(95)
  nocnsreport                do not display constraints
  display_options            control the display of the coefficient table

Maximization
  maximize_options           control the maximization process; seldom used

Advanced
  method(method)             specify the method for calculating the log
                               likelihood; seldom used

  coeflegend                 display legend instead of statistics
---------------------------------------------------------------------------

covform        Description
---------------------------------------------------
identity       identity matrix
dscalar        diagonal scalar matrix
diagonal       diagonal matrix
unstructured   symmetric, positive-definite matrix
---------------------------------------------------

method      Description
----------------------------------------------------------------------
hybrid      use the stationary Kalman filter and the De Jong diffuse
              Kalman filter; the default
dejong      use the stationary De Jong Kalman filter and the De Jong
              diffuse Kalman filter
kdiffuse    use the stationary Kalman filter and the nonstationary
              large-κ diffuse Kalman filter; seldom used
----------------------------------------------------------------------
You must tsset your data before using sspace; see [TS] tsset.
indepvars may contain factor variables; see [U] 11.4.3 Factor variables.
indepvars and depvar may contain time-series operators; see [U] 11.4.4 Time-series varlists.
by, statsby, and rolling are allowed; see [U] 11.1.10 Prefix commands.
coeflegend does not appear in the dialog box.
See [U] 20 Estimation and postestimation commands for more capabilities of estimation commands.
Menu
Statistics > Multivariate time series > State-space models
Description
sspace estimates the parameters of linear state-space models by maximum likelihood. Linear
state-space models are very flexible, and many linear time-series models can be written as linear
state-space models.
sspace uses two forms of the Kalman filter to recursively obtain conditional means and variances
of both the unobserved states and the measured dependent variables that are used to compute the
likelihood.
The covariance-form syntax and the error-form syntax of sspace reflect the two different forms
in which researchers specify state-space models. Choose the syntax that is easier for you; the two
forms are isomorphic.
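For instance, a minimal sketch of the same AR(1)-plus-state structure in each syntax (the dependent variable y here is hypothetical):

. * covariance-form syntax: each equation gets an implicit error term
. *   unless noerror is specified
. sspace (u L.u, state noconstant) (y u, noerror)
. * error-form syntax: errors appear only where explicitly included;
. *   here the coefficient on e.u acts as the error standard deviation
. sspace (u L.u e.u, state noconstant) (y u)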
Options

Equation-level options

      Model

state specifies that the equation is a state equation.

noerror specifies that there is no error term in the equation.

noconstant suppresses the constant term from the equation.

Options

      Model

covstate(covform) specifies the covariance structure for the errors in the state variables.

   covstate(identity) specifies a covariance matrix equal to an identity matrix, and it is the default for the error-form syntax.

   covstate(dscalar) specifies a covariance matrix equal to \sigma^2_{state} times an identity matrix.

   covstate(diagonal) specifies a diagonal covariance matrix, and it is the default for the covariance-form syntax.

   covstate(unstructured) specifies a symmetric, positive-definite covariance matrix with parameters for all variances and covariances. covstate(unstructured) may not be specified with the error-form syntax.

covobserved(covform) specifies the covariance structure for the observation errors.

   covobserved(identity) specifies a covariance matrix equal to an identity matrix, and it is the default for the error-form syntax.

   covobserved(dscalar) specifies a covariance matrix equal to \sigma^2_{observed} times an identity matrix.

   covobserved(diagonal) specifies a diagonal covariance matrix, and it is the default for the covariance-form syntax.

constraints(constraints); see [R] estimation options.
      SE/Robust

vce(vcetype) specifies the estimator for the variance–covariance matrix of the estimator.

   vce(oim), the default, causes sspace to use the observed information matrix estimator.

   vce(robust) causes sspace to use the Huber/White/sandwich estimator.

      Reporting

level(#), nocnsreport, and display_options; see [R] estimation options.

      Maximization
Maximization
maximize_options: difficult, technique(algorithm_spec), iterate(#), [no]log, trace,
gradient, showstep, hessian, showtolerance, tolerance(#), ltolerance(#),
nrtolerance(#), and from(matname); see [R] maximize for all options except from(), and
see below for information on from(). These options are seldom used.
from(matname) specifies initial values for the maximization process. from(b0) causes sspace
to begin the maximization algorithm with the values in b0. b0 must be a row vector; the number
of columns must equal the number of parameters in the model; and the values in b0 must be
in the same order as the parameters in e(b).
Advanced
method(method) specifies how to compute the log likelihood. This option is seldom used.
method(hybrid), the default, uses the Kalman filter with model-based initial values for the states
when the model is stationary and uses the De Jong (1988, 1991) diffuse Kalman filter when
the model is nonstationary.
method(dejong) uses the Kalman filter with the De Jong (1988) method for estimating the initial
values for the states when the model is stationary and uses the De Jong (1988, 1991) diffuse
Kalman filter when the model is nonstationary.
method(kdiffuse) is a seldom used method that uses the Kalman filter with model-based initial
values for the states when the model is stationary and uses the large-κ diffuse Kalman filter
when the model is nonstationary.
The following option is available with sspace but is not shown in the dialog box:
coeflegend; see [R] estimation options.
Remarks and examples

A linear state-space model for an observed vector y_t and an unobserved state vector z_t can be written as

    z_t = A z_{t-1} + B x_t + C \epsilon_t
    y_t = D z_t + F w_t + G \nu_t

where x_t and w_t are vectors of exogenous covariates and

    \epsilon_t \sim N(0, Q)
    \nu_t \sim N(0, R)

We can write the first-order autoregressive, AR(1), model

    y_t - \mu = \alpha (y_{t-1} - \mu) + \epsilon_t

as a state-space model with the observation equation

    y_t = \mu + u_t

and the state equation

    u_t = \alpha u_{t-1} + \epsilon_t

where the unobserved state is u_t = y_t - \mu.
Here we fit this model to data on the capacity utilization rate. The variable lncaputil contains
data on the natural log of the capacity utilization rate for the manufacturing sector of the U.S. economy.
We treat the series as first-difference stationary and fit its first difference to an AR(1) process,
estimating the parameters of the above state-space form of the AR(1) model:
. use http://www.stata-press.com/data/r13/manufac
(St. Louis Fed (FRED) manufacturing data)
. constraint 1 [D.lncaputil]u = 1
. sspace (u L.u, state noconstant) (D.lncaputil u, noerror), constraints(1)
searching for initial values ..........
(setting technique to bhhh)
Iteration 0:   log likelihood =   1505.36
Iteration 1:   log likelihood = 1512.0581
(output omitted)
Refining estimates:
Iteration 0:   log likelihood =   1516.44
Iteration 1:   log likelihood =   1516.44
State-space model
Sample: 1972m2 - 2008m12                     Number of obs      =         443
                                             Wald chi2(1)       =       61.73
Log likelihood =   1516.44                   Prob > chi2        =      0.0000
 ( 1)  [D.lncaputil]u = 1
------------------------------------------------------------------------------
             |                 OIM
   lncaputil |      Coef.   Std. Err.      z    P>|z|     [95% Conf. Interval]
-------------+----------------------------------------------------------------
u            |
           u |
         L1. |   .3523983   .0448539     7.86   0.000     .2644862    .4403104
-------------+----------------------------------------------------------------
D.lncaputil  |
           u |          1  (constrained)
       _cons |  -.0003558   .0005781    -0.62   0.538     -.001489    .0007773
-------------+----------------------------------------------------------------
      var(u) |   .0000622   4.18e-06    14.88   0.000      .000054    .0000704
------------------------------------------------------------------------------
Note: Tests of variances against zero are one sided, and the two-sided
      confidence intervals are truncated at zero.
The iteration log has three parts: the dots from the search for initial values, the log from finding
the maximum, and the log from a refining step. Here is a description of the logic behind each part:
1. The quality of the initial values affects the speed and robustness of the optimization algorithm.
sspace takes a few iterations in a nonlinear least-squares (NLS) algorithm to find good
initial values and reports a dot for each NLS iteration.
2. This iteration log is the standard method by which Stata reports the search for the maximum
likelihood estimates of the parameters in a nonlinear model.
3. Some of the parameters are transformed in the maximization process that sspace reports in
part 2. After a maximum candidate is found in part 2, sspace looks for a maximum in the
unconstrained space, checks that the Hessian of the log-likelihood function is of full rank,
and reports these iterations as the refining step.
The header in the output describes the estimation sample, reports the log-likelihood function at
the maximum, and gives the results of a Wald test against the null hypothesis that the coefficients
on all the independent variables, state variables, and lagged state variables are zero. In this example,
the null hypothesis that the coefficient on L1.u is zero is rejected at all conventional levels.
The estimation table reports results for the state equations, the observation equations, and the
variance–covariance parameters. The estimated autoregressive coefficient of 0.3524 indicates that there
is persistence in the first differences of the log of the capacity utilization rate. The estimated mean of the
differenced series is −0.0004, which is smaller in magnitude than its standard error, indicating that
there is no deterministic linear trend in the series.
Typing
. arima D.lncaputil, ar(1) technique(nr)
(output omitted )
produces nearly identical parameter estimates and standard errors for the mean and the autoregressive
parameter. Because sspace estimates the variance of the state error while arima estimates the
standard deviation, calculations are required to obtain the same results. The different parameterization
of the variance parameter can cause small numerical differences.
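For instance, the standard deviation implied by the sspace variance estimate reported above can be computed directly:

. display sqrt(.0000622)

which yields approximately 0.0079, the scale on which arima reports its estimate.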
Technical note
In some situations, the second part of the iteration log terminates but the refining step never
converges. Only when the refining step converges does the maximization algorithm find interpretable
estimates. If the refining step iterates without convergence, the parameters of the specified model are
not identified by the data. (See Rothenberg [1971], Drukker and Wiggins [2004], and Davidson and
MacKinnon [1993, sec. 5.2] for discussions of identification.)
We can write an ARMA(1,1) model

    y_t = \alpha y_{t-1} + \theta \epsilon_{t-1} + \epsilon_t                                    (1)

as a state-space model with the state equations

    \begin{pmatrix} y_t \\ \theta\epsilon_t \end{pmatrix} =
    \begin{pmatrix} \alpha & 1 \\ 0 & 0 \end{pmatrix}
    \begin{pmatrix} y_{t-1} \\ \theta\epsilon_{t-1} \end{pmatrix} +
    \begin{pmatrix} 1 \\ \theta \end{pmatrix} \epsilon_t                                         (2)

and the observation equation

    y_t = (\,1 \quad 0\,) \begin{pmatrix} y_t \\ \theta\epsilon_t \end{pmatrix}                  (3)

The unobserved states in this model are u_{1t} = y_t and u_{2t} = \theta\epsilon_t. We set the process mean to zero
because economic theory and the previous example suggest that we should do so. Below we estimate
the parameters in the state-space model by using the error-form syntax:
. constraint 2 [u1]L.u2 = 1
. constraint 3 [u1]e.u1 = 1
. constraint 4 [D.lncaputil]u1 = 1
. sspace (u1 L.u1 L.u2 e.u1, state noconstant) (u2 e.u1, state noconstant)
> (D.lncaputil u1, noconstant), constraints(2/4) covstate(diagonal)
searching for initial values ...........
(setting technique to bhhh)
Iteration 0:   log likelihood = 1506.0947
Iteration 1:   log likelihood =  1514.014
(output omitted)
Refining estimates:
Iteration 0:   log likelihood =  1531.255
Iteration 1:   log likelihood =  1531.255
State-space model
Sample: 1972m2 - 2008m12                     Number of obs      =         443
                                             Wald chi2(2)       =      333.84
Log likelihood =  1531.255                   Prob > chi2        =      0.0000
 ( 1)  [u1]L.u2 = 1
 ( 2)  [u1]e.u1 = 1
 ( 3)  [D.lncaputil]u1 = 1
------------------------------------------------------------------------------
             |                 OIM
   lncaputil |      Coef.   Std. Err.      z    P>|z|     [95% Conf. Interval]
-------------+----------------------------------------------------------------
u1           |
          u1 |
         L1. |   .8056815   .0522661    15.41   0.000     .7032418    .9081212
          u2 |
         L1. |          1  (constrained)
        e.u1 |          1  (constrained)
-------------+----------------------------------------------------------------
u2           |
        e.u1 |  -.5188453   .0701985    -7.39   0.000    -.6564317   -.3812588
-------------+----------------------------------------------------------------
D.lncaputil  |
          u1 |          1  (constrained)
-------------+----------------------------------------------------------------
     var(u1) |   .0000582   3.91e-06    14.88   0.000     .0000505    .0000659
------------------------------------------------------------------------------
Note: Tests of variances against zero are one sided, and the two-sided
      confidence intervals are truncated at zero.
The command in the above output specifies two state equations, one observation equation, and
two options. The first state equation defines u_{1t} and the second defines u_{2t} according to (2) above.
The observation equation defines the process for D.lncaputil according to the one specified in (3)
above. Several coefficients in (2) and (3) are set to 1, and constraints 2–4 place these restrictions on
the model.

The estimated coefficient on L.u1 in equation u1, 0.806, is the estimate of \alpha in (2), which is the
autoregressive coefficient in the ARMA model in (1). The estimated coefficient on e.u1 in equation
u2, −0.519, is the estimate of \theta, which is the moving-average term in the ARMA model in (1).
This example highlights a difference between the error-form syntax and the covariance-form syntax.
The error-form syntax used in this example includes only explicitly included errors. In contrast, the
covariance-form syntax includes an error term in each equation, unless the noerror option is specified.

The default for covstate() also differs between the error-form syntax and the covariance-form
syntax. Because the coefficients on the errors in the error-form syntax are frequently used to
estimate the standard deviation of the errors, covstate(identity) is the default for the error-form
syntax. In contrast, unit variances are less common in the covariance-form syntax, for which
covstate(diagonal) is the default. In this example, we specified covstate(diagonal) to estimate
a nonunitary variance for the state.
Typing
. arima D.lncaputil, noconstant ar(1) ma(1) technique(nr)
(output omitted )
produces nearly identical results. As in the AR(1) example above, arima estimates the standard deviation
of the error term, while sspace estimates the variance. Although they are theoretically equivalent,
the different parameterizations give rise to small numerical differences in the other parameters.
We can write a VAR(1) model for the first differences of lncaputil and lnhours as

    \begin{pmatrix} \Delta\text{lncaputil}_t \\ \Delta\text{lnhours}_t \end{pmatrix} =
    \begin{pmatrix} \theta_1 & 0 \\ \theta_2 & \theta_3 \end{pmatrix}
    \begin{pmatrix} \Delta\text{lncaputil}_{t-1} \\ \Delta\text{lnhours}_{t-1} \end{pmatrix} +
    \begin{pmatrix} \epsilon_{1t} \\ \epsilon_{2t} \end{pmatrix}                                 (4)

where \Delta y_t = y_t - y_{t-1}, \epsilon_t = (\epsilon_{1t}, \epsilon_{2t})', and Var(\epsilon) = \Sigma. We can write this VAR(1) process as a
state-space model with state equations

    \begin{pmatrix} u_{1t} \\ u_{2t} \end{pmatrix} =
    \begin{pmatrix} \theta_1 & 0 \\ \theta_2 & \theta_3 \end{pmatrix}
    \begin{pmatrix} u_{1(t-1)} \\ u_{2(t-1)} \end{pmatrix} +
    \begin{pmatrix} \epsilon_{1t} \\ \epsilon_{2t} \end{pmatrix}                                 (5)

and observation equations

    \begin{pmatrix} \Delta\text{lncaputil}_t \\ \Delta\text{lnhours}_t \end{pmatrix} =
    \begin{pmatrix} u_{1t} \\ u_{2t} \end{pmatrix}
. constraint 5 [D.lncaputil]u1 = 1
. constraint 6 [D.lnhours]u2 = 1
. sspace (u1 L.u1, state noconstant)
>        (u2 L.u1 L.u2, state noconstant)
>        (D.lncaputil u1, noconstant noerror)
>        (D.lnhours u2, noconstant noerror),
>        constraints(5/6) covstate(unstructured)
searching for initial values ...........
(setting technique to bhhh)
Iteration 0:   log likelihood = 2993.6647
Iteration 1:   log likelihood = 3088.7416
(output omitted)
Refining estimates:
Iteration 0:   log likelihood = 3211.7532
Iteration 1:   log likelihood = 3211.7532
State-space model
Sample: 1972m2 - 2008m12                     Number of obs      =         443
                                             Wald chi2(3)       =      166.87
Log likelihood = 3211.7532                   Prob > chi2        =      0.0000
 ( 1)  [D.lncaputil]u1 = 1
 ( 2)  [D.lnhours]u2 = 1
------------------------------------------------------------------------------
             |                 OIM
             |      Coef.   Std. Err.      z    P>|z|     [95% Conf. Interval]
-------------+----------------------------------------------------------------
u1           |
          u1 |
         L1. |    .353257   .0448456     7.88   0.000     .2653612    .4411528
-------------+----------------------------------------------------------------
u2           |
          u1 |
         L1. |   .1286218   .0394742     3.26   0.001     .0512537    .2059899
          u2 |
         L1. |  -.3707083   .0434255    -8.54   0.000    -.4558208   -.2855959
-------------+----------------------------------------------------------------
D.lncaputil  |
          u1 |          1  (constrained)
-------------+----------------------------------------------------------------
D.lnhours    |
          u2 |          1  (constrained)
-------------+----------------------------------------------------------------
     var(u1) |   .0000623   4.19e-06    14.88   0.000     .0000541    .0000705
  cov(u1,u2) |    .000026   2.67e-06     9.75   0.000     .0000208    .0000312
     var(u2) |   .0000386   2.61e-06    14.76   0.000     .0000335    .0000437
------------------------------------------------------------------------------
Note: Tests of variances against zero are one sided, and the two-sided
      confidence intervals are truncated at zero.
The var estimates are not the same as the sspace estimates because the generalized least-squares
estimator implemented in var is only asymptotically equivalent to the ML estimator implemented
in sspace, but the point estimates are similar. The comparison is useful for pedagogical purposes
because the var estimator is relatively simple.
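For reference, the corresponding var command would be along the following lines (a sketch; see [TS] var for details):

. var D.lncaputil D.lnhours, lags(1/1)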
Some problems require constraining a covariance term to zero. If we wanted to constrain
the covariance parameter [cov(u2,u1)]_cons to zero, we could type

. constraint 7 [cov(u2,u1)]_cons = 0
. sspace (u1 L.u1, state noconstant)
>        (u2 L.u1 L.u2, state noconstant)
>        (D.lncaputil u1, noconstant noerror)
>        (D.lnhours u2, noconstant noerror),
>        constraints(5/7) covstate(unstructured)
(output omitted)
We can write a VARMA(1,1) model for the first differences of lncaputil and lnhours as

    \begin{pmatrix} \Delta\text{lncaputil}_t \\ \Delta\text{lnhours}_t \end{pmatrix} =
    \begin{pmatrix} \alpha_1 & 0 \\ \alpha_2 & \alpha_3 \end{pmatrix}
    \begin{pmatrix} \Delta\text{lncaputil}_{t-1} \\ \Delta\text{lnhours}_{t-1} \end{pmatrix} +
    \begin{pmatrix} \theta_1 & 0 \\ 0 & 0 \end{pmatrix}
    \begin{pmatrix} \epsilon_{1(t-1)} \\ \epsilon_{2(t-1)} \end{pmatrix} +
    \begin{pmatrix} \epsilon_{1t} \\ \epsilon_{2t} \end{pmatrix}

We can write this VARMA(1,1) process as a state-space model with state equations

    \begin{pmatrix} s_{1t} \\ s_{2t} \\ s_{3t} \end{pmatrix} =
    \begin{pmatrix} \alpha_1 & 1 & 0 \\ 0 & 0 & 0 \\ \alpha_2 & 0 & \alpha_3 \end{pmatrix}
    \begin{pmatrix} s_{1(t-1)} \\ s_{2(t-1)} \\ s_{3(t-1)} \end{pmatrix} +
    \begin{pmatrix} 1 & 0 \\ \theta_1 & 0 \\ 0 & 1 \end{pmatrix}
    \begin{pmatrix} \epsilon_{1t} \\ \epsilon_{2t} \end{pmatrix}

where the states are

    \begin{pmatrix} s_{1t} \\ s_{2t} \\ s_{3t} \end{pmatrix} =
    \begin{pmatrix} \Delta\text{lncaputil}_t \\ \theta_1\epsilon_{1t} \\ \Delta\text{lnhours}_t \end{pmatrix}

and we simplify the problem by assuming that

    Var\begin{pmatrix} \epsilon_{1t} \\ \epsilon_{2t} \end{pmatrix} =
    \begin{pmatrix} \sigma_1^2 & 0 \\ 0 & \sigma_2^2 \end{pmatrix}

Below we estimate the parameters of this model:

. constraint 7  [u1]L.u2 = 1
. constraint 8  [u1]e.u1 = 1
. constraint 9  [u3]e.u3 = 1
. constraint 10 [D.lncaputil]u1 = 1
. constraint 11 [D.lnhours]u3 = 1
. sspace (u1 L.u1 L.u2 e.u1, state noconstant)
>        (u2 e.u1, state noconstant)
>        (u3 L.u1 L.u3 e.u3, state noconstant)
>        (D.lncaputil u1, noconstant noerror)
>        (D.lnhours u3, noconstant noerror),
>        constraints(7/11) covstate(diagonal) technique(nr)
(output omitted)
State-space model
Sample: 1972m2 - 2008m12                     Number of obs      =         443
                                             Wald chi2(4)       =      427.55
                                             Prob > chi2        =      0.0000
------------------------------------------------------------------------------
             |                 OIM
             |      Coef.   Std. Err.      z    P>|z|     [95% Conf. Interval]
-------------+----------------------------------------------------------------
u1           |
          u1 |
         L1. |   .8058031   .0522493    15.42   0.000     .7033964    .9082098
          u2 |
         L1. |          1  (constrained)
        e.u1 |          1  (constrained)
-------------+----------------------------------------------------------------
u2           |
        e.u1 |   -.518907   .0701848    -7.39   0.000    -.6564667   -.3813474
-------------+----------------------------------------------------------------
u3           |
          u1 |
         L1. |   .1734868   .0405156     4.28   0.000     .0940776     .252896
          u3 |
         L1. |  -.4809376   .0498574    -9.65   0.000    -.5786563   -.3832188
        e.u3 |          1  (constrained)
-------------+----------------------------------------------------------------
D.lncaputil  |
          u1 |          1  (constrained)
-------------+----------------------------------------------------------------
D.lnhours    |
          u3 |          1  (constrained)
-------------+----------------------------------------------------------------
     var(u1) |   .0000582   3.91e-06    14.88   0.000     .0000505    .0000659
     var(u3) |   .0000382   2.56e-06    14.88   0.000     .0000331    .0000432
------------------------------------------------------------------------------
Note: Tests of variances against zero are one sided, and the two-sided
      confidence intervals are truncated at zero.
The estimates of the parameters in the model for D.lncaputil are similar to those in the univariate
model fit in example 2. The estimates of the parameters in the model for D.lnhours indicate that
the lag of D.lncaputil has a positive effect on D.lnhours.
Technical note
The technique(nr) option facilitates convergence in example 4. Fitting state-space models is
notoriously difficult. Convergence problems are common. Four methods for overcoming convergence
problems are 1) selecting an alternate optimization algorithm by using the technique() option,
2) using alternative starting values by specifying the from() option, 3) using starting values obtained
by estimating the parameters of a restricted version of the model of interest, or 4) putting the variables
on the same scale.
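For example, methods 1 and 2 might look like the following sketch, in which the model specification (the variable y and constraint 1) and the starting values are purely illustrative:

. constraint 1 [D.y]u = 1
. * method 1: select an alternative optimization algorithm
. sspace (u L.u, state noconstant) (D.y u, noconstant noerror), ///
        constraints(1) technique(nr)
. * method 2: supply starting values, ordered as the parameters in e(b)
. matrix b0 = (.35, .00006)
. sspace (u L.u, state noconstant) (D.y u, noconstant noerror), ///
        constraints(1) from(b0)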
We can write a dynamic-factor model in which an unobserved factor f_t drives four observed macroeconomic series:

    \begin{pmatrix} f_t \\ f_{t-1} \end{pmatrix} =
    \begin{pmatrix} \theta_1 & \theta_2 \\ 1 & 0 \end{pmatrix}
    \begin{pmatrix} f_{t-1} \\ f_{t-2} \end{pmatrix} +
    \begin{pmatrix} \nu_t \\ 0 \end{pmatrix}

    \begin{pmatrix} \Delta\text{ipman}_t \\ \Delta\text{income}_t \\ \Delta\text{hours}_t \\ \Delta\text{unemp}_t \end{pmatrix} =
    \begin{pmatrix} \gamma_1 \\ \gamma_2 \\ \gamma_3 \\ \gamma_4 \end{pmatrix} f_t +
    \begin{pmatrix} \epsilon_{1t} \\ \epsilon_{2t} \\ \epsilon_{3t} \\ \epsilon_{4t} \end{pmatrix}

where

    Var\begin{pmatrix} \epsilon_{1t} \\ \epsilon_{2t} \\ \epsilon_{3t} \\ \epsilon_{4t} \end{pmatrix} =
    \begin{pmatrix} \sigma_1^2 & 0 & 0 & 0 \\ 0 & \sigma_2^2 & 0 & 0 \\ 0 & 0 & \sigma_3^2 & 0 \\ 0 & 0 & 0 & \sigma_4^2 \end{pmatrix}

We estimate the parameters by typing

. use http://www.stata-press.com/data/r13/dfex
(St. Louis Fed (FRED) macro data)
. constraint 12 [lf]L.f = 1
. sspace (f L.f L.lf, state noconstant)
>        (lf L.f, state noconstant noerror)
>        (D.ipman f, noconstant)
>        (D.income f, noconstant)
>        (D.hours f, noconstant)
>        (D.unemp f, noconstant),
>        covstate(identity) constraints(12)
(output omitted)
State-space model
Sample: 1972m2 - 2008m11                     Number of obs      =         442
                                             Wald chi2(6)       =      751.95
                                             Prob > chi2        =      0.0000
------------------------------------------------------------------------------
             |                 OIM
             |      Coef.   Std. Err.      z    P>|z|     [95% Conf. Interval]
-------------+----------------------------------------------------------------
f            |
           f |
         L1. |   .2651932   .0568663     4.66   0.000     .1537372    .3766491
          lf |
         L1. |   .4820398   .0624635     7.72   0.000     .3596136     .604466
-------------+----------------------------------------------------------------
lf           |
           f |
         L1. |          1  (constrained)
-------------+----------------------------------------------------------------
D.ipman      |
           f |   .3502249   .0287389    12.19   0.000     .2938976    .4065522
-------------+----------------------------------------------------------------
D.income     |
           f |   .0746338   .0217319     3.43   0.001     .0320401    .1172276
-------------+----------------------------------------------------------------
D.hours      |
           f |   .2177469   .0186769    11.66   0.000     .1811407     .254353
-------------+----------------------------------------------------------------
D.unemp      |
           f |  -.0676016   .0071022    -9.52   0.000    -.0815217   -.0536816
-------------+----------------------------------------------------------------
var(D.ipman) |   .1383158   .0167086     8.28   0.000     .1055675    .1710641
var(D.income)|   .2773808   .0188302    14.73   0.000     .2404743    .3142873
var(D.hours) |   .0911446   .0080847    11.27   0.000     .0752988    .1069903
var(D.unemp) |   .0237232   .0017932    13.23   0.000     .0202086    .0272378
------------------------------------------------------------------------------
Note: Tests of variances against zero are one sided, and the two-sided
      confidence intervals are truncated at zero.
The output indicates that the unobserved factor is quite persistent and that it is a significant predictor
for each of the observed variables.
These models are frequently used to forecast the dependent variables and to estimate the unobserved
factors. We present some illustrative examples in [TS] sspace postestimation. The dfactor command
estimates the parameters of dynamic-factor models; see [TS] dfactor.
    y_t = \mu_t + \epsilon_t

where

    \mu_t = \mu_{t-1} + \nu_t

The model is called a local-level model because the level of the series is modeled as a random walk
plus an idiosyncratic noise term. (The model is also known as the random-walk-plus-noise model.)
The local-level model is nonstationary because of the random-walk component. When the variance
of the idiosyncratic disturbance \epsilon_t is zero and the variance of the level disturbance \nu_t is not zero, the
local-level model reduces to a random walk. When the variance of the level disturbance \nu_t is zero
and the variance of the idiosyncratic disturbance \epsilon_t is not zero,

    \mu_t = \mu_{t-1} = \mu

and the local-level model reduces to

    y_t = \mu + \epsilon_t

which is a simple regression with a time-invariant mean. The parameter \mu is not estimated in the
state-space formulation below.
In this example, we fit weekly levels of the Standard and Poor's 500 Index to a local-level model.
Because this model is already in state-space form, we fit close by typing
. use http://www.stata-press.com/data/r13/sp500w
. constraint 13 [z]L.z = 1
. constraint 14 [close]z = 1
. sspace (z L.z, state noconstant) (close z, noconstant), constraints(13 14)
searching for initial values ..........
(setting technique to bhhh)
Iteration 0:   log likelihood = -12581.763
Iteration 1:   log likelihood = -12577.727
(output omitted)
Refining estimates:
Iteration 0:   log likelihood = -12576.99
Iteration 1:   log likelihood = -12576.99
State-space model
Sample: 1 - 3093                             Number of obs      =        3093
Log likelihood = -12576.99
 ( 1)  [z]L.z = 1
 ( 2)  [close]z = 1
------------------------------------------------------------------------------
             |                 OIM
       close |      Coef.   Std. Err.      z    P>|z|     [95% Conf. Interval]
-------------+----------------------------------------------------------------
z            |
           z |
         L1. |          1  (constrained)
-------------+----------------------------------------------------------------
close        |
           z |          1  (constrained)
-------------+----------------------------------------------------------------
      var(z) |   170.3456   7.584909    22.46   0.000     155.4794    185.2117
  var(close) |   15.24858   3.392457     4.49   0.000     8.599486    21.89767
------------------------------------------------------------------------------
Note: Model is not stationary.
Note: Tests of variances against zero are one sided, and the two-sided
      confidence intervals are truncated at zero.
The results indicate that both components have nonzero variances. The output footer informs us
that the model is nonstationary at the estimated parameter values.
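The stored result e(stationary) (see Stored results below) records this determination, so it can also be checked programmatically after estimation:

. display e(stationary)

which displays 0 for this nonstationary model and 1 for a stationary one.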
Technical note
In the previous example, we estimated the parameters of a nonstationary state-space model. The
model is nonstationary because one of the eigenvalues of the A matrix has unit modulus. That all
the coefficients in the A matrix are fixed is also important. See Lütkepohl (2005, 636–637) for why
the ML estimator for the parameters of a model that is nonstationary because of
eigenvalues with unit moduli from a fixed A matrix is still consistent and asymptotically normal.
We can also fit a local-linear-trend-style model in which the level follows a random walk with a
stochastic slope:

    \begin{pmatrix} \mu_t \\ \beta_t \end{pmatrix} =
    \begin{pmatrix} 1 & 1 \\ 0 & 1 \end{pmatrix}
    \begin{pmatrix} \mu_{t-1} \\ \beta_{t-1} \end{pmatrix} +
    \begin{pmatrix} \nu_{1t} \\ \nu_{2t} \end{pmatrix}

    \text{ipman}_t = \mu_t + \epsilon_t

The estimated parameters are
. use http://www.stata-press.com/data/r13/dfex
(St. Louis Fed (FRED) macro data)
. constraint 15 [f1]L.f1 = 1
. constraint 16 [f1]L.f2 = 1
. constraint 17 [f2]L.f2 = 1
. constraint 18 [ipman]f1 = 1
. sspace (f1 L.f1 L.f2, state noconstant)
>        (f2 L.f2, state noconstant)
>        (ipman f1, noconstant), constraints(15/18)
searching for initial values ..........
(setting technique to bhhh)
Iteration 0:   log likelihood = -362.93861
Iteration 1:   log likelihood = -362.12048
(output omitted)
Refining estimates:
Iteration 0:   log likelihood = -359.1266
Iteration 1:   log likelihood = -359.1266
State-space model
Sample: 1972m1 - 2008m11                     Number of obs      =         443
Log likelihood = -359.1266
 ( 1)  [f1]L.f1 = 1
 ( 2)  [f1]L.f2 = 1
 ( 3)  [f2]L.f2 = 1
 ( 4)  [ipman]f1 = 1
------------------------------------------------------------------------------
             |                 OIM
       ipman |      Coef.   Std. Err.      z    P>|z|     [95% Conf. Interval]
-------------+----------------------------------------------------------------
f1           |
          f1 |
         L1. |          1  (constrained)
          f2 |
         L1. |          1  (constrained)
-------------+----------------------------------------------------------------
f2           |
          f2 |
         L1. |          1  (constrained)
-------------+----------------------------------------------------------------
ipman        |
          f1 |          1  (constrained)
-------------+----------------------------------------------------------------
     var(f1) |   .1473071   .0407156     3.62   0.000      .067506    .2271082
     var(f2) |   .0178752   .0065743     2.72   0.003     .0049898    .0307606
  var(ipman) |   .0354429   .0148186     2.39   0.008     .0063989    .0644868
------------------------------------------------------------------------------
Note: Tests of variances against zero are one sided, and the two-sided
      confidence intervals are truncated at zero.
There is little evidence that any of the variance parameters is zero. The fit obtained indicates
that we could now proceed with specification testing and checks to see how well this model forecasts
these data.
Stored results

sspace stores the following in e():

Scalars
    e(N)                  number of observations
    e(k)                  number of parameters
    e(k_aux)              number of auxiliary parameters
    e(k_eq)               number of equations in e(b)
    e(k_dv)               number of dependent variables
    e(k_obser)            number of observation equations
    e(k_state)            number of state equations
    e(k_obser_err)        number of observation-error terms
    e(k_state_err)        number of state-error terms
    e(df_m)               model degrees of freedom
    e(ll)                 log likelihood
    e(chi2)               χ²
    e(p)                  significance
    e(tmin)               minimum time in sample
    e(tmax)               maximum time in sample
    e(stationary)         1 if the estimated parameters indicate a stationary model, 0 otherwise
    e(rank)               rank of VCE
    e(ic)                 number of iterations
    e(rc)                 return code
    e(converged)          1 if converged, 0 otherwise

Macros
    e(cmd)                sspace
    e(cmdline)            command as typed
    e(depvar)             unoperated names of dependent variables in observation equations
    e(obser_deps)         names of dependent variables in observation equations
    e(state_deps)         names of dependent variables in state equations
    e(covariates)         list of covariates
    e(indeps)             independent variables
    e(tvar)               variable denoting time within groups
    e(eqnames)            names of equations
    e(title)              title in estimation output
    e(tmins)              formatted minimum time
    e(tmaxs)              formatted maximum time
    e(R_structure)        structure of observed-variable-error covariance matrix
    e(Q_structure)        structure of state-error covariance matrix
    e(chi2type)           Wald; type of model χ² test
    e(vce)                vcetype specified in vce()
    e(vcetype)            title used to label Std. Err.
    e(opt)                type of optimization
    e(method)             likelihood method
    e(initial_values)     type of initial values
    e(technique)          maximization technique
    e(tech_steps)         iterations taken in maximization technique
    e(datasignature)      the checksum
    e(datasignaturevars)  variables used in calculation of checksum
    e(properties)         b V
    e(estat_cmd)          program used to implement estat
    e(predict)            program used to implement predict
    e(marginsok)          predictions allowed by margins
    e(marginsnotok)       predictions disallowed by margins

Matrices
    e(b)                  parameter vector
    e(Cns)                constraints matrix
    e(ilog)               iteration log (up to 20 iterations)
    e(gradient)           gradient vector
    e(gamma)              mapping from parameter vector to state-space matrices
    e(A)                  estimated A matrix
    e(B)                  estimated B matrix
    e(C)                  estimated C matrix
    e(D)                  estimated D matrix
    e(F)                  estimated F matrix
    e(G)                  estimated G matrix
    e(chol_R)             Cholesky factor of estimated R matrix
    e(chol_Q)             Cholesky factor of estimated Q matrix
    e(chol_Sz0)           Cholesky factor of initial state covariance matrix
    e(z0)                 initial state vector augmented with a matrix identifying nonstationary components
    e(d)                  additional term in diffuse initial state vector, if nonstationary model
    e(T)                  inner part of quadratic form for initial state covariance in a partially nonstationary model
    e(M)                  outer part of quadratic form for initial state covariance in a partially nonstationary model
    e(V)                  variance–covariance matrix of the estimators
    e(V_modelbased)       model-based variance

Functions
    e(sample)             marks estimation sample

Methods and formulas
sspace estimates the parameters of linear state-space models by maximum likelihood. The Kalman
filter is a method for recursively obtaining linear, least-squares forecasts of yt conditional on
past information. These forecasts are used to construct the log likelihood, assuming normality and
stationarity. When the model is nonstationary, a diffuse Kalman filter is used.
Hamilton (1994a; 1994b, 389) shows that the QML estimator, obtained when the normality
assumption is dropped, is consistent and asymptotically normal, although the variance–covariance
matrix of the estimator (VCE) must be estimated by the Huber/White/sandwich estimator. Hamilton's
discussion applies to stationary models, and specifying vce(robust) produces a consistent estimator
of the VCE when the errors are not normal.
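For example, the AR(1) model fit earlier could be refit with a robust VCE simply by adding the option:

. sspace (u L.u, state noconstant) (D.lncaputil u, noerror), constraints(1) vce(robust)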
Methods for computing the log likelihood differ in how they calculate initial values for the Kalman
filter when the model is stationary, how they compute a diffuse Kalman filter when the model is
nonstationary, and whether terms for initial states are included. sspace offers the method(hybrid),
method(dejong), and method(kdiffuse) options for computing the log likelihood. All three
methods handle both stationary and nonstationary models.
method(hybrid), the default, uses the initial values for the states implied by stationarity to
initialize the Kalman filter when the model is stationary. Hamilton (1994b, 378) discusses this method
of computing initial values for the states and derives a log-likelihood function that does not include
terms for the initial states. When the model is nonstationary, method(hybrid) uses the De Jong
(1988, 1991) diffuse Kalman filter and log-likelihood function, which includes terms for the initial
states.
method(dejong) uses the stationary De Jong (1988) method when the model is stationary and
the De Jong (1988, 1991) diffuse Kalman filter when the model is nonstationary. The stationary
De Jong (1988) method estimates initial values for the Kalman filter as part of the log-likelihood
computation, as in De Jong (1988).
method(kdiffuse) implements the seldom-used large-κ diffuse approximation to the diffuse
Kalman filter when the model is nonstationary and uses initial values for the states implied by
stationarity when the model is stationary. The log likelihood does not include terms for the initial
states in either case. We recommend that you do not use method(kdiffuse) except to replicate
older results computed using this method.
De Jong (1988, 1991) and De Jong and Chu-Chun-Lin (1994) derive the log likelihood and a diffuse
Kalman filter for handling nonstationary data. De Jong (1988) replaces the stationarity assumption
with a time-immemorial assumption, which he uses to derive the log-likelihood function, an initial
state vector, and a covariance of the initial state vector when the model is nonstationary. By default,
and when method(hybrid) or method(dejong) is specified, sspace uses the diffuse Kalman filter
given in definition 5 of De Jong and Chu-Chun-Lin (1994). This method uses theorem 3 of De Jong
and Chu-Chun-Lin (1994) to compute the covariance of the initial states. When using this method,
sspace saves the matrices from their theorem 3 in e(), although the names are changed. e(Z) is
their U1 , e(T) is their U2 , e(A) is their T, and e(M) is their M.
See De Jong (1988, 1991) and De Jong and Chu-Chun-Lin (1994) for the details of the De Jong
diffuse Kalman filter.
Practical estimation and inference require that the maximum likelihood estimator be consistent and
normally distributed in large samples. These statistical properties of the maximum likelihood estimator
are well established when the model is stationary; see Caines (1988, chap. 5 and 7), Hamilton (1994b,
388–389), and Hannan and Deistler (1988, chap. 4). When the model is nonstationary, additional
assumptions must hold for the maximum likelihood estimator to be consistent and asymptotically
normal; see Harvey (1989, sec. 3.4), Lütkepohl (2005, 636–637), and Schneider (1988). Chang,
Miller, and Park (2009) show that the ML and the QML estimators are consistent and asymptotically
normal for a class of nonstationary state-space models.
We now give an intuitive version of the Kalman filter. sspace uses theoretically equivalent, but
numerically more stable, methods. For each time t, the Kalman filter produces the conditional expected
state vector z_{t|t} and the conditional covariance matrix \Sigma_{t|t}; both are conditional on information up
to and including time t. Using the model and previous period results, for each t we begin with the
one-step predictions

    z_{t|t-1} = A z_{t-1|t-1} + B x_t
    \Sigma_{t|t-1} = A \Sigma_{t-1|t-1} A' + C Q C'
    \hat{y}_{t|t-1} = D z_{t|t-1} + F w_t                                                        (6)

The prediction error and its mean squared error are

    \tilde{e}_{t|t} = y_t - \hat{y}_{t|t-1}
    \Sigma_{e,t|t} = D \Sigma_{t|t-1} D' + G R G'                                                (7)

In the last steps, we update the conditional expected state vector and the conditional covariance
with the time t information:

    z_{t|t} = z_{t|t-1} + \Sigma_{t|t-1} D' \Sigma_{e,t|t}^{-1} \tilde{e}_{t|t}
    \Sigma_{t|t} = \Sigma_{t|t-1} - \Sigma_{t|t-1} D' \Sigma_{e,t|t}^{-1} D \Sigma_{t|t-1}       (8)

Equations (6)–(8) are the Kalman filter. The equations denoted by (6) are the one-step predictions.
The one-step predictions do not use contemporaneous values of y_t; only past values of y_t, past values
of the exogenous x_t, and contemporaneous values of x_t are used. Equations (7) and (8) form the
update step of the Kalman filter; they incorporate the contemporaneous dependent-variable information
into the predicted states.

The Kalman filter requires initial values for the states and a covariance matrix for the initial states
to start off the recursive process. Hamilton (1994b) discusses how to compute initial values for the
Kalman filter assuming stationarity. This method is used by default when the model is stationary. De
Jong (1988) discusses how to estimate initial values by maximum likelihood; this method is used
when method(dejong) is specified.

Letting \theta be the vector of parameters in the model, Lütkepohl (2005) and Harvey (1989) show
that the log-likelihood function for the parameters of a stationary model is given by

    \ln L(\theta) = -0.5 \left\{ nT \ln(2\pi) + \sum_{t=1}^{T} \ln(|\Sigma_{t|t-1}|)
                    + \sum_{t=1}^{T} e_t' \Sigma_{t|t-1}^{-1} e_t \right\}

where e_t = y_t - \hat{y}_{t|t-1}.
Not all state-space models are identified, as discussed in Hamilton (1994b) and Lütkepohl (2005).
sspace checks for local identification at the optimum. sspace will not declare convergence unless
the Hessian is full rank. This check for local identifiability is due to Rothenberg (1971).
Specifying method(dejong) causes sspace to maximize the log-likelihood function given in
section 2 (vii) of De Jong (1988). This log-likelihood function includes the initial states as parameters
to be estimated. We use some of the methods in Casals, Sotoca, and Jerez (1999) for computing the
De Jong (1988) log-likelihood function.
References

Anderson, B. D. O., and J. B. Moore. 1979. Optimal Filtering. Englewood Cliffs, NJ: Prentice Hall.
Brockwell, P. J., and R. A. Davis. 1991. Time Series: Theory and Methods. 2nd ed. New York: Springer.
Caines, P. E. 1988. Linear Stochastic Systems. New York: Wiley.
Casals, J., S. Sotoca, and M. Jerez. 1999. A fast and stable method to compute the likelihood of time invariant state-space models. Economics Letters 65: 329–337.
Chang, Y., J. I. Miller, and J. Y. Park. 2009. Extracting a common stochastic trend: Theory with some applications. Journal of Econometrics 150: 231–247.
Davidson, R., and J. G. MacKinnon. 1993. Estimation and Inference in Econometrics. New York: Oxford University Press.
De Jong, P. 1988. The likelihood for a state space model. Biometrika 75: 165–169.
———. 1991. The diffuse Kalman filter. Annals of Statistics 19: 1073–1083.
De Jong, P., and S. Chu-Chun-Lin. 1994. Stationary and non-stationary state space models. Journal of Time Series Analysis 15: 151–166.
Drukker, D. M., and V. L. Wiggins. 2004. Verifying the solution from a nonlinear solver: A case study: Comment. American Economic Review 94: 397–399.
Hamilton, J. D. 1994a. State-space models. In Vol. 4 of Handbook of Econometrics, ed. R. F. Engle and D. L. McFadden, 3039–3080. Amsterdam: Elsevier.
———. 1994b. Time Series Analysis. Princeton: Princeton University Press.
Hannan, E. J., and M. Deistler. 1988. The Statistical Theory of Linear Systems. New York: Wiley.
Harvey, A. C. 1989. Forecasting, Structural Time Series Models and the Kalman Filter. Cambridge: Cambridge University Press.
———. 1993. Time Series Models. 2nd ed. Cambridge, MA: MIT Press.
Lütkepohl, H. 2005. New Introduction to Multiple Time Series Analysis. New York: Springer.
Rothenberg, T. J. 1971. Identification in parametric models. Econometrica 39: 577–591.
Schneider, W. 1988. Analytical uses of Kalman filtering in econometrics: A survey. Statistical Papers 29: 3–33.
Stock, J. H., and M. W. Watson. 1989. New indexes of coincident and leading economic indicators. In NBER Macroeconomics Annual 1989, ed. O. J. Blanchard and S. Fischer, vol. 4, 351–394. Cambridge, MA: MIT Press.
———. 1991. A probability model of the coincident economic indicators. In Leading Economic Indicators: New Approaches and Forecasting Records, ed. K. Lahiri and G. H. Moore, 63–89. Cambridge: Cambridge University Press.
Also see
[TS] sspace postestimation Postestimation tools for sspace
[TS] arima ARIMA, ARMAX, and other dynamic regression models
[TS] dfactor Dynamic-factor models
[TS] tsset Declare data to be time-series data
[TS] ucm Unobserved-components model
[TS] var Vector autoregressive models
[U] 20 Estimation and postestimation commands
Title
sspace postestimation Postestimation tools for sspace
Description          Syntax for predict          Menu for predict          Options for predict
Remarks and examples          Methods and formulas          References          Also see

Description
Description
The following standard postestimation commands are available after sspace:

Command            Description
-----------------------------------------------------------------------------
estat ic           Akaike's and Schwarz's Bayesian information criteria (AIC and BIC)
estat summarize    summary statistics for the estimation sample
estat vce          variance–covariance matrix of the estimators (VCE)
estimates          cataloging estimation results
forecast           dynamic forecasts and simulations
lincom             point estimates, standard errors, testing, and inference for
                     linear combinations of coefficients
lrtest             likelihood-ratio test
nlcom              point estimates, standard errors, testing, and inference for
                     nonlinear combinations of coefficients
predict            predictions, residuals, and other diagnostic measures
predictnl          point estimates, standard errors, testing, and inference for
                     generalized predictions
test               Wald tests of simple and composite linear hypotheses
testnl             Wald tests of nonlinear hypotheses
-----------------------------------------------------------------------------

Syntax for predict

predict [type] { stub* | newvarlist } [if] [in] [, statistic options]

statistic    Description
---------------------------------------------
Main
  xb          observable variables
  states      latent state variables
  residuals   residuals
  rstandard   standardized residuals
---------------------------------------------
These statistics are available both in and out of sample; type
predict ... if e(sample) ... if wanted only for the estimation sample.
options                      Description
-----------------------------------------------------------------------------
Options
  equation(eqnames)          name(s) of equation(s) for which predictions
                               are to be made
  rmse(stub* | newvarlist)   put estimated root mean squared errors of
                               predicted statistics in new variables
  dynamic(time_constant)     begin dynamic forecast at specified time

Advanced
  smethod(method)            method for predicting unobserved states
-----------------------------------------------------------------------------

method      Description
-------------------------------------------------------------
onestep     predict using past information
smooth      predict using all sample information
filter      predict using past and contemporaneous information
-------------------------------------------------------------

Menu for predict

Statistics > Postestimation > Predictions, residuals, etc.

Options for predict

      Main

xb, the default, calculates the linear predictions of the observed variables.

states calculates the linear predictions of the latent state variables.

residuals calculates the residuals in the equations for observable variables.

rstandard calculates the standardized residuals.

      Options
equation(eqnames) specifies the equation(s) for which the predictions are to be calculated. If you
do not specify equation() or stub*, the results are the same as if you had specified the name
of the first equation for the predicted statistic.
You specify a list of equation names, such as equation(income consumption) or equation(factor1 factor2), to identify the equations. Specify names of state equations when
predicting states and names of observable equations in all other cases.
equation() may not be specified with stub*.
rmse(stub* | newvarlist) puts the root mean squared errors of the predicted statistics into the specified
new variables. The root mean squared errors measure the variances due to the disturbances but do
not account for estimation error.
dynamic(time constant) specifies when predict starts producing dynamic forecasts. The specified
time constant must be in the scale of the time variable specified in tsset, and the time constant
must be inside a sample for which observations on the dependent variables are available. For
example, dynamic(tq(2008q4)) causes dynamic predictions to begin in the fourth quarter of
2008, assuming that your time variable is quarterly; see [D] datetime. If the model contains
exogenous variables, they must be present for the whole predicted sample. dynamic() may not
be specified with rstandard, residuals, or smethod(smooth).
Advanced
smethod(method) specifies the method for predicting the unobserved states; smethod(onestep),
smethod(filter), and smethod(smooth) cause different amounts of information on the dependent variables to be used in predicting the states at each time period.
smethod(onestep), the default, causes predict to estimate the states at each time period using
previous information on the dependent variables. The Kalman filter is performed on previous
periods, but only the one-step predictions are made for the current period.
smethod(smooth) causes predict to estimate the states at each time period using all the sample
data by the Kalman smoother. smethod(smooth) may not be specified with rstandard.
smethod(filter) causes predict to estimate the states at each time period using previous and
contemporaneous data by the Kalman filter. The Kalman filter is performed on previous periods
and the current period. smethod(filter) may be specified only with states.
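To see how much the three methods differ, one could predict the states under each and compare the results; a sketch (the variable stubs are arbitrary):

. predict s_one*, states smethod(onestep)
. predict s_fil*, states smethod(filter)
. predict s_smo*, states smethod(smooth)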
Remarks and examples

In [TS] sspace, we fit the dynamic-factor model

    \begin{pmatrix} f_t \\ f_{t-1} \end{pmatrix} =
    \begin{pmatrix} \theta_1 & \theta_2 \\ 1 & 0 \end{pmatrix}
    \begin{pmatrix} f_{t-1} \\ f_{t-2} \end{pmatrix} +
    \begin{pmatrix} \nu_t \\ 0 \end{pmatrix}

    \begin{pmatrix} \Delta\text{ipman}_t \\ \Delta\text{income}_t \\ \Delta\text{hours}_t \\ \Delta\text{unemp}_t \end{pmatrix} =
    \begin{pmatrix} \gamma_1 \\ \gamma_2 \\ \gamma_3 \\ \gamma_4 \end{pmatrix} f_t +
    \begin{pmatrix} \epsilon_{1t} \\ \epsilon_{2t} \\ \epsilon_{3t} \\ \epsilon_{4t} \end{pmatrix}

where

    Var\begin{pmatrix} \epsilon_{1t} \\ \epsilon_{2t} \\ \epsilon_{3t} \\ \epsilon_{4t} \end{pmatrix} =
    \begin{pmatrix} \sigma_1^2 & 0 & 0 & 0 \\ 0 & \sigma_2^2 & 0 & 0 \\ 0 & 0 & \sigma_3^2 & 0 \\ 0 & 0 & 0 & \sigma_4^2 \end{pmatrix}
by typing
. use http://www.stata-press.com/data/r13/dfex
(St. Louis Fed (FRED) macro data)
. constraint 1 [lf]L.f = 1
. sspace (f L.f L.lf, state noconstant)
>        (lf L.f, state noconstant noerror)
>        (D.ipman f, noconstant)
>        (D.income f, noconstant)
>        (D.hours f, noconstant)
>        (D.unemp f, noconstant),
>        covstate(identity) constraints(1)
(output omitted)
Below we obtain the one-step predictions for each of the four dependent variables in the model,
and then we graph the actual and predicted ipman:
. predict dep*
(option xb assumed; fitted values)
(graph omitted: actual and one-step predicted values of D.ipman, 1970m1–2010m1)

The graph shows that the one-step predictions account for only a small part of the swings in the
realized ipman.
(graph omitted: dynamic predictions of D.ipman, 2008m1–2009m4)
The model predicts that the changes in industrial production will remain negative for the forecast
horizon, although they increase toward zero.
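Dynamic forecasts such as these can be produced along the following lines (a sketch; the number of appended periods, the forecast start date, and the new variable name are illustrative):

. tsappend, add(6)
. predict Dipman_f, dynamic(tm(2008m12)) equation(D.ipman)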
(graph omitted)

The difference between smethod(onestep) and smethod(filter) is that smethod(filter) uses
contemporaneous information on the dependent variables while smethod(onestep) does not.
As noted in [TS] sspace, sspace has both a stationary and a diffuse Kalman filter. predict uses
the same Kalman filter used for estimation.
With the smethod(smooth) option, predict estimates the states in each period using all the
sample information by applying the Kalman smoother. predict uses the Harvey (1989, sec. 3.6.2)
fixed-interval smoother with model-based initial values to estimate the states when the estimated
parameters imply a stationary model. De Jong (1989) provides a computationally efficient method.
Hamilton (1994) discusses the model-based initial values for stationary state-space models. When the
model is nonstationary, the De Jong (1989) diffuse Kalman smoother is used to predict the states.
The smoothed estimates of the states are subsequently used to predict the dependent variables.
The dependent variables are predicted by plugging in the estimated states. The residuals are
calculated as the differences between the predicted and the realized dependent variables. The root
mean squared errors are the square roots of the diagonal elements of the mean squared error matrices
that are computed by the Kalman filter. The standardized residuals are the residuals normalized by
the Cholesky factor of their mean squared error produced by the Kalman filter.
predict uses the Harvey (1989, sec. 3.5) methods to compute the dynamic forecasts and the root
mean squared errors. Let \tau be the period at which the dynamic forecasts begin; \tau must either be in
the specified sample or be in the period immediately following the specified sample.

The dynamic forecasts depend on the predicted states in the period \tau - 1, which predict obtains by
running the Kalman filter or the diffuse Kalman filter on the previous sample observations. The states
in the periods prior to starting the dynamic predictions may be estimated using smethod(onestep)
or smethod(smooth).

Using an if or in qualifier to alter the prediction sample can change the estimate of the unobserved
states in the period prior to beginning the dynamic predictions and hence alter the dynamic predictions.
The initial states are estimated using e(b) and the prediction sample.
References

De Jong, P. 1988. The likelihood for a state space model. Biometrika 75: 165–169.
———. 1989. Smoothing and interpolation with the state-space model. Journal of the American Statistical Association 84: 1085–1088.
———. 1991. The diffuse Kalman filter. Annals of Statistics 19: 1073–1083.
Hamilton, J. D. 1994. Time Series Analysis. Princeton: Princeton University Press.
Harvey, A. C. 1989. Forecasting, Structural Time Series Models and the Kalman Filter. Cambridge: Cambridge University Press.
Lütkepohl, H. 2005. New Introduction to Multiple Time Series Analysis. New York: Springer.
Also see
[TS] sspace State-space models
[TS] dfactor Dynamic-factor models
[TS] dfactor postestimation Postestimation tools for dfactor
[U] 20 Estimation and postestimation commands
Title
tsappend Add observations to a time-series dataset
Syntax          Menu          Description          Options
Remarks and examples          Stored results          Also see
Syntax
tsappend , options

options              Description
-----------------------------------------------------------------
add(#)               add # observations
last(date | clock)   add observations at date or clock
tsfmt(string)        use time-series function string with
                       last(date | clock)
panel(panel_id)      add observations to panel panel_id
-----------------------------------------------------------------
Menu
Statistics > Time series > Setup and utilities > Add observations to time-series dataset
Description
tsappend appends observations to a time-series dataset or to a panel dataset. tsappend uses and
updates the information set by tsset.
Options
add(#) specifies the number of observations to add.
last(date | clock) and tsfmt(string) must be specified together and are an alternative to add().
last(date | clock) specifies the date or the date and time of the last observation to add.
tsfmt(string) specifies the name of the Stata time-series function to use in converting the date
specified in last() to an integer. The function names are tc (clock), tC (Clock), td (daily), tw
(weekly), tm (monthly), tq (quarterly), and th (half-yearly).
For clock times, the last time added (if any) will be earlier than the time requested in
last(date | clock) if last() is not a multiple of delta units from the last time in the data.
For instance, you might specify last(17may2007) tsfmt(td), last(2001m1) tsfmt(tm), or
last(17may2007 15:30:00) tsfmt(tc).
panel(panel id) specifies that observations be added only to panels with the ID specified in panel().
Introduction
tsappend adds observations to a time-series dataset or to a panel dataset. You must tsset your
data before using tsappend. tsappend simultaneously removes any gaps from the dataset.
There are two ways to use tsappend: you can specify the add(#) option to request that #
observations be added, or you can specify the last(date | clock) option to request that observations
be appended until the date specified is reached. If you specify last(), you must also specify tsfmt().
tsfmt() specifies the Stata time-series date function that converts the date held in last() to an
integer.
tsappend works with time series of panel data. With panel data, tsappend adds the requested
observations to all the panels, unless the panel() option is also specified.
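For example, to add four observations only to the panel whose panel ID is 3 (values illustrative), one would type

. tsappend, add(4) panel(3)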
We have monthly data and fit an AR(1) model by regressing y on its first lag:

. use http://www.stata-press.com/data/r13/tsappend1
. regress y L.y

      Source |       SS       df       MS              Number of obs =     479
-------------+------------------------------           F(  1,   477) =  119.29
       Model |  115.349555     1  115.349555           Prob > F      =  0.0000
    Residual |  461.241577   477  .966963473           R-squared     =  0.2001
-------------+------------------------------           Adj R-squared =  0.1984
       Total |  576.591132   478   1.2062576           Root MSE      =  .98334

------------------------------------------------------------------------------
           y |      Coef.   Std. Err.      t    P>|t|     [95% Conf. Interval]
-------------+----------------------------------------------------------------
           y |
         L1. |   .4493507   .0411417    10.92   0.000     .3685093    .5301921
       _cons |   11.11877   .8314581    13.37   0.000     9.484993    12.75254
------------------------------------------------------------------------------
. mat b = e(b)
. mat colnames b = L.xb one
. tsset
time variable: t2, 1960m2 to 2000m1
delta: 1 month
. tsappend, add(12)
. tsset
time variable: t2, 1960m2 to 2001m1
delta: 1 month
. predict xb if t2<=tm(2000m2)
(option xb assumed; fitted values)
(12 missing values generated)
The calls to tsset before and after tsappend were unnecessary. Their output reveals that tsappend
added another year of observations. We then used predict and matrix score to obtain the dynamic
predictions, which allows us to produce the following graph:
(graph omitted: y and its fitted values, including the dynamic predictions, 1995m1–2001m1)
In the call to tsappend, instead of saying that we wanted to add 12 observations, we could have
specified that we wanted to fill in observations through the first month of 2001:
. use http://www.stata-press.com/data/r13/tsappend1, clear
. tsset
time variable: t2, 1960m2 to 2000m1
delta: 1 month
. tsappend, last(2001m1) tsfmt(tm)
. tsset
time variable: t2, 1960m2 to 2001m1
delta: 1 month
We specified the tm() function in the tsfmt() option. [D] functions contains a list of time-series
functions for translating date literals to integers. Because we have monthly data, [D] functions tells
us to use the tm() function, so we specified the tsfmt(tm) option.
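A quick way to see what a translation function returns is to display its value for a date literal; monthly dates are stored as the number of months since 1960m1 = 0:

. display tm(2001m1)
492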
The following table shows the most common types of time-series data, their formats, the appropriate
translation functions, and the corresponding options for tsappend:
Description   Format   Function   Option
-----------------------------------------
time          %tc      tc()       tsfmt(tc)
time          %tC      tC()       tsfmt(tC)
daily         %td      td()       tsfmt(td)
weekly        %tw      tw()       tsfmt(tw)
monthly       %tm      tm()       tsfmt(tm)
quarterly     %tq      tq()       tsfmt(tq)
half-yearly   %th      th()       tsfmt(th)
yearly        %ty      ty()       tsfmt(ty)
-----------------------------------------
. xtdescribe

      t2:  456, 457, ..., 480                                n =          3
                                                             T =         25
   Distribution of T_i:   min      5%     25%     50%     75%     95%    max
                           13      13      13      20      24      24     24

     Freq.  Percent    Cum. |  Pattern
 ---------------------------+---------------------------
         1     33.33  33.33 |  ............1111111111111
         1     33.33  66.67 |  1111.11111111111111111111
         1     33.33 100.00 |  11111111111111111111.....
 ---------------------------+---------------------------
         3    100.00        |  XXXXXXXXXXXXXXXXXXXXXXXXX

. by id: summarize t2

-> id = 1
    Variable |       Obs        Mean    Std. Dev.       Min        Max
-------------+---------------------------------------------------------
          t2 |        13         474     3.89444        468        480

-> id = 2
    Variable |       Obs        Mean    Std. Dev.       Min        Max
-------------+---------------------------------------------------------
          t2 |        20       465.5     5.91608        456        475

-> id = 3
    Variable |       Obs        Mean    Std. Dev.       Min        Max
-------------+---------------------------------------------------------
          t2 |        24    468.3333    7.322786        456        480
The output from xtdescribe and summarize on these data tells us that one panel starts later
than the other, that another panel ends before the other two, and that the remaining panel has a gap
in the time variable but otherwise spans the entire time frame.
. tsappend, add(6)
. xtdescribe

      t2:  456, 457, ..., 486                                n =          3
                                                             T =         31
   Distribution of T_i:   min      5%     25%     50%     75%     95%    max
                           19      19      19      26      31      31     31

     Freq.  Percent    Cum. |  Pattern
 ---------------------------+---------------------------------
         1     33.33  33.33 |  ............1111111111111111111
         1     33.33  66.67 |  11111111111111111111111111.....
         1     33.33 100.00 |  1111111111111111111111111111111
 ---------------------------+---------------------------------
         3    100.00        |  XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX

. by id: summarize t2

-> id = 1
    Variable |       Obs        Mean    Std. Dev.       Min        Max
-------------+---------------------------------------------------------
          t2 |        19         477    5.627314        468        486

-> id = 2
    Variable |       Obs        Mean    Std. Dev.       Min        Max
-------------+---------------------------------------------------------
          t2 |        26       468.5    7.648529        456        481

-> id = 3
    Variable |       Obs        Mean    Std. Dev.       Min        Max
-------------+---------------------------------------------------------
          t2 |        31         471    9.092121        456        486
This output from xtdescribe and summarize after the call to tsappend shows that the call to
tsappend, add(6) added 6 observations to each panel and filled in the gap in the time variable in
the second panel. tsappend, add() did not cause a uniform end date over the panels.
The following output illustrates the contrast between tsappend, add() and tsappend, last()
tsfmt() with panel data that end at different dates. The output from xtdescribe and summarize
shows that the call to tsappend, last() tsfmt() filled in the gap in t2 and caused all the panels
to end at the specified end date. The output also shows that the panels remain unbalanced because
one panel has a later entry date than the other two.
. xtdescribe

      t2:  456, 457, ..., 486                                n =          3
                                                             T =         31
   Distribution of T_i:   min      5%     25%     50%     75%     95%    max
                           19      19      19      31      31      31     31

     Freq.  Percent    Cum. |  Pattern
 ---------------------------+---------------------------------
         2     66.67  66.67 |  1111111111111111111111111111111
         1     33.33 100.00 |  ............1111111111111111111
 ---------------------------+---------------------------------
         3    100.00        |  XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX

. by id: summarize t2

-> id = 1
    Variable |       Obs        Mean    Std. Dev.       Min        Max
-------------+---------------------------------------------------------
          t2 |        19         477    5.627314        468        486

-> id = 2
    Variable |       Obs        Mean    Std. Dev.       Min        Max
-------------+---------------------------------------------------------
          t2 |        31         471    9.092121        456        486

-> id = 3
    Variable |       Obs        Mean    Std. Dev.       Min        Max
-------------+---------------------------------------------------------
          t2 |        31         471    9.092121        456        486
Stored results

tsappend stores the following in r():

Scalars
    r(add)    number of observations added

Also see

[TS] tsset Declare data to be time-series data
Title
tsfill Fill in gaps in time variable
Syntax          Menu          Description          Option
Remarks and examples          Also see

Syntax

tsfill [, full]
You must tsset your data before using tsfill; see [TS] tsset.
Menu
Statistics > Time series > Setup and utilities > Fill in gaps in time variable
Description
tsfill is used after tsset to fill in gaps in time-series data and gaps in panel data with
new observations, which contain missing values. For instance, perhaps observations for timevar =
1, 3, 5, 6, . . . , 22 exist. tsfill would create observations for timevar = 2 and timevar = 4 containing
all missing values. There is seldom reason to do this because Stata's time-series operators consider
timevar, not the observation number. Referring to L.gnp to obtain lagged gnp values would correctly
produce a missing value for timevar = 3, even if the data were not filled in. Referring to L2.gnp
would correctly return the value of gnp in the first observation for timevar = 3, even if the data were
not filled in.
Option
full is for use with panel data only. With panel data, tsfill by default fills in observations for
each panel according to the minimum and maximum values of timevar for the panel. Thus if the
first panel spanned the times 5–20 and the second panel the times 1–15, after tsfill they would
still span the same periods; observations would be created to fill in any missing times from 5–20
in the first panel and from 1–15 in the second.

If full is specified, observations are created so that both panels span the time 1–20, the overall
minimum and maximum of timevar across panels.
We have monthly data on income with gaps in the time variable:

. list mdate income

          mdate   income
  1.     1995m7     1153
  2.     1995m8     1181
  3.    1995m11     1236
  4.    1995m12     1297
  5.     1996m1     1265
  6.     1996m3     1282
You can fill in the gaps by interpolation easily with tsfill and ipolate. tsfill creates the
missing observations:
. tsfill
. list mdate income
          mdate   income
  1.     1995m7     1153
  2.     1995m8     1181
  3.     1995m9        .      <- new
  4.    1995m10        .      <- new
  5.    1995m11     1236
  6.    1995m12     1297
  7.     1996m1     1265
  8.     1996m2        .      <- new
  9.     1996m3     1282
We can now use ipolate (see [D] ipolate) to fill them in:
. ipolate income mdate, gen(ipinc)
. list mdate income ipinc
          mdate   income       ipinc
  1.     1995m7     1153        1153
  2.     1995m8     1181        1181
  3.     1995m9        .   1199.3333
  4.    1995m10        .   1217.6667
  5.    1995m11     1236        1236
  6.    1995m12     1297        1297
  7.     1996m1     1265        1265
  8.     1996m2        .      1273.5
  9.     1996m3     1282        1282
We also have a panel dataset on income, by education level (edlevel), with gaps in the time
variable:

. list edlevel year income

       edlevel   year   income
  1.         1   1988    14500
  2.         1   1989    14750
  3.         1   1990    14950
  4.         1   1991    15100
  5.         2   1989    22100
  6.         2   1990    22200
  7.         2   1992    22800
Just as with nonpanel time-series datasets, you can use tsfill to fill in the gaps:
. tsfill
. list edlevel year income
       edlevel   year   income
  1.         1   1988    14500
  2.         1   1989    14750
  3.         1   1990    14950
  4.         1   1991    15100
  5.         2   1989    22100
  6.         2   1990    22200
  7.         2   1991        .      <- new
  8.         2   1992    22800
You could instead use tsfill to produce fully balanced panels with the full option:
. tsfill, full
. list edlevel year income, sep(0)
       edlevel   year   income
  1.         1   1988    14500
  2.         1   1989    14750
  3.         1   1990    14950
  4.         1   1991    15100
  5.         1   1992        .      <- new
  6.         2   1988        .      <- new
  7.         2   1989    22100
  8.         2   1990    22200
  9.         2   1991        .      <- new
 10.         2   1992    22800
Video example
Time series, part 1: Formatting dates, tsset, tsreport, and tsfill
Also see
[TS] tsset Declare data to be time-series data
[TS] tsappend Add observations to a time-series dataset
Title
tsfilter Filter a time series, keeping only selected periodicities

Syntax          Description          Remarks and examples
Acknowledgments          References          Also see

Syntax

Filter one variable

tsfilter filter [type] newvar = varname [if] [in] [, options]
filter   Name                     See
--------------------------------------------------
bk       Baxter–King              [TS] tsfilter bk
bw       Butterworth              [TS] tsfilter bw
cf       Christiano–Fitzgerald    [TS] tsfilter cf
hp       Hodrick–Prescott         [TS] tsfilter hp
--------------------------------------------------
You must tsset or xtset your data before using tsfilter; see [TS] tsset and [XT] xtset.
varname and varlist may contain time-series operators; see [U] 11.4.4 Time-series varlists.
options differ across the filters and are documented in each filters manual entry.
Description
tsfilter separates a time series into trend and cyclical components. The trend component may
contain a deterministic or a stochastic trend. The stationary cyclical component is driven by stochastic
cycles at the specified periods.
479
Although the filters implemented in tsfilter have been widely applied by macroeconomists,
they are general time-series methods and may be of interest to other researchers.
Remarks are presented under the following headings:
An example dataset
A baseline method: Symmetric moving-average (SMA) filters
An overview of filtering in the frequency domain
SMA revisited: The BaxterKing filter
Filtering a random walk: The ChristianoFitzgerald filter
A one-parameter high-pass filter: The HodrickPrescott filter
A two-parameter high-pass filter: The Butterworth filter
An example dataset
Time series are frequently filtered to remove unwanted characteristics, such as trends and seasonal
components, or to estimate components driven by stochastic cycles from a specific range of periods.
Although the filters implemented in tsfilter can be used for both purposes, their primary purpose
is the latter, and we restrict our discussion to that use.
We explain the methods implemented in tsfilter by estimating the business-cycle component
of a macroeconomic variable, because they are frequently used for this purpose. We estimate the
business-cycle component of the natural log of an index of the industrial production of the United
States, which is plotted below.
. use http://www.stata-press.com/data/r13/ipq
(Federal Reserve Economic Data, St. Louis Fed)
. tsline ip_ln

(graph omitted: ip_ln, 1920q1–2010q1)
The above graph shows that ip_ln contains a trend component. Time series may contain deterministic
time trend. An integrated process is the most common stochastic trend. An integrated process is a
random variable that must be differenced one or more times to be stationary; see Hamilton (1994) for
a discussion. The different filters implemented in tsfilter allow for different orders of deterministic
time trends or integrated processes.
We now illustrate the four methods implemented in tsfilter, each of which will remove the
trend and estimate the business-cycle component. Burns and Mitchell (1946) defined oscillations in
business data with recurring periods between 1.5 and 8 years to be business-cycle fluctuations; we
use their commonly accepted definition.
A baseline method for estimating such a component is a symmetric moving-average (SMA) filter

    y_t^* = \sum_{j=-q}^{q} \alpha_j y_{t-j}

for each t \in \{q+1, \ldots, T-q\}, where \alpha_{-j} = \alpha_j for j \in \{-q, \ldots, q\}. Although the original series
has T observations, the filtered series has only T - 2q, where q is known as the order of the SMA
filter.

SMA filters with weights that sum to zero remove deterministic and stochastic trends of order 2 or
less, as shown by Fuller (1996) and Baxter and King (1999).

For instance, consider the second-order SMA filter

    ip_sma_t = -0.2 ip_ln_{t-2} - 0.2 ip_ln_{t-1} + 0.8 ip_ln_t - 0.2 ip_ln_{t+1} - 0.2 ip_ln_{t+2}
and plot the filtered series. We do not even need tsfilter to implement this second-order SMA
filter; we can use generate.

. generate ip_sma = -.2*L2.ip_ln-.2*L.ip_ln+.8*ip_ln-.2*F.ip_ln-.2*F2.ip_ln
(4 missing values generated)
. tsline ip_sma

(graph omitted: ip_sma, 1920q1–2010q1)
There is no good reason why we chose that particular SMA filter. Baxter and King (1999) derived
a class of SMA filters with coefficients that sum to zero and get as close as possible to keeping only
the specified cyclical component.
The autocovariances \gamma_j of a covariance-stationary series y_t can be expressed in terms of its spectral
density function f_y(\omega) as

    \gamma_j = \int_{-\pi}^{\pi} e^{i\omega j} f_y(\omega)\, d\omega                             (1)

Equation (1) can be manipulated to show what fraction of the variance of y_t is attributable to
stochastic cycles in a specified range of frequencies. Hamilton (1994, 156) discusses this point in
more detail.

Equation (1) implies that if f_y(\omega) = 0 for \omega \in [\omega_1, \omega_2], then stochastic cycles at these frequencies
contribute zero to the variance and autocovariances of y_t.

The goal of time-series filters is to transform the original series into a new series y_t^* for which
the spectral density function of the filtered series f_{y^*}(\omega) is zero for unwanted frequencies and equal
to f_y(\omega) for desired frequencies.
A linear filter of y_t can be written as

    y_t^* = \sum_{j=-\infty}^{\infty} \alpha_j y_{t-j} = \alpha(L) y_t

where we let y_t be an infinitely long series, as required by some of the results below. To see the
impact of the filter on the components of y_t at each frequency \omega, we need an expression for f_{y^*}(\omega)
in terms of f_y(\omega) and the filter weights \alpha_j. Wei (2006, 282) shows that for each \omega,

    f_{y^*}(\omega) = |\alpha(e^{i\omega})|^2 f_y(\omega)                                        (2)
where |\alpha(e^{i\omega})| is known as the gain of the filter. Equation (2) makes explicit that the squared gain
function |\alpha(e^{i\omega})|^2 converts the spectral density of the original series, f_y(\omega), into the spectral density
of the filtered series, f_{y^*}(\omega). In particular, (2) says that for each frequency \omega, the spectral density
of the filtered series is the product of the square of the gain of the filter and the spectral density of
the original series.
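As an illustration, the gain of the second-order SMA filter applied above can be computed directly: because its weights are symmetric, its transfer function reduces to the real-valued expression 0.8 - 0.4 cos(\omega) - 0.4 cos(2\omega), so a sketch in Stata is

. generate w = _pi*(_n-1)/_N
. generate gain_sma = abs(.8 - .4*cos(w) - .4*cos(2*w))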
As we will see in the examples below, the gain function provides a crucial interpretation of what
a filter is doing. We want a filter for which f_y*(ω) = 0 for unwanted frequencies and for which f_y*(ω) = f_y(ω) for desired frequencies. So we seek a filter for which the gain is 0 for unwanted frequencies and for which the gain is 1 for desired frequencies.
In practice, we cannot find such an ideal filter exactly, because the constraints an ideal filter places
on filter coefficients cannot be satisfied for time series with only a finite number of observations. The
expansive literature on filters is a result of the trade-offs involved in designing implementable filters
that approximate the ideal filter.
Ideally, filters pass or block the stochastic cycles at specified frequencies by having a gain of 1 or 0. Band-pass filters, such as the Baxter–King (BK) and the Christiano–Fitzgerald (CF) filters, pass through stochastic cycles in the specified range of frequencies and block all the other stochastic cycles. High-pass filters, such as the Hodrick–Prescott (HP) and Butterworth filters, only allow the stochastic cycles at or above a specified frequency to pass through and block the lower-frequency stochastic cycles. For band-pass filters, let [ω_0, ω_1] be the set of desired frequencies, with all other frequencies being undesired. For high-pass filters, let ω_0 be the cutoff frequency, with only those frequencies ω ≥ ω_0 being desired.
(figure omitted: time-series plot of the BK estimate of the business-cycle component, 1920q1–2010q1)
The above graph tells us what the estimated business-cycle component looks like, but it presents no
evidence as to how well we have estimated the component. A periodogram is better for this purpose.
A periodogram is an estimator of a transform of the spectral density function; see [TS] pergram for
details. Below we plot the periodogram for the BK estimate of the business-cycle component. pergram
displays the results in natural frequencies, which are the standard frequencies divided by 2π. We use the xline() option to draw vertical lines at the lower natural-frequency cutoff (1/32 = 0.03125) and the upper natural-frequency cutoff (1/6 ≈ 0.16667).
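The pergram command itself does not appear in this excerpt; assuming the BK estimate of the business-cycle component was saved in ip_bk (the variable name used in the output further below), the call would look like this:

. pergram ip_bk, xline(.03125 .16667)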
(figure omitted: periodogram of the BK estimate of the business-cycle component)
If the filter completely removed the stochastic cycles corresponding to the unwanted frequencies, the periodogram would be a flat line at the minimum value of −6.00 outside the range identified by the vertical lines. That the periodogram takes on values greater than −6.00 outside the specified range indicates the inability of the BK filter to pass through only stochastic cycles at frequencies inside the specified band.
We can also evaluate the BK filter by plotting its gain function against the gain function of an
ideal filter. In the output below, we reestimate the business-cycle component to store the gain of the
BK filter for the specified parameters. (The coefficients and the gain of the BK filter are completely
determined by the specified minimum period, the maximum period, and the order of the SMA filter.)
We label the variable bkgain for the graph below.
. drop ip_bk
. tsfilter bk ip_bk = ip_ln, minperiod(6) maxperiod(32) gain(bkgain abk)
. label variable bkgain "BK filter"
Below we generate ideal, the gain function of the ideal band-pass filter at the frequencies f.
Then we plot the gain of the ideal filter and the gain of the BK filter.
. generate f = _pi*(_n-1)/_N
. generate ideal = cond(f<_pi/16, 0, cond(f<_pi/3, 1,0))
. label variable ideal "Ideal filter"
(figure omitted: gain functions of the ideal filter and the BK filter)
The graph reveals that the gain of the BK filter deviates markedly from the square-wave gain of the
ideal filter. Increasing the order of the symmetric moving average via the smaorder() option will cause the gain of the BK filter to more closely approximate the gain of the ideal filter at the cost of lost observations
in the filtered series.
3. Their filter does a good job passing through stochastic cycles of desired frequencies and blocking
stochastic cycles from unwanted frequencies on a range of processes that are close to being a
random-walk process.
The CF filter obtains its optimality properties at the cost of an additional parameter that must be
estimated and a loss of robustness. The CF filter is optimal for a random-walk process. If the true
process is a random walk with drift, then the drift term must be estimated and removed; see [TS] tsfilter
cf for details. The CF filter is not symmetric, so it will not remove second-order deterministic or
second-order integrated processes. tsfilter cf also implements another filter that Christiano and
Fitzgerald (2003) derived that is an SMA filter with coefficients that sum to zero. This filter is designed
to be as close as possible to the random-walk optimal filter under the constraint that it be an SMA filter with coefficients that sum to zero; see [TS] tsfilter cf for details.
Technical note
A random-walk process is a first-order integrated process; it must be differenced once to produce a stationary process. Formally, a random-walk process is given by y_t = y_{t−1} + ε_t, where ε_t is a zero-mean stationary random variable. A random-walk-plus-drift process is given by ỹ_t = μ + ỹ_{t−1} + ε_t, where ε_t is a zero-mean stationary random variable.
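A quick simulation makes the distinction concrete; the following sketch uses arbitrary values, with eps playing the role of the zero-mean shock ε_t and 0.1 playing the role of the drift μ:

. clear
. set obs 200
. set seed 12345
. generate t = _n
. tsset t
. generate eps = rnormal()
. generate y = sum(eps)          // random walk: y_t = y_{t-1} + eps_t
. generate yd = 0.1*t + y        // random-walk-plus-drift process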
(figure omitted: periodogram of the CF estimate of the business-cycle component)
The periodogram of the CF estimates of the business-cycle component indicates that the CF filter
did a better job than the BK filter of passing through only the desired stochastic cycles. Given that
ip_ln is well approximated by a random-walk-plus-drift process, the relative performance of the CF
filter is not surprising.
As with the BK filter, plotting the gain of the CF filter and the gain of the ideal filter gives an
impression of how well the filter isolates the specified components. In the output below, we reestimate
the business-cycle component, using the gain() option to store the gain of the CF filter, and we plot
the gain functions.
. drop ip_cf
(figure omitted: gain functions of the ideal filter and the CF filter)
Comparing this graph with the graph of the BK gain function reveals that the CF filter is closer to
the gain of the ideal filter than is the BK filter. The graph also reveals that the gain of the CF filter
oscillates above and below 1 for desired frequencies.
The choice between the BK and the CF filters is one between robustness and efficiency. The BK filter
handles a broader class of stochastic processes, but the CF filter produces a better estimate of ct if
yt is close to a random-walk process or a random-walk-plus-drift process.
(figure omitted: periodogram of the HP estimate of the business-cycle component)
Because the HP filter is a high-pass filter, the high-frequency stochastic cycles corresponding to
those periods below 6 remain in the estimated component. Of more concern is the presence of the
low-frequency stochastic cycles that the filter should remove. We address this issue in the example
below.
(figure omitted: periodogram of the HP estimate with the smoothing parameter set to 677.13)
Although the periodogram looks better than the periodogram with the default smoothing, the HP
filter still did not zero out the low-frequency stochastic cycles as well as the CF filter did. We take
another look at this issue by plotting the gain functions for these filters along with the gain function
from the ideal band-pass filter.
(figure omitted: gain functions of the ideal filter and the HP(677) filter)
Comparing the gain graphs reveals that the gain of the CF filter is closest to the gain of the ideal filter. Both the BK and the HP filters allow some low-frequency stochastic cycles to pass through. The plot also illustrates that the HP filter is a high-pass filter because its gain is 1 for stochastic cycles at periods below 6 (frequencies above the cutoff), whereas the other gain functions go to zero.
Technical note
Conventionally, economists have used λ = 1600, which Hodrick and Prescott (1997) recommended for quarterly data. Ravn and Uhlig (2002) derived values for λ at monthly and annual frequencies that are rescalings of the conventional λ = 1600 for quarterly data. These heuristic values are the default values; see [TS] tsfilter hp for details. In the filter literature, filter parameters are set as functions of the cutoff frequency; see Pollock (2000, 324), for instance. This method finds the filter parameter that sets the gain of the filter equal to 1/2 at the cutoff frequency. Applying this method to selecting λ at the cutoff frequency of 32 periods requires solving
$$\frac{1}{2} = \frac{4\lambda\{1 - \cos(2\pi/32)\}^{2}}{1 + 4\lambda\{1 - \cos(2\pi/32)\}^{2}}$$
for λ, which yields 677.13, which was used in the previous example.
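Solving this equation for λ gives λ = 1/[4{1 − cos(2π/32)}²], which can be evaluated directly:

. display 1/(4*(1 - cos(2*_pi/32))^2)

The result is approximately 677.13.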
The gain function of the HP filter is a function of the parameter λ, and λ sets both the location of the cutoff frequency and the slope of the gain function. The graph below illustrates this dependence by plotting the gain function of the HP filter for λ set to 10, 677.13, and 1,600, along with the gain function for the ideal band-pass filter with cutoff periods of 32 periods and 6 periods.
(figure omitted: gain functions of the ideal filter and the HP(10), HP(677), and HP(1600) filters)
tsfilter bw implements the high-pass Butterworth filter using the computational method that
Pollock (2000) derived. This filter has two parameters: the cutoff period and the order of the filter
denoted by m. The cutoff period sets the location where the gain function starts to filter out the
high-period (low-frequency) stochastic cycles, and m sets the slope of the gain function for a given
cutoff period. For a given cutoff period, the slope of the gain function at the cutoff period increases
with m. For a given m, the slope of the gain function at the cutoff period increases with the cutoff
period.
We cannot obtain a vertical slope at the cutoff frequency, which is the ideal, because the computation
becomes unstable; see Pollock (2000). The m for which the computation becomes unstable depends
on the cutoff period.
Pollock (2000) and Gomez (1999) argue that the additional flexibility produced by the additional
parameter makes the high-pass Butterworth filter a better filter than the HP filter for estimating the
cyclical components.
Pollock (2000) shows that the high-pass Butterworth filter can estimate the desired components of the dth difference of a dth-order integrated process as long as m ≥ d.
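The commands behind the gain comparison below are not shown in this excerpt; a sketch of what they might look like, reusing the ideal and f variables generated earlier (the new variable names are hypothetical), is

. tsfilter bw ip_bw2 = ip_ln, gain(bw2 abw2)
. label variable bw2 "BW 2"
. tsfilter bw ip_bw6 = ip_ln, order(6) gain(bw6 abw6)
. label variable bw6 "BW 6"
. twoway line ideal f || line bw6 abw6 || line bw2 abw2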
(figure omitted: gain functions of the ideal filter and the Butterworth filters of orders 2 and 6)
The graph illustrates that the slope of the gain function increases with the order of the filter.
The graph below provides another perspective by plotting the gain function from the ideal band-pass
filter on a graph with plots of the gain functions from the Butterworth filter of order 6, the CF filter,
and the HP(677) filter.
(figure omitted: gain functions of the ideal filter, the CF filter, the Butterworth filter of order 6, and the HP(677) filter)
Although the slope of the gain function from the CF filter is closer to being vertical at the cutoff
frequency, the gain function of the Butterworth filter does not oscillate above and below 1 after it first
reaches the value of 1. The flatness of the Butterworth filter below and above the cutoff frequency is not an accident; it is one of the filter's properties.
(figure omitted: periodogram of the series produced by the two-pass Butterworth process)
The periodogram reveals that the two-pass process has passed the original series ip_ln through
a band-pass filter. It also reveals that the two-pass process did a reasonable job of filtering out the
stochastic cycles corresponding to the unwanted frequencies.
(figure omitted: gain functions of the ideal filter and the BW upper filters of orders 2, 6, and 20)
Because the cutoff period is 6, the gain functions for m = 2 and m = 6 are much flatter than the
gain functions for m = 2 and m = 6 in example 7 when the cutoff period was 32. The gain function
for m = 20 is reasonably close to vertical, so we used it in example 8. We mentioned above that
for any given cutoff period, the computation eventually becomes unstable for larger values of m. For
instance, when the cutoff period is 32, m = 20 is not numerically feasible.
(figure omitted: time-series plots of estimated business-cycle components, 1920q1–2010q1, including the CF filter estimate)
There is a long tradition in economics of using models to estimate components. Instead of comparing
filters by their gain functions, some authors compare filters by finding underlying models for which
the filter parameters are the model parameters. For instance, Harvey and Jaeger (1993), Gomez (1999,
2001), Pollock (2000, 2006), and Harvey and Trimbur (2003) derive models that correspond to the
HP or the Butterworth filter. Some of these references also compare components estimated by filters
with components estimated by making predictions from estimated models. In effect, these references
point out that arima, dfactor, sspace, and ucm (see [TS] arima, [TS] dfactor, [TS] sspace, and [TS] ucm) implement alternative methods of component estimation.
Methods and formulas
The filters implemented in tsfilter are based on an ideal filter that computes the cyclical component as
$$c_t = B(L)\, y_t = \sum_{j=-\infty}^{\infty} b_j\, y_{t-j}$$
where b_j are the coefficients of the impulse–response sequence of some ideal filter. The impulse–response sequence is the inverse Fourier transform of either a square wave or a step function, depending upon whether the filter is a band-pass or a high-pass filter, respectively.
The infinite-order impulse–response sequences for the filters implemented in tsfilter are symmetric and time invariant.
In the frequency domain, the relationships between the true cyclical component and its finite estimate are, respectively,
$$c(\omega) = B(\omega)\, y(\omega)$$
and
$$\widehat{c}(\omega) = \widehat{B}(\omega)\, y(\omega)$$
where B(ω) and B̂(ω) are the frequency transfer functions of the filters B and B̂.
The frequency transfer function for B(ω) can be expressed in polar form as
$$B(\omega) = |B(\omega)|\exp\{i\theta(\omega)\}$$
where |B(ω)| is the filter's gain function and θ(ω) is the filter's phase function. The gain function determines whether the amplitude of the stochastic cycle is increased or decreased at a particular frequency. The phase function determines how a cycle at a particular frequency is shifted forward or backward in time.
In this form, it can be shown that the spectrum of the cyclical component, f_c(ω), is related to the spectrum of the y_t series by the squared gain:
$$f_c(\omega) = |B(\omega)|^{2} f_y(\omega)$$
Each of the four filters in tsfilter has an option for returning an estimate of the gain function together with its associated scaled frequency a = ω/π, where 0 ≤ ω ≤ π. These are consistent estimates of |B(ω)|, the gain from the ideal linear filter.
The band-pass filters implemented in tsfilter, the BK and CF filters, use a square wave as the ideal transfer function:
$$B(\omega) = \begin{cases} 1 & \text{if } |\omega| \in [\omega_l, \omega_h] \\ 0 & \text{if } |\omega| \notin [\omega_l, \omega_h] \end{cases}$$
The high-pass filters, the Hodrick–Prescott and Butterworth filters, use a step function as the ideal transfer function:
$$B(\omega) = \begin{cases} 1 & \text{if } |\omega| \geq \omega_h \\ 0 & \text{if } |\omega| < \omega_h \end{cases}$$
Acknowledgments
We thank Christopher F. Baum of the Department of Economics at Boston College and author of the
Stata Press books An Introduction to Modern Econometrics Using Stata and An Introduction to Stata
Programming for his previous implementations of these filters: Baxter–King (bking), Christiano–Fitzgerald (cfitzrw), Hodrick–Prescott (hprescott), and Butterworth (butterworth).
References
Baxter, M., and R. G. King. 1999. Measuring business cycles: Approximate band-pass filters for economic time series. Review of Economics and Statistics 81: 575–593.
Bianchi, G., and R. Sorrentino. 2007. Electronic Filter Simulation and Design. New York: McGraw–Hill.
Burns, A. F., and W. C. Mitchell. 1946. Measuring Business Cycles. New York: National Bureau of Economic Research.
Butterworth, S. 1930. On the theory of filter amplifiers. Experimental Wireless and the Wireless Engineer 7: 536–541.
Christiano, L. J., and T. J. Fitzgerald. 2003. The band pass filter. International Economic Review 44: 435–465.
Fuller, W. A. 1996. Introduction to Statistical Time Series. 2nd ed. New York: Wiley.
Gomez, V. 1999. Three equivalent methods for filtering finite nonstationary time series. Journal of Business and Economic Statistics 17: 109–116.
———. 2001. The use of Butterworth filters for trend and cycle estimation in economic time series. Journal of Business and Economic Statistics 19: 365–373.
Hamilton, J. D. 1994. Time Series Analysis. Princeton: Princeton University Press.
Harvey, A. C., and A. Jaeger. 1993. Detrending, stylized facts and the business cycle. Journal of Applied Econometrics 8: 231–247.
Harvey, A. C., and T. M. Trimbur. 2003. General model-based filters for extracting cycles and trends in economic time series. The Review of Economics and Statistics 85: 244–255.
Hodrick, R. J., and E. C. Prescott. 1997. Postwar U.S. business cycles: An empirical investigation. Journal of Money, Credit, and Banking 29: 1–16.
King, R. G., and S. T. Rebelo. 1993. Low frequency filtering and real business cycles. Journal of Economic Dynamics and Control 17: 207–231.
Leser, C. E. V. 1961. A simple method of trend construction. Journal of the Royal Statistical Society, Series B 23: 91–107.
Pollock, D. S. G. 1999. A Handbook of Time-Series Analysis, Signal Processing and Dynamics. London: Academic Press.
———. 2000. Trend estimation and de-trending via rational square-wave filters. Journal of Econometrics 99: 317–334.
———. 2006. Econometric methods of signal extraction. Computational Statistics & Data Analysis 50: 2268–2292.
Priestley, M. B. 1981. Spectral Analysis and Time Series. London: Academic Press.
Ravn, M. O., and H. Uhlig. 2002. On adjusting the Hodrick–Prescott filter for the frequency of observations. Review of Economics and Statistics 84: 371–376.
Schmidt, T. J. 1994. sts5: Detrending with the Hodrick–Prescott filter. Stata Technical Bulletin 17: 22–24. Reprinted in Stata Technical Bulletin Reprints, vol. 3, pp. 216–219. College Station, TX: Stata Press.
Wei, W. W. S. 2006. Time Series Analysis: Univariate and Multivariate Methods. 2nd ed. Boston: Pearson.
Also see
[TS] tsset – Declare data to be time-series data
[XT] xtset – Declare data to be panel data
[TS] tssmooth – Smooth and forecast univariate time-series data
Title
tsfilter bk – Baxter–King time-series filter

Syntax   Menu   Description   Options   Remarks and examples
Stored results   Methods and formulas   References   Also see
Syntax

Filter one variable

    tsfilter bk [type] newvar = varname [if] [in] [, options]

options:
    Main
        minperiod(#)
        maxperiod(#)
        smaorder(#)
        stationary
    Trend
        trend(newvar | newvarlist | stub*)
    Gain
        gain(gainvar anglevar)
You must tsset or xtset your data before using tsfilter; see [TS] tsset and [XT] xtset.
varname and varlist may contain time-series operators; see [U] 11.4.4 Time-series varlists.
Menu
Statistics > Time series > Filters for cyclical components > Baxter-King
Description
tsfilter bk uses the Baxter and King (1999) band-pass filter to separate a time series into trend
and cyclical components. The trend component may contain a deterministic or a stochastic trend. The
stationary cyclical component is driven by stochastic cycles at the specified periods.
See [TS] tsfilter for an introduction to the methods implemented in tsfilter bk.
Options
Main
minperiod(#) filters out stochastic cycles at periods smaller than #, where # must be at least 2
and less than maxperiod(). By default, if the units of the time variable are set to daily, weekly,
monthly, quarterly, or half-yearly, then # is set to the number of periods equivalent to 1.5 years;
yearly data use minperiod(2); otherwise, the default value is minperiod(6).
maxperiod(#) filters out stochastic cycles at periods larger than #, where # must be greater than
minperiod(). By default, if the units of the time variable are set to daily, weekly, monthly,
quarterly, half-yearly, or yearly, then # is set to the number of periods equivalent to 8 years;
otherwise, the default value is maxperiod(32).
smaorder(#) sets the order of the symmetric moving average, denoted by q. The order is an integer that specifies the number of observations in each direction used in calculating the symmetric moving average estimate of the cyclical component. This number must be an integer greater than zero and less than (T − 1)/2. The estimate of the cyclical component for the tth observation, y_t, is based upon the 2q + 1 values y_{t−q}, y_{t−q+1}, ..., y_t, y_{t+1}, ..., y_{t+q}. By default, if the units of the time variable are set to daily, weekly, monthly, quarterly, half-yearly, or yearly, then # is set to the equivalent of 3 years; otherwise, the default value is smaorder(12).
stationary modifies the filter calculations to those appropriate for a stationary series. By default,
the series is assumed nonstationary.
Trend
trend(newvar | newvarlist | stub*) saves the trend component(s) in the new variable(s) specified by
newvar, newvarlist, or stub*.
Gain
gain(gainvar anglevar) saves the gain in gainvar and its associated angular frequency in anglevar.
Gains are calculated at the N angular frequencies that uniformly partition the interval (0, ], where
N is the sample size.
Symmetric moving-average (SMA) filters with coefficients that sum to zero remove stochastic and
deterministic trends of first and second order; see Fuller (1996), Baxter and King (1995), and Baxter
and King (1999).
For an infinitely long series, there is an ideal band-pass filter for which the gain function is 1 for ω ∈ [ω_0, ω_1] and 0 for all other frequencies; see [TS] tsfilter for an introduction to gain functions. It just so happens that this ideal band-pass filter is an SMA filter with coefficients that sum to zero.
Baxter and King (1999) derive the coefficients of this ideal band-pass filter and then define the BK
filter to be the SMA filter with 2q + 1 terms that are as close as possible to those of the ideal filter.
There is a trade-off in choosing q : larger values of q cause the gain of the BK filter to be closer to
the gain of the ideal filter, but they also increase the number of missing observations in the filtered
series.
The smaorder() option specifies q . The default value of smaorder() is the number of periods
equivalent to 3 years, following the Baxter and King (1999) recommendation.
Although the mathematics of the frequency-domain approach to time-series analysis is in terms of stochastic cycles at frequencies ω ∈ [−π, π], applied work is generally in terms of periods p, where p = 2π/ω. So tsfilter bk has the minperiod() and maxperiod() options to specify the desired range of stochastic cycles.
Among economists, the BK filter is commonly used for investigating business cycles. Burns and
Mitchell (1946) defined business cycles as stochastic cycles in business data corresponding to periods
between 1.5 and 8 years. The default values for minperiod() and maxperiod() are the Burns–Mitchell values of 1.5 and 8 years, scaled to the frequency of the dataset. The calculations of
the default values assume that the time variable is formatted as daily, weekly, monthly, quarterly,
half-yearly, or yearly; see [D] format.
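For quarterly data, for instance, these defaults work out to minperiod(6), maxperiod(32), and smaorder(12), because there are 4 quarters per year:

. display 1.5*4 "  " 8*4 "  " 3*4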
For each variable, the band-pass BK filter estimate of c_t is put in the corresponding new variable, and when the trend() option is specified, the estimate of τ_t is put in the corresponding new variable.
tsfilter bk automatically detects panel data from the information provided when the dataset was
tsset or xtset. All calculations are done separately on each panel. Missing values at the beginning
and end of the sample are excluded from the sample. The sample may not contain gaps.
Baxter and King (1999) derived their method for nonstationary time series, but they noted that
a small modification makes it applicable to stationary time series. Imposing the condition that the
filter coefficients sum to zero is what makes their method applicable to nonstationary time series;
dropping this condition yields a filter for stationary time series. Specifying the stationary option
causes tsfilter bk to use coefficients calculated without the constraint that they sum to zero.
. use http://www.stata-press.com/data/r13/gdp2
(Federal Reserve Economic Data, St. Louis Fed)
. tsline gdp_ln
(figure omitted: time-series plot of gdp_ln, 1950q1–2010q1)
(figure omitted: periodogram of the BK estimate of the business-cycle component of gdp_ln)
Because our sample is of quarterly data, tsfilter bk used the default values of minperiod(6),
maxperiod(32), and smaorder(12). The minimum and maximum periods are the Burns and
Mitchell (1946) business-cycle periods for quarterly data. The default of smaorder(12) was recommended by Baxter and King (1999) for quarterly data.
In the periodogram, we added vertical lines at the natural frequencies corresponding to the
conventional Burns and Mitchell (1946) values for business-cycle components. pergram displays the results in natural frequencies, which are the standard frequencies divided by 2π. We use the xline() option to draw vertical lines at the lower natural-frequency cutoff (1/32 = 0.03125) and the upper natural-frequency cutoff (1/6 ≈ 0.16667).
If the filter completely removed the stochastic cycles at the unwanted frequencies, the periodogram would be a flat line at the minimum value of −6.00 outside the range identified by the vertical lines. The periodogram reveals that the default value of smaorder(12) did not do a good job of filtering out the high-periodicity stochastic cycles, because there are too many points above −6.00 to the left of the left-hand vertical line. It also reveals that the filter did not remove enough low-periodicity stochastic cycles, because there are too many points above −6.00 to the right of the right-hand vertical line.
We address these problems in the next example.
(figure omitted: gain functions of the ideal filter and the BK filters with SMA-order 12 and SMA-order 20)
As discussed in [TS] tsfilter, the gain function of the ideal filter is a square wave with a value of 0
at the frequencies corresponding to unwanted frequencies and a value of 1 at the desired frequencies.
The vertical lines in the gain function of the ideal filter occur at the frequencies π/16, corresponding to 32 periods, and at π/3, corresponding to 6 periods. (Given that p = 2π/ω, where p is the period corresponding to the frequency ω, the frequency is given by ω = 2π/p.)
The difference between the gain function of the filter with SMA-order 12 and the gain function of the ideal band-pass filter is the root of the issues mentioned at the end of example 1. The filter
with SMA-order 20 is closer to the gain function of the ideal band-pass filter at the cost of 16 more
missing values in the filtered series.
Below we compute and graph the periodogram of the series filtered with SMA-order 20.
(figure omitted: periodogram of the business-cycle component estimated with SMA-order 20)
The above periodogram indicates that the filter of SMA-order 20 removed more of the stochastic cycles
at the unwanted periodicities than did the filter of SMA-order 12. Whether removing the stochastic
cycles at the unwanted periodicities is worth losing more observations in the filtered series is a
judgment call.
Below we plot the estimated business-cycle component with recessions identified by the shaded
areas.
(figure omitted: estimated business-cycle component of gdp_ln with recessions shaded, 1957q3–2005q3)
Stored results
tsfilter bk stores the following in r():

Scalars
    r(smaorder)      order of the SMA filter
    r(minperiod)     minimum period of stochastic cycles
    r(maxperiod)     maximum period of stochastic cycles

Macros
    r(varlist)       original time-series variables
    r(filterlist)    variables containing estimates of the cyclical components
    r(trendlist)     variables containing estimates of the trend components, if trend() was specified
    r(method)        Baxter-King
    r(stationary)    assumption about whether the series is stationary
    r(unit)          units of time variable set using tsset or xtset

Matrices
    r(filter)        (q+1)×1 matrix of the filter weights, where q is the order of the SMA filter
Methods and formulas
The BK filter approximates the ideal band-pass filter, which computes the cyclical component as
$$c_t = \sum_{j=-\infty}^{\infty} b_j\, y_{t-j}$$
Letting p_l and p_h be the minimum and maximum periods of the stochastic cycles of interest, the weights b_j in this calculation are given by
$$b_j = \begin{cases} \pi^{-1}(\omega_h - \omega_l) & \text{if } j = 0 \\ (\pi j)^{-1}\{\sin(\omega_h j) - \sin(\omega_l j)\} & \text{if } j \neq 0 \end{cases}$$
where ω_l = 2π/p_h and ω_h = 2π/p_l are the lower and higher cutoff frequencies, respectively.
For the default case of nonstationary time series with finite length, the ideal band-pass filter cannot
be used without modification. Baxter and King (1999) derived modified weights for a finite order
SMA filter with coefficients that sum to zero.
As a result, Baxter and King (1999) estimate c_t by
$$c_t = \sum_{j=-q}^{+q} \widehat{b}_j\, y_{t-j}$$
The coefficients b̂_j in this calculation are equal to b̂_j = b_j − b̄_q, where b̂_{−j} = b̂_j and b̄_q is the mean of the ideal coefficients truncated at ±q:
$$\overline{b}_q = (2q+1)^{-1} \sum_{j=-q}^{q} b_j$$
Note that Σ_{j=−q}^{+q} b̂_j = 0 and that the first and last q values of the cyclical component cannot be estimated using this filter.
If the stationary option is set, the BK filter sets the coefficients to the ideal coefficients, that is, b̂_j = b_j. For these weights, b̂_j = b̂_{−j}, and although Σ_{j=−∞}^{∞} b̂_j = 0, for small q, Σ_{j=−q}^{q} b̂_j ≠ 0.
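As a check on these formulas, the weights can be computed directly in Mata; here is a minimal sketch for the quarterly defaults (q = 12, p_l = 6, p_h = 32; the names are ours, not those used internally by tsfilter bk):

. mata:
: q  = 12
: wl = 2*pi()/32                             // lower cutoff, 2*pi/p_h
: wh = 2*pi()/6                              // higher cutoff, 2*pi/p_l
: j  = 1::q
: b  = (sin(wh*j) - sin(wl*j)) :/ (pi()*j)   // ideal weights b_1,...,b_q
: b0 = (wh - wl)/pi()                        // ideal weight b_0
: bbar = (b0 + 2*sum(b))/(2*q + 1)           // mean of the truncated ideal weights
: bhat = (b0 - bbar) \ (b :- bbar)           // BK weights b_0,...,b_q
: bhat[1] + 2*sum(bhat[2::(q+1)])            // sums to zero, as claimed
: end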
References
Baxter, M., and R. G. King. 1995. Measuring business cycles approximate band-pass filters for economic time series. NBER Working Paper No. 5022, National Bureau of Economic Research. http://www.nber.org/papers/w5022.
———. 1999. Measuring business cycles: Approximate band-pass filters for economic time series. Review of Economics and Statistics 81: 575–593.
Burns, A. F., and W. C. Mitchell. 1946. Measuring Business Cycles. New York: National Bureau of Economic Research.
Fuller, W. A. 1996. Introduction to Statistical Time Series. 2nd ed. New York: Wiley.
Pollock, D. S. G. 1999. A Handbook of Time-Series Analysis, Signal Processing and Dynamics. London: Academic Press.
———. 2006. Econometric methods of signal extraction. Computational Statistics & Data Analysis 50: 2268–2292.
Also see
[TS] tsset – Declare data to be time-series data
[XT] xtset – Declare data to be panel data
[TS] tsfilter – Filter a time-series, keeping only selected periodicities
[D] format – Set variables' output format
[TS] tssmooth – Smooth and forecast univariate time-series data
Title
tsfilter bw – Butterworth time-series filter

Syntax   Menu   Description   Options   Remarks and examples
Stored results   Methods and formulas   References   Also see
Syntax

Filter one variable

    tsfilter bw [type] newvar = varname [if] [in] [, options]

options:
    Main
        maxperiod(#)
        order(#)
    Trend
        trend(newvar | newvarlist | stub*)
    Gain
        gain(gainvar anglevar)
You must tsset or xtset your data before using tsfilter; see [TS] tsset and [XT] xtset.
varname and varlist may contain time-series operators; see [U] 11.4.4 Time-series varlists.
Menu
Statistics > Time series > Filters for cyclical components > Butterworth
Description
tsfilter bw uses the Butterworth high-pass filter to separate a time series into trend and cyclical
components. The trend component may contain a deterministic or a stochastic trend. The stationary
cyclical component is driven by stochastic cycles at the specified periods.
See [TS] tsfilter for an introduction to the methods implemented in tsfilter bw.
Options
Main
maxperiod(#) filters out stochastic cycles at periods larger than #, where # must be greater than 2.
By default, if the units of the time variable are set to daily, weekly, monthly, quarterly, half-yearly,
or yearly, then # is set to the number of periods equivalent to 8 years; otherwise, the default value
is maxperiod(32).
order(#) sets the order of the Butterworth filter, which must be an integer. The default is order(2).
Trend
trend(newvar | newvarlist | stub*) saves the trend component(s) in the new variable(s) specified by
newvar, newvarlist, or stub*.
Gain
gain(gainvar anglevar) saves the gain in gainvar and its associated angular frequency in anglevar.
Gains are calculated at the N angular frequencies that uniformly partition the interval (0, ], where
N is the sample size.
We cannot obtain a vertical slope at the cutoff frequency, which is the ideal, because the computation
becomes unstable; see Pollock (2000). The filter order for which the computation becomes unstable
depends on the cutoff period.
Among economists, the high-pass Butterworth filter is commonly used for investigating business
cycles. Burns and Mitchell (1946) defined business cycles as stochastic cycles in business data
corresponding to periods between 1.5 and 8 years. For this reason, the default value for maxperiod()
is the number of periods in 8 years, if the time variable is formatted as daily, weekly, monthly,
quarterly, half-yearly, or yearly; see [D] format. The default value for maxperiod() is 32 for all
other time formats.
For each variable, the high-pass Butterworth filter estimate of c_t is put in the corresponding new variable, and when the trend() option is specified, the estimate of τ_t is put in the corresponding new variable.
tsfilter bw automatically detects panel data from the information provided when the dataset was
tsset or xtset. All calculations are done separately on each panel. Missing values at the beginning
and end of the sample are excluded from the sample. The sample may not contain gaps.
. use http://www.stata-press.com/data/r13/gdp2
(Federal Reserve Economic Data, St. Louis Fed)
. tsline gdp_ln
(figure omitted: time-series plot of gdp_ln, 1950q1–2010q1)
The series is nonstationary. Pollock (2000) shows that the high-pass Butterworth filter can estimate
the components driven by the stochastic cycles at the specified frequencies when the original series
is nonstationary.
Below we use tsfilter bw to filter gdp_ln and use pergram (see [TS] pergram) to compute and to plot the periodogram of the estimated cyclical component.
. tsfilter bw gdp_bw = gdp_ln
. pergram gdp_bw, xline(.03125 .16667)
(figure omitted: periodogram of gdp_bw)
tsfilter bw used the default value of maxperiod(32) because our sample is of quarterly data. In
the periodogram, we added vertical lines at the natural frequencies corresponding to the conventional
Burns and Mitchell (1946) values for business-cycle components. pergram displays the results in natural frequencies, which are the standard frequencies divided by 2π. We use option xline() to draw vertical lines at the lower natural-frequency cutoff (1/32 = 0.03125) and the upper natural-frequency cutoff (1/6 ≈ 0.16667).
If the filter completely removed the stochastic cycles at the unwanted frequencies, the periodogram would be a flat line at the minimum value of −6.00 outside the range identified by the vertical lines.
The periodogram reveals two issues. First, it indicates that the default value of order(2) did not do a good job of filtering out the high-periodicity stochastic cycles, because there are too many points above −6.00 to the left of the left-hand vertical line. Second, it reveals the high-pass nature of the filter, because none of the low-period (high-frequency) stochastic cycles have been filtered out.
We cope with these two issues in the remaining examples.
(figure omitted: gain functions of the ideal filter and the Butterworth filters of orders 2 and 8)
As discussed in [TS] tsfilter, the gain function of the ideal filter is a square wave with a value of 0
at the frequencies corresponding to unwanted frequencies and a value of 1 at the desired frequencies.
The vertical lines in the gain function of the ideal filter occur at π/16, corresponding to 32 periods, and at π/3, corresponding to 6 periods. (Given that p = 2π/ω, where p is the period corresponding to frequency ω, the frequency is given by ω = 2π/p.)
The distance between the gain function of the filter with order 2 and the gain function of the ideal band-pass filter at π/16 is the root of the first issue mentioned at the end of example 1. The filter with order 8 is much closer to the gain function of the ideal band-pass filter at π/16 than is the filter with order 2. That both gain functions are 1 to the right of the vertical line at π/3 reveals the high-pass nature of the filter.
(figure omitted: gain functions of the ideal filter and the Butterworth filters of orders 8 and 15)
We specified much higher orders for the filter in this example because the cutoff period is 6 instead
of 32. (As previously mentioned, holding the order of the filter constant, the slope of the gain function
at the cutoff period decreases when the period decreases.) The above graph indicates that the filter
with order(15) is reasonably close to the gain function of the ideal filter.
Now we compute and plot the periodogram of the estimated business-cycle component.
(figure omitted: periodogram of the estimated business-cycle component)
The graph indicates that the above applications of the Butterworth filter did a reasonable job of
filtering out the high-periodicity stochastic cycles but that the low-periodicity stochastic cycles have
not been completely removed.
Below we plot the estimated business-cycle component with recessions identified by the shaded
areas.
(figure omitted: estimated business-cycle component of gdp_ln with recessions shaded, 1950q1–2010q1)
Stored results
tsfilter bw stores the following in r():

Scalars
    r(order)         order of the Butterworth filter
    r(maxperiod)     maximum period of stochastic cycles

Macros
    r(varlist)       original time-series variables
    r(filterlist)    variables containing estimates of the cyclical components
    r(trendlist)     variables containing estimates of the trend components, if trend() was specified
    r(method)        Butterworth
    r(unit)          units of time variable set using tsset or xtset
"
() = 1 +
tan(c /2)
tan(/2)
2m #1
where m is the order of the filter, c = 2/ph is the cutoff frequency, and ph is the maximum
period.
Here is an outline of the computational procedure that Pollock (2000) derived.
Pollock (2000) showed that the Butterworth filter corresponds to a particular model. Actually, his
model is more general than the Butterworth filter, but tsfilter bw restricts the computations to the
case in which the model corresponds to the Butterworth filter.
The model represents the series to be filtered, y_t, in terms of zero-mean, covariance-stationary, and independent and identically distributed shocks ν_t and ε_t:
$$y_t = \frac{(1+L)^m}{(1-L)^m}\,\nu_t + \varepsilon_t$$
From this model, Pollock (2000) shows that the optimal estimate for the cyclical component is given by
$$c = \lambda\, Q(\Lambda_L + \lambda \Lambda_H)^{-1} Q' y$$
where Var{Q′(y − c)} = σ_ν² Λ_L and Var{Q′c} = σ_ε² Λ_H. Here Λ_L and Λ_H are symmetric Toeplitz matrices with 2m + 1 nonzero diagonal bands and generating functions (1 + z)^m (1 + z^{−1})^m and (1 − z)^m (1 − z^{−1})^m, respectively.
The parameter λ in this expression is a function of p_h (the maximum period of stochastic cycles filtered out) and the order of the filter:
$$\lambda = \{\tan(\pi/p_h)\}^{-2m}$$
The matrix Q′ in this expression is a function of the coefficients in the polynomial (1 − L)^d = 1 + δ_1 L + ⋯ + δ_d L^d:
$$Q' = \begin{pmatrix}
\delta_d & \cdots & \delta_1 & 1 & 0 & \cdots & \cdots & \cdots & 0 \\
0 & \delta_d & \cdots & \delta_1 & 1 & 0 & \cdots & \cdots & 0 \\
\vdots & \ddots & \ddots & \ddots & \ddots & \ddots & \ddots & & \vdots \\
0 & \cdots & \cdots & 0 & \delta_d & \cdots & \delta_1 & 1 & 0 \\
0 & \cdots & \cdots & \cdots & 0 & \delta_d & \cdots & \delta_1 & 1
\end{pmatrix}_{(T-d)\times T}$$
It can be shown that Λ_H = Q′Q and Λ_L = |Λ_H|, which simplifies the calculation of the cyclical component to
$$c = \lambda\, Q\{\,|Q'Q| + \lambda\,(Q'Q)\,\}^{-1} Q' y$$
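To make the computational outline concrete, here is a minimal Mata sketch of this last formula for m = d = 2 on a simulated random walk (the names are ours; the internal implementation of tsfilter bw may differ):

. mata:
: T  = 100
: y  = runningsum(rnormal(T, 1, 0, 1))   // simulated random walk
: m  = 2
: ph = 32                                // maximum period (cutoff period)
: lambda = (tan(pi()/ph))^(-2*m)
: // each row of Q' holds the coefficients of (1-L)^2 = 1 - 2L + L^2
: Qp = J(T-m, T, 0)
: for (i=1; i<=T-m; i++) Qp[i, (i..(i+m))] = (1, -2, 1)
: H = Qp*Qp'                             // Lambda_H = Q'Q
: c = lambda*Qp'*luinv(abs(H) + lambda*H)*Qp*y
: tau = y - c                            // trend component
: end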
References
Bianchi, G., and R. Sorrentino. 2007. Electronic Filter Simulation and Design. New York: McGraw–Hill.
Burns, A. F., and W. C. Mitchell. 1946. Measuring Business Cycles. New York: National Bureau of Economic Research.
Butterworth, S. 1930. On the theory of filter amplifiers. Experimental Wireless and the Wireless Engineer 7: 536–541.
Pollock, D. S. G. 1999. A Handbook of Time-Series Analysis, Signal Processing and Dynamics. London: Academic Press.
———. 2000. Trend estimation and de-trending via rational square-wave filters. Journal of Econometrics 99: 317–334.
———. 2006. Econometric methods of signal extraction. Computational Statistics & Data Analysis 50: 2268–2292.
Also see
[TS] tsset – Declare data to be time-series data
[XT] xtset – Declare data to be panel data
[TS] tsfilter – Filter a time-series, keeping only selected periodicities
[D] format – Set variables' output format
[TS] tssmooth – Smooth and forecast univariate time-series data
Title
tsfilter cf – Christiano–Fitzgerald time-series filter

Syntax   Menu   Description   Options   Remarks and examples
Stored results   Methods and formulas   References   Also see
Syntax

Filter one variable

    tsfilter cf [type] newvar = varname [if] [in] [, options]

options:
    Main
        minperiod(#)
        maxperiod(#)
        smaorder(#)
        stationary
        drift
    Trend
        trend(newvar | newvarlist | stub*)
    Gain
        gain(gainvar anglevar)
You must tsset or xtset your data before using tsfilter; see [TS] tsset and [XT] xtset.
varname and varlist may contain time-series operators; see [U] 11.4.4 Time-series varlists.
Menu
Statistics > Time series > Filters for cyclical components > Christiano-Fitzgerald
Description
tsfilter cf uses the Christiano and Fitzgerald (2003) band-pass filter to separate a time series
into trend and cyclical components. The trend component may contain a deterministic or a stochastic
trend. The stationary cyclical component is driven by stochastic cycles at the specified periods.
See [TS] tsfilter for an introduction to the methods implemented in tsfilter cf.
Options
Main
minperiod(#) filters out stochastic cycles at periods smaller than #, where # must be at least 2
and less than maxperiod(). By default, if the units of the time variable are set to daily, weekly,
monthly, quarterly, or half-yearly, then # is set to the number of periods equivalent to 1.5 years;
yearly data use minperiod(2); otherwise, the default value is minperiod(6).
maxperiod(#) filters out stochastic cycles at periods larger than #, where # must be greater than
minperiod(). By default, if the units of the time variable are set to daily, weekly, monthly,
quarterly, half-yearly, or yearly, then # is set to the number of periods equivalent to 8 years;
otherwise, the default value is maxperiod(32).
smaorder(#) sets the order of the symmetric moving average, denoted by q. By default, smaorder() is not set, which invokes the asymmetric calculations for the Christiano–Fitzgerald filter. The order is an integer that specifies the number of observations in each direction used in calculating the symmetric moving average estimate of the cyclical component. This number must be an integer greater than zero and less than (T − 1)/2. The estimate of the cyclical component for the tth observation, y_t, is based upon the 2q + 1 values y_{t−q}, y_{t−q+1}, ..., y_t, y_{t+1}, ..., y_{t+q}.
stationary modifies the filter calculations to those appropriate for a stationary series. By default,
the series is assumed nonstationary.
drift removes drift using the approach described in Christiano and Fitzgerald (2003). By default,
drift is not removed.
Trend
trend(newvar | newvarlist | stub*) saves the trend component(s) in the new variable(s) specified by
newvar, newvarlist, or stub*.
Gain
gain(gainvar anglevar) saves the gain in gainvar and its associated angular frequency in anglevar.
Gains are calculated at the N angular frequencies that uniformly partition the interval (0, ], where
N is the sample size.
Although the CF band-pass filter implemented in tsfilter cf has been widely applied by
macroeconomists, it is a general time-series method and may be of interest to other researchers.
As discussed by Christiano and Fitzgerald (2003) and in [TS] tsfilter, if one had an infinitely long
series, one could apply an ideal band-pass filter that perfectly separates out cyclical components driven
by stochastic cycles at the specified periodicities. In finite samples, it is not possible to exactly satisfy
the conditions that a filter must fulfill to perfectly separate out the specified stochastic cycles; the
expansive filter literature reflects the trade-offs involved in choosing a finite-length filter to separate
out the specified stochastic cycles.
Christiano and Fitzgerald (2003) derive a finite-length CF band-pass filter that minimizes the mean
squared error between the filtered series and the series filtered by an ideal band-pass filter that perfectly
separates out components driven by stochastic cycles at the specified periodicities. Christiano and
Fitzgerald (2003) place two important restrictions on the mean squared error problem that their filter
solves. First, the CF filter is restricted to be a linear filter. Second, yt is assumed to be a random-walk
process; in other words, y_t = y_{t−1} + ε_t, where ε_t is independently and identically distributed with mean zero and finite variance. The CF filter is the best linear predictor of the series filtered by the ideal band-pass filter when y_t is a random walk.
Christiano and Fitzgerald (2003) make four points in support of the random-walk assumption.
First, the mean squared error problem solved by their filter requires that the process for yt be
specified. Second, they provide a method for removing drift so that their filter handles cases in
which yt is a random walk with drift. Third, many economic time series are well approximated by a
random-walk-plus-drift process. (We add that many time series encountered in applied statistics are
well approximated by a random-walk-plus-drift process.) Fourth, they provide simulation evidence
that their filter performs well when the process generating yt is not a random-walk-plus-drift process
but is close to being a random-walk-plus-drift process.
Comparing the CF filter with the Baxter–King (BK) filter provides some intuition and explains the smaorder() option in tsfilter cf. As discussed in [TS] tsfilter and Baxter and King (1999),
symmetric moving-average (SMA) filters with coefficients that sum to zero can extract the components
driven by stochastic cycles at specified periodicities when the series to be filtered has a deterministic
or stochastic trend of order 1 or 2.
The coefficients of the finite-length BK filter are as close as possible to the coefficients of an ideal
SMA band-pass filter under the constraints that the BK coefficients are symmetric and sum to zero.
The coefficients of the CF filter are not symmetric nor do they sum to zero, but the CF filter was
designed to filter out the specified periodicities when yt has a first-order stochastic trend.
To be robust to second-order trends, Christiano and Fitzgerald (2003) derive a constrained version
of the CF filter. The coefficients of the constrained filter are constrained to be symmetric and to
sum to zero. Subject to these constraints, the coefficients of the constrained CF filter minimize the
mean squared error between the filtered series and the series filtered by an ideal band-pass filter that
perfectly separates out the components. Christiano and Fitzgerald (2003) note that the higher-order
detrending properties of this constrained filter come at the cost of lost efficiency. If the constraints
are binding, the constrained filter cannot predict the series filtered by the ideal filter as well as the
unconstrained filter can.
Specifying the smaorder() option causes tsfilter cf to compute the SMA-constrained CF filter.
The choice between the BK and the CF filters is one between robustness and efficiency. The BK
filter handles a broader class of stochastic processes than does the CF filter, but the CF filter produces
a better estimate of ct if yt is close to a random-walk process or a random-walk-plus-drift process.
Among economists, the CF filter is commonly used for investigating business cycles. Burns and
Mitchell (1946) defined business cycles as stochastic cycles in business data corresponding to periods
between 1.5 and 8 years. The default values for minperiod() and maxperiod() are the Burns–Mitchell values of 1.5 and 8 years scaled to the frequency of the dataset. The calculations of the default
values assume that the time variable is formatted as daily, weekly, monthly, quarterly, half-yearly, or
yearly; see [D] format.
When y_t is assumed to be a random-walk-plus-drift process instead of a random-walk process, specify the drift option, which removes the linear drift in the series before applying the filter. Drift is removed by transforming the original series to a new series by using the calculation
$$z_t = y_t - \frac{(t-1)(y_T - y_1)}{T-1}$$
The cyclical component c_t is calculated from the drift-adjusted series z_t. The trend component τ_t is calculated by τ_t = y_t − c_t.
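For a single, gap-free series sorted by time, this adjustment is a one-line generate; a sketch with a hypothetical variable y:

. generate z = y - (_n - 1)*(y[_N] - y[1])/(_N - 1)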
By default, the CF filter assumes the series is nonstationary. If the series is stationary, the
stationary option is used to change the calculations to those appropriate for a stationary series.
For each variable, the CF filter estimate of c_t is put in the corresponding new variable, and when the trend() option is specified, the estimate of τ_t is put in the corresponding new variable.
tsfilter cf automatically detects panel data from the information provided when the dataset was
tsset or xtset. All calculations are done separately on each panel. Missing values at the beginning
and end of the sample are excluded from the sample. The sample may not contain gaps.
. tsline gdp_ln
(figure omitted: time-series plot of gdp_ln, 1950q1–2010q1)
The series looks like it might be generated by a random-walk-plus-drift process and is thus a
candidate for the CF filter.
Below we use tsfilter cf to filter gdp_ln, and we use pergram (see [TS] pergram) to compute and to plot the periodogram of the estimated cyclical component.
(figure omitted: periodogram of the CF estimate of the business-cycle component)
Because our sample is of quarterly data, tsfilter cf used the default values of minperiod(6)
and maxperiod(32). The minimum and maximum periods are the Burns and Mitchell (1946)
business-cycle periods for quarterly data.
In the periodogram, we added vertical lines at the natural frequencies corresponding to the
conventional Burns and Mitchell (1946) values for business-cycle components. pergram displays the
results in natural frequencies, which are the standard frequencies divided by 2π. We use the xline() option to draw vertical lines at the lower natural-frequency cutoff (1/32 = 0.03125) and the upper natural-frequency cutoff (1/6 ≈ 0.16667).
If the filter completely removed the stochastic cycles at the unwanted frequencies, the periodogram would be a flat line at the minimum value of −6.00 outside the range identified by the vertical lines. The periodogram reveals that the CF filter did a reasonable job of filtering out the unwanted stochastic cycles.
Below we plot the estimated business-cycle component with recessions identified by the shaded
areas.
(figure omitted: estimated business-cycle component of gdp_ln with recessions shaded, 1950q1–2010q1)
Stored results
tsfilter cf stores the following in r():

Scalars
    r(smaorder)      order of the SMA filter, if smaorder() was specified
    r(minperiod)     minimum period of stochastic cycles
    r(maxperiod)     maximum period of stochastic cycles

Macros
    r(varlist)       original time-series variables
    r(filterlist)    variables containing estimates of the cyclical components
    r(trendlist)     variables containing estimates of the trend components, if trend() was specified
    r(method)        Christiano-Fitzgerald
    r(symmetric)     whether the symmetric version of the filter was used
    r(drift)         whether drift was removed
    r(stationary)    assumption about whether the series is stationary
    r(unit)          units of time variable set using tsset or xtset

Matrices
    r(filter)        (q+1)×1 matrix of the weights of the symmetric moving average, where the weights are the Christiano–Fitzgerald coefficients; only returned when smaorder() is used to set q
Methods and formulas
For an infinitely long series, the ideal band-pass filter computes the cyclical component as
$$c_t = \sum_{j=-\infty}^{\infty} b_j\, y_{t-j}$$
If p_l and p_h are the minimum and maximum periods of the stochastic cycles of interest, the weights b_j in the ideal band-pass filter are given by
$$b_j = \begin{cases} \pi^{-1}(\omega_h - \omega_l) & \text{if } j = 0 \\ (\pi j)^{-1}\{\sin(\omega_h j) - \sin(\omega_l j)\} & \text{if } j \neq 0 \end{cases}$$
where ω_l = 2π/p_h and ω_h = 2π/p_l are the lower and higher cutoff frequencies, respectively.
Because our time series has finite length, the ideal band-pass filter cannot be computed exactly. Christiano and Fitzgerald (2003) derive the finite-length CF band-pass filter that minimizes the mean squared error between the filtered series and the series filtered by an ideal band-pass filter that perfectly separates out the components. This filter is not symmetric, nor do the coefficients sum to zero. The formula for calculating the value of cyclical component c_t for t = 2, 3, ..., T − 1 using the asymmetric version of the CF filter can be expressed as
$$c_t = b_0 y_t + \sum_{j=1}^{T-t-1} b_j\, y_{t+j} + \widetilde{b}_{T-t}\, y_T + \sum_{j=1}^{t-2} b_j\, y_{t-j} + \widetilde{b}_{t-1}\, y_1$$
The endpoint values are given by
$$c_1 = \frac{1}{2} b_0 y_1 + \sum_{j=1}^{T-2} b_j\, y_{j+1} + \widetilde{b}_{T-1}\, y_T$$
and
$$c_T = \frac{1}{2} b_0 y_T + \sum_{j=1}^{T-2} b_j\, y_{T-j} + \widetilde{b}_{T-1}\, y_1$$
When the stationary option is used to invoke the stationary calculations, all weights are set to the ideal filter weight, that is, b̃_j = b_j.
If the smaorder() option is set, the symmetric version of the CF filter is used. This option specifies the length of the symmetric moving average, denoted by q. The symmetric calculations for c_t are similar to those used by the BK filter:
$$c_t = \widehat{b}_q\, y_{t+q} + \sum_{j=-q+1}^{q-1} b_j\, y_{t-j} + \widehat{b}_q\, y_{t-q}$$
where, for the default nonstationary calculations, $\widehat{b}_q = -\frac{1}{2}\, b_0 - \sum_{j=1}^{q-1} b_j$. If the smaorder() and stationary options are set, then b̂_q is set equal to the ideal weight b_q.
References
Baxter, M., and R. G. King. 1999. Measuring business cycles: Approximate band-pass filters for economic time series. Review of Economics and Statistics 81: 575–593.
Burns, A. F., and W. C. Mitchell. 1946. Measuring Business Cycles. New York: National Bureau of Economic Research.
Christiano, L. J., and T. J. Fitzgerald. 2003. The band pass filter. International Economic Review 44: 435–465.
Pollock, D. S. G. 1999. A Handbook of Time-Series Analysis, Signal Processing and Dynamics. London: Academic Press.
———. 2006. Econometric methods of signal extraction. Computational Statistics & Data Analysis 50: 2268–2292.
Also see
[TS] tsset – Declare data to be time-series data
[XT] xtset – Declare data to be panel data
[TS] tsfilter – Filter a time-series, keeping only selected periodicities
[D] format – Set variables' output format
[TS] tssmooth – Smooth and forecast univariate time-series data
Title
tsfilter hp – Hodrick–Prescott time-series filter

Syntax   Menu   Description   Options   Remarks and examples
Stored results   Methods and formulas   References   Also see
Syntax

Filter one variable

    tsfilter hp [type] newvar = varname [if] [in] [, options]

options:
    Main
        smooth(#)
    Trend
        trend(newvar | newvarlist | stub*)
    Gain
        gain(gainvar anglevar)
You must tsset or xtset your data before using tsfilter; see [TS] tsset and [XT] xtset.
varname and varlist may contain time-series operators; see [U] 11.4.4 Time-series varlists.
Menu
Statistics > Time series > Filters for cyclical components > Hodrick-Prescott
Description
tsfilter hp uses the Hodrick–Prescott high-pass filter to separate a time series into trend and
cyclical components. The trend component may contain a deterministic or a stochastic trend. The
smoothing parameter determines the periods of the stochastic cycles that drive the stationary cyclical
component.
See [TS] tsfilter for an introduction to the methods implemented in tsfilter hp.
Options
Main
smooth(#) sets the smoothing parameter for the Hodrick–Prescott filter. By default, if the units of the time variable are set to daily, weekly, monthly, quarterly, half-yearly, or yearly, then the Ravn–Uhlig rule is used to set the smoothing parameter; otherwise, the default value is smooth(1600). The Ravn–Uhlig rule sets # to 1600·p_q^4, where p_q is the number of periods per quarter. The smoothing parameter must be greater than 0.
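For example, monthly data have 3 periods per quarter and yearly data have 1/4 period per quarter, which reproduces the default values reported in Methods and formulas:

. display 1600*3^4        // monthly data: 129,600
. display 1600*(1/4)^4    // yearly data: 6.25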
Trend
trend(newvar | newvarlist | stub*) saves the trend component(s) in the new variable(s) specified by
newvar, newvarlist, or stub*.
Gain
gain(gainvar anglevar) saves the gain in gainvar and its associated angular frequency in anglevar.
Gains are calculated at the N angular frequencies that uniformly partition the interval (0, ], where
N is the sample size.
. use http://www.stata-press.com/data/r13/gdp2
(Federal Reserve Economic Data, St. Louis Fed)
. tsline gdp_ln
(figure omitted: time-series plot of gdp_ln, 1950q1–2010q1)
Because our sample is of quarterly data, tsfilter hp used the default value for the smoothing
parameter of 1,600.
In the periodogram, we added vertical lines at the natural frequencies corresponding to the
conventional Burns and Mitchell (1946) values for business-cycle components of 32 periods and
6 periods. pergram displays the results in natural frequencies, which are the standard frequencies divided by 2π. We use the xline() option to draw vertical lines at the lower natural-frequency cutoff (1/32 = 0.03125) and the upper natural-frequency cutoff (1/6 ≈ 0.16667).
If the filter completely removed the stochastic cycles at the unwanted frequencies, the periodogram would be a flat line at the minimum value of −6.00 outside the range identified by the vertical lines.
(figure omitted: periodogram of the HP estimate of the business-cycle component)
The periodogram reveals a high-periodicity issue and a low-periodicity issue. The points above −6.00 to the left of the left-hand vertical line in the periodogram reveal that the filter did not do a good job of filtering out the high-periodicity stochastic cycles with the default smoothing-parameter value of 1,600. That there is no tendency of the points to the right of the right-hand vertical line to be smoothed toward −6.00 reveals that the HP filter did not remove any of the low-periodicity stochastic cycles. This result is not surprising, because the HP filter is a high-pass filter.
In the next example, we address the high-periodicity issue. See [TS] tsfilter and [TS] tsfilter bw
for how to turn a high-pass filter into a band-pass filter.
(figure omitted: periodogram of the HP estimate with the smoothing parameter set to 677.13)
A comparison of the two periodograms reveals that setting the smoothing parameter to 677.13
removes more of the high-periodicity stochastic cycles than does the default 1,600. In [TS] tsfilter,
we found that the HP filter was not as good at removing the high-periodicity stochastic cycles as
was the ChristianoFitzgerald filter implemented in tsfilter cf or as was the Butterworth filter
implemented in tsfilter bw.
Below we plot the estimated business-cycle component with recessions identified by the shaded
areas.
(figure omitted: estimated business-cycle component of gdp_ln with recessions shaded, 1950q1–2010q1)
tsfilter hp automatically detects panel data from the information provided when the dataset was
tsset or xtset. All calculations are done separately on each panel. Missing values at the beginning
and end of the sample are excluded from the sample. The sample may not contain gaps.
Stored results
tsfilter hp stores the following in r():

Scalars
    r(smooth)        smoothing parameter

Macros
    r(varlist)       original time-series variables
    r(filterlist)    variables containing estimates of the cyclical components
    r(trendlist)     variables containing estimates of the trend components, if trend() was specified
    r(method)        Hodrick-Prescott
    r(unit)          units of time variable set using tsset or xtset
Methods and formulas
Formally, the filter obtains the trend estimates τ_t by solving the minimization problem
$$\min_{\tau_t}\; \sum_{t=1}^{T} (y_t - \tau_t)^2 \;+\; \lambda \sum_{t=2}^{T-1} \{(\tau_{t+1} - \tau_t) - (\tau_t - \tau_{t-1})\}^2$$
The cyclical component is computed as
$$c = (I_T - M^{-1})\, y$$
where y is the column vector y′ = (y_1, y_2, ..., y_T), I_T is the T × T identity matrix, and M is the T × T matrix:
$$M = \begin{pmatrix}
(1+\lambda) & -2\lambda & \lambda & 0 & 0 & \cdots & \cdots & 0 \\
-2\lambda & (1+5\lambda) & -4\lambda & \lambda & 0 & \cdots & \cdots & 0 \\
\lambda & -4\lambda & (1+6\lambda) & -4\lambda & \lambda & \cdots & \cdots & 0 \\
0 & \lambda & -4\lambda & (1+6\lambda) & -4\lambda & \cdots & \cdots & 0 \\
\vdots & \vdots & \ddots & \ddots & \ddots & \ddots & \ddots & \vdots \\
0 & \cdots & \cdots & \lambda & -4\lambda & (1+6\lambda) & -4\lambda & \lambda \\
0 & \cdots & \cdots & 0 & \lambda & -4\lambda & (1+5\lambda) & -2\lambda \\
0 & \cdots & \cdots & \cdots & 0 & \lambda & -2\lambda & (1+\lambda)
\end{pmatrix}$$
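The matrix M can equivalently be built as I_T + λD′D, where D is the (T − 2) × T second-difference matrix; a minimal Mata sketch with simulated data and our own names:

. mata:
: T = 100
: lambda = 1600
: y = runningsum(rnormal(T, 1, 0, 1))   // simulated series
: D = J(T-2, T, 0)
: for (i=1; i<=T-2; i++) D[i, (i..(i+2))] = (1, -2, 1)
: M = I(T) + lambda*D'*D
: c = (I(T) - luinv(M))*y               // cyclical component
: tau = y - c                           // trend component
: end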
The gain of the HP filter is given by (see King and Rebelo [1993], Maravall and del Rio [2007], or Harvey and Trimbur [2008])
$$\psi(\omega) = \frac{4\lambda\{1 - \cos(\omega)\}^{2}}{1 + 4\lambda\{1 - \cos(\omega)\}^{2}}$$
As discussed in [TS] tsfilter, there are two approaches to selecting λ. One method, based on the heuristic argument of Hodrick and Prescott (1997), is used to compute the default values for λ. The method sets λ to 1,600 for quarterly data and to the rescaled values worked out by Ravn and Uhlig (2002). The rescaled default values for λ are 6.25 for yearly data, 100 for half-yearly data, 129,600 for monthly data, 1,600 × 12⁴ for weekly data, and 1,600 × (365/4)⁴ for daily data.
The second method for selecting λ uses the recommendations of Pollock (2000, 324), who uses the gain function of the filter to identify a value for λ.
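To make that recommendation concrete, here is the arithmetic (our worked example, not a quotation from Pollock): setting the gain to one-half at the 32-period cutoff frequency and solving for λ reproduces the 677.13 used in the example above.

    \psi(\omega_1) = \frac{1}{2} \iff \lambda = \frac{1}{4\{1-\cos(\omega_1)\}^2}, \qquad \omega_1 = \frac{2\pi}{32} \approx 0.19635

    \lambda = \frac{1}{4\{1-\cos(0.19635)\}^2} \approx 677.13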
Additional literature critiques the HP filter by pointing out that the HP filter corresponds to a specific model. Harvey and Trimbur (2008) show that the cyclical component estimated by the HP filter is equivalent to one estimated by a particular unobserved-components model. Harvey and Jaeger (1993), Gómez (1999), Pollock (2000), and Gómez (2001) also show this result and provide interesting comparisons of estimating c_t by filtering and model-based methods.
References
Burns, A. F., and W. C. Mitchell. 1946. Measuring Business Cycles. New York: National Bureau of Economic Research.
Gómez, V. 1999. Three equivalent methods for filtering finite nonstationary time series. Journal of Business and Economic Statistics 17: 109–116.
———. 2001. The use of Butterworth filters for trend and cycle estimation in economic time series. Journal of Business and Economic Statistics 19: 365–373.
Harvey, A. C., and A. Jaeger. 1993. Detrending, stylized facts and the business cycle. Journal of Applied Econometrics 8: 231–247.
Harvey, A. C., and T. M. Trimbur. 2008. Trend estimation and the Hodrick–Prescott filter. Journal of the Japanese Statistical Society 38: 41–49.
Hodrick, R. J., and E. C. Prescott. 1997. Postwar U.S. business cycles: An empirical investigation. Journal of Money, Credit, and Banking 29: 1–16.
King, R. G., and S. T. Rebelo. 1993. Low frequency filtering and real business cycles. Journal of Economic Dynamics and Control 17: 207–231.
Leser, C. E. V. 1961. A simple method of trend construction. Journal of the Royal Statistical Society, Series B 23: 91–107.
Maravall, A., and A. del Río. 2007. Temporal aggregation, systematic sampling, and the Hodrick–Prescott filter. Working Paper No. 0728, Banco de España. http://www.bde.es/webbde/Secciones/Publicaciones/PublicacionesSeriadas/DocumentosTrabajo/07/Fic/dt0728e.pdf.
Pollock, D. S. G. 1999. A Handbook of Time-Series Analysis, Signal Processing and Dynamics. London: Academic Press.
———. 2000. Trend estimation and de-trending via rational square-wave filters. Journal of Econometrics 99: 317–334.
———. 2006. Econometric methods of signal extraction. Computational Statistics & Data Analysis 50: 2268–2292.
Ravn, M. O., and H. Uhlig. 2002. On adjusting the Hodrick–Prescott filter for the frequency of observations. Review of Economics and Statistics 84: 371–376.
Also see
[TS] tsset Declare data to be time-series data
[XT] xtset Declare data to be panel data
[TS] tsfilter Filter a time-series, keeping only selected periodicities
[D] format Set variables' output format
[TS] tssmooth Smooth and forecast univariate time-series data
Title
tsline Plot time-series data
Syntax
Remarks and examples
Menu
References
Description
Also see
Options
Syntax
Time-series line plot
    twoway tsline varlist [if] [in] [, tsline_options]
Time-series range plot with lines
    twoway tsrline y1 y2 [if] [in] [, tsrline_options]
where the time variable is assumed set by tsset (see [TS] tsset), and varlist has the interpretation y1 y2 ... yk.
tsline_options      Description
Plots
  scatter_options   any options allowed by graph twoway scatter; see [G-2] graph twoway scatter
  twoway_options    any options documented in [G-3] twoway_options

tsrline_options     Description
Plots
  rline_options     any options allowed by graph twoway rline; see [G-2] graph twoway rline
  twoway_options    any options documented in [G-3] twoway_options

Menu
Statistics > Time series > Graphs > Line plots
Description
tsline draws line plots for time-series data.
tsrline draws a range plot with lines for time-series data.
tsline and tsrline are both commands and plottypes as defined in [G-2] graph twoway. Thus the syntax for tsline is
    . graph twoway tsline ...
    . twoway tsline ...
    . tsline ...
and similarly for tsrline. Being plottypes, these commands may be combined with other plottypes in the twoway family, as in
    . twoway (tsrline ...) (tsline ...) (lfit ...) ...
    . twoway ... || tsline ... || lfit ... || ...
Options
Plots
scatter options are any of the options allowed by the graph twoway scatter command except that
marker options, marker placement option, and marker label options will be ignored if specified;
see [G-2] graph twoway scatter.
rline options are any of the options allowed by the graph twoway rline command; see [G-2] graph
twoway rline.
twoway options are any of the options documented in [G-3] twoway options. These include options
for titling the graph (see [G-3] title options), for saving the graph to disk (see [G-3] saving option),
and the by() option, which will allow you to simultaneously plot different subsets of the data
(see [G-3] by option).
Also see the recast() option discussed in [G-3] advanced options for information on how to
plot spikes, bars, etc., instead of lines.
Basic examples
Example 1
We simulated two separate time series (each of 200 observations) and placed them in a Stata dataset, tsline1.dta. The first series simulates an AR(2) process with coefficients 0.8 and 0.2; the second series simulates an MA(2) process with coefficients 0.8 and 0.2. We use tsline to graph these two series.
. use http://www.stata-press.com/data/r13/tsline1
. tsset lags
time variable: lags, 0 to 199
delta: 1 unit
. tsline ar ma
(Figure: tsline plot of the Simulated AR(.8,.2) and Simulated MA(.8,.2) series against lags, 0 to 200.)
Example 2
Suppose that we kept a calorie log for an entire calendar year. At the end of the year, we would
have a dataset (for example, tsline2.dta) that contains the number of calories consumed for 365
days. We could then use tsset to identify the date variable and tsline to plot calories versus time.
Knowing that we tend to eat a little more food on Thanksgiving and Christmas day, we use the
ttick() and ttext() options to point these days out on the time axis.
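A sketch of one such command, assuming the calorie variable is named calories (the variable name and the y position 3470 for the labels are our choices):
. tsline calories, ttick(28nov2002 25dec2002, tpos(in))
>     ttext(3470 28nov2002 "thanks" 3470 25dec2002 "xmas")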
(Figure: calories consumed, 01jan2002 to 01jan2003, with inward ticks and the labels thanks and xmas marking Thanksgiving and Christmas on the time axis.)
We were uncertain of the exact values we logged, so we also gave a range for each day. Here is
a plot of the summer months.
(Figure: range plot with lines showing the calorie range together with calories consumed, 01may2002 to 01sep2002.)
Options associated with the time axis allow dates (and times) to be specified in place of numeric
date (and time) values. For instance, we used
ttick(28nov2002 25dec2002, tpos(in))
to place tick marks at the specified dates. This works similarly for tlabel, tmlabel, and tmtick.
533
Suppose that we wanted to place vertical lines for the previously mentioned holidays. We could
specify the dates in the tline() option as follows:
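A sketch of the command, again assuming the variable name calories:
. tsline calories, tline(28nov2002 25dec2002)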
(Figure: calories consumed, 01jan2002 to 01jan2003, with vertical lines at 28nov2002 and 25dec2002.)
We could also modify the format of the time axis so that only the day in the year is displayed in
the labeled ticks:
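One way to do that is to apply a date display format in the tlabel() option; a sketch, with our variable name and a format we believe produces labels such as Jan1:
. tsline calories, tlabel(, format(%tdMondd)) ttitle("Date (2002)")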
(Figure: calories consumed with the time axis labeled Jan1, Apr1, Jul1, Oct1, and Jan1; axis title Date (2002).)
Video example
Time series, part 2: Line graphs and tin()
References
Cox, N. J. 2006. Speaking Stata: Graphs for all seasons. Stata Journal 6: 397–419.
———. 2009. Stata tip 76: Separating seasonal time series. Stata Journal 9: 321–326.
———. 2012. Speaking Stata: Transforming the time axis. Stata Journal 12: 332–341.
Also see
[TS] tsset Declare data to be time-series data
[G-2] graph twoway Twoway graphs
[XT] xtline Panel-data line plots
Title
tsreport Report time-series aspects of a dataset or estimation sample
Syntax
Remarks and examples
Menu
Stored results
Description
Also see
Options
Syntax
Syntax
    tsreport [varlist] [if] [in] [, options]

options       Description
Main
  detail      report periods for each gap
  casewise    treat a period as a gap if any of the specified variables are missing
  panel       do not count panel changes as gaps

varlist may contain time-series operators; see [U] 11.4.4 Time-series varlists.

Menu
Statistics > Time series > Setup and utilities > Report time-series aspects of dataset
Description
tsreport reports time gaps in a dataset or in a subset of variables. By default, tsreport reports
periods in which no information is recorded in the dataset; the time variable does not include these
periods. When you specify varlist, tsreport reports periods in which either no information is
recorded in the dataset or the time variable is present, but one or more variables in varlist contain a
missing value.
Options
Main
detail specifies that a detailed gap report be produced, listing the observations and periods corresponding to each gap.
casewise specifies that a period be counted as a gap if any of the variables in varlist contain a missing value in that period.
panel specifies that panel changes not be counted as gaps.
Basic examples
Time-series commands sometimes require that observations be on a fixed time interval with no
gaps, or the command may not function properly. tsreport provides a tool for reporting the gaps
in a sample.
       edlevel    month   income
  1.         1   1998m1      687
  2.         1   1998m2      783
  3.         1   1998m3      790
  4.         2   1998m1     1435
  5.         2   1998m2     1522
  6.         2   1998m4     1532

. tsreport
Panel variable: edlevel
Time variable:  month
  (output omitted)
Two gaps are reported in the sample. We know the second panel is missing the month of March, but where is the second gap? The note at the bottom of the output is telling us something about panel changes. Let's use the detail option to get more information:
. tsreport, detail
Panel variable: edlevel
Time variable:  month
  (output omitted)

Gap report

     Obs.      edlevel      Start        End    N. Obs.
    3    4           1     1998m4          .          .
    5    6           2     1998m3     1998m3          1
We now see what is happening. tsreport is counting the change from the first panel to the second
panel as a gap. Look at the output from the list command above. The value of month in observation
4 is not one month later than the value of month in observation 3, so tsreport reports a gap. (If
we are programmers writing a procedure that does not account for panels, a change from one panel
to the next represents a break in the time series just as a gap in the data does.) For the second gap,
tsreport indicates that just one observation is missing because we are only missing the month of
March. This gap is between observations 5 and 6 of the data.
In other cases, we may not care about changes in panels and not want them counted as gaps. We
can use the panel option to specify that tsreport should ignore panel changes:
. tsreport, detail panel
Panel variable: edlevel
Time variable:  month

Starting period  =   1998m1
Ending period    =   1998m4
Observations     =        6
Number of gaps   =        1

Gap report

     Obs.      edlevel      Start        End    N. Obs.
    5    6           2     1998m3     1998m3          1
tsreport now indicates there is just one gap, corresponding to March for the second panel.
                     hour   price1   price2
  1.   13feb2007 08:00:00      140      245
  2.   13feb2007 09:00:00      155      250
  3.   13feb2007 10:00:00        .      250
  4.   13feb2007 11:00:00      155      250
  5.   13feb2007 12:00:00      160      255
  6.   13feb2007 13:00:00        .        .
  7.   13feb2007 14:00:00      165      255
  8.   13feb2007 15:00:00      170      260
  9.   13feb2007 16:00:00      175      265
 10.   13feb2007 17:00:00      180        .
 11.   13feb2007 20:00:00      190      270
First, let's invoke tsreport without specifying price1 or price2. We will specify the detail option so that we can see the periods corresponding to the gap or gaps reported:

. tsreport, detail
Time variable: hour

Starting period  =  13feb2007 08:00:00
Ending period    =  13feb2007 20:00:00
Observations     =                  11
Number of gaps   =                   1

Gap report

     Obs.                  Start                  End    N. Obs.
   10   11    13feb2007 18:00:00   13feb2007 19:00:00          2
One gap is reported, lasting two periods. We have no data corresponding to 6:00 p.m. and 7:00 p.m.
on February 13, 2007.
What about observations 3, 6, and 10? We are missing data on one or both of the price variables for
those observations, but the time variable itself is present for those observations. By default, tsreport
defines gaps as periods in which no information, not even the time variable itself, is recorded.
If we instead want to obtain information about when one or more variables are missing information,
then we specify those variables in our call to tsreport. Here we specify price1, first without the
detail option:
. tsreport price1

Gap summary report

                                                             Number of
   Variable                Start                  End       Obs.  Gaps
   price1     13feb2007 08:00:00   13feb2007 20:00:00          9     3

The output indicates that we have data on price1 from 8:00 a.m. to 8:00 p.m. However, we only have 9 observations on price1 during that span because we have 3 gaps in the data. Let's specify the detail option to find out where:
. tsreport price1, detail
Variable:      price1
Time variable: hour

Starting period  =  13feb2007 08:00:00
Ending period    =  13feb2007 20:00:00
Observations     =                   9
Number of gaps   =                   3

Gap report

     Obs.                  Start                  End    N. Obs.
    2    4    13feb2007 10:00:00   13feb2007 10:00:00          1
    5    7    13feb2007 13:00:00   13feb2007 13:00:00          1
   10   11    13feb2007 18:00:00   13feb2007 19:00:00          2
The three gaps correspond to observations 3 and 6, for which price1 is missing, as well as the
two-period gap in the evening when not even the time variable is recorded in the dataset.
When you specify multiple variables with tsreport, by default, it summarizes gaps in each variable separately. Apart from combining the information into one table, typing
. tsreport price1 price2
is equivalent to typing
. tsreport price1
. tsreport price2
The only difference between the two methods is that the former stores results for both variables in r-class macros for later use, whereas if you were to type the latter two commands in succession, r-class macros would only contain results for price2.
In many types of analyses, including linear regression, you can only use observations for which
all the variables contain nonmissing data. Similarly, you can have tsreport report as gaps periods
in which any of the specified variables contain missing values. To do that, you use the casewise
option.
. tsreport price1 price2, casewise detail
  (output omitted)
Starting period  =  13feb2007 08:00:00
Ending period    =  13feb2007 20:00:00
Observations     =                   8
Number of gaps   =                   3

Gap report

     Obs.                  Start                  End    N. Obs.
    2    4    13feb2007 10:00:00   13feb2007 10:00:00          1
    5    7    13feb2007 13:00:00   13feb2007 13:00:00          1
    9   11    13feb2007 17:00:00   13feb2007 19:00:00          3
The first gap reported by tsreport corresponds to observation 3, when price1 is missing, and the
second gap corresponds to observation 6, when both price1 and price2 are missing. The third gap
spans 3 observations: the 5:00 p.m. observation is missing for price2, and as we discovered earlier,
not even the time variable is present at 6:00 p.m. and 7:00 p.m.
Video example
Time series, part 1: Formatting dates, tsset, tsreport, and tsfill
Stored results
tsreport, when no varlist is specified or when casewise is specified, stores the following in r():
Scalars
    r(N_gaps)    number of gaps
    r(N_obs)     number of observations
    r(start)     first time in series
    r(end)       last time in series
Macros
    r(tsfmt)     %fmt of time variable
Matrices
    r(table)     matrix containing start and end times of each gap, if detail is specified

tsreport, when a varlist is specified and casewise is not specified, stores the following in r():
Scalars
    r(N_gaps#)   number of gaps for the #th variable
    r(N_obs#)    number of observations for the #th variable
    r(start#)    first time in series for the #th variable
    r(end#)      last time in series for the #th variable
Macros
    r(tsfmt)     %fmt of time variable
    r(var#)      name of the #th variable
Matrices
    r(table#)    matrix containing start and end times of each gap for the #th variable, if detail is specified
Also see
[TS] tsset Declare data to be time-series data
Title
tsrevar Time-series operator programming command
Syntax
Stored results
Description
Also see
Options
Syntax
tsrevar varlist
if
in
, substitute list
You must tsset your data before using tsrevar; see [TS] tsset.
Description
tsrevar, substitute takes a varlist that might contain op.varname combinations and substitutes
equivalent temporary variables for the combinations.
tsrevar, list creates no new variables. It returns in r(varlist) the list of base variables
corresponding to varlist.
Options
substitute specifies that tsrevar resolve op.varname combinations by creating temporary variables
as described above. substitute is the default action taken by tsrevar; you do not need to
specify the option.
list specifies that tsrevar return a list of base variable names.
Example 1
. use http://www.stata-press.com/data/r13/tsrevarex
. tsrevar l.gnp d.gnp r
creates two temporary variables containing the values for l.gnp and d.gnp. The variable r appears
in the new variable list but does not require a temporary variable.
The resulting variable list is
. display "r(varlist)"
00014P
00014Q r
. list gnp __00014P __00014Q r in 1/5

         gnp   __00014P   __00014Q      r
  1.     128          .          .    3.2
  2.     135        128          7    3.8
  3.     132        135         -3    2.6
  4.     138        132          6    3.9
  5.     145        138          7    4.2
The order of the list will probably differ from that of the original list; base variables are listed only
once and are listed in the order that they appear in the dataset.
Technical note
tsrevar, substitute avoids creating duplicate variables. Consider
. tsrevar gnp l.gnp r cd l.cd l.gnp
l.gnp appears twice in the varlist. tsrevar will create only one new variable for l.gnp and use
that new variable twice in the resulting r(varlist). Moreover, tsrevar will even do this across
multiple calls:
. tsrevar gnp l.gnp cd l.cd
. tsrevar cpi l.gnp
l.gnp appears in two separate calls. At the first call, tsrevar creates a temporary variable corresponding to l.gnp. At the second call, tsrevar remembers what it has done and uses that same
temporary variable for l.gnp again.
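A minimal sketch of the typical programming use (the program name myxmpl is ours):
program myxmpl
        syntax varlist(ts)
        tsrevar `varlist'            // temp vars stand in for op.varname terms
        summarize `r(varlist)'       // subsequent code sees ordinary variables
end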
Stored results
tsrevar stores the following in r():
Macros
    r(varlist)   the modified variable list or list of base variable names
Also see
[P] syntax Parse Stata syntax
[P] unab Unabbreviate variable list
[U] 11 Language syntax
[U] 11.4.4 Time-series varlists
[U] 18 Programming Stata
Title
tsset Declare data to be time-series data
Syntax
Remarks and examples
Menu
Stored results
Description
References
Options
Also see
Syntax
Declare data to be time series
    tsset timevar [, options]
    tsset panelvar timevar [, options]
Display how data are currently tsset
    tsset
Clear time-series settings
    tsset, clear
In the declare syntax, panelvar identifies the panels and timevar identifies the times.
options         Description
Main
  unitoptions   specify the units of timevar
Delta
  deltaoption   specify the period between observations in timevar units
  noquery       suppress summary calculations and output

unitoptions     Description
  (default)     units are obtained from the display format of timevar
  clocktime     timevar is %tc
  daily         timevar is %td
  weekly        timevar is %tw
  monthly       timevar is %tm
  quarterly     timevar is %tq
  halfyearly    timevar is %th
  yearly        timevar is %ty
  generic       timevar is %tg
  format(%fmt)  specify timevar's format and then apply the default rule
deltaoption specifies the period between observations in timevar units and may be specified as

    deltaoption          Example
    delta(#)             delta(1) or delta(2)
    delta((exp))         delta((7*24))
    delta(# units)       delta(7 days) or delta(15 minutes) or delta(7 days 15 minutes)
    delta((exp) units)   delta((2+3) weeks)

Allowed units for %tc and %tC timevars are

    seconds   second   secs   sec
    minutes   minute   mins   min
    hours     hour
    days      day
    weeks     week

and for all other %t timevars, units specified must match the frequency of the data; for example, for %ty, units must be year or years.
Menu
Statistics > Time series > Setup and utilities > Declare dataset to be time-series data
Description
tsset declares the data in memory to be a time series. tssetting the data is what makes Statas
time-series operators such as L. and F. (lag and lead) work; the operators are discussed under
Remarks and examples below. Also, before using the other ts commands, you must tsset the data
first. If you save the data after tsset, the data will be remembered to be time series and you will
not have to tsset again.
There are two syntaxes for setting the data:
tsset timevar
tsset panelvar timevar
In the first syntaxtsset timevarthe data are set to be a straight time series.
In the second syntaxtsset panelvar timevarthe data are set to be a collection of time series,
one for each value of panelvar, also known as panel data, cross-sectional time-series data, and xt data.
Such datasets can be analyzed by xt commands as well as ts commands. If you tsset panelvar
timevar, you do not need to xtset panelvar timevar to use the xt commands.
tsset without argumentstssetdisplays how the data are currently tsset and sorts the data
on timevar or panelvar timevar if they are sorted differently from that.
tsset, clear is a rarely used programmers command to declare that the data are no longer a
time series.
Options
Main
unitoptions specify the units of timevar. For example, daily data recorded in variable t may be declared in any of three equivalent ways:

    Alternative 1       Alternative 2        Alternative 3
    format t %td        (t not formatted)    (t not formatted)
    tsset t             tsset t, daily       tsset t, format(%td)

timevar is not required to be a %t variable; it can be any variable of your own concocting so long as it takes on only integer values. In such cases, it is called generic and considered to be %tg. Specifying the unitoption generic or attaching a special format to timevar, however, is not necessary because tsset will assume that the variable is generic if it has any numerical format other than a %t format (or if it has a %tg format).
clear, used in tsset, clear, makes Stata forget that the data ever were tsset. This is a rarely used programmer's option.
Delta
delta() specifies the period of timevar and is commonly used when timevar is %tc. delta() is only sometimes used with the other %t formats or with generic time variables.
If delta() is not specified, delta(1) is assumed. This means that at timevar = 5, the previous time is timevar = 5 - 1 = 4 and the next time would be timevar = 5 + 1 = 6. Lag and lead operators, for instance, would work this way. This would be assumed regardless of the units of timevar.
If you specified delta(2), then at timevar = 5, the previous time would be timevar = 5 - 2 = 3 and the next time would be timevar = 5 + 2 = 7. Lag and lead operators would work this way. In the observation with timevar = 5, L.price would be the value of price in the observation for which timevar = 3 and F.price would be the value of price in the observation for which timevar = 7. If you then add an observation with timevar = 4, the operators will still work appropriately; that is, at timevar = 5, L.price will still have the value of price at timevar = 3.
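A short sketch of the behavior just described, with hypothetical variables t and price:
. tsset t, delta(2)
. generate priceL = L.price    // at t = 5, this is price at t = 3, not t = 4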
There are two aspects of timevar: its units and its periodicity. The unitoptions set the units.
delta() sets the periodicity.
We mentioned that delta() is commonly used with %tc timevars because Stata's %tc variables have units of milliseconds. If delta() is not specified and in some model you refer to L.price, you will be referring to the value of price 1 ms ago. Few people have data with periodicity of a millisecond. Perhaps your data are hourly. You could specify delta(3600000). Or you could specify delta((60*60*1000)), because delta() will allow expressions if you include an extra pair of parentheses. Or you could specify delta(1 hour). They all mean the same thing: timevar has periodicity of 3,600,000 ms. In an observation for which timevar = 1,489,572,000,000 (corresponding to 15mar2007 10:00:00), L.price would be the observation for which timevar = 1,489,572,000,000 - 3,600,000 = 1,489,568,400,000 (corresponding to 15mar2007 9:00:00).
When you tsset the data and specify delta(), tsset verifies that all the observations follow the specified periodicity. For instance, if you specified delta(2), then timevar could contain any subset of {..., -4, -2, 0, 2, 4, ...} or it could contain any subset of {..., -3, -1, 1, 3, ...}. If timevar contained a mix of values, tsset would issue an error message. If you also specify a panelvar, that is, you type tsset panelvar timevar, delta(2), the check is made on each panel independently. One panel might contain timevar values from one set and the next, another, and that would be fine.
The following option is available with tsset but is not shown in the dialog box:
noquery prevents tsset from performing most of its summary calculations and suppresses output. With this option, only the following results are posted: r(tdelta), r(panelvar), r(timevar), r(tsfmt), r(unit), and r(unit1).
Overview
tsset sets timevar so that Stata's time-series operators are understood in varlists and expressions. The time-series operators are

    Operator   Meaning
    L.         lag x_(t-1)
    L2.        2-period lag x_(t-2)
    ...
    F.         lead x_(t+1)
    F2.        2-period lead x_(t+2)
    ...
    D.         difference x_t - x_(t-1)
    D2.        difference of difference x_t - x_(t-1) - (x_(t-1) - x_(t-2)) = x_t - 2x_(t-1) + x_(t-2)
    ...
    S.         seasonal difference x_t - x_(t-1)
    S2.        lag-2 (seasonal) difference x_t - x_(t-2)
    ...
Time-series operators may be repeated and combined. L3.gnp refers to the third lag of variable
gnp, as do LLL.gnp, LL2.gnp, and L2L.gnp. LF.gnp is the same as gnp. DS12.gnp refers to the
one-period difference of the 12-period difference. LDS12.gnp refers to the same concept, lagged
once.
D1. = S1., but D2. ≠ S2., D3. ≠ S3., and so on. D2. refers to the difference of the difference. S2. refers to the two-period difference. If you wanted the difference of the difference of the 12-period difference of gnp, you would write D2S12.gnp.
Operators may be typed in uppercase or lowercase. Most users would type d2s12.gnp instead of
D2S12.gnp.
You may type operators however you wish; Stata internally converts operators to their canonical
form. If you typed ld2ls12d.gnp, Stata would present the operated variable as L2D3S12.gnp.
Stata also understands operator(numlist). to mean a set of operated variables. For instance, typing
L(1/3).gnp in a varlist is the same as typing L.gnp L2.gnp L3.gnp. The operators can also be
applied to a list of variables by enclosing the variables in parentheses; for example,
. list year L(1/3).(gnp cpi)

       year    L.gnp   L2.gnp   L3.gnp   L.cpi   L2.cpi   L3.cpi
  1.   1989        .        .        .       .        .        .
  2.   1990   5452.8        .        .     100        .        .
  3.   1991   5764.9   5452.8        .     105      100        .
  4.   1992   5932.4   5764.9   5452.8     108      105      100
       (output omitted)
  8.   1996   7330.1   6892.2   6519.1     122      119      112
In operator#., making # zero returns the variable itself. L0.gnp is gnp. Thus, you can type list
year l(0/3).gnp to mean list year gnp L.gnp L2.gnp L3.gnp.
The parenthetical notation may be used with any operator. Typing D(1/3).gnp would return the
first through third differences.
The parenthetical notation may be used in operator lists with multiple operators, such as
L(0/3)D2S12.gnp.
Operator lists may include up to one set of parentheses, and the parentheses may enclose a numlist;
see [U] 11.1.8 numlist.
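Two short sketches of these notations, using the gnp variable from the listing above:
. list year l(0/3).gnp        // same as: list year gnp L.gnp L2.gnp L3.gnp
. list year d(1/3).gnp        // first through third differences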
Before you can use these time-series operators, however, the dataset must satisfy two requirements:
1. the dataset must be tsset and
2. the dataset must be sorted by timevar or, if it is a cross-sectional time-series dataset, by panelvar
timevar.
tsset handles both requirements. As you use Stata, however, you may later use a command that
re-sorts that data, and if you do, the time-series operators will not work:
. tsset time
(output omitted )
. regress y x l.x
(output omitted )
. (you continue to use Stata and, sometime later:)
. regress y x l.x
not sorted
r(5);
Then typing tsset without arguments will reestablish the sort order:
. tsset
(output omitted )
. regress y x l.x
(output omitted )
Here typing tsset is the same as typing sort time. Had we previously tsset country time,
however, typing tsset would be the same as typing sort country time. You can type the sort
command or type tsset without arguments; it makes no difference.
        t   income
  1.    1     1153
  2.    2     1181
      (output omitted)
  9.    9     1282

. tsset t
        time variable:  t, 1 to 9
                delta:  1 unit
. regress income l.income
  (output omitted)
tm() is the function that returns a month equivalent; tm(1995m7) evaluates to the constant 426, meaning 426 months after January 1960. We now have variable newt containing

        t   newt   income
  1.    1    426     1153
  2.    2    427     1181
  3.    3    428     1208
      (output omitted)
  9.    9    434     1282

If we put a %tm format on newt, it displays as calendar months:

. format newt %tm
. list t newt income

        t     newt   income
  1.    1   1995m7     1153
  2.    2   1995m8     1181
  3.    3   1995m9     1208
      (output omitted)
  9.    9   1996m3     1282
Technical note
In addition to monthly, Stata understands clock times (to the millisecond level) as well as daily, weekly, quarterly, half-yearly, and yearly data. See [D] datetime for a description of these capabilities.
Let's reconsider the previous example, but rather than monthly, let's assume the data are daily, weekly, etc. The only thing to know is that, corresponding to function tm(), there are functions td(), tw(), tq(), th(), and ty() and that, corresponding to format %tm, there are formats %td, %tw, %tq, %th, and %ty. Here is what we would have typed had our data been on a different time scale:

    Daily:         . generate newt = td(...) + t - 1
    Weekly:        . generate newt = tw(1994w1) + t - 1
    Monthly:       . generate newt = tm(2004m7) + t - 1
    Quarterly:     . generate newt = tq(1994q1) + t - 1
    Half-yearly:   . generate newt = th(1921h2) + t - 1
    Yearly:        . generate newt = ty(1842) + t - 1
In each example above, we subtracted one from our time variable in constructing the new time
variable newt because we assumed that our starting time value was 1. For the quarterly example, if
our starting time value were 5 and that corresponded to 1994q1, we would type
. generate newt = tq(1994q1) + t - 5
Had our initial time value been t = 742 and that corresponded to 1994q1, we would have typed
. generate newt = tq(1994q1) + t - 742
       income
  1.     1153
  2.     1181
  3.     1208
  4.     1272
  5.     1236
  6.     1297
  7.     1265
  8.     1230
  9.     1282

Say that we know that the first observation corresponds to July 1995 and continues without gaps. We can create a monthly time variable and format it by typing
. generate t = tm(1995m7) + _n - 1
. format t %tm
Listing the data now shows the dates, and we can tsset the dataset:

            t   income
  1.   1995m7     1153
  2.   1995m8     1181
  3.   1995m9     1208
      (output omitted)
  9.   1996m3     1282

. tsset t
        time variable:  t, 1995m7 to 1996m3
                delta:  1 month
          yrmo   income
  1.    7/1995     1153
  2.    8/1995     1181
  3.    9-1995     1208
  4.   10,1995     1272
  5.   11 1995     1236
  6.   12 1995     1297
  7.    1/1996     1265
  8.    2.1996     1230
  9.   3- 1996     1282
The first step is to convert the string to a numeric representation. Doing so is easy using the monthly()
function; see [D] datetime.
. gen mdate = monthly(yrmo, "MY")
. list yrmo mdate income
          yrmo   mdate   income
  1.    7/1995     426     1153
  2.    8/1995     427     1181
  3.    9-1995     428     1208
       (output omitted)
  9.   3- 1996     434     1282
Our new variable, mdate, contains the number of months from January 1960. Now that we have
numeric variable mdate, we can tsset the data:
. format mdate %tm
. tsset mdate
        time variable:  mdate, 1995m7 to 1996m3
                delta:  1 month
or type
. tsset mdate, monthly
        time variable:  mdate, 1995m7 to 1996m3
                delta:  1 month
          yrmo     mdate   income
  1.    7/1995    1995m7     1153
  2.    8/1995    1995m8     1181
  3.    9-1995    1995m9     1208
  4.   10,1995   1995m10     1272
  5.   11 1995   1995m11     1236
  6.   12 1995   1995m12     1297
  7.    1/1996    1996m1     1265
  8.    2.1996    1996m2     1230
  9.   3- 1996    1996m3     1282
Stata can translate many different date formats, including strings like 12jan2009; January 12, 2009;
12-01-2009; 01/12/2009; 01/12/09; 12jan2009 8:14; 12-01-2009 13:12; 01/12/09 1:12 pm; Wed Jan
31 13:03:25 CST 2009; 1998q1; and more. See [D] datetime.
          yrmo   income
  1.    7/1995     1153
  2.    8/1995     1181
  3.   11 1995     1236
  4.   12 1995     1297
  5.    1/1996     1265
  6.   3- 1996     1282
Once the dataset has been tsset, we can use the time-series operators. The D operator specifies first
differences:
. list mdate income d.income
          mdate   income   D.income
  1.     1995m7     1153          .
  2.     1995m8     1181         28
  3.    1995m11     1236          .
  4.    1995m12     1297         61
  5.     1996m1     1265        -32
  6.     1996m3     1282          .
We can use the operators in an expression or varlist context; we do not have to create a new variable
to hold D.income. We can use D.income with the list command, with regress or any other Stata
command that allows time-series varlists.
                    time   price
  1.    02.13.2007 08:00     140
  2.    02.13.2007 09:00     155
  3.    02.13.2007 10:00     160
  4.    02.13.2007 11:00     155
  5.    02.13.2007 12:00     160
Variable time is a string variable. The first step in making this dataset a time-series dataset is to
translate the string to a numeric variable:
. generate double t = clock(time, "MDY hm")
. list in 1/5
                    time   price           t
  1.    02.13.2007 08:00     140   1.487e+12
  2.    02.13.2007 09:00     155   1.487e+12
  3.    02.13.2007 10:00     160   1.487e+12
  4.    02.13.2007 11:00     155   1.487e+12
  5.    02.13.2007 12:00     160   1.487e+12
See [D] datetime for an explanation of what is going on here. clock() is the function that converts
strings to datetime (%tc) values. We typed clock(time, "MDY hm") to convert string variable time,
and we told clock() that the values in time were in the order month, day, year, hour, and minute.
We stored new variable t as a double because time values are large, and doing so is required to
prevent rounding. Even so, the resulting values 1.487e+12 look rounded, but that is only because of
the default display format for new variables. We can see the values better if we change the format:
. format t %20.0gc
. list in 1/5
                    time   price                   t
  1.    02.13.2007 08:00     140   1,486,972,800,000
  2.    02.13.2007 09:00     155   1,486,976,400,000
  3.    02.13.2007 10:00     160   1,486,980,000,000
  4.    02.13.2007 11:00     155   1,486,983,600,000
  5.    02.13.2007 12:00     160   1,486,987,200,000
. format t %tc
. list in 1/5

                    time   price                    t
  1.    02.13.2007 08:00     140   13feb2007 08:00:00
  2.    02.13.2007 09:00     155   13feb2007 09:00:00
  3.    02.13.2007 10:00     160   13feb2007 10:00:00
  4.    02.13.2007 11:00     155   13feb2007 11:00:00
  5.    02.13.2007 12:00     160   13feb2007 12:00:00
We could drop variable time. New variable t contains the same information as time, and t is better because it is a Stata time variable, the most important property of which being that it is numeric rather than string. We can tsset it. Here, however, we also need to specify the period with tsset's delta() option. Stata's time variables are numeric, but they record milliseconds since 01jan1960 00:00:00. By default, tsset uses delta(1), and that means the time-series operators would not work as we want them to work. For instance, L.price would look back only 1 ms (and find nothing). We want L.price to look back 1 hour (3,600,000 ms):
. tsset t, delta(1 hour)
time variable: t,
13feb2007 08:00:00.000 to 13feb2007 14:00:00.000
delta: 1 hour
. list t price l.price in 1/5

                        t   price   L.price
  1.   13feb2007 08:00:00     140         .
  2.   13feb2007 09:00:00     155       140
  3.   13feb2007 10:00:00     160       155
  4.   13feb2007 11:00:00     155       160
  5.   13feb2007 12:00:00     160       155
Let's repeat the example, starting again from the original string dataset:

                    time   price
  1.    02.13.2007 08:00     140
  2.    02.13.2007 09:00     155
  3.    02.13.2007 10:00     160
  4.    02.13.2007 11:00     155
  5.    02.13.2007 12:00     160
Remember, variable time is a string variable, and we need to translate it to numeric. So we translate,
but this time we forget to make the new variable a double:
. generate t = clock(time, "MDY hm")
. list in 1/5
                    time   price          t
  1.    02.13.2007 08:00     140   1.49e+12
  2.    02.13.2007 09:00     155   1.49e+12
  3.    02.13.2007 10:00     160   1.49e+12
  4.    02.13.2007 11:00     155   1.49e+12
  5.    02.13.2007 12:00     160   1.49e+12
We see the first difference: t now lists as 1.49e+12 rather than 1.487e+12 as it did previously, but this is nothing that would catch our attention. We would not even know that the value is different. Let's continue.
We next put a %20.0gc format on t to better see the numerical values. In fact, that is not something
we would usually do in an analysis. We did that in the example to emphasize to you that the t values
were really big numbers. We will repeat the exercise just to be complete, but in real analysis, we
would not bother.
. format t %20.0gc
. list in 1/5
                    time   price                   t
  1.    02.13.2007 08:00     140   1,486,972,780,544
  2.    02.13.2007 09:00     155   1,486,976,450,560
  3.    02.13.2007 10:00     160   1,486,979,989,504
  4.    02.13.2007 11:00     155   1,486,983,659,520
  5.    02.13.2007 12:00     160   1,486,987,198,464
. format t %tc
. list in 1/5

                    time   price                    t
  1.    02.13.2007 08:00     140   13feb2007 07:59:40
  2.    02.13.2007 09:00     155   13feb2007 09:00:50
  3.    02.13.2007 10:00     160   13feb2007 09:59:49
  4.    02.13.2007 11:00     155   13feb2007 11:00:59
  5.    02.13.2007 12:00     160   13feb2007 11:59:58
By now, you should see a problem: the translated datetime values are off by a second or two. That was caused by rounding. Dates and times should be the same, not approximately the same, and when you see a difference like this, you should say to yourself, "The translation is off a little. Why is that?" and then you should think, "Of course, rounding. I bet that I did not create t as a double."
Let us assume, however, that you do not do this. You instead plow ahead:
. tsset t, delta(1 hour)
time values with period less than delta() found
r(451);
And that is what will happen when you forget to create t as a double. The rounding will cause an uneven period, and tsset will complain.
By the way, it is only important that clock times (%tc and %tC variables) be stored as doubles.
The other date values %td, %tw, %tm, %tq, %th, and %ty are small enough that they can safely be
stored as floats, although forgetting and storing them as doubles does no harm.
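A compact sketch of the contrast (variable names are ours; clock(time, "MDY hm") is from the example above):
. generate double tgood = clock(time, "MDY hm")   // millisecond-exact
. generate tbad = clock(time, "MDY hm")           // float: off by tens of seconds at these magnitudes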
Technical note
Stata provides two clock-time formats, %tc and %tC. %tC provides a clock with leap seconds. Leap seconds are occasionally inserted to account for randomness of the earth's rotation, which gradually slows. Unlike the extra day inserted in leap years, the timing of when leap seconds will be inserted cannot be foretold. The authorities in charge of such matters announce a leap second approximately 6 months before insertion. Leap seconds are inserted at the end of the day, and the leap second is called 23:59:60 (that is, 11:59:60 pm), which is then followed by the usual 00:00:00 (12:00:00 am).
Most nonastronomers find these leap seconds vexing. The added seconds cause problems because of their lack of predictability (knowing how many seconds there will be between 01jan2012 and 01jan2013 is not possible) and because there are not necessarily 24 hours in a day. If you use a leap-second-adjusted clock, most days have 24 hours, but a few have 24 hours and 1 second. You must look at a table to find out.
From a time-series analysis point of view, the nonconstant day causes the most problems. Let's say that you have data on blood pressure, taken hourly at 1:00, 2:00, . . . , and that you have tsset your data with delta(1 hour). On most days, L24.bp would be blood pressure at the same time yesterday. If the previous day had a leap second, however, and your data were recorded using a leap-second-adjusted clock, there would be no observation L24.bp because 86,400 seconds before the current reading does not correspond to an on-the-hour time; 86,401 seconds before the current reading corresponds to yesterday's time. Thus, whenever possible, using Stata's %tc encoding rather than %tC is better.
When times are recorded by computers using leap-second-adjusted clocks, however, avoiding %tC is not possible. For performing most time-series analysis, the recommended procedure is to map the %tC values to %tc and then tsset those. You must ask yourself whether the process you are studying is based on the clock (the nurse does something at 2 o'clock every day) or the true passage of time (the emitter spits out an electron every 86,400,000 ms).
When dealing with computer-recorded times, first find out whether the computer (and its time-recording software) use a leap-second-adjusted clock. If it does, translate that to a %tC value. Then use function cofC() to convert to a %tc value and tsset that. If variable T contains the %tC value,
. gen double t = cofC(T)
. format t %tc
. tsset t, delta(...)
Function cofC() moves leap seconds forward: 23:59:60 becomes 00:00:00 of the next day.
Panel data
Example 8: Time-series data for multiple groups
Assume that we have a time series on average annual income and that we have the series for two
groups: individuals who have not completed high school (edlevel = 1) and individuals who have
(edlevel = 2).
. use http://www.stata-press.com/data/r13/tssetxmpl5, clear
. list edlevel year income, sep(0)
1.
2.
3.
4.
5.
6.
7.
edlevel
year
income
1
1
1
1
2
2
2
1988
1989
1990
1991
1989
1990
1992
14500
14750
14950
15100
22100
22200
22800
Having tsset the data, we can now use time-series operators. The difference operator, for example,
can be used to list annual changes in income:
. list edlevel year income d.income, sep(0)
       edlevel   year   income   D.income
  1.         1   1988    14500          .
  2.         1   1989    14750        250
  3.         1   1990    14950        200
  4.         1   1991    15100        150
  5.         2   1989    22100          .
  6.         2   1990    22200        100
  7.         2   1992    22800          .
We see that in addition to producing missing values due to missing times, the difference operator
correctly produced a missing value at the start of each panel. Once we have tsset our panel data,
we can use time-series operators and be assured that they will handle missing time periods and panel
changes correctly.
Video example
Time series, part 1: Formatting dates, tsset, tsreport, and tsfill
Stored results
tsset stores the following in r():
Scalars
    r(imin)      minimum panel ID
    r(imax)      maximum panel ID
    r(tmin)      minimum time
    r(tmax)      maximum time
    r(tdelta)    delta
Macros
    r(panelvar)  name of panel variable
    r(timevar)   name of time variable
    r(tdeltas)   formatted delta
    r(tmins)     formatted minimum time
    r(tmaxs)     formatted maximum time
    r(tsfmt)     %fmt of time variable
    r(unit)      units of time variable: Clock, clock, daily, weekly, monthly, quarterly, halfyearly, yearly, or generic
    r(unit1)     units of time variable: C, c, d, w, m, q, h, y, or ""
    r(balanced)  unbalanced, weakly balanced, or strongly balanced; a set of panels is strongly balanced if they all have the same time values, otherwise balanced if they have the same number of time values, otherwise unbalanced
References
Baum, C. F. 2000. sts17: Compacting time series data. Stata Technical Bulletin 57: 44–45. Reprinted in Stata Technical Bulletin Reprints, vol. 10, pp. 369–370. College Station, TX: Stata Press.
Cox, N. J. 2010. Stata tip 68: Week assumptions. Stata Journal 10: 682–685.
———. 2012. Stata tip 111: More on working with weeks. Stata Journal 12: 565–569.
Also see
[TS] tsfill Fill in gaps in time variable
Title
tssmooth Smooth and forecast univariate time-series data
Syntax
Description
References
Also see
Syntax
    tssmooth smoother [type] newvar = exp [if] [in] [, ...]
    Smoother category               smoother
    Moving average
      with uniform weights          ma
      with specified weights        ma
    Recursive
      exponential                   exponential
      double exponential            dexponential
      nonseasonal Holt–Winters      hwinters
      seasonal Holt–Winters         shwinters
    Nonlinear filter                nl

See [TS] tssmooth ma, [TS] tssmooth exponential, [TS] tssmooth dexponential, [TS] tssmooth hwinters, [TS] tssmooth shwinters, and [TS] tssmooth nl.
Description
tssmooth creates new variable newvar and fills it in by passing the specified expression (usually
a variable name) through the requested smoother.
References
Abraham, B., and J. Ledolter. 1983. Statistical Methods for Forecasting. New York: Wiley.
Becketti, S. 2013. Introduction to Time Series Using Stata. College Station, TX: Stata Press.
Bowerman, B. L., R. T. O'Connell, and A. B. Koehler. 2005. Forecasting, Time Series, and Regression: An Applied Approach. 4th ed. Pacific Grove, CA: Brooks/Cole.
Chatfield, C. 2001. Time-Series Forecasting. London: Chapman & Hall/CRC.
———. 2004. The Analysis of Time Series: An Introduction. 6th ed. Boca Raton, FL: Chapman & Hall/CRC.
Chatfield, C., and M. Yar. 1988. Holt–Winters forecasting: Some practical issues. Statistician 37: 129–140.
Holt, C. C. 2004. Forecasting seasonals and trends by exponentially weighted moving averages. International Journal of Forecasting 20: 5–10.
Montgomery, D. C., L. A. Johnson, and J. S. Gardiner. 1990. Forecasting and Time Series Analysis. 2nd ed. New York: McGraw-Hill.
Winters, P. R. 1960. Forecasting sales by exponentially weighted moving averages. Management Science 6: 324–342.
Also see
[TS] tsset Declare data to be time-series data
[TS] arima ARIMA, ARMAX, and other dynamic regression models
[TS] sspace State-space models
[TS] tsfilter Filter a time-series, keeping only selected periodicities
[R] smooth Robust nonlinear smoother
Title
tssmooth dexponential Double-exponential smoothing
Syntax
Remarks and examples
Also see
Menu
Stored results
Description
Methods and formulas
Options
References
Syntax
tssmooth dexponential
type
newvar = exp
if
in
, options
Description
options
Main
replace
parms(# )
samp0(#)
s0(#1 #2 )
forecast(#)
You must tsset your data before using tssmooth dexponential; see [TS] tsset.
exp may contain time-series operators; see [U] 11.4.4 Time-series varlists.
Menu
Statistics
>
Time series
>
Smoothers/univariate forecasters
>
Double-exponential smoothing
Description
tssmooth dexponential models the trend of a variable whose difference between changes from
the previous values is serially correlated. More precisely, it models a variable whose second difference
follows a low-order, moving-average process.
Options
Main
forecast(#) specifies the number of periods for the out-of-sample prediction; 0 # 500. The
default is forecast(0), which is equivalent to not performing an out-of-sample forecast.
The double-exponential smoothing procedure is designed for series that can be forecast by a line with a time-varying intercept and slope,

    \hat{x}_t = m_t + b_t t

where \hat{x}_t is the smoothed or predicted value of the series x, and the terms m_t and b_t change over time. Abraham and Ledolter (1983), Bowerman, O'Connell, and Koehler (2005), and Montgomery, Johnson, and Gardiner (1990) all provide good introductions to double-exponential smoothing. Chatfield (2001, 2004) provides helpful discussions of how double-exponential smoothing relates to modern time-series methods.
The double-exponential method has been used both as a smoother and as a prediction method. [TS] tssmooth exponential shows that the single-exponential smoothed series is given by

    S_t = \alpha x_t + (1 - \alpha) S_{t-1}

where \alpha is the smoothing constant and x_t is the original series. The double-exponential smoother is obtained by smoothing the smoothed series,

    S_t^{[2]} = \alpha S_t + (1 - \alpha) S_{t-1}^{[2]}

Values of S_0 and S_0^{[2]} are necessary to begin the process. Per Montgomery, Johnson, and Gardiner (1990), the default method is to obtain S_0 and S_0^{[2]} from a regression of the first N_pre values of x_t on \tilde{e}_t = (1, ..., N_pre - t_0)'. By default, N_pre is equal to one-half the number of observations in the sample. N_pre can be specified using the samp0() option.
The values of S_0 and S_0^{[2]} can also be specified using the option s0().
(Figure: the sales series plotted against Time.)
The following example illustrates that double-exponential smoothing is simply smoothing the
smoothed series. Because the starting values are treated as time-zero values, we actually lose 2
observations when smoothing the smoothed series.
. use http://www.stata-press.com/data/r13/sales2
. tssmooth exponential double sm1=sales, p(.7) s0(1031)
exponential coefficient  =     0.7000
sum-of-squared residuals =      13923
root mean squared error  =     13.192

. tssmooth exponential double sm2=sm1, p(.7) s0(1031)
exponential coefficient  =     0.7000
sum-of-squared residuals =     7698.6
root mean squared error  =     9.8098

. tssmooth dexponential double sm2c=sales, p(.7) s0(1031 1031)
double-exponential coefficient =     0.7000
sum-of-squared residuals       =     3724.4
root mean squared error        =     6.8231

. list sm2 sm2c in 1/10

             sm2        sm2c
  1.        1031        1031
  2.   1028.3834   1028.3834
  3.   1030.6306   1030.6306
  4.   1017.8182   1017.8182
  5.    1022.938    1022.938
  6.   1026.0752   1026.0752
  7.   1041.8587   1041.8587
  8.   1042.8341   1042.8341
  9.   1035.9571   1035.9571
 10.   1030.6651   1030.6651
The double-exponential method can also be viewed as a forecasting mechanism. The exponential forecast method is a constrained version of the Holt–Winters method implemented in [TS] tssmooth hwinters (as discussed by Gardner [1985] and Chatfield [2001]). Chatfield (2001) also notes that the double-exponential method arises when the underlying model is an ARIMA(0,2,2) with equal roots.
This method produces predictions \hat{x}_t for t = t_1, ..., T + forecast(). These predictions are obtained as a function of the smoothed series and the smoothed-smoothed series. For t in [t_0, T],

    \hat{x}_t = \left(2 + \frac{\alpha}{1-\alpha}\right) S_{t-1} - \left(1 + \frac{\alpha}{1-\alpha}\right) S_{t-1}^{[2]}

where S_t and S_t^{[2]} are as given above.
The out-of-sample predictions are obtained as a function of the constant term, the linear term of the smoothed series at the last observation in the sample, and time. The constant term is a_T = 2S_T - S_T^{[2]}, and the linear term is b_T = \{\alpha/(1-\alpha)\}(S_T - S_T^{[2]}). The \tau-step-ahead out-of-sample prediction is then

    \hat{x}_{T+\tau} = a_T + \tau b_T
             f1          f2
  1.        1031        1031
  2.        1031        1031
  3.    1023.524    1023.524
  4.   1034.8039   1034.8039
  5.    994.0237    994.0237
  6.   1032.4463   1032.4463
  7.   1031.9015   1031.9015
  8.   1071.1709   1071.1709
  9.   1044.6454   1044.6454
 10.   1023.1855   1023.1855
The following graph describes the fit that we obtained by applying the double-exponential forecast method to our sales data. The out-of-sample dynamic predictions are not constant, as in the single-exponential case.
(Figure: dexpc(0.3631) forecast of sales plotted with sales against time, through the out-of-sample periods.)
tssmooth dexponential automatically detects panel data from the information provided when
the dataset was tsset. The starting values are chosen separately for each series. If the smoothing
parameter is chosen to minimize the sum-of-squared prediction errors, the optimization is performed
separately on each panel. The stored results contain the results from the last panel. Missing values at
the beginning of the sample are excluded from the sample. After at least one value has been found,
missing values are filled in using the one-step-ahead predictions from the previous period.
Stored results
tssmooth dexponential stores the following in r():
Scalars
    r(N)          number of observations
    r(alpha)      smoothing parameter
    r(rss)        sum-of-squared errors
    r(rmse)       root mean squared error
    r(N_pre)      number of observations used in calculating starting values, if starting values calculated
    r(s2_0)       initial value for linear term, i.e., S_0^[2]
    r(s1_0)       initial value for constant term, i.e., S_0
    r(linear)     final value of linear term
    r(constant)   final value of constant term
    r(period)     period, if filter is seasonal
Macros
    r(method)     smoothing method
    r(exp)        expression specified
    r(timevar)    time variable specified in tsset
    r(panelvar)   panel variable specified in tsset
The starting values are obtained from the regression

    x_t = \beta_0 + \beta_1 t

where x_t is the series to be smoothed and t is a time variable that has been normalized to equal 1 in the first period included in the sample. The regression coefficient estimates \hat{\beta}_0 and \hat{\beta}_1 are obtained via OLS. The sample is determined by the option samp0(). By default, samp0() includes the first half of the observations. Given the estimates \hat{\beta}_0 and \hat{\beta}_1, the starting values are

    S_0 = \hat{\beta}_0 - \{(1 - \alpha)/\alpha\}\hat{\beta}_1
    S_0^{[2]} = \hat{\beta}_0 - 2\{(1 - \alpha)/\alpha\}\hat{\beta}_1
References
Abraham, B., and J. Ledolter. 1983. Statistical Methods for Forecasting. New York: Wiley.
Bowerman, B. L., R. T. O'Connell, and A. B. Koehler. 2005. Forecasting, Time Series, and Regression: An Applied Approach. 4th ed. Pacific Grove, CA: Brooks/Cole.
Chatfield, C. 2001. Time-Series Forecasting. London: Chapman & Hall/CRC.
———. 2004. The Analysis of Time Series: An Introduction. 6th ed. Boca Raton, FL: Chapman & Hall/CRC.
Chatfield, C., and M. Yar. 1988. Holt–Winters forecasting: Some practical issues. Statistician 37: 129–140.
Gardner, E. S., Jr. 1985. Exponential smoothing: The state of the art. Journal of Forecasting 4: 1–28.
Holt, C. C. 2004. Forecasting seasonals and trends by exponentially weighted moving averages. International Journal of Forecasting 20: 5–10.
Montgomery, D. C., L. A. Johnson, and J. S. Gardiner. 1990. Forecasting and Time Series Analysis. 2nd ed. New York: McGraw-Hill.
Winters, P. R. 1960. Forecasting sales by exponentially weighted moving averages. Management Science 6: 324–342.
Also see
[TS] tsset Declare data to be time-series data
[TS] tssmooth Smooth and forecast univariate time-series data
Title
tssmooth exponential Single-exponential smoothing
Syntax
Remarks and examples
Also see
Menu
Stored results
Description
Methods and formulas
Options
References
Syntax
tssmooth exponential
type
newvar = exp
if
in
, options
Description
options
Main
replace
parms(# )
samp0(#)
s0(#)
forecast(#)
You must tsset your data before using tssmooth exponential; see [TS] tsset.
exp may contain time-series operators; see [U] 11.4.4 Time-series varlists.
Menu
Statistics
>
Time series
>
Smoothers/univariate forecasters
>
Single-exponential smoothing
Description
tssmooth exponential models the trend of a variable whose change from the previous value
is serially correlated. More precisely, it models a variable whose first difference follows a low-order,
moving-average process.
Options
Main
Introduction
Exponential smoothing can be viewed either as an adaptive-forecasting algorithm or, equivalently, as a geometrically weighted moving-average filter. Exponential smoothing is most appropriate when used with time-series data that exhibit no linear or higher-order trends but that do exhibit low-velocity, aperiodic variation in the mean. Abraham and Ledolter (1983), Bowerman, O'Connell, and Koehler (2005), and Montgomery, Johnson, and Gardiner (1990) all provide good introductions to single-exponential smoothing. Chatfield (2001, 2004) discusses how single-exponential smoothing relates to modern time-series methods. For example, simple exponential smoothing produces optimal forecasts for several underlying models, including ARIMA(0,1,1) and the random-walk-plus-noise state-space model. (See Chatfield [2001, sec. 4.3.1].)
The exponential filter with smoothing parameter \alpha creates the series S_t, where

    S_t = \alpha x_t + (1 - \alpha) S_{t-1}    for t = 1, ..., T

and S_0 is the initial value. This is the adaptive forecast-updating form of the exponential smoother. This implies that

    S_T = \sum_{k=0}^{T-1} \alpha (1 - \alpha)^k x_{T-k} + (1 - \alpha)^T S_0

which is the weighted moving-average representation, with geometrically declining weights. The choice of the smoothing constant \alpha determines how quickly the smoothed series or forecast will adjust to changes in the mean of the unfiltered series. For small values of \alpha, the response will be slow because more weight is placed on the previous estimate of the mean of the unfiltered series, whereas larger values of \alpha will put more emphasis on the most recently observed value of the unfiltered series.
Examples
Example 1: Smoothing a series for specified parameters
Let's consider some examples using sales data. Here we forecast sales for three periods with a smoothing parameter of 0.4:
. use http://www.stata-press.com/data/r13/sales1
. tssmooth exponential sm1=sales, parms(.4) forecast(3)
exponential coefficient  =     0.4000
sum-of-squared residuals =       8345
root mean squared error  =     12.919
To compare our forecast with the actual data, we graph the series and the forecasted series over
time.
570
1000
1020
Sales
1040 1060
1080
1100
10
20
30
40
50
Time
exp parms(0.4000) = sales
sales
The graph indicates that our forecasted series may not be adjusting rapidly enough to the changes
in the actual series. The smoothing parameter controls the rate at which the forecast adjusts.
Smaller values of adjust the forecasts more slowly. Thus we suspect that our chosen value of 0.4
is too small. One way to investigate this suspicion is to ask tssmooth exponential to choose the
smoothing parameter that minimizes the sum-of-squared forecast errors.
. tssmooth exponential sm2=sales, forecast(3)
computing optimal exponential coefficient (0,1)
optimal exponential coefficient =      0.7815
sum-of-squared residuals        =   6727.7056
root mean squared error         =   11.599746
The output suggests that the value of = 0.4 is too small. The graph below indicates that the
new forecast tracks the series much more closely than the previous forecast.
. line sm2 sales t, title("Single exponential forecast with optimal alpha")
> ytitle(sales) xtitle(Time)
(Figure: single-exponential forecast with optimal alpha; parms(0.7815) plotted with sales against Time.)
We noted above that simple exponential forecasts are optimal for an ARIMA(0,1,1) model. (See [TS] arima for fitting ARIMA models in Stata.) Chatfield (2001, 90) gives the following useful derivation that relates the MA coefficient in an ARIMA(0,1,1) model to the smoothing parameter in single-exponential smoothing. An ARIMA(0,1,1) is given by

    x_t - x_{t-1} = \epsilon_t + \theta \epsilon_{t-1}

where \epsilon_t is an identically and independently distributed white-noise error term. Thus given \hat{\theta}, an estimate of \theta, an optimal one-step prediction of x_{t+1} is \hat{x}_{t+1} = x_t + \hat{\theta}\epsilon_t. Because \epsilon_t is not observable, it can be replaced by

    \hat{\epsilon}_t = x_t - \hat{x}_t

yielding

    \hat{x}_{t+1} = x_t + \hat{\theta}(x_t - \hat{x}_t)

Letting \hat{\alpha} = 1 + \hat{\theta} and doing more rearranging implies that

    \hat{x}_{t+1} = (1 + \hat{\theta}) x_t - \hat{\theta}\hat{x}_t
    \hat{x}_{t+1} = \hat{\alpha} x_t + (1 - \hat{\alpha})\hat{x}_t
. arima sales, arima(0,1,1)
  (output omitted)
                                                 Number of obs   =         49
                                                 Wald chi2(1)    =       1.41
                                                 Prob > chi2     =     0.2347

                              OPG
     D.sales        Coef.   Std. Err.      z    P>|z|     [95% Conf. Interval]

    sales
       _cons     .5025469   1.382727     0.36   0.716    -2.207548    3.212641

    ARMA
          ma
         L1.    -.1986561   .1671699    -1.19   0.235    -.5263031    .1289908

      /sigma     11.58992   1.240607     9.34   0.000     9.158378    14.02147

Note: The test of the variance against zero is one sided, and the two-sided
confidence interval is truncated at zero.

. di 1 + _b[ARMA:L.ma]
.80134387

The implied smoothing parameter, 0.8013, is close to the optimal coefficient of 0.7815 computed by tssmooth exponential above.
572
coefficient (0,1)
0.6927
21629
14.706801
-> id = 4
computing optimal exponential coefficient (0,1)
optimal exponential coefficient =
0.3866
sum-of-squared residuals
=
22321.334
root mean squared error
=
14.940326
-> id = 5
computing optimal exponential coefficient (0,1)
optimal exponential coefficient =
0.4540
sum-of-squared residuals
=
20714.095
root mean squared error
=
14.392392
tssmooth exponential computed starting values and chose an optimal for each panel individually.
573
        t   sales2       sm3
 26.   26   1011.5    1007.5
 27.   27   1028.3    1010.3
 28.   28        .    1022.9
 29.   29   1028.4    1022.9
 30.   30   1054.8   1026.75
Because the data for t = 28 are missing, the prediction for period 28 has been used in its place. This implies that the updating equation for period 29 is

    S_29 = \alpha \hat{x}_28 + (1 - \alpha) S_28 = \alpha S_28 + (1 - \alpha) S_28 = S_28

which is why the smoothed series is unchanged from period 28 to period 29 in the listing above.
        t        diff
 29.   29        -3.5
 30.   30   -1.050049
 31.   31   -.3150635
 32.   32   -.0946045
 33.   33   -.0283203
 34.   34   -.0085449
 35.   35   -.0025635
 36.   36   -.0008545
 37.   37   -.0003662
 38.   38   -.0001221
exponential coefficient  =     0.7000
sum-of-squared residuals =     6215.3
root mean squared error  =     11.624

        t    sales   sales3        sm4
  1.    1     1031        .          .
  2.    2   1022.1        .          .
  3.    3   1005.6   1005.6   1016.787
  4.    4     1025     1025   1008.956
       (output omitted)
 46.   46   1055.2   1055.2     1057.2
 47.   47   1056.8   1056.8     1055.8
 48.   48   1034.5   1034.5     1056.5
 49.   49   1041.1        .     1041.1
 50.   50   1056.1        .     1041.1
 51.   51        .        .     1041.1
 52.   52        .        .     1041.1
 53.   53        .        .     1041.1
The output above illustrates that missing values at the beginning or end of the sample cause the
sample to be truncated. The new series begins with nonmissing data and begins predicting immediately
after it stops.
One period after the actual data concludes, the exponential forecast becomes a constant. After the
actual end of the data, the forecast at period t is substituted for the missing data. This also illustrates
why the forecasted series is a constant.
Stored results
tssmooth exponential stores the following in r():
Scalars
    r(N)          number of observations
    r(alpha)      smoothing parameter
    r(rss)        sum-of-squared prediction errors
    r(rmse)       root mean squared error
    r(N_pre)      number of observations used in calculating starting values
    r(s1_0)       initial value for S_t
Macros
    r(method)     smoothing method
    r(exp)        expression specified
    r(timevar)    time variable specified in tsset
    r(panelvar)   panel variable specified in tsset
References
Abraham, B., and J. Ledolter. 1983. Statistical Methods for Forecasting. New York: Wiley.
Bowerman, B. L., R. T. O'Connell, and A. B. Koehler. 2005. Forecasting, Time Series, and Regression: An Applied Approach. 4th ed. Pacific Grove, CA: Brooks/Cole.
Chatfield, C. 2001. Time-Series Forecasting. London: Chapman & Hall/CRC.
———. 2004. The Analysis of Time Series: An Introduction. 6th ed. Boca Raton, FL: Chapman & Hall/CRC.
Chatfield, C., and M. Yar. 1988. Holt–Winters forecasting: Some practical issues. Statistician 37: 129–140.
Holt, C. C. 2004. Forecasting seasonals and trends by exponentially weighted moving averages. International Journal of Forecasting 20: 5–10.
Montgomery, D. C., L. A. Johnson, and J. S. Gardiner. 1990. Forecasting and Time Series Analysis. 2nd ed. New York: McGraw-Hill.
Winters, P. R. 1960. Forecasting sales by exponentially weighted moving averages. Management Science 6: 324–342.
Also see
[TS] tsset – Declare data to be time-series data
[TS] tssmooth – Smooth and forecast univariate time-series data
Title
tssmooth hwinters – Holt-Winters nonseasonal smoothing

Syntax    Menu    Description    Options    Remarks and examples    Stored results    Methods and formulas    Acknowledgment    References    Also see
Syntax
    tssmooth hwinters [type] newvar = exp [if] [in] [, options]
options                  Description
Main
  replace                replace newvar if it already exists
  parms(#α #β)           use #α and #β as smoothing parameters
  samp0(#)               use # observations to obtain initial values for the recursion
  s0(#cons #lt)          use #cons and #lt as initial values for the recursion
  forecast(#)            use # periods for the out-of-sample prediction
Options
  diff                   use an alternative initial-value specification; see Options
Maximization
  maximize_options       control the maximization process; seldom used
  from(#α #β)            use #α and #β as starting values in the maximization
You must tsset your data before using tssmooth hwinters; see [TS] tsset.
exp may contain time-series operators; see [U] 11.4.4 Time-series varlists.
Menu
Statistics  >  Time series  >  Smoothers/univariate forecasters  >  Holt-Winters nonseasonal smoothing
Description
tssmooth hwinters is used in smoothing or forecasting a series that can be modeled as a linear
trend in which the intercept and the coefficient on time vary over time.
Options
Main
If you experience difficulty converging (many iterations and "not concave" messages), try using from() to provide better starting values.
samp0(#) and s0(#cons #lt ) specify how the initial values #cons and #lt for the recursion are
obtained.
By default, initial values are obtained by fitting a linear regression with a time trend using the
first half of the observations in the dataset.
samp0(#) specifies that the first # observations be used in that regression.
s0(#cons #lt ) specifies that #cons and #lt be used as initial values.
forecast(#) specifies the number of periods for the out-of-sample prediction; 0 ≤ # ≤ 500. The default is forecast(0), which is equivalent to not performing an out-of-sample forecast.
Options
diff specifies that the linear term is obtained by averaging the first difference of exp_t and the intercept is obtained as the difference of exp in the first observation and the mean of D.exp_t.
If the diff option is not specified, a linear regression of exp_t on a constant and t is fit.
Maximization
maximize_options controls the process for solving for the optimal α and β when parms() is not specified.
maximize_options: nodifficult, technique(algorithm_spec), iterate(#), [no]log, trace, gradient, showstep, hessian, showtolerance, tolerance(#), ltolerance(#), nrtolerance(#), and nonrtolerance; see [R] maximize. These options are seldom used.
from(#α #β), 0 < #α < 1 and 0 < #β < 1, specifies starting values from which the optimal values of α and β will be obtained. If from() is not specified, from(.5 .5) is used.
x̂_{t+1} = a_t + b_t t

where x̂_t is the forecast of the original series x_t, a_t is a mean that drifts over time, and b_t is a coefficient on time that also drifts. In fact, as Gardner (1985) has noted, the Holt-Winters method produces optimal forecasts for an ARIMA(0,2,2) model and some local linear models. See [TS] arima and the references in that entry for ARIMA models, and see Harvey (1989) for a discussion of the local linear model and its relationship to the Holt-Winters method. Abraham and Ledolter (1983), Bowerman, O'Connell, and Koehler (2005), and Montgomery, Johnson, and Gardiner (1990) all provide good introductions to the Holt-Winters method. Chatfield (2001, 2004) provides helpful discussions of how this method relates to modern time-series analysis.
The Holt-Winters method can be viewed as an extension of double-exponential smoothing with two parameters, which may be explicitly set or chosen to minimize the in-sample sum-of-squared forecast errors. In the latter case, as discussed in Methods and formulas, the smoothing parameters are chosen to minimize the in-sample sum-of-squared forecast errors plus a penalty term that helps to achieve convergence when one of the parameters is too close to the boundary.
Given the series x_t, the smoothing parameters α and β, and the starting values a_0 and b_0, the updating equations are

a_t = αx_t + (1 − α)(a_{t−1} + b_{t−1})
b_t = β(a_t − a_{t−1}) + (1 − β)b_{t−1}

and the τ-step-ahead forecast is

x̂_{t+τ} = a_t + b_t τ
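A minimal sketch of putting these recursions to work (the fixed parameter values and the variable name hw1 are illustrative and correspond to the graph title below):

* Sketch: Holt-Winters smoothing with fixed alpha = .7 and beta = .3
* and a 25-period out-of-sample forecast.
tssmooth hwinters hw1 = sales, parms(.7 .3) forecast(25)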
and beta=.3")
90
100
Sales
110
120
130
140
20
40
60
Time
sales
The graph indicates that the forecasts are for linearly decreasing sales. Given a_T and b_T, the out-of-sample predictions are linear functions of time. In this example, the slope appears to be too steep, probably because of our choice of α and β.
              hw2         hw3
  1.     93.31973    97.80807
  2.     98.40002    98.11447
  3.     100.8845     99.2267
  4.     98.50404    96.78276
  5.     93.62408     92.2452

 58.     116.5771    116.5771
 59.     119.2146    119.2146
 60.     119.2608    119.2608
 61.     111.0299    111.0299
 62.     109.2815    109.2815

 63.     107.5331    107.5331
When the smoothing parameters are chosen to minimize the in-sample sum-of-squared forecast errors, changing the initial values can affect the choice of the optimal α and β. When changing the initial values results in different optimal values for α and β, the predictions will also differ.

When the Holt-Winters model fits the data well, finding the optimal smoothing parameters generally proceeds well. When the model fits poorly, finding the α and β that minimize the in-sample sum-of-squared forecast errors can be difficult.
The following graph contains the data and the forecast using the optimal α and β. Comparing this graph with the one above illustrates how different choices of α and β can lead to very different forecasts. Instead of linearly decreasing sales, the new forecast is for linearly increasing sales.
. line sales hw3 t, title("Holt-Winters Forecast with optimal alpha and beta")
> ytitle(Sales) xtitle(Time)
[Graph: sales and the Holt-Winters smooth hw3 plotted against Time; y axis Sales (90–140), x axis Time (20–60)]
Stored results
tssmooth hwinters stores the following in r():
Scalars
    r(N)            number of observations
    r(alpha)        α smoothing parameter
    r(beta)         β smoothing parameter
    r(rss)          sum-of-squared errors
    r(prss)         penalized sum-of-squared errors, if parms() not specified
    r(rmse)         root mean squared error
    r(N_pre)        number of observations used in calculating starting values
    r(s2_0)         initial value for linear term
    r(s1_0)         initial value for constant term
    r(linear)       final value of linear term
    r(constant)     final value of constant term
Macros
    r(method)       smoothing method
    r(exp)          expression specified
    r(timevar)      time variable specified in tsset
    r(panelvar)     panel variable specified in tsset
a_t = αx_t + (1 − α)(a_{t−1} + b_{t−1})
b_t = β(a_t − a_{t−1}) + (1 − β)b_{t−1}

By default, the initial values are found by fitting a linear regression with a time trend. The time variable in this regression is normalized to equal one in the first period included in the sample. By default, one-half of the data is used in this regression, but this sample can be changed using samp0(). a_0 is then set to the estimate of the constant, and b_0 is set to the estimate of the coefficient on the time trend. Specifying the diff option sets b_0 to the mean of D.x and a_0 to x_1 − b_0. s0() can also be used to specify the initial values directly.
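Both routes to the initial values can be exercised directly, as in the following sketch (the numeric values passed to s0() and the variable names are illustrative):

* Sketch: base the initial linear regression on the first 10 observations ...
tssmooth hwinters hw_a = sales, samp0(10)
* ... or supply a_0 and b_0 directly.
tssmooth hwinters hw_b = sales, s0(93 0.5)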
Sometimes, one or both of the optimal parameters may lie on the boundary of [0, 1]. To keep the estimates inside [0, 1], tssmooth hwinters parameterizes the objective function in terms of their inverse logits, that is, in terms of exp(α)/{1 + exp(α)} and exp(β)/{1 + exp(β)}. When one of these parameters is actually on the boundary, this can complicate the optimization. For this reason, tssmooth hwinters optimizes a penalized sum-of-squared forecast errors. Let x̂_t(α̃, β̃) be the forecast for the series x_t, given the choices of α̃ and β̃. Then the in-sample penalized sum-of-squared prediction errors is

P = Σ_{t=1}^{T} [ {x_t − x̂_t(α̃, β̃)}² + I_{|f(α̃)|>12}(|f(α̃)| − 12)² + I_{|f(β̃)|>12}(|f(β̃)| − 12)² ]

where f(x) = ln{x/(1 − x)}. The penalty term is zero unless one of the parameters is close to the boundary. When one of the parameters is close to the boundary, the penalty term will help to obtain convergence.
Acknowledgment
We thank Nicholas J. Cox of the Department of Geography at Durham University, UK, and coeditor
of the Stata Journal for his helpful comments.
References
Abraham, B., and J. Ledolter. 1983. Statistical Methods for Forecasting. New York: Wiley.
Bowerman, B. L., R. T. O'Connell, and A. B. Koehler. 2005. Forecasting, Time Series, and Regression: An Applied Approach. 4th ed. Pacific Grove, CA: Brooks/Cole.
Chatfield, C. 2001. Time-Series Forecasting. London: Chapman & Hall/CRC.
———. 2004. The Analysis of Time Series: An Introduction. 6th ed. Boca Raton, FL: Chapman & Hall/CRC.
Chatfield, C., and M. Yar. 1988. Holt-Winters forecasting: Some practical issues. Statistician 37: 129–140.
Gardner, E. S., Jr. 1985. Exponential smoothing: The state of the art. Journal of Forecasting 4: 1–28.
Harvey, A. C. 1989. Forecasting, Structural Time Series Models and the Kalman Filter. Cambridge: Cambridge University Press.
Holt, C. C. 2004. Forecasting seasonals and trends by exponentially weighted moving averages. International Journal of Forecasting 20: 5–10.
Montgomery, D. C., L. A. Johnson, and J. S. Gardiner. 1990. Forecasting and Time Series Analysis. 2nd ed. New York: McGraw–Hill.
Winters, P. R. 1960. Forecasting sales by exponentially weighted moving averages. Management Science 6: 324–342.
Also see
[TS] tsset – Declare data to be time-series data
[TS] tssmooth – Smooth and forecast univariate time-series data
Title
tssmooth ma – Moving-average filter

Syntax    Menu    Description    Options    Remarks and examples    Stored results    Methods and formulas    Reference    Also see
Syntax
Moving average with uniform weights
    tssmooth ma [type] newvar = exp [if] [in], window(#l [#c [#f]]) [replace]

Moving average with specified weights
    tssmooth ma [type] newvar = exp [if] [in], weights([numlist_l] <#c> [numlist_f]) [replace]
You must tsset your data before using tssmooth ma; see [TS] tsset.
exp may contain time-series operators; see [U] 11.4.4 Time-series varlists.
Menu
Statistics  >  Time series  >  Smoothers/univariate forecasters  >  Moving-average filter
Description
tssmooth ma creates a new series in which each observation is an average of nearby observations
in the original series.
In the first syntax, window() is required and specifies the span of the filter. tssmooth ma constructs
a uniformly weighted moving average of the expression.
In the second syntax, weights() is required and specifies the weights to be used. tssmooth ma
then applies the specified weights to construct a weighted moving average of the expression.
Options
window(#l [#c [#f]]) describes the span of the uniformly weighted moving average.
    #l specifies the number of lagged terms to be included, 0 ≤ #l ≤ one-half the number of observations in the sample.
    #c is optional and specifies whether to include the current observation in the filter. A 0 indicates exclusion and 1, inclusion. The current observation is excluded by default.
    #f is optional and specifies the number of forward terms to be included, 0 ≤ #f ≤ one-half the number of observations in the sample.
weights([numlist_l] <#c> [numlist_f]) is required for the weighted moving average and describes the span of the moving average, as well as the weights to be applied to each term in the average. The middle term literally is surrounded by < and >, so you might type weights(1/2 <3> 2/1).
    numlist_l is optional and specifies the weights to be applied to the lagged terms when computing the moving average.
    #c is required and specifies the weight to be applied to the current term.
    numlist_f is optional and specifies the weights to be applied to the forward terms when computing the moving average.
    The number of elements in each numlist is limited to one-half the number of observations in the sample.
replace replaces newvar if it already exists.
Overview
Moving averages are simple linear filters of the form

x̂_t = ( Σ_{i=−l}^{f} w_i x_{t+i} ) / ( Σ_{i=−l}^{f} w_i )

where x̂_t is the smoothed series, x_t is the original series, and w_i are the weights being applied.
Moving averages are used primarily to reduce noise in time-series data. Using moving averages to
isolate signals is problematic, however, because the moving averages themselves are serially correlated,
even when the underlying data series is not. Still, Chatfield (2004) discusses moving-average filters
and provides several specific moving-average filters for extracting certain trends.
. use http://www.stata-press.com/data/r13/sales1
. tsset
time variable: t, 1 to 50
delta: 1 unit
. tssmooth ma sm1 = sales, window(2 1 2)
The smoother applied was
(1/5)*[x(t-2) + x(t-1) + 1*x(t) + x(t+1) + x(t+2)]; x(t)= sales
We would like to smooth our series so that there is no autocorrelation in the noise. Below we
compute the noise as the difference between the smoothed series and the series itself. Then we use
ac (see [TS] corrgram) to check for autocorrelation in the noise.
. generate noise = sales-sm1
. ac noise
[Graph: autocorrelations of noise with Bartlett's formula for MA(q) 95% confidence bands; x axis Lag (0–25), y axis Autocorrelations of noise (−0.40 to 0.40)]
x̂_t = (1/9)(1x_{t−2} + 2x_{t−1} + 3x_t + 2x_{t+1} + 1x_{t+2})
In what follows, 1/2 does not mean one-half, it means the numlist 1 2:
. tssmooth ma sm2 = sales, weights( 1/2 <3> 2/1)
The smoother applied was
(1/9)*[1*x(t-2) + 2*x(t-1) + 3*x(t) + 2*x(t+1) + 1*x(t+2)]; x(t)= sales
. generate noise2 = sales-sm2
. ac noise2
[Graph: autocorrelations of noise2 with Bartlett's formula for MA(q) 95% confidence bands; x axis Lag (0–25), y axis Autocorrelations of noise2 (−0.40 to 0.40)]
The graph shows no significant evidence of autocorrelation in the noise from the second filter.
Technical note
tssmooth ma gives any missing observations a coefficient of zero in both the uniformly weighted
and weighted moving-average filters. This simply means that missing values or missing periods are
excluded from the moving average.
Sample restrictions, via if and in, cause the expression smoothed by tssmooth ma to be missing
for the excluded observations. Thus sample restrictions have the same effect as missing values in a
variable that is filtered in the expression. Also, gaps in the data that are longer than the span of the
filter will generate missing values in the filtered series.
Because the first l observations and the last f observations will be outside the span of the filter,
those observations will be set to missing in the moving-average series.
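These rules are easy to verify by hand, as in the following sketch (sm_check and sm_hand are illustrative names): away from missing values and the sample boundary, tssmooth ma must agree with an explicitly computed average.

* Sketch: a span-5 uniform moving average versus the hand-rolled version.
tssmooth ma double sm_check = sales, window(2 1 2)
generate double sm_hand = (L2.sales + L.sales + sales + F.sales + F2.sales)/5
assert abs(sm_check - sm_hand) < 1e-6 if !missing(sm_hand)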
Video example
Time series, part 6: Moving-average smoothers using tssmooth
Stored results
tssmooth ma stores the following in r():
Scalars
    r(N)            number of observations
    r(w0)           weight on the current observation
    r(wlead#)       weight on lead #, if leads are specified
    r(wlag#)        weight on lag #, if lags are specified
Macros
    r(method)       smoothing method
    r(exp)          expression specified
    r(timevar)      time variable specified in tsset
    r(panelvar)     panel variable specified in tsset
Reference
Chatfield, C. 2004. The Analysis of Time Series: An Introduction. 6th ed. Boca Raton, FL: Chapman & Hall/CRC.
Also see
[TS] tsset – Declare data to be time-series data
[TS] tssmooth – Smooth and forecast univariate time-series data
Title
tssmooth nl – Nonlinear filter

Syntax    Menu    Description    Options    Remarks and examples    Stored results    Methods and formulas    Also see
Syntax
    tssmooth nl [type] newvar = exp [if] [in], smoother(smoother[, twice]) [replace]

where smoother is specified as Sm [Sm [. . .]] and Sm is one of

    1|2|3|4|5|6|7|8|9 [R]
    3 [R] S S|R S|R . . .
    E
    H
The numbers specified in smoother represent the span of a running median smoother. For example,
a number 3 specifies that each value be replaced by the median of the point and the two adjacent
data values. The letter H indicates that a Hanning linear smoother, which is a span-3 smoother with
binomial weights, be applied.
The letters E, S, and R are three refinements that can be combined with the running median and
Hanning smoothers. First, the end points of a smooth can be given special treatment. This is specified
by the E operator. Second, smoothing by 3, the span-3 running median, tends to produce flat-topped
hills and valleys. The splitting operator, S, splits these repeated values, applies the end-point operator
to them, and then rejoins the series. Third, it is sometimes useful to repeat an odd-span median
smoother or the splitting operator until the smooth no longer changes. Following a digit or an S with
an R specifies this type of repetition.
Finally, the twice operator specifies that after smoothing, the smoother be reapplied to the resulting
rough, and any recovered signal be added back to the original smooth.
Letters may be specified in lowercase, if preferred. Examples of smoother[, twice] include

    3RSSH            3rssh            3RSSH,twice       3rssh,twice
    4253H            4253h            4253H,twice       4253h,twice
    43RSR2H,twice    43rsr2h,twice
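For example, the last compound smoother in this list can be requested as in the following sketch (nlsm is an illustrative variable name):

* Sketch: apply the compound nonlinear smoother 43RSR2H, twice.
tssmooth nl nlsm = sales, smoother(43rsr2h, twice)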
You must tsset your data before using tssmooth nl; see [TS] tsset.
exp may contain time-series operators; see [U] 11.4.4 Time-series varlists.
Menu
Statistics  >  Time series  >  Smoothers/univariate forecasters  >  Nonlinear filter
Description
tssmooth nl uses nonlinear smoothers to identify the underlying trend in a series.
Options
Main
smoother(smoother[, twice]) is required; it specifies the nonlinear smoother to be used.
replace replaces newvar if it already exists.
Stored results
tssmooth nl stores the following in r():
Scalars
    r(N)            number of observations
Macros
    r(method)       nl
    r(smoother)     specified smoother
    r(timevar)      time variable specified in tsset
    r(panelvar)     panel variable specified in tsset
Also see
[TS] tsset – Declare data to be time-series data
[TS] tssmooth – Smooth and forecast univariate time-series data
Title
tssmooth shwinters – Holt-Winters seasonal smoothing

Syntax    Menu    Description    Options    Remarks and examples    Stored results    Methods and formulas    Acknowledgment    References    Also see
Syntax
    tssmooth shwinters [type] newvar = exp [if] [in] [, options]
options                  Description
Main
  replace                replace newvar if it already exists
  parms(#α #β #γ)        use #α, #β, and #γ as smoothing parameters
  samp0(#)               use # observations to obtain initial values for the recursion
  s0(#cons #lt)          use #cons and #lt as initial values for the recursion
  forecast(#)            use # periods for the out-of-sample prediction
  period(#)              use # for the period of the seasonality
  additive               use the additive seasonal Holt-Winters method
Options
  sn0_0(varname)         use the initial seasonal values in varname
  sn0_v(newvar)          store the estimated initial seasonal values in newvar
  snt_v(newvar)          store the final year's estimated seasonal values in newvar
  normalize              normalize the seasonal values
  altstarts              use an alternative method for computing the starting values
Maximization
  maximize_options       control the maximization process; seldom used
  from(#α #β #γ)         use #α, #β, and #γ as starting values in the maximization
You must tsset your data before using tssmooth shwinters; see [TS] tsset.
exp may contain time-series operators; see [U] 11.4.4 Time-series varlists.
Menu
Statistics  >  Time series  >  Smoothers/univariate forecasters  >  Holt-Winters seasonal smoothing

Description
tssmooth shwinters performs the seasonal Holt-Winters method on a user-specified expression, which is usually just a variable name, and generates a new variable containing the forecasted series.
Options
Main
Options
sn0_0(varname) specifies the initial seasonal values to use. varname must contain a complete year's worth of seasonal values, beginning with the first observation in the estimation sample. For example, if you have monthly data, the first 12 observations of varname must contain nonmissing data. sn0_0() cannot be used with sn0_v().
sn0_v(newvar) stores in newvar the initial seasonal values after they have been estimated. sn0_v() cannot be used with sn0_0().
snt_v(newvar) stores in newvar the seasonal values for the final year's worth of data.
normalize specifies that the seasonal values be normalized. In the multiplicative model, they are normalized to sum to one. In the additive model, the seasonal values are normalized to sum to zero.
altstarts uses an alternative method to compute the starting values for the constant, the linear, and the seasonal terms. The default and the alternative methods are described in Methods and formulas. altstarts may not be specified with s0().
Maximization
maximize_options controls the process for solving for the optimal α, β, and γ when the parms() option is not specified.
maximize_options: nodifficult, technique(algorithm_spec), iterate(#), [no]log, trace, gradient, showstep, hessian, showtolerance, tolerance(#), ltolerance(#), nrtolerance(#), and nonrtolerance; see [R] maximize. These options are seldom used.
from(#α #β #γ), 0 < #α < 1, 0 < #β < 1, and 0 < #γ < 1, specifies starting values from which the optimal values of α, β, and γ will be obtained. If from() is not specified, from(.5 .5 .5) is used.
Introduction
The seasonal Holt-Winters methods forecast univariate series that have a seasonal component. If the amplitude of the seasonal component grows with the series, the Holt-Winters multiplicative method should be used. If the amplitude of the seasonal component is not growing with the series, the Holt-Winters additive method should be used. Abraham and Ledolter (1983), Bowerman, O'Connell, and Koehler (2005), and Montgomery, Johnson, and Gardiner (1990) provide good introductions to the Holt-Winters methods in recursive univariate forecasting methods. Chatfield (2001, 2004) provides introductions in the broader context of modern time-series analysis.
Like the other recursive methods in tssmooth, tssmooth shwinters uses the information stored
by tsset to detect panel data. When applied to panel data, each series is smoothed separately, and
the starting values are computed separately for each panel. If the smoothing parameters are chosen
to minimize the in-sample sum-of-squared forecast errors, the optimization is performed separately
on each panel.
When there are missing values at the beginning of the series, the sample begins with the first
nonmissing observation. Missing values after the first nonmissing observation are filled in with
forecasted values.
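A typical invocation looks like the following sketch (quarterly data are assumed, so the seasonal period is 4; shw1 is an illustrative variable name):

* Sketch: multiplicative seasonal Holt-Winters smoothing with a
* four-quarter period and a one-year out-of-sample forecast.
tssmooth shwinters shw1 = sales, period(4) forecast(4)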
The graph below describes the fit and the forecast that was obtained.
. line sales shw1 t, title("Multiplicative Holt-Winters forecast")
> xtitle(Time) ytitle(Sales)
[Graph: sales and the multiplicative Holt-Winters smooth shw1 plotted against Time (1990q1–2000q1); y axis Sales (95–115)]
The output reveals that the multiplicative model has a better in-sample fit, and the graph below
shows that the forecast from the multiplicative model is higher than that of the additive model.
. line shw1 shwa t if t>=tq(2000q1), title("Multiplicative and additive"
> "Holt-Winters forecasts") xtitle("Time") ytitle("Sales") legend(cols(1))
[Graph: multiplicative (shw1) and additive (shwa) Holt-Winters forecasts over 2000q1–2001q1; y axis Sales (108–113)]
To check whether the estimated seasonal components are intuitively sound, we list the last year's seasonal components.
             t         seas
 37.    1999q1   -2.7533393
 38.    1999q2   -.91752566
 39.    1999q3    1.8082417
 40.    1999q4    1.8626233
The output indicates that the signs of the estimated seasonal components agree with our intuition.
Stored results
tssmooth shwinters stores the following in r():
Scalars
    r(N)            number of observations
    r(alpha)        α smoothing parameter
    r(beta)         β smoothing parameter
    r(gamma)        γ smoothing parameter
    r(prss)         penalized sum-of-squared errors, if parms() not specified
    r(rss)          sum-of-squared errors
    r(rmse)         root mean squared error
    r(N_pre)        number of observations used in calculating starting values
    r(s2_0)         initial value for linear term
    r(s1_0)         initial value for constant term
    r(linear)       final value of linear term
    r(constant)     final value of constant term
    r(period)       period of the seasonality
Macros
    r(method)       shwinters, additive or shwinters, multiplicative
    r(normalize)    normalize, if specified
    r(exp)          expression specified
    r(timevar)      time variable specified in tsset
    r(panelvar)     panel variable specified in tsset
P = Σ_{t=1}^{T} [ {x_t − x̂_t(α̃, β̃, γ̃)}² + I_{|f(α̃)|>12}(|f(α̃)| − 12)² + I_{|f(β̃)|>12}(|f(β̃)| − 12)² + I_{|f(γ̃)|>12}(|f(γ̃)| − 12)² ]

where f(x) = ln{x/(1 − x)}. The penalty term is zero unless one of the parameters is close to the boundary. When one of the parameters is close to the boundary, the penalty term will help to obtain convergence.
Holt-Winters seasonal multiplicative procedure
As with the other recursive methods in tssmooth, there are three aspects to implementing the Holt-Winters seasonal multiplicative procedure: the forecasting equation, the initial values, and the updating equations. Unlike in the other methods, the data are now assumed to be seasonal with period L.

Given the estimates a(t), b(t), and s(t + τ − L), a τ-step-ahead point forecast of x_{t+τ}, denoted by ŷ_{t+τ}, is

ŷ_{t+τ} = {a(t) + b(t)τ} s(t + τ − L)

Given the smoothing parameters α, β, and γ, the updating equations are

a(t) = α x_t / s(t − L) + (1 − α){a(t − 1) + b(t − 1)}
b(t) = β{a(t) − a(t − 1)} + (1 − β)b(t − 1)
s(t) = γ x_t / a(t) + (1 − γ)s(t − L)

To restrict the seasonal terms to sum to 1 over each year, specify the normalize option.

The updating equations require the L + 2 initial values a(0), b(0), s(1 − L), s(2 − L), . . . , s(0). Two methods calculate the initial values with the first m years, each of which contains L seasons. By default, m is set to the number of seasons in half the sample.

The initial value of the trend component, b(0), can be estimated by

b(0) = (x̄_m − x̄_1) / {(m − 1)L}

where x̄_m is the average level of x_t in year m and x̄_1 is the average level of x_t in the first year.

The initial value for the linear term, a(0), is then calculated as

a(0) = x̄_1 − (L/2) b(0)
To calculate the initial values for the seasons 1, 2, . . . , L, we first calculate the deviation-adjusted values,

S(t) = x_t / [ x̄_i − {(L + 1)/2 − j} b(0) ]

where i is the year that corresponds to time t, j is the season that corresponds to time t, and x̄_i is the average level of x_t in year i.
Next, for each season l = 1, 2, . . . , L, we define s̄_l as the average S(t) over the years. That is,

s̄_l = (1/m) Σ_{k=0}^{m−1} S_{l+kL}    for l = 1, 2, . . . , L

The initial seasonal estimates are then normalized:

s⁰_l = s̄_l / ( Σ_{l=1}^{L} s̄_l )    for l = 1, 2, . . . , L
Technical note
If there are missing values in the first few years, a small value of m can cause the starting-value methods for the seasonal terms to fail. Here you should either specify a larger value of m by using samp0() or directly specify the seasonal starting values by using the sn0_0() option.
Holt-Winters seasonal additive procedure
The additive method fits series of the form

x_t = (β_0 + β_1 t) + s_t + ε_t

As in the multiplicative case, there are three smoothing parameters, α, β, and γ, which can either be set or chosen to minimize the in-sample sum-of-squared forecast errors. The updating equations are

a(t) = α{x_t − s(t − L)} + (1 − α){a(t − 1) + b(t − 1)}
b(t) = β{a(t) − a(t − 1)} + (1 − β)b(t − 1)
s(t) = γ{x_t − a(t)} + (1 − γ)s(t − L)

and the τ-step-ahead forecast is

x̂_{t+τ} = a(t) + b(t)τ + s(t + τ − L)
As in the multiplicative case, there are two methods for setting the initial values.

The default method is to obtain the initial values for a(0), b(0), s(1 − L), . . . , s(0) from the regression

x_t = a(0) + b(0)t + β_{s,1−L} D_1 + β_{s,2−L} D_2 + ⋯ + β_{s,0} D_L + e_t

where the D_1, . . . , D_L are dummy variables with

D_i = 1 if t corresponds to season i, and 0 otherwise

When altstarts is specified, an alternative method is used that regresses the x_t series on a time variable that has been normalized to equal one in the first period in the sample and on a constant term. b(0) is set to the estimated coefficient on the time variable, and a(0) is set to the estimated constant term. Then the demeaned series x̃_t = x_t − μ̂ is created, where μ̂ is the mean of the x_t. The x̃_t are regressed on L seasonal dummy variables. The lth seasonal starting value is then set to β̂_l, where β̂_l is the estimated coefficient on the lth seasonal dummy variable. The sample in both the regression and the mean calculation is restricted to include the first samp0 years, where, by default, samp0() includes half the data.
Acknowledgment
We thank Nicholas J. Cox of the Department of Geography at Durham University, UK, and coeditor
of the Stata Journal for his helpful comments.
References
Abraham, B., and J. Ledolter. 1983. Statistical Methods for Forecasting. New York: Wiley.
Bowerman, B. L., R. T. O'Connell, and A. B. Koehler. 2005. Forecasting, Time Series, and Regression: An Applied Approach. 4th ed. Pacific Grove, CA: Brooks/Cole.
Chatfield, C. 2001. Time-Series Forecasting. London: Chapman & Hall/CRC.
———. 2004. The Analysis of Time Series: An Introduction. 6th ed. Boca Raton, FL: Chapman & Hall/CRC.
Chatfield, C., and M. Yar. 1988. Holt-Winters forecasting: Some practical issues. Statistician 37: 129–140.
Holt, C. C. 2004. Forecasting seasonals and trends by exponentially weighted moving averages. International Journal of Forecasting 20: 5–10.
Montgomery, D. C., L. A. Johnson, and J. S. Gardiner. 1990. Forecasting and Time Series Analysis. 2nd ed. New York: McGraw–Hill.
Winters, P. R. 1960. Forecasting sales by exponentially weighted moving averages. Management Science 6: 324–342.
Also see
[TS] tsset – Declare data to be time-series data
[TS] tssmooth – Smooth and forecast univariate time-series data
Title
ucm – Unobserved-components model

Syntax    Menu    Description    Options    Remarks and examples    Stored results    Methods and formulas    References    Also see
Syntax
    ucm depvar [indepvars] [if] [in] [, options]
options                        Description
Model
  model(model)                 specify trend and idiosyncratic components
  seasonal(#)                  include a seasonal component with a period of # time units
  cycle(# [, frequency(#f)])   include a cycle component of order # and optionally set initial frequency to #f, 0 < #f < π; cycle() may be specified up to three times
  constraints(constraints)     apply specified linear constraints
  collinear                    keep collinear variables
SE/Robust
  vce(vcetype)                 vcetype may be oim or robust
Reporting
  level(#)                     set confidence level; default is level(95)
  nocnsreport                  do not display constraints
  display_options              control display of omitted variables, column formats, and row spacing
Maximization
  maximize_options             control the maximization process
  coeflegend                   display legend instead of statistics
model          Description
  rwalk        random-walk model; the default
  none         no trend or idiosyncratic component
  ntrend       no trend component, but include idiosyncratic component
  dconstant    deterministic constant with idiosyncratic component
  llevel       local-level model
  dtrend       deterministic-trend model with idiosyncratic component
  lldtrend     local-level model with deterministic trend
  rwdrift      random-walk-with-drift model
  lltrend      local-linear-trend model
  strend       smooth-trend model
  rtrend       random-trend model
You must tsset your data before using ucm; see [TS] tsset.
indepvars may contain factor variables; see [U] 11.4.3 Factor variables.
indepvars and depvar may contain time-series operators; see [U] 11.4.4 Time-series varlists.
by, fp, rolling, and statsby are allowed; see [U] 11.1.10 Prefix commands.
coeflegend does not appear in the dialog box.
See [U] 20 Estimation and postestimation commands for more capabilities of estimation commands.
Menu
Statistics  >  Time series  >  Unobserved-components model
Description
Unobserved-components models (UCMs) decompose a time series into trend, seasonal, cyclical,
and idiosyncratic components and allow for exogenous variables. ucm estimates the parameters of
UCMs by maximum likelihood.
All the components are optional. The trend component may be first-order deterministic or it may
be first-order or second-order stochastic. The seasonal component is stochastic; the seasonal effects
at each time period sum to a zero-mean finite-variance random variable. The cyclical component is
modeled by the stochastic-cycle model derived by Harvey (1989).
Options
Model
model(model) specifies the trend and idiosyncratic components. The default is model(rwalk). The
available models are listed in Syntax and discussed in detail in Models for the trend and idiosyncratic
components under Remarks and examples below.
seasonal(#) adds a stochastic-seasonal component to the model. # is the period of the season, that
is, the number of time-series observations required for the period to complete.
cycle(#) adds a stochastic-cycle component of order # to the model. The order # must be 1, 2, or
3. Multiple cycles are added by repeating the cycle(#) option with up to three cycles allowed.
cycle(#, frequency(#f)) specifies #f as the initial value for the central-frequency parameter in the stochastic-cycle component of order #. #f must be in the interval (0, π).
constraints(constraints), collinear; see [R] estimation options.
SE/Robust
vce(vcetype) specifies the estimator for the variancecovariance matrix of the estimator.
vce(oim), the default, causes ucm to use the observed information matrix estimator.
vce(robust) causes ucm to use the Huber/White/sandwich estimator.
Reporting
Maximization
maximize options: difficult, technique(algorithm spec), iterate(#), no log, trace,
gradient, showstep, hessian, showtolerance, tolerance(#), ltolerance(#),
nrtolerance(#), and from(matname); see [R] maximize for all options except from(), and
see below for information on from().
from(matname) specifies initial values for the maximization process. from(b0) causes ucm to
begin the maximization algorithm with the values in b0. b0 must be a row vector; the number
of columns must equal the number of parameters in the model; and the values in b0 must be
in the same order as the parameters in e(b).
If your model fails to converge, try using the difficult option. Also see the technical note below example 5.
The following option is available with ucm but is not shown in the dialog box:
coeflegend; see [R] estimation options.
An introduction to UCMs
UCMs decompose a time series into trend, seasonal, cyclical, and idiosyncratic components and allow for exogenous variables. Formally, UCMs can be written as

y_t = μ_t + γ_t + ψ_t + x_t β + ε_t    (1)

where y_t is the dependent variable, μ_t is the trend component, γ_t is the seasonal component, ψ_t is the cyclical component, β is a vector of fixed parameters, x_t is a vector of exogenous variables, and ε_t is the idiosyncratic component.
By placing restrictions on μ_t and ε_t, Harvey (1989) derived a series of models for the trend and the idiosyncratic components. These models are briefly described in Syntax and are further discussed in Models for the trend and idiosyncratic components. To these models, Harvey (1989) added models for the seasonal and cyclical components, and he also allowed for the presence of exogenous variables.
It is rare that a UCM contains all the allowed components. For instance, the seasonal component
is rarely needed when modeling deseasonalized data.
Harvey (1989) and Durbin and Koopman (2012) show that UCMs can be written as state-space
models that allow the parameters of a UCM to be estimated by maximum likelihood. In fact, ucm
uses sspace (see [TS] sspace) to perform the estimation calculations; see Methods and formulas for
details.
After estimating the parameters, predict can produce in-sample predictions or out-of-sample
forecasts; see [TS] ucm postestimation. After estimating the parameters of a UCM that contains
a cyclical component, estat period converts the estimated central frequency to an estimated
central period and psdensity estimates the spectral density implied by the model; see [TS] ucm
postestimation and the examples below.
We illustrate the basic approach of analyzing data with UCMs, and then we discuss the details of
the different trend models in Models for the trend and idiosyncratic components.
Although the methods implemented in ucm have been widely applied by economists, they are general
time-series techniques and may be of interest to researchers from other disciplines. In example 8, we
analyze monthly data on the reported cases of mumps in New York City.
. use http://www.stata-press.com/data/r13/unrate
. tsline unrate, name(unrate)
[Graph: monthly U.S. unemployment rate (unrate) plotted against Month, 1950m1–2010m1]
This series looks like it might be well approximated by a random-walk model. Formally, a random-walk model is given by

y_t = μ_t
μ_t = μ_{t−1} + η_t

The random-walk model is so frequently applied, at least as a starting model, that it is the default model for ucm. In the output below, we fit the random-walk model to the unemployment data.
                                                    Number of obs   =        757

                 |                 OIM
          unrate |      Coef.   Std. Err.      z    P>|z|     [95% Conf. Interval]
      var(level) |   .0467196    .002403    19.44   0.000     .0420098    .0514294
The output indicates that the model is nonstationary, as all random-walk models are.
We consider a richer model in the next example.
Example 2
We suspect that there should be a stationary cyclical component that produces serially correlated
shocks around the random-walk trend. Harvey (1989) derived a stochastic-cycle model for these
stationary cyclical components.
The stochastic-cycle model has three parameters: the frequency at which the random components
are centered, a damping factor that parameterizes the dispersion of the random components around
the central frequency, and the variance of the stochastic-cycle process that acts as a scale factor.
                                                    Number of obs   =        757
                                                    Wald chi2(2)    =   26650.81
Log likelihood = 118.88421                          Prob > chi2     =     0.0000

                 |                 OIM
          unrate |      Coef.   Std. Err.      z    P>|z|     [95% Conf. Interval]
       frequency |   .0933466   .0103609     9.01   0.000     .0730397    .1136535
         damping |   .9820003   .0061121   160.66   0.000     .9700207    .9939798
      var(level) |   .0143786   .0051392     2.80   0.003      .004306    .0244511
     var(cycle1) |   .0270339   .0054343     4.97   0.000     .0163829    .0376848
The estimated central frequency for the cyclical component is small, implying that the cyclical
component is centered on low-frequency components. The high-damping factor indicates that all the
components from this cyclical component are close to the estimated central frequency. The estimated
variance of the stochastic-cycle process is small but significant.
We use estat period to convert the estimate of the central frequency to an estimated central
period.
. estat period

          cycle1 |      Coef.   Std. Err.     [95% Conf. Interval]
          period |   67.31029   7.471004      52.66739    81.95319
       frequency |   .0933466   .0103609      .0730397    .1136535
         damping |   .9820003   .0061121      .9700207    .9939798
Because we have monthly data, the estimated central period of 67.31 implies that the cyclical
component is composed of random components that occur around a central periodicity of about 5.61
years. This estimate falls within the conventional Burns and Mitchell (1946) definition of business-cycle
shocks occurring between 1.5 and 8 years.
We can convert the estimated parameters of the cyclical component to an estimated spectral
density of the cyclical component, as described by Harvey (1989). The spectral density of the cyclical
component describes the relative importance of the random components at different frequencies; see
Frequency-domain concepts used in the stochastic-cycle model for details. We use psdensity (see
[TS] psdensity) to obtain the spectral density of the cyclical component implied by the estimated
parameters, and we use twoway line (see [G-2] graph twoway line) to plot the estimated spectral
density.
[Graph: estimated spectral density of the cyclical component plotted against Frequency]
The estimated spectral density shows that the cyclical component is composed of random components
that are tightly distributed at the low-frequency peak.
y_t = ρy_{t−1} + ε_t

where ε_t is a zero-mean, covariance-stationary process with finite variance σ², and |ρ| < 1 so that y_t is covariance stationary. The first-order autocorrelation of this AR(1) process is ρ.

Below are plots of simulated data when ρ is set to 0, −0.8, and 0.8. When ρ = 0, the data are i.i.d. When ρ = −0.8, the value today is strongly negatively correlated with the value yesterday, so this case should be a prototypical high-frequency noise example. When ρ = 0.8, the value today is strongly positively correlated with the value yesterday, so this case should be a prototypical low-frequency shock example.

[Graph: three panels of simulated AR(1) data plotted against Time, for ρ = 0, ρ = −0.8, and ρ = 0.8]

The plots above confirm our conjectures. The plot when ρ = −0.8 contains fewer runs above or below the mean, and it is more jagged than the i.i.d. plot. The plot when ρ = 0.8 contains more runs above or below the mean, and it is smoother than the i.i.d. plot.
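Plots of this kind are easy to simulate, as in the following sketch (the seed, sample size, and variable names are illustrative):

* Sketch: simulate AR(1) series with rho = 0, -0.8, and 0.8.
clear
set seed 2138
set obs 200
generate t = _n
tsset t
generate eps  = rnormal()
generate y0   = eps                     // rho = 0: i.i.d.
generate yneg = eps in 1
generate ypos = eps in 1
replace  yneg = -0.8*L.yneg + eps in 2/l
replace  ypos =  0.8*L.ypos + eps in 2/l
tsline y0 yneg ypos, legend(order(1 "rho = 0" 2 "rho = -0.8" 3 "rho = 0.8"))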
Below we plot the spectral densities for the AR(1) model with ρ = 0, ρ = −0.8, and ρ = 0.8.

[Graph: spectral densities of the AR(1) process plotted against Frequency for ρ = 0, ρ = −0.8, and ρ = 0.8]

The high-frequency components are much more important to the AR(1) process with ρ = −0.8 than to the i.i.d. process with ρ = 0. The low-frequency components are much more important to the AR(1) process with ρ = 0.8 than to the i.i.d. process.
Technical note
Autoregressive moving-average (ARMA) models parameterize the autocorrelation in a time series by allowing today's value to be a weighted average of past values and a weighted average of past i.i.d. shocks; see Hamilton (1994), Wei (2006), and [TS] arima for introductions and a Stata implementation.
The intuitive ARMA parameterization has many nice features, including that one can easily rewrite
the ARMA model as a weighted average of past i.i.d. shocks to trace how a shock feeds through the
system.
Although it is easy to obtain the spectral density of an ARMA process, the parameters themselves
provide limited information about the underlying spectral density.
In contrast, the parameters of the stochastic-cycle parameterization of autocorrelation in a time series directly provide information about the underlying spectral density. The parameter λ_0 is the central frequency at which the random components are clustered. If λ_0 is small, then the model is centered on low-frequency components. If λ_0 is close to π, then the model is centered on high-frequency components. The parameter ρ is the damping factor that indicates how tightly clustered the random components are at the central frequency λ_0. If ρ is close to 0, there is no clustering of the random components. If ρ is close to 1, the random components are tightly distributed at the central frequency λ_0.

In the graph below, we draw the spectral densities implied by stochastic-cycle models with four sets of parameters: λ_0 = π/4, ρ = 0.8; λ_0 = π/4, ρ = 0.9; λ_0 = 4π/5, ρ = 0.8; and λ_0 = 4π/5, ρ = 0.9. The graph below illustrates that λ_0 is the central frequency at which the other important random components are distributed. It also illustrates that the damping parameter ρ controls the dispersion of the important components at the central frequency.
[Graph: spectral densities implied by stochastic-cycle models with λ_0 = π/4, ρ = 0.8; λ_0 = π/4, ρ = 0.9; λ_0 = 4π/5, ρ = 0.8; and λ_0 = 4π/5, ρ = 0.9, plotted over frequencies 0 to π]
                                                    Number of obs   =        757
                                                    Wald chi2(4)    =    7681.33
Log likelihood = 146.28326                          Prob > chi2     =     0.0000

                 |                 OIM
          unrate |      Coef.   Std. Err.      z    P>|z|     [95% Conf. Interval]
-----------------+---------------------------------------------------------------
cycle1           |
       frequency |   2.882382   .0668017    43.15   0.000     2.751453    3.013311
         damping |   .7004295    .125157     5.60   0.000     .4551262    .9457328
-----------------+---------------------------------------------------------------
cycle2           |
       frequency |   .0667929   .0206848     3.23   0.001     .0262514    .1073344
         damping |   .9074708   .0142273    63.78   0.000     .8795858    .9353559
-----------------+---------------------------------------------------------------
      var(level) |   .0207704   .0039669     5.24   0.000     .0129953    .0285454
     var(cycle1) |   .0027886   .0014363     1.94   0.026            0    .0056037
     var(cycle2) |    .002714    .001028     2.64   0.004     .0006991    .0047289
The output provides some support for the existence of a second, high-frequency cycle. The high-frequency components are centered at 2.88, whereas the low-frequency components are centered at 0.067. That the estimated damping factor is 0.70 for the high-frequency cycle whereas the estimated damping factor for the low-frequency cycle is 0.91 indicates that the high-frequency components are more diffusely distributed at 2.88 than the low-frequency components are at 0.067.
We obtain and plot the estimated spectral densities to get another look at these results.
. psdensity sdensity2a omega2a
. psdensity sdensity2b omega2b, cycle(2)
[Graph: UCM cycle 1 and UCM cycle 2 spectral densities plotted against Frequency]
The estimated spectral densities indicate that we have found two distinct cyclical components.
It does not matter whether we specify omega2a or omega2b to be the x-axis variable, because
they are equal to each other.
Technical note
That the estimated spectral densities in the previous example do not overlap is important for
parameter identification. Although the parameters are identified in large-sample theory, we have found
it difficult to estimate the parameters of two cyclical components when the spectral densities overlap.
When the spectral densities of two cyclical components overlap, the parameters may not be well
identified and the optimization procedure may not converge.
The random-walk model corresponds to an ARIMA model that is first-order integrated and has an i.i.d. error term. In other words, the random-walk UCM and the ARIMA(0,1,0) are asymptotically equivalent. Thus

ucm unrate

and

arima unrate, arima(0,1,0) noconstant

produce asymptotically equivalent results. The output below is from fitting an ARIMA(2,1,1) model to unrate, which provides a richer parameterization of the autocorrelation.
                                                    Number of obs   =        756
                                                    Wald chi2(3)    =     683.34
Log likelihood = 138.8364                           Prob > chi2     =     0.0000

                 |                 OPG
        D.unrate |      Coef.   Std. Err.      z    P>|z|     [95% Conf. Interval]
ARMA             |
              ar |
             L1. |   .5398016   .0586304     9.21   0.000     .4248882    .6547151
             L2. |   .2468148   .0359396     6.87   0.000     .1763744    .3172551
              ma |
             L1. |  -.5146506   .0632838    -8.13   0.000    -.6386845   -.3906167
          /sigma |   .2013332   .0032644    61.68   0.000     .1949351    .2077313
Note: The test of the variance against zero is one sided, and the two-sided
      confidence interval is truncated at zero.
. psdensity sdensity_arma omega_arma
. line sdensity_arma omega_arma
[Graph: estimated spectral density implied by the ARIMA(2,1,1) fit, plotted against Frequency]
The estimated spectral density from the ARIMA(2,1,1) has a similar shape to the plot obtained by
combining the two spectral densities estimated from the stochastic-cycle model in example 3. For
this particular application, the estimated central frequencies of the two cyclical components from the
stochastic-cycle model provide information about the business-cycle component and the high-frequency
component that is not easily obtained from the ARIMA(2,1,1) model. On the other hand, it is easier
to work out the impulse-response function for the ARMA model than for the stochastic-cycle model,
implying that the ARMA model is easier to use when tracing the effect of a shock feeding through
the system.
Example 5
[Graph: weekly initial claims for unemployment insurance plotted against Date, 01jan1970–01jan2010; y axis roughly 200–700]
This series looks like it was generated by a random walk with extra noise, so we want to use a random-walk model that includes an additional random term. This structure leads to the model occasionally being called the random-walk-plus-noise model, but it is more commonly known as the local-level model in the UCM literature.
The local-level model models the trend as a random walk and models the idiosyncratic components as independent and identically distributed components. Formally, the local-level model specifies the observed time series y_t, for t = 1, . . . , T, as

y_t = μ_t + ε_t
μ_t = μ_{t−1} + η_t

where ε_t ~ i.i.d. N(0, σ²_ε) and η_t ~ i.i.d. N(0, σ²_η) and are mutually independent.
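A fit of exactly this form can be requested with the model(llevel) option, as in the following sketch (applied to the icsa series used in this example):

* Sketch: fit the local-level (random-walk-plus-noise) model.
ucm icsa, model(llevel)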
                                                    Number of obs   =       2303

                 |                 OIM
            icsa |      Coef.   Std. Err.      z    P>|z|     [95% Conf. Interval]
      var(level) |    116.558   8.806587    13.24   0.000     99.29745    133.8186
       var(icsa) |   124.2715   7.615506    16.32   0.000     109.3454    139.1976
Technical note
The estimation procedure will not always converge when estimating the parameters of the local-level
model. If the series does not vary enough in the random level, modeled by the random walk, and in
the stationary shocks around the random level, the estimation procedure will not converge because it
will be unable to set the variance of one of the two components to 0.
Take another look at the graphs of unrate and icsa. The extra noise around the random level
that can be seen in the graph of icsa allows us to estimate both variances.
A closely related point is that it is difficult to estimate the parameters of a local-level model with
a stochastic-cycle component because the series must have enough variation to identify the variance
of the random-walk component, the variance of the idiosyncratic term, and the parameters of the
stochastic-cycle component. In some cases, series that look like candidates for the local-level model
are best modeled as random-walk models with stochastic-cycle components.
In fact, convergence can be a problem for most of the models in ucm. Convergence problems
occur most often when there is insufficient variation to estimate the variances of the components in
the model. When there is insufficient variation to estimate the variances of the components in the
model, the optimization routine will fail to converge as it attempts to set the variance equal to 0.
This usually shows up in the iteration log when the log likelihood gets stuck at a particular value and
the message (not concave) or (backed up) is displayed repeatedly. When this happens, use the
iterate() option to limit the number of iterations, look to see which of the variances is being driven
to 0, and drop that component from the model. (This technique is a method to obtain convergence
to interpretable estimates, not a model-selection method.)
Example 6
We might suspect that there is some serial correlation in the idiosyncratic shock. Alternatively,
we could include a cyclical component to model the stationary time-dependence in the series. In the
example below, we add a stochastic-cycle model for the stationary cyclical process, but we drop
the idiosyncratic term and use a random-walk model instead of the local-level model. We change
the model because it is difficult to estimate the variance of the idiosyncratic term along with the
parameters of a stationary cyclical component.
. ucm icsa, model(rwalk) cycle(1)
searching for initial values ....................
(setting technique to bhhh)
Iteration 0:   log likelihood = -10008.167
Iteration 1:   log likelihood = -10007.272
Iteration 2:   log likelihood = -10007.206  (backed up)
Iteration 3:   log likelihood =  -10007.17  (backed up)
Iteration 4:   log likelihood = -10007.148  (backed up)
(switching technique to nr)
Iteration 5:   log likelihood = -10007.137  (not concave)
Iteration 6:   log likelihood = -9885.1932  (not concave)
Iteration 7:   log likelihood = -9884.1636
Iteration 8:   log likelihood = -9881.6478
Iteration 9:   log likelihood = -9881.4496
Iteration 10:  log likelihood = -9881.4441
Iteration 11:  log likelihood = -9881.4441
Refining estimates:
Iteration 0:   log likelihood = -9881.4441
Iteration 1:   log likelihood = -9881.4441
Unobserved-components model
Components: random walk, order 1 cycle
Sample: 07jan1967 - 19feb2011                       Number of obs   =       2303
                                                    Wald chi2(2)    =      23.04
Log likelihood = -9881.4441                         Prob > chi2     =     0.0000

                 |                 OIM
            icsa |      Coef.   Std. Err.      z    P>|z|     [95% Conf. Interval]
       frequency |   1.469633   .3855657     3.81   0.000     .7139385    2.225328
         damping |   .1644576   .0349537     4.71   0.000     .0959495    .2329656
      var(level) |   97.90982   8.320047    11.77   0.000     81.60282    114.2168
     var(cycle1) |   149.7323   9.980798    15.00   0.000     130.1703    169.2943
Although the output indicates that the model fits well, the small estimate of the damping parameter
indicates that the random components will be widely distributed at the central frequency. To get a
better idea of the dispersion of the components, we look at the estimated spectral density of the
stationary cyclical component.
[Graph: estimated spectral density of the stationary cyclical component plotted against Frequency]
The graph shows that the random components that make up the cyclical component are diffusely
distributed at a central frequency.
In the output below, we include 2 lags of the dependent variable in the random-walk UCM.
. ucm icsa L(1/2).icsa, model(rwalk)
searching for initial values ..........
(setting technique to bhhh)
Iteration 0:   log likelihood = -10044.209
Iteration 1:   log likelihood = -9975.8312
Iteration 2:   log likelihood = -9953.5727
Iteration 3:   log likelihood = -9936.7489
Iteration 4:   log likelihood = -9927.2306
(switching technique to nr)
Iteration 5:   log likelihood = -9918.9538
Iteration 6:   log likelihood = -9890.8306
Iteration 7:   log likelihood =  -9889.562
Iteration 8:   log likelihood = -9889.5608
Iteration 9:   log likelihood = -9889.5608
Refining estimates:
Iteration 0:   log likelihood = -9889.5608
Iteration 1:   log likelihood = -9889.5608
Unobserved-components model
Components: random walk
Sample: 21jan1967 - 19feb2011                       Number of obs   =       2301
                                                    Wald chi2(2)    =     271.88
Log likelihood = -9889.5608                         Prob > chi2     =     0.0000

                 |                 OIM
            icsa |      Coef.   Std. Err.      z    P>|z|     [95% Conf. Interval]
icsa             |
             L1. |  -.3250633   .0205148   -15.85   0.000    -.3652715   -.2848551
             L2. |  -.1794686   .0205246    -8.74   0.000    -.2196961   -.1392411
      var(level) |   317.6474    9.36691    33.91   0.000     299.2886    336.0062
Now we use arima to estimate the parameters of an asymptotically equivalent ARIMA(2,1,0) model.
(We specify the technique(nr) option so that arima will compute the observed information matrix
standard errors that ucm computes.) We use nlcom to compute a point estimate and a standard error
for the variance, which is directly comparable to the one produced by ucm.
ARIMA regression
Sample: 14jan1967 - 19feb2011                       Number of obs   =       2302
                                                    Wald chi2(2)    =     271.95
                                                    Prob > chi2     =     0.0000

                 |                 OPG
          D.icsa |      Coef.   Std. Err.      z    P>|z|     [95% Conf. Interval]
ARMA             |
              ar |
             L1. |  -.3249383   .0205036   -15.85   0.000    -.3651246    -.284752
             L2. |  -.1793353   .0205088    -8.74   0.000    -.2195317   -.1391388
          /sigma |   17.81606   .2625695    67.85   0.000     17.30143    18.33068
Note: The test of the variance against zero is one sided, and the two-sided
      confidence interval is truncated at zero.
. nlcom _b[sigma:_cons]^2
       _nl_1:  _b[sigma:_cons]^2

          D.icsa |      Coef.   Std. Err.      z    P>|z|     [95% Conf. Interval]
           _nl_1 |   317.4119   9.355904    33.93   0.000     299.0746    335.7491
It is no accident that the parameter estimates and the standard errors from the two estimators
are so close. As the sample size grows the differences in the parameter estimates and the estimated
standard errors will go to 0, because the two estimators are equivalent in large samples.
μ_t = μ_{t−1} + β_{t−1} + η_t    (2)
β_t = β_{t−1} + ξ_t              (3)

Following Harvey (1989), we define 11 flexible models for y_t that specify both μ_t and ε_t in (1). These models place restrictions on the general model specified in (2) and (3) and on ε_t in (1). In other words, these models jointly specify μ_t and ε_t.

To any of these models, a cyclical component, a seasonal component, or exogenous variables may be added.
Syntax option        Model
model(none)          No trend or idiosyncratic component
model(ntrend)        No trend
                       y_t = ε_t
model(dconstant)     Deterministic constant
                       y_t = μ + ε_t
                       μ = μ
model(llevel)        Local level
                       y_t = μ_t + ε_t
                       μ_t = μ_{t−1} + η_t
model(rwalk)         Random walk
                       y_t = μ_t
                       μ_t = μ_{t−1} + η_t
model(dtrend)        Deterministic trend
                       y_t = μ_t + ε_t
                       μ_t = μ_{t−1} + β
                       β = β
model(lldtrend)      Local level with deterministic trend
                       y_t = μ_t + ε_t
                       μ_t = μ_{t−1} + β + η_t
                       β = β
model(rwdrift)       Random walk with drift
                       y_t = μ_t
                       μ_t = μ_{t−1} + β + η_t
                       β = β
model(lltrend)       Local linear trend
                       y_t = μ_t + ε_t
                       μ_t = μ_{t−1} + β_{t−1} + η_t
                       β_t = β_{t−1} + ξ_t
model(strend)        Smooth trend
                       y_t = μ_t + ε_t
                       μ_t = μ_{t−1} + β_{t−1}
                       β_t = β_{t−1} + ξ_t
model(rtrend)        Random trend
                       y_t = μ_t
                       μ_t = μ_{t−1} + β_{t−1}
                       β_t = β_{t−1} + ξ_t
The majority of the models available in ucm are designed for nonstationary time series. The deterministic-trend model incorporates a first-order deterministic time trend in the model. The local-level, random-walk, local-level-with-deterministic-trend, and random-walk-with-drift models are for modeling series with first-order stochastic trends. A series with a dth-order stochastic trend must be differenced d times to be stationary. The local-linear-trend, smooth-trend, and random-trend models are for modeling series with second-order stochastic trends.

The no-trend-or-idiosyncratic-component model is useful for using ucm to model stationary series with cyclical components or seasonal components and perhaps exogenous variables. The no-trend and the deterministic-constant models are useful for using ucm to model stationary series with seasonal components or exogenous variables.
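In practice, the trend specification is selected entirely through the model() option, as in the following sketch (the dependent variable y is illustrative):

* Sketch: a second-order stochastic trend with an idiosyncratic term ...
ucm y, model(lltrend)
* ... or a deterministic constant plus an order-1 cycle for a stationary series.
ucm y, model(dconstant) cycle(1)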
Seasonal component
A seasonal component models cyclical behavior in a time series that occurs at known seasonal
periodicities. A seasonal component is modeled in the time domain; the period of the cycle is specified
as the number of time periods required for the cycle to complete.
Example 8
Let's begin by considering a series that displays a seasonal effect. Below we plot a monthly series containing the number of new cases of mumps in New York City between January 1928 and December 1972. (See Hipel and McLeod [1994] for the source and further discussion of this dataset.)
[Graph: monthly new mumps cases in New York City plotted against Month, 1930m1–1970m1]
The graph reveals recurring spikes at regular intervals, which we suspect to be seasonal effects. The series may or may not be stationary; the graph evidence is not definitive.

Deterministic seasonal effects are a standard method of incorporating seasonality into a model. In a model with a constant term, the s deterministic seasonal effects are modeled as s parameters subject to the constraint that they sum to zero; formally, γ_t + γ_{t−1} + ⋯ + γ_{t−(s−1)} = 0. A stochastic-seasonal model is a more flexible alternative that allows the seasonal effects at time t to sum to ω_t, a zero-mean, finite-variance, i.i.d. random variable; formally, γ_t + γ_{t−1} + ⋯ + γ_{t−(s−1)} = ω_t.

In the output below, we model the seasonal effects by a stochastic-seasonal model, we allow for the series to follow a random walk, and we include a stationary cyclical component.
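A specification along these lines would be requested as in the following sketch (monthly data, so the seasonal period is 12):

* Sketch: random-walk trend, stochastic seasonal with period 12,
* and an order-1 stochastic cycle.
ucm mumps, model(rwalk) seasonal(12) cycle(1)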
                                                    Number of obs   =        534
                                                    Wald chi2(2)    =    2141.69
                                                    Prob > chi2     =     0.0000

                 |                 OIM
           mumps |      Coef.   Std. Err.      z    P>|z|     [95% Conf. Interval]
       frequency |   .3863607   .0282037    13.70   0.000     .3310824    .4416389
         damping |   .8405622   .0197933    42.47   0.000     .8017681    .8793563
      var(level) |   221.2131   140.5179     1.57   0.058            0    496.6231
   var(seasonal) |   4.151639   4.383442     0.95   0.172            0    12.74303
     var(cycle1) |   12228.17   813.8394    15.03   0.000     10633.08    13823.27
The output indicates that the trend and seasonal variances may not be necessary. When the variance of
the seasonal component is zero, the seasonal component becomes deterministic. Below we estimate the
parameters of a model that includes deterministic seasonal effects and a stationary cyclical component.
. ucm mumps ibn.month, model(none) cycle(1)
searching for initial values .......
(setting technique to bhhh)
Iteration 0:   log likelihood = -3944.7035
Iteration 1:   log likelihood =  -3646.639
Iteration 2:   log likelihood =  -3546.182
Iteration 3:   log likelihood = -3468.1879
Iteration 4:   log likelihood = -3432.8603
(switching technique to nr)
Iteration 5:   log likelihood = -3405.0632
Iteration 6:   log likelihood = -3285.9443
Iteration 7:   log likelihood = -3283.0404
Iteration 8:   log likelihood = -3283.0284
Iteration 9:   log likelihood = -3283.0284
Refining estimates:
Iteration 0:   log likelihood = -3283.0284
Iteration 1:   log likelihood = -3283.0284
                                                    Number of obs   =        534
                                                    Wald chi2(14)   =    3404.29
Log likelihood = -3283.0284                         Prob > chi2     =     0.0000

                 |                 OIM
           mumps |      Coef.   Std. Err.      z    P>|z|     [95% Conf. Interval]
-----------------+---------------------------------------------------------------
cycle1           |
       frequency |   .3272754   .0262922    12.45   0.000     .2757436    .3788071
         damping |    .844874   .0184994    45.67   0.000     .8086157    .8811322
-----------------+---------------------------------------------------------------
mumps            |
           month |
              1  |   480.5095   32.67128    14.71   0.000      416.475      544.544
              2  |   561.9174   32.66999    17.20   0.000     497.8854     625.9494
              3  |   832.8666   32.67696    25.49   0.000     768.8209     896.9122
              4  |   894.0747   32.64568    27.39   0.000     830.0904     958.0591
              5  |   869.6568   32.56282    26.71   0.000     805.8348     933.4787
              6  |   770.1562   32.48587    23.71   0.000     706.4851     833.8274
              7  |    433.839   32.50165    13.35   0.000     370.1369      497.541
              8  |   218.2394   32.56712     6.70   0.000      154.409     282.0698
              9  |    140.686   32.64138     4.31   0.000      76.7101      204.662
             10  |   148.5876   32.69067     4.55   0.000     84.51508     212.6601
             11  |   215.0958   32.70311     6.58   0.000     150.9989     279.1927
             12  |   330.2232   32.68906    10.10   0.000     266.1538     394.2926
-----------------+---------------------------------------------------------------
     var(cycle1) |   13031.53   798.2719    16.32   0.000     11466.95     14596.11
Note: Tests of variances against zero are one sided, and the two-sided
      confidence intervals are truncated at zero.
Technical note
In a stochastic model for the seasonal component, the seasonal effects sum to the random variable ω_t ~ i.i.d. N(0, σ²_ω):

γ_t = − Σ_{j=1}^{s−1} γ_{t−j} + ω_t
Stored results
Because ucm is estimated using sspace, most of the sspace stored results appear after ucm. Not
all of these results are relevant for ucm; programmers wishing to treat ucm results as sspace results
should see Stored results of [TS] sspace. See Methods and formulas for the state-space representation
of UCMs, and see [TS] sspace for more documentation that relates to all the stored results.
ucm stores the following in e():
Scalars
    e(N)                 number of observations
    e(k)                 number of parameters
    e(k_aux)             number of auxiliary parameters
    e(k_eq)              number of equations in e(b)
    e(k_dv)              number of dependent variables
    e(k_cycles)          number of stochastic cycles
    e(df_m)              model degrees of freedom
    e(ll)                log likelihood
    e(chi2)              χ²
    e(p)                 significance
    e(tmin)              minimum time in sample
    e(tmax)              maximum time in sample
    e(stationary)        1 if the estimated parameters indicate a stationary model, 0 otherwise
    e(rank)              rank of VCE
    e(ic)                number of iterations
    e(rc)                return code
    e(converged)         1 if converged, 0 otherwise
Macros
    e(cmd)               ucm
    e(cmdline)           command as typed
    e(depvar)            unoperated names of dependent variables in observation equations
    e(covariates)        list of covariates
    e(indeps)            independent variables
    e(tvar)              variable denoting time within groups
    e(eqnames)           names of equations
    e(model)             type of model
    e(title)             title in estimation output
    e(tmins)             formatted minimum time
    e(tmaxs)             formatted maximum time
    e(chi2type)          Wald; type of model χ² test
    e(vce)               vcetype specified in vce()
    e(vcetype)           title used to label Std. Err.
    e(opt)               type of optimization
    e(initial_values)    type of initial values
    e(technique)         maximization technique
    e(tech_steps)        iterations taken in maximization technique
    e(properties)        b V
    e(estat_cmd)         program used to implement estat
    e(predict)           program used to implement predict
    e(marginsok)         predictions allowed by margins
    e(marginsnotok)      predictions disallowed by margins
Matrices
    e(b)                 parameter vector
    e(Cns)               constraints matrix
    e(ilog)              iteration log (up to 20 iterations)
    e(gradient)          gradient vector
    e(V)                 variance–covariance matrix of the estimators
    e(V_modelbased)      model-based variance
Functions
    e(sample)            marks estimation sample
Introduction

The general form of UCMs can be expressed as

    y_t = τ_t + γ_t + ψ_t + x_t β + ε_t

where τ_t is the trend, γ_t is the seasonal component, ψ_t is the cycle, β is the vector of regression coefficients for the regressors x_t, and ε_t is the idiosyncratic error with variance σ_ε².

We can decompose the trend as

    τ_t = μ_t
    μ_t = μ_{t−1} + β_{t−1} + η_t
    β_t = β_{t−1} + ζ_t

where μ_t is the local level, β_t is the local slope, and η_t and ζ_t are i.i.d. normal errors with mean 0 and variances σ_η² and σ_ζ², respectively.

Next consider the seasonal component, γ_t, with a period of s time units. Ignoring a seasonal disturbance term, the seasonal effects will sum to zero, Σ_{j=0}^{s−1} γ_{t−j} = 0. Adding a normal error term, ω_t, with mean 0 and variance σ_ω², we express the seasonal component as

    γ_t = − Σ_{j=1}^{s−1} γ_{t−j} + ω_t

Finally, the cyclical component, ψ_t, is a function of the frequency λ, in radians, and a unitless scaling variable ρ, termed the damping effect, 0 < ρ < 1. We require two equations to express the cycle:

    ψ_t  = ρ( ψ_{t−1} cos λ + ψ̃_{t−1} sin λ) + κ_t
    ψ̃_t = ρ(−ψ_{t−1} sin λ + ψ̃_{t−1} cos λ) + κ̃_t

where the κ_t and κ̃_t disturbances are normally distributed with mean 0 and variance σ_κ².

The disturbance terms ε_t, η_t, ζ_t, ω_t, κ_t, and κ̃_t are independent.
State-space formulation

ucm is an easy-to-use implementation of the state-space command sspace, with special modifications, where the local linear trend components, seasonal components, and cyclical components are states of the state-space model. The state-space model can be expressed in matrix form as

    y_t = D z_t + F x_t + ε_t
    z_t = A z_{t−1} + C ξ_t

where y_t, t = 1, . . . , T, are the observations and z_t are the unobserved states. The number of states, m, depends on the model specified. The k × 1 vector x_t contains the exogenous variables specified as indepvars, and the 1 × k vector F contains the regression coefficients to be estimated. ε_t is the observation equation disturbance, and the m₀ × 1 vector ξ_t contains the state equation disturbances, where m₀ ≤ m. Finally, C is an m × m₀ matrix of zeros and ones. These recursive equations are evaluated using the diffuse Kalman filter of De Jong (1991).
Below we give the state-space matrix structures for a local linear trend with a stochastic seasonal component, with a period of 4 time units, and an order-2 cycle. The state vector, z_t, and its transition matrix, A, have the structure

        | 1  1   0   0   0       0          0          0          0      |
        | 0  1   0   0   0       0          0          0          0      |
        | 0  0  −1  −1  −1       0          0          0          0      |
    A = | 0  0   1   0   0       0          0          0          0      |
        | 0  0   0   1   0       0          0          0          0      |
        | 0  0   0   0   0    ρ cos λ    ρ sin λ       1          0      |
        | 0  0   0   0   0   −ρ sin λ    ρ cos λ       0          1      |
        | 0  0   0   0   0       0          0       ρ cos λ    ρ sin λ   |
        | 0  0   0   0   0       0          0      −ρ sin λ    ρ cos λ   |

        | 1  0  0  0  0 |              | μ_t      |              | η_t  |
        | 0  1  0  0  0 |              | β_t      |              | ζ_t  |
        | 0  0  1  0  0 |              | γ_t      |        ξ_t = | ω_t  |
    C = | 0  0  0  0  0 |        z_t = | γ_{t−1}  |              | κ_t  |
        | 0  0  0  0  0 |              | γ_{t−2}  |              | κ̃_t |
        | 0  0  0  0  0 |              | ψ_{t,1}  |
        | 0  0  0  0  0 |              | ψ̃_{t,1} |
        | 0  0  0  1  0 |              | ψ_{t,2}  |
        | 0  0  0  0  1 |              | ψ̃_{t,2} |

    D = (1  0  1  0  0  1  0  0  0)

The cycle equations are

    ψ_t  = ρ( ψ_{t−1} cos λ_c + ψ̃_{t−1} sin λ_c) + κ_{t,1}
    ψ̃_t = ρ(−ψ_{t−1} sin λ_c + ψ̃_{t−1} cos λ_c) + κ_{t,2}

where κ_{t,j} is i.i.d. N(0, σ_κ²) and 0 < ρ < 1 is a damping effect. The cycle is variance-stationary when ρ < 1 because Var(ψ_t) = σ_κ²/(1 − ρ²). We will express a UCM with a cyclical component added to a trend as

    y_t = μ_t + ψ_t + ε_t

where μ_t can be any of the trend parameterizations discussed earlier.

Higher-order cycles, k = 2 or k = 3, are defined by

    ψ_{t,j}  = ρ( ψ_{t−1,j} cos λ_c + ψ̃_{t−1,j} sin λ_c) + ψ_{t−1,j+1}
    ψ̃_{t,j} = ρ(−ψ_{t−1,j} sin λ_c + ψ̃_{t−1,j} cos λ_c) + ψ̃_{t−1,j+1}

for j < k, and by

    ψ_{t,k}  = ρ( ψ_{t−1,k} cos λ_c + ψ̃_{t−1,k} sin λ_c) + κ_{t,1}
    ψ̃_{t,k} = ρ(−ψ_{t−1,k} sin λ_c + ψ̃_{t−1,k} cos λ_c) + κ_{t,2}
Harvey and Trimbur (2003) discuss the properties of this model and its state-space formulation.
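As a sketch of how such a model would be requested in practice (the variable y is hypothetical, and the options are as we understand them), a command along the following lines fits a UCM whose state-space representation has the structure above — a local linear trend, a stochastic seasonal component with a period of 4 time units, and an order-2 cycle:

. ucm y, model(lltrend) seasonal(4) cycle(2)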
Andrew Charles Harvey (1947– ) is a British econometrician. After receiving degrees in economics and statistics from the University of York and the London School of Economics and working for a period in Kenya, he has worked as a teacher and researcher at the University of Kent, the London School of Economics, and now the University of Cambridge. Harvey's interests are centered on time series, especially state-space models, signal extraction, volatility, and changes in quantiles.
References

Burns, A. F., and W. C. Mitchell. 1946. Measuring Business Cycles. New York: National Bureau of Economic Research.

De Jong, P. 1991. The diffuse Kalman filter. Annals of Statistics 19: 1073–1083.

Durbin, J., and S. J. Koopman. 2012. Time Series Analysis by State Space Methods. 2nd ed. Oxford: Oxford University Press.

Fuller, W. A. 1996. Introduction to Statistical Time Series. 2nd ed. New York: Wiley.

Hamilton, J. D. 1994. Time Series Analysis. Princeton: Princeton University Press.

Harvey, A. C. 1989. Forecasting, Structural Time Series Models and the Kalman Filter. Cambridge: Cambridge University Press.

———. 1993. Time Series Models. 2nd ed. Cambridge, MA: MIT Press.

Harvey, A. C., and T. M. Trimbur. 2003. General model-based filters for extracting cycles and trends in economic time series. The Review of Economics and Statistics 85: 244–255.

Hipel, K. W., and A. I. McLeod. 1994. Time Series Modelling of Water Resources and Environmental Systems. Amsterdam: Elsevier.

Priestley, M. B. 1981. Spectral Analysis and Time Series. London: Academic Press.

Trimbur, T. M. 2006. Properties of higher order stochastic cycles. Journal of Time Series Analysis 27: 1–17.

Wei, W. W. S. 2006. Time Series Analysis: Univariate and Multivariate Methods. 2nd ed. Boston: Pearson.
Also see
[TS] ucm postestimation Postestimation tools for ucm
[TS] arima ARIMA, ARMAX, and other dynamic regression models
[TS] sspace State-space models
[TS] tsfilter Filter a time-series, keeping only selected periodicities
[TS] tsset Declare data to be time-series data
[TS] tssmooth Smooth and forecast univariate time-series data
[TS] var Vector autoregressive models
[U] 20 Estimation and postestimation commands
Title
ucm postestimation Postestimation tools for ucm
Description    Options for predict    Options for estat period    Also see
Description
The following postestimation commands are of special interest after ucm:

Command           Description
estat period      display cycle periods in time units
psdensity         estimate the spectral density

The following standard postestimation commands are also available:

Command           Description
estat ic          Akaike's and Schwarz's Bayesian information criteria (AIC and BIC)
estat summarize   summary statistics for the estimation sample
estat vce         variance–covariance matrix of the estimators (VCE)
estimates         cataloging estimation results
forecast          dynamic forecasts and simulations
lincom            point estimates, standard errors, testing, and inference for linear combinations of coefficients
lrtest            likelihood-ratio test
nlcom             point estimates, standard errors, testing, and inference for nonlinear combinations of coefficients
predict           predictions, residuals, and other diagnostic measures
predictnl         point estimates, standard errors, testing, and inference for generalized predictions
test              Wald tests of simple and composite linear hypotheses
testnl            Wald tests of nonlinear hypotheses

Syntax for predict

    predict [type] {stub* | newvarlist} [if] [in] [, statistic options]
statistic           Description
Main
  xb                linear prediction using the exogenous variables
  trend             trend component
  seasonal          seasonal component
  cycle             cyclical component
  residuals         residuals
  rstandard         standardized residuals

These statistics are available both in and out of sample; type predict ... if e(sample) ... if wanted only for the estimation sample.

options                     Description
Options
  rmse(stub* | newvarlist)  put estimated root mean squared errors of predicted statistics in the new variables
  dynamic(time_constant)    begin dynamic forecast at specified time
Advanced
  smethod(method)           method for predicting the unobserved components

method      Description
  onestep   predict using past information
  smooth    predict using all sample information
  filter    predict using past and contemporaneous information

Menu for predict

    Statistics  >  Postestimation  >  Predictions, residuals, etc.

Options for predict

Main
xb, trend, seasonal, cycle, residuals, and rstandard specify the statistic to be predicted.
xb, the default, calculates the linear predictions using the exogenous variables. xb may not be
used with the smethod(filter) option.
trend estimates the unobserved trend component.
seasonal estimates the unobserved seasonal component.
cycle estimates the unobserved cyclical component.
residuals calculates the residuals in the equation for the dependent variable. residuals may
not be specified with dynamic().
rstandard calculates the standardized residuals, which are the residuals normalized to have unit
variances. rstandard may not be specified with the smethod(filter), smethod(smooth),
or dynamic() option.
Options
rmse(stub* | newvarlist) puts the root mean squared errors of the predicted statistic into the specified
new variable. Multiple variables are only required for predicting cycles of a model that has more
than one cycle. The root mean squared errors measure the variances due to the disturbances but
do not account for estimation error. The stub* syntax is for models with multiple cycles, where
you provide the prefix and predict will add a numeric suffix for each predicted cycle.
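For instance, after fitting a hypothetical two-cycle model, the following sketch would put the two predicted cycles in cyc1 and cyc2 and their root mean squared errors in r1 and r2 (all variable names arbitrary):

. predict cyc*, cycle rmse(r*)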
dynamic(time constant) specifies when predict should start producing dynamic forecasts. The
specified time constant must be in the scale of the time variable specified in tsset, and the
time constant must be inside a sample for which observations on the dependent variable are
available. For example, dynamic(tq(2008q4)) causes dynamic predictions to begin in the fourth
quarter of 2008, assuming that your time variable is quarterly; see [D] datetime. If the model
contains exogenous variables, they must be present for the whole predicted sample. dynamic()
may not be specified with the rstandard, residuals, or smethod(smooth) option.
Advanced
smethod(method) specifies the method for predicting the unobserved components. smethod() causes
different amounts of information on the dependent variable to be used in predicting the components
at each time period.
smethod(onestep), the default, causes predict to estimate the components at each time period
using previous information on the dependent variable. The Kalman filter is performed on
previous periods, but only the one-step predictions are made for the current period.
smethod(smooth) causes predict to estimate the components at each time period using all
the sample data by the Kalman smoother. smethod(smooth) may not be specified with the
rstandard option.
smethod(filter) causes predict to estimate the components at each time period using previous
and contemporaneous data by the Kalman filter. The Kalman filter is performed on previous
periods and the current period. smethod(filter) may not be specified with the xb option.
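As a sketch of how the methods differ in practice, the following commands would put the one-step, filtered, and smoothed estimates of the trend in three new variables for comparison (variable names arbitrary):

. predict trend1, trend
. predict trend2, trend smethod(filter)
. predict trend3, trend smethod(smooth)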
Syntax for estat period

    estat period [, options]

options          Description
Main
  level(#)       set confidence level; default is level(95)
  cformat(%fmt)  numeric format

Menu for estat

    Statistics  >  Postestimation  >  Reports and statistics

Options for estat period
level(#) specifies the confidence level, as a percentage, for confidence intervals. The default is
level(95) or as set by set level; see [U] 20.7 Specifying the width of confidence intervals.
cformat(%fmt) sets the display format for the table numeric values. The default is cformat(%9.0g).
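For instance, the following sketch redisplays the cycle periods with 90% confidence intervals and a different numeric format:

. estat period, level(90) cformat(%9.3g)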
Example 1
We begin by modeling monthly data on the median duration of employment spells in the United
States. We include a stochastic-seasonal component because the data have not been seasonally adjusted.
. use http://www.stata-press.com/data/r13/uduration2
(BLS data, not seasonally adjusted)
. ucm duration, seasonal(12) cycle(1) difficult
searching for initial values ....................
(setting technique to bhhh)
Iteration 0:   log likelihood = -409.79452
Iteration 1:   log likelihood = -403.38288
Iteration 2:   log likelihood = -403.37351  (backed up)
Iteration 3:   log likelihood = -403.36878  (backed up)
Iteration 4:   log likelihood = -403.36759  (backed up)
(switching technique to nr)
Iteration 5:   log likelihood = -403.36699  (backed up)
Iteration 6:   log likelihood = -397.87773  (not concave)
Iteration 7:   log likelihood = -396.44601  (not concave)
Iteration 8:   log likelihood = -394.58451  (not concave)
Iteration 9:   log likelihood = -392.58307  (not concave)
Iteration 10:  log likelihood =  -389.9884  (not concave)
Iteration 11:  log likelihood =   -388.885
Iteration 12:  log likelihood = -388.65318
Iteration 13:  log likelihood = -388.29788
Iteration 14:  log likelihood = -388.26268
Iteration 15:  log likelihood = -388.25677
Iteration 16:  log likelihood = -388.25675
Refining estimates:
Iteration 0:   log likelihood = -388.25675
Iteration 1:   log likelihood = -388.25675
                                                  Number of obs   =       498
                                                  Wald chi2(2)    =      7.17
                                                  Prob > chi2     =    0.0277

                             OIM
    duration        Coef.   Std. Err.      z    P>|z|     [95% Conf. Interval]

cycle1
   frequency     1.641531   .7250323     2.26   0.024     .2204938    3.062568
     damping     .2671232   .1050168     2.54   0.011     .0612939    .4729524

  var(level)     .1262922   .0221428     5.70   0.000     .0828932    .1696912
var(seasonal)    .0017289   .0009647     1.79   0.037            0    .0036196
 var(cycle1)     .0641496   .0211839     3.03   0.001     .0226299    .1056693

Note: Tests of variances against zero are one sided, and the two-sided
      confidence intervals are truncated at zero.
Below we predict the trend and the seasonal components to get a look at the model fit.
. predict strend, trend
. predict season, seasonal
. tsline duration strend, name(trend) nodraw legend(rows(1))
. tsline season, name(season) yline(0,lwidth(vthin)) nodraw
(two graphs omitted: the first plots duration and the one-step trend estimate against Month, 1970m1–2010m1; the second plots the one-step seasonal estimate against Month, with a horizontal reference line at 0)
The trend tracks the data well. That the seasonal component appears to change over time indicates
that the stochastic-seasonal component might fit better than a deterministic-seasonal component.
Example 2
In this example, we use the model to forecast the median unemployment duration. We use the root
mean squared error of the prediction to compute a confidence interval of our dynamic predictions.
Recall that the root mean squared error accounts for variances due to the disturbances but not due to
the estimation error.
. tsappend, add(12)
. predict duration_f, dynamic(tm(2009m1)) rmse(rmse)
. scalar z = invnormal(0.95)
. generate lbound = duration_f - z*rmse if tm>=tm(2008m12)
(497 missing values generated)
. generate ubound = duration_f + z*rmse if tm>=tm(2008m12)
(497 missing values generated)
. label variable lbound "90% forecast interval"
. twoway (tsline duration duration_f if tm>=tm(2006m1))
>        (tsrline lbound ubound if tm>=tm(2008m12)),
>        ysize(2) xtitle("") legend(cols(1))
(graph omitted: duration and the dynamic prediction duration_f, with the 90% forecast interval, plotted over 2006m1–2010m1)
The model forecasts a large temporary increase in the median duration of unemployment.
Also see
[TS] ucm Unobserved-components model
[TS] psdensity Parametric spectral density estimation after arima, arfima, and ucm
[TS] sspace postestimation Postestimation tools for sspace
[U] 20 Estimation and postestimation commands
Title
var intro Introduction to vector autoregressive models
Description    References    Also see
Description
Stata has a suite of commands for fitting, forecasting, interpreting, and performing inference
on vector autoregressive (VAR) models and structural vector autoregressive (SVAR) models. The suite
includes several commands for estimating and interpreting impulse–response functions (IRFs), dynamic-multiplier functions, and forecast-error variance decompositions (FEVDs). The table below describes the available commands.

varwle        [TS] varwle        obtain Wald lag-exclusion statistics after var or svar
vargranger    [TS] vargranger    perform pairwise Granger causality tests after var or svar
varlmar       [TS] varlmar       perform LM test for residual autocorrelation after var or svar
varnorm       [TS] varnorm       test for normally distributed disturbances after var or svar
This entry provides an overview of vector autoregressions and structural vector autoregressions.
More rigorous treatments can be found in Hamilton (1994), Lütkepohl (2005), and Amisano and
Giannini (1997). Stock and Watson (2001) provide an excellent nonmathematical treatment of vector
autoregressions and their role in macroeconomics. Becketti (2013) provides an excellent introduction
to VAR analysis with an emphasis on how it is done in practice.
Introduction to VARs

A VAR is a model in which K variables are specified as linear functions of p of their own lags, p lags of the other K − 1 variables, and possibly additional exogenous variables. Algebraically, a p-order VAR model, written VAR(p), with exogenous variables x_t is given by

    y_t = v + A_1 y_{t−1} + · · · + A_p y_{t−p} + B_0 x_t + B_1 x_{t−1} + · · · + B_s x_{t−s} + u_t,    t ∈ {−∞, ∞}    (1)

where y_t is a K × 1 random vector, the A_i are fixed K × K matrices of parameters, x_t is an M × 1 vector of exogenous variables, the B_i are fixed K × M matrices of coefficients, v is a fixed K × 1 vector of intercept terms, and u_t is a K × 1 white-noise process. The corresponding structural form premultiplies (1) by a matrix W_0 of contemporaneous coefficients and can be written as

    W_0 y_t = a + W_1 y_{t−1} + · · · + W_p y_{t−p} + W̃_1 x_t + · · · + W̃_s x_{t−s} + e_t    (2)
The cross-equation error variance–covariance matrix Σ contains all the information about contemporaneous correlations in a VAR and may be the VAR's greatest strength and its greatest weakness. Because no questionable a priori assumptions are imposed, fitting a VAR allows the dataset to speak for itself. However, without imposing some restrictions on the structure of Σ, we cannot make a causal interpretation of the results.
If we make additional technical assumptions, we can derive another representation of the VAR in (1). If the VAR is stable (see [TS] varstable), we can rewrite y_t as

    y_t = μ + Σ_{i=0}^{∞} D_i x_{t−i} + Σ_{i=0}^{∞} Φ_i u_{t−i}    (4)

where μ is the K × 1 time-invariant mean of the process and the D_i and Φ_i are K × M and K × K matrices of parameters, respectively. Equation (4) states that the process by which the variables in y_t fluctuate about their time-invariant means, μ, is completely determined by the parameters in the D_i and Φ_i and the (infinite) past history of the exogenous variables x_t and the independent and identically distributed (i.i.d.) shocks or innovations, u_{t−1}, u_{t−2}, . . . . Equation (4) is known as the vector moving-average representation of the VAR. The D_i are the dynamic-multiplier functions, or transfer functions. The moving-average coefficients Φ_i are also known as the simple IRFs at horizon i. The precise relationships between the VAR parameters and the D_i and Φ_i are derived in Methods and formulas of [TS] irf create.
The joint distribution of yt is determined by the distributions of xt and ut and the parameters
v, Bi , and Ai . Estimating the parameters in a VAR requires that the variables in yt and xt be
covariance stationary, meaning that their first two moments exist and are time invariant. If the yt are
not covariance stationary, but their first differences are, a vector error-correction model (VECM) can
be used. See [TS] vec intro and [TS] vec for more information about those models.
If the ut form a zero mean, i.i.d. vector process, and yt and xt are covariance stationary and are
not correlated with the ut , consistent and efficient estimates of the Bi , the Ai , and v are obtained
via seemingly unrelated regression, yielding estimators that are asymptotically normally distributed.
When the equations for the variables yt have the same set of regressors, equation-by-equation OLS
estimates are the conditional maximum likelihood estimates.
Much of the interest in VAR models is focused on the forecasts, IRFs, dynamic-multiplier functions,
and the FEVDs, all of which are functions of the estimated parameters. Estimating these functions is
straightforward, but their asymptotic standard errors are usually obtained by assuming that ut forms
a zero mean, i.i.d. Gaussian (normal) vector process. Also, some of the specification tests for VARs
have been derived using the likelihood-ratio principle and the stronger Gaussian assumption.
In the absence of contemporaneous exogenous variables, the disturbance variance–covariance matrix Σ contains all the information about contemporaneous correlations among the variables. VARs
are sometimes classified into three types by how they account for this contemporaneous correlation.
(See Stock and Watson [2001] for one derivation of this taxonomy.) A reduced-form VAR, aside
from estimating the variancecovariance matrix of the disturbance, does not try to account for
contemporaneous correlations. In a recursive VAR, the K variables are assumed to form a recursive
dynamic structural equation model in which the first variable is a function of lagged variables, the
second is a function of contemporaneous values of the first variable and lagged values, and so on.
In a structural VAR, the theory you are working with places restrictions on the contemporaneous
correlations that are not necessarily recursive.
Stata has two commands for fitting reduced-form VARs: var and varbasic. var allows for
constraints to be imposed on the coefficients. varbasic allows you to fit a simple VAR quickly
without constraints and graph the IRFs.
Because fitting a VAR of the correct order can be important, varsoc offers several methods for
choosing the lag order p of the VAR to fit. After fitting a VAR, and before proceeding with inference,
interpretation, or forecasting, checking that the VAR fits the data is important. varlmar can be used
to check for autocorrelation in the disturbances. varwle performs Wald tests to determine whether
certain lags can be excluded. varnorm tests the null hypothesis that the disturbances are normally
distributed. varstable checks the eigenvalue condition for stability, which is needed to interpret the
IRFs and FEVDs.
Introduction to SVARs
As discussed in [TS] irf create, a problem with VAR analysis is that, because Σ is not restricted to be a diagonal matrix, an increase in an innovation to one variable provides information about the innovations to other variables. This implies that no causal interpretation of the simple IRFs is possible: there is no way to determine whether the shock to the first variable caused the shock in the second variable or vice versa.

However, suppose that we had a matrix P such that Σ = PP′. We can then show that the variables in P^{−1} u_t have zero mean and that E{P^{−1} u_t (P^{−1} u_t)′} = I_K. We could rewrite (4) as

    y_t = μ + Σ_{s=0}^{∞} Φ_s P P^{−1} u_{t−s}
        = μ + Σ_{s=0}^{∞} Θ_s P^{−1} u_{t−s}
        = μ + Σ_{s=0}^{∞} Θ_s w_{t−s}    (5)

where Θ_s = Φ_s P and w_t = P^{−1} u_t. A short-run SVAR model without exogenous variables can be written as

    A(I_K − A_1 L − A_2 L² − · · · − A_p L^p) y_t = A ε_t = B e_t    (6)

where L is the lag operator and A, B, and A_1, . . . , A_p are K × K matrices of parameters.
Equation (6) implies that P_sr = A^{−1} B, where P_sr is the P matrix identified by a particular short-run SVAR model. The latter equality in (6) implies that

    y_t = μ + Σ_{s=0}^{∞} Θ_s^{sr} e_{t−s}    (7)
P_sr identifies the structural IRFs by defining a transformation of Σ, and P_sr is identified by the restrictions placed on the parameters in A and B. Because there are only K(K + 1)/2 free parameters in Σ, only K(K + 1)/2 parameters may be estimated in an identified P_sr. Because there are 2K² total parameters in A and B, the order condition for identification requires that at least 2K² − K(K + 1)/2 restrictions be placed on those parameters. Just as in the simultaneous-equations
framework, this order condition is necessary but not sufficient. Amisano and Giannini (1997) derive
a method to check that an SVAR model is locally identified near some specified values for A and B.
Before moving on to models with long-run constraints, consider these limitations. We cannot place
constraints on the elements of A in terms of the elements of B, or vice versa. This limitation is
imposed by the form of the check for identification derived by Amisano and Giannini (1997). As
noted in Methods and formulas of [TS] var svar, this test requires separate constraint matrices for
the parameters in A and B. Also, we cannot mix short-run and long-run constraints.
Long-run restrictions

A general short-run SVAR has the form

    A(I_K − A_1 L − A_2 L² − · · · − A_p L^p) y_t = B e_t

To simplify the notation, let Ā = A(I_K − A_1 − A_2 − · · · − A_p). The model is assumed to be stable (see [TS] varstable), so Ā^{−1}, the matrix of estimated long-run effects of the reduced-form VAR shocks, is well defined. Constraining A to be an identity matrix allows us to rewrite this equation as

    y_t = Ā^{−1} B e_t = C e_t

so that C = Ā^{−1} B is the matrix of long-run responses to the orthogonalized shocks.
In long-run models, the constraints are placed on the elements of C, and the free parameters are
estimated. These constraints are often exclusion restrictions. For instance, constraining C[1, 2] to be
zero can be interpreted as setting the long-run response of variable 1 to the structural shocks driving
variable 2 to be zero.
Statas svar command estimates the parameters of structural VARs. See [TS] var svar for more
information and examples.
Dynamic-multiplier functions describe how the endogenous variables react over time to a unit change in an exogenous variable. This is a different experiment from that in IRFs and FEVDs because dynamic-multiplier functions consider a change in an exogenous variable instead of a shock to an endogenous variable.
irf create estimates IRFs, Cholesky orthogonalized IRFs, dynamic-multiplier functions, and
structural IRFs and their standard errors. It also estimates Cholesky and structural FEVDs. The irf
graph, irf cgraph, irf ograph, irf table, and irf ctable commands graph and tabulate these
estimates. Stata also has several other commands to manage IRF and FEVD results. See [TS] irf for a
description of these commands.
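As a sketch of a typical workflow using the variables from the examples in [TS] var (the IRF set name order1 and file name myirf are arbitrary), one might fit a VAR, create a file of IRF results, and then graph the orthogonalized IRF:

. var dln_inv dln_inc dln_consump
. irf create order1, step(10) set(myirf)
. irf graph oirf, impulse(dln_inc) response(dln_consump)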
fcast compute computes dynamic forecasts and their standard errors from VARs. fcast graph
graphs the forecasts that are generated using fcast compute.
VARs allow researchers to investigate whether one variable is useful in predicting another variable.
A variable x is said to Granger-cause a variable y if, given the past values of y , past values of x are
useful for predicting y . The Stata command vargranger performs Wald tests to investigate Granger
causality between the variables in a VAR.
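For instance, after fitting a VAR on the variables used above, a single command performs the Granger causality Wald tests for every equation (a sketch):

. var dln_inv dln_inc dln_consump
. vargranger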
References

Amisano, G., and C. Giannini. 1997. Topics in Structural VAR Econometrics. 2nd ed. Heidelberg: Springer.

Becketti, S. 2013. Introduction to Time Series Using Stata. College Station, TX: Stata Press.

Hamilton, J. D. 1994. Time Series Analysis. Princeton: Princeton University Press.

Lütkepohl, H. 2005. New Introduction to Multiple Time Series Analysis. New York: Springer.

Stock, J. H., and M. W. Watson. 2001. Vector autoregressions. Journal of Economic Perspectives 15: 101–115.

Watson, M. W. 1994. Vector autoregressions and cointegration. In Vol. 4 of Handbook of Econometrics, ed. R. F. Engle and D. L. McFadden. Amsterdam: Elsevier.
Also see
[TS] var Vector autoregressive models
[TS] var svar Structural vector autoregressive models
[TS] vec intro Introduction to vector error-correction models
[TS] vec Vector error-correction models
[TS] irf Create and analyze IRFs, dynamic-multiplier functions, and FEVDs
Title
var Vector autoregressive models
Syntax    Menu    Description    Options    Remarks and examples    Stored results    Methods and formulas    Acknowledgment    References    Also see
Syntax

    var depvarlist [if] [in] [, options]

options                   Description
Model
  noconstant              suppress constant term
  lags(numlist)           use lags numlist in the VAR
  exog(varlist)           use exogenous variables varlist
Model 2
  constraints(numlist)    apply specified linear constraints
  nolog                   suppress SURE iteration log
  iterate(#)              set maximum number of iterations for SURE
  tolerance(#)            set convergence tolerance of SURE
  noisure                 use one-step SURE
  dfk                     make small-sample degrees-of-freedom adjustment
  small                   report small-sample t and F statistics
  nobigf                  do not compute parameter vector for coefficients implicitly set to zero
Reporting
  level(#)                set confidence level; default is level(95)
  lutstats                report Lütkepohl lag-order selection statistics
  nocnsreport             do not display constraints
  display_options         control column formats and display of omitted variables and base and empty cells
  coeflegend              display legend instead of statistics

You must tsset your data before using var; see [TS] tsset.
depvarlist and varlist may contain time-series operators; see [U] 11.4.4 Time-series varlists.
by, fp, rolling, statsby, and xi are allowed; see [U] 11.1.10 Prefix commands.
coeflegend does not appear in the dialog box.
See [U] 20 Estimation and postestimation commands for more capabilities of estimation commands.
Menu

    Statistics  >  Multivariate time series  >  Vector autoregression (VAR)
Description
var fits a multivariate time-series regression of each dependent variable on lags of itself and on
lags of all the other dependent variables. var also fits a variant of vector autoregressive (VAR) models
known as the VARX model, which also includes exogenous variables. See [TS] var intro for a list of
commands that are used in conjunction with var.
Options
Model

noconstant; see [R] estimation options.

lags(numlist) specifies the lags to be included in the model. The default is lags(1 2). This option takes a numlist and not simply an integer for the maximum lag; for example, lags(2) would include only the second lag in the model, whereas lags(1/2) would include both the first and second lags.

exog(varlist) specifies a list of exogenous variables to be included in the VAR.

Model 2
nobigf requests that var not save the estimated parameter vector that incorporates coefficients that
have been implicitly constrained to be zero, such as when some lags have been omitted from a
model. e(bf) is used for computing asymptotic standard errors in the postestimation commands
irf create and fcast compute; see [TS] irf create and [TS] fcast compute. Therefore, specifying
nobigf implies that the asymptotic standard errors will not be available from irf create and
fcast compute. See Fitting models with some lags excluded.
Reporting
Introduction
A VAR is a model in which K variables are specified as linear functions of p of their own lags, p
lags of the other K 1 variables, and possibly exogenous variables. A VAR with p lags is usually
denoted a VAR(p). For more information, see [TS] var intro.
. use http://www.stata-press.com/data/r13/lutkepohl2
(Quarterly SA West German macro data, Bil DM, from Lutkepohl 1993 Table E.1)
. var dln_inv dln_inc dln_consump if qtr<=tq(1978q4), lutstats dfk

Vector autoregression

Sample:  1960q4 - 1978q4                        No. of obs        =        73
                                   (lutstats)   AIC               = -24.63163
                                                HQIC              = -24.40656
                                                SBIC              = -24.06686

Equation           Parms      RMSE     R-sq       chi2     P>chi2
dln_inv              7       .046148   0.1286   9.736909   0.1362
dln_inc              7       .011719   0.1142   8.508289   0.2032
dln_consump          7       .009445   0.2513   22.15096   0.0011

                    Coef.   Std. Err.      z    P>|z|     [95% Conf. Interval]

dln_inv
     dln_inv
         L1.    -.3196318   .1254564    -2.55   0.011    -.5655218   -.0737419
         L2.    -.1605508   .1249066    -1.29   0.199    -.4053633    .0842616
     dln_inc
         L1.     .1459851   .5456664     0.27   0.789    -.9235013    1.215472
         L2.     .1146009   .5345709     0.21   0.830    -.9331388    1.162341
 dln_consump
         L1.     .9612288   .6643086     1.45   0.148    -.3407922     2.26325
         L2.     .9344001   .6650949     1.40   0.160     -.369162    2.237962
       _cons    -.0167221   .0172264    -0.97   0.332    -.0504852    .0170409

dln_inc
     dln_inv
         L1.     .0439309   .0318592     1.38   0.168     -.018512    .1063739
         L2.     .0500302   .0317196     1.58   0.115    -.0121391    .1121995
     dln_inc
         L1.    -.1527311   .1385702    -1.10   0.270    -.4243237    .1188615
         L2.     .0191634   .1357525     0.14   0.888    -.2469067    .2852334
 dln_consump
         L1.     .2884992    .168699     1.71   0.087    -.0421448    .6191431
         L2.       -.0102   .1688987    -0.06   0.952    -.3412354    .3208353
       _cons     .0157672   .0043746     3.60   0.000     .0071932    .0243412

dln_consump
     dln_inv
         L1.     -.002423   .0256763    -0.09   0.925    -.0527476    .0479016
         L2.     .0338806   .0255638     1.33   0.185    -.0162235    .0839847
     dln_inc
         L1.     .2248134   .1116778     2.01   0.044      .005929    .4436978
         L2.     .3549135   .1094069     3.24   0.001     .1404798    .5693471
 dln_consump
         L1.    -.2639695   .1359595    -1.94   0.052    -.5304451    .0025062
         L2.    -.0222264   .1361204    -0.16   0.870    -.2890175    .2445646
       _cons     .0129258   .0035256     3.67   0.000     .0060157    .0198358
The output has two parts: a header and the standard Stata output table for the coefficients, standard
errors, and confidence intervals. The header contains summary statistics for each equation in the VAR
and statistics used in selecting the lag order of the VAR. Although there are standard formulas for all the lag-order statistics, Lütkepohl (2005) gives different versions of the three information criteria that drop the constant term from the likelihood. To obtain the Lütkepohl (2005) versions, we specified the lutstats option. The formulas for the standard and Lütkepohl versions of these statistics are given in Methods and formulas of [TS] varsoc.

The dfk option specifies that the small-sample divisor 1/(T − m) be used in estimating Σ instead of the maximum likelihood (ML) divisor 1/T, where m is the average number of parameters included in each of the K equations. All the lag-order statistics are computed using the ML estimator of Σ. Thus, specifying dfk will not change the computed lag-order statistics, but it will change the estimated variance–covariance matrix. Also, when dfk is specified, a dfk-adjusted log likelihood is computed and stored in e(ll_dfk).

The lags() option takes a numlist of lags. To specify a model that includes the first and second lags, type

. var y1 y2 y3, lags(1/2)

not

. var y1 y2 y3, lags(2)

because the latter specification would fit a model that included only the second lag. Similarly, to fit the model

    y_t = v + A_4 y_{t−4} + u_t

you would specify the lags(4) option. Doing so is equivalent to fitting the more general model

    y_t = v + A_1 y_{t−1} + A_2 y_{t−2} + A_3 y_{t−3} + A_4 y_{t−4} + u_t

with A_1, A_2, and A_3 constrained to be zero.
Below we fit a VAR of dln_inc and dln_consump that treats dln_inv as an exogenous variable.

. var dln_inc dln_consump if qtr<=tq(1978q4), dfk exog(dln_inv)

Vector autoregression

Sample:  1960q4 - 1978q4                        No. of obs        =        73
                                                AIC               = -12.78264
                                                HQIC              = -12.63259
                                                SBIC              = -12.40612

Equation           Parms      RMSE     R-sq       chi2     P>chi2
dln_inc              6       .011917   0.0702   5.059587   0.4087
dln_consump          6       .009197   0.2794   25.97262   0.0001

                    Coef.   Std. Err.      z    P>|z|     [95% Conf. Interval]

dln_inc
     dln_inc
         L1.    -.1343345   .1391074    -0.97   0.334    -.4069801    .1383111
         L2.     .0120331   .1380346     0.09   0.931    -.2585097    .2825759
 dln_consump
         L1.     .3235342   .1652769     1.96   0.050    -.0004027     .647471
         L2.     .0754177   .1648624     0.46   0.647    -.2477066     .398542
     dln_inv     .0151546   .0302319     0.50   0.616    -.0440987     .074408
       _cons     .0145136   .0043815     3.31   0.001     .0059259    .0231012

dln_consump
     dln_inc
         L1.     .2425719   .1073561     2.26   0.024     .0321578     .452986
         L2.     .3487949   .1065281     3.27   0.001     .1400036    .5575862
 dln_consump
         L1.    -.3119629   .1275524    -2.45   0.014    -.5619611   -.0619648
         L2.    -.0128502   .1272325    -0.10   0.920    -.2622213    .2365209
     dln_inv     .0503616   .0233314     2.16   0.031     .0046329    .0960904
       _cons     .0131013   .0033814     3.87   0.000     .0064738    .0197288
All the postestimation commands for analyzing VARs work when exogenous variables are included
in a model, but the asymptotic standard errors for the h-step-ahead forecasts are not available.
Below we refit the first model, constraining the coefficient on L2.dln_inc in the equation for dln_inv and the coefficient on L2.dln_consump in the equation for dln_inc to be zero.

. constraint 1 [dln_inv]L2.dln_inc = 0
. constraint 2 [dln_inc]L2.dln_consump = 0
. var dln_inv dln_inc dln_consump if qtr<=tq(1978q4), lutstats dfk
> constraints(1/2)

Vector autoregression

Constraints:
 ( 1)  [dln_inv]L2.dln_inc = 0
 ( 2)  [dln_inc]L2.dln_consump = 0

Sample:  1960q4 - 1978q4                        No. of obs        =        73
                                   (lutstats)   AIC               = -31.69254
                                                HQIC              = -31.46747
                                                SBIC              = -31.12777

Equation           Parms      RMSE     R-sq       chi2     P>chi2
dln_inv              6       .043895   0.1280   9.842338   0.0798
dln_inc              6       .011143   0.1141   8.584446   0.1268
dln_consump          7       .008981   0.2512   22.86958   0.0008

                    Coef.   Std. Err.      z    P>|z|     [95% Conf. Interval]

dln_inv
     dln_inv
         L1.     -.320713   .1247512    -2.57   0.010    -.5652208   -.0762051
         L2.    -.1607084    .124261    -1.29   0.196    -.4042555    .0828386
     dln_inc
         L1.     .1195448   .5295669     0.23   0.821    -.9183873    1.157477
         L2.    -2.55e-17   1.18e-16    -0.22   0.829    -2.57e-16    2.06e-16
 dln_consump
         L1.     1.009281    .623501     1.62   0.106    -.2127586    2.231321
         L2.     1.008079   .5713486     1.76   0.078    -.1117438    2.127902
       _cons    -.0162102    .016893    -0.96   0.337    -.0493199    .0168995

dln_inc
     dln_inv
         L1.     .0435712   .0309078     1.41   0.159     -.017007    .1041495
         L2.     .0496788   .0306455     1.62   0.105    -.0103852    .1097428
     dln_inc
         L1.    -.1555119   .1315854    -1.18   0.237    -.4134146    .1023908
         L2.     .0122353   .1165811     0.10   0.916    -.2162595    .2407301
 dln_consump
         L1.       .29286   .1568345     1.87   0.062      -.01453    .6002501
         L2.     1.78e-19   8.28e-19     0.22   0.829    -1.45e-18    1.80e-18
       _cons      .015689    .003819     4.11   0.000     .0082039    .0231741

dln_consump
     dln_inv
         L1.    -.0026229   .0253538    -0.10   0.918    -.0523154    .0470696
         L2.     .0337245   .0252113     1.34   0.181    -.0156888    .0831378
     dln_inc
         L1.     .2224798   .1094349     2.03   0.042     .0079912    .4369683
         L2.     .3469758   .1006026     3.45   0.001     .1497984    .5441532
 dln_consump
         L1.    -.2600227   .1321622    -1.97   0.049     -.519056   -.0009895
         L2.    -.0146825   .1117618    -0.13   0.895    -.2337315    .2043666
       _cons     .0129149    .003376     3.83   0.000     .0062981    .0195317
None of the free parameter estimates changed by much. Whereas the coefficients in the equation for dln_inv are now significant at the 10% level, the coefficients in the equation for dln_inc remain jointly insignificant.
Stored results
var stores the following in e():
Scalars
  e(N)              number of observations
  e(N_gaps)         number of gaps in sample
  e(k)              number of parameters
  e(k_eq)           number of equations in e(b)
  e(k_dv)           number of dependent variables
  e(df_eq)          average number of parameters in an equation
  e(df_m)           model degrees of freedom
  e(df_r)           residual degrees of freedom (small only)
  e(ll)             log likelihood
  e(ll_dfk)         dfk adjusted log likelihood (dfk only)
  e(obs_#)          number of observations on equation #
  e(k_#)            number of parameters in equation #
  e(df_m#)          model degrees of freedom for equation #
  e(df_r#)          residual degrees of freedom for equation # (small only)
  e(r2_#)           R-squared for equation #
  e(ll_#)           log likelihood for equation #
  e(chi2_#)         χ² for equation #
  e(F_#)            F statistic for equation # (small only)
  e(rmse_#)         root mean squared error for equation #
  e(aic)            Akaike information criterion
  e(hqic)           Hannan–Quinn information criterion
  e(sbic)           Schwarz–Bayesian information criterion
  e(fpe)            final prediction error
  e(mlag)           highest lag in VAR
  e(tmin)           first time period in sample
  e(tmax)           maximum time
  e(detsig)         determinant of e(Sigma)
  e(detsig_ml)      determinant of Σ̂_ml
  e(rank)           rank of e(V)

Macros
  e(cmd)            var
  e(cmdline)        command as typed
  e(depvar)         names of dependent variables
  e(endog)          names of endogenous variables, if specified
  e(exog)           names of exogenous variables, and their lags, if specified
  e(exogvars)       names of exogenous variables, if specified
  e(eqnames)        names of equations
  e(lags)           lags in model
  e(exlags)         lags of exogenous variables in model, if specified
  e(title)          title in estimation output
  e(nocons)         nocons, if noconstant is specified
  e(constraints)    constraints, if specified
  e(cnslist_var)    list of specified constraints
  e(small)          small, if specified
  e(lutstats)       lutstats, if specified
  e(timevar)        time variable specified in tsset
  e(tsfmt)          format for the current time variable
  e(dfk)            dfk, if specified
  e(properties)     b V
  e(predict)        program used to implement predict
  e(marginsok)      predictions allowed by margins
  e(marginsnotok)   predictions disallowed by margins

Matrices
  e(b)              coefficient vector
  e(Cns)            constraints matrix
  e(Sigma)          Σ̂ matrix
  e(V)              variance–covariance matrix of the estimators
  e(bf)             constrained coefficient vector
  e(exlagsm)        matrix mapping lags to exogenous variables
  e(G)              Gamma matrix; see Methods and formulas

Functions
  e(sample)         marks estimation sample
Methods and formulas

A VAR with p lags and exogenous variables can be written compactly as

    y_t = A Y_{t−1} + B_0 x_t + u_t    (5)

where A = (A_1, . . . , A_p) and Y_{t−1} stacks the lagged dependent variables,

    Y_t = ( y_t′, y_{t−1}′, . . . , y_{t−p+1}′ )′

Stacking the T observations yields

    Y = BZ + U

where

    Y = (y_1, . . . , y_T)                         Y is K × T
    B = (A, B_0)                                   B is K × (Kp + M)
    Z = ( Y_0, . . . , Y_{T−1}
          x_1, . . . , x_T    )                    Z is (Kp + M) × T
    U = (u_1, . . . , u_T)                         U is K × T

Intercept terms in the model are included in x_t. If there are no exogenous variables and no intercept terms in the model, x_t is empty.

The coefficients are estimated by iterated seemingly unrelated regression. Because the estimation is actually performed by reg3, the methods are documented in [R] reg3. See [P] makecns for more on estimation with constraints.

Let Û be the matrix of residuals that are obtained via Y − B̂Z, where B̂ is the matrix of estimated coefficients. Then the estimator of Σ is

    Σ̂ = (1/T̃) Û′ Û

By default, the maximum likelihood divisor of T̃ = T is used. When dfk is specified, a small-sample degrees-of-freedom adjustment is used; then, T̃ = T − m, where m is the average number of parameters per equation in the functional form for y_t over the K equations.

small specifies that Wald tests after var be assumed to have F or t distributions instead of chi-squared or standard normal distributions. The standard errors from each equation are computed using the degrees of freedom for the equation.

The gamma matrix stored in e(G) referred to in Stored results is the (Kp + 1) × (Kp + 1) matrix given by

    (1/T) Σ_{t=1}^{T} (1, Y_t′)′ (1, Y_t′)

The formulas for the lag-order selection criteria and the log likelihood are discussed in [TS] varsoc.
Acknowledgment
We thank Christopher F. Baum of the Department of Economics at Boston College and author of
the Stata Press books An Introduction to Modern Econometrics Using Stata and An Introduction to
Stata Programming for his helpful comments.
References
Greene, W. H. 2008. Econometric Analysis. 6th ed. Upper Saddle River, NJ: Prentice Hall.
Hamilton, J. D. 1994. Time Series Analysis. Princeton: Princeton University Press.
Lütkepohl, H. 1993. Introduction to Multiple Time Series Analysis. 2nd ed. New York: Springer.
———. 2005. New Introduction to Multiple Time Series Analysis. New York: Springer.
Stock, J. H., and M. W. Watson. 2001. Vector autoregressions. Journal of Economic Perspectives 15: 101–115.
Watson, M. W. 1994. Vector autoregressions and cointegration. In Vol. 4 of Handbook of Econometrics, ed. R. F.
Engle and D. L. McFadden. Amsterdam: Elsevier.
Also see
[TS] var postestimation Postestimation tools for var
[TS] tsset Declare data to be time-series data
[TS] dfactor Dynamic-factor models
[TS] forecast Econometric model forecasting
[TS] mgarch Multivariate GARCH models
[TS] sspace State-space models
[TS] var svar Structural vector autoregressive models
[TS] varbasic Fit a simple VAR and graph IRFs or FEVDs
[TS] vec Vector error-correction models
[U] 20 Estimation and postestimation commands
Title
var postestimation Postestimation tools for var
Description    Remarks and examples
Description
The following postestimation commands are of special interest after var:
Command          Description
fcast compute    obtain dynamic forecasts
fcast graph      graph dynamic forecasts obtained from fcast compute
irf              create and analyze IRFs and FEVDs
vargranger       Granger causality tests
varlmar          LM test for autocorrelation in residuals
varnorm          test for normally distributed residuals
varsoc           lag-order selection criteria
varstable        check stability condition of estimates
varwle           lag exclusion tests

The following standard postestimation commands are also available:

estat ic, estat summarize, estat vce, estimates, forecast, lincom, lrtest, margins, marginsplot, nlcom, predict, predictnl, test, and testnl
Syntax for predict

    predict [type] newvar [if] [in] [, statistic equation(eqno | eqname)]

statistic    Description
Main
  xb         linear prediction; the default
  stdp       standard error of the linear prediction
  residuals  residuals

These statistics are available both in and out of sample; type predict ... if e(sample) ... if wanted only for the estimation sample.

Menu for predict

    Statistics  >  Postestimation  >  Predictions, residuals, etc.

Options for predict

Main
xb, the default, calculates the linear prediction for the specified equation.
stdp calculates the standard error of the linear prediction for the specified equation.
residuals calculates the residuals.
equation(eqno | eqname) specifies the equation to which you are referring.
equation() is filled in with one eqno or eqname for options xb, stdp, and residuals. For
example, equation(#1) would mean that the calculation is to be made for the first equation,
equation(#2) would mean the second, and so on. You could also refer to the equation by its name;
thus, equation(income) would refer to the equation named income and equation(hours), to
the equation named hours.
If you do not specify equation(), the results are the same as if you specified equation(#1).
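For instance, using the equation names from the discussion above, the following sketch obtains the one-step-ahead predictions and the residuals for the income equation (new variable names arbitrary):

. predict xb_income, xb equation(income)
. predict res_income, residuals equation(income)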
For more information on using predict after multiple-equation estimation commands, see [R] predict.
Forecasting
Two types of forecasts are available after you fit a VAR(p): a one-step-ahead forecast and a dynamic
h-step-ahead forecast.
The one-step-ahead forecast produces a prediction of the value of an endogenous variable in the
current period by using the estimated coefficients, the past values of the endogenous variables, and any
exogenous variables. If you include contemporaneous values of exogenous variables in your model,
you must have observations on the exogenous variables that are contemporaneous with the period
in which the prediction is being made to compute the prediction. In Stata terms, these one-step-ahead predictions are just the standard linear predictions available after any estimation command.
Thus predict, xb eq(eqno | eqname) produces one-step-ahead forecasts for the specified equation.
predict, stdp eq(eqno | eqname) produces the standard error of the linear prediction for the
specified equation. The standard error of the forecast includes an estimate of the variability due to
innovations, whereas the standard error of the linear prediction does not.
The dynamic h-step-ahead forecast begins by using the estimated coefficients, the lagged values of
the endogenous variables, and any exogenous variables to predict one step ahead for each endogenous
variable. Then the one-step-ahead forecast produces two-step-ahead forecasts for each endogenous
variable. The process continues for h periods. Because each step uses the predictions of the previous
steps, these forecasts are known as dynamic forecasts. See the following sections for information on
obtaining forecasts after svar:
[TS] fcast compute Compute dynamic forecasts after var, svar, or vec
[TS] fcast graph Graph forecasts after fcast compute
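As a sketch using the variables from [TS] var (the prefix f_ is arbitrary), eight-step-ahead dynamic forecasts could be computed and one of them graphed with

. fcast compute f_, step(8)
. fcast graph f_dln_inc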
Also see
[TS] var Vector autoregressive models
[U] 20 Estimation and postestimation commands
Title
var svar Structural vector autoregressive models
Syntax    Menu    Description    Options    Remarks and examples    Stored results    Methods and formulas    Acknowledgment    References    Also see
Syntax

Short-run constraints

    svar depvarlist [if] [in], {aconstraints(constraints_a) | aeq(matrix_aeq) | acns(matrix_acns) | bconstraints(constraints_b) | beq(matrix_beq) | bcns(matrix_bcns)} [short_run_options]

Long-run constraints

    svar depvarlist [if] [in], {lrconstraints(constraints_lr) | lreq(matrix_lreq) | lrcns(matrix_lrcns)} [long_run_options]
short_run_options               Description
Model
  noconstant                    suppress constant term
  aconstraints(constraints_a)   apply previously defined constraints_a to A
  aeq(matrix_aeq)               define and apply to A equality constraint matrix matrix_aeq
  acns(matrix_acns)             define and apply to A cross-parameter constraint matrix matrix_acns
  bconstraints(constraints_b)   apply previously defined constraints_b to B
  beq(matrix_beq)               define and apply to B equality constraint matrix matrix_beq
  bcns(matrix_bcns)             define and apply to B cross-parameter constraint matrix matrix_bcns
  lags(numlist)                 use lags numlist in the underlying VAR
Model 2
  exog(varlist_exog)            use exogenous variables varlist_exog
  varconstraints(constraints_v) apply constraints_v to underlying VAR
  noislog                       suppress SURE iteration log
  isiterate(#)                  set maximum number of iterations for SURE
  istolerance(#)                set convergence tolerance of SURE
  noisure                       use one-step SURE
  dfk                           make small-sample degrees-of-freedom adjustment
  small                         report small-sample t and F statistics
  noidencheck                   do not check for local identification
  nobigf                        do not compute parameter vector for coefficients implicitly set to zero
Reporting
  level(#)                      set confidence level; default is level(95)
  full                          show constrained parameters in table
  var                           display underlying var output
  lutstats                      report Lütkepohl lag-order selection statistics
  nocnsreport                   do not display constraints
  display_options               control column formats
Maximization
  maximize_options              control the maximization process; seldom used
  coeflegend                    display legend instead of statistics

long_run_options                Description
Model
  noconstant                    suppress constant term
  lrconstraints(constraints_lr) apply previously defined constraints_lr to C
  lreq(matrix_lreq)             define and apply to C equality constraint matrix matrix_lreq
  lrcns(matrix_lrcns)           define and apply to C cross-parameter constraint matrix matrix_lrcns
  lags(numlist)                 use lags numlist in the underlying VAR
Model 2
  exog(varlist_exog)            use exogenous variables varlist_exog
  varconstraints(constraints_v) apply constraints_v to underlying VAR
  noislog                       suppress SURE iteration log
  isiterate(#)                  set maximum number of iterations for SURE
  istolerance(#)                set convergence tolerance of SURE
  noisure                       use one-step SURE
  dfk                           make small-sample degrees-of-freedom adjustment
  small                         report small-sample t and F statistics
  noidencheck                   do not check for local identification
  nobigf                        do not compute parameter vector for coefficients implicitly set to zero
Reporting
  level(#)                      set confidence level; default is level(95)
  full                          show constrained parameters in table
  var                           display underlying var output
  lutstats                      report Lütkepohl lag-order selection statistics
  nocnsreport                   do not display constraints
  display_options               control column formats
Maximization
  maximize_options              control the maximization process; seldom used
  coeflegend                    display legend instead of statistics

Menu

    Statistics  >  Multivariate time series  >  Structural vector autoregression (SVAR)
Description
svar fits a vector autoregressive model subject to short- or long-run constraints you place on
the resulting impulse–response functions (IRFs). Economic theory typically motivates the constraints,
allowing a causal interpretation of the IRFs to be made. See [TS] var intro for a list of commands
that are used in conjunction with svar.
Options
Model

noconstant; see [R] estimation options.

aeq(matrix_aeq) specifies a matrix that defines a set of equality constraints on A. Elements of matrix_aeq that are missing (.) correspond to free parameters; all other elements constrain the corresponding element of A to the given value. For instance,

        A = ( 1    0
              .  1.5 )

specifies that A[1, 1] = 1, A[1, 2] = 0, A[2, 2] = 1.5, and A[2, 1] is a free parameter.

acns(matrix_acns) specifies a matrix that defines a set of exclusion and cross-parameter equality constraints on A. Missing elements correspond to free parameters, elements equal to zero are constrained to zero, and elements sharing the same positive integer are constrained to be equal. For instance,

        A = ( .  1
              1  0 )

specifies that A[1, 1] is a free parameter, that A[1, 2] and A[2, 1] are constrained to be equal, and that A[2, 2] is constrained to be zero.
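Such pattern matrices are created with ordinary matrix input, in which a missing value (.) marks a free parameter. For instance, the first pattern above could be defined and supplied to svar as follows — a sketch only, with the matrix name Aeq arbitrary and the remaining options needed to identify the model omitted:

. matrix Aeq = (1,0\.,1.5)
. svar y1 y2, aeq(Aeq) ...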
Model 2
Reporting
var specifies that the output from var also be displayed. By default, the underlying VAR is fit
quietly.
lutstats specifies that the Lutkepohl versions of the lag-order selection statistics be reported. See
Methods and formulas in [TS] varsoc for a discussion of these statistics.
nocnsreport; see [R] estimation options.
display options: cformat(% fmt), pformat(% fmt), and sformat(% fmt); see [R] estimation options.
Maximization
maximize options: difficult, technique(algorithm spec), iterate(#), no log, trace,
gradient, showstep, hessian, showtolerance, tolerance(#), ltolerance(#),
nrtolerance(#), nonrtolerance, and from(init specs); see [R] maximize. These options are
seldom used.
The following option is available with svar but is not shown in the dialog box:
coeflegend; see [R] estimation options.
Introduction
This entry assumes that you have already read [TS] var intro and [TS] var; if not, please do. Here
we illustrate how to fit SVARs in Stata subject to short-run and long-run restrictions. For more detailed
information on SVARs, see Amisano and Giannini (1997) and Hamilton (1994). For good introductions
to VARs, see Lütkepohl (2005), Hamilton (1994), Stock and Watson (2001), and Becketti (2013).
One way to impose the Cholesky restrictions is to assume an SVAR model of the form

    Ã (I_K − A_1 L − A_2 L² − · · · − A_p L^p) y_t = B̃ ẽ_t

where Ã is a lower triangular matrix with ones on the diagonal and B̃ is a diagonal matrix. Because the P matrix for this model is P_sr = Ã^{−1} B̃, its estimate, P̂_sr, obtained by plugging in estimates of Ã and B̃, should equal the Cholesky decomposition of Σ̂.
To illustrate, we use the German macroeconomic data discussed in Lütkepohl (2005) and used in [TS] var. In this example, y_t = (dln_inv, dln_inc, dln_consump), where dln_inv is the first difference of the log of investment, dln_inc is the first difference of the log of income, and dln_consump is the first difference of the log of consumption. Because the first difference of the natural log of a variable can be treated as an approximation of the percentage change in that variable, we will refer to these variables as percentage changes in inv, inc, and consump, respectively.
We will impose the Cholesky restrictions on this system by applying equality constraint matrices

        ( 1  0  0 )              ( .  0  0 )
    A = ( .  1  0 )    and   B = ( 0  .  0 )
        ( .  .  1 )              ( 0  0  . )
With these structural restrictions, we assume that the percentage change in inv is not contemporaneously affected by the percentage changes in either inc or consump. We also assume that the
percentage change of inc is affected by contemporaneous changes in inv but not consump. Finally,
we assume that percentage changes in consump are affected by contemporaneous changes in both
inv and inc.
The following commands fit an SVAR model with these constraints.

. use http://www.stata-press.com/data/r13/lutkepohl2
(Quarterly SA West German macro data, Bil DM, from Lutkepohl 1993 Table E.1)
. matrix A = (1,0,0\.,1,0\.,.,1)
. matrix B = (.,0,0\0,.,0\0,0,.)
. svar dln_inv dln_inc dln_consump if qtr<=tq(1978q4), aeq(A) beq(B)
Estimating short-run parameters
(output omitted )
Structural vector autoregression

Sample:  1960q4 - 1978q4                        No. of obs        =        73
Exactly identified model                        Log likelihood    =   606.307

                    Coef.   Std. Err.      z    P>|z|     [95% Conf. Interval]

      /a_1_1            1  (constrained)
      /a_2_1    -.0336288   .0294605    -1.14   0.254    -.0913702    .0241126
      /a_3_1    -.0435846   .0194408    -2.24   0.025    -.0816879   -.0054812
      /a_1_2            0  (constrained)
      /a_2_2            1  (constrained)
      /a_3_2     -.424774   .0765548    -5.55   0.000    -.5748187   -.2747293
      /a_1_3            0  (constrained)
      /a_2_3            0  (constrained)
      /a_3_3            1  (constrained)

      /b_1_1     .0438796   .0036315    12.08   0.000      .036762    .0509972
      /b_2_1            0  (constrained)
      /b_3_1            0  (constrained)
      /b_1_2            0  (constrained)
      /b_2_2     .0110449   .0009141    12.08   0.000     .0092534    .0128365
      /b_3_2            0  (constrained)
      /b_1_3            0  (constrained)
      /b_2_3            0  (constrained)
      /b_3_3     .0072243   .0005979    12.08   0.000     .0060525    .0083962
The SVAR output has four parts: an iteration log, a display of the constraints imposed, a header with
sample and SVAR log-likelihood information, and a table displaying the estimates of the parameters
from the A and B matrices. From the output above, we can see that the equality constraint matrices
supplied to svar imposed the intended constraints and that the SVAR header informs us that the model
we fit is just identified. The estimates of a_2_1, a_3_1, and a_3_2 are all negative. Because the off-diagonal elements of the A matrix contain the negative of the actual contemporaneous effects, the estimated effects are positive, as expected.

The estimates  and B̂ are stored in e(A) and e(B), respectively, allowing us to compute the estimated Cholesky decomposition.
. matrix Aest = e(A)
. matrix Best = e(B)
. matrix chol_est = inv(Aest)*Best
. matrix list chol_est

chol_est[3,3]
                   dln_inv      dln_inc  dln_consump
    dln_inv      .04387957            0            0
    dln_inc      .00147562    .01104494            0
dln_consump      .00253928     .0046916    .00722432
svar stores the estimated Σ from the underlying var in e(Sigma). The output below illustrates the computation of the Cholesky decomposition of e(Sigma). It is the same as the output computed from the SVAR estimates.

. matrix sig_var = e(Sigma)
. matrix chol_var = cholesky(sig_var)
. matrix list chol_var

chol_var[3,3]
                   dln_inv      dln_inc  dln_consump
    dln_inv      .04387957            0            0
    dln_inc      .00147562    .01104494            0
dln_consump      .00253928     .0046916    .00722432
We might now wonder why we bother obtaining parameter estimates via nonlinear estimation if
we can obtain them simply by a transform of the estimates produced by var. When the model is just
identified, as in the previous example, the SVAR parameter estimates can be computed via a transform
of the VAR estimates. However, when the model is overidentified, such is not the case.
Next we also constrain A[2, 1] to be zero, which overidentifies the model. The equality constraint matrices are

        ( 1  0  0 )              ( .  0  0 )
    A = ( 0  1  0 )    and   B = ( 0  .  0 )
        ( .  .  1 )              ( 0  0  . )
The output below contains the commands and results we obtained by fitting this model on the Lütkepohl data.

. matrix B = (.,0,0\0,.,0\0,0,.)
. matrix A = (1,0,0\0,1,0\.,.,1)
. svar dln_inv dln_inc dln_consump if qtr<=tq(1978q4), aeq(A) beq(B)
Estimating short-run parameters
(output omitted )
Structural vector autoregression

Sample:  1960q4 - 1978q4                        No. of obs        =        73
Overidentified model                            Log likelihood    =  605.6613

                    Coef.   Std. Err.      z    P>|z|     [95% Conf. Interval]

      /a_1_1            1  (constrained)
      /a_2_1            0  (constrained)
      /a_3_1    -.0435911   .0192696    -2.26   0.024    -.0813589   -.0058233
      /a_1_2            0  (constrained)
      /a_2_2            1  (constrained)
      /a_3_2    -.4247741   .0758806    -5.60   0.000    -.5734973   -.2760508
      /a_1_3            0  (constrained)
      /a_2_3            0  (constrained)
      /a_3_3            1  (constrained)

      /b_1_1     .0438796   .0036315    12.08   0.000      .036762    .0509972
      /b_2_1            0  (constrained)
      /b_3_1            0  (constrained)
      /b_1_2            0  (constrained)
      /b_2_2     .0111431   .0009222    12.08   0.000     .0093356    .0129506
      /b_3_2            0  (constrained)
      /b_1_3            0  (constrained)
      /b_2_3            0  (constrained)
      /b_3_3     .0072243   .0005979    12.08   0.000     .0060525    .0083962

LR test of identifying restrictions:    chi2(  1) =   1.292
The footer in this example reports a test of the overidentifying restriction. The null hypothesis of this
test is that any overidentifying restrictions are valid. In the case at hand, we cannot reject this null
hypothesis at any of the conventional levels.
As an aid in choosing further restrictions, we examine the underlying VAR.

. var dln_inv dln_inc dln_consump if qtr<=tq(1978q4)

Vector autoregression

Sample:  1960q4 - 1978q4                        No. of obs        =        73
                                                AIC               = -16.03581
                                                HQIC              = -15.77323
                                                SBIC              = -15.37691

Equation           Parms      RMSE     R-sq       chi2     P>chi2
dln_inv              7       .046148   0.1286   10.76961   0.0958
dln_inc              7       .011719   0.1142   9.410683   0.1518
dln_consump          7       .009445   0.2513   24.50031   0.0004

                    Coef.   Std. Err.      z    P>|z|     [95% Conf. Interval]

dln_inv
     dln_inv
         L1.    -.3196318   .1192898    -2.68   0.007    -.5534355   -.0858282
         L2.    -.1605508    .118767    -1.35   0.176      -.39333    .0722283
     dln_inc
         L1.     .1459851   .5188451     0.28   0.778    -.8709326    1.162903
         L2.     .1146009    .508295     0.23   0.822     -.881639    1.110841
 dln_consump
         L1.     .9612288   .6316557     1.52   0.128    -.2767936    2.199251
         L2.     .9344001   .6324034     1.48   0.140    -.3050877    2.173888
       _cons    -.0167221   .0163796    -1.02   0.307    -.0488257    .0153814

dln_inc
     dln_inv
         L1.     .0439309   .0302933     1.45   0.147    -.0154427    .1033046
         L2.     .0500302   .0301605     1.66   0.097    -.0090833    .1091437
     dln_inc
         L1.    -.1527311    .131759    -1.16   0.246    -.4109741    .1055118
         L2.     .0191634   .1290799     0.15   0.882    -.2338285    .2721552
 dln_consump
         L1.     .2884992   .1604069     1.80   0.072    -.0258926    .6028909
         L2.       -.0102   .1605968    -0.06   0.949    -.3249639    .3045639
       _cons     .0157672   .0041596     3.79   0.000     .0076146    .0239198

dln_consump
     dln_inv
         L1.     -.002423   .0244142    -0.10   0.921     -.050274     .045428
         L2.     .0338806   .0243072     1.39   0.163    -.0137607    .0815219
     dln_inc
         L1.     .2248134   .1061884     2.12   0.034     .0166879    .4329389
         L2.     .3549135   .1040292     3.41   0.001     .1510199     .558807
 dln_consump
         L1.    -.2639695   .1292766    -2.04   0.041     -.517347     -.010592
         L2.    -.0222264   .1294296    -0.17   0.864    -.2759039     .231451
       _cons     .0129258   .0033523     3.86   0.000     .0063554    .0194962
The equation-level model tests reported in the header indicate that we cannot reject the null
hypotheses that all the coefficients in the first equation are zero, nor can we reject the null that all
the coefficients in the second equation are zero at the 5% significance level. We use a combination of
theory and the p-values from the output above to place some exclusion restrictions on the underlying
VAR(2). Specifically, in the equation for the percentage change of inv, we constrain the coefficients on L2.dln_inv, L.dln_inc, L2.dln_inc, and L2.dln_consump to be zero. In the equation for dln_inc, we constrain the coefficients on L2.dln_inv, L2.dln_inc, and L2.dln_consump to be zero. Finally, in the equation for dln_consump, we constrain L.dln_inv and L2.dln_consump to be zero. We then refit the SVAR from the previous example.
. constraint 1 [dln_inv]L2.dln_inv = 0
. constraint 2 [dln_inv]L.dln_inc = 0
. constraint 3 [dln_inv]L2.dln_inc = 0
. constraint 4 [dln_inv]L2.dln_consump = 0
. constraint 5 [dln_inc]L2.dln_inv = 0
. constraint 6 [dln_inc]L2.dln_inc = 0
. constraint 7 [dln_inc]L2.dln_consump = 0
. constraint 8 [dln_consump]L.dln_inv = 0
. constraint 9 [dln_consump]L2.dln_consump = 0
. svar dln_inv dln_inc dln_consump if qtr<=tq(1978q4), aeq(A) beq(B)
> varconst(1/9) noislog
Estimating short-run parameters
(output omitted )
Structural vector autoregression
 ( 1)  [a_1_1]_cons = 1
 ( 2)  [a_1_2]_cons = 0
 ( 3)  [a_1_3]_cons = 0
 ( 4)  [a_2_1]_cons = 0
 ( 5)  [a_2_2]_cons = 1
 ( 6)  [a_2_3]_cons = 0
 ( 7)  [a_3_3]_cons = 1
 ( 8)  [b_1_2]_cons = 0
 ( 9)  [b_1_3]_cons = 0
 (10)  [b_2_1]_cons = 0
 (11)  [b_2_3]_cons = 0
 (12)  [b_3_1]_cons = 0
 (13)  [b_3_2]_cons = 0
Structural vector autoregression

Sample:  1960q4 - 1978q4                        No. of obs        =        73
Overidentified model                            Log likelihood    =  601.8591

                    Coef.   Std. Err.      z    P>|z|     [95% Conf. Interval]

      /a_1_1            1  (constrained)
      /a_2_1            0  (constrained)
      /a_3_1    -.0418708   .0187579    -2.23   0.026    -.0786356   -.0051061
      /a_1_2            0  (constrained)
      /a_2_2            1  (constrained)
      /a_3_2    -.4255808   .0745298    -5.71   0.000    -.5716565   -.2795051
      /a_1_3            0  (constrained)
      /a_2_3            0  (constrained)
      /a_3_3            1  (constrained)

      /b_1_1     .0451851   .0037395    12.08   0.000     .0378557    .0525145
      /b_2_1            0  (constrained)
      /b_3_1            0  (constrained)
      /b_1_2            0  (constrained)
      /b_2_2     .0113723   .0009412    12.08   0.000     .0095276     .013217
      /b_3_2            0  (constrained)
      /b_1_3            0  (constrained)
      /b_2_3            0  (constrained)
      /b_3_3     .0072417   .0005993    12.08   0.000      .006067    .0084164

LR test of identifying restrictions:    chi2(  1) =   .8448
If we displayed the underlying VAR(2) results by using the var option, we would see that most of
the unconstrained coefficients are now significant at the 10% level and that none of the equation-level
model statistics fail to reject the null hypothesis at the 10% level. The svar output reveals that the
p-value of the overidentification test rose and that the coefficient on a_3_1 is still insignificant at the 1% level but not at the 5% level.
Before moving on to models with long-run constraints, consider these limitations. We cannot place
constraints on the elements of A in terms of the elements of B, or vice versa. This limitation is
imposed by the form of the check for identification derived by Amisano and Giannini (1997). As
noted in Methods and formulas, this test requires separate constraint matrices for the parameters in
A and B. Another limitation is that we cannot mix short-run and long-run constraints.
Long-run restrictions

A general short-run SVAR has the form

    A(I_K − A_1 L − A_2 L² − · · · − A_p L^p) y_t = B e_t

To simplify the notation, let Ā = A(I_K − A_1 − A_2 − · · · − A_p). The model is assumed to be stable (see [TS] varstable), so Ā^{−1}, the matrix of estimated long-run effects of the reduced-form VAR shocks, is well defined. Constraining A to be an identity matrix allows us to rewrite this equation as

    y_t = Ā^{−1} B e_t = C e_t

where C = Ā^{−1} B is the matrix of long-run responses to the orthogonalized shocks.
In long-run models, the constraints are placed on the elements of C, and the free parameters are
estimated. These constraints are often exclusion restrictions. For instance, constraining C[1, 2] to be
zero can be interpreted as setting the long-run response of variable 1 to the structural shocks driving
variable 2 to be zero.
Similar to the short-run model, the P_lr matrix such that P_lr P_lr′ = Σ identifies the structural impulse–response functions. P_lr = C is identified by the restrictions placed on the parameters in C. There are K² parameters in C, and the order condition for identification requires that there be at least K² − K(K + 1)/2 restrictions placed on those parameters. As in the short-run model, this order condition is necessary but not sufficient, so the Amisano and Giannini (1997) check for local identification is performed by default.
Below we fit a long-run SVAR, constraining C to be diagonal:

    C = ( .  0
          0  . )
. use http://www.stata-press.com/data/r13/m1gdp
. matrix lr = (.,0\0,.)
. svar d.ln_m1 d.ln_gdp, lreq(lr)
Estimating long-run parameters
(output omitted )
Structural vector autoregression
( 1) [c_1_2]_cons = 0
( 2) [c_2_1]_cons = 0
Sample:  1959q4 - 2002q2                        No. of obs        =       171
Overidentified model                            Log likelihood    =  1151.614

                    Coef.   Std. Err.      z    P>|z|     [95% Conf. Interval]

      /c_1_1     .0301007   .0016277    18.49   0.000     .0269106    .0332909
      /c_2_1            0  (constrained)
      /c_1_2            0  (constrained)
      /c_2_2     .0129691   .0007013    18.49   0.000     .0115946    .0143436

LR test of identifying restrictions:    chi2(  1) =   .1368
We have assumed that the underlying VAR has 2 lags; four of the five selection-order criteria
computed by varsoc (see [TS] varsoc) recommended this choice. The test of the overidentifying
restrictions provides no indication that it is not valid.
Stored results
svar stores the following in e():
Scalars
  e(N)                number of observations
  e(N_cns)            number of constraints
  e(k_eq)             number of equations in e(b)
  e(k_dv)             number of dependent variables
  e(k_aux)            number of auxiliary parameters
  e(ll)               log likelihood from svar
  e(ll_#)             log likelihood for equation #
  e(N_gaps_var)       number of gaps in the sample
  e(k_var)            number of coefficients in VAR
  e(k_eq_var)         number of equations in underlying VAR
  e(k_dv_var)         number of dependent variables in underlying VAR
  e(df_eq_var)        average number of parameters in an equation
  e(df_m_var)         model degrees of freedom
  e(df_r_var)         if small, residual degrees of freedom
  e(obs_#_var)        number of observations on equation #
  e(k_#_var)          number of coefficients in equation #
  e(df_m#_var)        model degrees of freedom for equation #
  e(df_r#_var)        residual degrees of freedom for equation # (small only)
  e(r2_#_var)         R-squared for equation #
  e(ll_#_var)         log likelihood for equation # VAR
  e(chi2_#_var)       χ² statistic for equation #
  e(F_#_var)          F statistic for equation # (small only)
  e(rmse_#_var)       root mean squared error for equation #
  e(mlag_var)         highest lag in VAR
  e(tparms_var)       number of parameters in all equations
  e(aic_var)          Akaike information criterion
  e(hqic_var)         Hannan–Quinn information criterion
  e(sbic_var)         Schwarz–Bayesian information criterion
  e(fpe_var)          final prediction error
  e(ll_var)           log likelihood from var
  e(detsig_var)       determinant of e(Sigma)
  e(detsig_ml_var)    determinant of Σ̂_ml
  e(tmin)             first time period in the sample
  e(tmax)             maximum time
  e(chi2_oid)         overidentification test
  e(oid_df)           number of overidentifying restrictions
  e(rank)             rank of e(V)
  e(ic_ml)            number of iterations
  e(rc_ml)            return code from ml
Macros
  e(cmd)              svar
  e(cmdline)          command as typed
  e(lrmodel)          long-run model, if specified
  e(lags)             lags in model
  e(depvar)           names of dependent variables
  e(endog)            names of endogenous variables
  e(exog)             names of exogenous variables, if specified
  e(nocons)           noconstant, if noconstant specified
  e(cns_lr)           long-run constraints
  e(cns_a)            cross-parameter equality constraints on A
  e(cns_b)            cross-parameter equality constraints on B
  e(dfk)              alternate divisor (dfk), if specified
  e(eqnames)          names of equations
  e(lutstats)         lutstats, if specified
  e(constraints_var)  constraints_var, if there are constraints on VAR
  e(small)            small, if specified
  e(tsfmt)            format of timevar
  e(timevar)          name of timevar
  e(title)            title in estimation output
  e(properties)       b V
  e(predict)          program used to implement predict

Matrices
  e(b)                coefficient vector
  e(Cns)              constraints matrix
  e(Sigma)            Σ̂ matrix
Methods and formulas
The log-likelihood function for models with short-run constraints is

    L(A,B) = −(NK/2) ln(2π) + (N/2) ln(|W|²) − (N/2) tr(W′W Σ̂)

where W = B⁻¹A.

When there are long-run constraints, because C = Ā⁻¹B and A = I_K, W = B⁻¹ = C⁻¹Ā⁻¹ = (ĀC)⁻¹. Substituting the last term for W in the short-run log likelihood produces the long-run log likelihood

    L(C) = −(NK/2) ln(2π) + (N/2) ln(|W̃|²) − (N/2) tr(W̃′W̃ Σ̂)

where W̃ = (ĀC)⁻¹.
For both the short-run and the long-run models, the maximization is performed by the scoring
method. See Harvey (1990) for a discussion of this method.
Based on results from Amisano and Giannini (1997), the score vector for the short-run model is

    ∂L(A,B)/∂[vec(A),vec(B)] = N { [vec(W′⁻¹)]′ − [vec(W)]′(Σ̂ ⊗ I_K) } [ (I_K ⊗ B⁻¹), −(A′B′⁻¹ ⊗ B⁻¹) ]

and the expected information matrix is

    I[vec(A),vec(B)] = N [ (W⁻¹ ⊗ B′⁻¹) ; −(I_K ⊗ B′⁻¹) ] (I_{K²} + ⊕) [ (W′⁻¹ ⊗ B⁻¹), −(I_K ⊗ B⁻¹) ]

where ⊕ is the commutation matrix defined in Magnus and Neudecker (1999, 46–48).

Using results from Amisano and Giannini (1997), we can derive the score vector and the expected information matrix for the case with long-run restrictions. The score vector is

    ∂L(C)/∂vec(C) = −N { [vec(W′⁻¹)]′ − [vec(W)]′(Σ̂ ⊗ I_K) } (Ā′⁻¹C′⁻¹ ⊗ C⁻¹)

and the expected information matrix is

    I[vec(C)] = N (I_K ⊗ C′⁻¹) (2Ñ_K) (I_K ⊗ C⁻¹)

Checking for identification is based on the rank of an augmented information matrix. For the short-run model, the model is locally identified if the matrix

    V*_sr = [ Ñ_K(W′ ⊗ B)    Ñ_K(I_K ⊗ B) ]
            [ R_a             0_{K²}      ]
            [ 0_{K²}          R_b         ]

has full column rank of 2K², where Ñ_K = (1/2)(I_{K²} + ⊕), R_a is the constraint matrix for the
parameters in A (that is, R_a vec(A) = r_a), and R_b is the constraint matrix for the parameters in B
(that is, R_b vec(B) = r_b).
For the long-run case, based on results from the C model in Amisano and Giannini (1997), the
model is identified if the matrix

    V*_lr = [ (I ⊗ C′⁻¹)(2Ñ_K)(I ⊗ C⁻¹) ]
            [ R_c                       ]

has full column rank of K², where R_c is the constraint matrix for the parameters in C; that is,
R_c vec(C) = r_c.
The test of the overidentifying restrictions is computed as

    LR = 2(LL_var − LL_svar)

where LR is the value of the test statistic against the null hypothesis that the overidentifying restrictions
are valid, LL_var is the log likelihood from the underlying VAR(p) model, and LL_svar is the log
likelihood from the SVAR model. The test statistic is asymptotically distributed as χ²(q), where q is the
number of overidentifying restrictions. Amisano and Giannini (1997, 38–39) emphasize that, because
this test of the validity of the overidentifying restrictions is an omnibus test, it can be interpreted as
a test of the null hypothesis that all the restrictions are valid.
Because constraints might not be independent, either by construction or because of the data, the
number of restrictions is not necessarily equal to the number of constraints. The rank of e(V)
gives the number of parameters that were independently estimated after applying the constraints. The
maximum number of parameters that can be estimated in an identified short-run or long-run SVAR is
K(K+1)/2. This implies that the number of overidentifying restrictions, q, is equal to K(K+1)/2
minus the rank of e(V).
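As a quick check, the statistic, its degrees of freedom, and its p-value can be recomputed from the stored results listed above. A minimal sketch, assuming an overidentified svar model is the active estimation result:
. display "LR = " 2*(e(ll_var) - e(ll))
. display "q  = " e(oid_df)
. display "p  = " chi2tail(e(oid_df), 2*(e(ll_var) - e(ll)))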
The number of overidentifying restrictions is also linked to the order condition for each model. In
a short-run SVAR model, there are 2K² parameters. Because no more than K(K+1)/2 parameters
may be estimated, the order condition for a short-run SVAR model is that at least 2K² − K(K+1)/2
restrictions be placed on the model. Similarly, there are K² parameters in a long-run SVAR model.
Because no more than K(K+1)/2 parameters may be estimated, the order condition for a long-run
SVAR model is that at least K² − K(K+1)/2 restrictions be placed on the model.
Acknowledgment
We thank Gianni Amisano of the Dipartimento di Scienze Economiche at the Università degli
Studi di Brescia for his helpful comments.
References
Amisano, G., and C. Giannini. 1997. Topics in Structural VAR Econometrics. 2nd ed. Heidelberg: Springer.
Becketti, S. 2013. Introduction to Time Series Using Stata. College Station, TX: Stata Press.
Christiano, L. J., M. Eichenbaum, and C. L. Evans. 1999. Monetary policy shocks: What have we learned and to
what end? In Handbook of Macroeconomics: Volume 1A, ed. J. B. Taylor and M. Woodford. New York: Elsevier.
Hamilton, J. D. 1994. Time Series Analysis. Princeton: Princeton University Press.
Harvey, A. C. 1990. The Econometric Analysis of Time Series. 2nd ed. Cambridge, MA: MIT Press.
Lütkepohl, H. 1993. Introduction to Multiple Time Series Analysis. 2nd ed. New York: Springer.
———. 2005. New Introduction to Multiple Time Series Analysis. New York: Springer.
Magnus, J. R., and H. Neudecker. 1999. Matrix Differential Calculus with Applications in Statistics and Econometrics.
Rev. ed. New York: Wiley.
Also see
[TS] var svar postestimation Postestimation tools for svar
[TS] tsset Declare data to be time-series data
[TS] var Vector autoregressive models
[TS] varbasic Fit a simple VAR and graph IRFs or FEVDs
[TS] vec Vector error-correction models
[U] 20 Estimation and postestimation commands
Title
var svar postestimation Postestimation tools for svar
Description    Syntax for predict    Menu for predict    Options for predict    Remarks and examples    Also see

Description
The following postestimation commands are of special interest after svar:

Command          Description
fcast compute    compute dynamic forecasts after var, svar, or vec
fcast graph      graph forecasts after fcast compute
irf              create and analyze IRFs and FEVDs
vargranger       perform pairwise Granger causality tests
varlmar          perform LM test for residual autocorrelation
varnorm          test for normally distributed disturbances
varsoc           obtain lag-order selection statistics
varstable        check the stability condition of the estimates
varwle           obtain Wald lag-exclusion statistics

The following standard postestimation commands are also available:

Command            Description
estat ic           Akaike's and Schwarz's Bayesian information criteria (AIC and BIC)
estat summarize    summary statistics for the estimation sample
estat vce          variance–covariance matrix of the estimators (VCE)
estimates          cataloging estimation results
forecast           dynamic forecasts and simulations
lincom             point estimates, standard errors, testing, and inference for linear combinations of coefficients
lrtest             likelihood-ratio test
nlcom              point estimates, standard errors, testing, and inference for nonlinear combinations of coefficients
predict            predictions, residuals, influence statistics, and other diagnostic measures
predictnl          point estimates, standard errors, testing, and inference for generalized predictions
test               Wald tests of simple and composite linear hypotheses
testnl             Wald tests of nonlinear hypotheses
Syntax for predict

    predict [type] newvar [if] [in] [, statistic equation(eqno | eqname)]

statistic     Description
Main
  xb          linear prediction; the default
  stdp        standard error of the linear prediction
  residuals   residuals

These statistics are available both in and out of sample; type predict ... if e(sample) ... if wanted only for the estimation sample.

Menu for predict
Statistics > Postestimation > Predictions, residuals, etc.

Options for predict
Main
xb, the default, calculates the linear prediction for the specified equation.
stdp calculates the standard error of the linear prediction for the specified equation.
residuals calculates the residuals.
equation(eqno | eqname) specifies the equation to which you are referring.
equation() is filled in with one eqno or eqname for options xb, stdp, and residuals. For
example, equation(#1) would mean that the calculation is to be made for the first equation,
equation(#2) would mean the second, and so on. You could also refer to the equation by its name;
thus, equation(income) would refer to the equation named income and equation(hours), to
the equation named hours.
If you do not specify equation(), the results are the same as if you specified equation(#1).
For more information on using predict after multiple-equation estimation commands, see [R] predict.
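For instance, a minimal sketch, assuming svar estimation results are active (the new variable names xb1 and res1 are arbitrary):
. predict xb1 if e(sample), xb equation(#1)
. predict res1 if e(sample), residuals equation(#1)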
Forecasting
See the following sections for information on obtaining forecasts after svar:
[TS] fcast compute Compute dynamic forecasts after var, svar, or vec
[TS] fcast graph Graph forecasts after fcast compute
Also see
[TS] var svar Structural vector autoregressive models
[U] 20 Estimation and postestimation commands
Title
varbasic Fit a simple VAR and graph IRFs or FEVDs
Syntax    Menu    Description    Options    Remarks and examples    Stored results    Methods and formulas    References    Also see
Syntax

    varbasic depvarlist [if] [in] [, options]

options          Description
Main
  lags(numlist)  use numlist as the list of lags in the model; default is lags(1 2)
  irf            produce matrix graph of IRFs
  fevd           produce matrix graph of FEVDs
  nograph        do not produce a graph
  step(#)        set forecast horizon # for estimating the IRFs, OIRFs, and FEVDs; default is step(8)

You must tsset your data before using varbasic; see [TS] tsset.
depvarlist may contain time-series operators; see [U] 11.4.4 Time-series varlists.
rolling, statsby, and xi are allowed; see [U] 11.1.10 Prefix commands.
See [U] 20 Estimation and postestimation commands for more capabilities of estimation commands.

Menu
Statistics > Multivariate time series > Basic VAR
Description
varbasic fits a basic vector autoregressive (VAR) model and graphs the impulseresponse functions (IRFs), the orthogonalized impulseresponse functions (OIRFs), or the forecast-error variance
decompositions (FEVDs).
Options
Main
lags(numlist) specifies the lags to be included in the model. The default is lags(1 2). This option
takes a numlist and not simply an integer for the maximum lag. For instance, lags(2) would
include only the second lag in the model, whereas lags(1/2) would include both the first and
second lags in the model. See [U] 11.1.8 numlist and [U] 11.4.4 Time-series varlists for more
discussion of numlists and lags.
irf causes varbasic to produce a matrix graph of the IRFs instead of a matrix graph of the OIRFs,
which is produced by default.
fevd causes varbasic to produce a matrix graph of the FEVDs instead of a matrix graph of the
OIRFs, which is produced by default.
nograph specifies that no graph be produced. The IRFs, OIRFs, and FEVDs are still estimated and
saved in the IRF file varbasic.irf.
step(#) specifies the forecast horizon for estimating the IRFs, OIRFs, and FEVDs. The default is eight
periods.
Example 1
We fit a three-variable VAR with two lags to the German macro data used by Lütkepohl (2005).
The three variables are the first difference of the natural log of investment, dln_inv; the first difference
of the natural log of income, dln_inc; and the first difference of the natural log of consumption,
dln_consump. In addition to fitting the VAR, we want to see the OIRFs. Below we use varbasic to
fit a VAR(2) model on the data from the second quarter of 1961 through the fourth quarter of 1978.
By default, varbasic produces graphs of the OIRFs.
. use http://www.stata-press.com/data/r13/lutkepohl2
(Quarterly SA West German macro data, Bil DM, from Lutkepohl 1993 Table E.1)
. varbasic dln_inv dln_inc dln_consump if qtr<=tq(1978q4)
Vector autoregression

Sample: 1960q4 - 1978q4                         No. of obs     =        73
Log likelihood =   606.307                      AIC            = -16.03581
FPE            =  2.18e-11                      HQIC           = -15.77323
Det(Sigma_ml)  =  1.23e-11                      SBIC           = -15.37691

Equation        Parms      RMSE     R-sq       chi2     P>chi2
dln_inv             7   .046148   0.1286   10.76961     0.0958
dln_inc             7   .011719   0.1142   9.410683     0.1518
dln_consump         7   .009445   0.2513   24.50031     0.0004
                     Coef.   Std. Err.      z    P>|z|     [95% Conf. Interval]
dln_inv
  dln_inv
    L1.          -.3196318   .1192898    -2.68   0.007    -.5534355   -.0858282
    L2.          -.1605508    .118767    -1.35   0.176      -.39333    .0722283
  dln_inc
    L1.           .1459851   .5188451     0.28   0.778    -.8709326    1.162903
    L2.           .1146009    .508295     0.23   0.822     -.881639    1.110841
  dln_consump
    L1.           .9612288   .6316557     1.52   0.128    -.2767936    2.199251
    L2.           .9344001   .6324034     1.48   0.140    -.3050877    2.173888
  _cons          -.0167221   .0163796    -1.02   0.307    -.0488257    .0153814
dln_inc
  dln_inv
    L1.           .0439309   .0302933     1.45   0.147    -.0154427    .1033046
    L2.           .0500302   .0301605     1.66   0.097    -.0090833    .1091437
  dln_inc
    L1.          -.1527311    .131759    -1.16   0.246    -.4109741    .1055118
    L2.           .0191634   .1290799     0.15   0.882    -.2338285    .2721552
  dln_consump
    L1.           .2884992   .1604069     1.80   0.072    -.0258926    .6028909
    L2.             -.0102   .1605968    -0.06   0.949    -.3249639    .3045639
  _cons           .0157672   .0041596     3.79   0.000     .0076146    .0239198
dln_consump
  dln_inv
    L1.           -.002423   .0244142    -0.10   0.921     -.050274     .045428
    L2.           .0338806   .0243072     1.39   0.163    -.0137607    .0815219
  dln_inc
    L1.           .2248134   .1061884     2.12   0.034     .0166879    .4329389
    L2.           .3549135   .1040292     3.41   0.001     .1510199     .558807
  dln_consump
    L1.          -.2639695   .1292766    -2.04   0.041     -.517347    -.010592
    L2.          -.0222264   .1294296    -0.17   0.864    -.2759039     .231451
  _cons           .0129258   .0033523     3.86   0.000     .0063554    .0194962
(figure omitted: matrix graph of the orthogonalized IRFs from varbasic; legend: 95% CI, orthogonalized irf; x axis: step)
Because we are also interested in looking at the FEVDs, we can use irf graph to obtain the
graphs. Although the details are available in [TS] irf and [TS] irf graph, the command below produces
what we want after the call to varbasic.
. irf graph fevd, lstep(1)
(figure omitted: matrix graph of the FEVDs; panel titles such as "varbasic, dln_consump, dln_consump"; legend: 95% CI; x axis: step)
Technical note
Stata stores the estimated IRFs, OIRFs, and FEVDs in an IRF file called varbasic.irf in the current
working directory. varbasic replaces any varbasic.irf that already exists. Finally, varbasic
makes varbasic.irf the active IRF file. This means that the graph and table commands irf graph,
irf cgraph, irf ograph, irf table, and irf ctable will all display results that correspond to
the VAR fit by varbasic.
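For example, any of the saved functions can then be tabulated or graphed without refitting the model. A minimal sketch, using the impulse and response names from the example above:
. irf table fevd, impulse(dln_inc) response(dln_consump)
. irf graph irf, impulse(dln_inc)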
Stored results
See Stored results in [TS] var.
References
Lütkepohl, H. 1993. Introduction to Multiple Time Series Analysis. 2nd ed. New York: Springer.
———. 2005. New Introduction to Multiple Time Series Analysis. New York: Springer.
Also see
[TS] varbasic postestimation Postestimation tools for varbasic
[TS] tsset Declare data to be time-series data
[TS] var Vector autoregressive models
[TS] var svar Structural vector autoregressive models
[U] 20 Estimation and postestimation commands
Title
varbasic postestimation Postestimation tools for varbasic
Description    Syntax for predict    Menu for predict    Options for predict    Remarks and examples    Also see
Description
The following postestimation commands are of special interest after varbasic:
Command          Description
fcast compute    compute dynamic forecasts after var, svar, or vec
fcast graph      graph forecasts after fcast compute
irf              create and analyze IRFs and FEVDs
vargranger       perform pairwise Granger causality tests
varlmar          perform LM test for residual autocorrelation
varnorm          test for normally distributed disturbances
varsoc           obtain lag-order selection statistics
varstable        check the stability condition of the estimates
varwle           obtain Wald lag-exclusion statistics

The following standard postestimation commands are also available:

Command            Description
estat ic           Akaike's and Schwarz's Bayesian information criteria (AIC and BIC)
estat summarize    summary statistics for the estimation sample
estat vce          variance–covariance matrix of the estimators (VCE)
estimates          cataloging estimation results
forecast           dynamic forecasts and simulations
lincom             point estimates, standard errors, testing, and inference for linear combinations of coefficients
lrtest             likelihood-ratio test
margins            marginal means, predictive margins, marginal effects, and average marginal effects
marginsplot        graph the results from margins
nlcom              point estimates, standard errors, testing, and inference for nonlinear combinations of coefficients
predict            predictions, residuals, influence statistics, and other diagnostic measures
predictnl          point estimates, standard errors, testing, and inference for generalized predictions
test               Wald tests of simple and composite linear hypotheses
testnl             Wald tests of nonlinear hypotheses
Syntax for predict

    predict [type] newvar [if] [in] [, statistic equation(eqno | eqname)]

statistic     Description
Main
  xb          linear prediction; the default
  stdp        standard error of the linear prediction
  residuals   residuals

These statistics are available both in and out of sample; type predict ... if e(sample) ... if wanted only for the estimation sample.

Menu for predict
Statistics > Postestimation > Predictions, residuals, etc.

Options for predict
Main
xb, the default, calculates the linear prediction for the specified equation.
stdp calculates the standard error of the linear prediction for the specified equation.
residuals calculates the residuals.
equation(eqno | eqname) specifies the equation to which you are referring.
equation() is filled in with one eqno or eqname for the xb, stdp, and residuals options.
For example, equation(#1) would mean that the calculation is to be made for the first equation,
equation(#2) would mean the second, and so on. You could also refer to the equation by its name;
thus, equation(income) would refer to the equation named income and equation(hours), to
the equation named hours.
If you do not specify equation(), the results are the same as if you specified equation(#1).
For more information on using predict after multiple-equation estimation commands, see [R] predict.
. use http://www.stata-press.com/data/r13/lutkepohl2
(Quarterly SA West German macro data, Bil DM, from Lutkepohl 1993 Table E.1)
. varbasic dln_inv dln_inc dln_consump if qtr<=tq(1978q4)
(output omitted )
. varlmar
Lagrange-multiplier test

lag      chi2    df   Prob > chi2
1      5.5871     9       0.78043
2      6.3189     9       0.70763

H0: no autocorrelation at lag order
Because we cannot reject the null hypothesis of no autocorrelation in the residuals, this test does
not indicate any model misspecification.
Also see
[TS] varbasic Fit a simple VAR and graph IRFs or FEVDs
[U] 20 Estimation and postestimation commands
Title
vargranger Perform pairwise Granger causality tests after var or svar
Syntax    Menu    Description    Options    Remarks and examples    Stored results    Methods and formulas    References    Also see
Syntax

    vargranger [, estimates(estname) separator(#)]

vargranger can be used only after var or svar; see [TS] var and [TS] var svar.

Menu
Statistics > Multivariate time series > VAR diagnostics and tests > Granger causality tests
Description
vargranger performs a set of Granger causality tests for each equation in a VAR, providing a
convenient alternative to test; see [R] test.
Options
estimates(estname) requests that vargranger use the previously obtained set of var or svar
estimates stored as estname. By default, vargranger uses the active results. See [R] estimates
for information on manipulating estimation results.
separator(#) specifies how often separator lines should be drawn between rows. By default, separator
lines appear every K lines, where K is the number of equations in the VAR under analysis. For
example, separator(1) would draw a line between each row, separator(2) between every
other row, and so on. separator(0) specifies that lines not appear in the table.
Because it may be interesting to investigate these types of hypotheses by using the VAR that
underlies an SVAR, vargranger can also produce these tests by using the e() results from an svar.
When vargranger uses svar e() results, the hypotheses concern the underlying var estimates.
See [TS] var and [TS] var svar for information about fitting VARs and SVARs in Stata. See
Lütkepohl (2005), Hamilton (1994), and Amisano and Giannini (1997) for information about Granger
causality and on VARs and SVARs in general.
. vargranger
Granger causality Wald tests

Equation        Excluded             F     df   df_r   Prob > F
dln_inv         dln_inc         .04847      2     66     0.9527
dln_inv         dln_consump     1.5004      2     66     0.2306
dln_inv         ALL             1.5917      4     66     0.1869
dln_inc         dln_inv         1.7683      2     66     0.1786
dln_inc         dln_consump     1.7184      2     66     0.1873
dln_inc         ALL             1.9466      4     66     0.1130
dln_consump     dln_inv         .97147      2     66     0.3839
dln_consump     dln_inc         6.1465      2     66     0.0036
dln_consump     ALL             3.7746      4     66     0.0080
Because the estimates() option was not specified, vargranger used the active e() results.
Consider the results of the three tests for the first equation. The first is a Wald test that the coefficients
on the two lags of dln_inc that appear in the equation for dln_inv are jointly zero. The null
hypothesis that dln_inc does not Granger-cause dln_inv cannot be rejected. Similarly, we cannot
reject the null hypothesis that the coefficients on the two lags of dln_consump in the equation for
dln_inv are jointly zero, so we cannot reject the hypothesis that dln_consump does not Granger-cause
dln_inv. The third test is with respect to the null hypothesis that the coefficients on the two
lags of all the other endogenous variables are jointly zero. Because this cannot be rejected, we cannot
reject the null hypothesis that dln_inc and dln_consump, jointly, do not Granger-cause dln_inv.
Because we failed to reject most of these null hypotheses, we might be interested in imposing
some constraints on the coefficients. See [TS] var for more on fitting VAR models with constraints
on the coefficients.
The first two calls to test show how vargranger obtains its results. The first test reproduces
the first test reported for the dln_inv equation. The second test reproduces the ALL entry for the
first equation. The third test reproduces the standard F statistic for the dln_inv equation, reported
in the header of the var output in the previous example. The standard F statistic also includes the
lags of the dependent variable, as well as any exogenous variables in the equation. This illustrates
that the test performed by vargranger, that the coefficients on all the lags of all the other endogenous
variables in a particular equation are jointly zero (the ALL entry), is not the same as the standard
F statistic for that equation.
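A sketch of the first two of those test calls, assuming the VAR fit above is the active estimation result (coefficients are referenced as [equation]lag.variable):
. test [dln_inv]L1.dln_inc [dln_inv]L2.dln_inc
. test [dln_inv]L1.dln_inc [dln_inv]L2.dln_inc [dln_inv]L1.dln_consump [dln_inv]L2.dln_consump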
Granger causality Wald tests

Equation        Excluded             F     df   df_r   Prob > F
dln_inv         dln_inc         .04847      2     66     0.9527
dln_inv         dln_consump     1.5004      2     66     0.2306
dln_inv         ALL             1.5917      4     66     0.1869
dln_inc         dln_inv         1.7683      2     66     0.1786
dln_inc         dln_consump     1.7184      2     66     0.1873
dln_inc         ALL             1.9466      4     66     0.1130
dln_consump     dln_inv         .97147      2     66     0.3839
dln_consump     dln_inc         6.1465      2     66     0.0036
dln_consump     ALL             3.7746      4     66     0.0080
As we expected, the vargranger results are identical to those in the first example.
Stored results
vargranger stores the following in r():
Matrices
  r(gstats)    χ², df, and p-values (when e(small)=="")
  r(gstats)    F, df, df_r, and p-values (when e(small)!="")
Clive William John Granger (1934–2009) was born in Swansea, Wales, and earned degrees at the
University of Nottingham in mathematics and statistics. Joining the staff there, he also worked
at Princeton on the spectral analysis of economic time series, before moving in 1973 to the
University of California, San Diego. He was awarded the 2003 Nobel Prize in Economics for
methods of analyzing economic time series with common trends (cointegration). He was knighted
in 2005, thus becoming Sir Clive Granger.
References
Amisano, G., and C. Giannini. 1997. Topics in Structural VAR Econometrics. 2nd ed. Heidelberg: Springer.
Granger, C. W. J. 1969. Investigating causal relations by econometric models and cross-spectral methods. Econometrica 37: 424–438.
Hamilton, J. D. 1994. Time Series Analysis. Princeton: Princeton University Press.
Lütkepohl, H. 1993. Introduction to Multiple Time Series Analysis. 2nd ed. New York: Springer.
———. 2005. New Introduction to Multiple Time Series Analysis. New York: Springer.
Phillips, P. C. B. 1997. The ET Interview: Professor Clive Granger. Econometric Theory 13: 253–303.
Also see
[TS] var Vector autoregressive models
[TS] var svar Structural vector autoregressive models
[TS] varbasic Fit a simple VAR and graph IRFs or FEVDs
[TS] var intro Introduction to vector autoregressive models
Title
varlmar Perform LM test for residual autocorrelation after var or svar
Syntax    Menu    Description    Options    Remarks and examples    Stored results    Methods and formulas    References    Also see
Syntax

    varlmar [, options]

options               Description
  mlag(#)             use # for the maximum order of autocorrelation; default is mlag(2)
  estimates(estname)  use previously stored results estname; default is to use active results
  separator(#)        draw separator line after every # rows

varlmar can be used only after var or svar; see [TS] var and [TS] var svar.
You must tsset your data before using varlmar; see [TS] tsset.

Menu
Statistics > Multivariate time series > VAR diagnostics and tests > LM test for residual autocorrelation
Description
varlmar implements a Lagrange multiplier (LM) test for autocorrelation in the residuals of VAR
models, which was presented in Johansen (1995).
Options
mlag(#) specifies the maximum order of autocorrelation to be tested. The integer specified in mlag()
must be greater than 0; the default is 2.
estimates(estname) requests that varlmar use the previously obtained set of var or svar estimates
stored as estname. By default, varlmar uses the active results. See [R] estimates for information
on manipulating estimation results.
separator(#) specifies how often separator lines should be drawn between rows. By default,
separator lines do not appear. For example, separator(1) would draw a line between each row,
separator(2) between every other row, and so on.
varlmar uses the estimation results stored by var or svar. By default, varlmar uses the active
estimation results. However, varlmar can use any previously stored var or svar estimation results
specified in the estimates() option.
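A minimal sketch of commands that produce output like the table below, assuming the Lütkepohl data used in the other examples in this manual (the dfk option matches the example in [TS] var):
. use http://www.stata-press.com/data/r13/lutkepohl2
. var dln_inv dln_inc dln_consump if qtr<=tq(1978q4), dfk
. varlmar, mlag(5)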
Lagrange-multiplier test

lag       chi2    df   Prob > chi2
1       5.5871     9       0.78043
2       6.3189     9       0.70763
3       8.4022     9       0.49418
4      11.8742     9       0.22049
5       5.2914     9       0.80821

H0: no autocorrelation at lag order
Because we cannot reject the null hypothesis that there is no autocorrelation in the residuals for
any of the five orders tested, this test gives no hint of model misspecification. Although we fit the
VAR with the dfk option to be consistent with the example in [TS] var, varlmar always uses the ML
estimator of Σ. The results obtained from varlmar are the same whether or not dfk is specified.
Lagrange-multiplier test

lag       chi2    df   Prob > chi2
1       5.5871     9       0.78043
2       6.3189     9       0.70763
3       8.4022     9       0.49418
4      11.8742     9       0.22049
5       5.2914     9       0.80821

H0: no autocorrelation at lag order
Because the underlying VAR(2) is the same as the previous example (we assure you that this is
true), the output from varlmar is also the same.
Stored results
varlmar stores the following in r():
Matrices
  r(lm)    χ², df, and p-values

Methods and formulas
The LM test statistic at lag s is

    LM_s = (T − d − 0.5) ln( |Σ̂| / |Σ̃_s| )

where T is the number of observations, d is the number of coefficients estimated in the augmented
VAR, Σ̂ is the maximum likelihood estimate of the variance–covariance matrix of the disturbances
from the VAR, and Σ̃_s is the maximum likelihood estimate from the VAR augmented with the
residuals at lag s.
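The saved matrix can be inspected directly after the test; a minimal sketch:
. varlmar, mlag(5)
. matrix list r(lm)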
References
Davidson, R., and J. G. MacKinnon. 1993. Estimation and Inference in Econometrics. New York: Oxford University
Press.
Johansen, S. 1995. Likelihood-Based Inference in Cointegrated Vector Autoregressive Models. Oxford: Oxford University
Press.
Also see
[TS] var Vector autoregressive models
[TS] var svar Structural vector autoregressive models
[TS] varbasic Fit a simple VAR and graph IRFs or FEVDs
[TS] var intro Introduction to vector autoregressive models
Title
varnorm Test for normally distributed disturbances after var or svar
Syntax    Menu    Description    Options    Remarks and examples    Stored results    Methods and formulas    References    Also see
Syntax

    varnorm [, options]

options               Description
  jbera               report Jarque–Bera statistic
  skewness            report skewness statistic
  kurtosis            report kurtosis statistic
  estimates(estname)  use previously stored results estname; default is to use active results
  cholesky            use the Cholesky decomposition of Σ̂ to orthogonalize the residuals
  separator(#)        draw separator line after every # rows

varnorm can be used only after var or svar; see [TS] var and [TS] var svar.
You must tsset your data before using varnorm; see [TS] tsset.

Menu
Statistics > Multivariate time series > VAR diagnostics and tests > Test for normally distributed disturbances
Description
varnorm computes and reports a series of statistics against the null hypothesis that the disturbances
in a VAR are normally distributed. For each equation, and for all equations jointly, up to three statistics
may be computed: a skewness statistic, a kurtosis statistic, and the Jarque–Bera statistic. By default,
all three statistics are reported.
Options
jbera requests that the Jarque–Bera statistic and any other explicitly requested statistic be reported.
By default, the Jarque–Bera, skewness, and kurtosis statistics are reported.
skewness requests that the skewness statistic and any other explicitly requested statistic be reported.
By default, the Jarque–Bera, skewness, and kurtosis statistics are reported.
kurtosis requests that the kurtosis statistic and any other explicitly requested statistic be reported.
By default, the Jarque–Bera, skewness, and kurtosis statistics are reported.
estimates(estname) specifies that varnorm use the previously obtained set of var or svar estimates
stored as estname. By default, varnorm uses the active results. See [R] estimates for information
on manipulating estimation results.
cholesky specifies that varnorm use the Cholesky decomposition of the estimated variance–covariance
matrix of the disturbances, Σ̂, to orthogonalize the residuals when varnorm is applied to svar
results. By default, when varnorm is applied to svar results, it uses the estimated structural
decomposition Â⁻¹B̂ or Ĉ to orthogonalize the residuals. When applied to var e() results,
varnorm always uses the Cholesky decomposition of Σ̂. For this reason, the cholesky option
may not be specified when using var results.
separator(#) specifies how often separator lines should be drawn between rows. By default,
separator lines do not appear. For example, separator(1) would draw a line between each row,
separator(2) between every other row, and so on.
. use http://www.stata-press.com/data/r13/lutkepohl2
(output omitted)
. var dln_inv dln_inc dln_consump if qtr<=tq(1978q4), dfk
(output omitted)
. varnorm
Jarque-Bera test

Equation          chi2   df   Prob > chi2
dln_inv          2.821    2       0.24397
dln_inc          3.450    2       0.17817
dln_consump      1.566    2       0.45702
ALL              7.838    6       0.25025

Skewness test

Equation       Skewness    chi2   df   Prob > chi2
dln_inv          .11935   0.173    1       0.67718
dln_inc         -.38316   1.786    1       0.18139
dln_consump     -.31275   1.190    1       0.27532
ALL                       3.150    3       0.36913

Kurtosis test

Equation       Kurtosis    chi2   df   Prob > chi2
dln_inv          3.9331   2.648    1       0.10367
dln_inc          3.7396   1.664    1       0.19710
dln_consump      2.6484   0.376    1       0.53973
ALL                       4.688    3       0.19613
In this example, neither the single-equation Jarque–Bera statistics nor the joint Jarque–Bera statistic
come close to rejecting the null hypothesis.
The skewness and kurtosis results have similar structures.
The Jarque–Bera results use the sum of the skewness and kurtosis statistics. The skewness and
kurtosis results are based on the skewness and kurtosis coefficients, respectively. See Methods and
formulas.
We fit an SVAR that is based on an underlying VAR with two lags that is the same as the one
fit in the previous example. We impose a structural decomposition that is the same as the Cholesky
decomposition, as illustrated in [TS] var svar.
. matrix a = (.,0,0\.,.,0\.,.,.)
. matrix b = I(3)
. svar dln_inv dln_inc dln_consump if qtr<=tq(1978q4), dfk aeq(a) beq(b)
(output omitted )
. varnorm
Jarque-Bera test

Equation          chi2   df   Prob > chi2
dln_inv          2.821    2       0.24397
dln_inc          3.450    2       0.17817
dln_consump      1.566    2       0.45702
ALL              7.838    6       0.25025

Skewness test

Equation       Skewness    chi2   df   Prob > chi2
dln_inv          .11935   0.173    1       0.67718
dln_inc         -.38316   1.786    1       0.18139
dln_consump     -.31275   1.190    1       0.27532
ALL                       3.150    3       0.36913

Kurtosis test

Equation       Kurtosis    chi2   df   Prob > chi2
dln_inv          3.9331   2.648    1       0.10367
dln_inc          3.7396   1.664    1       0.19710
dln_consump      2.6484   0.376    1       0.53973
ALL                       4.688    3       0.19613
Because the estimated structural decomposition is the same as the Cholesky decomposition, the
varnorm results are the same as those from the previous example.
Technical note
The statistics computed by varnorm depend on Σ̂, the estimated variance–covariance matrix of
the disturbances. var uses the maximum likelihood estimator of this matrix by default, but the dfk
option produces an estimator that uses a small-sample correction. Thus specifying dfk in the call to
var or svar will affect the test results produced by varnorm.
Stored results
varnorm stores the following in r():
Macros
r(dfk)
dfk, if specified
Matrices
  r(kurtosis)    kurtosis test, df, and p-values
  r(skewness)    skewness test, df, and p-values
  r(jb)          Jarque–Bera test, df, and p-values

Methods and formulas
The estimated variance–covariance matrix of the disturbances, Σ̂ = P̂P̂′, implies that

    P̂⁻¹ Σ̂ P̂′⁻¹ = I_K

so premultiplying û_t by P̂⁻¹ is one way of performing the orthogonalization. When varnorm is applied
to var results, P̂ is defined to be the Cholesky decomposition of Σ̂. When varnorm is applied to
svar results, P̂ is set, by default, to the estimated structural decomposition; that is, P̂ = Â⁻¹B̂,
where Â and B̂ are the svar estimates of the A and B matrices, or Ĉ, where Ĉ is the long-run
SVAR estimate of C. (See [TS] var svar for more on the origin and estimation of the A and B
matrices.) When varnorm is applied to svar results and the cholesky option is specified, P̂ is set
to the Cholesky decomposition of Σ̂.

Define ŵ_t to be the orthogonalized VAR residuals given by

    ŵ_t = (ŵ_1t, . . . , ŵ_Kt)′ = P̂⁻¹ û_t

The K × 1 vectors of skewness and kurtosis coefficients are then computed using the orthogonalized
residuals by

    b̂_1 = (b̂_11, . . . , b̂_K1)′;    b̂_k1 = (1/T) Σ_{t=1}^{T} ŵ_kt³
    b̂_2 = (b̂_12, . . . , b̂_K2)′;    b̂_k2 = (1/T) Σ_{t=1}^{T} ŵ_kt⁴

The test statistics are

    λ̂_1 = T b̂_1′ b̂_1 / 6                 ~ χ²(K)
    λ̂_2 = T (b̂_2 − 3)′(b̂_2 − 3) / 24     ~ χ²(K)
    λ̂_3 = λ̂_1 + λ̂_2                      ~ χ²(2K)

λ̂_1, λ̂_2, and λ̂_3 are for tests of the null hypothesis that the K × 1 vector of disturbances follows
a multivariate normal distribution. The corresponding statistics against the null hypothesis that the
disturbances from the kth equation come from a univariate normal distribution are

    λ̂_1k = T b̂_k1² / 6                   ~ χ²(1)
    λ̂_2k = T (b̂_k2 − 3)² / 24            ~ χ²(1)
    λ̂_3k = λ̂_1k + λ̂_2k                   ~ χ²(2)
References
Hamilton, J. D. 1994. Time Series Analysis. Princeton: Princeton University Press.
Jarque, C. M., and A. K. Bera. 1987. A test for normality of observations and regression residuals. International Statistical Review 55: 163–172.
Lütkepohl, H. 1993. Introduction to Multiple Time Series Analysis. 2nd ed. New York: Springer.
———. 2005. New Introduction to Multiple Time Series Analysis. New York: Springer.
Also see
[TS] var Vector autoregressive models
[TS] var svar Structural vector autoregressive models
[TS] varbasic Fit a simple VAR and graph IRFs or FEVDs
[TS] var intro Introduction to vector autoregressive models
Title
varsoc Obtain lag-order selection statistics for VARs and VECMs
Syntax    Menu    Description    Preestimation options    Postestimation option    Remarks and examples    Stored results    Methods and formulas    References    Also see
Syntax

Preestimation syntax

    varsoc depvarlist [if] [in] [, preestimation_options]

Postestimation syntax

    varsoc [, estimates(estname)]

preestimation_options         Description
Main
  maxlag(#)                   set maximum lag order to #; default is maxlag(4)
  exog(varlist)               use varlist as exogenous variables
  constraints(constraints)    apply constraints to exogenous variables
  noconstant                  suppress constant term
  lutstats                    use Lütkepohl's version of the information criteria
  level(#)                    set confidence level; default is level(95)
  separator(#)                draw separator line after every # rows

You must tsset your data before using varsoc; see [TS] tsset.
by is allowed with the preestimation version of varsoc; see [U] 11.1.10 Prefix commands.

Menu

Preestimation for VARs
  Statistics > Multivariate time series > VAR diagnostics and tests > Lag-order selection statistics (preestimation)
Postestimation for VARs
  Statistics > Multivariate time series > VAR diagnostics and tests > Lag-order selection statistics (postestimation)
Preestimation for VECMs
  Statistics > Multivariate time series > VEC diagnostics and tests > Lag-order selection statistics (preestimation)
Postestimation for VECMs
  Statistics > Multivariate time series > VEC diagnostics and tests > Lag-order selection statistics (postestimation)
Description
varsoc reports the final prediction error (FPE), Akaike's information criterion (AIC), Schwarz's
Bayesian information criterion (SBIC), and the Hannan–Quinn information criterion (HQIC) lag-order
selection statistics for a series of vector autoregressions of order 1, . . . , maxlag(). A sequence
of likelihood-ratio test statistics for all the full VARs of order less than or equal to the highest lag
order is also reported. In the postestimation version, the maximum lag and estimation options are
based on the model just fit or the model specified in estimates(estname).
The preestimation version of varsoc can also be used to select the lag order for a vector error-correction
model (VECM). As shown by Nielsen (2001), the lag-order selection statistics discussed
here can be used in the presence of I(1) variables.
Preestimation options
Main
maxlag(#) specifies the maximum lag order for which the statistics are to be obtained.
exog(varlist) specifies exogenous variables to include in the VARs fit by varsoc.
constraints(constraints) specifies a list of constraints on the exogenous variables to be applied.
Do not specify constraints on the lags of the endogenous variables because specifying one would
mean that at least one of the VAR models considered by varsoc will not contain the lag specified
in the constraint. Use var directly to obtain selection-order criteria with constraints on lags of the
endogenous variables.
noconstant suppresses the constant terms from the model. By default, constant terms are included.
lutstats specifies that the Lütkepohl (2005) versions of the information criteria be reported. See
Methods and formulas for a discussion of these statistics.
level(#) specifies the confidence level, as a percentage, that is used to identify the first likelihood-ratio
test that rejects the null hypothesis that the additional parameters from adding a lag are jointly
zero. The default is level(95) or as set by set level; see [U] 20.7 Specifying the width of
confidence intervals.
separator(#) specifies how often separator lines should be drawn between rows. By default,
separator lines do not appear. For example, separator(1) would draw a line between each row,
separator(2) between every other row, and so on.
Postestimation option
estimates(estname) specifies the name of a previously stored set of var or svar estimates.
When no depvarlist is specified, varsoc uses the postestimation syntax and uses the currently
active estimation results or the results specified in estimates(estname). See [R] estimates for
information on manipulating estimation results.
Example 1: Preestimation
Here we use varsoc as a preestimation command.
. use http://www.stata-press.com/data/r13/lutkepohl2
(Quarterly SA West German macro data, Bil DM, from Lutkepohl 1993 Table E.1)
. varsoc dln_inv dln_inc dln_consump if qtr<=tq(1978q4), lutstats
Selection-order criteria (lutstats)
Sample: 1961q2 - 1978q4                         Number of obs =        71

lag      LL        LR       df      p      FPE        AIC        HQIC       SBIC
0     564.784                             2.7e-11    -24.423    -24.423*   -24.423*
1     576.409    23.249      9   0.006    2.5e-11    -24.497    -24.3829   -24.2102
2     588.859    24.901*     9   0.003    2.3e-11*   -24.5942*  -24.3661   -24.0205
3     591.237    4.7566      9   0.855    2.7e-11    -24.4076   -24.0655   -23.5472
4     598.457    14.438      9   0.108    2.9e-11    -24.3575   -23.9012   -23.2102

Endogenous: dln_inv dln_inc dln_consump
Exogenous:  _cons
The sample used begins in 1961q2 because all the VARs are fit to the sample defined by any if or
in conditions and the available data for the maximum lag specified. The default maximum number
of lags is four. Because we specified the lutstats option, the table contains the Lütkepohl (2005)
versions of the information criteria, which differ from the standard definitions in that they drop the
constant term from the log likelihood. In this example, the likelihood-ratio tests selected a model
with two lags. AIC and FPE have also both chosen a model with two lags, whereas SBIC and HQIC
have both selected a model with zero lags.
Example 2: Postestimation
varsoc works as a postestimation command when no dependent variables are specified.
. var dln_inc dln_consump if qtr<=tq(1978q4), lutstats exog(l.dln_inv)
(output omitted )
. varsoc
Selection-order criteria (lutstats)
Sample: 1960q4 - 1978q4
lag
0
1
2
LL
LR
460.646
467.606
477.087
Endogenous:
Exogenous:
13.919
18.962*
df
4
4
0.008
0.001
Number of obs
FPE
AIC
HQIC
73
SBIC
dln_inc dln_consump
L.dln_inv _cons
Because we included one lag of dln_inv in our original model, varsoc did likewise with each
model it fit.
Based on the work of Tsay (1984), Paulsen (1984), and Nielsen (2001), these lag-order selection
criteria can be used to determine the lag length of the VAR underlying a VECM. See [TS] vec intro
for an example in which we use varsoc to choose the lag order for a VECM.
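A minimal sketch of that workflow, where y1 and y2 are hypothetical I(1) series and the two-lag choice is purely illustrative:
. varsoc y1 y2, maxlag(8)
. vec y1 y2, lags(2)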
Stored results
varsoc stores the following in r():
Scalars
  r(N)             number of observations
  r(tmax)          last time period in sample
  r(tmin)          first time period in sample
  r(mlag)          maximum lag order
  r(N_gaps)        number of gaps in the sample

Macros
  r(endog)         names of endogenous variables
  r(exog)          names of exogenous variables
  r(lutstats)      lutstats, if specified
  r(cns#)          the #th constraint

Matrices
  r(stats)         LL, LR, df, p, FPE, AIC, HQIC, and SBIC statistics
  r(rmlutstats)    the selection-order statistics computed without the Lütkepohl adjustment, stored when lutstats is specified
Methods and formulas
The log likelihood of a VAR is

    LL = (T/2) { ln |Σ̂⁻¹| − K ln(2π) − K }

where T is the number of observations, K is the number of equations, and Σ̂ is the maximum
likelihood estimate of E[u_t u_t′], where u_t is the K × 1 vector of disturbances. Because

    ln |Σ̂⁻¹| = −ln |Σ̂|

the log likelihood can be rewritten as

    LL = −(T/2) { ln |Σ̂| + K ln(2π) + K }

Letting LL(j) be the value of the log likelihood with j lags yields the LR statistic for lag order j as

    LR(j) = 2 { LL(j) − LL(j−1) }
Model-order statistics
The formula for the FPE given in Lütkepohl (2005, 147) is

    FPE = |Σ̂_u| [ (T + Kp + 1) / (T − Kp − 1) ]^K

This formula, however, assumes that there is a constant in the model and that none of the variables
are dropped because of collinearity. To deal with these problems, the FPE is implemented as

    FPE = |Σ̂_u| [ (T + m̄) / (T − m̄) ]^K

where m̄ is the average number of parameters over the K equations. This implementation accounts
for variables dropped because of collinearity.
By default, the AIC, SBIC, and HQIC are computed according to their standard definitions, which
include the constant term from the log likelihood. That is,

    AIC  = −2 (LL/T) + (2/T) t_p
    SBIC = −2 (LL/T) + ( ln(T)/T ) t_p
    HQIC = −2 (LL/T) + ( 2 ln{ln(T)}/T ) t_p

where t_p is the total number of parameters in the model and LL is the log likelihood.
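As a sketch, the standard AIC could be recomputed by hand from the stored results of a fitted VAR, assuming the model stores the log likelihood in e(ll) and the total parameter count in e(tparms) (as listed for the underlying VAR in the stored results of [TS] var svar):
. var dln_inv dln_inc dln_consump
. display -2*(e(ll)/e(N)) + 2*e(tparms)/e(N)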
Lutstats
Lütkepohl (2005) advocates dropping the constant term from the log likelihood because it does
not affect inference. The Lütkepohl versions of the information criteria are

    AIC  = ln |Σ̂_u| + (2/T) pK²
    SBIC = ln |Σ̂_u| + ( ln(T)/T ) pK²
    HQIC = ln |Σ̂_u| + ( 2 ln{ln(T)}/T ) pK²
References
Akaike, H. 1973. Information theory and an extension of the maximum likelihood principle. In Second International Symposium on Information Theory, ed. B. N. Petrov and F. Csáki, 267–281. Budapest: Akadémiai Kiadó.
Amemiya, T. 1985. Advanced Econometrics. Cambridge, MA: Harvard University Press.
Hamilton, J. D. 1994. Time Series Analysis. Princeton: Princeton University Press.
Lütkepohl, H. 1993. Introduction to Multiple Time Series Analysis. 2nd ed. New York: Springer.
———. 2005. New Introduction to Multiple Time Series Analysis. New York: Springer.
Nielsen, B. 2001. Order determination in general vector autoregressions. Working paper, Department of Economics, University of Oxford and Nuffield College. http://ideas.repec.org/p/nuf/econwp/0110.html.
Paulsen, J. 1984. Order determination of multivariate autoregressive time series with unit roots. Journal of Time Series Analysis 5: 115–127.
Tsay, R. S. 1984. Order selection in nonstationary autoregressive models. Annals of Statistics 12: 1425–1433.
Also see
[TS] var Vector autoregressive models
[TS] var svar Structural vector autoregressive models
[TS] varbasic Fit a simple VAR and graph IRFs or FEVDs
[TS] vec Vector error-correction models
[TS] var intro Introduction to vector autoregressive models
[TS] vec intro Introduction to vector error-correction models
Title
varstable Check the stability condition of VAR or SVAR estimates
Syntax    Menu    Description    Options    Remarks and examples    Stored results    Methods and formulas    References    Also see
Syntax

    varstable [, options]

options                  Description
Main
  estimates(estname)     use previously stored results estname; default is to use active results
  amat(matrix_name)      save the companion matrix as matrix_name
  graph                  graph eigenvalues of the companion matrix
  dlabel                 label eigenvalues with the distance from the unit circle
  modlabel               label eigenvalues with the modulus
  marker_options         change look of markers
  rlopts(cline_options)  affect rendition of reference unit circle
  nogrid                 suppress polar grid circles
  pgrid([numlist][, line_options])  specify radii and appearance of polar grid circles
Add plots
  addplot(plot)          add other plots to the generated graph
  twoway_options         any options other than by() documented in [G-3] twoway_options

varstable can be used only after var or svar; see [TS] var and [TS] var svar.

Menu
Statistics > Multivariate time series > VAR diagnostics and tests > Check stability condition of VAR estimates
Description
varstable checks the eigenvalue stability condition after estimating the parameters of a vector
autoregression using var or svar.
Options
Main
estimates(estname) requests that varstable use the previously obtained set of var estimates
stored as estname. By default, varstable uses the active estimation results. See [R] estimates
for information on manipulating estimation results.
amat(matrix name) specifies a valid Stata matrix name by which the companion matrix A can be
saved (see Methods and formulas for the definition of the matrix A). The default is not to save
the A matrix.
graph causes varstable to draw a graph of the eigenvalues of the companion matrix.
dlabel labels each eigenvalue with its distance from the unit circle. dlabel cannot be specified
with modlabel.
modlabel labels the eigenvalues with their moduli. modlabel cannot be specified with dlabel.
marker options specify the look of markers. This look includes the marker symbol, the marker size,
and its color and outline; see [G-3] marker options.
rlopts(cline options) affect the rendition of the reference unit circle; see [G-3] cline options.
nogrid suppresses the polar grid circles.
pgrid([numlist][, line_options]) determines the radii and appearance of the polar grid circles.
By default, the graph includes nine polar grid circles with radii 0.1, 0.2, . . . , 0.9 that have the grid
line style. The numlist specifies the radii for the polar grid circles. The line options determine the
appearance of the polar grid circles; see [G-3] line options. Because the pgrid() option can be
repeated, circles with different radii can have distinct appearances.
Add plots
addplot(plot) adds specified plots to the generated graph. See [G-3] addplot option.
twoway options are any of the options documented in [G-3] twoway options, except by(). These
include options for titling the graph (see [G-3] title options) and for saving the graph to disk (see
[G-3] saving option).
Example 1
After fitting a VAR with var, we can use varstable to check the stability condition. Using the
same VAR model that was used in [TS] var, we demonstrate the use of varstable.
. use http://www.stata-press.com/data/r13/lutkepohl2
(Quarterly SA West German macro data, Bil DM, from Lutkepohl 1993 Table E.1)
. var dln_inv dln_inc dln_consump if qtr>=tq(1961q2) & qtr<=tq(1978q4)
(output omitted )
. varstable, graph
Eigenvalue stability condition

           Eigenvalue             Modulus
   .5456253                       .545625
  -.3785754 + .3853982i           .540232
  -.3785754 - .3853982i           .540232
  -.0643276 + .4595944i           .464074
  -.0643276 - .4595944i           .464074
  -.3698058                       .369806

All the eigenvalues lie inside the unit circle.
Because the modulus of each eigenvalue is strictly less than 1, the estimates satisfy the eigenvalue
stability condition.
Specifying the graph option produced a graph of the eigenvalues with the real components on
the x axis and the complex components on the y axis. The graph below indicates visually that these
eigenvalues are well inside the unit circle.
(figure omitted: eigenvalues of the companion matrix plotted against the unit circle; x axis: Real, y axis: Imaginary)
Example 2
This example illustrates two other features of the varstable command. First, varstable can
check the stability of the estimates of the VAR underlying an SVAR fit by var svar. Second, varstable
can check the stability of any previously stored var or var svar estimates.
709
We begin by refitting the previous VAR and storing the results as var1. Because this is the same
VAR that was fit in the previous example, the stability results should be identical.
. var dln_inv dln_inc dln_consump if qtr>=tq(1961q2) & qtr<=tq(1978q4)
(output omitted )
. estimates store var1
Now we use svar to fit an SVAR with a different underlying VAR and check the estimates of that
underlying VAR for stability.
. matrix A = (.,0\.,.)
. matrix B = I(2)
. svar d.ln_inc d.ln_consump, aeq(A) beq(B)
(output omitted )
. varstable
Eigenvalue stability condition

           Eigenvalue             Modulus
   .548711                        .548711
  -.2979493 + .4328013i           .525443
  -.2979493 - .4328013i           .525443
  -.3570825                       .357082
The estimates() option allows us to check the stability of the var results stored as var1.
. varstable, est(var1)
Eigenvalue stability condition

           Eigenvalue             Modulus
   .5456253                       .545625
  -.3785754 + .3853982i           .540232
  -.3785754 - .3853982i           .540232
  -.0643276 + .4595944i           .464074
  -.0643276 - .4595944i           .464074
  -.3698058                       .369806
The results are identical to those obtained in the previous example, confirming that we were
checking the results in var1.
Stored results
varstable stores the following in r():
Matrices
  r(Re)         real part of the eigenvalues of A
  r(Im)         imaginary part of the eigenvalues of A
  r(Modulus)    modulus of the eigenvalues of A
Methods and formulas
varstable forms the companion matrix

        [ A_1   A_2   . . .   A_{p-1}   A_p ]
        [ I     0     . . .   0         0   ]
    A = [ 0     I     . . .   0         0   ]
        [ .     .       .     .         .   ]
        [ 0     0     . . .   I         0   ]

and obtains its eigenvalues by using matrix eigenvalues. The modulus of the complex eigenvalue
r + ci is sqrt(r² + c²). As shown by Lütkepohl (2005) and Hamilton (1994), the VAR is stable if the
modulus of each eigenvalue of A is strictly less than 1.
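A minimal sketch of saving the companion matrix and recomputing its eigenvalues by hand (the matrix name A is arbitrary):
. varstable, amat(A)
. matrix eigenvalues re im = A
. matrix list re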
References
Hamilton, J. D. 1994. Time Series Analysis. Princeton: Princeton University Press.
Lütkepohl, H. 1993. Introduction to Multiple Time Series Analysis. 2nd ed. New York: Springer.
———. 2005. New Introduction to Multiple Time Series Analysis. New York: Springer.
Also see
[TS] var Vector autoregressive models
[TS] var svar Structural vector autoregressive models
[TS] varbasic Fit a simple VAR and graph IRFs or FEVDs
[TS] var intro Introduction to vector autoregressive models
Title
varwle Obtain Wald lag-exclusion statistics after var or svar
Syntax    Menu    Description    Options    Remarks and examples    Stored results    Methods and formulas    References    Also see
Syntax

    varwle [, estimates(estname) separator(#)]

varwle can be used only after var or svar; see [TS] var and [TS] var svar.

Menu
Statistics > Multivariate time series > VAR diagnostics and tests > Wald lag-exclusion statistics
Description
varwle reports Wald tests the hypothesis that the endogenous variables at a given lag are jointly
zero for each equation and for all equations jointly.
Options
estimates(estname) requests that varwle use the previously obtained set of var or svar estimates
stored as estname. By default, varwle uses the active estimation results. See [R] estimates for
information on manipulating estimation results.
separator(#) specifies how often separator lines should be drawn between rows. By default,
separator lines do not appear. For example, separator(1) would draw a line between each row,
separator(2) between every other row, and so on.
Example 1

. var dln_inv dln_inc dln_consump if qtr<=tq(1978q4), dfk small
(output omitted)
. varwle
Equation: dln_inv
lag          F     df   df_r   Prob > F
1      2.64902      3     66     0.0560
2      1.25799      3     66     0.2960

Equation: dln_inc
lag          F     df   df_r   Prob > F
1      2.19276      3     66     0.0971
2      .907499      3     66     0.4423

Equation: dln_consump
lag          F     df   df_r   Prob > F
1      1.80804      3     66     0.1543
2      5.57645      3     66     0.0018

Equation: All
lag          F     df   df_r   Prob > F
1      3.78884      9     66     0.0007
2      2.96811      9     66     0.0050
Because the VAR was fit with the dfk and small options, varwle used the small-sample estimator
of Σ̂ in constructing the VCE, producing an F statistic. The first two equations appear to have a
different lag structure from that of the third. In the first two equations, we cannot reject the null
hypothesis that all three endogenous variables have zero coefficients at the second lag. The hypothesis
that all three endogenous variables have zero coefficients at the first lag can be rejected at the 10%
level for both of the first two equations. In contrast, in the third equation, the coefficients on the
second lag of the endogenous variables are jointly significant, but not those on the first lag. However,
we strongly reject the hypothesis that the coefficients on the first lag of the endogenous variables
are zero in all three equations jointly. Similarly, we can also strongly reject the hypothesis that the
coefficients on the second lag of the endogenous variables are zero in all three equations jointly.
If we believe these results strongly enough, we might want to refit the original VAR, placing some
constraints on the coefficients. See [TS] var for details on how to fit VAR models with constraints
on the coefficients.
                            No. of obs     =         89
                            Log likelihood =    578.277

              Coef.   Std. Err.       z    P>|z|     [95% Conf. Interval]
/a_1_1     89.72411   6.725107    13.34    0.000     76.54315    102.9051
/a_2_1    -64.73622   10.67698    -6.06    0.000    -85.66271   -43.80973
/a_1_2            0  (constrained)
/a_2_2     126.2964   9.466318    13.34    0.000     107.7428    144.8501
/b_1_1            1  (constrained)
/b_2_1            0  (constrained)
/b_1_2            0  (constrained)
/b_2_2            1  (constrained)
The output table from var svar gives information about the estimates of the parameters in the A
and B matrices in the structural VAR. But, as discussed in [TS] var svar, an SVAR model builds on
an underlying VAR. When varwle uses the estimation results produced by svar, it performs Wald
lag-exclusion tests on the underlying VAR model. Next we run varwle on these svar results.
. varwle
Equation: dln_inc
lag        chi2    df   Prob > chi2
1       6.88775     2         0.032
2      1.873546     2         0.392

Equation: dln_consump
lag        chi2    df   Prob > chi2
1      9.938547     2         0.007
2      13.89996     2         0.001

Equation: All
lag        chi2    df   Prob > chi2
1      34.54276     4         0.000
2      19.44093     4         0.001
Now we fit the underlying VAR with two lags and apply varwle to these results.
. var dln_inc dln_consump
(output omitted )
. varwle
Equation: dln_inc
lag        chi2    df   Prob > chi2
1       6.88775     2         0.032
2      1.873546     2         0.392

Equation: dln_consump
lag        chi2    df   Prob > chi2
1      9.938547     2         0.007
2      13.89996     2         0.001

Equation: All
lag        chi2    df   Prob > chi2
1      34.54276     4         0.000
2      19.44093     4         0.001
Because varwle produces the same results in these two cases, we can conclude that when varwle
is applied to svar results, it performs Wald lag-exclusion tests on the underlying VAR.
Stored results
varwle stores the following in r():
Matrices
  if e(small)==""
    r(chi2)    χ² test statistics
    r(df)      degrees of freedom
    r(p)       p-values
  if e(small)!=""
    r(F)       F test statistics
    r(df)      numerator degrees of freedom
    r(df_r)    denominator degrees of freedom
    r(p)       p-values
Abraham Wald (1902–1950) was born in Cluj, in what is now Romania. He studied mathematics at
the University of Vienna, publishing at first on geometry, but then became interested in economics
and econometrics. He moved to the United States in 1938 and later joined the faculty at Columbia.
His major contributions to statistics include work in decision theory, optimal sequential sampling,
large-sample distributions of likelihood-ratio tests, and nonparametric inference. Wald died in a
plane crash in India.
References
Amisano, G., and C. Giannini. 1997. Topics in Structural VAR Econometrics. 2nd ed. Heidelberg: Springer.
Hamilton, J. D. 1994. Time Series Analysis. Princeton: Princeton University Press.
Lütkepohl, H. 1993. Introduction to Multiple Time Series Analysis. 2nd ed. New York: Springer.
Mangel, M., and F. J. Samaniego. 1984. Abraham Wald's work on aircraft survivability. Journal of the American Statistical Association 79: 259–267.
Wolfowitz, J. 1952. Abraham Wald, 1902–1950. Annals of Mathematical Statistics 23: 1–13 (and other reports in the same issue).
Also see
[TS] var Vector autoregressive models
[TS] var svar Structural vector autoregressive models
[TS] varbasic Fit a simple VAR and graph IRFs or FEVDs
[TS] var intro Introduction to vector autoregressive models
Title
vec intro Introduction to vector error-correction models
Description    Remarks and examples    References    Also see
Description
Stata has a suite of commands for fitting, forecasting, interpreting, and performing inference
on vector error-correction models (VECMs) with cointegrating variables. After fitting a VECM, the
irf commands can be used to obtain impulse–response functions (IRFs) and forecast-error variance
decompositions (FEVDs). The table below describes the available commands.
Fitting a VECM
  vec          [TS] vec          Fit vector error-correction models

Model diagnostics and inference
  veclmar      [TS] veclmar      Perform LM test for residual autocorrelation after vec
  vecnorm      [TS] vecnorm      Test for normally distributed disturbances after vec
  vecstable    [TS] vecstable    Check the stability condition of VECM estimates
  varsoc       [TS] varsoc       Obtain lag-order selection statistics for VARs and VECMs

This manual entry provides an overview of the commands for VECMs; provides an introduction
to integration, cointegration, estimation, inference, and interpretation of VECMs; and gives an
example of how to use Stata's vec commands.
Because Nielsen (2001) has shown that the methods implemented in varsoc can be used to choose
the order of the autoregressive process, no separate vec command is needed; you can simply use
varsoc. veclmar tests that the residuals have no serial correlation, and vecnorm tests that they are
normally distributed.
All the irf routines described in [TS] irf are available for estimating, interpreting, and managing
estimated IRFs and FEVDs for VECMs.
Remarks are presented under the following headings:
    Introduction to cointegrating VECMs
    What is cointegration?
    The multivariate VECM specification
    Trends in the Johansen VECM framework
    VECM estimation in Stata
    Selecting the number of lags
    Testing for cointegration
    Fitting a VECM
    Fitting VECMs with Johansen's normalization
    Postestimation specification testing
    Impulse–response functions for VECMs
    Forecasting with VECMs
    y_t + βx_t = ε_t,    ε_t = ε_{t−1} + ξ_t                 (3)
    y_t + δx_t = υ_t,    υ_t = ρυ_{t−1} + ζ_t,   |ρ| < 1     (4)

where ξ_t and ζ_t are i.i.d. disturbances over time that are correlated with each other. Because ε_t is
I(1), (3) and (4) imply that both x_t and y_t are I(1). The condition that |ρ| < 1 implies that υ_t and
y_t + δx_t are I(0). Thus y_t and x_t cointegrate, and (1, δ) is the cointegrating vector.
Using a bit of algebra, we can rewrite (3) and (4) as

    Δy_t = βα z_{t−1} + ν_{1t}                               (5)
    Δx_t = −α z_{t−1} + ν_{2t}                               (6)

where α = (1 − ρ)/(δ − β), z_t = y_t + δx_t, and ν_{1t} and ν_{2t} are distinct, stationary, linear combinations
of ξ_t and ζ_t. This representation is known as the vector error-correction model (VECM). One can think
of z_t = 0 as being the point at which y_t and x_t are in equilibrium. The coefficients on z_{t−1} describe
how y_t and x_t adjust to z_{t−1} being nonzero, or out of equilibrium. z_t is the error in the system,
and (5) and (6) describe how the system adjusts or corrects back to the equilibrium. As ρ goes to 1, the
system degenerates into a pair of correlated random walks. The VECM parameterization highlights
this point, because α → 0 as ρ → 1.
If we knew δ, we would know z_t, and we could work with the stationary system of (5) and (6).
Although knowing δ seems silly, we can conduct much of the analysis as if we knew δ because
there is an estimator for the cointegrating parameter δ that converges to its true value at a faster rate
than the estimators for the adjustment parameters βα and −α.
The definition of a bivariate cointegrating relation requires simply that there exist a linear combination
of the I(1) variables that is I(0). If y_t and x_t are I(1) and there are two finite real numbers a ≠ 0
and b ≠ 0 such that ay_t + bx_t is I(0), then y_t and x_t are cointegrated. Although there are two
parameters, a and b, only one will be identifiable, because if ay_t + bx_t is I(0), so is cay_t + cbx_t
for any finite, nonzero, real number c. Obtaining identification in the bivariate case is relatively
simple. The coefficient on y_t in (4) is unity. This natural construction of the model placed the
necessary identification restriction on the cointegrating vector. As we discuss below, identification in
the multivariate case is more involved.
If y_t is a K × 1 vector of I(1) variables and there exists a vector β such that β′y_t is a vector
of I(0) variables, then y_t is said to be cointegrated of order (1,0) with cointegrating vector β. We
say that the parameters in β are the parameters in the cointegrating equation. For a vector of length
K, there may be at most K − 1 distinct cointegrating vectors. Engle and Granger (1987) provide a
more general definition of cointegration, but this one is sufficient for our purposes.
The multivariate VECM specification
In practice, most empirical applications analyze multivariate systems, so the rest of our discussion
focuses on that case. Consider a VAR with p lags
    y_t = v + A_1 y_{t−1} + A_2 y_{t−2} + · · · + A_p y_{t−p} + ε_t          (7)

where y_t is a K × 1 vector of variables, v is a K × 1 vector of parameters, A_1, . . . , A_p are K × K
matrices of parameters, and ε_t is a K × 1 vector of disturbances. The VAR in (7) can be rewritten
in VECM form as

    Δy_t = v + Π y_{t−1} + Σ_{i=1}^{p−1} Γ_i Δy_{t−i} + ε_t                  (8)

where Π = Σ_{j=1}^{p} A_j − I_K and Γ_i = −Σ_{j=i+1}^{p} A_j.
Engle and Granger (1987) show that if the variables y_t are I(1), the matrix Π in (8) has rank
0 ≤ r < K, where r is the number of linearly independent cointegrating vectors. If the variables
cointegrate, 0 < r < K, and (8) shows that a VAR in first differences is misspecified because it omits
the lagged level term Πy_{t−1}.
Assume that Π has reduced rank 0 < r < K so that it can be expressed as Π = αβ′, where α
and β are both K × r matrices of rank r. Without further restrictions, the cointegrating vectors are
not identified: the parameters (α, β) are indistinguishable from the parameters (αQ, β(Q′)⁻¹) for any
r × r nonsingular matrix Q. Because only the rank of Π is identified, the VECM is said to identify
the rank of the cointegrating space, or equivalently, the number of cointegrating vectors. In practice,
the estimation of the parameters of a VECM requires at least r² identification restrictions. Stata's vec
command can apply the conventional Johansen restrictions discussed below or use constraints that
the user supplies.
The VECM in (8) also nests two important special cases. If the variables in y_t are I(1) but not
cointegrated, Π is a matrix of zeros and thus has rank 0. If all the variables are I(0), Π has full rank
K.
There are several different frameworks for estimation and inference in cointegrating systems.
Although the methods in Stata are based on the maximum likelihood (ML) methods developed by
Johansen (1988, 1991, 1995), other useful frameworks have been developed by Park and Phillips (1988,
1989); Sims, Stock, and Watson (1990); Stock (1987); and Stock and Watson (1988); among others.
The ML framework developed by Johansen was independently developed by Ahn and Reinsel (1990).
Maddala and Kim (1998) and Watson (1994) survey all of these methods. The cointegration methods
in Stata are based on Johansen's maximum likelihood framework because it has been found to be
particularly useful in several comparative studies, including Gonzalo (1994) and Hubrich, Lütkepohl,
and Saikkonen (2001).
Trends in the Johansen VECM framework
Deterministic trends in a cointegrating VECM can stem from two distinct sources: the mean of the
cointegrating relationship and the mean of the differenced series. Allowing for a constant and a linear
trend and assuming that there are r cointegrating relations, we can rewrite the VECM in (8) as

    Δy_t = αβ′y_{t−1} + Σ_{i=1}^{p−1} Γ_i Δy_{t−i} + v + δt + ε_t            (9)

where δ is a K × 1 vector of parameters. Because (9) models the differences of the data, the constant
implies a linear time trend in the levels, and the time trend δt implies a quadratic time trend in the
levels of the data. Often we may want to include a constant or a linear time trend for the differences
without allowing for the higher-order trend that is implied for the levels of the data. VECMs exploit
the properties of the matrix α to achieve this flexibility.
Because α is a K × r matrix of rank r, we can rewrite the deterministic components in (9) as

    v  = αμ + γ           (10a)
    δt = αρt + τt         (10b)

where μ and ρ are r × 1 vectors of parameters and γ and τ are K × 1 vectors of parameters.
γ is orthogonal to αμ, and τ is orthogonal to αρ; that is, γ′αμ = 0 and τ′αρ = 0, allowing us to
rewrite (9) as

    Δy_t = α(β′y_{t−1} + μ + ρt) + Σ_{i=1}^{p−1} Γ_i Δy_{t−i} + γ + τt + ε_t     (11)
If no restrictions are placed on the trend parameters, (11) implies that there are quadratic trends
in the levels of the variables and that the cointegrating equations are stationary around time
trends (trend stationary).
CASE 2: Restricted trend, τ = 0

By setting τ = 0, we assume that the trends in the levels of the data are linear but not quadratic. This specification allows the cointegrating equations to be trend stationary.

CASE 3: Unrestricted constant, τ = 0 and ρ = 0

By setting τ = 0 and ρ = 0, we exclude the possibility that the levels of the data have quadratic trends, and we restrict the cointegrating equations to be stationary around constant means. Because γ is not restricted to zero, this specification still puts a linear time trend in the levels of the data.
CASE 4: Restricted constant, τ = 0, ρ = 0, and γ = 0

By adding the restriction that γ = 0, we assume there are no linear time trends in the levels of the data. This specification allows the cointegrating equations to be stationary around a constant mean, but it allows no other trends or constant terms.

CASE 5: No trend, τ = 0, ρ = 0, γ = 0, and μ = 0

This specification assumes that there are no nonzero means or trends. It also assumes that the cointegrating equations are stationary with means of zero and that the differences and the levels of the data have means of zero.
This flexibility does come at a price. Below we discuss testing procedures for determining the number of cointegrating equations. The asymptotic distribution of the LR test statistic for hypotheses about r changes with the trend specification, so we must specify the trend first. A combination of theory and graphical analysis will aid in specifying the trend before proceeding with the analysis.
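In vec, the five cases correspond to the trend() option. A minimal sketch, where y1 and y2 are placeholder variable names:

. vec y1 y2, trend(trend)       // case 1: unrestricted trend
. vec y1 y2, trend(rtrend)      // case 2: restricted trend
. vec y1 y2, trend(constant)    // case 3: unrestricted constant (the default)
. vec y1 y2, trend(rconstant)   // case 4: restricted constant
. vec y1 y2, trend(none)        // case 5: no trend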
We provide an overview of the vec commands in Stata through an extended example. We have
monthly data on the average selling prices of houses in four cities in Texas: Austin, Dallas, Houston,
and San Antonio. In the dataset, these average housing prices are contained in the variables austin,
dallas, houston, and sa. The series begin in January of 1990 and go through December 2003, for
a total of 168 observations. The following graph depicts our data.
[Figure: ln of house prices in the four cities plotted against t, 1990m1 to 2005m1.]
The plots on the graph indicate that all the series are trending and potentially I(1) processes. In a competitive market, the current and past prices contain all the information available, so tomorrow's price will be a random walk from today's price. Some researchers may opt to use [TS] dfgls to investigate the presence of a unit root in each series, but the test for cointegration we use includes the case in which all the variables are stationary, so we defer formal testing until we test for cointegration. The time trends in the data appear to be approximately linear, so we will specify trend(constant) when modeling these series, which is the default with vec.

The next graph shows just the Dallas and Houston data, so we can more carefully examine their relationship.
[Figure: ln of house prices in dallas and houston plotted against t, 1990m1 to 2004m1.]
Except for the crash at the end of 1991, housing prices in Dallas and Houston appear closely
related. Although average prices in the two cities will differ because of resource variations and other
factors, if the housing markets become too dissimilar, people and businesses will migrate, bringing
the average housing prices back toward each other. We therefore expect the series of average housing
prices in Houston to be cointegrated with the series of average housing prices in Dallas.
Selecting the number of lags
To test for cointegration or fit cointegrating VECMs, we must specify how many lags to include.
Building on the work of Tsay (1984) and Paulsen (1984), Nielsen (2001) has shown that the methods
implemented in varsoc can be used to determine the lag order for a VAR model with I(1) variables.
As can be seen from (9), the order of the corresponding VECM is always one less than the VAR. vec
makes this adjustment automatically, so we will always refer to the order of the underlying VAR. The
output below uses varsoc to determine the lag order of the VAR of the average housing prices in
Dallas and Houston.
. use http://www.stata-press.com/data/r13/txhprice
. varsoc dallas houston
   Selection-order criteria
   Sample:  1990m5 - 2003m12                      Number of obs      =     164

  lag      LL        LR       df     p      FPE        AIC       HQIC      SBIC
    0   299.525                            .000091  -3.62835  -3.61301  -3.59055
    1   577.483   555.92       4   0.000   3.2e-06   -6.9693   -6.92326  -6.85589
    2   590.978   26.991*      4   0.000   2.9e-06*  -7.0851*  -7.00837* -6.89608*
    3   593.437   4.918        4   0.296   2.9e-06   -7.06631  -6.95888  -6.80168
    4   596.364   5.8532       4   0.210   3.0e-06   -7.05322  -6.9151   -6.71299

   Endogenous:  dallas houston
    Exogenous:  _cons
We will use two lags for this bivariate model because the Hannan–Quinn information criterion (HQIC) method, the Schwarz Bayesian information criterion (SBIC) method, and the sequential likelihood-ratio (LR) test all chose two lags, as indicated by the * in the output.

The reader can verify that when all four cities' data are used, the LR test selects three lags, the HQIC method selects two lags, and the SBIC method selects one lag. We will use three lags in our four-variable model.
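A minimal sketch of the verification (output not shown):

. varsoc austin dallas houston sa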
Testing for cointegration

We use vecrank to determine the number of cointegrating equations in the bivariate model:

. vecrank dallas houston
                       Johansen tests for cointegration
Trend: constant                                         Number of obs =     166
Sample:  1990m3 - 2003m12                                        Lags =       2

                                                          5%
maximum                                      trace    critical
  rank    parms       LL       eigenvalue  statistic    value
    0        6     576.26444       .         46.8252    15.41
    1        9     599.58781     0.24498      0.1785*    3.76
    2       10     599.67706     0.00107
Besides presenting information about the sample size and time span, the header indicates that test
statistics are based on a model with two lags and a constant trend. The body of the table presents test
statistics and their critical values of the null hypotheses of no cointegration (line 1) and one or fewer
cointegrating equations (line 2). The eigenvalue shown on the last line is used to compute the trace
statistic in the line above it. Johansens testing procedure starts with the test for zero cointegrating
equations (a maximum rank of zero) and then accepts the first null hypothesis that is not rejected.
In the output above, we strongly reject the null hypothesis of no cointegration and fail to reject
the null hypothesis of at most one cointegrating equation. Thus we accept the null hypothesis that
there is one cointegrating equation in the bivariate model.
Using all four series and a model with three lags, we find that there are two cointegrating
relationships.
. vecrank austin dallas houston sa, lag(3)
                       Johansen tests for cointegration
Trend: constant                                         Number of obs =     165
Sample:  1990m4 - 2003m12                                        Lags =       3

                                                          5%
maximum                                      trace    critical
  rank    parms       LL       eigenvalue  statistic    value
    0       36     1107.7833       .        101.6070    47.21
    1       43     1137.7484     0.30456     41.6768    29.68
    2       48     1153.6435     0.17524      9.8865*   15.41
    3       51     1158.4191     0.05624      0.3354     3.76
    4       52     1158.5868     0.00203
Fitting a VECM

vec estimates the parameters of cointegrating VECMs. There are four types of parameters of interest:

1. The parameters in the cointegrating equations β
2. The adjustment coefficients α
3. The short-run coefficients
4. Some standard functions of β and α that have useful interpretations

Although all four types are discussed in [TS] vec, here we discuss only types 1–3 and how they appear in the output of vec.
Having determined that there is a cointegrating equation between the Dallas and Houston series,
we now want to estimate the parameters of a bivariate cointegrating VECM for these two series by
using vec.
. vec dallas houston

Vector error-correction model

Sample:  1990m3 - 2003m12                          No. of obs      =       166
                                                   AIC             = -7.115516
Log likelihood =  599.5878                         HQIC            =  -7.04703
Det(Sigma_ml)  =  2.50e-06                         SBIC            = -6.946794

Equation            Parms      RMSE     R-sq      chi2     P>chi2
------------------------------------------------------------------
D_dallas                4     .038546   0.1692   32.98959   0.0000
D_houston               4     .045348   0.3737   96.66399   0.0000
------------------------------------------------------------------
------------------------------------------------------------------------------
             |      Coef.   Std. Err.      z    P>|z|     [95% Conf. Interval]
-------------+----------------------------------------------------------------
D_dallas     |
        _ce1 |
         L1. |  -.3038799   .0908504    -3.34   0.001    -.4819434   -.1258165
             |
      dallas |
         LD. |  -.1647304   .0879356    -1.87   0.061     -.337081    .0076202
             |
     houston |
         LD. |  -.0998368   .0650838    -1.53   0.125    -.2273988    .0277251
             |
       _cons |   .0056128   .0030341     1.85   0.064    -.0003339    .0115595
-------------+----------------------------------------------------------------
D_houston    |
        _ce1 |
         L1. |   .5027143   .1068838     4.70   0.000     .2932258    .7122028
             |
      dallas |
         LD. |  -.0619653   .1034547    -0.60   0.549    -.2647327    .1408022
             |
     houston |
         LD. |  -.3328437     .07657    -4.35   0.000    -.4829181   -.1827693
             |
       _cons |   .0033928   .0035695     0.95   0.342    -.0036034     .010389
------------------------------------------------------------------------------
Cointegrating equations

Equation            Parms    chi2      P>chi2
----------------------------------------------
_ce1                    1   1640.088   0.0000
----------------------------------------------

Identification:  beta is exactly identified

                 Johansen normalization restriction imposed
------------------------------------------------------------------------------
        beta |      Coef.   Std. Err.      z    P>|z|     [95% Conf. Interval]
-------------+----------------------------------------------------------------
_ce1         |
      dallas |          1          .        .       .            .           .
     houston |  -.8675936   .0214231   -40.50   0.000    -.9095821    -.825605
       _cons |  -1.688897          .        .       .            .           .
------------------------------------------------------------------------------
The header contains information about the sample, the fit of each equation, and overall model fit statistics. The first estimation table contains the estimates of the short-run parameters, along with their standard errors, z statistics, and confidence intervals. The two coefficients on L._ce1 are the parameters in the adjustment matrix α for this model. The second estimation table contains the estimated parameters of the cointegrating vector β for this model, along with their standard errors, z statistics, and confidence intervals.
Using our previous notation, we have estimated

$$\hat{\alpha} = \begin{pmatrix} -0.304 \\ \phantom{-}0.503 \end{pmatrix} \qquad
\hat{\beta} = \begin{pmatrix} 1 & -0.868 \end{pmatrix} \qquad
\hat{\Gamma} = \begin{pmatrix} -0.165 & -0.0998 \\ -0.062 & -0.333 \end{pmatrix} \qquad
\hat{v} = \begin{pmatrix} 0.0056 \\ 0.0034 \end{pmatrix}$$
Overall, the output indicates that the model fits well. The coefficient on houston in the cointegrating equation is statistically significant, as are the adjustment parameters. The adjustment parameters in this bivariate example are easy to interpret, and we can see that the estimates have the correct signs and imply rapid adjustment toward equilibrium. When the predictions from the cointegrating equation are positive, dallas is above its equilibrium value because the coefficient on dallas in the cointegrating equation is positive. The estimate of the coefficient [D_dallas]L._ce1 is -.3. Thus when the average housing price in Dallas is too high, it quickly falls back toward the Houston level. The estimated coefficient [D_houston]L._ce1 of .5 implies that when the average housing price in Dallas is too high, the average price in Houston quickly adjusts toward the Dallas level at the same time that the Dallas prices are adjusting.
Fitting VECMs with Johansen's normalization

As discussed by Johansen (1995), if there are r cointegrating equations, then at least $r^2$ restrictions are required to identify the free parameters in β. Johansen proposed a default identification scheme that has become the conventional method of identifying models in the absence of theoretically justified restrictions. Johansen's identification scheme is

$$\beta' = (I_r, \tilde{\beta}')$$

where $I_r$ is the $r \times r$ identity matrix and $\tilde{\beta}$ is a $(K - r) \times r$ matrix of identified parameters. vec applies Johansen's normalization by default.
To illustrate, we fit a VECM with two cointegrating equations and three lags on all four series. We
are interested only in the estimates of the parameters in the cointegrating equations, so we can specify
the noetable option to suppress the estimation table for the adjustment and short-run parameters.
. vec austin dallas houston sa, lags(3) rank(2) noetable

Vector error-correction model

Sample:  1990m4 - 2003m12                          No. of obs      =       165
                                                   AIC             = -13.40174
                                                   HQIC            = -13.03496
                                                   SBIC            = -12.49819

Cointegrating equations

Equation            Parms    chi2      P>chi2
----------------------------------------------
_ce1                    2   586.3044   0.0000
_ce2                    2   2169.826   0.0000
----------------------------------------------

Identification:  beta is exactly identified

                 Johansen normalization restrictions imposed
------------------------------------------------------------------------------
        beta |      Coef.   Std. Err.      z    P>|z|     [95% Conf. Interval]
-------------+----------------------------------------------------------------
_ce1         |
      austin |          1          .        .       .            .           .
      dallas |  -1.30e-17          .        .       .            .           .
     houston |  -.2623782   .1893625    -1.39   0.166    -.6335219    .1087655
          sa |  -1.241805    .229643    -5.41   0.000    -1.691897   -.7917128
       _cons |   5.577099          .        .       .            .           .
-------------+----------------------------------------------------------------
_ce2         |
      austin |  -1.41e-18          .        .       .            .           .
      dallas |          1          .        .       .            .           .
     houston |  -1.095652   .0669898   -16.36   0.000     -1.22695   -.9643545
          sa |   .2883986   .0812396     3.55   0.000     .1291718    .4476253
       _cons |  -2.351372          .        .       .            .           .
------------------------------------------------------------------------------
The Johansen identification scheme has placed four constraints on the parameters in β: [_ce1]austin = 1, [_ce1]dallas = 0, [_ce2]austin = 0, and [_ce2]dallas = 1. (The computational method used imposes zero restrictions that are numerical rather than exact. The values -1.30e-17 and -1.41e-18 are indistinguishable from zero.) We interpret the results of the first equation as indicating the existence of an equilibrium relationship between the average housing price in Austin and the average prices of houses in Houston and San Antonio.
The Johansen normalization restricted the coefficient on dallas to be unity in the second
cointegrating equation, but we could instead constrain the coefficient on houston. Both sets of
restrictions define just-identified models, so fitting the model with the latter set of restrictions will
yield the same maximized log likelihood. To impose the alternative set of constraints, we use the
constraint command.
. constraint define 1 [_ce1]austin = 1
. constraint define 2 [_ce1]dallas = 0
. constraint define 3 [_ce2]austin = 0
. constraint define 4 [_ce2]houston = 1
. vec austin dallas houston sa, lags(3) rank(2) noetable bconstraints(1/4)
 (output omitted )

Cointegrating equations

Equation            Parms    chi2      P>chi2
----------------------------------------------
_ce1                    2   586.3392   0.0000
_ce2                    2   3455.469   0.0000
----------------------------------------------

Identification:  beta is exactly identified
------------------------------------------------------------------------------
        beta |      Coef.   Std. Err.      z    P>|z|     [95% Conf. Interval]
-------------+----------------------------------------------------------------
_ce1         |
      austin |          1          .        .       .            .           .
      dallas |          0  (omitted)
     houston |  -.2623784   .1876727    -1.40   0.162    -.6302102    .1054534
          sa |  -1.241805   .2277537    -5.45   0.000    -1.688194   -.7954157
       _cons |   5.577099          .        .       .            .           .
-------------+----------------------------------------------------------------
_ce2         |
      austin |          0  (omitted)
      dallas |  -.9126985   .0595804   -15.32   0.000    -1.029474   -.7959231
     houston |          1          .        .       .            .           .
          sa |  -.2632209   .0628791    -4.19   0.000    -.3864617   -.1399802
       _cons |   2.146094          .        .       .            .           .
------------------------------------------------------------------------------
Only the estimates of the parameters in the second cointegrating equation have changed, and the new estimates are simply the old estimates divided by -1.095652 because the new constraints are just an alternative normalization of the same just-identified model. With the new normalization, we can interpret the estimates of the parameters in the second cointegrating equation as indicating an equilibrium relationship between the average house price in Houston and the average prices of houses in Dallas and San Antonio.
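As a quick check, the renormalization can be verified by hand; the first command returns approximately -.9126985 (the new dallas coefficient) and the second approximately -.2632209 (the new sa coefficient):

. display 1/-1.095652
. display .2883986/-1.095652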
Postestimation specification testing

Inference on the parameters in β depends crucially on the stationarity of the cointegrating equations, so we should check the specification of the model. As a first check, we can predict the cointegrating equations and graph them over time.
. predict ce1, ce equ(#1)
. predict ce2, ce equ(#2)
[Figure: the predicted cointegrating equations plotted against t, 1990m1 to 2005m1, one panel per equation.]
Although the large shocks apparent in the graph of the levels have clear effects on the predictions
from the cointegrating equations, our only concern is the negative trend in the first cointegrating
equation since the end of 2000. The graph of the levels shows that something put a significant brake
on the growth of housing prices after 2000 and that the growth of housing prices in San Antonio
slowed during 2000 but then recuperated while Austin maintained slower growth. We suspect that
this indicates that the end of the high-tech boom affected Austin more severely than San Antonio.
This difference is what causes the trend in the first cointegrating equation. Although we could try to
account for this effect with a more formal analysis, we will proceed as if the cointegrating equations
are stationary.
We can use vecstable to check whether we have correctly specified the number of cointegrating equations. As discussed in [TS] vecstable, the companion matrix of a VECM with K endogenous variables and r cointegrating equations has K − r unit eigenvalues. If the process is stable, the moduli of the remaining r eigenvalues are strictly less than one. Because there is no general distribution theory for the moduli of the eigenvalues, ascertaining whether the moduli are too close to one can be difficult.
. vecstable, graph

   Eigenvalue stability condition

  +------------------------------------------+
  |           Eigenvalue           | Modulus |
  |--------------------------------+---------|
  |         1                      |    1    |
  |         1                      |    1    |
  |  -.6698661                     | .669866 |
  |   .3740191 +   .4475996i       | .583297 |
  |   .3740191 -   .4475996i       | .583297 |
  |  -.386377  +   .395972i        | .553246 |
  |  -.386377  -   .395972i        | .553246 |
  |   .540117                      | .540117 |
  |  -.0749239 +   .5274203i       | .532715 |
  |  -.0749239 -   .5274203i       | .532715 |
  |  -.2023955                     | .202395 |
  |   .09923966                    | .09924  |
  +------------------------------------------+

[Figure: the eigenvalues of the companion matrix plotted in the complex plane (Real and Imaginary axes).]
Because we specified the graph option, vecstable plotted the eigenvalues of the companion
matrix. The graph of the eigenvalues shows that none of the remaining eigenvalues appears close to
the unit circle. The stability check does not indicate that our model is misspecified.
Here we use veclmar to test for serial correlation in the residuals.
. veclmar, mlag(4)
   Lagrange-multiplier test

  +--------------------------------------+
  | lag  |    chi2     df   Prob > chi2  |
  |------+-------------------------------|
  |  1   |  56.8757    16     0.00000    |
  |  2   |  31.1970    16     0.01270    |
  |  3   |  30.6818    16     0.01477    |
  |  4   |  14.6493    16     0.55046    |
  +--------------------------------------+
   H0: no autocorrelation at lag order
The results clearly indicate serial correlation in the residuals. The results in Gonzalo (1994) indicate
that underspecifying the number of lags in a VECM can significantly increase the finite-sample bias
in the parameter estimates and lead to serial correlation. For this reason, we refit the model with five
lags instead of three.
. vec austin dallas houston sa, lags(5) rank(2) noetable bconstraints(1/4)
Iteration 1:   log likelihood = 1200.5402
 (output omitted )
Iteration 20:  log likelihood = 1203.9465

Vector error-correction model

Sample:  1990m6 - 2003m12                          No. of obs      =       163
                                                   AIC             = -13.79075
Log likelihood =  1203.9465                        HQIC            =  -13.1743
                                                   SBIC            = -12.27235

Cointegrating equations

Equation            Parms    chi2      P>chi2
----------------------------------------------
_ce1                    2   498.4682   0.0000
_ce2                    2   4125.926   0.0000
----------------------------------------------

Identification:  beta is exactly identified
------------------------------------------------------------------------------
        beta |      Coef.   Std. Err.      z    P>|z|     [95% Conf. Interval]
-------------+----------------------------------------------------------------
_ce1         |
      austin |          1          .        .       .            .           .
      dallas |          0  (omitted)
     houston |  -.6525574   .2047061    -3.19   0.001    -1.053774   -.2513407
          sa |  -.6960166   .2494167    -2.79   0.005    -1.184864   -.2071688
       _cons |   3.846275          .        .       .            .           .
-------------+----------------------------------------------------------------
_ce2         |
      austin |          0  (omitted)
      dallas |   -.932048   .0564332   -16.52   0.000    -1.042655   -.8214409
     houston |          1          .        .       .            .           .
          sa |  -.2363915   .0599348    -3.94   0.000    -.3538615   -.1189215
       _cons |   2.065719          .        .       .            .           .
------------------------------------------------------------------------------
Comparing these results with those from the previous model reveals that

1. there is now evidence that the coefficient [_ce1]houston is not equal to zero,
2. the two sets of estimated coefficients for the first cointegrating equation are different, and
3. the two sets of estimated coefficients for the second cointegrating equation are similar.
The assumption that the errors are independently, identically, and normally distributed with zero
mean and finite variance allows us to derive the likelihood function. If the errors do not come from
a normal distribution but are just independently and identically distributed with zero mean and finite
variance, the parameter estimates are still consistent, but they are not efficient.
We use vecnorm to test the null hypothesis that the errors are normally distributed.
. qui vec austin dallas houston sa, lags(5) rank(2) bconstraints(1/4)
. vecnorm
   Jarque-Bera test

  +-------------------------------------------+
  |  Equation   |   chi2     df  Prob > chi2  |
  |-------------+-----------------------------|
  |  D_austin   |   74.324    2    0.00000    |
  |  D_dallas   |    3.501    2    0.17370    |
  |  D_houston  |  245.032    2    0.00000    |
  |  D_sa       |    8.426    2    0.01481    |
  |  ALL        |  331.283    8    0.00000    |
  +-------------------------------------------+

   Skewness test

  +--------------------------------------------------------+
  |  Equation   |  Skewness    chi2     df   Prob > chi2   |
  |-------------+------------------------------------------|
  |  D_austin   |   .60265     9.867     1     0.00168     |
  |  D_dallas   |   .09996     0.271     1     0.60236     |
  |  D_houston  |  -1.0444    29.635     1     0.00000     |
  |  D_sa       |   .38019     3.927     1     0.04752     |
  |  ALL        |             43.699     4     0.00000     |
  +--------------------------------------------------------+

   Kurtosis test

  +--------------------------------------------------------+
  |  Equation   |  Kurtosis    chi2     df   Prob > chi2   |
  |-------------+------------------------------------------|
  |  D_austin   |   6.0807    64.458     1     0.00000     |
  |  D_dallas   |   3.6896     3.229     1     0.07232     |
  |  D_houston  |   8.6316   215.397     1     0.00000     |
  |  D_sa       |   3.8139     4.499     1     0.03392     |
  |  ALL        |            287.583     4     0.00000     |
  +--------------------------------------------------------+
The results indicate that we can strongly reject the null hypothesis of normally distributed errors.
Most of the errors are both skewed and kurtotic.
Impulse-response functions for VECMs

With a model that we now consider acceptably well specified, we can use the irf commands to estimate and interpret the IRFs. Whereas IRFs from a stationary VAR die out over time, IRFs from a cointegrating VECM do not always die out. Because each variable in a stationary VAR has a time-invariant mean and a finite, time-invariant variance, the effect of a shock to any one of these variables must die out so that the variable can revert to its mean. In contrast, the I(1) variables modeled in a cointegrating VECM are not mean reverting, and the unit moduli in the companion matrix imply that the effects of some shocks will not die out over time.

These two possibilities gave rise to new terms. When the effect of a shock dies out over time, the shock is said to be transitory. When the effect of a shock does not die out over time, the shock is said to be permanent.
Below we use irf create to estimate the IRFs and irf graph to graph two of the orthogonalized
IRFs.
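A sketch of those commands follows; the IRF filename vecintro is an assumption, and the horizon matches the 30 steps shown in the graph:

. irf create vec1, set(vecintro, replace) step(30)
. irf graph oirf, impulse(austin dallas) response(sa) yline(0)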
[Figure: orthogonalized IRFs for (vec1, austin, sa) and (vec1, dallas, sa) plotted against step, 0 to 30. Graphs by irfname, impulse variable, and response variable.]
The graphs indicate that an orthogonalized shock to the average housing price in Austin has a
permanent effect on the average housing price in San Antonio but that an orthogonalized shock to
the average price of housing in Dallas has a transitory effect. According to this model, unexpected
shocks that are local to the Austin housing market will have a permanent effect on the housing market
in San Antonio, but unexpected shocks that are local to the Dallas housing market will have only a
transitory effect on the housing market in San Antonio.
Forecasting with VECMs

Cointegrating VECMs are also used to produce forecasts of both the first-differenced variables and the levels of the variables. Comparing the variances of the forecast errors of stationary VARs with those from a cointegrating VECM reveals a fundamental difference between the two models. Whereas the variances of the forecast errors for a stationary VAR converge to a constant as the prediction horizon grows, the variances of the forecast errors for the levels of a cointegrating VECM diverge with the forecast horizon. (See sec. 6.5 of Lütkepohl [2005] for more about this result.) Because all the variables in the model for the first differences are stationary, the forecast errors for the dynamic forecasts of the first differences remain finite. In contrast, the forecast errors for the dynamic forecasts of the levels diverge to infinity.
We use fcast compute to obtain dynamic forecasts of the levels and fcast graph to graph
these dynamic forecasts, along with their asymptotic confidence intervals.
. tsset
        time variable:  t, 1990m1 to 2003m12
                delta:  1 month
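A sketch of the forecasting commands; the variable prefix f_ is an assumption, and step(24) matches the two-year horizon shown in the graph:

. fcast compute f_, step(24)
. fcast graph f_austin f_dallas f_houston f_sa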
[Figure: dynamic forecasts with 95% CIs; the panel for sa covers 2004m1 to 2006m1.]
As expected, the widths of the confidence intervals grow with the forecast horizon.
References

Ahn, S. K., and G. C. Reinsel. 1990. Estimation for partially nonstationary multivariate autoregressive models. Journal of the American Statistical Association 85: 813–823.
Becketti, S. 2013. Introduction to Time Series Using Stata. College Station, TX: Stata Press.
Engle, R. F., and C. W. J. Granger. 1987. Co-integration and error correction: Representation, estimation, and testing. Econometrica 55: 251–276.
Gonzalo, J. 1994. Five alternative methods of estimating long-run equilibrium relationships. Journal of Econometrics 60: 203–233.
Granger, C. W. J. 1981. Some properties of time series data and their use in econometric model specification. Journal of Econometrics 16: 121–130.
Granger, C. W. J., and P. Newbold. 1974. Spurious regressions in econometrics. Journal of Econometrics 2: 111–120.
Hamilton, J. D. 1994. Time Series Analysis. Princeton: Princeton University Press.
Hubrich, K., H. Lütkepohl, and P. Saikkonen. 2001. A review of systems cointegration tests. Econometric Reviews 20: 247–318.
Johansen, S. 1988. Statistical analysis of cointegration vectors. Journal of Economic Dynamics and Control 12: 231–254.
———. 1991. Estimation and hypothesis testing of cointegration vectors in Gaussian vector autoregressive models. Econometrica 59: 1551–1580.
———. 1995. Likelihood-Based Inference in Cointegrated Vector Autoregressive Models. Oxford: Oxford University Press.
Lütkepohl, H. 2005. New Introduction to Multiple Time Series Analysis. New York: Springer.
Maddala, G. S., and I.-M. Kim. 1998. Unit Roots, Cointegration, and Structural Change. Cambridge: Cambridge University Press.
Nielsen, B. 2001. Order determination in general vector autoregressions. Working paper, Department of Economics, University of Oxford and Nuffield College. http://ideas.repec.org/p/nuf/econwp/0110.html.
Park, J. Y., and P. C. B. Phillips. 1988. Statistical inference in regressions with integrated processes: Part I. Econometric Theory 4: 468–497.
———. 1989. Statistical inference in regressions with integrated processes: Part II. Econometric Theory 5: 95–131.
Paulsen, J. 1984. Order determination of multivariate autoregressive time series with unit roots. Journal of Time Series Analysis 5: 115–127.
Phillips, P. C. B. 1986. Understanding spurious regressions in econometrics. Journal of Econometrics 33: 311–340.
Phillips, P. C. B., and S. N. Durlauf. 1986. Multiple time series regressions with integrated processes. Review of Economic Studies 53: 473–495.
Sims, C. A., J. H. Stock, and M. W. Watson. 1990. Inference in linear time series models with some unit roots. Econometrica 58: 113–144.
Stock, J. H. 1987. Asymptotic properties of least squares estimators of cointegrating vectors. Econometrica 55: 1035–1056.
Stock, J. H., and M. W. Watson. 1988. Testing for common trends. Journal of the American Statistical Association 83: 1097–1107.
Tsay, R. S. 1984. Order selection in nonstationary autoregressive models. Annals of Statistics 12: 1425–1433.
Watson, M. W. 1994. Vector autoregressions and cointegration. In Vol. 4 of Handbook of Econometrics, ed. R. F. Engle and D. L. McFadden. Amsterdam: Elsevier.
Also see
[TS] vec Vector error-correction models
[TS] irf Create and analyze IRFs, dynamic-multiplier functions, and FEVDs
Title

vec - Vector error-correction models

  Syntax                 Menu                   Description
  Options                Remarks and examples   Stored results
  Methods and formulas   References             Also see

Syntax

        vec depvarlist [if] [in] [, options]

  options                        Description
  --------------------------------------------------------------------------
  Model
    rank(#)                      number of cointegrating equations
    lags(#)                      maximum lag in the underlying VAR model
    trend(constant)              include an unrestricted constant in the
                                   model; the default
    trend(rconstant)             include a restricted constant in the model
    trend(trend)                 include a linear trend in the cointegrating
                                   equations and a quadratic trend in the
                                   undifferenced data
    trend(rtrend)                include a restricted trend in the model
    trend(none)                  do not include a trend or a constant
    bconstraints(constraints_bc) place constraints_bc on the cointegrating
                                   vectors
    aconstraints(constraints_ac) place constraints_ac on the adjustment
                                   parameters

  Adv. model
    sindicators(varlist_si)      include normalized seasonal indicator
                                   variables varlist_si
    noreduce                     do not perform checks and corrections for
                                   collinearity among lags of dependent
                                   variables

  Reporting
    level(#)                     set confidence level; default is level(95)
    nobtable                     do not report parameters in the
                                   cointegrating equations
    noidtest                     do not report the likelihood-ratio test of
                                   overidentifying restrictions
    alpha                        report adjustment parameters in a separate
                                   table
    pi                           report parameters in Π = αβ'
    noptable                     do not report elements of the Π matrix
    mai                          report parameters in the moving-average
                                   impact matrix
    noetable                     do not report the adjustment and short-run
                                   parameters
    dforce                       force reporting when the parameters in beta
                                   are not identified
    nocnsreport                  do not display constraints
    display_options              control column formats

  Maximization
    maximize_options             control the maximization process; seldom
                                   used
    coeflegend                   display legend instead of statistics
  --------------------------------------------------------------------------
Menu

    Statistics > Multivariate time series > Vector error-correction model (VECM)
Description

vec fits a type of vector autoregression in which some of the variables are cointegrated by using Johansen's (1995) maximum likelihood method. Constraints may be placed on the parameters in the cointegrating equations or on the adjustment terms. See [TS] vec intro for a list of commands that are used in conjunction with vec.
Options
Model
Adv. model
sindicators(varlistsi ) specifies the normalized seasonal indicator variables to include in the model.
The indicator variables specified in this option must be normalized as discussed in Johansen (1995).
If the indicators are not properly normalized, the estimator of the cointegrating vector does not
converge to the asymptotic distribution derived by Johansen (1995). More details about how these
variables are handled are provided in Methods and formulas. sindicators() cannot be specified
with trend(none) or with trend(rconstant).
noreduce causes vec to skip the checks and corrections for collinearity among the lags of the
dependent variables. By default, vec checks to see whether the current lag specification causes
some of the regressions performed by vec to contain perfectly collinear variables; if so, it reduces
the maximum lag until the perfect collinearity is removed.
Reporting

display_options: vsquish, cformat(%fmt), pformat(%fmt), sformat(%fmt), and nolstretch; see [R] estimation options.
Maximization
The following option is available with vec but is not shown in the dialog box:
coeflegend; see [R] estimation options.
Remarks and examples

Introduction

VECMs are used to model the stationary relationships between multiple time series that contain unit roots. vec implements Johansen's approach for estimating the parameters of a VECM.

[TS] vec intro reviews the basics of integration and cointegration and highlights why we need special methods for modeling the relationships between processes that contain unit roots. This manual entry assumes familiarity with the material in [TS] vec intro and provides examples illustrating how to use the vec command. See Johansen (1995), Hamilton (1994), and Becketti (2013) for more in-depth introductions to cointegration analysis.
Example 1

This example uses annual data on the average per-capita disposable personal income in the eight U.S. Bureau of Economic Analysis (BEA) regions of the United States. We use data from 1948–2002 in logarithms. Unit-root tests on these series fail to reject the null hypothesis that per-capita disposable income in each region contains a unit root. Because capital and labor can move easily between the different regions of the United States, we would expect that no one series will diverge from all the remaining series and that cointegrating relationships exist.

Below we graph the natural logs of average disposable income in the New England and the Southeast regions.
. use http://www.stata-press.com/data/r13/rdinc
. line ln_ne ln_se year

[Figure: ln(new_england) and ln(southeast) plotted against year, 1950 to 2000.]
The graph indicates a differential between the two series that shrinks between 1960 and about
1980 and then grows until it stabilizes around 1990. We next estimate the parameters of a bivariate
VECM with one cointegrating relationship.
. vec ln_ne ln_se

Vector error-correction model

Sample:  1950 - 2002                               No. of obs      =        53
                                                   AIC             = -11.00462
Log likelihood =  300.6224                         HQIC            = -10.87595
Det(Sigma_ml)  =  4.06e-08                         SBIC            = -10.67004

Equation            Parms      RMSE     R-sq      chi2     P>chi2
------------------------------------------------------------------
D_ln_ne                 4     .017896   0.9313   664.4668   0.0000
D_ln_se                 4     .018723   0.9292   642.7179   0.0000
------------------------------------------------------------------
0.0000
D_ln_ne
_ce1
L1.
-.4337524
.0721365
-6.01
0.000
-.5751373
-.2923675
ln_ne
LD.
.7168658
.1889085
3.79
0.000
.3466119
1.08712
ln_se
LD.
-.6748754
.2117975
-3.19
0.001
-1.089991
-.2597599
_cons
-.0019846
.0080291
-0.25
0.805
-.0177214
.0137521
_ce1
L1.
-.3543935
.0754725
-4.70
0.000
-.5023168
-.2064701
ln_ne
LD.
.3366786
.1976448
1.70
0.088
-.050698
.7240553
ln_se
LD.
-.1605811
.2215922
-0.72
0.469
-.5948939
.2737317
_cons
.002429
.0084004
0.29
0.772
-.0140355
.0188936
D_ln_se
Cointegrating equations

Equation            Parms    chi2      P>chi2
----------------------------------------------
_ce1                    1   29805.02   0.0000
----------------------------------------------

Identification:  beta is exactly identified

                 Johansen normalization restriction imposed
------------------------------------------------------------------------------
        beta |      Coef.   Std. Err.      z    P>|z|     [95% Conf. Interval]
-------------+----------------------------------------------------------------
_ce1         |
       ln_ne |          1          .        .       .            .           .
       ln_se |  -.9433708   .0054643  -172.64   0.000    -.9540807   -.9326609
       _cons |  -.8964065          .        .       .            .           .
------------------------------------------------------------------------------
The default output has three parts. The header provides information about the sample, the model fit, and the identification of the parameters in the cointegrating equation. The main estimation table contains the estimates of the short-run parameters, along with their standard errors and confidence intervals. The second estimation table reports the estimates of the parameters in the cointegrating equation, along with their standard errors and confidence intervals.

The results indicate strong support for a cointegrating equation such that

$$\text{ln\_ne} - 0.943\,\text{ln\_se} - 0.896$$

should be a stationary series. Identification of the parameters in the cointegrating equation is achieved by constraining some of them to be fixed, and fixed parameters do not have standard errors. In this example, the coefficient on ln_ne has been normalized to 1, so its standard error is missing. As discussed in Methods and formulas, the constant term in the cointegrating equation is not directly estimated in this trend specification but rather is backed out from other estimates. Not all the elements of the VCE that correspond to this parameter are readily available, so the standard error for the _cons parameter is missing.
To get a better idea of how our model fits, we predict the cointegrating equation and graph it over
time:
. predict ce, ce
. line ce year

[Figure: the predicted cointegrating equation plotted against year, 1950 to 2000.]
Although the predicted cointegrating equation has the right appearance for the time before the
mid-1960s, afterward the predicted cointegrating equation does not look like a stationary series. A
better model would account for the trends in the size of the differential.
As discussed in [TS] vec intro, simply normalizing one of the coefficients to be one is sufficient to
identify the parameters of the single cointegrating vector. When there is more than one cointegrating
equation, more restrictions are required.
Example 2
We have data on monthly unemployment rates in Indiana, Illinois, Kentucky, and Missouri from
January 1978 through December 2003. We suspect that factor mobility will keep the unemployment
rates in equilibrium. The following graph plots the data.
[Figure: monthly unemployment rates in missouri, indiana, kentucky, and illinois plotted against t, 1980m1 to 2005m1.]
The graph shows that although the series do appear to move together, the relationship is not as clear
as in the previous example. There are periods when Indiana has the highest rate and others when
Indiana has the lowest rate. Although the Kentucky rate moves closely with the other series for most
of the sample, there is a period in the mid-1980s when the unemployment rate in Kentucky does not
fall at the same rate as the other series.
We will model the series with two cointegrating equations and no linear or quadratic time trends
in the original series. Because we are focusing on the cointegrating vectors, we use the noetable
option to suppress displaying the short-run estimation table.
. vec missouri indiana kentucky illinois, trend(rconstant) rank(2) lags(4) noetable

Vector error-correction model

Sample:  1978m5 - 2003m12                          No. of obs      =       308
                                                   AIC             = -2.306048
                                                   HQIC            = -2.005818
                                                   SBIC            = -1.555184

Cointegrating equations

Equation            Parms    chi2      P>chi2
----------------------------------------------
_ce1                    2   133.3885   0.0000
_ce2                    2   195.6324   0.0000
----------------------------------------------

Identification:  beta is exactly identified

                 Johansen normalization restrictions imposed
------------------------------------------------------------------------------
        beta |      Coef.   Std. Err.      z    P>|z|     [95% Conf. Interval]
-------------+----------------------------------------------------------------
_ce1         |
    missouri |          1          .        .       .            .           .
     indiana |  -2.52e-18          .        .       .            .           .
    kentucky |   .3493902   .2005537     1.74   0.081    -.0436879    .7424683
    illinois |  -1.135152   .2069063    -5.49   0.000    -1.540681   -.7296235
       _cons |  -.3880707   .4974323    -0.78   0.435     -1.36302    .5868787
-------------+----------------------------------------------------------------
_ce2         |
    missouri |   9.30e-17          .        .       .            .           .
     indiana |          1          .        .       .            .           .
    kentucky |   .2059473   .2718678     0.76   0.449    -.3269038    .7387985
    illinois |   -1.51962   .2804792    -5.42   0.000    -2.069349   -.9698907
       _cons |    2.92857   .6743122     4.34   0.000     1.606942    4.250197
------------------------------------------------------------------------------
Except for the coefficients on kentucky in the two cointegrating equations and the constant term in the first, all the parameters are significant at the 5% level. We can refit the model with the Johansen normalization and the overidentifying constraint that the coefficient on kentucky in the second cointegrating equation is zero.

. constraint define 1 [_ce1]missouri = 1
. constraint define 2 [_ce1]indiana = 0
. constraint define 3 [_ce2]missouri = 0
. constraint define 4 [_ce2]indiana = 1
. constraint define 5 [_ce2]kentucky = 0
. vec missouri indiana kentucky illinois, trend(rconstant) rank(2) lags(4) noetable bconstraints(1/5)

Vector error-correction model

Sample:  1978m5 - 2003m12                          No. of obs      =       308
                                                   AIC             = -2.311522
                                                   HQIC            = -2.016134
                                                   SBIC            = -1.572769

Cointegrating equations

Equation            Parms    chi2      P>chi2
----------------------------------------------
_ce1                    2   145.233    0.0000
_ce2                    1   209.9344   0.0000
----------------------------------------------

Identification:  beta is overidentified

 ( 1)  [_ce1]missouri = 1
 ( 2)  [_ce1]indiana = 0
 ( 3)  [_ce2]missouri = 0
 ( 4)  [_ce2]indiana = 1
 ( 5)  [_ce2]kentucky = 0
------------------------------------------------------------------------------
        beta |      Coef.   Std. Err.      z    P>|z|     [95% Conf. Interval]
-------------+----------------------------------------------------------------
_ce1         |
    missouri |          1          .        .       .            .           .
     indiana |          0  (omitted)
    kentucky |   .2521685   .1649653     1.53   0.126    -.0711576    .5754946
    illinois |  -1.037453   .1734165    -5.98   0.000    -1.377343   -.6975626
       _cons |  -.3891102   .4726968    -0.82   0.410    -1.315579    .5373586
-------------+----------------------------------------------------------------
_ce2         |
    missouri |          0  (omitted)
     indiana |          1          .        .       .            .           .
    kentucky |          0  (omitted)
    illinois |  -1.314265   .0907071   -14.49   0.000    -1.492048   -1.136483
       _cons |   2.937016   .6448924     4.55   0.000      1.67305    4.200982
------------------------------------------------------------------------------
LR test of identifying restrictions: chi2(  1) =    .3139
The test of the overidentifying restriction does not reject the null hypothesis that the restriction
is valid, and the p-value on the coefficient on kentucky in the first cointegrating equation indicates
that it is not significant. We will leave the variable in the model and attribute the lack of significance
to whatever caused the kentucky series to temporarily rise above the others from 1985 until 1990,
though we could instead consider removing kentucky from the model.
Next, we look at the estimates of the adjustment parameters. In the output below, we replay
the previous results. We specify the alpha option so that vec will display an estimation table for
the estimates of the adjustment parameters, and we specify nobtable to suppress the table for the
parameters of the cointegrating equations because we have already looked at those.
. vec, alpha nobtable noetable

Vector error-correction model

Sample:  1978m5 - 2003m12                          No. of obs      =       308
                                                   AIC             = -2.311522
Log likelihood =  416.9744                         HQIC            = -2.016134
Det(Sigma_ml)  =  7.84e-07                         SBIC            = -1.572769

Adjustment parameters

Equation            Parms    chi2      P>chi2
----------------------------------------------
D_missouri              2   19.39607   0.0001
D_indiana               2   6.426086   0.0402
D_kentucky              2   8.524901   0.0141
D_illinois              2   22.32893   0.0000
----------------------------------------------

------------------------------------------------------------------------------
       alpha |      Coef.   Std. Err.      z    P>|z|     [95% Conf. Interval]
-------------+----------------------------------------------------------------
D_missouri   |
        _ce1 |
         L1. |  -.0683152   .0185763    -3.68   0.000    -.1047242   -.0319063
        _ce2 |
         L1. |   .0405613   .0112417     3.61   0.000      .018528    .0625946
-------------+----------------------------------------------------------------
D_indiana    |
        _ce1 |
         L1. |  -.0342096   .0220955    -1.55   0.122    -.0775159    .0090967
        _ce2 |
         L1. |   .0325804   .0133713     2.44   0.015     .0063732    .0587877
-------------+----------------------------------------------------------------
D_kentucky   |
        _ce1 |
         L1. |  -.0482012   .0231633    -2.08   0.037    -.0936004   -.0028021
        _ce2 |
         L1. |   .0374395   .0140175     2.67   0.008     .0099657    .0649133
-------------+----------------------------------------------------------------
D_illinois   |
        _ce1 |
         L1. |   .0138224   .0227041     0.61   0.543    -.0306768    .0583215
        _ce2 |
         L1. |   .0567664   .0137396     4.13   0.000     .0298373    .0836955
------------------------------------------------------------------------------
LR test of identifying restrictions: chi2(  1) =    .3139
All the coefficients are significant at the 5% level, except those on Indiana and Illinois in the first
cointegrating equation. From an economic perspective, the issue is whether the unemployment rates
in Indiana and Illinois adjust when the first cointegrating equation is out of equilibrium. We could
impose restrictions on one or both of those parameters and refit the model, or we could just decide
to use the current results.
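A sketch of one such refit, constraining both of those adjustment parameters to zero (the constraint numbers are illustrative and assume constraints 1–5 from above are still defined):

. constraint define 6 [D_indiana]L._ce1 = 0
. constraint define 7 [D_illinois]L._ce1 = 0
. vec missouri indiana kentucky illinois, trend(rconstant) rank(2) lags(4) ///
        bconstraints(1/5) aconstraints(6 7)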
Technical note

vec can be used to fit models in which the parameters in β are not identified, in which case only the parameters in Π and the moving-average impact matrix C are identified. When the parameters in β are not identified, the values of β̂ and α̂ can vary depending on the starting values. However, the estimates of Π and C are identified and have known asymptotic distributions. This method is valid because these additional normalization restrictions impose no restriction on Π or C.

Specification of constants and trends

Writing the VECM with the deterministic terms decomposed as above,

$$\Delta y_t = \alpha(\beta' y_{t-1} + \mu + \rho t) + \sum_{i=1}^{p-1} \Gamma_i \Delta y_{t-i} + \gamma + \tau t + \epsilon_t$$

the five trend specifications correspond to the following parameter restrictions:
  Parameter restrictions                    Option              Johansen (1995) notation
  --------------------------------------------------------------------------------------
  none                                      trend(trend)         H(r)
  τ = 0                                     trend(rtrend)        H*(r)
  τ = 0 and ρ = 0                           trend(constant)      H1(r)
  τ = 0, ρ = 0, and γ = 0                   trend(rconstant)     H1*(r)
  τ = 0, ρ = 0, γ = 0, and μ = 0            trend(none)          H2(r)
  --------------------------------------------------------------------------------------
trend(trend) allows for a linear trend in the cointegrating equations and a quadratic trend in
the undifferenced data. A linear trend in the cointegrating equations implies that the cointegrating
equations are assumed to be trend stationary.
trend(rtrend) defines a restricted trend model that excludes linear trends in the differenced data
but allows for linear trends in the cointegrating equations. As in the previous case, a linear trend in
a cointegrating equation implies that the cointegrating equation is trend stationary.
trend(constant) defines a model with an unrestricted constant. This allows for a linear trend
in the undifferenced data and cointegrating equations that are stationary around a nonzero mean. This
is the default.
trend(rconstant) defines a model with a restricted constant in which there is no linear or quadratic trend in the undifferenced data. A nonzero μ allows for the cointegrating equations to be stationary around nonzero means, which provide the only intercepts for the differenced data. Seasonal indicators are not allowed with this specification.
trend(none) defines a model that does not include a trend or a constant. When there is no trend
or constant, the cointegrating equations are restricted to being stationary with zero means. Also, after
adjusting for the effects of lagged endogenous variables, the differenced data are modeled as having
mean zero. Seasonal indicators are not allowed with this specification.
Technical note

vec uses a switching algorithm developed by Boswijk (1995) to maximize the log-likelihood function when constraints are placed on the parameters. The starting values affect both the ability of the algorithm to find a maximum and its speed in finding that maximum. By default, vec uses the parameter estimates that correspond to Johansen's normalization. Sometimes, other starting values will cause the algorithm to find a maximum faster.

To specify starting values for the parameters in α, we specify a 1 × (Kr) matrix in the afrom() option. Specifying starting values for the parameters in β is slightly more complicated. As explained in Methods and formulas, specifying trend(constant), trend(rtrend), or trend(trend) causes some of the estimates of the trend parameters appearing in β̂ to be backed out. The switching algorithm estimates only the parameters of the cointegrating equations whose estimates are stored in e(betavec). For this reason, only the parameters stored in e(betavec) can have their initial values set via bfrom().
The table below describes which trend parameters in the cointegrating equations are estimated by the switching algorithm for each of the five specifications.

  Trend specification      Trend parameters in cointegrating equations
  ------------------------------------------------------------------
  none                     none
  rconstant                _cons
  constant                 _cons
  rtrend                   _cons, _trend
  trend                    _cons, _trend
  ------------------------------------------------------------------
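A sketch using the regional income model from Example 1; the starting values are illustrative, chosen near the unconstrained estimates, and the normalization constraint triggers the switching algorithm:

. constraint define 1 [_ce1]ln_ne = 1
. matrix b0 = (1, -.94, -.90)      // ln_ne, ln_se, _cons in e(betavec) order
. matrix a0 = (-.43, -.35)         // 1 x (K*r) starting values for alpha
. vec ln_ne ln_se, bconstraints(1) bfrom(b0) afrom(a0)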
Collinearity
As expected, collinearity among variables causes some parameters to be unidentified numerically.
If vec encounters perfect collinearity among the dependent variables, it exits with an error.
In contrast, if vec encounters perfect collinearity that appears to be due to too many lags in the
model, vec displays a warning message and reduces the maximum lag included in the model in an
effort to find a model with fewer lags in which all the parameters are identified by the data. Specifying
the noreduce option causes vec to skip over these additional checks and corrections for collinearity.
Thus the noreduce option can be used to force the estimation to proceed when not all the parameters
are identified by the data. When some parameters are not identified because of collinearity, the results
cannot be interpreted but can be used to find the source of the collinearity.
Stored results

vec stores the following in e():

Scalars
  e(N)              number of observations
  e(k_rank)         number of unconstrained parameters
  e(k_eq)           number of equations in e(b)
  e(k_dv)           number of dependent variables
  e(k_ce)           number of cointegrating equations
  e(n_lags)         number of lags
  e(df_m)           model degrees of freedom
  e(ll)             log likelihood
  e(chi2_res)       value of test of overidentifying restrictions
  e(df_lr)          degrees of freedom of the test of overidentifying restrictions
  e(beta_iden)      1 if the parameters in β are identified and 0 otherwise
  e(beta_icnt)      number of independent restrictions placed on β
  e(k_#)            number of variables in equation #
  e(df_m#)          model degrees of freedom in equation #
  e(r2_#)           R-squared of equation #
  e(chi2_#)         χ² statistic for equation #
  e(rmse_#)         RMSE of equation #
  e(aic)            value of AIC
  e(hqic)           value of HQIC
  e(sbic)           value of SBIC
  e(tmin)           minimum time
  e(tmax)           maximum time
  e(detsig_ml)      determinant of the estimated covariance matrix
  e(rank)           rank of e(V)
  e(converge)       1 if the switching algorithm converged, 0 if it did not converge

Macros
  e(cmd)            vec
  e(cmdline)        command as typed
  e(trend)          trend specified
  e(tsfmt)          format of the time variable
  e(tvar)           variable denoting time within groups
  e(endog)          endogenous variables
  e(covariates)     list of covariates
  e(eqnames)        equation names
  e(cenames)        names of cointegrating equations
  e(reduce_opt)     noreduce, if noreduce is specified
  e(reduce_lags)    list of maximum lags to which the model has been reduced
  e(title)          title in estimation output
  e(aconstraints)   constraints placed on α
  e(bconstraints)   constraints placed on β
  e(sindicators)    sindicators, if specified
  e(properties)     b V
  e(predict)        program used to implement predict
  e(marginsok)      predictions allowed by margins
  e(marginsnotok)   predictions disallowed by margins

Matrices
  e(b)              estimates of short-run parameters
  e(V)              VCE of short-run parameter estimates
  e(beta)           estimates of β
  e(V_beta)         VCE of β̂
  e(betavec)        directly obtained estimates of β
  e(pi)             estimates of Π
  e(V_pi)           VCE of Π̂
  e(alpha)          estimates of α
  e(V_alpha)        VCE of α̂
  e(omega)          estimates of Ω
  e(mai)            estimates of C
  e(V_mai)          VCE of Ĉ

Functions
  e(sample)         marks estimation sample
Methods and formulas

vec fits VECMs of the form

$$\Delta y_t = \alpha\beta' y_{t-1} + \sum_{i=1}^{p-1} \Gamma_i \Delta y_{t-i} + v + \delta t + w_1 s_1 + \cdots + w_m s_m + \epsilon_t \tag{1}$$

where $y_t$ is a $K \times 1$ vector of I(1) variables; α and β are $K \times r$ parameter matrices; $\Gamma_1, \dots, \Gamma_{p-1}$ are $K \times K$ matrices of parameters; $v$ and δ are $K \times 1$ vectors of parameters; $s_1, \dots, s_m$ are orthogonalized seasonal indicators; and $w_1, \dots, w_m$ are $K \times 1$ vectors of coefficients on the seasonal indicators.
There are two types of deterministic elements in (1): the trend, v + t, and the orthogonalized
seasonal terms, w1 s1 + + wm sm . Johansen (1995, chap. 11) shows that inference about the
number of cointegrating equations is based on nonstandard distributions and that the addition of any
term that generalizes the deterministic specification in (1) changes the asymptotic distributions of the
statistics used for inference on the number of cointegrating equations and the asymptotic distribution
of the ML estimator of the cointegrating equations. In fact, Johansen (1995, 84) notes that including
event indicators causes the statistics used for inference on the number of cointegrating equations to
have asymptotic distributions that must be computed case by case. For this reason, event indicators
may not be specified in the present version of vec.
If seasonal indicators are included in the model, they cannot be collinear with a constant term. If
they are collinear with a constant term, one of the indicator variables is omitted.
As discussed in Specification of constants and trends, we can reparameterize the model as

$$\Delta y_t = \alpha(\beta' y_{t-1} + \mu + \rho t) + \sum_{i=1}^{p-1} \Gamma_i \Delta y_{t-i} + \gamma + \tau t + \epsilon_t \tag{2}$$

The model can be written in the general form

$$Z_{0t} = \alpha\tilde{\beta}' Z_{1t} + \Psi Z_{2t} + \epsilon_t \tag{3}$$

with log-likelihood function

$$L = -\frac{1}{2}\left\{ TK \ln(2\pi) + T \ln(|\Omega|) + \sum_{t=1}^{T} (Z_{0t} - \alpha\tilde{\beta}' Z_{1t} - \Psi Z_{2t})' \Omega^{-1} (Z_{0t} - \alpha\tilde{\beta}' Z_{1t} - \Psi Z_{2t}) \right\} \tag{4}$$

with the constraints that α and β̃ have rank r.
Johansen (1995, chap. 6), building on Anderson (1951), shows how the parameters Ψ can be expressed as analytic functions of α, β̃, and the data, yielding the concentrated log-likelihood function

$$L_c = -\frac{1}{2}\left\{ TK \ln(2\pi) + T \ln(|\Omega|) + \sum_{t=1}^{T} (R_{0t} - \alpha\tilde{\beta}' R_{1t})' \Omega^{-1} (R_{0t} - \alpha\tilde{\beta}' R_{1t}) \right\} \tag{5}$$
where

$$M_{ij} = T^{-1}\sum_{t=1}^{T} Z_{it} Z_{jt}', \qquad i,j \in \{0,1,2\}$$
$$R_{0t} = Z_{0t} - M_{02} M_{22}^{-1} Z_{2t}$$
$$R_{1t} = Z_{1t} - M_{12} M_{22}^{-1} Z_{2t}$$
When the trend in the VECM is unrestricted, we can define the variables in (3) directly in terms of the variables in (1):

$Z_{1t} = y_{t-1}$ is $K \times 1$
$Z_{2t} = (\Delta y_{t-1}', \dots, \Delta y_{t-p+1}', 1, t, s_1, \dots, s_m)'$ is $\{K(p-1) + 2 + m\} \times 1$
$\Psi = (\Gamma_1, \dots, \Gamma_{p-1}, v, \delta, w_1, \dots, w_m)$ is $K \times \{K(p-1) + 2 + m\}$
When there is a restricted trend in the VECM in (2), τ = 0, but the intercept $v = \alpha\mu + \gamma$ is unrestricted. The VECM with the restricted trend can be written as

$$\Delta y_t = \alpha(\beta', \rho)\binom{y_{t-1}}{t} + \sum_{i=1}^{p-1} \Gamma_i \Delta y_{t-i} + v + w_1 s_1 + \cdots + w_m s_m + \epsilon_t$$

with

$Z_{1t} = (y_{t-1}', t)'$ is $(K+1) \times 1$
$\Psi = (\Gamma_1, \dots, \Gamma_{p-1}, v, w_1, \dots, w_m)$ is $K \times \{K(p-1) + 1 + m\}$
$\tilde{\beta} = (\beta', \rho)'$ is the $(K+1) \times r$ matrix composed of the r cointegrating vectors and the r trend coefficients

In the restricted trend specification, $m_1 = K + 1$, $m_2 = \{K(p-1) + 1 + m\}$, and there are $n_{parms} = Kr + (K+1)r + K\{K(p-1) + 1 + m\}$ parameters in (3).
Unrestricted constant

An unrestricted constant in the VECM in (2) is equivalent to setting δ = 0 in (1), which can be written in the form of (3) by defining

$Z_{1t} = y_{t-1}$ is $K \times 1$
$Z_{2t} = (\Delta y_{t-1}', \dots, \Delta y_{t-p+1}', 1, s_1, \dots, s_m)'$ is $\{K(p-1) + 1 + m\} \times 1$
$\Psi = (\Gamma_1, \dots, \Gamma_{p-1}, v, w_1, \dots, w_m)$ is $K \times \{K(p-1) + 1 + m\}$
Restricted constant

When there is a restricted constant in the VECM in (2), it can be written in the form of (3) by defining

$Z_{1t} = (y_{t-1}', 1)'$ is $(K+1) \times 1$
$Z_{2t} = (\Delta y_{t-1}', \dots, \Delta y_{t-p+1}')'$ is $K(p-1) \times 1$
$\Psi = (\Gamma_1, \dots, \Gamma_{p-1})$ is $K \times K(p-1)$
$\tilde{\beta} = (\beta', \mu)'$ is the $(K+1) \times r$ matrix composed of the r cointegrating vectors and the r intercepts in the cointegrating equations
No trend

When there is no trend in the VECM in (2), it can be written in the form of (3) by defining

$Z_{1t} = y_{t-1}$ is $K \times 1$
$Z_{2t} = (\Delta y_{t-1}', \dots, \Delta y_{t-p+1}')'$ is $\{K(p-1) + m\} \times 1$
$\Psi = (\Gamma_1, \dots, \Gamma_{p-1})$ is $K \times K(p-1)$
The parameters (α, β̃) are not separately identified because, for any $r \times r$ nonsingular matrix Q,

$$\alpha\tilde{\beta}' = \alpha Q Q^{-1}\tilde{\beta}' = (\alpha Q)\{\tilde{\beta}(Q^{-1})'\}'$$

Substituting $(\alpha Q)\{\tilde{\beta}(Q^{-1})'\}'$ into the log likelihood in (5) for $\alpha\tilde{\beta}'$ would not change the value of the log likelihood, so some a priori identification restrictions must be found to identify α and β̃. As discussed in Johansen (1995, chap. 5 and 6) and Boswijk (1995), if the restrictions exactly identify or overidentify β̃, the estimates of the unconstrained parameters in β̃ will be superconsistent, meaning that the estimates of the free parameters in β̃ will converge at a faster rate than estimates of the short-run parameters in α and $\Gamma_i$. This allows the distribution of the estimator of the short-run parameters to be derived conditional on the estimated β̃.

Johansen (1995, chap. 6) has proposed a normalization method for use when theory does not provide sufficient a priori restrictions to identify the cointegrating vector. This method has become widely adopted by researchers. Johansen's identification scheme is

$$\tilde{\beta}' = (I_r, \breve{\beta}') \tag{6}$$
Given this scheme, the partially concentrated estimators are

$$\hat{\alpha}(\tilde{\beta}) = S_{01}\tilde{\beta}(\tilde{\beta}' S_{11}\tilde{\beta})^{-1}$$
$$\hat{\Omega}(\tilde{\beta}) = S_{00} - S_{01}\tilde{\beta}(\tilde{\beta}' S_{11}\tilde{\beta})^{-1}\tilde{\beta}' S_{10}$$

where

$$S_{ij} = \frac{1}{T}\sum_{t=1}^{T} R_{it} R_{jt}', \qquad i,j \in \{0,1\}$$

Johansen (1995) shows that by inserting these solutions into equation (5), β̂ is given by the r eigenvectors $v_1, \dots, v_r$ corresponding to the r largest eigenvalues $\lambda_1, \dots, \lambda_r$ that solve the generalized eigenvalue problem

$$|\lambda_i S_{11} - S_{10} S_{00}^{-1} S_{01}| = 0 \tag{7}$$

The eigenvectors corresponding to $\lambda_1, \dots, \lambda_r$ that solve (7) are the unidentified parameter estimates. To impose the identification restrictions in (6), we normalize the eigenvectors such that

$$\lambda_i S_{11} v_i = S_{10} S_{00}^{-1} S_{01} v_i \tag{8}$$

and

$$v_i' S_{11} v_j = \begin{cases} 1 & \text{if } i = j \\ 0 & \text{otherwise} \end{cases} \tag{9}$$

At the optimum the log-likelihood function with the Johansen identification restrictions can be expressed in terms of T, K, $S_{00}$, and the r largest eigenvalues

$$L_c = -\frac{T}{2}\left\{ K \ln(2\pi) + K + \ln(|S_{00}|) + \sum_{i=1}^{r} \ln(1 - \hat{\lambda}_i) \right\} \tag{10}$$

where the estimates of the other parameters are obtained as $\hat{\alpha} = S_{01}\hat{\beta}$ and $\hat{\Omega} = S_{00} - \hat{\alpha}\hat{\beta}' S_{10}$.
The r series in $\tilde{E}_t$ are called the predicted, nondemeaned cointegrating equations because they still contain the terms μ and ρ. We want to work with the predicted, demeaned cointegrating equations. Thus we need estimates of μ and ρ. In the trend(rconstant) specification, the algorithm directly produces the estimator μ̂. Similarly, in the trend(rtrend) specification, the algorithm directly produces the estimator ρ̂. In the remaining cases, to back out estimates of μ and ρ, we need estimates of $v$ and δ, which we can obtain by estimating the parameters of the following VAR:

$$\Delta y_t = \alpha\tilde{E}_{t-1} + \sum_{i=1}^{p-1} \Gamma_i \Delta y_{t-i} + v + \delta t + w_1 s_1 + \cdots + w_m s_m + \epsilon_t \tag{11}$$
The estimates of μ and ρ are then

$$\hat{\mu} = (\hat{\alpha}'\hat{\alpha})^{-1}\hat{\alpha}'\hat{v} \tag{12}$$
$$\hat{\rho} = (\hat{\alpha}'\hat{\alpha})^{-1}\hat{\alpha}'\hat{\delta} \tag{13}$$

Collecting the results as $\hat{\beta}_f' = (\hat{\beta}_y', \hat{\mu}, \hat{\rho})$ yields the predicted, demeaned cointegrating equations $\hat{E}_t$, which enter the short-run model

$$\Delta y_t = \alpha\hat{E}_{t-1} + \sum_{i=1}^{p-1} \Gamma_i \Delta y_{t-i} + \gamma + \tau t + w_1 s_1 + \cdots + w_m s_m + \epsilon_t \tag{14}$$
The estimated VCE of α̂ is given by

$$\frac{1}{T-d}\,\hat{\Omega} \otimes \hat{B}$$

where $\hat{B} = (\hat{\beta}' S_{11}\hat{\beta})^{-1}$. As we would expect, the estimator of $\Pi = \alpha\beta'$ is

$$\hat{\Pi} = \hat{\alpha}\hat{\beta}'$$

and its estimated VCE is given by

$$\frac{1}{T-d}\,(\hat{\beta}\hat{B}\hat{\beta}') \otimes \hat{\Omega}$$
The moving-average impact matrix C is estimated by

$$\hat{C} = \hat{\beta}_\perp(\hat{\alpha}_\perp'\hat{\Gamma}\hat{\beta}_\perp)^{-1}\hat{\alpha}_\perp'$$

where $\hat{\beta}_\perp$ is the orthogonal complement of $\hat{\beta}_y$, $\hat{\alpha}_\perp$ is the orthogonal complement of α̂, and $\hat{\Gamma} = I_K - \sum_{i=1}^{p-1}\hat{\Gamma}_i$. The orthogonal complement of a $K \times r$ matrix Q that has rank r is a matrix $Q_\perp$ of rank $K - r$, such that $Q'Q_\perp = 0$. Although this operation is not uniquely defined, the results used by vec do not depend on the method of obtaining the orthogonal complement. vec uses the following method: the orthogonal complement of Q is given by the $K - r$ eigenvectors with the highest eigenvalues from the matrix $I_K - Q(Q'Q)^{-1}Q'$.
Per Johansen (1995, chap. 13) and Drukker (2004), the VCE of Ĉ is estimated by

$$\frac{T-d}{T}\,\hat{S}_q\hat{V}\hat{S}_q'$$

where

$$\hat{S}_q = \hat{C} \otimes \hat{\xi}$$
$$\hat{\xi} = \begin{cases} (\hat{\xi}_1, \hat{\xi}_2) & \text{if } p > 1 \\ \hat{\xi}_1 & \text{if } p = 1 \end{cases}$$
$$\hat{\xi}_1 = (\hat{C}'\hat{\Gamma}' - I_K)\bar{\alpha}, \qquad \bar{\alpha} = \hat{\alpha}(\hat{\alpha}'\hat{\alpha})^{-1}$$
$$\hat{\xi}_2 = \rho_{p-1}' \otimes \hat{C} \tag{16}$$

and $\rho_{p-1}$ is a $(p-1) \times 1$ vector of ones.
vec also allows the user to place constraints on α and β̃. Constraints on α take the form

$$R_a'\,\mathrm{vec}(\alpha) = 0 \tag{17}$$

and constraints on β̃ take the form

$$R_e'\,\mathrm{vec}(\tilde{\beta}) = b \tag{18}$$

Although (17) and (18) are intuitive, they can be rewritten in a form to facilitate computation. Specifically, (17) can be written as

$$\mathrm{vec}(\alpha') = G a \tag{19}$$

where G is $Kr \times n$ and a is $n \times 1$. Equation (18) can be rewritten as

$$\mathrm{vec}(\tilde{\beta}) = H b + h_0 \tag{20}$$

The number of independent restrictions placed on β̃ is stored in e(beta_icnt). The number of freely estimated parameters in α and β̃ is given by $jacob$, where

$$jacob = \mathrm{rank}\left\{ (\hat{\alpha} \otimes I_{m_1})H,\; (I_K \otimes \hat{\tilde{\beta}})G \right\}$$

Using $jacob$, we can calculate several other parameter counts of interest. In particular, the degrees of freedom of the overidentifying test are given by $(K + m_1 - r)r - jacob$, and the number of freely estimated parameters in the model is $n_{parms} = K m_2 + jacob$.
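As a check of these counts against the overidentified unemployment-rate example in Remarks and examples: there, $K = 4$, $r = 2$, and the restricted constant gives $m_1 = K + 1 = 5$. An exactly identified β̃ has $(m_1 - r)r = 6$ free parameters and α has $Kr = 8$, so $jacob = 14$ and the test has $14 - 14 = 0$ degrees of freedom. The single extra restriction [_ce2]kentucky = 0 reduces $jacob$ to 13, so the overidentifying test has $(K + m_1 - r)r - jacob = 14 - 13 = 1$ degree of freedom, matching the chi2(1) statistic reported in that output.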
Although the problem of maximizing the log-likelihood function in (4), subject to the constraints in
(17) and (18), could be handled by the algorithms in [R] ml, the switching algorithm of Boswijk (1995)
has proven to be more convergent. For this reason, vec uses the Boswijk (1995) switching algorithm
to perform the optimization.
Given starting values $(\hat{b}_0, \hat{a}_0, \hat{\Omega}_0)$, the algorithm iteratively updates the estimates until convergence is achieved, as follows:

$$\hat{b}_{j+1} = \{H'(\hat{\alpha}_j'\hat{\Omega}_j^{-1}\hat{\alpha}_j \otimes S_{11})H\}^{-1} H'(\hat{\alpha}_j'\hat{\Omega}_j^{-1} \otimes S_{11})\{\mathrm{vec}(\hat{P}) - (\hat{\alpha}_j \otimes I_{n_{Z1}})h_0\}$$

$$\hat{a}_{j+1} = \{G'(\hat{\Omega}_j^{-1} \otimes \hat{\tilde{\beta}}_j' S_{11}\hat{\tilde{\beta}}_j)G\}^{-1} G'(\hat{\Omega}_j^{-1} \otimes \hat{\tilde{\beta}}_j' S_{11})\,\mathrm{vec}(\hat{P})$$

$$\hat{\Omega}_{j+1} = S_{00} - S_{01}\hat{\tilde{\beta}}_j\hat{\alpha}_j' - \hat{\alpha}_j\hat{\tilde{\beta}}_j' S_{10} + \hat{\alpha}_j\hat{\tilde{\beta}}_j' S_{11}\hat{\tilde{\beta}}_j\hat{\alpha}_j'$$
The estimated VCE of β̂ is given by

$$\frac{1}{T-d}\,H\{H'(\hat{W} \otimes S_{11})H\}^{-1}H'$$

where Ŵ is $\hat{\alpha}'\hat{\Omega}^{-1}\hat{\alpha}$. As in the case without constraints, the estimated VCE of α̂ can be obtained either from the VCE of the short-run parameters, as described below, or via the formula

$$\hat{V}_{\hat{\alpha}} = \frac{1}{T-d}\,G\left\{ G'(\hat{\Omega}^{-1} \otimes \hat{\tilde{\beta}}' S_{11}\hat{\tilde{\beta}})G \right\}^{-1} G'$$
Boswijk (1995) notes that, as long as the parameters of the cointegrating equations are exactly identified or overidentified, the constrained ML estimator produces superconsistent estimates of β̃. This implies that the method of estimating the short-run parameters described above applies in the presence of constraints, as well, albeit with a caveat: when there are constraints placed on α, the VARs must be estimated subject to these constraints.

With these estimates and the estimated VCE of the short-run parameter matrix V̂, Drukker (2004) shows that the estimated VCE for Π̂ is given by

$$(\hat{\beta} \otimes I_K)\hat{V}_{\hat{\alpha}}(\hat{\beta}' \otimes I_K)$$
757
L
2nparms
AIC = 2
+
T
T
L
ln(T )
SBIC = 2
nparms
+
T
T
2ln ln(T )
L
nparms
HQIC = 2
+
T
T
where nparms is the total number of parameters in the model and L is the value of the log likelihood
at the optimum.
References

Anderson, T. W. 1951. Estimating linear restrictions on regression coefficients for multivariate normal distributions. Annals of Mathematical Statistics 22: 327–351.
Becketti, S. 2013. Introduction to Time Series Using Stata. College Station, TX: Stata Press.
Boswijk, H. P. 1995. Identifiability of cointegrated systems. Discussion Paper #95-78, Tinbergen Institute. http://www1.fee.uva.nl/pp/bin/258fulltext.pdf.
Boswijk, H. P., and J. A. Doornik. 2004. Identifying, estimating and testing restricted cointegrating systems: An overview. Statistica Neerlandica 58: 440–465.
Drukker, D. M. 2004. Some further results on estimation and inference in the presence of constraints on alpha in a cointegrating VECM. Working paper, StataCorp.
Engle, R. F., and C. W. J. Granger. 1987. Co-integration and error correction: Representation, estimation, and testing. Econometrica 55: 251–276.
Hamilton, J. D. 1994. Time Series Analysis. Princeton: Princeton University Press.
Johansen, S. 1988. Statistical analysis of cointegration vectors. Journal of Economic Dynamics and Control 12: 231–254.
———. 1991. Estimation and hypothesis testing of cointegration vectors in Gaussian vector autoregressive models. Econometrica 59: 1551–1580.
———. 1995. Likelihood-Based Inference in Cointegrated Vector Autoregressive Models. Oxford: Oxford University Press.
Maddala, G. S., and I.-M. Kim. 1998. Unit Roots, Cointegration, and Structural Change. Cambridge: Cambridge University Press.
Park, J. Y., and P. C. B. Phillips. 1988. Statistical inference in regressions with integrated processes: Part I. Econometric Theory 4: 468–497.
———. 1989. Statistical inference in regressions with integrated processes: Part II. Econometric Theory 5: 95–131.
Phillips, P. C. B. 1986. Understanding spurious regressions in econometrics. Journal of Econometrics 33: 311–340.
Phillips, P. C. B., and S. N. Durlauf. 1986. Multiple time series regressions with integrated processes. Review of Economic Studies 53: 473–495.
Rothenberg, T. J. 1971. Identification in parametric models. Econometrica 39: 577–591.
Sims, C. A., J. H. Stock, and M. W. Watson. 1990. Inference in linear time series models with some unit roots. Econometrica 58: 113–144.
Stock, J. H. 1987. Asymptotic properties of least squares estimators of cointegrating vectors. Econometrica 55: 1035–1056.
Stock, J. H., and M. W. Watson. 1988. Testing for common trends. Journal of the American Statistical Association 83: 1097–1107.
Watson, M. W. 1994. Vector autoregressions and cointegration. In Vol. 4 of Handbook of Econometrics, ed. R. F. Engle and D. L. McFadden. Amsterdam: Elsevier.
Also see
[TS] vec postestimation Postestimation tools for vec
[TS] tsset Declare data to be time-series data
[TS] var Vector autoregressive models
[TS] var svar Structural vector autoregressive models
[U] 20 Estimation and postestimation commands
Title

vec postestimation - Postestimation tools for vec

  Description          Remarks and examples          Also see

Description

The following postestimation commands are of special interest after vec:

  Command            Description
  ------------------------------------------------------------------
  fcast compute      obtain dynamic forecasts
  fcast graph        graph dynamic forecasts obtained from fcast compute
  irf                create and analyze IRFs and FEVDs
  veclmar            LM test for autocorrelation in residuals
  vecnorm            test for normally distributed disturbances
  vecstable          check stability condition of estimates
  ------------------------------------------------------------------

The following standard postestimation commands are also available:

  Command            Description
  ------------------------------------------------------------------
  estat ic           Akaike's and Schwarz's Bayesian information criteria (AIC and BIC)
  estat summarize    summary statistics for the estimation sample
  estat vce          variance-covariance matrix of the estimators (VCE)
  estimates          cataloging estimation results
  forecast           dynamic forecasts and simulations
  lincom             point estimates, standard errors, testing, and inference for
                       linear combinations of coefficients
  lrtest             likelihood-ratio test
  margins            marginal means, predictive margins, marginal effects, and
                       average marginal effects
  marginsplot        graph the results from margins
  nlcom              point estimates, standard errors, testing, and inference for
                       nonlinear combinations of coefficients
  predict            predictions, residuals, influence statistics, and other
                       diagnostic measures
  predictnl          point estimates, standard errors, testing, and inference for
                       generalized predictions
  test               Wald tests of simple and composite linear hypotheses
  testnl             Wald tests of nonlinear hypotheses
  ------------------------------------------------------------------
Syntax for predict

        predict [type] newvar [if] [in] [, statistic equation(eqno | eqname)]

  statistic             Description
  ------------------------------------------------------------------
  Main
    xb                  linear prediction; the default
    stdp                standard error of the linear prediction
    residuals           residuals
    ce                  predicted value of a cointegrating equation
    levels              one-step prediction of the level of the endogenous
                          variable
    usece(varlist_ce)   compute predictions using previously predicted
                          cointegrating equations
  ------------------------------------------------------------------

These statistics are available both in and out of sample; type predict ... if e(sample) ... if wanted only for the estimation sample.

Menu for predict

    Statistics > Postestimation > Predictions, residuals, etc.

Options for predict

Main
xb, the default, calculates the fitted values for the specified equation. The form of the VECM implies
that these fitted values are the one-step predictions for the first-differenced variables.
stdp calculates the standard error of the linear prediction for the specified equation.
residuals calculates the residuals from the specified equation of the VECM.
ce calculates the predicted value of the specified cointegrating equation.
levels calculates the one-step prediction of the level of the endogenous variable in the requested
equation.
usece(varlistce ) specifies that previously predicted cointegrating equations saved under the names in
varlistce be used to compute the predictions. The number of variables in the varlistce must equal
the number of cointegrating equations specified in the model.
equation(eqno | eqname) specifies to which equation you are referring.
equation() is filled in with one eqno or eqname for xb, residuals, stdp, ce, and levels
options. equation(#1) would mean that the calculation is to be made for the first equation,
equation(#2) would mean the second, and so on. You could also refer to the equation by its
name. equation(D_income) would refer to the equation named D_income and equation(_ce1),
to the first cointegrating equation, which is named _ce1 by vec.
If you do not specify equation(), the results are as if you specified equation(#1).
For more information on using predict after multiple-equation estimation commands, see [R] predict.
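For instance, continuing the rdinc example used in [TS] vec (a minimal sketch; D_ln_ne and _ce1 are the equation names that vec assigns by default to that model):

. predict ce1hat, ce equation(_ce1)
. predict dlnnehat, xb equation(D_ln_ne)

The first line saves the predicted cointegrating equation; the second saves the one-step predictions for the first-differenced series ln_ne.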
Forecasting
See the following sections for information on obtaining forecasts after vec:
[TS] fcast compute Compute dynamic forecasts after var, svar, or vec
[TS] fcast graph Graph forecasts after fcast compute
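For instance (a minimal sketch, assuming the vec fit from [TS] vec is the active estimation result; the prefix f_ is an arbitrary choice):

. fcast compute f_, step(12)
. fcast graph f_ln_ne f_ln_se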
Also see
[TS] vec Vector error-correction models
[U] 20 Estimation and postestimation commands
Title
veclmar Perform LM test for residual autocorrelation after vec
Syntax    Menu    Description    Options
Remarks and examples    Stored results    Methods and formulas    Reference
Also see
Syntax

    veclmar [, options]

options               Description
-----------------------------------------------------------------
mlag(#)               use # for the maximum order of autocorrelation to be tested; default is mlag(2)
estimates(estname)    use previously stored results estname; default is to use active results
separator(#)          draw separator line after every # rows
-----------------------------------------------------------------

Menu

    Statistics > Multivariate time series > VEC diagnostics and tests > LM test for residual autocorrelation
Description
veclmar implements a Lagrange multiplier (LM) test for autocorrelation in the residuals of vector
error-correction models (VECMs).
Options
mlag(#) specifies the maximum order of autocorrelation to be tested. The integer specified in mlag()
must be greater than 0; the default is 2.
estimates(estname) requests that veclmar use the previously obtained set of vec estimates stored
as estname. By default, veclmar uses the active results. See [R] estimates for information on
manipulating estimation results.
separator(#) specifies how many rows should appear in the table between separator lines. By
default, separator lines do not appear. For example, separator(1) would draw a line between
each row, separator(2) between every other row, and so on.
Example 1
We fit a VECM using the regional income data described in [TS] vec and then call veclmar to test
for autocorrelation.
. use http://www.stata-press.com/data/r13/rdinc
. vec ln_ne ln_se
(output omitted )
. veclmar, mlag(4)
   Lagrange-multiplier test
  +---------------------------------------+
  | lag  |    chi2     df    Prob > chi2  |
  |------+--------------------------------|
  |  1   |   8.9586     4      0.06214    |
  |  2   |   4.9809     4      0.28926    |
  |  3   |   4.8519     4      0.30284    |
  |  4   |   0.3270     4      0.98801    |
  +---------------------------------------+
   H0: no autocorrelation at lag order
At the 5% level, we cannot reject the null hypothesis that there is no autocorrelation in the residuals
for any of the orders tested. Thus this test finds no evidence of model misspecification.
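When several estimation results are in memory, the estimates() option lets veclmar test a stored vec fit rather than the active one; a minimal sketch (the name myvec is arbitrary):

. vec ln_ne ln_se
. estimates store myvec
. veclmar, mlag(4) estimates(myvec)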
Stored results
veclmar stores the following in r():
Matrices
  r(lm)    χ², df, and p-values

Methods and formulas

veclmar is based on the VECM

$$\Delta y_t = \alpha\beta' y_{t-1} + \sum_{i=1}^{p-1}\Gamma_i\,\Delta y_{t-i} + \epsilon_t$$
As discussed in [TS] vec, as long as the parameters in the cointegrating vectors, β, are exactly
identified or overidentified, the estimates of these parameters are superconsistent. This implies that
the r × 1 vector of estimated cointegrating relations

$$\widehat{E}_t = \widehat{\beta}' y_t \tag{1}$$

can be used as data with standard estimation and inference methods. When the parameters of the
cointegrating equations are not identified, (1) does not provide consistent estimates of $\widehat{E}_t$; in these
cases, veclmar exits with an error message.
The VECM above can be rewritten as

$$\Delta y_t = \alpha\widehat{E}_t + \sum_{i=1}^{p-1}\Gamma_i\,\Delta y_{t-i} + \epsilon_t$$
which is just a VAR with p − 1 lags where the endogenous variables have been first-differenced and
is augmented with the exogenous variables $\widehat{E}_t$. veclmar fits this VAR and then calls varlmar to
compute the LM test for autocorrelation.
The above discussion assumes no trend and implicitly ignores constraints on the parameters in
α. As discussed in [TS] vec, the other four trend specifications considered by Johansen (1995, sec. 5.7)
complicate the estimation of the free parameters in β but do not alter the basic result that the $\widehat{E}_t$ can
be used as data in the subsequent VAR. Similarly, constraints on the parameters in α imply that the
subsequent VAR must be estimated with these constraints applied, but $\widehat{E}_t$ can still be used as data in
the VAR.
See [TS] varlmar for more information on the Johansen LM test.
Reference
Johansen, S. 1995. Likelihood-Based Inference in Cointegrated Vector Autoregressive Models. Oxford: Oxford University
Press.
Also see
[TS] vec Vector error-correction models
[TS] varlmar Perform LM test for residual autocorrelation after var or svar
[TS] vec intro Introduction to vector error-correction models
Title
vecnorm Test for normally distributed disturbances after vec
Syntax    Menu    Description    Options
Remarks and examples    Stored results    Methods and formulas    References
Also see
Syntax

    vecnorm [, options]

options               Description
-----------------------------------------------------------------
jbera                 report Jarque–Bera statistic
skewness              report skewness statistic
kurtosis              report kurtosis statistic
estimates(estname)    use previously stored results estname; default is to use active results
dfk                   make small-sample adjustment when computing the estimated variance–covariance matrix of the disturbances
separator(#)          draw separator line after every # rows
-----------------------------------------------------------------

Menu

    Statistics > Multivariate time series > VEC diagnostics and tests > Test for normally distributed disturbances
Description
vecnorm computes and reports a series of statistics against the null hypothesis that the disturbances
in a VECM are normally distributed.
Options
jbera requests that the Jarque–Bera statistic and any other explicitly requested statistic be reported.
By default, the Jarque–Bera, skewness, and kurtosis statistics are reported.
skewness requests that the skewness statistic and any other explicitly requested statistic be reported.
By default, the Jarque–Bera, skewness, and kurtosis statistics are reported.
kurtosis requests that the kurtosis statistic and any other explicitly requested statistic be reported.
By default, the Jarque–Bera, skewness, and kurtosis statistics are reported.
estimates(estname) requests that vecnorm use the previously obtained set of vec estimates stored
as estname. By default, vecnorm uses the active results. See [R] estimates for information on
manipulating estimation results.
dfk requests that a small-sample adjustment be made when computing the estimated variance–covariance
matrix of the disturbances.
separator(#) specifies how many rows should appear in the table between separator lines. By
default, separator lines do not appear. For example, separator(1) would draw a line between
each row, separator(2) between every other row, and so on.
Example 1
This example uses vecnorm to test for normality after estimating the parameters of a VECM using
the regional income data.
. use http://www.stata-press.com/data/r13/rdinc
. vec ln_ne ln_se
(output omitted )
. vecnorm
   Jarque-Bera test

     Equation     chi2    df   Prob > chi2
     D_ln_ne     0.094     2     0.95417
     D_ln_se     0.586     2     0.74608
     ALL         0.680     4     0.95381

   Skewness test

     Equation    Skewness    chi2    df   Prob > chi2
     D_ln_ne       .05982    0.032    1     0.85890
     D_ln_se         .243    0.522    1     0.47016
     ALL                     0.553    2     0.75835

   Kurtosis test

     Equation    Kurtosis    chi2    df   Prob > chi2
     D_ln_ne       3.1679    0.062    1     0.80302
     D_ln_se       2.8294    0.064    1     0.79992
     ALL                     0.126    2     0.93873
The Jarque–Bera results present test statistics for each equation and for all equations jointly
against the null hypothesis of normality. For the individual equations, the null hypothesis is that the
disturbance term in that equation has a univariate normal distribution. For all equations jointly, the
null hypothesis is that the K disturbances come from a K-dimensional normal distribution. In this
example, the single-equation and overall Jarque–Bera statistics do not reject the null of normality.
The single-equation skewness test statistics test the null hypothesis that the disturbance term
in each equation has zero skewness, which is the skewness of a normally distributed variable. The
row marked ALL shows the results for a test that the disturbances in all equations jointly have zero
skewness. The skewness results shown above do not suggest nonnormality.
The kurtosis of a normally distributed variable is three, and the kurtosis statistics presented in the
table test the null hypothesis that the disturbance terms have kurtosis consistent with normality. The
results in this example do not reject the null hypothesis.
The statistics computed by vecnorm are based on the estimated variance–covariance matrix of the
disturbances. vec saves the ML estimate of this matrix, which vecnorm uses by default. Specifying
the dfk option instructs vecnorm to make a small-sample adjustment to the variance–covariance
matrix before computing the test statistics.
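For instance, to repeat the test above with the small-sample adjustment (a minimal sketch):

. vecnorm, dfk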
Stored results
vecnorm stores the following in r():

Macros
  r(dfk)         dfk, if specified

Matrices
  r(jb)          Jarque–Bera χ², df, and p-values
  r(skewness)    skewness χ², df, and p-values
  r(kurtosis)    kurtosis χ², df, and p-values
References
Hamilton, J. D. 1994. Time Series Analysis. Princeton: Princeton University Press.
Jarque, C. M., and A. K. Bera. 1987. A test for normality of observations and regression residuals. International Statistical Review 55: 163–172.
Johansen, S. 1995. Likelihood-Based Inference in Cointegrated Vector Autoregressive Models. Oxford: Oxford University Press.
Lütkepohl, H. 2005. New Introduction to Multiple Time Series Analysis. New York: Springer.
Also see
[TS] vec Vector error-correction models
[TS] varnorm Test for normally distributed disturbances after var or svar
[TS] vec intro Introduction to vector error-correction models
Title
vecrank Estimate the cointegrating rank of a VECM
Syntax    Menu    Description    Options
Remarks and examples    Stored results    Methods and formulas    References
Also see
Syntax

    vecrank depvarlist [if] [in] [, options]

options                    Description
-----------------------------------------------------------------
Model
  lags(#)                  use # for the maximum lag of the underlying VAR model
  trend(constant)          include an unrestricted constant in the model; the default
  trend(rconstant)         include a restricted constant in the model
  trend(trend)             include a linear trend in the cointegrating equations and a quadratic trend in the undifferenced data
  trend(rtrend)            include a restricted trend in the model
  trend(none)              do not include a trend or a constant
Adv. model
  sindicators(varlist_si)  include normalized seasonal indicator variables varlist_si
  noreduce                 do not perform checks and corrections for collinearity among lags of dependent variables
Reporting
  notrace                  do not report the trace statistic
  max                      report maximum-eigenvalue statistic
  ic                       report information criteria
  level99                  report 1% critical values instead of 5% critical values
  levela                   report both 1% and 5% critical values
-----------------------------------------------------------------
You must tsset your data before using vecrank; see [TS] tsset.
depvarlist may contain time-series operators; see [U] 11.4.4 Time-series varlists.
by, rolling, and statsby are allowed; see [U] 11.1.10 Prefix commands.
vecrank does not allow gaps in the data.

Menu

    Statistics > Multivariate time series > Cointegrating rank of a VECM
Description
vecrank produces statistics used to determine the number of cointegrating equations in a vector
error-correction model (VECM).
Options
Model
lags(#) specifies the number of lags in the VAR representation of the model. The VECM will include
one fewer lag of the first differences. The number of lags must be greater than zero but small
enough so that the degrees of freedom used by the model are less than the number of observations.
trend(trend_spec) specifies one of five trend specifications to include in the model. See [TS] vec
intro and [TS] vec for descriptions. The default is trend(constant).
Adv. model

sindicators(varlist_si) specifies normalized seasonal indicator variables to be included in the model.

noreduce causes vecrank to skip the checks and corrections for collinearity among the lags of the
dependent variables.

Reporting
notrace requests that the output for the trace statistic not be displayed. The default is to display the
trace statistic.
max requests that the output for the maximum-eigenvalue statistic be displayed. The default is to not
display this output.
ic causes the output for the information criteria to be displayed. The default is to not display this
output.
level99 causes the 1% critical values to be displayed instead of the default 5% critical values.
levela causes both the 1% and the 5% critical values to be displayed.
Introduction
Before estimating the parameters of a VECM, you must choose the number of lags in the
underlying VAR, the trend specification, and the number of cointegrating equations. vecrank offers
several ways of determining the number of cointegrating vectors conditional on a trend specification
and lag order.
vecrank implements three types of methods for determining r, the number of cointegrating
equations in a VECM. The first is Johansen's trace statistic method. The second is his maximum-eigenvalue
statistic method. The third method chooses r to minimize an information criterion.

All three methods are based on Johansen's maximum likelihood (ML) estimator of the parameters
of a cointegrating VECM. The basic VECM is

$$\Delta y_t = \alpha\beta' y_{t-1} + \sum_{i=1}^{p-1}\Gamma_i\,\Delta y_{t-i} + \epsilon_t$$

where y_t is a (K × 1) vector of I(1) variables, α and β are (K × r) parameter matrices with rank
r < K, Γ₁, ..., Γ_{p−1} are (K × K) matrices of parameters, and ε_t is a (K × 1) vector of normally
distributed errors that is serially uncorrelated but has contemporaneous covariance matrix Σ.

Building on the work of Anderson (1951), Johansen (1995) derives an ML estimator for the
parameters and two likelihood-ratio (LR) tests for inference on r. These LR tests are known as the
trace statistic and the maximum-eigenvalue statistic because the log likelihood can be written as the
log of the determinant of a matrix plus a simple function of the eigenvalues of another matrix.

Let λ₁, ..., λ_K be the K eigenvalues used in computing the log likelihood at the optimum.
Furthermore, assume that these eigenvalues are sorted from the largest λ₁ to the smallest λ_K. If there
are r < K cointegrating equations, α and β have rank r, and the eigenvalues λ_{r+1}, ..., λ_K are zero.
The trace statistic for a given r is

$$-T\sum_{i=r+1}^{K}\ln\left(1-\widehat{\lambda}_i\right)$$
Example 1
We have quarterly data on the natural logs of aggregate consumption, investment, and GDP in
the United States from the first quarter of 1959 through the fourth quarter of 1982. As discussed in
King et al. (1991), the balanced-growth hypothesis in economics implies that we would expect to
find two cointegrating equations among these three variables. In the output below, we use vecrank
to determine the number of cointegrating equations using Johansen's multiple-trace test method.
. vecrank y i c, lags(5)

                       Johansen tests for cointegration
Trend: constant                                      Number of obs =      91
Sample:  1960q2 - 1982q4                                      Lags =       5
-----------------------------------------------------------------------------
maximum                                         trace      5% critical
  rank    parms       LL       eigenvalue    statistic        value
    0       39     1231.1041        .          46.1492        29.68
    1       44     1245.3882      0.26943      17.5810        15.41
    2       47     1252.5055      0.14480       3.3465*        3.76
    3       48     1254.1787      0.03611
-----------------------------------------------------------------------------
The header reports information about the sample, the trend specification, and the number of
lags included in the model. The main table contains a separate row for each possible value of r, the
number of cointegrating equations. When r = 3, all three variables in this model are stationary.

In this example, because the trace statistic at r = 0 of 46.1492 exceeds its critical value of 29.68,
we reject the null hypothesis of no cointegrating equations. Similarly, because the trace statistic at
r = 1 of 17.581 exceeds its critical value of 15.41, we reject the null hypothesis that there is one or
fewer cointegrating equations. In contrast, because the trace statistic at r = 2 of 3.3465 is less than its
critical value of 3.76, we cannot reject the null hypothesis that there are two or fewer cointegrating
equations. Because Johansen's method for estimating r is to accept as r̂ the first r for which the null
hypothesis is not rejected, we accept r = 2 as our estimate of the number of cointegrating equations
between these three variables. The * by the trace statistic at r = 2 indicates that this is the value
of r selected by Johansen's multiple-trace test procedure. The eigenvalue shown in the last line of
output is used to compute the trace statistic in the preceding line.
Example 2
In the previous example, we used the default 5% critical values. We can estimate r with 1%
critical values instead by specifying the level99 option.
. vecrank y i c, lags(5) level99

                       Johansen tests for cointegration
Trend: constant                                      Number of obs =      91
Sample:  1960q2 - 1982q4                                      Lags =       5
-----------------------------------------------------------------------------
maximum                                         trace      1% critical
  rank    parms       LL       eigenvalue    statistic        value
    0       39     1231.1041        .          46.1492        35.65
    1       44     1245.3882      0.26943      17.5810*       20.04
    2       47     1252.5055      0.14480       3.3465         6.65
    3       48     1254.1787      0.03611
-----------------------------------------------------------------------------
The output indicates that switching from the 5% to the 1% level changes the resulting estimate from
r = 2 to r = 1.
Example 3
In the output below, we reexamine the balanced-growth hypothesis. We use the levela option to
obtain both the 5% and 1% critical values, and we use the notrace option to suppress the table of
trace statistics.
. vecrank y i c, lags(5) max levela notrace

                       Johansen tests for cointegration
Trend: constant                                      Number of obs =      91
Sample:  1960q2 - 1982q4                                      Lags =       5
-----------------------------------------------------------------------------
maximum                                          max      5% critical  1% critical
  rank    parms       LL       eigenvalue    statistic       value        value
    0       39     1231.1041        .          28.5682       20.97        25.52
    1       44     1245.3882      0.26943      14.2346       14.07        18.63
    2       47     1252.5055      0.14480       3.3465        3.76         6.65
    3       48     1254.1787      0.03611
-----------------------------------------------------------------------------
We can reject r = 1 in favor of r = 2 at the 5% level but not at the 1% level. As with the trace
statistic method, whether we choose to specify one or two cointegrating equations in our VECM will
depend on the significance level we use here.
Example 4
We use these information-criteria methods to estimate the number of cointegrating equations in
our balanced-growth data.
. vecrank y i c, lags(5) ic notrace

                       Johansen tests for cointegration
Trend: constant                                      Number of obs =      91
Sample:  1960q2 - 1982q4                                      Lags =       5
-----------------------------------------------------------------------------
maximum
  rank    parms       LL       eigenvalue      SBIC         HQIC         AIC
    0       39     1231.1041        .        -25.12401    -25.76596    -26.20009
    1       44     1245.3882      0.26943    -25.19009    -25.91435    -26.40414
    2       47     1252.5055      0.14480    -25.19781*   -25.97144*   -26.49463
    3       48     1254.1787      0.03611    -25.18501    -25.97511    -26.50942
-----------------------------------------------------------------------------
Both the SBIC and the HQIC estimators suggest that there are two cointegrating equations in the
balanced-growth data.
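The ranks selected by the various criteria are also stored in e(), so they can be picked up programmatically; a minimal sketch continuing this example:

. vecrank y i c, lags(5) ic
. display e(k_ce95) "  " e(k_cesbic) "  " e(k_cehqic)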
Stored results
vecrank stores the following in e():

Scalars
  e(N)               number of observations
  e(k_eq)            number of equations in e(b)
  e(k_dv)            number of dependent variables
  e(tmin)            minimum time
  e(tmax)            maximum time
  e(n_lags)          number of lags
  e(k_ce95)          number of cointegrating equations chosen by multiple trace tests with 5% critical values
  e(k_ce99)          number of cointegrating equations chosen by multiple trace tests with 1% critical values
  e(k_cesbic)        number of cointegrating equations chosen by SBIC
  e(k_cehqic)        number of cointegrating equations chosen by HQIC

Macros
  e(cmd)             vecrank
  e(cmdline)         command as typed
  e(trend)           trend specified
  e(reduced_lags)    list of maximum lags to which the model has been reduced
  e(reduce_opt)      noreduce, if noreduce is specified
  e(tsfmt)           format for the current time variable

Matrices
  e(max)             vector of maximum-eigenvalue statistics
  e(trace)           vector of trace statistics
  e(lambda)          vector of eigenvalues
  e(k_rank)          vector of numbers of unconstrained parameters
  e(hqic)            vector of HQIC values
  e(sbic)            vector of SBIC values
  e(aic)             vector of AIC values
Methods and formulas

The log likelihood of the unrestricted model, which places no restrictions on the cointegrating
vectors, is

$$L_A = -\frac{T}{2}\left[K\{\ln(2\pi)+1\} + \ln(|S_{00}|) + \sum_{i=1}^{K}\ln\left(1-\widehat{\lambda}_i\right)\right] \tag{2}$$

Thus the LR test that compares the unrestricted model in (2) with the restricted model in (1) is given
by

$$\mathrm{LR}_{\mathrm{trace}} = -T\sum_{i=r+1}^{K}\ln\left(1-\widehat{\lambda}_i\right)$$
As discussed by Johansen (1995), the trace statistic has a nonstandard distribution under the null
hypothesis because the null hypothesis places restrictions on the coefficients on y_{t−1}, which is
assumed to have K − r random-walk components. vecrank reports the Osterwald-Lenum (1992)
critical values.
The maximum-eigenvalue statistic compares the null model containing r cointegrating relations
with the alternative model that has r + 1 cointegrating relations. Thus using these two values for r
in (1) and a few lines of algebra implies that the LR test of this hypothesis is

$$\mathrm{LR}_{\max} = -T\ln\left(1-\widehat{\lambda}_{r+1}\right)$$

As for the trace statistic, because this test involves restrictions on the coefficients on a vector of
I(1) variables, the test statistic's distribution will be nonstandard. vecrank reports the Osterwald-Lenum (1992) critical values.
The formulas for the AIC, SBIC, and HQIC are given in Methods and formulas of [TS] vec.
Søren Johansen (1939– ) earned degrees in mathematical statistics at the University of Copenhagen,
where he is now based. In addition to making contributions to mathematical statistics, probability
theory, and medical statistics, he has worked mostly in econometrics, in particular on the theory
of cointegration.
References
Anderson, T. W. 1951. Estimating linear restrictions on regression coefficients for multivariate normal distributions. Annals of Mathematical Statistics 22: 327–351.
Aznar, A., and M. Salvador. 2002. Selecting the rank of the cointegration space and the form of the intercept using an information criterion. Econometric Theory 18: 926–947.
Engle, R. F., and C. W. J. Granger. 1987. Co-integration and error correction: Representation, estimation, and testing. Econometrica 55: 251–276.
Gonzalo, J., and J.-Y. Pitarakis. 1998. Specification via model selection in vector error correction models. Economics Letters 60: 321–328.
Hamilton, J. D. 1994. Time Series Analysis. Princeton: Princeton University Press.
Hubrich, K., H. Lütkepohl, and P. Saikkonen. 2001. A review of systems cointegration tests. Econometric Reviews 20: 247–318.
Johansen, S. 1988. Statistical analysis of cointegration vectors. Journal of Economic Dynamics and Control 12: 231–254.
Johansen, S. 1991. Estimation and hypothesis testing of cointegration vectors in Gaussian vector autoregressive models. Econometrica 59: 1551–1580.
Johansen, S. 1995. Likelihood-Based Inference in Cointegrated Vector Autoregressive Models. Oxford: Oxford University Press.
King, R. G., C. I. Plosser, J. H. Stock, and M. W. Watson. 1991. Stochastic trends and economic fluctuations. American Economic Review 81: 819–840.
Lütkepohl, H. 2005. New Introduction to Multiple Time Series Analysis. New York: Springer.
Maddala, G. S., and I.-M. Kim. 1998. Unit Roots, Cointegration, and Structural Change. Cambridge: Cambridge University Press.
Osterwald-Lenum, M. G. 1992. A note with quantiles of the asymptotic distribution of the maximum likelihood cointegration rank test statistics. Oxford Bulletin of Economics and Statistics 54: 461–472.
Park, J. Y., and P. C. B. Phillips. 1988. Statistical inference in regressions with integrated processes: Part I. Econometric Theory 4: 468–497.
Park, J. Y., and P. C. B. Phillips. 1989. Statistical inference in regressions with integrated processes: Part II. Econometric Theory 5: 95–131.
Phillips, P. C. B. 1986. Understanding spurious regressions in econometrics. Journal of Econometrics 33: 311–340.
Phillips, P. C. B., and S. N. Durlauf. 1986. Multiple time series regressions with integrated processes. Review of Economic Studies 53: 473–495.
Sims, C. A., J. H. Stock, and M. W. Watson. 1990. Inference in linear time series models with some unit roots. Econometrica 58: 113–144.
Stock, J. H. 1987. Asymptotic properties of least squares estimators of cointegrating vectors. Econometrica 55: 1035–1056.
Stock, J. H., and M. W. Watson. 1988. Testing for common trends. Journal of the American Statistical Association 83: 1097–1107.
Watson, M. W. 1994. Vector autoregressions and cointegration. In Vol. 4 of Handbook of Econometrics, ed. R. F. Engle and D. L. McFadden. Amsterdam: Elsevier.
Also see
[TS] tsset Declare data to be time-series data
[TS] vec Vector error-correction models
[TS] vec intro Introduction to vector error-correction models
Title
vecstable Check the stability condition of VECM estimates
Syntax    Menu    Description    Options
Remarks and examples    Stored results    Methods and formulas    References
Also see
Syntax

    vecstable [, options]

options                  Description
-----------------------------------------------------------------
Main
  estimates(estname)     use previously stored results estname; default is to use active results
  amat(matrix_name)      save the companion matrix as matrix_name
  graph                  graph eigenvalues of the companion matrix
  dlabel                 label eigenvalues with the distance from the unit circle
  modlabel               label eigenvalues with the modulus
  marker_options         change look of markers (color, size, etc.)
  rlopts(cline_options)  affect rendition of reference unit circle
  nogrid                 suppress polar grid circles
  pgrid([...])           specify radii and appearance of polar grid circles; see Options for details
Add plots
  addplot(plot)          add other plots to the generated graph
Y axis, X axis, Titles, Legend, Overall
  twoway_options         any options other than by() documented in [G-3] twoway_options
-----------------------------------------------------------------

Menu

    Statistics > Multivariate time series > VEC diagnostics and tests > Check stability condition of VEC estimates
Description
vecstable checks the eigenvalue stability condition in a vector error-correction model (VECM)
fit using vec.
Options
Main
estimates(estname) requests that vecstable use the previously obtained set of vec estimates stored
as estname. By default, vecstable uses the active results. See [R] estimates for information on
manipulating estimation results.
amat(matrix_name) specifies a valid Stata matrix name by which the companion matrix can be saved.
The companion matrix is referred to as the A matrix in Lütkepohl (2005) and [TS] varstable. The
default is not to save the companion matrix.
graph causes vecstable to draw a graph of the eigenvalues of the companion matrix.
dlabel labels the eigenvalues with their distances from the unit circle. dlabel cannot be specified
with modlabel.
modlabel labels the eigenvalues with their moduli. modlabel cannot be specified with dlabel.
marker options specify the look of markers. This look includes the marker symbol, the marker size,
and its color and outline; see [G-3] marker options.
rlopts(cline options) affects the rendition of the reference unit circle; see [G-3] cline options.
nogrid suppresses the polar grid circles.
pgrid([numlist][, line_options]) determines the radii and appearance of the polar grid circles.
By default, the graph includes nine polar grid circles with radii 0.1, 0.2, ..., 0.9 that have the grid
linestyle. The numlist specifies the radii for the polar grid circles. The line_options determine the
appearance of the polar grid circles; see [G-3] line_options. Because the pgrid() option can be
repeated, circles with different radii can have distinct appearances.
Add plots
addplot(plot) adds specified plots to the generated graph; see [G-3] addplot option.
twoway options are any of the options documented in [G-3] twoway options, excluding by(). These
include options for titling the graph (see [G-3] title options) and for saving the graph to disk (see
[G-3] saving option).
Example 1
In example 1 of [TS] vec, we estimated the parameters of a bivariate VECM of the natural logs
of the average disposable incomes in two of the economic regions created by the U.S. Bureau of
Economic Analysis. In that example, we concluded that the predicted cointegrating equation was
probably not stationary. Here we continue that example by refitting that model and using vecstable
to analyze the eigenvalues of the companion matrix of the corresponding VAR.
. use http://www.stata-press.com/data/r13/rdinc
. vec ln_ne ln_se
(output omitted )
. vecstable
   Eigenvalue stability condition
  +------------------------------------------+
  |          Eigenvalue           |  Modulus |
  |-------------------------------+----------|
  |        1                      |     1    |
  |   .9477854                    |  .947785 |
  |   .2545357 +  .2312756i       |  .343914 |
  |   .2545357 -  .2312756i       |  .343914 |
  +------------------------------------------+
   The VECM specification imposes a unit modulus.
The output contains a table showing the eigenvalues of the companion matrix and their associated
moduli. The table shows that one of the roots is 1. The table footer reminds us that the specified
VECM imposes one unit modulus on the companion matrix.
The output indicates that there is a real root at about 0.95. Although there is no distribution
theory to measure how close this root is to one, per other discussions in the literature (for example,
Johansen [1995, 137–138]), we conclude that the root of 0.95 supports our earlier analysis, in which
we concluded that the predicted cointegrating equation is probably not stationary.
If we had included the graph option with vecstable, the following graph would have been
displayed:
(figure omitted: Roots of the companion matrix, plotted in the complex plane with Real and Imaginary axes and the unit circle for reference)
The graph plots the eigenvalues of the companion matrix with the real component on the x axis and
the imaginary component on the y axis. Although the information is the same as in the table, the
graph shows visually how close the root with modulus 0.95 is to the unit circle.
Stored results
vecstable stores the following in r():

Scalars
  r(unitmod)     number of unit moduli imposed by the VECM specification

Matrices
  r(Re)          real part of the eigenvalues of A
  r(Im)          imaginary part of the eigenvalues of A
  r(Modulus)     modulus of the eigenvalues of A

where A is the companion matrix of the VAR that corresponds to the VECM.
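For instance, to save and inspect the companion matrix (a minimal sketch; Acomp is an arbitrary matrix name):

. vecstable, amat(Acomp)
. matrix list Acomp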
References
Hamilton, J. D. 1994. Time Series Analysis. Princeton: Princeton University Press.
Johansen, S. 1995. Likelihood-Based Inference in Cointegrated Vector Autoregressive Models. Oxford: Oxford University
Press.
Lütkepohl, H. 2005. New Introduction to Multiple Time Series Analysis. New York: Springer.
Also see
[TS] vec Vector error-correction models
[TS] vec intro Introduction to vector error-correction models
Title
wntestb Bartlett's periodogram-based test for white noise
Syntax    Menu    Description    Options
Remarks and examples    Stored results    Methods and formulas    Acknowledgment
References    Also see
Syntax

    wntestb varname [if] [in] [, options]

options                  Description
-----------------------------------------------------------------
Main
  table                  display a table instead of graphical output
  level(#)               set confidence level; default is level(95)
Plot
  marker_options         change look of markers (color, size, etc.)
  marker_label_options   add marker labels; change look or position
  cline_options          affect rendition of the plotted points
Add plots
  addplot(plot)          add other plots to the generated graph
Y axis, X axis, Titles, Legend, Overall
  twoway_options         any options other than by() documented in [G-3] twoway_options
-----------------------------------------------------------------
You must tsset your data before using wntestb; see [TS] tsset. In addition, the time series must be dense
(nonmissing with no gaps in the time variable) in the specified sample.
varname may contain time-series operators; see [U] 11.4.4 Time-series varlists.

Menu

    Statistics > Time series > Tests > Bartlett's periodogram-based white-noise test
Description
wntestb performs Bartlett's periodogram-based test for white noise. The result is presented
graphically by default but optionally may be presented as text in a table.
Options
Main
table displays the test results as a table instead of as the default graph.
level(#) specifies the confidence level, as a percentage, for the confidence bands included on the
graph. The default is level(95) or as set by set level; see [U] 20.7 Specifying the width of
confidence intervals.
Plot
marker options specify the look of markers. This look includes the marker symbol, the marker size,
and its color and outline; see [G-3] marker options.
marker label options specify if and how the markers are to be labeled; see
[G-3] marker label options.
cline options specify if the points are to be connected with lines and the rendition of those lines; see
[G-3] cline options.
Add plots
addplot(plot) adds specified plots to the generated graph; see [G-3] addplot option.
twoway options are any of the options documented in [G-3] twoway options, excluding by(). These
include options for titling the graph (see [G-3] title options) and for saving the graph to disk (see
[G-3] saving option).
Example 1
In this example, we generate two time series and show the graphical and statistical tests that can
be obtained from this command. The first time series is a white-noise process, and the second is a
white-noise process with an embedded deterministic cosine curve.
. drop _all
. set seed 12393
. set obs 100
obs was 0, now 100
. generate x1 = rnormal()
. generate x2 = rnormal() + cos(2*_pi*(_n-1)/10)
. generate time = _n
. tsset time
time variable:
delta:
time, 1 to 100
1 unit
We can then submit the white-noise data to the wntestb command by typing
. wntestb x1

(figure omitted: cumulative periodogram of x1 plotted against frequency from 0.00 to 0.50, with the test statistic and confidence bands)
We can see in the graph that the values never appear outside the confidence bands. The test statistic
has a p-value of 0.91, so we conclude that the process is not different from white noise. If we had
wanted only the statistic without the plot, we could have used the table option.
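For instance (a minimal sketch using the series generated above):

. wntestb x1, table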
Turning our attention to the other series (x2), we type
. wntestb x2
(figure omitted: cumulative periodogram of x2 plotted against frequency from 0.00 to 0.50; the curve steps outside the confidence bands near frequency 0.1)
Here the process does appear outside of the bands. In fact, it steps out of the bands at a frequency
of 0.1 (exactly as we synthesized this process). We also have confirmation from the test statistic, at
a p-value of 0.001, that the process is significantly different from white noise.
Stored results
wntestb stores the following in r():
Scalars
  r(stat)    Bartlett's statistic
  r(p)       probability value
Methods and formulas

Feller (1948) shows that

$$\lim_{q\to\infty}\Pr\left(\max_{1\le k\le q}\sqrt{q}\,\left|U_k-\frac{k}{q}\right|\le a\right)=\sum_{j=-\infty}^{\infty}(-1)^j e^{-2a^2j^2}=G(a)$$

The Bartlett statistic is computed as

$$B=\max_{1\le k\le q}\sqrt{\frac{n}{2}}\,\left|\widehat{F}_k-\frac{k}{q}\right|$$

where $\widehat{F}_k$ is the cumulative periodogram defined in terms of the sample spectral density $\widehat{f}$ (see
[TS] pergram) as

$$\widehat{F}_k=\frac{\sum_{j=1}^{k}\widehat{f}(\omega_j)}{\sum_{j=1}^{q}\widehat{f}(\omega_j)}$$

The associated p-value for the Bartlett statistic and the confidence bands on the graph are computed
as 1 − G(B) using Feller's result.
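To illustrate Feller's limiting distribution numerically, G(a) can be approximated by truncating the infinite sum; the sketch below (not part of wntestb; the truncation at j = 20 and the value a = 1.36 are arbitrary choices) uses the symmetry of the terms in j:

. scalar a = 1.36
. scalar G = 1
. forvalues j = 1/20 {
.     scalar G = G + 2*cond(mod(`j',2)==0, 1, -1)*exp(-2*a^2*`j'^2)
. }
. display "G(a) = " G

For a = 1.36, this evaluates to about 0.95, the familiar Kolmogorov–Smirnov 95% point.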
Maurice Stevenson Bartlett (1910–2002) was a British statistician. Apart from a short period
in industry, he spent his career teaching and researching at the universities of Cambridge,
Manchester, London (University College), and Oxford. His many contributions include work on
the statistical analysis of multivariate data (especially factor analysis) and time series and on
stochastic models of population growth, epidemics, and spatial processes.
Acknowledgment
wntestb is based on the wntestf command by H. Joseph Newton (1996) of the Department of
Statistics at Texas A&M University and coeditor of the Stata Journal.
References
Bartlett, M. S. 1955. An Introduction to Stochastic Processes with Special Reference to Methods and Applications. Cambridge: Cambridge University Press.
Feller, W. 1948. On the Kolmogorov–Smirnov limit theorems for empirical distributions. Annals of Mathematical Statistics 19: 177–189.
Gani, J. 2002. Professor M. S. Bartlett FRS, 1910–2002. Statistician 51: 399–402.
Newton, H. J. 1988. TIMESLAB: A Time Series Analysis Laboratory. Belmont, CA: Wadsworth.
Newton, H. J. 1996. sts12: A periodogram-based test for white noise. Stata Technical Bulletin 34: 36–39. Reprinted in Stata Technical Bulletin Reprints, vol. 6, pp. 203–207. College Station, TX: Stata Press.
Olkin, I. 1989. A conversation with Maurice Bartlett. Statistical Science 4: 151–163.
Also see
[TS] tsset Declare data to be time-series data
[TS] corrgram Tabulate and graph autocorrelations
[TS] cumsp Cumulative spectral distribution
[TS] pergram Periodogram
[TS] wntestq Portmanteau (Q) test for white noise
Title
wntestq Portmanteau (Q) test for white noise
Syntax    Menu    Description    Option
Remarks and examples    Stored results    Methods and formulas    References
Also see
Syntax

    wntestq varname [if] [in] [, lags(#)]

You must tsset your data before using wntestq; see [TS] tsset. Also the time series must be dense (nonmissing
with no gaps in the time variable) in the specified sample.
varname may contain time-series operators; see [U] 11.4.4 Time-series varlists.

Menu

    Statistics > Time series > Tests > Portmanteau white-noise test
Description
wntestq performs the portmanteau (or Q) test for white noise.
Option
lags(#) specifies the number of autocorrelations to calculate. The default is to use min(⌊n/2⌋ − 2, 40),
where ⌊n/2⌋ is the greatest integer less than or equal to n/2.
Example 1
In the example shown in [TS] wntestb, we generated two time series. One (x1) was a white-noise
process, and the other (x2) was a white-noise process with an embedded cosine curve. Here we
compare the output of the two tests.
. drop _all
. set seed 12393
. set obs 100
obs was 0, now 100
. generate x1 = rnormal()
. generate x2 = rnormal() + cos(2*_pi*(_n-1)/10)
. generate time = _n
. tsset time
time variable: time, 1 to 100
delta: 1 unit
. wntestq x1

Portmanteau test for white noise
---------------------------------------
 Portmanteau (Q) statistic =    32.6863
 Prob > chi2(40)           =     0.7875

. wntestq x2

Portmanteau test for white noise
---------------------------------------
 Portmanteau (Q) statistic =   129.4436
 Prob > chi2(40)           =     0.0000
This example shows that both tests agree. For the first process, the Bartlett and portmanteau tests
result in nonsignificant test statistics: a p-value of 0.9053 for wntestb and one of 0.7875 for wntestq.
For the second process, each test yields a significant result at the 0.001 level.
Stored results
wntestq stores the following in r():

Scalars
  r(stat)    Q statistic
  r(df)      degrees of freedom
  r(p)       probability value
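These results can be retrieved after the test; a minimal sketch using the series generated above:

. wntestq x1
. display r(stat) "  " r(df) "  " r(p)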
Methods and formulas

The portmanteau test relies on the fact that if x(t) is a white-noise process,

$$Q = n(n+2)\sum_{j=1}^{m}\frac{1}{n-j}\,\widehat{\rho}_j^{\,2}\;\xrightarrow{d}\;\chi^2_m$$

where m is the number of autocorrelations calculated (equal to the number of lags specified) and
$\xrightarrow{d}$ indicates convergence in distribution to a χ² distribution with m degrees of freedom. $\widehat{\rho}_j$ is the
estimated autocorrelation for lag j; see [TS] corrgram for details.
References
Box, G. E. P., and D. A. Pierce. 1970. Distribution of residual autocorrelations in autoregressive-integrated moving average time series models. Journal of the American Statistical Association 65: 1509–1526.
Diggle, P. J. 1990. Time Series: A Biostatistical Introduction. Oxford: Oxford University Press.
Ljung, G. M., and G. E. P. Box. 1978. On a measure of lack of fit in time series models. Biometrika 65: 297–303.
Sperling, R. I., and C. F. Baum. 2001. sts19: Multivariate portmanteau (Q) test for white noise. Stata Technical Bulletin 60: 39–41. Reprinted in Stata Technical Bulletin Reprints, vol. 10, pp. 373–375. College Station, TX: Stata Press.
Also see
[TS] tsset Declare data to be time-series data
[TS] corrgram Tabulate and graph autocorrelations
[TS] cumsp Cumulative spectral distribution
[TS] pergram Periodogram
[TS] wntestb Bartlett's periodogram-based test for white noise
Title
xcorr Cross-correlogram for bivariate time series
Syntax    Menu    Description    Options
Remarks and examples    Methods and formulas    References    Also see
Syntax

    xcorr varname1 varname2 [if] [in] [, options]

options                  Description
-----------------------------------------------------------------
Main
  generate(newvar)       generate a variable containing cross-correlation values
  table                  display a table instead of graphical output
  noplot                 do not include the character-based plot in tabular output
  lags(#)                include # lags and leads in the graph
Plot
  base(#)                value to drop the lines to; default is base(0)
  marker_options         change look of markers (color, size, etc.)
  marker_label_options   add marker labels; change look or position
  line_options           change look of dropped lines
Add plots
  addplot(plot)          add other plots to the generated graph
Y axis, X axis, Titles, Legend, Overall
  twoway_options         any options other than by() documented in [G-3] twoway_options
-----------------------------------------------------------------
You must tsset your data before using xcorr; see [TS] tsset.
varname1 and varname2 may contain time-series operators; see [U] 11.4.4 Time-series varlists.

Menu

    Statistics > Time series > Graphs > Cross-correlogram for bivariate time series
Description
xcorr plots the sample cross-correlation function.
Options

Main

generate(newvar) specifies a new variable to contain the cross-correlation values.

table requests that the results be presented as a table rather than the default graph.

noplot requests that the table not include the character-based plot of the cross-correlations.

lags(#) indicates the number of lags and leads to include in the graph. The default is to use
min(⌊n/2⌋ − 2, 20).
Plot
base(#) specifies the value from which the lines should extend. The default is base(0).
marker_options, marker_label_options, and line_options affect the rendition of the plotted cross-correlations.
marker options specify the look of markers. This look includes the marker symbol, the marker
size, and its color and outline; see [G-3] marker options.
marker label options specify if and how the markers are to be labeled; see
[G-3] marker label options.
line options specify the look of the dropped lines, including pattern, width, and color; see
[G-3] line options.
Add plots
addplot(plot) provides a way to add other plots to the generated graph; see [G-3] addplot option.
twoway options are any of the options documented in [G-3] twoway options, excluding by(). These
include options for titling the graph (see [G-3] title options) and for saving the graph to disk (see
[G-3] saving option).
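The cross-correlogram below was produced by commands along these lines (a sketch; the dataset name furnace is our assumption for the Box–Jenkins gas furnace data used in this example, with input the coded gas rate and output the percent concentration of CO2):

. use http://www.stata-press.com/data/r13/furnace
. xcorr input output, lags(40) xline(5)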
(figure omitted: cross-correlogram of input and output; cross-correlations from −1.00 to 1.00 plotted against lags −40 to 40, with a vertical line at lag 5)
We included a vertical line at lag 5, because there is a well-defined peak at this value. This peak
indicates that the output lags the input by five periods. Further, the fact that the correlations are
negative indicates that as input (coded gas rate) is increased, output (% CO2 ) decreases.
We may obtain the table of autocorrelations and the character-based plot of the cross-correlations
(analogous to the univariate time-series command corrgram) by specifying the table option.
. xcorr input output, table
 LAG       CORR
 ---------------
 -20     -0.1033
 -19     -0.1027
 -18     -0.0998
 -17     -0.0932
 -16     -0.0832
 -15     -0.0727
 -14     -0.0660
 -13     -0.0662
 -12     -0.0751
 -11     -0.0927
 -10     -0.1180
  -9     -0.1484
  -8     -0.1793
  -7     -0.2059
  -6     -0.2266
  -5     -0.2429
  -4     -0.2604
  -3     -0.2865
  -2     -0.3287
  -1     -0.3936
   0     -0.4845
   1     -0.5985
   2     -0.7251
   3     -0.8429
   4     -0.9246
   5     -0.9503
   6     -0.9146
   7     -0.8294
   8     -0.7166
   9     -0.5998
  10     -0.4952
  11     -0.4107
  12     -0.3479
  13     -0.3049
  14     -0.2779
  15     -0.2632
  16     -0.2548
  17     -0.2463
  18     -0.2332
  19     -0.2135
  20     -0.1869
 (character-based plot of the cross-correlations omitted)
Methods and formulas

The cross-covariance function of lag k for jointly stationary series x₁(t) and x₂(t) is

$$\mathrm{Cov}\left\{x_1(t),\,x_2(t+k)\right\}=R_{12}(k)$$

This function is not symmetric about lag zero; that is, $R_{12}(k)\neq R_{12}(-k)$. The cross-correlation
function is defined as

$$\rho_{ij}(k)=\mathrm{Corr}\left\{x_i(t),\,x_j(t+k)\right\}=\frac{R_{ij}(k)}{\sqrt{R_{ii}(0)\,R_{jj}(0)}}$$

where ρ₁₁ and ρ₂₂ are the autocorrelation functions for x₁ and x₂, respectively. The sequence ρ₁₂(k)
is the cross-correlation function and is drawn for lags k ∈ (−Q, −Q + 1, ..., −1, 0, 1, ..., Q − 1, Q).
If ρ₁₂(k) = 0 for all lags, x₁ and x₂ are not cross-correlated.
References
Box, G. E. P., G. M. Jenkins, and G. C. Reinsel. 2008. Time Series Analysis: Forecasting and Control. 4th ed.
Hoboken, NJ: Wiley.
Hamilton, J. D. 1994. Time Series Analysis. Princeton: Princeton University Press.
Newton, H. J. 1988. TIMESLAB: A Time Series Analysis Laboratory. Belmont, CA: Wadsworth.
Also see
[TS] tsset Declare data to be time-series data
[TS] corrgram Tabulate and graph autocorrelations
[TS] pergram Periodogram
Glossary
add factor. An add factor is a quantity added to an endogenous variable in a forecast model. Add
factors can be used to incorporate outside information into a model, and they can be used to
produce forecasts under alternative scenarios.
ARCH model. An autoregressive conditional heteroskedasticity (ARCH) model is a regression model
in which the conditional variance is modeled as an autoregressive (AR) process. The ARCH(m)
model is

    y_t = x_t β + ε_t

with the conditional variance of ε_t modeled as an AR process in the m most recent squared errors.
Cholesky ordering. Cholesky ordering is a method used to orthogonalize the error term in a VAR or
VECM to impose a recursive structure on the dynamic model, so that the resulting impulse–response
functions can be given a causal interpretation. The method is so named because it uses the Cholesky
decomposition of the error-covariance matrix.
Cochrane–Orcutt estimator. This estimation is a linear regression estimator that can be used when the
error term exhibits first-order autocorrelation. An initial estimate of the autocorrelation parameter
ρ is obtained from OLS residuals, and then OLS is performed on the transformed data
ỹ_t = y_t − ρy_{t−1} and x̃_t = x_t − ρx_{t−1}.
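In Stata, this estimator is available through the corc option of prais; a minimal sketch with hypothetical variables y and x in tsset data:

. prais y x, corc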
cointegrating vector. A cointegrating vector specifies a stationary linear combination of nonstationary
variables. Specifically, if each of the variables x₁, x₂, ..., x_k is integrated of order one and there
exists a set of parameters β₁, β₂, ..., β_k such that z_t = β₁x₁ + β₂x₂ + ··· + β_kx_k is a stationary
process, the variables x₁, x₂, ..., x_k are said to be cointegrated, and the vector β is known as a
cointegrating vector.
conditional variance. Although the conditional variance is simply the variance of a conditional
distribution, in time-series analysis the conditional variance is often modeled as an autoregressive
process, giving rise to ARCH models.
correlogram. A correlogram is a table or graph showing the sample autocorrelations or partial
autocorrelations of a time series.
covariance stationarity. A process is covariance stationary if the mean of the process is finite and
independent of t, the unconditional variance of the process is finite and independent of t, and the
covariance between periods t and t − s is finite and depends on t − s but not on t or s themselves.
Covariance-stationary processes are also known as weakly stationary processes.
cross-correlation function. The cross-correlation function expresses the correlation between one series
at time t and another series at time t − k as a function of the time t and lag k. If both series
are stationary, the function does not depend on t. The function is not symmetric about k = 0:
ρ₁₂(k) ≠ ρ₁₂(−k).
cyclical component. A cyclical component is a part of a time series that is a periodic function of
time. Deterministic functions of time are deterministic cyclical components, and random functions
of time are stochastic cyclical components. For example, fixed seasonal effects are deterministic
cyclical components and random seasonal effects are stochastic seasonal components.
Random coefficients on time inside of periodic functions form an especially useful class of stochastic
cyclical components; see [TS] ucm.
deterministic trend. A deterministic trend is a deterministic function of time that specifies the long-run
tendency of a time series.
difference operator. The difference operator Δ denotes the change in the value of a variable
from period t − 1 to period t. Formally, Δy_t = y_t − y_{t−1}, and Δ²y_t = Δ(y_t − y_{t−1}) =
(y_t − y_{t−1}) − (y_{t−1} − y_{t−2}) = y_t − 2y_{t−1} + y_{t−2}.
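In Stata, the difference operator is available as the D. time-series operator; a minimal sketch with a hypothetical tsset variable y:

. generate dy  = D.y      // first difference, y(t) - y(t-1)
. generate d2y = D2.y     // second difference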
drift. Drift is the constant term in a unit-root process. In

    y_t = δ + y_{t−1} + ε_t

δ is the drift when ε_t is a stationary, zero-mean process.
dynamic forecast. A dynamic forecast uses forecast values wherever lagged values of the endogenous
variables appear in the model, allowing one to forecast multiple periods into the future.
dynamic-multiplier function. A dynamic-multiplier function measures the effect of a shock to an
exogenous variable on an endogenous variable. The kth dynamic-multiplier function of variable i
on variable j measures the effect on variable j in period t + k in response to a one-unit shock to
variable i in period t, holding everything else constant.
GARCH model. A generalized autoregressive conditional heteroskedasticity (GARCH) model is a
regression model in which the conditional variance is modeled as an ARMA process; the equation
for y_t represents the conditional mean of the process, and the conditional variance follows the
ARMA specification. See [TS] arch or Hamilton (1994, chap. 21) for details on how the conditional
variance equation can be viewed as an ARMA process. GARCH models are often used because the
ARMA specification often allows the conditional variance to be modeled with fewer parameters
than are required by a pure ARCH model. Many extensions to the basic GARCH model exist; see
[TS] arch for those that are implemented in Stata. See also ARCH model.
generalized least-squares estimator. A generalized least-squares (GLS) estimator is used to estimate
the parameters of a regression function when the error term is heteroskedastic or autocorrelated.
In the linear case, GLS is sometimes described as OLS on transformed data because the GLS
estimator can be implemented by applying an appropriate transformation to the dataset and then
using OLS.
Granger causality. The variable x is said to Granger-cause variable y if, given the past values of y,
past values of x are useful for predicting y.
high-pass filter. Time-series filters are designed to pass or block stochastic cycles at specified
frequencies. High-pass filters, such as those implemented in tsfilter bw and tsfilter hp, pass
through stochastic cycles above the cutoff frequency and block all other stochastic cycles.
Holt–Winters smoothing. A set of methods for smoothing time-series data that assume that the value
of a time series at time t can be approximated as the sum of a mean term that drifts over time,
as well as a time trend whose strength also drifts over time. Variations of the basic method allow
for seasonal patterns in data, as well.
impulse–response function. An impulse–response function (IRF) measures the effect of a shock to an
endogenous variable on itself or another endogenous variable. The kth impulse–response function
of variable i on variable j measures the effect on variable j in period t + k in response to a
one-unit shock to variable i in period t, holding everything else constant.
independent and identically distributed. A series of observations is independently and identically
distributed (i.i.d.) if each observation is an independent realization from the same underlying
distribution. In some contexts, the definition is relaxed to mean only that the observations are
independent and have identical means and variances; see Davidson and MacKinnon (1993, 42).
integrated process. A nonstationary process is integrated of order d, written I(d), if the process must
be differenced d times to produce a stationary series. An I(1) process y_t is one in which Δy_t is
stationary.
Kalman filter. The Kalman filter is a recursive procedure for predicting the state vector in a state-space
model.
lag operator. The lag operator L denotes the value of a variable at time t − 1. Formally, Ly_t = y_{t−1},
and L²y_t = Ly_{t−1} = y_{t−2}.
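In Stata, the lag operator corresponds to the L. time-series operator; a minimal sketch with a hypothetical tsset variable y:

. generate ylag1 = L.y     // y(t-1)
. generate ylag2 = L2.y    // y(t-2)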
linear filter. A linear filter is a sequence of weights used to compute a weighted average of a time
series at each time period. More formally, a linear filter α(L) is

$$\alpha(L)=\alpha_0+\alpha_1 L+\alpha_2 L^2+\cdots=\sum_{\tau=0}^{\infty}\alpha_\tau L^\tau$$

where L is the lag operator. Applying the linear filter α(L) to the time series x_t yields a sequence
of weighted averages of x_t:

$$\alpha(L)x_t=\sum_{\tau=0}^{\infty}\alpha_\tau x_{t-\tau}$$
The acute trade-off between parsimony and flexibility has given rise to a plethora of models; see
[TS] mgarch.
Newey–West covariance matrix. The Newey–West covariance matrix is a member of the class of
heteroskedasticity- and autocorrelation-consistent (HAC) covariance matrix estimators used with
time-series data that produces covariance estimates that are robust to both arbitrary heteroskedasticity
and autocorrelation up to a prespecified lag.
one-step-ahead forecast. See static forecast.
orthogonalized impulse–response function. An orthogonalized impulse–response function (OIRF)
measures the effect of an orthogonalized shock to an endogenous variable on itself or another
endogenous variable. An orthogonalized shock is one that affects one variable at time t but no
other variables. See [TS] irf create for a discussion of the difference between IRFs and OIRFs.
partial autocorrelation function. The partial autocorrelation function (PACF) expresses the correlation
between periods t and t − k of a time series as a function of the time t and lag k, after controlling
for the effects of intervening lags. For a stationary time series, the PACF does not depend on t.
The PACF is not symmetric about k = 0: the partial autocorrelation between y_t and y_{t−k} is not
equal to the partial autocorrelation between y_t and y_{t+k}.
periodogram. A periodogram is a graph of the spectral density function of a time series as a function
of frequency. The pergram command first standardizes the amplitude of the density by the sample
variance of the time series, and then plots the logarithm of that standardized density. Peaks in the
periodogram represent cyclical behavior in the data.
phase function. The phase function of a linear filter specifies how the filter changes the relative
importance of the random components at different frequencies in the frequency domain.
portmanteau statistic. The portmanteau, or Q, statistic is used to test for white noise and is calculated
using the first m autocorrelations of the series, where m is chosen by the user. Under the null
hypothesis that the series is a white-noise process, the portmanteau statistic has a χ² distribution
with m degrees of freedom.
Prais–Winsten estimator. A Prais–Winsten estimator is a linear regression estimator that is used
when the error term exhibits first-order autocorrelation; see also Cochrane–Orcutt estimator. Here
the first observation in the dataset is transformed as ỹ₁ = √(1 − ρ²) y₁ and x̃₁ = √(1 − ρ²) x₁,
so that the first observation is not lost. The Prais–Winsten estimator is a generalized least-squares
estimator.
priming values. Priming values are the initial, preestimation values used to begin a recursive process.
random walk. A random walk is a time-series process in which the current period's realization is
equal to the previous period's realization plus a white-noise error term: y_t = y_{t−1} + ε_t. A random
walk with drift also contains a nonzero time-invariant constant: y_t = δ + y_{t−1} + ε_t. The constant
term δ is known as the drift parameter. An important property of random-walk processes is that
the best predictor of the value at time t + 1 is the value at time t plus the value of the drift
parameter.
recursive regression analysis. A recursive regression analysis involves performing a regression at
time t by using all available observations from some starting time t0 through time t, performing
another regression at time t + 1 by using all observations from time t0 through time t + 1, and
so on. Unlike a rolling regression analysis, the first period used for all regressions is held fixed.
rolling regression analysis. A rolling, or moving window, regression analysis involves performing
regressions for each period by using the most recent m periods' data, where m is known as the
window size. For example, with a window size of m = 20, at time t the regression is fit using
observations for times t − 19 through time t; at time t + 1 the regression is fit using the observations
for time t − 18 through t + 1; and so on.
seasonal difference operator. The period-s seasonal difference operator Δ_s denotes the difference
in the value of a variable at time t and time t − s. Formally, Δ_s y_t = y_t − y_{t−s}, and Δ²_s y_t =
Δ_s(y_t − y_{t−s}) = (y_t − y_{t−s}) − (y_{t−s} − y_{t−2s}) = y_t − 2y_{t−s} + y_{t−2s}.
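In Stata, the seasonal difference operator corresponds to the S. time-series operator; a minimal sketch with a hypothetical monthly (s = 12) tsset variable y:

. generate s12y = S12.y    // y(t) - y(t-12)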
serial correlation. Serial correlation refers to regression errors that are correlated over time. If a
regression model does not contain lagged dependent variables as regressors, the OLS estimates
are consistent in the presence of mild serial correlation, but the covariance matrix is incorrect.
When the model includes lagged dependent variables and the residuals are serially correlated, the
OLS estimates are biased and inconsistent. See, for example, Davidson and MacKinnon (1993,
chap. 10) for more information.
serial correlation tests. Because OLS estimates are at least inefficient and potentially biased in the
presence of serial correlation, econometricians have developed many tests to detect it. Popular ones
include the Durbin–Watson (1950, 1951, 1971) test, the Breusch–Pagan (1980) test, and Durbin's
(1970) alternative test. See [R] regress postestimation time series.
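After regress, these tests are available as postestimation commands; a minimal sketch with hypothetical variables y and x in tsset data:

. regress y x
. estat dwatson       // Durbin-Watson d statistic
. estat durbinalt     // Durbin's alternative test
. estat bgodfrey      // Breusch-Godfrey LM test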
smoothing. Smoothing a time series refers to the process of extracting an overall trend in the data.
The motivation behind smoothing is the belief that a time series exhibits a trend component as
well as an irregular component and that the analyst is interested only in the trend component.
Some smoothers also account for seasonal or other cyclical patterns.
spectral analysis. See frequency-domain analysis.
spectral density function. The spectral density function is the derivative of the spectral distribution
function. Intuitively, the spectral density function f(ω) indicates the amount of variance in a time
series that is attributable to sinusoidal components with frequency ω. See also spectral distribution
function. The spectral density function is sometimes called the spectrum.
spectral distribution function. The (normalized) spectral distribution function F(ω) of a process
describes the proportion of variance that can be explained by sinusoids with frequencies in the
range (0, ω), where 0 ≤ ω ≤ π. The spectral distribution and density functions used in
frequency-domain analysis are closely related to the autocorrelation function used in time-domain analysis;
see Chatfield (2004, chap. 6) and Wei (2006, chap. 12).
spectrum. See spectral density function.
state-space model. A state-space model describes the relationship between an observed time series
and an unobservable state vector that represents the state of the world. The measurement equation
expresses the observed series as a function of the state vector, and the transition equation describes
how the unobserved state vector evolves over time. By defining the parameters of the measurement
and transition equations appropriately, one can write a wide variety of time-series models in the
state-space form.
static forecast. A static forecast uses actual values wherever lagged values of the endogenous variables
appear in the model. As a result, static forecasts perform at least as well as dynamic forecasts,
but static forecasts cannot produce forecasts into the future if lags of the endogenous variables
appear in the model.
Because actual values will be missing beyond the last historical time period in the dataset, static
forecasts can only forecast one period into the future (assuming only first lags appear in the model);
for that reason, they are often called one-step-ahead forecasts.
steady-state equilibrium. The steady-state equilibrium is the predicted value of a variable in a dynamic
model, ignoring the effects of past shocks, or, equivalently, the value of a variable, assuming that
the effects of past shocks have fully died out and no longer affect the variable of interest.
Variants of unit-root tests suitable for panel data have also been developed; see [XT] xtunitroot.
VAR. A vector autoregressive (VAR) model is a multivariate regression technique in which each
dependent variable is regressed on lags of itself and on lags of all the other dependent variables
in the model. Occasionally, exogenous variables are also included in the model.
VECM. A vector error-correction model (VECM) is a type of VAR that is used with variables that
are cointegrated. Although first-differencing variables that are integrated of order one makes them
stationary, fitting a VAR to such first-differenced variables results in misspecification error if the
variables are cointegrated. See The multivariate VECM specification in [TS] vec intro for more
on this point.
white noise. A variable u_t represents a white-noise process if the mean of u_t is zero, the variance
of u_t is σ², and the covariance between u_t and u_s is zero for all s ≠ t.
Yule–Walker equations. The Yule–Walker equations are a set of difference equations that describe the
relationship among the autocovariances and autocorrelations of an autoregressive moving-average
(ARMA) process.
References
Breusch, T. S., and A. R. Pagan. 1980. The Lagrange multiplier test and its applications to model specification in econometrics. Review of Economic Studies 47: 239–253.
Chatfield, C. 2004. The Analysis of Time Series: An Introduction. 6th ed. Boca Raton, FL: Chapman & Hall/CRC.
Davidson, R., and J. G. MacKinnon. 1993. Estimation and Inference in Econometrics. New York: Oxford University Press.
Dickey, D. A., and W. A. Fuller. 1979. Distribution of the estimators for autoregressive time series with a unit root. Journal of the American Statistical Association 74: 427–431.
Durbin, J. 1970. Testing for serial correlation in least-squares regressions when some of the regressors are lagged dependent variables. Econometrica 38: 410–421.
Durbin, J., and G. S. Watson. 1950. Testing for serial correlation in least squares regression. I. Biometrika 37: 409–428.
Durbin, J., and G. S. Watson. 1951. Testing for serial correlation in least squares regression. II. Biometrika 38: 159–177.
Durbin, J., and G. S. Watson. 1971. Testing for serial correlation in least squares regression. III. Biometrika 58: 1–19.
Elliott, G. R., T. J. Rothenberg, and J. H. Stock. 1996. Efficient tests for an autoregressive unit root. Econometrica 64: 813–836.
Hamilton, J. D. 1994. Time Series Analysis. Princeton: Princeton University Press.
Phillips, P. C. B., and P. Perron. 1988. Testing for a unit root in time series regression. Biometrika 75: 335–346.
Wei, W. W. S. 2006. Time Series Analysis: Univariate and Multivariate Methods. 2nd ed. Boston: Pearson.
A
Abraham, B., [TS] tssmooth, [TS] tssmooth
dexponential, [TS] tssmooth exponential,
[TS] tssmooth hwinters, [TS] tssmooth
shwinters
ac command, [TS] corrgram
acplot, estat subcommand, [TS] estat acplot
add factor, [TS] Glossary
add, irf subcommand, [TS] irf add
adjust, forecast subcommand, [TS] forecast adjust
Adkins, L. C., [TS] arch
Ahn, S. K., [TS] vec intro
Aielli, G. P., [TS] mgarch, [TS] mgarch dcc
Akaike, H., [TS] varsoc
alternative scenarios, [TS] forecast, [TS] forecast
adjust, [TS] forecast clear, [TS] forecast
coefvector, [TS] forecast create, [TS] forecast
describe, [TS] forecast drop, [TS] forecast
estimates, [TS] forecast exogenous,
[TS] forecast identity, [TS] forecast list,
[TS] forecast query, [TS] forecast solve
Amemiya, T., [TS] varsoc
Amisano, G., [TS] irf create, [TS] var intro, [TS] var
svar, [TS] vargranger, [TS] varwle
An, S., [TS] arfima
Anderson, B. D. O., [TS] sspace
Anderson, T. W., [TS] vec, [TS] vecrank
Ansley, C. F., [TS] arima
A-PARCH, see asymmetric power autoregressive
conditional heteroskedasticity
AR, see autoregressive
ARCH, see autoregressive conditional heteroskedasticity
arch command, [TS] arch, [TS] arch postestimation
ARFIMA, see autoregressive fractionally integrated
moving-average model
arfima command, [TS] arfima, [TS] arfima
postestimation
ARIMA, see autoregressive integrated moving-average
model
arima command, [TS] arima, [TS] arima
postestimation
ARMA, see autoregressive moving average
ARMAX, see autoregressive moving average with
exogenous inputs
aroots, estat subcommand, [TS] estat aroots
asymmetric power autoregressive conditional
heteroskedasticity, [TS] arch
B
Baillie, R. T., [TS] arfima
band-pass filters, [TS] tsfilter bk, [TS] tsfilter cf,
[TS] Glossary
Bartlett, M. S., [TS] wntestb
Bartlett's
bands, [TS] corrgram
periodogram test, [TS] wntestb
Baum, C. F., [TS] arch, [TS] arima, [TS] dfgls,
[TS] rolling, [TS] time series, [TS] tsfilter,
[TS] tsset, [TS] var, [TS] wntestq
Bauwens, L., [TS] mgarch
Baxter–King filter, [TS] tsfilter, [TS] tsfilter bk
Baxter, M., [TS] tsfilter, [TS] tsfilter bk, [TS] tsfilter
cf
Becketti, S., [TS] arch, [TS] arima, [TS] corrgram,
[TS] dfuller, [TS] irf, [TS] prais, [TS] time
series, [TS] tssmooth, [TS] var intro, [TS] var
svar, [TS] vec intro, [TS] vec
Bera, A. K., [TS] arch, [TS] varnorm, [TS] vecnorm
Beran, J., [TS] arfima, [TS] arfima postestimation
Berkes, I., [TS] mgarch
Berndt, E. K., [TS] arch, [TS] arima
Bianchi, G., [TS] tsfilter, [TS] tsfilter bw
bk, tsfilter subcommand, [TS] tsfilter bk
Black, F., [TS] arch
block exogeneity, [TS] vargranger
Bloomfield, P., [TS] arfima
C
Caines, P. E., [TS] sspace
calendars, [TS] intro
Cameron, A. C., [TS] forecast estimates
Casals, J., [TS] sspace
ccc, mgarch subcommand, [TS] mgarch ccc
cf, tsfilter subcommand, [TS] tsfilter cf
cgraph, irf subcommand, [TS] irf cgraph
Chang, Y., [TS] sspace
Chatfield, C., [TS] arima, [TS] corrgram,
[TS] pergram, [TS] tssmooth, [TS] tssmooth
dexponential, [TS] tssmooth exponential,
[TS] tssmooth hwinters, [TS] tssmooth ma,
[TS] tssmooth shwinters, [TS] Glossary
Cheung, Y.-W., [TS] dfgls
Cholesky ordering, [TS] Glossary
Chou, R. Y., [TS] arch
Christiano–Fitzgerald filter, [TS] tsfilter, [TS] tsfilter cf
Christiano, L. J., [TS] irf create, [TS] tsfilter,
[TS] tsfilter cf, [TS] var svar
Chu-Chun-Lin, S., [TS] sspace
clear, forecast subcommand, [TS] forecast clear
clock time, [TS] tsset
cluster estimator of variance, Prais–Winsten and
Cochrane–Orcutt regression, [TS] prais
Cochrane, D., [TS] prais
Cochrane–Orcutt regression, [TS] prais, [TS] Glossary
coefvector, forecast subcommand, [TS] forecast
coefvector
cointegration, [TS] fcast compute, [TS] fcast graph,
[TS] vec intro, [TS] vec, [TS] veclmar,
[TS] vecnorm, [TS] vecrank, [TS] vecstable,
[TS] Glossary
compute, fcast subcommand, [TS] fcast compute
Comte, F., [TS] mgarch
conditional variance, [TS] arch, [TS] Glossary
D
data manipulation, [TS] tsappend, [TS] tsfill,
[TS] tsreport, [TS] tsrevar, [TS] tsset
David, J. S., [TS] arima
Davidson, R., [TS] arch, [TS] arima, [TS] prais,
[TS] sspace, [TS] varlmar, [TS] Glossary
Davis, G., [TS] arima
Davis, R. A., [TS] corrgram, [TS] sspace
dcc, mgarch subcommand, [TS] mgarch dcc
De Jong, P., [TS] dfactor, [TS] sspace, [TS] sspace
postestimation, [TS] ucm
DeGroot, M. H., [TS] arima
Deistler, M., [TS] sspace
del Río, A., [TS] tsfilter hp
describe,
forecast subcommand, [TS] forecast describe
irf subcommand, [TS] irf describe
deterministic trend, [TS] Glossary
dexponential, tssmooth subcommand,
[TS] tssmooth dexponential
dfactor command, [TS] dfactor, [TS] dfactor
postestimation
dfgls command, [TS] dfgls
dfuller command, [TS] dfuller
diagonal vech model, [TS] mgarch, [TS] mgarch dvech
E
EGARCH, see exponential generalized autoregressive
conditional heteroskedasticity
Eichenbaum, M., [TS] irf create, [TS] var svar
eigenvalue stability condition, [TS] estat aroots,
[TS] varstable, [TS] vecstable
Elliott, G. R., [TS] dfgls, [TS] Glossary
Enders, W., [TS] arch, [TS] arima, [TS] arima
postestimation, [TS] corrgram
endogenous variable, [TS] Glossary
F
factor model, [TS] dfactor
Fair, R. C., [TS] forecast solve
fcast compute command, [TS] fcast compute
fcast graph command, [TS] fcast graph
feasible generalized least squares, [TS] dfgls,
[TS] prais, [TS] var
Feller, W., [TS] wntestb
FEVD, see forecast-error variance decomposition
FGLS, see feasible generalized least squares
filters, [TS] tsfilter, also see smoothers
Baxter–King, [TS] tsfilter bk
Butterworth, [TS] tsfilter bw
Christiano–Fitzgerald, [TS] tsfilter cf
Hodrick–Prescott, [TS] tsfilter hp
Fiorentini, G., [TS] mgarch
Fitzgerald, T. J., [TS] tsfilter, [TS] tsfilter cf
Flannery, B. P., [TS] arch, [TS] arima
forecast, [TS] forecast
adjust command, [TS] forecast adjust
clear command, [TS] forecast clear
coefvector command, [TS] forecast coefvector
create command, [TS] forecast create
describe command, [TS] forecast describe
drop command, [TS] forecast drop
estimates command, [TS] forecast estimates
exogenous command, [TS] forecast exogenous
identity command, [TS] forecast identity
list command, [TS] forecast list
query command, [TS] forecast query
solve command, [TS] forecast solve
G
gain, [TS] tsfilter, [TS] tsfilter bk, [TS] tsfilter bw,
[TS] tsfilter cf, [TS] tsfilter hp, [TS] Glossary
Gani, J., [TS] wntestb
GARCH, see generalized autoregressive conditional
heteroskedasticity
Gardiner, J. S., [TS] tssmooth, [TS] tssmooth
dexponential, [TS] tssmooth exponential,
[TS] tssmooth hwinters, [TS] tssmooth
shwinters
Gardner, E. S., Jr., [TS] tssmooth dexponential,
[TS] tssmooth hwinters
generalized
autoregressive conditional heteroskedasticity,
[TS] arch, [TS] Glossary
least-squares estimator, [TS] prais, [TS] Glossary
Geweke, J., [TS] dfactor
Giannini, C., [TS] irf create, [TS] var intro, [TS] var
svar, [TS] vargranger, [TS] varwle
Giles, D. E. A., [TS] prais
GJR, see threshold autoregressive conditional
heteroskedasticity
Glosten, L. R., [TS] arch
Golub, G. H., [TS] arfima, [TS] arfima postestimation
Gómez, V., [TS] tsfilter, [TS] tsfilter hp
Gonzalo, J., [TS] vec intro, [TS] vecrank
Gourieroux, C. S., [TS] arima, [TS] mgarch ccc,
[TS] mgarch dcc, [TS] mgarch vcc
Gradshteyn, I. S., [TS] arfima
Granger, C. W. J., [TS] arch, [TS] arfima,
[TS] vargranger, [TS] vec intro, [TS] vec,
[TS] vecrank
Granger causality, [TS] vargranger, [TS] Glossary
graph,
fcast subcommand, [TS] fcast graph
irf subcommand, [TS] irf graph
graphs,
autocorrelations, [TS] corrgram
correlogram, [TS] corrgram
cross-correlogram, [TS] xcorr
cumulative spectral density, [TS] cumsp
forecasts, [TS] fcast graph
impulse–response functions, [TS] irf, [TS] irf
cgraph, [TS] irf graph, [TS] irf ograph
parametric autocorrelation, [TS] estat acplot
parametric autocovariance, [TS] estat acplot
partial correlogram, [TS] corrgram
periodogram, [TS] pergram
white-noise test, [TS] wntestb
Greene, W. H., [TS] arch, [TS] arima, [TS] corrgram,
[TS] var
Griffiths, W. E., [TS] arch, [TS] prais
H
Hall, B. H., [TS] arch, [TS] arima
Hall, R. E., [TS] arch, [TS] arima
Hamilton, J. D., [TS] arch, [TS] arfima, [TS] arima,
[TS] corrgram, [TS] dfuller, [TS] estat
aroots, [TS] fcast compute, [TS] forecast
solve, [TS] irf, [TS] irf create, [TS] pergram,
[TS] pperron, [TS] psdensity, [TS] sspace,
[TS] sspace postestimation, [TS] time series,
[TS] tsfilter, [TS] ucm, [TS] var intro, [TS] var,
[TS] var svar, [TS] vargranger, [TS] varnorm,
[TS] varsoc, [TS] varstable, [TS] varwle,
[TS] vec intro, [TS] vec, [TS] vecnorm,
[TS] vecrank, [TS] vecstable, [TS] xcorr,
[TS] Glossary
Hannan, E. J., [TS] sspace
Hardin, J. W., [TS] newey, [TS] prais
I
identity, forecast subcommand, [TS] forecast
identity
impulse–response functions, [TS] irf, [TS] irf add,
[TS] irf cgraph, [TS] irf create, [TS] irf ctable,
[TS] irf describe, [TS] irf drop, [TS] irf graph,
[TS] irf ograph, [TS] irf rename, [TS] irf set,
[TS] irf table, [TS] var intro, [TS] varbasic,
[TS] vec intro, [TS] Glossary
independent and identically distributed, [TS] Glossary
information criterion, [TS] varsoc
innovation accounting, [TS] irf
integrated autoregressive moving-average model,
[TS] estat acplot, [TS] psdensity
integrated process, [TS] Glossary
IRF, see impulse–response functions
J
Jaeger, A., [TS] tsfilter, [TS] tsfilter hp
Jagannathan, R., [TS] arch
Jarque, C. M., [TS] varnorm, [TS] vecnorm
Jarque–Bera statistic, [TS] varnorm, [TS] vecnorm
Jeantheau, T., [TS] mgarch
Jenkins, G. M., [TS] arfima, [TS] arima,
[TS] corrgram, [TS] cumsp, [TS] dfuller,
[TS] estat acplot, [TS] pergram, [TS] pperron,
[TS] psdensity, [TS] xcorr
Jerez, M., [TS] sspace
Johansen, S., [TS] irf create, [TS] varlmar, [TS] vec
intro, [TS] vec, [TS] veclmar, [TS] vecnorm,
[TS] vecrank, [TS] vecstable
Johnson, L. A., [TS] tssmooth, [TS] tssmooth
dexponential, [TS] tssmooth exponential,
[TS] tssmooth hwinters, [TS] tssmooth
shwinters
Joyeux, R., [TS] arfima
Judge, G. G., [TS] arch, [TS] prais
Judson, R. A., [TS] forecast
K
Kalman
filter, [TS] arima, [TS] dfactor, [TS] dfactor
postestimation, [TS] sspace, [TS] sspace
postestimation, [TS] ucm, [TS] ucm
postestimation, [TS] Glossary
forecast, [TS] dfactor postestimation, [TS] sspace
postestimation, [TS] ucm postestimation
smoothing, [TS] dfactor postestimation,
[TS] sspace postestimation, [TS] ucm
postestimation
Kalman, R. E., [TS] arima
Kilian, L., [TS] forecast solve
Kim, I.-M., [TS] vec intro, [TS] vec, [TS] vecrank
King, M. L., [TS] prais
King, R. G., [TS] tsfilter, [TS] tsfilter bk, [TS] tsfilter
cf, [TS] tsfilter hp, [TS] vecrank
Klein, L. R., [TS] forecast, [TS] forecast adjust,
[TS] forecast describe, [TS] forecast estimates,
[TS] forecast list, [TS] forecast solve
Kmenta, J., [TS] arch, [TS] prais, [TS] rolling
L
lag operator, [TS] Glossary
lag-exclusion statistics, [TS] varwle
lag-order selection statistics, [TS] var intro, [TS] var,
[TS] var svar, [TS] varsoc, [TS] vec intro
Lagrange multiplier test, [TS] varlmar, [TS] veclmar
Lai, K. S., [TS] dfgls
Laurent, S., [TS] mgarch
leap seconds, [TS] tsset
Ledolter, J., [TS] tssmooth, [TS] tssmooth
dexponential, [TS] tssmooth exponential,
[TS] tssmooth hwinters, [TS] tssmooth
shwinters
Lee, T.-C., [TS] arch, [TS] prais
Leser, C. E. V., [TS] tsfilter, [TS] tsfilter hp
Lieberman, O., [TS] mgarch
Lilien, D. M., [TS] arch
Lim, G. C., [TS] arch
linear
filter, [TS] tsfilter, [TS] tsfilter cf, [TS] tssmooth
ma, [TS] Glossary
regression, [TS] newey, [TS] prais
Ling, S., [TS] mgarch
list, forecast subcommand, [TS] forecast list
Ljung, G. M., [TS] wntestq
long-memory process, [TS] arfima, [TS] Glossary
Lu, J. Y., [TS] prais
Lund, R., [TS] arima
Lütkepohl, H., [TS] arch, [TS] dfactor, [TS] fcast
compute, [TS] irf, [TS] irf create, [TS] mgarch
dvech, [TS] prais, [TS] sspace, [TS] sspace
postestimation, [TS] time series, [TS] var
intro, [TS] var, [TS] var svar, [TS] varbasic,
[TS] vargranger, [TS] varnorm, [TS] varsoc,
[TS] varstable, [TS] varwle, [TS] vec intro,
[TS] vecnorm, [TS] vecrank, [TS] vecstable
M
MA, see moving average model
ma, tssmooth subcommand, [TS] tssmooth ma
MacKinnon, J. G., [TS] arch, [TS] arima, [TS] dfuller,
[TS] pperron, [TS] prais, [TS] sspace,
[TS] varlmar, [TS] Glossary
Maddala, G. S., [TS] vec intro, [TS] vec, [TS] vecrank
Magnus, J. R., [TS] var svar
O
O'Connell, R. T., [TS] tssmooth, [TS] tssmooth
dexponential, [TS] tssmooth exponential,
[TS] tssmooth hwinters, [TS] tssmooth
shwinters
ograph, irf subcommand, [TS] irf ograph
Olkin, I., [TS] wntestb
one-step-ahead forecast, see static forecast
Ooms, M., [TS] arfima
Orcutt, G. H., [TS] prais
orthogonalized impulse–response function, [TS] irf,
[TS] var intro, [TS] vec intro, [TS] vec,
[TS] Glossary
Osterwald-Lenum, M. G., [TS] vecrank
Owen, A. L., [TS] forecast
Q
Q statistic, see portmanteau statistic
query, forecast subcommand, [TS] forecast query
R
random walk, [TS] Glossary
Ravn, M. O., [TS] tsfilter, [TS] tsfilter hp
Rebelo, S. T., [TS] tsfilter, [TS] tsfilter hp
recursive estimation, [TS] rolling
recursive regression analysis, [TS] Glossary
Reinsel, G. C., [TS] arfima, [TS] arima,
[TS] corrgram, [TS] cumsp, [TS] dfuller,
[TS] estat acplot, [TS] pergram, [TS] pperron,
[TS] psdensity, [TS] vec intro, [TS] xcorr
rename, irf subcommand, [TS] irf rename
Robins, R. P., [TS] arch
robust, Huber/White/sandwich estimator of variance
ARCH, [TS] arch
ARFIMA, [TS] arfima
ARIMA and ARMAX, [TS] arima
dynamic-factor model, [TS] dfactor
GARCH, [TS] arch
Newey–West regression, [TS] newey
Prais–Winsten and Cochrane–Orcutt regression,
[TS] prais
state-space model, [TS] sspace
unobserved-components model, [TS] ucm
rolling command, [TS] rolling
rolling regression, [TS] rolling, [TS] Glossary
Rombouts, J. V. K., [TS] mgarch
Rõõm, T., [TS] arima
Rothenberg, T. J., [TS] dfgls, [TS] sspace, [TS] var
svar, [TS] vec, [TS] Glossary
Runkle, D. E., [TS] arch
Ryzhik, I. M., [TS] arfima
S
SAARCH, see simple asymmetric autoregressive
conditional heteroskedasticity
Saikkonen, P., [TS] vec intro, [TS] vecrank
Salvador, M., [TS] vecrank
Samaniego, F. J., [TS] varwle
Sánchez, G., [TS] arima
sandwich/Huber/White estimator of variance, see robust,
Huber/White/sandwich estimator of variance
Sargan, J. D., [TS] prais
Sargent, T. J., [TS] dfactor
scenarios, [TS] forecast, [TS] forecast adjust,
[TS] forecast clear, [TS] forecast coefvector,
[TS] forecast create, [TS] forecast describe,
[TS] forecast drop, [TS] forecast estimates,
[TS] forecast exogenous, [TS] forecast
identity, [TS] forecast list, [TS] forecast query,
[TS] forecast solve
Schmidt, T. J., [TS] tsfilter
Schneider, W., [TS] sspace
Schwert, G. W., [TS] dfgls
seasonal
ARIMA, [TS] arima
difference operator, [TS] Glossary
smoothing, [TS] tssmooth, [TS] tssmooth shwinters
seemingly unrelated regression, [TS] dfactor
selection-order statistics, [TS] varsoc
Sentana, E., [TS] mgarch
Serfling, R. J., [TS] irf create
serial correlation, see autocorrelation
test, [TS] Glossary
set, irf subcommand, [TS] irf set
Shumway, R. H., [TS] arima
shwinters, tssmooth subcommand, [TS] tssmooth
shwinters
Silvennoinen, A., [TS] mgarch, [TS] mgarch ccc
simple asymmetric autoregressive conditional
heteroskedasticity, [TS] arch
Sims, C. A., [TS] dfactor, [TS] irf create, [TS] var
svar, [TS] vec intro, [TS] vec, [TS] vecrank
simulation, [TS] forecast, [TS] forecast adjust,
[TS] forecast clear, [TS] forecast coefvector,
[TS] forecast create, [TS] forecast describe,
[TS] forecast drop, [TS] forecast estimates,
[TS] forecast exogenous, [TS] forecast
identity, [TS] forecast list, [TS] forecast query,
[TS] forecast solve
skewness, [TS] varnorm
smoothers, [TS] tssmooth, [TS] Glossary
double exponential, [TS] tssmooth dexponential
exponential, [TS] tssmooth exponential
Holt–Winters,
nonseasonal, [TS] tssmooth hwinters
seasonal, [TS] tssmooth shwinters
moving average, [TS] tssmooth ma
nonlinear, [TS] tssmooth nl
solve, forecast subcommand, [TS] forecast solve
Sorrentino, R., [TS] tsfilter, [TS] tsfilter bw
Sotoca, S., [TS] sspace
Sowell, F., [TS] arfima
spectral
analysis, [TS] Glossary
density, [TS] psdensity, [TS] Glossary
distribution, [TS] cumsp, [TS] pergram,
[TS] psdensity, [TS] Glossary
spectrum, [TS] psdensity, [TS] Glossary
Sperling, R. I., [TS] arch, [TS] arima, [TS] dfgls,
[TS] wntestq
sspace command, [TS] sspace, [TS] sspace
postestimation
stability, [TS] var intro, [TS] var, [TS] var svar,
[TS] vecstable
after ARIMA, [TS] estat aroots
after VAR or SVAR, [TS] varstable
after VEC, [TS] vec intro, [TS] vec
standard errors, robust,
see robust, Huber/White/sandwich estimator of
variance
T
table, irf subcommand, [TS] irf table
tables, [TS] irf ctable, [TS] irf table
TARCH, see threshold autoregressive conditional
heteroskedasticity
Teräsvirta, T., [TS] mgarch, [TS] mgarch ccc
test,
Dickey–Fuller, see Dickey–Fuller test
Granger causality, see Granger causality
Lagrange multiplier, see Lagrange multiplier test
normality, see normality test
Wald, see Wald test
Teukolsky, S. A., [TS] arch, [TS] arima
Theil, H., [TS] prais
threshold autoregressive conditional heteroskedasticity,
[TS] arch
time-domain analysis, [TS] arch, [TS] arfima,
[TS] arima, [TS] Glossary
time-series
filter, [TS] psdensity, [TS] ucm
operators, [TS] tsset
time-varying variance, [TS] arch
trend, [TS] Glossary
Trimbur, T. M., [TS] psdensity, [TS] tsfilter,
[TS] tsfilter hp, [TS] ucm
Trivedi, P. K., [TS] forecast estimates
tsappend command, [TS] tsappend
Tsay, R. S., [TS] varsoc, [TS] vec intro
Tse, Y. K., [TS] mgarch, [TS] mgarch vcc
tsfill command, [TS] tsfill
tsfilter, [TS] tsfilter
bk command, [TS] tsfilter bk
bw command, [TS] tsfilter bw
cf command, [TS] tsfilter cf
hp command, [TS] tsfilter hp
tsline command, [TS] tsline
tsreport command, [TS] tsreport
tsrevar command, [TS] tsrevar
tsrline command, [TS] tsline
tsset command, [TS] tsset
tssmooth, [TS] tssmooth
dexponential command, [TS] tssmooth
dexponential
exponential command, [TS] tssmooth exponential
hwinters command, [TS] tssmooth hwinters
ma command, [TS] tssmooth ma
nl command, [TS] tssmooth nl
shwinters command, [TS] tssmooth shwinters
Tsui, A. K. C., [TS] mgarch, [TS] mgarch vcc
U
UCM, see unobserved-components model
ucm command, [TS] ucm, [TS] ucm postestimation
Uhlig, H., [TS] tsfilter, [TS] tsfilter hp
unit-root
models, [TS] vec intro, [TS] vec
process, [TS] Glossary
test, [TS] dfgls, [TS] dfuller, [TS] pperron,
[TS] Glossary
univariate time series, [TS] arch, [TS] arfima,
[TS] arima, [TS] newey, [TS] prais, [TS] ucm
unobserved-components model, [TS] psdensity
model, [TS] ucm
postestimation, [TS] ucm postestimation
V
Van Loan, C. F., [TS] arfima, [TS] arfima
postestimation
VAR, see vector autoregressive
var command, [TS] var, [TS] var postestimation
varbasic command, [TS] varbasic, [TS] varbasic
postestimation
vargranger command, [TS] vargranger
W
Wald, A., [TS] varwle
Wald test, [TS] vargranger, [TS] varwle
Wang, Q., [TS] arima, [TS] newey
Watson, G. S., [TS] prais, [TS] Glossary
Watson, M. W., [TS] arch, [TS] dfactor, [TS] dfgls,
[TS] irf create, [TS] rolling, [TS] sspace,
[TS] time series, [TS] var intro, [TS] var,
[TS] var svar, [TS] vec intro, [TS] vec,
[TS] vecrank
Wei, W. W. S., [TS] psdensity, [TS] tsfilter, [TS] ucm,
[TS] Glossary
weighted moving average, [TS] tssmooth,
[TS] tssmooth ma
West, K. D., [TS] newey, [TS] pperron
White, H. L., Jr., [TS] newey, [TS] prais
white noise, [TS] wntestb, [TS] wntestq, [TS] Glossary
X
xcorr command, [TS] xcorr
Y
Yar, M., [TS] tssmooth, [TS] tssmooth dexponential,
[TS] tssmooth exponential, [TS] tssmooth
hwinters, [TS] tssmooth shwinters
Yule–Walker equations, [TS] corrgram, [TS] Glossary
Z
Zakoian, J. M., [TS] arch
Zellner, A., [TS] prais