
Modeling Multiple Time Series with Applications

G. C. Tiao; G. E. P. Box

Journal of the American Statistical Association, Vol. 76, No. 376 (Dec., 1981), pp. 802-816.

Stable URL: http://links.jstor.org/sici?sici=0162-1459%28198112%2976%3A376%3C802%3AMMTSWA%3E2.0.CO%3B2-E

Modeling Multiple Time Series With Applications

G. C. TIAO and G. E. P. BOX*

* G. C. Tiao is Professor of Statistics and Business, and G. E. P. Box is Vilas Research Professor, Department of Statistics, University of Wisconsin, Madison, WI 53706. The authors are grateful to W. R. Bell, I. Chang, M. R. Grupe, G. B. Hudak, and R. S. Tsay for computing assistance. This research was partially supported by the U.S. Bureau of the Census under JSA 80-10, the Army Research Office, Durham, NC, under Grant No. DAAG29-78-G-0166, and the Alcoa Foundation.

An approach to the modeling and analysis of multiple time series is proposed. Properties of a class of vector autoregressive moving average models are discussed. Modeling procedures consisting of tentative specification, estimation, and diagnostic checking are outlined and illustrated by three real examples.

KEY WORDS: Multiple time series; Vector autoregressive moving average models; Cross-correlations; Partial autoregression; Intervention analysis; Transfer function.

1. INTRODUCTION

Business, economic, engineering, and environmental data are often collected in roughly equally spaced time intervals, for example, hour, week, month, or quarter. In many problems, such time series data may be available on several related variables of interest. Two of the reasons for analyzing and modeling such series jointly are

1. To understand the dynamic relationships among them. They may be contemporaneously related, one series may lead the others, or there may be feedback relationships.
2. To improve accuracy of forecasts. When there is information on one series contained in the historical data of another, better forecasts can result when the series are modeled jointly.

Let

{Z_1t}, . . . , {Z_kt},  t = 0, ±1, ±2, . . .  (1.1)

be k series taken in equally spaced time intervals. Writing

Z_t = (Z_1t, . . . , Z_kt)',  (1.2)

we shall refer to the k series as a k-dimensional vector of multiple time series. Models that are of possible use in representing such multiple time series, considerations of their properties, and methods for relating them to actual data have been extensively discussed in the literature. See in particular Quenouille (1957), Whittle (1963), Hannan (1970), Zellner and Palm (1974), Brillinger (1975), Dunsmuir and Hannan (1976), Box and Haugh (1977), Granger and Newbold (1977), Parzen (1977), Wallis (1977), Chan and Wallis (1978), Deistler, Dunsmuir, and Hannan (1978), Hallin (1978), Jenkins (1979), Hsiao (1979), Akaike (1980), Hannan (1980), Hannan, Dunsmuir, and Deistler (1980), and Quinn (1980). There are, however, considerable divergences of view. The object of this article is to describe an approach to the modeling and analysis that we have developed over a considerable period of time and that we are finding effective. Our main emphasis will be on motivating, describing, and illustrating the various methods used in an iterative model building process. Much, if not all, of the underlying theory can be found in the references given and, therefore, will not be repeated. Section 2 presents a short review of the widely used univariate (k = 1) time series and transfer function models as developed in Box and Jenkins (1970). Section 3 discusses a class of vector autoregressive moving average models. Model building procedures are discussed in Section 4 and applied to two actual examples in Section 5. A comparison with some alternative approaches and some concluding remarks pertaining to the analysis of fitting results are given in Section 6.

2. UNIVARIATE TIME SERIES AND TRANSFER FUNCTION MODELS

When k = 1 we shall write Z_t = Z_1t in (1.2). An important class of models for discrete univariate series originally proposed by Yule (1927) and Slutsky (1937) and developed by such authors as Bartlett, Kendall, Walker, Wold, and Yaglom are stochastic difference equations of the form

φ_p(B) z_t = θ_q(B) a_t,  (2.1)

where φ_p(B) = 1 − φ_1B − ⋯ − φ_pB^p and θ_q(B) = 1 − θ_1B − ⋯ − θ_qB^q. In (2.1) the a_t's are independently, identically, and normally distributed random shocks (white noise) with zero mean and variance σ²; B is the back-shift operator such that BZ_t = Z_{t−1}; and z_t = Z_t − η is the deviation of the observation Z_t from some convenient location η.

Relationships between series {z_1t}, . . . , {z_kt} can sometimes be represented by linear transfer function models of the form

z_ht = Σ_{i∈k(h)} [ω_{s_hi}(B) B^{b_hi} / δ_{r_hi}(B)] z_it + [θ_{q_h}(B) / φ_{p_h}(B)] a_ht,  h = 1, 2, . . . , k,  (2.2)

where z_{0t} = 0, k(h) is the set (1, . . . , h − 1); ω_{s_hi}(B), δ_{r_hi}(B), φ_{p_h}(B), and θ_{q_h}(B) are polynomials in B; the b_hi's are nonnegative integers; and {a_1t}, . . . , {a_kt} are k independent Gaussian white-noise processes with zero means and variances σ_1², . . . , σ_k². In particular, intervention models of this form with one or more of the z_h's indicator variables have proved useful (Box and Tiao 1975; Abraham 1980).
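The difference equation form of (2.1) lends itself to direct simulation. The following sketch (Python, not part of the original paper) generates a univariate ARMA series recursively; the coefficient values used at the end are illustrative assumptions only.

```python
import numpy as np

def simulate_arma(phi, theta, n, sigma=1.0, burn=200, seed=0):
    """Generate z_t from (2.1): phi(B) z_t = theta(B) a_t, i.e.
    z_t = phi_1 z_{t-1} + ... + phi_p z_{t-p}
          + a_t - theta_1 a_{t-1} - ... - theta_q a_{t-q}."""
    rng = np.random.default_rng(seed)
    p, q = len(phi), len(theta)
    a = rng.normal(0.0, sigma, n + burn)          # Gaussian random shocks
    z = np.zeros(n + burn)
    for t in range(max(p, q), n + burn):
        z[t] = (sum(phi[i] * z[t - 1 - i] for i in range(p))
                + a[t]
                - sum(theta[j] * a[t - 1 - j] for j in range(q)))
    return z[burn:]                               # discard the burn-in segment

# Illustrative ARMA(1, 1): phi_1 = 0.8, theta_1 = 0.4 (hypothetical values)
z = simulate_arma(phi=[0.8], theta=[0.4], n=300)
```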
Transfer function models of the form (2.2), however, assume that the series, when suitably arranged, possess a triangular relationship, implying for example that z_1 depends only on its own past; z_2 depends on its own past and on the present and past of z_1; z_3 on its own past and on the present and past of z_2 and z_1; and so on. On the other hand, if z_1 depends on the past of z_2, and also z_2 depends on the past of z_1, then we must have a model that allows for this feedback.

3. MULTIPLE STOCHASTIC DIFFERENCE EQUATION MODELS

3.1 The Vector ARMA Model

A useful class of models obtained by direct generalization of the Yule-Slutsky ARMA models that allow for feedback relationships among the k series is obtained from (2.2) by letting k(h) be the set (1, . . . , k) excluding h. These models can be alternatively expressed as the vector autoregressive moving average ARMA models (Quenouille 1957),

Φ_p(B) z_t = Θ_q(B) a_t,  (3.1)

where

Φ_p(B) = I − Φ_1B − ⋯ − Φ_pB^p  and  Θ_q(B) = I − Θ_1B − ⋯ − Θ_qB^q

are matrix polynomials in B, the Φ's and Θ's are k × k matrices, z_t = Z_t − η is the vector of deviations from some origin η that is the mean if the series is stationary, and {a_t} with a_t = (a_1t, . . . , a_kt)' is a sequence of random shock vectors identically, independently, and normally distributed with zero mean and covariance matrix Σ. We shall suppose that the zeros of the determinantal polynomials |Φ_p(B)| and |Θ_q(B)| are on or outside the unit circle. The series z_t will be stationary when the zeros of |Φ_p(B)| are all outside the unit circle, and will be invertible when those of |Θ_q(B)| are all outside the unit circle. Properties of such models have been discussed by, for example, Hannan (1970), Anderson (1971), and Granger and Newbold (1977).

Some Simple Examples. To illustrate the behavior of observations from these models, Figure 1 shows two series with 250 observations generated from the bivariate (k = 2) first order moving average [MA(1)] model, z_t = (I − ΘB)a_t, with the parameter values in (3.2).

Figure 1. Data Generated From a Bivariate MA(1) Model With Parameter Values in (3.2); the two panels show the first and second series.

Figure 2 shows two series with 150 observations generated from the bivariate first order autoregressive [AR(1)] model, (I − ΦB)z_t = a_t, with the parameter values in (3.3).

Figure 2. Data Generated From a Bivariate AR(1) Model With Parameter Values in (3.3); the two panels show the first and second series.

While in both cases the series are seen to be stationary, observations from the autoregressive model are seen to have more "momentum" than those from the moving average model.

In practice, time series often exhibit nonstationary behavior. When several such series are considered jointly, nonstationarity may be modeled by allowing the zeros of |Φ(B)| in (3.1) to lie on the unit circle. A particular example is the model (I − B)z_t = (I − ΘB)a_t, that is, after differencing each series we obtain a vector MA(1) model. This is a vector analog of the commonly used univariate nonstationary model (1 − B)z_t = (1 − θB)a_t. However, it should be noted here that for vector time series, linear combinations of the elements of z_t may often be stationary, and simultaneous differencing of all series can lead to unnecessary complications in model fitting. See, for example, the discussion in Box and Tiao (1977) and Hillmer and Tiao (1979).
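The bivariate examples above can be reproduced in spirit with a few lines of code. The sketch below (not from the paper) simulates a vector MA(1) and a vector AR(1); since the parameter matrices of (3.2) and (3.3) are not reproduced here, the values of Theta, Phi, and Sigma are illustrative assumptions chosen to be invertible and stationary.

```python
import numpy as np

def simulate_var1(Phi, Sigma, n, burn=200, seed=1):
    """Generate z_t from the vector AR(1) model (I - Phi B) z_t = a_t."""
    rng = np.random.default_rng(seed)
    k = Phi.shape[0]
    a = rng.multivariate_normal(np.zeros(k), Sigma, n + burn)
    z = np.zeros((n + burn, k))
    for t in range(1, n + burn):
        z[t] = Phi @ z[t - 1] + a[t]
    return z[burn:]

def simulate_vma1(Theta, Sigma, n, seed=2):
    """Generate z_t from the vector MA(1) model z_t = (I - Theta B) a_t."""
    rng = np.random.default_rng(seed)
    k = Theta.shape[0]
    a = rng.multivariate_normal(np.zeros(k), Sigma, n + 1)
    return a[1:] - a[:-1] @ Theta.T

# Illustrative parameter matrices (not the values given in (3.2) and (3.3))
Theta = np.array([[0.8, -0.4], [0.3, 0.6]])
Phi   = np.array([[0.8, -0.3], [0.4, 0.6]])
Sigma = np.eye(2)

z_ma = simulate_vma1(Theta, Sigma, n=250)   # analogue of the Figure 1 series
z_ar = simulate_var1(Phi, Sigma, n=150)     # analogue of the Figure 2 series
```

Plotting z_ar beside z_ma displays the greater "momentum" of the autoregressive observations noted in the text.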
Transfer Function Model. For the vector model in (3.1), in general, all elements of z_t are related to all elements of z_{t−j} (j = 1, 2, . . .) and there can be feedback relationships between all the series. However, if the z_t's can be arranged so that the coefficient matrices Φ's and Θ's are all lower triangular, then (3.1) can be written as a transfer function model of the form (2.2). More generally, if the Φ's and Θ's are all lower block triangular, then we obtain a generalization of the transfer function form of (2.2) in which both the input vector series and the output vector series are allowed to have feedback relationships. Furthermore, relationships between the vector transfer function model and the econometric linear simultaneous equation model have been discussed in Zellner and Palm (1974) and Wallis (1977).

3.2 Cross-Covariance and Cross-Correlation Matrices

For a stationary vector time series {Z_t} with mean vector η, let Γ(l) be the lag l cross-covariance matrix, Γ(l) = E[(Z_t − η)(Z_{t+l} − η)'], and let ρ(l) = {ρ_ij(l)} be the corresponding cross-correlation matrix. When the vector ARMA model in (3.1) is stationary, it is well known that the Γ(l) satisfy the moment equations (3.5), where the ψ_j's are obtained from the relationship Φ_p(B)ψ(B) = Θ_q(B), r = max(p, q), and it is understood that (a) if p < q, Φ_{p+1} = ⋯ = Φ_r = 0, and (b) if q < p, Θ_{q+1} = ⋯ = Θ_r = 0.

In particular, when p = 0, that is, we have a vector MA(q) model, then from (3.6) Γ(l) = 0 for |l| > q. Thus, all auto- and cross-correlations are zero when |l| > q. On the other hand, for a vector autoregressive model the auto- and cross-correlations in general will decay gradually to zero as |l| increases.
3.3 A Determinantal Criterion for ARMA Models and the Partial Autoregression Matrices

From the moment equations in (3.5) for a stationary ARMA(p, q) model, we see that the autocovariance matrices Γ(l) and the autoregressive coefficient matrices Φ_1, . . . , Φ_p are related as in (3.7), where the block arrays A(l, m), c(l, m), and g_l(p, m) = [Γ(m + p − 1), . . . , Γ(m + 1)] are formed from the autocovariance matrices. Consider now the k × k matrix D(l, m) = {d_ij(l, m)} of (3.8), where d_ij(l, m) is the determinant

d_ij(l, m) = det [ A(l, m)  c_j(l, m) ; g_i'(l, m)  γ_ij(l + m) ],

c_j(l, m) is the jth column of c(l, m), g_i'(l, m) is the ith row of g(l, m), and γ_ij(l + m) is the (i, j)th element of Γ(l + m). It follows from (3.7) that for an ARMA(p, q) model

D(l, m) = 0  for l > p and m ≥ q.  (3.9)

This provides a multivariate generalization of the results in Gray, Kelley, and McIntire (1978) for univariate ARMA models.

In the special case m = q = 0, (3.7) is a multivariate generalization of the Yule-Walker equations for autoregressive models in univariate time series. Analogous to the partial autocorrelation function for the univariate case, we may define a partial autoregression matrix function P(l) having the property that if the model is AR(p), then P(p) = Φ_p and P(l) = 0 for l > p. From (3.7), P(l) is defined in (3.11) as the coefficient matrix of z_{t−l} in the autoregression of z_t on z_{t−1}, . . . , z_{t−l}.

4. MODEL BUILDING STRATEGY FOR MULTIPLE TIME SERIES

The models in (3.1) contain a dauntingly large number {k²(p + q) + ½k(k + 1)} of parameters, complicating methods for model building. It is natural that attempts have been made to simplify the general form in the model building process, for example by Granger and Newbold (1977) and Wallis (1977). While we sympathize with this aspiration, we feel that so far at least these attempts have not been successful. In some comparisons made later in Section 6, we argue that they do not result in genuine simplification, nor do they provide feasible methods when k is greater than 2 or 3. We see no alternative but to provide for direct initial fitting of models of the form (3.1). It must, however, be added

1. that often models of rather low order (p and q small) provide adequate approximation,
2. that occasionally knowledge of the system might allow simplification a priori, although even here prudent checking of the adequacy of the simplification would be necessary (see Zellner and Palm 1974),
3. that considerable simplification is almost invariably possible after an initial model has been fitted,
4. that 2 and 3 imply that provision should be made to allow models to be fitted in which certain parameters are fixed or constrained in some other way,
5. that other methods of seeking simplifications, for example principal component analysis or canonical analysis (see Box and Tiao 1977), will often prove effective.

In brief, we feel that although the full form (3.1) needs to be fitted initially, subsequent iterations will usually lead to simplification.

In what follows we sketch an iterative approach consisting of (a) tentative specification (identification), (b) estimation, and (c) diagnostic checking for the vector ARMA models in (3.1). A computer package to carry out this analysis has been completed (Tiao et al. 1979), consisting of three main programs: (a) Preliminary Analysis, (b) Stepwise Autoregression, and (c) Estimation and Forecasting.

4.1 Tentative Specification

The aim here is to employ statistics (a) that can be readily calculated from the data and (b) that facilitate the choice of a subclass of models worthy of further examination.

Sample Cross-Correlations. The sample cross-correlations

ρ̂_ij(l) = Σ_t (Z_it − Z̄_i)(Z_{j,t+l} − Z̄_j) / [Σ_t (Z_it − Z̄_i)² Σ_t (Z_jt − Z̄_j)²]^{1/2},

where Z̄_i is the sample mean of the ith component series of Z_t, are particularly useful in spotting low order vector moving average models, since from (3.6) ρ_ij(l) = 0 for l > q.
For the data shown in Figure 1, which were generated from a bivariate MA(1) model, Figures 3(a)-(c) show, respectively, the sample autocorrelations ρ̂_11(l) and ρ̂_22(l), and the sample cross-correlations ρ̂_12(l).

Figure 3. Sample Auto- and Cross-Correlations for the Data in Figure 1.

The large values occurring at |l| = 1 would lead to tentative specification of the model as an MA(1). However, graphs of this kind become increasingly cumbersome as the number of series is increased. Furthermore, identification is not easy from a listing of sample cross-correlation matrices ρ̂(l) like that in Table 1(a), particularly when k is greater than 4 or 5.

In this circumstance, we have found the following simple device of great practical value. Instead of the numerical values, a plus sign is used to indicate a value greater than 2n^{−1/2}, a minus sign a value less than −2n^{−1/2}, and a dot to indicate a value in between −2n^{−1/2} and 2n^{−1/2}. The motivation is that if the series were white noise, for large n the ρ̂_ij(l)'s would be normally distributed with mean 0 and variance n^{−1}. The symbols can be arranged either as in Table 1(b) or as in Table 1(c). We realize that the variances of the ρ̂_ij(l)'s can be considerably greater than n^{−1} when the series are highly autocorrelated, so that these indicator symbols, if taken literally, can lead to overparameterization. However, we do not interpret these indicator symbols in the sense of a formal significance test, but as a rather crude "signal-to-noise ratio" guide. Taken together they can give useful and assimilable indicators of the general correlation pattern.
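As a concrete rendering of this device (a sketch, not code from the paper or from the WMTS-1 package), the following computes sample cross-correlation matrices and prints them as +, −, and . symbols using the 2n^{−1/2} yardstick.

```python
import numpy as np

def cross_correlation_matrix(Z, lag):
    """Sample cross-correlation matrix rho_hat(lag) for an (n x k) data matrix Z;
    element (i, j) estimates the correlation between Z_{i,t} and Z_{j,t+lag}."""
    n, k = Z.shape
    Zc = Z - Z.mean(axis=0)
    s = Zc.std(axis=0)
    R = np.empty((k, k))
    for i in range(k):
        for j in range(k):
            R[i, j] = np.sum(Zc[:n - lag, i] * Zc[lag:, j]) / (n * s[i] * s[j])
    return R

def indicator_symbols(R, n):
    """Replace each correlation by '+', '-', or '.' using the 2/sqrt(n) guide."""
    cut = 2.0 / np.sqrt(n)
    return np.where(R > cut, "+", np.where(R < -cut, "-", "."))

# With white noise the symbols should be almost entirely dots
Z = np.random.default_rng(0).normal(size=(200, 2))
for lag in range(1, 7):
    print(lag, indicator_symbols(cross_correlation_matrix(Z, lag), len(Z)).tolist())
```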

Table 1. Cross-Correlation Matrices ρ̂(l) for the Data in Figure 1: (a) sample cross-correlation matrices ρ̂(l), lags 1-12; (b) ρ̂(l) in terms of indicator symbols, lags 1-12; (c) pattern of correlations for each element in the matrix over all lags.

Table 2 shows sample cross-correlation matrices in terms of these indicator symbols for the series in Figure 2 generated from an AR(1) model. The persistence of large correlations suggests the possibility of autoregressive behavior. In general, the pattern of indicator symbols for the cross-correlation matrices makes it very easy to identify a low order moving average model.

Table 2. Sample Cross-Correlation Matrices ρ̂(l) for the Data in Figure 2 in Terms of Indicator Symbols, lags 1-12.
Sample Partial Autoregression and Related Summary Statistics. For an AR(p) process, the partial autoregression matrices P(l) in (3.11) are zero for l > p. They are therefore particularly useful for identifying an autoregressive model. Estimates of P(l) and their standard errors can be obtained by fitting autoregressive models of successively higher order l = 1, 2, . . . by standard multivariate least squares.

It is well known (see, e.g., Anderson 1971) that for a stationary AR(p) model the estimates P̂(1), . . . , P̂(p) are asymptotically jointly normally distributed. A useful summary of the pattern of the partials is obtained by listing indicator symbols, assigning a plus (minus) sign when a coefficient in P̂(l) is greater (less) than 2 (−2) times its estimated standard error, and a dot for values in between.

To help tentatively determine the order of an autoregressive model, we may also employ the likelihood ratio statistic corresponding to testing the null hypothesis Φ_l = 0 against the alternative Φ_l ≠ 0 when an AR(l) model is fitted. Let S(l) be the matrix of residual sums of squares and cross products after fitting an AR(l). The likelihood ratio statistic is the ratio of the determinants

U = |S(l)| / |S(l − 1)|.  (4.2)

Using Bartlett's (1938) approximation, the statistic

M(l) = −(N − 3/2 − lk) logₑ U  (4.3)

is, on the null hypothesis, asymptotically distributed as χ² with k² degrees of freedom, where N = n − p − 1 is the effective number of observations, assuming that a constant term is included in the model.

Finally, a measure of the extent to which the fit is improved as the order is increased is provided by the diagonal elements of the residual covariance matrices corresponding to the successive AR models.

For illustration, the matrices of summary symbols, the M(l) statistics, and the diagonal elements of the residual covariance matrices for the series in Figure 2 are shown in Table 3 for l = 1, . . . , 5. They indicate that an AR(1) or at most an AR(2) would be adequate for the data.

Table 3. Indicator Symbols for Partial Autoregression and Related Statistics for Data in Figure 2 (columns: lag l; indicator symbols; M(l), approximately distributed as χ² with 4 degrees of freedom; diagonal elements of Σ̂).

For the series shown in Figure 1, the pattern of the partials and related statistics are given in Table 4. Notice here that if we had confined attention to autoregressive models, as is advocated in Parzen (1977), we would have needed p to be as high as 7. This is not surprising since with the MA(1) model of (3.2) written in the autoregressive form z_t = π_1 z_{t−1} + π_2 z_{t−2} + ⋯ + a_t, we find π_j = −Θ^j. Thus, although the determinants |π_j| decrease rapidly towards zero as j increases, the elements of π_j converge to zero very slowly, so that many autoregressive terms would be needed to provide an adequate approximation.

In general, the pattern of the partial autoregression matrices, the M(l) statistic, and the diagonal elements of the residual covariance matrix are useful to distinguish between moving average and low order autoregressive models and to select tentatively the appropriate order for the latter.
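A sketch of this stepwise autoregression summary (not the WMTS-1 implementation) is given below. The AR(l) fits are by multivariate least squares with a constant term, the partial autoregression matrix P̂(l) is taken as the last coefficient matrix of the AR(l) fit, and the small-sample correction in M(l) follows (4.3) as reconstructed above.

```python
import numpy as np

def fit_var(Z, p):
    """Multivariate least squares fit of an AR(p) with a constant term.
    Returns the coefficient matrices Phi_1..Phi_p, the residuals, and S(p)."""
    n, k = Z.shape
    Y = Z[p:]
    X = np.hstack([np.ones((n - p, 1))] + [Z[p - j:n - j] for j in range(1, p + 1)])
    B, *_ = np.linalg.lstsq(X, Y, rcond=None)
    resid = Y - X @ B
    Phi = [B[1 + (j - 1) * k: 1 + j * k].T for j in range(1, p + 1)]
    return Phi, resid, resid.T @ resid

def stepwise_ar_summary(Z, max_p):
    """Print P_hat(l), M(l) as in (4.3), and rough residual variances for l = 1..max_p.
    M(l) is referred to a chi-squared distribution with k*k degrees of freedom.
    (Successive fits use slightly different effective samples; adequate as a rough guide.)"""
    n, k = Z.shape
    _, _, S_prev = fit_var(Z, 0)                  # mean-only fit gives S(0)
    for l in range(1, max_p + 1):
        Phi, resid, S = fit_var(Z, l)
        N = n - l - 1                             # effective number of observations
        U = np.linalg.det(S) / np.linalg.det(S_prev)
        M = -(N - 1.5 - l * k) * np.log(U)        # Bartlett-type correction
        print(f"l={l}: M(l)={M:8.2f}  diag(S/N)={np.round(np.diag(S) / N, 4)}")
        print(f"       P_hat({l}) =\n{np.round(Phi[-1], 2)}")
        S_prev = S
```

Applied to a simulated AR(1) series such as z_ar in the earlier sketch, stepwise_ar_summary(z_ar, 5) should show a large M(1) and small values thereafter.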
Table 4. Pattern of Partial Autoregression and Related Statistics for Data in Figure 1 (pattern of P̂(l), M(l), and diagonal elements of Σ̂ for successive lags).

Sample Residual Cross-Correlation Matrices After AR Fit. After each AR(l) fit, l = 1, . . . , p, cross-correlation matrices of the residuals â_t may be readily obtained. Table 5 shows indicator symbols for residual correlations after fitting AR(1) and AR(2) to the AR data plotted in Figure 2. Again a plus sign is used to indicate values greater than 2n^{−1/2}, a minus sign for values less than −2n^{−1/2}, and a dot for in-between values. They verify that there is no need to go beyond an AR(2) model.

Table 5. Indicator Symbols for Residual Cross-Correlations for the AR(1) Data of Figure 2 (lags 1-12 after an AR(1) fit and after an AR(2) fit).

It is perhaps worth emphasizing here again that these indicator symbols are proposed as a rough preliminary device to help arrive at an initial model. They should not be treated as "exact significance testing." In a recent paper by Li and McLeod (1980), expressions have been obtained for the asymptotic distributions of the residual autocorrelations. As in the univariate case, the low order autocorrelations have variance considerably less than n^{−1}.
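The following self-contained sketch (not from the paper) illustrates the residual check on simulated data: a bivariate AR(1) series is generated, an AR(1) model is fitted by least squares, and the residual cross-correlations are displayed as indicator symbols, which here should be nearly all dots.

```python
import numpy as np

rng = np.random.default_rng(3)
n, k = 300, 2
Phi_true = np.array([[0.7, -0.3], [0.2, 0.5]])    # illustrative AR(1) coefficients

z = np.zeros((n, k))
for t in range(1, n):                             # simulate the bivariate AR(1)
    z[t] = Phi_true @ z[t - 1] + rng.normal(size=k)

X, Y = z[:-1], z[1:]                              # least squares AR(1) fit (no constant)
Phi_hat = np.linalg.lstsq(X, Y, rcond=None)[0].T
resid = Y - X @ Phi_hat.T

cut = 2 / np.sqrt(len(resid))
for lag in range(1, 7):                           # indicator symbols for residual correlations
    r = np.corrcoef(resid[:-lag].T, resid[lag:].T)[:k, k:]
    print(lag, np.where(r > cut, "+", np.where(r < -cut, "-", ".")).tolist())
```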
For mixed vector autoregressive moving average models in general, however, both the population cross-correlation matrices ρ(l) and the partial autoregression matrices P(l) decay only gradually toward 0. In some situations, the order of mixed models may be tentatively identified by inspection of patterns in residual cross-correlations after the AR fit, but in others study of residual correlations could be misleading. For illustration, consider the case of a stationary ARMA(1, 1) model

(I − ΦB)z_t = (I − ΘB)a_t.  (4.5)

If an AR(1) model is fitted to {z_t}, then the estimate Φ̂ will be biased. In fact, asymptotically Φ̂ converges in probability to a limiting value Φ_0 depending on both Φ and Θ. Thus the residuals â_t = z_t − Φ_0 z_{t−1} approximately follow a mixed model. For k = 1, {â_t} follows an ARMA(1, 2) model, so that the autocorrelations of â_t satisfy ρ_â(l) = φ ρ_â(l − 1) for l ≥ 3, and ρ_â(1) and ρ_â(2) are functions of φ and θ. Table 6 gives values of ρ_â(1) and ρ_â(2) for various combinations of values of φ and θ. For each combination, the first value is ρ_â(1) and the second ρ_â(2).

Table 6. Asymptotic Values of ρ_â(1) and ρ_â(2).

We see that if the true value of φ is large in magnitude, the residual autocorrelations would lead to the choice of an MA(1) model for â_t, and therefore the correct identification. For intermediate values of φ, a moving average of order 2 or higher might be selected, resulting in overparameterization.

In Gray, Kelley, and McIntire (1978) and Beguin, Gourieroux, and Monfort (1980), methods have been proposed to determine the order of a univariate ARMA model. These methods are essentially equivalent to estimating, for k = 1, the determinant D(l, m) in (3.8) using sample estimates of the autocovariances and selecting the orders of the autoregressive and moving average polynomials on the basis of the property in (3.9). We are currently studying sampling properties of estimates of appropriate functions of D(l, m) in the vector case.

4.2 Estimation

Once the order of the model in (3.1) has been tentatively selected, efficient estimates of the associated parameter matrices Φ = (Φ_1, . . . , Φ_p), Θ = (Θ_1, . . . , Θ_q), and Σ are determined by maximizing the likelihood function. Approximate standard errors and the correlation matrix of the estimates of elements of the Φ_j's and Θ_j's can also be obtained.

Conditional Likelihood. For the ARMA(p, q) model, the shocks can be written as a_t = z_t − Φ_1 z_{t−1} − ⋯ − Φ_p z_{t−p} + Θ_1 a_{t−1} + ⋯ + Θ_q a_{t−q}. As in the univariate case discussed in Box and Jenkins (1970), the likelihood function can be approximated by a "conditional" likelihood function as follows. The series is regarded as consisting of the n − p vector observations z_{p+1}, . . . , z_n. The likelihood function is then determined from a_{p+1}, . . . , a_n, using the preliminary values z_1, . . . , z_p and conditional on zero values for a_p, . . . , a_{p−q+1}. Thus, as shown in Wilson (1973),

l_c(Φ, Θ, Σ | z) ∝ |Σ|^{−(n−p)/2} exp{−½ tr Σ^{−1} S(Φ, Θ)},  (4.10)

where S(Φ, Θ) = Σ_{t=p+1}^{n} a_t a_t'. Properties of the maximum likelihood estimates obtained from (4.10) have been discussed in Nicholls (1976, 1977) and Anderson (1980). It has been shown in Hillmer and Tiao (1979) that this approximation can be seriously inadequate if n is not sufficiently large and one or more zeros of |Θ_q(B)| lie on or close to the unit circle. Specifically, this would lead to estimates of the moving average parameters with large bias.

Exact Likelihood Function. For univariate ARMA models, the exact likelihood function has been considered by Tiao and Ali (1971), Newbold (1974), Dent (1977), Ansley (1979), and others. For vector models, this function has been studied by Osborn (1977) for the pure moving average case and by Phadke and Kedem (1978), Nicholls and Hall (1979), and Hillmer and Tiao (1979). It takes the form given in (4.11), where l_1 depends (a) only on z_1, . . . , z_p if q = 0 and (b) on all the data vectors z_1, . . . , z_n if q ≠ 0. Estimation algorithms have been developed and incorporated in our computer package for the vector MA(q) model. For the general ARMA(p, q) model, it has been shown that a close approximation to the exact likelihood can be obtained by considering the transformation

w_t = Φ_p(B) z_t,  so that  w_t = Θ_q(B) a_t,  (4.12)

and then applying the results for MA(q) to w_t, t = p + 1, . . . , n.

Because estimation of moving average parameters using the exact likelihood is rather slow, we presently employ the conditional method in the preliminary stages of iterative model building and switch to the exact method towards the end.
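A minimal sketch of the conditional likelihood calculation in (4.10) is given below (Python, assuming the sign conventions of (3.1)); the shocks are built up recursively with the pre-sample shocks set to zero, and in practice this function would be maximized numerically over Φ, Θ, and Σ.

```python
import numpy as np

def conditional_loglik(z, Phi, Theta, Sigma):
    """Log of the conditional likelihood (4.10) for a vector ARMA(p, q).
    Phi and Theta are lists of k x k matrices; shocks before the first
    computed one are conditioned to be zero, as described in the text."""
    n, k = z.shape
    p, q = len(Phi), len(Theta)
    a = np.zeros((n, k))
    for t in range(p, n):
        a[t] = z[t]
        for i in range(p):                       # a_t = z_t - sum_i Phi_i z_{t-i} ...
            a[t] -= Phi[i] @ z[t - 1 - i]
        for j in range(q):                       # ... + sum_j Theta_j a_{t-j}
            a[t] += Theta[j] @ a[t - 1 - j]
    S = a[p:].T @ a[p:]                          # S(Phi, Theta) of (4.10)
    m = n - p
    return -0.5 * (m * np.log(np.linalg.det(Sigma))
                   + np.trace(np.linalg.inv(Sigma) @ S))
```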
4.3 Diagnostic Checking

To guard against model misspecification and to search for directions of improvement, a detailed diagnostic analysis of the residual series {â_t}, computed from the fitted model, is performed. Useful diagnostic checks include (a) plots of standardized residual series against time and/or other variables and (b) cross-correlation matrices of the residuals â_t. As before, the structures of the correlations are summarized by indicator symbols. Overall χ² tests based on the sample cross-correlations of the residuals have been proposed in recent papers by Hosking (1980) and Li and McLeod (1980). However, as is noted in Box and Jenkins (1970), such overall tests are not substitutes for more detailed study of the correlation structure.

5. ANALYSES OF TWO EXAMPLES

We now apply the model building approach introduced in the preceding section to the following sets of data:

1. The Financial Times Ordinary Share Index, U.K. Car Production, and the Financial Times Commodity Price Index: quarterly data, 3/1952-4/1967, obtained from Coen, Gomme, and Kendall (1969). This will be referred to as the SCC data.
2. The Gas Furnace Data given in Box and Jenkins (1970).

5.1 The SCC Data

The three series are

Z_1t: Financial Times Ordinary Share Index
Z_2t: U.K. Car Production
Z_3t: Financial Times Commodity Price Index

The authors of the original study were interested in the possibility of predicting Z_1t from lagged values of Z_2t and Z_3t using a standard regression analysis in which Z_1t was treated as a dependent variable and Z_2(t−6) and Z_3(t−7) as regressors or independent variables. For a critical evaluation of this approach, see Box and Newbold (1970). Here we consider what structure is revealed by the present multiple time series analysis, in which the three series are jointly modeled.

Tentative Specification. We see in Table 7 that the original series show high and persistent auto- and cross-correlations.

Table 7. Pattern of Sample Cross-Correlations for the SCC Data (indicator symbols for Z_1 Stocks, Z_2 Cars, and Z_3 Commodities).

Examination of the partials and related statistics in Table 8 shows that for l > 2 most of the elements of P̂(l) are small compared with their estimated standard errors and the M(l) statistic fails to show significant improvement. Table 9 shows that the pattern of the cross-correlations of the residuals after AR(2) is consonant with estimated white noise. However, note that there is one large residual correlation at lag 1 after the AR(1) fit, suggesting also the possibility of an ARMA(1, 1) model.

Estimation. Both an AR(2) and an ARMA(1, 1) model were fitted using the exact likelihood method,* but results are given only for the ARMA(1, 1) model, which produced a marginally better representation. For this model,

(I − ΦB)Z_t = c + (I − ΘB)a_t,  (5.1)

where c is a vector of constants. Table 10 shows the initial unrestricted fit and also the fits for two simpler models obtained by setting to zero those coefficients whose estimates were small compared to their standard errors.

* For this example, estimates from the conditional likelihood for the ARMA(1, 1) case are very close to the exact results.

Diagnostic Checking. Table 11 suggests that the restricted ARMA(1, 1) model provides an adequate representation of the data.

Table 11. Pattern of Residual Cross-Correlations After the Final Restricted ARMA(1, 1) Model Fit: SCC Data (indicator symbols for â_1, â_2, and â_3).

Implication of the Model. The final model implies that the system is approximated by the relations (5.2a)-(5.2c). Upon substituting (5.2a) into (5.2c), we get

(1 − .83B)Z_3t = 2.8 + .40(1 − .98B)Z_1,t−1 + ⋯ .  (5.2d)

Thus all three series behave approximately as random walks with slightly correlated innovations. From the point of view of forecasting, (5.2d) is of some interest since it implies that the ordinary share index Z_1,t−1 is a leading indicator at lag 1 for the commodity index Z_3t. Its effect is small, however, as can be seen for example by the improvement achieved over the corresponding best fitting univariate model (5.3). The residual variance of .151 from the univariate model is not much larger than the value .134 for a_3t obtained
from the final vector model. Although the multiple time series analysis fails to reveal anything very surprising for this example, it shows what is there and does not mislead.

Table 8. Partial Autoregression and Related Statistics: SCC Data (columns: lag; indicator symbols for the partials; M(l), approximately distributed as χ² with 9 degrees of freedom; diagonal elements of Σ̂).

Table 9. Pattern of Cross-Correlation Matrices of Residuals: SCC Data, (a) after an AR(1) fit and (b) after an AR(2) fit.

Table 10. Estimation Results for the Model (5.1): SCC Data (exact likelihood); (1) full model, (2) restricted model (intermediate), (3) restricted model (final), with estimated standard errors in parentheses.

5.2 The Gas Furnace Data

The two series consist of (a) the input gas rate and (b) the output CO₂ concentration, at 9-second intervals, from a gas furnace. We shall let Z_1t = gas rate + .057 and Z_2t = CO₂ − 53.5. This set of data was employed in Box and Jenkins (1970) to illustrate a procedure of identification, fitting, and checking of a transfer function model of the form (2.3) for k = 2 relating two time series, one of which is known to be the input for the other. Using this approach, the following models, (5.4a) and (5.4b), were found for the input Z_1t and the output Z_2t,
where ω(B) = −(.53 + .37B + .51B²), δ(B) = 1 − .57B, φ(B) = 1 − 1.53B + .63B², and the shock series {α_1t} and {α_2t} in (5.4) are assumed independent.

Particularly when we are dealing with econometric rather than engineering models, feedback relationships may not be known a priori; it is of interest, therefore, to analyze the data using the present approach, where no distinction is made between an input and an output variable and the fact that no feedback could occur in the system is not used.

Tentative Specification. In Table 12, we see that the auto- and cross-correlations of the original data in part (a) are persistently large in magnitude, ruling out low order moving average models; the M(l) statistic (approximately χ² with 4 degrees of freedom) in part (b) suggests that an AR(6) might be appropriate; and the residual cross-correlation pattern after an AR(6) fit in part (c) seems to support the adequacy of this model.

Table 12. Tentative Identification for the Gas Furnace Data
(a) Pattern of cross-correlations of the original data (indicator symbols for Z_1t and Z_2t)
(b) M statistic for partial autoregression
Lag l:   1      2     3      4      5     6      7     8     9    10    11
M(l):  1650   665   31.7   22.5   5.6   12.9   1.8   8.0   3.5    0    2.0
(c) Pattern of cross-correlations of the residuals after the AR(6) fit (indicator symbols for â_1t and â_2t)

Estimation Results. Estimation results corresponding to an unrestricted AR(6) model

(I − Φ_1B − ⋯ − Φ_6B⁶)Z_t = a_t  (5.5)

are given in (5.6). If we let Φ_l = {φ_ij^(l)}, we see that the φ̂_12^(l) are small compared with their standard errors over all lags, confirming (as in this case is known from the physical nature of the apparatus generating the data) that there is a unidirectional relationship between Z_1t and Z_2t involving no feedback. Also, φ̂_21^(l) is small for l = 1, 2, and the residuals â_1t and â_2t are essentially uncorrelated, implying a delay of 3 periods. It should be noted also that the variances for a_1t and a_2t are very close to those for α_1t and α_2t in (5.4), and their correlation is negligible.

To facilitate comparison with (5.4), we set φ_11^(l) = 0 for l > 3, φ_12^(l) = 0 for all l, φ_21^(l) = 0 for l = 1, 2, and φ_22^(l) = 0 for l = 5, 6. Estimation results for this restricted AR(6) model are then given in (5.7); the residual variances are .0359 and .0561. Examination of the pattern of the cross-correlations of the residuals suggests that the model is adequate.

Implication of the Bivariate Model. The final AR(6) model (5.7) can be written

φ_11(B) Z_1t = a_1t,   φ_21(B) Z_1t + φ_22(B) Z_2t = a_2t,

where φ_11(B) = 1 − 1.98B + 1.38B² − .35B³, φ_21(B) = (.53 − .11B − .21B²)B³, and φ_22(B) = 1 − 1.53B + .58B² + .14B³ − .12B⁴. Assuming a_1t and a_2t are
uncorrelated, the input model φ_11(B)Z_1t = a_1t with Var(a_1t) = .0359 is essentially the same as (5.4a). Now the model relating the output Z_2t to the input Z_1t is

Z_2t = −φ_21(B)φ_22^{−1}(B) Z_1t + φ_22^{−1}(B) a_2t,  (5.9)

with Var(a_2t) = .0561. The noise model φ_22^{−1}(B)a_2t is not very different from the corresponding one φ^{−1}(B)α_2t in (5.4b), but the dynamic model −φ_21(B)φ_22^{−1}(B)Z_1t at first sight appears markedly different from the first term on the right side of (5.4b). The reason is that in the form (5.9) the denominators of the dynamic model and of the noise model are constrained to be identical. This restriction is not present in the transfer function model (5.4b). The less restrictive form can, however, be written in the form of (5.9) if we set φ_22(B) = φ(B) and −φ_21(B) = ω(B)B³δ^{−1}(B)φ(B). For this example, the factor φ(B)δ^{−1}(B) ≈ 1 − .96B, and it is then seen that the models are in fact very similar. This may be confirmed by comparing the impulse response weights in Table 13, where ω(B)B^bδ^{−1}(B) = Σ_{j≥0} v_jB^j and −φ_21(B)φ_22^{−1}(B) = Σ_{j≥0} v_j*B^j.

Table 13. Impulse Response Weights for the Gas Furnace Data.
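The impulse response weights being compared can be obtained by simple polynomial division of the operators quoted above. The sketch below (not from the paper) does this for both forms; the delay factor B³ common to both is omitted, and because the printed coefficients are rounded the resulting weights need not match Table 13 exactly.

```python
import numpy as np

def impulse_weights(num, den, n_weights):
    """Coefficients v_0, v_1, ... of num(B)/den(B), with den normalized so den[0] = 1."""
    v = np.zeros(n_weights)
    for j in range(n_weights):
        acc = num[j] if j < len(num) else 0.0
        for i in range(1, min(j, len(den) - 1) + 1):
            acc -= den[i] * v[j - i]
        v[j] = acc
    return v

# Transfer function form (5.4b): omega(B)/delta(B), omega(B) = -(.53 + .37B + .51B^2)
v_tf = impulse_weights([-0.53, -0.37, -0.51], [1.0, -0.57], 8)

# Restricted AR(6) form (5.9): -phi21(B)/phi22(B), with the B^3 delay removed
v_ar = impulse_weights([-0.53, 0.11, 0.21],
                       [1.0, -1.53, 0.58, 0.14, -0.12], 8)

print(np.round(v_tf, 2))
print(np.round(v_ar, 2))
```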
Further AR Fitting Results. It is instructive to examine for these data the changes in the fitted autoregressive models as the order is increased. Using indicator symbols (and omitting the dots), Table 14 shows the situation for p = 1, . . . , 6. The residual covariance matrix for each order is also given. The following observations may be made.

Table 14. Successive AR Fitting Results for the Gas Furnace Data (indicator symbols for Φ̂_1, . . . , Φ̂_6 and the residual covariance matrix Σ̂ for each order of fit).

1. If only AR(1) or AR(2) were considered, one might be led to believe mistakenly that there was a feedback relationship between these two series.
2. The unidirectional dynamic relationship becomes clear when the order of the model, p, is increased to three. Since the input series Z_1t essentially follows a univariate AR(3) model, this suggests that the present procedure will correctly identify the one-sided causal dynamic relationship once the input model is appropriately selected.
3. The delay b = 3 emerges when the order p is increased to 4. Since only very marginal improvement in the fit occurs for p > 4, this is saying that the delay is correctly identified only when the model is specified essentially correctly.

Implications on General Time Series Model Building. The relative merit of the present procedure and more direct modeling of the system will depend on how much is known or how much we are prepared to assume. In some applications, particularly in engineering and most examples of intervention analysis, an adequate initial specification may be possible from knowledge of the nature of the problem. This may allow a flow diagram showing the feedback structure to be drawn and likely orders to be guessed for the various dynamic components. The resulting models can then be directly fitted in the manner described and illustrated in Box and MacGregor (1974, 1976) and Box and Tiao (1975). For a single input with feedback known to be absent, a prewhitening method is given in Box and Jenkins (1970) for identifying an unknown dynamic system, but extension of this identification method to multiple inputs is rather complex.

Particularly for economic and business examples, however, the feedback structure and orders of the multiple system are often unknown. The present multiple time series procedure has the great advantage that it allows identification of the feedback and dynamic structure. Furthermore,

1. A one-sided causal relationship, if it exists, will emerge in the identification process, and the stochastic structures of the input as well as the transfer function relationship between input and output will be modeled simultaneously.
2. Stochastic multiple input and multiple output situations are readily handled.
3. A useful method is provided for seeking leading indicators in economic and business applications. In this context it should be noted that a unidirectional dynamic relationship may not exist between two time series even when one variable is known to be the input for the other. One reason for this phenomenon is the effect of temporal aggregation. As shown in Tiao and Wei (1976), pseudo-feedback relationships could occur because of this temporal aggregation effect, and it would be a mistake to impose a transfer function model in such a situation.
4. However, when a simple transfer function structure of the form (2.2) is appropriate, the present multiple time series approach could rarely reproduce it directly (see, for example, (5.4b) and (5.9)), and some analysis of the fitted form might be necessary to reveal a more parsimonious and more easily understood structure.

6. COMPARISON WITH SOME OTHER APPROACHES AND CONCLUDING REMARKS

We have discussed various tools used in an iterative approach to modeling multiple time series and illustrated
how they work in practice. Much further work is needed, especially in the identification of mixed autoregressive moving average models and in developing faster estimation algorithms and better tools for diagnostic checking. In spite of the imperfections of the present tools and the preliminary nature of the approach, we have felt it appropriate to present them here in order to (a) illustrate the potential usefulness of vector autoregressive moving average models in characterizing dynamic structures in the data and (b) stimulate further development of modeling procedures. Several alternative approaches to modeling multiple time series have been proposed in the literature. It may be of interest to discuss briefly those proposed by Granger and Newbold (1977), Wallis (1977), and Chan and Wallis (1978).

In the Granger and Newbold approach, one begins by fitting univariate ARMA models

φ_jj(B) Z_jt = θ_jj(B) u_jt,  j = 1, . . . , k,  (6.1)

to each series, and then attempts to identify the dynamic structure of the k white noise residual series {û_jt} by examination of their cross-correlations. A model of the form (2.2), with k(h) being the set (1, . . . , k) excluding h, is then fitted to the k residual series. This model and the prewhitening transformations (6.1) then determine the model for the original vector series. As the authors themselves pointed out, the procedure is complex and difficult to apply for k > 2. One major difficulty arises from the fact that the parameters in the model for the residuals are subject to various complicated nonlinear constraints. Also, it can be readily shown that even if the vector series {Z_t} follows a low order ARMA model (3.1), the corresponding model for the residual vector {u_t}, where u_t' = (u_1t, . . . , u_kt), can be complex and difficult to identify in practice.

The Wallis and Chan approach uses the form (6.1) for each individual series and the fact that the model (3.1) can be written as

|Φ_p(B)| Z_t = H(B) a_t,  (6.2)

where H(B) = A(B)Θ_q(B), A(B) is the adjoint matrix and |Φ_p(B)| the determinant of Φ_p(B). As in the Granger and Newbold approach, an individual model is first constructed for each series. From the degrees of the moving average polynomials θ_jj(B) of these individual models, the degree of H(B) is determined. Next, models of the form

D_l(B) Z_t = H(B) a_t,  (6.3)

where D_l(B) is a diagonal matrix polynomial in B of degree l, are fitted successively for l = r, r − 1, . . . , where r is some specified maximum order, to determine an appropriate value for l. A likelihood ratio test is then performed to check whether the diagonal elements of D_l(B) are identical, that is, of the form (6.2). Finally, from the fitted H(B) and |Φ(B)| or D_l(B), one guesses at the values of p and q in (3.1) and then proceeds to estimate the parameters in Φ_p(B) and Θ_q(B). The efficacy of this approach is open to question on several grounds.

1. The degree of the polynomial H(B) in (6.2) can be higher than the maximum degree of θ_jj(B) for the individual series. For example, for k = 2 one can construct a model in which the two elements of a_t are independent and q_1 = q_2 = 0, yet it would be a mistake to infer that H(B) is of degree zero.

2. For vector AR or ARMA models, the representation (6.2) is certainly nonparsimonious. Apart from the covariance matrix Σ, for k series the maximum number of parameters in the original form (3.1) is k²(p + q), while the maximum number of parameters in the form (6.2) is kp + [(k − 1)p + q]k², representing an increase of
pk(k − 1)² parameters. The increase could be even greater if the diagonal form (6.3) is employed. Thus, assuming the degree of H(B) is correctly specified, even for k as low as 3 or 4, a very large number of additional parameters will have to be estimated merely to identify correctly a low order vector AR model, say p = 1 or 2.

3. Since the correspondence between the degrees of the determinantal polynomial |Φ(B)| and H(B) and the values of (p, q) is not necessarily one to one, it is not clear how one determines p and q in (3.1) from the form (6.2).

4. The approach is made even more computationally burdensome because the authors propose to employ the exact likelihood method for moving average parameters throughout the process of model building. Our experience, however, suggests that because this method converges relatively slowly it is better to use it only in the final stage of the estimation process.

The chief distinction between our approach and the two alternatives just discussed is that we believe it better to tackle the dynamic relationships of the k series in their entirety, employing tools such as the estimates of cross-correlation matrices and partial autoregression matrices to shed light directly on the structure. Simplifications of one kind or another will then often follow. At least for the tentative specification of the vector autoregressive or the vector moving average model, our procedures seem far simpler to use in practice and do not require the multitude of steps these alternative approaches need to arrive at even a simple model.

To illustrate these points, we briefly consider the mink-muskrat example which Chan and Wallis used to illustrate their methods. They treat two series Y_1t* and Y_2t* obtained after "detrending" the muskrat and mink series by first and second degree polynomials respectively. Proceeding through the various steps outlined above, they eventually arrive at an AR(1) model. However, it will be seen that this same model is suggested immediately by the simple procedures we propose. Table 15(a) shows the partial autoregression results for l = 1, 2 and Table 15(b) the residual cross-correlations after the AR(1) fit. A very similar analysis of this set of data is given in Ansley and Newbold (1979). For various reasons, we do not wish to sanctify this AR(1) model. These include the question of whether any linear structural model is adequate for these series (see Tong and Lim 1980); in addition, the detrending procedures and the behavior of a high autocorrelation at lag 10 occurring in the residuals seem suspect. Our only point is to show that the circuitous route adopted by Chan and Wallis to arrive at this model is unnecessary.

Table 15. Identification of Muskrat-Mink Data
(a) Partial autoregression and related statistics: at lag 1 the indicator symbols for the partials are [+ −; + +], M(1) = 111.7 (approximately χ² with 4 degrees of freedom), and the diagonal elements of Σ̂ are .062 and .059; at lag 2, M(2) = 4.8 and the diagonal elements are .0571 and .0572.
(b) Cross-correlations of residuals after the AR(1) fit (indicator symbols for â_1 and â_2).

Before concluding this paper, it is worth noting that in modeling as well as analysis of vector time series one often finds it useful to perform various eigenvalue and eigenvector analyses. Specifically, writing (3.1) in the form

z_t = ẑ_{t−1}(1) + a_t,  (6.4)

where ẑ_{t−1}(1) is the one step ahead forecast of z_t made at time t − 1, and denoting, for stationary series, Γ_z(0) = E(z_t z_t') and Γ_ẑ(0) = E[ẑ_{t−1}(1) ẑ_{t−1}(1)'], it will often be informative to compute eigenvalues and eigenvectors of estimates of these matrices. Such analyses are described in Quenouille (1957), Box and Tiao (1977), and Tiao et al. (1979). Also, the eigenvalues and eigenvectors of the spectral density matrix of the model should be considered (see Brillinger 1975). These techniques are useful in (a) detecting exact concurrent or lagged linear relations between series, and (b) facilitating understanding and interpretation of the fitted model. In our opinion, this is one of the most important and challenging topics for further research.
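As one concrete and simplified version of such an analysis, the sketch below (not from the paper) fits an AR(1) by least squares, forms the one step ahead forecasts ẑ_{t−1}(1), and examines the eigenvalues of Γ̂_z(0)^{−1}Γ̂_ẑ(0); the choice of this particular matrix follows the canonical analysis of Box and Tiao (1977) and is an assumption here rather than a prescription taken from the present paper.

```python
import numpy as np

rng = np.random.default_rng(4)
n, k = 400, 3
Phi = np.array([[0.8, 0.0, 0.1],
                [0.2, 0.5, 0.0],
                [0.0, 0.3, 0.4]])                 # illustrative stationary AR(1)
z = np.zeros((n, k))
for t in range(1, n):
    z[t] = Phi @ z[t - 1] + rng.normal(size=k)

Phi_hat = np.linalg.lstsq(z[:-1], z[1:], rcond=None)[0].T   # AR(1) least squares fit
zhat = z[:-1] @ Phi_hat.T                                   # one step ahead forecasts
G_z = np.cov(z[1:], rowvar=False)                           # estimate of Gamma_z(0)
G_f = np.cov(zhat, rowvar=False)                            # covariance of the forecasts
lam, vec = np.linalg.eig(np.linalg.solve(G_z, G_f))         # predictability eigenanalysis
# eigenvalues near 0 flag nearly unpredictable (white noise) linear combinations;
# eigenvalues near 1 flag highly predictable, nearly nonstationary combinations
print(np.round(np.sort(lam.real), 3))
```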
[Received January 1981. Revised June 1981.]

REFERENCES

ABRAHAM, B. (1980), "Intervention Analysis and Multiple Time Series," Biometrika, 67, 73-78.
ABRAHAM, B., and BOX, G.E.P. (1978), "Deterministic and Forecast-Adaptive Time Dependent Models," Applied Statistics, 27, 120-130.
AKAIKE, H. (1980), "On the Identification of State Space Models and Their Use in Control," in Directions in Time Series, eds. D.R. Brillinger and G.C. Tiao, Institute of Mathematical Statistics, 175-187.
ANDERSON, T.W. (1971), The Statistical Analysis of Time Series, New York: John Wiley.
ANDERSON, T.W. (1980), "Maximum Likelihood Estimation for Vector Autoregressive Moving Average Models," in Directions in Time Series, eds. D.R. Brillinger and G.C. Tiao, Institute of Mathematical Statistics, 49-59.
ANSLEY, C.F. (1979), "An Algorithm for the Exact Likelihood of a Mixed Autoregressive Moving Average Process," Biometrika, 66, 59-65.
ANSLEY, C.F., and NEWBOLD, P. (1979), "Multivariate Partial Autocorrelations," Proceedings of the Business and Economic Statistics Section, American Statistical Association, 349-353.
BARTLETT, M.S. (1938), "Further Aspects of the Theory of Multiple Regression," Proceedings of the Cambridge Philosophical Society, 34, 33-40.
BEGUIN, J.M., GOURIEROUX, C., and MONFORT, A. (1980), "Identification of a Mixed Autoregressive-Moving Average Process: The Corner Method," in Time Series, ed. O.D. Anderson, Amsterdam: North-Holland, 423-436.
BOX, G.E.P., and HAUGH, L. (1977), "Identification of Dynamic Regression Models Connecting Two Time Series," Journal of the American Statistical Association, 72, 121-130.
BOX, G.E.P., and JENKINS, G.M. (1970), Time Series Analysis: Forecasting and Control, San Francisco: Holden-Day.
BOX, G.E.P., and MacGREGOR, J.F. (1974), "The Analysis of Closed Loop Dynamic Stochastic Systems," Technometrics, 16, 391-398.
BOX, G.E.P., and MacGREGOR, J.F. (1976), "Parameter Estimation with Closed-Loop Operating Data," Technometrics, 18, 371-380.
BOX, G.E.P., and NEWBOLD, P. (1970), "Some Comments on a Paper by Coen, Gomme and Kendall," Journal of the Royal Statistical Society, Ser. A, 134, 229-240.
BOX, G.E.P., and TIAO, G.C. (1975), "Intervention Analysis with Applications to Environmental and Economic Problems," Journal of the American Statistical Association, 70, 70-79.
BOX, G.E.P., and TIAO, G.C. (1977), "A Canonical Analysis of Multiple Time Series," Biometrika, 64, 355-365.
BRILLINGER, D.R. (1975), Time Series: Data Analysis and Theory, New York: Holt, Rinehart, and Winston.
CHAN, W.Y.T., and WALLIS, K.F. (1978), "Multiple Time Series Modelling: Another Look at the Mink-Muskrat Interaction," Applied Statistics, 27, 168-175.
COEN, P.G., GOMME, E.D., and KENDALL, M.G. (1969), "Lagged Relationships in Economic Forecasting," Journal of the Royal Statistical Society, Ser. A, 132, 133-163.
DEISTLER, M., DUNSMUIR, W., and HANNAN, E.J. (1978), "Vector Linear Time Series Models: Corrections and Extensions," Advances in Applied Probability, 10, 360-372.
DENT, W. (1977), "Computation of the Exact Likelihood Function of an ARIMA Process," Journal of Statistical Computation and Simulation, 5, 193-206.
DUNSMUIR, W., and HANNAN, E.J. (1976), "Vector Linear Time Series Models," Advances in Applied Probability, 8, 339-364.
GRANGER, C.W.J., and NEWBOLD, P. (1977), Forecasting Economic Time Series, New York: Academic Press.
GRAY, H.L., KELLEY, G.D., and McINTIRE, D.D. (1978), "A New Approach to ARMA Modelling," Communications in Statistics, B7, 1-77.
HALLIN, M. (1978), "Mixed Autoregressive-Moving Average Multivariate Processes with Time-Dependent Coefficients," Journal of Multivariate Analysis, 8, 567-572.
HANNAN, E.J. (1970), Multiple Time Series, New York: John Wiley.
HANNAN, E.J. (1980), "The Estimation of the Order of an ARMA Process," Annals of Statistics, 8, 1071-1081.
HANNAN, E.J., DUNSMUIR, W.T.M., and DEISTLER, M. (1980), "Estimation of Vector ARMAX Models," Journal of Multivariate Analysis, 10, 275-295.
HILLMER, S.C., and TIAO, G.C. (1979), "Likelihood Function of Stationary Multiple Autoregressive Moving Average Models," Journal of the American Statistical Association, 74, 652-660.
HOSKING, J.R.M. (1980), "The Multivariate Portmanteau Statistic," Journal of the American Statistical Association, 75, 602-607.
HSIAO, C. (1979), "Autoregressive Modeling of Canadian Money and Income Data," Journal of the American Statistical Association, 74, 553-560.
JENKINS, G.M. (1979), Practical Experiences with Modelling and Forecasting Time Series, Channel Islands: GJP Ltd.
LI, W.K., and McLEOD, A.I. (1980), "Distribution of the Residual Autocorrelations in Multivariate ARMA Time Series Models," TR-80-03, University of Western Ontario.
NEWBOLD, P. (1974), "The Exact Likelihood Function for a Mixed Autoregressive Moving Average Process," Biometrika, 61, 423-427.
NICHOLLS, D.F. (1976), "The Efficient Estimation of Vector Linear Time Series Models," Biometrika, 63, 381-390.
NICHOLLS, D.F. (1977), "A Comparison of Estimation Methods for Vector Linear Time Series Models," Biometrika, 64, 85-90.
NICHOLLS, D.F., and HALL, A.D. (1979), "The Exact Likelihood of Multivariate Autoregressive-Moving Average Models," Biometrika, 66, 259-264.
OSBORN, D.R. (1977), "Exact and Approximate Maximum Likelihood Estimators for Vector Moving Average Processes," Journal of the Royal Statistical Society, Ser. B, 39, 114-118.
PARZEN, E. (1977), "Multiple Time Series: Determining the Order of Approximating Autoregressive Schemes," in Multivariate Analysis-IV, ed. P. Krishnaiah, Amsterdam: North-Holland, 283-295.
PHADKE, M.S., and KEDEM, G. (1978), "Computation of the Exact Likelihood Function of Multivariate Moving Average Models," Biometrika, 65, 511-519.
QUENOUILLE, M.H. (1957), The Analysis of Multiple Time Series, London: Griffin.
QUINN, B.G. (1980), "Order Determination for a Multivariate Autoregression," Journal of the Royal Statistical Society, Ser. B, 42, 182-185.
SLUTSKY, E. (1937), "The Summation of Random Causes as the Source of Cyclic Processes," Econometrica, 5, 105-146.
TIAO, G.C., and ALI, M.M. (1971), "Analysis of Correlated Random Effects: Linear Model with Two Random Components," Biometrika, 58, 37-51.
TIAO, G.C., BOX, G.E.P., GRUPE, M.R., HUDAK, G.B., BELL, W.R., and CHANG, I. (1979), "The Wisconsin Multiple Time Series (WMTS-1) Program: A Preliminary Guide," Department of Statistics, University of Wisconsin, Madison.
TIAO, G.C., and WEI, W.S. (1976), "Effect of Temporal Aggregation on the Dynamic Relationship of Two Time Series Variables," Biometrika, 63, 513-523.
TONG, H., and LIM, K.S. (1980), "Threshold Autoregression, Limit Cycles and Cyclical Data," Journal of the Royal Statistical Society, Ser. B, 42, 245-292.
WALLIS, K.F. (1977), "Multiple Time Series Analysis and the Final Form of Econometric Models," Econometrica, 45, 1481-1497.
WHITTLE, P. (1963), "On the Fitting of Multivariate Autoregressions, and the Approximate Canonical Factorization of a Spectral Density Matrix," Biometrika, 50, 129-134.
WILSON, G.T. (1973), "The Estimation of Parameters in Multivariate Time Series Models," Journal of the Royal Statistical Society, Ser. B, 35, 76-85.
YULE, G.U. (1927), "On a Method of Investigating Periodicities in Disturbed Series, With Special Reference to Wolfer's Sunspot Numbers," Philosophical Transactions of the Royal Society of London, Ser. A, 226, 267-298.
ZELLNER, A., and PALM, F. (1974), "Time Series Analysis and Simultaneous Equation Econometric Models," Journal of Econometrics, 2, 17-54.
