Abstract
We review the roles of causality and exogeneity in macro-econometric time-series modelling,
forecasting and policy; their inter-relationships; problems in establishing them empirically; the
importance played by non-stationarity in the data generation process (DGP), and the relation of
causality to Granger causality. Although causality and exogeneity need not be invariant features of
the DGP, they remain relevant in non-stationary processes, and inferences about them become easier
in some ways, though more difficult in others.
1 Introduction
Scientific knowledge is always and everywhere fallible. The history of Newtonian gravitational theory
is the classic example—once deemed unassailable truth and repeatedly confirmed by experiment, it is
now seen as an approximate model. Interestingly, that view was in fact first proposed by Adam Smith
(1795). ‘Causal knowledge’ is simply a special case of this general problem, exacerbated by disputable
formulations of the concept, doubts about inference procedures, and in economics, by the complexities
of data processes. This chapter considers causality in non-stationary macro-economic time-series, where
causality is viewed as ‘actually determining an aspect of behaviour’: exogeneity—the property of being
‘determined outside the system under analysis’—is also considered, but primarily in relation to causality.
The literature on both concepts is vast, and thoroughly discussing it is well beyond the scope
of this chapter. However, many of the implications of causality and exogeneity were developed in
weakly-stationary systems, where it can be difficult to test for their presence or absence. All modern
macro-economies are non-stationary, owing to both stochastic and deterministic changes, and are often modelled as integrated-cointegrated systems prone to intermittent structural breaks: sudden large changes, invariably unanticipated. In such non-stationary data generation processes (DGPs), many results concerning
∗ Financial support from the UK Economic and Social Research Council under grants RES-015-27-0035 and RES-000-23-0539 is gratefully acknowledged. I am indebted for helpful comments on an earlier draft to Nancy Cartwright, Mike Clements, Clive Granger, Michael Massmann, Grayham Mizon, Adrian Pagan, Franz Palm, Julian Reiss and Elliott Sober; to participants at the 2001 EC2 meeting on ‘Causality and Exogeneity in Econometrics’ at Louvain-La-Neuve; and to four anonymous referees.
modelling, forecasting, and policy change substantially relative to the stationary case. The primary difficulty becomes that of establishing time-invariant relationships, but once that is achieved, the roles of causality and exogeneity are often more easily ascertained.
The precursor to the approach here lies in the theory of forecasting developed in Clements and
Hendry (1999, 2002b) and reviewed in Hendry and Clements (2003). Those authors are concerned with
the historical prevalence of forecast failure, defined as a significant deterioration in forecast performance
relative to the anticipated outcome (usually based on the historical performance of a model): for doc-
umentation see (e.g.) Stock and Watson (1996) and Clements and Hendry (2001b). The forecast-error
taxonomies established by the latter authors show that the main sources of such forecast failures are
changes in coefficients of deterministic terms, or location shifts. Structural breaks affecting mean-zero
variables pose fewer difficulties (see e.g. Hendry, 2000b), as do mis-specification and mis-estimation
when breaks do not occur. Such results entail reconsidering how causality and exogeneity apply when
empirical models are mis-specified for complicated, evolving and non-stationary DGPs. For example,
since one cannot prove that causal models need dominate in forecasting, neither forecast success nor
failure is informative about causal links. Nevertheless, in other settings, non-constancy can help deter-
mine causal direction.
The structure of the chapter is as follows. Section 2 describes the econometric background, noting
some of the many salient contributions to the literature. Then section 3 reviews the three well known
forms of weak, strong and super exogeneity in relation to inference, forecasting, and policy, and notes
their role in non-stationary systems. Analyzing exogeneity first is convenient for section 4, which then
provides a similar approach for causality, before briefly discussing Granger causality, following the
overview in Hendry and Mizon (1999). Section 5 notes some links between exogeneity and causality.
Section 6 examines their role in forecasting, and section 7 discusses the roles of exogeneity and causality
in policy analyses. Section 8 concludes.
2 Background
Let (Ω, F, P (·)) denote the probability space supporting a vector of m discrete-time, real random
variables wt , with sample space Ω and event space Ft−1 at each time t ∈ T . The joint density
Dwt (wt |Ft−1 , λ) for λ ∈ Λ ⊆ Rq is the data generation process (DGP) of the economy under analysis.
The q-dimensional parameter (or ‘index’) λ does not depend on Ft−1 at any t, but need not be constant
over time since λ is determined by economic agents’ decisions. The history of the stochastic process
{wt } up to time (t − 1) is denoted by Wt−1 = (W0 , w1 , . . . , wt−1 ) = (W0 , W1t−1 ), where W0 is the set of initial conditions. For a sample period t = 1, . . . , T , the DGP becomes DW (W1T | W0 , λ), and is sequentially factorized as:
sequentially factorized as:
DW (W1T | W0 , λ) = ∏_{t=1}^{T} Dwt (wt | Wt−1 , δ t )   (1)
where gδ (λ) = (δ 1 . . . δ T ) for a 1–1 function gδ (·). While the notation potentially allows the pa-
rameters of (1) to shift every period, we assume that such changes are in practice intermittent. His-
torical sources of non-constancy are regime shifts (changes in policy), and technological, legislative,
behavioural and institutional structural breaks (shifts in the parameters of the system). Since economic
processes also evolve for many reasons, the {wt } are likely to be integrated of at least first order (denoted I(1)). Thus, a key feature of (1) is the non-stationarity of {∆wt } (or of ∆2 wt if the level is I(2)), with changing first and second data moments. Because Dwt (·) alters over time, all expectations operators have to be time dated, as in Et [wt ] and Vt [wt w′t ] for unconditional expectations and variances respectively. Such dating is separate from the timing of any conditioning information, such as
Wt−1 . While Dwt (wt |Wt−1 , δ t ) characterizes the generation of the outcomes, measurement errors are
bound to add a further layer of complexity to empirical analyses (especially of causality), but as they
are not germane to the concepts per se, the {wt } are assumed to be observed.
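The effect of time-dated moments can be sketched in a few lines of code (a purely illustrative simulation; the DGP and all numbers are invented for this example): an I(1) process whose drift suffers one unanticipated location shift, so that Et [∆wt ] differs across regimes.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative DGP (invented numbers): w_t is I(1); the drift of the
# differenced process shifts at t = 100, so the first moment of dw_t
# is time dated, i.e. E_t[dw_t] changes across regimes.
T = 200
drift = np.where(np.arange(T) < 100, 0.5, -0.3)  # intermittent break in the drift
dw = drift + rng.normal(0.0, 1.0, T)             # the differenced process dw_t
w = np.cumsum(dw)                                # w_t is integrated of order one

# Sub-sample first moments of dw_t differ markedly across the two
# regimes, although dw_t is (locally) stationary within each regime.
print(dw[:100].mean(), dw[100:].mean())
```

Within each regime {∆wt } behaves like a stationary process, but any model assuming a single constant mean for ∆wt is mis-specified across the break.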
Reductions from the DGP for {wt } to that of the transformed subset of n < m variables {xt } to
be analyzed can radically alter the causality and exogeneity status of variables. In general (see, inter
alia, Hendry, 1995a, and Mizon, 1995), there exists a ‘local DGP’ Dxt (xt |Xt−1 , γ t ) with γ t ∈ Γ ⊆ Rℓ
derived from Dwt (wt |·) by aggregation, transformation, and marginalization with respect to all excluded
current-dated and lagged variables. Map from wt by an information-preserving transform h(·) such that
h(wt ) = (x′t , η ′t )′ . Then δ t is transformed to (ρ′t , γ ′t ), such that:
Dwt (wt | Wt−1 , δ t ) = Dηt|xt (η t | xt , Wt−1 , ρt ) Dxt (xt | Wt−1 , γ t ) .   (2)
To restrict attention to the marginal model, Dxt (·), without loss of information requires that:1
Dxt (xt | Wt−1 , γ t ) = Dxt (xt | Xt−1 , γ t ) ,   (3)
so that lagged η must be irrelevant. Finally, the r ≤ ℓ parameters of interest, µ, must be a function of
γ t alone, so ρt does not provide information about γ t . For example, if important explanatory variables
are omitted from (3), and ρt is non-constant, so will be the coefficients in (3) even when γ in (2) is
constant (see e.g., Hendry and Doornik, 1997). Such parametric links are intrinsic to (2), and the usual
assumption that the parameters in models thereof are variation free, so (γ t , ρt ) ∈ Γ×R, is insufficient.
When (3) holds, it is fully informative about the roles of the variables; but in general, there will be a
loss of information from the elimination of {η t }, with a consequential (unintended) transformation of
the parameters.
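The parameter-transformation point can be illustrated numerically (an invented DGP, not one from the chapter): when η t is eliminated by marginalization, a shift in its parameters ρt reappears as a shift in the marginal model for xt , even though the conditional relation is constant.

```python
import numpy as np

rng = np.random.default_rng(1)

# Sketch of the reduction argument (all coefficients invented): the
# relation of x_t to eta_t is constant, but marginalizing with respect
# to eta_t imports eta's non-constant parameters (rho_t) into the
# marginal model for x_t.
T = 200
eta_mean = np.where(np.arange(T) < 100, 0.0, 5.0)    # rho_t shifts at t = 100
eta = eta_mean + rng.normal(size=T)
x = 2.0 + 0.8 * eta + rng.normal(scale=0.5, size=T)  # constant 'gamma': (2.0, 0.8)

# A marginal model of x_t omitting eta_t has implied intercept
# 2.0 + 0.8 * E[eta_t], which is non-constant across the regimes.
print(x[:100].mean(), x[100:].mean())  # roughly 2.0 versus 6.0
```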
The econometric model of {xt } is formulated as:
fx (X1T | X0 , θ) = ∏_{t=1}^{T} fx (xt | Xt−1 , θ) ,  θ ∈ Θ ⊆ Rk ,   (4)
when fx (xt |Xt−1 , θ) is the postulated sequential joint density at time t. We assume that k < ℓ and that θ represents the constant (meta)parameters postulated by the model builder. Inevitably, fx (·) ≠ Dxt (·),
which may affect inferences about exogeneity or causality. An econometric model is congruent if it
matches the data evidence in all relevant directions, and Bontemps and Mizon (2003) show that such
a model would encompass the local DGP (denoted LDGP). Further, a successful analysis requires that
µ = gµ (θ), and while achieving that goal is difficult, progressive research can glean important insights
even in processes that are not time invariant.
1 We ignore any complications related to the role of the initial conditions, W0 .
3 Exogeneity
The notion of exogeneity, or synonyms thereof, in relation to econometric modelling dates back to the
origins of the discipline (see e.g., Morgan, 1990, and Hendry and Morgan, 1995), including important
contributions by Koopmans (1950) and Phillips (1957). Here we follow the approach in Richard (1980),
formalized by Engle, Hendry and Richard (1983): Ericsson (1992) provides an exposition.
We assume that fx (xt |Xt−1 , θ) in (4) is a valid reduction to a congruent model of the LDGP, and
partition x′t = (y′t : z′t ) where yt is n1 × 1 and zt is n2 × 1 with n = n1 + n2 , with Xt−1 partitioned accordingly as (Yt−1 , Zt−1 ). To model yt conditional on zt , factorize fx (xt |Xt−1 , θ) into a conditional and a marginal density, transforming θ ∈ Θ to ψ ∈ Ψ:
ψ = gψ (θ) ,   (5)
such that gψ (·) is a 1–1 reparametrization which sustains the partition ψ ′ = (ψ ′1 : ψ ′2 ), when ψ i has ki elements (k1 + k2 = k), and:
fx (xt | Xt−1 , θ) = fy|z (yt | zt , Xt−1 , ψ 1 ) fz (zt | Xt−1 , ψ 2 ) .   (6)
As only fy|z (yt | zt , Xt−1 , ψ 1 ) is to be retained, the analysis of µ is without loss of information relative to (4) when zt is weakly exogenous for µ, namely µ = f (ψ 1 ) alone and (ψ 1 , ψ 2 ) ∈ Ψ1 × Ψ2 .
Thus exogeneity is not a property of a variable: in (3), all variables are endogenous. Moreover, weak
exogeneity is just as relevant to instrumental variables estimation, as the marginal density of zt then
relates to the distribution of the putative instruments: merely asserting orthogonality is inadequate, as
illustrated by counter examples in Hendry (1995a, 1995c).
Strong exogeneity also requires the absence of feedback from lagged yt−i onto zt so that:
∏_{t=1}^{T} fz (zt | Xt−1 , ψ 2 ) = fZ (Z1T | X0 , ψ 2 ) .   (7)
Finally, super exogeneity requires weak exogeneity and the invariance of µ to changes in ψ 2 . As is
well known, the three concepts respectively sustain conditional inference; conditional multi-step fore-
casting;2 and conditional policy analysis (when components of zt are policy instruments).
The absence of links between the parameter spaces Ψ1 and Ψ2 is not purely a matter of model specification, dependent on the parameters of interest and the choice of the investigator. In some
settings, as neatly illustrated by Ericsson (1992), changing the parameters of interest can deliver or lose
weak exogeneity. But in other settings, the LDGP determines the actual, as opposed to the claimed,
exogeneity status. Factorize (3) as:
Dxt (xt | Xt−1 , γ t ) = Dyt|zt (yt | zt , Xt−1 , φ1,t ) Dzt (zt | Xt−1 , φ2,t ) .   (8)
When µ enters both φ1,t and φ2,t in (8), inference can be distorted if (6) falsely asserts weak exogeneity:
see e.g., Phillips and Loretan (1991). The consequences of failures of weak exogeneity can vary from
a loss of estimation efficiency through to a loss of parameter constancy, depending on the source of the
problem: see Hendry (1995a, ch. 5). We now illustrate both extreme cases.
First, consider an experimental setting where the Gauss–Markov conditions might appear to be
satisfied so that:
y = Zβ + ε with ε ∼ NT (0, σ 2 I) ,   (9)
so E[y | Z] = Zβ, and hence E[Z′ ε] = 0. Nevertheless, OLS need not be the most efficient unbiased estimator of β.
2 Projecting yT +h onto xT −i also circumvents any feedback problem: see e.g., Bhansali (2002).
An explicit weak exogeneity condition is required when Z is stochastic, such that β cannot be learned
from its marginal distribution. Otherwise, an unbiased estimator which is an affine function of y can
dominate, possibly dramatically: see Hendry (2003). Secondly, if zt is simultaneously determined with
yt , yet experiences a location shift, then a conditional model of yt given zt (incorrectly treating zt as
weakly exogenous) will have non-constant parameters even though the DGP equation for yt is constant.
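A minimal simulation of this second case (with invented coefficients) makes the point concrete: yt and zt are simultaneously determined, the structural equation for yt never changes, yet the conditional regression of yt on zt shifts when the location of zt does.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical simultaneous DGP (coefficients invented):
#   y_t = 0.5 z_t + e1_t,   z_t = 0.8 y_t + mu_t + e2_t,
# with a location shift in mu_t at t = 200. The DGP equation for y_t
# is constant throughout.
T = 400
mu = np.where(np.arange(T) < 200, 0.0, 10.0)
e1 = rng.normal(size=T)
e2 = rng.normal(size=T)
z = (mu + 0.8 * e1 + e2) / (1.0 - 0.5 * 0.8)  # reduced form for z_t
y = 0.5 * z + e1

def ols(yv, zv):
    """Intercept and slope from regressing yv on a constant and zv."""
    X = np.column_stack([np.ones(len(zv)), zv])
    return np.linalg.lstsq(X, yv, rcond=None)[0]

b_pre, b_post = ols(y[:200], z[:200]), ols(y[200:], z[200:])
# z_t is not weakly exogenous in the conditional model of y_t given z_t:
# both sub-sample slopes are biased away from the structural 0.5, and
# the intercept shifts across regimes even though the structural
# y-equation never changed.
print(b_pre, b_post)
```

The sub-sample slopes agree (both biased away from the structural value), but the conditional intercept shifts with the location of zt , so the invalid weak exogeneity assertion becomes visible only once the break occurs.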
Although, as Engle et al. (1983) note, it can be difficult to test exogeneity claims when unconditional second moments are constant over time (despite only one specification corresponding to reality), both main forms of non-stationarity, namely integration and structural breaks, alter the analysis, as we now show.
First, cointegrated systems provide a major forum for testing one aspect of exogeneity: see inter alia,
Hunter (1992a, 1992b), Urbain (1992), Johansen (1992), Dolado (1992), Boswijk (1992), and Paruolo
and Rahbek (1999). Equilibrium-correction mechanisms which cross-link equations violate long-run
weak exogeneity, confirming that weak exogeneity cannot necessarily be obtained merely by choosing
the ‘parameters of interest’. Conversely, the presence of a given disequilibrium term in more than one
equation is testable.
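That testability can be sketched as follows (a hypothetical bivariate cointegrated DGP with invented coefficients, estimated by simple OLS rather than a full system method): the equilibrium-correction term should enter the ∆yt equation but not the ∆zt equation when zt is weakly exogenous for the long-run parameter.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical bivariate cointegrated DGP (coefficients invented):
#   dy_t = -0.5 (y_{t-1} - z_{t-1}) + e1_t,   dz_t = e2_t,
# so only the y-equation equilibrium-corrects, and z_t is weakly
# exogenous for the cointegration parameter.
T = 500
y = np.zeros(T)
z = np.zeros(T)
for t in range(1, T):
    z[t] = z[t - 1] + rng.normal()
    y[t] = y[t - 1] - 0.5 * (y[t - 1] - z[t - 1]) + rng.normal()

ecm = (y - z)[:-1]          # lagged disequilibrium, y_{t-1} - z_{t-1}
dy, dz = np.diff(y), np.diff(z)

def loading(d, ec):
    """OLS loading of the equilibrium-correction term, and its t-ratio."""
    X = np.column_stack([np.ones(len(ec)), ec])
    beta = np.linalg.lstsq(X, d, rcond=None)[0]
    u = d - X @ beta
    s2 = (u @ u) / (len(d) - 2)
    se = np.sqrt(s2 * np.linalg.inv(X.T @ X)[1, 1])
    return beta[1], beta[1] / se

print(loading(dy, ecm))  # loading near -0.5 with a large |t|-ratio
print(loading(dz, ecm))  # loading near 0: z_t does not equilibrium-correct
```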
Secondly, processes subject to structural breaks sustain tests for super exogeneity and the Lucas
(1976) critique: see e.g., Hendry (1988), Fischer (1989), Favero and Hendry (1992), and Engle and
Hendry (1993). When conditional models are constant despite data moments changing considerably,
there is prima facie evidence of super exogeneity for that model’s parameters; and if the model as
formulated does not have constant parameters, resolving that failure ought to take precedence over
issues of exogeneity. However, while such tests are powerful for location shifts, changes to ‘reaction
parameters’ of mean-zero stochastic variables are difficult to detect: see (e.g.) Hendry (2000b).
Richard (1980) considers changes in causal direction, reconsidered in section 5.1 in relation to
determining ‘causal direction’ through a lack of time invariance.
4 Causality
The concept of causality has intrigued philosophers for millennia, so section 4.1 provides a brief in-
cursion into the philosophy of causality. Section 4.2 looks at notions of causality in economic dis-
course, then sub-section 4.3 considers causal inference in observational disciplines. Both statisticians
and economists have attempted to test for its existence, as it is particularly important if one is interested
in testing theories or conducting policy analysis: see Simon (1952), Zellner (1979), Hoover (1990) and
Hendry (1995a) for general discussions of causality and causal inference in econometrics; and e.g.,
Holland (1986), Cox (1992) and Lauritzen and Richardson (2002) in statistics. Finally, sub-section 4.4
evaluates the concept of Granger causality, and its empirical implementation. Section 5 considers links
between causality and exogeneity.
As remarked in Hendry (1995a), ‘causality’ is a philosophical minefield – even without the complexities
introduced by quantum physics, one-off events, anticipations, ‘simultaneity’, and the issue that (e.g.)
changes may enlarge or restrict the opportunity sets of agents who could choose to do otherwise than just
react mechanically. Thus, we will step as lightly as feasible: the increase in the prevalence of footnotes
signals the difficulty. Fortunately, Hoover (2001) provides an excellent discussion. In particular, he
notes that many analyses conflate the concept with inference about it, which he calls ‘the epistemic
fallacy’, essentially confusing truth with its criterion. We will first consider the concept, briefly comment
on what makes something a cause, then turn to how one might ascertain causal relationships.3
What makes a cause just that? A cause is a quantitative process that induces changes over time,
mediated within a structure. In turn, a structure is an entity that remains invariant to interventions
‘and directly characterizes the relationships involved’ (i.e., corresponds to reality): see Hendry (1995a,
ch. 2).4 The relation between cause and effect is asymmetric: the latter does not induce the former. The
‘causal field’ or structure is central, as changes to that structure can alter what causes what.
This notion of cause is consistent with at least some earlier formulations, such as Simon (1957,
chs. 1, 3) who formalizes causal order as an asymmetric relation invariant under interventions to the
‘basic’ parameters of the system; Cartwright (1989) who discusses ‘causal capacities’, such that causes
out need causes in (i.e., we need to postulate that causal links already exist); Hoover (1990) who argues
for invariance under interventions; and Hoover (2001) where cause is an asymmetric relationship with
‘unconnected parameters’ within a causal structure that is a feature of reality.5
Our everyday thinking is replete with causal assertions: the car stopped because the driver braked;
the atom disintegrated because it was hit by the proton; the pressure of the gas rose because heat was
applied; output rose because interest rates fell;... At one level, such statements seem unexceptional.
Consider the first example: the brakes were first applied; the braking system remained intact; the harder
the brakes were applied, the more the car slowed; the brakes stopped the car (not the car slowing ‘caused’
the brakes to come on). If the brake cable were cut (as in some murder stories), or the brake fluid was
too low (as happens in reality) and so on, pressing the brake pedal would achieve little: that system is
not invariant to such ‘interventions’. More fancifully, the car might be attached to a cable, the tightening
of which ‘actually causes’ it to slow, so the braking is incidental – causal discussion is often based on
thought experiments involving counterfactuals. More realistically, it could be argued that the driver
caused the car to stop, or even that the trigger for the driver pressing the brakes was the cause... ‘Causal
chains’ may have many steps, with the ‘ultimate causes’ possibly hidden from human knowledge.
Consequently, it is difficult to independently specify what makes something a cause. Given the
key role of the ‘causal structure’, causal connections need be neither necessary nor sufficient. The for-
mer (necessity) implies that if A is to cause B, then not-A entails not-B; the latter (sufficiency) that
not-B entails not-A. Counter examples to necessity arise when there are multiple possible causes, as
in medicine – smoking causes lung cancer, but so might environmental pollution. Equally, sufficiency
3 This organization partially follows Hoover’s discussion of Hume, who distinguished between the conceptual, ontological, epistemological, and pragmatic aspects of causality. Hoover cites Hume as complaining that his efforts at clarification ‘heated [his] brain’, a view with which I sympathize.
4 The perceptive reader will not miss the problem: it is impossible to define systems that are invariant to all interventions – else nothing would change. While that is a view of the world which Parmenides might have endorsed, Heraclitus would have disagreed, since he regarded flux as so intense one cannot even step into the same river twice (on both views, see Gottlieb, 2000). Neither of these settings is conducive to causal inference – but then neither seems realistic. However, Nancy Cartwright has correctly noted that my definition is in the nature of a sufficient condition to discern a cause, rather than defining cause per se. For example, a rope can cause an object to which it is attached not to move.
5 Conversely, the notion of a ‘causal chain’, in (say) Strotz and Wold (1960), was shown to be insufficient to characterize exogeneity by Engle et al. (1983) precisely because it failed to specify the absence of links between parameters.
crumbles if (say) whether or not lung cancer occurs in any given smoker may depend on the quality
of their ‘DNA repair kit’: a really good ‘kit’ may continually repair the incipient damage so the dis-
ease never appears. Such examples reinforce that the ‘causal field’ characterizing the system and its
background conditions are central: economists are anyway used to the idea that non-stationarities and
non-linearities may mean that a cause sometimes has an effect, and sometimes does not, depending on
thresholds. But these are difficulties for inference about causal links, not about causality as a concept.
It can be no surprise that ascertaining actual causes is difficult. My countryman, David Hume
(1758) tends to be the source of much thinking about the topic.6 Hume asserted that we cannot know
necessary connections in reality (my italics). The key word is know, for therein lies the difficulty: causal
inference is uncertain (although, as Hoover, 2001, notes, this did not stop Hume from arguing in ‘causal’
terms about economics). Just as we cannot have a criterion for truth, yet science seeks for ‘truth’, so
with causal models, we cannot have a criterion for knowing that the necessary connections have been
ascertained. Hoover himself remarks (p24) that “our knowledge of reality consists of empirically based
conjectures and is necessarily corrigible”. Even greater problems arise when inferring anything from
probability relations. Scientists – and economists – often formulate theory models with theory causal
connections between variables, test those models empirically, and if they are not rejected, use the theory
connections as if they held in reality: the ‘causes in’ come from the theory, and the ‘causes out’ from
the evidence not rejecting that theory—rather a weak basis. Policy changes often provide the backdrop
for such testing, and play an important role below, so we turn to economics.
It may be thought that non-experimental research is at a severe disadvantage for causal inference com-
pared to experimental (see Blalock, 1961, and Wold, 1969, on causal inference in non-experimental
research). While it is undoubtedly true in many cases that an efficacious, well-designed and care-
fully controlled experiment can deliver important insights, experimental evidence may also mislead (see
Cartwright, 1983, for a general critical appraisal of physical ‘laws’ mainly derived from experiments).
Doll (2001) recounts the fascinating tale of how cigarette smoking was implicated in lung cancer. Initial
suggestions in the 1920s of a possible connection (based on the steady rise in both) prompted a series
of laboratory ‘tests’ on animals, which failed to find any link. When Doll commenced his own observa-
tional study, he suspected that the widespread use of ‘tar’ as a road surfacing material might be to blame,
and focused on testing that idea. Fortunately, he also collected data on other aspects of the sample of
hospital patients investigated. Tabulating their smoking prevalence against lung cancer incidence re-
vealed a dramatically strong connection. Later experiments confirmed his finding. In retrospect, we can
understand why the early experiments had not found the link: there is a long latency between smoking
and contracting the disease, and the first experiments had not persevered for sufficiently long, whereas
later investigators were more persistent. Doll provides several other examples where theoretical or ex-
perimental analyses failed to suggest links whereas careful study of the observational data revealed the
causal connections.7
Economists, of course, have long been aware of both spurious (see Yule, 1897) and nonsense (see
Yule, 1926) correlations. Indeed, the claim that causal inference is hazardous from observational data
alone is buttressed by issues of observational equivalence (see e.g., Basmann, 1988), lack of identi-
fication, and model mis-specification (see e.g., Lutkepohl, 1982). However, the potential dangers for
empirical research from these sources may have been over-emphasized: it is here that non-stationarities
due to DGPs suffering structural breaks begin to offer dividends, and not just complications. In a world
subject to intermittent large shifts, observational equivalence and lack of unique identification seem
most unlikely: the problem is finding any autonomous relations, not a plethora of them. Model mis-
specification is manifestly ubiquitous, and poses problems for all aspects of inference, not just about
7 Doll also highlights the possible dangers in such inferences, citing the close observational link between cirrhosis of the liver and smoking, which we now know is intermediated by alcohol (a higher proportion of smokers drink heavily than non-smokers).
causality. However, enhanced data variation can help to diagnose mis-specification in a progressive
research strategy, namely an approach in which knowledge is gradually accumulated as codified, repro-
ducible, information about the world without needing complete prior information as to its nature (see
e.g., Hendry, 1995a, ch. 8, and Hendry, 2000a, ch. 8). For example, location shifts in policy variables
that do not induce forecast failure in relationships linking them to targets provide strong evidence of a
causal link: see section 7.
Most economic relations are inexact, so stochastic formulations are inevitable, leading to both statistical
testing and, more fundamentally, to the idea of evaluating causality in terms of changes to the joint
distributions of the observables. Granger (1969, 1980, 1988b) has been the most forceful advocate
of such an approach: also see Chamberlain (1982), Newbold (1982), Geweke (1984), Phillips (1988),
Florens and Mouchart (1982, 1985), Mosconi and Giannini (1992), and Toda and Phillips (1993, 1994).
Granger (1969) provided a precise definition of his concept, although most later investigators followed
a different route. The fundamental basis for such tests is that causes contain ‘special information’ about
effects not contained elsewhere in the information set; and that ‘time’s arrow’ is unidirectional – only
the past can cause the present, the future cannot. Anticipations of future events can influence present
outcomes, but absent a crystal ball, those must be functions of available information. Thus, ‘We inhabit
a world in which the future promises endless possibilities and the past lies irretrievably behind us.’
(Coveney and Highfield, 1990, p297, who base the direction of time on increasing entropy).
Specifically, Granger (1969) proposed that if, in the universe of information, deleting the history of one set of variables does not alter the joint distribution of any other variables, then the omitted variables do not cause the others: we refer to this property as Granger non-causality (denoted GNC).
Conversely, knowing the cause should help forecast the future.
(3) above required that η did not Granger cause x. There, Dxt (xt | Wt−1 , γ t ) defined the ‘causal
structure’ which directly characterizes the relationships (i.e., it is the DGP of xt ); the causality takes
place in time, and is asymmetric (without precluding that the opposite direction may also hold); and the
past induces changes in other variables. Granger causality, therefore, has many attributes in common
with the earlier characterization of cause. However, the issue of invariance is not resolved, Granger’s
definition of causality does not explicitly involve parameters (see e.g., Engle et al., 1983, Buiter, 1984,
and Hoover, 2001), and contemporaneous links are eschewed. Consequently, Granger causality does
not seem to completely characterize the notion of ‘cause’. The ‘anticipations’ example in section 4.2
is sometimes cited as a counter example, but does not in fact violate the concept, and would anyway
confuse inference procedures for most definitions of causality unless the entire mechanism was known.
Perhaps most importantly, the definition of GNC is non-operational as it relates to the universe of in-
formation. Empirical tests for Granger causality are usually based on reductions within fx (xt |Xt−1 , ·),
often without testing its congruence. Hendry and Mizon (1999) defined empirical Granger non-causality
(EGNC) with respect to the information set generated by {xt } as follows. If the density fz (·) does not
depend on Yt−1 , so that:
fz (zt | Xt−1 , ·) = fz (zt | Zt−1 , X0 , ·) , (10)
then y does not empirically Granger-cause z. Since causality in the LDGP need not entail that in the
DGP, and vice versa, it cannot be a surprise that the same is true of EGNC. Among the drawbacks of
EGNC, Hendry and Mizon (1999) list:
• the presence of EGNC in a model does not entail the existence of GNC in the DGP;
• the existence of GNC in the DGP need not entail the presence of EGNC in a model (see e.g.,
Hendry, 1997);
• the existence of GNC is specific to each point in time, but detecting EGNC requires extended
sample periods.
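A standard implementation of an EGNC test can be sketched as follows (a hypothetical bivariate VAR(1) with invented coefficients, using a t-test on the single excluded lag rather than a general F-test): regress each variable on its own lag and the other's lag, and test whether the other's lag can be deleted.

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical bivariate VAR(1) (coefficients invented): y_t Granger-causes
# z_t through the 0.4 coefficient, but there is no feedback from z to y.
T = 500
y = np.zeros(T)
z = np.zeros(T)
for t in range(1, T):
    y[t] = 0.5 * y[t - 1] + rng.normal()
    z[t] = 0.5 * z[t - 1] + 0.4 * y[t - 1] + rng.normal()

def t_ratio(dep, own_lag, other_lag):
    """t-ratio on the other variable's lag in a one-lag regression."""
    X = np.column_stack([np.ones(len(dep)), own_lag, other_lag])
    beta = np.linalg.lstsq(X, dep, rcond=None)[0]
    u = dep - X @ beta
    s2 = (u @ u) / (len(dep) - X.shape[1])
    se = np.sqrt(s2 * np.linalg.inv(X.T @ X)[2, 2])
    return beta[2] / se

print(t_ratio(z[1:], z[:-1], y[:-1]))  # large: reject EGNC of y for z
print(t_ratio(y[1:], y[:-1], z[:-1]))  # small: EGNC of z for y not rejected
```

Note that such a test is only informative about EGNC within the {xt } information set and the maintained lag specification, which is precisely the gap between EGNC and GNC stressed above.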
Any or all of these drawbacks can seriously confound empirical tests. The first two are important
results for interpreting empirical modelling, and are almost certainly difficult to disentangle in station-
ary processes. However, concerning the third, if parameter change is a feature of reality, that is hardly
the fault of Granger’s conceptualization. In any case, location shifts in subsets of the variables should
help clarify what the genuine links are – bringing Granger causality close to the concept in section 4.1.
Hendry and Mizon (1999) also show that EGNC plays a pervasive role in econometrics, irrespective of
whether or not there exist ‘genuine DGP causes’. They illustrate their claim for ten areas of economet-
ric modelling, namely marginalizing; conditioning; distributions of estimators and tests; inference via
simulation; cointegration; encompassing; forecasting; policy analysis; dynamic simulation; and impulse
response analysis. As their paper is recent, we skip the details, although several issues recur below.
Super exogeneity augments weak exogeneity by the requirement that the parameters of interest be in-
variant to shifts in the parameters of the marginal distribution. Such a condition is far stronger than
‘variation freeness’, and is more like the condition of ‘independent parameters’ used in the causality
formulation of Hoover (2001). If causality corresponded to a regular response that was invariant under
interventions, then super exogeneity would embody several of the key elements of causality. Simon
(1952, 1953) used the invariance of a relationship under interventions to an input variable as an opera-
tional notion of cause, as did Hoover (1990).
Consider the conditional model:
fy|z (yt | zt , Xt−1 , ψ 1 ) ,   (11)
where:
∂yt /∂z′t ≠ 0 ,   (12)
and the parameters of interest are µ = fµ (ψ 1 ). When zt is super exogenous for µ, so ψ 2 in (7) can
change without altering the conditional relationship between yt and zt , then many of the ingredients for
a cause are satisfied: an asymmetric quantitative process, (12), that induces changes over time, mediated
within the structure Dyt |zt (·). Thus, zt causes the resultant change in yt , and the response of yt to zt
remains the same for different sequences {zt }. Our ability to detect changes in ψ 1 in response to shifts
in ψ 2 depends on the magnitude of the changes in the latter: large or frequent changes in ψ 2 that left
(11) invariant would provide strong evidence of a causal link which could sustain policy changes when
(e.g.) zt was under government control.
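A stylized version of such evidence is easy to generate (an invented DGP in which zt really is super exogenous, with conditional slope 0.5): the marginal mean of zt shifts twice, yet the conditional parameters are constant in every regime.

```python
import numpy as np

rng = np.random.default_rng(5)

# Invented DGP illustrating super exogeneity: the marginal process for
# z_t undergoes two location shifts (psi_2 changes), but the conditional
# model of y_t given z_t has invariant parameters (2.0, 0.5).
T = 600
mu = np.repeat([0.0, 8.0, -4.0], 200)  # marginal mean of z_t by regime
z = mu + rng.normal(size=T)
y = 2.0 + 0.5 * z + rng.normal(scale=0.5, size=T)

def ols(yv, zv):
    """Intercept and slope from regressing yv on a constant and zv."""
    X = np.column_stack([np.ones(len(zv)), zv])
    return np.linalg.lstsq(X, yv, rcond=None)[0]

# Conditional parameters stay close to (2.0, 0.5) in every regime,
# despite large shifts in the marginal mean of z_t: prima facie
# evidence that z_t is super exogenous for those parameters.
for r in range(3):
    print(ols(y[200 * r:200 * (r + 1)], z[200 * r:200 * (r + 1)]))
```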
However, economic policy usually depends on disequilibria in the rest of the economy (such as ex-
cess demands) which would appear to interlink private sector and policy parameters, thereby violating
weak exogeneity of zt for ψ 1 as required for super exogeneity. Nevertheless, as Hendry and Mizon
(2000) argue, reliable estimates of policy responses can be obtained when zt is not weakly exogenous
provided the parameters to be shifted by the policy agency are not functions of ψ 1 : for example, cointe-
gration relations are often the basis of estimated disequilibria, and are usually established at the level of
the complete system, whereas their parameters are not generally subject to policy interventions (which
might explain the problems arising in cases when they are, as – say – in the transition of economies
from controlled to free markets).
A similar argument would seem to hold when nature creates the ‘experimental design’. For example,
Hume’s ‘causal’ analysis of inflation in response to a major gold discovery assumes invariant relations
to propagate the shock.
The appendix to Engle and Hendry (1993) in Ericsson and Irons (1994) shows that if a given conditional
model has both invariant parameters and invariant error variances across regimes, whereas the joint
process varies across those regimes, then the reverse regression cannot have invariant parameters. Thus,
such a reverse conditioning should fail one or both of the constancy and invariance tests, precluding its interpretation as a causal link and leaving at most one direction of conditioning as a candidate cause. This notion is echoed in the analysis of
interventions in relation to interpreting causality in directed acyclic graphs by Lauritzen and Richardson
(2002). Importantly, a break in a model for a subset of variables will persist even under extensions of
the information set: for example, if an unmodelled jump in CPI inflation is later ‘explained’ by a jump
in oil prices, the latter now reflects the break in the model: see e.g. Hendry (1988).8
8 Of course, apparent breaks may be the result of an unmodelled non-linearity: slipping down a gentle bank then over a cliff is a non-linear effect, but the problem is the break at the end.
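The reverse-regression result can be checked numerically. In the hedged sketch below (numpy only; beta, sigma and the regime variances are illustrative assumptions), the forward regression of y on z has an invariant slope across regimes, whereas the reverse slope has probability limit beta·var(z)/(beta²·var(z) + sigma²) and so moves with the marginal variance of z.

```python
import numpy as np

rng = np.random.default_rng(1)
beta, sigma = 1.0, 1.0  # illustrative values

def slopes(var_z, n=4000):
    """Forward (y on z) and reverse (z on y) no-intercept OLS slopes in one regime."""
    z = np.sqrt(var_z) * rng.standard_normal(n)
    y = beta * z + sigma * rng.standard_normal(n)
    fwd = (z @ y) / (z @ z)   # invariant: estimates beta in every regime
    rev = (z @ y) / (y @ y)   # plim beta*var(z)/(beta**2*var(z)+sigma**2): regime-dependent
    return fwd, rev

variances = (1.0, 4.0, 9.0)
fwds, revs = zip(*(slopes(v) for v in variances))
print(fwds)  # all close to 1.0
print(revs)  # close to 0.5, 0.8, 0.9: the reverse regression is not invariant
```

Only one direction of conditioning survives the constancy and invariance tests, matching the 'at most one possibility' conclusion in the text.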
Fildes and Makridakis (1995) and Makridakis and Hibon (2000) stress the discrepancy between theory
and empirical findings in forecasting in general. Clements and Hendry (2001a) show how their theory
can help close that gap. Moreover, they also account for the ‘principles’ based on empirical econometric
forecasting performance enunciated by Allen and Fildes (2001), who find that admissible reductions of
VARs with relatively generous lag specifications, estimated by least squares, and tested for constant
parameters do best on average, even though congruent models need not outperform non-congruent ones.
Such findings stand in marked contrast to what can be established when the economy is representable
as a stationary stochastic process (with unconditional moments which are constant over time): well-
tested, causally-relevant congruent models which embodied valid restrictions would both fit best, and
by encompassing, dominate in forecasting on average. Unfortunately, new unpredictable events continue
to occur, so any operational theory of economic forecasting must allow for data moments altering: see
Stock and Watson (1996) and Clements and Hendry (2001b) on the prominence of structural change in
macroeconomic time-series.
Thus, for forecasting per se, correctly established causality in congruent models does not ensure
success. However, care is essential in how that result is interpreted. Location shifts (or mimics thereof)
are sufficiently common to have left a less than impressive track record of macro-econometric fore-
casting. No one can forecast the unpredictable, and all devices will fail for events that were ex ante
unpredictable. But some devices do not adjust after breaks and so suffer systematic forecast failure:
equilibrium-correction mechanisms based on cointegration relationships whose mean has changed – but
that is not known – are a potentially disastrous example. Nevertheless, causal-based modelling should
not be abandoned as a basis for forecasting, particularly in a policy context (see section 7): rather, the
solution lies in formulating variants of models that are robust to such shifts. Intercept corrections and
additional differencing are simple possibilities, but hopefully better ones will be developed now that an
explanation exists for the historical outcomes (see e.g., Hendry, 2004).
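A minimal simulation illustrates both the failure and the remedies just described (all numerical values are assumptions of the sketch): an equilibrium-correction forecast built around a mean that has shifted fails systematically, whereas a differenced (random-walk) device and an intercept correction both adjust after the break.

```python
import numpy as np

rng = np.random.default_rng(2)
rho, mu0, delta, n, t_break = 0.8, 0.0, 5.0, 400, 200

# AR(1) around a mean that undergoes a location shift at t_break
y = np.zeros(n)
for t in range(1, n):
    mu = mu0 + (delta if t >= t_break else 0.0)
    y[t] = mu + rho * (y[t-1] - mu) + 0.3 * rng.standard_normal()

post = np.arange(t_break + 20, n)          # once the shift has worked through
f_ecm = mu0 + rho * (y[post-1] - mu0)      # 'causal' model still using the old mean
f_rw = y[post-1]                           # differenced (random-walk) device
f_ic = f_ecm + (y[post-1] - (mu0 + rho * (y[post-2] - mu0)))  # intercept correction

err_ecm = np.abs(y[post] - f_ecm).mean()
err_rw = np.abs(y[post] - f_rw).mean()
err_ic = np.abs(y[post] - f_ic).mean()
print(err_ecm, err_rw, err_ic)  # the mis-located model fails systematically; the devices do not
```

The robust devices carry no causal content at all: they succeed precisely because they do not equilibrium-correct towards the outdated mean.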
However, it is the converse implication that is crucial: causal links are not sensibly tested by forecast
evaluation, since neither success nor failure entails correct or incorrect attribution of causality. Forecast
failure could, but need not, imply inappropriately attributed causal links. Indeed, all four possibilities
can occur without logical contradiction: correct causality followed by forecast failure; incorrect causality followed by forecast failure; correct causality followed by forecast ‘success’; and incorrect causality followed by forecast ‘success’. Worse still, ‘tricks’ exist that can help avoid forecast failure independently of the validity of any causal attributions by the model in use: intercept corrections are a well-known device with such properties.
Despite such a fundamental result, many economists seem to persist in the view that ‘forecasting is
the ultimate test of a model’. Three comments can be made about such a view: see Clements and Hendry
(2003). First, ex post parameter-constancy tests and ex ante forecast evaluations have very different
properties, with the latter susceptible to many additional problems such as (increased) data inaccuracy
at the forecast-origin and over the forecast horizon. Secondly, non-constancy in coefficients of mean-
zero variables has a much smaller impact on forecast accuracy than changes in location components.
Thirdly, when the future can differ substantively from the past, ‘because of the things we don’t know we
don’t know’ (see Singer, 1997), forecast failure is not so much a diagnostic of a model as an indication of
an event unrelated to existing information: thus, it potentially provides new knowledge. Consequently,
if the basis for their view is that new evidence is needed for ‘independent’ checking of inferences, when
data moments are non-constant, then great care is needed in using such information correctly, and ex
ante forecast evaluation is unlikely to be a reliable approach to doing so.
Turning to the role of Granger causality, for multi-step conditional forecasts, EGNC of the condi-
tioning variables is crucial for valid inferences: neglected feedbacks would otherwise violate the condi-
tioning assumptions. On the other hand, EGNC does not matter in closed models, where every variable
is jointly forecast, nor for 1-step ahead forecasts even in open models, nor if multi-step estimation is
used.
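For concreteness, a one-lag Granger non-causality F-test can be sketched in a few lines of numpy (the DGP and all coefficient values are assumptions of the illustration; empirical GNC tests would, as argued above, require a congruent, encompassing system rather than this toy bivariate model).

```python
import numpy as np

rng = np.random.default_rng(3)
n = 1000
y, z = np.zeros(n), np.zeros(n)
for t in range(1, n):
    z[t] = 0.5 * z[t-1] + rng.standard_normal()                  # z evolves autonomously
    y[t] = 0.4 * y[t-1] + 0.6 * z[t-1] + rng.standard_normal()   # lagged z drives y

def rss(X, v):
    b, *_ = np.linalg.lstsq(X, v, rcond=None)
    e = v - X @ b
    return e @ e

def granger_F(target, other):
    """F-statistic for dropping one lag of `other` from an AR(1) model of `target`."""
    v, lag_t, lag_o = target[1:], target[:-1], other[:-1]
    const = np.ones_like(v)
    rss_u = rss(np.column_stack([const, lag_t, lag_o]), v)  # unrestricted
    rss_r = rss(np.column_stack([const, lag_t]), v)         # restricted: GNC imposed
    return (rss_r - rss_u) / (rss_u / (len(v) - 3))

print(granger_F(y, z))  # large: lagged z helps predict y
print(granger_F(z, y))  # small: lagged y does not help predict z
```

Here the absence of feedback from y to z is what licenses conditional multi-step forecasts of y given a path for z; were the second F-statistic large, the conditioning assumption would be violated.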
Surprisingly, therefore, in the forecasting arena, neither sense of causality can be accorded an im-
portant role.
affect policy conclusions. Moreover, since extrapolative devices rarely include the policy instruments,
shifts in those instruments then act as post-forecasting breaks to such devices, inducing a failure not
present in an econometric model which is well-specified for the policy effects. Consequently, compared
to in-sample structural breaks, the situation reverses for these two model types. Hence, pooling of both
forms of model will be needed in the face of location structural breaks and policy regime shifts.
Policy outcomes depend on reaction parameters connecting target variables with instruments. As
shown in Hendry (2000b), structural breaks which do not alter the unconditional expectations of the I(0)
transforms of variables are not easily detected by conventional constancy tests, so rarely induce forecast
failure. This has adverse implications for impulse-response analyses as discussed in section 7.3 be-
low. However, most policy changes entail location shifts in variables (as against, e.g., mean-preserving
spreads), and hence provide a crucial step in a progressive research strategy: if causal attribution is
incorrect, then forecast failure should result from the policy, allowing substantive learning of causal
connections, providing these do not themselves change too often (which seems unlikely). Thus, research effort into establishing which forecast failures resulted from policy changes, and which from other sources of location shifts, would seem merited.
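The progressive-research argument can be made concrete. In the sketch below (numpy; the common-factor design and all coefficients are illustrative assumptions), a mis-attributed model fits almost as well in-sample as the correctly attributed one, but a policy location shift in the true cause z produces forecast failure only for the mis-attributed model.

```python
import numpy as np

rng = np.random.default_rng(4)
n, beta, delta = 500, 1.0, 4.0

# In-sample: w correlates with the policy instrument z through a common factor f,
# but only z actually causes y.
f = rng.standard_normal(n)
z = f + 0.3 * rng.standard_normal(n)
w = f + 0.3 * rng.standard_normal(n)
y = beta * z + 0.3 * rng.standard_normal(n)

b_z = (z @ y) / (z @ z)   # correctly attributed model
b_w = (w @ y) / (w @ w)   # mis-attributed model: fits almost as well in-sample

# Policy location shift: the agency moves z alone; w is untouched.
z_post = z + delta
y_post = beta * z_post + 0.3 * rng.standard_normal(n)

err_true = np.abs(y_post - b_z * z_post).mean()
err_wrong = np.abs(y_post - b_w * w).mean()
print(err_true, err_wrong)  # only the mis-attributed model suffers forecast failure
```

The policy intervention plays the role of nature's 'experimental design': it breaks the in-sample correlation between z and w and thereby reveals which attribution was causal.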
7.1 Co-breaking
Co-breaking is the property that when variables shift, there exist linear combinations of variables which
do not shift, and so are independent of the breaks (see Clements and Hendry, 1999, ch. 9, Krolzig
and Toro, 2002, and Massmann, 2001). Co-breaking is analogous to cointegration, where a linear
combination of variables is stationary even though all the component series are integrated. Whenever
there is co-breaking between the instruments of economic policy and the target variables, changes in
the former will produce consistent changes in the latter, so constant co-breaking implements a causal
relation. The existence of co-breaking between the means of the policy instruments and those of the
targets is testable: a policy shift that induced forecast failure would be strong evidence that the causal
links were incorrectly specified. As section 7.3 shows, co-breaking is also necessary to justify impulse-
response analysis.
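A minimal numpy sketch of co-breaking (the shift size and the coefficient beta are assumptions of the illustration): both series undergo a location shift, but the linear combination y − βz does not.

```python
import numpy as np

rng = np.random.default_rng(5)
n, t_break, beta = 400, 200, 1.5

step = np.where(np.arange(n) >= t_break, 3.0, 0.0)   # common location shift
z = step + rng.standard_normal(n)
y = beta * z + 0.5 * rng.standard_normal(n)          # y inherits the shift through z

combo = y - beta * z                                 # the co-breaking combination

def mean_shift(x):
    return x[t_break:].mean() - x[:t_break].mean()

print(mean_shift(y), mean_shift(z), mean_shift(combo))
# y and z both shift; the combination does not
```

In the policy setting of the text, z would be the instrument and y the target: a constant combination of this kind is exactly what allows shifts in the instrument to translate reliably into shifts in the target.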
7.2 Control
The status of variables in a system can be altered by deliberate changes in governmental control proce-
dures. For example, a variable (such as an interest rate) that was weakly exogenous for the parameters
of (say) an inflation equation, can cease to be so after a control rule is introduced (see e.g., Johansen
and Juselius, 2000), yet the VAR involved need not suffer from forecast failure. Of course, the control
rule can be effective only if there is an already existing causal link between the instrument and the tar-
get, so the new feedback rule can exploit that link to achieve its objective: control rules cannot create
policy-target links by specifying target-instrument links.
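A stylized simulation of the control argument (the proportional rule and all coefficients are assumptions of the sketch; note the feedback makes z a function of lagged y, ending its weak exogeneity): the same rule moves the target only when a causal link from instrument to target already exists.

```python
import numpy as np

def run(b, n=2000, target=2.0, seed=6):
    """AR(1) target with a proportional feedback rule on the instrument z.

    The rule makes z depend on lagged y, so z ceases to be weakly exogenous,
    yet the rule can only work through a pre-existing causal link b.
    """
    rng = np.random.default_rng(seed)
    y = np.zeros(n)
    for t in range(1, n):
        z = 2.0 * (target - y[t-1])      # control rule: react to the target gap
        y[t] = 0.8 * y[t-1] + b * z + 0.3 * rng.standard_normal()
    return y[200:].mean()

print(run(b=0.5))  # causal link present: the rule pulls y up towards the target
print(run(b=0.0))  # no link: the identical rule leaves y at its old mean of zero
```

With b = 0 the rule merely churns the instrument: specifying a target-instrument link cannot create the policy-target link it needs.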
7.3 Impulse response analysis
Although impulse response analyses are widespread (see e.g., Lütkepohl, 1991, Runkle, 1987, and
Sims, 1980), they suffer from many drawbacks: see Banerjee, Hendry and Mizon (1996), Ericsson et al.
(1998), and Hendry and Mizon (1998).9 Here, we focus on those problems which are germane to a
9 The literature on ‘structural VARs’ (see, e.g., Bernanke, 1986, and Blanchard and Quah, 1989) faces similar difficulties.
\[
y_t = \beta' z_{t-1} + \epsilon_t, \tag{13}
\]
8 Conclusion
Both exogeneity and causality play different roles in modelling, forecasting and policy. This is well
known for exogeneity, where different concepts have been explicitly defined in Engle et al. (1983), but
seems less well established for causality. A cause was viewed as an asymmetrical process inducing
change over time in a structure. The perceptive reader will not have missed the close connections to
dynamic econometric systems which are invariant to extensions of information (over time, interventions,
and additional variables). When causality is defined within a theory model, the correspondence of the
model to reality becomes the key link. Thus, we conclude that causality is important in modelling;
and manifestly crucial in policy. However, causality cannot be proved to be a necessary property of
variables in dominating forecasting models. When location shifts occur, robust forecasting methods
can outperform ‘causal models’. Nevertheless, by itself, such a finding – or even forecast failure – is
insufficient to preclude the use of the causal model for policy: the reaction parameters of interest to
policy could have remained constant, perhaps due to co-breaking.
Both concepts (exogeneity and causality) seem robust to extensions to non-stationary systems, but
their implications are sometimes less clear cut, and inferences about them can be more hazardous. In
particular, pre-existing exogeneity, or causal direction, can alter over time. Conversely, it can be difficult
in weakly stationary systems to correctly ascertain exogeneity or causality since the systems in question
do not change enough. Thus, the news is not all bad. For example, it is well known that cointegrated
relations which remove unit roots provide a basis for testing long-run weak exogeneity; and policy-
induced location shifts can highlight the presence or absence of causal links.
Granger causality shares many of the characteristics of the general definition, together with the re-
sultant inferential difficulties. The requirement that causality be judged against the universe of available
information renders it non-operational; but attempts to infer GNC from empirical models become prone
to serious errors unless a congruent, encompassing and invariant system is used.
The strength of evidence about causality depends on the magnitudes of changes in inputs which
nevertheless produce consistent output responses. Co-breaking with causal links is needed to sustain
economic policy, since few policy changes are of the ‘mean-preserving spread’ form, and most involve
location shifts. Although the latter are the main problem for forecasting, in a progressive research
strategy, the resultant forecast failure can be of benefit for modelling, and so later policy. Conversely,
shifts in mean-zero parameters are difficult to detect in forecasting, but can seriously distort impulse-
response based policy analyses.
Finally, without mean co-breaking, or causal links, impulse responses need not deliver useful in-
formation about reactions in the economy. There seems no alternative to modelling the exogeneity and
causality structure of the economy if reliable policy inferences are desired.
References
Allen, P. G., and Fildes, R. A. (2001). Econometric forecasting strategies and techniques. In Armstrong,
J. S. (ed.), Principles of Forecasting, pp. 303–362. Boston: Kluwer Academic Publishers.
Banerjee, A., Hendry, D. F., and Mizon, G. E. (1996). The econometric analysis of economic policy.
Oxford Bulletin of Economics and Statistics, 58, 573–600.
Basmann, R. L. (1988). Causality tests and observationally equivalent representations of econometric
models. Journal of Econometrics, 39, 69–104.
Bernanke, B. S. (1986). Alternative explorations of the money-income correlation. In Brunner, K., and Meltzer, A. H. (eds.), Real Business Cycles, Real Exchange Rates, and Actual Policies, Vol. 25 of Carnegie-Rochester Conferences on Public Policy. Amsterdam: North-Holland.
Blanchard, O. J., and Quah, D. (1989). The dynamic effects of aggregate demand and supply disturbances. American Economic Review, 79, 655–673.
Clements, M. P., and Hendry, D. F. (1999). Forecasting Non-stationary Economic Time Series. Cambridge, MA: MIT Press.
Engle, R. F., and Hendry, D. F. (1993). Testing super exogeneity and invariance in regression models. Journal of Econometrics, 56, 119–139.
Engle, R. F., Hendry, D. F., and Richard, J.-F. (1983). Exogeneity. Econometrica, 51, 277–304. Reprinted in Hendry, D. F., Econometrics: Alchemy or Science? Oxford: Blackwell Publishers, 1993, and Oxford University Press, 2000; and in Ericsson, N. R. and Irons, J. S. (eds.) Testing Exogeneity, Oxford: Oxford University Press, 1994.
Ericsson, N. R. (1992). Cointegration, exogeneity and policy analysis: An overview. Journal of Policy
Modeling, 14, 251–280.
Ericsson, N. R., Hendry, D. F., and Mizon, G. E. (1998). Exogeneity, cointegration and economic policy
analysis. Journal of Business and Economic Statistics, 16, 370–387.
Ericsson, N. R., and Irons, J. S. (1994). Testing Exogeneity. Oxford: Oxford University Press.
Favero, C., and Hendry, D. F. (1992). Testing the Lucas critique: A review. Econometric Reviews, 11,
265–306.
Fildes, R. A., and Makridakis, S. (1995). The impact of empirical accuracy studies on time series
analysis and forecasting. International Statistical Review, 63, 289–308.
Fischer, A. M. (1989). Policy regime changes and monetary expectations: Testing for super exogeneity.
Journal of Monetary Economics, 24, 423–436.
Florens, J.-P., and Mouchart, M. (1982). A note on non-causality. Econometrica, 50, 583–592.
Florens, J.-P., and Mouchart, M. (1985). A linear theory for noncausality. Econometrica, 53, 157–175.
Geweke, J. B. (1984). Inference and causality in economic time series models. In Griliches, Z., and
Intriligator, M. D. (eds.), Handbook of Econometrics, Vol. 2, Ch. 19. Amsterdam: North-Holland.
Gottlieb, A. (2000). The Dream of Reason. London: The Penguin Press.
Granger, C. W. J. (1969). Investigating causal relations by econometric models and cross-spectral meth-
ods. Econometrica, 37, 424–438.
Granger, C. W. J. (1980). Testing for causality – A personal viewpoint. Journal of Economic Dynamics
and Control, 2, 329–352.
Granger, C. W. J. (1988a). Causality, cointegration, and control. Journal of Economic Dynamics and
Control, 12, 551–559.
Granger, C. W. J. (1988b). Some recent developments in the concept of causality. Journal of Econo-
metrics, 39, 199–211.
Granger, C. W. J., and Deutsch, M. (1992). Comments on the evaluation of policy models. Journal of
Policy Modeling, 14, 497–516.
Heckman, J. J. (2000). Causal parameters and policy analysis in economics: A twentieth century retro-
spective. Quarterly Journal of Economics, 115, 45–97.
Hendry, D. F. (1988). The encompassing implications of feedback versus feedforward mechanisms in
econometrics. Oxford Economic Papers, 40, 132–149. Reprinted in Ericsson, N. R. and Irons,
J. S. (eds.) Testing Exogeneity, Oxford: Oxford University Press, 1994.
Hendry, D. F. (1995a). Dynamic Econometrics. Oxford: Oxford University Press.
Hendry, D. F. (1995b). Econometrics and business cycle empirics. Economic Journal, 105, 1622–1636.
Hendry, D. F. (1995c). On the interactions of unit roots and exogeneity. Econometric Reviews, 14,
383–419.
Hendry, D. F. (1997). The econometrics of macroeconomic forecasting. Economic Journal, 107, 1330–
1357. Reprinted in T.C. Mills (ed.), Economic Forecasting. Edward Elgar, 1999.
Hendry, D. F. (2000a). Econometrics: Alchemy or Science? Oxford: Oxford University Press. New Edition.
Hendry, D. F. (2000b). On detectable and non-detectable structural change. Structural Change and
Economic Dynamics, 11, 45–65. Reprinted in The Economics of Structural Change, Hagemann,
H. Landesman, M. and Scazzieri (eds.), Edward Elgar, Cheltenham, 2002.
Hendry, D. F. (2003). A modified Gauss–Markov theorem for stochastic regressors. Unpublished paper,
Economics Department, Oxford University.
Hendry, D. F. (2004). Robustifying forecasts from equilibrium-correction models. Unpublished paper,
Economics Department, University of Oxford.
Hendry, D. F., and Clements, M. P. (2003). Economic forecasting: Some lessons from recent research.
Economic Modelling, 20, 301–329. European Central Bank, Working Paper 82.
Hendry, D. F., and Doornik, J. A. (1997). The implications for econometric modelling of forecast failure.
Scottish Journal of Political Economy, 44, 437–461. Special Issue.
Hendry, D. F., and Mizon, G. E. (1998). Exogeneity, causality, and co-breaking in economic policy
analysis of a small econometric model of money in the UK. Empirical Economics, 23, 267–294.
Hendry, D. F., and Mizon, G. E. (1999). The pervasiveness of Granger causality in econometrics. In
Engle, R. F., and White, H. (eds.), Cointegration, Causality and Forecasting. Oxford: Oxford
University Press.
Hendry, D. F., and Mizon, G. E. (2000). Reformulating empirical macro-econometric modelling. Oxford
Review of Economic Policy, 16, 138–159.
Hendry, D. F., and Morgan, M. S. (1995). The Foundations of Econometric Analysis. Cambridge:
Cambridge University Press.
Holland, P. W. (1986). Statistics and causal inference. Journal of the American Statistical Association,
81, 945–960 and 968–970.
Hoover, K. D. (1990). The logic of causal inference: Econometrics and the conditional analysis of
causation. Economics and Philosophy, 6, 207–234.
Hoover, K. D. (2001). Causality in Macroeconomics. Cambridge: Cambridge University Press.
Hume, D. (1758). An Enquiry Concerning Human Understanding, (1927 ed.). Chicago: Open Court
Publishing Co.
Hunter, J. (1992a). Cointegrating exogeneity. Economics Letters, 34, 33–35.
Hunter, J. (1992b). Tests of cointegrating exogeneity for PPP and uncovered interest rate parity in the
United Kingdom. Journal of Policy Modeling, 14, 453–463.
Johansen, S. (1992). Testing weak exogeneity and the order of cointegration in UK money demand.
Journal of Policy Modeling, 14, 313–334.
Johansen, S., and Juselius, K. (2000). How to control a target variable in the VAR model. Mimeo,
European University Institute, Florence.
Koopmans, T. C. (1950). When is an equation system complete for statistical purposes? In Koopmans,
T. C. (ed.), Statistical Inference in Dynamic Economic Models, No. 10 in Cowles Commission
Monograph, Ch. 17. New York: John Wiley & Sons.
Krolzig, H.-M., and Toro, J. (2002). Testing for super-exogeneity in the presence of common determin-
istic shifts. Annales d’Économie et de Statistique, 67/68, 41–71.
Lauritzen, S. L., and Richardson, T. S. (2002). Chain graph models and their causal interpretations.
Journal of the Royal Statistical Society, B, 64, 1–28.
Lucas, R. E. (1976). Econometric policy evaluation: A critique. In Brunner, K., and Meltzer, A. (eds.),
The Phillips Curve and Labor Markets, Vol. 1 of Carnegie-Rochester Conferences on Public
Policy, pp. 19–46. Amsterdam: North-Holland Publishing Company.
Lütkepohl, H. (1982). Non-causality due to omitted variables. Journal of Econometrics, 19, 367–378.
Lütkepohl, H. (1991). Introduction to Multiple Time Series Analysis. New York: Springer-Verlag.
Makridakis, S., and Hibon, M. (2000). The M3-competition: Results, conclusions and implications.
International Journal of Forecasting, 16, 451–476.
Massmann, M. (2001). Co-breaking in macroeconomic time series. Unpublished paper, Economics
Department, Oxford University.
Mizon, G. E. (1995). Progressive modelling of macroeconomic time series: the LSE methodology. In
Hoover, K. D. (ed.), Macroeconometrics: Developments, Tensions and Prospects, pp. 107–169.
Dordrecht: Kluwer Academic Press.
Morgan, M. S. (1990). The History of Econometric Ideas. Cambridge: Cambridge University Press.
Mosconi, R., and Giannini, C. (1992). Non-causality in cointegrated systems: Representation, estima-
tion and testing. Oxford Bulletin of Economics and Statistics, 54, 399–417.
Newbold, P. (1982). Causality testing in economics. In Anderson, O. (ed.), Time Series Analysis: Theory
and Practice 1, pp. 701–716. Amsterdam, The Netherlands: North Holland.
Paruolo, P., and Rahbek, A. (1999). Weak exogeneity in I(2) systems. Journal of Econometrics, 93,
281–308.
Penrose, R. (1989). The Emperor’s New Mind. Oxford: Oxford University Press.
Phillips, A. W. H. (1957). Stabilization policy and the time form of lagged response. Economic Journal,
67, 265–277. Reprinted in Leeson, R. (ed.) A. W. H. Phillips: Collected Works in Contemporary
Perspective, Cambridge: Cambridge University Press, 2000.
Phillips, P. C. B. (1988). Reflections on econometric methodology. Economic Record, 64, 344–359.
Phillips, P. C. B., and Loretan, M. (1991). Estimating long-run economic equilibria. Review of Economic
Studies, 58, 407–436.
Richard, J.-F. (1980). Models with several regimes and changes in exogeneity. Review of Economic
Studies, 47, 1–20.
Runkle, D. E. (1987). Vector autoregressions and reality. Journal of Business and Economic Statistics,
5, 437–442.
Simon, H. A. (1952). On the definition of causal relations. Journal of Philosophy, 49, 517–527.
Simon, H. A. (1953). Causal ordering and identifiability. In Hood, W. C., and Koopmans, T. C. (eds.),
Studies in Econometric Method, No. 14 in Cowles Commission Monograph, Ch. 3. New York:
John Wiley & Sons.
Simon, H. A. (1957). Models of Man. New York: John Wiley & Sons.
Sims, C. A. (1972). Money, income and causality. American Economic Review, 62, 540–552.
Sims, C. A. (1980). Macroeconomics and reality. Econometrica, 48, 1–48. Reprinted in Granger,
C. W. J. (ed.) (1990), Modelling Economic Series. Oxford: Clarendon Press.
Singer, M. (1997). Thoughts of a nonmillenarian. Bulletin of the American Academy of Arts and
Sciences, 51(2), 36–51.
Smith, A. (1795). The history of astronomy. In Essays on Philosophical Subjects, pp. 33–105. Edinburgh: W. Creech. Liberty Classics edition, by I. S. Ross, 1982.
Stock, J. H., and Watson, M. W. (1996). Evidence on structural instability in macroeconomic time series
relations. Journal of Business and Economic Statistics, 14, 11–30.
Strotz, R. H., and Wold, H. O. A. (1960). Recursive versus non-recursive systems: An attempt at a
synthesis. Econometrica, 28, 417–421.
Toda, H. Y., and Phillips, P. C. B. (1993). Vector autoregressions and causality. Econometrica, 61,
1367–1393.
Toda, H. Y., and Phillips, P. C. B. (1994). Vector autoregressions and causality: A theoretical overview
and simulation study. Econometric Reviews, 13, 259–285.
Urbain, J.-P. (1992). On weak exogeneity in error correction models. Oxford Bulletin of Economics and
Statistics, 54, 187–207.
Weissmann, G. (1991). Aspirin. Scientific American, 58–64.
Wold, H. O. A. (1969). Econometrics as pioneering in non-experimental model building. Econometrica,
37, 369–381.
Yule, G. U. (1897). On the theory of correlation. Journal of the Royal Statistical Society, 60, 812–838.
Yule, G. U. (1926). Why do we sometimes get nonsense-correlations between time-series? A study in
sampling and the nature of time series (with discussion). Journal of the Royal Statistical Society,
89, 1–64. Reprinted in Hendry, D. F. and Morgan, M. S. (1995), The Foundations of Econometric
Analysis. Cambridge: Cambridge University Press.
Zellner, A. (1979). Causality and econometrics. In Brunner, K., and Meltzer, A. (eds.), The Phillips
Curve and Labor Markets, pp. 9–54. Amsterdam: North-Holland Publishing Company.