It's All About Mechanisms - What Process-Tracing Case Studies Should Be Tracing
Derek Beach
To cite this article: Derek Beach (2016) It's all about mechanisms – what process-
tracing case studies should be tracing, New Political Economy, 21:5, 463-472, DOI:
10.1080/13563467.2015.1134466
It is widely agreed that the core of Process-tracing (PT) as a distinct case-study methodology is that it
involves tracing causal mechanisms that link causes (X) with their effects (i.e. outcomes) (Y) (Checkel
2008, Hall 2008, Bennett 2008a, 2008b, Rohlfing 2012, Beach and Pedersen 2013; introduction to
this special issue).1 We trace causal mechanisms whereby a cause (or set of causes) produces an outcome for two reasons: (1) to make stronger evidence-based inferences about causal relationships, drawing on within-case evidence of each step of the causal process (or its absence) between a cause and an outcome, and in particular of the activities that provide the causal links in the process; and (2) to gain a better understanding of how a cause produces an outcome. This tracing can be used for either building or testing theories of causal mechanisms.
Yet, when we look at the methodological literature on PT, there is considerable ambiguity and discord about what causal mechanisms actually are. This ambiguity maps clearly onto existing applications of PT: most PT case studies completely ignore the underlying theoretical causal process; that is, the process is black-boxed. In the few PT applications where mechanisms are discussed explicitly, the discussion is typically cursory, with the result that there is considerable ambiguity about what theoretical process the ensuing case study actually is tracing. The analytical result is that the underlying causal mechanism is relatively unstudied; it is grey-boxed.
This article first attempts to provide a clear definition of causal mechanisms, giving scholars using PT a framework for theorising mechanisms in a fashion that is amenable to in-depth empirical analysis. I contend that PT needs to adopt an understanding of causal mechanisms in which they are explicitly fleshed out by unpacking the causal process linking X and Y into a series of interlocking parts composed of entities engaging in activities that transmit causal forces from cause to outcome. It is argued that explicitly theorising mechanisms in this fashion results both in better causal theories and in PT case studies that are more clearly focused on tracing mechanisms.
The article then illustrates the analytical problems that result from not developing causal mechanisms in sufficient detail through a detailed analysis of the theoretical claims in an article by Ziblatt (2009), in which he claims to be tracing mechanisms. While his empirical analysis of the 'mechanism' does shed some light on possible underlying causal processes, the evidence provided is quite anecdotal, because it is impossible to develop clear empirical tests of an underlying causal mechanism when we do not really know exactly what it is we are tracing. The result is that we do not gain the ability to make stronger inferences about causal processes linking causes with outcomes through mechanisms, which is the very reason we use PT in the first place.
The article then develops a methodological framework for building or testing theories of causal
mechanisms in the social sciences, discussing how we can empirically evaluate the presence/
absence of each part of a mechanism using informal Bayesian logic.
When mechanisms are treated merely as intervening variables, they can always be disaggregated into further intervening variables at a lower level of aggregation, and we lose focus on the process between the variables (Mahoney 2001: 578, Mayntz 2004: 244–5, Hedström and Ylikoski 2010: 51–2, Waldner 2012: 76–7).
The result of viewing mechanisms as intervening variables is that the actual causal process itself is
grey-boxed (Bunge 1997: 428, Waldner 2012). Yet, tracing the steps of the causal process between X
and Y was the very reason we would want to engage in the PT of causal mechanisms in the first place.
A theorised causal mechanism describes each part of the mechanism whereby causal forces are
transferred from cause to outcome. The parts of causal mechanisms can be helpfully defined in
terms of entities that engage in activities that transmit causal forces from cause to outcome (Macha-
mer et al. 2000, Machamer 2004, Rohlfing 2012, Beach and Pedersen 2013). Entities can be understood
as the factors (actors, organisations or structures) engaging in activities, where the activities are the
producers of change or what transmits causal forces through a mechanism. Note that it is the activi-
ties that entities engage in that move the mechanism from an initial causal condition through differ-
ent parts to an outcome. This denotes a key distinction from a theory as a causal graph in which the
activities are not described, but are depicted merely as causal arrows linking one entity to the next. In
the following, I depict entities as letters (A, B, C) and activities as arrows (→). A part of the mechanism
is therefore the combination of entity and activity (e.g. A→).
Ideally, when theorising the parts of a causal mechanism, the parts will exhibit productive continu-
ity, meaning that each of the parts logically leads to the next part, and there are not large logical holes
in the causal story linking X and Y together (Machamer et al. 2000: 3, Darden 2002: 283). If a mech-
anism is represented schematically as: X → [A→B→C] → Y, productive continuity lies in the arrows
and their transferal of causal forces from one part of a mechanism to the next. A missing arrow,
namely, the inability to specify an activity connecting A and B, leaves an explanatory gap in the pro-
ductive continuity of the mechanism (Machamer et al. 2000: 3). In real-world research, we are often
not able to get much beyond what Machamer terms a ‘mechanism sketch’ (Machamer 2004).
However, even a crude depiction of a process in terms of both entities and activities is better than
no information on process.
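To fix ideas, this entity–activity structure can be rendered as a minimal data structure. The sketch below is purely illustrative and in Python; the class and function names are my own, not part of any PT toolkit. A mechanism is a list of parts, each pairing an entity with its activity, and a simple check flags parts whose activity is left unspecified – the missing arrows that break productive continuity.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Part:
    """One part of a mechanism: an entity engaging in an activity (e.g. 'A->')."""
    entity: str               # an actor, organisation or structure
    activity: Optional[str]   # what it does to transmit causal force; None = unspecified

# A mechanism sketch for X -> [A -> B -> C] -> Y
mechanism = [
    Part(entity="A", activity="pressures B"),
    Part(entity="B", activity=None),   # the activity connecting B and C is not theorised
    Part(entity="C", activity="produces Y"),
]

def continuity_gaps(parts):
    """Return the entities whose connecting activity is unspecified.

    A missing activity is a missing arrow: an explanatory gap in the
    productive continuity of the mechanism (Machamer et al. 2000: 3).
    """
    return [p.entity for p in parts if p.activity is None]

print(continuity_gaps(mechanism))  # ['B']
```

The point of the sketch is simply that every part must name both an entity and an activity; the substantive work lies in theorising what those activities actually are.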
When conceptualising a causal mechanism, we should therefore be able to identify clearly the
different parts and how they are related through the nature of the activities. Explicitly theorising
causal mechanisms results both in: (1) better causal theories, and in (2) actual empirical tracing of
causal processes in cases, enabling stronger inferences to be made as a result. Better causal theories
result from having to make explicit each part of the causal link between a cause and an outcome,
enabling detailed scrutiny of the causal logic of each of the links in the causal process. Second, by
actually tracing causal processes in detail in an empirical case study, evidence is provided suggesting
either that the mechanism was present, the mechanism should be revised because it did not work as
theorised, or there is no causal link, suggesting that, in the case, there is no causal relationship. In the
philosophy of science, the term ‘mechanistic evidence’ is used for this type of empirical material,
focusing our attention on the connection between the type of empirical material used as evidence
and the type of causal claim being made (Russo and Williamson 2007, Illari 2011).
All of this has a close resemblance to Historical Institutionalist (HI) theories of the importance of tem-
porality, critical junctures and path dependency. Other scholars, inspired by Rational Choice (RC) theory, have claimed that we always need to theorise multiple mechanisms when studying macro-level phenomena, describing mechanisms linking the macro-level to the micro-level (situational mechanisms), the micro-level to the micro-level (action-based mechanisms), and the micro-level to the macro-level (transformational mechanisms) (e.g. Hedström and Swedberg 1998). Yet, in both instances, scholars have imported theories into
their recommendations for what theorised mechanisms should include. However, there is no
logical reason why a mechanism has to look like an integrative, macro-micro-macro RC theory,
or a highly contingent, HI theory with path dependencies and critical junctures. A theory is not a
research methodology, and vice versa. Our theory should tell us what type of causal mechanism
to expect, but theory should not dictate our understanding of mechanisms themselves.
While telling us something about the process whereby landholding inequality (X) is linked to electoral fraud (Y) – it works through local officials – Ziblatt does not provide sufficient details of the causal mechanism. In particular, he tells us precious little about the actual causal process whereby landed elites are able to capture local officials.
For instance, what types of power resources do landed elites deploy to capture officials? Does capture occur through the use of material resources, such as the power to control revenue or to control appointment processes? Or perhaps through the deployment of more subtle, discursive resources? And do landed elites have to intervene actively to capture officials, or do local officials anticipate what the landed elites want? In the next step of the causal process, when and why should local officials be responsive? And once captured, what is the process whereby local officials actually engage in electoral fraud? What types of actions do they use? These could range from the removal of voters from electoral rolls to the pressuring of poll officers, and so on.
By not explicitly theorising the parts of a causal mechanism in sufficient detail, we are left unable to answer basic questions about the underlying causal logic linking X and Y together. This makes it difficult to evaluate whether the theorised causal process is logically consistent and, even more problematically, very difficult to trace systematically whether there is evidence of a process when we are not told what the process being traced actually is.
The subsequent empirical case study does illustrate that both X and Y are present and that there is
some anecdotal evidence suggesting that the link is through something like a ‘capture’ process.
However, given that the mechanism is not explicitly theorised, the actual analysis ends up merely
producing small empirical vignettes that insinuate the existence of an underlying mechanism
without providing strong empirical evidence. Indeed, it is difficult to determine whether the pre-
sented empirical material actually confirms the underlying mechanism, given that we are left gues-
sing about what the underlying mechanism is.
The anecdotal nature of the evidence is clear in the following quote that is representative of the
types of empirical material presented. Ziblatt writes, ‘As one Landrat from Posen reported in his
memoirs in 1894, “I had to join the local branch of the Agrarian League, because everyone I interact
with socially – and everyone I hunt with – is a member!”’ (2009: 16). It is obvious that this piece of
empirical material relates in some fashion to an underlying part of the mechanism whereby landed
elites can pressure local officials. But by not detailing the underlying causal mechanism, we are
left unsure about basic questions such as whether social pressure is the only means whereby local
officials are captured. Furthermore, by not telling us what empirical fingerprints (predicted evidence)
each of the parts of the mechanism can be expected to leave in a case – a natural result of not the-
orising explicitly each of the parts – the presentation of empirical material seems very unsystematic. A
scholar critical of case-study methodology might go so far as to state that it is just ‘cherry picking’
empirical observations instead of being a systematic analysis of whether there is empirical evidence
that confirms or disconfirms the theoretical mechanism being present in the case.
Unfortunately, by not unpacking the mechanism in sufficient detail, Ziblatt’s case study does not
enable us to examine the underlying causal logic linking X with Y, nor does it enable us to assess sys-
tematically whether there is empirical evidence that the hypothesised ‘capture’ mechanism actually
worked in a given case. The result is that the ‘capture’ mechanism remains in a ‘grey box’.
The theorised mechanism could be improved by fleshing out the activities that provide the causal
links between the two parts of the mechanism. While the mechanism could be fleshed out even
more, I recommend keeping the theorised mechanism as simple as possible in relation to the
research question. Tracing a mechanism with thirty parts empirically would be a daunting task,
and there is the serious question of whether we could formulate a generalisable mechanism that
is so complex.
Ziblatt’s theory deals with how landed elites might be able to produce electoral fraud through
‘capture’. To transform the theory into a fleshed out mechanism, we would want to disaggregate
the causal process into a series of parts. A suggested mechanism that is more explicitly described is depicted in Figure 1; composed of two parts linking X and Y, it is theorised on the basis of the empirical insights drawn from the case study.

Figure 1. A 'capture' causal mechanism linking landed elites with electoral fraud. Note: Entities are underlined and activities are in italics.
The core difference between the causal theory used by Ziblatt in his article and the ‘capture’ mech-
anism as described in Figure 1 is that my revision explicitly theorises what is going on in each part of
the causal mechanism – focusing in particular on the activities that provide the causal links – thereby
telling us more about how and why capture is theorised to happen. The first part of the mechanism details how and why local officials are captured; this is followed by a theorisation of the process whereby captured officials influence elections, resulting in electoral fraud.
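Using the entity–activity notation introduced earlier, the two parts might be rendered roughly as follows. This is my reconstruction from the surrounding text; Figure 1 above gives the authoritative version, and the wording of the activities is illustrative paraphrase rather than Ziblatt's own.

```python
# Two-part 'capture' mechanism as (entity, activity) pairs.
capture_mechanism = [
    ("landed elites",
     "pressure local officials, e.g. through control over appointments"),
    ("captured local officials",
     "manipulate elections, e.g. by removing voters from electoral rolls"),
]

for entity, activity in capture_mechanism:
    print(f"{entity} -> {activity}")
```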
In the subsequent empirical analysis, we would systematically trace whether there is evidence
suggesting that the observable implications of the two parts of this simple mechanism were
present instead of the empirical vignettes used by Ziblatt; vignettes that shed some light on the workings of the mechanism, but in an ad hoc and unsystematic fashion. The empirical part of the PT
case study would focus solely on tracing the ‘capture’ causal mechanism, asking whether there is
empirical evidence of each of the parts in a given case. For example, do we find evidence of
landed elites actually putting pressure on local officials through appointments? Do we find evidence
of systematic removal of voters from electoral rolls by local officials, or were other instruments used?
By making the theorised mechanism more explicit in terms of a system of interlocking parts that
transmits causal forces from the cause (or set of causes) to the outcome, we then know where to
look for the empirical fingerprints in the subsequent PT case study. I now turn to a short example
of how causal mechanisms can be studied empirically, introducing the use of informal Bayesian
logic as a tool to focus our attention on what questions we should ask when developing the obser-
vable manifestations of parts of mechanisms for empirical testing.
A theory-testing design involves three steps: (1) evaluating whether the predicted evidence is theoretically certain and/or unique in relation to the theorised mechanism (see below), (2) collecting empirical material and assessing whether one actually found the predicted evidence, and (3) evaluating whether we can trust the found evidence. Only after we have evaluated both what found evidence can tell us in relation to theory (certainty and uniqueness) and whether we can trust the evidence can we make inferences about the presence/absence of hypothesised mechanisms. When building theories, the sequence is basically reversed, although there is considerably more back-and-forth between theories and empirics (for more, see Beach and Pedersen 2016).
What kinds of empirical material can act as evidence for the existence of parts of mechanisms?
Mechanistic evidence is not only a series of events between the occurrence of the cause and
outcome, as events can be the empirical fingerprint of many different things. Evidence is also not
just cross-case variation in the values of X and Y. Instead, mechanistic evidence is any observable manifestation of our theorised causal mechanism(s) that has probative value in determining whether the mechanism was present or not in the case.
We can distinguish among four particular types of evidence that can be the fingerprints of a part
of a mechanism. First, pattern evidence relates to statistical patterns in the empirical record. If we are
testing a theory relating to racial discrimination in employment, we would expect that we would find
statistical differences in rates of employment in the empirical record. A second form of evidence
relates to sequences, making predictions about temporal or spatial chronologies of events. When
testing a theory about rational decision-making, relevant evidence might be whether decision-
makers first collected all relevant information, evaluated it, and then took the decision that they believed best solved the policy problem they faced. If the analyst then found that the decision was taken before information was collected (as often happens in the real world), we would downgrade our confidence in the theory being valid in the case. Third, trace evidence refers to material whose mere existence provides proof. If our theory states that lobbyists had to meet with decision-makers, and there were few alternative explanations in the case for the meeting, finding that it took place would strengthen our confidence in the theory. Fourth, account evidence relates to
material where it is the content that matters. This can be in the form of what participants tell us in
interviews, or the content of relevant documents like legislative proposals.
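As a compact restatement of this typology – the enumeration below and the classification of the example are mine, offered only as an illustration in Python – the four types can be encoded as follows:

```python
from enum import Enum

class EvidenceType(Enum):
    PATTERN = "statistical patterns in the empirical record"
    SEQUENCE = "temporal or spatial chronology of events"
    TRACE = "the mere existence of the material provides proof"
    ACCOUNT = "the content of the material matters"

# The Landrat memoir quote from the running example would count as account
# evidence, since it is what the source says that carries probative value:
observation = ("Landrat memoir: joined the Agrarian League under social pressure",
               EvidenceType.ACCOUNT)
print(observation[1].name, "-", observation[1].value)
```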
When testing theories of mechanisms, developing predictions of what evidence we might find involves answering the question: what empirical fingerprints might each part of the causal mechanism, if operative, leave in the selected case? Here, we need to think carefully but also creatively about
the data-generating processes that the operation of the theorised parts of the causal process will
leave. As one piece of evidence is usually not conclusive, it is important to develop as many empirical
fingerprints as possible that the data-generating processes might have left in the case for each part of
the mechanism. Predictions are also typically case-specific, as the same mechanism might leave very
different empirical fingerprints in different cases.
In contrast, when we are building theories of mechanisms, we engage in extensive soaking and probing of a case, starting by reading existing accounts and probing available empirical records in a
search for patterns in the record that might be empirical fingerprints of underlying causal processes.
When we uncover an empirical observation that we have a hunch might be an empirical fingerprint
of a causal process, we start to engage in a form of backwards induction to uncover the underlying
parts of the mechanism in operation, asking ourselves what mechanisms the found material is poten-
tially evidence of. Here, the process of the assessment of evidence described below flows in the oppo-
site direction, from empirics to theory.
After we have described what the predicted evidence is in a theory-testing design, we then need
to elaborate what the evidence can tell us about the studied causal relationship (Van Evera 1997).
Here, there are two relevant questions to ask: (1) do we have to find the evidence? (theoretical cer-
tainty), and (2) if found, are there plausible alternative explanations for finding it? (theoretical unique-
ness). Theoretical certainty relates to the disconfirming power of evidence. For example, the suspect
being in the town when a murder is committed is a certain prediction. If the predicted evidence is not
found, then we can disconfirm the hypothesis that the suspect is guilty. Theoretical uniqueness
describes the expected probability of finding the predicted evidence if the hypothesis is not true,
and relates to the confirming power of evidence. For example, a unique prediction would be expect-
ing to find ‘smoking gun’ evidence linking the suspect to the crime (powder burns on the suspect’s
hands, suspect’s fingerprints are on the gun). If we found this set of predicted pieces of evidence, it
would be difficult to account for this unless the suspect actually used it to commit the crime (strong
confirmation). However, we would not be certain to find the smoking gun, and if we did not find it, we
would be unable to make any inferences beyond ‘we did not find the smoking gun’. Note that theor-
etical uniqueness is a form of empirical ‘control’ for other causes, but relates to other causes for
finding the evidence itself and therefore is not typically drawn from rival theoretical claims about
the causes of a given outcome (Rohlfing 2014). Rival overall theories are only relevant when they
actually provide a competing explanation for finding a particular piece of evidence.
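In informal Bayesian terms – the notation here is mine, since the article deliberately keeps the logic informal (see note 4) – certainty and uniqueness correspond to the two likelihoods in Bayes' theorem, with H the hypothesis that a part of the mechanism is present and E the predicted evidence:

\[
P(H \mid E) = \frac{P(E \mid H)\,P(H)}{P(E \mid H)\,P(H) + P(E \mid \neg H)\,P(\neg H)}
\]

Theoretical certainty corresponds to a high P(E | H): when P(E | H) is close to 1, failing to find E drives P(H | ¬E) towards zero, which is the disconfirming power described above. Theoretical uniqueness corresponds to a low P(E | ¬H): finding E then strongly confirms H even from a modest prior. With illustrative numbers for the smoking-gun example – P(H) = 0.5, P(E | H) = 0.3 (we might well not find the gun) and P(E | ¬H) = 0.05 – finding E raises our confidence to roughly 0.86, while not finding it lowers it only to about 0.42, close to where we started.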
After predictions are developed for each part of the mechanism in a theory-testing case study,
empirical material is then collected and assessed before inferences are made. This empirical assess-
ment involves three steps: (1) assessing whether we have actually found the predicted evidence
(content assessment), (2) assessing what it means if we do not find the predicted evidence in the
case (what is the probability that we will empirically observe the predicted evidence?) and (3) asses-
sing whether we can actually trust the found evidence (what is the probability that our measure is
accurate?).
As in a court of law, before we can admit observations as evidence upon which we can make infer-
ences, we first need to assess using contextual knowledge whether the observations we have col-
lected match the evidence we predicted we would find. Second, if the predicted evidence was not
found, the researcher has to assess what this means. While an empirical fingerprint might be theoreti-
cally certain, if we are unable to gain access to a particular record in the archives that might enable us
to assess whether the evidence is there or not, we would not be able to downgrade our confidence in
the hypothesis based on not finding the evidence. Only when we have engaged in an extensive
search and we are able to document that we had access to all known sources, can we claim that,
by not finding the predicted theoretically certain evidence, we have downgraded our confidence
in the hypothesis being valid (here, absence of evidence would be evidence of absence). Finally,
we need to engage in extensive source criticism to evaluate whether we can trust the found evi-
dence, or whether the source might have motives for producing a biased account. Other things
equal, we attribute stronger evidential weight to pieces of evidence that we can trust.
Taken together, after the empirical evidence is evaluated for what it can tell us theoretically (cer-
tainty and uniqueness) and whether we can trust it empirically, we are able to sum together the
different pieces of evidence for each part of the mechanism, enabling us to infer whether there is
evidence of the mechanism being present in a given case.
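A minimal numerical sketch in Python may help fix ideas about this summing-up. It assumes, purely for illustration, that the pieces of evidence are independent and that certainty and uniqueness can be expressed as rough probabilities; the informal Bayesian approach requires neither assumption (see note 4).

```python
def update(prior, p_e_h, p_e_not_h, found):
    """One Bayesian update on the hypothesis that a part of a mechanism is present.

    p_e_h     -- probability of finding the evidence if the part is present (certainty)
    p_e_not_h -- probability of finding it anyway if the part is absent (uniqueness)
    found     -- whether the predicted evidence was actually observed
    """
    if found:
        like_h, like_not_h = p_e_h, p_e_not_h
    else:
        like_h, like_not_h = 1 - p_e_h, 1 - p_e_not_h
    return (like_h * prior) / (like_h * prior + like_not_h * (1 - prior))

# Three predicted fingerprints for one part of a mechanism (illustrative numbers):
evidence = [
    (0.9, 0.40, True),   # certain but not unique: found
    (0.3, 0.05, True),   # unique but not certain: found (the 'smoking gun')
    (0.8, 0.50, False),  # fairly certain, not unique: not found
]

p = 0.5  # agnostic prior that the part of the mechanism is present
for certainty, uniqueness, found in evidence:
    p = update(p, certainty, uniqueness, found)

print(f"confidence that this part of the mechanism was present: {p:.2f}")  # ~0.84
```

Source criticism could be crudely represented by shading both likelihoods towards 0.5 for evidence we trust less, so that untrustworthy material moves our confidence little in either direction.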
Conclusion
If we want to reap the methodological benefits of PT as a tool for gaining greater understanding of
causal processes using in-depth case studies, we need to take the study of mechanisms seriously by
explicitly theorising them in sufficient detail to enable us to know exactly what it is we should be
tracing empirically.
PT is by no means a panacea and, in many respects, is a very limited methodological tool. Like an
electron microscope, it has only a few different uses, but what it does, it does powerfully. PT requires
the deployment of extensive analytical resources, meaning that one is only able to realistically assess
a small number of cases. Additionally, PT taken on its own only enables within-case inferences about
causal processes, meaning that PT case studies have to be nested in comparative designs to enable
cross-case generalisations to be made. But by tracing mechanisms using in-depth case studies, we
can analytically pry open the causal processes that link causes and outcomes.
This article has provided a clear methodological framework for tracing mechanisms by
suggesting that they should be: (1) understood as systems linking causes and outcomes, (2)
unpacked into a series of interlocking parts composed of entities engaging in activities that transfer
causal forces from one part to the next, and (3) operationalised by developing predictions of what
evidence we should find if each part of the mechanism is present.
Naturally, there will be theoretical claims that are more difficult to translate into causal processes
than others, in particular, when there are feedback loops and other forms of non-linear processes. In
addition, just because we have made the causal mechanism explicit, it does not mean that we can trace each of its parts empirically, given that psychological processes in particular might not leave sufficient empirical fingerprints to determine whether they were present or not in a case. However,
unless we explicitly theorise mechanisms in sufficient detail, we do not know what we should be
tracing in PT.
Notes
1. Indeed, given that many scholars understand the term ‘process’ to denote non-causal, narrative analysis, it might be
more appropriate to use the term ‘Mechanism-Tracing’ instead to accurately describe what we should be doing in PT
if we have the ambition to study causal processes. To avoid more jargon, this article sticks with the well-known PT
term.
2. Additionally, observations within a single case over time cannot be treated as independent, given that what happens
at t0 naturally impacts on what happens at t1.
3. In assessing hundreds of articles from top journals in the past decade, the author has found precious few examples of
case studies that explicitly theorize the causal process between X and Y. Therefore, when compared to almost all of the
existing article-length research, Ziblatt’s study provides us with much more theoretical information on the causal
process than is typical. But this does not mean that Ziblatt’s theorized mechanism could not be improved further
(see below).
4. Note that there is considerable debate about whether this use of Bayesian logic should be formalized in terms of
quantified probabilities, or used in a more informal, Folk Bayesian fashion as in law. For more on this discussion,
see Beach and Pedersen (2016).
Disclosure statement
No potential conflict of interest was reported by the author.
Notes on contributor
Derek Beach is an Associate Professor of Political Science at the University of Aarhus, Denmark. He has published two
books on case-based methods with the University of Michigan Press, along with numerous methodological articles
on process-tracing. His substantive research deals with European integration, focusing in particular on tracing the role
of institutions in high-level negotiations in the EU. He is the academic co-convenor of the ECPR Method Schools.
References
Abell, Peter. (2004), ‘Narrative explanation: An alternative to variable-centered explanation?’, Annual Review of Sociology,
30 (1), pp. 287–310.
Beach, Derek and Pedersen, Rasmus Brun. (2013), Process-tracing Methods: Foundations and Guidelines (Ann Arbor:
University of Michigan Press).
Beach, Derek and Pedersen, Rasmus Brun. (2016), Causal Case Studies: Comparing, Matching and Tracing (Ann Arbor:
University of Michigan Press).
Bennett, Andrew. (2008a), ‘Process-Tracing: A Bayesian Perspective’, in Janet M. Box-Steffensmeier, Henry E. Brady and
David Collier (eds), The Oxford Handbook of Political Methodology (Oxford: Oxford University Press), pp. 702–21.
Bennett, Andrew. (2008b), ‘The Mother of all “Isms”: Organizing Political Science Around Causal Mechanisms’, in Ruth
Groff (ed), Revitalizing Causality: Realism about Causality in Philosophy and Social Science (London: Routledge), pp.
205–19.
Bhaskar, Roy. (1978). A Realist Theory of Science (Brighton: Harvester).
Bunge, Mario. (1997), ‘Mechanism and Explanation’, Philosophy of the Social Sciences, 27 (4), pp. 410–65.
Bunge, Mario. (2004), ‘How Does It Work? The Search for Explanatory Mechanisms’, Philosophy of the Social Sciences, 34 (2),
pp. 182–210.
Checkel, Jeffrey T. (2008), ‘Tracing Causal Mechanisms’, International Studies Review, 8 (2), pp. 362–70.
Darden, Lindley. (2002), ‘Strategies for Discovering Mechanisms: Schema Instantiation, Modular Subassembly, Forward/
Backward Chaining’, Philosophy of Science, (Supplement PSA 2000 Part II) 69, S354–65.
Falleti, Tulia G. and Julia F. Lynch. (2009). ‘Context and Causal Mechanisms in Political Analysis’, Comparative Political
Studies, 42 (9), pp. 1143–1166.
George, Alexander L. and Bennett, Andrew. (2005), Case Studies and Theory Development in the Social Sciences
(Cambridge, MA: MIT Press).
Gerring, John. (2007), Case Study Research (Cambridge: Cambridge University Press).
Glennan, Stuart S. (1996), ‘Mechanisms and the Nature of Causation’, Erkenntnis, 44 (1), pp. 49–71.
Glennan, Stuart S. (2002), ‘Rethinking Mechanistic Explanation’, Philosophy of Science, 69, pp. 342–53.
Goertz, Gary. and Mahoney, James. (2012), A Tale of Two Cultures (Princeton: Princeton University Press).
Grzymala-Busse, Anna. (2011), ‘Time Will Tell? Temporality and the Analysis of Causal Mechanisms and Processes’,
Comparative Political Studies, 44 (9), pp. 1267–97.
Hall, Peter A. (2008), ‘Systematic Process Analysis: When and How to Use It’, European Political Science, 7 (3), pp. 304–17.
Hedström, Peter. and Swedberg, Richard. (eds) (1998), Social Mechanisms: An Analytical Approach to Social Theory (Cambridge: Cambridge University Press).
Hedström, Peter. and Ylikoski, Petri. (2010), 'Causal Mechanisms in the Social Sciences', Annual Review of Sociology, 36, pp. 49–67.
Hernes, Gudmund. (1998), 'Real Virtuality', in Peter Hedström and Richard Swedberg (eds), Social Mechanisms: An Analytical Approach to Social Theory (Cambridge: Cambridge University Press), pp. 74–101.
Illari, Phyllis McKay. (2011), ‘Mechanistic Evidence: Disambiguating the Russo-Williamson Thesis’, International Studies in
the Philosophy of Science, 25 (2), pp. 139–57.
King, Gary, Keohane, Robert O. and Verba, Sidney. (1994), Designing Social Inquiry: Scientific Inference in Qualitative
Research (Princeton: Princeton University Press).
Machamer, Peter. (2004), ‘Activities and Causation: The Metaphysics and Epistemology of Mechanisms’, International
Studies in the Philosophy of Science, 18 (1), pp. 27–39.
Machamer, Peter, Darden, Lindley. and Craver, Carl F. (2000), 'Thinking about Mechanisms', Philosophy of Science, 67 (1), pp. 1–25.
Mahoney, James. (2001), ‘Beyond Correlational Analysis: Recent Innovations in Theory and Method’, Sociological Forum,
16 (3), pp. 575–93.
Mahoney, James. (2012), ‘The Logic of Process Tracing Tests in the Social Sciences’, Sociological Methods and Research, 41
(4), pp. 570–97.
Mayntz, Renate. (2004). ‘Mechanisms in the Analysis of Social Macro-Phenomena’, Philosophy of the Social Sciences 34 (2),
pp. 237–59.
Morgan, Stephen L. and Christopher Winship. (2007). Counterfactuals and Causal Inference: Methods and Principles for
Social Research (Cambridge: Cambridge University Press).
Pearl, Judea. (2000). Causality: Models, Reasoning and Inference (Cambridge: Cambridge University Press).
Roberts, Clayton. (1996). The Logic of Historical Explanation (University Park: Pennsylvania State University Press).
Rohlfing, Ingo. (2012), Case Studies and Causal Inference (Houndmills: Palgrave Macmillan).
Rohlfing, Ingo. (2014), ‘Comparative Hypothesis Testing Via Process Tracing’, Sociological Methods and Research, 43 (4), pp.
606–42.
Russo, Federica. and Williamson, Jon. (2007), ‘Interpreting Causality in the Health Sciences’, International Studies in the
Philosophy of Science, 21 (2), pp. 157–70.
Suganami, Hidemi. (1996). On the Causes of War (Oxford: Clarendon Press).
Van Evera, Stephen. (1997), Guide to Methods for Students of Political Science (Ithaca, NY: Cornell University Press).
Waldner, David. (2012), 'Process Tracing and Causal Mechanisms', in Harold Kincaid (ed), Oxford Handbook of the Philosophy of Social Science (Oxford: Oxford University Press), pp. 65–84.
Waskan, Jonathan. (2011), 'Mechanistic Explanation at the Limit', Synthese, 183, pp. 389–408.
Ziblatt, Daniel. (2009), 'Shaping Democratic Practice and the Causes of Electoral Fraud: The Case of Nineteenth-Century Germany', American Political Science Review, 103 (1), pp. 1–21.