COMPUTATIONAL NEUROSCIENCE: CAUSAL AND NON-CAUSAL
m. chirimuuta*
February 3, 2016
contents
1 Introduction
  1.1 Efficient Coding Explanation in Computational Neuroscience
  1.2 Defining Non-causal Explanation
2 Case I: Hybrid Computation
3 Case II: the Gabor Model Revisited
4 Case III: A Dynamical Model of Prefrontal Cortex
  4.1 A New Explanation of Context-Dependent Computation
  4.2 Causal or Non-causal?
5 Causal and Non-causal: does the difference matter?
abstract
This paper examines three candidate cases of non-causal explanation in
computational neuroscience. I argue that there are instances of efficient
coding explanation which are strongly analogous to examples of non-causal
explanation in physics and biology, as presented by Batterman (2002), Woodward (2003) and Lange (2013). By integrating Lange's and Woodward's
accounts I offer a new way to elucidate the distinction between causal and
non-causal explanation, and to address concerns about the explanatory
sufficiency of non-mechanistic models in neuroscience. I also use this
framework to shed light on the dispute over the interpretation of dynamical
models of the brain.
introduction
instances of distinctively mathematical, non-causal explanation, of the
sort discussed by Lange (2013). Importantly, these explanations meet the
mechanists' own criterion for explanatory sufficiency when they are able
to answer w-questions (Kaplan, 2011, 354). In Section 4 I will turn to
one recent example of a dynamical model of a region of the monkey
brain. Here, it is less clear that the model offers a non-causal explanation
and I will argue that the issue turns on whether or not one is willing
to give a realist interpretation of the model components resulting from
the dimensionality reduction analysis employed by the model builders. In the
remainder of this section I will say more about the relevant background,
defining efficient coding explanation and presenting my preferred account
of non-causal explanation.
1.1 Efficient Coding Explanation in Computational Neuroscience
In a recent publication I argue that models in computational neuroscience
often yield a distinct, non-mechanistic, pattern of explanation which I call
efficient coding explanation (Chirimuuta, 2014). The term "computational
neuroscience" labels a broad research area which uses applied mathematics
and computer science to analyze and simulate neural systems. That paper
responds to the work of Kaplan (2011), which attempts to incorporate all
explanatory models of this field within the mechanistic framework. The
case turns on the particular example of the Gabor model of V1 receptive
fields, where a mechanistic criterion for explanatory success, the
model-to-mechanism-mapping (3M) constraint [Kaplan (2011, 347); Kaplan
and Craver (2011, 611)], fails, and yet the model is still able to provide
counterfactual information, thus answering w-questions (Woodward,
2003).
How can this be?7 Well, the models in question ignore biophysical
specifics in order to describe the information-processing capacity of a neuron or neuronal population. They figure in computational or information-theoretic explanations of why the neurons should behave in ways described
by the model. So while, on the one hand, such receptive field models may
simply be thought of as phenomenological descriptions which compactly
summarise observed responses of neurons in primary visual cortex (Kaplan,
2011, 358 ff.), on the other hand, by analysis of the information theoretic
properties of the Gabor function itself, one gains an explanation of why
neurons with the properties captured by the model appear at this particular
stage of visual processing.
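It is worth pausing on the mathematical fact that underwrites this explanation: Gabor's uncertainty relation, on which the product of a filter's spread in space and its spread in spatial frequency has a fixed lower bound of 1/4π, attained exactly when the envelope is Gaussian, as in a Gabor function (Gabor, 1946; Daugman, 1985). The following is a minimal numerical sketch of that fact; it is my own illustration rather than an analysis from the literature, it is one-dimensional, it omits the sinusoidal carrier, and the function name and grid values are arbitrary choices.

import numpy as np

def rms_widths(signal, x):
    # RMS width of |signal|^2 in space, and of its power spectrum in frequency.
    step = x[1] - x[0]
    p = np.abs(signal) ** 2
    p /= p.sum() * step                                  # density over x
    mx = (x * p).sum() * step
    dx = np.sqrt((((x - mx) ** 2) * p).sum() * step)
    f = np.fft.fftshift(np.fft.fftfreq(len(x), step))    # cycles per unit length
    q = np.abs(np.fft.fftshift(np.fft.fft(signal))) ** 2
    fstep = f[1] - f[0]
    q /= q.sum() * fstep                                 # density over f
    mf = (f * q).sum() * fstep
    df = np.sqrt((((f - mf) ** 2) * q).sum() * fstep)
    return dx, df

x = np.linspace(-20.0, 20.0, 4001)
for name, envelope in [("gaussian (Gabor)", np.exp(-x ** 2 / 2)),
                       ("two-sided exponential", np.exp(-np.abs(x)))]:
    dx, df = rms_widths(envelope, x)
    print(f"{name:22s} dx * df = {dx * df:.4f} (bound 1/4pi = {1 / (4 * np.pi):.4f})")

The Gaussian envelope sits on the bound (about 0.0796) while the exponential envelope lands strictly above it. It is this modal fact about every possible filter, not any biophysical detail, that the efficient coding explanation exploits.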
In short, such models figure prominently in explanations of why a
particular neural system exhibits a characteristic behaviour. Neuroscientists
formulate hypotheses as to the behaviour's role in a specific information-processing task, and then show that the observed behaviour conforms to
(or is consistent with) a theoretically derived prediction about how that
information could efficiently be transmitted or encoded in the system,
given limited energy resources. Typically, such explanations appeal to
coding principles like redundancy reduction (Barlow, 1961): the notion that
more information can be transmitted through a cable (e.g. an axon) of fixed
bandwidth if some of the correlations between signals are removed. They do
not involve decomposition of biophysical mechanisms thought to underlie
the behaviour in question; rather, they take an observed behaviour and
formulate an explanatory hypothesis about its functional utility.
7 See Chirimuuta (2014, section 5.1) for more detailed discussion of the points covered here.
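The flavour of such coding principles can be conveyed with a toy example of redundancy reduction. The following sketch is my own illustration rather than a model from the literature; the scenario, constants and helper function are invented for the purpose. Two neighbouring sensors view the same scene, so their outputs are highly correlated; recoding them into sum and difference channels removes the correlation.

import numpy as np

rng = np.random.default_rng(0)

def entropy_bits(x, step=0.25):
    # Entropy (in bits) of x quantized with a fixed step size.
    _, counts = np.unique(np.round(x / step), return_counts=True)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())

scene = rng.normal(size=200_000)               # shared input
x1 = scene + 0.1 * rng.normal(size=200_000)    # sensor 1
x2 = scene + 0.1 * rng.normal(size=200_000)    # sensor 2
s, d = (x1 + x2) / 2, (x1 - x2) / 2            # decorrelating recode

print(f"raw channels:     {entropy_bits(x1):.2f} + {entropy_bits(x2):.2f} bits per sample")
print(f"recoded channels: {entropy_bits(s):.2f} + {entropy_bits(d):.2f} bits per sample")

The recoding is invertible (x1 = s + d and x2 = s - d), so no information is lost, yet the total number of bits needed per sample roughly halves: the redundant, correlated structure has been squeezed out of the code. That is the sense in which decorrelation lets a cable of fixed bandwidth carry more information.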
It is worth saying a word about the notion of efficiency in play here. A
feature of this research is that neuroscientists draw on knowledge of man-made computational systems and attempt to reverse-engineer the brain,
looking for the principles of neural design (Sterling and Laughlin, 2015).
A basic fact is that information processing makes substantial demands on
resources, both in terms of the material required to build a computer or
nervous system, and the energetic cost of computational processing. It
is assumed, reasonably, that the explanation of many features of neural
systems can be derived from consideration of resource constraints: the
need to achieve good computational performance in spite of a relatively
small resource budget. As Sarpeshkar (1998, 1602) writes:
The three physical resources that a machine uses to perform its
computation are time, space, and energy. Computer scientists
have traditionally treated energy as a free resource and have
focused mostly on time . . . and space . . . . However, energy
cannot be treated as a free resource when we are interested
in systems of vast complexity, such as the brain. . . . Energy
has clearly been an extremely important resource in natural
evolution. [Cf. Attwell and Laughlin (2001); Sterling and
Laughlin (2015)]
So while, in what follows, it is instructive to compare efficient coding
explanations to optimality explanations in biology (because both are in the
business of comparing actual biological systems to theoretically optimal
solutions), the field does not rely on the strong adaptationist assumption
that the brain of humans, or any other animal, is somehow optimal.8 One
must instead make the weaker assumption that there is some process or
mechanism (evolutionary, developmental, or occurring during the life
of the organism as adaptation through neural plasticity) which causes the
system to tend towards the optimal solution. Efficient coding explanations
typically proceed without specifying what that process is.
1.2 Defining Non-causal Explanation
Efficient coding explanations often make fine-grained predictions about
what would occur in counterfactual scenarios. This is possible because of
the way that the efficiency of a computational procedure is sensitive to the
nature of the particular task at hand. For example, neurons with a certain
kind of receptive field structure might be the most efficient means to encode
sensory information in one kind of environment, but not for another. Thus
one can show how neural properties are counterfactually dependent on the
evolutionary or developmental environment. For this reason I have argued
that this branch of computational neuroscience employs a proprietary kind
of non-mechanistic, causal explanation. Yet if one is willing to extend the
notion of a mechanism to include the whole apparatus of natural selection
and ontogenesis one might propose that computational neuroscience is still
just in the business of discovering mechanisms. However, the assimilatory
impulses of even the most flexible-minded mechanist would have to stop at
8 Nor do optimality explanations in biology always rest on this assumption (Godfrey-Smith,
2001). Instead, the point is to show that an observed feature has similarities with a theoretically
predicted optimum, though there may be substantial departures from optimality due to
structural or other constraints. Also, Wouters (2007) gives an account of non-causal design
explanation in biology which does not depend on any claims about the optimality, or near
optimality, of biological systems.
the idea of distinctively mathematical, non-causal, and non-constitutive
explanation. So a central motivation for exploring the question of whether
non-causal explanations occur in neuroscience is as a way to get clearer on
the limits of the mechanist framework.
Woodward's interventionist theory of causal explanation has been extremely influential in the philosophy of neuroscience, and Woodward's
proposal that explanatory sufficiency is tracked by the ability of a theory
or model to address w-questions is accepted by authors such as Kaplan
and Craver. The basic intuition is that "a successful explanation should
identify conditions that are explanatorily or causally relevant to the explanandum: the relevant factors are just those that make a difference to
the explanandum in the sense that changes in these factors lead to changes
in the explanandum" (Woodward, forthcoming, 5).
Interestingly, Woodward (2003, 221) suggests that the ability to address
w-questions may range beyond causal explanation, writing:
the common element in many forms of explanation, both causal
and noncausal, is that they must answer what-if-things-had-been-different questions. When a theory tells us how Y would
change under interventions on X, we have (or have the material
for constructing) a causal explanation. When a theory or
derivation answers a what-if-things-had-been-different question
but we cannot interpret this as an answer to a question about
what would happen under an intervention, we may have a
noncausal explanation of some sort.
Woodward gives the example of the hypothesis that the stability of the
planets is counterfactually dependent on the four dimensional structure
of space-time. What if space-time had been six-dimensional? There is no
intervention associated with this question;9 but the hypothesis is that if
things had been different then planetary orbits would indeed be less stable.
In this paper I follow Woodward in defining an intervention as an
idealized, unconfounded experimental manipulation of one variable which
causally affects a second variable only via the causal path running between
these two variables (Woodward, 2013, 46).10 Like various other authors,11 I
believe it is useful to de-couple the counterfactualist parts of Woodward's
account of explanation from the causal, interventionist ones, and thereby
develop an account of non-causal explanation. Moreover, I propose that this
account be integrated with Lange's notion of distinctively mathematical
explanation to give a clearer standard for differentiating causal from non-causal explanations than is often employed in the literature.12
9 For one thing, if God in a new act of creation were to change the dimensionality of space-time,
this could not be thought of as a possible intervention to be performed by finite beings. It
would be contentious to extend an interventionist account of causation to acts of creation by
infinite beings. More to the point, the theory of general relativity tells us that the counterfactual
dependence of planetary stability on the geometry of space-time is not a result of any causal
relationship between these two. For this reason, we cannot think of alterations in space-time
which result in changes in planetary stability as interventions in Woodward's sense. See
the definition at the start of the next paragraph, and see Woodward (2014, 702).
10 I take this to be uncontentious since many mechanist authors have adopted Woodwards
interventionist approach to causation, most notably Craver (2007).
11 See Bokulich (2008), Bokulich (2011), Saatsi and Pexton (2013).
12 A new paper by Baron et al. (forthcoming) independently hits upon this idea of using the
framework of counterfactual explanation to characterise distinctively mathematical explanation.
Their more formal presentation of this synthesis is a useful supplement to the examples I
present below.
Marc Lange analyses distinctively mathematical explanations of regularities and events in terms of the modal strength of mathematical facts, in
comparison to ordinary causal laws. For example, Lange (2013, 488) writes:
That Mother has three children and twenty-three strawberries,
and that twenty-three cannot be divided evenly by three, explains why Mother failed when she tried a moment ago to
distribute her strawberries evenly among her children without
cutting any.
This is conceptually different from any causal explanation that mentions, for
instance, that the attempt made Mother hungry and frustrated ('hangry')
and so she ended up eating two of the strawberries herself, or that one of
the little darlings stole from the other, etc.
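For what it is worth, the arithmetic carrying the explanatory load can be written out in one line: 23 = 3 × 7 + 2, so twenty-three divided by three leaves a remainder of 2. Nothing about Mother, her children or the strawberries figures in this; the failure is guaranteed in every possible causal scenario in which the numbers are held fixed.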
Thus I share Lange's view that distinctively mathematical explanations
employed in science are non-causal ones (Lange, 2013, 506). Furthermore,
I propose that we bring Lange's notion of modal strength in non-causal
explanation to bear on Woodward's idea that non-causal explanations
occur when we show that there are dependencies between the explananda
and explanantia which cannot be understood in interventionist terms. In
such cases, knowledge of the dependencies does not show us how things
would be in a range of counterfactual scenarios in which we perform
manipulations on the explanantia. Instead, we are told how things would
be under certain impossible scenarios in which the laws of mathematics
are altered.13 This of course assumes that counterpossible statements
(counterfactuals or subjunctive conditionals with impossible antecedents)
can be non-vacuously true. I invite the reader to consider that it is intuitive
that the statement "if thirteen were evenly divisible by three, then I could
share my baker's dozen of doughnuts equally amongst my three best
friends" is non-vacuously true, and that the statement "if thirteen were
evenly divisible by three, then a baker's dozen of doughnuts would be a
healthy snack" is non-trivially false; what is more, there is a recent literature
on the semantics of counterpossibles which underwrites these intuitions.14
In the examples I present in the following two sections, we have a non-causal explanation which is reliant on a trade-off demonstrated in the theory
of information. Such trade-offs are candidates for being brute mathematical
facts: nothing could be done to make them not obtain. For this reason, we
can think of the trade-offs as modally strong mathematical facts, in Lange's
sense, and as yielding information about counterfactual dependencies
which go beyond interventionist interpretation, in Woodward's sense. Like
Lange (2013), but unlike Saatsi and Pexton (2013), I intend my account
of non-causal explanation in neuroscience to cover explanations both of
particular events and of regularities, since the kind of mathematical facts that
my examples depend on do yield explanations of both sorts.
In contrast to the account of non-causal explanation just outlined, Robert
Batterman and Collin Rice have focussed on the way that certain models
in physics, biology and economics idealise away from the causal processes
which lead up to a phenomenon. Such models result in representations of
the causes of phenomena which are, at best, caricatures (Batterman and
Rice, 2014). So these authors argue that the relevant notion of explanation
is not a causal one since the explanation of a phenomenon does not arise
13 It is thought that not even God could effect such interventions.
14 See e.g. (Brogaard and Salerno, 2013) and (Bjerring, 2014). I am very grateful to an anonymous
reviewer for pointing me to this literature.
[Figure: a discrete digital code is contrasted with a graded analogue representation running from 'not cold' and 'hardly cold' through 'less cold', 'cold' and 'very cold' to 'coldest'.]
case i: hybrid computation
Given any set of components, the more economically one tries to use those components, the more the signal will be affected by noise. As Sarpeshkar (1998, 1622) explains:

Using A/D/As is probably not a good technique for maintaining anything more than 4 bits of precision on one wire. Given any set of components, the more economically one tries to use those components, the more one's signals are going to be corrupted by noise. As we shall discuss in section 4.3, the main use of A/D/As is in distributed analog computation, where it is unnecessary to maintain too much precision on one wire.

The point of Sarpeshkar's analysis is to sketch out an optimally efficient
system, given this trade-off. The 1998 paper contains a number of different
calculations to show, for example, the point at which a given analogue
system becomes useless due to noise accumulation. Sarpeshkar proposes
hybrid computation as the best general solution to the problem of building a
system that is resilient to noise (like a digital computer) but economical
with resources (like an analogue computer). As Sarpeshkar (1998, 1636)
puts it, hybrid computation combines the best of the analog and digital
worlds to create a world that is more efficient than either. The basic idea is
to alternate between digital and analogue processing steps using digital-to-analogue converters. This way, one has the benefit of small chunks of
efficient analogue computation, in which noise accumulates, interspersed
with digital processing to restore and clean up the signal. The idea
is illustrated in Figure 2. As Sarpeshkar (1998) explains:

To maximize the efficiency of information processing in a hybrid chain, there is an optimal amount of analog processing that must occur before signal restoration in a hybrid link; that is, hybrid links should not be too long or too short. If the link is too long, we expend too much power (or area, or both) in each analog stage to maintain the requisite precision at the input of the A/D/A. If the link is too short, we expend too much power (or area, or both) in frequent signal restorations. In section 4.2.2, we analyze the optimal length of a hybrid link quantitatively. Needless to say, if we are unconcerned about efficiency, then the link can be as long or as short as we like, as long as we meet the A/D/A constraint.

The restoration step itself is described in Sarpeshkar's section 4.2.1, The A/D/A:

To restore a signal, we must have discrete attractor states. In digital signal restoration, the input signal is compared with a threshold, and high-gain circuits restore the output to an attractor state that is a function of the input attractor state. The input may deviate by a fairly large amount from its attractor state, and the output will still be very close to its attractor state. The noise immunity of digital circuits arises because the typical distance in voltage space between an input attractor-state level and a threshold level is many times the variance of the noise or the offset in the . . .
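The shape of the optimization described in these passages can be seen in a toy model. The following sketch is purely illustrative: the constants, and the assumption that an analogue stage's noise variance scales inversely with the power spent on it, are mine, not Sarpeshkar's actual equations. If the signal is restored every k analogue stages, the analogue power must grow with k, since each stage of a longer link has to be cleaner for the accumulated noise to stay within the A/D/A's restoration budget, while the power spent on restorations shrinks as 1/k; total power is therefore minimized at an intermediate link length.

import numpy as np

N = 120        # total analogue processing stages in the chain
eps2 = 0.01    # tolerable noise variance at each A/D/A input
c = 1e-4       # noise-power constant: stage noise variance = c / P_stage
p_adc = 0.5    # power cost of one A/D/A restoration (arbitrary units)

def chain_power(k):
    # Noise adds over k stages, so each stage must be k times cleaner:
    # P_stage = c * k / eps2. Restorations happen N / k times.
    analog = N * c * k / eps2
    restore = (N / k) * p_adc
    return analog + restore

ks = np.arange(1, 61)
powers = np.array([chain_power(k) for k in ks])
best = int(ks[np.argmin(powers)])
print(f"optimal link length: restore every {best} stages "
      f"(analytic optimum sqrt(p_adc * eps2 / c) = {np.sqrt(p_adc * eps2 / c):.1f})")

For these constants the optimum is a link of about seven stages, and total power rises on either side of it; this is the quantitative sense in which hybrid links 'should not be too long or too short'.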
Most of Sarpeshkar's analysis is illustrated in terms of metal wires
and other artificial electrical components; but he is keen to argue that
his conclusions generalise to biology (Sarpeshkar, 1998, 1634 and 1636).
Furthermore, it is interesting to consider neuronal physiology in terms
of the hybrid hypothesis:
point is that what these authors call a "mechanism" for context-dependent computation cannot
be thought of as providing a constitutive explanation.
29 I.e. a very roughly sketched description of difference makers [Weisberg (2007), Strevens (2008)].
references
Andersen, H. (forthcoming). Complements, not competitors: causal and
mathematical explanations. British Journal for the Philosophy of Science.
Attwell, D. and S. B. Laughlin (2001). An energy budget for signalling in the grey matter of the brain. Journal of Cerebral Blood Flow and Metabolism 21, 1133–1145.
Barberis, S. D. (2013). Functional analyses, mechanistic explanations, and explanatory tradeoffs. Journal of Cognitive Science 14, 229–251.
Barlow, H. B. (1961). Possible principles underlying the transformation
of sensory messages. In W. A. Rosenblith (Ed.), Sensory Communication.
Cambridge, MA: MIT Press.
Baron, S., M. Colyvan, and D. Ripley (forthcoming). How mathematics can
make a difference. Philosophers' Imprint.
Batterman, R. (2002). The Devil in the Details. Oxford: Oxford University
Press.
Batterman, R. (2010). On the explanatory role of mathematics in empirical science. British Journal for the Philosophy of Science 61, 1–25.
Batterman, R. and C. Rice (2014). Minimal model explanations. Philosophy of Science 81(3), 349–376.
Bechtel, W. (2008). Mental Mechanisms: Philosophical Perspectives on Cognitive
Neuroscience. London: Routledge.
Bechtel, W. (2011). Mechanism and biological explanation. Philosophy of Science 78(4), 533–557.
Bjerring, J. C. (2014). On counterpossibles. Philosophical Studies 168, 327–353.
Blakemore, C. and G. F. Cooper (1970). Development of the brain depends on the visual environment. Nature 228, 477–478.
Bokulich, A. (2008). Can classical structures explain quantum phenomena? British Journal for the Philosophy of Science 59, 217–235.
Bokulich, A. (2011). How scientific models can explain. Synthese 180, 33–45.
Brogaard, B. and J. Salerno (2013). Remarks on counterpossibles. Synthese 190, 639–660.
Chemero, A. and M. Silberstein (2008). After the philosophy of mind: Replacing scholasticism with science. Philosophy of Science 75, 1–27.
Chirimuuta, M. (2014). Minimal models and canonical neural computations: the distinctness of computational explanation in neuroscience. Synthese 191, 127–153.
Cover, T. M. and J. A. Thomas (2006). Elements of Information Theory (2nd
ed.). Hoboken, NJ: Wiley Interscience.
Craver, C. F. (2007). Explaining the Brain. Oxford: Oxford University Press.
Craver, C. F. (2014). PSA presentation.
Daugman, J. G. (1985). Uncertainty relation for resolution in space, spatial frequency, and orientation optimized by two-dimensional visual cortical filters. J. Opt. Soc. Am. A 2(7), 1160–1169.
Gabor, D. (1946). Theory of communication. Journal of the Institution of Electrical Engineers 93, 429–459.
Godfrey-Smith, P. (2001). Three kinds of adaptationism. In S. H. Orzack and E. Sober (Eds.), Adaptationism and Optimality, pp. 335–357. Cambridge: Cambridge University Press.
Gould, S. J. (1981). The Mismeasure of Man. London: Penguin.
Huneman, P. (2010). Topological explanations and robustness in biological sciences. Synthese 177, 213–245.
Husbands, P. and O. Holland (2008). The Ratio Club: A hub of British cybernetics. In P. Husbands, O. Holland, and M. Wheeler (Eds.), The Mechanical Mind in History, pp. 91–148. MIT Press.
Hyvärinen, A. and P. O. Hoyer (2001). A two-layer sparse coding model learns simple and complex cell receptive fields and topography from natural images. Vision Research 41, 2413–2423.
Irvine, E. (2014). Models, robustness, and non-causal explanation: a foray into cognitive science and biology. Synthese, DOI 10.1007/s11229-014-0524-0.
Izhikevich, E. M. (2010). Dynamical Systems in Neuroscience: The Geometry of
Excitability and Bursting. Cambridge, MA: MIT Press.
Kaplan, D. M. (2011). Explanation and description in computational
neuroscience. Synthese 183, 339–373.
Kaplan, D. M. and W. Bechtel (2011). Dynamical models: An alternative or complement to mechanistic explanations? Topics in Cognitive Science 3, 438–444.
Kaplan, D. M. and C. F. Craver (2011). The explanatory force of dynamical and mathematical models in neuroscience: A mechanistic perspective. Philosophy of Science 78, 601–627.
Kitcher, P. (1989). Explanatory unification and the causal structure of the world. In P. Kitcher and W. Salmon (Eds.), Scientific Explanation, pp. 410–505. Minneapolis: University of Minnesota Press.
Komornicki, A., G. Mullen-Schulz, and D. Landon (2009). Roadrunner: Hardware and Software Overview. http://www.redbooks.ibm.com/redpapers/pdfs/redp4477.pdf.
Lange, M. (2013). What makes a scientific explanation distinctively mathematical? British Journal for the Philosophy of Science 64, 485–511.
Levy, A. (2014). What was Hodgkin and Huxley's achievement? British Journal for the Philosophy of Science 65, 469–492.
Machamer, P., L. Darden, and C. F. Craver (2000). Thinking about mechanisms. Philosophy of Science 67, 1–25.
MacKay, D. (1991). Behind the Eye. Oxford: Blackwell.
MacKay, D. and W. McCulloch (1952). The limiting information capacity of a neuronal link. Bulletin of Mathematical Biophysics 14, 127–135.
Mante, V., D. Sussillo, K. V. Shenoy, and W. T. Newsome (2013a). Context-dependent computation by recurrent dynamics in prefrontal cortex. Nature 503, 78–84.
Mante, V., D. Sussillo, K. V. Shenoy, and W. T. Newsome (2013b).
Supplementary information.
McCulloch, W. and W. Pitts (1943). A logical calculus of the ideas immanent in nervous activity. Bulletin of Mathematical Biophysics 5, 115–133.
Ott, E. (2002). Chaos in dynamical systems. Cambridge: Cambridge University
Press.
Piccinini, G. and S. Bahar (2013). Neural computation and the computational theory of cognition. Cognitive Science 37, 453–488.
Piccinini, G. and C. Craver (2011). Integrating psychology and neuroscience: Functional analyses as mechanism sketches. Synthese 183(3), 283–311.
Rice, C. (2012). Optimality explanations: a plea for an alternative approach. Biology and Philosophy 27, 685–703.
Rice, C. (2015). Moving beyond causes: Optimality models and scientific explanation. Noûs 49(3), 589–615.
Rieke, F., D. Warland, R. de Ruyter van Steveninck, and W. Bialek (1999). Spikes: Exploring the Neural Code. Cambridge, MA: MIT Press.
Ross, L. N. (2015). Dynamical models and explanation in neuroscience. Philosophy of Science 82(1), 32–54.
Saatsi, J. (2011). The enhanced indispensability argument: Representational versus explanatory role of mathematics in science. British Journal for the Philosophy of Science 62, 143–154.
Saatsi, J. and M. Pexton (2013). Reassessing Woodward's account of explanation: Regularities, counterfactuals, and noncausal explanations. Philosophy of Science 80(5), 613–624.
Sarpeshkar, R. (1998). Analog versus digital: Extrapolating from electronics to neurobiology. Neural Computation 10, 1601–1638.
Sarpeshkar, R. (2010). Ultra Low Power Bioelectronics. Cambridge: Cambridge
University Press.
Serban, M. (2015). The scope and limits of a mechanistic view of computational explanation. Synthese 192(10), 3371–3396.
Silberstein, M. and A. Chemero (2013). Constraints on localization and decomposition as explanatory strategies in the biological sciences. Philosophy of Science 80(5), 958–970.
Stepp, N., A. Chemero, and M. T. Turvey (2011). Philosophy for the rest of cognitive science. Topics in Cognitive Science 3(2), 425–437.
Sterling, P. and S. B. Laughlin (2015). Principles of Neural Design. Cambridge, MA: MIT Press.
Strevens, M. (2008). Depth: An account of scientific explanation. Cambridge,
MA: Harvard University Press.
Sussillo, D. and O. Barak (2013). Opening the black box: Low-dimensional dynamics in high-dimensional recurrent neural networks. Neural Computation 25, 626–649.
von Neumann, J. (2000). The Computer and the Brain. New Haven: Yale
University Press.
Weisberg, M. (2007). Three kinds of idealization. Journal of Philosophy 104(12), 639–659.
Weiskopf, D. A. (2011). Models and mechanisms in psychological explanation. Synthese 183, 313–338.
Woodward, J. F. (2003). Making Things Happen. New York: Oxford University
Press.
Woodward, J. F. (2013). Mechanistic explanation: its scope and limits. Proceedings of the Aristotelian Society Supplementary Volume 87, 39–65.
Woodward, J. F. (2014). A functional account of causation. Philosophy of Science 81(5), 691–713.
Woodward, J. F. (forthcoming). Explanation in neurobiology: An interventionist perspective. In D. M. Kaplan (Ed.), Integrating Psychology and Neuroscience: Prospects and Problems. Oxford: Oxford University Press.
Wouters, A. G. (2007). Design explanation: determining the constraints on what can be alive. Erkenntnis 67, 65–80.